I just noticed today that when I compile and run a new XNA 4.0 game, one of the CPU threads runs at 100% and the framerate drops to 54 FPS.
The weird thing is that sometimes it works at 60 FPS, but then it just drops to 54 FPS.
I haven't noticed this behaviour before, so I don't know if this is normal. I uninstalled my antivirus and reinstalled XNA Game Studio, XNA Redistributable and .NET Framework 4.
If I set IsFixedTimeStep to false, the game runs at 60 FPS and CPU usage is minimal (1-2%). But as far as I know, this requires me to do velocity calculations using ElapsedGameTime, and I don't know how to do that, since I'm fairly new to XNA. Some also say that setting it to false reduces jerky animations.
I already checked this forum thread, but no one has found a good solution.
Has anyone experienced this problem?
EDIT:
I did some more research and implemented an FPS counter (until now, I had measured it with Fraps), and my counter shows the game running at 60 FPS (with IsFixedTimeStep = true), so that solves the FPS issue, but the high CPU usage remains. Does this happen to everyone?
According to this discussion on the XBox Live Indie Games forum, apparently on some processors (and OSes) XNA takes up 100% CPU time on one core when the default value of Game.IsFixedTimeStep is used.
A common solution (one that worked for me as well) is to put the following in your Game constructor:
IsFixedTimeStep = false;
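For context, here is a minimal sketch of where that line goes. The class below is just the standard XNA project template (Game1, graphics), not anything specific to your project:

using Microsoft.Xna.Framework;

public class Game1 : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;

    public Game1()
    {
        graphics = new GraphicsDeviceManager(this);
        Content.RootDirectory = "Content";

        // Run Update/Draw as fast as possible instead of busy-waiting
        // for the next 1/60-second tick.
        IsFixedTimeStep = false;
    }
}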
What does it mean?
The Game.IsFixedTimeStep property, when true, ensures that your frame methods (Update(), Draw(), ...) are called at a fixed time interval specified by Game.TargetElapsedTime, which defaults to 60 calls per second.
When Game.IsFixedTimeStep = false, the next frame is called as soon as the previous one has finished.
How does this change affect my code?
All fixed time calculations (movements, timings, etc.) will need to be modified to accommodate variable time steps. Thankfully, this is very simple.
Suppose you had
Vector3 velocity;
Vector3 position;
for some object, and you are updating the position with
position += velocity;
By default, this would mean that the speed of your object is 60 * velocity.Length() units per second: you add velocity 60 times each second.
When you translate that sentence into code, you get this simple modification:
position += velocity * 60f * (float)gameTime.ElapsedGameTime.TotalSeconds;
To put it simply: you're scaling the values you add based on how much time has passed.
By making these modifications everywhere you move (or time) things, you'll ensure that your game behaves as it did when it used a fixed time step.
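As a minimal sketch (assuming velocity and position are fields of your Game class), the Update override could look like this:

protected override void Update(GameTime gameTime)
{
    float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Under a fixed 60 Hz step, 60 * dt is roughly 1, so the object
    // moves at the same speed it did before the change.
    position += velocity * 60f * dt;

    base.Update(gameTime);
}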
High CPU usage (100% on one core) is a non-problem for games. In other words, it's expected. The way you're using the CPU when you write a game demands you do this.
Open your code editor and write a simple program:
#include <stdio.h>

int main(void) {
    while (1) puts("aaah");
}
Open the CPU monitor and see that 100% of one core is used.
Now your game is doing this:
void main(){
    while(1){
        update();
        draw();
        while( getTimeElapsedThisFrame() < 1.0/60.0 ) ; // busy wait, until ~16 ms have passed
    }
}
So basically you call update() and draw(), and that takes up most of the 16 ms you have to compute a frame (and once your game has a lot of stuff in it, those two calls really will use most of that budget).
Now, because OS sleep resolution is not exact (if you call sleep(1 ms), it may actually sleep for 20 ms before your app wakes up again), games generally never call the OS Sleep function. Sleep would cause your game to "oversleep", as it were, and it would appear laggy and slow.
Good CPU usage numbers aren't worth an unresponsive game. So the standard practice is to "busy wait" and whittle away the remaining time while hogging the CPU.
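To make that concrete, here is a hypothetical C# sketch of the busy-wait pattern (an illustration only, not XNA's actual implementation):

using System;
using System.Diagnostics;

class BusyWaitLoop
{
    static readonly TimeSpan TargetFrameTime = TimeSpan.FromSeconds(1.0 / 60.0);

    static void Main()
    {
        var frameTimer = new Stopwatch();
        while (true)
        {
            frameTimer.Restart();

            Update();
            Draw();

            // Spin until the full ~16.7 ms frame budget has elapsed.
            // This pins one core at ~100% but gives precise frame timing.
            while (frameTimer.Elapsed < TargetFrameTime) { }
        }
    }

    static void Update() { /* game logic */ }
    static void Draw() { /* rendering */ }
}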
Related
I was trying to duplicate this solar system simulation project and I have a question:
What is "Universe.physicsTimeStep"? I searched the Unity manual and scripting API but I didn't find anything. Can someone explain it to me? Thanks
Historically, when games simulated physics, they did so by updating the physics simulation once every frame. For example, to find distance travelled they would use velocity * timeStep, where timeStep is the time between frames.
Nowadays, especially with multithreaded game engines, the physics simulation runs (somewhat) independently of other simulations and rendering. For example, the rendering could be done at 60 frames per second but the physics is only updated every other frame (i.e. 30 FPS). The more frequently the physics simulation is updated (i.e. the lower the time step), the more accurate the physics simulation is.
Sometimes, the time steps of various systems are adjusted so that the game runs 'properly' even when the hardware is a limiting factor. Typically this is done by 'dropping' (i.e. skipping) render frames, but update frames could be dropped as well. Some games, especially platformers and other games where physics is very important, will always run the physics simulation with the same time step, so that running on a slower CPU (which would otherwise result in a larger time step) doesn't affect gameplay, for example by making difficult jumps easier.
In your case Universe.physicsTimeStep seems to be the time step used for updates in the game. It also appears to be a fixed time step, so it will always be the same. If the game is running slowly, it will drop render frames but keep updating the physics.
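If it helps to see the usual pattern, here is an illustrative sketch of a fixed physics time step (the names are hypothetical, not taken from the project): physics always advances in equal slices, no matter how long the render frame took.

using System;

class FixedStepSimulation
{
    const float PhysicsTimeStep = 1f / 60f; // illustrative value
    float accumulator;
    float simulatedTime;

    // Call once per rendered frame with the frame's delta time.
    public void OnFrame(float frameDeltaTime)
    {
        accumulator += frameDeltaTime;
        while (accumulator >= PhysicsTimeStep)
        {
            StepPhysics(PhysicsTimeStep); // fixed step keeps results stable
            accumulator -= PhysicsTimeStep;
        }
        // Rendering happens once per frame and may lag behind or
        // interpolate between physics states.
    }

    void StepPhysics(float dt)
    {
        simulatedTime += dt;
    }
}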
That is a public constant in the Universe class (see Sebastian Lague's Solar System GitHub repository; it's clearly defined there).
I'm creating a rhythm game in C# and Unity. I haven't been able to find a direct correlation between the problems some of my players are having and their hardware, but some of them are seeing the gameplay gradually desync from the music over time. It usually takes a couple of minutes before it's noticeable, but it definitely happens and ruins the feel of the game.
I'm keeping track of the elapsed time by using a double and adding Time.deltaTime (the time between frames) to it, and I'm assuming there are just very small precision errors associated with doing that, which is what's causing the gradual desync.
Is there a better way to do this that's more accurate? I'm thinking that getting the time the gameplay started, and then subtracting that from the current time each frame, might be more accurate. I've tried syncing the gameplay time to the audio time (Bass.NET provides a way to get the song position in seconds), and while that works for me and quite a few others, it makes the game stutter for other players with decent hardware (my game generally runs just fine even on 8-10 year old PCs).
Depending on your needs, you can use Time.timeSinceLevelLoad, Time.realtimeSinceStartup (although you are advised to use Time.time instead), or most likely Time.time. If you use Time.timeScale to pause your game, you can use Time.unscaledTime.
The difference between Time.realtimeSinceStartup and Time.time is that Time.realtimeSinceStartup keeps counting even while the game is paused via Time.timeScale. Consider whether users might pause your game, and whether that also stops your audio; that will help you decide which property to use. You may also want to set Application.runInBackground to true to get the most reliable results, depending on your game logic.
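As a minimal sketch of the subtraction approach mentioned in the question (class and member names are illustrative), you would record the start time once and derive the song position from it, instead of accumulating Time.deltaTime every frame:

using UnityEngine;

public class SongTimer : MonoBehaviour
{
    float songStartTime;

    public void StartSong()
    {
        // Use Time.unscaledTime instead if you pause via Time.timeScale
        // but want the audio (and this clock) to keep running.
        songStartTime = Time.time;
    }

    public float SongPosition
    {
        get { return Time.time - songStartTime; }
    }
}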
I am working on creating a game, and I have a pathfinding function that takes about 100 ms. I have 5 enemies, each with this code in the constructor:
newPath = new System.Threading.Timer((e) => {
    getNewPath(); // The function that takes ~100 ms
}, null, 0, 5000);
Now, I am already using System.Threading.Timer at an earlier point in the program (running once every 50 ms just for a step function, to update positions and such). That one works fine, but if I run this function (don't forget I have 5 enemies, so it runs 5 times every 5 seconds), my whole computer just freezes. I don't have a bad computer (it's not the best, but it's plenty good for what I'm using it for), so I don't know what the issue is. Even if all the timers ran one after the other (which they shouldn't; they should run at the same time), the most it should take is 500 ms (half a second), yet it completely kills my computer, to the point where my mouse doesn't move, I can't Ctrl-Alt-Del, and I have to hold the power button until it turns off.
I tested putting a simple print function in place of the getNewPath(), and it worked flawlessly and as expected, so I don't really know what the issue is.
My questions are:
What is causing my computer to lock up to the point of having to hold the power button?
Is there something I can use other than System.Threading.Timer that will give me the desired result without completely killing my computer? (I need to be able to run this function up to ~20 times at once, since it's an MMO and there could potentially be hundreds of enemies that need pathfinding updates.)
Thanks!
Without knowing the code inside getNewPath(), it is impossible to even guess the reason, and it is hard to believe it is only a simple A* pathfinding algorithm.
Here are some points to start the investigation
What is the CPU usage rate before the halt happens? What is the rate when it happens? Which process has the highest rate?
What are the disk, network, and memory usage rates?
Besides the above, does getNewPath consume other resources?
You can print 5 messages, but are they printed before, inside, or after getNewPath?
Do you have source code for getNewPath? Can you modify code in getNewPath?
Is getNewPath thread safe? Does it create more threads?
There are probably more things to look at, but these should be enough to get you started, and they are necessary for anyone to give meaningful suggestions.
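As for an alternative, one common approach (a sketch only, not something the points above prescribe) is to feed path requests into a queue that a single background worker drains, so the number of enemies never dictates how many threads run at once:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PathfindingQueue
{
    readonly BlockingCollection<Action> requests = new BlockingCollection<Action>();

    public PathfindingQueue()
    {
        // One dedicated worker thread; add more only if profiling shows it helps.
        Task.Factory.StartNew(Worker, TaskCreationOptions.LongRunning);
    }

    // Enemies call this every few seconds instead of owning their own Timer.
    public void Enqueue(Action computePath)
    {
        requests.Add(computePath);
    }

    void Worker()
    {
        foreach (var computePath in requests.GetConsumingEnumerable())
        {
            computePath(); // e.g. the ~100 ms getNewPath() call
        }
    }
}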
With the same code I have:
5-10% CPU usage with IsFixedTimeStep = true and TargetElapsedTime = TimeSpan.FromSeconds(1 / 60f)
50-60% CPU usage with IsFixedTimeStep = true and TargetElapsedTime = TimeSpan.FromSeconds(1 / 30f)
By decreasing the frame rate, one would expect less CPU usage. I have tried different code, with similar results.
Anyone know the cause?
If I had to guess (which I have to, because you've provided very little information to go on), I'd say it's an interaction between the GPU and CPU.
Take a look at this blog post.
Basically, at 60 FPS, you're probably being GPU-limited. The CPU is sitting idle waiting for the GPU to draw a frame before it starts on drawing another one. You're probably dropping frames.
At 30 FPS, the GPU is able to keep up, and so the CPU has to send out frames more frequently.
But, again, this is just a guess. You'd have to instrument your code to properly check for these things.
I'm creating an XNA game and am getting an unexpected result, extremely low FPS (About 2-12 fps). What program should I use to test performance and track down what is slowing it down?
Have you tried using SlimTune?
You could also use StopWatch to measure sections manually.
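For example, a rough sketch of manual timing with Stopwatch (SuspectedSlowSection is just a placeholder for whatever part of Update or Draw you want to measure):

using System;
using System.Diagnostics;

class ManualProfiling
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();
        SuspectedSlowSection();
        sw.Stop();
        Console.WriteLine("Section took {0} ms", sw.ElapsedMilliseconds);
    }

    static void SuspectedSlowSection()
    {
        // ... the code you want to measure ...
    }
}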
Okay, let me bring my personal experience with game development in XNA.
The first thing you need to do is go to Debug -> Start Performance Analysis. This profiles your CPU activity and sees what threads are in use and what is doing the most processing.
You also need to factor in a couple of more things:
-You are probably running in debug mode, which means that some of your CPU is being dedicated to VS, checking for exceptions, and so on.
-Your code might be inefficient. I recommend limiting the number of Lists, Arrays, ADTs, and objects created at run time, because that slows things down a lot. Last time I checked, the game loop ran 60 times a second, so imagine what a strain it would be to allocate a new List and then garbage collect it, 60 times a second. It starts adding up.
-I don't know how advanced you are, but read up on parallel threading, or multitasking.
An example would be to have your physics engine run one frame behind your graphics update.
EDIT: I realized you found your mistake but I hope this post can help others.