I was trying to duplicate this solar system simulation project and I have a question:
What is "Universe.physicsTimeStep"? I searched the Unity Manual and Scripting API but didn't find anything. Can someone explain it to me? Thanks.
Historically, when games simulated physics, they did so by updating the physics simulation once every frame. For example, to find distance travelled they would use velocity * timeStep, where timeStep is the time between frames.
Nowadays, especially with multithreaded game engines, the physics simulation runs (somewhat) independently of other simulations and rendering. For example, the rendering could be done at 60 frames per second but the physics is only updated every other frame (i.e. 30 FPS). The more frequently the physics simulation is updated (i.e. the lower the time step), the more accurate the physics simulation is.
Sometimes, the time steps of various systems are adjusted so that the game runs 'properly' even when the hardware is a limiting factor. Typically this is done by 'dropping' (i.e. skipping) render frames, but update frames could be dropped as well. Some games, especially platformers and other games where physics is very important, will always run the physics simulation with the same time step so that running on a slower CPU (which would otherwise result in a larger time step) doesn't affect gameplay, such as by making difficult jumps easier.
In your case Universe.physicsTimeStep seems to be the time step used for updates in the game. It also appears to be a fixed time step, so it will always be the same. If the game is running slowly, it will drop render frames but keep updating the physics.
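As a rough sketch of that idea (purely illustrative, not the project's code): with a fixed time step, every physics update advances the simulation by the same amount, so slower hardware only changes how often the step runs, not what each step does.

using UnityEngine;

// Illustrative only; the class and field names are not from the project.
public class FixedStepDemo : MonoBehaviour
{
    const float physicsTimeStep = 0.02f;          // 50 physics updates per simulated second
    Vector3 position;
    Vector3 velocity = new Vector3(1f, 0f, 0f);

    void FixedUpdate()                            // Unity calls this at a fixed interval
    {
        // Each step covers the same distance; in practice you would keep
        // physicsTimeStep equal to Time.fixedDeltaTime so the two agree.
        position += velocity * physicsTimeStep;
    }
}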
That is a public constant in the class Universe. See Sebastian Lague's Solar System GitHub repository; it's clearly defined there.
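For reference, the declaration is just a constant roughly along these lines (the exact value is in the repository), which a project can then use to lock Unity's physics rate:

public static class Universe
{
    public const float physicsTimeStep = 0.01f; // example value; check the repository for the real one
}

// At startup, the simulation can then apply it:
// Time.fixedDeltaTime = Universe.physicsTimeStep;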
I've put together a little Git repo to share with you some scripts from the Unity 2D platformer I've been working on. I have written some basic WASD movement control in the Movement.cs script (under Scripts/Character in the repo) and it is working really well. It does everything I need it to do, and while it is basic, I intend to polish it up a bit over the course of the development of the game. However, I've been noticing that every time I build the game, every moving entity moves at a faster speed. The "AI" (yes I use the term very loosely) that I programmed into enemies like the Chompas and BadBirds seems to be far too fast or far too slow, as are user-controlled movements and animated powerup bubbles.
Now, I believe I've traced it back to the way I create translations; I add vectors to Transform.Position whenever I need to move an object or entity. These vectors accept float values as their parameters, and seeing as I'm not entirely clear on what those values represent, I feel that may be where the issue lies. Are these distance values representative of some dynamic system of measurement that might be changing between builds? If so, how might I standardize my distances? Am I totally off the mark? Lol. Any and all help is greatly appreciated.
To be perfectly clear, this issue occurs every time I hit play, regardless of whether or not changes have been made. Thanks again!
Git repo
Try multiplying your movement by Time.deltaTime.
If you don't do this (and it doesn't look like you are), all your GameObjects will move at inconsistent speeds. The reason is that void Update() gets called once every frame, and the number of frames drawn each second is variable (sometimes your game will run a little faster than at other times), so your speed will also be variable unless you multiply it by Time.deltaTime (Time.deltaTime is the amount of time that has passed since the last frame; see the documentation).
Another option is to use Unity's built-in physics system with FixedUpdate(), which gets called at a fixed rate.
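As a minimal sketch of the first option (this is not your Movement.cs; the speed value and input axes are illustrative):

using UnityEngine;

public class FrameRateIndependentMovement : MonoBehaviour
{
    public float speed = 5f; // units per second

    void Update()
    {
        // Read input every frame...
        float h = Input.GetAxisRaw("Horizontal");
        float v = Input.GetAxisRaw("Vertical");
        Vector3 direction = new Vector3(h, v, 0f).normalized;

        // ...and scale the movement by the time the frame actually took,
        // so the object covers 'speed' units per second at any frame rate.
        transform.position += direction * speed * Time.deltaTime;
    }
}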
I would strongly recommend following online tutorials on how to make a platformer controller, such as this one. Doing so will help teach you good practices and how to avoid common errors.
We are working on a simulation game. We have about 25,000 objects in the world, each with one Unity C# script. If we activate an empty Update function we get 15 FPS; if we activate an empty FixedUpdate with a 0.02 time step we get 1-2 FPS on an average-spec computer. We will need to do something in the update function, so we need some help with this performance problem.
In that case, what can we do for performance?
Thanks for the advice and help.
Well, even though your question is broad, I will try to give some tips. My guess is that you are doing some heavy calculations in your Update and this causes the FPS problems. Also, rendering 25,000 objects at the same time would cause FPS issues on its own. First, I suggest using the Unity Profiler and the rendering statistics to find out what is causing the issue. Based on what I assume the problem is, my suggestions would be:
Use coroutines instead of doing all the calculations in Update (see the sketch after this list).
Use occlusion culling so you render as few objects as possible. You do not need to render objects that are not in the camera frustum.
If you have detailed meshes, use level of detail (LOD) if you have the chance.
If you narrow down your problem, I am happy to help further. Good luck!
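As a rough sketch of the coroutine idea (DoExpensiveWork is a hypothetical placeholder for your own per-object calculation), you can spread heavy work out over time instead of paying for it every frame:

using System.Collections;
using UnityEngine;

public class SlowTicker : MonoBehaviour
{
    IEnumerator Start() // Unity runs a Start() that returns IEnumerator as a coroutine
    {
        while (true)
        {
            DoExpensiveWork();                      // hypothetical heavy calculation
            yield return new WaitForSeconds(0.25f); // run it 4 times per second instead of every frame
        }
    }

    void DoExpensiveWork()
    {
        // ... your per-object logic ...
    }
}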
The very need to make a call to Update() is one of the major slowdowns in Unity. That's why they are now pushing towards the Entity Component System, where there is no need for such jumps.
If all the objects have the same script, it should be quite easy to modify the code so that only one object has the script but operates (e.g. via a for loop) on all the other objects. This should bring massive improvements already; a sketch follows below.
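A minimal sketch of that manager pattern, assuming a hypothetical SimEntity type standing in for your own per-object logic:

using System.Collections.Generic;
using UnityEngine;

// Plain C# object; no per-object MonoBehaviour and no per-object Update call.
public class SimEntity
{
    public void Tick(float dt)
    {
        // per-object logic goes here
    }
}

public class SimulationManager : MonoBehaviour
{
    // All simulated entities are registered here instead of each having its own script.
    public static readonly List<SimEntity> Entities = new List<SimEntity>();

    void Update()
    {
        float dt = Time.deltaTime;
        // One engine->script call per frame instead of 25,000.
        for (int i = 0; i < Entities.Count; i++)
        {
            Entities[i].Tick(dt);
        }
    }
}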
Also, if there's any chance your objects share meshes, do use instanced mesh rendering; it can be faster by two orders of magnitude.
I'm creating a rhythm game in C# and Unity. I haven't been able to find a direct correlation between the problems some of my players are having and their hardware, but some of them are seeing the gameplay gradually desync from the music over time. It usually takes a couple of minutes before it's noticeable, but it definitely happens and ruins the feel of the game.
I'm keeping track of the elapsed time by using a double and adding Time.deltaTime (the time between frames) to it each frame, and I'm assuming that very small precision errors from doing that are what's causing the gradual desyncs.
Is there a better way to do this that's more accurate? I'm thinking that recording the time the gameplay started, and then on each frame subtracting that from the current time, might make it more accurate. I've tried syncing the gameplay time to the audio time (Bass.NET provides a way to get the song position in seconds), but while that works for me and quite a few others, it makes the game stutter for some players with decent hardware (the game generally runs fine even on 8-10 year old PCs).
Depending on your needs, you can use Time.timeSinceLevelLoad, Time.realtimeSinceStartup (although you are advised to use Time.time instead), or most likely Time.time. If you use Time.timeScale to pause your game, you can use Time.unscaledTime.
The difference between Time.realtimeSinceStartup and Time.time is that Time.realtimeSinceStartup keeps counting even if you pause your game with Time.timeScale. Considering whether users might pause your game, and whether that stops your audio from playing, will help you decide which property to use. You may also want to set Application.runInBackground to true to get the most reliable results, depending on your game logic.
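For example, a minimal sketch of the asker's own idea, recording a start time once and deriving the elapsed time from it each frame, so per-frame rounding errors never accumulate:

using UnityEngine;

public class SongClock : MonoBehaviour
{
    float songStartTime;

    public void StartSong()
    {
        songStartTime = Time.time; // Time.timeAsDouble exists on newer Unity versions if you need more precision
    }

    public float ElapsedSongTime()
    {
        // A single subtraction each frame instead of summing thousands of deltaTime values.
        return Time.time - songStartTime;
    }
}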
I just noticed today that when I compile and run a new XNA 4.0 game, one of the CPU threads runs at 100% and the framerate drops to 54 FPS.
The weird thing is that sometimes it works at 60 FPS, but then it just drops to 54 FPS.
I haven't noticed this behaviour before, so I don't know if this is normal. I uninstalled my antivirus and reinstalled XNA Game Studio, XNA Redistributable and .NET Framework 4.
If I set IsFixedTimeStep to false, the game runs at 60 FPS and CPU usage is minimal (1-2%). But as far as I know, this requires me to do velocity calculations using ElapsedGameTime, and I don't know how to do that, since I'm fairly new to XNA. Also, some say that setting it to false reduces jerky animations.
I already checked this forum thread, but no one has found a good solution.
Has anyone experienced this problem?
EDIT:
I did some more research and I implemented a FPS counter (until now, I measured it with Fraps), and my counter shows the game running at 60 FPS (with IsFixedTimeStep = true), so this solves the FPS issue, but the high CPU usage remains. Is it possible that this happens to everyone?
According to this discussion on the Xbox Live Indie Games forum, apparently on some processors (and OSes) XNA takes up 100% CPU time on one core when the default value of Game.IsFixedTimeStep is used.
A common solution (one that worked for me as well) is to put the following in your Game constructor:
IsFixedTimeStep = false;
What does it mean?
The Game.IsFixedTimeStep property, when true, ensures that your frame (Update(), Draw(), ...) is called at a fixed time interval specified in Game.TargetElapsedTime. This defaults to 60 calls per second.
When Game.IsFixedTimeStep = false, the calls for the next frame happen as soon as the previous one is finished, without waiting for the target interval.
How does this change affect my code?
All fixed time calculations (movements, timings, etc.) will need to be modified to accommodate variable time steps. Thankfully, this is very simple.
Suppose you had
Vector3 velocity;
Vector3 position;
for some object, and you are updating the position with
position += velocity;
By default, this would mean that the speed of your object is 60 * velocity.Length() units per second, because you add velocity 60 times each second.
When you translate that sentence into code, you get this simple modification:
position += velocity * 60f * (float)gameTime.ElapsedGameTime.TotalSeconds;
To put it simply: you're scaling the values you add based on how much time has passed.
By making these modifications in places where you perform moving (or timing, etc.), you'll ensure that your game acts as it would back when it was fixed time step.
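Putting it together, here is a sketch of how that looks inside the standard Game.Update override (velocity and position being fields of your Game class, as above):

protected override void Update(GameTime gameTime)
{
    float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;

    // Scale the per-frame movement by the elapsed time so the object still moves
    // 60 * velocity.Length() units per second, just as it did with the fixed time step.
    position += velocity * 60f * elapsed;

    base.Update(gameTime);
}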
High CPU usage (100% on one core) is a non-problem for games. In other words, it's expected. The way you're using the CPU when you write a game demands you do this.
Open your code environment and write a simple program:
#include <stdio.h>

int main(void)
{
    while (1) puts("aaah");
}
Open the CPU monitor and see that 100% of one core is used.
Now your game is doing this:
void main(){
while(1){
update() ;
draw() ;
while( getTimeElapsedThisFrame() < 1.0/60.0 ) ; // busy wait, until 16ms have passed
}
}
So basically you call update() and draw(), and that takes up part of the ~16 ms you have to compute a frame (and most of it, once your game has a lot of stuff in it).
Now, because OS sleep resolution is not exact (if you call sleep(1 ms), it may actually sleep for 20 ms before your app wakes up again), games generally don't call the OS Sleep function at all. Sleep would cause your game to "oversleep", as it were, and the game would appear laggy and slow.
A nice-looking CPU usage figure isn't worth an unresponsive game, so the standard practice is to "busy wait" and whittle away the time while hogging the CPU.
I'm a pretty big newbie when it comes to optimization. In the current game I'm working on I've managed to optimize a function and shave about 0.5% of its CPU load and that's about as 'awesome' as I've been.
My situation is as follows: I've developed a physics-heavy game in MonoTouch using an XNA wrapper library called ExEn, and try as I might, I've found it very hard to get the game to reach a playable framerate on an iPhone 4 (I don't even want to think about the iPhone 3GS at this point).
The performance degradation is almost certainly in the physics calculations: if I turn physics off, the framerate jumps up sharply; if I disable everything else (rendering, input, audio) and leave just physics on, performance hovers around 15 FPS during physics-intensive situations.
I used Instruments to profile the performance and this is what I got: http://i.imgur.com/FX25h.png The functions which drain the most performance are either from the physics engine (Farseer) or the ExEn XNA wrapper functions they call (notably Vector2.Max, Vector2.Min).
I looked into those functions, and I know that wherever it can, Farseer passes values into them by reference rather than by value, so that's covered (and it's literally the only optimization I can think of). The functions themselves are very simple, basically amounting to operations such as
return new Vector2(Math.Max(v1.X, v2.X), Math.Max(v1.Y, v2.Y));
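For context, the by-reference overloads mentioned above look like this in XNA; a minimal sketch with made-up values:

Vector2 a = new Vector2(1f, 2f);
Vector2 b = new Vector2(3f, 0f);
Vector2 result;

// The ref/out overload avoids copying the Vector2 structs on every call,
// which is why Farseer uses it in hot paths.
Vector2.Min(ref a, ref b, out result);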
Basically I feel like I'm stuck and in my limited capacity and understanding of code optimizations I'm not sure what my options are or if I even have any options (maybe I should just curl into a fetal position and cry?). With LLVM turned on and built in release I'm getting maybe 15fps at best. I did manage to bring the game up to 30fps by lowering the physics precision but this makes many levels simply unplayable as bodies intersect one another and collapse in on themselves.
So my question is, is this a lost cause or is there anything I can do to beef up performance?
First of all, love your game on Windows Phone 7!
Secondly, I don't see anything out of the ordinary in your profiler output. I did a quick and dirty performance analysis of the Farseer engine once (running in .NET) and came up with similar results. It almost looks like you have a slowdown that is proportional across the board, which may be due to Mono itself.
I suppose you follow the performance hints in http://farseerphysics.codeplex.com/documentation already :-)
The most important thing seems to be to reduce the complexity of the collision detection calculations, i.e. not the visual shapes but the colliding shapes. In Unity3D these are called colliders, and you can attach a simple cube as a collider to a complex human body. I don't know anything about Farseer, but it probably has a similar concept (is it called a body?).
If possible, try to replace your main character or other complex objects with simple cubes and check if the FPS rises.
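In Farseer terms, a rough sketch of that idea, assuming the 3.x BodyFactory API, would be to give a complex object a single simple fixture instead of a detailed polygon:

// 'world' is your existing Farseer World instance; the sizes and density are placeholders.
Body simpleCollider = BodyFactory.CreateRectangle(world, 1f, 2f, 1f);
simpleCollider.BodyType = BodyType.Dynamic;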
Compiler switches can sometimes make a big difference to performance. Make really sure that no debug settings are active (I once got code that was up to 30 times slower in a C++ library project). Ensure that ARMv7 optimisation is turned on and that -O3 or -Os is used.
Watch out for logging statements, as they are extremely expensive on iPhone.
[Update:]
Try to decrease the number of actively calculated AABBs just to find out which part of the physics engine causes the trouble. If it's the sheer number, follow FFox's advice.
What about other platforms? Where did you do your testing during development, in the simulator? Which one? Is there any chance of getting it running on Android, an Android emulator, or Windows Phone? That would give you a hint as to whether it is an iPhone-specific problem.
Ah, I just saw that ExEn is still in a pre-release state and the final version will be launched on July 21st as open source. IMO this changes the situation: if your app runs fine on some other comparable platform, just wait for the release and give it a new try. Chances are pretty good that there is still debugging code in the pre-release you are working on.