Is the FixedUpdate time interval in Unity3D really constant? - c#

According to the Unity documentation, FixedUpdate gets called at a fixed time step. This can be seen when we output Time.deltaTime in the FixedUpdate callback.
However, when I tried to check the delta time using the .NET time system (i.e., DateTime.Now.Millisecond) in the FixedUpdate callback, the output interval is not constant.
I suspected that the clock used by Unity3D is not the same as my PC's clock, but I am not sure whether I am right. Does anyone know the reason for this?
Thanks
Note: the main question is why capturing the interval between consecutive FixedUpdate() calls using .NET DateTime.Now produces an inconsistent time step, while Time.deltaTime shows a consistent one.

Please check this.
FixedUpdate: FixedUpdate is often called more frequently than Update. It can be called multiple times per frame, if the frame rate is low, and it may not be called between frames at all if the frame rate is high. All physics calculations and updates occur immediately after FixedUpdate. When applying movement calculations inside FixedUpdate, you do not need to multiply your values by Time.deltaTime. This is because FixedUpdate is called on a reliable timer, independent of the frame rate.
And in the execution-order diagram you can read this near the FixedUpdate state:
The physics cycle may happen more than once per frame if the fixed time step is less than the actual frame update time.
If you need to check time in FixedUpdate you should use Time.fixedDeltaTime.
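To make the difference visible, here is a minimal sketch (the class name FixedStepProbe and the use of Stopwatch are mine, not from the question) that logs both clocks from inside FixedUpdate:

```
using System.Diagnostics;
using UnityEngine;

public class FixedStepProbe : MonoBehaviour
{
    private readonly Stopwatch watch = Stopwatch.StartNew();
    private double lastMs;

    private void FixedUpdate()
    {
        double nowMs = watch.Elapsed.TotalMilliseconds;
        // Time.deltaTime read inside FixedUpdate always reports the fixed step
        // (20 ms by default), while the wall-clock interval between calls jitters:
        // several physics steps can run back-to-back within one rendered frame,
        // and then none at all during the next frame.
        UnityEngine.Debug.Log($"Time.deltaTime: {Time.deltaTime * 1000f} ms, wall clock: {nowMs - lastMs} ms");
        lastMs = nowMs;
    }
}
```

In other words, the fixed step is constant in simulation time, not in wall-clock time, which is why DateTime.Now shows inconsistent intervals.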

Related

How to fix 60 fps loop on non-perfect-60Hz (59.92Hz) display?

I just started learning game programming, and I chose to use a fixed-timestep game loop since it's straightforward and easy to understand.
Basically like this:
while (game_is_running) {
    input();
    update();
    render();
}
When VSync is enabled, it runs at the speed of my display, but I want to limit it to run as fast as 60 FPS even on higher refresh rate monitors, so I added frame skip and fps limiter:
static void Main()
{
    long frame_count = 0;
    long tick_frame_end;
    Stopwatch stopwatch = new Stopwatch();
    WaitForVSync();
    stopwatch.Start(); // hoping to get in sync with display
    while (game_is_running) {
        frame_count++;
        tick_frame_end = GetEndTick(frame_count);
        if (stopwatch.ElapsedTicks > tick_frame_end) { // falling behind
            continue; // skip this frame
        }
        input();
        update(tick_frame_end);
        // render(); // render here when VSync is enabled
        while (stopwatch.ElapsedTicks < tick_frame_end) {
            // Busy waiting; in real code I used SpinWait to avoid 100% CPU usage
        }
        render(); // I moved it here because I turned off VSync and I want render to happen at a constant rate
    }
}

static long GetEndTick(long frame_count) {
    return (long)((double)frame_count * Stopwatch.Frequency / 60); // Target FPS: 60
}
When VSync is off, it can run at a steady 30, 60, 120 (or any value in between) FPS. I was happy with the result, but as soon as I turned VSync back on (and set the FPS to 60), I noticed a lot of frame skipping.
I can run a steady 120 FPS with VSync off without a single frame dropped, yet at 60 FPS with VSync on I get a notable number of dropped frames. It took me quite a while to nail down the cause:
My monitor, which I thought was running at 60Hz, is actually running at about 59.920Hz (tested with https://www.displayhz.com/)
As time goes on, my internal frame time and the monitor's frame time drift more and more out of sync, causing many unexpected frame drops. This completely breaks my GetEndTick() function, which basically breaks everything: the frame-skipping logic, the FPS limiter, and most importantly update(tick_frame_end) is broken too.
So the questions are:
1. How can I get the end time of current frame?
In my code above, the end time is calculated on the assumption that there are exactly 60 evenly spaced frames in every second. If that were true, my internal frame time might not align perfectly with the display frame time, but it would still stay in sync, which is useful enough.
(Everything was calculated based on the time stamp of frame end, the idea is I generate the image based on the time of frame end, which is just moment before next frame getting displayed on screen.)
I tried to record the time at the beginning of the loop (or right after the last frame's render) and add one frame's worth of time to it. That doesn't work, as the recorded time isn't reliable due to system load, background processes, and many other things. I would also need a new method to determine which frame is the current frame.
2. If (1.) isn't possible, how can I fix my game loop?
I have read the famous Fix Your Timestep! by Glenn Fiedler, yet it's about game physics, not graphics, and doesn't really answer the first question.
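One possible direction, offered here only as a hedged sketch and not as a verified fix: instead of hard-coding 60 Hz, estimate the display's real refresh period from observed frame timestamps and derive the frame-end tick from that. All names below (measuredPeriodTicks, OnFrameBoundary) are illustrative:

```
// Sketch: derive the frame-end tick from a measured refresh period instead of
// assuming an exact 60 Hz. Assumes VSync is on, so frame boundaries track the display.
static double measuredPeriodTicks = Stopwatch.Frequency / 60.0; // initial guess
static long lastFrameTick;

static void OnFrameBoundary(Stopwatch stopwatch)
{
    long now = stopwatch.ElapsedTicks;
    if (lastFrameTick != 0)
    {
        // An exponential moving average smooths out scheduling noise.
        measuredPeriodTicks = 0.99 * measuredPeriodTicks + 0.01 * (now - lastFrameTick);
    }
    lastFrameTick = now;
}

static long GetEndTick(long frame_count)
{
    // 59.92 Hz instead of 60 Hz drifts by roughly 1.3 ms per second, so a schedule
    // built on frame_count * (1/60) inevitably falls out of sync; a schedule built
    // on the measured period tracks the display instead.
    return (long)(frame_count * measuredPeriodTicks);
}
```

The drift figure follows from 1/59.92 − 1/60 ≈ 0.022 ms per frame, about 1.3 ms per second of accumulated error.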

RigidBody2D AddForce Not Adding Acceleration

I'm trying to add acceleration to an object with a RigidBody2D component using AddForce, but all it seems to do is add a fixed velocity instead.
Here are the RigidBody2D settings:
For testing I've modified the Update function to allow me to add forces and report the object's velocity periodically:
void Update()
{
    time += Time.deltaTime;
    if (time > 0.5)
    {
        Debug.Log("Velocity magnitude = " + rigidbody2D.velocity.magnitude);
        time = 0;
    }
    if (Input.GetKeyDown(KeyCode.Q))
    {
        rigidbody2D.AddForce(Vector2.right);
    }
}
What I see is that every time I press Q, the velocity magnitude increases by a fixed 0.02.
I.e. first it doesn't move at all and the velocity is 0, then after pressing Q the velocity magnitude changes to 0.02 and stays at 0.02 while the object trudges slowly to the right. Pressing Q again increases it to 0.04, then pressing once more increases it to 0.06 and so on.
These are results from two days ago. I did the experiment again today and got different velocity increments but the same behavior otherwise.
The velocity changes only when I add a force.
If I added a force of magnitude 1 to an object at rest with a mass of 1, then its velocity should increase by 1 in magnitude every unit of time (which I think here is seconds).
Instead it's staying fixed.
Using ForceMode2D.Impulse increases the velocity increments from 0.02 to 1, but the behavior stays the same - no acceleration, only one-time increments in speed.
What am I doing wrong?
I've tried looking for similar posts and for the problem in general but the only things I found were either irrelevant or asking the opposite (how to stop acceleration from occurring).
Any help would be appreciated.
The Unity version is 2019.1.1f1.
TL;DR
What you are doing wrong is expecting the wrong physics behavior from AddForce. That function does what it says: it applies a force, which changes the object's velocity instantaneously by adding to it. For 2D bodies, the force mode only tells the engine whether or not to take the object's mass into account.
The problem is the naming Unity uses: AddForce doesn't "add" the force persistently, it just applies it for a single physics step. In the following steps, all forces are zero again. If you want the force to persist, create your own vector in your script, modify it, and apply it to the object every physics step using AddForce.
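As a rough illustration of the "keep the force yourself" advice (the class and field names below are mine, not from the question):

```
using UnityEngine;

// Sketch: store your own force vector and re-apply it every physics step.
public class PersistentForce : MonoBehaviour
{
    public Vector2 storedForce = Vector2.zero;
    private Rigidbody2D body;

    private void Awake() { body = GetComponent<Rigidbody2D>(); }

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.Q))
            storedForce += Vector2.right; // grow the stored force, don't replace it
    }

    private void FixedUpdate()
    {
        // Re-applied on every physics step, the force now produces a steady
        // acceleration instead of a one-off velocity bump.
        body.AddForce(storedForce, ForceMode2D.Force);
    }
}
```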
I created a sample project to reproduce your issue, but I couldn't reproduce the velocity staying fixed as you said. I'm not sure how you set up your scene, so I made some assumptions. You have probably disabled either gravity or friction on your object, since you report that the velocity increases.
Acceleration, on the other hand, requires a force applied continuously. Think of it this way: if your car is still and you push it for a short amount of time, the car will move and then stop. If you keep pushing, adding more and more force, the car will move faster and faster. Now think about your environment: you have a frictionless object. Every time you press Q you add a small force to it. It then moves to the right a bit faster, but at a constant speed (there is no longer any force changing its velocity).
Here is what you'd want to do instead: rather than adding the force only in the frame where you pressed Q, add it for as long as Q is held:
private void FixedUpdate()
{
    time += Time.fixedDeltaTime;
    if (time > 0.5)
    {
        Debug.Log("Velocity magnitude = " + rigidbody2D.velocity.magnitude);
        time = 0;
    }
    if (Input.GetKey(KeyCode.Q))
    {
        // A constant force applied every physics step produces acceleration.
        rigidbody2D.AddForce(speedMultiplier * Vector2.right, ForceMode2D.Force);
    }
}
Also, notice that I changed Update to FixedUpdate? In Unity, you are advised to apply all physics changes in that method instead. Update runs as often as it can (once per rendered frame if possible), while FixedUpdate runs on the fixed physics timestep. This means you are not spending CPU time telling the object to move on every rendered frame; instead you stay synchronized with the physics simulation.
I don't think that's going to add enough force to the object to get it into motion. Normally you need to multiply your direction by a "force" multiplier. Try something like rigidbody2D.AddForce(Vector2.right * 1000);. That is, intentionally, probably way too much force, but at least you'll know it's moving.

Unity3D - Using Time.deltaTime as wait time for a coroutine

TL;DR: is it safe to use Time.deltaTime as the wait time for the yield in a coroutine?
Often, to avoid unnecessary logic inside Update() for short-lived functionality, I'll run a coroutine instead. For example, in my game I have a slow coroutine that periodically checks the distance between NPCs and the player; if that distance is under some threshold I then run a more frequent coroutine with more expensive line-of-sight testing using raycasting etc. This seems much more efficient than checking everything constantly in Update().
My question is this: is it safe to use Time.deltaTime as the wait time for the yield, thereby simulating Update() for the duration of the coroutine? i.e. yield return new WaitForSeconds(Time.deltaTime);
Since deltaTime is not constant what happens if the next frame takes longer than the last - is the coroutine delayed until the next frame, or is something horrible happening like the whole frame being delayed until the coroutine finishes? I'm tempted to allow a buffer for safety e.g. yield return new WaitForSeconds(Time.deltaTime * 1.2f); but ideally I'd like it to execute precisely every frame.
is it safe to use Time.deltaTime as the wait time for the yield in a
coroutine?
No. That's not how to use WaitForSeconds. WaitForSeconds takes seconds as a parameter, not the tiny per-frame values provided by Time.deltaTime.
Below is an example of how to use the WaitForSeconds function.
IEnumerator waitFunction1()
{
    Debug.Log("Hello Before Waiting");
    yield return new WaitForSeconds(3); // Will wait for 3 seconds then run the code below
    Debug.Log("Hello After waiting for 3 seconds");
}
As for waiting with Time.deltaTime, you usually use it in a while loop together with another float variable that you increment or decrement until you reach the wanted value. The advantage of using Time.deltaTime is that you can see how much waiting time is left while waiting, so you can use it for a count-down or count-up timer. You also put yield return null; in the while loop so that Unity will let other scripts run too and your app won't freeze. Below is an example of how to use Time.deltaTime to wait for 3 seconds. You can easily turn it into a countdown timer.
IEnumerator waitFunction2()
{
    const float waitTime = 3f;
    float counter = 0f;
    Debug.Log("Hello Before Waiting");
    while (counter < waitTime)
    {
        Debug.Log("Current WaitTime: " + counter);
        counter += Time.deltaTime;
        yield return null; // Don't freeze Unity
    }
    Debug.Log("Hello After waiting for 3 seconds");
}
If you want to check every frame, there is WaitForEndOfFrame.
There is no point in WaitForSeconds if you don't need a specific time to wait for.
You can also yield return a WWW object to wait for it to finish.
Update: there is now CustomYieldInstruction and classes that inherit from it for use in coroutines, like WaitUntil.
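For example, a WaitUntil-based version of the distance check described in the question might look like this (the method and parameter names are illustrative, inside some MonoBehaviour):

```
// Sketch: WaitUntil suspends the coroutine until the predicate returns true,
// evaluated once per frame; this often replaces a manual while/yield-null loop.
IEnumerator AttackWhenClose(Transform player, Transform npc, float threshold)
{
    yield return new WaitUntil(() => Vector3.Distance(player.position, npc.position) < threshold);
    Debug.Log("Player in range, starting line-of-sight checks");
}
```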
I know this is a bit old, but I just came across it today while looking for something related.
Anyway, Programmer's answer is a good start. Using Time.deltaTime is not a good way to release control of a coroutine. For one, yielding null will get you to the next frame of the simulation. Second, what happens if the frame you just finished was a bit slow (or the next frame is a bit fast)? Keep in mind that the simulation (or game) runs in frames, much like the frames of a film. Because of this, using WaitForSeconds can have some interesting effects. For instance, if the time you requested falls between frames, what do you think will happen? The coroutine regains control only after the requested time has passed, at the next frame boundary. So if you were hoping to just shoot off a WaitForSeconds(Time.deltaTime) on each loop of the coroutine, chances are you are skipping the occasional frame (sometimes more).
I had a coworker who did a bunch of WaitForSeconds calls inside a long-running coroutine, and the total time that passed was anywhere between 5 and 10 seconds longer than expected. Those off-by-a-fraction-of-a-frame errors in the requested time add up, and unlike in nature, where errors tend to average out, these errors always accumulate in one direction: the amount of time that passes is always greater than or equal to the amount of time requested.
Programmer's answer showed you how to use Time.deltaTime to measure the passage of simulation time. I should warn you that once you start using things other than null to release control of the coroutine, Time.deltaTime might stop working for you.
If you yield null, run some code, and then yield a WaitForEndOfFrame, you will still be within the same frame when the coroutine resumes control. Because you are still in the same frame as before calling WaitForEndOfFrame, the simulation time that has passed will be zero, but Time.deltaTime will not be zero. So, yeah, measuring time can get tricky once you start mixing different return types. You should also be careful about switching between frames and fixed frames. Using a WaitForFixedUpdate will put you onto the fixed-frame timeline. Reading Time.fixedDeltaTime after a WaitForFixedUpdate gives you the time between that fixed frame and the previous fixed frame, which has nothing to do with Time.deltaTime or any of the frame timings.

Simple Simulation: Fast Forward and Normal Timer

A simple draw and move simulation uses the following:
A clock timer. Interval: 200ms.
A movement timer. Interval: 1ms.
Movement Constant. Value: 2.
Every time the movement timer ticks, a picture moves by the movement constant (i.e. picture.X = picture.X + movement constant).
The problem is fast-forwarding. The user CAN fast-forward the simulation at any time.
How do I change these three values (clock timer, movement timer, movement constant) to make the simulation run faster without sacrificing integrity during a fast-forwarded run?
If integer multiples (1x, 2x, 3x, ...) for fast-forwarding are enough, you could just run the simulation function several times inside the timer handler function.
I'm not sure what you're doing with the clock timer though, but the same principle would apply to whatever it's doing.
You can have an "internal time" that's independent of the "real time". When the system runs at the base speed, these two increase in sync (every tick of the timer, which is set to 1 ms, adds 1 ms to the internal time).
When you have a speedup multiplier of 2x, add 2 ms to the internal time for every timer tick.
Next you will have to calculate positions based on the internal time, with maybe a function like
newposition = startposition + speed * time
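A minimal sketch of that internal-time idea (the class and member names are mine, not from the answer):

```
// Real time advances 1 ms per timer tick; internal time advances
// 1 ms times the current speed multiplier.
class SimulationClock
{
    public int SpeedMultiplier = 1;      // 1x normal speed, 2x fast-forward, ...
    public long InternalTimeMs { get; private set; }

    // Called by the 1 ms movement timer.
    public void OnTick() => InternalTimeMs += SpeedMultiplier;
}

// Position is then a pure function of internal time, so fast-forwarding
// cannot break integrity: newposition = startposition + speed * InternalTimeMs.
```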
As Hans implied, there is a 'grain-size' in time and space in most simulations. Agents are then scheduled by skipping over time intervals ('ticks'). However, if you need more flexible and well-tested scheduling, you might want to borrow a scheduler from an ABM simulation package such as Mason (for Java) or Repast (Java or C++), or look at their open-source scheduler class codes and translate to another language.

High precision sleep or How to yield CPU and maintain precise frame rate

I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play the game. In this game engine the speed of the game is tied to the frame rate. So generally one wants to limit the frame rate to about 60 fps. The code of this function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - previousFrame) * fps < freq)
    {
        int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
        System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    previousFrame = frame;
}
Of course, I have found that due to the imprecise nature of the Sleep function on some systems, the frame rate comes out quite differently than expected. The precision of Sleep is only about 15 milliseconds, so you can't wait for less than that. The strange thing is that some systems achieve a perfect frame rate with this code and can hit a range of frame rates exactly, but other systems don't. I can remove the Sleep call and then those other systems will achieve the frame rate, but then they hog the CPU.
I have read other articles about the sleep function:
Sleep function in c in windows. Does a function with better precision exist?
Sleep Function Error In C
What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:
Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
How do I achieve a generally precise frame rate without hogging the CPU from C# code?
You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that this globally affects the system and might increase the power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed the higher timer accuracy another running program probably did that.
And I wouldn't bother calculating the sleep time; I'd just use Sleep(1) in a loop.
But even with only 16 ms precision, you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with few adaptations to your current code.
Or you can switch to code that makes the movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter so you don't get uselessly high framerates and unnecessarily consume power.
Edit: Based on ideas and comments in this answer, the accepted answer was formulated.
Almost all game engines handle updates by passing in the time since the last frame and having movement etc. behave proportionally to time; any other implementation is faulty.
Although CodeInChaos' suggestion answers your question, and might work partially in some scenarios, it's just plain bad practice.
Limiting the frame rate to your desired 60 fps will work only while the computer can run faster. The second a background task eats up some processor power (for example, a virus scanner starts) and your game drops below 60 fps, everything will go much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything runs at half speed.
Things like Sleep are not going to help, because they halt your process in favor of another process, and that process must in turn be halted before yours resumes. Sleep(1ms) just means that after 1 ms your process returns to the queue, waiting for permission to run, so Sleep(1ms) can therefore easily take 15 ms depending on the other running processes and their priorities.
So my suggestion is that you add, as quickly as possible, some sort of "elapsedSeconds" variable that you use in all your update methods. The earlier you build it in, the less work it is; this will also ensure compatibility with Mono.
If you have 2 parts of your engine, say a physics and a render engine and you want to run these at different framerates, then just see how much time has passed since the last frame, and then decide to update the physics engine or not, as long as you incorporate the time since last update in your calculations you will be fine.
Also, never draw more often than you're moving something. It's a waste to draw the exact same screen twice, so if the only way something can change on screen is by updating your physics engine, keep render-engine and physics-engine updates in sync.
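A hedged sketch of the elapsed-time approach suggested above (the class and field names are illustrative, not from the answer):

```
using System.Diagnostics;

class Game
{
    private readonly Stopwatch stopwatch = Stopwatch.StartNew();
    private double previousTime;
    private float position;
    private const float velocity = 3f; // units per second

    public void UpdateGame()
    {
        double now = stopwatch.Elapsed.TotalSeconds;
        float elapsedSeconds = (float)(now - previousTime);
        previousTime = now;
        // Speed is expressed per second, so movement is frame-rate independent:
        // the object covers the same distance at 35 fps as it does at 60 fps.
        position += velocity * elapsedSeconds;
    }
}
```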
Based on ideas and comments on CodeInChaos' answer, this was the final code I arrived at. I originally edited that answer, but CodeInChaos suggested this be a separate answer.
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
    {
        int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
        if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    if (++fpsFrameCount > fps)
    {
        fpsFrameCount = 0;
        fpsStartTime = frame;
    }
}
