I'm developing a game in C# with Farseer Physics, without XNA. I put world.Step(float dt) in a separate thread in order to keep the form responsive (a while loop on the UI thread would block every other operation). The problem is that the ball, a dynamic body, runs faster or slower depending on the CPU speed, I think. I thought about calling Thread.Sleep() with a delay calculated by reading the CPU frequency and multiplying it by something, but I'm pretty sure that's wrong. I'm a bit of a noob. Any suggestions?
Thread.Sleep is very much not an accurate way to time things. You want to use something like:
Stopwatch timer = new Stopwatch();
timer.Start();
Which you can then use to drive a timed update loop like this:
// Run enough fixed-size updates to catch up to the current time.
// (timer, accumulatedTime and lastTime are fields on the class.)
private double accumulatedTime;
private double lastTime;

private void Idle()
{
    double time = timer.Elapsed.TotalSeconds;
    accumulatedTime += (time - lastTime);
    lastTime = time;

    const double frameTime = 1.0 / 60.0; // fixed step: 60 updates per second
    while (accumulatedTime > frameTime)
    {
        world.Step((float)frameTime);
        accumulatedTime -= frameTime;
    }
}
You can then call the above Idle method followed by a short Sleep in a loop.
Or you could just call it from an event handler for Application.Idle - either directly on the UI thread, or by signalling your other thread.
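For example, a minimal sketch of a dedicated physics thread that calls Idle and yields the CPU in between (the running flag and PhysicsLoop name are just illustrative, not Farseer API):

private volatile bool running = true;

private void PhysicsLoop()
{
    while (running)
    {
        Idle();                           // run as many fixed steps as needed to catch up
        System.Threading.Thread.Sleep(1); // yield the CPU; the exact duration doesn't matter
    }
}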
When I run the game, Unity's Inspector only updates every few seconds, and totalTimeElapsed only increments in steps of about 0.1 seconds. I tried a similar thing with totalTimeElapsed += Time.deltaTime, but with the same result.
How do I get the total time since game start?
float totalTimeElapsed = 0;
int dayCount = 0;

private void Update()
{
    totalTimeElapsed = Time.time;
    dayCount += 1;
    AFunctionTakingLotsOfTime();
}
Because AFunctionTakingLotsOfTime() takes a long time, you need Time.realtimeSinceStartup.
Time.realtimeSinceStartup deals in real (wall-clock) time, rather than the computed per-frame time values that are probably getting skewed by the fact that your function takes so long.
Alternatively, you could use DateTime.
Note that neither is affected by things like time scale, and you may want to make your function less intensive, for example by computing only a small chunk in any given call (or by using a coroutine), or by splitting the method off onto a Job thread.
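A minimal sketch of the realtimeSinceStartup approach (the class name GameClock is just illustrative):

using UnityEngine;

public class GameClock : MonoBehaviour
{
    float totalTimeElapsed;

    void Update()
    {
        // Real wall-clock seconds since startup; not affected by Time.timeScale
        // or by how long the rest of this frame's work takes.
        totalTimeElapsed = Time.realtimeSinceStartup;
    }
}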
float totalTime = 0f;

// Update is called once per frame
void Update () {
    totalTime += Time.deltaTime;
}
I believe what you are looking for would be something like this. Assuming this class is present on runtime of your game, you should have an accurate variable which can tell you the elapsed time since the game started.
You might also want to know that Update is called once per frame, i.e. if you are running at 60 FPS, Update will be called 60 times in that second. You can try FixedUpdate, which is called at fixed time intervals instead.
I am creating a rhythm VR game for Google Cardboard, inspired by Beat Saber on PSVR and Oculus Rift.
The concept is that blocks with different directions come at you, following the rhythm of the music, just like in Beat Saber.
For every song I have created a ScriptableObject called music, with every parameter it should contain, like the difficulty of the song, the name, etc., but also an array of Vector3 called notes: the first float of each Vector corresponds to the time the note should be hit, in the rhythm of the music, starting from 0: ScriptableObject
Then, in a script called SlashSpawnManagement, where everything is based on WaitForSeconds(), I spawn the blocks to smash. It's really hard for me to explain the order I do it in with words, so here is an image: Explanation
In theory, what this script does is: wait for some time, spawn a block, wait for some time, spawn a block, etc. The logic seems okay, but here's the weird part. As you play the song, the gap between each block gradually becomes bigger and bigger, meaning the blocks become more and more out of sync with the music. It starts very well, but by the end of the music there is at least a 5 s gap.
I figured it has something to do with the frame rate dropping, so I tried to set the frame rate to something low with:
QualitySettings.vSyncCount = 0; // VSync must be disabled
Application.targetFrameRate = 30;
But it doesn't solve the problem. I tried WaitForSecondsRealtime instead of WaitForSeconds, but nothing changes. I have read somewhere that WaitForSeconds depends on the frame rate... Where I calculate the time, I tried subtracting h divided by a number to even out the gradual gap. This works for some blocks, but not for every block: Notes[j][0] - h / 500
So here's my question: how do I make WaitForSeconds, or any other method, consistent with the seconds provided?
Thank you in advance.
PS: For more clarifications, please ask, and please forgive my typos and my English :)
If you want something to happen in regular time intervals, it is important to make sure that errors don't accumulate.
Don't:
private IEnumerator DoInIntervals()
{
    while (this)
    {
        // WaitForSeconds will NOT wait exactly 1 second but a few ms more.
        // This error accumulates over time.
        yield return new WaitForSeconds(1f);
        DoSomething();
    }
}
Do:
private IEnumerator DoInIntervals()
{
    const float interval = 1f;
    float nextEventTime = Time.time + interval;
    while (this)
    {
        if (Time.time >= nextEventTime)
        {
            // This ensures that nextEventTime is exactly (interval) seconds after
            // the previous nextEventTime. Errors are not accumulated.
            nextEventTime += interval;
            DoSomething();
        }
        yield return null;
    }
}
This will make sure your events happen in regular intervals.
Note: Even though this will be regular, it does not guarantee that it will stay in sync with other systems like audio. That can only be achieved by having a shared time between systems, like spender mentioned in his comment on your question.
Trying to drive the timing of audio with WaitForSeconds and hoping for the best is wishful thinking.
Have a list of Vector3s prepared in advance. If you want to prepare the list using the rhythm, that will work. Then use the AudioSource's time to check every Update whether the timing is right, and spawn a block at that moment.
void Update () {
    SpawnIfTimingIsRight();
}

void SpawnIfTimingIsRight() {
    float nextTiming = timingsArray[nextIndex].x;
    // Time for a block to travel from the spawn point to the hit position
    float blockTime = this.blockDistanceToTravel / this.blockSpeed;
    // Spawn the block early so that it arrives exactly on the beat
    float timeToSpawnBlock = nextTiming - blockTime;
    // Spawn when the audio clock crosses the spawn time (checked once per frame)
    if (this.audioSource.time >= timeToSpawnBlock && (this.audioSource.time - Time.deltaTime) < timeToSpawnBlock) {
        // Spawn block here, then advance nextIndex to the next timing
    }
}
I am trying to make a Snake game. And what I want to happen is that each time the snake eats a piece of food it moves quicker.
I want to do this by making the timer tick faster.
I start the timer at an interval time of:
gltimer.Interval = new TimeSpan(20000);
And update it with:
public void updateTimer(object sender, EventArgs e)
{
    long nrTicks = gltimer.Interval.Ticks;
    nrTicks = (long)(nrTicks * 0.95);
    gltimer.Interval = new TimeSpan(nrTicks);
}
But when I run my game the speed stays the same until I reach the 14th snack and then it suddenly changes. I already figured out that the nrTicks then drops below 10000.
My question is why the Interval doesn't update for the intermediate values?
The DispatcherTimer is not designed to execute an action at precise intervals.
While it dispatches the action at the requested interval, the dispatched actions only get executed when the GUI thread (in your case, the WPF application loop) decides to run them.
Setting the interval to 20,000 ticks results in an interval of 20,000 × 100 ns = 2,000,000 ns = 2 ms. A WPF application has a timer resolution of around 10 ms at best.
Look at those posts:
WPF Timer problem... Cannot get correct millisecond tick
WPF Dispatchertimer delayed reaction / freeze
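One way around the resolution limit is to leave the timer interval alone and drive the snake's speed from real elapsed time instead, stepping it more than once per tick when needed. A rough sketch, assuming a hypothetical MoveSnakeOneCell() method and illustrative field names:

using System;
using System.Diagnostics;

// Fields on your game window/class
Stopwatch clock = Stopwatch.StartNew();
double secondsPerCell = 0.2;   // shrink this each time a snack is eaten
double nextMoveTime = 0.2;

void OnTimerTick(object sender, EventArgs e)
{
    // The timer only needs to fire often enough (every 10-15 ms is fine);
    // the actual movement rate is driven by real elapsed time.
    while (clock.Elapsed.TotalSeconds >= nextMoveTime)
    {
        MoveSnakeOneCell();
        nextMoveTime += secondsPerCell;
    }
}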
I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play the game. In this game engine the speed of the game is tied to the frame rate. So generally one wants to limit the frame rate to about 60 fps. The code of this function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - previousFrame) * fps < freq)
    {
        int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
        System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    previousFrame = frame;
}
Of course I have found that due to the imprecise nature of the sleep function on some systems, the frame rate comes out quite differently than expected. The precision of the sleep function is only about 15 milliseconds, so you can't wait less than that. The strange thing is that some systems achieve a perfect frame rate with this code and can achieve a range of frame rates perfectly. But other systems don't. I can remove the sleep function and then the other systems will achieve the frame rate, but then they hog the CPU.
I have read other articles about the sleep function:
Sleep function in c in windows. Does a function with better precision exist?
Sleep Function Error In C
What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:
Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
How do I achieve a generally precise frame rate without hogging the CPU from C# code?
You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that this globally affects the system and might increase the power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed the higher timer accuracy another running program probably did that.
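A sketch of the P/Invoke declarations for this (the functions live in winmm.dll); remember to restore the default resolution with timeEndPeriod on shutdown:

using System.Runtime.InteropServices;

[DllImport("winmm.dll")]
static extern uint timeBeginPeriod(uint uPeriod);

[DllImport("winmm.dll")]
static extern uint timeEndPeriod(uint uPeriod);

// At startup:
//   timeBeginPeriod(1);   // request 1 ms timer resolution
// ...game loop using Thread.Sleep...
// On shutdown:
//   timeEndPeriod(1);     // restore the default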
And I wouldn't bother calculating the sleep time; I'd just use Sleep(1) in a loop.
But even with only 16 ms precision you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with few adaptations to your current code.
Or you can switch to code that makes the movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter so you don't get uselessly high framerates and unnecessarily consume power.
Edit: Based on ideas and comments in this answer, the accepted answer was formulated.
Almost all game engines handle updates by passing in the time since the last frame and having movement etc. behave proportionally to that time; any other implementation is faulty.
Although CodeInChaos' suggestion answers your question, and might partially work in some scenarios, it's just plain bad practice.
Limiting the frame rate to your desired 60 fps will only work when the computer is running fast enough. The second a background task eats up some processor power (for example, the virus scanner starts) and your game drops below 60 fps, everything will go much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything goes half as fast.
Things like Sleep are not going to help, because they halt your process in favor of another process, and that process must in turn be halted before yours resumes. Sleep(1 ms) just means that after 1 ms your process is returned to the queue, waiting for permission to run; it can therefore easily take 15 ms, depending on the other running processes and their priorities.
So my suggestion is that you add, as quickly as possible, some sort of "elapsedSeconds" variable that you use in all your update methods. The earlier you build it in, the less work it is, and it will also ensure compatibility with Mono.
If you have two parts of your engine, say a physics engine and a render engine, and you want to run them at different frame rates, just check how much time has passed since the last frame and then decide whether or not to update the physics engine. As long as you incorporate the time since the last update in your calculations, you will be fine.
Also, never draw more often than you move something. It's a waste to draw the exact same screen twice, so if the only way something can change on screen is by updating your physics engine, keep the render engine and physics engine updates in sync.
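A rough sketch of what such an elapsed-time-based loop could look like (UpdatePhysics, Render and gameRunning are hypothetical names, not the engine's actual API):

long previous = System.Diagnostics.Stopwatch.GetTimestamp();

while (gameRunning)
{
    long now = System.Diagnostics.Stopwatch.GetTimestamp();
    float elapsedSeconds = (float)(now - previous) / System.Diagnostics.Stopwatch.Frequency;
    previous = now;

    UpdatePhysics(elapsedSeconds);    // movement = speed * elapsedSeconds
    Render();                         // draw at most once per update
    System.Threading.Thread.Sleep(1); // crude frame-rate limiter: yield some CPU
}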
Based on ideas and comments on CodeInChaos' answer, this was the final code I arrived at. I originally edited that answer, but CodeInChaos suggested this be a separate answer.
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
    {
        int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
        if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    if (++fpsFrameCount > fps)
    {
        fpsFrameCount = 0;
        fpsStartTime = frame;
    }
}
I have an algorithm that draws a lot of pixels on a Canvas in an animation. But I have no control over the speed of the animation and it draws very fast, so I added a Thread.Sleep(1), but then it is too slow when drawing several thousand pixels.
I have tried a Storyboard approach, but that ended up being very slow.
So is there an alternative to Thread.Sleep for slowing down my loop?
void DrawGasket(object sender, DoWorkEventArgs e)
{
    Random rnd = new Random();
    Color color = Colors.White;
    while (Counter <= noOfPx)
    {
        switch (rnd.Next(3))
        {
            case 0:
                m_lastPoint.X = (m_top.X + m_lastPoint.X) / 2;
                m_lastPoint.Y = (m_top.Y + m_lastPoint.Y) / 2;
                color = Colors.White;
                break;
            case 1:
                m_lastPoint.X = (m_left.X + m_lastPoint.X) / 2;
                m_lastPoint.Y = (m_left.Y + m_lastPoint.Y) / 2;
                color = Colors.Orange;
                break;
            case 2:
                m_lastPoint.X = (m_right.X + m_lastPoint.X) / 2;
                m_lastPoint.Y = (m_right.Y + m_lastPoint.Y) / 2;
                color = Colors.Purple;
                break;
        }

        // Capture this iteration's values; the lambda runs later on the UI
        // thread, by which time color and m_lastPoint may have changed.
        Color pxColor = color;
        Point pxPoint = m_lastPoint;

        Dispatcher.BeginInvoke(() =>
        {
            var px = new Rectangle {
                Height = m_pxSize,
                Width = m_pxSize,
                Fill = new SolidColorBrush(pxColor)
            };
            Canvas.SetTop(px, pxPoint.Y);
            Canvas.SetLeft(px, pxPoint.X);
            can.Children.Add(px);
            lblCounter.Text = Counter.ToString();
        });

        Counter++;
        Thread.Sleep(1);
    }
}
Can't you just sleep every N iterations through the loop rather than every iteration?
Here's the simple method: you want it to draw at a specific rate, so during each pass through the loop, check what time it is. If it's "time" (at whatever rate you've decided on), then draw/update; otherwise, just loop. (For example, if you want 10 updates per second and you just updated, store the current time in a variable and compare against it; until a tenth of a second has passed, don't redraw, just loop.)
This isn't bad, as long as you can do everything else you need to do inside this loop. If this loop must complete before anything else happens, however, you'll just be burning a lot of CPU cycles waiting.
If you want to get fancy and do this right, you can check how much time has passed and update the visual by the correct amount. For example, if you want your object to move 30 pixels per second, you update its position precisely (if 0.0047 seconds have gone by, it should move 0.141 pixels) and store that value. Of course, visually it won't have moved at all until another pass through the loop, but exactly when it moves and how far is determined by time rather than by the speed of the computer. This way it moves 30 pixels per second regardless of the speed of the machine; it will just be jumpier on a slow machine.
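A minimal sketch of that idea applied to the gasket loop, assuming a hypothetical DrawNextPixel() helper that computes and draws the next point:

var clock = System.Diagnostics.Stopwatch.StartNew();
double lastDrawTime = 0;
const double drawInterval = 0.1; // aim for roughly 10 updates per second

while (Counter <= noOfPx)
{
    double now = clock.Elapsed.TotalSeconds;
    if (now - lastDrawTime >= drawInterval)
    {
        lastDrawTime = now;
        DrawNextPixel(); // compute and draw the next point here
    }
    // otherwise just keep looping (or do other work) without drawing
}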
The CompositionTarget.Rendering event fires every time Silverlight draws a frame, so it is a good place for any custom drawing code. You can use some other mechanism, like a timer, to keep track of your "game time" and adjust the model appropriately on a separate thread (or at least a separate execution path) from the drawing code. That way, when it comes time to draw the UI, there are no calculations left to perform; you just paint the current state of your model. Basically, move the code in Dispatcher.BeginInvoke to the event handler for CompositionTarget.Rendering and I believe you'll be good.
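A minimal sketch of that arrangement, where the worker thread only computes points and the Rendering handler drains them each frame (m_pointsToDraw and the tuple shape are just illustrative):

// Filled by the background worker; drained on the UI thread each frame.
private readonly System.Collections.Concurrent.ConcurrentQueue<(double X, double Y, Color Color)> m_pointsToDraw
    = new System.Collections.Concurrent.ConcurrentQueue<(double X, double Y, Color Color)>();

public MainPage()
{
    InitializeComponent();
    CompositionTarget.Rendering += OnRendering;
}

private void OnRendering(object sender, EventArgs e)
{
    // Paint whatever the worker has produced since the last frame.
    while (m_pointsToDraw.TryDequeue(out var item))
    {
        var px = new Rectangle { Height = m_pxSize, Width = m_pxSize, Fill = new SolidColorBrush(item.Color) };
        Canvas.SetTop(px, item.Y);
        Canvas.SetLeft(px, item.X);
        can.Children.Add(px);
    }
}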
One option would be to not sleep each time:
if(Counter % 2 == 0) Thread.Sleep(1);
With or without Thread.Sleep(), the render speed is determined by how fast your system is and by how much CPU time your process gets, which is bad!
You should use the approach video games use nowadays: keep track of the current time, and call BeginInvoke() only x times per second (where x is the frames-per-second render speed).
In the invoked function, check how much has been drawn and how much you still need to draw. In particular, don't call BeginInvoke for every single Rectangle; even without a Thread.Sleep(), every call is a context switch.
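A sketch of that batching idea, using hypothetical ComputeNextPoint()/DrawPoints() helpers in place of the per-pixel BeginInvoke (in practice you would carry the colour along with each point):

const int batchSize = 100;
var batch = new List<Point>(batchSize);

while (Counter <= noOfPx)
{
    batch.Add(ComputeNextPoint()); // pure math on the worker thread, no UI work
    Counter++;

    if (batch.Count == batchSize)
    {
        var toDraw = batch;                 // hand the full batch off...
        batch = new List<Point>(batchSize); // ...and start collecting a new one
        Dispatcher.BeginInvoke(() => DrawPoints(toDraw)); // one context switch per 100 pixels
    }
}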