Approach to solving an inaccurate replay system - C#

Trying to make an accurate replay system in Unity and C#
Hi all,
I'm working on a racing game and decided to add a replay system that can also drive a "ghost car". Initially I recorded data only on certain events (like key presses), but I only got a smooth replay once I started recording data on every frame. That is still acceptable, since the file is not huge and the replay works, but the trouble is there is always a slight variation in time, only about 0.1 or 0.2 seconds at most. I have a list of keyframes, and each entry stores the time at which it should be shown. I think the problem is that the frame rate varies, so the same time marks are not hit on every run; the winning frame's time is not always rendered, and the winning frame ends up being shown on the next update, slightly later than it should be. I'm using C# and Unity, just in case, but I think the issue is mostly independent of that. Thanks a lot for any clue; I have been stuck on this for some time now.

It sounds like you're doing frame-by-frame replay which, as you've discovered, requires your frames to play back with the same delay as the recording. In a game-render loop, that's not guaranteed to happen.
As you record the car states (position, heading, etc) per frame, you need to also record a timestamp (in this case, accumulating Time.deltaTime from race start should suffice).
When you play it back, find the two recorded frames that bound the current timestamp and interpolate (i.e. Lerp) the car's state between them.
Hints for frame interpolation:
class Snapshot {
    public float Timestamp;
    public Matrix4x4 Transform; // As an example. Put more data here.
}

private int PrevIndex = 0;
private List<Snapshot> Snapshots = new List<Snapshot>(); // must stay sorted by Timestamp (recording in order guarantees this)

private float GetLerpFactor(float currentTimestamp) {
    if (PrevIndex >= Snapshots.Count - 1)
        return 0f; // Reached end of Snapshots

    // move 'frame' forward until currentTimestamp falls between PrevIndex and PrevIndex + 1
    while (PrevIndex < Snapshots.Count - 1 && currentTimestamp >= Snapshots[PrevIndex + 1].Timestamp)
        PrevIndex++;

    if (PrevIndex >= Snapshots.Count - 1)
        return 0f; // playback ran past the last snapshot

    var currentDelta = Mathf.Max(0f, currentTimestamp - Snapshots[PrevIndex].Timestamp);
    var fullDelta = Snapshots[PrevIndex + 1].Timestamp - Snapshots[PrevIndex].Timestamp;
    return fullDelta > 0f ? currentDelta / fullDelta : 0f;
}
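For completeness, a minimal playback sketch built on GetLerpFactor (ghostTransform and replayTime are hypothetical names, and it assumes a Unity version where Matrix4x4.rotation is available):

private Transform ghostTransform; // the ghost car to move
private float replayTime;         // accumulated playback time

void Update() {
    replayTime += Time.deltaTime;

    float t = GetLerpFactor(replayTime);
    if (PrevIndex >= Snapshots.Count - 1)
        return; // replay finished (or nothing recorded)

    Snapshot a = Snapshots[PrevIndex];
    Snapshot b = Snapshots[PrevIndex + 1];

    // Interpolate between the two bounding snapshots
    ghostTransform.position = Vector3.Lerp(a.Transform.GetColumn(3), b.Transform.GetColumn(3), t);
    ghostTransform.rotation = Quaternion.Slerp(a.Transform.rotation, b.Transform.rotation, t);
}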

Unless for some reason you need to interact with the "ghost" car (collisions, for example), record the final position/speed/direction data at frequent enough moments of the original run and interpolate that during the new simulation. I would not record raw inputs but rather the resulting changes (like gear shifts), unless you need to measure or show how fast the user reacted to something.
If you really want to replay the same inputs, you'd have to run two separate simulations at the same time so the physics and timing of the "real" run don't affect the "ghost" one, and most likely you'll again have to interpolate the output of the "ghost" simulation to align it with the real one (unless you use fixed time steps).
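As a rough sketch of that idea (the names GhostSample, GhostRecorder and recordInterval are made up for illustration, not from the question):

using System.Collections.Generic;
using UnityEngine;

[System.Serializable]
public struct GhostSample {
    public float Time;
    public Vector3 Position;
    public Quaternion Rotation;
}

public class GhostRecorder : MonoBehaviour {
    public float recordInterval = 0.1f; // 10 samples per second is usually plenty for a ghost

    private readonly List<GhostSample> samples = new List<GhostSample>();
    private float elapsed;
    private float nextSampleTime;

    void Update() {
        elapsed += Time.deltaTime;
        if (elapsed >= nextSampleTime) {
            samples.Add(new GhostSample {
                Time = elapsed,
                Position = transform.position,
                Rotation = transform.rotation
            });
            nextSampleTime += recordInterval; // fixed cadence, so timing errors don't accumulate
        }
    }
}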

Related

WaitForSeconds gradually becomes out of sync... Unity

I am creating a rhythm VR game for Google Cardboard, inspired by Beat Saber on PSVR and Oculus Rift.
The concept is that blocks with different directions come at you, following the rhythm of the music, just like in Beat Saber.
For every song I have created a ScriptableObject called music, with every parameter it should contain, like the difficulty of the song, the name, etc., but also an array of Vector3 called notes: the first float of each Vector corresponds to the time the note should be hit in the rhythm of the music, starting from 0: ScriptableObject
Then, in a script called SlashSpawnManagement, where everything is based on WaitForSeconds(), I spawn the blocks to smash. It's really hard for me to explain the order I do it in with words, so here is an image: Explanation
In theory, what this script does is wait for some time, spawn a block, wait for some time, spawn a block, and so on. The logic seems okay, but here's the weird part: as you play the song, the gap between each block gradually becomes bigger, meaning the blocks become more and more out of sync with the music. It starts very well, but by the end of the music there is at least a 5 s gap.
I figured it has something to do with the frame rate dropping, so I tried to force the frame rate to something low with:
QualitySettings.vSyncCount = 0; // VSync must be disabled
Application.targetFrameRate = 30;
But it doesn't solve the problem. I tried WaitForSecondsRealtime instead of WaitForSeconds, but nothing changes. I have read somewhere that WaitForSeconds depends on the frame rate... Where I calculate the time, I tried subtracting h divided by a number to even out the gradual gap. This works for some blocks but not every block: Notes[j][0] - h / 500
So here's my question: how do I make WaitForSeconds, or any other method, consistent with the seconds provided?
Thank you in advance,
PS: For more clarification, please ask, and please forgive my typos and my English :)
If you want something to happen in regular time intervals, it is important to make sure that errors don't accumulate.
Don't:
private IEnumerator DoInIntervals()
{
    while (this)
    {
        yield return new WaitForSeconds(1f); // this will NOT wait exactly 1 second but a few ms more. This error accumulates over time
        DoSomething();
    }
}
Do:
private IEnumerator DoInIntervals()
{
    const float interval = 1f;
    float nextEventTime = Time.time + interval;
    while (this)
    {
        if (Time.time >= nextEventTime)
        {
            nextEventTime += interval; // this ensures that nextEventTime is exactly (interval) seconds after the previous nextEventTime. Errors are not accumulated.
            DoSomething();
        }
        yield return null;
    }
}
This will make sure your events happen in regular intervals.
Note: Even though this will be regular, it does not guarantee that it will stay in sync with other systems like audio. That can only be achieved by having a shared time between systems, like spender mentioned in his comment on your question.
Trying to use WaitForSeconds to match the timing of the audio and hoping for the best is wishful thinking.
Have a list of Vector3s prepared in advance. If you want to prepare the list using the rhythm, that will work. Then, on every Update, use the AudioSource's time to check whether the timing is right, and spawn a block at that moment.
void Update () {
    SpawnIfTimingIsRight();
}

void SpawnIfTimingIsRight() {
    if (nextIndex >= timingsArray.Length) return; // all notes spawned

    float nextTiming = timingsArray[nextIndex].x;
    // Time for the block to travel to its hit position
    float blockTime = this.blockDistanceToTravel / this.blockSpeed;
    // Spawn the block early so it arrives exactly on the beat
    float timeToSpawnBlock = nextTiming - blockTime;
    if (this.audioSource.time >= timeToSpawnBlock) {
        // Spawn block
        nextIndex++;
    }
}

Rendering video to file that may have an inconsistent frame rate

I'm getting raw video frames from a source (that can be considered a black box) at a rate that can be inconsistent. I'm trying to record the video feed to the disk. I'm doing so with AForge's VideoRecorder and am writing to an MP4 file.
However, the inconsistent rate at which I receive frames causes the video to appear sped up. It seems that I only have the ability to create video files that have a fixed frame rate, even though the source does not have a fixed frame rate.
This isn't an issue when rendering to the screen, as we can just render as fast as possible. I can't do this when writing to the file, since playing back the file would play at the fixed frame rate.
What solutions are there? The output does not have to be the same video format as long as there's some reasonable way to convert it later (which wouldn't have to be real time). The video feeds can be quite long, so I can't just store everything in memory and encode later.
My code currently looks along the lines of:
VideoFileWriter writer = new VideoFileWriter();
Stopwatch stopwatch = new Stopwatch();

public override void Start() {
    writer.Open("output.mp4", videoWidth, videoHeight, frameRate, AForge.Video.FFMPEG.VideoCodec.MPEG4);
    stopwatch.Start();
}

public override void End() {
    writer.Close();
}

public override void Draw(Frame frame) {
    double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
    double timeBetweenFramesInSeconds = 1.0 / FrameRate;
    if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds) {
        stopwatch.Restart();
        writer.WriteVideoFrame(frame.ToBitmap());
    }
}
Where our black box calls the Start, End, and Draw methods. The current check that I have in Draw prevents us from drawing too fast, but doesn't do anything to handle the case of drawing too slowly.
It turns out WriteVideoFrame is overloaded and one variant of the function is WriteVideoFrame(Bitmap frame, TimeSpan timestamp). As you can guess, the time stamp is used to make a frame appear at a certain time in the video.
Thus, by keeping track of the real time, we can set each frame to use the time that it should be in the video. Of course, the video quality will be worse if you can't render quickly enough, but this resolves the issue at hand.
Here's the code that I used for the Draw function:
// We can provide a frame offset so that each frame has a time that it's supposed to be
// seen at. This ensures that the video looks correct if the render rate is lower than
// the frame rate, since these times will be used (it'll be choppy, but at least it'll
// be paced correctly -- necessary so that sound won't go out of sync).
long currentTick = DateTime.Now.Ticks;
StartTick = StartTick ?? currentTick;
var frameOffset = new TimeSpan(currentTick - StartTick.Value);
// Figure out if we need to render this frame to the file (ie, has enough time passed
// that this frame will be in the file?). This prevents us from going over the
// desired frame rate and improves performance (assuming that we can go over the frame
// rate).
double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
double timeBetweenFramesInSeconds = 1.0 / FrameRate;
if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds)
{
    stopwatch.Restart();
    Writer.WriteVideoFrame(frame.ToBitmap(), frameOffset);
}
Where StartTick is a long? member of the object.
I have also faced this problem. In my case I'm mimicking a CCTV system using AForge. CCTV should be accurate about the times it records, so this was a big dilemma for me. Here is the workaround I used.
First, declare a TimeSpan which will be the base of the recording. You need to set it when you start the recording; its value is the time you started recording. For the sake of this answer, let's call it tmspStartRecording.
Then in a new frame event of your capture device:
var currentTime = DateTime.Now.TimeOfDay;
// this will get the elapse time between
// the current time from the time you start your recording
TimeSpan elapse = currentTime - tmspStartRecording;
writer.WriteVideoFrame((Bitmap)image.Clone(),elapse);
Don't forget to set the value of the starting Timespan, OK?
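For clarity, a minimal sketch of how that fits together (the handler name and writer parameters are placeholders; eventArgs.Frame is AForge's incoming Bitmap):

// When recording starts
tmspStartRecording = DateTime.Now.TimeOfDay;
writer.Open("output.mp4", videoWidth, videoHeight, frameRate, AForge.Video.FFMPEG.VideoCodec.MPEG4);

// New-frame event handler of the capture device
void Camera_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    TimeSpan elapse = DateTime.Now.TimeOfDay - tmspStartRecording;
    writer.WriteVideoFrame((Bitmap)eventArgs.Frame.Clone(), elapse);
}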

How to maintain stable frames-per-second while capturing real-time video in C#

I have written a C# program that captures video from a specialized camera through the camera manufacturer's proprietary API. I am able to write captured frames to disk through a FileStream object, but I am at the mercy of the camera and disk I/O when it comes to the framerate.
What is the best way to make sure I write to disk at the required framerate? Is there a certain algorithm available that would compute the real-time average framerate and then add/discard frames to maintain a certain desired framerate?
It's difficult to tell much because of the lack of information.
What's the format? Is there compression?
How is the camera API sending the frames? Are they timed, so the camera will send the frame rate you asked for? If it is, you are really dealing with I/O speed.
If you need high quality and are writing without compression, you could experiment with some lossless compression algorithms to balance processing against drive I/O. You could gain some speed if the bottleneck is high drive I/O.
As for frames, there are ways to handle that. Normally frames have a timestamp; you should key off that and discard frames that arrive too close to the previous one.
Let's say you want 60 fps, so the spacing between frames is 1000/60 ≈ 16 ms. If the frame you get has a timestamp only 13 ms after the last one, you can discard it and not write it to disk.
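A rough sketch of that timestamp check (minFrameInterval, lastWrittenTimestamp and WriteFrameToDisk are illustrative names, not from the question):

// Target 60 fps: keep only frames at least ~16 ms apart
TimeSpan minFrameInterval = TimeSpan.FromSeconds(1.0 / 60.0);
TimeSpan? lastWrittenTimestamp = null;

void OnFrameReceived(Bitmap frame, TimeSpan timestamp)
{
    // Discard frames that arrive too soon after the last one we kept
    if (lastWrittenTimestamp.HasValue && timestamp - lastWrittenTimestamp.Value < minFrameInterval)
        return;

    lastWrittenTimestamp = timestamp;
    WriteFrameToDisk(frame); // your FileStream / encoder write
}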
In a perfect world, you would measure the first second, which gives you the number of frames per second your system supports.
Say your camera is capturing 60 fps but your computer can really only handle 45 fps. What you have to do is skip a total of 15 frames per second in order to keep up. Up to here, that's easy enough.
The math in this basic case is:
60 / 15 = 4
So skip one frame every four incoming frames like so (keep frames marked with an X, skip the others):
000000000011111111112222222222333333333344444444445555555555
012345678901234567890123456789012345678901234567890123456789
XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX
Of course, it is likely that you do not get such a clear cut case. The math remains the same, it's just that you'll end up skipping frames at what looks like varying points in time:
// simple algorithm based on fps from source & destination
// which fails to keep up long term
double camera_fps = 60.0;
double io_fps = camera_fps;

for(;;)
{
    double frames_step = camera_fps / io_fps;
    double start_time = time(); // assuming you get at least ms precision
    double frame = 0.0;
    for(int i(0); i < camera_fps; ++i)
    {
        buffer frame_data = capture_frame();
        if(i >= frame) // <- part of the magic happens here
        {
            save_frame(frame_data);
            frame += frames_step; // advance the save threshold only when a frame is kept
        }
    }
    double end_time = time();
    double new_fps = camera_fps / (end_time - start_time);
    if(new_fps < io_fps)
    {
        io_fps = new_fps;
    }
}
As this algorithm shows, you want to adjust your fps as time goes by. During the first second, it is likely to give you an invalid result for all sorts of reasons. For example, writing to disk is likely going to be buffered, so it goes really fast at first and it may look as if you could support 60 fps. Later, the fps will slow down and you may find that your maximum I/O speed is 57 fps instead.
One issue with this basic algorithm is that you can easily reduce the number of frames to make it work within 1 second, but it will only reduce the fps (i.e. io_fps is updated only when new_fps is smaller). If you find the correct number for io_fps, you're fine. If you went too far, you're dropping frames when you shouldn't. This is because new_fps will simply equal camera_fps when (end_time - start_time) is exactly 1 second, meaning that you did not spend too much time capturing and saving the incoming frames.
The solution to this issue is to time your save_frame() function. If the total amount is less than 1 second within the inner loop, then you can increase the amount of frames you can save. This will work better if you can use two threads. One thread reads the frame, pushes the frame on an in memory FIFO, the other thread retrieves the frames from that FIFO. This means the amount of time to capture one frame doesn't (as much) affect the time it takes to save a frame.
bool stop = false;

// capture thread
for(;;)
{
    buffer frame_data = capture_frame();
    fifo.push(frame_data);
    if(stop)
    {
        break;
    }
}

// save-to-disk thread
double expected_time_to_save_one_frame = 1.0 / camera_fps;
double next_start_time = time();
for(;;)
{
    buffer frame_data = fifo.pop();
    double start_time = next_start_time;
    save_frame(frame_data);
    next_start_time = time();
    double end_time = next_start_time;
    if(end_time - start_time > expected_time_to_save_one_frame)
    {
        fifo.pop(); // fell behind: skip one frame
    }
}
This is just pseudo code and I may have made a few mistakes. It also expects that the gap between start_time and end_time is not going to be more than one frame (i.e. if you have a case where the capture is 60 fps and the I/O supports less than 30 fps, you often would have to skip two frames in a row).
For people who are compressing frames, keep in mind that the timing of one call to save_frame() will vary a lot. At times, the frame can easily be compressed (no horizontal movement) and at times it's really slow. This is where such dynamism can help tremendously. Your disk I/O, assuming not much else occurs while you are recording, should not vary much once you reach the maximum speed supported.
IMPORTANT NOTE: these algorithms suppose that the camera fps is fixed; this is likely not quite true; you can also time the camera and adjust the camera_fps parameter accordingly (which means the expected_time_to_save_one_frame variable can also vary over time).
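If it helps, here is a rough C# sketch of that producer/consumer split using BlockingCollection (the class and method names are made up; SaveFrame stands in for your actual disk I/O):

using System.Collections.Concurrent;
using System.Drawing;
using System.Threading.Tasks;

class FrameWriter
{
    // Bounded FIFO between the capture thread and the disk thread
    private readonly BlockingCollection<Bitmap> fifo = new BlockingCollection<Bitmap>(boundedCapacity: 120);

    public void Start()
    {
        // Consumer: drains the FIFO and writes frames to disk
        Task.Run(() =>
        {
            foreach (Bitmap frame in fifo.GetConsumingEnumerable())
            {
                SaveFrame(frame); // your disk I/O / encoder
                frame.Dispose();
            }
        });
    }

    // Producer: called from the capture callback
    public void OnFrameCaptured(Bitmap frame)
    {
        // TryAdd returns false when the FIFO is full, i.e. the disk can't keep up;
        // dropping the frame here is the "skip" from the pseudo code above
        if (!fifo.TryAdd(frame))
            frame.Dispose();
    }

    public void Stop() => fifo.CompleteAdding();

    private void SaveFrame(Bitmap frame) { /* write to disk */ }
}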

Stuttering when moving a single sprite in an XNA application

Update: I have uploaded a video showing the stutter here: http://intninety.co.uk/xnastutter.mp4 . You may have to look closely if you are not viewing it at 1920x1080, but you'll see that there is a rather distinct stutter when moving, every 2 seconds or so. I'd recommend viewing it in Windows Media Player rather than your web browser, to make sure the video itself isn't choppy and masking the actual stutter.
I'm recently picking up a project I started a while ago, however I am still struggling to solve the problem I left it at!
At the moment I have a very simple application which just has a single sprite on screen and is moved around using the directional keys. The problem is every two seconds or so, the game stutters and the sprite appears to jump backwards and then back forwards very quickly.
The sprite itself is a 55x33 bitmap, so isn't anything large, and the code in use is as follows. Hopefully this is enough to get the ball rolling on some ideas as to what may be the problem, if a video is required to see exactly how the stuttering looks I can put one together and upload it somewhere if need be.
As you'll see in the code, it does compensate for time lost between frames by making the movement greater when that happens; however, the drop is happening very consistently, time-wise, which leads me to believe I'm doing something wrong somewhere.
I've tried on a few different machines but the problem persists across all of them, if anyone has any ideas or can see where it is I'm messing up it'd be greatly appreciated if you could point it out.
Thanks :)
Constructor of the Game Setting up the Graphics Device Manager
graphics = new GraphicsDeviceManager(this);
graphics.IsFullScreen = true;
graphics.SynchronizeWithVerticalRetrace = false;
graphics.PreferredBackBufferWidth = 1920;
graphics.PreferredBackBufferHeight = 1080;
Content.RootDirectory = "Content";
this.IsFixedTimeStep = false;
Code from the Game's Update Method
KeyboardState keyboard = Keyboard.GetState();
GamePadState gamePad = GamePad.GetState(PlayerIndex.One);
if (keyboard.IsKeyDown(Keys.Escape))
{
    this.Exit();
}

if ((keyboard.IsKeyDown(Keys.Left)) || (gamePad.DPad.Left == ButtonState.Pressed))
{
    this.player.MoveLeft((float)gameTime.ElapsedGameTime.TotalMilliseconds);
}
else if ((keyboard.IsKeyDown(Keys.Right)) || (gamePad.DPad.Right == ButtonState.Pressed))
{
    this.player.MoveRight((float)gameTime.ElapsedGameTime.TotalMilliseconds);
}

if ((keyboard.IsKeyDown(Keys.Up)) || (gamePad.DPad.Up == ButtonState.Pressed))
{
    this.player.MoveUp((float)gameTime.ElapsedGameTime.TotalMilliseconds);
}
else if ((keyboard.IsKeyDown(Keys.Down)) || (gamePad.DPad.Down == ButtonState.Pressed))
{
    this.player.MoveDown((float)gameTime.ElapsedGameTime.TotalMilliseconds);
}
base.Update(gameTime);
The "Move" Methods seen in the above Update Method
public void MoveLeft(float moveBy)
{
    this.position.X -= (moveBy * this.velocity.X);
}

public void MoveRight(float moveBy)
{
    this.position.X += (moveBy * this.velocity.X);
}

public void MoveUp(float moveBy)
{
    this.position.Y -= (moveBy * this.velocity.Y);
}

public void MoveDown(float moveBy)
{
    this.position.Y += (moveBy * this.velocity.Y);
}
The Game's Draw Method
GraphicsDevice.Clear(Color.CornflowerBlue);
spriteBatch.Begin();
spriteBatch.Draw(this.player.Texture, this.player.Position, null, Color.White, this.player.Rotation, this.player.Origin, 1.0f, SpriteEffects.None, 0.0f);
spriteBatch.End();
base.Draw(gameTime);
Edit: forgot to mention, the velocity object used in the Move methods is a Vector2
I've managed to see it occur once for a split second which has led me to what I think is the problem. Since you are using the raw ElapsedGameTime.TotalMilliseconds value as a factor for your movement, all computer lag that your program experiences will be directly applied to the motion. For example, if your computer (OS) does something else for one twentieth of a second, then the elapsed time value will accumulate to a value of ~50 milliseconds, when it is normally about 0.3 milliseconds. This would cause a frame that has 150 times more motion than a normal frame.
To cause this to happen manually, you can do the following:
// define a frame counter
private int mCounter;

...

protected override void Update(GameTime pGameTime)
{
    // save the elapsed time value
    float time = (float)pGameTime.ElapsedGameTime.TotalMilliseconds;

    ...

    // force a long frame every 2500th frame (change depending on your framerate)
    if (mCounter++ > 2500)
    {
        mCounter = 0;
        time = 75; // about 225 times longer frame
    }

    ...

    // use the time value in your move calls
    if ((keyboard.IsKeyDown(Keys.Left)) || (gamePad.DPad.Left == ButtonState.Pressed))
        mPlayer.MoveLeft(time);
To prevent this from happening, (aside from setting IsFixedTimeStep = true;, which would fix it immediately; but assuming you want IsFixedTimeStep to be false), you should use a time value, as above, but cap it. It's up to you to determine the proportion of elapsed time to motion and to determine what is a good amount of time to allow to pass per frame. Ex:
protected override void Update(GameTime pGameTime)
{
    // save the elapsed time value
    float time = (float)pGameTime.ElapsedGameTime.TotalMilliseconds;
    if (time > 1)
        time = 1;

    ...

    if ((keyboard.IsKeyDown(Keys.Left)) || (gamePad.DPad.Left == ButtonState.Pressed))
        mPlayer.MoveLeft(time);

    ...
While that will correct the issue for your current program, your frames are only at 0.3ms each since there is not a lot happening. Once there is more to your game, more time will be elapsing per frame, and you will want the cap to be much higher than 1ms.
EDIT: To be clear, this is 'downtime' from your CPU/OS. It's not going to go away*, it's just up to you whether to jump ahead a bunch when it happens (which can cause problems if the elapsed time ever makes it to 2000ms, for example), or to cap these spikes and let them cause lag; either way there is going to be a 'hole' in your motion that you can't fill. This really is normal, and it's not as critical as it seems. Once there is more happening in your game, it will become less and less noticeable. It stands out strongly at the moment particularly because there are only two graphics present and nothing else happening.
*(Actually, you might look for other applications and processes that you can close to keep the CPU from being borrowed by some other program, but since you are using a multi-tasking OS, you are never going to be guaranteed to have the CPU to yourself.)
Update Again
Just had a thought. Have you ever checked that (float)gameTime.ElapsedGameTime.TotalMilliseconds does not result in Infinity or a negative number? The elapsed time becoming negative would explain your sprite jumping back then forward.
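A quick defensive check along those lines (clamping to zero is just one option):

float time = (float)gameTime.ElapsedGameTime.TotalMilliseconds;
if (float.IsInfinity(time) || time < 0f)
    time = 0f; // skip movement this frame rather than jumping backwards or to infinity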
It's the keyboard. You can prove this by changing moving left and right to left mouse and right mouse click. Use KeyDown and KeyUp triggers to set a state instead of IsKeyDown.
Update:
It appears there are a few things that cause "stuttering" in XNA. I thought for sure your issue was the keyboard problem one. However, I can only assume you've tested your code with the GamePad also and it suffers the same.
There is another problem with IsFixedTimeStep but you already have that set to false. A quick google search suggests there are more than a few people with these types of problems without any clear fix.
There is a long discussion at http://forums.create.msdn.com/forums/t/9934.aspx?PageIndex=6
Other things to look at:
Try restricting your frame rate to 60 fps. There isn't any reason to go above that anyway. One poster suggests that a 2D app that does nothing could stall the graphics pipeline with a significantly high throughput.
Check to see if IsRunningSlowly is ever set. Garbage collection and Chaos Theory could cause this.
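For reference, a quick way to watch that flag (GameTime.IsRunningSlowly is the standard XNA property; the Debug.WriteLine is just a placeholder, and the flag is only meaningful with a fixed time step):

protected override void Update(GameTime gameTime)
{
    // Set by XNA when Update is taking longer than TargetElapsedTime
    if (gameTime.IsRunningSlowly)
    {
        System.Diagnostics.Debug.WriteLine("Running slowly at " + gameTime.TotalGameTime);
    }

    base.Update(gameTime);
}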

High precision sleep or How to yield CPU and maintain precise frame rate

I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play the game. In this game engine the speed of the game is tied to the frame rate. So generally one wants to limit the frame rate to about 60 fps. The code of this function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - previousFrame) * fps < freq)
    {
        int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
        System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    previousFrame = frame;
}
Of course I have found that due to the imprecise nature of the sleep function on some systems, the frame rate comes out quite differently than expected. The precision of the sleep function is only about 15 milliseconds, so you can't wait less than that. The strange thing is that some systems achieve a perfect frame rate with this code and can achieve a range of frame rates perfectly. But other systems don't. I can remove the sleep function and then the other systems will achieve the frame rate, but then they hog the CPU.
I have read other articles about the sleep function:
Sleep function in c in windows. Does a function with better precision exist?
Sleep Function Error In C
What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:
Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
How do I achieve a generally precise frame rate without hogging the CPU from C# code?
You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that this globally affects the system and might increase the power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed the higher timer accuracy another running program probably did that.
And I wouldn't bother calculating the sleep time and just use sleep(1) in a loop.
But even with only 16 ms precision you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with few adaptations to your current code.
Or you can switch to code that makes the movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter so you don't get uselessly high framerates and unnecessarily consume power.
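For reference, timeBeginPeriod lives in winmm.dll and can be called from C# via P/Invoke (a sketch; pair it with timeEndPeriod on shutdown):

using System.Runtime.InteropServices;

static class WinMm
{
    [DllImport("winmm.dll")]
    internal static extern uint timeBeginPeriod(uint uMilliseconds);

    [DllImport("winmm.dll")]
    internal static extern uint timeEndPeriod(uint uMilliseconds);
}

// At startup: request 1 ms timer resolution (this affects the whole system)
WinMm.timeBeginPeriod(1);

// ... game loop ...

// On shutdown: release the request
WinMm.timeEndPeriod(1);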
Edit: Based on ideas and comments in this answer, the accepted answer was formulated.
Almost all game engines handle updates by passing in the time since the last frame and having movement etc. behave proportionally to that time; any other implementation is faulty.
Although CodeInChaos' suggestion answers your question, and might partially work in some scenarios, it's just plain bad practice.
Limiting the frame rate to your desired 60 fps only works when the computer can run faster than that. But the second a background task eats up some processor power (for example, the virus scanner starts) and your game drops below 60 fps, everything will go much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything runs at about half speed.
Things like sleep are not going to help, because they halt your process in favor of another process, and that process must in turn be halted. sleep(1 ms) just means that after 1 ms your process is returned to the queue, waiting for permission to run; it can therefore easily take 15 ms, depending on the other running processes and their priorities.
So my suggestion is that you add, as quickly as possible, some sort of "elapsedSeconds" variable that you use in all your update methods. The earlier you build it in, the less work it is, and it will also ensure compatibility with Mono.
If you have two parts of your engine, say a physics engine and a render engine, and you want to run these at different frame rates, then just check how much time has passed since the last frame and decide whether to update the physics engine or not; as long as you incorporate the time since the last update in your calculations, you will be fine.
Also, never draw more often than you're moving something. It's a waste to draw the exact same screen twice, so if the only way something can change on screen is by updating your physics engine, then keep render engine and physics engine updates in sync.
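A minimal sketch of that "elapsedSeconds" idea using Stopwatch (the class and method names here are illustrative, not from any particular engine):

using System.Diagnostics;

class GameLoop
{
    private readonly Stopwatch clock = Stopwatch.StartNew();
    private double lastTime;

    public void Tick() // call once per frame
    {
        double now = clock.Elapsed.TotalSeconds;
        float elapsedSeconds = (float)(now - lastTime);
        lastTime = now;

        Update(elapsedSeconds);
        Draw();
    }

    private void Update(float elapsedSeconds)
    {
        // movement proportional to elapsed time, e.g.:
        // position += velocity * elapsedSeconds;
    }

    private void Draw() { /* render */ }
}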
Based on ideas and comments on CodeInChaos' answer, this was the final code I arrived at. I originally edited that answer, but CodeInChaos suggested this be a separate answer.
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
    {
        int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
        if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    if (++fpsFrameCount > fps)
    {
        fpsFrameCount = 0;
        fpsStartTime = frame;
    }
}
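For completeness (these declarations are implied but not shown above), the method relies on two fields of the same class:

// Start of the current one-second measuring window, in Stopwatch ticks
private long fpsStartTime = System.Diagnostics.Stopwatch.GetTimestamp();

// Number of frames completed since fpsStartTime
private int fpsFrameCount = 0;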
