I'm getting raw video frames from a source (which can be treated as a black box) at a rate that can be inconsistent. I'm trying to record the video feed to disk. I'm doing so with AForge's VideoFileWriter and am writing to an MP4 file.
However, the inconsistent rate at which I receive frames causes the video to appear sped up. It seems that I only have the ability to create video files that have a fixed frame rate, even though the source does not have a fixed frame rate.
This isn't an issue when rendering to the screen, as we can just render as fast as possible. I can't do this when writing to the file, since playing back the file would play at the fixed frame rate.
What solutions are there? The output does not have to be the same video format as long as there's some reasonable way to convert it later (which wouldn't have to be real time). The video feeds can be quite long, so I can't just store everything in memory and encode later.
My code currently looks along the lines of:
VideoFileWriter writer = new VideoFileWriter();
Stopwatch stopwatch = new Stopwatch();

public override void Start() {
    writer.Open("output.mp4", videoWidth, videoHeight, frameRate, AForge.Video.FFMPEG.VideoCodec.MPEG4);
    stopwatch.Start();
}

public override void End() {
    writer.Close();
}

public override void Draw(Frame frame) {
    double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
    double timeBetweenFramesInSeconds = 1.0 / FrameRate;
    if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds) {
        stopwatch.Restart();
        writer.WriteVideoFrame(frame.ToBitmap());
    }
}
Where our black box calls the Start, End, and Draw methods. The current check that I have in Draw prevents us from drawing too fast, but doesn't do anything to handle the case of drawing too slowly.
It turns out WriteVideoFrame is overloaded and one variant of the function is WriteVideoFrame(Bitmap frame, TimeSpan timestamp). As you can guess, the time stamp is used to make a frame appear at a certain time in the video.
Thus, by keeping track of the real time, we can set each frame to use the time that it should be in the video. Of course, the video quality will be worse if you can't render quickly enough, but this resolves the issue at hand.
Here's the code that I used for the Draw function:
// We can provide a frame offset so that each frame has a time that it's supposed to be
// seen at. This ensures that the video looks correct if the render rate is lower than
// the frame rate, since these times will be used (it'll be choppy, but at least it'll
// be paced correctly -- necessary so that sound won't go out of sync).
long currentTick = DateTime.Now.Ticks;
StartTick = StartTick ?? currentTick;
var frameOffset = new TimeSpan(currentTick - StartTick.Value);
// Figure out if we need to render this frame to the file (ie, has enough time passed
// that this frame will be in the file?). This prevents us from going over the
// desired frame rate and improves performance (assuming that we can go over the frame
// rate).
double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
double timeBetweenFramesInSeconds = 1.0 / FrameRate;
if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds)
{
    stopwatch.Restart();
    Writer.WriteVideoFrame(frame.ToBitmap(), frameOffset);
}
Where StartTick is a long? member of the object.
I have also faced this problem. In my case I'm mimicking a CCTV system using AForge. CCTV recording needs to be accurate about the times at which it records, so this was a big dilemma for me. Here is the workaround I used.
First, declare a TimeSpan that will serve as the baseline of the recording. You need to set it at the moment you start recording; its value is the time at which the recording begins. For the sake of this answer, let's call it tmspStartRecording.
Then, in the new-frame event handler of your capture device:
var currentTime = DateTime.Now.TimeOfDay;
// this will get the elapse time between
// the current time from the time you start your recording
TimeSpan elapse = currentTime - tmspStartRecording;
writer.WriteVideoFrame((Bitmap)image.Clone(), elapse);
Don't forget to set the value of the starting Timespan, OK?
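For reference, here is a rough sketch of how the whole flow might fit together, assuming an AForge.Video.DirectShow.VideoCaptureDevice as the source and a VideoFileWriter that has already been opened elsewhere; the field and handler names here are just placeholders:

// Sketch only: assumes 'writer' is a VideoFileWriter that has already been opened,
// and 'videoSource' is an AForge.Video.DirectShow.VideoCaptureDevice.
private TimeSpan tmspStartRecording;

private void StartRecording()
{
    tmspStartRecording = DateTime.Now.TimeOfDay; // baseline for all frame offsets
    videoSource.NewFrame += VideoSource_NewFrame;
    videoSource.Start();
}

private void VideoSource_NewFrame(object sender, AForge.Video.NewFrameEventArgs eventArgs)
{
    // Offset of this frame relative to the start of the recording.
    TimeSpan elapse = DateTime.Now.TimeOfDay - tmspStartRecording;
    using (var bitmap = (Bitmap)eventArgs.Frame.Clone())
    {
        writer.WriteVideoFrame(bitmap, elapse);
    }
}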
Related
I just started learning game programming, and I chose to use a fixed-timestep game loop since it's straightforward and easier to understand.
Basically like this:
while (game_is_running) {
    input();
    update();
    render();
}
When VSync is enabled, it runs at the speed of my display, but I want to cap it at 60 FPS even on higher refresh rate monitors, so I added frame skipping and an FPS limiter:
static void Main()
{
    long frame_count = 0;
    long tick_frame_end;
    Stopwatch stopwatch = new Stopwatch();

    WaitForVSync();
    stopwatch.Start(); // hoping to get in sync with display

    while (game_is_running) {
        frame_count++;
        tick_frame_end = GetEndTick(frame_count);

        if (stopwatch.ElapsedTicks > tick_frame_end) { // falling behind
            continue; // skip this frame
        }

        input();
        update(tick_frame_end);
        // render(); // render here when VSync is enabled

        while (stopwatch.ElapsedTicks < tick_frame_end) {
            // Busy waiting, in real code I used Spinwait to avoid 100% cpu usage
        }

        render(); // I moved it here because I turned off VSync and I want render to happen at a constant rate
    }
}

static long GetEndTick(long frame_count) {
    return (long)((double)frame_count * Stopwatch.Frequency / 60); // Target FPS: 60
}
When VSync is off, it can run at a steady 30, 60, 120 (or any value in between) FPS. I was happy with the result, but as soon as I turned VSync back on (and set the target to 60 FPS), I noticed a lot of frame skipping.
I can run a steady 120 FPS with VSync off without a single frame dropped, yet when running at 60 FPS with VSync on I get a notable number of dropped frames. It took me quite a while to nail down the cause of this:
My monitor, which I thought was running at 60Hz, is actually running at about 59.920Hz (tested with https://www.displayhz.com/).
As time goes on, my internal frame time and the monitor's frame time become more and more out of sync, causing many unexpected frame drops. This completely breaks my GetEndTick() function, which basically breaks everything: the frame-skipping logic, the FPS limiter, and most importantly update(tick_frame_end) is broken too.
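(To put numbers on it: 60 - 59.92 = 0.08 frames of drift per second, so my internal frame clock and the display drift apart by a full frame roughly every 1 / 0.08 = 12.5 seconds.)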
So the questions are:
1. How can I get the end time of current frame?
In my code above, the end time is calculated on the assumption that there are exactly 60 evenly spaced frames in every second. If that's true, my internal frame time may not align perfectly with the display frame time, but it would still stay in sync, which is useful enough.
(Everything is calculated based on the timestamp of the frame end; the idea is that I generate the image for the time of the frame end, which is just the moment before the next frame gets displayed on screen.)
I tried recording the time at the beginning of the loop (or right after the last frame's render) and adding one frame's worth of time to it. That doesn't work, as the recorded time isn't reliable due to system load, background processes, and many other things. I'd also need a new method of determining which frame is the current frame.
2. If (1.) isn't possible, how can I fix my game loop?
I have read the infamous Fix Your Timestep! by Glenn Fiedler, but it's about game physics, not graphics, and doesn't really answer my first question.
Trying to make accurate replay system in unity and c#
Hi all,
I'm working on a racing game and decided to add a replay system so I can have a "ghost car" too. Initially I recorded data only on certain events, like key presses, but only by recording the data every frame did I manage a smooth replay. That's still OK, since the file isn't huge and the replay works. The trouble is that there is always a slight variation in time, only about 0.1 seconds, or 0.2 at most. I have a list of keyframes, and at each position I record the time at which it should be shown. I think the problem is that because the FPS varies, the same time marks aren't hit on every run, so the winning frame's time isn't always rendered and the winning frame happens in the next update, slightly after it should be shown. I'm using C# and Unity, just in case, but I think the issue is mostly independent of that. Thanks a lot for any clue; I've been stuck on this for some time now.
It sounds like you're doing frame-by-frame replay which, as you've discovered, requires your frames to play back with the same delay as the recording. In a game-render loop, that's not guaranteed to happen.
As you record the car states (position, heading, etc) per frame, you need to also record a timestamp (in this case, accumulating Time.deltaTime from race start should suffice).
When you play it back, find the current timestamp and interpolate (ie, Lerp) the car's state from the recorded bounding frames.
Hints for frame interpolation:
class Snapshot {
    public float Timestamp;
    public Matrix4x4 Transform; // As an example. Put more data here.
}

private int PrevIndex = 0;

// Snapshots must be sorted by Timestamp (recording them in order guarantees this).
private List<Snapshot> Snapshots = new List<Snapshot>();

private float GetLerpFactor(float currentTimestamp) {
    if (PrevIndex == Snapshots.Count - 1)
        return 0; // Reached end of Snapshots

    // Move the 'frame' forward until currentTimestamp falls between
    // Snapshots[PrevIndex] and Snapshots[PrevIndex + 1].
    while (PrevIndex < Snapshots.Count - 1 && currentTimestamp >= Snapshots[PrevIndex + 1].Timestamp)
        PrevIndex++;

    if (PrevIndex == Snapshots.Count - 1)
        return 0; // Ran past the last snapshot

    var currentDelta = Mathf.Max(0f, currentTimestamp - Snapshots[PrevIndex].Timestamp);
    var fullDelta = Snapshots[PrevIndex + 1].Timestamp - Snapshots[PrevIndex].Timestamp;
    return currentDelta / fullDelta;
}
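As a hypothetical example of how the factor might be applied during playback (this assumes the snapshots also store a Position and Rotation rather than only the matrix, and that ghostCar is the replayed car's Transform):

// Sketch only: the Position/Rotation fields on Snapshot and 'ghostCar' are assumptions.
private float playbackTime;

void Update() {
    playbackTime += Time.deltaTime;

    float t = GetLerpFactor(playbackTime);
    Snapshot a = Snapshots[PrevIndex];
    Snapshot b = Snapshots[Mathf.Min(PrevIndex + 1, Snapshots.Count - 1)];

    // Interpolate the ghost car between the two bounding keyframes.
    ghostCar.position = Vector3.Lerp(a.Position, b.Position, t);
    ghostCar.rotation = Quaternion.Slerp(a.Rotation, b.Rotation, t);
}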
Unless for some reason you need to interact with the "ghost" car (collisions, for example), record the final data on position/speed/direction at frequent enough moments of the original run and interpolate that during the new simulation. I would not record raw inputs but rather the resulting changes (like gear shifts), unless you need to measure/show how fast the user reacted to something.
If you really want to replay the same inputs, you'd have to run two separate simulations at the same time so that the physics and timing of the "real" version don't affect the "ghost" one, and most likely you'll still have to interpolate the output of the "ghost" simulation to align it with the real one (unless you use fixed time steps).
I have written a C# program that captures video from a specialized camera through the camera manufacturer's proprietary API. I am able to write captured frames to disk through a FileStream object, but I am at the mercy of the camera and disk I/O when it comes to the framerate.
What is the best way to make sure I write to disk at the required framerate? Is there a certain algorithm available that would compute the real-time average framerate and then add/discard frames to maintain a certain desired framerate?
It's difficult to tell much because of the lack of information.
What's the format? Is there compression?
How is the camera API sending the frames? Are they timed, so that the camera sends the frame rate you asked for? If so, you are really dealing with I/O speed.
If you need high quality and are writing without compression, you could experiment with some lossless compression algorithms to balance processing against drive I/O. You could gain some speed if the bottleneck is drive I/O.
For dropping frames, there are ways to implement that. Normally frames carry a timestamp; you should key off that and discard frames that arrive too close to the previous one.
Let's say you want 60 fps, so the spacing between frames is 1000/60, about 16.7 ms. If the frame you get has a timestamp only 13 ms after the last one, you can discard it and not write it to disk.
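A minimal sketch of that kind of timestamp gate in C#, assuming you track the time of the last write yourself; OnFrameCaptured and WriteFrameToDisk are placeholder names, not part of any camera API:

using System.Diagnostics;

// Sketch: drop frames that arrive sooner than the target interval allows.
private readonly double targetIntervalMs = 1000.0 / 60.0; // about 16.7 ms for 60 fps
private readonly Stopwatch sinceLastWrite = Stopwatch.StartNew();

private void OnFrameCaptured(byte[] frameData)
{
    if (sinceLastWrite.Elapsed.TotalMilliseconds < targetIntervalMs)
        return; // too soon after the previous frame; discard it

    sinceLastWrite.Restart();
    WriteFrameToDisk(frameData); // placeholder for the actual FileStream write
}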
In a perfect world, you would check the first second which gives you a number of frames per second that your system supports.
Say your camera is capturing 60 fps but your computer can really only handle 45 fps. What you have to do is skip a total of 15 frames per second in order to keep up. Up to here, that's easy enough.
The math in this basic case is:
60 / 15 = 4
So skip one frame every four incoming frames like so (keep frames marked with an X, skip the others):
000000000011111111112222222222333333333344444444445555555555
012345678901234567890123456789012345678901234567890123456789
XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX
Of course, it is likely that you do not get such a clear cut case. The math remains the same, it's just that you'll end up skipping frames at what looks like varying points in time:
// simple algorithm based on fps from source & destination
// which fails to keep up long term
double camera_fps = 60.0;
double io_fps = camera_fps;

for(;;)
{
    double frames_step = camera_fps / io_fps;
    double start_time = time(); // assuming you get at least ms precision
    double frame = 0.0;

    for(int i(0); i < camera_fps; ++i)
    {
        buffer frame_data = capture_frame();
        if(i == (int)frame) // <- part of the magic happens here
        {
            save_frame(frame_data);
        }
        frame += frames_step;
    }

    double end_time = time();
    double new_fps = camera_fps / (end_time - start_time);
    if(new_fps < io_fps)
    {
        io_fps = new_fps;
    }
}
As this algorithm shows, you want to adjust your fps as time goes by. During the first second, it is likely to give you an invalid result for all sorts of reasons. For example, writing to disk is likely going to be buffered, so it goes really fast and it may look as if you could support 60 fps. Later, the fps will slow down and you may find that your maximum I/O speed is 57 fps instead.
One issue with this basic algorithm is that you can easily reduce the number of frames to make it fit within 1 second, but it will only ever reduce the fps (i.e. I update io_fps only when new_fps is smaller). If you find the correct number for io_fps, you're fine. If you went too far, you're dropping frames when you shouldn't. This is because when (end_time - start_time) comes out at exactly 1 second (meaning you did not spend too much time capturing and saving the incoming frames), new_fps simply equals camera_fps, so io_fps never gets adjusted back up.
The solution to this issue is to time your save_frame() function. If the total time spent saving within the inner loop is less than 1 second, then you can increase the number of frames you save. This will work better if you can use two threads: one thread reads frames and pushes them onto an in-memory FIFO, and the other thread retrieves frames from that FIFO and saves them. This means the time it takes to capture one frame doesn't affect (as much) the time it takes to save a frame.
bool stop = false;

// capture
for(;;)
{
    buffer frame_data = capture_frame();
    fifo.push(frame_data);
    if(stop)
    {
        break;
    }
}

// save to disk
double expected_time_to_save_one_frame = 1.0 / camera_fps;
double next_start_time = time();
for(;;)
{
    buffer frame_data = fifo.pop();
    double start_time = next_start_time;
    save_frame(frame_data);
    next_start_time = time();
    double end_time = time();
    if(end_time - start_time > expected_time_to_save_one_frame)
    {
        fifo.pop(); // we're falling behind, skip one frame
    }
}
This is just pseudo code and I may have made a few mistakes. It also expects that the gap between start_time and end_time is not going to be more than one frame (i.e. if you have a case where the capture is 60 fps and the I/O supports less than 30 fps, you often would have to skip two frames in a row).
For people who are compressing frames, keep in mind that the timing of one call to save_frame() will vary a lot. At times, the frame can easily be compressed (no horizontal movement) and at times it's really slow. This is where such dynamism can help tremendously. Your disk I/O, assuming not much else occurs while you are recording, should not vary much once you reach the maximum speed supported.
IMPORTANT NOTE: these algorithms suppose that the camera fps is fixed; this is likely not quite true; you can also time the camera and adjust the camera_fps parameter accordingly (which means the expected_time_to_save_one_frame variable can also vary over time).
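If you are working in C#, the two-thread split can be sketched with BlockingCollection, which gives you the blocking FIFO for free; CaptureFrame, SaveFrame and the stop flag below are placeholders for whatever your camera API and disk writer actually provide:

using System.Collections.Concurrent;
using System.Threading.Tasks;

bool stopRequested = false; // placeholder stop flag, set from elsewhere

// Bounded queue: if the writer falls behind, Add() blocks the capture side
// instead of letting memory grow without limit.
var fifo = new BlockingCollection<byte[]>(boundedCapacity: 120);

// Producer: capture thread.
var captureTask = Task.Run(() =>
{
    while (!stopRequested)
    {
        byte[] frameData = CaptureFrame(); // placeholder for the camera API call
        fifo.Add(frameData);
    }
    fifo.CompleteAdding();
});

// Consumer: disk-writer thread.
var saveTask = Task.Run(() =>
{
    foreach (byte[] frameData in fifo.GetConsumingEnumerable())
    {
        SaveFrame(frameData); // placeholder for the FileStream write
    }
});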
I work on a 2D game engine that has a function called LimitFrameRate to ensure that the game does not run so fast that a user cannot play the game. In this game engine the speed of the game is tied to the frame rate. So generally one wants to limit the frame rate to about 60 fps. The code of this function is relatively simple: calculate the amount of time remaining before we should start work on the next frame, convert that to milliseconds, sleep for that number of milliseconds (which may be 0), repeat until it's exactly the right time, then exit. Here's the code:
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - previousFrame) * fps < freq)
    {
        int sleepTime = (int)((previousFrame * fps + freq - frame * fps) * 1000 / (freq * fps));
        System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    previousFrame = frame;
}
Of course I have found that due to the imprecise nature of the sleep function on some systems, the frame rate comes out quite differently than expected. The precision of the sleep function is only about 15 milliseconds, so you can't wait less than that. The strange thing is that some systems achieve a perfect frame rate with this code and can achieve a range of frame rates perfectly. But other systems don't. I can remove the sleep function and then the other systems will achieve the frame rate, but then they hog the CPU.
I have read other articles about the sleep function:
Sleep function in c in windows. Does a function with better precision exist?
Sleep Function Error In C
What's a coder to do? I'm not asking for a guaranteed frame rate (or guaranteed sleep time, in other words), just a general behavior. I would like to be able to sleep (for example) 7 milliseconds to yield some CPU to the OS and have it generally return control in 7 milliseconds or less (so long as it gets some of its CPU time back), and if it takes more sometimes, that's OK. So my questions are as follows:
Why does sleep work perfectly and precisely in some Windows environments and not in others? (Is there some way to get the same behavior in all environments?)
How do I achieve a generally precise frame rate without hogging the CPU from C# code?
You can use timeBeginPeriod to increase the timer/sleep accuracy. Note that this globally affects the system and might increase the power consumption.
You can call timeBeginPeriod(1) at the beginning of your program. On the systems where you observed the higher timer accuracy another running program probably did that.
And I wouldn't bother calculating the sleep time; I'd just use Sleep(1) in a loop.
But even with only 16 ms precision you can write your code so that the error averages out over time. That's what I'd do. It isn't hard to code and should work with few adaptations to your current code.
Or you can switch to code that makes movement proportional to the elapsed time. But even in this case you should implement a frame-rate limiter so you don't get uselessly high frame rates and unnecessarily consume power.
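A rough sketch of the timeBeginPeriod idea from C#; the winmm.dll P/Invoke declarations are the standard way to reach that API, while the usage comments are only an illustration:

using System.Runtime.InteropServices;

static class HighResTimer
{
    // Requests a 1 ms timer/scheduler resolution for the whole system.
    [DllImport("winmm.dll")]
    internal static extern uint timeBeginPeriod(uint uPeriod);

    // Always pair with timeEndPeriod to restore the default when you're done.
    [DllImport("winmm.dll")]
    internal static extern uint timeEndPeriod(uint uPeriod);
}

// Usage sketch:
// HighResTimer.timeBeginPeriod(1);
// ... game loop, where Thread.Sleep(1) now wakes up roughly every millisecond ...
// HighResTimer.timeEndPeriod(1);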
Edit: Based on ideas and comments in this answer, the accepted answer was formulated.
Almost all game engines handle updates by passing in the time since the last frame and having movement (etc.) behave proportionally to that time; any other implementation is faulty.
Although CodeInChaos' suggestion answers your question, and might work partially in some scenarios, it's just plain bad practice.
Limiting the frame rate to your desired 60 fps will only work when the computer is running fast enough. However, the second a background task eats up some processor power (for example, the virus scanner starts) and your game drops below 60 fps, everything will go much slower. Even though your game could be perfectly playable at 35 fps, this makes it impossible to play because everything goes half as fast.
Things like sleep are not going to help, because they halt your process in favor of another process, and that process must first be halted in turn; Sleep(1ms) just means that after 1 ms your process is returned to the queue waiting for permission to run, so Sleep(1ms) can easily take 15 ms depending on the other running processes and their priorities.
So my suggestion is that you add some sort of "elapsedSeconds" variable as quickly as possible and use it in all your update methods; the earlier you build it in, the less work it is. This will also help ensure compatibility with Mono.
If you have two parts of your engine, say a physics engine and a render engine, and you want to run them at different frame rates, then just check how much time has passed since the last frame and decide whether to update the physics engine or not. As long as you incorporate the time since the last update in your calculations, you will be fine. A sketch of this follows below.
Also, never draw more often than you move something. It's a waste to draw the exact same screen twice, so if the only way something can change on screen is through a physics update, keep the render engine and physics engine updates in sync.
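A minimal sketch of that elapsed-time approach, with the physics step fixed and rendering free-running; UpdatePhysics, Render and gameIsRunning are placeholders for whatever the engine actually exposes:

var clock = System.Diagnostics.Stopwatch.StartNew();
double previousSeconds = 0.0;
double physicsAccumulator = 0.0;
const double physicsStep = 1.0 / 60.0; // fixed physics rate; rendering is free-running

while (gameIsRunning)
{
    double nowSeconds = clock.Elapsed.TotalSeconds;
    double elapsedSeconds = nowSeconds - previousSeconds;
    previousSeconds = nowSeconds;

    // Run physics at a fixed rate regardless of how fast we render.
    physicsAccumulator += elapsedSeconds;
    while (physicsAccumulator >= physicsStep)
    {
        UpdatePhysics(physicsStep); // movement scaled by the time step
        physicsAccumulator -= physicsStep;
    }

    Render(); // draw as often as we like (or run a frame-rate limiter here)
}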
Based on ideas and comments on CodeInChaos' answer, this was the final code I arrived at. I originally edited that answer, but CodeInChaos suggested this be a separate answer.
public virtual void LimitFrameRate(int fps)
{
    long freq;
    long frame;
    freq = System.Diagnostics.Stopwatch.Frequency;
    frame = System.Diagnostics.Stopwatch.GetTimestamp();
    while ((frame - fpsStartTime) * fps < freq * fpsFrameCount)
    {
        int sleepTime = (int)((fpsStartTime * fps + freq * fpsFrameCount - frame * fps) * 1000 / (freq * fps));
        if (sleepTime > 0) System.Threading.Thread.Sleep(sleepTime);
        frame = System.Diagnostics.Stopwatch.GetTimestamp();
    }
    if (++fpsFrameCount > fps)
    {
        fpsFrameCount = 0;
        fpsStartTime = frame;
    }
}
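For reference, a hypothetical sketch of how this sits in the main loop; fpsFrameCount and fpsStartTime are assumed to be long fields of the same class:

// Assumed fields used by LimitFrameRate above.
private long fpsFrameCount = 0;
private long fpsStartTime = System.Diagnostics.Stopwatch.GetTimestamp();

public void Run()
{
    while (gameIsRunning)   // placeholder loop condition
    {
        Update();           // game logic (tied to the frame rate in this engine)
        Draw();             // render the frame
        LimitFrameRate(60); // sleep away whatever time is left in this frame
    }
}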
We have an application, where we get a message from an external system and then we take a picture, do some processing and return something back to the external system. Doing some performance testing, I found two problems (they are somewhat related). I was hoping someone will be able to explain this to me.
1) Does _capture.QueryFrame() buffer frames?
What we see is that if there is a gap between the queries for two frames from the web camera, the second frame is often an older picture and not the one from when QueryFrame was called.
We were able to mitigate this problem to some extent by discarding some frames, i.e. calling _capture.QueryFrame() 2-3 times and discarding the results.
2) The second issue is when we timed different parts of the application, we found that clearing the buffer (calling QueryFrame() 2-3 times and not using the results) takes about 65ms and then this line: Image<Bgr, Byte> source = _capture.QueryFrame() takes about 80ms. These two parts take the biggest chunk of processing time, our actual processing takes just about 20-30ms more.
Is there a faster way (a) to clear the buffer (b) to capture the frame?
If you have experience with OpenCV and know of something related, please do let me know.
I answered a similar question, System.TypeInitializationException using Emgu.CV in C#, and having tested the various possibilities for acquiring an up-to-date frame, I found the method below to be the best.
1) Yes, when you set up a Capture from a webcam, a ring buffer is created to store the images in; this allows efficient allocation of memory.
2) Yes, there is a faster way. Set your Capture device up globally, set it recording, and have ProcessFrame pull an image from the buffer whenever it can. Now change your QueryFrame simply to copy whatever frame it has just acquired. This should stop your problem of getting the previous frame, and you will now have the most recent frame out of the buffer.
private Capture cap;
Image<Bgr, Byte> frame;
Image<Gray, Byte> grayFrame;

public CameraCapture()
{
    InitializeComponent();
    cap = new Capture();
    cap.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, height);
    cap.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, width);
    Application.Idle += ProcessFrame;
}

private void ProcessFrame(object sender, EventArgs arg)
{
    frame = cap.QueryFrame();
    grayFrame = frame.Convert<Gray, Byte>();
}

public Image<Bgr, byte> QueryFrame()
{
    return frame.Copy();
}
I hope this helps; if not, let me know and I'll try to tailor a solution to your requirements. Don't forget you can always have your acquisition running on a different thread and invoke the new QueryFrame method.
Cheers
Chris
This could also be due to the refresh rate of the web camera you are using. My camera works at 60Hz, so I have a timer that captures a frame every 15 milliseconds.
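A hypothetical sketch of that timer approach, assuming an Emgu CV Capture object and a polling interval matched to the camera's refresh rate:

using System.Threading;
using Emgu.CV;
using Emgu.CV.Structure;

private Capture cap = new Capture();
private Timer captureTimer;
private Image<Bgr, byte> latestFrame;

private void StartCapturing()
{
    // Poll the camera at roughly its own refresh rate (about 15-16 ms for 60Hz),
    // always overwriting with the newest frame.
    captureTimer = new Timer(_ => latestFrame = cap.QueryFrame(), null, 0, 15);
}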