EmguCV/OpenCV QueryFrame slow/buffers - c#

We have an application where we get a message from an external system, take a picture, do some processing, and return something back to the external system. Doing some performance testing, I found two problems (they are somewhat related). I was hoping someone would be able to explain this to me.
1) Does _capture.QueryFrame() buffer frames?
What we see is that if there is a gap between the queries for two frames from a web camera, the second frame is often an older picture, not the scene at the moment QueryFrame was called.
We were able to mitigate this problem to some extent by discarding some frames, i.e. calling _capture.QueryFrame() 2-3 times and discarding the results.
2) The second issue is that when we timed different parts of the application, we found that clearing the buffer (calling QueryFrame() 2-3 times and not using the results) takes about 65ms, and then this line: Image<Bgr, Byte> source = _capture.QueryFrame() takes about 80ms. These two parts take the biggest chunk of processing time; our actual processing takes just about 20-30ms more.
Is there a faster way (a) to clear the buffer (b) to capture the frame?
If you have experience with OpenCV and know of something related, please do let me know.

I answered a similar question, System.TypeInitializationException using Emgu.CV in C#, and having tested the various possibilities for acquiring an up-to-date frame, I found the method below to be the best.
1) Yes. When you set up a Capture from a webcam, a ring buffer is created to store the images in; this allows efficient allocation of memory.
2) Yes, there is a faster way. Set your Capture device up globally, set it off recording, and call ProcessFrame to get an image from the buffer whenever it can. Then change your QueryFrame simply to copy whatever frame was just acquired. This should stop your problem of getting a previous frame: you will now have the most recent frame out of the buffer.
private Capture cap;
Image<Bgr, Byte> frame;
Image<Gray, Byte> grayFrame;

public CameraCapture()
{
    InitializeComponent();
    cap = new Capture();
    cap.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, height);
    cap.SetCaptureProperty(Emgu.CV.CvEnum.CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, width);
    Application.Idle += ProcessFrame; // grab frames whenever the UI is idle
}

private void ProcessFrame(object sender, EventArgs arg)
{
    frame = cap.QueryFrame();
    grayFrame = frame.Convert<Gray, Byte>();
}

public Image<Bgr, byte> QueryFrame()
{
    return frame.Copy(); // hand out a copy of the most recent frame
}
I hope this helps; if not, let me know and I'll try to tailor a solution to your requirements. Don't forget you can always run your acquisition on a different thread and invoke the new QueryFrame method.
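To sketch the different-thread variant: a minimal sketch only, using the same Emgu Capture/QueryFrame/Copy calls as above; the ThreadedCamera class name and its fields are mine, and I haven't run this against real hardware.

```csharp
using System.Threading;
using Emgu.CV;
using Emgu.CV.Structure;

public class ThreadedCamera
{
    private readonly Capture _cap = new Capture();
    private readonly object _sync = new object();
    private Image<Bgr, byte> _latest;
    private volatile bool _running = true;

    public ThreadedCamera()
    {
        // Drain the camera as fast as it delivers frames, on a background thread,
        // so the ring buffer never holds stale frames.
        new Thread(() =>
        {
            while (_running)
            {
                Image<Bgr, byte> frame = _cap.QueryFrame();
                if (frame == null)
                    continue;
                lock (_sync) { _latest = frame; }
            }
        }) { IsBackground = true }.Start();
    }

    // Always returns a copy of the most recent frame, never an older buffered one.
    public Image<Bgr, byte> QueryFrame()
    {
        lock (_sync) { return _latest == null ? null : _latest.Copy(); }
    }

    public void Stop() { _running = false; }
}
```

The lock guards the shared frame so the consumer never reads a half-written image; the copy means the caller can dispose or mutate its frame without touching the acquisition thread.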
Cheers
Chris

This could also be due to the refresh rate of the web camera you are using. My camera works at 60Hz, so I have a timer that captures a frame every 15 milliseconds.

Related

How can I create videos from images with transitions in C#?

Using C# code, I want to take a number of images, add some music and create a video.
I think I can best explain what I want in pseudo-code...:
var video = new Video(1080, 1920); //create a video 1080*1920px
video.AddFrame(@"C:\temp\frame01.jpg", 2000); //show frame for 2000ms
video.AddTransition(Transitions.Fade, 500); //fade from first to second frame for 500ms
video.AddFrame(@"C:\temp\frame02.jpg", 1000); //show frame for 1000ms
video.AddTransition(Transitions.Fade, 500); //fade from second to third frame for 500ms
video.AddFrame(@"C:\temp\frame03.jpg", 2000); //show frame for 2000ms
video.AddSound(@"C:\temp\mymusic.mp3"); //added from start of video
video.Save(@"C:\temp\MyAwesomeVideo.mp4", Format.MP4);
Does something like this exist?
I know there are a couple of older libraries, that can do some stuff with ffmpeg to create slideshows, but I looked at some of them, and they are insanely tricky to get working - and designed for something quite different.
Backstory:
I created a system for a cinema, which every week generates x number of images using movie posters, showtimes etc - and would like to take those images, turn them into a video which will be shared on social media.
Possibly check out AForge.NET.
I have used this previously, and the results were sufficient, especially considering the ease with which you can construct a video. It uses FFMPEG under the hood, so you don't need to concern yourself with extensive terminal commands.
A possible downside is that there is no immediately available option (to my knowledge) to add transitions or to keep a still image on screen for more than one frame, so you would need to implement those capabilities yourself.
Keep in mind that AForge.NET is licensed under GPLv3
Check out an example of the VideoFileWriter class
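As a rough sketch of how that could look: the helper names (Slideshow, FramesFor, Blend, Save) are mine, the writer calls are AForge's VideoFileWriter API. A still "lasts" N frames by being written N times, and a fade is a per-frame alpha blend you draw yourself with GDI+. Note this writer handles video only, so muxing in the MP3 track would still need a separate ffmpeg step.

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using AForge.Video.FFMPEG;

static class Slideshow
{
    // How many output frames a still needs in order to stay on screen for durationMs.
    public static int FramesFor(int durationMs, int fps)
    {
        return durationMs * fps / 1000;
    }

    // Draw 'next' over 'current' at the given opacity (0..1) to approximate a fade.
    public static Bitmap Blend(Bitmap current, Bitmap next, float opacity)
    {
        var result = new Bitmap(current);
        using (var g = Graphics.FromImage(result))
        using (var attrs = new ImageAttributes())
        {
            attrs.SetColorMatrix(new ColorMatrix { Matrix33 = opacity }); // alpha scale
            g.DrawImage(next, new Rectangle(0, 0, next.Width, next.Height),
                        0, 0, next.Width, next.Height, GraphicsUnit.Pixel, attrs);
        }
        return result;
    }

    // Two stills with a 500ms cross-fade, as in the pseudo-code above.
    public static void Save(string path, int fps, Bitmap a, Bitmap b)
    {
        using (var writer = new VideoFileWriter())
        {
            writer.Open(path, a.Width, a.Height, fps, VideoCodec.MPEG4);
            for (int i = 0; i < FramesFor(2000, fps); i++)   // hold first still 2s
                writer.WriteVideoFrame(a);
            int fade = FramesFor(500, fps);                  // 500ms cross-fade
            for (int i = 0; i < fade; i++)
                writer.WriteVideoFrame(Blend(a, b, (i + 1) / (float)fade));
            for (int i = 0; i < FramesFor(1000, fps); i++)   // hold second still 1s
                writer.WriteVideoFrame(b);
            writer.Close();
        }
    }
}
```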

Rendering video to file that may have an inconsistent frame rate

I'm getting raw video frames from a source (that can be considered a black box) at a rate that can be inconsistent. I'm trying to record the video feed to the disk. I'm doing so with AForge's VideoRecorder and am writing to an MP4 file.
However, the inconsistent rate at which I receive frames causes the video to appear sped up. It seems that I only have the ability to create video files that have a fixed frame rate, even though the source does not have a fixed frame rate.
This isn't an issue when rendering to the screen, as we can just render as fast as possible. I can't do this when writing to the file, since playing back the file would play at the fixed frame rate.
What solutions are there? The output does not have to be the same video format as long as there's some reasonable way to convert it later (which wouldn't have to be real time). The video feeds can be quite long, so I can't just store everything in memory and encode later.
My code currently looks along the lines of:
VideoFileWriter writer = new VideoFileWriter();
Stopwatch stopwatch = new Stopwatch();

public override void Start()
{
    writer.Open("output.mp4", videoWidth, videoHeight, frameRate, AForge.Video.FFMPEG.VideoCodec.MPEG4);
    stopwatch.Start();
}

public override void End()
{
    writer.Close();
}

public override void Draw(Frame frame)
{
    double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
    double timeBetweenFramesInSeconds = 1.0 / frameRate;
    if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds)
    {
        stopwatch.Restart();
        writer.WriteVideoFrame(frame.ToBitmap());
    }
}
Where our black box calls the Start, End, and Draw methods. The current check that I have in Draw prevents us from drawing too fast, but doesn't do anything to handle the case of drawing too slowly.
It turns out WriteVideoFrame is overloaded and one variant of the function is WriteVideoFrame(Bitmap frame, TimeSpan timestamp). As you can guess, the time stamp is used to make a frame appear at a certain time in the video.
Thus, by keeping track of the real time, we can set each frame to use the time that it should be in the video. Of course, the video quality will be worse if you can't render quickly enough, but this resolves the issue at hand.
Here's the code that I used for the Draw function:
// We can provide a frame offset so that each frame has a time that it's supposed to be
// seen at. This ensures that the video looks correct if the render rate is lower than
// the frame rate, since these times will be used (it'll be choppy, but at least it'll
// be paced correctly -- necessary so that sound won't go out of sync).
long currentTick = DateTime.Now.Ticks;
StartTick = StartTick ?? currentTick;
var frameOffset = new TimeSpan(currentTick - StartTick.Value);

// Figure out if we need to render this frame to the file (i.e., has enough time passed
// that this frame will be in the file?). This prevents us from going over the
// desired frame rate and improves performance (assuming that we can go over the frame
// rate).
double elapsedTimeInSeconds = stopwatch.ElapsedTicks / (double) Stopwatch.Frequency;
double timeBetweenFramesInSeconds = 1.0 / FrameRate;
if (elapsedTimeInSeconds >= timeBetweenFramesInSeconds)
{
    stopwatch.Restart();
    Writer.WriteVideoFrame(frame.ToBitmap(), frameOffset);
}

Where StartTick is a long? member of the object.
I have also faced this problem. In my case I'm mimicking a CCTV system using AForge. CCTV should be accurate about the times it records, so this was a big dilemma for me. Here is the workaround I used.
First, declare a TimeSpan which will be the base of the recording. Set it when you start the recording: its value is the time at which recording starts. For the sake of this answer, let's call it tmspStartRecording.
Then in a new frame event of your capture device:
var currentTime = DateTime.Now.TimeOfDay;
// this gets the time elapsed between the current time
// and the time you started your recording
TimeSpan elapse = currentTime - tmspStartRecording;
writer.WriteVideoFrame((Bitmap)image.Clone(), elapse);
Don't forget to set the value of the starting Timespan, OK?

Unexplainable performance issues with BitmapSource in WPF

I have a 3D world in my application, and data for this 3D world. The UI around the application is done with WPF, and so far it seems to be working OK. Now I am implementing the following functionality: if you click on the terrain in the 3D view, the textures used in that chunk of terrain are shown in a WPF control. The image data of the textures is compressed (S3TC), and I handle creation of the BGRA8 data on a separate thread. Once it's ready, I use the main window's dispatcher to do the WPF-related tasks. To show you this in code:
foreach (var pair in loadTasks)
{
    var img = pair.Item2;
    var loadInfo = TextureLoader.LoadToArgbImage(pair.Item1);
    if (loadInfo == null)
        continue;

    // img and loadInfo are captured by the closure; Action takes no parameters.
    EditorWindowController.Instance.WindowDispatcher.BeginInvoke(new Action(() =>
    {
        var watch = Stopwatch.StartNew();
        var source = BitmapSource.Create(loadInfo.Width, loadInfo.Height, 96, 96,
            PixelFormats.Bgra32, null, loadInfo.Layers[0], loadInfo.Width * 4);
        watch.Stop();
        img.Source = source;
        Log.Debug(watch.ElapsedMilliseconds);
    }));
}
While I can't argue with the visual output, there is a weird performance issue. As you can see, I added a stopwatch to check where the time is consumed, and I found the culprit: BitmapSource.Create.
Typically I have 5-6 elements in loadTasks and the images are 256x256 pixels. Interestingly, the first invocation shows 280-285ms for BitmapSource.Create; the next 4-5 are all below 1ms. This happens consistently every time I click the terrain and the loop is started. The only way to avoid the penalty on the first element is to click the terrain constantly; as soon as I don't click the terrain (and therefore do not invoke the code above) for 1-2 seconds, the next call to BitmapSource.Create gets the 280ms penalty again.
Since anything above 5ms is far beyond any reasonable or acceptable time to create a 256x256 bitmap (my S3TC decompression does all 10(!) mip layers in less than 2ms), I guess there has to be something else going on here?
FYI: All properties of loadInfo are static properties and do not perform any calculations you can't see in the code.

Speeding up emguCV methods

I have a question about speeding up a couple of EmguCV calls. Currently I have a capture card that takes in a camera at 1920x1080 @ 30Hz. Using DirectShow with a sample grabber, I capture each frame and display it on a form. I have written an image stabilizer, but the fastest I can run it is about 20Hz.
The first thing I do in my stabilizer is scale the 1920x1080 frame down to 640x480, because it makes the feature tracking much faster.
Then I use goodFeaturesToTrack
previousFrameGray.GoodFeaturesToTrack(sampleSize, sampleQuality, minimumDistance, blockSize)[0]
which takes about 12-15ms.
The next thing I do is an optical flow calculation using this
OpticalFlow.PyrLK(previousFrameGray, frame_gray, prev_corner.ToArray(), new Size(15, 15), 5, new MCvTermCriteria(5), out temp, out status, out err);
and that takes about 15-18ms.
The last time consuming method I call is the warpAffine function
Image<Bgr, byte> warped_frame = frame.WarpAffine(T, interpMethod, Emgu.CV.CvEnum.WARP.CV_WARP_DEFAULT, new Bgr(BackgroundColor));
this takes about 10-12ms.
The rest of the calculations, image scaling and what not take a total of around 7-8ms.
So the total time for a frame calculation is about 48ms or about 21Hz.
Somehow I need to get the total time under 33ms.
So now for my questions.
First: if I switch to using the GPU for goodFeatures and opticalFlow, will that provide the necessary increase in speed, if any?
Second: Are there any other methods besides using the GPU that could speed up these calculations?
Well, I finally converted the functions to their GPU counterparts and got the speed increase I was looking for: I went from 48ms down to 22ms.
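For anyone profiling a similar pipeline, per-stage numbers like the 12-15ms / 15-18ms / 10-12ms above can be collected with a small Stopwatch wrapper. This is a sketch: StageTimer and HzForMs are my names, not Emgu API.

```csharp
using System;
using System.Diagnostics;

class StageTimer
{
    private readonly Stopwatch _sw = new Stopwatch();

    // Run one pipeline stage, print its cost in ms, and pass its result through.
    public T Time<T>(string name, Func<T> stage)
    {
        _sw.Restart();
        T result = stage();
        _sw.Stop();
        Console.WriteLine("{0}: {1} ms", name, _sw.ElapsedMilliseconds);
        return result;
    }

    // Achievable frame rate for a given per-frame cost:
    // 48 ms is roughly 21 Hz; hitting 30 Hz needs the total under ~33 ms.
    public static double HzForMs(double ms)
    {
        return 1000.0 / ms;
    }
}
```

Usage would look like `var corners = timer.Time("GoodFeatures", () => previousFrameGray.GoodFeaturesToTrack(sampleSize, sampleQuality, minimumDistance, blockSize)[0]);`, repeated for each stage, so you know which conversion to the GPU pays off most.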

Does anyone know a way to capture webcam video at a constant framerate?

OK, so I have been having a bit of a tough time with webcam capture, and am in need of help to find a way to capture the video at a consistent frame rate.
I have been using AForge.AVIWriter and AForge.VideoFileWriter, but to no avail, and have also typed every related phrase I can think of into Google.
I have had a look at DirectShowLib, but have yet to find it any more accurate.
The video must have a minimum frame rate of 25fps, it is also to be shown in sync with other data which is collected at the same time.
I have also tried an infinite loop:
for (; ; )
{
if (recvid == false)
{
break;
}
if (writer.IsOpen)
{
Bitmap image = (Bitmap)videoSourcePlayer1.GetCurrentVideoFrame();
if (image != null)
{
writer.WriteVideoFrame(image);
}
Thread.Sleep(40);
}
}
Even though this is more accurate for timing, the user can see that the fps changes when they watch the video and view data at the same time.
Any pointers or tips would be greatly appreciated, as I cannot think of a way to go from here.
Two main issues that I can see:
1) Is writer.WriteVideoFrame() happening on a separate thread? If not, it will take time, and hence the timing might not be accurate.
2) Thread.Sleep(40) means sleep for at least 40 ms, not exactly 40 ms. To get better results, reduce the wait time to 5 ms and sleep in a loop; use the system time to figure out how long you have actually slept, and then capture a frame.
Hope this helps.
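The short-sleep idea can be sketched like this. PacedRecorder and FramesDue are my names, and the capture/write delegates stand in for the real grab and writer calls; a sketch, not tested against capture hardware.

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PacedRecorder
{
    const int Fps = 25;

    // Which frame number should have been written after elapsedMs of recording.
    // Deriving this from total elapsed time (instead of one Sleep(40) per frame)
    // means small sleep overruns don't accumulate into drift.
    public static long FramesDue(long elapsedMs)
    {
        return elapsedMs * Fps / 1000;
    }

    public void Record(Func<object> captureFrame, Action<object> writeFrame,
                       Func<bool> keepGoing)
    {
        var sw = Stopwatch.StartNew();
        long written = 0;
        while (keepGoing())
        {
            if (FramesDue(sw.ElapsedMilliseconds) > written)
            {
                writeFrame(captureFrame()); // grab as close to the slot as possible
                written++;
            }
            else
            {
                Thread.Sleep(5); // short slice keeps the loop responsive
            }
        }
    }
}
```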
With most web cameras (except maybe rare exceptions and higher-end cameras that offer fine control over the capture process) you don't have enough control over the camera frame rate. The camera will capture a stream of frames at its maximal frame rate for the given mode of operation, capped by resolution and data bandwidth, with a possibly lower rate in low-light conditions.
No Thread.Sleep is going to help you there, because it is way too slow and unresponsive. To capture 25 fps, the hardware needs to run smoothly, without interruptions and without explicit "capture next frame now" instructions: it pushes new data onto one end of the queue while you pop captured frames from the other end. You typically have a lag of a few video frames even with decent hardware.