Comparing Frames of a Live Feed - C#

My Scenario
I have a camera focused on a white screen, which is taking a live feed and displaying that feed in a PictureBox by virtue of a FrameReceived event.
I need to kick off a process to crop the image if something is inserted between the camera and the screen.
This process needs to start when the image first changes so I need to compare one frame with another to see if anything has changed.
My Efforts
I have tried hashing the images and comparing them, which doesn't work, as the frames are never exactly the same.
I have tried looping through each pixel, comparing different values such as brightness, hue, etc., but this is too slow.
I have tried looping through with a subsample, but it is either too slow or too unreliable.
I even tried what I like to call the "Twisted Pair Solution", where I inverted one frame, added the two together, and checked the result, but this was far too complex and slow.
My Environment
Visual Studio 2012 (2010 is available if necessary)
Ueye camera
C#
The images are of type System.Drawing.Bitmap
Notes
The biggest problem seems to be that getting a reliable result takes longer than the interval between frames at a reasonable frame rate. The calculation isn't finished before a new frame comes in, which means the variable I use to store the previous image is overwritten before it stops being used, and thread after thread builds up, causing a whole lotta shakin'.
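One way to tame that pile-up, assuming a typical FrameReceived event that hands you a Bitmap: drop any frame that arrives while a comparison is still in flight, instead of spawning a new thread per frame. A minimal sketch (FramesDiffer and StartCropProcess are hypothetical placeholders for your own comparison and crop logic):

    using System.Drawing;
    using System.Threading;

    class FrameChangeDetector
    {
        private Bitmap _previous;   // last frame that was actually analyzed
        private int _busy;          // 0 = idle, 1 = a comparison is in flight

        // Hook this up to the camera's FrameReceived event.
        public void OnFrameReceived(Bitmap frame)
        {
            // If a comparison is already running, skip this frame entirely,
            // so work can never pile up faster than it finishes.
            if (Interlocked.CompareExchange(ref _busy, 1, 0) != 0)
                return;

            // Clone so the camera driver can safely reuse its buffer.
            Bitmap current = (Bitmap)frame.Clone();

            ThreadPool.QueueUserWorkItem(delegate
            {
                try
                {
                    if (_previous != null && FramesDiffer(_previous, current))
                        StartCropProcess(current);   // clone inside if you keep it

                    if (_previous != null)
                        _previous.Dispose();
                    _previous = current;
                }
                finally
                {
                    Interlocked.Exchange(ref _busy, 0);
                }
            });
        }

        private bool FramesDiffer(Bitmap a, Bitmap b) { return false; /* your comparison */ }
        private void StartCropProcess(Bitmap frame) { /* application-specific */ }
    }

Dropping frames is harmless here: you only need to notice the first changed frame, not process every one.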

I would recommend using an image processing library, because the default .NET image processing tools are limited. You can use a library like http://www.aforgenet.com/framework/.
Then you can, for example, subtract image 1 from image 2 and sum the differences. If the sum is below a threshold (you choose the one that fits your needs), the images are identical.
Or you can dig deeper and try this: http://thecsharper.com/?p=94
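To illustrate the subtract-and-threshold idea without any library, here is a rough sketch of a subsampled sum-of-absolute-differences over two same-sized Bitmaps, using LockBits rather than the very slow GetPixel (requires compiling with /unsafe; the step and threshold values are assumptions you would tune to your feed):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class FrameComparer
    {
        // Returns true if two same-sized frames differ by more than 'threshold'.
        // Samples every 'step'-th pixel in each direction.
        public static bool FramesDiffer(Bitmap a, Bitmap b, int step, long threshold)
        {
            var rect = new Rectangle(0, 0, a.Width, a.Height);
            BitmapData da = a.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            BitmapData db = b.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            long sum = 0;
            unsafe
            {
                for (int y = 0; y < a.Height; y += step)
                {
                    byte* pa = (byte*)da.Scan0 + y * da.Stride;
                    byte* pb = (byte*)db.Scan0 + y * db.Stride;
                    for (int x = 0; x < a.Width * 3; x += 3 * step)
                        sum += Math.Abs(pa[x] - pb[x]);   // one channel as a cheap proxy
                }
            }
            a.UnlockBits(da);
            b.UnlockBits(db);
            return sum > threshold;
        }
    }

With step = 4 this touches only 1/16th of the pixels; for a camera staring at a white screen, even coarser sampling is usually enough to notice an inserted object.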

Related

Detect Object Defects with OpenCV

I am trying to identify changes to an object, so I take a picture before and after using it. At the moment I'm working with the absolute difference of the two pictures and taking the contours of the resulting difference image. That works fine as long as the object is positioned perfectly and captured exactly as in the "before" image. Even small differences in its position make my method useless.
Does anybody have a different solution approach using OpenCV or EmguCV? I was thinking that if one of the neighboring pixels is identical, no change should be detected there, but I don't know of an existing performant algorithm for that.
Example images (these pictures don't match my use case, but they should help illustrate my problem):
Before
After
Yes, there are many ways to do this. I like the following:
Histogram match. Get a histogram before and after and check for differences. This is sensitive to changes in lighting, but it is a very good method in a controlled lighting setting (see the sketch below).
Correlation match. If you use MatchTemplate you can get the "quality" of the match. This can be made less sensitive to light, but it is sensitive to rotation differences between the two images.
Try implementing some of these and let's see your code.
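As an illustration of the histogram idea, here is a plain System.Drawing sketch that builds normalized 256-bin luminance histograms and compares them; no OpenCV required, and the distance threshold is something you would calibrate yourself (requires /unsafe):

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class HistogramMatch
    {
        // 256-bin luminance histogram, normalized by pixel count.
        static double[] Histogram(Bitmap bmp)
        {
            var hist = new double[256];
            var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            unsafe
            {
                for (int y = 0; y < bmp.Height; y++)
                {
                    byte* row = (byte*)data.Scan0 + y * data.Stride;
                    for (int x = 0; x < bmp.Width; x++)
                    {
                        // Integer approximation of BT.601 luminance (bytes are B, G, R).
                        int lum = (299 * row[x * 3 + 2] + 587 * row[x * 3 + 1] + 114 * row[x * 3]) / 1000;
                        hist[lum]++;
                    }
                }
            }
            bmp.UnlockBits(data);
            double total = (double)bmp.Width * bmp.Height;
            for (int i = 0; i < 256; i++) hist[i] /= total;
            return hist;
        }

        // Sum of absolute bin differences: 0 = identical histograms, 2 = fully disjoint.
        public static double Distance(Bitmap before, Bitmap after)
        {
            double[] h1 = Histogram(before), h2 = Histogram(after);
            double d = 0;
            for (int i = 0; i < 256; i++) d += Math.Abs(h1[i] - h2[i]);
            return d;
        }
    }

Because the histogram throws away all spatial information, it is insensitive to the small position shifts that break the absolute-difference approach, which is exactly what the question needs.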

Reading specific frames from video file with .NET (not necessarily in sequential order)

I have a .NET WinForms application and I need to show specific frames of a video file. The frames aren't necessarily requested in sequential order; they are loaded when the user moves a slider or when the application fires certain events. I have tried the following:
Using EmguCV (an OpenCV wrapper): the problem here is that when I use SetCaptureProperty (with CAP_PROP.CV_CAP_PROP_POS_FRAMES, AVI_RATIO, or MSEC) to set the capture position, the position isn't set correctly (I checked by calling GetCaptureProperty right after the SetCaptureProperty call). So the frame returned by QueryFrame isn't the frame I need.
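For reference, a common workaround when frame-accurate seeking misbehaves is to seek only backwards-to-zero and then read forward sequentially with QueryFrame. A sketch using the EmguCV 2.x names mentioned in the question (later EmguCV versions renamed Capture and CAP_PROP, so treat this as an assumption, not the definitive API):

    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Structure;

    static class FrameSeeker
    {
        // Rewinds at most once, then reads forward frame by frame.
        public static Image<Bgr, byte> GetFrame(Capture capture, int targetIndex)
        {
            int current = (int)capture.GetCaptureProperty(CAP_PROP.CV_CAP_PROP_POS_FRAMES);
            if (targetIndex < current)
            {
                // Only ever seek back to the start; forward positioning is done by reading.
                capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_POS_FRAMES, 0);
                current = 0;
            }

            Image<Bgr, byte> frame = null;
            while (current <= targetIndex)
            {
                frame = capture.QueryFrame();    // decodes one frame and advances the position
                if (frame == null) return null;  // ran past the end of the stream
                current++;
            }
            // Clone the result if you need to keep it beyond the next QueryFrame call.
            return frame;
        }
    }

Sequential decoding is slow for long forward jumps, but it returns exactly the requested frame; caching a window of decoded frames around the slider position hides most of the cost.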
Using a WPF MediaElement with clock-driven behavior: I can set the position of the video to the place I need. The problem is that I don't know how to get just one frame of the video sequence. By default I keep the clock controller paused. When I set the position, if I call Clock.Controller.Resume() the video starts playing from there; if I don't call Clock.Controller.Resume(), or if I call Clock.Controller.Resume() and then Clock.Controller.Pause(), nothing happens.
I'm looking for another video library that could accomplish this, but I'm not sure what to use. Any ideas?
Thanks a lot to all community members, not only for help with this answer, but for the very big help you give me with my problems every day. I am new here, but I will try to return the favor by helping others with their problems.
Sorry for my terrible English! (I'm a Spanish speaker and speaking English is not my best quality :S)

Separation and analysis of an image

Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5, C#, and MySQL as the database. I have a PictureBox on a form and 10-12 buttons (each with some image manipulation function). Clicking one of the buttons shows an OpenFileDialog where the user can select the file to feed to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program. e.g.
What I intend is for the program to hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me an approach? Any help or suggestions, please.
Thanks.
In image processing, the problem of finding the "parts" of the circuit is known as connected component labeling. If you are using C#, I believe you can use EmguCV (a wrapper for the OpenCV library) to solve the first part of the problem. To do that, you have to consider the white pixels as the background and the black pixels as objects.
Once you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it with connected component labeling, but now the objects are represented by white pixels and the background by black pixels.
At least for your example case, a very simple algorithm would work (a flood-fill sketch follows the steps below):
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it and separate them. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run into the edge of the image, it's not a hole. If you don't, it might be a hole or a loop in the trace. Use a threshold on the hole size to decide whether it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump back to step 1.
When there are no more black pixels to consider in step 1, you're done.
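Here is a minimal sketch of the flood-fill building block used in steps 2 and 3, on a plain System.Drawing.Bitmap (GetPixel/SetPixel keep the sketch short; use LockBits for anything performance-sensitive):

    using System.Collections.Generic;
    using System.Drawing;

    static class Fill
    {
        // 4-connected flood fill: recolors every pixel connected to (startX, startY)
        // that shares its color, and returns the visited points (one component).
        public static List<Point> FloodFill(Bitmap bmp, int startX, int startY, Color fill)
        {
            Color target = bmp.GetPixel(startX, startY);
            var visited = new List<Point>();
            if (target.ToArgb() == fill.ToArgb()) return visited;

            var queue = new Queue<Point>();
            queue.Enqueue(new Point(startX, startY));
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                if (p.X < 0 || p.Y < 0 || p.X >= bmp.Width || p.Y >= bmp.Height) continue;
                if (bmp.GetPixel(p.X, p.Y).ToArgb() != target.ToArgb()) continue;

                bmp.SetPixel(p.X, p.Y, fill);    // recoloring doubles as the "visited" flag
                visited.Add(p);
                queue.Enqueue(new Point(p.X + 1, p.Y));
                queue.Enqueue(new Point(p.X - 1, p.Y));
                queue.Enqueue(new Point(p.X, p.Y + 1));
                queue.Enqueue(new Point(p.X, p.Y - 1));
            }
            return visited;
        }
    }

For step 3, whether the fill ever touched the image border tells you hole vs. non-hole, and visited.Count gives you the size to threshold on.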
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things; for instance, color everything "not in consideration" red. It also makes for an interesting visualization if you watch it in real time!

Capture a single pixel row from each frame of video and compile them together

I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and create an image out of it, basically appending the pixel row onto the image throughout the video. The video file I plan to supply isn't a regular video; it's actually a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far down as the game will let me, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30 fps) and compile the rows into an image that will, hopefully, reconstruct the landscape as a single image.
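However the frames end up being decoded, once each one is a System.Drawing.Bitmap the compilation step itself is simple. A rough sketch (frames, sourceRow, width, and frameCount are assumptions about your capture):

    using System.Collections.Generic;
    using System.Drawing;
    using System.Drawing.Imaging;

    static class RowCompiler
    {
        // Copies row 'sourceRow' of each incoming frame into consecutive rows
        // of a pre-allocated output bitmap, one output row per video frame.
        public static Bitmap CompileRows(IEnumerable<Bitmap> frames, int sourceRow,
                                         int width, int frameCount)
        {
            var output = new Bitmap(width, frameCount, PixelFormat.Format24bppRgb);
            int destRow = 0;
            using (Graphics g = Graphics.FromImage(output))
            {
                foreach (Bitmap frame in frames)
                {
                    // Draw a 1-pixel-high slice of the frame onto the next output row.
                    var src = new Rectangle(0, sourceRow, width, 1);
                    var dst = new Rectangle(0, destRow++, width, 1);
                    g.DrawImage(frame, dst, src, GraphicsUnit.Pixel);
                }
            }
            return output;
        }
    }

The hard part remains getting the decoded frames themselves, which is what the answer below addresses.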
I thought about doing this the quick and dirty way, using an AxWindowsMediaPlayer control and locking the form so it couldn't be moved or resized, then using a Graphics object to capture the screen. But that wouldn't be fast enough and there would be way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I actually just installed the Windows SDK, but I haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing for me a while back when I tried it). Hopefully someone can give me a pointer in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
You could use a video renderer in renderless mode (e.g. VMR9, EVR), which allows you to process every frame yourself. By using frame-stepping playback you can advance one frame at a time and process each frame.
DirectShow.NET helps you use managed code where possible, and I can recommend it. It is, however, only a wrapper around DirectShow, so it might be worthwhile to look at more advanced libraries as well.
A few side notes: wouldn't you run into lighting issues, since the lighting differs from angle to angle? Perhaps it's easier to capture some screenshots and use existing stitching algorithms?

What is image processing?

I am going to create an application where users can edit their pictures: color balance effect, gray sheet effect, invert effect, red-eye fix, etc.
My application will quite closely resemble the ACDSee software.
So I want to know: can my application be called image processing software?
From my point of view, image processing means playing with images and enhancing them.
Thanks in advance.
In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it. [1]
So I think you can call your software image processing software, since it fits what this definition proposes.
[1] http://en.wikipedia.org/wiki/Image_processing
Yes, an application that does color balance, gray sheet effect, invert, and red-eye fix can reasonably be called image processing software. In fact, I would call even a single one of those an image-processing algorithm.
