We're developing a system that builds a combined image from a camera that is moved by stepper motors to photograph a whole area. The problem is that when we combine the separate frames, the edges don't line up exactly because of the stepper motors' discretization. So we came up with the idea of capturing frames with a small overlap, so we can lay them over each other and get a continuous image with no gaps. We're using C# + LeadTools. So I'm wondering: is there any option in LeadTools (or maybe some other SDK) to detect the areas that are equal in both images, so we can stitch them correctly? Thanks in advance.
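I don't know of a LeadTools-specific call for this, but as a hedged illustration of the general idea, here is a minimal sketch of estimating the overlap offset with normalized template matching, assuming Emgu CV (an OpenCV wrapper for C#); the file names and the 64-pixel strip width are placeholders.

// Minimal sketch: estimate where frame B overlaps frame A by template matching.
// Assumes Emgu CV; "left.png"/"right.png" and the strip width are placeholders.
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;

Mat left = CvInvoke.Imread("left.png", ImreadModes.Grayscale);
Mat right = CvInvoke.Imread("right.png", ImreadModes.Grayscale);

// Take a narrow strip from the left edge of the right-hand frame...
Mat strip = new Mat(right, new Rectangle(0, 0, 64, right.Height));

// ...and search for it inside the left-hand frame.
Mat result = new Mat();
CvInvoke.MatchTemplate(left, strip, result, TemplateMatchingType.CcoeffNormed);

double minVal = 0, maxVal = 0;
Point minLoc = Point.Empty, maxLoc = Point.Empty;
CvInvoke.MinMaxLoc(result, ref minVal, ref maxVal, ref minLoc, ref maxLoc);

// maxLoc.X is the estimated horizontal offset at which the two frames coincide,
// so the stitched image is the left frame's columns [0, maxLoc.X) followed by
// the whole right frame.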
Related
I'm looking for a good way to isolate an air bubble from the following image. I'm using Visual Studio 2015 and C#.
I've heard of the watershed method and believe it may be a good solution.
I tried implementing the code solution found here: watershed image segmentation
I haven't had much success. That solution fails for me because it can't find some functions, for example FilterGrayToGray.
Does anyone know of a good way to do this?
You could just train a neural network to recognize parts of the image where there are no bubbles (for example, blocks of 16x16 pixels). Then, when recognition of a block fails, you do a burst of horizontal scanlines and register where the edge starts and ends. You can determine the cross-section of a bubble fairly precisely (determining its volume, however, also needs to take surface curvature into account, which is possible but harder). If you have the possibility to use more cameras, you can triangulate more sections of a bubble and get a precise idea of its real volume. As another heuristic for bubble size, you can also use the known volume throughput: if you know that in a given time interval you emitted X liters of air, and the bubble cross-sections are in a certain proportion, you can redistribute the total volume across the bubbles and further increase precision (of course, you have to keep pressure in mind, since bubbles at the bottom of the pool will be smaller).
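As a hedged illustration of the scanline idea only (not of the neural-network part), here is a minimal sketch that walks one horizontal row of a grayscale image and records where the intensity crosses a threshold; the byte[,] representation and the threshold value of 128 are assumptions.

// Minimal sketch: find edge start/end positions along one horizontal scanline.
// Assumes the frame is already a grayscale byte[,] and that 128 is a workable
// threshold; both are placeholders.
using System.Collections.Generic;

static List<(int Start, int End)> ScanRow(byte[,] gray, int row, byte threshold = 128)
{
    var segments = new List<(int Start, int End)>();
    int width = gray.GetLength(1);
    int start = -1;

    for (int x = 0; x < width; x++)
    {
        bool dark = gray[row, x] < threshold;   // inside a bubble edge?
        if (dark && start < 0)
            start = x;                          // edge starts here
        else if (!dark && start >= 0)
        {
            segments.Add((start, x - 1));       // edge ended on the previous pixel
            start = -1;
        }
    }
    if (start >= 0)
        segments.Add((start, width - 1));       // edge runs to the image border
    return segments;
}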
As you can see, you can play with simple algorithms like Gaussian difference and contrast adjustment to achieve results of different quality.
In the left picture you can easily remove all the background noise, but you have now lost part of the bubbles. You may be able to regain the missing bubble edges by using different illumination on the pool.
In the right picture you have the whole bubble edges, but you also have more areas that you need to manually discard from the picture.
As for edge detection algorithms, you should use one that does not add a fixed offset to the edges (as a convolution matrix or Laplace does); for this I think Gaussian difference would work best.
Keep all intermediate data so one can easily verify and tweak the algorithm and increase its precision.
EDIT:
The code depends on which library you use; you can easily implement Gaussian blur and a horizontal scanline yourself, and for neural networks there are already C# solutions out there.
// Do Gaussian difference: subtract two blurs of different radii to highlight edges
Image ComputeGaussianDifference(Image img, float r1, float r2)
{
    Image img1 = img.GaussianBlur(r1);      // blur with the smaller radius
    Image img2 = img.GaussianBlur(r2);      // blur with the larger radius
    return (img1 - img2).Normalize();       // normalize to make values more noticeable
}
More edits pending... try to document yourself in the meantime; I have already given you enough of a trail to do the job. You just need a basic understanding of simple image-processing algorithms and how to use ready-made neural networks.
Just in case you are looking for some fun - you could investigate Application Example: Photo OCR. Basically, you train one NN to detect bubbles and run it over a sliding window across the image. When you capture one, you use another NN, which is trained to estimate bubble size or volume (you can probably measure your air stream to train that NN). It is not as difficult as it sounds, and it provides very high precision and adaptability.
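To make the sliding-window part concrete, here is a minimal sketch of the scanning loop, assuming a hypothetical isBubble classifier (a stand-in for whatever trained NN you end up with); the window size and stride are placeholders.

// Minimal sketch of a sliding-window detector. isBubble is a hypothetical
// stand-in for a trained classifier; window size and stride are placeholders.
using System;
using System.Collections.Generic;
using System.Drawing;

static List<Rectangle> SlideWindow(byte[,] gray, Func<byte[,], int, int, bool> isBubble,
                                   int window = 16, int stride = 8)
{
    var hits = new List<Rectangle>();
    int height = gray.GetLength(0), width = gray.GetLength(1);

    for (int y = 0; y + window <= height; y += stride)
        for (int x = 0; x + window <= width; x += stride)
            if (isBubble(gray, x, y))                     // classify the patch at (x, y)
                hits.Add(new Rectangle(x, y, window, window));

    return hits;   // candidate bubble regions; pass each to the size-estimation NN
}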
P.S. Azure ML looks good as a free source of all the bells and whistles without needing to go deep.
Two solutions come to mind:
Solution 1:
Use the Hough transform for circles.
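Here is a minimal sketch of the circle Hough transform, assuming Emgu CV; all the numeric parameters are placeholders you would have to tune for your images.

// Minimal sketch: detect roughly circular bubbles with the Hough transform.
// Assumes Emgu CV; the file name and numeric parameters are placeholders.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

Mat gray = CvInvoke.Imread("bubbles.png", ImreadModes.Grayscale);
CvInvoke.GaussianBlur(gray, gray, new System.Drawing.Size(9, 9), 2);  // suppress noise first

CircleF[] circles = CvInvoke.HoughCircles(
    gray, HoughModes.Gradient,
    2,       // accumulator resolution (dp)
    20,      // minimum distance between circle centres
    100,     // Canny high threshold
    50,      // accumulator threshold - lower finds more (and falser) circles
    5, 100); // min/max radius in pixels

foreach (CircleF c in circles)
    System.Console.WriteLine($"Bubble at {c.Center}, radius {c.Radius}");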
Solution 2:
In the past I also had a lot of trouble with similar image segmentation tasks. Basically I ended up with a flood fill, which is similar to the watershed algorithm you programmed.
A few tricks that I would try here:
Shrink the image.
Use colors. I notice you're just making everything gray; that makes little sense if you have a dark-blue background and black boundaries.
Do you wish to isolate the air bubble in a single image, or track the same air bubble from an image stream?
To isolate a 'bubble', try using a convolution matrix on the image to detect the edges. You should pick the edge-detection convolution based on the nature of the image. Here is an example of Laplace edge detection done in GIMP; however, it is fairly straightforward to implement in code.
This can help in isolating the edges of the bubbles.
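As a hedged illustration, here is a minimal sketch of applying a 3x3 Laplacian convolution kernel to a grayscale image held as a byte[,]; the kernel and the clamping strategy are just one common choice.

// Minimal sketch: 3x3 Laplacian convolution for edge detection.
// Assumes the image is already a grayscale byte[,]; border pixels are skipped.
using System;

static byte[,] Laplacian(byte[,] gray)
{
    int[,] kernel = { { 0, 1, 0 }, { 1, -4, 1 }, { 0, 1, 0 } };
    int height = gray.GetLength(0), width = gray.GetLength(1);
    var result = new byte[height, width];

    for (int y = 1; y < height - 1; y++)
        for (int x = 1; x < width - 1; x++)
        {
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)
                for (int kx = -1; kx <= 1; kx++)
                    sum += kernel[ky + 1, kx + 1] * gray[y + ky, x + kx];

            result[y, x] = (byte)Math.Min(255, Math.Abs(sum));  // clamp response to 0..255
        }
    return result;
}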
If you are tracking the same bubble across a stream, this is more difficult because of the way bubbles distort when flowing through liquid. If the frame rate is high enough, it is easy to see the difference from frame to frame, and you can judge which bubble it is likely to be based on positional difference; i.e. you would have to compare the current frame to the previous frame and use some intelligence to work out which bubble is which from frame to frame. Using a fiducial to give a point of reference would be useful too. The nozzle at the bottom of the image might make a good one, as you can generate a signature for it (the nozzle won't change shape!) and check it each time. Signatures for the bubbles aren't going to help much since they can change drastically from one image to the next, so instead you would be processing blobs and their likely locations in the image from one frame to the next.
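Here is a minimal sketch of that positional matching, assuming each frame's bubbles have already been reduced to centroid points by your detection step; the 40-pixel distance cutoff is a placeholder.

// Minimal sketch: match blobs between consecutive frames by nearest centroid.
// Assumes detection has already reduced each frame to centroid points;
// the 40-pixel cutoff is a placeholder.
using System;
using System.Collections.Generic;
using System.Drawing;

static Dictionary<int, int> MatchBlobs(IList<PointF> previous, IList<PointF> current,
                                       float maxDistance = 40f)
{
    var matches = new Dictionary<int, int>();   // previous index -> current index
    for (int i = 0; i < previous.Count; i++)
    {
        int best = -1;
        float bestDist = maxDistance;
        for (int j = 0; j < current.Count; j++)
        {
            float dx = current[j].X - previous[i].X;
            float dy = current[j].Y - previous[i].Y;
            float dist = (float)Math.Sqrt(dx * dx + dy * dy);
            if (dist < bestDist) { bestDist = dist; best = j; }
        }
        if (best >= 0)
            matches[i] = best;   // this is "probably the same bubble"
    }
    return matches;
}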
For more information on how convolution matrices work see here.
For more information on edge detection see here.
Hope this helps, good luck.
I'm new to EMGU and image processing, and I have a project in C# that needs to detect a transparent object, specifically a moth's wing inside a plastic bottle. Here are some examples.
I tried using YCbCr in EMGU, but I cannot detect it or differentiate it from the background.
Another thing is that I tried enclosing it in a "controlled environment" (inside a box where no light can come in) and used an LED backlight. Is this advisable? Or will light from the environment (fluorescent light) do? Will this affect the detection rate? Does lighting play a factor in this kind of problem?
This is the idea of my project and what I use. Basically, my project is just a proof of concept about detecting a transparent object in an image using a webcam (Logitech C910). It is based on an old industrial problem here in our country, where a bottling plant overstocked its plastic bottles and they got contaminated before use. A moth body and a moth wing are the contaminants that were given to us. This is also to see whether a webcam can suffice as an alternative to an industrial camera for this application.
I place it inside a controlled environment and use LED lights as a backlight (this is just made with a prototyping board and a high-intensity LED that is diffused with bond paper). The object (moth wing) is placed inside a plastic bottle with water and will be tested in two parts. In the first part the bottle is not moving; in the second part the bottle is moved on a conveyor, but in the same controlled environment. I have done all the hardware required, so that is not an issue anymore. The moth body is manageable (I think) to detect, but the moth wing has left me scratching my head.
Any help would be very much appreciated. Thank you in advance!
Consider using as many visual cues as possible:
blur/focus
shape - you can use active contours or findContours() on a clean image (see the sketch after this list)
location, intensity, and texture in grabcut framework
you can try IR illumination in case moth and glass react to it differently
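A minimal sketch of the contour route, assuming Emgu CV and that you already have a reasonably clean binarized image (the file name and area cutoff are placeholders):

// Minimal sketch: extract outer contours from a binarized image with Emgu CV.
// Assumes "wing_binary.png" is already thresholded so the wing is white on black.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat binary = CvInvoke.Imread("wing_binary.png", ImreadModes.Grayscale);

using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, RetrType.External,
                          ChainApproxMethod.ChainApproxSimple);

    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > 100)   // placeholder: ignore tiny specks
            System.Console.WriteLine($"Candidate object with area {area}");
    }
}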
You should try to adjust brightness/contrast and color balance.
Another idea is to use an auto threshold such as Sauvola or other auto local thresholds. It will give you interesting results such as this one (I converted the image directly to grayscale):
I did these tests very quickly using ImageJ.
Click the link to the image to see which image corresponds to which binarization algorithm.
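Sauvola itself isn't in core OpenCV, but as a hedged starting point in C#, adaptive (locally weighted mean) thresholding via Emgu CV gives a similar locally adaptive binarization; the block size and constant offset below are placeholders to tune.

// Minimal sketch: locally adaptive thresholding with Emgu CV as a stand-in
// for Sauvola-style binarization. Block size and the constant offset are
// placeholders that need tuning for the bottle images.
using Emgu.CV;
using Emgu.CV.CvEnum;

Mat gray = CvInvoke.Imread("bottle.png", ImreadModes.Grayscale);
Mat binary = new Mat();

CvInvoke.AdaptiveThreshold(
    gray, binary,
    255,                                  // value assigned to "foreground" pixels
    AdaptiveThresholdType.GaussianC,      // local Gaussian-weighted mean
    ThresholdType.Binary,
    25,                                   // neighbourhood (block) size, odd number
    5);                                   // constant subtracted from the local mean

CvInvoke.Imwrite("bottle_binary.png", binary);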
Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5. I am using C#, and for the database I am using MySQL. I have a PictureBox on a form and 10-12 buttons (each with some image-manipulation function). On clicking one of the buttons, an OpenFileDialog is shown where the user can select the specific file to provide to the program. On clicking another button, the program should perform the actions explained below.
I have an image of a circuit. Suppose this is the image which is provided to the program. e.g.
What I intend to do is that - the program should hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me the approach to do that? Any help or suggestions please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe that you can use EmguCV (a wrapper to the OpenCV library) to solve the first part of the problem. To do that, you have to consider that the white pixels are the background and that the black pixels are objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by white pixels and the background are the black pixels.
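A minimal sketch of that first labeling step with Emgu CV (the inversion assumes black traces on a white background, as in your example image, and the file name is a placeholder):

// Minimal sketch: label the connected traces with Emgu CV.
// The inversion assumes black traces on a white background.
using Emgu.CV;
using Emgu.CV.CvEnum;

Mat circuit = CvInvoke.Imread("circuit.png", ImreadModes.Grayscale);

// Connected-component labeling treats non-zero pixels as objects,
// so invert: traces become white, background becomes black.
Mat binary = new Mat();
CvInvoke.Threshold(circuit, binary, 128, 255, ThresholdType.BinaryInv);

Mat labels = new Mat();
int count = CvInvoke.ConnectedComponents(binary, labels);

// Label 0 is the background; labels 1..count-1 are the separated traces.
System.Console.WriteLine($"Found {count - 1} separate traces");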
At least for your example case, a very simple algorithm would work.
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it, and separate them. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run into the edge of the image, it's not a hole. If you don't, it might be a hole, or a loop in the trace. Use a threshold for the hole size to determine whether it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump to step 1.
6. When there are no more black pixels in consideration in step 1, you're done.
You should probably get pretty far with a basic image-editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online; a minimal sketch follows). You can use different colors to mark different things, for instance, color everything "not in consideration" red. It also makes for an interesting visualization if you look at it in real time!
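For completeness, a minimal sketch of a queue-based flood fill over a 2D color array; the Color[,] representation is an assumption, and you would adapt it to whatever bitmap access your library gives you.

// Minimal sketch: queue-based (BFS) flood fill over a 2D pixel array.
// Assumes the image is held as a Color[,]; adapt to your library's bitmap access.
using System.Collections.Generic;
using System.Drawing;

static void FloodFill(Color[,] pixels, int startX, int startY, Color fill)
{
    int height = pixels.GetLength(0), width = pixels.GetLength(1);
    Color target = pixels[startY, startX];
    if (target.ToArgb() == fill.ToArgb()) return;   // nothing to do

    var queue = new Queue<(int X, int Y)>();
    queue.Enqueue((startX, startY));

    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        if (x < 0 || y < 0 || x >= width || y >= height) continue;      // off the image
        if (pixels[y, x].ToArgb() != target.ToArgb()) continue;         // not part of the region

        pixels[y, x] = fill;                                            // mark as processed
        queue.Enqueue((x + 1, y));
        queue.Enqueue((x - 1, y));
        queue.Enqueue((x, y + 1));
        queue.Enqueue((x, y - 1));
    }
}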
I'm working on a project where I need to take a single horizontal or vertical pixel row (or column, I guess) from each frame of a supplied video file and create an image out of it, basically appending the pixel row onto the image throughout the video. The video file I plan to supply isn't a regular video; it's actually just a capture of a panning camera from a video game (Halo: Reach) looking straight down (or as far down as the game will let me, which is -85.5°). I'll look down, pan the camera forward over the landscape very slowly, then take a single pixel row from each frame of the captured video file (30 fps) and compile the rows into an image that will effectively (hopefully) reconstruct the landscape as a single image.
I thought about doing this the quick and dirty way, using an AxWindowsMediaPlayer control and locking the form so that it couldn't be moved or resized, then just using a Graphics object to capture the screen, but that wouldn't be fast enough and there would be way too many problems; I need direct access to the frames.
I've heard about FFLib and DirectShow.NET. I actually just installed the Windows SDK but haven't had a chance to mess with any of the DirectX stuff yet (I remember it being very confusing for me a while back when I messed with it). Hopefully someone can give me a pointer in the right direction.
If anyone has any information they think might help, I'd be super grateful for it. Thank you!
You could use a video renderer in renderless mode (e.g. VMR9, EVR), which allows you to process every frame yourself. By using frame-stepping playback you can step one frame at a time and process each frame.
DirectShow.NET can help you to use managed code where possible, and I can recommend it. It is however only a wrapper to DirectShow, so it might be worthwhile to look for more advanced libraries as well.
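If a higher-level frame grabber is acceptable instead of raw DirectShow, here is a minimal sketch of the row-accumulation idea using Emgu CV's VideoCapture; the file name and the choice of the middle row are placeholders.

// Minimal sketch: take one pixel row from every frame and stack the rows into
// a single image. Uses Emgu CV's VideoCapture as the frame source; the file
// name and the "middle row" choice are placeholders.
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

using (var capture = new VideoCapture("pan_capture.avi"))
{
    int frameCount = (int)capture.Get(CapProp.FrameCount);
    int width = (int)capture.Get(CapProp.FrameWidth);
    var composite = new Image<Bgr, byte>(width, frameCount);   // one output row per frame

    int row = 0;
    Mat frame = new Mat();
    while (row < frameCount && capture.Read(frame) && !frame.IsEmpty)
    {
        var img = frame.ToImage<Bgr, byte>();
        int middle = img.Rows / 2;
        for (int x = 0; x < width; x++)
            composite[row, x] = img[middle, x];   // copy one pixel row into the composite
        row++;
    }

    composite.Save("landscape.png");   // the stacked rows form the reconstructed landscape
}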
A few sidenotes: wouldn't you experience issues with lighting which differs from angle to angle? Perhaps it's easier to capture some screenshots and use existing stitching algorithms?
I saw that there are some laptops with 3D support. I know that they use polarization for each eye. How can I write a program in C# that shows a simple 3D object on such a system? I don't want to show a 3D object on a 2D medium (a perspective view), but to show a 3D object similar to what you can see in a 3D film using 3D glasses.
Any suggestion for further study is highly appreciated.
What you need to do is display two images, one for each eye. Each image is a perspective view, but taken from two slightly different viewpoints - about the distance your eyes are apart.
When viewed through polarising glasses or, more likely, LCD shutter glasses, you get the illusion of 3D objects.
In this case each eye's view is presented on the screen alternately and a signal is sent to the glasses to become clear or opaque so that the correct image is seen in each eye.
For a passive system you have to use two projectors for the left and right eye images and make sure that they are perfectly aligned so the images overlap correctly. If you get it wrong you won't get a very good 3D effect.
In both cases you need to create two views of your model and render each one for each frame you display. I used to work in this area and a while back wrote a blog post which included an overview on how we did stereo systems.
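To make the two-viewpoint part concrete, here is a minimal sketch of building left/right view matrices with System.Numerics by offsetting the camera along its right vector; the 65 mm eye separation is just an assumed typical value.

// Minimal sketch: build per-eye view matrices by offsetting the camera position
// along its right vector. The 0.065 m eye separation is an assumed typical value.
using System.Numerics;

static (Matrix4x4 Left, Matrix4x4 Right) StereoViews(
    Vector3 cameraPos, Vector3 target, Vector3 up, float eyeSeparation = 0.065f)
{
    // Right vector of the camera: perpendicular to both the view direction and "up".
    Vector3 forward = Vector3.Normalize(target - cameraPos);
    Vector3 right = Vector3.Normalize(Vector3.Cross(forward, up));
    Vector3 offset = right * (eyeSeparation / 2f);

    // Each eye looks at the same target from a slightly shifted position.
    Matrix4x4 leftView = Matrix4x4.CreateLookAt(cameraPos - offset, target, up);
    Matrix4x4 rightView = Matrix4x4.CreateLookAt(cameraPos + offset, target, up);
    return (leftView, rightView);
}

// Render the scene once with each matrix every frame and present the results to
// the corresponding eye (alternating frames for shutter glasses, or two aligned
// projector outputs for a passive polarized setup).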
I think that you need to program directly using OpenGL or Direct3D. For the screen to display the polarized views necessary to achieve the 3D effect, the graphics card will need to know what it has to display. See here for some ideas.