Here's the scenario:
I am using Visual Studio 2008 with .NET Framework 3.5, and C#. For the database I am using MySQL. I have a PictureBox on a form and 10-12 buttons (each with some image-manipulation function). Clicking one of the buttons shows an OpenFileDialog where the user can select the file to load into the program. Clicking another button should make the program perform the actions explained below.
I have an image of a circuit. Suppose this is the image provided to the program, e.g.:
What I intend is for the program to hypothetically label the circuit as follows:
and then it should separate the image and store the information in a database.
Is there any way to do that? Can anyone tell me an approach? Any help or suggestions, please.
Thanks.
In image processing, the problem of finding the 'parts' of the circuit is known as connected component labeling. If you are using C#, I believe that you can use EmguCV (a .NET wrapper for the OpenCV library) to solve the first part of the problem. To do that, you have to consider that the white pixels are the background and that the black pixels are objects.
Now that you have the separated traces, the problem is reduced to finding and labeling the white dots. Again, you can solve it by connected component labeling, but now the objects are represented by white pixels and the background are the black pixels.
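For reference, here is a minimal sketch of that first labeling pass, assuming a reasonably recent EmguCV build (the file name and threshold value are placeholders); the second pass over the white dots is the same call with the thresholding direction reversed:

using Emgu.CV;
using Emgu.CV.CvEnum;

Mat gray = CvInvoke.Imread("circuit.png", ImreadModes.Grayscale);
var binary = new Mat();
// Traces are black on white, so invert: objects must be non-zero (white).
CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.BinaryInv);
var labels = new Mat();
// Every connected trace gets its own integer label; label 0 is the background.
int count = CvInvoke.ConnectedComponents(binary, labels);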
At least for your example case, a very simple algorithm would work.
1. Find a black pixel in the image.
2. Using a flood-fill algorithm, find all the pixels connected to it and separate them out. That's one of your traces.
3. Working with the separated trace, find a white pixel and use a flood-fill algorithm to find all the pixels connected to it. If you run into the edge of the image, it's not a hole. If you don't, it might be a hole or a loop in the trace; use a threshold on the hole size to decide whether it's a terminal hole or a loop.
4. Label the hole and remove it from consideration. Repeat until there are no more unprocessed white pixels.
5. Remove the whole trace from consideration, and jump to step 1.
6. When there are no more black pixels in consideration in step 1, you're done.
You should probably get pretty far with a basic image editing library that has a flood-fill function, a function to separate a certain color into a new image, and a function to replace colors (the last two are trivial to implement, and there are plenty of flood-fill algorithms available online). You can use different colors to mark different things, for instance, coloring everything "not in consideration" red. It also makes for an interesting visualization if you look at it in real time!
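A minimal sketch of the flood-fill step over a System.Drawing Bitmap (the method name and the 4-connectivity choice are my own; GetPixel/SetPixel are slow, so a real implementation would use LockBits):

using System.Collections.Generic;
using System.Drawing;

static List<Point> FloodFill(Bitmap bmp, Point start, Color target, Color replacement)
{
    var filled = new List<Point>();
    if (target.ToArgb() == replacement.ToArgb())
        return filled; // identical colors would loop forever
    var queue = new Queue<Point>();
    queue.Enqueue(start);
    while (queue.Count > 0)
    {
        Point p = queue.Dequeue();
        if (p.X < 0 || p.Y < 0 || p.X >= bmp.Width || p.Y >= bmp.Height)
            continue; // ran off the image edge
        if (bmp.GetPixel(p.X, p.Y).ToArgb() != target.ToArgb())
            continue; // not part of this region
        bmp.SetPixel(p.X, p.Y, replacement); // mark as processed
        filled.Add(p);
        queue.Enqueue(new Point(p.X + 1, p.Y)); // visit the 4-connected neighbors
        queue.Enqueue(new Point(p.X - 1, p.Y));
        queue.Enqueue(new Point(p.X, p.Y + 1));
        queue.Enqueue(new Point(p.X, p.Y - 1));
    }
    return filled;
}

The returned points are one trace (or hole), and the replacement color doubles as the "not in consideration" marker described above.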
I'm trying to identify changes to an object. To do this, I take a picture before and after the object is used. At the moment I'm working with the absolute difference of the two pictures and taking the contours of the resulting difference image. That works fine as long as the object is positioned exactly as it was captured in the 'before' image. Even small differences in its position make my method useless.
Does anybody have a different solution approach using OpenCV or EmguCV? I was thinking of checking whether one of the neighboring pixels is identical (in which case no change should be detected), but I don't know of an existing performant algorithm.
Example images (the pictures don't match my use case, but they should help illustrate my problem):
Before
After
Yes, there are many ways to do this. I like the following:
Histogram match. Get a histogram before and after and check for differences. It is sensitive to changes in lighting, so it's a very good method if you are in a controlled lighting setting.
Correlation match. If you use MatchTemplate you can get the "quality" of the match. This can be made less sensitive to light, but it is sensitive to rotation changes between the two images.
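A rough sketch of both checks, assuming EmguCV (file names are placeholders; CompareHist with the correlation method returns 1.0 for identical histograms):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat before = CvInvoke.Imread("before.png", ImreadModes.Grayscale);
Mat after = CvInvoke.Imread("after.png", ImreadModes.Grayscale);

// Histogram match: compare the grey-level distributions.
var histBefore = new Mat();
var histAfter = new Mat();
using (var v1 = new VectorOfMat(before))
using (var v2 = new VectorOfMat(after))
{
    CvInvoke.CalcHist(v1, new[] { 0 }, null, histBefore, new[] { 256 }, new float[] { 0, 256 }, false);
    CvInvoke.CalcHist(v2, new[] { 0 }, null, histAfter, new[] { 256 }, new float[] { 0, 256 }, false);
}
double similarity = CvInvoke.CompareHist(histBefore, histAfter, HistogramCompMethod.Correl);

// Correlation match: the peak of the result map is the match "quality".
// The template must be no larger than the image it is matched against.
var result = new Mat();
CvInvoke.MatchTemplate(after, before, result, TemplateMatchingType.CcoeffNormed);
double min = 0, max = 0;
Point minLoc = new Point(), maxLoc = new Point();
CvInvoke.MinMaxLoc(result, ref min, ref max, ref minLoc, ref maxLoc);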
Try implementing some of these and let's see your code.
I'm looking for a good way to isolate an air bubble from the following image. I'm using Visual Studio 2015 and C#.
I've heard of the watershed method and believe it may be a good solution.
I tried implementing the code solution found here: watershed image segmentation
I haven't had much success; the solution fails because some functions can't be found, for example FilterGrayToGray.
Does anyone know of a good way to do this?
You should just train a neural network to recognize parts of the image where there are no bubbles (for example, groups of 16x16 pixels). Then, when recognition of a square fails, you do a burst of horizontal scanlines and record where each edge starts and ends. You can determine the cross-section of a bubble pretty precisely on the image (determining its volume requires taking surface curvature into account, which is possible but harder). If you have the option of using more cameras, you can triangulate more sections of a bubble and get a precise idea of its real volume. As another heuristic for bubble size you can use the known volume throughput: if you know that in a given time interval you emitted X liters of air, and the bubble cross-sections come in certain proportions, you can redistribute the total volume across the bubbles and further increase precision (of course you have to keep pressure in mind, since bubbles at the bottom of the pool will be smaller).
As you can see, you can play with simple algorithms like Gaussian difference and contrast to achieve results of different quality.
In the left picture you can easily remove all the background noise, but you have now lost part of the bubbles. You may be able to regain the missing bubble edges by using different illumination on the pool.
In the right picture you have all of the bubble edges, but now you also have more areas that you need to manually discard from the picture.
As for edge-detection algorithms, you should use one that does not add a fixed offset to the edges (like a convolution matrix or Laplace does); for this I think Gaussian difference would work best.
Keep all the intermediate data so you can easily verify and tweak the algorithm and increase its precision.
EDIT:
The code depends on which library you use. You can easily implement Gaussian blur and horizontal scanlines yourself, and for neural networks there are already C# solutions out there.
// Difference of Gaussians: blur at two radii and subtract. Flat areas
// cancel out, leaving mid-frequency detail such as bubble edges.
Image ComputeGaussianDifference(Image img, float r1, float r2)
{
    Image blur1 = img.GaussianBlur(r1); // narrow blur keeps fine detail
    Image blur2 = img.GaussianBlur(r2); // wide blur keeps coarse detail
    return (blur1 - blur2).Normalize(); // stretch values so edges stand out
}
More edits pending; try to read up on these topics in the meantime. I have already given you enough of a trail to do the job; you just need a basic understanding of simple image-processing algorithms and of how to use ready-made neural networks.
Just in case you are looking for some fun, you could investigate Application Example: Photo OCR. Basically, you train one NN to detect bubbles and try it on a sliding window across the image. When you capture one, you use another NN trained to estimate bubble size or volume (you can probably measure your air stream to train it). It is not as difficult as it sounds, and it provides very high precision and adaptability.
P.S. Azure ML looks good as a free source of all the bells and whistles without the need to go deep.
Two solutions come to mind:
Solution 1:
Use the Hough transform for circles.
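A minimal sketch, assuming EmguCV; every numeric parameter below is a guess to tune against real frames:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

var gray = new Image<Gray, byte>("bubbles.png"); // placeholder file name
gray = gray.SmoothGaussian(5);                   // denoise before the transform
CircleF[] circles = CvInvoke.HoughCircles(
    gray,
    HoughModes.Gradient,
    2,    // dp: inverse accumulator resolution
    20,   // minDist: minimum distance between detected centers
    100,  // param1: Canny high threshold
    50,   // param2: accumulator threshold (lower finds more circles)
    5,    // minRadius, in pixels
    100); // maxRadius, in pixels

Each CircleF gives a center and radius, which also provides a first estimate of the bubble's cross-section.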
Solution 2:
In the past I also had a lot of trouble with similar image segmentation tasks. Basically I ended up with a flood fill, which is similar to the watershed algorithm you programmed.
A few tricks that I would try here:
Shrink the image.
Use colors. I notice you're just making everything gray; that makes little sense if you have a dark-blue background and black boundaries.
Do you wish to isolate the air bubble in a single image, or track the same air bubble from an image stream?
To isolate a 'bubble', try using a convolution matrix on the image to detect the edges. You should pick the edge-detection convolution based on the nature of the image. Here is an example of Laplace edge detection done in GIMP; however, it is fairly straightforward to implement in code.
This can help in isolating the edges of the bubbles.
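For illustration, a sketch of a Laplace kernel applied via convolution, assuming EmguCV (the file name is a placeholder):

using Emgu.CV;
using Emgu.CV.Structure;

var gray = new Image<Gray, byte>("bubbles.png"); // placeholder input
var laplace = new ConvolutionKernelF(new float[,]
{
    { -1, -1, -1 },
    { -1,  8, -1 },
    { -1, -1, -1 }
}); // the kernel sums to zero, so flat regions map to zero and edges stand out
Image<Gray, float> edges = gray.Convolution(laplace);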
If you are tracking the same bubble across a stream, this is more difficult due to the way bubbles distort when flowing through liquid. If the frame rate is high enough, the difference from frame to frame will be small, and you can judge which bubble is which based on positional difference; i.e., you compare the current frame to the previous frame and use some intelligence to work out which bubble is the same from frame to frame. Using a fiducial to give a point of reference would be useful too. The nozzle at the bottom of the image might make a good one, as you can generate a signature for it (the nozzle won't change shape!) and check it each time. Signatures for the bubbles aren't going to help much, since they can change drastically from one image to the next; instead, you would be processing blobs and their likely locations in the image from one frame to the next.
For more information on how convolution matrices work see here.
For more information on edge detection see here.
Hope this helps, good luck.
How can I smooth a Graphics object in C#? To be more precise, I need to run smoothing at a very specific moment in my Graphics object's generation, on the whole object. The image is in color.
I am flexible in terms of input classes (Graphics, etc.). I just suggested Graphics as it is a central class for image manipulation in C#.
Graphics.SmoothingMode is out of scope for what I need to do, and I imagine Wu's algorithm only applies to drawing lines in greyscale.
Have a look at the image processing features of AForge.Net. It is an open source framework that includes a lot of useful image processing capabilities. You will find many smoothing filters among them.
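For instance, a minimal sketch using AForge.NET's GaussianBlur filter (the file name and parameter values are placeholders; AForge filters expect 8bpp grayscale or 24bpp RGB input):

using System.Drawing;
using AForge.Imaging.Filters;

Bitmap image = (Bitmap)Image.FromFile("input.png"); // placeholder input
var blur = new GaussianBlur(2.0, 7);  // sigma = 2.0, 7x7 kernel
Bitmap smoothed = blur.Apply(image);  // returns a new, smoothed bitmap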
I think you used the wrong words to describe your problem. Anti-aliasing refers (as Hand mentioned) to the point in time when individual objects are first drawn, for instance when drawing a diagonal line on an empty surface.
You already have an image, and you want that image to be smoothed. I suggest you detect edges in the image using a standard algorithm, then smooth those edges. Sadly, I am not familiar with the exact process to do this myself.
C# newbie here so please forgive me if my terminology isn't quite correct.
As part of this project, I have a user hold up a piece of paper to a webcam so I can capture, isolate and then eventually display back what they've drawn on it. I've put some restrictions on where the corners of the paper have to be in order for the program to accept it, but there's still the chance that it's distorted by perspective.
Here's an example image that I've captured and isolated the paper out of: image
What I want to be able to do is distort this image so that the corners of the piece of paper are mapped back to an 8.5x11-proportioned rectangle (as if the user had scanned it rather than held it up to the webcam). Rotation and skewing can only get me so far; ideally I would be able to freely transform the image, like in Photoshop. I found this example of four-point image distortion; I am basically trying to do the opposite. I'm curious whether anyone has had to do this, before I start trying to reverse that example.
This is sometimes called a Quadrilateral warp.
Disclaimer: I work for Atalasoft.
Our DotImage Photo SDK can do this, and it's free. Look at QuadrilateralWarpCommand. You need to know the source and destination quadrilaterals.
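If you'd rather stay with the libraries mentioned elsewhere on this page, the same quadrilateral-to-rectangle warp can be sketched with EmguCV's perspective transform (all corner coordinates below are placeholders):

using System.Drawing;
using Emgu.CV;

Mat src = CvInvoke.Imread("paper.png");
PointF[] corners = // detected paper corners: TL, TR, BR, BL
{
    new PointF(120, 80), new PointF(540, 110),
    new PointF(560, 700), new PointF(90, 680)
};
PointF[] target = // 8.5x11 proportions at, say, 850x1100 pixels
{
    new PointF(0, 0), new PointF(850, 0),
    new PointF(850, 1100), new PointF(0, 1100)
};
Mat warp = CvInvoke.GetPerspectiveTransform(corners, target);
var flattened = new Mat();
CvInvoke.WarpPerspective(src, flattened, warp, new Size(850, 1100));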
I've been working on a web app, and I'm stuck on a problematic issue.
I'll try to explain what I'm trying to do.
Here you see the first big image, which has green shapes in it.
What I want to do is crop those shapes into separate PNG files and make their backgrounds transparent, like the example cropped images below the big one.
The first image will be uploaded by the user, and I want to crop it into pieces like the example cropped images. It could be done with the GD library in PHP or by server-side software written in Python or C#, but I don't know what this operation is called, so I don't know what to google. It has something to do with computer vision: detecting blobs and cropping them into pieces, etc.
Any keywords or links would be helpful.
Thanks for the help.
A really easy way to do this is to use flood fill / connected component labeling. Basically, this is a greedy algorithm that groups any pixels that are the same or similar in color.
This is definitely not the ideal way to detect blobs and will only be effective in limited situations. However, it is much easier to understand and code, and it might be sufficient for your purposes.
OpenCV provides a function named cv::findContours to find connected components in an image. If it's always green vs. white, you want to cv::split the image into channels and use cv::threshold on the blue or the red channel (those will be white in the white regions and near black in the green regions) with THRESH_BINARY_INV (because you want to extract the dark regions), then use cv::findContours to detect the blobs. You can then compute the bounding rectangle with cv::boundingRect, create a new image of that size, and use the contour you got as a mask to fill the new image.
Note: These are links to the C++ documentation, but those functions should be exposed in the python and C# wrappers - see http://www.emgu.com for the latter.
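A sketch of that pipeline on the C# side, assuming EmguCV (the file name and the 128 threshold are placeholders):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat img = CvInvoke.Imread("shapes.png", ImreadModes.Color);
var channels = new VectorOfMat();
CvInvoke.Split(img, channels); // channels[0..2] = B, G, R
var mask = new Mat();
// The red plane is dark wherever the image is green.
CvInvoke.Threshold(channels[2], mask, 128, 255, ThresholdType.BinaryInv);
var contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(mask, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
for (int i = 0; i < contours.Size; i++)
{
    Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
    Mat crop = new Mat(img, box); // a view of the blob's bounding box
    // save `crop`, or mask it for transparency as described above
}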
I believe this Wikipedia article covers the problem really well: http://en.wikipedia.org/wiki/Blob_detection
Can't remember any ready-to-use solutions though (-:
It really depends on what kinds of images you will be processing.
As Brian mentioned, you could use Connected Component Labeling, which usually is applied to binary images, where foreground is denoted by white pixels and background by black pixels (or the opposite). The problem is then how to transform the original image to a binary one. If all images are like the example you provided, this is straightforward and can be accomplished with thresholding. OpenCV provides useful methods:
Threshold
FindContours for finding contours of connected components
DrawContours for extracting each component individually into a separate image
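A minimal sketch of that last step, assuming EmguCV: DrawContours paints one contour as a filled mask, and CopyTo pulls just those pixels into a fresh image (img and contours are assumed to come from Threshold/FindContours as in the earlier sketch):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Mat ExtractComponent(Mat img, VectorOfVectorOfPoint contours, int index)
{
    Mat mask = Mat.Zeros(img.Rows, img.Cols, DepthType.Cv8U, 1);
    CvInvoke.DrawContours(mask, contours, index,
        new MCvScalar(255), -1); // thickness -1 fills the contour
    var component = new Mat();
    img.CopyTo(component, mask); // copy only the pixels under the mask
    return component;
}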
For more complex images, however, all bets are off.