I want to use the AForge.NET library to examine similar images and to localize the differences. I can imagine the following algorithm:
a. Compare the 2 images and generate, as a result, a binary image with white pixels for differences and black pixels for matches.
b. Use BlobCounter to find the connected pixels.
Which filter can be used for a)? And how do I count the pixels in each blob?
Take a look at my previous answer here: Aforge Blob Detection
For a), you can use the ThresholdedDifference filter; this will give you black pixels where there are no changes and white pixels where there is a difference. You can invert the result with the Invert filter (http://www.aforgenet.com/framework/docs/html/458e1304-0858-ae29-113f-e2ec9072c626.htm).
As for b), you can use connected component labeling (see the post); this will give an approximate width and height of the objects. If you want to count exactly how many pixels are different, you will probably need to write a procedure for this. It is not very difficult: it is just two nested for loops that go over each X, Y pixel and increase a counter every time they find a specific color.
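Here is a rough sketch of both steps with AForge.NET. The class names are from AForge 2.x; the file names and the threshold of 15 are placeholders you would tune, and both images must have the same size.

    using System;
    using System.Drawing;
    using AForge.Imaging;
    using AForge.Imaging.Filters;

    class DiffBlobs
    {
        static void Main()
        {
            // load both images and convert them to 8bpp grayscale
            Bitmap image1 = Grayscale.CommonAlgorithms.BT709.Apply((Bitmap)Image.FromFile("image1.png"));
            Bitmap image2 = Grayscale.CommonAlgorithms.BT709.Apply((Bitmap)Image.FromFile("image2.png"));

            // a) white where the images differ, black where they match
            ThresholdedDifference diffFilter = new ThresholdedDifference(15) { OverlayImage = image2 };
            Bitmap diff = diffFilter.Apply(image1);

            // b) group the white pixels into connected blobs
            BlobCounter blobCounter = new BlobCounter();
            blobCounter.ProcessImage(diff);

            foreach (Blob blob in blobCounter.GetObjectsInformation())
            {
                // Blob.Area reports the blob's pixel count; if your AForge version
                // lacks it, run the nested loops described above over blob.Rectangle.
                Console.WriteLine("Blob at {0}: {1} pixels", blob.Rectangle, blob.Area);
            }
        }
    }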
I am trying to read a sprite/texture to get the areas between the black lines in the first, black-and-white photo. I don't know if it is efficient, but I am trying to do painting by numbers as in the third photo. I need to add numbers to all areas, and I need to match the colors with the original colored photo. Thanks.
[Black & White Image](https://i.stack.imgur.com/JExFf.png)
[Colored Image](https://i.stack.imgur.com/AVTax.jpg)
[Numbers Added](https://i.stack.imgur.com/QAxRQ.png)
The basic algorithm you need is a 'flood fill' algorithm. Given a starting pixel, it will find all the pixels in a closed area. You would need to do this repeatedly until all the pixels in the image have been found.
There are lots of implementations for flood fill algorithms online.
So your routine could be something like this (a rough C# sketch follows the list):
1. Create an array of ints that stores which group each pixel is in; 0 means it has not been found yet.
2. Pick an unassigned pixel and flood fill. Assign all the found pixels to a group.
3. Repeat step 2 until all the pixels are assigned a group.
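A minimal C# sketch of that routine, assuming a System.Drawing Bitmap in which the outlines are near-black (the darkness threshold of 64 is an arbitrary guess):

    using System.Collections.Generic;
    using System.Drawing;

    static class RegionLabeler
    {
        public static int[,] Label(Bitmap bmp)
        {
            int w = bmp.Width, h = bmp.Height;
            int[,] group = new int[w, h];          // 0 = not assigned yet
            int nextGroup = 0;

            for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (group[x, y] != 0 || IsOutline(bmp.GetPixel(x, y)))
                    continue;

                nextGroup++;                        // start a new region
                var queue = new Queue<Point>();
                queue.Enqueue(new Point(x, y));
                group[x, y] = nextGroup;

                while (queue.Count > 0)             // queue-based flood fill
                {
                    Point p = queue.Dequeue();
                    foreach (Point n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                                new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                    {
                        if (n.X < 0 || n.Y < 0 || n.X >= w || n.Y >= h) continue;
                        if (group[n.X, n.Y] != 0 || IsOutline(bmp.GetPixel(n.X, n.Y))) continue;
                        group[n.X, n.Y] = nextGroup;
                        queue.Enqueue(n);
                    }
                }
            }
            return group;                           // per-pixel region id, 1..nextGroup
        }

        // treat dark pixels as part of the black outline (threshold is a guess)
        static bool IsOutline(Color c) => (c.R + c.G + c.B) / 3 < 64;
    }

Each region id can then be used both to place a number and to look up the average color of the matching region in the original colored photo.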
I have several different images which all have some kind of border around the "real" image. What I would like to achieve is to find the "real" image (its size and location in pixels).
For me the challenge is that the border is not always black (can be any kind of black or grey with a lot of noise) and the "real" image (water with shark in this example) can have any combination of color, saturation, ...
Now, in general I'm aware of algorithms like Canny, blob detection, Hough lines, ..., but I have just started using them. So far I have managed to find the border for a specific image, but as soon as I apply the same algorithms and parameters to the next image it doesn't work. My current approach looks like this (pseudo code; a rough Emgu CV sketch follows the list):
convert to gray CvInvoke.CvtColor(_processedImage, tempMat, CvEnum.ColorConversion.Rgb2Gray)
downsample with CvInvoke.PyrDown(srcImage, targetImage) and CvInvoke.PyrUp(srcImage, targetImage)
blur image with CvInvoke.GaussianBlur(_processedImage, bluredImage, New Drawing.Size(5, 5), 0)
Binarize with CvInvoke.Threshold(_processedImage, blackWhiteImage, _parameters.BinarizeThreshold, 255, CvEnum.ThresholdType.Binary)
Detect Edges with CvInvoke.Canny(_processedImage, imgEdges, 60, 100)
Find Contours with CvInvoke.FindContours(_processedImage, contours, Nothing, CvEnum.RetrType.External, CvEnum.ChainApproxMethod.ChainApproxSimple)
Assume that largest contour is the real image
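For reference, a rough C# (Emgu CV) version of this pseudo code is below. It is a sketch only: the fixed thresholds are exactly the parameters that don't generalize, and enum names can differ slightly between Emgu versions.

    using System.Drawing;
    using Emgu.CV;
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class LargestContourFinder
    {
        public static Rectangle FindRealImage(Mat input, double binarizeThreshold = 128)
        {
            Mat gray = new Mat(), small = new Mat(), blurred = new Mat(), bw = new Mat(), edges = new Mat();

            CvInvoke.CvtColor(input, gray, ColorConversion.Bgr2Gray);       // convert to gray
            CvInvoke.PyrDown(gray, small);                                  // downsample ...
            CvInvoke.PyrUp(small, gray);                                    // ... and back up
            CvInvoke.GaussianBlur(gray, blurred, new Size(5, 5), 0);        // blur
            CvInvoke.Threshold(blurred, bw, binarizeThreshold, 255, ThresholdType.Binary);
            CvInvoke.Canny(bw, edges, 60, 100);                             // detect edges

            using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
            {
                CvInvoke.FindContours(edges, contours, null, RetrType.External,
                                      ChainApproxMethod.ChainApproxSimple);

                // assume that the largest contour is the real image
                Rectangle best = Rectangle.Empty;
                double bestArea = 0;
                for (int i = 0; i < contours.Size; i++)
                {
                    double area = CvInvoke.ContourArea(contours[i]);
                    if (area > bestArea)
                    {
                        bestArea = area;
                        best = CvInvoke.BoundingRectangle(contours[i]);
                    }
                }
                return best;
            }
        }
    }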
I already tried different approaches based on for example:
Thresholding saturation channel and bounding box
Thresholding, canny edge and finding contours
Any hints, especially on how to find proper parameters (that apply to all images) for algorithms like (adaptive) thresholding and Canny, as well as ideas for improving the processing pipeline, would be highly appreciated.
You can try to subtract the black border image from this image, and you will get the inside image. One way to do this is described here: Use image subtraction to compare images in C#.
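If you prefer to roll it by hand rather than follow that link, a tiny (and, because of GetPixel, slow) C# sketch of the per-pixel difference could look like this; the assumed border colour and the threshold of 30 are guesses:

    using System;
    using System.Drawing;

    static class DiffMask
    {
        public static Bitmap AbsDiff(Bitmap img, Color border, int threshold = 30)
        {
            Bitmap mask = new Bitmap(img.Width, img.Height);
            for (int y = 0; y < img.Height; y++)
            for (int x = 0; x < img.Width; x++)
            {
                Color c = img.GetPixel(x, y);
                int d = Math.Abs(c.R - border.R) + Math.Abs(c.G - border.G) + Math.Abs(c.B - border.B);
                mask.SetPixel(x, y, d > threshold ? Color.White : Color.Black);   // white = "inside" image
            }
            return mask;
        }
    }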
If the border was uniform, this would be easy. Use cv::reduce to find MIN and MAX of each row and column; then count the top,left,bottom,right rows/columns whose MIN and MAX are equal (or very close) to the pixel value in a nearby corner. For sanity, maybe check the border colour is the same on all sides.
In your example the border contains faint red stuff, but a row/column approach might still be a useful way to simplify the problem. Maybe, as Nofar suggests, take an absolute difference with what you think is the background colour; square it, convert to grey, then reduce to sums of rows and columns. You still need to find edges, but you have reduced the data from two dimensions to one.
If there's a large border and lots of noise, maybe iterate: in the second pass, exclude the rows you think comprise the border, from statistics on columns (and vice versa).
EDIT: The above only works for an upright rectangle! If it could be rotated then the row/column projection method won't work. In that case I might go for sum-of-squared differences as above (don't start by converting to grey as it could throw away information), followed by blurring or some morphology, edge detection then some kind of Hough transform to find straight edges.
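For the upright-rectangle case, here is a plain C# sketch of the row/column idea: it compares each row's and column's min/max luminance against the top-left corner pixel, and the tolerance of 30 is a guess.

    using System;
    using System.Drawing;

    static class BorderTrimmer
    {
        public static Rectangle FindContent(Bitmap bmp, int tolerance = 30)
        {
            int w = bmp.Width, h = bmp.Height;
            int corner = Luma(bmp.GetPixel(0, 0));

            // a row/column counts as border if both its min and max stay close to the corner value
            Func<int, bool> rowIsBorder = y =>
            {
                int min = 255, max = 0;
                for (int x = 0; x < w; x++)
                {
                    int v = Luma(bmp.GetPixel(x, y));
                    if (v < min) min = v;
                    if (v > max) max = v;
                }
                return Math.Abs(min - corner) <= tolerance && Math.Abs(max - corner) <= tolerance;
            };
            Func<int, bool> colIsBorder = x =>
            {
                int min = 255, max = 0;
                for (int y = 0; y < h; y++)
                {
                    int v = Luma(bmp.GetPixel(x, y));
                    if (v < min) min = v;
                    if (v > max) max = v;
                }
                return Math.Abs(min - corner) <= tolerance && Math.Abs(max - corner) <= tolerance;
            };

            int top = 0, bottom = h - 1, left = 0, right = w - 1;
            while (top < bottom && rowIsBorder(top)) top++;
            while (bottom > top && rowIsBorder(bottom)) bottom--;
            while (left < right && colIsBorder(left)) left++;
            while (right > left && colIsBorder(right)) right--;

            return Rectangle.FromLTRB(left, top, right + 1, bottom + 1);   // the "real" image
        }

        static int Luma(Color c) => (c.R * 299 + c.G * 587 + c.B * 114) / 1000;
    }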
I'm trying to develop an object detection algorithm. I plan to compare 2 images taken with different focus: one image correctly focused on the object and one image correctly focused on the background.
From reading about autofocus algorithms, I think this can be done with a contrast-detection passive autofocus algorithm, which works on the light intensity on the sensor.
But I'm not sure that the light intensity values from the image file are the same as those from the sensor (it is not a RAW image file, but a JPEG image). Are the light intensity values in a JPEG image the same as on the sensor? Can I use them to detect focus correctness with contrast detection? Is there a better way to detect which areas of the image are in correct focus?
I have tried to process the images a bit and I saw some progress. This is what I did using OpenCV (a plain C# sketch of these steps follows below):
converted images to gray using cvtColor(I, Mgrey, CV_RGB2GRAY);
downsampled/decimated them a bit since they are huge (several MB)
Took the sum of absolute horizontal and vertical gradients using http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#cv.Sobel.
The result is below. The foreground when in focus does look brighter than background and vice versa.
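A plain C# approximation of the grayscale and gradient steps, with central differences standing in for cv::Sobel (Bitmap.GetPixel keeps it short but slow):

    using System;
    using System.Drawing;

    static class FocusMap
    {
        // sum of absolute horizontal and vertical gradients per pixel
        public static int[,] GradientSum(Bitmap img)
        {
            int w = img.Width, h = img.Height;
            int[,] map = new int[w, h];
            for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++)
            {
                int gx = Luma(img.GetPixel(x + 1, y)) - Luma(img.GetPixel(x - 1, y));
                int gy = Luma(img.GetPixel(x, y + 1)) - Luma(img.GetPixel(x, y - 1));
                map[x, y] = Math.Abs(gx) + Math.Abs(gy);
            }
            return map;
        }

        static int Luma(Color c) => (c.R * 299 + c.G * 587 + c.B * 114) / 1000;
    }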
You can probably try to match and subtract these images using translation from matchTemplate() on the original gray images, and then assemble pieces using the convex hull of the results as an initialization mask for grab cut, plugging in the color images. In case you aren't familiar with grab cut, check out my answer to this question.
But maybe a simpler method will work here as well. You can try to apply a strong blur to your gradient images instead of precise matching and see what the difference gives you in this case. The images below demonstrate the idea when I turned the difference into binary masks.
It would be helpful to see your images. If I understood you correctly, you are trying to separate background from foreground using a focus (or blur) cue. Contrast in the image depends on focus, but it also depends on the contrast of the target. So if the target is clouds, you will never get sharp edges or high contrast. Finally, a JPEG image that uses little compression should not affect the critical properties of your algorithm.
I would try to get a number of images at all possible focus settings in a row and then build a graph of the contrast as a function of focal length (or, even better, focusing distance). The peak in this graph will give you the distance to the object regardless of the object's own contrast. Note, however, that the accuracy of such visual cues drops sharply with viewing distance.
This is what I expect you to obtain when measuring the sum of absolute gradient in a small window:
The next step for you will be to combine the areas that are in focus with areas that are solid color, that is, have no particular peak in the graph but nonetheless belong to the same object. Sometimes getting a convex hull of the focused areas can help to pinpoint the raw boundary of the object.
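A sketch of that focus-sweep idea, assuming you already have one gradient map per focus setting (for example from the FocusMap sketch shown earlier): for each window, pick the index of the image whose summed gradient peaks. The window size of 16 is arbitrary.

    using System;

    static class FocusStack
    {
        // focusMaps[i] is the gradient map of the i-th image in the focus sweep
        public static int[,] PeakFocusIndex(int[][,] focusMaps, int window = 16)
        {
            int w = focusMaps[0].GetLength(0), h = focusMaps[0].GetLength(1);
            int[,] best = new int[(w + window - 1) / window, (h + window - 1) / window];

            for (int by = 0; by < h; by += window)
            for (int bx = 0; bx < w; bx += window)
            {
                long bestSum = -1;
                int bestIdx = 0;
                for (int i = 0; i < focusMaps.Length; i++)
                {
                    long sum = 0;
                    for (int y = by; y < Math.Min(by + window, h); y++)
                    for (int x = bx; x < Math.Min(bx + window, w); x++)
                        sum += focusMaps[i][x, y];
                    if (sum > bestSum) { bestSum = sum; bestIdx = i; }
                }
                // the winning index approximates the focus setting (and hence distance)
                // at which this window is sharpest
                best[bx / window, by / window] = bestIdx;
            }
            return best;
        }
    }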
I have an image like below.
What I want is a monochrome image such that white parts are kept white, the rest is black. However, the tricky part is that I also want to reduce the white parts to be one pixel in thickness.
It's the second part that I'm stuck with.
My first thought was to do a simple threshold, then use a sort of "Game of Life"-type iterative process where a white pixel is removed if it has neighbours on one side but not the other (i.e. it's an edge). However, I have a feeling this would reduce the ends of lines to nothing over time, so I'd end up with a blank image.
What algorithm can I use to get the image I want, given the original image?
(My language of choice is C#, but anything is fine)
Original Image:
After detecting the morphological extended maxima of a given height:
and then thinning gives:
You can also manipulate the height parameter, or prune the thinned image.
Code in Mathematica:
img = ColorConvert[Import["http://i.stack.imgur.com/zPtl6.png"], "Grayscale"];
max = MaxDetect[img, .55]
Thinning[max]
EDIT I followed my own advice and a height of .4 gives segments which are more precisely localized:
I suggest that you look into binary morphological transformations such as erosion and dilation. Image processing libraries such as OpenCV (http://opencv.willowgarage.com/wiki/) and the statistical/matrix tool GNU Octave (http://octave.sourceforge.net/image/function/bwmorph.html) support these operations.
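As a self-contained illustration (not OpenCV's or Octave's implementation), here is 3x3 binary erosion and dilation on a bool mask in C#; a morphological skeleton, for instance, repeatedly erodes the mask and accumulates the difference between each eroded mask and its opening.

    static class BinaryMorphology
    {
        public static bool[,] Erode(bool[,] src)  => Apply(src, requireAll: true);
        public static bool[,] Dilate(bool[,] src) => Apply(src, requireAll: false);

        static bool[,] Apply(bool[,] src, bool requireAll)
        {
            int w = src.GetLength(0), h = src.GetLength(1);
            bool[,] dst = new bool[w, h];
            for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                bool all = true, any = false;
                for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++)
                {
                    int nx = x + dx, ny = y + dy;
                    bool v = nx >= 0 && ny >= 0 && nx < w && ny < h && src[nx, ny];
                    all &= v;   // erosion: every neighbour must be set
                    any |= v;   // dilation: any set neighbour is enough
                }
                dst[x, y] = requireAll ? all : any;
            }
            return dst;
        }
    }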
I'm trying to design an algorithm for selecting photos which are monochrome (the ones photographers call "black & white") and have a single-color toning such as sepia (think of it as a solid color filter applied to a monochrome image). If I were after plain monochrome only, all I would need to do is check the saturation, which is easy (and I'm currently doing it), but that is unsuccessful at finding solid-color-toned monochrome photos. What could be an approach for that?
If an image is monochrome, either black and white or sepia, the colors corresponding to each lightness level should be nearly identical. Convert the image colorspace to one which contains a Luminance (Y) or Lightness component, such as YCbCr or HSL, and for each Y/L value look at the variance in the other two channels. If the values cluster together, you have a monochrome image.
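A C# sketch of that check (the BT.601 YCbCr conversion is standard; the bucket count, the minimum pixels per bucket and the standard-deviation threshold are arbitrary guesses to tune):

    using System;
    using System.Collections.Generic;
    using System.Drawing;
    using System.Linq;

    static class MonochromeCheck
    {
        public static bool LooksMonochrome(Bitmap bmp, double maxChromaStdDev = 6.0)
        {
            const int buckets = 16;
            var cb = new List<double>[buckets];
            var cr = new List<double>[buckets];
            for (int i = 0; i < buckets; i++) { cb[i] = new List<double>(); cr[i] = new List<double>(); }

            for (int y = 0; y < bmp.Height; y++)
            for (int x = 0; x < bmp.Width; x++)
            {
                Color c = bmp.GetPixel(x, y);
                double lum = 0.299 * c.R + 0.587 * c.G + 0.114 * c.B;              // Y
                double vCb = 128 - 0.168736 * c.R - 0.331264 * c.G + 0.5 * c.B;    // blue-difference chroma
                double vCr = 128 + 0.5 * c.R - 0.418688 * c.G - 0.081312 * c.B;    // red-difference chroma
                int b = Math.Min(buckets - 1, (int)(lum * buckets / 256.0));
                cb[b].Add(vCb);
                cr[b].Add(vCr);
            }

            // monochrome (possibly color-toned) if chroma barely varies at every lightness level
            for (int i = 0; i < buckets; i++)
            {
                if (cb[i].Count < 50) continue;   // skip sparsely populated lightness levels
                if (StdDev(cb[i]) > maxChromaStdDev || StdDev(cr[i]) > maxChromaStdDev)
                    return false;
            }
            return true;
        }

        static double StdDev(List<double> values)
        {
            double mean = values.Average();
            return Math.Sqrt(values.Sum(v => (v - mean) * (v - mean)) / values.Count);
        }
    }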
Maybe you could look at the distribution/pattern of values within the R, G and B histograms. If you use something like Photoshop to visualize the histogram of a monochromatic image, you will see that the "pattern" of the histogram is similar across all three channels (they would need to be normalized), whereas the distribution for each channel of a full-color image is (typically) quite different. For the most part, that is... there are situations where a full-color image may still give off a monochromatic flavor, like a rainbow gradient in Photoshop, but this is contrived and unlikely to occur frequently in "natural" photography.
If your image is monochrome, the saturation of all the pixels will all be approximately 0. If your image is toned (like sepia colored), all of the pixels will have approximately the same hue.