Segmenting part of an image using thresholding - C#

I'm trying to isolate and segment the yellow car body in order to change its color. To do that I need to identify the body separately from the rest of the image, and then continue the operation with the remaining white pixels. I'm using C#; here is the plan:
Color d;
Color newColor = Color.YellowGreen;
for (int i = 0; i < carimage.Width; i++) {
    for (int j = 0; j < carimage.Height; j++) {
        d = carimage.GetPixel(i, j);
        // Recolor the pixels the threshold step marked as pure white (the car body)
        if (d.R == 255 && d.G == 255 && d.B == 255)
            image.SetPixel(i, j, newColor);
    }
}
Simple thresholding produces the second image, where the car body is not separated correctly. I tried the AForge.NET fill-holes image filter, but it made no significant change to the thresholded image. I also tried a color filter, but it did not return a correct output because the body color varies. Can anyone suggest a solution for this?
Original Image
Threshold Image

Instead of thresholding, you might want to look into clustering.
As a quick-and-dirty test, I've increased the image brightness in HSB space (using Mathematica):
brightnessAdjusted = Image[
Map[#^{1, 1, 0.2} &, ImageData[ColorConvert[img, "HSB"]], {2}],
ColorSpace -> "HSB"]
Then I've used simple k-means clustering:
(clusters = ClusteringComponents[ColorConvert[brightnessAdjusted, "RGB"], 3,
Method -> "KMeans"]) // Colorize
to find clusters of similar colors in the image (there are many other, probably more suitable clustering algorithms, so you should experiment a little). Then I can just adjust the color in one of the clusters:
Image[MapThread[If[#1 == 2, #2[[{1, 3, 2}]], #2] &, {clusters, ImageData[brightnessAdjusted]}, 2]]
If you want to use thresholding, you should probably use a CIE color space such as CIELAB, since Euclidean distances in that color space are closer to human perception.
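As an illustration of that CIELAB suggestion in C#, here is a minimal sketch: convert each pixel from sRGB to Lab and keep pixels whose Euclidean distance to a reference body colour is below a tolerance. The reference colour and the tolerance are assumed values you would tune for your image:

using System;
using System.Drawing;

static class LabThreshold
{
    // Convert an sRGB colour to CIELAB (D65 white point).
    static double[] RgbToLab(Color c)
    {
        Func<double, double> inv = v => v <= 0.04045 ? v / 12.92 : Math.Pow((v + 0.055) / 1.055, 2.4);
        double r = inv(c.R / 255.0), g = inv(c.G / 255.0), b = inv(c.B / 255.0);

        // Linear sRGB -> XYZ
        double x = 0.4124 * r + 0.3576 * g + 0.1805 * b;
        double y = 0.2126 * r + 0.7152 * g + 0.0722 * b;
        double z = 0.0193 * r + 0.1192 * g + 0.9505 * b;

        // XYZ -> Lab
        Func<double, double> f = t => t > 0.008856 ? Math.Pow(t, 1.0 / 3.0) : 7.787 * t + 16.0 / 116.0;
        double fx = f(x / 0.95047), fy = f(y / 1.0), fz = f(z / 1.08883);
        return new[] { 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz) };
    }

    // Mark pixels whose Lab distance to a reference colour is below a tolerance.
    public static Bitmap SegmentByLabDistance(Bitmap src, Color reference, double tolerance)
    {
        double[] refLab = RgbToLab(reference);
        var mask = new Bitmap(src.Width, src.Height);
        for (int x = 0; x < src.Width; x++)
            for (int y = 0; y < src.Height; y++)
            {
                double[] lab = RgbToLab(src.GetPixel(x, y));
                double dL = lab[0] - refLab[0], dA = lab[1] - refLab[1], dB = lab[2] - refLab[2];
                bool inside = Math.Sqrt(dL * dL + dA * dA + dB * dB) < tolerance;
                mask.SetPixel(x, y, inside ? Color.White : Color.Black);
            }
        return mask;
    }
}

For the yellow car you might call something like SegmentByLabDistance(carimage, Color.FromArgb(230, 200, 40), 25), where both the reference colour and the 25 are guesses to refine.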

I had a similar project a few years ago. I can't remember the exact details, but the idea was to shift a (not too small) sliding window over the image and calculate the average intensity (maybe for R, G and B separately) inside the window at each position. I filled a "threshold image" with these averages and subtracted it from the original image. There was a scaling factor somewhere, and other tuning stuff, but the point is, such an approach was way better than using a constant threshold.
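A minimal C# sketch of that local-average idea is below. It is deliberately naive (GetPixel in a nested loop; an integral image would make each window average O(1)); the window radius and the offset are the tuning knobs mentioned above and are assumed values:

using System.Drawing;

static class LocalThreshold
{
    // Threshold each pixel against the average intensity of a window around it.
    public static Bitmap AdaptiveThreshold(Bitmap src, int radius, int offset)
    {
        var dst = new Bitmap(src.Width, src.Height);
        for (int x = 0; x < src.Width; x++)
            for (int y = 0; y < src.Height; y++)
            {
                long sum = 0; int count = 0;
                for (int wx = x - radius; wx <= x + radius; wx++)
                    for (int wy = y - radius; wy <= y + radius; wy++)
                    {
                        if (wx < 0 || wy < 0 || wx >= src.Width || wy >= src.Height) continue;
                        Color c = src.GetPixel(wx, wy);
                        sum += (c.R + c.G + c.B) / 3;   // average intensity of the neighbour
                        count++;
                    }
                int localMean = (int)(sum / count);
                Color p = src.GetPixel(x, y);
                int intensity = (p.R + p.G + p.B) / 3;
                // A pixel is foreground only if it is clearly brighter than its neighbourhood.
                dst.SetPixel(x, y, intensity > localMean + offset ? Color.White : Color.Black);
            }
        return dst;
    }
}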

If you are going to use a set of thresholds, you might be better off selecting yellow hues in the Hue-Saturation-Value color space. See the related SO question.
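System.Drawing's Color exposes GetHue, GetSaturation and GetBrightness (technically HSL rather than HSV in .NET, but close enough for a rough first cut), so a sketch of that selection could look like this; the 40-70 degree hue band and the minimum saturation/lightness are assumed starting values for "yellow":

using System.Drawing;

static class HueSelect
{
    // Keep pixels whose hue falls in an assumed "yellow" band.
    public static Bitmap SelectYellow(Bitmap src)
    {
        var mask = new Bitmap(src.Width, src.Height);
        for (int x = 0; x < src.Width; x++)
            for (int y = 0; y < src.Height; y++)
            {
                Color c = src.GetPixel(x, y);
                float hue = c.GetHue();          // 0..360 degrees
                float sat = c.GetSaturation();   // 0..1 (HSL saturation in .NET)
                float lig = c.GetBrightness();   // 0..1 (HSL lightness in .NET)
                bool yellow = hue >= 40f && hue <= 70f && sat > 0.4f && lig > 0.3f;
                mask.SetPixel(x, y, yellow ? Color.White : Color.Black);
            }
        return mask;
    }
}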

I = imread('test.jpg');
I = im2double(rgb2gray(I));
BW = im2bw(I, 0.64);
imshow(BW)
This gives me:
I got the 0.64 threshold by looking at the image's histogram. I suggest you use MATLAB to do the image processing, as it is much easier. Hope that helps you in colouring the image.

Related

Emgu GetAverage and the percentage of black color in an image

I am new to Emgu CV. My target is to compare two photos which are the same photo but with slightly different brightness, and to get the percentage of dark color or dark spots in an ROI.
I saw the GetAverage method, but how can I get the color percentage of an image for a specified color, e.g. black is 80%, white is 20%?
What is the mask parameter in the GetAverage method? I read the documentation but I don't understand it.
My idea is to convert both photos to grayscale, set the ROI, and then get the average value. I don't know if that is the correct way to reach my target. So how can I do this?
Update
Below are the ROI images in grayscale.
Left photo average intensity: 37.4879
Right photo average intensity: 40.9773
I added some scratches to the right photo.
Left photo average intensity: 37.4879
Right photo average intensity: 40.7638
Why is the scratched photo's intensity lower than that of the photo without scratches? Shouldn't it be greater, because it has more gray color?
Or is it that I added scratches, meaning more black, so the gray value is reduced?
What is the mask parameter in the GetAverage method?
The mask parameter in the overloaded GetAverage(mask) allows you to find the average color of the masked area only, according to the official documentation.
Regarding the concept of a mask (which is related to the concept of an ROI): it lets you analyze or process (depending on the context) only the locations of the image where the mask is non-zero. I invite you to read more about this here. If you find the documentation unclear, try to understand how masks work in Photoshop; the concept is the same, and you will surely find many more, and much more intuitive, explanations.
Why is the scratched photo's intensity lower than that of the photo without scratches? Shouldn't it be greater, because it has more gray color?
No, it shouldn't be greater. As indicated by the documentation here, GetAverage() returns the average color with the same color type as your original image (TColor), which is Gray (grayscale) in this case.
As you can see here, the grayscale value of darker colors tends toward 0, while white corresponds to 255.
So adding scratches makes your image darker on average.
If you apply GetAverage() to an image with a different color type (Bgr, Bgra, Hsv, Hls, ...), it will return the average as a color of that type.
One last thing:
I saw the GetAverage method, but how can I get the color percentage of an image for a specified color, e.g. black is 80%, white is 20%?
You cannot do that with this approach. As said above, GetAverage() returns a Gray color. The only thing I can suggest is to manually convert the result into a percentage of the color range with something like this:
private double ConvertToPercentage(double valueToConvert, int rangeStartAt, int rangeEndAt) {
    // Map a value inside [rangeStartAt, rangeEndAt] onto a 0-100 scale
    return (valueToConvert - rangeStartAt) / (rangeEndAt - rangeStartAt) * 100;
}
Obviously this piece of code only works with single-valued colors (not RGB, for example, which has three channels). You can easily re-implement something like this if you need it.
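If what you actually need is the black/white percentage rather than the average, a simple (if slow) approach is to threshold and count the pixels yourself. Here is a minimal sketch using plain System.Drawing on the grayscale ROI; the threshold of 128 is an assumed cut-off you would tune:

using System.Drawing;

static class PercentageHelper
{
    // Percentage of pixels darker than a threshold in a grayscale bitmap.
    public static double DarkPercentage(Bitmap gray, int threshold)
    {
        int dark = 0, total = gray.Width * gray.Height;
        for (int x = 0; x < gray.Width; x++)
            for (int y = 0; y < gray.Height; y++)
                if (gray.GetPixel(x, y).R < threshold)  // R == G == B in a grayscale bitmap
                    dark++;
        return 100.0 * dark / total;
    }
}

Then black = DarkPercentage(roi, 128) and white = 100 - black, under the assumption that every pixel is classified as one or the other.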

Find image within image (object detection)

I have different images which all have some kind of border around the "real" image. What I would like to achieve is to find the "real" image (its size and location in pixels).
For me the challenge is that the border is not always black (it can be any shade of black or grey, with a lot of noise) and the "real" image (water with a shark in this example) can have any combination of color, saturation, and so on.
Now, in general I'm aware of algorithms like Canny, blob detection, Hough lines, and so on, but I have only just started using them. So far I have managed to find the border for a specific image, but as soon as I apply the same algorithms and parameters to the next image, it doesn't work. My current approach looks like this (pseudo code):
Convert to gray with CvInvoke.CvtColor(_processedImage, tempMat, CvEnum.ColorConversion.Rgb2Gray)
Downsample with CvInvoke.PyrDown(srcImage, targetImage) and CvInvoke.PyrUp(srcImage, targetImage)
Blur the image with CvInvoke.GaussianBlur(_processedImage, bluredImage, New Drawing.Size(5, 5), 0)
Binarize with CvInvoke.Threshold(_processedImage, blackWhiteImage, _parameters.BinarizeThreshold, 255, CvEnum.ThresholdType.Binary)
Detect edges with CvInvoke.Canny(_processedImage, imgEdges, 60, 100)
Find contours with CvInvoke.FindContours(_processedImage, contours, Nothing, CvEnum.RetrType.External, CvEnum.ChainApproxMethod.ChainApproxSimple)
Assume that the largest contour is the real image
I have already tried different approaches based on, for example:
Thresholding the saturation channel and taking a bounding box
Thresholding, Canny edge detection and finding contours
Any hint, especially on how to find proper parameters (that apply to all images) for algorithms like (adaptive) thresholding and Canny, as well as ideas for improving the processing pipeline, would be highly appreciated.
You can try to subtract a black image from this image, and you will get the inside image. One way to do this: use image subtraction to compare images in C#.
If the border were uniform, this would be easy. Use cv::reduce to find the MIN and MAX of each row and column; then count the top, left, bottom and right rows/columns whose MIN and MAX are equal (or very close) to the pixel value in a nearby corner. For sanity, maybe check that the border colour is the same on all sides.
In your example the border contains faint red stuff, but a row/column approach might still be a useful way to simplify the problem. Maybe, as Nofar suggests, take an absolute difference with what you think is the background colour; square it, convert to grey, then reduce to sums of rows and columns. You still need to find edges, but you have reduced the data from two dimensions to one.
If there's a large border and lots of noise, maybe iterate: in the second pass, exclude the rows you think comprise the border from the statistics on columns (and vice versa).
EDIT: The above only works for an upright rectangle! If it could be rotated, then the row/column projection method won't work. In that case I might go for the sum-of-squared differences as above (don't start by converting to grey, as that could throw away information), followed by blurring or some morphology, then edge detection, and then some kind of Hough transform to find straight edges.
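To make the upright-rectangle row/column idea concrete in C#, here is a rough sketch on a grayscale Bitmap; the tolerance and the choice of the top-left corner as the border colour sample are assumptions to tune:

using System;
using System.Drawing;

static class BorderTrim
{
    // Trim border rows/columns whose min and max stay close to the corner value.
    public static Rectangle FindRealImage(Bitmap gray, int tolerance)
    {
        int corner = gray.GetPixel(0, 0).R;   // assumed border colour sample
        Func<int, bool> rowIsBorder = y =>
        {
            int min = 255, max = 0;
            for (int x = 0; x < gray.Width; x++)
            {
                int v = gray.GetPixel(x, y).R;
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return Math.Abs(min - corner) <= tolerance && Math.Abs(max - corner) <= tolerance;
        };
        Func<int, bool> colIsBorder = x =>
        {
            int min = 255, max = 0;
            for (int y = 0; y < gray.Height; y++)
            {
                int v = gray.GetPixel(x, y).R;
                if (v < min) min = v;
                if (v > max) max = v;
            }
            return Math.Abs(min - corner) <= tolerance && Math.Abs(max - corner) <= tolerance;
        };

        // Walk inwards from each side while the rows/columns still look like border.
        int top = 0, bottom = gray.Height - 1, left = 0, right = gray.Width - 1;
        while (top < bottom && rowIsBorder(top)) top++;
        while (bottom > top && rowIsBorder(bottom)) bottom--;
        while (left < right && colIsBorder(left)) left++;
        while (right > left && colIsBorder(right)) right--;
        return new Rectangle(left, top, right - left + 1, bottom - top + 1);
    }
}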

Fill a PNG image with a certain number of points

I'm working on a piece of software that needs a strange feature. I chose a PNG image like the one attached, and I need to place a certain number of points uniformly on the black surface. I started by looping over each pixel and changing its color to black, just for the design, but now I need to think of an algorithm to fill it with points (say 200 red pixels). Do you have an idea how to do this?
What I have in mind is to count the black pixels, then do something like blackPixelsPerPoint = blackPixels / numberOfPoints. After this, I need to place a red point every blackPixelsPerPoint black pixels.
The result needs to be something like the letter N.
The points need to have almost the same spacing between them and cover the whole black surface if possible (depending on the number of points).
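A minimal C# sketch of exactly that counting idea: collect the black pixels in scanline order and colour every blackPixelsPerPoint-th one red. Scanline order gives roughly even coverage; for visually even spacing you would want something like grid-based or Poisson-disk sampling instead:

using System.Collections.Generic;
using System.Drawing;

static class PointFiller
{
    // Place roughly numberOfPoints red pixels evenly (in scanline order) over the black area.
    public static void PlacePoints(Bitmap img, int numberOfPoints)
    {
        var blackPixels = new List<Point>();
        for (int y = 0; y < img.Height; y++)
            for (int x = 0; x < img.Width; x++)
            {
                Color c = img.GetPixel(x, y);
                if (c.R == 0 && c.G == 0 && c.B == 0)
                    blackPixels.Add(new Point(x, y));
            }

        if (blackPixels.Count == 0 || numberOfPoints == 0) return;
        int blackPixelsPerPoint = blackPixels.Count / numberOfPoints;
        if (blackPixelsPerPoint == 0) blackPixelsPerPoint = 1;

        // Take every blackPixelsPerPoint-th black pixel and paint it red.
        for (int i = 0; i < blackPixels.Count; i += blackPixelsPerPoint)
            img.SetPixel(blackPixels[i].X, blackPixels[i].Y, Color.Red);
    }
}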

Image processing - Reduce object thickness without removing it

I have an image like the one below.
What I want is a monochrome image such that the white parts are kept white and the rest is black. However, the tricky part is that I also want to reduce the white parts to be one pixel in thickness.
It's the second part that I'm stuck with.
My first thought was to do a simple threshold, then use a sort of "Game of Life" style iterative process where a white pixel is removed if it has neighbours on one side but not the other (i.e. it's an edge). However, I have a feeling this would reduce the ends of lines to nothing over time, so I'd end up with a blank image.
What algorithm can I use to get the image I want, given the original image?
(My language of choice is C#, but anything is fine)
Original Image:
After detecting the morphological extended maxima of a given height:
and then thinning gives:
You can also manipulate the height parameter, or prune the thinned image.
Code in Mathematica:
img = ColorConvert[Import["http://i.stack.imgur.com/zPtl6.png"], "Grayscale"];
max = MaxDetect[img, .55]
Thinning[max]
EDIT: I followed my own advice, and a height of .4 gives segments which are more precisely localized:
I suggest that you look into binary morphological transformations such as erosion and dilation. Graphics libraries such as OpenCV (http://opencv.willowgarage.com/wiki/) and the statistical/matrix tool GNU Octave (http://octave.sourceforge.net/image/function/bwmorph.html) support these operations.
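If you want to stay in C#, the one-pixel-thickness requirement is exactly what thinning (skeletonization) algorithms solve; a classic one is Zhang-Suen. Below is a minimal sketch of that standard algorithm operating on a bool mask (true = white) that you would obtain from your threshold step; it is an illustration, not production code:

using System.Collections.Generic;

static class Skeleton
{
    // Zhang-Suen thinning: repeatedly peel boundary pixels until only
    // a one-pixel-wide skeleton remains.
    public static bool[,] Thin(bool[,] img)
    {
        int w = img.GetLength(0), h = img.GetLength(1);
        bool changed = true;
        while (changed)
        {
            changed = false;
            for (int step = 0; step < 2; step++)
            {
                var toRemove = new List<(int X, int Y)>();
                for (int x = 1; x < w - 1; x++)
                    for (int y = 1; y < h - 1; y++)
                    {
                        if (!img[x, y]) continue;
                        // Neighbours p2..p9, clockwise starting from north.
                        bool[] p = {
                            img[x, y - 1], img[x + 1, y - 1], img[x + 1, y], img[x + 1, y + 1],
                            img[x, y + 1], img[x - 1, y + 1], img[x - 1, y], img[x - 1, y - 1]
                        };
                        int b = 0;                          // number of foreground neighbours
                        foreach (bool v in p) if (v) b++;
                        int a = 0;                          // 0 -> 1 transitions around the ring
                        for (int i = 0; i < 8; i++)
                            if (!p[i] && p[(i + 1) % 8]) a++;
                        bool cond = step == 0
                            ? !(p[0] && p[2] && p[4]) && !(p[2] && p[4] && p[6])
                            : !(p[0] && p[2] && p[6]) && !(p[0] && p[4] && p[6]);
                        if (b >= 2 && b <= 6 && a == 1 && cond)
                            toRemove.Add((x, y));
                    }
                foreach (var (x, y) in toRemove) { img[x, y] = false; changed = true; }
            }
        }
        return img;
    }
}

The conditions a == 1 and 2 <= b <= 6 are what keep line endpoints from eroding away, which was exactly the worry with the naive "Game of Life" approach in the question.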

How to detect if an image is being displayed as a negative (inverted)?

Is it possible to detect in C# if an image is being displayed as a "negative image", in other words with its colors inverted?
If you don't have the original image to compare against (in which case you would be detecting whether one image is the negative of another), then you can only try to guess that it is some kind of negative image, but you cannot be 100% sure about it.
A negative image is simply one where each pixel colour is:
Red = 255 - originalRed
Green = 255 - originalGreen
Blue = 255 - originalBlue
If you know what the colour of any particular pixel should be you can test that to see if it matches or is inverted.
Other than that, I can't think of a foolproof way that would work for any image. You could look at colour distributions, but that will depend on the image.
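To make the "test against a known original" idea concrete, here is a minimal C# sketch applying the 255-minus-channel formula above pixel by pixel; the tolerance parameter (to absorb compression noise) is an assumption:

using System;
using System.Drawing;

static class NegativeCheck
{
    // True if 'candidate' looks like the channel-wise negative of 'original'.
    public static bool IsNegativeOf(Bitmap original, Bitmap candidate, int tolerance)
    {
        if (original.Width != candidate.Width || original.Height != candidate.Height)
            return false;
        for (int x = 0; x < original.Width; x++)
            for (int y = 0; y < original.Height; y++)
            {
                Color o = original.GetPixel(x, y);
                Color c = candidate.GetPixel(x, y);
                if (Math.Abs(c.R - (255 - o.R)) > tolerance ||
                    Math.Abs(c.G - (255 - o.G)) > tolerance ||
                    Math.Abs(c.B - (255 - o.B)) > tolerance)
                    return false;
            }
        return true;
    }
}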
You would need to do some statistical analysis on the properties of inverted and non-inverted images to get some criteria to check. For example, perhaps there are colors that are uncommon in normal images but common in inverted ones. Maybe the center of a normal image is usually brighter than the edges, or the top is brighter than the bottom.
No method is going to be 100% accurate, as any image is ultimately just as valid as any other.
