I have an image like below.
What I want is a monochrome image such that white parts are kept white, the rest is black. However, the tricky part is that I also want to reduce the white parts to be one pixel in thickness.
It's the second part that I'm stuck with.
My first thought was to do a simple threshold, then use a sort of "Game of Life"-style iterative process in which a white pixel is removed if it has neighbours on one side but not the other (i.e. it's an edge). However, I have a feeling this would reduce the ends of lines to nothing over time, so I'd end up with a blank image.
What algorithm can I use to get the image I want, given the original image?
(My language of choice is C#, but anything is fine)
Original Image:
After detecting the morphological extended maxima of a given height:
and then thinning gives:
You can also manipulate the height parameter, or prune the thinned image.
Code in Mathematica:
img = ColorConvert[Import["http://i.stack.imgur.com/zPtl6.png"], "Grayscale"];
max = MaxDetect[img, .55] (* extended maxima of height .55 *)
Thinning[max] (* reduce each white region to a one-pixel-wide skeleton *)
EDIT: I followed my own advice, and a height of .4 gives segments which are more precisely localized:
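Since the question asks for C#, here is a minimal sketch of Zhang-Suen thinning on a raw binary array (my assumptions: a bool[,] where true = white foreground, already thresholded); it is a standard thinning pass, not necessarily the exact algorithm behind Mathematica's Thinning[]:

using System;
using System.Collections.Generic;
using System.Linq;

// Thin a binary image in place until no more pixels can be removed.
static void ZhangSuenThin(bool[,] image)
{
    int h = image.GetLength(0), w = image.GetLength(1);
    bool changed;
    do
    {   // single '|' so both sub-passes always run
        changed = RemovePass(image, h, w, firstPass: true)
                | RemovePass(image, h, w, firstPass: false);
    } while (changed);
}

static bool RemovePass(bool[,] img, int h, int w, bool firstPass)
{
    var toClear = new List<(int y, int x)>();
    for (int y = 1; y < h - 1; y++)
    for (int x = 1; x < w - 1; x++)
    {
        if (!img[y, x]) continue;
        // Neighbours p2..p9, clockwise, starting with the pixel above.
        bool[] p = {
            img[y - 1, x], img[y - 1, x + 1], img[y, x + 1], img[y + 1, x + 1],
            img[y + 1, x], img[y + 1, x - 1], img[y, x - 1], img[y - 1, x - 1]
        };
        int b = p.Count(v => v);  // white neighbour count
        int a = 0;                // 0 -> 1 transitions around the ring
        for (int i = 0; i < 8; i++)
            if (!p[i] && p[(i + 1) % 8]) a++;
        bool cond = firstPass
            ? !(p[0] && p[2] && p[4]) && !(p[2] && p[4] && p[6])
            : !(p[0] && p[2] && p[6]) && !(p[0] && p[4] && p[6]);
        if (b >= 2 && b <= 6 && a == 1 && cond)
            toClear.Add((y, x));
    }
    foreach (var (y, x) in toClear) img[y, x] = false;
    return toClear.Count > 0;
}

Because a pixel is only removed when its neighbour configuration preserves connectivity (line endpoints have a single white neighbour, which fails the 2 <= B <= 6 test), the endpoints survive, avoiding the erode-to-nothing problem the question worries about.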
I suggest that you look into binary morphological transformations such as erosion and dilation. Graphics libraries such as OpenCV (http://opencv.willowgarage.com/wiki/) and the statistical/matrix tool GNU Octave (http://octave.sourceforge.net/image/function/bwmorph.html) support these operations.
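For example, a small Emgu CV sketch (OpenCV's .NET wrapper; my choice, since the question mentions C#) of a single erosion and dilation; the 3x3 rectangular kernel and one iteration are placeholders to tune:

// `binary` is assumed to be an 8-bit single-channel thresholded Mat.
Mat kernel = CvInvoke.GetStructuringElement(
    ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));
Mat eroded = new Mat(), dilated = new Mat();
CvInvoke.Erode(binary, eroded, kernel, new Point(-1, -1), 1,
               BorderType.Default, new MCvScalar(0));
CvInvoke.Dilate(binary, dilated, kernel, new Point(-1, -1), 1,
                BorderType.Default, new MCvScalar(0));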
I have different images which all have some kind of border around the "real" image. What I would like to achieve is to find the "real" image (its size and location in pixels).
For me the challenge is that the border is not always black (it can be any shade of black or grey, with a lot of noise) and the "real" image (water with a shark in this example) can have any combination of color, saturation, ...
Now, in general I'm aware of algorithms like Canny, blob detection, Hough lines, ..., but I have just started using them. So far I have managed to find the border for a specific image, but as soon as I apply the same algorithms and parameters to the next image it doesn't work. My current approach looks like this (pseudo code; a consolidated C# sketch follows the list):
1. Convert to gray: CvInvoke.CvtColor(_processedImage, tempMat, CvEnum.ColorConversion.Rgb2Gray)
2. Downsample and upsample: CvInvoke.PyrDown(srcImage, targetImage) followed by CvInvoke.PyrUp(srcImage, targetImage)
3. Blur the image: CvInvoke.GaussianBlur(_processedImage, bluredImage, New Drawing.Size(5, 5), 0)
4. Binarize: CvInvoke.Threshold(_processedImage, blackWhiteImage, _parameters.BinarizeThreshold, 255, CvEnum.ThresholdType.Binary)
5. Detect edges: CvInvoke.Canny(_processedImage, imgEdges, 60, 100)
6. Find contours: CvInvoke.FindContours(_processedImage, contours, Nothing, CvEnum.RetrType.External, CvEnum.ChainApproxMethod.ChainApproxSimple)
7. Assume that the largest contour is the real image
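For reference, a consolidated C# (Emgu CV) sketch of the pipeline above. The thresholds and kernel size are the same placeholder values as in the pseudo code, not known-good parameters, and srcImage stands for the input Mat:

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

double binarizeThreshold = 128; // placeholder for _parameters.BinarizeThreshold
Mat gray = new Mat(), down = new Mat(), up = new Mat(),
    blurred = new Mat(), bw = new Mat(), edges = new Mat();
CvInvoke.CvtColor(srcImage, gray, ColorConversion.Rgb2Gray);
CvInvoke.PyrDown(gray, down);  // downsample...
CvInvoke.PyrUp(down, up);      // ...and back up, discarding fine detail
CvInvoke.GaussianBlur(up, blurred, new Size(5, 5), 0);
CvInvoke.Threshold(blurred, bw, binarizeThreshold, 255, ThresholdType.Binary);
CvInvoke.Canny(bw, edges, 60, 100);
using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(edges, contours, null,
        RetrType.External, ChainApproxMethod.ChainApproxSimple);
    // Assume the largest contour bounds the "real" image.
    Rectangle best = Rectangle.Empty;
    double bestArea = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > bestArea)
        {
            bestArea = area;
            best = CvInvoke.BoundingRectangle(contours[i]);
        }
    }
}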
I already tried different approaches based on for example:
Thresholding saturation channel and bounding box
Thresholding, canny edge and finding contours
Any hint, especially on how to find proper parameters (ones that apply to all images) for algorithms like (adaptive) threshold and Canny, as well as ideas for improving the processing pipeline, would be highly appreciated.
You can try to subtract a black image from this image; that will give you the inside image. One way to do this:
Use image subtraction to compare images in C#
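As a rough Emgu CV (C#) sketch of that idea, assuming the border is (near) black and `src` is the input image; the tolerance of 30 is a guess:

Mat black = new Mat(src.Rows, src.Cols, DepthType.Cv8U, 3);
black.SetTo(new MCvScalar(0, 0, 0));  // assumed border colour
Mat diff = new Mat(), gray = new Mat(), mask = new Mat();
CvInvoke.AbsDiff(src, black, diff);   // difference from the border colour
CvInvoke.CvtColor(diff, gray, ColorConversion.Bgr2Gray);
CvInvoke.Threshold(gray, mask, 30, 255, ThresholdType.Binary);
// Non-zero regions of `mask` should correspond to the inside image.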
If the border were uniform, this would be easy. Use cv::reduce to find the MIN and MAX of each row and column; then count the top, left, bottom and right rows/columns whose MIN and MAX are equal (or very close) to the pixel value in a nearby corner. For sanity, maybe check that the border colour is the same on all sides.
In your example the border contains faint red stuff, but a row/column approach might still be a useful way to simplify the problem. Maybe, as Nofar suggests, take an absolute difference with what you think is the background colour; square it, convert to grey, then reduce to sums of rows and columns. You still need to find edges, but you have reduced the data from two dimensions to one.
If there's a large border and lots of noise, maybe iterate: in the second pass, exclude the rows you think comprise the border from the statistics on the columns (and vice versa).
EDIT: The above only works for an upright rectangle! If it could be rotated, then the row/column projection method won't work. In that case I might go for the sum-of-squared differences as above (don't start by converting to grey, as it could throw away information), followed by blurring or some morphology, edge detection, and then some kind of Hough transform to find straight edges.
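As a sketch of the row/column idea in C# with Emgu CV, using plain loops in place of cv::reduce (the corner sampling and tolerance are my assumptions; columns would be handled symmetrically):

// Count the flat border rows at the top of the image. A row is treated as
// border if its min and max stay within `tol` of the top-left corner value.
static int TrimTop(Image<Gray, byte> img, int tol)
{
    byte corner = img.Data[0, 0, 0];
    int top = 0;
    for (int y = 0; y < img.Height; y++)
    {
        byte min = 255, max = 0;
        for (int x = 0; x < img.Width; x++)
        {
            byte v = img.Data[y, x, 0];
            if (v < min) min = v;
            if (v > max) max = v;
        }
        if (Math.Abs(min - corner) > tol || Math.Abs(max - corner) > tol)
            break;  // first row that is not (almost) flat border colour
        top = y + 1;
    }
    return top;  // number of border rows to crop from the top
}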
This is a work project. I inherited some code using SharpDX (a DirectX layer). One of my tasks is to fix a piece of code where certain image effects are applied to a geometric shape containing a fill. If the filter is applied to the fill itself, it doesn't conform to the edges. I've figured out the code to pull out an excerpt using the Geometry of the object. For various reasons, they want to keep the fill that exists outside of the shape (namely, we have some distortion effects that pull in pixels from outside the shape), so I need to overlay it over the background. The problem I'm running into is that I'm getting this single-pixel border...
Applying the Soft Edge filter to the visible part
The background with the shape cut out
The two composited together in the program
What I'm actually getting
I can't share a good bit of the code, due to parts of it being proprietary, but the mask is a byte array. I'm building it using the following code:
SingleChannelBitmap mask = new SingleChannelBitmap(MaxRequiredPixels.Width, MaxRequiredPixels.Height, 255);
mask.FillShape(new RectangleF(new PointF(0,0), mask.Size), this.Geometry, 0);
255 is the maximum Alpha value (transparent). I invert it to take the slice out of the background. The only thing I can think of is that, when I do the masking, it's not including the outer edge of the Geometry. I'm going to try expanding the mask by one pixel in the crudest way possible (basically, scanning through and, for anything with 0 transparency, adding a 0-transparency pixel to the left, right, up, and down; sketched below), but I know there has to be a more elegant solution.
This has to work for the 3D Edge bevel filter as well, so doing an arbitrarily large whitespace probably won't work for me either.
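For what it's worth, the brute-force one-pixel expansion described above could look like the following sketch (my assumption: the mask is a 2D byte array in which 0 means fully opaque, i.e. inside the shape):

// Grow the 0-transparency (inside) region of the mask by one pixel
// in the four cardinal directions.
static byte[,] ExpandMaskByOnePixel(byte[,] mask)
{
    int h = mask.GetLength(0), w = mask.GetLength(1);
    byte[,] result = (byte[,])mask.Clone();
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (mask[y, x] != 0) continue;
            if (y > 0) result[y - 1, x] = 0;
            if (y < h - 1) result[y + 1, x] = 0;
            if (x > 0) result[y, x - 1] = 0;
            if (x < w - 1) result[y, x + 1] = 0;
        }
    return result;
}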
What you describe is essentially the same haloing problem that sometimes occurs when displaying PNG images. The PNG export process in several programs stores a solid color for any portion of the PNG that has zero alpha, instead of the actual color at those pixels. This makes them behave like other image formats (such as GIF) that use a specific color to encode transparent pixels. It significantly reduces the size of the file; however, it can cause issues when sampling the image.
Your situation is similar. Although the masked pixels have zero alpha, bilinear sampling may sample in between pixels, mixing both color and alpha values (unless pixel and texel centers are perfectly aligned). For example, if you have a 100% alpha white pixel next to a 0% alpha red pixel and sample in between both, the result will be a pink pixel at 50% alpha.
There are several possible solutions:
You could extend the borders of the color layer, such that the 0% alpha border has the same color as its non-0% alpha adjacent pixels (see the sketch after this list).
Intentionally line up the pixel and texel centers, although this can be tricky and/or not possible, depending on your requirements (mostly dependent on resolution).
Use 'nearest' sampling, instead of bilinear when displaying the image. This way, you will never blend in a 0% alpha pixel. However, this may also not be desirable, because your image will likely exhibit more aliasing effects.
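A sketch of the first option (bleeding color into the transparent border) on a raw pixel buffer; the 8-bit row-major RGBA layout is my assumption, not something from the question:

// For every fully transparent pixel that touches a non-transparent
// neighbour, copy the average neighbour colour while keeping alpha at 0,
// so bilinear sampling no longer mixes in an arbitrary hidden colour.
static void BleedColorIntoTransparentBorder(byte[] pixels, int width, int height)
{
    var src = (byte[])pixels.Clone();
    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            int i = (y * width + x) * 4;
            if (src[i + 3] != 0) continue;  // only touch 0%-alpha pixels
            int r = 0, g = 0, b = 0, n = 0;
            foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                int j = (ny * width + nx) * 4;
                if (src[j + 3] == 0) continue;
                r += src[j]; g += src[j + 1]; b += src[j + 2]; n++;
            }
            if (n > 0)
            {
                pixels[i]     = (byte)(r / n);
                pixels[i + 1] = (byte)(g / n);
                pixels[i + 2] = (byte)(b / n);
                // alpha stays 0; only the hidden colour changes
            }
        }
}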
Your help is much appreciated. I am using C# and EmguCV for image processing.
I have tried noise removal, but nothing happens. I also tried a median filter, but it only works on the first image; on the second image it only makes things blurry and the objects larger and more square-like.
I want to remove the obviously distinct objects (the green ones) in my image below so that it turns all black, because they are clearly separated and not grouped, unlike in the second image below.
Image 1:
In the same way, I want to process my image below, but remove only those objects (the black ones) that are not grouped/lumped together, so that what remains in the image are the objects that are grouped/larger in scale.
Image 2:
Thank you
You may try Gaussian Blur and then apply Threshold to the image.
Gaussian blur is a widely used effect in graphics software, typically applied to reduce image noise and detail, which I think matches your requirements well.
For your 1st image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // you may need to adjust Size and Sigma depending on the input image
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 10, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
For your 2nd image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // you may need to adjust Size and Sigma depending on the input image
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 240, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
Hope this helps!
You should first threshold the image using Otsu's method. Second, run connected component analysis on the thresholded image. Third, go over all the components you found and delete from the original image every component whose size is smaller than some minimum size:
1. cvThreshold with CV_THRESH_BINARY or CV_THRESH_BINARY_INV (choose according to the image) + CV_THRESH_OTSU: http://www.emgu.com/wiki/files/1.3.0.0/html/9624cb8e-921e-12a0-3c21-7821f0deb402.htm + http://www.emgu.com/wiki/files/1.3.0.0/html/bc08707a-63f5-9c73-18f4-aeab7878d7a6.htm
2. CvInvoke.FindContours (RetrType == External, ChainApproxNone)
3. For each contour found in step 2, calculate CvInvoke.ContourArea
4. If the area is smaller than minArea, draw the value you want for those pixels (0, I suppose) into the original image (the one you want to filter) using CvInvoke.DrawContours with the current contour and thickness == -1 to fill the inside of the contour
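Put together, a C# (Emgu CV) sketch of these steps; minArea and the file name are placeholders:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

double minArea = 100;  // placeholder: tune to the size of the blobs to remove
Mat gray = CvInvoke.Imread("input.png", ImreadModes.Grayscale);
Mat bin = new Mat();
CvInvoke.Threshold(gray, bin, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);
using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(bin, contours, null,
        RetrType.External, ChainApproxMethod.ChainApproxNone);
    for (int i = 0; i < contours.Size; i++)
        if (CvInvoke.ContourArea(contours[i]) < minArea)
            CvInvoke.DrawContours(gray, contours, i,
                new MCvScalar(0), -1);  // thickness -1 fills the contour with black
}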
I'm trying to develop an object detection algorithm. I plan to compare two images taken with different focus: one correctly focused on the object, and one correctly focused on the background.
From reading about autofocus algorithms, I think this can be done with a contrast-detection passive autofocus algorithm, which works on the light intensity at the sensor.
But I'm not sure that the light intensity value from the image file is the same as the value from the sensor (it is not a RAW image file, but a JPEG). Is the light intensity value in a JPEG image the same as on the sensor? Can I use it to detect focus correctness with contrast detection? Is there a better way to detect which areas of the image are in correct focus?
I have tried to process the images a bit and I saw some progress. This is what I did using OpenCV:
converted the images to gray using cvtColor(I, Mgrey, CV_RGB2GRAY);
downsampled/decimated them a bit, since they are huge (several MB);
took the sum of the absolute horizontal and vertical gradients using http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#cv.Sobel.
The result is below. The foreground, when in focus, does look brighter than the background, and vice versa.
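In Emgu CV (C#), the same gradient measure could be sketched as follows, assuming `gray` is the already-downsampled grayscale Mat (kernel size and scaling are my choices):

Mat gx = new Mat(), gy = new Mat(), ax = new Mat(), ay = new Mat(),
    sharpness = new Mat();
CvInvoke.Sobel(gray, gx, DepthType.Cv32F, 1, 0, 3);  // horizontal gradient
CvInvoke.Sobel(gray, gy, DepthType.Cv32F, 0, 1, 3);  // vertical gradient
CvInvoke.ConvertScaleAbs(gx, ax, 1, 0);  // |dI/dx| as 8-bit
CvInvoke.ConvertScaleAbs(gy, ay, 1, 0);  // |dI/dy| as 8-bit
CvInvoke.Add(ax, ay, sharpness);  // bright pixels = strong local contrast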
You can probably try to match and subtract these images using the translation from matchTemplate() on the original gray images, and then assemble the pieces using the convex hull of the results as an initialization mask for grab cut, plugging the colour images back in. In case you aren't familiar with grab cut, check out my answer to this question.
But maybe a simpler method will work here as well. You can try to apply a strong blur to your gradient images instead of precise matching and see what the difference gives you in that case. The images below demonstrate the idea, with the difference turned into binary masks.
It would be helpful to see your images. If I understood you correctly, you are trying to separate background from foreground using a focus (or blur) cue. Contrast in the image depends on focus, but it also depends on the contrast of the target itself. So if the target is clouds, you will never get sharp edges or high contrast. Finally, a JPEG that uses little compression should not affect the critical properties of your algorithm.
I would try to capture a number of images at all possible focus settings in a row, and then build a graph of contrast as a function of focal length (or, even better, focusing distance). The peak in this graph will give you the distance to the object regardless of the object's own contrast. Note, however, that the accuracy of such visual cues drops sharply with viewing distance.
This is what I expect you to obtain when measuring the sum of absolute gradient in a small window:
The next step for you will be to combine the areas that are in focus with areas of solid colour, that is, areas that have no particular peak in the graph but nonetheless belong to the same object. Sometimes getting a convex hull of the focused areas can help to pinpoint the rough boundary of the object.
I'm looking for an algorithm (C#) or some info on how to detect the edge of an object in an image (the target is a melanoma). I've found the AForge.NET library, but I need to detect the edge without losing the image color. The examples below were prepared using Paint.NET.
Before:
After:
I need only that blue edge (the color inside doesn't matter), or those blue pixels' coordinates.
EDIT
Matthew Chambers was right: converting the image to grayscale improves the effectiveness of the algorithm.
The first result image is based on the original coloured image, and the second one is based on the grayscale image. The blue pixels correspond to an HSV Value < 30. You can see the differences for yourself. Thanks m8!
You need to consider a few points related to the problem you are trying to solve.
What is an edge in a picture? Generally, it is where the colour changes at a high rate, i.e. from dark to light over a short distance.
But this raises the question of what a high rate of change for a colour actually is. For example, the red and green values could stay the same while the blue value changes at a high rate. Would we count this as an edge? This could occur for any combination of the RGB values.
For this reason, images are typically converted to greyscale for edge detection; otherwise you could get different edge results for each RGB channel.
The simplest approach is therefore to use a typical edge detection algorithm that works on a greyscale image, and then overlay the result onto the original image.
How about using the Edges class:
// create the filter (AForge.Imaging.Filters.Edges)
Edges filter = new Edges();
// apply the filter to the source image
var newImage = filter.Apply(originalImage);
Then do a threshold on newImage, and finally overlay newImage on top of originalImage.
You could convert to the Lab colour space, split the channels, run edge detection code (like the one provided in the other answer) on the L channel only, then add back the a and b channels.
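A rough Emgu CV (C#) sketch of that idea, with Canny standing in for whichever edge detector you prefer and `bgrImage` assumed to be the input Mat:

Mat lab = new Mat(), edges = new Mat();
CvInvoke.CvtColor(bgrImage, lab, ColorConversion.Bgr2Lab);
using (var channels = new VectorOfMat())
{
    CvInvoke.Split(lab, channels);                // channel 0 is L (lightness)
    CvInvoke.Canny(channels[0], edges, 50, 150);  // edges from L only
}
// `edges` can then be overlaid on the original colour image.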
Considering the images and the question, it does not look like you need edge detection.
I would use an adaptive threshold technique instead, find the blob and finally extract the edge from it.
Here is some Matlab code to illustrate what I mean:
function FindThresh()
    i = imread('c:\b.png');                 % read the colour image
    figure; imshow(i);
    graythresh(i)                           % Otsu level on the full image, for reference
    th = graythresh(i(:,:,2))*255;          % Otsu level on the green channel, scaled to 0-255
    figure; imshow(i(:,:,2) > th)           % thresholded green channel
    i1 = imclose(i(:,:,2) > th, strel('diamond',3));  % close gaps to get one solid blob
    figure; imshow(i1)
    e = edge(i1);                           % edge of the binary blob
    indexes = find(e);                      % linear indices of edge pixels
    [r,c] = ind2sub(size(i1), indexes);     % convert to row/column coordinates
    figure; imshow(e)
    figure; imshow(i); hold on; scatter(c,r);  % overlay edge points on the original
end
and the intermediate results images:
You can see that it is not perfect, but by improving it a little bit you will get more robust results than with edge detection, which is not a stable operation.