I am trying to use image processing to detect and extract information such as:
Attacker hero
Victim hero
However, my method is not very accurate and I could use some ideas to help make it better.
I have a screenshot of a killfeed from a game:
I use Emgu to perform Canny edge detection on the image:
using (Image<Bgr, Byte> img = new Image<Bgr, byte>(bitmap))
{
    // Convert the image to grayscale and filter out the noise
    using (Image<Gray, Byte> gray = img.Convert<Gray, Byte>().PyrDown().PyrUp())
    {
        Image<Gray, Byte> cannyEdges = gray.Canny(40, 100);
        // Smooth out the found edges
        cannyEdges._SmoothGaussian(3);
    }
}
And get this result:
From the result I loop through the image looking for a straight line two pixels wide, which I then use to detect the border of an icon.
This is really inaccurate, though: as you can see, the Canny-edged image has lines I am not interested in, and the second icon does not even have a clear enough border to detect.
Is there a better way to detect the icons?
Before using heavy algorithms like Canny edge detection and other FFT-style processing, I would recommend using common sense and your eyes to:
select a subset of the image you want to search in
clean it
For instance, you may want to :
filter on the cyan color in order to detect the cyan square and/or cyan name (it also works with red); see the sketch below
search for the white arrow that is always the same, then deduce where the squares are
Once it's clean, you'll have only the zone you want to process (for instance, both squares with heroes). Any algorithm you then apply will be much more effective.
If the hero pictures are always the same, you may even want to perform a simple difference between the screen zone and a reference image.
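If you want to try the cyan filter first, here is a minimal sketch assuming Emgu CV's CvInvoke API; the file name and the BGR bounds are placeholders you would tune against the game's actual killfeed colour:

using Emgu.CV;
using Emgu.CV.Structure;

Mat src = CvInvoke.Imread("killfeed.png");
Mat mask = new Mat();

// Keep only pixels whose BGR values fall inside a "cyan" range.
using (ScalarArray lower = new ScalarArray(new MCvScalar(200, 200, 0)))
using (ScalarArray upper = new ScalarArray(new MCvScalar(255, 255, 120)))
{
    CvInvoke.InRange(src, lower, upper, mask);
}

// mask is now white where the cyan square/name is; the bounding boxes
// of its blobs tell you where the hero icons sit.

The same mask-then-locate pattern works for the red team colour with different bounds.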
I do have different images which all have some kind of border around the "real" image. What I would like to achieve is to find the "real" image (size and location in pixels).
For me the challenge is that the border is not always black (can be any kind of black or grey with a lot of noise) and the "real" image (water with shark in this example) can have any combination of color, saturation, ...
Now in general I'm aware of algorithms like Canny, blob detection, Hough lines, ..., but I have only just started using them. So far I have managed to find the border for a specific image, but as soon as I apply the same algorithms and parameters to the next image it doesn't work. My current approach looks like this (pseudo code):
1. Convert to gray: CvInvoke.CvtColor(_processedImage, tempMat, CvEnum.ColorConversion.Rgb2Gray)
2. Downsample and upsample with CvInvoke.PyrDown(srcImage, targetImage) and CvInvoke.PyrUp(srcImage, targetImage)
3. Blur the image with CvInvoke.GaussianBlur(_processedImage, bluredImage, New Drawing.Size(5, 5), 0)
4. Binarize with CvInvoke.Threshold(_processedImage, blackWhiteImage, _parameters.BinarizeThreshold, 255, CvEnum.ThresholdType.Binary)
5. Detect edges with CvInvoke.Canny(_processedImage, imgEdges, 60, 100)
6. Find contours with CvInvoke.FindContours(_processedImage, contours, Nothing, CvEnum.RetrType.External, CvEnum.ChainApproxMethod.ChainApproxSimple)
7. Assume that the largest contour is the real image
I already tried different approaches based on for example:
Thresholding saturation channel and bounding box
Thresholding, canny edge and finding contours
Any hints would be highly appreciated, especially on how to find proper parameters (ones that apply to all images) for algorithms like (adaptive) threshold and Canny, as well as ideas for improving the processing pipeline.
You can try to subtract a black image from this image, and you will get the inside image. A way to do this:
Use image subtraction to compare images in C#
If the border was uniform, this would be easy. Use cv::reduce to find MIN and MAX of each row and column; then count the top,left,bottom,right rows/columns whose MIN and MAX are equal (or very close) to the pixel value in a nearby corner. For sanity, maybe check the border colour is the same on all sides.
In your example the border contains faint red stuff, but a row/column approach might still be a useful way to simplify the problem. Maybe, as Nofar suggests, take an absolute difference with what you think is the background colour; square it, convert to grey, then reduce to sums of rows and columns. You still need to find edges, but you have reduced the data from two dimensions to one.
If there's a large border and lots of noise, maybe iterate: in the second pass, exclude the rows you think comprise the border, from statistics on columns (and vice versa).
EDIT: The above only works for an upright rectangle! If it could be rotated then the row/column projection method won't work. In that case I might go for sum-of-squared differences as above (don't start by converting to grey as it could throw away information), followed by blurring or some morphology, edge detection then some kind of Hough transform to find straight edges.
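For the upright case, here is a minimal sketch of the row/column projection idea with Emgu CV; the file name and the tolerance are placeholders, and I reduce a grayscale copy so the per-row statistics stay single-channel:

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

Image<Gray, byte> gray = new Image<Gray, byte>("framed.png");

// One value per row: the min and max over that row's pixels.
Mat rowMin = new Mat(), rowMax = new Mat();
CvInvoke.Reduce(gray, rowMin, ReduceDimension.SingleCol, ReduceType.ReduceMin);
CvInvoke.Reduce(gray, rowMax, ReduceDimension.SingleCol, ReduceType.ReduceMax);

Image<Gray, byte> mins = rowMin.ToImage<Gray, byte>();
Image<Gray, byte> maxs = rowMax.ToImage<Gray, byte>();

byte corner = gray.Data[0, 0, 0];  // assume the top-left pixel is border
const int tol = 8;                 // "very close" tolerance (placeholder)

int top = 0;
while (top < gray.Height &&
       Math.Abs(mins.Data[top, 0, 0] - corner) <= tol &&
       Math.Abs(maxs.Data[top, 0, 0] - corner) <= tol)
    top++;  // first row that is not pure border

// Repeat from the bottom, then with ReduceDimension.SingleRow for the
// left/right columns.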
Your help is much appreciated. I am using C# and EmguCV for image processing.
I have tried noise removal, but nothing happens. I have also tried a median filter, but it only works on the first image; on the second image it just makes everything blurry and the objects larger and more square-like.
I want to remove the clearly distinct objects (the green ones) in my first image below so that it turns all black, because they are obviously separated and not grouped, unlike in the second image.
Image 1:
In the same way, I want to process the image below, but remove only those objects (the black ones) that are not grouped/lumped together, so that what remains in the image are the objects that are grouped/larger in scale.
Image 2:
Thank you
You may try Gaussian Blur and then apply Threshold to the image.
Gaussian Blur is a widely used effect in graphics software, typically to reduce image noise and reduce detail, which I think matches your requirements well.
For your 1st image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // You may need to adjust Size and sigma depending on the input image.
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 10, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
For your 2nd image:
CvInvoke.GaussianBlur(srcImg, destImg, new Size(0, 0), 5); // You may need to adjust Size and sigma depending on the input image.
You will get:
Then
CvInvoke.Threshold(srcImg, destImg, 240, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
You will get:
Hope this helps!
You should first threshold the image using Otsu's method, then run connected-component analysis on the thresholded image. Finally, go over all the components you found and delete from the original image any component smaller than some minimum size.
1. cvThreshold with CV_THRESH_BINARY/CV_THRESH_BINARY_INV (choose according to the image) + CV_THRESH_OTSU: http://www.emgu.com/wiki/files/1.3.0.0/html/9624cb8e-921e-12a0-3c21-7821f0deb402.htm and http://www.emgu.com/wiki/files/1.3.0.0/html/bc08707a-63f5-9c73-18f4-aeab7878d7a6.htm
2. CvInvoke.FindContours (RetrType == External, ChainApproxNone)
3. For each contour found in 2, calculate CvInvoke.ContourArea.
4. If the area is smaller than minArea, draw the value you want for those pixels (0, I suppose) onto the original image (the one you want to filter) using CvInvoke.DrawContours with the current contour and thickness == -1 to fill the inside of the contour.
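A sketch of those four steps with the modern CvInvoke API (minArea and the file name are placeholders):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Image<Gray, byte> src = new Image<Gray, byte>("blobs.png");
Image<Gray, byte> bin = new Image<Gray, byte>(src.Size);

// 1. Otsu picks the threshold itself, so the 0 here is ignored.
CvInvoke.Threshold(src, bin, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    // 2. External contours only, no point approximation.
    CvInvoke.FindContours(bin, contours, null, RetrType.External,
        ChainApproxMethod.ChainApproxNone);

    const double minArea = 100;  // placeholder: smallest blob to keep
    for (int i = 0; i < contours.Size; i++)
    {
        // 3. and 4.: erase small blobs by filling them with 0.
        if (CvInvoke.ContourArea(contours[i]) < minArea)
            CvInvoke.DrawContours(src, contours, i, new MCvScalar(0), -1);
    }
}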
I have an image like below.
What I want is a monochrome image such that white parts are kept white, the rest is black. However, the tricky part is that I also want to reduce the white parts to be one pixel in thickness.
It's the second part that I'm stuck with.
My first thought was to do a simple threshold, then use a sort of "Game of Life" iterative process in which a white pixel is removed if it has neighbours on one side but not the other (i.e. it's an edge). However, I have a feeling this would reduce the ends of lines to nothing over time, so I'd end up with a blank image.
What algorithm can I use to get the image I want, given the original image?
(My language of choice is C#, but anything is fine)
Original Image:
After detecting the morphological extended maxima of a given height:
and then thinning gives:
You can also manipulate the height parameter, or prune the thinned image.
Code in Mathematica:
img = ColorConvert[Import["http://i.stack.imgur.com/zPtl6.png"], "Grayscale"];
max = MaxDetect[img, .55]
Thinning[max]
EDIT: I followed my own advice, and a height of .4 gives segments which are more precisely localized:
I suggest that you look into binary morphological transformations such as erosion and dilation. Graphics libraries such as OpenCV (http://opencv.willowgarage.com/wiki/) and the statistical/matrix tool GNU Octave (http://octave.sourceforge.net/image/function/bwmorph.html) support these operations.
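For example, a minimal erosion/dilation sketch in C# with Emgu CV (the kernel size and file name are placeholders; erosion alone is not a full thinning, but it shows the morphology API these suggestions refer to):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

Image<Gray, byte> bin = new Image<Gray, byte>("binary.png");
Image<Gray, byte> dst = new Image<Gray, byte>(bin.Size);

// 3x3 rectangular structuring element, anchored at its centre.
Mat kernel = CvInvoke.GetStructuringElement(
    ElementShape.Rectangle, new Size(3, 3), new Point(-1, -1));

// Erosion shrinks white regions (thins strokes); swap in
// CvInvoke.Dilate to grow them instead.
CvInvoke.Erode(bin, dst, kernel, new Point(-1, -1), 1,
    BorderType.Default, new MCvScalar(0));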
I'm looking for an algorithm (C#) or some info on how to detect the edge of an object in an image (the target is a melanoma). I've found the AForge.NET library, but I need to detect the edge without losing the image's color. The examples below were prepared using Paint.NET.
Before:
After:
I need only that blue edge (the color inside doesn't matter), or the coordinates of those blue pixels.
EDIT
Matthew Chambers was right: converting the image to grayscale improves the effectiveness of the algorithm.
The first result image is based on the original coloured image, and the second one is based on the grayscaled image. The blue pixels correspond to an HSV Value < 30. You can see the differences for yourself. Thanks m8!
You need to consider a few points related to the problem you are trying to solve.
What is an edge in a picture? Generally it is when the colour changes at a high rate. I.e. from dark to light in a short space.
But this has an issue about what a high rate of change for a colour actually is. For example, the Red and Green values could stay the same, while the Blue value changes at a high rate. Would we count this as an edge? This could occur for any combination of the RGB values.
For this reason, images are typically converted to greyscale for edge detection - otherwise you could get different edge results for each RGB value.
The simplest approach, then, would be to use a typical edge detection algorithm that works on a greyscale image and overlay the result onto the original image.
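A minimal sketch of that overlay approach, using Emgu CV here rather than AForge (the Canny thresholds and file name are placeholders; Bgr(255, 0, 0) is blue, matching the edge colour you asked for):

using Emgu.CV;
using Emgu.CV.Structure;

Image<Bgr, byte> original = new Image<Bgr, byte>("melanoma.png");

// Detect edges on the greyscale version only.
using (Image<Gray, byte> gray = original.Convert<Gray, byte>())
using (Image<Gray, byte> edges = gray.Canny(50, 150))
{
    // Paint the detected edge pixels blue on top of the colour image.
    original.SetValue(new Bgr(255, 0, 0), edges);
}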
How about using the Edges class:
// create filter
Edges filter = new Edges();
// apply the filter
var newImage = filter.Apply(originalImage);
Then do a threshold on newImage, and finally overlay newImage on top of originalImage.
You could convert to the Lab color space, split the channels, run edge-detection code (like that provided in the other answers) on the L channel only, then add back the a and b channels.
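A sketch of that, assuming Emgu CV (the thresholds and file name are placeholders); it follows the suggestion literally by putting the edge map back in place of L:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

Mat bgr = CvInvoke.Imread("melanoma.png");
Mat lab = new Mat();
CvInvoke.CvtColor(bgr, lab, ColorConversion.Bgr2Lab);

using (VectorOfMat channels = new VectorOfMat())
{
    CvInvoke.Split(lab, channels);  // channels[0] is the L channel

    Mat edges = new Mat();
    CvInvoke.Canny(channels[0], edges, 50, 150);

    // Replace L with the edge map, merge, and convert back.
    edges.CopyTo(channels[0]);
    CvInvoke.Merge(channels, lab);
    CvInvoke.CvtColor(lab, bgr, ColorConversion.Lab2Bgr);
}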
Considering the images and the question, it does not look like you need edge detection.
I would use an adaptive thresholding technique instead, find the blob, and finally extract the edge from it.
Here is some code in Matlab to illustrate what I mean:
function FindThresh()
    i = imread('c:\b.png');
    figure; imshow(i);
    graythresh(i)
    % threshold the green channel using Otsu's method
    th = graythresh(i(:,:,2))*255;
    figure; imshow(i(:,:,2) > th)
    % close small gaps in the thresholded blob
    i1 = imclose(i(:,:,2) > th, strel('diamond', 3));
    figure; imshow(i1)
    % extract the blob's edge and the edge pixels' coordinates
    e = edge(i1);
    indexes = find(e);
    [r, c] = ind2sub(size(i1), indexes);
    figure; imshow(e)
    figure; imshow(i); hold on; scatter(c, r);
end
and the intermediate results images:
You can see that it is not perfect, but with a little improvement you will get more robust results than with edge detection, which is not a stable operation.
So here are the details (I am using C# BTW):
I receive a 32bpp image (JPEG compressed) from a server. At some point, I would like to use the Palette property of a bitmap to color over-saturated pixels (brightness > 240) red. To do so, I need to get the image into an indexed format.
I have tried converting the image to a GIF, but I get quality loss. I have tried creating a new bitmap in an index format by these methods:
// causes a "Parameter not valid" error
Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Indexed);
// no error, but the resulting image is black, due to information loss I assume
Bitmap indexed = new Bitmap(orig.Width, orig.Height, PixelFormat.Format8bppIndexed);
I am at a loss now. The data in this image is changed constantly by the user, so I don't want to manually set pixels that have a brightness > 240 if I can avoid it. If I can set the palette once when the image is created, my work is done. If I am going about this the wrong way to begin with please let me know.
EDIT: Thanks guys, here is some more detail on what I am attempting to accomplish.
We are scanning a tissue slide at high resolution (pathology application). I write the interface to the actual scanner. We use a line-scan camera. To test the line rate of the camera, the user scans a very small portion and looks at the image.
The image is displayed next to a track bar. When the user moves the track bar (adjusting line rate), I change the overall intensity of the image in an attempt to model what it would look like at the new line rate. I do this using an ImageAttributes and ColorMatrix object currently.
When the user adjusts the track bar, I adjust the matrix. This does not give me per pixel information, but the performance is very nice. I could use LockBits and some unsafe code here, but I would rather not rewrite it if possible. When the new image is created, I would like for all pixels with a brightness value of > 240 to be colored red. I was thinking that defining a palette for the bitmap up front would be a clean way of doing this.
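For reference, the brightness adjustment I described looks roughly like this (the bitmaps are placeholders, and scale would come from the track bar position):

using System.Drawing;
using System.Drawing.Imaging;

Bitmap scan = new Bitmap("scan.png");  // placeholder for the scanned strip
Bitmap display = new Bitmap(scan.Width, scan.Height);

float scale = 1.2f;  // derived from the track bar position
ColorMatrix cm = new ColorMatrix(new float[][]
{
    new float[] { scale, 0, 0, 0, 0 },  // R
    new float[] { 0, scale, 0, 0, 0 },  // G
    new float[] { 0, 0, scale, 0, 0 },  // B
    new float[] { 0, 0, 0, 1, 0 },      // A unchanged
    new float[] { 0, 0, 0, 0, 1 }
});

ImageAttributes attrs = new ImageAttributes();
attrs.SetColorMatrix(cm);

using (Graphics g = Graphics.FromImage(display))
    g.DrawImage(scan, new Rectangle(0, 0, scan.Width, scan.Height),
        0, 0, scan.Width, scan.Height, GraphicsUnit.Pixel, attrs);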
Going from 32bpp to 8bpp indexed will almost always result in quality loss, unless the original image has less than 256 colors total.
Can you create another image that is an overlay with the affected pixels red, then show both of those?
Since you are going for brightness > 240, you can convert the overlay to grayscale first, then to indexed to get the overbright pixels.
You don't specify what you are doing with it once you have tagged the offenders, so I don't know if that will work.
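A naive sketch of that overlay, using GetPixel/SetPixel for clarity (you would use LockBits for speed; note Color.GetBrightness is HSL lightness on a 0..1 scale, which may not be exactly your brightness definition, and the file name is a placeholder):

using System.Drawing;

Bitmap source = new Bitmap("scan.png");  // placeholder
Bitmap overlay = new Bitmap(source.Width, source.Height);

for (int y = 0; y < source.Height; y++)
{
    for (int x = 0; x < source.Width; x++)
    {
        Color c = source.GetPixel(x, y);
        // Compare against 240 on a 0..255 scale.
        overlay.SetPixel(x, y,
            c.GetBrightness() > 240f / 255f ? Color.Red : Color.Transparent);
    }
}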
Sounds like something you could do easily with a pixel shader. Even very early shader models would support something as easy as this.
The question is however:
Can you include shader support in your application without too much hassle?
Do you know shader programming?
EDIT:
You probably don't have a 3D context where you can do stuff like this =/
I was mostly just airing my thoughts.
Manipulating the picture pixel by pixel should be doable in real time with a single CPU, shouldn't it?
If not, look into GPGPU programming and OpenCL.
EDIT AGAIN:
If you gave some more details about what the app actually does, we might be able to help a bit more. For example, if you're making a web app, none of my tips would make sense.
Thanks for the help everyone. It seems that this can be solved using the ImageAttributes class and simply setting a color remap table.
ColorMap[] maps = new ColorMap[someNum];
// add mappings (set OldColor/NewColor on each entry)
imageAttrs.SetRemapTable(maps);
Thanks for the help again, at least I learned something.
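For anyone landing here later, a fuller sketch of the remap table for a greyscale source, mapping every grey level above 240 to red (the level range is my assumption from the question):

using System.Drawing;
using System.Drawing.Imaging;

// One entry per grey level 241..255.
ColorMap[] maps = new ColorMap[15];
for (int i = 0; i < maps.Length; i++)
{
    int level = 241 + i;
    maps[i] = new ColorMap
    {
        OldColor = Color.FromArgb(level, level, level),
        NewColor = Color.Red
    };
}

ImageAttributes imageAttrs = new ImageAttributes();
imageAttrs.SetRemapTable(maps);
// Then draw with the Graphics.DrawImage overload that accepts ImageAttributes.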