I have a white map of a country with black province borders. I want to recognize the province areas and keep the pixels of each province. Then I would like to color these areas (as polygons) with different colors. I would be grateful if you could help me. I know that the AForge.Net library exists, but I didn't find any helpful information.
If you can get a good binary image by thresholding the original one, you can try 'connected component analysis'. See for example here:
OpenCV how to find a list of connected components in a binary image
By a 'good' binary image I mean one where the provinces are completely separated by the black boundaries.
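A minimal pure-Python sketch of what connected component analysis does, assuming the binary image is a 2D list of 0/1 values (1 = province interior). In practice you would use `cv2.connectedComponents` or AForge.Net's `BlobCounter` instead; `label_components` is just an illustrative name:

```python
from collections import deque

def label_components(binary):
    """Label 4-connected regions of 1-pixels in a binary grid.

    binary: list of rows of 0/1 values.
    Returns (labels, count): a grid where each region's pixels share
    one positive label (0 = background), and the number of regions.
    """
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] == 1 and labels[sy][sx] == 0:
                current += 1                      # start a new region
                labels[sy][sx] = current
                queue = deque([(sy, sx)])
                while queue:                      # flood fill the region
                    y, x = queue.popleft()
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current
```

Once each province has its own label, coloring it is just a matter of assigning one color per label value.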
I have searched the internet thoroughly but couldn't find a solution.
Here is what I want.
This is my image.
This is how it looks with 0 transparency when I have selected it.
So programmatically I want to split the image into 6 pieces, each one containing one of the eggs with no transparent area left.
How can I do that?
My preferred solutions are based on C# or a Photoshop script, but I am open to all solutions.
An example output:
To solve this problem for any image size, egg size, orientation, position, count I suggest to use the following approach:
Load the image file.
Extract the alpha channel (this contains the transparency information).
Find the egg blobs (blob search/analysis, region labelling, connected components; this method goes by many names).
Get the bounding boxes of those blobs.
Crop the sub-images using those bounding boxes.
This can be achieved with most image processing libraries. If you prefer C#, give EmguCV a try, or use a web search to find others.
http://www.emgu.com/wiki/files/3.1.0/document/html/e13fa7a9-5eee-b46c-4b65-ff3e7e427719.htm
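The steps above can be sketched in pure Python, assuming the alpha channel is available as a 2D list of values (0 = fully transparent). `blob_bounding_boxes` is an illustrative helper, not an EmguCV API; real code would use its blob detection classes:

```python
from collections import deque

def blob_bounding_boxes(alpha, threshold=0):
    """Find bounding boxes of opaque blobs in an alpha channel.

    alpha: 2D list of alpha values; a pixel belongs to a blob if its
    alpha exceeds `threshold`.
    Returns a list of (x, y, width, height) boxes, one per blob,
    in row-major discovery order.
    """
    h, w = len(alpha), len(alpha[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if alpha[sy][sx] > threshold and not seen[sy][sx]:
                minx = maxx = sx
                miny = maxy = sy
                seen[sy][sx] = True
                queue = deque([(sy, sx)])
                while queue:                      # flood fill one blob
                    y, x = queue.popleft()
                    minx, maxx = min(minx, x), max(maxx, x)
                    miny, maxy = min(miny, y), max(maxy, y)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and alpha[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                boxes.append((minx, miny, maxx - minx + 1, maxy - miny + 1))
    return boxes
```

Cropping each egg is then just slicing the original image with each returned box.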
I would like the system to pinpoint/highlight the deformed area to the user during video rendering. Let's say I currently have an image with a list of squares as shown in the image below; this is the original image, without defects.
The following image is a sample image that contains a defect, whereby there is an extra line in between the squares.
I would like to have something as shown in the sample image below, where a red square highlights the "extra line" to inform the user that there is a defect and pinpoints it to the user.
The defects may appear in any kind of shape or form, and I would like to pinpoint them to the user. So what kind of algorithm should I use in order to achieve this?
Also, is machine learning required to achieve this?
Currently I am using Emgucv in C#, but I am not sure which algorithm can achieve this. Any suggestions are appreciated.
Thank you.
You could just do a cv::absdiff to find the changed areas. After that, use findContours to group the defects and draw an outline around them.
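A pure-Python sketch of that absdiff → threshold → findContours pipeline, assuming both images are equal-sized 2D lists of grayscale values (`defect_boxes` is an illustrative name, not an OpenCV/EmguCV function):

```python
from collections import deque

def defect_boxes(reference, test, threshold=30):
    """Return one (x, y, w, h) bounding box per group of changed pixels.

    Mimics cv::absdiff + threshold + findContours: pixels whose absolute
    difference exceeds `threshold` are grouped into 4-connected regions,
    and each region yields one box to highlight to the user.
    """
    h, w = len(reference), len(reference[0])
    changed = [[abs(reference[y][x] - test[y][x]) > threshold
                for x in range(w)] for y in range(h)]
    boxes = []
    for sy in range(h):
        for sx in range(w):
            if changed[sy][sx]:
                changed[sy][sx] = False          # mark visited
                minx = maxx = sx
                miny = maxy = sy
                queue = deque([(sy, sx)])
                while queue:                     # flood fill one defect
                    y, x = queue.popleft()
                    minx, maxx = min(minx, x), max(maxx, x)
                    miny, maxy = min(miny, y), max(maxy, y)
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w and changed[ny][nx]:
                            changed[ny][nx] = False
                            queue.append((ny, nx))
                boxes.append((minx, miny, maxx - minx + 1, maxy - miny + 1))
    return boxes
```

Drawing a red rectangle at each returned box gives the highlight described in the question; no machine learning is needed as long as a defect-free reference image is available and aligned.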
I'm trying to develop an object detection algorithm. I plan to compare two images with different focus lengths: one correctly focused on the object and one correctly focused on the background.
From reading about autofocus algorithms, I think it can be done with a contrast-detection passive autofocus algorithm, which works on the light intensity at the sensor.
But I'm not sure that the light intensity values from the image file are the same as those from the sensor (it's not a RAW image file, it's a JPEG). Are the light intensity values in a JPEG image the same as on the sensor? Can I use them to detect focus correctness with contrast detection? Is there a better way to detect which area of the image is in correct focus?
I have tried to process the images a bit and I saw some progress. This is what I did using OpenCV:
converted images to gray using cvtColor(I, Mgrey, CV_RGB2GRAY);
downsampled/decimated them a bit since they are huge (several MB)
took the sum of absolute horizontal and vertical gradients using http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#cv.Sobel
The result is below. The foreground when in focus does look brighter than background and vice versa.
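That window-based contrast measure can be sketched in pure Python, assuming the grayscale image is a 2D list of intensities (`focus_measure` is a hypothetical helper; real code would use cv::Sobel as linked above):

```python
def focus_measure(gray, x0, y0, size):
    """Sum of absolute horizontal and vertical gradients inside a
    size x size window starting at (x0, y0).

    This is a simple contrast-detection focus score: the sharper the
    content of the window, the larger the value.
    """
    total = 0
    for y in range(y0, y0 + size):
        for x in range(x0, x0 + size):
            if x + 1 < x0 + size:                    # horizontal gradient
                total += abs(gray[y][x + 1] - gray[y][x])
            if y + 1 < y0 + size:                    # vertical gradient
                total += abs(gray[y + 1][x] - gray[y][x])
    return total
```

Comparing this score for the same window across the two differently focused images tells you which image is sharper there, i.e. whether that window belongs to the foreground or the background.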
You can probably try to match and subtract these images using translation from matchTemplate() on the original gray images, and then assemble the pieces using the convex hull of the results as an initialization mask for grab cut, plugging in the color images. In case you aren't familiar with grab cut, check out my answer to this question.
But maybe a simpler method will work here as well. You can try to apply a strong blur to your gradient images instead of precise matching and see what the difference gives you in this case. The images below demonstrate the idea when I turned the difference into binary masks.
It would be helpful to see your images. If I understood you correctly, you are trying to separate background from foreground using a focus (or blur) cue. Contrast in the image depends on focus, but it also depends on the contrast of the target. So if the target is clouds you will never get sharp edges or high contrast. Finally, a JPEG image that uses little compression should not affect the critical properties of your algorithm.
I would try to get a number of images at all possible focal lengths in a row and then build a graph of contrast as a function of focal length (or, even better, focusing distance). The peak in this graph will give you the distance to the object regardless of the object's own contrast. Note, however, that the accuracy of such visual cues drops sharply with viewing distance.
This is what I expect you to obtain when measuring the sum of absolute gradient in a small window:
The next step for you will be to combine the areas that are in focus with the areas of solid color, that is, areas that have no particular peak in the graph but nonetheless belong to the same object. Sometimes getting a convex hull of the focused areas can help to pinpoint the rough boundary of the object.
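The peak search over such a focus sweep can be sketched as follows (a minimal illustration; `estimate_focus_distance` and its inputs are hypothetical names, with one contrast score per captured focal setting):

```python
def estimate_focus_distance(focal_settings, contrast_scores):
    """Return the focal setting whose window contrast peaks.

    focal_settings: list of focal lengths (or focusing distances)
    used for the sweep; contrast_scores: the matching contrast
    measurements for one image window. The peak locates the target
    regardless of the target's own absolute contrast.
    """
    best = max(range(len(contrast_scores)),
               key=contrast_scores.__getitem__)
    return focal_settings[best]
```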
I'm writing a CSS sprite sheet generator. I know there are several decent ones out there, but this is more a personal interest project than anything. I've got a decent algorithm now, but I'm ending up with a lot of leftover whitespace due to the way I'm calculating the position for the next image to pack. So my thought was this:
Given images A and B in C#, how could I find the top-left-most transparent area in image A that would accommodate the area of image B? In other words, assuming image B is a 10x10 image, how could I find the first 10x10 transparent area in image A (assuming there is one)?
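One way to sketch that search (in Python rather than the asker's C#, and with `find_transparent_slot` as an illustrative name) is a summed-area table over the alpha channel, so each candidate window is tested in O(1) instead of re-scanning its pixels:

```python
def find_transparent_slot(alpha, bw, bh):
    """Find the top-left-most bw x bh fully transparent region.

    alpha: 2D list of alpha values (0 = transparent).
    Scans rows top to bottom, then columns left to right, and returns
    the first (x, y) whose bw x bh window contains no opaque pixel,
    or None if no such window exists.
    """
    h, w = len(alpha), len(alpha[0])
    # sat[y][x] = count of opaque pixels in the rectangle alpha[:y][:x]
    sat = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            sat[y+1][x+1] = ((alpha[y][x] > 0)
                             + sat[y][x+1] + sat[y+1][x] - sat[y][x])
    for y in range(h - bh + 1):
        for x in range(w - bw + 1):
            opaque = (sat[y+bh][x+bw] - sat[y][x+bw]
                      - sat[y+bh][x] + sat[y][x])
            if opaque == 0:
                return (x, y)
    return None
```

The same idea translates directly to C# over a locked bitmap's alpha bytes; the table is rebuilt (or incrementally updated) after each sprite is placed.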
I am trying to find a solution that would allow me to change the color of parts of an image.
For example, let's say a picture of a chair.
A chair can have different frame or cushion colors.
I intend to let users select a frame or cushion and update the image.
We can only upload one chair image.
Any idea is much appreciated!
Detecting parts of the image to be modified
Detecting objects in images programmatically is probably not the simplest thing to do ;). I suppose one of the good solutions to this problem was suggested by Collin O'Dell in his comment on the question. He suggested using a few images with manually separated parts of the image you want to recolor. Then you can compose the final image from a few different layers.
However, you can also keep the main image with all the objects and manually make some additional images, but only to keep masks of the objects (i.e. white pixels in places where the object is). You can then easily check which pixels should be recolored. This would allow you to paint directly on one image and avoid compositing.
Calculating the color
When you want to recolor the photograph, you probably want to be able to keep the shading effects etc. rather than covering all with the same solid color.
In order to achieve this, you can use HSV color space instead of RGB. To use this method you should:
read pixel's color (in RGB)
recalculate it to HSV
change Hue value of the color to get the desired color shade
recalculate the modified color back to RGB
set the new color of the pixel
For more information about HSV color space you can look here: http://en.wikipedia.org/wiki/HSL_and_HSV.
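The per-pixel round trip described in the steps above can be sketched with Python's standard colorsys module (`shift_hue` is an illustrative name; applying it only to the masked pixels gives the recolored chair while the shading survives in the saturation and value channels):

```python
import colorsys

def shift_hue(rgb, target_hue):
    """Recolor one pixel while keeping its shading.

    rgb: (r, g, b) tuple of 0-255 ints; target_hue: new hue in [0, 1).
    Converts RGB -> HSV, replaces only the hue, keeps saturation and
    value (so highlights and shadows are preserved), converts back.
    """
    r, g, b = (c / 255.0 for c in rgb)
    _, s, v = colorsys.rgb_to_hsv(r, g, b)       # discard the old hue
    r2, g2, b2 = colorsys.hsv_to_rgb(target_hue, s, v)
    return (round(r2 * 255), round(g2 * 255), round(b2 * 255))
```

For example, a shaded red pixel shifted to hue 2/3 becomes the equally shaded blue pixel, which is exactly the "keep the shading" behavior the answer asks for.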