I'm making a program and I need to know how to see if a pixel in a bitmap image is very dark or black. Big thanks if you can help.
//need help
if (_Bitmap.GetPixel(c, d, Color.FromArgb(255) == 255)) mouse_event((int)Test.MOUSEEVENTF_LEFTDOWN, 0, 0, 0, 0);
//need help
In order to get the color of a pixel you should use the GetPixel method.
You can then use the GetBrightness method of the Color class to get its brightness and check "how dark" the color is.
Brightness equals 0 for pure black, and the maximum value is 1 (pure white).
So you can compare the brightness with some threshold, for example 0.02:
if (_Bitmap.GetPixel(c,d).GetBrightness() < 0.02) { ... }
Your question is either very simple or deceptively complex.
Whether a pixel is black or not depends on the colour model the pixel is expressed in. In RGB a pixel is pure black when all three components are 0. Other colour models have other representations for black. In each case it is very straightforward to figure out whether a pixel is pure black.
However you also ask whether a pixel is very, very dark, and that may be tricky.
If you have a clear threshold for darkness (say the intensity/luminance is less than 10%) then the answer is also straightforward.
This is easier in colour models with a separate intensity component, but in RGB you might make do with the following approximation: redComponent < 25 && greenComponent < 25 && blueComponent < 25
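For concreteness, both checks could look something like the sketch below; the class name and the threshold values are arbitrary and would need tuning for your images:

```csharp
using System.Drawing;

static class Darkness
{
    // Per-channel approximation: dark if every channel is below the threshold.
    public static bool IsDarkRgb(Color c, int threshold = 25)
    {
        return c.R < threshold && c.G < threshold && c.B < threshold;
    }

    // Brightness-based check: GetBrightness() returns 0 for pure black
    // and 1 for pure white.
    public static bool IsDarkBrightness(Color c, float threshold = 0.1f)
    {
        return c.GetBrightness() < threshold;
    }
}
```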
However if you ask whether a pixel is perceived as dark, then we tread into more complex territory. To determine whether a pixel is perceived as dark you'll need to take into consideration the values of adjacent pixels and figure out whether a darkening gradient exists. Some pixel shades will look bright if they are near the local intensity maximum, but will look dark if they are near the local intensity minimum (see the checker shadow illusion for a well-known example). Likewise a very dark, very narrow line adjacent to a bright object may look like the edge of the object and not like a dark element in the image.
Related
I have some circles located in an image. I could find the location of each circle (diameter and origin or centre), so first of all, how could I check all the pixels inside a given circle (I'm thinking of a for loop)? Second, how could I test whether a pixel's color is nearly gray?
At first I thought of asking whether the Red, Green and Blue values are all higher than 125, but that doesn't work. For example, (200,130,170) is for sure not a gray color!
If you want to check whether a point is within a circle, use Pythagoras..
Math.Sqrt((x1-x2)*(x1-x2) + (y1-y2)*(y1-y2))
..to work out how far the point (x1,y1) is from the centre (x2,y2) of the circle. If the value you calculate is less than the radius of the circle, the point is in the circle. This can be optimised slightly by removing the Sqrt and testing whether the result is less than the square of the radius.
Anything is nearly gray if the R, G and B values are nearly the same: 64,64,64 is dark grey, 72,64,64 is a slightly red-looking dark grey. You'll have to define what "nearly" means.
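Put together, the membership test and the gray test might look like this sketch (the names and the tolerance of 10 are arbitrary):

```csharp
using System;
using System.Drawing;

static class CircleScan
{
    // True if (x, y) lies inside the circle centred at (cx, cy).
    // Comparing squared distances avoids the Math.Sqrt call.
    public static bool InCircle(int x, int y, int cx, int cy, int radius)
    {
        int dx = x - cx, dy = y - cy;
        return dx * dx + dy * dy <= radius * radius;
    }

    // "Nearly gray" if the largest and smallest channel differ by at
    // most the given tolerance.
    public static bool IsNearlyGray(Color c, int tolerance = 10)
    {
        int max = Math.Max(c.R, Math.Max(c.G, c.B));
        int min = Math.Min(c.R, Math.Min(c.G, c.B));
        return max - min <= tolerance;
    }
}
```

For the loop, you only need to visit the circle's bounding box (cx-radius..cx+radius, cy-radius..cy+radius) and skip pixels where InCircle is false.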
I have a database that holds data of eyetracking on some videos.
I export those data to an int[,] input matrix for this issue. And then try to create a heatmap.
What I get so far is something like this:
But this is not actually what I want it to be. I want something like the heatmaps that you see when you google it, e.g.:
Treat each spot as an artificial circle with its center at that spot, a circle that has, say, a 50 pixel radius. Now go over all pixels of the image and for each one count all the circles that cover that pixel. This is your score for that pixel. Translate it into a color, e.g. 0: black/transparent, 10: light green, 20: yellow, and so on. After analyzing all pixels you will have a color for each pixel. Write a bitmap and look at it. It should be something close to what you want.
Of course, circle radius, color mappings, etc, need adjusting to your needs. Also, that's probably not the best/simplest/fastest algorithm.
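A rough sketch of that counting scheme (the names are hypothetical, and gaze points are assumed to come as (x, y) pairs):

```csharp
using System;

static class HeatmapScores
{
    // For every pixel, count how many gaze points lie within `radius`
    // of it, which is equivalent to counting the circles covering it.
    public static int[,] Compute((int X, int Y)[] points, int width, int height, int radius)
    {
        var scores = new int[width, height];
        int r2 = radius * radius;
        foreach (var p in points)
        {
            // Only visit the circle's bounding box, clamped to the image.
            for (int y = Math.Max(0, p.Y - radius); y <= Math.Min(height - 1, p.Y + radius); y++)
                for (int x = Math.Max(0, p.X - radius); x <= Math.Min(width - 1, p.X + radius); x++)
                {
                    int dx = x - p.X, dy = y - p.Y;
                    if (dx * dx + dy * dy <= r2)
                        scores[x, y]++;
                }
        }
        return scores;
    }
}
```

Mapping each score to a color and writing the bitmap is then a separate pass over the score array.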
A different approach would be to store the "heat" in the pixels' grey values.
Just create a second image with the same size as the original one and increment the grey value of a pixel every time it was looked at.
Later you can use that value to calculate a size and color for the circle you want to draw.
You can then lay the heatmap image on top of the original one and you are done (don't forget to set transparency).
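Accumulating those counts could be as simple as this sketch (the Point-based fixation list is an assumption about your input format):

```csharp
using System.Collections.Generic;
using System.Drawing;

static class HeatBuffer
{
    // Increment a grey-value buffer for every fixation that falls
    // inside the frame; stop at 255 so the byte cannot overflow.
    public static byte[,] Accumulate(IEnumerable<Point> fixations, int width, int height)
    {
        var heat = new byte[width, height];
        foreach (var f in fixations)
            if (f.X >= 0 && f.X < width && f.Y >= 0 && f.Y < height && heat[f.X, f.Y] < 255)
                heat[f.X, f.Y]++;
        return heat;
    }
}
```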
I have implemented background-removal functionality (aka green-screen implementation) using Kinect in my Windows RT application. The pixel noise (jitter) is very high in the foot area as well as on the acquired user's hair. How can I reduce this pixel noise?
There are a few techniques you could apply to reduce noise:
cv::bilateralFilter, most intensive, but with the right number of iterations will smooth out the image.
cv::morphologyEx, morphological closing will remove small gaps (of a few pixels) in the image, if the structuring element (cross, circle etc.) is the right kind and size.
cv::inpaint, will close bigger gaps and fill out the image where data is unavailable. I suggest trying bilateral filtering (smoothing) after this step.
cv::findContours, filtering out contours with an area smaller than a threshold can be used to remove small speckled regions in the image.
1 & 4 are mostly for salt-and-pepper noise, while 2 & 3 are most appropriate for filling in missing data.
Scaling down the depth data and scaling it back up to size (with good interpolation) also has the effect of smoothing out the image whilst preserving edges.
Using the K2, you might also find that mapping from color to depth coordinate space, or vice versa, gives you better results than the other direction.
Lastly, I would suggest you look at some techniques used in traditional green screening and VR/AR, such as colouring the outermost edges of the foreground with a light or dark outline to get a 'clean' look.
In my 2D tile-based game, the player can paint tiles to change their color. The simple approach (and what I have already done) is to use the tint parameter in SpriteBatch.Draw.
This looks nice, and here is an example:
But say I want to paint the wood white. If you have ever messed with the tint, you know this isn't possible using the tint parameter: Color.White leaves the sprite at its default colors rather than making it white. This isn't a major problem, but I suppose some users might want it.
My question is: is there a method to color a sprite based on hue/saturation instead of tint? Similar to the "colorify" function in GIMP.
An obvious approach would be to use this function in GIMP and just export sprites for each color. The trouble is that this would take forever to do for all my sprites, and each sprite has multiple variations, meaning you could have 100+ combinations in total for one block type.
Is this possible? I suppose a shader might get the job done.
The "colourify" function in The GIMP simply does a desaturate (convert to grayscale), followed by a colour multiplication. Very simple. This should be the equivalent HLSL (untested):
float4 original = tex2D(...);
float q = (original.r + original.g + original.b) / 3;
q /= original.a; // optional - undo alpha premultiplication
return float4(tint.rgb * q, tint.a * original.a);
But you could achieve the same effect by simply storing all your textures desaturated to begin with and using the default SpriteBatch shader for its multiplication. If you don't want to modify your source art, you could do it in a custom content processor.
Although, if you want to use a custom shader, you can do something more sophisticated. You could implement a full hue-rotation (kinda complex). But perhaps you could consider something like the "Overlay" blend mode (very simple) - which lets you colourize the grays, while preserving both the highlights and lowlights (instead of multiply, which also colourizes the highlights).
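The per-channel Overlay formula itself is the standard one; a small C# sketch of it (values in the 0..1 range):

```csharp
static class Blend
{
    // Overlay blend for a single channel in 0..1: multiplies in the
    // shadows and screens in the highlights, so pure black stays black
    // and pure white stays white.
    public static float Overlay(float baseValue, float tint)
    {
        return baseValue < 0.5f
            ? 2f * baseValue * tint
            : 1f - 2f * (1f - baseValue) * (1f - tint);
    }
}
```

In a shader you would apply this per channel in place of the plain multiply.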
To continue using the tint in SpriteBatch.Draw, just make your "paintable" textures grayscale. The white wood would then be your default, and you draw it with a tint that makes it wood-colored.
I often use this to make UI and team coloring :)
In case it is interesting: what the tint does is simply multiply each pixel in the texture by the color you choose.
example:
texture pixel is (rgba) [1.0, 0.5, 0.5, 1.0]
tint is [1.0, 0.5, 0.5, 0.5]
result is [1.0, 0.25, 0.25, 0.5] (half transparent and more red-ish)
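That component-wise multiply can be written out directly; a tiny sketch matching the example above (the names are placeholders):

```csharp
static class TintMath
{
    // Component-wise multiply of an RGBA pixel by an RGBA tint,
    // all values in the 0..1 range. This is what the default
    // SpriteBatch tint does.
    public static float[] Apply(float[] pixel, float[] tint)
    {
        var result = new float[4];
        for (int i = 0; i < 4; i++)
            result[i] = pixel[i] * tint[i];
        return result;
    }
}
```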
I don't know if this is possible with MonoTouch so I thought I'd ask the experts. Let's say I want to be able to take a picture of a painted wall and recognize the general color from it - how would I go about doing that in C#/MonoTouch?
I know I need to capture the image and do some image processing, but I'm more curious about the dynamics of it. Would I need to worry about lighting conditions? I assume the flash would "wash out" my image, right?
Also, I don't need to know exact colors, I just need to know the general color family. I don't need to know a wall is royal blue, I just need it to return "blue". I don't need to know hunter green, I just need it to return "green". I've never done that with image processing.
The code below relies on the .NET System.Drawing.Bitmap class and the System.Drawing.Color class, but I believe these are both supported in MonoTouch (at least based on my reading of the Mono Documentation).
So, assuming you have an image in a System.Drawing.Bitmap object named bmp, you can obtain the average hue of that image with code like this:
float hue = 0;
int w = bmp.Width;
int h = bmp.Height;
for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
        Color c = bmp.GetPixel(x, y);
        hue += c.GetHue();
    }
}
hue /= (w * h);
That's iterating over the entire image which may be quite slow for a large image. If performance is an issue, you may want to limit the pixels evaluated to a smaller subsection of the image (as suggested by juhan_h), or just use a smaller image to start with.
Then given the average hue, which is in the range 0 to 360 degrees, you can map that number to a color name with something like this:
String[] hueNames = new String[] {
    "red","orange","yellow","green","cyan","blue","purple","pink"
};
float[] hueValues = new float[] {
    18, 54, 72, 150, 204, 264, 294, 336
};
String hueName = hueNames[0];
for (int i = 0; i < hueNames.Length; i++) {
    if (hue < hueValues[i]) {
        hueName = hueNames[i];
        break;
    }
}
I've just estimated some values for the hueValues and hueNames tables, so you may want to adjust those tables to suit your requirements. The values are the point at which the color appears to change to the next name (e.g. the dividing line between red and orange occurs at around 18 degrees).
To get an idea of the range of colors represented by the hue values, look at the color wheel below. Starting at the top, it goes from red/orange (around 0° - north) to yellow/green (around 90° - east), to cyan (around 180° - south), to blue/purple (around 270° - west).
You should note, however, that we are ignoring the saturation and brightness levels, so the results of this calculation will be less than ideal on faded colors and under low light conditions. However, if all you are interested in is the general color of the wall, I think it might be adequate for your needs.
I recently dealt with shifting white balance on iOS (original question here: iOS White point/white balance adjustment examples/suggestions) which included a similar problem.
I can't give you code samples in C#, but here are the steps that I would take:
Capture the image
Decide what point/part of the image is of interest (the smaller the better)
Calculate the "color" of that point of the image
Convert the "color" to human readable form (I guess that is what you need?)
To accomplish step #2 I would either let the user choose the point or take the point to be in the center of the image, because that is usually the place to which the camera is actually pointed.
How to accomplish step #3 depends on how big the area chosen in step #2 is. If the area is 1x1 pixels, then you render it in RGB and get the component (i.e. red, green and blue) values from that rendered pixel. If the area is larger, then you need to get the RGB values of each pixel contained in that area and average them.
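The averaging described above might look like this sketch (the names are placeholders; the pixel accessor is passed in as a delegate so it works with bmp.GetPixel or any other source):

```csharp
using System;
using System.Drawing;

static class RegionColor
{
    // Average the R, G and B channels over a rectangular region.
    // `getPixel` would typically be bmp.GetPixel for a Bitmap.
    public static Color Average(Func<int, int, Color> getPixel, Rectangle roi)
    {
        long r = 0, g = 0, b = 0;
        for (int y = roi.Top; y < roi.Bottom; y++)
            for (int x = roi.Left; x < roi.Right; x++)
            {
                Color c = getPixel(x, y);
                r += c.R; g += c.G; b += c.B;
            }
        int n = roi.Width * roi.Height;
        return Color.FromArgb((int)(r / n), (int)(g / n), (int)(b / n));
    }
}
```

GetPixel is slow, but for the small region suggested in step #2 that should not matter.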
If you only need a general color, this would mostly be it. But if you need to compensate for lighting conditions, the problem gets much more complicated. To compensate for lighting (i.e. white balancing) you need to do some transformations and make some guesses about the conditions in which the photo was taken. I will not go into details (I wrote my Bachelor's thesis on those details), but Wikipedia's article on white balance is a good starting point.
It is also worth noting that the solution to the white-balancing problem will always be subjective and dependent on the guesses made about the light in which the photo was taken (at least as far as I know).
To accomplish step #4 you should search for tables that map RGB values to human-readable color names. I have not had the need for these kinds of tables, but I am sure they exist somewhere on the Internet.