Histogram Plot for RGB values in WinRT App - c#

I'm having an issue with creating a histogram representation of an image in a WinRT app. What I'd like to make consists of four histogram plots for Red, Green, Blue and Luminosity of an image.
My main issue is how to actually draw a picture of that histogram so I can show it on the screen. My code so far is pretty messy. I've searched a lot on this topic, but most of my results were Java code, which I'm trying to translate to C#, and the API is quite different. I also had an attempt with AForge, but that's WinForms.
Here's my messy code. I know it looks bad, but I'm striving to make this work first:
public static WriteableBitmap CreateHistogramRepresentation(long[] histogramData, HistogramType type)
{
    //Determine the max value, the highest bar in the histogram; it defines the
    //initial height of the image, which is then scaled down to a fixed resolution:
    var max = histogramData[0];
    for (int i = 0; i < histogramData.Length; i++)
    {
        if (histogramData[i] > max)
            max = histogramData[i];
    }
    var bitmap = new WriteableBitmap(256, 500);
    //Set a colour to draw with according to the type of the histogram:
    var color = Colors.White;
    switch (type)
    {
        case HistogramType.Blue:
            color = Colors.RoyalBlue;
            break;
        case HistogramType.Green:
            color = Colors.OliveDrab;
            break;
        case HistogramType.Red:
            color = Colors.Firebrick;
            break;
        case HistogramType.Luminosity:
            color = Colors.DarkSlateGray;
            break;
    }
    //Compute a scaler so the tallest bar fits into the 500-pixel image height:
    var scaler = 1;
    while (max / scaler > 500)
    {
        scaler++;
    }
    var stream = bitmap.PixelBuffer.AsStream();
    var streamBuffer = new byte[stream.Length];
    //Make a white image initially:
    for (var i = 0; i < streamBuffer.Length; i++)
    {
        streamBuffer[i] = 255;
    }
    //Colour the image; the pixel buffer is BGRA, 4 bytes per pixel, 256 pixels per row:
    for (var i = 0; i < 256; i++) // i = column
    {
        for (var j = 0; j < histogramData[i] / scaler; j++) // j = row
        {
            streamBuffer[j * 256 * 4 + i * 4] = color.B;
            streamBuffer[j * 256 * 4 + i * 4 + 1] = color.G;
            streamBuffer[j * 256 * 4 + i * 4 + 2] = color.R;
            streamBuffer[j * 256 * 4 + i * 4 + 3] = color.A;
        }
    }
    //Write the pixel data into the pixel buffer of the future histogram image:
    stream.Seek(0, SeekOrigin.Begin);
    stream.Write(streamBuffer, 0, streamBuffer.Length);
    //The bars are drawn from the top row downwards, so the bitmap is flipped afterwards:
    return bitmap.Flip(WriteableBitmapExtensions.FlipMode.Horizontal);
}
This creates a pretty bad histogram representation; it doesn't even colour it with the corresponding colour. It's not working properly and I'm still working on fixing it.
If you can contribute a link you might know, any code for a histogram representation in WinRT apps, or anything else, it is greatly appreciated.

While you could use a charting control as JP Alioto pointed out, histograms tend to represent a lot of data. In your sample alone you're rendering 256 bars * 4 channels (R, G, B, L). The problem with charting controls is that they usually like to be handed collections (or arrays) of hydrated data, which they draw and tend to keep in memory. A histogram like yours would need 1024 objects (256 * 4) in memory and passed to the chart as a whole. It's just not a good use of memory.
The alternative of course is to draw it yourself. But as you've found, pixel-by-pixel drawing can be a bit of a pain. The best answer - in my opinion - is to agree with Shahar and recommend you use WriteableBitmapEx on CodePlex.
http://writeablebitmapex.codeplex.com
WriteableBitmapEx includes methods for drawing shapes like lines and rectangles that are very fast. You can draw the data as you enumerate it (instead of having to hold it all in memory at one time) and the result is a nice compact image that is already "bitmap cached" (meaning it renders very fast since it doesn't have to be redrawn on each frame).
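To sketch the idea (untested, and assuming the WriteableBitmapEx extension methods BitmapFactory.New, Clear and FillRectangle, plus System.Linq for Max()), one channel of the histogram could be drawn roughly like this:
// Rough sketch: draw one histogram channel as vertical bars with WriteableBitmapEx.
// 'histogramData' is assumed to have 256 bins; 'barColor' is the channel colour.
public static WriteableBitmap DrawHistogramBars(long[] histogramData, Color barColor)
{
    const int width = 256;
    const int height = 256;
    var bmp = BitmapFactory.New(width, height);
    bmp.Clear(Colors.White);
    long max = histogramData.Max();
    if (max == 0)
        return bmp;
    for (int x = 0; x < width; x++)
    {
        // Scale each bin to the image height; bars grow up from the bottom edge.
        int barHeight = (int)(histogramData[x] * height / max);
        if (barHeight > 0)
            bmp.FillRectangle(x, height - barHeight, x + 1, height, barColor);
    }
    return bmp;
}
Calling something like this once per channel (R, G, B, L) with the matching colour gives four small bitmaps that can be bound to Image controls.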

Related

Plot values of a line from a grayscale image

I am trying to take a grayscale bitmap and extract a single line from it and then graph the gray values. I got something to work, but I'm not really happy with it. It just seems slow and tedious. I am sure someone has a better idea
WriteableBitmap someImg; //camera image
int imgWidth = someImg.PixelWidth;
int imgHeight = someImg.PixelHeight;
//horizontal line halfway down the image, as a rectangle with height 1
Int32Rect rectLine = new Int32Rect(0, imgHeight / 2, imgWidth, 1);
//calculate stride and buffer size
int imgStride = (imgWidth * someImg.Format.BitsPerPixel + 7) / 8; // not sure I understand this part
byte[] buffer = new byte[imgStride * rectLine.Height];
//copy pixels to buffer
someImg.CopyPixels(rectLine, buffer, imgStride, 0);
const int xGraphHeight = 256;
WriteableBitmap xgraph = new WriteableBitmap(imgWidth, xGraphHeight, someImg.DpiX, someImg.DpiY, PixelFormats.Gray8, null);
//loop through pixels
for (int i = 0; i < imgWidth; i++)
{
    Int32Rect dot = new Int32Rect(i, buffer[i], 1, 1); //1x1 rectangle
    byte[] WhiteDotByte = { 255 }; //white
    xgraph.WritePixels(dot, WhiteDotByte, imgStride, 0); //write pixel
}
You can see the image and the plot below the green line. I guess I am having some WPF issues that make it look funny but that's a problem for another post.
I assume the goal is to create a plot of the pixel intensity values along the selected line.
The first approach to consider is to use an actual plotting library. I have used OxyPlot; it works fine, but is lacking in some aspects. Unless you have specific performance requirements, this will likely be the most flexible approach to take.
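As an illustration only, here is a minimal, untested sketch assuming the OxyPlot package and that buffer already holds the gray values of the extracted line:
using OxyPlot;
using OxyPlot.Series;

// Build a simple line plot of the gray values along the extracted row.
var model = new PlotModel { Title = "Line profile" };
var series = new LineSeries();
for (int x = 0; x < imgWidth; x++)
    series.Points.Add(new DataPoint(x, buffer[x]));
model.Series.Add(series);
// The model can then be displayed in an OxyPlot PlotView/Plot control in the UI.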
If you actually want to render to an image, you might be better off using unsafe code to access the pixel values directly. For example:
xgraph.Lock();
try
{
    for (int y = 0; y < xGraphHeight; y++)
    {
        var rowPtr = (byte*)(xgraph.BackBuffer + y * xgraph.BackBufferStride);
        for (int x = 0; x < imgWidth; x++)
        {
            rowPtr[x] = (byte)(y < buffer[x] ? 0 : 255);
        }
    }
    xgraph.AddDirtyRect(new Int32Rect(0, 0, imgWidth, xGraphHeight));
}
finally
{
    xgraph.Unlock();
}
This should be faster than writing 1x1 rectangles. It also fills whole columns instead of single pixels, which should help make the graph more visible. You might also consider allowing an arbitrary image height and scaling the comparison value.
If you want to plot the pixel values along an arbitrary line, and not just a horizontal one, you can take equidistant samples along the line and use bilinear interpolation to sample the image.
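A rough sketch of that sampling (untested; getGray(x, y) is a hypothetical helper returning the gray value of a pixel, and the caller is assumed to clamp coordinates to the image bounds):
// Take 'count' equidistant samples along the line (x0,y0)-(x1,y1),
// using bilinear interpolation of the four surrounding pixels.
static byte[] SampleLine(Func<int, int, byte> getGray, double x0, double y0, double x1, double y1, int count)
{
    var samples = new byte[count];
    for (int i = 0; i < count; i++)
    {
        double t = count > 1 ? (double)i / (count - 1) : 0.0;
        double x = x0 + t * (x1 - x0);
        double y = y0 + t * (y1 - y0);
        int ix = (int)Math.Floor(x), iy = (int)Math.Floor(y);
        double fx = x - ix, fy = y - iy;
        // Weighted average of the 2x2 neighbourhood around (x, y)
        double v = getGray(ix, iy) * (1 - fx) * (1 - fy)
                 + getGray(ix + 1, iy) * fx * (1 - fy)
                 + getGray(ix, iy + 1) * (1 - fx) * fy
                 + getGray(ix + 1, iy + 1) * fx * fy;
        samples[i] = (byte)Math.Round(v);
    }
    return samples;
}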

Kinect v2 Alignment of Infrared Sensor & RGB Image always slightly off

I'm using the official Kinect SDK 2.0 and Emgu CV in order to recognize the colors of a Rubik's Cube.
At first I use Canny edge extraction on the infrared camera, since it handles different lighting conditions better than the RGB camera and is much better for detecting contours.
Then I use this code to convert the coordinates of the infrared sensor to the ones of the RGB camera.
As you can see in the picture, they are still off from what I am looking for. Since I already use the official KinectSensor.CoordinateMapper.MapDepthFrameToColorSpace, I don't know how else I can improve the situation.
using (var colorFrame = reference.ColorFrameReference.AcquireFrame())
using (var irFrame = reference.InfraredFrameReference.AcquireFrame())
{
    if (colorFrame == null || irFrame == null)
        return;
    // initialize depth frame data
    FrameDescription depthDesc = irFrame.FrameDescription;
    if (_depthData == null)
    {
        uint depthSize = depthDesc.LengthInPixels;
        _depthData = new ushort[depthSize];
        _colorSpacePoints = new ColorSpacePoint[depthSize];
        // fill the array with the max value so all pixels can be mapped
        for (int i = 0; i < _depthData.Length; i++)
        {
            _depthData[i] = UInt16.MaxValue;
        }
        // didn't work so well with the actual depth data:
        //depthFrame.CopyFrameDataToArray(_depthData);
        _sensor.CoordinateMapper.MapDepthFrameToColorSpace(_depthData, _colorSpacePoints);
    }
}
This is a helper function I created to convert point arrays from infrared space to color space:
public static System.Drawing.Point[] DepthPointsToColorSpace(System.Drawing.Point[] depthPoints, ColorSpacePoint[] colorSpace)
{
    for (int i = 0; i < depthPoints.Length; i++)
    {
        // 512 is the width of the depth/infrared image
        int index = 512 * depthPoints[i].Y + depthPoints[i].X;
        depthPoints[i].X = (int)Math.Floor(colorSpace[index].X + 0.5);
        depthPoints[i].Y = (int)Math.Floor(colorSpace[index].Y + 0.5);
    }
    return depthPoints;
}
We can solve this problem by transforming the infrared image coordinates to color image coordinates with a quadrilateral-to-quadrilateral mapping.
Take a quadrilateral Q(x1,y1,x2,y2,x3,y3,x4,y4) in the infrared image and, similarly, its corresponding quadrilateral Q'(x1',y1',x2',y2',x3',y3',x4',y4') in the color image.
We can write the above mapping in the form of an equation as follows:
Q' = Q*A
where A is a 3 x 3 matrix with coefficients a11, a12, a13, a21, ..., a33.
The formulas to obtain the coefficients are listed as follows, using example corner correspondences:
x1=173; y1=98; x2=387; y2=93; x3=395; y3=262; x4=172; y4=264;
x1p=787; y1p=235; x2p=1407; y2p=215; x3p=1435; y3p=705; x4p=795; y4p=715;
tx=(x1p-x2p+x3p-x4p)*(y4p-y3p)-(y1p-y2p+y3p-y4p)*(x4p-x3p);
ty=(x2p-x3p)*(y4p-y3p)-(x4p-x3p)*(y2p-y3p);
a31=tx/ty;
tx=(y1p-y2p+y3p-y4p)*(x2p-x3p)-(x1p-x2p+x3p-x4p)*(y2p-y3p);
ty=(x2p-x3p)*(y4p-y3p)-(x4p-x3p)*(y2p-y3p);
a32=tx/ty;
a11=x2p-x1p+a31*x2p;
a12=x4p-x1p+a32*x4p;
a13=x1p;
a21=y2p-y1p+a31*y2p;
a22=y4p-y1p+a32*y4p;
a23=y1p;
a33=1.0;
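To apply coefficients computed like this, a rough sketch follows (note that the formulas above describe a unit-square-to-quadrilateral mapping, so (u, v) is assumed to be the point's normalized position inside the infrared quadrilateral):
// Map a normalized infrared-quad position (u, v) into color-image coordinates
// using the 3x3 coefficients a11..a33; w is the homogeneous divisor.
static void MapToColor(double u, double v,
    double a11, double a12, double a13,
    double a21, double a22, double a23,
    double a31, double a32, double a33,
    out double xColor, out double yColor)
{
    double w = a31 * u + a32 * v + a33;
    xColor = (a11 * u + a12 * v + a13) / w;
    yColor = (a21 * u + a22 * v + a23) / w;
}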
It's because the camera that captures the depth data is not the same camera as the one that captures the color data.
So you should apply a correction factor to displace the depth data.
It's a factor that is almost constant, but it is related to the distance.
I've got no code for you, but it's something you can calculate yourself.

Reduce the gray scale bit representation of a pixel in C#

I am sorry if the question in the header is not descriptive enough, but basically my problem is the following.
I am taking a Bitmap and making it grayscale. It works nicely if I do not reduce the number of bits and still use 8 bits. However, the point of the homework I have is to show how the image changes when I reduce the number of bits holding the information. In the example below I am reducing the binary string to 4 bits and then rebuilding the image again. The problem is that the image becomes black. I think it's because the image has mostly gray values (in the 80s range), and when I reduce the binary string I am left with an almost black image. It seems to me that I have to check for low and high grayscale values and then push the lighter grays towards white and the darker grays towards black. In the end, with a 1-bit representation, I should only have a black and white image. Any idea how I can do that separation?
Thanks
Bitmap bmpIn = (Bitmap)Bitmap.FromFile("c:\\test.jpg");
var grayscaleBmp = MakeGrayscale(bmpIn);

public Bitmap MakeGrayscale(Bitmap original)
{
    //make an empty bitmap the same size as the original
    Bitmap newBitmap = new Bitmap(original.Width, original.Height);
    for (int i = 0; i < original.Width; i++)
    {
        for (int j = 0; j < original.Height; j++)
        {
            //get the pixel from the original image
            Color originalColor = original.GetPixel(i, j);
            //create the grayscale version of the pixel
            int grayScale = (int)((originalColor.R * .3) + (originalColor.G * .59)
                + (originalColor.B * .11));
            //now turn it into binary and reduce the number of bits that hold information
            byte test = (byte)grayScale;
            string binary = Convert.ToString(test, 2).PadLeft(8, '0');
            string cuted = binary.Remove(4);
            var converted = Convert.ToInt32(cuted, 2);
            //create the color object
            Color newColor = Color.FromArgb(converted, converted, converted);
            //set the new image's pixel to the grayscale version
            newBitmap.SetPixel(i, j, newColor);
        }
    }
    return newBitmap;
}
As mbeckish said, it is easier and much faster to use ImageAttributes.SetThreshold.
One way to do it manually is to get the median value of the grayscale pixels in the image, and use that for the threshold between black and white.
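As a rough, untested sketch of that manual approach with System.Drawing and System.Collections.Generic (GetPixel/SetPixel are slow, but they keep the example short):
// Threshold a grayscale bitmap to pure black/white using the median gray value.
// The input is assumed to already be grayscale, so R == G == B for every pixel.
public Bitmap ThresholdAtMedian(Bitmap gray)
{
    var values = new List<byte>(gray.Width * gray.Height);
    for (int x = 0; x < gray.Width; x++)
        for (int y = 0; y < gray.Height; y++)
            values.Add(gray.GetPixel(x, y).R);

    values.Sort();
    byte median = values[values.Count / 2];

    var result = new Bitmap(gray.Width, gray.Height);
    for (int x = 0; x < gray.Width; x++)
    {
        for (int y = 0; y < gray.Height; y++)
        {
            // Everything at or above the median becomes white, the rest black.
            Color c = gray.GetPixel(x, y).R >= median ? Color.White : Color.Black;
            result.SetPixel(x, y, c);
        }
    }
    return result;
}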

c# .NET green screen background remove

I am working on photo software for desktop PCs running Windows 8. I would like to be able to remove the green background from a photo by means of chroma keying.
I'm a beginner in image manipulation. I found some cool links (like http://www.quasimondo.com/archives/000615.php), but I can't translate them into C# code.
I'm using a webcam (with AForge.NET) to see a preview and take a picture.
I tried color filters, but the green background isn't really uniform, so this doesn't work.
How to do that properly in C#?
It will work even if the background isn't uniform; you just need a strategy that is generous enough to grab all of your green screen without replacing anything else.
Since at least some links on your linked page are dead, I tried my own approach:
The basics are simple: Compare the image pixel's color with some reference value or apply some other formula to determine whether it should be transparent/replaced.
The most basic formula would involve something as simple as "determine whether green is the biggest value". While this would work with very basic scenes, it can screw you up (e.g. white or gray will be filtered as well).
I've toyed around a bit using some simple sample code. While I used Windows Forms, it should be portable without problems and I'm pretty sure you'll be able to interpret the code. Just note that this isn't necessarily the most performant way to do this.
Bitmap input = new Bitmap(@"G:\Greenbox.jpg");
Bitmap output = new Bitmap(input.Width, input.Height);
// Iterate over all pixels from top to bottom...
for (int y = 0; y < output.Height; y++)
{
    // ...and from left to right
    for (int x = 0; x < output.Width; x++)
    {
        // Determine the pixel color
        Color camColor = input.GetPixel(x, y);
        // Every component (red, green, and blue) can have a value from 0 to 255, so determine the extremes
        byte max = Math.Max(Math.Max(camColor.R, camColor.G), camColor.B);
        byte min = Math.Min(Math.Min(camColor.R, camColor.G), camColor.B);
        // Should the pixel be masked/replaced?
        bool replace =
            camColor.G != min // green is not the smallest value
            && (camColor.G == max // green is the biggest value
                || max - camColor.G < 8) // or at least almost the biggest value
            && (max - min) > 96; // minimum difference between smallest/biggest value (avoid grays)
        if (replace)
            camColor = Color.Magenta;
        // Set the output pixel
        output.SetPixel(x, y, camColor);
    }
}
I've used an example image from Wikipedia and got the following result:
Just note that you might need different thresholds (8 and 96 in my code above); you might even want to use a different term to determine whether some pixel should be replaced. You can also add smoothing between frames, blending (where there's less green difference), etc. to reduce the hard edges.
I've tried Mario's solution and it worked perfectly, but it's a bit slow for me.
I looked for a different solution and found a project that uses a more efficient method here:
Github postworthy GreenScreen
That project takes a folder and processes all files; I just needed a single image, so I did this:
private Bitmap RemoveBackground(Bitmap input)
{
    Bitmap clone = new Bitmap(input.Width, input.Height, PixelFormat.Format32bppArgb);
    {
        using (input)
        using (Graphics gr = Graphics.FromImage(clone))
        {
            gr.DrawImage(input, new Rectangle(0, 0, clone.Width, clone.Height));
        }
        var data = clone.LockBits(new Rectangle(0, 0, clone.Width, clone.Height), ImageLockMode.ReadWrite, clone.PixelFormat);
        var bytes = Math.Abs(data.Stride) * clone.Height;
        byte[] rgba = new byte[bytes];
        System.Runtime.InteropServices.Marshal.Copy(data.Scan0, rgba, 0, bytes);
        var pixels = Enumerable.Range(0, rgba.Length / 4).Select(x => new {
            B = rgba[x * 4],
            G = rgba[(x * 4) + 1],
            R = rgba[(x * 4) + 2],
            A = rgba[(x * 4) + 3],
            MakeTransparent = new Action(() => rgba[(x * 4) + 3] = 0)
        });
        pixels
            .AsParallel()
            .ForAll(p =>
            {
                byte max = Math.Max(Math.Max(p.R, p.G), p.B);
                byte min = Math.Min(Math.Min(p.R, p.G), p.B);
                if (p.G != min && (p.G == max || max - p.G < 7) && (max - min) > 20)
                    p.MakeTransparent();
            });
        System.Runtime.InteropServices.Marshal.Copy(rgba, 0, data.Scan0, bytes);
        clone.UnlockBits(data);
        return clone;
    }
}
Do not forget to dispose of your input Bitmap and of the Bitmap returned by this method.
If you need to save the image, just use Bitmap's Save method:
clone.Save(@"C:\your\folder\path", ImageFormat.Png);
Here you can find methods to process an image even faster: Fast Image Processing in C#.
Chroma keying on a photo should assume an analog input. In the real world, exact values are very rare.
How do you compensate for this? Provide a threshold around the green of your choice in both hue and tone. Any colour within this threshold (inclusive) should be replaced by your chosen background; transparent may be best. In the first link, the Mask In and Mask Out parameters achieve this. The pre and post blur parameters attempt to make the background more uniform, reducing encoding-noise side effects, so that you can use a narrower (preferred) threshold.
For performance, you may want to write a pixel shader to zap the 'green' to transparent, but that is a consideration for after you get it working.
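A minimal sketch of the threshold idea (untested; the key hue, hue tolerance and minimum saturation are made-up values you would tune for your footage):
// Decide whether a pixel belongs to the green screen by its distance from a reference hue.
// System.Drawing.Color.GetHue() returns 0-360 degrees; pure green sits around 120.
static bool IsChromaGreen(System.Drawing.Color c,
    float keyHue = 120f, float hueTolerance = 35f, float minSaturation = 0.3f)
{
    float hueDiff = Math.Abs(c.GetHue() - keyHue);
    if (hueDiff > 180f) hueDiff = 360f - hueDiff; // wrap around the hue circle
    return hueDiff <= hueTolerance && c.GetSaturation() >= minSaturation;
}
Pixels for which this returns true would then be made transparent or replaced with the chosen background colour.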

OpenCV: how to increase color channel

Within an RGB image (from a webcam) I'm looking for a way to increase the intensity/brightness of green. I'd be glad if anyone can give me a starting point.
I'm using AFORGE.NET in C# and/or OpenCV directly in C++.
In general, multiplication of pixel values is thought of as an increase in contrast, and addition is thought of as an increase in brightness.
In C#,
where you have arrays holding the input and output pixel data, such as this:
byte[] pixelsIn;
byte[] pixelsOut; //assuming RGB ordered data
and contrast and brightness values such as this:
float gC = 1.5f;
float gB = 50f;
you can multiply and/or add to the green channel to achieve your desired effect (r = row, c = column, w = image width, ch = number of channels):
pixelsOut[r*w*ch + c*ch] = pixelsIn[r*w*ch + c*ch]; //red, copied unchanged
int newGreen = (int)(pixelsIn[r*w*ch + c*ch + 1] * gC + gB); //green
pixelsOut[r*w*ch + c*ch + 1] = (byte)(newGreen > 255 ? 255 : newGreen < 0 ? 0 : newGreen); //clamp to 0-255
pixelsOut[r*w*ch + c*ch + 2] = pixelsIn[r*w*ch + c*ch + 2]; //blue, copied unchanged
obviously you would want to use pointers here to speed things up.
(Please note: this code has NOT BEEN TESTED)
For AForge.NET, I suggest using the ColorRemapping class to remap the values in your green channel. The mapping function should be a concave function from [0,255] to [0,255] if you want to increase the brightness without losing details.
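For instance, a rough, untested sketch of building a concave green map (a simple gamma-style curve chosen for illustration) and applying it with AForge.NET's ColorRemapping filter:
// Identity maps for red and blue, a concave (gamma < 1) curve for green,
// so dark and mid greens are lifted without clipping the highlights.
byte[] identity = new byte[256];
byte[] greenMap = new byte[256];
for (int i = 0; i < 256; i++)
{
    identity[i] = (byte)i;
    greenMap[i] = (byte)(255.0 * Math.Pow(i / 255.0, 0.7) + 0.5);
}
var remap = new AForge.Imaging.Filters.ColorRemapping(identity, greenMap, identity);
remap.ApplyInPlace(image); // 'image' is the Bitmap captured from the webcam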
This is what I came up with after reading through many pages of the AForge.NET and OpenCV documentation. If you apply the saturation filter first, you might get a dizzy image. If you apply it later, you will get a much clearer image, but some "light green" pixels might already have been lost while applying the HSL filter.
// apply the saturation filter to increase green intensity
var f1 = new SaturationCorrection(0.5f);
f1.ApplyInPlace(image);

var filter = new HSLFiltering();
filter.Hue = new IntRange(83, 189); // all green (large range)
//filter.Hue = new IntRange(100, 120); // light green (small range)
// this will convert all pixels outside the range into gray-scale
//filter.UpdateHue = false;
//filter.UpdateLuminance = false;
// this will make all pixels outside that range blank (filter.FillColor)
filter.Saturation = new Range(0.4f, 1);
filter.Luminance = new Range(0.4f, 1);
// apply the HSL filter to keep only the green pixels
filter.ApplyInPlace(image);
