I wrote this function:
public static byte[,,] Bitmap2Byte(Bitmap image)
{
    int h = image.Height;
    int w = image.Width;
    byte[,,] result = new byte[w, h, 3];
    for (int i = 0; i < w; i++)
    {
        for (int j = 0; j < h; j++)
        {
            Color c = image.GetPixel(i, j);
            result[i, j, 0] = c.R;
            result[i, j, 1] = c.G;
            result[i, j, 2] = c.B;
        }
    }
    return result;
}
But it takes almost 6 seconds to convert an 1800x1800 image. Can I do this faster?
EDIT:
OK, I found this: http://msdn.microsoft.com/en-us/library/system.drawing.imaging.bitmapdata.aspx
There is a nice example there. The only question I have is about Marshal.Copy: can I make it copy the data directly into a byte[,,]?
EDIT 2:
OK, sometimes I got strange pixel values that do not seem to follow the r0 g0 b0 r1 g1 b1 rule. Why? Never mind, figured it out.
EDIT 3:
Made it. 0.13s vs 5.35s :)
You can speed this up considerably by using a BitmapData object which is returned from Bitmap.LockBits. Google "C# Bitmap LockBits" for a bunch of examples.
GetPixel is painfully, painfully slow, making it (ironically) completely unsuitable for the manipulation of individual pixels.
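For reference, a minimal sketch of that approach (assuming a 24bpp copy of the pixel data; Marshal.Copy cannot fill a byte[,,] directly, so it goes through a flat byte[] first, and the method name here is just illustrative):
// Requires: using System.Drawing; using System.Drawing.Imaging;
//           using System.Runtime.InteropServices;
public static byte[,,] Bitmap2ByteLockBits(Bitmap image)
{
    int h = image.Height, w = image.Width;
    var rect = new Rectangle(0, 0, w, h);
    BitmapData data = image.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    try
    {
        byte[] raw = new byte[data.Stride * h];
        Marshal.Copy(data.Scan0, raw, 0, raw.Length);

        var result = new byte[w, h, 3];
        for (int y = 0; y < h; y++)
        {
            int row = y * data.Stride;          // rows are padded to Stride bytes
            for (int x = 0; x < w; x++)
            {
                int idx = row + x * 3;          // 3 bytes per pixel, stored as B, G, R
                result[x, y, 0] = raw[idx + 2]; // R
                result[x, y, 1] = raw[idx + 1]; // G
                result[x, y, 2] = raw[idx];     // B
            }
        }
        return result;
    }
    finally
    {
        image.UnlockBits(data);
    }
}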
I've been wondering this for a while.
In .NET 4.0 Microsoft introduced the Task Parallel Library. Its Parallel.For method automatically spawns multiple threads to share the work.
For instance, if you originally had for (int i = 0; i < 3; i++) { ... }, a Parallel.For loop would typically run the body on several threads, each with a different value of i. So the best thing I can suggest is a Parallel.For loop with
Color c;
lock (obraz)
{
    c = obraz.GetPixel(..);
}
...
when getting the pixel.
If you need any more explanation of parallelism, I can't really assist you beyond suggesting you take some time to study it, as it is a huge area.
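A rough sketch of that suggestion, reusing w, h and the obraz bitmap from the snippets above (note the lock serializes GetPixel, so don't expect a big win from this alone):
// Sketch of the Parallel.For suggestion. Requires: using System.Threading.Tasks;
// lock(obraz) matches the snippet above and serializes access to the bitmap.
byte[,,] result = new byte[w, h, 3];
Parallel.For(0, w, i =>
{
    for (int j = 0; j < h; j++)
    {
        Color c;
        lock (obraz)
        {
            c = obraz.GetPixel(i, j);
        }
        result[i, j, 0] = c.R;
        result[i, j, 1] = c.G;
        result[i, j, 2] = c.B;
    }
});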
I just tried Parallel.For.
It doesn't work without a SyncLock on the bitmap.
It says the object is in use.
So it pretty much just runs in serial, lol... what a mess.
For xx As Integer = 0 To 319
    pq.ForAll(Sub(yy)
                  Dim Depth = getDepthValue(Image, xx, yy) / 2047
                  Dim NewColor = Depth * 128
                  Dim Pixel = Color.FromArgb(NewColor, NewColor, NewColor)
                  SyncLock Bmp2
                      Bmp2.SetPixel(xx, yy, Pixel)
                  End SyncLock
              End Sub)
Next
In case you're wondering, this is converting the Kinect's depth map to a bitmap.
The Kinect's depth values are 11-bit (0-2047) and represent distance, not color.
Related
I am new to image processing. I have a portion of an image that I have to search in the whole image by comparing pixels. I need to get the coordinates of the small image present in the complete image.
So I am doing something like this:
for i = 0 to Complete_Image.Width - Small_Image.Width
    for j = 0 to Complete_Image.Height - Small_Image.Height
        match = true
        for x = 0 to Small_Image.Width - 1
            for y = 0 to Small_Image.Height - 1
                if Complete_Image[i+x][j+y] != Small_Image[x][y]
                    match = false
                    Break
        if match
            Message "image found at coordinate i, j"
It is a simple pixel-matching algorithm that finds a certain portion of an image in a complete image by comparing pixels.
It is very time-consuming. For example, finding the coordinates of a 50x50 image in a 1000x1000 image takes 1000 x 1000 x 50 x 50 pixel color comparisons.
So:
Is there a better way to do image comparison in C#?
Can I use an AMD Radeon 460 GPU to do this comparison in parallel, or at least run some part of the algorithm on the GPU?
Anyway, I have run out of time and might be able to finish the parallel version later.
The premise is walking across the main image; when a pixel matches the first pixel of the sub image, it does a sub loop to compare the whole sub image.
[MethodImpl(MethodImplOptions.AggressiveInlining)]
private static unsafe bool CheckSubImage(int* m0, int* s0, Rectangle mR, Rectangle sR, int x, int y, out Point? result)
{
    // Compares the whole sub image against the main image at offset (x, y).
    result = null;
    for (int sX = 0, mX = x; sX < sR.Width && mX < mR.Right; sX++, mX++)
        for (int sY = 0, mY = y; sY < sR.Height && mY < mR.Bottom; sY++, mY++)
            if (*(m0 + mX + mY * mR.Width) != *(s0 + sX + sY * sR.Width))
                return false;
    result = new Point(x, y);
    return true;
}

protected override unsafe Point? GetPoint(string main, string sub)
{
    using (Bitmap m = new Bitmap(main), s = new Bitmap(sub))
    {
        Rectangle mR = new Rectangle(Point.Empty, m.Size), sR = new Rectangle(Point.Empty, s.Size);
        var mD = m.LockBits(mR, ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        var sD = s.LockBits(sR, ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        int* m0 = (int*)mD.Scan0, s0 = (int*)sD.Scan0;
        for (var x = mR.Left; x < mR.Right; x++)
            for (var y = mR.Top; y < mR.Bottom; y++)
                if (*(m0 + x + y * mR.Width) == *s0) // first pixel of the sub image matches
                    if (CheckSubImage(m0, s0, mR, sR, x, y, out var result))
                        return result;
        m.UnlockBits(mD);
        s.UnlockBits(sD);
    }
    return null;
}
Usage
var result = GetPoint(@"D:\TestImages\Main.bmp", @"D:\TestImages\1159-980.bmp");
The result is about 100 times faster than the simple 4-loop approach you had.
Note: this uses the unsafe keyword, so you will have to allow unsafe code in the project settings.
Disclaimer: this could be optimized further, it could be parallelized, and it would obviously be faster on the GPU. The point is that it's the algorithm that matters, not the processor.
Yep, you can use the GPU from C#.
cmsoft has a tutorial for their library here.
You will need to write some kernels in OpenCL.
You may also need to check that you have an OpenCL driver/runtime for your AMD card.
The code is mostly boilerplate and pretty straightforward. You may spend more time getting the dependencies installed than writing the code.
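Purely as an illustration of what those OpenCL instructions might look like, a brute-force comparison kernel could be held in a C# string and handed to whichever wrapper library you pick; the kernel name, arguments, and launch details below are made up for the example, not taken from any particular tutorial.
// Illustrative only: OpenCL C kernel source kept as a C# string.
// The host side (compiling and launching it) depends on the wrapper you choose.
// The global work size is assumed to be limited to valid candidate offsets,
// i.e. (mainWidth - subWidth + 1) x (mainHeight - subHeight + 1).
const string CompareKernelSource = @"
__kernel void compare_pixels(__global const int* mainImg,
                             __global const int* subImg,
                             const int mainWidth,
                             const int subWidth,
                             const int subHeight,
                             __global int* matchFlags)
{
    int x = get_global_id(0);   // candidate top-left x in the main image
    int y = get_global_id(1);   // candidate top-left y in the main image
    int match = 1;
    for (int sy = 0; sy < subHeight && match; sy++)
        for (int sx = 0; sx < subWidth && match; sx++)
            if (mainImg[(y + sy) * mainWidth + (x + sx)] != subImg[sy * subWidth + sx])
                match = 0;
    matchFlags[y * get_global_size(0) + x] = match;
}";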
This method is called correlation and can be accelerated with the FFT.
Correlation can be converted to convolution simply by rotating the kernel. So if both Complete_Image and Small_Image are padded with enough zeros and Small_Image is rotated by 180 degrees, then the inverse FFT of the product of the two images' FFTs gives the entire correlation image in O(n log n), where n is the number of pixels in the image (Width x Height). The direct correlation approach has complexity on the order of nearly n^2.
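Written out (the standard identity, with I the zero-padded Complete_Image and T the zero-padded Small_Image):

$$\operatorname{corr}(I, T) = \mathcal{F}^{-1}\!\left(\mathcal{F}(I)\cdot\overline{\mathcal{F}(T)}\right)$$

Taking the complex conjugate of T's spectrum is the frequency-domain counterpart of the 180-degree rotation mentioned above.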
I've been working on an edge detection program in C#, and to make it run faster, I recently made it use lock bits. However, lockBits is still not as fast as I would like it to run. Although the problem could be my general algorithm, I'm also wondering if there is anything better than lockBits I can use for image processing.
In case the problem is the algorithm, here's a basic explanation. Go through an array of Colors (made using lockbits, which represent pixels) and for each Color, check the color of the eight pixels around that pixel. If those pixels do not match the current pixel closely enough, consider the current pixel an edge.
Here's the basic code that decides if a pixel is an edge. It takes in a Color[] of nine colors, the first of which is the pixel to check.
public Boolean isEdgeOptimized(Color[] colors)
{
    //colors[0] should be the checking pixel
    Boolean returnBool = true;
    float percentage = percentageInt; //the percentage used is set
                                      //equal to the global variable percentageInt
    if (isMatching(colors[0], colors[1], percentage) &&
        isMatching(colors[0], colors[2], percentage) &&
        isMatching(colors[0], colors[3], percentage) &&
        isMatching(colors[0], colors[4], percentage) &&
        isMatching(colors[0], colors[5], percentage) &&
        isMatching(colors[0], colors[6], percentage) &&
        isMatching(colors[0], colors[7], percentage) &&
        isMatching(colors[0], colors[8], percentage))
    {
        returnBool = false;
    }
    return returnBool;
}
This code is applied to every pixel; the colors are fetched using LockBits.
So basically, the question is, how can I get my program to run faster? Is it my algorithm, or is there something I can use that is faster than lockBits?
By the way, the project is on GitHub, here.
Are you really passing in a floating point number as a percentage to isMatching?
I looked at your code for isMatching on GitHub and well, yikes. You ported this from Java, right? C# uses bool not Boolean and while I don't know for sure, I don't like the looks of code that does that much boxing and unboxing. Further, you're doing a ton of floating point multiplication and comparison when you don't need to:
public static bool IsMatching(Color a, Color b, int percent)
{
    //this method is used to identify whether two pixels,
    //of color a and b, match, as in they can be considered
    //a solid color based on the acceptance value (percent)
    int thresh = (int)(percent * 255);
    return Math.Abs(a.R - b.R) < thresh &&
           Math.Abs(a.G - b.G) < thresh &&
           Math.Abs(a.B - b.B) < thresh;
}
This will cut down the amount of work you're doing per pixel. I still don't like it because I try to avoid method calls in the middle of a per-pixel loop especially an 8x per-pixel loop. I made the method static to cut down on an instance being passed in that isn't used. These changes alone will probably double your performance since we're doing only 1 multiply, no boxing, and are now using the inherent short-circuit of && to cut down the work.
If I were doing this, I'd be more likely to do something like this:
// assert: bitmap.Height > 2 && bitmap.Width > 2
BitmapData data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                                  ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int scaledPercent = percent * 255;
unsafe
{
    byte* prevLine = (byte*)data.Scan0;
    byte* currLine = prevLine + data.Stride;
    byte* nextLine = currLine + data.Stride;
    for (int y = 1; y < bitmap.Height - 1; y++)
    {
        byte* pp = prevLine + 3;
        byte* cp = currLine + 3;
        byte* np = nextLine + 3;
        for (int x = 1; x < bitmap.Width - 1; x++)
        {
            if (IsEdgeOptimized(pp, cp, np, scaledPercent))
            {
                // do what you need to do
            }
            pp += 3; cp += 3; np += 3;
        }
        prevLine = currLine;
        currLine = nextLine;
        nextLine += data.Stride;
    }
}
bitmap.UnlockBits(data); // don't forget to unlock when you're done
private unsafe static bool IsEdgeOptimized(byte* pp, byte* cp, byte* np, int scaledPercent)
{
    return IsMatching(cp, pp - 3, scaledPercent) &&
           IsMatching(cp, pp, scaledPercent) &&
           IsMatching(cp, pp + 3, scaledPercent) &&
           IsMatching(cp, cp - 3, scaledPercent) &&
           IsMatching(cp, cp + 3, scaledPercent) &&
           IsMatching(cp, np - 3, scaledPercent) &&
           IsMatching(cp, np, scaledPercent) &&
           IsMatching(cp, np + 3, scaledPercent);
}
private unsafe static bool IsMatching(byte* p1, byte* p2, int thresh)
{
    return Math.Abs(*p1++ - *p2++) < thresh &&
           Math.Abs(*p1++ - *p2++) < thresh &&
           Math.Abs(*p1 - *p2) < thresh;
}
Which now does all kinds of horrible pointer mangling to cut down on array accesses and so on. If all of this pointer work makes you feel uncomfortable, you can allocate byte arrays for prevLine, currLine and nextLine and do a Marshal.Copy for each row as you go.
The algorithm is this: start one pixel in from the top and left and iterate over every pixel in the image except the outside edge (no edge conditions! Yay!). I keep pointers to the starts of each line, prevLine, currLine, and nextLine. Then when I start the x loop, I make up pp, cp, np which are previous pixel, current pixel and next pixel. current pixel is really the one we care about. pp is the pixel directly above it, np directly below it. I pass those into IsEdgeOptimized which looks around cp, calling IsMatching for each.
Now this all assume 24 bits per pixel. If you're looking at 32 bits per pixel, all those magic 3's in there need to be 4's, but other than that the code doesn't change. You could parameterize the number of bytes per pixel if you want so it could handle either.
FYI, the channels in the pixels are typically b, g, r, (a).
Colors are stored as bytes in memory. Your actual Bitmap, if it is a 24 bit image is stored as a block of bytes. Scanlines are data.Stride bytes wide, which is at least as large as 3 * the number of pixels in a row (it may be larger because scan lines are often padded).
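To make that layout concrete, here is a small illustrative helper (not part of the code above) that reads one pixel straight out of a locked BitmapData; like the rest of this code it needs an unsafe context:
// Illustrative only: locate one pixel's bytes inside the locked buffer.
// Image.GetPixelFormatSize gives bits per pixel, so 3 or 4 isn't hard-coded.
private static unsafe Color ReadPixel(BitmapData data, int x, int y)
{
    int bytesPerPixel = Image.GetPixelFormatSize(data.PixelFormat) / 8;
    byte* p = (byte*)data.Scan0 + y * data.Stride + x * bytesPerPixel;
    return Color.FromArgb(p[2], p[1], p[0]); // memory order is B, G, R(, A)
}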
When I declare a variable of type byte * in C#, I'm doing a few things. First, I'm saying that this variable contains the address of a location of a byte in memory. Second, I'm saying that I'm about to violate all the safety measures in .NET because I could now read and write any byte in memory, which can be dangerous.
So when I have something like:
Math.Abs(*p1++ - *p2++) < thresh
What it says is (and this will be long):
Take the byte that p1 points to and hold onto it
Add 1 to p1 (this is the ++ - it makes the pointer point to the next byte)
Take the byte that p2 points to and hold onto it
Add 1 to p2
Subtract step 3 from step 1
Pass that to Math.Abs.
The reasoning behind this is that, historically, reading the contents of a byte and moving forward is a very common operation and one that many CPUs build into a single operation of a couple instructions that pipeline into a single cycle or so.
When we enter IsMatching, p1 points to pixel 1, p2 points to pixel 2 and in memory they are laid out like this:
p1 : B
p1 + 1: G
p1 + 2: R
p2 : B
p2 + 1: G
p2 + 2: R
So IsMatching just does the absolute difference while stepping through memory.
Your follow-on question tells me that you don't really understand pointers. That's OK - you can probably learn them. Honestly, the concepts aren't that hard, but the problem is that without a lot of experience you are quite likely to shoot yourself in the foot. Perhaps you should consider just running a profiling tool on your code, cooling down the worst hot spots, and calling it good.
For example, you'll note that I iterate from the second row to the penultimate row and the second column to the penultimate column. This is intentional to avoid having to handle the "I can't read above the 0th line" case, which eliminates a big class of potential bugs involving reads outside a legal memory block - bugs which may appear benign under many runtime conditions.
Instead of copying each image to a byte[], then copying that into a Color[], creating another temp Color[9] for each pixel, and then using SetPixel to set the color: compile with the /unsafe flag, mark the method as unsafe, and replace the Marshal.Copy into a byte[] with direct access to the locked bits:
byte* bytePtr = (byte*)bitmapData.Scan0; // bitmapData is the BitmapData returned by LockBits
// code goes here
Make sure you replace the SetPixel call with setting the proper bytes. This isn't an issue with LockBits (you do need LockBits); the issue is that everything else around it is processing the image inefficiently.
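As a rough sketch of "setting the proper bytes" for 24bpp data (x, y, and the surrounding loop are whatever you already have; bytePtr is the pointer from the snippet above):
// Illustrative only: write one pixel directly instead of calling SetPixel.
byte* p = bytePtr + y * bitmapData.Stride + x * 3;
p[0] = 0;   // blue
p[1] = 0;   // green
p[2] = 0;   // red; e.g. paint a detected edge black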
If you want to use parallel task execution, you can use the Parallel class in the System.Threading.Tasks namespace. The following links have some samples and explanations:
http://csharpexamples.com/fast-image-processing-c/
http://msdn.microsoft.com/en-us/library/dd460713%28v=vs.110%29.aspx
You can split the image into 10 bitmaps and process each one, then finally combine them (just an idea).
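A sketch of that banding idea, using Parallel.For over row ranges on one locked bitmap rather than physically splitting it into 10 bitmaps (names are illustrative; requires using System.Threading.Tasks, System.Drawing and System.Drawing.Imaging):
var data = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height),
                           ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
int bands = 10;
int rowsPerBand = (bitmap.Height + bands - 1) / bands;
Parallel.For(0, bands, band =>
{
    int yStart = band * rowsPerBand;
    int yEnd = Math.Min(yStart + rowsPerBand, bitmap.Height);
    for (int y = yStart; y < yEnd; y++)
    {
        // process row y of data here; each band touches different rows,
        // so the pixel buffer itself needs no locking
    }
});
bitmap.UnlockBits(data);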
I'm working on a strange project. I have access to a laser cutter that I am using to make stencils (from metal). I can use coordinates to program the machine to cut a certain image, but what I was wondering was: how can I write a program that would take a scanned image that was black and white, and give me the coordinates of the black areas? I don't mind if it gives every pixel even though I need only the outer lines, I can do that part.
I've searched for this for a while, but the question has so many words with lots of results such as colors and pixels, that I find tons of information that isn't relevant. I would like to use C++ or C#, but I can use any language including scripting.
I used GetPixel in C#:
public List<String> GetBlackDots()
{
    Color pixelColor;
    var list = new List<String>();
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            pixelColor = bitmapImage.GetPixel(x, y);
            if (pixelColor.R == 0 && pixelColor.G == 0 && pixelColor.B == 0)
                list.Add(String.Format("x:{0} y:{1}", x, y));
        }
    }
    return list;
}
If we assume that the scanned image is perfectly white and perfectly black with no in-between colors, then we can just treat the image as an array of RGB values and simply scan for 0 values. If a value is 0, it must be black, right? However, the image probably won't be perfectly black, so you'll want some wiggle room.
What you do then would look something like this:
for(int i = 0; i < img.width; i++){
    for(int j = 0; j < img.height; j++){
        // 20 is an arbitrary value and subject to your opinion and need.
        if(img[i][j].color <= 20)
            //store i and j, those are your pixel location
    }
}
Now if you use C#, it'll be easy to import most image formats, stick em in an array, and get your results. But if you want faster results, you'd be better off with C++.
This shortcut relies completely on the image values being very extreme. If large areas of your images are really grey, then the accuracy of this approach is terrible.
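In C#, a hedged version of that idea could reuse the GetBlackDots shape from the question, just with a tolerance instead of the exact zero check (the threshold is as arbitrary as the 20 in the pseudocode above):
// Sketch: same loop as GetBlackDots, but "nearly black" counts as black.
// GetPixel is slow; the LockBits techniques discussed earlier in this thread
// apply here too if speed becomes a problem.
public List<String> GetDarkDots(Bitmap bitmapImage, int threshold)
{
    var list = new List<String>();
    for (int y = 0; y < bitmapImage.Height; y++)
    {
        for (int x = 0; x < bitmapImage.Width; x++)
        {
            Color c = bitmapImage.GetPixel(x, y);
            if (c.R <= threshold && c.G <= threshold && c.B <= threshold)
                list.Add(String.Format("x:{0} y:{1}", x, y));
        }
    }
    return list;
}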
While there are many solutions in many languages, I'll outline a simple solution that I would probably use myself. There is a great imaging library for Python called PIL (Python Imaging Library - http://www.pythonware.com/products/pil/) which could accomplish what you need very easily.
Here's an example of something that might help you get started.
from PIL import Image

image = Image.open("image.png")
datas = image.getdata()
for item in datas:
    if item[0] < 255 and item[1] < 255 and item[2] < 255:
        pass  # THIS PIXEL IS NOT WHITE
Of course that will count any pixel that is not completely white; you might want to add some tolerance so pixels which are not EXACTLY white also get treated as white. You'll also have to keep track of which pixel you are currently looking at.
Suppose I have a series of black and white images, NOT grayscale. I'm trying to calculate an average of all images. I have some sample code that should work but I'm wondering if there is a better way?
Bitmap[] Images = ReadAndScale(Width: 50, Height: 50);
int Width = 50;
int Height = 50;
double[,] Result = new double[50, 50];

for (int i = 0; i < Images.Length; i++)
{
    for (int j = 0; j < Width; j++)
    {
        for (int k = 0; k < Height; k++)
        {
            // compare ARGB values: Color equality won't match the named Color.White
            Result[j, k] += Images[i].GetPixel(j, k).ToArgb() == Color.White.ToArgb() ? 0
                : 1.0 / (double)Images.Length;
        }
    }
}
At the end of these loops you have an array Result[,] that contains the average for each pixel; a value > 0.5 means the pixel is black on average, otherwise it is white.
Well, I think your general algorithm is fine; there's not really a way to make that "better".
The only way I see you could make it "better" would be to scrape more performance out of it, but do you need that?
The main perf issue I see is the use of GetPixel() as it is a relatively slow method. Here is an example using unsafe code that should run much faster: Unsafe Bitmap
Don't let the word "unsafe" scare you, it just is the keyword for enabling true pointers in C#.
Well, you could make the images the inner loop, and break out as soon as you've guaranteed the average will be less than or greater than .5. Not sure if that is really "better".
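A hedged sketch of that early-exit idea, reusing Images, Width and Height from the question (and comparing ARGB values, since Color equality won't match Color.White directly):
// Sketch: pixels in the outer loops, images in the inner loop, breaking out
// as soon as the black-vs-white majority for that pixel is already decided.
int half = Images.Length / 2;
bool[,] isBlack = new bool[Width, Height];
for (int j = 0; j < Width; j++)
{
    for (int k = 0; k < Height; k++)
    {
        int blackCount = 0, remaining = Images.Length;
        foreach (Bitmap img in Images)
        {
            if (img.GetPixel(j, k).ToArgb() != Color.White.ToArgb())
                blackCount++;
            remaining--;
            if (blackCount > half) break;              // already mostly black
            if (blackCount + remaining <= half) break; // can no longer reach a majority
        }
        isBlack[j, k] = blackCount > half;
    }
}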
I'm building a System.Drawing.Bitmap in code from a byte array, and I'm not sure what properties and such need to be set so that it will properly save as a .BMP file. I have Bitmap b = new Bitmap(width, height, PixelFormat.Format32bppArgb); as my constructor and
for (int i = 1; i < data.Length; i += 2)
{
    Color c = new Color();
    c.A = data[i];
    c.R = data[i];
    c.G = data[i];
    c.B = data[i];
    int x = (i + 1) / 2;
    int y = x / width;
    x %= width;
    b.SetPixel(x, y, c);
}
as the code that sets the bitmap data (it's reading from a byte array containing 16 bit little'endian values and converting them to grayscale pixels). What else should be done to make this bitmap saveable?
Nothing else needs to be done; you can save the bitmap right after setting its image data.
Also note that calling SetPixel in a loop is utterly slow; see this link for more information.
An instance of the Bitmap class can be saved as a .bmp file just by calling the Save(string filename) method.
As mentioned in other answers, setting the pixels one at a time in a loop is a bit slow.
Also, you can't set the properties of a Color struct; you will need to create it as follows:
Color c = Color.FromArgb(data[i], data[i + 1], data[i + 2], data[i + 3]);
(Not sure what is in your data[] array)
(starting at i=1 looks like a bad move, btw - should be 0?)
What happens when you try to save it (b.Save("foo.bmp");)?
Also; if this is a large image, you may want to look at LockBits; a related post is here (although it uses a rectangular array, not a 1-dimensional array).
The formula you're using for grayscale is not the one normally used. Granted, it will work pretty well, but you most likely want to stick to the standard.
The eye is more receptive to some colors (green), so you usually use a different weighting. I would set the color like this:
Color c = Color.FromArgb(
    255,                       // fully opaque
    (int)(0.3 * data[i]),
    (int)(0.59 * data[i]),
    (int)(0.11 * data[i]));
See the Wikipedia article on Grayscale for more information.