My Application
I am writing an application that needs to convert RGB images to grayscale.
The conversion works, but converting an image of 3648 × 2736 pixels takes roughly 7 seconds.
I know that GetPixel and SetPixel take some time.
But I think it shouldn't take that long when using LockBits, even though the image is not small (please correct me if that is wrong).
Maybe I just made a fatal mistake somewhere in my code.
The code
public static long ConvertToGrayScaleV2(Bitmap imageColor, bool useHDTVConversion)
{
Stopwatch stpw = new Stopwatch();
stpw.Start();
System.Drawing.Imaging.BitmapData imageColorData = imageColor.LockBits(new Rectangle(new Point(0, 0), imageColor.Size),
System.Drawing.Imaging.ImageLockMode.ReadWrite, imageColor.PixelFormat);
IntPtr PtrColor = imageColorData.Scan0;
int strideColor = imageColorData.Stride;
byte[] byteImageColor = new byte[Math.Abs(strideColor) * imageColor.Height];
System.Runtime.InteropServices.Marshal.Copy(PtrColor, byteImageColor, 0, Math.Abs(strideColor) * imageColor.Height);
int bytesPerPixel = getBytesPerPixel(imageColor);
byte value;
if (bytesPerPixel == -1)
throw new Exception("Can't get bytes per pixel because it is not defined for this image format.");
for (int x = 0, position; x < imageColor.Width * imageColor.Height; x++)
{
position = x * bytesPerPixel;
if (useHDTVConversion)
{
value = (byte)(byteImageColor[position] * 0.0722 + byteImageColor[position + 1] * 0.7152 + byteImageColor[position + 2] * 0.2126);
}
else
{
value = (byte)(byteImageColor[position] * 0.114 + byteImageColor[position + 1] * 0.587 + byteImageColor[position + 2] * 0.299);
}
byteImageColor[position] = value;
byteImageColor[position+1] = value;
byteImageColor[position+2] = value;
}
System.Runtime.InteropServices.Marshal.Copy(byteImageColor, 0, PtrColor, Math.Abs(strideColor) * imageColor.Height);
imageColor.UnlockBits(imageColorData);
stpw.Stop();
return stpw.ElapsedMilliseconds;
}
public static int getBytesPerPixel(Image img)
{
switch (img.PixelFormat)
{
case System.Drawing.Imaging.PixelFormat.Format16bppArgb1555: return 2;
case System.Drawing.Imaging.PixelFormat.Format16bppGrayScale: return 2;
case System.Drawing.Imaging.PixelFormat.Format16bppRgb555: return 2;
case System.Drawing.Imaging.PixelFormat.Format16bppRgb565: return 2;
case System.Drawing.Imaging.PixelFormat.Format1bppIndexed: return 1;
case System.Drawing.Imaging.PixelFormat.Format24bppRgb: return 3;
case System.Drawing.Imaging.PixelFormat.Format32bppArgb: return 4;
case System.Drawing.Imaging.PixelFormat.Format32bppPArgb: return 4;
case System.Drawing.Imaging.PixelFormat.Format32bppRgb: return 4;
case System.Drawing.Imaging.PixelFormat.Format48bppRgb: return 6;
case System.Drawing.Imaging.PixelFormat.Format4bppIndexed: return 1;
case System.Drawing.Imaging.PixelFormat.Format64bppArgb: return 8;
case System.Drawing.Imaging.PixelFormat.Format64bppPArgb: return 8;
case System.Drawing.Imaging.PixelFormat.Format8bppIndexed: return 1;
default: return -1;
}
}
I know this is old, but a few possibly valuable points:
imageColor.Width * imageColor.Height is an expensive expression that you are evaluating nearly 10 million times (3648 * 2736 ≈ 9.98 million) more often than you need to.
The for loop recalculates it on every single iteration.
Not only that, but the CLR has to go through the Bitmap object's Width and Height property getters on each of those 10 million iterations too. That is roughly 30 million more operations than you need every time you run this on your bitmap.
Change:
for (int x = 0, position; x < imageColor.Width * imageColor.Height; x++)
{
...
}
To:
var heightWidth = imageColor.Width * imageColor.Height;
for (int x = 0, position; x < heightWidth; x++)
{
...
}
If you cache all the potential results of the three multiplications (R, G, B, each with 256 possible values) and use a lookup table for the new values instead of calculating them 10 million times, you'll also see a huge performance increase.
Here is the full, very fast code (much faster than a ColorMatrix). Notice I have moved all possible pre-calculated values into local variables, and within the loop there is absolutely minimal work involved.
var lookupB = new byte[256];
var lookupG = new byte[256];
var lookupR = new byte[256];
var bVal = hdtv ? 0.0722 : 0.114;
var gVal = hdtv ? 0.7152 : 0.587;
var rVal = hdtv ? 0.2126 : 0.299;
for (var originalValue = 0; originalValue < 256; originalValue++)
{
    // The three weights sum to 1, so the lookups can never overflow a byte.
    lookupB[originalValue] = (byte)(originalValue * bVal);
    lookupG[originalValue] = (byte)(originalValue * gVal);
    lookupR[originalValue] = (byte)(originalValue * rVal);
}
unsafe
{
    var bitmapData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadWrite, bitmap.PixelFormat);
    var pointer = (byte*)bitmapData.Scan0;
    var bytesPerPixel = getBytesPerPixel(bitmap);
    var heightWidth = bitmap.Width * bitmap.Height;
    // Note: this assumes the rows carry no padding, i.e. Stride == Width * bytesPerPixel.
    for (var i = 0; i < heightWidth; ++i)
    {
        // Pixel bytes are laid out B, G, R(, A).
        var value = (byte)(lookupB[pointer[0]] + lookupG[pointer[1]] + lookupR[pointer[2]]);
        pointer[0] = value;
        pointer[1] = value;
        pointer[2] = value;
        pointer += bytesPerPixel;
    }
    bitmap.UnlockBits(bitmapData);
}
I ran across a similar performance issue where I needed to iterate over an array of bitmap data. I found that there is a significant performance hit when referencing the bitmap's Width or Height properties inside the loop, or as the loop bounds, like you are doing with imageColor.Width and .Height. By simply declaring integers outside the loop and caching the bitmap height and width there in advance, I cut my loop time in half.
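As a minimal sketch of what that looks like (the method and variable names are mine, the per-pixel body is elided):
// Minimal illustration of the caching: read Width/Height once into locals
// and use those as the loop bound instead of the Bitmap properties.
static void ProcessPixels(Bitmap bmp, int bytesPerPixel, byte[] pixelBytes)
{
    int width = bmp.Width;            // cached once
    int height = bmp.Height;          // cached once
    int pixelCount = width * height;  // computed once, not on every iteration
    for (int i = 0; i < pixelCount; i++)
    {
        int position = i * bytesPerPixel;
        // ... per-pixel work on pixelBytes[position] goes here ...
    }
}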
If you're converting to greyscale, try using a ColorMatrix transformation instead.
from: https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx
Image img = Image.FromFile(dlg.FileName);
Bitmap bm = new Bitmap(img.Width,img.Height);
Graphics g = Graphics.FromImage(bm);
ColorMatrix cm = new ColorMatrix(new float[][]{
    new float[]{0.5f, 0.5f, 0.5f, 0, 0},
    new float[]{0.5f, 0.5f, 0.5f, 0, 0},
    new float[]{0.5f, 0.5f, 0.5f, 0, 0},
    new float[]{0,    0,    0,    1, 0},
    new float[]{0,    0,    0,    0, 1}});
/*
//Gilles Khouzam's colour corrected grayscale shear
ColorMatrix cm = new ColorMatrix(new float[][]{
    new float[]{0.3f,  0.3f,  0.3f,  0, 0},
    new float[]{0.59f, 0.59f, 0.59f, 0, 0},
    new float[]{0.11f, 0.11f, 0.11f, 0, 0},
    new float[]{0,     0,     0,     1, 0},
    new float[]{0,     0,     0,     0, 1}});
*/
ImageAttributes ia = new ImageAttributes();
ia.SetColorMatrix(cm);
g.DrawImage(img,new Rectangle(0,0,img.Width,img.Height),0,0,img.Width,img.Height,GraphicsUnit.Pixel,ia);
g.Dispose();
I would guess that the marshalling and copying is taking a large chunk of the time.
This link describes 3 methods for greyscaling an image.
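I can't reproduce the linked article here, but to illustrate the point about marshalling, here is a minimal sketch of my own (not the article's code, and not the OP's exact method) that converts in place through the locked pointer, so the buffer is never copied out and back. It assumes a 24 or 32 bpp image in the usual B, G, R(, A) byte order.
// Sketch only: grayscale the bitmap in place through the locked pointer,
// avoiding the two Marshal.Copy round-trips.
static unsafe void GrayscaleInPlace(Bitmap bmp, int bytesPerPixel)
{
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
        ImageLockMode.ReadWrite, bmp.PixelFormat);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            byte* row = (byte*)data.Scan0 + y * data.Stride; // stride-aware row start
            for (int x = 0; x < data.Width; x++)
            {
                byte* px = row + x * bytesPerPixel;
                byte value = (byte)(px[0] * 0.114 + px[1] * 0.587 + px[2] * 0.299);
                px[0] = px[1] = px[2] = value;
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}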
Related
At a school we are preparing artwork which we have scanned and want to automatically crop to the correct size. The kids (attempt to) draw within a rectangle:
I want to detect the inner rectangle borders, so I have applied a few filters with accord.net:
var newImage = new Bitmap(@"C:\Temp\temp.jpg");
var g = Graphics.FromImage(newImage);
var pen = new Pen(Color.Purple, 10);
var grayScaleFilter = new Grayscale(1, 0, 0);
var image = grayScaleFilter.Apply(newImage);
image.Save(@"C:\temp\grey.jpg");
var skewChecker = new DocumentSkewChecker();
var angle = skewChecker.GetSkewAngle(image);
var rotationFilter = new RotateBilinear(-angle);
rotationFilter.FillColor = Color.White;
var rotatedImage = rotationFilter.Apply(image);
rotatedImage.Save(#"C:\Temp\rotated.jpg");
var thresholdFilter = new IterativeThreshold(10, 128);
thresholdFilter.ApplyInPlace(rotatedImage);
rotatedImage.Save(#"C:\temp\threshold.jpg");
var invertFilter = new Invert();
invertFilter.ApplyInPlace(rotatedImage);
rotatedImage.Save(#"C:\temp\inverted.jpg");
var bc = new BlobCounter
{
BackgroundThreshold = Color.Black,
FilterBlobs = true,
MinWidth = 1000,
MinHeight = 1000
};
bc.ProcessImage(rotatedImage);
foreach (var rect in bc.GetObjectsRectangles())
{
g.DrawRectangle(pen, rect);
}
newImage.Save(@"C:\Temp\test.jpg");
This produces the following inverted image that the BlobCounter uses as input:
But the result of the BlobCounter isn't super accurate; the purple lines indicate what it has detected.
Would there be a better alternative to the BlobCounter in accord.net, or are there other C# libraries better suited for this kind of computer vision?
Here is a simple solution I put together while bored on my lunch break.
Basically it just scans each side of the image from the outside in for pixels darker than a given color threshold (black), then takes the most prominent result for each edge.
Given
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static unsafe bool IsValid(int* scan0Ptr, int x, int y, int stride, double thresh)
{
var c = *(scan0Ptr + x + y * stride);
var r = ((c >> 16) & 255);
var g = ((c >> 8) & 255);
var b = ((c >> 0) & 255);
// compare it against the threshold
return r * r + g * g + b * b < thresh;
}
private static int GetBest(IEnumerable<int> array)
=> array.Where(x => x != 0)
.GroupBy(i => i)
.OrderByDescending(grp => grp.Count())
.Select(grp => grp.Key)
.First();
Example
private static unsafe Rectangle ConvertImage(string path, Color source, double threshold)
{
var thresh = threshold * threshold;
using var bmp = new Bitmap(path);
// lock the array for direct access
var bitmapData = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), ImageLockMode.ReadWrite, PixelFormat.Format32bppPArgb);
int left, top, bottom, right;
try
{
// get the pointer
var scan0Ptr = (int*)bitmapData.Scan0;
// get the stride
var stride = bitmapData.Stride / 4;
var array = new int[bmp.Height];
for (var y = 0; y < bmp.Height; y++)
for (var x = 0; x < bmp.Width; x++)
if (IsValid(scan0Ptr, x, y, stride, thresh))
{
array[y] = x;
break;
}
left = GetBest(array);
array = new int[bmp.Height];
for (var y = 0; y < bmp.Height; y++)
for (var x = bmp.Width-1; x > 0; x--)
if (IsValid(scan0Ptr, x, y, stride, thresh))
{
array[y] = x;
break;
}
right = GetBest(array);
array = new int[bmp.Width];
for (var x = 0; x < bmp.Width; x++)
for (var y = 0; y < bmp.Height; y++)
if (IsValid(scan0Ptr, x, y, stride, thresh))
{
array[x] = y;
break;
}
top = GetBest(array);
array = new int[bmp.Width];
for (var x = 0; x < bmp.Width; x++)
for (var y = bmp.Height-1; y > 0; y--)
if (IsValid(scan0Ptr, x, y, stride, thresh))
{
array[x] = y;
break;
}
bottom = GetBest(array);
}
finally
{
// unlock the bitmap
bmp.UnlockBits(bitmapData);
}
return new Rectangle(left,top,right-left,bottom-top);
}
Usage
var fileName = @"D:\7548p.jpg";
var rect = ConvertImage(fileName, Color.Black, 50);
using var src = new Bitmap(fileName);
using var target = new Bitmap(rect.Width, rect.Height);
using var g = Graphics.FromImage(target);
g.DrawImage(src, new Rectangle(0, 0, target.Width, target.Height), rect, GraphicsUnit.Pixel);
target.Save(@"D:\Test.Bmp");
Output
Notes:
This is not meant to be bulletproof or the best solution, just a fast, simple one.
There are many approaches to this, including machine-learning ones, that are likely better and more robust.
There is a lot of code repetition here; basically I just copied, pasted, and tweaked for each side.
I have just picked an arbitrary threshold that seems to work; play with it.
Getting the most common occurrence for each side is likely not the best approach; maybe you would want to bucket the results (see the sketch after these notes).
You could probably put a sanity limit on how far in a side needs to scan.
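For the bucketing idea above, a minimal sketch (a hypothetical GetBestBucketed helper, not part of the code above) that groups the candidate coordinates into buckets a few pixels wide and returns the average of the fullest bucket:
// Hypothetical alternative to GetBest: instead of taking the single most common
// coordinate, group candidates into buckets of `bucketSize` pixels and return
// the average coordinate of the densest bucket.
private static int GetBestBucketed(IEnumerable<int> values, int bucketSize = 8)
    => (int)values.Where(v => v != 0)
        .GroupBy(v => v / bucketSize)
        .OrderByDescending(grp => grp.Count())
        .First()
        .Average();
Whether this beats the plain most-common value depends on how noisy the scan results are.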
I am trying to automate something with my C# application, for which I use a bitmap detection system to detect whether an icon has appeared on screen. This works perfectly on a PC. However, when I put the application on a server, it never works. I am using a Google Cloud instance with a Tesla K80 and 2 vCPUs, running Windows Server 2012.
Here is my code:
// Capture the current screen as a bitmap
public static Bitmap CaptureScreen()
{
// Bitmap format
Bitmap ScreenCapture = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
// Capture screen
Graphics GFX = Graphics.FromImage(ScreenCapture);
GFX.CopyFromScreen(Screen.PrimaryScreen.Bounds.X,
Screen.PrimaryScreen.Bounds.Y, 0, 0,
ScreenCapture.Size, CopyPixelOperation.SourceCopy);
return ScreenCapture;
}
// Find a list of all the points of a bitmap within another bitmap
public static List<Point> FindBitmapsEntry(Bitmap SourceBitmap, Bitmap SearchedBitmap)
{
#region Arguments check
if (SourceBitmap == null || SearchedBitmap == null)
throw new ArgumentNullException();
if (SourceBitmap.PixelFormat != SearchedBitmap.PixelFormat)
throw new ArgumentException("Pixel formats aren't equal.");
if (SourceBitmap.Width < SearchedBitmap.Width || SourceBitmap.Height < SearchedBitmap.Height)
throw new ArgumentException("Size of SearchedBitmap is bigger than SourceBitmap!");
#endregion
var PixelFormatSize = Image.GetPixelFormatSize(SourceBitmap.PixelFormat) / 8;
// Copy SourceBitmap to byte array
var SourceBitmapData = SourceBitmap.LockBits(new Rectangle(0, 0, SourceBitmap.Width, SourceBitmap.Height),
ImageLockMode.ReadOnly, SourceBitmap.PixelFormat);
var SourceBitmapByteLength = SourceBitmapData.Stride * SourceBitmap.Height;
var SourceBytes = new byte[SourceBitmapByteLength];
Marshal.Copy(SourceBitmapData.Scan0, SourceBytes, 0, SourceBitmapByteLength);
SourceBitmap.UnlockBits(SourceBitmapData);
// Copy SearchedBitmap to byte array
var SearchingBitmapData =
SearchedBitmap.LockBits(new Rectangle(0, 0, SearchedBitmap.Width, SearchedBitmap.Height),
ImageLockMode.ReadOnly, SearchedBitmap.PixelFormat);
var SearchingBitmapByteLength = SearchingBitmapData.Stride * SearchedBitmap.Height;
var SearchingBytes = new byte[SearchingBitmapByteLength];
Marshal.Copy(SearchingBitmapData.Scan0, SearchingBytes, 0, SearchingBitmapByteLength);
SearchedBitmap.UnlockBits(SearchingBitmapData);
var PointsList = new List<Point>();
// Searching entries, minimizing searching zones
// SourceBitmap.Height - SearchedBitmap.Height + 1
for (var MainY = 0; MainY < SourceBitmap.Height - SearchedBitmap.Height + 1; MainY++)
{
var SourceY = MainY * SourceBitmapData.Stride;
for (var MainX = 0; MainX < SourceBitmap.Width - SearchedBitmap.Width + 1; MainX++)
{
// MainY & MainX - pixel coordinates of SourceBitmap
// SourceY + SourceX = pointer in array SourceBitmap bytes
var SourceX = MainX * PixelFormatSize;
var IsEqual = true;
for (var c = 0; c < PixelFormatSize; c++)
{
// Check through the bytes in pixel
if (SourceBytes[SourceX + SourceY + c] == SearchingBytes[c])
continue;
IsEqual = false;
break;
}
if (!IsEqual) continue;
var ShouldStop = false;
// Found the first matching pixel, now compare the whole searched bitmap
for (var SecY = 0; SecY < SearchedBitmap.Height; SecY++)
{
var SearchY = SecY * SearchingBitmapData.Stride;
var SourceSecY = (MainY + SecY) * SourceBitmapData.Stride;
for (var SecX = 0; SecX < SearchedBitmap.Width; SecX++)
{
// SecX & SecY - coordinates of SearchingBitmap
// SearchX + SearchY = pointer in array SearchingBitmap bytes
var SearchX = SecX * PixelFormatSize;
var SourceSecX = (MainX + SecX) * PixelFormatSize;
for (var c = 0; c < PixelFormatSize; c++)
{
// Check through the bytes in pixel
if (SourceBytes[SourceSecX + SourceSecY + c] == SearchingBytes[SearchX + SearchY + c]) continue;
// Not equal - abort iteration
ShouldStop = true;
break;
}
if (ShouldStop) break;
}
if (ShouldStop) break;
}
if (!ShouldStop) // Bitmap is found
{
PointsList.Add(new Point(MainX, MainY));
}
}
}
return PointsList;
}
And here is how I use it:
Bitmap HighlightBitmap = new Bitmap(Resources.icon);
Bitmap CurrentScreen = CaptureScreen();
List<Point> HighlightPoints = FindBitmapsEntry(CurrentScreen, HighlightBitmap);
With this, HighlightPoints[0] is supposed to give me the first point where the two bitmaps (icon and screenshot) match. But as mentioned before, it just doesn't work on the server.
Thanks in advance!
P.S. I am accessing the server over RDP, so it does have a visual interface to work with.
I wanted to turn a regular for loop into a Parallel.For loop.
This:
for (int i = 0; i < bitmapImage.Width; i++)
{
for (int x = 0; x < bitmapImage.Height; x++)
{
System.Drawing.Color oc = bitmapImage.GetPixel(i, x);
int gray = (int)((oc.R * 0.3) + (oc.G * 0.59) + (oc.B * 0.11));
System.Drawing.Color nc = System.Drawing.Color.FromArgb(oc.A, gray, gray, gray);
bitmapImage.SetPixel(i, x, nc);
}
}
Into this:
Parallel.For(0, bitmapImage.Width - 1, i =>
{
Parallel.For(0, bitmapImage.Height - 1, x =>
{
System.Drawing.Color oc = bitmapImage.GetPixel(i, x);
int gray = (int)((oc.R * 0.3) + (oc.G * 0.59) + (oc.B * 0.11));
System.Drawing.Color nc = System.Drawing.Color.FromArgb(oc.A, gray, gray, gray);
bitmapImage.SetPixel(i, x, nc);
});
});
It fails with the message:
Object is currently in use elsewhere.
at the line below, since multiple threads are trying to access non-thread-safe resources. Any idea how I can make this work?
System.Drawing.Color oc = bitmapImage.GetPixel(i, x);
That's not a clean solution for what you would like to achieve. It would be better to get all the pixels in one shot, and then process them in the parallel for.
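Something along those lines could look like the following sketch (my own illustration, assuming Format32bppArgb): lock the bits once, copy them into a byte array, parallelise over rows, then copy the result back.
// Sketch: grayscale via one LockBits/copy, a Parallel.For over rows, one copy back.
// Assumes Format32bppArgb (4 bytes per pixel in B, G, R, A order).
static void GrayscaleParallel(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    var data = bmp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    int byteCount = data.Stride * data.Height;
    var buffer = new byte[byteCount];
    Marshal.Copy(data.Scan0, buffer, 0, byteCount);
    Parallel.For(0, data.Height, y =>
    {
        int row = y * data.Stride;
        for (int x = 0; x < data.Width; x++)
        {
            int i = row + x * 4;
            byte gray = (byte)(buffer[i + 2] * 0.3 + buffer[i + 1] * 0.59 + buffer[i] * 0.11);
            buffer[i] = buffer[i + 1] = buffer[i + 2] = gray; // leave alpha (i + 3) untouched
        }
    });
    Marshal.Copy(buffer, 0, data.Scan0, byteCount);
    bmp.UnlockBits(data);
}
Each thread only touches its own rows of the plain byte array, so no GDI+ object is shared between threads.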
An alternative that I personally used, and which improved performance dramatically, is doing this conversion with unsafe code that outputs a grayscale image.
public static byte[] MakeGrayScaleRev(byte[] source, ref Bitmap bmp, int Hei, int Wid)
{
    int bytesPerPixel = 4;
    byte[] bytesBig = new byte[Wid * Hei]; // array that will hold the 8bpp grayscale data
    unsafe
    {
        // Convert each pixel to its luminance using the formula:
        //   L = 0.299*R + 0.587*G + 0.114*B
        // source holds 4 bytes per pixel; source[ind] is used as an alpha/scale factor
        // and source[ind + 1..3] as the colour channels.
        for (int ind = 0, i = 0; ind < bytesPerPixel * Hei * Wid; ind += bytesPerPixel, i++)
        {
            int g = (int)
                ((source[ind] / 255.0f) *
                 (0.301f * source[ind + 1] +
                  0.587f * source[ind + 2] +
                  0.114f * source[ind + 3]));
            bytesBig[i] = (byte)g;
        }
    }
    try
    {
        bmp = new Bitmap(Wid, Hei, PixelFormat.Format8bppIndexed);
        bmp.Palette = GetGrayScalePalette(); // helper that fills the 256-entry palette with gray shades
        Rectangle dimension = new Rectangle(0, 0, Wid, Hei);
        BitmapData picData = bmp.LockBits(dimension, ImageLockMode.ReadWrite, bmp.PixelFormat);
        IntPtr pixelStartAddress = picData.Scan0;
        Marshal.Copy(bytesBig, 0, pixelStartAddress, bytesBig.Length);
        bmp.UnlockBits(picData);
        return bytesBig;
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.StackTrace);
        return null;
    }
}
It takes the byte array of all the pixels of the input image together with its height and width, returns the computed grayscale array, and hands the resulting grayscale bitmap back through the ref Bitmap bmp parameter.
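GetGrayScalePalette isn't shown in the answer; assuming it simply fills the 8bpp indexed palette with 256 gray shades, a minimal sketch could be:
// Hypothetical helper: build a 256-entry grayscale palette for an 8bpp indexed bitmap.
// ColorPalette has no public constructor, so borrow one from a temporary bitmap.
private static ColorPalette GetGrayScalePalette()
{
    using (var temp = new Bitmap(1, 1, PixelFormat.Format8bppIndexed))
    {
        ColorPalette palette = temp.Palette;
        for (int i = 0; i < 256; i++)
            palette.Entries[i] = Color.FromArgb(i, i, i);
        return palette;
    }
}
Also note that copying a Wid * Hei array in one go assumes the 8bpp stride equals the width; GDI+ pads each row to a multiple of 4 bytes, so if Wid is not a multiple of 4 the rows must be copied one at a time.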
I'm working on a screen-sharing app which runs a loop and grabs fast screenshots using GDI methods (example here).
Of course I also use a flood-fill algorithm to find the changed areas between 2 images (the previous screenshot and the current one).
I use another small trick: I downscale the snapshot resolution by a factor of 10, because constantly processing 1920 * 1080 = 2,073,600 pixels is not very efficient.
When I find the rectangle bounds, I apply them to the original full-size bitmap by simply multiplying every dimension (top, left, width, height) by 10.
This is the scanning code:
unsafe bool ArePixelsEqual(byte* p1, byte* p2, int bytesPerPixel)
{
for (int i = 0; i < bytesPerPixel; ++i)
if (p1[i] != p2[i])
return false;
return true;
}
private unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
List<Rectangle> rec = new List<Rectangle>();
var bmData1 = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
var bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
int bytesPerPixel = 4;
IntPtr scan01 = bmData1.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride1 = bmData1.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
bool[] visited = new bool[nWidth * nHeight];
byte* base1 = (byte*)scan01.ToPointer();
byte* base2 = (byte*)scan02.ToPointer();
for (int y = 0; y < nHeight; y ++)
{
byte* p1 = base1;
byte* p2 = base2;
for (int x = 0; x < nWidth; ++x)
{
if (!ArePixelsEqual(p1, p2, bytesPerPixel) && !(visited[x + nWidth * y]))
{
// fill the different area
int minX = x;
int maxX = x;
int minY = y;
int maxY = y;
var pt = new Point(x, y);
Stack<Point> toBeProcessed = new Stack<Point>();
visited[x + nWidth * y] = true;
toBeProcessed.Push(pt);
while (toBeProcessed.Count > 0)
{
var process = toBeProcessed.Pop();
var ptr1 = (byte*)scan01.ToPointer() + process.Y * stride1 + process.X * bytesPerPixel;
var ptr2 = (byte*)scan02.ToPointer() + process.Y * stride2 + process.X * bytesPerPixel;
//Check pixel equality
if (ArePixelsEqual(ptr1, ptr2, bytesPerPixel))
continue;
//This pixel is different
//Update the rectangle
if (process.X < minX) minX = process.X;
if (process.X > maxX) maxX = process.X;
if (process.Y < minY) minY = process.Y;
if (process.Y > maxY) maxY = process.Y;
Point n; int idx;
//Put neighbors in stack
if (process.X - 1 >= 0)
{
n = new Point(process.X - 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.X + 1 < nWidth)
{
n = new Point(process.X + 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y - 1 >= 0)
{
n = new Point(process.X, process.Y - 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y + 1 < nHeight)
{
n = new Point(process.X, process.Y + 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
}
//Finally, set a rectangle.
Rectangle r = new Rectangle(minX * 10, minY * 10, (maxX - minX + 1) * 10, (maxY - minY + 1) * 10);
rec.Add(r);
//Got the rectangle; now I'll do whatever I want with it.
//Note: everything is scaled by 10 because I want to apply the changes to the original 1920x1080 image.
}
p1 += bytesPerPixel;
p2 += bytesPerPixel;
}
base1 += stride1;
base2 += stride2;
}
bmp.UnlockBits(bmData1);
bmp2.UnlockBits(bmData2);
return rec;
}
This is my call:
private void Start()
{
full1 = GetDesktopImage(); // the first, initial screenshot
while (true)
{
full2 = GetDesktopImage();
a = new Bitmap(full1, 192, 108); // resize for faster processing of the images
b = new Bitmap(full2, 192, 108); // resize for faster processing of the images
CodeImage(a, b);
count++; // counter for measuring performance
full1 = full2; // the current screenshot becomes the previous one
}
}
However, after all the tricks and techniques I used, the algorithm still runs quite slowly. On my machine (Intel i5 4670K, 3.4 GHz) it manages only about 20 iterations per second at best, and it can drop lower. That may sound fast (don't forget I also have to send each changed area over the network afterwards), but I'm aiming to process more images per second. I think the main bottleneck is the resizing of the 2 images, yet I assumed the rest would be even faster after resizing, because the loop has far fewer pixels to visit: only 192 * 108 = 20,736.
I would appreciate any help or improvement. Thanks.
Sorry for the previous post; I now show the full code here.
I need to know what bitmap.Width - 1 and bitmap.Height - 1 are for, and also what bitmap.Scan0 is.
I have searched the internet but could not find a full explanation.
I would appreciate it if anyone could briefly explain the whole thing. Thank you.
public static double[][] GetRgbProjections(Bitmap bitmap)
{
var width = bitmap.Width - 1;
var height = bitmap.Height - 1;
var horizontalProjection = new double[width];
var verticalProjection = new double[height];
var bitmapData1 = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
unsafe
{
var imagePointer1 = (byte*)bitmapData1.Scan0;
for (var y = 0; y < height; y++)
{
for (var x = 0; x < width; x++)
{
var blu = imagePointer1[0];
var green = imagePointer1[1];
var red = imagePointer1[2];
int luminosity = (byte)(((0.2126 * red) + (0.7152 * green)) + (0.0722 * blu));
horizontalProjection[x] += luminosity;
verticalProjection[y] += luminosity;
imagePointer1 += 4;
}
imagePointer1 += bitmapData1.Stride - (bitmapData1.Width * 4);
}
}
MaximizeScale(ref horizontalProjection, height);
MaximizeScale(ref verticalProjection, width);
var projections =
new[]
{
horizontalProjection,
verticalProjection
};
bitmap.UnlockBits(bitmapData1);
return projections;
}
Apparently it runs through every pixel of an RGBA bitmap and calculates the luminosity of each pixel, which it accumulates in two arrays: one total per vertical column (horizontalProjection, indexed by x) and one total per horizontal row (verticalProjection, indexed by y).
Unless I am mistaken, the -1 should not even be there. When you have a 100 x 100 bitmap you want arrays with 100 elements, not 99 (width - 1), since you want to track every row and column.
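If the -1 really is unintended, a corrected sketch of the same method (my rewrite: full-size arrays, stride handled per row; the MaximizeScale calls are omitted because that helper isn't shown) would be:
// Sketch: same projection logic, with arrays sized to the full width/height
// and a stride-aware row pointer.
public static double[][] GetRgbProjectionsFullSize(Bitmap bitmap)
{
    var width = bitmap.Width;
    var height = bitmap.Height;
    var horizontalProjection = new double[width];   // one sum per column
    var verticalProjection = new double[height];    // one sum per row
    var data = bitmap.LockBits(new Rectangle(0, 0, width, height),
        ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        unsafe
        {
            var basePtr = (byte*)data.Scan0;
            for (var y = 0; y < height; y++)
            {
                var row = basePtr + y * data.Stride;
                for (var x = 0; x < width; x++)
                {
                    var blu = row[x * 4 + 0];
                    var green = row[x * 4 + 1];
                    var red = row[x * 4 + 2];
                    var luminosity = (byte)(0.2126 * red + 0.7152 * green + 0.0722 * blu);
                    horizontalProjection[x] += luminosity;
                    verticalProjection[y] += luminosity;
                }
            }
        }
    }
    finally
    {
        bitmap.UnlockBits(data);
    }
    return new[] { horizontalProjection, verticalProjection };
}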