I'm having problems converting a grayscale array of ints (Int32[,]) into BMP format in C#.
I tried looping through the array and setting each pixel's colour in the BMP; it works, but it ends up being really slow and practically unusable.
I did a lot of googling, but I cannot find the answer to my question.
I need to put that image in a PictureBox in real time, so the method needs to be fast.
Relevant discussion here
Edit: the array is 8-bit depth but stored as Int32.
Edit 2: I just found this code:
private unsafe Task<Bitmap> BitmapFromArray(Int32[,] pixels, int width, int height)
{
    return Task.Run(() =>
    {
        Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        for (int y = 0; y < height; y++)
        {
            byte* row = (byte*)bitmapData.Scan0 + bitmapData.Stride * y;
            for (int x = 0; x < width; x++)
            {
                byte grayShade8bit = (byte)(pixels[x, y] >> 4);
                row[x * 3 + 0] = grayShade8bit;
                row[x * 3 + 1] = grayShade8bit;
                row[x * 3 + 2] = grayShade8bit;
            }
        }
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    });
}
It seems to work fast enough, but the image is almost black. If I uncover the camera, the image should be completely white, but it just displays a really dark grey. I guess it's interpreting the pixel value as 32-bit, not 8-bit. I then tried casting with (ushort)pixels[x, y], but that doesn't work either.
I actually wrote a universally usable BuildImage function here on SO to build an image out of a byte array, but of course, you're not starting from a byte array; you're starting from a two-dimensional Int32 array. The easy way to get around that is simply to transform it in advance.
Your array of bytes-as-integers is a rather odd thing. If this is read from a grayscale image, I'd rather assume it is 32-bit ARGB data and you're just using the lowest component of each value (which would be the blue one), but if downshifting the values by 4 bits produced uniformly dark values, I'm inclined to take your word for it; otherwise the bits of the next colour component (green) would bleed in, giving bright colours as well.
Anyway, musing and second-guessing aside, here's my actual answer.
You may think each of your values, when poured into an 8-bit image, is simply the brightness, but this is actually false. There is no specific type in the System.Drawing pixel formats to indicate 8-bit grayscale, and 8-bit images are paletted, which means that each value on the image refers to a colour on the colour palette. So, to actually make an 8-bit grayscale image where your byte values indicate the pixel's brightness, you'll need to explicitly define a colour palette where the indices of 0 to 255 on the palette contain gray colours going from (0,0,0) to (255,255,255). Of course, this is pretty easy to generate.
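For reference, this is roughly what generating and assigning such a palette looks like on an existing Format8bppIndexed bitmap (bmp is just a placeholder here; note that the Bitmap.Palette getter returns a copy, so you have to assign the modified palette back):
// Give an 8bpp indexed bitmap a linear grayscale palette.
// 'bmp' is assumed to be a Bitmap created with PixelFormat.Format8bppIndexed.
ColorPalette pal = bmp.Palette; // the getter returns a copy
for (int i = 0; i < 256; i++)
    pal.Entries[i] = Color.FromArgb(i, i, i);
bmp.Palette = pal; // assign the modified copy back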
This code will transform your array into an 8-bit image. It uses the aforementioned BuildImage function. Note that that function uses no unsafe code. The use of Marshal.Copy means raw pointers are never handled directly, making the code completely managed.
public static Bitmap FromTwoDimIntArrayGray(Int32[,] data)
{
    // Transform 2-dimensional Int32 array to 1-byte-per-pixel byte array
    Int32 width = data.GetLength(0);
    Int32 height = data.GetLength(1);
    Int32 byteIndex = 0;
    Byte[] dataBytes = new Byte[height * width];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // Logical AND to be 100% sure the Int32 value fits inside
            // the byte even if it contains more data (like full ARGB).
            dataBytes[byteIndex] = (Byte)(((UInt32)data[x, y]) & 0xFF);
            // More efficient than multiplying
            byteIndex++;
        }
    }
    // Generate a grayscale palette
    Color[] palette = new Color[256];
    for (Int32 b = 0; b < 256; b++)
        palette[b] = Color.FromArgb(b, b, b);
    // Build the image
    return BuildImage(dataBytes, width, height, width, PixelFormat.Format8bppIndexed, palette, null);
}
Note that even if the integers were full ARGB values, the above code would still work exactly the same; if you only use the lowest of the four bytes inside the integer, as I said, that'll simply be the blue component of the full ARGB integer. If the image is grayscale, all three colour components should be identical, so you'll still get the same result.
If you ever find yourself with the same kind of array where the integers actually do contain full 32bpp ARGB data, you'd have to shift out all four byte values, and there would be no generated grey palette, but besides that the code would be pretty similar; it just handles 4 bytes per X iteration.
public static Bitmap FromTwoDimIntArrayArgb(Int32[,] data)
{
    Int32 width = data.GetLength(0);
    Int32 height = data.GetLength(1);
    Int32 stride = width * 4;
    Int32 byteIndex = 0;
    Byte[] dataBytes = new Byte[height * stride];
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            // UInt32 0xAARRGGBB = Byte[] { BB, GG, RR, AA }
            UInt32 val = (UInt32)data[x, y];
            // This code clears out everything but a specific part of the value
            // and then shifts the remaining piece down to the lowest byte
            dataBytes[byteIndex + 0] = (Byte)(val & 0x000000FF);         // B
            dataBytes[byteIndex + 1] = (Byte)((val & 0x0000FF00) >> 8);  // G
            dataBytes[byteIndex + 2] = (Byte)((val & 0x00FF0000) >> 16); // R
            dataBytes[byteIndex + 3] = (Byte)((val & 0xFF000000) >> 24); // A
            // More efficient than multiplying
            byteIndex += 4;
        }
    }
    return BuildImage(dataBytes, width, height, stride, PixelFormat.Format32bppArgb, null, null);
}
Of course, if you want this without transparency, you can either go with three bytes as you did, or simply change PixelFormat.Format32bppArgb in the final call to PixelFormat.Format32bppRgb, which changes the meaning of the fourth byte from alpha to mere padding.
Solved (I had to remove the four-bit shift):
private unsafe Task<Bitmap> BitmapFromArray(Int32[,] pixels, int width, int height)
{
    return Task.Run(() =>
    {
        Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        for (int y = 0; y < height; y++)
        {
            byte* row = (byte*)bitmapData.Scan0 + bitmapData.Stride * y;
            for (int x = 0; x < width; x++)
            {
                byte grayShade8bit = (byte)(pixels[x, y]);
                row[x * 3 + 0] = grayShade8bit;
                row[x * 3 + 1] = grayShade8bit;
                row[x * 3 + 2] = grayShade8bit;
            }
        }
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    });
}
Still not sure why substituting Format24bppRgb with Format8bppIndexed doesn't work. Any clue?
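For what it's worth, here is a hedged, untested sketch of what an 8bpp version might look like if the missing grayscale palette (explained in the answer above) is the reason; the method name is made up:
// Hypothetical 8bpp variant: one byte per pixel, plus a grayscale palette.
private unsafe Task<Bitmap> GrayBitmapFromArray(Int32[,] pixels, int width, int height)
{
    return Task.Run(() =>
    {
        Bitmap bitmap = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
        // 8bpp images are indexed, so install a linear grayscale palette first.
        ColorPalette pal = bitmap.Palette;
        for (int i = 0; i < 256; i++)
            pal.Entries[i] = Color.FromArgb(i, i, i);
        bitmap.Palette = pal;
        BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
        for (int y = 0; y < height; y++)
        {
            byte* row = (byte*)bitmapData.Scan0 + bitmapData.Stride * y;
            for (int x = 0; x < width; x++)
                row[x] = (byte)pixels[x, y]; // the palette index doubles as the brightness
        }
        bitmap.UnlockBits(bitmapData);
        return bitmap;
    });
}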
Related
I have (2448*2048) 5-megapixel image data, but the PictureBox only has (816*683), about 500,000 pixels, so I downsampled the image. I only need a black and white image, so I used the G value to create it, but the output looks like the image linked below. Where is my mistake?
public int[,] lowered(int[,] greenar)
{
    int[,] Sy = new int[816, 683];
    int x = 0;
    int y = 0;
    for (int i = 1; i < 2448; i += 3)
    {
        for (int j = 1; j < 2048; j += 3)
        {
            Sy[x, y] = greenar[i, j];
            y++;
        }
        y = 0;
        x++;
    }
    return Sy;
}
static Bitmap Create(int[,] R, int[,] G, int[,] B)
{
    int iWidth = G.GetLength(1);
    int iHeight = G.GetLength(0);
    Bitmap Result = new Bitmap(iWidth, iHeight,
        System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    Rectangle rect = new Rectangle(0, 0, iWidth, iHeight);
    System.Drawing.Imaging.BitmapData bmpData = Result.LockBits(rect,
        System.Drawing.Imaging.ImageLockMode.ReadWrite, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
    IntPtr iPtr = bmpData.Scan0;
    int iStride = bmpData.Stride;
    int iBytes = iWidth * iHeight * 3;
    byte[] PixelValues = new byte[iBytes];
    int iPoint = 0;
    for (int i = 0; i < iHeight; i++)
    {
        for (int j = 0; j < iWidth; j++)
        {
            int iG = G[i, j];
            int iB = G[i, j];
            int iR = G[i, j];
            PixelValues[iPoint] = Convert.ToByte(iB);
            PixelValues[iPoint + 1] = Convert.ToByte(iG);
            PixelValues[iPoint + 2] = Convert.ToByte(iR);
            iPoint += 3;
        }
    }
    System.Runtime.InteropServices.Marshal.Copy(PixelValues, 0, iPtr, iBytes);
    Result.UnlockBits(bmpData);
    return Result;
}
https://upload.cc/i1/2018/04/26/WHOXTJ.png
You don't need to downsample your image; you can do it this way. Set the PictureBox's BackgroundImageLayout property to either Zoom or Stretch and assign the image like this:
picturebox.BackgroundImageLayout = System.Windows.Forms.ImageLayout.Zoom;
picturebox.BackgroundImage = bitmap;
System.Windows.Forms.ImageLayout.Zoom will automatically adjust your bitmap to the size of the PictureBox.
You seem to be constantly mixing up your x and y offsets, which can easily be avoided simply by actually calling your loop variables x and y whenever you loop through image data. Also, image data is generally saved line by line, so your outer loop should be the Y loop going over the height, and the inner loop should process the X coordinates on one line, and should thus loop over the width.
Also, I'm not sure where your original data comes from, but in most of the cases I've seen where the image data is in multidimensional arrays like this, the Y is actually the first index in the array. Your actual image building function also assumes this, since it uses G.GetLength(0) to get the height of the image. But your channel resize function doesn't; it makes a multidimensional array as new int[816, 683], which would be a 683*816 image, not 816*683 as you said. So that certainly seems wrong.
Since you confirmed it to be [x,y], I adapted this solution to use it like that.
That aside, you hardcoded a lot of values in your functions, which is very bad practice. If you know you will reduce the image to one third by taking only one in three pixels, just pass that 3 as a parameter.
The reduction code:
public static Int32[,] ResizeChannel(Int32[,] origChannel, Int32 lossfactor)
{
    Int32 newWidth = origChannel.GetLength(0) / lossfactor;
    Int32 newHeight = origChannel.GetLength(1) / lossfactor;
    // to avoid rounding errors
    Int32 origHeight = newHeight * lossfactor;
    Int32 origWidth = newWidth * lossfactor;
    Int32[,] newChannel = new Int32[newWidth, newHeight];
    Int32 newX = 0;
    Int32 newY = 0;
    for (Int32 y = 1; y < origHeight; y += lossfactor)
    {
        newX = 0;
        for (Int32 x = 1; x < origWidth; x += lossfactor)
        {
            newChannel[newX, newY] = origChannel[x, y];
            newX++;
        }
        newY++;
    }
    return newChannel;
}
The actual build code, as was remarked by GSerg in the comments, is wrong because you don't take the stride into account. The stride is the actual byte length of each line of pixels, and this is not just width * BytesPerPixel, since it gets rounded up to the next multiple of 4 bytes.
So you need to initialize your array as height * stride, not as height * width * 3, and you need to skip your write offset to the next multiple of the stride whenever you go to a lower Y line, rather than assuming it will just get there automatically because your X processing adds 3 for each pixel. Because it will not get there automatically, unless, by pure coincidence, your image width happens to be a multiple of 4 pixels.
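Purely as an illustration of that rounding (you never need to compute it yourself; BitmapData.Stride already gives you the value):
// Stride rounds the line length up to the next multiple of 4 bytes.
// Example: a 683 px wide 24bpp line has 683 * 3 = 2049 bytes of pixel data,
// but its stride is 2052, i.e. 3 padding bytes per line.
static int Stride24bpp(int width)
{
    return (width * 3 + 3) / 4 * 4;
}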
Also, if you only use one channel for this, there is no reason to give it all three channels. Just give a single one.
public static Bitmap CreateGreyImage(Int32[,] greyChannel)
{
    Int32 width = greyChannel.GetLength(0);
    Int32 height = greyChannel.GetLength(1);
    Bitmap result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    Rectangle rect = new Rectangle(0, 0, width, height);
    BitmapData bmpData = result.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    Int32 stride = bmpData.Stride;
    // stride is the actual line width in bytes.
    Int32 bytes = stride * height;
    Byte[] pixelValues = new Byte[bytes];
    Int32 offset = 0;
    for (Int32 y = 0; y < height; y++)
    {
        Int32 workOffset = offset;
        for (Int32 x = 0; x < width; x++)
        {
            pixelValues[workOffset + 0] = (Byte)greyChannel[x, y];
            pixelValues[workOffset + 1] = (Byte)greyChannel[x, y];
            pixelValues[workOffset + 2] = (Byte)greyChannel[x, y];
            workOffset += 3;
        }
        // Add stride to get the start offset of the next line
        offset += stride;
    }
    Marshal.Copy(pixelValues, 0, bmpData.Scan0, bytes);
    result.UnlockBits(bmpData);
    return result;
}
Now, this works as expected if your R, G and B channels are indeed identical. But if they are not, you have to realize there is a difference between reducing the image to grayscale and just building a grey image from the green channel: on a colour image, you will get totally different results if you take the blue or red channel instead.
This was the code I executed for this:
Int32[,] greyar = ResizeChannel(greenar, 3);
Bitmap newbm = CreateGreyImage(greyar);
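As an aside, if your channels are not identical and you want an actual grayscale reduction rather than just the green channel, the usual approach is a weighted per-pixel sum; a minimal sketch using the standard Rec. 601 weights and the same [x, y] layout as above:
// Combine separate R, G and B channels into one grey channel using
// the usual luminance weights (0.299, 0.587, 0.114).
public static Int32[,] ToLuminance(Int32[,] r, Int32[,] g, Int32[,] b)
{
    Int32 width = g.GetLength(0);
    Int32 height = g.GetLength(1);
    Int32[,] grey = new Int32[width, height];
    for (Int32 y = 0; y < height; y++)
        for (Int32 x = 0; x < width; x++)
            grey[x, y] = (Int32)(0.299 * r[x, y] + 0.587 * g[x, y] + 0.114 * b[x, y]);
    return grey;
}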
In this case, I need to convert a Bitmap to a grayscale Array2D<byte> for a ShapePredictor.
Here is what I am trying, without much success.
using DlibDotNet;
using Rectangle = System.Drawing.Rectangle;
using System.Runtime.InteropServices;
public static class Extension
{
    public static Array2D<byte> ToArray2D(this Bitmap bitmap)
    {
        var bits = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        var length = bits.Stride * bits.Height;
        var data = new byte[length];
        Marshal.Copy(bits.Scan0, data, 0, length);
        bitmap.UnlockBits(bits);
        var array = new Array2D<byte>(bitmap.Width, bitmap.Height);
        for (var x = 0; x < bitmap.Width; x++)
            for (var y = 0; y < bitmap.Height; y++)
            {
                var offset = x * 4 + y * bitmap.Width * 4;
                array[x][y] = data[offset];
            }
        return array;
    }
}
I've searched and have not yet found a clear answer.
As noted before, you first need to convert your image to grayscale. There are plenty of answers here on StackOverflow to help you with that. I advise the ColorMatrix method used in this answer:
A: Convert an image to grayscale
I'll be using the MakeGrayscale3(Bitmap original) method shown in that answer in my code below.
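For completeness, here is a sketch of such a helper, adapted from the ColorMatrix approach in the linked answer (named MakeGrayscale here to match the call in the code below; the exact matrix values in the linked answer may differ slightly):
// Draw the source onto a new bitmap through a ColorMatrix that collapses
// R, G and B to a single luminance value.
public static Bitmap MakeGrayscale(Bitmap original)
{
    Bitmap newBitmap = new Bitmap(original.Width, original.Height);
    using (Graphics g = Graphics.FromImage(newBitmap))
    using (ImageAttributes attributes = new ImageAttributes())
    {
        ColorMatrix colorMatrix = new ColorMatrix(new float[][]
        {
            new float[] {0.299f, 0.299f, 0.299f, 0, 0},
            new float[] {0.587f, 0.587f, 0.587f, 0, 0},
            new float[] {0.114f, 0.114f, 0.114f, 0, 0},
            new float[] {0,      0,      0,      1, 0},
            new float[] {0,      0,      0,      0, 1}
        });
        attributes.SetColorMatrix(colorMatrix);
        g.DrawImage(original, new Rectangle(0, 0, original.Width, original.Height),
            0, 0, original.Width, original.Height, GraphicsUnit.Pixel, attributes);
    }
    return newBitmap;
}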
Typically, images are looped through line by line for processing, so for clarity, you should put your Y loop as outer loop. It also makes the calculation of the data offsets a lot more efficient.
As for the actual data: if the image is grayscale, the R, G and B bytes should all be the same. The order "ARGB" in 32-bit pixel data refers to one UInt32 value, and since those are stored little-endian, the actual order of the bytes in memory is [B, G, R, A]. This means that in each loop iteration we can just take the first of the four bytes, and it'll be the blue component.
public static Array2D<Byte> ToArray2D(this Bitmap bitmap)
{
    Int32 stride;
    Byte[] data;
    // Removes unnecessary getter calls.
    Int32 width = bitmap.Width;
    Int32 height = bitmap.Height;
    // 'using' block to properly dispose temp image.
    using (Bitmap grayImage = MakeGrayscale(bitmap))
    {
        BitmapData bits = grayImage.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppPArgb);
        stride = bits.Stride;
        Int32 length = stride * height;
        data = new Byte[length];
        Marshal.Copy(bits.Scan0, data, 0, length);
        grayImage.UnlockBits(bits);
    }
    // Constructor is (rows, columns), so (height, width)
    Array2D<Byte> array = new Array2D<Byte>(height, width);
    Int32 offset = 0;
    for (Int32 y = 0; y < height; y++)
    {
        // Offset variable for processing one line
        Int32 curOffset = offset;
        // Get row in advance
        Array2D<Byte>.Row<Byte> curRow = array[y];
        for (Int32 x = 0; x < width; x++)
        {
            curRow[x] = data[curOffset]; // Should be the Blue component.
            curOffset += 4;
        }
        // Stride is the actual data length of one line. No need to calculate that;
        // not only is it already given by the BitmapData object, but in some situations
        // it may differ from the actual data length. This also saves processing time
        // by avoiding multiplications inside each loop.
        offset += stride;
    }
    return array;
}
I have been trying to implement the image comparing algorithm seen here: http://www.dotnetexamples.com/2012/07/fast-bitmap-comparison-c.html
The problem I have been having is that when I try to compare a large number of images one after another using the method pasted below (a slightly modified version of the one from the link above), my results seem to be inaccurate. In particular, if I compare too many different images, even ones that are the same will occasionally be detected as different. The problem seems to be that certain bytes in the arrays differ, as you can see in the screenshot I have included of two identical images being compared (this occurs when I repeatedly compare images from an array of about 100 images, of which only 3 are actually unique):
private bool byteCompare(Bitmap image1, Bitmap image2)
{
    if (object.Equals(image1, image2))
        return true;
    if (image1 == null || image2 == null)
        return false;
    if (!image1.Size.Equals(image2.Size) || !image1.PixelFormat.Equals(image2.PixelFormat))
        return false;
    #region Optimized code for performance
    int bytes = image1.Width * image1.Height * (Image.GetPixelFormatSize(image1.PixelFormat) / 8);
    byte[] b1bytes = new byte[bytes];
    byte[] b2bytes = new byte[bytes];
    Rectangle rect = new Rectangle(0, 0, image1.Width - 1, image1.Height - 1);
    BitmapData bmd1 = image1.LockBits(rect, ImageLockMode.ReadOnly, image1.PixelFormat);
    BitmapData bmd2 = image2.LockBits(rect, ImageLockMode.ReadOnly, image2.PixelFormat);
    try
    {
        Marshal.Copy(bmd1.Scan0, b1bytes, 0, bytes);
        Marshal.Copy(bmd2.Scan0, b2bytes, 0, bytes);
        for (int n = 0; n < bytes; n++)
        {
            if (b1bytes[n] != b2bytes[n]) // This line is where error occurs
                return false;
        }
    }
    finally
    {
        image1.UnlockBits(bmd1);
        image2.UnlockBits(bmd2);
    }
    #endregion
    return true;
}
I've added a comment to show where in the method this error is occurring. I assume it has something to do with the memory not being allocated properly, but I haven't been able to figure out what the source of the error is.
I should probably also mention that I don't get any issues when I convert the image to a byte array like so:
ImageConverter converter = new ImageConverter();
byte[] b1bytes = (byte[])converter.ConvertTo(image1, typeof(byte[]));
However, this approach is far slower.
If (Width * bytesperpixel) != Stride, then there will be unused bytes at the end of each line that are not guaranteed to have any particular value and in practice can be filled with random garbage.
You need to iterate line by line, increment by Stride each time, and only checking the bytes that actually correspond to pixels on each line.
Once you have the BitmapData object, the stride can be found in its Stride property. Make sure to extract it for both images.
Then, you have to loop over all pixels in the data so you can accurately determine where the image width for each line ends and the leftover data of the stride begins.
Also note this only works for high-colour images. Comparing 8-bit images is still possible (though you need to compare their palettes as well), but for lower than 8 you need to go bit-shifting to get the actual palette offset out of the image.
A simple workaround for that is to just paint your image on a new 32bpp image, effectively converting it to high colour.
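Something along these lines (a minimal sketch):
// Paint any image onto a fresh 32bppArgb bitmap so the byte-level
// comparison below always sees high-colour data.
public static Bitmap To32bpp(Image source)
{
    Bitmap result = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppArgb);
    using (Graphics g = Graphics.FromImage(result))
        g.DrawImage(source, new Rectangle(0, 0, source.Width, source.Height));
    return result;
}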
public static Boolean CompareHiColorImages(Byte[] imageData1, Int32 stride1, Byte[] imageData2, Int32 stride2, Int32 width, Int32 height, PixelFormat pf)
{
    Int32 byteSize = Image.GetPixelFormatSize(pf) / 8;
    for (Int32 y = 0; y < height; y++)
    {
        for (Int32 x = 0; x < width; x++)
        {
            Int32 offset1 = y * stride1 + x * byteSize;
            Int32 offset2 = y * stride2 + x * byteSize;
            for (Int32 n = 0; n < byteSize; n++)
                if (imageData1[offset1 + n] != imageData2[offset2 + n])
                    return false;
        }
    }
    return true;
}
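To feed that function, you would lock both bitmaps, copy out stride * height bytes per image and pass the strides along; roughly like this (a sketch, the wrapper name is made up):
public static Boolean CompareImages(Bitmap image1, Bitmap image2)
{
    if (!image1.Size.Equals(image2.Size) || image1.PixelFormat != image2.PixelFormat)
        return false;
    Rectangle rect = new Rectangle(0, 0, image1.Width, image1.Height);
    BitmapData bmd1 = image1.LockBits(rect, ImageLockMode.ReadOnly, image1.PixelFormat);
    BitmapData bmd2 = image2.LockBits(rect, ImageLockMode.ReadOnly, image2.PixelFormat);
    try
    {
        // Stride * height covers the full locked data, including padding.
        Byte[] data1 = new Byte[bmd1.Stride * bmd1.Height];
        Byte[] data2 = new Byte[bmd2.Stride * bmd2.Height];
        Marshal.Copy(bmd1.Scan0, data1, 0, data1.Length);
        Marshal.Copy(bmd2.Scan0, data2, 0, data2.Length);
        return CompareHiColorImages(data1, bmd1.Stride, data2, bmd2.Stride,
            image1.Width, image1.Height, image1.PixelFormat);
    }
    finally
    {
        image1.UnlockBits(bmd1);
        image2.UnlockBits(bmd2);
    }
}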
I need to locate the point/coordinates (x, y) of the first pixel that has a specified colour.
I have used the GetPixel() method, but it's a bit too slow, so I'm looking into LockBits. However, I can't figure out whether that actually solves my problem. Can I return the point for the found pixel using LockBits?
Here is my current code:
public Point FindPixel(Image Screen, Color ColorToFind)
{
    Bitmap bit = new Bitmap(Screen);
    BitmapData bmpData = bit.LockBits(new Rectangle(0, 0, bit.Width, bit.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format32bppPArgb);
    unsafe
    {
        byte* ptrSrc = (byte*)bmpData.Scan0;
        for (int y = 0; y < bmpData.Height; y++)
        {
            for (int x = 0; x < bmpData.Width; x++)
            {
                Color c = bit.GetPixel(x, y);
                if (c == ColorToFind)
                    return new Point(x, y);
            }
        }
    }
    bit.UnlockBits(bmpData);
    return new Point(0, 0);
}
You didn't stop using GetPixel() so you are not ahead. Write it like this instead:
int IntToFind = ColorToFind.ToArgb();
int height = bmpData.Height; // These properties are slow so read them only once
int width = bmpData.Width;
unsafe
{
    for (int y = 0; y < height; y++)
    {
        int* pline = (int*)bmpData.Scan0 + y * bmpData.Stride / 4;
        for (int x = 0; x < width; x++)
        {
            if (pline[x] == IntToFind)
                return new Point(x, bit.Height - y - 1);
        }
    }
}
The odd-looking Point constructor is necessary because lines are stored upside-down in a bitmap. And don't return new Point(0, 0) on failure; that's a valid pixel.
There are a few things wrong with your code:
You are using PixelFormat.Format32bppPArgb - you should use the pixel format of the image; if they don't match, all pixels will be converted under the hood anyway.
You are still using GetPixel, so all this hassle will not give you any advantage.
To use LockBits efficiently, you basically want to lock your image and then use unsafe pointers to read the pixel values. The code to do this will vary a bit for different pixel formats. Assuming you really do have a 32bpp format with blue in the least significant byte, your code could look like this:
for (int y = 0; y < bmpData.Height; ++y)
{
    byte* ptrSrc = (byte*)(bmpData.Scan0 + y * bmpData.Stride);
    int* pixelPtr = (int*)ptrSrc;
    for (int x = 0; x < bmpData.Width; ++x)
    {
        Color col = Color.FromArgb(*pixelPtr);
        if (col == ColorToFind) return new Point(x, y);
        ++pixelPtr; // Increase the pointer by 4 bytes, because it is an int*
    }
}
A few remarks:
For each line, a new ptrSrc is computed from Scan0 + y * Stride. This is because just incrementing the pointer across line boundaries might fail if Stride != bytesPerPixel * width, which may be the case.
I assumed that blue is in the least significant byte and alpha in the most significant one, which might not be the case, because those GDI pixel formats can be... strange ;). Just make sure you check it; if it's the other way around, reverse the bytes before using the FromArgb() method.
If your pixel format is 24bpp, it's a bit more tricky, because you can't use an int pointer and increase it by 1 (4 bytes), for obvious reasons.
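For the 24bpp case, a sketch of how the inner loop would change (stepping a byte pointer three bytes per pixel and comparing the colour components directly):
// 24bpp variant: step 3 bytes per pixel and compare B, G, R explicitly.
for (int y = 0; y < bmpData.Height; ++y)
{
    byte* pixelPtr = (byte*)(bmpData.Scan0 + y * bmpData.Stride);
    for (int x = 0; x < bmpData.Width; ++x)
    {
        if (pixelPtr[0] == ColorToFind.B &&
            pixelPtr[1] == ColorToFind.G &&
            pixelPtr[2] == ColorToFind.R)
            return new Point(x, y);
        pixelPtr += 3; // 3 bytes per pixel in Format24bppRgb
    }
}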
I have an array of int pixels in my C# program and I want to convert it into an image. The problem is that I am converting Java source code into equivalent C# code. In Java, the line that displays the array of int pixels as an image reads:
Image output = createImage(new MemoryImageSource(width, height, orig, 0, width));
Can someone tell me the C# equivalent?
Here orig is the array of int pixels. I searched the Bitmap class and there is a method called SetPixel, but the problem is it takes an x, y coordinate pair, while what I have in my code is an array of int pixels. Another weird thing is that my orig array contains negative numbers that are nowhere near 255. The same is true in Java (the C# and Java arrays hold equivalent values), and the values work fine there.
But I can't get that line translated into C#. Please help.
Using WPF, you can create a bitmap (image) directly from your array. You can then encode this image or display it or play with it:
int width = 200;
int height = 200;
//
// Here is the pixel format of your data, set it to the proper value for your data
//
PixelFormat pf = PixelFormats.Bgr32;
int rawStride = (width * pf.BitsPerPixel + 7) / 8;
//
// Here is your raw data
//
int[] rawImage = new int[rawStride * height / 4];
//
// Create the BitmapSource
//
BitmapSource bitmap = BitmapSource.Create(
    width, height,
    96, 96, pf, null,
    rawImage, rawStride);
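From there you can, for instance, assign the BitmapSource to a WPF Image control or encode it to a file (a sketch; 'image' stands for an Image control in your XAML and the file name is arbitrary):
// Show it in a WPF Image control...
image.Source = bitmap;
// ...or write it out as a PNG:
using (FileStream stream = new FileStream("image.png", FileMode.Create))
{
    PngBitmapEncoder encoder = new PngBitmapEncoder();
    encoder.Frames.Add(BitmapFrame.Create(bitmap));
    encoder.Save(stream);
}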
You can use Bitmap.LockBits to obtain the bitmap data that you can then manipulate directly, rather than via SetPixel. (How to use LockBits)
I like the WPF option already presented, but here it is using LockBits and Bitmap:
// get the raw image data
int width, height;
int[] data = GetData(out width, out height);
// create a bitmap and manipulate it
Bitmap bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);
BitmapData bits = bmp.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.ReadWrite, bmp.PixelFormat);
unsafe
{
    for (int y = 0; y < height; y++)
    {
        int* row = (int*)((byte*)bits.Scan0 + (y * bits.Stride));
        for (int x = 0; x < width; x++)
        {
            row[x] = data[y * width + x];
        }
    }
}
bmp.UnlockBits(bits);
With (as test data):
public static int[] GetData(out int width, out int height)
{
    // diagonal gradient over a rectangle
    width = 127;
    height = 128;
    int[] data = new int[width * height];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            int val = x + y;
            data[y * width + x] = 0xFF << 24 | (val << 16) | (val << 8) | val;
        }
    }
    return data;
}
Well, I'm assuming each int is the composite ARGB value? If there isn't an easy option, then LockBits might be worth looking at - it'll be a lot quicker than SetPixel, but is more complex. You'll also have to make sure you know how the int is composed (ARGB? RGBA?). I'll try to see if there is a more obvious option...
MemoryImageSource's constructor's third argument is an array of ints composed of ARGB values, in that order.
The example on that page creates such an array with:
pix[index++] = (255 << 24) | (red << 16) | blue;
You need to decompose that integer array into a byte array (the shift operator will be useful), and it should be in BGR order for the LockBits method to work.
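A sketch of that decomposition, ignoring stride padding for brevity (the LockBits/Marshal.Copy part is covered in the other answers):
// Decompose the ARGB ints into B, G, R bytes - the order a 24bpp
// bitmap's pixel data expects.
byte[] bytes = new byte[orig.Length * 3];
for (int i = 0; i < orig.Length; i++)
{
    int argb = orig[i];
    bytes[i * 3 + 0] = (byte)argb;          // B
    bytes[i * 3 + 1] = (byte)(argb >> 8);   // G
    bytes[i * 3 + 2] = (byte)(argb >> 16);  // R
}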
I would recommend using LockBits, but a slower SetPixel-based algorithm might look something like this:
// width - how many ints per row
// array - array of integers
Bitmap createImage(int width, int[] array)
{
    int height = array.Length / width;
    Bitmap bmp = new Bitmap(width, height);
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            bmp.SetPixel(x, y, Color.FromArgb(array[y * width + x]));
        }
    }
    return bmp;
}