Get image pixels into array - C#

I am trying to rewrite the following code from Silverlight to WPF (found here: https://slmotiondetection.codeplex.com/).
My problem is that WriteableBitmap.Pixels does not exist in WPF. How can I achieve the same thing? I understand how the code works, but I only started with C# about a week ago.
Could you please point me in the right direction?
public WriteableBitmap GetMotionBitmap(WriteableBitmap current)
{
    if (_previousGrayPixels != null && _previousGrayPixels.Length > 0)
    {
        WriteableBitmap motionBmp = new WriteableBitmap(current.PixelWidth, current.PixelHeight);
        int[] motionPixels = motionBmp.Pixels;
        int[] currentPixels = current.Pixels;
        int[] currentGrayPixels = ToGrayscale(current).Pixels;

        for (int index = 0; index < current.Pixels.Length; index++)
        {
            byte previousGrayPixel = BitConverter.GetBytes(_previousGrayPixels[index])[0];
            byte currentGrayPixel = BitConverter.GetBytes(currentGrayPixels[index])[0];

            if (Math.Abs(previousGrayPixel - currentGrayPixel) > Threshold)
            {
                motionPixels[index] = _highlightColor;
            }
            else
            {
                motionPixels[index] = currentPixels[index];
            }
        }

        _previousGrayPixels = currentGrayPixels;
        return motionBmp;
    }
    else
    {
        _previousGrayPixels = ToGrayscale(current).Pixels;
        return current;
    }
}

public WriteableBitmap ToGrayscale(WriteableBitmap source)
{
    WriteableBitmap gray = new WriteableBitmap(source.PixelWidth, source.PixelHeight);
    int[] grayPixels = gray.Pixels;
    int[] sourcePixels = source.Pixels;

    for (int index = 0; index < sourcePixels.Length; index++)
    {
        int pixel = sourcePixels[index];
        byte[] pixelBytes = BitConverter.GetBytes(pixel);
        byte grayPixel = (byte)(0.3 * pixelBytes[2] + 0.59 * pixelBytes[1] + 0.11 * pixelBytes[0]);
        pixelBytes[0] = pixelBytes[1] = pixelBytes[2] = grayPixel;
        grayPixels[index] = BitConverter.ToInt32(pixelBytes, 0);
    }

    return gray;
}

In order to get the bitmap's raw pixel data you may use one of the BitmapSource.CopyPixels methods, e.g. like this:
var bytesPerPixel = (source.Format.BitsPerPixel + 7) / 8;
var stride = source.PixelWidth * bytesPerPixel;
var bufferSize = source.PixelHeight * stride;
var buffer = new byte[bufferSize];
source.CopyPixels(buffer, stride, 0);
Writing to a WriteableBitmap can be done by one of its WritePixels methods.
Alternatively you may access the bitmap buffer by the WriteableBitmap's BackBuffer property.
For converting a bitmap to grayscale, you might use a FormatConvertedBitmap like this:
var grayscaleBitmap = new FormatConvertedBitmap(source, PixelFormats.Gray8, null, 0d);
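Putting those pieces together, a rough WPF port of the motion comparison might copy both frames into byte arrays with CopyPixels, compare the Gray8 values, and write the result back with WritePixels. This is only a sketch under the question's assumptions: _previousGrayPixels becomes a byte[] field holding the previous Gray8 frame, Threshold is the same threshold field as in the original code, and a fixed red BGRA value stands in for _highlightColor.
private byte[] _previousGrayPixels; // previous frame as Gray8 bytes
private int Threshold;              // same threshold field as in the original code

public WriteableBitmap GetMotionBitmap(BitmapSource current)
{
    // Current frame as 8-bit grayscale.
    var grayBitmap = new FormatConvertedBitmap(current, PixelFormats.Gray8, null, 0d);
    var grayPixels = new byte[current.PixelWidth * current.PixelHeight];
    grayBitmap.CopyPixels(grayPixels, current.PixelWidth, 0);

    // Current frame as 32-bit BGRA, so unchanged pixels can be passed through.
    var bgraBitmap = new FormatConvertedBitmap(current, PixelFormats.Bgra32, null, 0d);
    int stride = current.PixelWidth * 4;
    var bgraPixels = new byte[current.PixelHeight * stride];
    bgraBitmap.CopyPixels(bgraPixels, stride, 0);

    if (_previousGrayPixels != null)
    {
        for (int i = 0; i < grayPixels.Length; i++)
        {
            if (Math.Abs(grayPixels[i] - _previousGrayPixels[i]) > Threshold)
            {
                // Highlight motion in opaque red (bytes are B, G, R, A).
                bgraPixels[i * 4 + 0] = 0;
                bgraPixels[i * 4 + 1] = 0;
                bgraPixels[i * 4 + 2] = 255;
                bgraPixels[i * 4 + 3] = 255;
            }
        }
    }
    _previousGrayPixels = grayPixels;

    var motionBmp = new WriteableBitmap(current.PixelWidth, current.PixelHeight,
                                        96, 96, PixelFormats.Bgra32, null);
    motionBmp.WritePixels(new Int32Rect(0, 0, current.PixelWidth, current.PixelHeight),
                          bgraPixels, stride, 0);
    return motionBmp;
}
The FormatConvertedBitmap replaces the hand-rolled ToGrayscale method, and the BackBuffer route mentioned above would avoid the intermediate arrays at the cost of the Lock/Unlock bookkeeping.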

Related

Update PDF image in-place

I am trying to replace an image stream within an SDF document, using PDFNet 7.0.4 and netcoreapp3.1. As much as possible, I want to maintain the original object and its metadata; same dimensions, color system, compression, etc. Ideally object number and even generation would remain the same as well - the goal is that a before and after comparison would show only the changed pixels within the stream.
I'm getting the raw pixel data as a Stream object with this method:
private Stream GetImageData(int objectNum)
{
    var image = new PDF.Image(sdfDoc.GetObj(objectNum));
    var bits = image.GetBitsPerComponent();
    var channels = image.GetComponentNum();
    var bytesPerChannel = bits / 8;
    var height = image.GetImageHeight();
    var width = image.GetImageWidth();
    var data = image.GetImageData();
    var len = height * width * channels * bytesPerChannel;
    using (var reader = new pdftron.Filters.FilterReader(data))
    {
        var buffer = new byte[len];
        reader.Read(buffer);
        return new MemoryStream(buffer);
    }
}
After manipulating the image data, I want to update it before saving the underlying SDFDoc object. I've tried using the following method:
private void SetImageData(int objectNum, Stream stream)
{
    var image = new PDF.Image(sdfDoc.GetObj(objectNum));
    var bits = image.GetBitsPerComponent();
    var channels = image.GetComponentNum();
    var bytesPerChannel = bits / 8;
    var height = image.GetImageHeight();
    var width = image.GetImageWidth();
    var len = height * width * channels * bytesPerChannel;
    if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
    using (var ms = new MemoryStream())
    using (var writer = new pdftron.Filters.FilterWriter(image.GetImageData()))
    {
        stream.CopyTo(ms);
        writer.WriteBuffer(ms.ToArray());
    }
}
This runs without error, but nothing actually appears to get updated. I've tried playing around with SDFObj.SetStreamData(), but haven't been able to make that work either. What is the lowest impact, highest performance way to directly replace just the raw pixel data within an image stream?
Edit:
I have this halfway working with this method:
private void SetImageData(int objectNum, Stream stream)
{
    var sdfObj = sdfDoc.GetObj(objectNum);
    var image = new PDF.Image(sdfObj);
    var bits = image.GetBitsPerComponent();
    var channels = image.GetComponentNum();
    var bytesPerChannel = bits / 8;
    var height = image.GetImageHeight();
    var width = image.GetImageWidth();
    var len = height * width * channels * bytesPerChannel;
    if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
    var buffer = new byte[len];
    stream.Read(buffer, 0, len);
    sdfObj.SetStreamData(buffer);
    sdfObj.Erase("Filters");
}
This works as expected, but with the obvious caveat that it just ignores any existing compression and turns the image into a raw uncompressed stream.
I've tried sdfObj.SetStreamData(buffer, image.GetImageData()); and sdfObj.SetStreamData(buffer, image.GetImageData().GetAttachedFilter());
and this does update the object in the file, but the resulting image fails to render.
The following code shows how to retain an Image object, but change the actual stream data.
static private Stream GetImageData(Obj o)
{
    var image = new pdftron.PDF.Image(o);
    var bits = image.GetBitsPerComponent();
    var channels = image.GetComponentNum();
    var bytesPerChannel = bits / 8;
    var height = image.GetImageHeight();
    var width = image.GetImageWidth();
    var data = image.GetImageData();
    var len = height * width * channels * bytesPerChannel;
    using (var reader = new pdftron.Filters.FilterReader(data))
    {
        var buffer = new byte[len];
        reader.Read(buffer);
        return new MemoryStream(buffer);
    }
}
static private void SetImageData(PDFDoc doc, Obj o, Stream stream)
{
    var image = new pdftron.PDF.Image(o);
    var bits = image.GetBitsPerComponent();
    var channels = image.GetComponentNum();
    var bytesPerChannel = bits / 8;
    var height = image.GetImageHeight();
    var width = image.GetImageWidth();
    var len = height * width * channels * bytesPerChannel;
    if (stream.Length != len) { throw new DataMisalignedException("Stream length does not match expected image dimensions"); }
    o.Erase("DecodeParms"); // Important: this won't be accurate after SetStreamData
    // now we actually do the stream swap
    o.SetStreamData((stream as MemoryStream).ToArray(), new FlateEncode(null));
}
static private void InvertPixels(Stream stream)
{
    // This function is for DEMO purposes
    // this code assumes 3-channel 8-bit data
    long length = stream.Length;
    long pixels = length / 3;
    for (int p = 0; p < pixels; ++p)
    {
        int c1 = stream.ReadByte();
        int c2 = stream.ReadByte();
        int c3 = stream.ReadByte();
        stream.Seek(-3, SeekOrigin.Current);
        stream.WriteByte((byte)(255 - c1));
        stream.WriteByte((byte)(255 - c2));
        stream.WriteByte((byte)(255 - c3));
    }
    stream.Seek(0, SeekOrigin.Begin);
}
And here is sample code that uses it.
static void Main(string[] args)
{
    PDFNet.Initialize();
    var x = new PDFDoc(@"2002.04610.pdf");
    x.InitSecurityHandler();
    var o = x.GetSDFDoc().GetObj(381);

    Stream source = GetImageData(o);
    InvertPixels(source);
    SetImageData(x, o, source);

    x.Save(@"2002.04610-MOD.pdf", SDFDoc.SaveOptions.e_remove_unused);
}

C# - Padding image bytes with white bytes to fill 512 x 512

I'm using the Digital Persona SDK to scan fingerprints in WSQ format. As a requirement I need a 512 x 512 image, but the SDK only exports a 357 x 392 image.
The SDK provides a method to compress the captured image from the device into WSQ format and return a byte array that I can write to disk.
- I've tried to allocate a buffer of 262144 bytes for a 512 x 512 image.
- Fill the new buffer with white pixel data, setting each byte to 255.
- Copy the original image buffer into the new image buffer. The original image doesn't need to be centered, but it's important to copy without corrupting the image data.
To summarize, I've tried to copy the old image into the upper right corner of the new image.
DPUruNet.Compression.Start();
DPUruNet.Compression.SetWsqBitrate(95, 0);
Fid capturedImage = captureResult.Data;

//Fill the new buffer with white pixel data, each byte set to 255.
byte[] bytesWSQ512 = new byte[262144];
for (int i = 0; i < bytesWSQ512.Length; i++)
{
    bytesWSQ512[i] = 255;
}

//Compress capturedImage and get bytes (357 x 392)
byte[] bytesWSQ = DPUruNet.Compression.CompressRaw(capturedImage.Views[0].Width, capturedImage.Views[0].Height, 500, 8, capturedImage.Views[0].RawImage, CompressionAlgorithm.COMPRESSION_WSQ_NIST);

//Copy the original image buffer into the new image buffer
for (int i = 0; i < capturedImage.Views[0].Height; i++)
{
    for (int j = 0; j < capturedImage.Views[0].Width; j++)
    {
        bytesWSQ512[i * bytesWSQ512.Length + j] = bytesWSQ[i * capturedImage.Views[0].Width + j];
    }
}

//Write bytes to disk
File.WriteAllBytes(@"C:\Users\Admin\Desktop\bytesWSQ512.wsq", bytesWSQ512);
DPUruNet.Compression.Finish();
When running that snippet I get an IndexOutOfRangeException; I don't know if the loop or the calculation of indexes for the new array is right.
Here is a representation of what I'm trying to do.
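The IndexOutOfRangeException itself comes from the row offset in the copy loop: i * bytesWSQ512.Length + j multiplies the row index by the whole 262144-byte buffer instead of the 512-byte row width, so the second row already lands outside the array. A hypothetical fix for the indexing alone is sketched below, but note that copying WSQ-compressed bytes into a raw canvas still does not give a valid 512 x 512 WSQ image; the answer that follows pads the raw pixels first and compresses afterwards.
// Hypothetical fix for the index arithmetic only: multiply by the 512-byte row width,
// not by the total buffer length, and copy the *raw* pixels rather than the WSQ bytes.
byte[] raw = capturedImage.Views[0].RawImage;
for (int i = 0; i < capturedImage.Views[0].Height; i++)
{
    for (int j = 0; j < capturedImage.Views[0].Width; j++)
    {
        bytesWSQ512[i * 512 + j] = raw[i * capturedImage.Views[0].Width + j];
    }
}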
If someone is trying to achieve something like this or padding a raw image, I hope this will help.
DPUruNet.Compression.Start();
DPUruNet.Compression.SetWsqBitrate(75, 0);
Fid ISOFid = captureResult.Data;
byte[] paddedImage = PadImage8BPP(captureResult.Data.Views[0].RawImage, captureResult.Data.Views[0].Width, captureResult.Data.Views[0].Height, 512, 512, 255);
byte[] bytesWSQ512 = Compression.CompressRaw(512, 512, 500, 8, paddedImage, CompressionAlgorithm.COMPRESSION_WSQ_NIST);
And the method to resize (pad) the image is:
public byte[] PadImage8BPP(byte[] original, int original_width, int original_height, int desired_width, int desired_height, byte pad_color)
{
    byte[] canvas_8bpp = new byte[desired_width * desired_height];
    for (int i = 0; i < canvas_8bpp.Length; i++)
        canvas_8bpp[i] = pad_color; //Fill background. Note this type of fill will fail histogram checks.

    int clamp_y_begin = 0;
    int clamp_y_end = original_height;
    int clamp_x_begin = 0;
    int clamp_x_end = original_width;
    int pad_y = 0;
    int pad_x = 0;

    if (original_height > desired_height)
    {
        int crop_distance = (int)Math.Ceiling((original_height - desired_height) / 2.0);
        clamp_y_begin = crop_distance;
        clamp_y_end = original_height - crop_distance;
    }
    else
    {
        pad_y = (desired_height - original_height) / 2;
    }

    if (original_width > desired_width)
    {
        int crop_distance = (int)Math.Ceiling((original_width - desired_width) / 2.0);
        clamp_x_begin = crop_distance;
        clamp_x_end = original_width - crop_distance;
    }
    else
    {
        pad_x = (desired_width - original_width) / 2;
    }

    //We traverse the captured image (either the whole image or a subset)
    for (int y = clamp_y_begin; y < clamp_y_end; y++)
    {
        for (int x = clamp_x_begin; x < clamp_x_end; x++)
        {
            byte image_pixel = original[y * original_width + x];
            canvas_8bpp[(pad_y + y - clamp_y_begin) * desired_width + pad_x + x - clamp_x_begin] = image_pixel;
        }
    }
    return canvas_8bpp;
}

Convert a RenderTargetBitmap to a byte[] to be displayed on a embedded screen?

I'm trying to convert a RenderTargetBitmap to a byte array that will then get sent off to an external monochrome OLED screen. I know that for the bitmap to display correctly the bit/byte alignment should be LSB to MSB & Top to Bottom:
But I can't figure out how to get the RenderTargetBitmap's pixel data in that format.
For the moment I've got:
RenderTargetBitmap renderTargetBitmap; //This is already set higher up
DataReader reader = DataReader.FromBuffer(await renderTargetBitmap.GetPixelsAsync());

// Placeholder for reading pixels
byte[] pixel = new byte[4]; // RGBA8

// Write out pixels
int index = 0;
byte[] array = new byte[renderTargetBitmap.PixelWidth * renderTargetBitmap.PixelHeight];
using (reader)
{
    //THIS IS WHERE I THINK I'M SCREWING UP
    for (int x = 0; x < rHeight; x++)
    {
        for (int y = 0; x < rWidth; y++)
        {
            reader.ReadBytes(pixel);
            if (pixel[2] == 255)
                array[index] = 0xff;
            else
                array[index] = 0x00;
            index++;
        }
    }
}
sh1106.ShowBitmap(buffer); //Send off the byte array
I faced the same issue; this is what I did to get it working (it converts the BGRA8 output to a 1BPP output that can then be used on a monochrome display, an SSD1306 in my case).
A 1BPP output means that 8 pixels are stored in 1 byte, so you need to convert every 4 bytes (one BGRA8 pixel) into 1 bit.
public async Task Draw()
{
    ActiveCanvas.UpdateLayout();
    ActiveCanvas.Measure(ActiveCanvas.DesiredSize);
    ActiveCanvas.Arrange(new Rect(new Point(0, 0), ActiveCanvas.DesiredSize));

    // Create a render bitmap and push the surface to it
    RenderTargetBitmap renderBitmap = new RenderTargetBitmap();
    await renderBitmap.RenderAsync(ActiveCanvas, (int)ActiveCanvas.DesiredSize.Width, (int)ActiveCanvas.DesiredSize.Height);
    DataReader bitmapStream = DataReader.FromBuffer(await renderBitmap.GetPixelsAsync());

    if (_device != null)
    {
        // 1BPP: 8 pixels per output byte, so the target buffer is width * height / 8 bytes.
        byte[] pixelBuffer_1BPP = new byte[(renderBitmap.PixelWidth * renderBitmap.PixelHeight) / 8];
        byte[] BGRA8 = new byte[4]; // one 4-byte source pixel

        using (bitmapStream)
        {
            while (bitmapStream.UnconsumedBufferLength > 0)
            {
                // Each output byte consumes 32 source bytes (8 pixels * 4 bytes), hence the / 32.
                uint index = ((uint)(renderBitmap.PixelWidth * renderBitmap.PixelHeight * 4) - bitmapStream.UnconsumedBufferLength) / 32;
                for (int bit = 0; bit < 8; bit++)
                {
                    bitmapStream.ReadBytes(BGRA8);
                    byte value = (byte)(((BGRA8[0] & 0x80) | (BGRA8[1] & 0x80) | (BGRA8[2] & 0x80)) == 0x80 ? 1 : 0);
                    pixelBuffer_1BPP[index] |= (byte)(value << (7 - bit));
                }
            }
            _device.DrawBitmap(0, 0, pixelBuffer_1BPP, (short)renderBitmap.PixelWidth, (short)renderBitmap.PixelHeight, Colors.White);
        }
    }
}
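One detail to check against the target display (an assumption about the panel, not something taken from the SDK docs): value << (7 - bit) packs the first of each group of 8 pixels into the most significant bit of the output byte. If the controller expects LSB-to-MSB ordering, as the question's alignment note suggests, the shift direction simply flips:
// LSB-first packing: the first of the 8 pixels goes into bit 0 instead of bit 7.
pixelBuffer_1BPP[index] |= (byte)(value << bit);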

Using TiffBitmapEncoder with Gray32Float

I'm trying to create a 32 BPP grayscale TIFF using this code, which I found on MSDN:
BitmapSource image = BitmapSource.Create(
    width,
    height,
    96,
    96,
    PixelFormats.Gray32Float,
    null,
    pixels,
    stride);

FileStream stream = new FileStream("test file.tif", FileMode.Create);
TiffBitmapEncoder encoder = new TiffBitmapEncoder();
encoder.Compression = TiffCompressOption.None;
var bitmapFrame = BitmapFrame.Create(image);
encoder.Frames.Add(bitmapFrame);
encoder.Save(stream);
The file gets created and the image looks correct when I open it, but the file properties say that it is a 16 BPP (0-65535) image, not 32-bit floating point as specified by the Gray32Float parameter.
I've confirmed the file format is 16 BPP by looking at the file properties in windows explorer and by opening the file in ImageJ
I can create 32 BPP tiffs in Paint.Net and ImageJ, to confirm that format is supported.
Anyone know why the .Net TiffBitmapEncoder is creating the wrong type?
Under the hood, .Net uses the Windows Imaging Component (WIC). WIC supports reading of TIFFs in Gray32Float (GUID_WICPixelFormat32bppGrayFloat in WIC) but not writing. Take a look at the WIC Native Pixel Formats Overview. I had the same experience discovering the image was written as Gray16.
This is very frustrating. I've been attempting to write some scientific data using Gray32Float, but I have not been successful.
Old question, but I tried and almost made it work; it still doesn't behave correctly:
What I have here is a solution which saves as 32-bit using TiffLib, but the value range is somehow not correct.
I save an image with a float range of -0.5 to 3, and ImageJ reads it as 32-bit, BUT the range is ~-1000K to ~3000K...
I tried using TiffLib adding the following functions:
public static void Write32BitTiff_(string path, int W, int H, float[] data, ref byte[] FileData, int numPage = 0)
{
    var numBytes = sizeof(float);
    var size = H * W * numBytes;
    byte[] arr = new byte[size];
    var ctr = 0;
    byte[] floatVal;
    for (int i = 0; i < size; i += numBytes)
    {
        try
        {
            float val = data[ctr++];
            floatVal = BitConverter.GetBytes(val);
            for (int j = 0; j < numBytes; j++)
                arr[i + j] = floatVal[j];
        }
        catch (IndexOutOfRangeException)
        {
            break;
        }
        catch (Exception eee) { }
    }

    Tiff t = openTiff(path, W, H, numPage, numBytes * 8);
    t.WriteRawStrip(0, arr, size);
    t.Close();
    t.Dispose();
}
Where "OpenTiff" looks like this:
private static Tiff openTiff(string path, int W, int H, int pageNum, int numBits, bool overrideFile = false)
{
    Tiff t;
    int numberOfPages;
    if (!File.Exists(path) || overrideFile)
    {
        t = Tiff.Open(path, "w");
        numberOfPages = 1;
    }
    else
    {
        var start = DateTime.Now;
        t = Tiff.Open(path, "a");
        numberOfPages = t.NumberOfDirectories() + 1;
        numberOfPages = (pageNum > numberOfPages) ? pageNum : numberOfPages;
    }

    t.SetField(TiffTag.IMAGEWIDTH, W);
    t.SetField(TiffTag.IMAGELENGTH, H);
    const int NUM_CHANNELS = 1; //for RGB set 3; for ARGB set 4, not sure that is supported
    t.SetField(TiffTag.SAMPLESPERPIXEL, NUM_CHANNELS);
    t.SetField(TiffTag.BITSPERSAMPLE, numBits);
    t.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISBLACK);
    t.SetField(TiffTag.SUBFILETYPE, FileType.PAGE);
    t.SetField(TiffTag.PAGENUMBER, pageNum, numberOfPages);
    t.SetDirectory((short)pageNum);
    return t; // the original snippet was missing this return
}
So if this helps someone, or if someone can find the "bug" in it, that would be great!
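A hedged guess at that "bug", based only on the symptoms described (ImageJ sees 32-bit data but with values around ±1000K): the directory written by openTiff never sets the SampleFormat tag, so readers interpret the 32-bit samples as unsigned integers rather than IEEE floats. In LibTiff.Net that would be one extra field alongside the other SetField calls:
// Assumption, not verified against the original data: declare the samples as IEEE floating point
// so that readers such as ImageJ decode the 32-bit values as floats instead of integers.
t.SetField(TiffTag.SAMPLEFORMAT, SampleFormat.IEEEFP);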

Problem converting a byte array to double

I have a problem converting a byte array to a double array using BitConverter.ToDouble().
Simply put, my program selects an image and converts it to a byte array.
Then it converts the byte array to a double array.
The problem is that when I convert the byte array to doubles I get this error before the loop finishes:
(Destination array is not long enough to copy all the items in the collection. Check array index and length.)
The error happens exactly at position array.Length - 7, i.e. the seventh position from the end of the array.
I need help solving this problem; here is my code:
private Bitmap loadPic;
byte[] imageArray;
double[] dImageArray;

private void btnLoad_Click(object sender, EventArgs e)
{
    try
    {
        OpenFileDialog open = new OpenFileDialog();
        open.Filter = "Image Files(*.jpg; *.jpeg; *.gif; *.bmp)|*.jpg; *.jpeg; *.gif; *.bmp";
        if (open.ShowDialog() == DialogResult.OK)
        {
            pictureBox1.Image = new Bitmap(open.FileName);
            loadPic = new Bitmap(pictureBox1.Image);
        }
    }
    catch
    {
        throw new ApplicationException("Failed loading image");
    }
    pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
}

private void btnConvert_Click(object sender, EventArgs e)
{
    imageArray = imageToByteArray(loadPic);
    int index = imageArray.Length;
    dImageArray = new double[index];
    for (int i = 0; i < index; i++)
    {
        dImageArray[i] = BitConverter.ToDouble(imageArray, i);
    }
}

public byte[] imageToByteArray(Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, ImageFormat.Gif);
    return ms.ToArray();
}
BitConverter.ToDouble(byte[], int) uses eight bytes to construct a 64-bit double, which explains your problem (once you get to the seventh-to-last element, there are no longer eight bytes left). I'm guessing this is not what you want to do, based on how you set up your loop.
I imagine you want something like:
for (int i = 0; i < index; i++)
{
    dImageArray[i] = (double)imageArray[i];
}
Edit - or using LINQ, just for fun:
double[] dImageArray = imageArray.Select(i => (double)i).ToArray();
On the other hand...
If BitConverter is definitely what you want, then you'll need something like:
double[] dImageArray = new double[imageArray.Length / 8];
for (int i = 0; i < dImageArray.Length; i++)
{
    dImageArray[i] = BitConverter.ToDouble(imageArray, i * 8);
}
Again, based on your code, I think the first solution is what you need.
class Program
{
    static void Main(string[] args)
    {
        Program p = new Program();
        p.Test();
    }

    private void Test()
    {
        Image i = Image.FromFile(@"C:\a.jpg");
        Bitmap b = new Bitmap(i);
        MemoryStream ms = new MemoryStream();
        b.Save(ms, System.Drawing.Imaging.ImageFormat.Gif);
        byte[] by = ms.ToArray();

        double[] db = new double[(int)(Math.Ceiling((double)by.Length / 8))];
        int startInterval = 1;
        int interval = 8;
        int k = 0;
        byte[] bys = new byte[8];
        int n = 1;
        for (int m = startInterval; m <= interval && m <= by.Length; m++, n++)
        {
            bys[n - 1] = by[m - 1];
            if (m == interval)
            {
                db[k] = BitConverter.ToDouble(bys, 0);
                startInterval += 8;
                interval += 8;
                k++;
                n = 0;
                Array.Clear(bys, 0, bys.Length);
            }
            if (m == by.Length)
            {
                db[k] = BitConverter.ToDouble(bys, 0);
            }
        }
    }
}
I think you need to back up a bit and explain what you are actually trying to do. Each BitConverter.ToDouble will convert 8 consecutive bytes into 1 double. If you start at the next position in the byte array, you are reusing 7 bytes that have already been consumed. Since each conversion needs 8 bytes, the last valid starting index is Length - 8, so the loop has to stop before Length - 7.
Anyway, you are going to end up inflating the size of the data by a factor of 8.
I think some explanation of what this is for might help you get some better answers.
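For completeness, a minimal sketch of the overlapping-window reading described above (an illustration of the Length - 8 bound, not necessarily what the asker actually needs):
// Hypothetical sliding-window variant: one double per starting offset,
// stopping so that eight bytes always remain (last valid start is Length - 8).
double[] windows = new double[imageArray.Length - 7];
for (int i = 0; i <= imageArray.Length - 8; i++)
{
    windows[i] = BitConverter.ToDouble(imageArray, i);
}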
