Using TiffBitmapEncoder with Gray32Float - C#

I'm trying to create a 32 BPP grayscale TIFF using this code, which I found on MSDN:
BitmapSource image = BitmapSource.Create(
    width,
    height,
    96,
    96,
    PixelFormats.Gray32Float,
    null,
    pixels,
    stride);

FileStream stream = new FileStream("test file.tif", FileMode.Create);
TiffBitmapEncoder encoder = new TiffBitmapEncoder();
encoder.Compression = TiffCompressOption.None;
var bitmapFrame = BitmapFrame.Create(image);
encoder.Frames.Add(bitmapFrame);
encoder.Save(stream);
The file gets created and the image looks correct when I open it, but the file properties say it is a 16 BPP (0-65535) image, not 32-bit floating point as specified by the Gray32Float parameter.
I've confirmed the file format is 16 BPP by looking at the file properties in Windows Explorer and by opening the file in ImageJ.
I can create 32 BPP TIFFs in Paint.NET and ImageJ, which confirms that the format is supported.
Does anyone know why the .NET TiffBitmapEncoder is creating the wrong type?

Under the hood, .NET uses the Windows Imaging Component (WIC). WIC supports reading TIFFs in Gray32Float (GUID_WICPixelFormat32bppGrayFloat in WIC) but not writing them. Take a look at the WIC Native Pixel Formats Overview. I had the same experience of discovering the image was written as Gray16.
This is very frustrating. I've been attempting to write some scientific data using Gray32Float, but I have not been successful.
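
For anyone who wants to confirm this from code rather than from Explorer or ImageJ, here is a quick sketch that reads the file back with the standard WPF decoder and prints the stored pixel format (the file name is taken from the snippet above):

// Sketch: check what pixel format the encoder actually wrote.
using (var fs = File.OpenRead("test file.tif"))
{
    var decoder = new TiffBitmapDecoder(fs, BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.Default);
    Console.WriteLine(decoder.Frames[0].Format); // expected to report Gray16 here, per the answer above
}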

Old question, but I tried, and almost made it - still, it doesn't work correctly:
What I have here is a solution which saves as 32-bit, using TiffLib, but the value range is somehow not correct.
I save an image with a float range of -0.5 to 3, and ImageJ reads it as 32-bit, BUT the range is ~-1000K to ~3000K...
I tried using TiffLib, adding the following functions:
public static void Write32BitTiff_(string path, int W, int H, float[] data, ref byte[] FileData, int numPage = 0)
{
    var numBytes = sizeof(float);
    var size = H * W * numBytes;

    // Pack the float samples into a raw little-endian byte buffer.
    byte[] arr = new byte[size];
    var ctr = 0;
    byte[] floatVal;
    for (int i = 0; i < size; i += numBytes)
    {
        try
        {
            float val = data[ctr++];
            floatVal = BitConverter.GetBytes(val);
            for (int j = 0; j < numBytes; j++)
                arr[i + j] = floatVal[j];
        }
        catch (IndexOutOfRangeException)
        {
            break;
        }
    }

    Tiff t = openTiff(path, W, H, numPage, numBytes * 8);
    t.WriteRawStrip(0, arr, size);
    t.Close();
    t.Dispose();
}
Where "OpenTiff" looks like this:
private static Tiff openTiff(string path, int W, int H, int pageNum, int numBits, bool overrideFile = false)
{
    Tiff t;
    int numberOfPages;
    if (!File.Exists(path) || overrideFile)
    {
        t = Tiff.Open(path, "w");
        numberOfPages = 1;
    }
    else
    {
        t = Tiff.Open(path, "a");
        numberOfPages = t.NumberOfDirectories() + 1;
        numberOfPages = (pageNum > numberOfPages) ? pageNum : numberOfPages;
    }

    t.SetField(TiffTag.IMAGEWIDTH, W);
    t.SetField(TiffTag.IMAGELENGTH, H);
    const int NUM_CHANNELS = 1; // for RGB set 3; for ARGB set 4 (not sure that's supported)
    t.SetField(TiffTag.SAMPLESPERPIXEL, NUM_CHANNELS);
    t.SetField(TiffTag.BITSPERSAMPLE, numBits);
    t.SetField(TiffTag.PHOTOMETRIC, Photometric.MINISBLACK);
    t.SetField(TiffTag.SUBFILETYPE, FileType.PAGE);
    t.SetField(TiffTag.PAGENUMBER, pageNum, numberOfPages);
    t.SetDirectory((short)pageNum);
    return t;
}
So if this helps someone, or if someone can find the "bug" in it, that would be great!
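
For what it's worth, the likely missing piece is the TIFF SAMPLEFORMAT tag: without it, readers interpret the raw 32-bit samples as integers rather than IEEE floats, which would explain the strange value range ImageJ reports. A one-line sketch, added alongside the other SetField calls in openTiff (same LibTiff.Net enum that the "Reading 32-bit grayscale Tiff" answer further down this page uses):

// Declare the 32-bit samples as IEEE floating point so readers don't treat them as integers.
t.SetField(TiffTag.SAMPLEFORMAT, SampleFormat.IEEEFP);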

Related

Fast lossless encoding of SKBitmap images

I'm trying to store large 4096x3072 SKBitmap images with lossless compression as fast as I can. I've tried storing them as PNG using SKImage.FromBitmap(bitmap).Encode(SKEncodedImageFormat.Png, 100), but this was really slow. Then, using information from this question and this example code, I made a method to store them as a TIFF image, which was a lot faster but still not fast enough for my purposes. The code also has to work on Linux. This is my current code:
public static class SKBitmapExtensions
{
    public static void SaveToPng(this SKBitmap bitmap, string filename)
    {
        using (Stream s = File.OpenWrite(filename))
        {
            SKData d = SKImage.FromBitmap(bitmap).Encode(SKEncodedImageFormat.Png, 100);
            d.SaveTo(s);
        }
    }

    public static void SaveToTiff(this SKBitmap img, string filename)
    {
        using (var tifImg = Tiff.Open(filename, "w"))
        {
            // Set the tiff information
            tifImg.SetField(TiffTag.IMAGEWIDTH, img.Width);
            tifImg.SetField(TiffTag.IMAGELENGTH, img.Height);
            tifImg.SetField(TiffTag.COMPRESSION, Compression.LZW);
            tifImg.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB);
            tifImg.SetField(TiffTag.ROWSPERSTRIP, img.Height);
            tifImg.SetField(TiffTag.BITSPERSAMPLE, 8);
            tifImg.SetField(TiffTag.SAMPLESPERPIXEL, 4);
            tifImg.SetField(TiffTag.XRESOLUTION, 1);
            tifImg.SetField(TiffTag.YRESOLUTION, 1);
            tifImg.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
            tifImg.SetField(TiffTag.EXTRASAMPLES, 1, new short[] { (short)ExtraSample.UNASSALPHA });

            // Copy the data
            byte[] bytes = img.Bytes;
            // Swap red and blue
            convertSamples(bytes, img.Width, img.Height);

            // Write the image into the memory buffer
            for (int i = 0; i < img.Height; i++)
                tifImg.WriteScanline(bytes, i * img.RowBytes, i, 0);
        }
    }

    private static void convertSamples(byte[] data, int width, int height)
    {
        int stride = data.Length / height;
        const int samplesPerPixel = 4;
        for (int y = 0; y < height; y++)
        {
            int offset = stride * y;
            int strideEnd = offset + width * samplesPerPixel;
            for (int i = offset; i < strideEnd; i += samplesPerPixel)
            {
                // Swap B and R within each BGRA pixel.
                byte temp = data[i + 2];
                data[i + 2] = data[i];
                data[i] = temp;
            }
        }
    }
}
And the test code:
SKBitmap bitmap = SKBitmap.Decode("test.jpg");
Stopwatch stopwatch = new();
stopwatch.Start();
int iterations = 20;
for (int i = 0; i < iterations; i++)
    bitmap.SaveToTiff("encoded.tiff");
stopwatch.Stop();
Console.WriteLine($"Average Tiff encoding time for a {bitmap.Width}x{bitmap.Height} image = {stopwatch.ElapsedMilliseconds / iterations} ms");

stopwatch.Restart();
for (int i = 0; i < iterations; i++)
    bitmap.SaveToPng("encoded.png");
stopwatch.Stop();
Console.WriteLine($"Average PNG encoding time for a {bitmap.Width}x{bitmap.Height} image = {stopwatch.ElapsedMilliseconds / iterations} ms");
As a result I get:
Average Tiff encoding time for a 4096x3072 image = 630 ms
Average PNG encoding time for a 4096x3072 image = 3092 ms
Is there any faster way to store these images? I can imagine that I could avoid copying the data at var bytes = img.Bytes, but I'm not sure how. The encoded file size is currently 10.3 MB for the PNG and 26 MB for the TIFF.
If you are not so interested in making the most optimal PNG (from a file size point of view), then you can get access to some faster encoding options through SKPixmap.Encode:
SKBitmap bitmap = SKBitmap.Decode("test.jpg");
using (var pixmap = bitmap.PeekPixels())
{
    var filters = SKPngEncoderFilterFlags.NoFilters;
    int compress = 0;
    var options = new SKPngEncoderOptions(filters, compress);
    using (var data = pixmap.Encode(options))
    {
        byte[] bytes = data.ToArray();
        // use data - write bytes to file etc.
    }
}
In the above example:
- compress = 0 will use no zlib compression, so the PNGs will effectively be similar in size to an uncompressed TIFF. You could try a higher value for compress (I think 9 is the maximum, but slowest).
- filters = SKPngEncoderFilterFlags.NoFilters will be fastest under all scenarios. It could make files produced with compress != 0 larger in file size. The filters option is used to try to improve the compressibility of the file, with SKPngEncoderFilterFlags.AllFilters producing potentially the most compressed file.
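
To plug this into the benchmark above, here is a minimal sketch of an extension method in the same style as SaveToPng. The name SaveToFastPng and its parameters are my own invention, built only from the SkiaSharp calls already shown in this thread:

using System.IO;
using SkiaSharp;

public static class FastPngExtensions
{
    // Sketch only: wraps the faster encoder options from the answer above.
    public static void SaveToFastPng(this SKBitmap bitmap, string filename,
        int zlibLevel = 0,
        SKPngEncoderFilterFlags filters = SKPngEncoderFilterFlags.NoFilters)
    {
        var options = new SKPngEncoderOptions(filters, zlibLevel);
        using (var pixmap = bitmap.PeekPixels())
        using (var data = pixmap.Encode(options))
        using (Stream s = File.OpenWrite(filename))
        {
            data.SaveTo(s);
        }
    }
}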

C# - Padding image bytes with white bytes to fill 512 x 512

I'm using the Digital Persona SDK to scan fingerprints in WSQ format. As a requirement I need a 512 x 512 image, but the SDK only exports a 357 x 392 image.
The SDK provides a method to compress the captured image from the device into WSQ format and return a byte array that I can write to disk.
- I've tried to allocate a buffer of 262144 bytes for a 512 x 512 image.
- Fill the new buffer with white pixel data, setting each byte to 255.
- Copy the original image buffer into the new image buffer. The original image doesn't need to be centered, but it's important to make sure to copy without corrupting the image data.
To summarize, I've tried to copy the old image into the upper right corner of the new image.
DPUruNet.Compression.Start();
DPUruNet.Compression.SetWsqBitrate(95, 0);
Fid capturedImage = captureResult.Data;

// Fill the new buffer with white pixel data, each byte set to 255.
byte[] bytesWSQ512 = new byte[262144];
for (int i = 0; i < bytesWSQ512.Length; i++)
{
    bytesWSQ512[i] = 255;
}

// Compress capturedImage and get bytes (357 x 392)
byte[] bytesWSQ = DPUruNet.Compression.CompressRaw(capturedImage.Views[0].Width, capturedImage.Views[0].Height, 500, 8, capturedImage.Views[0].RawImage, CompressionAlgorithm.COMPRESSION_WSQ_NIST);

// Copy the original image buffer into the new image buffer
for (int i = 0; i < capturedImage.Views[0].Height; i++)
{
    for (int j = 0; j < capturedImage.Views[0].Width; j++)
    {
        bytesWSQ512[i * bytesWSQ512.Length + j] = bytesWSQ[i * capturedImage.Views[0].Width + j];
    }
}

// Write bytes to disk
File.WriteAllBytes(@"C:\Users\Admin\Desktop\bytesWSQ512.wsq", bytesWSQ512);
DPUruNet.Compression.Finish();
When running that snippet I get an IndexOutOfRangeException; I don't know if the loop or the calculation of the indexes for the new array is right.
Here is a representation of what I'm trying to do.
If someone is trying to achieve something like this, or to pad a raw image, I hope this will help.
DPUruNet.Compression.Start();
DPUruNet.Compression.SetWsqBitrate(75, 0);
Fid ISOFid = captureResult.Data;

// Pad the raw 8 BPP image to 512 x 512 with white (255) before compressing it to WSQ.
byte[] paddedImage = PadImage8BPP(captureResult.Data.Views[0].RawImage, captureResult.Data.Views[0].Width, captureResult.Data.Views[0].Height, 512, 512, 255);
byte[] bytesWSQ512 = Compression.CompressRaw(512, 512, 500, 8, paddedImage, CompressionAlgorithm.COMPRESSION_WSQ_NIST);
And the method to resize (pad) the image is:
public byte[] PadImage8BPP(byte[] original, int original_width, int original_height, int desired_width, int desired_height, byte pad_color)
{
    byte[] canvas_8bpp = new byte[desired_width * desired_height];
    for (int i = 0; i < canvas_8bpp.Length; i++)
        canvas_8bpp[i] = pad_color; // Fill background. Note this type of fill will fail histogram checks.

    int clamp_y_begin = 0;
    int clamp_y_end = original_height;
    int clamp_x_begin = 0;
    int clamp_x_end = original_width;
    int pad_y = 0;
    int pad_x = 0;

    if (original_height > desired_height)
    {
        int crop_distance = (int)Math.Ceiling((original_height - desired_height) / 2.0);
        clamp_y_begin = crop_distance;
        clamp_y_end = original_height - crop_distance;
    }
    else
    {
        pad_y = (desired_height - original_height) / 2;
    }

    if (original_width > desired_width)
    {
        int crop_distance = (int)Math.Ceiling((original_width - desired_width) / 2.0);
        clamp_x_begin = crop_distance;
        clamp_x_end = original_width - crop_distance;
    }
    else
    {
        pad_x = (desired_width - original_width) / 2;
    }

    // We traverse the captured image (either the whole image or a cropped subset)
    for (int y = clamp_y_begin; y < clamp_y_end; y++)
    {
        for (int x = clamp_x_begin; x < clamp_x_end; x++)
        {
            byte image_pixel = original[y * original_width + x];
            canvas_8bpp[(pad_y + y - clamp_y_begin) * desired_width + pad_x + x - clamp_x_begin] = image_pixel;
        }
    }
    return canvas_8bpp;
}

Reading 32-bit grayscale Tiff using Libtiff.Net

I've tried to read a 32-bit grayscale TIFF file in which each pixel contains a floating point number, but during the reading process the buffer array contains 4 byte values for each pixel. For instance, for a pixel value of 43.0 the byte values are {0, 0, 44, 66}. I can't understand the relation between the float pixel value and the byte values. I also wrote the image back out using the buffer, but the pixel values of the output image are int values like 1073872896. Any suggestion would be appreciated.
using (Tiff input = Tiff.Open(#"E:\Sample_04.tif", "r"))
{
// get properties to use in writing output image file
int width = input.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = input.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int samplesPerPixel = input.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = input.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int photo = input.GetField(TiffTag.PHOTOMETRIC)[0].ToInt();
int scanlineSize = input.ScanlineSize();
byte[][] buffer = new byte[height][];
for (int i = 0; i < height; i++)
{
buffer[i] = new byte[scanlineSize];
input.ReadScanline(buffer[i], i);
}
using (Tiff output = Tiff.Open("output.tif", "w"))
{
output.SetField(TiffTag.SAMPLESPERPIXEL, samplesPerPixel);
output.SetField(TiffTag.IMAGEWIDTH, width);
output.SetField(TiffTag.IMAGELENGTH, height);
output.SetField(TiffTag.BITSPERSAMPLE, bitsPerSample);
output.SetField(TiffTag.ROWSPERSTRIP, output.DefaultStripSize(0));
output.SetField(TiffTag.PHOTOMETRIC, photo);
output.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
output.SetField(TiffTag.COMPRESSION, compression);
int j = 0;
for (int i = 0; i < h; i++)
{
output.WriteScanline(buffer[i], j);
j++;
}
}
}
Update 1:
I found the relation between the four bytes and the pixel value using the BitConverter class in C#:
for byte[] a = { 0, 0, 44, 66 }, BitConverter.ToSingle(a, 0) gives 43 and BitConverter.ToInt32(a, 0) gives 1110179840. It seems the bytes are being converted to Int32, and now the question is how to get the byte values read as float.
Update 2:
The original TIFF file and the TIFF written by the snippet above have been attached. Why is the output TIFF file messed up?
I added this line of code to declare the pixel values as floating point numbers, and it works fine:
output.SetField(TiffTag.SAMPLEFORMAT, SampleFormat.IEEEFP);
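For anyone who also needs the float values in memory rather than just a correct output file, here is a minimal sketch built on the BitConverter observation above (the helper name ScanlineToFloats is mine, not part of LibTiff.Net):

// Sketch: interpret one raw scanline of a 32-bit float grayscale TIFF as floats.
// Assumes little-endian sample data, which matches the {0, 0, 44, 66} -> 43.0 example.
static float[] ScanlineToFloats(byte[] scanline, int width)
{
    var values = new float[width];
    for (int x = 0; x < width; x++)
        values[x] = BitConverter.ToSingle(scanline, x * sizeof(float));
    return values;
}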

Out of memory exception in reading a 1.5 GB tile-based Tiff file using LibTiff.Net

I tried to read and write a 1.5 GB tiled TIFF file using the LibTiff.Net library, as it's declared to support BigTiff (>4 GB) image files. I wrote the code below, but the line "buffer[tiles]..." throws an out of memory exception. I would appreciate it if developers could help me solve this problem.
using (Tiff input = Tiff.Open(#"E:\active folder\Sample_04.tif", "r"))
{
int width = input.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int height = input.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
int tileWidth = input.GetField(TiffTag.TILEWIDTH)[0].ToInt();
int tileLentgh = input.GetField(TiffTag.TILELENGTH)[0].ToInt();
int samplesPerPixel = input.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt();
int bitsPerSample = input.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt();
int photo = input.GetField(TiffTag.PHOTOMETRIC)[0].ToInt();
int tiles = 0;
int tileSize = input.TileSize();
byte[][] buffer = new byte[tileSize][];
for (int y = 0; y < height; y += tileLentgh)
{
for (int x = 0; x < width; x += tileWidth)
{
buffer[tiles] = new byte[tileSize];
input.ReadTile(buffer[tiles], 0, x, y, 0, 0);
tiles++;
}
}
// writing
using (Tiff output = Tiff.Open("output.tif", "w"))
{
output.SetField(TiffTag.SAMPLESPERPIXEL, samplesPerPixel);
output.SetField(TiffTag.IMAGEWIDTH, width );
output.SetField(TiffTag.IMAGELENGTH, height);
output.SetField(TiffTag.BITSPERSAMPLE, bitsPerSample);
output.SetField(TiffTag.ROWSPERSTRIP, output.DefaultStripSize(0));
output.SetField(TiffTag.PHOTOMETRIC, photo);
output.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
int c = 0;
for (int y = 0; y < height; y += tileLentgh)
{
for (int x = 0; x < width; x += tileWidth)
{
output.WriteTile(buffer[c], x, y, 0, 0);
c++;
}
}
}
}
System.Diagnostics.Process.Start("output.tif");
}
The problem is not that the library doesn't support BigTiff files; the error is thrown when you try to allocate a huge amount of memory. The code you wrote tries to hold the whole image in your computer's memory, expecting that there is enough space there to do so, and it seems that there is not.
Handling data with sizes comparable to the available memory on the target system always requires extra attention (that's why you can see BigTiff support emphasized in the library's description).
Fortunately for you, this is not a new problem and there are solutions for it: see some answers here or here.
Basically, the idea behind these solutions is to use your hard drive (or another storage device) to store the data and to provide an interface that lets you swap the necessary parts into memory when needed (just like virtual memory).
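
For the copy shown in the question specifically, you don't need to hold every tile at once: reading and writing one tile at a time keeps memory usage at a single tile's size. Below is a minimal sketch using only the LibTiff.Net calls already present in the question; tag handling is trimmed for brevity and it is not a full BigTiff-aware copier:

using (Tiff input = Tiff.Open(@"E:\active folder\Sample_04.tif", "r"))
using (Tiff output = Tiff.Open("output.tif", "w"))
{
    int width = input.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
    int height = input.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
    int tileWidth = input.GetField(TiffTag.TILEWIDTH)[0].ToInt();
    int tileLength = input.GetField(TiffTag.TILELENGTH)[0].ToInt();

    // Make the output a tiled TIFF with the same layout as the input.
    output.SetField(TiffTag.IMAGEWIDTH, width);
    output.SetField(TiffTag.IMAGELENGTH, height);
    output.SetField(TiffTag.TILEWIDTH, tileWidth);
    output.SetField(TiffTag.TILELENGTH, tileLength);
    output.SetField(TiffTag.SAMPLESPERPIXEL, input.GetField(TiffTag.SAMPLESPERPIXEL)[0].ToInt());
    output.SetField(TiffTag.BITSPERSAMPLE, input.GetField(TiffTag.BITSPERSAMPLE)[0].ToInt());
    output.SetField(TiffTag.PHOTOMETRIC, input.GetField(TiffTag.PHOTOMETRIC)[0].ToInt());
    output.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);

    // One reusable buffer: memory use stays at a single tile regardless of file size.
    byte[] tile = new byte[input.TileSize()];
    for (int y = 0; y < height; y += tileLength)
    {
        for (int x = 0; x < width; x += tileWidth)
        {
            input.ReadTile(tile, 0, x, y, 0, 0);
            output.WriteTile(tile, x, y, 0, 0);
        }
    }
}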

Convert a RenderTargetBitmap to a byte[] to be displayed on an embedded screen?

I'm trying to convert a RenderTargetBitmap to a byte array that will then get sent off to an external monochrome OLED screen. I know that for the bitmap to display correctly, the bit/byte alignment should be LSB to MSB and top to bottom:
But I can't figure out how to get the RenderTargetBitmap's pixeldata in that format.
For the moment I've got:
RenderTargetBitmap renderTargetBitmap; // This is already set higher up
DataReader reader = DataReader.FromBuffer(await renderTargetBitmap.GetPixelsAsync());

// Placeholder for reading pixels
byte[] pixel = new byte[4]; // RGBA8

// Write out pixels
int index = 0;
byte[] array = new byte[renderTargetBitmap.PixelWidth * renderTargetBitmap.PixelHeight];
using (reader)
{
    // THIS IS WHERE I THINK I'M SCREWING UP
    for (int x = 0; x < rHeight; x++)
    {
        for (int y = 0; x < rWidth; y++)
        {
            reader.ReadBytes(pixel);
            if (pixel[2] == 255)
                array[index] = 0xff;
            else
                array[index] = 0x00;
            index++;
        }
    }
}
sh1106.ShowBitmap(buffer); // Send off the byte array
I faced the same issue; this is what I did to get it working (it converts the BGRA8 output to a 1 BPP output that can then be used on a monochrome display, an SSD1306 in my case).
At 1 BPP, 8 pixels are stored in 1 byte, so every 4 input bytes (one BGRA8 pixel) become 1 bit; in other words, every 32 input bytes become 1 output byte.
public async Task Draw()
{
    ActiveCanvas.UpdateLayout();
    ActiveCanvas.Measure(ActiveCanvas.DesiredSize);
    ActiveCanvas.Arrange(new Rect(new Point(0, 0), ActiveCanvas.DesiredSize));

    // Create a render bitmap and push the surface to it
    RenderTargetBitmap renderBitmap = new RenderTargetBitmap();
    await renderBitmap.RenderAsync(ActiveCanvas, (int)ActiveCanvas.DesiredSize.Width, (int)ActiveCanvas.DesiredSize.Height);
    DataReader bitmapStream = DataReader.FromBuffer(await renderBitmap.GetPixelsAsync());

    if (_device != null)
    {
        // 1 BPP output: one byte holds 8 pixels, so the buffer is width * height / 8 bytes.
        byte[] pixelBuffer_1BPP = new byte[(renderBitmap.PixelWidth * renderBitmap.PixelHeight) / 8];
        byte[] BGRA8 = new byte[4];

        using (bitmapStream)
        {
            while (bitmapStream.UnconsumedBufferLength > 0)
            {
                // Each output byte corresponds to 32 consumed input bytes (8 BGRA8 pixels).
                uint index = ((uint)(renderBitmap.PixelWidth * renderBitmap.PixelHeight * 4) - bitmapStream.UnconsumedBufferLength) / 32;
                for (int bit = 0; bit < 8; bit++)
                {
                    bitmapStream.ReadBytes(BGRA8);
                    // Treat the pixel as "on" if any of B, G, R has its high bit set.
                    byte value = (byte)(((BGRA8[0] & 0x80) | (BGRA8[1] & 0x80) | (BGRA8[2] & 0x80)) == 0x80 ? 1 : 0);
                    pixelBuffer_1BPP[index] |= (byte)(value << (7 - bit));
                }
            }
            _device.DrawBitmap(0, 0, pixelBuffer_1BPP, (short)renderBitmap.PixelWidth, (short)renderBitmap.PixelHeight, Colors.White);
        }
    }
}
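
One detail worth checking against the question: the packing above is MSB-first (value << (7 - bit)). If your display really expects LSB-to-MSB order, as the question states for its SH1106, only the shift changes; a one-line variant using the same loop variables as above:

// LSB-first packing: the first pixel of each group of 8 lands in bit 0 instead of bit 7.
pixelBuffer_1BPP[index] |= (byte)(value << bit);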
