Padding in PCX decoder - C#

I've got a PCX decoder in C# (see below) that is meant to read a Stream and output a Bitmap. It works for images whose dimensions are multiples of 8, and seems to work with most images of less than 8bpp regardless of dimensions, but images with other dimensions come out skewed in an unusual way (see this link). The pixels are all there; they just appear shifted to the left in a strange way. The image is a valid PCX and opens fine in IrfanView and Paint.NET.
Edit 1:
Okay, here's the result of quite a bit of testing: images with a bytes-per-line value that is divisible by 8 (e.g. 316x256) decode fine, but images with an odd value don't. This isn't true for all PCX files; it seems that some (most?) images created in IrfanView work fine, but those I've found elsewhere do not. I was working on this some time ago, so I can't recall where they came from, but I do know that images saved with the Paint.NET plug-in (here) also reproduce the problem. I think it's likely a padding issue, either in those files or in my decoder, but since the images decode fine elsewhere it's probably my decoder that's at fault; I just can't see where :(
End of Edit 1.
My code for importing is here (there's a lot, but it's the whole decoding algorithm, minus the header, which is processed separately):
public IntPtr ReadPixels(Int32 BytesPerScanline, Int32 ScanLines, Stream file)
{
    // BytesPerScanline is taken from the header, ScanLines is the height and file is the filestream
    IntPtr pBits;
    Boolean bRepeat;
    Int32 RepeatCount;
    Byte ReadByte;
    Int32 Row = 0;
    Int32 Col = 0;
    Byte[] PCXData = new Byte[BytesPerScanline * ScanLines];
    BinaryReader r = new BinaryReader(file);
    r.BaseStream.Seek(128, SeekOrigin.Begin); // skip the 128-byte PCX header
    while (Row < ScanLines)
    {
        ReadByte = r.ReadByte();
        bRepeat = (0xC0 == (ReadByte & 0xC0)); // top two bits set => RLE run marker
        RepeatCount = (ReadByte & 0x3F);
        if (!(Col >= BytesPerScanline))
        {
            if (bRepeat)
            {
                ReadByte = r.ReadByte();
                while (RepeatCount > 0)
                {
                    PCXData[(Row * BytesPerScanline) + Col] = ReadByte;
                    RepeatCount -= 1;
                    Col += 1;
                }
            }
            else
            {
                PCXData[(Row * BytesPerScanline) + Col] = ReadByte;
                Col += 1;
            }
        }
        if (Col >= BytesPerScanline)
        {
            Col = 0;
            Row += 1;
        }
    }
    pBits = System.Runtime.InteropServices.Marshal.AllocHGlobal(PCXData.Length);
    System.Runtime.InteropServices.Marshal.Copy(PCXData, 0, pBits, PCXData.Length);
    return pBits;
}
I've been advised that the problem might be related to padding, but I can't see where that would come into play in this code, or how to work out where the padding actually is.
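For reference while debugging: in PCX, the header's BytesPerLine value is always even and can be larger than the number of bytes the visible pixels actually need, which is (width * bitsPerPixel + 7) / 8, so the decoded buffer can contain padding bytes at the end of every scanline. Below is a minimal sketch of stripping that padding after decoding, assuming a single-plane image; the helper name and parameters are illustrative and not part of the original code.
// Hypothetical helper: copy only the visible bytes of each decoded scanline,
// discarding any PCX padding at the end of every line. 'visibleBytesPerLine'
// would be (width * bitsPerPixel + 7) / 8 computed from the PCX header.
static Byte[] StripScanlinePadding(Byte[] pcxData, Int32 bytesPerScanline,
                                   Int32 scanLines, Int32 visibleBytesPerLine)
{
    Byte[] dest = new Byte[visibleBytesPerLine * scanLines];
    for (Int32 row = 0; row < scanLines; row++)
    {
        Buffer.BlockCopy(pcxData, row * bytesPerScanline,
                         dest, row * visibleBytesPerLine,
                         visibleBytesPerLine);
    }
    return dest;
}
If the skew disappears once the padding is stripped (or once the Bitmap stride is matched to BytesPerLine rather than to the pixel width), that would confirm the padding theory.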

Related

BitMiracle LibTIFF.NET Can't Decompress TIFF Previously Created By Itself

I've implemented a class that reads 24 bit-per-pixel TIFF generated by Microsoft.Reporting.WinForms.ReportViewer, converts it to a 1 bit-per-pixel TIFF and stores the result into a file.
This part is working just fine - I'm able to open the resulting TIFF in a TIFF viewer and view the contents.
For compression I'm using the following codec:
outImage.SetField(TiffTag.COMPRESSION, Compression.CCITT_T6);
Now I'm trying to read the same 1 bit-per-pixel TIFF and decompress it. I wrote the following methods:
public static byte[] DecompressTiff(byte[] inputTiffBytes)
{
    using (var tiffStream = new MemoryStream(inputTiffBytes))
    using (var inImage = Tiff.ClientOpen("in-memory", "r", tiffStream, new TiffStream()))
    {
        if (inImage == null)
            return null;
        int totalPages = inImage.NumberOfDirectories();
        for (var i = 0; i < totalPages; )
        {
            if (!inImage.SetDirectory((short)i))
                return null;
            var decompressedTiff = DecompressTiff(inImage);
            ...
        }
private static byte[] DecompressTiff(Tiff image)
{
    // Read in the possibly multiple strips
    var stripSize = image.StripSize();
    var stripMax = image.NumberOfStrips();
    var imageOffset = 0;
    int row = 0;
    var bufferSize = image.NumberOfStrips() * stripSize;
    var buffer = new byte[bufferSize];
    int height = 0;
    var result = image.GetField(TiffTag.IMAGELENGTH);
    if (result != null)
        height = result[0].ToInt();
    int rowsperstrip = 0;
    result = image.GetField(TiffTag.ROWSPERSTRIP);
    if (result != null)
        rowsperstrip = result[0].ToInt();
    if (rowsperstrip > height && rowsperstrip != -1)
        rowsperstrip = height;
    for (var stripCount = 0; stripCount < stripMax; stripCount++)
    {
        int countToRead = (row + rowsperstrip > height) ? image.VStripSize(height - row) : stripSize;
        var readBytesCount = image.ReadEncodedStrip(stripCount, buffer, imageOffset, countToRead); // Returns -1 for the last strip of the very first page
        if (readBytesCount == -1)
            return null;
        imageOffset += readBytesCount;
        row += rowsperstrip;
    }
    return buffer;
}
The problem is that when ReadEncodedStrip() is called for the last strip of the very first page, it returns -1, indicating an error. I can't figure out what's wrong even after debugging the LibTIFF.NET decoder code; it's something about an EOL marker being found where one isn't expected.
For some reason, LibTIFF.NET can't read a TIFF produced by itself, or (more likely) I'm missing something. Here is the problem TIFF.
Could anyone please help me find the root cause?
After more than half a day of investigation, I finally managed to find the cause of this strange issue.
To convert from 24 bit-per-pixel TIFF to 1 bit-per-pixel, I ported to C# the algorithms of two tools that ship with the original libtiff: tiff2bw and tiffdither.
tiffdither has a bug in that it doesn't include the last image row in the output, i.e. if you feed it an image 2200 rows high, you get an image 2199 rows high as output.
I noticed this bug at the very beginning of the porting effort and tried to fix it, but, as it turned out, not completely: the ported algorithm still didn't write the last row to the output TIFF via the WriteScanline() method. That was why LibTIFF.NET couldn't read the last strip/row of the image, depending on which reading method I used.
What surprised me is that LibTIFF.NET lets you write such a corrupted TIFF without reporting any error. For example, WriteDirectory() returns true even when the image height set via TiffTag.IMAGELENGTH differs from the actual number of rows written. Later, however, it can't read such an image and throws an error while reading.
Maybe this behavior is inherited from the original libtiff, though.
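A minimal sketch of the kind of sanity check that would have caught this, assuming scanline-based writing with LibTiff.Net; outImage and scanlines are illustrative names, not code from the project above.
// Assumed sketch: make sure the number of WriteScanline calls matches the
// IMAGELENGTH tag before calling WriteDirectory, since LibTiff.Net will not
// complain about the mismatch on its own.
int height = 2200; // must equal the value set via outImage.SetField(TiffTag.IMAGELENGTH, ...)
for (int row = 0; row < height; row++)
{
    if (!outImage.WriteScanline(scanlines[row], row))
        throw new InvalidOperationException("Failed to write scanline " + row);
}
outImage.WriteDirectory();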

Encode JPG image file as DICOM PixelData using ClearCanvas

I have a set of JPG images that are actually slices of a CT scan, which I want to reconstruct into DICOM image files and import into a PACS.
I am using ClearCanvas, and have set all of the requisite tags (and confirmed them by converting one of my JPG files to DICOM using a proprietary application to make sure they are the same). I am just not sure how I should process my JPG file to get it into the PixelData tag.
Currently I am converting it to a byte array, on advice from the ClearCanvas forums, but the image is just garbled in the DICOM viewer. How should I process the image data to get it into a readable format?
public DicomFile CreateFileFromImage(Image image)
{
    int height = image.Height;
    int width = image.Width;
    short bitsPerPixel = (short)Image.GetPixelFormatSize(image.PixelFormat);
    byte[] imageBuffer = ImageToByteArray(image);
    DicomFile dicomFile = new DicomFile();
    dicomFile.DataSet[DicomTags.Columns].SetInt32(0, width);
    dicomFile.DataSet[DicomTags.Rows].SetInt32(0, height);
    dicomFile.DataSet[DicomTags.BitsStored].SetInt16(0, bitsPerPixel);
    dicomFile.DataSet[DicomTags.BitsAllocated].SetInt16(0, bitsPerPixel);
    dicomFile.DataSet[DicomTags.HighBit].SetInt16(0, 7);
    //other tags not shown
    dicomFile.DataSet[DicomTags.PixelData].Values = imageBuffer;
    return dicomFile;
}
public static byte[] ImageToByteArray(Image imageIn)
{
    MemoryStream ms = new MemoryStream();
    imageIn.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
    return ms.ToArray();
}
The ClearCanvas library has two helper classes that make it easier to encode and decode pixel data within a DicomFile: the DicomCompressedPixelData class and the DicomUncompressedPixelData class. You can use these to set the parameters for the image and encode them into the DicomFile object.
In your case, since you're encoding a compressed object, you should use the DicomCompressedPixelData class. There are properties on the class that can be set; calling the UpdateMessage method copies those property values over to the DicomFile object. This class also has an AddFrameFragment method that properly encodes the pixel data. Note that compressed pixel data has to have specific binary wrappers around each frame of data; this was the part missing from your previous code. The code below shows how to set this up.
short bitsPerPixel = (short)Image.GetPixelFormatSize(image.PixelFormat);
var dicomFile = new DicomFile();
var pd = new DicomCompressedPixelData(dicomFile);
pd.ImageWidth = (ushort)image.Width;
pd.ImageHeight = (ushort)image.Height;
pd.BitsStored = (ushort)bitsPerPixel;
pd.BitsAllocated = (ushort)bitsPerPixel;
pd.HighBit = 7;
pd.SamplesPerPixel = 3;
pd.PlanarConfiguration = 0;
pd.PhotometricInterpretation = "YBR_FULL_422";
byte[] imageBuffer = ImageToByteArray(image);
pd.AddFrameFragment(imageBuffer);
pd.UpdateMessage(dicomFile);
return dicomFile;
I ended up processing the bitmap manually and creating an array out of the Red Channel in the image, following some code in a plugin:
int size = rows * columns;
byte[] pData = new byte[size];
int i = 0;
for (int row = 0; row < rows; ++row)
{
    for (int column = 0; column < columns; column++)
    {
        pData[i++] = image.GetPixel(column, row).R;
    }
}
It does work, but it is horribly slow and creates bloated DICOM files. I'd love to get the inbuilt DicomCompressedPixelData class working.
Any further suggestions would be very welcome.
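If the GetPixel approach is the bottleneck, a LockBits-based version of the same red-channel extraction is usually much faster. Here is a rough sketch, assuming a 24bpp RGB Bitmap named image and the System.Drawing.Imaging and System.Runtime.InteropServices namespaces; the variable names are illustrative, not from the code above.
// Hedged sketch: same red-channel extraction, but reading whole rows via
// LockBits instead of calling GetPixel once per pixel.
var rect = new Rectangle(0, 0, image.Width, image.Height);
BitmapData data = image.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
try
{
    byte[] pData = new byte[image.Width * image.Height];
    byte[] rowBytes = new byte[data.Stride];
    int i = 0;
    for (int row = 0; row < image.Height; row++)
    {
        Marshal.Copy(IntPtr.Add(data.Scan0, row * data.Stride), rowBytes, 0, data.Stride);
        for (int column = 0; column < image.Width; column++)
        {
            pData[i++] = rowBytes[column * 3 + 2]; // 24bpp rows are stored B, G, R
        }
    }
}
finally
{
    image.UnlockBits(data);
}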
It is important to know the bit depth and color components of your JPEG CT image before inserting it into the DICOM dataset. It could be 8-bit lossy (JPEG Compression Process 2), 12-bit lossy (JPEG Compression Process 4), or 8, 12 or 16-bit lossless grayscale JPEG (JPEG Compression Process 14 - lossless, non-hierarchical). This information is critical for setting the pixel-data-related attributes such as Photometric Interpretation, Samples per Pixel, Planar Configuration, Bits Allocated and High Bit, as well as the Transfer Syntax.
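For illustration, a rough sketch of mapping the JPEG process mentioned above to the standard DICOM transfer syntax UID; jpegProcess is a hypothetical value you would determine by inspecting the JPEG stream, and how you then apply the UID depends on the ClearCanvas API you are using.
// Standard DICOM transfer syntax UIDs for the JPEG processes discussed above.
string transferSyntaxUid;
switch (jpegProcess)
{
    case 1:  transferSyntaxUid = "1.2.840.10008.1.2.4.50"; break; // JPEG Baseline (Process 1)
    case 2:
    case 4:  transferSyntaxUid = "1.2.840.10008.1.2.4.51"; break; // JPEG Extended (Process 2 & 4)
    case 14: transferSyntaxUid = "1.2.840.10008.1.2.4.57"; break; // JPEG Lossless, Non-Hierarchical (Process 14)
    default: throw new NotSupportedException("Unhandled JPEG process: " + jpegProcess);
}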

C# Real time waveform data plot using NAudio

I am new to processing WAV files and to C#. My goal is to plot the waveform of a WAV file in real time, i.e. while recording the sound I want to plot its graph simultaneously. I looked at some audio libraries and decided to use NAudio (I don't know if it's the best choice for me; I'm open to any suggestions about choosing an audio library). However, I have no idea how to do real-time plotting of audio data. Some people suggest GDI, but as I said I'm new and I think it will take too much time to learn to use GDI efficiently. If I must learn GDI, please share any article that can help a beginner like me. Honestly, I don't know where I should start and need to be guided :)) I also have a question.
One of the NAudio tutorials works with a byte array to plot the waveform in a Chart. That's fine if you know the size of the WAV file, but it works too slowly and throws an OutOfMemoryException for WAV files bigger than about 10 MB. The code below shows what I mean.
OpenFileDialog open = new OpenFileDialog();
open.Filter = "Wave File (*.wav)|*.wav;";
if (open.ShowDialog() != DialogResult.OK) return;
chart1.Series.Add("wave");
chart1.Series["wave"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.FastLine;
chart1.Series["wave"].ChartArea = "ChartArea1";
NAudio.Wave.WaveChannel32 wave = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
byte[] buffer = new byte[426565];
int read;
while (wave.Position < wave.Length)
{
    read = wave.Read(buffer, 0, 426565);
    for (int i = 0; i < read / 4; i++)
    {
        chart1.Series["wave"].Points.Add(BitConverter.ToSingle(buffer, i * 4));
    }
}
Is there a way to perform this operation faster?
If you plot every single sample, you will end up with a waveform that is unmanageably large since audio usually contains many thousands of samples per second. A common way waveforms are drawn is by selecting the maximum value over a period of time, and then drawing a vertical line to represent it. For example, if you had a three minute song, and wanted a waveform around 600 pixels wide, each pixel would represent about a third of a second. So you'd find the largest sample value in that third of a second and use that to plot your waveform.
Also, in your sample code you are reading an odd number of bytes. But since this is floating point audio, you should always read in multiples of four bytes.
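A rough sketch of that peak-per-bucket idea, assuming 32-bit float samples from WaveChannel32 as in your code and roughly 600 plotted points; the bucket-size calculation is illustrative.
// Hedged sketch: collapse each bucket of samples into its peak (maximum
// absolute value) and plot one point per bucket instead of one per sample.
var wave = new NAudio.Wave.WaveChannel32(new NAudio.Wave.WaveFileReader(open.FileName));
int totalSamples = (int)(wave.Length / 4);              // 32-bit float samples
int samplesPerPoint = Math.Max(1, totalSamples / 600);  // aim for ~600 plotted points
byte[] buffer = new byte[samplesPerPoint * 4];
int read;
while ((read = wave.Read(buffer, 0, buffer.Length)) > 0)
{
    float peak = 0f;
    for (int i = 0; i < read / 4; i++)
    {
        float sample = Math.Abs(BitConverter.ToSingle(buffer, i * 4));
        if (sample > peak) peak = sample;
    }
    chart1.Series["wave"].Points.Add(peak);
}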
This worked for me
WaveChannel32 wave = new WaveChannel32(new WaveFileReader(txtWave.Text));
int sampleSize = 1024;
var bufferSize = 16384 * sampleSize;
var buffer = new byte[bufferSize];
int read = 0;
chart.Series.Add("wave");
chart.Series["wave"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.FastLine;
chart.Series["wave"].ChartArea = "ChartArea1";
while (wave.Position < wave.Length)
{
    read = wave.Read(buffer, 0, bufferSize);
    for (int i = 0; i < read / sampleSize; i++)
    {
        var point = BitConverter.ToSingle(buffer, i * sampleSize);
        chart.Series["wave"].Points.Add(point);
    }
}

AVIStreamSetFormat error

I'm trying to create a Desktop Recording Application. When I record the full screen, the program works as it is supposed to, but in some cases when I select a specific region from the desktop to record I get an error at: int result = AVIStreamSetFormat(psCompress, 0, ref bi, (Int32)bi.biSize);
Error in VideoStreamSetFormat: -2147205016.
I'm using the Xvid MPEG-4 codec to create the AVI video. I think the problem might be that the Xvid MPEG-4 codec does not accept certain image sizes (width and height). I'm not sure, I'm stuck on this problem, and I'm asking if somebody can help me understand why it is not working.
private void SetFormat(IntPtr psCompress)
{
    BITMAPINFOHEADER bi = new BITMAPINFOHEADER();
    bi.biSize = (uint)Marshal.SizeOf(bi);
    bi.biWidth = (Int32)_width;
    bi.biHeight = (Int32)_height;
    bi.biPlanes = 1;
    bi.biBitCount = 24;
    bi.biCompression = 0; // 0 = BI_RGB
    bi.biSizeImage = _stride * _height;
    int result = AVIStreamSetFormat(psCompress, 0, ref bi, (Int32)bi.biSize);
    if (result != 0)
    {
        throw new Exception("Error in VideoStreamSetFormat: " + result.ToString());
    }
}
I found the problem. When taking screenshots of selected regions on the desktop, I had to make sure that the width and height are divisible by 2. It seems that the Xvid MPEG-4 codec does not accept just any image size.
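For illustration, a minimal sketch of that fix, assuming the selected region is available as a Rectangle before the capture is set up; selectedRegion and the helper are hypothetical names, while _width and _height match the fields used in SetFormat above.
// Round the capture dimensions down to even values before building the
// BITMAPINFOHEADER, since the Xvid encoder rejects odd widths/heights.
private static int RoundDownToEven(int value)
{
    return value & ~1;
}

// e.g. when the user finishes selecting a region:
_width = RoundDownToEven(selectedRegion.Width);
_height = RoundDownToEven(selectedRegion.Height);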
I had the same problem.
In my case I was setting the scale and rate to 0.
Make sure you are specifying the speed of the AVI correctly before calling that function.
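To illustrate what that means in code: in Video for Windows the frame rate of a stream is dwRate / dwScale in the AVISTREAMINFO structure passed to AVIFileCreateStream, so neither value may be zero. A rough sketch, assuming a managed struct that mirrors the Win32 AVISTREAMINFO layout; the field names follow the Win32 definition.
// Both dwScale and dwRate must be non-zero; frames per second = dwRate / dwScale.
var streamInfo = new AVISTREAMINFO();
streamInfo.fccType = 0x73646976;          // 'vids' FOURCC for a video stream
streamInfo.dwScale = 1;                   // time base
streamInfo.dwRate = 25;                   // 25 fps
streamInfo.dwSuggestedBufferSize = (uint)(_width * _height * 3);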

How to combine several JPEGs into a bigger JPEG in a lossless way programmatically (.NET)

I have several JPEG images and I want to combine them into one big JPEG image.
I can do that by creating a Bitmap and combining them there, but then if I save the result as a JPEG again the image will deteriorate.
So, is there any method that I can use to do that without losing quality while decoding/encoding?
In ACDSee program I saw an option to rotate JPEGs without quality loss, so there might be a way to combine several images without losing quality.
thanks
According to Wikipedia/Jpeg it could be possible if your images have sizes that are multiples of 16.
Wikipedia/Lossless editing/JPEG also talks about JPEGJoin that can combine several images.
There is nothing built into the .NET Framework, but you might be able to use the above tools from C#.
Almost all lossless JPEG tools are based on jpegtran from http://sylvana.net/jpegcrop/jpegtran/ (source code is available).
What you need is to extend the jpeg canvas and then use the still experimental "drop" functionality to put an image into another image.
I am not sure of this, but I think the images need to use the same quantization tables (~ encoding quality) in order to be joined losslessly.
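For illustration, here is roughly how the jpegtran approach could be driven from C#. The -crop and -drop option syntax follows the jpegcrop/jpegtran build linked above, and since the drop functionality is still experimental the exact arguments are an assumption; the file names and sizes are placeholders.
// Hedged sketch: extend the canvas of the first image, then drop the second
// image into it at the given offset, all without re-encoding.
using System.Diagnostics;

static void RunJpegtran(string arguments)
{
    var psi = new ProcessStartInfo("jpegtran.exe", arguments)
    {
        UseShellExecute = false,
        CreateNoWindow = true
    };
    using (var p = Process.Start(psi))
    {
        p.WaitForExit();
    }
}

// Grow the canvas of left.jpg to 512x256, then drop right.jpg at x=256, y=0 (assumed option syntax).
RunJpegtran("-crop 512x256+0+0 -outfile canvas.jpg left.jpg");
RunJpegtran("-drop +256+0 right.jpg -outfile combined.jpg canvas.jpg");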
Well, I have written the code, so I wanted to share it here :)
Please note that the code won't work in all situations, but it's fine for my use.
I am using the LibJpeg.Net library http://bitmiracle.com/libjpeg .
Also, there is a bug in the library (or a bug in me :) ) in that you can't get a component's height_in_blocks, which is why this code only works on square tiles.
I think the images need to have the same quantization tables, as Vlasta mentioned.
I think this code could be expanded to support that, but I didn't need such support.
And now here comes the code :)
public void CreateBigImage()
{
    const int iTileWidth = 256;
    const int iTileHeigth = 256;
    int iImageWidthInTiles = 2;
    int iImageHeigthInTiles = 2;
    // Open an image to read its header into the new image
    BitMiracle.LibJpeg.Classic.jpeg_decompress_struct objJpegDecompressHeader = new BitMiracle.LibJpeg.Classic.jpeg_decompress_struct();
    System.IO.FileStream objFileStreamHeaderImage = new System.IO.FileStream(GetImagePath(0), System.IO.FileMode.Open, System.IO.FileAccess.Read);
    objJpegDecompressHeader.jpeg_stdio_src(objFileStreamHeaderImage);
    objJpegDecompressHeader.jpeg_read_header(true);
    BitMiracle.LibJpeg.Classic.jvirt_array<BitMiracle.LibJpeg.Classic.JBLOCK>[] varrJBlockBigImage = new BitMiracle.LibJpeg.Classic.jvirt_array<BitMiracle.LibJpeg.Classic.JBLOCK>[10];
    for (int i = 0; i < 3; i++) // 3 components per image (YCbCr)
    {
        int iComponentWidthInBlocks = objJpegDecompressHeader.Comp_info[i].Width_in_blocks;
        int iComponentHeigthInBlocks = iComponentWidthInBlocks; // there is no Height_in_blocks in the library, so use width for height as well (won't work if the image is not square)
        varrJBlockBigImage[i] = BitMiracle.LibJpeg.Classic.jpeg_common_struct.CreateBlocksArray(iComponentWidthInBlocks * iImageWidthInTiles, iComponentHeigthInBlocks * iImageHeigthInTiles);
    }
    for (int iX = 0; iX < iImageWidthInTiles; iX++)
    {
        for (int iY = 0; iY < iImageHeigthInTiles; iY++)
        {
            WriteImageToJBlockArr(varrJBlockBigImage, GetImagePath(iY * iImageHeigthInTiles + iX), iX, iY);
        }
    }
    System.IO.FileStream objFileStreamMegaMap = System.IO.File.Create(GetImagePath(999));
    BitMiracle.LibJpeg.Classic.jpeg_compress_struct objJpegCompress = new BitMiracle.LibJpeg.Classic.jpeg_compress_struct();
    objJpegCompress.jpeg_stdio_dest(objFileStreamMegaMap);
    objJpegDecompressHeader.jpeg_copy_critical_parameters(objJpegCompress); // copies the critical parameters from the header image
    objJpegCompress.Image_height = iTileHeigth * iImageHeigthInTiles;
    objJpegCompress.Image_width = iTileWidth * iImageWidthInTiles;
    objJpegCompress.jpeg_write_coefficients(varrJBlockBigImage);
    objJpegCompress.jpeg_finish_compress();
    objFileStreamMegaMap.Close();
    objJpegDecompressHeader.jpeg_abort_decompress();
    objFileStreamHeaderImage.Close();
}
public void WriteImageToJBlockArr(BitMiracle.LibJpeg.Classic.jvirt_array<BitMiracle.LibJpeg.Classic.JBLOCK>[] varrJBlockNew, string strImagePath, int iTileX, int iTileY)
{
    BitMiracle.LibJpeg.Classic.jpeg_decompress_struct objJpegDecompress = new BitMiracle.LibJpeg.Classic.jpeg_decompress_struct();
    System.IO.FileStream objFileStreamImage = new System.IO.FileStream(strImagePath, System.IO.FileMode.Open, System.IO.FileAccess.Read);
    objJpegDecompress.jpeg_stdio_src(objFileStreamImage);
    objJpegDecompress.jpeg_read_header(true);
    BitMiracle.LibJpeg.Classic.jvirt_array<BitMiracle.LibJpeg.Classic.JBLOCK>[] varrJBlockOrg = objJpegDecompress.jpeg_read_coefficients();
    for (int i = 0; i < 3; i++) // 3 components per image (YCbCr)
    {
        int iComponentWidthInBlocks = objJpegDecompress.Comp_info[i].Width_in_blocks;
        int iComponentHeigthInBlocks = iComponentWidthInBlocks; // there is no Height_in_blocks in the library, so use width for height as well (won't work if the image is not square)
        for (int iY = 0; iY < iComponentHeigthInBlocks; iY++)
        {
            for (int iX = 0; iX < iComponentWidthInBlocks; iX++)
            {
                varrJBlockNew[i].Access(iY + iTileY * iComponentHeigthInBlocks, 1)[0][iX + iTileX * iComponentWidthInBlocks] = varrJBlockOrg[i].Access(iY, 1)[0][iX];
            }
        }
    }
    objJpegDecompress.jpeg_finish_decompress();
    objFileStreamImage.Close();
}
