Lower image quality after converting from bytes - C#

To give some background on what this topic is about: I am trying to convert an image file to a byte[] by using a MemoryStream and returning memorystream.ToArray().
However, I have noticed that the image quality decreases after the conversion inputBitmap -> byte[] -> outputBitmap.
outputBitmap has a lower quality than inputBitmap.
My code to convert the image to byte[] is as follows:
MemoryStream mstream = new MemoryStream();
myImage.Save(mstream,System.Drawing.Imaging.ImageFormat.Jpeg);
byte[] buffer = mstream.ToArray();
and to convert from the byte[] back to an image:
MemoryStream mstream = new MemoryStream(buffer);
Image newImage = Image.FromStream(mstream);
Can somebody explain why this is and hopefully guide me to correcting this problem?
Note that when I use inputBitmap as my pictureBox.Image, it looks great in quality. But after converting from byte[] to outputBitmap, setting outputBitmap as my pictureBox.Image gives an image that is kind of blurred and low in quality.

A couple of things stand out for me.
You are saving to JPG rather than a lossless format like PNG.
You are not setting the quality of the compression used to save the image.
This means that you are probably compressing an image that has already been compressed thus losing even more information in the process.
I'd change to saving the file as PNG if I could; failing that, make sure you set the quality of the JPG to 100% before you save it. This will reduce the compression on the file to a minimum and hence minimise the data loss.
If you're still seeing a difference in quality then the only thing I can think of that might explain this is a difference in resolution (number of pixels and/or colour depth) between the screenshot and the saved file. Make sure you set the target bitmap size and colour depth to be the same as the source bitmap's.
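For the second point, here is a minimal sketch (my own illustration, reusing the question's myImage) of forcing the JPEG encoder to 100% quality through an EncoderParameter; for the first point, simply passing ImageFormat.Png to Save needs no quality parameter at all:
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

MemoryStream mstream = new MemoryStream();

// Find the built-in JPEG encoder.
ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

// Ask for quality 100 - the least compression the encoder will apply (still lossy).
using (EncoderParameters p = new EncoderParameters(1))
{
    p.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);
    myImage.Save(mstream, jpegCodec, p);
}

byte[] buffer = mstream.ToArray();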

Related

Decrease bitmap size after GreyScaling

I'm taking a screenshot of the screen, serializing the bitmap and sending it over the network. Overall this ends up being ~26KB of data transferred.
I'm trying to make this smaller. One thing I'm trying to do is converting the bitmap to greyscale. This is the function I'm using.
Public Function ConvertGreyscale(original As Bitmap) As Bitmap
    Dim NewBitmap As New Bitmap(original.Width, original.Height)
    Dim g As Graphics = Graphics.FromImage(NewBitmap)
    Dim attributes As New ImageAttributes
    ' Standard luminance weights; the same weighted value ends up in R, G and B.
    attributes.SetColorMatrix(New ColorMatrix(New Single()() {
        New Single() {0.3F, 0.3F, 0.3F, 0, 0},
        New Single() {0.59F, 0.59F, 0.59F, 0, 0},
        New Single() {0.11F, 0.11F, 0.11F, 0, 0},
        New Single() {0, 0, 0, 1, 0},
        New Single() {0, 0, 0, 0, 1}}))
    g.DrawImage(original, New Rectangle(0, 0, original.Width, original.Height),
                0, 0, original.Width, original.Height, GraphicsUnit.Pixel, attributes)
    g.Dispose()
    Return NewBitmap
End Function
This works fine, and I end up getting a greyscale image. The problem is that the size of the bitmap doesn't change. It's still 26KB, even though it's greyscale. I'm thinking that the new bitmap being created is just a regular 32bppArgb bitmap with a greyscale image stuck into it.
I tried doing:
Dim NewBitmap As New Bitmap(original.Width, original.Height, PixelFormat.Format16bppGrayScale)
but I end up getting an "out of memory" error.
What am I doing wrong? Also, are there any other ways to minimize the size of my bitmap?
EDIT:
So in an effort to take baby steps to tackle this problem, I'm using this code to convert the 32bpp bitmap to a 16bpp bitmap:
Dim clone = New Bitmap(tmpImg.Width, tmpImg.Height, Imaging.PixelFormat.Format16bppRgb565)
Using gr = Graphics.FromImage(clone)
    gr.DrawImage(tmpImg, New Rectangle(0, 0, clone.Width, clone.Height))
End Using
I tried Format16bppGrayScale and Format16bppRgb555, but both of those cause "out of memory" errors. The only one that seems to work is Format16bppRgb565.
Regardless, I'm doing my packet sniffing again, and changing the format to 16bppRgb565 INCREASES the size of the image packet from ~26KB to 29KB. So changing to this format seems to increase the size. I don't understand ;_;
EDIT2:
I've found multiple ways to convert the image to greyscale now and/or change the pixel format of the bitmap to something smaller than 32bpp. Unfortunately, none of this seems to decrease the size of the serialized bitmap when it's sent over the network. Some things even seem to increase the size. Not sure what I can do.
I recommend checking out AForge's AForge.Imaging.ColorReduction.ColorImageQuantizer.
It reduced a screenshot of an SO homepage from 96kB to 33kB (going to 16 colors) while maintaining readability much better than an equally reduced JPG. Reducing to 32 or 64 colors left almost no artifacts other than color changes, while still staying at 48kB.
It does take a few seconds for processing, though..
Here is a piece of code that uses the AForge libraries.
using System.Drawing;
using AForge.Imaging.ColorReduction;

void reduceColors(string inFile, string outFile, int numColors)
{
    using (Bitmap image = new Bitmap(inFile))
    {
        // The median-cut quantizer derives a reduced palette from the image itself.
        ColorImageQuantizer ciq = new ColorImageQuantizer(new MedianCutQuantizer());
        Color[] colorTable = ciq.CalculatePalette(image, numColors);  // palette, not used below
        using (Bitmap newImage = ciq.ReduceColors(image, numColors))
            newImage.Save(outFile);
    }
}
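For example, a call like reduceColors(@"C:\temp\screen.png", @"C:\temp\screen16.png", 16) (paths made up) produces the 16-color version described above.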
If you're interested I also have a home-grown piece of code that results in 40% of the original size with perfect text, albeit with a little color shift; it is very fast.
Converting to greyscale doesn't do much by itself, because all you are doing is changing the RGB values of the pixels. Unfortunately many of the greyscale pixel formats are not fully supported, though there are some open-source imaging libraries which will do this.
A significant reduction can be achieved using JPG and some quality reduction. 26KB for a full-size (?) screenshot doesn't sound all that large (or is it only part of a screen?), and we don't know what your desired target size is. Here is how to reduce quality via JPG:
Dim jpgEncoder As ImageCodecInfo = GetJPGEncoder()
Dim myEncoder As System.Drawing.Imaging.Encoder = System.Drawing.Imaging.Encoder.Quality
Dim jEncoderParams As New EncoderParameters(1)

' set the quality (100& here)
jEncoderParams.Param(0) = New EncoderParameter(myEncoder, 100&)

' dont do this...creates a false baseline for size tests
'Dim bmp As Bitmap = My.Resources.testimage

Using ms As New System.IO.MemoryStream(),
      fs As New FileStream("C:\Temp\zser.bin", FileMode.Create),
      bmp As New Bitmap(My.Computer.Screen.WorkingArea.Width,
                        My.Computer.Screen.WorkingArea.Height),
      g As Graphics = Graphics.FromImage(bmp)

    ' get screen in (BMP format)
    g.CopyFromScreen(0, 0, 0, 0, My.Computer.Screen.WorkingArea.Size)

    ' save image to memstream in desired format
    bmp.Save(ms, Imaging.ImageFormat.Png)
    ' use jpgEncoder to control JPG quality/compression
    'bmp.Save(ms, jpgEncoder, jEncoderParams)

    ms.Position = 0
    Dim bf As New BinaryFormatter
    bf.Serialize(fs, ms)    ' serialize memstr to file str

    jEncoderParams.Dispose()
End Using
Metrics from a screen capture (ACTUAL size depends on screen size and what is on it; the size differences are what is important):
Method    memstr size    file size after BF
BMP       5,568,054      5438 (same)
PNG       266,624        261k
JPG 100   634,861        1025
JPG 90    277,575        513
The content of the image plays a role in determining the sizes, etc. In this case, PNG seems the best size/quality balance; you'd have to compress the JPG quite a bit to get the same size, but with much less quality.
An actual photo type image will result in much larger sizes: 19MB for a 2500x1900 image and almost 13MB for a PNG, so test using actual images.
So eventually I've figured out the problem.
Essentially I had to binary-serialize a bitmap and transmit it over a network stream, and I was trying to decrease the size of the bitmap to make the transfer faster.
.NET's Image class seems to only support bitmaps in certain pixel formats. So no matter what I did to the image (greyscaling, lossy compression, whatever), the size would be the same, because I wasn't changing the pixel format of the image; I was just moving pixels around and changing their colors.
From what I know, there is no native class for JPGs or PNGs that I could serialize, so I was forced to use the Image class with its pixel formats.
One thing I tried was to convert the compressed, greyscaled image into a JPEG, convert that into a Byte(), and then gzip and serialize that Byte(). The problem is that this resulted in a 2x increase in network data being transmitted, for some reason.
An interesting quick note: when you serialize an Image object, it is converted to PNG format (compressed), according to the network traffic I sniffed anyway. So there is some optimization that has been done by Microsoft to make image serialization efficient.
So I was pretty much forced to use the Image class and just somehow figure out how to convert my image into the 1bpp or 8bpp pixel formats. The 16bpp formats (greyscale, for instance) are randomly "unsupported" by Microsoft, just "don't work", and apparently never will. Of course MSDN doesn't mention any of this.
Converting my image to 8bpp or lower was impossible, because I would get "out of memory" errors for unknown reasons, or something about not being allowed to draw on indexed images.
The solution I finally found was the CopyToBpp() function from here:
http://www.wischik.com/lu/programmer/1bpp.html
That function, along with the many APIs in it, allowed me to quickly convert my 32bpp image into an 8bpp or even 1bpp image, which could then be easily and efficiently serialized with a binary formatter and sent over my network stream.
So now it works.
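For reference, here is a rough C# sketch of the same idea. This is not the CopyToBpp() function from the linked article, just a hand-rolled 32bpp-to-8bpp greyscale conversion using LockBits and an explicit palette, so treat it as illustrative only:
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static Bitmap ToGray8bpp(Bitmap source)
{
    var dest = new Bitmap(source.Width, source.Height, PixelFormat.Format8bppIndexed);

    // Indexed formats need an explicit palette; build a 256-entry greyscale ramp.
    ColorPalette palette = dest.Palette;
    for (int i = 0; i < 256; i++)
        palette.Entries[i] = Color.FromArgb(i, i, i);
    dest.Palette = palette;

    var rect = new Rectangle(0, 0, source.Width, source.Height);
    BitmapData src = source.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData dst = dest.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
    try
    {
        var srcRow = new byte[src.Stride];
        var dstRow = new byte[dst.Stride];
        for (int y = 0; y < source.Height; y++)
        {
            Marshal.Copy(src.Scan0 + y * src.Stride, srcRow, 0, src.Stride);
            for (int x = 0; x < source.Width; x++)
            {
                // Pixel layout is BGRA; use the usual 0.3/0.59/0.11 luminance weights.
                byte b = srcRow[x * 4], g = srcRow[x * 4 + 1], r = srcRow[x * 4 + 2];
                dstRow[x] = (byte)(0.11 * b + 0.59 * g + 0.30 * r);
            }
            Marshal.Copy(dstRow, 0, dst.Scan0 + y * dst.Stride, dst.Stride);
        }
    }
    finally
    {
        source.UnlockBits(src);
        dest.UnlockBits(dst);
    }
    return dest;
}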

Save greyscale image as color JPEG

I'm handling a lot of different image formats within my application. Luckily I was able to outsource the biggest part of dealing with image formats to C#/WPF. However, I've got an issue when saving images:
I'd like to save my images as JPEGs using JpegBitmapEncoder with an RGB profile. This works all fine for color image formats. However, when handling images of, e.g., the Gray16 or Gray8 formats, the resulting JPEGs will have a grayscale profile. I really do need an RGB profile for my JPEGs!
I know that I could just create a new WriteableBitmap in Bgra32, copy the data to it and use it to create the JPEG. This, however, means that I would need to handle the different image formats myself. More importantly, I believe that this detour would be quite inefficient computationally (I need to convert a lot of image data!).
Also, I can't use any solutions outside of C#/WPF (like ImageMagick).
I hope that there's a way to solve this easily and efficiently. I found no way to configure JpegBitmapEncoder for this, and I had a try with ColorConvertedBitmap, but to no avail!
Any ideas?
The hint about FormatConvertedBitmap gave me the solution to my problem! Thanks RononDex!
FormatConvertedBitmap convertImg = new FormatConvertedBitmap(img, PixelFormats.Bgra32, null, 0);
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(convertImg));
encoder.Save(stream);
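In this snippet, img is presumably the original Gray8/Gray16 source image from the question, and stream is whatever output stream the JPEG should be written to.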

Getting a LSB of a BMP binary image

I was already able to convert a BMP image into a binary memory stream, but I'm confused about detecting the LSB in pixel values.
I have the byte[] stream as '10101011101010101010010' ... .. ..
First, is there a way I can filter this binary stream into pixel values and detect the LSB?
If you want to read / write the least significant bit to use the bitmap to hide information, you will need to load the BMP data into an Image and then access the pixel data using GetPixel(). The BMP file itself might use RLE or some other compression, so you cannot access the pixel data directly.
As for detecting LSB data in an image, it largely depends on the algorithm used; some are harder to detect than others. Do you have a description of the LSB variant that might be in that image?
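A minimal sketch of the GetPixel() approach (the method and file name below are my own illustration; it only collects the raw low bits, and interpreting them depends entirely on the hiding algorithm used):
using System.Collections.Generic;
using System.Drawing;

// Walk the pixels and collect the least significant bit of each colour channel.
// GetPixel() is slow; LockBits would be the faster route for large images.
static List<int> ExtractLsbBits(string path)
{
    var bits = new List<int>();
    using (var bmp = new Bitmap(path))
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            for (int x = 0; x < bmp.Width; x++)
            {
                Color c = bmp.GetPixel(x, y);
                bits.Add(c.R & 1);
                bits.Add(c.G & 1);
                bits.Add(c.B & 1);
            }
        }
    }
    return bits;
}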

What quality level does Image.Save() use for jpeg files?

I just got a real surprise when I loaded a jpg file, turned around and saved it with a quality of 100, and the size was almost 4x the original. To further investigate, I opened and saved it without explicitly setting the quality and the file size was exactly the same. I figured this was because nothing changed, so it's just writing the exact same bits back to a file. To test this assumption I drew a big fat line diagonally across the image and saved again without setting quality (this time I expected the file size to jump up because it would be "dirty"), but it decreased by ~10Kb!
At this point I really don't understand what is happening when I simply call Image.Save() without specifying a compression quality. How is the file size so close (after the image is modified) to the original size when no quality is set, yet when I set quality to 100 (basically no compression) the file size is several times larger than the original?
I've read the documentation on Image.Save() and it lacks any detail about what is happening behind the scenes. I've googled every which way I can think of, but I can't find any additional information that would explain what I'm seeing. I have been working for 31 hours straight, so maybe I'm missing something obvious ;0)
All of this has come about while I implement some library methods to save images to a database. I've overloaded our "SaveImage" method to allow explicitly setting a quality and during my testing I came across the odd (to me) results explained above. Any light you can shed will be appreciated.
Here is some code that will illustrate what I'm experiencing:
string filename = @"C:\temp\image testing\hh.jpg";
string destPath = @"C:\temp\image testing\";

using (Image image = Image.FromFile(filename))
{
    ImageCodecInfo codecInfo = ImageUtils.GetEncoderInfo(ImageFormat.Jpeg);

    // Set the quality
    EncoderParameters parameters = new EncoderParameters(1);

    // Quality: 10
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 10L);
    image.Save(destPath + "10.jpg", codecInfo, parameters);

    // Quality: 75
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 75L);
    image.Save(destPath + "75.jpg", codecInfo, parameters);

    // Quality: 100
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 100L);
    image.Save(destPath + "100.jpg", codecInfo, parameters);

    // default
    image.Save(destPath + "default.jpg", ImageFormat.Jpeg);

    // Big line across image
    using (Graphics g = Graphics.FromImage(image))
    {
        using (Pen pen = new Pen(Color.Red, 50F))
        {
            g.DrawLine(pen, 0, 0, image.Width, image.Height);
        }
    }
    image.Save(destPath + "big red line.jpg", ImageFormat.Jpeg);
}

public static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    return ImageCodecInfo.GetImageEncoders().ToList().Find(delegate(ImageCodecInfo codec)
    {
        return codec.FormatID == format.Guid;
    });
}
Using Reflector, it turns out Image.Save() boils down to the GDI+ function GdipSaveImageToFile with encoderParams set to NULL. So I think the question is what the JPEG encoder does when it gets null encoderParams. 75% has been suggested here, but I can't find any solid reference.
EDIT: You could probably find out for yourself by running your program above for quality values of 1..100 and comparing the results with the jpg saved with the default quality (using, say, fc.exe /B).
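A rough sketch of that experiment (the helper below is my own, untested suggestion; it reuses the same GDI+ calls as the question's code and made-up file names):
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

// Save the same image at every quality from 1 to 100 and compare each file
// byte-for-byte with the one produced when no quality is specified at all.
static void FindDefaultJpegQuality(string srcFile, string workDir)
{
    Directory.CreateDirectory(workDir);
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

    using (Image image = Image.FromFile(srcFile))
    {
        string defaultFile = Path.Combine(workDir, "default.jpg");
        image.Save(defaultFile, ImageFormat.Jpeg);          // no quality set
        byte[] reference = File.ReadAllBytes(defaultFile);

        for (long q = 1; q <= 100; q++)
        {
            string candidate = Path.Combine(workDir, q + ".jpg");
            using (var p = new EncoderParameters(1))
            {
                p.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, q);
                image.Save(candidate, jpegCodec, p);
            }
            if (File.ReadAllBytes(candidate).SequenceEqual(reference))
                Console.WriteLine("Default quality appears to be " + q);
        }
    }
}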
IIRC it is 75%, but I don't recall where I read this.
I don't know much about the Image.Save method, but I can tell you that adding that fat line would logically reduce the size of the jpg image. This is due to the way a jpg is saved (and encoded).
The thick line makes for a very simple and smaller encoding (if I remember correctly this is relevant mostly after the discrete cosine transform), so the modified image can be stored using less data (bytes).
jpg encoding steps
Regarding the changes in size (without the added line), I'm not sure which image you reopened and resaved:
"To further investigate, I opened and saved it without explicitly setting the quality and the file size was exactly the same."
If you opened the old (original, normal-size) image and resaved it, then maybe the default compression and the original image's compression are the same.
If you opened the new (4x larger) image and resaved it, then maybe the default compression for the save method is derived from the image (as it was when loaded).
Again, I don't know the save method, so I'm just throwing ideas (maybe they'll give you a lead).
When you save an image as a JPEG file with a quality level of <100%, you are introducing artefacts into the saved-off image, which are a side-effect of the compression process. This is why re-saving the image at 100% is actually increasing the size of your file beyond the original - ironically there's more information present in the bitmap.
This is also why you should always attempt to save in a non-lossy format (such as PNG) if you intend to do any edits to your file afterwards, otherwise you'll be affecting the quality of the output through multiple lossy transformations.

Image manipulation in C#

I am loading a JPG image from hard disk into a byte[]. Is there a way to resize the image (reduce the resolution) without having to put it in a Bitmap object?
Thanks.
There are always ways, but whether they are better... A JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to look into understanding the JPEG spec, creating some kind of parser, etc. It might be that there are shortcuts that can be used without needing to do a full interpretation of the original JPG, but I think it would be a bad idea.
Oh, and not to forget, there are apparently different file formats for JPG (JFIF and EXIF) that you will need to understand...
I'd think very hard before avoiding objects that are specifically designed for the sort of thing you are trying to do.
A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, and it does a fine job of decoding .jpeg files. The result is a Bitmap object; you can't get around that.
And it supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side effect of the Graphics.InterpolationMode. It tries to improve the appearance of the reduced image by running the pixels through a filter. The Bicubic filter does an excellent job of it.
Looks great to the human eye, doesn't look so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. These blended pixel values however make it harder on the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower-quality filter. That produces a poorer image, but one that's easier to compress. I doubt you'll appreciate the result though.
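For illustration, a minimal sketch of picking the filter explicitly (my own example, not from the original code):
using System.Drawing;
using System.Drawing.Drawing2D;

// Resize with an explicitly chosen interpolation filter. HighQualityBicubic looks
// best but compresses worse; NearestNeighbor compresses better but looks blocky.
static Bitmap ResizeWithFilter(Image source, int width, int height, InterpolationMode mode)
{
    var result = new Bitmap(width, height);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = mode;
        g.DrawImage(source, new Rectangle(0, 0, width, height));
    }
    return result;
}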
Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent-quality resizing involves using an interpolation/extrapolation algorithm; it can't just be "pick out every nth pixel", unless you can settle for nearest neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height)
{
    var inStream = new MemoryStream(data);
    var outStream = new MemoryStream();

    var bmp = System.Drawing.Bitmap.FromStream(inStream);
    var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero);
    th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);

    return outStream.ToArray();
}
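A usage sketch, assuming a made-up file path and a call from within the same class:
byte[] original = File.ReadAllBytes(@"C:\temp\photo.jpg");
byte[] smaller = Resize(original, 640, 480);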
