I just got a real surprise when I loaded a jpg file, turned around and saved it with a quality of 100, and the size was almost 4x the original. To investigate further, I opened and saved without explicitly setting the quality, and the file size was exactly the same. I figured this was because nothing changed, so it's just writing the exact same bits back to a file. To test this assumption I drew a big fat line diagonally across the image and saved again without setting quality (this time I expected the file size to jump up because it would be "dirty"), but it decreased by ~10 KB!
At this point I really don't understand what is happening when I simply call Image.Save() without specifying a compression quality. How is the file size so close to the original (after the image is modified) when no quality is set, yet when I set quality to 100 (basically no compression) the file size is several times larger than the original?
I've read the documentation on Image.Save() and it lacks any detail about what happens behind the scenes. I've googled every which way I can think of, but I can't find any additional information that would explain what I'm seeing. I have been working for 31 hours straight, so maybe I'm missing something obvious ;0)
All of this has come about while implementing some library methods to save images to a database. I've overloaded our "SaveImage" method to allow explicitly setting a quality, and during my testing I came across the odd (to me) results explained above. Any light you can shed will be appreciated.
Here is some code that will illustrate what I'm experiencing:
string filename = @"C:\temp\image testing\hh.jpg";
string destPath = @"C:\temp\image testing\";

using (Image image = Image.FromFile(filename))
{
    ImageCodecInfo codecInfo = ImageUtils.GetEncoderInfo(ImageFormat.Jpeg);
    EncoderParameters parameters = new EncoderParameters(1);

    // Quality: 10
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 10L);
    image.Save(destPath + "10.jpg", codecInfo, parameters);

    // Quality: 75
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 75L);
    image.Save(destPath + "75.jpg", codecInfo, parameters);

    // Quality: 100
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 100L);
    image.Save(destPath + "100.jpg", codecInfo, parameters);

    // Default (no encoder parameters)
    image.Save(destPath + "default.jpg", ImageFormat.Jpeg);

    // Big line across the image
    using (Graphics g = Graphics.FromImage(image))
    using (Pen pen = new Pen(Color.Red, 50F))
    {
        g.DrawLine(pen, 0, 0, image.Width, image.Height);
    }

    image.Save(destPath + "big red line.jpg", ImageFormat.Jpeg);
}
public static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    // Find the encoder whose FormatID matches the requested format (requires System.Linq).
    return ImageCodecInfo.GetImageEncoders().ToList().Find(delegate(ImageCodecInfo codec)
    {
        return codec.FormatID == format.Guid;
    });
}
Using Reflector, it turns out that Image.Save() boils down to the GDI+ function GdipSaveImageToFile, with encoderParams NULL. So I think the question is what the JPEG encoder does when it gets a null encoderParams. 75% has been suggested here, but I can't find any solid reference.
EDIT: You could probably find out for yourself by running your program above for quality values 1..100 and comparing the results with the jpg saved at the default quality (using, say, fc.exe /B).
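For instance, a quick sketch of that experiment, done in-process rather than with fc.exe (the paths are the ones from the question; requires using System, System.Drawing, System.Drawing.Imaging, System.IO and System.Linq):

// Sketch: save once with no encoder parameters, then once per quality value,
// and report which explicit quality produces a byte-identical file.
using (Image image = Image.FromFile(@"C:\temp\image testing\hh.jpg"))
{
    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);

    string defaultPath = @"C:\temp\image testing\default.jpg";
    image.Save(defaultPath, ImageFormat.Jpeg); // no encoder parameters
    byte[] defaultBytes = File.ReadAllBytes(defaultPath);

    for (long q = 1; q <= 100; q++)
    {
        string path = $@"C:\temp\image testing\q{q}.jpg";
        using (var parameters = new EncoderParameters(1))
        {
            parameters.Param[0] = new EncoderParameter(
                System.Drawing.Imaging.Encoder.Quality, q);
            image.Save(path, jpegCodec, parameters);
        }
        if (File.ReadAllBytes(path).SequenceEqual(defaultBytes))
            Console.WriteLine("Default quality appears to be " + q);
    }
}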
IIRC it is 75%, but I don't recall where I read this.
I don't know much about the Image.Save method, but I can tell you that adding that fat line would logically reduce the size of the jpg image. This is due to the way a jpg is saved (and encoded).
The thick red line makes for a very simple and smaller encoding (if I remember correctly this is relevant mostly after the discrete cosine transform step), so the modified image can be stored using less data (bytes).
jpg encoding steps
Regarding the changes in size (without the added line), I'm not sure which image you reopened and resaved:
"To investigate further, I opened and saved without explicitly setting the quality, and the file size was exactly the same"
If you opened the old (original normal size) image and resaved it, then maybe the default compression and the original image compression are the same.
If you opened the new (4X larger) image and resaved it, then maybe the default compression for the save method is derived from the image (as it was when loaded).
Again, I don't know the save method, so I'm just throwing ideas (maybe they'll give you a lead).
When you save an image as a JPEG file with a quality level below 100%, you are introducing artefacts into the saved image, which are a side-effect of the compression process. This is why re-saving the image at 100% actually increases the size of your file beyond the original: ironically, there's more information present in the bitmap.
This is also why you should always save in a non-lossy format (such as PNG) if you intend to do any edits to your file afterwards; otherwise you'll be degrading the quality of the output through multiple lossy transformations.
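To see that generational loss concretely, here is a minimal sketch (the path and iteration count are arbitrary): each JPEG re-encode perturbs the pixels a little more, whereas the equivalent PNG round-trip loses nothing no matter how many times you repeat it.

// Sketch: re-encode the same image as JPEG five times and save the final generation.
// Each pass introduces fresh compression artefacts.
Image current = Image.FromFile(@"C:\temp\original.jpg");
for (int i = 0; i < 5; i++)
{
    // GDI+ requires the source stream to stay open for the image's lifetime,
    // so these streams are deliberately left undisposed (fine for a sketch).
    var ms = new MemoryStream();
    current.Save(ms, ImageFormat.Jpeg); // default quality, lossy every time
    ms.Position = 0;
    current = Image.FromStream(ms);
}
current.Save(@"C:\temp\generation5.jpg", ImageFormat.Jpeg);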
Related
Using WinForms, I have a small bmp file in the resources, saved as Lion.bmp, which is just black and white, so two colours. I need it to stay a two-colour bmp. When the user clicks a button, a dialog comes up asking where they want to save it. I want to copy the resource to that location but save it as a two-colour bmp, just like the original in resources. I have managed to save it as all manner of other files, and even as a 32-bit bitmap, but not as a two-colour bitmap.
First I thought I would just copy it byte for byte, but found that was not possible with my knowledge. I could probably do it if I knew how.
Next I thought I would just create a new bitmap and save it:
new Bitmap(Resources.Lion).Save(dialog.SelectedPath + "\\lion.bmp");
This compiled and I was happy, till I realised I was saving a PNG file called .bmp. Next I found that I can pass an image format, so I tried:
new Bitmap(Resources.Lion).Save(dialog.SelectedPath + "\\lion.bmp", ImageFormat.Bmp);
Again it compiles and saves a file, but even though the resource is monochrome, it now saves with a 32-bit colour depth and not 2 colours. I next tried to just write it out as a stream of bytes, which was my original plan:
File.WriteAllBytes((dialog.SelectedPath + "\\lion.bmp"), Resources.Lion);
That does not compile, as it says Resources.Lion is not a byte[], but I think it must be since it is in resources. Next I found the Encoder.ColorDepth setting:
Encoder.ColorDepth, 2
I think this would work, but I cannot work out how to use it, as every time I try it will not compile:
new Bitmap(Resources.Lion).Save(dialog.SelectedPath + "\\lion.bmp", ImageFormat.Bmp, (Encoder.ColorDepth,2));
I guess I need to ask those wiser than me what the syntax may be to get this to work, so that I can copy the monochrome bitmap from resources to a monochrome bitmap on disk.
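For reference, the EncoderParameter syntax being reached for would look something like the sketch below. Two caveats: ColorDepth is specified in bits per pixel (so 1 for a two-colour image, not 2), and in practice the built-in GDI+ BMP encoder is widely reported to ignore this parameter anyway, which is why the Clone() approach in the answer below is the reliable route.

// Sketch: explicit ColorDepth via EncoderParameters (the BMP encoder may ignore it).
ImageCodecInfo bmpCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Bmp.Guid);

using (var parameters = new EncoderParameters(1))
{
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.ColorDepth, 1L); // 1 bit per pixel = 2 colours
    new Bitmap(Resources.Lion).Save(
        dialog.SelectedPath + "\\lion.bmp", bmpCodec, parameters);
}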
In the end a colleague came up with the suggestion of cloning the resource "Lion" that I wanted. This has worked. The code I used is below:
Resources.Lion
    .Clone(new Rectangle(0, 0, Resources.Lion.Width, Resources.Lion.Height),
           System.Drawing.Imaging.PixelFormat.Format1bppIndexed)
    .Save(dialog.SelectedPath + "\\lion.bmp", ImageFormat.Bmp);
I would like to compress an image to jpeg using the CoreCompat library in ASP.NET Core 2. There is a quality parameter that I would like to change to get images with different qualities and file sizes. The problem is that with different values of the quality parameter, I get the same file size. What am I doing wrong? For quality I used the values 0, 50 and 100. Here is my code:
const int size = 500;
const long quality = 50L;

string inputPath = @"D:\Images\land.jpg";
string outputPath = $@"D:\Images\land_{quality}.jpg";

using (var image = new Bitmap(System.Drawing.Image.FromFile(inputPath)))
{
    var resized = new Bitmap(size, size);
    using (var graphics = Graphics.FromImage(resized))
    {
        graphics.CompositingQuality = CompositingQuality.HighSpeed;
        graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        graphics.CompositingMode = CompositingMode.SourceCopy;
        graphics.DrawImage(image, 0, 0, size, size);

        using (var output = File.Open(outputPath, FileMode.Create))
        {
            var qualityParamId = Encoder.Quality;
            var encoderParameters = new EncoderParameters(1);
            encoderParameters.Param[0] = new EncoderParameter(qualityParamId, quality);
            var codec = ImageCodecInfo.GetImageDecoders()
                .FirstOrDefault(c => c.FormatID == ImageFormat.Jpeg.Guid);
            resized.Save(output, codec, encoderParameters);
        }
    }
}
My input file land.jpg is 5 MB (8386x2229), and the output files land_0.jpg, land_50.jpg and land_100.jpg are all 62 KB with dimensions 500x500. Why do these output files have the same size?
The quality adjustment doesn't guarantee smaller file sizes; it only determines how "lossy" the compression is. However, ultimately a pixel is a pixel, and there must be one or more bytes to encode that pixel and its color. The only guaranteed way to reduce file size is to reduce the resolution, which means there are fewer pixels to encode and thus fewer bytes necessary to represent the image.
Image compression works in two ways. First, some attempt is made to reduce the overall bytes needed to encode the image. In types like GIF and PNG, this is done by limiting the colorspace. By limiting the image to a total of 256 colors, for example, instead of potentially millions, more pixels will share the same color and can therefore rely on the color index instead of a specific color. In the case of something like a JPEG, this is achieved by reducing the fine detail. The end result is largely the same: more pixels end up sharing the same colors, allowing better compression.
The second part of compression is achieved by using these shared pixel characteristics to reduce the overall bytes necessary to encode. Instead of each pixel having to include bytes to encode its particular color, you can just encode a single color and say this set of pixels uses that. It's the same way archives like zip work, in that it's using placeholders to share bytes that would otherwise have to be encoded individually.
The point of this is that compression ratios vary wildly and are 100% dependent on the image/data being compressed. In the case of an image, if there are a lot of colors and/or lots of fine detail, even turning the quality down to 0 may not actually achieve much. Some image information is discarded, but there's so much information to begin with that the end result is relatively insignificant. Again, 500x500 pixels is 500x500 pixels, regardless of the quality setting. Turning the quality down allows more aggressive compression, but depending on the source, even aggressive compression may not be able to remove many bytes.
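To illustrate how content-dependent this is, here is a small sketch (names and the 500x500 size are arbitrary): a flat-colour bitmap and a noise bitmap with identical dimensions, saved with the identical quality setting, produce dramatically different output sizes.

// Sketch: same dimensions, same quality, wildly different compressed sizes.
var codec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
var rng = new Random();

var flat = new Bitmap(500, 500);   // one solid colour: compresses extremely well
using (var g = Graphics.FromImage(flat)) g.Clear(Color.CornflowerBlue);

var noise = new Bitmap(500, 500);  // random noise: nearly incompressible
for (int y = 0; y < noise.Height; y++)
    for (int x = 0; x < noise.Width; x++)
        noise.SetPixel(x, y, Color.FromArgb(rng.Next(256), rng.Next(256), rng.Next(256)));

foreach (var (name, bmp) in new[] { ("flat", flat), ("noise", noise) })
using (var ms = new MemoryStream())
using (var p = new EncoderParameters(1))
{
    p.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 50L);
    bmp.Save(ms, codec, p);
    Console.WriteLine($"{name}: {ms.Length:N0} bytes"); // flat is a tiny fraction of noise
}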
To give some background on what this topic is about: I am trying to convert an image file to a byte[] by using a MemoryStream and returning memorystream.ToArray().
However, I have noticed that the image quality decreases after the conversion inputBitmap -> byte[] -> outputBitmap.
The outputBitmap has lower quality than the inputBitmap.
My code to convert the image to a byte[] is as follows:
MemoryStream mstream = new MemoryStream();
myImage.Save(mstream,System.Drawing.Imaging.ImageFormat.Jpeg);
byte[] buffer = mstream.ToArray();
And to convert from the byte[] back to an image:
MemoryStream mstream = new MemoryStream(buffer);
Image newImage = Image.FromStream(mstream);
Can somebody explain why this is and hopefully guide me to correct this problem?
Note that before, when I used the inputBitmap as my pictureBox.Image, it looked great in quality. But after converting from byte[] to outputBitmap, setting outputBitmap as my pictureBox.Image looks kind of blurred and low in quality.
A couple of things stand out for me.
You are saving to JPG rather than a lossless format like PNG.
You are not setting the quality of the compression used to save the image.
This means that you are probably compressing an image that has already been compressed thus losing even more information in the process.
I'd change to saving the file as PNG if I could; failing that, make sure you set the quality of the JPG to 100% before you save it. This will reduce the compression on the file to a minimum and hence minimise the data loss.
If you're still seeing a difference in quality, then the only thing I can think of that might explain it is a difference in resolution (number of pixels and/or colour depth) between the screenshot and the saved file. Make sure you set the target bitmap's size and colour depth to be the same as the source bitmap's.
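A sketch of both suggestions, reusing the question's myImage and mstream variables (and noting that even quality 100 is not truly lossless):

// Option 1: round-trip through PNG instead; the decoded pixels come back exactly.
myImage.Save(mstream, System.Drawing.Imaging.ImageFormat.Png);

// Option 2: if it must be JPEG, pass an explicit quality rather than the default.
var jpegCodec = ImageCodecInfo.GetImageEncoders()
    .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
using (var p = new EncoderParameters(1))
{
    p.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);
    myImage.Save(mstream, jpegCodec, p); // minimal, but still some, loss
}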
If I try to create a bitmap bigger than 19000 px I get the error: Parameter is not valid.
How can I work around this?
System.Drawing.Bitmap myimage= new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660 megabytes of memory.
19000 pixels square, at 32bpp, would require 11,552,000,000 bits (about 1.35 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in the System.Drawing.Bitmap would add to that. Going up to 20k pixels square at the same color depth would require 1.5 GB just for the raw pixel memory. In a single object, you are using 3/4 of the space reserved for the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using 3/4 of the max size of a single object.
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
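For a quick sanity check before calling the Bitmap constructor, the formula quoted above can be turned into a small helper (a sketch; the method name is made up):

// Sketch: raw pixel memory required, per the bit-depth * width * height / 8 formula.
static long RawBitmapBytes(int width, int height, PixelFormat format)
{
    int bitsPerPixel = Image.GetPixelFormatSize(format); // e.g. 32 for Format32bppArgb
    return (long)width * height * bitsPerPixel / 8;
}

// RawBitmapBytes(20000, 20000, PixelFormat.Format32bppArgb)
//   => 1,600,000,000 bytes, roughly 1.5 GB before any GDI+ overhead.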
Set the PixelFormat when you create the bitmap, like:
new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)
With the exact numbers above, it works for me. This may partly solve the problem.
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail. The main reasons are GDI+ limits in CreateBitmap. System.Drawing.Bitmap, internally, uses the GDI native API when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
Got this error when opening a TIF file. The problem was that GDI+ is not able to open CMYK; converting the file's colorspace from CMYK to RGB made the error go away.
Since I couldn't rely on that, I used the taglib library to read the image dimensions instead.
Code sample:
try
{
    var image = new System.Drawing.Bitmap(filePath);
    return string.Format("{0}px by {1}px", image.Width, image.Height);
}
catch (Exception)
{
    try
    {
        TagLib.File file = TagLib.File.Create(filePath);
        return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
    }
    catch (Exception)
    {
        return "";
    }
}
I am loading a JPG image from hard disk into a byte[]. Is there a way to resize the image (reduce resolution) without the need to put it in a Bitmap object?
thanks
There are always ways, but whether they are better... A JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to look into understanding the JPEG spec, creating some kind of parser, etc. There might be shortcuts that can be used without doing a full interpretation of the original jpg, but I think it would be a bad idea.
Oh, and not to forget there are different file formats for JPG (JFIF and EXIF) that you will need to understand...
I'd think very hard before avoiding objects that are specifically designed for the sort of thing you are trying to do.
A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, it does a fine job decoding .jpeg files. The result is a Bitmap object, you can't get around that.
And it supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side-effect of the Graphics.InterpolationMode property. It tries to improve the appearance of the reduced image by running the pixels through a filter. The Bicubic filter does an excellent job of it.
Looks great to the human eye, doesn't look so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. These blended pixel values however make it harder on the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower-quality filter. It produces a poorer image, but one that's easier to compress. I doubt you'll appreciate the result though.
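A sketch of what that tinkering looks like (the helper name is made up): swap HighQualityBicubic for NearestNeighbor and the re-encoded file typically shrinks, at a visible cost in quality.

// Sketch: resize with a caller-chosen interpolation filter.
static Bitmap Shrink(Image source, int width, int height, InterpolationMode mode)
{
    var result = new Bitmap(width, height);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = mode; // e.g. HighQualityBicubic vs NearestNeighbor
        g.DrawImage(source, 0, 0, width, height);
    }
    return result;
}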
Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent quality resizing involves using an interpolation/extrapolation algorithm; it can't just be "pick out every nth pixel", unless you can settle for nearest neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height)
{
    using (var inStream = new MemoryStream(data))
    using (var outStream = new MemoryStream())
    using (var bmp = System.Drawing.Bitmap.FromStream(inStream))
    using (var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero))
    {
        th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        return outStream.ToArray();
    }
}
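For completeness, calling it from within the class would look something like this (file names are arbitrary):

byte[] original = File.ReadAllBytes(@"C:\temp\photo.jpg");
byte[] smaller = Resize(original, 120, 90);
File.WriteAllBytes(@"C:\temp\photo_small.jpg", smaller);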