Reduce Quality of Image/Stream Before Saving - c#

I'm trying to take an input stream (a zip file of images) and extract each file, but I must reduce the quality of each image before it is saved (if quality < 100). I have tried the following, but it never compresses the image:
public void UnZip(Stream inputStream, string destinationPath, int quality = 80) {
    using (var zipStream = new ZipInputStream(inputStream)) {
        ZipEntry entry;
        while ((entry = zipStream.GetNextEntry()) != null) {
            var directoryPath = Path.GetDirectoryName(destinationPath + Path.DirectorySeparatorChar + entry.Name);
            var fullPath = directoryPath + Path.DirectorySeparatorChar + Path.GetFileName(entry.Name);
            // Create the stream to unzip the file to
            using (var stream = new MemoryStream()) {
                // Write the zip stream to the stream
                if (entry.Size != 0) {
                    var size = 2048;
                    var data = new byte[2048];
                    while (true) {
                        size = zipStream.Read(data, 0, data.Length);
                        if (size > 0)
                            stream.Write(data, 0, size);
                        else
                            break;
                    }
                }
                // Compress the image and save it to the stream
                if (quality < 100) {
                    using (var image = Image.FromStream(stream)) {
                        var info = ImageCodecInfo.GetImageEncoders();
                        var @params = new EncoderParameters(1);
                        @params.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                        image.Save(stream, info[1], @params);
                    }
                }
                // Save the stream to disk
                using (var fs = new FileStream(fullPath, FileMode.Create)) {
                    stream.WriteTo(fs);
                }
            }
        }
    }
}
I'd appreciate it if someone could show me what I'm doing wrong. Also, any advice on tidying it up would be appreciated, as the code's grown a bit ugly. Thanks

You really shouldn't be using the same stream to save the compressed image. The MSDN documentation clearly says: "Do not save an image to the same stream that was used to construct the image. Doing so might damage the stream." (MSDN Article on Image.Save(...))
using (var compressedImageStream = new MemoryStream())
{
    image.Save(compressedImageStream, info[1], @params);
}
Also, what file format are you encoding into? You haven't specified. You're just getting the second encoder found. You shouldn't rely on the order of the results. Search for a specific codec instead:
var encoder = ImageCodecInfo.GetImageEncoders().Where(x => x.FormatID == ImageFormat.Jpeg.Guid).SingleOrDefault();
... and don't forget to check whether the encoder actually exists on your system:
if (encoder != null)
{ .. }
The Quality parameter doesn't have meaning for all file formats. I assume you might be working with JPEGs? Also, keep in mind that 100% JPEG Quality != Lossless Image. You can still encode with Quality = 100 and reduce space.
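Putting those suggestions together, here's a minimal sketch of the re-encode step (assuming JPEG output; the CompressToJpeg helper name is mine, not from your code):
private static MemoryStream CompressToJpeg(Stream source, long quality)
{
    // Find the JPEG encoder explicitly instead of relying on array order
    var encoder = ImageCodecInfo.GetImageEncoders()
        .SingleOrDefault(x => x.FormatID == ImageFormat.Jpeg.Guid);
    if (encoder == null)
        throw new NotSupportedException("No JPEG encoder available.");

    using (var image = Image.FromStream(source))
    using (var encoderParams = new EncoderParameters(1))
    {
        encoderParams.Param[0] = new EncoderParameter(Encoder.Quality, quality);
        // Save to a *different* stream than the one the image was read from
        var compressed = new MemoryStream();
        image.Save(compressed, encoder, encoderParams);
        compressed.Position = 0;
        return compressed;
    }
}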

There is no code that compresses the image after you've extracted it from the zip stream. All you seem to be doing is getting the unzipped data into a MemoryStream, then writing the image back to that same stream based on the quality setting (which may or may not compress the image, depending on the codec). I would first recommend not writing to the same stream you're reading from. Also, what "compression" you get out of the Encoder.Quality property depends on the type of image, and you haven't given any detail on that. If the image type supports quality-based compression and the incoming image's quality is already lower than 100, you won't get any reduction in size. Long story short, you haven't provided enough information for anyone to give you a real answer.

Related

How do I read and write XMP metadata in C#?

I have this method for resizing images, and I have managed to copy all of the metadata into the new image except for the XMP data. I can only find topics on how to manage the XMP part in C++, but I need it in C#. The closest I've gotten is the xmp-sharp project, which is based on an old port of Adobe's SDK, but I can't get that working for me. The MetadataExtractor project gives me the same results - that is, file format/encoding not supported. I've tried this with .jpg, .png and .tif files.
Is there no good way of reading and writing XMP in C#?
Here is my code if it's of any help (omitting all irrelevant parts):
public Task<Stream> Resize(Size size, Stream image)
{
    using (var bitmap = Image.FromStream(image))
    {
        var newSize = new Size(size.Width, size.Height);
        var ms = new MemoryStream();
        using (var bmPhoto = new Bitmap(newSize.Width, newSize.Height, PixelFormat.Format24bppRgb))
        {
            // This saves all metadata except XMP
            foreach (var id in bitmap.PropertyIdList)
                bmPhoto.SetPropertyItem(bitmap.GetPropertyItem(id));

            // Trying to use xmp-sharp for the XMP part
            try
            {
                IXmpMeta xmp = XmpMetaFactory.Parse(image);
            }
            catch (XmpException e)
            {
                // Here, I always get "Unsupported Encoding, XML parsing failure"
            }

            // Trying to use MetadataExtractor for the XMP part
            try
            {
                var xmpDirs = ImageMetadataReader.ReadMetadata(image).Where(d => d.Name == "XMP");
            }
            catch (Exception e)
            {
                // Here, I always get "File format is not supported"
            }

            // more code to modify image and save to stream
        }
        ms.Position = 0;
        return Task.FromResult<Stream>(ms);
    }
}
The reason you get "File format is not supported" is because you already consumed the image from the stream when you called Image.FromStream(image) in the first few lines.
If you don't do that, you should find that you can read out the XMP just fine.
var xmp = ImageMetadataReader.ReadMetadata(stream).OfType<XmpDirectory>().FirstOrDefault();
If your stream is seekable, you might be able to seek back to the origin (using the Seek method, or by setting Position to zero).
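For example, a sketch of that rewind in your Resize method (assuming the stream supports seeking):
image.Position = 0; // rewind: Image.FromStream left the cursor at the end
// XmpDirectory lives in the MetadataExtractor.Formats.Xmp namespace
var xmpDir = ImageMetadataReader.ReadMetadata(image)
                                .OfType<XmpDirectory>()
                                .FirstOrDefault();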

Writing the stream of a png back into another png increases the size. Why?

Can someone explain to me why writing the stream from a PNG image back into another PNG file increases the size of the final output?
Original file: size (28.6 KB), size on disk (32.0 KB)
Output file: size (32.1 KB), size on disk (36.0 KB).
The code for doing this operation is pretty straightforward:
private void button1_Click(object sender, EventArgs e)
{
    var result = openFileDialog1.ShowDialog();
    if (result == DialogResult.OK)
    {
        var file = openFileDialog1.FileName;
        var stream = new FileStream(file, FileMode.Open);
        var newImg = Image.FromStream(stream);
        newImg.Save("newPNG.png", ImageFormat.Png);
        stream.Close();
    }
}
How can I avoid this? I would like the final image to have the exact same size as the original one.
Later edit: I uploaded the original image if anyone wants to try it out:
[cat image]
From Wikipedia:
"There are five possible filter types that can be specified separately on each scan line and several possible strategies for searching LZ77 matches. Thus, there are a very large number of different combinations for how the image can be compressed. Which combination gives the best compression will depend on the individual image's properties."
That is, there are many ways to compress a PNG, and apparently in your case the original file was compressed differently from .NET's default. I'm not sure how much you can affect .NET's output, but there's an overload of Image.Save that takes EncoderParameters. You might want to look at that.
I'd bet this is due to GDI+ saving the image with different settings, and possibly a different encoding.
You can retain the same size if you create a new FileStream, read bytes from the first one, and write them into the second one, thus copying the file:
stream.Position = 0;
using (FileStream fs = new FileStream("newPNG.png", FileMode.Create))
{
    byte[] byteBuffer = new byte[8192];
    int totalBytesRead = 0;
    while (totalBytesRead < stream.Length)
    {
        int bytesRead = stream.Read(byteBuffer, 0, byteBuffer.Length);
        fs.Write(byteBuffer, 0, bytesRead);
        totalBytesRead += bytesRead;
    }
}
stream.Position = 0;
I set stream.Position to zero both before and after because I don't know where you'll be using this code. Setting it to zero will make the FileStream start reading from the beginning of the file.
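Incidentally, on .NET 4 and later the manual read/write loop can be replaced with Stream.CopyTo, which does the same thing:
stream.Position = 0;
using (FileStream fs = new FileStream("newPNG.png", FileMode.Create))
{
    stream.CopyTo(fs); // copies the remaining bytes from stream to fs
}
stream.Position = 0;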

ImageMagick Pdf to image conversion is too slow

I'm using ImageMagick.NET for generating an image from a PDF. It's working, but the conversion process is too slow. Code:
public void ProcessRequest(HttpContext context)
{
    if (context.Request["id"] != null)
    {
        string id = context.Request["id"].ToString();
        MagickReadSettings settings = new MagickReadSettings();
        settings.Density = new MagickGeometry(300, 300);
        using (MagickImageCollection images = new MagickImageCollection())
        {
            images.Read(System.Web.HttpContext.Current.Server.MapPath(string.Format("~/Reciepts/order{0}.pdf", id)), settings);
            MagickImage vertical = images.AppendVertically();
            using (var memoryStream = new MemoryStream())
            {
                vertical.ToBitmap().Save(memoryStream, ImageFormat.Jpeg);
                var d = memoryStream.GetBuffer();
                context.Response.Clear();
                context.Response.ContentType = "image/jpeg";
                context.Response.BinaryWrite(d);
                context.Response.End();
            }
        }
    }
}
Where can I improve?
You are using Magick.NET, not ImageMagick.NET.
It is not necessary to create a bitmap before you send it to the output stream. You can just do this:
using (MagickImage vertical = images.AppendVertically())
{
    vertical.Format = MagickFormat.Jpeg;
    vertical.Write(context.Response.OutputStream);
}
And maybe you should cache the result to a file?
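For example, a rough sketch of that caching idea (the ~/Cache folder and file naming here are hypothetical; adjust to your application's layout):
// Hypothetical cache location for the rendered receipt
var cachePath = context.Server.MapPath(string.Format("~/Cache/order{0}.jpg", id));
if (!File.Exists(cachePath))
{
    using (MagickImage vertical = images.AppendVertically())
    {
        vertical.Format = MagickFormat.Jpeg;
        vertical.Write(cachePath); // render once, reuse on later requests
    }
}
context.Response.ContentType = "image/jpeg";
context.Response.WriteFile(cachePath);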
If you've decided to use Magick.NET, the method itself is not wrong. The first answer gives you a using statement for MagickImage, but that only makes a difference of a few milliseconds. The slow line is this one:
images.Read(System.Web.HttpContext.Current.Server.MapPath(string.Format("~/Reciepts/order{0}.pdf", id)), settings);
because of the settings object's properties. This property says the image must be rendered at 300 dpi, and 300 may be very heavy for your CPU:
settings.Density = new MagickGeometry(300, 300);
You can try a lower density instead of 300 dpi; lower density is faster:
settings.Density = new Density(72, 72);
I think there must be a faster way to create an image from a PDF file, though. Magick.NET uses Ghostscript to generate images from PDFs, and Ghostscript is slow and sometimes fails to render complicated (layered) PDFs.

RAM saving when loading many Bitmap objects

Is there any difference when you load the same image into a Bitmap from a BMP or from a PNG (or another format)? Does the original image format influence the Bitmap object's size in RAM?
Is there a way to archive Bitmap objects in order to make them consume less RAM?
The amount of RAM a loaded Bitmap uses is determined by its pixel dimensions and pixel format, not by the format of the file it came from (the file format only affects the size on disk; obviously, certain formats result in smaller files than others).
One way to archive bitmaps, if you need to keep them as bitmaps, is simply to zip them. Alternatively, convert them to another image format that includes compression (ideally lossless compression, so not JPEG). Sorry, this explains archiving the files, not conserving live memory usage.
To stop Bitmap objects from using memory, you will need to let go of the item in memory and reload it when you want to use it again. Alternatively, though I've no experience with this, look into the new .NET 4 memory-mapped files.
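A minimal sketch of the memory-mapped-file idea (assuming .NET 4's System.IO.MemoryMappedFiles; the file name is hypothetical):
using (var mmf = MemoryMappedFile.CreateFromFile("large.bmp", FileMode.Open))
using (var view = mmf.CreateViewStream())
{
    // Bytes are paged in by the OS on demand rather than held in managed
    // memory, so the working set stays small until you actually read.
    var header = new byte[54]; // BMP file header + info header
    view.Read(header, 0, header.Length);
    // ... read pixel data lazily as needed
}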
There are two ways to save the data in memory:
1. Serialize and compress the object with GZipStream in memory (see the sketch below).
2. Save the images to a temporary directory and read them into RAM only when needed.
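A sketch of the first option (requires System.IO.Compression; the helper name is mine). Note that already-compressed formats such as PNG or JPEG will barely shrink under GZip:
public static byte[] CompressImageToGZip(Image image)
{
    using (var raw = new MemoryStream())
    {
        // An uncompressed format like BMP gives GZip something to work with
        image.Save(raw, ImageFormat.Bmp);
        using (var packed = new MemoryStream())
        {
            using (var gzip = new GZipStream(packed, CompressionMode.Compress))
            {
                gzip.Write(raw.GetBuffer(), 0, (int)raw.Length);
            }
            return packed.ToArray(); // valid even after the stream is closed
        }
    }
}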
The Image object's size in RAM is not influenced by the original image format, but the size of the stream that the object is saved to does depend on it. Here is how to get such a stream from the object:
public static Stream GetPNGBitmapStream(Image initial)
{
    return GetBitmapStream(initial, "image/png");
}

public static Stream GetJPGBitmapStream(Image initial)
{
    return GetBitmapStream(initial, "image/jpeg");
}

private static Stream GetBitmapStream(Image initial, string mimeType)
{
    MemoryStream ms = new MemoryStream();
    var qualityEncoder = Encoder.Quality;
    var quality = (long)90;
    var ratio = new EncoderParameter(qualityEncoder, quality);
    var codecParams = new EncoderParameters(1);
    codecParams.Param[0] = ratio;
    ImageCodecInfo[] infos = ImageCodecInfo.GetImageEncoders();
    ImageCodecInfo codecInfo = null;
    for (int i = 0; i < infos.Length; i++)
    {
        if (string.Compare(infos[i].MimeType, mimeType, true) == 0)
        {
            codecInfo = infos[i];
            break;
        }
    }
    if (codecInfo != null)
    {
        initial.Save(ms, codecInfo, codecParams);
        MemoryStream ms2 = new MemoryStream(ms.ToArray());
        ms.Close();
        return ms2;
    }
    return null;
}
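Usage is then just (file names here are hypothetical; Stream.CopyTo requires .NET 4+):
using (var original = Image.FromFile("photo.bmp"))
using (var jpegStream = GetJPGBitmapStream(original))
using (var output = File.Create("photo.jpg"))
{
    jpegStream.CopyTo(output);
}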

Generating a multipage TIFF is not working

I'm trying to generate a multipage TIFF file from an existing picture using code by Bob Powell:
picture.SelectActiveFrame(FrameDimension.Page, 0);
var image = new Bitmap(picture);
using (var stream = new MemoryStream())
{
    ImageCodecInfo codecInfo = null;
    foreach (var imageEncoder in ImageCodecInfo.GetImageEncoders())
    {
        if (imageEncoder.MimeType != "image/tiff") continue;
        codecInfo = imageEncoder;
        break;
    }

    var parameters = new EncoderParameters
    {
        Param = new[]
        {
            new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame)
        }
    };
    image.Save(stream, codecInfo, parameters);

    parameters = new EncoderParameters
    {
        Param = new[]
        {
            new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage)
        }
    };
    for (var i = 1; i < picture.GetFrameCount(FrameDimension.Page); i++)
    {
        picture.SelectActiveFrame(FrameDimension.Page, i);
        var img = new Bitmap(picture);
        image.SaveAdd(img, parameters);
    }

    parameters = new EncoderParameters
    {
        Param = new[]
        {
            new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush)
        }
    };
    image.SaveAdd(parameters);
    stream.Flush();
}
But it's not working (only the first frame is included in the image) and I don't know why.
What I want to do is to change a particular frame of a TIFF file (add annotations to it).
I don't know if there's a simpler way to do it but what I have in mind is to create a multipage TIFF from the original picture and add my own picture instead of that frame.
I'm working with multi-page TIFFs using LibTIFF.NET; I found many quirks in TIFF handling with the standard libraries (memory-related issues, and also consistent crashes on 16-bit grayscale images).
What is your test image? Have you tried a many-frame TIFF (preferably with a large '1' on the first frame, a '2' on the next, etc.)? This could help you be certain which frames end up in the file.
Another useful diagnostic may be the tiffdump utility, as included in the LibTiff binaries (also for Windows). This will tell you exactly what frames you have.
See Using LibTiff from c# to access tiled tiff images
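For example, counting the frames with LibTiff.Net (a sketch; the file name is hypothetical):
using BitMiracle.LibTiff.Classic;

using (Tiff tiff = Tiff.Open("test.tif", "r"))
{
    // Each directory in a TIFF corresponds to one frame/page
    short pages = tiff.NumberOfDirectories();
    Console.WriteLine("Frames in file: " + pages);
}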
[Edit] If you want to understand the .NET stuff: I've found a new resource on multi-page TIFFs using the standard .NET functionality (although I'll stick with LibTIFF.NET): The Code Project: Save images into a multi-page TIFF file... If you download it, the code snippet in Form1.cs, function saveMultipage(..), is similar (but still slightly different) to your code. In particular, the flushing at the end is done in a different way, and the file is deleted before the first frame is written...
[/Edit]
It seems that this process doesn't change the image object but does change the stream, so I should get the memory stream's buffer and build another image object from it:
var buffer = stream.GetBuffer();
using (var newStream = new MemoryStream(buffer))
{
    var result = Image.FromStream(newStream);
}
Now result will include all frames.
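One caveat: MemoryStream.GetBuffer() returns the underlying buffer, which is usually longer than the data actually written, so the tail may contain unused bytes; MemoryStream.ToArray() copies exactly the written bytes:
var buffer = stream.ToArray(); // exact contents, unlike GetBuffer()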
