Is there any difference when you load the same image into a Bitmap from a BMP file versus a PNG (or another format)? Does the original image format influence the Bitmap object's size in RAM?
Is there a way to archive Bitmap objects so that they consume fewer RAM resources?
The size in memory is only influenced by the size of the image itself (its pixel dimensions and bit depth), regardless of format (but obviously, certain formats result in smaller files on disk than others).
One way to archive bitmaps, if you need to keep them as bitmaps, is simply to zip them. Alternatively, convert them to another image format that includes compression (ideally lossless compression, so not JPEG). Sorry, this explains archiving the files, not conserving live memory usage.
To stop Bitmap objects from using memory, you will need to release them and reload them when you want to use them again. Alternatively, though I've no experience with this, look into the new .NET 4 memory-mapped files.
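For illustration only, a minimal sketch of the release-and-reload pattern might look like this; the class name and file path are made up for the example:

// Illustrative only: hold a Bitmap lazily and drop it when memory matters.
class PhotoCache
{
    private readonly string _path;   // hypothetical source file on disk
    private Bitmap _bitmap;          // null while unloaded

    public PhotoCache(string path)
    {
        _path = path;
    }

    // Release the GDI+ pixel data so the memory can be reclaimed.
    public void Unload()
    {
        if (_bitmap != null)
        {
            _bitmap.Dispose();
            _bitmap = null;
        }
    }

    // Reload the bitmap from disk the next time it is needed.
    public Bitmap GetBitmap()
    {
        if (_bitmap == null)
        {
            _bitmap = new Bitmap(_path);
        }
        return _bitmap;
    }
}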
There are two ways to save memory with the data:
1. Serialize the object and compress it with GZipStream in memory (see the sketch below).
2. Save the images to a temporary directory and read them back into RAM only when needed.
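A rough sketch of option 1, assuming the bitmap's bytes are obtained by saving it to a MemoryStream first (method names are illustrative):

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.IO.Compression;

// Compress a bitmap's serialized form in memory with GZipStream.
static byte[] CompressBitmap(Bitmap bitmap)
{
    using (var raw = new MemoryStream())
    {
        bitmap.Save(raw, ImageFormat.Bmp);   // uncompressed BMP bytes
        using (var packed = new MemoryStream())
        {
            using (var gzip = new GZipStream(packed, CompressionMode.Compress, true))
            {
                var bytes = raw.ToArray();
                gzip.Write(bytes, 0, bytes.Length);
            }
            return packed.ToArray();
        }
    }
}

// Rebuild the bitmap when it is needed again.
static Bitmap DecompressBitmap(byte[] compressed)
{
    using (var packed = new MemoryStream(compressed))
    using (var gzip = new GZipStream(packed, CompressionMode.Decompress))
    {
        var unpacked = new MemoryStream();
        gzip.CopyTo(unpacked);
        unpacked.Position = 0;
        // The MemoryStream is intentionally not disposed: GDI+ needs the
        // stream to stay open for the lifetime of the Bitmap.
        return new Bitmap(unpacked);
    }
}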
The Image object's size in memory is not influenced by the original image format, but the size of the stream that the object is saved to does depend on it.
Here is how to get a stream from the object:
public static Stream GetPNGBitmapStream(Image initial)
{
    return GetBitmapStream(initial, "image/png");
}

public static Stream GetJPGBitmapStream(Image initial)
{
    return GetBitmapStream(initial, "image/jpeg");
}

private static Stream GetBitmapStream(Image initial, string mimeType)
{
    MemoryStream ms = new MemoryStream();

    // Ask for 90% quality (only honored by lossy codecs such as JPEG).
    var qualityEncoder = Encoder.Quality;
    var quality = (long)90;
    var ratio = new EncoderParameter(qualityEncoder, quality);
    var codecParams = new EncoderParameters(1);
    codecParams.Param[0] = ratio;

    // Find the encoder that matches the requested MIME type.
    ImageCodecInfo[] infos = ImageCodecInfo.GetImageEncoders();
    ImageCodecInfo codecInfo = null;
    for (int i = 0; i < infos.Length; i++)
    {
        if (string.Compare(infos[i].MimeType, mimeType, true) == 0)
        {
            codecInfo = infos[i];
            break;
        }
    }

    if (codecInfo != null)
    {
        // Encode into the first stream, then hand the caller a fresh stream over the bytes.
        initial.Save(ms, codecInfo, codecParams);
        MemoryStream ms2 = new MemoryStream(ms.ToArray());
        ms.Close();
        ms.Dispose();
        return ms2;
    }
    return null;
}
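A hypothetical caller (assuming both codecs are available on the machine) could compare the two encodings like this:

using (var image = Image.FromFile("photo.bmp"))        // placeholder file name
using (Stream pngStream = GetPNGBitmapStream(image))
using (Stream jpgStream = GetJPGBitmapStream(image))
{
    Console.WriteLine("PNG stream:  {0} bytes", pngStream.Length);
    Console.WriteLine("JPEG stream: {0} bytes", jpgStream.Length);
}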
This is the code I'm using to convert the TIFF to PNG.
var image = Image.FromFile(@"Test.tiff");
var encoders = ImageCodecInfo.GetImageEncoders();
var imageCodecInfo = encoders.FirstOrDefault(encoder => encoder.MimeType == "image/tiff");
if (imageCodecInfo == null)
{
return;
}
var imageEncoderParams = new EncoderParameters(1);
imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
image.Save(@"Test.png", imageCodecInfo, imageEncoderParams);
The TIFF file size is 46.8 MB (49,161,628 bytes), the PNG made using this code is 46.8 MB (49,081,870 bytes), but if I use MS Paint the PNG file size is 6.69 MB (7,021,160 bytes).
So what do I change in the code to get the same compression I get by using MS Paint?
Without a good Minimal, Complete, and Verifiable code example, it's impossible to know for sure. But…
The code you posted appears to be getting a TIFF encoder, not a PNG encoder. Just because you name the file with a ".png" extension does not mean that you will get a PNG file. It's the encoder that determines the actual file format.
And it makes perfect sense that if you use the TIFF encoder, you're going to get a file that's exactly the same size as the TIFF file you started with.
Instead, try:
var imageCodecInfo = encoders.FirstOrDefault(encoder => encoder.MimeType == "image/png");
Note that this may or may not get you exactly the same compression used by Paint. PNG has a wide variety of compression "knobs" to adjust the exact way it compresses, and you don't get access to most of those through the .NET API. Paint may or may not be using the same values as your .NET program. But you should at least get a similar level of compression.
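Put together, the corrected conversion might look roughly like this (untested sketch using the same placeholder file names as in the question):

using (var image = Image.FromFile(@"Test.tiff"))
{
    var pngEncoder = ImageCodecInfo.GetImageEncoders()
        .FirstOrDefault(encoder => encoder.MimeType == "image/png");
    if (pngEncoder == null)
    {
        return;
    }

    // The PNG codec is lossless and ignores the Quality parameter, but passing it is harmless.
    var imageEncoderParams = new EncoderParameters(1);
    imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
    image.Save(@"Test.png", pngEncoder, imageEncoderParams);
}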
OK, after a lot of trial and error I came up with this.
var image = Image.FromFile(@"Test.tiff");
Bitmap bm = null;
PictureBox pb = null;
pb = new PictureBox();
pb.Size = new Size(image.Width, image.Height);
pb.Image = image;
bm = new Bitmap(image.Width, image.Height);
ImageCodecInfo png = GetEncoder(ImageFormat.Png);
EncoderParameters imageEncoderParams = new EncoderParameters(1);
imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
pb.DrawToBitmap(bm, pb.ClientRectangle);
bm.Save(@"Test.png", png, imageEncoderParams);
pb.Dispose();
And add this to my code.
private ImageCodecInfo GetEncoder(ImageFormat format)
{
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageDecoders();
foreach (ImageCodecInfo codec in codecs)
if (codec.FormatID == format.Guid)
return codec;
return null;
}
By loading the TIFF into a PictureBox and then saving it as a PNG, the output PNG file size is 7.64 MB (8,012,608 bytes). Which is a little larger than Paint's, but that is fine.
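As a side note, the PictureBox detour can probably be avoided; an untested sketch that renders the frame straight onto a Bitmap with Graphics should produce a similar PNG:

using (var image = Image.FromFile(@"Test.tiff"))
using (var bm = new Bitmap(image.Width, image.Height))
{
    using (var g = Graphics.FromImage(bm))
    {
        // Draw the TIFF frame onto a plain 32bpp bitmap, then encode that bitmap as PNG.
        g.DrawImage(image, 0, 0, image.Width, image.Height);
    }
    bm.Save(@"Test.png", ImageFormat.Png);
}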
I'm working with the Kinect 2.0, and in particular with the color stream. The color stream arrives at a whopping 1920×1080 resolution, which is great! Except that I intend to capture the image bytes and write them to disk. So the most viable solution for me is to compress each image frame and then store the compressed image rather than the raw high-resolution image.
I have a solution, but I feel it's a bit of a "round the houses" way of doing this.
Basically, I get the raw pixels, write them to a WriteableBitmap and then compress the ImageSource using the following two methods:
1) Write to WriteableBitmap:
this.colorBitmap.WritePixels(
new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
this.colorPixels,
this.colorBitmap.PixelWidth * (int)this.bytesPerPixel,
0);
2) Compress this to jpeg:
public static byte[] CompressToBytes(ImageSource src, int compressionrate)
{
var enc = new JpegBitmapEncoder();
enc.QualityLevel = compressionrate;
var bf = BitmapFrame.Create((BitmapSource)src);
enc.Frames.Add(bf);
using (var ms = new MemoryStream())
{
enc.Save(ms);
return ms.ToArray();
}
}
Which works fine, as I said, but I think it would be much better if I could get the raw pixels from the Kinect and then directly compress the byte array, rather than writing to a WriteableBitmap and then compressing that. It just seems like an extra step.
BTW, this is the code I use to grab the bytes from the color frame:
using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
{
if (colorFrame != null)
{
FrameDescription colorFrameDescription = colorFrame.FrameDescription;
if ((colorFrameDescription.Width == this.colorBitmap.PixelWidth) && (colorFrameDescription.Height == this.colorBitmap.PixelHeight))
{
if (colorFrame.RawColorImageFormat == ColorImageFormat.Bgra)
{
colorFrame.CopyRawFrameDataToArray(this.colorPixels);
}
else
{
colorFrame.CopyConvertedFrameDataToArray(this.colorPixels, ColorImageFormat.Bgra);
}
}
}
}
Bearing in mind that when working with the color stream at all, the number of frames per second you are able to capture drops (from, say, 30-32 to roughly 26-29), I want to use the most efficient approach.
One thing I've noticed when searching for compression solutions in C# is that a good percentage of articles are aimed at storing a single image to disk, whereas I need this compression to be carried out in memory, as I am writing multiple images to disk (using a binary writer).
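For what it's worth, here is a rough length-prefixed sketch of writing many compressed frames to one file with a BinaryWriter (the frames collection and the output path are assumptions, not code from the question):

// frames: a collection of JPEG byte arrays, e.g. produced by CompressToBytes.
using (var fs = new FileStream("frames.bin", FileMode.Create))
using (var writer = new BinaryWriter(fs))
{
    foreach (byte[] jpegBytes in frames)
    {
        // Prefix each frame with its length so the frames can be read back individually.
        writer.Write(jpegBytes.Length);
        writer.Write(jpegBytes);
    }
}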
I'm trying to take an input stream (a zip file of images) and extract each file. But I must reduce the quality of each image before it is saved (if quality < 100). I have tried the following, but it never compresses the image:
public void UnZip(Stream inputStream, string destinationPath, int quality = 80) {
    using (var zipStream = new ZipInputStream(inputStream)) {
        ZipEntry entry;
        while ((entry = zipStream.GetNextEntry()) != null) {
            var directoryPath = Path.GetDirectoryName(destinationPath + Path.DirectorySeparatorChar + entry.Name);
            var fullPath = directoryPath + Path.DirectorySeparatorChar + Path.GetFileName(entry.Name);

            // Create the stream to unzip the file to
            using (var stream = new MemoryStream()) {
                // Write the zip stream to the stream
                if (entry.Size != 0) {
                    var size = 2048;
                    var data = new byte[2048];
                    while (true) {
                        size = zipStream.Read(data, 0, data.Length);
                        if (size > 0)
                            stream.Write(data, 0, size);
                        else
                            break;
                    }
                }

                // Compress the image and save it to the stream
                if (quality < 100)
                    using (var image = Image.FromStream(stream)) {
                        var info = ImageCodecInfo.GetImageEncoders();
                        var @params = new EncoderParameters(1);
                        @params.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                        image.Save(stream, info[1], @params);
                    }

                // Save the stream to disk
                using (var fs = new FileStream(fullPath, FileMode.Create)) {
                    stream.WriteTo(fs);
                }
            }
        }
    }
}
I'd appreciate it if someone could show me what I'm doing wrong. Also, any advice on tidying it up would be appreciated, as the code has grown a bit ugly. Thanks.
You really shouldn't be using the same stream to save the compressed image. The MSDN documentation clearly says: "Do not save an image to the same stream that was used to construct the image. Doing so might damage the stream." (MSDN Article on Image.Save(...))
using (var compressedImageStream = new MemoryStream())
{
image.Save(compressedImageStream, info[1], @params);
}
Also, what file format are you encoding into? You haven't specified. You're just getting the second encoder found. You shouldn't rely on the order of the results. Search for a specific codec instead:
var encoder = ImageCodecInfo.GetImageEncoders().Where(x => x.FormatID == ImageFormat.Jpeg.Guid).SingleOrDefault();
... and don't forget to check whether the encoder actually exists on your system:
if (encoder != null)
{ .. }
The Quality parameter doesn't have meaning for all file formats. I assume you might be working with JPEGs? Also, keep in mind that 100% JPEG Quality != Lossless Image. You can still encode with Quality = 100 and reduce space.
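Putting those points together, the re-encoding step might look roughly like this (an untested sketch; JPEG is assumed as the target format and System.Linq is assumed to be in scope):

// Rewind the extracted data and re-encode it into a separate stream.
stream.Position = 0;
using (var image = Image.FromStream(stream))
using (var compressedImageStream = new MemoryStream())
{
    var encoder = ImageCodecInfo.GetImageEncoders()
        .SingleOrDefault(x => x.FormatID == ImageFormat.Jpeg.Guid);
    if (encoder != null)
    {
        var @params = new EncoderParameters(1);
        @params.Param[0] = new EncoderParameter(Encoder.Quality, (long)quality);
        image.Save(compressedImageStream, encoder, @params);

        // Write the re-encoded image, not the original stream, to disk.
        using (var fs = new FileStream(fullPath, FileMode.Create))
        {
            compressedImageStream.WriteTo(fs);
        }
    }
}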
There is no code to compress the image after you've extracted it from the zip stream. All you seem to be doing is getting the unzipped data into a MemoryStream, then proceeding to write the image to the same stream based on quality information (which may or may not compress an image, depending on the codec). I would first recommend not writing to the same stream you're reading from. Also, what "compression" you get out of the Encoder.Quality property depends on the type of image, which you haven't provided any detail on. If the image type supports compression and the incoming image quality is lower than 100 to start, you won't get any reduction in size. Also, you've not provided any information with regard to that. Long story short, you haven't provided enough information for anyone to give you a real answer.
I need to transfer some images over the network, so I saved the images as JPEG with 40% quality as follows:
public void SaveJpeg(string path, Image image, int quality) {
    if ((quality < 0) || (quality > 100)) {
        string error = string.Format("Jpeg image quality must be between 0 and 100, with 100 being the highest quality. A value of {0} was specified.", quality);
        throw new ArgumentOutOfRangeException(error);
    }

    EncoderParameter qualityParam = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
    ImageCodecInfo jpegCodec = GetEncoderInfo("image/jpeg");
    EncoderParameters encoderParams = new EncoderParameters(1);
    encoderParams.Param[0] = qualityParam;
    image.Save(path, jpegCodec, encoderParams);
}
But this way the JPEG files are still not small enough, and when I lower the quality further the appearance is not good. Is there any way to save pictures with a smaller file size and an acceptable appearance? Could a System.Drawing.Graphics object help here? Also, I don't want to zip the files or change the dimensions of the images; right now only the size of the picture file is important.
With image compression, there's a fine line between creating a small file and creating a poor quality image. JPEG is a lossy compression format which means that data is removed when compressed, which is why constantly re-encoding a JPEG file will continually decrease its quality.
On the other hand, PNG files are lossless but may still result in bigger files. You could try encoding the file as a PNG using PngBitmapEncoder. This will ensure the quality remains high, but the size may or may not decrease enough for your program (it depends on the image).
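A minimal PngBitmapEncoder sketch (WPF, System.Windows.Media.Imaging; the file paths are placeholders) would be:

using System;
using System.IO;
using System.Windows.Media.Imaging;

// Re-encode an existing image file as PNG with the WPF imaging stack.
var source = new BitmapImage(new Uri(@"C:\images\input.jpg"));
var encoder = new PngBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(source));
using (var output = File.Create(@"C:\images\output.png"))
{
    encoder.Save(output);
}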
If you're performing this on a local machine and don't need to do it too often (e.g. for many concurrent users), you could invoke an external program to do it for you. PNG Monster is very good at compressing PNG files without decreasing the quality. You could call this from your program and send the resulting PNG file. (You may want to check the licensing terms to ensure that it's compatible with your program).
There aren't many ways where you can maintain a high quality and perform a high compression at the same time, without manipulating the image (e.g. changing dimension).
I have a method for creating and saving the thumbnail of an uploaded picture; I think the NewImageSize method might help you. It also handles the quality issue.
public Size NewImageSize(int OriginalHeight, int OriginalWidth, double FormatSize)
{
Size NewSize;
double tempval;
if (OriginalHeight > FormatSize && OriginalWidth > FormatSize)
{
if (OriginalHeight > OriginalWidth)
tempval = FormatSize / Convert.ToDouble(OriginalHeight);
else
tempval = FormatSize / Convert.ToDouble(OriginalWidth);
NewSize = new Size(Convert.ToInt32(tempval * OriginalWidth), Convert.ToInt32(tempval * OriginalHeight));
}
else
NewSize = new Size(OriginalWidth, OriginalHeight);
return NewSize;
}
private bool save_image_with_thumb(string image_name, string path)
{
ResimFileUpload1.SaveAs(path + image_name + ".jpg"); // save the full-size image
/////// Create and save the thumbnail //////////////
try
{
Bitmap myBitmap;
myBitmap = new Bitmap(path + image_name + ".jpg");
Size thumbsize = NewImageSize(myBitmap.Height, myBitmap.Width, 100);
System.Drawing.Image.GetThumbnailImageAbort myCallBack = new System.Drawing.Image.GetThumbnailImageAbort(ThumbnailCallback);
// If jpg file is a jpeg, create a thumbnail filename that is unique.
string sThumbFile = path + image_name + "_t.jpg";
// Save thumbnail and output it onto the webpage
System.Drawing.Image myThumbnail = myBitmap.GetThumbnailImage(thumbsize.Width, thumbsize.Height, myCallBack, IntPtr.Zero);
myThumbnail.Save(sThumbFile);
// Destroy objects
myThumbnail.Dispose();
myBitmap.Dispose();
return true;
}
catch // if creation fails, abandon both the normal image and the thumbnail
{
return false;
}
///////////////////////////////////
}
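Note that GetThumbnailImage requires an abort callback; the usual stub (as in the MSDN sample) is simply:

// GDI+ requires this callback but does not actually call it for managed bitmaps.
public bool ThumbnailCallback()
{
    return false;
}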
I'm trying to generate a multipage TIFF file from an existing picture using code by Bob Powell:
picture.SelectActiveFrame(FrameDimension.Page, 0);
var image = new Bitmap(picture);
using (var stream = new MemoryStream())
{
ImageCodecInfo codecInfo = null;
foreach (var imageEncoder in ImageCodecInfo.GetImageEncoders())
{
if (imageEncoder.MimeType != "image/tiff") continue;
codecInfo = imageEncoder;
break;
}
var parameters = new EncoderParameters
{
Param = new []
{
new EncoderParameter(Encoder.SaveFlag, (long) EncoderValue.MultiFrame)
}
};
image.Save(stream, codecInfo, parameters);
parameters = new EncoderParameters
{
Param = new[]
{
new EncoderParameter(Encoder.SaveFlag, (long) EncoderValue.FrameDimensionPage)
}
};
for (var i = 1; i < picture.GetFrameCount(FrameDimension.Page); i++)
{
picture.SelectActiveFrame(FrameDimension.Page, i);
var img = new Bitmap(picture);
image.SaveAdd(img, parameters);
}
parameters = new EncoderParameters
{
Param = new[]
{
new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush)
}
};
image.SaveAdd(parameters);
stream.Flush();
}
But it's not working (only the first frame is included in the image) and I don't know why.
What I want to do is to change a particular frame of a TIFF file (add annotations to it).
I don't know if there's a simpler way to do it but what I have in mind is to create a multipage TIFF from the original picture and add my own picture instead of that frame.
I'm working with multi-page TIFFs using LibTIFF.NET; I found many quirks in the handling of TIFF using the standard libraries (memory-related issues and also consistent crashes on 16-bit grayscale images).
What is your test image? Have you tried a many-frame TIFF (preferably with a large '1' on the first frame, a '2' on the next, etc.)? This could help you be certain about which frames end up in the file.
Another useful diagnostic may be the tiffdump utility, as included in the LibTiff binaries (also available for Windows). This will tell you exactly what frames you have.
See Using LibTiff from c# to access tiled tiff images
[Edit] If you want to understand the .NET stuff: I've found a new resource on multi-page TIFFs using the standard .NET functionality (although I'll stick with LibTIFF.NET): TheCodeProject : Save images into a multi-page TIFF file... If you download it, the code snippet in the Form1.cs function saveMultipage(..) is similar to (but still slightly different from) your code. In particular, the flushing at the end is done in a different way, and the file is deleted before the first frame...
[/Edit]
It seems that this process doesn't change the image object, but it does change the stream, so I should get the memory stream's buffer and build another Image object from it:
var buffer = stream.GetBuffer();
using (var newStream = new MemoryStream(buffer))
{
    var result = Image.FromStream(newStream);
}
Now result will include all frames.
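A quick way to confirm that (reusing the names from the snippet above) is to reload the buffer and count the frames:

var buffer = stream.GetBuffer();
using (var newStream = new MemoryStream(buffer))
using (var result = Image.FromStream(newStream))
{
    // All pages should now be present in the regenerated TIFF.
    int pages = result.GetFrameCount(FrameDimension.Page);
    Console.WriteLine("Frames in the generated TIFF: " + pages);
}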