I am completely stuck on image resizing: I keep getting an OutOfMemoryException with the typical resizing examples found in the many other questions here that deal with OOMs.
I even tried DynamicImage, which can be found on Nuget, and this also threw an OutOfMemoryException.
Can anyone tell me how I can reduce the quality/size of an image in C#, without loading it into memory?
Edit: I want the C# equivalent of this, if there is one?
Edit: I have given up on the typical resizing methods, as I just can't avoid OutOfMemoryExceptions on my live site, which runs on an old server.
Further Edit: My server's OS is Windows Server 2003 Standard Edition.
I can post examples of my code, but I'm trying to find a way around OutOfMemoryExceptions.
public static void ResizeImage(string imagePath, int imageWidth, int imageHeight, bool upscaleImage) {
    using (Image image = Image.FromFile(imagePath, false)) {
        int width = image.Width;
        int height = image.Height;
        if (width > imageWidth || height > imageHeight || upscaleImage) {
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
            float ratio = 0;
            if (width > height) {
                ratio = (float)width / (float)height;
                width = imageWidth;
                height = Convert.ToInt32(Math.Round((float)width / ratio));
            }
            else {
                ratio = (float)height / (float)width;
                height = imageHeight;
                width = Convert.ToInt32(Math.Round((float)height / ratio));
            }
            using (Bitmap bitmap = new Bitmap(width, height)) {
                bitmap.SetResolution(image.HorizontalResolution, image.VerticalResolution);
                using (Graphics graphic = Graphics.FromImage(bitmap)) {
                    graphic.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                    graphic.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                    graphic.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
                    graphic.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
                    graphic.DrawImage(image, 0, 0, width, height);
                    string extension = ".jpg"; // Path.GetExtension(originalFilePath);
                    using (EncoderParameters encoderParameters = new EncoderParameters(1)) {
                        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
                        using (MemoryStream imageMemoryStream = new MemoryStream()) {
                            bitmap.Save(imageMemoryStream, GetImageCodec(extension), encoderParameters);
                            using (Image result = Image.FromStream(imageMemoryStream, true, false)) {
                                string newFullPathName = //path;
                                result.Save(newFullPathName);
                            }
                        }
                    }
                }
            }
        }
    }
}
I also tried this code as I hoped GetThumbnailImage would reduce the picture quality/size for me, but this is also throwing an OOM exception:
viewModel.File.SaveAs(path);
Image image = Image.FromFile(path);
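// Note: GetThumbnailImage returns the embedded EXIF thumbnail (scaled to the requested size) if the file has one;
// otherwise it scales the full image down, so the whole image still gets decoded in that case.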
Image thumbnail = image.GetThumbnailImage(600, 600, null, new IntPtr());
image.Dispose();
File.Delete(path);
thumbnail.Save(path);
thumbnail.Dispose();
Again, both of my code examples work on my local machine, so I am not looking for faults or fixes in the code itself. I'm looking for any solution that avoids the OOM exceptions. My idea was to reduce the file size somehow without loading the image into memory, but any alternative ideas that can help me would be appreciated.
You can try using ImageMagick via command line or the .NET bindings. ImageMagick has some options to resize as the file is being read, which should reduce memory consumption.
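For instance, here is a rough sketch of shelling out to the ImageMagick command line from C#; the paths, sizes and executable name are placeholders, and the "-define jpeg:size" hint is what asks the JPEG decoder to downscale while it reads, which keeps memory use low:

using System.Diagnostics;

public static void ResizeWithImageMagick(string sourcePath, string destPath)
{
    // "convert" is the classic ImageMagick 6 binary; newer installs expose "magick" instead.
    var startInfo = new ProcessStartInfo
    {
        FileName = "convert",
        Arguments = string.Format(
            "-define jpeg:size=1200x1200 \"{0}\" -thumbnail 600x600 \"{1}\"",
            sourcePath, destPath),
        UseShellExecute = false,
        CreateNoWindow = true
    };

    using (var process = Process.Start(startInfo))
    {
        process.WaitForExit();
    }
}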
It may be that the "image" you are using is either not a supported format or is corrupted.
I failed to mention that I inherited legacy code that was not disposing of images properly; I did not think it was relevant since I had already fixed it. The strange thing is that, after restarting the website and the app pool, I was able to upload pictures again without getting an OutOfMemoryException. I'm struggling to understand why, as I changed the code to dispose of images properly and have done several deploys since, so I would have expected that to clear any undisposed images from memory. All the code for picture resizing and uploading was in a static class, and I believe GC.Collect() does not collect objects that are still referenced from static members?
My theory is that the undisposed images built up in memory and remained even after I redeployed the site; that's the only conclusion I can reach, since the code began working again after restarting the app pool.
I would delete my question, but it has been answered now; I'm happy to reassign the answer if anyone can explain what was going on here.
This is the Kinect for Xbox 360, using the kinect.dll library.
No problem with the RgbResolution1280x960Fps12 or RgbResolution640x480Fps30 color stream.
The problem occurs with the infrared stream (InfraredResolution640x480Fps30)
Below is the code used and the resulting image.
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
    byte[] colorData = null;
    if (colorFrame == null) return;
    if (colorData == null)
        colorData = new byte[colorFrame.PixelDataLength];
    colorFrame.CopyPixelDataTo(colorData);
    Marshal.FreeHGlobal(colorPtr);
    colorPtr = Marshal.AllocHGlobal(colorData.Length);
    Marshal.Copy(colorData, 0, colorPtr, colorData.Length);
    if (ir) //ir is true if there is infrared stream
    {
        kinectVideoBitmap = new Bitmap(
            colorFrame.Width / 2, //stream in 16 bit
            colorFrame.Height,
            colorFrame.Width * colorFrame.BytesPerPixel,
            System.Drawing.Imaging.PixelFormat.Format32bppRgb, //probable error
            colorPtr);
    }
    else
    {
        kinectVideoBitmap = new Bitmap(
            colorFrame.Width,
            colorFrame.Height,
            colorFrame.Width * colorFrame.BytesPerPixel,
            System.Drawing.Imaging.PixelFormat.Format32bppRgb,
            colorPtr);
    }
    pic.Image = kinectVideoBitmap; //pic is picturebox
    byte[] pixelData = new byte[colorFrame.PixelDataLength];
    colorFrame.CopyPixelDataTo(pixelData);
An error could be the handling of the PixelFormat, indicated in the code, but I don't think it depends only on that. Below are the image produced by my code and the one produced by the "Kinect Explorer" example app.
(both images are screenshots)
This is the image from the Kinect Explorer example app; you can see how well defined it is in terms of quality and noise, and, more importantly, even distant objects are clearly visible.
This is the image from my app; in addition to the various colours caused by the wrong PixelFormat, the noise is such that you cannot see any of the distant objects.
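To help frame the question, here is a rough sketch of the repacking I suspect is needed: take the high byte of each 16-bit IR sample as a grey value and write it into a 24bpp bitmap. The choice of byte/shift is a guess on my part, not something from the SDK documentation, and I have not verified it:

// Sketch only: repack the 16-bit IR samples into an 8-bit grayscale 24bpp bitmap.
byte[] irData = new byte[colorFrame.PixelDataLength];
colorFrame.CopyPixelDataTo(irData);

int width = colorFrame.Width;    // 640 for InfraredResolution640x480Fps30
int height = colorFrame.Height;  // 480
Bitmap greyBitmap = new Bitmap(width, height, System.Drawing.Imaging.PixelFormat.Format24bppRgb);
System.Drawing.Imaging.BitmapData bits = greyBitmap.LockBits(
    new Rectangle(0, 0, width, height),
    System.Drawing.Imaging.ImageLockMode.WriteOnly,
    greyBitmap.PixelFormat);

byte[] rgb = new byte[bits.Stride * height];
for (int y = 0; y < height; y++)
{
    for (int x = 0; x < width; x++)
    {
        // Each IR pixel is two bytes (little-endian); the high byte carries the intensity.
        byte intensity = irData[(y * width + x) * 2 + 1];
        int offset = y * bits.Stride + x * 3;
        rgb[offset] = intensity;     // B
        rgb[offset + 1] = intensity; // G
        rgb[offset + 2] = intensity; // R
    }
}
Marshal.Copy(rgb, 0, bits.Scan0, rgb.Length);
greyBitmap.UnlockBits(bits);
// (dispose of the previous pic.Image before replacing it, to avoid leaking bitmaps per frame)
pic.Image = greyBitmap;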
Do you have ideas on how to fix and get a clean image?
I thank everyone in advance.
I'm using the following code right now, and it works for ~300 images, but I have to merge more than one thousand.
private static void CombineThumbStripImages(string[] imageFiles)
{
    int index = 0;
    using (var result = new Bitmap(192 * imageFiles.Length, 112))
    {
        using (var graphics = Graphics.FromImage(result))
        {
            graphics.Clear(Color.White);
            int leftPosition = 0;
            for (index = 0; index < imageFiles.Length; index++)
            {
                string file = imageFiles[index];
                using (var image = new Bitmap(file))
                {
                    var rect = new Rectangle(leftPosition, 0, 192, 112);
                    graphics.DrawImage(image, rect);
                    leftPosition += 192;
                }
            }
        }
        result.Save("result.jpg", ImageFormat.Jpeg);
    }
}
It throws the following exception:
An unhandled exception of type 'System.Runtime.InteropServices.ExternalException' occurred in System.Drawing.dll
Additional information: A generic error occurred in GDI+.
Can somebody help?
Why not use something like XNA or OpenGL to do this?
I know you can with Texture2D, and as I am currently learning OpenGL I'd like to think you can with that too, but I don't know how.
http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.texture2d_members.aspx (SaveAsJpeg is public)
I have done something similar to make sprite sheets: you basically create a huge Texture2D using whatever tessellation or organising algorithm you like to optimise space usage. There are lots of ways to do this, though.
such as
http://xbox.create.msdn.com/en-US/education/catalog/sample/sprite_sheet
It would probably only take a couple of hours to do. XNA is very easy, especially if you are just leaning on the externals. Doing it this way should work pretty well.
The problem lies in this line of code.
result.Save("result.jpg", ImageFormat.Jpeg);
It looks like it throws an error when saving in the JPEG/PNG format. I loaded a copy of your code into VS2008 + Windows 7. A likely cause is that the JPEG format cannot store images wider than 65,535 pixels; at 192 px per thumbnail you hit that limit at roughly 341 images, which matches it working for ~300 but failing beyond that.
If you want to use the same code, change your image format to BMP or TIFF:
result.Save("result.bmp", ImageFormat.Bmp); // this works, but the file size is huge
or
result.Save("result.tiff", ImageFormat.Tiff); // this also works, files is not as big
I have a method that I use to get a thumbnail as a Byte[] from an image. The method receives the path to the image.
The class works pretty well until you reach a size threshold, after which performance drops to pitifully low levels. No, I haven't nailed down the threshold, mainly because after some research it seems I'm using a methodology that is inefficient and not very scalable.
My scenario seems to be confirmed in this SO thread.
That question lays out exactly what I am experiencing. The reason it isn't a solution is because the answer talks of using other graphics APIs and Paint overrides, which obviously doesn't apply here. I tried the miscellaneous things, like setting graphics parameters, but that made little difference.
One example of a large image I am dealing with is 3872 x 2592 and about 3.5 MB in size. Some are a lot larger, and many are that size or smaller.
My searching has not yielded much. In fact, I can only find advice that involves System.Drawing.Graphics.DrawImage(). In one case it was suggested to reference additional assemblies and use PresentationFramework. This is a WinForms app, so that seems a bit much just to grab a thumbnail image.
Another suggestion I came across had to do with extracting Exif information from the file (if I recall) and attempting to grab just that data rather than the entire image. I'm not opposed, but I have yet to find a complete enough example of how that is carried out.
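From what I can tell, that approach boils down to something like the sketch below. It is only a sketch and I have not verified it against my image set; 0x501B appears to be the property id for the EXIF thumbnail data:

public static Byte[] GetEmbeddedThumbnailBytes(String filePath)
{
    using (FileStream fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
    using (Image i = Image.FromStream(fs, false, false)) // no colour management, no validation pass
    {
        const Int32 PropertyTagThumbnailData = 0x501B;
        if (Array.IndexOf(i.PropertyIdList, PropertyTagThumbnailData) >= 0)
        {
            // The property value is the embedded thumbnail, usually already JPEG-encoded.
            return i.GetPropertyItem(PropertyTagThumbnailData).Value;
        }
        return new Byte[] { }; // no embedded thumbnail; fall back to scaling the full image
    }
}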
I also wonder about P/Invoke options: could they offer better performance than GDI+ is (apparently) capable of delivering? But by all means, if there's an optimization I am missing in this code, please point it out.
Here is my current method:
public static Byte[] GetImageThumbnailAsBytes(String FilePath)
{
    if (File.Exists(FilePath))
    {
        Byte[] ba = File.ReadAllBytes(FilePath);
        if (ba != null)
        {
            using (MemoryStream ms = new MemoryStream(ba, false))
            {
                Int32 thWidth = _MaxThumbWidth;
                Int32 thHeight = _MaxThumbHeight;
                Image i = Image.FromStream(ms, true, false);
                ImageFormat imf = i.RawFormat;
                Int32 w = i.Width;
                Int32 h = i.Height;
                Int32 th = thWidth;
                Int32 tw = thWidth;
                if (h > w)
                {
                    Double ratio = (Double)w / (Double)h;
                    th = thHeight < h ? thHeight : h;
                    tw = thWidth < w ? (Int32)(ratio * thWidth) : w;
                }
                else
                {
                    Double ratio = (Double)h / (Double)w;
                    th = thHeight < h ? (Int32)(ratio * thHeight) : h;
                    tw = thWidth < w ? thWidth : w;
                }
                Bitmap target = new Bitmap(tw, th);
                using (Graphics g = Graphics.FromImage(target)) // dispose the Graphics once drawing is done
                {
                    g.SmoothingMode = SmoothingMode.HighQuality;
                    g.CompositingQuality = CompositingQuality.HighQuality;
                    g.InterpolationMode = InterpolationMode.Bilinear; //NearestNeighbor
                    g.CompositingMode = CompositingMode.SourceCopy;
                    Rectangle rect = new Rectangle(0, 0, tw, th);
                    g.DrawImage(i, rect, 0, 0, w, h, GraphicsUnit.Pixel);
                }
                using (MemoryStream ms2 = new MemoryStream())
                {
                    target.Save(ms2, imf);
                    target.Dispose();
                    i.Dispose();
                    return ms2.ToArray();
                }
            }
        }
    }
    return new Byte[] { };
}
P.S. I got here in the first place by using the Visual Studio 2012 profiler, which told me that DrawImage() is responsible for 97.7% of the CPU load while loading images (I did a pause/start to isolate the loading code).
I have a web application where users can upload pictures to create their galleries. Years ago when I wrote the application I choose ImageMagick, and I did all my cropping and resizing with ImageMagick.
Now that I am rewriting the application from scratch, I have replaced ImageMagick with native GDI+ operations, but the more I learn about GDI+, the more I fear I made the wrong choice.
Everywhere I read that GDI+ is meant for the desktop and should not be used in a server application. I don't know the details, but I guess it's because of memory consumption, and indeed I can see GDI+ using more memory than ImageMagick to do the same crop-and-resize operation on the same image (though, to be fair, GDI+ is faster).
I believed GDI+, ImageMagick or any other library would be more or less the same for these basic operations, and I liked the idea of using native GDI+, figuring that whatever MS ships with .NET should be at least OK.
What is the right approach/tool to use?
This is the code I use to crop:
internal Image Crop(Image image, Rectangle r)
{
    Bitmap bmpCrop;
    using (Bitmap bmpImage = new Bitmap(image))
    {
        bmpCrop = bmpImage.Clone(r, bmpImage.PixelFormat);
        bmpImage.Dispose();
    }
    return (Image)(bmpCrop);
}
This is the code I use to resize:
internal Image ResizeTo(Image sourceImage, int width, int height)
{
    System.Drawing.Image newImage = new Bitmap(width, height);
    using (Graphics gr = Graphics.FromImage(newImage))
    {
        gr.SmoothingMode = SmoothingMode.AntiAlias;
        gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gr.PixelOffsetMode = PixelOffsetMode.HighQuality;
        gr.DrawImage(sourceImage, new Rectangle(0, 0, width, height));
        gr.Dispose();
    }
    return newImage;
}
Can you link to somewhere people have said that GDI+ shouldn't be used on the server? Maybe they know something I don't.
I know a few things about how GDI+ works but nothing about ImageMagick. I did happen upon this page describing ImageMagick's architecture: http://www.imagemagick.org/script/architecture.php
It seems ImageMagick will internally convert images to an uncompressed format with 4 channels and a specific bit depth, typically 16 bits per channel, and do its work with the uncompressed data, which may be in memory or on disk depending on the size. 'identify -version' will tell you what your bit depth is. My impression is that in practice ImageMagick will typically work with a 64-bit RGBA buffer internally, unless you use a Q8 version which will use 32-bit RGBA. It also can use multiple threads, but I doubt that will matter unless you're working with very large images. (If you are working with very large images, ImageMagick is the clear winner.)
GDI+ Bitmap objects will always store uncompressed data in memory and will generally default to 32-bit RGBA. That and 32-bit RGB are probably the most efficient formats. GDI+ is a drawing library, and it wasn't designed for large images, but at least a Bitmap object won't hold any resources other than the memory for the pixel data and image metadata (contrary to popular belief, they do not contain HBITMAP objects).
So they seem very similar to me. I can't say one is clearly better than the other for your use case. If you go with ImageMagick, you should probably use a Q8 build for the speed and memory gains, unless the extra precision is important to you.
It seems like if your only operations are load, save, scale, and crop, you should be able to easily replace the implementation later if you want.
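For example, a thin abstraction along these lines (purely illustrative; the names are mine, not an existing API) would let you swap the GDI+ implementation for ImageMagick later without touching the calling code:

// Illustrative only: the smallest surface the gallery code seems to need.
internal interface IImageProcessor
{
    void Load(string path);
    void Save(string path, int jpegQuality);
    void Scale(int width, int height);
    void Crop(Rectangle area);
}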
Unless you need to work with metafiles, you should probably be using Bitmap objects internally and not Images. Then you wouldn't have to create an intermediate Bitmap object in your Crop function. That intermediate object may be behind some of the extra memory consumption you observed. If you get Image objects from an external source, I'd suggest trying to cast them to Bitmap and creating a new Bitmap if that doesn't work.
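As a rough sketch of that suggestion (assuming your callers can accept a Bitmap, which derives from Image anyway):

// Sketch: clone directly from the source Bitmap when possible, so the extra
// full-size copy in the original Crop is only made when the input isn't a Bitmap.
internal Bitmap Crop(Image image, Rectangle r)
{
    Bitmap source = image as Bitmap;
    bool copied = false;
    if (source == null)
    {
        source = new Bitmap(image); // fall back to a copy for non-Bitmap Images
        copied = true;
    }
    try
    {
        return source.Clone(r, source.PixelFormat);
    }
    finally
    {
        if (copied)
            source.Dispose();
    }
}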
Also, the "using" statement calls Dispose automatically, so there's no need to also call it explicitly.
I wrote something myself:
public void ResizeImageAndRatio(string origFileLocation, string newFileLocation, string origFileName, string newFileName, int newWidth, int newHeight, bool resizeIfWider)
{
    using (System.Drawing.Image initImage = System.Drawing.Image.FromFile(origFileLocation + origFileName))
    {
        int templateWidth = newWidth;
        int templateHeight = newHeight;
        double templateRate = (double)templateWidth / templateHeight;
        double initRate = (double)initImage.Width / initImage.Height;
        if (templateRate == initRate)
        {
            // Same aspect ratio: resize straight to the template size.
            using (System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight))
            using (System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage))
            {
                templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
                templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                templateG.Clear(Color.White);
                templateG.DrawImage(initImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, initImage.Width, initImage.Height), System.Drawing.GraphicsUnit.Pixel);
                templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);
            }
        }
        else
        {
            // Different aspect ratio: centre-crop the source to the template ratio, then resize.
            Rectangle fromR = new Rectangle(0, 0, 0, 0);
            Rectangle toR = new Rectangle(0, 0, 0, 0);
            int pickedWidth;
            int pickedHeight;
            if (templateRate > initRate)
            {
                // Source is relatively taller: keep the full width, crop top and bottom.
                pickedWidth = initImage.Width;
                pickedHeight = (int)Math.Floor(initImage.Width / templateRate);
                fromR.X = 0;
                fromR.Y = (int)Math.Floor((initImage.Height - initImage.Width / templateRate) / 2);
                fromR.Width = pickedWidth;
                fromR.Height = pickedHeight;
                toR.X = 0;
                toR.Y = 0;
                toR.Width = pickedWidth;
                toR.Height = pickedHeight;
            }
            else
            {
                // Source is relatively wider: keep the full height, crop left and right.
                pickedWidth = (int)Math.Floor(initImage.Height * templateRate);
                pickedHeight = initImage.Height;
                fromR.X = (int)Math.Floor((initImage.Width - initImage.Height * templateRate) / 2);
                fromR.Y = 0;
                fromR.Width = pickedWidth;
                fromR.Height = pickedHeight;
                toR.X = 0;
                toR.Y = 0;
                toR.Width = pickedWidth;
                toR.Height = pickedHeight;
            }
            using (System.Drawing.Image pickedImage = new System.Drawing.Bitmap(pickedWidth, pickedHeight))
            using (System.Drawing.Graphics pickedG = System.Drawing.Graphics.FromImage(pickedImage))
            using (System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight))
            using (System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage))
            {
                pickedG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                pickedG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                pickedG.DrawImage(initImage, toR, fromR, System.Drawing.GraphicsUnit.Pixel);
                templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
                templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                templateG.Clear(Color.White);
                templateG.DrawImage(pickedImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, pickedImage.Width, pickedImage.Height), System.Drawing.GraphicsUnit.Pixel);
                templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);
            }
        }
    }
}
I'm scaling images down in C#, and I've compared my methods with the best method in Photoshop CS5 and cannot replicate it.
In PS I'm using Bicubic Sharper, which looks really good. However, when trying to do the same in C# I don't get results of as high quality. I've tried bicubic interpolation as well as HQ bicubic, smoothing modes HQ/None/AA and various compositing modes; I've tried about 50 different combinations and each one comes out pretty close to the image on the right.
You'll notice the pixelation on her back and around the title, as well as the author's name not coming out too well.
(Left is PS, right is C#.)
It seems that c# bicubic does too much smoothing even with smoothing set to none. I've been playing around with many variations of the following code:
g.CompositingQuality = CompositingQuality.HighQuality;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.None;
g.SmoothingMode = SmoothingMode.None;
Edit: As requested here is the starting image (1mb).
Perhaps I am missing something, but I have typically used the code below to resize/compress JPEG images. Personally, I think the result turned out pretty well based on your source image. The code doesn't handle a few edge cases concerning input parameters, but overall it gets the job done (I have additional extension methods for cropping and combining image transformations if you're interested).
Image scaled to 25% of the original size and compressed with quality 90 (~30 KB output file).
Image Scaling Extension Methods:
public static Image Resize(this Image image, Single scale)
{
    if (image == null)
        return null;

    scale = Math.Max(0.0F, scale);
    Int32 scaledWidth = Convert.ToInt32(image.Width * scale);
    Int32 scaledHeight = Convert.ToInt32(image.Height * scale);
    return image.Resize(new Size(scaledWidth, scaledHeight));
}

public static Image Resize(this Image image, Size size)
{
    if (image == null || size.IsEmpty)
        return null;

    var resizedImage = new Bitmap(size.Width, size.Height, image.PixelFormat);
    resizedImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);
    using (var g = Graphics.FromImage(resizedImage))
    {
        var location = new Point(0, 0);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(image, new Rectangle(location, size), new Rectangle(location, image.Size), GraphicsUnit.Pixel);
    }
    return resizedImage;
}
Compression Extension Method:
public static Image Compress(this Image image, Int32 quality)
{
    if (image == null)
        return null;

    quality = Math.Max(0, Math.Min(100, quality));
    using (var encoderParameters = new EncoderParameters(1))
    {
        var imageCodecInfo = ImageCodecInfo.GetImageEncoders().First(encoder => String.Compare(encoder.MimeType, "image/jpeg", StringComparison.OrdinalIgnoreCase) == 0);
        var memoryStream = new MemoryStream();
        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, Convert.ToInt64(quality));
        image.Save(memoryStream, imageCodecInfo, encoderParameters);
        return Image.FromStream(memoryStream);
    }
}
Usage:
using(var source = Image.FromFile(#"C:\~\Source.jpg"))
using(var resized = source.Resize(0.25F))
using(var compressed = resized.Compress(90))
compressed.Save(#"C:\~\Output.jpg");
NOTE:
For anyone who may comment: you cannot dispose the MemoryStream created in the Compress method until after the image is disposed. If you reflect into the implementation of Dispose on MemoryStream, it is actually safe not to call Dispose explicitly. The only alternative would be to wrap the Image and the MemoryStream in a custom class that implements IDisposable.
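If you did want that alternative, a minimal sketch of such a wrapper might look like the class below; the name and shape are mine, not part of the code above:

// Minimal sketch: owns both the decoded Image and the MemoryStream backing it,
// so the two are always disposed together and in the right order.
public sealed class StreamBackedImage : IDisposable
{
    private readonly MemoryStream _stream;

    public StreamBackedImage(Image image, MemoryStream stream)
    {
        Image = image;
        _stream = stream;
    }

    public Image Image { get; private set; }

    public void Dispose()
    {
        if (Image != null)
        {
            Image.Dispose();   // the Image may still read from the stream, so dispose it first
            Image = null;
        }
        _stream.Dispose();
    }
}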
Looking at the amount of JPEG artifacts, especially at the top of the image, I think you set the JPEG compression too high. That results in a smaller (file size) file, but reduces image quality and seems to add more blur.
Can you try saving it in a higher quality? I assume the line containing CompositingQuality.HighQuality does this already, but maybe you can find an even higher quality mode. What are the differences in file size between Photoshop and C#? And how does the Photoshop image look after you saved it and reopened it? Just resizing in Photoshop doesn't introduce any jpg data loss. You will only notice that after you've saved the image as jpg and then closed and reopened it.
I stumbled upon this question.
I used this code to save the JPEG with the quality set to 100 (effectively no visible compression), and it comes out like the PS version:
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
ImageCodecInfo ici = null;
foreach (ImageCodecInfo codec in codecs)
{
    if (codec.MimeType == "image/jpeg")
        ici = codec;
}
EncoderParameters ep = new EncoderParameters();
ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)100);
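The resized bitmap is then saved through that codec, roughly like the line below, where resizedBitmap and outputPath stand in for your own variables:

// resizedBitmap and outputPath are placeholders for your own bitmap and destination path.
resizedBitmap.Save(outputPath, ici, ep);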