I have a method that I use to get a thumbnail as a Byte[] from an image. The method receives the path to the image.
The class works pretty well until you reach a size threshold, after which performance drops to pitifully low levels. No, I haven't nailed down the threshold, mainly because after some research it seems I'm using a methodology that is inefficient and doesn't scale.
My scenario seems to be confirmed in this SO thread.
That question lays out exactly what I am experiencing. The reason it isn't a solution is that the answer talks about using other graphics APIs and Paint overrides, which obviously doesn't apply here. I tried the miscellaneous suggestions, like setting graphics parameters, but they made little difference.
One example of a large image I am dealing with is 3872 x 2592 pixels and about 3.5 MB in size. Some are a lot larger, and many are that size or smaller.
My searching has not yielded much. In fact, it seems I can only find advice that involves System.Drawing.Graphics.DrawImage(). In one exception, it was suggested that I reference additional assemblies in order to use PresentationFramework. This is a WinForms app, so that seems a bit much just to grab a thumbnail image.
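For the record, as I understand that suggestion, the WPF route would be roughly the following. This is an untested sketch on my part (the method name is mine); it assumes references to the WPF assemblies (WindowsBase, PresentationCore), and the point would be that DecodePixelWidth makes the decoder produce a small bitmap directly instead of decoding the full-size image first:
Byte[] GetThumbnailViaWpf(String filePath, Int32 maxWidth)
{
    // Decode straight to thumbnail width rather than loading the full-size pixels.
    var bi = new System.Windows.Media.Imaging.BitmapImage();
    bi.BeginInit();
    bi.UriSource = new Uri(filePath);
    bi.DecodePixelWidth = maxWidth;
    bi.CacheOption = System.Windows.Media.Imaging.BitmapCacheOption.OnLoad;
    bi.EndInit();

    // Re-encode the small bitmap as JPEG and hand back the bytes.
    var encoder = new System.Windows.Media.Imaging.JpegBitmapEncoder();
    encoder.Frames.Add(System.Windows.Media.Imaging.BitmapFrame.Create(bi));
    using (var ms = new MemoryStream())
    {
        encoder.Save(ms);
        return ms.ToArray();
    }
}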
Another suggestion I came across had to do with extracting EXIF information from the file (if I recall correctly) and grabbing just the embedded thumbnail data rather than decoding the entire image. I'm not opposed, but I have yet to find a complete enough example of how that is carried out.
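The closest I have pieced together is something like the following, an untested sketch on my part. It assumes the file actually carries an EXIF thumbnail (GDI+ property id 0x501B), which camera JPEGs usually do but other images may not:
public static Byte[] GetEmbeddedExifThumbnail(String filePath)
{
    const Int32 PropertyTagThumbnailData = 0x501B; // EXIF thumbnail bytes, if present

    using (FileStream fs = File.OpenRead(filePath))
    // validateImageData: false keeps GDI+ from decoding the full-size pixel data up front
    using (Image i = Image.FromStream(fs, false, false))
    {
        if (Array.IndexOf(i.PropertyIdList, PropertyTagThumbnailData) >= 0)
            return i.GetPropertyItem(PropertyTagThumbnailData).Value; // raw JPEG bytes of the thumbnail
    }
    return new Byte[] { };
}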
I also wonder about P/Invoke options that might offer better performance than what GDI+ is (apparently) capable of delivering. But by all means, if there's an optimization I am missing in this code, please point it out.
Here is my current method:
public static Byte[] GetImageThumbnailAsBytes(String FilePath)
{
    if (File.Exists(FilePath))
    {
        Byte[] ba = File.ReadAllBytes(FilePath);
        if (ba != null)
        {
            using (MemoryStream ms = new MemoryStream(ba, false))
            {
                Int32 thWidth = _MaxThumbWidth;
                Int32 thHeight = _MaxThumbHeight;
                Image i = Image.FromStream(ms, true, false);
                ImageFormat imf = i.RawFormat;
                Int32 w = i.Width;
                Int32 h = i.Height;
                Int32 th = thWidth;
                Int32 tw = thWidth;
                if (h > w)
                {
                    Double ratio = (Double)w / (Double)h;
                    th = thHeight < h ? thHeight : h;
                    tw = thWidth < w ? (Int32)(ratio * thWidth) : w;
                }
                else
                {
                    Double ratio = (Double)h / (Double)w;
                    th = thHeight < h ? (Int32)(ratio * thHeight) : h;
                    tw = thWidth < w ? thWidth : w;
                }
                Bitmap target = new Bitmap(tw, th);
                Graphics g = Graphics.FromImage(target);
                g.SmoothingMode = SmoothingMode.HighQuality;
                g.CompositingQuality = CompositingQuality.HighQuality;
                g.InterpolationMode = InterpolationMode.Bilinear; //NearestNeighbor
                g.CompositingMode = CompositingMode.SourceCopy;
                Rectangle rect = new Rectangle(0, 0, tw, th);
                g.DrawImage(i, rect, 0, 0, w, h, GraphicsUnit.Pixel);
                using (MemoryStream ms2 = new MemoryStream())
                {
                    target.Save(ms2, imf);
                    target.Dispose();
                    i.Dispose();
                    return ms2.ToArray();
                }
            }
        }
    }
    return new Byte[] { };
}
P.S. I got here in the first place by using the Visual Studio 2012 profiler, which told me that DrawImage() is responsible for 97.7% of the CPU load while loading images (I did a pause/start to isolate the loading code).
Related
I am trying to improve our web application's image upload feature, which stores images in a database as a byte array and later reads them out and puts them into an HTML image tag to be displayed.
In order to display all the uploaded images, we have a separate set of methods to retrieve a thumbnail image. This involves reading out of the database, converting to a memory stream, using that to create a C# Image, calling GetThumbnailImage, and then converting the result back to a byte array via another memory stream.
A grid loads up all the data (image name, description, category, etc.) and then calls a separate URL with the image id to retrieve a thumbnail image. This URL returns a C# MVC ImageResult. When I call this URL by itself it loads the correct thumbnail with no problem. However, when I load the grid it displays some images fine and then falls over with an OutOfMemoryException. If I skip over the exception, it continues to load the other images fine too.
At first I thought it might be due to leaving one of the streams open, but everything is enclosed in a using block, with Dispose() and Close() called on both memory streams (the first to convert the byte array to an image, the second to convert it back to a byte array) in finally blocks.
I am completely out of ideas: the byte array looks the same and the method being called is the same, but in one instance it works and in the other it doesn't.
I have copied the offending code below. It is the method that converts the image to a thumbnail via our Image object (passed in as the 4th parameter), and it consistently falls over on the System.Drawing.Image.FromStream(ms) line.
private static void ConvertToImage(int size, bool fixWidth, bool fixHeight, Image img)
{
    byte[] picbyte = img.Img;
    using (MemoryStream ms = new MemoryStream(picbyte))
    {
        try
        {
            System.Drawing.Image image = System.Drawing.Image.FromStream(ms);
            int width = image.Width;
            int height = image.Height;
            if (fixWidth && !fixHeight)
            {
                height = (int)Math.Round(((decimal)height / width) * size);
                width = size;
            }
            if (fixHeight && !fixWidth)
            {
                width = (int)Math.Round(((decimal)width / height) * size);
                height = size;
            }
            if ((fixWidth && fixHeight) || (!fixWidth && !fixHeight))
            {
                width = size;
                height = size;
            }
            IntPtr ptr = Marshal.AllocHGlobal(sizeof(int));
            int ptrInt = 0;
            Marshal.WriteInt32(ptr, ptrInt);
            Marshal.FreeHGlobal(ptr);
            MemoryStream ms2 = new MemoryStream();
            using (image = image.GetThumbnailImage(width, height, delegate () { return false; }, ptr))
            {
                try
                {
                    image.Save(ms2, System.Drawing.Imaging.ImageFormat.Png);
                    img.Img = ms2.ToArray();
                    img.MIMEType = "image/png";
                    image.Dispose();
                }
                finally
                {
                    ms2.Close();
                    ms2.Dispose();
                }
            }
        }
        finally //Ensure we close the stream if anything happens.
        {
            ms.Close();
            ms.Dispose();
        }
    }
}
GDI+ (the thing classes like Image and Bitmap are wrappers for) is bad about throwing an OutOfMemoryException when a better-named, non-existent OutOfHandlesException would have been more appropriate.
When working with images in .NET you MUST always dispose of your resources. The objects you are working with are often classes that take little managed memory but hold on to limited unmanaged resources. Because they put so little memory pressure on the garbage collector, if you create a lot of them you can easily run out of GDI handles before the GC gets around to collecting them.
At the top of your function you do
System.Drawing.Image image = System.Drawing.Image.FromStream(ms);
then later on you do
using (image = image.GetThumbnailImage(width, height, delegate () { return false; }, ptr))
This is causing you to lose the reference to the first Image object without disposing it. Use a different variable name for your thumbnail and put that first image in a using block.
P.S. Your .Close() and .Dispose() calls are unnecessary. Putting the disposable objects inside a using block does both of those operations, so you can get rid of all your try-finally blocks and the extra image.Dispose() call. Also, your ptr is not correct: MSDN states you should be passing IntPtr.Zero, not the value 0 written to a pointer.
Here is a quickly updated version that has all the fixes.
private static void ConvertToImage(int size, bool fixWidth, bool fixHeight, Image img)
{
    byte[] picbyte = img.Img;
    using (MemoryStream ms = new MemoryStream(picbyte))
    using (System.Drawing.Image image = System.Drawing.Image.FromStream(ms))
    {
        int width = image.Width;
        int height = image.Height;
        if (fixWidth && !fixHeight)
        {
            height = (int)Math.Round(((decimal)height / width) * size);
            width = size;
        }
        if (fixHeight && !fixWidth)
        {
            width = (int)Math.Round(((decimal)width / height) * size);
            height = size;
        }
        if ((fixWidth && fixHeight) || (!fixWidth && !fixHeight))
        {
            width = size;
            height = size;
        }
        using (MemoryStream ms2 = new MemoryStream())
        using (var thumbnailImage = image.GetThumbnailImage(width, height, delegate () { return false; }, IntPtr.Zero))
        {
            thumbnailImage.Save(ms2, System.Drawing.Imaging.ImageFormat.Png);
            img.Img = ms2.ToArray();
            img.MIMEType = "image/png";
        }
    }
}
I am totally stuck on image resizing because I am getting an OutOfMemoryException with the typical image-resizing examples that can be found in the many questions that feature OOMs.
I even tried DynamicImage, which can be found on NuGet, and it also threw an OutOfMemoryException.
Can anyone tell me how I can reduce the quality/size of an image in C#, without loading it into memory?
Edit: I want the C# equivalent to this, if there is one.
Edit: I give up with the typical methods of resizing, as I just can't avoid OutOfMemoryExceptions on my live site, which is running on an old server.
Further Edit: My server's OS is Windows Server 2003 Standard Edition.
I can post examples of my code, but I'm trying to find a way around OutOfMemoryExceptions.
public static void ResizeImage(string imagePath, int imageWidth, int imageHeight, bool upscaleImage) {
    using (Image image = Image.FromFile(imagePath, false)) {
        int width = image.Width;
        int height = image.Height;
        if (width > imageWidth || height > imageHeight || upscaleImage) {
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
            float ratio = 0;
            if (width > height) {
                ratio = (float)width / (float)height;
                width = imageWidth;
                height = Convert.ToInt32(Math.Round((float)width / ratio));
            }
            else {
                ratio = (float)height / (float)width;
                height = imageHeight;
                width = Convert.ToInt32(Math.Round((float)height / ratio));
            }
            using (Bitmap bitmap = new Bitmap(width, height)) {
                bitmap.SetResolution(image.HorizontalResolution, image.VerticalResolution);
                using (Graphics graphic = Graphics.FromImage(bitmap)) {
                    graphic.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                    graphic.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                    graphic.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
                    graphic.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
                    graphic.DrawImage(image, 0, 0, width, height);
                    string extension = ".jpg"; // Path.GetExtension(originalFilePath);
                    using (EncoderParameters encoderParameters = new EncoderParameters(1)) {
                        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
                        using (MemoryStream imageMemoryStream = new MemoryStream()) {
                            bitmap.Save(imageMemoryStream, GetImageCodec(extension), encoderParameters);
                            using (Image result = Image.FromStream(imageMemoryStream, true, false)) {
                                string newFullPathName = //path;
                                result.Save(newFullPathName);
                            }
                        }
                    }
                }
            }
        }
    }
}
I also tried this code as I hoped GetThumbnailImage would reduce the picture quality/size for me, but this is also throwing an OOM exception:
viewModel.File.SaveAs(path);
Image image = Image.FromFile(path);
Image thumbnail = image.GetThumbnailImage(600, 600, null, new IntPtr());
image.Dispose();
File.Delete(path);
thumbnail.Save(path);
thumbnail.Dispose();
Again, both my code examples work on my local machine, so I am not trying to find faults/fixes in the code; it should be fine. I'm looking for any solution that avoids the OOM exceptions. I had the idea of reducing the file size somehow without loading the image into memory, but any alternative ideas that could help would be appreciated.
You can try using ImageMagick, either via the command line or via the .NET bindings. ImageMagick has options to resize as the file is being read, which should reduce memory consumption.
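As a rough sketch of what I mean with the .NET bindings (an assumption on my part: this uses the Magick.NET package, and the exact settings API may differ between versions), the read settings below hint the JPEG decoder to downscale while decoding, so the full-resolution bitmap never has to exist in memory. The command-line equivalent is -define jpeg:size=WxH combined with -thumbnail.
// Read hint: for JPEG this maps to the jpeg:size define, so decoding happens at reduced size.
// Dimensions and paths below are placeholders.
var settings = new ImageMagick.MagickReadSettings
{
    Width = 1200,
    Height = 1200
};

using (var image = new ImageMagick.MagickImage(@"C:\images\input.jpg", settings))
{
    image.Resize(600, 600);               // final resize; aspect ratio is preserved by default
    image.Write(@"C:\images\thumb.jpg");  // re-encode to disk
}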
It may be that the "image" you are using is either not a supported format or is corrupted.
I failed to mention that I inherited legacy code that was not disposing of images properly; I did not think it was relevant since I have since fixed it. The strange thing is that, after restarting the website and the app pool, I was able to upload pictures again without getting an OutOfMemoryException. I'm struggling to understand why, as I changed the code to dispose of images properly and have done several deploys since, so I would have expected that to clear any undisposed images from memory. All the code for picture resizing and uploading was in a static class, and I believe GC.Collect() does not work on static variables?
My theory is that the undisposed images built up in memory and remained even after I redeployed the site, as that's the only conclusion I can reach given that the code began working again after restarting the app pool.
I would delete my question but it has been answered now, happy to reassign the answer if anyone can help explain what was going on here.
I have a web application where users can upload pictures to create their galleries. Years ago, when I wrote the application, I chose ImageMagick, and I did all my cropping and resizing with it.
Now that I am rewriting the application from scratch, I have replaced ImageMagick with native GDI+ operations, but the more I learn about GDI+, the more I fear I made the wrong choice.
Everywhere I read that GDI+ is for the desktop and should not be used in a server application. I don't know the details, but I guess it's about memory consumption, and indeed I can see GDI+ using more memory than ImageMagick to do the same operations (crop and resize) on the same image (although, to be honest, GDI+ is faster).
I believed GDI+, ImageMagick, or any other library would be more or less the same for these basic operations, and I liked the idea of using native GDI+, figuring that whatever MS ships with .NET should be at least OK.
What is the right approach/tool to use?
This is the code I use to crop:
internal Image Crop(Image image, Rectangle r)
{
    Bitmap bmpCrop;
    using (Bitmap bmpImage = new Bitmap(image))
    {
        bmpCrop = bmpImage.Clone(r, bmpImage.PixelFormat);
        bmpImage.Dispose();
    }
    return (Image)(bmpCrop);
}
This is the code I use to resize:
internal Image ResizeTo(Image sourceImage, int width, int height)
{
    System.Drawing.Image newImage = new Bitmap(width, height);
    using (Graphics gr = Graphics.FromImage(newImage))
    {
        gr.SmoothingMode = SmoothingMode.AntiAlias;
        gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gr.PixelOffsetMode = PixelOffsetMode.HighQuality;
        gr.DrawImage(sourceImage, new Rectangle(0, 0, width, height));
        gr.Dispose();
    }
    return newImage;
}
Can you link to somewhere people have said that GDI+ shouldn't be used on the server? Maybe they know something I don't.
I know a few things about how GDI+ works but nothing about ImageMagick. I did happen upon this page describing ImageMagick's architecture: http://www.imagemagick.org/script/architecture.php
It seems ImageMagick will internally convert images to an uncompressed format with 4 channels and a specific bit depth, typically 16 bits per channel, and do its work with the uncompressed data, which may be in memory or on disk depending on the size. 'identify -version' will tell you what your bit depth is. My impression is that in practice ImageMagick will typically work with a 64-bit RGBA buffer internally, unless you use a Q8 version which will use 32-bit RGBA. It also can use multiple threads, but I doubt that will matter unless you're working with very large images. (If you are working with very large images, ImageMagick is the clear winner.)
GDI+ Bitmap objects will always store uncompressed data in memory and will generally default to 32-bit RGBA. That and 32-bit RGB are probably the most efficient formats. GDI+ is a drawing library, and it wasn't designed for large images, but at least a Bitmap object won't hold any resources other than the memory for the pixel data and image metadata (contrary to popular belief, they do not contain HBITMAP objects).
So they seem very similar to me. I can't say one is clearly better than the other for your use case. If you go with ImageMagick, you should probably use a Q8 build for the speed and memory gains, unless the extra precision is important to you.
It seems like if your only operations are load, save, scale, and crop, you should be able to easily replace the implementation later if you want.
Unless you need to work with metafiles, you should probably be using Bitmap objects internally and not Images. Then you wouldn't have to create an intermediate Bitmap object in your Crop function. That intermediate object may be behind some of the extra memory consumption you observed. If you get Image objects from an external source, I'd suggest trying to cast them to Bitmap and creating a new Bitmap if that doesn't work.
Also, the "using" statement calls Dispose automatically, so there's no need to also call it explicitly.
I wrote something myself:
public void ResizeImageAndRatio(string origFileLocation, string newFileLocation, string origFileName, string newFileName, int newWidth, int newHeight, bool resizeIfWider)
{
    System.Drawing.Image initImage = System.Drawing.Image.FromFile(origFileLocation + origFileName);
    int templateWidth = newWidth;
    int templateHeight = newHeight;
    double templateRate = double.Parse(templateWidth.ToString()) / templateHeight;
    double initRate = double.Parse(initImage.Width.ToString()) / initImage.Height;
    if (templateRate == initRate)
    {
        System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight);
        System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage);
        templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
        templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        templateG.Clear(Color.White);
        templateG.DrawImage(initImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, initImage.Width, initImage.Height), System.Drawing.GraphicsUnit.Pixel);
        templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);
    }
    else
    {
        System.Drawing.Image pickedImage = null;
        System.Drawing.Graphics pickedG = null;
        Rectangle fromR = new Rectangle(0, 0, 0, 0);
        Rectangle toR = new Rectangle(0, 0, 0, 0);
        if (templateRate > initRate)
        {
            pickedImage = new System.Drawing.Bitmap(initImage.Width, int.Parse(Math.Floor(initImage.Width / templateRate).ToString()));
            pickedG = System.Drawing.Graphics.FromImage(pickedImage);
            fromR.X = 0;
            fromR.Y = int.Parse(Math.Floor((initImage.Height - initImage.Width / templateRate) / 2).ToString());
            fromR.Width = initImage.Width;
            fromR.Height = int.Parse(Math.Floor(initImage.Width / templateRate).ToString());
            toR.X = 0;
            toR.Y = 0;
            toR.Width = initImage.Width;
            toR.Height = int.Parse(Math.Floor(initImage.Width / templateRate).ToString());
        }
        else
        {
            pickedImage = new System.Drawing.Bitmap(int.Parse(Math.Floor(initImage.Height * templateRate).ToString()), initImage.Height);
            pickedG = System.Drawing.Graphics.FromImage(pickedImage);
            fromR.X = int.Parse(Math.Floor((initImage.Width - initImage.Height * templateRate) / 2).ToString());
            fromR.Y = 0;
            fromR.Width = int.Parse(Math.Floor(initImage.Height * templateRate).ToString());
            fromR.Height = initImage.Height;
            toR.X = 0;
            toR.Y = 0;
            toR.Width = int.Parse(Math.Floor(initImage.Height * templateRate).ToString());
            toR.Height = initImage.Height;
        }
        pickedG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        pickedG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        pickedG.DrawImage(initImage, toR, fromR, System.Drawing.GraphicsUnit.Pixel);
        System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight);
        System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage);
        templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
        templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        templateG.Clear(Color.White);
        templateG.DrawImage(pickedImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, pickedImage.Width, pickedImage.Height), System.Drawing.GraphicsUnit.Pixel);
        templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);
        templateG.Dispose();
        templateImage.Dispose();
        pickedG.Dispose();
        pickedImage.Dispose();
    }
    initImage.Dispose();
}
I'm taking a Stream, converting it to an Image, processing that image, and then returning a FileStream.
Is this a performance problem? If not, what's the optimal way to convert it back and return a stream?
public FileStream ResizeImage(int h, int w, Stream stream)
{
    var img = Image.FromStream(stream);
    /* ..Processing.. */
    //converting back to stream? is this right?
    img.Save(stream, ImageFormat.Png);
    return stream;
}
The situation in which this is running: a user uploads an image on my site, the controller gives me a Stream, I resize it, and then I send the stream to Rackspace (Rackspace takes a FileStream).
You basically want something like this, don't you:
public void Resize(Stream input, Stream output, int width, int height)
{
    using (var image = Image.FromStream(input))
    using (var bmp = new Bitmap(width, height))
    using (var gr = Graphics.FromImage(bmp))
    {
        gr.CompositingQuality = CompositingQuality.HighSpeed;
        gr.SmoothingMode = SmoothingMode.HighSpeed;
        gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gr.DrawImage(image, new Rectangle(0, 0, width, height));
        bmp.Save(output, ImageFormat.Png);
    }
}
which will be used like this:
using (var input = File.OpenRead("input.jpg"))
using (var output = File.Create("output.png"))
{
    Resize(input, output, 640, 480);
}
That looks as simple as it can be. You have to read the entire image contents to be able to process it and you have to write the result back.
FileStreams are the normal .NET way to handle files, so for normal purposes your approach is okay.
The only thing I don't understand is why you return the FileStream again; it is the same object that was passed in as a parameter.
If you are doing a lot of images and only modify parts of the data, memory mapped files could improve performance. However it is a more advanced concept to use.
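If you do go down that road, the core of it would look roughly like this; a minimal sketch under the assumption that all you need is a seekable stream over the file without copying it into a managed byte array first (the file name is a placeholder):
// Map the file into the process's address space and expose it as a stream;
// pages are loaded on demand instead of being read into a buffer up front.
using (var mmf = System.IO.MemoryMappedFiles.MemoryMappedFile.CreateFromFile("input.jpg", FileMode.Open))
using (var view = mmf.CreateViewStream())
using (var image = Image.FromStream(view))
{
    // process the image as usual
}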
I'm scaling images down in C#, and I've compared my methods with the best method in Photoshop CS5 and cannot replicate it.
In PS I'm using Bicubic Sharper, which looks really good. However, when trying to do the same in C# I don't get results of the same quality. I've tried bicubic interpolation as well as HQ bicubic, smoothing modes HQ/None/AA, and various compositing modes; about 50 different variations in all, and each one comes out pretty close to the image on the right.
You'll notice the pixelation on her back and around the title, and the author's name doesn't come out well either.
(Left is PS, right is C#.)
It seems that C# bicubic does too much smoothing even with smoothing set to none. I've been playing around with many variations of the following code:
g.CompositingQuality = CompositingQuality.HighQuality;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.None;
g.SmoothingMode = SmoothingMode.None;
Edit: As requested, here is the starting image (1 MB).
Perhaps I am missing something, but I have typically used the code below to resize/compress JPEG images. Personally, I think the result turned out pretty well given your source image. The code doesn't handle a few edge cases concerning the input parameters, but overall it gets the job done (I have additional extension methods for cropping and combining image transformations, if interested).
Image scaled to 25% of the original size, using 90% compression (~30 KB output file).
Image Scaling Extension Methods:
public static Image Resize(this Image image, Single scale)
{
    if (image == null)
        return null;
    scale = Math.Max(0.0F, scale);
    Int32 scaledWidth = Convert.ToInt32(image.Width * scale);
    Int32 scaledHeight = Convert.ToInt32(image.Height * scale);
    return image.Resize(new Size(scaledWidth, scaledHeight));
}

public static Image Resize(this Image image, Size size)
{
    if (image == null || size.IsEmpty)
        return null;
    var resizedImage = new Bitmap(size.Width, size.Height, image.PixelFormat);
    resizedImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);
    using (var g = Graphics.FromImage(resizedImage))
    {
        var location = new Point(0, 0);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(image, new Rectangle(location, size), new Rectangle(location, image.Size), GraphicsUnit.Pixel);
    }
    return resizedImage;
}
Compression Extension Method:
public static Image Compress(this Image image, Int32 quality)
{
    if (image == null)
        return null;
    quality = Math.Max(0, Math.Min(100, quality));
    using (var encoderParameters = new EncoderParameters(1))
    {
        var imageCodecInfo = ImageCodecInfo.GetImageEncoders().First(encoder => String.Compare(encoder.MimeType, "image/jpeg", StringComparison.OrdinalIgnoreCase) == 0);
        var memoryStream = new MemoryStream();
        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, Convert.ToInt64(quality));
        image.Save(memoryStream, imageCodecInfo, encoderParameters);
        return Image.FromStream(memoryStream);
    }
}
Usage:
using (var source = Image.FromFile(@"C:\~\Source.jpg"))
using (var resized = source.Resize(0.25F))
using (var compressed = resized.Compress(90))
    compressed.Save(@"C:\~\Output.jpg");
NOTE:
For anyone who may comment: you cannot dispose the MemoryStream created in the Compress method until after the image is disposed. If you reflect into the implementation of Dispose on MemoryStream, it is actually safe not to call Dispose explicitly. The only alternative would be to wrap the image/memory stream in a custom class that implements Image/IDisposable.
Looking at the amount of JPEG artifacts, especially at the top of the image, I think you set the JPEG compression too high. That results in a smaller file, but reduces image quality and seems to add more blur.
Can you try saving it at a higher quality? I assume the line containing CompositingQuality.HighQuality already does this, but maybe you can find an even higher-quality mode. What are the differences in file size between Photoshop and C#? And how does the Photoshop image look after you save it and reopen it? Just resizing in Photoshop doesn't introduce any JPEG data loss; you will only notice it after you've saved the image as JPEG and then closed and reopened it.
I stumbled upon this question.
I used this code to apply no compression to the JPEG, and it comes out like the PS version:
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
ImageCodecInfo ici = null;
foreach (ImageCodecInfo codec in codecs)
{
    if (codec.MimeType == "image/jpeg")
        ici = codec;
}
EncoderParameters ep = new EncoderParameters();
ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)100);