I'm scaling images down in C#, and I've compared my methods with the best method in Photoshop CS5 and cannot replicate it.
In PS I'm using Bicubic Sharper, which looks really good. However, when trying to do the same in C# I don't get results of nearly the same quality. I've tried bicubic interpolation as well as HQ bicubic, smoothing mode HQ/None/AA, and various compositing modes; about 50 different variations in all, and each one comes out pretty close to the image on the right.
You'll notice the pixelation on her back and around the title, as well as the author's name not coming out too well.
(Left is PS, right is C#.)
It seems that the C# bicubic does too much smoothing even with smoothing set to none. I've been playing around with many variations of the following code:
g.CompositingQuality = CompositingQuality.HighQuality;
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.PixelOffsetMode = PixelOffsetMode.None;
g.SmoothingMode = SmoothingMode.None;
Edit: As requested, here is the starting image (1 MB).
Perhaps I am missing something, but I have typically used the following code to resize/compress JPEG images. Personally, I think the result turned out pretty well based on your source image. The code doesn't handle a few edge cases concerning input parameters, but overall gets the job done (I have additional extension methods for cropping and combining image transformations if you're interested).
Image scaled to 25% of the original size and saved at 90% JPEG quality (~30 KB output file).
Image Scaling Extension Methods:
public static Image Resize(this Image image, Single scale)
{
    if (image == null)
        return null;

    scale = Math.Max(0.0F, scale);
    Int32 scaledWidth = Convert.ToInt32(image.Width * scale);
    Int32 scaledHeight = Convert.ToInt32(image.Height * scale);

    return image.Resize(new Size(scaledWidth, scaledHeight));
}

public static Image Resize(this Image image, Size size)
{
    if (image == null || size.IsEmpty)
        return null;

    var resizedImage = new Bitmap(size.Width, size.Height, image.PixelFormat);
    resizedImage.SetResolution(image.HorizontalResolution, image.VerticalResolution);

    using (var g = Graphics.FromImage(resizedImage))
    {
        var location = new Point(0, 0);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(image, new Rectangle(location, size), new Rectangle(location, image.Size), GraphicsUnit.Pixel);
    }

    return resizedImage;
}
Compression Extension Method:
public static Image Compress(this Image image, Int32 quality)
{
    if (image == null)
        return null;

    quality = Math.Max(0, Math.Min(100, quality));

    using (var encoderParameters = new EncoderParameters(1))
    {
        var imageCodecInfo = ImageCodecInfo.GetImageEncoders().First(encoder => String.Compare(encoder.MimeType, "image/jpeg", StringComparison.OrdinalIgnoreCase) == 0);
        var memoryStream = new MemoryStream();

        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, Convert.ToInt64(quality));
        image.Save(memoryStream, imageCodecInfo, encoderParameters);

        return Image.FromStream(memoryStream);
    }
}
Usage:
using (var source = Image.FromFile(@"C:\~\Source.jpg"))
using (var resized = source.Resize(0.25F))
using (var compressed = resized.Compress(90))
    compressed.Save(@"C:\~\Output.jpg");
NOTE:
For anyone who may comment: you cannot dispose the MemoryStream created in the Compress method until after the image is disposed. If you reflect into the implementation of Dispose on MemoryStream, it is actually safe not to call Dispose explicitly. The only alternative would be to wrap the image/memory stream in a custom class that implements Image/IDisposable.
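If you would rather dispose the stream deterministically, one workaround (my own sketch, not part of the code above; the method name is made up) is to copy the decoded result into a fresh Bitmap so it no longer depends on the stream:
public static Image CompressDetached(this Image image, Int32 quality)
{
    if (image == null)
        return null;

    quality = Math.Max(0, Math.Min(100, quality));

    var jpegCodec = ImageCodecInfo.GetImageEncoders().First(encoder => String.Compare(encoder.MimeType, "image/jpeg", StringComparison.OrdinalIgnoreCase) == 0);

    using (var encoderParameters = new EncoderParameters(1))
    using (var memoryStream = new MemoryStream())
    {
        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, Convert.ToInt64(quality));
        image.Save(memoryStream, jpegCodec, encoderParameters);

        memoryStream.Position = 0;
        using (var decoded = Image.FromStream(memoryStream))
        {
            // Copying into a new Bitmap duplicates the pixel data, so the
            // MemoryStream can be disposed as soon as this method returns.
            return new Bitmap(decoded);
        }
    }
}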
Looking at the amount of JPEG artifacts, especially at the top of the image, I think you set the JPEG compression too high. That results in a smaller file size, but reduces image quality and seems to add more blur.
Can you try saving it at a higher quality? I assume the line containing CompositingQuality.HighQuality does this already, but maybe you can find an even higher quality mode. What is the difference in file size between Photoshop and C#? And how does the Photoshop image look after you have saved it and reopened it? Just resizing in Photoshop doesn't introduce any JPEG data loss; you will only notice that after you've saved the image as JPEG and then closed and reopened it.
I stumbled upon this question.
I used this code to save the JPEG at maximum quality (quality 100, i.e. minimal compression), and it comes out like the PS version:
ImageCodecInfo[] codecs = ImageCodecInfo.GetImageEncoders();
ImageCodecInfo ici = null;

foreach (ImageCodecInfo codec in codecs)
{
    if (codec.MimeType == "image/jpeg")
        ici = codec;
}

EncoderParameters ep = new EncoderParameters();
ep.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, (long)100);
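The snippet stops short of the actual save call; assuming a scaled bitmap and an output path (both placeholders of mine, not from the answer above), passing the codec and parameters in would look along these lines:
resized.Save(@"C:\temp\output.jpg", ici, ep); // 'resized' and the path are placeholders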
I'm writing a program that, from some text file inputs, outputs a large number of image files.
Currently these images are being created and saved with
Parallel.ForEach(set, c =>
{
    using (Bitmap b = Generate_Image(c, Watermark))
    {
        //The encoder needs some set up to function properly
        string s = string.Format("{0:0000}", c.Index);
        string filepath = $@"{Directory}\{s}.png";

        //best quality comes from manually configuring the codec and
        //encoder used for the image saved
        b.Save(filepath, ImageFormat.Png);
    }
});
However, I noticed that b.Save() has an overload taking an ImageCodecInfo and an EncoderParameters, which should be able to produce higher-quality output (image quality is paramount for this program).
However, I haven't been able to find anywhere what needs to be done to create these objects and pass them in as parameters, at least not examples that work; strangely enough, even the samples in the Microsoft documentation didn't compile. So, if I may ask, how does one use the Image.Save(filepath, encoder, settings) overload?
Thank you in advance for any help offered.
I wrote a small method to retrieve that information in my own code:
private static ImageCodecInfo GetEncoderInfo(string mimeType)
{
    foreach (ImageCodecInfo codec in ImageCodecInfo.GetImageEncoders())
        if (codec.MimeType == mimeType)
            return codec;

    return null;
}
And this is how I use it to save a JPEG:
ImageCodecInfo jpegCodec = GetEncoderInfo("image/jpeg");
if (jpegCodec == null)
    return;

EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);

image.Save(imagePath, jpegCodec, encoderParameters);
It's not needed to save a PNG:
image.Save(imagePath, ImageFormat.Png);
You'll need the following namespaces:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
Also, I should mention that PNG is a lossless image format. That is why there are no encoder parameters for it: you cannot achieve better quality than lossless.
For the highest quality when working with the Graphics class, make sure you set the following properties:
graphics.CompositingQuality = CompositingQuality.HighQuality;
graphics.SmoothingMode = SmoothingMode.HighQuality;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
This is the code I'm using to convert the TIFF to PNG.
var image = Image.FromFile(#"Test.tiff");
var encoders = ImageCodecInfo.GetImageEncoders();
var imageCodecInfo = encoders.FirstOrDefault(encoder => encoder.MimeType == "image/tiff");
if (imageCodecInfo == null)
{
return;
}
var imageEncoderParams = new EncoderParameters(1);
imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);
image.Save(#"Test.png", imageCodecInfo, imageEncoderParams);
The TIFF file size is 46.8 MB (49,161,628 bytes), and the PNG made using this code is 46.8 MB (49,081,870 bytes), but if I use MS Paint the PNG file size is 6.69 MB (7,021,160 bytes).
So what do I change in the code to get the same compression I get by using MS Paint?
Without a good Minimal, Complete, and Verifiable code example, it's impossible to know for sure. But…
The code you posted appears to be getting a TIFF encoder, not a PNG encoder. Just because you name the file with a ".png" extension does not mean that you will get a PNG file. It's the encoder that determines the actual file format.
And it makes perfect sense that if you use the TIFF encoder, you're going to get a file that's exactly the same size as the TIFF file you started with.
Instead, try:
var imageCodecInfo = encoders.FirstOrDefault(encoder => encoder.MimeType == "image/png");
Note that this may or may not get you exactly the same compression used by Paint. PNG has a wide variety of compression "knobs" to adjust the exact way it compresses, and you don't get access to most of those through the .NET API. Paint may or may not be using the same values as your .NET program. But you should at least get a similar level of compression.
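For example, the original snippet could become something like this sketch (PNG ignores the JPEG-style Quality parameter, which is kept here only to mirror the original code):
var image = Image.FromFile(@"Test.tiff");

var encoders = ImageCodecInfo.GetImageEncoders();
var pngCodecInfo = encoders.FirstOrDefault(encoder => encoder.MimeType == "image/png");
if (pngCodecInfo == null)
{
    return;
}

var imageEncoderParams = new EncoderParameters(1);
imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L); // ignored by the PNG encoder

// The encoder, not the file extension, decides the output format.
image.Save(@"Test.png", pngCodecInfo, imageEncoderParams);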
OK, after a lot of trial and error I came up with this.
var image = Image.FromFile(@"Test.tiff");

PictureBox pb = new PictureBox();
pb.Size = new Size(image.Width, image.Height);
pb.Image = image;

Bitmap bm = new Bitmap(image.Width, image.Height);

ImageCodecInfo png = GetEncoder(ImageFormat.Png);
EncoderParameters imageEncoderParams = new EncoderParameters(1);
imageEncoderParams.Param[0] = new EncoderParameter(Encoder.Quality, 100L);

pb.DrawToBitmap(bm, pb.ClientRectangle);
bm.Save(@"Test.png", png, imageEncoderParams);

pb.Dispose();
And I added this method to my code:
private ImageCodecInfo GetEncoder(ImageFormat format)
{
    ImageCodecInfo[] codecs = ImageCodecInfo.GetImageDecoders();

    foreach (ImageCodecInfo codec in codecs)
        if (codec.FormatID == format.Guid)
            return codec;

    return null;
}
By loading the TIFF into a PictureBox and then saving it as a PNG, the output PNG file size is 7.64 MB (8,012,608 bytes), which is a little larger than Paint's, but that is fine.
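For what it's worth, the PictureBox shouldn't be necessary; a simpler variant (my own sketch, building on the accepted suggestion rather than quoting it) is to copy the TIFF into a plain Bitmap and save it with the PNG format directly:
using (var image = Image.FromFile(@"Test.tiff"))
using (var bm = new Bitmap(image)) // copies the pixels into a standard 32bpp bitmap
{
    bm.Save(@"Test.png", ImageFormat.Png); // the PNG encoder applies its own lossless compression
}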
I am totally stuck on image resizing because I am getting an OutOfMemoryException using the typical image-resizing examples that can be found in the many questions that feature OOMs.
I even tried DynamicImage, which can be found on Nuget, and this also threw an OutOfMemoryException.
Can anyone tell me how I can reduce the quality/size of an image in C#, without loading it into memory?
Edit: I want the c# equivalent to this, if there is one?
Edit: I give up with the typical methods of resizing, as I just can't avoid OutOfMemoryExceptions on my live site, which is running on an old server.
Further Edit: My server's OS is Microsoft Server 2003 Standard Edition
I can post examples of my code, but I'm trying to find a way around OutOfMemoryExceptions.
public static void ResizeImage(string imagePath, int imageWidth, int imageHeight, bool upscaleImage) {
    using (Image image = Image.FromFile(imagePath, false)) {
        int width = image.Width;
        int height = image.Height;

        if (width > imageWidth || height > imageHeight || upscaleImage) {
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
            image.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);

            float ratio = 0;
            if (width > height) {
                ratio = (float)width / (float)height;
                width = imageWidth;
                height = Convert.ToInt32(Math.Round((float)width / ratio));
            }
            else {
                ratio = (float)height / (float)width;
                height = imageHeight;
                width = Convert.ToInt32(Math.Round((float)height / ratio));
            }

            using (Bitmap bitmap = new Bitmap(width, height)) {
                bitmap.SetResolution(image.HorizontalResolution, image.VerticalResolution);

                using (Graphics graphic = Graphics.FromImage(bitmap)) {
                    graphic.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
                    graphic.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
                    graphic.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
                    graphic.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
                    graphic.DrawImage(image, 0, 0, width, height);

                    string extension = ".jpg"; // Path.GetExtension(originalFilePath);

                    using (EncoderParameters encoderParameters = new EncoderParameters(1)) {
                        encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, 100L);

                        using (MemoryStream imageMemoryStream = new MemoryStream()) {
                            bitmap.Save(imageMemoryStream, GetImageCodec(extension), encoderParameters);

                            using (Image result = Image.FromStream(imageMemoryStream, true, false)) {
                                string newFullPathName = //path;
                                result.Save(newFullPathName);
                            }
                        }
                    }
                }
            }
        }
    }
}
I also tried this code as I hoped GetThumbnailImage would reduce the picture quality/size for me, but this is also throwing an OOM exception:
viewModel.File.SaveAs(path);
Image image = Image.FromFile(path);
Image thumbnail = image.GetThumbnailImage(600, 600, null, new IntPtr());
image.Dispose();
File.Delete(path);
thumbnail.Save(path);
thumbnail.Dispose();
Again, both of my code examples work on my local machine, so I am not trying to find faults/fixes in the code, as it should be fine. I'm looking for any solution to avoid the OOM exceptions. I had the idea of reducing the file size somehow without loading the image into memory, but any alternative ideas that can help me would be appreciated.
You can try using ImageMagick via command line or the .NET bindings. ImageMagick has some options to resize as the file is being read, which should reduce memory consumption.
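For reference, a resize via the Magick.NET bindings could look roughly like the sketch below; the package name and API surface are from memory, so treat them as assumptions and check the current documentation (paths are placeholders):
using ImageMagick; // NuGet package "Magick.NET" (assumed name)

using (var image = new MagickImage(@"C:\uploads\input.jpg")) // placeholder path
{
    image.Resize(600, 600);   // fits within 600x600, preserving aspect ratio
    image.Quality = 80;       // JPEG quality for the output
    image.Write(@"C:\uploads\output.jpg");
}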
It may be that the "image" you are using is either not a supported format or is corrupted.
I failed to mention that I inherited legacy code that was not disposing of images properly, which I did not think was relevant since I've fixed it. The strange thing is, after restarting the website and the app pool, I was able to upload pictures again without getting an OutOfMemoryException. I'm struggling to understand why this happened, as I have changed the code to dispose of images properly and have done several deploys since, so I would expect that to clear any undisposed images from memory. All the code for picture resizing and uploading was in a static class, and I believe that GC.Collect() does not work on static variables?
My theory is that the undisposed images built up in memory and remained even when I redeployed the site, as that's the only conclusion I can reach, since the code began working again after restarting the app pool.
I would delete my question but it has been answered now, happy to reassign the answer if anyone can help explain what was going on here.
I have a web application where users can upload pictures to create their galleries. Years ago when I wrote the application I choose ImageMagick, and I did all my cropping and resizing with ImageMagick.
Now that I am rewriting the application from scratch, I replaced ImageMagick with native GDI+ operations, but the more I learn about GDI+, the more I worry that I made the wrong choice.
Everywhere I read that GDI+ is for the desktop and should not be used in a server application. I don't know the details, but I guess it's because of memory consumption, and indeed I can see GDI+ using more memory than ImageMagick to do the same operation (crop and resize) on the same image (while, to be honest, GDI+ is faster).
I believed GDI+, ImageMagick, or any other library would be more or less the same for these basic operations, and I liked the idea of using native GDI+, assuming whatever MS ships with .NET should be at least OK.
What is the right approach/tool to use?
This is the code I use to crop:
internal Image Crop(Image image, Rectangle r)
{
    Bitmap bmpCrop;

    using (Bitmap bmpImage = new Bitmap(image))
    {
        bmpCrop = bmpImage.Clone(r, bmpImage.PixelFormat);
        bmpImage.Dispose();
    }

    return (Image)(bmpCrop);
}
This is the code I use to resize:
internal Image ResizeTo(Image sourceImage, int width, int height)
{
    System.Drawing.Image newImage = new Bitmap(width, height);

    using (Graphics gr = Graphics.FromImage(newImage))
    {
        gr.SmoothingMode = SmoothingMode.AntiAlias;
        gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
        gr.PixelOffsetMode = PixelOffsetMode.HighQuality;
        gr.DrawImage(sourceImage, new Rectangle(0, 0, width, height));
        gr.Dispose();
    }

    return newImage;
}
Can you link to somewhere people have said that GDI+ shouldn't be used on the server? Maybe they know something I don't.
I know a few things about how GDI+ works but nothing about ImageMagick. I did happen upon this page describing ImageMagick's architecture: http://www.imagemagick.org/script/architecture.php
It seems ImageMagick will internally convert images to an uncompressed format with 4 channels and a specific bit depth, typically 16 bits per channel, and do its work with the uncompressed data, which may be in memory or on disk depending on the size. 'identify -version' will tell you what your bit depth is. My impression is that in practice ImageMagick will typically work with a 64-bit RGBA buffer internally, unless you use a Q8 version which will use 32-bit RGBA. It also can use multiple threads, but I doubt that will matter unless you're working with very large images. (If you are working with very large images, ImageMagick is the clear winner.)
GDI+ Bitmap objects will always store uncompressed data in memory and will generally default to 32-bit RGBA. That and 32-bit RGB are probably the most efficient formats. GDI+ is a drawing library, and it wasn't designed for large images, but at least a Bitmap object won't hold any resources other than the memory for the pixel data and image metadata (contrary to popular belief, they do not contain HBITMAP objects).
So they seem very similar to me. I can't say one is clearly better than the other for your use case. If you go with ImageMagick, you should probably use a Q8 build for the speed and memory gains, unless the extra precision is important to you.
It seems like if your only operations are load, save, scale, and crop, you should be able to easily replace the implementation later if you want.
Unless you need to work with metafiles, you should probably be using Bitmap objects internally and not Images. Then you wouldn't have to create an intermediate Bitmap object in your Crop function. That intermediate object may be behind some of the extra memory consumption you observed. If you get Image objects from an external source, I'd suggest trying to cast them to Bitmap and creating a new Bitmap if that doesn't work.
Also, the "using" statement calls Dispose automatically, so there's no need to also call it explicitly.
I wrote something myself:
public void ResizeImageAndRatio(string origFileLocation, string newFileLocation, string origFileName, string newFileName, int newWidth, int newHeight, bool resizeIfWider)
{
    System.Drawing.Image initImage = System.Drawing.Image.FromFile(origFileLocation + origFileName);

    int templateWidth = newWidth;
    int templateHeight = newHeight;

    double templateRate = double.Parse(templateWidth.ToString()) / templateHeight;
    double initRate = double.Parse(initImage.Width.ToString()) / initImage.Height;

    if (templateRate == initRate)
    {
        System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight);
        System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage);

        templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
        templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        templateG.Clear(Color.White);
        templateG.DrawImage(initImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, initImage.Width, initImage.Height), System.Drawing.GraphicsUnit.Pixel);

        templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);
    }
    else
    {
        System.Drawing.Image pickedImage = null;
        System.Drawing.Graphics pickedG = null;

        Rectangle fromR = new Rectangle(0, 0, 0, 0);
        Rectangle toR = new Rectangle(0, 0, 0, 0);

        if (templateRate > initRate)
        {
            pickedImage = new System.Drawing.Bitmap(initImage.Width, int.Parse(Math.Floor(initImage.Width / templateRate).ToString()));
            pickedG = System.Drawing.Graphics.FromImage(pickedImage);

            fromR.X = 0;
            fromR.Y = int.Parse(Math.Floor((initImage.Height - initImage.Width / templateRate) / 2).ToString());
            fromR.Width = initImage.Width;
            fromR.Height = int.Parse(Math.Floor(initImage.Width / templateRate).ToString());

            toR.X = 0;
            toR.Y = 0;
            toR.Width = initImage.Width;
            toR.Height = int.Parse(Math.Floor(initImage.Width / templateRate).ToString());
        }
        else
        {
            pickedImage = new System.Drawing.Bitmap(int.Parse(Math.Floor(initImage.Height * templateRate).ToString()), initImage.Height);
            pickedG = System.Drawing.Graphics.FromImage(pickedImage);

            fromR.X = int.Parse(Math.Floor((initImage.Width - initImage.Height * templateRate) / 2).ToString());
            fromR.Y = 0;
            fromR.Width = int.Parse(Math.Floor(initImage.Height * templateRate).ToString());
            fromR.Height = initImage.Height;

            toR.X = 0;
            toR.Y = 0;
            toR.Width = int.Parse(Math.Floor(initImage.Height * templateRate).ToString());
            toR.Height = initImage.Height;
        }

        pickedG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        pickedG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        pickedG.DrawImage(initImage, toR, fromR, System.Drawing.GraphicsUnit.Pixel);

        System.Drawing.Image templateImage = new System.Drawing.Bitmap(templateWidth, templateHeight);
        System.Drawing.Graphics templateG = System.Drawing.Graphics.FromImage(templateImage);

        templateG.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.High;
        templateG.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        templateG.Clear(Color.White);
        templateG.DrawImage(pickedImage, new System.Drawing.Rectangle(0, 0, templateWidth, templateHeight), new System.Drawing.Rectangle(0, 0, pickedImage.Width, pickedImage.Height), System.Drawing.GraphicsUnit.Pixel);

        templateImage.Save(newFileLocation + newFileName, System.Drawing.Imaging.ImageFormat.Jpeg);

        templateG.Dispose();
        templateImage.Dispose();
        pickedG.Dispose();
        pickedImage.Dispose();
    }

    initImage.Dispose();
}
I have a form (using MVC2) which has an image-upload script, but the rules for the final image stored on the server are pretty strict. I can force the file to the dimensions I want, but the result always ends up exceeding the required file size... so I can allow a sub-200 KB image, but once my code has processed it, it ends up slightly bigger.
These are the rules I have to adhere to:
Photographs should be in colour
The permitted image types for the photograph are .JPG or .GIF
The maximum size of the image is 200 KB
The dimensions of the photograph on the badge will be 274 pixels (wide) x 354 pixels (high) @ 200dpi (depth of pixels per inch)
This is what I have currently:
[HttpPost]
public ActionResult ImageUpload(HttpPostedFileBase fileBase)
{
    ImageService imageService = new ImageService();

    if (fileBase != null && fileBase.ContentLength > 0 && fileBase.ContentLength < 204800 && fileBase.ContentType.Contains("image/"))
    {
        string profileUploadPath = "~/Resources/images";

        Path.GetExtension(fileBase.ContentType);
        var newGuid = Guid.NewGuid();
        var extension = Path.GetExtension(fileBase.FileName);

        if (extension.ToLower() != ".jpg" && extension.ToLower() != ".gif") // only allow these types
        {
            return View("WrongFileType", extension);
        }

        EncoderParameters encodingParameters = new EncoderParameters(1);
        encodingParameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 70L); // Set the JPG Quality percentage

        ImageCodecInfo jpgEncoder = imageService.GetEncoderInfo("image/jpeg");
        var uploadedimage = Image.FromStream(fileBase.InputStream, true, true);

        Bitmap originalImage = new Bitmap(uploadedimage);
        Bitmap newImage = new Bitmap(originalImage, 274, 354);

        Graphics g = Graphics.FromImage(newImage);
        g.InterpolationMode = InterpolationMode.HighQualityBilinear;
        g.DrawImage(originalImage, 0, 0, newImage.Width, newImage.Height);

        var streamLarge = new MemoryStream();
        newImage.Save(streamLarge, jpgEncoder, encodingParameters);

        var fileExtension = Path.GetExtension(extension);
        var ImageName = newGuid + fileExtension;

        newImage.Save(Server.MapPath(profileUploadPath) + ImageName);
        //newImage.WriteAllBytes(Server.MapPath(profileUploadPath) + ImageName, streamLarge.ToArray());

        originalImage.Dispose();
        newImage.Dispose();
        streamLarge.Dispose();

        return View("Success");
    }

    return View("InvalidImage");
}
Just to add:
The images are going off to print on a card, so the DPI is important. But I realise that 200 KB is not a lot for a printed image... none of these are my business rules! As it stands with this code, an image uploaded at pretty much 200 KB ends up costing 238 KB(ish).
It's very difficult to calculate the size of a jpeg in advance. Having said that, you don't need to compress it much.
Let's just look at some metrics:
274 * 354 = 96,996 pixels. If you have 8 bits per pixel and 3 colour channels (i.e. 24-bit colour) then you have:
274 * 354 * 8 * 3 = 2,327,904 bits = 290,988 bytes = 284.17 KB.
200 / 284.17 ~ 0.70.
You only need to reduce it to 70% of its original size.
Sadly, it's at this point we get to the limit of my knowledge in this area! But I reckon that by saving as a jpeg it will be in the right size range anyway, even if saving at the highest quality setting.
I would guess at setting the quality to 70 and see what happens.
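If the 200 KB ceiling is hard, one pragmatic approach (my own sketch, not from the answer above) is to encode to a memory stream at decreasing quality until the result fits:
// Sketch: step the JPEG quality down until the encoded size fits under maxBytes.
static byte[] EncodeUnderLimit(Bitmap image, ImageCodecInfo jpgEncoder, long maxBytes)
{
    for (long quality = 90; quality >= 40; quality -= 10)
    {
        using (var parameters = new EncoderParameters(1))
        using (var ms = new MemoryStream())
        {
            parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
            image.Save(ms, jpgEncoder, parameters);

            if (ms.Length <= maxBytes)
                return ms.ToArray();
        }
    }

    return null; // nothing at or above 40% quality fit under the limit
}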
EDIT: DPI settings
Apparently you only need to change the EXIF data. See this answer: https://stackoverflow.com/a/4427411/234415
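If editing the EXIF data directly isn't convenient, setting the resolution on the bitmap before encoding should also stamp 200 DPI into the saved JPEG; a sketch reusing the newImage/jpgEncoder names from the question (my assumption, not part of the linked answer):
newImage.SetResolution(200f, 200f); // DPI recorded in the saved file
newImage.Save(streamLarge, jpgEncoder, encodingParameters);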
You should experiment with the JPEG quality setting. You currently have it set to 90, 80 might be sufficient and will result in a smaller file.
I see some problems with the code:
You are using GetThumbnailImage to create a thumbnail, but that is not intended for such large thumbnails. It works up to about 120x120 pixels. If the image has an embedded thumbnail, that will be used instead of scaling down the full image, so you will be scaling up a smaller image, with obvious quality problems.
You are saving the thumbnail to a memory stream, which you then just throw away.
You are saving the thumbnail to file without specifying the encoder, which means that it will either be saved as a low compressed JPEG image or a PNG image, that's why you get a larger file size.
You never dispose the uploadedImage object.
Note: The resolution (PPI/DPI) has no relevance when you display images on the web.