What I want to achieve is to read a JPEG image from disk, resize it (reduce the resolution) and return the resulting image to a different module which is going to save the image to an AWS S3 bucket... I am confused whether I should return the resulting image in a byte[] or MemoryStream.
I have seen this tutorial and have written the following function, which reads an image, resizes it and returns the resulting image in a byte[].
public byte[] GetResizedImage(string folderPath, string fileName)
{
    // read the original image from disk
    FileInfo originalImage = ReadFileFromDisk(folderPath, fileName);
    // resize the image, set width to 640px and respect original aspect ratio
    using (var image = Image.FromStream(originalImage.OpenRead()))
    {
        int newWidth = image.Width;
        int newHeight = image.Height;
        float aspectRatio = (float)image.Width / image.Height; // cast to avoid integer division
        if (image.Width > 640)
        {
            newWidth = 640;
            newHeight = Convert.ToInt32(GlobalConstants.NoOfPixelsForImageResizing / aspectRatio);
        }
        // resize image
        Image thumbnail = image.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero);
        using (var thumbnailStream = new MemoryStream())
        {
            thumbnail.Save(thumbnailStream, ImageFormat.Jpeg);
            return thumbnailStream.ToArray(); // <-- return image in byte[]
        }
    }
}
I am not clear whether it makes any difference to change the above code to return a MemoryStream instead of a byte[]. If I change the code to return a MemoryStream, then I need to change the last three lines to:
var thumbnailStream = new MemoryStream();
thumbnail.Save(thumbnailStream, ImageFormat.Jpeg);
return thumbnailStream;
This way, I won't be able to dispose of the MemoryStream inside the method... I'm not sure if this would be bad practice?
The reason for this question is that my other module which saves the reduced image to an S3 bucket, accepts the file input as Stream:
SavetoS3bucket(Stream image, string name)
{
// save image to S3 bucket
}
So I am not clear if I am better off passing a Stream or byte[] to the above method?
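For example, this is just how I picture the two options at the call site (GetResizedImageAsStream is a name I made up for the stream-returning variant):

// Option 1: byte[] version, wrapped in a MemoryStream at the call site.
byte[] resizedBytes = GetResizedImage(folderPath, fileName);
using (var imageStream = new MemoryStream(resizedBytes))
{
    SavetoS3bucket(imageStream, fileName);
}

// Option 2: hypothetical MemoryStream version; the caller disposes the stream.
using (MemoryStream thumbnailStream = GetResizedImageAsStream(folderPath, fileName))
{
    thumbnailStream.Position = 0; // rewind, since Save() leaves the position at the end
    SavetoS3bucket(thumbnailStream, fileName);
}

As far as I can tell, a MemoryStream created from a byte[] starts at position 0, so no rewind would be needed in the first case.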
Related
I'm trying to scale a DICOM slice whose resolution is 1024*1024 (rows*columns). I can scale the slice, but my problem is: when I convert the scaled slice to an array of bytes and write it to the DICOM file, I see nothing, as you can see in the picture below.
var renderImage = new DicomImage(@"D:\11_29_2022_17_21_52\" + i);
renderImage.Scale = 0.5; // scaling factor
Bitmap renderedImageAsBitmap = renderImage.RenderImage().As<Bitmap>();
renderedImageAsBitmap.Save(@"D:\test\" + i + ".jpg", ImageFormat.Jpeg);
// now get the image as jpeg or bitmap, I tried both
byte[] newRawBytes;
using (IImage renderedImage = renderImage.RenderImage())
{
    Bitmap bitmap = renderedImage.As<Bitmap>();
    // Copy image to byte array using MemoryStream
    using (MemoryStream targetStream = new MemoryStream())
    {
        bitmap.Save(targetStream, ImageFormat.Jpeg);
        newRawBytes = targetStream.ToArray();
        // get the image as raw bytes to add it to my dicom file
    }
}
dicomFile.Dataset.AddOrUpdate(DicomTag.PixelData, newRawBytes); // add the raw bytes to the dicom file
I wrote some code to create ico files from any png, jpg, etc. images. The icons seem to be getting created correctly, and look almost like the original image when opened in Paint 3D. Here is how it looks:
But when I set the icon as a thumbnail for a folder, it looks weird and shiny.
Here is how it looks in windows file explorer:
Firstly, I would like to know if this is an issue in Windows itself, or is it code related? If this is Windows related, the code doesn't matter. If not, here it is:
I picked up a couple of code snippets from across the internet, so probably some non-optimized code, but here is the meat of my code:
//imagePaths => all images which I am converting to ico files
imagePaths.ForEach(imgPath => {
    //create a temp png at this path after changing the original img to a squared img
    var tempPNGpath = Path.Combine(icoDirPath, imgName.Replace(ext, ".png"));
    var icoPath = tempPNGpath.Replace(".png", ".ico");
    using (FileStream fs1 = File.OpenWrite(tempPNGpath)) {
        Bitmap b = ((Bitmap)Image.FromFile(imgPath));
        b = b.CopyToSquareCanvas(Color.Transparent);
        b.Save(fs1, ImageFormat.Png);
        fs1.Flush();
        fs1.Close();
        ConvertToIco(b, icoPath, 256);
    }
    File.Delete(tempPNGpath);
});
public static void ConvertToIco(Image img, string file, int size) {
    Icon icon;
    using (var msImg = new MemoryStream())
    using (var msIco = new MemoryStream()) {
        img.Save(msImg, ImageFormat.Png);
        using (var bw = new BinaryWriter(msIco)) {
            bw.Write((short)0);          //0-1 reserved
            bw.Write((short)1);          //2-3 image type, 1 = icon, 2 = cursor
            bw.Write((short)1);          //4-5 number of images
            bw.Write((byte)size);        //6 image width
            bw.Write((byte)size);        //7 image height
            bw.Write((byte)0);           //8 number of colors
            bw.Write((byte)0);           //9 reserved
            bw.Write((short)0);          //10-11 color planes
            bw.Write((short)32);         //12-13 bits per pixel
            bw.Write((int)msImg.Length); //14-17 size of image data
            bw.Write(22);                //18-21 offset of image data
            bw.Write(msImg.ToArray());   // write image data
            bw.Flush();
            bw.Seek(0, SeekOrigin.Begin);
            icon = new Icon(msIco);
        }
    }
    using (var fs = new FileStream(file, FileMode.Create, FileAccess.Write))
        icon.Save(fs);
}
In the Extension class, the method goes:
public static Bitmap CopyToSquareCanvas(this Bitmap sourceBitmap, Color canvasBackground) {
    int maxSide = sourceBitmap.Width > sourceBitmap.Height ? sourceBitmap.Width : sourceBitmap.Height;
    Bitmap bitmapResult = new Bitmap(maxSide, maxSide, PixelFormat.Format32bppArgb);
    using (Graphics graphicsResult = Graphics.FromImage(bitmapResult)) {
        graphicsResult.Clear(canvasBackground);
        int xOffset = (maxSide - sourceBitmap.Width) / 2;
        int yOffset = (maxSide - sourceBitmap.Height) / 2;
        graphicsResult.DrawImage(sourceBitmap, new Rectangle(xOffset, yOffset, sourceBitmap.Width, sourceBitmap.Height));
    }
    return bitmapResult;
}
The differences in scaling are the result of the fact that you're not doing the scaling yourself.
The icon format technically only supports images up to 256x256. You have code to make a square image out of the given input, but you never resize it to 256x256, meaning you end up with an icon file whose header says the image is 256x256 but whose actual image is a lot larger. This is against the format specs, so you are creating a technically corrupted ico file. The strange differences you're seeing are the result of the different downscaling methods the OS uses in different situations to cope with that oversized image.
So the solution is simple: resize the image to 256x256 before putting it into the icon.
If you want more control over any smaller display sizes for the icon, you can add code to resize it to a number of classically used sizes, like 16x16, 32x32, 64x64 and 128x128, and put them all in a single icon file together. I have written an answer to another question that details the process of putting multiple images into a single icon:
A: Combine System.Drawing.Bitmap[] -> Icon
There are quite a few other oddities in your code, though:
I see no reason to save your in-between image as a png file. That whole fs1 stream serves no purpose at all; you never use or load the temp file, you just keep using the b variable, which does not need anything written to disk.
There is no point in first building the icon in a MemoryStream, then loading that into an Icon object through its stream constructor, and then saving that to a file. You can just write the contents of that stream straight to a file, or, heck, use a FileStream right away.
As I noted in the comments, Bitmap is a disposable class, so any bitmap objects you create should be put in using statements as well.
The adapted loading code, with the temp png writing removed, and the using statements and resizes added:
public static void WriteImagesToIcons(List<String> imagePaths, String icoDirPath)
{
    // Change this to whatever you prefer.
    InterpolationMode scalingMode = InterpolationMode.HighQualityBicubic;
    //imagePaths => all images which I am converting to ico files
    imagePaths.ForEach(imgPath =>
    {
        // The correct way of replacing an extension
        String icoPath = Path.Combine(icoDirPath, Path.GetFileNameWithoutExtension(imgPath) + ".ico");
        using (Bitmap orig = new Bitmap(imgPath))
        using (Bitmap squared = orig.CopyToSquareCanvas(Color.Transparent))
        using (Bitmap resize16 = squared.Resize(16, 16, scalingMode))
        using (Bitmap resize32 = squared.Resize(32, 32, scalingMode))
        using (Bitmap resize48 = squared.Resize(48, 48, scalingMode))
        using (Bitmap resize64 = squared.Resize(64, 64, scalingMode))
        using (Bitmap resize96 = squared.Resize(96, 96, scalingMode))
        using (Bitmap resize128 = squared.Resize(128, 128, scalingMode))
        using (Bitmap resize192 = squared.Resize(192, 192, scalingMode))
        using (Bitmap resize256 = squared.Resize(256, 256, scalingMode))
        {
            Image[] includedSizes = new Image[]
                { resize16, resize32, resize48, resize64, resize96, resize128, resize192, resize256 };
            ConvertImagesToIco(includedSizes, icoPath);
        }
    });
}
The CopyToSquareCanvas remains the same, so I didn't copy it here. The Resize function is fairly simple: just use Graphics.DrawImage to paint the picture on a different-sized canvas, after setting the desired interpolation mode.
public static Bitmap Resize(this Bitmap source, Int32 width, Int32 height, InterpolationMode scalingMode)
{
    Bitmap result = new Bitmap(width, height, PixelFormat.Format32bppArgb);
    using (Graphics g = Graphics.FromImage(result))
    {
        // Set desired interpolation mode here
        g.InterpolationMode = scalingMode;
        g.PixelOffsetMode = PixelOffsetMode.Half;
        g.DrawImage(source, new Rectangle(0, 0, width, height), new Rectangle(0, 0, source.Width, source.Height), GraphicsUnit.Pixel);
    }
    return result;
}
And, finally, the above-linked Bitmap[] to Icon function, slightly tweaked to write to a FileStream directly instead of loading the result into an Icon object:
public static void ConvertImagesToIco(Image[] images, String outputPath)
{
    if (images == null)
        throw new ArgumentNullException("images");
    Int32 imgCount = images.Length;
    if (imgCount == 0)
        throw new ArgumentException("No images given!", "images");
    if (imgCount > 0xFFFF)
        throw new ArgumentException("Too many images!", "images");
    using (FileStream fs = new FileStream(outputPath, FileMode.Create, FileAccess.Write))
    using (BinaryWriter iconWriter = new BinaryWriter(fs))
    {
        Byte[][] frameBytes = new Byte[imgCount][];
        // 0-1 reserved, 0
        iconWriter.Write((Int16)0);
        // 2-3 image type, 1 = icon, 2 = cursor
        iconWriter.Write((Int16)1);
        // 4-5 number of images
        iconWriter.Write((Int16)imgCount);
        // Calculate header size for first image data offset.
        Int32 offset = 6 + (16 * imgCount);
        for (Int32 i = 0; i < imgCount; ++i)
        {
            // Get image data
            Image curFrame = images[i];
            if (curFrame.Width > 256 || curFrame.Height > 256)
                throw new ArgumentException("Image too large!", "images");
            // for these three, 0 is interpreted as 256,
            // so the cast reducing 256 to 0 is no problem.
            Byte width = (Byte)curFrame.Width;
            Byte height = (Byte)curFrame.Height;
            Byte colors = (Byte)curFrame.Palette.Entries.Length;
            Int32 bpp;
            Byte[] frameData;
            using (MemoryStream pngMs = new MemoryStream())
            {
                curFrame.Save(pngMs, ImageFormat.Png);
                frameData = pngMs.ToArray();
            }
            // Get the colour depth to save in the icon info. This needs to be
            // fetched explicitly, since png does not support certain types
            // like 16bpp, so it will convert to the nearest valid on save.
            Byte colDepth = frameData[24];
            Byte colType = frameData[25];
            // I think .Net saving only supports colour types 2, 3 and 6 anyway.
            switch (colType)
            {
                case 2: bpp = 3 * colDepth; break; // RGB
                case 6: bpp = 4 * colDepth; break; // ARGB
                default: bpp = colDepth; break;    // Indexed & greyscale
            }
            frameBytes[i] = frameData;
            Int32 imageLen = frameData.Length;
            // Write image entry
            // 0 image width.
            iconWriter.Write(width);
            // 1 image height.
            iconWriter.Write(height);
            // 2 number of colors.
            iconWriter.Write(colors);
            // 3 reserved
            iconWriter.Write((Byte)0);
            // 4-5 color planes
            iconWriter.Write((Int16)0);
            // 6-7 bits per pixel
            iconWriter.Write((Int16)bpp);
            // 8-11 size of image data
            iconWriter.Write(imageLen);
            // 12-15 offset of image data
            iconWriter.Write(offset);
            offset += imageLen;
        }
        for (Int32 i = 0; i < imgCount; i++)
        {
            // Write image data
            // png data must contain the whole png data file
            iconWriter.Write(frameBytes[i]);
        }
        iconWriter.Flush();
    }
}
For uploading images I am using plupload on the client side. Then in my controller I have the following logic:
public ActionResult UploadFile()
{
    try
    {
        var file = Request.Files.Count > 0 ? Request.Files[0] : null;
        using (var fileStream = new MemoryStream())
        {
            using (var oldImage = new Bitmap(file.InputStream))
            {
                var format = oldImage.RawFormat;
                using (var newImage = ImageUtility.ResizeImage(oldImage, 800, 2000))
                {
                    newImage.Save(fileStream, format);
                }
                byte[] bits = fileStream.ToArray();
            }
        }
    }
    catch (Exception ex)
    {
    }
}
ImageUtility.ResizeImage Method:
public static class ImageUtility
{
    public static Bitmap ResizeImage(Bitmap image, int width, int height)
    {
        if (image.Width <= width && image.Height <= height)
        {
            return image;
        }
        int newWidth;
        int newHeight;
        if (image.Width > image.Height)
        {
            newWidth = width;
            newHeight = (int)(image.Height * ((float)width / image.Width));
        }
        else
        {
            newHeight = height;
            newWidth = (int)(image.Width * ((float)height / image.Height));
        }
        var newImage = new Bitmap(newWidth, newHeight);
        using (var graphics = Graphics.FromImage(newImage))
        {
            graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            graphics.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
            graphics.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
            graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
            graphics.FillRectangle(Brushes.Transparent, 0, 0, newWidth, newHeight);
            graphics.DrawImage(image, 0, 0, newWidth, newHeight);
            return newImage;
        }
    }
}
The issue I have here is that the image size increases.
I uploaded an image of 1.62 MB; after this controller is called, it creates an instance of Bitmap, saves the Bitmap to the fileStream and reads the bits with "fileStream.ToArray();", and I end up with 2.35 MB in "bits".
Can anyone tell me the reason for the increase in image size after I save it as a bitmap? I need the Bitmap because I need to check the width and height of the uploaded image and resize it if needed.
The answer is simple: the bitmap takes up more memory than whatever format the image was in previously because it's uncompressed, and it stays in that uncompressed form after you save it.
jpeg, png, gif, etc. are compressed and therefore use fewer bytes than a bitmap, which is uncompressed.
If you just want to save the original image, just save file.InputStream.
If you need to resize, you can use a library to apply jpg/png/etc compression and then save the result.
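For example, staying with System.Drawing, you could re-encode the resized image as JPEG with an explicit quality before reading the bytes. This is only a sketch built on the ResizeImage method from the question; the quality value of 75 is an arbitrary choice, and First() requires System.Linq:

using (var oldImage = new Bitmap(file.InputStream))
using (var newImage = ImageUtility.ResizeImage(oldImage, 800, 2000))
using (var outputStream = new MemoryStream())
{
    // Find the built-in JPEG encoder and set an explicit quality (0-100).
    ImageCodecInfo jpegEncoder = ImageCodecInfo.GetImageEncoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    var encoderParameters = new EncoderParameters(1);
    encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, 75L);
    newImage.Save(outputStream, jpegEncoder, encoderParameters);
    byte[] bits = outputStream.ToArray(); // JPEG-compressed result instead of the raw re-save
}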
What is the goal here? Are you merely trying to upload an image? Does it need to be validated as an image? Or are you just trying to upload the file?
If upload is the goal, without any regard to validation, just move the bits and save them with the name of the file. As soon as you do this ...
using (var oldImage = new Bitmap(file.InputStream))
... you are converting to a bitmap. Here is where you are telling the bitmap what format to use (raw).
var format = oldImage.RawFormat;
If you merely want to move the file (upload), you can write the stream out to a FileStream object and save the bits.
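A minimal sketch of that "save as is" approach (the target folder and naming here are just assumptions):

// Save the uploaded file without ever decoding it into a Bitmap.
var file = Request.Files.Count > 0 ? Request.Files[0] : null;
if (file != null)
{
    string targetPath = Path.Combine(Server.MapPath("~/Uploads"), Path.GetFileName(file.FileName));
    using (var output = new FileStream(targetPath, FileMode.Create, FileAccess.Write))
    {
        file.InputStream.CopyTo(output);
    }
}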
If you want a few checks on whether the image is empty, etc, you can try this page (http://www.codeproject.com/Articles/1956/NET-Image-Uploading), but realize it is still putting it in an image, which is not your desire if you simply want to save "as is".
I am currently trying to resize an image to a thumbnail, to show as a preview when it's done uploading. I am using the fineuploader plugin for the uploading part. I consistently keep getting a "parameter is not valid" exception. I've seen many posts related to this and tried most of the solutions, but with no success. Here is the snippet of the code:
public static byte[] CreateThumbnail(byte[] PassedImage, int LargestSide)
{
    byte[] ReturnedThumbnail = null;
    using (MemoryStream StartMemoryStream = new MemoryStream(),
                        NewMemoryStream = new MemoryStream())
    {
        StartMemoryStream.Write(PassedImage, 0, PassedImage.Length); // error is being fired on this line
        System.Drawing.Bitmap startBitmap = new Bitmap(StartMemoryStream);
        int newHeight;
        int newWidth;
        double HW_ratio;
        if (startBitmap.Height > startBitmap.Width)
        {
            newHeight = LargestSide;
            HW_ratio = (double)((double)LargestSide / (double)startBitmap.Height);
            newWidth = (int)(HW_ratio * (double)startBitmap.Width);
        }
        else
        {
            newWidth = LargestSide;
            HW_ratio = (double)((double)LargestSide / (double)startBitmap.Width);
            newHeight = (int)(HW_ratio * (double)startBitmap.Height);
        }
        System.Drawing.Bitmap newBitmap = new Bitmap(newWidth, newHeight);
        newBitmap = ResizeImage(startBitmap, newWidth, newHeight);
        newBitmap.Save(NewMemoryStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        ReturnedThumbnail = NewMemoryStream.ToArray();
    }
    return ReturnedThumbnail;
}
I'm out of ideas, any help is appreciated.
Your error is in the new Bitmap(StartMemoryStream) line, not the line above it.
The documentation states that this exception can occur when:
stream does not contain image data or is null.
-or-
stream contains a PNG image file with a single dimension greater than 65,535 pixels.
You should check that you actually have a valid image file in there. For example, write it to a file and try opening it in an image viewer.
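A quick way to inspect it (the path here is arbitrary):

// Dump the incoming bytes to disk, then try opening the file in an image viewer.
File.WriteAllBytes(@"C:\temp\uploaded-image.bin", PassedImage);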
That code is dangerous - every instance of a System.Drawing class must be placed in a using(){} clause.
Here's an alternate solution that uses the ImageResizer NuGet package and resizes the image safely.
var ms = new MemoryStream();
ImageResizer.ImageBuilder.Current.Build(PassedImage, ms, new ResizeSettings() { MaxWidth = LargestSide, MaxHeight = LargestSide });
return ImageResizer.ExtensionMethods.StreamExtensions.CopyToBytes(ms);
I have a byte array that contains a jpeg image. I was just wondering if it is possible to reduce its size?
Edit: OK, I acknowledge my mistake. My question, then, is how I can reduce the quality of an image that comes from a byte array.
Please understand that there is no free lunch. Decreasing the size of a JPEG image by increasing the compression will also decrease the quality of the image. However, that said, you can reduce the size of a JPEG image using the Image class. This code assumes that inputBytes contains the original image.
var jpegQuality = 50;
Image image;
using (var inputStream = new MemoryStream(inputBytes)) {
    image = Image.FromStream(inputStream);
    var jpegEncoder = ImageCodecInfo.GetImageDecoders()
        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    var encoderParameters = new EncoderParameters(1);
    encoderParameters.Param[0] = new EncoderParameter(Encoder.Quality, jpegQuality);
    Byte[] outputBytes;
    using (var outputStream = new MemoryStream()) {
        image.Save(outputStream, jpegEncoder, encoderParameters);
        outputBytes = outputStream.ToArray();
    }
}
Now outputBytes contains a recompressed version of the image using a different JPEG quality.
By decreasing the jpegQuality (should be in the range 0-100) you can increase the compression at the cost of lower image quality. See the Encoder.Quality field for more information.
Here is an example where you can see how jpegQuality affects the image quality. It is the same photo compressed using 20, 50 and 80 as the value of jpegQuality. Sizes are 4.99, 8.28 and 12.9 KB.
Notice how the text becomes "smudged" even when the quality is high. This is why you should avoid using JPEG for images with uniformly colored areas (images/diagrams/charts created on a computer). Use PNG instead. For photos JPEG is very suitable if you do not lower the quality too much.
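For comparison, saving the same image as PNG is just a different ImageFormat and needs no encoder parameters, since PNG is lossless (a small sketch reusing the image variable from the snippet above):

byte[] pngBytes;
using (var outputStream = new MemoryStream())
{
    // PNG has no quality setting; the encoder is lossless.
    image.Save(outputStream, ImageFormat.Png);
    pngBytes = outputStream.ToArray();
}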
Please try:
// Create a thumbnail in byte array format from the image encoded in the passed byte array.
// (RESIZE an image in a byte[] variable.)
public static byte[] CreateThumbnail(byte[] PassedImage, int LargestSide)
{
    byte[] ReturnedThumbnail;
    using (MemoryStream StartMemoryStream = new MemoryStream(),
                        NewMemoryStream = new MemoryStream())
    {
        // write the string to the stream
        StartMemoryStream.Write(PassedImage, 0, PassedImage.Length);
        // create the start Bitmap from the MemoryStream that contains the image
        Bitmap startBitmap = new Bitmap(StartMemoryStream);
        // set thumbnail height and width proportional to the original image.
        int newHeight;
        int newWidth;
        double HW_ratio;
        if (startBitmap.Height > startBitmap.Width)
        {
            newHeight = LargestSide;
            HW_ratio = (double)((double)LargestSide / (double)startBitmap.Height);
            newWidth = (int)(HW_ratio * (double)startBitmap.Width);
        }
        else
        {
            newWidth = LargestSide;
            HW_ratio = (double)((double)LargestSide / (double)startBitmap.Width);
            newHeight = (int)(HW_ratio * (double)startBitmap.Height);
        }
        // create a new Bitmap with dimensions for the thumbnail.
        Bitmap newBitmap = new Bitmap(newWidth, newHeight);
        // Copy the image from the START Bitmap into the NEW Bitmap.
        // This will create a thumbnail-sized copy of the same image.
        newBitmap = ResizeImage(startBitmap, newWidth, newHeight);
        // Save this image to the specified stream in the specified format.
        newBitmap.Save(NewMemoryStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        // Fill the byte[] for the thumbnail from the new MemoryStream.
        ReturnedThumbnail = NewMemoryStream.ToArray();
    }
    // return the resized image as a string of bytes.
    return ReturnedThumbnail;
}

// Resize a Bitmap
private static Bitmap ResizeImage(Bitmap image, int width, int height)
{
    Bitmap resizedImage = new Bitmap(width, height);
    using (Graphics gfx = Graphics.FromImage(resizedImage))
    {
        gfx.DrawImage(image, new Rectangle(0, 0, width, height),
            new Rectangle(0, 0, image.Width, image.Height), GraphicsUnit.Pixel);
    }
    return resizedImage;
}
The byte array size can always be reduced, but you will lose data about your image. You could reduce the quality of the JPEG; then it would take up less data as a byte array.
Think of the JPEG bytes as being a very compressed representation of a much larger number of bytes.
So if you try to apply some function to reduce the number of bytes, it would be like trying to compress something that is already compressed.