ImageResizer - specify portrait or landscape - c#

We want to take a scanned image and determine whether it is landscape or portrait, then rotate it if needed based on the user's preference for landscape or portrait.
I.e. the user wants it in portrait, but its original format was landscape.
How do we determine whether it is portrait or landscape and, if required, rotate it 90 degrees?
I'm trying something like the code below, but I'm getting stuck on streams not being opened or reset, etc. More importantly, is this the right/most efficient approach? I don't see an instruction like a hypothetical desiredAspectRatio=Portrait that would do this automatically in ImageResizer.NET, correct?
int? imageWidth;
int? imageHeight;
using (var updatedImageFileStream = new MemoryStream())
{
    ImageJob imageJob = new ImageJob(origFileStream, updatedImageFileStream,
        new Instructions(strInstructions));
    imageJob.Build();

    // change to portrait if required
    imageWidth = imageJob.SourceWidth;
    imageHeight = imageJob.SourceHeight;
    if (imageWidth > imageHeight)
    {
        //updatedImageFileStream.Seek(0, SeekOrigin.Begin);
        strInstructions = "rotate=90;";
        imageJob = new ImageJob(updatedImageFileStream, updatedImageFileStream,
            new Instructions(strInstructions));
        imageJob.Build();
    }

    updatedImageFileStream.Seek(0, SeekOrigin.Begin);

    // upload image to azure
    await azureRepository.UploadAsync(serverRelativePath, updatedImageFileStream, contentType);
    origFileStream.Dispose();
}
UPDATE
I got it to work using the code below, but I'm not sure it's the most efficient.
Do you need to create a new stream, or can ImageJob take the same stream as both source and destination to overwrite it? I got a "cannot access a closed stream" error when trying that, so maybe you do need to create a new stream as I did.
I don't like the duplicate call to upload the image (await azureRepository.UploadAsync(serverRelativePath, updatedImageFileStream, contentType);), but I couldn't figure out how to copy the second image stream over the first so I could keep a single call to UploadAsync. I kept getting "cannot access a closed stream" errors.
Am I missing anything else?
Working code, but is it efficient?
int? imageWidth;
int? imageHeight;
using (var updatedImageFileStream = new MemoryStream())
{
    ImageJob imageJob = new ImageJob(origFileStream, updatedImageFileStream,
        new Instructions(strInstructions));
    imageJob.Build();

    // change to portrait if required - WORKS; run this unconditionally, or gate it on a "change to Portrait" parameter
    imageWidth = imageJob.SourceWidth;
    imageHeight = imageJob.SourceHeight;
    if (imageWidth > imageHeight)
    {
        updatedImageFileStream.Seek(0, SeekOrigin.Begin);
        strInstructions = "rotate=90;";
        var updatedImageFileStream2 = new MemoryStream();
        imageJob = new ImageJob(updatedImageFileStream, updatedImageFileStream2,
            new Instructions(strInstructions));
        imageJob.Build();
        updatedImageFileStream2.Seek(0, SeekOrigin.Begin);

        // upload image to azure
        await azureRepository.UploadAsync(serverRelativePath, updatedImageFileStream2, contentType);
        updatedImageFileStream2.Dispose();
    }
    else
    {
        updatedImageFileStream.Seek(0, SeekOrigin.Begin);

        // upload image to azure
        await azureRepository.UploadAsync(serverRelativePath, updatedImageFileStream, contentType);
    }
    origFileStream.Dispose();
}

Instead of processing the image just to get the dimensions, you could use Config.Current.CurrentImageBuilder.LoadImage and query .Width and .Height on the resulting Bitmap instance. Remember to dispose it.
For a much faster solution, you could use Imageflow.NET, with the ImageJob.GetImageInfo() method to get the width/height and ImageJob.BuildCommandString(source, dest, "rotate=90") to perform the rotation. This also has the benefit of producing much smaller files at better quality levels.
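For the first suggestion, a minimal sketch might look like the following. It assumes ImageResizer's ImageBuilder.LoadImage(source, settings) overload and reuses origFileStream, strInstructions and the Azure upload variables from the question; adjust to your own pipeline.
// Sketch only: decode once to read the dimensions, then run a single ImageJob.
// ImageResizer may consume/dispose the source it reads from, so work from a byte copy.
byte[] sourceBytes;
using (var copy = new MemoryStream())
{
    origFileStream.CopyTo(copy);
    sourceBytes = copy.ToArray();
}

using (var probeStream = new MemoryStream(sourceBytes))
using (var bitmap = ImageResizer.Configuration.Config.Current.CurrentImageBuilder
    .LoadImage(probeStream, new ImageResizer.ResizeSettings()))
{
    if (bitmap.Width > bitmap.Height)
    {
        strInstructions += "rotate=90;"; // landscape scan, user wants portrait (assumes strInstructions ends with ';')
    }
}

using (var sourceStream = new MemoryStream(sourceBytes))
using (var updatedImageFileStream = new MemoryStream())
{
    new ImageJob(sourceStream, updatedImageFileStream, new Instructions(strInstructions)).Build();
    updatedImageFileStream.Seek(0, SeekOrigin.Begin);
    await azureRepository.UploadAsync(serverRelativePath, updatedImageFileStream, contentType);
}
This avoids the second encode pass entirely, since the rotation decision is made before the single ImageJob runs.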

Related

Way to decode an image into a thumbnail, skipping loading the image completely

I am trying to create thumbnails at decode time, to avoid loading the entire image into memory and then scaling it. I also want to get rid of my other thumbnail code, which uses ShellObjects and whatever the OS file explorer has cached as thumbnails; the problem with the latter is that it depends on something actually being cached.
The following code is my attempt to create the image the moment it is decoded, but it fails with a "Format Unknown" error. I think I am close to a solution, but I have not found an answer. Every "solution" I found loads the entire file, creates two images, and scales the original, which is more overhead than I believe is needed. That is pretty resource-intensive for an image manager loading a thousand image files asynchronously.
public static async Task<Texture2D> GetThumbnail(string filePath)
{
    // Decode the image directly at the given DecodePixelHeight (or width), maintaining aspect ratio.
    var thumbnail = new BitmapImage();
    thumbnail.BeginInit();
    thumbnail.UriSource = new Uri(filePath, UriKind.Absolute);
    thumbnail.DecodePixelHeight = 144; // Fit to this height.
    thumbnail.EndInit();

    // Here I am trying to reset the format. I am aiming to not need this step.
    // Format the bitmap image into a known format.
    var formatted = new FormatConvertedBitmap();
    formatted.BeginInit();
    formatted.Source = thumbnail;
    formatted.DestinationFormat = System.Windows.Media.PixelFormats.Default;
    formatted.EndInit();

    using var stream = new MemoryStream();
    var bytesPerPixel = (formatted.DestinationFormat.BitsPerPixel + 7) / 8;
    var stride = 4 * ((formatted.PixelWidth * bytesPerPixel + 3) / 4);
    var buffer = new byte[formatted.PixelHeight * stride];
    formatted.CopyPixels(buffer, stride, 0);
    await stream.WriteAsync(buffer, 0, buffer.Length);
    return Texture2D.FromStream(___.GraphicsDevice, stream);
}

JPEGEncoder Windows Media Imaging not honoring color profile of Image

I have the following image (I've put a screen-grab of the image here as its size is more than 2 MB; the original can be downloaded from https://drive.google.com/file/d/1rC2QQBzMhZ8AG5Lp5PyrpkOxwlyP9QaE/view?usp=sharing).
I'm reading the image using the BitmapDecoder class and saving it using a JPEG encoder.
This results in the following image, which is off-color and faded.
var frame = BitmapDecoder.Create(new Uri(inputFilePath, UriKind.RelativeOrAbsolute),
    BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.None).Frames[0];
var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(frame);
using (var stream = File.OpenWrite(outputFilePath))
{
    encoder.Save(stream);
}
The image uses the Photoshop RGB color profile. I tried setting the color profile using the following code, but this results in the error "The designated BitmapEncoder does not support ColorContexts":
encoder.ColorContexts = frame.ColorContexts;
Update:
Cloning the image seems to fix the issue, but when I resize the image using the following transformation, the color profile is not preserved:
Transform transform = new ScaleTransform(width / frame.Width * 96 / frame.DpiX,
    height / frame.Height * 96 / frame.DpiY, 0, 0);
var copy = BitmapFrame.Create(frame);
var resized = BitmapFrame.Create(new TransformedBitmap(copy, transform));
encoder.Frames.Add(resized);
using (var stream = File.OpenWrite(outputFilePath))
{
    encoder.Save(stream);
}
The image bits are identical; this is a metadata issue. The image file contains lots of metadata (XMP, Adobe, unknown, etc.), and this metadata contains two color profiles/spaces/contexts:
ProPhoto RGB (usually found in C:\WINDOWS\system32\spool\drivers\color\ProPhoto.icm)
sRGB IEC61966-2.1 (usually found in C:\WINDOWS\system32\spool\drivers\color\sRGB Color Space Profile.icm)
The problem occurs because the order of the two contexts may differ in the target file for some reason. An image viewer can either use no color profile (Paint, Paint 3D, Paint.NET, IrfanView, etc.) or, in my experience, use the last color profile in the file (Windows Photo Viewer, Photoshop, etc.).
You can fix your issue if you clone the frame, i.e.:
var frame = BitmapDecoder.Create(new Uri(inputFilePath, UriKind.RelativeOrAbsolute),
    BitmapCreateOptions.PreservePixelFormat, BitmapCacheOption.None).Frames[0];
var encoder = new JpegBitmapEncoder();
var copy = BitmapFrame.Create(frame);
encoder.Frames.Add(copy);
using (var stream = File.OpenWrite(outputFilePath))
{
    encoder.Save(stream);
}
In this case, the order is preserved as-is.
If you recreate the frame or transform it in any way, you can copy the metadata and color contexts like this:
var ctxs = new List<ColorContext>();
ctxs.Add(frame.ColorContexts[1]); // or just remove this
ctxs.Add(frame.ColorContexts[0]); // first color profile is ProPhoto RGB, make sure it's last
var resized = BitmapFrame.Create(new TransformedBitmap(frame, transform),
    frame.Thumbnail, (BitmapMetadata)frame.Metadata, ctxs.AsReadOnly());
var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(resized);
using (var stream = File.OpenWrite("resized.jpg"))
{
    encoder.Save(stream);
}
Note that an image with more than one color context is painful (and IMHO should not be created or saved). Color profiles are there to ensure correct display, so two or more (rather conflicting! sRGB vs ProPhoto) profiles associated with one image means it can be displayed ... in two or more ways.
You'll have to determine what your preferred color profile is in this strange case.

Why does writing an image to a stream give me a different result than directly saving to a file?

I am trying to add some rectangles to an existing image. When using the following code, everything works fine:
var bytes = File.ReadAllBytes("myPath\\input.jpg");
var stream = new MemoryStream(bytes);
using (var i = new Bitmap(stream))
{
    using (var graphics = Graphics.FromImage(i))
    {
        var selPen = new Pen(Color.Blue);
        graphics.DrawRectangle(selPen, 10, 10, 50, 50);
        i.Save("myPath\\output.jpg", ImageFormat.Jpeg);
    }
}
But saving the image to the same MemoryStream and then writing all the bytes to a file gives me an almost entirely grey image.
This does not work:
var bytes = File.ReadAllBytes("myPath\\input.jpg");
var stream = new MemoryStream(bytes);
using (var i = new Bitmap(stream))
{
    using (var graphics = Graphics.FromImage(i))
    {
        var selPen = new Pen(Color.Blue);
        graphics.DrawRectangle(selPen, 10, 10, 50, 50);
        i.Save(stream, ImageFormat.Jpeg);
    }
}
File.WriteAllBytes("myPath\\output.jpg", stream.ToArray());
The (wrong) image looks like this:
As you can see, only part of the image is grey; some of the actual image (the white part) is still visible.
Why is this happening and what is the correct solution?
Thanks!
You've double-written to the Stream in the second example: it still contains the original data, and then you've appended more data with the Save. A Stream works (sort of) like a video tape. If you want to overwrite the stream, you need to do that very carefully (and not all streams even support that concept - think "network stream", "encryption stream", etc.). Note that ToArray (and the GetBuffer / TryGetBuffer methods) see all the data, not just what you're thinking of as the "new" data - a concept that doesn't really exist; like a video tape, you only have the current position and the length, and if you need to know where the first show ends and the second show starts, you have to note that yourself, manually.
In this case, adding:
stream.Position = 0; // rewind
stream.SetLength(0); // truncate (important in case the new data is *shorter* than the old)
after reading it and before Save should fix it.
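Applied to the failing snippet, that fix could look roughly like this (a sketch; the stream is created expandable rather than wrapped around the byte array so the re-encoded JPEG can be larger than the input, and if GDI+ objects to re-using the Bitmap's source stream, fall back to a second MemoryStream as the next answer suggests):
var bytes = File.ReadAllBytes("myPath\\input.jpg");
using (var stream = new MemoryStream())
{
    // Expandable stream (not wrapped around bytes), so the re-encoded JPEG may grow.
    stream.Write(bytes, 0, bytes.Length);
    stream.Position = 0;

    using (var i = new Bitmap(stream))
    using (var graphics = Graphics.FromImage(i))
    using (var selPen = new Pen(Color.Blue))
    {
        graphics.DrawRectangle(selPen, 10, 10, 50, 50);

        stream.Position = 0;  // rewind
        stream.SetLength(0);  // truncate the original JPEG bytes
        i.Save(stream, ImageFormat.Jpeg);
    }

    File.WriteAllBytes("myPath\\output.jpg", stream.ToArray());
}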
You are saving the image to a stream that was already initialized from a byte array.
Create a new stream and save to that instead:
var stream2 = new MemoryStream();
i.Save(stream2, ImageFormat.Jpeg);
Or simply replace the previous one with a fresh, empty stream of the same capacity:
stream = new MemoryStream(stream.Capacity);

Magick.Net resize image

I have some code to convert an image, which is working now, but the width of the image that is generated is really small. I would like to force it to use a width of 600px.
My code looks like this:
public async Task<string> ConvertImage(byte[] data)
{
    // Create our settings
    var settings = new MagickReadSettings
    {
        Width = 600
    };

    // Create our image
    using (var image = new MagickImage(data, settings))
    {
        // Create a new memory stream
        using (var memoryStream = new MemoryStream())
        {
            // Set to a png
            image.Format = MagickFormat.Png;
            image.Write(memoryStream);
            memoryStream.Position = 0;

            // Create a new blob block to hold our image
            var blockBlob = container.GetBlockBlobReference(Guid.NewGuid().ToString() + ".png");

            // Upload to azure
            await blockBlob.UploadFromStreamAsync(memoryStream);

            // Return the blob's url
            return blockBlob.StorageUri.PrimaryUri.ToString();
        }
    }
}
The image I have uploaded is an AI file, but when it gets converted it is only 64px wide.
Does anyone know why and how I can fix it?
With the current version of Magick.NET this is the expected behavior. But after seeing your post we made some changes to ImageMagick/Magick.NET. With Magick.NET 7.0.0.0103 and higher you will get an image that fits inside the bounds that you specify with Width and Height. So when you specify a Width of 600 you will get an image that is 600 pixels wide.
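For illustration, a small sketch under that assumption (a recent Magick.NET; both bounds are optional, and the source is rendered to fit inside them):
var settings = new MagickReadSettings
{
    // With Magick.NET 7.0.0.0103+ the source is rendered so that it fits
    // inside these bounds instead of at its tiny native/default size.
    Width = 600,
    Height = 600
};

using (var image = new MagickImage(data, settings))
{
    image.Format = MagickFormat.Png;
    // image.Width is now 600 (or less, if height was the limiting dimension)
}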

ImageMagick Pdf to image conversion is too slow

I'm using ImageMagick.NET to generate an image from a PDF. It's working, but the conversion process is too slow. Here is the code:
public void ProcessRequest(HttpContext context)
{
    if (context.Request["id"] != null)
    {
        string id = context.Request["id"].ToString();
        MagickReadSettings settings = new MagickReadSettings();
        settings.Density = new MagickGeometry(300, 300);

        using (MagickImageCollection images = new MagickImageCollection())
        {
            images.Read(System.Web.HttpContext.Current.Server.MapPath(
                string.Format("~/Reciepts/order{0}.pdf", id)), settings);
            MagickImage vertical = images.AppendVertically();

            using (var memoryStream = new MemoryStream())
            {
                vertical.ToBitmap().Save(memoryStream, ImageFormat.Jpeg);
                var d = memoryStream.GetBuffer();
                context.Response.Clear();
                context.Response.ContentType = "image/jpeg";
                context.Response.BinaryWrite(d);
                context.Response.End();
            }
        }
    }
}
Where can I improve?
You are using Magick.NET, not ImageMagick.NET.
It is not necessary to create a bitmap before you send it to the output stream. You can just do this:
using (MagickImage vertical = images.AppendVertically())
{
    vertical.Format = MagickFormat.Jpeg;
    vertical.Write(context.Response.OutputStream);
}
And maybe you should cache the result to a file?
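A rough sketch of that caching idea (the cache path and file naming here are assumptions; render the PDF only when no cached JPEG exists, then serve the file):
// Hypothetical cache location next to the PDFs; adjust to your app.
string cachePath = context.Server.MapPath(string.Format("~/Reciepts/order{0}.jpg", id));

if (!File.Exists(cachePath))
{
    using (var images = new MagickImageCollection())
    {
        images.Read(context.Server.MapPath(string.Format("~/Reciepts/order{0}.pdf", id)), settings);
        using (var vertical = images.AppendVertically())
        {
            vertical.Format = MagickFormat.Jpeg;
            vertical.Write(cachePath);  // render once, reuse on later requests
        }
    }
}

context.Response.ContentType = "image/jpeg";
context.Response.WriteFile(cachePath);  // cheap on every subsequent request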
If you've decided to use Magick.NET, the method itself is not wrong.
The first answer gives you a using statement for the MagickImage, but that only makes a few milliseconds' difference to the job.
The slow line is this one:
images.Read(System.Web.HttpContext.Current.Server.MapPath(string.Format("~/Reciepts/order{0}.pdf", id)), settings);
because of the settings object's properties. You are asking for the image at 300 dpi, and 300 may be very heavy for your CPU:
settings.Density = new MagickGeometry(300, 300);
You can try a lower density instead of 300 dpi; a lower density is faster:
settings.Density = new Density(72, 72);
I think there must be another, faster way to create an image from a PDF file. Magick.NET uses Ghostscript to generate the image from the PDF, and Ghostscript is slow and sometimes fails on complicated (layered) PDFs.
