Image manipulation in C#

I am loading a JPG image from hard disk into a byte[]. Is there a way to resize the image (reduce its resolution) without putting it in a Bitmap object first?
Thanks.

There are always ways, but whether they are better... JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to dig into the JPEG spec, write some kind of parser, and so on. There may be shortcuts that avoid a full interpretation of the original JPG, but I think going that way would be a bad idea.
Oh, and don't forget there are different file formats for JPG (JFIF and EXIF) that you would also need to understand...
I'd think very hard before avoiding objects that are specifically designed for the sort of thing you are trying to do.

A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, and it does a fine job of decoding .jpeg files. The result is a Bitmap object; you can't get around that.
It supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side effect of Graphics.InterpolationMode. It tries to improve the appearance of the reduced image by running the pixels through a filter; the bicubic filter does an excellent job of it.
The result looks great to the human eye, but not so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. Those blended pixel values, however, make it harder for the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower-quality filter. That produces a poorer image, but one that is easier to compress. I doubt you'll appreciate the result, though.
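For reference, a minimal sketch of that Graphics-based resize (the helper name ResizeWithFilter is illustrative, not a framework API); swapping HighQualityBicubic for NearestNeighbor is the "lower quality filter" trade-off described above:
using System.Drawing;
using System.Drawing.Drawing2D;

// Resize via Graphics.DrawImage with an explicitly chosen interpolation filter.
static Bitmap ResizeWithFilter(Image source, int width, int height)
{
    var result = new Bitmap(width, height);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, 0, 0, width, height);
    }
    return result;
}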

Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent-quality resizing involves an interpolation/extrapolation algorithm; it can't just be "pick out every nth pixel", unless you can settle for nearest-neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height)
{
    using (var inStream = new MemoryStream(data))
    using (var outStream = new MemoryStream())
    using (var bmp = System.Drawing.Image.FromStream(inStream))
    using (var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero))
    {
        th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        return outStream.ToArray();
    }
}
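One caveat worth knowing: GetThumbnailImage may return the low-resolution thumbnail embedded in the JPEG's EXIF data, scaled to the requested size, so for targets much larger than an actual thumbnail the Graphics.DrawImage route shown earlier tends to give better quality.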

Related

C# Read huge image into array

I have huge images (1800 MP @ 8 bit or 16 bit), all grayscale, with no alpha, transparency or other extras.
They may come as PNG, TIFF, BMP, or even JPEG, so I need an image library to handle the reading, decompression and so on.
After that, I just want to get an array of the grayscale pixel values out - preferably 2D, but 1D is also alright. It may also be ushort throughout, even for the 8-bit images.
I tried using the built-in BitmapImage of C# - no luck, it just throws exceptions for images this large.
Are there any other libraries that can give me the grayscale values without hassle?
It will be faster to read the file content with a simple file reader and build your own array than to hunt for a library.
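As a deliberately narrow illustration of that approach, here is a sketch that reads one easy case without any imaging library: an uncompressed 8-bit grayscale BMP with a linear palette, stored bottom-up, small enough to fit in a single byte[]. Compressed formats (PNG, JPEG, LZW TIFF) still need a real decoder, and truly huge files would need a streaming read instead of ReadAllBytes:
using System;
using System.IO;

static ushort[] ReadGray8Bmp(string path, out int width, out int height)
{
    byte[] file = File.ReadAllBytes(path);
    int dataOffset = BitConverter.ToInt32(file, 10); // start of pixel data
    width = BitConverter.ToInt32(file, 18);
    height = BitConverter.ToInt32(file, 22);
    int stride = (width + 3) & ~3;                   // rows padded to 4 bytes
    var pixels = new ushort[width * height];         // ushort even for 8 bit
    for (int y = 0; y < height; y++)
    {
        int srcRow = dataOffset + (height - 1 - y) * stride; // bottom-up rows
        for (int x = 0; x < width; x++)
            pixels[y * width + x] = file[srcRow + x];
    }
    return pixels;
}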

C# - Working with high resolution screenshots

I need to capture an area of my desktop, but I need the capture to be very high resolution (at least a few thousand pixels horizontally, and the same vertically). Is it possible to get a screen capture with such a high pixel density? How can I do this? I tried capturing the screen with an AutoIt script and got some very good results (images that were 350 MB); now I would like to do the same using C#.
Edit:
I am reading/writing a .tif file like this, and it already loses most of the data:
using (Bitmap bitmap = (Bitmap)Image.FromFile(@"ScreenShot.tif")) // this file is 350 MB
{
    using (Bitmap newBitmap = new Bitmap(bitmap))
    {
        newBitmap.Save("TESTRES.TIF", ImageFormat.Tiff); // now this file is about 60 MB - why?
    }
}
I am trying to capture my screen like this, but the best I can get out of it is a few megabytes (nowhere near 350 MB):
using (var bmpScreenCapture = new Bitmap(window[2], window[3], PixelFormat.Format32bppArgb))
{
    using (var g = Graphics.FromImage(bmpScreenCapture))
    {
        g.InterpolationMode = InterpolationMode.High;
        g.CopyFromScreen(window[0], window[1], 0, 0, bmpScreenCapture.Size, CopyPixelOperation.SourceCopy);
    }
    bmpScreenCapture.Save("test2.tif", ImageFormat.Tiff);
}
You can't gather more information than the source has.
That's a basic truth, and it applies here too: you can't capture more than your screen's 1920x1080 pixels at their color depth.
On the other hand, since you want to feed the captured image into OCR, there are a few more things to consider, and in fact to do.
OCR is much happier if you help it by optimizing the image. This should involve:
reducing colors and adding contrast
enlarging to the recommended DPI resolution
adding even more contrast
Funnily enough, this helps OCR even though the real information content cannot rise above that of the original source. A good resizing algorithm invents data, and that invented data is often exactly what the OCR software needs (see the sketch below).
You should also take care to store the image in a lossless format such as PNG or TIFF, never JPG.
The exact settings will have to be tuned by trial and error until the OCR results are good enough.
Hint: due to font anti-aliasing, most text on screenshots is surrounded by a halo of colorful pixels. Getting rid of it by reducing or even removing saturation is one approach; maybe you also want to turn anti-aliasing off in your display properties? (Check out ClearType!)
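Here is a rough sketch of that preprocessing, under stated assumptions: PrepareForOcr and the integer scale factor are illustrative, the ColorMatrix is the standard luminance matrix, and contrast adjustment is left out:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

// Enlarge with a high-quality filter and desaturate in one DrawImage pass,
// removing the ClearType color halo around glyphs.
static Bitmap PrepareForOcr(Bitmap source, int scale)
{
    var gray = new ColorMatrix(new[]
    {
        new float[] { .299f, .299f, .299f, 0, 0 },
        new float[] { .587f, .587f, .587f, 0, 0 },
        new float[] { .114f, .114f, .114f, 0, 0 },
        new float[] { 0, 0, 0, 1, 0 },
        new float[] { 0, 0, 0, 0, 1 },
    });
    var result = new Bitmap(source.Width * scale, source.Height * scale,
                            PixelFormat.Format24bppRgb);
    using (var g = Graphics.FromImage(result))
    using (var attrs = new ImageAttributes())
    {
        attrs.SetColorMatrix(gray);
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source,
                    new Rectangle(0, 0, result.Width, result.Height),
                    0, 0, source.Width, source.Height,
                    GraphicsUnit.Pixel, attrs);
    }
    return result; // save as PNG or TIFF, never JPG, before running the OCR
}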

save greyscale image as color JPEG

I'm handling a lot of different image formats within my application. Luckily I was able to hand the biggest part of dealing with image formats off to C#/WPF. However, I've got an issue when saving images:
I'd like to save my images as JPEGs using JpegBitmapEncoder, with an RGB profile. This works fine for color image formats. However, when handling images in, say, Gray16 or Gray8 format, the resulting JPEGs have a grayscale profile, and I really do need an RGB profile for my JPEGs.
I know that I could just create a new WriteableBitmap in Bgra32, copy the data over and use that to create the JPEG. That, however, means I would need to handle the different image formats myself; more importantly, I believe this detour would be quite inefficient computationally (I need to convert a lot of image data!).
I also can't use any solutions outside of C#/WPF (like ImageMagick).
I hope there's a way to solve this easily and efficiently. I found no way to configure JpegBitmapEncoder for this, and I tried ColorConvertedBitmap, but to no avail.
Any ideas?
The hint about FormatConvertedBitmap gave me the solution to my problem! Thanks RononDex!
// Convert the source (whatever its pixel format) to Bgra32, then encode as JPEG.
FormatConvertedBitmap convertImg = new FormatConvertedBitmap(img, PixelFormats.Bgra32, null, 0);
JpegBitmapEncoder encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(convertImg));
encoder.Save(stream);
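For completeness, an end-to-end sketch of the same idea; "gray.tif" and "color.jpg" are placeholder file names, and img/stream correspond to the variables in the snippet above. FormatConvertedBitmap performs the pixel-format conversion inside WPF, so no per-format handling is needed:
using System;
using System.IO;
using System.Windows.Media;
using System.Windows.Media.Imaging;

var img = BitmapFrame.Create(new Uri("gray.tif", UriKind.Relative));
var convertImg = new FormatConvertedBitmap(img, PixelFormats.Bgra32, null, 0);
var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(convertImg));
using (var stream = File.Create("color.jpg"))
    encoder.Save(stream);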

Generate huge image in C#

I need to create a huge image (approx. 24000 x 22000 pixels) with PixelFormat.Format24bppRgb encoding. I know it will be nearly impossible to open it...
What I'm trying to do is this:
Bitmap final = new Bitmap(width, height, PixelFormat.Format24bppRgb);
As expected, an exception is thrown, as I can't easily handle an 11 GB image in memory this way.
But I had an idea: could I write the file as I'm generating it? That way, instead of working in RAM, I would be working on the HD.
Just to explain better: I have about 13K tiles and I plan to stitch them together into this stupidly humongous file. Since I can iterate over them in a given order, I think I could write them straight to disk using unsafe code.
Any suggestions?
ImageMagick's large image (tera-pixel) support can help you put the image together once you have the tiles that compose it. You can either drive it from the command line through a wrapper, or use ImageMagick.NET as an API.
You could write it in a non-compressed format like BMP. BMP stores raw color bytes in rows, so you would load the first row of tiles, read their individual pixel rows and write them out as single composite rows in the output image. That way you only ever have a few tiles open while immediately writing out the output image (see the sketch below).
I don't know how to do the same for a compressed format like JPG or PNG, but I'm sure some specialised software exists for that.
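A sketch of that row-streaming idea, under stated assumptions: getRowBgr is a hypothetical callback that stitches tile data into row y as width*3 BGR bytes, and the header constants follow the standard BITMAPFILEHEADER/BITMAPINFOHEADER layout:
using System;
using System.IO;

// Write a 24bpp BMP header up front, then append rows one at a time,
// so the whole image never has to sit in RAM.
static void WriteBmp24Streaming(string path, int width, int height,
                                Func<int, byte[]> getRowBgr)
{
    int stride = (width * 3 + 3) & ~3;           // BMP rows are padded to 4 bytes
    uint dataSize = (uint)stride * (uint)height; // fits in uint for 24000 x 22000
    using (var w = new BinaryWriter(File.Create(path)))
    {
        // BITMAPFILEHEADER (14 bytes)
        w.Write((ushort)0x4D42);                 // magic "BM"
        w.Write(54u + dataSize);                 // total file size
        w.Write(0u);                             // reserved
        w.Write(54u);                            // offset to pixel data
        // BITMAPINFOHEADER (40 bytes)
        w.Write(40u);                            // header size
        w.Write(width);
        w.Write(-height);                        // negative height = top-down rows
        w.Write((ushort)1);                      // color planes
        w.Write((ushort)24);                     // bits per pixel
        w.Write(0u);                             // BI_RGB (uncompressed)
        w.Write(dataSize);
        w.Write(0); w.Write(0);                  // pixels per meter (unused)
        w.Write(0u); w.Write(0u);                // palette entries (none)
        var pad = new byte[stride - width * 3];
        for (int y = 0; y < height; y++)         // stitch and write one row at a time
        {
            w.Write(getRowBgr(y), 0, width * 3);
            w.Write(pad);
        }
    }
}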
Depending on what you intend to do with the image once it's complete, I would suggest dividing it into four parts and working with those. I have worked with 10,000 x 10,000 pixel images without an OOM exception being thrown.

What quality level does Image.Save() use for jpeg files?

I just got a real surprise when I loaded a jpg file, turned around and saved it with a quality of 100, and the size was almost 4x the original. To investigate further, I opened and saved without explicitly setting the quality, and the file size was exactly the same. I figured this was because nothing changed, so it's just writing the exact same bits back to a file. To test this assumption I drew a big fat line diagonally across the image and saved again without setting quality (this time I expected the file size to jump, because the image would be "dirty"), but it decreased by ~10 KB!
At this point I really don't understand what is happening when I simply call Image.Save() without specifying a compression quality. How is the file size so close to the original (after the image is modified) when no quality is set, yet when I set quality to 100 (essentially no compression) the file size is several times larger than the original?
I've read the documentation on Image.Save(), and it lacks any detail about what happens behind the scenes. I've googled every which way I can think of, but I can't find any additional information that would explain what I'm seeing. I have been working for 31 hours straight, so maybe I'm missing something obvious ;0)
All of this has come about while implementing some library methods to save images to a database. I've overloaded our "SaveImage" method to allow explicitly setting a quality, and during my testing I came across the odd (to me) results explained above. Any light you can shed will be appreciated.
Here is some code that illustrates what I'm experiencing:
string filename = @"C:\temp\image testing\hh.jpg";
string destPath = @"C:\temp\image testing\";
using (Image image = Image.FromFile(filename))
{
    ImageCodecInfo codecInfo = ImageUtils.GetEncoderInfo(ImageFormat.Jpeg);
    // Set the quality explicitly
    EncoderParameters parameters = new EncoderParameters(1);
    // Quality: 10
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 10L);
    image.Save(destPath + "10.jpg", codecInfo, parameters);
    // Quality: 75
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 75L);
    image.Save(destPath + "75.jpg", codecInfo, parameters);
    // Quality: 100
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, 100L);
    image.Save(destPath + "100.jpg", codecInfo, parameters);
    // Default quality
    image.Save(destPath + "default.jpg", ImageFormat.Jpeg);
    // Big line across image
    using (Graphics g = Graphics.FromImage(image))
    using (Pen pen = new Pen(Color.Red, 50F))
    {
        g.DrawLine(pen, 0, 0, image.Width, image.Height);
    }
    image.Save(destPath + "big red line.jpg", ImageFormat.Jpeg);
}
public static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    return ImageCodecInfo.GetImageEncoders()
        .FirstOrDefault(codec => codec.FormatID == format.Guid);
}
Using Reflector, it turns out Image.Save() boils down to the GDI+ function GdipSaveImageToFile with encoderParams set to NULL. So the real question is what the JPEG encoder does when it gets a null encoderParams. 75 has been suggested here, but I can't find any solid reference.
EDIT: You could probably find out for yourself by running your program above with quality values of 1..100 and comparing each output against the jpg saved with the default quality (using, say, fc.exe /B).
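A quick sketch of that experiment, reusing image, codecInfo, parameters and destPath from the question's code (and assuming the encoder is deterministic, so a byte-identical file means the default quality was found):
using System;
using System.IO;
using System.Linq;

byte[] defaultBytes = File.ReadAllBytes(destPath + "default.jpg");
for (long q = 1; q <= 100; q++)
{
    parameters.Param[0] = new EncoderParameter(
        System.Drawing.Imaging.Encoder.Quality, q);
    string candidate = destPath + "q" + q + ".jpg"; // hypothetical file names
    image.Save(candidate, codecInfo, parameters);
    if (File.ReadAllBytes(candidate).SequenceEqual(defaultBytes))
        Console.WriteLine("Default quality appears to be " + q);
}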
IIRC, it is 75%, but I don't recall where I read this.
I don't know much about the Image.Save method, but I can tell you that adding that fat line would logically reduce the size of the jpg image. This is due to the way a JPEG is saved (and encoded).
The thick solid red line makes for a very simple, smaller encoding (if I remember correctly this matters mostly after the discrete cosine transform step), so the modified image can be stored using less data (bytes).
See the JPEG encoding steps for details.
Regarding the changes in size (without the added line), I'm not sure which image you reopened and resaved:
"To further investigate I opened and saved without explicitly setting the quality and the file size was exactly the same"
If you opened the old (original, normal-size) image and resaved it, then maybe the default compression and the original image's compression are the same.
If you opened the new (4x larger) image and resaved it, then maybe the default compression for the save method is derived from the image as it was loaded.
Again, I don't know the save method, so I'm just throwing out ideas (maybe they'll give you a lead).
When you save an image as a JPEG file with a quality level below 100%, you are introducing artefacts into the saved-off image as a side effect of the compression process. This is why re-saving the image at 100% actually increases the size of your file beyond the original: the compression artefacts in the decoded bitmap are extra high-frequency detail, which is expensive to re-encode.
This is also why you should always try to save in a lossless format (such as PNG) if you intend to edit your file afterwards; otherwise you'll degrade the quality of the output through multiple lossy transformations.
