Generate huge image in C#

I need to create a huge image (approx. 24000 x 22000) with PixelFormat.Format24bppRgb encoding. I know it will be nearly impossible to open it...
What I'm trying to do is this:
Bitmap final = new Bitmap(width, height, PixelFormat.Format24bppRgb);
As expected, an exception is thrown: at 24 bpp that is roughly 1.5 GB of raw pixel data, which I can't easily handle in memory that way.
But I had an idea: could I write the file as I'm generating it? So, instead of working in RAM, I would be working on the disk.
Just to better explain: I have about 13K tiles and I plan to stitch them together into this stupidly humongous file. Since I can iterate over them in a given order, I think I could write them straight to the output file using unsafe code.
Any suggestions?

ImageMagick's Large Image Support (tera-pixel) can help you put the image together once you have the tiles that compose it. You can either use the command line and issue commands to it using this wrapper, or use ImageMagick.NET as an API.
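For illustration, here is one way that could be wired up from C# by shelling out to ImageMagick's montage tool. This is only a sketch: the tile file pattern, the 115x113 grid and the memory limits below are my own assumptions, not something from the question.

using System.Diagnostics;

// Hypothetical sketch: let ImageMagick stitch pre-generated tiles into one file.
// -limit caps ImageMagick's in-RAM pixel cache, pushing the rest to disk.
var psi = new ProcessStartInfo
{
    FileName = "montage",
    Arguments = "tile_*.png -tile 115x113 -geometry +0+0 " +
                "-limit memory 1GB -limit map 2GB huge.tif",
    UseShellExecute = false
};
using (var proc = Process.Start(psi))
{
    proc.WaitForExit();
}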

You could write it in a non-compressed format like BMP. BMP stores raw color bytes in rows, so you would load the first row of tiles, read their individual pixel rows and write them out as single composite rows of the output image. That way you only ever need a few tiles open while you immediately write down the output image.
I don't know how to do the same for a compressed format like JPG or PNG, though. I'm sure some specialised software exists for that.
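To make the row-by-row idea concrete, here is a minimal sketch of streaming a 24-bpp BMP straight to disk. The getRowBgr callback (which would composite one output row from the currently loaded tiles) and the top-down header trick are my assumptions, not part of the answer above.

using System;
using System.IO;

// Streams a 24-bpp BMP to disk row by row, so the full 24000 x 22000 image
// never has to fit in RAM. Uses a negative height in the header so rows can
// be written top-down in generation order (most, but not all, readers accept this).
static void WriteHugeBmp(string path, int width, int height, Func<int, byte[]> getRowBgr)
{
    int rowBytes = width * 3;
    int padding = (4 - rowBytes % 4) % 4;          // each BMP row is padded to 4 bytes
    long pixelDataSize = (long)(rowBytes + padding) * height;
    long fileSize = 54 + pixelDataSize;            // 14-byte file header + 40-byte info header

    using (var bw = new BinaryWriter(File.Create(path)))
    {
        // BITMAPFILEHEADER
        bw.Write((byte)'B'); bw.Write((byte)'M');
        bw.Write((uint)fileSize);
        bw.Write((uint)0);                         // reserved
        bw.Write((uint)54);                        // offset to pixel data
        // BITMAPINFOHEADER
        bw.Write((uint)40);                        // header size
        bw.Write(width);
        bw.Write(-height);                         // negative height = top-down rows
        bw.Write((ushort)1);                       // planes
        bw.Write((ushort)24);                      // bits per pixel
        bw.Write((uint)0);                         // BI_RGB, no compression
        bw.Write((uint)pixelDataSize);
        bw.Write(0); bw.Write(0);                  // pixels-per-meter (x, y)
        bw.Write(0); bw.Write(0);                  // palette fields (unused at 24 bpp)

        var pad = new byte[padding];
        for (int y = 0; y < height; y++)
        {
            bw.Write(getRowBgr(y));                // one composited row of BGR bytes
            bw.Write(pad);
        }
    }
}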

Depending on what you intend to do with this image once it's complete, I would suggest dividing it into four parts and working with it that way. I have worked with 10,000 x 10,000 pixel images without an OutOfMemory exception being thrown.

Related

iTextSharp: change output quality / compression like PDF24 Creator

I was wondering if I can compress/change the quality of my resulting PDF file with iTextSharp and C#, like I can with Adobe Acrobat Pro or PDF24 Creator.
Using PDF24 Creator I can open the PDF, save the file again and set the "Quality of the PDF" to "Low Quality", and my file size decreases from 88.6 MB to 12.5 MB while the quality is still good enough.
I am already using the
writer = new PdfCopy(doc, fs);
writer.SetPdfVersion(PdfCopy.PDF_VERSION_1_7);
writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
writer.SetFullCompression();
which decreases the file size from about 92MB to 88MB.
Alternatively: can I run the PDF24 program from my C# code using command-line arguments or start parameters? Something like this:
pdf24Creator.exe -save -Quality:low -inputfile -outputfile
Thanks for your help (Bruno)!
Short answer: no.
Long answer: yes but you must do a lot of the work yourself.
If you read the third and fourth paragraphs here you'll hopefully get a better understanding of what "compression" actually means from a PDF perspective.
Programs like Adobe Acrobat and PDF24 Creator allow you to reduce the size of a file by destroying the data within the PDF. When you select a low quality setting, one of the most common changes these programs make is to actually extract all of the images, reduce their quality and replace the original files in the PDF. So a JPEG originally saved without any compression might be knocked down to 60% quality. And just to be clear, that 60% is non-reversible; it isn't zipping the file, it is literally destroying the data in order to save space.
Another setting reduces the effective DPI of an image. A 500 pixel wide image placed in a 2 inch wide box is effectively 250 DPI. These programs will extract the image, resample it down to maybe 96 or 72 DPI, which means the 500 pixel image will be reduced to 192 or 144 pixels in width, and replace the original file in the PDF. Once again, this is a destructive, non-reversible change.
(And by destructive and non-reversible I mean you still probably have the original file somewhere; I just want to be clear that this isn't true "compression" like ZIP.)
However, if you really want to do it, you can look at code like this, which shows how you can use iText to perform the extraction and re-insertion of images. It is 100% up to you, however, to change the images, because iText won't make destructive changes to your data (and that's a good thing, I'd say!)
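The arithmetic behind that example, spelled out with the same numbers:

double boxWidthInches = 2.0;                             // width of the box the image sits in
int pixelWidth = 500;                                    // pixels across in the embedded image
double effectiveDpi = pixelWidth / boxWidthInches;       // 500 / 2 = 250 DPI

int targetDpi = 96;
int newPixelWidth = (int)(targetDpi * boxWidthInches);   // 192 px (or 144 px at 72 DPI)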
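For a rough idea of what that extraction and re-insertion looks like, here is a sketch along the lines of iText's ReduceSize example, written against iTextSharp 5.x. Treat it as an outline rather than a drop-in solution: it blindly re-encodes every image as an RGB JPEG (which will mangle masks, CMYK images and the like), and the downscaling/quality policy is entirely up to you.

using System.Drawing.Imaging;
using System.IO;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

public static void RecompressImages(string src, string dest)
{
    PdfReader reader = new PdfReader(src);
    for (int i = 1; i < reader.XrefSize; i++)
    {
        PdfObject obj = reader.GetPdfObject(i);
        if (obj == null || !obj.IsStream()) continue;

        PRStream stream = (PRStream)obj;
        if (!PdfName.IMAGE.Equals(stream.Get(PdfName.SUBTYPE))) continue;

        System.Drawing.Image img;
        try { img = new PdfImageObject(stream).GetDrawingImage(); }
        catch { continue; }                        // unsupported filter: skip this image
        if (img == null) continue;

        byte[] jpeg;
        using (var ms = new MemoryStream())
        {
            // This is where you would also downscale; saving via GDI+ already
            // re-encodes at a default (lossy) JPEG quality.
            img.Save(ms, ImageFormat.Jpeg);
            jpeg = ms.ToArray();
        }

        // Replace the stream's data and describe the new encoding.
        stream.Clear();
        stream.SetData(jpeg, false, PRStream.NO_COMPRESSION);
        stream.Put(PdfName.TYPE, PdfName.XOBJECT);
        stream.Put(PdfName.SUBTYPE, PdfName.IMAGE);
        stream.Put(PdfName.FILTER, PdfName.DCTDECODE);
        stream.Put(PdfName.WIDTH, new PdfNumber(img.Width));
        stream.Put(PdfName.HEIGHT, new PdfNumber(img.Height));
        stream.Put(PdfName.BITSPERCOMPONENT, new PdfNumber(8));
        stream.Put(PdfName.COLORSPACE, PdfName.DEVICERGB);
    }

    using (var fs = new FileStream(dest, FileMode.Create))
    {
        new PdfStamper(reader, fs).Close();
    }
    reader.Close();
}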

DrawImage is slow, lockbits to the rescue?

I've read many C# tutorials on using lockbits to manipulate images, but I just don't know how to apply that in PowerShell.
This is the problem:
$image1's height is 2950 pixels. $image2 is 50 px taller, at 3000 pixels. I need to fit $image2 into $image1, and I can skip $image2's first 49 pixel lines. So in pseudocode:
For(y=0... For(x=0.... { image1(x,y) = image2(x,y+50) } ....))
The PowerShell script below works, but it is not very fast:
$rect = new-object Drawing.Rectangle 0, 0, $image1.width, $image1.height
$image1drawing.drawimage($image2,
$rect,
0, 50, $image2.width, ($image2.height - 50),
$graphicalUnit)
The pages I've found, such as this (Not able to successfully use lockbits) or this (https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm), are in "plain English", but how do I convert the concept into PowerShell?
You are using the correct approach. Using DrawImage will be faster than copying the pixels one by one.
Some suggestions to make it faster:
Try using Image.Clone to copy a rectangle from the original image; this results in the smallest number of objects you need to create (see the sketch after these suggestions).
Make sure you use the same PixelFormat as the original image (faster copying). There is a PixelFormat property on Image.
Most important: accessing Width and Height takes a long time, so save them to local variables for reuse. If you know them beforehand, that's a good way to speed things up.
Don't expect miracles: at a width-to-height ratio of 4:3, each image is 3932 * 2950 * 3 bytes (assuming 24-bit RGB) = about 33 MB. That's a lot of data; you may easily be trying to copy a few gigabytes depending on how many images you have.
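For what it's worth, the Clone suggestion referenced above looks roughly like this in C# (the same call works from PowerShell); image2 and the 50-row offset come from the question, the rest is assumed:

// image2 is a System.Drawing.Bitmap; copy its overlapping region in one call,
// keeping the source PixelFormat so GDI+ doesn't have to convert pixels.
Rectangle src = new Rectangle(0, 50, image2.Width, image2.Height - 50);
using (Bitmap cropped = image2.Clone(src, image2.PixelFormat))
{
    // draw `cropped` onto image1, or save it, as needed
}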
You are better off writing a simple cmdlet and using it in your PowerShell script.
BTW in case you are still interested:
Using lockbits in C# (as in your examples) relies on an unsafe context and pointers. I don't believe PowerShell has access to an unsafe context.
You can manipulate unmanaged data without an unsafe context by using the Marshal class, specifically the Read and Write methods (and you might be able to speed things up with the Copy method at the expense of memory).
PowerShell was not meant as a replacement for general-purpose .NET languages; that's what cmdlets are for.
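A sketch of that Marshal-based row copy, written as the kind of C# you could drop into a small cmdlet (the 50-row offset comes from the question; the method shape and names are my assumptions):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Copies image2 into image1 row by row, skipping image2's first `skipRows` rows.
// Assumes both bitmaps share the same (e.g. 24-bpp) pixel format.
static void CopyWithOffset(Bitmap image1, Bitmap image2, int skipRows)
{
    BitmapData d1 = image1.LockBits(new Rectangle(0, 0, image1.Width, image1.Height),
                                    ImageLockMode.WriteOnly, image1.PixelFormat);
    BitmapData d2 = image2.LockBits(new Rectangle(0, 0, image2.Width, image2.Height),
                                    ImageLockMode.ReadOnly, image2.PixelFormat);
    try
    {
        int bytesPerPixel = Image.GetPixelFormatSize(image1.PixelFormat) / 8;
        int rowBytes = Math.Min(image1.Width, image2.Width) * bytesPerPixel;
        int rows = Math.Min(image1.Height, image2.Height - skipRows);
        byte[] buffer = new byte[rowBytes];

        for (int y = 0; y < rows; y++)
        {
            IntPtr srcRow = IntPtr.Add(d2.Scan0, (y + skipRows) * d2.Stride);
            IntPtr dstRow = IntPtr.Add(d1.Scan0, y * d1.Stride);
            Marshal.Copy(srcRow, buffer, 0, rowBytes);   // unmanaged -> managed
            Marshal.Copy(buffer, 0, dstRow, rowBytes);   // managed -> unmanaged
        }
    }
    finally
    {
        image1.UnlockBits(d1);
        image2.UnlockBits(d2);
    }
}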

C# .NET Dynamic Image Library?

Hi there, I am looking to create an image dynamically from an array[500][500] (500 x 500 pixels).
Each array item holds the pixel's color data.
Does anyone know which .NET library/interface would be best for this? Could you point me in the right direction? I need to create/save the file.
Also, the image is a composite of the data from many images; I am wondering if it is possible to use various formats, or if I need to first convert the small images into one format.
Also, which image format is best to use for the web (the most compatible): JPG or PNG-24?
Thanks for your input!
If you can use unsafe code in your website (in other words, your code runs under full trust), just use the Bitmap class and its LockBits method; then you can use pointers just like in C++ to access the pixels (tip: create a Pixel struct to hold the RGB values). You will also see GetPixel and SetPixel methods. DO NOT EVER use them: the performance is terrible, more than 100 times slower than using pointers. Just go with BitmapData.Scan0.ToPointer() and then iterate with a for loop.
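As a concrete sketch of that approach, assuming the source data is a Color[500][500] jagged array and the project is compiled with /unsafe (both assumptions of mine):

using System.Drawing;
using System.Drawing.Imaging;

struct Pixel { public byte B, G, R; }   // 24-bpp rows are stored B, G, R

static unsafe Bitmap FromArray(Color[][] colors)
{
    int height = colors.Length, width = colors[0].Length;
    var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                   ImageLockMode.WriteOnly, bmp.PixelFormat);
    try
    {
        byte* scan0 = (byte*)data.Scan0.ToPointer();
        for (int y = 0; y < height; y++)
        {
            Pixel* row = (Pixel*)(scan0 + y * data.Stride);  // Stride may include row padding
            for (int x = 0; x < width; x++)
            {
                row[x].R = colors[y][x].R;
                row[x].G = colors[y][x].G;
                row[x].B = colors[y][x].B;
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    return bmp;
}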
You could use the System.Drawing.Bitmap class.
If you are able to use unsafe code, construct the bitmap using pointers and BitmapData rather than SetPixel
var bitmap = new Bitmap(500, 500);
// Update the pixels with the values in your array...
bitmap.Save("myfilename.jpg", ImageFormat.Jpeg);
The format depends on what you need (for example, PNG supports transparency, JPG does not).
You could start with System.Drawing.Bitmap.

Image manipulation in C#

I am loading a JPG image from the hard disk into a byte[]. Is there a way to resize the image (reduce its resolution) without putting it into a Bitmap object?
thanks
There are always ways, but whether they are better... A JPG is a compressed image format, which means that to do any image manipulation on it you need something to interpret that data. The Bitmap object will do this for you, but if you want to go another route you'll need to look into understanding the JPEG spec, creating some kind of parser, etc. There may be shortcuts that work without fully interpreting the original JPG, but I think it would be a bad idea.
Oh, and don't forget there are different file formats for JPG (JFIF and EXIF) that you will need to understand...
I'd think very hard before avoiding objects that are specifically designed for the sort of thing you are trying to do.
A .jpeg file is just a bag o' bytes without a JPEG decoder. There's one built into the Bitmap class, it does a fine job decoding .jpeg files. The result is a Bitmap object, you can't get around that.
And it supports resizing through the Graphics class as well as the Bitmap(Image, Size) constructor. But yes, making a .jpeg image smaller often produces a file that's larger. That's an unavoidable side effect of the Graphics.InterpolationMode. It tries to improve the appearance of the reduced image by running the pixels through a filter. The Bicubic filter does an excellent job of it.
It looks great to the human eye, but doesn't look so great to the JPEG encoder. The filter produces interpolated pixel colors, designed to avoid making image details disappear completely when the size is reduced. These blended pixel values, however, make it harder for the encoder to compress the image, thus producing a larger file.
You can tinker with Graphics.InterpolationMode and select a lower-quality filter. That produces a poorer image, but one that's easier to compress. I doubt you'll appreciate the result, though.
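For reference, the resize-through-Graphics route described above looks roughly like this (the method name and the 24-bpp target format are my own choices for the sketch):

using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

// Resize via Graphics.DrawImage; the InterpolationMode is the quality/size knob.
static Bitmap ResizeWithFilter(Image source, int width, int height)
{
    var result = new Bitmap(width, height, PixelFormat.Format24bppRgb);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic; // try a cheaper filter for a smaller JPEG
        g.DrawImage(source, new Rectangle(0, 0, width, height));
    }
    return result;
}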
Here's what I'm doing.
And no, I don't think you can resize an image without first processing it in-memory (i.e. in a Bitmap of some kind).
Decent-quality resizing involves an interpolation/extrapolation algorithm; it can't just be "pick out every nth pixel", unless you can settle for nearest neighbor.
Here's some explanation: http://www.cambridgeincolour.com/tutorials/image-interpolation.htm
protected virtual byte[] Resize(byte[] data, int width, int height)
{
    using (var inStream = new MemoryStream(data))
    using (var outStream = new MemoryStream())
    using (var bmp = System.Drawing.Bitmap.FromStream(inStream))
    using (var th = bmp.GetThumbnailImage(width, height, null, IntPtr.Zero))
    {
        th.Save(outStream, System.Drawing.Imaging.ImageFormat.Jpeg);
        return outStream.ToArray();
    }
}

How do I create a 256 megapixels image in C#?

I am making an imaging application and I need a 16000 x 16000 pixel image. This is not impossible, because in Photoshop I can create this image for print (56 x 56 inches at 300 dpi).
I am using this code:
Image WorkImage = new Bitmap(16000, 16000);
This throws an "Invalid parameter" exception, but it doesn't when I use 9000 x 9000 pixels.
MSDN doesn't say anything about the limits in the constructor.
I know that the data in the Bitmap object is in memory, because if the array is too big it can throw an "Out of memory" exception, but that is not the case here. I would prefer to manage this data in a file, but I don't know how.
Thanks.
Photoshop does not allocate gigantic images in contiguous portions of memory as you are trying to do. There are some memory limitations I've encountered when creating very large images.
Consider subdividing your images. This has the benefit of better memory management. If you edit one of your subdivided images, you won't have to update the entire image.
As an aside, a 16000 x 16000 image at 4 bytes per pixel is roughly a gigabyte! That's huge. Good luck!
Why not generate a bunch of smaller bitmaps? E.g. sixteen bitmaps that are 4000 x 4000 pixels each?
Oh, and although it's probably not the cause of the exception you got, there are some funny quirks with large objects and the CLR large object heap. This is covered in some other SO topics that you may want to read just for fun, since you're playing with large chunks of memory. E.g.: How to get unused memory back from the large object heap LOH from multiple managed apps?
While I agree with Charlie that you're probably better off with several smaller bitmaps, I just ran the code below on my 32-bit Windows machine with 2 GB of RAM; it took a while to complete, but I received no errors.
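A rough sketch of that tiling idea (the tile size, loop bounds and file names are made up for the example):

using System.Drawing;
using System.Drawing.Imaging;

// Sixteen 4000 x 4000 tiles instead of one 16000 x 16000 bitmap.
for (int ty = 0; ty < 4; ty++)
{
    for (int tx = 0; tx < 4; tx++)
    {
        using (var tile = new Bitmap(4000, 4000, PixelFormat.Format24bppRgb))
        using (var g = Graphics.FromImage(tile))
        {
            // draw this tile's portion of the image here, offset by (tx * 4000, ty * 4000)
            tile.Save(string.Format("tile_{0}_{1}.png", tx, ty), ImageFormat.Png);
        }
    }
}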
var b = new Bitmap(16000, 16000);
Console.WriteLine("size is {0}x{1}", b.Width, b.Height);
Really, I don't mind where the bitmap data lives. I only need a file (TIFF) with the 16000 x 16000 pixel image as the output. I think I could create a class (like the Bitmap and Image classes) but with the data kept in the file itself, where I can edit the image.
I am thinking of studying the TIFF structure and creating a class to create and edit those files, partially buffered in memory. I don't want to create an object bigger than a few MB.
But I want to know if there is already some class with BMP or TIFF file editing capability... I really don't know.
Thanks for your previous Answers. :)
