I am making an imaging application. I need a 16000 x 16000 pixel image. This should be possible, because in Photoshop I can create an image of this size for print (roughly 53 x 53 inches at 300 dpi).
I am using this code:
Image WorkImage = new Bitmap(16000, 16000);
This generates an "Invalid Parameter" exception, but it does not when I create a 9000 x 9000 pixel image.
MSDN doesn't say anything about the limits in the constructor.
I know that the data in the bitmap object is held in memory, because if the image is too big it can throw an "Out Of Memory" exception, but that is not the case here. I would prefer to manage this data in a file, but I don't know how.
Thanks.
Photoshop does not allocate gigantic images in contiguous portions of memory as you are trying to do. There are some memory limitations I've encountered when creating very large images.
Consider subdividing your images. This has the benefit of better memory management. If you edit one of your subdivided images, you won't have to update the entire image.
As an aside, a 16000 x 16000 image at 4 bytes per pixel is roughly a gigabyte! That's huge. Good luck!
Why not generate a bunch of smaller bitmaps? E.g. 16 bitmaps that are 4k x 4k pixels...?
Oh, and although probably not the cause of the exception you got, there are some funny quirks with large objects / the CLR large object heap. This is covered in some other SO topics that you may want to read just for fun since you're playing with large chunks of memory... E.g.: How to get unused memory back from the large object heap LOH from multiple managed apps?
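For illustration, here is a minimal sketch of the tiling idea suggested above (all names and the PNG-per-tile layout are made up for the example): the 16000 x 16000 image is kept as a 4 x 4 grid of 4000 x 4000 tiles on disk, and only the tile being edited is held in memory as a Bitmap.

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

// Hypothetical tiled image: 4 x 4 grid of 4000 x 4000 tiles (about 61 MB each
// uncompressed at 32 bpp), with only one tile loaded at a time.
class TiledImage
{
    const int TileSize = 4000;
    readonly string folder;

    public TiledImage(string folder) { this.folder = folder; }

    string TilePath(int tx, int ty) { return Path.Combine(folder, "tile_" + tx + "_" + ty + ".png"); }

    // Create the empty tiles once.
    public void CreateEmpty()
    {
        for (int ty = 0; ty < 4; ty++)
            for (int tx = 0; tx < 4; tx++)
                using (var tile = new Bitmap(TileSize, TileSize, PixelFormat.Format32bppArgb))
                    tile.Save(TilePath(tx, ty), ImageFormat.Png);
    }

    // Edit a pixel by loading only the tile that contains (x, y).
    public void SetPixel(int x, int y, Color c)
    {
        int tx = x / TileSize, ty = y / TileSize;
        byte[] bytes = File.ReadAllBytes(TilePath(tx, ty));   // read into memory to avoid locking the file
        using (var ms = new MemoryStream(bytes))
        using (var tile = new Bitmap(ms))
        {
            tile.SetPixel(x % TileSize, y % TileSize, c);
            tile.Save(TilePath(tx, ty), ImageFormat.Png);
        }
    }
}

For real editing you would batch changes per tile (SetPixel per call is slow), but the structure shows why a subdivided image never needs a gigabyte-sized allocation.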
While I agree with Charlie that you're probably better off with several smaller bitmaps, I just ran the code below on my 32 bit Windows with 2 GB RAM, and it took a while to complete, but I received no errors.
var b = new Bitmap(16000, 16000);
Console.WriteLine("size is {0}x{1}", b.Width, b.Height);
Really, I don't mind where the bitmap data lives. I only need, as output, a TIFF file containing the 16000 x 16000 pixel image. I think I could create a class (like the Bitmap and Image classes) that keeps its data in the file itself, where I can edit the image.
I am thinking of studying the TIFF structure and writing a class that creates and edits those files while keeping them only partially buffered in memory. I don't want to create an object bigger than a few MB.
But I would like to know whether there is already a class with BMP or TIFF file-editing capability; I really don't know.
Thanks for your previous answers. :)
I want to send a series of integers to HLSL in the form of a 3D array using Unity. I've been trying to do this for a couple of days now, but without success. I tried to nest the buffers into each other (StructuredBuffer<StructuredBuffer<StructuredBuffer<int>>>), but it simply won't work. And I need to make this thing resizable, so I can't use arrays in structs. What should I do?
EDIT: To clarify a bit more what I am trying to do here: this is a medical program. When you get a scan of your body, some files are generated. Those files are called DICOM files (.dcm), and they are given to a doctor. The doctor opens the program, selects all of the DICOM files and loads them. Each DICOM file contains an image. However, those images are not like the images we use in daily life: they are grayscale, and each pixel has a value that ranges from -1000 to a couple of thousand, so each pixel is saved as 2 bytes (an Int16).
I need to generate a 3D model of the body that was scanned, so I'm using the Marching Cubes algorithm (have a look at Polygonising a Scalar Field). The problem is that I used to loop over each pixel in about 360 images of size 512*512, which took too much time. On the CPU I read the pixel data from each file as soon as I needed it. Now I'm trying to make this process happen at runtime, so I need to send all of the pixel data to the GPU before processing it. That's my problem: the GPU cannot read data from disk, so I need to send 360*512*512*4 bytes of data to it in the form of a 3D array of ints. I'm also planning to keep the data there to avoid retransferring that huge amount of memory. What should I do? Refer to this link to know more about what I'm doing.
From what I've understood, I would suggest to try the following:
Flatten your data (nested buffers are not what you want on your GPU); see the sketch after this list
Split your data across multiple ComputeBuffers if necessary (when I played around with them on an Nvidia Titan X, I could store approximately 1 GB of data per buffer; I was rendering a 3D point cloud of about 1.5 GB, so the 360 MB of data you mentioned should not be a problem)
If you need multiple buffers: let them overlap as needed for your marching cubes algorithm
Do all of your calculations in a ComputeShader (I think this requires DX11; if you have multiple buffers, run it multiple times and accumulate your results) and then use the results in a standard shader which you call from the OnPostRender function (use Graphics.DrawProcedural inside it to just draw points, or build a mesh on the GPU)
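To illustrate the first point, here is a minimal sketch of the flattening step (class, field and buffer names are made up; it assumes one 4-byte int per voxel, i.e. roughly 360 MB for a 360 x 512 x 512 volume):

using UnityEngine;

// Hypothetical example: flatten a 360 x 512 x 512 volume of Int16 slices into
// one int[] and upload it as a single StructuredBuffer<int>.
public class VolumeUploader : MonoBehaviour
{
    public ComputeShader marchingCubesShader;   // assigned in the Inspector

    const int Depth = 360, Height = 512, Width = 512;

    public ComputeBuffer Upload(short[][,] slices)   // slices[z][y, x]
    {
        var flat = new int[Depth * Height * Width];
        for (int z = 0; z < Depth; z++)
            for (int y = 0; y < Height; y++)
                for (int x = 0; x < Width; x++)
                    flat[(z * Height + y) * Width + x] = slices[z][y, x];

        var buffer = new ComputeBuffer(flat.Length, sizeof(int));
        buffer.SetData(flat);
        marchingCubesShader.SetBuffer(0, "voxels", buffer);
        // In the shader, declare StructuredBuffer<int> voxels; and recover the
        // 3D index the same way: voxels[(z * 512 + y) * 512 + x]
        return buffer;
    }
}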
Edit (Might be interesting to you)
If you want to append data to a gpu buffer (because you don't know the exact size or you can't write it to the gpu at once), you can use AppendBuffers and a ComputeShader.
C# Script Fragments:
using System.Linq;                      // for Skip/Take below
using System.Runtime.InteropServices;   // for Marshal.SizeOf
using UnityEngine;

struct DataStruct
{
...
}
DataStruct[] yourData;
yourData = loadStuff();
public ComputeShader AppendComputeShader;   // assigned via the Inspector (see note below)

ComputeBuffer tmpBuffer = new ComputeBuffer(512, Marshal.SizeOf(typeof(DataStruct)));
ComputeBuffer gpuData = new ComputeBuffer(MAX_SIZE, Marshal.SizeOf(typeof(DataStruct)), ComputeBufferType.Append);
for (int i = 0; i < yourData.Length / 512; i++) {
// write data subset to temporary buffer on gpu
tmpBuffer.SetData(yourData.Skip(i * 512).Take(512).ToArray()); // use LINQ to select the next batch of 512 elements
// set up and run compute shader for appending data to "gpuData" buffer
AppendComputeShader.SetBuffer(0, "inBuffer", tmpBuffer);
AppendComputeShader.SetBuffer(0, "appendBuffer", gpuData);
AppendComputeShader.Dispatch(0, 512/8, 1, 1); // 8 = gpu work group size -> use 512/8 work groups
}
ComputeShader:
struct DataStruct // replicate struct in shader
{
...
}
#pragma kernel append
StructuredBuffer<DataStruct> inBuffer;
AppendStructuredBuffer<DataStruct> appendBuffer;
[numthreads(8,1,1)]
void append(uint id : SV_DispatchThreadID) {
appendBuffer.Append(inBuffer[id]);
}
Note:
AppendComputeShader has to be assigned via the Inspector
512 is an arbitrary batch size; there is an upper limit on how much data you can append to a gpu buffer at once, but I think that depends on the hardware (for me it seemed to be 65536 * 4 bytes)
you have to provide a maximum size for gpu buffers (on the Titan X it seems to be ~1GB)
In Unity we currently have the MaterialPropertyBlock that allows SetMatrixArray and SetVectorArray, and to make this even sweeter, we can set globally using the Shader static helpers SetGlobalVectorArray and SetGlobalMatrixArray. I believe that these will help you out.
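For example (a minimal sketch; the property names are made up, and the array size must match what the shader declares):

using UnityEngine;

public class ArrayUploadExample : MonoBehaviour
{
    public Renderer targetRenderer;

    void Start()
    {
        var values = new Vector4[64];   // must not exceed the size declared in the shader
        for (int i = 0; i < values.Length; i++)
            values[i] = new Vector4(i, 0, 0, 0);

        // Per-renderer, via MaterialPropertyBlock:
        var block = new MaterialPropertyBlock();
        block.SetVectorArray("_MyValues", values);   // shader side: float4 _MyValues[64];
        targetRenderer.SetPropertyBlock(block);

        // Or globally, for every material whose shader declares the property:
        Shader.SetGlobalVectorArray("_MyGlobalValues", values);
    }
}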
In case you prefer the old way, please look at this quite nice article showing how to pass arrays of vectors.
I am trying to develop an application for image processing.
Here is my complete code in DotNetFiddle.
I have tested my application with different images from the Internet:
Cameraman is GIF.
Baboon is PNG.
Butterfly is PNG.
Pheasant is JPG.
Butterfly and Pheasant are resized to 300x300.
The following two images show correct Fourier and Inverse Fourier spectrum:
The following two images do not show the expected outcome:
What could be the reason?
Is there any problem with the latter two images?
Do we need to use images of a specific quality to test image-processing applications?
The code you linked to is a radix-2 FFT implementation which would work for any image with sizes that are exact powers of 2.
Incidentally, the Cameraman image is 256 x 256 (powers of 2) and the Baboon image is 512 x 512 (again powers of 2). The other two images, being resized to 300 x 300 are not powers of 2. After resizing those images to an exact power of 2 (for example 256 or 512), the output of FrequencyPlot for the brightness component of the last two images should look somewhat like the following:
butterfly
pheasant
A common workaround for images of other sizes is to pad the image to sizes that are exact powers of 2. Otherwise, if you must process arbitrary sized images, you should consider other 2D discrete Fourier transform (DFT) algorithms or libraries which will often support sizes that are the product of small primes.
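If you go the padding route, here is a minimal sketch of the idea (assuming a System.Drawing.Bitmap as in the linked code; the helper names are made up):

using System.Drawing;

static class Pow2Padding
{
    // Smallest power of 2 that is >= n (e.g. 300 -> 512).
    static int NextPowerOfTwo(int n)
    {
        int p = 1;
        while (p < n) p <<= 1;
        return p;
    }

    // Copy the source into the top-left corner of a power-of-2 sized canvas;
    // the remaining area is filled with black (zero) padding.
    public static Bitmap PadToPowerOfTwo(Bitmap source)
    {
        int w = NextPowerOfTwo(source.Width);
        int h = NextPowerOfTwo(source.Height);
        var padded = new Bitmap(w, h);
        using (var g = Graphics.FromImage(padded))
        {
            g.Clear(Color.Black);
            g.DrawImageUnscaled(source, 0, 0);
        }
        return padded;
    }
}

Keep in mind that zero-padding leaves the original pixels untouched (unlike resampling to 256 or 512) but still changes the spectrum somewhat, because the transform now also sees the padded border.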
Note that for the purpose of validating your output, you also have the option of using the direct DFT formula (though you should not expect the same performance).
I have not had time to dig through your code. As I said in my comments, you should focus on the difference between those images.
There is no reason why you should be able to calculate the FFT of one image and fail for another, unless there is some problem in your code that can't handle a particular difference between those images. If you can display them, you should be able to process them.
So the first thing that catches my eye is that both images you succeed with have even dimensions, while the images your algorithm produces garbage for have at least one odd dimension. I won't look into it any further, as from experience I'm pretty confident that this causes your issue.
So before you do anything else:
Take one of those images that work fine, remove one line or row, and see if you still get a good result. Then fix your code.
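A quick way to run that experiment (a sketch; Bitmap.Clone simply crops the working image by one row so its height becomes odd):

using System.Drawing;

// Take an image that currently works (e.g. the 256 x 256 Cameraman), drop one
// row so the height becomes odd (256 x 255), and feed it to the same FFT code.
var good = new Bitmap("cameraman.gif");
var oddHeight = good.Clone(
    new Rectangle(0, 0, good.Width, good.Height - 1),
    good.PixelFormat);
// Run your existing Fourier transform on "oddHeight" and compare the output.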
I was wondering whether I can compress/change the quality of my output PDF file with iTextSharp and C#, like I can with Adobe Acrobat Pro or PDF24 Creator.
Using PDF24 Creator I can open the PDF, save the file again and set the "Quality of the PDF" to "Low Quality", and my file size decreases from 88.6 MB to 12.5 MB while the quality is still good enough.
I am already using the
writer = new PdfCopy(doc, fs);
writer.SetPdfVersion(PdfCopy.PDF_VERSION_1_7);
writer.CompressionLevel = PdfStream.BEST_COMPRESSION;
writer.SetFullCompression();
which decreases the file size from about 92MB to 88MB.
Alternatively: can I run the PDF24 program from my C# code using command-line arguments or startup parameters? Something like this:
pdf24Creator.exe -save -Quality:low -inputfile -outputfile
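If something like that is possible, I assume I could launch it with code along these lines (a sketch only; the executable path and the flags above are just my guess, not taken from the real PDF24 documentation):

using System.Diagnostics;

var psi = new ProcessStartInfo
{
    // Hypothetical path and arguments; check the actual PDF24 documentation.
    FileName = @"C:\Program Files\PDF24\pdf24Creator.exe",
    Arguments = "-save -Quality:low \"input.pdf\" \"output.pdf\"",
    UseShellExecute = false,
    CreateNoWindow = true
};

using (var process = Process.Start(psi))
{
    process.WaitForExit();   // block until the conversion finishes
    if (process.ExitCode != 0)
        throw new InvalidOperationException("PDF24 returned exit code " + process.ExitCode);
}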
Thanks for your help (Bruno)!
Short answer: no.
Long answer: yes but you must do a lot of the work yourself.
If you read the third and fourth paragraphs here you'll hopefully get a better understanding of what "compression" actually means from a PDF perspective.
Programs like Adobe Acrobat and PDF24 Creator allow you to reduce the size of a file by destroying the data within the PDF. When you select a low quality setting one of the most common changes these programs make is to actually extract all of the images, reduce their quality and replace the original files in the PDF. So a JPEG originally saved without any compression might be knocked down to 60% quality. And just to be clear, that 60% is non-reversible, it isn't zipping the file, it is literally destroying the data in order to save space.
Another setting is to reduce the effective DPI of an image. A 500 pixel wide image placed into a 2 inch wide box is effectively 250 DPI. These programs will extract the image, reduce it to maybe 96 or 72 DPI (which means the 500 pixel image would be reduced to 192 or 144 pixels in width) and replace the original file in the PDF. Once again, this is a destructive, non-reversible change.
(And by destructive non-reversible, you still probably have the original file, I just want to be clear that this isn't true "compression" like ZIP.)
However, if you really want to do it you can look at code like this which shows how you can use iText to perform the extraction and re-insertion of images. It is 100% up to you, however, to change the images because iText won't make destructive changes to your data (and that's a good thing I'd say!)
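As a rough illustration of that extract-and-replace approach, here is a sketch loosely based on iText's published size-reduction examples (not verified end to end; it assumes the embedded images can be decoded by GDI+ and simply re-encodes them as lower-quality RGB JPEGs, which discards transparency masks and non-RGB colorspaces):

using System.IO;
using System.Linq;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

static void RecompressImages(string src, string dest, long jpegQuality)
{
    var reader = new PdfReader(src);
    for (int i = 0; i < reader.XrefSize; i++)
    {
        PdfObject obj = reader.GetPdfObject(i);
        if (obj == null || !obj.IsStream()) continue;
        var stream = (PRStream)obj;
        if (!PdfName.IMAGE.Equals(stream.GetAsName(PdfName.SUBTYPE))) continue;

        // Decode the embedded image, then re-encode it as a JPEG at the requested quality.
        byte[] original = new PdfImageObject(stream).GetImageAsBytes();
        using (var input = new MemoryStream(original))
        using (var bmp = new System.Drawing.Bitmap(input))
        using (var output = new MemoryStream())
        {
            var jpegCodec = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders()
                .First(c => c.MimeType == "image/jpeg");
            var prms = new System.Drawing.Imaging.EncoderParameters(1);
            prms.Param[0] = new System.Drawing.Imaging.EncoderParameter(
                System.Drawing.Imaging.Encoder.Quality, jpegQuality);
            bmp.Save(output, jpegCodec, prms);

            // Destructive step: overwrite the image stream inside the PDF.
            stream.Clear();
            stream.SetData(output.ToArray(), false);
            stream.Put(PdfName.TYPE, PdfName.XOBJECT);
            stream.Put(PdfName.SUBTYPE, PdfName.IMAGE);
            stream.Put(PdfName.FILTER, PdfName.DCTDECODE);
            stream.Put(PdfName.WIDTH, new PdfNumber(bmp.Width));
            stream.Put(PdfName.HEIGHT, new PdfNumber(bmp.Height));
            stream.Put(PdfName.BITSPERCOMPONENT, new PdfNumber(8));
            stream.Put(PdfName.COLORSPACE, PdfName.DEVICERGB);
        }
    }
    using (var stamper = new PdfStamper(reader, new FileStream(dest, FileMode.Create)))
    {
        // Disposing the stamper writes the modified objects to "dest".
    }
    reader.Close();
}

Reducing the effective DPI would follow the same pattern: resize bmp before re-encoding it.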
I need to create a huge image (approx. 24000 x 22000 pixels) with PixelFormat.Format24bppRgb encoding. I know it will be nearly impossible to open it...
What I'm trying to do is this:
Bitmap final = new Bitmap(width, height, PixelFormat.Format24bppRgb);
As expected, an exception is thrown, as I can't easily handle roughly 1.5 GB of raw pixel data in memory that way.
But I had an idea: could I write the file as I'm generating it? So, instead of working in RAM, I would be working on the HD.
Just to explain it better: I have about 13K tiles and I plan to stitch them together into this stupidly humongous file. As I can iterate over them in a given order, I think I could write the result directly to the file, perhaps using unsafe code.
Any suggestions?
ImageMagick's Large Image Support (tera-pixel) can help you put the image together once you have the tiles that compose it. You can either use the command line and issue commands to it through this wrapper, or use ImageMagick.NET as an API.
You could write it in a non-compressed format like BMP. BMP stores raw color bytes in rows, so you would load the first row of tiles, read their separate pixel rows and write them as a single composite row in the output image. This way you only need a few tiles open at a time and can immediately write out the output image.
I don't know how you would write it as a compressed image, like JPG or PNG, but I'm sure some specialised software exists for that.
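A minimal sketch of that row-by-row BMP approach (the tile layout and method names are made up; it assumes equally sized 24 bpp tiles and writes the standard 54-byte header followed by bottom-up, 4-byte-padded rows, as the BMP format requires):

using System.Drawing;
using System.IO;

// Hypothetical streaming stitcher: only one row of tiles is open at a time
// while their pixel rows are copied into the composite BMP on disk.
static void StitchToBmp(string[,] tilePaths, int tileW, int tileH, string outPath)
{
    int tileRows = tilePaths.GetLength(0), tileCols = tilePaths.GetLength(1);
    int width = tileCols * tileW, height = tileRows * tileH;
    int rowBytes = width * 3;
    int padding = (4 - rowBytes % 4) % 4;                 // BMP rows are padded to 4 bytes
    int fileSize = 54 + (rowBytes + padding) * height;

    using (var bw = new BinaryWriter(File.Create(outPath)))
    {
        // BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes)
        bw.Write((byte)'B'); bw.Write((byte)'M');
        bw.Write(fileSize); bw.Write(0); bw.Write(54);
        bw.Write(40); bw.Write(width); bw.Write(height);
        bw.Write((short)1); bw.Write((short)24);
        bw.Write(0); bw.Write((rowBytes + padding) * height);
        bw.Write(2835); bw.Write(2835); bw.Write(0); bw.Write(0);

        // BMP stores rows bottom-up, so walk the tile rows from last to first.
        for (int tr = tileRows - 1; tr >= 0; tr--)
        {
            var band = new Bitmap[tileCols];
            for (int tc = 0; tc < tileCols; tc++) band[tc] = new Bitmap(tilePaths[tr, tc]);

            for (int y = tileH - 1; y >= 0; y--)          // bottom-up inside the band as well
            {
                for (int tc = 0; tc < tileCols; tc++)
                    for (int x = 0; x < tileW; x++)
                    {
                        Color c = band[tc].GetPixel(x, y); // use LockBits instead for real data sizes
                        bw.Write(c.B); bw.Write(c.G); bw.Write(c.R);
                    }
                for (int p = 0; p < padding; p++) bw.Write((byte)0);
            }
            foreach (var b in band) b.Dispose();
        }
    }
}

GetPixel keeps the sketch short but is far too slow for 13K tiles; LockBits (or reading the tiles' raw rows directly) would be the practical choice.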
Depending on what you intend to do with this image upon completion, I would suggest dividing it into four parts and working with it that way. I have worked with 10,000 x 10,000 pixel images without the OOM exception being thrown.
If I try to create a bitmap bigger than 19000 px I get the error "Parameter is not valid".
How can I work around this?
System.Drawing.Bitmap myimage= new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660 megabytes of memory.
19000 pixels square, at 32bpp, would require 11552000000 bits (about 1.34 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in the System.Drawing.Bitmap would add to that. Going up to 20k pixels square at the same color depth would require 1.5GB just for the raw pixel memory. In a single object, you are using 3/4 of the space reserved for the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using 3/4 of the max size of a single object.
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
Set the PixelFormat when you create a new bitmap, for example:
new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)
and with the exact number above, it works for me. This may partly solve the problem.
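To see why the smaller PixelFormat helps, the formula quoted above (bit-depth * width * height / 8) can be applied directly; a small sketch (the helper is made up):

using System.Drawing;
using System.Drawing.Imaging;

// Approximate raw pixel memory: bit-depth * width * height / 8.
static long ApproxBytes(int width, int height, PixelFormat format)
{
    int bitsPerPixel = Image.GetPixelFormatSize(format);   // e.g. 32 for Format32bppArgb
    return (long)bitsPerPixel * width * height / 8;
}

// 2000 x 40000 at Format32bppArgb   -> 320,000,000 bytes (~305 MB)
// 2000 x 40000 at Format16bppRgb555 -> 160,000,000 bytes (~153 MB)

Halving the per-pixel size of course also halves the color precision, so whether Format16bppRgb555 is acceptable depends on what the image is for.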
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail. The main reasons are GDI+ limits in CreateBitmap. System.Drawing.Bitmap, internally, uses the GDI native API when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
I got this error when opening a TIF file. The problem was that the image could not be opened because of its CMYK colorspace; with the colorspace converted, I no longer got the error.
So I used the TagLib library to read the image dimensions instead.
Code sample:
try
{
// Try GDI+ first; it fails for some formats (e.g. CMYK TIFFs).
var image = new System.Drawing.Bitmap(filePath);
return string.Format("{0}px by {1}px", image.Width, image.Height);
}
catch (Exception)
{
try
{
// Fall back to TagLib, which reads the dimensions from the file's metadata.
TagLib.File file = TagLib.File.Create(filePath);
return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
}
catch (Exception)
{
return ("");
}
}