Large bitmap image memory allocation in blob detection, C# .NET

I have bitmap images around 14000x18000 pixels (~30 MB). I am trying to process them with different image processing libraries (OpenCV via the OpenCvSharp wrapper, AForge.NET, ...) in order to do blob detection. However, labeling the bitmap image causes memory allocation problems: the libraries try to map the labeled image to a 32-bit image.
Is there a way to do the labeling operation with less memory? (Cropping the image is not a solution.)
For example, labeling the bitmap into an 8-bit image instead of a 32-bit one?

In case there isn't an answer for the 8-bit thing... and even if there is...
For speed and memory purposes, I would highly recommend resizing the image down (not cropping). Use high-quality interpolation like this sample does, only resize to 50% rather than thumbnail size (7.5 MB in memory).
You didn't mention that you don't want to do this, and I am assuming you probably don't want to try it, thinking the library will do better blob detection at full resolution. Before you pooh-pooh the idea, test a full-resolution subsection of a sample image, of a size the library will handle, against the same subsection at 50%.
Unless you've actually done this, you can't know. You can also work out the maximum amount of memory the picture can use and compute a resize factor to target that number (reduce it for safety; you'll figure this out when things blow up in testing). If you care where the features are in the original image, scale the coordinates back up by the same factor.
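For example, a minimal sketch of the 50% high-quality downscale using GDI+ (the file names and the 0.5 factor are placeholders for your own values):

    using System;
    using System.Drawing;
    using System.Drawing.Drawing2D;
    using System.Drawing.Imaging;

    class ResizeDemo
    {
        static Bitmap ResizeHighQuality(Bitmap source, double factor)
        {
            int w = (int)(source.Width * factor);
            int h = (int)(source.Height * factor);
            // 24bpp avoids Graphics.FromImage failing on indexed formats.
            var result = new Bitmap(w, h, PixelFormat.Format24bppRgb);

            using (var g = Graphics.FromImage(result))
            {
                // High-quality interpolation preserves blob edges better
                // than the default low-quality modes.
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.PixelOffsetMode = PixelOffsetMode.HighQuality;
                g.DrawImage(source, new Rectangle(0, 0, w, h));
            }
            return result;
        }

        static void Main()
        {
            using (var original = new Bitmap("large-input.bmp")) // placeholder path
            using (var half = ResizeHighQuality(original, 0.5))
            {
                half.Save("half-size.bmp");
            }
        }
    }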

This may not solve your particular problem (or it might), but have you considered splitting/segmenting the frame into a 2x2 (or 3x3) matrix and working on each piece separately? Then, based on where you find the blobs in the 4 (or 9) frames, correlate and coalesce the adjoining blobs into single blobs. Of course, this high-level blob coalescing would have to be your own logic (a rough sketch of the split follows below).
PS> Admittedly, this is based on highly superficial knowledge of AForge. No hands-on experience whatsoever.
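For what it's worth, a rough sketch of the 2x2 split using plain GDI+. The input path and the ProcessTile hook are placeholders for your own detector (AForge, OpenCvSharp, ...), and the cross-tile merging is left as a stub, as noted above:

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    class TileDemo
    {
        static IEnumerable<Rectangle> SplitIntoTiles(int width, int height, int rows, int cols)
        {
            int tileW = (width + cols - 1) / cols;
            int tileH = (height + rows - 1) / rows;
            for (int r = 0; r < rows; r++)
                for (int c = 0; c < cols; c++)
                {
                    // Clip the last row/column so tiles never overrun the image.
                    int w = Math.Min(tileW, width - c * tileW);
                    int h = Math.Min(tileH, height - r * tileH);
                    yield return new Rectangle(c * tileW, r * tileH, w, h);
                }
        }

        static void Main()
        {
            using (var big = new Bitmap("huge-input.bmp")) // placeholder path
            {
                foreach (var region in SplitIntoTiles(big.Width, big.Height, 2, 2))
                {
                    // Clone copies just this region, so only one tile's
                    // pixels are materialized for labeling at a time.
                    using (var tile = big.Clone(region, big.PixelFormat))
                    {
                        // var blobs = ProcessTile(tile);   // your detector here
                        // Keep each blob's offset (region.X, region.Y) so blobs
                        // touching a tile edge can be merged with neighbors.
                    }
                }
            }
        }
    }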

Related

C# loading partial images into memory

Say, for example, that you have a 30,000 x 30,000 image, but at any given time you only need a specific section of it, say 512 x 512.
Is there a way (or framework) to iterate or query for the pixels you are looking for without having to load the entire image into memory first?
Check out Microsoft's DeepZoom. A good primer is here.
DeepZoom accomplishes its goal by partitioning an image (or a composition of images) into tiles. While tiling the image, the composer also creates a pyramid of lower resolution tiles for the original composition.
You can download the DeepZoom composer here.
Also check out OpenSeadragon for a JavaScript solution.
It depends.
If your image is a JPEG/PNG, loading part of it still requires decoding the whole file (because of the compression).
If the image is a BMP, you can write your own loader, because a BMP is a plain on-disk bitmap: you can seek directly to the pixels you need (sketched below).
My advice is to create separate 512x512 images and load them separately (it's what Google Maps does!).
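For the BMP case, a sketch of such a custom loader that seeks straight to the wanted rows. It assumes an uncompressed, bottom-up 24bpp BMP, and the file name and coordinates are placeholders; real code should validate the header more carefully:

    using System;
    using System.IO;

    class BmpWindowReader
    {
        // Reads a w*h window at (x, y), top-left origin, into a BGR byte array.
        static byte[] ReadWindow(string path, int x, int y, int w, int h)
        {
            using (var fs = File.OpenRead(path))
            using (var br = new BinaryReader(fs))
            {
                fs.Position = 10;
                int dataOffset = br.ReadInt32();   // where pixel data starts
                fs.Position = 18;
                int width  = br.ReadInt32();
                int height = br.ReadInt32();       // positive => rows stored bottom-up
                fs.Position = 28;
                int bpp = br.ReadUInt16();
                if (bpp != 24) throw new NotSupportedException("24bpp only in this sketch");

                int stride = ((width * 3) + 3) & ~3;   // rows are padded to 4 bytes
                var window = new byte[w * h * 3];

                for (int row = 0; row < h; row++)
                {
                    // Convert top-left coordinates to the file's bottom-up layout.
                    int fileRow = height - 1 - (y + row);
                    fs.Position = dataOffset + (long)fileRow * stride + x * 3;
                    byte[] rowData = br.ReadBytes(w * 3);
                    Buffer.BlockCopy(rowData, 0, window, row * w * 3, w * 3);
                }
                return window;
            }
        }

        static void Main()
        {
            // Placeholder path and coordinates: grab one 512x512 window.
            byte[] pixels = ReadWindow("world.bmp", 1000, 2000, 512, 512);
            Console.WriteLine("Read {0} bytes", pixels.Length);
        }
    }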

Reduce & Optimize Scanned Documents File Size

My customer has about 100,000 scanned documents (JPG) which they work with every day. I want to know how I can reduce the file size of those images for faster file transfer and browsing.
The documents are scanned in black/white and saved in JPG format. They have a resolution of 150 dpi and dimensions of 1275x1753 (width x height). The main problem is their size, which is between ~150 KB and ~500 KB; I think that is too high for a black/white picture.
Is there a chance I can reduce their size by changing the resolution, changing some color mode, or something? I tried playing around with Photoshop, but no luck.
The scanned documents are just for review, so I don't think they need much detail or the original picture size.
I'm going to write the program in C#, so please tell me if there is a good image library for this purpose.
If your images are JPEG-compressed, then they are either grayscale (8 bits per pixel) or full color (24 or 32 bits per pixel). I am not aware of any other JPEG types out there.
Given that, you probably won't get much benefit if you try to convert these images to other formats without changes to their size (number of pixels in both directions) and/or color space.
There is a possibility that JPEG 2000 might compress your images better than JPEG, but going through another round of lossy compression will introduce more artifacts. You might try it for yourself and see if the result is acceptable. I can't recommend any tools for this approach, though.
I would recommend converting your images to bilevel ones (i.e. with only two colors) and compressing them with one of the fax compression schemes (Group 3 or Group 4). You might reduce the image dimensions at the same time, too. This can be easily achieved using the Docotic.Pdf library (disclaimer: I work for the vendor of the library).
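If you want to prototype the bilevel + Group 4 idea without a third-party library, GDI+ can save a 1bpp bitmap as a CCITT4-compressed TIFF. A sketch (file names are placeholders, and Clone's 1bpp conversion is a crude threshold compared to what a dedicated library does):

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Linq;

    class FaxCompressDemo
    {
        static void Main()
        {
            using (var scanned = new Bitmap("scan.jpg")) // placeholder path
            // Convert to a bilevel (two-color) image.
            using (var bilevel = scanned.Clone(
                       new Rectangle(0, 0, scanned.Width, scanned.Height),
                       PixelFormat.Format1bppIndexed))
            {
                var tiff = ImageCodecInfo.GetImageEncoders()
                    .First(c => c.MimeType == "image/tiff");
                using (var p = new EncoderParameters(1))
                {
                    // CCITT4 is the Group 4 fax scheme; it requires 1bpp data.
                    p.Param[0] = new EncoderParameter(
                        Encoder.Compression, (long)EncoderValue.CompressionCCITT4);
                    bilevel.Save("scan.tif", tiff, p);
                }
            }
        }
    }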
Please take a look at my answer to a question similar to yours. The answer shows how to use RecompressWithGroup4Fax and/or Scale methods to recompress existing images in PDF.
There is also valuable advice from @plinth about JBIG2 compression and other stuff. Well worth reading.

OpenGL Tiling an Image into Textures and Run Out of Memory

I'm trying to implement a map of the world that the user can scale and pan around in. I'm running into a problem where my program runs out of memory.
Here's my situation:
0) I'm writing my program in C# and OpenGL
1) I downloaded a JPG image off the web that's a detailed image of the world, 19 MB in size. Each pixel is 24 bits.
2) I chop this image up into tiles at run time when my program initializes. I feed each tile into OpenGL using glGenTextures, glBindTexture, and glTexImage2D.
3) In my drawing routine, I draw the tiles one by one and try to draw all of them, regardless of whether or not they're on the screen (that's another problem, I know).
For a small image, everything works fine. For large images however, when I decode the JPG into a bitmap so that I can chop it up, my program's memory footprint explodes to over a gigabyte. I know that JPGs are compressed and the .NET JpegBitmapDecoder object is doing its job decompressing a huge image which is probably what gets me into trouble.
My question is, what exactly can one do in this kind of a situation? Since the C# decoding code is the 1st responsible party for blowing up memory, is there an alternate method of creating tiles and feeding them to OpenGL that I should pursue instead (minus bitmaps)?
Obviously you should buy more RAM!
You can store textures in a compressed format on the GPU. See this page (also the S3TC/DXT section). You may also want to swap textures in and out of video memory as needed. Don't keep the uncompressed data around if you don't need it.
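A sketch of that idea, assuming the OpenTK bindings (GL.*): passing a compressed internal format to TexImage2D asks the driver to compress the tile on upload, so the GPU keeps roughly 1/6th of the memory for RGB data.

    using OpenTK.Graphics.OpenGL;

    static class CompressedTiles
    {
        public static int UploadCompressedTile(byte[] bgrPixels, int width, int height)
        {
            int tex = GL.GenTexture();
            GL.BindTexture(TextureTarget.Texture2D, tex);

            // DXT1 internal format: the driver compresses the upload on the fly.
            GL.TexImage2D(TextureTarget.Texture2D, 0,
                PixelInternalFormat.CompressedRgbS3tcDxt1Ext,
                width, height, 0,
                PixelFormat.Bgr, PixelType.UnsignedByte, bgrPixels);

            GL.TexParameter(TextureTarget.Texture2D,
                TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D,
                TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
            return tex;
        }
    }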
I'd look into decoding the JPG to a bitmap file and using a stream on the bitmap to asynchronously load just the tile data currently in view (or as much as memory can handle), similar to e.g. Google Maps on Android.
I also saw this, but cannot confirm it: "OOM from a Bitmap is very suspect. This class will throw the memory exception for just about anything that goes wrong, including bad parameters, etc."

Large Bitmap Serialization

Is there an easy way, or a free library, that will allow you to append small bitmaps into one large bitmap on file? I'm capturing a web page that is sometimes quite tall. To avoid OOM exceptions, I load the capture in full-width horizontal slices and would like to save those to disk as I go. Appending to an open FileStream would be great. I'm not an expert on the bitmap format, but I am aware there is probably header/footer information that may prevent this.
There is header information, but it's a fixed size. You could write the header, then append rows of pixels, keeping track of the height and other information. When you're done, seek to the front of the file and update the header (see the sketch at the end of this answer).
Bitmap File Format is a pretty good description of the format.
I would suggest using the version 3 format unless there's something you really need from the V4 structure. 24 bits per pixel is the easiest to deal with, since you don't have to mess with a color palette. 16 and 32 bit color are easier than 4 and 8 bit color (which require a palette).
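A sketch of that write-header-then-append approach for a 24-bit V3 bitmap follows. It writes rows top-down by patching in a negative height at the end, which the uncompressed V3 format allows; the class name and layout choices are just illustrative:

    using System;
    using System.IO;

    class BmpAppender : IDisposable
    {
        readonly FileStream fs;
        readonly BinaryWriter w;
        readonly int width, stride;
        int rowsWritten;

        public BmpAppender(string path, int width)
        {
            this.width = width;
            stride = ((width * 3) + 3) & ~3;          // rows padded to 4 bytes
            fs = File.Create(path);
            w = new BinaryWriter(fs);

            // BITMAPFILEHEADER (14 bytes) with placeholder file size.
            w.Write((byte)'B'); w.Write((byte)'M');
            w.Write(0);                                // file size: patched later
            w.Write(0);                                // reserved
            w.Write(54);                               // pixel data offset

            // BITMAPINFOHEADER (40 bytes) with placeholder height.
            w.Write(40); w.Write(width);
            w.Write(0);                                // height: patched later
            w.Write((short)1); w.Write((short)24);     // planes, bits per pixel
            w.Write(0); w.Write(0);                    // BI_RGB, image size (0 is ok)
            w.Write(0); w.Write(0); w.Write(0); w.Write(0);
        }

        // Appends one full-width row of BGR pixels (width * 3 bytes).
        public void AppendRow(byte[] bgrRow)
        {
            w.Write(bgrRow, 0, width * 3);
            for (int pad = width * 3; pad < stride; pad++) w.Write((byte)0);
            rowsWritten++;
        }

        public void Dispose()
        {
            fs.Position = 2;
            w.Write(54 + stride * rowsWritten);        // patch file size
            fs.Position = 22;
            w.Write(-rowsWritten);                     // negative height => top-down rows
            w.Dispose();
        }
    }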

Image.FromFile is very SLOW. Any alternatives, optimizations?

I have a WinForms image list which contains, say, 200 images at 256x256.
I use the method Image.FromFile to load the images and then add them to the image list.
According to the ANTS .NET profiler, half of the program's time is spent in Image.FromFile. Is there a better way to load an image to add to an image list?
Another thing that might be optimized: the images that are loaded are larger than 256x256. So is there a way to load them resized, or something? I just want to uniformly scale them if their height is larger than 256 pixels.
Any idea to optimize this?
EDIT: They are JPEGs.
You don't say how much bigger than 256x256 the images actually are - modern digital camera images are much bigger than this.
Disk I/O can be very slow, and I would suggest you first get a rough idea how many megabytes of data you're actually reading.
Then you can decide if there's a subtle 'Image.FromFile' problem or a simple 'this is how slow my computer/drives/anti-virus scanner/network actually is' problem.
A simple test of the basic file I/O performance would be to do File.ReadAllBytes() for each image instead of Image.FromFile() - that will tell you what proportion of the time was spent on the disk and what on the image handling (see the sketch below). I suspect you'll find it's largely disk, at which point your only chance to speed things up might be one of the techniques for getting JFIF thumbnails out of files. Or perhaps one could imagine clever stuff with partial reads of progressive JPEGs, though I don't know if anyone does that, nor whether your files are progressive (they're probably not).
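A sketch of that comparison with Stopwatch (the folder name is a placeholder):

    using System;
    using System.Diagnostics;
    using System.Drawing;
    using System.IO;

    class LoadTimer
    {
        static void Main()
        {
            string[] files = Directory.GetFiles("images", "*.jpg"); // placeholder

            var sw = Stopwatch.StartNew();
            long bytes = 0;
            foreach (var f in files)
                bytes += File.ReadAllBytes(f).Length;   // disk only, no decoding
            Console.WriteLine("Raw read: {0} MB in {1} ms",
                bytes / (1024 * 1024), sw.ElapsedMilliseconds);

            sw.Restart();
            foreach (var f in files)
                using (var img = Image.FromFile(f)) { } // disk + full JPEG decode
            Console.WriteLine("Decode:   {0} ms", sw.ElapsedMilliseconds);
        }
    }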
I don't really know how fast you need these to load, but if the problem is that an interactive application is hanging while you load the files, then think of ways to make that better for the user - perhaps use a BackgroundWorker to load them asynchronously, perhaps sort the images by ascending file-size and load the small ones first for a better subjective performance.
If you are trying to make thumbnails, then try the code here; it will let you extract thumbnails without completely loading the image.
You can use FreeImage.NET, which is great for loading images on a background thread.
Image.FromFile contains a hidden mutex which locks up your app if you try to load large images, even on a background thread.
As for JPEGs, the FreeImage library can load JPEG images at a smaller scale more quickly, and it can also utilize embedded thumbnails.
The WPF imaging classes also allow loading images at a smaller resolution, but that's no help if you are restricted to WinForms.
If you want speed, then pre-scale your images; don't do it at runtime.
You didn't mention what type of images you're loading (JPEG, PNG, GIF, BMP). BMP is the fastest to decode since it has no (or almost no) compression.
Are your images 256-color (8-bit with a palette)? BMP, GIF and PNG all support that format, and it loads pretty fast.
It sounds like you need some sort of image thumbnails. Don't forget that JPEG images already contain a thumbnail inside, so you could extract only that small image and skip the scaling. Such thumbnails are, however, smaller than 256x256.
Another option is to move the loading logic onto a separate thread. It will not actually be faster, but from the user's perspective it can look like a significant speedup; a sketch follows below.
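A minimal sketch of that separate-thread idea in WinForms, using Task.Run to decode off the UI thread and BeginInvoke to marshal each finished image back (the control names are placeholders, and it assumes the form's handle has already been created):

    using System.Drawing;
    using System.IO;
    using System.Threading.Tasks;
    using System.Windows.Forms;

    class ThumbnailForm : Form
    {
        readonly ListView listView = new ListView();
        readonly ImageList imageList = new ImageList { ImageSize = new Size(128, 128) };

        void LoadImagesInBackground(string folder)
        {
            listView.LargeImageList = imageList;
            Task.Run(() =>
            {
                foreach (var file in Directory.GetFiles(folder, "*.jpg"))
                {
                    var img = Image.FromFile(file);   // slow part, off the UI thread
                    BeginInvoke((MethodInvoker)(() =>
                    {
                        imageList.Images.Add(img);    // ImageList copies the bitmap
                        img.Dispose();
                        listView.Items.Add(Path.GetFileName(file),
                                           imageList.Images.Count - 1);
                    }));
                }
            });
        }
    }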
I have a winforms image list
I would avoid using ImageList in this scenario. Internally, it is an old Windows common control intended for working with GUI icons, usually 32x32 pixels or less.
Another thing that might be optimized is, the images that are loaded are larger than 256x256.
Be aware that GDI+ is not optimized for working with very large bitmaps of the kind that you would normally edit using photo software. Photo editing software packages generally have sophisticated algorithms that divide the image into smaller parts, swap parts to and from disk as needed to efficiently utilize memory, and so on.
Resizing an image is CPU intensive, especially when using a good quality interpolation algorithm. Take a cue from how Windows Explorer does this -- it saves the thumbnails to a file on disk for future access, and it does the processing in the background to avoid monopolizing the system.
This might be too late but using ImageLocation property will do two things
speed up image loading
work around the bug (Image file is locked when you set the PictureBox Image property to a file)
pictureBox1.ImageLocation = "image.jpg";
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
