I'm trying to implement a map of the world that the user can scale and pan around in. I'm running into a problem where my program runs out of memory.
Here's my situation:
0) I'm writing my program in C# and OpenGL
1) I downloaded a detailed JPG image of the world off the web; the file is 19 MB, and each pixel is 24 bits.
2) I chop this image up into tiles at run time when my program initializes. I feed each tile into OpenGL using glGenTextures, glBindTexture, and glTexImage2D.
3) In my drawing routine, I draw the tiles one by one and try to draw all of them, regardless of whether or not they're on the screen (that's another problem, I know).
For a small image, everything works fine. For large images however, when I decode the JPG into a bitmap so that I can chop it up, my program's memory footprint explodes to over a gigabyte. I know that JPGs are compressed and the .NET JpegBitmapDecoder object is doing its job decompressing a huge image which is probably what gets me into trouble.
My question is, what exactly can one do in this kind of situation? Since the C# decoding code is the first thing responsible for blowing up memory, is there an alternate method of creating tiles and feeding them to OpenGL that I should pursue instead (one that avoids bitmaps)?
Obviously you should buy more RAM!
You can store textures in a compressed format on the GPU. See this page (also the S3TC/DXT section). You may also want to swap textures in and out of video memory as needed. Don't keep the uncompressed data around if you don't need it.
I'd look into decoding the jpg to a bitmap file and using a stream on the bitmap to asynchronously load only the relevant tile data in view (or as much as memory can handle) - similar to e.g. Google Maps on Android.
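If you stay with WPF's decoder, you can also avoid materializing the full-size bitmap at all: BitmapImage.DecodePixelWidth scales during decode, and CroppedBitmap cuts a tile out of the decoded result. A rough sketch (the TileLoader class and its parameters are my own illustration, not from the original post):

```csharp
// Sketch: decode a JPEG at reduced resolution, then crop one tile.
// Memory use is proportional to the *decoded* size, not the source size.
using System;
using System.Windows;
using System.Windows.Media.Imaging;

public static class TileLoader
{
    public static CroppedBitmap LoadTile(Uri source, int decodeWidth, Int32Rect tileRect)
    {
        var image = new BitmapImage();
        image.BeginInit();
        image.UriSource = source;
        image.DecodePixelWidth = decodeWidth;          // scale while decoding
        image.CacheOption = BitmapCacheOption.OnLoad;  // read the file eagerly, then release it
        image.EndInit();
        return new CroppedBitmap(image, tileRect);
    }

    // Number of tiles needed to cover one image dimension.
    public static int TileCount(int imagePixels, int tilePixels)
        => (imagePixels + tilePixels - 1) / tilePixels;
}
```

Combined with only uploading the tiles currently in view, this keeps the working set to a handful of tiles rather than the whole decoded image.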
I also saw this claim, though I can't confirm it: "OOM from a Bitmap is very suspect. This class will throw the memory exception for just about anything that goes wrong, including bad parameters etc."
Related
I'm writing a program that uses Win2D to load an image file and display it on the screen. However, the image file itself is only about 5 MB in size, yet when I load it using CanvasBitmap.LoadAsync, the process memory jumps to over 600 MB and then settles down to around 300 MB. Is there any way to reduce the process memory without having to resize the image manually in an image editor? I've seen code for resizing other types of bitmaps, and I was wondering whether that's also possible in Win2D.
Regards,
Alex
Update (1/27/2020)
Realized that bitmaps are uncompressed image files, so the only available options are either to reduce the image size somehow or to use a different file format. Decided on the latter because I'm working with PDF files. They can be converted into SVG files using Inkscape, and SVG files conveniently happen to be supported by Win2D.
Say for example that you have a 30,000 x 30,000 image, but at any given time you would only need a specific section of it that is for example 512 x 512.
Is there a way (or framework) to iterate or query for the pixels you are looking for without having to load the entire image into memory first?
Check out Microsoft's DeepZoom. A good primer is here.
DeepZoom accomplishes its goal by partitioning an image (or a
composition of images) into tiles. While tiling the image, the
composer also creates a pyramid of lower resolution tiles for the
original composition.
You can download the DeepZoom composer here.
Also check out OpenSeadragon for a JavaScript solution.
It depends.
If your image is a jpeg/png, then to read any part of it you have to decode the whole thing (because of the compression).
If the image is a bmp, you can write your own loader, because it's an uncompressed disk bitmap.
My advice is to create separate 512x512 images and load them separately (it's what Google Maps does!).
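For the uncompressed-bmp route, the row layout is simple enough that you can seek straight to the window you need. A sketch, assuming a standard bottom-up 24-bit BMP (the helper names are mine; in real code, read the pixel-data offset from bytes 10-13 of the file header rather than hard-coding it):

```csharp
// Sketch: read a w x h window from a bottom-up 24-bit BMP without
// loading the whole file into memory.
using System;
using System.IO;

public static class BmpWindow
{
    // BMP rows are padded to a multiple of 4 bytes.
    public static int Stride(int width) => (width * 3 + 3) & ~3;

    // Byte offset of pixel (x, y), with y counted from the top of the
    // image; BMP stores rows bottom-up, hence the flip.
    public static long PixelOffset(long pixelDataOffset, int imageWidth,
                                   int imageHeight, int x, int y)
        => pixelDataOffset + (long)(imageHeight - 1 - y) * Stride(imageWidth) + x * 3;

    public static byte[] ReadWindow(string path, int imageWidth, int imageHeight,
                                    long pixelDataOffset, int x, int y, int w, int h)
    {
        var result = new byte[w * h * 3];
        using (var fs = File.OpenRead(path))
        {
            for (int row = 0; row < h; row++)
            {
                // Seek to the start of this window row and read just w pixels.
                fs.Seek(PixelOffset(pixelDataOffset, imageWidth, imageHeight, x, y + row),
                        SeekOrigin.Begin);
                fs.Read(result, row * w * 3, w * 3);
            }
        }
        return result;
    }
}
```

For a 30,000 x 30,000 image this reads only 512 rows of 512 pixels each (~768 KB) per window instead of ~2.5 GB.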
I have bitmap images around 14000x18000 (~30 MB) in width and height. I am trying to process them with different image processing libraries (OpenCV via the OpenCvSharp wrapper, AForge.NET, ...) in order to do blob detection. However, labeling the bitmap image causes memory allocation problems, because the libraries map the labeled image to a 32-bit image.
Is there a way to do the labeling operation with a smaller amount of memory? (Cropping the image is not a solution.)
For example, labeling the bitmap image into an 8-bit image instead of 32?
In case there isn't an answer for the 8-bit thing... and even if there is...
For speed and memory purposes, I would highly recommend resizing the image down (not cropping). Use high-quality interpolation like this sample does, only resize to 50%, not to thumbnail size (7.5 MB in memory).
You didn't mention that you don't want to do this, and I am assuming you probably don't want to try it, thinking the library will do better blob detection at full resolution. Before you pooh-pooh the idea you need to test it with a full-resolution subsection of a sample image, of a size that the library will handle, compared to the same subsection at 50%.
Unless you've actually done this, you can't know. You can also figure out the maximum amount of memory the picture can use and compute a resize factor to target that number (reduce it for safety; you'll discover the right margin when things blow up in testing). If you care where the blobs are in the original image, scale their coordinates back up by the same factor.
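A minimal GDI+ sketch of the resize-then-map-back idea (class and method names are illustrative):

```csharp
// Sketch: downscale with high-quality interpolation before blob detection;
// a 50% resize cuts the pixel count (and the 32-bit label memory) by 4x.
using System.Drawing;
using System.Drawing.Drawing2D;

public static class Downscale
{
    public static Bitmap Resize(Bitmap source, double factor)
    {
        int w = (int)(source.Width * factor);
        int h = (int)(source.Height * factor);
        var result = new Bitmap(w, h);
        using (var g = Graphics.FromImage(result))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, w, h);
        }
        return result;
    }

    // Map a blob coordinate found in the resized image back to the original.
    public static (int X, int Y) ToOriginal(int x, int y, double factor)
        => ((int)(x / factor), (int)(y / factor));
}
```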
This may not solve your particular problem (or it might), but have you considered splitting/segmenting the frame into a 2x2 (or 3x3) matrix and trying to work on each part separately? Then, based on where you find the blobs in the 4 (or 9) frames, correlate and coalesce the adjoining blobs into single blobs. Of course, this high-level blob coalescing would have to be your own logic.
PS> Admittedly, this is based on highly superficial knowledge of AForge. No hands-on experience whatsoever.
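The coalescing step described above might look like this in outline. The Blob record is a hypothetical minimal stand-in for whatever blob type your library returns; coordinates are assumed to be in full-frame space:

```csharp
// Sketch: merge blobs found in separate sub-frames when their bounding
// boxes touch across a tile seam (1 px slack for the boundary itself).
using System;
using System.Collections.Generic;

public record Blob(int Left, int Top, int Right, int Bottom);

public static class BlobMerge
{
    // Boxes touch if they overlap or share an edge (with 1 px slack).
    public static bool Touch(Blob a, Blob b) =>
        a.Left <= b.Right + 1 && b.Left <= a.Right + 1 &&
        a.Top <= b.Bottom + 1 && b.Top <= a.Bottom + 1;

    public static Blob Union(Blob a, Blob b) => new(
        Math.Min(a.Left, b.Left), Math.Min(a.Top, b.Top),
        Math.Max(a.Right, b.Right), Math.Max(a.Bottom, b.Bottom));

    // Repeatedly merge touching blobs until nothing more merges.
    public static List<Blob> Coalesce(List<Blob> blobs)
    {
        var result = new List<Blob>(blobs);
        bool merged = true;
        while (merged)
        {
            merged = false;
            for (int i = 0; i < result.Count && !merged; i++)
                for (int j = i + 1; j < result.Count && !merged; j++)
                    if (Touch(result[i], result[j]))
                    {
                        result[i] = Union(result[i], result[j]);
                        result.RemoveAt(j);
                        merged = true;
                    }
        }
        return result;
    }
}
```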
I have a WinForms ImageList which contains, say, 200 images at 256x256.
I use the method Image.FromFile to load the images and then add them to the image list.
According to the ANTS .NET profiler, half of the program's time is spent in Image.FromFile. Is there a better way to load an image to add to an image list?
Another thing that might be optimized: the images that are loaded are larger than 256x256. So is there a way to resize them while loading, or something? I just want to uniformly scale them if their height is larger than 256 pixels.
Any ideas for optimizing this?
EDIT: They are JPEGs.
You don't say how much bigger than 256x256 the images actually are - modern digital camera images are much bigger than this.
Disk I/O can be very slow, and I would suggest you first get a rough idea how many megabytes of data you're actually reading.
Then you can decide if there's a subtle 'Image.FromFile' problem or a simple 'this is how slow my computer/drives/anti-virus scanner/network actually is' problem.
A simple test of the basic file I/O performance would be to do File.ReadAllBytes() for each image instead of Image.FromFile() - that will tell you what proportion of the time is spent on disk I/O versus image handling. I suspect you'll find it's largely disk, at which point your only chance to speed things up might be one of the techniques for getting JFIF thumbnails out of files. Or perhaps one could imagine clever stuff with partial reads of progressive JPEGs, though I don't know if anyone does that, nor whether your files are progressive (they're probably not).
I don't really know how fast you need these to load, but if the problem is that an interactive application is hanging while you load the files, then think of ways to make that better for the user - perhaps use a BackgroundWorker to load them asynchronously, perhaps sort the images by ascending file-size and load the small ones first for a better subjective performance.
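A sketch of the background-loading idea (the class and delegate shape are mine): read the raw bytes off the UI thread, smallest files first, then decode with Image.FromStream and add to the ImageList on the UI thread via Control.Invoke.

```csharp
// Sketch: load image files asynchronously, smallest first, so the user
// sees results quickly while the big files are still being read.
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

public static class BackgroundImageLoader
{
    public static Task LoadAllAsync(string folder, string pattern,
                                    Action<string, byte[]> onLoaded)
    {
        return Task.Run(() =>
        {
            // Ascending file size: better subjective performance.
            var files = new DirectoryInfo(folder).GetFiles(pattern)
                                                 .OrderBy(f => f.Length);
            foreach (var file in files)
                // Decode and add to the ImageList on the UI thread
                // (e.g. via Control.Invoke) from this callback.
                onLoaded(file.FullName, File.ReadAllBytes(file.FullName));
        });
    }
}
```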
If you are trying to make thumbnails, then try the code here; it will let you extract thumbnails without completely loading the image.
You can use FreeImage.NET, which is great for loading images in the background.
Image.FromFile contains a hidden mutex which can lock up your app if you try to load large images, even on a background thread.
As for JPEGs, the FreeImage library uses the OpenJPEG library, which can load JPEG images at a smaller scale more quickly. It can also utilize embedded thumbnails.
The WPF classes also allow loading images in smaller resolution, but this cannot be used if you are restricted to WinForms.
If you want speed, then prescale your images; don't do it at runtime.
You didn't mention what type of images you're loading (jpeg, png, gif, bmp); bmp is the fastest one, since it has no (or almost no) compression.
Are your images 256 colors (8-bit with a palette)? bmp, gif, and png support that format, and it can load pretty fast.
It sounds like you need some sort of image thumbnails? Don't forget that jpeg images already contain thumbnails inside, so you can extract just this small image and don't need to scale at all. Such thumbnails are smaller than 256x256, however.
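For the embedded-thumbnail idea, GDI+'s GetThumbnailImage will use the JPEG's embedded thumbnail when one is present (and fall back to scaling when it isn't). A sketch, with illustrative names:

```csharp
// Sketch: ask GDI+ for a thumbnail; for JPEGs with an embedded EXIF
// thumbnail this avoids a full-quality decode of the whole image.
using System;
using System.Drawing;

public static class Thumbs
{
    public static Image LoadThumbnail(string path, int size)
    {
        using (var full = Image.FromFile(path))
        {
            // The callback must be supplied but is never invoked by GDI+.
            return full.GetThumbnailImage(size, size, () => false, IntPtr.Zero);
        }
    }
}
```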
Another option is to move the loading logic into a separate thread. It will not be faster, but from the user's perspective it can look like a significant speedup.
I have a winforms image list
I would avoid using ImageList in this scenario. Internally, it is an old Windows common control intended for working with GUI icons, usually 32x32 pixels or less.
Another thing that might be optimized is, the images that are loaded are larger than 256x256.
Be aware that GDI+ is not optimized for working with very large bitmaps of the kind that you would normally edit using photo software. Photo editing software packages generally have sophisticated algorithms that divide the image into smaller parts, swap parts to and from disk as needed to efficiently utilize memory, and so on.
Resizing an image is CPU intensive, especially when using a good quality interpolation algorithm. Take a cue from how Windows Explorer does this -- it saves the thumbnails to a file on disk for future access, and it does the processing in the background to avoid monopolizing the system.
This might be too late, but using the ImageLocation property will do two things:
speed up image loading
work around the bug where the image file stays locked when you set the PictureBox Image property from a file
pictureBox1.ImageLocation = "image.jpg";
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
I have a live 16-bit gray-scale video stream that is pushed through a ring-buffer in memory as a raw, uncompressed byte stream (2 bytes per pixel, 2^18 pixels/frame, 32 frames/sec). (This is coming from a scientific grade camera, via a PCI frame-grabber). I would like to do some simple processing on the video (clip dynamic range, colorize, add overlays) and then show it in a window, using C#.
I have this working using Windows Forms & GDI (for each frame, build a Bitmap object, write raw 32-bit RGB pixel values based on my post-processing steps, and then draw the frame using the Graphics class). But this uses a significant chunk of CPU that I'd like to use for other things. So I'm interested in using WPF for its GPU-accelerated video display. (I'd also like to start using WPF for its data binding & layout features.)
But I've never used WPF before, so I'm unsure how to approach this. Most of what I find online about video & WPF involves reading a compressed video file from disk (e.g. WMV), or getting a stream from a consumer-grade camera using a driver layer that Windows already understands. So it doesn't seem to apply here (but correct me if I'm wrong about this).
So, my questions:
Is there a straightforward, WPF-based way to play video from raw, uncompressed bytes in memory (even if just as 8-bit grayscale, or 24-bit RGB)?
Will I need to build DirectShow filters (or other DirectShow/Media Foundation-ish things) to get the post-processing working on the GPU?
Also, any general advice / suggestions for documentation, examples, blogs, etc that are appropriate to these tasks would be appreciated. Thanks!
Follow-up: After some experimentation, I found WriteableBitmap to be fast enough for my needs, and extremely easy to use correctly: Simply call WritePixels() and any Image controls bound to it will update themselves. InteropBitmap with memory-mapped sections is noticeably faster, but I had to write p/invokes to kernel32.dll to use it on .NET 3.5.
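The WriteableBitmap route described above can be sketched like this. The 512x512 size matches the 2^18 pixels/frame from the question, and ClipTo8Bit stands in for the "clip dynamic range" step; all names here are illustrative:

```csharp
// Sketch: clip a 16-bit frame to 8-bit, then push it into a
// WriteableBitmap that an <Image> element in XAML is bound to.
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

public class FrameView
{
    const int W = 512, H = 512; // 2^18 pixels per frame

    public WriteableBitmap Bitmap { get; } =
        new WriteableBitmap(W, H, 96, 96, PixelFormats.Gray8, null);

    // Call on the UI thread (e.g. via Dispatcher.Invoke), once per frame;
    // bound Image controls repaint automatically.
    public void Push(byte[] gray8Frame)
        => Bitmap.WritePixels(new Int32Rect(0, 0, W, H), gray8Frame, W, 0);

    // Map the range [lo, hi] of a 16-bit frame onto 0..255.
    public static byte[] ClipTo8Bit(ushort[] frame, int lo, int hi)
    {
        var result = new byte[frame.Length];
        for (int i = 0; i < frame.Length; i++)
        {
            int v = frame[i];
            result[i] = (byte)(v <= lo ? 0 : v >= hi ? 255
                               : (v - lo) * 255 / (hi - lo));
        }
        return result;
    }
}
```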
My VideoRendererElement, though very efficient, does use some hackery to make it work. You may also want to experiment with the WriteableBitmap in .NET 3.5 SP1.
The InteropBitmap is very fast too - much more efficient than the WriteableBitmap, as it's not double buffered. It can be subject to video tearing, though.
Some further Google-searching yielded this:
http://www.codeplex.com/VideoRendererElement
which I'm looking into now, but may be the right approach here. Of course further thoughts/suggestions are still very much welcome.