Say, for example, that you have a 30,000 x 30,000 pixel image, but at any given time you only need a specific section of it, say 512 x 512.
Is there a way (or framework) to iterate or query for the pixels you are looking for without having to load the entire image into memory first?
Check out Microsoft's DeepZoom. A good primer is here.
DeepZoom accomplishes its goal by partitioning an image (or a composition of images) into tiles. While tiling the image, the composer also creates a pyramid of lower resolution tiles for the original composition.
You can download the DeepZoom composer here.
Also check out OpenSeadragon for a JavaScript solution.
It depends.
If your image is a JPEG/PNG, loading any part of it means decoding the whole thing (because of the compression).
If the image is a BMP, you can write your own loader, because it is an uncompressed bitmap on disk.
My advice is to create separate 512x512 images and load them separately (it's what Google Maps does!).
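To make the BMP route concrete, here is a minimal sketch that seeks straight to the requested pixels. It assumes an uncompressed 24-bit BMP with a standard BITMAPINFOHEADER and top-left region coordinates; real code should validate the header first.

using System;
using System.IO;

static byte[] ReadBmpRegion(string path, int x, int y, int w, int h)
{
    using var fs = new FileStream(path, FileMode.Open, FileAccess.Read);
    using var br = new BinaryReader(fs);

    fs.Seek(10, SeekOrigin.Begin);
    int pixelDataOffset = br.ReadInt32();      // where the pixel rows start
    fs.Seek(18, SeekOrigin.Begin);
    int width  = br.ReadInt32();
    int height = br.ReadInt32();

    const int bpp = 3;                         // 24-bit pixels
    int stride = (width * bpp + 3) & ~3;       // rows are padded to 4 bytes

    var region = new byte[w * h * bpp];
    for (int row = 0; row < h; row++)
    {
        int srcRow = height - 1 - (y + row);   // BMP rows are stored bottom-up
        fs.Seek(pixelDataOffset + (long)srcRow * stride + x * bpp, SeekOrigin.Begin);
        fs.Read(region, row * w * bpp, w * bpp);
    }
    return region;
}

Only h reads of w * 3 bytes each touch the disk, so a 512x512 window costs well under a megabyte no matter how large the source image is.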
I'm trying to implement a map of the world that the user can scale and pan around in. I'm running into a problem where my program runs out of memory.
Here's my situation:
0) I'm writing my program in C# and OpenGL
1) I downloaded a JPG image off the web: a detailed image of the world that is 19 MB in size. Each pixel is 24 bits.
2) I chop this image up into tiles at run time when my program initializes. I feed each tile into OpenGL using glGenTextures, glBindTexture, and glTexImage2D.
3) In my drawing routine, I draw the tiles one by one and try to draw all of them regardless of whether or not they're on the screen (that's another problem, I know).
For a small image, everything works fine. For large images however, when I decode the JPG into a bitmap so that I can chop it up, my program's memory footprint explodes to over a gigabyte. I know that JPGs are compressed and the .NET JpegBitmapDecoder object is doing its job decompressing a huge image which is probably what gets me into trouble.
My question is, what exactly can one do in this kind of situation? Since the C# decoding code is the first responsible party for blowing up memory, is there an alternate method of creating tiles and feeding them to OpenGL that I should pursue instead (minus bitmaps)?
Obviously you should buy more RAM!
You can store textures in a compressed format on the GPU. See this page (also the S3TC/DXT section). You may also want to swap textures in and out of video memory as needed. Don't keep the uncompressed data around if you don't need it.
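As a rough sketch of what that looks like from C# with OpenTK-style bindings (exact enum names vary between OpenTK versions; tileWidth, tileHeight, and tilePixels are assumed to come from your tiling step), requesting a compressed internal format makes the driver compress each tile on upload:

int tex = GL.GenTexture();
GL.BindTexture(TextureTarget.Texture2D, tex);
GL.TexImage2D(TextureTarget.Texture2D, 0,
    PixelInternalFormat.CompressedRgbS3tcDxt1Ext,  // stored compressed on the GPU
    tileWidth, tileHeight, 0,
    PixelFormat.Bgr, PixelType.UnsignedByte,       // layout of the source bytes
    tilePixels);                                   // byte[] for this one tile

DXT1 is roughly one sixth the size of raw 24-bit data, and you can dispose the managed tile bytes as soon as the upload returns.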
I'd look into decoding the JPG to a bitmap file and using a stream over that bitmap to asynchronously load the relevant tile data in view (or as much as memory can handle), similar to e.g. Google Maps on Android.
I also saw the following claim, but cannot confirm it: "OOM from a Bitmap is very suspect. This class will throw the memory exception for just about anything that goes wrong, including bad parameters etc."
Is there a way to grab a screen capture of a specific section of the screen and compare it to another image already stored on disk? If the images are identical, it would prompt something.
If you are trying to compare an image (or part of it) with other images: I searched the internet and found that LEADTOOLS has correlation functions that compare an image with all the areas of the same dimensions in another image. For more information, see this link: http://www.leadtools.com/help/leadtools/v175/dh/po/leadtools.imageprocessing.core~leadtools.imageprocessing.core.correlationcommand.html
OR
This project may be what you're looking for:
https://github.com/cameronmcefee/Image-Diff-View-Modes/commit/8e95f70c9c47168305970e91021072673d7cdad8
Check http://www.developerfusion.com/code/4630/capture-a-screen-shot/ for examples on how to capture the screenshot.
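For the capture half, a minimal GDI+ sketch (the region coordinates and the reference path are made up) grabs the section and does a naive exact comparison; GetPixel is slow, so swap in LockBits if you do this often:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

var region = new Rectangle(100, 100, 300, 200);    // section of the screen
using var capture = new Bitmap(region.Width, region.Height, PixelFormat.Format24bppRgb);
using (var g = Graphics.FromImage(capture))
    g.CopyFromScreen(region.Location, Point.Empty, region.Size);

using var reference = new Bitmap("reference.png"); // hypothetical stored image
bool identical = capture.Width == reference.Width && capture.Height == reference.Height;
for (int y = 0; identical && y < capture.Height; y++)
    for (int x = 0; identical && x < capture.Width; x++)
        identical = capture.GetPixel(x, y) == reference.GetPixel(x, y);

if (identical)
    MessageBox.Show("Match!");                     // the "prompt something" part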
The image comparison is a lot more complicated. The image that is stored on disk might have a different size, a different aspect ratio, or a different depth. Unless you can be sure that the screenshot and the image on the disk are absolutely identical, you'll have to look at a method that identifies similar images.
Take a look at the pHash Demo, which takes two images and outputs a similarity score. The algorithm it uses is pretty straightforward, and is available on the website.
I am trying to load a large image (size > 80 MB) into a web page. The user doesn't really need to see the whole image at once and only needs to see the requested portion.
The dimensions of the image are approx 10k x 10k.
I looked around a little bit but couldn't find a reasonable solution to the problem.
I would like to split the image into however many pieces are needed (e.g. 9 pieces of 3k x 3k each) and load them into the page as the user requests or moves into the next section of the image (e.g. if the user crosses a 3k x 3k boundary, the server sends the side or bottom piece as needed).
I did find ways to split the image, but couldn't find a way to do the splitting dynamically and stitch the pieces together dynamically.
UPDATE
I tried using Microsoft Deep Zoom Composer but it didn't work. I think it does not support such a large image size. I came to that conclusion after trying the same image in Microsoft Photosynth and getting an error message that it only supports files up to 32 MB. Deep Zoom Composer and Photosynth use the same file format, so I think they might have the same file size constraints.
Deep Zoom Composer also didn't produce a meaningful error message: it claimed the file format was not right, but the file is in the right format (i.e. JPG).
Thanks
You could use Microsoft Deep Zoom Composer
Since that is a rather large image to display I am going to assume you cannot size it to more manageable dimensions and you have to show it in some grid. You could use the same idea as Google Maps where you load the blocks individually as the user moves across the view.
How you are going to structure that view will be up to you since even 3k x 3k pixels is somewhat larger than most screen resolutions. You may want even smaller blocks.
I don't know of any component off-hand to do what you require, but rolling your own shouldn't be too difficult. You could use a container div with the grid divs arranged inside and load each background image as it comes into view, or you could render a fixed number of divs as your grid and load the necessary background images as your "viewport" moves.
Hope that makes sense.
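If you do roll your own, the server-side half can be as simple as cutting the image into tiles once, offline, and letting the page request tiles by name. A minimal GDI+ sketch (the 512-pixel tile size and the tile_{x}_{y}.jpg naming scheme are my assumptions):

using System;
using System.Drawing;
using System.Drawing.Imaging;

const int TileSize = 512;
using var source = new Bitmap("world.jpg");        // hypothetical source; loaded once, offline

for (int ty = 0; ty * TileSize < source.Height; ty++)
for (int tx = 0; tx * TileSize < source.Width; tx++)
{
    int w = Math.Min(TileSize, source.Width  - tx * TileSize);
    int h = Math.Min(TileSize, source.Height - ty * TileSize);
    var src = new Rectangle(tx * TileSize, ty * TileSize, w, h);

    using var tile = new Bitmap(w, h);
    using (var g = Graphics.FromImage(tile))
        g.DrawImage(source, new Rectangle(0, 0, w, h), src, GraphicsUnit.Pixel);
    tile.Save($"tile_{tx}_{ty}.jpg", ImageFormat.Jpeg);
}

The client then only ever downloads the handful of small JPEGs that intersect the current viewport.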
I think that you should convert it to another file format, for example BMP, so you can easily navigate in the file to the needed pixels, compose a new image with those pixels, and send it, or save it and reference it. The big problem would be the new image size.
I have bitmap images around 14000x18000 pixels (~30 MB) in width and height. I am trying to process them with different image processing libraries (OpenCV via the OpenCvSharp wrapper, AForge.NET, ...) in order to do blob detection. However, labeling the bitmap image causes memory allocation problems, because the libraries map the labeled image to a 32-bit image.
Is there a way to do the labeling operation with a smaller amount of memory? (Cropping the image is not a solution.)
For example, could the labeling produce an 8-bit image instead of a 32-bit one?
In case there isn't an answer for the 8-bit thing... and even if there is...
For speed and memory purposes, I would highly recommend resizing the image down (not cropping). Use high-quality interpolation like this sample does, only resize to 50% rather than to a thumbnail (7.5 MB in memory).
You didn't mention that you don't want to do this, and I am assuming you probably don't want to try it, thinking the library will do better blob detection at full resolution. But before you pooh-pooh the idea, you need to test it: take a full-resolution subsection of a sample image, of a size the library will handle, and compare the results against the same subsection at 50%.
Unless you've actually done this, you can't know. You can also figure out the maximum amount of memory the picture can use and compute a resize factor to target that number (reduce it for safety - you'll figure this out when things blow up in testing). If you care where the detected blobs are in the original image, scale their coordinates back up by the factor.
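As a concrete version of the resize suggestion, a GDI+ sketch with high-quality interpolation (the 0.5 factor is the starting guess discussed above, to be tuned against your memory budget):

using System.Drawing;
using System.Drawing.Drawing2D;

static Bitmap Downscale(Bitmap source, double factor)
{
    int w = (int)(source.Width * factor);
    int h = (int)(source.Height * factor);
    var result = new Bitmap(w, h);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(source, new Rectangle(0, 0, w, h));
    }
    return result;
}

Blob coordinates found at the reduced scale map back to the original image by dividing by the factor, e.g. originalX = detectedX / 0.5.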
This may not solve your particular problem (or it might), but have you considered splitting/segmenting the frame into a 2x2 (or 3x3) matrix and working on each piece separately? Then, based on where you find the blobs in the 4 (or 9) frames, correlate and coalesce the adjoining blobs into single blobs. Of course, this high-level blob coalescing would have to be your own logic.
PS> Admittedly, this is working off highly superficial knowledge of AForge. No hands-on experience whatsoever.
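A sketch of that 2x2 split (DetectBlobs is a hypothetical stand-in for your library's detector, e.g. AForge's BlobCounter; merging blobs that straddle a seam is the "own logic" part and is left out):

using System;
using System.Collections.Generic;
using System.Drawing;

static IEnumerable<Rectangle> DetectBlobs(Bitmap part) =>
    throw new NotImplementedException();           // your library call goes here

static List<Rectangle> DetectInQuadrants(Bitmap image)
{
    var blobs = new List<Rectangle>();
    int halfW = image.Width / 2, halfH = image.Height / 2;
    for (int qy = 0; qy < 2; qy++)
    for (int qx = 0; qx < 2; qx++)
    {
        var bounds = new Rectangle(qx * halfW, qy * halfH, halfW, halfH);
        using var quadrant = image.Clone(bounds, image.PixelFormat);
        foreach (var blob in DetectBlobs(quadrant))
        {
            var r = blob;
            r.Offset(bounds.X, bounds.Y);          // back to full-image coordinates
            blobs.Add(r);
        }
    }
    return blobs;
}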
I have a WinForms image list which contains, say, 200 images at 256x256.
I use the method Image.FromFile to load the images and then add them to the image list.
According to ANTS .NET profiler, half of the program's time is spent in Image.FromFile. Is there a better way to load an image to add to an image list?
Another thing that might be optimized: the images that are loaded are larger than 256x256. So is there a way to resize them as they load, or something? I just want to uniformly scale them if their height is larger than 256 pixels.
Any idea to optimize this?
EDIT: They are JPEGs.
You don't say how much bigger than 256x256 the images actually are - modern digital camera images are much bigger than this.
Disk I/O can be very slow, and I would suggest you first get a rough idea how many megabytes of data you're actually reading.
Then you can decide if there's a subtle 'Image.FromFile' problem or a simple 'this is how slow my computer/drives/anti-virus scanner/network actually is' problem.
A simple test of the basic file I/O performance would be to do File.ReadAllBytes() for each image instead of Image.FromFile() - that will tell you what proportion of the time is spent on the disk and what on the image handling. I suspect you'll find it's largely disk, at which point your only chance to speed things up might be one of the techniques for getting JFIF thumbnails out of the files. Or perhaps one could imagine clever stuff with partial reads of progressive JPEGs, though I don't know if anyone does that, nor whether your files are progressive (they're probably not).
I don't really know how fast you need these to load, but if the problem is that an interactive application is hanging while you load the files, then think of ways to make that better for the user - perhaps use a BackgroundWorker to load them asynchronously, perhaps sort the images by ascending file-size and load the small ones first for a better subjective performance.
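In that spirit, a quick diagnostic that times raw disk reads against full decodes over the same files (the photos directory is a placeholder; note the first pass warms the OS file cache, so run it twice and compare second-run numbers):

using System;
using System.Diagnostics;
using System.Drawing;
using System.IO;

string[] images = Directory.GetFiles("photos", "*.jpg");

var sw = Stopwatch.StartNew();
long bytes = 0;
foreach (var path in images)
    bytes += File.ReadAllBytes(path).Length;       // disk cost only
Console.WriteLine($"Raw I/O: {bytes / 1048576.0:F1} MB in {sw.ElapsedMilliseconds} ms");

sw.Restart();
foreach (var path in images)
    using (var img = Image.FromFile(path)) { }     // disk + decode cost
Console.WriteLine($"Decode:  {sw.ElapsedMilliseconds} ms");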
If you are trying to make thumbnails, then try this code here; it will let you extract thumbnails without completely loading the image.
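For reference, the usual GDI+ trick looks roughly like this: passing validateImageData: false keeps GDI+ from decoding the whole frame up front, and GetThumbnailImage will return the JPEG's embedded EXIF thumbnail when one exists (embedded thumbnails are small, typically around 160x120, so expect some upscaling at 256):

using System;
using System.Drawing;
using System.IO;

static Image LoadThumb(string path, int size)
{
    using var fs = File.OpenRead(path);
    using var img = Image.FromStream(fs,
        useEmbeddedColorManagement: false,
        validateImageData: false);                 // skip the full decode
    return img.GetThumbnailImage(size, size, () => false, IntPtr.Zero);
}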
You can use FreeImage.NET, which is great for loading images in the background.
Image.FromFile contains a hidden mutex which locks up your app if you try to load large images, even on a background thread.
As for JPEGs, the FreeImage library uses the OpenJPEG library, which can load JPEG images at a smaller scale more quickly. It can also utilize embedded thumbnails.
The WPF classes also allow loading images in smaller resolution, but this cannot be used if you are restricted to WinForms.
If you want speed, prescale your images; don't do it at runtime.
You didn't mention what type of images you're loading (JPEG, PNG, GIF, BMP); of course BMP is the fastest one, since it has no (or almost no) compression.
Are your images 256 colors (8-bit with palette)? BMP, GIF, and PNG support that format, which can load pretty fast.
It sounds like you need some sort of image thumbnails? Don't forget that JPEG images already contain thumbnails inside, so you can extract only that small image and do not need to scale. Such thumbnails, however, are smaller than 256x256.
Another option is to move the loading logic into a separate thread. It will not be faster, but from the user's perspective it can look like a significant speedup.
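A sketch of that last suggestion inside a WinForms Form (paths and imageList1 are assumed to exist on the form; the Image.FromFile mutex caveat from the earlier answer still applies):

using System;
using System.Drawing;
using System.Threading.Tasks;

void LoadImagesInBackground(string[] paths)
{
    Task.Run(() =>
    {
        foreach (var path in paths)
        {
            var img = Image.FromFile(path);        // slow part, off the UI thread
            Invoke(new Action(() => imageList1.Images.Add(img)));
        }
    });
}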
I have a WinForms image list
I would avoid using ImageList in this scenario. Internally, it is an old Windows common control intended for working with GUI icons, usually 32x32 pixels or less.
Another thing that might be optimized: the images that are loaded are larger than 256x256.
Be aware that GDI+ is not optimized for working with very large bitmaps of the kind that you would normally edit using photo software. Photo editing software packages generally have sophisticated algorithms that divide the image into smaller parts, swap parts to and from disk as needed to efficiently utilize memory, and so on.
Resizing an image is CPU intensive, especially when using a good quality interpolation algorithm. Take a cue from how Windows Explorer does this -- it saves the thumbnails to a file on disk for future access, and it does the processing in the background to avoid monopolizing the system.
This might be too late, but using the ImageLocation property will do two things:
speed up image loading
work around the bug (the image file is locked when you set the PictureBox Image property from a file)
pictureBox1.ImageLocation = "image.jpg";
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;