Is there an easy way, or free library, that will allow you to append small bitmaps into one large bitmap on file? I'm doing a capture of a web page that sometimes is quite large vertically. To avoid OOM exceptions I load small vertical by full horizontal slices of the capture into memory and would like to save those to disk. An append to an open filestream would be great. I'm not an expert on the Bitmap format but I am aware there is probably header / footer information that may prevent this.
There is header information, but it's a fixed size. You could write the header information and then append rows of pixels, keeping track of the height and other information. When you're done, position to the front of the file and update the header.
Bitmap File Format is a pretty good description of the format.
I would suggest using the version 3 format unless there's something you really need from the V4 structure. 24 bits per pixel is the easiest to deal with, since you don't have to mess with a color palette. 16 and 32 bit color are easier than 4 and 8 bit color (which require a palette).
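To make the header-then-append idea concrete, here is a minimal Python sketch (the question is about C#, but the struct layout is identical) of writing a V3, 24-bit BMP incrementally. It uses a negative `biHeight` to mark the bitmap as top-down, which lets you append captured slices in display order; at the end you seek back and patch the height and size fields. The function names are my own, not from any library.

```python
import struct

def write_bmp_header(f, width, height=0):
    # BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes), 24 bpp, no palette
    row_size = (width * 3 + 3) & ~3           # rows are padded to 4-byte boundaries
    file_size = 54 + row_size * height
    f.write(struct.pack('<2sIHHI', b'BM', file_size, 0, 0, 54))
    # negative biHeight = top-down bitmap, so rows can be appended in display order
    f.write(struct.pack('<IiiHHIIiiII', 40, width, -height, 1, 24, 0,
                        row_size * height, 2835, 2835, 0, 0))

def append_rows(f, width, rows):
    # rows: iterable of BGR byte strings, one scan line each
    pad = b'\x00' * ((-(width * 3)) % 4)
    for row in rows:
        f.write(row + pad)

def finalize(f, width, height):
    # patch the fields we couldn't know up front
    row_size = (width * 3 + 3) & ~3
    f.seek(2);  f.write(struct.pack('<I', 54 + row_size * height))  # bfSize
    f.seek(22); f.write(struct.pack('<i', -height))                 # biHeight (top-down)
    f.seek(34); f.write(struct.pack('<I', row_size * height))       # biSizeImage
```

The same seek-and-patch pattern maps directly onto a C# `FileStream` with `Seek` and `BinaryWriter`.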
Related
I'm currently trying to figure out, out of interest, how JPEGs are made in depth. I found documents on the different sections (SOI, SOF, SOS, EOI, etc.), which are pretty straightforward, but not how to get a single pixel out of there.
My first thought was to make a small image, 2x2 for example, but with all the headers and sections it's still too big to isolate the pixel information without knowing the exact location and method to extract it. I'm sure it's compressed, but is there a way to get it out manually? (as RGB?)
Does anyone have a clue how to do this?
Getting the value of a single pixel of a JPEG image requires parsing some (if not most) of those sections anyway.
There's a good step-by-step guide available at https://www.imperialviolet.org/binary/jpeg/ (though the code is in Haskell, so it might be moderately inscrutable to mere mortals) that explains the concepts behind turning a JPEG into a bunch of RGB values.
This is the only source I know that explains JPEG end-to-end:
https://www.amazon.com/gp/product/B01JXRY4R0/ref=dbs_a_def_rwt_bibl_vppi_i4
Parsing the structure of a JPEG stream is easy. Decoding a JPEG scan is very difficult and involves several compression steps. Plus there are two different types of scan that are commonly in use (progressive & sequential).
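To illustrate the "easy" part, here is a rough Python sketch that walks the marker segments of a JPEG stream. It only lists the sections (it does not decode the scan, which is where the real work is); the segment-name table is partial, and the scan-skipping logic is simplified (it treats stuffed `0xFF00` bytes and restart markers as scan data).

```python
import struct

def list_segments(data: bytes):
    """Walk the marker segments of a JPEG byte string."""
    assert data[:2] == b'\xff\xd8', 'not a JPEG (missing SOI)'
    segments, pos = [('SOI', 0, 0)], 2
    names = {0xC0: 'SOF0', 0xC2: 'SOF2', 0xC4: 'DHT', 0xDB: 'DQT',
             0xDA: 'SOS', 0xD9: 'EOI', 0xE0: 'APP0', 0xFE: 'COM'}
    while pos < len(data):
        assert data[pos] == 0xFF, 'expected marker'
        marker = data[pos + 1]
        if marker == 0xD9:                       # EOI has no length field
            segments.append(('EOI', pos, 0))
            break
        length = struct.unpack('>H', data[pos + 2:pos + 4])[0]
        segments.append((names.get(marker, hex(marker)), pos, length))
        pos += 2 + length
        if marker == 0xDA:
            # entropy-coded scan follows; skip to the next real marker
            while pos < len(data) - 1 and not (
                    data[pos] == 0xFF and
                    data[pos + 1] not in (0x00, *range(0xD0, 0xD8))):
                pos += 1
    return segments
```

Everything after the SOS entry is the compressed scan: to turn it into pixels you still need entropy decoding, dequantization, the inverse DCT, and color conversion.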
My customer has about 100,000 scanned documents (jpg) which they work with everyday. I want to know how can I reduce the file size of those images for faster file transfer and browsing.
The documents are scanned in black/white, saved in jpg format. They have a resolution of 150dpi and size of 1275x1753 (width x height). The main problem is their size which is between ~150kb and ~500kb which I think is too high for a black/white picture.
Is there a chance I can reduce their size by changing the resolution, the color mode, or something similar? I tried playing around with Photoshop but had no luck.
The scanned documents are for review purposes only, so they don't need much detail or the original picture size.
I'm going to write the program in C#, so please tell me if there is a good image library for this purpose.
If your images are JPEG-compressed, then they are either grayscale (8 bits per pixel) or full color (24 or 32 bits per pixel). I am not aware of any other JPEG types out there.
Given that, you probably won't get much benefit if you try to convert these images to other formats without changes to their size (number of pixels in both directions) and/or color space.
There is a possibility that JPEG 2000 might compress your images better than JPEG, but another lossy compression will introduce some more artifacts. You might try for yourself and see if this approach is acceptable for you. I can't recommend you any tools for this approach, though.
I would recommend you try converting your images to bilevel ones (i.e. with only two colors) and compressing them with one of the FAX compression schemes (Group 3 or Group 4). You might try to reduce image sizes at the same time, too. This can be easily achieved using the Docotic.Pdf library (Disclaimer: I work for the vendor of the library).
Please take a look at my answer to a question similar to yours. The answer shows how to use RecompressWithGroup4Fax and/or Scale methods to recompress existing images in PDF.
There is also valuable advice from @plinth about JBIG2 compression and other stuff. Well worth reading.
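If you just want to prototype the bilevel + Group 4 idea before committing to a library, here is a small Python sketch using Pillow (an assumption on my part; the questioner plans to use C#, where System.Drawing or a TIFF library would play the same role). It converts a scan to 1-bit and writes a Group 4-compressed TIFF, optionally downscaling first.

```python
from PIL import Image

def to_group4_tiff(src_path: str, dst_path: str, scale: float = 1.0):
    """Convert a scanned page to a bilevel, Group 4-compressed TIFF."""
    img = Image.open(src_path)
    if scale != 1.0:
        img = img.resize((int(img.width * scale), int(img.height * scale)))
    # "1" = bilevel; Group 4 fax compression only applies to 1-bit images
    img.convert('1').save(dst_path, compression='group4')
```

For typical black-and-white text scans this usually lands well under the ~150 KB JPEG baseline, though you should eyeball the output: 1-bit conversion can degrade faint pencil marks or grayscale stamps.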
I am trying to load a large image (size > 80 MB) into a web page. The user doesn't really need to see the whole image at once and only needs to see the requested portion.
The dimensions of the image are approx 10k x 10k.
I looked around a little bit but couldn't find a reasonable solution to the problem.
I would like to split the image into some number of pieces as needed (for example 9 pieces, 3k x 3k each) and load them into the page as the user requests them or moves into the next section of the image (e.g. if the user crosses a 3k x 3k boundary, the server sends the side or bottom piece as needed).
I did find ways to split the image, but not a way to do it dynamically and stitch the pieces together on the fly.
UPDATE
I tried using Microsoft Deep Zoom Composer but it didn't work. I think it does not support such a large image. I came to that conclusion after trying the same image in Microsoft PhotoSynth, which gave an error message saying it only supports files up to 32 MB. Deep Zoom Composer and PhotoSynth use the same file format, so I suspect they share the same file size constraint.
Deep Zoom Composer didn't produce a meaningful error message: it claimed the file format was wrong, even though the file was in the right format (i.e. JPG).
Thanks
You could use Microsoft Deep Zoom Composer
Since that is a rather large image to display I am going to assume you cannot size it to more manageable dimensions and you have to show it in some grid. You could use the same idea as Google Maps where you load the blocks individually as the user moves across the view.
How you are going to structure that view will be up to you since even 3k x 3k pixels is somewhat larger than most screen resolutions. You may want even smaller blocks.
I don't know of any component off-hand to do what you require but rolling your own shouldn't be too difficult. You could use some container div with the grid divs arranged inside and as each comes into view you load the background image bit or you could render a fixed number of divs as your grid and load the necessary background images as your "viewport" moves.
Hope that makes sense.
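The server-side half of that Google Maps-style approach is just pre-cutting the big image into a grid of tiles that the div-based viewport can request lazily. A rough Python sketch using Pillow (names and the `row_col.jpg` naming scheme are my own invention):

```python
from PIL import Image

def make_tiles(src_path: str, out_dir: str, tile: int = 1024):
    """Cut a large image into tile x tile pieces named row_col.jpg,
    ready to be lazy-loaded by a grid of divs as the viewport moves."""
    img = Image.open(src_path)
    for top in range(0, img.height, tile):
        for left in range(0, img.width, tile):
            box = (left, top,
                   min(left + tile, img.width), min(top + tile, img.height))
            piece = img.crop(box).convert('RGB')   # JPEG needs RGB
            piece.save(f'{out_dir}/{top // tile}_{left // tile}.jpg', quality=85)
```

On the client, each grid cell then maps back to a tile URL by simple integer division of its pixel position by the tile size.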
I think that you should convert it to another file format, for example BMP, so you can easily navigate in the file to the needed pixels, compose a new image with those pixels, and either send it or save it and reference it. The big problem would be the new image's size.
I have bitmap images around 14000x18000 pixels (~30 MB) in width and height. I am trying to process them with different image processing libraries (OpenCV via the OpenCvSharp wrapper, AForge.NET, ...) in order to do blob detection. However, labeling the bitmap causes memory allocation problems, because the libraries try to map the labeled image to a 32-bit image.
Is there a way to do the labeling operation with less memory? (Cropping the image is not a solution.)
For example, labeling the bitmap into an 8-bit image instead of 32-bit?
In case there isn't an answer for the 8-bit thing... and even if there is...
For speed and memory purposes, I would highly recommend resizing the image down (not cropping). Use high-quality interpolation like this sample does, only resize to 50%, not to a thumbnail (7.5 MB in memory).
You didn't mention that you don't want to do this, and I am assuming you probably don't want to try it, thinking the library will do better blob detection at full resolution. Before you pooh-pooh the idea you need to test it with a full-resolution subsection of a sample image, of a size that the library will handle, compared to the same subsection at 50%.
Unless you've actually done this, you can't know. You can also figure a maximum amount of memory that the picture can use, compute a resize factor to target that number (reduce it for safety - you'll figure this out when things blow up in testing). If you care where the stuff is in the original image, scale it back up by the factor.
This may not solve your particular problem (or it might), but have you considered splitting / segmenting the frame into a 2x2 (or 3x3) matrix and trying to work on each piece separately? Then, based on where you find the blobs in the 4 (or 9) frames, correlate and coalesce the adjoining blobs into a single blob. Of course, this high-level blob coalescing would have to be your own logic.
PS> Admittedly, this is based on highly superficial knowledge of AForge. No hands-on experience whatsoever.
I'm working on Steganography application. I need to hide a message inside an image file and secure it with a password, with not much difference in the file size. I am using Least Significant Bit algorithm and could do it successfully with BMP files but it does not work with JPEG, PNG or TIFF files. Does this algorithm work with these files at all? Is there a better way to achieve this? Thanks.
This heavily depends on the way the particular image format works. You'll need to dive into the internals of the format you want to use.
For JPEG, you could fiddle with the last bits of the DCT coefficients for each block.
For palette-based files (GIFs, and some PNGs), you could add extra colours to the palette that look identical to the existing ones, and encode information based on which one you use.
You'll have to distinguish between pixel-based (Bitmap) and palette-based formats (GIF) for which the steganographic technique is quite different. Also be aware that there are image formats like JPG that lose information in the compression process.
I'd also advise reading a general introduction to steganography that covers the different formats.
The Least Significant Bit approach does not work with JPEG and GIF images because you are using the pixel data (raw image) to store hidden information before compression. A pixel p with value 0x123456 will probably not have that value after compression, because its value depends on the compression rate and neighbouring pixels. Here we are talking about algorithms that do not merely compact the image (like ZIP, which keeps the content) but change the color distribution, texture, and quality in order to decrease the number of bits needed to represent it.
However, PNG can be used just to compact the image in the same sense as a ZIP file, keeping the content. Therefore, you can use the Least Significant Bit approach with PNG images; the Wikipedia Steganography page shows an example in this format.
As long as the image format is lossless, you can use the LSB steganography in pixels (BMP, PNG, TIFF, PPM). If it is lossy, you have to try something else, as compression and subsequent decompression cause small changes in the pixels and the message is gone. In GIF, you can embed your message into the palette. In JPEG you change the DCT coefficients, a low-level frequency representation of the image, which can be read from and saved as JPEG file losslessly.
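For the lossless case, the pixel-domain LSB technique is only a few lines. Here is a Python sketch with Pillow (the helper names are mine); it hides a message in the low bit of the red channel and only survives if the result is saved to a lossless format such as PNG or BMP, never JPEG.

```python
from PIL import Image

def embed(img: Image.Image, message: bytes) -> Image.Image:
    """Hide message in the LSB of the red channel of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    out = img.convert('RGB')
    if len(bits) > out.width * out.height:
        raise ValueError('message too long for this image')
    px = out.load()
    for idx, bit in enumerate(bits):
        x, y = idx % out.width, idx // out.width
        r, g, b = px[x, y]
        px[x, y] = ((r & ~1) | bit, g, b)   # overwrite the low bit only
    return out

def extract(img: Image.Image, length: int) -> bytes:
    """Read length bytes back out of the red-channel LSBs."""
    px = img.convert('RGB').load()
    data = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            idx = i * 8 + j
            x, y = idx % img.width, idx // img.width
            byte |= (px[x, y][0] & 1) << j
        data.append(byte)
    return bytes(data)
```

A real application would also encrypt the message with the password before embedding and store the message length in the first few hidden bytes; this sketch passes the length out of band to stay short.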
There is an extensive research on steganography in JPEG. For introduction, I personally recommend Steganography in Digital Media: Principles, Algorithms, and Applications by Jessica Fridrich - must-read material for serious attempts in steganography. The approaches for various image formats are discussed in-depth there.
Also, LSB is inefficient and very easily detectable; you should not use it. There are better algorithms, though they are usually math-heavy and complex. Look for "steganography embedding distortion" and "steganography codes".