We have a lot of images with a similar background (think of a street camera that takes one photo per minute). We need to optimize their storage, i.e. convert them into something that takes up less space than a batch of JPEG images.
Our idea was to convert the images to video (for example MPEG), or use some video codec, on the assumption that it would "find the common background" and greatly reduce the stored size without loss of quality.
Are there any ready-made solutions for C# to achieve this goal?
Googling has so far turned up nothing.
Coming back to this question: in the end I decided to use ffmpeg. The most suitable codec turned out to be VP9 (pretty good at 3x compression and still acceptable at 5x).
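For anyone wanting to script the same thing from C#, a minimal sketch of driving ffmpeg with the VP9 encoder might look like the following; the frame rate, CRF value and file names are illustrative assumptions, not the exact settings used above.

using System.Diagnostics;

// Encode a numbered JPEG sequence into a VP9 WebM in constant-quality mode.
// "img%06d.jpg" and "archive.webm" are placeholder names.
var psi = new ProcessStartInfo
{
    FileName = "ffmpeg",
    Arguments = "-framerate 1 -i img%06d.jpg -c:v libvpx-vp9 -crf 30 -b:v 0 archive.webm",
    UseShellExecute = false
};
using (var ffmpeg = Process.Start(psi))
{
    ffmpeg.WaitForExit();
}

// Individual frames can later be pulled back out with something like:
// ffmpeg -i archive.webm frame%06d.jpg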
I'm currently trying to figure out, out of interest, how JPEGs are put together in depth. I found documents on the different sections (SOI, SOF, SOS, EOI, etc.), which are pretty straightforward, but not on how to get a single pixel out of there.
My first thought was to make a small image, 2x2 for example, but with all the headers and sections it's still too big to isolate the pixel information without knowing the exact location and method to extract it. I'm sure it's compressed, but is there a way to get it out manually (as RGB)?
Does anyone have a clue how to do this?
Getting the value of a single pixel of a JPEG image requires parsing some (if not most) of those sections anyway.
There's a good step-by-step guide available at https://www.imperialviolet.org/binary/jpeg/ (though the code is in Haskell, so it might be moderately inscrutable to mere mortals) that explains the concepts behind turning a JPEG into a bunch of RGB values.
This is the only source I know that explains JPEG end-to-end:
https://www.amazon.com/gp/product/B01JXRY4R0/ref=dbs_a_def_rwt_bibl_vppi_i4
Parsing the structure of a JPEG stream is easy. Decoding a JPEG scan is very difficult and involves several compression steps. Plus there are two different types of scan that are commonly in use (progressive & sequential).
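Just to give a feel for the very last step of that chain (after entropy decoding, dequantization, the inverse DCT and chroma upsampling), the standard JFIF YCbCr-to-RGB conversion for a single pixel looks roughly like this; a C# sketch, not taken from any particular decoder.

using System;

// Convert one decoded JFIF YCbCr sample (each component 0..255) to RGB.
static (byte R, byte G, byte B) YCbCrToRgb(double y, double cb, double cr)
{
    double r = y + 1.402 * (cr - 128.0);
    double g = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0);
    double b = y + 1.772 * (cb - 128.0);

    byte Clamp(double v) => (byte)Math.Max(0.0, Math.Min(255.0, Math.Round(v)));
    return (Clamp(r), Clamp(g), Clamp(b));
}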
My customer has about 100,000 scanned documents (JPG) which they work with every day. I want to know how I can reduce the file size of those images for faster file transfer and browsing.
The documents are scanned in black/white and saved in JPG format. They have a resolution of 150 dpi and a size of 1275x1753 (width x height). The main problem is their file size, which is between ~150 KB and ~500 KB; I think that is too high for a black/white picture.
Is there a chance that I can reduce their size by changing the resolution, changing some color mode or something? I tried playing around with Photoshop, but no luck.
The scanned documents are just for the sole purpose of reviewing, so I don't think they need much detail or the original picture size.
I'm going to write the program in C#, so please tell me if there is a good image library for this purpose.
If your images are JPEG-compressed, then they are either grayscale (8 bits per pixel) or full color (24 or 32 bits per pixel). I am not aware of any other JPEG types out there.
Given that, you probably won't get much benefit if you try to convert these images to other formats without changes to their size (number of pixels in both directions) and/or color space.
There is a possibility that JPEG 2000 might compress your images better than JPEG, but another lossy compression will introduce some more artifacts. You might try for yourself and see if this approach is acceptable for you. I can't recommend you any tools for this approach, though.
I would recommend you to try and convert your images to bilevel ones (i.e. with only two colors) and compress them with one of the FAX compression schemes (Group 3 or Group 4). You might try to reduce images sizes at the same time, too. This can be easily achieved using Docotic.Pdf library (Disclaimer: I work for the vendor of the library).
Please take a look at my answer to a question similar to yours. The answer shows how to use RecompressWithGroup4Fax and/or Scale methods to recompress existing images in PDF.
There is also valuable advice from @plinth about JBIG2 compression and other stuff. Well worth reading.
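If you'd like to experiment before taking a dependency on a PDF library, a minimal sketch with plain System.Drawing (not the Docotic.Pdf API mentioned above) that thresholds a scan to 1 bit per pixel and saves it as a Group 4 (CCITT4) TIFF could look like this; the file names are placeholders.

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

using (var source = new Bitmap("scan.jpg"))
{
    // Clone into a 1bpp bitmap; GDI+ applies a simple threshold.
    using (var bilevel = source.Clone(
        new Rectangle(0, 0, source.Width, source.Height),
        PixelFormat.Format1bppIndexed))
    {
        var tiffCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        var parameters = new EncoderParameters(1);
        parameters.Param[0] = new EncoderParameter(
            Encoder.Compression, (long)EncoderValue.CompressionCCITT4);
        bilevel.Save("scan.tif", tiffCodec, parameters);
    }
}

For typical text scans the resulting Group 4 TIFF is usually a small fraction of the JPEG size, at the cost of losing the grayscale shading.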
I am writing a program in C# that involves image compression. I need to be able to take any picture and compress it so that it is less than 10 KB. Quality is not a huge concern, but I would like to keep as much as possible. I have searched for and tried numerous solutions, but none of the ones I have found can compress to a specific target size; even with iteration, they hit maximum compression while the file is still well above 10 KB. Also, it would be great if it was open source or free!
Thanks in advance!
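Not a library recommendation, but one rough way to do this with plain System.Drawing is to keep lowering the JPEG quality (and, if needed, the dimensions) until the encoded result fits; the method name and the scale/quality step constants below are arbitrary choices for illustration.

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

// Re-encode 'source' as a JPEG no larger than 'maxBytes' (e.g. 10 * 1024).
static byte[] CompressUnder(Image source, int maxBytes)
{
    var jpegCodec = ImageCodecInfo.GetImageEncoders()
        .First(c => c.MimeType == "image/jpeg");

    for (double scale = 1.0; scale > 0.05; scale *= 0.7)
    {
        int w = Math.Max(1, (int)(source.Width * scale));
        int h = Math.Max(1, (int)(source.Height * scale));
        using (var resized = new Bitmap(source, w, h))
        {
            for (long quality = 90; quality >= 10; quality -= 10)
            {
                using (var ms = new MemoryStream())
                {
                    var p = new EncoderParameters(1);
                    p.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                    resized.Save(ms, jpegCodec, p);
                    if (ms.Length <= maxBytes)
                        return ms.ToArray();
                }
            }
        }
    }
    throw new InvalidOperationException("Could not reach the target size.");
}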
We have around 600,000 images that were converted from JPEG to TIFF files and uploaded to our FileNet repository. These TIFF images are multi-page, made by stitching multiple JPEGs together.
This was done a couple of years ago. Now we have started getting complaints from users that the quality of the TIFF images is not the same as it was when they were JPEGs.
Is there any way we can improve the quality of the TIFF files? If I have to re-migrate this data, can JPEGs have multiple pages? Please advise.
You can't just add quality to an image, so you can either try improving the appearance of the current information or you'll need to re-create the images to get better information.
To me, it sounds like the initial creation process is the most likely cause of the quality issue. How you create the image is important.
For example, I had a large number of photos I needed to re-size, so I used irfanview's batch convert and the results were horrible. Perhaps I had the settings wrong, I don't know.
I then tried using ImageMagick, and the results were great.
The point being, the conversion process isn't trivial.
If I were you, I'd look at how the images were created, experiment with different settings to determine what gives the best appearance, then re-create your photo gallery.
For photographic material, there's no real reason to use anything other than a jpeg if the target market is the general consumer.
Both TIFF and JPEG support lossless and lossy storage of your images. You mentioned that there was a previous conversion. The conversion was probably lossy, so you probably won't be able to recover that data to the way it was previously.
That said, if you have the original source images, you might be able to get back to where you were. Regarding multi-image JPEGs, there is such a format (*.mpo), but I haven't seen it used before, so your mileage may vary.
You probably converted grayscale or color JPEG to TIFF. The most common is TIFF G4, which is only 1 bit per pixel, so 24 or 8 bits were converted down to 1 bit and you will see a lot of image degradation. There are multiple methods to improve image quality, but I would have to see the images first to suggest a method.
I have a winforms image list which contains say like 200 images 256x256.
I use the method Image.FromFile to load the images and then add them to the image list.
According to the ANTS .NET profiler, half of the program's time is spent in Image.FromFile. Is there a better way to load an image to add to an image list?
Another thing that might be optimized: the images that are loaded are larger than 256x256. So is there a way to load them by resizing them first or something? I just want to uniformly scale them down if their height is larger than 256 pixels.
Any idea to optimize this?
EDIT: They are JPEGs.
You don't say how much bigger than 256x256 the images actually are - modern digital camera images are much bigger than this.
Disk I/O can be very slow, and I would suggest you first get a rough idea how many megabytes of data you're actually reading.
Then you can decide if there's a subtle 'Image.FromFile' problem or a simple 'this is how slow my computer/drives/anti-virus scanner/network actually is' problem.
A simple test of the basic file I/O performance would be to do File.ReadAllBytes() for each image instead of Image.FromFile(). That will tell you what proportion of the time was spent on the disk and what on the image handling. I suspect you'll find it's largely disk, at which point your only chance to speed it up might be one of the techniques for getting JFIF thumbnails out of the files. Or perhaps one could imagine clever stuff with partial reads of progressive JPEGs, though I don't know if anyone does that, nor whether your files are progressive (they're probably not).
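A quick-and-dirty version of that test might look like the following; the folder path is a placeholder, and note that the first pass will warm the OS file cache, which flatters the second measurement.

using System;
using System.Diagnostics;
using System.Drawing;
using System.IO;

string folder = @"C:\images";   // placeholder path

var sw = Stopwatch.StartNew();
foreach (var path in Directory.GetFiles(folder, "*.jpg"))
    File.ReadAllBytes(path);                    // raw disk I/O only
var ioTime = sw.Elapsed;

sw.Restart();
foreach (var path in Directory.GetFiles(folder, "*.jpg"))
    using (var img = Image.FromFile(path)) { }  // disk I/O + JPEG decode
var decodeTime = sw.Elapsed;

Console.WriteLine($"I/O only: {ioTime}, I/O + decode: {decodeTime}");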
I don't really know how fast you need these to load, but if the problem is that an interactive application is hanging while you load the files, then think of ways to make that better for the user - perhaps use a BackgroundWorker to load them asynchronously, perhaps sort the images by ascending file-size and load the small ones first for a better subjective performance.
If you are trying to make thumbnails, then try this code here; it will let you extract thumbnails without completely loading the image.
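The linked code isn't reproduced here, but one common trick along these lines is to open the file without validating the image data and ask GDI+ for the thumbnail; a sketch, with the path and size as placeholders (note that GetThumbnailImage does not preserve the aspect ratio for you).

using System;
using System.Drawing;
using System.IO;

static Image LoadThumbnail(string path, int maxSize)
{
    using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
    // useEmbeddedColorManagement: false, validateImageData: false
    // keeps GDI+ from decoding the full image up front.
    using (var img = Image.FromStream(fs, false, false))
    {
        return img.GetThumbnailImage(maxSize, maxSize, null, IntPtr.Zero);
    }
}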
You can use FreeImage.NET, which is great for loading images in the background.
Image.FromFile contains a hidden mutex which locks your app if you try to load large images, even on a background thread.
As for the JPEGs, the FreeImage library uses the OpenJPEG library, which can load JPEG images at a smaller scale more quickly. It can also utilize embedded thumbnails.
The WPF classes also allow loading images at a smaller resolution, but this cannot be used if you are restricted to WinForms.
If you want speed then prescale your images; don't do it at runtime.
You didn't mention what type of images you are loading (JPEG, PNG, GIF, BMP). Of course BMP is the fastest one, since it has no (or almost no) compression.
Are your images 256 colors (8-bit with a palette)? BMP, GIF and PNG support that format, and it can load pretty fast.
It sounds like you need some sort of image thumbnails? Don't forget that JPEG images already contain thumbnails inside, so you can extract only this small image and you do not need to scale. Such thumbnails, however, are smaller than 256x256.
Another option is to move the loading logic into a separate thread. It will not actually be faster, but from the user's perspective it can look like a significant speedup.
I have a winforms image list
I would avoid using ImageList in this scenario. Internally, it is an old Windows common control intended for working with GUI icons, usually 32x32 pixels or less.
Another thing that might be optimized is, the images that are loaded are larger than 256x256.
Be aware that GDI+ is not optimized for working with very large bitmaps of the kind that you would normally edit using photo software. Photo editing software packages generally have sophisticated algorithms that divide the image into smaller parts, swap parts to and from disk as needed to efficiently utilize memory, and so on.
Resizing an image is CPU intensive, especially when using a good quality interpolation algorithm. Take a cue from how Windows Explorer does this -- it saves the thumbnails to a file on disk for future access, and it does the processing in the background to avoid monopolizing the system.
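To make the Explorer comparison concrete, here's a rough sketch of that caching idea (the "thumbs" folder, PNG cache format and sizing rule are all assumptions); in a real application you would run this from a background thread, as suggested elsewhere on this page.

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

// Return a thumbnail for 'sourcePath', creating and caching it on first use.
static Image GetCachedThumbnail(string sourcePath, int size)
{
    string cachePath = Path.Combine("thumbs",
        Path.GetFileNameWithoutExtension(sourcePath) + ".png");
    if (File.Exists(cachePath))
        return Image.FromFile(cachePath);

    using (var source = Image.FromFile(sourcePath))
    {
        // Uniform scale so the longer edge fits 'size'.
        int w, h;
        if (source.Width >= source.Height)
        {
            w = size;
            h = Math.Max(1, source.Height * size / source.Width);
        }
        else
        {
            h = size;
            w = Math.Max(1, source.Width * size / source.Height);
        }

        var thumb = new Bitmap(w, h);
        using (var g = Graphics.FromImage(thumb))
        {
            g.InterpolationMode = InterpolationMode.HighQualityBicubic;
            g.DrawImage(source, 0, 0, w, h);
        }

        Directory.CreateDirectory("thumbs");
        thumb.Save(cachePath, ImageFormat.Png);
        return thumb;
    }
}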
This might be too late, but using the ImageLocation property will do two things:
speed up image loading
work around the bug where the image file is locked when you set the PictureBox Image property from a file
pictureBox1.ImageLocation = "image.jpg";
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;