I'm having difficulties when trying to save huge images with C# (I'm talking about over one gigabyte).
Basically I'm trying to do this in parts - I have around 200 bitmap sources and I need a way to combine them before or after encoding them to a .png file.
I know this is going to require lots of RAM unless I somehow stream the data directly from the hard drive, but I have no idea how to do that either.
Each bitmap source is 895x895 pixels, so combining the images after encoding doesn't seem easy, and C# doesn't let you create a bitmap of 13425 x 13425 pixels.
This PngCs library (disclaimer: I'm the author) lets you read and write huge PNG images line by line, so you don't need to keep the full image in memory; perhaps you'll find it useful.
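A rough sketch of the row-by-row idea (treat the exact type and method names as approximate, and GetTilePixel is a made-up helper standing in for your own code that reads from whichever 895x895 tile covers the current pixel):

using Hjg.Pngcs;

// Write a 13425 x 13425 RGB PNG one scanline at a time, so only a single
// row of pixels ever needs to be in memory.
var info = new ImageInfo(13425, 13425, 8, false);             // 8 bits per channel, no alpha
PngWriter writer = FileHelper.CreatePngWriter("huge.png", info, true);
var line = new ImageLine(info);

for (int row = 0; row < info.Rows; row++)
{
    for (int col = 0; col < info.Cols; col++)
    {
        // GetTilePixel is hypothetical: it returns the R, G, B of pixel (col, row)
        // taken from the appropriate 895x895 bitmap source.
        GetTilePixel(col, row, out int r, out int g, out int b);
        line.Scanline[col * 3 + 0] = r;
        line.Scanline[col * 3 + 1] = g;
        line.Scanline[col * 3 + 2] = b;
    }
    writer.WriteRow(line, row);
}
writer.End();

Because the tiles are 895 pixels tall, at any moment you only need the one row of 15 source bitmaps that covers the scanlines currently being written.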
I'm currently trying to figure out, out of interest, how JPEGs work in depth. I found documents on the different sections (SOI, SOF, SOS, EOI, etc.) which are pretty straightforward, but not on how to get a single pixel out of them.
My first thought was to make a small image, 2x2 for example, but with all the headers and sections it's still too big to isolate the pixel information without knowing the exact location and method to extract it. I'm sure it's compressed, but is there a way to get it out manually (as RGB)?
Does anyone have a clue how to do this?
Getting the value of a single pixel of a JPEG image requires parsing some (if not most) of those sections anyway.
There's a good step-by-step guide available at https://www.imperialviolet.org/binary/jpeg/ (though the code is in Haskell, so it might be moderately inscrutable to mere mortals) that explains the concepts behind turning a JPEG into a bunch of RGB values.
This is the only source I know that explains JPEG end-to-end:
https://www.amazon.com/gp/product/B01JXRY4R0/ref=dbs_a_def_rwt_bibl_vppi_i4
Parsing the structure of a JPEG stream is easy. Decoding a JPEG scan is very difficult and involves several compression steps. Plus there are two different types of scan that are commonly in use (progressive & sequential).
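To make the "parsing the structure is easy" part concrete, here is a small sketch (not taken from the book) that walks the marker segments of a typical JPEG and reads the image dimensions from the SOF header, stopping at SOS where the hard part, the entropy-coded scan, begins:

using System;
using System.IO;

static class JpegPeek
{
    public static void DumpStructure(string path)
    {
        byte[] data = File.ReadAllBytes(path);
        int pos = 2;                                   // skip SOI (FF D8)
        while (pos + 4 <= data.Length)
        {
            if (data[pos] != 0xFF) break;              // not a marker: give up (handles only the common case)
            byte marker = data[pos + 1];
            if (marker == 0xD9) { Console.WriteLine("EOI"); break; }

            int length = (data[pos + 2] << 8) | data[pos + 3];   // big-endian, includes these 2 bytes
            if (marker == 0xC0 || marker == 0xC2)      // SOF0 (baseline) / SOF2 (progressive)
            {
                int height = (data[pos + 5] << 8) | data[pos + 6];
                int width  = (data[pos + 7] << 8) | data[pos + 8];
                Console.WriteLine($"SOF: {width} x {height}");
            }
            if (marker == 0xDA)                        // SOS: entropy-coded scan data follows
            {
                Console.WriteLine("SOS: compressed scan data starts here");
                break;                                 // decoding from here on is the hard part
            }
            pos += 2 + length;
        }
    }
}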
I have written a program that has a feature for viewing images. It works fine for common file types; I simply use a PictureBox. But I want to be able to load DNG (Digital Negative) images too.
I don't need to load the whole thing; loading just the baked-in JPEG preview would be fine. Even if not all DNGs have it, I am willing to settle for this to avoid a huge hassle. Plus, since DNGs are so big and my program is meant to browse through them quickly, this might be best anyway.
I have tried using WindowsAPICodePack.Shell to get the Extra Large Bitmap or Icon from the file. I am already using this for my "explorer" thumbnails. But there is nothing larger than "Large", which I believe is around 256px. I know that the file has a JPEG preview of around 1024px.
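For reference, the WindowsAPICodePack attempt described above looks roughly like this (a sketch with a made-up path):

using System.Drawing;
using Microsoft.WindowsAPICodePack.Shell;

// Ask the Windows shell for the largest thumbnail it will produce for the DNG.
ShellFile shellFile = ShellFile.FromFilePath(@"C:\photos\sample.dng");
Bitmap thumb = shellFile.Thumbnail.ExtraLargeBitmap;   // still far smaller than the ~1024px embedded JPEG preview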
Is there an easy way to get at the JPEG preview of a DNG file?
We have around 600,000 images that were converted from JPEG to TIFF files and uploaded to our FileNet repository. These TIFF images are multi-page, made by stitching multiple JPEGs together.
This was done a couple of years ago. Now we have started getting complaints from users that the quality of the TIFF images is not the same as it was when they were JPEGs.
Is there any way we can improve the quality of the TIFF files? If I have to re-migrate this data, can JPEGs have multiple pages? Please advise.
You can't just add quality to an image, so you can either try improving the appearance of the current information or you'll need to re-create the images to get better information.
To me, it sounds like the initial creation process is the most likely cause of the quality issue. How you create the image is important.
For example, I had a large number of photos I needed to resize, so I used IrfanView's batch convert, and the results were horrible. Perhaps I had the settings wrong; I don't know.
I then tried using ImageMagick, and the results were great.
The point being, the conversion process isn't trivial.
If I were you, I'd look at how the images were created, experiment with different settings to determine what gives the best appearance, then re-create your photo gallery.
For photographic material, there's no real reason to use anything other than a jpeg if the target market is the general consumer.
Both TIFF and JPEG support lossless and lossy storage of your images. You mentioned that there was a previous conversion. That conversion was probably lossy, so you probably won't be able to recover the data to the way it was previously.
That said, if you have the original source images, you might be able to get back to where you were. Regarding multi-image JPEGs, there is such a format (*.mpo), but I haven't seen it used before, so your mileage may vary.
You probably converted grayscale or color JPEGs to TIFF. The most common is TIFF G4, which is only 1 bit per pixel. So 24 or 8 bits per pixel were converted to 1 bit, and you will see a lot of image loss. There are multiple methods to improve image quality, but I would have to see the images first to suggest a method.
I have a winforms image list which contains, say, around 200 images at 256x256.
I use the method Image.FromFile to load the images and then add them to the image list.
According to the ANTS .NET profiler, half of the program's time is spent in Image.FromFile. Is there a better way to load an image to add to an image list?
Another thing that might be optimized: the images that are loaded are larger than 256x256. So is there a way to load them resized, or something? I just want to uniformly scale them if their height is larger than 256 pixels.
Any idea to optimize this?
EDIT: They are JPEGs.
You don't say how much bigger than 256x256 the images actually are; modern digital camera images are much bigger than this.
Disk I/O can be very slow, and I would suggest you first get a rough idea how many megabytes of data you're actually reading.
Then you can decide if there's a subtle 'Image.FromFile' problem or a simple 'this is how slow my computer/drives/anti-virus scanner/network actually is' problem.
A simple test of the basic file I/O performance would be to do File.ReadAllBytes() for each image instead of Image.FromFile(). That will tell you what proportion of the time was spent on the disk and what on the image handling; I suspect you'll find it's largely disk, at which point your only chance to speed it up might be one of the techniques for getting JFIF thumbnails out of the files. Or perhaps one can imagine clever stuff with partial reads of progressive JPEGs, though I don't know if anyone does that, nor whether your files are progressive (they're probably not).
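A rough sketch of that comparison (the folder path is made up; note that the first pass also warms the OS file cache, so consider running the passes in both orders):

using System;
using System.Diagnostics;
using System.Drawing;
using System.IO;

string[] files = Directory.GetFiles(@"C:\photos", "*.jpg");

// Pass 1: raw disk I/O only.
var sw = Stopwatch.StartNew();
long totalBytes = 0;
foreach (string f in files)
    totalBytes += File.ReadAllBytes(f).LongLength;
Console.WriteLine($"ReadAllBytes: {totalBytes / (1024 * 1024)} MB in {sw.ElapsedMilliseconds} ms");

// Pass 2: disk I/O plus JPEG decoding.
sw.Restart();
foreach (string f in files)
    using (var img = Image.FromFile(f)) { }
Console.WriteLine($"Image.FromFile: {sw.ElapsedMilliseconds} ms");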
I don't really know how fast you need these to load, but if the problem is that an interactive application is hanging while you load the files, then think of ways to make that better for the user - perhaps use a BackgroundWorker to load them asynchronously, perhaps sort the images by ascending file-size and load the small ones first for a better subjective performance.
If you are trying to make thumbnails, then try this code here; it will let you extract thumbnails without completely loading the image.
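In case that link goes stale: the general idea, sketched with plain GDI+ (not necessarily what the linked code does), is to pass validateImageData: false so the full decode is deferred, then ask for a thumbnail, which comes from the embedded EXIF thumbnail when one exists:

using System;
using System.Drawing;
using System.IO;

static class ThumbnailLoader
{
    public static Image LoadThumbnail(string path, int size = 256)
    {
        using (var fs = File.OpenRead(path))
        using (var img = Image.FromStream(fs, useEmbeddedColorManagement: false, validateImageData: false))
        {
            // GetThumbnailImage returns the embedded JPEG thumbnail if the file has one,
            // otherwise it scales the full image down.
            return img.GetThumbnailImage(size, size, () => false, IntPtr.Zero);
        }
    }
}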
You can use FreeImage.NET, which is great for loading images on a background thread.
Image.FromFile contains a hidden mutex which locks your app if you try to load large images, even on a background thread.
As for JPEGs, the FreeImage library uses the OpenJPEG library, which can load JPEG images at a smaller scale more quickly. It can also utilize embedded thumbnails.
The WPF classes also allow loading images in smaller resolution, but this cannot be used if you are restricted to WinForms.
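For reference, the WPF route mentioned above looks roughly like this (it needs references to the WPF assemblies such as PresentationCore and WindowsBase):

using System;
using System.Windows.Media.Imaging;

// Decode the JPEG straight to ~256px wide instead of decoding the full image
// and scaling it afterwards.
var bmp = new BitmapImage();
bmp.BeginInit();
bmp.UriSource = new Uri(@"C:\photos\sample.jpg");    // made-up path
bmp.DecodePixelWidth = 256;
bmp.CacheOption = BitmapCacheOption.OnLoad;          // read the file now, then release it
bmp.EndInit();
bmp.Freeze();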
If you want speed, then prescale your images; don't do it at runtime.
You didn't mention what type of images you're loading (JPEG, PNG, GIF, BMP); of course BMP is the fastest one, since it has no (or almost no) compression.
Are your images 256 colors (8-bit with palette)? BMP, GIF and PNG support that format, which can load pretty fast.
It sounds like you need some sort of image thumbnails? Don't forget that JPEG images often already contain a thumbnail inside, so you can extract only this small image and you do not need to scale at all. Such thumbnails are, however, smaller than 256x256.
Another option is to move the loading logic into a separate thread. It will not be faster, but from the user's perspective it can look like a significant speedup.
I have a winforms image list
I would avoid using ImageList in this scenario. Internally, it is an old Windows common control intended for working with GUI icons, usually 32x32 pixels or less.
Another thing that might be optimized is, the images that are loaded are larger than 256x256.
Be aware that GDI+ is not optimized for working with very large bitmaps of the kind that you would normally edit using photo software. Photo editing software packages generally have sophisticated algorithms that divide the image into smaller parts, swap parts to and from disk as needed to efficiently utilize memory, and so on.
Resizing an image is CPU intensive, especially when using a good quality interpolation algorithm. Take a cue from how Windows Explorer does this -- it saves the thumbnails to a file on disk for future access, and it does the processing in the background to avoid monopolizing the system.
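A minimal sketch of that approach: scale with a high-quality interpolation mode and cache the result to disk so the expensive work is only done once (the cache naming scheme here is made up):

using System;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;
using System.IO;

static class ThumbnailCache
{
    public static Image GetCachedThumbnail(string sourcePath, string cacheDir, int max = 256)
    {
        string cachePath = Path.Combine(cacheDir,
            Path.GetFileNameWithoutExtension(sourcePath) + "_256.jpg");
        if (File.Exists(cachePath))
            return Image.FromFile(cachePath);        // already generated on an earlier run

        using (var src = Image.FromFile(sourcePath))
        {
            // Uniform scale so the longer side becomes 'max' pixels (never upscale).
            double scale = Math.Min(1.0, (double)max / Math.Max(src.Width, src.Height));
            int w = Math.Max(1, (int)(src.Width * scale));
            int h = Math.Max(1, (int)(src.Height * scale));

            using (var thumb = new Bitmap(w, h))
            {
                using (var g = Graphics.FromImage(thumb))
                {
                    g.InterpolationMode = InterpolationMode.HighQualityBicubic;   // slow but good looking
                    g.DrawImage(src, new Rectangle(0, 0, w, h));
                }
                thumb.Save(cachePath, ImageFormat.Jpeg);
            }
        }
        return Image.FromFile(cachePath);
    }
}

Run it from a BackgroundWorker or a Task rather than the UI thread, as suggested above, so the application stays responsive while the cache is being filled.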
This might be too late, but using the ImageLocation property will do two things:
speed up image loading
work around the bug (Image file is locked when you set the PictureBox Image property to a file)
pictureBox1.ImageLocation = "image.jpg";
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
I have raw pixel data in a byte[] from a DICOM image.
Now I would like to convert this byte[] to an Image object.
I tried:
Image img = Image.FromStream(new MemoryStream(byteArray));
but this is not working for me. What else should I be using?
One thing to be aware of is that a dicom "image" is not necessarily just image data. The dicom file format contains much more than raw image data. This may be where you're getting hung up. Consider checking out the dicom file standard which you should be able to find linked on the wikipedia article for dicom. This should help you figure out how to parse out the information you're actually interested in.
You have to do the following
Identify the PIXEL DATA tag from the file. You may use FileStream to read byte by byte.
Read the pixel data
Convert it to RGB
Create a Bitmap object from the RGB data
Use the Graphics class to draw the Bitmap on a panel (a sketch of the last three steps follows below).
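A minimal sketch of the last three steps, assuming the pixel data turned out to be uncompressed 8-bit grayscale and that the width and height were already read from the Rows/Columns elements (real DICOM pixel data is often 16-bit and may be compressed, in which case this is not enough):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class DicomPixels
{
    public static Bitmap GrayBytesToBitmap(byte[] pixels, int width, int height)
    {
        var bmp = new Bitmap(width, height, PixelFormat.Format24bppRgb);
        BitmapData data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                       ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        var row = new byte[data.Stride];
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                byte g = pixels[y * width + x];      // grayscale -> identical B, G and R
                row[x * 3 + 0] = g;
                row[x * 3 + 1] = g;
                row[x * 3 + 2] = g;
            }
            Marshal.Copy(row, 0, IntPtr.Add(data.Scan0, y * data.Stride), data.Stride);
        }
        bmp.UnlockBits(data);
        return bmp;
    }
}

Then, in the panel's Paint handler, e.Graphics.DrawImage(bitmap, 0, 0); puts it on screen.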
The pixel data usually (if not always) ends up at the end of the DICOM data. If you can figure out width, height, stride and color depth, it should be doable to skip to the (7FE0,0010) data element value and just grab the succeeding bytes. This is the trick that most normal image viewers use when they show DICOM images.
There is a C# library called EvilDicom (http://rexcardan.com/evildicom/) that can be used to pull the image out of a DICOM file. It has a tutorial on how to do it on the website.
You should use GDCM.
Grassroots DICOM is a C++ library for DICOM medical files. It is automatically wrapped to Python/C#/Java (using SWIG). It supports RAW, JPEG 8/12/16-bit (lossy/lossless), JPEG 2000, JPEG-LS, RLE, and deflated (zlib).
It is portable and is known to run on most systems (Win32, Linux, Mac OS X).
http://gdcm.sourceforge.net/wiki/index.php/GDCM_Release_2.4
See for example:
http://gdcm.sourceforge.net/html/DecompressImage_8cs-example.html
Are you working with a pure standard DICOM file? I've been maintaining a DICOM parser for over two years, and I came across some really strange DICOM files that didn't completely fulfill the standard (companies implementing their "own" twisted version of standard DICOM files). Flush your byte array into a file and test whether your image viewer (IrfanView, Picasa or whatever) can show it. If your code works with a normal JPEG stream, then from my experience there is a 99.9999% chance that this is simply because the file violates the standard in some strange way (and believe me, medical companies do that a lot).
Also note that the DICOM standard supports several variants of the JPEG standard. It could be that the Bitmap class doesn't support the data you get from the DICOM file. Can you please write down the transfer syntax?
You are welcome to send me the file (if it's not big) at yossi1981#gmail.com; I can check it out. There was a time when I hex-edited DICOM files for half a year.
DICOM is a ridiculous specification and I sincerely hope it gets overhauled in the near future. That said, Offis has a software suite, DCMTK, which is fairly good at converting DICOMs with the various popular encodings. Just trying to skip ahead in the file x bytes will probably be fine for a single file, but if you have a volume or several volumes, a more robust strategy is in order. I used DCMTK's conversion code and just grabbed the image bits before they went into a PNM. The tool you'll be looking for in DCMTK is dcm2pnm, or possibly dcmj2pnm depending on the encoding scheme.
I had a problem with the scale window that I fixed with one of the runtime flags. DCMTK is open source and comes with fairly simple build instructions.