Compressing a TIF file - C#

I'm trying to convert a multipage color TIFF file to a CCITT Group 3 (CompressionCCITT3) TIFF in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.

You need this conversion as CCITT3 and CCITT4 don't support color (if I remember right).

Pimping disclaimer: I work for Atalasoft, a company that makes .NET imaging software.
Using dotImage, this task becomes something like this:
FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true); // true = loop over all frames

// The TIFF encoder will auto-select an appropriate compression - CCITT4 for 1 bit.
TiffEncoder encoder = new TiffEncoder();
encoder.Append = true;

// DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand.
DynamicThresholdCommand threshold = new DynamicThresholdCommand();

using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create)) {
    while (source.HasMoreImages()) {
        AtalaImage image = source.AcquireNext();
        AtalaImage finalImage = image;
        // Convert only when needed.
        if (image.PixelFormat != PixelFormat.Pixel1bppIndexed) {
            finalImage = threshold.Apply(image).Image;
        }
        encoder.Save(outstm, finalImage, null);
        if (finalImage != image) {
            finalImage.Dispose();
        }
        source.Release(image);
    }
}
The Bob Powell example is good as far as it goes, but it has a number of problems, not the least of which is that it uses a simple threshold. That's terrific if you want speed and don't actually care what your output looks like, or if your input domain really is pretty much black and white already, just represented in color. Binarization is a tricky problem. When your task is to reduce the available information to 1/24th of what you started with, how to keep the right information and throw away the rest is a challenge. DotImage has six different tools (IIRC) for binarization. SimpleThreshold is the bottom of the barrel, from my point of view.
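For context, a simple global threshold amounts to roughly the following (a minimal sketch using System.Drawing with unsafe LockBits access, not dotImage; the 128-style cutoff and the luminance weights are arbitrary choices):

// Sketch only: convert a 24bpp bitmap to 1bpp with a single fixed threshold.
// Requires compiling with /unsafe.
static Bitmap SimpleThreshold(Bitmap source, byte cutoff)
{
    Bitmap result = new Bitmap(source.Width, source.Height, PixelFormat.Format1bppIndexed);
    BitmapData src = source.LockBits(new Rectangle(0, 0, source.Width, source.Height),
        ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    BitmapData dst = result.LockBits(new Rectangle(0, 0, result.Width, result.Height),
        ImageLockMode.WriteOnly, PixelFormat.Format1bppIndexed);
    try {
        unsafe {
            for (int y = 0; y < source.Height; y++) {
                byte* srcRow = (byte*)src.Scan0 + y * src.Stride;
                byte* dstRow = (byte*)dst.Scan0 + y * dst.Stride;
                for (int x = 0; x < source.Width; x++) {
                    // Approximate luminance from the B, G, R bytes of the 24bpp pixel.
                    int luma = (srcRow[x * 3 + 2] * 299 + srcRow[x * 3 + 1] * 587 + srcRow[x * 3] * 114) / 1000;
                    if (luma >= cutoff)
                        dstRow[x / 8] |= (byte)(0x80 >> (x % 8)); // set bit -> palette index 1 (white)
                }
            }
        }
    } finally {
        source.UnlockBits(src);
        result.UnlockBits(dst);
    }
    return result;
}

Every pixel gets the same cutoff, which is exactly why this falls apart on photos and unevenly lit scans; adaptive methods like dynamic thresholding pick the cutoff locally instead.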

I suggest experimenting to get the desired results first with TIFF and image utilities before diving into the coding. I found VIPS to be a handy tool. The next option is to look into what LibTIFF can do. I've had good results with the free LibTiff.NET from C# (see also Stack Overflow). I was very disappointed by the GDI+ TIFF functionality, although your mileage may vary (I needed 16-bit grayscale, which it lacks).
You can also use the LibTIFF command-line utilities (e.g. see http://www.libtiff.org/man/tiffcp.1.html)
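For example, tiffcp can recompress an existing TIFF; note that it only changes the compression, so the input already has to be 1-bit for G3/G4 to apply (check the man page above for the exact option names in your build):

tiffcp -c g4 bilevel-input.tif output-ccitt4.tif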

I saw the above code, and it looked like it was converting every pixel with manual logic.
Would this work for you?
Imports System.Drawing.Imaging
'get the color tif file
Dim bmpColorTIF As New Bitmap("C:\color.tif")
'select an area of the tif (will grab all frames)
Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height)
'clone the rectangle as 1-bit color tif
Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed)
'do what you want with the new bitmap (save, etc)
...
Note: there are a ton of PixelFormat values to choose from.
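Since the question is in C#, the equivalent there looks roughly like this (a sketch only: Clone applies a default threshold, so photographic content may come out poorly, and the EncoderParameter part explicitly requests CCITT4 compression, which GDI+ may not pick on its own):

using System.Drawing;
using System.Drawing.Imaging;
using System.Linq;

using (Bitmap bmpColorTif = new Bitmap(@"C:\color.tif"))
{
    Rectangle rect = new Rectangle(0, 0, bmpColorTif.Width, bmpColorTif.Height);
    using (Bitmap bmpBlackWhiteTif = bmpColorTif.Clone(rect, PixelFormat.Format1bppIndexed))
    {
        // Find the TIFF codec and force CCITT Group 4 compression on save.
        ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        EncoderParameters ep = new EncoderParameters(1);
        ep.Param[0] = new EncoderParameter(Encoder.Compression, (long)EncoderValue.CompressionCCITT4);
        bmpBlackWhiteTif.Save(@"C:\blackwhite.tif", tiffCodec, ep);
    }
}

Note that Clone works on the currently selected frame; for a multipage TIFF you would loop with Image.SelectActiveFrame and append pages via the Encoder.SaveFlag / EncoderValue.MultiFrame / SaveAdd mechanism.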

Related

Reduce the size of the PNG image

I want to compress a PNG image.
I am using the code below (it works fine for JPEG/JPG, but not for PNG):
var qualityParam = new EncoderParameter(Encoder.Quality, 80);
// PNG image codec
var pngCodec = GetEncoderInfo(ImageFormat.Png);
var encoderParams = new EncoderParameters(1) { Param = { [0] = qualityParam } };
rigImage.Save(imagePath, pngCodec, encoderParams);

private static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    // Get image codecs for all image formats
    var codecs = ImageCodecInfo.GetImageEncoders();
    // Find the correct image codec
    return codecs.FirstOrDefault(t => t.FormatID == format.Guid);
}
JPEG "works fine" because it can always discard more information.
PNG is a lossless image compression format, so getting better compression is trickier and the baseline is already pretty good. There are tools like PNGOut or OxiPNG which exist solely to optimise PNG images, but most strategies are very computationally expensive, so your average image processing library just doesn't bother:
you can discard irrelevant metadata chunks
you can enumerate and try out various filtering strategies; this amounts to compressing the image dozens of times with slightly different tuning and keeping the best result
you can switch out the DEFLATE implementation from the default (usually zlib or the system's standard) to something better but much more expensive like zopfli
finally (and this one absolutely requires human eyeballs) you can try switching to a palettised (indexed-colour) image
As noted above, most image libraries simply won't bother with any of that: they'll use a builtin zlib/deflate and some default filter. It can take minutes to push a PNG through an entire optimisation pipeline, and there's a chance the gain will be non-existent. An example of shelling out to one of those dedicated optimisers is sketched below.
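A minimal sketch of that approach, assuming an external optimiser such as oxipng is installed and on the PATH (the flag names are taken from its documentation, so check them against the version you install):

// Run an external PNG optimiser over the file that was just saved.
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = "oxipng",                        // assumption: any optimiser binary on the PATH
    Arguments = "-o 4 --strip safe \"" + imagePath + "\"",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var optimiser = System.Diagnostics.Process.Start(psi))
{
    optimiser.WaitForExit();
}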
As @Masklinn said, PNG is a lossless format, and I think the BCL in .NET does not have an API that can help you "optimize" the size of a PNG.
Instead, you could use the libimagequant library; you can get more information about this library here, and about how to use it from .NET here.
This is the same library used by PNGoo; I have gotten really impressive results with it when optimizing PNGs in my projects.
Also, if you are planning to use the library in a commercial project, keep in mind the license terms, as indicated at https://pngquant.org/lib/.

How to improve the performance of loading textures?

I am currently looking for performance optimizations for my game project. The bottleneck of my initialization process is loading the textures of my models. Loading and assigning a large texture takes up to 100ms. This is a problem, because I have a lot of them. I analyzed my code and found out, that most of the time (around 96%) is spent on the call of CopyPixels (see below).
I attached the function I use for importing all my textures below. This function works like a charm and is based on the official SharpDX-samples-master example code. First, I load the image bytes from my custom file format (which takes around 2 ms for large textures). Then, I create the format converter and use it to copy the pixels to the data stream. However, copying the pixels is very slow.
Is there any faster way to achieve the same?
public static Resources.Texture ImportTexture(Resources.AResourceManager resourceManager, string resourcePackFileName, int bytePosition, out string resourceItemName)
{
    // Load image bytes from file.
    FileReader fileReader = new FileReader();
    DataTypes.Content.ResourceItem resourceItem = fileReader.ImportTextureFromCollection(resourcePackFileName, bytePosition);
    resourceItemName = resourceItem.Name;

    // Create texture.
    Resources.Texture tex = null;
    using (SharpDX.WIC.BitmapDecoder bitmapDecoder = new SharpDX.WIC.BitmapDecoder(resourceManager.ImagingFactory, new MemoryStream(resourceItem.Data, false), SharpDX.WIC.DecodeOptions.CacheOnDemand))
    {
        using (SharpDX.WIC.FormatConverter formatConverter = new SharpDX.WIC.FormatConverter(resourceManager.ImagingFactory))
        {
            formatConverter.Initialize(bitmapDecoder.GetFrame(0), SharpDX.WIC.PixelFormat.Format32bppPRGBA, SharpDX.WIC.BitmapDitherType.None, null, 0.0, SharpDX.WIC.BitmapPaletteType.Custom);
            SharpDX.DataStream dataStream = new SharpDX.DataStream(formatConverter.Size.Height * formatConverter.Size.Width * 4, true, true);

            // This takes most of the time!
            formatConverter.CopyPixels(formatConverter.Size.Width * 4, dataStream);

            // Creating texture data structure.
            tex = new Resources.Texture(formatConverter.Size.Width, formatConverter.Size.Height, formatConverter.Size.Width * 4)
            {
                DataStream = dataStream
            };
        }
    }
    return tex;
}
Looks like you are using bitmaps. Have you considered using DDS files instead? It supports both compressed and uncompressed formats – Asesh
Asesh was right. At first I was sceptical, but I did some research and found an older article which states that an average PNG texture takes less space on disk than a comparable DDS texture. However, PNG textures need to be converted at run-time, which is slower than using DDS textures.
I spent last night looking for proper conversion tools, and after testing some options (like a plug-in for GIMP), I used the Compressonator from AMD to convert all my PNG textures to DDS textures. The new files take even less space on my hard drive than the PNG files (1.7 GB instead of 2.1 GB).
The texture loading method I presented in the initial post still worked and was slightly faster. However, I decided to code a DDS importer based on several code samples I found online.
The result: a large texture now takes only 1 ms instead of 103 ms to import. I think you can call that an improvement. :-D
Thank you very much, Asesh!
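For reference, a minimal sketch of reading the basic DDS header fields (this is not the importer described above; the offsets follow the published DDS_HEADER layout, so verify them against the specification before relying on this):

using System.IO;

// Read width, height and the FourCC ("DXT1", "DXT5", ...) from a DDS file header.
static void ReadDdsHeader(string path, out int width, out int height, out string fourCc)
{
    using (var reader = new BinaryReader(File.OpenRead(path)))
    {
        if (reader.ReadUInt32() != 0x20534444)            // magic "DDS "
            throw new InvalidDataException("Not a DDS file.");
        reader.ReadUInt32();                              // dwSize (124)
        reader.ReadUInt32();                              // dwFlags
        height = reader.ReadInt32();                      // dwHeight
        width = reader.ReadInt32();                       // dwWidth
        reader.BaseStream.Seek(84, SeekOrigin.Begin);     // ddspf.dwFourCC
        fourCc = new string(reader.ReadChars(4));
    }
}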

Fastest way to detect A) color and B) size of image from byte[]

Currently, we determine the size, and whether or not an image contains color, by converting it to a Bitmap, then checking the height/width, and checking whether the PixelFormat is System.Drawing.Imaging.PixelFormat.Format1bppIndexed.
What I've noticed, though, stepping through the code, is that it can take 3-5 seconds just to initialize this Bitmap (at least for a very high-resolution TIF image):
ms = new MemoryStream(fileBytes);
bitmap = new System.Drawing.Bitmap(ms);
Is there a faster way to check these two things, straight from the byte array, so I can avoid the slowness of the Bitmap class or is this just what to expect with large TIF images?
This wasn't quite the answer I was hoping for, so I'll leave it open still, but I do want to at least mention one possible "answer". My original problem was that loading up the Bitmap was slow.
I stumbled upon this MSDN article, which explains how Image.FromStream() has an overload that allows you to tell it not to validate the image data. By default, that is set to true. By using this new overload, and setting validateImageData to false - this speeds things up tremendously.
So for example:
using (FileStream fs = new FileStream(this.fileInfo.FullName, FileMode.Open, FileAccess.ReadWrite))
{
    using (Image photo = Image.FromStream(fs, true, false))
    {
        // do stuff
    }
}
The author of the article found that his code ran 93x faster (!).
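For the byte[] in the question, the same overload works over a MemoryStream (a sketch; validateImageData is false here, and the stream has to stay open for the lifetime of the Image):

using (var ms = new MemoryStream(fileBytes))
using (var image = Image.FromStream(ms, false, false)) // useEmbeddedColorManagement: false, validateImageData: false
{
    int width = image.Width;
    int height = image.Height;
    bool isBilevel = image.PixelFormat == System.Drawing.Imaging.PixelFormat.Format1bppIndexed;
}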

Reading raw image files in C#

How do I decode/open raw image files like .CR2, .NEF, and .ARW without having the codec installed, the way Lightroom opens raw files? My code looks like this:
if (fe == "CR2" || fe == "NEF" || fe == "ARW")
{
    BitmapDecoder bmpDec = BitmapDecoder.Create(new Uri(op.FileName), BitmapCreateOptions.DelayCreation, BitmapCacheOption.None);
    BitmapSource bsource = bmpDec.Frames[0];
    info_box.Content = fe;
    imgControl.Source = bsource;
}
This works only with the raw codecs installed and doesn't work with the ARW format.
If you don't have a codec installed, then you'll have to read the raw image data and convert it to a bitmap or other format that you can read. In order to do that, you need a copy of the format specification so that you can write code that reads the binary data.
I strongly recommend getting a codec, or finding code that somebody has written that already handles the conversion. But if you really want to try your hand at writing image format conversion code, your first order of business is to get the format specification.
A quick Google search on [CR2 image format] reveals this Canon CR2 Specification. Truthfully, I don't know how accurate that is, but it looks reasonable. A little time with a search engine will probably reveal similar documents for the other formats.
Be forewarned: writing these conversions can be a very difficult task. Again, I recommend that you find some existing code that you can leverage.
If you insist on not installing a codec, then your best bet might be these:
http://www.cybercom.net/~dcoffin/dcraw/ - written in C, supports most cameras
http://sourceforge.net/projects/dcrawnet/ - apparently a (partial?) port of dcraw to C#, but the project does not seem to be active
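If you go the dcraw route, one option is to shell out to the dcraw executable and then load the TIFF it writes; the flags below (-T for TIFF output, -w for camera white balance) and the output naming are taken from dcraw's documentation, so verify them against the build you install:

// Sketch: let an external dcraw convert the raw file, then decode the resulting TIFF as before.
var psi = new System.Diagnostics.ProcessStartInfo
{
    FileName = "dcraw",                          // assumption: dcraw is on the PATH
    Arguments = "-T -w \"" + op.FileName + "\"",
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var dcraw = System.Diagnostics.Process.Start(psi))
{
    dcraw.WaitForExit();
}

// dcraw writes its output next to the input file with a .tiff extension.
string tiffPath = System.IO.Path.ChangeExtension(op.FileName, ".tiff");
BitmapDecoder bmpDec = BitmapDecoder.Create(new Uri(tiffPath), BitmapCreateOptions.DelayCreation, BitmapCacheOption.None);
imgControl.Source = bmpDec.Frames[0];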

Is using the .NET Image Conversion enough?

I've seen a lot of people try to code their own image conversion techniques. It often seems to be very complicated, and ends up using GDI+ function calls, and manipulating bits of the image. This has got me wondering if I am missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have:
Bitmap tempBmp = new Bitmap(@"c:\temp\img.jpg");
Bitmap bmp = new Bitmap(tempBmp, 800, 600);
bmp.Save(@"c:\temp\img.bmp", //extension depends on format
ImageFormat.Bmp) //These are all the ImageFormats I allow conversion to within the program. Ignore the syntax for a second ;)
ImageFormat.Gif) //or
ImageFormat.Jpeg) //or
ImageFormat.Png) //or
ImageFormat.Tiff) //or
ImageFormat.Wmf) //or
ImageFormat.Bmp)//or
);
This is all I'm doing in my image conversion. Just setting the location of where the image should be saved, and passing it an ImageFormat type. I've tested it the best I can, but I'm wondering if I am missing anything in this simple format conversion, or if this is sufficient?
System.Drawing.Imaging does give you additional control over the image compression if you pass in the JPEG codec and set the encoder parameter for Quality, which is essentially percent retention.
Here's a function I use with the parameter "image/jpeg" to get the JPEG codec. (This isn't related to compression per se, but the Image.Save overloads that accept EncoderParameters require ImageCodecInfo instead of ImageFormat.)
// assumes an encoder for "image/jpeg" will be available.
public static ImageCodecInfo GetCodec( string mimeType )
{
    ImageCodecInfo[] encoders = ImageCodecInfo.GetImageEncoders();
    for( int i = 0; i < encoders.Length; i++ )
        if( encoders[i].MimeType == mimeType )
            return encoders[i];
    return null;
}
Then you can set the encoder parameters before saving the image.
EncoderParameters ep = new EncoderParameters(2);
ep.Param[0] = new EncoderParameter( Encoder.Quality, percentRetention ); // 1-100
ep.Param[1] = new EncoderParameter( Encoder.ColorDepth, colorDepth ); // e.g. 24L
(There are other encoder parameters; see the documentation.)
So, putting it all together, you can say
image.Save( outFile, GetCodec("image/jpeg"), ep );
(I store the codec and parameters in static values, as they are used over and over again, but I wanted to simplify the example here.)
Hope this helps!
EDIT: If you are scaling images, you also have some control over the quality. Yes, it is very "black box", but I find that it works well. Here are the "good & slow" settings (which you need to set before you call DrawImage); you can look up the "quick & dirty" versions.
// good & slow
graphics.SmoothingMode = SmoothingMode.HighQuality;
graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
graphics.CompositingQuality = CompositingQuality.HighQuality;
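A short sketch of where those settings go when scaling (sourceImage, the 800x600 target, and the output path are placeholders; GetCodec and ep are from the earlier snippets):

using (var resized = new Bitmap(800, 600))
using (var graphics = Graphics.FromImage(resized))
{
    graphics.SmoothingMode = SmoothingMode.HighQuality;
    graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
    graphics.PixelOffsetMode = PixelOffsetMode.HighQuality;
    graphics.CompositingQuality = CompositingQuality.HighQuality;

    // Draw the source into the destination rectangle, resampling with the settings above.
    graphics.DrawImage(sourceImage, new Rectangle(0, 0, resized.Width, resized.Height));
    resized.Save(@"c:\temp\img-resized.jpg", GetCodec("image/jpeg"), ep);
}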
I think that by passing the ImageFormat, .NET does the compression associated with that format for you.
But when it comes to graphics compression there can be much more to it than that (see Photoshop's or any graphics program's Save-As dialog as an example).
JPEG, for example, is just a standard... and for a lot of web graphics you can still reduce the size by blurring or taking away more colors without a noticeable quality loss.
It will be up to you which techniques you're going to use, or whether you're fine with the standard behavior.
What you are doing will work, but it is far from the most efficient and does not produce the most economical file sizes.
The reason you see complex GDI code when dealing with imaging is that there is no reasonable middle ground between using the default one-size-fits-all methods and even the most modest degree of fine tuning.
The .NET<->GDI interop borders on black magic where obscure bugs abound. Luckily, Google has all you need to navigate this minefield.
(Did I paint a dire enough picture? ;-))
Seriously though, you can do fairly well with GDI interop, but it is by no means an obvious task. If you are willing to do the research and take the time to work out the kinks, you can end up with some good code.
Otherwise find an imaging library that does most of this for you.
