How to improve the performance of loading textures? (C#)

I am currently looking for performance optimizations for my game project. The bottleneck of my initialization process is loading the textures of my models: loading and assigning a large texture takes up to 100 ms, which is a problem because I have a lot of them. I analyzed my code and found that most of the time (around 96%) is spent in the call to CopyPixels (see below).
I attached the function I use for importing all my textures below. It works like a charm and is based on the official SharpDX-samples-master example code. First, I load the image bytes from my custom file format (which takes around 2 ms for large textures). Then, I create the format converter and use it to copy the pixels to the data stream. However, copying the pixels is very slow.
Is there any faster way to achieve the same?
public static Resources.Texture ImportTexture(Resources.AResourceManager resourceManager, string resourcePackFileName, int bytePosition, out string resourceItemName)
{
    // Load image bytes from file.
    FileReader fileReader = new FileReader();
    DataTypes.Content.ResourceItem resourceItem = fileReader.ImportTextureFromCollection(resourcePackFileName, bytePosition);
    resourceItemName = resourceItem.Name;

    // Create texture.
    Resources.Texture tex = null;
    using (SharpDX.WIC.BitmapDecoder bitmapDecoder = new SharpDX.WIC.BitmapDecoder(resourceManager.ImagingFactory, new MemoryStream(resourceItem.Data, false), SharpDX.WIC.DecodeOptions.CacheOnDemand))
    {
        using (SharpDX.WIC.FormatConverter formatConverter = new SharpDX.WIC.FormatConverter(resourceManager.ImagingFactory))
        {
            formatConverter.Initialize(bitmapDecoder.GetFrame(0), SharpDX.WIC.PixelFormat.Format32bppPRGBA, SharpDX.WIC.BitmapDitherType.None, null, 0.0, SharpDX.WIC.BitmapPaletteType.Custom);
            SharpDX.DataStream dataStream = new SharpDX.DataStream(formatConverter.Size.Height * formatConverter.Size.Width * 4, true, true);

            // This takes most of the time!
            formatConverter.CopyPixels(formatConverter.Size.Width * 4, dataStream);

            // Creating the texture data structure.
            tex = new Resources.Texture(formatConverter.Size.Width, formatConverter.Size.Height, formatConverter.Size.Width * 4)
            {
                DataStream = dataStream
            };
        }
    }
    return tex;
}

Looks like you are using bitmaps. Have you considered using DDS files instead? It supports both compressed and uncompressed formats – Asesh
Asesh was right. At first I was sceptical, but I did some research and found an older article stating that an average PNG texture takes less space on disk than a comparable DDS texture. However, PNG textures need to be converted at run-time, which is slower than using DDS textures.
I spent last night looking for proper conversion tools and, after testing some options (like a plug-in for GIMP), I used Compressonator from AMD to convert all my PNG textures to DDS textures. The new files take even less space on my hard drive than the PNG files (1.7 GB instead of 2.1 GB).
The texture loading method I presented in the initial post still worked and was slightly faster. However, I decided to code a DDS importer based on several code samples I found online.
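For reference, here is a minimal sketch of the kind of header parsing such an importer does. It assumes a plain DXT1/DXT3/DXT5 file without mipmaps or cube maps, and the method name is a placeholder, not the actual importer I wrote:
using System;
using System.IO;

// Minimal DDS header parse: returns the block-compressed payload so it can be
// uploaded to the GPU directly (as BC1/BC2/BC3); no per-pixel conversion is
// needed, which is where the speedup over the WIC path comes from.
public static byte[] ImportDdsPayload(byte[] fileData, out int width, out int height, out string fourCC)
{
    using (var reader = new BinaryReader(new MemoryStream(fileData, false)))
    {
        if (reader.ReadUInt32() != 0x20534444) // magic number "DDS "
            throw new InvalidDataException("Not a DDS file.");

        reader.ReadUInt32();         // dwSize (always 124)
        reader.ReadUInt32();         // dwFlags
        height = reader.ReadInt32(); // dwHeight
        width = reader.ReadInt32();  // dwWidth

        // Jump to the fourCC inside the embedded pixel format struct.
        reader.BaseStream.Seek(84, SeekOrigin.Begin);
        fourCC = new string(reader.ReadChars(4)); // "DXT1", "DXT3" or "DXT5"

        // Pixel data starts right after the fixed 128-byte header.
        reader.BaseStream.Seek(128, SeekOrigin.Begin);

        // Block-compressed formats store 4x4 pixel blocks:
        // 8 bytes per block for DXT1, 16 for DXT3/DXT5.
        int blockSize = fourCC == "DXT1" ? 8 : 16;
        int dataSize = Math.Max(1, (width + 3) / 4) * Math.Max(1, (height + 3) / 4) * blockSize;
        return reader.ReadBytes(dataSize);
    }
}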
The result: a large texture takes only 1 ms instead of 103 ms to import. I think you can call that an improvement. :-D
Thank you very much, Asesh!

Related

Reading pictures from DB using memorystreams and images often crashes with Out of Memory

I'm trying to figure out if there is something seriously wrong with the following code. It reads the binary data from the database, stores it as a picture, and associates it with an Animal record.
For each row (record of an animal):
byte[] ba = (byte[])x.ItemArray[1]; // reading binary from a DB row
using (MemoryStream m = new MemoryStream(ba))
{
    Image i = Image.FromStream(m); // exception thrown occasionally
    c.Photo = i;
    listOfAnimals.Add(c);
}
First of all, with 18 pictures loaded (the JPG files total 105 MB), the running app uses 2 GB of memory. With no pictures loaded, it uses only 500 MB.
The exception often gets raised at the marked point, and its source is System.Drawing.
Could anyone help me optimize the code or tell me what the problem is? I must have used some wrong functions...
According to the Image.FromStream Method documentation:
OutOfMemoryException: The stream does not have a valid image format.
Remarks: You must keep the stream open for the lifetime of the Image. The stream is reset to zero if this method is called successively with the same stream.
For more information see: Loading an image from a stream without keeping the stream open and Returning Image using Image.FromStream.
Try the following:
Create a method to convert byte[] to Image. Note that the documentation quoted above says the stream must stay open for the lifetime of the Image, so the method below copies the decoded image into a new Bitmap; that way the MemoryStream can be disposed safely:
public static Image ConvertByteArrayToImage(byte[] buffer)
{
    using (MemoryStream ms = new MemoryStream(buffer))
    using (Image original = Image.FromStream(ms))
    {
        // Bitmap(Image) copies the pixel data, so the returned image
        // no longer depends on the stream.
        return new Bitmap(original);
    }
}
Then:
byte[] ba = (byte[])x.ItemArray[1]; //reading binary from a DB row
c.Photo = ConvertByteArrayToImage(ba);
listOfAnimals.Add(c);
Checking the documentation, a possible reason for out-of-memory exceptions is that the stream is not a valid image. If this were the case, it should fail reliably for a given image, so check whether any particular source image is causing the issue.
Another possibility is that you simply run out of memory. JPEG typically achieves around 10:1 compression, so 105 MiB of compressed data could easily use over 1 GiB of memory once decoded. I would recommend switching to x64 if at all possible; I see little reason not to do so today.
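As a rough back-of-the-envelope check (a sketch; the 4 bytes per pixel assumes 32bpp decoding, which is common for GDI+):
// Decoded bitmaps are uncompressed, so their size depends only on dimensions.
static long EstimateDecodedBytes(int width, int height, int bytesPerPixel = 4)
{
    return (long)width * height * bytesPerPixel;
}
// Example: a 6000x4000 photo decodes to 6000 * 4000 * 4 = 96,000,000 bytes
// (~92 MiB) in RAM, even if the JPG on disk is only a few MiB.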
There could also be a memory leak; the best way to investigate that would be with a memory profiler. It might be in just about any part of your code, so it is difficult to know without profiling.
You might also need to care about memory fragmentation. Large data blocks are stored on the large object heap, which is not automatically defragmented, so after running a while you might still have memory available, just not in any contiguous block. Again, switching to x64 would mostly solve this problem.
Also, as mjwills comments, please do not store large files in the database. I just spent several hours recovering a huge database, something that would have been much faster if the images were stored as files instead.

Reduce the size of the PNG image

I want to compress a PNG image.
I am using the code below; it works fine for JPEG/JPG, but not for PNG.
var qualityParam = new EncoderParameter(Encoder.Quality, 80L);
// PNG image codec
var pngCodec = GetEncoderInfo(ImageFormat.Png);
var encoderParams = new EncoderParameters(1) { Param = { [0] = qualityParam } };
rigImage.Save(imagePath, pngCodec, encoderParams);

private static ImageCodecInfo GetEncoderInfo(ImageFormat format)
{
    // Get image codecs for all image formats.
    var codecs = ImageCodecInfo.GetImageEncoders();
    // Find the correct image codec.
    return codecs.FirstOrDefault(t => t.FormatID == format.Guid);
}
JPEG "works fine" because it can always discard more information.
PNG is a lossless image compression format, so getting better compression is trickier and the floor is pretty high. There are tools like PNGOut or OxiPNG which exist solely to optimise PNG images, but most strategies are very computationally expensive, so your average image processing library just doesn't bother:
you can discard irrelevant metadata chunks
you can enumerate and try out various filtering strategies; this amounts to compressing the image dozens of times with slightly different tuning and keeping the best result
you can switch the DEFLATE implementation from the default (usually zlib or the system's standard) to something better but much more expensive, like zopfli
finally, and this one absolutely requires human eyeballs, you can try switching to a palettised format
As noted above, most image libraries simply won't bother with any of that: they'll use a built-in zlib/deflate and some default filter. Running a PNG through an entire optimisation pipeline can take minutes, and there's a chance the gain will be non-existent.
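If you do want to run one of those external optimisers from C#, shelling out to it is the usual approach. A sketch, assuming oxipng is installed and on the PATH (the flag values are illustrative):
using System;
using System.Diagnostics;

// Runs an external PNG optimiser over a file. oxipng rewrites the file in
// place; -o selects the optimisation level and --strip safe drops metadata
// chunks that do not affect rendering.
static void OptimizePng(string path)
{
    var psi = new ProcessStartInfo
    {
        FileName = "oxipng",
        Arguments = $"-o 4 --strip safe \"{path}\"",
        UseShellExecute = false,
        RedirectStandardError = true
    };
    using (var p = Process.Start(psi))
    {
        p.WaitForExit();
        if (p.ExitCode != 0)
            throw new InvalidOperationException(p.StandardError.ReadToEnd());
    }
}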
As @Masklinn said, PNG is a lossless format, and I think the BCL in .NET does not have an API that can help you "optimize" the size of a PNG.
Instead, you could use the libimagequant library; you can get more information about this library here, and how to use it from .NET here.
This is the same library used by PNGoo, and I have gotten really impressive results with it when optimizing PNGs in my projects.
Also, if you are planning to use the library in a commercial project, keep in mind the license terms, as indicated at https://pngquant.org/lib/.

fastest way to detect A) color and B) size of image from byte[]

Currently, we determine the size, and whether or not an image contains color, by converting it to a Bitmap, checking the height/width, and checking whether the PixelFormat is System.Drawing.Imaging.PixelFormat.Format1bppIndexed to detect color.
What I've noticed, though, stepping through the code, is that it can take 3-5 seconds just to initialize this Bitmap (at least for a very high-resolution TIF image):
ms = new MemoryStream(fileBytes);
bitmap = new System.Drawing.Bitmap(ms);
Is there a faster way to check these two things, straight from the byte array, so I can avoid the slowness of the Bitmap class, or is this just what to expect with large TIF images?
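For TIFs specifically, one way to avoid decoding entirely is to read the header fields straight from the byte array. Below is a sketch for baseline little-endian ("II") TIFFs; big-endian files and exotic layouts are not handled, and the tag numbers come from the TIFF 6.0 spec:
using System;

static class TiffHeader
{
    // Reads ImageWidth (tag 256), ImageLength (tag 257) and SamplesPerPixel
    // (tag 277: 1 = grayscale/bilevel, 3 = RGB) from the first IFD.
    public static (int Width, int Height, int SamplesPerPixel)? Read(byte[] b)
    {
        // Little-endian TIFFs start with "II" followed by the magic number 42.
        if (b.Length < 8 || b[0] != (byte)'I' || b[1] != (byte)'I' || BitConverter.ToUInt16(b, 2) != 42)
            return null;

        int ifd = (int)BitConverter.ToUInt32(b, 4);  // offset of the first IFD
        int entries = BitConverter.ToUInt16(b, ifd); // number of 12-byte entries
        int width = 0, height = 0, samples = 1;      // SamplesPerPixel defaults to 1

        for (int i = 0; i < entries; i++)
        {
            int e = ifd + 2 + i * 12;
            ushort tag = BitConverter.ToUInt16(b, e);
            ushort type = BitConverter.ToUInt16(b, e + 2);
            // Single SHORT (type 3) or LONG (type 4) values are stored inline.
            int value = type == 3 ? BitConverter.ToUInt16(b, e + 8)
                                  : (int)BitConverter.ToUInt32(b, e + 8);
            if (tag == 256) width = value;
            else if (tag == 257) height = value;
            else if (tag == 277) samples = value;
        }
        return (width, height, samples);
    }
}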
This wasn't quite the answer I was hoping for, so I'll leave it open still, but I do want to at least mention one possible "answer". My original problem was that loading up the Bitmap was slow.
I stumbled upon this MSDN article, which explains that Image.FromStream() has an overload that lets you tell it not to validate the image data; by default, validation is on. Using this overload and setting validateImageData to false speeds things up tremendously.
So for example:
using (FileStream fs = new FileStream(this.fileInfo.FullName, FileMode.Open, FileAccess.ReadWrite))
{
    // Second argument: useEmbeddedColorManagement; third: validateImageData.
    using (Image photo = Image.FromStream(fs, true, false))
    {
        // do stuff
    }
}
The author of the article found that his code ran 93x faster(!).

Is there a different way than saving an image to a stream to calculate how big it would be on disk?

I have an image variable which contains a .png picture.
To calculate how big it would be on disk, I'm currently saving it to a memory stream and then using the length of that "memory file" to check whether it is within the file size I want.
As that seems pretty inefficient to me (a "real" calculation is probably faster and less memory-intensive), I'm wondering if there is a way to do it differently.
Example:
private bool IsImageTooLarge(Image img, long maxSize)
{
    using (MemoryStream ms = new MemoryStream())
    {
        img.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
        // ms.Length avoids the extra copy that ms.ToArray() would make.
        if (ms.Length > maxSize)
        {
            return true;
        }
    }
    return false;
}
Additional infos:
The source code is part of what will be a .dll, so web-specific solutions won't work; I need to do this in C# itself.
You can save on memory by implementing your own Stream, say PositionNullStream, which would be similar to the NullStream class behind the Stream.Null object, but with a position counter. Your implementation would provide a write-only stream to the Save method of the image, and you would collect the final position from it when Save has finished.
private bool IsImageTooLarge(Image img, long maxSize)
{
    using (var ps = new PositionNullStream())
    {
        img.Save(ps, System.Drawing.Imaging.ImageFormat.Png);
        return ps.Position > maxSize;
    }
}
You can find a sample implementation of NullStream on lines 1445..1454 here. Change the implementation so that the writing and re-positioning methods store the current position (NullStream is hardcoded to return zero).
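A minimal sketch of what such a stream could look like; it also keeps a high-water mark in Length, in case the encoder seeks backwards to patch headers:
using System;
using System.IO;

// Write-only stream that discards all data and only counts the bytes written.
public class PositionNullStream : Stream
{
    private long _position;
    private long _length; // high-water mark, in case the encoder seeks back

    public override bool CanRead => false;
    public override bool CanSeek => true;
    public override bool CanWrite => true;
    public override long Length => _length;

    public override long Position
    {
        get { return _position; }
        set { _position = value; _length = Math.Max(_length, _position); }
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        // Discard the bytes; just advance the counter.
        _position += count;
        _length = Math.Max(_length, _position);
    }

    public override long Seek(long offset, SeekOrigin origin)
    {
        switch (origin)
        {
            case SeekOrigin.Begin:   Position = offset; break;
            case SeekOrigin.Current: Position = _position + offset; break;
            case SeekOrigin.End:     Position = _length + offset; break;
        }
        return _position;
    }

    public override void SetLength(long value) { _length = value; }
    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count)
        => throw new NotSupportedException();
}
If the encoder does seek backwards while saving, comparing Length rather than Position against maxSize is the safer choice.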
No, there is no way, because you do not need the size of the picture but the size of the file. The only way to get that, with anything involving compression, is to compress it and see what comes out.
Without compressing, you can only say X * Y * bytes per pixel, but that gives you the raw bitmap size, not anything involving compression.

Compressing a TIF file

I'm trying to convert a multipage color TIFF file to a CompressionCCITT3 TIFF in C#. I realize that I need to make sure that all pixels are 1 bit. I have not found a useful example of this online.
You need this conversion, as CCITT3 and CCITT4 don't support color (if I remember right).
Pimping disclaimer: I work for Atalasoft, a company that makes .NET imaging software.
Using dotImage, this task becomes something like this:
// true = loop over all frames
FileSystemImageSource source = new FileSystemImageSource("path-to-your-file.tif", true);

// The tiff encoder will auto-select an appropriate compression - CCITT4 for 1 bit.
TiffEncoder encoder = new TiffEncoder();
encoder.Append = true;

// DynamicThresholdCommand is very good for documents. For pictures, use DitherCommand.
DynamicThresholdCommand threshold = new DynamicThresholdCommand();

using (FileStream outstm = new FileStream("path-to-output.tif", FileMode.Create))
{
    while (source.HasMoreImages())
    {
        AtalaImage image = source.AcquireNext();
        AtalaImage finalImage = image;
        // Convert when needed.
        if (image.PixelFormat != PixelFormat.Pixel1bppIndexed)
        {
            finalImage = threshold.Apply(image).Image;
        }
        encoder.Save(outstm, finalImage, null);
        if (finalImage != image)
        {
            finalImage.Dispose();
        }
        source.Release(image);
    }
}
The Bob Powell example is good, as far as it goes, but it has a number of problems, not the least of which is that it uses a simple threshold, which is terrific if you want speed and don't actually care what your output looks like, or if your input domain really is pretty much black and white already, just represented in color. Binarization is a tricky problem: when your task is to reduce the available information to 1/24th of its original amount, how to keep the right information and throw away the rest is a challenge. dotImage has six different tools (IIRC) for binarization. SimpleThreshold is bottom of the barrel, from my point of view.
I suggest experimenting with the desired results first using TIFF and image utilities before diving into the coding. I found VIPS to be a handy tool. The next option is to look into what LibTIFF can do. I've had good results with the free LibTiff.Net in C# (see also stackoverflow). I was very disappointed by the GDI+ TIFF functionality, although your mileage may vary (I need the missing 16-bit grayscale support).
You can also use the LibTiff utilities (e.g. see http://www.libtiff.org/man/tiffcp.1.html).
I saw the above code, and it looked like it was converting every pixel with manual logic.
Would this work for you?
Imports System.Drawing.Imaging

'get the color tif file
Dim bmpColorTIF As New Bitmap("C:\color.tif")
'select an area of the tif (will grab all frames)
Dim rectColorTIF As New Rectangle(0, 0, bmpColorTIF.Width, bmpColorTIF.Height)
'clone the rectangle as a 1-bit tif
Dim bmpBlackWhiteTIF As Bitmap = bmpColorTIF.Clone(rectColorTIF, PixelFormat.Format1bppIndexed)
'do what you want with the new bitmap (save, etc.)
...
Note: there are a ton of pixel formats to choose from.
