Image resizing efficiency in C# and .NET 3.5 - c#

I have written a web service to resize user uploaded images and all works correctly from a functional point of view, but it causes CPU usage to spike every time it is used. It is running on Windows Server 2008 64 bit. I have tried compiling to 32 and 64 bit and get about the same results.
The heart of the service is this function:
private Image CreateReducedImage(Image imgOrig, Size newSize)
{
    var newBM = new Bitmap(newSize.Width, newSize.Height);
    using (var newGraphics = Graphics.FromImage(newBM))
    {
        newGraphics.CompositingQuality = CompositingQuality.HighSpeed;
        newGraphics.SmoothingMode = SmoothingMode.HighSpeed;
        newGraphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
        newGraphics.DrawImage(imgOrig, new Rectangle(0, 0, newSize.Width, newSize.Height));
    }
    return newBM;
}
I put a profiler on the service, and it seemed to indicate that the vast majority of the time is spent in the GDI+ library itself, with not much to be gained in my code.
Questions:
Am I doing something glaringly inefficient in my code here? It seems to conform to the examples I have seen.
Are there gains to be had in using libraries other than GDI+? The benchmarks I have seen seem to indicate that GDI+ compares well to other libraries, but I didn't find enough of them to be confident.
Are there gains to be had by using "unsafe code" blocks?
Please let me know if I have not included enough of the code...I am happy to put as much up as requested but don't want to be obnoxious in the post.

Image processing is usually an expensive operation. You have to remember that a 32-bit color image is expanded in memory to 4 bytes * pixel width * pixel height before your app even starts any kind of processing. A spike is definitely to be expected, especially when doing any kind of pixel processing.
That being said, the only place I can see to speed up the process or lower the impact on your processor is to try a lower-quality interpolation mode.

You could try
newGraphics.InterpolationMode = InterpolationMode.Low;
as HighQualityBicubic is the most processor-intensive of the resampling operations, but of course you will then lose image quality.
Apart from that, I can't really see anything that can be done to speed up your code. GDI+ will almost certainly be the fastest on a Windows machine (no code written in C# is going to surpass a pure C library), and using other image libraries carries the potential risk of unsafe and/or buggy code.
The bottom line is that resizing an image is an expensive operation no matter what you do. The simplest solution in your case might simply be to replace your server's CPU with a faster model.

I know that the DirectX being released with Windows 7 is said to provide 2D hardware acceleration. Whether this implies it will beat GDI+ on this kind of operation, I don't know. MS has a pretty unflattering description of GDI here, which implies it is slower than it should be, among other things.
If you really want to try to do this kind of stuff yourself, there is a great GDI tutorial that shows how. The author makes use of both SetPixel and "unsafe blocks" in different parts of his tutorial.
As an aside, multi-threading will probably help you here, assuming your server has more than one CPU. That is, you can process more than one image at once and probably get faster results.
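A rough sketch of that idea: on .NET 3.5 (which predates the Task Parallel Library) the thread pool can fan the work out. This assumes the CreateReducedImage method from the question is reachable as a static helper; the inputs array is hypothetical.
using System;
using System.Drawing;
using System.Threading;

// Resize several images concurrently on the thread pool (.NET 3.5 friendly).
static Image[] ResizeAll(Image[] inputs, Size targetSize)
{
    var results = new Image[inputs.Length];
    if (inputs.Length == 0) return results;

    int pending = inputs.Length;
    using (var allDone = new ManualResetEvent(false))
    {
        for (int i = 0; i < inputs.Length; i++)
        {
            int index = i; // capture a fresh copy for the closure
            ThreadPool.QueueUserWorkItem(delegate
            {
                results[index] = CreateReducedImage(inputs[index], targetSize);
                if (Interlocked.Decrement(ref pending) == 0)
                    allDone.Set();
            });
        }
        allDone.WaitOne(); // block until every image has been processed
    }
    return results;
}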

When you write
I have written a web service to resize user uploaded images
It sounds to me like the user uploads an image to a (web?) server, and the server then calls a web service to do the scaling?
If that is the case, I would simply move the scaling directly to the server. Imho, scaling an image doesn't justify its own web service. And you get quite a bit of unnecessary traffic going from the server to the web service and back, in particular because the image is probably base64 encoded, which makes the data traffic even bigger.
But I'm just guessing here.
p.s. Unsafe blocks by themselves don't give any gain; they just allow unsafe code to be compiled. So unless you write your own scaling routine, an unsafe block isn't going to help.

You may want to try ImageMagick. It's free, and there are .NET wrappers for it as well.
Or you can send a command to a DOS Shell.
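A minimal sketch of that shell approach, assuming ImageMagick is installed and its convert tool is on the PATH (the file names are placeholders):
using System.Diagnostics;

// Resize in.jpg to fit within 800x600 and write out.jpg.
var psi = new ProcessStartInfo("convert", "in.jpg -resize 800x600 out.jpg")
{
    UseShellExecute = false,
    CreateNoWindow = true
};
using (var proc = Process.Start(psi))
{
    proc.WaitForExit();
}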
We have used ImageMagick on Windows Servers now and then, for batch processing and sometimes for a more flexible image conversion.
Of course, there are commercial components as well, like those by Leadtools and Atalasoft. We have never tried those.

I suspect the spike is because you have the interpolation mode cranked right up. All interpolation modes work per pixel, and HighQualityBicubic is about as high as you can go with GDI+, so I suspect the per-pixel calculations are chewing up your CPU.
As a test, try dropping the interpolation mode down to InterpolationMode.NearestNeighbor and see if the CPU spike drops; if so, then that's your culprit.
If it is, do some trial and error to weigh cost against quality; chances are you don't need HighQualityBicubic to get decent results.
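A quick way to run that trial and error is to time the same resize under each mode and judge the output by eye (a sketch; original and newSize stand in for the question's inputs):
using System;
using System.Diagnostics;
using System.Drawing;
using System.Drawing.Drawing2D;

static void TimeInterpolationModes(Image original, Size newSize)
{
    var modes = new[]
    {
        InterpolationMode.NearestNeighbor,
        InterpolationMode.Low,
        InterpolationMode.HighQualityBilinear,
        InterpolationMode.HighQualityBicubic
    };
    foreach (var mode in modes)
    {
        var sw = Stopwatch.StartNew();
        using (var bmp = new Bitmap(newSize.Width, newSize.Height))
        using (var g = Graphics.FromImage(bmp))
        {
            g.InterpolationMode = mode;
            g.DrawImage(original, new Rectangle(0, 0, newSize.Width, newSize.Height));
        }
        sw.Stop();
        Console.WriteLine("{0}: {1} ms", mode, sw.ElapsedMilliseconds);
    }
}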

Related

C++ AMP calculations and WPF rendering graphics card dual use performance

Situation:
In an application that needs both computation and the rendering of images (image preprocessing followed by display), I want to use both AMP and WPF: AMP does some filters on the images, while WPF does little more than display scaled/rotated images and some simple overlays, both running at roughly 30 fps, with new images continuously streaming in.
Question:
Is there any way to find out how the 2 will influence each other?
I am wondering whether, in the actual application, I will also see the hopefully nice speed-up that I see in an isolated AMP-only environment.
Additional Info:
I will be able and am going to measure the AMP performance separately, since it is low level and new functionality that I am going to set up in a separate project anyway. The WPF rendering part already exists though in a complex application, so it would be difficult to isolate that.
I am not planning on doing the filters etc. for rendering only, since the results will be needed at intermediate stages as well (other algorithms, e.g. edge detection, saving, ...).
There are a couple of things you should consider here:
Is there any way to find out how the 2 will influence each other?
Directly no, but indirectly yes. Both WPF and AMP make use of the GPU for rendering. If the AMP portion of your application uses too much of the GPU's resources, it will interfere with your frame rate. The Cartoonizer case study from the C++ AMP book uses MFC and C++ AMP to do exactly what you describe. On slower hardware with high image-processing loads you can see the application's responsiveness suffer. However, in almost all cases cartoonizing images on the GPU is much faster and can achieve video frame rates.
I am wondering on whether I will see the hopefully nice speed-up
With any GPU application the key to seeing performance improvements is that the speedup from running compute on the GPU, rather than the CPU, must make up for the additional overhead of copying data to and from the GPU.
In this case there is additional overhead, as you must also marshal data from the native (C++ AMP) to managed (WPF) environments. You need to take care to do this efficiently by ensuring that your data types are blittable and do not require explicit marshaling. I implemented an N-body modeling application that used WPF and native code.
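For example, a struct containing only primitive fields with sequential layout is blittable, so an array of them can be pinned and handed to native code without copying (the DLL name and export below are hypothetical):
using System.Runtime.InteropServices;

// Blittable: only primitive value-type fields and a fixed layout, so the
// CLR can pin an array of these and pass it to native code without copying.
[StructLayout(LayoutKind.Sequential)]
struct Particle
{
    public float X, Y, Z;    // position
    public float Vx, Vy, Vz; // velocity
}

static class NativeMethods
{
    // Hypothetical export from the native C++ AMP DLL.
    [DllImport("NBodyAmp.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void StepSimulation([In, Out] Particle[] particles, int count);
}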
Ideally you would want to render the results of the GPU calculation without moving them through the CPU. This is possible, but not if you explicitly involve WPF. The N-body example achieves it by embedding a DirectX render area directly into the WPF UI and then rendering data directly from the AMP arrays. This was largely because the WPF Viewport3D really didn't meet my performance needs. For rendering images, WPF may be fine.
Unless things have changed with VS 2013 you definitely want your C++ AMP code in a separate DLL as there are some limitations when using native code in a C++/CLI project.
As @stijn suggests, I would build a small prototype to make sure that the gains you get by moving some of the compute to the GPU are not lost to the overhead of moving data to and from the GPU, and on into WPF.

Fast copying of GUI in C#?

Right now I'm copying window graphics from one window to another via BitBlt from WinApi. I wonder if there is any other fast / faster way to do the same in C#.
The keyword here is performance. If I should stay with WinApi, I would hold an HDC in memory for quick drawing, and if the .NET Framework has other possibilities, I would probably hold Graphics objects. Right now I'm somewhat too slow when I have to copy ~1920x1080 windows.
So how can I boost the performance of GUI copying in C#?
I just want to know if I can do better than this. Explicit hardware acceleration (OpenGL, DirectX) is out of scope here; I have decided to stay with pure .NET + WinApi.
// example to copy desktop to window
Graphics g = Graphics.FromHwnd(Handle);
IntPtr dc = g.GetHdc();
IntPtr dc0 = Windows.GetWindowDC(Windows.GetDesktopWindow());
Windows.BitBlt(dc, 0, 0, Width, Height, dc0, 0, 0, Windows.SRCCOPY);
// clean up of DCs and Graphics left out
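The Windows class in that snippet is the poster's own wrapper and isn't shown; a sketch of the P/Invoke declarations it presumably contains (an assumption matching the calls above, not the poster's actual code):
using System;
using System.Runtime.InteropServices;

static class Windows
{
    public const uint SRCCOPY = 0x00CC0020; // raster operation: plain copy

    [DllImport("user32.dll")]
    public static extern IntPtr GetDesktopWindow();

    [DllImport("user32.dll")]
    public static extern IntPtr GetWindowDC(IntPtr hWnd);

    [DllImport("gdi32.dll")]
    public static extern bool BitBlt(IntPtr hdcDest, int xDest, int yDest,
                                     int width, int height,
                                     IntPtr hdcSrc, int xSrc, int ySrc, uint rop);
}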
Hans's questions:
How slow is it?
Too slow. Feels (very) stiff.
How fast does it need to be?
The software mustn't feel slow.
Why is it important?
User friendliness of software.
What does the hardware look like?
Any random PC.
Why can't you solve it with better hardware?
It's just software that runs on Windows machines. You won't go and buy a new PC for a random piece of software that runs slowly on your old one, will you?
Get a better video card!
The whole point of the GDI, whether you access it from native code or via .NET, is that it abstracts the details of the graphics subsystem. The downside is that low-level operations such as blitting are in the hands of the graphics driver writers. You can safely assume that these are as optimised as it's possible to get (after all, a video card manufacturer wants to make their card look the best).
The overhead of using the .NET wrappers instead of the native calls directly pales into insignificance compared to the time spent doing the operation itself, so you're not going to gain much there.
Your code is doing a non-scaling, no blending copy which is possibly the fastest way to copy images.
Of course, you should profile the code to see what effect any changes you make are having.
The question then is why are you copying such large images from one window to another? Do you control the contents of both windows?
Update
If you control both windows, why not draw to a single surface and then blit that to both windows? This would replace the often costly operation of reading data back from the video card (i.e. blitting from one window to another) with two write operations. It's just a thought and might not work, but you'd need timing data to see if it makes any difference.
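A sketch of that single-surface idea in plain System.Drawing, assuming you own both windows (the forms and the drawContent callback are hypothetical):
using System;
using System.Drawing;
using System.Windows.Forms;

// Render the shared content once into an off-screen buffer, then write it
// to both windows; no read-back from the video card is involved.
static void PaintBoth(Form windowA, Form windowB, Action<Graphics> drawContent)
{
    using (var buffer = new Bitmap(windowA.ClientSize.Width, windowA.ClientSize.Height))
    {
        using (var g = Graphics.FromImage(buffer))
            drawContent(g);

        using (var gA = windowA.CreateGraphics())
        using (var gB = windowB.CreateGraphics())
        {
            gA.DrawImageUnscaled(buffer, 0, 0);
            gB.DrawImageUnscaled(buffer, 0, 0);
        }
    }
}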

What is a performance-critical hotspot and its purpose?

I'm reading C# 5.0 in a Nutshell (O'Reilly), and in the first chapter there is a section about memory management. This section explains that pointers are usually unnecessary in C#, because C# eliminates the problem of incorrect pointers found in other languages like C++. Finally, it mentions that pointers still see critical usage in performance-critical hotspots.
Then, what is a performance-critical hotspot and its purpose?
Thanks in advance for your help.
A "performance critical hotspot" refers to a piece of code which is a performance bottleneck. This could be many things, but a good example of this is image processing.
Let's say I have a rather large bitmap and I need to perform some operation on each pixel. This is going to be a loop with many iterations and perhaps a lot going on. Saving a bit of CPU and/or IO time during each iteration of this loop (this "hotspot") will result in a large overall performance gain.
So, GetPixel and SetPixel are out the window. They're slow and, from experience, I know they will not perform well on large images. In this case I can use LockBits to pin the image to its current memory location and obtain a pointer to the raw image bits.
This sort of traversal results in far faster code, and I have now optimized a "performance-critical hotspot".
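A minimal sketch of that LockBits pattern, assuming a 32bpp ARGB bitmap and a compiler with /unsafe enabled (the inversion is just a placeholder per-pixel operation):
using System.Drawing;
using System.Drawing.Imaging;

// Invert every pixel of a 32bpp ARGB bitmap via a raw pointer.
static unsafe void InvertPixels(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadWrite,
                                   PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            // Stride can include padding, so re-anchor at the start of each row.
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            for (int x = 0; x < data.Width * 4; x += 4)
            {
                row[x]     = (byte)(255 - row[x]);     // blue
                row[x + 1] = (byte)(255 - row[x + 1]); // green
                row[x + 2] = (byte)(255 - row[x + 2]); // red
                // row[x + 3] is alpha; left unchanged
            }
        }
    }
    finally
    {
        bmp.UnlockBits(data);
    }
}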

GDI+ Image EXTREMELY faster than C# Image

I decided to benchmark reading an image in C# and in C++, to decide which language to use in a project I'm thinking about making for myself.
I expected the benchmarks to be extremely close with C++ maybe pushing ahead slightly.
The C# code takes about 300 ms each run (I ran each test 100 times), whereas the C++ code takes about 1.5 ms.
So is my C# code wrong? Am I benchmarking it badly? Or is it really just this much slower?
Here's the C# code I used:
Stopwatch watch = new Stopwatch();
watch.Start();
Image image = Image.FromFile(imagePath);
watch.Stop();
Console.WriteLine("DEBUG: {0}", watch.ElapsedMilliseconds);
And the C++ code pretty much boiled down to this:
QueryPerformanceCounter(&start);
Image * img = Image::FromFile(imagePath);
QueryPerformanceCounter(&stop);
delete img;
return (stop.QuadPart - start.QuadPart) * 1000.0 / freq.QuadPart;
Regardless of which language, they need to end up in an Image object, as it provides the functionality i'm going to need.
Edit:
As xanatos pointed out in the comments, the Image.FromFile does do checking.
More specifically, this:
num = SafeNativeMethods.Gdip.GdipImageForceValidation(new HandleRef(null, zero));
if (num != 0)
{
    SafeNativeMethods.Gdip.GdipDisposeImage(new HandleRef(null, zero));
    throw SafeNativeMethods.Gdip.StatusException(num);
}
Using Image.FromStream() instead, you can avoid this.
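A sketch of that, using the FromStream overload whose third parameter (validateImageData) skips the forced validation; note that GDI+ requires the stream to stay open for the lifetime of the Image:
using System.Drawing;
using System.IO;

static Image LoadWithoutValidation(string imagePath)
{
    // validateImageData: false skips GdipImageForceValidation, so only the
    // header is parsed up front; a corrupt file will fail later instead.
    // The stream must not be disposed while the Image is still in use.
    FileStream fs = File.OpenRead(imagePath);
    return Image.FromStream(fs, false, false);
}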
What I'm wondering is: if you do avoid this and then try to load an invalid image file, it throws an OutOfMemoryException.
And in C++ you don't do checking like this. So how important is this check? Can anyone give me a situation where skipping it would be bad?
Yes, your benchmark is flawed. The problem is that you forgot to actually do something with the bitmap. Like paint it.
GDI+ heavily optimizes the loading of an image. Very similar to the way .NET optimizes loading an assembly. It does the bare things necessary, it reads the header of the file to retrieve essential properties. Format, Width, Height, Dpi. Then it creates a memory-mapped file to create a mapping to the pixel data in the file. But doesn't actually read the pixel data.
Now the difference comes into play. System.Drawing.Image next actually reads the pixel data. That causes page faults; the operating system then reads the file and copies the pixel data into RAM. Highly desirable: if there's anything wrong with the file, you'll get an exception at the FromFile() call instead of at some later time, typically when your program draws the image and is buried in framework code you didn't write. Your benchmark for the C# code times the creation of the MMF plus the reading of the pixel data.
The C++ program is always going to have to pay for reading the pixel data too. But you didn't measure that; you only measured the cost of creating the MMF.
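To make the two comparable, time until the pixel data has actually been consumed on both sides; for the C# half that could look like this (a sketch; imagePath is from the question, and the C++ half would need the equivalent drawing step):
using System;
using System.Diagnostics;
using System.Drawing;

var watch = Stopwatch.StartNew();
using (Image image = Image.FromFile(imagePath))
using (var target = new Bitmap(image.Width, image.Height))
using (var g = Graphics.FromImage(target))
{
    g.DrawImage(image, 0, 0); // forces the pixel data to be decoded and used
}
watch.Stop();
Console.WriteLine("DEBUG: {0}", watch.ElapsedMilliseconds);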
A few points that I know:
There is a thing called the CLR. If the C++ framework in question (it seems to be Qt) uses system calls that do not depend on the .NET Framework, then obviously it will run faster.
Regarding C#: it is possible to load the assembly before you call it in your code. If you do so, you will see a good improvement in speed.
If you use the Windows platform, MS won't reduce the execution speed of its own language unless there is some necessity.

GDI+ System.Drawing.Bitmap gives error Parameter is not valid intermittently

I have some C# code in an ASP.NET application that does this:
Bitmap bmp = new Bitmap(1184, 1900);
And occasionally it throws the exception "Parameter is not valid". Now I've been googling around, and apparently GDI+ is infamous for throwing random exceptions; lots of people have had this problem, but nobody has a solution to it! I've checked the system and it has plenty of both RAM and swap space.
Now, in the past, if I did an 'iisreset' the problem went away, but it came back a few days later. I'm not convinced I've caused a memory leak, though, because as I say above there is plenty of RAM and swap free.
Anyone have any solutions?
Stop using GDI+ and start using the WPF Imaging classes (.NET 3.0). These are a major cleanup of the GDI+ classes and tuned for performance. Additionally, it sets up a "bitmap chain" that allows you to easily perform multiple actions on the bitmap in an efficient manner.
Find more by reading about BitmapSource
Here's an example of starting with a blank bitmap just waiting to receive some pixels:
using System.Windows.Media;
using System.Windows.Media.Imaging;

class Program {
    public static void Main(string[] args) {
        var bmp = new WriteableBitmap(1184, 1900, 96.0, 96.0, PixelFormats.Bgr32, null);
    }
}
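To actually hand it some pixels, WritePixels takes a rectangle, a raw buffer, and a stride; a sketch that fills the whole bitmap with mid-gray:
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

var bmp = new WriteableBitmap(1184, 1900, 96.0, 96.0, PixelFormats.Bgr32, null);

int stride = bmp.PixelWidth * 4; // Bgr32 = 4 bytes per pixel
var pixels = new byte[stride * bmp.PixelHeight];
for (int i = 0; i < pixels.Length; i++)
    pixels[i] = 0x80; // mid-gray in every channel

bmp.WritePixels(new Int32Rect(0, 0, bmp.PixelWidth, bmp.PixelHeight),
                pixels, stride, 0);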
For anyone who's interested, the solution I'm going to use is the Mono.Cairo library from the Mono C# distribution instead of System.Drawing. If I simply drag the mono.cairo.dll, libcairo-2.dll, libpng13.dll and zlib1.dll files from the Windows version of Mono into the same folder as my executable, I can develop in Windows using Visual Studio 2005 and it all works nicely.
Update: I've done the above and stress-tested the application; it all seems to run smoothly now, and it uses up to 200 MB less RAM to boot. Very happy.
Everything I've seen to date in my context is related to memory leaks / handle leaks. I recommend you get a fresh pair of eyes to investigate your code.
What actually happens is that the image is disposed at a random point in the future, even if you created it on the previous line of code. This may be because of a memory/handle leak (cleaning some out of my code appeared to improve, but not completely resolve, the problem).
Because this error happens after the application has been in use for a while (sometimes using lots of memory, sometimes not), I feel the garbage collector doesn't obey the rules because of some special tweaks related to services, and that is why Microsoft washes its hands of this problem.
http://blog.lavablast.com/post/2007/11/The-Mysterious-Parameter-Is-Not-Valid-Exception.aspx
You not only need enough memory; it needs to be contiguous. Over time, memory becomes fragmented and it gets harder to find big blocks. There aren't a lot of good solutions to this, aside from building up images from smaller bitmaps.
new Bitmap(x, y) pretty much just needs to allocate memory, so assuming your program isn't corrupted in some way (is there any unsafe code that could corrupt the heap?), I would start with this allocation failing. Needing a contiguous block is how a seemingly small allocation can fail. Fragmentation of the heap is usually solved with a custom allocator, which I don't think is a good idea (or even possible) in IIS.
To see what error you get on out-of-memory, try allocating a gigantic Bitmap as a test and see what it throws.
One strategy I've seen is to pre-allocate some large blocks of memory (in your case, Bitmaps) and treat them as a pool (get them from and return them to the pool). If you only need them for a short period of time, you might be able to get away with keeping just a few in memory and sharing them.
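A minimal sketch of that pooling strategy (pool size and dimensions are placeholders):
using System.Collections.Generic;
using System.Drawing;

// Pre-allocate the big contiguous blocks once, before the heap fragments,
// then hand them out and take them back.
class BitmapPool
{
    private readonly Stack<Bitmap> _free = new Stack<Bitmap>();

    public BitmapPool(int count, int width, int height)
    {
        for (int i = 0; i < count; i++)
            _free.Push(new Bitmap(width, height));
    }

    public Bitmap Get()
    {
        lock (_free)
            return _free.Count > 0 ? _free.Pop() : null; // caller handles an empty pool
    }

    public void Return(Bitmap bmp)
    {
        lock (_free)
            _free.Push(bmp);
    }
}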
I just got a reply from Microsoft support. Apparently, if you look here:
http://msdn.microsoft.com/en-us/library/system.drawing.aspx
You can see it says "Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions."
So they're basically washing their hands of the issue.
It appears that they're admitting that this section of the .Net framework is unreliable. I'm a bit disappointed.
Next up: can anyone recommend a similar library to open a GIF file, superimpose some text, and save it again?
Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service
For a supported alternative, see Windows Imaging Components (msdn), a native library which, ironically, System.Drawing is based on.
