GDI+ System.Drawing.Bitmap intermittently gives "Parameter is not valid" error - C#

I have some C# code in an ASP.Net application that does this:
Bitmap bmp = new Bitmap(1184, 1900);
And occasionally it throws an exception: "Parameter is not valid". I've been googling around, and apparently GDI+ is infamous for throwing random exceptions; lots of people have had this problem, but nobody seems to have a solution. I've checked the system and it has plenty of both RAM and swap space.
In the past, doing an 'iisreset' made the problem go away, but it comes back a few days later. I'm not convinced I've caused a memory leak, because, as I say above, there is plenty of RAM and swap free.
Anyone have any solutions?

Stop using GDI+ and start using the WPF Imaging classes (.NET 3.0). These are a major cleanup of the GDI+ classes and tuned for performance. Additionally, it sets up a "bitmap chain" that allows you to easily perform multiple actions on the bitmap in an efficient manner.
Find more by reading about BitmapSource
Here's an example of starting with a blank bitmap just waiting to receive some pixels:
using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

class Program {
    public static void Main(string[] args) {
        // 1184x1900 pixels at 96 DPI, 32 bits per pixel, no palette.
        // Note: the pixel format values live on PixelFormats (System.Windows.Media).
        var bmp = new WriteableBitmap(1184, 1900, 96.0, 96.0, PixelFormats.Bgr32, null);
    }
}
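Continuing inside Main, here's a rough, untested sketch of pushing raw pixels into it with WritePixels (Bgr32 means four bytes per pixel; the mid-gray fill is just a placeholder):
// Fill the whole bitmap with mid-gray; any byte[] in Bgr32 layout works.
// Int32Rect comes from the System.Windows namespace.
int stride = bmp.PixelWidth * 4;
var pixels = new byte[stride * bmp.PixelHeight];
for (int i = 0; i < pixels.Length; i += 4) {
    pixels[i] = pixels[i + 1] = pixels[i + 2] = 0x80; // B, G, R
}
bmp.WritePixels(new Int32Rect(0, 0, bmp.PixelWidth, bmp.PixelHeight), pixels, stride, 0);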

For anyone who's interested, the solution I'm going to use is the Mono.Cairo library from the Mono C# distribution instead of System.Drawing. If I simply drag the mono.cairo.dll, libcairo-2.dll, libpng13.dll and zlib1.dll files from the Windows version of Mono into the same folder as my executable, I can develop in Windows using Visual Studio 2005 and it all works nicely.
Update - I've done the above and stress-tested the application, and it all seems to run smoothly now, using up to 200 MB less RAM to boot. Very happy.
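In case it helps anyone, the drawing code ends up looking roughly like this (file name, coordinates and text are made up, but these are the standard Mono.Cairo calls):
using Cairo;

// Create an in-memory surface, paint a background, stamp some text, save.
using (var surface = new ImageSurface(Format.Argb32, 1184, 1900))
using (var ctx = new Context(surface)) {
    ctx.SetSourceRGB(1, 1, 1);  // white background
    ctx.Paint();
    ctx.SetSourceRGB(0, 0, 0);  // black text
    ctx.SelectFontFace("Sans", FontSlant.Normal, FontWeight.Normal);
    ctx.SetFontSize(24);
    ctx.MoveTo(20, 40);
    ctx.ShowText("Hello from Cairo");
    surface.WriteToPng("output.png");
}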

Everything I've seen to date in my context is related to memory leaks / handle leaks. I recommend you get a fresh pair of eyes to investigate your code.
What actually happens is that the image is disposed at a random point in the future, even if you've created it on the previous line of code. This may be because of a memory/handle leak (cleaning some out of my code appears to improve but not completely resolve this problem).
Because this error happens after the application has been in use for a while, sometimes using lots of memory, sometimes not, I suspect the garbage collector doesn't obey the usual rules in service scenarios, and perhaps that is why Microsoft washes its hands of this problem.
http://blog.lavablast.com/post/2007/11/The-Mysterious-Parameter-Is-Not-Valid-Exception.aspx

You not only need enough memory, it needs to be contiguous. Over time memory becomes fragmented and it becomes harder to find big blocks. There aren't a lot of good solutions to this, aside from building up images from smaller bitmaps.
new Bitmap(x, y) pretty much just needs to allocate memory. Assuming your program isn't corrupted in some way (is there any unsafe code that could corrupt the heap?), I would start from the premise that this allocation genuinely fails. Needing a contiguous block is how a seemingly small allocation can fail. Heap fragmentation is usually solved with a custom allocator, but I don't think that is a good idea (or even possible) inside IIS.
To see what error you get on out of memory, try just allocating a gigantic Bitmap as a test and see what it throws.
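Something like this (the dimensions are arbitrary, just big enough to be refused):
using System;
using System.Drawing;

// Deliberately absurd allocation; print whichever exception GDI+ surfaces.
try {
    using (var probe = new Bitmap(30000, 30000)) { }
} catch (Exception ex) {
    Console.WriteLine(ex);
}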
One strategy I've seen is to pre-allocate some large blocks of memory (in your case Bitmaps) and treat them as a pool (get and return them to the pool). If you only need them for a short period of time, you might be able to get away with just keeping a few in memory and sharing them.
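A minimal sketch of that pooling idea (the bitmap size is a placeholder, and this assumes all callers want the same dimensions):
using System.Collections.Concurrent;
using System.Drawing;

// Claim the big contiguous blocks once and recycle them, instead of
// repeatedly allocating and freeing large bitmaps.
static class BitmapPool {
    static readonly ConcurrentBag<Bitmap> pool = new ConcurrentBag<Bitmap>();

    public static Bitmap Rent() {
        Bitmap bmp;
        return pool.TryTake(out bmp) ? bmp : new Bitmap(1184, 1900);
    }

    public static void Return(Bitmap bmp) {
        pool.Add(bmp); // caller must not touch bmp after returning it
    }
}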

I just got a reply from Microsoft support. Apparently if you look here:
http://msdn.microsoft.com/en-us/library/system.drawing.aspx
You can see it says "Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions."
So they're basically washing their hands of the issue.
It appears that they're admitting that this section of the .NET Framework is unreliable. I'm a bit disappointed.
Next up - can anyone recommend a similar library to open a gif file, superimpose some text, and save it again?

Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service
For a supported alternative, see the Windows Imaging Component (MSDN), a native library on which, ironically, System.Drawing is ultimately based.

Related

C# Memory Issues

I've got an application that:
Targets C# 6
Targets .net 4.5.2
Is a Windows Forms application
Builds in AnyCPU mode because it...
Utilizes old 32-bit libraries (unmanaged memory) that cannot be upgraded to 64-bit
Uses DevExpress, a third party control vendor
Processes many gigabytes of data daily to produce reports
After a few hours of use in jobs that have many plots, the application eventually runs out of memory. I've spent quite a long time cleaning up many leaks found in the code and have gotten the project to a state where, in the worst case, it may be using upwards of 400,000 K of memory at any given time, according to performance counters. Processing the data itself has not caused any issues, since it is processed in jagged arrays, avoiding problems with the Large Object Heap.
Last time this happened, the application was using ~305,000 K of memory. It is so "out of memory" that the error dialog cannot even draw the error icon in the MessageBox that comes up; the space where the icon would usually be is all black.
So far I've done the following to clean this up:
Windows Forms utilize the Disposed event to ensure that resources are cleaned up; Dispose is called manually when required
Business objects utilize IDisposable to remove references
Verified cleanup using the ANTS memory profiler and the SciTech memory profiler.
The low memory usage suggests leaks are not the problem, but I wanted to see if anything helpful showed up; it did not
Utilized the GCSettings.LargeObjectHeapCompactionMode property to remove any fragmentation from processing data in the Large Object Heap (LOH)
Nearly every article that I've used to get to this point suggests that out of memory actually means out of contiguous address space and given the amount that's in use, I agree with this. I'm not sure what to do at this point, since from what I understand (and am probably very wrong about), the garbage collector clears this up to make room as the process moves along, with the exception of the LOH, which is now cleaned up manually using the LargeObjectHeapCompactionMode property introduced in .NET 4.5.1.
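For what it's worth, that manual cleanup is only a couple of lines (CompactOnce resets itself back to Default after the next blocking collection):
using System.Runtime;

// Request a one-time compaction of the LOH on the next full blocking GC.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();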
What am I missing here? I cannot build for 64-bit due to the old 32-bit libraries, which contain proprietary algorithms we have no hope of producing a 64-bit version of. Are there any modes in these profilers I should be using to identify exactly what is growing out of control here?
If this address space cannot be cleared up, does this mean that all C# applications will eventually run "out of memory" because of this?
Nearly every article that I've used to get to this point suggests that out of memory actually means out of contiguous address space and given the amount that's in use, I agree with this.
This is a reasonable hypothesis, but even reasonable hypotheses can be wrong. Yours probably is wrong. What should you do?
Test it with science. That is, look for evidence that falsifies your hypothesis. Assume the cause is anything else, and let the evidence you gather force you to the conclusion that your hypothesis is not false.
So:
at the point where your application runs out of memory, is it actually out of contiguous free pages of the necessary size? It sure sounds like your observations do not indicate that this is true, so the hypothesis is probably false.
What is other evidence that the hypothesis might be false?
"After a few hours of use in jobs that have many plots, the application eventually runs out of memory."
"Uses DevExpress, a third party control vendor"
"the error dialog cannot even draw the error icon in the MessageBox"
None of this sounds like an out of memory problem. This sounds like a third party control library leaking OS handles for graphics objects. Unfortunately, such leaks usually surface as "out of memory" errors and not "out of handles" errors.
So, that's a new hypothesis. Look for evidence for and against this hypothesis too. You're doing a good job by using a memory profiler. Use a handle profiler next.
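If you don't have a handle profiler to hand, even a crude counter dumped to a log will gather evidence. GetGuiResources is a real Win32 API; the wrapper around it below is just a sketch:
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class HandleWatch {
    [DllImport("user32.dll")]
    static extern uint GetGuiResources(IntPtr hProcess, uint uiFlags);

    const uint GR_GDIOBJECTS = 0;  // GDI handles: bitmaps, DCs, fonts, ...
    const uint GR_USEROBJECTS = 1; // USER handles: windows, menus, ...

    // Dump the current counts; a steadily climbing GDI number while managed
    // memory stays flat supports the handle-leak hypothesis.
    public static void Dump() {
        var p = Process.GetCurrentProcess();
        Console.WriteLine("GDI: {0}  USER: {1}  kernel: {2}",
            GetGuiResources(p.Handle, GR_GDIOBJECTS),
            GetGuiResources(p.Handle, GR_USEROBJECTS),
            p.HandleCount);
    }
}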
If this address space cannot be cleared up does this mean that all c# applications will eventually run "out of memory" because of this?
Nope. The GC does a good job of cleaning up managed memory; lots of applications have no problem running forever without leaking.

AccessViolationException when accessing variable solution value

We've been utilizing the OR tools to solve linear optimizations in a real-time, .NET application. That is, solving linear optimizations regularly using different inputs as time progresses.
Recently we ran into an issue that we haven't seen before while running our application on a server for extended periods of time, in which seemingly random attempts to solve the optimizations were causing AccessViolationExceptions. Specifically,
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info: System.AccessViolationException
at Google.OrTools.LinearSolver.operations_research_linear_solverPINVOKE.Variable_SolutionValue(System.Runtime.InteropServices.HandleRef)
...
I'm trying to find out more specifically where this is happening in the pipeline, but given the output there I believe it is a section in which we are trying to retrieve the individual variable solution values out of the solver after solving the optimization.
We are using a wide variety of constraints over a decent sized number of variables.
Has anyone seen this before?
Reference github issue link
After some testing we found that what appears to have been happening is that the garbage collector was collecting some of the Variables we were using during the P/Invoke, as per this.
Unfortunately, this seems to be a side effect of the way that SWIG creates its .NET wrappers and their IDisposable implementations, using HandleRefs instead of something like SafeHandles, which 'handle' this as per the documentation:
Platform invoke operations automatically increment the reference count of handles encapsulated by a SafeHandle and decrement them upon completion. This ensures that the handle will not be recycled or closed unexpectedly.
More information here.
Without wanting to get into the business of creating our own SWIG typemap or compiling a new version of SWIG, .NET provides a way of keeping objects 'alive' with regard to the garbage collector: GC.KeepAlive. Calling GC.KeepAlive, at the end of the optimization procedure, on all of the objects whose values we access via P/Invoke (in our case the Solver and our Variables) prevents the garbage collector from treating them as collectible before the point of the KeepAlive call, without side effects (as per the documentation).
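In code, the fix looks roughly like this (solver and variables stand in for our own objects; only the GC.KeepAlive calls are the actual change):
// Read everything we need out of the native solver first...
double objective = solver.Objective().Value();
var values = new double[variables.Count];
for (int i = 0; i < variables.Count; i++) {
    values[i] = variables[i].SolutionValue();
}

// ...then reference the wrappers so the GC cannot finalize them (and free
// their native handles) before the P/Invoke calls above have completed.
GC.KeepAlive(solver);
foreach (var v in variables) {
    GC.KeepAlive(v);
}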
Preliminary testing has shown this to work, though given that it was already intermittently occurring before, we'll be watching for this happening going forward.
Going forward, I think the best option is either asking the SWIG project to use SafeHandles (this has been discussed before and is still an open issue) or changing the typemap to use SafeHandles directly. I may investigate the latter option myself, but because this fix ended up adding only 3 lines of code (plus a host of comments) to our code base for what seems like a full fix, it's going to be low priority for me. That said, a proper fix would be nice for an upcoming version.

How the "using" keyword Improves Performance (memory or speed wise)

I am working on some image processing scripts in .net and came across the following article outlining how to crop, resize, compress, etc.
In the first comment, someone states that the methods used in the article for imaging are notorious for memory leaks:
A quick warning to everybody thinking about using System.Drawing (or GDI+) in an ASP.NET environment. It's not supported by Microsoft, and MSDN (as of recently) clearly states that you may experience memory leaks.
Then, in the second comment, the article author effectively says "i've handled that problem":
Just to make clear, the code above isn't thrown together. It evolved over time, because, as you suggested, it is too easy to mistakenly create performance issues when using GDI+. Just see how many times I've written 'using' above!
I am wondering how (or if) the use of using effectively handles (or improves) the memory leak problems referenced in the first comment.
The using statement doesn't do anything for performance by itself. What it does is call the Dispose method on the object it is declared on.
Calling Dispose allows unmanaged resources, like those GDI+ creates for your image operations, to be deallocated. This frees up memory and Windows handles. Both matter, because if you run out of either memory or handles, your program will have trouble running.
A using statement is actually equivalent to (and is eventually compiled to) this:
var x = ...; // code from the using initialization
try
{
    // code inside the using statement
}
finally
{
    ((IDisposable)x).Dispose();
}
This makes sure the object is disposed, even if an exception occurs.
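Applied to the kind of imaging code in the article, the pattern stacks up like this (file names and sizes invented; the point is that every GDI+ object sits in a using):
using System.Drawing;

using (var src = new Bitmap("input.jpg"))
using (var dst = new Bitmap(200, 150))
using (var g = Graphics.FromImage(dst)) {
    g.DrawImage(src, new Rectangle(0, 0, dst.Width, dst.Height));
    dst.Save("output.jpg");
} // src, dst and g are all disposed here, exception or not
Stacked usings like this compile to nested try/finally blocks, so each object gets its Dispose call in reverse order of creation.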

How do you increase the heap size of a Mono for Android application?

I have a Mono for Android app that I think is running out of memory when I load and parse an XML document using the XmlDocument class multiple times in a row.
I see that the garbage collector is reporting that I only have 7367K of memory available, which seems quite low.
How can I increase this either through configuration or at runtime?
I'm afraid the virtual machine memory available to each Android application is quite limited: 16 MB in most cases, and 24 MB on some devices. I also ran into that limitation. First, check that your application has no memory leaks. If that's not enough, you may need to consider forcing calls to the garbage collector: http://docs.xamarin.com/android/advanced_topics/garbage_collection. Bear in mind, though, that calling the GC will make your application slower.
If anyone has a better option I'd be very happy to know about it!
I found that there is a bug in XmlDocument that causes it to crash in some situations (loading large XML files (~180 K) quickly in sequence). I will be reporting this to Xamarin to see if they can investigate it further.
After I converted my code to use XmlTextReader instead, the memory behavior changed. Now the system dynamically increases the heap size reported during GC cycles. The size goes up and down as necessary and nothing crashes.
With the XmlDocument code, instead of increasing the heap size, it just crashed.
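For reference, the streaming version looks roughly like this (element and attribute names invented; the point is that only the current node is ever held in memory):
using System.Xml;

using (var reader = new XmlTextReader("data.xml")) {
    while (reader.Read()) {
        if (reader.NodeType == XmlNodeType.Element && reader.Name == "item") {
            // Handle one element at a time instead of building a full DOM.
            string id = reader.GetAttribute("id");
        }
    }
}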

System.OutOfMemory being thrown. How to find the culprit?

I am using Visual C# Express 2008 and I have an application that starts up on a form, but uses a thread with a delegated display function to take care of essentially all the processing. That way my form doesn't lock up while tasks are being processed.
Semi-recently, after going through a repeated process a number of times (the program processes incoming data, so when data comes in, the process repeats) my app will crash with a System.OutOfMemory error.
The stack trace in the error message is useless because it only directs me to the line where I call the delegated form control function.
I've heard people say they use ProcMon from SysInternals to see why errors like this happen. But I, for the life of me, can't figure it out. The amount of memory I am using doesn't change as the program runs, if it goes up, it comes back down. Plus, even if it was going up, how do I figure out which part of my program is the problem?
How can I go about investigating this problem?
EDIT:
So, after delving further into this issue, I looked through anything that I was ever re-declaring. There were a few instances where I had hugematrix = new uint[gigantic], so I got rid of about 3 of those.
Instead of getting rid of the error, it is now far more obscured and confusing.
My application takes the incoming data, and renders it using OpenGL. Now, instead of throwing "System.OutOfMemory" it simply does not render anything with OpenGL.
The only difference in my code is that I do not make new matrices for holding the data I plot. That way, I hope, my array stays in the same place in memory and doesn't do anything suicidal to my LOH.
Unfortunately, this twists the beast far beyond my meager means. With zero errors popping up, and all my data structures apparently still properly filled, how can I find my problem? Does OpenGL use memory in an obscure way so as to not throw exceptions when it fails? Is memory still a problem? How do I find out? All the memory profilers in the world seem to tell me very little.
EDIT:
With the boatloads of support from this community (with extra kudos to Amissico) the error has finally been rooted out. Apparently I was adding items to an OpenGL list, and never taking them off the list.
The app that finally clued me in was .NET Memory Profiler. At the time of the crash it showed 1.5 GB of data in the <unknown> category. Through a process of elimination (everything else in the list was named), the last thing to be checked off was the OpenGL rendering pipeline. The rest is history.
Based on the description in your comments, I would suspect that you are either not disposing of your images correctly or that you have severe Large Object Heap fragmentation and, when trying to allocate for a new image, don't have enough contiguous space available. See this question for more info - Large Object Heap Fragmentation
You need to use a memory profiler, such as the ANTS Memory Profiler, to find out what causes this error.
Are you re-registering an event handler on every loop and not un-registering it?
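That one is easy to write by accident; a compilable sketch of the leak shape (all names invented):
using System;

class Publisher {
    public event EventHandler DataReceived;
    public void Raise() { DataReceived?.Invoke(this, EventArgs.Empty); }
}

class Subscriber {
    // Leak shape: subscribing on every pass and never unsubscribing means
    // the publisher holds one reference to this object per iteration, so
    // neither the handlers nor anything they capture can be collected.
    public void RunLoop(Publisher feed, int iterations) {
        for (int i = 0; i < iterations; i++) {
            feed.DataReceived += OnDataReceived; // never removed with -=
        }
    }

    void OnDataReceived(object sender, EventArgs e) { }
}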
CLR Profiler for the .NET Framework 2.0 at https://github.com/MicrosoftArchive/clrprofiler
The most common cause of memory fragmentation is excessive string creation.
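If string churn is the culprit, building the output in one buffer avoids most of the garbage; a minimal sketch:
using System.Text;

// Looped concatenation allocates a new string every pass; StringBuilder
// grows a single buffer instead, so far fewer temporaries hit the heap.
var sb = new StringBuilder();
for (int i = 0; i < 100000; i++) {
    sb.Append(i).Append(',');
}
string result = sb.ToString();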
A few considerations:
Make sure that threads you spawn are destroyed (aborted, or their functions return). Too many threads can bring an application down even when the memory usage shown in Task Manager is not especially high.
Memory leaks. Yes, you can cause them in .NET quite easily by holding on to references you no longer need. They can be tracked down with memory profilers like dotTrace or ANTS Memory Profiler.
I had an OutOfMemoryException problem as well:
Microsoft Visual C# 2008 Reducing number of loaded dlls
The reason was fragmentation of the 2 GB virtual address space, and poster nobugz suggested Sysinternals' VMMap utility, which has been very helpful for diagnostics. You can use it to check whether your free memory areas become more fragmented over time. (First sort by size, then by type; refresh, repeat the sorting, and you can see whether the contiguous free memory blocks are getting smaller.)
