Is there a way to at least postpone termination of a managed app (by a few dozen milliseconds) and set some shared flag to give other threads a chance to terminate gracefully (the SO thread itself obviously wouldn't execute anything further)? I'm contemplating using a JIT debugger or CLR hosting for this - I'm curious whether anybody has tried this before.
Why would I want to do something so wrong?
Without going into too much detail, imagine this analogy: you are in a casino betting on roulette and suddenly find out that the roulette wheel is an unreliable fake. You want to leave the casino immediately, BUT you'd likely want to collect your bets from the table first.
Unfortunately I cannot use a separate process for this, as there are very tight performance requirements.
Tried and didn't work:
.NET behavior for StackOverflowException (and contradicting info on MSDN) has been discussed several times on SO - to quickly sum up:
HandleProcessCorruptedStateExceptionsAttribute (e.g. on appdomain unhandled exception handler) doesn't work
ExecuteCodeWithGuaranteedCleanup doesn't work
legacyUnhandledExceptionPolicy doesn't work
There may be a few other ways to attempt to handle StackOverflowExceptions, but it seems apparent that the CLR terminates the whole process, as mentioned in this great answer by Hans Passant.
Considering to try:
JIT debugger - leave the thread with the exception frozen, set some shared flag (likely in a pinned location) and thaw the other threads for a short time.
CLR hosting and setting unhandled exception policy
Do you have any other idea? Or any experience (successful/unsuccessful) with those two ways?
The word "fake" isn't quite the correct one for your casino analogy. There was a magnitude 9 earth quake and the casino building along with the roulette table, the remaining chips and the player disappeared in a giant cloud of smoke and dust.
The only shot you have at running code after an SOE is to stay far away from that casino; it has to run in another process. A "guard" process that starts your misbehaving program can use Process.ExitCode to detect the crash. It will be -1073741571 (0xc00000fd). The process state is gone, so you'll have to use one of the .NET out-of-process interop methods (like WCF, named pipes, sockets, a memory-mapped file) to make the guard process aware of things that need to be done to clean up. This needs to be transactional; you cannot reason about the exact point in time that the crash occurred, since it might have died while updating the guard.
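A minimal sketch of that guard-process idea, assuming the misbehaving program is a separate executable; "MyWorker.exe" and RunCleanup() are placeholders for the real worker and whatever transactional cleanup applies:

using System.Diagnostics;

// Hypothetical guard: start the real worker and react to the stack-overflow exit code.
const int StackOverflowExitCode = unchecked((int)0xC00000FD); // -1073741571

using (var worker = Process.Start("MyWorker.exe"))
{
    worker.WaitForExit();

    if (worker.ExitCode == StackOverflowExitCode)
    {
        // The worker died from a stack overflow; clean up based on state it
        // persisted out-of-process (file, named pipe, memory-mapped file, ...).
        RunCleanup();
    }
}

static void RunCleanup()
{
    // Placeholder for the transactional cleanup ("collecting the bets").
}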
Do beware that this is rarely worth the effort, because an SOE is pretty much indistinguishable from an everyday process abort. Like getting killed by Task Manager. Or the machine losing power. Or being subjected to the effects of an earthquake :)
A StackOverflowException is an immediate and critical exception from which the runtime cannot recover - that's why you can't catch it, or recover from it, or anything else. In order to run another method (whether that's a cleanup method or anything else), you have to be able to create a stack frame for that method, and the stack is already full (that's what a StackOverflowException means!). You can't run another method because running a method is what causes the exception in the first place!
Fortunately, though, this kind of exception is always caused by program structure. You should be able to diagnose and fix the error in your code: when you get the exception, you will see in your call stack that there's a loop of one or more methods recursing indefinitely. You need to identify what the faulty logic is and fix it, and that'll be a lot easier than trying to fix the unfixable exception.
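For illustration, the most common shape of the bug is a member that accidentally calls itself, for example a property getter returning the property instead of its backing field:

class Order
{
    private decimal _total;

    public decimal Total
    {
        get { return Total; }   // bug: recurses into itself; should be "return _total;"
        set { _total = value; }
    }
}

The first read of Total recurses until the stack is exhausted, and the debugger's call stack will show the same frame repeated thousands of times, which is exactly the loop to look for.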
Related
I built an add-on to Microsoft Word. When the user clicks a button, it runs a number of processes that export a list of Microsoft Word documents to Filtered HTML. This works fine.
Where the code falls down is in processing large amounts of files. After the file conversions are done and I call the next function, the app crashes and I get this information from Visual Studio:
Managed Debugging Assistant 'DisconnectedContext' has detected a problem in 'C:\Program Files\Microsoft Office\root\Office16\WINWORD.EXE'.
Additional information: Transition into COM context 0x56255b88 for this RuntimeCallableWrapper failed with the following error: System call failed. (Exception from HRESULT: 0x80010100 (RPC_E_SYS_CALL_FAILED)). This is typically because the COM context 0x56255b88 where this RuntimeCallableWrapper was created has been disconnected or it is busy doing something else. Releasing the interfaces from the current COM context (COM context 0x56255cb0). This may cause corruption or data loss. To avoid this problem, please ensure that all COM contexts/apartments/threads stay alive and are available for context transition, until the application is completely done with the RuntimeCallableWrappers that represents COM components that live inside them.
After some testing, I realized that if I simply remove all the code after the file conversions, there are no problems. To resolve this, I place the remainder of my code in yet another button.
The problem is I don't want to give the user two buttons. After reading various other threads, it sounds like my code has a memory or threading issue. The answers I am reading do not help me truly understand what to do next.
I feel like this is what I want to do:
1- Run conversion.
2- Close thread/cleanup memory issue from conversion.
3- Continue running code.
Unfortunately, I really don't know how to do #2 or if it is even possible. Your help is very much appreciated.
or it is busy doing something else
The managed debugging assistant diagnostic you got is pretty gobbledygooky but that's the part of the message that accurately describes the real problem. You have a firehose problem, the 3rd most common issue associated with threading. The mishap is hard to diagnose because this goes wrong inside the Word plumbing and not your code.
Trying not to commit the same gobbledygook sin myself, what goes wrong is that the interop calls you make into the Office program are queued, waiting for their turn to get executed. The underlying "system call" that the error code hints at is PostMessage(). Wherever there is a queue, there is a risk that the queue gets too large. That happens when the producer (your program) is adding items to the queue far faster than the consumer (the Office program) removes them. The firehose problem. Unless the producer slows down, the queue will grow without bounds, and something is going to fail if it is allowed to grow endlessly; at a minimum the process runs out of memory.
It is not allowed to get close to that problem. The underlying queue that PostMessage() uses is protected by the OS. Windows fails the call when the queue already contains 10,000 messages. That's a fatal error that RPC does not know how to recover from, or rather should not try to recover from. Something is amiss and it isn't pretty. It returns an error code to your program to tell you about it. That's RPC_E_SYS_CALL_FAILED. Nothing much better happens in your program, the CLR doesn't know how to recover from it either, nor does your code. So the show is over, the interop call you made got lost and was not executed by Word.
Finding a completely reliable workaround for this awkward problem is not that straightforward. Beware that this can happen on any interop call, so catching the exception and trying again is drastically impractical. But do keep in mind that the Q+D fix is very simple. The plain problem is that your program is running too fast; slowing it down with a Thread.Sleep() or Task.Delay() call is quite crude but will always fix the issue. Well, assuming you delay enough.
I think, but don't know for a fact because nobody ever posts repro code, that this issue is also associated with using a console mode app or a worker thread in your program. If it is a console mode app then try applying the [STAThread] attribute to your Main() method. If it is a worker thread then call Thread.SetApartmentState() before starting the thread, but beware it is very important to also create the Application interface on that worker thread. Not otherwise a workaround for an add-in.
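For the console-app / worker-thread case, a hedged sketch of what that looks like; ConvertDocuments is a stand-in for the real interop work, and (as noted) this is not a workaround for an add-in:

using System.Threading;
using Word = Microsoft.Office.Interop.Word;

// The worker thread owns the Word Application object; the apartment state
// must be set before the thread is started.
var worker = new Thread(() =>
{
    var word = new Word.Application();
    try
    {
        ConvertDocuments(word);   // all interop calls stay on this thread
    }
    finally
    {
        word.Quit();
    }
});
worker.SetApartmentState(ApartmentState.STA);
worker.Start();

static void ConvertDocuments(Word.Application word)
{
    // Placeholder for the actual export/conversion calls.
}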
If neither of those workarounds is effective, or they are too impractical, then consider that you can automagically slow your program down, and ensure the queue is emptied, by occasionally reading something back from the Office program. Something silly, any property getter call will do. Necessarily, you can't get the property value until the Office program catches up. That can still fail, as there is also a 60 second time-out on the interop call. But that's something you can fix: you can call CoRegisterMessageFilter() in your program to install a callback that runs when the timeout trips. Very gobbledygooky as well, but the cut-and-paste code is readily available.
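A hedged sketch of the read-something-back throttle; wordApp, documentPaths and ExportDocument are assumptions standing in for the real objects and calls:

// Every N outgoing interop calls, read a cheap property back from Word.
// The getter cannot return until Word has drained the queued calls, so the
// producer automatically slows down to the consumer's pace.
int callCount = 0;
foreach (string path in documentPaths)      // assumed list of files to export
{
    ExportDocument(wordApp, path);          // assumed "fire and forget" interop call

    if (++callCount % 50 == 0)
    {
        var _ = wordApp.Documents.Count;    // any property getter will do
    }
}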
I am trying to debug some work that processes large files. The code itself works, but there are sporadic errors reported from the .NET Runtime itself. For context, the processing here is a 1.5GB file (loaded into memory once only) being processed and released in a loop, deliberately to try to reproduce this otherwise unpredictable error.
My test fragment is basically:
try {
    byte[] data = File.ReadAllBytes(path);
    for (int i = 0; i < 500; i++)
    {
        ProcessTheData(data); // deserialize and validate

        // force collection, for tidiness
        GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
        GC.WaitForPendingFinalizers();
    }
} catch (Exception ex) {
    Console.WriteLine(ex.Message);
    // some more logging; StackTrace, recursive InnerException, etc
}
(with some timing and other stuff thrown in)
The loop will process fine for a non-deterministic number of iterations, fully successfully - no problems whatsoever; then the process will terminate abruptly. The exception handler is not hit. The test does involve a lot of memory use, but it saw-tooths very nicely during each iteration (there is no obvious memory leak, and I have plenty of headroom - 14GB of unused primary memory at the worst point in the saw-tooth). The process is 64-bit.
The Windows error log contains 3 new entries, which (via exit code 80131506) suggest an Execution Engine error - a nasty little critter. A related answer suggests a GC error, with a "fix" to disable concurrent GC; however this "fix" does not prevent the issue.
Clarification: this low-level error does not hit the CurrentDomain.UnhandledException event.
Clarification: the GC.Collect is there only to monitor the saw-toothing memory, to check for memory leaks and to keep things predictable; removing it does not make the problem go away: it just makes it keep more memory between iterations, and makes the dmp files bigger ;p
By adding more console tracing, I have observed it faulting during each of:
during deserialization (lots of allocations, etc)
during GC (between a GC "approach" and a GC "complete", using the GC notification API - see the sketch below)
during validation (just foreach over some of the data) - curiously just after a GC "complete" during the validation
So lots of different scenarios.
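For reference, a sketch of the GC notification wiring used for that tracing; note that GC.RegisterForFullGCNotification requires concurrent GC to be disabled (at least on the .NET Framework of that era), which lines up with the gcConcurrent setting mentioned above:

using System;
using System.Threading;

// Ask for notifications when a full collection is approaching / has completed.
// Both thresholds are in the range 1-99; the values here are arbitrary.
GC.RegisterForFullGCNotification(10, 10);

var gcWatcher = new Thread(() =>
{
    while (true)
    {
        if (GC.WaitForFullGCApproach() == GCNotificationStatus.Succeeded)
            Console.WriteLine("GC approach at " + DateTime.Now.ToString("HH:mm:ss.fff"));

        if (GC.WaitForFullGCComplete() == GCNotificationStatus.Succeeded)
            Console.WriteLine("GC complete at " + DateTime.Now.ToString("HH:mm:ss.fff"));
    }
});
gcWatcher.IsBackground = true;
gcWatcher.Start();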
I can obtain crash-dump (dmp) files; how can I investigate this further, to see what the system is doing when it fails so spectacularly?
If you have memory dumps, I'd suggest using WinDbg to look at them, assuming that you're not doing that already.
Try running the command !EEStack (mixed native and managed stack trace), and see if there's anything that might jump out in the stack trace. In my test program, this is what I found as my stack trace one of the times a FEEE happened (I was purposefully corrupting the heap):
0:000> !EEStack
---------------------------------------------
Thread 0
Current frame: ntdll!NtWaitForSingleObject+0xa
Child-SP RetAddr Caller, Callee
00000089879bd3d0 000007fc586610ea KERNELBASE!WaitForSingleObjectEx+0x92, calling ntdll!NtWaitForSingleObject
00000089879bd400 000007fc5869811c KERNELBASE!RaiseException+0x68, calling ntdll!RtlRaiseException
[...]
00000089879bec80 000007fc49109cf6 clr!WKS::gc_heap::gc1+0x96, calling clr!WKS::gc_heap::mark_phase
00000089879becd0 000007fc49109c21 clr!WKS::gc_heap::garbage_collect+0x222, calling clr!WKS::gc_heap::gc1
00000089879bed10 000007fc491092f1 clr!WKS::GCHeap::RestartEE+0xa2, calling clr!Thread::ResumeRuntime
00000089879bed60 000007fc4910998d clr!WKS::GCHeap::GarbageCollectGeneration+0xdd, calling clr!WKS::gc_heap::garbage_collect
00000089879bedb0 000007fc4910df9c clr!WKS::GCHeap::Alloc+0x31b, calling clr!WKS::GCHeap::GarbageCollectGeneration
00000089879bee00 000007fc48ff82e1 clr!JIT_NewArr1+0x481
Since this could be related to heap corruption from the garbage collector, I would try the !VerifyHeap command. At least you could make sure that the heap is intact (and your problem lies elsewhere) or discover that your issue might actually be with the GC or some P/Invoke routines corrupting it.
If you find that the heap is corrupt, I might try and discover how much of the heap is corrupted, which you might be able to do via !HeapStat. That might just show the entire heap corrupt from a certain point, though.
It's difficult to suggest any other methods to analyze this via WinDbg, since I have no real clue about what your code is doing or how it's structured.
I suppose if you find it to be an issue with the heap and thus meaning it could be GC weirdness, I would look at the CLR GC events in Event Tracing for Windows.
If the minidumps you're getting aren't cutting it and you're using Windows 7/2008R2 or later, you can use Global Flags (gflags.exe) to attach a debugger when the process terminates without an exception, if you're not getting a WER notification.
In the Silent Process Exit tab, enter the name of the executable, not the full path to it (i.e. TestProgram.exe). Use the following settings:
Check Enable Silent Process Exit Monitoring
Check Launch Monitor Process
For the Monitor Process, use {path to debugging tools}\cdb.exe -server tcp:port=5005 -g -G -p %e.
And apply the settings.
When your test program crashes, cdb will attach and wait for you to connect to it. Start WinDbg, type Ctrl+R, and use the connection string: tcp:port=5005,server=localhost.
You might be able to skip using remote debugging and instead use {path to debugging tools}\windbg.exe %e. However, the reason I suggested remote instead, was because WerFault.exe, which I believe is what reads the registry and launches the monitor process, will start the debugger in Session 0.
You can make session 0 interactive and connect to the window station, but I can't remember how that's done. It's also inconvenient, because you'd have to switch back and forth between sessions if you need to access any of your existing windows you've had open.
Tools->Debugging->General->Enable .NET Framework Debugging
+
Tools->IntelliTrace->IntelliTrace events and call information
+
Tools->IntelliTrace->Store IntelliTrace recordings in this directory
and choose a directory
should allow you to step INTO .NET code and trace every single function call.
I tried it on a small sample project and it works.
After each debug session it is supposed to create a recording of the debug session in the set directory - even if the CLR dies, if I'm not mistaken.
This should allow you to get to the exact call before the CLR collapsed.
Try writing a generic exception handler and see if there is an unhandled exception killing your app.
AppDomain currentDomain = AppDomain.CurrentDomain;
currentDomain.UnhandledException += new UnhandledExceptionEventHandler(MyExceptionHandler);

static void MyExceptionHandler(object sender, UnhandledExceptionEventArgs e)
{
    Console.WriteLine(e.ExceptionObject.ToString());
    Console.WriteLine("Press Enter to continue");
    Console.ReadLine();
    Environment.Exit(1);
}
I usually investigate memory-related problems with Valgrind and gdb.
If you run your things on Windows, there are plenty of good alternatives such as verysleepy for callgrind as suggested here:
Is there a good Valgrind substitute for Windows?
If you really want to debug internal errors of the .NET runtime, you have the problem that there is no source for either the class libraries or the VM.
Since you can't debug what you don't have, I suggest that (apart from decompiling the .NET Framework libraries in question with ILSpy and adding them to your project, which still doesn't cover the VM) you could use the Mono runtime.
There you have the source of both the class libraries and the VM.
Maybe your program works fine with Mono; then your problem would be solved, at least as long as it's only a one-time processing task.
If not, there is an extensive FAQ on debugging, including GDB support
http://www.mono-project.com/Debugging
Miguel also has this post regarding valgrind support:
http://tirania.org/blog/archive/2007/Jun-29.html
In addition to that, if you let it run on Linux, you can also use strace, to see what's going on in the syscalls. If you don't have extensive winforms usage or WinAPI calls, .NET programs usually work fine on Linux (for problems regarding file system case-sensitivity, you can loopmount a case-insensitive file system and/or use MONO_IOMAP).
If you're a Windows-centric person, this post
says the closest thing Windows has is WinDbg's Logger.exe, but ltrace information is not as extensive.
Mono sourcecode is available here:
http://download.mono-project.com/sources/
You are probably interested in the sources of the latest mono version
http://download.mono-project.com/sources/mono/mono-3.0.3.tar.bz2
If you need Framework 4.5, you'll need Mono 3; you can find precompiled packages here
https://www.meebey.net/posts/mono_3.0_preview_debian_ubuntu_packages/
If you want to make changes to the sourcecode, this is how to compile it:
http://ubuntuforums.org/showthread.php?t=1591370
There are .NET exceptions which can not be caught. Check out: http://msdn.microsoft.com/en-us/magazine/dd419661.aspx.
I've got an application that uses performance counters and that has worked for months. Now, on my dev machine and another developer's machine, it has started hanging when I call PerformanceCounterCategory.Exists. As far as I can tell, it hangs indefinitely. It does not matter which category I use as input, and other applications using the API exhibit the same behaviour.
Debugging (using MS Symbol Servers) has shown that it is a call to Microsoft.Win32.RegistryKey that hangs. Further investigation shows that it is this line that hangs:
while (Win32Native.ERROR_MORE_DATA == (r = Win32Native.RegQueryValueEx(hkey, name, null, ref type, blob, ref sizeInput))) {
This is basically a loop that tries to allocate enough memory for the performance counter data. It starts at size = 65000 and does a few iterations. In the 4th call, when size = 520000, Win32Native.RegQueryValueEx hangs.
Furthermore, rather worryingly, I found this comment in the reference source for PerformanceCounterLib.GetData:
// Win32 RegQueryValueEx for perf data could deadlock (for a Mutex) up to 2mins in some
// scenarios before they detect it and exit gracefully. In the mean time, ERROR_BUSY,
// ERROR_NOT_READY etc can be seen by other concurrent calls (which is the reason for the
// wait loop and switch case below). We want to wait most certainly more than a 2min window.
// The curent wait time of up to 10mins takes care of the known stress deadlock issues. In most
// cases we wouldn't wait for more than 2mins anyways but in worst cases how much ever time
// we wait may not be sufficient if the Win32 code keeps running into this deadlock again
// and again. A condition very rare but possible in theory. We would get back to the user
// in this case with InvalidOperationException after the wait time expires.
Has anyone seen this behaviour before ? What can I do to resolve this ?
This issue is now fixed, and since there have been no answers here, I will add one in case the question is found in future searches.
I ultimately fixed this error by stopping the print spooler service (as a temporary measure).
It looks like the reading of Performance counters actually needs to enumerate the printers on the system (confirmed by a WinDbg dump of a hanging process, where I can see in the stack trace that winspool is enumerating printers, and is stuck in a network call). This was what was actually failing on the system (and sure enough, opening the "Devices and printers" window also hung). It baffles me that a printer/network issue can actually make the performance counters go down. One would think that there was some sort of fail-safe built in for such a case.
What I am guessing is that this is caused by a bad printer/driver on the network. I haven't re-enabled printing on the affected systems yet, since we are hunting for the bad printer.
This really didn't help in my case; any operation that uses the performance counter category will hang in there forever.
I am thinking it is more a problem of memory allocation for the call, or something related to the resources of the machine. I don't have a way to prove it, but exactly the same sample call to, for example, the PerformanceCounterCategory.Exists method will run just fine on a computer with 32GB of RAM while hanging on another with only 16GB. If I get a chance to install more memory to test and verify this assumption, I will update this ticket.
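For anyone needing a stopgap while the root cause is tracked down, a hedged sketch of probing the category on a background task with a timeout; it keeps the caller from freezing, but it does not fix the underlying hang (the abandoned probe still occupies a hung thread):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

static bool CategoryExistsWithTimeout(string category, TimeSpan timeout)
{
    var probe = Task.Run(() => PerformanceCounterCategory.Exists(category));

    // Wait returns false if the probe is still stuck when the timeout expires.
    return probe.Wait(timeout) && probe.Result;
}

// Usage: bool ok = CategoryExistsWithTimeout("Processor", TimeSpan.FromSeconds(10));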
I'm getting close to desperate... I am developing a field service application for Windows Mobile 6.1 using C# and quite a lot of P/Invoking (I think I'm referencing about 50 native functions).
Under normal circumstances this goes without any problem, but when I start stressing the GC I'm getting a nasty 0xC0000005 error which seems uncatchable. In my test I'm rapidly closing and opening a dialog form (the form did make use of native functions, but for testing I commented these out) and after a while the Windows Mobile error reporter comes around to tell me that there was a fatal error in my application.
My code uses a try-catch around the Application.Run(masterForm); and hooks into the CurrentDomain.UnhandledException event, but the application still crashes. Even when I attach the debugger, Visual Studio just tells me "The remote connection to the device has been lost" when the exception occurs.
Since I didn't succeed in catching the exception in the managed environment, I tried to make sense of the Error Reporter log file. But it doesn't make any sense either; the only consistent thing about the error is the application it occurs in.
The thread the error occurs in is unknown to me, the module where the error occurs differs from time to time (I've seen my application.exe, WS2.dll, netcfagl3_5.dll and mscoree3_5.dll), and even the error code is not always the same (most of the time it's 0xC0000005, but I've also seen an 0x80000002 error, which is a warning according to the first byte?).
I tried debugging through BugTrap, but strangely enough this crashes with the same error code (0xC0000005). I tried to open the kdmp file with Visual Studio, but I can't seem to make any sense of it because it only shows me disassembler code when I step into the error (unless I have the right .pdb files, which I don't). Same goes for WinDbg.
To make a long story short: I frankly don't have a single clue where to look for this error, and I'm hoping some bright soul on Stack Overflow does. I'm happy to provide some code, but at this moment I don't know which piece to provide.
Any help is greatly appreciated!
[EDIT May 3rd 2010]
As you can see in my comment to Hans I retested the whole program after I uncommented all P/Invokes, but that did not solve my problem. I tried reproducing the error with as little code as possible and eventually it looks like multi-threaded access is the one giving me all the problems.
In my application I have a user control that functions as a finger/flick scroll list. In this control I use a bitmap for each item in the list as a canvas. Drawing on this canvas is handled by a separate thread, and when I disable this thread the error seems to disappear. I'll do some more tests on this and will post the results here.
Catching this exception is not an option. It is the worst kind of heart attack a thread can suffer, the CPU has detected a serious problem and cannot continue running code. This is invariably caused by misbehaving unmanaged code, it sounds like you've got plenty of it running in your program. You need to focus on debugging that unmanaged code to get somewhere.
The two most common causes of an AV are
Heap corruption. The unmanaged code has written data to the heap improperly, destroying the structural integrity of the heap. Typically caused by overflowing the boundary of an allocated block of memory. Or using a heap block after it was freed. Very hard to diagnose, the exception will be raised long after the damage was done.
Stack corruption. Most typically caused by overflowing the boundaries of an array that was allocated on the stack. This can overwrite the values of other variables on the stack or destroy the function return address. A bit easier to diagnose, it tends to repeat well and has an immediate effect. One side-effect is that the debugger loses its ability to display the call stack right after the damage was done.
Heap corruption is the likely one and the hard one. This is most typically tackled by debugging the code in the debug build with a debug allocator that watches the integrity of the heap. The <crtdbg.h> header provides one. It's not a guaranteed approach, you can have some really nasty Heisenbugs that only rear their head in the Release build. Very few options available then, other than careful code review. Good luck, you'll need it.
It turns out to be an exception caused by Interlocked.
In my code there is an integer _drawThreadIsRunning which is set to 1 when the draw-thread is running, and set to 0 otherwise. I set this value using Interlocked:
if (Interlocked.Exchange(ref _drawThreadIsRunning, 1) == 0) { /* run thread */ }
When I change this line the whole thing works, so it seems there is a problem with thread safety somewhere, but I can't figure it out (i.e. I don't want to waste more time figuring it out).
Thanks for the help guys!
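For completeness, a hedged sketch of what "changing this line" could look like with a plain lock and a bool flag instead of Interlocked.Exchange on the int field (the member names are assumptions):

private readonly object _drawGate = new object();
private bool _drawThreadIsRunning;

private bool TryStartDrawThread()
{
    lock (_drawGate)
    {
        if (_drawThreadIsRunning)
            return false;            // a draw thread is already running

        _drawThreadIsRunning = true;
    }

    // ... start the draw thread here; when it finishes, it should take the
    // same lock and set _drawThreadIsRunning back to false.
    return true;
}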
What is Environment.FailFast?
How is it useful?
It is used to kill an application: it's a static method that will instantly terminate the process without being caught by any exception blocks.
Environment.FailFast(String) can actually be a great debugging tool. For example, say you have an application that is just downright giving you some weird output. You have no idea why. You know it's wrong, but there are just no exceptions bubbling to the surface to help you out. Well, if you have access to Visual Studio 2005's Debug->Exceptions... menu item, you can actually tell Visual Studio to let you see those first-chance exceptions. If you don't have that, however, you can put Environment.FailFast(String) in an exception handler, and use deductive reasoning and process of elimination to find out where your problem is.
Reference
It also creates a dump and event viewer entry, which might be useful.
It's a way to immediately exit your application without throwing an exception.
Documentation is here.
Might be useful in some security or data-critical contexts.
FailFast can be used in situations where you might be endangering the user's data. Say in a database engine, when you detect a corruption of your internal data structures, the only sane course of action is to halt the process as quickly as possible, to avoid writing garbage to the database and risking corrupting it and losing the user's data. This is one possible scenario where FailFast is useful.
Another use is to catch programmer errors. Say you are writing a library and some function accepts a pointer that cannot be null in any circumstance; that is, if it's null, you are clearly in the presence of a programmer error. You can return an error like E_POINTER or throw some InvalidArgument exception and hope someone notices, but you'll get their attention better by failing fast :-)
Note that I'm not restricting the example to pointers, you can generalize to any parameter or condition that should never happen. Failing fast ultimately results in better quality apps, as many bugs no longer go unnoticed.
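A minimal sketch of that kind of guard; the method and the invariant are made up for illustration:

using System;

static void WriteRecord(byte[] record)
{
    if (record == null)
    {
        // Broken invariant: don't limp on and risk corrupting data.
        // Capture the state in a dump and terminate immediately.
        Environment.FailFast("WriteRecord was called with a null record - programmer error");
    }

    // ... normal processing ...
}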
Finally, failing fast helps with capturing the state of the process as faithfully as possible (as a memory dump gets created), in particular when failing fast immediately upon detecting an unrecoverable error or a really unexpected condition.
If the process was allowed to continue, say the 'finally' clauses would run or the stack would be unwound, and things would get destroyed or disposed of before a memory dump is taken; then the state of the process might be altered in such a way that makes it much more difficult to diagnose the root cause of the problem.
It kills the application and even skips try/finally blocks.
From .NET Framework Design Guidelines on Exception Throwing:
✓ CONSIDER terminating the process by calling System.Environment.FailFast (.NET Framework 2.0 feature) instead of throwing an exception if your code encounters a situation where it is unsafe for further execution.
Joe Duffy discusses failing fast and the discipline to make it useful, here.
http://joeduffyblog.com/2014/10/13/if-youre-going-to-fail-do-it-fast/
Essentially, he's saying that for programming bugs - i.e. unexpected errors that are the fault of the programmer and not of the program's user, or of other inputs or situations that can reasonably be expected to be bad - deciding to always fail fast on unexpected errors has been seen to improve code quality.
I think that since it's an optional team decision and discipline, use of this API in C# is rare, since in reality we're all mostly writing LoB apps for 12 people in HR, or an online shop at best.
So for us, we'd maybe use this when we want to deny the consumer of our API the opportunity of making any further moves.
An unhandled exception that is thrown (or rethrown) within a Task won't take effect until the Task is garbage-collected, at some perhaps-random time later.
This method lets you crash the process now -- see this answer.
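A hedged sketch of that pattern; DoBackgroundWork is a placeholder for whatever the task actually does:

using System;
using System.Threading.Tasks;

Task.Run(() =>
{
    try
    {
        DoBackgroundWork();
    }
    catch (InvalidOperationException ex)
    {
        // Unrecoverable state: take the whole process down right now, with a
        // dump and an event log entry, instead of leaving the exception
        // unobserved on the task.
        Environment.FailFast("Unrecoverable error in background work", ex);
    }
});

static void DoBackgroundWork()
{
    // Placeholder for the real work.
}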