C# code very slow with debugger attached; MemoryMappedFile's fault?

I have a client/server app. The server component runs standalone and uses WCF in a 'remoting' fashion (binary formatter, session objects).
If I start the server component and launch the client, the first task the server does completes in <0.5sec.
If I start the server component with VS debugger attached, and then launch the client, the task takes upwards of 20sec to complete.
There are no code changes - no conditional compilation changes. The same occurs whether I have the server component compiled and running in 32-bit, 64-bit, with the VS hosting process, without the VS hosting process, or any combination of those things.
Possibly important: If I use the VS.NET profiler (sampling mode), the app runs as quickly as if there were no debugger attached, so I can't diagnose it that way. I just checked: instrumentation mode also runs quickly, as does the concurrency profiling mode.
Key data:
The app uses fairly heavy multithreading (40 threads in the standard thread pool). Creating the threads happens quickly regardless and is not a slow point. There are many locks, WaitHandles, and Monitor patterns.
The app raises no exceptions at all.
The app creates no console output.
The app is entirely managed code.
The app does map a few files on disk via MemoryMappedFile: one 750MB, twelve 8MB, and a few smaller ones
Measured performance:
CPU use is minimal in both cases; when the debugger is attached, CPU sits at <1%
Memory use is minimal in both cases; maybe 50 or 60MB
There are plenty of page faults happening (expected, given the memory-mapped files), however they happen more slowly when the debugger is attached
If the VS hosting process is not used and the 'remote debugging monitor' comes into play instead, that monitor uses a decent amount of CPU and creates a good number of page faults. But that's not the only time the problem occurs
The performance difference is seen regardless of how the client is run. The only variable being changed is the server component being run via 'Start with debugging' vs launched from Explorer.
My ideas:
WCF slow when debugged?
MemoryMappedFiles slow when debugged?
40 threads used - slow to debug? Perhaps Monitors/locks notify debugger? Thread scheduling becomes strange/context switches very infrequent?
Cosmic background radiation granting intelligence and cruel sense of humour to VS
All seem stupidly unlikely.
So, my questions:
Why is this happening?
If #1 unknown, how can I diagnose / find out?

Since this is one of the first results when googling for this issue, I would like to add my solution here in the hope of saving someone the two hours of research it cost me.
My code slowed down from 30 seconds without the debugger attached to 4 minutes with the debugger, because I had forgotten to remove a conditional breakpoint. Conditional breakpoints seem to slow down execution tremendously, so watch out for them.

Exceptions can notably impact the performance of an application. There are two types of exceptions: 1st chance exceptions (the ones gracefully handled with a try/catch block) and unhandled exceptions (which will eventually crash the application).
By default, the debugger does not show 1st chance exceptions; it shows only unhandled exceptions. Also by default, it shows only exceptions occurring in your code. However, even when it does not show them, it still processes them, so performance may be impacted (especially in load tests, or big loop runs).
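To illustrate the distinction, a minimal sketch (the parsed strings are illustrative):

    using System;

    class FirstChanceDemo
    {
        static void Main()
        {
            try
            {
                int.Parse("not a number");   // throws FormatException
            }
            catch (FormatException)
            {
                // A "1st chance" exception: handled here, so the app keeps
                // running, but an attached debugger is still notified of the
                // throw before this handler executes - and that costs time.
            }

            int.Parse("still not a number"); // no handler anywhere: this one
                                             // is unhandled and crashes the app
        }
    }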
To enable the display of 1st chance exceptions in Visual Studio, click "Debug | Exceptions" to invoke the Exceptions dialog, and check "Thrown" in the "Common Language Runtime Exceptions" section (you can be more specific and choose which 1st chance exceptions you want to see).
To enable the display of 1st chance exceptions originating from anywhere in the application, not just from your code, click "Tools | Options | Debugging | General" and disable the "Enable Just My Code" option.
And for these specific "forensics mode" cases, I also strongly recommend enabling .NET Framework Source Stepping (it requires "Enable Just My Code" to be disabled). It's very useful for understanding what's going on; sometimes just looking at the call stack is very inspiring - and especially helpful in the case of a cosmic-radiation mixup :-)
Two related interesting articles:
How to debug crashes and hangs
Configuring Visual Studio to Debug .NET Framework Source Code

Possible causes:
Various special kinds of breakpoints such as:
Conditional breakpoints
Memory changed breakpoints
Function breakpoints
Having the "Enable native code debugging" option checked.
This option makes debug runs slow as molasses.
This option is not under Tools -> Options -> Debugging (that would make too much sense); it is under Project -> Properties -> Debug.
Excessive use of System.Diagnostics.Debug.Write().
My benchmarks show that 1000 invocations of Debug.WriteLine() take only 10 milliseconds when running without debugging, but a whole 500 milliseconds when debugging (see the benchmark sketch after this list). Apparently the Visual Studio debugger, when active, intercepts DotNet debug output and does extremely time-consuming stuff with it. (Over decades of using Microsoft products, we have come to expect nothing less from Microsoft.)
Replacing Debug.WriteLine() with kernel32.dll -> PInvoke -> OutputDebugStringW() does not help, because when running a DotNet application, Visual Studio completely ignores kernel32 debug output and only displays DotNet debug output, which is a completely different thing. (And I suppose that anything else would, again, make too much sense.)
Excessive amount of exceptions being thrown and caught, as another answer suggests.
Throwing an exception under DotNet is a mind-bogglingly slow operation.
Collecting a stack trace under DotNet is an insanely slow operation.
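A minimal benchmark sketch of the Debug.WriteLine() and throw/catch costs claimed above (counts and messages are illustrative; build the Debug configuration so the DEBUG symbol is defined, then run once with F5 and once with Ctrl+F5 to compare):

    using System;
    using System.Diagnostics;

    class DebuggerCostBenchmark
    {
        static void Main()
        {
            Console.WriteLine("Debugger attached: " + Debugger.IsAttached);

            // Cost of Debug.WriteLine: an attached debugger intercepts each
            // message. (Debug.WriteLine emits nothing unless DEBUG is defined.)
            Stopwatch sw = Stopwatch.StartNew();
            for (int i = 0; i < 1000; i++)
                Debug.WriteLine("message " + i);
            Console.WriteLine("1000x Debug.WriteLine: " + sw.ElapsedMilliseconds + " ms");

            // Cost of thrown-and-caught (1st chance) exceptions.
            sw.Reset();
            sw.Start();
            for (int i = 0; i < 1000; i++)
            {
                try { throw new InvalidOperationException(); }
                catch (InvalidOperationException) { /* swallowed on purpose */ }
            }
            Console.WriteLine("1000x throw/catch: " + sw.ElapsedMilliseconds + " ms");
        }
    }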

Related

Program hanging point identification

There is a C# program which hangs pretty rarely. The program runs on remote machines, and starting a debugger is not an option. Running an external profiler is more realistic, but also fraught with huge difficulties. How can I determine the point at which the program hangs without a profiler or debugger?
The option of detailed logging to the file system is poorly suited: the program consists of about 20 thousand lines of code and does not hang often.
I have tried Process Explorer, but it behaves very strangely (or I have not understood it). If you manage to "catch" the moment when a thread enters an infinite loop, you can see its stack at that moment, but the thread disappears quite quickly (whether only in Process Explorer or because it is really killed by the environment, I cannot tell).
The option of creating another application, an application-monitor, is acceptable. If you can say how to create a dump of the main process, or how to obtain information about its threads, that would be great (see the sketch below). If you have some ready-made tools, that would be even better.
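As a sketch of the application-monitor idea: write a minidump of the target process via dbghelp.dll's MiniDumpWriteDump, which WinDbg or Visual Studio can open later to show every thread's stack. The process name, dump path, and dump-type flag here are placeholders:

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Runtime.InteropServices;
    using Microsoft.Win32.SafeHandles;

    class DumpWriter
    {
        const uint MiniDumpWithFullMemory = 0x2;   // MINIDUMP_TYPE flag

        [DllImport("dbghelp.dll", SetLastError = true)]
        static extern bool MiniDumpWriteDump(
            IntPtr hProcess, uint processId, SafeFileHandle hFile, uint dumpType,
            IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

        static void Main()
        {
            // "MyServer" is a hypothetical process name.
            Process target = Process.GetProcessesByName("MyServer")[0];
            string path = "hang_" + DateTime.Now.ToString("yyyyMMdd_HHmmss") + ".dmp";
            using (FileStream fs = new FileStream(path, FileMode.Create))
            {
                if (!MiniDumpWriteDump(target.Handle, (uint)target.Id,
                                       fs.SafeFileHandle, MiniDumpWithFullMemory,
                                       IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                    throw new System.ComponentModel.Win32Exception();
            }
        }
    }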
When an application crashes, it is normally logged in Windows' Application event log. It's not extremely detailed, but it should give pretty solid clues without any external tools needed.
To get there, you can either search "Event Log" in the Start Menu or find it in the Control Panel. It is located in the Administrative Tools section.
Once you're in the Event Viewer, open the Windows Logs item on the left then select Application. You should be able to find your application in the list using the Source column.
At the bottom you'll find the error details, the timestamp, and a few more pieces of information which can help you debug your application.
By 'hang' do you mean that the program stops working until it is restarted, or that it pauses for an unusual amount of time? If the latter, it could be in a heavy GC collection. If the former, and you suspect some sort of infinite loop, then in Task Manager (or Process Explorer) you should see it pretty much eating up one of the processor cores. For example, if you have four cores and a program is hung in a tight loop, you will see roughly 25% CPU usage in the performance panel (assuming an otherwise lightly loaded machine).
MS supports managed debugging with the Windows Debugger; see Debugging Managed Code Using the Windows Debugger. You can use the SOS extension to break into the code execution and look at the state of the program. You might want to have the program's PDB files handy if you take this approach.
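Building on the CPU-usage observation above, a small monitor sketch ("MyServer" is again a hypothetical process name): sample the target's processor time periodically; a thread stuck in a tight loop shows roughly one busy core sustained, while a deadlocked or blocked process shows near zero:

    using System;
    using System.Diagnostics;
    using System.Threading;

    class CpuWatchdog
    {
        static void Main()
        {
            Process target = Process.GetProcessesByName("MyServer")[0];
            TimeSpan last = target.TotalProcessorTime;
            while (!target.HasExited)
            {
                Thread.Sleep(5000);
                target.Refresh();
                TimeSpan now = target.TotalProcessorTime;

                // CPU time consumed per wall-clock second, i.e. busy cores.
                double busyCores = (now - last).TotalMilliseconds / 5000.0;
                Console.WriteLine("{0:T}  busy cores: {1:F2}", DateTime.Now, busyCores);
                last = now;
            }
        }
    }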

Visual Studio 2010 extremely slow when populating ListBoxes while debugging

While debugging inside VS2010, programs naturally run a lot slower than otherwise.
However, lately my programs run at an indescribably slow rate if I'm updating the values of a ListBox. (Other controls may also be affected, I'm not sure... but ListBox is a sure thing).
Operations which happen in tiny fractions of a second outside the debugger, like adding 100 elements to a ListBox, can take as long as 3 to 5 minutes inside VS.
Clearly, this isn't normal behaviour.
I'm not sure when this started, but it hasn't been happening always. It started happening a couple of months ago. Maybe when I installed the service pack? I'm not sure.
When I look at the processes, msvsmon.exe is chewing through CPU.
Any ideas if there is some option somewhere that I may have changed which causes this? I'm trying to debug something with a ListBox containing 8,000 elements and it's just completely impossible.
Windows 7 x64, 4GB RAM, VS2010-SP1
Yes, I can see a lot of System.InvalidCastExceptions in the output window
That's what causes the slowdown; the debugger does a lot of work when it processes an exception. That's especially true of the remote debugger you are using now, required because your project's platform target is AnyCPU; even adding the notification message to the Output window isn't cheap.
You can't ignore this problem; it is not just a debugger artifact. Debug + Exceptions, tick the Thrown box for CLR Exceptions. The debugger will now stop when the exception is thrown. You'll need to fix that code (the usual shape of the fix is sketched below).
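The usual shape of that fix, sketched with illustrative types and data: replace a hard cast, which throws on failure, with a conditional cast that yields null:

    using System;

    class CastFixDemo
    {
        static void Main()
        {
            object[] items = { "a", 42, "b" };   // mixed contents, like ListBox items

            // Before: the hard cast throws InvalidCastException for every
            // non-string item, and each throw costs the debugger dearly.
            foreach (object item in items)
            {
                try { string s = (string)item; Console.WriteLine(s); }
                catch (InvalidCastException) { /* skipped */ }
            }

            // After: the conditional cast yields null instead of throwing.
            foreach (object item in items)
            {
                string s = item as string;
                if (s != null)
                    Console.WriteLine(s);
            }
        }
    }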
The problem might be the way VS2010 handles breakpoints. Look at this link:
VS2010 Debug entry very slow
Two interesting notes:
Searching for symbols is often very slow at the start of a debug session, particularly if you have one of the remote symbol options configured and have not set 'ignores' on the various DLLs which will not have symbols on the MS servers.
...
Yes, msvsmon.exe will be used when you debug a 64-bit program. Since Visual Studio is completely 32-bit, the remote debugger is needed to bridge the divide. ... Working mightily to find and load the .pdb files would be likely. Or accidentally having the mixed-mode debugging option turned on, so the debugger is also seeing all unmanaged DLL loads and finding symbols for them. These are just guesses of course.
One more cause of slowness: conditional breakpoints, as the condition needs to be evaluated on each hit of the breakpoint. Having a breakpoint whose condition is always false inside a long loop will slow debugging significantly (a cheaper in-code alternative is sketched below).
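A sketch of that cheaper alternative (loop bounds and condition are illustrative): evaluate the condition in compiled code and only then drop into the debugger, instead of letting the debugger evaluate it on every hit:

    using System.Diagnostics;

    class BreakInCode
    {
        static void Main()
        {
            for (int i = 0; i < 1000000; i++)
            {
                // Instead of a conditional breakpoint with condition
                // "i == 999999" (evaluated by the debugger on every hit),
                // compile the test into the code and break programmatically.
                // The loop runs at full speed until the condition is true.
                if (i == 999999 && Debugger.IsAttached)
                    Debugger.Break();
            }
        }
    }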

Visual Studio - Debug vs Release

I built a windows service, targeted for .NET 2.0 in VS 2008. I run it as a console app to debug it.
Console app is working great. I put it on my local computer as a service, compiled in debug mode, still working great. I'm ready to release now, and suddenly, when I set it to release mode, the service compiles and installs, but nothing happens. (No code in service is running at all).
I realize that release vs. debug are build configuration settings, but even when, in release mode, I check 'Define DEBUG constant', uncheck 'Optimize code', and set 'Debug info' to 'full', it still does nothing.
Set it back to debug and it's working like a charm again.
(As a sidenote, I tried resetting the target framework to 3.5 to make sure that wasn't the issue, too)
So my questions (in order of importance) are these:
Will using my "debug" version in any way ever cause any problems?
What settings are different between debug and release besides the three I've been trying to change already?
This seems like a weird error to me and has stirred up my curiosity. Any idea what would cause this?
EDIT:
Should mention, I already am using a custom installer. Basically I compile the program (in either debug or release) and then install it with the respective installer.
1) It might: if not directly, then indirectly, by making the application slower and making it use more memory.
2) When it runs in debug mode, there are certain things that work differently, for example:
The code is compiled with some extra NOP instructions, so that there is at least one instruction at the beginning of each source line, making it possible to place a breakpoint on any line.
The instructions can be rearranged in release mode, but not in debug mode, so that the code can be single-stepped and execution corresponds to the exact order of the source code.
The garbage collector works differently, letting references survive throughout their entire scope instead of only for the time that they are used, so that variables can be viewed in debug mode without going away before the scope ends (see the sketch after this list).
Exceptions contain more information and take a lot longer to process when thrown.
All those differences are relatively small, but they are actual differences and they may matter in some cases.
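A classic sketch of the garbage-collector difference mentioned in the list above (the timer and interval are illustrative):

    using System;
    using System.Threading;

    class GcLifetimeDemo
    {
        static void Main()
        {
            // In a Release build the JIT reports 'timer' as dead right after
            // this line, so the GC may reclaim it and the ticks stop; in a
            // Debug build the reference stays alive to the end of the scope.
            Timer timer = new Timer(delegate { Console.WriteLine("tick"); },
                                    null, 0, 500);

            GC.Collect();                    // encourage collection for the demo
            GC.WaitForPendingFinalizers();
            Console.ReadLine();              // in Release, ticks may stop arriving

            // The fix when this bites for real: keep the reference reachable.
            // GC.KeepAlive(timer);
        }
    }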
If you see a great difference in performance between debug mode and release mode, it's usually because there is something wrong with the code, for example throwing and catching a huge number of exceptions. If there is a race condition in the code, it may only show up in release mode, because the extra overhead in debug mode makes the code run slightly slower.
3) As to what the problem with your service is, I don't know, but it doesn't seem to be related to how the code is executed in debug mode or release mode. The code would start in any case, and if it was a problem with the code, it would crash and you would be able to see it in the event log.
I'm not sure I can speak to #1 or #2, but when I've had problems like that, it was because of incorrect threading/concurrency. I'm not sure how large your app is, but that might be a good place to start.

Performance Counters not being released

All:
I am using some custom Performance Counters that I have created. These are multi-instance, with a lifetime of "Process".
The problem: when I'm debugging in VS, if I stop the process and then start it again, I get an exception when my code attempts to create my performance counters. The exception indicates that the performance counters already exist and that I cannot create them until the owning process releases them.
Once I get this error, there seems to be only one way out: I have to close and restart Visual Studio. It's as though VS takes ownership of my process-lifetime performance counters even though they were really created by the hosted process. Any idea what I can do about this?
BTW: the problem only seems to surface if my code actually writes to a performance counter before it is shut down.
I think you're doing battle with the Visual Studio hosting process. It is a helper .exe that hosts the CLR to improve the debugging experience, and it is always running while you've got a project loaded into VS. Project + Properties, Debug tab, scroll down, uncheck the "Enable the Visual Studio hosting process" checkbox.
This does affect the debugging session somewhat; most notably, the output written by Console.WriteLine() in your program no longer shows up in the Output window. Some obscure, not-at-all-well-documented security options change as well. I doubt you'll have a problem.
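If turning the hosting process off is not an option, one workaround sketch is to delete and recreate the category at startup, which releases instances a previous run still owns. The category and counter names here are hypothetical, and deleting a category requires administrative rights:

    using System.Diagnostics;

    class CounterSetup
    {
        const string Category = "MyAppCounters";   // hypothetical category name

        static void EnsureFreshCounters()
        {
            // If a previous run (kept alive by the VS hosting process) still
            // owns the process-lifetime instances, delete and recreate the
            // whole category to release them.
            if (PerformanceCounterCategory.Exists(Category))
                PerformanceCounterCategory.Delete(Category);

            CounterCreationDataCollection counters = new CounterCreationDataCollection();
            counters.Add(new CounterCreationData(
                "Requests", "Requests handled by this instance",
                PerformanceCounterType.NumberOfItems64));

            PerformanceCounterCategory.Create(
                Category, "Counters for MyApp",
                PerformanceCounterCategoryType.MultiInstance, counters);
        }
    }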

Reasons to NOT run a business-critical C# console application via the debugger?

I'm looking for a few talking points I could use to convince coworkers that it's NOT OK to run a 24/7 production application by simply opening Visual Studio and running the app in debug mode.
What's different about running a compiled console application vs. running that same app in debug mode?
Are there ever times when you would use the debugger in a live setting? (live: meaning connected to customer facing databases)
Am I wrong in assuming that it's always a bad idea to run a live configuration via the debugger?
You will suffer from reduced performance when running under the debugger (not to mention the complexity concerns mentioned by Bruce), and there is nothing to keep you from getting the same functionality as running under the debugger when compiled in release mode: you can always set your program up to log unhandled exceptions and generate a core dump that will allow you to debug issues even after restarting your app (a minimal handler is sketched below).
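A minimal sketch of such a handler (the log path is illustrative); install it once at startup:

    using System;
    using System.IO;

    static class CrashLogging
    {
        public static void Install()
        {
            // Record any exception that is about to take the process down,
            // so the failure can be diagnosed after an automatic restart.
            // A minidump could also be written here before the process dies.
            AppDomain.CurrentDomain.UnhandledException +=
                delegate(object sender, UnhandledExceptionEventArgs e)
                {
                    File.AppendAllText(@"C:\logs\myservice-crash.log",
                        DateTime.UtcNow + ": " + e.ExceptionObject +
                        Environment.NewLine);
                };
        }
    }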
In addition, it sounds just plain wrong to be manually managing an app that needs 24/7 availability. You should be using scheduled tasks or some sort of automated process restarting mechanism.
Stepping back a bit, this question may provide some guidance on influencing your team.
In itself there's no issue with running it under the debugger if the performance is good enough. What strikes me as odd is that you are running business-critical 24/7 applications as user processes, perhaps even on a workstation. If you want to ensure robustness and availability, you should consider running this on dedicated hardware that no one uses besides the application. If you are indeed running this on a user's machine, accidents can easily happen, such as closing the "wrong" Visual Studio or crashing the computer.
Running in debug should be done in the test environment. Where I work/have worked, we usually have three environments: Production, Release and Test.
Production
Dedicated hardware
Limited access, usually only the main developers/technology
Version control, a certain tagged version from SVN/CVS
Runs the latest stable version that has been promoted to production status
Release
Dedicate hardware
Full access to all developers
Version control, a certain tagged version from SVN/CVS
Runs the next version of the product, not yet promoted to production status, but will probably be. "Gold" if you like.
Test
Virtual machine or lousy hardware
Full access
No version control; could be the next, next version, or just a custom build that someone wanted to test out in a "near-prod environment"
This way we can easily test new versions in Release, even debug them there. In the Test environment, anything goes; it's more for when someone wants to test something involving more than one box (beyond their own).
This setup protects you against quick hacks that weren't tested enough, by providing dedicated test machines, while still allowing you to release those hacks in an emergency.
Speaking very generically, when you run a program under a debugger you're actually running two processes - the target and the debugger - and tying them together pretty intimately. So the opportunities for unexpected influences and errors (that aren't in a production run) exist. Of course, the folks who write the debuggers do their best to minimize these effects, but running that scenario 24/7 is likely to expose any issues that do exist.
If you're trying to track down a particular failure, sometimes running under a debugger is the best solution; but even there, often enabling tracing of one sort or another is a lower-impact solution that is just as effective.
The debugger is also using up resources - depending on the machine and the app, that could be an issue. If you need more specific examples of things that could go wrong using a debugger 24/7 let me know.
Ask them if they'd like to be publicly mocked on The Daily WTF. (Because with enough details in the write up, this would qualify.)
I can't speak for everyone's experience, but for me Visual Studio crashes a lot. It not only crashes itself, it also crashes Explorer. This is exacerbated by add-ons and plugins. I'm not sure it's ever been tested to run 24/7 for days and days the way the OS has.
You're essentially putting the running of your app at the mercy of this huge behemoth of a second app that sounds like it's easily orders of magnitude larger and more complex than your app. You're just going to get bug reports, and most of them are going to involve Visual Studio crashing.
Also, are you paying for Visual Studio licenses for the production machines?
You definitely don't want an application that needs to be up 24/7 to be run manually from the debugger, regardless of the performance issues. If you have to convince your co-workers of that, find a new job.
I have sometimes used the debugger live (i.e. against live customer data) to debug data-related application problems in situations where I couldn't exactly reproduce the production data in a test environment.
Simple answer: you will almost certainly reduce performance (most likely considerably) and you will vastly increase your dependencies. In one step you've added the entire VS stack including the IDE and every other little bit to your dependencies. Smart people keep the dependencies of high-uptime services as tight as possible.
If you want to run under a debugger, then you should use a lighter-weight debugger like NTSD; this is just madness.
We never run it via the debugger. There are compiler options which may accidentally be turned on/off. Optimizations aren't turned on, and running it in production is a huge security risk.
Aside from the debug build possibly having different code paths (#if DEBUG, Debug.Assert(), etc.), code-wise it will run the same (see the sketch below).
A little scary, mind you: set breakpoints, set the next line of code you want to execute, interactive exception popups, and the not-as-stable experience of running under Visual Studio. There are also debugger options that break whenever an exception occurs. Even inspecting objects in the debugger can cause side effects if you haven't written your code carefully. It sure isn't something I'd want to do as the normal 24x7 process.
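For illustration, a small sketch of the kind of debug-only code paths meant here (the method and message are hypothetical):

    using System;
    using System.Diagnostics;

    static class OrderProcessor
    {
        public static void Process(object order)
        {
            // Calls to Debug.Assert are compiled out entirely in Release
            // builds (no DEBUG symbol defined), so this check exists only
            // in Debug - including its interactive failure dialog.
            Debug.Assert(order != null, "order must not be null");

    #if DEBUG
            // A debug-only path: present in the Debug binary, absent in Release.
            Console.WriteLine("Processing " + order);
    #endif
            // ...shared logic runs identically in both configurations...
        }
    }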
The only reason to run from the debugger is to debug the application. If you're doing that on a regular basis in production, it's a big red flag that your code and your process need help.
To date I've never had to run debug mode interactively in production. On rare occasions we switched over to a debug build for extra logging, but we never sat there with Visual Studio open.
I would ask them what is the advantage of running it via Visual Studio?
There are plenty of disadvantages that have been listed in the replies. I can't think of any advantages.
