Debug vs. Release performance - c#

I've encountered the following paragraph:
“Debug vs. Release setting in the IDE when you compile your code in Visual Studio makes almost no difference to performance… the generated code is almost the same. The C# compiler doesn’t really do any optimization. The C# compiler just spits out IL… and at the runtime it’s the JITer that does all the optimization. The JITer does have a Debug/Release mode and that makes a huge difference to performance. But that doesn’t key off whether you run the Debug or Release configuration of your project, that keys off whether a debugger is attached.”
The source is here and the podcast is here.
Can someone direct me to a Microsoft article that can actually prove this?
Googling "C# debug vs release performance" mostly returns results saying "Debug has a lot of performance hit", "release is optimized", and "don't deploy debug to production".

Partially true. In debug mode, the compiler emits debug symbols for all variables and compiles the code as is. In release mode, some optimizations are included:
unused variables do not get compiled at all
some loop variables are hoisted out of the loop by the compiler if they are proven to be invariant
code written under the #if DEBUG directive is not included, etc.
The rest is up to the JIT.
Full list of optimizations here courtesy of Eric Lippert.
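As a quick illustration of the unused-variable point (a hypothetical snippet, compiled with the ordinary csc and ildasm tools):

// unused.cs - compile twice and compare the IL:
//   csc /optimize- unused.cs
//   csc /optimize+ unused.cs
//   ildasm unused.exe /text
class Program
{
    static void Main()
    {
        int unused = 2 + 3;   // with /optimize+ this local should vanish from the IL entirely
        System.Console.WriteLine(42);
    }
}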

There is no article which "proves" anything about a performance question. The way to prove an assertion about the performance impact of a change is to try it both ways and test it under realistic-but-controlled conditions.
You're asking a question about performance, so clearly you care about performance. If you care about performance then the right thing to do is to set some performance goals and then write yourself a test suite which tracks your progress against those goals. Once you have such a test suite you can then easily use it to test for yourself the truth or falsity of statements like "the debug build is slower".
And furthermore, you'll be able to get meaningful results. "Slower" is meaningless because it is not clear whether it's one microsecond slower or twenty minutes slower. "10% slower under realistic conditions" is more meaningful.
Spend the time you would have spent researching this question online on building a device which answers the question. You'll get far more accurate results that way. Anything you read online is just a guess about what might happen. Reason from facts you gathered yourself, not from other people's guesses about how your program might behave.
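A minimal sketch of such a measurement device (the method names here are illustrative placeholders, not from the answer):

using System;
using System.Diagnostics;

class PerfCheck
{
    const int Iterations = 1000000;

    static void Main()
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            OperationUnderTest();   // placeholder for the code whose cost you care about
        }
        sw.Stop();
        // Build and run this under both configurations and compare the numbers.
        Console.WriteLine("{0} iterations took {1} ms", Iterations, sw.ElapsedMilliseconds);
    }

    static void OperationUnderTest()
    {
        // the real work goes here
    }
}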

I can’t comment on the performance, but the advice “don’t deploy debug to production” still holds, simply because debug code usually does quite a few things differently in large products. For one thing, you might have debug switches active, and for another, there will probably be additional redundant sanity checks and debug outputs that don’t belong in production code.

From MSDN Social:
It is not well documented; here's what I know. The compiler emits an instance of the System.Diagnostics.DebuggableAttribute. In the debug version, the IsJITOptimizerDisabled property is True; in the release version it is False. You can see this attribute in the assembly manifest with ildasm.exe.
The JIT compiler uses this attribute to disable optimizations that would make debugging difficult - the ones that move code around, like loop-invariant hoisting. In selected cases, this can make a big difference in performance. Not usually, though.
Mapping breakpoints to execution addresses is the job of the debugger. It uses the .pdb file and info generated by the JIT compiler that provides the IL-instruction-to-code-address mapping. If you were to write your own debugger, you'd use ICorDebugCode::GetILToNativeMapping().
Basically debug deployment will be slower since the JIT compiler optimizations are disabled.
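You can check the attribute at runtime with a few lines of reflection (a minimal sketch; run it against a Debug and a Release build of any assembly to see the difference):

using System;
using System.Diagnostics;
using System.Reflection;

class Program
{
    static void Main()
    {
        // Reads the DebuggableAttribute the C# compiler emitted for this assembly.
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));
        Console.WriteLine(attr == null
            ? "No DebuggableAttribute (JIT optimizations enabled)"
            : "IsJITOptimizerDisabled = " + attr.IsJITOptimizerDisabled);
        // Debug build:   IsJITOptimizerDisabled = True
        // Release build: IsJITOptimizerDisabled = False
    }
}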

What you read is quite valid. Release is usually leaner due to JIT optimization, the exclusion of debug-only code (#if DEBUG or [Conditional("DEBUG")]), and minimal debug symbol loading; an often-overlooked factor is the smaller assembly, which reduces loading time. The performance difference is more obvious when running the code in VS because of the more extensive PDBs and symbols that are loaded, but if you run it independently the differences may be less apparent. Certain code optimizes better than other code, and the same kinds of optimizing heuristics are used as in other languages.
Scott has a good explanation of inline method optimization here.
See this article that gives a brief explanation of why it is different in the ASP.NET environment for the debug and release settings.
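For example, calls to a method marked [Conditional("DEBUG")] are stripped from the call sites entirely when the DEBUG symbol is not defined (a small illustrative sketch):

using System;
using System.Diagnostics;

class Program
{
    // In a Release build (no DEBUG symbol), every call to this method is
    // removed from the IL at the call site - the arguments are not even evaluated.
    [Conditional("DEBUG")]
    static void Trace(string message)
    {
        Console.WriteLine("TRACE: " + message);
    }

    static void Main()
    {
        Trace("only visible in Debug builds");
        Console.WriteLine("done");
    }
}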

One thing you should note, regarding performance and whether the debugger is attached or not, something that took us by surprise.
We had a piece of code, involving many tight loops, that seemed to take forever to debug, yet ran quite well on its own. In other words, no customers or clients were experiencing problems, but when we were debugging it seemed to run like molasses.
The culprit was a Debug.WriteLine in one of the tight loops, which spat out thousands of log messages left over from a debug session a while back. It seems that when the debugger is attached and listens to such output, there's overhead involved that slows down the program. For this particular code, it was on the order of 0.2-0.3 seconds of runtime on its own, and 30+ seconds with the debugger attached.
The solution was simple, though: just remove the debug messages that were no longer needed.
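The problematic pattern looked roughly like this (an illustrative reconstruction, not the actual product code):

using System.Diagnostics;

class Example
{
    static void Process(int[] items)
    {
        for (int i = 0; i < items.Length; i++)
        {
            // Harmless when no debugger is listening; with one attached, every
            // call is forwarded to the debugger's output window - thousands of
            // messages can turn a sub-second loop into a 30+ second one.
            Debug.WriteLine("processing item " + i);
            // ... actual per-item work here ...
        }
    }
}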

From the MSDN site:
Release vs. Debug configurations
While you are still working on your project, you will typically build your application by using the debug configuration, because this configuration enables you to view the value of variables and control execution in the debugger. You can also create and test builds in the release configuration to ensure that you have not introduced any bugs that only manifest on one type of build or the other. In .NET Framework programming, such bugs are very rare, but they can occur.
When you are ready to distribute your application to end users, create a release build, which will be much smaller and will usually have much better performance than the corresponding debug configuration. You can set the build configuration in the Build pane of the Project Designer, or in the Build toolbar. For more information, see Build Configurations.

I recently ran into a performance issue. The full product list was taking too much time, about 80 seconds. I tuned the DB and improved the queries, and there wasn't any difference. I decided to create a TestProject, and I found out that the same process executed in 4 seconds. Then I realized the project was in Debug mode and the test project was in Release mode. I switched the main project to Release mode, and the full product list took only 4 seconds to display all the results.
Summary: Debug mode is far slower than Release mode, as it keeps debugging information. You should always deploy in Release mode. You can still have debugging information if you include .PDB files; that way you can log errors with line numbers, for example.
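For example, if you ship the .pdb next to the Release executable, exception stack traces carry file and line numbers (a minimal sketch):

using System;

class Program
{
    static void Main()
    {
        try
        {
            Fail();
        }
        catch (Exception ex)
        {
            // With the .pdb next to the .exe, the trace includes
            // "in ...\Program.cs:line NN" entries even in a Release build.
            Console.WriteLine(ex.ToString());
        }
    }

    static void Fail()
    {
        throw new InvalidOperationException("demo");
    }
}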

To a large extent, that depends on whether your app is compute-bound, and it is not always easy to tell, as in Lasse's example. If I've got the slightest question about what it's doing, I pause it a few times and examine the stack. If there's something extra going on that I didn't really need, that spots it immediately.

Debug and Release modes do have differences. There is a tool called Fuzzlyn: it is a fuzzer which uses Roslyn to generate random C# programs, runs them on .NET Core, and checks that they give the same results when compiled in debug and release mode.
A lot of bugs have been found and reported with this tool.

Related

Is compiling Release and Debug going to generate different IL code + different machine code?

I heard that compiling in Release mode generates more optimized code than Debug mode, which is fine.
But is this optimization in the IL? Is it in the machine code once the CLR runs it? Is the metadata structure of the PE different when compiled in Release vs. Debug?
Thanks.
Building the Release configuration turns on the /optimize compile option for the C# compiler. That has a few side effects; the IL indeed changes, but not a great deal. Notably, the compiler no longer makes an effort to make the code perfectly debuggable. For example, it skips an empty static constructor, it no longer emits the NOP opcodes that allow you to set a breakpoint on a curly brace, and it allows local variables with different scopes to overlap in a stack frame. Small stuff.
The most important difference is the [Debuggable] attribute that's emitted for the assembly: its IsJITOptimizerDisabled property is false.
That turns on the real optimizer, the one that's built into the jitter. You'll find the list of optimizations it performs in this answer. Do note the usefulness of this approach: any language benefits from having the code optimizer in the jitter instead of the compiler.
So in a nutshell, very minor changes in the IL, very large changes in the generated machine code.
Yes, there's some optimization in the IL - in particular, the debug version will include NOP instructions which make it easy for a debugger to insert breakpoints, I believe. There are also potentially differences in terms of the level of debug information provided (line numbers etc.).
I suggest you take a small sample program, compile it in both ways, and then look at the output in ildasm.
The C# compiler doesn't do much optimization - the JIT compiler does most of that - but I think there are some differences.
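For instance, a hypothetical little program like this is enough to see the NOPs come and go (the commands assume the .NET Framework SDK tools are on the PATH):

// nops.cs - compile both ways and diff the IL:
//   csc /debug+ /optimize- nops.cs   (Debug-style build: nop opcodes present at the braces)
//   csc /optimize+ nops.cs           (Release-style build: the nops are gone)
//   ildasm nops.exe /text
class Program
{
    static void Main()
    {
        int x = 1;
        int y = x + 2;
        System.Console.WriteLine(y);
    }
}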
The CIL differs; it is optimized. Since the machine code is a translation of the CIL, it also differs. You can see it for yourself: just open the disassembly window in Visual Studio. Metadata should remain the same, as you don't change the structure of class contracts between releases.
In VB there is a side-effect of Edit + Continue support compiled into the executable, which can cause a memory leak. It is affected by any event that is declared with the WithEvents keyword. A WeakReference keeps track of those event instances. Problem is, those WeakReferences are leaked if you run the app without a debugger. The rate at which the process consumes memory is highly dependent on how many instances of the class get created. The leak is 16 bytes per event per object.
Disclaimer: copied from Hans' answer here
See this Microsoft knowledge base article.
This is not an answer to the exact question. It's just to add that you can purposefully mark which code has to run in debug mode and which in release mode with the help of preprocessor directives:
#if DEBUG
// code only meant for debug builds
#endif

#if !DEBUG
// code only meant for release builds (DEBUG symbol not defined)
#endif
So if you do this you'd get different IL generated.

Why does VS2012 run identical tests at different speeds?

I'm working on a project at work where there's a performance issue with the code.
I've got some changes I think will improve performance, but no real way of gauging how my changes affect it.
I wrote a unit test that does things the way they're currently implemented, with a Stopwatch to monitor how fast the function runs. I've also written a similar unit test that does things slightly differently.
If the tests are run together, one takes 1 s to complete and the other takes 73 ms.
If the tests are run separately, they both take around 1 s to complete (yeah... that change I made didn't seem to change much).
Even if the tests are identical, I have the same issue: one runs faster than the other.
Is visual studio doing something behind the scenes to improve performance? Can I turn it off if it is?
I've tried moving tests into different files, which didn't fix the issue I'm having.
I'd like to be able to run all the tests, but have them run as if there's only one test running at a time.
My guess: it's likely down to DLL loading and JIT compiling.
1. Assembly loading
.NET lazily loads assemblies (DLLs). If you add a reference to FooLibrary, it doesn't mean it gets loaded when your code loads.
Instead, what happens is that the first time you call a function or instantiate a class from FooLibrary, the CLR will go and load the DLL it lives in. This involves searching for it in the filesystem, possible security checks, etc.
If your code is even moderately complex, the "first test" can often end up causing dozens of assemblies to be loaded, which obviously takes some time.
Subsequent tests appear fast because everything's already loaded.
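You can actually watch this happen by hooking the AppDomain.AssemblyLoad event (a minimal sketch; System.Xml is just an arbitrary example of a lazily loaded assembly):

using System;

class Program
{
    static void Main()
    {
        // Prints each assembly as the CLR loads it on demand.
        AppDomain.CurrentDomain.AssemblyLoad += (sender, args) =>
            Console.WriteLine("Loaded: " + args.LoadedAssembly.GetName().Name);

        UseXml();   // the first call into System.Xml triggers its load here, not at startup
    }

    static void UseXml()
    {
        var doc = new System.Xml.XmlDocument();
        doc.LoadXml("<root/>");
    }
}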
2. JIT Compiling
Remember, your .NET assemblies don't contain code that the CPU can directly execute. The first time you call any .NET function, the CLR takes the MSIL bytecode and compiles it into executable machine code, and then it goes and runs this machine code. It does this on a per-function basis.
So, if you consider that the first time you call any function there will be a small delay while it is JIT compiled, these things can add up. This can be particularly bad if you're calling a lot of functions or initializing a big third-party library (think Entity Framework, etc.).
As above, subsequent tests appear fast, because many of the functions will have already been JIT compiled, and cached in memory.
So, how can you get around this?
You can improve the assembly loading time by having fewer assemblies. This means fewer file searches and so on. The Microsoft .NET performance guidelines go into more detail.
Also, I believe installing them in the global assembly cache may (??) help, but I haven't tested that at all so please take it with a large grain of salt.
Installing into the GAC requires administrative permissions and is quite a heavyweight operation. You don't want to be doing it during development, as it will cause you problems (assemblies get loaded from the GAC in preference to the filesystem, so you can end up loading old copies of your code without realizing it).
You can improve the JIT time by using ngen to pre-compile your assemblies. However, like with the GAC, this requires administrative permissions and takes some time, so you do not want to do it during development either.
My advice?
Firstly, measuring performance in unit tests is not a particularly good or reliable thing to be doing. Who knows what else Visual Studio is doing in the background that may or may not affect your tests.
Once you've got your code you're trying to benchmark out into a standalone app, have it loop and run all the tests twice, and discard the first result :-)
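In code, that advice looks something like this (an illustrative sketch; CodeUnderTest is a placeholder):

using System;
using System.Diagnostics;

class Harness
{
    static void Main()
    {
        // The first pass pays for assembly loading and JIT compilation; discard it.
        Run("warm-up (discarded)");
        Run("measured");
    }

    static void Run(string label)
    {
        var sw = Stopwatch.StartNew();
        CodeUnderTest();
        sw.Stop();
        Console.WriteLine(label + ": " + sw.ElapsedMilliseconds + " ms");
    }

    static void CodeUnderTest()
    {
        // placeholder for the code being benchmarked
    }
}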
"Premature optimization is the root of all evil."
If you didn't measure before, how do you know you are fixing anything now? How do you even know you had a problem that needed to be solved?
Unit tests are for operational correctness. They could be used for performance, but I would not depend on that because many other factors come into play at run-time.
Your best bet is to get a profiler (or use one that comes with VS) and start measuring.

Visual Studio 2008 Very Slow Debugging Managed to Unmanaged Transition (only one machine though)

I have some C# code which passed a delegate as a callback to an unmanaged method via a P/Invoke function call in an NUnit test.
The code works great and passes all tests in both Release and Debug modes, and it runs fast on one machine whether running under the debugger or not.
But after setting up a nearly identical development environment on another PC for a new developer starting soon, it runs fast in the Release and Debug configurations, but horribly slowly when the debugger is attached.
Note that I have seen this type of slowness with "debug unmanaged code" enabled on the project. I have disabled that and recompiled, and it doesn't matter with or without it. I tried it both ways several times.
Also, there aren't any break points or watch variables set.
As an aside, this unit test actually calls the unmanaged method in a loop 1 million times; the method just increments a counter and returns. It's extremely simple code that was only testing the performance of making unmanaged calls across AppDomains.
Please remember that this is identical code from the same git commit that only runs slowly under the debugger on one of the machines. No code modifications differ between them, so it seems conclusive that this isn't a "code" issue but rather, I would wildly guess, a setting in Visual Studio somewhere related to unmanaged vs. managed debugging.
Thanks in advance for any ideas. If you really think seeing the code will help, I'll post the C# unit test and the cpp file too.
Edit: I narrowed down that this slowness in the debugger only happens for the unmanaged code that calls into a different AppDomain. So in these performance tests there is the primary and another, secondary AppDomain. Managed-to-unmanaged calls that call back from the primary domain to itself are also tested; those are fast! But those that call back from unmanaged code into the other AppDomain are very, very slow. This means going from 20 million calls per second down to only 4 or 5 thousand per second.
Note that the method being tested is void callback() - so no arguments or return value. In other words, there's nothing to marshal.
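For reference, the basic managed/unmanaged callback pattern being exercised looks roughly like this (a reconstruction using the well-known EnumWindows API rather than the original test code, and without the second AppDomain):

using System;
using System.Runtime.InteropServices;

class Program
{
    // Unmanaged callback signature: the native side calls back into managed code.
    delegate bool EnumWindowsProc(IntPtr hWnd, IntPtr lParam);

    [DllImport("user32.dll")]
    static extern bool EnumWindows(EnumWindowsProc callback, IntPtr lParam);

    static void Main()
    {
        int count = 0;
        // Each invocation is a native-to-managed transition, the kind of
        // round trip whose cost the test above was measuring.
        EnumWindows((hWnd, lParam) => { count++; return true; }, IntPtr.Zero);
        Console.WriteLine("callbacks received: " + count);
    }
}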
Edit: I was jiggering with different settings, and now my development box is SLOW too. I was sure it was the "Just My Code" setting, which I saw was off on the faster machine, so I enabled it to try that out. But now, even after disabling it again, it's still slow. So I'm not sure whether this is the cause or not.
Check whether the symbol file settings are the same on both machines. Loading all symbols for native code may take a very long time (Tools -> Options -> Debugging -> Symbols).

C# Debug vs Release

How much performance gain (if any) can a windows service gain between a debug build and release build and why?
For managed code, unless you have a lot of stuff conditionally compiled in for DEBUG builds, there should be little difference - the IL should be pretty much the same. The jitter generates different code depending on whether it runs under the debugger or not; the compilation to IL isn't affected much.
There are some things /optimize does when compiling to IL, but they aren't particularly aggressive. And some of those IL optimizations will probably be handled by the jitter optimizations anyway, even if they aren't optimized in the IL (like the removal of NOPs).
See Eric Lippert's article http://blogs.msdn.com/ericlippert/archive/2009/06/11/what-does-the-optimize-switch-do.aspx for details:
The /optimize flag does not change a huge amount of our emitting and generation logic. We try to always generate straightforward, verifiable code and then rely upon the jitter to do the heavy lifting of optimizations when it generates the real machine code. But we will do some simple optimizations with that flag set.
Read Eric's article for information about what /optimize does differently in IL generation.
Well, though the question is a duplicate, I feel that some of the better answers on the original question are at the very bottom. Personally, I have seen situations where there is an appreciable difference between debug and release modes. (Example: property performance, where there was a 2x difference between accessing properties in debug and release mode.) Whether this difference would be present in actual software (as opposed to a benchmark-like program) is debatable, but I have seen it happen in one product I worked on.
From Neil's answer on the original question, from msdn social:
It is not well documented; here's what I know. The compiler emits an instance of the System.Diagnostics.DebuggableAttribute. In the debug version, the IsJITOptimizerDisabled property is True; in the release version it is False. You can see this attribute in the assembly manifest with ildasm.exe.
The JIT compiler uses this attribute to disable optimizations that would make debugging difficult - the ones that move code around, like loop-invariant hoisting. In selected cases, this can make a big difference in performance. Not usually, though.
Mapping breakpoints to execution addresses is the job of the debugger. It uses the .pdb file and info generated by the JIT compiler that provides the IL-instruction-to-code-address mapping. If you were to write your own debugger, you'd use ICorDebugCode::GetILToNativeMapping().

What is /optimize C# compiler key intended for?

Is there a full list of optimizations done by the /optimize C# compiler key available anywhere?
EDIT:
Why is it disabled by default?
Is it worth using in a real-world app? -- it is disabled by default only in the Debug configuration and enabled in Release.
Scott Hanselman has a blog post that shows a few examples of what /optimize (which is enabled in Release Builds) does.
As a summary: /optimize does many things, with no exact number or definition given, but one of the more visible is method inlining (if you have a method A() which calls B() which calls C() which calls D(), the compiler may "skip" B and C and go from A to D directly), which may cause a "weird" callstack in the Release build.
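Sketched in code (hypothetical methods mirroring the A/B/C/D description):

using System;

class Program
{
    static void Main() { A(); }

    static void A() { B(); }
    static void B() { C(); }   // trivial forwarders are prime inlining candidates
    static void C() { D(); }
    static void D()
    {
        // In an optimized build the JIT may have inlined B and C, so the stack
        // trace printed here can show fewer frames than the source suggests -
        // the "weird" callstack mentioned above.
        Console.WriteLine(Environment.StackTrace);
    }
}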
It is disabled by default for debug builds. For Release builds it is enabled.
It is definitely worth enabling this switch as the compiler makes lots of tweaks and optimizations depending on the kind of code you have.
For example: skipping redundant initializations, comparisons that never change, etc.
Note: You might have some difficulty debugging if you turn on optimization, as the code you have and the IL code that is generated may not match. This is the reason it is turned on only for Release builds.
Quoted from the MSDN page:
The /optimize option enables or disables optimizations performed by the compiler to make your output file smaller, faster, and more efficient.
In other words, it does exactly what you think it would - optimises the compiled CIL (Common Intermediate Language) code that gets executed by the .NET VM. I wouldn't worry about what the specific optimisations are - suffice to say that they are many, and probably quite complex in some cases. If you are really interested in what sort of things it does, you could probably investigate the Mono C# Compiler (I doubt the details about the MS C# one are public).
The reason optimisation is disabled by default for Debug configurations is that it makes certain debugging features impossible. A few notable ones:
Perhaps most crucially, the Edit and Continue feature is disabled - i.e. no modifying code during execution.
Breaking execution often means the wrong line of code is highlighted (usually the one after the expected one).
Unused local variables aren't actually assigned or even declared.
Really, the default options for optimisation never ought to be changed. Having the option off for debugging is highly useful, while having it on for Release mode is equally wise.
