Is there any way I can check (not force) if a given method or property getter is being inlined in a release build?
No - because it doesn't happen at build time; it happens at JIT time. The C# compiler won't perform any inlining; it's up to the JIT of whichever CLR the code ends up running on.
You can discover this using cordbg with all JIT optimizations turned on, but you'll need to dig through the assembly code. I don't know of any way of discovering this within code. (It's possible you could do so with the debugger API, although that may well disable some inlining to start with.)
They're never inlined by the C# compiler. Only const fields are.
You can take a look at the C# compiler optimizations here.
You can make sure that a method or property accessor is never inlined with this attribute applied to it:
[MethodImpl(MethodImplOptions.NoInlining)]
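For example, a minimal sketch of where the attribute goes (the type and member names here are just illustrative):

using System.Runtime.CompilerServices;

class Sprite
{
    private int _width;

    public int Width
    {
        // Applies to the getter only; accessors are ordinary methods as far as the JIT is concerned.
        [MethodImpl(MethodImplOptions.NoInlining)]
        get { return _width; }
    }

    [MethodImpl(MethodImplOptions.NoInlining)]
    public bool IsTransparent(int pixelColorValue)
    {
        // Assumes ARGB packing; the high byte is the alpha channel.
        return (pixelColorValue >> 24) == 0;
    }
}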
You'd have to look at the machine code. Set a breakpoint on the method call and, when it hits, right-click and choose Go To Disassembly. If you don't see a CALL instruction then it got inlined. You'll have to be up to speed a little on reading machine code to be really sure, though; you might see a call that was inside the inlined method.
To make this accurate, you'll have to go to Tools + Options, Debugging, General and untick "Suppress JIT optimization on module load". That ensures the jitter behaves as it does without the debugger; with the optimizer turned off, methods are never inlined.
Add code within the method body to examine the stack trace using StackFrame. In my experience, inlined methods are excluded from this stack trace.
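A rough sketch of that idea (method names are hypothetical, and note that walking the stack is itself expensive and may perturb the very optimization you are probing):

using System;
using System.Diagnostics;

static class InlineProbe
{
    public static void SuspectMethod()
    {
        // Frame 0 should be this method; if the jitter inlined it,
        // the frame reported here is actually the caller's.
        var method = new StackFrame(0).GetMethod();
        Console.WriteLine(method.Name == "SuspectMethod" ? "not inlined" : "inlined");
    }

    static void Main()
    {
        SuspectMethod();
    }
}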
I know this post is rather old, but you could just print out the stack at the point where you call the function, and again inside the function itself. This is probably the easiest way, because inlining happens at JIT-compilation time.
If the printed stacks match, you can be sure that the function was inlined.
To print out the stack you can use System.Environment.StackTrace or the VS variables $caller and $callstack (https://msdn.microsoft.com/en-us/library/5557y8b4.aspx#BKMK_Print_to_the_Output_window_with_tracepoints).
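For example, something along these lines (the method names are mine; look for the callee's frame in the trace captured inside the callee):

using System;

static class StackTraceCheck
{
    static void Caller()
    {
        Callee();
    }

    static void Callee()
    {
        // If Callee was inlined, its frame is missing from this trace and
        // the trace looks exactly like one captured at the call site.
        string trace = Environment.StackTrace;
        Console.WriteLine(trace.Contains("Callee") ? "not inlined" : "inlined");
    }

    static void Main()
    {
        Caller();
    }
}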
It's possible without looking at the assembly code:
http://blogs.msdn.com/b/clrcodegeneration/archive/2009/05/11/jit-etw-tracing-in-net-framework-4.aspx
Related
When compiling an executable in Release mode -with code optimizations enabled- the compiler may opt to inline functions that meet certain criteria in order to improve performance.
My question is this: when an exception is thrown in the body of a function that has been inlined, will the stacktrace information be preserved regardless of the inline expansion? In other words, will it show the original function as the source of error, or will it show the calling function instead?
It depends on how the exception was thrown. If you use the throw statement yourself then you don't have a problem: the jitter won't inline methods that contain a throw. Something to be aware of when you need a property setter to be fast, btw.
However, if the exception is caused by normal execution, like a NullReferenceException or IndexOutOfRangeException etc., then yes, you won't see the name of the method in the stack trace if it was inlined. This can be a bit bewildering, but you usually figure it out from the source code of the calling method and the exception type. Hopefully that method is relatively small. The [MethodImpl(MethodImplOptions.NoInlining)] attribute is available to suppress inlining. By the time you discover it would be helpful it is usually too late ;)
This is not a definitive answer, but I tried decorating a simple method that only does a division by zero with the [MethodImpl(MethodImplOptions.AggressiveInlining)] attribute, which in .NET 4.5 gives a hint to the JIT (which actually performs the inlining) to inline that method, and when I ran the program in Release mode the exception was reported from the calling method, not the one with the division. On the other hand, as Hans said, methods with throw statements and complex flow logic aren't inlined. This article on the MSDN blog (although from 2004) gives you an overview of how inlining is done by the JIT.
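Roughly, the experiment looked like this (my reconstruction, not the exact code):

using System;
using System.Runtime.CompilerServices;

static class InlineExceptionTest
{
    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    static int Divide(int a, int b)
    {
        return a / b;   // DivideByZeroException comes from normal execution, not a throw statement
    }

    static void Main()
    {
        try
        {
            Divide(1, 0);
        }
        catch (DivideByZeroException ex)
        {
            // In a Release build run without the debugger, the top frame was
            // Main rather than Divide, i.e. Divide had been inlined.
            Console.WriteLine(ex.StackTrace);
        }
    }
}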
In C# consider the following statement:
string operation = new StackTrace(false).GetFrame(0).GetMethod().Name;
Is this a dangerous construction in a release build, where the frames may be compiled to native code?
No, it is not dangerous per se, but you might not get all the information that you want from a release build that was built without debug symbols. Here's some information from MSDN:
StackTrace information will be most informative with Debug build configurations. By default, Debug builds include debug symbols, while Release builds do not. The debug symbols contain most of the file, method name, line number, and column information used in constructing StackFrame and StackTrace objects.
The CLR gives very strong guarantees about stack walks. Necessarily so; they are very important to make the garbage collector and code access security work. What you cannot count on, however, is that GetFrame(0) gives you the stack frame of your method, because inlining is an important jitter optimization. At least not without explicitly suppressing that optimization with [MethodImpl], specifying MethodImplOptions.NoInlining on your method.
Both the stack walk and the optimization suppression are expensive so be sure this code isn't on your critical path.
Compiler support for this feature will be added to the next version of C#, version 5, with the [CallerMemberName] attribute.
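A small sketch of what that looks like, assuming C# 5 / .NET 4.5 (the member names are illustrative):

using System;
using System.Runtime.CompilerServices;

static class Operations
{
    // The compiler substitutes the caller's name at the call site,
    // so neither inlining nor a stack walk is involved.
    static string GetOperationName([CallerMemberName] string caller = null)
    {
        return caller;
    }

    static void Save()
    {
        string operation = GetOperationName();   // "Save"
        Console.WriteLine(operation);
    }

    static void Main()
    {
        Save();
    }
}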
AFAIK no.
Everything compiles to IL anyway. Method names will be known; you just need a .pdb file - even if you do a release build.
You can set this in the Advanced build settings of the project properties.
Thank you for the replies.
My problem was that I have experienced sporadic crashes with code including the listed statement (an access violation); however, without the statement the code has been stable. But from the replies it seems that I must look elsewhere.
How much performance gain (if any) can a windows service gain between a debug build and release build and why?
For managed code, unless you have a lot of stuff conditionally compiled in for DEBUG builds, there should be little difference - the IL should be pretty much the same. The jitter generates different code depending on whether it's running under the debugger; the compilation to IL isn't affected much.
There are some things /optimize does when compiling to IL, but they aren't particularly aggressive. And some of those IL optimizations will probably be handled by the jitter's own optimizations anyway, even if they aren't applied in the IL (like the removal of nops).
See Eric Lippert's article http://blogs.msdn.com/ericlippert/archive/2009/06/11/what-does-the-optimize-switch-do.aspx for details:
The /optimize flag does not change a huge amount of our emitting and generation logic. We try to always generate straightforward, verifiable code and then rely upon the jitter to do the heavy lifting of optimizations when it generates the real machine code. But we will do some simple optimizations with that flag set.
Read Eric's article for information about what /optimize does differently in IL generation.
Well, though the question is a duplicate, I feel that some of the better answers to the original question are at the very bottom. Personally I have seen situations where there is an appreciable difference between debug and release modes. (Example: property performance, where there was a 2x difference between accessing properties in debug and release mode.) Whether this difference would be present in actual software (instead of a benchmark-like program) is debatable, but I have seen it happen in one product I worked on.
From Neil's answer on the original question, from msdn social:
It is not well documented; here's what I know. The compiler emits an instance of System.Diagnostics.DebuggableAttribute. In the debug version, the IsJITOptimizerDisabled property is true; in the release version it is false. You can see this attribute in the assembly manifest with ildasm.exe.
The JIT compiler uses this attribute to disable optimizations that would make debugging difficult. The ones that move code around like loop-invariant hoisting. In selected cases, this can make a big difference in performance. Not usually though.
Mapping breakpoints to execution addresses is the job of the debugger. It uses the .pdb file and info generated by the JIT compiler that provides the IL instruction to code address mapping. If you would write your own debugger, you'd use ICorDebugCode::GetILToNativeMapping().
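As an aside (my own sketch, not part of the quoted answer), you can also read that attribute back at runtime instead of using ildasm.exe:

using System;
using System.Diagnostics;
using System.Reflection;

static class DebuggableCheck
{
    static void Main()
    {
        var attr = (DebuggableAttribute)Attribute.GetCustomAttribute(
            Assembly.GetExecutingAssembly(), typeof(DebuggableAttribute));

        // Debug builds: IsJITOptimizerDisabled is true.
        // Release builds: false (the attribute may even be absent entirely).
        Console.WriteLine(attr == null
            ? "no DebuggableAttribute"
            : "IsJITOptimizerDisabled = " + attr.IsJITOptimizerDisabled);
    }
}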
Is there a full list of optimizations done by the /optimize C# compiler key available anywhere?
EDIT:
Why is it disabled by default?
Is it worth using in a real-world app? -- It is disabled by default only in the Debug configuration and enabled in Release.
Scott Hanselman has a blog post that shows a few examples of what /optimize (which is enabled in Release Builds) does.
As a summary: /optimize does many things, with no exact number or definition given, but one of the more visible ones is method inlining (if you have a method A() that calls B(), which calls C(), which calls D(), the compiler may "skip" B and C and go from A to D directly), which may cause a "weird" call stack in the Release build.
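A contrived sketch of that effect (note that, as other answers in this thread point out, it is really the JIT that performs the inlining when optimizations are enabled):

using System;

static class CallChain
{
    static void A() { B(); }
    static void B() { C(); }
    static void C() { D(); }

    static void D()
    {
        string s = null;
        Console.WriteLine(s.Length);   // NullReferenceException
    }

    static void Main()
    {
        try { A(); }
        catch (NullReferenceException ex)
        {
            // In a Release build run outside the debugger, B, C (and even D)
            // may be inlined, so their frames can be missing from this trace.
            Console.WriteLine(ex.StackTrace);
        }
    }
}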
It is disabled by default for debug builds. For Release builds it is enabled.
It is definitely worth enabling this switch as the compiler makes lots of tweaks and optimizations depending on the kind of code you have.
For eg: Skipping redundant initializations, comparisons that never change etc.
Note: You might have some difficulty debugging if you turn on optimization, as the code you have and the IL code that is generated may not match. This is the reason it is turned on only for Release builds.
Quoted from the MSDN page:
The /optimize option enables or disables optimizations performed by the compiler to make your output file smaller, faster, and more efficient.
In other words, it does exactly what you think it would - optimises the compiled CIL (Common Intermediate Language) code that gets executed by the .NET VM. I wouldn't worry about what the specific optimisations are - suffice to say that they are many, and probably quite complex in some cases. If you are really interested in what sort of things it does, you could probably investigate the Mono C# Compiler (I doubt the details about the MS C# one are public).
The reason optimisation is disabled by default for Debug configurations is that it makes certain debugging features impossible. A few notable ones:
Perhaps most crucially, the Edit and Continue feature is disabled - i.e. no modifying code during execution.
Breaking execution often means the wrong line of code is highlighted (usually the one after the expected one).
Unused local variables aren't actually assigned or even declared.
Really, the default options for optimisation never ought to be changed. Having the option off for debugging is highly useful, while having it on for Release mode is equally wise.
I'm writing an XNA game where I do per-pixel collision checks. The loop which checks this does so by shifting an int and bitwise ORing and is generally difficult to read and understand.
I would like to add private methods such as private bool IsTransparent(int pixelColorValue) to make the loop more readable, but I don't want the overhead of method calls since this is very performance sensitive code.
Is there a way to force the compiler to inline this call, or do I just have to hope that the compiler will do this optimization?
If there isn't a way to force this, is there a way to check if the method was inlined, short of reading the disassembly? Will the method show up in reflection if it was inlined and no other callers exist?
Edit: I can't force it, so can I detect it?
No, you can't. Even more, the one who decides on inlining isn't the VS compiler that takes your code and converts it into IL, but the JIT compiler that takes the IL and converts it to machine code. This is because only the JIT compiler knows enough about the processor architecture to decide whether inlining a method is appropriate, as it's a trade-off between instruction pipelining and cache size.
So even looking in .NET Reflector will not help you.
"You can check
System.Reflection.MethodBase.GetCurrentMethod().Name.
If the method is inlined, it will
return the name of the caller
instead."
--Joel Coehoorn
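Taken literally, the quoted suggestion would look roughly like this (note that reflection calls such as GetCurrentMethod may themselves discourage inlining, so treat the result with some suspicion):

using System;
using System.Reflection;

static class CurrentMethodProbe
{
    static void MaybeInlined()
    {
        string name = MethodBase.GetCurrentMethod().Name;

        // Per the quote: if the method was inlined, 'name' is the caller's
        // name rather than "MaybeInlined".
        Console.WriteLine(name == "MaybeInlined" ? "not inlined" : "inlined?");
    }

    static void Main()
    {
        MaybeInlined();
    }
}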
There is a new way to encourage more aggressive inlining in .NET 4.5 that is described here: http://blogs.microsoft.co.il/blogs/sasha/archive/2012/01/20/aggressive-inlining-in-the-clr-4-5-jit.aspx
Basically it is just a flag that tells the JIT to inline if possible. Unfortunately, it's not available in the current version of XNA (Game Studio 4.0), but it should be available when XNA catches up to VS 2012 some time this year. It is already available if you are somehow running on Mono.
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public static int LargeMethod(int i, int j)
{
    if (i + 14 > j)
    {
        return i + j;
    }
    else if (j * 12 < i)
    {
        return 42 + i - j * 7;
    }
    else
    {
        return i % 14 - j;
    }
}
Be aware that the Xbox works differently.
A Google search turned up this:
"The inline method which mitigates the overhead of a call of a method.
JIT forms into an inline what fulfills the following conditions.
The IL code size is 16 bytes or less.
The branch command is not used (if
sentence etc.).
The local variable is not used.
Exception handling has not been
carried out (try, catch, etc.).
float is not used as the argument or
return value of a method (probably by
the Xbox 360, not applied).
When two or more arguments are in a
method, it uses for the turn
declared.
However, a virtual function is not formed into an inline."
http://xnafever.blogspot.com/2008/07/inline-method-by-xna-on-xbox360.html
I have no idea if he is correct. Anyone?
Nope, you can't.
Basically, you can't do that in most modern C++ compilers either. inline is just an offer to the compiler. It's free to take it or not.
The C# compiler does not do any special inlining at the IL level. JIT optimizer is the one that will do it.
Why not use unsafe code ("inline C" as it's known) and make use of C/C++-style pointers? This is safe from the GC (i.e. not affected by collection) but comes with its own security implications (can't be used in internet-zone apps). It is excellent for the kind of thing it appears you are trying to achieve, especially for performance, and even more so with arrays and bitwise operations.
To summarise: you want performance for a small part of your app? Use unsafe code and make use of pointers etc.; that seems the best option to me.
EDIT: a bit of a starter ?
http://msdn.microsoft.com/en-us/library/aa288474(VS.71).aspx
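For what it's worth, a rough sketch of the pointer approach (my own example; it assumes the pixel data has already been copied into a uint[] with the alpha channel in the high byte, and the project must be compiled with /unsafe):

using System;

static class PixelOps
{
    // Counts non-transparent pixels without per-element bounds checks.
    static unsafe int CountOpaquePixels(uint[] pixels)
    {
        int opaque = 0;
        fixed (uint* p = pixels)
        {
            for (int i = 0; i < pixels.Length; i++)
            {
                // High byte assumed to be alpha; 0 means fully transparent.
                if ((p[i] >> 24) != 0)
                    opaque++;
            }
        }
        return opaque;
    }

    static void Main()
    {
        uint[] pixels = { 0x00000000, 0xFF00FF00, 0x80FF0000 };
        Console.WriteLine(CountOpaquePixels(pixels));   // prints 2
    }
}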
The only way to check this is to get or write a profiler and hook into the JIT events. You must also make sure inlining is not turned off, as it is by default when profiling.
You can detect it at runtime with the aforementioned GetCurrentMethod call. But that'd seem to be a bit of a waste[1]. The easiest thing to do would be to just ILDASM the MSIL and check there.
Note that this is specifically for the compiler inlining the call, and is covered in the various Reflection docs on MSDN.
If the method that calls the GetCallingAssembly method is expanded inline by the compiler (that is, if the compiler inserts the function body into the emitted Microsoft intermediate language (MSIL), rather than emitting a function call), then the assembly returned by the GetCallingAssembly method is the assembly containing the inline code. This might be different from the assembly that contains the original method. To ensure that a method that calls the GetCallingAssembly method is not inlined by the compiler, you can apply the MethodImplAttribute attribute with MethodImplOptions.NoInlining.
However, the JITter is also free to inline calls - but I think a disassembler would be the only way to verify what is and isn't done at that level.
Edit: Just to clear up some confusion in this thread, csc.exe will inline MSIL calls - though the JITter will (probably) be more aggressive in it.
[1] And, by waste - I mean that (a) it defeats the purpose of the inlining (better performance) because of the Reflection lookup, and (b) it'd probably change the inlining behavior so that it's no longer inlined anyway. And, before you think you can just turn it on in Debug builds with an Assert or something - realize that it will not be inlined during Debug, but may be in Release.
Is there a way to force the compiler to inline this call or will I do I just hope that the compiler will do this optimization?
If it is cheaper to inline the function, it will be inlined. So don't worry about it unless your profiler says that it actually is a problem.
For more information
JIT Enhancements in .NET 3.5 SP1
For simple code, you can even get the asm online: https://sharplab.io/
For more complex cases, try https://github.com/szehetner/InliningAnalyzer (I've not tried it yet).