In Visual Studio C/C++ projects, it's easy to modify the compiler's optimization settings in "Property Pages | C/C++ | Optimization". For example, we can choose different optimization levels such as /O1 and /O2, as well as advanced optimizations like "Omit Frame Pointers".
However, I can't find the corresponding UI in Visual Studio C# projects. All I can find is a way to turn optimizations off: the "Optimize code" check box is all I've got.
Can C# users control detailed compiler optimizations the way C/C++ users can? Do I have to pass compiler options on the command line?
Much of the optimisation of C# code goes on at the JIT compiler level, rather than the C# compiler. Basically there are no such detailed settings as the ones available in C or C++.
There are a few performance-related elements of the runtime that can be tweaked, such as GC strategies, but not a great deal.
When I'm building benchmark tests etc from the command line I tend to just use something like this:
csc /o+ /debug- Test.cs
(I believe I have seen the presence of a matching pdb file make a difference to performance, possibly in terms of the cost of exceptions being thrown, hence the debug- switch... but I could be wrong.)
EDIT: If you want to see the difference each bit of optimization makes, there's one approach which could prove interesting:
Compile the same code with and without optimization
Use ildasm or Reflector in IL mode to see what the differences are
Apply the same changes one at a time manually (using ilasm) and measure how much difference each one makes
AFAIK the C# compiler has no such detailed optimization options; optimization is simply either enabled or disabled.
http://msdn.microsoft.com/en-us/library/6s2x2bzy.aspx
I found just two:
/filealign Specifies the size of sections in the output file.
/optimize Enables/disables optimizations.
A bit OT, but someone looking at this question might find this useful:
Adding this to method signature:
[MethodImpl(MethodImplOptions.NoOptimization)]
turns off compiler optimizations for that method.
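As a minimal sketch of how this attribute might be applied (the method and values here are hypothetical, chosen only for illustration), combining it with NoInlining is a common pattern when you want the method to stay exactly as written, e.g. for benchmarking or inspecting the generated code:

```csharp
using System;
using System.Runtime.CompilerServices;

class Program
{
    // NoOptimization asks the JIT not to optimize this method's body;
    // NoInlining keeps it from being folded into its caller.
    [MethodImpl(MethodImplOptions.NoOptimization | MethodImplOptions.NoInlining)]
    static int Sum(int[] values)
    {
        int total = 0;
        for (int i = 0; i < values.Length; i++)
            total += values[i];
        return total;
    }

    static void Main()
    {
        Console.WriteLine(Sum(new[] { 1, 2, 3, 4 })); // prints 10
    }
}
```

Note this controls the JIT per method, which is as close as C# gets to the per-setting granularity the question asks about.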
See here for details:
https://msdn.microsoft.com/en-us/library/system.runtime.compilerservices.methodimploptions%28v=vs.110%29.aspx?f=255&MSPPError=-2147217396
Related
I'm interested in finding all the places in my solution where boxing or unboxing occur. I know that I can use ildasm like this:
Ildasm.exe yourcomponent.dll /text | findstr box
but I prefer not to look at the MSIL level.
Is there an easy way to do this?
Clr Heap Allocation Analyzer is a free Visual Studio add-on that detects many (but not all) forms of boxing and will highlight your source code and provide a tooltip explanation.
You can also use the Visual Studio Diagnostic Tools to analyze memory allocations. This won't reveal boxing directly, but any time you see a value type on the heap you know it has been boxed (for example, you will see that a boxed Int32 takes 12 bytes).
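For reference, this is the source-level pattern those tools are detecting — a sketch of what boxing and unboxing look like in code:

```csharp
using System;

class Program
{
    static void Main()
    {
        int value = 42;

        // Assigning a value type to an object reference boxes it:
        // a new object is allocated on the heap and the value is copied in.
        object boxed = value;

        // Unboxing copies the value back out; the cast must match the
        // boxed type exactly or an InvalidCastException is thrown.
        int unboxed = (int)boxed;

        Console.WriteLine(boxed.GetType()); // System.Int32
        Console.WriteLine(unboxed);         // 42
    }
}
```

In the IL this compiles to explicit `box` and `unbox.any` instructions, which is exactly what the `findstr box` approach above greps for.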
You can do it with FxCop (old article with example):
Link - Three Vital FXCop Rules
This is a perfect use case for #Roslyn, the compiler-as-a-service from Microsoft, and Jon Skeet is, as usual, absolutely right. I am writing a book on Roslyn showing how to do this sort of code analytics, topped off with some eye-catching visualization in JavaScript.
Here is the code for finding boxing calls. Scope resolution plays a role, but this example should get you started. Pre-order your copy to get more such examples at https://www.amazon.com/Source-Analytics-Roslyn-JavaScript-Visualization/dp/1484219244?ie=UTF8&Version=1&entries=0
https://gist.github.com/sudipto80/43efdecb878cac17b340cda2c281c3b3
Could you please tell me what the differences are between the rules of StyleCop and Code Analysis? Should they be used together or not?
Thanks.
StyleCop essentially parses the source file looking for formatting issues and other things that you could think of as "cosmetic". Code Analysis actually builds your code and inspects the compiled IL for characteristics of how it behaves when it runs, flagging potential runtime problems.
So they are complementary, and you are perfectly fine using them together.
Short answer:
StyleCop: takes your source code as input and checks for potential code style issues. For instance: using directives are not alphabetically ordered, etc.
FxCop (now Code Analysis): takes a compiled assembly as input and checks for potential issues related to the executable/DLL itself when it is executed. For instance: your class has a member of type IDisposable that is not disposed properly.
However, there are some rules common to both tools, for instance rules related to naming conventions for publicly exposed types.
Anyway, using both is a good idea.
FxCop checks what is written. It works over the compiled assembly.
StyleCop checks how it is written. It works over the parsed source file, even without trying to compile it.
This leads to all the differences. For example, FxCop cannot check indentation, because it is absent from a compiled assembly. And StyleCop cannot perform code-flow checks, because it doesn't know how your code is actually executed.
For example, I know it is defined for gcc and used in the Linux kernel as:
#define likely(x) __builtin_expect((x),1)
#define unlikely(x) __builtin_expect((x),0)
If nothing like this is possible in C#, is the best alternative to manually reorder if-statements, putting the most likely case first? Are there any other ways to optimize based on this type of external knowledge?
On a related note, the CLR knows how to identify guard clauses and assumes that the alternate branch will be taken, making this optimization inappropriate for guard clauses, correct?
(Note that I realize this may be a micro-optimization; I'm only interested for academic purposes.)
Short answer: No.
Longer answer: You don't really need to in most cases. You can give hints by changing the logic in your statements. This is easier to do with a performance tool, like the one built into the higher (and more expensive) versions of Visual Studio, since you can capture the mispredicted-branches counter. I realize this is for academic purposes, but it's good to know that the JITer is very good at optimizing your code for you. As an example (taken pretty much verbatim from CLR via C#):
This code:
public static void Main() {
    Int32[] a = new Int32[5];
    for (Int32 index = 0; index < a.Length; index++) {
        // Do something with a[index]
    }
}
may seem inefficient, since a.Length is a property and, as we know in C#, a property is actually a set of one or two methods (get_XXX and set_XXX). However, the JIT knows it's a property and either stores the length in a local variable for you or inlines the method, to avoid the overhead.
...some developers have underestimated the abilities
of the JIT compiler and have tried to write “clever code” in an attempt to help the JIT
compiler. However, any clever attempts that you come up with will almost certainly impact
performance negatively and make your code harder to read, reducing its maintainability.
Among other things, it actually goes further and performs the array bounds check once outside the loop instead of on every iteration inside the loop, which would degrade performance.
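To make this concrete, here is a sketch (the array and sums are hypothetical) contrasting a "clever" manual hoist of a.Length with the idiomatic loop. Both compute the same result, and it is the idiomatic form that the JIT recognizes for bounds-check elimination:

```csharp
using System;

class Program
{
    static void Main()
    {
        int[] a = { 1, 2, 3, 4, 5 };

        // "Clever" manual hoisting of the Length property into a local...
        int sum1 = 0;
        int length = a.Length;
        for (int i = 0; i < length; i++)
            sum1 += a[i];

        // ...buys nothing over the idiomatic form: the JIT inlines the
        // Length getter anyway, and iterating directly against a.Length
        // is the pattern it recognizes when eliding per-iteration
        // bounds checks.
        int sum2 = 0;
        for (int i = 0; i < a.Length; i++)
            sum2 += a[i];

        Console.WriteLine(sum1 == sum2); // True
    }
}
```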
I realize it has little to do directly with your question, but I guess the point that I'm trying to make is that micro-optimizations like this don't really help you much in C#, because the JIT generally does it better, as it was designed exactly for this. (Fun fact, the x86 JIT compiler performs more aggressive optimizations than the x64 counterpart)
This article explains some of the optimizations that were added in .NET 3.5 SP1, among them being improvements to straightening branches to improve prediction and cache locality.
All of that being said, if you want to read a great book that goes into what the compiler generates and performance of the CLR, I recommend the book that I quoted from above, CLR via C#.
EDIT: I should mention that if this were currently possible in .NET, you could find the information in either the ECMA-335 standard or a working draft. There is no standard that supports this, and viewing the metadata in something like ILDasm or CFF Explorer shows no signs of any special metadata that could hint at branch predictions.
I've just disassembled a project to debug it using Reflector, but it seems to balk at decoding the 'compile results' of automatic properties, e.g. the next line gives me a syntax error. I've tried fixing these manually, but every time I fix one, more appear.
private string <GLDescription>k__BackingField;
Is there anything I can do about this?
Ha! Stupid me: all I had to do was set the disassembler optimization in Reflector's options to .NET 3.5. Mine was on 2.0.
The compiler generates fields with "unspeakable names" - i.e. ones which are illegal in C# itself, but are valid IL.
There's no exactly accurate translation of the IL into "normal" C# (without automatic properties). You can replace < and > with _ which will give legal code, but then of course it won't be exactly the same code any more. If you're only after the ability to debug, however, that won't be a problem.
If you decompile iterators (i.e. methods using yield statements) you'll find more of the same, including the use of fault blocks, which are like finally blocks but they only run when an exception has occurred (but without catching the exception). Various other constructs generate unspeakable names too, including anonymous methods, lambda expressions and anonymous types.
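You don't even need a decompiler to see these unspeakable names — reflection exposes them directly. A small sketch (the Person type here is made up for illustration):

```csharp
using System;
using System.Linq;
using System.Reflection;

class Person
{
    // For an automatic property, the compiler generates a hidden
    // field named "<Name>k__BackingField".
    public string Name { get; set; }
}

class Program
{
    static void Main()
    {
        FieldInfo backing = typeof(Person)
            .GetFields(BindingFlags.NonPublic | BindingFlags.Instance)
            .Single(f => f.Name.Contains("k__BackingField"));

        // The angle brackets make this name illegal in C# source,
        // but it is perfectly valid IL.
        Console.WriteLine(backing.Name); // <Name>k__BackingField
    }
}
```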
On a broader note, do you have permission to decompile this code? If the author doesn't mind you doing so, they're likely to be willing to give you the source code to start with which would make your life easier. If they don't want you debugging their source code to start with, you should consider the ethical (and potentially legal) ramifications of decompiling the code. This may vary by location: consult a real lawyer for more definitive guidance.
EDIT: Having seen your own answer, that makes a lot of sense. I'll leave this here for background material.
When Visual Studio greys out some code and tells you it is redundant, does this mean the compiler will ignore this code or will it still compile this code? In other words, would this redundant code never be interpreted or will it be? Or does it simply act as a reminder that the code is simply not required?
If I leave redundant code in my classes/structs etc, will it have an impact on performance?
Thanks
If the code is redundant it's not necessary for compilation, but leaving it in won't have any impact on performance.
As the compiler has identified the code as redundant in Visual Studio it won't get compiled into the IL or machine code.
It's not good practice to leave redundant code in your project. If you need the code in the future you should get it from the older versions of the file in your source code repository.
C# is not an interpreted language, it's a JITted (Just-In-Time compiled) language, which means it's compiled from MSIL at runtime. Thus, the JITter can do analysis to determine whether code is redundant, and then remove it.
There are two opportunities to remove redundant code:
1. Compiling C# to MSIL in Visual Studio.
2. JITting MSIL to machine code at run (or install) time.
Because the C# compiler itself has flagged this issue, that means the code will likely be removed during (1).
So yeah, it's just being nice and reminding you. Most compilers remove redundant code in many different and subtle ways without telling the programmer, but in certain obvious cases it's a good idea to tell the programmer.
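For illustration, here is a sketch of the kind of code an IDE greys out as redundant (the method and values are hypothetical). The greyed-out parts change nothing about the program's behavior, which is why removing them is safe:

```csharp
using System;

class Program
{
    static int Describe(int n)
    {
        bool positive = n > 0;
        if (positive == true)   // "== true" is redundant: "if (positive)" is equivalent
            return 1;
        else                    // "else" is redundant: the previous branch already returned
            return 0;
    }

    static void Main()
    {
        Console.WriteLine(Describe(5));  // prints 1
        Console.WriteLine(Describe(-3)); // prints 0
    }
}
```

Whether you delete these or not, the compiled IL is effectively the same, which matches the point above: the highlight is a maintainability hint, not a performance warning.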
No, it's not compiled.
It can drive me nuts sometimes when I'm testing and want to use the debugger's "Set Next Statement" command to jump to some statement, only to find it wasn't compiled.