In the question "Is C# code faster than Visual Basic.NET code?" it was said that C# and VB.NET generate the same CLR code in the end.
But in the case where I'm choosing between codebehind and inline code, is there a performance difference (ignoring the language used)?
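For concreteness, this is roughly the kind of difference I mean (a minimal sketch; the file and class names are made up):

```aspx
<%-- Inline code: the handler lives in the .aspx page itself --%>
<%@ Page Language="C#" %>
<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        GreetingLabel.Text = "Hello";
    }
</script>
<asp:Label ID="GreetingLabel" runat="server" />

<%-- Codebehind: the .aspx only points at a separate class file --%>
<%@ Page Language="C#" CodeFile="Default.aspx.cs" Inherits="_Default" %>
```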
Inline code can require compilation the first time the request is made. After that (or if it's precompiled), there's absolutely zero difference between them.
By the way, even if it requires compilation, the speed difference should be insignificant, as ASP.NET will have to compile a source file anyway. The difference will come down to adding a few lines of code to a large source file!
Yes, ish... If you're compiling at run time, you're always going to pay more than something which doesn't have to, but that compile will be cached (if you will) after the first request, so you'll see zero difference from then on.
There's probably someone who knows another reason, but to my mind the only realistic purpose for inline code is the ability to make hot fixes without a rebuild and redeploy: the kind of thing you might do in small or early-stage dev projects. Personally I also find inline just a little... aesthetically displeasing.
The aspx pages have to be parsed and compiled anyway, as ASP.NET turns them into classes that inherit from the codebehind class (hence the Inherits attribute in the page directive), so compilation is necessary in either case. The difference between the two for first runs is going to be negligible unless we're talking about several thousand lines of code.
But I agree with anna: inline is icky.
I'm not sure whether the resulting assembly has the AllowOptimize attribute set on or off. I can find no documentation that indicates this either way.
As such it is possible that the resulting code is not optimized by the JIT in quite the same way.
I doubt this makes a significant difference, if any (like I said, this might be controlled in some other way), but it could certainly impact certain operations if, for example, it disabled inlining and you had a large, extremely tight loop. Such a construct would probably be a poor choice within an ASP.NET page anyway, so this shouldn't be an issue.
No. Unless you are using a web application project, the site needs to be compiled on the first hit. That affects inline as well as codebehind to some extent. After that they run at pretty much the same speed.
I've been using Reflector to decompile a couple of simple C# apps, but I notice that although the code is being decompiled, I still can't see things as they were written in VS. I think this is just the way it is, since the compiler replaces the human-readable source with IL. However, I thought I would give it a try and ask here. Maybe there is a decompiler that can produce code almost identical to the original.
That is impossible, since there are lots of ways to get the same IL from different code. For example, there is no way to know whether an extension method was called fluent-style or explicitly on the declaring type. There is no way to know whether LINQ or regular code was used. All manner of implicit operations may or may not be there. Removed code may or may not have been there. Many primitives (including enums) up to and including 4 bytes are indistinguishable once they are IL.
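For example, a minimal sketch: both of these calls compile to the exact same IL, so a decompiler has no way to tell which one you wrote:

```csharp
using System.Collections.Generic;
using System.Linq;

class Example
{
    static void Demo(List<int> numbers)
    {
        // Fluent-style extension method call:
        var a = numbers.Where(n => n > 0);

        // Explicit call on the declaring type:
        var b = Enumerable.Where(numbers, n => n > 0);

        // Both lines emit an identical call to Enumerable.Where,
        // so the decompiled output cannot distinguish them.
    }
}
```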
If you want the actual code, legally obtain the original code.
Existing .NET decompilers generally decompile to the best of their ability.
You appear to be asking for variable names and line formatting, which for obvious reasons are not compiled to IL.
There are several. I currently use JustDecompile, found here: http://www.telerik.com/products/decompiler.aspx
[Edit]
An alternative is .NET Reflector found here: http://www.reflector.net/
I believe there is a free version of it, but I didn't take the time to look.
Basically, no. There are often many ways to arrive at the same IL code, and there's no way at all for a decompiler to know which was used.
No, nor should there ever be. Things like comments and unreachable code would just add bloat with absolutely zero benefit. The very best you can ever do is approximate the compiled code.
Is there any evidence that suggests including a whole namespace in C# slows things down?
Is it better to do this
System.IO.Path.Combine....
Or to include the whole System.IO namespace?
It's much better to include the namespace in a using directive at the top of your file. The compiler doesn't care; it will emit the same IL both ways, and your code will be shorter and easier to read.
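For example, both calls below compile to an identical call to System.IO.Path.Combine; the using directive only changes what you have to type:

```csharp
using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Fully qualified; works even without "using System.IO;":
        string p1 = System.IO.Path.Combine("logs", "app.txt");

        // Relying on the using directive at the top of the file:
        string p2 = Path.Combine("logs", "app.txt");

        Console.WriteLine(p1 == p2); // True - same IL, same result
    }
}
```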
No matter what, including the entire namespace will not slow down production code.
Will it slow down the compiler? That's debatable, but C# compilation is so fast it's unlikely. A far worse offender in slowing down compilation is a large number of projects in your solution.
It makes no difference... it's purely for readability and in cases where you have naming collisions.
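For example, a classic collision from the framework itself (just an illustration):

```csharp
// Both of these namespaces define a type named Timer:
using System.Windows.Forms;
using System.Threading;
// An alias resolves the ambiguity without fully qualifying every use:
using UiTimer = System.Windows.Forms.Timer;

class ClockForm : Form
{
    // Writing just "Timer" here would be ambiguous (error CS0104).
    private UiTimer uiTimer = new UiTimer();
    private System.Threading.Timer backgroundTimer;
}
```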
It will not slow down your production code, however it could slow down your coding as the IDE has to show you more options and you have to pick through more possibilities when looking at code completion lists.
Adding extra namespaces can affect the compile time of your application. It's unlikely to be noticeable in most applications, but extreme cases could make it visible.
It has no impact on the runtime performance of your application, however.
No, the compiler is fast enough. Not sure what else I can add :)
Do class, method and variable names get included in the MSIL after compiling a Windows App project into an EXE?
For obfuscation - fewer names, harder to reverse engineer.
And for performance - shorter names, faster access.
e.g. if methods ARE called by name:
Keep names short, better performance for named-lookup.
Keep names cryptic, harder to decompile.
Yes, they're in the IL - fire up Reflector and you'll see them. If they didn't end up in the IL, you couldn't build against them as libraries. (And yes, you can reference .exe files as if they were class libraries.)
However, this is all resolved once in JIT.
Keep names readable so that you'll be able to maintain the code in the future. The performance issue is unlikely to make any measurable difference, and if you want to obfuscate your code, don't do it at the source code level (where you're the one to read the code) - do it with a purpose-built obfuscator.
EDIT: As for what's included - why not just launch Reflector or ildasm and find out? From memory, you lose local variable names (which are in the pdb file if you build it) but that's about it. Private method names and private variable names are still there.
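If you'd rather see it programmatically than in Reflector or ildasm, a quick reflection sketch (the assembly path is just an example) will dump the type and method names that survived compilation:

```csharp
using System;
using System.Reflection;

class NameDump
{
    static void Main()
    {
        // Load any compiled assembly - an .exe works just like a .dll here.
        Assembly asm = Assembly.LoadFrom("MyApp.exe");

        foreach (Type type in asm.GetTypes())
        {
            Console.WriteLine(type.FullName);
            var flags = BindingFlags.Public | BindingFlags.NonPublic |
                        BindingFlags.Instance | BindingFlags.Static;
            foreach (MethodInfo method in type.GetMethods(flags))
                Console.WriteLine("  " + method.Name);   // private names show up too
        }
    }
}
```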
Yes, they do. I do not think there will be a notable performance gain from using shorter names, and there is no way that gain would overcome the loss of readability.
Local variable names are not included in MSIL; fields, methods, classes etc. are.
Locals are referenced by index, not by name.
Member names do get included in the IL, whether they are private or public. In fact, all of your code gets included too, and if you use Reflector you can practically read all the source code of the application. What's left is debugging the app, and I think there might be tools for that.
You must ABSOLUTELY (and I can't emphasize this enough) obfuscate your code if you're making packaged applications that have a number of clients and competition. Luckily, there are a number of obfuscators available.
This is a major gripe that I have with .NET. Since MS is doing so much hard work on this, why not develop (or acquire) a professional obfuscator and make it part of VS? Dotfuscator just doesn't cut it - at least not the Community Edition they ship.
Keep names short, better performance for named-lookup.
How could this make any difference? I'm not sure exactly how identifiers are resolved by the VM, but I'm pretty sure it's not doing a straight string-comparison lookup on every call (calls in IL go through metadata tokens, not names) - that would be the worst possible way to do it.
Keep names cryptic, harder to decompile.
To be honest, I don't think code obfuscation helps that much. Most competent developers have already developed a "sixth sense" for figuring things out quickly even when identifiers like method names are totally unhelpful, since very often the source code they need to maintain or improve already has these problems (I am talking about method names like "DoAllStuff()").
Anyway, security through obscurity is usually a bad idea.
If you are concerned about obfuscation check out .NET Reactor. I tested 8 different obfuscators and Reactor was not only the cheapest commercial one, it was the second best of the bunch (the best was the most expensive one, Dotfuscator Gold).
[EDIT]
Actually now that I think of it, if all you care about is obfuscating method names then the one that comes with VS.NET, Dotfuscator Community Edition, should work fine.
I think they're added, but the length of the name isn't going to affect anything, because of the way the function names are looked up. As for obfuscation, I think there are tools (Dotfuscator or something like that) that basically do exactly what you're saying.
So, every time I have written a lambda expression or anonymous method inside a method that I did not get quite right, I am forced to recompile and restart the entire application or unit test framework in order to fix it. This is seriously annoying, and I end up wasting more time than I saved by using these constructs in the first place. It is so bad that I try to stay away from them if I can, even though LINQ and lambdas are among my favourite C# features.
I suppose there is a good technical reason for why it is this way, and perhaps someone knows? Furthermore, does anyone know if it will be fixed in VS2010?
Thanks.
Yes, there is a very good reason why you cannot do this. The simple reason is cost: the cost of enabling this feature in C# (or VB) is extremely high.
Editing a lambda function is a specific case of a class of ENC (Edit and Continue) issues that are very difficult to solve with the current ENC architecture. Namely, it's very difficult to ENC any method where the edit does one of the following:
Generates Metadata in the form of a class
Edits or generates a generic method
The first issue is more of a logic constraint, but it also bumps into a couple of limitations in the ENC architecture. Generating the first class isn't terribly difficult; what's bothersome is generating the class after the second edit. The ENC engine must start tracking the symbol table not only for the live code, but for the generated classes as well. Normally this is not so bad, but it becomes increasingly difficult when the shape of a generated class is based on the context in which it is used (as is the case with lambdas, because of closures). More importantly, how do you resolve the differences against instances of the classes that are already alive in the process?
The second issue is a strict limitation in the CLR ENC architecture. There is nothing that C# (or VB) can do to work around this.
Lambdas unfortunately hit both of these issues dead on. The short version is that ENC'ing a lambda involves lots of mutations on existing classes (which may or may not have been generated from other ENCs). The big problem comes in resolving the differences between the new code and the existing closure instances alive in the current process space. Also, lambdas tend to use generics a lot more than other code, and so hit issue #2.
The details are pretty hairy and a bit too involved for a normal SO answer. I have considered writing a lengthy blog post on the subject. If I get around to it I'll link it back into this particular answer.
According to a list of Supported Code Changes, you cannot add fields to existing types. Anonymous methods are compiled into oddly named classes (something like <>c__DisplayClass1), which are precisely that: types. Even though your modifications to the anonymous method may not include changing the set of enclosed variables (adding those would add fields to an existing class), I guess that's the reason it's impossible to modify anonymous methods.
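A minimal sketch of what the compiler generates for a capturing lambda (the names are illustrative, not the exact compiler-generated ones):

```csharp
using System;

class Original
{
    Func<int, bool> MakeFilter()
    {
        int threshold = 5;
        return x => x > threshold;   // captures 'threshold'
    }
}

// Roughly what the compiler turns that into:
class DisplayClass   // stand-in for <>c__DisplayClass1
{
    public int threshold;                              // each captured variable becomes a field
    public bool Lambda(int x) { return x > threshold; }
}

class Lowered
{
    Func<int, bool> MakeFilter()
    {
        var closure = new DisplayClass { threshold = 5 };
        return closure.Lambda;                         // delegate over the generated method
    }
}
// Capturing an extra variable in an edit would mean adding a field
// to DisplayClass, which Edit and Continue cannot do.
```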
It is a bit of a shame that this feature is partially supported in VB but not in C#:
http://msdn.microsoft.com/en-us/library/bb385795.aspx
Implementing the same behaviour in C# would reduce the pain level by 80% for functions that contain lambda expressions where we do not need to modify the lambda expressions themselves, nor any expression that depends on them - and probably not at a "monster cost".
Restarting a unit test should take a matter of seconds, if that. I've never liked the "edit and continue" model to be honest - you should always rerun from scratch IMO, just in case the change midway through execution would have affected the code which ran earlier. Given that, you're better off using unit tests which can be run with a very quick turnaround. If your individual unit tests take an unbearable time to start, that's something you should look at addressing.
EDIT: As for why it doesn't work - you may find that it works for some lambdas but not others. Lambda expressions which don't capture any variables (including this) are cached in a private static variable, so that only one instance of the delegate is ever created. Changing the code means reinitialising that variable, which could have interesting side-effects, I suspect.
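A rough sketch of that caching (the names are illustrative, not the exact compiler-generated ones):

```csharp
using System;

class Original
{
    Func<int, int> MakeDoubler()
    {
        return x => x * 2;   // captures nothing, not even 'this'
    }
}

// Roughly what the compiler turns that into:
class Lowered
{
    private static Func<int, int> cachedDoubler;     // stand-in for the compiler's cache field

    private static int Doubler(int x) { return x * 2; }

    Func<int, int> MakeDoubler()
    {
        // The delegate is created once and reused for every subsequent call:
        if (cachedDoubler == null)
            cachedDoubler = new Func<int, int>(Doubler);
        return cachedDoubler;
    }
}
```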
I just want to point out that Visual Studio's notion of "editing" in this context is (or at least can be) a bit stupid. When I checked out an older commit as part of an interactive rebase in git and then attempted to run a unit test, that resulted in 9 errors (ENC0014 and some others).
So with no files modified, every time I attempted to debug the unit test I got those errors. Restarting Visual Studio made the errors go away, so I guess the underlying problem is missing cache invalidation: Visual Studio does not detect or react to files being changed outside its own editor windows.
I know VS2008 has the remove and sort function for cleaning up using directives, as does Resharper. Apart from your code being "clean" and removing the problem of referencing namespaces which might not exist in the future, what are the benefits of maintaining a "clean" list of using directives?
Less code?
Faster compilation times?
If you always only have the using directives that you need, and always have them appropriately sorted, then when you come to diff two versions of the code, you'll never see irrelevant changes.
Furthermore, if you have a neat set of using directives then anyone looking at the code can get a rough idea of what's going to be used just by looking at the using directives.
For me it's basically all about less noise (plus making Resharper happy!).
I would believe any improvement in compilation time would be minimal.
There's no runtime impact. It's purely compile time. It potentially impacts the following:
Less chance for Namespace collisions
Less "noise" in the code file
Very explicit about which namespaces and possible types to expect in the file
Using the Remove and Sort menu command means more consistency in using directives among the devs, and less chance of dumb check-ins just to fix things up.
Less noise.
Clear expectation of what types are used ("My UI layer depends upon System.Net. Wow, why?")
Cleaner references: if you have the minimal set of using statements, you can cleanup your references. Often I see developers just keep throwing references into their projects, but they never remove them when they're no longer needed. If you don't have anything that actually needs a reference (and a using statement counts), it becomes trivial to clean up your references. (Why would you want to do that? In large systems that have been decomposed into components it will streamline your build dependencies by eliminating the unused deps.)
For me, a clean list of using statements at the beginning can give a good understanding of the types to expect.
I saw a decent gain in compile time a few years ago when I first installed ReSharper (on an 18-project solution). Since then it's just been about keeping things clean.
I can't speak to the benefits in compile time and performance, but there's a lower chance of namespace collisions if you minimize your using declarations. This is especially important if you are using more than one third-party library.
There is one compile-time difference: when you remove a reference but still have a using directive for it in your code, you get a compiler error. So having a clean list of using directives makes it a little bit easier to remove unused references.
Usually the compiler drops references to assemblies that aren't actually used, but I don't know whether that still works when a using directive for them remains in the code.