I am currently in a programming mentality that reflection is my best friend. I use it a lot for dynamic loading of content that allows "loose implementation" rather than strict interfaces, as well as for a lot of custom attributes.
What is the "real" cost of using reflection?
Is it worth the effort for frequently reflected types to have cached reflection, such as our own pre-LINQ DAL object code that maps all the properties to table definitions?
Would the caching memory footprint outweigh the reflection CPU usage?
Reflection requires a large amount of type metadata to be loaded and then processed. This can result in a larger memory overhead and slower execution. According to this article, property modification is about 2.5x-3x slower and method invocation is 3.5x-4x slower.
Here is an excellent MSDN article outlining how to make reflection faster and where the overhead is. I highly recommend reading it if you want to learn more.
There is also an element of complexity that reflection can add to the code that makes it substantially more confusing and hence difficult to work with. Some people, like Scott Hanselman, believe that by using reflection you often create more problems than you solve. This is especially the case if your team is mostly junior devs.
You may be better off looking into the DLR (Dynamic Language Runtime) if you need a lot of dynamic behaviour. With the new changes coming in .NET 4.0 you may want to see if you can incorporate some of it into your solution. The added support for dynamic in VB and C# makes using dynamic code very elegant, and creating your own dynamic objects is fairly straightforward.
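For illustration only, here is a minimal sketch (not from the original answer) of what the C# 4 dynamic support looks like with an ExpandoObject; the member names are invented:

using System;
using System.Dynamic;

class DynamicDemo
{
    static void Main()
    {
        dynamic bag = new ExpandoObject();
        bag.Name = "example";                                  // member created on the fly
        bag.Greet = (Func<string>)(() => "Hello from " + bag.Name);
        Console.WriteLine(bag.Greet());                        // resolved at run time by the DLR
    }
}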
Good luck.
EDIT: I did some more poking around Scott's site and found this podcast on reflection. I have not listened to it, but it might be worthwhile.
There are lots of things you can do to speed up reflection. For example, if you are doing lots of property-access, then HyperDescriptor might be useful.
If you are doing a lot of method-invoke, then you can cache methods to typed delegates using Delegate.CreateDelegate - this then does the type-checking etc only once (during CreateDelegate).
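As a hedged sketch of that delegate-caching idea (the Person type and Greet method are invented for illustration):

using System;
using System.Reflection;

public class Person
{
    public string Greet(string name) { return "Hello, " + name + "!"; }
}

public static class DelegateCacheDemo
{
    public static void Main()
    {
        MethodInfo method = typeof(Person).GetMethod("Greet");

        // The reflection lookup and type checking happen once, here.
        // The delegate's first parameter supplies the instance (an "open
        // instance" delegate), so one delegate serves all Person objects.
        var greet = (Func<Person, string, string>)Delegate.CreateDelegate(
            typeof(Func<Person, string, string>), method);

        // Subsequent calls run at close to the speed of a direct call.
        Console.WriteLine(greet(new Person(), "world"));
    }
}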
If you are doing a lot of object construction, then Delegate.CreateDelegate won't help (you can't use it on a constructor) - but (in 3.5) Expression can be used to do this, again compiling to a typed delegate.
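A minimal sketch of that Expression approach, assuming the type has a public parameterless constructor:

using System;
using System.Linq.Expressions;

public static class CtorCache<T>
{
    // Expression.New compiles the constructor call once per closed generic
    // type; the resulting typed delegate is cached and reused thereafter.
    public static readonly Func<T> Create =
        Expression.Lambda<Func<T>>(Expression.New(typeof(T))).Compile();
}

// Usage: var list = CtorCache<System.Collections.Generic.List<int>>.Create();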
So yes: reflection is slow, but you can optimize it without too much pain.
With great power comes great responsibility.
As you say, reflection has costs associated with it, and depending on how much reflection you do it can slow the application down significantly.
One of the very appropriate places to use it is for IoC (Inversion of Control), since, depending on the size of your application, it will probably bring more benefits than costs.
Thanks for the great links and great comments, especially the part about the junior devs; that hit it right on the money.
For us it is easier for our junior developers to do this:
[TableName("Table")]
public class SomeDal : BaseDal
{
[FieldName("Field")]
public string Field
}
rather than some larger implementation of a DAL. This speeds up their building of the DAL objects, while hiding all the internal workings for the senior developers to gut out.
Too bad LINQ didn't come out earlier; I feel at times we wrote half of it.
One thing that can sometimes bite you when using reflection is not updating calls that use reflection when refactoring. Tools like ReSharper will prompt you to update comments and strings when you change a method name, so you can catch most of them that way, but when you're calling methods that have been dynamically generated, or the method name itself has been dynamically generated, you might miss something.
The only solution is good documentation and thorough unit testing.
Why did the designers of C# not allow for something like this?
public readonly class ImmutableThing
{
...
}
One of the most important aids to safe multi-threading is the use of immutable objects/classes, yet there is no way to declare a class as immutable. I know I can make a class immutable by proper implementation, but having this enforced by the class declaration would make it so much easier and safer. Commenting a class as immutable is a "door prop" solution at best.
One look at a class declaration and you would instantly know it was immutable. If you had to modify someone else's code, you would know the class does not allow changes by intent. I can only see advantages here, but I can't believe no one has thought about this before. So why is it not supported?
EDIT
Some say this is not a very important feature, but that does not really convince me. Multicore processors showed up because increasing performance through clock frequency hit a wall. Supercomputers are heavily multiprocessor machines. Parallel processing is more and more important, and is one of the main ways to improve performance. The support for multithreading and parallel processing in .NET is significant (various lock types, thread pool, tasks, async calls, concurrent collections, blocking collection, parallel foreach, PLINQ and so on), and it seems to me that everything that helps you write parallel code more easily gives an edge, even if it's non-trivial to implement.
Basically, because it's complicated - and as usr wrote, features need a lot of work in various ways before they're ready to ship. (It's easy being an armchair language designer - I'm sure it's incredibly difficult to really do it, in a language with millions of developers with critical code bases which can't be broken by changes.)
It's tricky for a compiler to verify that a type is visibly-immutable without being overly restrictive in some cases. As an example, String is actually mutable within mscorlib, but the code of other types (e.g. StringBuilder) has been written very carefully to avoid the outside world ever seeing that mutability.
Eric Lippert has written a lot on immutability - it's a complex topic which would/will need a lot of work to turn into a practical language feature. It's also quite hard to retrofit onto a language and framework which didn't have it to start with. I'd love C# to at least make it easier to write immutable types, and I suspect the team has spent quite a while thinking about it - whether they'll ever be happy enough with their ideas to turn it into a production language feature is a different matter.
Features need to be designed, implemented, tested, documented, deployed and supported. That's why we get the most important features first, and the less important ones late or never.
Your proposal is ok, but there is an easy workaround (as you said). Therefore it is not an "urgent" feature.
There is also a thing called representational immutability, where state mutations inside the object are allowed but are never made visible to the outside. Example: a lazily-calculated field. This would not be possible under your proposal, because the compiler could never prove the class to be immutable to the outside, although its fields are routinely written to.
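For instance, a minimal sketch (the Circle class is invented) of representational immutability: the object mutates internally by caching a lazy computation, but callers can never observe a change:

using System;

public sealed class Circle
{
    private readonly double radius;
    private double? cachedArea;   // written after construction, invisibly to callers

    public Circle(double radius) { this.radius = radius; }

    public double Area
    {
        get
        {
            if (cachedArea == null)
                cachedArea = Math.PI * radius * radius;   // internal mutation
            return cachedArea.Value;
        }
    }
}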
I am interested in writing a generic Intellisense-enabled editor for SQL and C# (et al. if possible!). I would like to do this in C# as an overridden or extended WPF RichTextBox-type control. I know there are many example projects available and I have implemented a basic version of my own; but most of the examples that I have come across (and indeed my own) are just that, basic.
A couple of code examples are:
DIY Intellisense By yetanotherchris
CodeTextBox - another RichTextBox control with syntax highlighting and Intellisense By Tamas Honfi
I have, however, found a great example of an SQL editor with Intellisense, QueryCommander SQL Editor by Mikael Håkansson, which seems to work well. Microsoft must use an XML library of command keywords, but my question is: how (in detail) do Microsoft implement their Intellisense (as-you-type Intellisense), and how hard would it be for me to create my own of the same standard?
Edit A: A year on, and I have managed to develop my own editor control with basic intellisense, mainly for my own "enjoyment". I thought I would come back and provide a list of freely available .NET projects that helped me with my own development and can be used out-of-the-box and free of charge:
ICSharpCode (WinForms)
AvalonEdit (WPF)
ScintillaNET (WinForms)
Query Commander [for example of intellisense implementation] (WinForms)
Edit B: 15 months after the question was asked I am still looking for new improved editors. This one is nice...
RoslynPad is cool!
Edit C: 2 years+ on from the question, I have found the following projects, both using WPF and backed by AvalonEdit.
CodeCompletion for AvalonEdit using NRefactory. This project is really nice and has a full implementation of intellisense using NRefactory.
ScriptCS - makes it easy to write and execute C# with a simple text editor.
How (in detail) do Microsoft implement their as-you-type Intellisense?
I can describe it to any level of detail you care to name, but I don't have the time for more than a brief explanation. I'll explain how we do it in Roslyn.
First, we build an immutable model of the token stream using a data structure that can efficiently represent edits, since obviously edits are precisely what there are going to be a lot of.
The key insight to making it efficient for persistent reuse is to represent the character lengths of the tokens but not their character positions in the edit buffer; remember, a token at the end of the file is going to change position on every edit but the length of the token does not change. You must at all costs minimize the number of total re-lexings if you want to be efficient on extremely large files.
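As an illustration only (this is not Roslyn's actual code), a token can record its width and leave its position to be derived:

// A token stores its length in characters, never its absolute position.
// An edit near the end of the file therefore leaves every token before
// the edit point byte-for-byte reusable.
public sealed class Token
{
    public readonly string Kind;
    public readonly int Width;     // character length; position is derived

    public Token(string kind, int width)
    {
        Kind = kind;
        Width = width;
    }
}

// position(i) = Width[0] + Width[1] + ... + Width[i-1], computed on demand.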
Once you have an immutable model that can handle inserts and deletions to build up an immutable token stream without re-lexing the entire file every time, you then have to do the same thing, but for grammatical analysis. This is in practice a considerably harder problem. I recommend that you obtain an undergraduate or graduate degree in computer science with an emphasis on parser theory if you have not already. We obtained the help of people with PhDs who did their theses on parser theory to design this particular bit of the algorithm.
Then, obviously, build a grammatical analyzer that can analyze C#. Remember, it has to analyze broken C#, not correct C#; IntelliSense has to work while the program is in a non-compiling state. So start by coming up with modifications to the grammar that have good error-recovery characteristics.
OK, so now you've got a parser that can efficiently do grammatical analysis without re-lexing or re-parsing anything but the edited region, most of the time, which means that you can do the work between keystrokes. I forgot to mention, of course you will need to come up with some mechanism to not block the UI thread while doing all of these analyses should the analysis happen to take longer than the time between two keystrokes. The new "async/await" feature of C# 5 should help with that. (I can tell you from personal experience: be careful with the proliferation of tasks and cancellation tokens. If you are careless, it is possible to get into a state where there are tens of thousands of cancelled tasks pending, and that is not fast.)
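A hedged sketch of that keystroke-cancellation pattern (Analyze and UpdateCompletionList are invented placeholders for the real analysis and UI work):

using System;
using System.Threading;
using System.Threading.Tasks;

public class CompletionController
{
    private CancellationTokenSource analysisCts = new CancellationTokenSource();

    // Called on the UI thread for every keystroke.
    public async void OnTextChanged(string snapshot)
    {
        analysisCts.Cancel();                         // abandon any in-flight analysis
        analysisCts = new CancellationTokenSource();
        CancellationToken token = analysisCts.Token;
        try
        {
            // Run the analysis off the UI thread; the token lets it bail out early.
            int result = await Task.Run(() => Analyze(snapshot, token), token);
            UpdateCompletionList(result);             // only reached if not cancelled
        }
        catch (OperationCanceledException)
        {
            // Superseded by a newer keystroke; nothing to do.
        }
    }

    private int Analyze(string snapshot, CancellationToken token)
    {
        token.ThrowIfCancellationRequested();
        return snapshot.Length;                       // stand-in for real analysis work
    }

    private void UpdateCompletionList(int result)
    {
        Console.WriteLine("Analysis result: " + result);
    }
}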
Now that you've got a grammatical analysis you need to build a semantic analyzer. Since you are only doing IntelliSense, it does not need to be a particularly sophisticated semantic analyzer. (Our semantic analyzer must do an analysis suitable for generating code from correct programs and correct error analysis from incorrect programs.) But of course, again it has to do good semantic analysis on broken programs, which does increase the complexity considerably.
My advice is to start by building a "top level" semantic analyzer, again using an immutable model that can persist the state of the declared-in-source-code types from edit to edit. The top level analyzer deals with anything that is not a statement or expression: type declarations, directives, namespaces, method declarations, constructors, destructors, and so on. The stuff that makes up the "shape" of the program when the compiler generates metadata.
Metadata! I forgot about metadata. You'll need a metadata reader. You need to be able to produce IntelliSense on expressions that refer to types in libraries, obviously. I recommend using the CCI libraries as your metadata reader, and not Reflection. Since you are only doing IntelliSense, obviously you don't need a metadata writer.
Anyway, once you have a top-level semantic analyzer, then you can write a statement-and-expression semantic analyzer that analyzes the types of the expressions in a given statement. Pay particular attention to name lookup and overload resolution algorithms. Method type inference will be particularly tricky, especially inside LINQ queries.
Once you've got all that, an IntelliSense engine should be easy; just work out the type of the expression at the current cursor position and display a dropdown appropriately.
how hard would it be for me to create my own of the same standard?
Well, we've got a team of, call it ten people, and it'll probably take, call it five years altogether, to get the whole thing done from start to finish. But we have lots more to do than just the IntelliSense engine; that's maybe only 40% of the work. Oh, and half those people work on VB, now that I think about it. But those people have on average probably five or ten years' experience in doing this sort of work, so they're faster at it than you will be if you've never done this before.
So let's say it should take you about ten to twenty years of full time work, working alone, to build a Roslyn-quality IntelliSense engine for C# that can do acceptably-close-to-correct analysis of large programs in the time between keystrokes.
Longer if you need to do that PhD first, obviously.
Or, you could simply use Roslyn, since that's what it's for. That'll take you probably a few hours, but you don't get the fun of doing it yourself. And it is fun!
You can download the preview release here:
http://www.microsoft.com/download/en/details.aspx?id=27746
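For a taste of what using it looks like, here is a hedged sketch against the Microsoft.CodeAnalysis.CSharp NuGet packages as they ship today (the API has evolved since the preview linked above), listing the symbols in scope at a cursor position, which is the raw material for a completion dropdown:

using System;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

class CompletionSketch
{
    static void Main()
    {
        string code = "class C { void M() { int count = 0; /*cursor*/ } }";
        SyntaxTree tree = CSharpSyntaxTree.ParseText(code);   // tolerant of broken code

        var compilation = CSharpCompilation.Create(
            "demo",
            new[] { tree },
            new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) });

        SemanticModel model = compilation.GetSemanticModel(tree);
        int position = code.IndexOf("/*cursor*/");

        // Everything visible at the cursor: locals, members, types in scope.
        foreach (ISymbol symbol in model.LookupSymbols(position))
            Console.WriteLine(symbol.Name);
    }
}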
This is an area where Microsoft typically produces great results - Microsoft developer tools really are awesome. And there is a clear commercial advantage for sales of their developer tools and for sales of Windows in having the best intellisense, so it makes sense for Microsoft to devote the kind of resources Eric describes in his wonderfully detailed answer. Still, I think it's worth pointing out a few things:
Your customers may not actually need all the features that Microsoft's implementation provides. The Microsoft solution might be incredibly over-engineered in terms of the features that you need to provide to your customers/users. Unless you're actually implementing a generic coding environment that is intended to be competitive with Visual Studio, it is likely that there are aspects of your intended use that either simplify the problem, or allow you to make compromises on the solution that Microsoft feels they cannot make.
Microsoft will likely spend resources decreasing response times that are already measured in hundreds of milliseconds; that may not be something you need to do. Microsoft is spending time on providing an API for others to use for code analysis; that's likely not part of your plan. Prioritize your features, decide what "good enough" looks like for you and your customers, and then estimate the cost of implementing that.
In addition to bearing the obvious costs of implementing requirements that you may not actually have, Microsoft also carries some costs that may not be obvious if you haven't worked in a team. There are huge communication costs associated with teams. It's actually incredibly easy to have five smart people take longer to produce a solution than it takes for a single smart person to produce the equivalent solution. There are aspects of Microsoft's hiring practices and organizational structure that make this scenario more likely. If you hire a bunch of smart people with egos and then empower all of them to make decisions, you too can get a 5% better solution for 500% of the cost. That 5% better solution might be profitable for Microsoft, but it could be deadly for a small company.
Going from a 1 person solution to a 5 person solution increases the costs, but that's just the intra-team development costs. Microsoft has separate teams that are devoted to (roughly) design, development, and testing even for a single feature. The project-related communication between peers across these boundaries has higher friction than within each of the disciplines. This not only increases communication costs between individuals, but it also results in larger team sizes. And more than that - since it's not a single team of 12 individuals, but is instead 3 teams of 5 individuals, there is 3x the upward communication cost. More costs that Microsoft has chosen to carry that may not translate to similar costs for other companies.
My point here is not to describe Microsoft as an inefficient company. My point is that Microsoft makes a ton of decisions about everything from hiring, to team organization, to design and implementation that start from assumptions about profitability and risk that simply do not apply to companies that are not Microsoft.
In terms of the intellisense thing, there are various ways of thinking about the problem. Microsoft is producing a very generic, reusable solution that doesn't just solve intellisense, but also targets code navigation, refactoring, and various other uses for code analysis. You don't need to do things the same way if your sole goal is to make it easy for developers to enter code without having to type much. Targeting that feature doesn't take years of effort and there are all sorts of creative things you can do if you're not just providing an API, but you actually control the UI too.
We know that with the help of reflection we can create an instance of a class dynamically at run time and call its methods very easily. In this sense reflection is late binding, because the action is taken at run time. I just want to know whether reflection is fast or not.
What is the performance of reflection? Is it good or bad? Is it resource-hungry? Please discuss. Thanks.
Technically speaking reflection is a performance hit. But if you're doing something that needs it then you have to use it. If you can go without it, avoid it.
EDIT
To further emphasize: reflection is neither good nor bad. It's in the Framework because there are very legitimate reasons to use it. That said, 90% of the time that I see someone using reflection, they're trying to do something the hard way, not knowing the easy route. Often it's because they don't know about generics.
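For instance, a small illustration (invented names) of the "easy route": a generic constraint replaces reflection-based construction entirely:

using System;

static class Factory
{
    // Reflection route: the type is resolved and constructed at run time,
    // with no compile-time checking.
    public static object CreateWithReflection(string typeName)
    {
        return Activator.CreateInstance(Type.GetType(typeName));
    }

    // Generic route: the compiler enforces the constraint, with full type
    // safety and no reflection visible in your code.
    public static T Create<T>() where T : new()
    {
        return new T();
    }
}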
Generally, the performance of reflection is worse than when you do the same thing without reflection. But whether it is too slow for you depends on your performance requirements (do you need it to be fast?) and on what exactly you are doing.
I wonder if anyone could point me in the direction of where I can read about the nuts and bolts of C#. What I'm interested in learning are method call costs, what it costs to create objects, and such.
My aim in learning this is to get a better understanding of how to increase the performance of an application, and to get a better understanding of how the C# language works.
The reference should preferably be a book, a book that I can read cover to cover.
CLR via C# is excellent for low level details about the CLR. It specifically covers the details of method invocation, creating new objects, garbage collection and lots more.
For actual performance numbers you should use a profiler to avoid the common pitfalls of premature optimization.
For performance profiling existing code, have a look at Eqatec Profiler. (There is a free license for personal use).
You may also need to know about garbage collection and the CLR.
What are the best/most popular ways to do aspect-oriented programming (AOP) in C#/.Net?
DynamicProxy from Castle is probably the most used tool for doing AOP on the CLR.
The Spring framework also offers AOP capabilities through its Spring.Aop namespace.
PostSharp is another well-known one: "Bringing AOP to .NET!" I only have a little experience with it, but it looks nice and is worth having a look at.
PostSharp is good. I have been using it for about a year now. It's easy to install and has a fairly shallow learning curve considering the almost godlike power it enables. Additionally, there seems to be both an active community of developers and a responsive developer behind it.
Check out the code samples on the PostSharp home page. Those are good examples of simpler aspects done with PostSharp.
I have been using the Spring.Net AOP framework for about 9 months now. It is pretty powerful and doesn't seem to impose a performance penalty in use, even though weaving is done at run time rather than during compilation. The only thing to be aware of is that although the objects you are applying advice to do not need to be aware of Spring.Net AOP, they must implement at least one interface. The documentation, which incidentally is excellent for Spring.Net in general, states that this restriction will be removed in a future version but doesn't give a roadmap.
Spring.Net AOP does not require you to use the rest of the Spring.Net framework and can be used in isolation.
I've played around with rolling my own, for several different types of things, and I've had some luck. In general, I make an interface, implement it with a class, and then make a proxy which implements the interface, does whatever precondition steps I want, calls the real object's method, and then does whatever postcondition steps I want.
One of my main annoyances with this approach is that you can't have constructors or static methods in an interface, so there's no obvious place to put that type of code. The hard part is the code generation, because you're either going to emit IL or emit C# that you then have to compile. It also forced me to think about one aspect at a time; I never got to the point where I could abstract out the "Aspect" and think in those terms.
In short: roll your own, or find a toolset you like, probably from Eric Bodden's list. A sketch of the basic proxy shape follows.
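Here is a minimal hand-rolled version of that shape (interface and names invented for illustration, and without the code-generation step):

using System;

public interface IOrderService
{
    void PlaceOrder(string item);
}

public class OrderService : IOrderService
{
    public void PlaceOrder(string item)
    {
        Console.WriteLine("Ordered " + item);
    }
}

// The proxy implements the same interface, runs the precondition steps,
// delegates to the real object, then runs the postcondition steps.
public class LoggingOrderService : IOrderService
{
    private readonly IOrderService inner;

    public LoggingOrderService(IOrderService inner) { this.inner = inner; }

    public void PlaceOrder(string item)
    {
        Console.WriteLine("Before PlaceOrder");   // precondition aspect
        inner.PlaceOrder(item);
        Console.WriteLine("After PlaceOrder");    // postcondition aspect
    }
}

// Usage: IOrderService svc = new LoggingOrderService(new OrderService());
//        svc.PlaceOrder("widget");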