I need to call a method on an object but I do not know the method name until runtime.
What are the techniques available?
(e.g. GetMethod().Invoke(), delegates, C# 4.0 dynamic)
Thanks!
The C# 4.0 dynamic functionality is going to be the easiest way to do this. In a very real sense, dynamic is "just a wrapper" around Reflection. It's a very good wrapper, though, and probably your best option.
Other ways, in approximately increasing order of difficulty:
- Using a third-party Reflection library. Not sure what's out there these days.
- Writing your own Reflection code.
- Using the CodeDOM to create code that calls the method you want to call.
- Emitting IL that does pretty much the same thing as the CodeDOM-generated code.
- Creating C# source code that you then compile into an assembly that you can call, which in turn calls the desired method.
The last three are not for the faint of heart. Your best bet is to use dynamic or write your own Reflection code. If I had had dynamic three years ago when I was writing code for something similar, I would have used it.
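As a rough illustration of the "write your own Reflection code" option, here is a minimal sketch; the Calculator class and the hard-coded method name are purely illustrative:
using System;
using System.Reflection;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

class Program
{
    static void Main()
    {
        object target = new Calculator();
        string methodName = "Add"; // in the real scenario this is only known at runtime

        MethodInfo method = target.GetType().GetMethod(methodName);
        object result = method.Invoke(target, new object[] { 3, 4 });

        Console.WriteLine(result); // 7
    }
}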
You can use Reflection to call functions that you do not know the name of until run time. Here's some doc:
http://msdn.microsoft.com/en-us/library/f7ykdhsy%28VS.80%29.aspx
Related
I have been studying Reflection; I understand some of it, but I am not getting everything related to this concept. Why do we need Reflection? What things couldn't we achieve without it?
There are many, many scenarios that reflection enables, but I group them primarily into two buckets.
Reflection enables us to write code that analyzes other code.
Consider for example the most basic question about an assembly: what types are in it? Assemblies are self-describing and reflection is the mechanism by which that description is surfaced to other code.
Suppose for example you wanted to write a program which took an assembly and did a graphical display of the relationships between the various classes in that assembly, to help you understand that code. There are such tools. They're in Visual Studio. Someone wrote those tools. They did not appear by magic. Reflection is the mechanism designed into the .NET framework that enables you or me or anyone else to write tools that understand code.
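As a small sketch of that first bucket, here is code that analyzes another assembly by listing its public types and their base classes (the choice of assembly is purely for illustration):
using System;
using System.Reflection;

class TypeLister
{
    static void Main()
    {
        // Inspect whichever assembly contains System.String, just as an example.
        Assembly asm = typeof(string).Assembly;

        foreach (Type t in asm.GetTypes())
        {
            if (t.IsPublic && t.BaseType != null)
                Console.WriteLine("{0} : {1}", t.FullName, t.BaseType.FullName);
        }
    }
}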
Reflection enables us to move compile time bindings to runtime.
Suppose you have a static method Foo.Bar(). When you put a call to Foo.Bar() in your program, you know with 100% certainty that the method you think is going to be called is actually going to be called. We call static methods "static" because the binding from the name Bar to the code that gets called can be understood statically -- that is, without running the program.
Now consider a virtual method Blah() on a base class. When you call whatever.Blah() you don't know exactly which Blah() will be called at compile time, but you know that some method Blah() with no arguments will be called on some type that is the runtime type of whatever, and that type is equal to or derived from the type which declares Blah(). (In fact you know more: you know that it is equal to or derived from the compile time type of whatever.) Virtual binding is a form of dynamic binding, but it is not fully dynamic. There's no way for the user to decide that this call should be to a different method on a different type hierarchy.
Reflection enables us to make calls that are bound entirely at runtime, based entirely on user choices if we like. We pay a performance penalty, and we lose compile-time type safety, but we gain the flexibility to decide 100% at runtime what code we call. There are scenarios where that's a reasonable tradeoff.
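For example, here is a sketch in which both the type name and the method name are plain strings that could just as well have come from user input, so the binding happens entirely at runtime:
using System;
using System.Reflection;

class LateBoundCall
{
    static void Main()
    {
        string typeName = "System.Text.StringBuilder";
        string methodName = "Append";

        // Nothing here is known to the compiler; everything is resolved at runtime.
        Type type = Type.GetType(typeName);
        object instance = Activator.CreateInstance(type);

        MethodInfo method = type.GetMethod(methodName, new[] { typeof(string) });
        method.Invoke(instance, new object[] { "hello" });

        Console.WriteLine(instance); // hello
    }
}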
Reflection is such a deep part of the .NET framework that you often don't know that you're doing it (see Attributes and LINQ for instance). And when you do know you're doing it, even if it feels wrong, it might be the only way to achieve a particular objective.
Apart from the two broad areas that Eric mentioned, here are a few others. There are lots more; these are just some that come to mind immediately.
Serialization (and similar)
Whether you're using XML or JSON or rolling your own, serializing objects is much easier when you don't have to write specific code for each class to enable serialization. Reflection enables you to enumerate the properties in your object that have been flagged for (or not flagged against) serialization and write them to the output.
This isn't just about saving state, though. Reflection allows us to write generic methods that can produce business output too, like CSV or XLSX files from an arbitrary collection. I get a lot of mileage out of my ToCSV(...) and ToExcel(...) extensions for things like producing downloadable versions of data sets in my web-based reporting.
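The ToCSV(...) extension itself isn't reproduced here, but a rough sketch of the kind of reflection-driven property enumeration it relies on might look like this (names and details are illustrative, and no escaping is done):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using System.Text;

static class CsvExtensions
{
    public static string ToCsv<T>(this IEnumerable<T> rows)
    {
        // Discover the columns once, from the public properties of T.
        PropertyInfo[] props = typeof(T).GetProperties();
        var sb = new StringBuilder();

        // Header row from the property names.
        sb.AppendLine(string.Join(",", props.Select(p => p.Name).ToArray()));

        // One line per object, one column per property value.
        foreach (T row in rows)
        {
            var values = props.Select(p => Convert.ToString(p.GetValue(row, null)));
            sb.AppendLine(string.Join(",", values.ToArray()));
        }
        return sb.ToString();
    }
}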
Accessing Hidden Data
Yes, I know, this is a dodgy one. And yeah, Eric is probably going to slap me for this, but...
There's a lot of code out there - I'm looking at you, ASP.NET - that hides interesting and useful stuff behind private or protected members. Sometimes the only way to get at it is to use reflection. Sometimes it's not the only way, but it can be the simpler way.
Attributes
Every time you tag an Attribute onto one of your classes, methods, etc. you are implicitly providing data that is going to be accessed through reflection. Want to use those attributes yourself? Reflection is the only way you can get at them.
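A minimal sketch of reading your own attribute back via reflection; the attribute and class names are made up for illustration:
using System;

[AttributeUsage(AttributeTargets.Class)]
public class AuditedAttribute : Attribute
{
    public string Owner { get; set; }
}

[Audited(Owner = "reporting team")]
public class Invoice { }

class AttributeDemo
{
    static void Main()
    {
        // GetCustomAttributes is the reflection call that surfaces the metadata.
        object[] attrs = typeof(Invoice).GetCustomAttributes(typeof(AuditedAttribute), false);
        if (attrs.Length > 0)
        {
            var audited = (AuditedAttribute)attrs[0];
            Console.WriteLine(audited.Owner); // reporting team
        }
    }
}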
LINQ and Other Expressions
This is really important stuff these days. If you've ever used LINQ to SQL, Entity Framework, etc. then you've used Expression in some way. You write a simple little POCO to represent a row in your database table and everything else gets handled by reflection. When you write a predicate expression the system is using the reflection model to build structures that are then processed (visited) to build an SQL statement.
Expressions aren't just for LINQ either; you can do some really interesting things yourself, once you know what you're doing. I have code to generate line parsers for CSV import that run pretty damn quickly when compiled to Func<string, TRecord>. These days I tend to use a mapper somebody else wrote, but at the time I needed to slice a few more % off the total import time for a file with 20K records that was uploaded to a website periodically.
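Here is a tiny sketch of that technique: building an expression tree by hand and compiling it to a delegate (the expression itself is a trivial stand-in for a real parser):
using System;
using System.Linq.Expressions;

class ExpressionDemo
{
    static void Main()
    {
        // Equivalent to the lambda: s => s.Length * 2
        ParameterExpression s = Expression.Parameter(typeof(string), "s");
        Expression body = Expression.Multiply(
            Expression.Property(s, "Length"),
            Expression.Constant(2));

        // Compile produces a real delegate, so repeated calls are fast.
        Func<string, int> f = Expression.Lambda<Func<string, int>>(body, s).Compile();

        Console.WriteLine(f("hello")); // 10
    }
}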
P/Invoke Marshalling
This one is a big deal behind the scenes and occasionally in the foreground too. When you want to call a Windows API function or use a native DLL, P/Invoke gives you ways to achieve this without having to mess about with building memory buffers in both directions. The marshalling methods use reflection to do translation of certain things - strings and so on being the obvious example - so that you don't have to get your hands dirty. All based on the Type object that is the foundation of reflection.
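For example, a classic P/Invoke declaration looks like this; the runtime marshals the managed strings into native buffers based purely on the declared signature:
using System;
using System.Runtime.InteropServices;

class NativeDemo
{
    // The marshaller converts the .NET strings to native Unicode buffers for us.
    [DllImport("user32.dll", CharSet = CharSet.Unicode)]
    static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

    static void Main()
    {
        MessageBox(IntPtr.Zero, "Hello from managed code", "P/Invoke", 0);
    }
}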
Fact is, without reflection the .NET framework wouldn't be what it is. No Attributes, no Expressions, probably a lot less interop between the languages. No automatic marshalling. No LINQ... at least in the way we often use it now.
I have a class in an SDK, and I am interested in calling every property on it. I know that the only way to do this (I think it's the only way) is to use reflection, which most people claim is slow (although I've seen articles which illustrate how in some cases it is not as slow as originally thought).
Is there a better way than to loop through and invoke each property in the target class?
Also, why is reflection deemed to be so slow?
It might be worth taking a look at TypeDescriptors. As far as I am aware they have some performance benefits over using reflection and work in a slightly different way (they cache metadata, for example). The MSDN article confused me in the way it describes how reflection is used by type descriptors, so you might need to find a more expansive explanation (the third link might therefore be more helpful).
The API for type descriptors is similar to that used for reflection.
Navigate to:
http://msdn.microsoft.com/en-us/library/ms171819.aspx
http://msdn.microsoft.com/en-us/library/system.componentmodel.typedescriptor.aspx
And
http://blogs.msdn.com/b/parthopdas/archive/2006/01/03/509103.aspx
Some loose answers to your questions then:
1) Because of caching and a slightly different implementation to reflection, TypeDescriptors may provide a performance improvement over reflection alone.
2) You may be able to retrieve the properties and (invoke/set/get?) them in one fell swoop. This may be a case of calling an invoke-type method and writing a lambda statement to perform some action on the collection returned?
You can use reflection to generate C# code that directly accesses all the properties you are interested in. That would be a faster way to perform the calls.
I think Reflection is not a bad option for you though. It's not that slow.
I would use reflection to generate the code that calls all the properties. Then you don't have to worry about reflection being slow.
I've been told to use Reflection.Emit instead of PropertyInfo.GetValue / SetValue because it is faster this way.
But I don't really know which parts of Reflection.Emit to use, or how to use them as a substitute for GetValue and SetValue. Can anybody help me with this?
Just an alternative answer; if you want the performance, but a similar API - consider HyperDescriptor; this uses Reflection.Emit underneath (so you don't have to), but exposes itself on the PropertyDescriptor API, so you can just use:
PropertyDescriptorCollection props = TypeDescriptor.GetProperties(obj);
props["Name"].SetValue(obj, "Fred");
DateTime dob = (DateTime)props["DateOfBirth"].GetValue(obj);
One line of code to enable it, and it handles all the caching etc.
If you're fetching/setting the same property many times, then using something to build a typesafe method will indeed be faster than reflection. However, I would suggest using Delegate.CreateDelegate instead of Reflection.Emit. It's easier to get right, and it's still blazingly fast.
I've used this in my Protocol Buffers implementation and it made a huge difference vs PropertyInfo.GetValue/SetValue. As others have said though, only do this after proving that the simplest way is too slow.
I have a blog post with more details if you decide to go down the CreateDelegate route.
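The blog post itself isn't reproduced here, but a minimal sketch of the CreateDelegate approach (with an illustrative Person class) looks something like this:
using System;
using System.Reflection;

public class Person
{
    public string Name { get; set; }
}

static class GetterDemo
{
    static void Main()
    {
        PropertyInfo prop = typeof(Person).GetProperty("Name");

        // Bind the getter method to a strongly typed delegate; this is done once.
        var getName = (Func<Person, string>)Delegate.CreateDelegate(
            typeof(Func<Person, string>), prop.GetGetMethod());

        var p = new Person { Name = "Fred" };

        // Subsequent calls avoid the per-call cost of PropertyInfo.GetValue.
        Console.WriteLine(getName(p)); // Fred
    }
}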
Use PropertyInfo.GetValue/SetValue
If you have performance problems cache the PropertyInfo object (don't repeatedly call GetProperty)
If - and only if - the use of reflection is the performance bottleneck of your app (as seen in a profiler) use Delegate.CreateDelegate
If - and really really only if - you are absolutely sure that reading/writing the values is still the worst bottleneck it's time to start learning about the fun world of generating IL in runtime.
I really doubt it's worth it; each of those levels increases the complexity of the code more than it improves performance - do them only if you have to.
And if runtime access to properties is your performance bottleneck, it's probably better to go for compile-time access (it's hard to be both generic and super high performance at the same time).
The purpose of Reflection.Emit is completely different from that of PropertyInfo.Get/SetValue. Via Reflection.Emit, you can directly emit IL code, for example into dynamically compiled assemblies, and execute this code. Of course, this code could access your properties.
I seriously doubt that this will be much quicker than using PropertyInfo in the end, and it's not made for this purpose either. You could use Reflection.Emit as the code generator for a small compiler, for example.
Using Reflection.Emit seems a little too "clever", as well as a premature optimization. If you profile your application, and you find that the GetValue/SetValue Reflection is the bottleneck, then you could consider optimizing, but probably not even then...
I require the ability to preprocess a number of C# files as a prebuild step for a project, detect the start of methods, and insert generated code at the start of the method, before any existing code. I am, however, having a problem detecting the opening of a method. I initially tried a regular expression to match, but ended up with far too many false positives.
I would use reflection, but the MethodInfo class does not reference the point in the original source.
EDIT: What I am really trying to do here is to support pre-conditions on methods, that pre-condition code being determined by attributes on the method. My initial thought being that I could look for the beginning of the method, and then insert generated code for handling the pre-conditions.
Is there a better way to do this? I am open to creating a Visual Studio Addin if need be.
This is a .NET 2.0 project.
Cheers
PostSharp or Mono.Cecil will let you do this cleanly by altering the generated code, without getting into writing a C# parser, which is unlikely to be core business for you.
I haven't done anything of consequence with PostSharp, but I would guess it's more appropriate than Mono.Cecil for implementing something like preconditions or AOP. Alternatively, you might be able to do something AOP-like with a DI container like Ninject.
But of course the applicability of this idea depends - you didn't say much other than that you wanted to insert code at the start of methods...
EDIT: In light of your desire to do preconditions... Code Contracts in .NET 4 is definitely in that direction.
What sort of a tool do you have? What's wrong with shipping a single Mono.Cecil.dll? Either way, something other than a parser is the tool for the job.
I am sure there is an easier way, but this might be a good excuse to take MGrammar for a spin.
I happened upon a brief discussion about C# runtime compilation on another site recently while searching for something else, and thought the idea was interesting. Have you ever used this? I'm trying to determine how/when one might use this and what problem it solves. I'd be very interested in hearing how you've used it or in what context it makes sense.
Thanks much.
Typically, I see this used in cases where you are currently using Reflection and need to optimize for performance.
For example, instead of using reflection to call method X, you generate a Dynamic Method at runtime to do this for you.
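A rough sketch of that idea, using DynamicMethod from Reflection.Emit to build a typed getter instead of calling PropertyInfo.GetValue each time (the Person class is illustrative):
using System;
using System.Reflection;
using System.Reflection.Emit;

public class Person
{
    public string Name { get; set; }
}

class DynamicMethodDemo
{
    static void Main()
    {
        MethodInfo getter = typeof(Person).GetProperty("Name").GetGetMethod();

        // Build a tiny method at runtime: string GetName(Person p) { return p.Name; }
        var dm = new DynamicMethod("GetName", typeof(string),
                                   new[] { typeof(Person) }, typeof(Person));
        ILGenerator il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);           // load the Person argument
        il.Emit(OpCodes.Callvirt, getter);  // call the property getter
        il.Emit(OpCodes.Ret);

        var getName = (Func<Person, string>)dm.CreateDelegate(typeof(Func<Person, string>));
        Console.WriteLine(getName(new Person { Name = "Fred" })); // Fred
    }
}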
You can use this to add scripting support to your application. For examples look here or here.
It is quite easily possible to publish parts of your internal object framework to the scripting part, so you could with relative ease add something to your application that has the same effect as for example VBA for Office.
I've seen this (runtime compilation / use of System.Reflection.Emit classes) in generating dynamic proxies ( Code project sample ) or other means of optimizing reflection calls (time-wise).
At least one case where you might use it is when generating dynamic code. For example, the framework uses this internally to generate XML serializers on the fly. After looking into a class at runtime, it can generate the code to serialize / deserialize the class. It then compiles that code and uses it as needed.
In the same way you can generate code to handle arbitrary DB tables etc. and then compile and load the generated assembly.
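A minimal sketch of compiling generated C# source at runtime with the CodeDOM and then calling into it via reflection (the generated class and method names are made up):
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class RuntimeCompileDemo
{
    static void Main()
    {
        // In a real scenario this source would be generated from metadata, e.g. a DB schema.
        string source = @"
            public static class Generated
            {
                public static int Double(int x) { return x * 2; }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters { GenerateInMemory = true };

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
            throw new InvalidOperationException("Compilation failed");

        Type generated = results.CompiledAssembly.GetType("Generated");
        object result = generated.GetMethod("Double").Invoke(null, new object[] { 21 });
        Console.WriteLine(result); // 42
    }
}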
Well, all C# code is compiled at run time, since the CLR uses a JIT (just-in-time) compiler. I assume you are referring to Reflection.Emit to create classes etc. on the fly. Here's an example I have seen recently in the XML-RPC.NET library.
I create a C# interface that has the same signature as an XML-RPC service's method calls, e.g.
public interface IMyProxy : IXmlRpcProxy
{
    [XmlRpcMethod]
    int Add(int a, int b);
}
Then in my code I call something like
IMyProxy proxy = (IMyProxy)XmlRpcFactory.Create(typeof(IMyProxy));
This uses run-time code generation to create a fully functional proxy for me, so I can use it like this:
int result = proxy.Add(1, 2);
This then handles the XML-RPC call for me. Pretty cool.
I used runtime compiler services from .NET in my diploma thesis. Basically, it was about visually creating some graphical component for a process visualization, which is generated as C# code, compiled into an assembly and can then be used on the target system without being interpreted, to make it faster and more compact. And, as a bonus, the generated images could be packaged into the very same assembly as resources.
The other use of that was in Java. I had an application that had to plot a potentially expensive function using some numerical algorithm (was back at university) the user could enter. I put the entered function into a class, compiled and loaded it and it was then available for relatively fast execution.
So, these are my two experiences where runtime code generation was a good thing.
Something I used it for was allowing C# and VB code to be run by the user ad hoc. They could type in a line of code (or a couple of lines) and it would be compiled, loaded into an app domain, executed, and then unloaded. This probably isn't the best example of its usage, but it's an example nonetheless.