cost of .net dynamic proxies - c#

What is the cost of using dynamic proxies?
I do not want to clutter my project with interface implementations, so I am considering using dynamic proxies created by a third-party library like LinFu, Castle, or Unity. Do they generate one proxy per interface, or do I get one for each call?
It is a web app, so what is the performance impact in the long run?
I am also using EF 4.1 (CTP5 at the moment), which creates proxy classes itself, so I wonder whether I can use EF's own dynamic proxy creation tools.
P.S. Yes, my interfaces are implemented by concrete classes along with other interfaces and base classes, but sometimes I only need the interface portion and not the extra members that come with the concrete class.
All interfaces declare just some part of an EF 4.1 POCO, so just getters and setters.

The open-source Impromptu-Interface library requires C# 4.0. It creates a lightweight proxy type for each interface/implementation-type combination you use and keeps those types cached.
So creating an interface proxy around a given implementation (an ExpandoObject counts as one type no matter how you set it up) has a one-time cost of generating the proxy type, plus an Activator.CreateInstance each time you create a proxy (which isn't bad). For each call there is a static invocation, which is what you'd have without the proxy, plus a DLR dynamic invocation, which is very well optimized thanks to Microsoft.
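For a sense of what that looks like in use, here is a minimal sketch (the IPerson interface is just an illustration, and it assumes the ImpromptuInterface package is referenced):

using System;
using System.Dynamic;
using ImpromptuInterface;

public interface IPerson
{
    string Name { get; set; }
}

class Program
{
    static void Main()
    {
        dynamic expando = new ExpandoObject();
        expando.Name = "Alice";

        // Generates (and caches) a proxy type for the IPerson/ExpandoObject combination,
        // then instantiates it around the expando.
        IPerson person = Impromptu.ActLike<IPerson>(expando);

        Console.WriteLine(person.Name); // Alice
    }
}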

It looks like you need a stub rather than a dynamic proxy. Perhaps you might want to take a look at Moq. As far as I know it creates a different instance every time you create a mock; I don't know whether it keeps some sort of type cache internally, though. Mind you, since it's a library targeted at unit tests, this kind of use would probably be unorthodox.

Related

C# Forcing static fields [duplicate]

I am developing a set of classes that implement a common interface. A consumer of my library shall expect each of these classes to implement a certain set of static functions. Is there any way I can decorate these classes so that the compiler will catch the case where one of the functions is not implemented?
I know it will eventually be caught when building the consuming code. And I also know how to get around this problem using a kind of factory class.
Just curious to know if there is any syntax/attributes out there for requiring static functions on a class.
Edit: Removed the word 'interface' to avoid confusion.
No, there is no language support for this in C#. There are two workarounds that I can think of immediately:
use reflection at runtime; cross your fingers and hope...
use a singleton / default-instance / similar to implement an interface that declares the methods
(update)
Actually, as long as you have unit-testing, the first option isn't actually as bad as you might think if (like me) you come from a strict "static typing" background. The fact is; it works fine in dynamic languages. And indeed, this is exactly how my generic operators code works - it hopes you have the static operators. At runtime, if you don't, it will laugh at you in a suitably mocking tone... but it can't check at compile-time.
No. Basically it sounds like you're after a sort of "static polymorphism". That doesn't exist in C#, although I've suggested a sort of "static interface" notion which could be useful in terms of generics.
One thing you could do is write a simple unit test to verify that all of the types in a particular assembly obey your rules. If other developers will also be implementing the interface, you could put that test code into some common place so that everyone implementing the interface can easily test their own assemblies.
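Such a test can be a few lines of reflection. Here is a minimal sketch (NUnit shown; the IWidget marker interface and the required static Create() method are hypothetical placeholders for whatever your convention is):

using System.Linq;
using System.Reflection;
using NUnit.Framework;

// Hypothetical convention: every concrete type implementing IWidget
// must also expose a public static Create() method.
public interface IWidget { }

[TestFixture]
public class StaticContractTests
{
    [Test]
    public void Every_IWidget_implementation_has_a_static_Create_method()
    {
        var offenders = typeof(IWidget).Assembly.GetTypes()
            .Where(t => typeof(IWidget).IsAssignableFrom(t) && t.IsClass && !t.IsAbstract)
            .Where(t => t.GetMethod("Create", BindingFlags.Public | BindingFlags.Static) == null)
            .Select(t => t.FullName)
            .ToList();

        Assert.IsEmpty(offenders, "Types missing static Create(): " + string.Join(", ", offenders));
    }
}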
This is a great question and one that I've encountered in my projects.
Some people hold that interfaces and abstract classes exist for polymorphism only, not for forcing types to implement certain methods. Personally, I consider polymorphism a primary use case, and forced implementation a secondary. I do use the forced implementation technique fairly often. Typically, it appears in framework code implementing a template pattern. The base/template class encapsulates some complex idea, and subclasses provide numerous variations by implementing the abstract methods. One pragmatic benefit is that the abstract methods provide guidance to other developers implementing the subclasses. Visual Studio even has the ability to stub the methods out for you. This is especially helpful when a maintenance developer needs to add a new subclass months or years later.
The downside is that there is no specific support for some of these template scenarios in C#. Static methods are one. Another one is constructors; ideally, ISerializable should force the developer to implement the protected serialization constructor.
The easiest approach probably is (as suggested earlier) to use an automated test to check that the static method is implemented on the desired types. Another viable idea already mentioned is to implement a static analysis rule.
A third option is to use an Aspect-Oriented Programming framework such as PostSharp. PostSharp supports compile-time validation of aspects. You can write .NET code that reflects over the assembly at compile time, generating arbitrary warnings and errors. Usually, you do this to validate that an aspect usage is appropriate, but I don't see why you couldn't use it for validating template rules as well.
Unfortunately, no, there's nothing like this built into the language.
While there is no language support for this, you could use a static analysis tool to enforce it. For example, you could write a custom rule for FxCop that detects an attribute or interface implementation on a class and then checks for the existence of certain static methods.
The singleton pattern does not help in all cases. My example is from an actual project of mine. It is not contrived.
I have a class (let's call it "Widget") that inherits from a class in a third-party ORM. If I instantiate a Widget object (therefore creating a row in the db) just to make sure my static methods are declared, I'm making a bigger mess than the one I'm trying to clean up.
If I create this extra object in the data store, I've got to hide it from users, calculations, etc.
I use interfaces in C# to make sure that I implement common features in a set of classes.
Some of the methods that implement these features require instance data to run. I code these methods as instance methods, and use a C# interface to make sure they exist in the class.
Some of these methods do not require instance data, so they are static methods. If I could declare interfaces with static methods, the compiler could check whether or not these methods exist in the class that says it implements the interface.
No, there would be no point in this feature. Interfaces are basically a scaled down form of multiple inheritance. They tell the compiler how to set up the virtual function table so that non-static virtual methods can be called properly in descendant classes. Static methods can't be virtual, hence, there's no point in using interfaces for them.
The approach that gets you closer to what you need is a singleton, as Marc Gravell suggested.
Interfaces, among other things, let you provide some level of abstraction to your classes so you can use a given API regardless of the type that implements it. However, since you DO need to know the type of a static class in order to use it, why would you want to enforce that class to implement a set of functions?
Maybe you could use a custom attribute like [ImplementsXXXInterface] and provide some run time checking to ensure that classes with this attribute actually implement the interface you need?
If you're just after getting those compiler errors, consider this setup:
1. Define the methods in an interface.
2. Declare the methods as abstract in a base class that implements the interface.
3. Implement the public static methods, and have the abstract method overrides simply call the static methods.
It's a little bit of extra code, but you'll know when someone isn't implementing a required method.
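A minimal sketch of that arrangement (the IParser/ParserBase names are purely illustrative):

public interface IParser
{
    object Parse(string input);
}

public abstract class ParserBase : IParser
{
    public abstract object Parse(string input);
}

public class IntParser : ParserBase
{
    // The "required" static method; if an implementer forgets it,
    // the forwarding override below fails to compile.
    public static object ParseStatic(string input)
    {
        return int.Parse(input);
    }

    public override object Parse(string input)
    {
        return ParseStatic(input); // forward the instance call to the static implementation
    }
}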

Generic method to wrap a function

Say I want to wrap a function in another function, so as to add some functionality to the wrapped function. But I don't know the return type or parameters beforehand, as the methods are generated as a web service proxy.
My first train of thought was using Func<T>. But some functions might return void, in which case Action<T> would be more appropriate.
Now my question: is there a nice generic way to achieve this? Is there some pattern I need to look for?
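For concreteness, the kind of wrapper I have in mind would be something like this sketch (the Stopwatch timing is just a stand-in for the added functionality, and the proxy calls in the usage comments are made up):

using System;
using System.Diagnostics;

public static class CallWrapper
{
    // For wrapped calls that return a value.
    public static TResult Wrap<TResult>(Func<TResult> call)
    {
        var sw = Stopwatch.StartNew();
        try
        {
            return call();
        }
        finally
        {
            Console.WriteLine("Call took {0} ms", sw.ElapsedMilliseconds);
        }
    }

    // For wrapped calls that return void: adapt the Action to a Func<object>.
    public static void Wrap(Action call)
    {
        Wrap<object>(() => { call(); return null; });
    }
}

// Usage against a generated proxy (method names made up):
// var result = CallWrapper.Wrap(() => proxy.GetTitle(42));
// CallWrapper.Wrap(() => proxy.DeleteTitle(42));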
Well, the Facade Pattern comes to mind... It's not a very automatic way of doing things, but it works. You basically just put another interface in front of the proxy and call that instead. You can then add any functionality that you desire.
Another way to approach this is with aspect-oriented programming. I've used PostSharp (when it was free) to do this in the past. You can do things like add pre/post processing in the function by adding an attribute to a method or property. The AOP components then use code weaving to rewrite your IL to include the code that you've referenced. Note that this can significantly slow the build process.
As you say "I don't know the return type or parameters on beforehand", I think a Dynamic Proxy is what you
need.
Unfortunately, I only know about the Dynamic Proxy in Java. But I am sure there is something similar for C#.
Try Googling "Dynamic Proxy C#".
For example, there seems to be an implementation for C# here: http://www.castleproject.org/dynamicproxy/
So, what IS a Dynamic Proxy?
From the JavaDoc http://docs.oracle.com/javase/1.3/docs/guide/reflection/proxy.html#api:
A dynamic proxy class is a class that implements a list of interfaces specified at runtime such that a method invocation through one of the interfaces on an instance of the class will be encoded and dispatched to another object through a uniform interface. Thus, a dynamic proxy class can be used to create a type-safe proxy object for a list of interfaces without requiring pre-generation of the proxy class, such as with compile-time tools. Method invocations on an instance of a dynamic proxy class are dispatched to a single method in the instance's invocation handler, and they are encoded with a java.lang.reflect.Method object identifying the method that was invoked and an array of type Object containing the arguments.

Why use services (IServiceProvider)?

I'm coming to this question from exploring the XNA framework, but I'd like a general understanding.
ISomeService someService = (ISomeService)Game.Services.GetService(typeof(ISomeService));
and then we do something with whatever functions/properties are in the interface:
someService.DoSomething(); // let's say not a static method but doesn't matter
I'm trying to figure out why this kind of implementation is any better than:
myObject = InstanceFromComponentThatWouldProvideTheService();
myObject.DoSomething();
When you use the services way to get your interface, you're really just getting an instance of the component that provides the service anyway. Right? You can't have an interface "instance". And there's only one class that can be the provider of a service. So all you really have is an instance of your component class, with the only difference being that you only have access to a subset of the component object (whatever subset is in the interface).
How is this any different from just having public and private methods and properties? In other words, the public methods/properties of the component is the "interface", and we can stop with all this roundaboutness. You can still change how you implement that "interface" without breaking anything (until you change the method signature, but that would break the services implementation too).
And there is going to be a 1-to-1 relationship between the component and the service anyway (only one class can register as the provider of a given service), and I can't see a class being a provider of more than one service (SRP and all that).
So I guess I'm trying to figure out what problem this kind of framework is meant to solve. What am I missing?
Allow me to explain it via an example from XNA itself:
The ContentManager constructor takes a IServiceProvider. It then uses that IServiceProvider to get a IGraphicsDeviceService, which it in turn uses to get a GraphicsDevice onto which it loads things like textures, effects, etc.
It cannot take a Game - because that class is entirely optional (and is in a dependent assembly). It cannot take a GraphicsDeviceManager (the commonly used implementation of IGraphicsDeviceService) because that, like Game, is an optional helper class for setting up the GraphicsDevice.
It can't take a GraphicsDevice directly, because you may be creating a ContentManager before the GraphicsDevice is created (this is exactly what the default Game class does). So it takes a service that it can retrieve a graphics device from later.
Now here is the real kicker: It could take a IGraphicsDeviceService and use that directly. BUT: what if at some time in the future the XNA team adds (for example) an AudioDevice class that some content types depend on? Then you'd have to modify the method signature of the ContentManager constructor to take an IAudioDeviceService or something - which will break third-party code. By having a service provider you avoid this issue.
In fact - you don't have to wait for the XNA team to add new content types requiring common resources: When you write a custom ContentTypeReader you can get access to the IServiceProvider from the content manager and query it for whatever service you like - even your own! This way your custom content types can use the same mechanism as the first-class XNA graphics types use, without the XNA code having to know about them or the services they require.
(Conversely, if you never load graphics types with your ContentManager, then you never have to provide it with a graphics device service.)
This is, of course, all well and good for a library like XNA, which needs to be updatable without breaking third-party code. Especially for something like ContentManager that is extendible by third parties.
However: I see lots of people running around using DrawableGameComponent, finding that you can't get a shared SpriteBatch into it easily, and so creating some kind of sprite-batch-service to pass that around. This is a lot more complication than you need for a game which generally has no versioning, assembly-dependency, or third-party extensibility requirements to worry about. Just because Game.Services exists, doesn't mean you have to use it! If you can pass things (like a SpriteBatch instance) around directly - just do that - it's much simpler and more obvious.
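To make the service-provider mechanics concrete outside of XNA, here is a minimal sketch using the BCL's System.ComponentModel.Design.ServiceContainer (the IAudioSettingsService is a made-up example service); Game.Services works the same way:

using System;
using System.ComponentModel.Design;

// Made-up example service; in XNA this role is played by IGraphicsDeviceService, etc.
public interface IAudioSettingsService
{
    float MasterVolume { get; }
}

public class AudioSettings : IAudioSettingsService
{
    public float MasterVolume { get { return 0.8f; } }
}

class Program
{
    static void Main()
    {
        // The container is populated once, near the application's entry point.
        var services = new ServiceContainer();
        services.AddService(typeof(IAudioSettingsService), new AudioSettings());

        // Consumers only ever see IServiceProvider and ask for what they need,
        // exactly as ContentManager asks for IGraphicsDeviceService.
        IServiceProvider provider = services;
        var audio = (IAudioSettingsService)provider.GetService(typeof(IAudioSettingsService));
        Console.WriteLine(audio.MasterVolume);
    }
}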
See http://en.wikipedia.org/wiki/Dependency_inversion_principle (and its links) for a good start on the architectural principles behind it.
Interfaces are clearer and easier to mock.
That can be important, depending on your unit test policy.
Using a service provider is also a way of better controlling what portions of your code have access to certain other portions of your code. Similarly to passing an object through your code, you can pass an IServiceProvider implementation through the code to specific modules. This would allow for those modules to access certain services that are accessible through the service provider.
You can have many classes implement the IServiceProvider interface, each of which could provide access to one or more services - they are not restricted to returning a single instance (whether that be to themselves or another object).
For example, a use may be to have an IServiceProvider that contains services for keyboard handling, mouse handling and AI algorithms. Passing this interface to different modules or managers within your code will allow those modules or managers to retrieve the services they require (such as an EnemyManager needing access to the AI service).

Does Dependency Injection (DI) rely on Interfaces?

This may seem obvious to most people, but I'm just trying to confirm that Dependency Injection (DI) relies on the use of Interfaces.
More specifically, in the case of a class which has a certain Interface as a parameter in its constructor or a certain Interface defined as a property (aka. Setter), the DI framework can hand over an instance of a concrete class to satisfy the needs of that Interface in that class. (Apologies if this description is not clear. I'm having trouble describing this properly because the terminology/concepts are still somewhat new to me.)
The reason I ask is that I currently have a class that has a dependency of sorts. Not so much an object dependency, but a URL. The class looks like this [C#]:
using System.Web.Services.Protocols;

public partial class SomeLibraryService : SoapHttpClientProtocol
{
    public SomeLibraryService()
    {
        this.Url = "http://MyDomainName.com:8080/library-service/jse";
    }
}
The SoapHttpClientProtocol class has a Public property called Url (which is a plain old "string") and the constructor here initializes it to a hard-coded value.
Could I possibly use a DI framework to inject a different value at construction? I'm thinking not since this.Url isn't any sort of Interface; it's a String.
[Incidentally, the code above was "auto-generated by wsdl", according to the comments in the code I'm working with. So I don't particularly want to change this code, although I don't see myself re-generating it either. So maybe changing this code is fine.]
I could see myself making an alternate constructor that takes a string as a parameter and initializes this.Url that way, but I'm not sure that's the correct approach with regard to keeping things loosely coupled and separating concerns (SoC).
Any advice for this situation?
DI really just means that a class won't construct its external dependencies and won't manage the lifetime of those dependencies. Dependencies can be injected either via the constructor or via a method parameter. Interfaces or abstract types are commonly used to clarify the contract the consumer expects from its dependency; however, simple types can be injected as well in some cases.
For example, a class in a library might call HttpContext.Current internally, which makes arbitrary assumptions about the application the code will be hosted in. A DI version of the library method would expect an HttpContext instance to be injected via parameter, etc.
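As a sketch of that last point (the method names are illustrative):

using System.Web;

public static class VisitorTracker
{
    // Hidden dependency: reaches into ambient static state, which assumes
    // the code is running inside an ASP.NET request.
    public static string GetVisitorIp()
    {
        return HttpContext.Current.Request.UserHostAddress;
    }

    // Injected dependency: the caller supplies the context, so the method
    // can be used (and tested) with any HttpContextBase implementation.
    public static string GetVisitorIp(HttpContextBase context)
    {
        return context.Request.UserHostAddress;
    }
}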
It's not required to use interfaces -- you could use concrete types or abstract base classes. But many of the advantages of DI (such as being able to change an implementation of a dependency) come when using interfaces.
Castle Windsor (the DI framework I know best), allows you to map objects in the IoC container to Interfaces, or to just names, which would work in your case.
Dependency Injection is a way of organizing your code. Maybe some of your confusion comes from the fact that there is not one official way to do it. It can be achieved using "regular" C# code, or by using a framework like Castle Windsor. Sometimes (often?) this involves using interfaces. No matter how it is achieved, the big-picture goal of DI is usually to make your code easier to test and easier to modify later on.
If you were to inject the URL in your example via a constructor, that could be considered "manual" DI. The Wikipedia article on DI has more examples of manual vs framework DI.
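In your case that could be as simple as adding a second constructor in another file of the partial class, so the wsdl-generated file stays untouched (a sketch; the alternate URL is made up):

public partial class SomeLibraryService
{
    // "Manual" DI: whoever composes the application decides which endpoint to use.
    public SomeLibraryService(string url)
    {
        this.Url = url;
    }
}

// e.g. at the composition root:
// var service = new SomeLibraryService("http://SomeOtherHost.example.com:8080/library-service/jse");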
I would like to answer with a focus on using interfaces in .NET applications. Polymorphism in .NET can be achieved through virtual or abstract methods, or interfaces.
In all cases, there is a method signature with no implementation at all or an implementation that can be overridden.
The 'contract' of a function (or even a property) is defined, but how the method is implemented (the logical guts of it) can differ at runtime, determined by which subclass is instantiated and passed in to the method or constructor, or set on a property (the act of 'injection').
The official .NET type design guidelines advocate using abstract base classes over interfaces since they have better options for evolving them after shipping, can include convenience overloads and are better able to self-document and communicate correct usage to implementers.
However, care must be taken not to add any logic to them. The temptation to do so has burned people in the past, so many people use interfaces instead; many other people use interfaces simply because that's what the programmers sitting around them do.
It's also interesting to point out that while DI itself is rarely overused, using a framework to perform the injection is quite often overused, to the detriment of increased complexity; a chain reaction can take place where more and more types are needed in the container even though they are never 'switched'.
IoC frameworks should be used sparingly, usually only when you need to swap out objects at runtime, according to the environment or configuration. This usually means switching major component "seams" in the application such as the repository objects used to abstract your data layer.
For me, the real power of an IoC framework is to switch implementation in places where you have no control over creation. For example, in ASP.NET MVC, the creation of the controller class is performed by the ASP.NET framework, so injecting anything is impossible. The ASP.NET framework has some hooks that IoC frameworks can use to 'get in-between' the creation process and perform their magic.
Luke

Implement Interface Without Creating Implementation (Dynamic Proxies?)

I've been working on creating my own IoC container for learning purposes. After asking a couple questions about them, I was shown that creating a factory to "resolve" the objects was the best solution (see third solution here). User Krzysztof Koźmic showed that Castle Windsor actually can implement this for you.
I've been reading the source of CW all morning. I know that when Resolve is called, it "returns the interface". How does this interface "intercept" calls (since there is no implementation behind it) and call its own methods?
I know there's obviously some reflection trickery going on here, and it's quite amazing. I'm just not at all sure how the "interception" is done. I tried venturing down the rabbit hole myself on git, but I've gotten lost. If anyone could point me in the right direction it'd be much appreciated.
Also - wouldn't creating a typed factory put a dependency on the container inside the calling code? In ASP.NET MVC terms, that's how it seems to me.
EDIT: Found Reflection.Emit... could this be what's used?
EDIT2: The more and more I look into this, the more complicated it sounds to automatically create factories. I might end up just sticking with the repetitive code.
There are two separate concepts here:
Dependency injection merely instantiates an existing class that implements the interface. For example, you might have a MyServices class that implements IMyServices. IoC frameworks give you various ways to specify that when you ask for an IMyServices, it will resolve to an instance of MyServices. There might be some IL Emit magic going on to set up the factory or helper methods, but the actual instances are simply classes you've defined.
Mocking allows you to instantiate a class that implements an interface, without actually having to code that class. This does usually make use of Reflection and IL Emit, as you thought. Typically the emitted IL code is fairly simple, delegating the bulk of the work to methods written in C#. Most of the complexity of mocking has to do with specifying the behavior of the method itself, as the frameworks often allow you to specify behavior with a fluent syntax. Some, like Moles, simply let you specify a delegate to implement the method, though Moles can do other, crazier things like redirecting calls to static methods.
To elaborate a bit further, you don't actually need to use IL to implement IoC functionality, but this is often valuable to avoid the overhead of repeated Reflection calls, since Reflection is relatively expensive. Here is some information on what Castle Windsor is doing.
To answer your question, the most helpful place I found to start was the OpCodes class. This is a good summary of the available functionality in IL and how the OpCodes function. It's essentially a stack-based assembly language (no registers to worry about), but strongly-typed and with first-class access to object symbols and concepts, like types, fields, and methods. Here is a good Code Project article introducing the basics of IL. If you're interested, I can also send you some helper classes I've created over the last few years that I use for my own Emit code.
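For a taste of what Emit code looks like, here is a minimal sketch that builds a two-argument add function at runtime; proxy and mocking libraries use the same OpCodes machinery, just to emit entire types that implement an interface and forward calls:

using System;
using System.Reflection.Emit;

class EmitDemo
{
    static void Main()
    {
        // A method body is emitted as IL at runtime instead of being compiled from C#.
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        var il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0); // push first argument onto the evaluation stack
        il.Emit(OpCodes.Ldarg_1); // push second argument
        il.Emit(OpCodes.Add);     // add the two values on the stack
        il.Emit(OpCodes.Ret);     // return the result

        var addFunc = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(addFunc(2, 3)); // 5
    }
}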
The typed factory is implemented using the Castle DynamicProxy library. It generates a type on the fly that implements the interface and forwards all calls you make via the interface to the interceptor.
It imposes no dependency on your code. The interface is created in your assembly, which you control and where you don't reference Windsor. In another assembly (the entry point to the app) you tell Windsor about that interface and tell it to make it a factory; Windsor learns about your interface and does stuff with it. That's Inversion of Control in all its glory.
It's actually not that complicated :)
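To see the forwarding mechanism in isolation, here is a minimal sketch using Castle DynamicProxy directly (the ICalculator example is made up; Windsor's typed factories build on the same IInterceptor idea):

using System;
using Castle.DynamicProxy;

public interface ICalculator
{
    int Add(int a, int b);
}

public class Calculator : ICalculator
{
    public int Add(int a, int b) { return a + b; }
}

// Every call made through the generated proxy type is routed here.
public class LoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("Calling " + invocation.Method.Name);
        invocation.Proceed(); // forward to the real target
        Console.WriteLine("Returned " + invocation.ReturnValue);
    }
}

class Program
{
    static void Main()
    {
        var generator = new ProxyGenerator();
        ICalculator proxy = generator.CreateInterfaceProxyWithTarget<ICalculator>(
            new Calculator(), new LoggingInterceptor());
        Console.WriteLine(proxy.Add(2, 3)); // 5, with the interceptor's logging around it
    }
}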
ImpromptuInterface creates DLR dynamic proxies based on interfaces. It allows you to have a dynamic implementation with a static interface. In fact it even has a base class, ImpromptuFactory, that provides the starting point for creating factories with a dynamic implementation based on the interface.
