We have small lifetime scopes in our applications. It would be interesting to be able to intercept all services registered in Autofac. By doing so we could see exactly which path the code takes for every lifetime scope and which method arguments are used. Not really usable for production, but really great for debugging/diagnostics/refactoring, as you get the whole picture and not just the unit level.
But AFAIK it's only possible to register an interceptor for each single registration?
Nothing like this is supported out of the box with the Autofac.Extras.DynamicProxy2 library. You could potentially implement something like a module that handles OnActivating for every component using code similar to the stuff in Autofac.Extras.DynamicProxy2, but you'll run into trouble like...
Do you want class interceptors or interface interceptors? The type of service being resolved vs. the limit type of the component backing it will influence what kind of dynamic proxy you want to make. I believe the current A.E.D2 code only generates interception for either/or - not every interface a class implements, etc.
Do you use WCF client proxies? Client proxies are an interesting beast of their own so you have to special-case them. You'll see that in A.E.D2.
Generally problems like this get solved by aspect-oriented programming solutions (e.g., PostSharp) or profilers (e.g., ANTS)... or a combination of both. You might want to look into those solutions if you have the ability.
For an example of what sort of module implementation I'm talking about, check out the log4net integration page on the Autofac wiki. That shows how to handle OnPreparing for every component in the system. You can do the same thing, but handle OnActivating instead and use the sample on the Lifetime Events wiki page to show you how to swap one resolved thing for another (swap the real object for the generated proxy).
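For a rough idea of the shape such a module could take, here is a minimal sketch that wraps every activated instance in a Castle DynamicProxy interface proxy. The module and interceptor names are made up; it only handles the simplest case (a component registered as the one interface it implements), and class proxies, generics and WCF client proxies would all need the special-casing mentioned above:

using System;
using System.Linq;
using Autofac;
using Autofac.Core;
using Castle.DynamicProxy;

public class InterceptEverythingModule : Module
{
    private static readonly ProxyGenerator Generator = new ProxyGenerator();

    protected override void AttachToComponentRegistration(
        IComponentRegistry componentRegistry, IComponentRegistration registration)
    {
        registration.Activating += (sender, e) =>
        {
            // Only proxy components that expose an interface; ReplaceInstance will
            // fail if the registered services aren't assignable from the proxy.
            var interfaceType = e.Instance.GetType().GetInterfaces().FirstOrDefault();
            if (interfaceType == null)
                return;

            var proxy = Generator.CreateInterfaceProxyWithTarget(
                interfaceType, e.Instance, new CallLoggingInterceptor());
            e.ReplaceInstance(proxy);
        };
    }
}

public class CallLoggingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        Console.WriteLine("{0}.{1}({2} argument(s))",
            invocation.TargetType.Name,
            invocation.Method.Name,
            invocation.Arguments.Length);
        invocation.Proceed();
    }
}

Registered with builder.RegisterModule(new InterceptEverythingModule()), this fires for every component in the container, which is exactly where the class-vs-interface and WCF-proxy questions above start to bite.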
In my app I'm creating an auditing package that obtains various kinds of information; one such kind should provide details about the method being executed (time it took to execute, method name, method class, assembly, etc.).
I'm not looking to use an existing package or framework, but to create my own.
I can imagine that this is a complicated thing to do, but I'm looking for some pointers to get me started.
One option you may be interested in is DI-level interception. Since the container is responsible for instantiating your objects, it can sometimes be configured with proxy generators to enable call interception.
You can choose between Autofac and Unity, for example.
The most popular tasks to solve with this approach are cross-cutting concerns like logging, measurements, and run-time application structure analysis. If you don't want to pollute your code base with repetitive diagnostic code, just delegate this task to an interceptor.
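A hedged sketch of what that can look like with Autofac plus Castle DynamicProxy (TimingInterceptor, IOrderService and OrderService are invented names for illustration):

using System;
using System.Diagnostics;
using Autofac;
using Autofac.Extras.DynamicProxy;   // Autofac.Extras.DynamicProxy2 in older versions
using Castle.DynamicProxy;

public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var stopwatch = Stopwatch.StartNew();
        invocation.Proceed();                      // run the real method
        stopwatch.Stop();

        Console.WriteLine("{0}.{1} took {2} ms",
            invocation.TargetType.FullName,
            invocation.Method.Name,
            stopwatch.ElapsedMilliseconds);
    }
}

// Registration in the composition root; the service itself stays free of diagnostic code:
var builder = new ContainerBuilder();
builder.RegisterType<TimingInterceptor>();
builder.RegisterType<OrderService>()
    .As<IOrderService>()
    .EnableInterfaceInterceptors()
    .InterceptedBy(typeof(TimingInterceptor));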
A similar idea is AOP. I haven't seen popular AOP packages for a long time, and haven't used them, but it's worth doing some research on this topic too:
What is the best implementation for AOP in .Net?
DI Interception vs. AOP
I would like to register a singleton component for multiple services and define which constructor to use, depending on which service was used during the resolve call.
I tried this:
_builder.RegisterType<TComponent>()
.As<IService1>()
.FindConstructorsWith(ConstructorFinder1)
.SingleInstance();
_builder.RegisterType<TComponent>()
.As<IService2>()
.FindConstructorsWith(ConstructorFinder2)
.SingleInstance();
But this leads to two different "singleton" instances, depending on which service was used.
So I tried:
_builder.RegisterType<TComponent>()
.As<IService1>()
.FindConstructorsWith(ConstructorFinder1)
.As<IService2>()
.FindConstructorsWith(ConstructorFinder2)
.SingleInstance();
This solves the singleton issue, but sadly the second FindConstructorsWith call overrides the first call, i.e. for both services ConstructorFinder2 is used.
I had assumed (hoped) that the ConstructorFinders would be stored with respect to the service, but apparently this is not the case.
Is what I'm trying to achieve conceptually wrong, does Autofac not support it or am I simply missing something?
EDIT:
Once again thanks to Travis for his great response. Apparently I left out a few details that made things confusing. Let me add some now.
This question was actually a kind of follow-up to How to determine which constructor Autofac uses when resolving (where Travis also helped me along). So the issue comes up when deserializing and it affects many different objects.
I get the arguments about composition, separation of concerns and how having several ctors is often considered a code smell, but in the context of deserialization (at least for the app I'm currently developing) it is extremely useful to be able to create instances differently, depending on whether they are newly built or deserialized from a project file. Several members that need to be initialized when building a new instance do not have to be initialized when deserializing (because their values would be overridden during deserialization anyway). Initializing them regardless would mean extra performance costs and (in this case) cause other issues regarding the throw-away initializations.
After spending days trying to find a solution (with complications also coming from the Newtonsoft Json side) I've decided to discontinue Autofac and implement our own IoC container. For general purposes it cannot (obviously!) compete with Autofac in any way, but since we were really only using a small subset of Autofac's great features, I felt we could try to roll our own. It took me a lot less time than the days I've spent trying to wrap my head around a monolithic black box. Yes, Autofac is open source, but stepping through the code is no walk in the park.
First tests are very promising and it feels good to regain full control of such a vital component of the application.
Again, the reason for leaving Autofac was that it is not (feasibly) possible to define how a singleton component is constructed depending on the service it was constructed for. And from a general structure/concept point of view I understand that it makes sense to strictly separate the service from the construction how-tos. But during deserialization things are different, I believe. And, now that I'm independent of Autofac, I may decide to alter the mechanisms so they fit into the overall concept in a more straightforward way.
This is sort of a difficult question to answer because it seems you have some underlying goal you're trying to achieve and you have a solution you want to work but perhaps it's the wrong solution and you should ask a [new] question depending on how this response works out for you.
Let me walk this through to see if I can explain why it's hard to answer.
I would like to register a singleton component for multiple services and define which constructor to use, depending on which service was used during the resolve call.
If it's a singleton that means there's one in the whole system, right? It'll be effectively "first in wins." If something resolves it as an IService1 then the constructor associated with that will be called and even if you try resolving it as IService2 later no construction will happen because the singleton was created. The converse is also true - IService2 gets resolved and the constructor path is followed there, then things asking for IService1 will get the singleton and no constructor is called.
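Sketched out (purely hypothetical, assuming the per-service constructor finders worked the way you hoped):

// Single combined registration, resolved from some lifetime scope:
var s1 = scope.Resolve<IService1>();  // TComponent is constructed here, via ConstructorFinder1.
var s2 = scope.Resolve<IService2>();  // No construction happens; the cached singleton comes back,
                                      // so ConstructorFinder2 would never get a say.
// ReferenceEquals(s1, s2) == true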
That raises a concern:
If you know which thing, for sure, will be resolving first, then why do you need two different constructor selectors?
If you don't know which thing will be resolving first, then are you accounting for the system unpredictability?
I have seen these sorts of questions before and usually what they indicate is one of two things:
You are trying to do some sort of selection or special logic based on context. There's an Autofac FAQ about this that may help. Usually the way around this is to refactor. I'll get back to that in a second.
You are trying to "share registrations" between two different applications. The answer to this is to use Autofac modules and reuse those; but if there are special registrations for each app type, let that happen.
This isn't to say that either of these are what you're asking for, but this is where I've seen such questions. Usually there's some unspoken goal where a solution has been pre-chosen and it's better ask how to solve the goal rather than how to implement a very specific solution. Again, I could be wrong.
On the refactoring note for item 1, above, I can further guess based on the desire for a singleton that there's some sort of resource like a database connection that needs to be shared or is expensive to spin up. Consider splitting the TComponent into three separate classes:
TCommonExpensiveComponent - this is the stuff that is actually expensive to spin up and really does need to be a singleton, but does not differ across IService1 and IService2.
TService1 - implement IService1 with only the required constructor so you don't need a constructor finder. Have it consume TCommonExpensiveComponent.
TService2 - implement IService2 with only the required constructor so you don't need a constructor finder. Have it consume TCommonExpensiveComponent.
The idea being avoid the complexity of registrations, keep the shared/singleton that you want, and still get different constructor usage as needed. You may want to throw in some common base/abstract class, too, that the TService classes can derive from if there's really a lot of common logic.
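A sketch of the corresponding registrations (type names are the placeholders from the list above; no constructor finders required):

_builder.RegisterType<TCommonExpensiveComponent>()
    .AsSelf()
    .SingleInstance();   // the one genuinely shared/expensive piece

_builder.RegisterType<TService1>()   // single constructor that takes TCommonExpensiveComponent
    .As<IService1>();

_builder.RegisterType<TService2>()   // single constructor that takes TCommonExpensiveComponent
    .As<IService2>();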
Is what I'm trying to achieve conceptually wrong, does Autofac not support it or am I simply missing something?
Technically you could do some really crazy stuff in Autofac if you wanted to, like write a custom registration source that waits for someone to query for the IService1 or IService2 registration and then picks a constructor based on that, dynamically serving the registration as needed. But, truly, don't even start down this road.
Instead, it would be good to clarify what problem you're trying to solve and how you plan to work around the challenges listed above if my response here doesn't help. Do that in a brand new question that goes into more detail about your challenge and what you've tried. This not being a forum, having a back-and-forth conversation to tease out additional details on the current question really isn't feasible. Plus, taking a second to step back and maybe reframe the question sounds like it might help here.
I'm coming to this question from exploring the XNA framework, but I'd like a general understanding. We grab a service like this:
ISomeService someService = (ISomeService)Game.Services.GetService(typeof(ISomeService));
and then we do something with whatever functions/properties are in the interface:
someService.DoSomething(); // let's say not a static method but doesn't matter
I'm trying to figure out why this kind of implementation is any better than:
myObject = InstanceFromComponentThatWouldProvideTheService();
myObject.DoSomething();
When you use the services way to get your interface, you're really just getting an instance of the component that provides the service anyway. Right? You can't have an interface "instance". And there's only one class that can be the provider of a service. So all you really have is an instance of your component class, with the only difference being that you only have access to a subset of the component object (whatever subset is in the interface).
How is this any different from just having public and private methods and properties? In other words, the public methods/properties of the component is the "interface", and we can stop with all this roundaboutness. You can still change how you implement that "interface" without breaking anything (until you change the method signature, but that would break the services implementation too).
And there is going to be a 1-to-1 relationship between the component and the service anyway (more than one class can't register to be a provider of the service), and I can't see a class being a provider of more than one service (SRP and all that).
So I guess I'm trying to figure out what problem this kind of framework is meant to solve. What am I missing?
Allow me to explain it via an example from XNA itself:
The ContentManager constructor takes an IServiceProvider. It then uses that IServiceProvider to get an IGraphicsDeviceService, which it in turn uses to get a GraphicsDevice onto which it loads things like textures, effects, etc.
It cannot take a Game - because that class is entirely optional (and is in a dependent assembly). It cannot take a GraphicsDeviceManager (the commonly used implementation of IGraphicsDeviceService) because that, like Game, is an optional helper class for setting up the GraphicsDevice.
It can't take a GraphicsDevice directly, because you may be creating a ContentManager before the GraphicsDevice is created (this is exactly what the default Game class does). So it takes a service that it can retrieve a graphics device from later.
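Roughly, that deferred lookup boils down to something like this (a paraphrase of the idea, not the actual XNA source):

// Inside ContentManager, when a graphics resource is first loaded:
var graphicsService = (IGraphicsDeviceService)serviceProvider.GetService(typeof(IGraphicsDeviceService));
GraphicsDevice device = graphicsService.GraphicsDevice;  // available now, even if it wasn't when the ContentManager was constructed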
Now here is the real kicker: It could take a IGraphicsDeviceService and use that directly. BUT: what if at some time in the future the XNA team adds (for example) an AudioDevice class that some content types depend on? Then you'd have to modify the method signature of the ContentManager constructor to take an IAudioDeviceService or something - which will break third-party code. By having a service provider you avoid this issue.
In fact - you don't have to wait for the XNA team to add new content types requiring common resources: When you write a custom ContentTypeReader you can get access to the IServiceProvider from the content manager and query it for whatever service you like - even your own! This way your custom content types can use the same mechanism as the first-class XNA graphics types use, without the XNA code having to know about them or the services they require.
(Conversely, if you never load graphics types with your ContentManager, then you never have to provide it with a graphics device service.)
This is, of course, all well and good for a library like XNA, which needs to be updatable without breaking third-party code. Especially for something like ContentManager that is extendible by third parties.
However: I see lots of people running around using DrawableGameComponent, finding that you can't get a shared SpriteBatch into it easily, and so creating some kind of sprite-batch-service to pass that around. This is a lot more complication than you need for a game which generally has no versioning, assembly-dependency, or third-party extensibility requirements to worry about. Just because Game.Services exists, doesn't mean you have to use it! If you can pass things (like a SpriteBatch instance) around directly - just do that - it's much simpler and more obvious.
See http://en.wikipedia.org/wiki/Dependency_inversion_principle (and its links) for a good start on the architectural principles behind it.
Interfaces are clearer and easier to mock.
That can be important, depending on your unit test policy.
Using a service provider is also a way of better controlling what portions of your code have access to certain other portions of your code. Similarly to passing an object through your code, you can pass an IServiceProvider implementation through the code to specific modules. This would allow for those modules to access certain services that are accessible through the service provider.
You can have many classes implement the IServiceProvider interface, each of which could provide access to one or more services - they are not restricted to returning a single instance (whether that be to themselves or another object).
For example, a use may be to have an IServiceProvider that contains services for keyboard handling, mouse handling and AI algorithms. Passing this interface to different modules or managers within your code will allow those modules or managers to retrieve the services they require (such as an EnemyManager needing access to the AI service).
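A small sketch of that shape (IAiService, EnemyManager and SimpleAiService are made-up names; in XNA, Game.Services is a GameServiceContainer, which implements IServiceProvider):

using System;

public interface IAiService
{
    // pathfinding, decision making, etc.
}

public class EnemyManager
{
    private readonly IAiService _ai;

    public EnemyManager(IServiceProvider services)
    {
        // The manager pulls out only the services it actually needs.
        _ai = (IAiService)services.GetService(typeof(IAiService));
    }
}

// Wiring it up in the game:
// game.Services.AddService(typeof(IAiService), new SimpleAiService());
// var enemies = new EnemyManager(game.Services);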
I have decided to use MEF for a plugin pattern I have and found MEF easy to pick up and not intrusive at all. I looked at samples and found them very easy to work with.
However, as soon as I started implementing, I began struggling with the composition. Let's say I have a class with [ImportMany] on one of its properties. All the examples I have seen create the container in the class that has the imports (let's call it the composable), so basically the class composes itself. That might be OK for an example, but surely knowing how the plugins get populated is too much for the composable to know about.
I could happily create a singleton container and access it in my composable, but again the composable has to explicitly call Compose() on itself, and I am not happy with that either, as it is like a dependency injection scenario where the class proactively calls Resolve() on the container. So I do not want to use it for just service location.
To make matters worse, I am also using Castle Windsor for DI and I am not sure how MEF and Windsor should work together.
I have really looked around and have not been able to find any guidance or samples on how to do MEF right. Now it might be that I have not looked hard enough or that I do not know MEF well enough (which is true), but I would value your views from the experience of actually using it in the real world.
Do not do that. I used MEF for my last project and I wish I hadn't.
There's a good idea behind it (composition) and I had been doing that manually for years. I was happy to see the first official version in .NET 4.0, but there are still a lot of design problems.
Unfortunately it's part of Microsoft's policy to leave testing and bug finding to end users and have the hard-earned bugs and suggestions fed back to them.
MEF is good if you use it the way the examples say. As soon as you need a little change you will find there's not enough documentation and nobody will answer you. Here are some of my never-resolved issues with MEF; you can find my questions on codeplex.com, which were never answered by the developer team:
1) How to pass parameters to parts' constructors (they may say use ExportFactory, which is shipped in the codeplex version, but I wasted a long time on this and I can say there's no acceptable solution for that).
2) How to set configuration for parts? (I ended up passing configuration to parts through a method, which is a bad idea, but the best available.)
3) MEF is very slow because it uses reflection under the hood. In my case loading 1,000 parts takes 60 seconds.
4) Debugging is "awesome": you get unclear messages, and you will end up downloading the full source from codeplex and searching for your exceptions inside the code.
After all that, I think if you have other choices, let MEF mature and use the next version.
I just shared my own experience.
The recommended pattern is for you to create the container once in your hosting code, and only access it from there to get the "root" part. You would call container.GetExportedValue<Root>() if it's OK for MEF to create the part for you; otherwise you would call container.SatisfyImportsOnce(root).
The root part should import the things it needs, and the parts supplying those exports should import what they need, and so on. MEF will create the whole graph and none of the parts need to call into the container directly. The samples often have very few different parts, so it isn't always obvious that the container creation and composition should only occur once, even in more complex applications.
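A minimal sketch of that shape (Root, IPlugin and Run are illustrative names, not MEF API):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

public interface IPlugin { }

[Export]
public class Root
{
    // MEF satisfies this import; Root never calls into the container itself.
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }

    public void Run() { /* use Plugins */ }
}

public static class Program
{
    public static void Main()
    {
        // Hosting code: the container is created exactly once, here,
        // and only the root part is pulled out of it.
        var catalog = new AssemblyCatalog(typeof(Program).Assembly);
        using (var container = new CompositionContainer(catalog))
        {
            var root = container.GetExportedValue<Root>();
            root.Run();
        }
    }
}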
There are situations where you may have objects that need their imports satisfied but can't be created by MEF. An example of this is WPF/Silverlight UI objects that are created by the Xaml parser. In this case you might resort to a service which allows these objects to request that their imports be satisfied.
I don't have much advice for how to use MEF and another DI container in the same application. If there isn't much interaction between the parts of the system composed with MEF and Windsor it might work without much trouble. If you need components from one container to be injected with components from the other container, it won't be as simple. One way would be to have a service that a component would have to call to resolve its dependencies from the other container. The other possibility would be to have the containers themselves linked. You can do this in theory with MEF by writing an ExportProvider that accesses the Windsor container. In practice it would require a very deep level of knowledge about MEF, and it might not be possible to get it to work exactly how you'd like.
Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is utilizing Dependency Injection in the web GUI, the service layer, and the repository layer rather than using static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
//not shown, _creditService instantiation/injection in c-tors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
Not much different, but now we have dozens of service classes that have much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods utilize a resource (database/web service/etc) we find it harder to manage concurrency issues unless we remove the dependency and utilize the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a modular application involved in thing X doesn't necessarily know how to hook up with thing Y even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated in data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., potentially a third party object essentially does configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve later code but to try and match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct, but are hooking up with the wrong service providers.
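Concretely, the consuming code only declares what it needs, while the hooking-up lives in one place (the composition root or the container configuration). A minimal sketch (CreditReportController and ICreditService are illustrative names; CreditEntity and the two service calls come from the question):

public class CreditReportController
{
    private readonly ICreditService _creditService;

    // The controller asks for *a* credit service; which implementation arrives here
    // is decided elsewhere, by whoever composes the application.
    public CreditReportController(ICreditService creditService)
    {
        _creditService = creditService;
    }

    public decimal GetScore(int customerId)
    {
        CreditEntity creditObj = _creditService.GetCredit(customerId);
        return _creditService.CalculateScore(creditObj);
    }
}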
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using NHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
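The mechanics of that config-file swap can be as simple as reading the implementation type name from configuration and instantiating it via reflection (the config key and type names here are invented; in a container-based setup the registration would do this for you):

using System;
using System.Configuration;

public interface IDataService
{
    // query methods elided
}

public static class DataServiceFactory
{
    public static IDataService Create()
    {
        // e.g. <add key="DataServiceType" value="MyApp.Data.Db4oDataService, MyApp.Data.Db4o" />
        string typeName = ConfigurationManager.AppSettings["DataServiceType"];
        var type = Type.GetType(typeName, throwOnError: true);
        return (IDataService)Activator.CreateInstance(type);
    }
}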
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
About your particular problems, it seems that you're not managing your services' lifestyles correctly... for example, if one of your services is stateful (which should be quite rare) it probably has to be transient. I recommend that you create as many SO questions about this as you need in order to clear up all doubts.
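For instance, in Autofac terms (Windsor and the other containers have equivalent options; the type names here are invented):

// Shared, stateless services can safely be singletons:
builder.RegisterType<CreditScoreCalculator>()
    .As<ICreditScoreCalculator>()
    .SingleInstance();

// Anything that holds per-operation state (or a non-thread-safe resource such as a DB session)
// should get a fresh instance per resolve, or at least per lifetime scope:
builder.RegisterType<CreditReportBuilder>()
    .As<ICreditReportBuilder>()
    .InstancePerDependency();      // or .InstancePerLifetimeScope()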
There is a Guice video which gives a nice sample case for using D.I. If you are using a lot of third-party services which need to be hooked up dynamically, D.I. will be a great help.