In my app I'm creating an auditing package that collects various pieces of information; one of them concerns the method being executed (the time it took to execute, the method name, its class, its assembly, etc.).
I'm not looking to use an existing package or framework, but to create my own.
I can imagine that this is a complicated thing to do, but I'm looking for some pointers to get me started.
One option you may be interested in is DI-level interception. Since the container is responsible for instantiating your objects, it can sometimes be configured with a proxy generator to enable call interception.
You can choose between Autofac and Unity, for example.
The most popular tasks to solve with this approach are cross-cutting concerns such as logging, measurements, and run-time application structure analysis. If you don't want to pollute your code base with repetitive diagnostic code, just delegate this task to an interceptor.
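For illustration, a minimal sketch of such an interceptor (here using Castle DynamicProxy, which both Autofac and Unity can plug into; the class name and the Console output are placeholders for your own audit sink) that records a timing per intercepted call:

using System;
using System.Diagnostics;
using Castle.DynamicProxy;

public class TimingInterceptor : IInterceptor
{
    public void Intercept(IInvocation invocation)
    {
        var watch = Stopwatch.StartNew();
        try
        {
            invocation.Proceed(); // call the real method
        }
        finally
        {
            watch.Stop();
            // Swap Console for whatever audit store you use.
            Console.WriteLine("{0}.{1} took {2} ms",
                invocation.TargetType.FullName,
                invocation.Method.Name,
                watch.ElapsedMilliseconds);
        }
    }
}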
A similar idea is AOP. I haven't looked at the popular AOP packages for a long time, and haven't used them, but it's worth doing some research on this topic too:
What is the best implementation for AOP in .Net?
DI Interception vs. AOP
Related
We have small lifetime scopes in our applications. It would be interesting to be able to intercept all services registered in Autofac. By doing so we can see exactly which path the code takes for every lifetime scope and which method arguments are used. Not really usable for production, but really great for debugging/diagnostics/refactoring as you get the whole picture and not just the unit level.
But AFAIK it's only possible to register an interceptor for each single registration?
Nothing like this is supported out of the box with the Autofac.Extras.DynamicProxy2 library. You could potentially implement something like a module that handles OnActivating for every component using code similar to the stuff in Autofac.Extras.DynamicProxy2, but you'll run into trouble like...
Do you want class interceptors or interface interceptors? The type of service being resolved vs. the limit type of the component backing it will influence what kind of dynamic proxy you want to make. I believe the current A.E.D2 code only generates interception for either/or - not every interface a class implements, etc.
Do you use WCF client proxies? Client proxies are an interesting beast of their own so you have to special-case them. You'll see that in A.E.D2.
Generally problems like this get solved by aspect-oriented programming solutions (e.g., PostSharp) or profilers (e.g., ANTS)... or a combination of both. You might want to look into those solutions if you have the ability.
For an example of what sort of module implementation I'm talking about, check out the log4net integration page on the Autofac wiki. That shows how to handle OnPreparing for every component in the system. You can do the same thing, but handle OnActivating instead and use the sample on the Lifetime Events wiki page to show you how to swap one resolved thing for another (swap the real object for the generated proxy).
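As a rough sketch only (it ignores the class-vs-interface and WCF-proxy issues mentioned above, and the interceptor type is a placeholder), such a module might look something like this:

using System.Linq;
using Autofac;
using Autofac.Core;
using Castle.DynamicProxy;

public class InterceptEverythingModule : Module
{
    static readonly ProxyGenerator Generator = new ProxyGenerator();

    protected override void AttachToComponentRegistration(
        IComponentRegistry componentRegistry, IComponentRegistration registration)
    {
        registration.Activating += (sender, e) =>
        {
            // Naive: proxy the first interface the instance implements.
            var serviceType = e.Instance.GetType().GetInterfaces().FirstOrDefault();
            if (serviceType == null)
                return; // class interception would need a different proxy type

            var proxy = Generator.CreateInterfaceProxyWithTarget(
                serviceType, e.Instance,
                new CallLoggingInterceptor()); // your IInterceptor (hypothetical name)
            e.ReplaceInstance(proxy);
        };
    }
}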
I have decided to use MEF for a plugin pattern I have and found MEF easy to pick up and not intrusive at all. I looked at samples and found them very easy to work with.
However, as soon as I started implementing, I started struggling with the composition. Let's say I have a class which has [ImportMany] on one of its properties. All the examples I have seen create the container in the class that has the imports (let's call it the composable), and basically the class composes itself. That might be OK for an example, but surely knowledge of how the plugins get populated is too much for the composable to have.
I can happily create a singleton container and access it in my composable, but again the composable has to explicitly call Compose() on itself, and I am not happy with that either, as it is like a dependency injection scenario where the class pro-actively calls Resolve() on the container. So I do not want to use it just for service location.
To make matters worse, I am also using Castle Windsor for DI and I am not sure how MEF and Windsor should work together.
I have really looked around and have not been able to find any guidance or samples on how to do MEF right. Now it might be that I have not looked hard enough or that I do not know MEF well enough (which is true), but I would value your views from the experience of actually using it in the real world.
Do not do that. I used MEF for my last project and I wish I had not.
There's a good idea behind it (composition), and I had been doing that manually for years. I was happy about the first official version in .NET 4.0, but there are still a lot of design problems.
Unfortunately it seems to be part of Microsoft's policy to leave testing and bug finding to end users and to collect the hard-earned bugs and suggestions as feedback.
MEF is good if you use it the way the examples say. As soon as you need a little change, you will find there's not enough documentation and nobody will answer you. Here are some of my never-resolved issues with MEF; you can find my questions on codeplex.com, which were never answered by the developer team:
1) How to pass parameters to a part's constructor? (They may say to use ExportFactory, which ships in the codeplex version, but I wasted a long time on this, and I can say there's no acceptable solution for it.)
2) How to set configuration for parts? (I ended up passing configuration to parts through a method, which is a bad idea, but the best available.)
3) MEF is very slow because it uses reflection under the hood. In my case, loading 1,000 parts takes 60 seconds.
4) Debugging is "awesome": you get unclear messages, and you will end up downloading the full source from codeplex and searching for your exceptions inside the code.
After all that, I think if you have other choices, let MEF mature and use the next version.
I just shared my own experience.
The recommended pattern is for you to create the container once in your hosting code, and only access it from there to get the "root" part. You would call container.GetExport<Root>() if it's OK for MEF to create the part for you, otherwise you would call container.SatisfyImports(root).
The root part should import the things it needs, and the parts supplying those exports should import what they need, and so on. MEF will create the whole graph and none of the parts need to call into the container directly. The samples often have very few different parts, so it isn't always obvious that the container creation and composition should only occur once, even in more complex applications.
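A minimal sketch of that pattern (Root and IPlugin are placeholder names, and GetExportedValue is used here because it returns the part directly rather than a Lazy wrapper):

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IPlugin { }

[Export]
public class Root
{
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }
}

public static class Host
{
    public static void Main()
    {
        // Created once, in the hosting code only.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            var root = container.GetExportedValue<Root>();
            // root.Plugins is now populated; no part ever touches the container.
        }
    }
}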
There are situations where you may have objects that need their imports satisfied but can't be created by MEF. An example of this is WPF/Silverlight UI objects that are created by the Xaml parser. In this case you might resort to a service which allows these objects to request that their imports be satisfied.
I don't have much advice for how to use MEF and another DI container in the same application. If there isn't much interaction between the parts of the system composed with MEF and Windsor it might work without much trouble. If you need components from one container to be injected with components from the other container, it won't be as simple. One way would be to have a service that a component would have to call to resolve its dependencies from the other container. The other possibility would be to have the containers themselves linked. You can do this in theory with MEF by writing an ExportProvider that accesses the Windsor container. In practice it would require a very deep level of knowledge about MEF, and it might not be possible to get it to work exactly how you'd like.
Other than testability, what's the big advantage of utilizing D.I. (and I'm not talking about a D.I. framework or IoC) over static classes? Particularly for an application where you know a service won't be swapped out.
In one of our C# applications, our team is utilizing Dependency Injection in the web GUI, the service layer, and the repository layer rather than using static methods. In the past, we'd have POCOs (business entity objects) that were created, modified, passed around, and saved by static classes.
For example, in the past we might have written:
CreditEntity creditObj = CreditEntityManager.GetCredit(customerId);
Decimal creditScore = CreditEntityManager.CalculateScore(creditObj);
return creditScore;
Now, with D.I., the same code would be:
// not shown: _creditService instantiation/injection in constructors
CreditEntity creditObj = _creditService.GetCredit(customerId);
Decimal creditScore = _creditService.CalculateScore(creditObj);
return creditScore;
Not much different, but now we have dozens of service classes that have much broader scope, which means we should treat them just as if they were static (i.e. no private member variables unless they are used to define further dependencies). Plus, if any of those methods utilize a resource (database/web service/etc) we find it harder to manage concurrency issues unless we remove the dependency and utilize the old static or using(...) methods.
The question for D.I. might be: is CreditEntityManager in fact the natural place to centralize knowledge about how to find a CreditEntity and where to go to CalculateScore?
I think the theory of D.I. is that a modular application involved in thing X doesn't necessarily know how to hook up with thing Y even though X needs Y.
In your example, you are showing the code flow after the service providers have been located and incorporated in data objects. At that point, sure, with and without D.I. it looks about the same, even potentially exactly the same depending on programming language, style, etc.
The key is how those different services are hooked up together. In D.I., potentially a third party object essentially does configuration management, but after that is done the code should be about the same. The point of D.I. isn't to improve later code but to try and match the modular nature of the problem with the modular nature of the program, in order to avoid having to edit modules and program logic that are logically correct, but are hooking up with the wrong service providers.
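To make that concrete, here is a hedged sketch of the "hooking up" step in a composition root, using Castle Windsor since it appears elsewhere in this discussion; ICreditService, CreditService, CreditEntity, and CreditController are illustrative names based on the example above:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public class CreditEntity { }

public interface ICreditService
{
    CreditEntity GetCredit(int customerId);
    decimal CalculateScore(CreditEntity credit);
}

public class CreditService : ICreditService
{
    public CreditEntity GetCredit(int customerId) { return new CreditEntity(); }
    public decimal CalculateScore(CreditEntity credit) { return 0m; }
}

public class CreditController
{
    private readonly ICreditService _creditService;

    // The dependency arrives from outside; this class never news it up.
    public CreditController(ICreditService creditService)
    {
        _creditService = creditService;
    }
}

public static class CompositionRoot
{
    public static IWindsorContainer Build()
    {
        // The "third party" configuration management lives in one place.
        var container = new WindsorContainer();
        container.Register(
            Component.For<ICreditService>().ImplementedBy<CreditService>(),
            Component.For<CreditController>());
        return container;
    }
}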
It allows you to swap out implementations without cracking open the code. For example, in one of my applications, we created an interface called IDataService that defined methods for querying a data source. For the first few production releases, we used an implementation for Oracle using nHibernate. Later, we wanted to switch to an object database, so we wrote an implementation for db4o, added its assembly to the execution directory and changed a line in the config file. Presto! We were using db4o without having to crack open the code.
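One hedged way to get that "change a line in the config file" behaviour (the appSettings key and type names below are placeholders, and the answer's actual mechanism may have been different, e.g. a container configured in XML):

using System;
using System.Configuration;

public interface IDataService { /* query methods elided */ }

public static class DataServiceFactory
{
    public static IDataService Create()
    {
        // e.g. <add key="dataServiceType"
        //          value="MyApp.Data.Db4oDataService, MyApp.Data.Db4o" />
        var typeName = ConfigurationManager.AppSettings["dataServiceType"];
        var type = Type.GetType(typeName, throwOnError: true);
        return (IDataService)Activator.CreateInstance(type);
    }
}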
This has been discussed exactly 1002 times. Here's one such discussion that I remember (read in order):
http://scruffylookingcatherder.com/archive/2007/08/07/dependency-injection.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-More-than-a-testing-seam.aspx
http://kohari.org/2007/08/15/defending-dependency-injection
http://scruffylookingcatherder.com/archive/2007/08/16/tilting-at-windmills.aspx
http://ayende.com/Blog/archive/2007/08/18/Dependency-Injection-IAmDonQuixote.aspx
http://scruffylookingcatherder.com/archive/2007/08/20/poking-bears.aspx
http://ayende.com/Blog/archive/2007/08/21/Dependency-Injection-Applicability-Benefits-and-Mocking.aspx
About your particular problems, it seems that you're not managing your services' lifestyles correctly... for example, if one of your services is stateful (which should be quite rare) it probably has to be transient. I recommend that you create as many SO questions about this as you need to in order to clear all doubts.
There is a Guice video which gives a nice sample case for using D.I. If you are using a lot of third-party services which need to be hooked up dynamically, D.I. will be a great help.
My solution has several projects, which include several libraries and one project for the UI. Currently it is a Windows Forms application and I use log4net for logging. Only the UI project has a reference to log4net, and it maintains the configuration files. But I would like to log from my libraries as well.
The usual method for doing this is to wrap the logging calls behind an interface: create a common project, something like Utilities, and add this interface to it. Now this project can be referenced by all the other projects, and they can use the interface for logging.
I am thinking about an alternative design which involves passing delegates, reducing coupling, and avoiding unnecessary interfaces.
Consider the following class from one of my libraries.
public sealed class Foo
{
    Action<string> log;
    Action<string, Exception> logException;

    public Foo(Action<string> log, Action<string, Exception> logException)
    {
        this.log = log;
        this.logException = logException;
    }

    public void Work()
    {
        WL("Starting work");
        WL("Completed step1");
        // .........
    }

    void WL(string message)
    {
        if (log != null) log(message);
    }

    void WL(string message, Exception exception)
    {
        if (logException != null) logException(message, exception);
    }
}
Now from the calling code, I can easily pass the logging method. Something like
Foo foo = new Foo(message => Console.WriteLine(message),
(message, exception) => Console.WriteLine("{0},{1}", message, exception));
foo.Work();
I used the console for illustration; in reality the actual logging code goes here.
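For instance, if the host project has log4net configured, the same call might look like this (Info and Error are standard ILog methods; whether they are the right levels for your case is up to you):

using log4net;

ILog logger = LogManager.GetLogger(typeof(Foo));

Foo foo = new Foo(
    message => logger.Info(message),
    (message, exception) => logger.Error(message, exception));
foo.Work();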
1 - Do you think this is a better solution? I think it is, as it is more loosely coupled.
2 - Is there any other better solutions available?
This is the only related question I have found here
Any thoughts...?
Don't use delegates if there are multiple signatures flying in close formation. All you're doing is avoiding defining classes and interfaces that would be meaningful. log4net provides an ILog interface which is an excellent example of a simple wrapper interface you can pass in.
If you're going to use a logging framework, especially log4net, don't wrap it and don't create a single global (static OR singleton) entry point. I've written about this before, and you may be interested in the question about best practices as well.
I have a thin layer that exposes a logging API very similar to log4net and uses a provider-model design pattern to allow you to plug in any suitable logging framework. I've implemented providers for:
System.Diagnostics.Trace
log4net
EntLib
This means I can use logging throughout all my apps without any direct dependency on a specific logging framework, and users of my components can plug in their own favorite logging framework.
My advice is to add a reference to log4net to all your projects but leave the logger configuration in the UI project. This still leaves you with the flexibility to define different logging levels on a per assembly basis. Logging is such a low level activity and log4net is such a mature product that I wouldn't spend any time trying to come up with a clever solution just to satisfy "best practices". I might even argue, over a beer or two, that referencing log4net is no different than referencing System.Core.
Unless you have different pieces of code using different logging frameworks, I'd have a singleton LogDispatcher or something similar that all code that wants to log calls into, perhaps passing in a message level to determine the correct logging method. This prevents the logging delegates from needing to be passed around the entire codebase, and centralizes all of the code that is responsible for the logging policy.
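A rough sketch of the kind of LogDispatcher meant here (all names are illustrative; the routing body is left to whichever framework you choose):

using System;

public enum LogLevel { Debug, Info, Warn, Error }

public sealed class LogDispatcher
{
    public static readonly LogDispatcher Instance = new LogDispatcher();

    private LogDispatcher() { }

    public void Log(LogLevel level, string message, Exception exception = null)
    {
        // Route to log4net, Trace, a file, etc. according to level and configuration.
    }
}

// Usage from anywhere in the codebase:
// LogDispatcher.Instance.Log(LogLevel.Info, "Starting work");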
Another approach is to use a framework like Log4Net. Even if you don't end up using it, their design is a good one to base your own logging on.
Google for "AOP logging".
Here's some chat about this from Ayende.
Quoting Jon S.: "Simple is almost always better than clever" - IMHO your use of delegates looks more like the latter.
If you want the library projects to log, they should set up and use their own logger. I'd not ask clients to pass in a logger (object, interface, or delegate), which then travels all the way down the type dependency graph. It just pollutes the interface with unnecessary logger parameters.
If you're using the Log4XXX frameworks, I believe they emphasise the concept of a "hierarchical logging architecture" (the names they come up with in s/w ;), where each type/class can maintain and write to its own log. If the ctor of Foo creates a logger internally, I'd like that. And since it is configurable, specific clients may change the configuration files to redirect the output elsewhere too.
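What that might look like with log4net, for example (the configuration still lives in the UI project's config file, as suggested in another answer):

using log4net;

public sealed class Foo
{
    // One logger per type - the "hierarchical" part comes from the type's
    // namespace, which log4net uses as the logger name.
    private static readonly ILog Log = LogManager.GetLogger(typeof(Foo));

    public void Work()
    {
        Log.Info("Starting work");
        // ...
        Log.Info("Completed step1");
    }
}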
So your problem is one I will soon have to commit to a solution for. The de facto answer is "use injection", but in this case it's less inversion of control and more expansion of dependencies. I think you're close, so here are my thoughts.
The Pros of your solution
There is no need for additional references by your class or the assembly it's in. Because you're using Actions with common types, those references are likely already present.
The benefit is that 100% of the logging implementation is left to the assembly that injects your actions. So if you add log4net or NLog, the only reference to it will be where it is implemented. If you wanted to switch later, only that assembly would have to be updated.
The converse is if you just inject a chosen logger into each class: that means you have to add a reference to the logger in every project. Even if the interfaces are named and implemented the same, you have to have the reference for it to resolve. In solutions with more than 3 projects that can be costly, and you pay the same cost any time you switch loggers.
Possible Improvement
In that lies the beauty of your solution. However, it could be improved. I find that when injecting things of similar function or "aspect", it can make sense to put them into an object and inject that instead. You could create an interface with both of your actions and inject concretes that implement whatever library you want. This would, again, confine the reference to the logging library to one project/assembly, with only the cost of adding a reference to your interface in the rest.
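A possible sketch of that improvement (ILogSink is an illustrative name; the concrete implementation lives in whichever project references the logging library):

using System;

public interface ILogSink
{
    void Log(string message);
    void Log(string message, Exception exception);
}

public sealed class Foo
{
    private readonly ILogSink log;

    public Foo(ILogSink log)
    {
        this.log = log;
    }

    public void Work()
    {
        log.Log("Starting work");
        log.Log("Completed step1");
    }
}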
Hope this helps and good luck.
I am currently writing an open source SDK for a program that I use, and I'm using an IoC container internally (Ninject) to wire up all my internal dependencies.
I have some objects that are marked as internal so that I don't crowd the public API, as they are only used internally and shouldn't be seen by the user - things like factories and other objects. The problem I'm having is that Ninject can't create internal objects, which means that I have to mark all my internal objects public, which crowds the public API.
My question is: Is there someway to get around this problem or am I doing it all wrong?
PS. I have thought about using the InternalsVisibleTo attribute, but I feel like that is a bit of a smell.
Quick look at the other answers: it doesn't seem like you are doing something so different that there is something fundamentally wrong with Ninject that you would need to modify it or replace it. In many cases, you can't "go straight for [the] internals" because they rely upon unresolved dependency injection; hence the usage of Ninject in the first place. Also it sounds like you already do have an internal set of interfaces which is why the question was posed.
Thoughts: one problem with using Ninject directly in your SDK or library is that your users will then have to use Ninject in their code. This probably isn't an issue for you, because it is your IoC choice so you were going to use it anyway. But what if they want to use another IoC container? Then they effectively have two running, duplicating effort. Worse yet, what if they want to use Ninject v2 and you've used v1.5? That really complicates the situation.
Best case: if you can refactor your classes such that they get everything they need through Dependency Injection then this is the cleanest because the library code doesn't need any IoC container. The app can wire up the dependencies and it just flows. This isn't always possible though, as sometimes the library classes need to create instances which have dependencies that you can't resolve through injection.
Suggestion: The CommonServiceLocator (and the Ninject adapter for it) were specifically designed for this situation (libraries with dependencies). You code against the CommonServiceLocator and then the application specifies which DI/IoC actually backs the interface.
It is a bit of a pain in that now you have to have Ninject and the CommonServiceLocator in your app, but the CommonServiceLocator is quite lightweight. Your SDK/library code only uses the CommonServiceLocator which is fairly clean.
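Roughly what that split looks like (WidgetFactory and IWidgetParser are placeholder names, and the adapter class name on the application side varies by container and version):

using Microsoft.Practices.ServiceLocation;

public interface IWidgetParser { }

// Inside the SDK: only the Common Service Locator abstraction is referenced.
public class WidgetFactory
{
    public IWidgetParser CreateParser()
    {
        // Resolved through whatever container the host application registered.
        return ServiceLocator.Current.GetInstance<IWidgetParser>();
    }
}

// In the application bootstrap (adapter name illustrative):
// ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(kernel));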
I guess you don't even need that. IoC is for public stuff. Go straight for internals.
But - that's just my intuition...
Create a secondary, internal API which is different from the external API. You may need to do the split manually...
I'm going to vote for the InternalsVisibleTo solution. Totally not a smell, really. The point of the attribute is to enable the sort of behavior you are wanting, so rather than jumping through all sorts of elaborate hoops to make things work without it, just use the functionality provided by the framework for solving this particular problem.
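For reference, the attribute itself is a one-liner in the SDK's AssemblyInfo.cs; the assembly name below is just a placeholder for whichever assembly actually needs to see the internals:

using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("MySdk.Composition")]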
I would also suggest, if you want to hide your choice of container from the user, using ILMerge to combine the Ninject assemblies with your SDK assembly, and apply the /internalize argument to change the visibility of the Ninject assemblies to internal, so the Ninject namespaces don't leak out of your library (sorry, couldn't find a link to the ILMerge docs online, but there is a doc file in the download). There is also this nice blog post about integrating ILMerge into your build process.
You can
modify Ninject
pick a different container