I am currently writing an open source SDK for a program that I use, and I'm using an IoC container (Ninject) internally to wire up all my internal dependencies.
I have some objects that are marked as internal so that I don't crowd the public API, since they are only used internally and shouldn't be seen by the user (stuff like factories and other objects). The problem I'm having is that Ninject can't create internal objects, which means that I have to mark all my internal objects public, which crowds up the public API.
My question is: is there some way to get around this problem, or am I doing it all wrong?
PS. I have thought about using the InternalsVisibleTo attribute, but I feel like that is a bit of a smell.
A quick look at the other answers: it doesn't seem like you are doing something so unusual that there is something fundamentally wrong with Ninject, or that you would need to modify or replace it. In many cases, you can't "go straight for [the] internals" because they rely upon unresolved dependency injection; hence the use of Ninject in the first place. Also, it sounds like you already do have an internal set of interfaces, which is why the question was posed.
Thoughts: one problem with using Ninject directly in your SDK or library is that your users will then have to use Ninject in their code as well. This probably isn't an issue for you, because it is your IoC choice and you were going to use it anyway. But what if they want to use another IoC container? Now they effectively have two containers duplicating effort. Worse yet, what if they want to use Ninject v2 and you've used v1.5? That really complicates the situation.
Best case: if you can refactor your classes so that they get everything they need through Dependency Injection, this is the cleanest approach, because the library code doesn't need any IoC container at all. The app can wire up the dependencies and everything just flows. This isn't always possible, though, as sometimes the library classes need to create instances that have dependencies you can't resolve through injection.
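For illustration, a library class designed this way just declares what it needs in its constructor and leaves the wiring to the consuming application; the type names below are made up for the example, not taken from your SDK:

// Hypothetical SDK types, for illustration only.
public interface IApiClient
{
    string Get(string path);
}

public class ReportService
{
    private readonly IApiClient _client;

    // The class declares its dependency; it never news it up
    // and never touches a container.
    public ReportService(IApiClient client)
    {
        _client = client;
    }

    public string LoadReport(string id)
    {
        return _client.Get("/reports/" + id);
    }
}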
Suggestion: The CommonServiceLocator (and the Ninject adapter for it) were specifically designed for this situation (libraries with dependencies). You code against the CommonServiceLocator and then the application specifies which DI/IoC actually backs the interface.
It is a bit of a pain in that now you have to have Ninject and the CommonServiceLocator in your app, but the CommonServiceLocator is quite lightweight. Your SDK/library code only uses the CommonServiceLocator which is fairly clean.
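A rough sketch of that split, assuming the Microsoft.Practices.ServiceLocation API and the Ninject adapter's NinjectServiceLocator class (check the exact names against the versions you ship); IWidgetParser and DefaultWidgetParser are made-up types:

using Microsoft.Practices.ServiceLocation;

// SDK/library code: depends only on the CommonServiceLocator abstraction.
internal static class WidgetFactory
{
    public static IWidgetParser CreateParser()
    {
        return ServiceLocator.Current.GetInstance<IWidgetParser>();
    }
}

// Application code: picks the concrete container and plugs it in once at startup.
public static class Bootstrapper
{
    public static void Configure()
    {
        var kernel = new Ninject.StandardKernel();
        kernel.Bind<IWidgetParser>().To<DefaultWidgetParser>();

        // NinjectServiceLocator comes from the Ninject adapter for the CommonServiceLocator.
        ServiceLocator.SetLocatorProvider(() => new NinjectServiceLocator(kernel));
    }
}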
I guess you don't even need that. IoC is for public stuff. Go straight for internals.
But - that's just my intuition...
Create a secondary, internal API which is different from the external API. You may need to do the split manually...
I'm going to vote for the InternalsVisibleTo solution. Totally not a smell, really. The point of the attribute is to enable the sort of behavior you are wanting, so rather than jumping through all sorts of elaborate hoops to make things work without it, just use the functionality provided by the framework for solving this particular problem.
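For reference, it is a single assembly-level attribute in the SDK project; the friend assembly name below is a placeholder for whichever assembly needs to see your internals:

using System.Runtime.CompilerServices;

// Typically placed in AssemblyInfo.cs of the SDK assembly.
// "MySdk.Tests" is a placeholder friend assembly name.
[assembly: InternalsVisibleTo("MySdk.Tests")]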
I would also suggest, if you want to hide your choice of container from the user, using ILMerge to combine the Ninject assemblies with your SDK assembly, and apply the /internalize argument to change the visibility of the Ninject assemblies to internal, so the Ninject namespaces don't leak out of your library (sorry, couldn't find a link to the ILMerge docs online, but there is a doc file in the download). There is also this nice blog post about integrating ILMerge into your build process.
You can:
- modify Ninject
- pick a different container
It seems that support for auto-wiring has been added in the new Unity version.
How many of you are familiar with it, and would you strongly suggest using it or not? It seems to me that using it limits my control over the DI, especially with regard to unit tests. Am I thinking about this wrong?
I'm assuming that this question is about Auto-Registration, since Unity has had Auto-Wiring for years.
Since I wrote my When to use a DI Container article a couple of years ago, I've only become slightly more radical in my attitude towards DI Containers. In that article, I describe the benefits and trade-offs of using DI Containers, as opposed to Poor Man's DI (manually composing code).
My order of preference is now:
Manually write the code of the Composition Root (Poor Man's DI). This may seem like a lot of trouble, but it gives you the best possible feedback, and it's easier to understand than using a DI Container (see the sketch after this list).
Use Auto-Registration (AKA Convention over Configuration). While you lose compile-time feedback, the mechanism might actually pull your code towards a greater degree of consistency, because as long as you follow the conventions, things 'just work'. However, this requires that the team is comfortable with the Auto-Registration API of the chosen DI Container, which, in my experience, isn't likely to be the case.
Only use Explicit Registration if you have a very compelling reason to do so (and no: not thoroughly understanding DI is not a good reason). These days, I almost never do this, so it's difficult for me to come up with good cases, but advanced lifetime management may be one motivation.
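As a rough illustration of the first option, a hand-written Composition Root is just ordinary constructor calls in one place; all the type names here are invented for the example:

// The application's entry point composes the whole object graph by hand.
// OrderController, SqlOrderRepository and SmtpNotifier are hypothetical types.
public static class CompositionRoot
{
    public static OrderController CreateController(string connectionString)
    {
        var repository = new SqlOrderRepository(connectionString);
        var notifier = new SmtpNotifier("smtp.example.com");
        return new OrderController(repository, notifier);
    }
}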
It's been 1½ years since I last used a DI Container in any production code.
In summary, and in an effort to answer the specific question about Unity:
Seriously consider not using Unity at all (or any other DI Container).
If you must use Unity, use the Auto-Registration feature. Otherwise, you're likely to get more trouble than benefits from it.
Caveat: I'm writing this as a general response, based on my experience with DI and various DI Containers, including Explicit Registration and Auto-Registration. While I have some knowledge about previous versions of Unity, I don't know anything about the Auto-Registration features of the new version of Unity.
I've built a container which automatically registers your services. All you need to do is to tag them with an attribute.
This is not auto-wiring per se, but that's part of my point. Unity has from the start been able to build classes which have not been registered in the container. And that's IMHO a big weakness, as the class might be used with dependencies that it shouldn't use, or it might get a different lifetime than intended.
My choice to use an attribute was to be able to make sure that all services can be resolved and built. When you call the builder.Build() method, my container will throw an exception if something can't be resolved.
Hence you will see directly at startup if something is missing, rather than later at runtime.
So auto-wiring might seem good, but as you say: you'll lose control, only to discover later at runtime that something is missing.
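To sketch the idea (this is not my container's actual API, just an illustration of attribute-tagged registration with a fail-fast Build step):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Reflection;

// Hypothetical attribute used to tag services for registration.
[AttributeUsage(AttributeTargets.Class)]
public sealed class ServiceAttribute : Attribute { }

public sealed class ContainerBuilder
{
    private readonly Dictionary<Type, Type> _registrations = new Dictionary<Type, Type>();

    // Scan an assembly and register every class tagged with [Service]
    // under each interface it implements.
    public void ScanAssembly(Assembly assembly)
    {
        foreach (var type in assembly.GetTypes()
                                     .Where(t => t.IsClass && t.IsDefined(typeof(ServiceAttribute), false)))
        {
            foreach (var contract in type.GetInterfaces())
                _registrations[contract] = type;
        }
    }

    // Fail fast: verify that every registered implementation's constructor
    // parameters are themselves registered, and throw at startup if not.
    public void Build()
    {
        foreach (var implementation in _registrations.Values)
        {
            var ctor = implementation.GetConstructors().First();
            foreach (var parameter in ctor.GetParameters())
            {
                if (!_registrations.ContainsKey(parameter.ParameterType))
                    throw new InvalidOperationException(
                        implementation.Name + " depends on " + parameter.ParameterType.Name +
                        ", which is not registered.");
            }
        }
    }
}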
Ninject, Spring.NET, Unity, Autofac, and Castle Windsor are all examples of IoC frameworks that are available. However, I like the learning curve and the control of writing my own. It is definitely common practice not to "re-invent the wheel" and to just use pre-existing structures; if your comment is along those lines, please be gentle.
Can IoC be implemented without the use of XML? It seems to me most, if not all, of the aforementioned frameworks use XML but I would much rather just write mine in C# instead of using XML to load a .dll. The C# is all converted into one .dll eventually anyway.
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
This is accomplished in C# using microsoft's library System.ComponentModel.IContainer by having a class which inherits it. A class, such as Product, would have an interface IProduct. A generic constructor would then inherit from IContainer and in the constructor, allow a repository to be passed in, an instantiated object to be passed in, and a function to be passed in. This would allow a controller action to then instantiate an interface (IProduct), instantiate the generic constructor with the current repository instance, and then pass it the interface and function.
Is this setup accurate?
I am still trying to learn more about this topic, and have read the wiki articles on IoC, DI, and read about Castle.Windsor, ninject, Unity, and looked over multiple definitions from the MSDN regarding C# libraries which are used. Any assistance, corrections, or suggestions, are greatly appreciated. Thanks
Can IoC be implemented without the use of XML?
Yes: Ninject, Unity, Castle Windsor and Autofac can all be configured without using any XML at all. (I'm not sure about Spring.NET; the last time I used it, version 1.3, it wasn't possible.)
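For example, with Ninject the whole configuration can live in C# code, along these lines (IProduct and Product stand in for your own types):

using Ninject;
using Ninject.Modules;

// All bindings are declared in code; no XML involved.
public class ShopModule : NinjectModule
{
    public override void Load()
    {
        Bind<IProduct>().To<Product>();
    }
}

public static class Program
{
    public static void Main()
    {
        var kernel = new StandardKernel(new ShopModule());
        IProduct product = kernel.Get<IProduct>();
    }
}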
From my understanding, if wrong please correct, IoC can be used with DI to make the functionality of classes be based off of their definition and implementation while allowing for a separation of concerns.
If under "IoC" you mean "IoC container" then yes, it can be used with DI, but since DI is a particular case of Inversion Of Control your IoC container will be just a container for you dependencies. By just having it your will not magically get any DI-friendly types. It's just a support for managing your inverted dependencies.
Edit
As Mystere Man pointed out in his answer, you need to improve your understanding of IoC containers. So I would recommend reading this wonderful book (by Mark Seemann) about all that stuff.
I think it is a great exercise to start without a DI container. Before focusing on using a DI framework, focus on best patterns and practices. In particular, design all classes around Dependency Injection and make sure your code follows the SOLID principles. Both sound pretty easy, but this takes a shift in mindset and a lot of practice before you will get it right (but it is well worth it).
When you do this, and do it well, you will quickly notice that your application evolves in amazing ways. Your code will be testable and extendable in ways you never imagined before, without your code rotting over time (although it takes constant focus to prevent code from rotting).
Still, when you do all this right (which, again, takes a lot of practice), you will still have one part of your application that, despite your best efforts, will get more complex and harder to maintain as the application grows. This is the part of the application where you wire all dependencies together: the Composition Root.
And this is where DI containers come in. They have fancy names and compete with each other over features, but their goal can be stated in a single sentence:
The goal of a DI container is to keep the Composition Root maintainable.
Although you can write your own simple DI container to wire up your dependencies, to prevent your Composition Root from becoming a big, fragile, ever-changing ball of mud, the container must have at least one crucial feature: Automatic Constructor Injection (a.k.a. auto-wiring). With auto-wiring, the container will look at the constructor arguments of a type that it needs to create, and it will inject the dependencies into it based on the types of those arguments. This feature makes the difference between a maintenance nightmare and a healthy Composition Root. Although creating your own container that supports auto-wiring isn't that hard (with expression trees it takes about 20 lines of code), the moment you start needing auto-wiring is the moment to start using one of the existing DI frameworks.
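To make that concrete, here is a deliberately naive sketch of auto-wiring using plain reflection (no lifetimes, no error handling, and Activator.CreateInstance instead of the compiled expression trees a real container would use):

using System;
using System.Collections.Generic;
using System.Linq;

// A toy auto-wiring container: resolves a type by recursively
// resolving the parameters of its first public constructor.
public sealed class TinyContainer
{
    private readonly Dictionary<Type, Type> _map = new Dictionary<Type, Type>();

    public void Register<TService, TImplementation>() where TImplementation : TService
    {
        _map[typeof(TService)] = typeof(TImplementation);
    }

    public TService Resolve<TService>()
    {
        return (TService)Resolve(typeof(TService));
    }

    private object Resolve(Type serviceType)
    {
        Type implementation;
        if (!_map.TryGetValue(serviceType, out implementation))
            implementation = serviceType; // assume a concrete type

        var ctor = implementation.GetConstructors().First();
        var arguments = ctor.GetParameters()
                            .Select(p => Resolve(p.ParameterType))
                            .ToArray();
        return Activator.CreateInstance(implementation, arguments);
    }
}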
So in conclusion, if you feel it helps you in the learning experience by doing this by hand, please do, as long as you stick to SOLID, DI, DRY, and TDD. When the burden of changing your Composition Root for each change in the application gets too big (which will be sooner than you might expect), switch to an established framework.
I would suggest using an existing DI container first, to understand how it works from the end user perspective. Then you can go about re-designing the wheel. My favorite saying is "You have to know the rules before you can break them".
Some of what you've said doesn't make a lot of sense. You don't have to use System.ComponentModel.IContainer in any framework I know of. Maybe Unity requires that (Microsoft's container), but none of the others do. I'm not familiar with Unity, though.
Jason Dolinger, in his video located here (not available right now): www.lab49.com/files/videos/Jason%20Dolinger%20MVVM.wmv (from 0.59 to 1.04), uses code like this:
public partial class App : Application
{
    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        // Create the container and register a concrete IQuoteSource instance.
        IUnityContainer container = new UnityContainer();
        RandomQuoteSource source = new RandomQuoteSource();
        container.RegisterInstance<IQuoteSource>(source);

        // Resolve the main window through the container and show it.
        WatchList window = container.Resolve<WatchList>();
        window.Show();
    }
}
He uses the IUnityContainer interface, which I cannot find. As I understand it, here we just create a window, so the container.Resolve call could be replaced with new WatchList(...). Also, somehow we associate RandomQuoteSource as an implementation of IQuoteSource; however, I don't have a clear understanding of how this should be used later.
The questions are:
how do you create the main Window in your MVVM application? Do you also use IUnityContainer for that?
is this a good technique in general? Is it well known? Is it the default way to do these things? What alternatives do I have?
where can I find Microsoft.Practices.Unity.dll?
Should you?
That's up to you. It can be complicated. If you use it correctly, it can be worth it, both for your code, and for your knowledge of how your code works.
You will be able to identify the parts of your application that should only touch other parts at arms length. You will be more free to make changes to your code without impacting other portions of your code. You will also have an easier time creating unit tests that use mock objects, but that's just a side benefit.
You'll have to read some articles on this topics and see if it makes sense to you.
(to be fair, it really isn't complicated - it just seems that way while you're learning it, or while you're trying to explain it to someone who is new to the concepts)
Unity and Dependency Injection
IUnityContainer is part of Unity, which is a Dependency Injection container library.
It can be coupled with the PRISM framework for use in WPF/Silverlight.
Dependency Injection has a lot of rules you'll want to follow to get the maximum benefit. I don't see an easy or effective "getting started" guide on Unity's site, and Mark Seemann's book on Dependency Injection in .Net isn't free.
So instead I suggest you check out an intro tutorial on Dependency Injection on a site that has a good tutorial:
https://github.com/ninject/ninject/wiki/Getting-Started
This is not the Unity framework, so the code won't directly compile...
...but it should teach you the basics of what Dependency Injection is, and why you'd want to use it. Then you should be able to follow the sample code and videos on the Unity page.
If you skip these steps, you're going to get confused very quickly, and will probably shoot yourself in the foot at least a few dozen times.
Creating Windows
You don't use the container except in that one function. Use it anywhere else, and you're not using the DI container correctly. You'll just use the container to register your views, view models, and models, resolve the main window you previously registered, and dispose the container when you're done.
This process is called the "Three Calls Pattern". Unfortunately I don't have any generic examples for Unity, but here is an article on the three calls pattern for yet another DI container library.
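In Unity terms, the three calls might look roughly like this; WatchList, IQuoteSource and RandomQuoteSource are the types from the snippet above, and the exact set of registrations will differ in your application:

// (Assumes the same namespaces/usings as the snippet above.)
public partial class App : Application
{
    private IUnityContainer container;

    protected override void OnStartup(StartupEventArgs e)
    {
        base.OnStartup(e);

        // 1. Register: create the container and map abstractions to concrete types.
        container = new UnityContainer();
        container.RegisterType<IQuoteSource, RandomQuoteSource>();

        // 2. Resolve: build the root object once; Unity wires up its dependency graph.
        WatchList window = container.Resolve<WatchList>();
        window.Show();
    }

    protected override void OnExit(ExitEventArgs e)
    {
        // 3. Dispose: release the container when the application shuts down.
        container.Dispose();
        base.OnExit(e);
    }
}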
You might also see this mentioned in that Ninject tutorial that I linked above.
It's a well-known technique called Dependency Injection.
An alternative is to create the needed dependency by hand.
You can download the unity assemblies at patterns & practices - Unity from codeplex.
Take a look at this CodeProject article for a tutorial.
I haven't worked with WPF, but I would go the same way to minimize the dependencies and get better testability.
Edit
Here is another example from codeplex.
But read this article here on Stack Overflow first, because it seems to be a pain.
Using dependency injection is a good practice in general. It lets your classes worry about their own concerns and leaves the framework to worry about managing dependencies. This leads to more focused, more maintainable, and more testable code in your classes. Unity is just one of many such frameworks, and it could be argued there are others that are better, such as StructureMap and Castle Windsor. Using a container basically means that in one place you set up a registry from which you resolve your classes with their dependencies, and your classes specify, in their constructor or with public properties, those things on which they depend. If you resolve a class from the container, it will automatically resolve its dependencies according to the type of each dependency, based on how that type is registered with the container.
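For instance, the class side of that arrangement is just a constructor parameter, and the container fills it in when you resolve the class; WatchListViewModel is a made-up name, and IQuoteSource reuses the interface from the snippet above:

public class WatchListViewModel
{
    private readonly IQuoteSource source;

    // The container inspects this constructor and injects whatever
    // implementation was registered for IQuoteSource.
    public WatchListViewModel(IQuoteSource source)
    {
        this.source = source;
    }
}

// At the composition root:
// container.RegisterType<IQuoteSource, RandomQuoteSource>();
// var viewModel = container.Resolve<WatchListViewModel>();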
The easiest way to include Unity in your project is to use NuGet. Just issue: Install-Package Unity. You can also download binaries and source, and get a lot more information, at the CodePlex project for Unity: http://unity.codeplex.com/.
One suggestion is Ninject - a lightweight, easy to use DI-tool.
I have decided to use MEF for a plugin pattern I have and found MEF easy to pick up and not intrusive at all. I looked at samples and found them very easy to work with.
However, as soon as I started implementing, I started struggling with the composition. Let's say I have a class which has [ImportMany] on one of its properties. All the examples I have seen create the container in the class which has the imports (let's call it the composable), and basically the class composes itself. That might be OK for an example, but surely putting knowledge of how the plugin gets populated into the composable is too much for it to know.
I can happily create a singleton container and access it in my composable, but again the composable has to explicitly call Compose() on itself, and I am not happy with that either, as it is like a dependency injection scenario where the class proactively calls Resolve() on the container. So I do not want to use it for just Service Location.
To make matters worse, I am also using Castle Windsor for DI, and I am not sure how MEF and Windsor are supposed to work together.
I have really looked around and have not been able to find any guidance or samples on how to do MEF right. Now it might be that I have not looked hard enough, or that I do not know MEF well enough (which is true), but I would value your views from the experience of actually using it in the real world.
Do not do that. I used MEF for my last project, and I wish I hadn't.
There's a good idea behind it (composition), and I had been doing that manually for years. I was happy about the first official version in .NET 4.0, but there are still a lot of design problems.
Unfortunately, it's part of Microsoft's policy to leave testing and bug finding to end users, who then feed back the hard-earned bugs and suggestions.
MEF is good if you use it the way the examples say. As soon as you need a little change, you will find there's not enough documentation and nobody will answer you. Here are some of my never-resolved issues with MEF; you can find my questions on codeplex.com which have never been answered by the developer team:
1) How to pass parameters to parts' constructors (they may say use ExportFactory, which is shipped in the codeplex version, but I wasted a long time on this, and I can say there's no acceptable solution for that).
2) How to set configuration for parts? (I ended up passing configuration to parts through a method, which is a bad idea, but the best available.)
3) MEF is very slow because it uses reflection under the hood. In my case, loading 1,000 parts takes 60 seconds.
4) Debugging is awful. You get unclear messages. You will end up downloading the full source from codeplex and searching for your exceptions inside the code.
After all that, I think if you have other choices, let MEF mature and use the next version.
I just shared my own experience.
The recommended pattern is for you to create the container once in your hosting code, and only access it from there to get the "root" part. You would call container.GetExport<Root>() if it's OK for MEF to create the part for you, otherwise you would call container.SatisfyImports(root).
The root part should import the things it needs, and the parts supplying those exports should import what they need, and so on. MEF will create the whole graph and none of the parts need to call into the container directly. The samples often have very few different parts, so it isn't always obvious that the container creation and composition should only occur once, even in more complex applications.
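A minimal sketch of that shape with attributed MEF from .NET 4; Root and IPlugin are placeholder names, not part of the MEF API:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;
using System.Reflection;

public interface IPlugin { void Run(); }

[Export]
public class Root
{
    // The root part declares what it needs; it never touches the container.
    [ImportMany]
    public IEnumerable<IPlugin> Plugins { get; set; }
}

public static class Host
{
    public static void Main()
    {
        // Hosting code: the container is created exactly once, here.
        var catalog = new AssemblyCatalog(Assembly.GetExecutingAssembly());
        using (var container = new CompositionContainer(catalog))
        {
            // MEF creates Root and satisfies its imports (and their imports, and so on).
            Root root = container.GetExportedValue<Root>();
            foreach (var plugin in root.Plugins)
                plugin.Run();
        }
    }
}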
There are situations where you may have object that need their imports satisfied, but can't be created by MEF. An example of this is WPF/Silverlight UI objects that are created by the Xaml parser. In this case you might resort to a service which allows these objects to request that their imports be satisfied.
I don't have much advice for how to use MEF and another DI container in the same application. If there isn't much interaction between the parts of the system composed with MEF and Windsor it might work without much trouble. If you need components from one container to be injected with components from the other container, it won't be as simple. One way would be to have a service that a component would have to call to resolve its dependencies from the other container. The other possibility would be to have the containers themselves linked. You can do this in theory with MEF by writing an ExportProvider that accesses the Windsor container. In practice it would require a very deep level of knowledge about MEF, and it might not be possible to get it to work exactly how you'd like.
I know what Dependency Injection is in theory, but I haven't ever actually used Dependency Injection in any of my projects yet. So consider me to be a DI noob.
The straightforward question is; Can MEF be used for Dependency Injection?
If it can, my follow up question is; Is it a good idea to use MEF for dependency Injection?
I understand that my follow up question may be viewed as being subjective. But, I am looking for best practices and reasons for and against. So, I hope that my follow up question doesn't rustle too many feathers.
The context of all this is I feel a little lost trying to figure out how to make a plugin framework for asp.net mvc.
As I explain in my book MEF can be used as a DI Container, but in its current incarnation it's not particularly well-suited for the task.
MEF was designed to address extensibility scenarios, and while it has a lot of overlapping features, it's quite limited when it comes to configuration and lifetime management.
MEF can, I believe, be used for dependency injection; at least I use it in my own small home WPF project currently. I suspect it might get messy when you need to inject different types for an interface for different deployments of your application, if you require this. It would require going to some effort to add the right classes to your catalog.
Where I work, using ASP.NET MVC2, we use Castle Windsor for dependency injection. We make use then of the XML configuration to initialize the container. This means we can inject different types for an interface without having to rebuild.
I believe .NET offers another option to MEF, similarly called MAF. It's supposed to be more complex, but offer much more control. I don't however know anything more about it.
(I'm not very experienced (1 year employed), so if someone disagrees with me on something, they're probably more correct.)
Glenn Block (former product manager of MEF) answered this FAQ in a blog post.
Most of the shortcomings of MEF mentioned in his post have been addressed by MEFContrib: it contains additional catalog and export provider implementations to add support for POCOs, open generics and interception.
Update: the recently released MEF2 Preview 3 adds support for open generics and attribute-less registration out of the box. The APIs of preview releases aren't final, but this is a good indication that those features will be in the next (>v4.0) .NET release.