I've started playing with Ninject, and a screencast I watched shows the following as the way to set up a binding:
class MyModule : StandardModule {
    public override void Load() {
        Bind<IInterface>().To<ConcreteType>();
        // More bindings here...
    }
}
This is all very good.
However, suppose you have one hundred objects used in an application. That would mean one hundred bindings. Is this correct?
Secondly, I presume that such an application might be split into subsystems such as GUI, Database, Services, and so on.
Would you then create a custom module for each subsystem, which in turn would be:
GUIModule
DatabaseModule
ServiceModule
...
For each module you'd have the correct bindings that it requires. Am I on the right track here?
Finally, would this binding all occur in Main, or the entry point of your application?
However, suppose you have one hundred objects used in an application. That would mean one hundred bindings. Is this correct?
One hundred registered components, yes, but not necessarily registered one by one. There's a Convention extension for Ninject that allows you to scan assemblies and register types based on some defined rules. See this test as an example.
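A rough sketch of such a scan, assuming the Ninject.Extensions.Conventions package (the fluent calls below are how I remember that extension's API, so treat this as illustrative rather than authoritative):

using Ninject;
using Ninject.Extensions.Conventions;

public static class Bootstrapper
{
    public static IKernel CreateKernel()
    {
        var kernel = new StandardKernel();

        // Scan this assembly and bind each concrete class to its
        // "default" interface (e.g. ConcreteType -> IConcreteType),
        // instead of writing one Bind<...>().To<...>() per type.
        kernel.Bind(x => x.FromThisAssembly()
                          .SelectAllClasses()
                          .BindDefaultInterface());

        return kernel;
    }
}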
Would you then create a custom module for each subsystem
Again, not necessarily. You might just want to register all your repositories (just to name something) in a single convention registration.
For each module you'd have the correct bindings that it requires.
As with any "module" (be it an assembly, a class, or an application), the concepts of coupling and cohesion apply here as well. It's best practice to keep coupling low (don't depend too much on other modules) and cohesion high (all components within a module should work toward a common goal).
Finally, would this binding all occur in Main, or the entry point of your application?
Yes, see this related question.
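For illustration, a minimal sketch of what that entry-point composition might look like (GUIModule, DatabaseModule, ServiceModule and IApplication are the hypothetical names from the question, not a real API):

using Ninject;

public static class Program
{
    public static void Main()
    {
        // The composition root: load all modules once, resolve the
        // top-level object, and let the container wire up the rest.
        using (var kernel = new StandardKernel(
            new GUIModule(), new DatabaseModule(), new ServiceModule()))
        {
            var app = kernel.Get<IApplication>();
            app.Run();
        }
    }
}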
I've been working on a personal project which, beyond just making something useful for myself, I've tried to use as a way to continue finding and learning architectural lessons. One such lesson has appeared like a Kodiak bear in the middle of a bike path and I've been struggling quite mightily with it.
The problem is essentially an amalgam of issues at the intersection of dependency injection, assembly decoupling and implementation hiding (that is, implementing my public interfaces using internal classes).
At my jobs, I've typically found that various layers of an application hold their own interfaces, which they publicly expose but internally implement. Each assembly's DI code registers the internal class to the public interface. This technique prevents outside assemblies from newing-up an instance of the implementation class. However, some books I've been reading while building this solution have spoken against this.

The main things that conflict with my previous thinking have to do with the DI composition root and where one should keep the interfaces for a given implementation. If I move dependency registration to a single, global composition root (as Mark Seemann suggests), then I can get away from each assembly having to run its own dependency registrations. However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).

As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it. As an example, here is a diagram he provided, and, for contrast, a diagram for how I would normally implement the same solution (okay, these aren't quite the same; kindly focus on the arrows and notice when implementation arrows cross assembly boundaries instead of composition arrows).
[Diagram: Martin's style]
[Diagram: What I've normally seen]
I immediately saw the advantage in Martin's diagram: it allows a lower assembly to be swapped out for another, as long as the replacement has a class that implements the interface defined in the layer above it. However, I also saw this seemingly major disadvantage: if you want to swap out the assembly from an upper layer, you essentially "steal" away the interface that the lower layer is implementing.
After thinking about it for a little bit, I decided the best way to be fully decoupled in both directions would be to have the interfaces that specify the contract between layers in their own assemblies. Consider this updated diagram:

[Diagram: contract interfaces placed in their own assemblies between the layers]
Is this nutty? Is it right on? To me, it seems like this solves the problem of interface segregation. It doesn't, however, solve the problem of not being able to hide the implementation class as internal. Is there anything reasonable that can be done there? Should I not be worried about this?
One solution that I'm toying around with in my head is to have each layer implement the proxy layer's interface twice: once with a public class and once with an internal class. This way, the public class merely wraps/decorates the internal one. Some code might look like this:
namespace MechanismProxy // Simulates Mechanism Proxy Assembly
{
    public interface IMechanism
    {
        void DoStuff();
    }
}

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // This class would be registered to IMechanism in the DI container
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism = new InternalMechanism();

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        public void DoStuff()
        {
            // Do whatever
        }
    }
}
... of course, I'd still have to address some issues regarding constructor injection and passing the dependencies injected into the public class through to the internal one. There's also the problem that outside assemblies could still new-up the public Mechanism... I would need a way to ensure only the DI container can do that... I suppose if I could figure that out, I wouldn't even need the internal version. Anyway, if anyone can help me understand how to overcome these architectural problems, it would be mightily appreciated.
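To make the constructor-injection concern concrete, the forwarding I have in mind would look something like this (IDependency is a made-up dependency, purely for illustration):

namespace MechanismImpl // Simulates Mechanism Assembly
{
    using MechanismProxy;

    // The public wrapper declares the dependencies and simply hands
    // them through to the internal implementation.
    public class Mechanism : IMechanism
    {
        private readonly IMechanism _internalMechanism;

        public Mechanism(IDependency dependency)
        {
            _internalMechanism = new InternalMechanism(dependency);
        }

        public void DoStuff()
        {
            _internalMechanism.DoStuff();
        }
    }

    internal class InternalMechanism : IMechanism
    {
        private readonly IDependency _dependency;

        public InternalMechanism(IDependency dependency)
        {
            _dependency = dependency;
        }

        public void DoStuff()
        {
            // Use _dependency here
        }
    }
}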
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
Unless you are building a reusable library (that gets published on NuGet and gets used by other code bases you have no control over), there is typically no reason to make classes internal. Especially since you program to interfaces, the only place in the application that depends on those classes is the Composition Root.
Also note that if you move the abstractions to a different library and let both the consuming and the implementing assemblies depend on that library, those assemblies don't have to depend on each other. This means it doesn't matter at all whether those classes are public or internal.
This level of separation (placing the interfaces in an assembly of their own), however, is hardly ever needed. In the end it's all about the required granularity during deployment and the size of the application.
As for decoupling assemblies, Martin Fowler instructs to put interfaces in the project with the code that uses the interface, not the one that implements it.
This is the Dependency Inversion Principle, which states:
In a direct application of dependency inversion, the abstracts are owned by the upper/policy layers
This is a somewhat opinion-based topic, but since you asked, I'll give mine.
Your focus on creating as many assemblies as possible to be as flexible as possible is very theoretical, and you have to weigh the practical value against the costs.
Don't forget that assemblies are only a container for compiled code. They become mostly relevant only when you look at the processes for developing, building and deploying/delivering them. So you have to ask a lot more questions before you can make a good decision on how exactly to split up the code into assemblies.
So here are a few examples of questions I'd ask beforehand:
Does it make sense from your application domain to split up the assemblies in this way (e.g. will you really need to swap out assemblies)?
Will you have separate teams in place for developing those?
What will the scope be in terms of size (both LOC and team sizes)?
Is it required to protect the implementation from being available/visible? E.g. are those external interfaces or internal?
Do you really need to rely on assemblies as a mechanism to enforce your architectural separation? Or are there other, better measures (e.g. code reviews, code checkers, etc.)?
Will your calls really only happen between assemblies or will you need remote calls at some point?
Do you have to use private assemblies?
Will sealed classes instead help enforce your architecture?
For a very general view, leaving these additional factors out, I would side with Martin Fowler's diagram, because that is just the standard way to provide and use interfaces. If your answers to the questions indicate additional value in further splitting up or protecting the code, that may be fine, too. But you'd have to tell us more about your application domain, and you'd have to be able to justify it well.
So in a way you are confronted with two old wisdoms:
Architecture tends to follow organizational setups.
It is very easy to over-engineer (over-complicate) architectures, but it is very hard to keep them as simple as possible. Most of the time, simpler is better.
When coming up with an architecture, you want to consider those factors upfront; otherwise they'll come back to haunt you later in the form of technical debt.
However, the downside is that the implementation classes have to be public (allowing any assembly to instantiate them).
That doesn't sound like a downside. Implementation classes that are bound to abstractions in your Composition Root could still legitimately be used explicitly somewhere else, for other reasons. I don't see any benefit in hiding them.
I would need a way to ensure only the DI container can do that...
No, you don't.
Your confusion probably stems from the fact that you think of DI and the Composition Root as if there must be a container behind them.
In fact, however, the infrastructure could be completely "container-agnostic" in the sense that you still have your dependencies injected, but you don't think about "how". A Composition Root that uses a container is one choice; a Composition Root where you compose dependencies manually is just as good a choice. In other words, the Composition Root could be the only place in your code that is aware of a DI container, if one is used at all. Your code is built against the idea of Dependency Inversion, not against the idea of a Dependency Inversion container.
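For example, a minimal container-agnostic Composition Root, composed entirely by hand (Application is a hypothetical top-level class, used only for illustration):

public static class Program
{
    public static void Main()
    {
        // Pure DI: no container at all. Dependencies are constructed
        // and wired together manually at the entry point.
        IMechanism mechanism = new Mechanism();
        var app = new Application(mechanism);
        app.Run();
    }
}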
A short tutorial of mine can possibly shed some light here: http://www.wiktorzychla.com/2016/01/di-factories-and-composition-root.html
I'm organizing a solution and I need some tips on how to properly arrange the project's components.
Right now I have everything implemented in a single project, but I feel like it makes sense to isolate some of the components in their own projects. The main modules I have are categorized by folders in the project, and are the Logic module, the Database Access module and the Model module. It makes sense to me that these modules should be defined in their own projects (maybe as DLLs).
Now, my question comes from the fact that during application startup, the logic instantiates a configuration class which reads configurations from the app.config file and is known by these modules. Does it make sense to isolate the configuration into its own project, to prevent the other modules from depending on the logic module? If so, should the configuration class implement interfaces so that each module only has access to its relevant configurations?
"The main modules I have are categorzed by folders on the project, and
are the Logic module, Database Access module and the Model module...
the logic instantiates a configuration class which reads
configurations from the app.config file and is known by these
modules."
The picture this paints to me is that you've got a class or classes that either take the configuration class as a constructor parameter, or there's a global/singleton instance of the configuration class that the other classes make use of.
But the configuration class can read configs, etc. Presumably, the other classes don't need something that can read configs. They just need some values* (that happen for now to be read from a config). Those other classes don't need to go out and ask anybody for those values**; they should just require those values as parameters in their constructors.
This way, those other classes do not need to have any knowledge of the configuration class. Someone just hands them the data that they need. But who?
The answer is the entry point(s)***. Each project in the solution that contains an entry point (console apps, web apps, and test projects) has the responsibility for interfacing with the environment; it knows the context that it wants the rest of the code to run in. So the entry points need to get configuration information by whatever means necessary (e.g. your configuration class or the autogenerated MyEntryPoint.Properties.Settings) and then supply that to the constructors of the other classes they need.
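A small sketch of that idea (ReportGenerator, ConfigurationClass and the setting names are made up for illustration):

// A class that needs configuration values doesn't read them itself;
// it just declares them as constructor parameters.
public class ReportGenerator
{
    private readonly string _outputDirectory;
    private readonly int _maxRows;

    public ReportGenerator(string outputDirectory, int maxRows)
    {
        _outputDirectory = outputDirectory;
        _maxRows = maxRows;
    }
}

public static class Program
{
    public static void Main()
    {
        // Only the entry point knows about the configuration class.
        var config = new ConfigurationClass(); // reads app.config
        var generator = new ReportGenerator(
            config.OutputDirectory,
            config.MaxRows);
        // ...
    }
}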
*If each class requires a great deal of configuration information (as your comment below implies), consider either breaking those classes up into something simpler (because needing a lot of configuration may point to an ill-defined responsibility) or grouping the necessary information into DTOs that represent coherent concepts. Those DTOs could then be placed in their own project that can be referenced by both consumers and producers of configuration information.
**This assumes that the values obtained from the configuration class are constant for the lifetime of the objects that would be constructed with them. If not, then instead of taking those values as constructor parameters, you should take an interface (or Func) that you can call for the info you need when you need it. Those interfaces could be defined in an otherwise-empty project which anybody can reference. This sounds like what you're getting at with
"should the configuration class implement from interfaces so that each module only has access to it's relevant configurations?"
When you say
"Does it make sense to isolate the configuration into it's own project, to prevent the other modules from depending on the logic module?"
the answer is yes and no. The Logic module does stuff; doing stuff implies a need for tests; tests want to configure whatever they are testing in whatever way suits the test. So Logic shouldn't be responsible for configuration; it should itself take in information from whoever does the configuration. Rather, configuration is the entry points' job.
***I'm using "entry point" a little loosely here. I'm not talking specifically about the .entrypoint IL directive, but just the first places in your code that can be given control by stuff outside of your control. This includes Main in C# console apps, HttpApplication.Application_Start in web apps, methods recognized as tests by the test runner of your choice, etc.
I'm working with Castle Windsor as an IoC container. I'm looking at a bit of code written by a team member and trying to figure out what the best practice here would be. Something rings odd to me about the way this was written, but I don't have the experience to say what should be done.
There are two Castle configs (which is my first gripe, but to give the benefit of the doubt, let's say this is okay). Let's call them cfgMain and cfgSub.
There is a main class which is responsible for setting up the application and causing it to run. (It can be a class with a Main() or a Global.asax, doesn't matter). Let's say: MainClass.
There is also a DependentClass.
MainClass instantiates a CastleContainer and installs cfgMain into it, then Resolves DependentClass.
DependentClass creates another CastleContainer and installs cfgSub into it. This is the part I have a problem with.
It seems like having a hardcoded path to a config inside of a class which itself is created via IoC is a recipe for disaster. It also makes it very hard to unit test.
Call to action: What's the best practice here? Should all the configs be merged? What if there's a reason (read: need) to separate them?
Without knowing why there are two configurations, it is impossible to judge that.
But assuming there is a reason, both classes sound like parts of the Composition Root, a place near the start of the application that wires up all container dependencies. The main class is the composition root for the first configuration, the dependent class for the second. There is still nothing wrong with that.
I would say that resolving the dependent class with the first container makes no sense - the composition root is a concrete class and there is no reason to replace it. However, there could be another reason the container is used to instantiate it - dependencies. If the dependent class itself depends on other services, resolving it with the main configuration sounds like the only way to resolve those dependencies.
Ultimately, with no other information, I would say that (with all these assumptions) what you describe could possibly make sense.
However, I strongly recommend reviewing the need for two separate configurations and two separate composition roots. It sounds overcomplicated.
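For example, if the configurations can be merged, a single Composition Root could install both up front. A minimal sketch with Castle Windsor (assuming the configs are XML files; the file names here just echo the question's cfgMain/cfgSub):

using Castle.Windsor;
using Castle.Windsor.Installer;

public static class Program
{
    public static void Main()
    {
        // One container, one composition root: install both
        // configurations here instead of nesting containers.
        var container = new WindsorContainer();
        container.Install(
            Configuration.FromXmlFile("cfgMain.xml"),
            Configuration.FromXmlFile("cfgSub.xml"));

        var mainClass = container.Resolve<MainClass>();
        // ...
    }
}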
I understand this could be interpreted as an opinion question, but it is technical and a problem I am currently trying to solve.
In the Prism documentation, it is stated that modules should be loosely coupled, with no direct references, communicating only through shared interfaces, like in the following picture:

[Diagram: modules communicating only through shared interfaces in a common Infrastructure assembly]
My issue is that if only a few modules require an IOrdersRepository, the Infrastructure project is the wrong place for it, as Infrastructure contains shared code for all of the modules. But if I placed the interface in another module, then both modules would need to reference that one directly, breaking the loose coupling.
Should I simply create a library which contains this interface and doesn't follow the module pattern?
Thanks,
Luke
It should definitely be the Infrastructure module. Markus' argument is absolutely right: you shouldn't create a separate assembly for each shared set of interfaces. It's much better to have an Infrastructure module with a lot of interfaces instead of a lot of modules with a few interfaces in each one. Imagine that one day you find that two of your "sets of interfaces" should use some shared interface. What would you do? Add yet another assembly for those "super-shared" interfaces? Or combine those modules into one? Either way seems wrong to me.
So - definitely the Infrastructure module!
PS. Imagine if the .NET Framework had thousands of libraries - one for collections, another one for math functions, etc.
UPDATE:
Actually, I use the Infrastructure module mostly for interfaces and very basic DTOs. All other shared code I move to separate assemblies (like YourApplication.UIControls, YourApplication.DAL, etc.). I don't have hard reasons for doing it exactly this way, but this is how I understand Prism's recommendations. Just IMHO.
UPDATE 2:
If you want to share your service that widely, I think it absolutely makes sense to have a structure like:
YourApplication.Infrastructure - "very-shared" interfaces (like IPaymentService)
YourApplication.Modules.PaymentModule - "very-shared" implementation of your PaymentService
YourApplication.WPF.Infrastructure - infrastructure of your WPF application (in addition to YourApplication.Infrastructure)
YourApplication.WPF.Modules.PaymentUI - some WPF specific UI for your YourApplication.Modules.PaymentModule
YourApplication.WebSite.Modules.PaymentUI - UI for web-site
And so on. Your modules will almost always have references to YourApplication.Infrastructure and YourApplication.TYPEOFAPP.Infrastructure, where TYPEOFAPP can be WPF, WebSite, WinService, etc. Or you can name it like YourApplication.Modules.PaymentUI.WPF.
I have a modular application; it behaves quite like a plugin system. Module B depends on Module A. When B is present, some dialogs (titles etc.) in Module A need to be altered. Also, a different entity should be used for a list when Module B is present, which I want to include in Module B so that A doesn't know about it at compile time. Creating an abstract base in A for the entity is something I want to avoid as well.
How would you implement this requirement? The modules can communicate in various ways:
1.) Microsoft Unity is used for object creation and dependency injection
2.) The modules can communicate via a message system
3.) There's an EventAggregator which all the modules can use
I don't want to subclass the dialog in Module B and just alter the type mapping in Unity, because then I'd have to provide the whole dialog in another module. Also, if some other module wanted to make other changes to the dialog, it'd be impossible.
Suggestions welcome!
Without knowing specific details, I would use interfaces to blend the plug-in components/modules. Require that each plug-in component implement an interface -- say IPluginComponent or whatever makes sense. (Actually, only components that must communicate or interact would actually be required to implement the interface.) Once all modules are loaded, the host application can fire methods or events on the components.
Personally, I like to keep things data-driven and as simple as possible, so I might favor a "two-phase" pass through the modules. This keeps the dependencies between modules simple. In the first phase, when all components are loaded, the host application fires the "ContributeSharedData(Context ctx)" method, where each component sets any values in a shared context. (This might also be called "Init(ctx)".) The context might be as simple as a name-value-pair collection, e.g. Module B says *coll["ModuleB_Installed"] = true*, or it could add itself to a list of modules, or... the possibilities are endless. The context can be whatever class or structure is required to enable these components to work together.
The next pass -- if required -- would be for the components/modules to configure themselves based on the shared context. So the host might run through all the modules supporting the shared interface and fire the "Configure" method or event. Then ModuleA for instance can look in the context and see that ModuleB is installed, and configure its interface accordingly.
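A rough sketch of that two-phase idea (all names here are illustrative, not from any particular framework):

using System.Collections.Generic;

// Shared contract that participating modules implement.
public interface IPluginComponent
{
    void ContributeSharedData(PluginContext ctx); // phase 1
    void Configure(PluginContext ctx);            // phase 2
}

// A simple name-value context shared between modules.
public class PluginContext
{
    public IDictionary<string, object> Values { get; } =
        new Dictionary<string, object>();
}

public static class Host
{
    public static void InitializeModules(IReadOnlyList<IPluginComponent> modules)
    {
        var ctx = new PluginContext();

        // Phase 1: every module contributes what it knows.
        foreach (var module in modules)
            module.ContributeSharedData(ctx);

        // Phase 2: every module configures itself from the result.
        foreach (var module in modules)
            module.Configure(ctx);
    }
}

Module A's Configure could then check ctx.Values for "ModuleB_Installed" and alter its dialog titles accordingly, without any compile-time reference to Module B.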
If an interface doesn't make sense for your situation, you can use any method of contributing shared data in a generic way to a common location, e.g. messaging or other common classes.
Hope this helps!