Domain-driven design Entity base class - C#

I am sorry if the question is trivial and already answered; I have been looking for an answer, but all I could find were images and UML diagrams of the situation, without explanation. I have started reading the .NET Domain-Driven Design book and find myself hitting a brick wall right at the start. My problem is that the author suggests creating a base class for Entities and placing it in the Infrastructure layer, to be inherited by all Entity classes in the Domain. That seems very uncommon and counterintuitive, at least to me. I am trying to follow the book, but I can't understand why this is presented as a good idea: I fear circular dependencies later in the project's development, and I can't see why the Entity base class should live outside of the Domain. Thank you.
EDIT:
I am afraid of cyclic dependencies because, in the chapter that precedes the implementation phase, the author describes a downward flow of dependencies: UI -> Application layer -> Domain, while the Infrastructure layer references no one but is referenced by everyone. So I am having trouble understanding why the Domain should reference the Infrastructure. As I understand DDD, the Domain should be independent of the other layers. So I just wanted to ask: is the Domain -> Infrastructure dependency a common thing, or is there a better/cleaner solution?
The book I'm reading is called .NET Domain Driven Design in C# Problem-Design-Solution
EDIT regarding the architecture depiction:
This link provides an image that depicts the architecture of the design at hand. The same image is provided in the book that I am following.
https://ajlopez.wordpress.com/2008/09/12/layered-architecture-in-domain-driven-design/. Thank you for your feedback and your answers.

Putting the base class for entities in the infrastructure layer is only a good idea if you have a layered architecture that puts infrastructure at the bottom - which is not what most architectures do nowadays.
So I'd recommend against this. Put the entity base type in the domain layer.
Nobody is going to ask "what domain concept is this?" if you name it appropriately. Make sure the class is abstract and properly documented. This will further clarify the situation.
With DDD, architectures that put the domain in the "center" (e.g. Hexagonal Architecture) are usually a better fit than classical layered architectures with infrastructure at the bottom. The important property of such architectures is that all dependencies point inwards, i.e. towards the domain.
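A minimal sketch of that inward-pointing dependency (all names here are hypothetical): the entity base class and the repository *interface* live in the domain, while the infrastructure implements the interface, so infrastructure references the domain and never the reverse.

```csharp
using System;
using System.Collections.Generic;

// Domain layer: no outward references. Both the entity base class
// and the port (ICustomerRepository) are declared here.
public abstract class Entity
{
    public Guid Id { get; protected set; }
}

public class Customer : Entity
{
    public string Name { get; private set; }

    public Customer(Guid id, string name)
    {
        Id = id;
        Name = name;
    }
}

public interface ICustomerRepository
{
    Customer FindById(Guid id);
}

// Infrastructure layer: depends on the domain, not the other way around.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<Guid, Customer> _store =
        new Dictionary<Guid, Customer>();

    public void Add(Customer customer) => _store[customer.Id] = customer;

    public Customer FindById(Guid id) =>
        _store.TryGetValue(id, out var customer) ? customer : null;
}
```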

It's called a layer supertype and is a common pattern.
In general, it's convenient to "hide" all infrastructure code that is required for your domain entities (such as database IDs needed by the persistence layer but not by the domain) in a common base class, such that the actual domain classes are not polluted by infrastructure code.
I agree that you should avoid circular dependencies. However, you don't need to actually move the base class to a separate project (to avoid circular dependencies). I guess what the authors mean is that semantically this base class belongs to the infrastructure layer since it contains database IDs.
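For instance, a layer supertype for entities typically carries the persistence ID and identity-based equality, keeping that plumbing out of the concrete domain classes (a sketch with hypothetical names):

```csharp
using System;

// Layer supertype: every entity inherits the database ID and
// identity-based equality from this one place.
public abstract class Entity
{
    public int Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;
        if (other == null || other.GetType() != GetType()) return false;
        // Transient (unsaved) entities have no identity yet, so they
        // are only equal by reference.
        if (Id == 0 || other.Id == 0) return ReferenceEquals(this, other);
        return Id == other.Id;
    }

    public override int GetHashCode() => GetType().GetHashCode() ^ Id;
}

public class Order : Entity
{
    public Order(int id) { Id = id; }
}
```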

If you follow principles of onion architecture, then the domain is the middle layer (no dependencies) and infrastructure is the outer layer (has dependencies).
You don't want your domain model to be dependent on infrastructural concerns, so this means your base class definitely belongs to the domain layer. The idea is that any change in infrastructure (or any technical change for that matter) does not impact the domain/business/application layers.
Also, try to program against interfaces, for which the concrete implementation is injected at run time. This also avoids any coupling to infrastructural concerns in the middle layers.
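A small sketch of that idea (hypothetical names): the domain code depends only on an abstraction, and the concrete implementation is chosen at run time by whoever composes the application. The test double shows why the indirection pays off.

```csharp
using System;

// Domain/application layer: depends only on the abstraction.
public interface IClock
{
    DateTime UtcNow { get; }
}

public class SubscriptionService
{
    private readonly IClock _clock;

    public SubscriptionService(IClock clock)
    {
        _clock = clock;
    }

    public bool IsExpired(DateTime expiresUtc) => _clock.UtcNow > expiresUtc;
}

// Infrastructure: the concrete implementation, injected at run time.
public class SystemClock : IClock
{
    public DateTime UtcNow => DateTime.UtcNow;
}

// A fixed clock for tests: no coupling to the real system time.
public class FixedClock : IClock
{
    public FixedClock(DateTime now) { UtcNow = now; }
    public DateTime UtcNow { get; }
}
```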

Domain-Driven Design is a layered approach that works well when you need to separate business rules (user requirements) from the rest of the project. It puts the Domain at the center of the development.
It is quite usual to see concepts such as Dependency Injection and Inversion of Control when implementing a DDD architecture.
The example below is one of many ways to put the Domain at the center of the application; it's just to give you something to have in mind.
Domain: contains the interfaces and classes that are part of the business rules. (It implements nothing; every behavior is resolved through dependency injection from the infrastructure level.)
Infrastructure: implements the interfaces declared at the domain level. This layer is responsible for all "technological" behaviors of the application. It references the Domain.
Presentation: the user interface. It connects the Domain and Dependency Injection layers, and can see both.
Dependency Injection: solves the dependency problem by returning an implementation of the interface given as a type parameter. This layer can see the Infrastructure and the Domain.
An example is shown below (in C#).
Domain Level:
As you can see, there is no actual implementation of the method here, only a call to the solver asking for an implementation of the interface IFight.
public class Samurai
{
    public void Attack()
    {
        /* There is no class coupling here. 'Solve' receives an interface as a
           type parameter, returns an instance that implements it, and we call
           its Attack method. */
        Dependencies.Solver.Solve<IFight>().Attack(); // Forget about this right now. You will understand it as the post goes on.
    }
}
Then we need to declare an interface, to be implemented at the infrastructure level.
public interface IFight
{
    /* This interface is declared in the domain, although it is
       implemented in the infrastructure layer. */
    void Attack();
}
To make dependency injection work, we need a static holder for the solver at the domain level.
public class Dependencies
{
    private static ISolve _solver;

    public static ISolve Solver
    {
        get
        {
            if (_solver == null)
                throw new SolverNotConfiguratedException();
            return _solver;
        }
        set
        {
            _solver = value;
        }
    }
}
And also a Solver:
public interface ISolve
{
    // This interface will be implemented at the DependencyInjection layer.
    T Solve<T>();
}
Infrastructure level:
Implementation of the interface
public class Fight : IFight
{
    public void Attack()
    {
        Console.WriteLine("Attacking...");
    }
}
Dependency Injection level:
Now it is important to set up the injection:
public class Solver : ISolve
{
    private UnityContainer _container = new UnityContainer();

    public Solver()
    {
        /* Using Unity (a dependency injection framework) it is possible to
           register a type. When the method 'Solve' is called, the container
           looks for a registered class that implements the interface given
           as the type parameter and returns an instantiated object. */
        _container.RegisterType<IFight, Fight>();
    }

    // This is where the magic happens.
    public T Solve<T>()
    {
        return _container.Resolve<T>();
    }
}
Explanation of the dependency call in the Samurai class:
// We are asking the solver for an implementation of the interface.
// The Domain does not know which class does the work, nor how.
Dependencies.Solver.Solve<IFight>().Attack();
Presentation Level:
Samurai samurai = new Samurai();
samurai.Attack();
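One step the example leaves implicit is the wiring: someone has to assign Dependencies.Solver before any domain code runs. Below is a self-contained miniature of the whole pattern, with the Unity container replaced by a dictionary so that wiring step is visible end to end (the names mirror the example; the hand-rolled solver and the string return value are my own stand-ins for illustration):

```csharp
using System;
using System.Collections.Generic;

// Domain level: the static holder and the two interfaces.
public interface ISolve { T Solve<T>(); }
public interface IFight { string Attack(); }

public static class Dependencies
{
    public static ISolve Solver { get; set; }
}

public class Samurai
{
    public string Attack() => Dependencies.Solver.Solve<IFight>().Attack();
}

// Infrastructure level: the concrete behavior.
public class Fight : IFight
{
    public string Attack() => "Attacking...";
}

// Dependency Injection level: a trivial hand-rolled container
// standing in for Unity.
public class Solver : ISolve
{
    private readonly Dictionary<Type, Func<object>> _registrations =
        new Dictionary<Type, Func<object>>();

    public void Register<TInterface, TImplementation>()
        where TImplementation : TInterface, new()
        => _registrations[typeof(TInterface)] = () => new TImplementation();

    public T Solve<T>() => (T)_registrations[typeof(T)]();
}
```

The startup code then wires and uses it: create a Solver, call `Register<IFight, Fight>()`, assign it to `Dependencies.Solver`, and only then construct a Samurai.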
Conclusion:
We can see that the domain is at the center of the implementation, and every business rule can easily be seen at that level, without technical concerns in the middle of it.

Related

DI in class library [duplicate]

I'm pondering the design of a C# library, that will have several different high level functions. Of course, those high-level functions will be implemented using the SOLID class design principles as much as possible. As such, there will probably be classes intended for consumers to use directly on a regular basis, and "support classes" that are dependencies of those more common "end user" classes.
The question is, what is the best way to design the library so it is:
DI Agnostic - Although adding basic "support" for one or two of the common DI libraries (StructureMap, Ninject, etc) seems reasonable, I want consumers to be able to use the library with any DI framework.
Non-DI usable - If a consumer of the library is using no DI, the library should still be as easy to use as possible, reducing the amount of work a user has to do to create all these "unimportant" dependencies just to get to the "real" classes they want to use.
My current thinking is to provide a few "DI registration modules" for the common DI libraries (e.g. a StructureMap registry, a Ninject module), and a set of Factory classes that are non-DI and contain the coupling to those few factories.
Thoughts?
This is actually simple to do once you understand that DI is about patterns and principles, not technology.
To design the API in a DI Container-agnostic way, follow these general principles:
Program to an interface, not an implementation
This principle is actually a quote (from memory though) from Design Patterns, but it should always be your real goal. DI is just a means to achieve that end.
Apply the Hollywood Principle
The Hollywood Principle in DI terms says: Don't call the DI Container, it'll call you.
Never directly ask for a dependency by calling a container from within your code. Ask for it implicitly by using Constructor Injection.
Use Constructor Injection
When you need a dependency, ask for it statically through the constructor:
public class Service : IService
{
    private readonly ISomeDependency dep;

    public Service(ISomeDependency dep)
    {
        if (dep == null)
        {
            throw new ArgumentNullException("dep");
        }
        this.dep = dep;
    }

    public ISomeDependency Dependency
    {
        get { return this.dep; }
    }
}
Notice how the Service class guarantees its invariants. Once an instance is created, the dependency is guaranteed to be available because of the combination of the Guard Clause and the readonly keyword.
Use Abstract Factory if you need a short-lived object
Dependencies injected with Constructor Injection tend to be long-lived, but sometimes you need a short-lived object, or to construct the dependency based on a value known only at run-time.
See this for more information.
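A sketch of the idea (hypothetical names): the consumer gets a long-lived factory injected through its constructor and asks it for short-lived products, passing the run-time value along.

```csharp
public interface IOrderProcessor
{
    string Process(string orderId);
}

// The abstract factory is the injected, long-lived dependency; the
// run-time value (region) decides which product to build.
public interface IOrderProcessorFactory
{
    IOrderProcessor Create(string region);
}

public class RegionalProcessor : IOrderProcessor
{
    private readonly string _region;
    public RegionalProcessor(string region) { _region = region; }
    public string Process(string orderId) => $"{orderId} processed in {_region}";
}

public class OrderProcessorFactory : IOrderProcessorFactory
{
    public IOrderProcessor Create(string region) => new RegionalProcessor(region);
}

public class OrderHandler
{
    private readonly IOrderProcessorFactory _factory;
    public OrderHandler(IOrderProcessorFactory factory) { _factory = factory; }

    public string Handle(string orderId, string region)
    {
        // Short-lived: created per call, not per application lifetime.
        IOrderProcessor processor = _factory.Create(region);
        return processor.Process(orderId);
    }
}
```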
Compose only at the Last Responsible Moment
Keep objects decoupled until the very end. Normally, you can wait and wire everything up in the application's entry point. This is called the Composition Root.
More details here:
Where should I do Injection with Ninject 2+ (and how do I arrange my Modules?)
Design - Where should objects be registered when using Windsor
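A minimal composition root might look like this (hypothetical types): everything stays decoupled until the entry point, where the concrete graph is assembled in one place.

```csharp
using System;

public interface IMessageWriter
{
    void Write(string message);
}

public class ConsoleMessageWriter : IMessageWriter
{
    public void Write(string message) => Console.WriteLine(message);
}

public class Greeter
{
    private readonly IMessageWriter _writer;

    public Greeter(IMessageWriter writer)
    {
        if (writer == null) throw new ArgumentNullException(nameof(writer));
        _writer = writer;
    }

    public void Greet(string name) => _writer.Write("Hello, " + name);
}

// A test double, showing the seam the interface creates.
public class CollectingWriter : IMessageWriter
{
    public string Last;
    public void Write(string message) => Last = message;
}

public static class CompositionRoot
{
    // In a real app this would live in Main(); it is the only place
    // that names concrete types.
    public static Greeter Compose()
    {
        IMessageWriter writer = new ConsoleMessageWriter();
        return new Greeter(writer);
    }
}
```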
Simplify using a Facade
If you feel that the resulting API becomes too complex for novice users, you can always provide a few Facade classes that encapsulate common dependency combinations.
To provide a flexible Facade with a high degree of discoverability, you could consider providing Fluent Builders. Something like this:
public class MyFacade
{
    private IMyDependency dep;

    public MyFacade()
    {
        this.dep = new DefaultDependency();
    }

    public MyFacade WithDependency(IMyDependency dependency)
    {
        this.dep = dependency;
        return this;
    }

    public Foo CreateFoo()
    {
        return new Foo(this.dep);
    }
}
This would allow a user to create a default Foo by writing
var foo = new MyFacade().CreateFoo();
It would, however, be very discoverable that it's possible to supply a custom dependency, and you could write
var foo = new MyFacade().WithDependency(new CustomDependency()).CreateFoo();
If you imagine that the MyFacade class encapsulates a lot of different dependencies, I hope it's clear how it would provide proper defaults while still making extensibility discoverable.
FWIW, long after writing this answer, I expanded upon the concepts herein and wrote a longer blog post about DI-Friendly Libraries, and a companion post about DI-Friendly Frameworks.
The term "dependency injection" doesn't specifically have anything to do with an IoC container at all, even though you tend to see them mentioned together. It simply means that instead of writing your code like this:
public class Service
{
    public Service()
    {
    }

    public void DoSomething()
    {
        SqlConnection connection = new SqlConnection("some connection string");
        WindowsIdentity identity = WindowsIdentity.GetCurrent();
        // Do something with connection and identity variables
    }
}
You write it like this:
public class Service
{
    public Service(IDbConnection connection, IIdentity identity)
    {
        this.Connection = connection;
        this.Identity = identity;
    }

    public void DoSomething()
    {
        // Do something with Connection and Identity properties
    }

    protected IDbConnection Connection { get; private set; }
    protected IIdentity Identity { get; private set; }
}
That is, you do two things when you write your code:
Rely on interfaces instead of classes whenever you think that the implementation might need to be changed;
Instead of creating instances of these interfaces inside a class, pass them as constructor arguments (alternatively, they could be assigned to public properties; the former is constructor injection, the latter is property injection).
None of this presupposes the existence of any DI library, and it doesn't really make the code any more difficult to write without one.
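A side-by-side sketch of the two flavors mentioned above (hypothetical names): constructor injection for a mandatory dependency, property injection for an optional one with a safe default.

```csharp
using System;
using System.Collections.Generic;

public interface ILogger
{
    void Log(string message);
}

// Constructor injection: the dependency is required, and the guard
// clause makes that explicit.
public class ImportJob
{
    private readonly ILogger _logger;

    public ImportJob(ILogger logger)
    {
        if (logger == null) throw new ArgumentNullException(nameof(logger));
        _logger = logger;
    }

    public void Run() => _logger.Log("import started");
}

// Property injection: the dependency is optional and has a safe default.
public class ExportJob
{
    public ILogger Logger { get; set; } = new NullLogger();

    public void Run() => Logger.Log("export started");
}

public class NullLogger : ILogger
{
    public void Log(string message) { } // deliberately does nothing
}

public class ListLogger : ILogger
{
    public readonly List<string> Messages = new List<string>();
    public void Log(string message) => Messages.Add(message);
}
```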
If you're looking for an example of this, look no further than the .NET Framework itself:
List<T> implements IList<T>. If you design your class to use IList<T> (or IEnumerable<T>), you can take advantage of concepts like lazy-loading, as Linq to SQL, Linq to Entities, and NHibernate all do behind the scenes, usually through property injection. Some framework classes actually accept an IList<T> as a constructor argument, such as BindingList<T>, which is used for several data binding features.
Linq to SQL and EF are built entirely around the IDbConnection and related interfaces, which can be passed in via the public constructors. You don't need to use them, though; the default constructors work just fine with a connection string sitting in a configuration file somewhere.
If you ever work on WinForms components, you deal with "services", like INameCreationService or IExtenderProviderService. You don't even really know what the concrete classes are. .NET actually has its own IoC container, IContainer, which gets used for this, and the Component class has a GetService method which is the actual service locator. Of course, nothing prevents you from using any or all of these interfaces without the IContainer or that particular locator. The services themselves are only loosely coupled with the container.
Contracts in WCF are built entirely around interfaces. The actual concrete service class is usually referenced by name in a configuration file, which is essentially DI. Many people don't realize this but it is entirely possible to swap out this configuration system with another IoC container. Perhaps more interestingly, the service behaviors are all instances of IServiceBehavior which can be added later. Again, you could easily wire this into an IoC container and have it pick the relevant behaviors, but the feature is completely usable without one.
And so on and so forth. You'll find DI all over the place in .NET, it's just that normally it's done so seamlessly that you don't even think of it as DI.
If you want to design your DI-enabled library for maximum usability then the best suggestion is probably to supply your own default IoC implementation using a lightweight container. IContainer is a great choice for this because it's a part of the .NET Framework itself.
EDIT 2015: time has passed, I realize now that this whole thing was a huge mistake. IoC containers are terrible and DI is a very poor way to deal with side effects. Effectively, all of the answers here (and the question itself) are to be avoided. Simply be aware of side effects, separate them from pure code, and everything else either falls into place or is irrelevant and unnecessary complexity.
Original answer follows:
I had to face this same decision while developing SolrNet. I started with the goal of being DI-friendly and container-agnostic, but as I added more and more internal components, the internal factories quickly became unmanageable and the resulting library was inflexible.
I ended up writing my own very simple embedded IoC container while also providing a Windsor facility and a Ninject module. Integrating the library with other containers is just a matter of properly wiring the components, so I could easily integrate it with Autofac, Unity, StructureMap, whatever.
The downside of this is that I lost the ability to just new up the service. I also took a dependency on CommonServiceLocator which I could have avoided (I might refactor it out in the future) to make the embedded container easier to implement.
More details in this blog post.
MassTransit seems to rely on something similar. It has an IObjectBuilder interface which is really CommonServiceLocator's IServiceLocator with a couple more methods, then it implements this for each container, i.e. NinjectObjectBuilder and a regular module/facility, i.e. MassTransitModule. Then it relies on IObjectBuilder to instantiate what it needs. This is a valid approach of course, but personally I don't like it very much since it's actually passing around the container too much, using it as a service locator.
MonoRail implements its own container as well, which implements good old IServiceProvider. This container is used throughout this framework through an interface that exposes well-known services. To get the concrete container, it has a built-in service provider locator. The Windsor facility points this service provider locator to Windsor, making it the selected service provider.
Bottom line: there is no perfect solution. As with any design decision, this issue demands a balance between flexibility, maintainability and convenience.
What I would do is design my library in a DI-container-agnostic way, to limit the dependency on the container as much as possible. This makes it possible to swap one DI container for another if need be.
Then expose the layer above the DI logic to the users of the library, so that they can use whatever framework you chose through your interface. This way they can still use the DI functionality you exposed, and they are free to use any other framework for their own purposes.
Allowing the users of the library to plug in their own DI framework seems a bit wrong to me, as it dramatically increases the amount of maintenance. It also becomes more of a plugin environment than straight DI.

3 Tier Architecture with NHibernate, Ninject and Windows Forms

So I'm in the middle of refactoring a small-to-medium-sized Windows Forms application backed by a SQLite database accessed through NHibernate. The current solution contains only an App project and a Lib project, so it is not very well structured and is tightly coupled in many places.
I started off with a structure like in this answer but ran into some problems down the road.
DB initialization:
Since the code building the NHibernate SessionFactory is in the DAL and I need to inject an ISession into my repositories, I need to reference the DAL and NHibernate in my Forms project directly to be able to set up the DI with Ninject (which should be done in the App Project / Presentation Layer right?)
Isn't that one of the things I try to avoid with such an architecture?
In an ideal world, which projects should reference each other?
DI in general:
I have a decently hard time figuring out how to do DI properly. I read about using a composition root to only have one place where the Ninject container is directly used but that doesn't really play well with the current way NHibernate Sessions are used.
We have a MainForm, which is obviously the application's entry point and keeps one Session during its whole lifetime. In addition, the user can open multiple SubForms (mostly, but not exclusively, for editing single entities), which currently each have a separate Session with a shorter lifetime. This is accomplished with a static helper exposing the SessionFactory and opening new Sessions as required.
Is there another way of using DI with Windows Forms besides the composition root pattern?
How can I make use of Ninject's capabilities for scoped injection to manage my NHibernate Sessions on a per-form basis (if possible at all)?
Terminology:
I got a little confused as to what is a Repository versus a Service. One comment on the posted answer states "it is ok for the repository to contain business-logic, you can just call it a service in this case". It felt a little useless for our repositories to contain only basic CRUD operations when we often wanted to push filtering etc. into the database. So we went ahead and extended the repositories with methods like GetByName or the more complex GetAssignmentCandidates. That felt appropriate, since the implementations are in the Business Layer, but they are still called repositories. We also went with Controllers for classes interacting directly with UI elements, but I think that name is more common in the web world.
Should our Repositories actually be called Services?
Sorry for the wall of text. Any answers would be greatly appreciated!
Regarding 1:
Yes and no. Yes, you would prefer the UI layer not to depend on specifics from x layers down. But it doesn't: the composition root just resides in the same assembly; logically, it's not the same layer.
Regarding 2:
Limit the usage of the container. Factories (for Sessions, ...) are sometimes necessary. Using statics should be avoided. Some frameworks, however, prevent you from using the ideal design; in that case, try to approximate it as much as possible.
If you can currently do new FooForm(), then you can replace this with DI or a DI factory (e.g. ninject.extensions.factory). If you have absolutely no control over how a type is instantiated, then you'll need a static to access the kernel like a service locator and "locate" the direct dependencies (while indirect dependencies are injected into the direct dependencies by the DI container).
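For the per-form Session question, a sketch of what that factory approach could look like (this assumes the Ninject and Ninject.Extensions.Factory packages; the form and factory names are hypothetical): instead of new-ing the SubForm, the MainForm receives a factory interface, and each Create call resolves a fresh form graph.

```csharp
// Declared in your code; Ninject.Extensions.Factory generates the implementation.
public interface ISubFormFactory
{
    SubForm CreateSubForm();
}

// In the composition root (sketch, not a verified registration):
// kernel.Bind<ISubFormFactory>().ToFactory();
// kernel.Bind<ISession>().ToMethod(ctx => sessionFactory.OpenSession());
```

The MainForm then takes an ISubFormFactory in its constructor and calls CreateSubForm() instead of new SubForm(), so each SubForm can receive its own shorter-lived Session.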
Regarding 3: I think this is somewhat controversial and probably often misunderstood. I don't think it's really that important what you call your classes (of course it matters, but consistency across your code base is more important than deciding whether to name them all Repository or Service); what's important is how you design their responsibilities and relationships.
As such, I myself prefer to extract filters and the like into Query-suffixed classes, each providing exactly one method. But others have other preferences... I think there have been enough blog posts on this topic that there's no use rehashing it here.
The best practice for a situation like yours is the MVP design pattern. Here is the architecture I can offer you:
MyApp.Infrastructure // Base Layer - No reference
MyApp.Models // Domain Layer - Reference to Infrastructure
MyApp.Presenter // Acts like controllers in MVC - Reference to Service, Models,
MyApp.Repository.NH // DAL layer - Reference to Models, Infrastructure
MyApp.Services // BLL Layer - Reference to Repository, Models
MyApp.Services.Cache // Cached BLL Layer(Extremely recommended) - Reference to Services, Models
MyApp.UI.Web.WebForms // UI Layer - Reference to all of layers
I will do my best to explain with the example of a basic implementation of a 'Category' model.
-Infrastructure-
  EntityBase.cs
  BusinessRule.cs
  IEntity.cs
  IRepository.cs
-Models-
  Categories (folder)
    Category.cs            // Implements IEntity and derives from EntityBase
    ICategoryRepository.cs // Implements IRepository
-Presenter-
  Interfaces
    IHomeView.cs // Put every property and method you need here.
    ICategoryPresenter.cs
  Implementations
    CategoryPresenter.cs // Implements ICategoryPresenter
      CategoryPresenter(IHomeView view, ICategoryService categoryService) {
      }
-Repository-
  Repositories (folder)
    GenericRepository.cs  // Implements IRepository
    CategoryRepository.cs // Implements ICategoryRepository and derives from GenericRepository
-Services-
  Interfaces
    ICategoryService.cs
      AddCategory(Category model);
  Implementations
    CategoryService.cs // Implements ICategoryService
      CategoryService(ICategoryRepository categoryRepository) { }
      AddCategory(Category model) {
          // Do stuff through the ICategoryRepository implementation.
      }
-Services.Cache-
  // It all depends on your choice: Redis, Web cache, etc.
-UI.Web.WebForms-
  Views - Home (folder) // Structure the views like in MVC.
    Index.aspx // Implements IHomeView
      Page_Init() {
          // Get an instance of the presenter.
          var categoryPresenter = new CategoryPresenter(this, new CategoryService(new CategoryRepository()));
      }
I'm not sure if I got your question right, but maybe this gives you an idea :)

Is it bad practice to have a class helper to convert DAL objects to Core objects

I'm struggling to get a good architecture for my current project. It's my first time designing a serious n-tier app while trying to use the best practices of software engineering (DI, unit tests, etc.). My project uses the Onion architecture.
I have 4 layers
The Core Layer: it holds my business objects. Here I have classes representing my business entities, with their methods. Some of these objects hold a reference to a Service interface.
The DAL (Data Access) Layer: it defines POCO objects and implements the Repository interfaces defined in the Core Layer. In this layer I thought it was a good idea to design a big utility class whose role is to convert the POCO objects from the DAL into Business Objects from the Core.
The Service Layer: it implements the Service interfaces defined in the Core. The role of this layer is to provide access to the Repositories defined in the DAL. I initially believed this layer was useless, so I used the Repository interfaces defined in my Core Layer directly. However, after some weeks spent writing very long instantiation code (constructors taking 5-6 IRepository parameters), I got the point of the Service Layer.
The Presentation Layer: nothing special to say here, except that I configure dependency injection in this layer (I'm using Ninject).
I've changed my architecture and rewritten my code at least 3 times, because many times I saw that something was wrong with it (things like long constructors with long parameter lists). Fortunately, bit by bit, I'm getting the point of the various coding patterns found in the literature.
However, I've just come across a cyclic dependency in my DI configuration, and I'm seriously wondering whether my DAL2Core helper was a good idea...
Thanks to this helper I can write code such as :
DAL.Point p = DAL2Core.PointConverter(point); // point is a Core Object
context.Points.Add(p);
context.SaveChanges();
This reduces code redundancy a little. Each of my repositories defined in the DAL then has its own DAL2Core member:
private IDAL2CoreHelper DAL2Core;
And I inject it from the Repository constructor.
The DAL2Core class itself is a bit messy...
First of all, it has a property for every Repository and every Processor (Service Layer). The Processors are present because my Core objects need a Processor dependency injected. I've put some of the repositories and Processors referenced in my DAL2Core utility class below, just to illustrate:
[Inject]
private Core.IUserRepository UserRepository{ get; set; }
[Inject]
private Core.IPointsRepository PointsRepository { get; set; }
...
[Inject]
private Core.IUserProcessor UserProcessor{ get; set; }
[Inject]
private Core.IPointsProcessor CoursProcessor { get; set; }
(Since the DAL2Core helper is required by the repositories, constructor injection would cause cyclic dependencies.)
And then this class has a lot of simple methods such as:
public Core.User UserConverter(DAL.User u)
{
    Core.User user = new Core.User(UserProcessor);
    user.FirstName = u.FirstName;
    user.Name = u.Name;
    user.ID = u.ID;
    user.Phone = u.Phone;
    user.Email = u.Email;
    user.Birthday = u.Birthday;
    user.Photo = u.Photo;
    return user;
}
This class is some 600 lines long. Thinking about it, I realize that I don't save much code, because most of the time the DAL2Core conversion code is only called from one place, so perhaps it would be better to leave this code in the repositories? And, the biggest problem: since I decided to decouple this helper from my repository classes, cyclic dependency exceptions are thrown by Ninject.
What do you think about the design I tried: is it a good / common practice? And how can I perform this DAL2Core conversion smartly and efficiently, without code smells? I really look forward to solving this architecture issue; I've spent the last three weeks dealing with plumbing and architecture issues without really advancing the project, and I'm running very late. However, I really want to produce high-quality code. I just want to avoid architectural solutions that look like overkill to me, with lots of Factories and so on. But I admit that sometimes this feeling just comes from a lack of understanding on my part (as with the Service Layer).
Thanks in advance for your help !
What you are looking for is AutoMapper, ValueInjecter, or something similar.
Essentially, it is good practice to separate the data models between layers, to reduce coupling and increase testability. If you come up with a generic mapper, you will reduce code redundancy.
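If you don't want the extra library dependency, a tiny reflection-based mapper gives the same shape (a sketch: the two DTO classes below are hypothetical stand-ins for the DAL.User / Core.User pair from the question, without the Processor constructor argument):

```csharp
using System.Reflection;

// A hand-rolled generic mapper: copies properties that match by
// name and type. AutoMapper does this (and much more) for you.
public static class SimpleMapper
{
    public static TDest Map<TSource, TDest>(TSource source) where TDest : new()
    {
        var dest = new TDest();
        foreach (PropertyInfo sourceProp in typeof(TSource).GetProperties())
        {
            PropertyInfo destProp = typeof(TDest).GetProperty(sourceProp.Name);
            if (destProp != null && destProp.CanWrite
                && destProp.PropertyType == sourceProp.PropertyType)
            {
                destProp.SetValue(dest, sourceProp.GetValue(source));
            }
        }
        return dest;
    }
}

// Stand-ins for the DAL.User / Core.User pair.
public class DalUser  { public int Id { get; set; } public string Name { get; set; } }
public class CoreUser { public int Id { get; set; } public string Name { get; set; } }
```

With this in place, each repository can call SimpleMapper.Map instead of holding a 600-line converter, which removes the circular dependency on a shared helper.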
Hope this helps.

Dependency injection in two layers in ASP.NET MVC

I'm trying to get more and more familiar with DI and IoC. At this moment I understand the concept and implementation of DI with the controllers in an MVC application. But assume I have a layered application.
The controllers call business logic classes, and the business logic classes call the repository classes.
How do I set up the second layer, the business-logic-to-repository part, with DI? This would let me test at different levels in my application. What I don't want to do is pass the repository dependency in from the controllers.
Hope someone can give some hints on this.
Patrick
Here is a minimalistic example of how to implement this using Ninject. This is not the absolute truth about DI/IoC, just a brief example of how it could be done.
Configuration
// repositories
base.Bind<IMyRepository>().To<MyRepository>();
// services
base.Bind<IMyServices>().To<MyServices>();
Whenever IMyRepository is used, the concrete implementation MyRepository will be injected.
Controller
public class MyController : Controller
{
    private readonly IMyServices _myServices;

    public MyController(IMyServices myServices)
    {
        _myServices = myServices;
    }

    // your actions
}
Again, inside MyServices there is a similar pattern (constructor injection).
Service
public class MyServices : IMyServices
{
    private readonly IMyRepository _myRepository;

    public MyServices(IMyRepository myRepository)
    {
        _myRepository = myRepository;
    }

    public void Example()
    {
        _myRepository.PleaseDoSomething();
    }
}
Also remember that there are lots of other things in the ASP.NET MVC where IoC can be used:
localization
authorization
model metadata provider (for example localized error messages)
custom model binders
controller factory
etc.
Update
In the example code there was a bug. Dependency injection was not done in the service. Now it should be correct.
Update 2
I'd highly recommend using NuGet packages to bootstrap your app. It saves time, may apply some "best practices", other projects will get a similar base, etc. Here are some instructions for different IoCs + MVC 3:
Ninject
Autofac
StructureMap
Put simply, each layer in the hierarchy asks for its dependencies on the next layer down via a constructor argument, which is an interface.
Your controllers ask for their dependencies on the business logic through their constructors, by depending on an interface to the business logic rather than asking for a particular implementation. You create an interface for your business logic class and inject an implementation of that interface into your controller; this can be done manually, or a DI container can do it for you. Your controller knows nothing about the repository classes (or any other dependencies of any implementation of the business logic), only about the interface of the business logic class on which it depends.
You then rinse and repeat on the business logic concrete classes.
You create interfaces for your repository classes, the business logic classes that require them ask for them through their constructors, and you again inject the dependency either manually or via a DI container.
Your application should have a composition root where all of this setup takes place. There you either manually wire up your dependencies (create the lowest objects first and then pass them into the constructors of the higher-up objects as you create those), or you configure the container with the details of the implementations of the various interfaces you have, so that it can use that information to correctly construct objects that have dependencies.
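A manual composition root (no container at all) can be sketched as follows; every type name here is a hypothetical stand-in for illustration, not something from the original code:

```csharp
using System;

// Hypothetical abstractions and implementations for illustration only.
public interface IMyRepository { string PleaseDoSomething(); }
public interface IMyServices { string Example(); }

public class MyRepository : IMyRepository
{
    public string PleaseDoSomething() => "data from repository";
}

public class MyServices : IMyServices
{
    private readonly IMyRepository _myRepository;
    public MyServices(IMyRepository myRepository) => _myRepository = myRepository;
    public string Example() => _myRepository.PleaseDoSomething();
}

public static class CompositionRoot
{
    // The lowest-level object is created first and passed upward,
    // exactly as described above.
    public static IMyServices Build()
    {
        IMyRepository repository = new MyRepository();
        return new MyServices(repository);
    }
}
```

A DI container automates exactly this construction order; the hand-written version just makes the wiring explicit in one place.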
A DI container just creates and resolves dependencies for the types it's configured with. It is unrelated to how you design your application layers. Why don't you want to pass the repository where it's needed? Those objects depend on an abstraction, not on an implementation.
You configure the DI container to serve a certain instance of Repository anywhere where it's required. The controller receives the repository instance, which then can be passed on to the business layer to be used.
Decoupling means an object doesn't depend on implementation details; that's it. Testing is possible because the dependencies are expressed as interfaces and you can mock them.
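To illustrate the testing point: because the service depends only on an interface, a hand-rolled fake can stand in for the real repository, with no mocking framework needed. A sketch using hypothetical types:

```csharp
using System;

// Hypothetical abstraction the service depends on.
public interface IMyRepository { void PleaseDoSomething(); }

public class MyServices
{
    private readonly IMyRepository _myRepository;
    public MyServices(IMyRepository myRepository) => _myRepository = myRepository;
    public void Example() => _myRepository.PleaseDoSomething();
}

// Test double: records the call instead of hitting a real database.
public class FakeRepository : IMyRepository
{
    public bool WasCalled { get; private set; }
    public void PleaseDoSomething() => WasCalled = true;
}
```

In a unit test you inject the fake and assert on it: `var fake = new FakeRepository(); new MyServices(fake).Example();` leaves `fake.WasCalled` set to true.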

Options for IoC Auto Wiring in Domain Driven Design

In my latest ASP.NET MVC 2 application I have been trying to put into practice the concepts of Domain Driven Design (DDD), the Single Responsibility Principle (SRP), Inversion of Control (IoC), and Test Driven Development (TDD). As an architecture example I have been following Jeffrey Palermo's "Onion Architecture", which is expanded on greatly in ASP.NET MVC 2 in Action.
While I have begun to successfully apply most (some?) of these principles, I am missing a key piece of the puzzle. I am having trouble determining the best mechanism for auto-wiring a service layer to my domain entities.
As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface. From my reading, best practice to reveal this dependency would be to use constructor injection. In my UI layer I perform a similar injection for repository interface implementations using the StructureMapControllerFactory from ASP.NET MVC Contrib.
Where I am confused is what is the best mechanism for auto-wiring the injection of the necessary services into domain entities? Should the domain entities even be injected this way? How would I go about using IEmailService if I don't inject it into the domain entities?
Additional Stack Overflow questions which are great DDD, SRP, IoC, TDD references:
IoC Containers and Domain Driven Design
How to avoid having very large objects with Domain Driven Design
Unless I'm misunderstanding your intent (and instead choosing to focus on semantics), I'm going to dissect this statement: "As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface."
I would have to argue that this, in and of itself, is an extreme bastardization of DDD. Why should a domain entity ever need to depend on an email service? IMO it shouldn't; there is no justification for it.
However, there are business operations in conjunction with a domain entity that require sending emails. Your IEmailService dependency should be contained in that class, not in the domain entity. This class will most likely fall under one of a few nearly synonymous names, depending on which architecture/layer you're in: Model, Service, or Controller.
At this point your StructureMapControllerFactory would then correctly auto-wire everything that uses the IEmailService.
While I might be slightly overgeneralizing, it's pretty much standard practice to have domain entities be POCOs, or nearly POCOs (to avoid violating the SRP); however, SRP is frequently violated in domain entities for the sake of serialization and validation. Choosing to violate SRP for those kinds of cross-cutting concerns is more a matter of personal belief than a "right" or "wrong" decision.
As a final follow-up: if your question is about code that truly runs as a standalone service, whether web- or OS-based, and how to wire up the dependencies from there, the normal solution would be to take over the service at a base level and apply IoC to it in the same fashion as the StructureMapControllerFactory does in MVC. How to achieve this depends entirely on the infrastructure you're working with.
Response:
Let's say you have an IOrderConfirmService with a method EmailOrderConfirmation(Order order). You would end up with something like this:
public class MyOrderConfirmService : IOrderConfirmService
{
    private readonly IEmailService _mailer;

    public MyOrderConfirmService(IEmailService mailer)
    {
        _mailer = mailer;
    }

    public void EmailOrderConfirmation(Order order)
    {
        var msg = ConvertOrderToMessage(order); // good extension method candidate
        _mailer.Send(msg);
    }
}
You would then have an OrderController class that looks something like:
public class OrderController : Controller
{
    private readonly IOrderConfirmService _service;

    public OrderController(IOrderConfirmService service)
    {
        _service = service;
    }

    public ActionResult Confirm()
    {
        _service.EmailOrderConfirmation(someOrder); // someOrder obtained elsewhere
        return View();
    }
}
StructureMap will inherently build up your entire architecture chain when you use constructor injection correctly. This is the fundamental difference between tight coupling and inversion of control. When the StructureMapFactory goes to build up your controller, the first thing it will see is that it needs IOrderConfirmService. At this point it will check whether it can plug in IOrderConfirmService directly, which it can't, because that needs IEmailService. So it will check whether it can plug in IEmailService, and for argument's sake let's say it can. At this point it will build EmailService, then build MyOrderConfirmService and plug in EmailService, and finally build OrderController and plug in MyOrderConfirmService. This is where the term inversion of control comes from: StructureMap builds the EmailService first in the entire chain of dependencies, ending last with the Controller. In a tightly coupled setup this would be the opposite: the Controller would be built first and would have to build the business service, which would then build the email service. Tightly coupled design is very brittle compared to IoC.
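The build-up order described above can be imitated by hand to make it concrete. This is a sketch using simplified stand-ins for the types in this answer (the controller here is a plain class, not an MVC Controller, and Order is reduced to a string); the deepest dependency is constructed first and the controller last:

```csharp
using System;
using System.Collections.Generic;

public interface IEmailService { void Send(string msg); }
public interface IOrderConfirmService { void EmailOrderConfirmation(string order); }

public class EmailService : IEmailService
{
    // Records sent messages instead of actually emailing.
    public List<string> Sent { get; } = new List<string>();
    public void Send(string msg) => Sent.Add(msg);
}

public class MyOrderConfirmService : IOrderConfirmService
{
    private readonly IEmailService _mailer;
    public MyOrderConfirmService(IEmailService mailer) => _mailer = mailer;
    public void EmailOrderConfirmation(string order) => _mailer.Send("Confirmation: " + order);
}

// Simplified controller stand-in.
public class OrderController
{
    private readonly IOrderConfirmService _service;
    public OrderController(IOrderConfirmService service) => _service = service;
    public void Confirm(string order) => _service.EmailOrderConfirmation(order);
}
```

Manual wiring mirrors what the container does: `var mailer = new EmailService();` (built first), then `var service = new MyOrderConfirmService(mailer);`, and finally `var controller = new OrderController(service);` (built last).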
