In my latest ASP.NET MVC 2 application I have been trying to put into practice the concepts of Domain-Driven Design (DDD), the Single Responsibility Principle (SRP), Inversion of Control (IoC), and Test-Driven Development (TDD). As an architecture example I have been following Jeffrey Palermo's "Onion Architecture", which is expanded on greatly in ASP.NET MVC 2 in Action.
While I have begun to successfully apply most (some?) of these principles, I am missing a key piece of the puzzle: I am having trouble determining the best mechanism for auto-wiring a service layer to my domain entities.
As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface. From my reading, best practice to reveal this dependency would be to use constructor injection. In my UI layer I perform a similar injection for repository interface implementations using the StructureMapControllerFactory from ASP.NET MVC Contrib.
Where I am confused is: what is the best mechanism for auto-wiring the injection of the necessary services into domain entities? Should domain entities even be injected this way? If not, how would I go about using IEmailService without injecting it into the domain entities?
Additional Stack Overflow questions which are great DDD, SRP, IoC, TDD references:
IoC Containers and Domain Driven Design
How to avoid having very large objects with Domain Driven Design
Unless I'm misunderstanding your intent (in which case I'm just focusing on semantics), I'm going to dissect this statement: "As an example: each domain entity that needs the ability to send an email should depend on an IEmailService interface."
I would argue that this in itself is an extreme bastardization of DDD. Why should a domain entity ever need to depend on an email service? In my opinion it shouldn't; there is no justification for it.
However, there are business operations in conjunction with a domain entity that require sending emails. Your IEmailService dependency belongs in that class, not in the domain entity. Depending on which architecture/layer you're in, that class will most likely fall under one of a few nearly synonymous names: Model, Service, or Controller.
At this point your StructureMapControllerFactory would then correctly auto wire everything that would use the IEmailService.
While I might be slightly overgeneralizing, it's pretty much standard practice to have domain entities be POCOs, or nearly POCOs, to avoid violating the SRP. That said, the SRP is frequently violated in domain entities for the sake of serialization and validation; choosing to violate it for those kinds of cross-cutting concerns is more a matter of personal conviction than a "right" or "wrong" decision.
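To make the POCO point concrete, here is a minimal sketch of what such an entity might look like: state and domain behavior only, no reference to IEmailService or any other infrastructure. (Order and OrderLine are hypothetical names, not taken from the question.)

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A POCO domain entity: it enforces its own invariants and computes
// domain values, but holds no service or infrastructure dependencies.
public class Order
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public int Id { get; private set; }
    public IReadOnlyCollection<OrderLine> Lines { get { return _lines; } }

    public void AddLine(OrderLine line)
    {
        if (line == null) throw new ArgumentNullException("line");
        _lines.Add(line);
    }

    public decimal Total()
    {
        return _lines.Sum(l => l.Price * l.Quantity);
    }
}

public class OrderLine
{
    public decimal Price { get; set; }
    public int Quantity { get; set; }
}
```

Sending the order-confirmation email then belongs to a service that takes an Order as an argument, not to the Order itself.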
As a final follow-up: if your question is about code that truly runs as a standalone service, whether web- or OS-based, and how to wire up its dependencies, the normal solution is to take over the service at a base level and apply IoC to it in much the same fashion as the StructureMapControllerFactory does in MVC. How to achieve this depends entirely on the infrastructure you're working with.
Response:
Let's say you have an IOrderConfirmService which has a method EmailOrderConfirmation(Order order). You would end up with something like this:
public class MyOrderConfirmService : IOrderConfirmService
{
    private readonly IEmailService _mailer;

    public MyOrderConfirmService(IEmailService mailer)
    {
        _mailer = mailer;
    }

    public void EmailOrderConfirmation(Order order)
    {
        var msg = ConvertOrderToMessage(order); // good extension method candidate
        _mailer.Send(msg);
    }
}
You would then have an OrderController class that looks something like this:
public class OrderController : Controller
{
    private readonly IOrderConfirmService _service;

    public OrderController(IOrderConfirmService service)
    {
        _service = service;
    }

    public ActionResult Confirm()
    {
        // fetch the order (e.g. from a repository), then:
        _service.EmailOrderConfirmation(order);
        return View();
    }
}
StructureMap will inherently build up your entire architecture chain when you use constructor injection correctly. This is the fundamental difference between tight coupling and inversion of control. When the StructureMapControllerFactory goes to build up your controller, the first thing it will see is that it needs an IOrderConfirmService. It will then check whether it can plug in IOrderConfirmService directly, which it can't, because MyOrderConfirmService needs an IEmailService. So it will check whether it can plug in IEmailService, and for argument's sake let's say it can. At this point it will build EmailService, then build MyOrderConfirmService and plug in EmailService, and finally build OrderController and plug in MyOrderConfirmService. This is where the term inversion of control comes from: StructureMap builds the EmailService first in the entire chain of dependencies and ends with the Controller. In a tightly coupled setup it is the opposite: the Controller is built first and has to build the business service, which in turn builds the email service. Tightly coupled design is very brittle compared to IoC.
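The wiring described above can be sketched with StructureMap's container API (StructureMap 3-style syntax; the concrete EmailService class is assumed here for illustration):

```csharp
using StructureMap;

// Composition root: tell StructureMap which implementations plug into
// which interfaces. EmailService is a hypothetical concrete class.
var container = new Container(x =>
{
    x.For<IEmailService>().Use<EmailService>();
    x.For<IOrderConfirmService>().Use<MyOrderConfirmService>();
});

// StructureMap walks the constructor chain: it builds EmailService first,
// feeds it into MyOrderConfirmService, and feeds that into OrderController.
var controller = container.GetInstance<OrderController>();
```

In MVC you would not call GetInstance yourself; the StructureMapControllerFactory does it per request, but the resolution order is the same.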
Related
Do you think it might be reasonable to replace my service layer or service classes with MediatR? For example, my service classes look like this:
public interface IEntityService<TEntityDto> where TEntityDto : class, IDto
{
    Task<TEntityDto> CreateAsync(TEntityDto entityDto);
    Task<bool> DeleteAsync(int id);
    Task<IEnumerable<TEntityDto>> GetAllAsync(SieveModel sieveModel);
    Task<TEntityDto> GetByIdAsync(int id);
    Task<TEntityDto> UpdateAsync(int id, TEntityDto entityDto);
}
I want to achieve some sort of modular design so other dynamically loaded modules or plugins can write their own notification or command handlers for my main core application.
Currently, my application is not event-driven at all and there's no easy way for my dynamically loaded plugins to communicate.
I can either incorporate MediatR in my controllers, removing the service layer completely, or use it alongside my service layer, just publishing notifications so my plugins can handle them.
Currently, my logic is mostly CRUD, but there's a lot of custom logic going on before creating, updating, and deleting.
A possible replacement for my service would look like:
public class CommandHandler :
    IRequestHandler<CreateCommand, Response>,
    IRequestHandler<UpdateCommand, Response>,
    IRequestHandler<DeleteCommand, bool>
{
    private readonly DbContext _dbContext;

    public CommandHandler(DbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public Task<Response> Handle(CreateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<Response> Handle(UpdateCommand request, CancellationToken cancellationToken)
    {
        //...
    }

    public Task<bool> Handle(DeleteCommand request, CancellationToken cancellationToken)
    {
        //...
    }
}
Would that be a wrong thing to do?
Basically, I'm struggling with what to choose for my logic flow:
Controller -> Service -> MediatR -> Notification handlers -> Repository
Controller -> MediatR -> Command handlers -> Repository
It seems like with MediatR I can't have a single model for Create, Update, and Delete, so to re-use one I'd need to derive requests like:
public class CreateRequest : MyDto, IRequest<MyDto> { }
public class UpdateRequest : MyDto, IRequest<MyDto> { }
or embed it in my command like:
public class CreateRequest : IRequest<MyDto>
{
    public MyDto MyDto { get; set; }
}
One advantage of MediatR is the ability to plug logic in and plug it out easily which seems like a nice fit for modular architecture but still, I'm a bit confused how to shape my architecture with it.
Update: I'm preserving the answer, but my position on this has changed somewhat as indicated in this blog post.
If you have a class, let's say an API controller, and it depends on
IRequestHandler<CreateCommand, Response>
What is the benefit of changing your class so that it depends on IMediator,
and instead of calling
return requestHandler.HandleRequest(request);
it calls
return mediator.Send(request);
The result is that instead of injecting the dependency we need, we inject a service locator which in turn resolves the dependency we need.
Quoting Mark Seemann's article,
In short, the problem with Service Locator is that it hides a class' dependencies, causing run-time errors instead of compile-time errors, as well as making the code more difficult to maintain because it becomes unclear when you would be introducing a breaking change.
It's not exactly the same as
var commandHandler = serviceLocator.Resolve<IRequestHandler<CreateCommand, Response>>();
return commandHandler.Handle(request);
because the mediator is limited to resolving command and query handlers, but it's close. It's still a single interface that provides access to lots of other ones.
It makes code harder to navigate
After we introduce IMediator, our class still indirectly depends on IRequestHandler<CreateCommand, Response>. The difference is that now we can't tell by looking at it. We can't navigate from the interface to its implementations. We might reason that we can still follow the dependencies if we know what to look for - that is, if we know the conventions of command handler interface names. But that's not nearly as helpful as a class actually declaring what it depends on.
Sure, we get the benefit of having interfaces wired up to concrete implementations without writing the code, but the savings are trivial, and we'll likely lose whatever time we save because of the added (if minor) difficulty of navigating the code. And there are libraries which will register those dependencies for us anyway while still allowing us to inject the abstractions we actually depend on.
It's a weird, skewed way of depending on abstractions
It's been suggested that using a mediator assists with implementing the decorator pattern. But again, we already gain that ability by depending on an abstraction. We can use one implementation of an interface or another that adds a decorator. The point of depending on abstractions is that we can change such implementation details without changing the abstraction.
To elaborate: The point of depending on ISomethingSpecific is that we can change or replace the implementation without modifying the classes that depend on it. But if we say, "I want to change the implementation of ISomethingSpecific (by adding a decorator), so to accomplish that I'm going to change the classes that depend on ISomethingSpecific, which were working just fine, and make them depend on some generic, all-purpose interface", then something has gone wrong. There are numerous other ways to add decorators without modifying parts of our code that don't need to change.
Yes, using IMediator promotes loose coupling. But we already accomplished that by using well-defined abstractions. Adding layer upon layer of indirection doesn't multiply that benefit. If you've got enough abstraction that it's easy to write unit tests, you've got enough.
Vague dependencies make it easier to violate the Single Responsibility Principle
Suppose you have a class for placing orders, and it depends on ICommandHandler<PlaceOrderCommand>. What happens if someone tries to sneak in something that doesn't belong there, like a command to update user data? They'll have to add a new dependency, ICommandHandler<ChangeUserAddressCommand>. What happens if they want to keep piling more stuff into that class, violating the SRP? They'll have to keep adding more dependencies. That doesn't prevent them from doing it, but at least it shines a light on what's happening.
On the other hand, what if you can add all sorts of random stuff into a class without adding more dependencies? The class depends on an abstraction that can do anything. It can place orders, update addresses, request sales history, whatever, and all without adding a single new dependency. That's the same problem you get if you inject an IoC container into a class where it doesn't belong. It's a single class or interface that can be used to request all sorts of dependencies. It's a service locator.
IMediator doesn't cause SRP violations, and its absence won't prevent them. But explicit, specific dependencies guide us away from such violations.
The Mediator Pattern
Curiously, using MediatR doesn't usually have anything to do with the mediator pattern. The mediator pattern promotes loose coupling by having objects interact with a mediator rather than directly with each other. If we're already depending on an abstraction like an ICommandHandler, then the tight coupling that the mediator pattern prevents doesn't exist in the first place.
The mediator pattern also encapsulates complex operations so that they appear simpler from the outside.
return mediator.Send(request);
is not simpler than
return requestHandler.HandleRequest(request);
The complexity of the two interactions is identical. Nothing is "mediated." Imagine that you're about to swipe your credit card at the grocery store, and then someone offers to simplify your complex interaction by leading you to another register where you do exactly the same thing.
What about CQRS?
A mediator is neutral when it comes to CQRS (unless we have two separate mediators, like ICommandMediator and IQueryMediator.) It seems counterproductive to separate our command handlers from our query handlers and then inject a single interface which in effect brings them back together and exposes all of our commands and queries in one place. At the very least it's hard to say that it helps us to keep them separate.
IMediator is used to invoke command and query handlers, but it has nothing to do with the extent to which they are segregated. If they were segregated before we added a mediator, they still are. If our query handler does something it shouldn't, the mediator will still happily invoke it.
I hope it doesn't sound like a mediator ran over my dog. But it's certainly not a silver bullet that sprinkles CQRS on our code or even necessarily improves our architecture.
We should ask, what are the benefits? What undesirable consequences could it have? Do I need that tool, or can I obtain the benefits I want without those consequences?
What I am asserting is that once we're already depending on abstractions, further steps to "hide" a class's dependencies usually add no value. They make it harder to read and understand, and erode our ability to detect and prevent other code smells.
Partly this was answered here: MediatR when and why I should use it? vs 2017 webapi
The biggest benefit of using MediatR (or MicroBus, or any other mediator implementation) is isolating and/or segregating your logic (one of the reasons it's a popular way to implement CQRS) and providing a good foundation for the decorator pattern (something like ASP.NET Core MVC filters). Since MediatR 3.0 there's built-in support for this (see Behaviors), instead of using IoC decorators.
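As a rough sketch of what such a behavior looks like (the Handle signature below matches MediatR 3.x-8.x; later versions reorder the parameters, and LoggingBehavior is a hypothetical name):

```csharp
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// A pipeline behavior wraps every request handler, so cross-cutting
// concerns (logging, validation, transactions) live in one place.
public class LoggingBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse>
{
    public async Task<TResponse> Handle(
        TRequest request,
        CancellationToken cancellationToken,
        RequestHandlerDelegate<TResponse> next)
    {
        System.Console.WriteLine("Handling " + typeof(TRequest).Name);
        var response = await next(); // invoke the actual handler
        System.Console.WriteLine("Handled " + typeof(TResponse).Name);
        return response;
    }
}
```

Registered with your container, this runs around every command and notification without any handler knowing about it.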
You can use the decorator pattern with services (classes like FooService) too. And you can use CQRS with services too (FooReadService, FooWriteService)
Other than that it's opinion-based, and use what you want to achieve your goal. The end result shouldn't make any difference except for code maintenance.
Additional reading:
Baking Round Shaped Apps with MediatR
(which compares custom mediator implementation with the one MediatR provides and porting process)
Is it good to handle multiple requests in a single handler?
I am sorry if the question is trivial and already answered; I have been looking for an answer, but all I was able to find were images and UMLs of the given situation without explanation. I have started reading the .NET Domain-Driven-Design book and I find myself hitting a brick wall at the start. My problem is that the author suggests creating a base class for entities, and this base class is placed in the Infrastructure layer. This class is inherited by all entity classes in the Domain. That seems very uncommon and counterintuitive, at least to me. So I am trying to follow the book, but I can't understand why this is presented as a good idea; I fear circular dependencies in the future development of the project, and I can't understand why the entity base class should live outside of the Domain. Thank you.
EDIT:
I am afraid of cyclic dependencies because, in the chapter that precedes the implementation phase, the author describes a downward flow of dependencies: starting from the UI -> Application layer -> Domain, while the Infrastructure layer references no one but is referenced by everyone. So I am having trouble understanding why the Domain should reference the Infrastructure. Regarding DDD, I assume the Domain should be independent of the other layers. So I just wanted to ask: is the Domain -> Infrastructure dependency a common thing, or is there a better/cleaner solution?
The book I'm reading is called .NET Domain Driven Design in C# Problem-Design-Solution
EDIT On base of architecture depiction:
This link provides an image that depicts the architecture of the design at hand. The same image is provided in the book that I am following.
https://ajlopez.wordpress.com/2008/09/12/layered-architecture-in-domain-driven-design/. Thank you for your feedback and your answers.
Putting the base class for entities in the infrastructure layer is only a good idea if you have a layered architecture that puts infrastructure at the bottom - which is not what most architectures do nowadays.
So I'd recommend against this. Put the entity base type in the domain layer.
Nobody is going to ask "what domain concept is this?" if you name it appropriately. Make sure the class is abstract and properly documented. This will further clarify the situation.
With DDD, architectures that put the domain in the "center" (e.g. Hexagonal Architecture) are usually a better fit than classical layered architectures with infrastructure at the bottom. The important property of such architectures is that all dependencies point inwards, i.e. towards the domain.
It's called a layer supertype and is a common pattern.
In general, it's convenient to "hide" all infrastructure code that is required for your domain entities (such as database IDs needed by the persistence layer but not by the domain) in a common base class, such that the actual domain classes are not polluted by infrastructure code.
I agree that you should avoid circular dependencies. However, you don't need to actually move the base class to a separate project (to avoid circular dependencies). I guess what the authors mean is that semantically this base class belongs to the infrastructure layer since it contains database IDs.
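A minimal sketch of such a layer supertype, assuming the only infrastructure concerns are a database ID and identity-based equality (Employer is just an example subclass):

```csharp
// The "layer supertype" pattern: persistence plumbing lives in a common
// base class so concrete entities stay focused on domain behavior.
public abstract class EntityBase
{
    // Needed by the persistence layer, not by the domain logic itself.
    public int Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as EntityBase;
        if (other == null || other.GetType() != GetType()) return false;
        // Transient entities (no ID yet) are only equal by reference.
        if (Id == 0 || other.Id == 0) return ReferenceEquals(this, other);
        return Id == other.Id;
    }

    public override int GetHashCode()
    {
        return Id == 0 ? base.GetHashCode() : Id.GetHashCode();
    }
}

public class Employer : EntityBase
{
    public string Name { get; set; }
}
```

Whether this base class lives in the domain project or an infrastructure project is exactly the question at hand; the code is the same either way.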
If you follow principles of onion architecture, then the domain is the middle layer (no dependencies) and infrastructure is the outer layer (has dependencies).
You don't want your domain model to be dependent on infrastructural concerns, so this means your base class definitely belongs to the domain layer. The idea is that any change in infrastructure (or any technical change for that matter) does not impact the domain/business/application layers.
Also, try to program against interfaces, for which the concrete implementation is injected at run time. This also avoids any coupling to infrastructural concerns in the middle layers.
Domain-Driven Design is a layered approach that can be really good when you need to separate business rules (user requirements) from the rest of the project. It puts the Domain at the center of the development.
It is quite usual to see concepts such as Dependency Injection and Inversion of Control when implementing a DDD architecture.
The example below is one of many ways to put the Domain at the center of the application; it's just to give you something to keep in mind.
Domain: Contains interfaces and classes that are part of the business rules. (It does not implement anything; every method is supplied through dependency injection from the infrastructure level.)
Infrastructure: Implements all the interfaces created at the domain level. This layer is responsible for all "technological" behaviors of the application. This layer references the Domain.
Presentation: The user interface. It connects the Domain and Dependency Injection layers. This layer can see Dependency Injection and Domain.
Dependency Injection: Solves the dependency problem. It returns an implementation of the interface passed as a type parameter. This layer can see Infrastructure and Domain.
As example shown below: (I am using C#)
Domain Level:
As you can see, there is no actual implementation of the method - only a dependency call that finds an implementation for the IFight interface.
public class Samurai
{
    public void Attack()
    {
        /* There is no class coupling here. The solver receives an interface
           as a type parameter, returns an object implementing it, and we
           call its Attack method. */
        Dependencies.Solver.Solve<IFight>().Attack(); // Forget about this right now. You will understand as the post goes on.
    }
}
Then we declare an interface, to be implemented at the infrastructure level.
public interface IFight
{
    /* This interface is declared in the domain, although it is
       implemented in the infrastructure layer. */
    void Attack();
}
To make dependency injection work, we need a small hook at the domain level.
public class Dependencies
{
    private static ISolve _solver;

    public static ISolve Solver
    {
        get
        {
            if (_solver == null)
                throw new SolverNotConfiguratedException();
            return _solver;
        }
        set { _solver = value; }
    }
}
And also a Solver:
public interface ISolve
{
    // This interface will be implemented at the DependencyInjection layer.
    T Solve<T>();
}
Infrastructure level:
Implementation of the interface
public class Fight : IFight
{
    public void Attack()
    {
        Console.WriteLine("Attacking...");
    }
}
Dependency Injection level:
Now it is important to set up the injection:
public class Solver : ISolve
{
    private UnityContainer _container = new UnityContainer();

    public Solver()
    {
        /* Using Unity (a dependency injection framework) it is possible to
           register a type. When the method 'Solve' is called, the container
           looks for an implemented class that inherits from the interface
           passed as a type parameter and returns an instantiated object. */
        _container.RegisterType<IFight, Fight>();
    }

    // This is where the magic happens
    public T Solve<T>()
    {
        return _container.Resolve<T>();
    }
}
Explanation of the dependency in the Samurai class:
// We are making a request for an interface implementation to the dependency
// solver. The Domain does not know which class satisfies it, or how.
Dependencies.Solver.Solve<IFight>().Attack();
Presentation Level:
Samurai samurai = new Samurai();
samurai.Attack();
Conclusion:
We can see that the Domain is at the center of the implementation, and every business rule can easily be seen at that level without technical concerns getting in the middle of it.
So I'm in the middle of refactoring a small-to-medium-sized Windows Forms application backed by a SQLite database accessed through NHibernate. The current solution contains only an App project and a Lib project, so it is not very well structured and is tightly coupled in many places.
I started off with a structure like in this answer but ran into some problems down the road.
DB initialization:
Since the code building the NHibernate SessionFactory is in the DAL, and I need to inject an ISession into my repositories, I need to reference the DAL and NHibernate directly in my Forms project to be able to set up DI with Ninject (which should be done in the App project / presentation layer, right?).
Isn't that one of the things I try to avoid with such an architecture?
In an ideal world which projects should reference eachother?
DI in general:
I have a decently hard time figuring out how to do DI properly. I read about using a composition root to only have one place where the Ninject container is directly used but that doesn't really play well with the current way NHibernate Sessions are used.
We have a MainForm which is obviously the application's entry point and keeps one Session during its whole lifetime. In addition the user can open multiple SubForms (mostly, but not exclusively, for editing single entities), which currently each have a separate Session with a shorter lifetime. This is accomplished with a static helper exposing the SessionFactory and opening new Sessions as required.
Is there another way of using DI with Windows Forms besides the composition root pattern?
How can I make use of Ninject's capabilities for scoped injection to manage my NHibernate Sessions on a per-form basis (if that's possible at all)?
Terminology:
I got a little confused as to what is a Repository versus a Service. One comment on the posted answer states "it is ok for the repository to contain business-logic, you can just call it a service in this case". It felt a little useless to have our repositories only contain basic CRUD operations when we often wanted to push filtering etc. into the database. So we went ahead and extended the repositories with methods like GetByName or the more complex GetAssignmentCandidates. That felt appropriate since the implementations are in the business layer, but they are still called repositories. We also went with Controllers for classes interacting directly with UI elements, but I think that name is more common in the web world.
Should our Repositories actually be called Services?
Sorry for the wall of text. Any answers would be greatly appreciated!
Regarding 1:
Yes and no. Yes, you would prefer the UI layer not to depend on specifics from x layers down. But it doesn't: the composition root just resides in the same assembly; logically, it's not the same layer.
Regarding 2:
Limit the usage of the container. Factories (for Sessions, ...) are sometimes necessary. Using statics should be avoided. Some frameworks, however, prevent you from using the ideal design; in that case, try to approximate it as closely as possible.
If you can currently do new FooForm(), then you can replace this with DI or a DI factory (e.g. ninject.extensions.factory). If you have absolutely no control over how a type is instantiated, then you'll need to use statics to access the kernel like a service locator and "locate" the direct dependencies (while indirect dependencies are injected into the direct dependencies by the DI container).
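As a hedged sketch of the factory approach with Ninject.Extensions.Factory (IFooFormFactory and FooForm are hypothetical names; the ToFactory() binding is what the extension provides):

```csharp
using System.Windows.Forms;
using Ninject;
using Ninject.Extensions.Factory;

// The form factory the UI depends on; Ninject generates the implementation.
public interface IFooFormFactory
{
    FooForm CreateFooForm();
}

public class MainForm : Form
{
    private readonly IFooFormFactory _formFactory;

    public MainForm(IFooFormFactory formFactory)
    {
        _formFactory = formFactory;
    }

    private void OpenSubForm()
    {
        // Each SubForm gets its own dependency graph (e.g. its own ISession)
        // without MainForm ever touching the kernel directly.
        var form = _formFactory.CreateFooForm();
        form.Show();
    }
}

// In the composition root (sketch):
// var kernel = new StandardKernel();
// kernel.Bind<IFooFormFactory>().ToFactory();
// kernel.Bind<ISession>().ToMethod(ctx => sessionFactory.OpenSession());
```

Combined with a suitable scope for ISession, this keeps the "one Session per SubForm" behavior without the static helper.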
Regarding 3: I think this is somewhat controversial and probably often misunderstood. I don't think it's really that important what you call your classes (of course it is, but consistency across your code base is more important than deciding whether to name them all Repository or Service); what's important is how you design their responsibilities and relationships.
As such, I myself prefer to extract filters and the like into -Query-suffixed classes, each providing exactly one method. But others have other preferences... I think there have been enough blog posts on this topic that there's no use rehashing it here.
Best practice for a situation like yours is to use the MVP design pattern. Here is the architecture I can offer you:
MyApp.Infrastructure // Base Layer - No reference
MyApp.Models // Domain Layer - Reference to Infrastructure
MyApp.Presenter // Acts like controllers in MVC - Reference to Services, Models
MyApp.Repository.NH // DAL layer - Reference to Models, Infrastructure
MyApp.Services // BLL Layer - Reference to Repository, Models
MyApp.Services.Cache // Cached BLL Layer(Extremely recommended) - Reference to Services, Models
MyApp.UI.Web.WebForms // UI Layer - Reference to all of layers
I will try to do my best to explain with a basic implementation of a 'Category' model.
-Infrastructure-
EntityBase.cs
BusinessRule.cs
IEntity.cs
IRepository.cs
-Models-
Categories(Folder)
Category.cs // Implements IEntity and derives from EntityBase
ICategoryRepository.cs // Implements IRepository
-Presenter-
Interfaces
IHomeView.cs // Put every property and methods you need.
ICategoryPresenter.cs
Implementations
CategoryPresenter.cs // Implements ICategoryPresenter
CategoryPresenter(IHomeView view, ICategoryService categoryService) {
}
-Repository-
Repositories(Folder)
GenericRepository.cs // Implements IRepository
CategoryRepository.cs // Implements ICategoryRepository and derives from GenericRepository
-Services-
Interfaces
ICategoryService.cs
AddCategory(Category model);
Implementations
CategoryService.cs // Implements ICategoryService
CategoryService(ICategoryRepository categoryRepository) { }
AddCategory(Category model) {
    // Do stuff through the ICategoryRepository implementation.
}
-Services.Cache-
// It all depends on your choice: Redis or web cache.
-UI.Web.WebForms-
Views - Home(Folder) // Implement a structure like in MVC views.
Index.aspx // Implements IHomeView
Page_Init() {
    // Get an instance of the presenter
    var categoryPresenter = new CategoryPresenter(this, new CategoryService(new CategoryRepository()));
}
I'm not sure if I got your question right, but maybe this gives you an idea :)
I'm struggling to come up with a good architecture for my current project. It's my first time designing a serious n-tier app while trying to use software engineering best practices (DI, unit tests, etc...). My project uses the Onion architecture.
I have 4 layers
The Core layer: holds my business objects - classes representing my business entities, with their methods. Some of these objects hold a reference to a service interface.
The DAL (Data Access) layer: defines POCO objects and implements the repository interfaces defined in the Core layer. In this layer I thought it was a good idea to design a big utility class whose role is to convert POCO objects from the DAL into business objects from the Core.
The Service layer: implements the service interfaces defined in the Core. The role of this layer is to provide access to the repositories defined in the DAL. I originally believed this layer was useless, so I directly used the repository interfaces defined in my Core layer. However, after some weeks spent writing very long instantiation code - constructors taking 5-6 IRepository parameters - I got the point of the service layer.
The Presentation layer: nothing special to say here, except that I configure dependency injection in this layer (I'm using Ninject).
I've changed my architecture and rewritten my code at least 3 times, because many times I saw that something was wrong with it (things like long constructors with long parameter lists). Fortunately, bit by bit, I'm getting the point of the various coding patterns found in the literature.
However, I've just come across a cyclic dependency with my DI, and I'm seriously wondering whether my DAL2Core helper was a good idea...
Thanks to this helper I can write code such as:
DAL.Point p = DAL2Core.PointConverter(point); // point is a Core Object
context.Points.Add(p);
context.SaveChanges();
which reduces code redundancy a little. Each of my repositories defined in the DAL then has its own DAL2Core member:
private IDAL2CoreHelper DAL2Core;
And I inject it from the Repository constructor.
The DAL2Core class itself is a bit messy...
First of all, it has a property for every repository and every processor (service layer). The processors are there because my Core objects need a processor dependency injected. I've listed some of the repositories and processors referenced in my DAL2Core utility class below, just to illustrate:
[Inject]
private Core.IUserRepository UserRepository{ get; set; }
[Inject]
private Core.IPointsRepository PointsRepository { get; set; }
...
[Inject]
private Core.IUserProcessor UserProcessor{ get; set; }
[Inject]
private Core.IPointsProcessor CoursProcessor { get; set; }
(Since the DAL2Core helper is required by the repositories, constructor injection would cause cyclic dependencies.)
And then this class has a lot of simple methods such as:
public Core.User UserConverter(DAL.User u)
{
    Core.User user = new Core.User(UserProcessor);
    user.FirstName = u.FirstName;
    user.Name = u.Name;
    user.ID = u.ID;
    user.Phone = u.Phone;
    user.Email = u.Email;
    user.Birthday = u.Birthday;
    user.Photo = u.Photo;
    return user;
}
This class is some 600 lines long. Thinking about it, I realize that I don't save much code, because most of the time the DAL2Core conversion code is only called from one place, so perhaps it would be better to leave this code in the repositories? And - the biggest problem - since I decided to decouple this helper from my repository classes, Ninject now throws cyclical dependency exceptions.
What do you think about the design I tried - is it a good / common practice? And how can I perform this DAL2Core conversion smartly and efficiently, without code smells? I really look forward to solving this architecture issue; I've spent the last three weeks dealing with plumbing and architecture problems instead of advancing the project, and I'm falling badly behind. I do want to produce high-quality code, but I want to avoid architectural solutions that look like overkill to me, with lots of factories and so on. I admit, though, that sometimes this feeling just comes from a lack of understanding on my part (as it did with the Service Layer).
Thanks in advance for your help!
What you are looking for is AutoMapper, ValueInjecter, or something similar for this purpose.
Essentially, it is good practice to separate the data models between layers, to reduce coupling and increase testability. If you come up with a generic mapper you will reduce code redundancy.
Hope this helps.
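As a rough sketch of what this looks like in practice (assuming the AutoMapper NuGet package and its classic static configuration API; the DAL.User and Core.User types, and the processor dependency passed to Core.User's constructor, come from the question):

```csharp
// Configure the mapping once at application startup.
// ConstructUsing lets you keep the processor dependency that
// Core.User's constructor requires.
Mapper.CreateMap<DAL.User, Core.User>()
      .ConstructUsing(src => new Core.User(UserProcessor));

// Then the 600-line converter class collapses to calls like this:
Core.User user = Mapper.Map<DAL.User, Core.User>(dalUser);
```

Because AutoMapper matches properties by name by convention, all the `user.FirstName = u.FirstName;` lines disappear; you only configure the cases that don't match automatically.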
I'm writing an application using DDD techniques. This is my first attempt at a DDD project. It is also my first greenfield project, and I am the sole developer. I've fleshed out the domain model and the user interface. Now I'm starting on the persistence layer, beginning with a unit test, as usual.
[Test]
public void ShouldAddEmployerToCollection()
{
    var employerRepository = new EmployerRepository();
    var employer = _mockery.NewMock<Employer>();
    employerRepository.Add(employer);
    _mockery.VerifyAllExpectationsHaveBeenMet();
}
As you can see, I haven't written any expectations for the Add() function. I got this far and realized I haven't settled on a particular database vendor yet. In fact, I'm not even sure the project calls for a database engine at all; flat files or XML may be just as reasonable. So I'm left wondering what my next step should be.
Should I add another layer of abstraction... say a DataStore interface or look for an existing library that's already done the work for me? I'd like to avoid tying the program to a particular database technology if I can.
With your requirements, the only abstraction you really need is a repository interface that has basic CRUD semantics so that your client code and collaborating objects only deal with IEmployerRepository objects rather than concrete repositories. You have a few options for going about that:
1) No more abstractions. Just construct the concrete repository in your top-level application where you need it:
IEmployeeRepository repository = new StubEmployeeRepository();
IEmployee employee = repository.GetEmployee(id);
Changing that in a million places will get old, so this technique is only really viable for very small projects.
2) Create repository factories to use in your application:
IEmployeeRepository repository = repositoryFactory.CreateRepository<IEmployee>();
IEmployee employee = repository.GetEmployee(id);
You might pass the repository factory into the classes that will use it, or you might create an application-level static variable to hold it (it's a singleton, which is unfortunate, but fairly well-bounded).
3) Use a dependency injection container (essentially a general-purpose factory and configuration mechanism):
// A lot of DI containers use this 'Resolve' format.
IEmployeeRepository repository = container.Resolve<IEmployeeRepository>();
IEmployee employee = repository.GetEmployee(id);
If you haven't used DI containers before, there are lots of good questions and answers about them here on SO (such as Which C#/.NET Dependency Injection frameworks are worth looking into? and Data access, unit testing, dependency injection), and you would definitely want to read Martin Fowler's Inversion of Control Containers and the Dependency Injection pattern.
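To make option 3 concrete with Ninject (which the earlier question already uses), the wiring looks roughly like this. The NHibernateEmployeeRepository class is a hypothetical concrete implementation, named here only for illustration:

```csharp
// Composition root: done once at application startup.
var kernel = new StandardKernel();
kernel.Bind<IEmployeeRepository>().To<NHibernateEmployeeRepository>();

// Client code asks the container for the abstraction and never
// mentions the concrete type:
IEmployeeRepository repository = kernel.Get<IEmployeeRepository>();
IEmployee employee = repository.GetEmployee(id);
```

Swapping NHibernate for a flat-file or XML implementation later is then a one-line change in the binding, not a change to any client code.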
At some point you will have to make a call as to what your repository will do with the data. When you're starting your project it's probably best to keep it as simple as possible, and only add abstraction layers when necessary. Simply defining what your repositories / DAOs are is probably enough at this stage.
Usually, the repositories / DAOs should know about the implementation details of whichever database or ORM you have decided to use. I expect this is why you are using repositories in DDD: this way your tests can mock the repositories and remain agnostic of the implementation.
I wrote a blog post on implementing the Repository pattern on top of NHibernate, I think it will benefit you regardless of whether you use NHibernate or not.
Creating a common generic and extensible NHiberate Repository
One thing I've found with persistence layers is to make sure there is a spot where you can start introducing abstraction. If your database grows, you might need to start implementing sharding, and unless an abstraction layer is already in place, it can be difficult to add one later.
I believe you shouldn't add yet another layer below the repository classes just for the purpose of unit testing, especially if you haven't chosen your persistence technology. I don't think you can create an interface more granular than repository.GetEmployee(id) without exposing details about the persistence method.
If you're really considering using flat text or XML files, I believe the best option is to stick with the repository interface abstraction. But if you have decided to use a database and are just not sure about the vendor, an ORM tool might be the way to go.
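To illustrate the "stick with the repository interface" advice, here is a minimal sketch (the IEmployerRepository and Employer names are assumptions inferred from the question's unit test). Clients only see the interface, so the in-memory version below could later be replaced by a flat-file, XML, or database-backed implementation without any client code changing:

```csharp
using System.Collections.Generic;

public class Employer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The only abstraction the client code needs: basic CRUD semantics.
public interface IEmployerRepository
{
    void Add(Employer employer);
    Employer GetById(int id);
    void Remove(Employer employer);
}

// Simplest possible implementation, useful for tests; a flat-file,
// XML, or NHibernate-backed version would implement the same interface.
public class InMemoryEmployerRepository : IEmployerRepository
{
    private readonly Dictionary<int, Employer> _store =
        new Dictionary<int, Employer>();

    public void Add(Employer employer)
    {
        _store[employer.Id] = employer;
    }

    public Employer GetById(int id)
    {
        Employer employer;
        return _store.TryGetValue(id, out employer) ? employer : null;
    }

    public void Remove(Employer employer)
    {
        _store.Remove(employer.Id);
    }
}
```

The original test's EmployerRepository could start out as exactly this in-memory class, letting you defer the database-vendor decision while the rest of the application is built against IEmployerRepository.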