DTO vs. Domain Model, project organization - C#

I have a project with a repository and a service layer, using EF6 and code-first POCOs. In the CustomerRepository, I am doing several projection queries that return objects other than the entities themselves.
I understand that the code-first POCOs are what would be considered "Domain Models", but if I were to do a projection query into a different model, what is that model considered? An example of this would be the CustomerOrderStats below. Is that still a Domain Model, or should it be considered a DTO?
Example
Object returned from Repository:
public class CustomerOrderStats
{
    public string Name { get; set; }
    public int Count { get; set; }
}
Query in the Repository
public CustomerOrderStats GetCustomerOrderStats(Guid customerGuid)
{
    return customers
        .Where(c => c.Guid == customerGuid)
        .Select(c => new CustomerOrderStats
        {
            Name = c.Name,
            Count = c.Orders.Count()
        })
        .FirstOrDefault();
}

It could be either one, really. The definition of a model vs. a DTO isn't really a matter of how you organize any given framework, but rather what that object represents in the domain. If it has rich functionality or business logic or is an active part of the actual business process, it's probably a model. If, on the other hand, it's just a container of properties to move values from one place to another, it's probably a DTO.
The key here is whether the object is an active part of the business process. And a good rule of thumb here is often the name of the object.
Is it a name that non-technical business team members understand?
Is it a term they use to describe what the business does? (Even a very small part of the business)
Does it carry a meaning in the industry in general?
A DTO is generally something that exists for purely technical reasons. Component A needs to send data to Component B, but that operation is a technical one and not a business one. Data just needs to be, well, transferred. As a piece of the system, it's essentially built "from the bottom up" because it satisfies a low-level technical need.
A model describes a part of the business. It could be an element on a chart which defines the business process in non-technical terms, or an encapsulation of a business concept. As a piece of the system, it's essentially built "from the top down" because it is described generally by the business and then implemented specifically to meet that need.
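To make the distinction concrete, here is a minimal, hypothetical sketch (the Customer type and its rule are invented for illustration): the domain model owns behaviour the business talks about, while the DTO is just a bag of values being transferred.
using System;
using System.Collections.Generic;

// Domain model: an active part of the business process, with behaviour and rules.
public class Customer
{
    private readonly List<Order> orders = new List<Order>();

    public Guid Guid { get; private set; }
    public string Name { get; private set; }

    public IReadOnlyCollection<Order> Orders
    {
        get { return orders.AsReadOnly(); }
    }

    public void PlaceOrder(Order order)
    {
        if (order == null) throw new ArgumentNullException("order");
        // business rules about placing orders would live here
        orders.Add(order);
    }
}

// DTO: a passive container whose only job is to move values between components.
public class CustomerOrderStats
{
    public string Name { get; set; }
    public int Count { get; set; }
}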

Related

According to DDD, do I need an additional repository and model?

According to DDD, for each aggregate I have a repository. Let's take an example:
Client (model aggregate) > ClientRepository
Visit (model aggregate) > VisitRepository
Now, physically I have an association table in the database which connects Client and Visit, because a client can have many visits.
The question is: should I create a separate model like ClientVisit, which would also be an aggregate:
public class ClientVisit
{
    int clientId;
    int visitId;
}
Also a repository like ClientVisitRepository, which could reference/use ClientRepository and VisitRepository?
Or is it enough to stick with e.g. ClientRepository and get the data from there, without an additional model and repository?
Modification to the post:
Instead of Visit (a poor example), let's use Car instead, so that each client can have many cars. We would also have a unique transactionNumber, so:
Client (model aggregate) > ClientRepository
Car (model aggregate) > CarRepository
Should I then create an aggregate such as:
public class ClientCar
{
    int clientId;
    int carId;
    int transactionNumber;
}
and ClientCarRepository?
No, don't use a different repository for each entity or aggregate. You are not applying DDD completely in your modelling. You have to focus on the Ubiquitous language. Let me explain.
Repositories are meant to be nothing more than serializers and de-serializers for your entities and aggregates. There needn't be a 1-to-1 mapping between them; in fact, most of the time you won't have a 1-to-1. In my code, I tend to scope repositories to the bounded context or to a subcontext.
Take a trivial example: a blogging application. I might have a repository that can persist a comment. Persisting the comment means saving the comment itself and updating the user's comment count. The Save(Comment comment, User user) method will make two calls to my persistence mechanism to update the individual entities or aggregates.
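A hedged sketch of what such a context-scoped repository might look like (BlogDbContext, Comment, User and its CommentCount property are assumptions for illustration):
public class BlogRepository
{
    private readonly BlogDbContext context; // assumed EF DbContext

    public BlogRepository(BlogDbContext context)
    {
        this.context = context;
    }

    // One save operation, two persistence calls: the comment and the author's count.
    public void Save(Comment comment, User user)
    {
        context.Comments.Add(comment); // persist the comment itself
        user.CommentCount++;           // assumes 'user' is tracked by the same context
        context.SaveChanges();
    }
}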
Repository, in the sense of domain driven design, is a life cycle management pattern. See chapter 6 of the "blue book".
Its purpose is to isolate from our application code the strategy we are using to store and retrieve aggregate roots. That's important, because the aggregate roots are the only parts of the domain code that our application code talks to directly.
From this, it follows that you don't need a repository for the client car relation unless it is the root of its own aggregate.
Figuring out whether this relation should be in its own aggregate or not is a matter of domain analysis -- you're going to have to dig into your specific domain to figure out the answer. For something like a car rental domain, I would guess that you'll want this relation, and the information associated with its life cycle, to be in a separate aggregate from the car or the customer. But I wouldn't feel confident in that guess until I had worked through a few edge cases with the domain experts.
Whether you treat an entity as aggregate root, thereby introduce a corresponding repository, depends on your domain or its ubiquitous language. One of the key indicators of aggregates is that they encapsulate important domain operations.
Hard to be precise without knowing your domain, however, in your example, Client seems to be a more natural candidate for an aggregate: a client may own new cars, get rid of a few, etc; the corresponding operations (i.e. adding cars or removing cars) fit naturally into client.
ClientCar (or ClientVisit), on the other hand, doesn't seem to have any purpose other than retrieving cars owned by a client. For this purpose, navigating the entity should suffice, no aggregate is necessary. Your Client repository may introduce a method for this purpose like the following:
public interface IClientRepository
{
    Client FindById(string clientId);
    void Store(Client client);
    IList<Car> CarsOwnedBy(string clientId);
}
The CarsOwnedBy implementation then retrieves the Client and returns only the Cars associated with it.
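A minimal sketch of such an implementation, assuming an EF-backed repository and that Client exposes an Id key and a Cars navigation property (AppDbContext and those members are assumptions for illustration):
using System.Collections.Generic;
using System.Data.Entity.Migrations; // for AddOrUpdate
using System.Linq;

public class ClientRepository : IClientRepository
{
    private readonly AppDbContext context; // assumed EF DbContext

    public ClientRepository(AppDbContext context)
    {
        this.context = context;
    }

    public Client FindById(string clientId)
    {
        return context.Clients.Find(clientId);
    }

    public void Store(Client client)
    {
        context.Clients.AddOrUpdate(client);
        context.SaveChanges();
    }

    public IList<Car> CarsOwnedBy(string clientId)
    {
        // Navigate from the client rather than introducing a ClientCar aggregate.
        return context.Clients
            .Where(c => c.Id == clientId)
            .SelectMany(c => c.Cars)
            .ToList();
    }
}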

Does the business logic layer need its own models or not?

I'm building a 3-tier application using ASP.NET MVC and I want to do everything as recommended.
So I've created MvcSample.Bll for business logic, MvcSample.Data for data and MvcSample.Web for the website.
In Data I have my edmx file (I'm using the database-first approach) and my repositories. In Bll I'm writing the services which will be called in Web.
So my question is:
Should I write separate models in Bll, or use the ones generated from the edmx file?
It heavily depends on the type of problem that your application is trying to solve.
From my experience, it is very rare that the business logic returns model objects directly from Entity Framework. Also, accepting these as arguments may not be the best idea.
The Entity Framework model represents your relational database. Because of that, its definition contains many things that your business logic should not expose, for example navigation properties, computed properties, etc. When accepting a model object as an argument, you may notice that many of its properties are not used by the particular business logic method. In many cases this confuses the developer and is a source of bugs.
All in all, if your application is a quick prototype, a proof of concept or simple CRUD software, then it might be sufficient to use the EF model classes. However, from a practical point of view, consider bespoke business logic model/DTO classes.
From my point of view, you need another model for your Bll.
That would encapsulate your Bll completely.
I think there is no right or wrong answer for your question.
In my experience, I have used both.
Let's look at the example below:
I have a User table:
public class User
{
    public int Id { get; set; }
    public string First_Name { get; set; }
    public string Last_Name { get; set; }
    public int Age { get; set; }
    public string Password { get; set; } // let's use this for demonstration
}
I have a method called DisplayAll() in Bll. This method should list all users in my database by full name (FirstName + LastName) and their ages.
I should not return the User class because it would expose the Password; instead, I create a new class, UserDto:
public class UserDto
{
    public string FullName { get; set; }
    public int Age { get; set; }
}
So here is my DisplayAll():
public List<UserDto> DisplayAll()
{
    List<UserDto> result = ctx.User // my DbContext
        .Select(x => new UserDto()
        {
            FullName = x.First_Name + " " + x.Last_Name,
            Age = x.Age
        })
        .ToList();
    return result;
}
So as you can see, my method DisplayAll() uses both User and UserDto.
My approach will be
MvcSample.Data
-- Model Classes
-- EDMX attach to model
MvcSample.Bll
-- Model Inheriting MvcSample.Data.Model
-- Business Logic Class - Using MvcSample.Bll.Model
MvcSample.Web
-- Controller using MvcSample.Bll.Model
It depends on your view of software design and how you want to take advantage of it. By separating the BLL model, you get the freedom to put story-specific validation and calculation there. By using only the data-layer model, things can get awkward, because changes to it take effect in the DB. A sketch of this layout follows below.
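A hedged sketch of that layout (the Customer names and the validation rule are invented for illustration): the Bll model inherits the Data model and adds the story-specific logic, and the controllers only ever see the Bll type.
// MvcSample.Data
namespace MvcSample.Data.Models
{
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
    }
}

// MvcSample.Bll
namespace MvcSample.Bll.Models
{
    public class Customer : MvcSample.Data.Models.Customer
    {
        // Story-specific validation lives in the Bll model, not in the Data model.
        public bool IsValidForRegistration()
        {
            return !string.IsNullOrWhiteSpace(Name)
                && !string.IsNullOrWhiteSpace(Email);
        }
    }
}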
You can use a 3-tier architecture in ASP.NET in this way:
MvcSample.BLL - business logic layer
MvcSample.DAL - Data access layer
MvcSample.Domain - Domain layer
MvcSample.web - website
All your repository classes are included in the .BLL layer; that means your logic is stored there.
Usually .DAL is used for storing the .edmx classes. .Domain is used to recreate database objects that are useful on the server side. That means if you are passing a JSON object from client to server, that object should be created on the server side, so those classes can be implemented in .Domain.

Using AutoMapper to load entities from the database?

Most of what I've read (e.g. from the author) indicates that AutoMapper should be used to map an entity to a DTO. It should not load anything from the database.
But what if I have this:
public class Customer {
public int Id { get; set; }
public string Name { get; set; }
public virtual ICollection<Order> Orders { get; set; }
}
public class CustomerDto {
public int Id { get; set; }
public string Name { get; set; }
public IEnumerable<int> OrderIds { get; set; } // here is the problem
}
I need to map from DTO to entity (i.e. from CustomerDto to Customer), but first I must use that list of foreign keys to load the corresponding entities from the database. AutoMapper can do that with a custom converter.
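For context, such a custom converter might look roughly like this. This is a sketch only: the IOrderRepository abstraction and its Get method are assumptions, and the signatures are those of recent AutoMapper versions.
using System.Linq;
using AutoMapper;

public class CustomerDtoToCustomerConverter : ITypeConverter<CustomerDto, Customer>
{
    private readonly IOrderRepository _orderRepository; // assumed repository abstraction

    public CustomerDtoToCustomerConverter(IOrderRepository orderRepository)
    {
        _orderRepository = orderRepository;
    }

    public Customer Convert(CustomerDto source, Customer destination, ResolutionContext context)
    {
        var customer = destination ?? new Customer { Id = source.Id, Name = source.Name };
        // This is the questionable part: the mapping itself hits the database.
        customer.Orders = source.OrderIds
            .Select(id => _orderRepository.Get(id))
            .ToList();
        return customer;
    }
}

// Registration:
// cfg.CreateMap<CustomerDto, Customer>().ConvertUsing<CustomerDtoToCustomerConverter>();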
I agree that it doesn't feel right... but what are the alternatives? Sticking that logic into a controller, service, a repository, some manager class? All that seems to be pushing the logic somewhere else, in the same tier. And if I do that, I must also perform the mapping manually!
From a DDD perspective, the DTO should not be part of the domain. So AutoMapper is also not part of the domain, because it knows about that DTO. So AutoMapper is in the same tier as the controllers, services, etc.
So does it make sense to put the DTO-to-entity logic (which includes accessing the database, and possibly throwing exceptions) into an AutoMapper mapping?
EDIT
@ChrisSimon's great answer below explains from a DDD perspective why I shouldn't do this. From a non-DDD perspective, is there a compelling reason not to use AutoMapper to load from the db?
To start with, I'm going to summarise my understanding of Entities in DDD:
Entities can be created - often using a factory. This is the start of their life-cycle.
Entities can be mutated - have their state modified - by calling methods on the entity. This is how they progress through their lifecycle. By ensuring that the entity owns its own state, and can only have its state modified by calling its methods, the logic that controls the entity's state is all within the entity class, leading to cleaner separation of business logic and more maintainable systems.
Using AutoMapper to convert from a DTO to the entity means the entity is giving up ownership of its state. If the DTO is in an invalid state and you map that directly onto the entity, the entity may end up in an invalid state - you have lost the value of making entities contain data + logic, which is the foundation of the DDD entity.
To make a suggestion as to how you should approach this, I'd ask - what is the operation you are trying to achieve? DDD encourages us not to think about CRUD operations, but to think about real business processes, and to model them on our entities. In this case it looks like you are linking Orders to the Customer entity.
In an Application Service I would have a method like:
void LinkOrdersToCustomer(CustomerDto dto)
{
    using (var dbTxn = _txnFactory.NewTransaction())
    {
        var customer = _customerRepository.Get(dto.Id);
        foreach (var orderId in dto.OrderIds)
        {
            var order = _orderRepository.Get(orderId);
            customer.LinkToOrder(order);
        }
        dbTxn.Save();
    }
}
Within the LinkToOrder method, I would have explicit logic that did things like:
Check that order is not null
Check that the customer's state permits adding the order (are they currently active? is their account closed? etc.)
Check that the order actually does belong to the customer (what would happen if the order referenced by orderId belonged to another customer?)
Ask the order (via a method on the order entity) if it is in a valid state to be added to a customer.
Only then would I add it to the Customer's Orders collection (see the sketch below).
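A hedged sketch of what LinkToOrder might look like on the Customer entity - the IsActive flag, CustomerId property, CanBeLinkedToCustomer method, and _orders backing field are placeholders for whatever your real rules and state are:
public void LinkToOrder(Order order)
{
    if (order == null)
        throw new ArgumentNullException("order");
    if (!IsActive)
        throw new InvalidOperationException("Orders cannot be linked to an inactive customer.");
    if (order.CustomerId != Id)
        throw new InvalidOperationException("The order belongs to a different customer.");
    if (!order.CanBeLinkedToCustomer())
        throw new InvalidOperationException("The order is not in a state that allows linking.");

    _orders.Add(order); // the entity's private backing collection
}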
This way, the application 'flow' and infrastructure management is contained within the application/services layer, but the true business logic is contained within the domain layer - within your entities.
If the above requirements are not relevant in your application, you may have other requirements. If not, then perhaps it is not necessary to go the route of DDD - while DDD has a lot to add, its overheads are generally only worth it in systems with lots of complex business logic.
This isn't related to the question you asked, but I'd also suggest you take a look at the modelling of Customer and Order. Are they both independent Aggregates? If so, modelling Customer as containing a collection of Order may lead to problems down the road - what happens when a customer has a million orders? Even if the collection is lazy loaded, you know at some point something will attempt to load it, and there goes your performance. There's some great reading about aggregate design here: http://dddcommunity.org/library/vernon_2011/ which recommends modelling references by Id rather than reference. In your case, you could have a collection of OrderIds, or possibly even a completely new entity to represent the link - CustomerOrderLink which would have two properties - CustomerId, and OrderId. Then none of your entities would have embedded collections.

Entity Framework classes vs. POCO

I have a general difference of opinion on an architectural design, and even though Stack Overflow should not be used to ask for opinions, I would like to ask for the pros and cons of the two approaches I describe below:
Details:
- C# application
- SQL Server database
- Using Entity Framework
- And we need to decide which objects we are going to use to store our information and pass around throughout the application
Scenario 1:
We use the Entity Framework entities and pass them all around through our application: the entity object is used to store all information, we pass it around to the BL, and eventually our WebApi takes this entity and returns the value. No DTOs nor POCOs.
If the database schema changes, we update the entity and modify all the classes where it is used.
Scenario 2:
We create an intermediate class - call it a DTO or call it a POCO - to hold all the information that is required by the application. There is an intermediate step of taking the information stored in the entity and populating the POCO, but we keep all EF code within the data access layer and not across all layers.
What are the pros and cons of each one?
I would use intermediate classes, i.e. POCOs, instead of EF entities.
The only advantage I see in using EF entities directly is that it's less code to write...
Advantages to use POCO instead:
You only expose the data your application actually needs
Basically, say you have some GetUsers business method. If you just want a list of users to populate a grid (i.e. you need their ID, name and first name, for example), you could just write something like this:
public IEnumerable<SimpleUser> GetUsers()
{
    return this.DbContext
        .Users
        .Select(z => new SimpleUser
        {
            ID = z.ID,
            Name = z.Name,
            FirstName = z.FirstName
        })
        .ToList();
}
It is crystal clear what your method actually returns.
Now imagine instead, it returned a full User entity with all the navigation properties and internal stuff you do not want to expose (such as the Password field)...
It really simplifies the job of the person who consumes your services.
It's even more obvious for Create-like business methods. You certainly don't want to use a User entity as a parameter; it would be awfully complicated for the consumers of your service to know which properties are actually required...
Imagine the following entity:
public class User
{
    public long ID { get; set; }
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string Password { get; set; }
    public bool IsDeleted { get; set; }
    public bool IsActive { get; set; }
    public virtual ICollection<Profile> Profiles { get; set; }
    public virtual ICollection<UserEvent> Events { get; set; }
}
Which properties are required for you to consume the void Create(User entity); method?
ID: dunno, maybe it's generated maybe it's not
Name/FirstName: well those should be set
Password: is that a plain-text password, or a hashed version? What is it?
IsDeleted/IsActive: should I activate the user myself? Is it done by the business method?
Profiles: hmm... how do I assign a profile to a user?
Events: what the hell is that?? (A dedicated input model, sketched below, removes all this guesswork.)
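By contrast, a dedicated input model makes the contract obvious. A hypothetical sketch (the CreateUserRequest name and its fields are invented for illustration):
// Only what the Create operation actually needs - nothing to guess about.
public class CreateUserRequest
{
    public string Name { get; set; }
    public string FirstName { get; set; }
    public string PlainTextPassword { get; set; } // hashed by the business layer, not the caller
}

// void Create(CreateUserRequest request);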
It forces you to not use lazy loading
Yes, I hate this feature for multiple reasons. Some of them are:
Extremely hard to use efficiently. I've seen, too many times, code that produces thousands of SQL requests because the developers didn't know how to use lazy loading properly.
Extremely hard to manage exceptions. By allowing SQL requests to be executed at any time (i.e. whenever you lazy load), you delegate the job of handling database exceptions to the upper layers, i.e. the business layer or even the application. A bad habit.
Using POCOs forces you to eager-load your entities, which is much better IMO. A sketch of what that looks like follows below.
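For instance, with EF6 you load everything the POCO needs up front and map it before the context goes away. A sketch under assumptions: UserWithProfiles is an invented POCO, and Profile is assumed to have a Name property.
using System.Collections.Generic;
using System.Data.Entity; // for the lambda-based Include extension
using System.Linq;

public class UserWithProfiles
{
    public long ID { get; set; }
    public string Name { get; set; }
    public IList<string> ProfileNames { get; set; }
}

public IList<UserWithProfiles> GetUsersWithProfiles()
{
    // Everything the POCO needs is loaded here; nothing can lazy-load later.
    var users = this.DbContext.Users
        .Include(u => u.Profiles)
        .ToList();

    return users
        .Select(u => new UserWithProfiles
        {
            ID = u.ID,
            Name = u.Name,
            ProfileNames = u.Profiles.Select(p => p.Name).ToList()
        })
        .ToList();
}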
About AutoMapper
AutoMapper is a tool that allows you to automagically convert entities to POCOs and vice versa. I don't like it either. See https://stackoverflow.com/a/32459232/870604
I have a counter-question: Why not both?
Consider any arbitrary MVC application. In the model and controller layer you'll generally want to use the EF objects. If you defined them using Code First, you've essentially defined how they are used in your application first and then designed your persistence layer to accurately save the changes you need in your application.
Now consider serving these objects to the view layer. The views may or may not reflect your objects, or an aggregation of your working objects. This often leads to POCOs/DTOs that capture whatever is needed in the view. Another scenario is when you want to publish objects in a web service. Many frameworks provide easy serialization of POCO classes, in which case you typically either need to 1) annotate your EF classes or 2) make DTOs.
Also be aware that any lazy loading you may have on your EF classes is lost when you use POCOs or when you close your context.

Having Separate Domain Model and Persistence Model in DDD

I have been reading about domain-driven design and how to implement it while using a code-first approach for generating a database. From what I've read and researched, there are two opinions around this subject:
Have 1 class that serves both as a domain model and a persistence model
Have 2 different classes, one implementing the domain logic and one used for a code-first approach
Now, I know opinion 1) is said to simplify small solutions that do not have many differences between the domain and persistence models, but I think it breaks the single responsibility principle and thereby introduces a lot of issues when an ORM's conventions interfere with DDD.
What surprises me is that there are numerous code examples of how to implement opinion 1), but I haven't found a single example of how to implement opinion 2) and how to map the two objects. (There probably are such examples, but I failed to find a C# one.)
So I tried to implement an example on my own but I am not sure if that's a good way to do it.
Let's say I have a ticketing system and tickets have an expiration date. My domain model will look like this:
/// <summary>
/// Domain Model
/// </summary>
public class TicketEntity
{
    public int Id { get; private set; }
    public decimal Cost { get; private set; }
    public DateTime ExpiryDate { get; private set; }

    public TicketEntity(int id, decimal cost, DateTime expiryDate)
    {
        this.Id = id;
        this.Cost = cost;
        this.ExpiryDate = expiryDate;
    }

    public bool IsTicketExpired()
    {
        if (DateTime.Now > this.ExpiryDate)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}
The persistence model, using Entity Framework as the ORM, will look almost the same, but as the solution grows this might not be the case:
/// <summary>
/// ORM code-first Persistence Model
/// </summary>
public class Ticket
{
    [Key]
    public int Id { get; set; }
    public decimal Cost { get; set; }
    public DateTime ExpiryDate { get; set; }
}
Everything looks great so far. What I am not sure about is the best place to get a Ticket persistence model from the repository and how to map it to the TicketEntity domain model.
I have done this in an application/service layer.
public class ApplicationService
{
    private ITicketsRepository ticketsRepository;

    public ApplicationService(ITicketsRepository ticketsRepository)
    {
        this.ticketsRepository = ticketsRepository;
    }

    public bool IsTicketExpired(int ticketId)
    {
        Ticket persistenceModel = this.ticketsRepository.GetById(ticketId);
        TicketEntity domainModel = new TicketEntity(
            persistenceModel.Id,
            persistenceModel.Cost,
            persistenceModel.ExpiryDate);
        return domainModel.IsTicketExpired();
    }
}
My questions are:
Are there any reasons opinion 1) would be preferred to opinion 2), other than speeding up development and reusing code?
Are there any issues in my approach of mapping the models? Is there something I missed that would bring up issues when a solution grows?
Are there any reasons opinion 1) would be preferred to opinion 2), other than speeding up development and reusing code?
Option 1 is just pure laziness and an imagined increase in development speed. It's true that those applications will get version 1.0 built faster. But when those developers reach version 3.0 of the application, they no longer think it's so fun to maintain it, due to all the compromises they have had to make in the domain model because of the ORM mapper.
Are there any issues in my approach of mapping the models? Is there something I missed that would bring up issues when a solution grows?
Yes. The repository should be responsible for hiding the persistence mechanism. Its API should only work with domain entities, not persistence entities.
The repository is responsible for doing the conversions to/from domain entities (to be able to persist them). A fetch method typically uses ADO.NET or an ORM like Entity Framework to load the database object/entity, then converts it to the correct business entity and finally returns it.
Otherwise you would force every service to have knowledge both of persistence AND of working with your domain model, thus giving it two responsibilities.
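Applied to the question's example, a sketch of a repository that keeps the persistence model internal might look like this (the TicketsDbContext name is an assumption; note that the interface now returns the domain entity, not the Ticket persistence class):
public interface ITicketsRepository
{
    TicketEntity GetById(int ticketId);
}

public class TicketsRepository : ITicketsRepository
{
    private readonly TicketsDbContext context; // assumed EF DbContext

    public TicketsRepository(TicketsDbContext context)
    {
        this.context = context;
    }

    public TicketEntity GetById(int ticketId)
    {
        // The EF persistence model never leaves this class.
        Ticket record = context.Tickets.Find(ticketId);
        if (record == null)
        {
            return null;
        }
        return new TicketEntity(record.Id, record.Cost, record.ExpiryDate);
    }
}
The application service then simply calls ticketsRepository.GetById(ticketId).IsTicketExpired() without ever seeing the Ticket class.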
If you work with application services per the DDD definition, you will probably want to look at the Command/Query separation pattern, which can be a replacement for application services. The code gets cleaner and you also get a much more lightweight API wrapping your domain model.
I ran into this dilemma this year in a big project I was working on, and it was a really tough decision to make... I could talk about this topic for hours, but I'll summarize my thoughts for you:
1) Persistence and Domain model as the same thing
If you are in a new project with a database designed from scratch for it, I would probably suggest this option. Yes, the domain and your knowledge of it will change constantly, and this will demand refactoring that affects your database, but I think in most cases it's worth it.
With Entity Framework as your ORM you can almost keep your domain models entirely free of ORM concerns using fluent mappings.
Good parts:
Fast, easy, beautiful (if the database is designed for that problem)
Bad parts:
Maybe the developers start to think twice before making a change/refactoring in the domain, fearing that it will affect the database. This fear is not good for the domain.
If the domain starts to diverge too much from the database, you will face difficulties keeping the domain in harmony with the ORM. The closer you stay to the domain, the harder the ORM is to configure; the closer you stay to the ORM, the dirtier the domain gets.
2) Persistence and Domain model as two separate things
It sets you free to do whatever you want with your domain: no fear of refactoring, no limitations coming from the ORM and database. I would recommend this approach for systems that deal with a legacy or badly designed database, something that would probably end up messing up your domain.
Good parts:
Completely free to refactor the domain
It gets easier to dig into other topics of DDD, like Bounded Contexts.
Bad parts:
More effort spent on data conversions between the layers. Development time (and maybe also runtime) will be slower.
But the main one and, believe me, the one that will hurt most: you will lose the main benefits of using an ORM, like change tracking. Maybe you will end up using frameworks like GraphDiff, or even abandoning the ORM and going to pure ADO.NET.
Are there any issues in my approach of mapping the models?
I agree with @jgauffin: "it's in the repository that the mapping should take place". This way your persistence models never get out of the repository layer; preferably, no one should see those entities except the repository itself.
Are there any reasons opinion 1) would be preferred to opinion 2), other than speeding up development and reusing code?
I can see a big one (opinionated stuff ahead): there is no "persistence model". All you've got is a relational model in your database and a domain object model. Mapping between the two is a set of actions, not data structures. What's more, this is precisely what ORMs are supposed to do.
Most ORMs now support what they should have provided from the start -- a way to declare these actions directly in code without touching your domain entities. Entity Framework's fluent configurations, for instance, allow you to do exactly that.
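For example, with EF6 the mapping can live in a configuration class, leaving the domain type free of attributes (a sketch reusing the TicketEntity from the question; the table name and precision are illustrative):
using System.Data.Entity;
using System.Data.Entity.ModelConfiguration;

// No data annotations on the domain class; all persistence concerns live here.
public class TicketEntityConfiguration : EntityTypeConfiguration<TicketEntity>
{
    public TicketEntityConfiguration()
    {
        ToTable("Tickets");
        HasKey(t => t.Id);
        Property(t => t.Cost).HasPrecision(18, 2);
        Property(t => t.ExpiryDate).IsRequired();
    }
}

// Registered in the DbContext:
// protected override void OnModelCreating(DbModelBuilder modelBuilder)
// {
//     modelBuilder.Configurations.Add(new TicketEntityConfiguration());
// }
One caveat worth noting: EF6 can populate private setters, but it still needs a parameterless constructor on the entity (it can be non-public), so "without touching your domain entities" is almost, rather than entirely, true.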
You may be under the impression that no persistence model = violating SRP and trampling on DDD, because many implementations you can find out there do. But it doesn't have to be like that.
