Isn't it bad practice to pass a DTO object to the service layer?
Currently, my service layer method looks like this:
public void save(MyEntity entity);
Mapping values from the DTO to the business entity (MyEntity) is done in the presentation layer.
But I want to change the method signature to this:
public void save(MyEntityDTO dto, String author);
After that, the mapping from DTO to business entity will happen in the service layer.
EDIT: I want this because I need an open Hibernate session when mapping from the DTO to the business object, so that all changes to the entity are flushed automatically.
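For illustration, the intended save method might look roughly like this in C# with NHibernate (the .NET port of Hibernate, where the same dirty-checking behaviour applies); the sessionFactory field and all entity members are assumptions, not taken from the question:

    // Sketch only: map the DTO onto an entity loaded inside an open session,
    // so dirty checking flushes the changes on commit. Names are assumptions.
    public void Save(MyEntityDTO dto, string author)
    {
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            // The entity is managed by the open session from this point on.
            MyEntity entity = session.Get<MyEntity>(dto.Id);

            // Mapping DTO values onto the managed entity; the session tracks
            // these changes, so no explicit save/update call is required.
            entity.Name = dto.Name;
            entity.Author = author;

            tx.Commit(); // dirty checking flushes the changes here
        }
    }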
Isn't it bad practice to pass a DTO object to the service layer?
Not only can you pass DTO objects to the service layer; you should pass DTOs to the service layer instead of business entities.
Your service should receive DTOs, map them to business entities, and send them to the repository. It should also retrieve business entities from the repository, map them to DTOs, and return the DTOs as responses. So your business entities never leave the business layer; only the DTOs do.
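A minimal sketch of that shape (the type names, members, and repository interface below are assumptions for illustration, not taken from the question):

    // Illustrative stand-ins for the real types.
    public class MyEntity { public int Id; public string Name; }
    public class MyEntityDTO { public int Id; public string Name; }

    public interface IMyEntityRepository
    {
        void Save(MyEntity entity);
        MyEntity GetById(int id);
    }

    public class MyEntityService
    {
        private readonly IMyEntityRepository repository;

        public MyEntityService(IMyEntityRepository repository)
        {
            this.repository = repository;
        }

        public void Save(MyEntityDTO dto)
        {
            // DTO comes in, business entity goes down to the repository.
            var entity = new MyEntity { Id = dto.Id, Name = dto.Name };
            repository.Save(entity);
        }

        public MyEntityDTO Get(int id)
        {
            // Business entity comes up from the repository, DTO goes back out,
            // so the entity itself never leaves the business layer.
            var entity = repository.GetById(id);
            return new MyEntityDTO { Id = entity.Id, Name = entity.Name };
        }
    }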
See the full answer to a similar question here.
It's debatable (as evidenced by the comments on the accepted answer).
On the one hand, DTOs belong to the layer of the application that deals with data transfer, which apparently is your presentation layer.
On the other hand, your domain object (business entity), which properly lives in the service layer, probably has properties (say, an Id or a LastUpdated) that are not passed into the save method. So what do you pass in? Just the properties you need. And it just so happens that the MyEntityDTO request for save() encapsulates exactly those properties!
So you now have the unfortunate choice between:
1. Passing the business object in from the presentation layer (this breaks the layering model too, and forces you to ignore properties like Id and LastUpdated, which are not in the "request")
2. Breaking the DTO up into its properties and passing them in: service.save(Dto.Property_One, Dto.Property_Two), which you then have to put back together in the save() method
3. Creating some new object to encapsulate Property_One, Property_Two, etc.
4. Accepting that the DTO is for transferring between layers too
None of these is ideal, IMO, which is why I think #4 is okay. Probably the most correct is #2, but again, not ideal.
Sometimes naming may ease the pain: "MyEntityRequest" instead of "DTO".
It's OK; all standard 3-layer architectures do that. Data access gets the data, business maps and manipulates it, presentation presents it. It is not OK (though, as said, no crime) to pass data access models to the presentation layer; at that point you should pass business models.
By the way, "DTO" can mean anything: business layer models can be DTOs, and data access models can be DTOs. DTOs are usually POCOs in C#. Usually you have your data access models, representing your database entities, and your domain models, which pass the data around your application. The domain models are usually DTOs (or call them POCOs). That means, in Microsoft parlance, that they are completely serializable, so you can pass them to any Microsoft .NET component. You can also serialize them to XML, JSON, and so on.
Related
My team develops a web API application using Entity Framework.
The GUI is developed by a separate team.
My question is: how should the models be defined? Should we have two projects, one for the domain models (database entities) and one for the DTOs, which are serializable?
Where should the mapping from DTO to domain model happen, and when should it happen the opposite way?
Moreover, sometimes all the data needs to be sent to the client. Should a DTO be created for those cases as well, or should I return a domain model?
Generally speaking, it's a good idea not to let your entities (database models) leak out of your database layer. However, as with everything in software, this can have its downfalls. One such downfall is that it starts to increase the complexity of your data layer, as it involves mapping your entities to their DTOs within the database layer, ultimately leaving repositories full of similar methods returning different DTO types.
Some people also feel that exposing IQueryables from your data layer is a bad thing, as you start to leak abstractions across layers, though this has always seemed a little extreme to me.
Personally, I favour what I feel is a more pragmatic approach and I prefer to use a tool like AutoMapper to automatically map my entities to my DTOs within the business logic layer.
For example:
// Initial configuration loaded on start up of application and cached by AutoMapper
AutoMapper.Mapper.CreateMap<BlogPostEntity, BlogPostDto>();
// Usage
BlogPostDto blogPostDto = AutoMapper.Mapper.Map<BlogPostDto>(blogPostEntity);
AutoMapper also has the ability to configure more complex mapping, though you should try and avoid this if possible by sticking to flatter DTOs.
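For instance, a flattening configuration might look something like this (the AuthorName/Author properties are invented for illustration; the same older static Mapper API as above is assumed):

    // Hypothetical flattening: pull the nested Author.Name onto the DTO's
    // AuthorName property instead of nesting a full author object.
    AutoMapper.Mapper.CreateMap<BlogPostEntity, BlogPostDto>()
        .ForMember(dto => dto.AuthorName, opt => opt.MapFrom(entity => entity.Author.Name));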
Another great feature of AutoMapper is the ability to automatically project your entities to DTOs. This results in much cleaner SQL, where only the columns within your DTO are queried:
public IEnumerable<BlogPostDto> GetRecentPosts()
{
    IEnumerable<BlogPostDto> blogPosts = this.blogRepository.FindAll()
        .Project(this.mappingEngine)
        .To<BlogPostDto>()
        .ToList();

    return blogPosts;
}
Moreover, sometimes all the data needs to be sent to the client. Should a DTO be created for those cases as well, or should I return a domain model?
DTOs should be created for those cases too. Ultimately, you don't want your client depending on your data schema, which is exactly what will happen if you expose your entities.
Alternatives: Command/Query Segregation
It behooves me to also highlight that there are some alternatives to a typical layered architecture, such as the command/query segregation approach, where you model your commands and queries via a mediator. I won't go into it in too much detail as it's a whole other subject, but it's one I would definitely favour over the layered approach discussed above. It would have you map your entities to your DTOs directly within the modelled command or query.
I would recommend taking a look at MediatR for this. Its author, Jimmy Bogard, who also created AutoMapper, has a video on the same subject.
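As a rough sketch of what such a modelled query could look like with MediatR's async handler shape (the entity/DTO classes here are stand-ins echoing the examples above, and the repository interface is an assumption):

    using System.Threading;
    using System.Threading.Tasks;
    using MediatR;

    // Stand-ins for the types used earlier in this thread.
    public class BlogPostEntity { public int Id; public string Title; }
    public class BlogPostDto { public int Id; public string Title; }

    public interface IBlogRepository
    {
        Task<BlogPostEntity> FindByIdAsync(int id);
    }

    public class GetBlogPostQuery : IRequest<BlogPostDto>
    {
        public int Id { get; set; }
    }

    public class GetBlogPostHandler : IRequestHandler<GetBlogPostQuery, BlogPostDto>
    {
        private readonly IBlogRepository repository;

        public GetBlogPostHandler(IBlogRepository repository)
        {
            this.repository = repository;
        }

        public async Task<BlogPostDto> Handle(GetBlogPostQuery request, CancellationToken cancellationToken)
        {
            BlogPostEntity entity = await repository.FindByIdAsync(request.Id);

            // The entity-to-DTO mapping happens right here, inside the
            // modelled query, rather than in a separate service layer.
            return new BlogPostDto { Id = entity.Id, Title = entity.Title };
        }
    }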
I've had similar requirements in several projects and in most cases we separated at least three layers:
Database Layer
The database objects are simple one-to-one representations of the database tables. Nothing else.
Domain Layer
The domain layer defines entity objects which represent a complete business object. In our definition, an entity aggregates all data which is directly associated with the entity and cannot itself be regarded as a dedicated entity.
An example: in an application which handles invoices, you have the tables invoice and invoice_items. The business logic reads both tables and combines the data into an entity object, Invoice.
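A hypothetical sketch of that separation (the table row classes and entity members are invented for illustration):

    using System.Collections.Generic;
    using System.Linq;

    // Database layer: one-to-one representations of the tables, nothing else.
    public class InvoiceRow { public int Id; public string Number; }
    public class InvoiceItemRow { public int InvoiceId; public string Description; public decimal Amount; }

    // Domain layer: one entity aggregating the data of both tables.
    public class Invoice
    {
        public int Id { get; private set; }
        public string Number { get; private set; }
        public IReadOnlyList<InvoiceItem> Items { get; private set; }
        public decimal Total => Items.Sum(i => i.Amount);

        public Invoice(InvoiceRow row, IEnumerable<InvoiceItemRow> itemRows)
        {
            Id = row.Id;
            Number = row.Number;
            Items = itemRows.Select(i => new InvoiceItem(i.Description, i.Amount)).ToList();
        }
    }

    public class InvoiceItem
    {
        public string Description { get; }
        public decimal Amount { get; }

        public InvoiceItem(string description, decimal amount)
        {
            Description = description;
            Amount = amount;
        }
    }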
Application Layer
In the application layer we define models for every kind of data we want to send to the client. Passing domain entity objects straight through to save time is tempting but strictly prohibited: the risk of publishing data which shouldn't be published is too high. Furthermore, you gain more freedom in the design of your API. That is what helps you meet your last requirement (sending all the data to the client): just build a new model which aggregates the data of all the domain objects you need to send.
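Building on the Invoice sketch above, a hypothetical application-layer model might aggregate just the fields the client needs:

    // Application layer: a model shaped for the client, built from domain
    // objects, so nothing internal leaks out by accident. Names illustrative.
    public class InvoiceOverviewModel
    {
        public string Number { get; set; }
        public decimal Total { get; set; }
        public int ItemCount { get; set; }

        public static InvoiceOverviewModel From(Invoice invoice) => new InvoiceOverviewModel
        {
            Number = invoice.Number,
            Total = invoice.Total,
            ItemCount = invoice.Items.Count
        };
    }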
This is the minimum set of layers we use in all projects. There have been hundreds of cases where we've been very happy to have several abstraction layers, which gave us enough flexibility to enhance and scale an application.
I use Entity Framework as the ORM in my .NET MVC project. I've implemented the repository pattern (generic) to get/save/update/remove DAOs (data access objects). I also have business objects which contain all the business logic. I have, for example, a DAO called Student and a BO (business object) called Student as well. The BO contains the logic, the DAO just the data stored in the DB.
Now I am wondering whether the Student repository should return the business object instead of the DAO.
I could achieve that by using AutoMapper to convert the DAO to a business object before returning it from Repository.Get(), and the same with all the other methods. But is this good practice?
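For reference, the conversion described would look something like this (assuming an AutoMapper map from the DAO to the BO has been configured at start-up; the StudentDao/StudentBo names are used here only to keep the two Student classes apart, and the context member is an assumption):

    // Sketch: the repository returns the business object, not the DAO.
    public StudentBo Get(int id)
    {
        // Load the DAO via Entity Framework...
        StudentDao dao = this.context.Students.Find(id);

        // ...and hand back the mapped business object instead.
        return AutoMapper.Mapper.Map<StudentBo>(dao);
    }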
Update
I have a Data Access Layer project and a project for the business logic. Entity Framework creates its entities in partial classes (in the Data Access project), so I could actually extend the entities with other partial classes. The problem is that I reference the Data Access project in my Business project and don't have access to the logic code within the Data Access project. So I have to put the logic inside the Business project, but since it is not possible to spread partial classes over two projects, I have to go another way. Or do you have a good idea how to structure and solve the problem in a better way?
IMHO there are several goals (some competing):
Make business logic testable in isolation
Design domain objects to match your domain
Decouple data access from everything else
Keep it simple
Can you test your business logic without a database? Probably yes, whether the classes are EF POCO entities or mapped from DAOs.
Do your domain objects match your domain? Are their names well-chosen? Are they always in a valid state? (This can be difficult with a bunch of public read/write properties.) Domain-driven design considerations apply here. (I'm no expert in that.)
Could you swap out EF for Dapper, SQL Server for MongoDB, or current data access for a web service call without changing anything outside the data access layer - with confidence? My suspicion is no. Generic repositories tend to leak IQueryable into other layers. Not everything supports querying, and provider implementations vary. Unit tests typically use LINQ to Objects, which does not behave the same as LINQ to Entities. Also, if you want to extract a web service contract, you would have to look through all classes to find all the queries. See IQueryable is Tight Coupling.
Finally, do you need all of this? If your application's purpose is CRUD data access with no business logic above simple validation, maybe not. These considerations definitely apply to a complex application or site.
Yes, that's totally good practice. Usually you have repository interfaces defined in the domain assembly. These interfaces are used by domain services and implemented in the persistence assembly. Entity Framework allows you to map business entities fluently, without polluting them with attributes or forcing them to inherit from some specific base class (POCO entities). That makes your domain model persistence-ignorant.
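For example, with EF6's fluent API the mapping might look like this (the Student entity and the table/column details are invented for illustration):

    using System.Data.Entity;

    // A plain POCO business entity: no EF attributes, no EF base class.
    public class Student
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class SchoolContext : DbContext
    {
        public DbSet<Student> Students { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // All persistence concerns live here, in the persistence assembly,
            // keeping the domain model itself persistence-ignorant.
            var student = modelBuilder.Entity<Student>();
            student.HasKey(s => s.Id);
            student.Property(s => s.Name).HasMaxLength(100);
            student.ToTable("Students");
        }
    }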
I have developed an application with the following layers:
Data access layer based on Fluent NHibernate
Business rules
Activity layer (more abstract than the business rules; uses some of them)
Service layer based on WCF that sends DTOs to the outside world and receives DTOs
So when a DTO comes back, I can map it to business objects in the service layer and make my application work with business objects. In that case, when some function in the lower layers executes, it does not know anything about the old object, so it becomes hard to handle and verify state changes, and there is also a class explosion of DTO adapters.
On the other hand, if the DTO is mapped to a business object in the higher layers, then when it comes down, the lower layers do not know anything about the service that was called, so they cannot understand how this DTO must change the business objects (one DTO might be used by different services in different ways).
So the question is: what is the real solution?
From your specs, I'm assuming you are aiming for a DDD-based implementation.
First, some assumptions to help map this to more common terminology: I assume your "business rules" layer is used only by your activity layer, and can thus be considered part of the domain layer.
You mention business objects, so I assume you have a domain layer. This might be your "activity layer". This should be the layer that knows how to update objects and return them to the service layer.
The service layer (the "application layer" in DDD terms) should be mapping the DTOs and invoking domain services. MS has a decent diagram here. But basically the workflow should be:
1. Send the DTO to the service layer.
2. The service layer invokes DTO adapters to create domain objects/entities out of the DTOs.
3. The service layer invokes domain services to perform the business logic (invokes rules).
4. The domain services update the domain objects as a result of the business rules.
5. The persistence layer is invoked by the domain services as needed.
6. The domain services return the updated domain objects to the service layer.
7. The service layer maps the domain objects back to DTOs and returns them.
There are of course many variations on this theme, but this should be your starting point.
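As a rough sketch of that workflow in code (every name below is illustrative, not taken from the question):

    // Illustrative stand-ins for the real types.
    public class OrderDto { public int Id; public string Status; }
    public class Order { public int Id; public string Status; }

    // DTO adapter (steps 2 and 7).
    public static class OrderAdapter
    {
        public static Order ToDomain(OrderDto dto) => new Order { Id = dto.Id, Status = dto.Status };
        public static OrderDto ToDto(Order order) => new OrderDto { Id = order.Id, Status = order.Status };
    }

    // Domain service (steps 3-6): applies business rules, invokes persistence
    // as needed, and returns the updated domain object.
    public class OrderDomainService
    {
        public Order Place(Order order)
        {
            order.Status = "Placed"; // stand-in for real business rules
            return order;
        }
    }

    // Service/application layer: receives and returns only DTOs (steps 1-7).
    public class OrderApplicationService
    {
        private readonly OrderDomainService domainService = new OrderDomainService();

        public OrderDto PlaceOrder(OrderDto dto)
        {
            Order order = OrderAdapter.ToDomain(dto);   // step 2: DTO -> domain object
            Order updated = domainService.Place(order); // steps 3-6: business logic
            return OrderAdapter.ToDto(updated);         // step 7: domain object -> DTO
        }
    }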
I am creating a solution from scratch using ASP.NET Web Forms and C#.
I am concerned about the model objects, as I don't want to create duplicate sets of model objects in each layer. What is the best practice for using model objects in a 3-layer architecture in Web Forms?
The structure I have in mind is as follows:
UI
BLL
DAL
Model
The Model project will contain all the model classes, which can be used in any of the layers. I thought this would be useful, as each layer needs access to the model objects. For example:
UI calls a method in the BLL, passing in a model object filled with data.
BLL calls a method in the DAL, passing the object through, which is then saved in the database, etc.
Models can be a cross-cutting concern across your layers, which is a quick way to do it. Or you can create interfaces for your models, so that you can simply flesh out the interface in something like the BLL; this at least stops the models being cross-cutting.
It further depends on whether your models are simple data containers (anemic domain model) or contain behaviour, such as the ability to validate themselves or track their own changes (rich domain model).
You may find that your DAL actually consists of two parts: the boilerplate, never-app-specific code that talks to the database, and the app-specific populate-the-model code. We have this situation. We share the interfaces of our models around; the app-specific DAL code can use these interfaces to push data into and pull data from the models, while the "true" DAL code works with the raw data.
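A tiny sketch of the interface idea (all names invented for illustration):

    // Shared project: only the interface is cross-cutting.
    public interface ICustomerModel
    {
        int Id { get; }
        string Name { get; set; }
    }

    // BLL: the concrete model is fleshed out here, so the class itself is not
    // cross-cutting; the app-specific DAL code sees only ICustomerModel.
    public class CustomerModel : ICustomerModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }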
In a relatively small application, you can share your domain entities all the way up to your presentation layer, but be aware of the coupling this introduces.
If, in your data binding, you expect an entity of type Customer with an Address property that has StreetLine1 and StreetLine2 properties, then all your layers are tightly coupled together, and a change in one layer will probably cause changes in the other layers.
So your decision should be based on the scale of your project and the amount of coupling you can have.
If you go for a low-coupled design, your BLL will use your DAL to retrieve entities and use those entities to execute behaviour. The BLL will then pass Data Transfer Objects to your presentation layer, so there is no coupling between your presentation layer and your domain model.
Look at my answer here: https://stackoverflow.com/a/7474357/559144. This is the usual way I do things and it works well, not only for MVC and Entity Framework. In fact, in MVC the model could be an entity type which only has some of the fields of the real business entities defined in the lower layers; it depends on whether you really need all the fields at the UI level, or only some of them for data rendering and input.
On a related topic, please see this answer, which I posted recently, on avoiding duplication of code and getting the architecture right in a cross-platform client/server system.
I have +1'd the other posters in this thread, as this is not intended to be a full answer, just useful information related to the question.
I have a repository layer that is responsible for data access, which is called by a service layer. The service layer returns DTOs, which are serialized and sent over the wire. More often than not, the services do little more than access a repository and return whatever the repository returns.
But for that to work, the repository has to return an instance of that DTO. Otherwise, you would first have to map the data layer object that the repository returns to a DTO in the service layer and return that. That just seems wasteful.
On top of that, if the creation of the DTOs happens in the service layer, something that might previously have been done in one repository call, and thus one database query, now has to happen with multiple repository calls in the service layer to 'compose' the final DTO. Unless, of course, I create a transport object for use between the data and service layers that can contain such a composed object, which then has to be mapped to a DTO. That just seems wasteful for the sake of purity. But it also feels wrong to have the repository layer return objects that exist only to be sent over the wire.
Short answer: No.
Long answer: the repository is responsible for turning persisted data back into entities (models), and vice versa.
A model is a business model representing a business entity. A DTO, on the other hand, while it looks like a model, is concerned with the transfer of the object between various environments and is in essence a transient object. Usually mappers are responsible for turning a model into a DTO.
So your repository needs to hydrate the entire entity even if it's not being used? This seems very inefficient. – ajbeaven Oct 29 '18 at 23:25
Couldn't you add methods to the repository interface for calls that don't need to hydrate the entire entity? I suppose that could lead to bloated interfaces, which is one of the main arguments against repositories, I think.
To answer the question: I agree with the accepted answer of "no". Repository implementations belong in the persistence layer. The domain layer may need to retrieve deep or shallow objects from the persistence layer, which knows nothing except the interface it must implement. If the domain is constantly asking for a full refrigerator when it only needs butter, then maybe the interface (or perhaps the data model) needs some work.
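To make the comment above concrete, a sketch of such an interface might look like this (all names invented for illustration):

    // Deep read: the full aggregate ("the whole refrigerator").
    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
        // ...order lines, customer, shipping details, and so on
    }

    // Shallow read model: just what the caller actually needs ("the butter").
    public class OrderSummary
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    public interface IOrderRepository
    {
        Order GetById(int id);
        OrderSummary GetSummaryById(int id);
    }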