I've just started working with Entity Framework and I'm confused about how the classes that normally live in the business layer fit in with the entities created by Entity Framework.
When working with classic ADO.NET, I would have a class called Customer, for example, and another class called DALCustomer to handle database interaction. In this structure, the code for calculations and filtering would go in the Customer class, which would also declare an instance of the DAL for saving, updating and deleting.
With Entity Framework, if you have a table called Customer, Entity Framework creates an entity called Customer, and this is where my confusion begins: does this entity remove the need for a Customer class in the business layer? In essence, do all the fields and methods that normally go in the business layer now go in the entity generated by Entity Framework? Or should the business layer still contain a class, say CustomerBL, that holds the fields and methods needed for calculations and filtering, and that still declares an instance of the EF DAL to handle data access?
If there should be a business class, in this case CustomerBL, one other question comes to mind: should the fields created in the Customer entity be recreated in CustomerBL, or should an instance of the Customer entity be declared in CustomerBL so the fields don't have to be declared in two places?
Entity Framework was designed with separation between the data model and the conceptual model in mind. It supports inheritance, entity splitting (not EF Core), table splitting, complex types or owned types, and transparent many-to-many associations (not EF Core), all of which allow molding the domain model to one's needs without being constrained too much by the data store model. EF Core supports shadow properties, which can be used to hide cross-cutting concerns from the exposed class model.
The code-first approach allows working with POCOs in which only a few properties are mapped to data store columns and others serve domain goals. Model-first and Database-first generate partial classes, allowing one to extend the generated code.
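For example, with database-first the generated Customer class can be extended in a second file without touching the generated code (a minimal sketch; the ApplyDiscount rule and the properties are invented):

// Generated by EF (simplified) - lives in the designer file, do not edit.
public partial class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Discount { get; set; }
}

// Hand-written extension in a separate file of the same project:
// domain behavior is added here and survives regeneration.
public partial class Customer
{
    // Hypothetical business rule, not part of the generated code.
    public decimal ApplyDiscount(decimal amount)
    {
        return amount - amount * Discount;
    }
}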
Of course this separation of conceptual model and store model can only succeed to a certain extent. Some things work against this goal of persistence ignorance, for instance (a short sketch of the first three points follows this list):
If lazy loading is desirable, it is necessary to declare navigation properties as virtual, so EF can override them in proxy types. Domain-driven design (DDD) would encourage using virtual only when polymorphism is required.
It is very convenient to have primitive foreign key properties (say, ParentId) accompanying the "real" associations (a Parent reference). Purists consider this a violation of DDD principles.
The EF class model is part of a data access layer and should primarily serve that goal. Therefore, it will contain many reciprocal relationships, in order to benefit from navigation properties as much as possible when writing LINQ queries. These mutual relationships are another violation of DDD principles.
There are a large number of differences between LINQ-to-objects and LINQ-to-entities. You just can't ignore the fact that you are LINQ-ing against a totally different universe than objects in memory. This is referred to as tight coupling, or leaky abstraction.
EF can only map concrete classes, no interfaces.
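To make the first three points concrete, a typical EF entity pair looks something like this (a sketch; all names are invented):

using System.Collections.Generic;

public class Order
{
    public int Id { get; set; }

    // Primitive foreign key accompanying the "real" association.
    public int CustomerId { get; set; }

    // Virtual so EF can override it in a proxy type for lazy loading.
    public virtual Customer Customer { get; set; }
}

public class Customer
{
    public int Id { get; set; }

    // Reciprocal relationship: convenient in LINQ queries,
    // but a violation of strict DDD guidelines.
    public virtual ICollection<Order> Orders { get; set; }
}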
But then... generally I'm happy with using generated EF classes or POCOs from a code-first model as domain classes. So far, I've never seen a frictionless transition from one data store or ORM to another, if such a transition happens at all. Persistence ignorance is a fiction. Idiosyncrasies of the DAL easily leave a footprint in the domain. Only when you have to code against different data stores/models, or when stores/models are expected to change relatively often, does it pay off to minimize this footprint as much as possible or abstract it away completely.
Another factor that may promote EF classes as domain classes is that many applications today have multiple tiers, where (serialized) different view models or DTOs are sent to a client. Using domain classes in UIs hardly ever fits the bill. You may as well use the EF class model as the domain and have services return dedicated models and DTOs as required by a UI or service consumers. Another abstraction layer may be more of a burden than a blessing, if only performance-wise.
In my opinion the whole point of using POCOs as entities that can be persisted is to remove the distinction between "database entities" and "business entities". "Entities" are supposed to be "business entities" that directly can be persisted to and loaded from a data store and therefore act as "database entities" at the same time. By using POCOs the business entities are decoupled from the specific mechanism to interact with a database.
You can move the entities into a separate project - for example - that has no references to any EF assembly and yet use them in a database layer project to manage persistence.
This does not mean that you can design your business entities completely without having the requirements of EF in mind. There are limitations you need to know about to avoid trouble when you come to map the business entities to a database schema using EF, for instance (a minimal example follows this list):
You must make navigation properties (references or collections of references to other entities) virtual to support lazy loading with EF
You cannot use IEnumerable<T> for collections that have to be persisted. It must be ICollection<T> or a more derived type.
It's not easy to persist private properties
The type char is not supported by EF and you can't use it if you want to persist its values
and more...
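A minimal business entity that respects these constraints might look like this (a sketch; the names are invented):

using System.Collections.Generic;

public class Customer
{
    public int Id { get; set; }

    // string rather than char, since char is not supported by EF.
    public string Initial { get; set; }

    // ICollection<T> rather than IEnumerable<T>, virtual to allow
    // lazy loading, and public so EF can persist it.
    public virtual ICollection<Order> Orders { get; set; }
}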
But an additional set of entities is, in my opinion, an additional layer of complexity that should be justified as really necessary, for example because the limitations mentioned above are too tight for your project.
YA2C (Yet another 2 cents :))
I don't know if it's considered a good practice by others, but personally this is how I handled this in the past:
The classes generated by EF are your DAL. For the BL, create a complementary set of classes in which you have the structure you require (for example, merging data from related entities in a one-to-one relationship) and in which other business concerns are handled (custom validation, such as implementing IDataErrorInfo to make the objects play nicely with a WPF UI). Also create classes that contain all the business layer methods relating to a type of entity; these work on the BL instances and convert between EF entities and BL objects.
So, for instance, you have Customer in your database. EF will generate a Customer class, and in the BL there will be a corresponding Customer class (with a prefix or suffix, e.g. CustomerBL) plus a CustomerLogic class. In the BL Customer class you can do whatever is needed to satisfy requirements without having to tamper with the EF entities, and in the CustomerLogic class you have the BL methods (load most valued customers, save customer with extra data, etc.).
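A rough sketch of that split (all names, including ShopContext and the name fields, are invented):

// BL object: shaped for business rules and the UI, not for the database.
public class CustomerBL
{
    public int Id { get; set; }
    public string FullName { get; set; }
}

public class CustomerLogic
{
    // Converts between the EF entity and the BL object.
    public CustomerBL GetCustomer(int id)
    {
        using (var context = new ShopContext()) // hypothetical DbContext
        {
            var entity = context.Customers.Find(id);
            return new CustomerBL
            {
                Id = entity.Id,
                FullName = entity.FirstName + " " + entity.LastName
            };
        }
    }
}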
Now, this keeps you loosely coupled to the data source implementation. Another example of why this benefited me in the past (in a WPF project): you can implement IDataErrorInfo and put validation logic in the CustomerBL classes, so that when you bind the object to a create/edit form in the UI you benefit from WPF's built-in functionality.
...My 2 cents. I am also curious to find out what the best practice is and what other solutions/points of view exist.
Also perhaps related to this topic - Code-first vs Model/Database-first
This topic may be a bit old, but this may help. Andras Nemes pointed out in his blog the concern of using DDD (domain-driven design) over technology-driven design such as EF, MVC, etc.
http://dotnetcodr.com/2013/09/12/a-model-net-web-service-based-on-domain-driven-design-part-1-introduction/
I put my methods in the business logic layer and return the results as view models it defines, like:
namespace Template.BusinessLogic
{
    public interface IApplicantBusiness
    {
        List<Template.Model.ApplicantView> GetAllApplicants();

        void InsertApplicant(Template.Model.ApplicantView applicant);
    }
}
My team develops a Web API application using Entity Framework.
The GUI is developed by a separate team.
My question is: how should the models be defined? Should we have two projects, one for the domain models (database entities) and one for the DTOs, which are serializable?
Where should the mapping from DTO to domain model happen, and when should it happen the opposite way?
Moreover, sometimes all the data needs to be sent to the clients. Should a DTO be created for those cases as well, or should I return a domain model?
Generally speaking, it's a good idea not to let your entities (database models) leak out of your database layer. However, as with everything in software, this can have its downfalls. One such downfall is that it increases the complexity of your data layer, as it now involves mapping your entities to their DTOs within the database layer, ultimately leaving repositories full of similar methods returning different DTO types.
Some people also feel that exposing IQueryables from your data layer is a bad thing, as you start to leak abstractions into other layers, though this has always seemed a little extreme to me.
Personally, I favour what I feel is a more pragmatic approach and I prefer to use a tool like AutoMapper to automatically map my entities to my DTOs within the business logic layer.
For example:
// Initial configuration loaded on start up of application and cached by AutoMapper
AutoMapper.Mapper.CreateMap<BlogPostEntity, BlogPostDto>();
// Usage
BlogPostDto blogPostDto = AutoMapper.Mapper.Map<BlogPostDto>(blogPostEntity);
AutoMapper also has the ability to configure more complex mapping, though you should try and avoid this if possible by sticking to flatter DTOs.
In addition, another great feature of AutoMapper is the ability to automatically project your entities to DTOs. This results in much cleaner SQL where only the columns within your DTO are queried:
public IEnumerable<BlogPostDto> GetRecentPosts()
{
    IEnumerable<BlogPostDto> blogPosts = this.blogRepository.FindAll()
        .Project(this.mappingEngine)
        .To<BlogPostDto>()
        .ToList();

    return blogPosts;
}
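As a side note, the Project().To<T>() syntax comes from older AutoMapper releases; newer versions expose the same feature as ProjectTo. A sketch, assuming the MapperConfiguration is held in this.configuration:

using AutoMapper.QueryableExtensions;

public IEnumerable<BlogPostDto> GetRecentPosts()
{
    // ProjectTo translates the mapping into the SQL projection itself,
    // so only the DTO's columns are queried.
    return this.blogRepository.FindAll()
        .ProjectTo<BlogPostDto>(this.configuration)
        .ToList();
}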
Moreover, sometimes all the data needs to be sent to the clients. Should a DTO be created for those cases as well, or should I return a domain model?
DTOs should be created for those. Ultimately you don't want your client depending on your data schema, which is exactly what will happen if you expose your entities.
Alternatives: Command/Query Segregation
It behooves me to also highlight that there are some other alternatives to a typical layered architecture, such as the Command/Query Segregation approach, where you model your commands and queries via a mediator. I won't go into too much detail as it's a whole other subject, but it's one I would definitely favour over the layered approach discussed above. This would result in you mapping your entities to your DTOs directly within the modelled command or query.
I would recommend taking a look at MediatR for this. Its author, Jimmy Bogard, who also created AutoMapper, has a video talking about the same subject.
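To give a flavour of what that looks like, a query and its handler might be modelled like this (a sketch; BlogContext, BlogPosts and PublishedOn are invented names, and the mapping to DTOs happens right inside the handler):

using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// The query object carries the request parameters (none here).
public class GetRecentPostsQuery : IRequest<List<BlogPostDto>> { }

public class GetRecentPostsHandler : IRequestHandler<GetRecentPostsQuery, List<BlogPostDto>>
{
    private readonly BlogContext context; // hypothetical DbContext
    private readonly IMapper mapper;

    public GetRecentPostsHandler(BlogContext context, IMapper mapper)
    {
        this.context = context;
        this.mapper = mapper;
    }

    public Task<List<BlogPostDto>> Handle(GetRecentPostsQuery request, CancellationToken cancellationToken)
    {
        var posts = context.BlogPosts
            .OrderByDescending(p => p.PublishedOn)
            .Take(10)
            .ToList();

        // Entities never leave the handler; only DTOs do.
        return Task.FromResult(mapper.Map<List<BlogPostDto>>(posts));
    }
}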
I've had similar requirements in several projects and in most cases we separated at least three layers:
Database Layer
The database objects are simple one-to-one representations of the database tables. Nothing else.
Domain Layer
The domain layer defines entity objects which represent a complete business object. In our definition, an entity aggregates all data which is directly associated with the entity and cannot be regarded as a dedicated entity of its own.
An example: in an application which handles invoices you have the tables invoice and invoice_items. The business logic reads both tables and combines the data into an entity object Invoice.
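In code, the aggregated entity might look like this (a sketch with invented property names; the business logic fills Items from the invoice_items rows):

using System;
using System.Collections.Generic;
using System.Linq;

// Domain layer: one entity aggregating the invoice and invoice_items tables.
public class Invoice
{
    public int Id { get; set; }
    public DateTime Date { get; set; }
    public IList<InvoiceItem> Items { get; set; }

    // Derived from the aggregated items; not stored in any column.
    public decimal Total
    {
        get { return Items.Sum(item => item.Amount); }
    }
}

public class InvoiceItem
{
    public string Description { get; set; }
    public decimal Amount { get; set; }
}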
Application Layer
In the application layer we define models for every kind of data we want to send to the client. Passing domain entity objects through to save time is tempting but strictly prohibited: the risk of publishing data which shouldn't be published is too high. Furthermore, you gain more freedom regarding the design of your API. That's what helps you meet your last requirement (send all data to the client): just build a new model which aggregates the data of all the domain objects you need to send.
This is the minimum set of layers we use in all projects. There were hundreds of cases where we've been very happy to have several abstraction layers which gave us enough possibilities to enhance and scale an application.
I am using the code-first approach and there are some mismatches between my model for code first (DAL) and my domain model (BLL). I want my data model to have annotations, properties, configurations, etc. related to the database only, and my domain model entities to be free of them (and vice versa), to obey separation of concerns.
How do I go about handling this situation in my application? This is more a logical question than a technical one, I guess. I've asked before in many places but have no concrete lead yet. Hope some suggestions from SO will help.
In my experience, because of the rich mapping possibilities of Entity Framework, you don't have to separate a data access and business logic layer at all; you just have to use the Fluent API of Entity Framework. In one of my current projects we have more than 150 classes, with inheritance hierarchies and all, but we can still use it without "duplicating" the objects.
Some good introductions about the fluent API can be found here:
http://msdn.microsoft.com/en-us/data/hh134698.aspx
http://msdn.microsoft.com/en-us/magazine/hh852588.aspx
About the separation: we simply use a Domain project and a Persistence.EntityFramework project where the latter contains all the mappings thus the Domain does not reference the EntityFramework.dll at all.
And if you have specific mapping questions, e.g. the ones you mentioned as the reasons you created two layers (one for the DAL and the other for the BL), just ask them.
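For illustration, one of those mapping classes in the Persistence.EntityFramework project might look like this (an EF6-style sketch; Customer and its properties are invented, and the Customer class itself lives in the Domain project with no EF dependency):

using System.Data.Entity.ModelConfiguration;

public class CustomerMap : EntityTypeConfiguration<Customer>
{
    public CustomerMap()
    {
        // All store details stay here, out of the Domain project.
        ToTable("Customer");
        HasKey(c => c.Id);
        Property(c => c.Name).HasMaxLength(100).IsRequired();
    }
}

The mapping is then registered in the context's OnModelCreating via modelBuilder.Configurations.Add(new CustomerMap());.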
I would go with AutoMapper. It can help you to reduce the boilerplate code needed to convert from one object to another.
You can find it here:
https://github.com/AutoMapper/AutoMapper
Edit:
Put your domain models either in the BLL or in a separate project. Add a reference to the BLL or that separate project in the DAL (and also reference the new project in the BLL), and use AutoMapper in the DAL. That way only domain models will leave the DAL.
You normally have:
A domain model (entities), which is what the O/RM (Entity Framework) uses;
A Data Transfer Object (DTO) model, used for sending data to a view (in MVC) or via web services, etc.
I agree with Andras: the best way to go from one (domain) to the other (DTO) is by using AutoMapper. Of course, you can also do it by hand.
One thing you need to realize is that there is no need for a 1:1 mapping between the domain and the DTO; the DTO can contain denormalized or calculated properties as well.
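For example, a DTO can flatten an association and add a calculated property in the mapping configuration (a sketch; Order, OrderDto and their members are hypothetical):

using System.Linq;

// OrderDto flattens the Customer association and adds a Total
// that is stored in no table.
Mapper.CreateMap<Order, OrderDto>()
    .ForMember(d => d.CustomerName, opt => opt.MapFrom(s => s.Customer.Name))
    .ForMember(d => d.Total, opt => opt.MapFrom(s => s.Lines.Sum(l => l.Price * l.Quantity)));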
I am trying to create a system that allows you to switch between multiple data sources, e.g. from Entity Framework to Dapper, and I am trying to find the best approach to do this.
At the moment I have different projects for different data layers, e.g. Data.EF for Entity Framework and Data.Dapper for Dapper. I have used a database-first approach, but when it creates the models the generated information is coupled together and not easy to refactor, e.g. to separate the models.
I have a project called Models; this holds domain and view models. I was thinking of creating Data.Core and following the repository pattern. But doing this adds an extra layer, so I would have Presentation / Business / Repository / Data.
I would like to know the best structure for this approach. Should I also take a code-first approach to creating my database? That would help separate concerns and improve abstraction. This is quite a big application, so getting the structure right is essential.
I'd suggest factoring your data interfaces either to the model through repository interfaces for your entities or to an infrastructure project. (I think the latter was your rationale behind creating a Data.Core project.)
Each data source will then implement the very same set of interfaces, and you can easily switch between them, even dynamically using dependency injection.
For instance, using repositories:
Model
  \_ Entities
       Entity
  \_ Repositories
       IEntityRepository

Data.EF
  EntityRepository : Model.IEntityRepository

Data.Dapper
  EntityRepository : Model.IEntityRepository
Then in your business you won't need to even reference Data.EF or Data.Dapper: you can work with IEntityRepository and have that reference injected dynamically.
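A minimal sketch of the contract and the composition-root wiring (names are invented; the container syntax is ASP.NET Core's, but any DI library works the same way):

// In the Model project: the contract the business layer codes against.
public interface IOrderRepository
{
    Order FindById(int id);
    IEnumerable<Order> FindAll();
    void Save(Order order);
}

// In the composition root: choose the implementation in one place.
services.AddScoped<IOrderRepository, Data.EF.OrderRepository>();
// services.AddScoped<IOrderRepository, Data.Dapper.OrderRepository>();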
I think your approach is correct. I'd say Presentation / Business / Repository / Data is pretty standard these days.
I'd say the code-first approach using POCOs is the preferred option in the industry today. I would advise starting with a project containing your POCO data structures, with any logic in them, and taking it from there. The advantage of this is that your objects model the domain more naturally. If you start with a db-centric approach, the problem is that, if you are not careful, you may end up with objects more akin to SQL relational tables than to the real model. This was painfully evident in the first versions of .NET, where it was encouraged to use DataSets tightly coupled with the db, which often caused problems when working in the business layer.
If needed you can do any complex mapping between the business objects and the db objects in the repository layer. You can use a proxy and/or a unit of work if you need to.
I would suggest you create your domain objects, use the code-first approach and also apply the repository pattern
Yes, the repository pattern does bring in an extra layer. Have a look at this post for more detailed information: Difference between Repository and Service Layer?
RE: code-first approach to create my database
It doesn't matter how big your application is; it is a question of what else you intend to use the database for. If this database is simply a repository for this application, then using code-first is fine, as you are simply storing your code objects. However, if you are using this database as an integration point between applications, then you may wish to design the database separately from the application models.
I use Entity Framework as the ORM in my .NET MVC project. I've implemented the (generic) repository pattern to get/save/update/remove DAOs (data access objects). I also have business objects which contain all the business logic. I have, for example, a DAO called Student and a BO (business object) called Student as well. The BO contains the logic, the DAO just the data stored in the DB.
Now I am wondering if the Student-Repository should return the Business-Object instead of the DAO?
I could achieve that using AutoMapper by converting the DAO to a business object before returning it from Repository.Get(), and the same for all the other methods. But is this good practice?
Update
I have a data access layer project and a project for the business logic. Entity Framework creates its entities as partial classes (in the data access project), so I could actually extend the entities with further partial classes. The problem is that I reference the data access project in my business project, and from within the data access project I don't have access to the logic code. So I have to put the logic inside the business project, but since partial classes cannot span two projects I have to go another way... or do you have a good idea how to structure and solve the problem in a better way?
IMHO there are several goals (some competing):
Make business logic testable in isolation
Design domain objects to match your domain
Decouple data access from everything else
Keep it simple
Can you test your business logic without a database? Probably yes, whether the classes are EF POCO entities or mapped from DAOs.
Do your domain objects match your domain? Are their names well-chosen? Are they always in a valid state? (This can be difficult with a bunch of public read/write properties.) Domain-driven design considerations apply here. (I'm no expert in that.)
Could you swap out EF for Dapper, SQL Server for MongoDB, or current data access for a web service call without changing anything outside the data access layer - with confidence? My suspicion is no. Generic repositories tend to leak IQueryable into other layers. Not everything supports querying, and provider implementations vary. Unit tests typically use LINQ to Objects, which does not behave the same as LINQ to Entities. Also, if you want to extract a web service contract, you would have to look through all classes to find all the queries. See IQueryable is Tight Coupling.
Finally, do you need all of this? If your application's purpose is CRUD data access with no business logic above simple validation, maybe not. These considerations definitely apply to a complex application or site.
Yes, that's totally good practice. Usually you have repository interfaces defined in domain assembly. These interfaces are used by domain services, and implemented in persistence assembly. Entity Framework allows you to map business entities fluently, without polluting them with attributes or forcing them to inherit from some specific base class (POCO entities). That makes your domain model Persistence Ignorant.
I'm designing an N-tier application and I came across a difficulty which you might have a solution to. The presentation layer is MVC.
My ORM is LinqToSQL; it lives in a separate project which serves repositories.
Each repository has an interface and at least one concrete implementation.
Repositories have the following methods: FindAll(), Save(T entity), Delete(int id)
FindAll() returns IQueryable of some type, which means that it returns queries to which I can apply filters.
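In code, the contract looks roughly like this (simplified):

public interface IRepository<T> where T : class
{
    IQueryable<T> FindAll(); // a composable query, not materialized results
    void Save(T entity);
    void Delete(int id);
}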
ORM mapping has been carried out using Database First methodology, where tables were created first and then classes were generated by SQL Metal.
I have added a Pipeline layer which works with repositories. It applies further filters to queries. E.g. OrderRepository.FindAll().Where(o => o.CustomerId == 10)
Pipeline also returns IQueryable of some type, which means that I can pass it further up the layer and do more stuff with it.
At this point I would like to move to the BusinessLogic layer, but I don't want to work with entity models any longer; I want to convert the entity model to a domain model. This means I can add validation to the model and use that model in the presentation layer. The model can't be defined in the MVC project, as it would then be dependent on the presentation layer, so that's a no.
I'm fairly certain that the business logic (behaviour) and the model must be stored separately from the pipeline, data and presentation layers. The question is: where?
For example, a pipeline has three methods:
1. FindByCustomerId
2. FindByOrderId
3. FindBySomethingElse
All these methods return IQueryable of Order. I need to convert this to a domain model, but I don't want to do it in each method, as that won't be maintainable.
I feel that this model is fairly robust and scalable. I just don't see what the best place is for mapping from entities to domain models and vice versa.
Thank you
First of all, if you are applying Domain-Driven Design principles here, you must not have a BusinessLogic layer in your application at all: all business logic should live inside your domain model.
But that is quite hard to achieve using LinqToSQL, because it does not support inheritance mapping and you would have to deal with partial classes to put business logic into your domain. So I would strongly recommend considering a move from LinqToSQL to NHibernate or Entity Framework Code First. In that case you also won't have to convert your persistence model into your domain model and vice versa.
If you still want to do the conversion, you could take a look at AutoMapper.
From a domain driven point of view you would need a factory to convert your 'database entity' into a domain model entity.
When you're thinking of turning the "database entities" into domain model entities at the end of your pipeline, you should realize that after the conversion to domain model entities (a projection) you won't be able to use the IQueryable functionality any more, as the projection will trigger execution of your expression tree. For example, if you call FindAll for your Customer database entity and then convert the IQueryable to (or project it onto) a Customer domain entity, it will execute, requesting the contents of your entire table.
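A sketch of the consequence (CustomerRow, customerRepository and CustomerFactory are invented names): filters must be applied while the sequence is still an IQueryable, because the hand-written mapping forces the rest of the work into memory.

// Filters compose while the sequence is still IQueryable and run in the database:
IQueryable<CustomerRow> query = customerRepository.FindAll()
    .Where(c => c.Country == "NL");

// The factory call can't be translated to SQL, so later operators run in memory:
List<Customer> customers = query
    .AsEnumerable()                               // switch to LINQ to Objects
    .Select(row => CustomerFactory.Create(row))   // map row to domain entity
    .ToList();                                    // the database query executes here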
This is how I do my N-Tier projects. This architecture has great separation of concerns. It sounds like you are already headed in this direction.
In the MVC project are all your usual objects (Controllers, ViewModels, Views, helpers, etc.). Fairly straightforward. All views are strongly typed. Plus my modified T4 templates that generate Controllers, Views, and ViewModels.
In the Business Model project I have all my business objects and rules, including the interfaces that define the functionality of the data repositories. Instead of having one repository for every business object/table, I prefer to group mine by functionality: all objects related to a blog go in one repository, all objects related to a photo gallery in a separate repository, and logging may be in a third.
You could place your pipeline layer here.
In the Data project I implement those data repository interfaces. You can use Linq2SQL without having to use the partial classes. Extending those Linq2SQL partial classes means you have tied your ORM to your domain model, something you really don't want to do. You want to leave those generated data classes in the data domain. Here is an example Linq2SQL select that returns a BusinessModel object:
from t in Table
where t.Field == keyField
select new BusinessModel.DataObject
{
    Id = t.Id,
    Field1 = t.Field1,
    Field2 = t.Field2
}
If I were you I would look at Entity Framework 4.1 using the code-first approach, or use NHibernate. Either of these will map your data model to the domain model. Then, to map the domain models to the view models, you could use AutoMapper, write custom code, or write a T4 template that generates the mapping code for you.
You could take the code generated by the dbml file as a starting point for your business objects.
Further to xelibrion's comments, you could have a look at LightSpeed for your ORM needs. You are currently using LinqToSQL, so you should find LightSpeed very straightforward, since it uses the same idea.
http://www.mindscapehq.com/products/lightspeed
If you can get your data layer to produce models that better match the form your higher levels want, then hopefully you can simplify things. The less complexity in the system, the less scope for bugs.
All these methods return IQueryable of Order. I need to convert this to a domain model, but I don't want to do it in each method, as that won't be maintainable.
This is not a true assessment and is probably blocking you from seeing the proper solution.