Handling Mismatch Between Domain and Code First Entities - C#

I am using the Code First approach, and there is some mismatch between my model for Code First (DAL) and my domain model (BLL). I imagine my data model having annotations, properties, configurations, etc. related only to the database, and my domain model entities having none of that, and vice versa, to obey separation of concerns.
How do I go about handling this situation in my application? This is more a logical question than a technical one, I guess. I have asked this in many places before but have no concrete lead yet. I hope some suggestions from SO will help.

In my experience, because of the rich mapping possibilities of Entity Framework, you don't have to separate a data access and a business logic layer at all; you just have to use the Fluent API of Entity Framework. In one of my current projects we have more than 150 classes with inheritance hierarchies and the like, and we can still use it without "duplicating" the objects.
Some good introductions to the Fluent API can be found here:
http://msdn.microsoft.com/en-us/data/hh134698.aspx
http://msdn.microsoft.com/en-us/magazine/hh852588.aspx
About the separation: we simply use a Domain project and a Persistence.EntityFramework project, where the latter contains all the mappings; thus the Domain does not reference EntityFramework.dll at all.
And if you have some specific mapping questions, e.g. the ones you mentioned that are the reason you created two layers (one for the DAL, the other for the BLL), just ask them.

I would go with AutoMapper. It can help you reduce the boilerplate code needed to convert from one object to another.
You can find it here:
https://github.com/AutoMapper/AutoMapper
Edit:
Put your domain models either in the BLL or in a separate project, add a reference to the BLL or to this separate project in the DAL (and also reference the new project in the BLL), and use AutoMapper in the DAL. That way only domain models will leave the DAL.

You normally have:
A domain model (entities), which is what the O/RM (Entity Framework) uses;
A Data Transfer Object (DTO) model, used for sending data to a view (in MVC), by web services, etc.
I agree with Andras: the best way to go from one (domain) to the other (DTO) is by using AutoMapper. Of course, you can also do it by hand.
One thing that you need to realize is that there is no need for a 1-1 mapping between the domain and the DTO, the DTO can contain denormalized or calculated properties as well.
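As a sketch of that last point (the Order/OrderDto types here are hypothetical, not from the question), a DTO can flatten a navigation property and carry a calculated value that the domain model stores as separate pieces:

```csharp
using System.Collections.Generic;
using System.Linq;

// Domain entities, as the O/RM would load them
public class Customer { public string Name { get; set; } }
public class OrderLine { public decimal Price { get; set; } public int Quantity { get; set; } }
public class Order
{
    public int Id { get; set; }
    public Customer Customer { get; set; }
    public List<OrderLine> Lines { get; set; } = new List<OrderLine>();
}

// DTO with a denormalized property (CustomerName) and a calculated one (Total)
public class OrderDto
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public static class OrderMapper
{
    // Hand-written mapping; AutoMapper could flatten Customer.Name to
    // CustomerName by convention and compute Total via a custom MapFrom.
    public static OrderDto ToDto(Order order) => new OrderDto
    {
        Id = order.Id,
        CustomerName = order.Customer.Name,
        Total = order.Lines.Sum(l => l.Price * l.Quantity)
    };
}
```

Note there is one Order but the DTO shape is free to diverge from it, which is exactly the point of keeping the two models apart.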

Separation of models in a Web API application

My team develops a Web API application using Entity Framework; the GUI is developed by a separate team.
My question is: how should the models be defined? Should we have two projects - one for domain models (database entities) and one for DTOs, which are serializable?
Where should the parsing from DTO to domain model happen, and when should it happen the opposite way?
Moreover, sometimes all the data needs to be sent to the clients. Should a DTO be created for those cases as well? Or should I return a domain model?
Generally speaking, it's a good idea not to let your entities (database models) leak out of your database layer. However, as with everything in software, this can have its downfalls. One such downfall is that it increases the complexity of your data layer, as it involves mapping your entities to their DTOs within the database layer, ultimately leaving repositories full of similar methods returning different DTO types.
Some people also feel that exposing IQueryables from your data layer is a bad thing, as you start to leak abstractions to different layers - though this has always seemed a little extreme to me.
Personally, I favour what I feel is a more pragmatic approach and I prefer to use a tool like AutoMapper to automatically map my entities to my DTOs within the business logic layer.
For example:
// Initial configuration loaded on start up of application and cached by AutoMapper
AutoMapper.Mapper.CreateMap<BlogPostEntity, BlogPostDto>();
// Usage
BlogPostDto blogPostDto = AutoMapper.Mapper.Map<BlogPostDto>(blogPostEntity);
AutoMapper also has the ability to configure more complex mappings, though you should try to avoid this where possible by sticking to flatter DTOs.
In addition, another great feature of AutoMapper is the ability to automatically project your entities to DTOs. This results in much cleaner SQL, where only the columns within your DTO are queried:
public IEnumerable<BlogPostDto> GetRecentPosts()
{
    IEnumerable<BlogPostDto> blogPosts = this.blogRepository.FindAll()
        .Project(this.mappingEngine)
        .To<BlogPostDto>()
        .ToList();
    return blogPosts;
}
Moreover, sometimes all the data needs to be sent to the clients. Should a DTO be created for those cases as well? Or should I return a domain model?
DTOs should be created for those. Ultimately you don't want your client depending on your data schema, which is exactly what will happen if you expose your entities.
Alternatives: Command/Query Segregation
It behooves me to also highlight that there are some alternatives to a typical layered architecture, such as the Command/Query Segregation approach, where you model your commands and queries via a mediator. I won't go into too much detail as it's a whole other subject, but it's one I would definitely favour over the layered approach discussed above. This would result in you mapping your entities to your DTOs directly within the modelled command or query.
I would recommend taking a look at MediatR for this. Its author, Jimmy Bogard, who also created AutoMapper, also has a video talking about the same subject.
I've had similar requirements in several projects and in most cases we separated at least three layers:
Database Layer
The database objects are simple one-to-one representations of the database tables. Nothing else.
Domain Layer
The domain layer defines entity objects which represent a complete business object. In our definition, an entity aggregates all data which is directly associated with the entity and cannot be regarded as a dedicated entity itself.
An example: in an application which handles invoices you have the tables invoice and invoice_items. The business logic reads both tables and combines the data into an entity object Invoice.
Application Layer
In the application layer we define models for every kind of data we want to send to the client. Passing domain entity objects straight through to save time is tempting but strictly prohibited: the risk of publishing data which shouldn't be published is too high. Furthermore, you gain more freedom regarding the design of your API. That's what helps you meet your last requirement (sending all the data to the client): just build a new model which aggregates the data of all the domain objects you need to send.
This is the minimum set of layers we use in all projects. There have been hundreds of cases where we were very happy to have several abstraction layers, which gave us enough possibilities to enhance and scale an application.
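The invoice example above might be sketched like this (the record and property names are illustrative, not taken from an actual schema):

```csharp
using System.Collections.Generic;
using System.Linq;

// Database layer: one-to-one representations of the invoice and invoice_items tables
public class InvoiceRecord { public int Id; public string Number; }
public class InvoiceItemRecord { public int InvoiceId; public string Text; public decimal Amount; }

// Domain layer: a single entity that aggregates both tables
public class InvoiceItem
{
    public string Text { get; }
    public decimal Amount { get; }
    public InvoiceItem(string text, decimal amount) { Text = text; Amount = amount; }
}

public class Invoice
{
    public string Number { get; }
    public IReadOnlyList<InvoiceItem> Items { get; }
    public decimal Total => Items.Sum(i => i.Amount);

    // The business logic reads both tables and combines the rows here
    public Invoice(InvoiceRecord record, IEnumerable<InvoiceItemRecord> items)
    {
        Number = record.Number;
        Items = items.Select(i => new InvoiceItem(i.Text, i.Amount)).ToList();
    }
}
```

The application layer would then expose yet another model shaped for the client, built from one or more of these domain entities.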

Data Layer Architecture with multiple datasources

I am trying to create a system that allows you to switch between multiple data sources, e.g. switching from Entity Framework to Dapper. I am trying to find the best approach to do this.
At the moment I have different projects for different data layers, e.g. Data.EF for Entity Framework and Data.Dapper for Dapper. I have used a database-first approach, but when the models are generated the information is coupled together and not easy to refactor, e.g. separating the models.
I have a project called Models, which holds domain and view models, and I was thinking of creating Data.Core and following the repository pattern. But doing this will add an extra layer, so I would have Presentation / Business / Repository / Data.
I would like to know the best structure for this approach. Should I also use a code-first approach to create my database? That helps separate concerns and improves abstraction. This is quite a big application, so getting the structure right is essential.
I'd suggest factoring your data interfaces out, either into the model as repository interfaces for your entities, or into an infrastructure project. (I think the latter was your rationale behind creating a Data.Core project.)
Each data source will then implement the very same set of interfaces, and you can easily switch between them, even dynamically using dependency injection.
For instance, using repositories:
Model
\_ Entities
     Entity
\_ Repositories
     IEntityRepository
Data.EF
\_ EntityRepository : Model.IEntityRepository
Data.Dapper
\_ EntityRepository : Model.IEntityRepository
Then in your business you won't need to even reference Data.EF or Data.Dapper: you can work with IEntityRepository and have that reference injected dynamically.
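The layout above might look like this in code (a minimal sketch; the repository bodies are stubbed out, and in a real Data.EF project they would query a DbContext):

```csharp
using System.Collections.Generic;

namespace Model.Entities
{
    public class Entity { public int Id { get; set; } }
}

namespace Model.Repositories
{
    // The business layer programs against this; it never sees EF or Dapper
    public interface IEntityRepository
    {
        Model.Entities.Entity Find(int id);
        IEnumerable<Model.Entities.Entity> FindAll();
    }
}

namespace Data.EF
{
    // Stub: a real implementation would query a DbContext here.
    // Data.Dapper.EntityRepository would implement the same interface with Dapper calls.
    public class EntityRepository : Model.Repositories.IEntityRepository
    {
        public Model.Entities.Entity Find(int id) => null; // e.g. context.Entities.Find(id)
        public IEnumerable<Model.Entities.Entity> FindAll() =>
            new List<Model.Entities.Entity>();             // e.g. context.Entities.ToList()
    }
}

// Business code receives the interface via constructor injection,
// so swapping Data.EF for Data.Dapper requires no change here.
public class EntityService
{
    private readonly Model.Repositories.IEntityRepository repository;
    public EntityService(Model.Repositories.IEntityRepository repository)
    {
        this.repository = repository;
    }
}
```

With a DI container, the choice of implementation becomes a single registration line rather than a code change.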
I think your approach is correct. I'd say Presentation / Business / Repository / Data is pretty standard these days.
I'd say the code-first approach using POCOs is the preferred option in the industry today. I would advise starting by creating a project containing your POCO data structures, with any logic in them, and taking it from there. The advantage of this is that your objects model the domain more naturally. If you start with a DB-centric approach, the problem is that, if you are not careful, you may end up with objects more akin to SQL relational tables than to the real model. This was painfully evident in the first versions of .NET, where it was encouraged to use DataSets tightly coupled with the DB, and that often caused problems when working with them in the business layer.
If needed you can do any complex mapping between the business objects and the db objects in the repository layer. You can use a proxy and/or a unit of work if you need to.
I would suggest you create your domain objects, use the code-first approach, and also apply the repository pattern.
Yes, the repository pattern does bring in an extra layer. Have a look at this post for more detailed information: Difference between Repository and Service Layer?
RE: code-first approach to create my database
It doesn't matter how big your application is; it is a question of what else you intend to use the database for. If this database is simply a repository for this application, then using code-first is fine, as you are simply storing your code objects. However, if you are using this database as an integration point between applications, then you may wish to design the database separately from the application models.

.NET N-Tier Architecture: What do I do about the Model objects?

I am creating a solution from scratch, using ASP.NET Web forms C#.
I am concerned about the model objects, as I don't want to create duplicate sets of model objects in each layer. What is the best practice for using model objects in a 3-layer architecture in Web Forms?
The structure I have in mind is as follows:
UI
BLL
DAL
Model
The Model will contain all the model classes that can be used in each section of the layers. I thought this would be useful as each layer needs access to the model objects. For example:
UI calls a method in the BLL, passing in a model object filled with data.
BLL calls a method in the DAL, passing through the object, which is saved in the database, etc.
Thanks
Models can be a cross-cutting concern across your layers, which is a quick way to do it. Or you can create interfaces for your models, so that you could simply flesh out the interface in something like the BLL - this at least stops the models being cross-cutting.
It further depends on whether your models are simple data containers (anemic domain model) or contain behaviour, such as the ability to validate themselves or track their own changes (rich domain model).
You can find that your DAL actually consists of two parts: the boilerplate code to talk to the database, which is never specific to one app, and the app-specific populate-the-model code. We have this situation. We share interfaces for our models around; the app-specific DAL code can use these interfaces to push and pull data from the models, but the "true" DAL code works with the raw stuff.
In a relatively small application, you can share your Domain Entities all the way up to your Presentation layer but be aware of the coupling this introduces.
If, in your data binding, you expect an entity of type Customer with an Address property that has StreetLine1 and StreetLine2 properties, then all your layers are tightly coupled together, and a change in one layer will probably cause changes in other layers.
So your decision should be based on the scale of your project and the amount of coupling you can live with.
If you go for a low-coupled design, then your BLL will use your DAL to retrieve entities and use those entities to execute behaviour. The BLL will then pass Data Transfer Objects to your presentation layer, so there is no coupling between your presentation layer and your domain model.
Look at my answer here: https://stackoverflow.com/a/7474357/559144. This is the usual way I do things and it works well, and not only for MVC and Entity Framework. In fact, in MVC the model could be an entity type which only has some of the fields contained by the real business entities defined in the lower layers; it depends on whether you really, absolutely need all the fields at the UI level as well, or only some of them for data rendering and input.
As a related topic, please see this related answer which I posted recently on avoiding duplication of code and correct architecture in a cross-platform client/server system.
I have +1'd the other posters in this thread as this is not intended to be a full answer, just useful information related to the question.
Best regards,

Entity Framework and 3 layer architecture

I have a three-layer architecture program. The questions are:
1. Is the data access layer the EF layer?
2. If I want to use an entity generated by EF from the presentation layer, I have to reference the data access layer, but this violates the principles of 3-layered architecture.
Microsoft Spain released pretty good documentation, a guide and a sample application for N-layered applications on CodePlex; you can look it up here:
http://microsoftnlayerapp.codeplex.com/
You will find many directions and helpful implementation patterns there.
hth.
Yes, EF would be your Data Access Layer.
With EF you can use T4 templates with POCO support; you can then extract these POCOs into a separate DLL, and this will be referenced from all of your layers.
What type of application are you building? If you are building an ASP.NET MVC 3 application, you can have your view be the presentation layer, your model be your data access (which can use EF), and the controller and/or action filters contain your business logic. In this scenario you will be using your EF model in the presentation layer but still satisfying the separation of concerns principle.
EF does two things:
1) Generates a domain model for you (optional, but commonly used).
2) Gives you the ability to query/modify your database via that domain model.
This can give the appearance of blurring the lines between the domain model and data access, but the two are indeed separate.
As long as you're not doing things like creating object contexts and writing queries directly in your presentation tier, then IMHO you are not breaking the abstraction - the only thing you are "breaking" is the fact that you will need to reference System.Data.Objects (or whatever the EF DLL is) in your presentation project(s), which is just a physical artifact, unless you go down the route suggested by Jethro of generating your domain model into a separate project.
For a three-tier architecture, I would consider doing the abstraction using a Domain Model and Data Model pattern rather than using EF directly from the presentation layer.
So the idea is that your data model has EF POCO classes, with repositories which know how to perform the various CRUD operations on these classes.
Your domain model would have models related to your client (so you can put various view models or validation-related code there); it can be a WPF or MVC web app.
Now between these two there is a business layer which talks to both the domain and data models.
Your presentation layer knows nothing about the EF/data layer/repositories. When you want to introduce a new data framework or database, you just need to write new repository classes and data model classes (which will probably be produced with some sort of code generation).
This also makes your code unit-testable.
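That last point can be sketched as follows (the IProductRepository/ProductService names are made up for illustration): because the business layer depends only on a repository abstraction, a test can substitute an in-memory fake for the real data layer.

```csharp
using System.Collections.Generic;
using System.Linq;

public class Product { public int Id { get; set; } public decimal Price { get; set; } }

// Abstraction the business layer depends on
public interface IProductRepository { IEnumerable<Product> FindAll(); }

// Business layer: no knowledge of EF or the database
public class ProductService
{
    private readonly IProductRepository repository;
    public ProductService(IProductRepository repository) { this.repository = repository; }
    public decimal AveragePrice() => repository.FindAll().Average(p => p.Price);
}

// In-memory fake a unit test can use instead of a real data layer
public class FakeProductRepository : IProductRepository
{
    private readonly List<Product> products;
    public FakeProductRepository(params Product[] products) { this.products = products.ToList(); }
    public IEnumerable<Product> FindAll() => products;
}
```

A test then exercises the business rule without ever touching a database.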

Repository, Pipeline, business logic and domain model - how do I fit these together?

I'm designing an N-tier application and I came across a difficulty which you might have a solution to. The presentation layer is MVC.
My ORM is carried out using LinqToSQL - it's a separate project which serves repositories.
Each repository has an interface and at least one concrete implementation.
Repositories have the following methods: FindAll(), Save(T entity), Delete(int id)
FindAll() returns IQueryable of some type, which means that it returns queries to which I can apply filters.
The ORM mapping has been carried out using a database-first methodology, where the tables were created first and then the classes were generated by SqlMetal.
I have added a pipeline layer which works with the repositories. It applies further filters to the queries, e.g. OrderRepository.FindAll().Where(o => o.CustomerId == 10).
The pipeline also returns an IQueryable of some type, which means that I can pass it further up the stack and do more with it.
At this point I would like to move to the business logic layer, but I don't want to work with the entity models any longer; I want to convert an entity model to a domain model. This means that I can add validation to a model and use that model in the presentation layer. The model can't be defined in the MVC project, as it would then be tied to the presentation layer, so that's a no.
I'm fairly certain that the business logic (behaviour) and the model must be kept separate from the pipeline, data and presentation layers. The question is: where?
For example, a pipeline has three methods:
1. FindByCustomerId
2. FindByOrderId
3. FindBySomethingElse
All these methods return IQueryable of Order. I need to convert this to a domain model, but I don't want to do it per method, as that won't be maintainable.
I feel that this model is fairly robust and scalable. I just don't see the best place for mapping from entities to domain models and vice versa.
Thank you
First of all, if you are applying Domain-Driven Design principles here, you must not have a BusinessLogic layer in your application. All business logic should live inside your domain model.
But this is quite hard to achieve using LinqToSQL, because it does not support inheritance mapping and you would have to deal with partial classes to put business logic into your domain. So I would strongly recommend considering a move from LinqToSQL to NHibernate or Entity Framework Code First. In that case you also won't have to convert your persistence model into your domain model and vice versa.
If you still want to do the conversion, you could take a look at AutoMapper.
From a domain driven point of view you would need a factory to convert your 'database entity' into a domain model entity.
When you're thinking of turning the 'database entities' into domain model entities at the end of your pipeline, you should realize that after the conversion to domain model entities (a projection) you won't be able to use the IQueryable functionality any more, as the projection will trigger execution of your expression tree. For example, if you call FindAll for your customer database entity and then convert the IQueryable to (or project it onto) a customer domain entity, it will execute (requesting the contents of your entire table).
This is how I do my N-Tier projects. This architecture has great separation of concerns. It sounds like you are already headed in this direction.
In the MVC project are all your usual objects (controllers, view models, views, helpers, etc.). Fairly straightforward. All views are strongly typed. Plus my modified T4 templates that generate controllers, views, and view models.
In the Business Model project I have all my business objects and rules, including the interfaces that define the functionality of the data repositories. Instead of having one repository for every business object/table, I prefer to group mine by functionality: all objects related to a blog are in one repository, all objects related to a photo gallery are in a separate repository, and logging may be in a third.
You could place your pipeline layer here.
In the Data project I implement those data repository interfaces. You can use Linq2SQL without having to use the partial classes. Extending those Linq2SQL partial classes means you have tied your ORM to your domain model - something you really don't want to do. You want to leave those generated data classes in the data domain. Here is an example Linq2SQL select that returns a BusinessModel object:
from t in Table
where t.Field == keyField
select new BusinessModel.DataObject
{
    Id = t.Id,
    Field1 = t.Field1,
    Field2 = t.Field2
}
If I were you I would look at Entity Framework 4.1 using the Code First approach, or use NHibernate. Either of these will map your data model to the domain model. Then, to map the domain models to the view models, you could use AutoMapper, write custom code, or write a T4 template that generates the mapping code for you.
You could take the code generated by the dbml file as a starting point for your business objects.
Further to xelibrion's comments, you could have a look at LightSpeed for your ORM needs. You are currently using LinqToSQL, so you should find LightSpeed very straightforward since it uses the same idea.
http://www.mindscapehq.com/products/lightspeed
If you can get your data layer to map to models that more closely match the form your higher levels want, then hopefully you can simplify things. The less complexity in the system, the less scope for bugs.
All these methods return IQueryable of Order. I need to convert this to a domain model, but I don't want to do it per method, as that won't be maintainable.
This is not a true assessment and is probably blocking you from seeing the proper solution.
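One way to see why (a hedged sketch; the OrderEntity/Order names are made up, not from the question): the conversion does not need to be written once per method. A single mapping defined in one place can be applied to whatever query each pipeline method returns.

```csharp
using System.Collections.Generic;
using System.Linq;

public class OrderEntity { public int Id { get; set; } public int CustomerId { get; set; } } // generated entity
public class Order { public int Id { get; set; } public int CustomerId { get; set; } }       // domain model

public static class OrderMappingExtensions
{
    // The single place that knows how to turn entity queries into domain models;
    // FindByCustomerId, FindByOrderId, FindBySomethingElse all funnel through it.
    public static IEnumerable<Order> ToDomain(this IQueryable<OrderEntity> query) =>
        query.Select(e => new Order { Id = e.Id, CustomerId = e.CustomerId });
}
```

Usage would then look like pipeline.FindByCustomerId(10).ToDomain() - one mapping, many methods, so maintainability is not actually the obstacle it appears to be.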
