DDD aggregates and Entity Framework - which way is preferable? (C#)

I am a little bit confused about this problem. I have an entity Product that is represented in the database. It looks like a POCO. Here is an example (I use attributes instead of the Fluent API for simplicity).
public class Product
{
    [Key]
    public int Id { get; set; }

    // other properties that map to the database
}
But now I want to avoid the Anemic Domain Model anti-pattern, so I am going to fill the Product model with methods and with properties that have no mapping to the database, which means I have to mark them with [NotMapped].
public class Product
{
    [Key]
    public int Id { get; set; }

    [NotMapped]
    public object FooProperty { get; set; }

    // other properties that map to the database
    // other properties and methods that do not map to the database
}
I think this approach spoils my model. In this article I've found an acceptable workaround. Its idea is to separate Product (the domain model) from ProductState (the state of the product that is stored in the database), so that Product is a wrapper around ProductState.
I really want to know the views of other developers. Thanks a lot for your answers.
I realized that my real question is something like: "Should I separate the data model from the domain model? Can I change EF entities from anemic to rich?"

To ensure persistence ignorance of your entities, I've found EF fluent mapping to be better than data annotations. The mappings are declared in a separate file, so normally your entity doesn't have to change when something in the persistence layer changes. However, there are still some things you can't map with EF.
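For illustration, a minimal sketch of such an external mapping class, assuming EF6's EntityTypeConfiguration; the Name property is an assumption for the example:

using System.Data.Entity.ModelConfiguration;

public class ProductMap : EntityTypeConfiguration<Product>
{
    public ProductMap()
    {
        ToTable("Products");
        HasKey(p => p.Id);
        Property(p => p.Name).IsRequired().HasMaxLength(200);
    }
}

// Registered once in the DbContext:
// protected override void OnModelCreating(DbModelBuilder modelBuilder)
// {
//     modelBuilder.Configurations.Add(new ProductMap());
// }

This keeps the entity class itself free of persistence attributes; only the mapping class changes when the database does.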
Vaughn's "backing state object" solution you linked to is nice, but it is an extra layer of indirection, which adds a fair amount of complexity to your application. It's a matter of personal taste, but I would use it only when you absolutely need things in your entities that cannot be mapped directly because of EF shortcomings. It also plays well with an Event Sourcing approach.
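For reference, a bare-bones sketch of that pattern might look like this (names and rules are illustrative, not Vaughn's exact code):

// ProductState is the dumb persistence object EF maps to the table.
public class ProductState
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Product is the domain entity wrapping that state; all mutations go
// through methods that enforce domain rules.
public class Product
{
    private readonly ProductState _state;

    public Product(ProductState state)
    {
        _state = state;
    }

    public int Id { get { return _state.Id; } }

    public void Rename(string newName)
    {
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("Name must not be empty.", "newName");
        _state.Name = newName;
    }
}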

The beauty of Entity Framework is that it allows you to map your database tables to your domain model using mappings defined with the Fluent API, so there is no need for separate data entities. This is in contrast to its predecessor, LINQ to SQL, where you'd map each table to an individual data entity.
Take the following example of a Student and a Course: a student can take many courses, and a course can have many students, hence a many-to-many relationship in your database design. This would consist of three tables: Student, Course, and StudentToCourse.
EF lets you use Fluent API mappings to create the collections on either side of the relationship without the intermediate table (StudentToCourse) being defined in your model (StudentToCourse has no existence in a DOMAIN MODEL); you only need two classes in your domain, Student and Course. In LINQ to SQL, by contrast, you'd have to define all three in your model as data entities and then create mappings between your data entities and your domain model, resulting in lots of plumbing work that is open to bugs.
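A sketch of that mapping inside the DbContext, assuming EF6 Code First with Student.Courses and Course.Students collection properties (table and key names are illustrative):

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Student>()
        .HasMany(s => s.Courses)
        .WithMany(c => c.Students)
        .Map(m =>
        {
            m.ToTable("StudentToCourse");   // the join table never appears in the domain model
            m.MapLeftKey("StudentId");
            m.MapRightKey("CourseId");
        });
}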
The argument of anaemic vs rich domain model should have little effect on the mapping between your model and your database tables; it is about where you place the behaviour - in the domain model or in the service layer.


Are classes generated by Entity Framework database classes or business classes? [duplicate]

I have a three-tier app with a class library as the infrastructure layer, which contains an Entity Framework data model (database first).
Entity Framework generates the entity classes nested under the Model.tt T4 template. These classes are populated with data from the database.
In the past I would map the classes created by Entity Framework (in the data project) to classes in the Domain project e.g. Infrastructure.dbApplication was mapped to Domain.Application.
My reading tells me that I should be using the classes generated under the .tt template as the domain classes, i.e. add domain methods to the classes generated by Entity Framework. However, this would mean that the domain classes would be contained in the Infrastructure project, wouldn't it? Is it possible to relocate the classes generated by Entity Framework to the Domain project? Am I missing something fundamental here?
I think in the true sense it is a data model, not a domain model. Although people talk about treating the Entity Framework model as a domain concept, I don't see how you can easily retrofit value objects such as, say, Amount, which in the true domain sense would be represented like this:
public class CustomerTransaction
{
    public int Id { get; set; }
    public string TransactionNumber { get; set; }
    public Amount Amount { get; set; }
}

public class Amount
{
    public decimal Value { get; }
    public Currency Currency { get; }
}
As opposed to the data-model approach, which is less correct in the domain sense:
public class CustomerTransaction
{
    public int Id { get; set; }
    public string TransactionNumber { get; set; }
    public int CurrencyType { get; set; }
    public decimal Amount { get; set; }
}
Yes, the example is anaemic, but it is only interested in the properties for clarity's sake, not behaviour. For starters, you would need to change the visibility of the properties and decide whether you need a default constructor on the "business/data object".
So in the domain sense, Amount is a value object on a CustomerTransaction, which I am assuming is an entity in the example.
So how would this translate to database mappings via Entity Framework? There might be a way to hold the above in a single CustomerTransaction table as the flat structure of the data model, but my way would be to add an additional repository around it and map out to the data structures.
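A minimal sketch of that repository-level mapping; AppDbContext, CustomerTransactionRow, the Amount constructor, and Currency.FromCode are all assumptions for illustration:

public class CustomerTransactionRepository
{
    private readonly AppDbContext _db;

    public CustomerTransactionRepository(AppDbContext db)
    {
        _db = db;
    }

    public CustomerTransaction GetById(int id)
    {
        // The flat data-model row (Id, TransactionNumber, CurrencyType, Amount).
        var row = _db.CustomerTransactionRows.Find(id);

        // Map out to the rich domain structure with its Amount value object.
        return new CustomerTransaction
        {
            Id = row.Id,
            TransactionNumber = row.TransactionNumber,
            Amount = new Amount(row.Amount, Currency.FromCode(row.CurrencyType))
        };
    }
}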
Udi Dahan has some good information on DDD and ORMs in the true sense. I thought he talked somewhere about keeping the data model instance as a private field in the domain object, but I might be wrong.
Also, that data model suffers from Primitive Obsession (I think Fowler coined the term in his Refactoring book). Jimmy Bogard talks about that here.
Check out Udi Dahan's stuff.
You should move your model to a different project; that is good practice. I don't quite get what you mean by "moving it to the Domain project". Normally the Entity Framework generated classes are used as the domain model; there is no need to create a "different" domain model from them. This model should be used only close to database operations, whereas the web (or desktop) application should use only DTOs (data transfer objects).
I don't know whether you use it or not, but this is a nice tool for recreating the model from the database:
https://marketplace.visualstudio.com/items?itemName=SimonHughes.EntityFrameworkReversePOCOGenerator
It allows you to store the model in classes (instead of an EDMX). Some refer to this as "code first", but that is a misunderstanding: you can use this tool to create the model and still be "database first". It is done simply to avoid using an EDMX as the model definition.
You can relocate the entity classes by creating a new item in your Domain project: the "EF 6.x DbContext Generator" (I'm not sure of the exact name, and you might have to install a plugin to get this item in the list; it also exists for EF 5.x).
Once you have created this new item, you have to edit it to set the path of your EDMX at the very beginning of the file. In my project, for example, it is:
const string inputFile = @"..\..\DAL.Impl\GlobalSales\Mapping\GlobalSalesContext.edmx";
You will also need to edit the DbContext.tt file to add the right using directives on top of the generated class. Each time you change the EDMX, you will also have to right-click the generator and select "Run Custom Tool" to generate the new classes.
That being said, is it a good practice? As you can see, that's what I have done in my project. As long as you do not have EF-specific annotations or similar things in the generated entity classes, I would say it is acceptable.
If you need to change your ORM, you can keep the generated classes, remove all the EF stuff (.tt files, etc.), and the rest of your application will work the same. But that's opinion-based.

How to correctly map a complex view model to separate domain models in ASP.NET MVC?

I wonder how I could solve the following case: there is a form on a website where a manager inputs a very large amount of data into a view model, which is passed to the server side.
class CitizenRegistrationViewModel
{
    public string NationalPassportId { get; set; }
    public string Name { get; set; }
    public List<string> PreviousRegisteredOfficeCodes { get; set; }

    // about 30 fields: strings, booleans, uploaded files (pdf, jpg)...
}
The problem is that in the domain this data needs to be logically separated and stored in different tables (classes in EF), such as CitizensNationalPassportsTable, CitizensWorkingPlaceRecordsTable, etc. There is no complex Citizen class with properties like:
public class Citizen
{
    public ICollection<CitizensWorkingPlaceRecords> WorkingPlaces { get; set; }
    // etc...
}
Instead, these properties are stored separately in different tables with no one-to-one or one-to-many relations (no FKs here). Only the NationalPassportId property can be used as a navigation key (it is unique per user, and all records related to a user in the different tables contain this key).
Should I write a large amount of code to map the view model to domain models, like:
public void CitizenRegistrationViewModelToDomainModel(CitizenRegistrationViewModel model)
{
    CitizenNationalPassport passport = new CitizenNationalPassport(model.NationalPassportId);
    CitizensWorkingPlaceRecord workplace = new CitizensWorkingPlaceRecord(model.PreviousRegisteredOfficeCodes, model.NationalPassportId);
    // 12 more objects to create...

    db.CitizenNationalPassports.Add(passport);
}
Or is there a more correct approach to this problem? I wanted to use AutoMapper, but is it the best solution?
I can't change business models' logic, as it is a legacy project.
You should have a set of classes that represent the data the browser exchanges with ASP.NET MVC. Let's name them, for example, input models. In these classes you have metadata attributes, custom properties, and many things that relate to the exchange between the browser and the web server.
You should have another set of classes that represent your database structure; those are your Entity Framework POCO classes. Let's name them DB models. No matter how POCO and fancy they are, they always map to tables and columns, so they are always tied to the DB structure.
You should have another set of classes that are your domain classes, the classes you use when operating on objects in your business layer. These are binding/persistence/representation agnostic.
You should have a repository that knows how to persist a domain entity. In your case it will be a class that knows how to operate on the DB models and the DbContext.
Then, when you get input from your browser, that data is bound to the input models, and those are passed to the controller (this is done automatically by the DefaultModelBinder, or you can use your own IModelBinder).
When you get an input model, you create a new domain entity from that data (in case it really is a new entity). Once you have your domain object ready, you pass it to the repository to be saved.
The repository is responsible for knowing how to save the domain entity in the database, using the DB models.
In essence, the controller, or the business service instance you operate on in the controller action's context, is responsible for orchestrating the interaction between these elements without them knowing about each other.
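Put together, the flow might look roughly like this (a sketch; ICitizenRepository and its Save method are assumptions, and error handling is omitted):

public class CitizenController : Controller
{
    private readonly ICitizenRepository _repository; // hypothetical repository abstraction

    public CitizenController(ICitizenRepository repository)
    {
        _repository = repository;
    }

    [HttpPost]
    public ActionResult Register(CitizenRegistrationViewModel input)
    {
        if (!ModelState.IsValid)
            return View(input);

        // Build the domain entities from the input model...
        var passport = new CitizenNationalPassport(input.NationalPassportId);

        // ...and let the repository translate them to DB models and persist them.
        _repository.Save(passport);

        return RedirectToAction("Index");
    }
}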
AutoMapper or an alternative could be used to automate the mapping from view models to domain models, but this only makes sense if properties are named identically in the view and domain models. If that is not the case, you'll end up writing mapping rules anyway, which doesn't help you; it just moves code from your current mapping classes into the AutoMapper configuration. So, if you're in a position to modify your view models, I'd go for AutoMapper or something similar; if not, I'd use what you currently have.
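If you do go that route, a sketch of the configuration could look like this (Profile-based AutoMapper API; the PassportId member is a hypothetical example of a mismatch rule you'd otherwise write by hand):

public class CitizenMappingProfile : Profile
{
    public CitizenMappingProfile()
    {
        // Mapping is automatic only where names match; every mismatch
        // needs an explicit rule like the one below.
        CreateMap<CitizenRegistrationViewModel, CitizenNationalPassport>()
            .ForMember(d => d.PassportId,
                       o => o.MapFrom(s => s.NationalPassportId));
    }
}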

Using AutoMapper to load entities from the database?

Most of what I've read (e.g. from the author) indicates that AutoMapper should be used to map an entity to a DTO. It should not load anything from the database.
But what if I have this:
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Order> Orders { get; set; }
}

public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IEnumerable<int> OrderIds { get; set; } // here is the problem
}
I need to map from DTO to entity (i.e. from CustomerDto to Customer), but first I must use that list of foreign keys to load the corresponding entities from the database. AutoMapper can do that with a custom converter.
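Concretely, such a converter might look like this (a sketch, assuming an injectable IOrderRepository):

public class CustomerDtoConverter : ITypeConverter<CustomerDto, Customer>
{
    private readonly IOrderRepository _orders; // hypothetical repository

    public CustomerDtoConverter(IOrderRepository orders)
    {
        _orders = orders;
    }

    public Customer Convert(CustomerDto source, Customer destination, ResolutionContext context)
    {
        return new Customer
        {
            Id = source.Id,
            Name = source.Name,
            // The part that feels wrong: a database hit inside a mapping.
            Orders = source.OrderIds.Select(id => _orders.Get(id)).ToList()
        };
    }
}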
I agree that it doesn't feel right... but what are the alternatives? Sticking that logic into a controller, a service, a repository, or some manager class? All of that just pushes the logic somewhere else in the same tier. And if I do that, I must also perform the mapping manually!
From a DDD perspective, the DTO should not be part of the domain. So AutoMapper is also not part of the domain, because it knows about that DTO. So AutoMapper is in the same tier as the controllers, services, etc.
So does it make sense to put the DTO-to-entity logic (which includes accessing the database, and possibly throwing exceptions) into an AutoMapper mapping?
EDIT
@ChrisSimon's great answer below explains, from a DDD perspective, why I shouldn't do this. From a non-DDD perspective, is there a compelling reason not to use AutoMapper to load from the db?
To start with, I'm going to summarise my understanding of Entities in DDD:
Entities can be created - often using a factory. This is the start of their life-cycle.
Entities can be mutated - have their state modified - by calling methods on the entity. This is how they progress through their lifecycle. By ensuring that the entity owns its own state, and can only have its state modified by calling its methods, the logic that controls the entity's state is all within the entity class, leading to cleaner separation of business logic and more maintainable systems.
Using AutoMapper to convert from a DTO to the entity means the entity gives up ownership of its state. If the DTO is in an invalid state and you map it directly onto the entity, the entity may end up in an invalid state - you have lost the value of making entities contain data + logic, which is the foundation of the DDD entity.
To make a suggestion as to how you should approach this, I'd ask - what is the operation you are trying to achieve? DDD encourages us not to think about CRUD operations, but to think about real business processes, and to model them on our entities. In this case it looks like you are linking Orders to the Customer entity.
In an Application Service I would have a method like:
void LinkOrdersToCustomer(CustomerDto dto)
{
    using (var dbTxn = _txnFactory.NewTransaction())
    {
        var customer = _customerRepository.Get(dto.Id);
        foreach (var orderId in dto.OrderIds)
        {
            var order = _orderRepository.Get(orderId);
            customer.LinkToOrder(order);
        }
        dbTxn.Save();
    }
}
Within the LinkToOrder method I would have explicit logic (sketched after this list) that does things like:
Check that order is not null
Check that the customer's state permits adding the order (are they currently active? is their account closed? etc.)
Check that the order actually does belong to the customer (what would happen if the order referenced by orderId belonged to another customer?)
Ask the order (via a method on the order entity) if it is in a valid state to be added to a customer.
Only then would I add it to the Customer's Orders collection.
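A minimal sketch of those guards; the state checks and exception types are assumptions for illustration:

public void LinkToOrder(Order order)
{
    if (order == null)
        throw new ArgumentNullException("order");
    if (!IsActive)
        throw new InvalidOperationException("Inactive customers cannot take new orders.");
    if (order.CustomerId != Id)
        throw new InvalidOperationException("Order belongs to another customer.");
    if (!order.CanBeAddedToCustomer())
        throw new InvalidOperationException("Order is not in a valid state to be added.");

    _orders.Add(order); // the private backing collection on Customer
}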
This way, the application 'flow' and infrastructure management is contained within the application/services layer, but the true business logic is contained within the domain layer - within your entities.
If the above requirements are not relevant in your application, you may have other requirements. If not, then perhaps it is not necessary to go the route of DDD - while DDD has a lot to add, its overheads are generally only worth it in systems with lots of complex business logic.
This isn't related to the question you asked, but I'd also suggest you take a look at the modelling of Customer and Order. Are they both independent aggregates? If so, modelling Customer as containing a collection of Order may lead to problems down the road - what happens when a customer has a million orders? Even if the collection is lazy-loaded, you know at some point something will attempt to load it, and there goes your performance. There's some great reading about aggregate design here: http://dddcommunity.org/library/vernon_2011/ - it recommends modelling references by Id rather than by object reference. In your case, you could have a collection of OrderIds, or possibly even a completely new entity to represent the link - CustomerOrderLink - which would have two properties: CustomerId and OrderId. Then none of your entities would have embedded collections.
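A sketch of that link-entity idea (names are illustrative):

public class CustomerOrderLink
{
    public int CustomerId { get; set; }
    public int OrderId { get; set; }
}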

Can too many navigation properties be too much?

If I have an entity:
public class User
{
    public int UserId { get; set; }
}
And another entity:
public class Role
{
    public int RoleId { get; set; }
}
I want to model the relationship via EF Code First, so I added:
User.cs
public virtual ICollection<Role> Roles { get; set; }
Role.cs
public virtual User User { get; set; }
This allows me to get a user's roles like:
context.Users.Find(userId).Roles.ToList();
But User is the main object in the database, and it can have relations to 100 tables.
Is adding an ICollection<T> here and a User object there best practice, or is it not always required (is there a general rule of thumb for this)?
Sometimes I have a feeling that I am creating objects that are too large, and I wonder whether this has a performance impact.
You are correct in thinking that dragging 100 related tables into your DbContext might not be the most performant solution, and EF will drag in every table it can see, whether as a DbSet, a navigation property, or a fluent configuration.
However, if you need to navigate from roles to users in your DbContext, and the User entity has navigation properties pointing to 100 tables, then the solution would be, in this particular DbContext, to tell EF to ignore the tables you're not concerned with, using something like modelBuilder.Ignore<Order>() (assuming that from User you can navigate to Order in some way). In this way you can prune the graph to only the entities you need.
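For example, a minimal sketch of such a pruned context, assuming EF6 Code First and illustrative entity names:

public class RoleManagementContext : DbContext
{
    public DbSet<User> Users { get; set; }
    public DbSet<Role> Roles { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Keep the rest of the User graph out of this bounded context.
        modelBuilder.Ignore<Order>();
    }
}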
You then need another DbContext for other spheres of interest; the concept is called the bounded context (see Julie Lerman and Eric Evans' DDD). You then need to do more work in your app to support multiple DbContexts in the same app, but it can be done (see Julie Lerman on enterprise EF). However, if you just want one DbContext in your app where the scope of your model is limited to a subset of tables, then this will work.
I think you can use the EF Power Tools to view a read-only diagram of the tables in your DbContext model. You can then confirm how well your pruning is going.

Persistence IDs and Domain Model Entities

I was curious what people's thoughts are on keeping the Id of a DAL entity as a property of the domain entity, at most as a read-only property.
My first thought was that this is OK to do, but the more I think about it, the more I dislike the idea. After all, the domain model is supposed to be completely unaware of how data is persisted, and keeping an Id property on each domain model is a less-than-subtle indication. The persistence layer may be something that doesn't require primary keys, or another property exposed in the domain model may be a suitable candidate for identification - a model number, perhaps.
But then that got me thinking, for domain models that do not have a reliable means of uniquely identifying an entry in a database persistence layer, how are they to identify entries when it comes to updating or deleting?
A dictionary based on weak reference keys could do the trick: WeakDictionary<DomainEntity, PrimaryKeyType>. This dictionary would be part of the repository implementation. Whenever the client of the repository fetches a collection of DomainEntity, a weak reference to each entity and its persistence-layer Id is stored in this internal dictionary, so that when the time comes to return the modified entity to the repository for updating the persistence layer, the following could be done to get back the Id:
PrimaryKeyType id = default(PrimaryKeyType);
if (!weakDictionary.TryGetValue(someDomainEntity, out id))
{
    // Id not found: throw an exception, custom or otherwise...
}

// Id found: continue happily mapping the domain model back to the data model.
The benefit of this approach, as I see it, is that the domain entity need not maintain its persistence-layer-specific Id, and the repository forces you to have a legitimate domain entity obtained either by a call to a Fetch... method or by the Add/CreateNew method; otherwise, should you try to update or delete the entity, it will throw an exception.
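A hedged sketch of that weak lookup inside a repository, using .NET's ConditionalWeakTable (the closest built-in thing to a "WeakDictionary"; its values must be reference types, so the int Id is boxed as object). LoadFromDataLayer and SaveToDataLayer are hypothetical mapping helpers:

using System.Runtime.CompilerServices;

public class EmployeeRepository
{
    private readonly ConditionalWeakTable<Employee, object> _ids =
        new ConditionalWeakTable<Employee, object>();

    public Employee Fetch(int id)
    {
        Employee entity = LoadFromDataLayer(id);
        _ids.Add(entity, id); // remember the persistence Id for this instance
        return entity;
    }

    public void Update(Employee entity)
    {
        object boxedId;
        if (!_ids.TryGetValue(entity, out boxedId))
            throw new InvalidOperationException(
                "Entity was not obtained from this repository.");
        SaveToDataLayer((int)boxedId, entity);
    }

    private Employee LoadFromDataLayer(int id) { /* map data model to domain */ return null; }
    private void SaveToDataLayer(int id, Employee employee) { /* map domain back to data model */ }
}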
I'm aware that this is probably over-engineering and that I should just buckle down and get pragmatic; I was just curious what other people think about this.
I don't want to start another thread just for this minor question, as it is somewhat related. But since I have only relatively recently started looking into DDD (though in this case my database came first), I wondered if I could confirm that I have the right mindset for domain entities. Here is a cut-down example of my Employee domain entity:
public class Employee : DomainEntity
{
    public string FirstName { get; }
    public string LastName { get; }
    public UserGroup Group { get; }
    // etc..

    // only construct valid employees
    public Employee(string firstName, string lastName, SecureString password, UserGroup group);

    // validate, update. (Not sure about this one; I pulled it from an open
    // source project. I think names should be able to be set individually.)
    public void AssignName(string firstName, string lastName);

    // validate, update.
    public void ResetPassword(SecureString oldPassword, SecureString newPassword);

    // etc..
}
Thank you!
Your proposal of using weak references has one major flaw.
As you might know, domain entities have the important characteristic that they must have identity. This is important for comparison reasons: if two entities have the same identity, regardless of the values of their properties, then they are considered equal:
Entity1 == Entity2 ⇔ Entity1.Identity == Entity2.Identity
A typical "design pattern" would be to inherit all entities from a DomainEntity<T> abstract class, which overrides the comparison of these objects and compares by identity.
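For illustration, a minimal sketch of such a base class (the exact shape is an assumption):

public abstract class DomainEntity<TId> where TId : IEquatable<TId>
{
    public TId Identity { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as DomainEntity<TId>;
        return other != null
            && GetType() == other.GetType()
            && Identity.Equals(other.Identity);
    }

    public override int GetHashCode()
    {
        return Identity.GetHashCode();
    }
}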
Now, consider your approach of using a weak reference look up. Let's take an example:
You fetch Entity1, say the "Reegan Layzell" user, from a repository. Then you fetch the exact same "Reegan Layzell" entity from the repository again as Entity2. You now have the same entity in your domain in two objects, but they have different references (of course).
When comparing, these entities will not be considered equal in your domain.
I understand your fear of introducing database concerns into your domain model, but propagating the database Id into your entities will hardly affect the quality of your models, and it will save you a lot of trouble. Like you said, we need to be pragmatic.
With regards to your Employee example: Does AssignName really make sense? In reality, can an employee's name really change after creation? Other than that, it looks like you have the right idea. I highly recommend you watch this: Crafting Wicked Domain Models by Jimmy Bogard.
