I am currently trying to write an application using DDD that allows an entity to be created, updated and deleted. A change to an entity must be approved by another person. The application must also keep track of what changes were made to an entity. The simplified domain model looks like this:
The application has one bounded context containing ChangeSet, Entity and EntityHistory, where ChangeSet is the aggregate root. I designed the aggregate this way because an Entity should not be changed without a ChangeSet, and furthermore a ChangeSet should be saved together with the edited entities in one transaction. On that account I designed a single aggregate.
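In code, a simplified sketch of the aggregate looks like this (details trimmed; the member names are inferred from the constructor calls below, so treat them as illustrative):

public class ChangeSet // aggregate root
{
    public string Author { get; private set; }
    public string Description { get; private set; }
    public DateTime Date { get; private set; }
    public ApprovalState State { get; private set; }
    public IReadOnlyCollection<Entity> Entities { get; private set; }

    public ChangeSet(string author, string description, DateTime date,
                     ApprovalState state, IEnumerable<Entity> entities)
    {
        Author = author;
        Description = description;
        Date = date;
        State = state;
        Entities = new List<Entity>(entities).AsReadOnly();
    }
}

public class Entity // sub-entity of the aggregate; changes are recorded in EntityHistory
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public TagStatus Status { get; private set; }

    public Entity(Guid id, string name, TagStatus status)
    {
        Id = id;
        Name = name;
        Status = status;
    }

    public void AssignName(string name)
    {
        Name = name; // history tracking omitted in this sketch
    }
}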
The design works pretty well when creating new entities:
private void CreateChangeSet()
{
    var repository = new ChangeSetRepository();
    var entities = new List<Entity>
    {
        new Entity(Guid.NewGuid(), "Test1", new TagStatus(1, EntityState.Pending))
    };
    var changeSet = new ChangeSet("a user", "Added a new entity", DateTime.Now, ApprovalState.Submitted, entities);
    repository.Insert(changeSet);
}
However, problems with my design arise when I am trying to edit an entity:
private void EditEntity()
{
    var repository = new ChangeSetRepository();
    var entity = repository.GetEntityByName("Test1");
    entity.AssignName("a new name");
    var entities = new List<Entity> { entity };
    var cs = new ChangeSet("a user", "Edited an entity", DateTime.Now, ApprovalState.Submitted, entities);
    repository.Insert(cs);
}
As far as I know, a repository should return aggregates only, which would mean that in order to change an Entity I must first search for a ChangeSet, which does not make sense. Is it a bad practice to return a sub-entity of an aggregate even if you perform changes only through the aggregate root?
I have searched the internet for an answer, and many people point out that this kind of query can indicate a wrong aggregate design. This makes me wonder whether, instead of one aggregate, I need two: one for the ChangeSet and one containing Entity and EntityHistory. Should I use two aggregates instead of one? If so, how can I do this within a single transaction?
A further indication for two aggregates is user interface requirements like 'the user wants to see a change history for an entity' or 'show me all entities in a view'. On the one hand this indicates two aggregates; on the other hand I have a feeling that ChangeSet and Entities really belong together.
To sum up my questions:
Should I use one or two aggregates in my design?
If one aggregate: is it a bad practice to return a sub-entity of an aggregate even if you perform changes only through the aggregate root?
If two aggregates: how can I save the ChangeSet and the associated Entities in one transaction?
TL;DR:
You should use one aggregate.
Yes, it is bad practice as the behaviour should be exposed by the aggregate; also, reconstructing the Entity would require the Entity to know how to query ChangeSet; unless you orchestrate this at the service level, it is not great design.
You should not do it, as an aggregate root represents, IMHO, a transactional boundary.
Additional thoughts
If I understand correctly, you are trying to do what Event Sourcing does naturally, with the addition of the approval workflow. Events in an Event Store are approximately what you define with a ChangeSet.
If this is correct, you could model this elegantly in ES by:
Call an Edit Entity API that takes as input the bulk of the changes for an Entity
The API:
Builds a ChangeEntityCommand from the API input (command may fail validation);
Retrieves the Entity;
Invokes the corresponding Handler in the Entity aggregate, which in turn emits a ChangeQueuedForApprovalEvent.
Commits the Entity in the EventStore
An EventHandler will intercept the event above and take care of updating the approval view.
When the approver gives the green light, a similar flow will emit a ChangeApprovedEvent containing the same data as the former event. This event is the one that actually transforms the Entity.
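A bare-bones sketch of that flow (every type and member name here is illustrative, and a real event-sourced aggregate would also rehydrate from its event stream):

public class ChangeEntityCommand
{
    public Guid EntityId { get; set; }
    public Dictionary<string, string> Changes { get; set; } // the bulk of the changes from the API input
}

public class ChangeQueuedForApprovalEvent
{
    public Guid EntityId { get; set; }
    public Dictionary<string, string> Changes { get; set; }
}

public class ChangeApprovedEvent
{
    public Guid EntityId { get; set; }
    public Dictionary<string, string> Changes { get; set; } // same data as the queued event
}

public class EntityAggregate
{
    private readonly List<object> _uncommitted = new List<object>();

    // The command handler does not mutate state; it only queues the change for approval.
    public void Handle(ChangeEntityCommand cmd)
    {
        _uncommitted.Add(new ChangeQueuedForApprovalEvent
        {
            EntityId = cmd.EntityId,
            Changes = cmd.Changes
        });
    }

    // Applying the approved event is what actually transforms the entity.
    public void Apply(ChangeApprovedEvent e)
    {
        // mutate the entity's state from e.Changes here
    }

    // Committed to the event store by the API after handling a command.
    public IReadOnlyList<object> UncommittedEvents
    {
        get { return _uncommitted; }
    }
}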
Lastly, I do not believe that the ChangeSet modelling really suits DDD, as it fails to capture the intent of the change.
Hope this helps and good luck with your project.
Related
I am maintaining an application which uses EF Core to persist data to a SQL database.
I am trying to implement a new feature which requires me to retrieve an object from the database (let's pretend it's an order), manipulate it and some of the order lines attached to it, and save it back into the database. This wouldn't be a problem, but I have inherited some of this code, so I need to try to stick to the existing way of doing things.
The basic process for data access is:
UI -> API -> Service -> Repository -> DataContext
The methods in the repo follow this pattern (though I have simplified it for the purposes of this question):
public Order GetOrder(int id)
{
    return _context.Orders.Include(o => o.OrderLines).FirstOrDefault(x => x.Id == id);
}
The service is where business logic and mapping to DTOs are applied. This is what the GetOrder method would look like:
public OrderDTO GetOrder(int id)
{
    var ord = _repo.GetOrder(id);
    return _mapper.Map<OrderDTO>(ord);
}
So, to retrieve and manipulate an order, my code would look something like this:
public void ManipulateAnOrder()
{
    // Get the order DTO from the service
    var order = _service.GetOrder(3);

    // Manipulate the order
    order.UpdatedBy = "Daneel Olivaw";
    order.OrderLines.ForEach(ol => ol.UpdatedBy = "Daneel Olivaw");

    _service.SaveOrder(order);
}
And the method in the service which allows this to be saved back to the DB would look something like this:
public void SaveOrder(OrderDTO order)
{
    // Get the original item from the database
    var original = _repo.GetOrder(order.Id);

    // Merge the original and the new DTO together
    _mapper.Map(order, original);
    _repo.Save(original);
}
Finally, the repository's Save method looks like this:
public void Save(Order order)
{
    _context.Update(order);
    _context.SaveChanges();
}
The problem I am encountering is that this method of mapping the entities from the context into DTOs and back again causes the nested objects (in this instance the OrderLines) to be changed (or recreated) by AutoMapper in such a way that EF no longer recognises them as the entities it has just given us.
This results in errors when updating, along the lines of:
InvalidOperationException the instance of ProductLine cannot be tracked because another instance with the same key value for {'Id'} is already being tracked.
Now, to me, it's not that there is ANOTHER instance of the object being tracked; it's the same one. But I understand that the mapping process has broken that link, and EF can no longer determine that they are the same object.
So I have been looking for ways to rectify this. Two approaches have jumped out at me as promising:
the answer mentioned here EF & Automapper. Update nested collections
Automapper.Collection
AutoMapper.Collection seems to be the better route, but I can't find a good working example of it in use, and the implementation that I have done doesn't seem to work.
So I'm looking for advice from anyone who has used AutoMapper.Collection successfully, or from anyone who has suggestions as to how best to approach this.
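For reference, my current attempt at wiring it up looks roughly like this (the EqualityComparison call is my best guess at what is needed to make AutoMapper match existing OrderLines by Id instead of recreating them, and OrderLineDTO is a stand-in name):

var config = new MapperConfiguration(cfg =>
{
    // From the AutoMapper.Collection package
    cfg.AddCollectionMappers();

    cfg.CreateMap<Order, OrderDTO>().ReverseMap();

    // Match collection elements by key so existing lines are updated
    // in place rather than recreated (the recreation is what breaks EF's tracking)
    cfg.CreateMap<OrderLine, OrderLineDTO>()
       .ReverseMap()
       .EqualityComparison((dto, entity) => dto.Id == entity.Id);
});

var mapper = config.CreateMapper();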
Edit: I have knocked up a quick console app as an example. Note that when I say quick I mean... horrible. There is no DI or anything like that, and I have done away with the repositories and services to keep it simple.
I have also left in a commented-out mapper profile which does work, but isn't ideal. You will see what I mean when you look at it.
Repo is here https://github.com/DavidDBD/AutomapperExample
OK, after examining every scenario, and counting on the fact that I did what you're trying to do in a previous project where it worked out of the box: updating your Entity Framework Core NuGet packages to the latest stable version (3.1.8) solved the issue without modifying your code.
AutoMapper has in fact "broken that link", and the mapped entities you are trying to save are a set of new objects, not previously tracked by your DbContext. If the mapped entities were the same objects, you wouldn't have gotten this error.
In fact, it has nothing to do with AutoMapper and the mapping process, but with how the DbContext is being used and how the entity states are being managed.
In your ManipulateAnOrder method after getting the mapped entities -
var order = _service.GetOrder(3);
your DbContext instance is still alive and at the repository layer it is tracking the entities you just retrieved, while you are modifying the mapped entities -
order.UpdatedBy = "Daneel Olivaw";
order.OrderLines.ForEach(ol=>ol.UpdatedBy = "Daneel Olivaw");
Then, when you are trying to save the modified entities -
_service.SaveOrder(order);
these mapped entities reach the repository layer and the DbContext tries to add them to its tracking list, but finds that it already has entities of the same type with the same Ids in the list (the previously fetched ones). EF can track only one instance of a given type with a given key - hence the complaining message.
One way to solve this is, when fetching the Order, to tell EF not to track it, like this at your repository layer -
public Order GetOrder(int id, bool tracking = true) // optional parameter
{
    if (!tracking)
    {
        return _context.Orders.Include(o => o.OrderLines).AsNoTracking().FirstOrDefault(x => x.Id == id);
    }

    return _context.Orders.Include(o => o.OrderLines).FirstOrDefault(x => x.Id == id);
}
(or you can add a separate method for handling NoTracking calls) and then at your Service layer -
var order = _repo.GetOrder(id, false); // for this operation tracking is false
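If you prefer the separate-method route mentioned above, something like this would do (the method name is just an example):

public Order GetOrderNoTracking(int id)
{
    // AsNoTracking: the returned entities never enter the DbContext's
    // tracking list, so saving mapped copies of them later won't conflict
    return _context.Orders
        .Include(o => o.OrderLines)
        .AsNoTracking()
        .FirstOrDefault(x => x.Id == id);
}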
I'm starting to get my head around Domain Driven Design, and I'm having some issues with repositories and the fact that EF Core's explicit loading will automatically fill my navigation properties.
I have a repository that I use to load my aggregate root and its children. However, some of the aggregate children need to be loaded later on (I need to load those entities based on a date range).
Example:
Load schedule owners
Calculate a date range
Load schedule owner's schedules
I'm trying to keep my data access layer isolated from the core layer and this is where I have some questions.
Imagine this method on my repository:
public List<Schedule> GetSchedules(Guid scheduleOwnerPk, DateRange dateRange)
{
    var schedules = dbContext.Schedules
        .Where(x => x.PkScheduleOwner == scheduleOwnerPk
                 && x.StartDate >= dateRange.Start
                 && x.EndDate <= dateRange.End)
        .ToList();

    return schedules;
}
I can call this method from the core layer in two ways:
// Take advantage of EF Core's ability to fill the navigation property automatically
scheduleOwnerRepository.GetSchedules(scheduleOwner.Pk, dateRange);
or
var schedules = scheduleOwnerRepository.GetSchedules(scheduleOwner.Pk, dateRange);

// At this moment EF Core has already loaded the navigation property,
// so I need to clear it to avoid duplicated results
scheduleOwner.Schedules.Clear();

// Schedules is implemented as an IEnumerable to protect it from being
// changed outside the aggregate root
scheduleOwner.AddSchedules(schedules);
The problem with the first approach is that it leaks EF Core into the core layer, meaning that the property ScheduleOwner.Schedules would no longer be filled if I moved away from EF Core.
The second approach abstracts EF Core but requires some extra steps to get ScheduleOwner.Schedules filled. Since EF Core will automatically load the navigation property after the repository method is called, I'm forced to clear it before adding the results; otherwise I'll be inserting duplicated results.
How do you guys deal with this kind of situation? Do you take advantage of EF core features or do you follow the more natural approach of calling a repository method and use its results to fill some property?
Thanks for the help.
There are a couple of things to consider here.
Try to avoid using your domain model for querying. Rather use a read model through a query layer.
An aggregate is a complete unit, as it were, so when loaded you load everything. When you run into a scenario where you do not need all of the related data, it may indicate that the data is not part of the aggregate; it may, in fact, only be related in a weaker sense.
An example is Order to Customer. Although an Order may very well require a Customer, the Order is an aggregate in its own right. The Customer may have a list of OrderIds, but that may become large rather quickly. One would typically not require a complete list of orders to determine whether an aggregate is valid or complete. However, you may very well need a list of ActiveOrder value objects of sorts if that is required for, say, keeping a maximum order amount, although there are various ways to deal with that case also.
Back to your scenario. An EF entity is not your domain model, and when I have had to make use of EF in the past I would load the entity and then map it to my domain entity in the repository. The repository should only deal with domain aggregates, and you should avoid query methods on the repository. At a minimum, a repository would typically have a Get(id) and a Save(aggregate) method.
I would recommend querying through a separate layer that returns as simple a result as possible. For something like a Count I may return an int, whereas for something like IScheduleQuery.Search(specification) I may return IEnumerable<DataRow> or, if it contains more complex data or I need a read model, IEnumerable<Query.Schedule>.
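A rough sketch of what I mean (the names are illustrative, not prescriptive):

// Query layer: returns flat read models, never domain aggregates.
public interface IScheduleQuery
{
    int Count(Guid scheduleOwnerPk);
    IEnumerable<Query.Schedule> Search(Guid scheduleOwnerPk, DateRange dateRange);
}

namespace Query
{
    // Read model: just data for display, no domain behavior.
    public class Schedule
    {
        public Guid Pk { get; set; }
        public DateTime StartDate { get; set; }
        public DateTime EndDate { get; set; }
    }
}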
I'm trying to grasp more and more of Domain Driven Design and follow best practices. So far this is my understanding:
An aggregate is a collection of entities related to each other.
The root of the aggregate is the entity that binds the relationships of the aggregate together.
If the root is deleted everything within the confines of the aggregate must be deleted as well
Aggregate roots can only reference each other via identities
My questions are:
If I have more than one aggregate related to each other, say Orders and Product Categories.
How should the application service handle the retrieval of an order and related product category?
Should the service have a reference to each repository of an order and product category, retrieve the order first, then retrieve the product category, and finally fill out a data transfer object referencing the information from both?
Something like this:
public OrderDto GetOrder(int id)
{
    var order = _orderRepo.GetById(id);
    var productCategory = _categoryRepo.GetById(order.ProductCategoryId);

    return new OrderDto
    {
        CustomerName = order.CustomerName,
        ProductCategoryName = productCategory.Name,
        // ..etc..
    };
}
Or is it overkill to keep the roots that decoupled if they are tightly related?
Or should the UI be making the calls to independent services for the complete picture?
There are some situations where you may have to break the rules, as described in the 'Reasons to break the rules' section.
The first of them is presentation convenience. It's not a big deal when you just need to display one Order at a time, but the solution you mentioned causes an N+1 query problem if you need to list Orders.
An alternative solution is to stick to the rule and use your persistence objects for rendering the UI (in the list-Orders case) if you want to separate (or have already separated) your domain models from the persistence infrastructure; some discussion can be found here.
Using the CQRS pattern in your application is also an option. The pattern fits well with DDD because it helps in exactly this kind of situation, where we need different mechanisms for writing and reading data; you can read more about CQRS in this article: https://martinfowler.com/bliki/CQRS.html. If you retrieve data purely for display, you don't need to load the full aggregate roots, because the invariants of the entity can't be violated when reading data, i.e. the entity state is not changing.
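For example, the read side can project straight from persistence into the DTO in a single query, bypassing both aggregates (a sketch; _context and the property names are assumptions based on the code above):

// Read side: one query joining orders to categories - no aggregate
// loading and no N+1 problem when listing Orders.
public List<OrderDto> ListOrders()
{
    return _context.Orders
        .Join(_context.ProductCategories,
              o => o.ProductCategoryId,
              c => c.Id,
              (o, c) => new OrderDto
              {
                  CustomerName = o.CustomerName,
                  ProductCategoryName = c.Name
              })
        .ToList();
}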
I currently have a repository for just about every table in the database and would like to align myself further with DDD by reducing them to aggregate roots only.
Let's assume that I have the following tables, User and Phone. Each user might have one or more phones. Without the notion of an aggregate root I might do something like this:
// assuming I have the userId in session, for example, and I want to update a phone number
List<Phone> phones = PhoneRepository.GetPhoneNumberByUserId(userId);
phones[0].Number = "911";
PhoneRepository.Update(phones[0]);
The concept of aggregate roots is easier to understand on paper than in practice. I will never have phone numbers that do not belong to a User, so would it make sense to do away with the PhoneRepository and incorporate phone-related methods into the UserRepository? Assuming the answer is yes, I'm going to rewrite the prior code sample.
Am I allowed to have a method on the UserRepository that returns phone numbers? Or should it always return a reference to a User, and then traverse the relationship through the User to get to the phone numbers:
List<Phone> phones = UserRepository.GetPhoneNumbers(userId);
// Or
User user = UserRepository.GetUserWithPhoneNumbers(userId); //this method will join to Phone
Regardless of which way I acquire the phones, assuming I modified one of them, how do I go about updating them? My limited understanding is that objects under the root should be updated through the root, which would steer me towards choice #1 below. Although this will work perfectly well with Entity Framework, it seems extremely undescriptive, because reading the code I have no idea what I'm actually updating, even though Entity Framework is keeping tabs on changed objects within the graph.
UserRepository.Update(user);
// Or
UserRepository.UpdatePhone(phone);
Lastly, assuming I have several lookup tables that are not really tied to anything, such as CountryCodes, ColorsCodes, SomethingElseCodes. I might use them to populate drop downs or for whatever other reason. Are these standalone repositories? Can they be combined into some sort of logical grouping/repository such as CodesRepository? Or is that against best practices.
You are allowed to have any method you want in your repository :) In both of the cases you mention, it makes sense to return the user with the phone list populated. Normally the user object would not be fully populated with all the sub-information (say, all addresses and phone numbers), and we may have different methods for getting the user object populated with different kinds of information. This is referred to as lazy loading.
User GetUserDetailsWithPhones()
{
    // Populate User along with Phones
}
For updating, in this case, the user is being updated, not the phone number itself. The storage model may store the phones in a different table, and that way you may think that just the phones are being updated, but that is not the case if you think from a DDD perspective. As far as readability is concerned, while the line
UserRepository.Update(user)
alone doesn't convey what is being updated, the code above it makes that clear. It would also most likely be part of a front-end method call that signifies what is being updated.
For the lookup tables, and actually even otherwise, it is useful to have a GenericRepository and use that. The custom repository can inherit from the GenericRepository:
public class UserRepository : GenericRepository<User>
{
    IEnumerable<User> GetUserByCustomCriteria()
    {
    }

    User GetUserDetailsWithPhones()
    {
        // Populate User along with Phones
    }

    User GetUserDetailsWithAllSubInfo()
    {
        // Populate User along with all sub information, e.g. phones, addresses etc.
    }
}
Search for 'generic repository Entity Framework' and you will find many nice implementations. Use one of those or write your own.
Your example of the aggregate root repository is perfectly fine, i.e. any entity that cannot reasonably exist without depending on another shouldn't have its own repository (in your case, Phone). Without this consideration you can quickly find yourself with an explosion of repositories in a 1:1 mapping to db tables.
You should look at using the Unit of Work pattern for data changes rather than the repositories themselves, as I think they're causing you some confusion around intent when it comes to persisting changes back to the db. In an EF solution, the Unit of Work is essentially an interface wrapper around your EF context.
With regards to your repository for lookup data, we simply create a ReferenceDataRepository that becomes responsible for data that doesn't specifically belong to a domain entity (countries, colours, etc.).
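The Unit of Work wrapper itself can be as thin as this (a sketch; the names are illustrative):

// Thin interface wrapper around the EF context, as described above.
public interface IUnitOfWork
{
    void Commit();
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly DbContext _context;

    public EfUnitOfWork(DbContext context)
    {
        _context = context;
    }

    // All changes made through repositories sharing this context
    // are persisted together.
    public void Commit()
    {
        _context.SaveChanges();
    }
}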
If a phone makes no sense without a user, it's an entity (if you care about its identity) or a value object, and it should always be modified through the user and retrieved/updated together with it.
Think about aggregate roots as context definers - they draw local contexts, but are themselves in the global context (your application).
If you follow domain driven design, repositories are supposed to be 1:1 with aggregate roots.
No excuses.
I bet these are the problems you are facing:
technical difficulties - the object-relational impedance mismatch. You are struggling to persist whole object graphs with ease, and Entity Framework kind of fails to help.
the domain model is data-centric (as opposed to behavior-centric). Because of that, you lose knowledge about the object hierarchy (the previously mentioned contexts) and magically everything becomes an aggregate root.
I'm not sure how to fix the first problem, but I've noticed that fixing the second one fixes the first well enough. To understand what I mean by behavior-centric, give this paper a try.
P.S. Reducing a repository to an aggregate root makes no sense.
P.P.S. Avoid "CodeRepositories". That leads to data-centric -> procedural code.
P.P.P.S. Avoid the Unit of Work pattern. Aggregate roots should define transaction boundaries.
This is an old question, but thought worth posting a simple solution.
The EF context already gives you both a Unit of Work (it tracks changes) and repositories (in-memory references to stuff from the DB). Further abstraction is not mandatory.
Remove the DbSet<Phone> from your context class, as Phone is not an aggregate root.
Use the Phones navigation property on User instead.
static void updateNumber(int userId, string oldNumber, string newNumber)
{
    using (MyContext uow = new MyContext()) // Unit of Work
    {
        DbSet<User> repo = uow.Users; // Repository
        User user = repo.Find(userId);
        Phone oldPhone = user.Phones.Where(x => x.Number.Trim() == oldNumber).SingleOrDefault();
        oldPhone.Number = newNumber;
        uow.SaveChanges();
    }
}
If a Phone entity only makes sense together with an aggregate root User, then I would also think it makes sense that the operation of adding a new Phone record is the responsibility of the User domain object, through a specific method (DDD behavior). That could make perfect sense for several reasons. The immediate reason is that we should check that the User object exists, since the Phone entity depends on its existence, and perhaps keep a transaction lock on it while doing more validation checks, to ensure no other process has deleted the root aggregate before we are done validating the operation. In other cases, with other kinds of root aggregates, you might want to aggregate or calculate some value and persist it in column properties of the root aggregate for more efficient processing by other operations later on. Note that though I suggest the User domain object have a method that adds the Phone, it doesn't mean it should know about the existence of the database or EF; one of the great features of EF and Hibernate is that they can track changes made to entity classes transparently, and that includes the addition of new related entities via their navigation collection properties.
Also, if you want methods that retrieve all phones regardless of the users owning them, you could still do it through the User repository: you only need one method that returns all users as IQueryable, and then you can map them to get all user phones and run a refined query on that. So you don't even need a PhoneRepository in this case. Besides, I would rather use a class with extension methods for IQueryable that I can use anywhere, not just from a repository class, if I wanted to abstract queries behind methods.
Just one caveat: to be able to delete Phone entities using only the domain object and not a Phone repository, you need to make sure the UserId is part of the Phone primary key; in other words, the primary key of a Phone record is a composite key made up of the UserId and some other property (I suggest an auto-generated identity) in the Phone entity. This makes sense intuitively, as the Phone record is "owned" by the User record, and its removal from the User navigation collection would equal its complete removal from the database.
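In EF the composite key can be declared like this (a sketch; PhoneId stands in for whichever identity property you pick):

// Phone's primary key = (UserId, PhoneId), so a Phone cannot exist
// without its owning User.
modelBuilder.Entity<Phone>()
    .HasKey(p => new { p.UserId, p.PhoneId });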
We are using Linq to SQL to read and write our domain objects to a SQL Server database.
We are exposing a number of services (via WCF) to perform various operations. Conceptually, the implementation of these operations consists of three steps: reconstitute the necessary domain objects from the database; execute the operation on the domain objects; persist the (now changed) domain objects back to the database.
The problem is that sometimes there are two or more instances of the same entity object, which can lead to inconsistencies when saving the objects back to the db. A little made-up example:
public void Move(string sourceLocationId, string destinationLocationId, string itemId);
which is supposed to move the item with the given id from the source to the destination location (the actual services are more complicated, often involving many locations, items, etc.). Now, it could be that the source and destination location ids are the same - a naive implementation would just reconstitute two instances of the entity object, which would lead to problems.
This issue is now "solved" by checking for it manually, i.e. we reconstitute a first location, check whether the id of the second is different from it, and only if so reconstitute the second, and so on. This is obviously difficult and error-prone.
Anyway, I was actually surprised that there does not seem to be a "standard" solution for this in domain driven design. In particular, repositories and factories do not seem to solve this problem (unless they maintain their own cache, which then needs to be updated, etc.).
My idea would be to have a DomainContext object per operation, which tracks and caches the domain objects used in that particular operation. Instead of reconstituting and saving individual domain objects, such a context would be reconstituted and saved as a whole (possibly using repositories), and it could act as a cache for the domain objects used in that particular operation.
Anyway, it seems that this is a common problem, so how is this usually dealt with? What do you think of the idea above?
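To illustrate, the core of that DomainContext idea would be a simple identity map (a rough sketch with made-up names):

// Per-operation identity map: the same id always yields the same instance.
public class DomainContext
{
    private readonly Dictionary<string, object> _identityMap =
        new Dictionary<string, object>();

    public T Get<T>(string id, Func<string, T> reconstitute) where T : class
    {
        string key = typeof(T).FullName + ":" + id;
        object cached;
        if (!_identityMap.TryGetValue(key, out cached))
        {
            cached = reconstitute(id);
            _identityMap[key] = cached;
        }
        return (T)cached;
    }
}

// Usage: source and destination are guaranteed to be the same instance
// whenever the ids are equal.
// var source = context.Get(sourceLocationId, locationRepository.GetById);
// var destination = context.Get(destinationLocationId, locationRepository.GetById);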
The DataContext in Linq-To-Sql supports the Identity Map concept out of the box and should be caching the objects you retrieve. The objects will only be different if you are not using the same DataContext for each GetById() operation.
Linq to Sql objects aren't really valid outside of the lifetime of the DataContext. You may find Rick Strahl's Linq to SQL DataContext Lifetime Management a good background read.
Also, the ORM is not responsible for logic in the domain. It's not going to disallow your example Move operation; it's up to the domain to decide what that means. Does it ignore it, or is it an error? It's your domain logic, and it needs to be implemented at the service boundary you are creating.
However, Linq to SQL does know when an object changes, and from what I've seen, it won't record the change if you are re-assigning the same value. E.g. if Item.LocationID is 12, setting the LocationID to 12 again won't trigger an update when SubmitChanges() is called.
Based on the example given, I'd be tempted to return early without ever loading an object if the source and destination are the same.
public void Move(string sourceLocationId, string destinationLocationId, string itemId)
{
    if (sourceLocationId == destinationLocationId)
        return;

    using (DataContext ctx = new DataContext())
    {
        Item item = ctx.Items.First(o => o.ItemID == itemId);
        Location destination =
            ctx.Locations.First(o => o.LocationID == destinationLocationId);
        item.Location = destination;
        ctx.SubmitChanges();
    }
}
Another small point, which may or may not be applicable, is that you should make your interfaces as chunky as possible. E.g. if you're typically going to perform 10 move operations at once, it's better to call one service method that performs all 10 operations together than to call one operation at a time (ref: chunky vs chatty).
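Applied to this example, the chunky version might look like this (MoveRequest is a made-up carrier type):

public class MoveRequest
{
    public string SourceLocationId;
    public string DestinationLocationId;
    public string ItemId;
}

// One service call, many moves, a single DataContext and SubmitChanges:
// the whole batch succeeds or fails together.
public void MoveMany(IEnumerable<MoveRequest> moves)
{
    using (DataContext ctx = new DataContext())
    {
        foreach (MoveRequest move in moves)
        {
            if (move.SourceLocationId == move.DestinationLocationId)
                continue;

            Item item = ctx.Items.First(o => o.ItemID == move.ItemId);
            item.Location = ctx.Locations.First(o => o.LocationID == move.DestinationLocationId);
        }

        ctx.SubmitChanges();
    }
}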
Many ORMs use two concepts that, if I understand you correctly, address your issue. The first and most relevant is the Context: this is responsible for ensuring that only one object represents an entity (a database table row, in the simple case) no matter how many times or in how many ways it's requested from the database. The second is the Unit of Work; this ensures that updates to the database for a group of entities either all succeed or all fail.
Both of these are implemented by the ORM I'm most familiar with (LLBLGen Pro), however I believe NHibernate and others also implement these concepts.