I have an entity in my domain whose status I need to track, and a handler for this purpose. The status can be InProgress, Completed, or Deleted. I use Cosmos DB (SQL API) to store the data.
Inside Cosmos DB, I have created one container for the entities and another container for their statuses. Accordingly, in the code I have two repositories, one per container.
internal interface EntityRepository
{
    Task AddAsync(Entity entity);
}

internal interface EntityStatusRepository
{
    Task AddAsync(EntityStatus entityStatus);
}
And for each repository, I have created one service
public interface EntityService
{
    Task AddAsync(Entity entity);
}

public interface EntityStatusService
{
    Task AddStatusAsync(EntityStatus entityStatus);
}
The services, not the repositories, are what is exposed to the handler as public interfaces.
Now I really wonder:
Based on DDD, given an entity and its status, should I create two separate repositories, or should they be combined into one repository since they belong to the same context?
Do I need to expose the entity and its status through one service?
Does anyone have a suggestion, or even a better solution?
I'm not a DDD expert - just reading through Implementing DDD by Vernon - but from my experience, you have a bounded-context issue. Your models Entity and EntityStatus are probably closely related. In that case, you should create an EntityStatusRepository only if there is a place where you need EntityStatus objects by themselves. If you always need both of them together, just go with EntityRepository.
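To make the single-repository option concrete, here is a minimal sketch. It assumes the status becomes part of the Entity model (e.g. an Entity.Status property) instead of living in a second container, which is not how the question currently models it:

using System.Threading.Tasks;

// One repository for the whole aggregate: the entity is persisted together
// with its current status, so no second repository is needed.
internal interface EntityRepository
{
    Task AddAsync(Entity entity); // persists the entity together with its current status
}

// A separate EntityStatusRepository only earns its keep if some use case
// needs EntityStatus documents on their own, without the owning Entity.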
It appears the EntityStatus should be a property on Entity, but let's go through the logic to make sure. Note that these are not hard rules, just my rules of thumb when I'm going through these decisions. Extenuating circumstances may supersede these.
Should EntityStatus be an Aggregate Root? Would it make sense to work with an EntityStatus by itself, with no relationship to anything else, or with only references to child objects? If not, then it is not an Aggregate Root. That means it's either a supporting entity or a property.
If the parent entity always has exactly one current value of EntityStatus, and no logic needs to be embedded inside the status, then it is best to leave it as a property on the Entity.
If the EntityStatus needs logic built into it, then it should probably be a value object. For example, if the status can only change from X to Y in some circumstances but not others, or if some external process must be launched when a status changes, it should be a value object whose value is set by the Entity. Being a value object doesn't necessarily mean it's a separate entity, though.
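As a minimal sketch of what "logic built into the status" can look like: here the status is a simple value whose transitions are guarded by the Entity. The specific transition rules are invented purely for illustration and are not from the question:

using System;

public enum EntityStatus { InProgress, Completed, Deleted }

public class Entity
{
    public string Id { get; private set; }
    public EntityStatus Status { get; private set; } = EntityStatus.InProgress;

    public void Complete()
    {
        // The aggregate root owns the transition rules for its status.
        if (Status != EntityStatus.InProgress)
            throw new InvalidOperationException("Only an in-progress entity can be completed.");
        Status = EntityStatus.Completed;
    }

    public void Delete()
    {
        if (Status == EntityStatus.Completed)
            throw new InvalidOperationException("A completed entity cannot be deleted.");
        Status = EntityStatus.Deleted;
    }
}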
Finally, I prefer to tie my repositories to Aggregate Roots even if there are value objects owned by the AR. An AR update should be saved in full or not at all, and extending a single DB transaction across repositories is less than ideal. If you're using the Unit of Work pattern, then an AR update should be a single unit. I've tried creating a separate repo per table, where the AR repo uses the individual table repos, and it felt too granular with all the plumbing code. It was also easy to lose sight of the business idea you're trying to accomplish when dealing with all the pieces floating around. In the end, though, there's no rule governing this, so do what you think is right.
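In the asker's Cosmos DB setup, the all-or-nothing concern largely goes away if the status lives inside the entity's document, because a write to a single Cosmos DB item is atomic. A rough sketch implementing the question's original EntityRepository, assuming the Microsoft.Azure.Cosmos v3 SDK, an Entity.Id property, and the entity id as a (hypothetical) partition key:

using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

internal sealed class CosmosEntityRepository : EntityRepository
{
    private readonly Container _container;

    public CosmosEntityRepository(Container container)
    {
        _container = container;
    }

    public Task AddAsync(Entity entity)
    {
        // One document, one atomic write: the status travels with the entity,
        // so there is no cross-container "transaction" to coordinate.
        return _container.UpsertItemAsync(entity, new PartitionKey(entity.Id));
    }
}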
Related
If I understand correctly, CQRS is about dividing write and read responsibilities. So I can use repositories in my write model, for example var user = repository.GetUserById(); - this gets the user by id, and then repository.UpdateUser(user); updates the user with the changed properties. In the read model we can construct more complex DTOs:
public class UsersReadModel
{
    private IMyContext context;

    public UsersReadModel(IMyContext context)
    {
        this.context = context;
    }

    public ComplexUserDTO GetComplexUser(ISelectQuery query)
    {
        ComplexUserDTO user = new ComplexUserDTO();

        // get all user properties (GetUser by id), mapped via AutoMapper's ProjectTo
        user.UserDTO = context.Users
            .Where(d => d.UserId == query.UserId)
            .ProjectTo<UserDTO>()
            .FirstOrDefault();

        // here I don't need everything from the Policies table, just two columns,
        // so I project into an anonymous object (the second member needs an explicit name)
        var policyObject = context.Policies
            .Where(f => f.BasePolicyId == query.PolicyId)
            .Select(s => new
            {
                s.PoliciesNames,
                ClientsNames = s.Clients.Select(d => d.ClientNames).ToList()
            })
            .FirstOrDefault();

        user.PoliciesNames = policyObject.PoliciesNames;
        user.ClientsNames = policyObject.ClientsNames;
        return user;
    }
}
So in my write model, I get the user by id from my repository because I don't need to map it to a DTO, and in my read model I also get the user by id, but I map it to a DTO because I need it in that shape. Isn't this code repetition (if I want to change how the user is fetched by id, I'll have to change it in both places)? Can I use repositories in my read model? In that case I'd have to use both repositories and the context (for the anonymous object and for selecting only part of the table's columns) in UsersReadModel.
If your domain is very simple, then the Write and the Read models will be very similar and a lot of code duplication will occur. In fact, this works in reverse as well: if your Write model is very similar to the Read model, then you could implement them as CRUD and you don't necessarily need CQRS.
Can I use repositories in my read model?
You can have anything you want on the Read side; the two sides are separated from many points of view.
In CQRS there are many cases when code duplication occurs. Don't be afraid of that. You could extract that in shared classes.
P.S.
You should have a Read model for every use case, not for every Write model. If you have a 1:1 correspondence from Write to Read, then that could also mean you should have implemented this using CRUD.
P.S. I like to use CQRS even if the domain is simple as I like to have very optimized Read models (different persistence type, no JOINS, custom data sharding etc).
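One way to "extract that in shared classes" is to keep the projection from entity to DTO in a single place that both the write side and the read model can reuse. A small sketch; the UserDTO properties and the IMyContext.Users set are assumptions based on the question's code:

using System;
using System.Linq;
using System.Linq.Expressions;

// The one place that knows how a User row becomes a UserDTO.
public static class UserProjections
{
    public static readonly Expression<Func<User, UserDTO>> ToDto =
        u => new UserDTO { UserId = u.UserId, Name = u.Name };
}

// Both sides reuse it, so "get user by id" is not written twice:
// var dto = context.Users
//     .Where(u => u.UserId == query.UserId)
//     .Select(UserProjections.ToDto)
//     .FirstOrDefault();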
There are a few things to look at here. From your description, it doesn't sound like there is a separation between the read and write models. Remember, they have very different purposes.
CQRS leans heavily on domain-driven design principles. A key principle is the encapsulation of your domain objects.
As a result, you wouldn't expect a domain object to have 'properties' on it (especially not setters). It may have an ID, for example, but not much else. This is because its role is to protect its own invariants - not something you can do easily if you have setters.
I would also argue that a domain object shouldn't really have getters except for the id. If you have a good read model there is little need for them, and exposing them may encourage incorrect use of the object. There are times when this idea can be relaxed a little, although I can't think of one right now.
As a result, a repository for a domain object can be very simple. GetById and Save (unless you are using event sourcing but that's another topic).
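In code, such a write-side repository can be as small as this (a sketch; the interface name and the int id are illustrative):

using System.Threading.Tasks;

// Write-side repository for the aggregate root: nothing but load and save.
public interface IUserRepository
{
    Task<User> GetById(int id);
    Task Save(User user);
}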
The Read model, on the other hand, should be shaped to serve the UI. Each model is likely to have a mix of data from various sources. For example, you are likely to want to see a user's details in the context of their activities, orders, value to the company, or whatever the purpose of the application is.
This explanation of the typical structure of a CQRS application may be helpful: CQRS + Event Sourcing - A Step by Step Overview
And this may give you some insight into creating domain objects: Aggregate Root - How to Build One for CQRS and Event Sourcing
Hope this helps.
If I understand correctly CQRS is about dividing write and read responsibilities.
Closer to say that it is about having data models that are designed for the use cases that they support.
We have Truth, and that truth has multiple representations.
The trick is that the representations don't need to be coupled in time -- we can update the "book of record" representation now, and the representations we use to support queries eventually.
Can I use repositories in my read model?
Absolutely. There's no magic.
Udi Dahan would probably suggest that you be thinking about different repositories, or perhaps more precisely methods on your repositories that provide different explicit representations of the read model depending on what you are doing. Each method loads the representation that you need for that particular use case.
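In practice that often looks like a read-side access object with one explicit method per use case, each returning exactly the shape that screen needs. A sketch; the DTO names are hypothetical:

using System.Collections.Generic;
using System.Threading.Tasks;

public interface IUserReadModel
{
    Task<UserSummaryDto> GetUserSummary(int userId);               // e.g. page header widget
    Task<IReadOnlyList<UserOrderDto>> GetRecentOrders(int userId); // e.g. orders screen
}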
I've read some of the articles on BL, but the methodology seems counterintuitive to me. It seems to break normal OOP principles. Here's a very simplified example: A client table contains the birthdate and gender of each client. A life expectancy table contains the clientId, age, and probability of survivorship to that age.
Wouldn't basic OOP principles call for methods to be integrated into the entity? E.g. the calculateLifeExpectancy() method in the client class below.
class client {
    int clientId;
    int age;
    bool male;
    List<surviveProb> lifeExpectancy;

    void calculateLifeExpectancy(); // calculates lifeExpectancy
}

class surviveProb {
    int surviveProbId;
    int clientId;
    int age;
    double probability;
}
Yet the methodologies today seem to suggest such operations must live in a separate layer and a separate class: methods operating on entities should not be included in the Entity Framework entities. This seems counterintuitive. I really want to put methods into EF entities. Is this going to lead to problems? What am I missing here?
After some research I now use some patterns that I think are good for maintenance purposes and for understanding the application.
Let's say you want to register an account.
In the controller, I would have an AddAccountViewModel that exposes only the minimum properties to a user - no worries about the user injecting something bad into an unexpected property. Now, using dependency injection, I would call a facade, say _accountsFacade.RegisterAccount, and pass the view model as a parameter.
Inside this method in the facade, I would map from the view model to the model, and this facade would be responsible for doing everything that needs to be done so the account can be created. In my opinion, this is where all the business logic goes. In this facade, using dependency injection again, I use a Unit of Work and add and edit entities through the context: _unitOfWork.AccountRepository.Add(account)
You see? Controllers only "route" the application, facades handle the business logic, the unit of work handles the context, the repository only communicates with the database... and the model only exposes properties.
This makes the mapping faster, as stated, and it separates concerns. Sometimes the logic of adding an account may involve handling other objects that shouldn't be used inside the account object.
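Roughly, the described flow could look like this. The AddAccountViewModel, AccountsFacade, and _unitOfWork.AccountRepository.Add call come from the answer; the IUnitOfWork shape, the mapped properties, and the final SaveChanges call are assumptions:

public class AccountsFacade
{
    private readonly IUnitOfWork _unitOfWork;

    public AccountsFacade(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void RegisterAccount(AddAccountViewModel viewModel)
    {
        // Map the narrow view model onto the domain model.
        var account = new Account
        {
            UserName = viewModel.UserName,
            Email = viewModel.Email
        };

        // All registration business logic lives here; the controller only routes.
        _unitOfWork.AccountRepository.Add(account);
        _unitOfWork.SaveChanges();
    }
}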
I hope you can understand what I want to explain, as my English is not so great.
I currently have a repository for just about every table in the database and would like to further align myself with DDD by reducing them to aggregate roots only.
Let’s assume that I have the following tables, User and Phone. Each user might have one or more phones. Without the notion of aggregate root I might do something like this:
//assuming I have the userId in session for example and I want to update a phone number
List<Phone> phones = PhoneRepository.GetPhoneNumberByUserId(userId);
phones[0].Number = "911";
PhoneRepository.Update(phones[0]);
The concept of aggregate roots is easier to understand on paper than in practice. I will never have phone numbers that do not belong to a User, so would it make sense to do away with the PhoneRepository and incorporate phone related methods into the UserRepository? Assuming the answer is yes, I’m going to rewrite the prior code sample.
Am I allowed to have a method on the UserRepository that returns phone numbers? Or should it always return a reference to a User, and then traverse the relationship through the User to get to the phone numbers:
List<Phone> phones = UserRepository.GetPhoneNumbers(userId);
// Or
User user = UserRepository.GetUserWithPhoneNumbers(userId); //this method will join to Phone
Regardless of which way I acquire the phones, assuming I modified one of them, how do I go about updating them? My limited understanding is that objects under the root should be updated through the root, which would steer me towards choice #1 below. Although this will work perfectly well with Entity Framework, it seems extremely undescriptive, because reading the code I have no idea what I'm actually updating, even though Entity Framework is keeping tabs on changed objects within the graph.
UserRepository.Update(user);
// Or
UserRepository.UpdatePhone(phone);
Lastly, assuming I have several lookup tables that are not really tied to anything, such as CountryCodes, ColorsCodes, SomethingElseCodes. I might use them to populate drop downs or for whatever other reason. Are these standalone repositories? Can they be combined into some sort of logical grouping/repository such as CodesRepository? Or is that against best practices.
You are allowed to have any method you want in your repository :) In both of the cases you mention, it makes sense to return the user with the phone list populated. Normally the user object would not be fully populated with all the sub-information (say, all addresses and phone numbers), and we may have different methods for getting the user object populated with different kinds of information, loading related data only when a use case needs it (in the spirit of lazy loading).
User GetUserDetailsWithPhones()
{
// Populate User along with Phones
}
For updating, in this case, the user is being updated, not the phone number itself. The storage model may store the phones in a different table, and that may make you think that just the phones are being updated, but that is not the case if you think from a DDD perspective. As far as readability is concerned, while the line
UserRepository.Update(user)
alone doesn't convey what is being updated, the code above it makes it clear what is being updated. Also, it would most likely be part of a front-end method call whose name signifies what is being updated.
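Putting those two points together, an update through the root might read like this (a sketch; the userId parameter on GetUserDetailsWithPhones is added for illustration):

// Load the aggregate with the data this use case needs...
User user = UserRepository.GetUserDetailsWithPhones(userId);

// ...modify it through the root...
user.Phones.First(p => p.Number == oldNumber).Number = newNumber;

// ...and save the root; the ORM works out which rows actually changed.
UserRepository.Update(user);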
For the lookup tables, and actually even otherwise, it is useful to have a GenericRepository and use that. The custom repository can then inherit from the GenericRepository.
public class UserRepository : GenericRepository<User>
{
    IEnumerable<User> GetUserByCustomCriteria()
    {
        // Query users by whatever criteria the use case needs
    }

    User GetUserDetailsWithPhones()
    {
        // Populate User along with Phones
    }

    User GetUserDetailsWithAllSubInfo()
    {
        // Populate User along with all sub information e.g. phones, addresses etc.
    }
}
Search for "generic repository Entity Framework" and you will find many nice implementations. Use one of those or write your own.
Your example on the Aggregate Root repository is perfectly fine, i.e. any entity that cannot reasonably exist without a dependency on another shouldn't have its own repository (in your case, Phone). Without this consideration you can quickly find yourself with an explosion of repositories in a 1:1 mapping to DB tables.
You should look at using the Unit of Work pattern for data changes rather than the repositories themselves, as I think they're causing you some confusion around intent when it comes to persisting changes back to the DB. In an EF solution, the Unit of Work is essentially an interface wrapper around your EF context.
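A minimal sketch of that wrapper; the interface and class names here are illustrative, not from the answer:

// The EF context already tracks changes; the interface just gives the service
// layer a seam to depend on (and tests something to fake).
public interface IUnitOfWork
{
    IUserRepository Users { get; }
    int Commit(); // typically delegates to DbContext.SaveChanges()
}

public class EfUnitOfWork : IUnitOfWork
{
    private readonly MyDbContext _context;

    public EfUnitOfWork(MyDbContext context, IUserRepository users)
    {
        _context = context;
        Users = users;
    }

    public IUserRepository Users { get; }

    public int Commit()
    {
        return _context.SaveChanges();
    }
}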
With regards to your repository for lookup data we simply create a ReferenceDataRepository that becomes responsible for data that doesn't specifically belong to a domain entity (Countries, Colours etc).
If a Phone makes no sense without a User, it's an entity (if you care about its identity) or a value object, and it should always be modified through the User and retrieved/updated together with it.
Think about aggregate roots as context definers - they draw local contexts but are in global context (Your application) themselves.
If you follow domain-driven design, repositories are supposed to be 1:1 with aggregate roots.
No excuses.
I bet these are the problems you are facing:
technical difficulties - the object-relational impedance mismatch. You are struggling to persist whole object graphs with ease, and Entity Framework kind of fails to help.
a data-centric (as opposed to behavior-centric) domain model. Because of that, you lose knowledge about the object hierarchy (the previously mentioned contexts) and magically everything becomes an aggregate root.
I'm not sure how to fix the first problem, but I've noticed that fixing the second one fixes the first well enough. To understand what I mean by behavior-centric, give this paper a try.
P.s. Reducing repository to aggregate root makes no sense.
P.p.s. Avoid "CodeRepositories". That leads to data centric -> procedural code.
P.p.p.s Avoid unit of work pattern. Aggregate roots should define transaction boundaries.
This is an old question, but thought worth posting a simple solution.
EF Context is already giving you both Unit of Work (tracks changes) and Repositories (in-memory reference to stuff from DB). Further abstraction is not mandatory.
Remove the Phone DbSet from your context class, as Phone is not an aggregate root.
Use the 'Phones' navigation property on User instead.
static void updateNumber(int userId, string oldNumber, string newNumber)
{
    using (MyContext uow = new MyContext()) // Unit of Work
    {
        DbSet<User> repo = uow.Users; // Repository
        User user = repo.Find(userId);

        Phone oldPhone = user.Phones.Where(x => x.Number.Trim() == oldNumber).SingleOrDefault();
        oldPhone.Number = newNumber;

        uow.SaveChanges();
    }
}
If a Phone entity only makes sense together with its aggregate root User, then I would also argue that adding a new Phone record should be the responsibility of the User domain object, through a specific method (DDD behavior). That can make perfect sense for several reasons. The immediate reason is that we should check that the User object exists, since the Phone entity depends on its existence, and perhaps keep a transaction lock on it while doing further validation checks, to ensure no other process has deleted the root aggregate before we finish validating the operation. In other cases, with other kinds of root aggregates, you might want to aggregate or calculate some value and persist it in column properties of the root aggregate for more efficient processing by other operations later on. Note that although I suggest the User domain object have a method that adds the Phone, that doesn't mean it should know about the existence of the database or EF; one of the great features of EF and Hibernate is that they can track changes made to entity classes transparently, which includes new related entities added through their navigation collection properties.
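As a sketch of that "specific method (DDD behavior)" on the root - the validation rule here is invented purely for illustration:

using System;
using System.Collections.Generic;

public class User
{
    public int UserId { get; private set; }
    public virtual ICollection<Phone> Phones { get; private set; } = new List<Phone>();

    public void AddPhone(string number)
    {
        // The root guards its own invariants before the child is added.
        if (string.IsNullOrWhiteSpace(number))
            throw new ArgumentException("A phone number is required.", nameof(number));

        // EF change tracking picks up the new Phone via the navigation collection.
        Phones.Add(new Phone { UserId = UserId, Number = number });
    }
}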
Also, if you want methods that retrieve all phones regardless of which users own them, you can still do that through the User repository: you only need one method that returns all users as an IQueryable; then you can project them to get all user phones and run a refined query on that. So you don't even need a PhoneRepository in this case. Besides, if I wanted to abstract queries behind methods, I would rather use a class with extension methods for IQueryable that I can use anywhere, not just from a repository class.
Just one caveat: to be able to delete Phone entities using only the domain object, and not a Phone repository, you need to make sure UserId is part of the Phone primary key. In other words, the primary key of a Phone record is a composite key made up of UserId and some other property (I suggest an auto-generated identity) of the Phone entity. This makes sense intuitively, as the Phone record is "owned" by the User record, and its removal from the User navigation collection then equals its complete removal from the database.
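With EF Core's fluent API, that composite key might be declared like this (EF6 is nearly identical; PhoneId as the second, auto-generated component is an assumption):

protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // A Phone's identity only makes sense together with its owning User.
    // (How the PhoneId component is generated is provider-specific; identity
    // columns inside composite keys may need extra configuration.)
    modelBuilder.Entity<Phone>()
        .HasKey(p => new { p.UserId, p.PhoneId });
}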
Validation of Business Objects is a common issue, but there are some solutions to solve that.
One of these solutions is to use the standalone NHibernate.Validator framework, which is an attribute-based validation framework.
But I'm facing a conceptual concern: attribute validators like NHibernate.Validator are great, but the validation is only performed on save/update/delete within the Session.
So I wonder whether business objects should not be self-validating in order to maintain their own integrity and consistency?
IMHO - there are 2 steps of validations needed for a Business Object (BO)/Entity to be valid:
Step 1: BO/Entity self-validation
- Here we check only whether the entity is valid in terms of its own state. For example: if the postal code is set, does it contain valid characters and is it of a valid length? These form the BO/Entity-level validations. Beyond this level, however, we cannot yet say that the BO/Entity is valid in your business domain and/or repository.
Typically the BO/Entity itself is able to enforce this level of validation.
Step 2: Context validation
- Here we need to validate whether the BO/Entity is valid within the context of the repository where it is being persisted. For example: is the postal code valid for the country in which the order is being placed or sent?
For this validation, some or all entities in the current context might need to be involved to make sure the BO/Entity is valid.
So, to keep the entities pure, you will need to split the validation into these two steps - one performed by the entity itself and the second by the repository that is persisting/working with the entity.
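A compact sketch of the two steps, using the answer's postal-code example; the Order type and IPostalCodeDirectory are illustrative stand-ins:

using System;

public class Order
{
    public string PostalCode { get; set; }
    public string CountryCode { get; set; }

    // Step 1: self-validation - state the entity can check entirely on its own.
    public bool IsSelfValid()
    {
        return !string.IsNullOrWhiteSpace(PostalCode) && PostalCode.Length <= 10;
    }
}

public class OrderRepository
{
    private readonly IPostalCodeDirectory _postalCodes; // external knowledge source

    public OrderRepository(IPostalCodeDirectory postalCodes)
    {
        _postalCodes = postalCodes;
    }

    public void Save(Order order)
    {
        if (!order.IsSelfValid())
            throw new InvalidOperationException("Order failed self-validation.");

        // Step 2: context validation - needs knowledge the entity cannot have by itself.
        if (!_postalCodes.Exists(order.CountryCode, order.PostalCode))
            throw new InvalidOperationException("Postal code is not valid for this country.");

        // ...persist the order...
    }
}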
HTH.
It's not always possible for them to self-validate though. What if you enter an "invalid" Zip Code? You could validate that the Zip Code needs to be in a specific format, but what if you want them to be "valid", that is "existing and matching the city"? Or what if you only accept phone numbers from certain area codes, and the list of valid codes is in a database maintained by the legal department?
If you can perform semantic validation, that's great and could go into the Business Class. But often, you might need extra validation that is simply not possible to handle by the business class itself but needs to be handled by the class that talks to the database and other external services.
I don't know if we are talking about the same idea, but if we are, I like what you explain. Very quickly, I'll explain what I do to solve this. In my case, all the business objects in my domain layer must override two methods.
Obviously, to support this, I have more classes involved, but I won't write them all here, because I'm only trying to explain the concept:
List<ValidationRule> notPassedValidationRules = new List<ValidationRule>();
//...
public override void ValidateErrorsWhenSaving(Validator validator)
{
//...
}
public override void ValidateErrorsWhenDelete(Validator validator)
{
//...
}
In these methods, I check some boolean conditions, maintaining a collection of rules that did not pass. In my case, these methods are invoked before my Unit of Work commits the changes (inserting new entities, updating, deleting), and any errors are shown to the user before committing.
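That commit hook might look roughly like this; Validator, the changed-entity list, and ShowErrorsToUser are simplified stand-ins for the author's actual classes:

public void Commit()
{
    var validator = new Validator();

    // Let every changed business object report its broken rules before anything is persisted.
    foreach (var entity in changedEntities)
        entity.ValidateErrorsWhenSaving(validator);

    if (validator.NotPassedValidationRules.Count > 0)
    {
        ShowErrorsToUser(validator.NotPassedValidationRules);
        return; // nothing is committed while rules are broken
    }

    context.SaveChanges();
}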
We are using Linq to SQL to read and write our domain objects to a SQL Server database.
We are exposing a number of services (via WCF) to do various operations. Conceptually, the implementation of these operations consists of three steps: reconstitute the necessary domain objects from the database; execute the operation on the domain objects; persist the (now changed) domain objects back to the database.
The problem is that sometimes there are two or more instances of the same entity object, which can lead to inconsistencies when saving the objects back to the DB. A little made-up example:
public void Move(string sourceLocationId, string destinationLocationId, string itemId);
which is supposed to move the item with the given id from the source to the destination location (actual services are more complicated, often involving many locations, items etc). Now, it could be that both source and destination location id are the same - a naive implementation would just reconstitute two instances of the entity object, which would lead to problems.
This issue is currently "solved" by checking for it manually, i.e. we reconstitute the first location, check whether the id of the second is different from it, and only if so reconstitute the second, and so on. This is obviously tedious and error-prone.
Anyway, I was actually surprised that there does not seem to be a "standard" solution for this in domain-driven design. In particular, repositories or factories do not seem to solve this problem (unless they maintain their own cache, which then needs to be updated, etc.).
My idea would be to create a DomainContext object per operation, which tracks and caches the domain objects used in that particular method. Instead of reconstituting and saving individual domain objects, such an object would be reconstituted and saved as a whole (possibly using repositories), and it would act as a cache for the domain objects used in that particular operation.
Anyway, it seems that this is a common problem, so how is this usually dealt with? What do you think of the idea above?
The DataContext in Linq-To-Sql supports the Identity Map concept out of the box and should be caching the objects you retrieve. The objects will only be different if you are not using the same DataContext for each GetById() operation.
Linq to Sql objects aren't really valid outside of the lifetime of the DataContext. You may find Rick Strahl's Linq to SQL DataContext Lifetime Management a good background read.
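A quick way to see the identity map at work (a sketch; MyDataContext, the Locations table, and the id value are placeholders):

using (var ctx = new MyDataContext())
{
    var a = ctx.Locations.Single(l => l.LocationID == 42);
    var b = ctx.Locations.Single(l => l.LocationID == 42);

    // Same DataContext => same tracked instance, even though two queries ran.
    Console.WriteLine(ReferenceEquals(a, b)); // True
}

// Two different DataContexts yield two distinct objects for the same row,
// which is exactly the duplicate-instance problem described in the question.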
Also, the ORM is not responsible for logic in the domain. It's not going to disallow your example Move operation; it's up to the domain to decide what that means. Does it ignore it, or is it an error? It's your domain logic, and it needs to be implemented at the service boundary you are creating.
However, Linq-To-Sql does know when an object changes, and from what I've looked at, it won't record the change if you are re-assigning the same value. e.g. if Item.LocationID = 12, setting the locationID to 12 again won't trigger an update when SubmitChanges() is called.
Based on the example given, I'd be tempted to return early without ever loading an object if the source and destination are the same.
public void Move(string sourceLocationId, string destinationLocationId, string itemId)
{
    if (sourceLocationId == destinationLocationId)
        return;

    using (DataContext ctx = new DataContext())
    {
        Item item = ctx.Items.First(o => o.ItemID == itemId);
        Location destination =
            ctx.Locations.First(o => o.LocationID == destinationLocationId);

        item.Location = destination;
        ctx.SubmitChanges();
    }
}
Another small point, which may or may not be applicable, is you should make your interfaces as chunky as possible. e.g. If you're typically going to perform 10 move operations at once, it's better to call 1 service method to perform all 10 operations at once, rather than 1 operation at a time. ref: chunky vs chatty
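A chunky variant of the earlier Move method could accept a batch; a sketch, with MoveRequest as a hypothetical DTO:

public class MoveRequest
{
    public string SourceLocationId { get; set; }
    public string DestinationLocationId { get; set; }
    public string ItemId { get; set; }
}

// One service call, one DataContext, one SubmitChanges for the whole batch.
public void Move(IEnumerable<MoveRequest> moves)
{
    using (DataContext ctx = new DataContext())
    {
        foreach (var move in moves)
        {
            if (move.SourceLocationId == move.DestinationLocationId)
                continue;

            Item item = ctx.Items.First(o => o.ItemID == move.ItemId);
            item.Location = ctx.Locations.First(o => o.LocationID == move.DestinationLocationId);
        }

        ctx.SubmitChanges();
    }
}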
Many ORMs use two concepts that, if I understand you correctly, address your issue. The first and most relevant is the Context; it is responsible for ensuring that only one object represents an entity (a database table row, in the simple case), no matter how many times or in how many ways it is requested from the database. The second is the Unit of Work; this ensures that updates to the database for a group of entities either all succeed or all fail.
Both of these are implemented by the ORM I'm most familiar with (LLBLGen Pro), however I believe NHibernate and others also implement these concepts.