DDD: construct AggregateRoot entity from external API, repositories and domain services - C#

Trying to apply DDD principles to a small project... I have a PlayerProfile aggregate root, which consists of a Club entity and a collection of Rating value objects. Periodically I have to sync all the PlayerProfile entities from an external portal by parsing the raw HTML.
For now I have come up with a solution that wraps the code which renews the PlayerProfiles in a simple PlayerProfileRepository, something like this:
public interface IPlayerProfileRepository
{
    Task<IReadOnlyCollection<PlayerProfile>> SyncPlayersProfilesFromPortal(string sourceUrl);
    // other methods that work with the data storage
}
First, I don't really like the idea of mixing methods that work with the data storage with methods that work with an external resource (HTML pages) to periodically create PlayerProfiles. To me that sounds more like a PlayerProfileFactory responsibility?
The actual implementation of IPlayerProfileRepository delegates the parsing of the actual pages to three IPageParsers, which live in the same layer as my repositories do. Something like this:
public PlayerProfileRepository(
    IPageParser<ParseClubDto> clubPageParser,
    IPageParser<ParsePlayerProfileDto> playerProfilePageParser,
    IPageParser<ParseRatingDto> ratingPageParser)
{
    _clubPageParser = clubPageParser;
    _playerProfilePageParser = playerProfilePageParser;
    _ratingPageParser = ratingPageParser;
}
I am not quite sure these DTOs really are DTOs, since they are used only by the IPageParsers to hold intermediate data while parsing the pages. I would like to keep them close to the IPageParser implementations in the data service layer, rather than share them in a separate DTOs project, and maybe name them differently.
After ParseClubDto, ParsePlayerProfileDto and ParseRatingDto are parsed, I pass them to the PlayerProfileFactory.Create factory method, something like this:
var playerProfiles = new List<PlayerProfile>();
var clubs = await _clubPageParser.ParseAsync(sourceUrl);
foreach (var club in clubs)
{
    var clubPlayers = await _playerProfilePageParser.ParseAsync(club.PlayersPageUrl);
    foreach (var clubPlayer in clubPlayers)
    {
        var ratings = await _ratingPageParser.ParseAsync(clubPlayer.RatingsPageUrl);
        playerProfiles.Add(PlayerProfileFactory.Create(club, clubPlayer, ratings));
    }
}
return playerProfiles;
After this is done I have to perform the actual syncing with the existing aggregate roots in the DB, which I currently do simply by calling ResyncFrom(PlayerProfile profile) on the aggregate root. Or should that rather be a separate PlayerProfile domain service?
In general I have a feeling that I am doing something wrong, so any comments are welcome.

I think your example is a case of integration between two bounded contexts (BCs) using the anti-corruption layer pattern.
I would have a port (interface in the domain) with a method contract that returns a list of player profile aggregates.
In the infrastructure layer I would have an adapter that implements the port by reading the HTML data from the remote portal (for example using a REST API) and constructing the aggregates from that data.
In the application layer I would have an application service in which you inject both the port and the player profile aggregate repository that deals with the local db. The application service calls the port to construct the aggregates, and then calls the repository to store them.
I would run this application service periodically.
This would be an async integration without events, but you could implement it with events if the remote portal fires events.
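A minimal sketch of that shape, reusing the parsers from the question. Note that IPlayerProfilePort, PortalPlayerProfileAdapter, SyncPlayerProfilesService and SaveAsync are all names I am assuming here, not an established API:

// Domain layer: the port. The domain only states that profiles can be obtained.
public interface IPlayerProfilePort
{
    Task<IReadOnlyCollection<PlayerProfile>> FetchPlayerProfilesAsync(string sourceUrl);
}

// Infrastructure layer: the adapter (the anti-corruption layer). It hides the
// HTML parsing and translates portal data into domain aggregates.
public class PortalPlayerProfileAdapter : IPlayerProfilePort
{
    private readonly IPageParser<ParseClubDto> _clubPageParser;
    private readonly IPageParser<ParsePlayerProfileDto> _playerProfilePageParser;
    private readonly IPageParser<ParseRatingDto> _ratingPageParser;

    public PortalPlayerProfileAdapter(
        IPageParser<ParseClubDto> clubPageParser,
        IPageParser<ParsePlayerProfileDto> playerProfilePageParser,
        IPageParser<ParseRatingDto> ratingPageParser)
    {
        _clubPageParser = clubPageParser;
        _playerProfilePageParser = playerProfilePageParser;
        _ratingPageParser = ratingPageParser;
    }

    public async Task<IReadOnlyCollection<PlayerProfile>> FetchPlayerProfilesAsync(string sourceUrl)
    {
        // Same parsing loop as in the question, ending in PlayerProfileFactory.Create(...)
        var playerProfiles = new List<PlayerProfile>();
        var clubs = await _clubPageParser.ParseAsync(sourceUrl);
        foreach (var club in clubs)
        {
            var clubPlayers = await _playerProfilePageParser.ParseAsync(club.PlayersPageUrl);
            foreach (var clubPlayer in clubPlayers)
            {
                var ratings = await _ratingPageParser.ParseAsync(clubPlayer.RatingsPageUrl);
                playerProfiles.Add(PlayerProfileFactory.Create(club, clubPlayer, ratings));
            }
        }
        return playerProfiles;
    }
}

// Application layer: the service that is run periodically (e.g. by a scheduler).
public class SyncPlayerProfilesService
{
    private readonly IPlayerProfilePort _port;
    private readonly IPlayerProfileRepository _repository;

    public SyncPlayerProfilesService(IPlayerProfilePort port, IPlayerProfileRepository repository)
    {
        _port = port;
        _repository = repository;
    }

    public async Task SyncAsync(string sourceUrl)
    {
        var profiles = await _port.FetchPlayerProfilesAsync(sourceUrl);
        foreach (var profile in profiles)
        {
            await _repository.SaveAsync(profile); // assumed persistence method
        }
    }
}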

The IPlayerProfileRepository interface is usually defined in the Domain and describes to the outside world how the Aggregate Root should be retrieved, usually by its Id. So the method SyncPlayersProfilesFromPortal should certainly not be part of this interface.
Syncing data is an infrastructure concern and can be done asynchronously in the background as already suggested in the previous answer.
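Concretely, the repository contract might keep only storage concerns, something like this sketch (method names are assumed):

public interface IPlayerProfileRepository
{
    Task<PlayerProfile> GetByIdAsync(Guid id);
    Task SaveAsync(PlayerProfile profile);
    // no portal syncing here; that belongs behind a separate port/adapter
    // in the infrastructure, as described in the previous answer
}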

Related

CQRS - Passing complex paging / filtering data through the domain

I am trying to implement the CQRS pattern using MediatR and all is set up correctly and working well. I do however have an issue when trying to implement the DevExtreme components in my views. The components require an endpoint that accepts a DataSourceLoadOptions object, which can then be coupled with the DataSourceLoader class and an IQueryable object to automate filtering/paging/sorting etc. This code is fantastic and really gets rid of a lot of boilerplate.
Here is an example of the "old way" I used to do things:
[HttpGet]
public virtual object Get(DataSourceLoadOptions loadOptions)
{
    var queryable = this.context.Set<TEntity>();
    return DataSourceLoader.Load(queryable, loadOptions);
}
As you can see, it really is quite nice; however it is old school, not layered, and couples me to EF as a persistence mechanism. Replacing this with a CQRS pattern is going to be a bit tricky, because I do not want my application or domain or even database layer to know about DevExtreme (it's a view technology and must remain there). I am also not really keen on returning a plain IQueryable from the MediatR response, as that means things like keeping the context alive, testability issues, and some LINQ queries that cannot be materialized into SQL... smells bad.
I am wondering if there is another way to somehow extract an interface and then maybe create a service that I can inject through DI to resolve this? I can't really find any resources on the net regarding this. As usual, all the examples are just "hello world" use cases and none of them really get their hands dirty with "real world" problems like filtering / paging / identity etc.
If anyone has any ideas, please point me in the right direction.
We are using Kendo UI, but the problem is the same. We are returning IQueryable<T> from our queries for cases where we need paging done for the UI. And then we have a test that ensures that the query can be executed with pure SQL.
Something like this:
public class MyProjectQuery : IQuery<IQueryable<Project>>
{
    // params
}

[HttpGet]
public virtual object Get(DataSourceLoadOptions loadOptions)
{
    var queryable = _mediator.Query(new MyProjectQuery());
    return DataSourceLoader.Load(queryable, loadOptions);
}
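The test mentioned above can be as simple as executing the query against a real EF provider, which forces translation of the whole expression tree. A sketch with assumed names (xUnit, an EF-backed handler and a test connection string):

[Fact]
public void MyProjectQuery_can_be_materialized_into_sql()
{
    using (var context = new AppDbContext(RealSqlConnectionString)) // assumed test fixture
    {
        var handler = new MyProjectQueryHandler(context); // assumed handler
        var queryable = handler.Handle(new MyProjectQuery());

        // Forcing enumeration makes EF translate the whole expression tree;
        // any expression that cannot be turned into pure SQL throws here.
        var results = queryable.ToList();
    }
}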

Alternative for initializing properties in the constructor in (Dynamics) CRM

I'm currently working on a custom CRM-style solution (EF/WinForms/OData Web API) and I wonder how to implement a quite simple requirement:
Let's say there is a simple Project entity. It is possible to assign Tasks to it. There is a DefaultTaskResponsible defined in the Project. Whenever a Task is created, the Project's DefaultTaskResponsible is used as the Task.Responsible. But it is possible to change the Task.Responsible, and even to set it to null.
So, in a 'normal' programming world, I would use a Task constructor accepting the Project and set the Responsible there:
public class Task {
    public Task(Project p) {
        this.Responsible = p.DefaultTaskResponsible;
        // ...
    }
}
But how should I implement something like this in a CRM world with Lookup views? In Dynamics CRM (or in my custom solution), there is a Task view with a Project Lookup field. It does not make sense to use a custom Task constructor there.
Maybe it is possible to use Business Rules in Dynamics CRM and update the Responsible whenever the Project changes (not sure)? But how should I deal with the Web API/OData client?
If I receive a POST to the Task endpoint without a Responsible, I would like to use the DefaultTaskResponsible, e.g.:
POST [Organization URI]/api/data/tasks
{
  "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)"
}
No Responsible was sent (maybe because it is an older client), so the default one should be used. But if a Responsible is set, the passed value should be used instead, e.g.:
POST [Organization URI]/api/data/tasks
{
  "project@odata.bind": "[Organization URI]/api/data/projects(xxx-1)",
  "responsible@odata.bind": null
}
In my TaskController I only see the Task model with the Responsible being null, but I don't know if it is null because it was set explicitly or because it wasn't sent in the request.
Is there something wrong with my ideas/concepts? I think it is quite common to initialize properties based on other objects/properties, isn't it?
This question is probably out of scope for this forum, but it is a subject I am interested in. A few thoughts:
A "Task" is a generic construct which traditionally can be associated with many different types of entities. For example, you might not only have tasks associated with Projects, but also with Customer records and Sales records. To run with your code example it would look like:
public Task(Entity parent) {}
Then you have to decide whether or not your defaulting of the Responsible party is specific to Projects, or generic across all Entities which have Tasks. If the latter, then our concept looks like this:
public Task(ITaskEntity parent)
{
    this.Responsible = parent.DefaultResponsible; // a property of ITaskEntity
}
This logic should be enforced at the database "pre operation" level, i.e. when your CRM application receives a request to create a Task, it should make this calculation, then persist the task to the database. This suggests that you should have a database execution pipeline, where actions can be taken before or after database operations occur. A standard simple execution pipeline looks like this:
Validation -> Pre Operation -> Operation (CRUD) -> Post Operation
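Illustratively, the pre-operation stage is also where you can distinguish "not sent" from "explicitly null", which plain model binding cannot do. All names here are hypothetical, not an actual Dynamics CRM API:

// Runs in the "Pre Operation" stage, before the Task is persisted.
public class DefaultResponsiblePreOperation
{
    public void Execute(TaskCreationContext context) // hypothetical pipeline context
    {
        // The raw request tells us whether "responsible" was present at all;
        // if it was absent, default it from the parent entity.
        if (!context.RequestContainsField("responsible"))
        {
            context.Task.Responsible = context.Parent.DefaultResponsible;
        }
    }
}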
Unless you are doing this for fun, I recommend abandoning the project and using an existing CRM system.

Proper permission management when using CQRS

I use Command Query Separation in my system.
To describe the problem, let's start with an example. Let's say we have code as follows:
public class TenancyController : ControllerBase
{
    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery

        // example of how to run a command in the system:
        var command = new UploadTemplatePackageCommand
        {
            Comment = package.Comment,
            Data = Request.Body,
            TemplatePackageId = id
        };
        await _commandDispatcher.DispatchAsync(command);

        return Ok();
    }
}
The CreateTenancy action has a very complex implementation and runs many different queries and commands.
Each command or query can be reused in other places of the system.
Each Command has a CommandHandler
Each Query has a QueryHandler
Example:
public class UploadTemplatePackageCommandHandler : PermissionedCommandHandler<UploadTemplatePackageCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(UploadTemplatePackageCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(UploadTemplatePackageCommand command)
    {
        // some business logic
    }
}
Every time you try to run a command or query there is a permission check. The problem, which appears in CreateTenancy, is when you run, let's say, 10 commands.
There can be a case where you have permissions for the first 9 commands but are missing some permission for the last one. In such a situation you may have made complex modifications to the system by running those 9 commands, and at the end you cannot finish the whole transaction because you cannot run the last command. In that case a complex rollback is needed.
I believe that in the above example the permission check should be done only once, at the very beginning of the whole transaction, but I'm not sure what the best way to achieve this is.
My first idea is to create a command called, let's say, CreateTenancyCommand, and to place the whole logic from CreateTenancy(CreateTenancyRto rto) in its HandleCommandAsync.
So it would look like:
public class CreateTenancyCommandHandler : PermissionedCommandHandler<CreateTenancyCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(CreateTenancyCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(CreateTenancyCommand command)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery
    }
}
I'm not sure whether it's a good approach to invoke a command inside the command handler of another command.
I think each command handler should be independent.
Am I right that the permission check should happen only once?
If yes, how do you do the permission check in the case where you want to run a command that modifies the database and then return some data to the client?
In such a case you would need to do 2 permission checks...
There can be a theoretical case where you modify the database by running the command and then cannot run a query that only reads the database, because you are missing some of the permissions. It can be very hard for a developer to detect such a situation if the system is big and there are hundreds of different permissions; even good unit test coverage can fail to catch it.
My second idea is to create some kind of wrapper or extra layer above the commands and queries and do the permission check there, but I am not sure how to implement it.
What is the proper way to do the permission check in the described CreateTenancy transaction, which is implemented in the controller action in the above example?
In a situation where you have some sort of process which requires multiple commands / service calls to carry out the process, then this is an ideal candidate for a DomainService.
A DomainService is by definition one which has some Domain Knowledge, and is used to facilitate a process which interacts with multiple Aggregates / services.
In this instance I would look to have your Controller Action call a CQRS Command/CommandHandler. That CommandHandler will take the domain service as a single dependency. The CommandHandler then has the single responsibility of calling the Domain Service method.
This then means your CreateTenancy process is contained in one place, The DomainService.
I typically have my CommandHandlers simply call into service methods. Therefore a DomainService can call into multiple services to perform its function, rather than calling into multiple CommandHandlers. I treat the CommandHandlers as a facade through which my Controllers can access the Domain.
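A sketch of that shape (ITenancyDomainService and the method names are assumptions):

// The command handler has one dependency and one job:
// call the domain service that contains the CreateTenancy process.
public class CreateTenancyCommandHandler
{
    private readonly ITenancyDomainService _tenancyService;

    public CreateTenancyCommandHandler(ITenancyDomainService tenancyService)
    {
        _tenancyService = tenancyService;
    }

    public Task HandleAsync(CreateTenancyCommand command)
    {
        return _tenancyService.CreateTenancyAsync(command);
    }
}

// The domain service coordinates the multiple services/aggregates involved,
// so the whole process lives in one place.
public interface ITenancyDomainService
{
    Task CreateTenancyAsync(CreateTenancyCommand command);
}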
When it comes to permissions, I typically first decide whether the user's authorisation to carry out a process is a Domain issue. If so, I will typically create an interface to describe the user's permissions, and I will make that interface specific to the Bounded Context I am working in. So in this case you may have something like:
public interface ITenancyUserPermissions
{
    bool CanCreateTenancy(string userId);
}
I would then have the ITenancyUserPermissions interface be a dependency in my CommandValidator:
public class CommandValidator : AbstractValidator<Command>
{
    private readonly ITenancyUserPermissions _permissions;

    public CommandValidator(ITenancyUserPermissions permissions)
    {
        _permissions = permissions;

        RuleFor(r => r).Must(HavePermissionToCreateTenancy).WithMessage("You do not have permission to create a tenancy.");
    }

    public bool HavePermissionToCreateTenancy(Command command)
    {
        return _permissions.CanCreateTenancy(command.UserId);
    }
}
You said that the permission to create a Tenancy is dependent on the permission to perform the other tasks / commands. Those other commands would have their own set of Permission Interfaces. And then ultimately within your application you would have an implementation of these interfaces such as:
public class UserPermissions : ITenancyUserPermissions, IBlah1Permissions, IBlah2Permissions
{
    public bool CanCreateTenancy(string userId)
    {
        return CanBlah1(userId) && CanBlah2(userId);
    }

    public bool CanBlah1(string userId)
    {
        return _authService.Can("Blah1", userId); // _authService injected elsewhere
    }

    public bool CanBlah2(string userId)
    {
        return _authService.Can("Blah2", userId);
    }
}
In my case I use an ABAC system, with the policy stored and processed as a XACML file.
Using the above method may mean you have slightly more code, and several permissions interfaces, but it does mean that any permissions you define are specific to the bounded context you are working within. I feel this is better than having a domain-model-wide IUserPermissions interface, which may define methods that are of no relevance, and/or confusing, in your Tenancy bounded context.
This means you can check user permissions in your QueryValidator or CommandValidator instances. And of course you can use the implementation of your IPermission interfaces at the UI level to control which buttons / functions etc are shown to the user.
There is no "The Proper Way", but I'd suggest that you could approach the solution from the following angle.
The use of the word Controller in your names, and returning Ok(), tells me you are handling an HTTP request. But what is happening inside is part of a business use case that has nothing to do with HTTP. So you'd better get a bit Onion-ish and introduce a (business) application layer.
This way, your HTTP controller would be responsible for: 1) parsing the create-tenancy HTTP request into a create-tenancy business request, i.e. a request object model in terms of the domain language, void of any infrastructure terms; and 2) formatting the business response into an HTTP response, including translating business errors into HTTP errors.
So, what you get entering the application layer is a business create-tenancy request. But it's not a command yet. I can't remember the source, but someone once said that a command should be internal to a domain. It cannot come from outside. You may consider a command to be a comprehensive object model containing everything necessary to decide whether to change the application's state. So, my suggestion is that in your business application layer you build a command not only from the business request, but also from the results of all those queries, including queries to the necessary permission read models.
Next, you may have a separate decision-making business core of the system that takes a command (a value object) with all the comprehensive data, applies a pure decision-making function and returns a decision, also a value object (an event or a rejection), containing, again, all the necessary data calculated from the command.
Then, when your business application layer gets back a decision, it can execute it: writing to event stores or repositories, logging, firing events and ultimately producing a business response for the controller.
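As a rough sketch of the decision-making core (all the types here are assumed for illustration):

// A pure function: a fully-populated command in, a decision value object out.
// No I/O happens here, which is what makes every branch easy to test.
public static class CreateTenancyDecider
{
    public static IDecision Decide(CreateTenancyCommand command)
    {
        // Permissions were loaded into the command by the application layer.
        if (!command.Permissions.CanCreateTenancy)
        {
            return new CreateTenancyRejected("Insufficient permissions.");
        }

        // ...further business rules over the data already inside the command...
        return new TenancyCreated(command.TenancyDetails);
    }
}

The application layer then inspects the returned decision and performs the corresponding writes, events and response formatting.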
In most cases you'll be OK with this single-step decision-making process. If it needs more than a single step, maybe that's a hint to reconsider the business flow, because it is getting too complex to be processed in a single HTTP request.
This way you'll get all the permissions before handling a command, so your business core will be able to decide whether those permissions are sufficient to proceed. It also makes the decision-making logic much more testable and, therefore, reliable, because it is the main part that should be tested in every branch of the calculation flow.
Keep in mind that this approach leans toward eventual consistency, which you have anyway in a distributed system. Though, if you are interacting with a single database, you may run the application-layer code in a single transaction. I suppose, though, that you deal with eventual consistency anyway.
Hope this helps.

Can I safely wrap the current request in a static instance?

I'm working on a multi-tenant web application. At the beginning of each request, the web application will use either the URL or a cookie to determine the tenant ID of the current request. I want to make this available to my data layer so that it can automatically include the ID in SQL queries without having to explicitly pass it down through the layers for each query. However, I don't want my data layer to be dependent on System.Web or to only be usable by a web application.
So I've created an interface in the data layer, ITenantIDProvider:
public interface ITenantIDProvider
{
    Guid GetTenantID();
}
And it could be implemented something like this:
public class TenantIDProvider : ITenantIDProvider
{
    public Guid GetTenantID()
    {
        return (Guid)HttpContext.Current.Items["TenantID"];
    }
}
In reality it will be something more complex than just retrieving from Items, but the key point is that it uses the current request. There is also a business logic layer in between, but it just forwards the ID along, so I left that code out for simplicity.
When the web application starts, it passes a new instance of TenantIDProvider to the data layer, which passes it into an instance of a custom Entity Framework IDbContextInterceptor:
// Static method in the data layer, called at application startup
public static void SetTenantIDProvider(ITenantIDProvider tenantIDProvider)
{
    System.Data.Entity.Infrastructure.Interception.DbInterception.Add(new MyContextInterceptor(tenantIDProvider));
}
DbInterception.Add only gets called once at application startup, so I believe it stores the given interceptor in a static collection, which means my interceptor instance is shared by all requests. However, since my tenantIDProvider just wraps access to the current request (and, importantly, does not set any instance variables), I shouldn't need to worry about a race condition, right? I've attempted to debug through two threads, freezing them at different times to test it, and it seems to work as intended. I just feel a little uneasy about an instance of an object shared by all threads/requests trying to access request-specific data.
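For what it's worth, the property this reasoning hinges on can be made explicit in the interceptor itself. A sketch of the shape (the actual EF interceptor members are omitted; only the thread-safety-relevant parts are shown):

public class MyContextInterceptor /* : IDbCommandInterceptor, members omitted */
{
    // The only field is an immutable reference, assigned once at startup.
    private readonly ITenantIDProvider _tenantIDProvider;

    public MyContextInterceptor(ITenantIDProvider tenantIDProvider)
    {
        _tenantIDProvider = tenantIDProvider;
    }

    private Guid CurrentTenantID()
    {
        // Resolved per call on the request's own thread; nothing is cached
        // in instance state, so requests cannot observe each other's IDs.
        return _tenantIDProvider.GetTenantID();
    }
}

The usual caveat is that HttpContext.Current is null on threads that are not servicing a request (e.g. genuine background work), so the provider should fail loudly rather than silently default in that case.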

Best/logical practice for property value updates (Controller or Service layer?)

My project has Web API controllers, services and repositories. A controller has an Update method like:
public IActionResult Update(CreateCollaboratorViewModel collaboratorViewModel)
{
    // Is it good to set values here in the controller, or in the service layer?
    AdminCollaborators collaborator = new AdminCollaborators();
    collaborator.Description = collaboratorViewModel.Description;
    collaborator.ModifiedBy = _myContext.CurrentUserId;

    var output = _collaboratorService.UpdateCollaborator(collaborator, _myContext.CurrentUserId);
    return Ok(new WebApiResultValue(true, output, "update successful."));
}
Service
public AdminCollaborators UpdateCollaborator(AdminCollaborators collaborator, Guid actorUserId)
{
    collaborator.ModifiedBy = actorUserId;
    collaborator.ModifiedOn = DateTimeHelper.Instance.GetCurrentDate();

    _collaboratorRepository.UpdateCollaborator(collaborator, actorUserId);
    _collaboratorRepository.SaveChanges();
    return collaborator;
}
Normally services are supposed to implement the business logic (if I'm not wrong). Please advise: should I update the property values in the controller or in the service?
Thanks
This is a common result of implementing DDD-ish architectures. When domains are too simple, you end up with many layers doing the same thing, each at a slightly different level of abstraction.
By the way, you go this route because you've determined that some day you'll want to be able to put more complicated things in the right place. Otherwise you would need to do a massive refactor to turn a non-layered project into an n-layered one.
Anyway, updates should be done in the service layer. Probably, your service layer should look as follows:
collaboratorService.Update(collaboratorId, updateCollaboratorDto);
Where updateCollaboratorDto should be a DTO that carries the data you want to update on your domain object.
As for the mapping between a DTO and a domain object, you could use AutoMapper to map them automatically.
Note that you're in a Web API / RESTful API world and you're talking about view models. I would rename them, replacing the ViewModel suffix with Dto.
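Putting it together, the service method might look like this sketch (GetById, _mapper and _currentUser are assumed names, not part of the question's code):

public AdminCollaborators Update(Guid collaboratorId, UpdateCollaboratorDto dto)
{
    var collaborator = _collaboratorRepository.GetById(collaboratorId); // assumed repository method

    // AutoMapper copies the updatable properties from the DTO onto the entity.
    _mapper.Map(dto, collaborator);

    collaborator.ModifiedBy = _currentUser.Id; // assumed current-user accessor
    collaborator.ModifiedOn = DateTimeHelper.Instance.GetCurrentDate();

    _collaboratorRepository.SaveChanges();
    return collaborator;
}

This keeps the controller down to translating between HTTP and the service call, with all property updates happening in one place.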
