I have been pondering this for a while. In general I try to stay away from injecting services into my domain, but I have this case:
I have a PurchaseOrder object. This order is sent to a supplier using some service (email or web service). After the order is sent, a confirmation should be sent to the user that made the order.
So I realised this is a case where Domain Events would be a nice way of implementing this, publishing a PurchaseOrderMade event.
Then I came to think:
Is the order really made if the order wasn't sent?
You haven't made an order just because you decided to do it and wrote it down; the order is made at the point where you have conveyed it to the supplier, according to contract, without errors.
So I reconsidered and decided that this might belong to the domain after all: I should send the order by injecting an IPurchaseOrderSender into my domain, publish an OrderMadeEvent after the successful transaction, and send the confirmation in the event handler.
The reasoning is this:
Sending the order IS a crucial part of the process and could cause state to change (e.g. setting a flag that the order is sent).
The confirmation is NOT crucial; should it fail, the order is still made and everything proceeds as planned.
It is easy to read, and easy to alter the implementation of an IPurchaseOrderSender.
The questions are:
Is this really so bad to do?
Does it break principles of DDD?
Have you encountered this before and solved it in a better way?
Here is the code:
public void MakeOrder(PurchaseOrder order, IPurchaseOrderSender orderSender)
{
    if (PurchaseOrders == null)
        PurchaseOrders = new List<PurchaseOrder>();

    PurchaseOrders.Add(order);
    orderSender.Send(order);
    DomainEvents.Raise(new PurchaseOrderIsMade { Order = order });
}
public interface IPurchaseOrderSender
{
    void Send(PurchaseOrder order);
}
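For clarity, the event-handler side I have in mind would look roughly like this (a minimal sketch; IConfirmationService and the handler registration are assumptions, not existing code):

public class PurchaseOrderIsMadeHandler
{
    private readonly IConfirmationService _confirmations;

    public PurchaseOrderIsMadeHandler(IConfirmationService confirmations)
    {
        _confirmations = confirmations;
    }

    public void Handle(PurchaseOrderIsMade e)
    {
        // Not crucial: if this fails, the order is still made.
        // Log and retry rather than roll back the order.
        _confirmations.SendOrderConfirmation(e.Order);
    }
}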
I encountered this before, here is what I did:
Split the local transaction from the remote procedure call.
I think it's not a big deal if the order sending fails. In this case, either the order is placed but not set to "sent", or the order is rolled back. The business operator can intervene if the order is not sent, or the customer will call if the order is not placed.
But it's annoying if there is something wrong with the transaction after sending the order successfully. In this case, if the order is rolled back, intervention is more difficult because we lose the supplier's notification. The notification usually contains a supplier's order identifier, so we can cancel the order with this identifier if necessary.
So we decided to use messaging.
1) The PlaceOrderService is responsible for storing the order and sending a message.
2) The consumer of the message sends the order to the supplier and sends a message containing the supplier's notification.
3) The other consumer of the notification message updates the order state.
Each step modifies only one aggregate or just calls the remote service.
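A rough C# sketch of that split (IMessageBus, ISupplierGateway and the message classes are assumed names standing in for your messaging API, not a specific framework):

public class OrderPlaced { public Guid OrderId; }
public class SupplierNotified { public Guid OrderId; public string SupplierOrderId; }

// Step 1: store the order in a local transaction, then send a message.
public class PlaceOrderService
{
    private readonly IOrderRepository _orders;
    private readonly IMessageBus _bus;

    public PlaceOrderService(IOrderRepository orders, IMessageBus bus)
    {
        _orders = orders;
        _bus = bus;
    }

    public void PlaceOrder(PurchaseOrder order)
    {
        _orders.Store(order);                              // local transaction only
        _bus.Send(new OrderPlaced { OrderId = order.Id }); // ideally after commit, unless using 2PC
    }
}

// Step 2: only the remote call happens here; no aggregate is modified.
public class OrderPlacedConsumer
{
    private readonly ISupplierGateway _supplier;
    private readonly IMessageBus _bus;

    public OrderPlacedConsumer(ISupplierGateway supplier, IMessageBus bus)
    {
        _supplier = supplier;
        _bus = bus;
    }

    public void Handle(OrderPlaced message)
    {
        var supplierOrderId = _supplier.Send(message.OrderId);
        _bus.Send(new SupplierNotified { OrderId = message.OrderId, SupplierOrderId = supplierOrderId });
    }
}

// Step 3: update the single Order aggregate in its own transaction.
public class SupplierNotifiedConsumer
{
    private readonly IOrderRepository _orders;

    public SupplierNotifiedConsumer(IOrderRepository orders)
    {
        _orders = orders;
    }

    public void Handle(SupplierNotified message)
    {
        var order = _orders.Get(message.OrderId);
        order.MarkAsSent(message.SupplierOrderId);
        _orders.Store(order);
    }
}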
Hope this helps.
Update
1. How would you implement the messaging part here?
I adopted the solution used in Eric Evans' dddsample, ApplicationEvents. It's just a simple interface and a JMS implementation, something like:
public void placeOrder(...) { // method in the application service
    ... // order making
    orderRepository.store(order);
    applicationEvents.orderWasPlaced(order); // injected applicationEvents

    // Better to move this step outside the transaction boundary if not using
    // 2PC commit: make the method return the order and use a decorator to
    // send the message:
    //
    // placeOrder(...) {
    //     Order order = target.placeOrder(...); // transaction ends here
    //     applicationEvents.orderWasPlaced(order);
    //     return order;
    // }
}
public class JmsApplicationEvents implements ApplicationEvents {
    public void orderWasPlaced(Order order) {
        // send the message using the messaging API of your platform
    }
}
2. I see you mention the supplier's notification, but let's assume this is done through email (which will be the primary scenario here). I would like to know that the transaction was performed without errors (i.e. no SMTP or connection failure), but cannot rely on a response that the order was actually received. Would that change anything?
Hmm... I have never built a trading application based on email, but here are my suggestions:
The messaging solution still fits if you need strong consistency: messaging is transactional and can participate in a global transaction, while email can't.
Messaging provides more availability (your order making will not fail even if the mail server is down) and scalability (queues).
Failure handling is more difficult in a messaging solution and usually needs compensating actions. It is also harder for the user to get processing information: you have to notify the user about the progress of the order, since the subsequent steps are asynchronous, and an email should be sent to the customer if the order is rejected by the supplier.
But messaging does add extra complexity and takes more effort to build and maintain; you have to evaluate whether the gain is worth the cost. Actually, I have also built several systems with a synchronous solution (they didn't require high throughput or availability). They work fine most of the time, with fewer than ten orders a year failing due to connection problems, so it wasn't worth building an automatic error-handling mechanism at all.
I use Command Query Separation in my system.
To describe the problem, let's start with an example. Let's say we have code as follows:
public class TenancyController : ControllerBase
{
    public async Task<ActionResult> CreateTenancy(CreateTenancyRto rto)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery

        // Example of how to run a command in the system:
        var command = new UploadTemplatePackageCommand
        {
            Comment = package.Comment,
            Data = Request.Body,
            TemplatePackageId = id
        };
        await _commandDispatcher.DispatchAsync(command);

        return Ok();
    }
}
CreateTenancy has a very complex implementation and runs many different queries and commands.
Each command or query can be reused in other places in the system.
Each Command has a CommandHandler, and each Query has a QueryHandler.
Example:
public class UploadTemplatePackageCommandHandler : PermissionedCommandHandler<UploadTemplatePackageCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(UploadTemplatePackageCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(UploadTemplatePackageCommand command)
    {
        // some business logic
    }
}
Every time you try to run a command or query there is a permission check. The problem appears in CreateTenancy when you run, let's say, 10 commands.
There can be a case where you have permissions for the first 9 commands but are missing a permission for the last one. In that situation you make complex modifications to the system by running those 9 commands, and at the end you cannot finish the whole transaction because you cannot run the last command. In such a case a complex rollback is needed.
I believe that in the above example the permission check should be done only once, at the very beginning of the whole transaction, but I'm not sure of the best way to achieve this.
My first idea is to create a command called, let's say, CreateTenancyCommand and place the whole logic from CreateTenancy(CreateTenancyRto rto) in its HandleCommandAsync.
So it would look like:
public class CreateTenancyCommandHandler : PermissionedCommandHandler<CreateTenancyCommand>
{
    // ctor

    protected override Task<IEnumerable<PermissionDemand>> GetPermissionDemandsAsync(CreateTenancyCommand command)
    {
        // return list of demands
    }

    protected override async Task HandleCommandAsync(CreateTenancyCommand command)
    {
        // 1. Run Blah1Command
        // 2. Run Blah2Command
        // 3. Run Bar1Query
        // 4. Run Blah3Command
        // 5. Run Bar2Query
        // ...
        // n. Run BlahNCommand
        // n+1. Run BarNQuery
    }
}
I'm not sure whether it's a good approach to invoke a command inside the command handler of another command;
I think each command handler should be independent.
Am I right that the permission check should happen only once?
If so, how do you do the permission check when you want to run a command that modifies the database and then return some data to the client?
In such a case you would need to do 2 permission checks...
There can be a theoretical case where you modify the database by running the command and then cannot run a query that only reads the database because you are missing one of the permissions. Such a situation can be very hard for a developer to detect if the system is big and there are hundreds of different permissions; even good unit test coverage can miss it.
My second idea is to create some kind of wrapper or extra layer above the commands and queries and do the permission check there, but I'm not sure how to implement it.
What is the proper way to do the permission check in the CreateTenancy transaction, which is implemented in the controller action in the above example?
In a situation where you have some sort of process that requires multiple commands / service calls to carry out, you have an ideal candidate for a DomainService.
A DomainService is by definition one which has some Domain Knowledge, and is used to facilitate a process which interacts with multiple Aggregates / services.
In this instance I would look to have your Controller Action call a CQRS Command/CommandHandler. That CommandHandler will take the domain service as a single dependency. The CommandHandler then has the single responsibility of calling the Domain Service method.
This then means your CreateTenancy process is contained in one place, The DomainService.
I typically have my CommandHandlers simply call into service methods. Therefore a DomainService can call into multiple services to perform its function, rather than calling into multiple CommandHandlers. I treat the Command Handlers as a facade through which my Controllers can access the Domain.
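As a rough sketch of that shape (the names below are illustrative, not taken from your code):

public class CreateTenancyCommandHandler
{
    private readonly ITenancyDomainService _tenancyService;

    public CreateTenancyCommandHandler(ITenancyDomainService tenancyService)
    {
        _tenancyService = tenancyService;
    }

    // Single responsibility: hand the whole CreateTenancy process to the domain service.
    public Task HandleAsync(CreateTenancyCommand command)
    {
        return _tenancyService.CreateTenancyAsync(command);
    }
}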
When it comes to permissions, I typically first decide whether the user's authorisation to carry out a process is a Domain issue. If so, I will typically create an Interface to describe the user's permissions, and I will create that Interface specific to the Bounded Context I am working in. So in this case you may have something like:
public interface ITenancyUserPermissions
{
    bool CanCreateTenancy(string userId);
}
I would then have the ITenancyUserPermissions interface be a dependency in my CommandValidator:
public class CommandValidator : AbstractValidator<Command>
{
    private readonly ITenancyUserPermissions _permissions;

    public CommandValidator(ITenancyUserPermissions permissions)
    {
        _permissions = permissions;

        RuleFor(r => r).Must(HavePermissionToCreateTenancy)
            .WithMessage("You do not have permission to create a tenancy.");
    }

    public bool HavePermissionToCreateTenancy(Command command)
    {
        return _permissions.CanCreateTenancy(command.UserId);
    }
}
You said that the permission to create a Tenancy is dependent on the permission to perform the other tasks / commands. Those other commands would have their own set of Permission Interfaces. And then ultimately within your application you would have an implementation of these interfaces such as:
public class UserPermissions : ITenancyUserPermissions, IBlah1Permissions, IBlah2Permissions
{
    private readonly IAuthService _authService;

    public UserPermissions(IAuthService authService)
    {
        _authService = authService;
    }

    public bool CanCreateTenancy(string userId)
    {
        return CanBlah1(userId) && CanBlah2(userId);
    }

    public bool CanBlah1(string userId)
    {
        return _authService.Can("Blah1", userId);
    }

    public bool CanBlah2(string userId)
    {
        return _authService.Can("Blah2", userId);
    }
}
In my case I use an ABAC system, with the policy stored and processed as an XACML file.
Using the above method may mean you have slightly more code, and several permissions interfaces, but it does mean that any permissions you define are specific to the Bounded Context you are working within. I feel this is better than having a Domain-Model-wide IUserPermissions interface, which may define methods that are irrelevant or confusing in your Tenancy bounded context.
This means you can check user permissions in your QueryValidator or CommandValidator instances. And of course you can use the implementation of your IPermission interfaces at the UI level to control which buttons / functions etc are shown to the user.
There is no "The Proper Way", but I'd suggest that you could approach the solution from the following angle.
The word Controller in your names, and the fact that you return Ok(), tell me that you are handling an HTTP request. But what happens inside is part of a business use case that has nothing to do with HTTP. So you'd better go a bit Onion-ish and introduce a (business) application layer.
This way, your HTTP controller would be responsible for: 1) parsing the create-tenancy HTTP request into a create-tenancy business request, i.e. a request object model in terms of the domain language, devoid of any infrastructure terms; and 2) formatting the business response into an HTTP response, including translating business errors into HTTP errors.
So what enters the application layer is a business create-tenancy request. But it's not a command yet. I can't remember the source, but someone once said that a command should be internal to a domain; it cannot come from outside. You may consider a command to be the comprehensive object model necessary to decide whether to change the application's state. So my suggestion is that in your business application layer you build a command not only from the business request, but also from the results of all those queries, including queries to the necessary permission read models.
Next, you may have a separate decision-making business core of a system that takes a command (a value object) with all the comprehensive data, applies a pure decision-making function and returns a decision, also a value object (event or rejection), containing, again, all necessary data calculated from the command.
Then, when your business application layer gets back a decision, it can execute it, writing to event stores or repositories, logging, firing events and ultimately producing a business response to the controller.
In most cases you'll be fine with this single-step decision-making process. If it needs more than a single step, maybe that's a hint to reconsider the business flow, because it is getting too complex to process in a single HTTP request.
This way you'll get all the permissions before handling a command, so your business core will be able to decide whether those permissions are sufficient to proceed. It also makes the decision-making logic much more testable and, therefore, reliable, because it is the main part that should be tested in every branch of the calculation flow.
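A minimal sketch of that flow (every type name here is an assumption, for illustration only):

public class CreateTenancyAppService
{
    private readonly IPermissionReadModel _permissions;
    private readonly ITenancyRepository _tenancies;

    public CreateTenancyAppService(IPermissionReadModel permissions, ITenancyRepository tenancies)
    {
        _permissions = permissions;
        _tenancies = tenancies;
    }

    public CreateTenancyResponse Handle(CreateTenancyRequest request)
    {
        // 1. Gather everything the decision needs, permissions included.
        var command = new CreateTenancyCommand(
            request,
            _permissions.PermissionsFor(request.UserId));

        // 2. Pure decision-making function: no I/O; returns an event or a rejection.
        var decision = TenancyDecisions.Decide(command);

        // 3. Execute the decision and translate it into a business response.
        if (decision.IsRejected)
            return CreateTenancyResponse.Rejected(decision.Reason);

        _tenancies.Save(decision.TenancyCreated);
        return CreateTenancyResponse.Success(decision.TenancyCreated.TenancyId);
    }
}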
Keep in mind that this approach leans toward eventual consistency, which you have anyway in a distributed system. If you are interacting with a single database, though, you may run the application-layer code in a single transaction. I suppose you have to deal with eventual consistency anyway.
Hope this helps.
I think I have an amateur architecture question, but this is something that I've been struggling to figure out for quite a while.
I have a C# web project that creates users in several places like this:
var user = /*create user somehow*/;
_userRepository.Add(user);
_userRepository.SaveChanges();
Now I need to add logic that sends email notifications every time a user is created:
var user = /*create user somehow*/;
_userRepository.Add(user);
_userRepository.SaveChanges();
_notificationService.SendUserCreatedNotification(user);
The problem with this is that I wouldn't like to add the same line of code to all the places where a new user is created (DRY!).
Now, I could wrap up the Add/Save/SendUserCreatedNotification logic in a separate service:
var user = /*create user somehow*/;
_userCreationService.AddAndSave(user);
But:
the purpose of this service would be logically weird (add the user to the repo, save changes to the repo, send notifications); I can't even think of a good name for this service and method
the service method would only have 3 lines of code: Add/Save/SendUserCreatedNotification
How do you usually solve such tasks? Is approach 2 the best way to go? Or maybe there exists a better approach 3?
One possible solution would be to send this notification from the _userRepository.SaveChanges(); method.
You would check your UoW change tracker for all the user entities that are in the 'Created' state and send these notifications after committing.
However, using this approach, the notification sending will be hidden in your infrastructure/data access/... layer. This means the notification sending logic will not be part of your domain/core logic. For someone to become aware of this part of the logic, they would have to dive into the implementation details of your repository (or UoW).
Instead, you could fire an event from _userRepository.SaveChanges() and subscribe to that event in your core logic.
The approach I would take would be the following:
In _userRepository.SaveChanges(), for every created user, fire a UserCreatedEvent that contains the information about the user.
Subscribe to that event in your core logic and call _notificationService.SendUserCreatedNotification(user); from the event handler.
This way, you would also decouple your user creation logic from the notification sending logic.
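A minimal sketch of that approach, assuming an EF-style DbContext with a change tracker and a hand-rolled event dispatcher (none of these names come from the question):

public class UserRepository : IUserRepository
{
    private readonly AppDbContext _context;
    private readonly IDomainEventDispatcher _events;

    public UserRepository(AppDbContext context, IDomainEventDispatcher events)
    {
        _context = context;
        _events = events;
    }

    public void Add(User user)
    {
        _context.Users.Add(user);
    }

    public void SaveChanges()
    {
        // Capture created users before committing, while their state is still Added.
        var createdUsers = _context.ChangeTracker.Entries<User>()
            .Where(e => e.State == EntityState.Added)
            .Select(e => e.Entity)
            .ToList();

        _context.SaveChanges();

        // Publish after a successful commit so handlers only see persisted users.
        foreach (var user in createdUsers)
            _events.Publish(new UserCreatedEvent(user));
    }
}

// Core-logic subscriber: the notification concern lives here, not in the repository.
public class SendNotificationOnUserCreated : IHandle<UserCreatedEvent>
{
    private readonly INotificationService _notifications;

    public SendNotificationOnUserCreated(INotificationService notifications)
    {
        _notifications = notifications;
    }

    public void Handle(UserCreatedEvent e)
    {
        _notifications.SendUserCreatedNotification(e.User);
    }
}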
I am wondering whether there is an established pattern to control the flow that my application will have.
Simply put, it's supposed to be something like this:
User provides a file
File is being processed
User receives a processed file
There will be several processing steps, let's say
PreprocessingOne, PreprocessingTwo, PreprocessingThree and FinalProcessing.
Naturally, we do not control the files that the user provides, so they will require different numbers of preprocessing steps.
Since my message handler services will be in separate APIs, I don't want to invoke them just to have them return "Cannot process yet" or "Does not require processing", for performance reasons.
Similarly, I don't want to pass the uploaded file around between services.
Ideally, I would like to design the flow for a file dynamically by evaluating the content and inserting only those message handlers that make sense.
I say "inverted" pipeline because, instead of going from A to Z, I would rather check which stages I need starting from Z and insert only the last ones.
So, if the uploaded file qualifies for FinalProcessing right away, the flow would be just one element.
If the file needs to go through PreprocessingTwo, the flow would be PreprocessingTwo > PreprocessingThree > FinalProcessing.
So, I was thinking I could implement something like this, but I am not sure about the details.
public interface IMessageHandler
{
    void Process(IFile file);
}

public interface IContentEvaluator
{
    IList<IMessageHandler> PrepareWorkflow(IFile file);
}

public interface IPipelineExecutor
{
    void ExecuteWorkflow(IList<IMessageHandler> workflow, IFile file);
}
And then in the application
public void Start(IFile newFile)
{
    var contentEvaluator = new ContentEvaluator(this.availableHandlers); // would be injected (DI)
    var workflow = contentEvaluator.PrepareWorkflow(newFile);
    this.executor.ExecuteWorkflow(workflow, newFile);
}
Could you please advise, or recommend an approach or further reading?
You could consider using the Strategy pattern: "...selects an algorithm at runtime..."
But if you have too many combinations of the flow, the number of strategies that need to be implemented will increase, and the solution can become complex.
Another approach can be to use SEDA: "...decomposes a complex, event-driven application into a set of stages connected by queues..."
PreprocessingOne, PreprocessingTwo, PreprocessingThree and FinalProcessing are the stages, and flows can be defined by directing outgoing messages to different queues.
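A minimal in-process sketch of the SEDA idea, using BlockingCollection as stand-in queues (in a real deployment the queues would belong to your broker or ESB):

using System;
using System.Collections.Concurrent;

public class Stage
{
    public BlockingCollection<IFile> Inbox { get; } = new BlockingCollection<IFile>();

    private readonly Action<IFile> _process;
    private readonly Func<IFile, Stage> _route; // picks the next stage, or null when done

    public Stage(Action<IFile> process, Func<IFile, Stage> route)
    {
        _process = process;
        _route = route;
    }

    // Run each stage on its own thread/Task; it consumes its queue until completed.
    public void Run()
    {
        foreach (var file in Inbox.GetConsumingEnumerable())
        {
            _process(file);
            var next = _route(file);
            if (next != null)
                next.Inbox.Add(file); // direct the outgoing message to the next queue
        }
    }
}

The IContentEvaluator from the question would then only need to pick the entry stage for each file; the routing functions take care of the rest.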
Is that the Decorator pattern? Its definition: "Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to subclassing for extending functionality."
I am new to DDD, and I am trying to figure out a way to update an aggregate using a PUT verb.
If all properties in the aggregate have private setters, then it's obvious I need a set of operations for every business requirement. For example:
supportTicket.Resolve();
It's clear to me that I can achieve this with an endpoint such as /api/tickets/5/resolve, but what if I want to provide a way to update the whole ticket atomically?
As an example, a user can make a PUT request to /api/tickets/5 with the following body:
{"status" : "RESOLVED", "Title":"Some crazy title"}
Do I need to do something like this in the ApplicationService?
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve();

if (dto.Title != null)
    supportTicket.SetNewTitle(dto.Title);
If that's the case, and changing the ticket title has some business logic that prevents changing it once the ticket is resolved, should I consider some kind of prioritization when updating the aggregate, or am I looking at this entirely wrong?
Domain Driven Design for RESTful Systems -- Jim Webber
what if i want to provide a way to update whole ticket atomically?
If you want to update the whole ticket atomically, ditch aggregates; aggregates are the wrong tool in your box if what you really want is a key value store with CRUD semantics.
Aggregates only make sense when there are business rules for the domain to enforce. Don't build a tractor when all you need is a shovel.
As an example, a user can make a PUT request to /api/tickets/5
That's going to make a mess. In a CRUD implementation, replacing the current state of a resource by sending it a representation of a new state is appropriate. But that doesn't really fit aggregates at all, because the state of the aggregate is not under your control as the client/publisher.
The more appropriate idiom is to publish a message onto a bus, which when handled by the domain will have the side effect of achieving the changes you want.
PUT /api/tickets/5/messages/{messageId}
NOW your application service looks at the message and sends commands to the aggregate:
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve();

if (dto.Title != null)
    supportTicket.SetNewTitle(dto.Title);
This is OK, but in practice it's much more common to make the message explicit about what is to be done.
{ "messageType" : "ResolveWithNewTitle"
, "status" : "RESOLVED"
, "Title":"Some crazy title"
}
or even...
[
{ "messageType" : "ChangeTitle"
, "Title" : "Some crazy title"
}
, { "messageType" : "ResolveTicket"
}
]
Basically, you want to give the app enough context that it can do real message validation.
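For illustration, the application-service end of this might look something like the following (TicketMessage and its fields are assumed, mirroring the JSON above):

public void Handle(SupportTicket supportTicket, IEnumerable<TicketMessage> messages)
{
    foreach (var message in messages)
    {
        switch (message.MessageType)
        {
            case "ChangeTitle":
                supportTicket.SetNewTitle(message.Title);
                break;
            case "ResolveTicket":
                supportTicket.Resolve();
                break;
            default:
                // Real message validation: reject anything the domain doesn't understand.
                throw new InvalidOperationException("Unknown message type: " + message.MessageType);
        }
    }
}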
let's say I had aggregates which encapsulated the needed business logic, but besides that there is a new demand for atomic update functionality, and I am trying to understand the best way to deal with this.
So the right way to deal with this is first to deal with it on the domain level -- sit down with your domain experts, make sure that everybody understands the requirement and how to express it in the ubiquitous language, etc.
Implement any new methods that you need in the aggregate root.
Once you have the use case correctly supported in the domain, then you can start worrying about your resources following the previous pattern - the resource just takes the incoming request, and invokes the appropriate commands.
Is changing the Title a requirement of Resolving a ticket? If not, they should not be the same action in DDD. You wouldn't want to fail to resolve the ticket because the new title was invalid, and you wouldn't want to fail to change the title because the ticket was not resolvable.
Make 2 calls to perform the 2 separate actions. This also allows for flexibility: the Title can be changed immediately, but perhaps "resolving" the ticket will kick off some complex and time-consuming (asynchronous) workflow before the ticket is actually resolved. Perhaps it needs a manager to sign off? You don't want the call to change the "title" tied up in that mix.
If need be, create something to orchestrate multiple commands as per #VoiceOfUnreason's comment.
Wherever possible, keep things separate, and code to use cases as opposed to minimizing interactions with entities.
You're probably right. But it's probably wiser to encapsulate such logic inside the ticket itself, by making a Change() method receiving a ChangeCommandModel (or something like that), so you can define the business rules inside your domain object.
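For instance, a minimal sketch of that shape (ChangeCommandModel, TicketStatus, and the rule shown are illustrative assumptions):

public enum TicketStatus { Open, Resolved }

public class ChangeCommandModel
{
    public string Title { get; set; }          // null means "no title change"
    public TicketStatus? Status { get; set; }  // null means "no status change"
}

public class SupportTicket
{
    public string Title { get; private set; }
    public TicketStatus Status { get; private set; }

    public void Change(ChangeCommandModel change)
    {
        // Illustrative rule: a resolved ticket's title cannot be changed.
        // Applying the title before the status answers the ordering question:
        // the aggregate itself decides the order in which changes apply.
        if (change.Title != null)
        {
            if (Status == TicketStatus.Resolved)
                throw new InvalidOperationException("Cannot retitle a resolved ticket.");
            Title = change.Title;
        }

        if (change.Status == TicketStatus.Resolved)
            Status = TicketStatus.Resolved;
    }
}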
if (dto.Status != null && dto.Status == "RESOLVED")
    supportTicket.Resolve(dto.Title);
I would change the underlying method to take the title as a parameter; this clarifies the resolve action. The second if and its validation belong in the domain method. It's really a matter of preference; more important is the message, and I agree with #VoiceOfUnreason's second option.
I am writing a piece of software in C# .NET 4.0 and am running into a wall in making sure that the code-base is extensible, reusable and flexible in a particular area.
We have data coming into it that needs to be broken down into discrete organizational units. These units will need to be changed, sorted, deleted, and added to as the company grows.
No matter how we slice the data structure, we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid, so that we can modify the OUs easily.
We are hoping to find an object-oriented method that would allow us to route the object to different workflows based on properties of that object without having to add switch statements every time.
So, for example, let's say I have an object called "Order" come into the system. This object has "orderItems" inside of it. Each of those different kinds of "orderItems" would need to fire a different function in the code to be handled appropriately; each "orderItem" has a different workflow. The conditional looks basically like this:
if (order.OrderItem == "photo")
    { /* do this */ }
else if (order.OrderItem == "canvas")
    { /* do this */ }
edit: Trying to clarify.
I'm not sure your question is very well defined; you need a lot more specifics here - a sample piece of data, a sample piece of code, what you have tried...
No matter how we slice the data structure we keep running into a boat-load of conditional statements (upwards of 100 or so to start) that we are trying to avoid
This usually means you're trying to encode data in your code - just add a data field (or a few).
Chances are your ifs are linked to each other; it's hard to come up with 100 independent ifs - that would imply you have 100 independent branches for 100 independent data conditions. I haven't encountered anything in my career that really required hard-coding 100 ifs.
Worst case, you can make an additional data field contain a config file or even a script of your choice. Either way, your data is incomplete if you need 100 ifs.
With the update you've put in your question, here's one simple approach, kind of low-tech. You can do better with dependency injection and some configuration, but that can get excessive too, so be careful:
public class OrderHandler
{
    public static Dictionary<string, OrderHandler> Handlers = new Dictionary<string, OrderHandler>()
    {
        { "photo", new PhotoHandler() },
        { "canvas", new CanvasHandler() },
    };

    public virtual void Handle(Order order)
    {
        var handler = Handlers[order.OrderType]; // look up the concrete handler by order type
        handler.Handle(order);
    }
}

public class PhotoHandler : OrderHandler { ... }   // overrides Handle with the photo workflow
public class CanvasHandler : OrderHandler { ... }  // overrides Handle with the canvas workflow
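Usage then collapses to a single dispatch (assuming PhotoHandler and CanvasHandler override Handle with their own workflows):

var dispatcher = new OrderHandler();
dispatcher.Handle(order); // looks up Handlers[order.OrderType] and runs that workflow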
What you could do is called "Message-Based Routing" or "Message Content-Based Routing", depending on how you implement it.
In short, instead of using conditional statements in your business logic, you implement organizational units so that they look for the messages they are interested in.
For example:
Say your organization has the following departments: "Plant Products", "Paper Products", "Utilities". Say there is only one place where the orders come in - the Ordering module.
Here is a sample incoming message:
Party: "ABC Corp"
Department: "Plant Products"
Qty: 50
Product: "Some plant"
Publish out a message with this information. Configure the module that processes orders for "Plant Products" so that it listens for messages that have "Department = Plant Products". This way, you push the onus onto the department modules instead of onto the main ordering module.
You can do this using NServiceBus, BizTalk, or any other ESB you might already have.
This is how you do it in BizTalk, and this is how you can do it in NServiceBus.
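As a rough plain-C# illustration of the routing idea (a real ESB gives you this via subscriptions and filters; OrderMessage and ContentBasedBus are assumptions):

public class OrderMessage
{
    public string Party { get; set; }
    public string Department { get; set; }
    public int Qty { get; set; }
    public string Product { get; set; }
}

public class ContentBasedBus
{
    private class Subscription
    {
        public Func<OrderMessage, bool> Filter;
        public Action<OrderMessage> Handler;
    }

    private readonly List<Subscription> _subscriptions = new List<Subscription>();

    public void Subscribe(Func<OrderMessage, bool> filter, Action<OrderMessage> handler)
    {
        _subscriptions.Add(new Subscription { Filter = filter, Handler = handler });
    }

    public void Publish(OrderMessage message)
    {
        // Deliver only to modules whose content filter matches the message.
        foreach (var subscription in _subscriptions)
            if (subscription.Filter(message))
                subscription.Handler(message);
    }
}

// The Plant Products module opts in; the Ordering module stays free of conditionals:
// bus.Subscribe(m => m.Department == "Plant Products", plantProductsModule.Process);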
Have you considered sub-typing OrderItem?
public class PhotoOrderItem : OrderItem {}
public class CanvasOrderItem : OrderItem {}
Another option would be to use the Strategy pattern. Add an extra property to your OrderItem class definition for the OrderProcessStrategy and use a PhotoOrderStrategy/CanvasOrderStrategy to contain all of the different logic.
public class OrderItem
{
    public IOrderItemStrategy Strategy;
}

public interface IOrderItemStrategy
{
    void Checkout();
    Control CheckoutStub { get; }
    bool PreCheckoutValidate();
}

public class PhotoOrderStrategy : IOrderItemStrategy { ... }
public class CanvasOrderStrategy : IOrderItemStrategy { ... }
Taking the specific example:
You could have some Evaluator that takes an order and iterates over each line item. Instead of processing if-logic, raise events that carry the photo or canvas details in their event arguments.
Have a collection of "Initiator" objects that define: 1) a handler that can process Evaluator messages, 2) a simple bool that can be set to indicate whether they know what to do with something in the message, and 3) an Action or Process method which can perform or initiate the workflow. Design an interface to abstract these; a sketch follows below.
Issue the messages. Visit each Initiator and ask it whether it can process the line item; if it can, tell it to do so. The processing is kicked off by the Initiators, and they can call other workflows, etc.
Name the pieces outlined above whatever best suits your domain. This should offer some flexibility. Problems may arise depending on concurrent processing requirements and workflow dependencies between the Initiators.
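A sketch of the interface described above (all names are placeholders for whatever suits your domain):

public interface IInitiator
{
    // 2) Does this initiator know what to do with the line item?
    bool CanProcess(OrderItem item);

    // 1) + 3) Handle the Evaluator's message and perform or initiate the workflow.
    void Process(OrderItem item);
}

public class Evaluator
{
    private readonly IEnumerable<IInitiator> _initiators;

    public Evaluator(IEnumerable<IInitiator> initiators)
    {
        _initiators = initiators;
    }

    public void Evaluate(Order order)
    {
        // Visit each Initiator for each line item; willing Initiators kick off their workflows.
        foreach (var item in order.OrderItems)
            foreach (var initiator in _initiators)
                if (initiator.CanProcess(item))
                    initiator.Process(item);
    }
}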
In general, without knowing a lot more detail - the size of the project, workflows, use cases, etc. - it is hard to comment.