I have a project which is designed, or at least should be, according to the well-known DDD principles.
Backend - DDD + CQRS + Event Store
UI - ngrx/store
I have a lot of questions to ask about it but for now I will stick to these two:
How should the UI store be updated after a single Command/Action is executed?
a) subscribe to response.ok
b) listen to domain events
c) trigger a generic event holding the created/updated/removed object?
Is it a good idea to transfer the whole aggregate root DTO with all its entities in each command/event, or is it better to have more granular commands/events, e.g. with only a single property?
How should the UI store be updated after a single Command/Action is executed?
The command methods from my Aggregates return void (respecting CQS); thus, the REST endpoints that receive the command requests respond only with something like OK, command is accepted. Then, it depends on how the command is processed inside the backend server:
if the command is processed synchronously then a simple OK, command is accepted is sufficient as the UI will refresh itself and the new data will be there;
if the command is processed asynchronously then things get more complicated and some kind of command ID should be returned, so a response like OK, command is accepted and it has the ID 1234-abcd-5678-efgh; please check later at this URI for command completion status
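The asynchronous case above can be sketched as a small command tracker. This is a minimal illustration, not the actual backend; the names and the random-string ID (standing in for a GUID) are assumptions:

```typescript
// Hypothetical sketch: the backend accepts a command, returns an ID
// immediately ("OK, accepted, check back later"), and exposes a status
// lookup the UI can poll. All names are illustrative.
type CommandStatus = "processing" | "completed" | "failed";

class CommandTracker {
  private statuses = new Map<string, CommandStatus>();

  accept(): string {
    const id = Math.random().toString(36).slice(2); // stand-in for a GUID
    this.statuses.set(id, "processing");
    return id; // returned to the caller with the 202-style response
  }
  complete(id: string, ok: boolean): void {
    this.statuses.set(id, ok ? "completed" : "failed");
  }
  statusOf(id: string): CommandStatus | undefined {
    return this.statuses.get(id);
  }
}

const tracker = new CommandTracker();
const commandId = tracker.accept();        // command endpoint responds with the ID
console.log(tracker.statusOf(commandId));  // "processing"
tracker.complete(commandId, true);         // the handler finishes later
console.log(tracker.statusOf(commandId));  // "completed"
```

In a real system the tracker state would live in a database and `statusOf` would sit behind the status URI the client polls.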
At the same time, you could listen to the domain events. I do this using Server-Sent Events that are sent from the backend to the UI; this is useful if the UI is web based, as there could be more than one browser window open and the data will be updated in the background on every page; that's nice, the client is pleased.
About including some data from the read side in the command response: this is something that depends on your specific case. I avoid it because it implies reading when writing, which means I can't separate the write from the read at a higher level; I like to be able to scale the write part independently from the read part. So, a response.ok is the cleanest solution. Including read data also means the command/write endpoint makes query assumptions about the caller; why should a command handler/command endpoint assume what data the caller needs? But there could be exceptions, for example if you want to reduce the number of requests, or if you use an API gateway that also performs a read after the command is sent to the backend server.
Is it a good idea to transfer the whole aggregate root DTO with all its entities in each command/event, or is it better to have more granular commands/events, e.g. with only a single property?
I never send the whole Aggregate when using CQRS; you have the read models, so each Aggregate has a different representation on each read model. You should create a read model for each UI component; this way you keep and send only the data that is displayed on the UI, and not some god-like object that contains anything that anybody would need to display anywhere.
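A minimal sketch of that idea, with one aggregate projected into two narrow read models, each shaped for a single UI component (all names are illustrative, not from the original project):

```typescript
// Hypothetical Order aggregate on the write side.
interface OrderAggregate {
  id: string;
  customerName: string;
  lines: { sku: string; qty: number; price: number }[];
  shippingAddress: string;
}

// Read model for an order-list component: only what the list shows.
interface OrderListItem { id: string; customerName: string; total: number; }

// Read model for a shipping component: only shipping-related fields.
interface ShippingView { id: string; shippingAddress: string; }

function toListItem(o: OrderAggregate): OrderListItem {
  const total = o.lines.reduce((sum, l) => sum + l.qty * l.price, 0);
  return { id: o.id, customerName: o.customerName, total };
}

function toShippingView(o: OrderAggregate): ShippingView {
  return { id: o.id, shippingAddress: o.shippingAddress };
}
```

Each UI component binds to its own projection, so no component ever receives the god-like full aggregate.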
Commands basically fall into one of two categories: creation commands and the rest.
Creation commands
With creation commands, you often want to get back a handle to the thing you just created, otherwise you're left in the dark with no place to go to further manipulate it.
I believe that creation commands in CQS and CQRS can return an identifier or location of some sort: see my answer here. The identifier will probably be known by the command handler, which can return it in its response. This maps well to 201 Created + Location header in REST.
You can also have the client generate the ID. In that case, see below.
All other commands
The client obviously has the address of the object. It can simply requery its location after it got an OK from the HTTP part. Optionally, you could poll the location until something indicates that the command was successful. It could be a resource version id, a status as Constantin pointed out, an Atom feed etc.
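The requery-until-done loop described above can be sketched as follows; `fetchStatus` is a hypothetical stand-in for an HTTP GET against the resource's location, and the status strings are assumptions:

```typescript
// Hypothetical sketch: after an OK from the command endpoint, poll the
// resource's location until its status indicates the command finished.
async function pollUntilDone(
  fetchStatus: () => Promise<string>, // stand-in for a real HTTP GET
  maxAttempts = 5,
  delayMs = 100
): Promise<string> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  return "timed-out"; // give up; the command may still complete later
}
```

The same loop works whether the "done" signal is a resource version ID, a status field, or an Atom feed entry; only `fetchStatus` changes.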
Also note that it might be simpler for the Command Handler to return the success status of the operation, it's debatable whether that really violates CQS or not (again, see answer above).
Is it a good idea to transfer the whole aggregate root DTO with all its entities in each command/event, or is it better to have more granular commands/events, e.g. with only a single property?
Indeed it is better to have granular commands and events.
Commands and events should be immutable, expressive objects that clearly express an intent or past business event. This works best if the objects exactly contain the data that is about to change or was changed.
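A minimal sketch of such granular, immutable events, with a reducer-style projection applying them to read state (the event and field names are illustrative):

```typescript
// Hypothetical granular events: each carries exactly what changed.
interface CustomerRenamed {
  readonly type: "CustomerRenamed";
  readonly customerId: string;
  readonly newName: string; // only the property that changed
}

interface CustomerEmailChanged {
  readonly type: "CustomerEmailChanged";
  readonly customerId: string;
  readonly newEmail: string;
}

type CustomerEvent = CustomerRenamed | CustomerEmailChanged;

interface CustomerState { id: string; name: string; email: string; }

// Applying an event touches only the field the event is about.
function apply(state: CustomerState, event: CustomerEvent): CustomerState {
  switch (event.type) {
    case "CustomerRenamed":
      return { ...state, name: event.newName };
    case "CustomerEmailChanged":
      return { ...state, email: event.newEmail };
  }
}
```

Because each event names its intent and carries a single change, consumers (including an ngrx store) can react precisely instead of diffing a whole aggregate.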
Related
I have tried the CQRS pattern using MediatR and am loving the clean state into which the applications I'm working on are transforming.
In all the examples I have seen and used, I always do
await Mediator.Send(command);
It's been same for the queries as well
var data = await Mediator.Send(queryObject);
I just realized there's Mediator.Publish as well, which after some searching seems to be doing the same thing. I am trying to understand what the difference between Mediator.Send and Mediator.Publish is.
I have read the MediatR library docs and I still don't understand what the difference between these are.
Kindly help me understand the difference.
Thanks for your help
MediatR has two kinds of messages it dispatches:
Request/response messages, dispatched to a single handler
Notification messages, dispatched to multiple handlers
Send may return a response, but it does not have to.
Publish never returns a result.
You send requests (sometimes called commands) via _mediator.Send({command}) to exactly one concrete handler. It may be, for example, a command that saves a new product to the database. It is often a request from the user (frontend/API), or sometimes it may be an internal command in your system, given by another service in a synchronous way. It is always expected that the command will be executed immediately and that you will receive a proper result or error, so the client can be informed of failures right away.
You publish notifications (often called events) via _mediator.Publish({event}) to zero, one, or many handlers. You use notifications when you want to publish some information and you do not know who needs it, e.g. a NewProductEvent which is published after successfully adding a product to your Warehouse Module. A few other contexts may want to subscribe to that information and, for example, send an email to a client that a new product is available, or create some default configuration for the product in your Store Module (which payment and delivery options are available for the product). You may use notifications in a synchronous way, so all data is saved in one transaction (product and store configuration), or you may use an asynchronous pattern with a service bus and/or sagas. In the asynchronous case you must manually handle situations where something goes wrong in the other services or contexts that subscribe to your notification.
Example scenario: Default configuration was not created.
If you have one transaction (the synchronous way) for a few contexts, you will receive an error, log it, and return it to the user/client.
In the asynchronous way, you send events after saving the new product to the database. You do not want the product to sit in a half-error state in your system, so I recommend first creating it in a Draft state, waiting for an event that confirms the configuration was successfully created, and only then changing the state to e.g. New/Correct.
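MediatR itself is a C# library, but the Send/Publish distinction described above can be modeled as a minimal TypeScript sketch (names and shapes are illustrative, not MediatR's actual API): Send dispatches to exactly one handler and may return a value; Publish fans out to zero or more handlers and returns nothing.

```typescript
type Handler = (message: unknown) => unknown;

class MiniMediator {
  private requestHandlers = new Map<string, Handler>();
  private notificationHandlers = new Map<string, Handler[]>();

  // Request/response: at most one handler per message type.
  registerHandler(type: string, h: Handler): void {
    if (this.requestHandlers.has(type)) {
      throw new Error(`only one handler allowed for ${type}`);
    }
    this.requestHandlers.set(type, h);
  }
  // Notifications: any number of subscribers per message type.
  subscribe(type: string, h: Handler): void {
    const list = this.notificationHandlers.get(type) ?? [];
    list.push(h);
    this.notificationHandlers.set(type, list);
  }
  // Send: exactly one handler, and its response flows back to the caller.
  send(type: string, message: unknown): unknown {
    const h = this.requestHandlers.get(type);
    if (!h) throw new Error(`no handler for ${type}`);
    return h(message);
  }
  // Publish: zero, one, or many handlers; nothing flows back.
  publish(type: string, message: unknown): void {
    for (const h of this.notificationHandlers.get(type) ?? []) h(message);
  }
}
```

Note that publishing to a type with no subscribers is a no-op, whereas sending to a type with no handler is an error; that asymmetry is the heart of the difference.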
You will find a good example of using MediatR in, e.g., the Ordering microservice in eShopOnContainers by Microsoft on GitHub. It shows an example usage of CQRS and DDD with EF Core and ASP.NET.
I'm currently trying to figure out the best solution to the following problem using NServiceBus: I have a GUI that users can use to search for different things, but the information about those things is spread across multiple services/databases. Let's say, for example, that a user is searching for a list of parks in a city, but each district of the city keeps info only about its own parks in its own database (which it exposes via web services). I need NServiceBus to send a message to each endpoint (district) describing what info the user needs, wait for the responses, and only when it has received them from all endpoints send the result back to the user (GUI). The user is only interested in full information, so the bus needs to know whether every endpoint has sent its response or not (this also needs to happen in real time, so the bus will assume an endpoint is offline and send a failure message if it takes too long). Endpoints can change at any time, so the code needs to be easy to maintain; ideally, adding/removing endpoints should be possible without code changes.
Here are my thoughts about possible solution:
The publish/subscribe pattern lets me easily send a message to multiple endpoints and add/remove endpoints at will by subscribing/unsubscribing, without changing the publisher's code. Problem: by definition the publisher doesn't know how many subscribers there are (or what they are), so waiting for all of the subscribers to respond becomes difficult, if not impossible.
The request/response pattern lets me easily tell endpoints that I want an answer, and I will know whether an endpoint has responded yet. Problem: every time I need to add/remove an endpoint I need to change the sender's code. Scalability may also be a problem.
My question: Is there any way to combine those patterns? Or am I looking at this problem the wrong way? Is there even a way to achieve all I want?
I think you are indeed looking at the problem the wrong way.
It sounds like you want to query multiple services and aggregate the information for presentation in the UI. Generally speaking, a bus is not a good choice for straight querying. A bus is great for sending commands to a specific endpoint, and for publishing state changes as they happen.
If you are performing a query against an endpoint, your best bet would be to model and expose a query (via something like WebAPI).
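Following that advice, the scatter-gather can be done over plain queries rather than the bus. A minimal sketch, assuming each district exposes an HTTP query endpoint (the `ParkQuery` functions stand in for real HTTP calls; names and the timeout behavior are illustrative):

```typescript
type ParkQuery = () => Promise<string[]>;

// Reject if a single district takes too long, which matches the
// requirement to treat a slow endpoint as offline.
function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error("district timed out")), ms)
    ),
  ]);
}

async function allParks(districts: ParkQuery[], timeoutMs: number): Promise<string[]> {
  // Promise.all rejects if any district fails or times out, which
  // matches "the user is only interested in full information".
  const results = await Promise.all(
    districts.map((q) => withTimeout(q(), timeoutMs))
  );
  return ([] as string[]).concat(...results);
}
```

Adding or removing a district is then a configuration change (one more entry in the `districts` array, e.g. loaded from config), not a code change.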
I have a little trouble deciding which way to go for while designing the message flow in our system.
Because of the volatile nature of our business processes (i.e. calculating freight costs), we use a workflow framework so we can change the process on the fly.
The general process should look something like this:
The interface is a service which connects to the customer's system via whatever interface the customer provides (web services, TCP endpoints, database polling, files, you name it). Then a command is sent to the executor containing the received data and the ID of the workflow to be executed.
The first problem comes at the point where we want to distribute load on multiple worker services.
Say we have different processes like printing parcel labels, calculating prices, and sending notification mails. Printing the labels should never be delayed just because a ton of mailing workflows are being executed. So we want to be able to route commands to different workers based on the work they do.
Because all commands are like "execute workflow XY", we would be required to implement our own content-based routing. NServiceBus does not support this out of the box, mostly because it's considered an anti-pattern.
Is there a better way to do this, when you are not able to use different message types to route your messages?
The second problem comes when we want to add monitoring. Because an endpoint can only subscribe to one queue for each message type, we cannot let all executors just publish an "I completed a workflow" message. The current solution would be to Bus.Send the message to a preconfigured auditing endpoint. This feels a little like cheating to me ;)
Is there a better way to consolidate the published messages of multiple workers into one queue again? If it weren't for problem #1, all workers could use the same input queue, but that's not possible in this scenario.
You can try to make your routing headers-based rather than content-based, which should be much easier. You are not interested in whether the workflow prints labels or not; you are interested in whether the command is high priority or not. So you can add this information to the message header.
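A minimal sketch of that headers-based routing idea (the header key, priority values, and queue names are all illustrative assumptions, not NServiceBus APIs):

```typescript
interface Message {
  headers: Record<string, string>;
  body: { workflowId: string; payload: unknown };
}

// The router inspects only the header, never the workflow content, so
// "execute workflow XY" commands stay opaque to the infrastructure.
function routeToQueue(msg: Message): string {
  switch (msg.headers["priority"]) {
    case "high": return "workers.labels";        // e.g. label printing
    case "low":  return "workers.notifications"; // e.g. mailing
    default:     return "workers.default";
  }
}
```

The sender sets the priority header when it already knows the workflow's urgency, so no queue ever has to parse the body to decide where a command belongs.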
As a result of a previous post (Architecture: simple CQS) I've been thinking how I could build a simple system that is flexible enough to be extended later.
In other words: I don't see the need for a full-blown CQRS now, but I want it to be easy to evolve to it later, if needed.
So I was thinking to separate commanding from querying, but both based on the same database.
The query part would be easy: a WCF Data Service based on views, so that it's easy to query for data. Nothing special there.
The command part is more difficult, and here's an idea: commands are of course executed in an asynchronous way, so they don't return a result. But my ASP.NET MVC site's controllers often need feedback from a command (for example, whether the registration of a member succeeded or not). So when the controller sends a command, it also generates a transaction ID (a GUID) that is passed along with the command properties. The command service receives this command, puts it into a transactions table in the database with state 'processing', and executes it (using DDD principles). After execution, the transactions table is updated so that the state becomes 'completed' or 'failed', along with more detailed information such as the primary key that was generated.
Meanwhile, the site uses the QueryService to poll for the state of this transaction until it receives 'completed' or 'failed', and then continues its work based on the result. When the transactions table is polled and the result is 'completed' or 'failed', the entry is deleted.
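The transactions-table mechanics described above can be sketched like this (an in-memory stand-in for the real database table; the names and shapes are illustrative):

```typescript
// Hypothetical sketch: the command side records each command under a
// client-generated transaction ID; the query side polls that record
// until it is terminal, at which point the entry is removed.
interface TransactionRecord {
  state: "processing" | "completed" | "failed";
  generatedKey?: number; // e.g. the primary key the command created
}

class TransactionTable {
  private rows = new Map<string, TransactionRecord>();

  begin(transactionId: string): void {
    this.rows.set(transactionId, { state: "processing" });
  }
  finish(transactionId: string, ok: boolean, generatedKey?: number): void {
    this.rows.set(transactionId, {
      state: ok ? "completed" : "failed",
      generatedKey,
    });
  }
  // The query service's poll: terminal entries are returned and deleted.
  poll(transactionId: string): TransactionRecord | undefined {
    const row = this.rows.get(transactionId);
    if (row && row.state !== "processing") this.rows.delete(transactionId);
    return row;
  }
}
```

Returning the generated key in the terminal record is what lets the controller redirect to the newly created entity without needing GUID primary keys.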
A side effect is that I don't need GUIDs as keys for my entities, which is a good thing for performance and size.
In most cases this polling mechanism is probably not needed, but is possible if needed. And the interfaces are designed with CQS in mind, so open for the future.
Do you think of any flaws in this approach? Other ideas or suggestions?
Thanks!
Lud
I think you are very close to a full CQRS system with your approach.
I have a site where I did something similar to what you are describing. My site, braincredits.com, is architected using CQRS, and all commands are async in nature. As a result, when I create an entry, there is really no feedback to the user other than that the command was successfully submitted for processing (not that it was processed).
But I have a user score on the site (a count of their "credits") that should change as the user submits more items. I don't want the user to keep hitting F5 to refresh the browser, so I am doing what you are proposing: I have an AJAX call that fires every second or two to see if the user's credit count has changed. If it has, the new amount is brought back and the UI is updated (with a little animation to catch the user's attention, but nothing too flashy).
What you're talking about is eventual consistency -- that the state of the application that the user is seeing will eventually be consistent with the system data (the system of record). That concept is pretty key to CQRS, and, in my opinion, makes a lot of sense. As soon as you retrieve data in a system (whether it's a CQRS-based one or not), the data is old. But if you assume that and assume that the client will eventually be consistent, then your approach makes sense and you can also design your UI to account for that AND take advantage of that.
As far as suggestions go, I would watch how much polling you do and how much data you're sending back and forth. Don't go overboard with polling, which it sounds like you're not doing. Target what should be updated on a regular basis on your site and I think you'll be fine.
The WCF Data Service layer for the query side is a good idea - just make sure it's only read-enabled (which I'm sure you've done).
Other than that, it sounds like you're off to a good start.
I hope this helps. Good luck!
I have a service-oriented architecture. The client holds a list of parent and child DTOs that are bound to the front end. The service for this is a Get that returns the full list of everything.
When deleting is it better to:
a. remove the object from the list on the front end on success of the service delete method (return bool for success or fail)
b. return the full list of objects again
c. return just the parent and children that were affected
d. other suggestion
Thanks in advance
To have a truly service-oriented architecture, services should be asynchronous, so they shouldn't return any results at all.
For a Delete operation, this is pretty simple to implement: just fire and forget.
Udi Dahan's blog is a good place to learn about real service-orientation.
If you would like to stay with the RPC message exchange pattern implied by your question, I would still say that the method should return void. If you get an empty answer back from a synchronous HTTP POST, it implies success - otherwise, you will get a SOAP Fault or other error result.
It actually depends on the business case.
Suppose two users are using the system, and both are adding and deleting items from the list. When user A deletes an item, do you want him to see the changes from user B immediately, or is it acceptable that user A has to press refresh to see those changes?
Ask the users how they want it to work, then choose the method that will give them this with the least amount of data transferred over the network.
I would go with option A. If your service can be relied on to correctly indicate success or failure of the deletion then why bother reloading all the objects? Unless of course you also want to immediately show deletions by other users, in which case B would be the better option. You may also choose to show deletions by other users by a periodic polling/notification method, separate to deletion actions. It really depends on the requirements of your application.
Not only does it depend on the business case as indicated by this answer, it also depends on your level of risk tolerance in the code's design. Tight coupling between the client and the service can make the code more difficult to change as the application grows and increases in complexity. Instead, a clean separation of responsibilities and loose coupling generally increases maintainability and overall project agility.
In this case, that would probably mean the service wouldn't know about the existence of the client and its implementation, which would rule out the service directly manipulating the client's list. If this service is implemented as a class library, I would recommend a publisher/subscriber approach, where the service exposes a C# event of a type that includes the pertinent deletion information, and the client handles that event and updates its list accordingly.
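The original suggestion is a C# event; the same publisher/subscriber shape can be sketched in TypeScript (all names are illustrative) to show how the client stays in charge of its own list:

```typescript
// Event payload carrying only the pertinent deletion information.
interface ItemDeleted { parentId: number; childIds: number[]; }
type DeletionListener = (e: ItemDeleted) => void;

class ItemService {
  private listeners: DeletionListener[] = [];

  onDeleted(listener: DeletionListener): void {
    this.listeners.push(listener);
  }
  delete(parentId: number, childIds: number[]): void {
    // ... the actual deletion against the data store would go here ...
    const event: ItemDeleted = { parentId, childIds };
    for (const l of this.listeners) l(event); // notify subscribers
  }
}

// Client side: keep the bound list in sync without the service knowing
// anything about the client's data structures.
const service = new ItemService();
let items = [{ id: 1 }, { id: 2 }, { id: 3 }];
service.onDeleted((e) => {
  items = items.filter((i) => i.id !== e.parentId && !e.childIds.includes(i.id));
});
service.delete(2, []);
console.log(items.map((i) => i.id)); // [1, 3]
```

The service never touches the client's list, so swapping the UI (or adding a second subscriber, e.g. a logger) requires no service changes.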
If this is a web (one-way) service, I would expect the deletion service call to be separate from a GetAll service call. The client would manually manage its list using a combination of those calls.