I'm currently trying to figure out the best solution to the following problem using NServiceBus: I have a GUI that users can use to search for different things, but the information about those things is spread across multiple services/databases. Let's say, for example, that a user is searching for a list of parks in a city, but each district of the city keeps information only about its own parks in its own database (which it exposes via web services).

I need NServiceBus to send a message to each endpoint (district) describing what info the user needs, wait for the responses, and only when it has received them from all endpoints send the result back to the user (GUI). The user is only interested in the complete information, so the bus needs to know whether every endpoint has sent its response or not (it also needs to happen in near real time, so the bus will assume an endpoint is offline and send a failure message if it takes too long). Endpoints can change at any time, so the code needs to be easy to maintain. Ideally, adding/removing endpoints should be possible without changing code.
Here are my thoughts about possible solution:
The publish/subscribe pattern lets me easily send a message to multiple endpoints and add/remove endpoints at will by subscribing/unsubscribing, without changing the publisher's code. Problem: by definition the publisher doesn't know how many subscribers there are (or what they are), so waiting for all of the subscribers to respond becomes difficult, if not impossible.
The request/response pattern lets me easily tell endpoints that I want an answer, and I will know whether an endpoint has responded yet. Problem: every time I need to add/remove an endpoint I need to change the sender's code. Scalability may also be a problem.
My question: is there any way to combine those patterns? Or am I looking at this problem the wrong way? Is there even a way to achieve everything I want?
I think you are indeed looking at the problem the wrong way.
It sounds like you want to query multiple services and aggregate the information for presentation in the UI. Generally speaking, a bus is not a good choice for straight querying. A bus is great for sending commands to a specific endpoint, and for publishing state changes as they happen.
If you are performing a query against an endpoint, your best bet would be to model and expose a query (via something like WebAPI).
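To make that concrete, here is a minimal sketch of such a query endpoint, sitting outside the bus entirely. It assumes ASP.NET Core Web API; the district URLs, the Park DTO, and the five-second timeout are made-up placeholders standing in for the "endpoint is offline" rule in the question.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public record Park(string Name, string District);

[ApiController]
[Route("api/parks")]
public class ParksController : ControllerBase
{
    static readonly HttpClient Http = new HttpClient();

    // Hypothetical district query services; in a real system this list would live in
    // configuration so districts can be added or removed without touching this code.
    static readonly string[] DistrictUrls =
    {
        "https://district-north.example/api/parks",
        "https://district-south.example/api/parks"
    };

    [HttpGet("{city}")]
    public async Task<IActionResult> Get(string city)
    {
        // One shared timeout: if any district is too slow or offline, the whole
        // query fails, because the user only wants the complete list.
        using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(5));

        var queries = DistrictUrls
            .Select(url => Http.GetFromJsonAsync<List<Park>>($"{url}?city={city}", cts.Token))
            .ToArray();

        try
        {
            var perDistrict = await Task.WhenAll(queries);
            return Ok(perDistrict.SelectMany(parks => parks));
        }
        catch (Exception ex) when (ex is OperationCanceledException or HttpRequestException)
        {
            return StatusCode(504, "At least one district did not respond in time.");
        }
    }
}
```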
Consider a web application that implements every database action except querying (i.e. add, update, remove) as an NServiceBus message, so that whenever a user calls a web API, the back end maps it to an await endpointInstance.Request call and returns the response within the same HTTP request/connection.
The challenge is when a message handler needs to send some other messages and wait for their responses to finish its job. NServiceBus does not allow calling Request inside a message handler.
I ended up using a saga to implement message handlers that rely on other message handlers' responses. But the problem with a saga is that I can't send back the result in the same HTTP request, because the saga uses the publish/subscribe pattern.
All our web APIs need to respond within the same HTTP request (the connection should be kept open until the result is received or a timeout occurs).
Is there any clean solution (preferably without using Saga)?
An example scenario:
1. The user calls http://test.com/purchase?itemId=5&paymentId=133
2. The web server calls await endpointInstance.Request<PurchaseResult>(new PurchaseMessage(itemId, paymentId));
3. The PurchaseMessage handler should call await endpointInstance.Request<AddPaymentResult>(new AddPaymentMessage(paymentId));
4. If the AddPaymentResult is successful, store the purchase details in the database and return true as the PurchaseResult; otherwise return false.
You're trying to achieve something that we (at Particular Software) are actively trying to prevent. Let me explain.
With Remote Procedure Calls (RPC) you call another component out-of-process. That is what makes the procedure call 'remote'. Whereas with regular programming you do everything in-process and it is blazing fast, with RPC you have the overhead of serialization, latency and more. Basically, you have to deal with the fallacies of distributed computing.
Still, people do it for various reasons. Sometimes because you want to use a WebAPI (or 'old fashioned' web service) because it offers functionality you don't want to develop yourself. The oldest example in the book is searching for an address by postal code. Or deducting money from someone's bank account. If you're building a CRM, you can use these remote components. These days a lot of people build distributed monoliths because they are taught at conferences that this is a good thing. In an architecture diagram it looks really nice, but there's still temporal coupling that can cause a lot of headaches.
Some of these headaches come from the fact that you're trying to do stuff in an atomic action. Back in the day, with in-process calling of code/classes/etc., this was easy and fast. Until you hit limitations, like tons of locks on a database.
A solution to this is asynchronous communication. You send some information via fire-and-forget. This solves the temporal coupling. Instead of having a database that gets dozens and dozens of requests to update data and, as a result, a website that grinds to a halt, you have various options to make sure this doesn't happen. This is a really good thing, because instead of a single atomic operation you have various smaller operations and many ways to distribute work, scale your system, etc.
It also brings additional challenges, because not everything is able to work with fire-and-forget. Some systems that were already built try to introduce asynchronous communication via messaging (and hopefully NServiceBus). Some parts can work flawlessly with this. But other parts can't, mainly the user interface (UI), because it was built to get an immediate result. So when you send a message from the UI, you expect a result!
With NServiceBus we've built a package called "Client-Side Callbacks" to make exactly this possible. We highly recommend our customers not to use it, except for the specific scenario I just described. It is much better to migrate your entire UI to deal with the fact that you don't receive an immediate answer, but we understand this is so much work that not many will be able to achieve it.
However, once that first message has been sent and the UI has received a result, there is no need to use callbacks anymore. So I'd like to propose this scenario (sketched in code after the steps):
1. The user calls http://test.com/purchase?itemId=5&paymentId=133
2. The web server calls await endpointInstance.Request<PurchaseResult>(new PurchaseMessage(itemId, paymentId));
3. The PurchaseMessage handler retrieves the info it needs, sends or publishes a message to (an)other component(s), and then replies back to the web server with an answer.
4. The next handler works with the sent/published message and continues the process.
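A rough sketch of those four steps, assuming the NServiceBus.Callbacks package on the web server's endpoint (exact Request overloads and configuration vary by version). The message and property names come from the scenario above; everything else is made up for illustration.

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class PurchaseMessage : ICommand
{
    public int ItemId { get; set; }
    public int PaymentId { get; set; }
}

public class AddPaymentMessage : ICommand
{
    public int PaymentId { get; set; }
}

public class PurchaseResult : IMessage
{
    public bool Accepted { get; set; }
}

public static class PurchaseEndpointCalls
{
    // Step 2: the only callback in the whole flow lives at the web server.
    public static async Task<bool> Purchase(IMessageSession endpointInstance, int itemId, int paymentId)
    {
        var result = await endpointInstance.Request<PurchaseResult>(
            new PurchaseMessage { ItemId = itemId, PaymentId = paymentId },
            new SendOptions());
        return result.Accepted;
    }
}

// Step 3: the handler does NOT nest another Request; it fires the next message
// and replies straight back so the HTTP request can complete.
public class PurchaseMessageHandler : IHandleMessages<PurchaseMessage>
{
    public async Task Handle(PurchaseMessage message, IMessageHandlerContext context)
    {
        await context.Send(new AddPaymentMessage { PaymentId = message.PaymentId });
        await context.Reply(new PurchaseResult { Accepted = true });
    }
}

// Step 4: the next handler continues the process asynchronously.
public class AddPaymentMessageHandler : IHandleMessages<AddPaymentMessage>
{
    public Task Handle(AddPaymentMessage message, IMessageHandlerContext context)
    {
        // Deduct the payment, store the purchase, publish a PaymentAdded event, etc.
        return Task.CompletedTask;
    }
}
```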
Let us know if you need more information. You can always contact us by sending an email to support@particular.net.
I have a Web API and Angular app that works great. However, there is one hitch. The app integrates several different systems in a fairly complex (but fast) workflow, and the challenge I have is that when I kick it off, I don't really know when it's over. As such, sometimes the screen updates appropriately when the async call is finished, and sometimes the process is still going and the refresh shows stale data.
I was wondering if there was a way in MassTransit (other than a PublishRequest) that I could "monitor" a specific message and know when all of its consumers are in fact done with it?
I have a few ideas around listening for a "completed" message to bounce back, but that seems pretty noisy if there are 10 or 20,000 users... and turning everything into a request just so I can get a response seems equally wasteful.
I'm all ears,
Thanks
So you have a couple of options; I think there has been some discussion about this on the mailing list in the past as well.
Use SignalR to notify the UI when a "complete" event happens. There's a RabbitMQ backplane, and I've heard this works pretty awesomely in some cases.
Do Request/Response and wait for the response for each message
Do Request/Response, but request out to a saga that does all the work and then responds when it hits the right state. I've never tried to tie that together, so it might be slightly tricky, but totally doable.
Create a saga that will respond to status messages, so your UI can ping it directly via Request/Response to get updates. Depending on how you back the data for the saga, you can also query the saga's data store directly to get statuses. The NHibernate saga repository, for example, would let you just hit the DB tables to get status.
There might be some other options, but nothing is initially coming to mind that isn't going to be really complicated. I'd suggest joining the mailing list if you want to dig into this further because some of the people there have experience building out similar stuff. I've just done the last one.
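A minimal sketch of the first option (a SignalR push when a "complete" event happens), assuming ASP.NET Core SignalR; the WorkflowCompleted event, hub name, and "workflowCompleted" client method are made up for illustration.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.AspNetCore.SignalR;

// Hypothetical event published by whatever finishes the workflow.
public class WorkflowCompleted
{
    public Guid CorrelationId { get; set; }
    public string UserId { get; set; }
}

public class WorkflowHub : Hub { }

// Hosted in (or next to) the Web API process, so it can reach the SignalR hub.
public class WorkflowCompletedConsumer : IConsumer<WorkflowCompleted>
{
    readonly IHubContext<WorkflowHub> hub;

    public WorkflowCompletedConsumer(IHubContext<WorkflowHub> hub)
    {
        this.hub = hub;
    }

    public Task Consume(ConsumeContext<WorkflowCompleted> context)
    {
        // Push the completion to the user who kicked the workflow off, so the
        // Angular app refreshes only when the data is actually final.
        return hub.Clients.User(context.Message.UserId)
            .SendAsync("workflowCompleted", context.Message.CorrelationId);
    }
}
```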
I am new to GUIs, and I have encountered a problem in my client-server program.
My program is like a "customer support" application, where multiple clients can use it from different computers simultaneously. My problem is that when one client changes some info, it is inserted into the DB, but the other clients will not see it unless I add a "Refresh" button to my GUI.
I want the GUI to be dynamic and react to other clients' actions. How can I overcome this issue?
EDIT:
1. .NET 4,
2. SQL Server,
3. The actions happen after a button click.
Basically, you have two options: push or poll. Push (some central server announcing the change to all the listeners) is more immediate, but demands suitable infrastructure. It also depends on the number of clients you need to support and how many events are passing through the system. Personally, I'm a big fan of Redis pub/sub for this (it is actually what we use for the live updates here on Stack Exchange, coupled with web sockets). But in some cases you can get the database to provide change notifications directly (personally I prefer not to use this). You may also be able to use events over something like WCF from a central app server, but that depends on there being only one app server, which doesn't sound like a good idea to me.
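A minimal sketch of the push option using Redis pub/sub via the StackExchange.Redis client; the "support.updates" channel name and the refresh callback are assumptions made for illustration.

```csharp
using System;
using StackExchange.Redis;

public static class ChangeNotifications
{
    static readonly ConnectionMultiplexer Redis = ConnectionMultiplexer.Connect("localhost");

    // Called once at client start-up: every connected client refreshes its view
    // as soon as any other client announces a change.
    public static void ListenForChanges(Action refreshGrid)
    {
        var sub = Redis.GetSubscriber();
        sub.Subscribe("support.updates", (channel, message) => refreshGrid());
    }

    // Called by the client that just wrote to the database.
    public static void AnnounceChange(string ticketId)
    {
        Redis.GetSubscriber().Publish("support.updates", ticketId);
    }
}
```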
The other option is polling, i.e. have the application automatically query the system periodically (every minute, perhaps) to see if the data being displayed has changed. If you can, using a timestamp/rowversion column is a cheap way of doing this.
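And a sketch of the poll option, assuming SQL Server with a rowversion column named RowVersion on a hypothetical Tickets table; the connection string, table, and refresh callback are placeholders, and the loop would normally run on a background thread.

```csharp
using System;
using System.Data.SqlClient;
using System.Linq;
using System.Threading;

public class TicketPoller
{
    byte[] lastVersion = new byte[8]; // rowversion is binary(8), starts below any real value

    public void Poll(string connectionString, Action refreshGrid)
    {
        while (true)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT MAX(RowVersion) FROM Tickets", conn))
            {
                conn.Open();
                var current = cmd.ExecuteScalar() as byte[];

                // Only refresh when something actually changed since the last poll.
                if (current != null && !current.SequenceEqual(lastVersion))
                {
                    lastVersion = current;
                    refreshGrid();
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(1)); // poll once a minute, as suggested above
        }
    }
}
```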
After reading through the pub/sub project sample in MassTransit, it left me scratching my head.
In the sample, the client application publishes a request for the subscriber application to update the password of a fictitious user. This sample code works fine, and it's easy to follow the bouncing ball of this project.
HOWEVER--
In a real-world environment, the purpose of pub/sub (in my understanding) is to have a small number of publishers interacting with a large number of subscribers. In the case of a subscriber performing any sort of CRUD operation, shouldn't the communication pattern prevent more than one subscriber from handling the message? It would be less than desirable to have twenty subscribers attempt to update the same database record, for instance.
Is this just a case of a misguided sample project?
If pub/sub can be used for CRUD operations, how do you configure the framework to only allow one subscriber to perform an operation?
Am I just completely missing some basic info on the purpose of pub/sub?
Thanks for any clarification provided...
David
The scenario you describe is usually referred to as 'competing consumers', and is quite typical of pub/sub.
If each consumer has its own, unique queue name, each consumer will receive its own copy of every message.
Alternatively, to get competing-consumer behaviour, the consumers share the same queue name; they then compete for each message, so each message is received by only one of them.
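A minimal sketch of that difference, assuming MassTransit with the RabbitMQ transport (the exact configuration API varies a bit between MassTransit versions); the queue name and the UpdatePassword message stand in for the sample being discussed.

```csharp
using System;
using System.Threading.Tasks;
using MassTransit;

public class UpdatePassword
{
    public string UserName { get; set; }
    public string NewPassword { get; set; }
}

public class UpdatePasswordConsumer : IConsumer<UpdatePassword>
{
    public Task Consume(ConsumeContext<UpdatePassword> context)
    {
        // The single CRUD operation lives here.
        Console.WriteLine($"Updating password for {context.Message.UserName}");
        return Task.CompletedTask;
    }
}

public static class Program
{
    public static async Task Main()
    {
        var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host("localhost", "/", h =>
            {
                h.Username("guest");
                h.Password("guest");
            });

            // Every instance that declares this SAME queue name becomes a competing
            // consumer: each UpdatePassword message is handled by exactly one of them.
            // Give each instance a UNIQUE queue name instead and every instance
            // receives its own copy of every message.
            cfg.ReceiveEndpoint("update-password-service", e =>
            {
                e.Consumer<UpdatePasswordConsumer>();
            });
        });

        await bus.StartAsync();
        Console.ReadLine();
        await bus.StopAsync();
    }
}
```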
You can have n-to-n, many-to-few, or few-to-many publishers to subscribers in any pub/sub system. It's really a matter of how many actors you want responding to a given message.
The sample project might not be the best, but we feel it shows what's going on. In real-world cases, though, pub/sub can be used for CRUD-type behaviours; however, it's more along the lines of many front ends sending "load data" type messages to middleware (a cache), requesting a response with that data. If the data gets updated on the front end somehow, the front end must publish a message indicating that, and multiple middleware pieces need to update themselves (cache, backend store, etc.). [See CQRS.]
Messaging in general is more about working with disconnected systems. Your specific concern is more about the structure of consumers and publishers. I've seen implementations of MassTransit where most of the routes were static and it wasn't really pub/sub at all, but just a lot of sends along a known topography of systems. For really understanding the concepts, the best book I know of is Enterprise Service Bus: Theory in Practice.
I hope this helps!
Edit: Also see our documentation; some of the concepts are touched on there.
I am having a little trouble deciding which way to go while designing the message flow in our system.
Because of the volatile nature of our business processes (e.g. calculating freight costs), we use a workflow framework so that we can change a process on the fly.
The general process looks something like this:
The interface is a service which connects to the customer's system via whatever interface the customer provides (web services, TCP endpoints, database polling, files, you name it). Then a command containing the received data and the ID of the workflow to be executed is sent to the executor.
The first problem comes at the point where we want to distribute load across multiple worker services.
Say we have different processes like printing parcel labels, calculating prices and sending notification mails. Printing the labels should never be delayed just because a ton of mailing workflows are being executed. So we want to be able to route commands to different workers based on the work they do.
Because all commands are of the form "execute workflow XY", we would be required to implement our own content-based routing. NServiceBus does not support this out of the box, mostly because it's considered an anti-pattern.
Is there a better way to do this, when you are not able to use different message types to route your messages?
The second problem comes when we want to add monitoring. Because an endpoint can only subscribe to one queue for each message type, we cannot just let all executors publish an "I completed a workflow" message. The current solution would be to Bus.Send the message to a preconfigured auditing endpoint. This feels a little like cheating to me ;)
Is there a better way to consolidate the published messages of multiple workers into one queue again? If it weren't for problem #1, I think all workers could use the same input queue; however, that is not possible in this scenario.
You can try to make your routing not content-based but header-based, which should be much easier. You are not interested in whether the workflow prints labels or not; you are interested in whether this command is high priority or not. So you can probably add this information to the message header...
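A rough sketch of that header-based idea, assuming NServiceBus 6+ APIs; the "Priority" header name, the "workflow.highpriority" queue, and the ExecuteWorkflow command are made up for illustration.

```csharp
using System.Threading.Tasks;
using NServiceBus;

public class ExecuteWorkflow : ICommand
{
    public string WorkflowId { get; set; }
}

public static class WorkflowDispatcher
{
    // Sender side: stamp the priority into a header and/or route straight to a
    // dedicated worker queue, so "execute workflow XY" commands for label printing
    // never queue up behind bulk mailing workflows.
    public static Task Dispatch(IMessageSession session, string workflowId, bool highPriority)
    {
        var options = new SendOptions();
        options.SetHeader("Priority", highPriority ? "high" : "normal");
        if (highPriority)
        {
            options.SetDestination("workflow.highpriority"); // made-up queue name
        }
        return session.Send(new ExecuteWorkflow { WorkflowId = workflowId }, options);
    }
}

public class ExecuteWorkflowHandler : IHandleMessages<ExecuteWorkflow>
{
    public Task Handle(ExecuteWorkflow message, IMessageHandlerContext context)
    {
        // Worker side: the header is available if the worker wants to act on it.
        context.MessageHeaders.TryGetValue("Priority", out var priority);
        // ... look up the workflow definition by message.WorkflowId and run it ...
        return Task.CompletedTask;
    }
}
```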