We are using Rebus 4.2.1 and RabbitMQ
What we want to achieve is to have handlers on three (or more) instances all react to the same message.
As far as I understand (which may be wrong), .Publish on the IBus interface should do exactly that (and we have been running with that on MSMQ).
Is there something I am missing with how RabbitMQ works?
(EDIT: I think the term used in RabbitMQ is a "fanout" style message)
EDIT2: mookid8000 put me on the right track - the issue was that each replica was asking for the same queue. As soon as I made the queue name unique, everything started working as intended (and expected).
With Rebus + RabbitMQ, it's pretty simple, because RabbitMQ has native support for topic-based pub/sub messaging.
In each subscriber, you simply call
await bus.Subscribe<YourEvent>();
which will make Rebus generate a topic string from your event type and bind it to the subscriber's input queue. In the publisher you then call
await bus.Publish(new YourEvent(...));
and then each subscriber will get a copy of the event in its input queue.
Underneath the covers, Rebus uses RabbitMQ's "Topic Exchange" exchange type to make this work.
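To tie this back to the edit in the question: each subscriber instance needs its own input queue, otherwise the instances compete for messages on a shared queue instead of each getting a copy. A minimal sketch of such a configuration (the connection string and the queue-naming scheme are illustrative assumptions, not prescribed by Rebus):

```csharp
using System;
using Rebus.Activation;
using Rebus.Config;

var activator = new BuiltinHandlerActivator();

Configure.With(activator)
    .Transport(t => t.UseRabbitMq(
        "amqp://localhost",
        // unique per instance, e.g. suffixed with the machine name,
        // so every replica gets its own copy of published events
        $"subscriber-input-{Environment.MachineName}"))
    .Start();

await activator.Bus.Subscribe<YourEvent>();
```

With a shared queue name, RabbitMQ load-balances the copies across the replicas instead of fanning them out.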
Related
I would like to add Azure Service Bus to one of my projects. The project does a few different tasks: sending email, processing orders, sending invoices, etc.
What I am keen to know is: do I create separate queues to process all these different tasks? I understand that a queue has one sender and one receiver. That makes sense, but then I will end up with quite a number of queues for one project. Is that normal?
Based on your description:
The project does a few different tasks: sending email, processing orders, sending invoices etc.
These messages are not related to each other. I like to differentiate between commands and events. Commands are sent to a specific destination with an expectation of an outcome, knowing that the operation could fail. Events are different: they are broadcast, with no expectation of success or failure, and with no knowledge of their consumers, which allows complete decoupling. Events can only be handled using Topics/Subscriptions. Commands can be handled either with Queues or with Topics/Subscriptions (a topic with a single subscription acts as a queue).
If you go with events, you don't create separate consumer input queues. You create a topic and subscriptions on that topic. Let's say you'll have a PublisherApp and a ConsumerApp. PublisherApp could create a topic and send all messages to the events topic. ConsumerApp would create the required subscriptions, where each subscription would have a filter based on type of the message you'd like that subscription to receive. For your sample, it would be the following subscriptions:
SendMail
ProcessOrder
SendInvoice
In order to filter properly, your BrokeredMessages will have to have a header (property) that would indicate the intent. You could either come up with a custom header or use a standard one, like Label.
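As a rough illustration of that setup with the classic WindowsAzure.ServiceBus SDK (the topic name, subscription names, and connection string below are all made up for the example):

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// placeholder connection string - use your own namespace's
var connectionString = "Endpoint=sb://your-namespace.servicebus.windows.net/;...";
var ns = NamespaceManager.CreateFromConnectionString(connectionString);

if (!ns.TopicExists("events"))
    ns.CreateTopic("events");

// one subscription per intent, filtered on the standard Label property
// (system properties are addressed as sys.* in SQL filters)
ns.CreateSubscription("events", "SendMail",
    new SqlFilter("sys.Label = 'SendMail'"));
ns.CreateSubscription("events", "ProcessOrder",
    new SqlFilter("sys.Label = 'ProcessOrder'"));
ns.CreateSubscription("events", "SendInvoice",
    new SqlFilter("sys.Label = 'SendInvoice'"));
```

The publisher then just sets BrokeredMessage.Label to the intent before sending to the topic.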
I wrote a blog post a while ago on topologies with ASB, have a look, it might give you more ideas on how you can set up your entities.
If topology and entity management is not something you'd like to do yourself, there are good frameworks that can abstract it for you and allow your code to work without diving into the details too much. NServiceBus and MassTransit are two good examples you can have a look at.
Full disclosure: I'm working on Azure Service Bus transport for NServiceBus.
First of all, have a look at Azure Storage queues; I just switched to them in almost the same scenario. With Storage queues there is no monthly fee: you pay only for what you use.
A queue is not limited in the number of receivers or senders. What I mean by that is that you can have many listeners on a queue (in case your app is scaled out), but as soon as a listener picks up a message, it is locked and invisible to the others. (By default the timeout is around 30 seconds in Azure Storage queues and 60 seconds in Service Bus, so be aware: if you need more time to process a message, you need to renew the lock, otherwise you will end up processing the same message multiple times.)
You can use one queue for all your events and, depending on the message body, run different message processors. For instance, in my project I send messages with a Type key that identifies who is going to process the message. You can also use one queue per type and then have your listeners listen to multiple queues.
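The "one queue, dispatch on a Type key" option can be sketched like this; the type names and handlers are made up for illustration:

```csharp
using System;
using System.Collections.Generic;

// map each message type to its processor
var processors = new Dictionary<string, Action<string>>
{
    ["SendMail"]     = body => Console.WriteLine($"mailing: {body}"),
    ["ProcessOrder"] = body => Console.WriteLine($"order: {body}"),
    ["SendInvoice"]  = body => Console.WriteLine($"invoice: {body}"),
};

// called for each message pulled off the single queue
void Dispatch(string type, string body)
{
    if (processors.TryGetValue(type, out var handler))
        handler(body);
    else
        Console.WriteLine($"unknown message type: {type}");
}

Dispatch("SendMail", "hello");
```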
Look at this link for a comparison table.
Topics and subscriptions suit your scenario best.
At the subscription end, you can filter the messages based on criteria; in your case, that can be the task, i.e. SendEmail or ProcessOrder.
If you want to add more tasks in the future, you will not need to make any changes on the service bus itself, only the required changes in the sender and receiver code.
If you use Service Bus queues or Storage queues, you would have to create more queues for each new task in the future, which can complicate the management of your Azure infrastructure.
There are two approaches, depending on your design:
Queue and message: the message body carries an indicator of the task (sending email, processing orders, sending invoices, etc.), and the code processes the message accordingly.
Topics and subscriptions: define topics for each task, and the brokered messages are processed accordingly. This should work better than queues.
Does RabbitMQ (called from a C# client) have the ability to distribute work on a queue and allow the publisher to receive confirmation that the work was processed successfully?
It seems like it should be possible without adding an extra queue, but unless I'm missing something, acknowledgements/confirms don't tell the original publisher that a message was dealt with successfully. So it has no way of knowing whether all of its work was handled.
I'm currently using the standard RabbitMQ C# client, but I know EasyNetQ is also very mature, so suggestions for a good way to achieve this with either would be appreciated.
No, nothing in RabbitMQ will do that by itself. The most you get out of RabbitMQ is an acknowledgement that the message was delivered to a worker, which you may interpret as "someone started work on the task". Your workers will need a way to communicate the results of the task back to the caller. That could well be another exchange/queue mechanism, but it is more likely that your workers will put the results of the task in Redis or a database and, if properly written, communicate failure codes the same way.
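If you do go the "another exchange/queue" route, the usual shape is a reply queue plus a correlation ID. A rough sketch with the standard RabbitMQ .NET client (queue names and payloads are illustrative; the worker side is indicated in comments):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// caller side: declare a server-named exclusive reply queue
var replyQueue = channel.QueueDeclare().QueueName;

var props = channel.CreateBasicProperties();
props.CorrelationId = Guid.NewGuid().ToString();
props.ReplyTo = replyQueue;

var body = Encoding.UTF8.GetBytes("do the work");
channel.BasicPublish("", "task_queue", props, body);

// worker side (in the worker process), after finishing the task:
//   var resultProps = channel.CreateBasicProperties();
//   resultProps.CorrelationId = receivedProps.CorrelationId;
//   channel.BasicPublish("", receivedProps.ReplyTo, resultProps, resultBody);
// The caller consumes from replyQueue and matches on CorrelationId
// to know which piece of work succeeded or failed.
```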
I have two consumers (different applications) connected to an Azure queue. I can either ReceiveAndDelete or PeekLock the messages and during consumption I can complete() or abandon() the message. Ref: http://msdn.microsoft.com/en-us/library/azure/hh851750.aspx.
I'm sure I want to use PeekLock and then abandon() the messages, as I want them to be received in both applications. I figured I'd set the message lifetime to 10 seconds on the queue as a deletion mechanism.
However, while the messages do seem to be deleted after 10 seconds, they keep being delivered to both applications over and over again during those 10 seconds. Should I create some custom duplicate detection, or am I using the wrong approach in general?
In my experience, when you use PeekLock you almost always need to finalize with the Complete method. The Abandon method is what creates the duplication, as the message is never marked as done.
Have you considered using the Service Bus Topics/Subscriptions pattern, or perhaps Filters? If I understand your scenario correctly, it may be just what you need. You can send two messages to the queue with different topics or filters designating which app each is for.
http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
Please let me know if this helps, or if this does not match your situation.
Kindest regards...
I have created a small class using RabbitMQ that implements a publish/subscribe messaging pattern on a topic exchange. On top of this pub/sub I have the methods and properties:
void Send(Message, Subject) - Publish message to destination topic for any subscribers to handle.
MessageReceivedEvent - Subscribe to message received events on this messaging instance (messaging instance is bound to the desired subscribe topic when created).
SendWaitReply(Message, Subject) - Send a message and block until a reply message is received with a correlation id matching the sent message id (or timeout). This is essentially a request/reply or RPC mechanism on top of the pub/sub pattern.
The messaging patterns I have chosen are somewhat set in stone due to the way the system is to be designed. I realize I could use reply-to queues to mitigate the potential issue with SendWaitReply, but that breaks some requirements.
Right now my issues are:
For the Listen event, the messages are processed synchronously through the event subscribers as the listener runs in a single thread. This causes some serious performance issues when handling large volumes of messages (i.e. in a back-end process consuming events from a web api). I am considering passing in a callback function as opposed to subscribing to an event and then dispatching the collection of callbacks in parallel using Task or Threadpool. Thread safety would obviously now be a concern of the caller. I am not sure if this is a correct approach.
For the SendWaitReply event, I have built what seems to be a hacky solution that takes all inbound messages from the message listener loop and places them in a ConcurrentDictionary if they contain a non-empty correlation guid. Then in the SendWaitReply method, I poll the ConcurrentDictionary for a message containing a key that matches the Id of the sent message (or timeout after a certain period). If there is a faster/better way to do this, I would really like to investigate it. Maybe a way to signal to all of the currently blocked SendWaitReply methods that a new message is available and they should all check their Ids instead of polling continuously?
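On the last point: one way to avoid continuous polling is to register a per-request TaskCompletionSource keyed by correlation ID, and have the listener loop complete it directly when the matching reply arrives. A rough stdlib-only sketch (the class and member names are mine, not from any library):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class ReplyCorrelator
{
    private readonly ConcurrentDictionary<string, TaskCompletionSource<byte[]>> _pending =
        new ConcurrentDictionary<string, TaskCompletionSource<byte[]>>();

    // Called by SendWaitReply before publishing; await the returned task.
    public Task<byte[]> Register(string correlationId, TimeSpan timeout)
    {
        var tcs = new TaskCompletionSource<byte[]>(
            TaskCreationOptions.RunContinuationsAsynchronously);
        _pending[correlationId] = tcs;

        // fail the waiter instead of letting it block forever
        var cts = new CancellationTokenSource(timeout);
        cts.Token.Register(() =>
        {
            if (_pending.TryRemove(correlationId, out var pending))
                pending.TrySetException(new TimeoutException(correlationId));
        });

        return tcs.Task;
    }

    // Called from the message listener loop for each inbound reply.
    public void Complete(string correlationId, byte[] body)
    {
        if (_pending.TryRemove(correlationId, out var tcs))
            tcs.TrySetResult(body);
    }
}
```

Each blocked SendWaitReply then waits on exactly one task, the listener loop does a single dictionary lookup per reply, and unmatched replies fall through untouched, so all traffic can stay visible on the topic exchange.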
Update 10/15/2014
After much exhaustive research, I have concluded that there is no "official" mechanism/helper/library to directly handle the particular use-case I have presented above for SendWaitReply in the scope of RabbitMQ or AMQP. I will stick with my current solution (and investigate more robust implementations) for the time being. There have been answers recommending I use the provided RPC functionality, but this unfortunately only works in the case that you want to use exclusive callback queues on a per-request basis. This breaks one of my major requirements of having all messages (request and reply) visible on the same topic exchange.
To further clarify, the typical message pair for a SendWaitReply request is in the format of:
Topic_Exchange.Service_A => some_command => Topic_Exchange.Service_B
Topic_Exchange.Service_B => some_command_reply => Topic_Exchange.Service_A
This affords me a powerful debugging and logging technique where I simply set up a listener on Topic_Exchange.# and can see all of the system traffic for tracing very deep 'call stacks' through various services.
TL;DR - Current Problem Below
Backing down from the architectural level - I still have an issue with the message listener loop. I have tried the EventingBasicConsumer and am still seeing a block. The way my class works is that the caller subscribes to the delegate provided by the instance of the class. The message loop fires the event on that delegate and those subscribers then handle the message. It seems as if I need a different way to pass the message event handlers into the instance such that they don't all sit behind one delegate which enforces synchronous processing.
It's difficult to say why your code is blocking without a sample, but to prevent blocking while consuming, you should use the EventingBasicConsumer.
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (s, delivery) => { /* do stuff here */ };
channel.BasicConsume(queue, false, consumer);
One caveat, if you are using autoAck = false (as I do), then you need to ensure you lock the channel when you do channel.BasicAck or you may hit concurrency issues in the .NET library.
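That caveat could look roughly like this, continuing the snippet above (a fragment, assuming the same channel and consumer and autoAck = false):

```csharp
// IModel is not thread-safe, so serialize the ack calls
// coming from concurrently running Received handlers.
var channelLock = new object();

consumer.Received += (s, delivery) =>
{
    // ... process delivery.Body here ...
    lock (channelLock)
    {
        channel.BasicAck(delivery.DeliveryTag, multiple: false);
    }
};
```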
For the SendWaitReply, you may have better luck if you just use the SimpleRpcClient included in the RabbitMQ client library:
var props = channel.CreateBasicProperties();
// Set your properties
var client = new RabbitMQ.Client.MessagePatterns.SimpleRpcClient(channel, exchange, ExchangeType.Direct, routingKey);
IBasicProperties replyProps;
byte[] response = client.Call(props, body, out replyProps);
The SimpleRpcClient will deal with creating a temporary queue, correlation ID's, and so on instead of building your own. If you find you want to do something more advanced, the source is also a good reference.
I am used to registering callbacks from within ASP/MVC applications in order to get notified of responses to events sent/published. To do so, NServiceBus provides some methods (Register/RegisterWebCallback) which can be invoked on the callback object returned by bus.Send(..).
Is it there any equivalent on Rebus side? I could define an IHandleMessage and then manually do internal dispatching of received responses, but it seems a bit overkill.
To be honest, I never really got why NServiceBus would allow you to register an in-memory callback when calling bus.Send.
I've actually only seen it used in weird hacky scenarios where people use it to implement a blocking request/response API by waiting on the wait handle of the returned IAsyncResult.
Is it something that you're seriously missing?
How would you use it?