I have two consumers (different applications) connected to an Azure queue. I can either ReceiveAndDelete or PeekLock the messages and during consumption I can complete() or abandon() the message. Ref: http://msdn.microsoft.com/en-us/library/azure/hh851750.aspx.
I'm sure I want to use PeekLock and then abandon() the messages, as I want them to be received in both applications. I figured I'd set the message time-to-live on the queue to 10 seconds as a deletion mechanism.
However, since the messages are only deleted after those 10 seconds, they keep being delivered to both applications over and over again during that window. Should I create some custom duplicate detection, or am I using the wrong approach in general?
In my experience, when you use PeekLock, you will almost always need to finish with the Complete method. The Abandon method is what creates the duplicates, because the message is never marked as done.
Have you considered using the Service Bus topics/subscriptions pattern, or perhaps filters? If I understand your scenario correctly, it may be just what you need. You publish messages to a topic, and each application reads from its own subscription; properties or filters can designate which messages each app should receive.
http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
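A minimal sketch of that idea, assuming the WindowsAzure.ServiceBus SDK (the topic name "orders-topic" and subscription names "AppA"/"AppB" are just placeholders): each app reads its own subscription, so both receive every message, and you can PeekLock + Complete without any Abandon or TTL tricks.

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var connectionString = "Endpoint=sb://..."; // your Service Bus connection string
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// One topic, one subscription per consuming application.
if (!namespaceManager.TopicExists("orders-topic"))
    namespaceManager.CreateTopic("orders-topic");
if (!namespaceManager.SubscriptionExists("orders-topic", "AppA"))
    namespaceManager.CreateSubscription("orders-topic", "AppA");
if (!namespaceManager.SubscriptionExists("orders-topic", "AppB"))
    namespaceManager.CreateSubscription("orders-topic", "AppB");

// Publisher: send once; each subscription gets its own copy of the message.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "orders-topic");
topicClient.Send(new BrokeredMessage("hello"));

// Consumer (App A): PeekLock, then Complete when processing succeeded.
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, "orders-topic", "AppA", ReceiveMode.PeekLock);
var message = subscriptionClient.Receive();
if (message != null)
{
    // ...process the message...
    message.Complete();
}
```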
Please let me know if this helps, or if this does not match your situation.
Kindest regards...
I'm trying to run a maintenance task. For this, I want to push a message on application start. When the actual handler runs, it does its work and defers the message to execute again a day later. I don't want to publish this message if it's already on the queue. Is there a way to peek into the queue? I'm using the SQL transport, so I tried to simply query the DB. However, the table is locked and cannot be read. One other thing to consider is that there are at least two machines running the same app. This is why I came up with this solution, since I want to circumvent concurrency issues.
It sounds to me like you're using the message queue as a scheduler.
If I were you, I would use a simple background timer (e.g. System.Timers.Timer) to periodically send a message to yourself, and then you can do your work in the message handler.
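A rough sketch of that idea, assuming an NServiceBus endpoint and a hypothetical RunMaintenance message type of your own (both are placeholders, not anything the framework ships):

```csharp
using System;
using System.Timers;
using NServiceBus;

// Placeholder message type representing the daily maintenance work.
public class RunMaintenance : ICommand { }

public class MaintenanceScheduler
{
    private readonly Timer timer = new Timer(TimeSpan.FromDays(1).TotalMilliseconds);

    public void Start(IMessageSession session)
    {
        timer.Elapsed += async (sender, e) =>
        {
            // Send the maintenance command to this endpoint itself;
            // the regular message handler then does the actual work.
            await session.SendLocal(new RunMaintenance());
        };
        timer.AutoReset = true;
        timer.Start();
    }
}
```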
If your scheduling requirements are more complex, it might be beneficial to take a look at something like Quartz .NET.
I'd like to add Azure Service Bus to one of my projects. The project does a few different tasks: sending email, processing orders, sending invoices, etc.
What I am keen to know is, do I create separate queues to process all these different tasks? I understand that a queue has one sender and one receiver. That makes sense, but then I will end up with quite a number of queues for a project. Is that normal?
Based on your description:
The project does a few different tasks: sending email, processing orders, sending invoices etc.
These messages are not related to each other. I like to differentiate between commands and events. Commands are sent to a specific destination with an expectation of an outcome, knowing that the operation could fail. With events it's different. Events are broadcast and there are no expectations of success or failure. There's also no knowledge about the consumers of events, which allows complete decoupling. Events can only be handled using topics/subscriptions. Commands can be handled either with queues or with topics/subscriptions (a topic with a single subscription would act as a queue).
If you go with events, you don't create separate consumer input queues. You create a topic and subscriptions on that topic. Let's say you have a PublisherApp and a ConsumerApp. PublisherApp would create the topic and send all messages to it. ConsumerApp would create the required subscriptions, where each subscription has a filter based on the type of message you'd like that subscription to receive. For your sample, that would be the following subscriptions:
SendMail
ProcessOrder
SendInvoice
In order to filter properly, your BrokeredMessages will have to have a header (property) that would indicate the intent. You could either come up with a custom header or use a standard one, like Label.
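As a rough sketch, assuming the WindowsAzure.ServiceBus SDK and a topic named "events" (the entity names are placeholders), filtering on the standard Label property could look like this:

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var connectionString = "Endpoint=sb://..."; // your connection string
var ns = NamespaceManager.CreateFromConnectionString(connectionString);

if (!ns.TopicExists("events"))
    ns.CreateTopic("events");

// One subscription per message intent, each filtered on the standard Label header.
// (Guard with SubscriptionExists(...) if this code can run more than once.)
ns.CreateSubscription("events", "SendMail",
    new CorrelationFilter { Label = "SendMail" });
ns.CreateSubscription("events", "ProcessOrder",
    new CorrelationFilter { Label = "ProcessOrder" });
ns.CreateSubscription("events", "SendInvoice",
    new CorrelationFilter { Label = "SendInvoice" });

// PublisherApp: set the Label so the broker routes the copy to the right subscription.
var topicClient = TopicClient.CreateFromConnectionString(connectionString, "events");
topicClient.Send(new BrokeredMessage("order-123") { Label = "ProcessOrder" });
```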
I wrote a blog post a while ago on topologies with ASB; have a look, it might give you more ideas on how you can set up your entities.
If topology & entities management is not what you'd like to do, there are good frameworks that can abstract it for your and allow your code to work w/o diving into details too much. NServiceBus and MassTransit are two good examples you can have a look at.
Full disclosure: I'm working on Azure Service Bus transport for NServiceBus.
First of all, look at Azure Storage queues; I just switched to them in almost the same scenario. With Storage queues there is no monthly fee: you pay only for what you use.
A queue is not limited in its number of receivers or senders. What I mean by that is that you could have many listeners on a queue (in case your app is scaled out), but as soon as a listener picks up a message it is locked and not visible to the others. (The default visibility timeout is around 30 seconds in Azure Storage queues and 60 seconds in Service Bus, so be aware that if you need more time to process your message you have to renew the lock, otherwise you will end up processing the same message multiple times.)
You can use one queue for all your events and, depending on the message body, run different message processors. For instance, in my project I send messages with a Type key that identifies who is going to process the message. You can also use one queue per type and then have your listeners listen to multiple queues.
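A minimal sketch of that, assuming the WindowsAzure.Storage client, a queue named "work", and a JSON body carrying a "Type" field (all of these are placeholders for your own conventions):

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("DefaultEndpointsProtocol=https;..."); // your storage connection string
var queue = account.CreateCloudQueueClient().GetQueueReference("work");
queue.CreateIfNotExists();

// Take a message off with a 30-second visibility timeout (the default behaviour).
var message = queue.GetMessage(TimeSpan.FromSeconds(30));
if (message != null)
{
    // Dispatch based on a "Type" key in the body (naive string check here;
    // use a real JSON deserializer in practice).
    if (message.AsString.Contains("\"Type\":\"SendEmail\""))
    {
        // Long-running work? Extend the lock before the visibility timeout expires,
        // otherwise another listener will pick the message up again.
        queue.UpdateMessage(message, TimeSpan.FromSeconds(60), MessageUpdateFields.Visibility);
        // ...send the email...
    }

    // Only delete once processing succeeded; otherwise the message becomes visible again.
    queue.DeleteMessage(message);
}
```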
Look at this link for a comparison table.
Topics and subscriptions suit your scenario the most.
At the subscription end, you can filter the messages based on criteria; in your case, that can be the task, i.e. SendEmail or ProcessOrder.
If you want to add more tasks in the future, you won't have to make any changes on the service bus itself; you will only have to make the required changes in the sender and receiver code.
If you use Service Bus queues or Storage queues, you will have to create more queues in the future to add new tasks, and that can become complicated at the management level of your Azure infrastructure.
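For example, sketched with the WindowsAzure.ServiceBus SDK and a hypothetical new "GenerateReport" task on a "tasks" topic (names and the "Task" property are placeholders): adding the task is just a new filtered subscription, and the existing sender code is untouched.

```csharp
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

var connectionString = "Endpoint=sb://..."; // your connection string
var ns = NamespaceManager.CreateFromConnectionString(connectionString);

// Only this new subscription is created; senders keep publishing to the same topic.
ns.CreateSubscription("tasks", "GenerateReport",
    new SqlFilter("Task = 'GenerateReport'"));
```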
There are 2 approaches based on your design.
Queue and message: the message body has an indicator for the task (sending email, processing orders, sending invoices, etc.), and the code then processes the message accordingly (see the sketch after this list).
Topics and subscriptions: define a subscription per task and the brokered messages are routed and processed accordingly. This should be better than the queue approach.
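A minimal sketch of the first approach, assuming the WindowsAzure.ServiceBus SDK, a Service Bus queue named "tasks", and a custom "Task" property as the indicator (all placeholders):

```csharp
using Microsoft.ServiceBus.Messaging;

var client = QueueClient.CreateFromConnectionString("Endpoint=sb://...", "tasks");

var options = new OnMessageOptions { AutoComplete = false };
client.OnMessage(message =>
{
    // The sender sets a "Task" property (or a field in the body) saying what to do.
    switch (message.Properties["Task"] as string)
    {
        case "SendEmail":    /* ...send the email... */    break;
        case "ProcessOrder": /* ...process the order... */ break;
        case "SendInvoice":  /* ...send the invoice... */  break;
        default:
            message.DeadLetter("UnknownTask", "No handler registered for this task");
            return;
    }
    message.Complete();
}, options);
```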
Does RabbitMQ (called from a c# client) have the ability to distribute work on a queue and allow the publisher to receive confirmations that that work processed successfully?
It seems like it should be possible without adding an extra queue, but unless I'm missing something, acknowledgements/confirms don't tell the original publisher that a message was dealt with successfully. So it has no way of knowing whether all of its work was handled.
I'm currently using the standard RabbitMQ C# client, but I know EasyNetQ is also very mature, so suggestions for a good way to achieve this with either would be appreciated.
No, absolutely nothing in RabbitMQ will do that. The most you get out of RabbitMQ is an acknowledgment that the message was delivered to a worker, which you may interpret as "someone started work on the task". Your workers will need to find a way to communicate the results of the task back to the caller. That could well be another exchange/queue, but more likely your workers will put the results of the task in Redis or a database and, if properly written, communicate failure codes the same way.
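If you do go the extra exchange/queue route, a rough sketch with the standard RabbitMQ .NET client could look like this (queue names, the "results" reply queue, and the correlation scheme are all placeholders you'd have to build yourself, not something RabbitMQ provides):

```csharp
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Work queue for tasks, plus a separate results queue the publisher listens on.
channel.QueueDeclare("work", durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare("results", durable: true, exclusive: false, autoDelete: false);

// Publisher: tag each task so replies can be matched up later.
var props = channel.CreateBasicProperties();
props.CorrelationId = Guid.NewGuid().ToString();
props.ReplyTo = "results";
channel.BasicPublish(exchange: "", routingKey: "work", basicProperties: props,
                     body: Encoding.UTF8.GetBytes("do-something"));

// Publisher side: consume the results queue to learn whether the work succeeded.
// (Workers would publish their success/failure outcome to the ReplyTo queue,
// echoing the CorrelationId, after finishing the task.)
var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    Console.WriteLine($"Task {ea.BasicProperties.CorrelationId} reported: " +
                      Encoding.UTF8.GetString(ea.Body.ToArray()));
};
channel.BasicConsume("results", autoAck: true, consumer);
```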
I am implementing a component that reads all the messages off a specific queue as they become available, but should only remove messages from the queue asynchronously, after the message contents have been processed and persisted. We read messages off faster than we acknowledge them (e.g. we could have read 10 messages off before we are ready to ack the first). The current implementation uses the XMS API, but we can switch to MQI if XMS is inappropriate for these purposes.
We have tried two approaches to solve this problem, but both have drawbacks that make them unacceptable. I was hoping that someone could suggest a better way.
The first implementation uses an IMessageConsumer in a dedicated thread to read all messages and publish their content as they are available. When the message has been processed, the message.Acknowledge() method is called. The Session is created with AcknowledgeMode.ClientAcknowledge. The problem with this approach is that, as per the documentation, this acknowledges (and deletes) ALL unacknowledged messages that have been received. With the example above, that would mean that all 10 read messages would be acked with the first call. So this does not really solve the problem. Because of the reading throughput we need, we cannot really modify this solution to wait for the first message's ack before reading the second, etc.
The second implementation uses an IQueueBrowser in a dedicated thread to read all messages and publish their content. This does not delete the messages off the queue as it reads. A separate dedicated thread then waits (on a BlockingQueue) for the JMS Message IDs of messages that have been processed. For each of these, it then constructs a dedicated IMessageConsumer (using a message selector on JMSMessageID) to read off that message and ack it. (This pairing of an IQueueBrowser with dedicated IMessageConsumers is recommended by the XMS documentation's section on queue browsers.) This method does work as expected but, as one would imagine, it is too CPU-intensive on the MQ server.
Both of the methods proposed in the question appear to rely on a single instance of the app. What's wrong with using multiple app instances, transacted sessions and COMMIT? The performance reports (these are the SupportPacs with names like MP**) all show that throughput is maximized with multiple app instances, and horizontal scaling is one of the most used approaches in your scenario.
The design for this would be either multiple application instances or multiple threads within the same application. The key to making it work correctly is to keep in mind that transactions are scoped to a connection handle. The implication is that a multi-threaded app must dispatch a separate thread for each connection instance and the messages are read in the same thread.
The process flow is that, using a transacted session, the app performs a normal MQGet against the queue, processes the message contents as required and then issues an MQCommit. (I'll use the MQ native API names in my examples because this isn't language dependent.) If this is an XA transaction the app would call MQBegin to start the transaction but for single-phase commit the transaction is assumed. In both cases, MQCommit ends the transaction which removes the messages from the queue. While messages are under syncpoint, no other app instance can retrieve them; MQ just delivers the next available message. If a transaction is rolled back, the next MQGet from any thread retrieves it, assuming FIFO delivery.
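A rough sketch of that flow in XMS .NET with a locally transacted session (connection properties and the queue name are placeholders; you would run one of these loops per thread or per application instance):

```csharp
using IBM.XMS;

var factoryFactory = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ);
IConnectionFactory cf = factoryFactory.CreateConnectionFactory();
cf.SetStringProperty(XMSC.WMQ_HOST_NAME, "mqhost");
cf.SetIntProperty(XMSC.WMQ_PORT, 1414);
cf.SetStringProperty(XMSC.WMQ_CHANNEL, "SYSTEM.DEF.SVRCONN");
cf.SetStringProperty(XMSC.WMQ_QUEUE_MANAGER, "QM1");
cf.SetIntProperty(XMSC.WMQ_CONNECTION_MODE, XMSC.WMQ_CM_CLIENT);

IConnection connection = cf.CreateConnection();
// Locally transacted session: the get stays under syncpoint until Commit.
ISession session = connection.CreateSession(true, AcknowledgeMode.SessionTransacted);
IDestination queue = session.CreateQueue("queue:///INPUT.QUEUE");
IMessageConsumer consumer = session.CreateConsumer(queue);
connection.Start();

try
{
    while (true)
    {
        IMessage message = consumer.Receive(5000); // wait up to 5 seconds
        if (message == null) continue;
        try
        {
            // ...process and persist the message contents...
            session.Commit();   // removes only this session's message from the queue
        }
        catch
        {
            session.Rollback(); // the message becomes available to another instance
        }
    }
}
finally
{
    consumer.Close();
    session.Close();
    connection.Close();
}
```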
There are some samples in:
[WMQ install home]\tools\dotnet\samples\cs\xms\simple\wmq\
...and SimpleXAConsumer.cs is one example that shows the XA version of this. The non-XA version is simpler since you don't need the external coordinator, the MQBegin verbs and so forth. If you start with one of these, make sure that they do not specify exclusive use of the queue and you can fire up several instances using the same configuration. Alternatively, take the portion of the sample that includes creation of the connection, message handling, connection close and destroy, and wrap all that in a thread spawner class.
[Insert usual advice about using the latest version of the classes here.]
Can someone explain to me the difference between these 3 approaches to processing messages that fail delivery?
Poison Queue service
Dead-Letter Queue service
Using a response service to handle failures
I have "Programming WCF", but I don't really understand when you would use one of these over another, or when it would make sense to use more than one of them. Thanks!
Dead and poison are two different concepts.
Poison messages are messages that can be read from the queue, but your code doesn't know how to handle them, so it throws an exception. If this goes on for some time you want the message to be put on a different queue so your other messages can still be handled. A good approach for this is described on MSDN.
A dead letter is a message that isn't even handled by the queue. The network is broken, or the receiving MSMQ computer is shut down, something like that. The message will automatically be put on the dead-letter queue by Windows after some time, so it's advisable to write a service that monitors the dead-letter queue.
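A sketch of where those two knobs live on a WCF NetMsmqBinding (the retry counts and delays are placeholder values, not recommendations):

```csharp
using System;
using System.ServiceModel;

var binding = new NetMsmqBinding(NetMsmqSecurityMode.None);

// Poison-message handling: after the retries are exhausted,
// move the message to the poison subqueue instead of faulting the service host.
binding.ReceiveRetryCount = 3;
binding.MaxRetryCycles = 2;
binding.RetryCycleDelay = TimeSpan.FromMinutes(5);
binding.ReceiveErrorHandling = ReceiveErrorHandling.Move;

// Dead-letter queue: where MSMQ puts messages it could not deliver at all
// (expired time-to-live, unreachable destination machine, etc.).
binding.DeadLetterQueue = DeadLetterQueue.System;
```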
Poison message / dead-letter queues are used to place messages that have been determined to be undeliverable into a queue that will not try to deliver them anymore. You would do this if you might want to manually take a look at failed messages and process them at a later point. You use these types of queues when you want to keep bad messages from degrading the performance of your system by being retried over and over again.
On the other hand, a response service would be used to notify the sender that there was an error processing the message. Typically in this case you aren't planning on manually processing the bad message and need to let the system that sent the message know that the request has been rejected.
Note that these aren't exclusive. If you are using queues, there is always the chance that the message serialization might change enough to break messages already in the queue, in which case you might still want to have a dead-letter queue even if you are using a response service.