Consume RabbitMQ dead-letter queue messages - C#

I have read about RabbitMQ and its techniques for handling events that fail to be processed by consumers (unacknowledged messages, expired TTLs, etc.): RabbitMQ DLX
The way this works (as I understand it) is by setting a dead-letter exchange on the processing queue.
So far so good: when events fail, I can see the messages being saved to the dead-letter queue.
But how do I consume the messages saved there?
In my case I want to reprocess them after their TTL in the DLQ has expired, but I couldn't find a way or an example of how to achieve this in .NET Core (C#).
I would also accept other solutions, such as creating a background worker that checks for dead messages.
Can this be achieved? If so, please help me understand what I need to do to get it working.

You need to configure a "dead letter queue" to handle messages that have been rejected or were undeliverable. Using the RabbitMQ.Client library, you can bind a consumer to that configured queue and retrieve the messages from it. From there, your code decides whether to reprocess them or reject them for good.
I've found a useful step-by-step guide with more detail: https://medium.com/nerd-for-tech/dead-letter-exchanges-at-rabbitmq-net-core-b6348122460d
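As a rough sketch (not the guide's exact code) of what consuming from the DLQ looks like with the RabbitMQ.Client package, where the host name and the queue name "my-queue-dlq" are placeholder assumptions:

    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (_, ea) =>
    {
        var payload = Encoding.UTF8.GetString(ea.Body.ToArray());
        // Decide here: reprocess the payload, republish it to the original
        // queue, or drop it for good. The ack removes it from the DLQ.
        channel.BasicAck(ea.DeliveryTag, multiple: false);
    };
    channel.BasicConsume(queue: "my-queue-dlq", autoAck: false, consumer: consumer);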

You may want to consider attaching the original queue's source exchange as the dead-letter exchange of the DLQ, which makes the original queue and the DLQ dead-letter targets for each other and forms a retry loop.
The added DLQ serves as a temporary store for retry messages and, via the TTL mechanism, as a retry-delay buffer. Messages that time out in the DLQ are automatically pushed back to the original queue for retry, and the original queue's consumer handles both first-time and retry messages.
To avoid an infinite retry loop, set a retry counter in the message; the consumer of the original queue then breaks the loop once that counter passes a limit.
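A minimal sketch of that topology with RabbitMQ.Client; all exchange/queue names and the 30-second TTL are arbitrary assumptions:

    using System.Collections.Generic;
    using RabbitMQ.Client;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    channel.ExchangeDeclare("work-exchange", ExchangeType.Direct);
    channel.ExchangeDeclare("retry-exchange", ExchangeType.Direct);

    // The work queue dead-letters failed messages into the retry exchange.
    channel.QueueDeclare("work-queue", durable: true, exclusive: false, autoDelete: false,
        arguments: new Dictionary<string, object>
        {
            ["x-dead-letter-exchange"] = "retry-exchange"
        });
    channel.QueueBind("work-queue", "work-exchange", routingKey: "work");

    // The retry queue holds messages for 30 s, then dead-letters them
    // back into the work exchange, closing the retry loop.
    channel.QueueDeclare("work-queue-retry", durable: true, exclusive: false, autoDelete: false,
        arguments: new Dictionary<string, object>
        {
            ["x-dead-letter-exchange"] = "work-exchange",
            ["x-message-ttl"] = 30000
        });
    channel.QueueBind("work-queue-retry", "retry-exchange", routingKey: "work");

The consumer can read the retry count either from a custom header it sets itself or from the count field of the x-death header that RabbitMQ appends on each dead-lettering, and ack-and-drop the message once the limit is reached.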

Related

Best way to read Service Bus Topic Subscription Dead Letter Queue (DLQ) C#

We have an external SB Topic Subscription for which we are the consumers. We are currently reading the SB with a max concurrent thread count of 20.
What would be the best way to occasionally drain the DLQ? Should that be included in the same app, or should I create another app just for draining the DLQ?
Pointers on any standards being followed would be highly appreciated.
It depends on what you mean by "draining the DLQ." Do you need to reprocess those messages or purge the queue? What environment are you running in? Would a serverless option work in your case?
If you don't need those messages and are OK with running Azure Functions (e.g. on the consumption plan), I would do the simplest thing: a Function triggered by the DLQ that does nothing. That's right, nothing. That will do the job and purge those messages as they arrive, without the need to worry about hosting your process, scaling out, concurrency, etc.
Note that a dead-letter queue is always a sub-queue on the original queue. For a queue named myQueue the dead-letter queue would be myQueue/$DeadLetterQueue.
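If you do need to inspect or reprocess the messages instead, here is a hedged sketch of a drainer using the current Azure.Messaging.ServiceBus package (the connection string and entity names are placeholders):

    using System;
    using Azure.Messaging.ServiceBus;

    await using var client = new ServiceBusClient("<connection-string>");
    await using var receiver = client.CreateReceiver(
        "my-topic", "my-subscription",
        new ServiceBusReceiverOptions { SubQueue = SubQueue.DeadLetter });

    ServiceBusReceivedMessage message;
    while ((message = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) != null)
    {
        // Inspect message.DeadLetterReason / message.DeadLetterErrorDescription,
        // optionally resubmit the payload, then Complete to remove it from the DLQ.
        await receiver.CompleteMessageAsync(message);
    }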

Azure servicebus queue message handling

I have two consumers (different applications) connected to an Azure queue. I can either ReceiveAndDelete or PeekLock the messages and during consumption I can complete() or abandon() the message. Ref: http://msdn.microsoft.com/en-us/library/azure/hh851750.aspx.
I'm sure I want to use PeekLock and then abandon() the messages, as I want them to be received by both applications. I figured I'd set the message lifetime to 10 seconds on the queue as a deletion mechanism.
However, while the messages do seem to be deleted after 10 seconds, during those 10 seconds they keep being delivered to both applications over and over again. Should I create some custom duplicate detection, or am I using the wrong approach in general?
In my experience, when you use PeekLock, you almost always need to finalize the message with the Complete method. The Abandon method is what creates the duplicates, as the message is never marked as done.
Have you considered using the Service Bus Topics/Subscriptions pattern, or perhaps Filters? If I understand your scenario correctly, it may be just what you need. You can send two messages with different topics or filters designating which app each is for.
http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
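A rough sketch of that pattern with the current Azure.Messaging.ServiceBus SDK (topic and subscription names are hypothetical); each application reads from its own subscription, so both get a copy and completing in one app does not affect the other:

    using Azure.Messaging.ServiceBus;

    await using var client = new ServiceBusClient("<connection-string>");

    // Publisher: one message sent to the topic fans out to every subscription.
    var sender = client.CreateSender("events-topic");
    await sender.SendMessageAsync(new ServiceBusMessage("payload"));

    // Each app receives from its own subscription (e.g. "app-a", "app-b").
    var receiver = client.CreateReceiver("events-topic", "app-a");
    var message = await receiver.ReceiveMessageAsync();
    if (message != null)
    {
        // ... process ...
        await receiver.CompleteMessageAsync(message); // affects only this subscription
    }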
Please let me know if this helps, or if this does not match your situation.
Kindest regards...

IBM-MQ reader in .net XMS to ack processed messages one-by-one

I am implementing a component that reads all the messages off a specific queue as they become available, but should only remove messages from the queue asynchronously, after the message contents have been processed and persisted. We read messages off faster than we acknowledge them (e.g. we could have read 10 messages off before we are ready to ack the first). The current implementation uses the XMS API, but we can switch to MQI if XMS is inappropriate for these purposes.
We have tried two approaches to try solve this problem, but both have drawbacks that make them unacceptable. I was hoping that someone could suggest a better way.
The first implementation uses an IMessageConsumer in a dedicated thread to read all messages and publish their content as they are available. When the message has been processed, the message.Acknowledge() method is called. The Session is created with AcknowledgeMode.ClientAcknowledge. The problem with this approach is that, as per the documentation, this acknowledges (and deletes) ALL unacknowledged messages that have been received. With the example above, that would mean that all 10 read messages would be acked with the first call. So this does not really solve the problem. Because of the reading throughput we need, we cannot really modify this solution to wait for the first message's ack before reading the second, etc.
The second implementation uses an IQueueBrowser in a dedicated thread to read all messages and publish their content. This does not delete the messages off the queue as it reads. A separate dedicated thread then waits (on a BlockingQueue) for the JMS message IDs of messages that have been processed. For each of these, it constructs a dedicated IMessageConsumer (using a message selector on JMSMessageID) to read off that message and ack it. (This pairing of an IQueueBrowser with a dedicated IMessageConsumer is recommended by the XMS documentation's section on queue browsers.) This method does work as expected but, as one would imagine, it is too CPU-intensive on the MQ server.
Both of the methods proposed in the question appear to rely on a single instance of the app. What's wrong with using multiple app instances, transacted sessions and COMMIT? The performance reports (these are the SupportPacs with names like MP**) all show that throughput is maximized with multiple app instances, and horizontal scaling is one of the most used approaches in your scenario.
The design for this would be either multiple application instances or multiple threads within the same application. The key to making it work correctly is to keep in mind that transactions are scoped to a connection handle. The implication is that a multi-threaded app must dispatch a separate thread for each connection instance and the messages are read in the same thread.
The process flow is that, using a transacted session, the app performs a normal MQGet against the queue, processes the message contents as required and then issues an MQCommit. (I'll use the MQ native API names in my examples because this isn't language dependent.) If this is an XA transaction the app would call MQBegin to start the transaction but for single-phase commit the transaction is assumed. In both cases, MQCommit ends the transaction which removes the messages from the queue. While messages are under syncpoint, no other app instance can retrieve them; MQ just delivers the next available message. If a transaction is rolled back, the next MQGet from any thread retrieves it, assuming FIFO delivery.
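A hedged sketch of that flow with the XMS .NET API (IBM.XMS); the connection details and queue name are placeholders, and each thread would own its own connection/session pair:

    using IBM.XMS;

    var factory = XMSFactoryFactory.GetInstance(XMSC.CT_WMQ).CreateConnectionFactory();
    factory.SetStringProperty(XMSC.WMQ_HOST_NAME, "mq-host");
    factory.SetIntProperty(XMSC.WMQ_PORT, 1414);
    factory.SetStringProperty(XMSC.WMQ_CHANNEL, "DEV.APP.SVRCONN");
    factory.SetIntProperty(XMSC.WMQ_CONNECTION_MODE, XMSC.WMQ_CM_CLIENT);

    IConnection connection = factory.CreateConnection();
    // Transacted session: every Receive stays under syncpoint until Commit/Rollback.
    ISession session = connection.CreateSession(true, AcknowledgeMode.SessionTransacted);
    IMessageConsumer consumer = session.CreateConsumer(session.CreateQueue("queue://QM1/MY.QUEUE"));
    connection.Start();

    IMessage message = consumer.Receive(5000); // wait up to 5 seconds
    if (message != null)
    {
        try
        {
            // ... process and persist the message contents ...
            session.Commit();   // removes the message from the queue
        }
        catch
        {
            session.Rollback(); // makes the message available to any instance again
        }
    }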
There are some samples in:
[WMQ install home]\tools\dotnet\samples\cs\xms\simple\wmq\
...and SimpleXAConsumer.cs is one example that shows the XA version of this. The non-XA version is simpler since you don't need the external coordinator, the MQBegin verbs and so forth. If you start with one of these, make sure that they do not specify exclusive use of the queue and you can fire up several instances using the same configuration. Alternatively, take the portion of the sample that includes creation of the connection, message handling, connection close and destroy, and wrap all that in a thread spawner class.
[Insert usual advice about using the latest version of the classes here.]

RabbitMQ Basic Recover Doesn't Work

We have a durable RabbitMQ queue. When a consumer gets an item from the queue, it processes it and then acknowledges it. If the consumer fails to process the item, it prints an error (expecting someone to fix the problem) and stops, so no acknowledgement is sent. But when the consumer restarts, the item it receives is the next item in the queue, not the unacked item. Basic.Recover() doesn't help (using the .NET client).
Any ideas how to make it behave like a queue - always deliver the first item if it has not been acked?
Messages can be consumed in two ways: noAck=false or noAck=true.
noAck is a parameter on both Model.BasicConsume and Model.BasicGet.
When noAck is set to true, messages are automatically removed from the queue after being delivered. If noAck is set to false, messages are only removed when you call BasicAck.
If noAck=false and you do not call BasicAck, the message remains on the queue, but it will not be delivered to other consumers until you restart the application (or close the connection that consumed it first). If you call BasicReject, the message will be redelivered to a subscriber.
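To illustrate with the .NET client, a minimal sketch using Model.BasicGet (host and queue name are placeholders):

    using RabbitMQ.Client;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    // noAck=false (autoAck: false): the message stays on the server until settled.
    BasicGetResult result = channel.BasicGet("my-queue", autoAck: false);
    if (result != null)
    {
        try
        {
            // ... process result.Body ...
            channel.BasicAck(result.DeliveryTag, multiple: false);  // removes it
        }
        catch
        {
            channel.BasicReject(result.DeliveryTag, requeue: true); // redelivers it
        }
    }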
I hope this helps.
See this entry in the RabbitMQ FAQ. While you may want RabbitMQ to re-queue your unacked messages right back to the head of the queue (where they were before your consumer pulled them down), the reality is likely going to be different, as you've experienced.
So it's not that Basic.Recover() doesn't work (the message was placed back on the queue for future reprocessing) just that it doesn't work the way you expected.
Something in the back of my mind tells me that you may be able to get the behavior you want by setting a prefetch count of 1 and having at most one consumer connected to the queue at any time, but I can't guarantee that's the case. It's worth trying. Even if it works, though, it's not something to rely on staying the case forever, and with such a low prefetch count your consumer's messages/second throughput will probably suffer.
RabbitMQ seems to have fixed part of this issue since 2.7.0: messages are now requeued in publication order. However, if you have more than one subscriber on a queue, you may still see messages arriving out of their original order.
You can get this behaviour by having a pair of queues, one high priority and one normal priority. Set the prefetch count to 1, and then alternate between the queues using basic.get. Most of the time the priority queue will be empty, but when you want to requeue, publish the message again onto the high priority queue instead.
This works in a scenario where you have multiple processes consuming the message stream, and one process decides to bail out on a message. That message will be picked up almost immediately by another process.
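A hedged sketch of that alternation with the .NET client (queue names are hypothetical and both queues are assumed to be declared already):

    using RabbitMQ.Client;

    var factory = new ConnectionFactory { HostName = "localhost" };
    using var connection = factory.CreateConnection();
    using var channel = connection.CreateModel();

    // Check the high-priority (requeue) queue first, then the normal queue.
    BasicGetResult result = channel.BasicGet("work-high", autoAck: false)
                         ?? channel.BasicGet("work-normal", autoAck: false);
    if (result != null)
    {
        try
        {
            // ... process result.Body ...
            channel.BasicAck(result.DeliveryTag, multiple: false);
        }
        catch
        {
            // Bail out: republish to the front (high-priority) queue so another
            // process picks it up almost immediately, then settle the original.
            channel.BasicPublish(exchange: "", routingKey: "work-high",
                basicProperties: result.BasicProperties, body: result.Body);
            channel.BasicAck(result.DeliveryTag, multiple: false);
        }
    }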

WCF and MSMQ failure handling

Can someone explain to me the difference between these 3 approaches to processing messages that fail delivery?
Poison Queue service
Dead-Letter Queue service
Using a response service to handle failures
I have "Programming WCF", but I don't really understand when you would use one of these over another, or when it would make sense to use more than one of them. Thanks!
Dead and poison are two different concepts.
Poison messages are messages that can be read from the queue, but that your code doesn't know how to handle, so it throws an exception. If this goes on for some time, you want such a message to be moved to a different queue so your other messages can still be handled. A good approach for this is described on MSDN.
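For reference, a hedged sketch of the kind of configuration that MSDN approach describes, using NetMsmqBinding's poison-handling settings (the values shown are arbitrary):

    using System;
    using System.ServiceModel;

    // Retry a failing message a few times, then move it to the ";poison"
    // subqueue instead of letting it block the main queue forever.
    var binding = new NetMsmqBinding
    {
        ReceiveRetryCount = 3,                     // immediate retries per cycle
        MaxRetryCycles = 2,                        // number of retry cycles
        RetryCycleDelay = TimeSpan.FromMinutes(1), // pause between cycles
        ReceiveErrorHandling = ReceiveErrorHandling.Move
    };

A separate service can then read the poison subqueue (addressed as net.msmq://localhost/private/myQueue;poison, with myQueue a placeholder) to inspect or resubmit those messages.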
A dead letter is a message that never even made it to the queue: the network was broken, the receiving MSMQ computer was shut down, something like that. Windows automatically puts the message on the dead-letter queue after some time, so it's advisable to write a service that monitors the dead-letter queue.
Poison-message and dead-letter queues are used to place messages that have been determined to be undeliverable into a queue that will not try to deliver them anymore. You would do this if you might want to manually look at failed messages and process them at a later point. You use these types of queues when you want to keep bad messages from degrading the performance of your system by being retried over and over again.
A response service, on the other hand, is used to notify the sender that there was an error processing the message. Typically in this case you aren't planning on manually processing the bad message and need to let the system that sent the message know that the request has been rejected.
Note that these aren't exclusive. If you are using queues, there is always the chance that the message serialization might change enough to break messages that are in the queue in which case you might still want to have a dead letter queue even if you are using a response service.
