basic.Nack not being processed - c#

Here is what I am trying to do:
Dequeue a message
Do an action with the message
If the action fails, put the message back in the queue
If the action succeeds, acknowledge the message
My problem right now is that, if the action fails, the message isn't requeued but stays unacknowledged. If I go into the RabbitMQ web management interface, I see the messages flagged as unacknowledged, even though the BasicNack call has executed (I've stepped over it in the debugger).
var delivery = subscription.Next();
var messageBody = delivery.Body;
try
{
    action.Invoke(messageBody);
    subscription.Ack(delivery);
}
catch (Exception ex)
{
    subscription.Model.BasicNack(delivery.DeliveryTag, false, true);
    throw ex;
}
Update:
So I've noticed that messages go from Ready to Unacknowledged really fast, at a rate far faster than I'm actually calling subscription.Next(), as if the .NET client caches all the messages in memory (the memory footprint of my app is actually growing quite fast), processes those messages from memory, and sends the Ack() afterwards, unflagging the message from Unacknowledged.
Update 2:
It seems the queue was being emptied so fast because I hadn't set BasicQos on my Model. The following fixed that, though Basic.Nack() still doesn't seem to work:
Model.BasicQos(0, 1, false)
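For context, here is a minimal sketch (assuming the RabbitMQ.Client.MessagePatterns.Subscription API the code above appears to use; the queue name is a placeholder) of where that QoS call has to sit: it must be issued on the model before the subscription starts consuming, otherwise the client prefetches without limit, which matches the memory growth described in the first update.
Model.BasicQos(0, 1, false);                                    // prefetchSize, prefetchCount, global
var subscription = new Subscription(Model, "my.queue", false);  // noAck: false, so explicit Ack/Nack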

I suspect you're using:
channel.BasicConsume(your_queue_name, false, consumer); to retrieve messages.
I ran several tests with a RabbitMQ 3.2.4 server and client, and I was unable to get either channel.BasicAck(...) or channel.BasicNack(...) to work as expected.
That said, I was able to get the expected Ack | Nack behavior when I used:
BasicGetResult result = channel.BasicGet(your_queue_name, false);
So you may want to consider a different retrieval method to get messages. I realize that Consume and Dequeue are the "preferred" methods, but they weren't working in my case. I wanted fair, one-at-a-time dispatch with acknowledgments, and using BasicGet was the only way I could achieve that.
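To make that concrete, here is a minimal sketch of the BasicGet-based loop (not the original poster's code; your_queue_name and ProcessMessage are placeholders for your own queue and handler):
BasicGetResult result = channel.BasicGet(your_queue_name, false); // noAck: false, so we must Ack/Nack
if (result != null)
{
    try
    {
        ProcessMessage(result.Body);                         // your own handler
        channel.BasicAck(result.DeliveryTag, false);         // acknowledge on success
    }
    catch (Exception)
    {
        channel.BasicNack(result.DeliveryTag, false, true);  // requeue on failure
        throw;
    }
}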
The downside to that approach is you'll possibly lose the client side event iterator you're using with subscription.Next().
If I had to venture a guess, I'd say that something about the local Queue collection is interfering with the channel's ability to provide an acknowledgement. It's also worth pointing out that creating the consumer with new QueueingBasicConsumer(channel); triggers pre-fetching of events from the server's queue. The consumer's Queue is just a SharedQueue&lt;RabbitMQ.Client.Events.BasicDeliverEventArgs&gt;, and SharedQueue is just an implementation of IEnumerable.
Also keep in mind that the same channel that pulls the message needs to provide the Ack | Nack; you cannot Ack | Nack a message from a different channel. Or at least I haven't figured out how to do so, nor have others. That's a problem if you wrap your RabbitMQ objects in using statements (so you don't leave network resources lying around) and you have a long-running process to complete before you can safely acknowledge.
This SO Answer lays out a decent workflow to get around the likely reality that your pulling channel is not going to be the channel that sends the Ack | Nack. The trick is setting a TTL and not bothering with sending a Nack - just let the new message expire and requeue automatically.

Related

Requeue IBM MQ Message

We are running multiple instances of a windows service that reads messages from a Topic, runs a report, then converts the results into a PDF and emails them to a user. In case of exceptions we simply log the exception and move on.
The use case we want to handle is: when the service is shut down, we want to preserve the jobs that are currently running so they can be reprocessed by another instance of the service, or when the service is restarted.
Is there a way of requeueing a message? The hacky solution would be to just republish the message from the consuming service, but there must be another way.
When incoming messages are processed, their data is put into an internal queue structure (not a message queue) and processed in batches by parallel threads, so the IBM MQ transaction mechanism seems hard to use here. Is that what I should be using, though?
Your requirement will be hard to implement unless you get rid of the "internal queue structure (not a message queue)", or at least base it on transaction-oriented middleware. An MQ queue or topic works well for multi-threaded consumers, so it is not apparent what you gain from this intermediate step of moving the data to just another queue. If you start your transaction by consuming the message from MQ, you can have it rolled back when something goes wrong.
If I understood your use case correctly, you can use Durable subscriptions:
Durable subscriptions continue to exist when a subscribing application's connection to the queue manager is closed.
The details are explained in DEFINE SUB (create a durable subscription). Example:
DEFINE QLOCAL(THE.REPORTING.QUEUE) REPLACE DEFPSIST(YES)
DEFINE TOPIC(THE.REPORTING.TOPIC) REPLACE +
TOPICSTR('/Path/To/My/Interesting/Thing') DEFPSIST(YES) DURSUB(YES)
DEFINE SUB(THE.REPORTING.SUB) REPLACE +
TOPICOBJ(THE.REPORTING.TOPIC) DEST(THE.REPORTING.QUEUE)
Your service instances can now consume from THE.REPORTING.QUEUE.
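By way of illustration, here is a rough sketch (not from the original answer; the queue manager name and RunReport are placeholders) of consuming from that queue under syncpoint with the IBM MQ classes for .NET, so that a crash before Commit() puts the message back on the queue:
var qMgr = new MQQueueManager("QM1");  // placeholder queue manager name
var queue = qMgr.AccessQueue("THE.REPORTING.QUEUE", MQC.MQOO_INPUT_AS_Q_DEF);

var gmo = new MQGetMessageOptions();
gmo.Options = MQC.MQGMO_SYNCPOINT | MQC.MQGMO_WAIT;  // get under a unit of work

var message = new MQMessage();
try
{
    queue.Get(message, gmo);
    RunReport(message.ReadString(message.MessageLength));  // your processing
    qMgr.Commit();   // the message is removed from the queue only now
}
catch (Exception)
{
    qMgr.Backout();  // the get is rolled back; the message returns to the queue
    throw;
}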
While I readily admit that my knowledge is shaky, from what I understood from IBM's [sketchy, inadequate, obtuse] documentation there really is no good built-in solution. With transactions, the queue manager assumes all is well unless it receives a rollback request, and when it does, it rolls back to a syncpoint; so if you're trying to roll back one message but two other messages have completed in the meantime, it will roll back all three.
We ended up coding our own solution: updating the way we log messages and marking them as completed in the DB. Then, on both startup and shutdown, we find the uncompleted messages and programmatically publish them back to the queue, limiting the DB search by machine name so that if we have multiple instances of the service running they won't duplicate message processing.

RabbitMq message causes server to crash, leading to infinite retry

We use RabbitMQ to send messages to a server for processing.
We require the server to ack a message. That way if the server happens to die whilst processing the message, we will retry the message when it restarts, or with a different server.
The problem is, on a very rare occasion, we will get a message that deterministically crashes the server. This is because we call into some open source native dlls, those dlls have bugs, and sometimes these dlls just cause the process to crash with no exception. Of course it would be ideal to fix those bugs, but we don't expect to fix all such issues in pdfium or opencv any time soon. We have to reckon with the fact that whatever we do, we will eventually get such a message.
The result of this is that the message is then retried: the server restarts, picks up the message, crashes, and so on ad infinitum. Nothing gets processed until we manually stop the server and purge the message. Not ideal.
What can we do to solve this problem?
What we don't want to do is create another service that monitors the RabbitMQ service, looks for such messages, and purges them, since that just leads to spiralling complexity. Instead we want to deal with this at the RabbitMQ client level. We would be perfectly happy to say that if a message is not processed after 3 attempts, we should just fail it. We could do this by maintaining a database entry of which messages we've processed, but ideally I wouldn't want to involve anything external and would rather contain the solution to this problem in our RabbitMQ client library. I'm not sure how to do this, though.
One method I have used in my event-driven architecture is dead letter exchanges (DLXs), or poison queues: if we see the same message multiple times due to service failure, it is pushed into the DLX instead of being requeued into the original exchange. These messages then trigger a different kind of process in our system to alert us that messages are stuck and failing to process, so we can diagnose and fix the consumer. After a fix has been made, we trigger another process to move the poison messages back into the original exchange to be processed as normal.
In your scenario, because your process crashes, there are two possible options for dealing with these messages:
If the message is marked as redelivered, clone it and add an attempt count to the body or as a header (x-attempt-count). The copy is then added to the back of the queue with the attempt count. When the copy is consumed, you can check whether it has hit the threshold and then move the message into a DLX or store it in a database (see the sketch after this list). The major drawback here is that it breaks the order in which the messages are processed.
Use an external service to keep track of the number of delivery attempts. I would recommend something like Redis/Memcached, where you can increment a counter based on a unique message id. At the start of your process, if the message has been marked as redelivered, look up the counter; if the message has reached the threshold, trigger a different process again, such as moving it into a DLX.
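As a rough illustration of option 1 (the exchange name, the x-attempt-count header, and the threshold are illustrative assumptions, not part of the original answer; uses RabbitMQ.Client and RabbitMQ.Client.Events), something along these lines could run when a delivery arrives with ea.Redelivered set:
void HandleRedelivered(IModel channel, BasicDeliverEventArgs ea, int maxAttempts = 3)
{
    // A previous attempt died before acking; decide whether to retry or give up.
    int attempts = 1;
    if (ea.BasicProperties.Headers != null &&
        ea.BasicProperties.Headers.TryGetValue("x-attempt-count", out var raw))
    {
        attempts = Convert.ToInt32(raw);
    }

    if (attempts >= maxAttempts)
    {
        // Threshold reached: hand the message to a dead letter exchange (or a database) instead.
        channel.BasicPublish("my.dlx", ea.RoutingKey, ea.BasicProperties, ea.Body);
    }
    else
    {
        // Clone the message with an incremented attempt count; the copy joins the back of the queue
        // (republished via the original exchange, assuming it routes back to this queue).
        var props = channel.CreateBasicProperties();
        props.Headers = new Dictionary<string, object> { ["x-attempt-count"] = attempts + 1 };
        channel.BasicPublish(ea.Exchange, ea.RoutingKey, props, ea.Body);
    }

    channel.BasicAck(ea.DeliveryTag, false);  // drop the original delivery either way
}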

Deleting a message from Azure Queue Service by value

I'm using an Azure Storage Queue Service to track a queue of long running jobs which are en-queued by multiple disparate clients, but I now have a requirement to remove messages from the queue if any exist matching some given criteria.
I realise that this is somewhat anti-pattern for a queue, but the service does provide some functionality in addition to simple queueing (such as Delete Message and Peek Messages) so I thought I'd try to implement it.
The solution I've come up with works, but is not very elegant and is quite inefficient - and I'm wondering if it can be done better - or if I should bin the whole approach and use a mechanism which supports this requirement by design (which would require a fair amount of work across different systems). Here is the simplified code:
var queue = MethodThatGetsTheAppropriateQueueReference();
await queue.FetchAttributesAsync(); // populates the current queue length
if (queue.ApproximateMessageCount.HasValue)
{
    // Get all messages and find any messages to be removed.
    // This makes those messages unavailable to other clients
    // for the visibilityTimeOut period.
    // I've set this to the minimum of 1 second - not ideal though.
    var messages = await queue.GetMessagesAsync(queue.ApproximateMessageCount.Value);
    var messagesToDelete = messages.Where(x => x.AsString.Contains(someGuid));

    // Delete applicable messages
    messagesToDelete.ToList().ForEach(x => queue.DeleteMessageAsync(x));
}
Note: originally I tried using PeekMessagesAsync() to avoid affecting messages which do not need to be deleted, but this does not give you a PopReceipt, which is required by DeleteMessageAsync().
The questions:
Is there a way to do this without pulling ALL of the messages down? (there could be quite a few)
If 1 isn't possible, is there a way to get the PopReceipt for a message if we use PeekMessagesAsync()?
Is there a way to do this without pulling ALL of the messages down?
(there could be quite a few)
Unfortunately no. You have to Get messages (a maximum of 32 at a time) and analyze the contents of the messages to determine if the message should be deleted.
If 1 isn't possible, is there a way to get the PopReceipt for a message
if we use PeekMessagesAsync()?
Again, no. In order to get a PopReceipt, a message must be dequeued, which is only possible via GetMessagesAsync(). PeekMessagesAsync() simply returns the message without altering its visibility.
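As a sketch of what that batched scan could look like with the same CloudQueue client as in the question (someGuid is the question's placeholder; nothing here is a library-provided shortcut):
const int BatchSize = 32;  // per-call maximum for GetMessagesAsync
List<CloudQueueMessage> batch;
do
{
    // Dequeue up to 32 messages; they stay invisible to other clients for the visibility timeout.
    batch = (await queue.GetMessagesAsync(BatchSize)).ToList();

    foreach (var msg in batch)
    {
        if (msg.AsString.Contains(someGuid))
        {
            await queue.DeleteMessageAsync(msg);  // we hold the pop receipt because we dequeued it
        }
        // Non-matching messages reappear on their own once the visibility timeout expires.
    }
} while (batch.Count > 0);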
Possible Solution
You may want to look into Service Bus Topics and Subscriptions for this kind of functionality.
What you could do is create a topic where all messages will be sent.
Then you would create two subscriptions: in one subscription you set a rule that checks the message contents for the matching value, and in the other subscription you set a rule that checks for a non-matching value.
What Azure Service Bus will do is check each message that arrives against the rules and push it into the appropriate subscription accordingly. This way you have a nice separation between messages that should and shouldn't be deleted.
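A rough sketch of that setup with the older Microsoft.ServiceBus.Messaging management API (topic and subscription names are illustrative, and it assumes the value you match on travels as a user property, since rules filter on message properties rather than the raw body):
var ns = NamespaceManager.CreateFromConnectionString(connectionString);
if (!ns.TopicExists("jobs")) ns.CreateTopic("jobs");

// Messages whose JobId property matches go to one subscription...
ns.CreateSubscription("jobs", "matching", new SqlFilter($"JobId = '{someGuid}'"));

// ...and everything else goes to the other.
ns.CreateSubscription("jobs", "everything-else", new SqlFilter($"JobId <> '{someGuid}'"));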

RabbitMQ message redelivery

Say I have a connection to RabbitMQ, and I've pulled 1000 messages but have not yet acked them, as they are being processed by a single thread out of a BlockingCollection.
Now suppose my connection dies and is auto-recovered. At this point all of these messages on the server will be requeued for delivery, but I still have copies of them locally, with the old delivery tags.
This leads me to believe I should handle connection or channel down events by clearing my local queue out.
Can you confirm this is true?
Yes that is the case. Those messages will be redelivered.
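For instance, a minimal sketch (names are illustrative) of dropping the locally buffered deliveries when the connection drops, since their delivery tags are useless after recovery:
connection.ConnectionShutdown += (sender, args) =>
{
    // Drain the local BlockingCollection; the broker will redeliver these messages anyway.
    while (localDeliveries.TryTake(out _)) { }
};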
So in addition to clearing out your locally queued messages, you might want to reconsider your prefetch so that you don't have so many messages queued locally.
Is your strategy to pull 1000, process them all, then finally ack them all? I can see that for performance reasons you might do this so you can send a single ack with multiple=true, but it does introduce extra redelivery and duplicate-processing risk.
You are right. If you are processing one message at a time, you can set the prefetch count to 1, and then you may not need to clear any messages locally either.

azure service bus not receiving all messages (only ~65%)

This is the closest previous question I could find: Azure Service Bus Subscription OnMessage not receiving messages.
The same thing happens to me too. When I change the name of the topic, it works for a while again; then that service bus topic is corrupt again, with only 65-71% of messages arriving. It doesn't help to delete the subscription, nor the topic. The topic name seems to become polluted somehow after a while. It is really, really bad, because I have no way of telling when the topic is corrupt, except that the system doesn't work like it should when messages don't arrive. Creating a new topic with a new name randomly now and then seems like an utterly bad solution.
I'm testing it through a loop in one process, sending the messages, then a loop in another process, receiving and counting. With a new topic name it works perfectly. And I know I only have one listener on the subscription, and it's a peek lock, requiring the message to be completed.
Anyone? How can I solve this?
UPDATE:
There's a gotcha to be found here. I've had 1 subscription created and maintained a connection to it, 1 topic created, and the bus recreated 10 times, sending 100 messages every time. No messages lost.
I have had 1 subscription created, and a new subscription client created every time the bus is recreated and 100 messages sent. Losing 50% of the messages.
It seems like the topic is aware of the previous subscription client and the messages are dealt out between the two?
NEW QUESTIONS:
I'm trying to wrap my head around how to handle this. Can anyone confirm that restarting the process, and thereby creating a new subscription client with the same subscription name, would make the topic deal out the messages between the first and the second subscription client, even though the first is no longer there?
Since I'm trying to handle faults by restarting my subscription module, i.e. going through the steps of checking whether the topic exists, whether the subscription exists, and then creating the subscription client, I'm struggling to understand how I could avoid the behaviour described above, and avoid messages being dealt out to the non-existent subscriber as well.
Suggestion for solution, atm:
Keep track of old subscriptions, and if I have to restart the process, create a new subscription?
That leaves a window, between the process going down and the new subscription being created, where messages will be pumped out only to the "dead" subscription. Those messages will be lost.
But at least any messages after that will be received by the new subscription.
Man.. this problem must have been dealt with before. I'm not doing it right. Would highly appreciate some guidance here.
SOLUTION:
It's all about the right tool for the job. This situation calls for a Queue, not a Pub/Sub.
Everything is solved. I'm doing the same tests as above, but with a queue instead, and of course, since it's decided client-side who receives a message, there is no problem with previous (dead) subscription clients taking messages from new ones. Only one queue client is alive at a time, so only that one can take messages off the queue.
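For completeness, a hedged sketch of the queue-based receive loop with the older Microsoft.ServiceBus.Messaging API (names and the handler are illustrative, not from the original post):
var client = QueueClient.CreateFromConnectionString(connectionString, "jobs");
client.OnMessage(message =>
{
    Process(message.GetBody<string>());  // placeholder for your own handler
    message.Complete();                  // peek-lock: complete only after successful processing
}, new OnMessageOptions { AutoComplete = false });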
