Move message to 'deadletter' in Azure Service Bus - C#

I have implemented an exponential backoff retry. Basically, if there is any exception I clone the message and re-submit it to the queue with some delay.
Now I am facing two issues:
1) The delivery count does not increase when I clone the message and resubmit it to the queue.
2) I want to move the message to the dead-letter queue once the max delivery count is reached.
Code:
catch (Exception ex)
{
    _logger.Error(ex, $"Failed to process request {requestId}");

    var clone = messageResult.Message.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(45);
    await messageResult.ResendMessage(clone);

    if (retryCount == MaxAttempts)
    {
        //messageResult.dea
    }

    return new PdfResponse { Error = ex.ToString() };
}
Please help me with this.

When you clone a message it becomes a new message; the system properties are not cloned, which gives the cloned message a fresh delivery count starting at 1 again. See also https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.servicebus.message.clone?view=azure-dotnet
You can look into the PeekLock feature of Azure Service Bus. When using PeekLock the message becomes invisible on the queue until you explicitly abandon it (put it back on the queue with the delivery count increased) or complete it when processing works out as expected. Another option is to explicitly dead-letter the message.
The feature is documented here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock
The important thing is that if you do not perform any of the above-mentioned actions (for example because you cloned the message instead), Azure Service Bus will automatically make the message visible again after the configured interval (the LockDuration property) expires, or immediately when you abandon it. The delivery count is then increased by Azure Service Bus as expected.
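To tie this back to the original question, here is a minimal sketch of explicit settlement, assuming the message is received through a Microsoft.Azure.ServiceBus MessageReceiver in PeekLock mode; HandleAsync, ProcessAsync and maxAttempts are placeholder names, not part of the original code:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

public static class MessageSettlementSketch
{
    public static async Task HandleAsync(MessageReceiver receiver, Message message, int maxAttempts)
    {
        try
        {
            await ProcessAsync(message);   // your own processing
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
        catch (Exception ex)
        {
            if (message.SystemProperties.DeliveryCount >= maxAttempts)
            {
                // Give up: move the message to the dead-letter queue with a reason.
                await receiver.DeadLetterAsync(
                    message.SystemProperties.LockToken,
                    deadLetterReason: "MaxDeliveryCountReached",
                    deadLetterErrorDescription: ex.Message);
            }
            else
            {
                // Put it back on the queue; Service Bus increments DeliveryCount itself.
                await receiver.AbandonAsync(message.SystemProperties.LockToken);
            }
        }
    }

    static Task ProcessAsync(Message message) => Task.CompletedTask;   // placeholder for the real work
}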
So to get delayed retries and dead-letter behaviour (when the maximum delivery count has been reached) you can use the following options:
Option 1. Retry via Azure Service Bus auto-unlock
When processing of the message cannot be performed at the moment for some reason, catch the exception and make sure none of the mentioned actions (abandon, complete or dead-letter) are performed. This keeps the message invisible for the remaining lock time and makes it visible again once the configured lock duration has elapsed. The delivery count is also increased by Azure Service Bus as expected.
Option 2. Implement your own retry policy
Implement your own retry policy in your code and retry processing of the message there. If the maximum number of retries has been reached, abandon the message, which makes it visible again for the next read from the queue. In this case the delivery count is increased as well.
Note: If you choose option 2, make sure your retry period fits within the configured LockDuration so that the message does not become visible again on the queue while you are still processing it. You can also renew the lock between retries by calling the RenewLock() method on the message.
If you implement the retry policy in your code I recommend using Polly for .NET, which already gives you features such as Retry and Circuit Breaker policies; a sketch follows below. See https://github.com/App-vNext/Polly
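A minimal sketch of option 2 with Polly, assuming the same Microsoft.Azure.ServiceBus message type; InProcessRetry and ProcessAsync are placeholder names and the retry count and delays are examples only:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Polly;

public static class InProcessRetry
{
    public static Task ProcessWithRetriesAsync(Message message)
    {
        // Retry the in-process work up to 3 times with exponential backoff
        // (2, 4, 8 seconds) while the peek-lock is still held.
        var retryPolicy = Policy
            .Handle<Exception>()
            .WaitAndRetryAsync(
                retryCount: 3,
                sleepDurationProvider: attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

        // If this still throws after all retries, abandon or dead-letter the
        // message in the caller (see the settlement sketch above).
        return retryPolicy.ExecuteAsync(() => ProcessAsync(message));
    }

    static Task ProcessAsync(Message message) => Task.CompletedTask;   // placeholder for the real work
}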


Azure Service Bus - Leave message

(FYI - I am new to ASB)
A couple of questions around Azure Service Bus:
How do you get a message from a Queue but leave it there until its TTL expires? I would have thought simply not calling CompleteMessageAsync would do just that, but the message appears to get removed regardless.
How do you get a message from a Queue, but only dequeue (remove) it when it is received by a specific receiver?
Message.ApplicationProperties["ReceiverId"].ToString() == "123"
// now you can remove it
Thanks
How do you get a message from a Queue but leave it there until its TTL expires?
You can peek at messages rather than receive them. The problem is that the message will be picked up again and again until the delivery count exceeds the maximum and the message is dead-lettered, which you don't want to happen. I would review what you're trying to achieve here, as it's a contradictory setup: you want the message to have a TTL in anticipation that it's not picked up, but you also want to keep probing it until the TTL expires.
How do you get a message from a Queue, but only dequeue (remove) it when it is received by a specific receiver?
My advice is: don't use a queue for that. If you target a specific destination, express it with your entity topology. For example, publish a message on a topic and have different subscriptions based on the subscriber identification (a sketch is shown below). That way you can have messages for specific subscribers, where a logical subscriber can be scaled out.
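A minimal sketch of that topology, assuming the Azure.Messaging.ServiceBus and Azure.Messaging.ServiceBus.Administration packages; the topic, subscription, rule and property names here are placeholders, not taken from the question:
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

public static class TargetedPublishingSketch
{
    public static async Task SetUpAndPublishAsync(string connectionString)
    {
        // One subscription per logical receiver, filtered on an application property.
        var adminClient = new ServiceBusAdministrationClient(connectionString);
        var filter = new CorrelationRuleFilter();
        filter.ApplicationProperties["ReceiverId"] = "123";

        await adminClient.CreateSubscriptionAsync(
            new CreateSubscriptionOptions("work-topic", "receiver-123"),
            new CreateRuleOptions("ReceiverIdRule", filter));

        // The publisher stamps each message with the intended receiver;
        // only the matching subscription receives a copy.
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("work-topic");

        var message = new ServiceBusMessage("payload");
        message.ApplicationProperties["ReceiverId"] = "123";
        await sender.SendMessageAsync(message);
    }
}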
1- Use PeekMessages:
You can peek at the messages in the queue without removing them from
the queue by calling the PeekMessages method. If you don't pass a
value for the maxMessages parameter, the default is to peek at one
message.
//-------------------------------------------------
// Peek at a message in the queue
//-------------------------------------------------
public void PeekMessage(string queueName)
{
    // Get the connection string from app settings
    string connectionString = ConfigurationManager.AppSettings["StorageConnectionString"];

    // Instantiate a QueueClient which will be used to manipulate the queue
    QueueClient queueClient = new QueueClient(connectionString, queueName);

    if (queueClient.Exists())
    {
        // Peek at the next message
        PeekedMessage[] peekedMessage = queueClient.PeekMessages();

        // Display the message
        Console.WriteLine($"Peeked message: '{peekedMessage[0].Body}'");
    }
}
https://learn.microsoft.com/en-us/azure/storage/queues/storage-dotnet-how-to-use-queues?tabs=dotnet
2- You can also receive the message (in peek-lock mode), check for the property you want (ReceiverId), and in case it's the right one, complete the message:
// ServiceBusReceiver
await receiver.CompleteMessageAsync(receivedMessage);
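A fuller sketch of that flow, assuming Azure.Messaging.ServiceBus with a peek-lock receiver; the queue name and the "123" receiver id are placeholders:
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class SelectiveReceiveSketch
{
    public static async Task ReceiveForReceiverAsync(string connectionString)
    {
        await using var client = new ServiceBusClient(connectionString);
        ServiceBusReceiver receiver = client.CreateReceiver(
            "myqueue",
            new ServiceBusReceiverOptions { ReceiveMode = ServiceBusReceiveMode.PeekLock });

        ServiceBusReceivedMessage message = await receiver.ReceiveMessageAsync();
        if (message == null)
            return;

        if (message.ApplicationProperties.TryGetValue("ReceiverId", out var receiverId)
            && receiverId?.ToString() == "123")
        {
            // This receiver owns the message: remove it from the queue.
            await receiver.CompleteMessageAsync(message);
        }
        else
        {
            // Not ours: release the lock so another receiver can pick it up.
            await receiver.AbandonMessageAsync(message);
        }
    }
}
Note that abandoning increments the delivery count, so, as the first answer points out, a message that no receiver ever claims will eventually be dead-lettered.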

MessageLockLostException: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue

I am trying to consume a message from a queue using a Service Bus queue trigger and do some job which will take some time to complete. I don't want another processor to pick up the message while I am processing it. I have the following configuration in host.json. When I try to complete the message at await receiver.CompleteAsync(lockToken);
I get the exception "The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue."
"serviceBus": {
"prefetchCount": 1,
"autoRenewTimeout": "00:05:00",
"messageHandlerOptions": {
"autoComplete": false,
"maxConcurrentCalls": 1,
"maxAutoRenewDuration": "00:04:00"
}
}
The code from the Azure Function is below:
public static void Run([ServiceBusTrigger("testqueue", Connection = "AzureServiceBus.ConnectionString")] Message message, MessageReceiver messageReceiver, ILogger log)
{
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {messageReceiver.ClientId}");
    log.LogInformation($"Message={Encoding.UTF8.GetString(message.Body)}");

    string lockToken = message.SystemProperties.LockToken;
    log.LogInformation($"Processing Message:={Encoding.UTF8.GetString(message.Body)}");

    DoSomeJob(messageReceiver, lockToken, log);
}

public static async void DoSomeJob(MessageReceiver receiver, string lockToken, ILogger log)
{
    try
    {
        await Task.Delay(360000);
        await receiver.CompleteAsync(lockToken);
    }
    catch (Exception ex)
    {
        log.LogInformation($"Error In Job={ex}");
    }
}
When you configure an Azure Function triggered by Azure Service Bus with maxAutoRenewDuration set to 10 minutes, you're asking the trigger to extend the lock for up to 10 minutes. This is not a guaranteed operation, as it's initiated by the client side and the maximum single lock period is 5 minutes. Given that, an operation to extend the lock can fail and the lock will be released, causing another instance of your function to process the message concurrently while the original processing is still happening.
Another aspect to look at is the prefetchCount, which is set to 100, and maxConcurrentCalls, which is set to 32. That means you're fetching up to 100 messages and processing up to 32 of them at a time. I don't know if the actual Function code runs longer than 50 seconds (in your example), but prefetched message locks are not auto-renewed. Therefore, if the prefetched messages are not processed within the queue's MaxLockDuration (which by default is less than 5 minutes), some of those prefetched messages will go through processing, optional lock renewal, and completion long after they've lost their lock.
I would recommend:
Check that the MaxLockDuration is not too short to accommodate your prefetch and concurrency.
Update prefetchCount to ensure you don't over-fetch.
If processing a single message can be done within 5 minutes or less, prefer that over auto-renewal; if the work really must run longer, see the lock-renewal sketch below.
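If the job genuinely cannot fit inside the lock duration, one option (a sketch only, not part of the original answer, using placeholder names such as DoWorkAsync and a 2-minute renewal interval) is to await the work in the trigger - note that the original DoSomeJob is async void and never awaited - and renew the lock explicitly while it runs:
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

public static class LongRunningProcessingSketch
{
    public static async Task ProcessAsync(MessageReceiver receiver, Message message)
    {
        using var cts = new CancellationTokenSource();

        // Renew the peek-lock every 2 minutes while the real work runs.
        Task renewal = RenewLockLoopAsync(receiver, message, TimeSpan.FromMinutes(2), cts.Token);

        try
        {
            await DoWorkAsync(message);   // the long-running job
            await receiver.CompleteAsync(message.SystemProperties.LockToken);
        }
        finally
        {
            cts.Cancel();                    // stop renewing
            await Task.WhenAny(renewal);     // wait for the loop to stop without rethrowing
        }
    }

    static async Task RenewLockLoopAsync(MessageReceiver receiver, Message message, TimeSpan interval, CancellationToken token)
    {
        try
        {
            while (!token.IsCancellationRequested)
            {
                await Task.Delay(interval, token);
                await receiver.RenewLockAsync(message);   // extend the lock before it expires
            }
        }
        catch (TaskCanceledException)
        {
            // Processing finished; nothing more to renew.
        }
    }

    static Task DoWorkAsync(Message message) => Task.Delay(TimeSpan.FromMinutes(6));   // placeholder
}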

Apache NMS using ActiveMQ: How do I use transactional acknowledgement mode but still acknowledge/roll back a single message every time?

I use Apache NMS (in C#) to receive messages from ActiveMQ.
I want to be able to acknowledge every message I receive, or roll back a message in case of an error.
I solved the first part by using CreateSession(AcknowledgementMode.IndividualAcknowledge) and then calling message.Acknowledge() for every received message.
The problem is that in this mode there is no rollback option. If a message is not acknowledged, I can never receive it again for another try. It can only be sent to another consumer, but there isn't another consumer, so it is just stuck in the queue.
So I tried to use AcknowledgementMode.Transactional instead, but there is another problem: I can only use session.Commit() or session.Rollback(), and there is no way to know which specific message I commit or roll back.
What is the correct way to do this?
Stay with INDIVIDUAL_ACKNOWLEDGE and then try session.Recover() and session.Close(). Both of those should signal to the broker that the messages are not going to be acknowledged.
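A minimal sketch of that pattern, assuming the Apache.NMS interfaces; connection, destination and the Process helper are placeholders assumed to exist in your own code:
using System;
using Apache.NMS;

public static class IndividualAckSketch
{
    public static void ConsumeOne(IConnection connection, IDestination destination)
    {
        using (ISession session = connection.CreateSession(AcknowledgementMode.IndividualAcknowledge))
        using (IMessageConsumer consumer = session.CreateConsumer(destination))
        {
            IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
            if (message == null)
                return;

            try
            {
                Process(message);        // your own processing
                message.Acknowledge();   // settle just this one message
            }
            catch (Exception)
            {
                // Tell the broker this message will not be acknowledged,
                // so it can be redelivered (subject to the redelivery policy).
                session.Recover();
            }
        }
    }

    static void Process(IMessage message) { }   // placeholder
}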
My solution to this was to throw an exception if (for any reason, for example an exception from the db SaveChanges event) I did not want to acknowledge the message with message.Acknowledge().
When you throw an exception inside your extended IMessageConsumer Listener method, the message will be redelivered to your consumer about 5 times (it will then be moved to the default DLQ queue for investigation).
However, you can change this using the RedeliveryPolicy on the connection object.
Example of a RedeliveryPolicy:
var redeliveryPolicy = new RedeliveryPolicy
{
    InitialRedeliveryDelay = 5000,   // every 5 secs
    MaximumRedeliveries = 10,        // the message will be redelivered 10 times
    UseCollisionAvoidance = true,    // use this to randomize the 5 secs
    CollisionAvoidancePercent = 50,  // used along with the option above
    UseExponentialBackOff = false
};
If the message fails again (after 10 times) it will be moved to the default DLQ queue (this queue is created automatically).
You can use this queue to investigate the messages that have not been acknowledged, using another consumer.

Determine if message will be retried from observer context in MassTransit 3

I would like to track the number of message retries and redeliveries that occur while using MassTransit 3. I have both retries and redeliveries configured:
config.UseDelayedRedelivery(r => r.Immediate(2));
config.UseRetry(r => r.Immediate(3));
I have set up an IConsumeObserver and an IReceiveObserver as described here, and I can inspect the ConsumeContext/ReceiveContext in PostConsume<T>(ConsumeContext<T> context)/PostReceive(ReceiveContext context).
But when inspecting the contexts I cannot see a difference between the context for a message that was consumed without exception and one that threw an exception during consumption and will be redelivered.
How can I, in the PostConsume method of an IConsumeObserver or IReceiveObserver, determine whether the context represents a message that will be redelivered or one that completed successfully?
You can do it. MassTransit keeps the redelivery count in the message headers; otherwise it wouldn't know when to stop redelivering according to your policy.
If this line returns a non-zero value (or not null, I am not sure), you are dealing with a redelivered message:
context.Headers.Get(MessageHeaders.RedeliveryCount, default(int?)));
If your message is being retried (not redelivered), check this answer from Chris: Get MassTransit message retries amount
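A sketch of how such an observer might look, assuming MassTransit 3 and the header mentioned above; RedeliveryTrackingObserver is a made-up name, not from the question:
using System;
using System.Threading.Tasks;
using MassTransit;

public class RedeliveryTrackingObserver : IConsumeObserver
{
    public Task PreConsume<T>(ConsumeContext<T> context) where T : class
        => Task.CompletedTask;

    public Task PostConsume<T>(ConsumeContext<T> context) where T : class
    {
        // Non-null and greater than zero when the message has been redelivered.
        int? redeliveryCount = context.Headers.Get(MessageHeaders.RedeliveryCount, default(int?));
        if (redeliveryCount > 0)
            Console.WriteLine($"Message {context.MessageId} was redelivered {redeliveryCount} time(s)");

        return Task.CompletedTask;
    }

    public Task ConsumeFault<T>(ConsumeContext<T> context, Exception exception) where T : class
        => Task.CompletedTask;
}

// Wiring it up on the bus (hypothetical 'bus' variable):
// bus.ConnectConsumeObserver(new RedeliveryTrackingObserver());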
The consumer can influence whether or not a message will be redelivered, but it doesn't have full control or knowledge of it.
For example, everything may succeed on the consuming side but just take too long; the publisher will retry, and the consumer has no simple way to know that this will happen.
It's often best to design your application so that consuming the same message multiple times has the same effect as consuming it once.
Additionally, you can check the MessageId when consuming a message if you want to see whether you've consumed it before.
The ConsumeContext also has a RetryCount, but I don't believe it's incremented until the next time the consumer runs.

Azure Storage Queue - processing messages on poison queue

I've been using Azure Storage Queues to post messages to, and then writing the messages to a db table. However, I've noticed that when an error occurs while processing a message on the queue, the message is written to a poison queue.
Here is some background to the setup of my app:
Azure Web App -> Writes message to the queue
Azure function -> Queue trigger processes the message and writes the contents to a db
There was an issue with the db schema which caused the INSERTs to fail. Each message was retried 5 times, which I believe is the default for retrying queue messages, and after the 5th attempt the message was placed on the poison queue.
The db schema was subsequently fixed, but now I have no way of processing the messages on the poison queue.
My question is: can we recover messages written to the poison queue in order to process them and INSERT them into the db, and if so, how?
For your particular problem, I would recommend the solution mentioned in the question part of this post: Azure: How to move messages from poison queue to back to main queue?
Please note that the name of the poison queue is $"{queueName}-poison".
In my current project I've created what I call "support functions" in the Function App. They expose a special HTTP endpoint with Admin authorization level that can be executed at any time.
See the code below, which solves the problem of reprocessing messages from the poison queue:
public static class QueueOperations
{
    [FunctionName("Support_ReprocessPoisonQueueMessages")]
    public static async Task<IActionResult> Support_ReprocessPoisonQueueMessages(
        [HttpTrigger(AuthorizationLevel.Admin, "put", Route = "support/reprocessQueueMessages/{queueName}")] HttpRequest req, ILogger log,
        [Queue("{queueName}")] CloudQueue queue,
        [Queue("{queueName}-poison")] CloudQueue poisonQueue, string queueName)
    {
        log.LogInformation("Support_ReprocessPoisonQueueMessages function processed a request.");

        int.TryParse(req.Query["messageCount"], out var messageCountParameter);
        var messageCount = messageCountParameter == 0 ? 10 : messageCountParameter;

        var processedMessages = 0;
        while (processedMessages < messageCount)
        {
            var message = await poisonQueue.GetMessageAsync();
            if (message == null)
                break;

            var messageId = message.Id;
            var popReceipt = message.PopReceipt;

            await queue.AddMessageAsync(message); // a new Id and PopReceipt is assigned
            await poisonQueue.DeleteMessageAsync(messageId, popReceipt);
            processedMessages++;
        }

        return new OkObjectResult($"Reprocessed {processedMessages} messages from the {poisonQueue.Name} queue.");
    }
}
Alternatively, it may be a good idea to create a new message with additional metadata (such as the information that the message has already been processed unsuccessfully in the past), which could then be sent to a dead-letter queue.
You have two options:
Add another function that is triggered by messages added to the poison queue. You can try adding the contents to the db in this function. More details on this approach can be found here. Of course, if this function also fails to process the message, you could check the dequeue count and post a notification that manual intervention is needed.
Add an int 'dequeueCount' parameter to the function processing the queue and, after say 5 retries, log the failure instead of letting the message go to the poison queue. For example, you can send an email to notify that manual intervention is required (a sketch of this is shown below).
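A minimal sketch of the second option, assuming an in-process C# Azure Function with the Storage queue trigger, which can bind the message's dequeue count by parameter name; the queue name "orders" and the threshold of 5 are placeholders:
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class OrdersQueueFunction
{
    [FunctionName("ProcessOrdersQueueMessage")]
    public static void Run(
        [QueueTrigger("orders")] string message,
        int dequeueCount,                       // bound from the queue message metadata
        ILogger log)
    {
        if (dequeueCount >= 5)
        {
            // Give up before the runtime moves the message to orders-poison:
            // log it (or send a notification) so someone can intervene manually.
            log.LogError("Message failed {Count} times and needs manual intervention: {Body}", dequeueCount, message);
            return;
        }

        // Normal processing (for example, the INSERT into the db) goes here.
    }
}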
You can use Azure Management Studio (Cerulean) to move the message from the poison queue back to the actual queue. It's a highly recommended tool for accessing queues and blobs and for any production-related activity. https://www.cerebrata.com/products/cerulean
I am just a user of the tool and in no way affiliated; I recommend it because it is very powerful, very useful and makes you very productive.
Click on Move and the message can be moved back to the original queue.
Just point your Azure Function at the poison queue and the items in that poison queue will be handled. More details here: https://briancaos.wordpress.com/2018/05/03/azure-functions-how-to-retry-messages-in-the-poison-queue/
Azure Storage Explorer (version 1.15.0 and above) has added support for moving messages from one queue to another. This makes it possible to move all, or a selected set of, messages from the poison queue back to the original queue.
https://github.com/microsoft/AzureStorageExplorer/issues/1064
