I followed Microsoft's documentation to set this up (using C#), but messages keep expiring because of the Service Bus queue's lock duration.
isSessionsEnabled is false. My settings in host.json are as follows, for reference:
{
  "version": "2.0",
  "functionTimeout": "00:10:00",
  "extensions": {
    "serviceBus": {
      "prefetchCount": 100,
      "messageHandlerOptions": {
        "autoComplete": true,
        "maxConcurrentCalls": 32,
        // tried "00:00:55", "00:02:30", "00:05:00"
        "maxAutoRenewDuration": "00:10:00"
      }
    }
  }
}
I also tried removing the extensions section from host.json entirely (according to the documentation, the lock should then be auto-renewed), but it still does not work.
For reference, I found it mentioned elsewhere that Microsoft's documentation may have an error here, but no possible solutions were given.
There is no need to renew the lock manually, as it is controlled by the Functions runtime.
The Function can renew the message lock by itself.
If you had set isSessionsEnabled to true, sessionHandlerOptions would be honored. Since you have set isSessionsEnabled to false, messageHandlerOptions is honored.
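To illustrate where the two option blocks live in host.json (a minimal sketch; the values shown are placeholder assumptions, not recommendations):
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        // honored when isSessionsEnabled is false
        "autoComplete": true,
        "maxConcurrentCalls": 32,
        "maxAutoRenewDuration": "00:05:00"
      },
      "sessionHandlerOptions": {
        // honored when isSessionsEnabled is true
        "autoComplete": false,
        "maxConcurrentSessions": 16,
        "maxAutoRenewDuration": "00:05:00"
      }
    }
  }
}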
The PeekLock behavior:
When the Functions runtime receives a message in peek-lock mode, it calls Complete on the message if the function finishes successfully, or calls Abandon if the function fails. If the function runs longer than the PeekLock timeout, the lock is automatically renewed for as long as the function is running.
The maxAutoRenewDuration is configurable in host.json and maps to OnMessageOptions.MaxAutoRenewDuration.
Based on the documentation, the maximum allowed lock duration is 5 minutes, whereas you can increase the Functions run-time limit from its default of 5 minutes to 10 minutes. For Service Bus functions you wouldn't want to do that, because you'd exceed the Service Bus renewal limit.
Please refer to this for further information about message expiration.
Can you share the signature of your function? I believe maxAutoRenewDuration only applies to functions receiving a single message, i.e. it won't work for functions that receive an array/list of messages.
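For illustration, a sketch of the two signature styles, assuming the Microsoft.Azure.ServiceBus types used elsewhere in this thread (the queue name and connection setting are placeholders):
using System.Text;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

// single-message signature: one message per invocation
[FunctionName("ProcessSingle")]
public static void RunSingle(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message message,
    ILogger log)
{
    log.LogInformation($"Single: {Encoding.UTF8.GetString(message.Body)}");
}

// batch signature: one invocation receives an array of messages
[FunctionName("ProcessBatch")]
public static void RunBatch(
    [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] Message[] messages,
    ILogger log)
{
    foreach (var message in messages)
        log.LogInformation($"Batch: {Encoding.UTF8.GetString(message.Body)}");
}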
I have an Azure WebJob with a TimerTrigger.
We may need to stop the WebJob in one way or another (for instance, changing an appSetting aborts the WebJob).
The WebJob is defined with a TimerTrigger in Azure. There are storage accounts for AzureWebJobsDashboard and AzureWebJobsStorage in the WebJob's appSettings. The WebJob runs at a certain time of day, and an execution may last 5 or 6 hours, with a massive number of writes (around 13,000) to a storage account and a storage table.
The question is:
When the WebJob restarts, there is often an unscheduled invocation with UnscheduledInvocationReason: IsPastDue, OriginalSchedule. I would like to avoid that, so that the next execution of the WebJob happens according to the cron expression of the TimerTrigger.
Is this possible, and if so, how?
Any ideas?
Regards.
If I understand the problem correctly, you could use RunOnStartup = false (the default) and UseMonitor = false.
Timer trigger for Azure Functions
RunOnStartup: If true, the function is invoked when the runtime starts. For example, the runtime starts when the function app wakes up after going idle due to inactivity, when the function app restarts due to function changes, and when the function app scales out. So RunOnStartup should rarely, if ever, be set to true, especially in production.
UseMonitor: Set to true or false to indicate whether the schedule should be monitored. Schedule monitoring persists schedule occurrences, helping to ensure the schedule is maintained correctly even when function app instances restart. If not set explicitly, the default is true for schedules that have a recurrence interval greater than or equal to 1 minute, and false for schedules that trigger more than once per minute.
There are a few ways to set these values; one is directly on the attribute itself:
[FunctionName("MyLovelyHorseFunction")]
public static void Run(
[TimerTrigger(
"0 */5 * * * *",
RunOnStartup = false,
UseMonitor = false)]
TimerInfo myTimer,
ILogger log)
{
...
}
Also note that you can explicitly check for a past-due invocation (which may or may not help, depending on your needs):
if (myTimer.IsPastDue)
{
    ...
}
I am trying to consume a message from a queue using a Service Bus queue trigger and do some work that takes a while to complete. I don't want another processor to pick up the message while I am processing it. I have the following configuration in host.json. When I try to complete the message with await receiver.CompleteAsync(lockToken); I get an exception: "The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue."
"serviceBus": {
"prefetchCount": 1,
"autoRenewTimeout": "00:05:00",
"messageHandlerOptions": {
"autoComplete": false,
"maxConcurrentCalls": 1,
"maxAutoRenewDuration": "00:04:00"
}
}
The Azure Function code is below:
public static void Run(
    [ServiceBusTrigger("testqueue", Connection = "AzureServiceBus.ConnectionString")] Message message,
    MessageReceiver messageReceiver,
    ILogger log)
{
    log.LogInformation($"C# ServiceBus queue trigger function processed message: {messageReceiver.ClientId}");
    log.LogInformation($"Message={Encoding.UTF8.GetString(message.Body)}");
    string lockToken = message.SystemProperties.LockToken;
    log.LogInformation($"Processing Message:={Encoding.UTF8.GetString(message.Body)}");
    DoSomeJob(messageReceiver, lockToken, log);
}
public static async void DoSomeJob(MessageReceiver receiver, string lockToken, ILogger log)
{
    try
    {
        await Task.Delay(360000);
        await receiver.CompleteAsync(lockToken);
    }
    catch (Exception ex)
    {
        log.LogInformation($"Error In Job={ex}");
    }
}
When you configure an Azure Function triggered by Azure Service Bus with maxAutoRenewDuration set to 10 minutes, you're asking the trigger to keep extending the lock for up to 10 minutes. This is not a guaranteed operation, as it is initiated client-side, and the maximum single lock period is 5 minutes. Given that, an operation to extend the lock can fail, the lock will be released, and another instance of your function may process the message concurrently while the original processing is still happening.
Another aspect to look at is prefetchCount, which is set to 100, and maxConcurrentCalls, which is set to 32. That means you're fetching up to 100 messages but processing at most 32 of them at a time. I don't know whether your actual function code runs longer than 50 seconds (in your example), but prefetched message locks are not auto-renewed. Therefore, if the prefetched messages are not processed within the queue's MaxLockDuration (which by default is less than 5 minutes), some of them will begin processing, optional renewal, and completion well after they've lost the lock.
I would recommend the following (see the host.json sketch after this list):
Check that MaxLockDuration is not too short to accommodate your prefetch count and concurrency.
Lower prefetchCount to ensure you don't over-fetch.
If a single message can be processed within 5 minutes or less, prefer that over relying on auto-renewal.
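A minimal host.json sketch along those lines (the values are illustrative assumptions, not prescriptions):
"extensions": {
  "serviceBus": {
    // fetch no more than you can process before the lock expires
    "prefetchCount": 32,
    "messageHandlerOptions": {
      "autoComplete": true,
      "maxConcurrentCalls": 32,
      // stay at or below the 5-minute lock ceiling
      "maxAutoRenewDuration": "00:05:00"
    }
  }
}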
I have implemented exponential back-off retry. Basically, if there is any exception, I clone the message and re-submit it to the queue with some delay.
Now I am facing 2 issues:
1) The delivery count does not increase when I clone the message and resubmit it to the queue.
2) I want to move the message to the dead-letter queue once the max delivery count is reached.
Code:
catch (Exception ex)
{
    _logger.Error(ex, $"Failed to process request {requestId}");
    var clone = messageResult.Message.Clone();
    clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddSeconds(45);
    await messageResult.ResendMessage(clone);
    if (retryCount == MaxAttempts)
    {
        //messageResult.dea
    }
    return new PdfResponse { Error = ex.ToString() };
}
Please help me with this.
When you clone a message it becomes a new message; the system properties are not cloned, so the cloned message gets a fresh delivery count starting at 1 again. See also https://docs.azure.cn/zh-cn/dotnet/api/microsoft.azure.servicebus.message.clone?view=azure-dotnet
You can look into the peek-lock feature of Azure Service Bus. When using PeekLock, the message becomes invisible on the queue until you explicitly abandon it (putting it back on the queue with its delivery count increased) or complete it if everything works out as expected while processing. Another option is to explicitly dead-letter the message.
The feature is documented here: https://learn.microsoft.com/en-us/azure/service-bus-messaging/message-transfers-locks-settlement#peeklock
The important point is that if you do not perform any of the above-mentioned actions (complete, abandon, or dead-letter), Azure Service Bus automatically makes the message visible again after a defined interval (the LockDuration property), just as it does when you abandon it.
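For illustration, a minimal sketch of the three settlement actions with the Microsoft.Azure.ServiceBus MessageReceiver (the receiver and message variables are assumed to come from your trigger binding; a delivered message should be settled exactly one of these ways):
// complete: remove the message from the queue
await receiver.CompleteAsync(message.SystemProperties.LockToken);

// abandon: release the lock now; the message becomes visible again
// and the broker increments its delivery count
await receiver.AbandonAsync(message.SystemProperties.LockToken);

// dead-letter: move the message to the dead-letter sub-queue with a reason
await receiver.DeadLetterAsync(message.SystemProperties.LockToken, "MaxDeliveryCountReached");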
So to get delayed-retry and dead-letter behaviour (when the maximum delivery count has been reached) you can use the following options:
Option 1: Retry via Azure Service Bus auto-unlock
When processing of the message cannot be performed at the moment for some reason, catch the exception and make sure none of the mentioned actions (abandon, complete, or dead-letter) is performed. This keeps the message invisible for the remaining lock time and makes it visible again once the configured lock duration has elapsed. The delivery count is then increased by Azure Service Bus as expected.
Option 2: Implement your own retry policy
Implement a retry policy in your code and retry processing the message. If the maximum number of retries has been reached, abandon the message, which makes it visible again for the next read from the queue once the retry time has passed. In this case the delivery count is increased as well.
Note: if you choose option 2, make sure your total retry period fits within the defined LockDuration, so the message does not become visible again on the queue while you are still processing it with retries. You can also renew the lock between retries by calling RenewLock() on the message.
If you implement the retry policy in your own code, I recommend looking into Polly for .NET, which already gives you features such as Retry and Circuit Breaker policies. See https://github.com/App-vNext/Polly
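A minimal sketch of option 2 with Polly (ProcessMessageAsync is a placeholder for your own processing code):
using System;
using Polly;

// retry up to 3 times with exponential back-off (2s, 4s, 8s);
// if all retries fail, the exception propagates and the message can be abandoned
var retryPolicy = Policy
    .Handle<Exception>()
    .WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt)));

await retryPolicy.ExecuteAsync(() => ProcessMessageAsync(message));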
I use Apache NMS (in C#) to receive messages from ActiveMQ.
I want to be able to acknowledge every message I receive, or roll a message back in case of an error.
I solved the first part by using CreateSession(AcknowledgementMode.IndividualAcknowledge) and then calling message.Acknowledge() for every received message.
The problem is that in this mode there is no rollback option. If a message is not acknowledged, I can never receive it again for another try. It can only be delivered to another consumer, but there is no other consumer, so it just stays stuck in the queue.
So I tried AcknowledgementMode.Transactional instead, but that has another problem: I can only call session.Commit() or session.Rollback(), and there is no way to indicate which specific message I am committing or rolling back.
What is the correct way to do this?
Stay with IndividualAcknowledge and then try session.Recover() and session.Close(). Both of those should signal to the broker that the messages are not going to be acknowledged.
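A minimal sketch of that pattern, assuming the Apache.NMS.ActiveMQ client, a local broker, and a queue named "myqueue" (all placeholders):
using System;
using Apache.NMS;
using Apache.NMS.ActiveMQ;

var factory = new ConnectionFactory("tcp://localhost:61616");
using (IConnection connection = factory.CreateConnection())
using (ISession session = connection.CreateSession(AcknowledgementMode.IndividualAcknowledge))
using (IMessageConsumer consumer = session.CreateConsumer(session.GetQueue("myqueue")))
{
    connection.Start();
    IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
    try
    {
        // ... process the message ...
        message.Acknowledge();   // acknowledge only this message on success
    }
    catch (Exception)
    {
        session.Recover();       // signal the broker that unacknowledged messages should be redelivered
    }
}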
My solution was to throw an exception whenever (for any reason, for example an exception from a DB SaveChanges call) I did not want to acknowledge the message with message.Acknowledge().
When you throw an exception inside your IMessageConsumer Listener handler, the message is sent to your consumer again, about 5 times by default (after that it is moved to the default DLQ for investigation).
However, you can change this behaviour via the RedeliveryPolicy on the connection object.
Example of a RedeliveryPolicy:
RedeliveryPolicy redeliveryPolicy = new RedeliveryPolicy
{
    InitialRedeliveryDelay = 5000,   // retry every 5 seconds
    MaximumRedeliveries = 10,        // the message will be redelivered up to 10 times
    UseCollisionAvoidance = true,    // randomize the 5-second delay slightly
    CollisionAvoidancePercent = 50,  // used along with the option above
    UseExponentialBackOff = false
};

// apply the policy to the Apache.NMS.ActiveMQ connection
connection.RedeliveryPolicy = redeliveryPolicy;
If the message fails again (after the 10 redeliveries), it is moved to a default DLQ; this queue is created automatically.
You can use another consumer to read that queue and investigate the messages that were not acknowledged.
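For instance, a small sketch of inspecting the default dead-letter queue (ActiveMQ names it ActiveMQ.DLQ unless configured otherwise), reusing a connection as in the sketch above:
using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
using (IMessageConsumer dlqConsumer = session.CreateConsumer(session.GetQueue("ActiveMQ.DLQ")))
{
    // read and inspect a dead-lettered message to diagnose the failure
    IMessage failed = dlqConsumer.Receive(TimeSpan.FromSeconds(5));
}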
I'm trying to write a system in Azure, and for one part of it I want to be able to have a number of bits of code writing to a queue, and have a single bit of processing code deal with each item in the queue.
Items are being added to the queue correctly. I've checked this: I have Visual Studio with the Azure plugins, so I can use Cloud Explorer to pull up the storage account and view the queue. There, the queue content seems correct, in that the Message Text Preview looks as I would expect.
However, when I add an Azure Function with a queue trigger to process this, the trigger fires but the queue item comes out empty. I've tried the tutorial code, cut down a little. My run function is:
public static void Run(string myQueueItem,
    DateTimeOffset expirationTime,
    DateTimeOffset insertionTime,
    DateTimeOffset nextVisibleTime,
    string queueTrigger,
    TraceWriter log)
{
    log.Info($"C# Queue trigger function processed: '{myQueueItem.GetType()}'\n" +
        $"queueTrigger={queueTrigger}\n" +
        $"expirationTime={expirationTime}\n" +
        $"insertionTime={insertionTime}\n" +
        $"nextVisibleTime={nextVisibleTime}\n");
}
I then get output in which the queue item is empty, when I know it isn't. The queueTrigger element is also empty. Here is some sample output from running the function directly in Azure Functions:
2016-11-01T13:47:41.834 C# Queue trigger function processed:
queueTrigger=
expirationTime=12/31/9999 11:59:59 PM +00:00
insertionTime=11/1/2016 1:47:41 PM +00:00
The fact that this triggers at all, and has a sensible-looking insertion time, suggests that I'm connecting to the right queue.
Does anyone know why the string myQueueItem is coming out empty, when the queue-peeking tool can see the full preview string?
I've now got this working. I did two things.
First, I cleared out the 'poison' queue; I had earlier been trying to deserialize objects from the queue.
Then I enabled the queue trigger; it had been disabled earlier. It seems that when you manually run a disabled queue trigger, it provides some fake information and doesn't take anything from the queue. It doesn't even dequeue a message, which was the hint.
From this point on, items I add to the queue get processed correctly.
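For completeness, a minimal sketch of enqueuing a plain-string message (assuming the classic WindowsAzure.Storage SDK; the connection string and queue name are placeholders); a string payload like this binds directly to the string myQueueItem trigger parameter:
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var account = CloudStorageAccount.Parse("<storage-connection-string>");
var queue = account.CreateCloudQueueClient().GetQueueReference("myqueue");
queue.CreateIfNotExists();

// enqueue plain text; a binary-serialized object the function cannot bind to
// would end up in the poison queue instead
queue.AddMessage(new CloudQueueMessage("hello from the producer"));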