In my Azure Function, at some point I would like to defer my message. But if I do, I get an exception:
[7/30/2020 5:59:02 PM] Message processing error (Action=Complete, ClientId=MessageReceiver1UserCreated/Subscriptions/MySubscription, EntityPath=UserCreated/Subscriptions/MySubscription, Endpoint=xxxxxxxxxxx.servicebus.windows.net)
[7/30/2020 5:59:02 PM] Microsoft.Azure.ServiceBus: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue, or was received by a different receiver instance.
This is my code
[FunctionName("UserCreated")]
public static async Task Run([ServiceBusTrigger("UserCreated", "MySubscription", Connection = "ServiceBusConnectionString")]UserCreated userCreated, ILogger log, string lockToken, MessageReceiver messageReceiver)
{
//some logic.....
await messageReceiver.DeferAsync(lockToken);
}
Honestly, I have no clue what I am doing wrong. The code examples I found, as well as this Stack Overflow post (Azure Function V2 Service Bus Message Deferral), did not help me out.
I understand that the message is automatically completed after the function completes. So I tried to disable autocomplete, but there too I did not manage to find a working solution.
Using packages:
Microsoft.Azure.WebJobs.Extensions.ServiceBus 4.1.0
(references) Microsoft.Azure.ServiceBus 4.1.1
As the error message states, the message may be losing its lock before reaching the Defer call. Try extending the lock duration on your Service Bus subscription; I think it may fix the issue.
Here is a bit of an explanation of what a lock does in a Service Bus queue. According to the error you describe, your lock is expiring before you are able to defer. Auto-renewal should be handled by the Functions runtime, but it is not guaranteed, so the best way to tackle this is to extend the maximum duration of the lock.
The easiest way to achieve this is to navigate to the Azure portal and find the Service Bus subscription you wish to change. Once you select it, you should see something like this screen:
By clicking the Change button under the message lock duration, you will be able to modify the duration based on your needs.
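If you'd rather set it from code than through the portal, here is a minimal sketch assuming the ManagementClient that ships in the same Microsoft.Azure.ServiceBus package; the connection string and entity names are placeholders:
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

class LockDurationUpdater
{
    static async Task Main()
    {
        // Placeholder connection string and entity names.
        var client = new ManagementClient("<service-bus-connection-string>");

        var subscription = await client.GetSubscriptionAsync("UserCreated", "MySubscription");
        subscription.LockDuration = TimeSpan.FromMinutes(5); // 5 minutes is the maximum allowed
        await client.UpdateSubscriptionAsync(subscription);

        await client.CloseAsync();
    }
}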
Thanks for all the answers; however, none of them actually explained the real cause.
TL;DR
If you want to complete, defer, abandon or remove the message yourself, you have to disable autocomplete in the host.json file.
Root cause
The error states why the lock is invalid:
The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue, or was received by a different receiver instance.
In my case the message had already been "removed", since I had called messageReceiver.DeferAsync(lockToken);
So after this statement, when the function finishes, the runtime automatically tries to complete the message (which has already been deferred), and that completion fails with the error above.
Therefore you have to disable autocompletion of the message.
Solution
Disable autocomplete in host.json:
{
  "version": "2.0",
  "extensions": {
    "serviceBus": {
      "messageHandlerOptions": {
        "autoComplete": false
      }
    }
  }
}
Be careful
When you disable autocomplete, you are responsible for doing something with the message. You always have to make a decision; otherwise the message will become available again after the lock timeout.
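Purely as an illustration, a minimal sketch of the function from the question settling the message explicitly once autoComplete is off; the ShouldDefer helper is hypothetical and stands in for your own business rule:
[FunctionName("UserCreated")]
public static async Task Run(
    [ServiceBusTrigger("UserCreated", "MySubscription", Connection = "ServiceBusConnectionString")] UserCreated userCreated,
    ILogger log,
    string lockToken,
    MessageReceiver messageReceiver)
{
    try
    {
        if (ShouldDefer(userCreated)) // hypothetical business rule
        {
            // Sets the message aside; it must later be fetched by sequence number.
            await messageReceiver.DeferAsync(lockToken);
        }
        else
        {
            // Removes the message from the subscription.
            await messageReceiver.CompleteAsync(lockToken);
        }
    }
    catch (Exception ex)
    {
        log.LogError(ex, "Processing failed, abandoning message");
        // Releases the lock so the message becomes available again immediately.
        await messageReceiver.AbandonAsync(lockToken);
        throw;
    }
}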
Related
I'm getting a MessageLockLostException when performing a complete operation on Azure Service Bus after a long-running process of 30 minutes to over an hour. I want this process to scale and be resilient to failures, so I keep hold of the message lock and renew it well within the default lock duration of 1 minute. However, when I try to complete the message at the end, I get a MessageLockLostException even though I can see that all the lock renewals occurred at the correct time.
I want to scale this up in the future, but there is currently only one instance of the application, and I can confirm that the message still exists on the Service Bus subscription after it errors, so the problem is definitely around the lock.
Here are the steps I take.
Obtain a message and configure a lock
// Receive a single message and track it so it can be completed later.
messages = await Receiver.ReceiveAsync(1, TimeSpan.FromSeconds(10)).ConfigureAwait(false);
var message = messages[0];
var messageBody = GetTypedMessageContent(message);
Messages.TryAdd(messageBody, message);

// Start a timer that keeps renewing the lock for as long as the message is tracked.
LockTimers.TryAdd(
    messageBody,
    new Timer(
        async _ =>
        {
            if (Messages.TryGetValue(messageBody, out var msg))
            {
                await Receiver.RenewLockAsync(msg.SystemProperties.LockToken).ConfigureAwait(false);
            }
        },
        null,
        TimeSpan.FromSeconds(Config.ReceiverInfo.LockRenewalTimeThreshold),
        TimeSpan.FromSeconds(Config.ReceiverInfo.LockRenewalTimeThreshold)));
Perform the long running process
Complete the message
internal async Task Complete(T message)
{
    if (Messages.TryGetValue(message, out var msg))
    {
        // Renew once more just before completing, then complete using the lock token.
        await Receiver.RenewLockAsync(msg.SystemProperties.LockToken);
        await Receiver.CompleteAsync(msg.SystemProperties.LockToken).ConfigureAwait(false);
    }
}
The code above is a stripped-down version of what's there; I removed some try/catch error handling and logging, but I can confirm that when debugging the issue I can see the timer execute on time. It's just the CompleteAsync that fails.
Additional info:
Service Bus Topic has Partitioning Enabled
I have tried renewing it at 80% of the threshold (48 seconds), 30% of the threshold (18 seconds), and 10% of the threshold (6 seconds)
I've searched around for an answer and the closest thing I found was this article but it's from 2016.
I couldn't get it to fail in a standalone console application, so I don't know if it's something I'm doing in my application, but I can confirm that the lock renewal occurs for the duration of the processing and returns the correct DateTime for the updated lock. I'd expect that if the lock were truly lost, the lock renewals would fail as well, not just the CompleteAsync.
I'm using the Microsoft.Azure.ServiceBus NuGet package, version 4.1.3.
My application is .NET Core 3.1 and uses a Service Bus wrapper package which is written in .NET Standard 2.1.
The message completes if you don't hold onto it for a long time and occasionally completes even when you do.
Any help or advice on how I could complete my Service Bus message successfully after an hour would be great
The issue here wasn't with my code. It was with partitioning on the Service Bus topic. If you search around, there are some issues on the Microsoft GitHub about completion of messages on partitioned entities. That's not important anyway, because the fix I used here was to use the subscription forwarding feature to move the message to a new topic with partitioning disabled, and then read the message from that new topic. I was able to use the exact same code to keep the message locked for a long time and still complete it successfully.
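For reference, a minimal sketch of how that forwarding setup could be configured with the ManagementClient from Microsoft.Azure.ServiceBus; the topic and subscription names here are placeholders, not the ones from my application:
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus.Management;

class ForwardingSetup
{
    static async Task Main()
    {
        var client = new ManagementClient("<service-bus-connection-string>");

        // Create a new, non-partitioned topic to forward into.
        if (!await client.TopicExistsAsync("orders-unpartitioned"))
        {
            await client.CreateTopicAsync(new TopicDescription("orders-unpartitioned")
            {
                EnablePartitioning = false
            });
        }

        // Point the existing subscription on the partitioned topic at the new topic.
        var subscription = await client.GetSubscriptionAsync("orders", "long-running");
        subscription.ForwardTo = "orders-unpartitioned";
        await client.UpdateSubscriptionAsync(subscription);

        await client.CloseAsync();
    }
}
You then add a subscription on the new, non-partitioned topic and point your receiver at that instead of the original one.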
Scenario: an Azure WebJob that gets all the Vendor records from NetSuite via its WSDL.
Problem: the dataset is too large. Even with the service timeout set to 12 minutes, it still times out and the code fails.
NetSuite has an async process that basically runs whatever you want on the server and returns a JobId that allows you to check the progress of the job on the server.
What I currently do is make a search call first, asking for all the Vendor records to be processed on the server. After I get the JobId, I wrote a void recursion that checks whether the job is finished on the server, with Thread.Sleep set to 10 minutes.
private static bool ChkProcess(VendorsService vendorService, string jobId)
{
    var isJobDone = false;

    // Recursion: poll the server every 10 minutes until the job reports it is finished.
    void ChkAsyncProgress(bool isFinish)
    {
        if (isFinish) return;

        var chkJobProgress = vendorService.NsCheckProcessStatus(jobId);
        if (chkJobProgress.OperationResult.IsFinish)
        {
            isJobDone = true;
            return; // no need to sleep again once the job is done
        }

        Thread.Sleep(TimeSpan.FromMinutes(10));
        ChkAsyncProgress(isJobDone);
    }

    ChkAsyncProgress(false);
    return isJobDone;
}
It works, but is there a better approach?
Thanks
I think that since you're already working with Azure, you can implement a really low-cost solution for this with Service Bus (if not free, depending on how frequently your job runs).
Basically, it's a queue where you enqueue messages (which can be objects with properties too, so they could potentially also contain the result of your elaboration).
A Service Bus queue is used to enqueue the message.
An Azure Function with a ServiceBusTrigger automatically listens for new messages on the Service Bus and gets triggered as soon as one arrives (or you can enqueue messages that only become available at a certain future time).
So, at the end of the WebJob code, you could add code to enqueue a message which marks the WebJob as having finished its elaboration.
The Azure Function will be notified as soon as the message arrives in the queue, and you can retrieve the data without constantly polling for job completion; Azure takes care of all of that for you at a ridiculously low price and with no effort on your part.
Also, these functions aren't priced based on time but on executions, so you will only pay when a message is effectively put in the queue.
They include a certain number of free executions, so you might not even need to pay anything.
Here is some Microsoft sample code for doing so.
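To make the flow concrete, a minimal sketch of both sides, assuming the Microsoft.Azure.ServiceBus client in the WebJob and the Service Bus trigger binding in the Function; the queue name, connection string/setting, and message shape are placeholders:
using System;
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;

// WebJob side: enqueue a message once the NetSuite job has finished.
public static class JobNotifier
{
    public static async Task NotifyJobFinishedAsync(string jobId)
    {
        // Placeholder queue name and connection string.
        var sender = new QueueClient("<service-bus-connection-string>", "netsuite-jobs");
        var body = JsonConvert.SerializeObject(new { JobId = jobId, FinishedUtc = DateTime.UtcNow });
        await sender.SendAsync(new Message(Encoding.UTF8.GetBytes(body)));
        await sender.CloseAsync();
    }
}

// Function side: triggered automatically as soon as the message arrives in the queue.
public static class OnNetSuiteJobFinished
{
    [FunctionName("OnNetSuiteJobFinished")]
    public static void Run(
        [ServiceBusTrigger("netsuite-jobs", Connection = "ServiceBusConnectionString")] string message,
        ILogger log)
    {
        log.LogInformation($"NetSuite job finished: {message}");
        // Fetch and process the NetSuite results here.
    }
}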
Is there a way to mark a WebJob (triggered, not continuous) as failed, without throwing an exception? I need to check that certain conditions are true to mark the job as successful.
According to the Azure WebJobs SDK, this is the code from the TriggeredFunctionExecutor class:
public async Task<FunctionResult> TryExecuteAsync(TriggeredFunctionData input, CancellationToken cancellationToken)
{
    IFunctionInstance instance = _instanceFactory.Create((TTriggerValue)input.TriggerValue, input.ParentId);
    IDelayedException exception = await _executor.TryExecuteAsync(instance, cancellationToken);

    FunctionResult result = exception != null ?
        new FunctionResult(exception.Exception)
        : new FunctionResult(true);
    return result;
}
We know that the WebJob's status depends on whether your WebJob/Function executed without any exceptions or not. We can't set the final status of a running WebJob programmatically.
I need to check that certain conditions are true to mark the job as successful.
Throwing an exception is the only way I found. Alternatively, you could store the WebJob execution result in an additional place (for example, Azure Table Storage). We can get the current invocation id via the ExecutionContext class. In your WebJob, you could save the current invocation id and the status you want to Azure Table Storage. You could then query the status from Azure Table Storage later if needed, based on the invocation id.
public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message, ExecutionContext context, TextWriter log)
{
    log.WriteLine(message);
    SaveStatusToTableStorage(context.InvocationId, "Fail/Success");
}
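SaveStatusToTableStorage is not part of the SDK; a minimal sketch of what it could look like using the WindowsAzure.Storage table client (the table name and connection string are placeholders):
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

public class JobStatusEntity : TableEntity
{
    public JobStatusEntity() { }

    public JobStatusEntity(Guid invocationId, string status)
    {
        PartitionKey = "webjob-status";
        RowKey = invocationId.ToString();
        Status = status;
    }

    public string Status { get; set; }
}

public static class JobStatusStore
{
    public static void SaveStatusToTableStorage(Guid invocationId, string status)
    {
        var account = CloudStorageAccount.Parse("<storage-connection-string>");
        var table = account.CreateCloudTableClient().GetTableReference("WebJobStatus");
        table.CreateIfNotExists();

        // Upsert so re-running the same invocation simply overwrites its status.
        table.Execute(TableOperation.InsertOrReplace(new JobStatusEntity(invocationId, status)));
    }
}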
To use ExecutionContext as a parameter, you need to install the Azure WebJobs SDK Extensions NuGet package and invoke the UseCore method before you run your WebJob.
var config = new JobHostConfiguration();
config.UseCore();
var host = new JobHost(config);
host.RunAndBlock();
Throwing an unhandled exception will result in a Failed execution.
But I have noticed that it will also result in bad handling of your message: i.e. your message will be dequeued but not moved to your poison queue, regardless of your configuration (but maybe it was due to my SDK version).
@Jean NETR-VALERE, the newer versions of the WebJobs packages do act as you say: if an exception is thrown, the job will fail and will continue to be run over and over until you finally clear your queue. This is absolutely horrible behavior and I have no clue why they changed this.
Yes, they did change it to work this way, which is why I use an older version of the WebJobs package just for this reason. About 3 months ago I upgraded to the newer version, and shortly after I could not understand why the above behavior was happening. Once I reverted back to the older version, it started working correctly again: after failing 5 times the message is moved to the poison queue and never runs again. My point is that if you want the correct (IMO) behavior, see if you can go back to version 1.1.0 and you will be happy. Hope that helps.
To mark a triggered WebJob as failed, you just need to set the process exit code to a non-zero value.
System.Environment.ExitCode = 1;
When you throw an unhandled exception, it also sets the exit code; that is how Azure determines failure.
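For illustration, a minimal sketch of a triggered WebJob entry point using this approach; the CheckConditions helper is hypothetical and stands in for your own success criteria:
using System;

public static class Program
{
    public static void Main()
    {
        // ... do the actual work ...

        if (!CheckConditions()) // hypothetical business check
        {
            // A non-zero exit code marks the triggered WebJob run as Failed
            // without throwing an exception.
            Environment.ExitCode = 1;
        }
    }

    private static bool CheckConditions() => false; // placeholder
}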
I am building a .NET IoT project that requires MQTT for communication. For the broker I use GnatMQ and for clients I use MqttDotNet (for mobile compatibility). The client library builds, connects, and sends messages fine but I get an error whenever the client’s PublishArrivedDelegate is triggered (i.e. message received event).
The error occurs on retained messages as well as standard received messages. The MqttDotNet error log is here.
Console output:
It seems the error is thrown in QoSManager.cs on line 91:
else if (mess.QualityOfService == QoS.OnceAndOnceOnly)
{
    _responses.Add(mess.MessageID, new MqttPubrelMessage(mess.MessageID));
}
NOTE: I am using the raw libraries as they are, without any added code.
Has anyone tried these libraries and can maybe confirm that they worked without issues? Until then, I guess it's debugging to the extreme.
Update 1: The error only occurs on subscribe with QoS level 2.
Update 2: The error can be prevented by adding a check that stops duplicates from being added to the hashtable, as pointed out by @hardillb. But this does not SOLVE the actual issue here.
The issue still persists on publish and subscribe with QoS 2. The problem is that onClientPublishedArrived is being triggered exactly 3 times whenever a message is received, and exactly 2 times when a message is received as retained. NOTE that when I test this with HiveMQ the issue is gone. The problem only occurs when using the GnatMQ broker.
It looks like you are trying to add duplicate keys to a Hashtable; this will be caused by mess.MessageID returning the same value for multiple messages.
The C# Hashtable implementation throws an exception rather than replacing the original value with the new one.
If you just need a unique key for the Hashtable, you could use something like a timestamp.
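To make the failure mode concrete, a small standalone sketch of the duplicate-key behavior and the kind of guard mentioned in Update 2 (the values here are placeholders, not the library's actual types):
using System;
using System.Collections;

class HashtableDuplicateKeyDemo
{
    static void Main()
    {
        var responses = new Hashtable();
        ushort messageId = 42;

        responses.Add(messageId, "first PUBREL");

        // Hashtable.Add throws ArgumentException when the key already exists,
        // which is the crash seen above when MessageID repeats.
        // responses.Add(messageId, "second PUBREL");

        // Guarding with ContainsKey avoids the exception (the workaround from Update 2):
        if (!responses.ContainsKey(messageId))
        {
            responses.Add(messageId, "second PUBREL");
        }

        // The indexer would silently overwrite instead of throwing:
        responses[messageId] = "latest PUBREL";

        Console.WriteLine(responses[messageId]);
    }
}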
I need to push notifications to tens of thousands of iOS devices that have my app installed. I'm trying to do it with PushSharp, but I'm missing some fundamental concepts here. At first I tried to actually run this in a Windows service, but couldn't get it to work: I was getting null reference errors from the _push.QueueNotification() call. Then I did exactly what the documented sample code does, and it worked:
PushService _push = new PushService();
_push.Events.OnNotificationSendFailure += new ChannelEvents.NotificationSendFailureDelegate(Events_OnNotificationSendFailure);
_push.Events.OnNotificationSent += new ChannelEvents.NotificationSentDelegate(Events_OnNotificationSent);

var cert = File.ReadAllBytes(HttpContext.Current.Server.MapPath("..pathtokeyfile.p12"));
_push.StartApplePushService(new ApplePushChannelSettings(false, cert, "certpwd"));

AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

_push.QueueNotification(notification);
_push.StopAllServices(true);
Issue #1:
This works perfectly, and I see the notification pop up on the iPhone. However, since it's called a Push Service, I assumed it would behave like a service, meaning I would instantiate it and call _push.StartApplePushService() within a Windows service, perhaps. And I thought that to actually queue up my notifications, I could do this on the front end (an admin app, let's say):
PushService push = new PushService();

AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

push.QueueNotification(notification);
Obviously (and like I already said), it didn't work: the last line kept throwing a null reference exception.
I'm having trouble finding any other documentation that shows how to set this up in a service/client manner (and not just call everything at once). Is it possible, or am I missing the point of how PushSharp should be utilized?
Issue #2:
Also, I can't seem to find a way to target many device tokens at once, without looping through them and queuing up notifications one at a time. Is that the only way or am I missing something here as well?
Thanks in advance.
@baramuse explained it all. If you wish to see a service "processor", you can browse through my solution at https://github.com/vmandic/DevUG-PushSharp, where I've implemented the workflow you seek, i.e. a win service, a win processor, or even a web api ad hoc processor using the same core processor.
From what I've read and how I'm using it, the 'Service' keyword may have misled you...
It is a service in the sense that you configure it once and start it.
From that point on, it will wait for you to push new notifications into its queue system, and it will raise events as soon as something happens (delivery report, delivery error...). It is asynchronous: you can push (= queue) 10,000 notifications and wait for the results to come back later using the event handlers.
But it's still a regular object instance that you have to create and access like any other. It doesn't expose any "outside listener" (an http/tcp/ipc connection, for example); you will have to build that yourself.
In my project I created a small self-hosted web service (relying on ServiceStack) that takes care of the configuration and instance lifetime while only exposing a SendNotification function.
And about Issue #2: there indeed isn't any "batch queue", but since the queue function returns straight away (enqueue now, push later), it's just a matter of looping over your device token list, as sketched after the library code below.
public void QueueNotification(Notification notification)
{
    if (this.cancelTokenSource.IsCancellationRequested)
    {
        Events.RaiseChannelException(new ObjectDisposedException("Service", "Service has already been signaled to stop"), this.Platform, notification);
        return;
    }

    notification.EnqueuedTimestamp = DateTime.UtcNow;
    queuedNotifications.Enqueue(notification);
}