Retrying a message on azure service bus after an elapsed time? - c#

Problem: I need to reschedule/defer a message to be processed after a user defined elapsed time as the receiver.
Goal: After an HttpResponseException of ServerUnavailable, I would like to retry processing of the message after 30 minutes. It must also follow the existing rule that after 10 delivery attempts the message is sent to the dead letter queue (this happens automatically based on the topic rules).
So I have a function app that processes an Azure Service Bus topic. This means that thread sleeping is not an option.
What I have tried:
I understand messageSender.ScheduleMessageAsync(message, dateTime) is used on the sender's side to schedule the message for later processing, which works when sending a new message. However, as the receiver, I would like to do this on my side after getting an exception.
I tried using messageReceiver.DeferAsync(message.SystemProperties.LockToken, properties), with the properties containing a new "ScheduledEnqueueTimeUtc". This does defer the message, but the sequence IDs seem to go out of sync, making it impossible to receive my deferred message.
If I clone the message, I cannot set SystemProperties.DeliveryCount as it is read-only, hence the dead letter queue rule will not function as intended. I can create UserProperties and manually track message retries and a scheduled date in my function app, but I am wondering if there is a better way to do this?
Any suggestions will be appreciated.

What do you think about creating a retry policy? Instead of Thread.Sleep, you can schedule the same message onto the queue again with a specific time of +30 minutes,
and return a positive response to clear the current message.
You need to keep the delivery count rule, so you may need to add a property to the message that holds a count.
I think this idea is sound. Here is an article that may help you; you just need to replace Thread.Sleep with ScheduleMessageAsync.

I managed to resolve retrying a message using custom UserProperties.
This is in line with what Houssem Dbira suggested and my third point, but instead of using a custom retry policy object, I created a helper function that manages the retry count and schedules the message back onto the service bus.
The link below will take you to the helper function I created, if you're interested in doing this yourself.
RetryHelper.cs
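For reference, a helper along those lines might look like the following. This is a hypothetical sketch assuming the Microsoft.Azure.ServiceBus client; the "RetryCount" property name and the MaxRetries limit are illustrative, not taken from the linked file.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;
using Microsoft.Azure.ServiceBus.Core;

public static class RetryHelper
{
    private const int MaxRetries = 10;
    private const string RetryCountKey = "RetryCount"; // illustrative name

    public static async Task RetryLaterAsync(
        Message original, MessageSender sender, TimeSpan delay)
    {
        var retryCount = original.UserProperties.ContainsKey(RetryCountKey)
            ? (int)original.UserProperties[RetryCountKey]
            : 0;

        if (retryCount >= MaxRetries)
        {
            // Give up; let the caller dead-letter the message instead.
            throw new InvalidOperationException("Retry limit exceeded.");
        }

        // Clone the payload into a fresh message and track retries in a
        // user property, since SystemProperties.DeliveryCount cannot be
        // set on a clone.
        var clone = original.Clone();
        clone.UserProperties[RetryCountKey] = retryCount + 1;
        clone.ScheduledEnqueueTimeUtc = DateTime.UtcNow.Add(delay);

        await sender.SendAsync(clone);
    }
}
```

After sending the clone, the receiver would complete the original message so it does not count further against the topic's delivery limit.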

Related

Consume RabbitMQ Dead letter queue messages

I have read about RabbitMQ and its techniques for handling events that fail to process at consumers (unacknowledged messages, expired TTLs, etc.). RabbitMQ DLX
The way this works (as I understand it) is by setting a Dead Letter Exchange on the processing queue.
So far so good: when events fail, I can see that the dead letter queue is getting its messages saved there.
But how do I consume the messages saved there?
In my case I want to re-process them after their TTL in the DLQ has expired, but in .NET Core (C#) I couldn't find a way or an implementation to achieve this.
I would also accept other solutions, like creating a background worker to check for dead messages, etc.
Can this be achieved? If yes, please help me understand what I need to do to get this working.
You need to configure a "dead letter queue" to handle messages that have been rejected or undelivered. Using the RabbitMQ Client library, you can bind a consumer to that configured queue and retrieve the messages from it. From there you decide in code what you want to do to reprocess/reject them completely.
I've found a useful step-by-step guide for you to follow with more detail: https://medium.com/nerd-for-tech/dead-letter-exchanges-at-rabbitmq-net-core-b6348122460d
You may want to consider attaching the original q's source exchange as the DL exchange of the dlq, which makes the original q and the dlq the DLQs to each other, forming a re-try loop.
The added dlq serves as a temp store of retry messages, and a retry delay buffer with the TTL mechanism. Messages timing out in the dlq get auto-pushed back to the original q for retry. The original q's consumer handles both first-time and retry messages.
To avoid an infinite loop of retries, a retry counter would be set in the message; and the consumer of the original q needs to eventually break the retry loop based on that counter.
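A consumer bound to the DLQ with such a retry-counter break could be sketched like this, assuming the RabbitMQ.Client package and a pre-declared queue; the queue name, header key, and retry limit are all illustrative.

```csharp
using System;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (_, ea) =>
{
    // Read a retry counter carried in the message headers (illustrative key).
    var headers = ea.BasicProperties.Headers;
    var retries = headers != null && headers.TryGetValue("x-retry-count", out var v)
        ? Convert.ToInt32(v)
        : 0;

    if (retries >= 5)
    {
        // Break the retry loop: ack and drop (or park) the message.
        channel.BasicAck(ea.DeliveryTag, multiple: false);
        return;
    }

    // ... reprocess ea.Body here, then acknowledge ...
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};

channel.BasicConsume(queue: "my-dlq", autoAck: false, consumer: consumer);
```

With the loop topology described above, the ack on a timed-out DLQ message is what lets it flow back to the original queue; the counter check is the only thing standing between you and an infinite cycle.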

C# - Azure Durable Function - Restart Orchestration

I'm working with Durable Functions. I have already understood how Durable Functions work: an orchestration controls the flow (the order in which activities run) and takes care of the sequence of the activities.
But currently I have a question I can't find the correct answer to, and maybe you can help me with that:
Imagine that I have one orchestration with 5 activities.
One of the activities makes a call to an API that will get a document as an array of bytes.
If one of the activities fails, the orchestration can throw an exception and I can detect that in code.
I also have some retry options that retry the activities with an interval of 2 minutes.
But... what if those retries don't succeed?
From what I was able to read, I can use the "ContinueAsNew" method to restart the orchestration, but I think there is a problem.
If I use this method to restart the orchestration 1 hour later, will it resume the activity where it was?
I mean, if the first activity is done and I restart the orchestration due to the failure of one of the activities, will it resume on the 2nd activity as it was before?
Thank you for your time, guys.
If you restart the orchestration, it doesn't have any state of the previous one.
So the first activity will run again.
If you don't want that to happen, you'll need to retry the second one until it succeeds.
I would not recommend making that infinite though, an orchestration should always finish at some point.
I'd just increase the retry count to a sufficiently high number so I can be confident that the processing will succeed in at least 99% of cases.
(How likely is your activity to fail?)
Then if it still fails, you could send a message to a queue and have it trigger some alert. You could then start that one from the beginning.
If something fails so many times that the retry amount is breached, there could be something wrong with the data itself and typically a manual intervention may be needed at that point.
Another option could be to send the alert from within the orchestration if the retries fail, and then wait for an external event to come from an admin who approves or denies it to retry.
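The retry-then-alert-then-wait-for-approval flow described above could be sketched as follows, assuming Durable Functions 2.x; the activity and event names are illustrative.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class DocumentOrchestration
{
    [FunctionName("Orchestrator")]
    public static async Task Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Retry the activity every 2 minutes, up to 5 attempts.
        var retryOptions = new RetryOptions(
            firstRetryInterval: TimeSpan.FromMinutes(2),
            maxNumberOfAttempts: 5);

        try
        {
            await context.CallActivityWithRetryAsync(
                "DownloadDocument", retryOptions, null);
        }
        catch (FunctionFailedException)
        {
            // All retries exhausted: alert an admin and pause until they respond.
            await context.CallActivityAsync("SendAlert", null);

            bool approved = await context.WaitForExternalEvent<bool>("RetryApproval");
            if (approved)
            {
                // Restarts the orchestration from scratch (no prior state kept),
                // so the first activity will run again.
                context.ContinueAsNew(null);
            }
        }
    }
}
```

Note that ContinueAsNew wipes the history, which is exactly the behavior the question asks about: nothing resumes from the 2nd activity.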

Azure function running multiple times for the same service bus queue message

I have an Azure function (based on the new C# functions instead of the old .csx functions) that is triggered whenever a message comes into an Azure Service Bus Queue. Once the function is triggered, it starts processing the service bus message. It decodes the message, reads a bunch of databases, updates a bunch of others, etc... This can take upwards of 30 minutes at times.
Since this is not a time-sensitive process, 30 or even 60 minutes is not an issue. The problem is that in the meantime, the Azure Function seems to kick in again and picks up the same message again and again and reprocesses it. This is an issue and causes problems in our business logic.
So, the question is, can we force the Azure function to run in a singleton mode? Or if that's not possible, how do we change the polling interval?
The issue is related to a Service Bus setting.
What is happening is that the message is added to the queue, the message is then given to the function and a lock is placed on that message so that no other consumer can see/process that message while you have a lock on it.
If within that lock period you do not tell Service Bus that you've processed the message, or ask to extend the lock, the lock is removed from the message and it becomes visible to other consumers, which will then process that message. That is what you are seeing.
Fortunately, Azure Functions can automatically renew the lock for you. In the host.json file there is an autoRenewTimeout setting that specifies for how long you want Azure Functions to keep on renewing the lock for.
https://github.com/Azure/azure-webjobs-sdk-script/wiki/host.json
"serviceBus": {
// the maximum duration within which the message lock will be renewed automatically.
"autoRenewTimeout": "00:05:00"
},
AutoRenewTimeout is not as great as suggested. It has a downside that you need to be aware of: it is not a guaranteed operation. Being a client-side initiated operation, it can and sometimes will fail, leaving you in the same state you are in today.
What you could do to address this is to review your design. If you have a long-running process, receive the message and hand off the processing to something that can run longer than MaxLockDuration. The fact that your function takes so long indicates you have a long-running process; messaging is not designed for that.
One potential solution would be to get a message and register the processing intent in a storage table. Have another storage-table-triggered function kick off the processing, which could take X minutes, and mark it as a singleton. By doing so, you'll be receiving your messages in parallel, writing a "request for long-running processing" into the storage table, and completing the Service Bus messages, therefore not triggering their re-processing. Within the long-running processing you can decide how to handle failure cases.
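The register-intent hand-off could be sketched like this, assuming Azure Functions bindings; the queue name, table name, and WorkItem type are all illustrative.

```csharp
using System;
using Microsoft.Azure.WebJobs;

// Illustrative POCO for the Table output binding; PartitionKey and
// RowKey are required by the binding.
public class WorkItem
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public string Payload { get; set; }
}

public static class AcceptWork
{
    [FunctionName("AcceptWork")]
    public static void Run(
        [ServiceBusTrigger("work-queue")] string message,
        [Table("PendingWork")] out WorkItem workItem)
    {
        // Record the intent only. The function returns quickly, the
        // Service Bus message auto-completes, and its lock never expires.
        // A separate long-running worker (which could be marked with
        // [Singleton]) picks the row up later and decides how to handle
        // failures.
        workItem = new WorkItem
        {
            PartitionKey = "pending",
            RowKey = Guid.NewGuid().ToString(),
            Payload = message
        };
    }
}
```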
Hope that helps.
So your question is on how to make the Azure Function Message trigger to process one message at a time, and a message has to be processed once only.
I have been able to achieve the same functionality using the following host.json configuration.
{
    "serviceBus": {
        "maxConcurrentCalls": 1,
        "autoRenewTimeout": "23:59:59",
        "autoComplete": true
    }
}
Note that I set autoRenewTimeout to 24 hours, which is long enough for a really long-running process. Otherwise you can change it to a duration that fits your needs.
Many will argue the suitability of using Azure Function for a long running operation. But that is not the question that needs an answer here.
I also experienced the same issue. What I did was remove the default rule and add a custom rule on the subscription:
(OriginalTopic='name-of-topic' AND OriginalSubscription='name-of-subcription-sub') OR (NOT Exists([OriginalTopic]))

Azure servicebus queue message handling

I have two consumers (different applications) connected to an Azure queue. I can either ReceiveAndDelete or PeekLock the messages and during consumption I can complete() or abandon() the message. Ref: http://msdn.microsoft.com/en-us/library/azure/hh851750.aspx.
I'm sure I want to use PeekLock and then abandon() the messages, as I want them to be received by both applications. I figured I'd set the message lifetime to 10 seconds on the queue as a deletion mechanism.
However, while the messages do seem to be deleted after 10 seconds, they keep being delivered to both applications over and over again during those 10 seconds. Should I create some custom duplicate detection, or am I using the wrong approach in general?
In my experience, when you use PeekLock you will almost always need to finalize using the Complete method. The Abandon method is creating the duplication, as the message is never marked as done.
Have you considered using the Service Bus topics/subscriptions pattern, or perhaps filters? If I understand your scenario correctly, it may be just what you need. You can send messages with different topics or filters designating which app each one is for.
http://azure.microsoft.com/en-us/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
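A sketch of the topics/subscriptions approach with a filter, assuming the Microsoft.Azure.ServiceBus client; the topic, subscription, and property names are illustrative. Each application gets its own subscription, so both receive their own copy without any abandon-based re-delivery.

```csharp
using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.ServiceBus;

public static class FilteredSubscriptionSetup
{
    public static async Task ConfigureAsync(string connectionString)
    {
        // Receiver side: replace the default catch-all rule with a filter
        // so this subscription only sees messages aimed at this app.
        var client = new SubscriptionClient(connectionString, "my-topic", "app-a-sub");
        await client.RemoveRuleAsync(RuleDescription.DefaultRuleName);
        await client.AddRuleAsync(new RuleDescription(
            "AppAOnly", new SqlFilter("TargetApp = 'AppA'")));
    }

    public static Message TagForAppA(string payload)
    {
        // Sender side: tag each message with the target application.
        var message = new Message(Encoding.UTF8.GetBytes(payload));
        message.UserProperties["TargetApp"] = "AppA";
        return message;
    }
}
```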
Please let me know if this helps, or if this does not match your situation.
Kindest regards...

Best way to schedule messages to be sent using a Windows service in C#

I'll try and briefly explain what I'm looking to achieve. My initial idea of doing this is not going to work well in my opinion, so I'm trying to decide how best to plan this.
The first thought was:
I have a list of messages that need to be sent out at scheduled times, each one is stored in a central SQL database.
The intention is to use a Windows service that will have a timer that ticks every 30 mins. So..
30 Mins pass > Call ScheduleMessages()
ScheduleMessages will check the database for any unsent messages that need to go out in the next 30 minutes. It will then mark them in the database as:
ScheduleActivated = 1
For each one it marks as ScheduleActivated = 1, it will fire off a custom timer object, which inherits from the normal timer and also includes the properties of the message it needs to send.
It will be set to tick at the time the message is due to go out; it will send the message and mark it as successful in the database.
The main problem with this is that I am going to have timers all over the place, and if there were a few hundred messages scheduled at once, it would probably either perform badly or fall over completely.
After re-evaluating, I thought of solution 2:
My other idea was to have 1 timer running in the service, which ticks once every 10 minutes. Each time it ticks it would fire off a method that gathers every single message due to be sent at any point up until that time into a list, and then processes them one at a time.
This seems much less resource-intensive, but I'm worried that if the timer ticks after 10 minutes, any messages that haven't finished sending will be caught in the next tick and be sent again.
Would it work to stop the timer once it has been going for 10 minutes, then reset it to zero and start it again once the messages have been sent?
Is there a 3rd solution to the problem which is better than the above?
We implemented this on one project; what worked for us was:
All messages are written to a table with a send time
A service checks every x minutes if there is something to send
When the service sends a message, it also marks the message as sent (updates the sent time from null to the actual sent time)
Marking the message avoids resends, and if you want to resend, you just set the date back to null.
The only problem we had is that the service ran as a single thread, so the number of messages sent was limited. But you would need very many messages and a very small window before this becomes a problem.
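The claim-and-mark step can be made atomic so a message is never picked up twice, even across the question's 10-minute tick boundary. A sketch, assuming a Messages table with SendTime and SentTime columns (table and column names are illustrative):

```csharp
using System.Data.SqlClient;

public static class MessagePump
{
    public static void SendDueMessages(string connectionString)
    {
        using var conn = new SqlConnection(connectionString);
        conn.Open();

        // Claim due, unsent messages atomically: the UPDATE marks them
        // sent in the same statement that returns them, so a second tick
        // (or a second worker) can never grab the same row.
        var cmd = new SqlCommand(@"
            UPDATE Messages
            SET SentTime = SYSUTCDATETIME()
            OUTPUT inserted.Id, inserted.Body
            WHERE SentTime IS NULL AND SendTime <= SYSUTCDATETIME();", conn);

        using var reader = cmd.ExecuteReader();
        while (reader.Read())
        {
            // SendMessage is a hypothetical sender; to resend later,
            // set SentTime back to NULL for that row.
            SendMessage(reader.GetInt32(0), reader.GetString(1));
        }
    }

    private static void SendMessage(int id, string body) { /* ... */ }
}
```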
Ditch the fixed interval. Windows has plenty of ways to sleep for a specific amount of time, including the Sleep function, waitable timers, etc.
Some of these are available in .NET. For example, WaitHandle.WaitAll accepts a set of events and a sleep time; that way your thread can wait until the next scheduled item, but can also be woken by a request to modify the schedule.
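A minimal sketch of that wait-with-wakeup pattern, using a single event and a timeout; scheduleChanged and nextSendTime are illustrative names:

```csharp
using System;
using System.Threading;

// Signalled by whatever code modifies the schedule (illustrative).
var scheduleChanged = new AutoResetEvent(false);

DateTime nextSendTime = DateTime.UtcNow.AddMinutes(5); // illustrative

// Sleep exactly until the next scheduled item is due, never longer.
TimeSpan untilNext = nextSendTime - DateTime.UtcNow;
if (untilNext < TimeSpan.Zero) untilNext = TimeSpan.Zero;

// true  => the event fired: re-read the schedule and recompute the wait.
// false => the timeout elapsed: the next message is due, send it now.
bool woken = scheduleChanged.WaitOne(untilNext);
```

This removes the fixed-interval problem entirely: there is no tick to overlap with an in-flight send, because the thread only wakes when there is something to do.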
In my opinion, the scheduling service should only be responsible for checking schedules, and any work should be passed off to a separate service. The scheduling service shouldn't care about the work being scheduled. Try implementing a work-item interface that contains an Execute method; that way, the executing object can handle the internals itself and needn't be aware of the scheduling service. For scheduling, have you checked out Quartz.NET?
