Bot Framework: how to handle long-running tasks with a bot? - C#

How do I handle long-running tasks on a bot so the client doesn't retry sending the message again after 15 seconds?
I have a bot built with Bot Framework v3 and connect the client via Direct Line.

The Direct Line channel connector itself does not retry sending messages. If it does not receive an acknowledgement within 15 seconds of sending a message to your bot, it returns a Gateway Timeout.
If you are using the DirectLineClient, you can override the retry policy so the client does not retry messages:
// Credentials built from your Direct Line secret
DirectLineClientCredentials creds = new DirectLineClientCredentials(directLineSecret);
DirectLineClient directLineClient = new DirectLineClient(new Uri("https://directline.botframework.com"), creds);

// Retry count of 0: the client will not resend messages on transient HTTP errors
directLineClient.SetRetryPolicy(
    new Microsoft.Rest.TransientFaultHandling.RetryPolicy(
        new Microsoft.Rest.TransientFaultHandling.HttpStatusCodeErrorDetectionStrategy(), 0));
If you have a long-running process that takes more than 15 seconds, consider queuing the message somewhere so you can acknowledge the call immediately, then process the message on a background thread. This pattern is known as proactive messaging. More information can be found here: https://learn.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-proactive-messages?view=azure-bot-service-3.0
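For illustration, a minimal sketch of the proactive side of that pattern with the Bot Framework v3 ConnectorClient, assuming the conversation details (service URL, conversation ID, bot and user account IDs) were stored from the incoming activity before the work was queued; the class and parameter names here are hypothetical:

using System;
using System.Threading.Tasks;
using Microsoft.Bot.Connector;

public static class LongRunningWorker
{
    // Called by whatever background worker finishes the long-running job.
    public static async Task SendResultAsync(string serviceUrl, string channelId,
        string conversationId, string botId, string userId, string resultText)
    {
        // Allow the connector to post back to this service URL outside of an incoming request.
        MicrosoftAppCredentials.TrustServiceUrl(serviceUrl);

        var connector = new ConnectorClient(new Uri(serviceUrl));

        // Build a proactive message addressed to the stored conversation.
        IMessageActivity message = Activity.CreateMessageActivity();
        message.ChannelId = channelId;
        message.From = new ChannelAccount(id: botId);
        message.Recipient = new ChannelAccount(id: userId);
        message.Conversation = new ConversationAccount(id: conversationId);
        message.Text = resultText;

        await connector.Conversations.SendToConversationAsync((Activity)message);
    }
}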
Edit: This blog post also explains one method for handling long operations within a bot, using Azure Queue storage and an Azure Function that processes the operation and calls the bot when finished:
Manage a long-running operation
Another option is to process incoming messages, or messages that require long processing, on a background thread. This experimental sample demonstrates some methods using this design:
Immediate Accept Bot

Related

How to bulk read messages from Solace queue

Is it possible to bulk read messages from a Solace queue rather than receiving them one by one in a callback?
Currently the MessageEventHandler receives about 20 messages per minute, which is too slow for our application.
Does anyone have a better solution to speed things up with Solace?
This is a C# application.
We used
ISession.CreateFlow(FlowProperties, IEndpoint, ISubscription,
EventHandler<MessageEventArgs>, EventHandler<FlowEventArgs>)
Passing in a MessageEventHandler which gets the message via MessageEventArgs.Message
queue = CreateQueue();
Flow = Session.CreateFlow(flowProperties, queue, null, OnHandleMessageEvent, OnHandleFlowEvent);
..
void OnHandleMessageEvent(object sender, MessageEventArgs args)
{
    var msgObj = args.Message.BinaryAttachment;
    ..
}
No, there is no API call for a user to read messages in bulk.
By default, the API already obtains messages from the message broker in batches, with each message being individually delivered to the application in the message receive callback.
FlowProperties.WindowSize and FlowProperties.MaxUnackedMessages can change this behavior.
20 messages per minute is extremely slow.
One common reason for slowness is that the application is taking a long time to process messages in the message receive callback ("OnHandleMessageEvent").
Blocking in the message receive callback will prevent the API from delivering another message to the application.
Refer to Do Not Block in Callbacks for details.
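As a rough sketch of that advice, assuming the same SolaceSystems.Solclient.Messaging API shown in the question: the receive callback only hands the message to a worker and returns, so the API dispatch thread is never blocked and keeps delivering the prefetched window. The tuning values shown are illustrative only.

using System.Collections.Concurrent;
using System.Threading.Tasks;
using SolaceSystems.Solclient.Messaging;

class QueueConsumer
{
    private readonly BlockingCollection<IMessage> _pending = new BlockingCollection<IMessage>();
    private IFlow _flow;

    public void Start(ISession session, IQueue queue)
    {
        var flowProperties = new FlowProperties
        {
            AckMode = MessageAckMode.ClientAck, // ack only after processing succeeds
            WindowSize = 255                    // illustrative; the default is usually sufficient
        };

        _flow = session.CreateFlow(flowProperties, queue, null, OnHandleMessageEvent, OnHandleFlowEvent);

        // One worker does the real work; start more for parallel processing.
        Task.Run(ProcessLoop);
    }

    private void OnHandleMessageEvent(object sender, MessageEventArgs args)
    {
        _pending.Add(args.Message); // do NOT process or block here
    }

    private void ProcessLoop()
    {
        foreach (var msg in _pending.GetConsumingEnumerable())
        {
            var payload = msg.BinaryAttachment;
            // ... process payload ...
            _flow.Ack(msg.ADMessageId);
            msg.Dispose();
        }
    }

    private void OnHandleFlowEvent(object sender, FlowEventArgs args)
    {
        // log flow events (up, down, reconnecting, ...)
    }
}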

How can I poll messages from a Solace queue (instead of the default push behavior)?

I'd like to write a parallel execution module based on Solace, and I use a request-reply scheme for this.
I have:
Multiple message consumers, which publish messages into the same queue.
Multiple message producers, which read the queue and create reply messages.
Message execution time is between 10 seconds and 10 minutes.
Queue access type is non-exclusive (i.e. it round-robins between all consumers).
Each producer and consumer is asynchronous, i.e. the Solace API blocks execution only during the connection.
What I'd like to have: if a producer is working on a message, it should not receive any other messages. This is extremely important, because some tasks block an executor for several minutes, while other executors may be free after a couple of seconds.
The scheme below could work, but it contains blocking code, which I'd like to avoid:
while (true)
{
    var inputMessage = flow.ReceiveMsg(/* timeout 1s */ 1_000); // <--- blocking code, I'd like to avoid it
    flow.Ack(inputMessage.ADMessageId);
    var reply = await ProcessMessageAsync(inputMessage); // execute plus handle exceptions
    session.SendReply(inputMessage, reply);
}
Messages are only pushed to the consuming applications.
That being said, your desired behavior can be obtained by setting "max-delivered-unacked-msgs-per-flow" on your queue to 1.
This means that each consumer bound to the queue is only allowed one outstanding unacknowledged message.
The next message will only be sent to the consumer after it has acknowledged the current message.
Details about this feature can be found here.
Do note that your code snippet does not appear to be valid:
IFlow.ReceiveMsg is only used in transacted sessions, which use ITransactedSession.Commit to acknowledge messages.
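For a client-side sketch of the same idea, with the assumption (mine, not the answer's) that FlowProperties.MaxUnackedMessages set to 1 behaves equivalently to the broker-side queue setting for this purpose: with client acknowledgement, the next message is only delivered after the current one has been acked.

using System;
using System.Threading.Tasks;
using SolaceSystems.Solclient.Messaging;

static IFlow BindSingleMessageConsumer(ISession session, IQueue queue,
    Func<IMessage, Task<IMessage>> processAsync)
{
    IFlow flow = null;

    var flowProperties = new FlowProperties
    {
        AckMode = MessageAckMode.ClientAck,
        MaxUnackedMessages = 1 // at most one in-flight message per consumer
    };

    flow = session.CreateFlow(flowProperties, queue, null,
        async (sender, args) =>
        {
            var reply = await processAsync(args.Message); // may take minutes; exception handling omitted
            session.SendReply(args.Message, reply);
            flow.Ack(args.Message.ADMessageId); // ack last, so the next message is only delivered now
        },
        (sender, args) => { /* log flow events */ });

    return flow;
}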

Handling Azure Service Bus Queue messages with Azure function

So we are in a position where we would like to offload some processing in our application to give a better user experience while still accomplishing those heavy tasks, and we have found our way to Azure Service Bus queues.
I understand how to push data to the queue and the basic idea behind message queues, but what I am struggling to understand is how to handle the messages when they come in. It sounds like there should be some way to implement an Azure Function that listens for incoming messages, but how do I do that without constant polling? I understand you can subscribe to the queue with OnMessage, but how does that work with an Azure Function?
For example, currently we are doing something like this:
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);
BrokeredMessage message;

// Keep receiving until a 30-second receive returns nothing (i.e. the queue stayed empty)
while ((message = client.Receive(new TimeSpan(hours: 0, minutes: 0, seconds: 30))) != null)
{
    Console.WriteLine(string.Format("Message received: {0}, {1}, {2}", message.SequenceNumber, message.Label, message.MessageId));
    message.Complete();
    Console.WriteLine("Processing message (sleeping...)");
    Thread.Sleep(1000);
}

Console.WriteLine("Finished listening. Press ENTER to exit program");
Console.ReadLine();
But in this case we are just simulating polling, right? This just doesn't feel like a good solution. Am I thinking about this wrong in my design?
Azure Service Bus works by pushing new messages to connected clients instead of having the clients poll the queue.
With the Service Bus API, you can use the OnMessage method to set up a message pump, but if you are using Azure Functions, this is all done for you by the Service Bus trigger.
You simply configure the Azure Function to point at the queue you want to listen on. When a new message is added to the queue, your function is triggered and the message is passed into it.
Take a look at the Service Bus trigger example:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger-sample
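For example, a minimal queue-triggered function might look like this (a sketch for the C# class-library model on newer Functions runtimes; the queue name "myqueue" and the connection setting name "ServiceBusConnection" are placeholders for your own values):

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessQueueMessage
{
    [FunctionName("ProcessQueueMessage")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The Functions runtime owns the message pump, locking, and completion;
        // your code only handles the message body.
        log.LogInformation($"Service Bus queue message received: {message}");
    }
}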

What happens when you try to cancel a scheduled Azure service bus message that has already been enqueued?

I'm trying to come up with the best way to schedule a message for an Azure service bus queue or topic while leaving the option open for immediately sending a message instead of the scheduled message. I want to make sure I can protect myself against creating a duplicate message if I try to send the replacement message right at or after the scheduled time of the first message.
What will happen if I try to cancel a scheduled message with CancelScheduledMessageAsync (for both QueueClient and TopicClient classes) after the message has already been enqueued? Will it throw an exception?
According to your description, I found a blog post (Canceling Scheduled Messages) discussing a similar issue.
Before version 3.3.1, a scheduled message could not be canceled prior to becoming visible, and any attempt to access its value would result in an InvalidOperationException. Therefore, any messages scheduled in the future and no longer needed were "stuck" on the broker until that later time.
With Microsoft Azure Service Bus >= 3.3.1, QueueClient or TopicClient can be used to schedule a message and cancel it later.
Also, I have tested it on my side with the following code:
BrokeredMessage brokerMsg = new BrokeredMessage("Hello World!!!");

// Schedule the message 30 seconds out and keep the sequence number for later cancellation
long sequenceNumber = await queueClient.ScheduleMessageAsync(brokerMsg, DateTimeOffset.UtcNow.AddSeconds(30));

// Wait past the scheduled time, so the message has already become active
await Task.Delay(TimeSpan.FromMinutes(1));

// Cancel scheduled message
await queueClient.CancelScheduledMessageAsync(sequenceNumber);
I logged into the Azure portal and checked ACTIVE MESSAGE COUNT and SCHEDULED MESSAGE COUNT. I could cancel the scheduled message before it became active, but if I canceled the scheduled message via the sequenceNumber after it had become active, I received an exception.
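So if you may cancel at or after the scheduled time, it is worth guarding the call. A rough sketch, assuming the same Microsoft.ServiceBus.Messaging client as above (the exact exception type raised for an already-enqueued scheduled message may vary by client version, so the catch here is deliberately broad):

using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static async Task<bool> TryCancelScheduledMessageAsync(QueueClient queueClient, long sequenceNumber)
{
    try
    {
        await queueClient.CancelScheduledMessageAsync(sequenceNumber);
        return true; // canceled before the message became active
    }
    catch (MessagingException ex)
    {
        // Most likely the scheduled message has already become active (or was consumed),
        // so there is nothing left to cancel; decide here whether to suppress or rethrow.
        Console.WriteLine($"Cancel failed: {ex.Message}");
        return false;
    }
}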

Azure Service Bus, determine if OnMessage stops processing

Using Azure Service Bus and the OnMessage call, I am looking for a way to determine if the OnMessage event pump stops reading from the queue.
Our connection to OnMessage is configured as below:
protected virtual void DoSubscription(string queueName, Func<QueueRequest, bool> callback)
{
    var client = GetClient(queueName, PollingTimeout);
    var transformCallback = new Action<BrokeredMessage>((message) =>
    {
        try
        {
            var request = message.ToQueueRequest();
            if (callback(request))
            {
                message.Complete();
            }
            else
            {
                message.Abandon();
                Log.Warn("DoSubscription: Message Failed to Process Gracefully: {0}{1}", Environment.NewLine, JsonConvert.SerializeObject(request));
            }
        }
        catch (Exception ex)
        {
            Log.Error("DoSubscription: Message Failed to Process With Exception:", ex);
            message.Abandon();
        }
    });

    var options = new OnMessageOptions
    {
        MaxConcurrentCalls = _config.GetInt("MaxThreadsPerQueue"),
        AutoComplete = false,
        AutoRenewTimeout = new TimeSpan(0, 0, 1)
    };
    options.ExceptionReceived += OnMessageError;

    client.OnMessage(transformCallback, options);
}
The problem we are encountering: after a period of time with no messages being queued, new messages that are then queued fail to be picked up by the OnMessage event pump until the application is restarted.
I realize there are ways to do this using Worker Roles; however, for monitoring and management purposes we decided to implement this in the Application Start of a web app.
So, after a call with Microsoft's Azure support team: there is not an event to trap when OnMessage or OnMessageAsync errors. As these are not blocking calls (they start the event pump and return to the executing thread), this makes it challenging to determine whether OnMessage* is doing its job.
Suggestions from Microsoft were:
Create your own implementation of the QueueClientBase class, which exposes the OnClose and other On* methods that you could handle. However, in doing this you have to handle the message envelope yourself.
Use Receive in a separate thread loop, where you can trap errors yourself and retry immediately (see the sketch below).
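A rough sketch of the second suggestion, reusing the QueueClient, BrokeredMessage, and logging style from the code above (namespaces: System.Threading, Microsoft.ServiceBus.Messaging); receive failures surface as exceptions you can log and retry instead of an event pump dying silently:

private void ReceiveLoop(QueueClient client, Func<BrokeredMessage, bool> handle, CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        try
        {
            var message = client.Receive(TimeSpan.FromSeconds(30));
            if (message == null) continue; // receive timed out, queue was empty; try again

            if (handle(message))
                message.Complete();
            else
                message.Abandon();
        }
        catch (Exception ex)
        {
            // Unlike OnMessage, failures here are visible: log, back off, and keep the loop alive.
            Log.Error("ReceiveLoop: receive failed, retrying", ex);
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}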
However, I did explore some bulletproofing of OnMessage and discovered a few things that have eased my fears.
OnMessage is incredibly fault-tolerant.
I unplugged the ethernet cable from my laptop with wireless turned off, which broke OnMessage's connection to the Service Bus queue. After waiting 10 minutes I plugged the ethernet cable back in, and OnMessage immediately began processing the queued items.
OnMessage is also surprisingly stable. It has been running inside the global.asax.cs Application Start (to abbreviate) for days on end with a factory IdleTimeout set to 24 hours, without the web application being restarted for 72 hours.
All-in-all I'm going to continue using OnMessage/OnMessageAsync for now and keep an eye on it. I will update this if I see issues that change my opinion of OnMessage.
Aside - Make sure, if you are using OnMessage for permanent listening in an Azure Web Site, that you set the "Always On" configuration option to "On". Otherwise, unless a web request comes in, OnMessage will be disposed and messages will no longer be processed until the web application is reawakened by an HTTP request.
