How do I handle long-running tasks on a bot so the client doesn't retry sending the message again after 15 seconds?
I have a bot built with Bot Framework v3 and connect the client via Direct Line.
The Direct Line channel connector itself does not retry sending messages. If it does not receive an ack within 15 seconds of sending a message to your bot, it will return a Gateway Timeout.
If you are using the DirectLineClient, you can override the retry policy, ensuring the client does not retry messages:
// Credentials built from your Direct Line secret
DirectLineClientCredentials creds = new DirectLineClientCredentials(directLineSecret);
DirectLineClient directLineClient = new DirectLineClient(new Uri("https://directline.botframework.com"), creds);

// A retry count of 0 means transient HTTP errors are not retried
directLineClient.SetRetryPolicy(
    new Microsoft.Rest.TransientFaultHandling.RetryPolicy(
        new Microsoft.Rest.TransientFaultHandling.HttpStatusCodeErrorDetectionStrategy(), 0));
If you have a long-running process that takes more than 15 seconds, consider queuing the message somewhere so you can acknowledge the call immediately, then process the message on a background thread. Sending the result back later is conceptually called Proactive Messaging. More information can be found here: https://learn.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-proactive-messages?view=azure-bot-service-3.0
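For illustration, here is a rough sketch of that pattern in a v3 MessagesController. The LongRunningWorkAsync helper is a hypothetical placeholder for your slow operation, and a durable queue (as in the links below) is more robust than an in-process background thread:

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Hosting;
using System.Web.Http;
using Microsoft.Bot.Connector;

public class MessagesController : ApiController
{
    public HttpResponseMessage Post([FromBody] Activity activity)
    {
        if (activity.Type == ActivityTypes.Message)
        {
            // Hand the slow work to a background thread so we can ack right away.
            HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
            {
                string result = await LongRunningWorkAsync(activity.Text);

                // Send the result proactively into the same conversation.
                MicrosoftAppCredentials.TrustServiceUrl(activity.ServiceUrl);
                var connector = new ConnectorClient(new Uri(activity.ServiceUrl));
                var reply = activity.CreateReply(result);
                await connector.Conversations.ReplyToActivityAsync(reply);
            });
        }

        // Returning immediately keeps Direct Line from hitting its 15-second timeout.
        return Request.CreateResponse(HttpStatusCode.Accepted);
    }

    private static Task<string> LongRunningWorkAsync(string input)
    {
        // Placeholder for the actual >15 second operation.
        return Task.FromResult("All done!");
    }
}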
Edit: This blog post also explains one method for handling long operations within a bot, by using Azure Queue storage and an Azure Function which processes the operation and calls the bot when finished:
Manage a long-running operation
Another option is to process incoming messages, or messages that take a long time to process, on a background thread. This experimental sample demonstrates some methods using this design:
Immediate Accept Bot
I am using NServiceBus to create an endpoint.
The endpoint listens for an event, does some calculation, and then publishes the result (success or failure) to other endpoints.
I know that NServiceBus supports immediate retries and delayed retries, and that they are configurable.
Now, I want to publish a failure result event to the other endpoints after all retries are exhausted (before the message is sent to the error queue).
public async Task Handle(MyEvent message, IMessageHandlerContext context)
{
    Console.WriteLine($"Received MyEvent, ID = {message.Id}");

    // Connect to other services to get data and do some calculation
    Thread.Sleep(1000);

    Console.WriteLine($"Processed MyEvent, ID = {message.Id}");
    await context.Publish(new MyEventResult { IsSucceed = true });
}
Above is my current code. It publishes a successful result if no exception is thrown. But if a fatal exception occurs, I don't know how to publish a failure result event before the message is sent to the error queue.
Thanks in advance.
Notes: I am using NServiceBus 6.4.3
I'm not sure why you want this, but have you looked at NServiceBus sagas? They are intended to be used when you have to do blocking IO via (external) services. You can take alternative actions based on whether a specific task has been performed within an allocated period, or when the returned result was incorrect.
https://docs.particular.net/nservicebus/sagas/
See the following sample of a saga:
https://docs.particular.net/samples/saga/simple/
The following is a sample showing the usage of saga timeouts. If a specific task has not been performed within a specific duration, an alternative action can be performed, like publishing an event or performing a ReplyToOriginator:
https://docs.particular.net/nservicebus/sagas/timeouts
https://docs.particular.net/nservicebus/sagas/reply-replytooriginator-differences
https://docs.particular.net/nservicebus/sagas/#notifying-callers-of-status
By using sagas you are making your process explicit. I would avoid hooking into the recovery mechanism for this.
The recovery mechanism is meant to deal with transient errors like network connectivity issues, database deadlocks, etc., but not with expected failure results. You should process these properly and continue your modeled process along its unhappy path.
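To make that concrete, below is a minimal sketch of a saga that publishes the failure event from a timeout rather than from the recovery pipeline. It assumes the NServiceBus 6 saga API; the saga, data and timeout class names are illustrative, and it assumes MyEvent.Id is a Guid (adjust the correlation property type to whatever your event uses):

using System;
using System.Threading.Tasks;
using NServiceBus;

public class CalculationSagaData : ContainSagaData
{
    public Guid CalculationId { get; set; }
}

public class CalculationTimeout
{
}

public class CalculationSaga : Saga<CalculationSagaData>,
    IAmStartedByMessages<MyEvent>,
    IHandleTimeouts<CalculationTimeout>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<CalculationSagaData> mapper)
    {
        mapper.ConfigureMapping<MyEvent>(m => m.Id).ToSaga(s => s.CalculationId);
    }

    public async Task Handle(MyEvent message, IMessageHandlerContext context)
    {
        // Give the calculation a deadline instead of relying on the retry mechanism.
        await RequestTimeout<CalculationTimeout>(context, TimeSpan.FromMinutes(5));
        // ... kick off the calculation, e.g. send a command to a worker endpoint ...
    }

    public async Task Timeout(CalculationTimeout state, IMessageHandlerContext context)
    {
        // The deadline passed without a result: publish the failure explicitly,
        // as part of the modeled (unhappy) path.
        await context.Publish(new MyEventResult { IsSucceed = false });
        MarkAsComplete();
    }
}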
So we are in the position where we'd like to offload some processing in our application to give a better user experience while still accomplishing those heavy tasks, and we have found our way to Azure Service Bus queues.
I understand how to push data to the queue and the basic idea behind message queues, but what I am struggling to understand is how to handle messages when they come in. Thinking about it, it sounds like there should be some way to implement an Azure Function that runs whenever a message comes in, but how do I do that without constant polling? I understand you can subscribe to the queue with OnMessage, but how does that work with an Azure Function?
For example, currently we are doing something like this:
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

// Receive blocks for up to 30 seconds waiting for the next message,
// and returns null when the timeout expires
BrokeredMessage message;
while ((message = client.Receive(TimeSpan.FromSeconds(30))) != null)
{
    Console.WriteLine("Message received: {0}, {1}, {2}", message.SequenceNumber, message.Label, message.MessageId);
    message.Complete();
    Console.WriteLine("Processing message (sleeping...)");
    Thread.Sleep(1000);
}

Console.WriteLine("Finished listening. Press ENTER to exit program");
Console.ReadLine();
But in this case we are just simulating polling, right? This just doesn't feel like a good solution. Am I thinking about this wrong in my design?
Azure Service Bus works by pushing new messages to connected clients instead of having clients poll the queue.
With the Service Bus API, you can use the OnMessage method to set up a message pump, but if you are using Azure Functions, this is all done for you by the Service Bus trigger.
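For the plain-SDK case, the OnMessage pump replaces the receive loop from the question; a minimal sketch, reusing the same connection string and queue name:

var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

var options = new OnMessageOptions
{
    AutoComplete = false,    // we call Complete() ourselves below
    MaxConcurrentCalls = 1   // raise this to process messages in parallel
};

// The client library runs the pump and invokes this callback per message; no polling loop.
client.OnMessage(message =>
{
    Console.WriteLine("Message received: {0}", message.MessageId);
    message.Complete();
}, options);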
You simply configure the Azure Function to point at the queue you want to listen on. When a new message is added to the queue, your function is triggered and the message is passed into it.
Take a look at the Service Bus trigger example:
https://learn.microsoft.com/en-us/azure/azure-functions/functions-bindings-service-bus#trigger-sample
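For illustration, a queue-triggered function might look like the sketch below. The queue name "myqueue" and the "ServiceBusConnection" app setting are placeholders, and the exact attribute shape depends on your Functions runtime version:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class QueueListener
{
    [FunctionName("QueueListener")]
    public static void Run(
        [ServiceBusTrigger("myqueue", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // The Functions runtime owns the message pump and completes the message
        // automatically when this method returns without throwing.
        log.LogInformation($"Message received: {message}");
    }
}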
I am creating a simple Publisher/Subscriber using MassTransit and RabbitMQ.
The Publisher has the following code to initialize the bus:
// Create the bus
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });
});

// Start the bus and publish; Publish is async, so block until it completes
// so the process doesn't exit before the message reaches the broker
bus.Start();
bus.Publish<IPersonLogin>(new { FirstName = "John", LastName = "Smith" }).Wait();
And the Subscriber has this code for initialization:
var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri("rabbitmq://localhost/"), h =>
    {
        h.Username("guest");
        h.Password("guest");
    });

    // Listen on the "person_login" queue and hand messages to the consumer
    cfg.ReceiveEndpoint(host, "person_login", e =>
    {
        e.Consumer<PersonLoginConsumer>();
    });
});
If I shut down the Subscriber and publish two messages, the messages are not lost, and as soon as the Subscriber comes back to life the messages are processed.
So my questions are:
How do I ensure that a message stays in the RabbitMQ queue until a Subscriber comes up and picks it up?
What happens if the server is rebooted while some messages have not been processed by any Subscriber? Do they get lost, or do they get processed as soon as the Subscriber comes alive after the reboot?
Is this the correct pattern to ensure that every single message is processed, or should I use a different strategy?
By default, any message sitting in a queue will remain there until one of three things happens:
The message is consumed
The message "time to live" (TTL) expires (the default is to live forever)
The server crashes or restarts
If you have a queue full of messages, they will generally stick around until one of those three things happens. Hopefully your consumers will be online soon enough to consume and process the messages.
You would only set a time to live (TTL) if you want messages to be automatically deleted after a period of time (assuming they are not consumed first).
As for crashes: a message can survive a crash/restart if you make the message persistent to disk. There is still a chance the message will be lost if the server crashes before the message is routed from the exchange to the queue, though.
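To make the persistence point concrete, here is a sketch with the raw RabbitMQ .NET client; the queue name and body are placeholders. Note that MassTransit declares durable queues and publishes persistent messages by default, so this is normally handled for you:

using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    // durable: true -> the queue definition itself survives a broker restart
    channel.QueueDeclare(queue: "person_login", durable: true,
                         exclusive: false, autoDelete: false, arguments: null);

    var props = channel.CreateBasicProperties();
    props.Persistent = true; // the message is written to disk, not just kept in memory

    channel.BasicPublish(exchange: "", routingKey: "person_login",
                         basicProperties: props,
                         body: Encoding.UTF8.GetBytes("hello"));
}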
Off the top of my head:
If there aren't any subscribers, RabbitMQ won't know which queue a message should be delivered to, and the message will be undeliverable. (I'm not sure whether it gets moved to an error queue or simply skipped.)
If the exchanges are already there, the message will be placed in the queue of the consumer that subscribed to the event. So the endpoint hosting your consumer can be down and the message will still be delivered.
When the message is delivered to the queue, the consumer will pick it up and process it. If an exception occurs while processing the message, it will be moved to the endpoint_error queue (depending on your RetryPolicy). Deploy a fix and move your message back into the main queue, and the messages will be processed as if nothing had happened.
Good reads for common issues and gotchas:
Under the Hood
Common Gotcha's
.NET 3.5, VS2008, WCF service using BasicHttpBinding
I have a WCF service hosted in a Windows service. When the Windows service shuts down, due to upgrades, scheduled maintenance, etc, I need to gracefully shut down my WCF service. The WCF service has methods that can take up to several seconds to complete, and typical volume is 2-5 method calls per second. I need to shut down the WCF service in a way that allows any previously call methods to complete, while denying any new calls. In this manner, I can reach a quiet state in ~ 5-10 seconds and then complete the shutdown cycle of my Windows service.
Calling ServiceHost.Close seems like the right approach, but it closes client connections right away, without waiting for any methods in progress to complete. My WCF service completes its method, but there is no one to send the response to, because the client has already been disconnected. This is the solution suggested by this question.
Here is the sequence of events:
Client calls method on service, using the VS generated proxy class
Service begins execution of service method
Service receives a request to shut down
Service calls ServiceHost.Close (or BeginClose)
Client is disconnected, and receives a System.ServiceModel.CommunicationException
Service completes service method.
Eventually service detects it has no more work to do (through application logic) and terminates.
What I need is for the client connections to be kept open so the clients know that their service methods completed successfully. Right now they just get a closed connection and don't know whether the service method completed successfully or not. Prior to using WCF, I was using sockets and was able to do this by controlling the Socket directly (i.e. stop the Accept loop while still doing Receive and Send).
It is important that the host HTTP port is closed so that the upstream firewall can direct traffic to another host system, but existing connections are left open to allow the existing method calls to complete.
Is there a way to accomplish this in WCF?
Things I have tried:
ServiceHost.Close() - closes clients right away
ServiceHost.ChannelDispatchers - call Listener.Close() on each - doesn't seem to do anything
ServiceHost.ChannelDispatchers - call CloseInput() on each - closes clients right away
Override ServiceHost.OnClosing() - lets me delay the Close until I decide it is ok to close, but new connections are allowed during this time
Remove the endpoint using the technique described here. This wipes out everything.
Running a network sniffer to observe ServiceHost.Close(). The host just closes the connection, no response is sent.
Thanks
Edit: Unfortunately I cannot implement an application-level advisory response that the system is shutting down, because the clients in the field are already deployed. (I only control the service, not the clients)
Edit: I used the Redgate Reflector to look at Microsoft's implementation of ServiceHost.Close. Unfortunately, it calls some internal helper classes that my code can't access.
Edit: I haven't found the complete solution I was looking for, but Benjamin's suggestion to use an IDispatchMessageInspector to reject requests before they enter the service method came closest.
Guessing:
Have you tried to grab the binding at runtime (from the endpoints), cast it to BasicHttpBinding and (re)define the properties there?
Best guesses from me:
OpenTimeout
MaxReceivedMessageSize
ReaderQuotas
Those can be set at runtime according to the documentation and seem to allow the desired behaviour (blocking new clients). This wouldn't help with the "upstream firewall/load balancer needs to reroute" part though.
Last guess: Can you (the documentation says yes, but I'm not sure what the consequences are) redefine the address of the endpoint(s) to a localhost address on demand?
This might work as a "port close" for the firewall host as well, if it doesn't kill off all clients anyway...
Edit: While experimenting with the suggestions above in a limited test, I put together a message inspector/behavior combination that looks promising for now:
using System;
using System.Collections.ObjectModel;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;

public class WCFFilter : IServiceBehavior, IDispatchMessageInspector
{
    private readonly object blockLock = new object();
    private bool blockCalls = false;

    public bool BlockRequests
    {
        get
        {
            lock (blockLock)
            {
                return blockCalls;
            }
        }
        set
        {
            lock (blockLock)
            {
                blockCalls = value;
            }
        }
    }

    public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
    }

    public void AddBindingParameters(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase, Collection<ServiceEndpoint> endpoints, BindingParameterCollection bindingParameters)
    {
    }

    // Register this inspector with every endpoint on the host.
    public void ApplyDispatchBehavior(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
    {
        foreach (ChannelDispatcher channelDispatcher in serviceHostBase.ChannelDispatchers)
        {
            foreach (EndpointDispatcher endpointDispatcher in channelDispatcher.Endpoints)
            {
                endpointDispatcher.DispatchRuntime.MessageInspectors.Add(this);
            }
        }
    }

    // While blocking, close incoming requests before they reach the service method.
    public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
    {
        lock (blockLock)
        {
            if (blockCalls)
                request.Close();
        }
        return null;
    }

    public void BeforeSendReply(ref Message reply, object correlationState)
    {
    }
}
Forget about the crappy lock usage etc., but using this with a very simple WCF test (returning a random number with a Thread.Sleep inside) like this:
var sh = new ServiceHost(new WCFTestService(), baseAddresses);
var filter = new WCFFilter();
sh.Description.Behaviors.Add(filter);
and later flipping the BlockRequests property I get the following behavior (again: This is of course a very, very simplified example, but I hope it might work for you anyway):
// I spawn 3 threads
Requesting a number..
Requesting a number..
Requesting a number..
// Server side log for one incoming request
Incoming request for a number.
// Main loop flips the "block everything" bool
Blocking access from here on.
// 3 more clients after that, for good measure
Requesting a number..
Requesting a number..
Requesting a number..
// First request (with server side log, see above) completes successfully
Received 1569129641
// All other messages never made it to the server yet and die with a fault
Error in client request spawned after the block.
Error in client request spawned after the block.
Error in client request spawned after the block.
Error in client request before the block.
Error in client request before the block.
Is there an API for the upstream firewall? The way we do this in our application is to stop new requests coming in at the load balancer level, and then, when all of the requests have finished processing, we restart the servers and services.
My suggestion is to signal an event when your service goes into a stopping state; use the OnStop method to set the event indicating that the service is stopping.
Your normal service loop should check whether this event is set. If it is, return a "service is stopping" message to the calling client and do not allow it to enter your normal routine.
While you still have active processes running, let them finish before the OnStop method moves on to killing the WCF host (ServiceHost.Close).
Another way is to keep track of the active calls by implementing your own reference counter. You will then know when you can stop the ServiceHost: once the reference counter hits zero, combined with the check above for whether the stop event has been initiated.
Hope this helps.
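A hypothetical sketch of that reference-counter idea (the class and member names are mine, not a WCF API; each service method would call TryEnter at the top and Exit in a finally block):

using System;
using System.Threading;

public static class ActiveCallTracker
{
    private static int activeCalls;
    private static volatile bool stopping;

    // Call at the top of each service method; refuse the call once stopping.
    public static bool TryEnter()
    {
        if (stopping)
            return false;
        Interlocked.Increment(ref activeCalls);
        return true;
    }

    // Call from a finally block at the end of each service method.
    public static void Exit()
    {
        Interlocked.Decrement(ref activeCalls);
    }

    // Call from OnStop: stop accepting new calls, then wait for in-flight
    // calls to drain (or the timeout to expire) before ServiceHost.Close().
    public static void StopAndDrain(TimeSpan timeout)
    {
        stopping = true;
        DateTime deadline = DateTime.UtcNow + timeout;
        while (Thread.VolatileRead(ref activeCalls) > 0 && DateTime.UtcNow < deadline)
            Thread.Sleep(100);
    }
}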
I haven't implemented this myself, so YMMV, but I believe what you're looking to do is pause the service prior to fully stopping it. Pausing can be used to refuse new connections while completing existing requests.
In .NET it appears the approach to pausing the service is to use the ServiceController.
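I haven't verified this against WCF either, but the shape would be something like the sketch below. "MyWindowsService" is a placeholder, and the Windows service itself has to opt in to pause/continue:

using System.ServiceProcess;

public class MyWindowsService : ServiceBase
{
    public MyWindowsService()
    {
        CanPauseAndContinue = true; // required, or Pause() will throw
    }

    protected override void OnPause()
    {
        // Flip whatever flag your dispatcher/behavior checks to refuse new calls.
    }

    protected override void OnContinue()
    {
        // Resume accepting calls.
    }
}

// From an admin tool or script:
// new ServiceController("MyWindowsService").Pause();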
Does this WCF Service authenticate the user in any way?
Do you have any "Handshake" method?
I think you might need to write your own implementation with a helper class that keeps track of all running requests; then, when a shutdown is requested, you can find out whether anything is still running and delay the shutdown based on that... (using a timer, maybe?)
Not sure about blocking further incoming requests... you should have a global variable that tells your application whether a shutdown was requested, so you can deny further requests...
Hope this may help you.
Maybe you should set the ServiceBehaviorAttribute and the OperationBehaviorAttribute. Check these on MSDN.
In addition to the answer from Matthew Steeples:
Most serious load balancers, like an F5, have a mechanism to identify whether a node is alive. In your case it seems to check whether a certain port is open, but alternative ways can be configured easily.
So you could expose two services: the real service that serves requests, and a monitoring "heartbeat"-like service. When transitioning into maintenance mode, you would first take the monitoring service offline, which takes the load away from the node, and only shut down the real service after all requests have finished processing. Sounds a bit weird, but it might help in your scenario...
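As a rough sketch of that split (all names hypothetical; a real load balancer probe would more likely be an HTTP health check):

using System.ServiceModel;

[ServiceContract]
public interface IHeartbeat
{
    [OperationContract]
    bool IsAlive();
}

public class HeartbeatService : IHeartbeat
{
    // Flip this before shutting down the real service; once the load balancer's
    // probe sees the node as unhealthy, it stops routing new requests here.
    public static volatile bool InMaintenance;

    public bool IsAlive()
    {
        return !InMaintenance;
    }
}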