Azure WebJob timing out, not receiving response from Web API - C#

I have an Azure WebJob which calls a Web API. The Web API sends back an acknowledgement after 4-5 minutes, so I have set the HttpClient timeout to 10 minutes.
Now, if the Web API returns the response in less than 4 minutes, everything works fine. But if it returns the response after, say, 4 minutes 30 seconds, the WebJob never receives the acknowledgement. Instead, the HttpClient on the WebJob waits the full 10 minutes and then times out. Is there some kind of timeout on the WebJob that I'm missing?

There is a 230 second timeout for requests that do not send any data back. Please see David Ebbo's answer in the following thread:
https://social.msdn.microsoft.com/Forums/en-US/17305ddc-07b2-436c-881b-286d1744c98f/503-errors-with-large-pdf-file?forum=windowsazurewebsitespreview
There is a 230 second (i.e. a little less than 4 minutes) timeout for requests that are not sending any data back. After that, the client gets the 500 you saw, even though in reality the request is allowed to continue server side.
So the issue of not receiving a response from the Web API is related to this timeout limit of Azure Web App. I also tested it on my side, even though I set executionTimeout to 10 minutes in web.config:
<httpRuntime targetFramework="4.6" executionTimeout="600" />
If the service side takes more than 230 seconds to send the response, I still get a 500 error on the client side:
public IEnumerable<string> Get()
{
    Thread.Sleep(270000); // 4.5 minutes, longer than the 230 second limit
    return new string[] { "value1", "value2" };
}
I suggest you follow Thomas's suggestion: write the response to a queue from your Web API and have the WebJob pick up the response it needs from the queue.
So, is there no method to bypass this timeout?
The 230 second limit is also documented in the following article:
there is a general idle request timeout that will cause clients to get disconnected after 230 seconds.
https://github.com/projectkudu/kudu/wiki/Configurable-settings
I haven't found any way to modify this timeout for Azure Web App. Please try the workaround I mentioned above.
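For illustration, here is a minimal sketch of that queue-based workaround, assuming Azure Storage Queues via the WindowsAzure.Storage package; the queue name, connection string and payload are placeholders, not something from the original question.
using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;
using Newtonsoft.Json;

public static class AcknowledgementQueue
{
    // Web API side: finish the long-running work, then drop the result on a queue
    // instead of keeping the HTTP request open past the 230 second limit.
    public static async Task PublishResultAsync(string connectionString, object result)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("acknowledgements"); // hypothetical queue name
        await queue.CreateIfNotExistsAsync();
        await queue.AddMessageAsync(new CloudQueueMessage(JsonConvert.SerializeObject(result)));
    }

    // WebJob side: poll the queue for the acknowledgement instead of waiting on HttpClient.
    public static async Task<string> WaitForResultAsync(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("acknowledgements");
        CloudQueueMessage message = null;
        while (message == null)
        {
            message = await queue.GetMessageAsync();
            if (message == null)
                await Task.Delay(TimeSpan.FromSeconds(10)); // simple polling interval
        }
        await queue.DeleteMessageAsync(message);
        return message.AsString;
    }
}
The WebJob then calls the API to kick off the work, returns immediately, and waits on the queue instead of on the HTTP response.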

If you are executing a long-running task or function in the context of BrokeredMessages, they have a lock timeout hard-coded to 5 minutes.
Using a Task, I place my long-running function inside one and pass it to the following method, which runs it while periodically "relocking" the BrokeredMessage:
/// <summary>
/// Maximum time lockout for a BrokeredMessage is 5 minutes. This allows the
/// timer to relock every 4 minutes while waiting on the Task parameter to complete.
/// </summary>
/// <param name="task"></param>
/// <param name="message"></param>
private void WaitAndRelockMessage( Task task, BrokeredMessage message )
{
    // Renew the lock every 4 minutes (240,000 ms) until the task completes.
    var myTimer = new Timer( new TimerCallback( RelockMessage ), message, 240000, 240000 );
    task.Wait();
    myTimer.Dispose();
}

private void RelockMessage( object message )
{
    try { ((BrokeredMessage)message).RenewLock(); }
    catch( OperationCanceledException ) { }
}
Sample Usage:
var task = Task.Run( async () => { await m_service.doWork(); } );
WaitAndRelockMessage(task, message);

Related

Is it possible for an HTTP server to receive requests out of order even if they are sent sequentially?

(This discussion might not be specific to C#...)
I have a C# method SendMultipleRequests that sends an HTTP POST request 10 times sequentially.
Is it possible for the server to receive the requests out of order?
If my understanding is correct, when the requests are sent concurrently (without await) the server could receive them out of order, but in the example below the client waits for each response to be received before sending the next request, so the server will receive the requests in order.
public async Task SendRequest(int i)
{
    // definition of endpoint is omitted in this example
    var content = new StringContent($"I am {i}-th request");
    await HttpClient.PostAsync(endpoint, content);
}

public async Task SendMultipleRequests()
{
    for (int i = 0; i < 10; i++)
    {
        await SendRequest(i);
    }
}
With await, your app will wait for the task returned by PostAsync to finish before it issues the next request - see the docs for PostAsync, which say "This operation will not block. The returned Task<TResult> object will complete after the whole response (including content) is read." Using await means you will only issue the next request after you've read the content of the previous response.
If you remove the await, your code will queue up ten tasks and start working on them all in some undefined order, so the server will see the requests in an unspecified order. This may be further exacerbated by the fact that the requests may take different routes through the internet, some slower than others. If you do remove the await, you should capture each returned task into a list and then use something like await Task.WhenAll(list) to wait for them all to complete (unless you really want to fire and forget, in which case you can assign them to the discard, _ = client.PostAsync(...), but keeping the tasks allows you to discover and deal with any exceptions that arose). A sketch of that follows below.
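Here is a minimal sketch of that concurrent variant, reusing the SendRequest method from the question:
public async Task SendMultipleRequestsConcurrently()
{
    var tasks = new List<Task>();
    for (int i = 0; i < 10; i++)
    {
        // No await here: all ten requests are started immediately,
        // so the server may receive them in any order.
        tasks.Add(SendRequest(i));
    }

    // Wait for all of them and surface any exceptions that occurred.
    await Task.WhenAll(tasks);
}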

HttpClient GET request cancellation, in case the operation takes longer than x minutes

I am writing an Azure Function which synchronizes data every 15 minutes.
I am targeting .NET Core 3.1 for the Azure Function and I am using HttpClient to send a GET request to their API.
var response = await client.GetStringAsync()
It usually takes about 20 seconds to get the results; however, sometimes their server does not process the request correctly or is just busy, takes too long, or does not return any data. I've tested in Postman: I get status 200 OK, but it keeps loading for many minutes and nothing happens.
However, if you just cancel the operation and try again, it responds fine.
Now my question is: how can I keep track of the time it takes for the result to come back from my client.GetStringAsync() call, and cancel the operation if, let's say, it takes longer than 2 minutes?
If you are using .NET 5, you can use a CancellationTokenSource with a 'cancel after' and pass the token to client.GetStringAsync(), like this:
var cts = new CancellationTokenSource();
cts.CancelAfter(120000); // Cancel after 2 minutes (120,000 ms)

try
{
    var response = await client.GetStringAsync(uri, cts.Token);
}
catch (TaskCanceledException)
{
    Console.WriteLine("\nTask cancelled due to timeout.\n");
}
Then you can wrap the operation in a try-catch block to see if the operation was cancelled by the token.
Reference: Cancel async tasks after a period of time (C#)
UPDATE
On .NET Core, you can specify a timeout for your HttpClient. Note however that this timeout applies to all requests you send using that instance of HttpClient. If you wish to do it that way, you can do as follows:
client.Timeout = TimeSpan.FromMinutes(2);

try
{
    var response = await client.GetStringAsync(uri);
}
catch (TaskCanceledException)
{
    Console.WriteLine("\nRequest cancelled due to timeout.\n");
}
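If you are on an older target framework where GetStringAsync does not accept a CancellationToken, a possible per-request alternative (just a sketch, using the GetAsync overload that does take a token) is:
using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(2)))
{
    try
    {
        // GetAsync has accepted a CancellationToken since .NET Framework 4.5,
        // so this also works before .NET 5.
        var response = await client.GetAsync(uri, cts.Token);
        response.EnsureSuccessStatusCode();
        var body = await response.Content.ReadAsStringAsync();
    }
    catch (TaskCanceledException)
    {
        Console.WriteLine("Request cancelled after 2 minutes.");
    }
}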

ClientBase EndPoint Binding SendTimeout ReceiveTimeout: how to change while debugging

I'm developing a solution with a WCF service and a client that uses the service. Sometimes I'm debugging the service, sometimes the client, and sometimes both.
During debugging I get a TimeoutException with this additional information:
Additional information: The request channel timed out while waiting for a reply after 00:00:59.9950000. Increase the timeout value passed to the call to Request or increase the SendTimeout value on the Binding. The time allotted to this operation may have been a portion of a longer timeout.
The reason is of course that my server is waiting at a breakpoint instead of answering the request.
During debugging I want longer timeouts, preferably without creating a separate configuration for my service client, because if other values of that configuration changed later, whoever changed them would have to remember that a special debugging configuration had been created.
I think it should be something like:
private IMyServiceInterface CreateServiceChannel()
{
    var myServiceClient = new MyServiceClient(); // reads from configuration file

    if (Debugger.IsAttached)
    {
        // Increase timeouts to enable slow debugging
        ...
    }

    return (IMyServiceInterface)myServiceClient;
}
According to MSDN, the Binding.SendTimeout property is used for something else:
SendTimeout gets or sets the interval of time provided for a write operation to complete before the transport raises an exception.
Therefore I'd rather not change this value if not needed.
Is SendTimeout really the best timeout to increase, or is there something like a TransactionTimeout, the timeout between my question and the receipt of the answer?
How to change the timeout programmatically
The article All WCF timeouts explained states that there is indeed something like a transaction timeout: IContextChannel.OperationTimeout
The operation timeout covers the whole service call (sending the request, processing it and receiving a reply). In other words, it defines the maximum time a service call is active from a client’s point of view. If not set, WCF initializes the operation timeout with the configured send timeout.
This explains why the TimeoutException that is thrown advises changing the send timeout.
However, it is possible to change the operation timeout without changing the send timeout:
var myServiceClient = new MyServiceClient(); // reads from configuration file

if (Debugger.IsAttached)
{
    // Increase timeouts to enable slow debugging.
    // InnerChannel is of type IClientChannel, which implements IContextChannel.
    IContextChannel contextChannel = (IContextChannel)myServiceClient.InnerChannel;

    // Set the operation timeout to a long value, for example 3 minutes:
    contextChannel.OperationTimeout = TimeSpan.FromMinutes(3);
}

return (IMyServiceInterface)myServiceClient;

Performance with Web API

I'm having some performance problems with Web API. In my real/production code I make a SOAP web service call; in this sample I just sleep. I have 400+ clients sending requests to the Web API.
I guess it's a problem with Web API, because if I open 5 processes I can handle more requests than when I'm running only one process.
My test async version of the controller looks like this:
[HttpPost]
public Task<HttpResponseMessage> SampleRequest()
{
    return Request.Content.ReadAsStringAsync()
        .ContinueWith(content =>
        {
            Thread.Sleep(Timeout);
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(content.Result, Encoding.UTF8, "text/plain")
            };
        });
}
The sync version looks like this:
[HttpPost]
public HttpResponseMessage SampleRequest()
{
    var content = Request.Content.ReadAsStringAsync().Result;
    Thread.Sleep(Timeout);
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}
My client code for this test looks like this (it is configured to time out after 30 seconds):
for (int i = 0; i < numberOfRequests; i++)
{
    tasks.Add(new Task(() =>
    {
        MakeHttpPostRequest();
    }));
}

foreach (var task in tasks)
{
    task.Start();
}
I was not able to put it here in a nice way, but the table with the results is available on GitHub.
CPU, memory and disk I/O are low. There are always at least 800 available threads (both worker and I/O threads):
public static void AvailableThreads()
{
    int workerThreads;
    int ioThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Available threads {0} ioThreads {1}", workerThreads, ioThreads);
}
I've configured the DefaultConnectionLimit
System.Net.ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;
My question is: why is there a queue to answer those requests?
In every test, the first responses take almost exactly the server's Thread.Sleep() time, but responses get slower as new requests arrive.
Any tip on how I can discover where the bottleneck is?
It is a .NET 4.0 solution, using the self-host option.
Edit: I've also tested with .NET 4.5 and Web API 2.0, and got the same behaviour.
The first requests get an answer almost as soon as the sleep expires; later requests take up to 4x the sleep time to get an answer.
Edit2: Gist of the web api1 implementation and gist of the web api2 implementation
Edit3: The MakeHttpPost method creates a new WebApiClient
Edit4:
If I change the
Thread.Sleep()
to
await Task.Delay(10000);
in the .NET 4.5 version, it can handle all requests, as expected. So I don't think it is related to any network issue.
Since Thread.Sleep() blocks the thread and Task.Delay doesn't, it looks like there's an issue with Web API consuming more threads? But there are available threads in the thread pool...
Edit 5: If I open 5 servers and double the number of clients, the servers can respond to all requests. So it looks like it's not a problem with the number of requests to a server, because I can 'scale' this solution by running many processes on different ports. It's more a problem with the number of requests to the same process.
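For reference, a fully asynchronous version of the test action (just a sketch, assuming Web API 2 on .NET 4.5 and the same Timeout value as above) that avoids blocking a thread pool thread during the wait would look like this:
[HttpPost]
public async Task<HttpResponseMessage> SampleRequest()
{
    var content = await Request.Content.ReadAsStringAsync();

    // Non-blocking wait: the thread is returned to the pool while the delay runs,
    // which is why this version keeps up with many concurrent requests.
    await Task.Delay(Timeout);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}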
How to Check the TCP/IP stack for overloading
Run netstat on the server having the issue and look for any TIME-WAIT, FIN-WAIT-1, FIN-WAIT-2, RST-WAIT and RST-WAIT-2 states. These are half-baked sessions where the stack is waiting for the other side to clean up, or where the other side did send packets in but the local machine could not process them yet, depending on the stack's capacity to do the job.
The kicker is that even sessions showing ESTABLISHED could be in trouble, in that their timeout simply hasn't fired yet.
The symptoms described above are reminiscent of network or TCP/IP stack overload. Very similar behavior is seen when routers get overloaded.

How can I send an AWS SQS message with C# with a custom timeout?

Here's my basic code to send an SQS message from C# using the AWS .NET SDK. How can I give the message a different timeout than the queue default?
public async Task PostMessage(Uri queueUrl, string body)
{
    var request = new SendMessageRequest()
    {
        MessageBody = body,
        QueueUrl = queueUrl.ToString(),
    };

    var result = await this.client.SendMessageAsync(request);
}
I can send a separate API call to extend the timeout of an in-flight message. But I'd like to do this at the time of creation if that's practical to do.
You can't do it with the C# API or with any other API; the message visibility timeout is set globally on the queue.
I would suggest creating two queues, one for short tasks and one for long tasks. That way you can set a different visibility timeout on each queue, as in the sketch below.
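A minimal sketch of that approach, assuming the AWSSDK.SQS package; the queue names and timeout values are only examples:
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public class QueueSetup
{
    private readonly IAmazonSQS client = new AmazonSQSClient();

    // Create two queues whose visibility timeouts match the kind of work they carry.
    public async Task<(string ShortQueueUrl, string LongQueueUrl)> CreateQueuesAsync()
    {
        var shortQueue = await this.client.CreateQueueAsync(new CreateQueueRequest
        {
            QueueName = "short-tasks", // example name
            Attributes = new Dictionary<string, string> { { "VisibilityTimeout", "30" } } // 30 seconds
        });

        var longQueue = await this.client.CreateQueueAsync(new CreateQueueRequest
        {
            QueueName = "long-tasks", // example name
            Attributes = new Dictionary<string, string> { { "VisibilityTimeout", "900" } } // 15 minutes
        });

        return (shortQueue.QueueUrl, longQueue.QueueUrl);
    }
}
The PostMessage method from the question then simply targets whichever queue URL matches how long the message is expected to stay in flight.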
