Performance with Web API - C#

I'm having some performance problems working with Web API. In my real/production code I make a SOAP web service call; in this sample I just sleep. I have 400+ clients sending requests to the Web API.
I suspect it's a problem with Web API, because if I open 5 processes I can handle more requests than when I'm running only one process.
My test async version of the controller looks like this
[HttpPost]
public Task<HttpResponseMessage> SampleRequest()
{
    return Request.Content.ReadAsStringAsync()
        .ContinueWith(content =>
        {
            Thread.Sleep(Timeout);
            return new HttpResponseMessage(HttpStatusCode.OK)
            {
                Content = new StringContent(content.Result, Encoding.UTF8, "text/plain")
            };
        });
}
The sync version looks like this
[HttpPost]
public HttpResponseMessage SampleRequest()
{
    var content = Request.Content.ReadAsStringAsync().Result;
    Thread.Sleep(Timeout);
    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}
My client code for this test looks like this (it is configured to time out after 30 seconds):
for (int i = 0; i < numberOfRequests; i++)
{
    tasks.Add(new Task(() =>
    {
        MakeHttpPostRequest();
    }));
}

foreach (var task in tasks)
{
    task.Start();
}
I was not able to format it nicely here, but the table with the results is available on GitHub.
CPU, memory and disk I/O are all low. There are always at least 800 available threads (both worker and I/O threads):
public static void AvailableThreads()
{
    int workerThreads;
    int ioThreads;
    ThreadPool.GetAvailableThreads(out workerThreads, out ioThreads);
    Console.WriteLine("Available threads {0} ioThreads {1}", workerThreads, ioThreads);
}
I've configured the DefaultConnectionLimit
System.Net.ServicePointManager.DefaultConnectionLimit = Int32.MaxValue;
My question is: why is there a queue to answer those requests?
In every test I start out with a response time almost exactly equal to the server's Thread.Sleep() time, but the responses get slower as new requests arrive.
Any tip on how I can discover where the bottleneck is?
It is a .NET 4.0 solution, using the self-host option.
Edit: I've also tested with .NET 4.5 and Web API 2.0, and got the same behaviour.
The first requests get a response almost as soon as the sleep expires; later ones take up to 4x the sleep time to get an answer.
Edit2: Gist of the web api1 implementation and gist of the web api2 implementation
Edit3: The MakeHttpPost method creates a new WebApiClient
Edit4:
If I change the
Thread.Sleep()
to
await Task.Delay(10000);
in the .NET 4.5 version, it can handle all the requests, as expected. So I don't think this is related to any network issue.
Since Thread.Sleep() blocks the thread and Task.Delay doesn't, it looks like Web API has trouble consuming more threads? But there are available threads in the thread pool...
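For reference, the full .NET 4.5 / Web API 2 version of the action with the non-blocking wait looks roughly like this (same names as the samples above, just with Task.Delay in place of Thread.Sleep):
[HttpPost]
public async Task<HttpResponseMessage> SampleRequest()
{
    // Reading the body and waiting are both awaited, so no thread-pool thread
    // is blocked during the delay (unlike Thread.Sleep in the versions above).
    var content = await Request.Content.ReadAsStringAsync();
    await Task.Delay(Timeout);

    return new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(content, Encoding.UTF8, "text/plain")
    };
}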
Edit 5: If I open 5 server processes and double the number of clients, the servers can respond to all the requests. So it looks like it's not a problem with the number of requests to a server, because I can 'scale' this solution by running many processes on different ports. It's more a problem with the number of requests to the same process.

How to check the TCP/IP stack for overloading
Run netstat on the server having the issue and look for connections stuck in TIME_WAIT, FIN_WAIT_1, FIN_WAIT_2 and similar reset/wait states. These are half-closed sessions where the stack is waiting for the other side to clean up, or where the other side did send a packet but the local machine could not process it yet, depending on the stack's capacity to do the job.
The kicker is that even sessions showing ESTABLISHED could be in trouble; the timeout simply hasn't fired yet.
The symptoms described above are reminiscent of network or TCP/IP stack overload. Very similar behaviour is seen when routers get overloaded.
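If you prefer to check this from code rather than with netstat, a rough sketch using System.Net.NetworkInformation (it counts the local machine's TCP connections per state) would be:
using System;
using System.Linq;
using System.Net.NetworkInformation;

public static class TcpStateCheck
{
    public static void PrintConnectionStates()
    {
        // Enumerate all active TCP connections on this machine and group them by state,
        // e.g. Established, TimeWait, FinWait1, FinWait2, CloseWait.
        var connections = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpConnections();

        foreach (var group in connections.GroupBy(c => c.State).OrderByDescending(g => g.Count()))
        {
            Console.WriteLine("{0}: {1}", group.Key, group.Count());
        }
    }
}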

Related

In .NET, failure to retrieve HTTP resource from W3C web site

Retrieving the resource at http://www.w3.org/TR/xmlschema11-1/XMLSchema.xsd takes around 10 seconds using the following mechanisms:
web browser
curl
Java URL.openConnection()
It's possible that the W3C site is applying some "throttling" - deliberately slowing the response to discourage bulk requests.
Trying to retrieve the same resource from a C# application on .NET, I get a timeout after about 60-70 seconds. I've tried a couple of different approaches, both with the same result:
System.Xml.XmlUrlResolver.GetEntity()
new WebClient().OpenRead(uri)
Anyone have any idea what's going on? Would another API, or some configuration options, solve the problem?
The problem is they are (probably) checking for a User-Agent string. If it's not present, they send you to purgatory. .NET's HTTP clients do not set one by default.
So, give this a shot:
private static readonly HttpClient _client = new HttpClient();

public static async Task TestMe()
{
    using (var req = new HttpRequestMessage(HttpMethod.Get,
        "http://www.w3.org/TR/xmlschema11-1/XMLSchema.xsd"))
    {
        req.Headers.Add("user-agent",
            "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X)");

        using (var resp = await _client.SendAsync(req))
        {
            resp.EnsureSuccessStatusCode();
            var data = await resp.Content.ReadAsStringAsync();
        }
    }
}
No idea why they do this; maybe it's a bug in their back-end? (I sure wouldn't want to leave a socket open longer than necessary for no good reason.) The request still takes 10-15 seconds, but that's better than the 120+ second timeout.

RestRequestAsyncHandle.Abort() won't trigger CancellationToken.IsCancellationRequested

I have two C# apps running simultaneously. One is a console app that sends REST calls to the other, which runs a Nancy self-hosted REST server.
Here's a really basic view of some of the code of both parts.
RestsharpClient:
public async void PostAsyncWhile()
{
    IRestClient RestsharpClient = new RestClient("http://localhost:50001");
    var request = new RestRequest($"/nancyserver/whileasync", Method.POST);

    var asyncHandle = RestsharpClient.ExecuteAsync(request, response =>
    {
        Console.WriteLine(response.Content.ToString());
    });

    await Task.Delay(1000);
    asyncHandle.Abort(); //PROBLEM
}
NancyRestServer:
public class RestModule : NancyModule
{
    public RestModule() : base("/nancyserver")
    {
        Post("/whileasync", async (args, ctx) => await WhileAsync(args, ctx));
    }

    private async Task<bool> WhileAsync(dynamic args, CancellationToken ctx)
    {
        do
        {
            await Task.Delay(100);
            if (ctx.IsCancellationRequested) //PROBLEM
            {
                break;
            }
        }
        while (true);

        return true;
    }
}
The client sends a command to start a while loop on the server, waits for 1 second and then aborts the async call.
The problem I'm having is that asyncHandle.Abort() doesn't seem to trigger ctx.IsCancellationRequested on the server side.
How am I supposed to cancel/abort an async call on a Nancy self-hosted server from a RestSharp client?
There are a couple of parts that go into cancelling a web request.
First, the client must support cancellation and use that cancellation to abort the connection. So the first thing to do is to monitor the network packets (e.g., using Wireshark) and make sure RestSharp is sending an RST (reset), and not just closing its side of the connection. An RST is a signal to the other side that you want to cancel the request; closing the connection is a signal to the other side that you're done sending data but are still willing to receive a response.
Next, the server must support detecting cancellation. Not all servers do. For quite a while, ASP.NET (pre-Core) did not properly support client-initiated request cancellation. There was some kind of race condition where it wouldn't cancel properly (I don't remember the details), so that cancellation path was disabled. This has all been sorted out in ASP.NET Core, but AFAIK it was never fixed in legacy ASP.NET.
If RestSharp is not sending an RST, open an issue with that team. If Nancy is not responding to the RST, open an issue with that team. If it's an ASP.NET issue, the Nancy team should know that (and respond informing you they can't fix it), in which case you're out of luck. :/
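As a point of comparison (not RestSharp-specific), the client side of that first part with HttpClient and a CancellationToken would look something like the sketch below; whether the server ever observes the cancellation still depends on the host, as described above:
private static readonly HttpClient _client = new HttpClient();

public static async Task PostWhileAsync()
{
    // Cancel the request automatically after 1 second, mirroring the Abort() above.
    using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(1)))
    {
        try
        {
            var response = await _client.PostAsync(
                "http://localhost:50001/nancyserver/whileasync",
                new StringContent(string.Empty),
                cts.Token);

            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
        catch (TaskCanceledException)
        {
            // The request was cancelled on the client side.
        }
    }
}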

Calls to an API fail after some time when multiple users are logged in and using the application. Is there a way to solve this?

I am making calls to an API that syncs and saves appointments between my application and a mail agenda, most commonly Outlook. The calls to the API are made from a web application. It works fine for some time, but after a few hours the calls start to fail. This continues for a while and then it starts working again.
The DefaultConnectionLimit was initially set to 100, and in that scenario the process stopped working after some time (say 30 minutes to 1 hour). Then DefaultConnectionLimit was set to 20, and in that case it worked fine for 3-4 hours and stopped working after that. I don't know if this is the actual cause, but I mention it just in case.
The code where the call to the API is made from the web application is shown below:
public bool SyncNewAppToExch(string encryptedAppSyncDetails)
{
    try
    {
        string result = string.Empty;
        JArray paramList = new JArray();
        paramList.Add(encryptedAppSyncDetails);

        var emailApiUrl = ConfigurationManager.AppSettings["emailApiUrl"].ToString();
        Uri baseAddress = new Uri(emailApiUrl);
        var url = "/api/EmailProcess/ManualSaveAppointments";

        HttpClient client = new HttpClient();
        client.BaseAddress = baseAddress;
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        Task.Run(() => client.PostAsJsonAsync(url, paramList)).Wait(5000);

        var responseMessage = true;
        if (responseMessage)
            return true;
        else
            return false;
    }
    catch (Exception ex)
    {
        return false;
    }
}
The exception that follows the API call failure reads:
Exception : System.AggregateException: One or more errors occurred. ---> System.Threading.Tasks.TaskCanceledException: A task was canceled. --- End of inner exception stack trace --- at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification) at System.Threading.Tasks.Task`1.get_Result()
If you are creating a new HttpClient object each time you want to make a call, you are not using it appropriately. HttpClient may continue to hold sockets open even after it is no longer being used, and this will likely cause socket exhaustion under load. You can see more on this topic here.
You should create a single instance and reuse it. If you are using .NET Core, it would be a better idea to use the HttpClientFactory.
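A minimal sketch of a single shared HttpClient applied to the code above (it also awaits the call directly, which the next point touches on; the method name and return handling are illustrative):
private static readonly HttpClient _emailApiClient = new HttpClient
{
    BaseAddress = new Uri(ConfigurationManager.AppSettings["emailApiUrl"])
};

public static async Task<bool> SyncNewAppToExchAsync(string encryptedAppSyncDetails)
{
    var paramList = new JArray { encryptedAppSyncDetails };

    // Await the POST directly; no Task.Run wrapper and no blocking Wait(5000).
    var response = await _emailApiClient.PostAsJsonAsync(
        "/api/EmailProcess/ManualSaveAppointments", paramList);

    return response.IsSuccessStatusCode;
}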
In addition, using Task.Run to call an async method is not wise; you are setting yourself up for a deadlock. It would be much better to mark your method as async and await the call, if possible. And since you are effectively doing fire-and-forget, the way you have written it the work will be lost if your app domain shuts down.
If you need to do this, you should at least consider registering your task as such:
HostingEnvironment.QueueBackgroundWorkItem(ct => SendMailAsync(user.Email));
The short answer is that it is difficult to say what your issue is, it could be one of several problems.

MVC HttpClient multiple post requests, get mismatched responses

The scenario
I need to show N reports on a web page. The reports have to be requested from an external service. The time the service needs to generate a report can vary from 2 seconds to 50 seconds, depending on the requested content.
To call the service I use HttpClient in an async action. To generate 1 report I call the service once; to generate 5 reports I call it 5 times, and so on.
The Problem
Let's suppose we request 3 reports, BigReport, MediumReport and SmallReport, with known generation times of 1 minute, 30 seconds and 2 seconds respectively, and we call the service in the following order:
BigReport, MediumReport, SmallReport
The result of the HTTP calls will be as follows:
The HTTP call for BigReport returns SmallReport (which is the quickest to be generated)
MediumReport will be correct
SmallReport's response will contain BigReport (which is the longest and the last)
Basically, although the HTTP calls are distinct, because they are made over a very short period of time and are all still "active", the server responds on a first-arrived, first-served basis instead of serving each call with its exact response.
The Code
I have a Request controller with an async action like this:
public async Task<string> GenerateReport(string blockContent)
{
    var formDataContent = new MultipartFormDataContent
    {
        AddStringContent(userid, "userid"),
        AddStringContent(passcode, "passcode"),
        AddStringContent(outputtype, "outputtype"),
        AddStringContent(submit, "submit")
    };

    var blockStream = new StreamContent(new MemoryStream(Encoding.Default.GetBytes(blockContent)));
    blockStream.Headers.Add("Content-Disposition", "form-data; name=\"file\"; filename=\"" + filename + "\"");
    formDataContent.Add(blockStream);

    using (var client = new HttpClient())
    {
        using (var message = await client.PostAsync(Url, formDataContent))
        {
            var report = await message.Content.ReadAsStringAsync();
            return report;
        }
    }
}
The action is being called from a view via Ajax, like this
//FOREACH BLOCK, CALL THE REPORT SERVICE
$('.block').each(function(index, block) {
    var reportActionUrl = "Report/GenerateReport/" + block.Content;

    //AJAX CALL GetReportAction
    $(block).load(reportActionUrl);
});
Everything works fine if I convert the action from async to sync by removing async Task and, instead of awaiting the response, getting the result as
var result = client.PostAsync(Url, formDataContent).Result;
This makes everything run synchronously and it works fine, but the waiting time for the user is much longer. I would really like to avoid this by making parallel calls or something similar.
Conclusions and questions
The problem itself makes sense after inspecting it with Fiddler as well: we have multiple open HTTP requests pending almost simultaneously.
I suppose I need some sort of handler or something to identify and match each request with its response, but I don't know the name of the "domain" I need to look into. So far, my questions are:
What is the technical name of "making multiple http calls in parallel"?
If the problem is understandable, what is name of the problem? (concurrency, parallel requests queuing, etc..?)
And of course, is there any solution?
Many thanks.
With a "bit" of delay, I post the solution.
The problem was that the filename parameter was incorrectly set to the file name instead of the block name. This was causing the very weird behaviour, since a single file can contain many blocks.
The lesson learned is that in case of very weird behaviour, in this case with an HttpClient call, analyse all the possible parameters and test them with different values, even if it doesn't seem to make much sense. At worst it will throw an error.
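If the fix was to pass the block's own identifier in that header instead of the shared file name, the corrected line would look something like this (blockname is illustrative, standing in for whatever uniquely identifies the block):
blockStream.Headers.Add("Content-Disposition",
    "form-data; name=\"file\"; filename=\"" + blockname + "\"");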

Limiting asynchronous requests without blocking

I am after some advice/strategy on limiting HTTP requests when consuming multiple web services. I feel I could do this if the requests were happening synchronously, but they are asynchronous, and I think I should perform the limiting logic in a way that won't block.
Because the web app consumes multiple web services, there will be different limits for different requests. I was thinking of something like this, but I'm not sure how to proceed in a non-blocking manner:
request method:
public static Task<string> AsyncRequest(string url, WebService webService)
{
    using (LimitingClass limiter = new LimitingClass(webService))
    {
        //Perform async request
    }
}
The LimitingClass will contain logic such as checking the last request made to the given web service; if the new request violates the limit, it will wait a certain amount of time. But in the meantime, if another request comes in for a different web service, I don't want that request to be blocked while the LimitingClass is waiting. Is there anything fundamentally wrong with the above approach? Should I open up a new thread with each LimitingClass instance?
Some pseudo code would be great if possible.
Many Thanks
UPDATE:
This is a simplified version of my current request method:
public static Task<string> MakeAsyncRequest(string url, string contentType)
{
    HttpWebRequest request = //set up my request

    Task<WebResponse> task = Task.Factory.FromAsync(
        request.BeginGetResponse,
        asyncResult => request.EndGetResponse(asyncResult),
        (object)null);

    return task.ContinueWith(t => ReadCallback(t.Result));
}
I just want to wrap this in a using block that checks the limits and doesn't block other requests.
So how do you limit access to a resource without blocking?
I think you need to look at semaphores - they let you limit the number of threads that can access a resource or pool of resources concurrently.
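A minimal sketch of that idea using SemaphoreSlim.WaitAsync (available from .NET 4.5), with one semaphore per web service so that waiting on one service never blocks requests to another; the WebService enum and the limits shown are illustrative assumptions:
public enum WebService { ServiceA, ServiceB }

private static readonly Dictionary<WebService, SemaphoreSlim> _limits =
    new Dictionary<WebService, SemaphoreSlim>
    {
        { WebService.ServiceA, new SemaphoreSlim(5) }, // at most 5 concurrent requests
        { WebService.ServiceB, new SemaphoreSlim(2) }  // at most 2 concurrent requests
    };

public static async Task<string> MakeLimitedAsyncRequest(string url, string contentType, WebService webService)
{
    var limiter = _limits[webService];

    // WaitAsync queues the caller without blocking a thread while the limit is reached.
    await limiter.WaitAsync();
    try
    {
        return await MakeAsyncRequest(url, contentType);
    }
    finally
    {
        limiter.Release(); // free the slot for the next queued request
    }
}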
