I had an ASP.NET (.NET Framework 4.8) web service running on Windows Server that made lots of outgoing HTTP requests using HttpWebRequest (synchronous). It handled thousands of concurrent requests without any trouble.
Recently, I migrated the service/middleware to ASP.NET Core (runtime 3.1) running on Ubuntu Server, using the updated HttpWebRequest (synchronous).
Now this service is stalling under a load test with just a few hundred concurrent requests. The system journal/logs indicate that the health check (heartbeat) cannot reach the service after a few minutes. It starts out fine, but after a few minutes it slows down and eventually halts (no response, though the dotnet process doesn't crash), then starts working again after 5-10 minutes without any intervention, and repeats this behavior every few minutes.
I'm not sure if this is due to port exhaustion or a deadlock. If I load test the service with all HttpWebRequest calls skipped, it works fine, so I suspect HttpWebRequest is causing an issue under traffic stress.
Looking at the .NET Core codebase, it seems that HttpWebRequest (synchronous) creates a new HttpClient for each request (the client is not cached in my case because of the parameters used), and executes HttpClient synchronously like this:
public override WebResponse GetResponse()
{
    // ...
    return SendRequest(async: false).GetAwaiter().GetResult();
    // ...
}
private async Task<WebResponse> SendRequest(bool async)
{
    // ...
    _sendRequestTask = async ?
        client.SendAsync(...) :
        Task.FromResult(client.Send(...));

    HttpResponseMessage responseMessage = await _sendRequestTask.ConfigureAwait(false);
    // ...
}
The official suggestion from Microsoft is to use IHttpClientFactory or SocketsHttpHandler for better performance. I can make our service use a singleton SocketsHttpHandler and a new HttpClient per outgoing request (sharing that handler) so that sockets are reused and closed properly, but my main concern is the following:
The service is based on synchronous code, so I'll have to call the asynchronous HttpClient synchronously, probably using the same .GetAwaiter().GetResult() technique as the official .NET Core code above. While a singleton SocketsHttpHandler may help avoid port exhaustion, could concurrent synchronous execution still lead to the stalling problem through deadlocks, just like the native HttpWebRequest?
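To be concrete, the kind of code I have in mind looks roughly like this (just a sketch; the pooled-connection settings and names are placeholders, not our actual code):

private static readonly SocketsHttpHandler SharedHandler = new SocketsHttpHandler
{
    // recycle pooled connections periodically so DNS changes are picked up
    PooledConnectionLifetime = TimeSpan.FromMinutes(2),
    MaxConnectionsPerServer = int.MaxValue
};

public string GetString(string url)
{
    // disposeHandler: false keeps the shared handler (and its connection pool) alive
    using (var client = new HttpClient(SharedHandler, disposeHandler: false))
    using (var response = client.GetAsync(url).GetAwaiter().GetResult())
    {
        return response.Content.ReadAsStringAsync().GetAwaiter().GetResult();
    }
}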
Also, is there an approach (another synchronous HTTP client for .NET Core, setting a 'Connection: close' header, etc.) to smoothly make lots of concurrent HTTP requests synchronously, without port exhaustion or deadlocks, just as it worked with HttpWebRequest on .NET Framework 4.8?
Just to clarify: all WebRequest-related objects are closed/disposed properly in the code, ServicePointManager.DefaultConnectionLimit is set to int.MaxValue, nginx (the proxy in front of dotnet) has been tuned, and sysctl has been tuned as well.
I'm not sure if this is due to port exhaustion or a deadlock.
Sounds more like thread pool exhaustion to me.
The service is based on synchronous code, so I'll have to use asynchronous HttpClient synchronously
Why?
The best solution to thread pool exhaustion is to rewrite blocking code to be asynchronous. There were places in ASP.NET pre-Core that required synchronous code (e.g., MVC action filters and child actions), but ASP.NET Core is fully asynchronous, including the middleware pipeline.
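For example, a blocking call site and its asynchronous equivalent might look roughly like this (the injected HttpClient field and the controller shape are my assumptions, not code from the question):

// Before: blocks a thread-pool thread for the whole outgoing call
// var response = (HttpWebResponse)WebRequest.Create(url).GetResponse();

// After: the thread is returned to the pool while the outgoing call is in flight
public async Task<IActionResult> Proxy(string url)
{
    using (var response = await _httpClient.GetAsync(url))
    {
        var body = await response.Content.ReadAsStringAsync();
        return Content(body);
    }
}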
If you absolutely cannot make the code properly asynchronous for some reason, the only other workaround is to increase the minimum number of threads in the thread pool on startup.
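A minimal sketch of that workaround (the worker-thread count is a placeholder; size it to the number of requests you expect to be blocked concurrently):

// In Program.Main (or Startup), before load arrives
ThreadPool.GetMinThreads(out _, out int minIoThreads);
ThreadPool.SetMinThreads(500, minIoThreads);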
Related
I have a project related to using HttpClient (C#/.NET) with WPF.
I use an HttpClient singleton.
I made a million requests to the server, and the problem is that the response time becomes very slow if I stay connected forever.
I have set the ConnectionLeaseTimeout for the endpoint via the ServicePointManager class to 60 seconds. However, when I checked with the "netstat" command, more than 10 connections were in the TIME_WAIT state, with only one in the ESTABLISHED state.
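For reference, I set the lease timeout roughly like this (the URL here is just an example):

// .NET Framework client: limit how long a pooled connection may be reused for this endpoint
var servicePoint = ServicePointManager.FindServicePoint(new Uri("https://example.com/api/"));
servicePoint.ConnectionLeaseTimeout = 60 * 1000; // milliseconds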
Do you have a way to fix this?
Does the IHttpClientFactory not apply to the .NET Framework?
I am very grateful for your help.
I have a legacy application where HostingEnvironment.RegisterObject is used.
I have been tasked with converting this to ASP.NET Core 2.0; however, I am unable to find a way to either implement this or find an alternative in ASP.NET Core 2.0.
The Microsoft.AspNetCore.Hosting.Internal namespace does not contain the RegisterObject method, nor does it have the IRegisteredObject interface. I am at a loss on how to get this implemented.
The way to achieve a similar goal in ASP.NET Core is the IApplicationLifetime interface. Among its properties are two CancellationTokens:
ApplicationStopping:
The host is performing a graceful shutdown. Requests may still be
processing. Shutdown blocks until this event completes.
And ApplicationStopped:
The host is completing a graceful shutdown. All requests should be
completely processed. Shutdown blocks until this event completes.
This interface is registered in the DI container by default, so you can just inject it wherever you need it. Where you previously called RegisterObject, you instead call:
// or ApplicationStopped
var registration = lifetime.ApplicationStopping.Register(OnApplicationStopping);

private void OnApplicationStopping()
{
    // will be executed when the host is shutting down
}
And your OnApplicationStopping callback will be invoked by the runtime on host shutdown. Where you previously would call UnregisterObject, you instead dispose the CancellationTokenRegistration returned by CancellationToken.Register:
registration.Dispose();
You can also pass these cancellation tokens to operations that expect cancellation tokens and which should not be accidentally interrupted by shutdown.
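Putting it together, a minimal sketch (the MyLegacyWorker class name is just an example, not something from your code):

public class MyLegacyWorker : IDisposable
{
    private readonly CancellationTokenRegistration _registration;

    public MyLegacyWorker(IApplicationLifetime lifetime)
    {
        // roughly equivalent to HostingEnvironment.RegisterObject(this)
        _registration = lifetime.ApplicationStopping.Register(OnApplicationStopping);
    }

    private void OnApplicationStopping()
    {
        // flush buffers, stop background work, etc.
    }

    // roughly equivalent to HostingEnvironment.UnregisterObject(this)
    public void Dispose() => _registration.Dispose();
}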
Consider these two controller methods:
public async Task DoAsync()
{
    await TaskThatRunsFor10MinutesAsync().ConfigureAwait(false);
}

public void DoWithTaskRunAndReturnEarly()
{
    Task.Run(() => TaskThatRunsFor10MinutesAsync());
}
Let's say on the client side we don't care about the response. We might wait for it to complete, or we might abandon it halfway because the user refreshed the page -- but it doesn't matter that DoWithTaskRunAndReturnEarly returns earlier.
Is there an important server-side difference between those two approaches, e.g. in reliability, timeouts, etc? Does ASP.NET Core ever abort threads on a timeout?
In ASP.NET Core 1.1 the socket will stay in the CLOSE_WAIT state until the application Task completes. That could lead to resource exhaustion on the server side if you have lots of long running tasks on the server where the client has already disconnected (refreshed the page in your case).
Is there an important server-side difference between those two approaches, e.g. in reliability, timeouts, etc? Does ASP.NET Core ever abort threads on a timeout?
Thread aborts aren't a "thing" in .NET Core anymore, and they wouldn't work for async code anyway (there's no active thread while you're awaiting).
In the next version of ASP.NET Core (2.x at the time of writing), the RequestAborted cancellation token will trigger when the client socket sends a FIN, so you'll be able to react to clients disconnecting and cancel long-running work. On top of that, it will close the socket even before the application completes.
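For example, in a controller you can take a CancellationToken parameter (ASP.NET Core binds it to HttpContext.RequestAborted) and pass it into the long-running work; this sketch assumes TaskThatRunsFor10MinutesAsync has an overload accepting a token:

public async Task<IActionResult> Do(CancellationToken cancellationToken)
{
    // cancellationToken is signaled when the client disconnects
    await TaskThatRunsFor10MinutesAsync(cancellationToken);
    return Ok();
}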
I have a loop that waits for some process to complete a job and then returns the result.
I have both MyRestClient.FetchResult(id) and MyRestClient.FetchResultAsync(id) available to me; they fetch the result from a remote service and return a boolean value indicating whether it is complete.
public class StatusController : Controller
{
    public ActionResult Poll(long id)
    {
        return new PollingResult(() =>
        {
            return MyRestClient.FetchResult(id) == SomethingSuccessful;
        });
    }
}
public class PollingResult : ActionResult
{
    private readonly Func<bool> pollResult;

    public PollingResult(Func<bool> pollResult)
    {
        this.pollResult = pollResult;
    }

    public override void ExecuteResult(ControllerContext context)
    {
        var response = context.HttpContext.Response;
        var request = context.HttpContext.Request;

        // poll every 5 seconds, for 5 minutes
        for (int i = 0; i < 60; i++)
        {
            if (!request.IsClientConnected)
            {
                return;
            }

            Thread.Sleep(5000);

            if (pollResult())
            {
                response.Write("Success");
                return;
            }

            // this is a comet-style response, so we keep sending something
            // so that the browser does not disconnect
            response.Write("Waiting");
            response.Flush();
        }

        response.Write("Timeout");
    }
}
Now I am just wondering whether there is any way to use async/await to improve this logic, because this thread is just waiting, every 5 seconds, for up to 5 minutes.
Update
The async Task pattern usually finishes all of its work before sending the result back to the client. Please note that if I do not send intermediate responses back to the client every 5 seconds, the client will disconnect.
Reason for Client Side Long Poll
Our web server is on a high-speed connection, whereas clients are on low-end connections; making multiple connections from the client to our server, and then relaying further to the third-party API, adds extra overhead on the client end.
This is called Comet technology: instead of making a new call every 5 seconds, keeping a connection open a little longer consumes fewer resources.
And of course, if the client is disconnected, it will reconnect and wait again. Multiple HTTP connections every 5 seconds drain battery life more quickly than a single polling request.
First, I should point out that SignalR was designed to replace manual long-polling. I recommend that you use it first, if possible. It will upgrade to WebSockets if both sides support it, which is more efficient than long polling.
There is no "async ActionResult" supported in MVC, but you can do something similar via a trick:
public async Task<ActionResult> Poll()
{
while (!IsCompleted)
{
await Task.Delay(TimeSpan.FromSeconds(5));
PartialView("PleaseWait").ExecuteResult(ControllerContext);
Response.Flush();
}
return PartialView("Done");
}
However, flushing partial results goes completely against the spirit and design of MVC. MVC = Model, View, Controller, where the Controller constructs the Model and passes it to the View. In this case the Controller is directly flushing parts of the View.
WebAPI has a more natural and less hackish solution: the PushStreamContent type.
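Roughly, that WebAPI approach would look something like this (a sketch that reuses the question's MyRestClient and SomethingSuccessful, which I'm assuming are accessible here):

public class StatusController : ApiController
{
    public HttpResponseMessage Get(long id)
    {
        var response = Request.CreateResponse();
        response.Content = new PushStreamContent(async (stream, content, context) =>
        {
            using (var writer = new StreamWriter(stream))
            {
                // poll every 5 seconds, for 5 minutes
                for (int i = 0; i < 60; i++)
                {
                    await Task.Delay(TimeSpan.FromSeconds(5));
                    if (await MyRestClient.FetchResultAsync(id) == SomethingSuccessful)
                    {
                        await writer.WriteLineAsync("Success");
                        return;
                    }
                    await writer.WriteLineAsync("Waiting");
                    await writer.FlushAsync();
                }
                await writer.WriteLineAsync("Timeout");
            }
        });
        return response;
    }
}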
MVC was definitely not designed for this. WebAPI supports it but not as a mainstream option. SignalR is the appropriate technology to use, if your clients can use it.
Use Task.Delay instead of Thread.Sleep
await Task.Delay(5000);
Sleep tells the operating system to put your thread to sleep and remove it from scheduling for at least 5 seconds. As a result, the thread will do nothing for 5 seconds - that's one less thread you can use to process incoming requests.
await Task.Delay creates a timer, which will tick after 5 seconds. The thing is, this timer doesn't use a thread itself - it simply tells the operating system to signal a ThreadPool thread when 5 seconds have passed.
Meanwhile, your thread will be free to answer other requests.
Update
For your specific scenario, it seems there's a gotcha.
Normally, you'd change the surrounding method's signature to return a Task/Task<T> instead of void. But ASP.NET MVC doesn't support an asynchronous ActionResult (see here).
It seems your options are to either:
Move the async code to the controller (or to another class with an async-compatible interface)
Use a WebAPI controller, which seems to be a good fit for your scenario.
I have a video encoding in progress with a third-party cloud API; however, my web client (Chrome/IE/FF) needs to poll for the result of the encoding. If I simply pass on the result every 5 seconds, the web client will need to make multiple HTTP calls one after another.
I think the approach of polling for the result of the video encoding operation within the boundaries of a single HTTP request (i.e., within your ASP.NET MVC controller method) is wrong.
While you're doing the polling, the client browser is still waiting for your HTTP response. This way, the client-side HTTP request may simply time out. It is also not very user-friendly behavior: the user doesn't get any progress notifications and cannot request cancellation.
I've recently answered a related question about long-running server-side operations. IMO, the best way of dealing with this is to outsource it to a WCF service and use AJAX polling. I also answered another related question on how to do asynchronous long-polling in a WCF service.
I've run into the following issue with web service request processing:
Preamble
I have:
A Web API service hosted on IIS 7.0 on the local machine
A test harness console application on the same machine
and I'm trying to simulate load on the web service by hitting it with requests generated via the test harness app.
Test harness core code:
static int HitsCount = 40;

static async void PerformHitting()
{
    await Task.WhenAll(ParallelEnumerable.Range(0, HitsCount)
        .Select(_ => HitAsync())
        .WithDegreeOfParallelism(HitsCount));
}
static async Task HitAsync()
{
    // some logging skipped here
    // ...
    await new HttpClient().GetAsync(TargetUrl, HttpCompletionOption.ResponseHeadersRead);
}
Expectation
Logging shows that all HitAsync() calls are made almost simultaneously: each hit via HttpClient started within the [0s; 0.1s] time frame (timings are roughly rounded here and below). Hence, I expected to catch all these requests in approximately the same time frame on the web service side.
Reality
But logging on the service side shows that the requests arrive grouped in bunches of 8-12 requests each, and the service receives these bunches at roughly 1-second intervals. I mean:
[0s, 0.3s] <- requests #0-#10
[1.2s, 1.6s] <- requests #10-#20
...
[4.1s, 4.5s] <- requests #30-#40
And I'm getting a really long total execution time for any significant HitsCount value.
Question
I suspect some kind of built-in service throttling mechanism or a framework-level limit on concurrent connections. The only thing I found related to this guess is that, but I didn't have any success trying the solutions from there.
Any ideas what the issue is?
Thanks.
By default, HTTP requests on ASP.NET are limited to 12 times the number of cores. I recommend setting ServicePointManager.DefaultConnectionLimit to int.MaxValue.
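In the test harness that could look something like this (placing it at the start of Main is my assumption):

static void Main(string[] args)
{
    // raise the outgoing connection limit before any requests are issued,
    // otherwise the client itself serializes the hits
    ServicePointManager.DefaultConnectionLimit = int.MaxValue;

    PerformHitting();
    Console.ReadLine();
}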
Well, the root of the problem lies in the IIS + Windows 7 limit on concurrent request handling (some info about such limits here). Moving the service to a machine running Windows Server eliminated the problem.