ASP.NET Core 3 HttpClientFactory: TaskCanceledException - C#

I'm using an ASP.NET Core 3 reverse proxy in Azure Kubernetes Service, loosely based on ProxyKit, which used to work just fine in a previous cluster. On the new cluster (the only difference I can see is that the old one used kubenet, while the new one uses an Azure virtual network) I constantly get TaskCanceledExceptions on HttpClient.SendAsync when an upstream request takes more than a few seconds.
This is the relevant method that throws the exception:
return await _httpClient.SendAsync(
        UpstreamRequest,
        HttpCompletionOption.ResponseContentRead,
        HttpContext.RequestAborted)
    .ConfigureAwait(false);
The HttpClient is provided by HttpClientFactory using the typed AddHttpClient registration.
Things I've tried so far:
explicitly setting a 30-second timeout for the HttpClient
passing no CancellationToken to the SendAsync method
implementing custom timeout handling as suggested in this article (a sketch of such a handler follows below)
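The TimeoutHandler used in the second configuration below isn't shown in the question. As a rough sketch only, a per-request timeout DelegatingHandler in the spirit of the linked article typically looks something like this (the class shape and the 100-second default are assumptions, not the OP's actual code):

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical per-request timeout handler: cancels only the upstream call,
// while the HttpClient's own Timeout stays set to infinite.
public class TimeoutHandler : DelegatingHandler
{
    public TimeSpan Timeout { get; set; } = TimeSpan.FromSeconds(100);

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        using (var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken))
        {
            cts.CancelAfter(Timeout);
            try
            {
                return await base.SendAsync(request, cts.Token);
            }
            catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
            {
                // Only the per-request timeout fired, not the caller's token.
                throw new TimeoutException($"Request timed out after {Timeout}.");
            }
        }
    }
}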
This is how the HttpClientFactory was configured before:
var httpClientBuilder = services
    .AddHttpClient<ProxyKitClient>()
    .ConfigurePrimaryHttpMessageHandler(sp => new HttpClientHandler
    {
        AllowAutoRedirect = false,
        UseCookies = false
    });
And this is the configuration right now:
var httpClientBuilder = services
    .AddHttpClient<ProxyKitClient>(o => o.Timeout = Timeout.InfiniteTimeSpan)
    .ConfigurePrimaryHttpMessageHandler(sp => new TimeoutHandler
    {
        InnerHandler = new HttpClientHandler
        {
            AllowAutoRedirect = false,
            UseCookies = false
        }
    });
The behavior did not change whatsoever.
How can I make sure that HttpClient waits for the upstream request to finish? The Kestrel and HttpClient default timeouts are far higher than the durations of the requests that are currently being aborted.
As a side note, when I revert to ASP.NET Core 2.2, the behavior is exactly the same.

I commented:
Task cancellation for something like SendAsync is going to occur when the client closes the connection. As such, I think you're looking in the wrong place. You need to figure out why the clients are closing the connection prematurely.
As a result, the OP was able to determine that the issue lay with Azure Application Gateway:
Makes sense, and you are totally right. Was pulling my remaining hair out the whole day and the culprit is Azure Application Gateway with a request timeout of 1 second.
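For anyone hitting the same symptom, here is a small diagnostic sketch (not from the original thread; the _logger field is an assumption) that distinguishes a client or proxy abort from an HttpClient timeout by checking which token was cancelled:

try
{
    return await _httpClient.SendAsync(
            UpstreamRequest,
            HttpCompletionOption.ResponseContentRead,
            HttpContext.RequestAborted)
        .ConfigureAwait(false);
}
catch (TaskCanceledException) when (HttpContext.RequestAborted.IsCancellationRequested)
{
    // The downstream caller (or an intermediary such as Application Gateway)
    // aborted the incoming request; this is not an HttpClient timeout.
    _logger.LogWarning("Upstream call cancelled because the incoming request was aborted.");
    throw;
}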

Related

Increase the request time-out in an ASP.NET Core REST API

I'm quite a beginner at using HTTP requests.
I have an automated task that updates the attendance of 6,000 employees every hour. When I was testing my API locally, it succeeded, taking at least 40 seconds.
Once I deployed it, it gives me an operation time-out.
I would like to increase my request time-out to at least a minute.
I have used this code:
[HttpPost("autoAttendanceUpdate")]
public bool AUTO_AttendanceUpdate([FromBody] AttendanceProcessorFilters filter)
{
HttpClient httpClient = new HttpClient();
httpClient.Timeout = TimeSpan.FromMinutes(1);
_attendanceProcessorRepository.AUTO_attendanceUpdate(filter); // AUTO_attendanceUpdate contains the logic
}
But I think I have used HttpClient wrong.
I don't know whether I should modify AUTO_attendanceUpdate or whether there are other solutions that can help me.
As much as possible, I don't want any other methods to be affected, so adjusting the web.config is not an option. TIA
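No answer is reproduced here, so the following is only a hedged sketch. One observation grounded in the code above: the Timeout is set on an HttpClient that is never used, so it cannot affect anything. A client-side timeout belongs on the HttpClient that actually sends the request, i.e. in whatever process calls this endpoint every hour (the class name, URL and payload below are assumptions):

using System;
using System.Net.Http;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class AttendanceCaller
{
    // One shared client for the hourly job; the timeout lives here,
    // on the side that actually issues the HTTP request.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromMinutes(2) // comfortably above the ~40 seconds observed locally
    };

    public static async Task RunAsync(object filter)
    {
        var payload = new StringContent(
            JsonSerializer.Serialize(filter), Encoding.UTF8, "application/json");

        // Hypothetical endpoint URL.
        var response = await Client.PostAsync(
            "https://example.com/api/autoAttendanceUpdate", payload);
        response.EnsureSuccessStatusCode();
    }
}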

Why do I get a server timeout calling an API via HttpClient but Postman is fine?

I am trying to make a GET request to a RESTful service, but the .NET HttpClient times out, whereas via Postman the same request returns an expected response (an error response, since I'm not yet "logged in") within seconds.
What I've tried:
I have checked the URLs are the same and they are.
I thought perhaps Postman's automagical headers are the issue, but after setting these in the HttpClient I still receive a timeout.
If I increase the timeout on the HttpClient I still get a timeout from the server ((504) Gateway Timeout).
Client = new HttpClient(new HttpClientHandler()
{
    AutomaticDecompression = System.Net.DecompressionMethods.GZip
});

var objTask = Client.GetStringAsync(objUrl);
objTask.Wait();
strResponse = objTask.Result;

strResponse = Client.GetStringAsync(objUrl).Result;
Why am I receiving a timeout via HttpClient?
Note:
Our WebApi calls our WCF service, which in turn calls the 3rd party RESTful service via HttpClient; for security reasons I can't call the 3rd party API directly from our WebApi controller. The website is based on the .NET Framework 4.0.
With lots of help from #jdweng and #gunr2171, I've realised the issue was due to the proxy stored in the App.Config on the self-hosted WCF Services. If I commented this out, I wouldn't get a time-out issue.
The HttpClient constructor can take an HttpClientHandler, which has a Proxy property you can use to bypass the local proxy, like so:
return new HttpClient(new HttpClientHandler()
{
    AutomaticDecompression = System.Net.DecompressionMethods.GZip,
    Proxy = new WebProxy()
    {
        BypassProxyOnLocal = true
    }
})
{
    BaseAddress = new Uri("https://my.service")
};

Static HttpClient still creating TIME_WAIT TCP ports

I am experiencing some interesting behavior with the HttpClient from the .NET Framework (4.5.1+, 4.6.1 and 4.7.2). I have proposed some changes in a project at work to not dispose of the HttpClient on each use, because of the known issue with high TCP port usage; see https://aspnetmonsters.com/2016/08/2016-08-27-httpclientwrong/.
I have investigated the changes to check that things were working as expected and found that we are still experiencing the same TIME_WAIT ports as before.
To confirm that my proposed changes were correct, I added some extra tracing to the application, which confirms that I am using the same instance of the HttpClient throughout the application. I have since used a simple test application (taken from the aspnetmonsters site linked above).
using System;
using System.Net.Http;
using System.Threading.Tasks;

namespace ConsoleApplication
{
    public class Program
    {
        private static HttpClientHandler handler = new HttpClientHandler { UseDefaultCredentials = true };
        private static HttpClient Client = new HttpClient(handler);

        public static async Task Main(string[] args)
        {
            Console.WriteLine("Starting connections");
            for (int i = 0; i < 10; i++)
            {
                var result = await Client.GetAsync("http://localhost:51000");
                Console.WriteLine(result.StatusCode);
            }
            Console.WriteLine("Connections done");
            Console.ReadLine();
        }
    }
}
The issue only occurs when connecting to a site that is hosted in IIS using Windows Authentication. I can reproduce the issue easily by setting the Authentication to Anonymous (problem goes away) and back to Windows Authentication (problem reoccurs).
The issue with Windows Authentication does not seem to be limited to the scope of the provider. It has the same issue if you use Negotiate or NTLM. The issue also occurs whether the machine is just a workstation or part of a domain.
Out of interest I created a .NET Core 2.1.0 console app, and the issue is not present at all; it works as expected.
TL;DR: Does anyone have any idea how to fix this, or is it likely to be a bug?
Short version
Use .NET Core 2.1 if you want to reuse connections with NTLM authentication
Long version
I was quite surprised to see that the "old" HttpClient does use a different connection for each request when NTLM authentication is used. This isn't a bug: before .NET Core 2.1, HttpClient used HttpWebRequest, which closes the connection after every NTLM-authenticated call.
This is described in the documentation of the HttpWebRequest.UnsafeAuthenticatedConnectionSharing property, which can be used to enable sharing of the connection:
The default value for this property is false, which causes the current connection to be closed after a request is completed. Your application must go through the authentication sequence every time it issues a new request.
If this property is set to true, the connection used to retrieve the response remains open after the authentication has been performed. In this case, other requests that have this property set to true may use the connection without re-authenticating.
The risk is that:
If a connection has been authenticated for user A, user B may reuse A's connection; user B's request is fulfilled based on the credentials of user A.
If one understands the risks, and the application doesn't use impersonation, one could configure HttpClient with a WebRequestHandler and set UnsafeAuthenticatedConnectionSharing, e.g.:
HttpClient _client;

public void InitTheClient()
{
    var handler = new WebRequestHandler
    {
        UseDefaultCredentials = true,
        UnsafeAuthenticatedConnectionSharing = true
    };
    _client = new HttpClient(handler);
}
WebRequestHandler doesn't expose HttpWebRequest.ConnectionGroupName, which would allow grouping connections, e.g. by ID, so it can't handle impersonation.
.NET Core 2.1
HttpClient was rewritten in .NET Core 2.1 and implements all of the HTTP and networking functionality directly on top of sockets, with minimal allocations, connection pooling, etc. It also handles the NTLM challenge/response flow separately, so the same socket connection can be used to serve different authenticated requests.
If anyone is interested, you can chase the calls from HttpClient to SocketsHttpHandler, HttpConnectionPoolManager, HttpConnectionPool, HttpConnection, AuthenticationHelper.NtAuth and then back to HttpConnection to send the raw bytes.
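For completeness, a minimal sketch (not from the original answer) of the .NET Core 2.1+ setup this recommends: a single shared HttpClient over SocketsHttpHandler with default credentials, so NTLM-authenticated requests can reuse pooled connections.

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class NtlmClient
{
    // One shared client; SocketsHttpHandler is the default handler on .NET Core 2.1+,
    // created explicitly here so the credentials can be set on it.
    private static readonly HttpClient Client = new HttpClient(new SocketsHttpHandler
    {
        Credentials = CredentialCache.DefaultCredentials
    });

    public static async Task<string> GetAsync(string url)
    {
        // Repeated calls reuse pooled connections instead of leaving a new
        // TIME_WAIT socket per request, even with Windows authentication.
        return await Client.GetStringAsync(url);
    }
}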

Why is the first HttpClient.PostAsync call extremely slow in my C# WinForms app?

I have an HttpClient like this:
var client = new HttpClient();
I post to it like this:
var result = client.PostAsync(
    endpointUri,
    requestContent);
And get the response like this:
HttpResponseMessage response = result.Result;
I understand this call will block the thread; that's how it's supposed to work (I'm just building a tool for myself, no async threads needed).
The first time I run this call, it takes about 2 minutes to get a result. Meanwhile, if I do the exact same call elsewhere it's done in 200 ms. Even if I hit Google, it takes 2 minutes. But after the first call, as long as I keep the app open, any additional calls are fine. It's just the first call when I open the application. What could be causing this?
The problem was that it was hanging for a very long time trying to resolve a proxy for the client. Initializing the HttpClient like this did the trick:
var client = new HttpClient(new HttpClientHandler
{
UseProxy = false
});
In my case I was trying to access a service on localhost. Apparently the HTTP client tries to connect to the IPv6 localhost first before falling back to its IPv4 equivalent (source: https://github.com/jchristn/restwrapper), which causes a slowdown.
Changing localhost to 127.0.0.1 in my case cut off 2000 ms of delay, although there is still maybe a ~120 ms delay on the first request.
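Combining both observations, a minimal sketch (the port is an assumption) of a client that avoids both the proxy-resolution delay and the IPv6-first localhost lookup:

// Skip automatic proxy resolution and target the IPv4 loopback directly
// instead of "localhost" to avoid the IPv6-first lookup delay.
var client = new HttpClient(new HttpClientHandler
{
    UseProxy = false
})
{
    BaseAddress = new Uri("http://127.0.0.1:5000/") // port is an assumption
};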

Should WebRequest.CachePolicy work from code running within IIS?

I have some code running within an ApiController (ASP.Net Web API) that itself wants to make a GET request to another web service. The web service (also part of my app) returns Cache-Control headers indicating an expiry time for the content it returns.
I am using the new System.Net.Http.HttpClient, configured with a WebRequestHandler in order to use client-side caching (the default HttpClientHandler does not support cache configuration, although it does use System.Net.WebRequest as its underlying HTTP implementation):
var client = new HttpClient(new WebRequestHandler {
    UseDefaultCredentials = true,
    CachePolicy = new RequestCachePolicy(RequestCacheLevel.Default)
});

var response = client.GetAsync("someUri").Result;
response.EnsureSuccessStatusCode();
On the server I am enabling caching within my controller action via...
var response = new HttpResponseMessage(HttpStatusCode.OK);
response.Headers.CacheControl = new CacheControlHeaderValue {
    Public = true,
    MaxAge = new TimeSpan(0, 5, 0) // Five minutes in this case
};
// Omitted, some content is added to the response
return response;
The above (abbreviated) code works correctly within a test; I make multiple calls to the service in this way and only the first call actually contacts the service (observed via log messages on the service in IIS); subsequent calls use the cache.
However, running the same code hosted on IIS itself, it seems the HttpClient ignores the caching result (I have also set up my IoC container such that only one instance of the HttpClient exists in the AppDomain) and calls the service each time. This is running as the AppPoolIdentity.
Interestingly, if I change the app pool to run as NetworkService, then the response has status code 401 Unauthorized (I have tried setting Preauthenticate = true on the WebRequestHandler but the status code is still 401). The same is true if I change the app pool to run under my own credentials.
So, is there something about running the app pool under the NetworkService identity, or the virtual AppPoolIdentity, that prevents them from using client-side caching? And where does the content cached by WebRequest physically live anyway?
WinInet caching is not supported when running under IIS; please check the following Microsoft support article: http://support.microsoft.com/kb/238425
I don't see any reason why the cache should not work under IIS. The cache is implemented by WinInet and is the same cache that is used by Internet Explorer.
Try setting the max-age instead of the expiry time.
