Is HttpClient async safe? - c#

I suddenly have some doubts about this. HttpClient is thread safe according to MSDN (at least for GetAsync and PostAsync).
But if I do this
List<Task> tasks = new List<Task>();
tasks.Add(_httpClient.PostAsync(url1, requestMessage1));
tasks.Add(_httpClient.PostAsync(url2, requestMessage2));
Task.WaitAll(tasks.ToArray());
Will I get correct results back all the time as both calls come from the same thread now?

Will I get correct results back all the time as both calls come from the same thread now?
Yes. That's the intended usage of HttpClient.
"An HttpClient instance is a collection of settings applied to all requests executed by that instance. In addition, every HttpClient instance uses its own connection pool"
HttpClient Class
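As a sketch of that intended usage (the URLs and payloads are placeholders), the question's pattern can also be written with await Task.WhenAll, so each call still gets its own response:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class ConcurrentPostExample
{
    // One shared client for the whole application.
    private static readonly HttpClient _httpClient = new HttpClient();

    public static async Task RunAsync()
    {
        // Hypothetical URLs; each PostAsync call is independent even
        // though both run on the same HttpClient instance.
        Task<HttpResponseMessage> first = _httpClient.PostAsync("https://example.com/a", new StringContent("one"));
        Task<HttpResponseMessage> second = _httpClient.PostAsync("https://example.com/b", new StringContent("two"));

        HttpResponseMessage[] responses = await Task.WhenAll(first, second);
        // responses[0] belongs to the first call, responses[1] to the second.
    }
}
```

Awaiting Task.WhenAll instead of blocking with Task.WaitAll keeps the calling thread free while both requests are in flight.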

Related

Single HttpClient for application life cycle - How does this single instance of HttpClient ensure that it responds to the correct request?

I have created a single instance of HttpClient in the Application_Start event in Global.asax.cs, to be reused across the application.
Code in App start:
protected new void Application_Start()
{
    HttpClientHandler httpClientHandler = new HttpClientHandler();
    string _accessTokenUrl = ConfigurationManager.AppSettings["KongAccessTokenURl"];
    string _adminUrl = ConfigurationManager.AppSettings["KongAdminUrl"];
    base.Application_Start();
    ApplicationWrapper.KongAdminClient = new HttpClient(httpClientHandler)
    {
        BaseAddress = new Uri(_adminUrl)
    };
}
Here ApplicationWrapper.KongAdminClient is a static property.
I have developed a login API, and within that API I invoke a Kong gateway API to generate a token so that I can build a response containing that token for that particular user.
For this purpose I create a new HttpRequestMessage for each request, but the HttpClient stays the same, as Microsoft says:
HttpClient is intended to be instantiated once and re-used throughout the life of an application. Instantiating an HttpClient class for every request will exhaust the number of sockets available under heavy loads
https://learn.microsoft.com/en-us/dotnet/api/system.net.http.httpclient?view=netframework-4.8#remarks
My question is: with this same instance, how will HttpClient know which thread to respond to?
Will this same instance respond to the correct requesting thread under load?
Think about it this way. When you use the Math.Round function, you are effectively just calling a function that does something - in this case, rounding - based on a specific input.
It might reuse some constants and other values internally, but they don't change in a way that affects other calls.
So when you call GetAsync, you are just calling a method that takes some input and returns a value.
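To make the analogy concrete: just as two Math.Round calls cannot interfere with each other, two GetAsync calls on the same client each produce their own Task and their own HttpResponseMessage. A minimal sketch (the URL parameter is a placeholder):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

class WhichThreadExample
{
    private static readonly HttpClient _client = new HttpClient();

    public static async Task<string> FetchAsync(string url)
    {
        // The response is bound to this particular awaited call, not to the
        // client, so concurrent callers can never receive each other's response.
        HttpResponseMessage response = await _client.GetAsync(url);
        return await response.Content.ReadAsStringAsync();
    }
}
```

There is no "which thread to respond to" bookkeeping: the result flows back through the Task returned to whoever awaited it.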

Handling HTTPClient performing multiple requests in fire and forget

I have to make multiple HTTP POST requests (a few hundred, possibly more) and not wait for any of the responses, as I have an SLA. Without waiting for those responses, I need to send back a response from my Web API (before performing the mentioned HTTP requests, I fetch data from another API and need to return that).
I have looked around and found a "fire and forget" implementation that does not wait for the response. I am not sure if this is the right way to do it, and since I'm returning without waiting for the parallel fire-and-forget requests, how will the HttpClient get disposed?
HttpClient client = new HttpClient();
var compositeResponse = await client.GetAsync(_SOMEURL);
List<MEDLogresp> sortedmeds = MEDLogresps.OrderBy(x => x.rxId).ThenBy(y => y.recordActionType).ToList();
Task.Run(() => Parallel.ForEach(sortedmeds, ele => clientMED.PostAsync(URL, new StringContent(JsonConvert.SerializeObject(ele), Encoding.UTF8, "application/json"))));
return ResponseMessage(Request.CreateResponse<CompositeResponse>(HttpStatusCode.OK, compositeResponse));
since I'm returning without waiting for the parallel fire-and-forget
requests, how will HttpClient get disposed?
You can use a single, shared static instance of HttpClient for the lifetime of your application and never Dispose() it. It is safe to use the same HttpClient from multiple threads concurrently.
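As a sketch of that advice (URL and payload are placeholders), a static client can be used for fire-and-forget posts; the only extra care worth taking is observing faults so exceptions aren't silently lost:

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class FireAndForgetExample
{
    // Shared for the application's lifetime; never disposed.
    private static readonly HttpClient _client = new HttpClient();

    public static void PostAndForget(string url, string json)
    {
        // Fire and forget: don't await the result, but catch and log
        // failures so they don't become unobserved task exceptions.
        _ = Task.Run(async () =>
        {
            try
            {
                await _client.PostAsync(url, new StringContent(json, Encoding.UTF8, "application/json"));
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);
            }
        });
    }
}
```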

HttpClient single instance with user specific request headers. (Concurrently)

I am reusing the same HttpClient throughout the application, but I have to set different headers for different users.
I have referred this post : HttpClient single instance with different authentication headers
When I implemented this approach and ran the code concurrently, like this:
Parallel.For(0, 3, i =>
{
    // HttpClient call
});
the threads override each other's data. Could you please let me know how to fix this?
Thanks in Advance.
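One common way out (a sketch, not taken verbatim from the linked post; the bearer-token scheme is an assumption) is to stop writing to the shared DefaultRequestHeaders and put the user-specific headers on each HttpRequestMessage instead, since a request message is never shared between threads:

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class PerRequestHeadersExample
{
    private static readonly HttpClient _client = new HttpClient();

    public static Task<HttpResponseMessage> GetForUserAsync(string url, string userToken)
    {
        // Headers live on the message, so concurrent users cannot
        // overwrite each other's values on the shared client.
        var request = new HttpRequestMessage(HttpMethod.Get, url);
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", userToken);
        return _client.SendAsync(request);
    }
}
```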

Dynamically changing HttpClient.Timeout in .NET

I need to change the HttpClient.Timeout property after it has made one or more requests. When I try, I get an exception:
This instance has already started one or more requests. Properties can only be modified before sending the first request.
Is there any way to avoid this?
There isn't much you can do to change this. This is just default behavior in the HttpClient implementation.
The Timeout property must be set before the GetRequestStream or GetResponse method is called.
From HttpClient.Timeout Remark Section
In order to change the timeout, it would be best to create a new instance of HttpClient:
client = new HttpClient();
client.Timeout = TimeSpan.FromSeconds(20); // set new timeout
Internally, the Timeout property is used to set up a CancellationTokenSource that aborts the async operation when the timeout is reached. Since some overloads of the HttpClient methods accept cancellation tokens, we can create helper methods that apply a custom timeout to a specific operation:
public async Task<string> GetStringAsync(string requestUri, TimeSpan timeout)
{
    using (var cts = new CancellationTokenSource(timeout))
    {
        HttpResponseMessage response = await _httpClient.GetAsync(requestUri, cts.Token);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
Lack of support for custom request-level timeouts has always been a shortcoming of HttpClient in my mind. If you don't mind a small library dependency, Flurl.Http [disclaimer: I'm the author] supports this directly:
"http://api.com/endpoint".WithTimeout(30).GetJsonAsync<T>();
This is a true request-level setting; all calls to the same host use a shared HttpClient instance under the hood, and concurrent calls with different timeouts will not conflict. There's a configurable global default (100 seconds initially, same as HttpClient).

Limiting asynchronous requests without blocking

I am after some advice/strategy on limiting HTTP requests when consuming multiple web services. I feel I could do this if the requests happened synchronously, but they are asynchronous, and I think I should implement the limiting logic in a way that won't block.
Because the web app consumes multiple web services, there will be different limits for different requests. I was thinking of something like this, but I'm not sure how to proceed in a non-blocking manner:
request method:
public static Task<string> AsyncRequest(string url, WebService webService) // WebService is an enum
{
    using (LimitingClass limiter = new LimitingClass(webService))
    {
        // Perform async request
    }
}
The LimitingClass will have logic such as checking the time of the last request for the given web service; if the limit would be violated, it will wait a certain amount of time. But in the meantime, if another request comes in for a different web service, I don't want that request to be blocked while the LimitingClass is waiting. Is there anything fundamentally wrong with this approach? Should I open up a new thread with each LimitingClass instance?
Some pseudo code would be great if possible.
Many Thanks
UPDATE:
This is a simplified version of my current request method:
public static Task<string> MakeAsyncRequest(string url, string contentType)
{
    HttpWebRequest request = // set up my request
    Task<WebResponse> task = Task.Factory.FromAsync(
        request.BeginGetResponse,
        asyncResult => request.EndGetResponse(asyncResult),
        (object)null);
    return task.ContinueWith(t => ReadCallback(t.Result));
}
I just want to wrap this in a using block that checks the limits without blocking other requests.
So how do you limit access to a resource without blocking?
I think you need to look at semaphores: they let you limit the number of threads that can access a resource, or a pool of resources, concurrently.
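As a sketch of that idea (the limit of 2 and the delay are made-up numbers, and Task.Delay stands in for the real HTTP call), SemaphoreSlim.WaitAsync caps concurrency without blocking a thread; callers past the limit simply await their turn. One ThrottledClient instance per web service would give the per-service limits the question asks for:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ThrottledClient
{
    // At most 2 operations in flight; awaiting callers don't block threads.
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(2, 2);

    public async Task<string> RequestAsync(int id)
    {
        await _gate.WaitAsync();
        try
        {
            await Task.Delay(50); // stand-in for the real async HTTP call
            return $"response {id}";
        }
        finally
        {
            _gate.Release();
        }
    }
}

class Demo
{
    static async Task Main()
    {
        var client = new ThrottledClient();
        // Ten requests, but never more than two running at once.
        string[] results = await Task.WhenAll(Enumerable.Range(0, 10).Select(client.RequestAsync));
        Console.WriteLine(results.Length); // prints 10
    }
}
```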
