I have an external endpoint that I call to get a JSON response.
This endpoint initiates a session on a POS device: the device shows the request details and asks the customer to insert their credit card to complete the payment. When the customer finishes, the POS calls the endpoint back and the endpoint returns the result to my application.
The problem here is that I need the operation to complete as described in this scenario (synchronously).
When I call this endpoint from Postman, it waits a long time (until the POS receives the request, the customer makes their entries, the result goes back to the endpoint, and the endpoint returns it to Postman) ... this all works fine.
The problem is that when I do this from an ASP.NET Core app, the request does not wait for the endpoint and the response comes back as null right away.
I need some way to wait for it.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
    client.DefaultRequestHeaders.Add("Connection", "keep-alive");
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
    var postTask = client.PostAsJsonAsync(new Uri("terminalEndpoint here"), dto); // dto is the request payload
    postTask.Wait();
    var result = postTask.Result;
    if (result.IsSuccessStatusCode)
    {
        //Should hang after this line to wait for POS
        var terminalPaymentResponseDto = result.Content.ReadAsAsync<InitiateTerminalPaymentResponseDto>().Result;
        //Should hit this line after customer finishes with POS device
        return terminalPaymentResponseDto;
    }
}
First of all, there's no need to block. In fact, in an ASP.NET Core application you should avoid blocking as much as possible. Use async and await instead. This allows ASP.NET Core to use the freed threadpool thread for other work.
Second, HttpClient is thread-safe and meant to be reused. Creating a new one in a using block on every call leaks sockets. You could use a static instance, but a better solution is IHttpClientFactory, as shown in Make HTTP requests using IHttpClientFactory in ASP.NET Core, which reuses and recycles HttpClient instances automatically.
Finally, there's no reason to add those headers on every call. The Content-Type header is set by PostAsJsonAsync anyway, and I suspect the API key doesn't change between calls to the same server either.
In your Program.cs (or Startup.cs) you can use AddHttpClient to register a named client and configure the API key once (the name "terminal" here is just an example):
builder.Services.AddHttpClient("terminal", client =>
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
});
After that, you can inject IHttpClientFactory into your controllers or pages and call it from asynchronous actions or handlers:
public class MyController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public MyController(IHttpClientFactory httpClientFactory) =>
        _httpClientFactory = httpClientFactory;

    public async Task<InitiateTerminalPaymentResponseDto> PostAsync(MyDTO dto)
    {
        var client = _httpClientFactory.CreateClient("terminal");
        var uri = new Uri("terminalEndpoint here");
        var result = await client.PostAsJsonAsync(uri, dto); // dto is the request payload
        if (result.IsSuccessStatusCode)
        {
            //Should hang after this line to wait for POS
            var paymentDto = await result.Content.ReadAsAsync<InitiateTerminalPaymentResponseDto>();
            //Should hit this line after customer finishes with POS device
            return paymentDto;
        }
        else
        {
            //Do whatever is needed in case of error, e.g. throw or return an error result
            throw new HttpRequestException($"Payment request failed with status {result.StatusCode}");
        }
    }
}
Using HttpClientFactory also allows adding retry strategies with Polly, e.g. to recover from a temporary network disconnection.
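For example, here is a minimal sketch of wiring a retry policy into the named client registered above. It assumes the Microsoft.Extensions.Http.Polly package; the client name "terminal" matches the registration above, and the retry count and back-off values are just illustrative:
builder.Services.AddHttpClient("terminal", client =>
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
})
// Retry transient HTTP errors (5xx, 408 and network failures) with exponential back-off.
.AddTransientHttpErrorPolicy(policy =>
    policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt))));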
Why not use await, as below? And make sure to change the enclosing method to async as well.
var postTask = await client.PostAsJsonAsync(new Uri("terminalEndpoint here"), dto);
After testing with the .NET HttpClient I'm having the following issue: it will do the first 5-6 requests just fine (within 200 ms or so), but after that it takes a full 60 seconds before the rest complete - and they all complete nearly at once.
Here is how I've been testing:
var tasks = new List<Task<HttpResponseMessage>>();
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", "secret_jwt");
client.BaseAddress = new Uri("http://myapi/api/");

for (int i = 0; i < 50; i++)
{
    tasks.Add(ProcessUrlAsync("organizations/id/185", client));
}

await Task.WhenAll(tasks);
-
static private async Task<HttpResponseMessage> ProcessUrlAsync(string url, HttpClient client)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    var message = await client.GetAsync(url);
    sw.Stop();
    Console.WriteLine(sw.Elapsed.TotalMilliseconds.ToString());
    return message;
}
and my output is typically
174.5346
127.0873
141.9458
141.7396
153.6638
153.3449
61241.5598
61241.8476
61283.9076
61287.406
61326.0361
61328.7341
61368.6317
etc.
The API I'm using is not the issue - I can point the HttpClient at any address and the same thing occurs.
If I write my own GetAsync method that sets the HTTP version to 1.0...
public new async Task<HttpResponseMessage> GetAsync(string url)
{
    using (var request = new HttpRequestMessage(HttpMethod.Get, url))
    {
        request.Version = HttpVersion.Version10; // removing this will reproduce the issue!
        return await SendAsync(request);
    }
}
It works fine (all complete within a few hundred ms). Why is this, and what can I do to fix it whilst still using HTTP/1.1? I'm assuming it's something to do with HTTP/1.0 defaulting to Connection: close and HTTP/1.1 defaulting to Connection: keep-alive.
I feel rather silly; the solution is as simple as
ServicePointManager.DefaultConnectionLimit = 100;
This looks like socket exhaustion with HttpClient. There's lots of info available online if you search for that.
One solution is to create only one HttpClient and re-use it.
Another option is to add the header Connection: close to your request.
This would explain why switching to HTTP/1.0 seemed to solve the issue, since keeping connections open by default was introduced in HTTP/1.1.
As a side note to anyone who comes across this, you can't send the same HttpRequestMessage twice. You will need to create a new one for the second request.
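As a minimal sketch of the "create one HttpClient and re-use it" suggestion together with that side note (the method name is just illustrative): one shared HttpClient for the lifetime of the application, and a fresh HttpRequestMessage for every send.
// One HttpClient reused for all requests; HttpClient is thread-safe for concurrent sends.
private static readonly HttpClient SharedClient = new HttpClient();

private static async Task<HttpResponseMessage> SendOnceAsync(string url)
{
    // A request message can only be sent once, so build a new one per call.
    using (var request = new HttpRequestMessage(HttpMethod.Get, url))
    {
        return await SharedClient.SendAsync(request);
    }
}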
I am working on a system that uses HttpClient to GET an HTTP route like /api/GUID and hold the connection open until another system sends a message to SQL Server to signal that the work is completed. Once the message is received, the connection ends and my client continues on with the data it was requesting.
When I test this with curl or Postman I get the intended results.
I get intermittent issues with the C# HttpClient. Sometimes I get my data immediately. Other times HttpClient stays open and, after its timeout expires, the message shows up in my test app despite a task-cancellation exception being thrown. Other times nothing is returned and a timeout occurs, or I only receive an empty string for my data. This could very well be related to our Service Broker implementation, but I want to rule out everything about HttpClient. I have tried threads, tasks, lots of async/await, and using .Result, and I am getting the same results so far. It is most likely related to our Service Broker WAITFOR, but any insight would be great. The issue tends to occur with connections that come in rapidly from a single source.
public string GetSynchronous(string url)
{
    var tokenSource = new CancellationTokenSource();
    var cancellationToken = tokenSource.Token;
    return _httpClient.GetAsync(url, cancellationToken).Result.Content.ReadAsStringAsync().Result;
}

public void PostAsync(string url, JObject jsonBody)
{
    var content = new StringContent(jsonBody.ToString(), Encoding.UTF8, "application/json");
    var tokenSource = new CancellationTokenSource();
    var cancellationToken = tokenSource.Token;
    var result = _httpClient.PostAsync(url, content, cancellationToken).Result;
}

private static readonly HttpClient _httpClient = new HttpClient
{
    Timeout = TimeSpan.FromSeconds(60),
    DefaultRequestHeaders =
    {
        Accept = { new MediaTypeWithQualityHeaderValue("application/json") },
        ConnectionClose = true
    },
};
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hit enter to start");
        Console.ReadLine();

        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
        ServicePointManager.DefaultConnectionLimit = 100;

        var tests = new Tests();
        tests.CallSynchronizer();
        tests.CallSynchronizerPostFirst();

        Console.WriteLine("Hit enter to end");
        Console.ReadLine();
    }
}
Service Broker is designed for long exchanges (hours, days, weeks). Waiting for a response inside an HTTP request is definitely an anti-pattern. More fundamentally, if concurrent requests use this pattern, there is nothing preventing each of them from receiving the response intended for another request.
Your HTTP requests should post the work to do (i.e. issue a SEND) and return immediately. The response, when it comes, should activate a stored procedure that updates some state and writes the result. Over HTTP you can then poll for the result and return the current status immediately, as sketched below. Your app should handle Service Broker responses that arrive after a week the same way as responses that arrive in 10 ms.
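A minimal sketch of that shape, assuming ASP.NET Core controllers. IWorkQueue, IWorkStatusStore and WorkRequest are hypothetical abstractions standing in for the Service Broker SEND and for the state table the activated procedure updates:
// Sketch only: "post the work, poll for the result" instead of holding the HTTP connection open.
public class WorkController : ControllerBase
{
    private readonly IWorkQueue _workQueue;
    private readonly IWorkStatusStore _statusStore;

    public WorkController(IWorkQueue workQueue, IWorkStatusStore statusStore)
    {
        _workQueue = workQueue;
        _statusStore = statusStore;
    }

    [HttpPost("api/work")]
    public async Task<IActionResult> StartWork([FromBody] WorkRequest dto)
    {
        var id = Guid.NewGuid();
        await _workQueue.SendAsync(id, dto);   // issue the SEND and return immediately
        return Accepted(new { id });
    }

    [HttpGet("api/work/{id}")]
    public async Task<IActionResult> GetStatus(Guid id)
    {
        var status = await _statusStore.GetAsync(id); // state written by the activated procedure
        return Ok(status);                            // pending or completed, returned right away
    }
}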
Ultimately this ended up being related to how we are load balancing our application and to the queue name set in SqlDependency.Start. This is a service that is set up in an anti-pattern way to migrate old code off of a really unreliable set of services.
So I have 2 HTTP POST requests consuming a web API as follows:
using (var client = new HttpClient())
{
    client.BaseAddress = new Uri("https://your_url.com:8443/");
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    // 1st request //
    var datatobeSent = new ApiSendData()
    {
        UserID = "xxx",
        UserPsw = "yyy",
        ApiFunc = "zzz",
        clearData = "x1y1z1"
    };

    HttpResponseMessage response = await client.PostAsJsonAsync("WebApi", datatobeSent);
    var resultData = await response.Content.ReadAsStringAsync();

    #region Extract Secured Data response from WebApi
    JObject JsonObject = JObject.Parse(resultData);
    datatobeSent.SecureData = (string)JsonObject["SecureResult"];
    #endregion

    // 2nd request //
    var datatobeSent2 = new ApiSendData()
    {
        UserID = "xxx",
        UserPsw = "yyy",
        ApiFunc = "zzz",
        SecureData = datatobeSent.SecureData
    };

    HttpResponseMessage response2 = await client.PostAsJsonAsync("WebApi", datatobeSent2);
    var resultData2 = await response2.Content.ReadAsStringAsync();
}
So now I need some clarifications...
1) Are both my HTTP POST requests sent over the same SSL session?
2) If they are not, how can I combine the two and send both requests over a single connection/session?
3) How can I improve the performance of this code? Currently 100 requests take 11 seconds to process and respond. (I just used a for loop that runs the above 2 requests 100 times.)
They are over the same SSL session and connection. The same HttpClient instance shares configuration and the underlying TCP connections, which is why you should reuse the same instance - and you are already doing that within the using statement.
I would try to improve the performance of your code by making the POST requests concurrently and processing the results asynchronously. Here's an option:
Create a new class to handle these async requests:
public class WebHelper
{
    public async Task<string> MakePostRequest(HttpClient client, string route, object dataToBeSent)
    {
        try
        {
            HttpResponseMessage response = await client.PostAsJsonAsync(route, dataToBeSent);
            string resultData = await response.Content.ReadAsStringAsync();
            return resultData;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}
Notice that the same HttpClient instance is being used. In the main code, you could test your performance like this (to simplify the test, I'm just making the POST request with the same parameters 100 times):
//Start time measurement
List<Task> TaskList = new List<Task>();

for (int i = 0; i < 100; i++)
{
    Task postTask = Task.Run(async () =>
    {
        WebHelper webRequest = new WebHelper();
        string response = await webRequest.MakePostRequest(client, "WebApi", dataToBeSent);
        Console.WriteLine(response);
    });
    TaskList.Add(postTask);
}

Task.WaitAll(TaskList.ToArray());
//end time measurement
And just an improvement for your code: wrap the requests in try/catch blocks!
While HttpClient will aim to use the same SSL session and connection, this cannot be guaranteed, as it depends on the server also maintaining the session and connection. If the server drops them, HttpClient will renegotiate new connections and sessions transparently.
That said, while there is some overhead from connection and SSL session establishment, it's unlikely to be the root cause of any performance issues.
Given the simplicity of your code above, I strongly suspect that the performance issue is not in your client code, but in the service you are connecting to.
To determine that, you need to isolate the performance of that service. A few options for how to do that, depending on the level of control you have over that service:
If you are the owner of that service, carry out over the wire performance testing directly against the service, using a tool like https://www.blitz.io/. If 100 sequential requests from blitz.io takes 11 seconds, then the code you have posted is not the culprit, as the service itself just has an average response time of 110ms.
If you are not the owner of that service, use a tool like Mountebank to create an over the wire test double of that service, and use your existing test loop. If the test loop using Mountebank executes quickly, then again, the code you have posted is not the culprit. (Mountebank supports https, but you will need to either have the keypair, or use their self-signed cert and disable certificate verification in your client.)
It's worth noting that it's not unusual for complex web services to take 110 ms to respond. Your test appears to be doing a sequential series of requests - one request at a time - whereas most web services are optimised around handling many requests from independent users in parallel. The scalability challenge is how many concurrent users you can service while still keeping the average response time around 110 ms. But if only a single user is using the service at a time, it will still take ~110 ms.
So an even earlier validation step is to work out whether your test of 100 sequential requests in a loop is a valid representation of your actual performance requirements, or whether you should call your method 100 times in parallel (see the sketch below) to check that 100 concurrent users can access your service.
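As a rough sketch of that parallel variant, reusing the client and datatobeSent from the question (the request count is just the one from your test), and timing the whole batch rather than each call:
// Fire the same request 100 times concurrently and time the whole batch (requires System.Linq).
var stopwatch = System.Diagnostics.Stopwatch.StartNew();

var tasks = Enumerable.Range(0, 100)
    .Select(_ => client.PostAsJsonAsync("WebApi", datatobeSent))
    .ToList();

await Task.WhenAll(tasks);

stopwatch.Stop();
Console.WriteLine($"100 concurrent requests took {stopwatch.Elapsed.TotalSeconds:F1} s in total");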
I'm using HttpClient to make requests to a Web API.
I have written this code:
public async Task<string> ExecuteGetHttp(string url, Dictionary<string, string> headers = null)
{
    using (var client = new HttpClient())
    {
        client.BaseAddress = new Uri(url);
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        if (headers != null)
        {
            foreach (var header in headers)
            {
                client.DefaultRequestHeaders.Add(header.Key, header.Value);
            }
        }

        var response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}
Now I'm calling this method from my action:
public async Task<ActionResult> Index()
{
try
{
RestWebRequest RestWebRequest = new RestWebRequest();
Dictionary<string, string> headers = new Dictionary<string, string>();
headers.Add("Authorization", "bearer _AxE9GWUO__8iIGS8stK1GrXuCXuz0xJ8Ba_nR1W2AhhOWy9r98e2_YquUmjFsAv1RcI94ROKEbiEjFVGmoiqmUU7qB5_Rjw1Z3FWMtzEc8BeM60WuIuF2fx_Y2FNTE_6XRhXce75MNf4-i0HbygnClzqDdrdG_B0hK6u2H7rtpBFV0BYZIUqFuJpkg4Aus85P8_Rd2KTCC5o6mHPiGxRl_yGFFTTL4_GvSuBQH39RoMqNj94A84KlE0hm99Yk-8jY6AKdxGRoEhtW_Ddow9FKWiViSuetcegzs_YWiPMN6kBFhY401ON_M_aH067ciIu6nZ7TiIkD5GHgndMvF-dYt3nAD95uLaqX6t8MS-WS2E80h7_AuaN5JZMOEOJCUi7z3zWMD2MoSwDtiB644XdmQ5DcJUXy_lli3KKaXgArJzKj85BWTAQ8xGXz3PyVo6W8swRaY5ojfnPUmUibm4A2lkRUvu7mHLGExgZ9rOsW_BbCDJq6LlYHM1BnAQ_W6LAE5P-DxMNZj7PNmEP1LKptr2RWwYt17JPRdN27OcSvZZdam6YMlBW00Dz2T2dgWqv7LvKpVhMpOtjOSdMhDzWEcf6yqr4ldVUszCQrPfjfBBtUdN_5nqcpiWlPx3JTkx438i08Ni8ph3gDQQvl3YL5psDcdwh0-QtNjEAGvBdQCwABvkbUhnIQQo_vwA68ITg07sEYgCl7Sql5IV7bD_x-yrlHyaVNtCn9C4zVr5ALIfj0YCuCyF_l1Z1MTRE7nb");
var getCategories = await RestWebRequest.ExecuteGetHttp("http://localhost:53646/api/Job/GetAllCategories?isIncludeChild=true", headers);
}
catch (HttpRequestException ex)
{
return View();
}
return View();
}
Now, it is said that HttpClient has been designed to be re-used for multiple calls.
How can I use the same HttpClient object for multiple calls?
Let's suppose
First I'm calling
http://localhost:53646/api/Job/GetAllCategories?isIncludeChild=true
Now in the same controller I have to call another API with a different header and a different URL.
http://localhost:53646/api/Job/category/10
Should I make a global HttpClient object and use the same object for all API calls?
The challenge in using just one HttpClient across your application comes when you want to use different credentials or vary the default headers for your requests (or anything in the HttpClientHandler passed in). In that case you will need a set of purpose-specific HttpClients to re-use, since using just one will be problematic.
I suggest creating an HttpClient per "type" of request you wish to make, and re-using those: e.g. one for each credential you need, and, if you have a few sets of default headers, one for each of those.
It can be a bit of a juggling act between the HttpClient properties (which are not thread-safe and need their own instance if they are being varied):
- BaseAddress
- DefaultRequestHeaders
- MaxResponseContentBufferSize
- Timeout
and what you can pass in to the "verb" methods (GET, PUT, POST etc.). For example, using the HttpClient.PostAsync(String, HttpContent) overload you can specify your headers on the HttpContent (and not have to put them in the HttpClient DefaultRequestHeaders).
All of the async methods on HttpClient (PostAsync etc.) are thread-safe.
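For illustration, a minimal sketch of the per-request-headers approach with a single shared client (the method name, token and payload parameters are just illustrative):
// One shared client; the varying headers go on the individual request/content,
// not on the client's DefaultRequestHeaders.
private static readonly HttpClient SharedClient = new HttpClient();

private static async Task<string> PostWithPerRequestHeadersAsync(string url, string token, string json)
{
    using (var request = new HttpRequestMessage(HttpMethod.Post, url))
    {
        request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token);
        request.Content = new StringContent(json, Encoding.UTF8, "application/json");

        var response = await SharedClient.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}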
Just because you can, doesn't mean you should.
You don't have to, but you can reuse the HttpClient, for example when you want to issue many HTTP requests in a tight loop. This saves the tiny fraction of time it takes to instantiate the object.
Your MVC controller is instantiated for every request, so it won't cost any significant amount of time to instantiate an HttpClient at the same time. Remember you're going to issue an HTTP request with it, which will take orders of magnitude more time than the instantiation ever will.
If you do insist you want to reuse one instance, because you have benchmarked it and found the instantiation of HttpClient to be your greatest bottleneck, then you can take a look at dependency injection and inject a single instance into every controller that needs it, as in the sketch below.
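A minimal sketch of that, assuming ASP.NET Core's built-in DI (the controller name is illustrative):
// In ConfigureServices: register a single HttpClient for the whole application.
services.AddSingleton(new HttpClient());

// In any controller that needs it: take the shared instance via constructor injection.
public class HomeController : Controller
{
    private readonly HttpClient _httpClient;

    public HomeController(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }
}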
In .NET Core you can do the same with HttpClientFactory, something like this:
public interface IBuyService
{
    Task<Buy> GetBuyItems();
}

public class BuyService : IBuyService
{
    private readonly HttpClient _httpClient;

    public BuyService(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<Buy> GetBuyItems()
    {
        var uri = "Uri";
        var responseString = await _httpClient.GetStringAsync(uri);
        var buy = JsonConvert.DeserializeObject<Buy>(responseString);
        return buy;
    }
}
In ConfigureServices:
services.AddHttpClient<IBuyService, BuyService>(client =>
{
    client.BaseAddress = new Uri(Configuration["BaseUrl"]);
});
Documentation and examples are available in Make HTTP requests using IHttpClientFactory in ASP.NET Core in the Microsoft docs.
I wrote a simple client for my Web API web service as explained in this tutorial:
http://www.asp.net/web-api/overview/web-api-clients/calling-a-web-api-from-a-net-client
The first request (GET a table) executes successfully. That is, table data is fetched from the web service and bound to a TableContent object.
But the program is totally blocked before executing the second request (GET table list); nothing happens, not even an error message!
If I comment out the code section related to the first request (GET a table), the second request (GET table list) executes successfully.
What is happening here? Why can this client execute only a single GET request?
using (var client = new HttpClient())
{
    client.BaseAddress = new Uri("http://localhost:56510/");
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    HttpResponseMessage response;

    // GET a table from web service
    response = await client.GetAsync("api/table/GetMatrixTable");
    if (response.IsSuccessStatusCode)
    {
        var tblcont = await response.Content.ReadAsAsync<TableContent>();
        // do things ...
    }

    // GET a table list from web service
    response = await client.GetAsync("api/table/GetTableList");
    if (response.IsSuccessStatusCode)
    {
        var TblContList = await response.Content.ReadAsAsync<IList<TableContent>>();
        // do things ...
    }
}
Looking at the following SO question and its accepted answer (maybe not directly relevant but still useful in its own right):
HttpClient.GetAsync(...) never returns when using await/async
it might help if you replace your calls to GetAsync() as follows:
response = await client.GetAsync("api/table/GetMatrixTable",
HttpCompletionOption.ResponseHeadersRead).ConfigureAwait(false);
. . .
response = await client.GetAsync("api/table/GetTableList",
HttpCompletionOption.ResponseHeadersRead).ConfigureAwait(false);
I'm not able to test that, so I can't take all the credit if it works.