Consume a web API with multiple HTTP POST requests in C#

So I have two HTTP POST requests consuming a web API, as follows:
using (var client = new HttpClient())
{
    client.BaseAddress = new Uri("https://your_url.com:8443/");
    client.DefaultRequestHeaders.Accept.Clear();
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    // 1st request //
    var datatobeSent = new ApiSendData()
    {
        UserID = "xxx",
        UserPsw = "yyy",
        ApiFunc = "zzz",
        clearData = "x1y1z1"
    };

    HttpResponseMessage response = await client.PostAsJsonAsync("WebApi", datatobeSent);
    var resultData = await response.Content.ReadAsStringAsync();

    #region Extract Secured Data response from WebApi
    JObject JsonObject = JObject.Parse(resultData);
    datatobeSent.SecureData = (string)JsonObject["SecureResult"];
    #endregion

    // 2nd request //
    var datatobeSent2 = new ApiSendData()
    {
        UserID = "xxx",
        UserPsw = "yyy",
        ApiFunc = "zzz",
        SecureData = datatobeSent.SecureData
    };

    HttpResponseMessage response2 = await client.PostAsJsonAsync("WebApi", datatobeSent2);
    var resultData2 = await response2.Content.ReadAsStringAsync();
}
So now I need some clarifications...
1) Are both my HTTP POST requests sent over the same SSL session?
2) If they are not, how can I combine the two and send both requests over a single connection/session?
3) How can I improve the performance of this code? Currently 100 requests take 11 seconds to process and respond. (I just used a for loop that runs the two requests above 100 times.)

They are over the same SSL session and connection. An HttpClient instance shares its configuration and the underlying pool of TCP connections, so requests made through the same instance can reuse the same connection and SSL session. Therefore you should reuse the same instance, which you are already doing within the using block.
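For illustration, this is roughly what a long-lived shared instance can look like (a minimal sketch; the field name is a placeholder, and the base address is copied from your snippet):
// One HttpClient for the lifetime of the application: every request made
// through it draws from the same connection pool, so TCP connections and
// SSL sessions can be reused across calls.
private static readonly HttpClient SharedClient = new HttpClient
{
    BaseAddress = new Uri("https://your_url.com:8443/")
};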
I would try to improve the performance of your code by asynchronously making the post requests and processing the results. Here's an option:
Create a new class to handle these async requests
public class WebHelper
{
    public async Task<string> MakePostRequest(HttpClient client, string route, object dataToBeSent)
    {
        try
        {
            // Reuses the HttpClient passed in, so all requests share one connection pool.
            HttpResponseMessage response = await client.PostAsJsonAsync(route, dataToBeSent);
            string resultData = await response.Content.ReadAsStringAsync();
            return resultData;
        }
        catch (Exception ex)
        {
            return ex.Message;
        }
    }
}
Notice that the same HttpClient instance is being used. In the main code, you could test your performance like this (to simplify the test, I'm just making the post request with the same parameters 100 times):
//Start time measurement
List<Task> TaskList = new List<Task>();
for (int i = 0; i < 100; i++)
{
    Task postTask = Task.Run(async () =>
    {
        WebHelper webRequest = new WebHelper();
        string response = await webRequest.MakePostRequest(client, "WebApi", dataToBeSent);
        Console.WriteLine(response);
    });
    TaskList.Add(postTask);
}
Task.WaitAll(TaskList.ToArray());
//End time measurement
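If the surrounding method is itself async, a non-blocking variation (my suggestion, not part of the original measurement) is to await the tasks instead of blocking on them:
// Frees the calling thread while all 100 requests are in flight.
await Task.WhenAll(TaskList);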
And just one more improvement for your code: use try/catch around the requests!

While HttpClient will aim to reuse the same SSL session and connection, this cannot be guaranteed: it also depends on the server maintaining the session and connection. If the server drops them, HttpClient will renegotiate new ones transparently.
That said, while connection and SSL session establishment do carry some overhead, they are unlikely to be the root cause of any performance issue here.
Given the simplicity of your code above, I strongly suspect that the performance issue is not in your client code, but in the service you are connecting to.
To determine that, you need to isolate the performance of that service. A few options for how to do that, depending on the level of control you have over that service:
If you are the owner of that service, carry out over-the-wire performance testing directly against the service, using a tool like https://www.blitz.io/. If 100 sequential requests from blitz.io take 11 seconds, then the code you have posted is not the culprit; the service itself simply has an average response time of 110ms.
If you are not the owner of that service, use a tool like Mountebank to create an over-the-wire test double of it, and run your existing test loop against that. If the test loop against Mountebank executes quickly, then again, the code you have posted is not the culprit. (Mountebank supports HTTPS, but you will need to either have the keypair or use its self-signed cert and disable certificate verification in your client.)
It's worth noting that it's not unusual for complex web services to take 110ms to respond. Your test appears to issue a sequential series of requests, one at a time, whereas most web services are optimised for handling many requests from independent users in parallel. The scalability challenge is how many concurrent users can be serviced while still keeping the average response time at 110ms. If only a single user is using the service at a time, each request will still take ~110ms.
So an even earlier validation step is to work out whether your test of 100 sequential requests in a loop is a valid representation of your actual performance requirements, or whether you should call your method 100 times in parallel to see whether 100 concurrent users can access your service.
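As a rough sketch of what such a concurrent test could look like (assuming the WebHelper class from the first answer; the count of 100 mirrors the original loop):
// Issue 100 requests concurrently and wait for them all (requires System.Linq).
var webRequest = new WebHelper();
var parallelTasks = Enumerable.Range(0, 100)
    .Select(_ => webRequest.MakePostRequest(client, "WebApi", dataToBeSent))
    .ToArray();
await Task.WhenAll(parallelTasks);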


ASP.NET Core synchronously wait for HTTP request

I have an external endpoint which I call to get a JSON response.
This endpoint initiates a session on a POS device: the device shows the request details and asks the customer to enter his credit card to complete the payment. When the customer finishes, the POS calls the endpoint back and the result is returned to my application.
The problem here is that I need the operation to complete as described in this scenario (synchronously).
When I call this endpoint from Postman, it waits a long time (until the POS receives the request, the customer makes his entries, and the results flow back to the endpoint and on to Postman). This all works fine.
The problem is that when I do this from an ASP.NET Core app, the request does not wait for the endpoint and the response comes back null immediately.
I need something to wait for it.
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
    client.DefaultRequestHeaders.Add("Connection", "keep-alive");
    client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

    var postTask = client.PostAsJsonAsync(new Uri("terminalEndpoint here"), dto); //dto is the request payload
    postTask.Wait();
    var result = postTask.Result;
    if (result.IsSuccessStatusCode)
    {
        //Should hang after this line to wait for POS
        var terminalPaymentResponseDto = result.Content.ReadAsAsync<InitiateTerminalPaymentResponseDto>().Result;
        //Should hit this line after customer finishes with POS device
        return terminalPaymentResponseDto;
    }
}
First of all, there's no need to block. In fact, in an ASP.NET Core application you should avoid blocking as much as possible. Use async and await instead. This allows ASP.NET Core to use the freed threadpool thread for other work.
Second, HttpClient is thread-safe and meant to be reused. Creating a new one every time in a using block leaks sockets. You could use a static instance but a better solution is to use IHttpClientFactory as Make HTTP requests using IHttpClientFactory in ASP.NET Core shows, to both reuse and recycle HttpClient instances automatically.
Finally, there's no reason to add these headers on every call. The Content-Type is set by PostAsJsonAsync anyway, and I suspect the API key doesn't change between calls to the same server either.
In your Startup.cs or Program.cs you can use AddHttpClient to register a named client with the API key already configured:
builder.Services.AddHttpClient("pos", client =>
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
});
After that you can inject IHttpClientFactory into your controllers or pages and call it asynchronously in asynchronous actions or handlers :
public class MyController : ControllerBase
{
    private readonly IHttpClientFactory _httpClientFactory;

    public MyController(IHttpClientFactory httpClientFactory) =>
        _httpClientFactory = httpClientFactory;

    public async Task<InitiateTerminalPaymentResponseDto> PostAsync(MyDTO dto)
    {
        var client = _httpClientFactory.CreateClient("pos");
        var uri = new Uri("terminalEndpoint here");
        var result = await client.PostAsJsonAsync(uri, dto); //dto is the request payload
        if (result.IsSuccessStatusCode)
        {
            //Should hang after this line to wait for POS
            var paymentDto = await result.Content.ReadAsAsync<InitiateTerminalPaymentResponseDto>();
            //Should hit this line after customer finishes with POS device
            return paymentDto;
        }
        else
        {
            //Do whatever is needed in case of error
            throw new HttpRequestException($"Payment request failed: {result.StatusCode}");
        }
    }
}
Using IHttpClientFactory also allows adding retry strategies with Polly, e.g. to recover from a temporary network disconnection.
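As a sketch of what that could look like with the Microsoft.Extensions.Http.Polly package (the client name "pos" matches the registration above; the three-attempt exponential back-off is just an illustration):
builder.Services.AddHttpClient("pos", client =>
{
    client.DefaultRequestHeaders.Add("x-API-Key", "ApiKey");
})
// Retries on 5xx, 408 and HttpRequestException, waiting 1s, 2s, then 4s.
.AddTransientHttpErrorPolicy(policy =>
    policy.WaitAndRetryAsync(3, attempt => TimeSpan.FromSeconds(Math.Pow(2, attempt - 1))));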
Why not use await as below? Make sure to change the enclosing function to async as well:
var postTask = await client.PostAsJsonAsync(new Uri("terminalEndpoint here"), dto);

Improve performance by using LINQ methods

I have used the following code to retrieve the content of a JSON feed, and as you can see I am paging with the Skip and Take methods. The paging works fine, but I can't see any performance improvement, and I know why: every time a new page is requested, the code calls the API again, retrieves the full collection, and only then filters it with Skip and Take. I am looking for a way to apply Skip and Take with my HttpClient so that it only retrieves the records needed for each page. Is it possible? If so, how?
[HttpGet("[action]")]
public async Task<myPaginatedReturnedData> MyMethod(int page)
{
int perPage = 10;
int start = (page - 1) * perPage;
using (HttpClient client = new HttpClient())
{
client.BaseAddress = new Uri("externalAPI");
MediaTypeWithQualityHeaderValue contentType =
new MediaTypeWithQualityHeaderValue("application/json");
client.DefaultRequestHeaders.Accept.Add(contentType);
HttpResponseMessage response = await client.GetAsync(client.BaseAddress);
string content = await response.Content.ReadAsStringAsync();
IEnumerable<myReturnedData> data =
JsonConvert.DeserializeObject<IEnumerable<myReturnedData>>(content);
myPaginatedReturnedData datasent = new myPaginatedReturnedData
{
Count = data.Count(),
myReturnedData = data.Skip(start).Take(perPage).ToList(),
};
return datasent;
}
}
If there isn't any way to do this, I could retrieve the full collection from the external API once and move it to my front end (Angular) to page it there. That way the data is transferred just once and then paged on the client side, which seems much better than making the external API return the full data set to my server every time the client changes the page. Is this correct, and is server-side paging useless for me in this case?
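For what it's worth, server-side paging only helps if the external API itself supports it. If it happens to accept paging parameters (the page and perPage names below are hypothetical; check that API's documentation), you could forward yours instead of filtering locally:
// Hypothetical: only works if the external API understands these parameters.
HttpResponseMessage response =
    await client.GetAsync($"{client.BaseAddress}?page={page}&perPage={perPage}");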

HttpClient takes an excessive amount of time (unless the HTTP version is set to 1.0) when making multiple requests in parallel

After testing with the .NET HttpClient I'm having the following issue: it will do the first 5-6 requests just fine (within 200ms or so), but after that it takes a full 60 seconds before the rest complete, and they all complete nearly at once.
Here is how I've been testing:
var tasks = new List<Task<HttpResponseMessage>>();
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", "secret_jwt");
client.BaseAddress = new Uri("http://myapi/api/");

for (int i = 0; i < 50; i++)
{
    tasks.Add(ProcessUrlAsync("organizations/id/185", client));
}
await Task.WhenAll(tasks);
-
static private async Task<HttpResponseMessage> ProcessUrlAsync(string url, HttpClient client)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    var message = await client.GetAsync(url);
    sw.Stop();
    Console.WriteLine(sw.Elapsed.TotalMilliseconds.ToString());
    return message;
}
and my output is typically
174.5346
127.0873
141.9458
141.7396
153.6638
153.3449
61241.5598
61241.8476
61283.9076
61287.406
61326.0361
61328.7341
61368.6317
etc.
The API I'm using is not the issue; I can point the HttpClient at any address and the same thing occurs.
If I write my own GetAsync method (in a class derived from HttpClient) that sets the HTTP version to 1.0...
public new async Task<HttpResponseMessage> GetAsync(string url)
{
    using (var request = new HttpRequestMessage(HttpMethod.Get, url))
    {
        request.Version = HttpVersion.Version10; //removing this line reproduces the issue!
        return await SendAsync(request);
    }
}
It works fine (all requests complete within a few hundred ms). Why is this, and what can I do to fix it while still using HTTP 1.1? I'm assuming it has something to do with HTTP/1.0 using connection: close and HTTP/1.1 using connection: keep-alive.
I feel rather silly; the solution is as simple as:
ServicePointManager.DefaultConnectionLimit = 100;
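Note that ServicePointManager only applies to the .NET Framework HTTP stack. On .NET Core / .NET 5+ the equivalent knob sits on the handler (a sketch; the limit of 100 mirrors the line above):
// Cap concurrent connections per server on the handler instead.
var handler = new HttpClientHandler { MaxConnectionsPerServer = 100 };
var client = new HttpClient(handler);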
This looks like socket exhaustion with HttpClient. There's lots of information available online if you search for that term.
One solution is to create only one HttpClient and reuse it.
Another option is to add the header Connection: close to your request.
This would explain why switching to HTTP/1.0 seemed to solve the issue, as persistent (keep-alive) connections were added in HTTP/1.1.
As a side note for anyone who comes across this: you can't send the same HttpRequestMessage twice; you need to create a new one for the second request.
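A minimal sketch of that (url and client are assumed to exist in scope):
// Reusing a request message throws InvalidOperationException,
// so build a fresh HttpRequestMessage for every send.
using (var request = new HttpRequestMessage(HttpMethod.Get, url))
{
    await client.SendAsync(request);
}
using (var secondRequest = new HttpRequestMessage(HttpMethod.Get, url))
{
    await client.SendAsync(secondRequest);
}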

SQL Server Service Broker WAITFOR with C# HttpClient and a long-open HTTP request

I am working on a system that uses HttpClient to GET an HTTP route like /api/GUID and hold the connection open until another system sends a message to SQL Server notifying that the work is completed. Once the message is received, the connection ends and my client continues on with the data it was requesting.
When I test this with curl or Postman I get the intended results.
I get intermittent issues with the C# HttpClient. Sometimes I get my data immediately. Other times HttpClient stays open, and after its timeout expires the message is shown in my test app despite a task-cancellation exception being thrown. Other times nothing is returned and a timeout occurs, or I receive only an empty string for my data. This could well be related to our Service Broker implementation, but I want to rule out everything about HttpClient. I have tried threads, tasks, async/await, and .Result, and I am getting the same results so far. It is very likely related to our Service Broker WAITFOR, but any insight would be great. The issue tends to affect connections that come in rapidly from a single source.
public string GetSynchronous(string url)
{
    var tokenSource = new CancellationTokenSource();
    var cancellationToken = tokenSource.Token;
    return _httpClient.GetAsync(url, cancellationToken).Result.Content.ReadAsStringAsync().Result;
}

public void PostAsync(string url, JObject jsonBody)
{
    var content = new StringContent(jsonBody.ToString(), Encoding.UTF8, "application/json");
    var tokenSource = new CancellationTokenSource();
    var cancellationToken = tokenSource.Token;
    var result = _httpClient.PostAsync(url, content, cancellationToken).Result;
}

private static readonly HttpClient _httpClient = new HttpClient
{
    Timeout = TimeSpan.FromSeconds(60),
    DefaultRequestHeaders =
    {
        Accept = { new MediaTypeWithQualityHeaderValue("application/json") },
        ConnectionClose = true
    },
};
class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Hit enter to start");
        Console.ReadLine();

        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
        ServicePointManager.DefaultConnectionLimit = 100;

        var tests = new Tests();
        tests.CallSynchronizer();
        tests.CallSynchronizerPostFirst();

        Console.WriteLine("Hit enter to end");
        Console.ReadLine();
    }
}
Service Broker is designed for long exchanges (hours, days, weeks). Waiting for a response inside the HTTP request is definitely an anti-pattern. More fundamentally, if concurrent requests use this pattern, there is nothing preventing each of them from receiving the response intended for another request.
Your HTTP requests should post the work to do (i.e. issue a SEND) and return immediately. The response, whenever it comes, should activate a stored procedure that updates some state and writes the result. From HTTP you can then poll for the result and return the current status immediately. Your app should handle Service Broker responses that arrive after a week the same way as responses that arrive in 10ms.
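A rough sketch of that shape in an ASP.NET Core controller (the WorkRequest type and the _queue/_store helpers are invented for illustration; the real SEND and state store depend on your schema):
[HttpPost("api/work")]
public async Task<IActionResult> StartWork(WorkRequest dto)
{
    var id = Guid.NewGuid();
    await _queue.SendAsync(id, dto); // issue the Service Broker SEND, return at once
    return Accepted($"/api/work/{id}");
}

[HttpGet("api/work/{id}")]
public async Task<IActionResult> GetStatus(Guid id)
{
    // State written by the activated stored procedure when the response arrives.
    var status = await _store.FindAsync(id);
    if (status == null) return NotFound();
    return Ok(status);
}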
Ultimately this ended up being related to how we load-balance our application and to the queue name set in SqlDependency.Start. This service was set up in an anti-pattern way in order to migrate old code off of a really unreliable set of services.

Amazon S3 Request Overload

I have a WebApi Route that makes asynchronous requests of Amazon S3. The number of requests is in the thousands. It looks something like this:
for (var i = 0; i < pdfs.Count; i++)
{
    //stuff ...
    var getPdfClient = new RestClient(pdf.Url);
    var getPdfRequest = new RestRequest(Method.GET);
    getPdfRequest.AddParameter("pdfIndex", i);
    getPdfClient.ExecuteAsync(getPdfRequest, getPdfResponse => // Download the PDF async
    {
        //do stuff on response
    });
}
As written, this results in a Timeout response once enough requests are in flight.
The problem does not go away by increasing the request timeout (request.Timeout = 14400000;).
The problem does go away by putting a Thread.Sleep(100) before ExecuteAsync, which leads me to believe the timeout is the result of bombarding S3 with too many simultaneous requests.
Obviously Thread.Sleep is not the right answer. What should I change to correctly interface with S3 under this load?
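One common pattern is to cap the number of requests in flight with a SemaphoreSlim rather than sleeping (a sketch, not RestSharp-specific; the limit of 20 is arbitrary and httpClient is assumed to be a shared HttpClient instance):
var throttle = new SemaphoreSlim(20); // at most 20 concurrent S3 downloads
var downloadTasks = pdfs.Select(async pdf =>
{
    await throttle.WaitAsync();
    try
    {
        // Any awaitable download call works here.
        var bytes = await httpClient.GetByteArrayAsync(pdf.Url);
        //do stuff on response
    }
    finally
    {
        throttle.Release();
    }
});
await Task.WhenAll(downloadTasks);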
