I have a WebApi route that makes asynchronous requests to Amazon S3. The number of requests is in the thousands. It looks something like this:
for (var i = 0; i < pdfs.Count; i++)
{
    //stuff ...
    var pdf = pdfs[i];
    var getPdfClient = new RestClient(pdf.Url);
    var getPdfRequest = new RestRequest(Method.GET);
    getPdfRequest.AddParameter("pdfIndex", i);
    getPdfClient.ExecuteAsync(getPdfRequest, getPdfResponse => // Download the PDF async
    {
        //do stuff on response
    });
}
As written, this results in a Timeout response after so many requests.
The problem does not go away by increasing the request timeout (request.Timeout = 14400000;).
The problem does go away by putting a Thread.Sleep(100) before ExecuteAsync. This leads me to believe the timeout is the result of bombarding S3 with so many requests.
Obviously Thread.Sleep is not the right answer. What should I change to correctly interface with S3 under this load?
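One common fix for this situation is to cap the number of in-flight requests with a SemaphoreSlim rather than sleeping. This is only a sketch built on the question's names (pdfs, pdf.Url); the limit of 20 is an illustrative guess, and the awaitable ExecuteTaskAsync method is an assumption about the RestSharp version in use (newer versions expose an awaitable ExecuteAsync instead):

```csharp
// Sketch: throttle concurrency instead of using Thread.Sleep.
var throttle = new SemaphoreSlim(20); // max 20 concurrent S3 requests; tune as needed
var tasks = new List<Task>();
for (var i = 0; i < pdfs.Count; i++)
{
    var index = i;          // capture the loop variable safely
    var pdf = pdfs[index];
    tasks.Add(Task.Run(async () =>
    {
        await throttle.WaitAsync();
        try
        {
            var client = new RestClient(pdf.Url);
            var request = new RestRequest(Method.GET);
            request.AddParameter("pdfIndex", index);
            var response = await client.ExecuteTaskAsync(request);
            // do stuff on response
        }
        finally
        {
            throttle.Release(); // always free the slot, even on failure
        }
    }));
}
await Task.WhenAll(tasks);
```

You may also need to raise ServicePointManager.DefaultConnectionLimit, since the default caps concurrent connections per host.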
Related
After testing with the .NET HttpClient I'm having the following issue: it will do the first 5-6 requests just fine (within 200 ms or so), but after that it's a full 60 seconds before the rest complete, and they all complete nearly at once.
Here is how I've been testing:
var tasks = new List<Task<HttpResponseMessage>>();
var client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", "secret_jwt");
client.BaseAddress = new Uri("http://myapi/api/");
for (int i = 0; i < 50; i++)
{
tasks.Add(ProcessUrlAsync("organizations/id/185", client));
}
await Task.WhenAll(tasks);
static private async Task<HttpResponseMessage> ProcessUrlAsync(string url, HttpClient client)
{
var sw = System.Diagnostics.Stopwatch.StartNew();
var message = await client.GetAsync(url);
sw.Stop();
Console.WriteLine(sw.Elapsed.TotalMilliseconds.ToString());
return message;
}
and my output is typically
174.5346
127.0873
141.9458
141.7396
153.6638
153.3449
61241.5598
61241.8476
61283.9076
61287.406
61326.0361
61328.7341
61368.6317
etc.
It's not the API I'm using that's the issue; I can point the HttpClient at any address and the same issue occurs.
If I write my own GetAsync method that sets the Http version to 1.0...
public new async Task<HttpResponseMessage> GetAsync(string url)
{
using (var request = new HttpRequestMessage(HttpMethod.Get, url))
{
request.Version = HttpVersion.Version10; //removing this will reproduce the issue!
return await SendAsync(request);
}
}
It works fine (all complete within a few hundred ms). Why is this, and what can I do to fix it whilst still using Http 1.1? I'm assuming it's something to do with 1.0 having connection : close and 1.1 having connection: keep-alive
I feel rather silly; the solution is as simple as
ServicePointManager.DefaultConnectionLimit = 100;
This looks like Socket exhaustion with HttpClient. There's lots of info available online if you search for that.
One solution is to only create one HttpClient and re-use that.
Another option is to add the header Connection: close to your request.
This would explain why switching to HTTP/1.0 seemed to solve the issue, since persistent (keep-alive) connections only became the default in HTTP/1.1.
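If you want to stay on HTTP/1.1 and try the Connection: close route, a minimal sketch (assuming `client` is an existing HttpClient and `url` is a valid address):

```csharp
// Sketch: ask the server to close the connection after this response.
// Setting ConnectionClose = true emits a "Connection: close" request header.
var request = new HttpRequestMessage(HttpMethod.Get, url);
request.Headers.ConnectionClose = true;
var response = await client.SendAsync(request);
```

Note that each HttpRequestMessage can only be sent once, so build a fresh one per request.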
As a side note to anyone who comes across this, you can't send the same HttpRequestMessage twice. You will need to create a new one for the second request.
I am working on a system that uses HTTPClient to get a HTTP route like /api/GUID and holds the connection open until another system sends a message to SQL Server to notify that the work is completed. Once the message is received the connection ends and my client continues on with its data it was requesting.
When I test this with curl or postman I get the intended results.
I get intermittent issues with the C# HttpClient. Sometimes I get my data immediately. Other times HttpClient will stay open and, after its timeout expires, the message is shown in my test app despite a task-cancellation exception being thrown. Other times nothing is returned and a timeout occurs, or I only receive an empty string for my data. This could well be related to our Service Broker implementation, but I want to rule out everything about HttpClient. I have tried threads, tasks, async/await, and blocking with .Result, and I am getting the same results so far. It is most likely related to our Service Broker WAITFOR, but any insight would be great. Connections that come in rapidly from a single source tend to show this pattern.
public string GetSynchronous(string url)
{
var tokenSource = new CancellationTokenSource();
var cancellationToken = tokenSource.Token;
return _httpClient.GetAsync(url, cancellationToken).Result.Content.ReadAsStringAsync().Result;
}
public void PostAsync(string url, JObject jsonBody)
{
var content = new StringContent(jsonBody.ToString(), Encoding.UTF8, "application/json");
var tokenSource = new CancellationTokenSource();
var cancellationToken = tokenSource.Token;
var result = _httpClient.PostAsync(url, content, cancellationToken).Result;
}
private static readonly HttpClient _httpClient = new HttpClient {
Timeout = TimeSpan.FromSeconds(60),
DefaultRequestHeaders =
{
Accept = { new MediaTypeWithQualityHeaderValue("application/json")},
ConnectionClose = true
},
};
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Hit enter to start");
Console.ReadLine();
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
ServicePointManager.DefaultConnectionLimit = 100;
var tests = new Tests();
tests.CallSynchronizer();
tests.CallSynchronizerPostFirst();
Console.WriteLine("Hit enter to end");
Console.ReadLine();
}
}
Service Broker is designed for long exchanges (hours, days, weeks). Waiting for the response inside the HTTP request is definitely an anti-pattern. More fundamentally, if concurrent requests use this pattern, there is nothing preventing each of them from receiving the response intended for another request.
Your HTTP requests should post the work to do (i.e. issue a SEND) and return immediately. The response, when it comes, should activate a stored procedure that updates some state and writes the result. From HTTP you could then poll for the result and return the current status immediately. Your app should handle a Service Broker response that arrives after a week the same way as one that arrives in 10 ms.
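A sketch of that shape as two endpoints; all names here (WorkRequest, SendToBroker, _store, the routes) are hypothetical, and the attribute style assumes ASP.NET Web API 2:

```csharp
// Sketch: enqueue the work and return immediately; poll a second endpoint for status.
public class WorkController : ApiController
{
    [HttpPost, Route("api/work")]
    public IHttpActionResult StartWork(WorkRequest body)
    {
        var id = Guid.NewGuid();
        SendToBroker(id, body);       // issues the Service Broker SEND and returns
        return Ok(new { id, status = "queued" });
    }

    [HttpGet, Route("api/work/{id}")]
    public IHttpActionResult GetStatus(Guid id)
    {
        // The activated stored procedure writes the result row when the
        // Service Broker response arrives; here we only read current state.
        var state = _store.Find(id);
        return state == null ? (IHttpActionResult)NotFound() : Ok(state);
    }
}
```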
Ultimately this ended up being related to how we load-balance our application and to the SqlDependency.Start queue name being set. This service is set up in an anti-pattern way in order to migrate old code off a really unreliable set of services.
So I have two HTTP POST requests consuming a web API as follows:
using (var client = new HttpClient())
{
client.BaseAddress = new Uri("https://your_url.com:8443/");
client.DefaultRequestHeaders.Accept.Clear();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
// 1st request //
var datatobeSent = new ApiSendData()
{
UserID = "xxx",
UserPsw = "yyy",
ApiFunc = "zzz",
clearData = "x1y1z1"
};
HttpResponseMessage response = await client.PostAsJsonAsync("WebApi", datatobeSent);
var resultData = await response.Content.ReadAsStringAsync();
#region Extract Secured Data response from WebApi
JObject JsonObject = JObject.Parse(resultData);
datatobeSent.SecureData = (string)JsonObject["SecureResult"];
#endregion
// 2nd request //
var datatobeSent2 = new ApiSendData()
{
UserID = "xxx",
UserPsw = "yyy",
ApiFunc = "zzz",
SecureData = datatobeSent.SecureData
};
HttpResponseMessage response2 = await client.PostAsJsonAsync("WebApi", datatobeSent2);
var resultData2 = await response2.Content.ReadAsStringAsync();
}
So now I need some clarifications...
1) Are both my HTTP POST requests sent over the same SSL session?
2) If they are not, how can I combine the two and send both requests over a single connection/session?
3) How can I improve the performance of this code? Currently 100 requests take 11 seconds to process and respond. (I just used a for loop that issues the two POST requests above 100 times.)
They are over the same SSL session and connection. The same HttpClient instance shares some configuration and the underlying TCP connections. Therefore, you should reuse the same instance, which you are already doing with the using statement.
I would try to improve the performance of your code by asynchronously making the post requests and processing the results. Here's an option:
Create a new class to handle these async requests
public class WebHelper
{
public async Task<string> MakePostRequest(HttpClient client, string route, object dataToBeSent)
{
try{
HttpResponseMessage response = await client.PostAsJsonAsync(route, dataToBeSent);
string resultData = await response.Content.ReadAsStringAsync();
return resultData;
}
catch (Exception ex){
return ex.Message;
}
}
}
Notice that the same HttpClient instance is being used. In the main code, you could test your performance like this (to simplify the test, I'm just making the same post request 100 times):
//Start time measurement
List<Task> TaskList = new List<Task>();
for (int i = 0; i < 100; i++)
{
Task postTask = Task.Run(async () =>
{
WebHelper webRequest = new WebHelper();
string response = await webRequest.MakePostRequest(client, "WebApi", dataToBeSent);
Console.WriteLine(response);
});
TaskList.Add(postTask);
}
Task.WaitAll(TaskList.ToArray());
//end time measurement
And just an improvement for your code: use try/catches to make the requests!
While HttpClient will aim to use the same SSL session and connection, this cannot be guaranteed, as it depends on the server also maintaining the session and connection. If the server drops them, HttpClient renegotiates new connections and sessions transparently.
That said, while there is some overhead from connection and SSL session establishment, it's unlikely to be the root cause of any performance issues.
Given the simplicity of your code above, I strongly suspect that the performance issue is not in your client code, but in the service you are connecting to.
To determine that, you need to isolate the performance of that service. A few options for how to do that, depending on the level of control you have over that service:
If you are the owner of that service, carry out over the wire performance testing directly against the service, using a tool like https://www.blitz.io/. If 100 sequential requests from blitz.io takes 11 seconds, then the code you have posted is not the culprit, as the service itself just has an average response time of 110ms.
If you are not the owner of that service, use a tool like Mountebank to create an over the wire test double of that service, and use your existing test loop. If the test loop using Mountebank executes quickly, then again, the code you have posted is not the culprit. (Mountebank supports https, but you will need to either have the keypair, or use their self-signed cert and disable certificate verification in your client.)
It's worth noting that it's not unusual for complex web services to take 110 ms to respond. Your test appears to issue a sequential series of requests, one at a time, whereas most web services are optimised for handling many requests from independent users in parallel. The scalability challenge is how many concurrent users can be serviced while still keeping the average response time at 110 ms. But if only a single user is using the service at a time, each request will still take ~110 ms.
So - an even earlier validation step you could take is to work out if your test of 100 sequential requests in a loop is a valid representation of your actual performance requirements, or if you should call your method 100 times in parallel to see if you can have 100 concurrent users access your service.
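To turn the sequential loop into a concurrent one, a sketch along these lines (reusing `client` and `datatobeSent` from the code above) fires all 100 requests at once and waits for them together:

```csharp
// Sketch: 100 concurrent POSTs instead of 100 sequential ones.
var tasks = Enumerable.Range(0, 100)
    .Select(_ => client.PostAsJsonAsync("WebApi", datatobeSent))
    .ToList();
HttpResponseMessage[] responses = await Task.WhenAll(tasks);
// Total wall-clock time now approximates the slowest single request,
// subject to ServicePointManager.DefaultConnectionLimit.
```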
I have developed some OWIN middleware that appends a custom header to the response. However, in my integration tests (which use the OWIN TestServer), I cannot see the custom header in the response object.
I notice that I do see the location header which I populate for POST requests.
I also notice that the header is appearing when I make real requests to the service.
Does anyone know why I can't see the custom header in the case of TestServer? Are there settings I need to change to allow this?
Here is the OWIN middleware:
private async Task CalculateTimeToProcess(IOwinContext context)
{
var sw = new Stopwatch();
sw.Start();
await Next.Invoke(context);
sw.Stop();
var response = context.Response;
response.Headers.Add("x-timetoprocessmilliseconds",
new[] { sw.ElapsedMilliseconds.ToString(CultureInfo.InvariantCulture) });
}
This is how I am trying to retrieve the header in my test:
var header = _restContext.HttpResponseMessage.Headers.SingleOrDefault(x => x.Key == "x-timetoprocessmilliseconds");
I don't know what the difference is between your live setup and your unit test, but you should be aware that if any earlier middleware starts writing to response.Body, the headers are sent before the OWIN pipeline returns to your middleware (see the note below).
What you can do is attach a callback to OnSendingHeaders before you invoke the next middleware.
private async Task CalculateTimeToProcess(IOwinContext context)
{
var sw = new Stopwatch();
sw.Start();
context.Response.OnSendingHeaders(state =>
{
sw.Stop();
var response = (IOwinResponse)state;
response.Headers.Add("x-timetoprocessmilliseconds", new[] { sw.ElapsedMilliseconds.ToString(CultureInfo.InvariantCulture) });
}, context.Response);
await Next.Invoke(context);
}
Note: by sending the headers first, the server can stream whatever gets written into the body directly to the socket without having to buffer it in memory. This also means that your measurement will be incorrect if other middleware writes to the output stream while processing is still in progress...
I need to send about 200 HTTP requests in parallel to different servers and get response.
I use the HttpWebRequest class in C#.
But I don't see a good improvement when the requests are handled in parallel.
For example, if one request needs 3 seconds to get a response, 2 requests in parallel take 6 seconds, 3 requests 8 seconds, 4 requests 11 seconds...
That is not good; I would expect a much better time, around 10 seconds for all 200 requests.
It looks like only 2-3 requests run in parallel, and the timeout starts immediately after the WebRequest object is created.
I tried setting the DefaultConnectionLimit and MaxServicePoints values, but it didn't help. As I understand it, these parameters control the number of parallel requests to a single site; I need requests to different sites.
Code example that I use to test:
ArrayList a = new ArrayList(200);
for (int i = 50; i < 250; i++)
{
    a.Add("http://207.242.7." + i.ToString() + "/");
}
for (int i = 0; i < a.Count; i++)
{
    Thread t = new Thread(new ParameterizedThreadStart(performRequest));
    t.Start(a[i]);
}
static void performRequest(object ip)
{
    HttpWebRequest req = (HttpWebRequest)WebRequest.Create((string)ip);
    using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
    {
        // process the response ...
    }
}
Has anyone ever encountered such a problem?
Thank you for any suggestions.
Instead of starting up your own threads, try using the asynchronous methods of HttpWebRequest, such as HttpWebRequest.BeginGetResponse and HttpWebRequest.BeginGetRequestStream.
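A minimal sketch of the callback-based pattern (`url` is assumed to hold one of the addresses; error handling omitted):

```csharp
// Sketch: issue the request asynchronously; no dedicated thread blocks
// waiting for the response.
var req = (HttpWebRequest)WebRequest.Create(url);
req.BeginGetResponse(ar =>
{
    // The callback runs on a thread-pool thread when the response arrives.
    using (var resp = (HttpWebResponse)req.EndGetResponse(ar))
    {
        // process resp here ...
    }
}, null);
```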
This might help - http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/1f863f20-09f9-49a5-8eee-17a89b591007
Suggests there is a limit on the number of TCP connections allowed, but that you can increase the limit
Use asynchronous web requests instead.
Edit: http://msdn.microsoft.com/en-us/library/86wf6409(VS.71).aspx
You can try this:
try
{
List<Uri> uris = new List<Uri>();
uris.Add(new Uri("http://www.google.fr"));
uris.Add(new Uri("http://www.bing.com"));
Parallel.ForEach(uris, u =>
{
    WebRequest webR = WebRequest.Create(u);
    using (HttpWebResponse webResponse = (HttpWebResponse)webR.GetResponse())
    {
        // process the response ...
    }
});
}
catch (AggregateException exc)
{
exc.InnerExceptions.ToList().ForEach(e =>
{
Console.WriteLine(e.Message);
});
}
This is the code from an Android application that sends the request. How can we use the code above in the same way?
HttpPost request = new HttpPost(url);
List<NameValuePair> params = new ArrayList<NameValuePair>();
params.add(new BasicNameValuePair("key", "value")); // give the value in this format
request.setEntity(new UrlEncodedFormEntity(params));
httpClient.execute(request);