Hi guys, I am trying to use a simple console application to send requests to a service without waiting for the responses. The goal is to send 100 requests per second. My idea is to create and start 10 threads at a time; each thread creates an HTTP client and sends a request. After those threads are created, wait 100 ms, then create another 10 threads. The code looks like:
while (true)
{
    try
    {
        for (int i = 0; i < 10; i++)
        {
            Task.Factory.StartNew(SendRequest);
        }
    }
    catch (Exception)
    {
        // ignored
    }
    Thread.Sleep(100);
}
Each SendRequest call prints the HTTP status code to the console once it gets the result of the HTTP request. But I found that the application successfully prints only the first 20 results, which means those threads finished and were destroyed. After that it no longer prints anything, yet memory keeps increasing, which suggests threads keep being created but just spin. So, could you help answer (a sketch of SendRequest follows the questions below):
1) Why do the threads hang after the first 20 requests finish? Is it because of the connection limit on the HTTP client?
2) How can I send a constant number of requests per second, without waiting for responses, in code?
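For context, SendRequest is roughly along these lines. This is a hedged reconstruction, not the actual method from the question; the target URL, and the use of a new HttpClient per request with a blocking .Result call, are assumptions based on the description above:

private static void SendRequest()
{
    // Assumption: a new HttpClient per request, as described above.
    using (var client = new HttpClient())
    {
        // Hypothetical URL; blocks until the response arrives, then prints the status code.
        var response = client.GetAsync("http://www.website.com/").Result;
        Console.WriteLine(response.StatusCode);
    }
}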
Maybe off topic, but if you don't need to do this in C#, you can use ApacheBench (ab) to do this and test it out. You can control concurrency, time, and the number of requests.
ab -c 10 -t 60 -n 6000 http://www.website.com/
If it has to be in C#, then sorry and never mind.
I'm using .NET Framework 4.8 for a console application that manages an ETL process, and a .NET Standard 2.0 library for the HTTP requests, which uses HttpClient. This application is expected to handle millions of records and is long-running.
These requests are made in parallel with a maximum concurrency limit of 20.
At application launch, I increase the number of connections .NET can make to a single host in the connection pool via ServicePointManager. The default limit is a frequent cause of connection pool starvation, as in .NET Framework it is only 2.
public static async Task Main(string[] args)
{
    ServicePointManager.DefaultConnectionLimit = 50;
    ...
}
This is greater than the number of max concurrent requests allowed to ensure my requests are not being queued up on connections that are already being used.
I then loop through the records and post them to a TPL Dataflow block with the concurrency limit of 20.
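That wiring looks roughly like the following. A minimal sketch, assuming an ActionBlock<T> from System.Threading.Tasks.Dataflow; the DocRecord type, the records collection, and the mapping to DocParams/DocData are placeholders, not the actual code:

// Hedged sketch: an ActionBlock that uploads records with at most 20 concurrent requests.
var uploadBlock = new ActionBlock<DocRecord>(
    async record =>
    {
        // DocRecord and its Params/Data properties are hypothetical placeholders.
        await myApiClient.UploadNewDocumentAsync(record.Params, record.Data);
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 20 });

foreach (var record in records)
{
    uploadBlock.Post(record);    // post each record to the block
}

uploadBlock.Complete();          // signal that no more records will arrive
await uploadBlock.Completion;    // wait for all in-flight uploads to finish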
This block makes the request through my API client, which uses a singleton HttpClient for all requests:
public class MyApiClient
{
    private static HttpClient _httpClient { get; set; }

    public MyApiClient()
    {
        _httpClient = new HttpClient();
    }

    public async Task<ReturnedObject> UploadNewDocumentAsync(DocParams docParams, DocData docData)
    {
        MultipartFormDataContent content = ConstructMultipartFormDataContent(docParams);
        HttpContent httpContent = docData.ConvertToHttpContent();
        content.Add(httpContent);
        using (HttpResponseMessage response = await _httpClient.PostAsync("document/upload", content))
        {
            return await HttpResponseReader.ReadResponse<ReturnedObject>(response).ConfigureAwait(false);
        }
    }
}
I'm using sysinternals TCPView.exe to view all connections made between the local host and remote host, as well as what the status of those connections are and whether data is actively being sent or not.
When starting the application I see new connections established for each request made, until around 50 connections are established. I also see activity for around 20 at any given time. This meets expectations.
After around 24 hours of activity, TCPView shows only 5 connections concurrently sending and receiving data. All 50 connections still exist and are all in the Established state, but the majority sit idle. I don't have a way of logging when connections stop being actively used. So I don't know whether all of a sudden it drops from using 20 connections to only 5, or whether it gradually decreases.
My log file records elapsed time for every request made and I also see a degradation in performance at this time. Requests take longer and longer to complete, and I see an increase in TaskCanceled exceptions as the HttpClient reaches its timeout value of 100 seconds.
I've also confirmed with the third-party API vendor that, on their side, they are not seeing a large number of incoming requests time out.
This suggests to me that the application is still trying to make 20 requests at a time, but they are being queued up on a smaller number of TCP connections. All the symptoms point to classic connection pool starvation.
While the application is running, I can output some information from the ServicePointManager to the console.
ServicePoint sp = ServicePointManager.FindServicePoint(myApiService.BasePath, WebRequest.DefaultWebProxy);
Console.WriteLine(
$"CurrentConnections: {sp.CurrentConnections} " +
$"ConnectionLimit: {sp.ConnectionLimit}");
Console output:
CurrentConnections: 50 ConnectionLimit: 50
This validates what I see in TCPView.exe, which is that all 50 connections are still allowed and are established.
The symptoms show connection pool starvation, but TCPView.exe and ServicePointManager show there are plenty of available established connections. .NET is just not using them all. This behaviour only shows up after several hours of runtime. The issue is repeatable. If I close and relaunch the application, it begins by rapidly opening all 50 TCP connections, and I see data being transferred on up to 20 at a time. When I check 24 hours later, the symptoms of connection pool starvation have shown up again.
What could cause the behavior, and is there anything further I could do to validate my assumptions?
My C# client (running on .NET Framework 4.5.1 or later) calls a WSDL-defined SOAP web service call that returns a byte[] (with length typically about 100000). We make hundreds of calls to this web service just fine -- they normally take just a few seconds to return. But very intermittently, the call sits there for exactly 5 minutes and then throws an InvalidOperationException indicating that "There is an error in XML document (1, 678)", with an InnerException that is a WebException "The operation has timed out." We've wrapped a try-catch around this call, look for those particular Exceptions, and then ask the user if they'd like us to retry it, and usually it works just fine on the next try.
Looking at the logging on the server, the logs for the good calls and the intermittent bad calls look exactly the same. In particular, in both cases we get the log statement at the very end of the web service, right before the "return byteArray;"... and it is doing that in the typical 3-15 seconds from the start of the call. So, it seems the web service returns the byte array successfully, but the client that called the web service just never receives it.
However, the client does NOT get the typical SoapException or WebException... for example, if we pause the web service in the debugger right before that return, then after 60 seconds the client will get a WebException "The operation has timed out." But we don't get that in this case... instead we are stuck there for a full 5 minutes before we finally get the InvalidOperationException mentioned above. So, it is as if it started receiving the reply, so it doesn't consider it timed out the normal way, but it never gets the rest of the reply, and the parsing/deserializing of the XML containing the reply eventually times out.
Question #1: Any suggestions on what's happening here? Or what we might be doing wrong in our web service that would result in a byte[] reply getting stuck mid-return intermittently? I'd obviously love to fix the root problem.
Question #2: What controls the length of that 5 minute timeout?? Our exception handling for this would be okay except for the ridiculous 5 minute timeout. After about 10 seconds, the user knows it is stuck because it normally returns in 10 seconds or less. But they have to sit there and wait for 5 minutes before they can do anything. We have set every timeout setting we could find to just 60 seconds, but none seem to control this. We have set:
In the server Web.config: <httpRuntime executionTimeout="60">
In the server Global.asax.cs: HttpContext.Current.Server.ScriptTimeout = 60;
In both server and client: ServicePointManager.MaxServicePointIdleTime = 60000;
In the client, right after we new up the WSDL-defined class derived from SoapHttpClientProtocol with all the web service calls, we call: service.Timeout = 60000;
We previously had those at their defaults or set to 100 / 100000 ... we lowered them all to 60 / 60000 to see if the 5 minute wait would come down at all (just in case one or more of them were being added into that 5 minutes). But no, no matter what we changed any of those timeouts to, the timeout in this case remains exactly 5 minutes, every time it gets stuck.
Does anybody know where the length of the timeout is set for when it generates an InvalidOperationException on the XML document containing the returned byte array due to an InnerException WebException with the timeout?? (please!)
I have a situation here where my JavaScript application calls back into the server over WebSockets using the SignalR proxy.
I have a long-running method for which I report progress 10 times.
However, I noticed that the client progress callback is not always called 10 times, even though the server method definitely calls it 10 times.
Here is the server method pseudo code simplified for clarity:
public async Task LongRunning(IProgress<Something> progress)
{
    var things = await Get10ThingsFromDb();
    foreach (var thing in things)
    {
        var something = ComputeSomething(thing);
        progress.Report(something);
    }
}
Notice that the method finishes immediately after that last call to progress.Report.
Most of the time it works fine, but maybe once every 5 times I get fewer than 10 report calls, sometimes even just 1.
I have inspected the network tab in Chrome, and it seems that the problem is that the server is sometimes sending the method completion result BEFORE it has sent all the reports.
So in a normal case the server sends the following frames
-Report 10 times
-Completion 1 time
But in a failed case the server sends the following frames.
-Report 7 times
-Completion 1 time
-Report 3 times
My guess is that the js client discards any progress report that happens after the completion.
I ended up adding await Task.Delay(100) at the end of the server method to avoid the race condition. This is not ideal...
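The workaround looks roughly like this (the same method as above with the delay appended; nothing else changed):

public async Task LongRunning(IProgress<Something> progress)
{
    var things = await Get10ThingsFromDb();
    foreach (var thing in things)
    {
        var something = ComputeSomething(thing);
        progress.Report(something);
    }

    // Workaround: give the progress frames time to be sent before the completion frame.
    await Task.Delay(100);
}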
Is this a bug or am I missing something?
Thanks
I have the following test code. I always get the "Task was cancelled" error after looping 316934 or 361992 times.
If I am not wrong, there are two possible reasons why the task was cancelled: a) the HttpClient timed out, or b) too many tasks were queued and some of them timed out.
I couldn't find any documentation about a limit on queued tasks, and I tried creating more than 500K tasks without a timeout, so I guess reason "b" might not be right.
Q1. Is there any other reason that I missed?
Q2. If it's because of the HttpClient timeout, how can I get the exact exception message instead of a TaskCanceledException?
Q3. What would be the best way to fix it? Should I introduce a throttler?
Thanks!
var _httpClient = new HttpClient();
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "text/html,application/xhtml+xml,application/xml");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Charset", "ISO-8859-1");

int[] intArray = Enumerable.Range(0, 600000).ToArray();

var results = intArray
    .Select(async t => {
        using (HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Get, "http://www.google.com")) {
            log.Info(t);
            try {
                var response = await _httpClient.SendAsync(requestMessage);
                var responseContent = await response.Content.ReadAsStringAsync();
                return responseContent;
            }
            catch (Exception ex) {
                log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
            }
            return null;
        }
    });

Task.WaitAll(results.ToArray());
Console.ReadLine();
Here are the steps to replicate the issue:
Create a Console Project in VS 2012.
Copy and paste my code into Main.
Put a breakpoint at this line: log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
Run the program in debug mode and wait a few minutes (maybe 5?). I just tested my code and got the exception after about 3 minutes. If you have Fiddler, you can monitor the requests to know whether the program is still running.
Feel free to let me know if you can't replicate the issue.
The default HttpClient.Timeout value is 100 seconds (00:01:40). If you add a timestamp in your catch block, you will notice that tasks begin to get canceled at exactly that time. Apparently there is a limit to how many HTTP requests you can make per second; the rest get queued, and queued requests get canceled once the timeout expires. Out of all 600k tasks I personally got only about 2,500 successful ones; the rest were canceled.
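As Q3 suggests, one way to avoid queuing far more requests than can complete within the timeout is to cap the number of in-flight requests with a SemaphoreSlim. A minimal sketch, assuming a cap of 20 concurrent requests (the cap value is arbitrary) and an async context such as an async Main:

// Hedged sketch: limit concurrent requests instead of starting all 600,000 at once.
var throttler = new SemaphoreSlim(20);

var tasks = Enumerable.Range(0, 600000).Select(async i =>
{
    await throttler.WaitAsync();
    try
    {
        // Reuse the same _httpClient instance from the question.
        var response = await _httpClient.GetAsync("http://www.google.com");
        return await response.Content.ReadAsStringAsync();
    }
    catch (Exception)
    {
        return null;   // same "swallow and return null" behavior as the original code
    }
    finally
    {
        throttler.Release();   // free a slot so the next queued request can start
    }
});

string[] responses = await Task.WhenAll(tasks);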
I also find it unlikely that you will be able to run the whole 600,000 tasks. Many network drivers let through a high number of requests only for a short time and then reduce that number to a very low value. My network card allowed me to send only 921 requests within 36 seconds and then dropped the speed to one request per second. At that speed it would take a week to complete all the tasks.
If you are able to bypass that limitation, make sure you build the code for a 64-bit platform, as the app is very memory-hungry.
Don't dispose the instance of HttpClient you're using. It's weird, but that fixed this problem for me. Just wanted to share.
I have had similar code to load test our servers, and there is a high probability that your requests are timing out.
You can set the timeout for your HTTP request to the maximum and see if it changes anything for you.
I tried hitting our servers by creating various threads, and it increased the hits, but they would all eventually time out. Also, you cannot set a timeout when hitting them on another thread.
I ran across this recently. As it turned out, when I started up the web application in debug mode, the startup URL was using https.
However, the URL for the WebApi endpoint (in my config file) was using http. Once I updated it to use https, I was able to call the WebApi endpoint.
I have a console application which consumes a BizTalk web service. The problem is that when I send the BizTalk service object data in bulk, my console application throws the exception:
Application has either timed out or is Timing out.
My application actually needs to wait for the BizTalk service to finish processing its job. Increasing the obj.Timeout value was of no help. Is there anything else I can do other than using the Thread.Sleep method (which I want to avoid)?
Below is the relevant code snippet from my application:
pumpSyncService.Timeout = 750000;
outputRecords = pumpSyncService.PumpSynchronization(pumpRecords);
The pump records contain an array of objects. When the count is around 30, I get a correct response, but when the count increases to around 150 I get the exception.
Try sending smaller chunks in a loop. Instead of sending 150 all at once, send 30 records 5 times. The timeout might be happening because it takes too long to send 150 records.
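A minimal sketch of that idea, assuming pumpRecords is an array and that the service accepts batches of up to 30 records; the chunk size, the element/result types, and how results are combined are assumptions:

const int chunkSize = 30;
var allResults = new List<object>();   // placeholder for the service's actual result type

for (int offset = 0; offset < pumpRecords.Length; offset += chunkSize)
{
    // Take the next slice of up to 30 records and send it as its own request.
    var chunk = pumpRecords.Skip(offset).Take(chunkSize).ToArray();
    var chunkResults = pumpSyncService.PumpSynchronization(chunk);
    allResults.AddRange(chunkResults);
}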
You should be able to send all 30 at once, if the service allows you to. I am assuming you have verified that the event kicking this off is not firing 5 times. Try it asynchronously and process your results when they come back.