HttpClient - task was cancelled - How to get the exact error message? - c#

I have the following test code. I always get a "Task was cancelled" error after looping 316934 or 361992 times.
If I'm not wrong, there are two possible reasons the tasks get cancelled: a) the HttpClient timed out, or b) too many tasks were queued and some of them timed out.
I couldn't find any documentation on a limit for queued tasks, and I tried creating more than 500K tasks without a time-out, so I suspect reason "b" is not right.
Q1. Is there any other reason that I missed?
Q2. If it's the HttpClient timeout, how can I get the exact exception message instead of a "TaskCanceledException"?
Q3. What would be the best way to fix it? Should I introduce a throttler?
Thanks!
var _httpClient = new HttpClient();
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "text/html,application/xhtml+xml,application/xml");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Charset", "ISO-8859-1");

int[] intArray = Enumerable.Range(0, 600000).ToArray();
var results = intArray
    .Select(async t => {
        using (HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Get, "http://www.google.com")) {
            log.Info(t);
            try {
                var response = await _httpClient.SendAsync(requestMessage);
                var responseContent = await response.Content.ReadAsStringAsync();
                return responseContent;
            }
            catch (Exception ex) {
                log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
            }
            return null;
        }
    });
Task.WaitAll(results.ToArray());
Console.ReadLine();
Here are the steps to replicate the issue:
Create a Console project in VS 2012.
Copy and paste my code into Main.
Put a breakpoint on this line: log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
Run the program in debug mode and wait a few minutes (maybe 5?). I just tested my code and got the exception after 3 minutes. If you have Fiddler, you can monitor the requests to see whether the program is still running.
Feel free to let me know if you can't replicate the issue.

The default HttpClient.Timeout value is 100 seconds (00:01:40). If you add a timestamp in your catch block, you will notice that tasks begin to get cancelled at exactly that time. Apparently there is a limit to how many HTTP requests you can send per second; the rest get queued, and queued requests get cancelled once the timeout expires. Out of all 600k tasks, I personally got only 2500 successes; the rest were cancelled.
I also find it unlikely that you will be able to run the whole 600,000 tasks. Many network drivers let a high number of requests through only for a short time and then throttle to a very low rate. My network card allowed me to send only 921 requests within 36 seconds and then dropped to only one request per second. At that speed it would take a week to complete all the tasks.
If you are able to bypass that limitation, make sure you build the code for a 64-bit platform, as the app is very hungry for memory.
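To answer Q3: yes, a throttler helps here. Below is a minimal sketch (not your original code) that caps the number of in-flight requests with a SemaphoreSlim, so requests wait for a free slot instead of sitting in a queue past HttpClient's 100-second timeout. The limit of 20 is an arbitrary starting point to tune:
var throttler = new SemaphoreSlim(20); // at most 20 concurrent requests
var tasks = Enumerable.Range(0, 600000).Select(async t =>
{
    await throttler.WaitAsync(); // wait for a free slot
    try
    {
        using (var requestMessage = new HttpRequestMessage(HttpMethod.Get, "http://www.google.com"))
        using (var response = await _httpClient.SendAsync(requestMessage))
        {
            return await response.Content.ReadAsStringAsync();
        }
    }
    finally
    {
        throttler.Release(); // free the slot for the next request
    }
});
Task.WaitAll(tasks.ToArray());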

Don't dispose the instance of HttpClient you're using. Odd, but it fixed this problem for me.
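A minimal sketch of that idea: keep one client for the application's lifetime (the field name is illustrative):
// One HttpClient, shared and never disposed, instead of one per request.
static readonly HttpClient SharedClient = new HttpClient();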

Just wanted to share: I have had similar code to load test our servers, and there's a high probability that your requests are timing out.
You can set the timeout for your HTTP requests to the maximum and see if that changes anything for you.
I tried hitting our servers by creating various threads. That increased the hits, but they would all eventually time out. Also, you cannot set a timeout when hitting them on another thread.
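For reference, a small sketch of raising the client timeout; the ten-minute value is an arbitrary example, and infinite is shown only as the extreme case:
var client = new HttpClient();
client.Timeout = TimeSpan.FromMinutes(10);    // raise the 100-second default
// client.Timeout = Timeout.InfiniteTimeSpan; // or disable it entirely (use with care)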

I ran across this recently. As it turned out, when I started the web application in debug mode, the start-up URL was using https, but the URL for the WebApi endpoint (in my config file) was using http. Once I updated it to use https, I was able to call the WebApi endpoint.

Related

How to make PostAsync respond if API is down

I am calling an API using these commands:
byte[] messageBytes = System.Text.Encoding.UTF8.GetBytes(message);
var content = new ByteArrayContent(messageBytes);
content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/json");
HttpResponseMessage response = client.PostAsync(ApiUrl, content).Result;
However the code stops executing at the PostAsync line. I put a breakpoint on the next line but it is never reached. It does not throw an error immediately, but a few minutes later it throws an error like:
System.Net.Sockets.SocketException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
I presume this means the API is down. What can I do to make PostAsync spit back an error immediately even if the API is down so that I can handle the error and inform the user?
Thank you.
Broadly speaking, what you're asking is "How can I check if an API is available?", and the answer depends on how low-level you want to get and what you want to do for each level of unavailability:
Is there internet connectivity? Is it worth probing this locally first (as it's relatively quick to check)?
Is the server address correct? If it's wrong it doesn't matter how long you wait. Can the user configure this?
Is the address correct but the server is unable or unwilling to respond? What then?
If you're willing to lump them all into a single "can't contact server in a reasonable amount of time" bucket, there are a few approaches:
Decrease timeouts (beware)
In the case you gave, it sounds like your request is simply timing out: the address or port is wrong, the server is under immense load and can't respond in a timely fashion, you're attempting to contact a non-SSL endpoint using SSL or vice versa, etc. In any of these cases, you can't know whether the request will time out until it actually does. One thing you can do is reduce the HttpClient request timeout. Beware: going too low will cause slow connections to time out for users, which is a worse problem than the one you have.
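A sketch of that; the 10-second value is an assumption, so measure your real response times before committing to one:
// Fail faster than the 100-second default. Too low, and slow-but-healthy
// connections will start failing for real users.
var client = new HttpClient { Timeout = TimeSpan.FromSeconds(10) };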
Pre-check
You could, either before each call, periodically, or at some point early in the client initialisation, do a quick probe of the API to see if it's responsive. This can be spun off into either an async or background task while the UI is being built, etc. This gives you more time to wait for a response, and as an added bonus if the API is responding slowly you can notify your users of this so they know not to expect immediate responses to their clicks. This will improve user experience. If the pre-check fails, you could show an error and advise the user to either check connectivity, check server address (if it's configurable), retry, etc.
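A rough shape for such a probe, where the probe URL, the 3-second budget, and the method name are all assumptions:
static async Task<bool> ApiIsReachableAsync(HttpClient client, string probeUrl)
{
    try
    {
        using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(3)))
        {
            var response = await client.GetAsync(probeUrl, cts.Token);
            return response.IsSuccessStatusCode;
        }
    }
    catch (HttpRequestException) { return false; }  // connect/DNS-level failure
    catch (TaskCanceledException) { return false; } // probe timed out
}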
Use a CancellationToken
You could pass a CancellationToken into PostAsync with a suitable timeout set, which also allows you to let the user cancel the request if they want to. Read up on CancellationToken for more information.
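For example (the 10-second budget is illustrative):
using (var cts = new CancellationTokenSource(TimeSpan.FromSeconds(10)))
{
    try
    {
        // Throws TaskCanceledException if the server hasn't responded in time.
        HttpResponseMessage response = await client.PostAsync(ApiUrl, content, cts.Token);
    }
    catch (TaskCanceledException)
    {
        // Timed out (or the user cancelled): surface a friendly error here.
    }
}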
EDIT: as Alex pointed out, this line is not usually how you deal with async tasks:
HttpResponseMessage response = client.PostAsync(ApiUrl, content).Result;
Change this instead to:
HttpResponseMessage response = await client.PostAsync(ApiUrl, content);
Of course the calling method will then also need to be marked as async, and so on ("it's async, all the way up"), but this is a good thing: it means your code is not blocking a thread while it waits for a response from the server.
Have a read here for some good material.
Hope that helps

Send constant number of http requests without waiting

Hi guys, I am trying to use a simple console application to send requests to a service without waiting. The goal is to send 100 requests per second without waiting for the responses. My idea is to create and run 10 threads at a time, each of which creates an HTTP client and sends a request; after the threads are created, wait 100 ms, then create another 10 threads. The code looks like:
while (true)
{
    try
    {
        for (int i = 0; i < 10; i++)
        {
            Task.Factory.StartNew(SendRequest);
        }
    }
    catch (Exception)
    {
        // ignored
    }
    Thread.Sleep(100);
}
Each SendRequest call prints the HTTP status code of the result to the console once the HTTP request completes. But I found that this application successfully prints the first 20 results, which means those tasks finished and were destroyed. After that it no longer prints anything, yet memory keeps increasing, which suggests tasks keep being created but just spin. So, could you help answer:
1) Why do the requests hang after the first 20 finish? Is it because of a connection limit in the HTTP client?
2) How can I send a constant number of requests per second, without waiting, in code?
May be off topic, but if you don't need to do this in C#, you can use Apache Bench (ab) to do this and test it out. You can control concurrency, time, and the number of requests:
ab -c 10 -t 60 -n 6000 http://www.website.com/
If it has to be in C#... then sorry, and never mind.
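If it does have to be C#, here is a minimal sketch of the most likely fix for question 1. It assumes (this is a guess, not confirmed from your code) that the stall comes from the default two-connections-per-host limit combined with responses that are never read or disposed, so the pooled connections are never freed:
// Raise the default per-host connection limit (2) before sending anything.
ServicePointManager.DefaultConnectionLimit = 100;

static readonly HttpClient Client = new HttpClient(); // share one client

static async Task SendRequest()
{
    using (var response = await Client.GetAsync("http://www.website.com/"))
    {
        // Disposing the response returns its connection to the pool.
        Console.WriteLine(response.StatusCode);
    }
}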

Can I stop WCF hanging at 100% CPU by making it time out? [duplicate]

Is there a way to tell a WCF service to respond to a request (with or without aborting its processing) after a certain amount of time, even if it hasn't finished yet, something like a server-side timeout policy?
I suppose you could do this by starting a new Thread as soon as the WCF operation starts. The real work then happens on the new thread, and the original WCF request thread waits using Thread.Join() with a specific timeout. If the timeout occurs, the worker thread can be cancelled using Thread.Abort().
Something like this:
public string GetData(int value)
{
    string result = "";
    var worker = new Thread(() =>
    {
        // Simulate long-running work
        Thread.Sleep(TimeSpan.FromSeconds(value));
        result = string.Format("You entered: {0}", value);
    });
    worker.Start();
    if (!worker.Join(TimeSpan.FromSeconds(5)))
    {
        worker.Abort();
        throw new FaultException("Work took too long.");
    }
    return result;
}
I have solved the same problem and wrote it up in a blog post:
http://kanchengcao.blogspot.com/2012/06/adding-timeout-and-congestion.html
In short:
The WCF server-side timeout configs do not work.
You could implement a timeout yourself, as others have said.
Such a timeout does not guarantee a timely response to the client, since the request could be queued for a long time before entering your timeout-guarded code.
So I implemented a method to drop requests if the server is overloaded and likely to cause more timeouts.
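A rough sketch of that load-shedding idea (the counter, the limit of 100, and DoWork are illustrative placeholders, not the blog post's actual code):
static int _inFlight;
const int MaxInFlight = 100; // beyond this, accepting more requests mostly produces more timeouts

public string GetData(int value)
{
    if (Interlocked.Increment(ref _inFlight) > MaxInFlight)
    {
        Interlocked.Decrement(ref _inFlight);
        throw new FaultException("Server busy; request dropped.");
    }
    try
    {
        return DoWork(value); // placeholder for the real processing
    }
    finally
    {
        Interlocked.Decrement(ref _inFlight);
    }
}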
I don't know why you want to do this - you should probably edit your question to say what you're trying to accomplish.
If I had to do this, then I would have the web service pass the request off to a separate Windows Service, possibly by using WCF over MSMQ. I would have a timeout on that request. If the request didn't finish in time, I'd simply return a Timeout fault. The actual request would not be impacted.
Implement your service using the asynchronous model and have some code monitoring your outstanding requests to see if they've taken too long.
Then, if a timeout occurs before the request can be answered the real way, call their callback. The WCF stack provides this when it calls your
BeginFoo(fooParam1, fooParam2, AsyncCallback callback, object state)
Then throw or return your fault/timeout exception or response in the corresponding EndFoo() method.
Make sure not to call their callback again if the real answer eventually comes along.
It will take some getting used to asynchronous WCF programming, but no, apparently there is no server-side setting.
Also, you should try to use a client that supports timeouts or cancellable requests, because you might not be able to rely on the server to time out the request for you. There might be no connectivity, or the server machine might have some other problem.
Cheers,
Chris

HttpWebRequest.BeginGetResponse blocks 30-60 seconds

Currently, in one of our apps, I see big lags when executing the first HTTP web request. According to our logs there is a lag of 30-60 seconds. It blocks at HttpWebRequest.BeginGetResponse.
Here is a quote from MSDN:
The BeginGetRequestStream method requires some synchronous setup tasks to complete (DNS resolution, proxy detection, and TCP socket connection, for example) before this method becomes asynchronous. As a result, this method should never be called on a user interface (UI) thread because it might take some time, typically several seconds. In some environments where the webproxy scripts are not configured properly, this can take 60 seconds or more. The default value for the downloadTime attribute on the <webProxyScript> config file element is one minute, which accounts for most of the potential time delay.
I understand that DNS resolution, proxy detection, and other setup are required. But 30-60 seconds is way too long. When I enter the same URL in any browser I get the page immediately, and when I resolve DNS manually there is also no delay.
All concurrent requests to the same URI don't block. When I restart the application, the first request blocks again for at least 30 seconds.
Is this a known problem? Is there a bug? We see this on different machines, so I don't think my developer machine is the problem.
Here is some example code:
private void TestWebRequest()
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://matrix.ag-software.de/http-bind");
    request.ContentType = "text/xml; charset=utf-8";
    request.Method = "POST";
    request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), request);
}

private void GetRequestStreamCallback(IAsyncResult result)
{
    //
}
Update: this must be a problem with my Windows 7 installation. I have tested 2 other machines and can't reproduce the problem there. But I have seen log files from customers with exactly the same problem, so this seems to happen under some conditions on some machines.
I just ran into the same thing.
I found out that the Microsoft Client Firewall was causing it. I was using it to avoid going through my company's proxy. My first workaround was to hard-code request.Proxy. To avoid that ugly line of code I ended up disabling the Microsoft Client Firewall and setting the proxy in Internet Options.
Hope this helps.
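For reference, one common form of that hard-coded workaround is to skip proxy detection for the request entirely (setting an explicit WebProxy works too; the URL here is a placeholder):
var request = (HttpWebRequest)WebRequest.Create("http://example.com/");
request.Proxy = null; // no proxy auto-detect; connect directly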
As @EricLaw said, System.Net logging may help. Here is a link to MSDN on how to enable System.Net logging.
This is a known issue. There's more information available in these links:
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/939e78bf-b9be-4d0b-894e-ae7f0d6013ff
http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/2cb74a7e-6e8f-4d05-b86a-2401df5d2ed3/

HttpWebRequest is extremely slow!

I am using an open-source library to connect to my web server. I was concerned that the web server was extremely slow, so I tried a simple test in Ruby and got these results:
Ruby program: 2.11 seconds for 10 HTTP GETs
Ruby program: 18.13 seconds for 100 HTTP GETs
C# library: 20.81 seconds for 10 HTTP GETs
C# library: 36847.46 seconds for 100 HTTP GETs
I have profiled and found the problem to be this function:
private HttpWebResponse GetRawResponse(HttpWebRequest request) {
    HttpWebResponse raw = null;
    try {
        raw = (HttpWebResponse)request.GetResponse(); //This line!
    }
    catch (WebException ex) {
        if (ex.Response is HttpWebResponse) {
            raw = ex.Response as HttpWebResponse;
        }
    }
    return raw;
}
The marked line takes over 1 second to complete by itself, while the Ruby program making one request takes 0.3 seconds. I am also doing all of these tests against 127.0.0.1, so network bandwidth is not an issue.
What could be causing this huge slowdown?
UPDATE
Check out the changed benchmark results. I actually tested with 10 GETs, not 100; I have updated the results above.
What I have found to be the main culprit with slow web requests is the proxy property. If you set this property to null before you call the GetResponse method the query will skip the proxy autodetect step:
request.Proxy = null;
using (var response = (HttpWebResponse)request.GetResponse())
{
}
The proxy autodetect was taking up to 7 seconds before returning the response. It is a little annoying that this property is on by default for the HttpWebRequest object.
It may have to do with the fact that you are opening several connections at once. By default, the maximum number of open HTTP connections per endpoint is two. Try adding this to your .config file and see if it helps:
<system.net>
  .......
  <connectionManagement>
    <add address="*" maxconnection="20"/>
  </connectionManagement>
</system.net>
I was having a similar issue with a VB.Net MVC project.
Locally on my pc (Windows 7) it was taking under 1 second to hit the page requests, but on the server (Windows Server 2008 R2) it was taking 20+ seconds for each page request.
I tried a combination of setting the proxy to null
System.Net.WebRequest.DefaultWebProxy = Nothing
request.Proxy = System.Net.WebRequest.DefaultWebProxy
And changing the config file by adding
<system.net>
  .......
  <connectionManagement>
    <add address="*" maxconnection="20"/>
  </connectionManagement>
</system.net>
This still did not reduce the slow page request times on the server. In the end the solution was to uncheck the “Automatically detect settings” option in the IE options on the server itself. (Under Tools -> Internet Options select the Connections tab. Press the LAN Settings button)
Immediately after I unchecked this browser option on the server all the page request times dropped from 20+ seconds to under 1 second.
I started observing a slowdown similar to the OP's, which got a little better when I increased the connection limit:
ServicePointManager.DefaultConnectionLimit = 4;
But after creating that many WebRequests, the delays came back.
The problem, in my case, was that I was calling a POST and didn't care about the response, so I wasn't picking it up or doing anything with it. Unfortunately that left the WebRequests floating around until they timed out.
The fix was to pick up the response and just close it:
WebRequest webRequest = WebRequest.Create(sURL);
webRequest.Method = "POST";
webRequest.ContentLength = byteDataGZ.Length;
webRequest.Proxy = null;
using (var requestStream = webRequest.GetRequestStream())
{
    requestStream.WriteTimeout = 500;
    requestStream.Write(byteDataGZ, 0, byteDataGZ.Length);
    requestStream.Close();
}
// Get the response so that we don't leave this request hanging around
WebResponse response = webRequest.GetResponse();
response.Close();
Use a computer other than localhost, then use WireShark to see what's really going over the wire.
Like others have said, it can be a number of things. Looking at things on the TCP level should give a clear picture.
I don't know exactly how I arrived at this workaround, and I haven't had time to research it yet, so it's up to you guys. There's a parameter, and I've used it like this (in the constructor of my class, before instantiating the HttpWebRequest object):
System.Net.ServicePointManager.Expect100Continue = false;
I don't know exactly why, but my calls are now quite a bit faster.
I tried all the solutions described here with no luck; the call took about 5 minutes.
What the issue was:
I needed the same session, and obviously the same cookies (the request was made to the same server), so I recreated the cookies from Request.Cookies into WebRequest.CookieContainer. The response time was about 5 minutes.
My solution:
I commented out the cookie-related code and bam! The call took less than one second.
I know this is an old thread, but I lost a whole day to slow HttpWebRequest and tried every solution provided here with no luck. Every request, to any address, took more than one minute.
Eventually the problem was my antivirus firewall (ESET). I use the firewall in interactive mode, but ESET had somehow been turned off completely. That caused requests to last forever. After turning ESET back on and executing a request, the firewall prompt appeared, and after confirmation the request executed in less than one second.
For me, using HttpWebRequest to call an API locally averaged 40ms, and to call an API on a server averaged 270ms. But calling them via Postman averaged 40ms on both environments. None of the solutions in this thread made any difference for me.
Then I found this article which mentioned the Nagle algorithm:
The Nagle algorithm increases network efficiency by decreasing the number of packets sent across the network. It accomplishes this by instituting a delay on the client of up to 200 milliseconds when small amounts of data are written to the network. The delay is a wait period for additional data that might be written. New data is added to the same packet.
Setting ServicePoint.UseNagleAlgorithm to false was the magic I needed; it made a huge difference, and the performance on the server is now almost identical to local.
var webRequest = (HttpWebRequest)WebRequest.Create(url);
webRequest.Method = "POST";
webRequest.ServicePoint.Expect100Continue = false;
webRequest.ServicePoint.UseNagleAlgorithm = false; // <<<this is the important bit
Note that this worked for me with small amounts of data; however, if your request involves large amounts of data, it might be worth making this flag conditional on the size of the request or the expected size of the response.
We encountered a similar issue at work where we had two REST APIs communicating locally with each other. In our case every HttpWebRequest took more than 2 seconds, even though the request URL was http://localhost:5000/whatever, which was way too slow.
However, upon investigating the issue with Wireshark, we found the cause:
It turned out the framework was trying to establish an IPv6 connection to localhost first; however, as our services were listening on 0.0.0.0 (IPv4), the connection attempt failed, and after a 500 ms timeout it retried, until after 4 failed attempts (4 * 500 ms = 2 seconds) it eventually gave up and fell back to IPv4 (which then succeeded almost immediately).
The solution for us was to change the request URI to http://127.0.0.1:5000/whatever (or to also listen on IPv6, which we deemed unnecessary for our local callbacks).
For me, the problem was that I had installed LogMeIn Hamachi, ironically to remotely debug the same program that then started exhibiting this extreme slowness.
FYI, disabling the Hamachi network adapter was not enough, because its Windows service re-enables the adapter.
Also, reconnecting to my Hamachi network did not solve the problem. Only disabling the adapter (by disabling the LogMeIn Hamachi Windows service) or, presumably, uninstalling Hamachi fixed the problem for me.
Is it possible to ask HttpWebRequest to go out through a specific network adapter?
This worked for me:
<configuration>
  <system.net>
    <defaultProxy enabled="false"/>
  </system.net>
</configuration>
Credit: Slow HTTPWebRequest the first time the program starts
In my case, adding AspxAutoDetectCookieSupport=1 to the code solved the problem:
Uri target = new Uri("http://payroll");
string responseContent;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(target);
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("AspxAutoDetectCookieSupport", "1") { Domain = target.Host });
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    using (Stream responseStream = response.GetResponseStream())
    {
        using (StreamReader sr = new StreamReader(responseStream))
            responseContent = sr.ReadToEnd();
    }
}
I was experiencing a delay of 15 seconds or so when creating a session to an API via HttpWebRequest. The delay was in waiting for GetRequestStream(). After scrounging for answers, I found this article; the solution was just changing a local Windows policy regarding SSL:
https://www.generacodice.com/en/articolo/1296839/httpwebrequest-15-second-delay-performance-issue
We had the same problem in a web app: we waited 5 seconds for each response. When we changed the application pool user in IIS to NetworkService, responses began arriving in less than 1 second.
