Currently, in one of our apps, I see big lags when executing the first HTTP web request. According to our logs there is a lag of 30-60 seconds. It blocks at HttpWebRequest.BeginGetResponse.
Here is a quote from MSDN:
The BeginGetRequestStream method requires some synchronous setup tasks to complete (DNS resolution, proxy detection, and TCP socket connection, for example) before this method becomes asynchronous. As a result, this method should never be called on a user interface (UI) thread because it might take some time, typically several seconds. In some environments where the webproxy scripts are not configured properly, this can take 60 seconds or more. The default value for the downloadTime attribute on the <webProxyScript> config file element is one minute, which accounts for most of the potential time delay.
I understand that DNS resolution, proxy detection, and other setup work are required. But 30-60 seconds is way too long. When I enter the same URL in any browser, I get the page immediately. When I resolve DNS manually, there is no delay either.
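For illustration, the manual DNS check was something along these lines (a sketch, not the exact code):

var entry = System.Net.Dns.GetHostEntry("matrix.ag-software.de"); // resolves without any delay
Console.WriteLine(entry.AddressList[0]);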
Concurrent requests to the same URI don't block. When I restart the application, the first request blocks again for at least 30 seconds.
Is this a known problem? Is there a bug? We see this on different machines, so I don't think my developer machine is the problem.
Here is some example code:
private void TestWebRequest()
{
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://matrix.ag-software.de/http-bind");
    request.ContentType = "text/xml; charset=utf-8";
    request.Method = "POST";
    // This is the call that blocks for 30-60 seconds on the first request.
    request.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), request);
}

private void GetRequestStreamCallback(IAsyncResult result)
{
    // Complete the async call and release the request stream.
    HttpWebRequest request = (HttpWebRequest)result.AsyncState;
    using (Stream stream = request.EndGetRequestStream(result)) { }
}
Update: This must be a problem with my Windows 7 installation. I have tested two other machines and can't reproduce the problem there. However, I have seen log files from customers with exactly the same problem, so this seems to happen under some conditions on some machines.
I have just run into the same problem.
I found out that the Microsoft Client Firewall was causing it. I was using it to avoid navigating through my company's proxy. My first workaround was to hard-code request.Proxy. To avoid that ugly line of code, I ended up disabling the Microsoft Client Firewall and setting the proxy in Internet Options.
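For reference, the hard-coded workaround was a single line along these lines (the proxy address here is a placeholder, not the real one):

// Bypass automatic proxy detection by assigning the proxy explicitly.
request.Proxy = new WebProxy("http://proxy.example.com:8080");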
Hope this helps.
As @EricLaw said, System.Net logging may help. Here is a link to the MSDN article on enabling System.Net logging.
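For convenience, here is a minimal app.config sketch for enabling that trace output (the listener name and log file path are arbitrary choices):

<configuration>
  <system.diagnostics>
    <sources>
      <source name="System.Net" tracemode="includehex">
        <listeners>
          <add name="TraceFile"/>
        </listeners>
      </source>
    </sources>
    <switches>
      <add name="System.Net" value="Verbose"/>
    </switches>
    <sharedListeners>
      <add name="TraceFile" type="System.Diagnostics.TextWriterTraceListener" initializeData="network.log"/>
    </sharedListeners>
    <trace autoflush="true"/>
  </system.diagnostics>
</configuration>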
This is a known issue. There's more information available in these links:
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/939e78bf-b9be-4d0b-894e-ae7f0d6013ff
http://social.msdn.microsoft.com/forums/en-US/netfxnetcom/thread/2cb74a7e-6e8f-4d05-b86a-2401df5d2ed3/
Related
I'm trying to load various sites to get some information in C#, but there are quite a few sites (not all) where the first request takes 30-60 seconds.
As an example:
var url = "https://www.coolermaster.com/catalog/cases/mid-tower/masterbox-k501l";
using var wc = new WebClient();
Console.WriteLine(wc.DownloadString(url));
This consistently takes 30-60 seconds every time I first run the app; Fiddler shows the connection staying in the "CONNECT" state before eventually returning the page content.
I've tried using HttpClient (and disabling the proxy via HttpClientHandler), using HttpClientFactory, and using WebRequest.Create, but no luck.
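For reference, the proxy-disabled HttpClient variant was roughly this sketch (it showed the same first-request delay):

// HttpClient with proxy detection turned off.
var handler = new HttpClientHandler { UseProxy = false, Proxy = null };
using var client = new HttpClient(handler);
Console.WriteLine(client.GetStringAsync(url).GetAwaiter().GetResult());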
If I use any command-line tools (cURL, wget), the page returns instantly, and requests from other languages (JS/Ruby/PHP/Python) also work instantly, but no luck in .NET, even if I send exactly the same headers those tools send. I've tried various computers, different internet connections, a VPS, etc.
None of these sites block bots; it's explicitly allowed in robots.txt.
Am I hitting some kind of bug, or is there an option I'm missing? It seems to be related to a slow SSL handshake, but I'm not sure.
I've written a Windows service in C# that interfaces with a third party via external web service calls (SOAP). Some of the calls respond quickly and some slowly
(for example, quick = retrieving a list of currencies; slow = retrieving historical values of currencies per product over many years).
The external service works fine when I run it on my local machine: the quick calls run for about 20 seconds; the slow calls run for about 30 minutes. I can't do anything about the speed of the third-party service, and I don't mind the time it takes to return an answer.
My problem is that when I deploy my service to my Azure virtual machine, the quick calls work perfectly, just like they do locally, but the slow ones never return anything. I have tried exception handling, logging to files, and logging to the event log.
There is no clear indication of what goes wrong; it just seems that, for whatever reason, the long-running web service calls never return successfully on Azure.
I've read somewhere that there is some sort of connection recycling happening every 4 minutes, which I suspect somehow causes the external web service response to end up in a void, with the load balancer (or whatever it is) no longer knowing who requested the content.
I start by creating the request with the relevant SOAP envelope, like this:
HttpWebRequest tRequest = (HttpWebRequest)WebRequest.Create(endpoint);
Then I set the request properties, like this:
tRequest.ClientCertificates.Add(clientCertificate);
tRequest.PreAuthenticate = true;
tRequest.KeepAlive = true;
tRequest.Credentials = CredentialCache.DefaultCredentials;
tRequest.ContentLength = byteArray.Length;
tRequest.ContentType = @"text/xml; charset=utf-8";
tRequest.Headers.Add("SOAPAction", @"http://schemas.xmlsoap.org/soap/envelope/");
tRequest.Method = "POST";
tRequest.Timeout = 3600000; // 1 hour
ServicePointManager.ServerCertificateValidationCallback = delegate { return true; }; // the SSL certificate is bad
Stream requestStream = tRequest.GetRequestStream();
requestStream.Write(byteArray, 0, byteArray.Length);
requestStream.Close();
requestStream.Dispose(); // works fine up to this point
WebResponse webResponse = tRequest.GetResponse(); // The slow calls never make it past this. Fast ones do.
Has anyone else experienced something similar, and does anyone have suggestions on how to solve it?
Many thanks
When you deploy to Azure (Cloud Service, Virtual Machine) there is always the Azure Load Balancer, which sits between your VMs and the Internet.
In order to keep resources equally available to all cloud users, the Azure Load Balancer kills idle connections. What counts as idle for the Azure LB: the default is 4 minutes with no communication sent over the established channel. So if you call a web service and there is absolutely no traffic on the pipe for 4 minutes, your connection gets terminated.
You can configure this timeout, but I would really question the need to keep a connection open that long. And yes, there is little you can do about it, besides looking for a service with a better design (i.e., one that either returns responses faster or implements asynchronous calls, where the first call just gives you a task ID that you can then poll periodically to get the result).
Here is a good article on how to configure the Azure Load Balancer timeout. Be aware that the maximum timeout for the Azure LB is 30 minutes.
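If you cannot change the service, one client-side workaround (a sketch, assuming the .NET Framework ServicePointManager API) is to enable TCP keep-alives below the 4-minute idle threshold so the load balancer keeps seeing traffic on the connection:

// Send TCP keep-alive probes after 2 minutes of inactivity, repeating every
// 30 seconds, so the Azure LB never sees 4 minutes of silence. Must be set
// before the connection is established.
ServicePointManager.SetTcpKeepAlive(true, 120000, 30000);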
I have the following test code. I always get the "Task was cancelled" error after looping 316934 or 361992 times.
If I am not wrong, there are two possible reasons why the tasks were cancelled: (a) the HttpClient timed out, or (b) too many tasks were queued and some of them timed out.
I couldn't find documentation about any limit on queuing tasks, and I tried creating more than 500K tasks without a time-out, so I guess reason (b) might not be right.
Q1. Is there any other reason that I missed?
Q2. If it's because of the HttpClient timeout, how can I get the exact exception message instead of a TaskCanceledException?
Q3. What would be the best way to fix it? Should I introduce a throttler?
Thanks!
var _httpClient = new HttpClient();
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept", "text/html,application/xhtml+xml,application/xml");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Encoding", "gzip, deflate");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", "Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0");
_httpClient.DefaultRequestHeaders.TryAddWithoutValidation("Accept-Charset", "ISO-8859-1");

int[] intArray = Enumerable.Range(0, 600000).ToArray();
var results = intArray
    .Select(async t => {
        using (HttpRequestMessage requestMessage = new HttpRequestMessage(HttpMethod.Get, "http://www.google.com")) {
            log.Info(t); // 'log' is the logger instance used throughout
            try {
                var response = await _httpClient.SendAsync(requestMessage);
                var responseContent = await response.Content.ReadAsStringAsync();
                return responseContent;
            }
            catch (Exception ex) {
                log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
            }
            return null;
        }
    });
Task.WaitAll(results.ToArray());
Console.ReadLine();
Here are the steps to replicate the issue:
Create a console project in VS 2012.
Copy and paste my code into Main.
Put a breakpoint on the line log.ErrorException(string.Format("SoeHtike {0}", Task.CurrentId), ex);
Run the program in debug mode and wait a few minutes (maybe 5?). I just tested my code and got the exception after 3 minutes. If you have Fiddler, you can monitor the requests to see whether the program is still running.
Feel free to let me know if you can't replicate the issue.
The default HttpClient.Timeout value is 100 seconds (00:01:40). If you log a timestamp in your catch block, you will notice that tasks begin to get canceled at exactly that time. Apparently there is a limit to the number of HTTP requests you can make per second; the rest get queued, and queued requests get canceled once the timeout elapses. Of all 600K tasks, I personally got only 2,500 successful ones; the rest were canceled.
I also find it unlikely that you will be able to run all 600,000 tasks. Many network drivers let through a high number of requests only for a short time, and reduce that number to a very low value after a while. My network card allowed me to send only 921 requests within 36 seconds, then dropped to only one request per second. At that speed it would take a week to complete all the tasks.
If you are able to bypass that limitation, make sure you build the code for a 64-bit platform, as the app is very hungry for memory.
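Regarding Q3, a throttler sketch using SemaphoreSlim might look like this (the concurrency cap of 50 is an arbitrary assumption):

var throttler = new SemaphoreSlim(50); // assumed concurrency cap
var tasks = Enumerable.Range(0, 600000).Select(async i =>
{
    // A request starts only when a slot is free, so it cannot
    // time out while sitting in the internal queue.
    await throttler.WaitAsync();
    try
    {
        return await _httpClient.GetStringAsync("http://www.google.com");
    }
    finally
    {
        throttler.Release();
    }
});
Task.WaitAll(tasks.ToArray());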
Don't dispose of the HttpClient instance you're using. Weird, but that fixed this problem for me. Just wanted to share.
I have used similar code to load-test our servers, and there is a high probability that your requests are timing out.
You can set the timeout for your HTTP request to the maximum and see if that changes anything for you.
I tried hitting our servers from various threads; it increased the hits, but they would all eventually time out. Note also that you cannot set a timeout when hitting them on another thread.
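For example, raising the timeout from the 100-second default is one line (the 30-minute value is an arbitrary choice):

// Must be set before the first request is sent with this HttpClient instance.
_httpClient.Timeout = TimeSpan.FromMinutes(30);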
I ran across this recently. As it turned out, when I started the web application in debug mode, the start-up URL was using https, but the URL for the WebApi endpoint (in my config file) was using http. Once I updated it to use https, I was able to call the WebApi endpoint.
I am using an open-source library to connect to my web server. I was concerned that the web server was extremely slow, so I tried a simple test in Ruby and got these results:
Ruby program: 2.11 seconds for 10 HTTP GETs
Ruby program: 18.13 seconds for 100 HTTP GETs
C# library: 20.81 seconds for 10 HTTP GETs
C# library: 36847.46 seconds for 100 HTTP GETs
I have profiled the code and found the problem to be this function:
private HttpWebResponse GetRawResponse(HttpWebRequest request) {
    HttpWebResponse raw = null;
    try {
        raw = (HttpWebResponse)request.GetResponse(); // This line!
    }
    catch (WebException ex) {
        // A WebException can still carry a usable response (e.g. 4xx/5xx).
        if (ex.Response is HttpWebResponse) {
            raw = ex.Response as HttpWebResponse;
        }
    }
    return raw;
}
The marked line takes over 1 second to complete by itself, while the Ruby program takes 0.3 seconds for one request. I am also running all of these tests against 127.0.0.1, so network bandwidth is not an issue.
What could be causing this huge slow down?
UPDATE: Check out the changed benchmark results. I actually tested with 10 GETs and not 100; I have updated the results above.
What I have found to be the main culprit of slow web requests is the Proxy property. If you set this property to null before you call the GetResponse method, the request skips the proxy auto-detect step:
request.Proxy = null;
using (var response = (HttpWebResponse)request.GetResponse())
{
}
The proxy auto-detect step was taking up to 7 seconds before the response was returned. It is a little annoying that this property is enabled by default on the HttpWebRequest object.
It may have to do with the fact that you are opening several connections at once. By default, the maximum number of open HTTP connections per host is two. Try adding this to your .config file and see if it helps:
<system.net>
.......
<connectionManagement>
<add address="*" maxconnection="20"/>
</connectionManagement>
</system.net>
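The programmatic equivalent, if editing the config file is not convenient, is a one-line sketch (set it at startup, before the first request):

// Same effect as maxconnection="20" in the config file.
ServicePointManager.DefaultConnectionLimit = 20;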
I was having a similar issue with a VB.NET MVC project.
Locally on my PC (Windows 7) page requests took under 1 second, but on the server (Windows Server 2008 R2) each page request took 20+ seconds.
I tried a combination of setting the proxy to null
System.Net.WebRequest.DefaultWebProxy = Nothing
request.Proxy = System.Net.WebRequest.DefaultWebProxy
And changing the config file by adding
<system.net>
.......
<connectionManagement>
<add address="*" maxconnection="20"/>
</connectionManagement>
</system.net>
This still did not reduce the slow page request times on the server. In the end, the solution was to uncheck the "Automatically detect settings" option in the IE options on the server itself (under Tools -> Internet Options, select the Connections tab, then press the LAN Settings button).
Immediately after I unchecked this browser option on the server, all page request times dropped from 20+ seconds to under 1 second.
I started observing a slowdown similar to the OP's in this area, which got a little better when I increased the maximum number of connections:
ServicePointManager.DefaultConnectionLimit = 4;
But after queuing up that many WebRequests, the delays came back.
The problem, in my case, was that I was calling a POST and, not caring about the response, never picked it up or did anything with it. Unfortunately, that left each WebRequest hanging around until it timed out.
The fix was to pick up the response and just close it:
WebRequest webRequest = WebRequest.Create(sURL);
webRequest.Method = "POST";
webRequest.ContentLength = byteDataGZ.Length;
webRequest.Proxy = null;
using (var requestStream = webRequest.GetRequestStream())
{
requestStream.WriteTimeout = 500;
requestStream.Write(byteDataGZ, 0, byteDataGZ.Length);
requestStream.Close();
}
// Get the response so that we don't leave this request hanging around
WebResponse response = webRequest.GetResponse();
response.Close();
Use a computer other than localhost, then use Wireshark to see what's really going over the wire.
As others have said, it could be any number of things, and looking at the traffic at the TCP level should give a clear picture.
I don't know exactly how I reached this workaround, and I haven't had time to research it yet, so it's up to you guys. There's a parameter, and I've used it like this (in the constructor of my class, before instantiating the HttpWebRequest object):
System.Net.ServicePointManager.Expect100Continue = false;
I don't know exactly why, but my calls are now considerably faster.
I tried all the solutions described here with no luck; the call took about 5 minutes.
What the issue was:
I needed the same session, and obviously the same cookies (the request was made to the same server), so I recreated the cookies from Request.Cookies into WebRequest.CookieContainer. The response time was about 5 minutes.
My solution:
I commented out the cookie-related code and bam! The call took less than one second.
I know this is an old thread, but I lost a whole day to a slow HttpWebRequest and tried every solution provided here with no luck. Every request to any address took more than one minute.
Eventually, the problem was my antivirus firewall (ESET). I use the firewall in interactive mode, but ESET had somehow been turned off completely, which caused requests to hang forever. After turning ESET back on and executing a request, the interactive firewall prompt appeared, and after I confirmed it, the request executed in less than one second.
For me, using HttpWebRequest to call an API locally averaged 40 ms, while calling an API on a server averaged 270 ms. Calling them via Postman, however, averaged 40 ms in both environments. None of the solutions in this thread made any difference for me.
Then I found this article which mentioned the Nagle algorithm:
The Nagle algorithm increases network efficiency by decreasing the number of packets sent across the network. It accomplishes this by instituting a delay on the client of up to 200 milliseconds when small amounts of data are written to the network. The delay is a wait period for additional data that might be written. New data is added to the same packet.
Setting ServicePoint.UseNagleAlgorithm to false was the magic I needed; it made a huge difference, and performance on the server is now almost identical to local.
var webRequest = (HttpWebRequest)WebRequest.Create(url);
webRequest.Method = "POST";
webRequest.ServicePoint.Expect100Continue = false;
webRequest.ServicePoint.UseNagleAlgorithm = false; // <<<this is the important bit
Note that this worked for me with small amounts of data; if your request involves large data, it might be worth making this flag conditional on the size of the request and the expected size of the response.
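A sketch of that conditional, assuming a hypothetical requestBytes payload and an arbitrary threshold:

// Disable Nagle only for small writes, where the up-to-200 ms delay hurts;
// requestBytes and the 1400-byte cutoff are assumptions, not measured values.
webRequest.ServicePoint.UseNagleAlgorithm = requestBytes.Length > 1400;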
We encountered a similar issue at work, where two REST APIs were communicating locally with each other. In our case every HttpWebRequest took more than 2 seconds, even though the request URL was http://localhost:5000/whatever, which was way too slow.
However, upon investigating the issue with Wireshark, we found the following:
It turned out the framework was trying to establish an IPv6 connection to localhost first. However, as our services were listening only on 0.0.0.0 (IPv4), the connection attempt failed; after a 500 ms timeout it retried, and after 4 failed attempts (4 * 500 ms = 2 seconds) it eventually gave up and fell back to IPv4 (which succeeded almost immediately).
The solution for us was to change the request URI to http://127.0.0.1:5000/whatever (or to also listen on IPv6, which we deemed unnecessary for our local callbacks).
For me, the problem was that I had installed LogMeIn Hamachi, ironically to remotely debug the same program that then started exhibiting this extreme slowness.
FYI, disabling the Hamachi network adapter was not enough, because its Windows service re-enables the adapter.
Also, reconnecting to my Hamachi network did not solve the problem. Only disabling the adapter (by disabling the LogMeIn Hamachi Windows service) or, presumably, uninstalling Hamachi fixed the problem for me.
Is it possible to ask HttpWebRequest to go out through a specific network adapter?
This worked for me:
<configuration>
<system.net>
<defaultProxy enabled="false"/>
</system.net>
</configuration>
Credit: Slow HTTPWebRequest the first time the program starts
In my case, adding AspxAutoDetectCookieSupport=1 to the code solved the problem:
Uri target = new Uri("http://payroll");
string responseContent;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(target);
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("AspxAutoDetectCookieSupport", "1") { Domain = target.Host });
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
using (Stream responseStream = response.GetResponseStream())
{
using (StreamReader sr = new StreamReader(responseStream))
responseContent = sr.ReadToEnd();
}
}
I was experiencing a delay of 15 seconds or so when creating a session to an API via HttpWebRequest. The delay was in waiting for GetRequestStream(). After scrounging for answers, I found this article; the solution was simply changing a local Windows policy regarding SSL:
https://www.generacodice.com/en/articolo/1296839/httpwebrequest-15-second-delay-performance-issue
We had the same problem in a web app: we waited 5 seconds for the response. When we changed the application pool user in IIS to NetworkService, responses began to arrive in less than 1 second.
I have some fairly simple code that uploads a photo or video to an endpoint (using HTTP PUT or POST). Every so often I see connection-closed exceptions thrown even though the photo/video was in fact uploaded just fine; the exception occurs in the call to GetResponse.
One thing I've noticed is that GetResponse can take an awfully long time to process, often longer than the actual upload of the photo to the server. My code writes to the web server using RequestStream.Write.
I did a little test and uploaded about 40 photos/videos to the server that range in size from 1MB to 85MB and the time for GetResponse to return was anywhere from 3 to 40 seconds.
My question is: is this normal? Is this just a matter of how long the server I am uploading these files to takes to process my request and respond? Looking at Fiddler HTTP traces, that seems to be the case.
FYI, my uploads use HTTP 1.0, with timeout values set to infinite (both Timeout and ReadWriteTimeout).
If the server is genuinely taking a long time to return any data (as shown in Fiddler) then that's the cause of it. Uploading an 85MB attachment would take a long time to start with, and then the server has to process it. You can't do a lot about that - other than to use an asynchronous method if you're able to get on with more work before the call returns.
It's not entirely clear what Fiddler's showing you though - is it showing a long time before the server sends the response? If so, there's not much you can do. I'm surprised that the connection is being closed on you, admittedly. If, however, you're not seeing your data being written to the server for a while, that's a different matter.
Are you disposing the response returned? If not, you may have connections which are being kept alive. This shouldn't be a problem if it's explicitly HTTP 1.0, but it's the most common cause of "hanging" web calls in my experience.
Basically, if you don't dispose of a WebResponse it will usually (at least with HTTP 1.1 and keepalive) hold on to the connection. There's a limit to the number of connections which can be open to a single host, so you could end up waiting until an earlier response is finalized before the next one can proceed.
If this is the problem, a simple using statement is the answer:
using (WebResponse response = request.GetResponse())
{
...
}
Yes, the response time may be a lot longer than just the upload time. After the request has been sent to the server it has to be processed and a response has to be returned. There may be some time before the request is processed, and then the file typically is going to be saved somewhere. After that the server will create the response page that is sent back.
IIS handles only one request at a time from each user, so if you start another upload before the first one is completed, it will wait until the first one completes before it even starts to process the next.