HttpWebRequest (The underlying connection was closed: The connection was closed unexpectedly.) - C#

I am developing a C# application which logs data from a web server. It sends the following POST request to the web server and waits for the response.
/// <summary>
/// Function for obtaining testCgi data
/// </summary>
/// <param name="Parameters"></param>
/// <returns></returns>
private string HttpmyPost(string Parameters)
{
    string str = "No response";
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uriTestCGI);
    request.Method = "POST";
    byte[] bytes = Encoding.UTF8.GetBytes(Parameters);
    request.ContentLength = bytes.Length;
    Stream requestStream = request.GetRequestStream();
    requestStream.Write(bytes, 0, bytes.Length);
    requestStream.Close();
    WebResponse response = request.GetResponse();
    Stream stream = response.GetResponseStream();
    StreamReader reader = new StreamReader(stream);
    try
    {
        var result = reader.ReadToEnd();
        stream.Dispose();
        str = result.ToString();
        reader.Dispose();
    }
    catch (WebException ex)
    {
        //System.Windows.Forms.MessageBox.Show(ex.Message);
        System.Diagnostics.Trace.WriteLine(ex.Message);
    }
    finally
    {
        request.Abort();
    }
    return str;
}
I am getting the error
> "The underlying connection was closed: The connection was closed
> unexpectedly"
I have tried to debug the error, and I used Fiddler to check the POST request as issued from Firefox. To my surprise, whenever Fiddler was running, my program worked perfectly. When I close Fiddler, I get the same error again.
I suspect that since Fiddler is acting as a proxy it may change some of the settings.
I have tried using WebClient and the result was the same.
When I tried coding the request in Python, everything worked without any problem. Of course I have the option of installing IronPython and wrapping that particular function, but I consider this overkill and inelegant, so I am pursuing a leaner approach. I suspect this is nothing more than a settings adjustment.
I have tried modifying the following properties, and in my case it made no difference:
request.Accept
request.ReadWriteTimeout
request.Timeout
request.UserAgent
request.Headers
request.AutomaticDecompression
request.Referer
request.AllowAutoRedirect
//request.TransferEncoding
request.Expect
request.ServicePoint.Expect100Continue
request.PreAuthenticate
request.KeepAlive
request.ProtocolVersion
request.ContentType
With or without the above adjustments, the code works when Fiddler is capturing data.
It might also be noteworthy that the program throws the error at
WebResponse response = request.GetResponse();
UPDATE:
Following @EricLaw's suggestions, I looked into latency.
I found this article
HttpWebRequest gets slower when adding an Interval
which suggested turning off the Nagle algorithm.
Now there are no closed connections, although there is a small lag in the overall response (when I use WinForms, and not async).
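For reference, here is a minimal sketch of that change (assuming the uriTestCGI field from the code above; UseNagleAlgorithm can be set globally or per ServicePoint):
// Globally, before any requests are created:
ServicePointManager.UseNagleAlgorithm = false;
// Or only for this endpoint:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uriTestCGI);
request.ServicePoint.UseNagleAlgorithm = false;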

I write a bit about how Fiddler can "magically" fix things here: http://blogs.telerik.com/fiddler/posts/13-02-28/help!-running-fiddler-fixes-my-app-
The issue you're encountering is actually a bug in the .NET Framework itself. The rules of HTTP are such that the server may close a KeepAlive connection at any time after sending the first response (e.g. it doesn't need to accept another request on the connection, even if the client requested KeepAlive behavior).
.NET has a bug where it expects that the server will include a Connection: close response header if it will close the connection after the response is complete. If the server closes the connection without the Connection: Close header (entirely valid per RFC2616), .NET will encounter the closed connection when attempting to send the next request on the connection and it will throw this exception. What .NET should be doing is silently creating a new connection and resending the request on that new connection.
Fiddler resolves this problem because it doesn't care if the server closes the connection, and it keeps the connection to the client alive. When the client sends its second request, Fiddler attempts to reuse its connection to the server, notices that it's closed, and silently creates a new connection.
You can mitigate this problem in your code by:
1. Disabling keepalive on the request (this hurts performance)
2. Catching the exception and retrying automatically
3. Changing the server to keep connections alive longer
Approach #3 only works if you control the server, and because the client may be behind a gateway/proxy that closes connections after use, you should probably use approach #2 as well.
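A rough sketch of approaches #1 and #2 combined, based on the question's HttpmyPost method (not the original poster's exact code; it assumes the uriTestCGI field from the question):
// requires: using System.IO; using System.Net; using System.Text;
private string HttpmyPostWithRetry(string Parameters)
{
    for (int attempt = 0; attempt < 2; attempt++)
    {
        try
        {
            var request = (HttpWebRequest)WebRequest.Create(uriTestCGI);
            request.Method = "POST";
            request.KeepAlive = false;                    // approach #1: don't reuse the connection
            byte[] bytes = Encoding.UTF8.GetBytes(Parameters);
            request.ContentLength = bytes.Length;
            using (Stream requestStream = request.GetRequestStream())
                requestStream.Write(bytes, 0, bytes.Length);
            using (WebResponse response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                return reader.ReadToEnd();
        }
        catch (WebException ex)
        {
            System.Diagnostics.Trace.WriteLine(ex.Message);
            if (attempt == 1) throw;                      // approach #2: retry once on a fresh connection
        }
    }
    return "No response";
}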

A suggestion and a question:
1) If you really want to see what's going on, install Wireshark. It will show you exactly what's being sent and received, and it will make it possible to compare with Fiddler.
I guess you are missing a header, like request.ContentType = "....", but only Wireshark will show you which one (sent by your working alternative, but not by your HttpWebRequest).
2) Are you getting the error inside the HTTP response content, or is it an exception? If it's an exception, is it caught in your catch block, or does it occur during the request, before your try statement?

Fiddler works as an Internet proxy. If your code works while Fiddler is running (and maybe also from a browser), then you may have a problem with your proxy settings.

Related

ASP.NET application initially fails with: An existing connection was forcibly closed by the remote host

I apologize if I missed any details in this post; it's a bit bewildering and seems to be more of a bug in the OS than anything else.
We have an ASP.NET web application running on Windows Server 2019. The application attempts to make a web connection to another hosted application. We are trying to force it to use TLS1.2. When we do, the initial connection request ALWAYS fails after restarting IIS and then works fine from that point on.
I've installed Wireshark and can confirm that there is NO TCP connection attempt made on the wire whatsoever. Instead, we get an error from the application that the connection was forcibly closed.
System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host. ---> System.Net.Sockets.SocketException: An existing connection was forcibly closed by the remote host
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
The ASP.NET application is using .NET 4.5.2. The simplified code looks like so:
try {
    HttpWebRequest hwrequest = (HttpWebRequest)System.Net.WebRequest.Create("https://api.domain.com/");
    hwrequest.Accept = "*/*";
    hwrequest.AllowAutoRedirect = true;
    hwrequest.UserAgent = "http_requester/0.1";
    hwrequest.Timeout = 900000;
    hwrequest.Method = "POST";
    hwrequest.KeepAlive = false;
    hwrequest.ContentType = "application/x-www-form-urlencoded; charset=UTF-8";
    ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(delegate {
        return true;
    });
    ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
    System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
    byte[] postByteArray = encoding.GetBytes(postData);
    hwrequest.ContentLength = postByteArray.Length;
    System.IO.Stream postStream = hwrequest.GetRequestStream();
    postStream.Write(postByteArray, 0, postByteArray.Length);
    postStream.Close();
    System.Net.HttpWebResponse hwresponse = (System.Net.HttpWebResponse)hwrequest.GetResponse();
    if (hwresponse.StatusCode == System.Net.HttpStatusCode.OK) {
        System.IO.Stream responseStream = hwresponse.GetResponseStream();
        System.IO.StreamReader myStreamReader = new System.IO.StreamReader(responseStream);
        responseData = myStreamReader.ReadToEnd();
    }
    hwresponse.Close();
}
catch (Exception e) {
    Console.WriteLine(e.ToString());
    responseData = "An error occurred: " + e.Message;
}
If we refresh the page, the connection attempt succeeds. But the initial request fails. If we comment out the line ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12; then there is no problem, but the application uses TLSv1.
Is there something that you can think of that would cause the initial connection attempt to fail and then all subsequent requests to work until IIS is restarted? Seems like some type of bug in the OS rather than a coding issue.
This is a fully patched Windows Server 2019 almost fresh out of the box on Amazon AWS. There is no security software or extra things installed on it.
This generally means that the remote side closed the connection (usually by sending a TCP/IP RST packet). If you're working with a third-party application, the likely causes are:
You are sending malformed data to the application
The network link between the client and server is going down for some reason
You have triggered a bug in the third-party application that caused it to crash
The third-party application has exhausted system resources
It's likely that the first case is what's happening.
But you can fire up Wireshark to see exactly what is happening on the wire and narrow down the problem. Without more specific information, it's unlikely that anyone here can really help you much.
Another option: using TLS 1.2 might resolve this error.
You can force your application to use TLS 1.2 with this (make sure to execute it before calling your service):
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;
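If you still need older protocols for other endpoints, you can OR the flag into the existing value instead of replacing it (a small variation on the line above, not from the original answer):
ServicePointManager.SecurityProtocol |= SecurityProtocolType.Tls12;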
Another solution:
Enable strong cryptography on your local machine or server in order to use TLS 1.2, because by default it is disabled, so only TLS 1.0 is used.
To enable strong cryptography, execute these commands in PowerShell with admin privileges:
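Presumably these are the standard SchUseStrongCrypto registry settings for the .NET Framework 4.x (an assumption on my part; both the 64-bit and 32-bit hives are shown, and an IIS restart is needed afterwards):
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319' -Name 'SchUseStrongCrypto' -Value 1 -Type DWord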

Asynchronous connection not working after first failure

I am dealing with a quite annoying issue, and I could not find any solution for it.
I am making a WebRequest connection, working under Unity C#:
IAsyncResult startingTheCall = webRequest.BeginGetRequestStream(new AsyncCallback(GetRequestStreamCallback), parameters);
It sends a call to a server running on Windows, and all works fine. But if the server is not running, the connection waits for the response forever. To solve this, I added a timeout:
ThreadPool.RegisterWaitForSingleObject(startingTheCall.AsyncWaitHandle, new WaitOrTimerCallback(TimeoutCallbackOfFirstStream), parameters, 10000, true);
The timeout works fine too. The problem is, if I ever trigger the timeout (by shutting the server down) and then enable the server again, BeginGetRequestStream will never reach the server no matter what, until I restart the application.
I thought that maybe on failure I was not cleaning up connections properly, so I set up this cleanup routine inside the timeout callback:
ArrayList unbox = (ArrayList)state;
HttpWebRequest request = (HttpWebRequest)unbox[1];
IAsyncResult asynchronousResult = (IAsyncResult)unbox[6];
Stream postStream = request.EndGetRequestStream(asynchronousResult);
postStream.Close();
request.Abort();
I abort the request and close the stream. Still, after the first failure, the server never receives the stream messages or sends responses until I restart the application. It is like it is completely blocked.
Anyone ever had this behaviour?
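For what it's worth, a minimal sketch of a timeout callback that only aborts the request (my assumption, not from the original post; calling EndGetRequestStream on a timed-out request can itself block or throw, so the aborted request is simply discarded and a new one is built for the next attempt):
// requires: using System.Collections; using System.Net;
private void TimeoutCallbackOfFirstStream(object state, bool timedOut)
{
    if (!timedOut)
        return;
    ArrayList unbox = (ArrayList)state;
    HttpWebRequest request = (HttpWebRequest)unbox[1];   // same slot as in the cleanup code above
    request.Abort();   // cancels the pending BeginGetRequestStream; do not call EndGetRequestStream here
    // create a brand-new HttpWebRequest for the next call instead of reusing this one
}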

C# HttpWebRequest to Socket 3 second delay in writing to Request Stream

I have a setup with a custom HTTP server using sockets, and a client that uses HttpWebRequest to connect to that server. The client is multi-threaded, using this code:
ServicePointManager.UseNagleAlgorithm = false;
ServicePointManager.Expect100Continue = false;
ServicePointManager.DefaultConnectionLimit = 48;
var request = (HttpWebRequest)WebRequest.Create(url);
request.Method = verb;
request.Timeout = timeout;
request.AutomaticDecompression = DecompressionMethods.GZip;
request.Proxy = null;
request.SendChunked = false;
/*offending lines*/
var writer = new StreamWriter(request.GetRequestStream(),Encoding.UTF8);
writer.Write(stringData);
writer.Close();
/*end offending lines*/
var response = (HttpWebResponse)request.GetResponse();
I have been running thousands of requests through this code, and it works very well most of the time. The average response time is about 200ms, including some work done on the server.
The problem is that about 0.2% of the requests are very slow. When measuring I notice that the slow requests are pausing for 3 seconds (+- 20ms) when getting and writing to the request stream. GetResponse() is still fast, and nothing on the server takes extra time.
The server is implemented as a listener Socket listening asynchronously on a port, using Listen() and BeginAccept(). After a connection is accepted, BeginReceive is called on the new Socket (acquired from EndAccept()) and BeginAccept is called again on the listener socket. From what I've read, this is the way to do it.
So my question is: what happens during GetRequestStream() and writing to it? Is data actually being sent here or is it flushed on GetResponse()? How come some of the requests take 3 seconds longer without any apparent reason? Is my server implementation flawed in some way? The weird thing is that it is always 3 seconds, except sometimes it can be 9 seconds. It seems to be a multiple of 3, for some reason.
A few possible scenarios:
The connection is initiated on GetRequestStream() and the server does not accept it right away. How can I test if this is the case?
Something is going on behind the scenes on GetRequestStream() or writing to it.
Waiting for network hardware?
Waiting for DNS resolution?
Something else?
Any help is very appreciated.
EDIT: I've tried connecting through a web proxy, and then the delay moves from GetRequestStream() to GetResponse(), indicating that the problem indeed lies in the web server. It seems like it is not accepting the connection. I realize that there might be some delay between EndAccept() and BeginAccept(), but it should not amount to a 3 second delay, right?
I've also tried running my client single-threaded, and the problem seems to disappear. This suggests that the connection delay only occurs when multiple requests are made at the same time.
I guess I'm a bit too late, but I've encountered a similar problem and found an article which describes what seems to be a pretty low-level reason for such an issue: http://www.percona.com/blog/2011/04/19/mysql-connection-timeouts/
Maybe it's because you don't dispose the request stream. Try replacing your offending lines with the following:
var stream = request.GetRequestStream();
var buffer = Encoding.UTF8.GetBytes(stringData);
stream.Write(buffer, 0, buffer.Length);
stream.Dispose();
Preliminary testing suggests that the issue was indeed connection crowding on the server. With some restructuring of Socket code, I managed to get rid of the slow connections. I will run more testing tonight to make sure, but it looks promising!
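For anyone curious, a rough sketch of the kind of restructuring meant here (an assumption on my part): keep a generous listen backlog and re-arm BeginAccept immediately in the accept callback, so bursts of parallel connections don't sit in the SYN queue (a 3-second pause matches a typical TCP SYN retransmit interval):
// requires: using System; using System.Net; using System.Net.Sockets;
static Socket _listener;

static void StartServer()
{
    _listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    _listener.Bind(new IPEndPoint(IPAddress.Any, 8080));   // port 8080 is just an example
    _listener.Listen(100);                                  // backlog large enough for connection bursts
    _listener.BeginAccept(OnAccept, null);
}

static void OnAccept(IAsyncResult ar)
{
    Socket client = _listener.EndAccept(ar);
    _listener.BeginAccept(OnAccept, null);   // re-arm the accept before doing any work on `client`
    // ... call BeginReceive on `client` as described in the question ...
}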

HttpWebRequest is extremely slow!

I am using an open source library to connect to my web server. I was concerned that the web server was extremely slow, so I tried doing a simple test in Ruby and got these results:
Ruby program: 2.11 seconds for 10 HTTP GETs
Ruby program: 18.13 seconds for 100 HTTP GETs
C# library: 20.81 seconds for 10 HTTP GETs
C# library: 36847.46 seconds for 100 HTTP GETs
I have profiled and found the problem to be this function:
private HttpWebResponse GetRawResponse(HttpWebRequest request) {
    HttpWebResponse raw = null;
    try {
        raw = (HttpWebResponse)request.GetResponse(); //This line!
    }
    catch (WebException ex) {
        if (ex.Response is HttpWebResponse) {
            raw = ex.Response as HttpWebResponse;
        }
    }
    return raw;
}
The marked line takes over 1 second to complete by itself, while the Ruby program making 1 request takes 0.3 seconds. I am also doing all of these tests on 127.0.0.1, so network bandwidth is not an issue.
What could be causing this huge slow down?
UPDATE
Check out the changed benchmark results. I actually tested with 10 GETs and not 100, I updated the results.
What I have found to be the main culprit of slow web requests is the proxy property. If you set this property to null before you call the GetResponse method, the request will skip the proxy autodetect step:
request.Proxy = null;
using (var response = (HttpWebResponse)request.GetResponse())
{
}
The proxy autodetect was taking up to 7 seconds before returning the response. It is a little annoying that this property is enabled by default on the HttpWebRequest object.
It may have to do with the fact that you are opening several connections at once. By default the maximum number of open HTTP connections is set to two. Try adding this to your .config file and see if it helps:
<system.net>
.......
<connectionManagement>
<add address="*" maxconnection="20"/>
</connectionManagement>
</system.net>
I was having a similar issue with a VB.Net MVC project.
Locally on my pc (Windows 7) it was taking under 1 second to hit the page requests, but on the server (Windows Server 2008 R2) it was taking 20+ seconds for each page request.
I tried a combination of setting the proxy to null
System.Net.WebRequest.DefaultWebProxy = Nothing
request.Proxy = System.Net.WebRequest.DefaultWebProxy
And changing the config file by adding
<system.net>
.......
<connectionManagement>
<add address="*" maxconnection="20"/>
</connectionManagement>
</system.net>
This still did not reduce the slow page request times on the server. In the end the solution was to uncheck the “Automatically detect settings” option in the IE options on the server itself. (Under Tools -> Internet Options select the Connections tab. Press the LAN Settings button)
Immediately after I unchecked this browser option on the server all the page request times dropped from 20+ seconds to under 1 second.
I started observing a slowdown similar to the OP's in this area, which got a little better when increasing the connection limit:
ServicePointManager.DefaultConnectionLimit = 4;
But after building up this number of WebRequests, the delays came back.
The problem, in my case, was that I was calling a POST and wasn't bothered about the response, so I wasn't picking it up or doing anything with it. Unfortunately this left the WebRequests floating around until they timed out.
The fix was to pick up the Response and just close it.
WebRequest webRequest = WebRequest.Create(sURL);
webRequest.Method = "POST";
webRequest.ContentLength = byteDataGZ.Length;
webRequest.Proxy = null;
using (var requestStream = webRequest.GetRequestStream())
{
requestStream.WriteTimeout = 500;
requestStream.Write(byteDataGZ, 0, byteDataGZ.Length);
requestStream.Close();
}
// Get the response so that we don't leave this request hanging around
WebResponse response = webRequest.GetResponse();
response.Close();
Use a computer other than localhost, then use Wireshark to see what's really going over the wire.
Like others have said, it can be a number of things. Looking at things on the TCP level should give a clear picture.
I don't know exactly how I arrived at this workaround, and I haven't had time to research it yet, so it's up to you guys. There's a parameter, and I've used it like this (in the constructor of my class, before instantiating the HttpWebRequest object):
System.Net.ServicePointManager.Expect100Continue = false;
I don't know exactly why, but now my calls are much faster.
I tried all the solutions described here with no luck; the call took about 5 minutes.
What was the issue:
I needed the same session and obviously the same cookies (request made on the same server), so I recreated the cookies from Request.Cookies into WebRequest.CookieContainer. The response time was about 5 minutes.
My solution:
Commented out the cookie-related code and bam! The call took less than one second.
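For reference, a hedged sketch of the kind of cookie-forwarding code that was removed (the target and webRequest names are assumptions; Request.Cookies is the incoming ASP.NET collection):
// requires: using System.Net; using System.Web;
var container = new CookieContainer();
foreach (string key in Request.Cookies.AllKeys)
{
    var c = Request.Cookies[key];
    container.Add(new Cookie(c.Name, c.Value) { Domain = target.Host });
}
webRequest.CookieContainer = container;   // forwarding these caused the 5-minute wait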
I know this is an old thread, but I lost a whole day to a slow HttpWebRequest and tried every provided solution with no luck. Every request, to any address, took more than one minute.
Eventually, the problem turned out to be my antivirus firewall (ESET). I'm using the firewall in interactive mode, but ESET was somehow turned off completely. That caused requests to last forever. After turning ESET on and executing a request, the firewall prompt is shown, and after confirmation the request executes in less than one second.
For me, using HttpWebRequest to call an API locally averaged 40 ms, and calling an API on a server averaged 270 ms. But calling them via Postman averaged 40 ms in both environments. None of the solutions in this thread made any difference for me.
Then I found this article which mentioned the Nagle algorithm:
The Nagle algorithm increases network efficiency by decreasing the number of packets sent across the network. It accomplishes this by instituting a delay on the client of up to 200 milliseconds when small amounts of data are written to the network. The delay is a wait period for additional data that might be written. New data is added to the same packet.
Setting ServicePoint.UseNagleAlgorithm to false was the magic I needed; it made a huge difference, and the performance on the server is now almost identical to local.
var webRequest = (HttpWebRequest)WebRequest.Create(url);
webRequest.Method = "POST";
webRequest.ServicePoint.Expect100Continue = false;
webRequest.ServicePoint.UseNagleAlgorithm = false; // <<<this is the important bit
Note that this worked for me with small amounts of data, however if your request involves large data then it might be worth making this flag conditional depending on the size of the request/expected size of the response.
We encountered a similar issue at work where we had two REST APIs communicating locally with each other. In our case every HttpWebRequest took more than 2 seconds, even though the request URL was http://localhost:5000/whatever, which was way too slow.
However, upon investigating the issue with Wireshark, we found the following:
It turned out the framework was trying to establish an IPv6 connection to localhost first. However, as our services were listening on 0.0.0.0 (IPv4), the connection attempt failed and was retried after a 500 ms timeout; after 4 failed attempts (4 * 500 ms = 2 seconds) it eventually gave up and fell back to IPv4 (which succeeded almost immediately).
The solution for us was to change the request URI to http://127.0.0.1:5000/whatever (or to also listen on IPv6, which we deemed unnecessary for our local callbacks).
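If you want to check whether the same thing is happening on your machine, a quick diagnostic sketch:
// requires: using System; using System.Net;
foreach (var address in Dns.GetHostEntry("localhost").AddressList)
    Console.WriteLine(address);   // if ::1 is listed before 127.0.0.1, IPv6 is attempted first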
For me, the problem was that I had installed LogMeIn Hamachi--ironically to remotely debug the same program that then started exhibiting this extreme slowness.
FYI, disabling the Hamachi network adapter was not enough because it seems that its Windows service re-enables the adapter.
Also, re-connecting to my Hamachi network did not solve the problem. Only disabling the adapter (by way of disabling the LogMeIn Hamachi Windows service) or, presumably, uninstalling Hamachi, fixed the problem for me.
Is it possible to ask HttpWebRequest to go out through a specific network adapter?
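As far as I know there is no adapter setting on HttpWebRequest itself, but you can pin the local endpoint through the ServicePoint (a sketch; url is whatever address you're calling, and 192.168.1.20 is a made-up example for the adapter's local IP):
// requires: using System.Net;
var request = (HttpWebRequest)WebRequest.Create(url);
request.ServicePoint.BindIPEndPointDelegate =
    (servicePoint, remoteEndPoint, retryCount) =>
        new IPEndPoint(IPAddress.Parse("192.168.1.20"), 0);   // port 0 = any free local port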
This worked for me:
<configuration>
<system.net>
<defaultProxy enabled="false"/>
</system.net>
</configuration>
Credit: Slow HTTPWebRequest the first time the program starts
In my case, adding AspxAutoDetectCookieSupport=1 to the code solved the problem:
Uri target = new Uri("http://payroll");
string responseContent;
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(target);
request.CookieContainer = new CookieContainer();
request.CookieContainer.Add(new Cookie("AspxAutoDetectCookieSupport", "1") { Domain = target.Host });
using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
{
    using (Stream responseStream = response.GetResponseStream())
    {
        using (StreamReader sr = new StreamReader(responseStream))
            responseContent = sr.ReadToEnd();
    }
}
I was experiencing a delay of 15 seconds or so when creating a session to an API via HttpWebRequest. The delay was in waiting for GetRequestStream(). After scrounging for answers, I found this article, and the solution was just changing a local Windows policy regarding SSL:
https://www.generacodice.com/en/articolo/1296839/httpwebrequest-15-second-delay-performance-issue
We had the same problem in a web app; we waited 5 seconds for the response. When we changed the application pool user in IIS to NetworkService, responses began to arrive in less than 1 second.

HttpWebRequest not returning, connection closing

I have a web application that is polling a web service on another server. The server is located on the same network, and is referenced by an internal IP, running on port 8080.
Every 15 seconds, a request is sent out, which receives an XML response with job information. 95% of the time this works well; however, at random times the response from the server is null, and a "response forcibly closed by remote host" error is reported.
Researching this issue, others have set KeepAlive = false. This has not solved the issue. The web server is running .NET 3.5 SP1.
Uri serverPath = new Uri(_Url);
// create the request and set the login credentials
_Req = (HttpWebRequest)WebRequest.Create(serverPath);
_Req.KeepAlive = false;
_Req.Credentials = new NetworkCredential(username, password);
_Req.Method = this._Method;
Call to the response:
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
_ResponseStream = response.GetResponseStream();
The method for this is GET. I tried changing the timeout, but the default is large enough to take this into account.
The other request we perform is a POST to post data to the server, and we are getting the same issue randomly there as well. There are no firewalls affecting this, and we have ruled out the virus scanner. Any ideas to help solve this are greatly appreciated!
Are you closing the response stream and disposing of the response itself? That's the most frequent cause of "hangs" with WebRequest - there's a limit to how many connections you can open to the same machine at the same time. The GC will finalize the connections eventually, but if you dispose them properly it's not a problem.
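In other words, something like this (a sketch using the question's _Req field):
// requires: using System.IO; using System.Net;
using (HttpWebResponse response = (HttpWebResponse)_Req.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (StreamReader reader = new StreamReader(responseStream))
{
    string body = reader.ReadToEnd();
    // work with body here; disposing the reader, stream, and response releases
    // the connection back to the pool instead of leaving it open
}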
I wouldn't rule out network issues as a possible reason for problems. Have you run a ping to your server to see if you get dropped packets that correspond to the same times as your failed requests?
Set the Timeout property of the FtpWebRequest object to the maximum; I tried it with a 4 GB file and it works great.
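For example (a sketch; ftpRequest is an assumed variable name, and Timeout.Infinite disables the timeout entirely):
ftpRequest.Timeout = System.Threading.Timeout.Infinite;   // or int.MaxValue for the largest finite timeout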
