Force connection to close with HttpClient in WCF - C#

We're using the HttpClient class from the "REST Toolkit" for WCF ( http://aspnet.codeplex.com/Release/ProjectReleases.aspx?ReleaseId=24644 ) to interface with a REST server we've created.
The server currently always closes the connection, regardless of the "Connection" header (it is in development, so that is OK for now).
How can I tell the HttpClient instance to always close the connection (or accept that the server closes it)? I tried adding a "Connection: close" header, but that resulted in an exception ("Connection" was not an allowed header). I also tried setting DefaultHeaders.Connection.Close = true, but this didn't seem to make any difference: I can still see the connection with netstat after the POST is done.
(body and uri are strings)
HttpClient client = new HttpClient();

// Ask for "Connection: close" via the typed header API
client.DefaultHeaders.Connection = new Connection();
client.DefaultHeaders.Connection.Close = true;

HttpContent content = HttpContent.Create(body);
HttpResponseMessage res = client.Post(new Uri(uri), content);
The problem is that the next time we do a POST, the call just blocks. We think this is because the connection is kept alive by the client, which the server does not support.

I wrote most of the HttpClient code; here are some thoughts:
You don't need to tell HttpClient to accept the server closing the connection -- that should just work automatically; the server can always close the connection, whether or not the client includes the "Connection: close" header.
Use response.Dispose() -- the underlying connection may not be able to start a new request or send bytes until you've finished reading the bytes that are pending.
client.DefaultHeaders.Add("Connection", "close"); should work -- if you get an exception, please open an issue on the CodePlex site.
You can inspect response.Request.Headers to see what's going out on the wire.
You can skip the "new Uri()" in Post(new Uri(x)) -- passing a string will call the right constructor (new Uri(x, UriKind.RelativeOrAbsolute)) for you.
By default, the timeouts are the same as HttpWebRequest's -- you might want to turn them down via client.TransportSettings.ReadWriteTimeout / client.TransportSettings.ConnectionTimeout to distinguish between blocking forever and timing out. A sketch combining these tips follows this list.
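Putting the header and timeout tips together -- a minimal sketch against the Starter Kit's HttpClient, with placeholder timeout values rather than recommendations:
HttpClient client = new HttpClient();
// Send "Connection: close" on every request
client.DefaultHeaders.Add("Connection", "close");
// Fail fast instead of blocking forever (values are placeholders)
client.TransportSettings.ConnectionTimeout = TimeSpan.FromSeconds(10);
client.TransportSettings.ReadWriteTimeout = TimeSpan.FromSeconds(30);

// Dispose the response so the underlying connection is released
using (HttpResponseMessage res = client.Post(uri, HttpContent.Create(body)))
{
    // read the response fully here
}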

Have you tried disposing the response when you're done with it?
// do stuff
res.Dispose();
or
using (HttpResponseMessage res = client.Post(new Uri(uri), content))
{
// do stuff
}

Related

HttpClient and Connection: close HttpRequestHeaders

I'm writing a static wrapper for HttpClient; the wrapper's HttpClient instance is named 'Client'. The code is below. As I add to the options class, I've come to ConnectionClose, and I'm thinking of making it configurable so that other developers can set it as desired.
But everything I read about Connection: close in the header indicates I want keep-alive. Should this value just be false all the time? Or are there valid use cases for setting it to true?
protected void Setup(ApiCallerOptions options)
{
    Client = CreateHttpClient();
    Options = options;

    ServicePointManager.FindServicePoint(new Uri(options.BaseAddress))
        .ConnectionLeaseTimeout = options.ConnectionLeaseTimeout;

    Client.BaseAddress = new Uri(options.BaseAddress);
    Client.DefaultRequestHeaders.Accept.Clear();
    Client.DefaultRequestHeaders.Accept.Add(options.ContentType);
    Client.DefaultRequestHeaders.ConnectionClose = false;
}
RFC 2616, section 14.10 (Connection): https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
Setting ConnectionClose to true is useful for debugging issues related to connections that are kept alive, or when you simply do not want a persistent connection, whatever the reason. Those are pretty much the only use cases for explicitly setting it to true on an HttpClient instance.
To answer your question: it should just stay false; otherwise performance will be degraded, since every request has to open a fresh connection.
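If you do make it configurable, the wiring is a one-liner; a minimal sketch, assuming a hypothetical bool option named CloseConnections on ApiCallerOptions:
// In Setup(): let callers opt in to "Connection: close" via configuration
Client.DefaultRequestHeaders.ConnectionClose = options.CloseConnections;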

Why would my C# app fail on this REST request but work fine through a browser?

I have a C# app and I am accessing some data over REST, so I pass in a URL and get a JSON payload back. I access a few different URLs programmatically and they all work fine using the code below, except one call.
Here is my code:
var url = "http://theRESTURL.com/rest/API/myRequest";
var results = GetHTTPClient().GetStringAsync(url).Result;
var restResponse = new RestSharp.RestResponse();
restResponse.Content = results;
var _deserializer = new JsonDeserializer();
where GetHTTPClient() is defined as follows:
private HttpClient GetHTTPClient()
{
    var httpClient = new HttpClient(new HttpClientHandler()
    {
        Credentials = new System.Net.NetworkCredential("usr", "pwd"),
        UseDefaultCredentials = false,
        UseProxy = true,
        Proxy = new WebProxy(new Uri("http://myproxy.com:8080")),
        AllowAutoRedirect = false
    });
    httpClient.Timeout = new TimeSpan(0, 0, 3500); // 3,500 seconds, about 58 minutes
    return httpClient;
}
So as I said, the above code works fine for a bunch of different requests, but for one particular request I am getting an exception inside the
.GetStringAsync(url).Result
call with the error:
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host
I get that error after waiting about 10 minutes. What is interesting is that if I put that same URL that isn't working directly into Internet Explorer, I do get the JSON payload back (after about 10 minutes as well). So I am confused about why:
it works fine directly from the browser but fails when using the code above, and
it fails on this one request while other requests using the same code work fine programmatically.
Any suggestions for things to try or things I should ask the owner of the server to check out on their end to help diagnose what is going on?
I think the timeout is not the issue here: the error says the connection was closed remotely, and the timeout you set is about 58 minutes, which is more than enough compared to your other figures.
Have you tried looking at the requests themselves? You might want to edit your question with those results.
If you remove the line httpClient.Timeout = new TimeSpan(0, 0, 3500); the issue should be solved, but if the request genuinely lasts 20 minutes you will still have to wait the whole time.
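One way to compare the failing request with the working ones is to log exactly what goes over the wire. A minimal sketch using a DelegatingHandler (the class name is illustrative):
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class LoggingHandler : DelegatingHandler
{
    public LoggingHandler(HttpMessageHandler inner) : base(inner) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Console.WriteLine(request);  // method, URI and request headers
        var response = await base.SendAsync(request, cancellationToken);
        Console.WriteLine(response); // status code and response headers
        return response;
    }
}

// Usage: wrap the existing handler inside GetHTTPClient():
// var httpClient = new HttpClient(new LoggingHandler(new HttpClientHandler { /* as above */ }));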

Pipelining using HttpWebRequest in C#

I am working on an HTTP client that should ideally pipeline requests when needed. The requests will also be sent over specific network interfaces (the client is multihomed).
Asynchronous sockets are used, and in order to make a request I use the following code:
Uri url = new Uri(reqUrl);
ServicePoint sp = ServicePointManager.FindServicePoint(url);
sp.BindIPEndPointDelegate = new BindIPEndPoint(localBind); // bind outgoing connections to a specific local interface
pseg.req = (HttpWebRequest)HttpWebRequest.Create(url);
pseg.req.AddRange("bytes", psegStart, psegStart + psegLength - 1); // request one byte-range segment
pseg.req.KeepAlive = true;
pseg.req.Pipelined = true;
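(For reference, localBind above is a BindIPEndPoint delegate; a minimal sketch, with a placeholder local address:)
private static IPEndPoint localBind(ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
{
    // Bind outgoing connections to one specific local interface;
    // the address is a placeholder for one of the client's interfaces.
    return new IPEndPoint(IPAddress.Parse("192.168.0.10"), 0); // port 0 = any local port
}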
For each request made using this code, a separate connection to the server is opened and the segments are received in parallel. That is OK, but it is not the behavior I want: I want the requests to be pipelined, with the replies arriving sequentially. If I use locking or set the connection limit to 1, the request for segment #2 is not sent until segment #1 has been fully received.
Is there any way to achieve what I want and still use the HttpWebRequest/Response classes? Or will I have to drop down to sockets?

How to force IIS to send response headers without sending response body and closing connection

I am trying to stream dynamically generated data to a client over HTTP using IIS. The connection has to remain open for a long period of time, and the server will send periodic status updates to the client while it performs a time-consuming operation.
This MUST all be handled within ONE request, but I am using a WebClient.OpenRead() stream, which cannot be opened until the headers are sent.
How can I force IIS to send headers to the client, and later send a response body?
This behaviour is normally achievable by setting KeepAlive to true and setting the Expect header to "100-continue". By doing this, the server will send the headers first, with status code 100 (Continue).
I am not sure if this is possible using WebClient.
Use HttpWebRequest instead, so you can set the values above. In fact, WebClient does nothing magical beyond issuing a GET for the data. Here is the code behind OpenRead, decompiled with Reflector:
try
{
    request = this.m_WebRequest = this.GetWebRequest(this.GetUri(address));
    Stream responseStream = (this.m_WebResponse = this.GetWebResponse(request)).GetResponseStream();
    if (Logging.On)
    {
        Logging.Exit(Logging.Web, this, "OpenRead", responseStream);
    }
    stream2 = responseStream;
}
catch (Exception exception)
{
    // ... (decompiled code truncated)
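A minimal sketch of the HttpWebRequest route suggested above (the URL is a placeholder, and note that Expect100Continue only affects requests that carry a body):
var request = (HttpWebRequest)WebRequest.Create("http://example.com/long-operation");
request.KeepAlive = true;
// Ask for interim "100 Continue" headers before the body is sent
request.ServicePoint.Expect100Continue = true;

using (var response = (HttpWebResponse)request.GetResponse())
using (var reader = new StreamReader(response.GetResponseStream()))
{
    string line;
    while ((line = reader.ReadLine()) != null)
    {
        // handle each periodic status update as it arrives
    }
}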

System.Net.WebClient unreasonably slow

When using the System.Net.WebClient.DownloadData() method I'm getting an unreasonably slow response time.
When fetching a URL using the WebClient class in .NET, it takes around 10 seconds before I get a response, while the same page is fetched by my browser in under 1 second.
And this is with data that's 0.5kB or smaller in size.
The request involves POST/GET parameters and a user agent header if perhaps that could cause problems.
I haven't (yet) tried whether other ways of downloading data in .NET give me the same problem, but I suspect I might get similar results. (I've always had a feeling web requests in .NET are unusually slow...)
What could be the cause of this?
Edit:
I tried doing the exact same thing using System.Net.HttpWebRequest instead, with the following method, and all requests finish in under 1 second.
public static string DownloadText(string url)
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    var response = (HttpWebResponse)request.GetResponse();
    using (var reader = new StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
While this (old) method using System.Net.WebClient takes 15-30 seconds per request to finish:
public static string DownloadText(string url)
{
    var client = new WebClient();
    byte[] data = client.DownloadData(url);
    return client.Encoding.GetString(data);
}
I had that problem with WebRequest. Try setting Proxy = null;
WebClient wc = new WebClient();
wc.Proxy = null;
By default, WebClient and WebRequest try to determine what proxy to use from the IE settings, which can sometimes cause a delay of around 5 seconds before the actual request is sent.
This applies to all classes that use WebRequest, including WCF services with HTTP binding.
In general you can use this static code at application startup:
WebRequest.DefaultWebProxy = null;
Download Wireshark ( http://www.wireshark.org/ ), capture the network packets, and filter on the "http" packets. That should give you the answer right away.
There is nothing inherently slow about .NET web requests; that code should be fine. I regularly use WebClient and it works very quickly.
How big is the payload in each direction? Silly question maybe, but is it simply a bandwidth limitation?
IMO the most likely explanation is that your web site has spun down, and when you hit the URL the site is slow to respond; that is not the client's fault. It is also possible that DNS is slow for some reason (in which case you could hard-code the IP into your "hosts" file), or that some proxy server in the middle is slow.
If the web-site isn't yours, it is also possible that they are detecting atypical usage and deliberately injecting a delay to annoy scrapers.
I would grab Fiddler (a free, simple web inspector) and look at the timings.
WebClient may be slow on some workstations when automatic proxy detection is checked in the IE settings (Connections tab -> LAN Settings).
Setting WebRequest.DefaultWebProxy = null; or client.Proxy = null didn't do anything for me, using Xamarin on iOS.
I did two things to fix this:
I wrote a downloadString function which does not go through WebClient/WebRequest:
public static async Task<string> FnDownloadStringWithoutWebRequest(string url)
{
    using (var client = new HttpClient())
    {
        // Define headers
        client.DefaultRequestHeaders.Accept.Clear();
        client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));

        var response = await client.GetAsync(url);
        if (response.IsSuccessStatusCode)
        {
            string responseContent = await response.Content.ReadAsStringAsync();
            //dynamic json = Newtonsoft.Json.JsonConvert.DeserializeObject(responseContent);
            return responseContent;
        }

        Logger.DefaultLogger.LogError(LogLevel.NORMAL, "GoogleLoginManager.FnDownloadString", "error fetching string, code: " + response.StatusCode);
        return "";
    }
}
This was, however, still slow with the Managed HttpClient implementation.
So secondly, in Visual Studio Community for Mac, right-click your project in the solution -> Options -> set the HttpClient implementation to NSUrlSession instead of Managed.
(Screenshot: setting the HttpClient implementation to NSUrlSession instead of Managed)
The Managed handler is not fully integrated into iOS: it doesn't support TLS 1.2, and thus does not support the ATS standards that are the default in iOS 9+; see here:
https://learn.microsoft.com/en-us/xamarin/ios/app-fundamentals/ats
With both of these changes, string downloads are always very fast (well under 1 second). Without them, downloadString took over a minute on every second or third try.
Just FYI, there's one more thing you could try, though it shouldn't be necessary anymore:
//var authgoogle = new OAuth2Authenticator(...);
//authgoogle.Completed...

if (authgoogle.IsUsingNativeUI)
{
    // Step 2.1: create the login UI
    // The cast is necessary in order to access the SFSafariViewController API
    SafariServices.SFSafariViewController c = null;
    c = (SafariServices.SFSafariViewController)ui_object;
    PresentViewController(c, true, null);
}
else
{
    PresentViewController(ui_object, true, null);
}
Though in my experience, you probably don't need the SafariController.
Another alternative (also free) to Wireshark is Microsoft Network Monitor.
What browser are you using to test?
Try using a default IE install. System.Net.WebClient uses the local IE settings (proxy, etc.) -- maybe those have been mangled?
Another cause of extremely slow WebClient downloads is the destination medium you are downloading to. A slow device like a USB key can massively impact download speed: to my HDD I could download at 6 MB/s, but to my USB key only 700 kB/s, even though I can copy files to that key at 5 MB/s from another drive. wget shows the same behavior. This is also reported here:
https://superuser.com/questions/413750/why-is-downloading-over-usb-so-slow
So if this is your scenario, an alternative solution is to download to the HDD first and then copy the file to the slow medium once the download completes.
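A minimal sketch of that workaround (URL and paths are placeholders):
var tempPath = Path.Combine(Path.GetTempPath(), "download.bin");
using (var client = new WebClient())
{
    // Download to the fast local disk first...
    client.DownloadFile("http://example.com/file.bin", tempPath);
}
// ...then copy to the slow destination (e.g. a USB key) in one go
File.Copy(tempPath, @"E:\download.bin", overwrite: true);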
