Issues with C# HttpClient Timeout

I'm making a simple GetAsync() request with an HttpClient object. When I retrieve a small amount of data this way, the request works fine. When I try to retrieve a large amount of data, the request times out.
I tried to set the HttpClient's Timeout property to 5 minutes like so:
var client = new HttpClient();
client.Timeout = new TimeSpan(0, 5, 0);
before making the call, but it still times out after ~60 seconds.
I've read in several answers to use HttpWebRequest to access more granular timeout options, but I'm using .NET 4.5.2 and references to HttpWebRequest won't compile.
How can I increase the read/write timeout of the HttpClient?
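For context, a minimal sketch of how a longer timeout is normally applied (the class, method, and URL below are placeholders, not from the question); note that HttpClient.Timeout has to be assigned before the instance sends its first request, otherwise it throws an InvalidOperationException:
using System;
using System.Net.Http;
using System.Threading.Tasks;

static class LargeDownloadSketch
{
    // Timeout must be set before the client's first request.
    private static readonly HttpClient _client = new HttpClient
    {
        Timeout = TimeSpan.FromMinutes(5)
    };

    public static async Task<string> GetLargeResourceAsync()
    {
        // Placeholder URL for the large download.
        var response = await _client.GetAsync("http://example.com/large-resource");
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadAsStringAsync();
    }
}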

Related

In .NET, failure to retrieve HTTP resource from W3C web site

Retrieving the resource at http://www.w3.org/TR/xmlschema11-1/XMLSchema.xsd takes around 10 seconds using the following mechanisms:
web browser
curl
Java URL.openConnection()
It's possible that the W3C site is applying some "throttling" - deliberately slowing the response to discourage bulk requests.
Trying to retrieve the same resource from a C# application on .NET, I get a timeout after about 60-70 seconds. I've tried a couple of different approaches, both with the same result:
System.Xml.XmlUrlResolver.GetEntity()
new WebClient().OpenRead(uri)
Anyone have any idea what's going on? Would another API, or some configuration options, solve the problem?
The problem is they are (probably) checking for a User-Agent string. If it's not present, they send you to purgatory. .NET's HTTP clients do not set this header by default.
So, give this a shot:
private static readonly HttpClient _client = new HttpClient();

public static async Task TestMe()
{
    using (var req = new HttpRequestMessage(HttpMethod.Get,
        "http://www.w3.org/TR/xmlschema11-1/XMLSchema.xsd"))
    {
        req.Headers.Add("user-agent",
            "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X)");

        using (var resp = await _client.SendAsync(req))
        {
            resp.EnsureSuccessStatusCode();
            var data = await resp.Content.ReadAsStringAsync();
        }
    }
}
No idea why they do this; maybe it's a bug in their back-end? (I sure wouldn't want to leave a socket open longer than it needs to be for no good reason.) The request still takes 10-15 seconds, but it's better than the 120+ second timeout.
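If every request sent through the shared client needs that header, one option (an addition, not part of the original answer) is to set it once on the client's default headers:
// Adds a default User-Agent so every request sent by _client includes it.
_client.DefaultRequestHeaders.TryAddWithoutValidation(
    "user-agent",
    "Mozilla/5.0 (iPhone; CPU iPhone OS 10_3 like Mac OS X)");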

.NET C# RESTSharp 10 Minute Timeout

I have embedded a browser control into a .NET form and compiled it as a Windows executable. The browser control displays our HTML5 image viewer. The application opens sockets so it can listen for "push" requests from various servers. This allows images to be pushed to individual users' desktops.
When an incoming image push request comes in, the application calls a REST service using RESTSharp to generate a token for the viewer to use to display the image.
As long as the requests arrive consistently, everything works great. If there is a lull (10 minutes seems to be the time frame), then the RESTSharp request times out. It is almost as though new instances of the RESTSharp objects are reusing the old ones as a .NET optimization.
Here is the RESTSharp code I am using:
private async Task<string> postJsonDataToUrl(string lpPostData) {
    IRestClient client = new RestClient(string.Format("{0}:{1}", MstrScsUrlBase, MintScsUrlPort));
    IRestRequest request = new RestRequest(string.Format("{0}{1}{2}", MstrScsUrlContextRoot, MstrScsUrlPath, SCS_GENERATE_TOKEN_URL_PATH));

    request.Timeout = 5000;
    request.ReadWriteTimeout = 5000;
    request.AddParameter("application/json", lpPostData, ParameterType.RequestBody);

    IRestResponse response = await postResultAsync(client, request);
    return response.Content;
} // postJsonDataToUrl

private static Task<IRestResponse> postResultAsync(IRestClient client, IRestRequest request) {
    return client.ExecutePostTaskAsync(request);
} // postResultAsync
This is the line where the time out occurs:
IRestResponse response = await postResultAsync(client, request);
I have tried rewriting this using .NET's HttpWebRequest and I get the same problem.
If I lengthen the RESTSharp timeouts, I am able to make calls to the server (using a different client) while the application is "timing out" so I know the server isn't the issue.
The initial version of the code did not use the async/await call structure; that was added as an attempt to get more information on the problem.
I am not getting any errors other than the REST timeout.
I have had limited success with forcing a Garbage Collection with this call:
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
Any thoughts?
It is possible you are hitting the connection limit for .NET apps, as described in the Microsoft docs:
"By default, an application using the HttpWebRequest class uses a maximum of two persistent connections to a given server, but you can set the maximum number of connections on a per-application basis."
(https://learn.microsoft.com/en-us/dotnet/framework/network-programming/managing-connections).
Closing the connections should help, or you might be able to increase that limit; how to do so is also covered in the doc.
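For illustration, a minimal sketch of raising that limit (the value 10 and the URL are arbitrary placeholders):
// Raise the per-server connection limit for the whole application
// (set this before any connections to the server are opened).
System.Net.ServicePointManager.DefaultConnectionLimit = 10;

// Or raise it for a single server only.
var sp = System.Net.ServicePointManager.FindServicePoint(new Uri("http://example.com"));
sp.ConnectionLimit = 10;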
I ended up putting
GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced);
in a timer that fired every 2 minutes. This completely solved my issue.
This is very surprising to me since my HttpWebRequest code was wrapped in "using" statements, so the resources should have been released properly. I can only conclude that .NET was optimizing the use of the class and was trying to reuse a stale instance rather than allowing me to create a new one from scratch.
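A minimal sketch of that workaround, assuming a System.Threading.Timer and the two-minute interval mentioned above:
// Forces a full garbage collection every two minutes, mirroring the workaround above.
private static readonly System.Threading.Timer _gcTimer =
    new System.Threading.Timer(
        _ => GC.Collect(GC.MaxGeneration, GCCollectionMode.Forced),
        null,
        TimeSpan.FromMinutes(2),
        TimeSpan.FromMinutes(2));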
A new way of doing things.
var body = @"{ ""key"": ""value"" }";
// HTTP package
var request = new RestRequest("https://localhost:5001/api/networkdevices", Method.Put);
request.AddHeader("Content-Type", "application/json");
request.AddHeader("Keep-Alive", "");// set "timeout=120" will work as well
request.Timeout = 120;
request.AddBody(body);
// HTTP call
var client = new RestClient();
RestResponse response = await client.ExecuteAsync(request);
Console.WriteLine(response.Content);

Timeout settings seem to have no effect

I am trying to set a timeout for a special request which will take a long time to process. Because of this, I am trying to set the timeout, like this:
client.RequestFilter = r => {
    r.Timeout = 1000000;
    r.ReadWriteTimeout = 1000000;
};
However, these settings seem to have no effect; the request still times out in about 30 seconds. Is there some hack I can use to set the timeout properly?
ETA: The response I'm receiving is a stream; I do it like this:
var stream = client.Send<Stream>(requestDto);
Is there a better way?
ServiceStack's Service Clients are just wrappers around HttpWebRequest, so your code ends up setting the HttpWebRequest Timeout and ReadWriteTimeout properties directly.
The Request Filter gives you direct access to the HttpWebRequest instance used, and setting the timeout properties should work as expected. Other than that, the only class that can modify the behavior of .NET's HttpWebRequest is System.Net.ServicePointManager, which lets you configure properties like DefaultConnectionLimit, DnsRefreshTimeout, etc., but there are no additional request timeout properties.
The alternative you can try is ServiceStack's JsonHttpClient, which is built on Microsoft's newer HttpClient library, so you may have better luck with it. It's recommended to use the async APIs, since the sync APIs just block on HttpClient's underlying async APIs.
For the API call itself, you should access the stream in a using block, e.g:
using (var stream = client.Send<Stream>(requestDto))
{
}
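A rough sketch of switching to the JsonHttpClient mentioned above, assuming a placeholder base URL and that requestDto supports a Stream response the same way (neither detail is from the original answer):
// Sketch only: JsonHttpClient comes from the ServiceStack.HttpClient package.
// "https://example.org" is a placeholder base URL.
var client = new JsonHttpClient("https://example.org");

// Prefer the async API; the sync API just blocks on it.
using (var stream = await client.SendAsync<Stream>(requestDto))
{
    // consume the stream here
}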

Why would my C# app fail on this REST request but work fine through a browser?

I have a C# app and I am accessing some data over REST, so I pass in a URL to get a JSON payload back. I access a few different URLs programmatically and they all work fine using the code below, except for one call.
Here is my code:
var url = "http://theRESTURL.com/rest/API/myRequest";
var results = GetHTTPClient().GetStringAsync(url).Result;
var restResponse = new RestSharp.RestResponse();
restResponse.Content = results;
var _deserializer = new JsonDeserializer();
where GetHTTPClient() is using this code below:
private HttpClient GetHTTPClient()
{
    var httpClient = new HttpClient(new HttpClientHandler()
    {
        Credentials = new System.Net.NetworkCredential("usr", "pwd"),
        UseDefaultCredentials = false,
        UseProxy = true,
        Proxy = new WebProxy(new Uri("http://myproxy.com:8080")),
        AllowAutoRedirect = false
    });

    httpClient.Timeout = new TimeSpan(0, 0, 3500);
    return httpClient;
}
So as I said, the above code works fine for a bunch of different requests, but for one particular request I am getting an exception inside the
.GetStringAsync(url).Result
call with the error:
Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
I get that error after waiting for about 10 minutes. What is interesting is that if I put that same URL that isn't working into Internet Explorer directly, I do get the JSON payload back (after about 10 minutes as well). So I am confused as to why:
It would work fine directly from the browser but fail when using the code above.
It fails on this one request but other requests using the same code work fine programmatically.
Any suggestions for things to try or things I should ask the owner of the server to check out on their end to help diagnose what is going on?
I think the timeout is not the issue here, as the error states that the connection has been closed remotely, and the timeout you set is about 58 minutes, which is more than enough compared to your other figures.
Have you tried looking at the requests themselves? You might want to edit your question with those results.
If you remove the line httpClient.Timeout = new TimeSpan(0, 0, 3500); the issue should be solved, but if the request takes 20 minutes you will have to wait the whole time.
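A related option (not part of the original answer) if you want to remove the client-side limit entirely:
// Disables HttpClient's own timeout; the server or an intermediate proxy
// can still close the connection on its own schedule.
httpClient.Timeout = System.Threading.Timeout.InfiniteTimeSpan;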

Pipelining using HttpWebRequest in C#

I am working on an HTTP client which should ideally pipeline requests when needed. Also, the requests will be sent on specific network interfaces (the client is multihomed).
Asynchronous sockets are used and in order to make a request, I use the following code:
Uri url = new Uri(reqUrl);
ServicePoint sp = ServicePointManager.FindServicePoint(url);
sp.BindIPEndPointDelegate = new BindIPEndPoint(localBind);
pseg.req = (HttpWebRequest)HttpWebRequest.Create(url);
pseg.req.AddRange("bytes", psegStart, psegStart + psegLength - 1);
pseg.req.KeepAlive = true;
pseg.req.Pipelined = true;
For each request made using this code, a separate connection to the server is opened and the segments are received in parallel. This is OK; however, it is not the behavior I want. I want the requests to be pipelined but the replies to arrive sequentially. If I use locking or set the connection limit to 1, the request for segment #2 is not sent until after segment #1 has been fully received.
Is there any way to achieve what I want and still use the HttpWebRequest/Response-classes? Or will I have to drop down to sockets?
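The question references a localBind delegate without showing it; a hypothetical implementation matching the BindIPEndPoint delegate signature (the interface address below is a placeholder) might look like this:
// Requires System.Net (IPEndPoint, IPAddress, ServicePoint).
// Hypothetical localBind: binds outgoing connections for this ServicePoint
// to a specific local interface; 192.0.2.10 is a placeholder address.
private static IPEndPoint localBind(ServicePoint servicePoint, IPEndPoint remoteEndPoint, int retryCount)
{
    return new IPEndPoint(IPAddress.Parse("192.0.2.10"), 0);
}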
