How can I specify a connection-only timeout when executing web requests? - c#

I'm currently using code that makes HTTP requests using the HttpClient class. Although you can specify a timeout for the request, the value applies to the entirety of the request (which includes resolving the host name, establishing a connection, sending the request and receiving the response).
I need a way to make requests fail fast if they cannot resolve the name or establish a connection, but I also sometimes need to receive large amounts of data, so cannot just reduce the timeout.
Is there a way to achieve this using either a built-in (BCL) class or an alternative HTTP client stack?
I've looked briefly at RestSharp and ServiceStack, but neither seems to provide a timeout just for the connection part (but do correct me if I am wrong).

You can use a Timer to abort the request if the connection takes too much time. Add an event handler for when the timer elapses. You can use something like this:
static WebRequest request;

private static void sendAndReceive()
{
    // The request itself gets a big timeout, for receiving large amounts of data
    request = WebRequest.Create("http://localhost:8081/index/");
    request.Timeout = 100000;

    // The connection timeout
    var ConnectionTimeoutTime = 100;
    Timer timer = new Timer(ConnectionTimeoutTime);
    timer.Elapsed += connectionTimeout;
    timer.Enabled = true;

    Console.WriteLine("Connecting...");
    try
    {
        using (var stream = request.GetRequestStream())
        {
            Console.WriteLine("Connection succeeded!");
            timer.Enabled = false;
            /*
             * Sending data ...
             */
            System.Threading.Thread.Sleep(1000000);
        }
        using (var response = (HttpWebResponse)request.GetResponse())
        {
            /*
             * Receiving data ...
             */
        }
    }
    catch (WebException e)
    {
        if (e.Status == WebExceptionStatus.RequestCanceled)
            Console.WriteLine("Connection canceled (timeout)");
        else if (e.Status == WebExceptionStatus.ConnectFailure)
            Console.WriteLine("Can't connect to server");
        else if (e.Status == WebExceptionStatus.Timeout)
            Console.WriteLine("Timeout");
        else
            Console.WriteLine("Error");
    }
}

static void connectionTimeout(object sender, System.Timers.ElapsedEventArgs e)
{
    Console.WriteLine("Connection failed...");
    Timer timer = (Timer)sender;
    timer.Enabled = false;
    request.Abort();
}
The times here are just examples; adjust them to your needs.

.NET's HttpWebRequest exposes two properties for specifying a timeout when communicating with a remote HTTP server:
Timeout - Gets or sets the time-out value in milliseconds for the GetResponse and GetRequestStream methods.
ReadWriteTimeout - The number of milliseconds before the writing or reading times out. The default value is 300,000 milliseconds (5 minutes).
The Timeout property is the closest to what you're after, but it does suggest that regardless of the Timeout value the DNS resolution may take up to 15 seconds:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
One way to enforce a lower timeout than 15s for DNS lookups is to look up the hostname yourself, but many solutions require P/Invoke to specify low-level settings.
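As a rough illustration of the "resolve it yourself" idea (this is not from the original answer, and the helper name and 2-second value are assumptions), you could race Dns.GetHostEntryAsync against a short delay and only proceed with the request if resolution wins:

using System;
using System.Net;
using System.Threading.Tasks;

static async Task<IPHostEntry> ResolveWithTimeoutAsync(string host, TimeSpan timeout)
{
    // Race the DNS lookup against a delay; whichever finishes first wins.
    var resolveTask = Dns.GetHostEntryAsync(host);
    var winner = await Task.WhenAny(resolveTask, Task.Delay(timeout));
    if (winner != resolveTask)
        throw new TimeoutException("DNS resolution timed out");
    return await resolveTask; // already completed at this point
}

// Usage (placeholder host): fail fast on DNS, then issue the real request with a long timeout.
// var entry = await ResolveWithTimeoutAsync("example.com", TimeSpan.FromSeconds(2));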
Specifying timeouts in ServiceStack HTTP Clients
The underlying HttpWebRequest Timeout and ReadWriteTimeout properties can also be specified in ServiceStack's high-level HTTP Clients, i.e. in C# Service Clients with:
var client = new JsonServiceClient(BaseUri) {
    Timeout = TimeSpan.FromSeconds(30)
};
Or using ServiceStack's HTTP Utils with:
var timeoutMs = 30 * 1000;
var response = url.GetStringFromUrl(requestFilter: req =>
req.Timeout = timeoutMs);

I believe RestSharp does have timeout properties in RestClient.
var request = new RestRequest();
var client = new RestClient
{
    Timeout = timeout,                  // Timeout in milliseconds to use for requests made by this client instance
    ReadWriteTimeout = readWriteTimeout // The number of milliseconds before the writing or reading times out
};
var response = client.Execute(request);
// Handle response

You're right, you are unable to set this specific timeout.
I don't have enough information about how those libraries were built, but for the purpose they are meant for, I believe they fit: someone wants to make a request and set a timeout for the whole thing.
I suggest you take a different approach.
You are trying to do two different things here that HttpWebRequest does at once:
Find the host / establish a connection;
Transfer data.
You could try to separate this into two stages:
Use the Ping class (check this out) to try to reach your host, with a short timeout on the ping (see the sketch below);
Use the HttpWebRequest, with the long timeout you need for the data transfer, only if the ping succeeds.
This process should not slow everything down, since part of resolving names/routes is already done in the first stage, so that work is not wasted.
There's a drawback to this solution: your remote host must respond to pings.
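A minimal sketch of that two-stage idea, assuming the host accepts ICMP echo requests (the host name and timeout values are placeholders):

using System.Net;
using System.Net.NetworkInformation;

// Stage 1: probe the host with a short timeout.
using (var ping = new Ping())
{
    PingReply reply = ping.Send("myserver.example.com", 500); // 500 ms timeout
    if (reply.Status != IPStatus.Success)
    {
        // Host unreachable - fail fast without starting the HTTP request.
        return;
    }
}

// Stage 2: the host is reachable, so issue the real request with a long timeout.
var request = (HttpWebRequest)WebRequest.Create("http://myserver.example.com/index/");
request.Timeout = 100000;
using (var response = request.GetResponse())
{
    // receive the large payload...
}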
Hope this helps.

I used this method to check whether a connection can be established. This, however, doesn't guarantee that the connection can also be established by the subsequent HttpWebRequest call.
private static bool CanConnect(string machine)
{
    using (TcpClient client = new TcpClient())
    {
        // Check if we can connect within 50 ms
        if (!client.ConnectAsync(machine, 443).Wait(50))
        {
            return false;
        }
    }
    return true;
}

If timeouts do not suit your needs, don't use them. You can use a wait handle that waits for the operation to complete; when you get a response, set the handle and proceed. That way you get short requests when failing and long requests when receiving large amounts of data.
Something like this, maybe:
var handler = new ManualResetEvent(false);

var request = (HttpWebRequest)WebRequest.Create(url);
// initialize parameters such as method here

request.BeginGetResponse(new AsyncCallback(delegate(IAsyncResult result)
{
    try
    {
        var req = (HttpWebRequest)result.AsyncState;
        using (var response = (HttpWebResponse)req.EndGetResponse(result))
        using (var stream = response.GetResponseStream())
        {
            // success
        }
    }
    catch (Exception)
    {
        // fail operations go here
    }
    finally
    {
        handler.Set(); // signal completion, whether we succeeded or failed
    }
}), request);

handler.WaitOne(); // wait for the operation to complete

What about asking for only the header at first, and then requesting the full resource if that succeeds?
webRequest.Method = "HEAD";
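A rough sketch of that approach (not from the original answer; the URL is taken from the example above and the timeout values are placeholder assumptions): probe with a HEAD request and a short timeout, then download with a generous one.

// Probe: HEAD request with a short timeout to fail fast on connection problems.
var probe = (HttpWebRequest)WebRequest.Create("http://localhost:8081/index/");
probe.Method = "HEAD";
probe.Timeout = 3000;
using (probe.GetResponse()) { /* server reachable */ }

// Download: normal GET with a long timeout for the large payload.
var download = (HttpWebRequest)WebRequest.Create("http://localhost:8081/index/");
download.Timeout = 100000;
using (var response = download.GetResponse())
using (var stream = response.GetResponseStream())
{
    // read the data...
}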

Related

C# RabbitMQ wait for one message for specified timeout?

Solutions in RabbitMQ Wait for a message with a timeout and Wait for a single RabbitMQ message with a timeout don't seem to work, because there is no next-delivery method in the official C# library and QueueingBasicConsumer is deprecated, so it just throws NotSupportedException everywhere.
How I can wait for single message from queue for specified timeout?
PS
It can be done through Basic.Get(), yes, but it is a bad solution to poll for messages at a specified interval (excess traffic, excess CPU).
Update
EventingBasicConsumer, by implementation, does NOT support immediate cancellation. Even if you call BasicCancel at some point, and even if you specify prefetch through BasicQos, it will still fetch in frames, and those frames can contain multiple messages. So it is not good for single-task execution. Don't bother - it just doesn't work with single messages.
There are many ways to do this. For example you can use EventingBasicConsumer together with ManualResetEvent, like this (that's just for demonstration purposes - better use one of the methods below):
var factory = new ConnectionFactory();
using (var connection = factory.CreateConnection()) {
    using (var channel = connection.CreateModel()) {
        // setup signal
        using (var signal = new ManualResetEvent(false)) {
            var consumer = new EventingBasicConsumer(channel);
            byte[] messageBody = null;
            consumer.Received += (sender, args) => {
                messageBody = args.Body;
                // process your message or store for later
                // set signal
                signal.Set();
            };
            // start consuming
            channel.BasicConsume("your.queue", false, consumer);
            // wait until message is received or timeout reached
            bool timeout = !signal.WaitOne(TimeSpan.FromSeconds(10));
            // cancel subscription
            channel.BasicCancel(consumer.ConsumerTag);
            if (timeout) {
                // timeout reached - do what you need in this case
                throw new Exception("timeout");
            }
            // at this point messageBody is received
        }
    }
}
As you stated in the comments, if you expect multiple messages on the same queue, it's not the best way. Well, it's not the best way in any case; I included it just to demonstrate the use of ManualResetEvent in case the library itself does not provide timeout support.
If you are doing RPC (remote procedure call, request-reply) - you can use SimpleRpcClient together with SimpleRpcServer on server side. Client side will look like this:
var client = new SimpleRpcClient(channel, "your.queue");
client.TimeoutMilliseconds = 10 * 1000;
client.TimedOut += (sender, args) => {
// do something on timeout
};
var reply = client.Call(myMessage); // will return reply or null if timeout reached
An even simpler way: use the basic Subscription class (it uses the same EventingBasicConsumer internally, but supports timeouts so you don't need to implement them yourself), like this:
var sub = new Subscription(channel, "your.queue");
BasicDeliverEventArgs reply;
if (!sub.Next(10 * 1000, out reply)) {
// timeout
}

Getting HTML response fails repeatedly after first failure

I have a program which gets the HTML code of ~500 web pages every 5 minutes.
It runs correctly until the first failure (unable to download the source within 6 seconds).
After that, all threads fail.
And if I restart the program, it again runs correctly until ...
Where am I wrong? What should I do to make it better?
This function runs every 5 minutes:
foreach (Company company in companies)
{
    string link = company.GetLink();
    Thread t = new Thread(() => F(company, link));
    t.Start();
    if (!t.Join(TimeSpan.FromSeconds(6)))
    {
        Debug.WriteLine(company.Name + " Fails");
        t.Abort();
    }
}
And this function downloads the HTML code:
private void F(Company company, string link)
{
    try
    {
        string htmlCode = GetInformationFromWeb.GetHtmlRequest(link);
        company.HtmlCode = htmlCode;
    }
    catch (Exception ex)
    {
    }
}
and this class:
public class GetInformationFromWeb
{
    public static string GetHtmlRequest(string url)
    {
        using (MyWebClient client = new MyWebClient())
        {
            client.Encoding = Encoding.UTF8;
            string htmlCode = client.DownloadString(url);
            return htmlCode;
        }
    }
}
And the web client class:
public class MyWebClient : WebClient
{
    protected override WebRequest GetWebRequest(Uri address)
    {
        HttpWebRequest request = base.GetWebRequest(address) as HttpWebRequest;
        request.AutomaticDecompression = DecompressionMethods.Deflate | DecompressionMethods.GZip;
        return request;
    }
}
If your foreach is looping over 500 companies and each one creates a new thread, your internet speed can become a bottleneck, you will hit timeouts over 6 seconds, and you will fail very often.
I suggest you try parallelism instead. Note MaxDegreeOfParallelism, which sets the maximum number of parallel executions. You can tune this to suit your needs.
Parallel.ForEach(companies, new ParallelOptions { MaxDegreeOfParallelism = 10 }, (company) =>
{
    try
    {
        string htmlCode = GetInformationFromWeb.GetHtmlRequest(company.GetLink());
        company.HtmlCode = htmlCode;
    }
    catch (Exception ex)
    {
        // ignore or process exception
    }
});
I have four basic suggestions:
Use HttpClient instead of the obsolete WebClient. HttpClient handles asynchronous operations natively and has far more flexibility to take advantage of. You can even read the downloaded content to strings/streams on a different thread, since you can configure await not to schedule your continuations back. You can also configure the client to give up after 6 seconds and raise TaskCanceledException if that is exceeded (see the sketch after this list).
Avoid swallowing exceptions (like you do in your F function), as it breaks debugging and obscures the real cause of problems. A correctly written program will never raise an exception during normal operation.
You are using threads in a useless way, in which they do not even overlap; they just wait for each other to start, because you block the calling loop after each thread's start. In .NET it would be better to do multitasking with Tasks (for example by calling Task.Run(async delegate() { await yourTask(); }), or AsyncContext.Run(...) if you need UI access), and it won't block anything.
The whole GetInformationFromWeb class is pointless at the moment, and you are also spawning multiple client objects pointlessly, since one HTTP client object can handle multiple requests. If you used HttpClient, even without additional bloat, you would just instantiate it once as a static global variable with all the necessary configuration and then call it from anywhere with as little code as client.GetStringAsync(uri).
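A minimal sketch combining points 1 and 4 (the class and method names are illustrative, not from the question): one shared HttpClient with a 6-second timeout, reused for every download.

using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class HtmlDownloader
{
    // One shared client for all requests; configure it once.
    private static readonly HttpClient Client = new HttpClient
    {
        Timeout = TimeSpan.FromSeconds(6) // fail fast if the server is unreachable or slow
    };

    public static async Task<string> GetHtmlAsync(string url)
    {
        try
        {
            return await Client.GetStringAsync(url);
        }
        catch (TaskCanceledException)
        {
            // the 6-second timeout was exceeded
            return null;
        }
    }
}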
OT: Is it some kind of an academic project?

.NET HttpClient.PostAsync() slow after 3 requests

I am using the .NET 4.5 HttpClient class to make a POST request to a server a number of times. The first 3 calls run quickly, but the fourth time a call to await client.PostAsync(...) is made, it hangs for several seconds before returning the expected response.
using (HttpClient client = new HttpClient())
{
    // Prepare query
    StringBuilder queryBuilder = new StringBuilder();
    queryBuilder.Append("?arg=value");

    // Send query
    using (var result = await client.PostAsync(BaseUrl + queryBuilder.ToString(),
        new StreamContent(streamData)))
    {
        Stream stream = await result.Content.ReadAsStreamAsync();
        return new MyResult(stream);
    }
}
The server code is shown below:
HttpListener listener;

void Run()
{
    listener.Start();
    ThreadPool.QueueUserWorkItem((o) =>
    {
        while (listener.IsListening)
        {
            ThreadPool.QueueUserWorkItem((c) =>
            {
                var context = c as HttpListenerContext;
                try
                {
                    // Handle request
                }
                finally
                {
                    // Always close the stream
                    context.Response.OutputStream.Close();
                }
            }, listener.GetContext());
        }
    });
}
Inserting a debug statement at // Handle request shows that the server code doesn't seem to receive the request as soon as it is sent.
I have already investigated whether it could be a problem with the client not closing the response, meaning that the number of connections the ServicePoint provider allows could be reached. However, I have tried increasing ServicePointManager.MaxServicePoints but this has no effect at all.
I also found this similar question:
.NET HttpClient hangs after several requests (unless Fiddler is active)
I don't believe this is the problem with my code - even changing my code to exactly what is given there didn't fix the problem.
The problem was that there were too many Task instances scheduled to run.
Changing some of the Task.Factory.StartNew calls in my program, for tasks which ran for a long time, to use the TaskCreationOptions.LongRunning option fixed this. It appears the task scheduler was waiting for other tasks to finish before it scheduled the request to the server.
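For illustration, a hedged sketch of that change (DoLongRunningWork is a placeholder, not from the question): LongRunning hints the scheduler to give the task its own thread instead of tying up a thread-pool worker.

// Before: long-running work competes with short tasks for thread-pool threads.
// Task.Factory.StartNew(() => DoLongRunningWork());

// After: hint the scheduler that this task runs for a long time.
Task.Factory.StartNew(() => DoLongRunningWork(), TaskCreationOptions.LongRunning);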

Multiple WebRequests to the same resource blocking each other?

So I have multiple threads trying to get a response from a resource, but for some reason, even though they are running in separate threads, each response will only return when all the others are either still waiting or closed. I tried using
WebResponse response = await request.GetResponseAsync(); but first of all that seems redundant to me, since I'm already running separate threads, and also Visual Studio tells me:
The 'await' operator can only be used within an async method. Consider marking this method with the 'async' modifier and changing its return type to 'Task'.
What's going on here?
EDIT (Code):
Start method (called from a single thread)
public void Start()
{
    if (!Started)
    {
        ByteAt = 0;

        request = (HttpWebRequest)WebRequest.Create(URL);
        request.Method = "GET";
        request.AddRange(ByteStart, ByteStart + ByteLength);

        downloadThread = new Thread(DownloadThreadWorker);
        downloadThread.Start();

        Started = true;
        Paused = false;
    }
}
Download threads:
private void DownloadThreadWorker()
{
    WebResponse response = request.GetResponse();
    if (response != null)
    {
        if (!CheckRange(response))
            Abort(String.Format("Multi part downloads not supported (Requested length: {0}, response length: {1})", ByteLength, response.ContentLength));
        else
        { ...
Per the HTTP 1.1 RFC, a client should make no more than 2 concurrent connections to the same host. Not sure about the latest versions of IE, but previously IE honored this limitation (it could be changed via a registry key) and allowed no more than 2 connections to the same host at any one time. This could be the limitation you're experiencing...
Or try setting ServicePointManager.DefaultConnectionLimit above 2.
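For example, raising the default limit before creating any requests (the value 10 here is arbitrary):

// Applies to all ServicePoints created after this point.
System.Net.ServicePointManager.DefaultConnectionLimit = 10;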

Killing HttpWebRequest object using Thread.Abort

All, I am trying to cancel two concurrent HttpWebRequests using a method similar to the code below (shown in pseudo-ish C#).
The Main method creates two threads which create HttpWebRequests. If the user wishes to, they may abort the requests by clicking a button which then calls the Abort method.
private Thread first;
private Thread second;
private string uri = "http://somewhere";

public void Main()
{
    first = new Thread(GetFirst);
    first.Start();
    second = new Thread(GetSecond);
    second.Start();
    // Some block on threads... like the Countdown class
    countdown.Wait();
}

public void Abort()
{
    try
    {
        first.Abort();
    }
    catch { /* do nothing */ }
    try
    {
        second.Abort();
    }
    catch { /* do nothing */ }
}

private void GetFirst(object state)
{
    MyHandler h = new MyHandler(uri);
    h.RunRequest();
}

private void GetSecond(object state)
{
    MyHandler h = new MyHandler(uri);
    h.RunRequest();
}
The first thread gets interrupted by a SocketException:
A blocking operation was interrupted by a call to WSACancelBlockingCall
The second thread hangs on GetResponse().
How can I abort both of these requests in a way that the web server knows that the connection has been aborted?, and/or, Is there a better way to do this?
UPDATE
As suggested, a good alternative would be to use BeginGetResponse. However, I don't have access to the HttpWebRequest object - it is abstracted in the MyHandler class. I have modified the question to show this.
public class MyHandler
{
    public void RunRequest(string uri)
    {
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(uri);
        HttpWebResponse res = (HttpWebResponse)req.GetResponse();
    }
}
Use BeginGetResponse to initiate the call and then use the Abort method on the class to cancel it.
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest_methods.aspx
I believe Abort will not work with the synchronous GetResponse:
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.abort.aspx
If you have to stick with the synchronous version, all you can do to break out of the call is abort the thread. To give up waiting, you can specify a timeout:
http://msdn.microsoft.com/en-us/library/system.net.httpwebrequest.timeout.aspx
If you need to kill the process, I would argue for launching it inside a new AppDomain and dropping the AppDomain when you want to kill the request, instead of aborting a thread inside your main process.
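A hedged sketch of the BeginGetResponse/Abort idea adapted to the MyHandler wrapper (the Cancel method and the field name are illustrative additions, not from the question):

using System;
using System.Net;

public class MyHandler
{
    private HttpWebRequest req;

    public void RunRequest(string uri)
    {
        req = (HttpWebRequest)WebRequest.Create(uri);
        req.BeginGetResponse(OnResponse, null);
    }

    public void Cancel()
    {
        // Abort() completes the pending BeginGetResponse with a WebException
        // whose Status is WebExceptionStatus.RequestCanceled.
        if (req != null)
            req.Abort();
    }

    private void OnResponse(IAsyncResult ar)
    {
        try
        {
            using (var res = (HttpWebResponse)req.EndGetResponse(ar))
            {
                // read the response...
            }
        }
        catch (WebException)
        {
            // the request was aborted or failed
        }
    }
}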
A ThreadAbortException is highly non-specific. HttpWebRequest already supports a way to cancel the request in a predictable way with the Abort() method. I recommend you use it instead.
Note that you'll still get a WebException on the thread, designed to tell you that the request got aborted externally. Be prepared to catch it.
This might be because of .NET's connection pooling.
Every WebRequest-instance has a ServicePoint that describes the target you want to communicate with (server address, port, protocol,...). These ServicePoints will be reused, so if you create 2 WebRequests with the same server address, port and protocol they will share the same ServicePoint instance.
When you call WebRequest.GetResponse() it uses the connection pool provided by the ServicePoint to create connections. If you then kill the thread with Thread.Abort() it will NOT return the connection to the ServicePoint's connection pool, so the ServicePoint thinks this connection is still in use.
If the connection limit of the ServicePoint is reached (default: 2) it will not create any new connections, but instead wait for one of the open connections to be returned.
You can increase the connection limit like this:
HttpWebRequest httpRequest = (HttpWebRequest)WebRequest.Create(url);
httpRequest.ServicePoint.ConnectionLimit = 10;
or you can use the default connection limit, so every new ServicePoint will use this limit:
System.Net.ServicePointManager.DefaultConnectionLimit = 10;
You can also use ServicePoint.CurrentConnections to get the number of open connections.
You could use the following method to abort your thread:
private Thread thread;
private Uri uri;

void StartThread()
{
    thread = new Thread(new ThreadStart(() =>
    {
        WebRequest request = WebRequest.Create(uri);
        request.ConnectionGroupName = "SomeConnectionGroup";
        var response = request.GetResponse();
        //...
    }));
    thread.Start();
}

void AbortThread()
{
    thread.Abort();
    ServicePointManager.FindServicePoint(uri).CloseConnectionGroup("SomeConnectionGroup");
}
Remember that ALL connections to the same server (or ServicePoint) that have the same connection group name will be killed. If you have multiple concurrent threads you might want to assign unique connection group names to them.
