I'm calling a Web Service via SOAP. When the web service is down, we'd like the call to fail quickly rather than wait out the default timeout. I use the Timeout property for this:
service.Timeout = 5000;
which I think should time out the operation after 5 seconds. However, I see that the operation doesn't time out until after 23 seconds, the same amount of time as the default timeout (i.e., as if the line above were not present).
I see that the exception thrown is "The operation has timed out"; I just can't understand why it is not timing out in the time I've specified. What am I doing wrong?
Edit:
Here's the test program:
long start = Environment.TickCount;
try
{
    mdDqwsStatus.Service service = new mdDqwsStatus.Service();
    service.Timeout = 5000; // 5 sec
    string response = service.GetServiceStatus(customerID, pafID); // This calls the WS
    long end1 = Environment.TickCount - start; // I never hit this line
}
catch (Exception ex)
{
    long end2 = Environment.TickCount - start; // Failure goes to here
    OutputError(ex);
}
I set a breakpoint on the OutputError line and look at end2, and see 23000+ milliseconds.
Note that in the field, if the Web Service (which runs on IIS) has been stopped, the delay is quite short. However, if the machine is down or there are connectivity issues, the delay is 23 seconds (or sometimes even longer).
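For reference, a hard five-second cutoff can be approximated by racing the call against a delay. This is only a minimal sketch, assuming the project can use Task; the abandoned call keeps running in the background rather than being aborted:
Task<string> callTask = Task.Run(() => service.GetServiceStatus(customerID, pafID));
// Task.WhenAny completes as soon as either the WS call or the 5 second delay finishes
if (Task.WhenAny(callTask, Task.Delay(5000)).Result != callTask)
    throw new TimeoutException("No response within 5 seconds."); // the WS call itself is not aborted
string response = callTask.Result;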
Related
When fetching data from a domain that does not resolve, it takes more than 10 seconds to go into the catch block.
var client = new HttpClient();    // client and watch are assumed from context
var watch = Stopwatch.StartNew(); // (they are not declared in the original snippet)
try
{
    var resultDomain = client.GetAsync("http://nonexistent.nonexistent.nonexistent").Result.Content.ReadAsStringAsync().Result;
}
catch (Exception ex)
{
    // outputs:
    // The remote name could not be resolved: 'nonexistent.nonexistent.nonexistent'
    // 11,0632079
    Console.WriteLine(ex.InnerException.InnerException.Message);
    Console.WriteLine(watch.Elapsed.TotalSeconds);
}
Meanwhile, the command
nslookup nonexistent.nonexistent.nonexistent
finishes almost immediately with the notification that the domain doesn't exist. Is there a way to have HttpClient/WebClient/... behave as fast as nslookup does? What is .NET waiting for?
According to Wireshark, DNS responds immediately.
What about setting a timeout lower than the default of 100 seconds?
client.Timeout = TimeSpan.FromMilliseconds(500);
client.GetAsync(...);
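Fleshing that suggestion out into a runnable sketch (the URL and the comma-decimal timing output come from the question; when you block with .Result, HttpClient wraps the timeout cancellation in an AggregateException):
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class TimeoutDemo
{
    static void Main()
    {
        var watch = Stopwatch.StartNew();
        using (var client = new HttpClient { Timeout = TimeSpan.FromMilliseconds(500) })
        {
            try
            {
                var result = client.GetAsync("http://nonexistent.nonexistent.nonexistent").Result;
            }
            catch (AggregateException ex) when (ex.InnerException is TaskCanceledException)
            {
                // the 500 ms client timeout surfaces here instead of the slow resolver failure
                Console.WriteLine("Timed out after {0:F2}s", watch.Elapsed.TotalSeconds);
            }
        }
    }
}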
I have this block of code:
var client = new TcpClient();
HttpRequestInfo.AddTimestamp("Connecting");
await Task.WhenAny(client.ConnectAsync(serverAddress, serverPort),
Task.Delay(TimeSpan.FromMilliseconds(300)));
HttpRequestInfo.AddTimestamp("Connected");
if (client.Connected) { ... }
Where HttpRequestInfo.AddTimestamp simply logs named timestamps using the Stopwatch class.
In logs I sometimes see:
"Connecting":110ms - "Connected":747ms
"Connecting":35ms - "Connected":3120ms
"Connecting":38ms - "Connected":3053ms
I assumed that this approach would let me cap the connection attempt at a 300 ms timeout. However, I see that this line of code sometimes (very rarely) runs longer than 300 ms.
What is the reason for this behavior?
The docs state:
This method depends on the system clock. This means that the time delay will approximately equal the resolution of the system clock if the delay argument is less than the resolution of the system clock, which is approximately 15 milliseconds on Windows systems.
That can explain overruns of roughly 15 milliseconds beyond your 300, because the delay has to round up to the system clock resolution.
It does not explain your longer timeouts, which are on a much larger scale.
My assumption is that ConnectAsync may block for a while before returning its task to the calling method. If that is true, the time is lost between your first log entry and the moment Task.Delay is actually started, and the problem is not related to the delay at all.
You can try this code and monitor the logs; maybe the lost time is hiding in the launch of ConnectAsync:
var client = new TcpClient();
HttpRequestInfo.AddTimestamp("Launching ConnectAsync");
var connectAsyncTask = client.ConnectAsync(serverAddress, serverPort);
HttpRequestInfo.AddTimestamp("ConnectAsync launched");
HttpRequestInfo.AddTimestamp("Launching Delay");
var delayTask = Task.Delay(TimeSpan.FromMilliseconds(300));
HttpRequestInfo.AddTimestamp("Delay launched");
var firstTask = await Task.WhenAny(connectAsyncTask, delayTask);
if (firstTask == connectAsyncTask)
{
HttpRequestInfo.AddTimestamp("Connected");
}
else
{
HttpRequestInfo.AddTimestamp("Timeout");
}
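If it turns out that ConnectAsync does block before handing back its task, a hedged workaround (my assumption, not a verified fix) is to launch it via Task.Run so the 300 ms race starts immediately:
var client = new TcpClient();
// Task.Run unwraps the inner task, so connectTask completes when the connect completes
var connectTask = Task.Run(() => client.ConnectAsync(serverAddress, serverPort));
var delayTask = Task.Delay(TimeSpan.FromMilliseconds(300));
var firstTask = await Task.WhenAny(connectTask, delayTask);
HttpRequestInfo.AddTimestamp(firstTask == connectTask ? "Connected" : "Timeout");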
So I'm seeing some really strange behavior with a C# Task.Delay that is kind of driving me insane.
Context: I'm using C# .NET to communicate with one of our devices via R4852. The device needs roughly 200ms to finish each command, so I introduced a 250ms delay inside my communication class.
Bug / bad behavior: The delay inside my communication class sometimes waits for 250ms and sometimes only for 125ms. This is reproducible, and the same behavior occurs when I increase the delay; e.g. if I set my delay to 1000ms, every second request will only wait for 875ms, so again 125ms are missing.
This behavior only occurs when no debugger is attached, and only on some machines. The machine in our production department where this software will be used has the issue; the machine I'm working on right now doesn't. Both run Windows 10.
How come 125ms are missing from time to time?
I already learnt that the Task.Delay method uses a timer with a precision of about 15ms. That doesn't explain the missing 125ms, as the delay should at most fire a few milliseconds too late, not 125ms too early.
The following method is the one I use to queue commands to my device. A semaphore (_requestSemaphore) ensures that only one command can be executed at a time, so there can only ever be one request being processed.
public async Task<bool> Request(WriteRequest request)
{
    await _requestSemaphore.WaitAsync(); // block incoming calls
    await Task.Delay(Delay); // delay
    Write(_connectionIdDictionary[request.Connection], request.Request); // write
    if (request is WriteReadRequest)
    {
        _currentRequest = request as WriteReadRequest;
        var readSuccess = await _readSemaphore.WaitAsync(Timeout); // wait until read of line has finished
        _currentRequest = null; // set _currentRequest to null
        _requestSemaphore.Release(); // release next incoming call
        return readSuccess;
    }
    else
    {
        if (request is WriteWithDelayRequest)
        {
            await Task.Delay((request as WriteWithDelayRequest).Delay);
        }
        _requestSemaphore.Release(); // release next incoming call
        return true;
    }
}
The following code is part of the method that sends the requests to the method above. I removed some lines to keep it short; the basic parts (requesting and waiting) are still there:
// this command is the first command and will always have a proper delay of 1000ms
var request = new Communication.Requests.WriteRequest(item.Connection, item.Command);
await _translator.Request(request);

// this request is the second request that is missing 125ms
var queryRequest = new Communication.Requests.WriteReadRequest(item.Connection, item.Query); // query sent to check whether the value was set properly
if (await _translator.Request(queryRequest)) // send the query to the device and wait for a response
{
    if (item.IsQueryValid(queryRequest.Response)) // check result
    {
        item.Success = true;
    }
}
The first request that I'm sending to this method is a WriteRequest, the second one a WriteReadRequest.
I discovered this behavior while watching the serial port traffic with a program named Device Monitoring Studio.
Here is a screenshot of the actual serial communication. In this case I was using a delay of 1000ms. You can see that the sens0002 command had a delay of exactly 1 second before it was executed. The next command / query sens? only has an 875ms delay. This screenshot was taken while the debugger was not attached.
Here is another screenshot. The delay was set to 1000ms again, but this time the debugger was attached. As you can see, the first and second command now both have a delay of roughly 1000ms.
And in the two following screenshots you can see the same behavior with a delay of 250ms (bugged down to 125ms): the first screenshot is without the debugger attached, the second one with it attached. In the second screenshot you can also see that there is quite a drift of 35ms, but still nowhere near the 125ms that were missing before.
So what the hell am I looking at here? The quick and dirty solution would be to just increase the delay to 1000ms so that this won't be an issue anymore, but I'd rather understand why this issue occurs and how to fix it properly.
Cheers!
As far as I can see, your times are printed as deltas to the previous entry.
In the 125/875ms case you have 8 intermediate entries of roughly 15ms each (sum roughly 120ms).
In the 250/1000ms case you have 8 intermediate entries of roughly 5ms each (sum roughly 40ms), and the numbers are actually more like 215/960ms.
So if you add those intermediate delays, the resulting complete delay is roughly the same, as far as I can tell.
Answering the question for everyone who just wants a yes / no on the question title: The First Rule of Programming: It's Always Your Fault
It's safe to assume that Task.Delay covers at least the specified amount of time (it might be more due to clock resolution). So if it seems to cover a smaller time span, then the method used to measure the actual delay is somehow at fault.
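A minimal way to check that on the affected machine is to measure Task.Delay directly with a monotonic Stopwatch rather than inferring it from the serial monitor's inter-message deltas:
var sw = System.Diagnostics.Stopwatch.StartNew();
await Task.Delay(250);
sw.Stop();
// On a healthy system this prints >= 250, plus up to ~15 ms of timer resolution
Console.WriteLine($"Task.Delay(250) actually took {sw.ElapsedMilliseconds} ms");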
I have a question about the azure service bus billing.
If I have the following code, and a message isn't sent for a long time, say 5 hours.
Assume I only have one subscription and the code is as below.
In this scenario, over that 5 hour period, what do I get charged (is it once for sending and once for downloading, or do I incur charges for the polling keep-alive that Azure implements in the background)?
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(ConnString, topic, subscriptionName);
while (true)
{
    var message = subscriptionClient.Receive();
    if (message != null)
    {
        try
        {
            message.Complete();
        }
        catch (Exception)
        {
            // Indicate a problem, unlock message in subscription
            message.Abandon();
        }
    }
    else
    {
        Console.WriteLine("null message received");
    }
    Thread.Sleep(25);
}
With the code above you will get charged for a single message every time the Receive call returns (even if the result is null). The default timeout for the Receive call is 60 seconds, so if there is no message for 5 hours, your code will return every minute and then sleep; assume you get charged for roughly 48 messages per hour (1 min timeout and 25 second wait). You can instead call the overload of Receive that takes a timeout and pass in a 5 hour timeout. The connection will then be kept alive for 5 hours before it returns, and no charges will occur during that time.
From a back-of-the-envelope calculation: a single receiver running with a one minute timeout, no wait, and no real messages will be charged one message every minute. That is less than 5 cents for the entire month. See the billing calculator here.
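A minimal sketch of that overload, reusing the subscriptionClient from the question (whether the service honors a full 5 hour server wait on a single call is worth verifying against current limits):
// One pending long poll instead of a Receive every minute;
// it counts as a single billable operation when it returns (message or null).
var message = subscriptionClient.Receive(TimeSpan.FromHours(5));
if (message != null)
{
    message.Complete();
}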
Only message transactions (Send, Receive) are counted; Azure does not charge for keep-alive messages.
See the MSDN topic: http://msdn.microsoft.com/en-us/library/hh667438.aspx#BKMK_SBv2FAQ2_1
I believe after lengthy research and searching, I have discovered that what I want to do is probably better served by setting up an asynchronous connection and terminating it after the desired timeout... But I will go ahead and ask anyway!
Quick snippet of code:
HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
webReq.Timeout = 5000;
HttpWebResponse response = (HttpWebResponse)webReq.GetResponse();
// this takes ~20+ sec on servers that aren't on the proper port, etc.
I have an HttpWebRequest method that is in a multi-threaded application, in which I am connecting to a large number of company web servers. In cases where the server is not responding, the HttpWebRequest.GetResponse() is taking about 20 seconds to time out, even though I have specified a timeout of only 5 seconds. In the interest of getting through the servers on a regular interval, I want to skip those taking longer than 5 seconds to connect to.
So the question is: "Is there a simple way to specify/decrease a connection timeout for a WebRequest or HttpWebRequest?"
I believe the problem is that WebRequest measures the time only after the request is actually submitted. If you issue multiple requests to the same address, the ServicePointManager will throttle them and only place as many concurrent connections on the wire as the corresponding ServicePoint.ConnectionLimit allows, which by default takes its value from ServicePointManager.DefaultConnectionLimit. The application CLR host sets this to 2, the ASP host to 10. So if you have a multithreaded application that submits multiple requests to the same host, only two are actually placed on the wire; the rest are queued up.
I have not researched this to conclusive evidence that this is what really happens, but on a similar project I worked on, things were horrible until I removed the ServicePoint limitation.
Another factor to consider is DNS lookup time. Again, this is my belief, not backed by hard evidence, but I think WebRequest does not count the DNS lookup time against the request timeout. DNS lookup time can be a very big factor in some deployments.
And yes, you must code your app around WebRequest.BeginGetRequestStream (for POSTs with content) and WebRequest.BeginGetResponse (for GETs and POSTs). Synchronous calls will not scale (I won't go into the details of why, but for that I do have hard evidence). In any case, the ServicePoint issue is orthogonal to this: the queueing behavior happens with async calls too.
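If the ServicePoint throttling described above is the bottleneck, lifting the per-host limit is a one-liner; the value 100 below is purely illustrative:
// Set this before the first request to the host so new ServicePoints pick it up
System.Net.ServicePointManager.DefaultConnectionLimit = 100;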
Sorry for tacking on to an old thread, but I think something that was said above may be incorrect/misleading.
From what I can tell, .Timeout is NOT the connection time; it is the TOTAL time allowed for the entire life of the HttpWebRequest and response. Proof:
I set:
.Timeout=5000
.ReadWriteTimeout=32000
The connect and post time for the HttpWebRequest took 26ms, but the subsequent call to HttpWebRequest.GetResponse() timed out after 4974ms, proving that the 5000ms was the time limit for the whole send-request/get-response set of calls.
I didn't verify whether DNS name resolution was measured as part of the time, as that is irrelevant to me; none of this works the way I actually need it to. My intention was to time out quicker when connecting to systems that weren't accepting connections, as shown by them failing during the connect phase of the request.
For example: I'm willing to wait 30 seconds on a connection request that has a chance of returning a result, but I only want to burn 10 seconds waiting to send a request to a host that is misbehaving.
Something I found later which helped is the .ReadWriteTimeout property. This, in addition to the .Timeout property, seemed to finally cut down on the time threads would spend trying to download from a problematic server. The default for .ReadWriteTimeout is 5 minutes, which for my application was far too long.
So, it seems to me:
.Timeout = time spent trying to establish a connection (not including lookup time)
.ReadWriteTimeout = time spent trying to read or write data after connection established
More info: HttpWebRequest.ReadWriteTimeout Property
Edit:
Per @KyleM's comment, the Timeout property covers the entire connection attempt, and reading up on it at MSDN shows:
Timeout is the number of milliseconds that a subsequent synchronous request made with the GetResponse method waits for a response, and the GetRequestStream method waits for a stream. The Timeout applies to the entire request and response, not individually to the GetRequestStream and GetResponse method calls. If the resource is not returned within the time-out period, the request throws a WebException with the Status property set to WebExceptionStatus.Timeout.
(Emphasis mine.)
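Putting the two properties together as described above, a minimal sketch (the values are illustrative, and url is assumed to hold the target address):
var req = (HttpWebRequest)WebRequest.Create(url);
req.Timeout = 10000;          // caps the whole GetRequestStream/GetResponse sequence
req.ReadWriteTimeout = 30000; // caps individual stream reads/writes after the connection is made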
From the documentation of the HttpWebRequest.Timeout property:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
Is it possible that your DNS query is the cause of the timeout?
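If DNS is the suspect, one way to take it out of the equation is to pre-resolve the host under your own timeout, using the APM overloads that exist alongside HttpWebRequest; hostName here is an illustrative placeholder:
// Resolve the host first so a slow DNS lookup can't eat into the request's budget
var ar = System.Net.Dns.BeginGetHostEntry(hostName, null, null);
if (!ar.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(5)))
    throw new TimeoutException("DNS lookup exceeded 5 seconds.");
var entry = System.Net.Dns.EndGetHostEntry(ar); // entry.AddressList holds the resolved IPs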
No matter what we tried, we couldn't get the timeout below 21 seconds when the server we were checking was down.
To work around this, we combined a TcpClient check to see whether the domain was alive, followed by a separate check to see whether the URL was active:
public static bool IsUrlAlive(string aUrl, int aTimeoutSeconds)
{
    try
    {
        // check the domain first
        if (IsDomainAlive(new Uri(aUrl).Host, aTimeoutSeconds))
        {
            // only now check the url itself
            var request = System.Net.WebRequest.Create(aUrl);
            request.Method = "HEAD";
            request.Timeout = aTimeoutSeconds * 1000;
            var response = (HttpWebResponse)request.GetResponse();
            return response.StatusCode == HttpStatusCode.OK;
        }
    }
    catch
    {
        // any failure means the URL is treated as dead
    }
    return false;
}

private static bool IsDomainAlive(string aDomain, int aTimeoutSeconds)
{
    try
    {
        using (TcpClient client = new TcpClient())
        {
            var result = client.BeginConnect(aDomain, 80, null, null);
            var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(aTimeoutSeconds));
            if (!success)
            {
                return false;
            }
            // we have connected
            client.EndConnect(result);
            return true;
        }
    }
    catch
    {
        // any failure means the domain is treated as dead
    }
    return false;
}