Using a modified WebClient, I download data periodically from a service with the following characteristics:
The data download (~1GB) can take around 20 minutes
Sometimes the service decides not to return any data at all (request hangs), or takes minutes to hours to return the first byte.
I would like to fail fast in the event that the service does not return any data within a reasonable (configurable) amount of time, yet allow plenty of time for a download that is making progress to succeed.
It seems that the WebRequest.Timeout property controls the total time for the request to complete, while ReadWriteTimeout controls the total time available to read data once the data transfer begins.
Am I missing a property that would control the maximum amount of time to wait between establishing the connection and the first byte returning? If there is no such property, how can I approach the problem?
I am not aware of any additional timeout property that will achieve the result you are looking for. The first thought that comes to mind is attaching a handler to DownloadProgressChanged that will update a flag to indicate data has been received (not always accurate though).
Using a Timer or EventWaitHandle you could then block (or handle async if you prefer) for a short period of time and evaluate whether any data has been received. The code below is not a fully fleshed out example, but an idea of how it may be implemented.
using (var manualResetEvent = new ManualResetEvent(false))
using (var client = new WebClient())
{
    // signal as soon as the first progress notification arrives
    client.DownloadProgressChanged += (sender, e) => manualResetEvent.Set();
    client.DownloadDataAsync(new Uri("https://github.com/downloads/cbaxter/Harvester/Harvester.msi"));

    // no data received within 5 seconds: abort the download
    if (!manualResetEvent.WaitOne(5000))
        client.CancelAsync();
}
In the above example, the manualResetEvent.WaitOne will return true if DownloadProgressChanged was invoked. You will likely want to check e.BytesReceived > 0 and only set for non-zero values, but I think you get the idea?
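If you do want the handler to gate on the byte count, a small variation of the above (still just a sketch) would be:

client.DownloadProgressChanged += (sender, e) =>
{
    // only treat the event as real progress once actual bytes have arrived
    if (e.BytesReceived > 0)
        manualResetEvent.Set();
};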
So basically I am running a program which sends up to 7,000 HTTP requests per second on average, 24/7, in order to detect changes on a website as quickly as possible.
However, every 2.5 to 3 minutes on average, my program slows down for around 10-15 seconds and drops from ~7K rq/s to less than 1,000.
Here are logs from my program, where you can see the amount of requests it sends every second:
https://pastebin.com/029VLxZG
When scrolling down through the logs, you can see it goes slower every ~3 minutes. Example: https://i.imgur.com/US0wPzm.jpeg
At first I thought my server's ethernet connection was going into a temporary "restricted" mode, and I even contacted my host about it. But then I ran 2 instances of my program simultaneously just to see what would happen, and I noticed that even though the issue (downtime) occurred on both, it didn't always happen at the same time (it depended on when each instance was started). That meant the problem wasn't the internet connection, but my program itself.
I investigated a little bit more, and found out that as soon as my program goes from ~7K rq/s to ~700, a lot of RAM is being freed up on my server.
I have taken 2 screenshots of the consecutive seconds just before and right after the downtime occurs (including RAM metrics) so you can compare them: https://imgur.com/a/sk2TYQZ (please note that I was using fewer threads here, which is why the average "normal" speed is ~2K rq/s instead of the ~7K mentioned before).
If you'd like to see more of it, here is the full record of the issue, in a video which lasts about 40 seconds: https://i.imgur.com/z27FlVP.mp4 - As you can see, after the RAM is freed up, its usage slowly goes up again, before the same process repeats every ~3 minutes.
For more context, here is the method I am using to send the HTTP requests (it is being called from a lot of threads concurrently, as my app is multi-threaded in order to be super fast):
public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    string response = await httpClient.GetStringAsync(baseAddress + endpoint);
    return response.Contains("example");
}
One thing I tried was replacing the whole method body with await Task.Delay(25) followed by return false, and that fixed the issue: RAM usage barely increased.
This led me to believe the issue lies with HttpClient / my HTTP requests, and even though I tried replacing GetStringAsync with GetAsync using both an HttpRequestMessage and an HttpResponseMessage (and disposing them with using), the behavior ended up being exactly the same.
So here I am, desperate for a fix, and without enough knowledge about memory, the garbage collector, etc. (if that's even relevant here) to be able to fix this myself.
Please, Stack Overflow, do you have any idea?
Thanks a lot.
Your best bet would be to stream the response and then scan it in chunks for what you are looking for. An example implementation could be something as follows:
using var response = await Client.GetAsync(BaseUrl, HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
string line;
while ((line = await reader.ReadLineAsync()) != null)
{
    if (line.Contains("example"))
    {
        // do whatever, e.g. return true
    }
}
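Applied to the HasChangedAsync method from the question, a hedged sketch of the same idea could look like this (it reuses the question's names; returning true on the first match is my assumption about what "has changed" means):

public static async Task<bool> HasChangedAsync(string endpoint, HttpClient httpClient)
{
    const string baseAddress = "https://example.com/";
    // ResponseHeadersRead avoids buffering the entire body into one large string
    using var response = await httpClient.GetAsync(baseAddress + endpoint, HttpCompletionOption.ResponseHeadersRead);
    await using var stream = await response.Content.ReadAsStreamAsync();
    using var reader = new StreamReader(stream);
    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        if (line.Contains("example"))
            return true; // stop reading as soon as the marker is found
    }
    return false;
}

Reading line by line keeps each allocation small, which should reduce the garbage-collection pressure that appears to be behind the periodic slowdowns.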
So I'm seeing some really strange behavior with a C# Task.Delay that is kind of driving me insane.
Context: I'm using C#/.NET to communicate with one of our devices via R4852. The device needs roughly 200ms to finish each command, so I introduced a 250ms delay inside my communication class.
Bug / bad behavior: The delay inside my communication class sometimes waits for 250ms and sometimes only waits for 125ms. This is reproducible and the same behavior occurs when I'm increasing my delay. E.g. if I set my delay to 1000ms every second request will only wait for 875ms, so again there are 125ms missing.
This behavior only occurs if there is no debugger attached and only occurs on some machines. The machine where this software will be used in our production department is having this issue, my machine that I'm working on right now doesn't have this issue. Both are running Windows 10.
How come that there are 125ms missing from time to time?
I already learnt that the Task.Delay method uses a timer with a precision of roughly 15ms. That doesn't explain the missing 125ms, since it should at most fire a few milliseconds too late rather than 125ms too early.
The following method is the one I use to queue commands to my device. There is a semaphore (_requestSemaphore) responsible for ensuring that only one command can be executed at a time, so there can only ever be one request being processed.
public async Task<bool> Request(WriteRequest request)
{
    await _requestSemaphore.WaitAsync(); // block incoming calls
    await Task.Delay(Delay); // delay
    Write(_connectionIdDictionary[request.Connection], request.Request); // write

    if (request is WriteReadRequest)
    {
        _currentRequest = request as WriteReadRequest;
        var readSuccess = await _readSemaphore.WaitAsync(Timeout); // wait until read of line has finished
        _currentRequest = null; // set _currentRequest to null
        _requestSemaphore.Release(); // release next incoming call
        return readSuccess; // true if the read finished before the timeout, false otherwise
    }
    else
    {
        if (request is WriteWithDelayRequest)
        {
            await Task.Delay((request as WriteWithDelayRequest).Delay);
        }
        _requestSemaphore.Release(); // release next incoming call
        return true;
    }
}
The following code is part of the method that is sending the requests to the method above. I removed some lines to keep it short. The basic stuff (requesting and waiting) is still there
// this command is the first command and will always have a proper delay of 1000ms
var request = new Communication.Requests.WriteRequest(item.Connection, item.Command);
await _translator.Request(request);

// this request is the second request that is missing 125ms
var queryRequest = new Communication.Requests.WriteReadRequest(item.Connection, item.Query); // query that is sent to check if the value was applied properly
if (await _translator.Request(queryRequest)) // send the query to the device and wait for the response
{
    if (item.IsQueryValid(queryRequest.Response)) // check result
    {
        item.Success = true;
    }
}
The first request that I'm sending to this method is a WriteRequest, the second one a WriteReadRequest.
I discovered this behavior when looking at the serial port communication using a software named Device Monitoring Studio to monitor the serial communication.
Here is a screenshot of the actual serial communication. In this case I was using a delay of 1000ms. You can see that the sens0002 command had a delay of exactly 1 second before it was executed. The next command / query sens? only has an 875ms delay. This screenshot was taken while the debugger was not attached.
Here is another screenshot. The delay was set to 1000ms again but this time the debugger was attached. As you can see the first and second command now both have a delay of roughly 1000ms.
And in the two following screenshots you can see the same behavior with a delay of 250ms (bugged down to 125ms). First screenshot without the debugger attached, second one with the debugger attached. In the second screenshot you can also see that there is quite a drift of 35ms, but still nowhere close to the 125ms that were missing before.
So what the hell am I looking at here? The quick and dirty solution would be to just increase the delay to 1000ms so that this won't be an issue anymore but I'd rather understand why this issue occurs and how to fix it properly.
Cheers!
As far as I can see, your times are printed as deltas to the previous entry.
In the case of the 125/875ms delay you have 8 intermediate entries of roughly 15ms each (summing to roughly 120ms).
In the case of the 250/1000ms delay you have 8 intermediate entries of roughly 5ms each (summing to roughly 40ms), and the numbers are actually more like 215/960ms.
So if you add up those intermediate delays, the resulting complete delay is roughly the same as far as I can tell.
Answering for everyone who just wants a yes/no on the question title: remember The First Rule of Programming: It's Always Your Fault.
It's safe to assume that Task.Delay covers at least the specified amount of time (it might be more due to clock resolution). So if it seems to cover a smaller timespan, then the method used to measure the actual delay is faulty somehow.
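As a quick sanity check, a minimal measurement loop (my own sketch, not code from the question) can confirm that Task.Delay on its own never completes early when timed with a Stopwatch:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class DelayCheck
{
    static async Task Main()
    {
        for (int i = 0; i < 10; i++)
        {
            var sw = Stopwatch.StartNew();
            await Task.Delay(250);
            sw.Stop();
            // typically prints values >= 250, often 250-265ms due to the ~15ms timer resolution
            Console.WriteLine($"Task.Delay(250) actually took {sw.ElapsedMilliseconds} ms");
        }
    }
}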
My application can receive up to roughly 100 requests for a batch job within a few milliseconds, but in actuality these job requests are being masked as one job request.
Fixing this at the source so that only one job request is sent is just not feasible at the moment.
A workaround I have thought of is to have my application fulfill only 1 batch job every x milliseconds, in this case 200 milliseconds, and ignore any other batch job requests that come in during those 200 milliseconds or while the batch job is running. After those 200 milliseconds are up, or when the batch job has completed, my application will accept the next job request from that point on; it will not process any requests that were ignored earlier. Once it accepts another job request, it repeats the cycle above.
What's the best way of doing this using .NET 4.0? Is there any boilerplate code that I can simply follow as a guide?
Update
Sorry for being unclear. I have added more details about my scenario. Also I just realized that my proposed workaround above will not work. Sorry guys, lol. Here's some background information.
I have an application that builds an index using files in a specified directory. When a file is added, deleted or modified in this directory, my application listens for these events using a FileSystemWatcher and re-indexes these files. The problem is that around 100 files can be added, deleted or modified by an external process and they occur very quickly, ie: within a few milliseconds. My end goal is to re-index these files after the last file change have occurred by the external process. The best solution is to modify the external process to signal my application when it has finished modifying the files I'm listening to but that's not feasible at the moment. Thus, I have to create a workaround.
A workaround that may solve my problem is to wait for the first file change. When the first file change has occurred, wait 200 milliseconds for any other subsequent file changes. Why 200 milliseconds? Because I'm hoping and confident that the external process can perform its file changes within 200 milliseconds. Once my application has waited for 200 milliseconds, I would like it to start a task that re-indexes the files and then go back to listening for file changes.
What's the best way of doing this?
Again, sorry for the confusion.
This question is a bit too high level to guess at.
My guess is your application runs as a service, your requests come into your application and arrive in a queue to be processed, and every 200 ms you wake the queue and pop an item off for processing.
I'm confused by the "masked as one job request" part. Since you mentioned you will "ignore any other batch job", my guess is you haven't arranged your code to accept the incoming requests in a queue.
Regardless, you will generally always have one application process running (your service), and if you choose you could spawn a new thread for each item you process from the queue. You can monitor how much CPU/memory utilization this requires and adjust the firing time (200ms) accordingly.
I may not be accurately understanding the problem, but my recommendation is to use the singleton pattern to work around this issue.
With the singleton approach, you can implement a lock on an object (the access method could potentially be something along the lines of BatchProcessor::GetBatchResults) that would then lock all requests to the batch job results object. Once the batch has finished, the lock is released, and the underlying object will have the results of the batch job available.
Please keep in mind that this is a "work around". There may be a better solution that involves looking into and changing the underlying business logic that causes multiple requests to come in for a job that's processing on demand.
Update:
Here is a link for information regarding Singleton (includes code examples): http://msdn.microsoft.com/en-us/library/ff650316.aspx
It is my understanding that the poster has some sort of application that sits and waits for incoming requests to perform a batch job. The problem is that he is receiving multiple requests within a short period of time that should actually have come in as just a single request, and unfortunately he is not able to solve this at the source.
So, his solution is to assume that all requests received within a 200 ms timespan are the same, and to only process these once. My concern with this would be whether this assumption is correct or not? This entirely depends on the sending systems and the environment in which this is being used. The general idea to be able to do this would be to update a lastReceived date/time when a request is processed. Then when a new request comes in, compare the current date/time to the lastReceived date/time and only process it if the difference is greater than 200 ms.
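A minimal sketch of that lastReceived comparison (the names and the hard-coded 200 ms threshold are illustrative, not from the poster's code):

using System;

class RequestDebouncer
{
    private static readonly object _sync = new object();
    private static DateTime _lastReceived = DateTime.MinValue;

    // returns true if this request should be processed, false if it arrived
    // within 200 ms of the previous accepted request and should be ignored
    public static bool ShouldProcess()
    {
        lock (_sync)
        {
            DateTime now = DateTime.UtcNow;
            if ((now - _lastReceived).TotalMilliseconds <= 200)
                return false;
            _lastReceived = now;
            return true;
        }
    }
}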
Other possible solutions:
You said you could not modify the sending application so only one job request was sent, but could you add additional information to it, for instance a unique identifier?
Could you store the parameters from the last job request and compare it with the next job request and only process them if they are different?
Based on your Update
Here is an example of how you could wait 200ms using a Timer:
using System;
using System.IO;
using System.Timers;

class Program
{
    static Timer timer;
    static int waitTime = 200; // in ms

    static void Main(string[] args)
    {
        FileSystemWatcher fsw = new FileSystemWatcher();
        fsw.Path = @"C:\temp\";
        fsw.Created += new FileSystemEventHandler(fsw_Created);
        fsw.EnableRaisingEvents = true;
        Console.ReadLine();
    }

    static void fsw_Created(object sender, FileSystemEventArgs e)
    {
        DateTime currTime = DateTime.Now;
        if (timer == null)
        {
            Console.WriteLine("Started @ " + currTime);
            timer = new Timer();
            timer.Interval = waitTime;
            timer.AutoReset = false; // fire only once per burst of file changes
            timer.Elapsed += new ElapsedEventHandler(timer_Elapsed);
            timer.Start();
        }
        else
        {
            Console.WriteLine("Ignored @ " + currTime);
        }
    }

    static void timer_Elapsed(object sender, ElapsedEventArgs e)
    {
        // Start the re-indexing task here
        Console.WriteLine("Elapsed @ " + DateTime.Now);
        timer = null;
    }
}
I believe after lengthy research and searching, I have discovered that what I want to do is probably better served by setting up an asynchronous connection and terminating it after the desired timeout... But I will go ahead and ask anyway!
Quick snippet of code:
HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(url);
webReq.Timeout = 5000;
HttpWebResponse response = (HttpWebResponse)webReq.GetResponse();
// this takes ~20+ sec on servers that aren't on the proper port, etc.
I have an HttpWebRequest method that is in a multi-threaded application, in which I am connecting to a large number of company web servers. In cases where the server is not responding, the HttpWebRequest.GetResponse() is taking about 20 seconds to time out, even though I have specified a timeout of only 5 seconds. In the interest of getting through the servers on a regular interval, I want to skip those taking longer than 5 seconds to connect to.
So the question is: "Is there a simple way to specify/decrease a connection timeout for a WebRequest or HttpWebRequest?"
I believe the problem is that WebRequest measures the time only after the request is actually made. If you submit multiple requests to the same address, the ServicePointManager will throttle your requests and only actually submit as many concurrent connections as the value of the corresponding ServicePoint.ConnectionLimit, which by default gets its value from ServicePointManager.DefaultConnectionLimit. The application CLR host sets this to 2, the ASP.NET host to 10. So if you have a multithreaded application that submits multiple requests to the same host, only two are actually placed on the wire; the rest are queued up.
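If that throttling is indeed what you are hitting, raising the limit is a one-liner. A hedged sketch (the value 100 and the URL are arbitrary placeholders):

using System;
using System.Net;

// raise the per-host concurrent connection limit before any requests are created
ServicePointManager.DefaultConnectionLimit = 100;

// or, for a single host only
var servicePoint = ServicePointManager.FindServicePoint(new Uri("http://example.com/"));
servicePoint.ConnectionLimit = 100;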
I have not researched this to the point of conclusive evidence that this is what really happens, but on a similar project I worked on, things were horrible until I removed the ServicePoint limitation.
Another factor to consider is the DNS lookup time. Again, this is my belief and not backed by hard evidence, but I think WebRequest does not count the DNS lookup time against the request timeout. DNS lookup time can be a very big factor in some deployments.
And yes, you must code your app around WebRequest.BeginGetRequestStream (for POSTs with content) and WebRequest.BeginGetResponse (for GETs and POSTs). Synchronous calls will not scale (I won't go into details why, but that I do have hard evidence for). Anyway, the ServicePoint issue is orthogonal to this: the queueing behavior happens with async calls too.
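To illustrate that pattern with a hard cap on how long a request may sit waiting for a response, here is a hedged sketch (the URL and the 10-second timeout are placeholders, not values from the question):

using System;
using System.Net;
using System.Threading;

var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

// start the request asynchronously
IAsyncResult result = request.BeginGetResponse(ar =>
{
    var req = (HttpWebRequest)ar.AsyncState;
    try
    {
        using (var response = (HttpWebResponse)req.EndGetResponse(ar))
        {
            Console.WriteLine(response.StatusCode);
        }
    }
    catch (WebException ex)
    {
        // an Abort() from the timeout callback surfaces here as RequestCanceled
        Console.WriteLine(ex.Status);
    }
}, request);

// abort the request if no response has arrived within 10 seconds
ThreadPool.RegisterWaitForSingleObject(
    result.AsyncWaitHandle,
    (state, timedOut) => { if (timedOut) ((HttpWebRequest)state).Abort(); },
    request,
    TimeSpan.FromSeconds(10),
    executeOnlyOnce: true);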
Sorry for tacking on to an old thread, but I think something that was said above may be incorrect/misleading.
From what I can tell .Timeout is NOT the connection time, it is the TOTAL time allowed for the entire life of the HttpWebRequest and response. Proof:
I set:
.Timeout = 5000
.ReadWriteTimeout = 32000
The connect and post time for the HttpWebRequest took 26ms
but the subsequent call HttpWebRequest.GetResponse() timed out in 4974ms thus proving that the 5000ms was the time limit for the whole send request/get response set of calls.
I didn't verify whether the DNS name resolution was measured as part of that time, as it's irrelevant to me; none of this works the way I really need it to. My intention was to time out more quickly when connecting to systems that weren't accepting connections at all, as shown by them failing during the connect phase of the request.
For example: I'm willing to wait 30 seconds on a connection request that has a chance of returning a result, but I only want to burn 10 seconds waiting to send a request to a host that is misbehaving.
Something I found later which helped is the .ReadWriteTimeout property. This, in addition to the .Timeout property, seemed to finally cut down on the time threads would spend trying to download from a problematic server. The default for .ReadWriteTimeout is 5 minutes, which for my application was far too long.
So, it seems to me:
.Timeout = time spent trying to establish a connection (not including lookup time)
.ReadWriteTimeout = time spent trying to read or write data after connection established
More info: HttpWebRequest.ReadWriteTimeout Property
Edit:
Per @KyleM's comment, the Timeout property is for the entire connection attempt, and reading up on it at MSDN shows:
Timeout is the number of milliseconds that a subsequent synchronous request made with the GetResponse method waits for a response, and the GetRequestStream method waits for a stream. The Timeout applies to the entire request and response, not individually to the GetRequestStream and GetResponse method calls. If the resource is not returned within the time-out period, the request throws a WebException with the Status property set to WebExceptionStatus.Timeout.
(Emphasis mine.)
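To make the distinction between the two properties concrete, a minimal sketch (the URL and the values are just placeholders):

var request = (HttpWebRequest)WebRequest.Create("http://example.com/");

// applies to the whole GetRequestStream()/GetResponse() call, including waiting for the response
request.Timeout = 30000;           // 30 seconds

// applies to individual Read()/Write() calls on the request or response stream
request.ReadWriteTimeout = 300000; // 5 minutes (the default)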
From the documentation of the HttpWebRequest.Timeout property:
A Domain Name System (DNS) query may take up to 15 seconds to return or time out. If your request contains a host name that requires resolution and you set Timeout to a value less than 15 seconds, it may take 15 seconds or more before a WebException is thrown to indicate a timeout on your request.
Is it possible that your DNS query is the cause of the timeout?
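One way to check is to time the lookup on its own; a small sketch (the host name is a placeholder):

using System;
using System.Diagnostics;
using System.Net;

var sw = Stopwatch.StartNew();
// resolve the host separately so the DNS portion can be measured in isolation
IPHostEntry entry = Dns.GetHostEntry("example.com");
sw.Stop();
Console.WriteLine($"DNS lookup took {sw.ElapsedMilliseconds} ms, {entry.AddressList.Length} address(es)");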
No matter what we tried we couldn't manage to get the timeout below 21 seconds when the server we were checking was down.
To work around this we combined a TcpClient check to see if the domain was alive, followed by a separate check to see if the URL itself was responding:
public static bool IsUrlAlive(string aUrl, int aTimeoutSeconds)
{
    try
    {
        // check the domain first
        if (IsDomainAlive(new Uri(aUrl).Host, aTimeoutSeconds))
        {
            // only now check the url itself
            var request = System.Net.WebRequest.Create(aUrl);
            request.Method = "HEAD";
            request.Timeout = aTimeoutSeconds * 1000;
            using (var response = (HttpWebResponse)request.GetResponse())
            {
                return response.StatusCode == HttpStatusCode.OK;
            }
        }
    }
    catch
    {
        // any exception means the URL is treated as not alive
    }
    return false;
}

private static bool IsDomainAlive(string aDomain, int aTimeoutSeconds)
{
    try
    {
        using (TcpClient client = new TcpClient())
        {
            var result = client.BeginConnect(aDomain, 80, null, null);
            var success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromSeconds(aTimeoutSeconds));
            if (!success)
            {
                return false;
            }

            // we have connected
            client.EndConnect(result);
            return true;
        }
    }
    catch
    {
        // any exception means the domain is treated as not alive
    }
    return false;
}
What is a reasonable amount of time to wait for a web request to return? I know this is maybe a little loaded as a question, but all I am trying to do is verify if a web page is available.
Maybe there is a better way?
try
{
    // Create the web request
    HttpWebRequest request = WebRequest.Create(this.getUri()) as HttpWebRequest;
    if (request != null)
    {
        request.Credentials = System.Net.CredentialCache.DefaultCredentials;
        // 2 minutes for timeout
        request.Timeout = 120 * 1000;

        // Get response
        response = request.GetResponse() as HttpWebResponse;
        connectedToUrl = processResponseCode(response);
    }
    else
    {
        logger.Fatal(getFatalMessage());
        string error = string.Empty;
    }
}
catch (WebException we)
{
    ...
}
catch (Exception e)
{
    ...
}
You need to consider how long the consumer of the web service is going to take. For example, if you are connecting to a DB web server and running a lengthy query, you need to make the web service timeout longer than the time the query will take; otherwise, the web service will (erroneously) time out.
I also use something like (consumer time) + 10 seconds.
Offhand I'd allow 10 seconds, but it really depends on what kind of network connection the code will be running with. Try running some test pings over a period of a few days/weeks to see what the typical response time is.
I would measure how long it takes for pages that do exist to respond. If they all respond in about the same amount of time, then I would set the timeout period to approximately double that amount.
Just wanted to add that a lot of the time I'll use an adaptive timeout. It could be a simple metric like:
period += (numTimeouts / numRequests > 0.01 ? someConstant : 0);
checked whenever you hit a timeout, to try and keep the timeout rate under 1% (for example). Just be careful about decrementing it too low :)
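A small C# sketch of that idea (the class and member names, the 1% threshold, and the 500 ms step are all illustrative, not from the comment above; the counts are cast to double to avoid integer division):

class AdaptiveTimeout
{
    private int _numRequests;
    private int _numTimeouts;
    private int _timeoutMs = 10000;          // starting timeout
    private const int Step = 500;            // how much to grow by on each adjustment
    private const int MaxTimeoutMs = 60000;  // never wait longer than this

    public int CurrentTimeoutMs => _timeoutMs;

    public void RecordResult(bool timedOut)
    {
        _numRequests++;
        if (timedOut)
        {
            _numTimeouts++;
            // grow the timeout while more than 1% of requests are timing out
            if ((double)_numTimeouts / _numRequests > 0.01 && _timeoutMs < MaxTimeoutMs)
                _timeoutMs += Step;
        }
    }
}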
The reasonable amount of time to wait for a web request may differ from one server to the next. If a server is at the far end of a high-delay link then clearly it will take longer to respond than when it is in the next room. But two minutes seems like it's more than ample time for a server to respond. The default timeout value for the PING command is expressed in seconds, not minutes. I suggest you look into the timeout values that are used by networking utilities like PING or TRACERT for inspiration.
I guess this depends on two things:
network speed/load (as others wrote, using ping might give you an idea about this)
the kind of page you are calling: e.g. is it a static HTML page or is it a page which might do some time-consuming operations (DB access, etc.)
Anyway, I think 2 minutes is a lot of time. I would definitely reduce the timeout to less than 30 seconds.
I realize this doesn't directly answer your question, but then an "answer" to this question is a little tough. Anyway, a tool I've used in the past is Gomez, to measure page load times from various parts of the world. It's free, and if you haven't done this kind of testing before, it might be helpful in giving you a firm idea of what typical page load times are for a given page from a given location.
I would wait 30 seconds at most, probably closer to 15. It really depends on what you are doing and what the consequence of an unsuccessful connection is. As I am sure you know, there are lots of reasons why you could get a timeout...