I'm having the issue that setting a timeout in HttpClient in C# a) interrupts a running download if it is set too low, and b) doesn't trigger in some situations. I am trying to find a workaround, and need some help.
I have a pretty straightforward HttpClient call. The problem is easiest to see when downloading a large file. The code looks like this, and I believe this is the correct usage:
HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMinutes(1);
HttpRequestMessage msg = new HttpRequestMessage(HttpMethod.Get, "<URL>");
HttpResponseMessage response = await client.SendAsync(msg, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
Stream contentStream = await response.Content.ReadAsStreamAsync();
await contentStream.CopyToAsync(fileStream);
Now this works in principle, BUT:
1. The timeout of 1 minute seems to be an "execution time" timeout; that is, it kills the copy even if the file has not been downloaded completely yet but was downloading just fine up to that point.
2. If I actually unplug a network cable during the transfer (to simulate a catastrophic failure), the timeout does not fire. I assume that some .Read() method within CopyToAsync simply blocks in that case.
Regarding 1: AFAIK client.Timeout gets converted into a CancellationToken internally (which is why a TaskCanceledException is thrown). That means it a) only works if the underlying operation actually checks for cancellation, and b) seemingly does not reset the timer on a successful read, since the whole point seems to be to cancel after a set timeout.
Regarding 2: In many cases, e.g. if the server isn't there at all or if I kill the server, i.e. whenever there is a "definite network failure" that the client can recognize, I do get a proper exception from this code; but I don't get one for more 'problematic network failures', such as unplugging the network cable from the (wired) server while the (wireless) client still tries to download.
Now, this is easiest to test with CopyToAsync on a large file, but I have no reason to believe it works any differently for a standard GetAsync or PostAsync, which means that with unlucky timing those methods might hang indefinitely as well.
What I would expect the Timeout in HttpClient to do is a) only count from the last successful read/write operation (which it seemingly doesn't; it counts from the start of the operation), and b) fire in all cases, even if the network goes down (which it doesn't either).
So - what can I do about this? Should I just use another implementation (which?), or should I implement my own timeouts by using a secondary thread which just kills the underlying client/socket/stream?
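For what it's worth, the workaround I have in mind is not a secondary thread but an "idle timeout" copy loop along these lines. This is just a sketch: the helper name, buffer size and the idea of resetting CancelAfter after every read are my own, I would set client.Timeout to Timeout.InfiniteTimeSpan so the built-in timeout doesn't interfere, and whether a pending ReadAsync actually observes the token depends on the underlying handler, which is part of what I'm asking about:

using System;
using System.IO;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class IdleTimeoutDownload
{
    // Hypothetical helper: cancels only if no bytes arrive for 'idleTimeout',
    // instead of after a fixed total execution time.
    public static async Task DownloadAsync(HttpClient client, string url, Stream destination, TimeSpan idleTimeout)
    {
        using (var cts = new CancellationTokenSource(idleTimeout))
        using (var request = new HttpRequestMessage(HttpMethod.Get, url))
        using (var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead, cts.Token))
        {
            response.EnsureSuccessStatusCode();
            using (var source = await response.Content.ReadAsStreamAsync())
            {
                var buffer = new byte[81920];
                int read;
                while ((read = await source.ReadAsync(buffer, 0, buffer.Length, cts.Token)) > 0)
                {
                    await destination.WriteAsync(buffer, 0, read, cts.Token);
                    cts.CancelAfter(idleTimeout); // restart the clock after every successful read
                }
            }
        }
    }
}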
Thanks.
Related
My goal was to reduce the time it takes until the application stops trying to connect to a server.
This is the solution I am now using:
(It works, but I want to understand in more detail how.)
MongoClientSettings clientSettings = new MongoClientSettings()
{
    Server = new MongoServerAddress(host, port),
    ClusterConfigurator = builder =>
    {
        // The "normal" timeout settings are for something different. This setting is the one
        // that determines how long it takes until we give up when we cannot connect to the
        // MongoDB instance.
        // https://jira.mongodb.org/browse/CSHARP-1018, https://jira.mongodb.org/browse/CSHARP-1231
        builder.ConfigureCluster(settings => settings.With(serverSelectionTimeout: TimeSpan.FromSeconds(2)));
    }
};
I do not understand exactly what SocketTimeout and ConnectTimeout are then used for.
If I set those to, say, 3 seconds, it would not make sense for the driver to wait any longer than that, since nothing good can be expected to happen after the socket has timed out, right?
My theory is that ConnectTimeout and SocketTimeout affect how long the driver waits on a single server, while serverSelectionTimeout is the timeout for the overall selection process. Is this true?
You can see in ClusterRegistry.cs that ConnectTimeout is passed to TcpStreamSettings.ConnectTimeout, while SocketTimeout is passed to both TcpStreamSettings.ReadTimeout and TcpStreamSettings.WriteTimeout.
Then in TcpStreamFactory.cs you can see how those read and write timeouts are used: they become NetworkStream.ReadTimeout and NetworkStream.WriteTimeout when the stream that reads/writes data over the TCP connection is created.
Now, if we go to documentation of NetworkStream.ReadTimeout we see there:
This property affects only synchronous reads performed by calling the Read method. This property does not affect asynchronous reads performed by calling the BeginRead method.
But in the Mongo driver those network streams are read asynchronously, which means those timeouts do nothing. The same goes for NetworkStream.WriteTimeout.
So, long story short: SocketTimeout seems to have no effect at all, while ConnectTimeout is used when establishing the TCP connection. You can see exactly how in TcpStreamFactory.cs.
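If you want to convince yourself of the ReadTimeout part without digging through the driver source, here is a self-contained sketch (the class name and messages are mine): it connects a TcpClient to a local listener that never sends anything, so the synchronous Read times out while ReadAsync just keeps waiting.

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class ReadTimeoutDemo
{
    static async Task Main()
    {
        // A listener that accepts the connection but never sends any data.
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        var acceptTask = listener.AcceptTcpClientAsync();

        using (var client = new TcpClient())
        {
            await client.ConnectAsync(IPAddress.Loopback, ((IPEndPoint)listener.LocalEndpoint).Port);
            using (TcpClient server = await acceptTask) // held open, never written to
            {
                NetworkStream stream = client.GetStream();
                stream.ReadTimeout = 1000;
                var buffer = new byte[1];

                try
                {
                    stream.Read(buffer, 0, 1); // synchronous: honours ReadTimeout
                }
                catch (IOException)
                {
                    Console.WriteLine("Synchronous Read timed out after ~1s, as documented.");
                }

                // Asynchronous: ReadTimeout is ignored, so we race it against a delay
                // purely to show that it has not completed.
                Task<int> readTask = stream.ReadAsync(buffer, 0, 1);
                Task finished = await Task.WhenAny(readTask, Task.Delay(3000));
                Console.WriteLine(finished == readTask
                    ? "ReadAsync completed (unexpected)"
                    : "ReadAsync is still pending after 3s: ReadTimeout had no effect.");
            }
        }
        listener.Stop();
    }
}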
I'm trying to implement a basic UDP client. One of its functions is the ability to probe computers to see if a UDP server is listening. I need to scan lots of these computers quickly.
I can't use the Socket.BeginReceiveFrom method and run a timeout waiting for it to complete, because callbacks may still occur after the timeout has passed; since many computers are probed in quick succession, I found that late callbacks ended up using modified data, as a new probe was already underway by the time the callback was finally invoked.
I can't use the Socket.ReceiveFrom method with a Socket.ReceiveTimeout because throwing and handling the SocketException takes a long time (not sure why, I'm not running much code to handle it), meaning it takes about 2 seconds per computer rather than the roughly 100 ms I hoped for.
Is there any way of running a timeout on a synchronous call to ReceiveFrom without using exceptions to determine when the call has failed/succeeded? Or is there a tactic I've not yet taken that you think could work?
Any advice is appreciated.
I decided to rewrite the probe code using TCP.
However, I later discovered the Socket.ReceiveFromAsync method which, seeing as it only receives a single datagram per call, would have made life easier.
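For anyone reading this later, the shape of the probe I would write with that knowledge is roughly the following (a sketch only; ProbeAsync, the payload and the timeout value are placeholders, and it uses UdpClient.ReceiveAsync, which wraps the socket, raced against Task.Delay rather than calling Socket.ReceiveFromAsync directly):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

static class UdpProbe
{
    // Hypothetical probe: sends one datagram and waits up to 'timeout' for any reply.
    // Each ReceiveAsync call returns its own UdpReceiveResult, so a late reply cannot
    // clobber the state of a newer probe the way a shared buffer can.
    public static async Task<bool> ProbeAsync(IPEndPoint target, byte[] payload, TimeSpan timeout)
    {
        using (var udp = new UdpClient())
        {
            await udp.SendAsync(payload, payload.Length, target);

            Task<UdpReceiveResult> receive = udp.ReceiveAsync();
            Task completed = await Task.WhenAny(receive, Task.Delay(timeout));

            // If the delay wins, disposing the UdpClient faults the pending receive;
            // that exception goes unobserved, which is acceptable for a probe.
            return completed == receive;
        }
    }
}

Probing many machines then just means starting the ProbeAsync tasks concurrently and awaiting them together with Task.WhenAll.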
I've got a very simple method that basically retrieves string content from a list of passed in urls:
foreach (var url in urls)
{
var content = _httpClient.GetStringAsync(url).Result;
}
Intermittently, I get the following exception:
An operation was attempted on something that is not a socket
I suspect I'm running out of connections?
If there is some network latency (and it takes a second or so for each url to return) I don't get this error.
Is there any way to prevent this?
Using .Result can cause problems depending on what the current threading synchronization context is. Assuming .Result is not the issue here, there is a property, ServicePointManager.DefaultConnectionLimit, that limits the number of simultaneous connections allowed. However, unless you set it explicitly it sometimes doesn't actually do anything, and you can easily end up with HttpClient trying to make hundreds of simultaneous requests.
So, try adding this line,
ServicePointManager.DefaultConnectionLimit = ServicePointManager.DefaultConnectionLimit;
I know it looks dumb, but it will actually limit the number of simultaneous connections to 2. That might prevent whatever strange issue you are seeing.
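In other words, something along these lines (a sketch; setting the limit to 2 simply mirrors the documented default, and awaiting the calls instead of using .Result avoids the synchronization-context pitfalls mentioned above):

using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

static class PageFetcher
{
    public static async Task<List<string>> FetchAllAsync(HttpClient httpClient, IEnumerable<string> urls)
    {
        // Ideally set once at application startup, before any requests are made.
        ServicePointManager.DefaultConnectionLimit = 2;

        var results = new List<string>();
        foreach (var url in urls)
        {
            // Sequential and awaited: no blocking on .Result, no unbounded fan-out.
            results.Add(await httpClient.GetStringAsync(url));
        }
        return results;
    }
}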
I have a number of applications using various web technologies such as SOAP, WCF services, or just a simple XmlReader. However, they all seem to suffer from the same problem: if the internet connection has problems at the wrong time, they miss the timeout and hang indefinitely.
I have set the timeout in all scenarios to something small, e.g. for wcf
closeTimeout="00:00:15" openTimeout="00:00:15"
receiveTimeout="00:00:15" sendTimeout="00:00:15"
or for soap
_Session.Timeout = (int)TIMEOUT.TotalMilliseconds;
These timeouts do generally get hit; however, there appears to be some special case where, if the internet drops out at just the wrong time, the (synchronous) call will hang and never time out.
I was considering starting up a timer every time I make a call, and using the appropriate .Abort() function if the timer expires to cancel the call. However, I was wondering if there was a simpler way to fix the issue and ensure the timeout gets hit.
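The kind of wrapper I have in mind would look roughly like this (a sketch only; CallWithHardTimeout is a made-up name, and the channel and operation are placeholders for whatever proxy the call goes through):

using System;
using System.ServiceModel;
using System.Threading.Tasks;

static class HardTimeout
{
    // Hypothetical wrapper: runs the blocking proxy call on the thread pool and,
    // if it does not finish within 'timeout', aborts the channel so the call unblocks.
    public static T CallWithHardTimeout<T>(ICommunicationObject channel, Func<T> call, TimeSpan timeout)
    {
        var task = Task.Run(call);
        if (!task.Wait(timeout))
        {
            channel.Abort(); // tears down the channel; the pending call then faults
            throw new TimeoutException("Call did not complete within " + timeout);
        }
        return task.Result;
    }
}

For a generated ClientBase<T> proxy this would be called as something like CallWithHardTimeout(client.InnerChannel, () => client.SomeOperation(args), TimeSpan.FromSeconds(15)), where SomeOperation stands in for whichever service operation is being made.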
Does anyone know why this occurs, and if so what a clean/simple/good way is to ensure the calls always time out?
I can guess at why it occurs, but without giving a solution :(
I suspect it's getting caught up on DNS resolution. I've seen various situations where that "doesn't count" - e.g. where it ends up happening on the initiating thread of an asynchronous call, or where it's definitely not been included in timeouts.
If you're able to reproduce this by pulling out your network cable, I'd suggest using Wireshark to confirm my guess - that would at least suggest further avenues for investigation. Maybe there's a DNS timeout somewhere in the .NET stack which is normally infinite but which can be tweaked, for example.
The requirements for the TCP server:
- receive from each client and send a result back to that same client (this is all the server does)
- it must cater for 100 clients
- speed is an important factor, i.e. even at 100 client connections it should not be laggy
For now I have been using the C# async methods, but I find that things always get laggy at around 20 connections. By laggy I mean it takes almost 15-20 seconds to get the result. At around 5-10 connections, the result comes back almost immediately.
Actually, when the TCP server gets a message it calls into a dll that does some processing and returns a result. I am not exactly sure about the workflow behind it, but at small scale there is no problem, so I thought the problem might be with my TCP server.
Right now I am thinking of using a synchronous approach instead: a while loop blocking on Accept, spawning a new thread for each client after the accept. But at 100 connections that is definitely overkill.
I chanced upon IOCP; I am not entirely sure about it, but it seems to be something like a connection pool, and the way it handles TCP looks quite similar to the normal way.
For these TCP approaches I am also not sure whether it is better to open and close the connection each time a message needs to be passed. On average, each client sends a message every 5-10 minutes.
Another alternative might be to go via the web (I am looking at a generic handler) and form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I need advice, especially from those who have done TCP at larger scale. I do not have 100 PCs to test with, so this is quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but I will consider porting to C++ for speed.
You must be doing it wrong. I personally wrote C# based servers that could handle 1000+ connections, sending more than 1 message per second, with <10ms response time, on commodity hardware.
If you have such high response times, it must be your server process that is causing blocking. Perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread pool exhaustion. Unfortunately, there are plenty of ways to screw this up and only a few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, going over the SocketAsyncEventArgs example, which covers the most performant way of writing socket apps in .NET since the advent of the Socket Performance Enhancements in Version 3.5, and so on and so forth.
If you find yourself lost at the task ahead (as it seems you happen to be) I would urge you to embrace an established communication framework, perhaps WCF with a net binding, and use the declarative service model programming of WCF. This way you'll piggyback on the WCF performance. While this may not be enough for some, it will get you far enough, much further than you are right now for sure, with regard to performance.
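If you do stay with raw sockets, a reasonable baseline that handles 100 connections without a thread per client is a plain async accept/read loop. This is not the SocketAsyncEventArgs pattern mentioned above, just a simpler sketch of the same non-blocking idea; the echo write stands in for the call into your dll, and the port is arbitrary:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncTcpServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start();

        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleClientAsync(client); // fire and forget: each client is served concurrently
        }
    }

    static async Task HandleClientAsync(TcpClient client)
    {
        try
        {
            using (client)
            using (NetworkStream stream = client.GetStream())
            {
                var buffer = new byte[4096];
                int read;
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    // Stand-in for the dll call that produces the result; keep it
                    // non-blocking (or push it to Task.Run) so one slow client
                    // cannot stall the others.
                    await stream.WriteAsync(buffer, 0, read);
                }
            }
        }
        catch (Exception)
        {
            // A dropped client should not take the server down.
        }
    }
}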
I don't see why C# should be any worse than C++ in this situation - chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming that workload for each thread is more I/O bound than CPU intensive. Whether you spawn off a thread per connection or use a thread pool to manage a number of threads is another matter - and something to determine through experimentation and also whilst considering whether 100 clients is your maximum!