C# UDP Socket.ReceiveFrom timeout without using BeginReceiveFrom or Exceptions

I'm trying to implement a basic UDP client. One of its functions is the ability to probe computers to see if a UDP server is listening. I need to scan lots of these computers quickly.
I can't use the Socket.BeginReceiveFrom method with a timeout waiting for it to complete, because callbacks can still fire after the timeout has expired. With many computers being probed in quick succession, I found that these late callbacks ended up reading modified data, since a new probe was already underway by the time the callback was finally invoked.
I can't use the Socket.ReceiveFrom method with a Socket.ReceiveTimeout either, because throwing and handling the resulting SocketException takes a long time (I'm not sure why; I'm not running much code to handle it), meaning each computer takes about 2 seconds rather than the 100 ms I had hoped for.
Is there any way of running a timeout on a synchronous call to ReceiveFrom without using exceptions to determine when the call has failed/succeeded? Or is there a tactic I've not yet taken that you think could work?
Any advice is appreciated.
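For reference, a minimal sketch of one tactic along these lines: Socket.Poll (in System.Net.Sockets) waits for readability with a microsecond timeout and returns a bool, so the timeout path involves no exception. This assumes socket is an already-bound UDP socket and the probe datagram has already been sent:

EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
byte[] buffer = new byte[512];

if (socket.Poll(100_000, SelectMode.SelectRead)) // timeout is in microseconds: 100 ms
{
    // Data is already queued, so this synchronous call won't block.
    int n = socket.ReceiveFrom(buffer, ref remote);
    // A reply arrived within 100 ms: the probed machine is listening.
}
else
{
    // No reply within 100 ms: treat the machine as not listening.
}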

I decided to rewrite the probe code using TCP.
However, I later discovered the Socket.ReceiveFromAsync method, which receives exactly one datagram per call and would have made life easier.
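A minimal sketch of how Socket.ReceiveFromAsync might have been used here; the TryProbe helper, buffer size, and timeout handling are illustrative, not part of any library:

static bool TryProbe(Socket socket, EndPoint target, byte[] probe, int timeoutMs)
{
    socket.SendTo(probe, target);

    using var completed = new ManualResetEventSlim();
    var args = new SocketAsyncEventArgs { RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0) };
    args.SetBuffer(new byte[512], 0, 512);
    args.Completed += (_, e) => completed.Set();

    if (!socket.ReceiveFromAsync(args)) // false: the receive completed synchronously
        return args.SocketError == SocketError.Success && args.BytesTransferred > 0;

    // Caveat: on timeout the receive is still pending against this args object,
    // so the socket should be discarded rather than reused for the next probe;
    // reusing it reproduces the stale-callback hazard described above.
    return completed.Wait(timeoutMs)
        && args.SocketError == SocketError.Success
        && args.BytesTransferred > 0;
}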

Related

How to use actual working timeouts in HttpClient.* methods in C#

I'm having the issue that setting a timeout in HttpClient in C# a) interrupts a running download if it is set too low, and b) doesn't trigger in some situations. I am trying to find a workaround, and need some help.
I have a pretty straightforward HttpClient call. The problem is easiest to see when downloading a large file. The code looks like this, and I believe this is the correct usage:
HttpClient client = new HttpClient();
client.Timeout = TimeSpan.FromMinutes(1);
HttpRequestMessage msg = new HttpRequestMessage(HttpMethod.Get, "<URL>");
HttpResponseMessage response = await client.SendAsync(msg, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();
Stream stream = await response.Content.ReadAsStreamAsync();
await stream.CopyToAsync(fileStream);
Now this works in principle, BUT:
The timeout of 1 minute seems to be an "execution time timeout": it kills the copy even when the download has been progressing perfectly well but simply hasn't finished within that minute.
If I actually unplug a network cable during the transfer (to simulate a catastrophic failure), the timeout does not fire. I assume that some .Read() method within CopyToAsync simply blocks in that case.
Regarding 1: AFAIK client.Timeout gets converted to a CancellationToken internally (which is why a TaskCanceledException is thrown). That means it a) only works if the underlying operation actually checks for cancellation, and b) doesn't reset on a successful read; the token is simply cancelled a fixed interval after the operation starts.
Regarding 2: In many cases, e.g. if the server isn't there at all or if I kill the server (i.e. a "definite network failure" which the client can recognize), I do get a proper exception from this code; but I don't get one in, let's say, more 'problematic network failures' (as simulated by unplugging the network cable from a (wired) server while the (wireless) client still tries to download).
Now, this is easiest to test on CopyToAsync for a large file, but I have no reason to believe that this works any different on a standard GetAsync or PostAsync, which means that with unlucky timing, those methods might hang indefinitely as well.
What I would expect the Timeout in HttpClient to do is a) only count from the last successful read/write operation (which it seemingly doesn't; it counts from the start of the operation), and b) fire in all cases, even if the network goes down (which it doesn't either).
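To illustrate expectation (a), a hand-rolled copy loop with an idle timeout that resets after every successful read might look like the following sketch. The helper name is illustrative, and whether a blocked ReadAsync actually honors cancellation depends on the stream and runtime, which is part of the problem described here:

static async Task CopyWithIdleTimeoutAsync(Stream source, Stream destination, TimeSpan idleTimeout)
{
    var buffer = new byte[81920];
    while (true)
    {
        // A fresh token per read: the clock restarts whenever data arrives.
        using var cts = new CancellationTokenSource(idleTimeout);
        int read = await source.ReadAsync(buffer, 0, buffer.Length, cts.Token);
        if (read == 0) break; // end of stream
        await destination.WriteAsync(buffer, 0, read, cts.Token);
    }
}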
So - what can I do about this? Should I just use another implementation (which?), or should I implement my own timeouts by using a secondary thread which just kills the underlying client/socket/stream?
Thanks.

Is this an appropriate time to look into using async?

I'm working on a program that pings a long list of IP addresses. Currently there are about 250 in the database to ping, and it takes a long time to get through all of them. The program also sends email alerts when a host's status changes (from failed to success or vice versa). At worst, each ping takes two seconds (if it fails), so it generally takes about 8 minutes for the whole program to cycle through.
I'd like the email alerts to be closer to real time if possible. Would calling the ping function asynchronously allow them to fire off more quickly without waiting for the response from the one before it? I'm new to non-synchronous programming of any sort, and I'm not sure if this is an appropriate situation to use it in.
If it is, any pointers towards resources for getting started with this would be much appreciated!
The network should easily support sending 250 pings in under one second. You should start all the pings at once using async IO, then use Task.WaitAll to collect the results. That way you are done within 2 seconds (the worst-case timeout of a single ping).
With 250 work items I would strongly prefer a solution using async IO. You can use threads as well if you like.
Find out what async methods the Ping class provides. Learn about async/await and Task. This is a good use-case for them.
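A minimal sketch of that approach, using Task.WhenAll (the awaitable counterpart of Task.WaitAll); the addresses and the 2-second timeout are illustrative:

using System;
using System.Linq;
using System.Net.NetworkInformation;
using System.Threading.Tasks;

static async Task PingAllAsync(string[] addresses)
{
    var tasks = addresses.Select(async ip =>
    {
        using var ping = new Ping(); // one Ping instance per concurrent send; an instance can't run two pings at once
        PingReply reply = await ping.SendPingAsync(ip, 2000);
        return (ip, reply.Status);
    });

    // Total time is roughly the slowest single ping, not the sum of all 250.
    foreach (var (ip, status) in await Task.WhenAll(tasks))
        Console.WriteLine($"{ip}: {status}");
}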

Asynchronous vs. Synchronous socket server for real-time application

I am currently developing a C# socket server that needs to send and receive commands to and from a real-time process. The client is an Android device. Currently the real-time requirements are "soft", but in the future stricter timing requirements might arise. Let's say in the future it might be sending commands to a crane that could be potentially dangerous.
The server is working, and seemingly very well with my current synchronous socket server design. I have separate threads for receiving and sending data. I am wondering if there would be any reason to attempt an asynchronous server socket approach? Could it provide more stability and/or faster performance?
I'll gloss over the definition of real time and say that asynchronous sockets won't make the body of the request process any faster, but will increase concurrency (the number of requests you can take at any one time). If all processors are busy processing something, you won't get any gain. This only gives you gain in the situation where a processor would have sat waiting for a socket to receive something.
Just a note on real time, if your real time requirements are anything like the need to guarantee a response in x-time, then C# and .NET will not give you such guarantees. This, however, depends on your current and future definitions of "soft". It may be the case that you happen to be getting good response times, but don't confuse that with true real time systems.
If you're doubting the usefulness of something asynchronous in your applications then you should definitely read about this. It gives you a clear idea of what asynchronous solutions could add to your applications.
I don't think you are going to get more stability or faster performance. If it really is a "real-time" system, then it should be synchronous. If you can tolerate "near real-time" and there are long running or expensive compute operations, then you could consider an asynchronous approach. I would not add the complexity if not needed though.
If it's real time, then you absolutely want your communications to be backed by a queue so that you can prove temporal logic on that queue. This is what NIO/IO completion ports/async gives you. If you are using synchronous programming, you are wasting CPU time while data is copied from RAM to the network card.
Furthermore, a synchronous server effectively dedicates a thread to each connection; with async you can keep a single thread and still serve thousands of requests.
Say, for example, that a client wanted to perform a DoS attack. He would connect and send one byte of data. A synchronous server would now be stuck waiting on that connection until it times out, which could be quite a long time, and unable to receive further commands on that thread. With async, you would ACK the SYN packet back, but your code would not be left waiting for the full transmission.
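A minimal sketch of that idea, assuming a modern .NET runtime where the Task-based Socket methods are available; the port and buffer size are illustrative. A slow (or malicious one-byte) client only delays its own ReceiveAsync, while the accept loop and every other client keep running:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(new IPEndPoint(IPAddress.Any, 5000));
listener.Listen(100);

while (true)
{
    Socket client = await listener.AcceptAsync();
    _ = HandleClientAsync(client); // fire-and-forget: one handler per client
}

static async Task HandleClientAsync(Socket client)
{
    var buffer = new byte[4096];
    using (client)
    {
        while (true)
        {
            int n = await client.ReceiveAsync(new ArraySegment<byte>(buffer), SocketFlags.None);
            if (n == 0) return; // client closed the connection
            // process buffer[0..n) and send the response here ...
        }
    }
}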

web calls never timing out

I have a number of applications using various web technologies such as SOAP, WCF services, or just a simple XmlReader. However, they all seem to suffer from the same problem: missing the timeout and hanging indefinitely if the internet connection runs into problems at the wrong time.
I have set the timeout in all scenarios to something small, e.g. for wcf
closeTimeout="00:00:15" openTimeout="00:00:15"
receiveTimeout="00:00:15" sendTimeout="00:00:15"
or for soap
_Session.Timeout = (int)TIMEOUT.TotalMilliseconds;
These timeouts do generally get hit; however, there appears to be some special case where, if the internet drops out at just the wrong time, the call will hang and never time out (using synchronous calls).
I was considering starting up a timer every time I make a call, and using the appropriate .Abort() function if the timer expires to cancel the call. However, I was wondering if there was a simpler way to fix the issue and ensure the timeout gets hit.
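A hypothetical sketch of that watchdog idea for a WCF proxy; GetData and the 15-second limit are illustrative. Abort() hard-kills the channel, so the blocked synchronous call unwinds with a CommunicationObjectAbortedException instead of hanging forever:

var call = Task.Run(() => proxy.GetData());
if (!call.Wait(TimeSpan.FromSeconds(15)))
{
    ((ICommunicationObject)proxy).Abort(); // the abandoned Task faults rather than hangs
    throw new TimeoutException("Service call exceeded the hard timeout.");
}
var result = call.Result;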
Does anyone know why this occurs, and if so what a clean/simple/good way is to ensure the calls always time out?
I can guess at why it occurs, but without giving a solution :(
I suspect it's getting caught up on DNS resolution. I've seen various situations where that "doesn't count" - e.g. where it ends up happening on the initiating thread of an asynchronous call, or where it's definitely not been included in timeouts.
If you're able to reproduce this by pulling out your network cable, I'd suggest using Wireshark to confirm my guess - that would at least suggest further avenues for investigation. Maybe there's a DNS timeout somewhere in the .NET stack which is normally infinite but which can be tweaked, for example.

need advice for type of TCP server to cater for this type of application

The requirements for the TCP server:
- receive from each client and send the result back to the same client (that is all the server does)
- cater for 100 clients
- speed is an important factor, i.e. even at 100 client connections it should not be laggy
So far I have been using C#'s async methods, but I find that it always becomes laggy at around 20 connections. By laggy I mean taking almost 15-20 seconds to get the result; at around 5-10 connections, the result comes back almost immediately.
When the TCP server gets a message, it actually calls into a DLL which does some processing and returns a result. I'm not exactly sure of the workflow behind it, but at small scale you do not see any problem, so I thought the problem might be with my TCP server.
Right now I am thinking of using a synchronous approach: a while loop blocking on the Accept method, spawning a new thread for each client after it accepts. But at 100 connections, that is definitely overkill.
I chanced upon IOCP (I/O completion ports); I'm not exactly sure about it, but it seems to be something like a connection pool, since the way it handles TCP is quite like the normal way.
For these TCP methods I am also not sure whether it is a better option to open and close the connection each time a message needs to be passed. On average, messages are passed from each client at around 5-10 minute intervals.
Another alternative might be to use the web (I'm looking at a generic handler) to form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I especially need advice from those who have done TCP at large scale. I do not have 100 PCs to test with, so this is quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but will consider porting to C++ for speed.
You must be doing it wrong. I personally wrote C#-based servers that could handle 1000+ connections, sending more than one message per second, with <10 ms response times, on commodity hardware.
If you have such high response times, it must be your server process that is causing blocking. Perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread-pool exhaustion. Unfortunately, there are plenty of ways to screw this up and only a few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, and going over the SocketAsyncEventArgs example, which covers the most performant way of writing socket apps in .NET since the advent of the Socket Performance Enhancements in Version 3.5, and so on.
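A minimal sketch of the SocketAsyncEventArgs receive pattern referenced above; the buffer size and error handling are illustrative. One args object and one buffer are reused per connection, which is what keeps it cheap:

void StartReceive(Socket socket)
{
    var args = new SocketAsyncEventArgs { UserToken = socket };
    args.SetBuffer(new byte[4096], 0, 4096);
    args.Completed += OnReceiveCompleted;
    if (!socket.ReceiveAsync(args)) // false: the receive completed synchronously
        OnReceiveCompleted(socket, args);
}

void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
{
    var socket = (Socket)args.UserToken; // the connection this args object serves
    do
    {
        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
        {
            socket.Close(); // error or client disconnect
            return;
        }
        // process args.Buffer[args.Offset .. args.Offset + args.BytesTransferred) here
    }
    while (!socket.ReceiveAsync(args)); // loop while completions are synchronous
}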
If you find yourself lost in the task ahead (as it seems you happen to be), I would urge you to embrace an established communication framework, perhaps WCF with one of the net* bindings (e.g. NetTcpBinding), and use WCF's declarative service-model programming. That way you piggyback on WCF's performance work. It may not be enough for everyone, but it will get you much further than you are right now with regard to performance.
I don't see why C# should be any worse than C++ in this situation; chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming the workload for each thread is more I/O-bound than CPU-intensive. Whether you spawn a thread per connection or use a thread pool to manage a number of threads is another matter, and something to determine through experimentation, while also considering whether 100 clients really is your maximum!
