C# non-blocking socket without while(true) loop

I'm trying to do some socket programming using non-blocking sockets in C#.
The various samples I've found, such as this one, seem to use a while(true) loop, but that approach drives the CPU to 100%.
Is there a way to use non-blocking sockets with an event-driven programming style?
Thanks

See the MSDN example here. The example shows how to receive data asynchronously. You can also use the Socket BeginSend/EndSend methods to send data asynchronously.
You should note that the callback delegate executes in the context of a ThreadPool thread. This matters if the data received inside the callback needs to be shared with another thread, e.g., the main UI thread that displays the data in a Windows Forms application. If so, you will need to synchronize access to the data, for example with the lock keyword.
As you've noticed, with non-blocking sockets and a while loop the processor is pegged at 100%. The asynchronous model only invokes the callback delegate when there is data to send or receive.
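For example, here is a minimal sketch along the lines of the MSDN pattern (the field names and buffer size are just for illustration); note the lock, because the callback runs on a ThreadPool thread:

// Requires: using System; using System.Net.Sockets; using System.Text;
private readonly object _sync = new object();
private readonly byte[] _buffer = new byte[4096];
private readonly StringBuilder _received = new StringBuilder();

private void BeginReceiving(Socket socket)
{
    socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, ReceiveCallback, socket);
}

private void ReceiveCallback(IAsyncResult ar)
{
    Socket socket = (Socket)ar.AsyncState;
    int bytesRead = socket.EndReceive(ar);
    if (bytesRead > 0)
    {
        lock (_sync) // the callback runs on a ThreadPool thread, so guard shared state
        {
            _received.Append(Encoding.UTF8.GetString(_buffer, 0, bytesRead));
        }
        // post the next receive
        socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, ReceiveCallback, socket);
    }
}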

Talking generally about blocking/non-blocking IO:
The key thing is that in real life your program does other things while it is not doing IO. The examples are all contrived in this way.
With blocking IO, your thread 'blocks' while waiting for IO. The OS goes and does other things, e.g. it lets other threads run. So your application can do many things (conceptually) in parallel by using many threads.
With non-blocking IO, your thread queries to see whether IO is possible, and otherwise goes and does something else. So you do many things in parallel by explicitly - at the application level - swapping between them.
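A rough sketch of that non-blocking style (socket, buffer, running and DoOtherApplicationWork are placeholders, not from any of the linked examples):

socket.Blocking = false;
byte[] buffer = new byte[4096];
while (running)
{
    // Poll with a zero timeout: is there data to read right now?
    if (socket.Poll(0, SelectMode.SelectRead) && socket.Available > 0)
    {
        int n = socket.Receive(buffer); // won't block, data is already available
        // ... process buffer[0..n) ...
    }
    else
    {
        DoOtherApplicationWork(); // the thread does something useful instead of spinning
    }
}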

To avoid pegging the CPU in a heavy while loop, call Thread.Sleep(100) (or less) when no data has been received. That lets other threads get scheduled and do their work.
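For example (a sketch only; socket and buffer are placeholders):

if (socket.Available == 0)
{
    Thread.Sleep(100); // or less; yields the CPU instead of spinning
}
else
{
    int n = socket.Receive(buffer);
    // ... process the data ...
}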

Socket.BeginReceive and AsyncCallback

Related

C# Asynchronous Socket Read Without Using Runtime's Threadpool

I'm trying to create a socket server which can handle a relatively large number of clients. There are different approaches to such a task: the first is to use a separate thread for each incoming connection, and the second is to use the async/await pattern.
The first approach is bad, as only a relatively small number of threads can be supported before the system's resources are lost to context switching.
The second approach looks good at first sight: we can have our own thread pool with a limited number of worker threads, so a dispatcher receives incoming connections, adds them to some queue and calls async socket read methods, which in turn receive data from the socket and add the data/errors to a queue for further processing (error handling, client responses, DB-related work).
There is not much info on the internal implementation of async/await that I could find, but as I understand it, in a non-UI application all continuations go through TaskScheduler.Current, which uses the runtime's thread pool, so its resources are limited. A larger number of incoming connections will result in no free threads in the runtime's thread pool, or the thread count will grow so large that the system stops responding.
In that respect async/await results in the same problem as the 1-client/1-thread approach, with the small advantage that the runtime thread pool's threads may not occupy as much address space as a default System.Threading.Thread (I believe 1 MB stack size + ~1/2 MB of control data).
Is there any way I can make one thread wait on some kernel interrupt for, say, 10 sockets, so the application only uses my explicitly sized thread pool? (I mean that when there is further data on one of the 10 sockets, one thread wakes up and handles it.)
In that respect async/await results in the same problem as the 1-client/1-thread approach
When the thread reaches code that runs asynchronously, control is returned to the caller; the thread goes back to the thread pool and can handle another request. So it is superior to 1-client/1-thread, because the thread isn't blocked.
There is an interesting blog post about async/await:
1
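For illustration, a minimal await-based sketch (HandleMessageAsync is a hypothetical placeholder): while the ReadAsync is pending no thread is blocked, and a pool thread only picks up the continuation when data actually arrives.

// Requires: using System.Net.Sockets; using System.Threading.Tasks;
private async Task HandleClientAsync(TcpClient client)
{
    NetworkStream stream = client.GetStream();
    byte[] buffer = new byte[4096];
    int n;
    while ((n = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        await HandleMessageAsync(buffer, n); // hypothetical: queue/DB work for the message
    }
}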

Asynchronous methods(!) clarification in .net?

I've been reading a lot about this topic lately, but I still need to clarify something.
The whole idea of asynchronous methods is thread economy:
allow many tasks to run on a few threads. This is done by letting the hardware driver do the job while the thread is released back to the thread pool so it can serve other jobs.
Please note:
I'm not talking about asynchronous delegates, which tie up another thread (they execute a task in parallel with the caller).
However, I've seen 2 main types of asynchronous method examples:
Code samples (from books) that only use existing asynchronous I/O operations such as BeginXXX/EndXXX, e.g. Stream.BeginRead.
(I couldn't find any asynchronous method samples that don't use existing .NET I/O operations like Stream.BeginRead.)
Code samples like this (and this) which don't actually invoke an asynchronous operation (although the author thinks he does - he actually causes a thread to block!).
Question:
Are asynchronous methods only used with existing .NET I/O methods like BeginXXX, EndXXX?
I mean, if I want to create my own asynchronous methods like BeginMyDelay(int ms,...){..}, EndMyDelay(...), I couldn't do it without tying a blocked thread to it... correct?
Thank you very much.
P.S. Please note this question is tagged as .NET 4 and not .NET 4.5.
You're talking about APM.
APM makes wide use of an OS concept known as IO Completion Ports. That's why IO operations are the best candidates for APM.
You could write your own APM methods.
But, in fact, these methods will either sit on top of existing APM methods, or they will be IO-bound and use some native OS mechanism (like FileStream, which uses overlapped file IO).
For compute-bound asynchronous operations, APM will only increase complexity, IMO.
A bit more clarification.
Working with hardware is asynchronous by nature. Hardware needs time to perform a request - the network card must send or receive data, the HDD must read/write, etc. If the IO is synchronous, the thread that issued the IO request waits for the response. This is where APM helps - you shouldn't wait, just execute something else, and when the IO is complete, I'll call you, says APM.
The main point - the operation is performed outside the CPU.
When you're writing a compute-bound operation, which uses the CPU for its execution without any IO, there's nothing to wait for. So APM couldn't help - if you need the CPU, you need a thread - you need a thread pool.
I think, but I'm not sure, that you can create your own asynchronous methods. For example, by creating a new thread and waiting for it to finish some work (a DB query, ...).
In terms of overall system performance it is probably not useful, as you say, because you just create another thread. But if you work on IIS, for example, the original request thread can be used for other requests while you are waiting for the 'background' operation.
I think IIS has a fixed number of threads (a thread pool), so in that case it can be useful.
I mean, if I want to create my own asynchronous methods like BeginMyDelay(int ms,...){..}, EndMyDelay(...), I couldn't do it without tying a blocked thread to it... correct?
While I've not dug into the implementation of async, I can't see any reason why one couldn't do this.
The simplest way would be to use existing libraries that help [e.g. timers] or some sort of event system IIRC.
However even if you don't want to use any library helpers then you're stuck with a problem... the 'blocked thread'.
Sure the code does look something like this:
while (true)
{
    foreach (var item in WaitingTasks)
        if (item.Ready())
            /* fire item, and remove it from tasks */;

    /* Some blocking action */
}
Thing is - 'Some blocking action' doesn't have to be 'blocking'. You could yield/sleep the thread, or use it to process some data. For example, the Unity game engine does a similar thing with coroutines - the same thread that processes all the code also checks whether various coroutines (that have been delayed due to time) need to be updated. Replace /*Some blocking action*/ with ProcessGameLoop().
Hope that helps; feel free to ask questions/post corrections etc.
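To make the 'no blocked thread' point concrete, here is a hedged sketch of BeginMyDelay/EndMyDelay (the hypothetical names from the question) built on System.Threading.Timer. Nothing blocks while the delay is pending; the callback simply fires on a pool thread when the timer elapses.

using System;
using System.Threading;

// A one-shot, timer-based APM pair. No thread is tied up while the delay is pending.
class DelayAsyncResult : IAsyncResult
{
    private readonly ManualResetEvent _done = new ManualResetEvent(false);
    internal Timer Timer; // kept alive until the operation completes

    public object AsyncState { get; private set; }
    public WaitHandle AsyncWaitHandle { get { return _done; } }
    public bool CompletedSynchronously { get { return false; } }
    public bool IsCompleted { get; private set; }

    internal DelayAsyncResult(object state) { AsyncState = state; }

    internal void Complete()
    {
        IsCompleted = true;
        _done.Set();
        if (Timer != null) Timer.Dispose();
    }
}

static class MyDelay
{
    public static IAsyncResult BeginMyDelay(int ms, AsyncCallback callback, object state)
    {
        var result = new DelayAsyncResult(state);
        // The timer callback runs on a ThreadPool thread only when the delay elapses.
        result.Timer = new Timer(_ =>
        {
            result.Complete();
            if (callback != null) callback(result);
        }, null, ms, Timeout.Infinite);
        return result;
    }

    public static void EndMyDelay(IAsyncResult asyncResult)
    {
        // Blocks only if the caller asks for the result before the delay has elapsed.
        asyncResult.AsyncWaitHandle.WaitOne();
    }
}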

BeginXXX and threadpool

I'm writing a TCP server in C# and I'm using the BeginXXX and EndXXX methods for async communication. If I understand correctly, when I use BeginXXX the request will be handled on the thread pool (when the request is ready) while the main thread keeps accepting new connections.
The question is what happens if I perform a blocking action in one of these AsyncCallbacks? Will it be better to run a blocking operation as a task? Tasks use the threadpool as well don't they?
The use case is the following:
The main thread sets up a listening socket which accepts connections using BeginAccept and starts listening on those connections using BeginReceive. When a full message has been received, a function is called depending on what that message was; in 80% of all cases those functions will start a database query/insertion/update.
I suggest you use SocketAsyncEventArgs, which was introduced in .NET 3.5.
Here's some reading material you can start with
click me
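A minimal sketch of that pattern, assuming socket is an already-connected Socket (buffer size and handling are illustrative):

var args = new SocketAsyncEventArgs();
args.SetBuffer(new byte[4096], 0, 4096);
args.Completed += (sender, e) =>
{
    if (e.SocketError == SocketError.Success && e.BytesTransferred > 0)
    {
        // process e.Buffer[0 .. e.BytesTransferred), then post the next ReceiveAsync
    }
};

// ReceiveAsync returns false if the operation completed synchronously;
// in that case the Completed event is NOT raised, so handle the result inline.
if (!socket.ReceiveAsync(args))
{
    // completed synchronously: process args here
}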
The question is what happens if I perform a blocking action in one of these AsyncCallbacks? Will it be better to run a blocking operation as a task?
If you do that too often or for too long then the ThreadPool will grow, possibly to the point where it crashes your app.
So try to avoid blocking as much as possible. But a little bit of it should be acceptable. Keep in mind that the ThreadPool will grow with 1 new thread per 500 ms. So make sure and verify that it will level out on some reasonable number of threads.
A blunt instrument could be to cap the MaxThreads of the pool.
Tasks use the threadpool as well don't they?
Yes, so your options are limited.
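A sketch of that blunt instrument (the worker-thread cap of 64 is an arbitrary number for illustration; measure before choosing a real limit):

int workerThreads, completionPortThreads;
ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
// Cap worker threads so blocking callbacks can't grow the pool without bound.
ThreadPool.SetMaxThreads(64, completionPortThreads);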

.NET Async IO associated with calling Sleep on response handler

I have a piece of code (on a server) that uses an async method to receive data on sockets, like this:
asyncRes = connectionSocket.BeginReceive(receiveBuffer, 0, RECEIVING_BUFFER_SIZE,
    SocketFlags.None, out error, new AsyncCallback(ReceiveDataDone), null);
In the handler (ReceiveDataDone) for the socket there are cases where Thread.Sleep(X) is used in order to wait for other things (a questionable implementation indeed). I know this is a questionable design, but I wonder whether that kind of code could explain the explosion of threads created in my application, because of the other pending sockets on the server that have their ReceiveDataDone called. (When many connections are handled by the server, the number of threads created explodes.) I wonder how the BeginReceive method on .NET sockets works, as that could explain the huge number of threads I see.
You absolutely should not perform any kind of blocking action in APM callbacks. These are run in the ThreadPool. The ThreadPool is designed for the invocation of short-lived tasks. If you block (or take a long time to execute) you are tying up (a finite number of) threads and causing ThreadPool starvation. Because the ThreadPool does not spin up extra threads easily (in fact, it's quite slow to start extra threads), you're bottlenecking on the timing that controls how quickly the ThreadPool is allowed to spin up new threads.
Despite answering a different question, this answer I provided a while back explains the same issue:
https://stackoverflow.com/a/1733226/14357
You should not use Thread.Sleep for waiting in ThreadPool threads; it blocks the thread, and the thread will not accept any further work items for the time it is blocked.
You can use a TimerCallback for such a use case. It lets the ThreadPool schedule other work on the waiting thread in the meantime.
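For example, a hedged sketch (RetryPendingWork is a hypothetical stand-in for whatever the handler was waiting on; the timer is kept in a field so it isn't collected before it fires):

private System.Threading.Timer _retryTimer;

private void ReceiveDataDone(IAsyncResult ar)
{
    // ... EndReceive and process the data ...

    // Instead of blocking the pool thread with Thread.Sleep(100):
    _retryTimer = new System.Threading.Timer(
        _ => RetryPendingWork(), null, 100, System.Threading.Timeout.Infinite);
}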

Alternative to a bunch of while (true) loops when awaiting messages or client connections? (TcpClient C#)

I thought C# was an event-driven programming language.
This type of thing seems rather messy and inefficient to me:
tcpListener.Start();
while (true)
{
    TcpClient client = this.tcpListener.AcceptTcpClient();
    Thread clientThread = new Thread(new ParameterizedThreadStart(HandleClientCommunication));
    clientThread.Start(client);
}
I also have to do the same kind of thing when waiting for new messages to arrive. Obviously these functions are enclosed within a thread.
Is there not a better way to do this that doesn't involve infinite loops and wasted CPU cycles? Events? Notifications? Something? If not, is it bad practice to do a Thread.Sleep so it isn't processing as nearly as often?
There is absolutely nothing wrong with the method you posted. There are also no wasted CPU cycles like you mentioned. TcpListener.AcceptTcpClient() blocks the thread until a client connects, which means it does not take up any CPU cycles. So the only time the loop actually loops is when a client connects.
Of course you may want to use something other than while(true) if you want a way to exit the loop and stop listening for connections, but that's another topic. In short, this is a good way to accept connections and I don't see any purpose of having a Thread.Sleep anywhere in here.
There are actually three ways to handle IO operations for sockets. The first one is to use the blocking functions, just as you do. They are usually used to handle a client socket, since the client expects an answer directly most of the time (and can therefore use blocking reads).
For any other socket handling I would recommend using one of the two asynchronous (non-blocking) models.
The first model is the easiest one to use. It's recognized by the Begin/End method names and the IAsyncResult return value from the Begin method. You pass a callback (function pointer) to the Begin method which will be invoked when something has happened. As an example, take a look at BeginReceive.
The second asynchronous model is more like the Windows IO model (IO Completion Ports). It's also the newest model in .NET and should give you the best performance. A SocketAsyncEventArgs object is used to control the behavior (like which method to invoke when an operation completes). You also need to be aware that an operation can complete directly, in which case the callback method will not be invoked. Read more about ReceiveAsync.
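As a concrete sketch, here is the first (Begin/End) model applied to the accept loop from the question (HandleClientCommunication is the handler from the question; error handling omitted):

private void StartAccepting()
{
    tcpListener.Start();
    tcpListener.BeginAcceptTcpClient(OnClientAccepted, null);
}

private void OnClientAccepted(IAsyncResult ar)
{
    TcpClient client = tcpListener.EndAcceptTcpClient(ar);
    tcpListener.BeginAcceptTcpClient(OnClientAccepted, null); // immediately post the next accept
    HandleClientCommunication(client);                        // runs on a ThreadPool thread
}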
