Is it safe to access a NetMQ socket from multiple threads, as long as they are not using it simultaneously?
For example, is the following scenario OK:
Thread A uses a socket.
Thread A ends.
Thread B uses the same socket.
If not, must the sole operating thread be the very same one that created the socket?
Technically you can. However, how can you guarantee that it is actually not used concurrently? I suggest using a lock if you want to use the socket from multiple threads. Also take a look at NetMQQueue; it is new and not yet documented, and it is thread-safe for enqueueing only. It might help you synchronize between threads that use NetMQ sockets, as you can poll on it with the Poller.
https://github.com/zeromq/netmq/blob/master/src/NetMQ.Tests/NetMQQueueTests.cs
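A minimal sketch of the pattern the answer describes, assuming the NetMQ 4.x API (NetMQQueue&lt;T&gt;, NetMQPoller, PushSocket); the endpoint tcp://*:5555 is arbitrary. Any thread may enqueue, but only the poller thread ever touches the socket:

```csharp
using NetMQ;
using NetMQ.Sockets;

var queue = new NetMQQueue<string>();
using var pushSocket = new PushSocket("@tcp://*:5555"); // '@' = bind
using var poller = new NetMQPoller { queue };           // poll the queue

queue.ReceiveReady += (sender, e) =>
{
    // This handler runs only on the poller thread, so the socket
    // is never accessed concurrently.
    pushSocket.SendFrame(e.Queue.Dequeue());
};

poller.RunAsync();

// Any thread can enqueue safely; enqueueing is the thread-safe side.
queue.Enqueue("hello");
```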
Related
I'm trying to create a socket server that can handle a relatively large number of clients. There are different approaches used for such a task: the first is to use a separate thread for each incoming connection, and the second is to use the async/await pattern.
The first approach is bad, as there will be one thread per connection; with a large number of clients, the system's resources will be wasted on context switching.
The second approach looks good at first sight, as we can have our own thread pool with a limited number of worker threads. A dispatcher receives incoming connections, adds them to a queue, and calls async socket read methods, which in turn receive data from the socket and add this data (or errors) to a queue for further processing (error handling, client responses, DB-related work).
There is not much info on the internal implementation of async/await that I could find, but as I understand it, in a non-UI application all continuations are scheduled through TaskScheduler.Current, which uses the runtime's thread pool, so its resources are limited. A greater number of incoming connections will either leave no free threads in the runtime's thread pool, or the number of threads will be so large that the system stops responding.
In this respect async/await leads to the same problem as the 1-client/1-thread approach, with the small advantage that the runtime thread pool's threads may not occupy as much address space as a default System.Threading.Thread (I believe 1 MB of stack + ~1/2 MB of control data).
Is there any way I can make one thread wait for a kernel notification on, say, 10 sockets, so the application only uses my explicitly sized thread pool? (I mean that if further data arrives on one of the 10 sockets, one thread wakes up and handles it.)
In this matter async/await will result in same problem as with 1-client/1-thread concern
When a thread reaches code that runs asynchronously, control is returned to the caller. That means the thread goes back to the thread pool and can handle another request, so this approach is superior to 1-client/1-thread because the thread isn't blocked.
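A minimal console-app sketch of that point: the calling thread is released at the await, and the continuation resumes on a thread-pool thread (there is no SynchronizationContext in a console app), so a thousand in-flight "clients" need only a handful of threads. Task.Delay here stands in for an asynchronous socket read:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

static async Task HandleClientAsync(int id)
{
    // Stands in for an asynchronous socket read; no thread is
    // consumed while the delay is pending.
    await Task.Delay(100);
    // This line may resume on a different pool thread than the one
    // that started the method.
}

// 1000 concurrent "connections", far fewer than 1000 threads.
await Task.WhenAll(Enumerable.Range(0, 1000).Select(HandleClientAsync));
```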
There are some interesting blog posts about async/await worth reading.
My question is more related to how WebSockets (on the client) work/behave with threads in .NET, and what I am looking for as an answer is more of a low-level explanation of how the OS interacts with the .NET thread when data arrives from the server on its socket.
Suppose I have a client that opens 1000 sockets to a server asynchronously. It then sits there waiting for updates/events to come through. These events can arrive at different times and frequencies.
Assuming that every time data comes in via a socket a thread needs to pick it up and do some work on it, am I correct to assume that IF all 1000 sockets receive data at the same time I will then have 1000 threads (1 thread per socket) coming from the thread pool to pick up the data from the sockets? What if I wanted to have 3000 sockets open?
Any clarification on this is very much appreciated.
Assuming you are using the .NET Framework WebSocket library, the received data will be returned on a thread from the ThreadPool (probably the I/O completion thread pool).
Thread Pool
When the thread pool is used, you don't know how many different threads will be active at the same time. The data is put on a queue, and the thread pool works through it as fast as it can. You can control the min/max number of threads it will use, but the way the pool creates/destroys its threads is unspecified.
The above holds true for most asynchronous operations in .NET.
Exceptions
If you await the asynchronous receive operation in a synchronization context (for instance, on a UI thread), the operation will resume in the same context (the UI thread), unless you suppress the sync context. In that case only one thread is used, and the receive operations are queued and processed in sequence.
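A short sketch of suppressing the synchronization context with ConfigureAwait(false); the receive loop below is a hypothetical example, not a complete client. With ConfigureAwait(false), the code after the await runs on a pool thread even when the method was called from a UI thread:

```csharp
using System;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;

static async Task ReceiveLoopAsync(ClientWebSocket ws)
{
    var buffer = new byte[4096];
    while (ws.State == WebSocketState.Open)
    {
        var result = await ws
            .ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None)
            .ConfigureAwait(false); // continuation: thread pool, not the UI thread
        if (result.MessageType == WebSocketMessageType.Close)
            break;
        // Process buffer[0..result.Count] here; marshal back to the UI
        // thread yourself if the UI needs the data.
    }
}
```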
The .NET Socket async API manages threads automatically when using the BeginXXX methods. For example, if I have 100 active connections sending and receiving TCP messages, around 3 threads will be used. And that makes me curious.
How does the API do this thread management?
How is the whole flow of connections divided among the threads to be processed?
How does the manager prioritize which connections/reads/writes must be processed first?
My questions may not make sense because I don't know how it works or what to ask specifically, so sorry. Basically, I need to know how this whole process works at a low level.
The .Net Socket async API manages threads automatically when using the
BeginXXX methods.
This is not quite correct. The APM Begin/End-style socket APIs do not manage threads at all. Rather, the completion AsyncCallback is called on a random thread: the thread on which the asynchronous socket I/O operation happened to complete. Most likely, this will be an IOCP pool thread (an I/O completion port thread), different from the thread on which you called the BeginXXX method. For more details, check Stephen Cleary's "There Is No Thread".
How does the manager prioritize which connections/reads/writes must be processed first?
The case when there are no IOCP threads available to handle the completion of the async I/O operation is called ThreadPool starvation. It happens when all pool threads are busy executing some code (e.g., processing the received socket messages), or are blocked in a blocking call like WaitHandle.WaitOne(). In this case, the I/O completion routine is queued to the ThreadPool to be executed when a thread becomes available, on a FIFO basis.
You have the option to increase the size of the ThreadPool with the SetMinThreads/SetMaxThreads APIs, but doing so isn't always a good idea. The number of actually concurrent threads is limited by the number of CPUs/cores anyway, so you'd rather finish any CPU-bound processing work as soon as possible and release the thread back to the pool.
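A small sketch of those APIs; the value 64 is an arbitrary illustration, not a recommendation. Raising the minimum avoids the pool's slow thread-injection ramp-up under bursty load, but oversizing it wastes memory and scheduler time:

```csharp
using System;
using System.Threading;

// Inspect the current minimums for worker and IOCP threads.
ThreadPool.GetMinThreads(out int minWorker, out int minIocp);
Console.WriteLine($"min workers={minWorker}, min IOCP={minIocp}");

// Raise them (never lowering below the current values).
ThreadPool.SetMinThreads(Math.Max(minWorker, 64), Math.Max(minIocp, 64));
```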
I would like to use the C# asynchronous IO model for my socket. I have multiple threads that need to send over the socket. What is the best way of handling this? A pool of N sockets, with access controlled by a lock? Or is an async send thread-safe for multiple threads accessing a single socket?
Thanks!
Jacko
The async methods already use pool threads to send the data, so adding your own sending threads on top will probably add unnecessary overhead to your application. If you already have multiple threads, you can create an IDisposable type of object to represent access to the socket, and a manager that controls check-out and check-in of the socket. The socket would check itself back in when the IDisposable Dispose method is called. This way you can also control which methods your threads can perform on the socket.
If the socket is already checked out by another thread, the manager simply blocks until it's available.
using (SharedSocket socket = SocketManager.GetSocket())
{
    // do things with the socket
}
System.Threading.Semaphore comes in handy for this kind of synchronization and for avoiding race conditions.
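A hypothetical implementation of the SocketManager/SharedSocket pattern described above (the source names the types but not their bodies); this sketch uses SemaphoreSlim to serialize access and assumes the underlying Socket is created elsewhere at startup:

```csharp
using System;
using System.Net.Sockets;
using System.Threading;

public static class SocketManager
{
    private static readonly SemaphoreSlim Gate = new(1, 1);
    private static Socket _socket; // assumed to be assigned at startup

    public static SharedSocket GetSocket()
    {
        Gate.Wait(); // block until no other thread holds the socket
        return new SharedSocket(_socket);
    }

    internal static void Return() => Gate.Release();
}

public sealed class SharedSocket : IDisposable
{
    public Socket Socket { get; }
    internal SharedSocket(Socket socket) => Socket = socket;

    // Disposing "checks the socket back in" to the manager.
    public void Dispose() => SocketManager.Return();
}
```

The using-block in the answer then works as written: leaving the block disposes the SharedSocket, which releases the semaphore for the next waiting thread.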
I'm just trying to do some socket programming, using non-blocking sockets in C#.
The various samples I've found, such as this one, seem to use a while(true) loop, but this approach pegs the CPU at 100%.
Is there a way to use non-blocking sockets with an event-driven programming style?
Thanks
See the MSDN example here. The example shows how to receive data asynchronously. You can also use the Socket BeginSend/EndSend methods to send data asynchronously.
You should note that the callback delegate executes in the context of a ThreadPool thread. This is important if the data received inside the callback needs to be shared with another thread, e.g., the main UI thread that displays the data in a Windows Forms application. If so, you will need to synchronize access to the data, using the lock keyword for example.
As you've noticed, with non-blocking sockets and a while loop, the processor is pegged at 100%. The asynchronous model only invokes the callback delegate when there is data to send or receive.
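A sketch of the classic Socket.BeginReceive/AsyncCallback pattern from that MSDN example; the Receiver class here is an illustrative name, not from the original. The callback fires on a thread-pool thread only when data arrives, so no thread spins while waiting:

```csharp
using System;
using System.Net.Sockets;
using System.Text;

class Receiver
{
    private readonly byte[] _buffer = new byte[4096];
    private readonly Socket _socket;

    public Receiver(Socket connectedSocket)
    {
        _socket = connectedSocket;
        // Arm the first asynchronous receive; returns immediately.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        int bytes = _socket.EndReceive(ar);
        if (bytes == 0) return; // peer closed the connection

        // NOTE: this runs on a thread-pool thread; lock any shared state.
        Console.WriteLine(Encoding.UTF8.GetString(_buffer, 0, bytes));

        // Re-arm for the next chunk of data.
        _socket.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None,
                             OnReceive, null);
    }
}
```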
Talking generally about blocking/non-blocking IO:
The key thing is that in real life your program does other things whilst not doing IO. The examples are all contrived in this way.
In blocking IO, your thread 'blocks' while waiting for IO. The OS goes and does other things, e.g. allows other threads to run. So your application can do many things (conceptually) in parallel by using many threads.
In non-blocking IO, your thread queries to see if IO is possible, and otherwise goes and does something else. So you do many things in parallel by explicitly, at the application level, swapping between them.
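This also answers the earlier question about one thread waiting on several sockets: Socket.Select is a select()-style readiness check that blocks one thread until any of the listed sockets is ready, instead of busy-polling each one. A minimal sketch (the helper name is illustrative):

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

static List<Socket> WaitForReadable(List<Socket> sockets)
{
    var readable = new List<Socket>(sockets);
    // Blocks for up to 1 second (the timeout is in microseconds).
    // On return, 'readable' holds only the sockets with data (or a
    // pending accept) ready, so reads on them won't block.
    Socket.Select(readable, null, null, 1_000_000);
    return readable;
}
```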
To avoid pegging the CPU in a busy while loop, call Thread.Sleep(100) (or less) when no data has been received. That gives other threads and processes a chance to do their work.
Socket.BeginReceive and AsyncCallback