I'm writing a server in C#. The asynchronous example on msdn.microsoft.com suggests the following.
BeginAccept to listen for a client (and start a new thread when a client calls).
BeginReceive to receive the data from the client (and start a new thread to do it on).
BeginSend to send reply data to the client (and start yet another thread to do it on).
At this point there seem to be four separate threads, when from my (possibly naive) point of view only two are really needed: one for the server to keep listening on, and one for the conversation with the client. Why does my conversation with the client need three threads, since I have to wait for a reply before I send and I won't be doing anything else while waiting to receive data from the client?
Cheers
BeginAccept does not start a new thread. It is attaching a handler to an OS level hook. No thread is going to be doing the meat of the work for this operation. The same is true for BeginReceive and BeginSend. None of these are starting new threads.
When the events that they are adding handlers for actually fire, a thread pool thread is used to run your callback. The CPU-bound work done here should generally be quite low. What you'll see is a lot of thread pool threads requested, but very little work being done by them, so they are returned to the pool very quickly.
The thread pool is designed for this type of use. Rather than creating full threads for each event response (which would be expensive) you can create just 1-2 threads and continually re-use them to respond to all of these events in turn. The pool will only create as many threads as it needs to keep up with a sufficiently small backlog.
While your callbacks will be marshalled onto pool threads for these operations, you should not have to "start a thread" yourself.
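To make the "no threads are started" point concrete, here is a minimal sketch (illustrative names, loopback only) in which all three Begin* calls are chained through callbacks, and the only thread we ever block is the demo client's:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class AsyncEchoSketch
{
    public static string RoundTrip(string message)
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(1);

        // BeginAccept registers interest with the OS and returns immediately;
        // the callback later runs on an I/O completion (pool) thread.
        listener.BeginAccept(ar =>
        {
            Socket client = listener.EndAccept(ar);
            var buffer = new byte[1024];
            // BeginReceive likewise just registers a completion callback.
            client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar2 =>
            {
                int n = client.EndReceive(ar2);
                // Echo the data back; BeginSend also completes on a pool thread.
                client.BeginSend(buffer, 0, n, SocketFlags.None, ar3 =>
                {
                    client.EndSend(ar3);
                    client.Shutdown(SocketShutdown.Both);
                    client.Close();
                }, null);
            }, null);
        }, null);

        // A plain synchronous client on the calling thread, for demonstration.
        int port = ((IPEndPoint)listener.LocalEndPoint).Port;
        using (var c = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
        {
            c.Connect(new IPEndPoint(IPAddress.Loopback, port));
            c.Send(Encoding.UTF8.GetBytes(message));
            var reply = new byte[1024];
            int n = c.Receive(reply);
            return Encoding.UTF8.GetString(reply, 0, n);
        }
    }

    static void Main()
    {
        Console.WriteLine(RoundTrip("hello")); // prints "hello", echoed by the callbacks
    }
}
```

Note that the server side never creates a Thread object; the runtime dispatches each callback on a pool thread only when the corresponding I/O actually completes.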
BeginAccept is a non blocking method - .NET will return immediately from it but invoke your callback on the thread pool when it fulfils its purpose asynchronously.
The thread pool is managed by the runtime and optimised outside of your control.
My program uses a producer/consumer pattern, meaning that my producer adds tasks to a queue and my consumer executes those tasks in the background whenever there's something in the queue to execute.
My worker thread needs to use a serial port, and the standard way of using a serial port is to open it at the start of the program and keep it open for as long as it's needed (until shutdown). My program is an always-on web service, where usually objects have a lifetime scoped to the request. These two things contradict each other somewhat, so I need to make sure that when I get a request, the background thread is up and running and holding the serial port open instead of opening it for every request which might be more natural in most cases. So my program needs a high degree of self-sufficiency, it needs to detect errors in the worker thread and restart it if needed.
Is there a technique for guaranteeing that my worker thread stays up? I have considered wrapping the entire worker thread's code in a try/catch, and sending an event to the main thread in the finally block so that the worker can be restarted. Or I could continually send "ping" events to the main thread to let it know that the worker thread is still running, or even poll the worker thread from the main thread.
Is there a standard way of doing this? What's the most robust approach? Note that it's fine if the worker thread dies or becomes unable to complete its work for whatever reason - I will just restart the thread and keep trying, however ideally if that happens it should be able to put its task back in the queue.
Examples in C#/F#/dotnet (framework) are greatly appreciated.
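For what it's worth, the try/catch-and-restart idea sketched in the question might look something like the following. This is a minimal illustration under my own assumptions, not production code: all names are made up, the serial port is elided, and a 100 ms idle timeout stands in for a real shutdown signal.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// A supervisor loop owns the worker thread; when the worker dies mid-task,
// the failed task goes back on the queue and a fresh thread is started.
class SupervisedWorker
{
    readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();
    int _processed;

    public int Processed { get { return _processed; } }

    public void Enqueue(Action task) { _queue.Add(task); }

    public void RunUntilDrained()
    {
        while (true)
        {
            Action current = null;
            var worker = new Thread(() =>
            {
                // In the real program this is where the serial port would be
                // opened once and held for the life of the thread.
                try
                {
                    Action task;
                    while (_queue.TryTake(out task, 100))   // 100 ms idle => done
                    {
                        current = task;
                        task();                             // may throw
                        current = null;
                        Interlocked.Increment(ref _processed);
                    }
                }
                catch
                {
                    // Swallow so the process survives; the supervisor
                    // below notices the death via 'current'.
                }
            });
            worker.IsBackground = true;
            worker.Start();
            worker.Join();                                  // worker exited or died

            if (current != null) { _queue.Add(current); continue; }  // retry the task
            if (_queue.Count == 0) break;                   // drained cleanly
        }
    }
}

class Demo
{
    static void Main()
    {
        var w = new SupervisedWorker();
        int attempts = 0;
        w.Enqueue(() => { });
        w.Enqueue(() => { if (++attempts == 1) throw new Exception("flaky"); });
        w.RunUntilDrained();
        Console.WriteLine("processed " + w.Processed + ", attempts " + attempts);
    }
}
```

A real version would need a permanent-failure policy (a task that always throws will otherwise be retried forever), but the shape — worker guarded by try/catch, supervisor restarts it and re-queues the interrupted task — matches what the question describes.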
I'm trying to create a socket server which can handle a relatively large number of clients. There are different approaches for such a task: the first is to use a separate thread for each incoming connection, and the second is to use the async/await pattern.
The first approach is bad because there will be a relatively large number of threads, so the system's resources will be wasted on context switching.
The second approach looks good at first sight: we can have our own thread pool with a limited number of worker threads, so a dispatcher will receive incoming connections, add them to a queue, and call async socket read methods, which in turn will receive data from the socket and add this data (or errors) to a queue for further processing (error handling, client responses, DB-related work).
There is not much info on the internal implementation of async/await that I could find, but as I understand it, in a non-UI application all continuations are scheduled through TaskScheduler.Current, which uses the runtime's thread pool, so its resources are limited. A greater number of incoming connections will result in no free threads in the runtime's thread pool, or the number of threads will be so large that the system stops responding.
In that case async/await results in the same problem as the 1-client/1-thread approach, though with the small advantage that the runtime thread pool's threads may not occupy as much address space as a default System.Threading.Thread (I believe 1 MB stack size + ~1/2 MB of control data).
Is there any way I can make one thread wait for a kernel notification on, say, 10 sockets, so that the application only uses my explicitly sized thread pool? (I mean that if further data arrives on one of the 10 sockets, one thread wakes up and handles it.)
In this matter async/await will result in the same problem as with the 1-client/1-thread concern
When a thread reaches code that runs asynchronously, control is returned to the caller. That means the thread is returned to the thread pool and can handle another request, so this is superior to 1-client/1-thread: the thread isn't blocked while waiting for I/O.
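If you also want to bound how many connections are processed concurrently (the "explicitly sized pool" the question asks about), a SemaphoreSlim gate over the async handlers is a common sketch; waiting on it costs no thread. All names here are illustrative, and Task.Delay stands in for socket I/O:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class BoundedHandlers
{
    // At most 'limit' handlers run concurrently; the rest wait asynchronously.
    public static async Task<int> RunAsync(int clients, int limit)
    {
        var gate = new SemaphoreSlim(limit, limit);
        int concurrent = 0, peak = 0;
        var tasks = new Task[clients];
        for (int i = 0; i < clients; i++)
        {
            tasks[i] = Task.Run(async () =>
            {
                await gate.WaitAsync();          // no thread is blocked here
                try
                {
                    int now = Interlocked.Increment(ref concurrent);
                    InterlockedMax(ref peak, now);
                    await Task.Delay(50);        // stands in for socket I/O
                }
                finally
                {
                    Interlocked.Decrement(ref concurrent);
                    gate.Release();
                }
            });
        }
        await Task.WhenAll(tasks);
        return peak;                             // observed peak concurrency
    }

    // Lock-free max update for the peak counter.
    static void InterlockedMax(ref int target, int value)
    {
        int old;
        while (value > (old = Volatile.Read(ref target)))
            Interlocked.CompareExchange(ref target, value, old);
    }

    static void Main()
    {
        int peak = RunAsync(clients: 20, limit: 4).GetAwaiter().GetResult();
        Console.WriteLine("peak concurrency: " + peak); // never exceeds 4
    }
}
```

The semaphore limits concurrent handlers, not threads; during the awaits no thread is consumed at all, which is exactly what the 1-thread-per-client model cannot do.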
There are some interesting blog posts about async/await internals worth reading.
I am still learning C#, so please be easy on me. I am thinking about the application I am working on and I can't seem to figure out the best approach. This is not a forms application but rather a console one. I am listening on a UDP port, and I get UDP messages as fast as 10 times per second. I then look for a trigger in the UDP message. I am using an event handler that is raised each time I get a new UDP packet, which then calls methods to parse the packet and look for my trigger. So, I have these questions.
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems, because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
With regard to threading, I assume a thread like my thread that listens to the UDP data should be a permanent thread?
There are no permanent threads. However there should be a thread that is responsible for receiving. Once you start it, let it run until you no longer need to receive any messages.
Also on threading, when I get my trigger and decide to do something, in this case send a message out, I gather that I should use a thread pool each time I want to perform this task?
That depends on how often you would send out messages. If your situation is more like consumer/producer, then a separate thread for sending is a good idea. But if you send out a message only rarely, you can use the thread pool. I can't define what "rarely" means in this case; you should watch your app and decide.
On thread pools, I am reading that they are not very high priority; is that true? If the message I need to send out is critical, can I rely on thread pools?
You can; your message is more likely to be delayed by slow message processing or a slow network than by the thread pool.
With the event handler which is raised when I get a UDP packet and then calls methods, what is the best way to ensure my methods all complete before the next packet/event is raised? At times I see event queue problems, because if any of the methods take a bit longer than they should (for example writing to a DB) and the next packet comes in 100 ms later, you get event queue growth because you cannot consume events in a timely manner. Is there a good way to address this?
A queue is a perfect solution. You can have more queues if some messages are independent of others and their execution won't collide, and then execute those queues in parallel.
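A minimal sketch of that queue idea (illustrative names; a BlockingCollection stands in for the real message queue, and the DB work is elided). The receive loop only enqueues packets; a separate consumer drains the queue at its own pace, so a slow handler delays processing, not reception:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class PacketPipeline
{
    public static int Process(string[] packets)
    {
        // Bounded so a stalled consumer applies back-pressure instead of
        // letting the queue grow without limit.
        var queue = new BlockingCollection<string>(boundedCapacity: 1000);
        int handled = 0;

        var consumer = Task.Run(() =>
        {
            foreach (var packet in queue.GetConsumingEnumerable())
            {
                // parse, look for the trigger, write to the DB, etc.
                handled++;
            }
        });

        foreach (var p in packets)       // stands in for the UDP receive loop
            queue.Add(p);
        queue.CompleteAdding();          // signal the consumer to finish
        consumer.Wait();
        return handled;
    }

    static void Main()
    {
        Console.WriteLine(Process(new[] { "a", "b", "trigger", "c" })); // prints 4
    }
}
```

With independent message classes, you would create one such queue per class and run the consumers in parallel, as the answer suggests.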
I'll address your points:
Your listening thread must be a 'permanent' thread that gets messages and distributes them.
(2+3) - Look at the TPL library; you should use it instead of working with threads and thread pools directly (unless you need some fine control over the operations, which, from your question, it seems you don't) - as MSDN states:
The Task Parallel Library (TPL) is based on the concept of a task, which represents an asynchronous operation. In some ways, a task resembles a thread or ThreadPool work item, but at a higher level of abstraction
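A minimal illustration of that abstraction (the parse and handle steps here are hypothetical stand-ins):

```csharp
using System;
using System.Threading.Tasks;

class TplSketch
{
    // A Task wraps the work; the scheduler decides which pool thread runs it,
    // and ContinueWith chains a follow-up without any manual thread handling.
    public static int ParseAndHandle(string packet)
    {
        Task<int> work = Task.Run(() => packet.Length);             // "parse" step
        Task<int> followUp = work.ContinueWith(t => t.Result * 2);  // "handle" step
        return followUp.Result;                                     // blocks only for the demo
    }

    static void Main()
    {
        Console.WriteLine(ParseAndHandle("hello")); // prints 10
    }
}
```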
Look into using message queues, since what you need is a place to receive messages, store them for some time (in memory, in your case) and handle them at your own pace.
You could implement this yourself, but you'll find it gets complicated quickly.
I recommend looking into NetMQ - it's easy to use, especially for what you describe, and it's in C#.
My question is more related to how WebSockets (on the client) work/behave with threads in .Net and what I am looking for as an answer would be more of a low level explanation on how the OS interacts with the .Net thread when it receives data from the server on its socket.
Suppose I have a client that opens 1000 sockets to a server asynchronously. It then sits there waiting for updates/events to come through. These events can arrive at different times and frequencies.
Assuming that every time data comes in via a socket, a thread needs to pick it up and do some work on it, am I correct to assume that IF all 1000 sockets receive data at the same time I will then have 1000 threads (one thread per socket) coming from the thread pool to pick up the data from the sockets? What if I wanted to have 3000 sockets open?
Any clarification on this is very much appreciated.
Assuming you are using the .NET Framework WebSocket library, the received data will be returned on a thread from the ThreadPool (probably an I/O completion thread).
Thread Pool
When the thread pool is used you don't know how many different threads will be active at the same time. The data is put on a queue and the thread pool works through it as fast as it can. You can control the min/max number of threads that it will use, but the way that the pool creates/destroys its threads is unspecified.
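A small sketch of inspecting and adjusting those min/max bounds (the target of 16 is an arbitrary example, and the method name is made up):

```csharp
using System;
using System.Threading;

class PoolBounds
{
    // Attempts to raise the worker-thread minimum and returns the new value.
    public static int RaiseMinWorkers(int target)
    {
        int minWorker, minIo, maxWorker, maxIo;
        ThreadPool.GetMinThreads(out minWorker, out minIo);
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        Console.WriteLine("min: {0}/{1}  max: {2}/{3}", minWorker, minIo, maxWorker, maxIo);

        // Raising the minimum pre-warms the pool for a known burst size.
        // It is a tuning decision, not a free win: too high wastes memory
        // and scheduling overhead.
        ThreadPool.SetMinThreads(Math.Max(minWorker, target), minIo);
        ThreadPool.GetMinThreads(out minWorker, out minIo);
        return minWorker;
    }

    static void Main()
    {
        Console.WriteLine("worker minimum now: " + RaiseMinWorkers(16));
    }
}
```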
The above holds true for most asynchronous operations in .NET.
Exceptions
If you awaited the asynchronous receive operation in a synchronization context (for instance a UI thread) the operation will resume in the same context (UI thread), unless you suppress the sync context. In this case only one thread will be used and the receive operations will be queued and processed in sequence.
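That context-capture behaviour can be made visible with a small recording SynchronizationContext (an illustrative sketch; CountingContext is a made-up name, and Task.Delay stands in for the receive operation):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Counts how many continuations are posted back to the context.
// With ConfigureAwait(false) the await skips the context entirely.
class CountingContext : SynchronizationContext
{
    public int Posts;
    public override void Post(SendOrPostCallback d, object state)
    {
        Interlocked.Increment(ref Posts);
        ThreadPool.QueueUserWorkItem(_ => d(state)); // run it somewhere harmless
    }
}

class SyncContextDemo
{
    public static int PostsFor(bool suppressContext)
    {
        var ctx = new CountingContext();
        var old = SynchronizationContext.Current;
        SynchronizationContext.SetSynchronizationContext(ctx);
        try
        {
            Func<Task> body = async () =>
            {
                if (suppressContext)
                    await Task.Delay(10).ConfigureAwait(false); // resumes on a pool thread
                else
                    await Task.Delay(10);                       // posted back to ctx
            };
            body().Wait();
            return ctx.Posts;
        }
        finally
        {
            SynchronizationContext.SetSynchronizationContext(old);
        }
    }

    static void Main()
    {
        Console.WriteLine(PostsFor(false)); // 1: continuation posted to the context
        Console.WriteLine(PostsFor(true));  // 0: context suppressed
    }
}
```

On a real UI thread the Post would marshal back to that single thread, which is what serializes the receive operations as described above.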
I'm writing a TCP server in C# and I'm using the BeginXXX and EndXXX methods for async communication. If I understand correctly, when I use BeginXXX the request will be handled on the thread pool (when the request is ready) while the main thread keeps accepting new connections.
The question is what happens if I perform a blocking action in one of these AsyncCallbacks? Will it be better to run a blocking operation as a task? Tasks use the threadpool as well don't they?
The use case is the following:
The main thread sets up a listening socket which accepts connections using BeginAccept, and starts listening on those connections using BeginReceive. When a full message has been received, a function is called depending on what that message was; in 80% of all cases, those functions will start a database query/insertion/update.
I suggest you use SocketAsyncEventArgs, which was introduced in .NET 3.5.
The question is what happens if I perform a blocking action in one of these AsyncCallbacks? Will it be better to run a blocking operation as a task?
If you do that too often or for too long, the ThreadPool will grow, possibly to the point where it will crash your app.
So try to avoid blocking as much as possible, but a little bit of it should be acceptable. Keep in mind that the ThreadPool grows by roughly one new thread per 500 ms once it is saturated, so verify that it levels out at some reasonable number of threads.
A blunt instrument could be to cap the MaxThreads of the pool.
Tasks use the threadpool as well don't they?
Yes, so your options are limited.
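One remaining option is to push known-blocking work (like those database calls) onto a dedicated thread rather than a pool thread. With the TPL, TaskCreationOptions.LongRunning hints the default scheduler to do exactly that; a small sketch, with Thread.Sleep standing in for the blocking DB call:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class BlockingOffload
{
    // Returns true if the blocking work ran off the thread pool.
    public static bool RanOffPool()
    {
        bool onPool = true;
        Task.Factory.StartNew(() =>
        {
            // With the default scheduler, LongRunning work runs on a
            // dedicated thread, so pool threads stay free for callbacks.
            onPool = Thread.CurrentThread.IsThreadPoolThread;
            Thread.Sleep(200);   // stands in for a blocking DB call
        }, TaskCreationOptions.LongRunning).Wait();
        return !onPool;
    }

    static void Main()
    {
        Console.WriteLine(RanOffPool()); // True: the work ran on a dedicated thread
    }
}
```

This keeps the pool available for the socket callbacks while still paying only one thread per long blocking operation.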