Is there a C# reactor/proactor library?

I'm looking to move a Windows C++ application to C# so that some major enhancements are a bit easier. The C++ application is single-threaded and uses a home-grown reactor pattern for all event handling of accepting, reading and writing sockets, and timers. All socket handling is done async.
What is the accepted way to implement a C# reactor pattern? Are there existing libraries?

brofield,
Unfortunately, the mentality of the C# world is still in the thread-per-connection realm. I'm looking for a way to handle multiple connections on a single Compact Framework / Windows CE box, and I'm looking to write my own Proactor/Reactor pattern (fashioned after the one used in ACE). Compact Framework doesn't seem to support async connecting, just async reading and writing. I also need tight control over timeouts (soft real-time application).
Alan,
One reason to implement a proactor/reactor pattern is so that you don't have to have a thread running for each connection. A web server is the classic example. A busy server could easily have hundreds of connections active at any one time. With that many threads (and I've seen implementations that have one thread to read and another to write the data), the time spent in context switching becomes significant. Under Windows CE on a 750 MHz ARM processor, I measured context switches of over a millisecond, with peaks as high as 4 milliseconds.
I still find that most C# and Java applications I've come across have way too many threads running. Starting another thread seems to be the solution for everything. For example, Eclipse (the IDE) uses 44 threads even before I actually open a project. 44 threads? To do what, exactly? Is that why Eclipse is so slow?

Have a read of Asynchronous Programming in C# using Iterators:
In this article we will look at how to write programs that perform asynchronous operations without the typical inversion of control. To briefly introduce what I mean by 'asynchronous' and 'inversion of control': asynchronous refers to programs that perform some long-running operations that don't necessarily block a calling thread, for example accessing the network, calling web services or performing any other I/O operation in general.
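The gist of the iterator approach can be sketched roughly as follows. This is a minimal, hypothetical illustration of the idea rather than the article's actual API: the iterator yields asynchronous steps, and a small driver resumes it each time a step completes, so the logic reads top to bottom instead of being scattered across callbacks.

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;
    using System.Text;

    static class AsyncRunner
    {
        // Each yielded step receives a continuation to call when its asynchronous work finishes.
        public static void Run(IEnumerator<Action<Action>> steps)
        {
            Action move = null;
            move = () => { if (steps.MoveNext()) steps.Current(move); };
            move();
        }
    }

    static class Demo
    {
        // The whole exchange reads sequentially even though every step is asynchronous.
        static IEnumerable<Action<Action>> Fetch(string host)
        {
            var client = new TcpClient();
            yield return resume => client.BeginConnect(host, 80,
                ar => { client.EndConnect(ar); resume(); }, null);

            var stream = client.GetStream();
            var request = Encoding.ASCII.GetBytes("GET / HTTP/1.0\r\nHost: " + host + "\r\n\r\n");
            yield return resume => stream.BeginWrite(request, 0, request.Length,
                ar => { stream.EndWrite(ar); resume(); }, null);

            var buffer = new byte[4096];
            int read = 0;
            yield return resume => stream.BeginRead(buffer, 0, buffer.Length,
                ar => { read = stream.EndRead(ar); resume(); }, null);

            Console.WriteLine("Received {0} bytes from {1}", read, host);
            client.Close();
        }

        static void Main()
        {
            AsyncRunner.Run(Fetch("example.com").GetEnumerator());
            Console.ReadLine();   // keep the process alive while the asynchronous steps run
        }
    }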

The Windows kernel has a very nice asynchronous wait API called I/O Completion Ports, which is very good for implementing a reactor pattern. Unfortunately, the System.Net.Sockets library in the .NET framework does not fully expose I/O Completion Ports to developers. The biggest problem with System.Net.Sockets is that you have no control over which thread will dequeue an asynchronous completion event. All your asynchronous completions must occur on some random global .NET ThreadPool thread, chosen for you by the framework.
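One way to get reactor-style, single-threaded dispatch back on top of this is to let the completions fire on whichever pool thread the framework picks, but immediately marshal them onto a queue that one dedicated thread drains. A rough sketch under that assumption (the dispatcher class below is illustrative, not a framework API):

    using System;
    using System.Collections.Concurrent;
    using System.Net.Sockets;

    // All completion callbacks enqueue work items; only DispatchLoop ever runs them,
    // so the handlers behave as if they were single-threaded (reactor-style).
    class SingleThreadDispatcher
    {
        private readonly BlockingCollection<Action> _queue = new BlockingCollection<Action>();

        public void Post(Action work) { _queue.Add(work); }

        public void DispatchLoop()
        {
            foreach (var work in _queue.GetConsumingEnumerable())
                work();
        }
    }

    class Reader
    {
        public static void StartReceive(Socket socket, SingleThreadDispatcher dispatcher)
        {
            var buffer = new byte[4096];
            socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, ar =>
            {
                int read = socket.EndReceive(ar);   // runs on a ThreadPool thread...
                dispatcher.Post(() =>               // ...but the handler runs on the dispatch thread
                {
                    Console.WriteLine("Got {0} bytes", read);
                    if (read > 0) StartReceive(socket, dispatcher);
                });
            }, null);
        }
    }

The application would dedicate one thread (for example the existing main loop) to DispatchLoop() and pass the same dispatcher to every connection.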

.NET threads, sockets and event handling have first-class support within C# and .NET, so the need for an external library is pretty light. In other words, if you know what you are doing, the framework makes it very straightforward to roll your own socket event handling routines.
Both companies I have worked for tended to use the default .NET Socket library as-is for all network connections.
Edit to add: basically, this means it is an excellent opportunity to learn about delegates, events, threads, and sockets in .NET/C#.

Check out SignalR. It looks very promising.

Have a look at the Interlace project on Google Code. We use it in all of our products.

Design of asynchronous socket classes in C#

I've written a small asynchronous TCP server/client in C#...
... and I've just been thinking:
The C# API implements Select and Poll, a classic but easy way to do async I/O. Why did Microsoft introduce the BeginConnect/BeginSend family, which, in my opinion, has a more complicated design (and adds lines of code too)?
So, using the BeginXXX() "trend", I noticed that the System.Threading import is required (for the events). Does that mean threads are involved too?
select and poll have two problems:
They are generally used in a single-threaded way. They do not scale for this reason.
They require all IO to be dispatched through a central place that does the polling.
It is much nicer to be able to just specify a callback that will magically be called on completion. This scales automatically, and no central dispatch point is needed. Async IO in .NET is quite free of hassles. It just works (efficiently).
Async IO on Windows is threadless. While an IO is running, not a single thread is busy serving it. All async IO in .NET uses truly async IO supported by the OS. This means either overlapped IO or completion ports.
Look into async/await, which can also be used with sockets. It provides the easiest way to use async IO that I know of, and that includes all languages and platforms. select and poll aren't even in the same league judged by ease of use.
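For comparison, here is a minimal async/await client sketch (hypothetical host and port, using TcpClient); the control flow reads top to bottom and no thread is blocked while the I/O is in flight:

    using System;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class AwaitSocketDemo
    {
        static void Main()
        {
            FetchAsync().GetAwaiter().GetResult();
        }

        static async Task FetchAsync()
        {
            using (var client = new TcpClient())
            {
                await client.ConnectAsync("example.com", 80);   // no thread is blocked while connecting
                var stream = client.GetStream();

                var request = Encoding.ASCII.GetBytes("GET / HTTP/1.0\r\nHost: example.com\r\n\r\n");
                await stream.WriteAsync(request, 0, request.Length);

                var buffer = new byte[4096];
                int read = await stream.ReadAsync(buffer, 0, buffer.Length);
                Console.WriteLine("Received {0} bytes", read);
            }
        }
    }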

How can I implement Network and Serial communication components in my WPF application?

I'm looking for less technical and more conceptual answers on this one.
I am looking to build a WPF application using .NET 4.5 for controlling a rover (a glorified RC car). Here is the intended functionality:
The application and rover will communicate wirelessly by sending and receiving strings - JSON over TCP Socket.
The GUI will display multiple video feeds via RTSP.
A control panel - custom hardware - will connect to the computer via USB and these signals will be converted to JSON before being sent over the TCP connection and providing movement instructions.
The GUI will need to update to reflect the state of the control panel as well as the state of the rover based on data received.
I'm not sure which technologies to use to implement this, but from my research, BackgroundWorkers or Threads, and Asynchronous techniques would be things to look into. Which of these seems like a good route to take? Also, should I use TCP Sockets directly in the application or should/could I use WCF to provide this data?
Any wisdom on this would be great. Thanks in advance.
EDIT:
Here was the final implementation used, and boy did it work out great:
Everything fell into place around using the MVVM pattern.
There were Views for the control panel and the networking component which each had a corresponding ViewModel that handled the background operations.
Updating the UI was done via databinding, not the Dispatcher.
Wireless communication was done asynchronously (async/await) via TcpListener, along with the use of Tasks.
Serial port communication was done asynchronously via SerialPort and Tasks.
Used ModernUI for the interface.
Used JSON.NET for the JSON parsing.
Here is a link to the project. It was done over the course of a month so it isn't the prettiest code. I have refined my practices a lot this summer so I'm excited to work on a refactored version that should be completed next year.
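For anyone curious, the TcpListener/async pattern described above looks roughly like the following sketch (class and method names are made up; the real project differs):

    using System;
    using System.Net;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading.Tasks;

    class RoverListener
    {
        public async Task ListenAsync(int port)
        {
            var listener = new TcpListener(IPAddress.Any, port);
            listener.Start();
            while (true)
            {
                // Accept each connection asynchronously; no thread blocks in Accept.
                TcpClient client = await listener.AcceptTcpClientAsync();
                Task handler = HandleClientAsync(client);   // fire-and-forget for this sketch
            }
        }

        private async Task HandleClientAsync(TcpClient client)
        {
            using (client)
            {
                var stream = client.GetStream();
                var buffer = new byte[4096];
                int read;
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    string json = Encoding.UTF8.GetString(buffer, 0, read);
                    // In the real app this would raise an event the ViewModel is bound to.
                    Console.WriteLine("Received: " + json);
                }
            }
        }
    }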
As you are using .NET 4.5, you don't need to use Threads and BackgroundWorkers for your project, and you don't need to take care of all of your threads yourself. WPF's Dispatcher is a very powerful tool for handling the UI from other threads.
For TCP communication I would suggest you use TcpClient and TcpListener with async callbacks, and use the Dispatcher for updating your UI.
For displaying cameras over RTSP, use VLC.Net, an open-source wrapper for the VLC library that is good for handling many real-time video protocols.
Use Tasks instead of Threads, and set their priority according to your requirements.
You don't need WCF for your application.
As far as I can tell (I'm no expert), MS's philosophy these days is to use asynchronous I/O, thread pool tasks for lengthy compute operations, and have a single main thread of execution for the main part of the application. That main thread drives the GUI and commissions the async I/O and thread pool tasks as and when required.
So for your application, that would mean receiving messages asynchronously, initiating a task on the thread pool to process each message, and finally displaying the results on the GUI when the task completes. It will end up looking like a single-threaded event loop application. The async I/O and thread pool tasks do in fact use threads; it's just that they're hidden from you in as convenient a way as possible.
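In code, that model tends to look something like this sketch; ReceiveMessageAsync and ProcessMessage are hypothetical stand-ins for the real I/O and processing:

    using System.Threading.Tasks;
    using System.Windows;

    public partial class MainWindow : Window
    {
        // Hypothetical helpers; the real ones would wrap the network I/O and message handling.
        private Task<string> ReceiveMessageAsync() { return Task.FromResult("{}"); }
        private string ProcessMessage(string json) { return "processed: " + json; }

        private async void OnReceiveClick(object sender, RoutedEventArgs e)
        {
            string message = await ReceiveMessageAsync();                    // async I/O, no thread blocked
            string result = await Task.Run(() => ProcessMessage(message));   // CPU-bound work on a pool thread
            Title = result;                                                  // continuation resumes on the UI thread
        }
    }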
I've tried (once) bucking this philosophy with my own separate thread handling all my I/O and an internal pipe connection to my main thread to tell it what's happening. I made it work, but it was really, really hard work. For example, I found it impossible to cancel a blocking network or pipe I/O operation in advance of its timeout (any thoughts from anyone out there more familiar with Win32 and .NET?). I was only trying to do that because there's no true equivalent to select() in Windows; the one that is there doesn't work with anything other than sockets... In case anyone is wondering 'why oh why oh why?', I was re-implementing an application originally written for Unix and naively didn't want to change the architecture.
Next time (if there is one) I'll stick to MS's approach.

Do .NET 4.5 Server-Side WebSockets use IOCP?

Does anyone know if AspNetWebSocket (introduced in .NET Framework 4.5) utilizes IOCP for handling requests instead of one thread per connection?
There's a relatively easy way for you to work this out for yourself.
Create a simple program.
Watch it with a task monitor and look at the number of threads.
Create lots of web socket connections to it.
See if the thread count increases whilst the connections are alive.
Compare the number of connections to the number of threads.
But what does it matter, as long as the API meets your performance and scalability requirements? And those you will only discover by performance testing your application.
Note that I would be VERY surprised if the implementation did NOT use IOCP, but there's really little point in asking, IMHO.

What is the difference in .NET between developing a multithreaded application and parallel programming?

Recently I've read a lot about parallel programming in .NET but I am still confused by contradicting statements over the texts on this subject.
For example, the popup description (shown when hovering over the tag's icon) of the stackoverflow.com task-parallel-library tag:
"The Task Parallel Library is part of .NET 4. It is a set of APIs to
enable developers to program multi-core shared memory processors"
Does this mean that multi-core and parallel programming applications were impossible using prior versions of .NET?
Do I control multicore/parallel usage/distribution between cores in a .NET multithreaded application?
How can I identify the core on which a thread is to run, and assign a thread to a specific core?
What has the .NET 4.0+ Task Parallel Library enabled that was impossible to do in previous versions of .NET?
Update:
Well, it was difficult to formulate specific questions but I'd like to better understand:
What is the difference in .NET between developing a multi-threaded application and parallel programming?
So far, I have not been able to grasp the difference between them.
Update2:
MSDN "Parallel Programming in the .NET Framework" starts from version .NET 4.0 and its article Task Parallel Library tells:
"Starting with the .NET Framework 4, the TPL is the preferred way to
write multithreaded and parallel code"
Can you give me hints on how to create parallel code pre-.NET 4 (i.e. in .NET 3.5), taking into account that I am familiar with multithreaded development?
I see "multithreading" as just what the term says: using multiple threads.
"Parallel processing" would be: splitting up a group of work among multiple threads so the work can be processed in parallel.
Thus, parallel processing is a special case of multithreading.
Does this mean that multi-core and parallel programming applications were impossible using prior versions of .NET?
Not at all. You could do it using the Thread class. It was just much harder to write, and much much harder to get it right.
Do I control multicore/parallel usage/distribution between cores in a .NET multithreaded application?
Not really, but you don't need to. You can mess around with processor affinity for your application, but at the .NET level that's hardly ever a winning strategy.
The Task Parallel Library includes a "partitioner" concept that can be used to control the distribution of work, which is a better solution than controlling the distribution of threads over cores.
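For example, a range partitioner controls how the input is chunked across worker tasks; it does not (and need not) pin anything to a particular core:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    class PartitionerDemo
    {
        static void Main()
        {
            var data = new double[1000000];

            // Hand out index ranges instead of single elements, cutting per-item overhead;
            // the scheduler decides which cores the underlying threads actually run on.
            var ranges = Partitioner.Create(0, data.Length, 50000);

            Parallel.ForEach(ranges, range =>
            {
                for (int i = range.Item1; i < range.Item2; i++)
                    data[i] = Math.Sqrt(i);
            });

            Console.WriteLine(data[123456]);
        }
    }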
How can I identify the core on which a thread is to run, and assign a thread to a specific core?
You're not supposed to do this. A .NET thread doesn't necessarily correspond with an OS thread; you're at a higher level of abstraction than that. Now, the default .NET host does map threads 1-to-1, so if you want to depend on an undocumented implementation detail, then you can poke through the abstraction and use P/invoke to determine/drive your processor affinity. But as noted above, it's not useful.
What has the .NET 4.0+ Task Parallel Library enabled that was impossible to do in previous versions of .NET?
Nothing. But it sure has made parallel processing (and multithreading) much easier!
Can you give me hints on how to create parallel code pre-.NET 4 (i.e. in .NET 3.5), taking into account that I am familiar with multithreaded development?
First off, there's no reason to develop for that platform. None. .NET 4.5 is already out, and the last version (.NET 4.0) supports all OSes that the next older version (.NET 3.5) did.
But if you really want to, you can do simple parallel processing by spinning up Thread objects or BackgroundWorkers, or by queueing work directly to the thread pool. All of these approaches require more code (particularly around error handling) than the Task type in the TPL.
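For instance, on .NET 3.5 a "run these in parallel and wait for all of them" pattern had to be assembled by hand from ThreadPool, a counter and a ManualResetEvent. A sketch of that style (note it still has no error handling, which the TPL's Task gives you almost for free):

    using System;
    using System.Threading;

    class Pre40Parallel
    {
        static void Main()
        {
            int[] inputs = { 1, 2, 3, 4, 5, 6, 7, 8 };
            long sum = 0;
            int pending = inputs.Length;
            var done = new ManualResetEvent(false);

            foreach (int n in inputs)
            {
                int item = n;   // copy for the closure (pre-C# 5 foreach capture semantics)
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    Interlocked.Add(ref sum, (long)item * item);    // the actual "work"
                    if (Interlocked.Decrement(ref pending) == 0)    // last worker signals completion
                        done.Set();
                });
            }

            done.WaitOne();
            Console.WriteLine("Sum of squares: " + sum);
        }
    }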
What if I asked you, "Do you write business software in your own home-grown language?" or "Do you drink water only after digging your own well?"
That's the difference between writing multithreaded code by creating threads and managing them yourself, versus using the TPL's abstraction over threads. Multicore scheduling of threads onto cores is handled by the OS, so you don't need to worry about whether your threads are being executed on the cores your system supports, AFAIK.
Check this article; it basically sums up what was (virtually) impossible before the TPL. Even though many companies had brewed their own parallel-processing libraries, none of them had been fully optimized to take advantage of all the resources of the popular architectures (simply because it's a big task, and Microsoft has a lot of resources, plus they are good). It's also interesting to compare Intel's counterpart implementation, TBB, with the TPL.
Does this mean that multi-core and parallel programming applications were impossible using prior versions of .NET?
Not at all. Types like Thread and ThreadPool for scheduling computations on other threads, and ManualResetEvent for synchronization, have been there since .NET 1.
Do I control multicore/parallel usage/distribution between cores in a .NET multithreaded application?
No, that's mostly the job of the OS. You can set ProcessorAffinity of a ProcessThread, but there is no simple way to get a ProcessThread from a Thread (because it was originally thought that .Net Threads may not directly correspond to OS threads). There is usually no reason to do this and you especially shouldn't do it for ThreadPool threads.
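If you really do want to touch affinity, it is exposed on ProcessThread (not on Thread), roughly as in the sketch below; note that finding which ProcessThread corresponds to a given managed Thread is exactly the part .NET does not hand you directly:

    using System;
    using System.Diagnostics;

    class AffinityDemo
    {
        static void Main()
        {
            Process current = Process.GetCurrentProcess();

            // These are OS-level threads, not managed Thread objects; .NET gives you
            // no supported mapping from a managed Thread to one of these.
            foreach (ProcessThread t in current.Threads)
                Console.WriteLine("OS thread {0}", t.Id);

            // Pin every OS thread of this process to CPU 0 (almost never a good idea).
            foreach (ProcessThread t in current.Threads)
                t.ProcessorAffinity = (IntPtr)0x1;
        }
    }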
What has the .NET 4.0+ Task Parallel Library enabled that was impossible to do in previous versions of .NET?
I'd say it didn't make anything impossible possible. But it made lots of tasks much simpler.
You could always write your own version of ThreadPool and manually use synchronization primitives (like ManualResetEvent) for synchronization between threads. But doing that properly and efficiently is lots of error-prone work.
What is the difference in .NET between developing a multi-threaded application and parallel programming?
This is just a question of naming and doesn't have much to do with your previous questions. Parallel programming means performing multiple operations at the same time, but it doesn't say how you achieve that parallelism. For that, you could use multiple computers, or multiple processes, or multiple threads, or even a single thread.
(Parallel programming on a single thread can work if the operations are not CPU-bound, like reading a file from disk or fetching some data from the internet.)
So, multi-threaded programming is a subset of parallel programming, though one that's most commonly used on .Net.
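A small illustration of that non-CPU-bound case: several downloads proceed in parallel while hardly any threads sit blocked waiting for them (the URLs are placeholders):

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Threading.Tasks;

    class IoBoundParallelism
    {
        static void Main()
        {
            var urls = new[] { "http://example.com/a", "http://example.com/b", "http://example.com/c" };

            using (var http = new HttpClient())
            {
                // All three requests are in flight at the same time; no thread sits blocked per request.
                Task<string>[] downloads = urls.Select(u => http.GetStringAsync(u)).ToArray();
                Task.WaitAll(downloads);

                foreach (var d in downloads)
                    Console.WriteLine(d.Result.Length);
            }
        }
    }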
Multithreading was available even on single-core CPUs. I believe that in the .NET world, "parallel programming" refers to the compiler/language, namespace and library additions that facilitate multi-core capabilities (better than before). In this sense, "parallel programming" is a category under multithreading that provides improved support for multiple CPUs/cores.
My own ponderings: at the same time, I see .NET "parallel programming" as encompassing not only multi-threading but other techniques as well. Consider the fact that the new async/await facilities don't guarantee multi-threading: in certain scenarios they are only an abstraction over the continuation-passing-style paradigm and could accomplish everything on a single thread. Add to the mix the parallelism that comes from running different processes (potentially on different machines), and in that sense multithreading is only a portion of the broader concept of "parallel programming".
But if you consider the .NET releases I think the former is a better explanation.

Server Architecture

Hopefully two simple questions relating to creating a server application:
Is there a theoretical/practical limit on the number of simultaneous sockets that can be open? (Ignoring the resources required to process the data once it has arrived!) If it's of relevance, I am targeting the .NET Framework.
Should each connection be run in a separate thread that's permanently assigned to it, or should a thread pool be used? The dedicated-thread approach seems simpler, but it seems odd to have 100+ threads running at once. Is this acceptable practice?
Any advice is greatly appreciated
Venatu
You may find the following answer useful. It illustrates how to write a scalable TCP server using the .NET thread pool and asynchronous sockets methods (BeginAccept/EndAccept and BeginReceive/EndReceive).
That being said, it is rarely a good idea to write your own server when you could use one of the numerous WCF bindings (or even write custom ones) and benefit from the full power of the WCF infrastructure. It will probably scale better than any custom-written server.
There are practical limits, yes. However, you will most likely run out of resources to handle the load long before you reach them. CPU or memory are more likely to be exhausted before the number of connections becomes the problem.
For maximum scalability, you don't want a separate thread per connection; rather, you would use an asynchronous model that only uses threads when servicing active connections (ones actually receiving or sending data).
If I remember correctly (I did sockets a long time ago), the very best way of implementing them is with the ReceiveAsync (.NET 3.5) / BeginReceive methods using asynchronous callbacks, which utilize the thread pool. Don't open a thread for every connection; it's a waste of resources.
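A bare-bones sketch of that ReceiveAsync (SocketAsyncEventArgs) style, with error handling and buffer pooling omitted:

    using System;
    using System.Net.Sockets;

    class AsyncReceiver
    {
        public static void Start(Socket socket)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[4096], 0, 4096);
            args.Completed += OnReceiveCompleted;

            // ReceiveAsync returns false if the operation completed synchronously,
            // in which case the Completed event will NOT fire and we handle it inline.
            if (!socket.ReceiveAsync(args))
                OnReceiveCompleted(socket, args);
        }

        private static void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
        {
            var socket = (Socket)sender;
            if (args.SocketError == SocketError.Success && args.BytesTransferred > 0)
            {
                Console.WriteLine("Received {0} bytes on a pool thread", args.BytesTransferred);

                // Issue the next receive; no dedicated thread per connection is ever created.
                if (!socket.ReceiveAsync(args))
                    OnReceiveCompleted(socket, args);
            }
            else
            {
                socket.Close();
            }
        }
    }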
