Handling multiple TcpClient connections without using threads - c#

I've got a C# program with lots (let's say around a thousand) of open TcpClient objects. I want to enter a state which will wait for something to happen on any of those connections.
I would rather not launch a thread for each connection.
Something like...
while (keepRunning)
{
    // Wait for any one connection to receive something.
    TcpClient active = WaitAnyTcpClient(collectionOfOpenTcpClients);

    // The selected connection has incoming traffic. Deal with it.
    // (If other connections have traffic during this function, the OS
    // will have to buffer the data until the loop goes round again.)
    DealWithConnection(active);
}
Additional info:
The TcpClient objects come from a TcpListener.
The target environment will be MS .NET or Mono-on-Linux.
The protocol calls for long periods of idleness while the connection is open.

What you're trying to do is called the Async Pattern in Microsoft terminology. The overall idea is to change all blocking I/O operations into non-blocking ones. If this is done, the application typically needs only as many system threads as there are CPU cores on the machine.
Take a look at Task Parallel Library in .Net 4:
http://msdn.microsoft.com/en-us/library/dd460717%28VS.100%29.aspx
It's a pretty mature wrapper over the plain old Begin/Callback/Context .Net paradigm.
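To make this concrete, here is a minimal sketch of the plain Begin/End (APM) flavour of the pattern - the ClientState class is invented for illustration, and DealWithConnection stands in for your own handling code. One BeginRead is posted per connection, so no thread is tied up while a connection sits idle:
using System;
using System.Net.Sockets;

class ClientState
{
    public TcpClient Client;
    public byte[] Buffer = new byte[4096];
}

static class AsyncReader
{
    public static void Watch(TcpClient client)
    {
        var state = new ClientState { Client = client };
        client.GetStream().BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
    }

    static void OnRead(IAsyncResult ar)
    {
        var state = (ClientState)ar.AsyncState;
        int bytesRead = state.Client.GetStream().EndRead(ar);
        if (bytesRead == 0) { state.Client.Close(); return; }   // remote side closed the connection

        DealWithConnection(state.Client, state.Buffer, bytesRead);

        // Re-post the read; the thread pool calls back only when more data arrives.
        state.Client.GetStream().BeginRead(state.Buffer, 0, state.Buffer.Length, OnRead, state);
    }

    static void DealWithConnection(TcpClient client, byte[] data, int length)
    {
        // process the received bytes here
    }
}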
Update:
Think about what you will do with the data after you read it from the connection. In real life you probably have to reply to the client or save the data to a file. In this case you will need some C# infrastructure to contain/manage your logic and still stay within a single thread. TPL provides it for you for free. Its only drawback is that it was introduced in .Net 4, so it's probably not in Mono yet.
Another thing to consider is connection lifetime. How often are your connections opened/closed, and how long do they live? This is important because accepting and disconnecting a TCP connection requires packet exchange with the client (which is asynchronous by nature, and moreover a malicious client may not return ACK(-nowledged) packets at all). If you think this aspect is significant for your app, you may want to research how to handle it properly in .Net. In WinAPI the corresponding functions are AcceptEx and DisconnectEx. They are probably wrapped in .Net with Begin/End methods - in that case you're good to go. Otherwise you'll probably have to create a wrapper over these WinAPI calls.

Related

Good UDP relay server design

I'm creating a UDP server that needs to receive UDP packets from various clients, and then forward them to other clients. I'm using C# so each UDP socket is a UdpClient for me. I think I want 2 UdpClient objects, one for receiving and one for sending. The receiving socket will be bound to a known port, and the sender will not be bound at all.
The server will get each packet, lookup the username in the packet data, and then based on a routing list the server maintains, it will forward the packet to 1 or more other clients.
I start the listening UdpClient with:
UdpClient udpListener = new UdpClient(new IPEndPoint(ListenerIP, UdpListenerPort));
udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
My callback looks like:
void OnUDPRead(IAsyncResult ar)
{
    UdpClient udpListener = (UdpClient)ar.AsyncState;
    try
    {
        IPEndPoint remoteEndPoint = null;
        byte[] packet = udpListener.EndReceive(ar, ref remoteEndPoint);
        // Get connection based on data in packet
        // Get connection's routing table (from memory)
        // ***Send to each client in routing table***
    }
    catch (Exception ex)
    {
        // Ignored for now - note that EndReceive throws here if the socket has been closed.
    }
    udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
}
The server will need to process hundreds of packets per second (this is for VoIP). The main part I'm not sure about is what to do at the "Send to each client" step above. Should I use UdpClient.Send or UdpClient.BeginSend? Since this is for a real-time protocol, it wouldn't make sense to have several sends pending/buffered for any given client.
Since I only have one BeginReceive() posted at any given time, I assume my OnUDPRead function will never be called while already executing. Unless UdpClient works differently than TcpClient in this area? Since I don't give it a single buffer (I don't give it any buffer), I suppose it could fire OnUDPRead several times with only one BeginReceive posted?
I don't see any reason why you can't use one socket, the straightforward way. Async I/O doesn't appear to me to offer any startling benefit in this situation. Just use non-blocking mode. Receive a packet, send it. If send() causes EAGAIN or EWOULDBLOCK or whatever the C# manifestation of that is, just drop it.
In your plan, the sending socket will be bound automatically the first time you call send(). Clients will reply to that port. So it needs to be a port you are reading on. And you already have one of those.
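The non-blocking send-and-drop could look roughly like this (a sketch; udpSocket, packet and destination are assumed to exist already):
udpSocket.Blocking = false;
try
{
    udpSocket.SendTo(packet, destination);
}
catch (SocketException ex)
{
    // Send buffer full - drop the datagram, as suggested above.
    if (ex.SocketErrorCode != SocketError.WouldBlock)
        throw;
}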
It very much depends on scale - i.e. how many concurrent RTP streams you want to relay. I have developed exactly this solution as part of a media gateway for our SIP switch. Despite initial reservations, I have it working very efficiently using C# (1000 concurrent streams on a single server, 50K datagrams per second using 20ms PTime).
Through trial-and-error, I determined the following:
1) Using Socket.ReceiveFromAsync is much better than using synchronous approaches, since you don't need individual threads for each stream. As packets are received, they are scheduled through the threadpool.
2) Creating/disposing the SocketAsyncEventArgs structures takes time - better to allocate a pool of structures at startup and 'borrow' them from the pool (see the sketch after this list).
3) Similarly for any other objects required during processing (e.g. a RTPPacket class). Allocating these dynamically at such a high rate results in GC issues pretty quickly with any sort of real load. If you avoid all dynamic allocation, and instead use pools of objects you manage yourself, this issue goes away.
4) If you need to do any clocking to co-ordinate input and output streams, transcoding, mixing or file playback, don't try to use the standard .NET timers. I wrapped the Win32 Multimedia timers from winmm.dll - these were much more precise.
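To illustrate points 1) and 2), here is a stripped-down sketch (buffer size and class name are invented for illustration) of pre-allocating the SocketAsyncEventArgs structures once and keeping them posted on the socket:
using System.Net;
using System.Net.Sockets;

class PooledUdpReceiver
{
    readonly Socket _socket;

    public PooledUdpReceiver(int port)
    {
        _socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        _socket.Bind(new IPEndPoint(IPAddress.Any, port));
    }

    // Allocate a fixed number of SocketAsyncEventArgs at startup; each one is
    // reused for the lifetime of the process, so nothing is created per packet.
    public void Start(int outstandingReceives)
    {
        for (int i = 0; i < outstandingReceives; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[2048], 0, 2048);
            args.Completed += OnReceiveCompleted;
            PostReceive(args);
        }
    }

    void PostReceive(SocketAsyncEventArgs args)
    {
        args.RemoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        // ReceiveFromAsync returns false when it completes synchronously,
        // in which case the Completed event is not raised.
        if (!_socket.ReceiveFromAsync(args))
            OnReceiveCompleted(_socket, args);
    }

    void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
    {
        if (args.SocketError == SocketError.Success && args.BytesTransferred > 0)
        {
            // args.Buffer[args.Offset .. args.Offset + args.BytesTransferred)
            // holds the datagram; hand it to the routing/forwarding logic here.
        }
        PostReceive(args);   // re-post the same pooled structure
    }
}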
I ended up with an architecture where components could expose IRTPSource and IRTPSink interfaces, which could then be hooked-up to create the graph required. I created RTP input/output components, mixer, player, recorder, (simple) transcoder, playout buffer etc. - and then wired them up using these two interfaces. The RTP packets are then clocked through the graph using the MM timers.
C# proved to be a good choice for what would typically be approached using C or C++.
Mike
What EJP said.
Almost all the literature on scaling socket servers with async IO is written with TCP scenarios in mind. I did a lot of research last year on how to scale a UDP socket server, and found that IOCP/epoll isn't as appropriate for UDP as it is for TCP. See this answer here.
So it's often simpler to scale a UDP socket server using multiple threads than to attempt any sort of asynchronous I/O. Oftentimes, a single thread pumping out recvfrom/sendto calls is good enough.
I would suggest that the server have one synchronous socket per client. Then have several threads do a Socket.Select call on a different subset of client sockets. I suppose you could have a single socket for all clients, but then you'll get some out-of-ordering problems if you try to process audio packets on that socket from multiple threads.
Also consider using the lower-level Socket class instead of UdpClient.
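A rough sketch of that per-thread Select loop over plain Sockets (the subset list, buffer and timeout are illustrative, not a definitive design):
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

static class SelectWorker
{
    public static void ServiceSubset(List<Socket> mySubset, byte[] buffer)
    {
        // Socket.Select trims the list down to the sockets that are readable,
        // so pass a fresh copy on every iteration.
        var readable = new List<Socket>(mySubset);
        Socket.Select(readable, null, null, 20 * 1000);   // timeout is in microseconds (20 ms here)

        foreach (Socket s in readable)
        {
            EndPoint from = new IPEndPoint(IPAddress.Any, 0);
            int n = s.ReceiveFrom(buffer, ref from);
            // look up this client's routing list and forward the n bytes
        }
    }
}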
But if you really want to scale out, you may want to do the core work in C/C++. With C#, more layers of abstraction on top of winsock mean more buffer copies and higher latency. There are techniques by which you can use winsock to forward packets from one socket to another without incurring a buffer copy.
A good design would be not to re-invent the wheel :)
There are already lots of RTP media proxy solutions: Kamailio, FreeSWITCH, Asterisk, and some other open-source projects. I have great experience with FreeSWITCH, and would highly recommend it as a media proxy for SIP calls.
If your signalling protocol is not SIP, there are some solutions for other protocols as well.

Asynchronous vs. Synchronous socket server for real-time application

I am currently developing a C# socket server that needs to send and receive commands to a real-time process. The client is an Android device. Currently the real-time requirements are "soft", but in the future stricter timing requirements might arise. Let's say in the future it might be sending commands to a crane that could potentially be dangerous.
The server is working, and seemingly very well with my current synchronous socket server design. I have separate threads for receiving and sending data. I am wondering if there would be any reason to attempt an asynchronous server socket approach? Could it provide more stability and/or faster performance?
I'll gloss over the definition of real time and say that asynchronous sockets won't make the body of the request process any faster, but will increase concurrency (the number of requests you can take at any one time). If all processors are busy processing something, you won't get any gain. This only gives you gain in the situation where a processor would have sat waiting for a socket to receive something.
Just a note on real time: if your real-time requirements are anything like needing to guarantee a response within x time, then C# and .NET will not give you such guarantees. This, however, depends on your current and future definitions of "soft". It may be the case that you happen to be getting good response times, but don't confuse that with a true real-time system.
If you're doubting the usefulness of something asynchronous in your applications then you should definitely read about this. It gives you a clear idea of what asynchronous solutions could add to your applications.
I don't think you are going to get more stability or faster performance. If it really is a "real-time" system, then it should be synchronous. If you can tolerate "near real-time" and there are long running or expensive compute operations, then you could consider an asynchronous approach. I would not add the complexity if not needed though.
If it's real time, then you absolutely want your communications to be backed by a queue so that you can prove temporal logic on that queue. This is what nio/io-completion-ports/async gives you. If you are using synchronous programming then you are wasting your CPU while copying data from RAM to the network card.
Furthermore, it means that your server is effectively single-threaded. With async you may still have only a single thread, yet be able to serve thousands of requests.
Say for example that a client wanted to perform a DoS attack. He would connect and send one byte of data. Your application would then be unable to receive further commands for the duration of that connection's timeout, which could be quite long. With async, you would ACK the SYN packet back, but your code would not be sitting there waiting for the full transmission.

need advice for type of TCP server to cater for this type of application

The requirements of the TCP server:
- receive from each client and send the result back to the same client (this is all the server does)
- cater for 100 clients
- speed is an important factor, i.e. even at 100 client connections it should not be laggy.
For now I have been using the C# async method, but I find that I always encounter lag at around 20 connections. By lag I mean taking almost 15-20 seconds to get the result. At around 5-10 connections, the time to get a result is almost immediate.
Actually, when the TCP server gets a message, it interacts with a DLL which does some processing and returns a result. I'm not exactly sure what the workflow behind it is, but at small scale you do not see any problem, so I thought the problem might be with my TCP server.
Right now I'm thinking of using a sync method: a while loop blocking on the accept call, spawning a new thread for each client after accept. But at 100 connections, that is definitely overkill.
I chanced upon IOCP; I'm not exactly sure about it, but it seems to be something like a connection pool, as the way it handles TCP is quite like the normal way.
For these TCP methods I am also not sure whether it is better to open and close the connection each time a message needs to be passed. On average, messages are passed from each client at around 5-10 minute intervals.
Another alternative might be to use the web (I'm looking at a generic handler) to form only one connection with the server. Any message that needs to be handled would be passed to this generic handler, which then sends and receives messages from the server.
I need advice especially from those who have done TCP at large scale. I do not have 100 PCs to test with, so it's quite hard for me. Language-wise, C# or C++ will do; I'm more familiar with C#, but will consider porting to C++ for speed.
You must be doing it wrong. I personally wrote C# based servers that could handle 1000+ connections, sending more than 1 message per second, with <10ms response time, on commodity hardware.
If you have such high response times it must be your server process that is causing blocking. Perhaps contention on locks, perhaps plain bad code, perhaps blocking on external access leading to thread pool exhaustion. Unfortunately, there are plenty of ways to screw this up, and only a few ways to get it right. There are good guidelines out there, starting with the fundamentals covered in Rick Vicik's High Performance Windows Programming articles, going over the SocketAsyncEventArgs example which covers the most performant way of writing socket apps in .Net since the advent of Socket Performance Enhancements in Version 3.5, and so on and so forth.
If you find yourself lost at the task ahead (as it seems you happen to be) I would urge you to embrace an established communication framework, perhaps WCF with a net binding, and use the declarative service model programming of WCF. This way you'll piggyback on the WCF performance. While this may not be enough for some, it will get you far enough, much further than you are right now for sure, with regard to performance.
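If you go the WCF route, the amount of code you write yourself is small. A minimal sketch (the contract name, operation and address are invented for illustration):
using System;
using System.ServiceModel;

[ServiceContract]
public interface IProcessor
{
    [OperationContract]
    string Process(string request);
}

public class Processor : IProcessor
{
    public string Process(string request)
    {
        // call into your processing DLL here and return its result
        return "result";
    }
}

class Program
{
    static void Main()
    {
        var host = new ServiceHost(typeof(Processor), new Uri("net.tcp://localhost:9000/processor"));
        host.AddServiceEndpoint(typeof(IProcessor), new NetTcpBinding(), "");
        host.Open();
        Console.WriteLine("Listening; press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
WCF then deals with connection management, threading and dispatch for you.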
I don't see why C# should be any worse than C++ in this situation - chances are that you've not yet hit upon the 'right way' to handle the incoming connections. Spawning off a separate thread for each client would certainly be a step in the right direction, assuming that workload for each thread is more I/O bound than CPU intensive. Whether you spawn off a thread per connection or use a thread pool to manage a number of threads is another matter - and something to determine through experimentation and also whilst considering whether 100 clients is your maximum!

Serial Comms programming structure in c# / net /

I'm an embedded programmer trying to do a little bit of coding for a communications app and need a quick start guide on the best / easiest way to do something.
I'm successfully sending serial data packets but need to implement some form of send/response protocol to avoid overflow on the target system and to ensure that each packet was received OK.
Right now - I have all the transmit code under a button click and it sends the whole lot without any control.
What's the best way to structure this code, i.e. sending some packets, waiting for a response, sending more, and so on until it's all done, then carrying on with the main program?
I've not used threads or callbacks or suchlike in this environment before, but I will learn - I just need a pointer to the most straightforward way to do it.
Thanks
Rob
The .NET SerialPort uses buffers; learn to work with them.
Sending packets that are (far) smaller than the send buffer can be done without threading.
Receiving can be done via the DataReceived event, but beware that it is called from another thread. You might as well start your own thread and use blocking reads from there.
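For instance, a dedicated read thread with blocking reads could look roughly like this (a sketch; port and keepReading are assumed to exist, and the parsing is up to your protocol):
using System;
using System.Threading;

var readThread = new Thread(() =>
{
    port.ReadTimeout = 500;                   // milliseconds
    while (keepReading)
    {
        try
        {
            int b = port.ReadByte();          // blocks until a byte arrives or the timeout fires
            // feed b into your packet parser here
        }
        catch (TimeoutException)
        {
            // idle - just loop again
        }
    }
});
readThread.IsBackground = true;
readThread.Start();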
The best approach depends on what your 'packets' and protocol look like.
I have long experience with serial comms, both MCU and PC-based.
I strongly advise against the single-thread-based solution: it is very convenient for quick testing, but out of the question for final releases.
You may choose among several patterns, but they are mostly shaped around a dedicated thread for the comms process and a finite state machine to parse the protocol (during receiving).
The previous answers give you an idea of how to build a simple program, but much depends on the protocol specification, the target device, the scope of the application, etc.
There are of course different ways.
I will describe a thread-based way and an async-operation-based way:
If you don't use threads, your app will block for as long as the operation is running. This is not what a user expects today. Since you are talking about a series of send and receive commands, I would recommend running the protocol on a worker thread and then waiting for it to finish. You might also add an Abort button if necessary. Set the ReadTimeout values, and at every receive be ready to catch the timeout exception! An introduction to creating such a worker thread is here.
If you want to, use async send/receive functions instead of a thread (e.g. NetworkStream.BeginRead etc.). But this is more difficult because you have to manage state between the calls: I recommend using a finite state machine for that. In effect you create an enumeration (e.g. ProtocolState) and change the state whenever an operation has completed. You can then write a function that performs the next step of the protocol with a simple switch/case statement. Since you are working with a remote entity (in your case the serial target system), you always have to consider that the device may not be working, or may stop working during the protocol. Handle this with a timeout timer (e.g. set to 2000 ms), started after sending each command (assuming each command gets a reply in your protocol). Stop it when the reply is received successfully, or fail on timeout.
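A rough sketch of that state-machine idea (the enum values, the 1-byte ACK and the 2-second timeout are made up for illustration - adapt them to your protocol):
using System.IO.Ports;
using System.Timers;

enum ProtocolState { Idle, AwaitingAck, Done, Failed }

class PacketSender
{
    readonly SerialPort _port;
    readonly Timer _timeout = new Timer(2000);   // per-command timeout, 2000 ms
    readonly byte[][] _packets;
    ProtocolState _state = ProtocolState.Idle;
    int _next;

    public PacketSender(SerialPort port, byte[][] packets)
    {
        _port = port;
        _packets = packets;
        _port.DataReceived += OnDataReceived;    // note: raised on a thread-pool thread
        _timeout.AutoReset = false;
        _timeout.Elapsed += (s, e) => _state = ProtocolState.Failed;
    }

    public void Start() { SendNext(); }

    void SendNext()
    {
        if (_next == _packets.Length) { _state = ProtocolState.Done; return; }
        _port.Write(_packets[_next], 0, _packets[_next].Length);
        _state = ProtocolState.AwaitingAck;
        _timeout.Start();
    }

    void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        int b = _port.ReadByte();                // assume the device replies with a single ACK byte (0x06)
        if (_state == ProtocolState.AwaitingAck && b == 0x06)
        {
            _timeout.Stop();
            _next++;
            SendNext();
        }
    }
}
(A real implementation would also need locking between the timer callback and DataReceived, plus a retry or error path when the state goes to Failed.)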
You could also implement low-level handshaking on the serial port; set the serial port's Handshake property to RTS/CTS or XON/XOFF.
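For example (a sketch - the port name and settings are illustrative):
using System.IO.Ports;

var port = new SerialPort("COM1", 115200, Parity.None, 8, StopBits.One);
port.Handshake = Handshake.RequestToSend;    // hardware RTS/CTS flow control
// or: port.Handshake = Handshake.XOnXOff;   // software XON/XOFF flow control
port.Open();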
Otherwise (or in addition), use a background worker thread. For simple threads, I like a Monitor.Wait/Pulse mechanism for managing the thread.
I have some code that does read-only serial communications in a thread; email me and I'll be happy to send it to you.
I wasn't sure from your question if you were designing both the PC and embedded sides of the communication link, if you are you might find this SO question interesting.

Tips / techniques for high-performance C# server sockets

I have a .NET 2.0 server that seems to be running into scaling problems, probably due to poor design of the socket-handling code, and I am looking for guidance on how I might redesign it to improve performance.
Usage scenario: 50 - 150 clients, high rate (up to 100s / second) of small messages (10s of bytes each) to / from each client. Client connections are long-lived - typically hours. (The server is part of a trading system. The client messages are aggregated into groups to send to an exchange over a smaller number of 'outbound' socket connections, and acknowledgment messages are sent back to the clients as each group is processed by the exchange.) OS is Windows Server 2003, hardware is 2 x 4-core X5355.
Current client socket design: A TcpListener spawns a thread to read each client socket as clients connect. The threads block on Socket.Receive, parsing incoming messages and inserting them into a set of queues for processing by the core server logic. Acknowledgment messages are sent back out over the client sockets using async Socket.BeginSend calls from the threads that talk to the exchange side.
Observed problems: As the client count has grown (now 60-70), we have started to see intermittent delays of up to 100s of milliseconds while sending and receiving data to/from the clients. (We log timestamps for each acknowledgment message, and we can see occasional long gaps in the timestamp sequence for bunches of acks from the same group that normally go out in a few ms total.)
Overall system CPU usage is low (< 10%), there is plenty of free RAM, and the core logic and the outbound (exchange-facing) side are performing fine, so the problem seems to be isolated to the client-facing socket code. There is ample network bandwidth between the server and clients (gigabit LAN), and we have ruled out network or hardware-layer problems.
Any suggestions or pointers to useful resources would be greatly appreciated. If anyone has any diagnostic or debugging tips for figuring out exactly what is going wrong, those would be great as well.
Note: I have the MSDN Magazine article Winsock: Get Closer to the Wire with High-Performance Sockets in .NET, and I have glanced at the Kodart "XF.Server" component - it looks sketchy at best.
Socket I/O performance has improved in the .NET 3.5 environment. You can use ReceiveAsync/SendAsync instead of BeginReceive/BeginSend for better performance. Check this out:
http://msdn.microsoft.com/en-us/library/bb968780.aspx
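A compact sketch of the ReceiveAsync pattern (buffer size and class name are illustrative): one reusable SocketAsyncEventArgs per connection, so nothing is allocated per read.
using System.Net.Sockets;

static class AsyncReceive
{
    public static void Start(Socket client)
    {
        var args = new SocketAsyncEventArgs();
        args.SetBuffer(new byte[8192], 0, 8192);
        args.UserToken = client;
        args.Completed += OnCompleted;
        if (!client.ReceiveAsync(args))          // false means it completed synchronously
            OnCompleted(client, args);
    }

    static void OnCompleted(object sender, SocketAsyncEventArgs args)
    {
        var client = (Socket)args.UserToken;
        if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
        {
            client.Close();
            return;
        }
        // handle args.Buffer[args.Offset .. args.Offset + args.BytesTransferred) here

        if (!client.ReceiveAsync(args))          // reuse the same args for the next read
            OnCompleted(client, args);
    }
}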
A lot of this has to do with many threads running on your system and the kernel giving each of them a time slice. The design is simple, but does not scale well.
You should probably look at using Socket.BeginReceive, which will execute on the .NET thread pool (you can configure the number of threads it uses), and then push onto a queue from the asynchronous callback (which can run on any of the .NET pool threads). This should give you much higher performance.
A thread per client seems massive overkill, especially given the low overall CPU usage here. Normally you would want a small pool of threads to service all clients, using BeginReceive to wait for work asynchronously - then simply dispatch the processing to one of the workers (perhaps simply by adding the work to a synchronized queue upon which all the workers are waiting).
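Something along these lines (a sketch, not production code - the framing and queue payload are simplified):
using System;
using System.Collections.Generic;
using System.Net.Sockets;
using System.Threading;

class ClientConnection
{
    public Socket Socket;
    public byte[] Buffer = new byte[4096];
}

class Dispatcher
{
    readonly Queue<byte[]> _work = new Queue<byte[]>();   // shared, lock-protected work queue

    public void StartReceiving(ClientConnection c)
    {
        c.Socket.BeginReceive(c.Buffer, 0, c.Buffer.Length, SocketFlags.None, OnReceive, c);
    }

    void OnReceive(IAsyncResult ar)
    {
        var c = (ClientConnection)ar.AsyncState;
        int n = c.Socket.EndReceive(ar);
        if (n <= 0) { c.Socket.Close(); return; }

        var msg = new byte[n];
        Array.Copy(c.Buffer, msg, n);
        lock (_work)
        {
            _work.Enqueue(msg);
            Monitor.Pulse(_work);           // wake one waiting worker
        }
        StartReceiving(c);                  // post the next read immediately
    }

    // A small, fixed number of these run for the whole process,
    // instead of one thread per client.
    public void WorkerLoop()
    {
        while (true)
        {
            byte[] msg;
            lock (_work)
            {
                while (_work.Count == 0) Monitor.Wait(_work);
                msg = _work.Dequeue();
            }
            // parse msg and hand it to the core server logic here
        }
    }
}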
I am not a C# guy by any stretch, but for high-performance socket servers the most scalable solution is to use I/O Completion Ports with a number of active threads appropriate for the CPU(s) the process is running on, rather than using the one-thread-per-connection model.
In your case, with an 8-core machine you would want 16 total threads with 8 running concurrently. (The other 8 are basically held in reserve.)
Socket.BeginConnect and Socket.BeginAccept are definitely useful. I believe they use the ConnectEx and AcceptEx calls in their implementation. These calls wrap the initial connection negotiation and data transfer into one user/kernel transition. Since the initial send/receive buffer is already ready, the kernel can just send it off - either to the remote host or to userspace.
They also have a queue of listeners/connectors ready, which probably gives a bit of a boost by avoiding the latency involved with userspace accepting/receiving a connection and handing it off (and all the user/kernel switching).
To use BeginConnect with a buffer it appears that you have to write the initial data to the socket before connecting.
As others have suggested, the best way to implement this would be to make the client-facing code all asynchronous. Use BeginAcceptTcpClient() on the TcpListener so that you don't have to manually spawn a thread. Then use BeginRead()/BeginWrite() on the underlying network stream that you get from the accepted TcpClient.
However, there is one thing I don't understand here. You said that these are long-lived connections and a large number of clients. Assuming that the system has reached steady state, with your max clients (say 70) connected, you have 70 threads listening for client packets. The system should still be responsive - unless your application has memory/handle leaks and you are running out of resources, so that your server is paging. I would put a timer around the call to Accept() where you kick off a client thread and see how much time that takes. I would also start Task Manager and PerfMon and monitor "Non Paged Pool", "Virtual Memory" and "Handle Count" for the app, to see whether the app is in a resource crunch.
While it is true that going async is the right way to go, I am not convinced it will really solve the underlying problem. I would monitor the app as I suggested and make sure there are no intrinsic problems of leaking memory and handles. In this regard, "BigBlackMan" above was right - you need more instrumentation to proceed. Don't know why he was downvoted.
Random intermittent ~250msec delays might be due to the Nagle algorithm used by TCP. Try disabling that and see what happens.
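Disabling it is a one-liner per socket (a sketch - clientSocket here stands for each accepted client-facing socket):
clientSocket.NoDelay = true;
// or equivalently:
clientSocket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);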
One thing I would want to rule out is something as simple as the garbage collector running. If all your messages are on the heap, you are generating 10000 objects a second.
Take a read of Garbage Collection every 100 seconds
The only solution is to keep your messages off the heap.
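One common way to take the pressure off (a sketch - the Message class is invented for illustration) is to pre-allocate the message objects and recycle them, as the pooling advice earlier on this page suggests, so the collector has almost nothing new to chase:
using System.Collections.Generic;

class Message
{
    public byte[] Payload = new byte[256];
    public int Length;
}

class MessagePool
{
    readonly Stack<Message> _free = new Stack<Message>();

    public MessagePool(int size)
    {
        for (int i = 0; i < size; i++) _free.Push(new Message());
    }

    public Message Rent()
    {
        lock (_free)
        {
            // fall back to allocating only if the pool runs dry
            return _free.Count > 0 ? _free.Pop() : new Message();
        }
    }

    public void Return(Message m)
    {
        lock (_free) _free.Push(m);
    }
}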
I had the same issue 7 or 8 years ago, with 100 ms to 1 sec pauses; the problem was garbage collection. I had about 400 MB in use out of 4 GB, but there were a lot of objects.
I ended up storing the messages in C++, but you could use the ASP.NET cache (which used to use COM, keeping them out of the managed heap).
I don't have an answer but to get more information I'd suggest sprinkling your code with timers and logging avg and max time taken for suspect operations like adding to the queue or opening a socket.
At least that way you will have an idea of what to look at and where to begin.
