Multi-Threaded TCP Listener (Server) in C#

Intro:
I have developed a server (TCP listener) program in C# that runs on a specific (constant) IP address and port. The server receives a data packet sent by a client program, processes it, and sends a response packet back on the same socket. The number of client programs for this server may run into the hundreds:
Code Sample
while (true)
{
    socket = Host.AcceptSocket();   // blocks until the next client connects
    int len = socket.Receive(msgA);
    performActivities(msgA);        // may take up to 5 seconds
    socket.Send(msgB);
    socket.Close();
}
Problem:
Due to some critical business requirements, processing may take up to 5 seconds, so other requests arriving during this time are queued. I need to avoid that, so that every request is served in no more than 5 seconds.
Query:
I can make it multi-threaded, but (pardon me if you find me a novice):
How will one socket receive another packet from a different client if it is still held open by a previous thread?
When serving multiple requests concurrently, how can I make sure each response is sent back to the respective client?

Building an efficient, multi-threaded socket server requires strong knowledge and skills in that area. My proposal: instead of trying to build your own TCP server from scratch, use one of the existing libraries that have already solved this problem. A few that come to mind are:
DotNetty, used in Azure IoT services.
System.IO.Pipelines, which is experimental but already quite fast.
Akka.Streams TCP stream.
Each one of those libs covers things like:
Management of a TCP connection lifecycle.
Efficient management of byte buffers. Allocating new byte[] for every package is highly inefficient and causes a lot of GC pressure.
Safe access to socket API (sockets are not thread-safe by default).
Buffering of incoming packets.
Abstraction in the form of handlers/pipes/stages, which allow you to compose and manipulate the binary payload for further processing. This is particularly useful e.g. when you want to operate on the incoming data in terms of messages - by default TCP is a binary stream, and it doesn't know where one message inside the pipeline ends and the next one starts.
Writing a production-ready TCP server is a tremendous amount of work. Unless you're an expert in network programming with very specific requirements, you should never write one from scratch.
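To answer the two questions directly, a minimal sketch may help (illustrative only - no framing, buffer pooling, or error handling, so it is nowhere near the production-ready server the libraries above provide; the port number and method names are assumptions). The key point: AcceptSocket() returns a new socket for each client, so each handler owns its own connection and naturally replies to the right client.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class SketchServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // example port
        listener.Start();
        while (true)
        {
            // Blocks until a client connects, then returns a NEW per-client socket.
            Socket socket = listener.AcceptSocket();
            // Hand the per-client socket to the thread pool; the listener
            // immediately loops back to accept the next client.
            Task.Run(() => Handle(socket));
        }
    }

    static void Handle(Socket socket)
    {
        try
        {
            var msgA = new byte[8192];
            int len = socket.Receive(msgA);
            byte[] msgB = PerformActivities(msgA, len); // may take up to 5 seconds
            socket.Send(msgB); // goes back on this client's own socket
        }
        finally
        {
            socket.Close();
        }
    }

    // Hypothetical placeholder for the actual business processing.
    static byte[] PerformActivities(byte[] request, int len) => new byte[] { 1 };
}
```

Because the listening socket and the per-client sockets are distinct objects, no thread ever "holds open" the socket another client needs.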

Related

High-Availability TCP server application

In my project I have a cloud hosted virtual machine running a C# application which needs to:
accept TCP connection from several external clients (approximately 500)
receive data asynchronously from the connected clients (not high frequency, approximately 1 message per minute)
do some processing on received data
forward received data to other actors
reply back to connected clients and possibly do some asynchronous sending (based on internal time-checks)
The design seems quite straightforward to me. I provide a listener that accepts incoming TCP connections; when a new connection is established, a new thread is spawned. That thread runs in a loop (performing activities 2 to 5) and checks that the associated socket is alive (if the socket is dead, the thread exits the loop and eventually terminates; later, a new connection will be attempted by the external client the socket belonged to).
So now the issue: for a limited number of external clients (I would say 200-300) everything runs smoothly, but as that number grows (or when the clients send data at a higher frequency) the communication gets very slow and congested.
I was thinking about some better design, for example:
using Tasks instead of Threads
using ThreadPool
replace 1Thread1Socket with something like 1Thread10Socket
or even some scaling strategies:
open two different TCP listeners (different port) within the same application (reconfiguring clients so that half of them target each listener)
provide two identical application with two different TCP listeners (different port) on the same virtual machine
set up two different virtual machines with the same application running on each of them (reconfiguring clients so that half of them target each virtual machine address)
Finally, the questions: Is the current design poor or naive? Do you see any major flaw in the way I handle the communication? Do you have any more robust and efficient option (among those mentioned above, or any additional one)?
Thanks
The number of listeners is unlikely to be a limiting factor. Here at Stack Overflow we handle ~60k sockets per instance, and the only reason we need multiple listeners is so we can split the traffic over multiple ports to avoid ephemeral port exhaustion at the load balancer. Likewise, I should note that those 60k per-instance socket servers run at basically zero CPU, so: it is premature to think about multiple exes, VMs, etc. That is not the problem. The problem is the code, and distributing a poor socket infrastructure over multiple processes just hides the problem.
Writing high performance socket servers is hard, but the good news is: you can avoid most of this. Kestrel (the ASP.NET Core http server) can act as a perfectly good TCP server, dealing with most of the horrible bits of async, sockets, buffer management, etc for you, so all you have to worry about is the actual data processing. The "pipelines" API even deals with back-buffers for you, so you don't need to worry about over-read.
An extensive walkthrough of this is in my 3-and-a-bit part blog series starting here - it is simply way too much information to try and post here. But it links through to a demo server - a dummy redis server hosted via Kestrel. It can also be hosted without Kestrel, using Pipelines.Sockets.Unofficial, but... frankly I'd use Kestrel. The server shown there is broadly similar (in terms of broad initialization - not the actual things it does) to our 60k-per-instance web-socket tier.
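A minimal sketch of the Kestrel approach the answer describes, under stated assumptions: the port number and handler name are examples, and the handler just echoes bytes back in place of real processing.

```csharp
using System.Buffers;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Connections;
using Microsoft.AspNetCore.Hosting;

// A ConnectionHandler receives each raw TCP connection as a duplex pipe;
// Kestrel handles accepting, buffering, and back-pressure for you.
class EchoHandler : ConnectionHandler
{
    public override async Task OnConnectedAsync(ConnectionContext connection)
    {
        while (true)
        {
            var result = await connection.Transport.Input.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;
            foreach (var segment in buffer)
                await connection.Transport.Output.WriteAsync(segment); // echo back
            connection.Transport.Input.AdvanceTo(buffer.End);
            if (result.IsCompleted) break; // client closed the connection
        }
    }
}

class Program
{
    static void Main(string[] args)
    {
        var builder = WebApplication.CreateBuilder(args);
        builder.WebHost.ConfigureKestrel(kestrel =>
            kestrel.ListenAnyIP(9000, listen => listen.UseConnectionHandler<EchoHandler>()));
        builder.Build().Run();
    }
}
```

With this shape, scaling to hundreds of connections is handled by Kestrel's async I/O rather than by one thread per socket.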

Performance of SignalR for high-frequency messaging

I am currently self-hosting a SignalR server in a WPF application. In this application I need to call a method on the client at least 500 times per second. Right now I call the client-side method on each change, and the CPU usage on the client side is way too high. The object I'm transferring contains about 20 base properties.
My requirement is that I cannot lose or skip any messages. But I can send the notifications as a list in bursts each second.
I'm not sure which is going to perform the best: short and fast or long and rare.
I would buffer the information server-side (storing only changes) and wait for the client to ask for new data. Each time the client asks, the server sends the information in one (binary) packet. When the client has finished processing the data, it asks for new data again. This way you avoid writing too much data to the socket, so it doesn't block. The length of the queue is an indicator of the client's transport/processing speed. When the queue grows too large (server-side), your network isn't fast enough or your client can't process that much data.
I use this method in my own software, which is connected to a PLC sending the current states of robot angles/positions.
Another way is using UDP, but it is lossy, so it is not very useful in your situation.
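The pull model described above can be sketched roughly as follows (the class and method names are illustrative, not a real SignalR API): the producer enqueues every change so nothing is lost, and the client drains the whole backlog in one batch whenever it is ready.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

class UpdateBuffer<T>
{
    private readonly ConcurrentQueue<T> _pending = new ConcurrentQueue<T>();

    // Producer side: called on every change, up to 500 times per second.
    // Every change is kept, so no message is skipped.
    public void Publish(T change) => _pending.Enqueue(change);

    // Consumer side: the client calls this when it is ready for more data;
    // it receives everything accumulated since its last request as one batch.
    public List<T> Drain()
    {
        var batch = new List<T>();
        while (_pending.TryDequeue(out T item))
            batch.Add(item);
        return batch;
    }

    // Queue length indicates whether the network/client is keeping up.
    public int Backlog => _pending.Count;
}
```

Monitoring Backlog gives you the "queue grows too large" signal mentioned above.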

Disconnecting a socket after a certain amount of time with no data received

I'm making my server disconnect sockets that send no data after a certain amount of time, like 20 seconds.
I wonder whether working with timers is a good approach for this, or whether there is something built into the socket library? Running a timer on the server for every socket makes it heavy.
Would it be unsafe to make the client program handle that instead? For example, every client disconnects itself after not sending data for a while.
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer ("a connection without data send and receive, I think it's gonna waste your threads"), it points closer to your real problem: the number of connections your server has should have no relation to how many threads the server uses to service those connections. So the only thing an open connection "wastes" is basically a bit of memory (depending on the amount of memory you need per connection, plus the socket cost with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of "load", you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into issues with those, you probably want to drop TCP anyway and rewrite everything in UDP (or preferably, some ready-made solution on top of UDP).
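The "last data received time" idea can be sketched like this, assuming one shared timer for all sockets rather than one timer per socket (the class name, sweep interval, and 20-second limit are illustrative):

```csharp
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;
using System.Threading;

class IdleReaper
{
    private readonly ConcurrentDictionary<Socket, DateTime> _lastSeen =
        new ConcurrentDictionary<Socket, DateTime>();
    private readonly TimeSpan _limit = TimeSpan.FromSeconds(20);

    // Call this whenever data arrives on a socket.
    public void Touch(Socket s) => _lastSeen[s] = DateTime.UtcNow;

    // One timer sweeps all sockets every 5 seconds, closing any socket
    // that has been idle longer than the limit.
    public Timer Start() => new Timer(_ =>
    {
        foreach (var entry in _lastSeen)
        {
            if (DateTime.UtcNow - entry.Value > _limit)
            {
                entry.Key.Close();
                _lastSeen.TryRemove(entry.Key, out _);
            }
        }
    }, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
}
```

The single sweep timer avoids the per-socket-timer cost the question worries about, and the dictionary is the "socket state" the answer refers to.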
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server, and when you have finished retrieving all the data you need, close the socket (on the client side).
The server will eventually notice that the socket's resources have been released, but you can check the socket's Connected property to release the resources sooner.
When a client disconnects from the server, the server gets a disconnect event. In socket.io-style JavaScript it looks like:
socket.on('disconnect', function () {
    // Disconnect event handling
});
On the client side you can also detect the disconnect event, in which case you need to reconnect to the server.

Total upload speed is slower when all connections are accepted by one TcpListener

I've recently encountered a strange situation in C# .NET Framework 4.0:
In a simple program, I create a TcpListener, specify its local port, start it, and use the async accept function to receive incoming connection requests.
Once pending connections come in, the server accepts the TcpClient in the async callback and records it in a container (to be more specific, a List<TcpClient>).
And I wrote another simple client program which just connects to the server once it starts and then calls the async receive function.
After all clients are connected, the server starts a group of parallel tasks using System.Threading.Tasks.Parallel.ForEach().
In each task, I use the TcpClient stored in that list to send data to the corresponding client. All TcpClients are sending data at the same time (I checked the client side, and they are all receiving data). The data is just a byte[8192] filled with random data generated when the server program starts. I make the server send it repeatedly.
The client's receive callback is simple. Once data arrives, the client just ignores it and issues another async receive.
The test environment is a 1Gbps LAN, one server and several clients.
The result is: no matter how many clients (from 3 ~ 8) are connected to the server, the server's total upload speed never exceeds 13MByte/s.
Then I tried another way:
I create a TcpListener at client-side also. Once the client connects to the server, the server will connect to the client's listening port also. Then the server will store this outgoing connection into the list instead of the incoming one.
This time, the test result changes a lot: when 3 clients are receiving data from the server, the total upload speed of server is nearly 30MByte/s; with 5 clients, the total upload speed goes up to nearly 50MBytes/s.
Though this 10 MByte/s-per-client limit may be due to hardware or network configuration, it is still far better than the case above.
Anyone know why?
I don't know the cause of this behavior, but as a workaround I suggest sending much bigger buffers, like 1 MB (or at least 64 KB). On a 1 Gbps LAN your app is likely to be more efficient if it sends bigger chunks (and fewer packets). Also, enable jumbo frames.
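A rough sketch of that workaround, assuming a TcpClient-based sender (the helper names are illustrative): enlarge the OS send buffer and write in large chunks instead of 8 KB pieces.

```csharp
using System;
using System.Net.Sockets;

static class BigChunkSender
{
    public static void Configure(TcpClient client)
    {
        client.SendBufferSize = 1024 * 1024; // let the OS buffer up to 1 MB
        client.NoDelay = false;              // allow coalescing into full packets
    }

    public static void SendAll(NetworkStream stream, byte[] payload)
    {
        const int chunk = 1024 * 1024;       // 1 MB writes, per the suggestion above
        for (int offset = 0; offset < payload.Length; offset += chunk)
        {
            int count = Math.Min(chunk, payload.Length - offset);
            stream.Write(payload, offset, count);
        }
    }
}
```

Whether the 1 MB figure is optimal on your LAN is something to measure, not assume.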
Don't use threads or Tasks for the processing; it will hurt your performance.
I've made a framework which will help you develop performant networking applications without having to care about the actual IO processing.
http://blog.gauffin.org/2012/05/griffin-networking-a-somewhat-performant-networking-library-for-net/

Asynchronous multi-direction server-client communication over the same open socket?

I have a client-server app where the client is on a Windows Mobile 6 device, written in C++ and the server is on full Windows and written in C#.
Originally, I only needed it to send messages from the client to the server, with the server only ever sending back an acknowledgement that it received the message. Now, I would like to update it so that the server can actually send a message to the client to request data. Since I currently have it set up so that the client is only in receive mode after it sends data to the server, the server cannot send a request at any time; it would have to wait for client data. My first thought was to create another thread on the client with a separate open socket listening for server requests... just like the server already has with respect to the client. Is there a way, within the same thread and using the same socket, to allow the server to send requests at any time?
Can you use something to the effect of WaitForMultipleObjects() and pass it a receive buffer and an event that tells it there is data to be sent?
When I needed to write an application with a client-server model where the clients could leave and enter whenever they want (I assume that's also the case for your application, as you use mobile devices), I made sure that the clients sent an "online" message to the server, indicating they were connected and ready to do whatever they needed to do.
At that point the server could send messages back to the client through the same open connection.
Also, though I don't know if it is applicable for you, I had some sort of heartbeat the clients sent to the server, letting it know they were still online. That way the server knew when a client was forcibly disconnected from the network and could mark that client as offline again.
Using asynchronous communication is totally possible in a single thread!
There is a common design pattern in network software development called the reactor pattern (look at this book). Some well-known network libraries provide an implementation of this pattern (look at ACE).
Briefly, the reactor is an object you register all your sockets with, and then you wait for something. If something happens (new data arrives, a connection closes...), the reactor notifies you. And of course, you can use only one socket to send and receive data asynchronously.
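For the C# side, the reactor idea can be sketched with Socket.Select: one thread watches many sockets and reacts only when one becomes readable (the class name, buffer size, and 1-second timeout are illustrative; a real reactor would also watch for writability and errors).

```csharp
using System.Collections.Generic;
using System.Net.Sockets;

class Reactor
{
    private readonly List<Socket> _registered = new List<Socket>();

    public void Register(Socket s) => _registered.Add(s);

    // Call repeatedly from a single thread.
    public void RunOnce()
    {
        if (_registered.Count == 0) return;
        var readable = new List<Socket>(_registered);
        // Select removes sockets with nothing to read; blocks up to 1 second.
        Socket.Select(readable, null, null, 1_000_000);
        foreach (Socket s in readable)
        {
            var buffer = new byte[4096];
            int len = s.Receive(buffer);
            if (len == 0) _registered.Remove(s); // peer closed the connection
            else OnData(s, buffer, len);         // dispatch to a handler
        }
    }

    // Override to handle incoming data; the same socket can also be
    // written to here, giving two-way traffic on one socket, one thread.
    protected virtual void OnData(Socket s, byte[] data, int len) { }
}
```

This is the same shape ACE's reactor provides, just reduced to the bare minimum.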
I'm not clear on whether or not you're wanting to add the asynchronous bits to the server in C# or the client in C++.
If you're talking about doing this in C++, desktop Windows platforms can do socket I/O asynchronously through the API's that use overlapped I/O. For sockets, WSASend, WSARecv both allow async I/O (read the documentation on their LPOVERLAPPED parameters, which you can populate with events that get set when the I/O completes).
I don't know if Windows Mobile platforms support these functions, so you might have to do some additional digging.
Check out asio. It is a cross-platform C++ library for asynchronous I/O. I am not sure if this would be useful for the server (I have never tried to link a standard C++ DLL to a C# project), but for the client it would be useful.
We use it in our application, and it solved most of our I/O concurrency problems.