Performance of SignalR for high-frequency messaging - C#

I am currently self-hosting a SignalR server in a WPF application. In this application I need to call a method on the client at least 500 times per second. Right now I'm calling the client method on each change, and the CPU usage on the client side is far too high. The object I'm transferring contains about 20 basic properties.
My requirement is that I cannot lose or skip any messages, but I can batch the notifications into a list and send them in bursts every second.
I'm not sure which will perform better: short, frequent messages or long, infrequent ones.

I would buffer the information server-side (storing only changes) and wait for the client to ask for new data. Each time the client asks, the server sends everything accumulated so far in one binary packet. When the client has finished processing the data, it asks again. This prevents writing too much data to the socket, so it doesn't block. The length of the queue indicates how well the client's transport and processing are keeping up: when the queue grows too large on the server, either your network isn't fast enough or your client can't process that much data.
I use this method in my own software, which is connected to a PLC sending the current robot angles/positions.
Another option is UDP, but it is lossy, so not very useful in your situation.
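A rough sketch of the server-side buffer described above (the ChangeBuffer type and its member names are illustrative, not part of SignalR or any framework):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Hypothetical sketch of the buffered, client-pull pattern: changes are
// queued as they occur, and the client pulls everything accumulated since
// its last request in a single batch.
public class ChangeBuffer<T>
{
    private readonly ConcurrentQueue<T> _pending = new ConcurrentQueue<T>();

    // Called on every change, e.g. from the PLC polling loop.
    public void Enqueue(T change) => _pending.Enqueue(change);

    // Called when the client asks for new data: drains the queue so the
    // whole backlog goes out in one packet.
    public List<T> DrainAll()
    {
        var batch = new List<T>();
        while (_pending.TryDequeue(out T item))
            batch.Add(item);
        return batch;
    }

    // A steadily growing count means the client or network can't keep up.
    public int PendingCount => _pending.Count;
}
```

The batch would then be serialized to one binary packet; monitoring PendingCount gives the backpressure signal described above.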

Related

Multi-Threaded TCP Listener (Server) in C#

Intro:
I have developed a server (TCP listener) program in C# which runs at a specific (constant) IP address and port. This server receives data packets sent by client programs, processes them, and sends a data packet back on the same socket. The number of clients for this server may be in the hundreds:
Code Sample
// Single-threaded accept loop: the next client cannot be accepted
// until the current request has been fully processed.
while (true)
{
    socket = Host.AcceptSocket();
    int len = socket.Receive(msgA);
    performActivities(msgA); // may take up to 5 seconds
    socket.Send(msgB);
    socket.Close();
}
Problem:
Due to some critical business requirements, processing may take up to 5 seconds, so other requests received during this time are queued. I need to avoid this queuing so that every request is served within 5 seconds.
Query:
I can make it multi-threaded, but (pardon me if you find me a novice):
how will one socket receive another packet from a different client if it is still held open by a previous thread?
when serving multiple requests, how can I make sure each response goes back to the correct client?
Building an efficient, multi-threaded socket server requires strong knowledge and skills in that area. My proposal: instead of trying to build your own TCP server from scratch, use one of the existing libraries that have already solved this problem. A few that come to mind are:
DotNetty, used in Azure IoT services.
System.IO.Pipelines, which is experimental but already quite fast.
Akka.Streams TCP streams.
Each of these libraries covers things like:
Management of a TCP connection lifecycle.
Efficient management of byte buffers: allocating a new byte[] for every packet is highly inefficient and causes a lot of GC pressure.
Safe access to socket API (sockets are not thread-safe by default).
Buffering of incoming packets.
Abstractions in the form of handlers/pipes/stages that let you compose and manipulate the binary payload for further processing. This is particularly useful when you want to operate on the incoming data in terms of messages: by default, TCP is a binary stream, and it doesn't know where one message inside the pipeline ends and the next begins.
Writing a production-ready TCP server is a tremendous amount of work. Unless you're an expert in network programming with very specific requirements, you should never write one from scratch.
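Because TCP has no built-in message boundaries, every one of those libraries implements some form of framing. A minimal sketch of the common length-prefix scheme, assuming a 4-byte length header (the Framing class here is illustrative, not part of any of the libraries above):

```csharp
using System;
using System.IO;

// Minimal length-prefix framing sketch: each message is preceded by a
// 4-byte length, so the reader knows exactly where it ends even though
// TCP delivers an undifferentiated byte stream.
public static class Framing
{
    public static void WriteFrame(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length);
        stream.Write(header, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadFrame(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExactly(stream, length);
    }

    // Stream.Read may return fewer bytes than requested, so loop until
    // the full count has arrived.
    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```

The libraries add buffer pooling, partial-read handling, and pipeline stages on top of this basic idea.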

Transferring data from server to multiple clients sequentially

I have an application in which I want to include a networking module so it can send data to multiple clients (different machines on the same WiFi network). The application generates image data for each client every hour and has to send it to 10 different clients on that network. When the transfer to all the clients finishes, all the clients have to display the data on their screens simultaneously.
I've not developed any networking modules earlier and have minimal experience in this. My initial search just showed that I probably should be transferring the data to all the clients first somehow and then broadcast a signal for the clients for them to show the data simultaneously. I wanted to get an idea of the approach that should be followed for something like this - how would the server send the image data to all the clients?
I think my case is more time-critical than reliability-critical, so I'd be inclined to use UDP for faster transfers. I understand that I can send the data to the clients in a queuing fashion, but is there a mechanism for knowing which clients on the network are waiting for data? Is there a client-registers-with-server kind of scheme through which I can keep track of all the clients the data has to be sent to? Is this possible with UDP?
Through my application I'll be able to create a UDP server socket on a specific port, but how will multiple clients notify my server of their availability on the network (obviously they can't all notify at once), and how do I then keep track of their host addresses/ports?
Just another approach: create a number of threads equal to the number of clients, drawn from a thread pool. One thread is allocated per client and makes a TCP connection to send the image; each client runs a TCP listener that waits for data arriving from the server.
I advocate TCP over UDP whenever the data needs to be sent reliably; otherwise UDP would be sufficient. In your case the image data must arrive intact, so use TCP. Since you are dealing with multiple clients, it pays to stay responsive: divide the data into chunks and send them over the network using round-robin or shortest-first scheduling over a queue of client requests. On the client side, collect the data and render it as it arrives rather than waiting for the whole transfer. To achieve progressive rendering, use PNG or JPEG file formats. Use multithreading if needed.
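The chunked, round-robin delivery suggested above could be sketched like this (the 8 KB chunk size and the Action<byte[]> send delegate are assumptions for illustration; a real implementation would write each chunk to the client's TCP stream):

```csharp
using System;
using System.Collections.Generic;

// Sketch: split the image into fixed-size chunks and hand each chunk to
// every client in turn, so no single slow client starves the others of
// the server's attention.
public static class ChunkedSender
{
    public const int ChunkSize = 8192; // assumed chunk size

    public static List<byte[]> Split(byte[] data)
    {
        var chunks = new List<byte[]>();
        for (int offset = 0; offset < data.Length; offset += ChunkSize)
        {
            int size = Math.Min(ChunkSize, data.Length - offset);
            byte[] chunk = new byte[size];
            Array.Copy(data, offset, chunk, 0, size);
            chunks.Add(chunk);
        }
        return chunks;
    }

    // Each Action<byte[]> stands in for one client's send routine
    // (hypothetical abstraction over the per-client TCP write).
    public static void SendRoundRobin(
        byte[] data, IList<Action<byte[]>> clients)
    {
        foreach (var chunk in Split(data))
            foreach (var send in clients)
                send(chunk);
    }
}
```

Progressive rendering on the client side then works naturally, since each client receives the image front-to-back in small increments.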

Total upload speed is slower when all connections are accepted by one TcpListener

I've recently encountered a strange situation in C# .NET Framework 4.0:
In a simple program, I create a TcpListener, specify its local port, start it, and use the async accept function to receive incoming connection requests.
Once pending connections arrive, the server accepts the TcpClient in the async callback and records it in a container (to be more specific, a List<TcpClient>).
And I write another simple client program which just connects to the server once it starts and then calls async receive function.
After all clients are connected, the server starts a group of parallel tasks using System.Threading.Tasks.Parallel.ForEach().
In each task, I use the TcpClient stored in that list to send data to the corresponding client. All TcpClients are sending data at the same time (I checked the client side and they are all receiving data). The data is just a byte[8192] filled with random data generated when the server program starts, which the server sends repeatedly.
The client's receive callback is simple. Once data arrives, the client just ignores the data and run another async receive function.
The test environment is a 1Gbps LAN, one server and several clients.
The result is: no matter how many clients (from 3 ~ 8) are connected to the server, the server's total upload speed never exceeds 13MByte/s.
Then I tried another way:
I create a TcpListener at client-side also. Once the client connects to the server, the server will connect to the client's listening port also. Then the server will store this outgoing connection into the list instead of the incoming one.
This time, the test result changes a lot: when 3 clients are receiving data from the server, the total upload speed of server is nearly 30MByte/s; with 5 clients, the total upload speed goes up to nearly 50MBytes/s.
Though this 10 MByte/s-per-client limit may be due to hardware or network configuration, it is still far better than the case above.
Anyone know why?
I don't know the cause of this behavior, but as a workaround I suggest sending much bigger buffers, like 1 MB (or at least 64 KB). On a 1 Gbps LAN your app is likely to be more efficient sending bigger chunks (and fewer packets). Also, enable jumbo frames.
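A sketch of the bigger-buffer workaround: raise the socket's send buffer and write in large chunks (the 1 MB figure follows the suggestion above and is not a measured optimum; BigBufferSender is a hypothetical helper):

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// Sketch: send in 1 MB chunks. Fewer, larger writes mean fewer syscalls
// and give the TCP stack bigger runs of data to coalesce into packets.
public static class BigBufferSender
{
    public const int ChunkSize = 1024 * 1024; // 1 MB, per the suggestion

    // Raise the OS-level send buffer so large writes aren't throttled
    // into small pieces by a tiny socket buffer.
    public static void Configure(TcpClient client)
    {
        client.SendBufferSize = ChunkSize;
    }

    // Write the payload in big chunks; works on any Stream, so the same
    // code drives a NetworkStream in the real server.
    public static void SendAll(Stream stream, byte[] data)
    {
        for (int offset = 0; offset < data.Length; offset += ChunkSize)
        {
            int size = Math.Min(ChunkSize, data.Length - offset);
            stream.Write(data, offset, size);
        }
    }
}
```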
Don't use threads or Tasks for the processing; it will hurt your performance.
I've made a framework which will help you develop performant networking applications without having to care about the actual IO processing:
http://blog.gauffin.org/2012/05/griffin-networking-a-somewhat-performant-networking-library-for-net/

Multithreaded server, send data independently?

I'm trying to build a simple multithreaded TCP server. The client connects and sends data to the server; the server responds and waits for data again. The problem is that I need the server to listen for incoming data in a separate thread and be able to send a command to the client at any time (for example, to notify it about a new update). As far as I understood, whenever the client sends data to the server and the server doesn't respond with any data, the client app doesn't let me send more data; the server simply doesn't receive it. If I send data either way around, does the data need to be 'acknowledged' by the TcpClient?
Here's the source for the server: http://csharp.net-informations.com/communications/files/print/csharp-multi-threaded-server-socket_print.htm
How can I make the server send a command to a client in a separate thread, outside the "DoChat" function's loop? Or do I have to handle everything in that thread? Do I have to respond to each request the client sends me? Thanks!
The problem is I need the server to listen for incoming data in separate thread
No, there is an async API. You can poll a list of sockets to see which have new data waiting, obviously from a worker thread.
As far as I understood, whenever the client sends data to the server, if the server doesn't respond with any data, the client app doesn't let me send more data; the server simply doesn't receive it.
That is poor programming rather than the way sockets work. Sockets are perfectly happy streaming data in both the sending and receiving directions at the same time.
How can I make the server send a command to a client in a separate thread outside the "DoChat" function's loop?
Well, me doing your job for you costs money.
BUT: the example is a complete anti-pattern. One thread per client? You will run into memory and performance problems once 1000+ clients connect, and you get tons of context switches.
Second, the client is not async because it is not written to be. May I suggest going to the documentation, reading up on sockets, and trying to build that yourself? Then come back with questions that show more than "I just tried to copy-paste".
With proper programming this is totally normal. I have a similar application in development, sending data to the client all the time and receiving commands from the client that modify the data stream. Works like a charm.
If I send data either way around, does the data need to be 'acknowledged' by the TcpClient?
Yes and no. No, not for TCP: TCP does its own handshaking under the hood. Yes, if your protocol decides it has to, which is a programmer-level design decision. It may or may not be necessary, depending on the content of the data. Sometimes the acknowledgement carries extra information (a server-side timestamp, a tracking number) and isn't there purely to say "I got it".
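The full-duplex behavior described above can be shown with a small loopback sketch: both ends write before either has read, and neither direction waits for a reply (demo code, not a production structure):

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Sketch: one TCP connection, with reads and writes running
// independently in each direction. The server pushes a byte unprompted
// while the client is also sending one; neither write blocks on the other.
public static class DuplexDemo
{
    public static async Task<bool> RunAsync()
    {
        var listener = new TcpListener(IPAddress.Loopback, 0);
        listener.Start();
        int port = ((IPEndPoint)listener.LocalEndpoint).Port;

        var client = new TcpClient();
        Task connect = client.ConnectAsync(IPAddress.Loopback, port);
        TcpClient serverSide = await listener.AcceptTcpClientAsync();
        await connect;

        NetworkStream serverStream = serverSide.GetStream();
        NetworkStream clientStream = client.GetStream();

        // Both directions in flight at once: the server pushes 0x42
        // without being asked, while the client sends 0x24.
        Task down = serverStream.WriteAsync(new byte[] { 0x42 }, 0, 1);
        Task up = clientStream.WriteAsync(new byte[] { 0x24 }, 0, 1);
        await Task.WhenAll(down, up);

        var fromServer = new byte[1];
        var fromClient = new byte[1];
        await clientStream.ReadAsync(fromServer, 0, 1);
        await serverStream.ReadAsync(fromClient, 0, 1);

        client.Close();
        serverSide.Close();
        listener.Stop();
        return fromServer[0] == 0x42 && fromClient[0] == 0x24;
    }
}
```

No application-level acknowledgement is involved here; TCP's own reliability machinery delivers both bytes.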

How to Identify Bandwidth rate of TCP Client

I'm sending bulk data to client from my C# server application. Different clients may have different amounts of bandwidth available. For example, some clients may be using dial-up, broadband, etc.
A low-bandwidth client will be unable to get my data quickly, which may cause blocking in my server application.
I'm retrying the send 5 times when the data is not successfully received. I need to restrict the data my server sends by tracking each client's bandwidth.
How can I determine the bandwidth of the receiving client in C#?
That's not a very good approach, since bandwidth to any particular client can change dramatically.
Instead, implement flow control (TCP provides this for you). Probably the only thing you need to do is configure your socket for non-blocking I/O, so it returns an error when the transmit window fills instead of blocking your thread.
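A sketch of that non-blocking approach: mark the socket non-blocking and treat WouldBlock as "transmit window full" rather than letting the thread stall (the TrySend helper and the queue-and-retry policy around it are assumptions):

```csharp
using System.Net.Sockets;

// Sketch: non-blocking send. When a slow client's TCP transmit window
// fills, Send raises a WouldBlock error instead of blocking the server
// thread; the caller learns how many bytes were accepted and can queue
// the remainder for a later retry.
public static class NonBlockingSend
{
    public static int TrySend(Socket socket, byte[] data, int offset)
    {
        socket.Blocking = false;
        try
        {
            return socket.Send(data, offset, data.Length - offset,
                               SocketFlags.None);
        }
        catch (SocketException ex)
            when (ex.SocketErrorCode == SocketError.WouldBlock)
        {
            return 0; // window full: buffer the rest, retry later
        }
    }
}
```

TCP's flow control thus paces each client automatically; the server never needs to measure bandwidth explicitly.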
