I'm sending bulk data to clients from my C# server application. Different clients may have different amounts of bandwidth available; for example, some may be on dial-up and others on broadband.
A low-bandwidth client will be unable to get my data quickly, which may cause blocking in my server application.
I retry the send up to 5 times per client if the data is not successfully received. I need to throttle the data my server sends by tracking each client's bandwidth rate.
How can I determine the bandwidth rate of a receiving client in C#?
That's not a very good approach, since bandwidth to any particular client can change dramatically.
Instead, implement some flow control (TCP provides this for you). Probably the only thing you need to do is configure your socket for non-blocking I/O, so it returns an error when the transmit window fills instead of blocking your thread.
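For illustration, a minimal sketch of that, assuming socket is an already-connected System.Net.Sockets.Socket and buffer/offset/count describe the data still to be sent:

    socket.Blocking = false; // non-blocking mode: Send never parks the thread

    try
    {
        // May be a partial send; advance by however much was accepted.
        int sent = socket.Send(buffer, offset, count, SocketFlags.None);
        offset += sent;
    }
    catch (SocketException ex) when (ex.SocketErrorCode == SocketError.WouldBlock)
    {
        // Transmit window is full: this client can't keep up right now.
        // Back off and retry later instead of blocking the server thread.
    }

The WouldBlock error is your flow-control signal: a client that keeps hitting it is slower than your send rate.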
Intro:
I have developed a server (TCP listener) program in C# which runs at a specific (constant) IP on a specific (constant) port. This server program receives data packets sent by client programs, processes them, and sends a data packet back on the same socket. The number of client programs for this server may be in the hundreds:
Code Sample
while (true)
{
    socket = Host.AcceptSocket();    // blocks until a client connects
    int len = socket.Receive(msgA);  // read the client's request
    performActivities(msgA);         // may take up to 5 seconds
    socket.Send(msgB);               // reply on the same socket
    socket.Close();
}
Problem:
Due to some critical business requirements, processing may take up to 5 seconds. Other requests arriving during this time are queued, which I need to avoid: every request must be served within 5 seconds.
Query:
I can make it multi-threaded, but (pardon me if you find me a novice):
How will one socket receive another packet from a different client if it is still held open by the previous thread?
When serving multiple requests, how can I make sure each response is sent back to the respective client? (See the sketch below.)
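For reference, Host.AcceptSocket() returns a new, independent Socket for each client connection, so the accept loop can keep running while each accepted socket is handled on its own thread; each reply then naturally goes back over the same socket it came in on. A minimal sketch of the loop above made concurrent, adapted from the question's snippet (assuming performActivities produces the reply), not production code:

    using System.Net.Sockets;
    using System.Threading.Tasks;

    while (true)
    {
        Socket client = Host.AcceptSocket(); // a distinct Socket per client
        Task.Run(() =>
        {
            byte[] msgA = new byte[4096];
            int len = client.Receive(msgA);
            byte[] msgB = performActivities(msgA); // may take up to 5 seconds
            client.Send(msgB);                     // same socket => same client
            client.Close();
        });
    }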
Building an efficient, multi-threaded socket server requires strong knowledge and skills in that area. My proposal is, instead of trying to build your own TCP server from scratch, to use one of the existing libraries that have already solved this problem. A few that come to mind are:
DotNetty used on Azure IoT services.
System.IO.Pipelines, which is experimental but already quite fast.
Akka.Streams TCP stream.
Each one of those libs covers things like:
Management of a TCP connection lifecycle.
Efficient management of byte buffers. Allocating new byte[] for every package is highly inefficient and causes a lot of GC pressure.
Safe access to socket API (sockets are not thread-safe by default).
Buffering of incoming packets.
Abstraction in the form of handlers/pipes/stages, which lets you compose and manipulate binary payloads for further processing. This is particularly useful when you want to operate on the incoming data in terms of messages: by default TCP is a binary stream, and it doesn't know where one message inside the pipeline ends and the next one starts (see the framing sketch below).
Writing a production-ready TCP server is tremendous work. Unless you're an expert in network programming with very specific requirements, you should never write one from scratch.
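To illustrate the framing point from the list above: a common convention is to prefix every message with its length, so the reader knows where one message ends and the next begins. A minimal sketch of that general technique (not how any of the libraries above implement it):

    using System;
    using System.IO;

    // Writes one length-prefixed message to the stream.
    static void WriteMessage(Stream stream, byte[] payload)
    {
        stream.Write(BitConverter.GetBytes(payload.Length), 0, 4); // 4-byte prefix
        stream.Write(payload, 0, payload.Length);
    }

    // Reads one length-prefixed message, or null when the stream ends.
    static byte[] ReadMessage(Stream stream)
    {
        byte[] prefix = new byte[4];
        if (!Fill(stream, prefix)) return null;
        byte[] payload = new byte[BitConverter.ToInt32(prefix, 0)];
        return Fill(stream, payload) ? payload : null;
    }

    // TCP may deliver partial reads, so loop until the buffer is full.
    static bool Fill(Stream stream, byte[] buffer)
    {
        for (int read = 0; read < buffer.Length; )
        {
            int n = stream.Read(buffer, read, buffer.Length - read);
            if (n == 0) return false; // connection closed mid-message
            read += n;
        }
        return true;
    }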
I am currently self-hosting a SignalR server in a WPF application. In this application I need to call a method on the client at least 500 times per second. Right now I'm calling the method on the client side on each change. The CPU usage on the client side is way too high. The object I'm transferring contains about 20 base properties.
My requirement is that I cannot lose or skip any messages. But I can send the notifications as a list in bursts each second.
I'm not sure which is going to perform the best: short and fast or long and rare.
I would buffer the information server-side (only storing changes) and wait for the client to ask for new data. Each time the client asks, the server sends the accumulated information in one binary packet. When the client has finished processing the data, it asks for more. This way you avoid writing too much data to the socket, so it doesn't block. The length of the queue is an indication of the transport/processing speed of the client: when the queue grows too large (server-side), your network isn't fast enough or your client can't process that much data.
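A minimal sketch of that queue, with Change standing in for whatever your 20-property object is:

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    ConcurrentQueue<Change> pending = new ConcurrentQueue<Change>();

    // Producer side: record every change instead of pushing it to the client.
    void OnChange(Change change) => pending.Enqueue(change);

    // Called whenever the client asks for data: drain everything queued so far
    // into a single batch, serialize it (binary) and send it as one packet.
    List<Change> DrainPending()
    {
        var batch = new List<Change>();
        while (pending.TryDequeue(out Change change))
            batch.Add(change);
        return batch;
    }

A steadily growing pending queue is the warning sign mentioned above: the network or the client can't keep up.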
I use this method in my software, which is connected to a PLC sending the current states of robot angles/positions.
Another way is using UDP, but it is lossy, so not very useful in your situation.
I have an application to which I want to add a networking module so it can send data to multiple clients (different machines on the same WiFi network). This application generates image data for each client every hour and has to send this data to 10 different clients on the same WiFi network. When the data transfer to all the clients finishes, all the clients have to display the data on their screens simultaneously.
I've not developed any networking modules before and have minimal experience in this. My initial search suggested that I should probably transfer the data to all the clients first somehow and then broadcast a signal telling the clients to show the data simultaneously. I wanted to get an idea of the approach that should be followed for something like this - how would the server send the image data to all the clients?
I think in my case it's more time-critical than reliability-critical, so I'd be inclined to use UDP to get faster transfers. I understand that I can send the data to clients in a queuing fashion, but is there a mechanism for knowing which clients on the network are waiting for the data? Is there a client-register-with-server kind of thing through which I can keep a note of all the clients the data has to be sent to? Is this client-register thing possible in UDP?
Through my application, I'll be able to create a UDP server socket on a specific port - but how will multiple clients notify my server of their availability on the network (every client can't notify at the same time, obviously), and how do I then keep a note of their host addresses/ports?
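For what it's worth, a register-with-server step is possible over UDP: each client sends a small "register" datagram to the server's known port, and the server records the sender's endpoint. The OS queues incoming datagrams, so clients don't have to coordinate their timing; the server just receives them one at a time. A minimal sketch (the port number and message format are made up for illustration):

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    var server = new UdpClient(11000);        // well-known server port
    var clients = new HashSet<IPEndPoint>();  // registered client endpoints

    while (clients.Count < 10)                // wait for the 10 expected clients
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] datagram = server.Receive(ref remote); // blocks per datagram
        if (Encoding.ASCII.GetString(datagram) == "REGISTER")
            clients.Add(remote);              // remember address/port for later
    }

Keep in mind that registration datagrams themselves can be lost, so clients should re-send until the server acknowledges.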
Just another approach: create a number of threads equal to the number of clients, drawn from the thread pool. One thread is allocated per client and makes a TCP connection to send the image; each client runs a TCP listener that waits for data arriving from the server.
I advocate TCP over UDP if the data needs to be sent reliably; otherwise UDP would be sufficient. In your case it seems the image data must be sent reliably, so use TCP. As you are dealing with multiple clients, it is good to stay responsive, so divide the data into chunks and send them over the network using round-robin or shortest-first scheduling over a queue of client requests. On the client side, collect the data and render it as it arrives rather than waiting for the whole transfer. To render progressively, use PNG or JPEG file formats. Use multithreading if needed.
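A minimal sketch of the round-robin chunking idea, where each client has its own stream and image (names are illustrative):

    using System;
    using System.Collections.Generic;
    using System.Net.Sockets;

    const int ChunkSize = 64 * 1024;

    // One pass sends one chunk per client, so no client starves.
    void SendRoundRobin(List<(NetworkStream Stream, byte[] Image)> clients)
    {
        for (int offset = 0; ; offset += ChunkSize)
        {
            bool sentAny = false;
            foreach (var (stream, image) in clients)
            {
                if (offset >= image.Length) continue; // this client is done
                int count = Math.Min(ChunkSize, image.Length - offset);
                stream.Write(image, offset, count);
                sentAny = true;
            }
            if (!sentAny) break; // every image fully sent
        }
    }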
I've recently encountered a strange situation in C# .NET Framework 4.0:
In a simple program, I create a TcpListener, specify its local port, start it, and use an async accept function to receive incoming connection requests.
Once it has pending inbound connections, the server accepts the TcpClient from the async callback function and records it in a container (to be more specific, a List<TcpClient>).
And I write another simple client program which just connects to the server once it starts and then calls an async receive function.
After all clients are connected, the server starts a group of parallel tasks using System.Threading.Tasks.Parallel.ForEach().
In each task, I use the TcpClient stored in that list to send data to the corresponding client. All TcpClients send data at the same time (I checked the client side and they are all receiving data). The data is just a byte[8192] filled with random data generated when the server program starts; the server sends it repeatedly.
The client's receive callback is simple: once data arrives, the client just ignores it and issues another async receive.
The test environment is a 1Gbps LAN, one server and several clients.
The result: no matter how many clients (from 3 to 8) are connected to the server, the server's total upload speed never exceeds 13 MByte/s.
Then I tried another way:
I create a TcpListener at the client side as well. Once the client connects to the server, the server connects back to the client's listening port. The server then stores this outgoing connection in the list instead of the incoming one.
This time, the test result changes a lot: when 3 clients are receiving data from the server, the total upload speed of server is nearly 30MByte/s; with 5 clients, the total upload speed goes up to nearly 50MBytes/s.
Though this 10 MByte/s-per-client limit may be due to hardware or network configuration, it is still far better than in the case above.
Anyone know why?
I don't know the cause of this behavior, but as a workaround I suggest sending much bigger buffers, like 1 MB (or at least 64 KB). On a 1 Gbps LAN your app is likely to be more efficient if it sends bigger chunks (and fewer packets). Also, enable jumbo frames.
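A sketch of that workaround, assuming tcpClient is one entry from the question's List<TcpClient>:

    Socket socket = tcpClient.Client;       // underlying socket of the TcpClient
    socket.SendBufferSize = 1024 * 1024;    // enlarge the OS send buffer too

    byte[] buffer = new byte[1024 * 1024];  // 1 MB chunks instead of 8 KB
    // ...fill buffer with the payload, then:
    socket.Send(buffer);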
Don't use threads or Tasks for the processing. It will hurt your performance.
I've made a framework which will help you develop performant networking applications without having to care about the actual IO processing.
http://blog.gauffin.org/2012/05/griffin-networking-a-somewhat-performant-networking-library-for-net/
Users in the field with PDAs will generate messages and send them to the server; users at the server end will generate messages which need to be sent to the PDA.
Messages are between the app and the server code, not 100% user-entered data. I.e., we'll capture some data in a form, add GPS location, date/time and such, and send that to the server.
The server may send us messages such as updates to database records used in the PDA app, messages for the user, etc.
For messages from the PDA to server, that's easy. PDA initiates call to server and passes data. Presently using web services at the server end and "add new web reference" and associated code on the PDA.
I'm coming unstuck trying to get messages from the server to the PDA in a timely fashion. In some instances receiving the message quickly is important.
If the server had a message for a particular PDA, it would be great for the PDA to receive it within a few seconds of it becoming available. So polling once a minute is out; polling once a second will generate a lot of traffic and maybe drain the PDA battery somewhat.
This post is the same question as mine and suggests http long polling:
Windows Mobile 6.0/6.5 - Push Notification
I've looked into WCF callbacks and they appear to be exactly what I want; however, they're unavailable for the Compact Framework.
This next post isn't for CF but raises issues of service availability:
To poll or not to poll (in a web services context)
In my context I'll have 500-700 devices wanting to communicate with a small number of web services (between 2 and 5).
That's a lot of long poll requests to keep open.
Are sockets the way to go? Again, that's a lot of connections.
I've also read about methods using Exchange or Gmail; I'm really hesitant to go down those paths.
Most of the posts I've found here and on Google are a few years old; has something come up since then?
What's the best way to handle 500-700 PDA CF devices wanting near-instant communication from a server, whilst maintaining battery life? A tall request, I'm sure.
Socket communication seems like the easiest approach. You say you're using webservices for client-server comms, and that is essentially done behind the scenes by the server (webservice) opening a socket and listening for packets arriving, then responding to those packets.
You want to take the same approach in reverse, so each client opens a socket on its machine and waits for traffic to arrive. The client will basically need to poll its own socket (which doesn't incur any network traffic). The client will also need to communicate its IP address and port to the server so that when the server needs to communicate back, it has a means of reaching the client. The server will then use socket-based comms (as opposed to web services) to send messages out as required. The server can just open a socket, send the message, then close the socket again. No need to keep lots of permanently open sockets.
There are potential catches, though, if the client is roaming around and hopping between networks. If this is the case then it's likely that the IP address will change (and the client will need to open a new socket and pass the new IP address/port info to the server). It also increases the chances that the server will fail to communicate with the client.
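A rough sketch of both halves of that approach (the port, clientIp variable, and payload are illustrative):

    using System.IO;
    using System.Net;
    using System.Net.Sockets;

    // Client side: listen locally and wait for the server to call in.
    var listener = new TcpListener(IPAddress.Any, 9090); // port reported to server
    listener.Start();
    using (TcpClient incoming = listener.AcceptTcpClient())
    using (var reader = new StreamReader(incoming.GetStream()))
    {
        string message = reader.ReadLine(); // the server's message
    }

    // Server side: open a socket, send the message, close it again.
    using (var outgoing = new TcpClient(clientIp, 9090)) // address the client reported
    using (var writer = new StreamWriter(outgoing.GetStream()))
    {
        writer.WriteLine("update-for-device-42"); // illustrative payload
    }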
Sounds like an interesting project. Good luck!
Ages ago, the CF team built an application called the "Lunch Launcher" which was based on WCF store-and-forward messaging. David Kline did a nice series on it (here's the last one, which has a TOC for all the earlier articles).
There's an on-demand webcast on MSDN given by Jim Wilson that gives an outline of store-and-forward; the code from that webcast is available here.
This might do what you want, though it has some dependencies (e.g. Exchange) and some inherent limitations (e.g. no built-in delivery confirmation).
OK, after further looking I may be closer to what I want, which I think is a form of HTTP long polling anyway.
This article here - http://www.codeproject.com/KB/IP/socketsincsharp.aspx - shows how to have a listener on a socket. So I do this on the server side.
The client side then opens a socket to the server at this port and sends its device ID.
The server code first checks whether there is a response for that device. If there is, it responds.
If not, it either polls itself or subscribes to some event, then returns when it has data.
I could put in place time out code on the server side if needed.
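A skeleton of that server-side handler, with TryDequeueMessage and GetSignalFor as hypothetical helpers around a per-device message queue:

    using System;
    using System.Threading;

    // Handles one long-poll request from a device.
    byte[] WaitForMessage(string deviceId, TimeSpan timeout)
    {
        byte[] message;

        // Respond immediately if something is already queued for this device.
        if (TryDequeueMessage(deviceId, out message))
            return message;

        // Otherwise wait on an event that is signalled when a message arrives,
        // up to the timeout, then answer with data or an empty response.
        AutoResetEvent signal = GetSignalFor(deviceId);
        if (signal.WaitOne(timeout, false) && TryDequeueMessage(deviceId, out message))
            return message;

        return new byte[0]; // timed out: the client reconnects and polls again
    }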
I'm not worried about blocking on the client end, because it runs on a background thread, and from the app's perspective no data looks the same as blocking; as for CPU and battery life, I'm not sure.
I know what I've written is fairly broad, but is this a strategy worth exploring?