How to calculate bandwidth using C#

I want to measure bandwidth using C#. Here's what I did. Comments and suggestions are welcome.
Find the maximum UDP payload (on my test bed, it's 1472 bytes)
Create incompressible data, 1472 bytes in size
Send this data from a server to a client multiple times (in my test, 5000 packets)
The client starts a stopwatch when the first packet arrives
When all the data has been sent, the server sends the client a notification that the transfer is complete
The client stops the stopwatch
I calculate bandwidth as (total packets sent (5000) × MTU (1500 bytes)) / elapsed time
I notice that some packets are lost: 20% loss at best, 40% at worst. I did not account for this when calculating the bandwidth. I suspect the client's network device experiences buffer overruns. Do I need to take this factor into account?
If you have any suggestions or comments, feel free to share them.
Thanks.
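A minimal sketch of the client-side timing described above; the port number and the zero-length end-of-transfer datagram are assumptions, not details from the original setup:

    using System;
    using System.Diagnostics;
    using System.Net;
    using System.Net.Sockets;

    // Hypothetical receiver for the test described above: start timing on the
    // first datagram, stop when the (assumed) zero-length end marker arrives.
    class BandwidthClient
    {
        static void Main()
        {
            using var udp = new UdpClient(9000);           // port is an assumption
            var remote = new IPEndPoint(IPAddress.Any, 0);
            var sw = new Stopwatch();
            long packets = 0;

            while (true)
            {
                byte[] data = udp.Receive(ref remote);     // blocks for one datagram
                if (!sw.IsRunning) sw.Start();             // first packet starts the clock
                if (data.Length == 0) break;               // end-of-transfer marker
                packets++;
            }
            sw.Stop();

            // As in the formula above: count a full 1500-byte MTU per packet
            // (1472-byte payload + 28 bytes of IPv4/UDP headers).
            double mbps = packets * 1500 * 8 / sw.Elapsed.TotalSeconds / 1e6;
            Console.WriteLine($"{packets} packets -> ~{mbps:F2} Mbit/s");
        }
    }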

To calculate bandwidth, I would use TCP instead of UDP. When you use UDP, all the datagrams may leave your network card really fast (at 100 Mbps) and get queued at the "slowest link" of the chain (e.g. a 512 kbps cable modem/router). If the queue buffer gets full, it's likely that datagrams will be discarded. So your test is not very reliable.
I would use TCP and do some math to convert the TCP payload rate (KB/s) into link throughput (Mbps) (I think TCP overhead is around 8%).
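For illustration, a rough sketch of that TCP approach; the endpoint and transfer size below are assumptions:

    using System;
    using System.Diagnostics;
    using System.Net.Sockets;

    // Hypothetical TCP download test: time how long a known amount of payload
    // takes to arrive, then scale up by the assumed TCP/IP overhead.
    class TcpBandwidthTest
    {
        static void Main()
        {
            const long totalBytes = 10 * 1024 * 1024;      // 10 MB transfer (assumption)
            using var client = new TcpClient("server.example", 9000); // assumed endpoint
            using NetworkStream stream = client.GetStream();

            var buffer = new byte[64 * 1024];
            long received = 0;
            var sw = Stopwatch.StartNew();

            while (received < totalBytes)
            {
                int n = stream.Read(buffer, 0, buffer.Length);
                if (n == 0) break;                          // server closed the connection
                received += n;
            }
            sw.Stop();

            double payloadMbps = received * 8 / sw.Elapsed.TotalSeconds / 1e6;
            // Header overhead is roughly 5-10% depending on segment size; use
            // ~8% to estimate the raw link rate from the payload rate.
            Console.WriteLine($"Payload: {payloadMbps:F2} Mbit/s, " +
                              $"estimated link: {payloadMbps * 1.08:F2} Mbit/s");
        }
    }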

Related

How to measure the volume of data actually used up by a socket?

I need to measure as precisely as possible how much of a cell service provider's data limit my application uses up.
Is it possible to get the volume of data transferred by a .NET UDP Socket over the network interface (including the overhead of UDP and IP)?
The application is a server communicating with a great number of embedded devices, each of which is connected to the internet using GPRS with a very low data limit (several megabytes per month at best, so even a few bytes here and there matter). I know the devices don't open connections with any other servers, so measuring the traffic server-side should be enough.
I know I can't get a 100% accurate number (I have no idea what traffic the service provider actually charges for), but I would like to get as close as possible.
Assuming this is IPv4, you could add 28 bytes to every datagram you transfer (20 bytes for the IPv4 header plus 8 for the UDP header), but your problem is going to be detecting packet loss and, potentially, fragmentation. You could add some metadata to your communication to detect packet loss (e.g. sequence numbers, acknowledgments and so on), but that would of course add more overhead, which you might not want. Maybe an expected packet-loss percentage could help. As for fragmentation, again you could compensate when the size of your message is greater than the path MTU (which I believe could be quite small, like 296 bytes; not too sure though, maybe check with your mobile provider).
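For example, a minimal sketch of that accounting on the sending side (the class and names are hypothetical; the 28-byte constant is the 20-byte IPv4 header plus the 8-byte UDP header):

    using System.Net;
    using System.Net.Sockets;

    // Hypothetical wrapper that tallies wire-level bytes for each send by
    // adding the fixed IPv4 + UDP header overhead per datagram.
    class MeteredUdpSender
    {
        const int HeaderOverhead = 20 + 8;   // IPv4 header + UDP header
        readonly UdpClient _udp = new UdpClient();

        public long WireBytes { get; private set; }

        public void Send(byte[] payload, IPEndPoint target)
        {
            _udp.Send(payload, payload.Length, target);
            // Holds as long as the payload fits in one datagram below the path
            // MTU; fragmentation adds another 20-byte IP header per extra
            // fragment, which this simple tally ignores.
            WireBytes += payload.Length + HeaderOverhead;
        }
    }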
Another, relatively non-intrusive option could be reading the network performance counters of your process, or restricting your communication to a separate AppDomain.

Improve C# Socket performance

We have an async TCP socket game server written in C# serving many concurrent connections. Everything works fine except for the problems below:
Frequent disconnections for some users (Not all mind you)
Degraded performance for users on slow connections
Most of the users who face this problem are on mobile connections (portable USB dongles based on GSM/CDMA), and often the signal strength is poor. So basically they have rather poor bandwidth, perhaps under 1 KB per second.
My question: What can we do to make the connections more stable and improve performance even for the slower connections?
I am thinking of dynamic TCP buffers on the client and server side, but I am not sure about the performance cost of the overhead of doing this dynamically for each connection, or whether my direction is even correct.
The maximum data packet size is under 1 KB.
The current TCP buffer size on both server and client is 16 KB.
Any pointers or references on how to write stable async socket code in C# for poor or slow connections would be a great help. Thanks in advance.
"Performance" is a very relative term. It looks like your main concern is data transfer rates. Unfortunately, you can't do much about that given low-bandwidth connections; maybe data compression can help, but the actual effect depends on your data, and there's always a tradeoff between transfer-rate improvement and compression/decompression delays. There's also latency to consider for any sort of interactive game.
As @Pierre suggested in the comments, you might consider UDP for the transport, but that only works if you can tolerate packet loss and re-ordering, and that again depends on the data and what you do with it.
Another approach I would suggest investigating is to provide two different quality-of-service modes. Clients on good links can use full functionality (say, full-resolution game images), while low-bandwidth clients get reduced options (say, much smaller low-quality images). You can measure the initial round-trip time on client connect/handshake to select the mode, as in the sketch below.
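A rough sketch of that mode selection, assuming a trivial ping/echo exchange during the handshake (the one-byte message and the threshold are made up for illustration):

    using System;
    using System.Diagnostics;
    using System.Net.Sockets;

    // Hypothetical handshake step: bounce a tiny message off the client a few
    // times and pick the quality-of-service mode from the worst RTT observed.
    static class QosProbe
    {
        public static bool IsLowBandwidth(NetworkStream stream, int probes = 3)
        {
            var ping = new byte[] { 0x01 };          // made-up 1-byte ping message
            var pong = new byte[1];
            long worstMs = 0;

            for (int i = 0; i < probes; i++)
            {
                var sw = Stopwatch.StartNew();
                stream.Write(ping, 0, ping.Length);
                stream.Read(pong, 0, pong.Length);   // client echoes the byte back
                sw.Stop();
                worstMs = Math.Max(worstMs, sw.ElapsedMilliseconds);
            }
            return worstMs > 300;                    // threshold is an assumption
        }
    }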
Hope this helps a bit.

How to reduce the number of TCP ACKs during a highly reliable bulk transfer

I've got an application where two computers are in very close proximity, typically within a few feet of one another.
I've got a TCP connection between applications on the two computers. The server was written in C on Linux, the client on Windows using C# with TCPClient.
Over this socket I'm transferring very large payloads, often gigabytes at a time.
When I use Wireshark to monitor the communication, I notice that about 66% of the packets transmitted are ACKs. Each payload packet tends to be about 5 KB, so the percentage of data in the ACKs is very low, just a percent or two.
Should I be concerned about the number of ACKs? I'm not concerned with packet loss; I expect the connection to be of high quality in that respect.
Is there anything I can (or should) do to reduce the number of ACKs?
What you're probably seeing is the receiver acknowledging the sender's transmissions. The receiver has to use ACK-only packets, as it doesn't have anything else to send (the sender also sends ACKs; every TCP packet contains an ACK).
I don't think you should be bothered by the number of ACKs: the sender isn't waiting for them as long as its window size is large enough. The question you should ask yourself is: am I getting the throughput I should be getting for my LAN speed?
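As a rough sanity check on those numbers: if about two out of three packets are ACKs, that is roughly two pure ACKs per ~5 KB data packet. A pure ACK is around 54 bytes on the wire (Ethernet, IP and TCP headers with no payload), so the ACK share of the bytes is roughly (2 × 54) / (5000 + 2 × 54) ≈ 2%, which matches the "percent or two" observed.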

How much data to feed NetworkStream.Write() at a time?

I'm encrypting data on the fly and writing it to a network stream.
Should I write to the stream as soon as each 16-byte encrypted block becomes available, or should I buffer the blocks? Is there a performance penalty for issuing lots of 16-byte writes rather than a single 20-kilobyte or 1-megabyte write?
Feed it as much as you have; it will let you know if it can't take any more. TCP will handle the buffering for you.
Also, the more you feed it, the better: larger writes will likely result in less traffic, as the data will not be split across as many small packets.
By default, a Socket uses the Nagle algorithm, which is designed to reduce network traffic by having the socket buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost segments and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
You can turn off the Nagle algorithm, but this will likely result in more small packets and more traffic.
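A short sketch of both options discussed above, coalescing small writes in user space versus disabling Nagle (the buffer size is an assumption):

    using System.IO;
    using System.Net.Sockets;

    static class WriteStrategies
    {
        // Option 1: coalesce many small (e.g. 16-byte) writes so each call into
        // the underlying NetworkStream carries a larger chunk. Remember to
        // Flush() at message boundaries so the buffered tail is actually sent.
        public static Stream WrapBuffered(NetworkStream network)
        {
            return new BufferedStream(network, 64 * 1024); // 64 KB buffer (assumption)
        }

        // Option 2: turn off the Nagle algorithm when latency matters more than
        // wire efficiency; small writes then go out immediately.
        public static void DisableNagle(TcpClient client)
        {
            client.NoDelay = true; // sets Socket.NoDelay on the underlying socket
        }
    }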

Fastest form of downloading using sockets

Hi
I have a TCP/IP client-server application. I want to send a large serialized object, around 1 MB, through sockets.
Is it possible to get better performance by splitting the byte array into, for example, 10 chunks, opening a socket for each, and sending them asynchronously, compared to opening one socket and sending all the data through it?
Thanks
Splitting the data into pieces smaller than the MTU will introduce more overhead, as there will be more packets; this will actually slow things down. What you are proposing is already being done as part of the protocol, i.e. splitting and re-assembling. I would instead experiment with sending less data, e.g. via compression (see the sketch below).
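For instance, a quick sketch of the compression idea using the framework's GZipStream; whether it helps depends entirely on how compressible the serialized object is:

    using System.IO;
    using System.IO.Compression;

    static class Payloads
    {
        // Compress the serialized object once, then send the smaller buffer
        // over a single socket as usual.
        public static byte[] Compress(byte[] serialized)
        {
            using var output = new MemoryStream();
            using (var gzip = new GZipStream(output, CompressionMode.Compress))
            {
                gzip.Write(serialized, 0, serialized.Length);
            } // disposing the GZipStream flushes the final compressed block
            return output.ToArray();
        }

        public static byte[] Decompress(byte[] compressed)
        {
            using var gzip = new GZipStream(new MemoryStream(compressed),
                                            CompressionMode.Decompress);
            using var output = new MemoryStream();
            gzip.CopyTo(output);
            return output.ToArray();
        }
    }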
No, this doesn't speed up the transfer under normal conditions; it only adds overhead. It would only help if you had a slow network segment that is otherwise quite busy and traffic is shaped per TCP connection.
Make sure that your sockets code is efficient, because wrong buffer (and therefore packet) sizes, synchronous operation and other issues may slow the transfer down.
