How much data to feed NetworkStream.Write() at a time? - c#

I'm encrypting data on the fly and writing it to a network stream.
Should I write to the stream as soon as each 16-byte encrypted block of data becomes available, or should I buffer it? Is there a performance penalty to sending bunches of 16-byte writes rather than a single 20-kilobyte or 1-megabyte write?

Feed it as much as you have; it will let you know if it can't take any more. TCP will handle the buffering for you.
Also, the more you feed at once, the better: larger writes generally result in less traffic, because the data is split into fewer, fuller packets.
By default, Socket uses the Nagle algorithm, which is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
You can turn off the Nagle algorithm (Socket.NoDelay = true), but this will likely result in more, smaller packets and more traffic.
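A minimal sketch of the buffering approach, assuming an existing NetworkStream and a hypothetical encryptNextBlock delegate that yields 16-byte blocks and returns null when the data runs out:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// Sketch: coalesce many 16-byte writes into larger writes to the socket.
// "stream" is an existing connected NetworkStream; "encryptNextBlock" is a
// hypothetical delegate returning the next 16-byte encrypted block, or null
// when there is nothing left to send.
static void SendEncrypted(NetworkStream stream, Func<byte[]> encryptNextBlock)
{
    // 64 KB buffer: small writes accumulate here and reach the socket
    // as large chunks instead of one call per 16 bytes.
    var buffered = new BufferedStream(stream, 64 * 1024);

    byte[] block;
    while ((block = encryptNextBlock()) != null)
    {
        buffered.Write(block, 0, block.Length);
    }

    buffered.Flush(); // push any remaining buffered bytes to the NetworkStream
}
```

Either way, TCP (and Nagle, if enabled) will still batch data for you; the intermediate buffer mainly cuts down on per-call overhead.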

Related

C# Socket: is multiple sending less efficient than a single send?

I am writing a high-throughput server serving thousands of connections. Suppose I have 400 bytes to send via a socket. Suppose I do it in two ways:
Call Socket.Send() 40 times, each time sending 10 bytes.
Call Socket.Send() once, sending all 400 bytes.
Do these two ways make much difference in terms of speed, CPU load, etc?
If Socket.NoDelay is left at false, then it will very rarely make any difference - most of the time you're just buffering locally, albeit with a bit more P/Invoke overhead than is strictly necessary (due to lots of calls through the socket layer). Note that Socket.NoDelay should usually be set to true in anything where you care about latency.
If Socket.NoDelay is true, then if everything is working maximally, you might introduce additional packet fragmentation by using 40 sends of 10 bytes, which would be avoided when using one send of 400 bytes. However, in many cases the various abstractions and layers in the OS/hardware stacks mean that a lot of the 10-byte chunks will probably end up sharing packets. That's still a lot more packets than 1 in the optimal case, though.
Note also that this is always a trade-off: packet fragmentation will decrease overall throughput, but sending the first bytes sooner could reduce the perceived latency, if the other 390 bytes are going to take a measurable (but presumably small) amount of time to construct.
In most cases: this is unlikely to be a bottleneck. If you can avoid packet fragmentation without causing latency, that may be desirable. If it was me, I'd probably be more concerned with efficient buffer management to maximise scalability while avoiding pauses due to GC; tools like the new "pipelines" IO API can really help with that, and Kestrel can be used to host a TCP server based on "pipelines" in a lot less code than you would be using if you wrote your own socket listener - and it then deals with all the buffer management for you.
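To illustrate the difference, here is a rough sketch (not the asker's code) that batches the 10-byte pieces into a single Send and disables Nagle, since the batching is now done explicitly:

```csharp
using System.Collections.Generic;
using System.IO;
using System.Net.Sockets;

// Sketch: one Send() of 400 bytes instead of 40 Send() calls of 10 bytes.
// "socket" is an existing connected Socket; "pieces" stands in for the
// forty 10-byte fragments.
static void SendCoalesced(Socket socket, IEnumerable<byte[]> pieces)
{
    socket.NoDelay = true;              // we do the batching ourselves now

    var ms = new MemoryStream();
    foreach (var piece in pieces)       // e.g. 40 pieces of 10 bytes
        ms.Write(piece, 0, piece.Length);

    byte[] payload = ms.ToArray();      // 400 bytes
    int sent = 0;
    while (sent < payload.Length)       // Send may accept fewer bytes than offered
        sent += socket.Send(payload, sent, payload.Length - sent, SocketFlags.None);
}
```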

How to reduce the number of TCP ACK's during a highly reliable bulk transfer

I've got an application where two computers are very close to one another - typically within a few feet.
I've got a TCP connection between applications on the two computers. The server was written in C on Linux, the client on Windows using C# with TCPClient.
Over this socket I'm transferring very large payloads, often gigabytes at a time.
When I use Wireshark to monitor the communication, I notice that about 66% of the packets transmitted are ACKs. Each payload packet tends to be about 5 KB, so the share of the traffic that ACKs account for, in bytes, is very low - just a percent or two.
Should I be concerned about the number of ACKs? I'm not worried about packet loss; I expect the connection to be of high quality in that respect.
Is there anything I can (or should?) do to reduce the number of ACKs?
What you're probably seeing is the receiver acknowledging the sender's transmissions. The receiver has to use ACK-only packets, as it doesn't have anything else to send (the sender also sends ACKs - every TCP packet contains an ACK).
I don't think you should be bothered by the number of ACKs - the sender isn't waiting for them if its window size is large enough. The question you should ask yourself is: am I getting the throughput I should be getting given my LAN speed?
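To answer that last question concretely, a minimal sketch for measuring the achieved receive throughput on the C# side, assuming client is the already-connected TcpClient from the question:

```csharp
using System.Diagnostics;
using System.Net.Sockets;

// Sketch: measure receive throughput on an existing TcpClient connection.
// Reads until the sender closes the connection, then reports Mbps.
static double MeasureReceiveMbps(TcpClient client)
{
    NetworkStream stream = client.GetStream();
    var buffer = new byte[64 * 1024];
    long totalBytes = 0;
    var sw = Stopwatch.StartNew();

    int read;
    while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        totalBytes += read;

    sw.Stop();
    // payload bits per second -> megabits per second
    return totalBytes * 8.0 / sw.Elapsed.TotalSeconds / 1_000_000;
}
```

If the result is close to your link speed, the ACK volume is doing no harm.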

Fastest form of downloading using sockets

Hi
I have a TCP/IP client-server application. I want to send a large serialized object (around 1 MB) through sockets.
Is it possible to get better performance by splitting the byte array into, say, 10 chunks, opening a socket for each, and sending them asynchronously, compared to opening one socket and sending all the data through it?
Thanks
Splitting the data into chunks smaller than the MTU will introduce more overhead, as there will be more packets - this will actually slow things down. What you are proposing is already done as part of the protocol, i.e. splitting and re-assembling. I would experiment with sending less data, e.g. via compression.
No, this doesn't speed up the transfer under normal conditions; it only adds overhead. It would only help if you have a slow network segment which is otherwise quite busy and the traffic is shaped per TCP connection.
Make sure your socket code is efficient, because poorly chosen buffer (and therefore packet) sizes, synchronous operation, and other issues can slow the transfer down.
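If you try the compression route, a minimal sketch, assuming the ~1 MB object has already been serialized into a byte[] named data and stream is the single connection's NetworkStream (both hypothetical names):

```csharp
using System.IO.Compression;
using System.Net.Sockets;

// Sketch: compress the serialized payload onto the single connection
// instead of splitting it across several sockets.
static void SendCompressed(NetworkStream stream, byte[] data)
{
    // leaveOpen: true keeps the NetworkStream usable after the GZipStream
    // is disposed; disposing it flushes the final compressed block.
    using (var gzip = new GZipStream(stream, CompressionLevel.Fastest, leaveOpen: true))
    {
        gzip.Write(data, 0, data.Length);
    }
}
```

The receiver wraps its side of the stream in a GZipStream with CompressionMode.Decompress before deserializing.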

How to calculate bandwidth using c#

I want to measure bandwidth using C#. Here's what I did. Comments and suggestions are welcome.
Find the maximum UDP payload (on my test bed, it's 1472 bytes)
Create incompressible data of 1472 bytes
Send this data from the server to the client multiple times (in my test, 5000 packets)
The client starts a stopwatch when the first packet arrives
When all data has been sent, the server sends a notification to the client saying so
The client stops the stopwatch
I calculate bandwidth as (total packets sent (5000) * MTU (1500 bytes)) / elapsed time
I notice that some packets are lost: at best 20% loss, at worst 40%. I did not account for this when calculating the bandwidth. I suspect the client's network device is experiencing buffer overruns. Do I need to take this factor into account?
If you have any suggestions or comments, feel free to share them.
Thanks.
To measure bandwidth, I would use TCP instead of UDP. When you use UDP, all the datagrams may leave your network card very quickly (at 100 Mbps) and get queued at the "slowest link" of the chain (e.g. a 512 kbps cable modem/router). If that queue's buffer fills up, it's likely that datagrams will be discarded, so your test is not very reliable.
I would use TCP and do some math to convert TCP payload speed (KB/s) to line throughput (Mbps) (I think TCP overhead is around 8%).
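As a rough worked example of that conversion (the ~8% figure is an approximation, not an exact constant):

```csharp
// Sketch: convert measured TCP payload throughput (KB/s) into an estimate
// of line-rate throughput (Mbps), assuming roughly 8% protocol overhead.
static double EstimateLineRateMbps(double payloadKBytesPerSecond)
{
    double payloadMbps = payloadKBytesPerSecond * 1024 * 8 / 1_000_000;
    return payloadMbps * 1.08;   // add ~8% for TCP/IP/Ethernet headers
}

// Example: 10,000 KB/s of payload ~= 81.9 Mbps of payload ~= 88.5 Mbps on the wire.
```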

Create TCP Packet in C#

I'm sending data to an extremely old system via TCP. I need to send 2000 bytes in one packet, and I need it not to be split up (which is what happens when I write out 2000 bytes via a socket).
While, yes, I shouldn't have to care about this at the application level, I do in fact care about it, because I have no other options on the older system: everything MUST arrive in a single packet.
Is there something less terrible than calling netcat?
Unless you are on a link with jumbo frames, the usual MTU on Ethernet is 1500 bytes. Subtract the IP header (20 bytes) and the TCP header (at least 20 bytes) and you are left with at most 1460 bytes of payload per packet. So no luck with 2000 bytes in a single packet.
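If it helps to verify that assumption on your machine, you can read the local interfaces' MTU from C# (this only shows the local MTU; the effective path MTU may be smaller):

```csharp
using System;
using System.Net.NetworkInformation;

// Sketch: print the MTU of each local IPv4-capable interface, to confirm
// whether a 2000-byte segment could ever fit in a single frame here.
static void PrintMtus()
{
    foreach (var nic in NetworkInterface.GetAllNetworkInterfaces())
    {
        if (!nic.Supports(NetworkInterfaceComponent.IPv4))
            continue;
        int mtu = nic.GetIPProperties().GetIPv4Properties().Mtu;
        Console.WriteLine($"{nic.Name}: MTU {mtu}");
    }
}
```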
