I'm sending data to an extremely old system via TCP. I need to send 2000 bytes in one packet, and I need it not to be split up (which is what happens when I write out 2000 bytes via a socket).
While, yes, I shouldn't have to care about this at the application level, I in fact do care about it, because I have no other options on the older system: everything MUST be received in a single packet.
Is there something less terrible than calling netcat?
Unless you are on a link with jumbo frames, the usual MTU on Ethernet is 1500 bytes. Subtract the IP header (20 bytes) and the TCP header (at least 20 bytes), and you are left with at most about 1460 bytes of payload per packet. So no luck with 2000 bytes in a single packet.
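For what it's worth, the closest you can get at the application level is to hand the whole buffer to a single write with Nagle disabled - TCP will still split it into MSS-sized segments (roughly 1460 bytes here), so the receiver will see at least two frames. A minimal C# sketch; the host, port, and payload are placeholders:

```csharp
// Illustrative sketch only: even a single write of 2000 bytes will be split
// into MSS-sized TCP segments (~1460 bytes on a 1500-byte-MTU link).
using System.Net.Sockets;

byte[] payload = new byte[2000];                        // the 2000-byte message (placeholder)
using var client = new TcpClient("legacy-host", 5000);  // host and port are placeholders
client.NoDelay = true;                                  // disable Nagle so nothing is held back
client.GetStream().Write(payload, 0, payload.Length);   // one write, but not one Ethernet frame
```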
I was wondering about the order in which bytes are sent and received by a TCP socket.
I've got the socket implemented, it's up and working, so that's good.
I also have something called "a message" - it's a byte array that contains a string (serialized to bytes) and two integers (converted to bytes). It has to be like that - project specifications :/
Anyway, I was wondering how this works at the byte level:
In a byte array, the bytes are ordered 0, 1, 2, ..., Length-1, and they sit in memory in that order.
How are they sent? Is the last one the first to be sent, or the first? Receiving, I think, is quite easy - the first byte to arrive goes into the first free place in the buffer.
I think a little image I made nicely shows what I mean.
They are sent in the same order they are present in memory. Doing otherwise would be more complex... How would you do it if you had a continuous stream of bytes? Wait until the last one has been sent and then reverse everything? Or should this inversion work "packet by packet", so that each block of 2k bytes (or whatever the size of the TCP packets is) is internally reversed, but the order of the packets is "correct"?
Receiving, I think, is quite easy - the first byte to arrive goes into the first free place in the buffer.
Why on earth should the sender reverse the bytes but the receiver not? If you build a symmetric system, either both do an action or neither does!
Note that the real problem is normally one of endianness. The memory layout of an int on your computer could be different from the layout of an int on another computer, so one of the two computers may have to reverse the 4 bytes of the int. But endianness is something that is resolved primitive type by primitive type. Many internet protocols are, for historical reasons, big-endian, while Intel CPUs are little-endian. Even the internal fields of TCP are big-endian (see Big endian or Little endian on net?), but there we are speaking of the fields of TCP itself, not of the data moved by the TCP protocol.
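If you need to pin the integer layout down, write the ints in an explicit byte order instead of relying on whatever the local machine does. A minimal C# sketch - the message layout (string bytes followed by two big-endian ints) is just an assumption based on the description above, not your project spec:

```csharp
using System;
using System.Buffers.Binary;
using System.Text;

static class MessageBuilder
{
    // Builds a "message": string bytes followed by two ints.
    // The layout is an assumption for illustration; adjust it to your spec.
    public static byte[] Build(string text, int a, int b)
    {
        byte[] textBytes = Encoding.UTF8.GetBytes(text);
        byte[] message = new byte[textBytes.Length + 8];

        Buffer.BlockCopy(textBytes, 0, message, 0, textBytes.Length);
        // Write both ints big-endian ("network order") so CPU endianness doesn't matter.
        BinaryPrimitives.WriteInt32BigEndian(message.AsSpan(textBytes.Length, 4), a);
        BinaryPrimitives.WriteInt32BigEndian(message.AsSpan(textBytes.Length + 4, 4), b);
        return message;
    }
}
```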
I'm building a file-sharing program, and I would like to know whether, when using sockets, it's better to send and receive byte by byte or in fixed-size chunks. I'm sending messages (login, actual file size list, etc.) of 512 bytes, and 65536-byte chunks when sending and receiving files.
It depends on your usage and goal:
for high performance in a non-faulty environment:
choose 1500 bytes
for a bad and faulty environment:
choose smaller sizes, but not byte by byte
It's always better to use reasonably sized blocks, for efficiency reasons. Typical network packets are around 1500 bytes in size (Ethernet), and every packet carries a bunch of necessary overhead (such as protocol, destination address, port, etc.).
Single bytes are the worst (in terms of efficiency) that you can do.
Handling 1500 or so bytes at a time will be much more efficient than one byte at a time. That is about the size of a typical Ethernet frame.
Keep in mind that you are using a stream of bytes: any concept of message or record is up to you to implement.
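Since TCP only gives you a stream, a common way to get "messages" back out of it is a length prefix. A rough C# sketch - the 4-byte big-endian prefix and the helper names are just one possible convention, not something the protocol mandates:

```csharp
using System;
using System.Buffers.Binary;
using System.IO;

static class Framing
{
    // Writes one length-prefixed message: a 4-byte big-endian length, then the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] prefix = new byte[4];
        BinaryPrimitives.WriteInt32BigEndian(prefix, payload.Length);
        stream.Write(prefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads one length-prefixed message, looping because a single Read may return fewer bytes.
    public static byte[] ReadMessage(Stream stream)
    {
        int length = BinaryPrimitives.ReadInt32BigEndian(ReadExactly(stream, 4));
        return ReadExactly(stream, length);
    }

    private static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int n = stream.Read(buffer, offset, count - offset);
            if (n == 0) throw new EndOfStreamException("Connection closed mid-message.");
            offset += n;
        }
        return buffer;
    }
}
```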
I've got an application where two computers are very close together - typically within a few feet of one another.
I've got a TCP connection between applications on the two computers. The server was written in C on Linux, the client on Windows using C# with TCPClient.
Over this socket I'm transferring very large payloads, often gigabytes at a time.
When I use Wireshark to monitor the communication, I notice that about 66% of the packets transmitted are ACKs. Each of the payload packets tends to be about 5k, so the percentage of data in ACKs is very low, just a percent or two.
Should I be concerned about the number of ACKs? I'm not worried about packet loss; I expect the connection to be of high quality in that respect.
Is there anything I can (or should?) do to reduce the number of ACKs?
What you're probably seeing is the receiver acknowledging the sender's transmissions. The receiver has to use ACK-only packets, as it doesn't have anything else to send (the sender also sends ACKs - every TCP packet contains an ACK).
I don't think you should be bothered by the number of ACKs - the sender isn't waiting for them if its window size is large enough. The question you should ask yourself is: am I getting the throughput I should be getting given my LAN speed?
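If the answer to that is "no", one knob worth checking (a hedged suggestion, not something from the original posts) is the socket buffer size, which bounds the effective TCP window. A sketch; the host, port, and 1 MB figure are arbitrary:

```csharp
using System.Net.Sockets;

var client = new TcpClient();
// Larger socket buffers allow a larger effective TCP window, so the sender
// doesn't stall waiting for ACKs. 1 MB is an arbitrary starting point;
// measure throughput before and after changing it.
client.ReceiveBufferSize = 1024 * 1024;
client.SendBufferSize = 1024 * 1024;
client.Connect("server-host", 5000);   // host and port are placeholders
```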
I'm encrypting data on the fly and writing it to a network stream.
Should I write to the stream as soon as each 16-byte encrypted block of data becomes available, or should I buffer it? Is there a performance penalty to sending bunches of 16-byte writes rather than a single 20-kilobyte or 1-megabyte write?
Feed it as much as you have; it will let you know if it can't take any more. TCP will handle the buffering for you.
Also, the more you feed at once, the better - it will likely result in less traffic, as packets will not be fragmented as much.
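If you do want to coalesce the 16-byte blocks yourself before they hit the socket, wrapping the NetworkStream in a BufferedStream is one option. A sketch; the host, port, and 64 KB buffer size are arbitrary choices:

```csharp
using System.IO;
using System.Net.Sockets;

using var client = new TcpClient("server-host", 5000);           // placeholders
// BufferedStream collects the small writes and hands the socket larger chunks.
using var buffered = new BufferedStream(client.GetStream(), 64 * 1024);

byte[] encryptedBlock = new byte[16];                             // produced by your cipher
for (int i = 0; i < 1000; i++)                                    // e.g. 1000 blocks
{
    buffered.Write(encryptedBlock, 0, encryptedBlock.Length);
}
buffered.Flush();                                                 // push out whatever is left
```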
By default, a Socket uses the Nagle algorithm, which is designed to reduce network traffic by having the socket buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can lead to lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
You can turn off the Nagle algorithm, but this will likely result in more small packets and more traffic.
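For reference, here is how you would turn it off in C# if you decide latency matters more than overhead; a minimal sketch:

```csharp
using System.Net.Sockets;

var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// Disabling Nagle means each write goes out immediately, at the cost of more,
// smaller packets on the wire.
socket.NoDelay = true;
// Equivalent via the option API:
// socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);
```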
I want to measure bandwidth using C#. Here is what I did. Comments and suggestions are welcome.
Find the maximum UDP payload (on my test bed, it's 1472 bytes)
Create non-compressible data of 1472 bytes
Send this data from the server to the client multiple times (in my test, 5000 packets)
The client starts a stopwatch when the first packet arrives
When all data has been sent, send a notification to the client stating that all data has been sent
The client stops the stopwatch
I calculate bandwidth as (total packets sent (5000) * MTU (1500 bytes)) / elapsed time
I notice that some packets are lost: at best 20% loss, at worst 40%. I did not account for this when calculating the bandwidth. I suspect the client's network device experiences a buffer overrun. Do I need to take this factor into account?
If you have any suggestions or comments, feel free to share them.
Thanks.
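For context, the client side of the measurement described above could look roughly like this (a sketch only - the port, the 1-byte "done" marker, and the Mbps math are assumptions, not the original code):

```csharp
using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

class UdpBandwidthClient
{
    static void Main()
    {
        using var udp = new UdpClient(9000);            // listening port is an arbitrary choice
        var remote = new IPEndPoint(IPAddress.Any, 0);
        var sw = new Stopwatch();
        long received = 0;

        while (true)
        {
            byte[] datagram = udp.Receive(ref remote);  // blocks until a datagram arrives
            if (!sw.IsRunning) sw.Start();              // start timing on the first packet
            if (datagram.Length == 1) break;            // assume a 1-byte "all sent" marker
            received += datagram.Length;
        }
        sw.Stop();

        // Payload bytes actually received over the elapsed time, in Mbps.
        double mbps = received * 8.0 / 1_000_000 / sw.Elapsed.TotalSeconds;
        Console.WriteLine($"Received {received} bytes, ~{mbps:F2} Mbps");
    }
}
```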
To calculate bandwidth, I would use TCP instead of UDP. When you use UDP, all the datagrams may go out really fast through your network card (at 100 Mbps) and get queued at the "slowest link" of the chain (e.g. a 512 kbps cable modem/router). If the queue buffer gets full, it's likely that datagrams will be discarded. So your test is not very reliable.
I would use TCP and do some math to convert TCP speed (KB/s) into throughput (Mbps). (I think TCP overhead is around 8%.)
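A rough sketch of what that TCP-based measurement could look like on the receiving side (the host, port, and payload size are placeholders; the reported figure is application-level goodput, so add roughly that 8% to estimate raw line usage):

```csharp
using System;
using System.Diagnostics;
using System.Net.Sockets;

class TcpThroughputClient
{
    static void Main()
    {
        const long totalBytes = 100L * 1024 * 1024;              // 100 MB test payload (arbitrary)
        using var client = new TcpClient("server-host", 9000);   // host and port are placeholders
        using var stream = client.GetStream();

        var buffer = new byte[64 * 1024];
        long received = 0;
        var sw = Stopwatch.StartNew();

        while (received < totalBytes)
        {
            int n = stream.Read(buffer, 0, buffer.Length);
            if (n == 0) break;                                    // connection closed early
            received += n;
        }
        sw.Stop();

        double mbps = received * 8.0 / 1_000_000 / sw.Elapsed.TotalSeconds;
        Console.WriteLine($"~{mbps:F2} Mbps at the application level (TCP/IP header overhead not included)");
    }
}
```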