UDP data transmission slower than TCP - C#

I'm currently writing a prototype application in C#/.NET 4 where I need to transfer an unknown amount of data. The data is read in from a text file and then serialized into a byte array.
Now I need to implement both transmission methods, UDP and TCP. Transmission works fine both ways, but I'm struggling with UDP. I assumed that transmission via UDP would be much faster than via TCP, but in fact my tests showed that the UDP transmission is about 7 to 8 times slower than TCP.
I tested the transmission with a 12 megabyte file: the TCP transmission took about 1 second, whereas the UDP transmission took about 7 seconds.
In the application I use plain sockets to transmit the data. Since UDP only allows a maximum of 65,535 bytes per message, I split the serialized byte array of the file into several parts, each the size of the socket's SendBufferSize, and then transfer each part using the Socket.Send() method.
Here is the code for the sender part:
while (startOffset < data.Length)
{
    if ((startOffset + payloadSize) > data.Length)
    {
        payloadSize = data.Length - startOffset;
    }

    byte[] subMessageBytes = new byte[payloadSize + 16];
    byte[] messagePrefix = new UdpMessagePrefix(data.Length, payloadSize, messageCount, messageId).ToByteArray();

    // 16-byte prefix first, then the payload (messageOffset is the 16-byte prefix length)
    Buffer.BlockCopy(messagePrefix, 0, subMessageBytes, 0, 16);
    Buffer.BlockCopy(data, startOffset, subMessageBytes, messageOffset, payloadSize);

    messageId++;
    startOffset += payloadSize;

    udpClient.Send(subMessageBytes, subMessageBytes.Length);
    messages.Add(subMessageBytes);
}
This code simply copies the next part to be sent into a byte array and then calls the Send method on the socket. My first guess was that the splitting/copying of the byte arrays was slowing down performance, but I isolated and tested the splitting code, and the splitting took only a few milliseconds, so that was not the problem.
int receivedMessageCount = 1;
Dictionary<int, byte[]> receivedMessages = new Dictionary<int, byte[]>();

while (receivedMessageCount != totalMessageCount)
{
    byte[] data = udpClient.Receive(ref remoteIpEndPoint);
    UdpMessagePrefix p = UdpMessagePrefix.FromByteArray(data);

    receivedMessages.Add(p.MessageId, data);
    //Console.WriteLine("Received packet: " + receivedMessageCount + " (ID: " + p.MessageId + ")");
    receivedMessageCount++;
    //Console.WriteLine("ReceivedMessageCount: " + receivedMessageCount);
}

Console.WriteLine("Done...");
return receivedMessages;
This is the server-side code where I receive the UDP messages. Each message has a prefix of a few bytes storing the total number of messages and the size, so I simply call Receive in a loop until I have received the number of messages specified in the prefix.
My assumption here is that I may not have implemented the UDP transmission code "efficiently" enough... Maybe one of you already sees a problem in the code snippets, or has another suggestion or hint as to why my UDP transmission is slower than TCP.
Thanks in advance!

While a UDP datagram can be up to 64 KB, the actual wire frames are usually 1500 bytes (the normal Ethernet MTU). That also has to fit an IP header of at least 20 bytes and a UDP header of 8 bytes, leaving you with 1472 bytes of usable payload.
What you are seeing is the result of your OS network stack fragmenting the UDP datagrams on the sender side and reassembling them on the receiver side. That takes time, hence your results.
TCP, on the other hand, does its own packetization and tries to discover the path MTU, so it's more efficient in this case.
Limit your data chunks to 1472 bytes and measure again.
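For example, capping the chunk size in the question's send loop might look like this (a sketch reusing names from the question; the 16-byte prefix length comes from the code above):
// Stay under the Ethernet MTU so the IP layer never has to fragment:
// 1500 (MTU) - 20 (IP header) - 8 (UDP header) = 1472 bytes per datagram.
const int MaxDatagramSize = 1472;
int payloadSize = MaxDatagramSize - 16;   // leave room for the 16-byte UdpMessagePrefix
Each Send() call then maps to exactly one frame on the wire, so the kernel does no fragmentation or reassembly at all.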

I think you should measure CPU usage and network throughput for the duration of the test.
If the CPU is pegged, this is your problem: turn on a profiler.
If the network (cable) is pegged, this is a different class of problems. I wouldn't know what to do about it ;-)
If neither is pegged, run a profiler and see where most wall-clock time is spent. There must be some waiting going on.
If you don't have a profiler, just hit break 10 times in the debugger and see where it stops most often.
Edit: My response to your measurement: we know that 99% of all execution time is spent receiving data, but we don't know yet whether the CPU is busy. Look in Task Manager and see which process is busy.
My guess is that it is the System process. This is the Windows kernel, and probably the UDP component of it.
This might have to do with packet fragmentation. IP packets have a maximum size, typically 1500 bytes on Ethernet. Your UDP packets are being fragmented and reassembled on the receiving machine. I am surprised that this takes so much CPU time.
Try sending packets with a total size of 1000 and of 1472 bytes (try both!) and report the results.

Related

Socket failed when more than 100 players online

I'm using this class to connect to a GTA:SA:MP server. My website shows the online players in a table, but if the player count is more than 100, the query won't respond correctly and will return 0. I tried resizing the byte[] rBuffer from 500 to 3402, but that didn't work either.
UDP datagrams have a maximum safe size on the internet, which is around 500 bytes. If you need to send more data, you need to partition it and send it as multiple datagrams. If the API doesn't support this, you should notify the people who maintain it.
Removing the blank try-catch statements will show the errors more clearly; it's usually a very bad idea to ignore exceptions like this.
UDP basically says that if data gets lost, that's fine, since receiving it later would make little sense once the data is stale. Now, about the data packet size:
The length field of the UDP header allows up to 65535 bytes of data. However, if you are sending your UDP packet across an Ethernet network, the Ethernet MTU is 1500 bytes, limiting the maximum datagram size. Also, some routers will attempt to fragment large UDP packets into 512 byte chunks.
The receive buffers are way too small. Increase their size and you will be fine.
byte[] rBuffer = new byte[500]; -> change 500 to 32000
byte[] rBuffer = new byte[3402]; -> change 3402 to 32000
Where did you get these values from? If you receive a bigger packet, the chance is very high that your own socket layer will never hand it to you, as described by Stevens: Berkeley sockets only deliver complete datagrams to the user.
Also, change your exception handling to log the exception rather than ignoring it:
catch (Exception ex)
{
    Trace.WriteLine(ex.Message);
    return false;
}
and do that at every location in the code where there is exception handling.
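Putting both fixes together, a minimal sketch of the receive path might look like this (the socket name, buffer size, and ParseResponse helper are illustrative assumptions, not taken from the original class):
byte[] rBuffer = new byte[32000];              // comfortably larger than any expected datagram
try
{
    int received = socket.Receive(rBuffer);   // blocks until one complete datagram arrives
    ParseResponse(rBuffer, received);          // hypothetical parser; only the first `received` bytes are valid
}
catch (SocketException ex)
{
    Trace.WriteLine(ex.Message);               // log instead of silently swallowing the error
    return false;
}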

C# TcpClient Socket.Send() accepting very large amounts of data, no leftover bytes

I am using the C# TcpClient in the following way to send a few packets. It works correctly, but Send() accepts 2 MB of data in one shot, so my upload progress counter is not working correctly.
Console.WriteLine("New transmit pack len={0}", transmit_data.Length);
while ((Transmit_offset < transmit_data.Length) && (client.sock.Poll(0, SelectMode.SelectWrite)))
{
int sendret = client.sock.Send(transmit_data, Transmit_offset, transmit_data.Length - Transmit_offset, SocketFlags.None);
//#if SocketDebug
Console.WriteLine("Sent " + sendret + " bytes");
//#endif
Transmit_offset += sendret;
BytesOut += sendret;
if (transmit_data.Length > 512)
{
double progress = ((double)Transmit_offset / (double)transmit_data.Length) * 100;
Console.WriteLine("Upload offset=" + offset + " packetlen="+PacketLen + " progress="+progress);
//Console.WriteLine("Data transfer complete!");
UploadProgress = Convert.ToInt32(progress);
// Console.WriteLine("Up={0}", UploadProgress);
if (progress > 100)
{
}
}
}
My debug output is the following, and I have verified that this 2 MB packet eventually arrives at the other end, despite my local TcpClient claiming that it was sent immediately, returning the full payload length:
SendBufferSize=8192
New transmit pack len=47
Sent 47 bytes
Connected
Sending file Server v1.56.exe....
New transmit pack len=79
Sent 79 bytes
New transmit pack len=2222367
Sent 2222367 bytes
Upload offset=0 packetlen=11504 progress=100
Upload Progress: 100%
Upload Progress: 100%
Upload Progress: 100%
Upload Progress: 100%
So my question is: how do I realistically get feedback on the real progress of the upload, because I know it's taking its time in the background? Or how do I make C# Socket.Send() behave more like the Linux send()?
You need to cut the send into smaller pieces. Your code
int sendret = client.sock.Send(transmit_data, Transmit_offset, transmit_data.Length - Transmit_offset, SocketFlags.None);
will happily send a big chunk at once, which is exactly what you don't want. Try something along the lines of
int sendsize = transmit_data.Length - Transmit_offset;
if (sendsize > 8860) sendsize = 8860;
int sendret = client.sock.Send(transmit_data, Transmit_offset, sendsize, SocketFlags.None);
Of course, hardcoding 8860 (which is the maximum payload for an MTU of 9000) is a bad idea; use a constant!
EDIT
To understand the performance implications, it is necessary to understand that:
the call to Send() returns when the data has been placed in the socket's send buffer, not when it has been physically sent and acknowledged;
the call overhead is tiny compared to the time it takes to physically send the data (10GE on a slow machine might follow different rules).
This means that the next Send() is likely not to delay things at all: it will have prepared the next packet before the previous one is on the wire, thus keeping the line at full speed.
This also implies that your progress counter is still slightly off: it shows the progress of packets being made ready to send, not of those sent and acknowledged. On any broadband connection this is very unlikely to be noticeable to a human being.
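For completeness, the whole loop with capped sends might look roughly like this (a sketch building on the snippet above; MaxSendSize is a stand-in constant, not from the question):
const int MaxSendSize = 8860;   // max payload for MTU 9000; pick a value that suits your link

while (Transmit_offset < transmit_data.Length)
{
    int sendsize = Math.Min(MaxSendSize, transmit_data.Length - Transmit_offset);
    int sendret = client.sock.Send(transmit_data, Transmit_offset, sendsize, SocketFlags.None);
    Transmit_offset += sendret;

    // Progress now advances chunk by chunk instead of jumping straight to 100%.
    UploadProgress = (int)((double)Transmit_offset / transmit_data.Length * 100);
}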

How solid is the Mono SerialPort class?

I have an application that, among other things, uses the SerialPort to communicate with a Digi XBee coordinator radio.
The code for this works rock solid on the desktop under .NET.
Under Mono running on a Quark board and WindRiver Linux, I get about a 99% failure rate when attempting to receive and decode messages from other radios in the network due to checksum validation errors.
Things I have tested:
I'm using polling for the serial port, not events, since event-driven serial is not supported in Mono. So the problem is not event related.
The default USB Coordinator uses an FTDI chipset, but I swapped out to use a proto board and a Prolific USB to serial converter and I see the same failure rate. I think this eliminates the FTDI driver as the problem.
I changed the code to never try duplex communication. It's either sending or receiving. Same errors.
I changed the code to read one byte at a time instead of in blocks sized by the size identifier in the incoming packet. Same errors.
I see this with a variety of remote devices (smart plug, wall router, LTH), so it's not remote-device specific.
The error occurs with solicited or unsolicited messages coming from other devices.
I looked at some of the raw packets that fail a checksum and manual calculation gets the same result, so the checksum calculation itself is right.
Looking at the data, I see what appear to be packet headers mid-packet (i.e. inside the length indicated in the packet header). This makes me think that I'm "missing" some bytes, causing subsequent packet data to be read into earlier packets.
Again, this works fine on the desktop, but for completeness, this is the core of the receiver code (with error checking removed for brevity):
do
{
    byte[] buffer;

    // find the packet start
    byte @byte = 0;
    do
    {
        @byte = (byte)m_port.ReadByte();
    } while (@byte != PACKET_DELIMITER);

    // read the two length bytes, looping because Read may return fewer
    int read = 0;
    while (read < 2)
    {
        read += m_port.Read(lengthBuffer, read, 2 - read);
    }
    var length = lengthBuffer.NetworkToHostUShort(0);

    // get the packet data
    buffer = new byte[length + 4];
    buffer[0] = PACKET_DELIMITER;
    buffer[1] = lengthBuffer[0];
    buffer[2] = lengthBuffer[1];

    read = 0; // reset before reading the payload
    do
    {
        read += m_port.Read(buffer, 3 + read, (buffer.Length - 3) - read);
    } while (read < (length + 1));

    m_frameQueue.Enqueue(buffer);
    m_frameReadyEvent.Set();
} while (m_port.BytesToRead > 0);
I can only think of two places where the failure might be happening: the Mono SerialPort implementation, or the WindRiver serial port driver that sits above the USB stack. I'm inclined to think that WindRiver has a solid driver.
To add to the confusion, we're running Modbus Serial on the same device (in a different application) via Mono and that works fine for days, which somewhat vindicates Mono.
Has anyone else got any experience with the Mono SerialPort? Is it solid? Flaky? Any ideas on what could be going on here?
m_port.Read(lengthBuffer, 0, 2);
That's a bug: you have no guarantee whatsoever that you'll actually read two bytes. Getting just one is very common; serial ports are slow. You must use the return value of Read() to check how much you actually got. Note how you did it right in your second usage. Beyond looping, the simple alternative is to just call ReadByte() twice.
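If you want to keep Read(), a small helper can loop until the requested count has arrived (a sketch; ReadExactly is an illustrative name, not part of the question's code):
static void ReadExactly(SerialPort port, byte[] buffer, int offset, int count)
{
    int read = 0;
    while (read < count)
    {
        // Read returns as soon as at least one byte is available,
        // so keep looping until the full count has arrived.
        read += port.Read(buffer, offset + read, count - read);
    }
}
The two-byte length read then becomes ReadExactly(m_port, lengthBuffer, 0, 2);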

TCP segments disappearing

I've run into a problem that googling can't seem to solve. To keep it simple: I have a client written in C# and a server running Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 bytes. I've read about Nagle's algorithm and ACK delay, but that doesn't answer my problem.
for (int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); //problem solver, but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect some of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream oriented. This means that recv can read any number of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist; sent buffers can be split or merged.
There is no way to get message behavior from TCP, and no way to make recv read at least N bytes in a single call. Message semantics are constructed by the application protocol, often by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop, as sketched below.
Remove the message assumption from your code.
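A minimal sketch of length-prefixed framing on the receiving side (the 4-byte big-endian prefix and the ReceiveExactly helper are illustrative choices, not part of the question's protocol):
static byte[] ReceiveMessage(Socket socket)
{
    // Fixed-size length prefix first, then exactly that many payload bytes.
    byte[] prefix = ReceiveExactly(socket, 4);
    int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(prefix, 0));
    return ReceiveExactly(socket, length);
}

static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = socket.Receive(buffer, read, count - read, SocketFlags.None);
        if (n == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed mid-message
        read += n;
    }
    return buffer;
}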
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will be sent.
In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain all of the data.
When you add a Thread.Sleep(100), you will receive 100 packets on the server side, because the Nagle algorithm won't wait any longer for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack; it was my server code. Somewhere in between the data manipulation I used the strncpy() function on a buffer that stores messages. Every message contained a \0 at the end. strncpy() copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That made me think I had lost messages.
When I used a delay between send() calls on the client, the messages didn't get buffered together, so strncpy() worked on a buffer holding a single message and everything went smoothly. That "phenomenon" led me to think that the rate of the send calls was causing my problems.
Thanks again for the help; your comments made me wonder. :)

Sending UDP packets in C# through loopback. Why can't I trace them?

I have already googled and searched Stack Overflow, but I can find nothing related to my problem.
First of all, let me describe briefly the scene.
Imagine a camera constantly sending pieces (from now on I will call them 'chunks') of a JPEG through the network via UDP. Using UDP is a constraint, I cannot change that, so please do not answer 'why don't you use TCP?', because you know my point: 'Because I can't'.
On the other side a client receives the chunks sent by the cam. To build a kind of flow control I have put three fields (three bytes storing the JPEG number, the chunk index, and the number of chunks) at the beginning of my datagram, followed by the chunk itself.
At the moment I am testing both sides on my own laptop, sending and receiving through the loopback (127.0.0.1). The problem is that while the camera (sender) says it has sent all the chunks properly (I am testing with a single JPEG picture that gets split into 161 chunks), the client receives a random number of pieces (sometimes near 70, sometimes 100, and a few times all of them). I have tried sniffing my loopback with RawCap (http://www.netresec.com/?page=RawCap) and it detects yet another number of UDP datagrams, different both from 161 (which is supposedly sent) and from the number the client claims to have received.
So, is it possible that the send call is not working as expected? Any other suggestions for continuing the investigation?
Here is my sending method:
private void startSendingData(IPEndPoint target)
{
    fatherForm.writeMessage("START command received. Sending data to " + target.Address.ToString() + " port " + target.Port.ToString() + "...");

    // Get chunks prepared
    List<byte[]> listOfChunks = getChunksInAList(collectFiles());
    List<uint> chunksSucces = new List<uint>();
    List<uint> chunksFailed = new List<uint>();
    byte[] stopDatagram = getStopDatagram();

    // Initialise the socket
    Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

    Stopwatch sw = Stopwatch.StartNew();
    foreach (byte[] chunk in listOfChunks)
    {
        if (s.SendTo(chunk, target) == chunk.Length)
            chunksSucces.Add(BitConverter.ToUInt32(chunk, sizeof(uint)));
        else
            chunksFailed.Add(BitConverter.ToUInt32(chunk, sizeof(uint)));
    }
    Debug.WriteLine(chunksSucces.Count + " sent successfully");

    // Tell the receiver not to continue receiving
    s.SendTo(stopDatagram, target);

    long ellapsedMs = sw.ElapsedMilliseconds;
    sw.Stop();

    writeTransmissionRate(listOfChunks.Count, ellapsedMs);
    Debug.WriteLine(sw.ElapsedMilliseconds + "ms ellapsed");
    s.Close();
}
And the output is:
161 chunks to be sent
161 sent successfully
6ms ellapsed
Transmission rate: 37000.65KB/s 36.13MB/s
But on the other side I only receive 22 datagrams in this test.
Thank you in advance.
UDP delivery is not guaranteed. The network stack may be dropping packets if your receive function is not processing them fast enough.
Try sending them more slowly, or increase your UDP receive buffer, for example:
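A minimal sketch, assuming the receiver uses a UdpClient named udpClient (the 1 MB figure is illustrative, not a recommendation):
// Enlarge the OS-level receive buffer so short bursts of datagrams are
// queued instead of dropped; set this before the receive loop starts.
udpClient.Client.ReceiveBufferSize = 1024 * 1024;
Pacing the sender (even a short Thread.Sleep every few chunks) attacks the same problem from the other side.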
