I've run into a problem that googling doesn't seem to solve. To keep it simple: I have a client written in C# and a server running on Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 bytes. I've read about Nagle's algorithm and delayed ACK, but that doesn't answer my problem.
for (int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); // problem solver, but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect some of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream oriented. This means that recv can read any amount of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist. Sent buffers can be split or merged.
There is no way to get message behavior from TCP, and there is no way to make a single recv call return at least N bytes. Message semantics are constructed by the application protocol, often by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop.
Remove that assumption from your code.
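For example, a minimal read-loop sketch on the C# side (the helper name ReceiveExactly and the use of a raw Socket are illustrative assumptions, not code from the question):

using System.Net.Sockets;

// Keeps calling Receive until exactly 'count' bytes have arrived.
static void ReceiveExactly(Socket socket, byte[] buffer, int count)
{
    int received = 0;
    while (received < count)
    {
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed the connection
        received += n;
    }
}

With a length prefix you would first read the fixed-size prefix this way, decode the message length, and then read exactly that many payload bytes.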
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing
the socket to buffer small packets and then combine and send them in
one packet under certain circumstances. A TCP packet consists of 40
bytes of header plus the data being sent. When small packets of data
are sent with TCP, the overhead resulting from the TCP header can
become a significant part of the network traffic. On heavily loaded
networks, the congestion resulting from this overhead can result in
lost datagrams and retransmissions, as well as excessive propagation
time caused by congestion. The Nagle algorithm inhibits the sending of
new TCP segments when new outgoing data arrives from the user if any
previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will be sent immediately.
In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain all of the data.
When you add a Thread.Sleep(100), you will receive 100 packets on the server side because the Nagle algorithm won't wait any longer for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack. It was my server code. Somewhere in between the data manipulation I used the strncpy() function on a buffer that stores messages. Every message contained a '\0' at the end. strncpy() copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
When I used the delay between send() calls on the client, the messages didn't get buffered together. So strncpy() worked on a buffer holding one message and everything went smoothly. That "phenomenon" led me into thinking that the rate of Send calls was causing my problems.
Again, thanks for the help; your comments made me wonder. :)
Related
I'm using this class to connect to a GTA:SA:MP server. My website shows the online players in a table, but if the player count is more than 100, it won't respond correctly and will return 0. I tried resizing the byte[] rBuffer from 500 to 3402, but that didn't work either.
UDP datagrams have a practical maximum safe size on the internet, which is around 500 bytes. If you need to send more data, you need to partition it and send it as multiple datagrams. If the API doesn't support this, you should notify the people who maintain it.
Removing the blank try-catch statements will show the errors more clearly - it's usually a very bad idea to ignore exceptions like this.
UDP basically says that if data gets lost, that's fine, since receiving it later would make little sense once the data is stale. Now, about the data packet size:
The length field of the UDP header allows up to 65535 bytes of data.
However, if you are sending your UDP packet across an Ethernet
network, the Ethernet MTU is 1500 bytes, limiting the maximum datagram
size. Also, some routers will attempt to fragment large UDP packets
into 512 byte chunks.
The receive buffers are way too small. Increase their size and you will be fine.
byte[] rBuffer = new byte[500]; -> change 500 to 32000
byte[] rBuffer = new byte[3402]; -> change 3402 to 32000
Where did you get these values from? If a bigger datagram arrives, the chance is very high that you won't get it from your own socket layer. As described by Stevens, Berkeley sockets only deliver complete datagrams to the user.
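As a rough sketch (assuming the class uses a plain UDP Socket named socket; that name is made up here), receiving into a generously sized buffer and keeping only the bytes that actually arrived could look like this:

using System;
using System.Net;
using System.Net.Sockets;

byte[] rBuffer = new byte[32000];                        // large enough for any expected reply
EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
int received = socket.ReceiveFrom(rBuffer, ref remote);  // one call returns one whole datagram
byte[] reply = new byte[received];
Buffer.BlockCopy(rBuffer, 0, reply, 0, received);        // trim to the actual datagram size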
Also, change your exception handling to catch the exception rather than ignoring it:
catch (Exception ex)
{
    Trace.WriteLine(ex.Message);
    return false;
}
and do that at every location in the code where there is exception handling.
This is my first question posted on this forum, and I'm a beginner in the C# world, so this is kind of exciting for me, but I'm facing some issues with sending a large amount of data through sockets. Here are more details about my problem:
I'm sending a binary image of 5 MB through a TCP socket. At the receiving end I save the result (the data received) and get only 1.5 MB, so data has been lost (I compared the original and the resulting file and it showed me the missing parts).
This is the code I use:
private void senduimage_Click(object sender, EventArgs e)
{
    if (!user.clientSocket_NewSocket.Connected)
    {
        Socket clientSocket_NewSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        user.clientSocket_NewSocket = clientSocket_NewSocket;
        System.IAsyncResult _NewSocket = user.clientSocket_NewSocket.BeginConnect(ip_address, NewSocket.Transceiver_TCP_Port, null, null);
        bool successNewSocket = _NewSocket.AsyncWaitHandle.WaitOne(2000, true);
    }

    byte[] outStream = System.Text.Encoding.ASCII.GetBytes(Uimage_Data);
    user.clientSocket_NewSocket.Send(outStream);
}
In forums they say to divide the data into chunks. Is this the solution, and if so, how can I do it? I've tried, but it didn't work!
There are lots of different solutions, but chunking is usually a good one. You can either do this blindly, where you keep filling your temporary buffer and appending it to some stateful buffer until you hit some arbitrary token or the buffer is not completely full, or you can adhere to some sort of contract per TCP message (a message being the overall data to receive).
If you go with some sort of contract, then make the first N bytes of a message a descriptor, which you could make as big or as small as you want, but your temp buffer will ONLY read this size up front from the stream.
A typical header could be something like:
public struct StreamHeader // 5 bytes
{
    public byte MessageType { get; set; } // 1 byte
    public int MessageSize { get; set; }  // 4 bytes
}
So you would read that in; then, if it's small enough, allocate the full message size for the temp buffer and read it all in, or if you deem it too big, chunk it into sections and keep reading until the total bytes you have received match the MessageSize portion of your header structure.
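A rough sketch of that read pattern, assuming a NetworkStream and the 5-byte header layout above (ReadMessage and ReadExactly are invented helper names, and the 4-byte size is assumed to be little-endian on both ends):

using System;
using System.IO;
using System.Net.Sockets;

// Read the fixed-size header first, then loop until MessageSize bytes have arrived.
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 5);
    byte messageType = header[0];                       // 1 byte (dispatch on this in real code)
    int messageSize = BitConverter.ToInt32(header, 1);  // 4 bytes
    return ReadExactly(stream, messageSize);
}

static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0)
            throw new EndOfStreamException("Connection closed before the full message arrived.");
        read += n;
    }
    return buffer;
}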
Probably you haven't read the documentation on socket usage in C# (http://msdn.microsoft.com/en-us/library/ms145160.aspx).
The internal buffer cannot store all the data you pass to the Send method. A possible solution to your problem is, as you said, to divide your data into chunks.
int totalBytesToSend = outStream.Length;
int bytesSent = 0;
while (bytesSent < totalBytesToSend)
{
    bytesSent += user.clientSocket_NewSocket.Send(outStream, bytesSent, totalBytesToSend - bytesSent, SocketFlags.None);
}
I suspect that one of your problems is that you are not calling EndConnect. From the MSDN documentation:
The asynchronous BeginConnect operation must be completed by calling the EndConnect method.
Also, the wait:-
bool successNewSocket = _NewSocket.AsyncWaitHandle.WaitOne(2000, true);
is probably always false as there is nothing setting the event to the signaled state. Usually, you would specify a callback function to the BeginConnect function and in the callback you'd call EndConnect and set the state of the event to signaled. See the example code on this MSDN page.
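A hedged sketch of that pattern (ConnectWithTimeout is an invented helper name; this is not the MSDN example itself):

using System;
using System.Net.Sockets;
using System.Threading;

// Complete BeginConnect with EndConnect in the callback and signal an event,
// so the WaitOne below really reflects whether the connect finished.
static bool ConnectWithTimeout(Socket socket, string host, int port, int timeoutMs)
{
    var connectDone = new ManualResetEvent(false);

    socket.BeginConnect(host, port, ar =>
    {
        var s = (Socket)ar.AsyncState;
        try { s.EndConnect(ar); }     // required to complete the asynchronous connect
        catch (SocketException) { }   // connect failed; Connected stays false
        connectDone.Set();            // signal the waiting thread either way
    }, socket);

    return connectDone.WaitOne(timeoutMs) && socket.Connected;
}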
UPDATE
I think I see another problem:-
byte[] outStream = System.Text.Encoding.ASCII.GetBytes(Uimage_Data);
I don't know what type Uimage_Data is, but I really don't think you want to convert it to ASCII. A zero in the data may signal an end-of-data byte (or maybe a 26 or some other ASCII code). The point is, the encoding process is likely to change the data.
Can you provide the type for the Uimage_Data object?
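If Uimage_Data really is binary image data, a hedged sketch of sending it without any text encoding (imagePath is a stand-in for wherever the image lives; the question's socket field is reused) would be:

using System.IO;
using System.Net.Sockets;

// Read the image as raw bytes and send them untouched; no text encoding involved.
byte[] imageBytes = File.ReadAllBytes(imagePath);
user.clientSocket_NewSocket.Send(imageBytes);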
Most likely the problem is that you are closing the client-side socket before all the data has been transmitted to the server, and it is therefore getting discarded.
By default when you close a socket, all untransmitted data (sitting in the operating system buffers) is discarded. There are a few solutions:
[1] Set SO_LINGER (see http://developerweb.net/viewtopic.php?id=2982)
[2] Get the server to send an acknowledgement to the client, and don't close the client-side socket until you receive it.
[3] Wait until the output buffer is empty on the client side before closing the socket (test using getsockopt SO_SNDBUF; I'm not sure of the syntax in C#).
Also, you really should be testing the return value of Send(). Although in theory it should block until it sends all the data, I would want to actually verify that and at least print an error message if there is a mismatch.
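On the C# side, a hedged sketch of option [1] plus a graceful shutdown in the spirit of [2]/[3] (the socket variable stands in for user.clientSocket_NewSocket) could look like this:

using System.Net.Sockets;

// [1] Linger on Close so the stack tries to flush pending data (timeout in seconds).
socket.LingerState = new LingerOption(true, 10);

// Graceful variant of [2]/[3]: stop sending, then wait for the peer to finish
// and close its side (Receive returns 0) before closing ours.
socket.Shutdown(SocketShutdown.Send);
byte[] drain = new byte[1024];
while (socket.Receive(drain) > 0)
{
    // ignore anything still arriving; we only wait for the remote close
}
socket.Close();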
I'm currently writing a prototype application in C#/.NET 4 where I need to transfer an unknown amount of data. The data is read from a text file and then serialized into a byte array.
Now I need to implement both transmission methods, UDP and TCP. The transmission works fine both ways, but I'm struggling with UDP. I assumed that the transmission using UDP would be much faster than TCP, but in fact my tests showed that the UDP transmission is about 7 to 8 times slower than TCP.
I tested the transmission with a 12 megabyte file; the TCP transmission took about 1 second, whereas the UDP transmission took about 7 seconds.
In the application I use plain sockets to transmit the data. Since UDP only allows a maximum of 65,535 bytes per message, I split the serialized byte array of the file into several parts, where each part has the size of the socket's SendBufferSize, and then I transfer each part using the Socket.Send() method call.
Here is the code for the Sender part.
while (startOffset < data.Length)
{
    if ((startOffset + payloadSize) > data.Length)
    {
        payloadSize = data.Length - startOffset;
    }

    byte[] subMessageBytes = new byte[payloadSize + 16];
    byte[] messagePrefix = new UdpMessagePrefix(data.Length, payloadSize, messageCount, messageId).ToByteArray();
    Buffer.BlockCopy(messagePrefix, 0, subMessageBytes, 0, 16);
    Buffer.BlockCopy(data, startOffset, subMessageBytes, messageOffset, payloadSize);

    messageId++;
    startOffset += payloadSize;
    udpClient.Send(subMessageBytes, subMessageBytes.Length);
    messages.Add(subMessageBytes);
}
This code simply copies the next part to be sent into a byte array and then calls the Send method on the socket. My first guess was that the splitting/copying of the byte arrays was slowing down performance, but I isolated and tested the splitting code, and the splitting took only a few milliseconds, so that was not causing the problem.
int receivedMessageCount = 1;
Dictionary<int, byte[]> receivedMessages = new Dictionary<int, byte[]>();

while (receivedMessageCount != totalMessageCount)
{
    byte[] data = udpClient.Receive(ref remoteIpEndPoint);
    UdpMessagePrefix p = UdpMessagePrefix.FromByteArray(data);
    receivedMessages.Add(p.MessageId, data);
    //Console.WriteLine("Received packet: " + receivedMessageCount + " (ID: " + p.MessageId + ")");
    receivedMessageCount++;
    //Console.WriteLine("ReceivedMessageCount: " + receivedMessageCount);
}

Console.WriteLine("Done...");
return receivedMessages;
This is the server-side code where I receive the UDP messages. Each message has a few bytes as a prefix that stores the total number of messages and the size. So I simply call socket.Receive in a loop until I have received the number of messages specified in the prefix.
My assumption here is that I may not have implemented the UDP transmission code "efficiently" enough... Maybe one of you already sees a problem in the code snippets or has any other suggestion or hint as to why my UDP transmission is slower than TCP.
Thanks in advance!
While the UDP datagram size can be up to 64K, the actual wire frames are usually 1500 bytes (the normal Ethernet MTU). That also has to fit an IP header of at least 20 bytes and a UDP header of 8 bytes, leaving you with 1472 bytes of usable payload.
What you are seeing is the result of your OS network stack fragmenting the UDP datagrams on the sender side and then re-assembling them on the receiver side. That takes time, thus your results.
TCP, on the other hand, does its own packetization and tries to find path MTU, so it's more efficient in this case.
Limit your data chunks to 1472 bytes and measure again.
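Applied to the sender loop in the question, that could be as simple as sizing the payload so prefix plus data fit in one frame (a sketch; the 16 matches the prefix size used above):

// 1500-byte Ethernet MTU - 20-byte IP header - 8-byte UDP header = 1472 bytes per datagram.
const int MaxDatagramSize = 1472;
const int PrefixSize = 16;                       // size of UdpMessagePrefix in the question
int payloadSize = MaxDatagramSize - PrefixSize;  // 1456 bytes of file data per datagram

With chunks sized like this, the IP layer no longer has to fragment and reassemble every datagram.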
I think you should measure CPU usage and network throughput for the duration of the test.
If the CPU is pegged, this is your problem: Turn on a profiler.
If the network (cable) is pegged this is a different class of problems. I wouldn't know what to do about it ;-)
If neither is pegged, run a profiler and see where most wall-clock time is spent. There must be some waiting going on.
If you don't have a profiler just hit break 10 times in the debugger and see where it stops most often.
Edit: My response to your measurement: we know that 99% of all execution time is spent receiving data, but we don't know yet whether the CPU is busy. Look in Task Manager and see which process is busy.
My guess is that it is the System process. This is the Windows kernel, and probably the UDP component of it.
This might have to do with packet fragmentation. IP packets have a certain maximum size, around 1472 bytes of UDP payload on Ethernet. Your UDP packets are being fragmented and reassembled on the receiving machine. I am surprised that is taking so much CPU time.
Try sending packets of total size of 1000 and 1472 (try both!) and report the results.
This is code I'm using to test a webserver on an embedded product that hasn't been behaving well when an HTTP request comes in fragmented across multiple TCP packets:
/* This is all within a loop that cycles size_chunk up to the size of the whole
 * test request, in order to test all possible fragment sizes. */
TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
client_sensor.Client.NoDelay = true; /* SHOULD force the TCP socket to send the packets in exactly the chunks we tell it to, rather than buffering the output. */
/* I have also tried just "client_sensor.NoDelay = true", with no luck. */
client_sensor.Client.SendBufferSize = size_chunk; /* Added in a desperate attempt to fix the problem before posting my shameful ignorance on stackoverflow. */

for (int j = 0; j < TEST_HEADERS.Length; j += size_chunk)
{
    String request_fragment = TEST_HEADERS.Substring(j, (TEST_HEADERS.Length < j + size_chunk) ? (TEST_HEADERS.Length - j) : size_chunk);
    client_sensor.Client.Send(Encoding.ASCII.GetBytes(request_fragment));
    client_sensor.GetStream().Flush();
}

/* Test stuff goes here, check that the embedded web server responded correctly, etc. */
Looking at Wireshark, I see only one TCP packet go out, which contains the entire test header, not the roughly (header length / chunk size) packets I expect. I have used NoDelay to turn off the Nagle algorithm before, and it usually works just as I expect it to. The online documentation for NoDelay at http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.nodelay%28v=vs.90%29.aspx definitely states "Sends data immediately upon calling NetworkStream.Write" in its associated code sample, so I think I've been using it correctly all this time.
This happens whether or not I step through the code. Is the .NET runtime optimizing away my packet fragmentation?
I'm running x64 Windows 7, .NET Framework 3.5, Visual Studio 2010.
TcpClient.NoDelay does not mean that blocks of bytes will not be aggregated into a single packet. It means that blocks of bytes will not be delayed in order to aggregate into a single packet.
If you want to force a packet boundary, use Stream.Flush.
Grr. It was my antivirus getting in the way. A recent update caused it to start interfering with the sending of HTTP requests to port 80 by buffering all output until the final "\r\n\r\n" marker was seen, regardless of how the OS was trying to handle the outbound TCP traffic. I should have checked that first, but I've been using this same antivirus program for years and never had this problem before, so I didn't even think of it. Everything works just the way it used to when I disable the antivirus.
The MSDN docs show setting the TcpClient.NoDelay = true, not the TcpClient.Client.NoDelay property. Did you try that?
Your test code is just fine (I assume that you send valid HTTP). What you should check is why the TCP server is not behaving well when reading from the TCP connection. TCP is a stream protocol; that means you cannot make any assumptions about the size of received data chunks unless you explicitly specify those sizes in your data protocol. For instance, you can prefix all your data packets with a fixed-size (2-byte) prefix that contains the size of the data to be received.
Reading HTTP is usually done in several phases: read the HTTP request line, read the HTTP headers, read the HTTP content (if applicable). The first two parts do not have any size specification, but they have a special delimiter (CRLF).
Here is some info on how HTTP can be read and parsed.
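A minimal sketch of the 2-byte length prefix idea on the sending side (SendFramed is an invented name, and both ends are assumed to agree on the byte order of the prefix):

using System;
using System.Net.Sockets;

static void SendFramed(Socket socket, byte[] payload)
{
    byte[] prefix = BitConverter.GetBytes((ushort)payload.Length); // 2-byte length prefix
    socket.Send(prefix);
    socket.Send(payload);
}

The receiver then reads exactly 2 bytes, decodes the length, and loops until that many payload bytes have arrived, the same read-loop pattern shown earlier in this thread.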
I have a group of "Packets", which are custom classes that are converted to byte[] and then sent to the client. When a client joins, they are updated with the previous "Catch Up Packets" that were sent before the user joined. Think of it as a chat room where you are updated with the previous conversations.
My issue is on the client end: we do not receive all the information, sometimes nothing at all.
Below is pseudo C# code for what I see; the code looks like this:
lock (CatchUpQueue.SyncRoot)
{
    foreach (Packet packet in CatchUpQueue)
    {
        // If I put Console.WriteLine("I am Sending Packets"); it will work fine up to (2) client sockets, else it fails again.
        clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    }
}
Is this some sort of throttling issue, or an issue with sending too many times, i.e. if there are 4 packets in the queue then it calls BeginSend 4 times?
I have searched for a similar topic and I cannot find one. Thank you for your help.
Edit: I would also like to point out that sending between clients continues normally for any sends after the client connects. But for some reason the packets within this for loop are not all sent.
I would suspect that you are flooding the TCP port with packets, and probably overflowing its send buffer, at which point it will probably return errors rather than sending the data.
The idea of Async I/O is not to allow you to send an infinite amount of data packets simultaneously, but to allow your foreground thread to continue processing while a linear sequence of one or more I/O operations occurs in the background.
As the TCP stream is a serial stream, try respecting that and send each packet in turn. That is, after BeginSend, use the Async callback to detect when the Send has completed before you send again. You are effectively doing this by adding a Sleep, but this is not a very good solution (you will either be sending packets more slowly than possible, or you may not sleep for long enough and packets will be lost again)
Or, if you don't need the I/O to run in the background, use your simple foreach loop, but use a synchronous rather than Async send.
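One hedged way to express "send each packet in turn" with the async API (a sketch; the conversion of a Packet to byte[] and the queue type are assumptions):

using System.Collections.Generic;
using System.Net.Sockets;

// Chain BeginSend calls: the next packet is only sent after the previous send completed.
static void SendQueued(Socket clientSocket, Queue<byte[]> packets)
{
    if (packets.Count == 0)
        return;

    byte[] data = packets.Dequeue();
    clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, ar =>
    {
        clientSocket.EndSend(ar);           // complete the previous send
        SendQueued(clientSocket, packets);  // then start the next one
    }, null);
}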
Okay,
Apparently a fix, which still has me confused, is to Thread.Sleep for a number of milliseconds equal to the number of packets I am sending.
So...
for (int i = 0; i < PacketQueue.Count; i++)
{
    Packet packet = PacketQueue[i];
    clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    Thread.Sleep(PacketQueue.Count);
}
I assume that for some reason the loop stops some of the calls from happening... Well I will continue to work with this and try to find the real answer.