Socket failed when more than 100 players online - c#

I'm using this class to connect to a GTA:SA:MP server. My website shows the online players in a table, but if the player count is more than 100, it won't respond correctly and returns 0. I tried resizing the byte[] rBuffer from 500 to 3402, but that didn't work either.

UDP datagrams have a practical maximum size on the internet if you want them to traverse the network without fragmentation, and that is only around 500 bytes of payload. If you need to send more data, you need to partition it and send it as multiple datagrams. If the API doesn't support this, you should notify its maintainers.
Removing the empty try-catch blocks will show the errors more clearly. It's usually a very bad idea to ignore exceptions like this.

UDP's philosophy is basically that if data gets lost, that's fine: receiving it later would make little sense because the data would already be stale. Now about the datagram size:
The length field of the UDP header allows up to 65535 bytes of data.
However, if you are sending your UDP packet across an Ethernet
network, the Ethernet MTU is 1500 bytes, limiting the maximum datagram
size. Also, some routers will attempt to fragment large UDP packets
into 512 byte chunks.

The receive buffers are way too small. Increase their size and you will be fine.
byte[] rBuffer = new byte[500]; -> change 500 to 32000
byte[] rBuffer = new byte[3402]; -> change 3402 to 32000
Where did you get these values from? If a datagram arrives that is bigger than your buffer, the chance is very high that you won't get it from your own socket layer. As described by Stevens, Berkeley-style sockets only deliver complete datagrams to the user.
Also change your exception handling to log the exception rather than ignoring it:
catch (Exception ex)
{
    Trace.WriteLine(ex.Message);
    return false;
}
and do that at every location in the code where there is exception handling.
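Putting both suggestions together, here is a minimal sketch (not the original class; the endpoint and request bytes are placeholders, and the SA:MP query format itself is out of scope) of a query helper that uses a generously sized buffer and logs failures instead of swallowing them:

using System;
using System.Diagnostics;
using System.Net;
using System.Net.Sockets;

static byte[] QueryServer(IPEndPoint server, byte[] request)
{
    using (var socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp))
    {
        socket.ReceiveTimeout = 3000;          // fail fast instead of hanging forever
        byte[] rBuffer = new byte[32000];      // large enough for a full player list

        try
        {
            socket.SendTo(request, server);
            int received = socket.Receive(rBuffer);
            byte[] reply = new byte[received];
            Buffer.BlockCopy(rBuffer, 0, reply, 0, received);
            return reply;
        }
        catch (SocketException ex)
        {
            Trace.WriteLine(ex.Message);       // surface the real error instead of hiding it
            return null;
        }
    }
}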

Related

TCP sender machine receiver-window-size shrinking to 0 on windows machine after sending tiny amounts of data for a few minutes

I'm writing an app on a Windows laptop that sends TCP bytes in a loop as a "homemade keep-alive" message to keep a connection active (the server machine disconnects after 15 seconds of receiving no TCP data). The server machine sends the laptop small chunks of data (about 0.5 KB/second) as long as a connection is alive (according to the server documentation this should ideally be an "echo" packet, but I was unable to find how this is accomplished in .NET).

My problem is that when I view this traffic in Wireshark I can see good network activity, then, after a few minutes, the "win" value (the receive window size advertised by the laptop) shrinks from 65K to 0 in increments of about 240 bytes per packet. Why is this happening and how can I prevent it? I can't seem to get the "keep-alive" flags in .NET to work, so this was supposed to be my workaround. I do not see any missed ACK messages, and my data rate is about 2 Kb/sec, so I don't understand why the laptop's window size is dropping. I assume there is a misconception on my part about TCP and/or the Windows/.NET use of TCP sockets, since I have no experience with TCP (I've always used UDP).
TcpClient client = new TcpClient(iPEndpoint);
//Socket s = client.Client; none of these flags actually work on the keep-alive feature
//s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
//s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10);
//s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10);

// Translate the passed message into ASCII and store it as a byte array.
// Byte[] data = System.Text.Encoding.ASCII.GetBytes(message);
IPAddress ipAdd = IPAddress.Parse("192.168.1.10");
IPEndPoint ipEndPoin = new IPEndPoint(ipAdd, 13000);
client.Connect(ipEndPoin);
NetworkStream stream = client.GetStream();

// Send the message to the connected TcpServer.
bool finished = false;
while (!finished)
{
    try
    {
        stream.Write(data, 0, data.Length);
        Thread.Sleep(5000);
    }
    catch (System.IO.IOException ioe)
    {
        if (ioe.InnerException is System.Net.Sockets.SocketException)
        {
            client.Dispose();
            client = new TcpClient(iPEndpoint);
            client.Connect(ipEndPoin);
            stream = client.GetStream();
            Console.Write("reconnected");
            // this immediately fails again after exiting this catch and going back to
            // the while loop, because the send window is still 0
        }
    }
}
You should really be familiar with RFC 793, Transmission Control Protocol, which is the definition of TCP. It explains the window and how it is used for flow control:
Flow Control:
TCP provides a means for the receiver to govern the amount of data
sent by the sender. This is achieved by returning a "window" with
every ACK indicating a range of acceptable sequence numbers beyond the
last segment successfully received. The window indicates an allowed
number of octets that the sender may transmit before receiving further
permission.
-and-
To govern the flow of data between TCPs, a flow control mechanism is
employed. The receiving TCP reports a "window" to the sending TCP.
This window specifies the number of octets, starting with the
acknowledgment number, that the receiving TCP is currently prepared to
receive.
-and-
Window: 16 bits
The number of data octets beginning with the one indicated in the
acknowledgment field which the sender of this segment is willing to
accept.
The window size is dictated by the receiver of the data in the ACK segments where it acknowledges receipt of the data. If your laptop's receive window shrinks to 0, it is advertising a zero window because it has no more room in its receive buffer; it needs time to process the data and free up space. When it has more space, it will send an ACK segment with a larger window.
Segment Receive  Test
Length  Window
------- -------  -------------------------------------------
   0       0     SEG.SEQ = RCV.NXT
   0      >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
  >0       0     not acceptable
  >0      >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
                 or RCV.NXT =< SEG.SEQ+SEG.LEN-1 < RCV.NXT+RCV.WND
Note that when the receive window is zero no segments should be
acceptable except ACK segments. Thus, it is possible for a TCP to
maintain a zero receive window while transmitting data and receiving
ACKs. However, even when the receive window is zero, a TCP must
process the RST and URG fields of all incoming segments.
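In .NET terms, the usual way a receive window grinds down to zero like this is that the receiving application never reads the data the server keeps sending, so the socket's receive buffer fills up. A minimal sketch of a keep-alive loop that also drains the incoming data (the endpoint, payload, and buffer size are assumptions, not the poster's values):

using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Threading;

TcpClient client = new TcpClient();
client.Connect("192.168.1.10", 13000);
NetworkStream stream = client.GetStream();

byte[] keepAlive = Encoding.ASCII.GetBytes("ping");
byte[] inbox = new byte[8192];

while (true)
{
    // homemade keep-alive write
    stream.Write(keepAlive, 0, keepAlive.Length);

    // drain whatever the server sent since the last write so the socket's
    // receive buffer empties and the advertised window stays open
    while (client.Available > 0)
    {
        int n = stream.Read(inbox, 0, inbox.Length);
        if (n == 0) throw new IOException("remote side closed the connection");
        // ... process or discard the n bytes ...
    }

    Thread.Sleep(5000);
}

If the incoming data really is throwaway traffic, simply reading and discarding it is enough to keep the window from collapsing.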

Receiving large UDP packet fails behind Linux firewall using C#

I was using this code successfully on my home computer - please note that I have trimmed the code to show only the important parts:
Socket Sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);

// Connect to the server
Sock.Connect(Ip, Port);

// Setup buffers
byte[] bufferSend = ...; // some data prepared before
byte[] bufferRec = new byte[8024];

// Send the command
Sock.Send(bufferSend, SocketFlags.None);
// UNTIL HERE EVERYTHING WORKS FINE

// Receive the answer
int ct = 0;
StringBuilder response = new StringBuilder();
while ((ct = Sock.Receive(bufferRec)) > 0)
{
    response.Append(Encoding.Default.GetString(bufferRec, 0, ct));
}

// Print the result
Console.WriteLine("This is the result:\n" + response.ToString());
In another environment (Windows, but behind an Ubuntu firewall) I have problems receiving packets larger than 1472 bytes: an exception is thrown saying the request timed out.
So basically I have two options:
Either fix the Ubuntu firewall server (how?)
Or adjust my code (probably the better option?)
How would I need to adapt my code in order to split the data into packets of a workable size? I thought adjusting the variable to bufferRec = new byte[1024] would suffice, but obviously this does not work; I then get an exception saying the received packet is larger than bufferRec. Do I have to change the SocketType?
Unfortunately I do not know a lot about sockets, so your explanations would help a lot!
Your packet travels across various networks. Each network probably has its own MTU size, which is the maximum size a single packet can be.
If your data exceeds that size, the DF flag is checked (DF stands for Don't Fragment). If that flag is set, the packet is dropped and an ICMP response is generated.
In C# sockets, this option is controlled by the DontFragment property, which defaults to true, hence your problem.
Note that UDP with fragmentation enabled should be considered unreliable, since fragments will probably get lost on a busy network.
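If that diagnosis is right, the quick experiment is to clear the flag before sending. This is only a sketch reusing the Ip, Port and bufferSend from the question, not a complete fix:

Socket Sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
Sock.DontFragment = false;               // allow the IP layer to fragment large datagrams
Sock.Connect(Ip, Port);
Sock.Send(bufferSend, SocketFlags.None);

The more robust route is still to keep each datagram's payload at or below 1472 bytes so it fits in a single Ethernet frame without relying on fragmentation at all.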

How solid is the Mono SerialPort class?

I have an application that, among other things, uses the SerialPort to communicate with a Digi XBee coordinator radio.
The code for this works rock solid on the desktop under .NET.
Under Mono running on a Quark board and WindRiver Linux, I get about a 99% failure rate when attempting to receive and decode messages from other radios in the network due to checksum validation errors.
Things I have tested:
I'm using polling for the serial port, not events, since event-driven serial is not supported in Mono. So the problem is not event related.
The default USB Coordinator uses an FTDI chipset, but I swapped out to use a proto board and a Prolific USB to serial converter and I see the same failure rate. I think this eliminates the FTDI driver as the problem.
I changed the code to never try duplex communication. It's either sending or receiving. Same errors.
I changed the code to read one byte at a time instead of in blocks sized by the size identifier in the incoming packet. Same errors.
I see this with a variety of remote devices (smart plug, wall router, LTH), so it's not remote-device specific.
The error occurs with solicited or unsolicited messages coming from other devices.
I looked at some of the raw packets that fail a checksum and manual calculation gets the same result, so the checksum calculation itself is right.
Looking at the data I see what appear to be packet headers mid-packet (i.e. inside the length indicated in the packet header). This makes me think that I'm "missing" some bytes, causing subsequent packet data to be read into earlier packets.
Again, this works fine on the desktop, but for completeness, this is the core of the receiver code (with error checking removed for brevity):
do
{
    byte[] buffer;

    // find the packet start
    byte @byte = 0;
    do
    {
        @byte = (byte)m_port.ReadByte();
    } while (@byte != PACKET_DELIMITER);

    int read = 0;
    while (read < 2)
    {
        read += m_port.Read(lengthBuffer, read, 2 - read);
    }
    var length = lengthBuffer.NetworkToHostUShort(0);

    // get the packet data
    buffer = new byte[length + 4];
    buffer[0] = PACKET_DELIMITER;
    buffer[1] = lengthBuffer[0];
    buffer[2] = lengthBuffer[1];

    do
    {
        read += m_port.Read(buffer, 3 + read, (buffer.Length - 3) - read);
    } while (read < (length + 1));

    m_frameQueue.Enqueue(buffer);
    m_frameReadyEvent.Set();
} while (m_port.BytesToRead > 0);
I can only think of two places where the failure might be happening - the Mono SerialPort implementation or the WindRiver serial port driver that's sitting above the USB stack. I'm inclined to think that WindRiver has a good driver.
To add to the confusion, we're running Modbus Serial on the same device (in a different application) via Mono and that works fine for days, which somewhat vindicates Mono.
Has anyone else got any experience with the Mono SerialPort? Is it solid? Flaky? Any ideas on what could be going on here?
m_port.Read(lengthBuffer, 0, 2);
That's a bug: you have no guarantee whatsoever that you'll actually get two bytes. Getting just one is very common; serial ports are slow. You must use the return value of Read() to check how many bytes you got. Note how you did it right in your second usage. Beyond looping, the simple alternative is to just call ReadByte() twice.
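A short sketch of both variants, using the question's m_port and a two-byte lengthBuffer:

// loop until both length bytes have actually arrived
int got = 0;
while (got < 2)
{
    got += m_port.Read(lengthBuffer, got, 2 - got);
}

// or, more simply, let ReadByte() block once per byte
lengthBuffer[0] = (byte)m_port.ReadByte();
lengthBuffer[1] = (byte)m_port.ReadByte();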

TCP segments disappearing

I've run into a problem that googling can't seem to solve. To keep it simple: I have a client written in C# and a server running Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 B. I read about Nagle's algorithm and delayed ACKs, but that doesn't answer my problem.
for (int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); //problem solver but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect one of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream oriented. This means that recv can return any number of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist; sent buffers can be split or merged.
There is no way to get message behavior from TCP, and no way to make a single recv call return at least N bytes. Message semantics are constructed by the application protocol, often by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop.
Remove that assumption from your code.
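To illustrate that last point (this is a generic sketch, not the poster's protocol), a simple length-prefix framing layer over a NetworkStream could look like this; the ReadExactly loop is what restores message boundaries on top of the byte stream:

using System;
using System.IO;
using System.Net.Sockets;

static class Framing
{
    // write a 4-byte length prefix followed by the payload
    public static void SendMessage(NetworkStream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(payload.Length);
        stream.Write(prefix, 0, prefix.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // read exactly count bytes, looping because Read may return fewer
    static byte[] ReadExactly(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int n = stream.Read(buffer, offset, count - offset);
            if (n == 0) throw new IOException("connection closed mid-message");
            offset += n;
        }
        return buffer;
    }

    // read one length-prefixed message, however TCP chose to segment it
    public static byte[] ReceiveMessage(NetworkStream stream)
    {
        int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
        return ReadExactly(stream, length);
    }
}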
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing
the socket to buffer small packets and then combine and send them in
one packet under certain circumstances. A TCP packet consists of 40
bytes of header plus the data being sent. When small packets of data
are sent with TCP, the overhead resulting from the TCP header can
become a significant part of the network traffic. On heavily loaded
networks, the congestion resulting from this overhead can result in
lost datagrams and retransmissions, as well as excessive propagation
time caused by congestion. The Nagle algorithm inhibits the sending of
new TCP segments when new outgoing data arrives from the user if any
previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will actually be sent right away.
In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain all of the data.
When you add a Thread.Sleep(100), you will receive 100 packets on the server side because the Nagle algorithm won't wait any longer for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack; it was my server code. Somewhere in the data manipulation I used the strncpy() function on a buffer that stores messages. Every message contained a \0 at the end. strncpy() copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
When I used a delay between send() calls on the client, messages didn't get buffered together, so strncpy() worked on a buffer with one message and everything went smoothly. That "phenomenon" led me to think that the rate of the send calls was causing my problems.
Again, thanks for the help; your comments made me wonder. :)

UDP data transmission slower than TCP

I'm currently writing a prototype application in C#/.NET 4 where I need to transfer an unknown amount of data. The data is read in from a text file and then serialized into a byte array.
Now I need to implement both transmission methods, UDP and TCP. Transmission works fine both ways, but I'm struggling with UDP. I assumed that a transmission using UDP would have to be much faster than TCP, but in fact my tests showed that the UDP transmission is about 7 to 8 times slower.
I tested the transmission with a 12 megabyte file; the TCP transmission took about 1 second whereas the UDP transmission took about 7 seconds.
In the application I use plain sockets to transmit the data. Since UDP only allows a maximum of 65,535 bytes per message, I split the serialized byte array of the file into several parts, where each part has the size of the socket's SendBufferSize, and then I transfer each part using a Socket.Send() method call.
Here is the code for the Sender part.
while (startOffset < data.Length)
{
    if ((startOffset + payloadSize) > data.Length)
    {
        payloadSize = data.Length - startOffset;
    }

    byte[] subMessageBytes = new byte[payloadSize + 16];
    byte[] messagePrefix = new UdpMessagePrefix(data.Length, payloadSize, messageCount, messageId).ToByteArray();
    Buffer.BlockCopy(messagePrefix, 0, subMessageBytes, 0, 16);
    Buffer.BlockCopy(data, startOffset, subMessageBytes, messageOffset, payloadSize);

    messageId++;
    startOffset += payloadSize;

    udpClient.Send(subMessageBytes, subMessageBytes.Length);
    messages.Add(subMessageBytes);
}
This code simply copies the next part to be sent into a byte array and then calls the send method on the socket. My first guess was that the splitting/copying of the byte arrays was slowing down the performance, but I isolated and tested the splitting code and it took only a few milliseconds, so that was not the problem.
int receivedMessageCount = 1;
Dictionary<int, byte[]> receivedMessages = new Dictionary<int, byte[]>();

while (receivedMessageCount != totalMessageCount)
{
    byte[] data = udpClient.Receive(ref remoteIpEndPoint);
    UdpMessagePrefix p = UdpMessagePrefix.FromByteArray(data);
    receivedMessages.Add(p.MessageId, data);
    //Console.WriteLine("Received packet: " + receivedMessageCount + " (ID: " + p.MessageId + ")");

    receivedMessageCount++;
    //Console.WriteLine("ReceivedMessageCount: " + receivedMessageCount);
}

Console.WriteLine("Done...");
return receivedMessages;
This is the server-side code where I receive the UDP messages. Each message has a prefix of a few bytes that stores the total number of messages and the size. So I simply call socket.Receive in a loop until I have received the number of messages specified in the prefix.
My assumption here is that I may not have implemented the UDP transmission code "efficiently" enough... Maybe one of you already sees a problem in the code snippets, or has any other suggestion or hint as to why my UDP transmission is slower than TCP.
Thanks in advance!
While UDP datagram size can be up to 64K, the actual wire frames are usually 1500 bytes (normal ethernet MTU). That also has to fit an IP header of minimum 20 bytes and a UDP header of 8 bytes, leaving you with 1472 bytes of usable payload.
What you are seeing is the result of your OS network stack fragmenting the UDP datagrams on the sender side and then re-assembling them on the receiver side. That takes time, thus your results.
TCP, on the other hand, does its own packetization and tries to find path MTU, so it's more efficient in this case.
Limit your data chunks to 1472 bytes and measure again.
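A sketch of what that looks like in the question's sender loop, reusing its data, messageCount, udpClient and UdpMessagePrefix and assuming the 16-byte prefix counts against the 1472-byte budget:

const int MtuPayload = 1472;                  // 1500 - 20 (IP header) - 8 (UDP header)
const int PrefixSize = 16;
int chunkSize = MtuPayload - PrefixSize;      // file bytes per datagram

int startOffset = 0;
int messageId = 0;
while (startOffset < data.Length)
{
    int chunk = Math.Min(chunkSize, data.Length - startOffset);

    byte[] datagram = new byte[PrefixSize + chunk];
    byte[] prefix = new UdpMessagePrefix(data.Length, chunk, messageCount, messageId).ToByteArray();
    Buffer.BlockCopy(prefix, 0, datagram, 0, PrefixSize);
    Buffer.BlockCopy(data, startOffset, datagram, PrefixSize, chunk);

    udpClient.Send(datagram, datagram.Length);

    messageId++;
    startOffset += chunk;
}

Each datagram then maps to exactly one Ethernet frame, so the kernel never has to fragment and reassemble.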
I think you should measure CPU usage and network throughput for the duration of the test.
If the CPU is pegged, this is your problem: Turn on a profiler.
If the network (cable) is pegged this is a different class of problems. I wouldn't know what to do about it ;-)
If neither is pegged, run a profiler and see where most wall-clock time is spent. There must be some waiting going on.
If you don't have a profiler just hit break 10 times in the debugger and see where it stops most often.
Edit: my response to your measurement: we know that 99% of all execution time is spent receiving data, but we don't know yet whether the CPU is busy. Look in Task Manager and see which process is busy.
My guess is that it is the System process. That is the Windows kernel, and probably the UDP component of it.
This might have to do with packet fragmentation. IP packets have a certain maximum size like 1472 bytes. Your UDP packets are being fragmented and reassembled on the receiving machine. I am surprised that is taking so much CPU time.
Try sending packets of total size of 1000 and 1472 (try both!) and report the results.
