Receiving large UDP packet fails behind Linux firewall using C#

I was successfully using this code on my home computer. Please note that I have minified the code in order to show only the important parts:
Socket Sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
// Connect to the server
Sock.Connect(Ip, Port);
// Setup buffers
byte[] bufferSend = ...; // some data prepared before
byte[] bufferRec = new byte[8024];
// Send the command
Sock.Send(bufferSend, SocketFlags.None);
// UNTIL HERE EVERYTHING WORKS FINE
// Receive the answer
int ct = 0;
StringBuilder response = new StringBuilder();
while ((ct = Sock.Receive(bufferRec)) > 0)
{
response.Append(Encoding.Default.GetString(bufferRec, 0, ct));
}
// Print the result
Console.WriteLine("This is the result:\n" + response.ToString());
In another environment (Windows, but behind an Ubuntu firewall) I have problems receiving packets larger than 1472 bytes: an exception is thrown saying that the request timed out.
So basically I have two options:
Either fixing the Ubuntu firewall server (how?)
Adjusting my code (probably better option?)
How would I need to adapt my code in order to split the data into packets of a working size? I thought adjusting the variable bufferRec = new byte[1024] would suffice, but obviously this does not work: I then get an exception that the received packet is larger than bufferRec. Do I have to change the SocketType?
Unfortunately I do not know a lot about sockets, so your explanations would help a lot!

Your packet travels across various networks, and each network probably has its own MTU size. This is the maximum size a single packet can be.
If your data exceeds that size, the DF flag is checked (DF stands for Don't Fragment). If that flag is set, the packet is dropped and an ICMP response is generated.
In C# sockets, this option is controlled by the DontFragment property, which defaults to True, hence your problem.
Note that UDP with fragmentation enabled is to be considered unreliable, since fragments will probably get lost on a busy network.
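If fragmentation is acceptable for your use case, you can clear the flag on the socket before sending; alternatively, keep the datagrams small and split the payload yourself. A minimal sketch reusing the variables from the question (the 1400-byte chunk size is an assumption, and the receiver would have to reassemble the chunks itself):
// Option 1: allow the IP layer to fragment large datagrams
Sock.DontFragment = false;   // clear DF before sending
Sock.Connect(Ip, Port);
Sock.Send(bufferSend, SocketFlags.None);
// Option 2: stay below a typical Ethernet MTU and send the data in chunks
const int chunkSize = 1400;  // conservative guess for the path MTU payload
for (int offset = 0; offset < bufferSend.Length; offset += chunkSize)
{
    int size = Math.Min(chunkSize, bufferSend.Length - offset);
    Sock.Send(bufferSend, offset, size, SocketFlags.None);
}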

Related

TCP sender machine receiver-window-size shrinking to 0 on windows machine after sending tiny amounts of data for a few minutes

I'm writing an app that runs on a Windows laptop and sends TCP bytes in a loop as a "homemade keep-alive" message to keep a connection active (the server machine will disconnect after 15 seconds of no TCP data received). The "server machine" will send the laptop small chunks of data (about 0.5 KB/second) as long as a connection is alive (according to the server documentation, ideally this is an "echo" packet, but I was unable to find how this is accomplished in .NET). My problem is that when I view this data in Wireshark I can see good network activity, then, after a few minutes, the "win" (receive window size available on the laptop) shrinks from 65K to 0 in increments of about 240 bytes per packet. Why is this happening and how can I prevent it? I can't seem to get the "keep-alive" flags in .NET to work, so this was supposed to be my workaround. I do not see any missed ACK messages, and my data rate is about 2 Kb/sec, so I don't understand why the laptop window size is dropping. I definitely assume there is a misconception on my part about TCP and/or Windows/.NET use of TCP sockets, since I have no experience with TCP (I've always used UDP).
TcpClient client = new TcpClient(iPEndpoint);
//Socket s = client.Client; none of these flags actually work on the keep alive feature
//s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
//s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10);
//s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 10);
// Translate the passed message into ASCII and store it as a Byte array.
Byte[] data = System.Text.Encoding.ASCII.GetBytes(message);
IPAddress ipAdd = IPAddress.Parse("192.168.1.10");
IPEndPoint ipEndPoin = new IPEndPoint(ipAdd, 13000);
client.Connect(ipEndPoin);
NetworkStream stream = client.GetStream();
// Send the message to the connected TcpServer.
bool finished = false;
while (!finished)
{
try
{
stream.Write(data, 0, data.Length);
Thread.Sleep(5000);
}
catch (System.IO.IOException ioe)
{
if (ioe.InnerException is System.Net.Sockets.SocketException)
{
client.Dispose();
client = new TcpClient(iPEndpoint);
client.Connect(ipEndPoin);
stream = client.GetStream();
Console.Write("reconnected");
// this immediately fails again after exiting this catch and going back to the while loop, because the send window is still 0
}
}
}
You should really be familiar with RFC 793, Transmission Control Protocol, which is the definition of TCP. It explains the window and how it is used for flow control:
Flow Control:
TCP provides a means for the receiver to govern the amount of data
sent by the sender. This is achieved by returning a "window" with
every ACK indicating a range of acceptable sequence numbers beyond the
last segment successfully received. The window indicates an allowed
number of octets that the sender may transmit before receiving further
permission.
-and-
To govern the flow of data between TCPs, a flow control mechanism is
employed. The receiving TCP reports a "window" to the sending TCP.
This window specifies the number of octets, starting with the
acknowledgment number, that the receiving TCP is currently prepared to
receive.
-and-
Window: 16 bits
The number of data octets beginning with the one indicated in the
acknowledgment field which the sender of this segment is willing to
accept.
The window size is dictated by the receiver of the data in the ACK segments with which it acknowledges receipt of the data. If your laptop's receive window shrinks to 0, it is advertising a zero window because it has no more space to receive; it needs time to process the data and free up space in the receive buffer. When it has more space, it will send an ACK segment with a larger window.
Segment Receive  Test
Length  Window
------- -------  -------------------------------------------
   0       0     SEG.SEQ = RCV.NXT
   0      >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
  >0       0     not acceptable
  >0      >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
              or RCV.NXT =< SEG.SEQ+SEG.LEN-1 < RCV.NXT+RCV.WND
Note that when the receive window is zero no segments should be
acceptable except ACK segments. Thus, it is possible for a TCP to
maintain a zero receive window while transmitting data and receiving
ACKs. However, even when the receive window is zero, a TCP must
process the RST and URG fields of all incoming segments.
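A likely contributor in the posted loop is that the laptop never reads the data the server sends back (about 0.5 KB/s per the question), so the socket's receive buffer gradually fills and the advertised window drops to zero. A minimal sketch of draining the stream between keep-alive writes, reusing the variables from the question (the 4 KB discard buffer is an assumption):
byte[] discard = new byte[4096];
while (!finished)
{
    try
    {
        // drain whatever the server has sent so the receive window stays open
        while (stream.DataAvailable)
        {
            stream.Read(discard, 0, discard.Length);
        }
        stream.Write(data, 0, data.Length);
        Thread.Sleep(5000);
    }
    catch (System.IO.IOException)
    {
        // reconnect logic as in the original code
    }
}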

Unable to receive known reply when broadcasting UDP datagrams over LAN using Sockets or UdpClient

I have searched for 2 days and found many, many questions/answers to what appears to be this same issue, with some differences, however none really seem to provide a solution.
I am implementing a library for controlling a DMX system (ColorKinetics devices) directly without an OEM controller. This involves communicating with an Ethernet-enabled power supply (PDS) connected to my home LAN, through a router, which drives the lighting fixtures. The PDS operates on a specific port (6038) and responds to properly formatted datagrams broadcast over the network.
I can successfully broadcast a simple DMX message (Header + DMX data), which gets picked up by the PDS and applied to connected lighting fixtures, so one-way communication is not an issue.
My issue is that I am now trying to implement a device discovery function to detect the PDS(s) and attached lights on the LAN, and I am not able to receive datagrams which are (absolutely) being sent back from the PDS. I can successfully transmit a datagram which instructs the devices to reply, and I can see the reply coming back in WireShark, but my application does not detect the reply.
I also tried running a simple listener app on another machine, which could detect the initial broadcast but could not hear the return datagram either; however, I figure this wouldn't work anyway, since the return packet is addressed to the original sender's IP address.
I initially tried implementing via UdpClient, then via Sockets, and both produce the same result no matter what options and parameters I seem to specify.
Here is my current, very simple code to test functionality, currently using Sockets.
byte[] datagram = new CkPacket_DiscoverPDSRequestHeader().ToPacket();
Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPEndPoint ep = new IPEndPoint(IPAddress.Parse("192.168.1.149"), 6039);
public Start()
{
// Start listener
new Thread(() =>
{
Receive();
}).Start();
sender.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
sender.EnableBroadcast = true;
// Bind the sender to known local IP and port 6039
sender.Bind(ep);
}
public void Send()
{
// Broadcast the datagram to port 6038
sender.SendTo(datagram, new IPEndPoint(IPAddress.Broadcast, 6038));
}
public void Receive()
{
Socket receiver = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
receiver.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
receiver.EnableBroadcast = true;
// Bind the receiver to known local IP and port 6039 (same as sender)
IPEndPoint EndPt = new IPEndPoint(IPAddress.Parse("192.168.1.149"),6039);
receiver.Bind(EndPt);
// Listen
while (true)
{
byte[] receivedData = new byte[256];
// Get the data
int rec = receiver.Receive(receivedData);
// Write to console the number of bytes received
Console.WriteLine($"Received {rec} bytes");
}
}
The sender and receiver are bound to an IPEndPoint with the local IP and port 6039. I did this because I could see that each time I initialized a new UdpClient, the system would dynamically assign an outgoing port, which the PDS would send data back to. Doing it this way, I can say that the listener is definitely listening on the port which should receive the PDS response (6039). I believe that since I have the option ReuseAddress set to true, this shouldn't be a problem (no exceptions thrown).
Start() creates a new thread to contain the listener, and initializes options on the sending client.
Send() successfully broadcasts the 16-byte datagram which is received by the PDS on port 6038, and generates a reply to port 6039 (Seen in WireShark)
Receive() does not receive the datagram. If I bind the listener to port 6038, it will receive the original 16-byte datagram broadcast.
Here is the WireShark data:
[Wireshark screenshot]
I have looked at using a library like SharpPCap, as many answers have suggested, but there appear to be some compatibility issues in the latest release that I am not smart enough to circumvent, which prevent the basic examples from functioning properly on my system. It also seems like this sort of basic functionality shouldn't require that type of external dependency. I've also seen many other questions/answers where the issue was similar, but it was solved by setting this-or-that parameter for the Socket or UdpClient, of which I have tried every combination to no avail.
I have also enabled access permissions through windows firewall, allowed port usage, and even completely disabled the firewall, to no success. I don't believe the issue would be with my router, since messages are getting to Wireshark.
UPDATE 1
Per suggestions, I believe I put the listener Socket in promiscuous mode as follows:
Socket receiver = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
receiver.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);
receiver.EnableBroadcast = true;
IPEndPoint EndPt = new IPEndPoint(IPAddress.Parse("192.168.1.149"), 0);
receiver.Bind(EndPt);
receiver.IOControl(IOControlCode.ReceiveAll, new byte[] { 1, 0, 0, 0 }, null);
This resulted in the listener receiving all sorts of network traffic, including the outbound requests, but still no incoming reply.
UPDATE 2
As Viet suggested, there is some sort of addressing problem in the request datagram, which is formatted as such:
public class CkPacket_DiscoverPDSRequest : BytePacket
{
public uint magic = 0x0401dc4a;
public ushort version = 0x0100;
public ushort type = 0x0100;
public uint sequence = 0x00000000;
public uint command = 0xffffffff;
}
If I change the command field to my local address 192.168.1.149 or to 192.168.255.255, my listener begins detecting the return packets. I admittedly do not know what this field is supposed to represent, and my original guess was to just put in a broadcast address, since the point of the datagram is to discover all devices on the network. This is obviously not the case, though I am still not sure of its exact purpose.
Either way, thank you for the help, this is progress.
So in actuality it turns out that my issue was with the formatting of the outgoing datagram. The command field needs to be an address on the local subnet (192.168.xxx.xxx), not 255.255.255.255. For whatever reason this was causing the packet to be filtered somewhere before getting to my application, even though Wireshark could still see it. This may be common sense in this type of work, but being relatively ignorant of network programming as well as the specifics of this interface, it wasn't something I had considered.
Making the change allows a simple UdpClient send/receive to function perfectly.
Much thanks to Viet Hoang for helping me find this!
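For reference, a minimal sketch of the simple UdpClient send/receive described above, using the addresses and ports from the question (BuildDiscoveryPacket() is a hypothetical stand-in for CkPacket_DiscoverPDSRequest.ToPacket()):
// bind to the local address and the port the PDS replies to
UdpClient udp = new UdpClient(new IPEndPoint(IPAddress.Parse("192.168.1.149"), 6039));
udp.EnableBroadcast = true;
// broadcast the discovery request to the PDS port
byte[] request = BuildDiscoveryPacket();   // hypothetical stand-in for the poster's ToPacket() helper
udp.Send(request, request.Length, new IPEndPoint(IPAddress.Broadcast, 6038));
// block until a PDS answers on port 6039
IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
byte[] reply = udp.Receive(ref remote);
Console.WriteLine($"Received {reply.Length} bytes from {remote}");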
As you've already noted, you don't need to bind in order to send out a broadcast, but then the socket uses a random source port.
If you adjust your code to not bind the sender, your listener should behave as expected again:
Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
sender.EnableBroadcast = true;
Thread read_thread;
public Start()
{
// Start listener
read_thread = new Thread(Receive);
read_thread.Start();
}
The issue you've bumped into is that the operating system kernel only delivers each packet to one of the sockets bound to the port (on a first come, first served basis).
If you want true parallel read access, you'll need to look into a sniffing example such as: https://stackoverflow.com/a/12437794/8408335.
Since you are only looking to source the broadcast from the same ip/port, you simply need to let the receiver bind first.
If you add in a short sleep after kicking off the receive thread, and before binding the sender, you will be able to see the expected results.
public Start()
{
// Start listener
new Thread(() =>
{
Receive();
}).Start();
Thread.Sleep(100);
sender.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
sender.EnableBroadcast = true;
// Bind the sender to known local IP and port 6039
sender.Bind(ep);
}
Extra note: You can quickly test your UDP sockets from a Linux box using netcat:
# echo "hello" | nc -q -1 -u 192.168.1.149 6039 -
- Edit -
Problem Part #2
The source address of "255.255.255.255" is invalid.
Did a quick test with two identical packets altering the source ip:
https://i.stack.imgur.com/BvWIa.jpg
Only the one with the valid source IP was printed to the console.
Received 26 bytes
Received 26 bytes

Why isn't the udp packet received? (Can be seen in Wireshark)

I have a problem with receiving a given udp packet in my C# program.
The code in question is as follows:
//create transport
m_conn = new UdpClient(new IPEndPoint(m_interface_ip, 0));
m_conn.DontFragment = true;
m_conn.EnableBroadcast = true;
m_conn.Connect(new IPEndPoint(IPAddress.Parse(destination_ip), m_port));
m_conn.BeginReceive(new AsyncCallback(OnDataReceived), m_conn);
//create packet
//...
//send
m_conn.Send(buffer, length);
The OnDataReceived function is just an empty function with a breakpoint.
I've disabled my firewall and added the program to the 'allowed list'.
The 'request' and 'response' can be seen in Wireshark. See attached pcap file. The packets seem to be valid. The device is a certified Profinet device (in case it makes a difference).
And yet I can't seem to suck up the 'response' into my OnDataReceived function.
Am I missing something fundamental? Is there something unusual with the udp/ip header perhaps?
Attached Wireshark output
I've found the answer. It seems that the device is responding to the request from a different source port. This means that the Windows drivers (and Wireshark) mark the messages as unrelated. (Not responding from the same socket/port is a rather foolish thing to do, and I'm guessing it's an error.)
However, it can be fixed. Instead of the 'unicast' socket above, one has to create a server-style socket, like so:
//create transport
IPEndPoint local = new IPEndPoint(m_interface_ip, 0); //bind to any free port
m_conn = new UdpClient();
m_conn.Client.Bind(local);
m_conn.EnableBroadcast = true;
m_conn.BeginReceive(new AsyncCallback(OnDataReceived), m_conn);
//create packet...
//...
//send
m_conn.Send(buffer, length, new IPEndPoint(m_destination_ip, m_port));
This will annoy the firewall, but it seems to work.
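Since the question mentions that OnDataReceived is just an empty function with a breakpoint, here is a minimal sketch of what the callback could look like (the processing is omitted, and re-arming BeginReceive at the end is an assumption about the desired behaviour):
private void OnDataReceived(IAsyncResult ar)
{
    UdpClient client = (UdpClient)ar.AsyncState;
    IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    // complete the pending receive and get the payload
    byte[] data = client.EndReceive(ar, ref remote);
    Console.WriteLine($"Received {data.Length} bytes from {remote}");
    // queue the next receive so further responses are picked up
    client.BeginReceive(OnDataReceived, client);
}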

Weird tcp connection scenario

I am using TCP as a mechanism for keep-alive. Here is my code:
Client
TcpClient keepAliveTcpClient = new TcpClient();
keepAliveTcpClient.Connect(HostId, tcpPort);
//this 'read' is supposed to block until a legal disconnect is requested
//or until the server unexpectedly disappears
int numberOfByptes = keepAliveTcpClient.GetStream().Read(new byte[10], 0, 10);
//more client code...
Server
TcpListener _tcpListener = new TcpListener(IPAddress.Any, 1000);
_tcpListener.Start();
_tcpClient = _tcpListener.AcceptTcpClient();
Tracer.Write(Tracer.TraceLevel.INFO, "get a client");
buffer = new byte[10];
numOfBytes = _tcpClient.GetStream().Read(buffer, 0, buffer.Length);
if(numOfBytes==0)
{
//shouldn't reach here unless the connection is closed...
}
I have included only the relevant code... Now, what happens is that the client code blocks on Read as expected, but the server Read returns immediately with numOfBytes equal to 0; even if I retry the Read on the server, it returns immediately... yet the client Read is still blocked! So on the server side I mistakenly think that the client has disconnected, while the client thinks it is connected to the server... Can someone tell me how this is possible, or what is wrong with my mechanism?
Edit: After a failure I wrote to the log these properties:
_tcpClient: _tcpClient.Connected=true
Socket: (_tcpClient.Client properties)
_tcpClient.Client.Available=0
_tcpClient.Client.Blocking=true
_tcpClient.Client.Connected=true
_tcpClient.Client.IsBound=true
Stream details
_tcpClient.GetStream().DataAvailable=false;
Even when correctly implemented, this approach will only detect some remote server failures. Consider the case where the intervening network partitions the two machines. Then, only when the underlying TCP stack sends a transport-level keep-alive will the system detect the failure. Keepalive is a good description of the problem, and the question 'Does a TCP socket connection have a "keep alive"?' is a companion question. The RFC indicates the functionality is optional.
The only certain way to reliably confirm that the other party is still alive is to occasionally send actual data between the two endpoints. This will result in TCP promptly detecting the failure and reporting it back to the application.
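If you do want the Windows TCP stack itself to send transport-level keep-alive probes, as mentioned above, you can configure them per socket via SIO_KEEPALIVE_VALS. A minimal sketch using the client from the question (the 5-second idle time and 1-second probe interval are arbitrary values for illustration):
// struct tcp_keepalive: on/off flag, idle time (ms), probe interval (ms)
byte[] keepAliveValues = new byte[12];
BitConverter.GetBytes((uint)1).CopyTo(keepAliveValues, 0);      // enable keep-alive
BitConverter.GetBytes((uint)5000).CopyTo(keepAliveValues, 4);   // idle time before the first probe
BitConverter.GetBytes((uint)1000).CopyTo(keepAliveValues, 8);   // interval between probes
keepAliveTcpClient.Client.IOControl(IOControlCode.KeepAliveValues, keepAliveValues, null);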
Maybe something that will give a clue: it happens only when 10 or more clients
connect to the server at the same time (the server listens on 10 or more ports).
If you're writing this code on Windows 7/8, you may be running into a connection limit issue. Microsoft's license allows 20 concurrent connections, but the wording is very specific:
[Start->Run->winver, click "Microsoft Software License Terms"]
3e. Device Connections. You may allow up to 20 other devices to access software installed on the licensed computer to use only File Services, Print Services, Internet Information Services and Internet Connection Sharing and Telephony Services.
Since what you're doing isn't file, print, IIS, ICS, or telephony, it's possible that the previous connection limit of 10 from XP/Vista is still enforced in these circumstances. Set a limit of concurrent connections to 9 in your code temporarily, and see if it keeps happening.
The way I am interpreting the MSDN remarks, it seems that behavior is expected: if there is no data, the Read method returns.
With that in mind, I think what I would try is to send data at a specified interval, like some of the previous suggestions, along with a "timeout" of some sort. If you don't see the "ping" within your designated interval, you could fail the keepalive. With TCP you have to keep in mind that there is no requirement to deem a connection "broken" just because you aren't seeing data. You could completely unplug the network cables and the connection would still be considered good up until the point that you send some data. Once you send data you'll see one of two behaviors: either you'll never see a response (listening machine was shut down?) or you'll get an "ack-reset" (listening machine is no longer listening on that particular socket).
https://msdn.microsoft.com/en-us/library/vstudio/system.net.sockets.networkstream.read(v=vs.100).aspx
Remarks:
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
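One way to combine the periodic ping with a timeout, as suggested above, is to set a read timeout on the stream and treat an expired timer as a dead connection. A minimal sketch using the client from the question (it assumes the other end echoes something back; the 1-byte ping and the 5-second timeout are assumptions):
NetworkStream stream = keepAliveTcpClient.GetStream();
stream.ReadTimeout = 5000;                 // fail the read if nothing arrives within 5 s
byte[] ping = { 0x00 };                    // application-level ping byte
byte[] buffer = new byte[10];
try
{
    stream.Write(ping, 0, ping.Length);    // send the ping
    int read = stream.Read(buffer, 0, buffer.Length);
    bool isAlive = read > 0;               // 0 means the peer closed the connection
}
catch (System.IO.IOException)
{
    // timeout or reset: treat the connection as broken and reconnect
}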
As far as I can see, you are reading data on both sides, server and client. You need to write some data from the server to the client to ensure that your client has something to read. You can find a small test program below (the Task stuff is just to run the server and client in the same program).
class Program
{
private static Task _tcpServerTask;
private const int ServerPort = 1000;
static void Main(string[] args)
{
StartTcpServer();
KeepAlive();
Console.ReadKey();
}
private static void StartTcpServer()
{
_tcpServerTask = new Task(() =>
{
var tcpListener = new TcpListener(IPAddress.Any, ServerPort);
tcpListener.Start();
var tcpClient = tcpListener.AcceptTcpClient();
Console.WriteLine("Server got client ...");
using (var stream = tcpClient.GetStream())
{
const string message = "Stay alive!!!";
var arrayMessage = Encoding.UTF8.GetBytes(message);
stream.Write(arrayMessage, 0, arrayMessage.Length);
}
tcpListener.Stop();
});
_tcpServerTask.Start();
}
private static void KeepAlive()
{
var tcpClient = new TcpClient();
tcpClient.Connect("127.0.0.1", ServerPort);
using (var stream = tcpClient.GetStream())
{
var buffer = new byte[16];
while (stream.Read(buffer, 0, buffer.Length) != 0)
Console.WriteLine("Client received: {0} ", Encoding.UTF8.GetString(buffer));
}
}
}

C# UDP send/sendAsync too slow

I'm trying to send many small (~350 byte) UDP messages to different remote hosts, one packet for each remote host. I'm using a background thread to listen for responses:
private void ReceivePackets()
{
while (true)
{
try
{
receiveFrom = new IPEndPoint(IPAddress.Any, localPort);
byte[] data = udpClientReceive.Receive(ref receiveFrom);
// Decode message & short calculation
}
catch (Exception ex)
{
Log("Error receiving data: " + ex.ToString());
}
}
}
and a main thread for sending messages using
udpClientSend.SendAsync(send_buffer, send_buffer.Length, destinationIPEP);
Both UdpClient udpClientReceive and UdpClient udpClientSend are bound to the same port.
The problem is that SendAsync() takes around 15 ms to complete and I need to send a few thousand packets per second. What I already tried is using udpClientSend.Send(send_buffer, send_buffer.Length, destination); which is just as slow. I also set both receive/send buffers higher, and I tried setting udpClientSend.Client.SendTimeout = 1; which has no effect. I suspect it might have to do with the remote host changing for every single packet? If that is the case, will using many UdpClients in separate threads make things faster?
Thanks for any help!
Notes:
Network bandwidth is not the problem and I need to use UDP not TCP.
I've seen similar questions on this website but none have a satisfying answer.
Edit
There is only one thread for sending, it runs a simple loop in which udpClientSend.SendAsync() is called.
I'm querying nodes in the DHT (the BitTorrent distributed hash table), so multicasting is not an option (?) - every host only gets one packet.
Replacing the UdpClient class with the Socket class and using SendToAsync() does not speed things up (or only insignificantly).
I have narrowed down the problem: changing the remote host address to some fixed IP & port increases throughput to over 3000 packets/s. Thus changing the destination address too often seems to be the bottleneck.
I'm thinking my problem might be related to UDP "Connect" speed in C#, and that UdpClient.Connect() is slowing down the code. If so, is there a fix for this? Is it a language or an OS problem?
Why are you using UdpClient.Connect before sending? It's an optional step that sets a default remote host. It looks like you want to send to multiple destinations, so you can set the remote host each time you call the Send method.
If you decide to remove the Connect method you may see an improvement, but it still looks like you have a bottleneck somewhere else in your code. Isolate the Send calls and measure time then start adding more steps until you find what's slowing you down.
var port = 4242;
var ipEndPoint1 = new IPEndPoint(IPAddress.Parse("192.168.56.101"), port);
var ipEndPoint2 = new IPEndPoint(IPAddress.Parse("192.168.56.102"), port);
var buff = new byte[350];
var client = new UdpClient();
int count = 0;
var stopWatch = new Stopwatch();
stopWatch.Start();
while (count < 3000)
{
IPEndPoint endpoint = ipEndPoint1;
if ((count % 2) == 0)
endpoint = ipEndPoint2;
client.Send(buff, buff.Length, endpoint);
count++;
}
stopWatch.Stop();
Console.WriteLine("RunTime " + stopWatch.Elapsed.TotalMilliseconds.ToString());
