C# UDP Send/SendAsync too slow

I'm trying to send many small (~350 byte) UDP messages to different remote hosts, one packet for each remote host. I'm using a background thread to listen for responses
private void ReceivePackets()
{
    while (true)
    {
        try
        {
            receiveFrom = new IPEndPoint(IPAddress.Any, localPort);
            byte[] data = udpClientReceive.Receive(ref receiveFrom);
            // Decode message & short calculation
        }
        catch (Exception ex)
        {
            Log("Error receiving data: " + ex.ToString());
        }
    }
}
and a main thread for sending messages using
udpClientSend.SendAsync(send_buffer, send_buffer.Length, destinationIPEP);
Both UdpClient udpClientReceive and UdpClient udpClientSend are bound to the same port.
The problem is that SendAsync() takes around 15 ms to complete, and I need to send a few thousand packets per second. I already tried udpClientSend.Send(send_buffer, send_buffer.Length, destination); which is just as slow. I also increased both the receive and send buffers, and I tried setting udpClientSend.Client.SendTimeout = 1; which has no effect. I suspect it might have to do with the remote host changing for every single packet. If that is the case, would using many UdpClients in separate threads make things faster?
Thanks for any help!
Notes:
Network bandwidth is not the problem and I need to use UDP not TCP.
I've seen similar questions on this website but none have a satisfying answer.
Edit
There is only one thread for sending, it runs a simple loop in which udpClientSend.SendAsync() is called.
I'm querying nodes in the DHT (the BitTorrent distributed hash table), so multicasting is not an option (?) - every host only gets one packet.
Replacing the UdpClient class with the Socket class and using AsyncSendTo() does not speed things up (or only insignificantly).
I have narrowed down the problem: Changing the remote host address to some fixed IP & port increases throughput to over 3000 packets/s. Thus changing the destination address too often seems to be the bottleneck.
I'm thinking my problem might be related to UDP "Connect"-Speed in C# and that UdpClient.Connect() is slowing down the code. If so, is there a fix for this? Is it a language or an OS problem?

Why are you using UdpClient.Connect() before sending? It's an optional step, used only if you want to set a default remote host. It looks like you want to send to multiple destinations, so you can set the remote host each time you call the Send method.
If you remove the Connect call you may see an improvement, but it still looks like you have a bottleneck somewhere else in your code. Isolate the Send calls and measure the time, then start adding more steps until you find what's slowing you down.
var port = 4242;
var ipEndPoint1 = new IPEndPoint(IPAddress.Parse("192.168.56.101"), port);
var ipEndPoint2 = new IPEndPoint(IPAddress.Parse("192.168.56.102"), port);
var buff = new byte[350];
var client = new UdpClient();
int count = 0;
var stopWatch = new Stopwatch();
stopWatch.Start();
while (count < 3000)
{
    IPEndPoint endpoint = ipEndPoint1;
    if ((count % 2) == 0)
        endpoint = ipEndPoint2;
    client.Send(buff, buff.Length, endpoint);
    count++;
}
stopWatch.Stop();
Console.WriteLine("RunTime " + stopWatch.Elapsed.TotalMilliseconds.ToString());

Related

How to keep alive the tcp listener to avoid error - unable to write data to transport connection?

I want to keep the TCP listener alive to avoid connection loss. How can I make the TCP listener stay alive for 12 hours?
Code below:
TcpListener tcp = new TcpListener(port);
tcp.Start();
while (true)
{
    Console.Write("Waiting for a connection... ");
    TcpClient client = tcp.AcceptTcpClient();
    client.ReceiveTimeout = 5000000;
    // Gets the receive time out using the ReceiveTimeout public property.
    if (client.ReceiveTimeout > 5000000)
        Console.WriteLine("The receive time out limit was successfully set " + client.ReceiveTimeout.ToString());
    Console.WriteLine("Connected!");
    StreamReader sr = new StreamReader(client.GetStream());
    StreamWriter sw = new StreamWriter(client.GetStream());
    try
    {
        String request = sr.ReadLine();
        Console.WriteLine(request);
        string strValue = request;
    }
    catch (IOException ex)
    {
        Console.WriteLine("Read failed: " + ex.Message);
    }
}
Ultimately, you're writing to a socket that turns out to be dead. This is normal and expected in anything socket-related; in many cases the only way to properly detect that a socket is dead is to try reading from or writing to it. So you should expect any socket operation to fail, and handle that failure. As for keeping a socket alive, the best way is to periodically (say, every five minutes, or every minute, or whatever interval you want) send a small dummy message in each direction (a minimal heartbeat sketch follows below). This achieves two goals:
it can help the OS and other software realise that the socket is still in use, and not tear it down
if the socket does fail for any reason, you can detect it by tracking when the last message was seen; if a couple of heartbeats have been missed, assume the worst
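To illustrate the heartbeat idea, here is a minimal sketch. It assumes a one-byte ping is acceptable to the protocol on both ends, and the one-minute interval is an arbitrary choice; any write failure, or roughly two missed incoming messages, is treated as a dead connection.
using System;
using System.Net.Sockets;
using System.Threading;

class Heartbeat
{
    private readonly NetworkStream _stream;
    private DateTime _lastSeen = DateTime.UtcNow;                  // updated whenever any data arrives
    private readonly TimeSpan _interval = TimeSpan.FromMinutes(1); // arbitrary choice

    public Heartbeat(NetworkStream stream) => _stream = stream;

    // Call this from the code that reads incoming data, including incoming pings.
    public void MarkSeen() => _lastSeen = DateTime.UtcNow;

    // Returns false once the connection should be considered dead.
    public bool SendPing()
    {
        try
        {
            _stream.Write(new byte[] { 0x00 }, 0, 1);   // dummy message
        }
        catch (Exception)
        {
            return false;   // write failed: socket is dead
        }
        // No traffic for roughly two intervals: assume the worst.
        return DateTime.UtcNow - _lastSeen < _interval + _interval;
    }

    public void Run()
    {
        while (SendPing())
            Thread.Sleep(_interval);
    }
}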

Unable to receive known reply when broadcasting UDP datagrams over LAN using Sockets or UdpClient

I have searched for 2 days and found many, many questions/answers to what appears to be this same issue, with some differences, however none really seem to provide a solution.
I am implementing a library for controlling a DMX system (ColorKinetics devices) directly without an OEM controller. This involves communicating with an Ethernet-enabled power supply (PDS) connected to my home LAN, through a router, which drives the lighting fixtures. The PDS operates on a specific port (6038) and responds to properly formatted datagrams broadcast over the network.
I can successfully broadcast a simple DMX message (Header + DMX data), which gets picked up by the PDS and applied to connected lighting fixtures, so one-way communication is not an issue.
My issue is that I am now trying to implement a device discovery function to detect the PDS(s) and attached lights on the LAN, and I am not able to receive datagrams which are (absolutely) being sent back from the PDS. I can successfully transmit a datagram which instructs the devices to reply, and I can see the reply coming back in WireShark, but my application does not detect the reply.
I also tried running a simple listener app on another machine, which could detect the initial broadcast but could not hear the return datagram either; however, I figured this wouldn't work anyway, since the return packet is addressed to the original sender's IP address.
I initially tried implementing via UdpClient, then via Sockets, and both produce the same result no matter what options and parameters I seem to specify.
Here is my current, very simple code to test functionality, currently using Sockets.
byte[] datagram = new CkPacket_DiscoverPDSRequestHeader().ToPacket();
Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPEndPoint ep = new IPEndPoint(IPAddress.Parse("192.168.1.149"), 6039);

public Start()
{
    // Start listener
    new Thread(() =>
    {
        Receive();
    }).Start();
    sender.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    sender.EnableBroadcast = true;
    // Bind the sender to known local IP and port 6039
    sender.Bind(ep);
}

public void Send()
{
    // Broadcast the datagram to port 6038
    sender.SendTo(datagram, new IPEndPoint(IPAddress.Broadcast, 6038));
}

public void Receive()
{
    Socket receiver = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    receiver.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    receiver.EnableBroadcast = true;
    // Bind the receiver to known local IP and port 6039 (same as sender)
    IPEndPoint EndPt = new IPEndPoint(IPAddress.Parse("192.168.1.149"), 6039);
    receiver.Bind(EndPt);
    // Listen
    while (true)
    {
        byte[] receivedData = new byte[256];
        // Get the data
        int rec = receiver.Receive(receivedData);
        // Write to console the number of bytes received
        Console.WriteLine($"Received {rec} bytes");
    }
}
The sender and receiver are bound to an IPEndPoint with the local IP and port 6039. I did this because I could see that each time I initialized a new UdpClient, the system would dynamically assign an outgoing port, which the PDS would send data back to. Doing it this way, I can say that the listener is definitely listening on the port which should receive the PDS response (6039). I believe that since I have the option ReuseAddress set to true, this shouldn't be a problem (no exceptions thrown).
Start() creates a new thread to contain the listener, and initializes options on the sending client.
Send() successfully broadcasts the 16-byte datagram which is received by the PDS on port 6038, and generates a reply to port 6039 (Seen in WireShark)
Receive() does not receive the datagram. If I bind the listener to port 6038, it will receive the original 16-byte datagram broadcast.
Here is the WireShark data:
[Wireshark capture screenshot]
I have looked at using a library like SharpPCap, as many answers have suggested, but there appear to be some compatibility issues in the latest release that I am not smart enough to circumvent, which prevent the basic examples from functioning properly on my system. It also seems like this sort of basic functionality shouldn't require that type of external dependency. I've also seen many other questions/answers where the issue was similar, but it was solved by setting this-or-that parameter for the Socket or UdpClient, of which I have tried every combination to no avail.
I have also enabled access permissions through windows firewall, allowed port usage, and even completely disabled the firewall, to no success. I don't believe the issue would be with my router, since messages are getting to Wireshark.
UPDATE 1
Per suggestions, I believe I put the listener Socket in promiscuous mode as follows:
Socket receiver = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
receiver.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);
receiver.EnableBroadcast = true;
IPEndPoint EndPt = new IPEndPoint(IPAddress.Parse("192.168.1.149"), 0);
receiver.Bind(EndPt);
receiver.IOControl(IOControlCode.ReceiveAll, new byte[] { 1, 0, 0, 0 }, null);
This resulted in the listener receiving all sorts of network traffic, including the outbound requests, but still no incoming reply.
UPDATE 2
As Viet suggested, there is some sort of addressing problem in the request datagram, which is formatted as such:
public class CkPacket_DiscoverPDSRequest : BytePacket
{
    public uint magic = 0x0401dc4a;
    public ushort version = 0x0100;
    public ushort type = 0x0100;
    public uint sequence = 0x00000000;
    public uint command = 0xffffffff;
}
If I change the command field to my broadcast address 192.168.1.149 or 192.168.255.255, my listener begins detecting the return packets. I admittedly do not know what this field is supposed to represent; my original guess was to just put in a broadcast address, since the point of the datagram is to discover all devices on the network. That is obviously not the case, though I am still not sure of its exact purpose.
Either way, thank you for the help, this is progress.
So in actuality, it turns out that my issue was with the formatting of the outgoing datagram. The command field needs to be an address on the local subnet (192.168.xxx.xxx) and not 255.255.255.255; for whatever reason the broadcast value was causing the packet to be filtered somewhere before it reached my application, even though Wireshark could still see it. This may be common sense in this type of work, but being relatively ignorant of network programming, as well as of the specifics of this interface, it wasn't something I had considered.
Making the change allows a simple UdpClient send/receive to function perfectly.
Much thanks to Viet Hoang for helping me find this!
As you've already noted, you don't need to bind in order to send out a broadcast, but an unbound sender uses a random source port.
If you adjust your code to not bind the sender, your listener should behave as expected again:
Socket sender = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
sender.EnableBroadcast = true;
Thread read_thread;

public Start()
{
    // Start listener
    read_thread = new Thread(Receive);
    read_thread.Start();
}
The issue you've bumped into is that the operating system kernel only delivers packets to one of the bound sockets (on a first come, first served basis).
If you want true parallel read access, you'll need to look into a sniffing example such as: https://stackoverflow.com/a/12437794/8408335.
Since you are only looking to source the broadcast from the same IP/port, you simply need to let the receiver bind first.
If you add in a short sleep after kicking off the receive thread, and before binding the sender, you will be able to see the expected results.
public Start()
{
    // Start listener
    new Thread(() =>
    {
        Receive();
    }).Start();
    Thread.Sleep(100);
    sender.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    sender.EnableBroadcast = true;
    // Bind the sender to known local IP and port 6039
    sender.Bind(ep);
}
Extra note: You can quickly test your UDP sockets from a Linux box using netcat:
# echo "hello" | nc -q -1 -u 192.168.1.149 6039 -
- Edit -
Problem Part #2
The source address of "255.255.255.255" is invalid.
Did a quick test with two identical packets altering the source ip:
https://i.stack.imgur.com/BvWIa.jpg
Only the one with the valid source IP was printed to the console.
Received 26 bytes
Received 26 bytes

UdpClient beginreceive how to detect when server is off

I have an application that reads data from a UDP server on 32 different ports, which I need to process. I'm using UdpClient.BeginReceive with a callback that re-registers itself, because I want to listen all the time:
private void ProcessEndpointData(IAsyncResult result)
{
    UdpClient client = result.AsyncState as UdpClient;
    // points towards whoever had sent the message:
    IPEndPoint source = new IPEndPoint(0, 0);
    // schedule the next receive operation once reading is done:
    client.BeginReceive(new AsyncCallback(this.ProcessEndpointData), client);
    // get the actual message and fill out the source:
    this.DecodeDatagram(new DatagrammeAscb()
    {
        Datagramme = this.ByteArrayToStructure<Datagram>(client.EndReceive(result, ref source))
    });
}
When I stop the server side, the function is waiting for data (that is normal behavior). What I would like to do is to detect when the server is disconnected and then close all my clients.
I'm wondering whether I should use the Socket class to have more control, or maybe I'm just missing something here.
Anyway thanks for your help.
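Since UDP has no connection state, nothing in UdpClient itself signals that the server has gone away; a common approach is a watchdog that records when the last datagram arrived and closes the clients after a period of silence. Here is a minimal sketch of that idea: the 10-second timeout is an arbitrary choice, and the CloseAllClients callback is a hypothetical placeholder for whatever cleanup the application does.
using System;
using System.Threading;

class UdpWatchdog
{
    private long _lastReceivedTicks = DateTime.UtcNow.Ticks;
    private readonly TimeSpan _timeout = TimeSpan.FromSeconds(10);   // arbitrary choice
    private readonly Action _closeAllClients;                        // hypothetical cleanup callback
    private Timer _timer;

    public UdpWatchdog(Action closeAllClients) => _closeAllClients = closeAllClients;

    // Call this from the BeginReceive callback whenever a datagram arrives.
    public void MarkReceived() =>
        Interlocked.Exchange(ref _lastReceivedTicks, DateTime.UtcNow.Ticks);

    public void Start() =>
        _timer = new Timer(Check, null, _timeout, _timeout);

    private void Check(object state)
    {
        var last = new DateTime(Interlocked.Read(ref _lastReceivedTicks), DateTimeKind.Utc);
        if (DateTime.UtcNow - last > _timeout)
        {
            // No data on any port for the whole timeout window: assume the server is gone.
            _timer.Dispose();
            _closeAllClients();
        }
    }
}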

Receiving large UDP packet fails behind Linux firewall using C#

I was successfully using this code on my home computer. Please note that I have minified the code in order to only show the important parts:
Socket Sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
// Connect to the server
Sock.Connect(Ip, Port);
// Setup buffers
byte[] bufferSend = ...; // some data prepared before
byte[] bufferRec = new byte[8024];
// Send the command
Sock.Send(bufferSend, SocketFlags.None);
// UNTIL HERE EVERYTHING WORKS FINE
// Receive the answer
int ct = 0;
StringBuilder response = new StringBuilder();
while ((ct = Sock.Receive(bufferRec)) > 0)
{
    response.Append(Encoding.Default.GetString(bufferRec, 0, ct));
}
// Print the result
Console.WriteLine("This is the result:\n" + response.ToString());
In another environment (Windows but behind an Ubuntu firewall) I have problems receiving packets with over 1472 bytes: An exception is thrown that the request timed out.
So basically I have two options:
Either fixing the Ubuntu firewall server (how?)
Adjusting my code (probably better option?)
How would I need to adapt my code in order to split packets into a workable size? I thought adjusting the variable bufferRec = new byte[1024] would suffice, but obviously this does not work; I then get an exception that the received packet is larger than bufferRec. Do I have to change the SocketType?
Unfortunately I do not know a lot about sockets, so your explanations would help a lot!
Your packet is traveling various networks. Each network probably has its own MTU size. This is the maximum size a single packet can be.
If your data exceeds that size, the DF flag is checked (DF stands for Don't Fragment). If that flag is set, the packet is dropped and an ICMP response is generated.
In C# sockets, this option is controlled by the DontFragment property, which defaults to True, hence your problem.
Note that UDP with fragmentation enabled should be considered unreliable, since fragments will probably get lost on a busy network.
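As a rough illustration of the two options above, the sketch below shows clearing the DontFragment flag so the IP layer may fragment oversized datagrams, and, as the usually safer alternative, keeping each datagram under a conservative payload size. The 1400-byte chunk size is an arbitrary assumption (it stays below the usual 1500-byte Ethernet MTU once IP and UDP headers are added), and chunking means the receiving side has to reassemble the pieces itself.
using System;
using System.Net.Sockets;

static class UdpSizeOptions
{
    // Option 1: let the IP layer fragment datagrams that exceed the path MTU.
    public static void AllowFragmentation(Socket sock)
    {
        sock.DontFragment = false;   // clear the DF flag on outgoing packets
    }

    // Option 2: split application data into UDP payloads small enough to avoid fragmentation.
    // The receiver must reassemble the chunks in application code.
    public static void SendChunked(Socket sock, byte[] data, int chunkSize = 1400)
    {
        for (int offset = 0; offset < data.Length; offset += chunkSize)
        {
            int length = Math.Min(chunkSize, data.Length - offset);
            sock.Send(data, offset, length, SocketFlags.None);   // socket is already connected
        }
    }
}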

Weird tcp connection scenario

I am using TCP as a keep-alive mechanism. Here is my code:
Client
TcpClient keepAliveTcpClient = new TcpClient();
keepAliveTcpClient.Connect(HostId, tcpPort);
//this 'read' is supposed to block until a legal disconnect is requested
//or until the server unexpectedly disappears
int numberOfByptes = keepAliveTcpClient.GetStream().Read(new byte[10], 0, 10);
//more client code...
Server
TcpListener _tcpListener = new TcpListener(IPAddress.Any, 1000);
_tcpListener.Start();
_tcpClient = _tcpListener.AcceptTcpClient();
Tracer.Write(Tracer.TraceLevel.INFO, "get a client");
buffer = new byte[10];
numOfBytes = _tcpClient.GetStream().Read(buffer, 0, buffer.Length);
if (numOfBytes == 0)
{
    // shouldn't reach here unless the connection is closed...
}
I put only the relevant code... Now what happens is that the client's Read blocks as expected, but the server's Read returns immediately with numOfBytes equal to 0. Even if I retry the Read on the server, it returns immediately, yet the client's Read is still blocked. So on the server side I mistakenly conclude that the client has disconnected, while the client thinks it is still connected to the server. Can someone tell me how this is possible, or what is wrong with my mechanism?
Edit: After a failure I wrote to the log these properties:
_tcpClient: _tcpClient.Connected=true
Socket: (_tcpClient.Client properties)
_tcpClient.Client.Available=0
_tcpClient.Client.Blocking=true
_tcpClient.Client.Connected=true
_tcpClient.Client.IsBound=true
Stream details
_tcpClient.GetStream().DataAvailable=false;
Even when correctly implemented, this approach will only detect some remote server failures. Consider the case where the intervening network partitions the two machines. Then, only when the underlying TCP stack sends a transport-level keep-alive will the system detect the failure. Keepalive is a good description of the problem, and "Does a TCP socket connection have a "keep alive"?" is a companion question. The RFC indicates the functionality is optional.
The only certain way to reliably confirm that the other party is still alive is to occasionally send actual data between the two endpoints. This will result in TCP promptly detecting the failure and reporting it back to the application.
Maybe something that will give a clue: it happens only when 10 or more clients connect to the server at the same time (the server listens on 10 or more ports).
If you're writing this code on Windows 7/8, you may be running into a connection limit issue. Microsoft's license allows 20 concurrent connections, but the wording is very specific:
[Start->Run->winver, click "Microsoft Software License Terms"]
3e. Device Connections. You may allow up to 20 other devices to access software installed on the licensed computer to use only File Services, Print Services, Internet Information Services and Internet Connection Sharing and Telephony Services.
Since what you're doing isn't file, print, IIS, ICS, or telephony, it's possible that the previous connection limit of 10 from XP/Vista is still enforced in these circumstances. Set a limit of concurrent connections to 9 in your code temporarily, and see if it keeps happening.
The way I am interpreting the MSDN remarks, that behavior is expected: if no data is available, the Read method returns.
With that in mind, I would try sending data at a specified interval, as some of the previous suggestions describe, along with a "timeout" of some sort. If you don't see the "ping" within your designated interval, you can fail the keepalive. With TCP, keep in mind that there is no requirement to deem a connection "broken" just because you aren't seeing data. You could completely unplug the network cables and the connection would still be considered good, right up until the point you send some data. Once you send data you'll see one of two behaviors: either you'll never see a response (listening machine was shut down?) or you'll get an "ack-reset" (listening machine is no longer listening on that particular socket).
https://msdn.microsoft.com/en-us/library/vstudio/system.net.sockets.networkstream.read(v=vs.100).aspx
Remarks:
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
As I can see, you are reading data on both sides, server and client. You need to write some data from the server to the client to ensure that your client has something to read. You can find a small test program below (the Task stuff is just to run the server and client in the same program).
class Program
{
    private static Task _tcpServerTask;
    private const int ServerPort = 1000;

    static void Main(string[] args)
    {
        StartTcpServer();
        KeepAlive();
        Console.ReadKey();
    }

    private static void StartTcpServer()
    {
        _tcpServerTask = new Task(() =>
        {
            var tcpListener = new TcpListener(IPAddress.Any, ServerPort);
            tcpListener.Start();
            var tcpClient = tcpListener.AcceptTcpClient();
            Console.WriteLine("Server got client ...");
            using (var stream = tcpClient.GetStream())
            {
                const string message = "Stay alive!!!";
                var arrayMessage = Encoding.UTF8.GetBytes(message);
                stream.Write(arrayMessage, 0, arrayMessage.Length);
            }
            tcpListener.Stop();
        });
        _tcpServerTask.Start();
    }

    private static void KeepAlive()
    {
        var tcpClient = new TcpClient();
        tcpClient.Connect("127.0.0.1", ServerPort);
        using (var stream = tcpClient.GetStream())
        {
            var buffer = new byte[16];
            while (stream.Read(buffer, 0, buffer.Length) != 0)
                Console.WriteLine("Client received: {0} ", Encoding.UTF8.GetString(buffer));
        }
    }
}
