Waiting for networking C# console application to fully start - c#

I have run into an issue with slow C# start-up time causing UDP packets to drop initially. Below is what I have done to mitigate this start-up delay: I essentially wait an additional 10ms between the first two packet transmissions. This fixes the initial drops, at least on my machine. My concern is that a slower machine may need a longer delay than this.
private void FlushPacketsToNetwork()
{
    MemoryStream packetStream = new MemoryStream();
    while (packetQ.Count != 0)
    {
        byte[] packetBytes = packetQ.Dequeue().ToArray();
        packetStream.Write(packetBytes, 0, packetBytes.Length);
    }
    byte[] txArray = packetStream.ToArray();
    udpSocket.Send(txArray);
    txCount++;
    ExecuteStartupDelay();
}
// socket takes too long to transmit unless I give it some time to "warm up"
private void ExecuteStartupDelay()
{
    if (txCount < 3)
    {
        timer.SpinWait(10e-3);
    }
}
So, I am wondering: is there a better approach to let C# fully load all of its dependencies? I really don't mind if it takes several seconds to load completely; I just do not want to do any high-bandwidth transmissions until C# is ready for full speed.
Additional relevant details
This is a console application, the network transmission is run from a separate thread, and the main thread just waits for a key press to terminate the network transmitter.
In the Program.Main method I have tried to get the most performance from my application by using the highest reasonable priorities:
public static void Main(string[] args)
{
    Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
    ...
    Thread workerThread = new Thread(new ThreadStart(worker.Run));
    workerThread.Priority = ThreadPriority.Highest;
    workerThread.Start();
    ...
    Console.WriteLine("Press any key to stop the stream...");
    WaitForKeyPress();
    worker.RequestStop = true;
    workerThread.Join();
Also, the socket settings I am currently using are shown below:
udpSocket = new Socket(targetEndPoint.Address.AddressFamily,
                       SocketType.Dgram,
                       ProtocolType.Udp);
udpSocket.Ttl = ttl;
udpSocket.SendBufferSize = 1024 * 1024;
udpSocket.Blocking = true;
udpSocket.Connect(targetEndPoint);
The default SendBufferSize is 8192, so I went ahead and moved it up to a megabyte, but this setting did not seem to have any effect on the dropped packets at the beginning.

From the comments I learned that TCP is not an option for you (because of inherent delays in transmission), and that you do not want to lose packets because the other side is not fully loaded.
So you actually need to implement some of the features present in TCP (retransmission), but in a more robust and lightweight fashion. I also assume that you are in control of the receiving side.
I propose that you send some predetermined number of packets and then wait for confirmation. For instance, every packet can carry an id that constantly grows. Every N packets, the receiving application sends the id of the last received packet back to the sender. After receiving this number, the sender will know whether it is necessary to repeat the last N packets.
This approach should not hurt your bandwidth very much, and you will get some feedback about the received data (although delivery is still not guaranteed).
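A minimal sketch of that idea; the 4-byte sequence-number framing, the window size, and the names below are illustrative assumptions, not code from the question or answer:

using System;
using System.Collections.Generic;
using System.Net.Sockets;

static class ReliableUdpSender
{
    const int WindowSize = 32;                        // N: packets between acknowledgements (assumed)
    static readonly Queue<byte[]> window = new Queue<byte[]>();
    static uint nextSeq = 0;

    // Prepend a growing 4-byte sequence number to each payload.
    static byte[] Frame(byte[] payload)
    {
        var datagram = new byte[4 + payload.Length];
        BitConverter.GetBytes(nextSeq++).CopyTo(datagram, 0);
        payload.CopyTo(datagram, 4);
        return datagram;
    }

    public static void SendWithHistory(Socket udpSocket, byte[] payload)
    {
        byte[] datagram = Frame(payload);
        udpSocket.Send(datagram);
        window.Enqueue(datagram);                     // keep a copy in case it must be resent
        if (window.Count > WindowSize)
            window.Dequeue();
    }

    // Call this when the receiver reports the highest sequence number it has seen so far.
    public static void OnAckReceived(Socket udpSocket, uint lastReceivedSeq)
    {
        foreach (byte[] datagram in window)
        {
            uint seq = BitConverter.ToUInt32(datagram, 0);
            if (seq > lastReceivedSeq)
                udpSocket.Send(datagram);             // retransmit anything the receiver missed
        }
    }
}

The receiver would send its highest id back over the same socket every N packets; as noted above, this only narrows the window of possible loss rather than guaranteeing delivery.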
Otherwise it is best to switch to TCP. By the way, did you try using TCP? How much does it hurt your bandwidth?

Related

What is the best approach for serial data reception and processing using c#?

I am pretty new to coding with some experience in ASM and C for PIC. I am still learning high level programming with C#.
Question
I have a serial port data reception and processing program in C#. To avoid losing data and to know when it was arriving, I hooked the DataReceived event and looped inside the handler until there were no more bytes to read.
When I tried this while data was arriving continuously, the loop never ended and blocked my program from doing other tasks (such as processing the retrieved data).
After reading about threading in C#, I created a thread that constantly checks the SerialPort.BytesToRead property so it knows when there is data available to retrieve.
I created a second thread that can process data while new data is still being read. While ReadSerial() keeps reading bytes (with a timeout that is restarted every time a new byte arrives from the serial port), the bytes already received can be processed and assembled into frames by a method named DataProcessing(), which reads from the same variable being filled by ReadSerial().
This gave me the desired results, but I noticed that with my solution (both the ReadSerial() and DataProcessing() threads alive), CPU usage skyrocketed all the way to 100%!
How do you approach this problem without causing such high CPU usage?
public static void ReadSerial() // Method that handles Serial Reception
{
    while (KeepAlive) // Bool variable used to keep alive the thread. Turned to false
    {                 // when the program ends.
        if (Port.BytesToRead != 0)
        {
            for (int i = 0; i < 5000; i++)
            {
                /* I don't know any other way to
                   implement a timeout to wait for
                   additional characters so I took what
                   I knew from PIC Serial Data Handling. */
                if (Port.BytesToRead != 0)
                {
                    RxList.Add(Convert.ToByte(Port.ReadByte()));
                    i = 0;
                    if (RxList.Count > 20)      // In case the method is stuck still reading,
                        BufferReady = true;     // signal the Data Processing thread to
                }                               // work with that chunk of data.
                BufferReady = true;             // signals the DataProcessing method to work
            }                                   // with the current data in RxList.
        }
    }
}
I cannot completely understand what you mean by the "DataReceived" and the "loop". I also work a lot with serial ports as well as other interfaces. In my application I attach to the DataReceived event and also read based on the bytes to read, but I don't use a loop there:
int bytesToRead = this._port.BytesToRead;
var data = new byte[bytesToRead];
this._port.BaseStream.Read(data, 0, bytesToRead);
If you are using a loop to read the bytes I recommend something like:
System.Threading.Thread.Sleep(...);
Otherwise the thread you are using to read the bytes is busy all the time, which means other threads cannot be scheduled and your CPU sits at 100%.
But I think you don't have to poll for the data in a loop if you are using the DataReceived event. If my understanding is not correct or you need further information, please ask.
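For completeness, a minimal sketch of the event-driven approach described above (the port name and baud rate are placeholders):

using System.IO.Ports;

class SerialReceiver
{
    // "COM1" and 9600 are placeholders for your real port settings.
    private readonly SerialPort _port = new SerialPort("COM1", 9600);

    public void Start()
    {
        _port.DataReceived += OnDataReceived;   // no polling loop needed
        _port.Open();
    }

    private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
    {
        int bytesToRead = _port.BytesToRead;    // read only what has actually arrived
        var data = new byte[bytesToRead];
        int read = _port.BaseStream.Read(data, 0, bytesToRead);
        // Hand the first "read" bytes of "data" off to a processing queue here
        // rather than doing heavy work on the event thread.
    }
}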

Serial Port Async Read (non blocking) + Threads

Well, I've been struggling for the last 4 days with this SerialPort control in C# with no satisfactory results. Let me explain:
I have a device (an Arduino UNO board) that communicates with a C# program which simulates a scale (simple request/response pattern). The device sends a command sequence consisting of 3 bytes (asking for a weight): CHR(27)+P+CHR(13), and the simulator responds with a simulated weight (I have already sorted out how the device catches and parses this weight, so that is no longer the problem).
Using the DataReceived event, I seem to be losing data with SerialPort.Read(), so I have discarded this approach so far.
The simulator HAS TO BE always listening for the said sequence of bytes and HAS TO HAVE a GUI. I understand that for this I must use a thread so the GUI is not locked (perhaps a BackgroundWorker?), plus some sort of buffer shared between these two threads, with something to prevent both threads from reading/writing the buffer at the same time (do I need a state machine?). I don't know if this is a good approach, whether my assumptions are wrong, or whether there is an easier way to solve this, so I'm asking for advice and (with a lot of luck) code fragments, or, if you've had to develop a similar app, how you solved it.
I can provide the code I've written so far if necessary to clarify further. Hope you can shed some light on this.
Thanks in advance.
UPDATE 1
This is the code I have so far:
ConcurrentQueue<byte> queue = new ConcurrentQueue<byte>();
....
private void backgroundWorker_DoWork(object sender, DoWorkEventArgs e)
{
    bool listening = true;
    while (listening)
    {
        if (serialPort.BytesToRead > 0)
        {
            byte b = (byte)serialPort.ReadByte();
            queue.Enqueue(b);
        }
    }
}
So, since a command has to end with character 13 (CR in ASCII):
public string GetCommand()
{
    string ret = "";
    byte[] ba = new byte[1];
    byte b = (byte)' ';
    while (b != 13)
    {
        if (queue.TryDequeue(out b))
        {
            ba[0] = b;
            ret += ASCIIEncoding.ASCII.GetString(ba);
        }
    }
    return ret;
}
In order to test this GetCommand() method I call it from the main UI thread within a button_click event, but it hangs the app. Do I need to create another thread to call GetCommand()?
This is OK for small amounts of data, but if the data is bigger, for example if you are passing HTTP-sized payloads, then the queue size may not be sufficient. So I think you should use a non-blocking type of architecture.
See this answer for how to implement the sending side.
For the reading side use a dedicated thread, in that thread read a message from the port, queue it up in a suitable concurrent data structure (e.g. a ConcurrentQueue) and immediately loop back to wait for the next input from the serial port.
Consume the input from the queue on a separate thread.
There may be more efficient ways but this one is easy to implement and foolproof.
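As a rough sketch of that pattern applied to the CR-terminated commands above, one option is a BlockingCollection (which wraps a ConcurrentQueue by default) so the consumer waits without spinning; the class and member names here are illustrative:

using System.Collections.Concurrent;
using System.IO.Ports;
using System.Text;
using System.Threading;

class CommandReader
{
    private readonly SerialPort _port;
    private readonly BlockingCollection<byte> _bytes =
        new BlockingCollection<byte>(new ConcurrentQueue<byte>());

    public CommandReader(SerialPort port) { _port = port; }

    // Dedicated reader thread: blocks on the port, never spins.
    public void StartReading()
    {
        new Thread(() =>
        {
            while (true)
            {
                int b = _port.ReadByte();   // blocks until a byte arrives
                if (b < 0) break;           // port/stream closed
                _bytes.Add((byte)b);
            }
        }) { IsBackground = true }.Start();
    }

    // Consumer side: waits efficiently until a CR-terminated command is complete.
    public string GetCommand()
    {
        var sb = new StringBuilder();
        while (true)
        {
            byte b = _bytes.Take();         // sleeps instead of busy-waiting
            if (b == 13) return sb.ToString();
            sb.Append((char)b);
        }
    }
}

Note that GetCommand() here still blocks the calling thread, so, as the answer says, it should be called from a worker thread rather than from the button_click handler on the UI thread.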

c# - SerialPort RS-485 and communication limits

I'm trying to communicate with a device using RS-485 through the serial port. Everything works fine until we try to push the communication harder to test the speed limit of the card; then weird problems start to occur. We basically send a first command with an image as its argument, and then another command to display this image. After every command, the card answers to say that the command was received correctly. But we are hitting limits too soon, and the card is supposed to handle much more.
So I'm wondering, since the transmission and the reception go through the same wire, whether there is some sort of data collision. Should I wait to receive all the data? Is the SerialDataReceivedEventHandler too slow for this situation, and should I instead keep reading the bytes in a while(true) loop in a separate thread and signal another thread once a complete message has arrived?
Other information:
We already have a protocol for communication: start delimiter, data, CRC16, end delimiter.
Sending in 2 commands is the way we do it and cannot be changed.
The baud rate is set to 115200.
The engineer is still working on the program in the card, so the problem might also be on his end.
English is not my first language so feel free to ask if I was not clear... :)
I recognize SerialPort programming is not my strength, and I've been trying to find some sort of wrapper, but I haven't found any that fits my needs. If someone has one to propose, that'd be great, or maybe someone has an idea of what could be wrong.
Anyway, here is a bit of code:
The thread sending frames:
public void SendOne()
{
    timerLast = Stopwatch.GetTimestamp();
    while (!Paused && conn.ClientConnState == Connexion.ConnectionState.Connected)
    {
        timerNow = Stopwatch.GetTimestamp();
        if ((timerNow - timerLast) / (double)Stopwatch.Frequency >= 1 / (double)fps)
        {
            averageFPS.Add((int)((double)Stopwatch.Frequency / (timerNow - timerLast)) + 1);
            if (averageFPS.Count > 10) averageFPS.RemoveAt(0);
            timerLast = Stopwatch.GetTimestamp();
            if (atFrame >= toSend.Count - 1)
            {
                atFrame = 0;
                if (!isLoop)
                    Paused = true;
            }
            SendColorImage();
        }
    }
}
public void SendColorImage()
{
    conn.Write(VIP16.bytesToVIP16(0x70C1, VIP16.Request.SendImage, toSend[++atFrame]));
    WaitForResponse();
    conn.Write(VIP16.bytesToVIP16(0x70C1, VIP16.Request.DisplayImage, VIP16.DisplayOnArg));
    WaitForResponse();
}

private void WaitForResponse()
{
    Thread.Sleep(25);
}
So WaitForResponse() is crucial, because if I send another command before the card has answered, it goes nuts. I hate using Thread.Sleep(), though, because it is not very accurate, plus it limits my speed to 20 fps, and if I use anything lower than 25ms a crash is much more likely to occur. So I was about to change the Thread.Sleep to "read bytes until the whole message is received" and ignore the DataReceived event... just wondering if I'm completely off track here?
Tx a lot!
UPDATE 1
First, thank you Brad and 500 - Internal Server Error! I've decided to stick with the .NET SerialPort for now and improve the Thread.Sleep accuracy (with timeBeginPeriod). I've decided to wait for the full response to be received, and I synchronized my threads using a ManualResetEventSlim (for speed), like so:
public static ManualResetEventSlim _waitHandle = new ManualResetEventSlim(false);
Then I changed SendColorImage to:
public void SendColorImage()
{
    conn.Write(VIP16.bytesToVIP16(0x70C1, VIP16.Requetes.SendImage, toSend[++atFrame]));
    WaitForResponse();
    conn.Write(VIP16.bytesToVIP16(0x70C1, VIP16.Requetes.DisplayImage, VIP16.DisplayOnArg));
    WaitForResponse2();
}

private void WaitForResponse()
{
    Connexion._waitHandle.Wait(100);
    Thread.Sleep(20);
}

private void WaitForResponse2()
{
    Connexion._waitHandle.Wait(100);
    //Thread.Sleep(5);
}
With the SerialDataReceivedEventHandler calling:
public void Recevoir(object sender, SerialDataReceivedEventArgs e)
{
    if (!msg.IsIncomplete)
        msg = new Vip16Message();
    lock (locker)
    {
        if (sp.BytesToRead > 0)
        {
            byte[] byteMsg = new byte[sp.BytesToRead];
            sp.Read(byteMsg, 0, byteMsg.Length);
            msg.Insert(byteMsg);
        }
    }
    if (!msg.IsIncomplete)
    {
        _waitHandle.Set();
        if (MessageRecu != null)
            MessageRecu(msg.toByte());
    }
}
So I found out that after the second command I didn't need to call Thread.Sleep at all... and after the first one I needed to sleep for at least 20ms for the card not to crash. So I guess that's the time the card needs to receive and process the whole image into its pixels. AND data collisions shouldn't really occur since I wait until the whole message has arrived, which means the problem is not on my end! YES! :p
A couple of pointers:
After sending, you'll want to wait for the transfer buffer empty event before reading the response. It's EV_TXEMPTY in unmanaged code; I don't recall how it's encapsulated on the managed side, as our RS-485 code predates the .NET COM port component.
You can reprogram the timer chip with a timeBeginPeriod(1) call to get a 1 millisecond resolution on Thread.Sleep().
For what it's worth, we sleep only briefly (1 ms) after send and then enter a reading loop where we keep attempting to read (again, with a 1 ms delay between read attempts) from the port until a full response has been received (or until a timeout or the retry counter is exhausted).
Here's the import declaration for timeBeginPeriod - I don't believe it's directly available in .NET (yet?):
[DllImport("winmm.dll")]
internal static extern uint timeBeginPeriod(uint period);
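For illustration only, the send-then-poll pattern described above might look roughly like this. The timeEndPeriod import (the documented counterpart of timeBeginPeriod in winmm.dll), the end-delimiter value, the buffer size, and the retry budget are all assumptions, and real code should also verify the CRC16:

using System;
using System.IO.Ports;
using System.Runtime.InteropServices;
using System.Threading;

static class Rs485Exchange
{
    [DllImport("winmm.dll")]
    internal static extern uint timeBeginPeriod(uint period);

    [DllImport("winmm.dll")]
    internal static extern uint timeEndPeriod(uint period);

    const byte EndDelimiter = 0x03;                   // placeholder end delimiter

    public static byte[] SendAndReadReply(SerialPort port, byte[] command, int maxRetries = 200)
    {
        timeBeginPeriod(1);                           // 1 ms resolution for Thread.Sleep
        try
        {
            port.Write(command, 0, command.Length);
            var buffer = new byte[4096];
            int count = 0;
            Thread.Sleep(1);                          // brief pause after sending
            for (int retry = 0; retry < maxRetries; retry++)
            {
                int available = Math.Min(port.BytesToRead, buffer.Length - count);
                if (available > 0)
                    count += port.Read(buffer, count, available);
                // Treat the reply as complete once the end delimiter shows up;
                // a real implementation would also check the CRC16.
                if (count > 0 && buffer[count - 1] == EndDelimiter)
                {
                    var reply = new byte[count];
                    Array.Copy(buffer, reply, count);
                    return reply;
                }
                Thread.Sleep(1);                      // 1 ms between read attempts
            }
            throw new TimeoutException("No complete reply before the retry budget ran out.");
        }
        finally
        {
            timeEndPeriod(1);
        }
    }
}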
I hope this helps.

C# UDPClient bad throughput

We have a production system that gathers telemetry data from remote devices. This data is sent at a reasonable frequency, and we end up receiving up to thousands of messages a second at peak times. The payload is circa 80 bytes per message. I am starting to do some performance testing of various storage mechanisms, but I thought that first of all I would try to see how fast I could push UDP without any data storage involved. I am getting approximately 70,000 messages a second maximum throughput when testing on my local machine (it seems to be around the same if I use another machine to send the test data). From my rough calculations, this is way lower than I expected given the network link capacity. The sender sits in a tight loop sending data. I am fully aware of all the issues with UDP regarding lost packets, etc. I just want to get an idea of our system's weak points.
Is the throughput so low because of the small packet size?
Matt
private IPEndPoint _receiveEndpoint = new IPEndPoint(IPAddress.Any, _receivePort);
private Stopwatch sw = new Stopwatch();
private int _recievedCount = 0;
private long _lastCount = 0;
private Thread _receiverThread;
private bool _running = true;

_clientReceive = new UdpClient();
_clientReceive.Client.Bind(_receiveEndpoint);
_receiverThread = new Thread(DoReceive);
_receiverThread.Start();

while (_running)
{
    Byte[] receiveBytes = _clientReceive.Receive(ref _receiveEndpoint);
    _clientReceive.Receive(ref _receiveEndpoint);
    if (!sw.IsRunning)
        sw.Start();
    string receiveString = Encoding.ASCII.GetString(receiveBytes);
    _recievedCount = ++_recievedCount;
    long howLong = sw.ElapsedMilliseconds;
    if (howLong / 1000 > _lastCount)
    {
        _lastCount = howLong / 1000;
        Invoke(new MethodInvoker(() => { Text = _recievedCount + " iterations in " + sw.ElapsedMilliseconds + " msecs"; }));
    }
}
Lots of small UDP packets are definitely going to result in lower network throughput than you'd get with larger packets; however, did you include the IP and UDP header sizes in your calculations?
Apart from that, 70k messages/second is very, very high and definitely not something you'd want happening across the internet, if that is where the app is eventually going to be deployed. Even thousands of messages/second is high, and if it were me I'd be looking to make the communication from the telemetry equipment less chatty, perhaps by bundling multiple readings into a single transmission (a rough sketch of this follows at the end of this answer).
If that's not an option and you are on a private network and you need to increase the network throughput, you may have to start looking at your network card, its driver, and then fine-tuning some Windows networking parameters. But whatever you do with the messages, you are almost certainly going to bottleneck on whatever processing you do on them, especially if it involves disk, way before you get to 70k messages/second (I'd be surprised if you can even get to 10k/second when you're doing anything useful with them).
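A rough sketch of that bundling idea; the flush threshold is a placeholder chosen to keep each datagram under a typical 1500-byte MTU, and the UdpClient is assumed to already be connected to the receiver:

using System.Collections.Generic;
using System.Net.Sockets;

class TelemetryBatcher
{
    private const int FlushThreshold = 1400;         // placeholder: stay under a typical MTU
    private readonly UdpClient _client;
    private readonly List<byte> _batch = new List<byte>();

    public TelemetryBatcher(UdpClient connectedClient) { _client = connectedClient; }

    public void Add(byte[] reading)
    {
        if (_batch.Count + reading.Length > FlushThreshold)
            Flush();                                 // keep each datagram under the threshold
        _batch.AddRange(reading);
    }

    public void Flush()
    {
        if (_batch.Count == 0) return;
        byte[] datagram = _batch.ToArray();
        _client.Send(datagram, datagram.Length);     // one datagram carries many readings
        _batch.Clear();
    }
}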
Yes, you should do measurements with various payload sizes and see how much throughput you get. For small payloads, the UDP/IP/Ethernet header overhead might be what is reducing your throughput (see the back-of-the-envelope calculation below).
Also see the following article on SO: Having trouble achieving 1Gbit UDP throughput
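To put rough numbers on that header overhead for the 80-byte messages in the question, assuming plain IPv4/UDP over Ethernet (8 + 20 bytes of UDP/IP headers and about 38 bytes of Ethernet framing including preamble and inter-frame gap):

const int payload = 80;
const int onWireBytes = payload + 8 + 20 + 38;       // ~146 bytes actually on the wire
double efficiency = (double)payload / onWireBytes;   // ~55% of the wire carries payload
double wireMbps = 70000L * onWireBytes * 8 / 1e6;    // ~82 Mbit/s at 70,000 messages/s

So 70,000 messages/s is only on the order of 80 Mbit/s on the wire: enough to saturate a 100 Mbit link, but far from a gigabit link's capacity, which suggests per-packet processing cost rather than raw bandwidth is the limit.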

How do I obtain the latency between server and client in C#?

I'm working on a C# server application for a game engine I'm writing in ActionScript 3. I'm using an authoritative server model to prevent cheating and ensure a fair game. So far, everything works well:
When the client begins moving, it tells the server and starts rendering locally; the server then tells everyone else that client X has begun moving, along with details so they can also begin rendering. When the client stops moving, it tells the server, which performs calculations based on the time the client began moving and the client render tick delay, and replies to everyone so they can update with the correct values.
The thing is, when I use the default 20ms tick delay in the server calculations, when the client moves for a rather long distance there's a noticeable tilt forward when it stops. If I slightly increase the delay to 22ms, everything runs very smoothly on my local network, but in other locations the tilt is still there. After experimenting a little, I noticed that the extra delay needed is pretty much tied to the latency between client and server. I even boiled it down to a formula that would work quite nicely: delay = 20 + (latency / 10).
So, how would I proceed to obtain the latency between a certain client and the server (I'm using asynchronous sockets)? The CPU effort can't be too much, so as not to slow the server down. Also, is this really the best way, or is there a more efficient/easier way to do this?
Sorry that this isn't directly answering your question, but generally speaking you shouldn't rely too heavily on measuring latency because it can be quite variable. Not only that, you don't know if the ping time you measure is even symmetrical, which is important. There's no point applying 10ms of latency correction if it turns out that the ping time of 20ms is actually 19ms from server to client and 1ms from client to server. And latency in application terms is not the same as in networking terms - you may be able to ping a certain machine and get a response in 20ms but if you're contacting a server on that machine that only processes network input 50 times a second then your responses will be delayed by an extra 0 to 20ms, and this will vary rather unpredictably.
That's not to say latency measurement doesn't have a place in smoothing predictions out, but it's not going to solve your problem, just clean it up a bit.
On the face of it, the problem here seems to be that you're sent information in the first message which you use to extrapolate data from until the last message is received. If all else stays constant, then the movement vector given in the first message multiplied by the time between the messages will give the server the correct end position that the client was in at roughly now-(latency/2). But if the latency changes at all, the time between the messages will grow or shrink. The client may know he's moved 10 units, but the server simulated him moving 9 or 11 units before being told to snap him back to 10 units.
The general solution to this is to not assume that latency will stay constant but to send periodic position updates, which allow the server to verify and correct the client's position. With just 2 messages as you have now, all the error is found and corrected after the 2nd message. With more messages, the error is spread over many more sample points allowing for smoother and less visible correction.
It can never be perfect though: all it takes is a lag spike in the last millisecond of movement and the server's representation will overshoot. You can't get around that if you're predicting future movement based on past events, as there's no real alternative to choosing either correct-but-late or incorrect-but-timely since information takes time to travel. (Blame Einstein.)
One thing to keep in mind when using ICMP based pings is that networking equipment will often give ICMP traffic lower priority than normal packets, especially when the packets cross network boundaries such as WAN links. This can lead to pings being dropped or showing higher latency than traffic is actually experiencing and lends itself to being an indicator of problems rather than a measurement tool.
The increasing use of Quality of Service (QoS) in networks only exacerbates this. As a consequence, although ping still remains a useful tool, it needs to be understood that it may not be a true reflection of the network latency experienced by non-ICMP traffic.
There is a good post at the Itrinegy blog How do you measure Latency (RTT) in a network these days? about this.
You could use the already available Ping class. It should be preferred over writing your own, IMHO.
Have a "ping" command, where you send a message from the server to the client, then time how long it takes to get a response. Barring CPU overload scenarios, it should be pretty reliable. To get the one-way trip time, just divide the time by 2.
We can measure the round-trip time using the Ping class of the .NET Framework.
Instantiate a Ping and subscribe to the PingCompleted event:
Ping pingSender = new Ping();
pingSender.PingCompleted += PingCompletedCallback;
Add code to configure and action the ping.
Our PingCompleted event handler (PingCompletedEventHandler) has a PingCompletedEventArgs argument. The PingCompletedEventArgs.Reply gets us a PingReply object. PingReply.RoundtripTime returns the round trip time (the "number of milliseconds taken to send an Internet Control Message Protocol (ICMP) echo request and receive the corresponding ICMP echo reply message"):
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    ...
    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
    ...
}
Code-dump of a full working example, based on MSDN's example. I have modified it to write the RTT to the console:
public static void Main(string[] args)
{
    string who = "www.google.com";
    AutoResetEvent waiter = new AutoResetEvent(false);
    Ping pingSender = new Ping();
    // When the PingCompleted event is raised,
    // the PingCompletedCallback method is called.
    pingSender.PingCompleted += PingCompletedCallback;
    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);
    // Wait 12 seconds for a reply.
    int timeout = 12000;
    // Set options for transmission:
    // The data can go through 64 gateways or routers
    // before it is destroyed, and the data packet
    // cannot be fragmented.
    PingOptions options = new PingOptions(64, true);
    Console.WriteLine("Time to live: {0}", options.Ttl);
    Console.WriteLine("Don't fragment: {0}", options.DontFragment);
    // Send the ping asynchronously.
    // Use the waiter as the user token.
    // When the callback completes, it can wake up this thread.
    pingSender.SendAsync(who, timeout, buffer, options, waiter);
    // Prevent this example application from ending.
    // A real application should do something useful
    // when possible.
    waiter.WaitOne();
    Console.WriteLine("Ping example completed.");
}
public static void PingCompletedCallback(object sender, PingCompletedEventArgs e)
{
    // If the operation was canceled, display a message to the user.
    if (e.Cancelled)
    {
        Console.WriteLine("Ping canceled.");
        // Let the main thread resume.
        // UserToken is the AutoResetEvent object that the main thread
        // is waiting for.
        ((AutoResetEvent)e.UserState).Set();
        return;
    }
    // If an error occurred, display the exception to the user.
    if (e.Error != null)
    {
        Console.WriteLine("Ping failed:");
        Console.WriteLine(e.Error.ToString());
        // Let the main thread resume.
        ((AutoResetEvent)e.UserState).Set();
        return;
    }
    Console.WriteLine($"Roundtrip Time: {e.Reply.RoundtripTime}");
    // Let the main thread resume.
    ((AutoResetEvent)e.UserState).Set();
}
You might want to perform several pings and then calculate an average, depending on your requirements of course.
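For example, a small helper that averages a few synchronous pings (the host, timeout, and attempt count are placeholders):

using System.Collections.Generic;
using System.Linq;
using System.Net.NetworkInformation;

static class LatencyCheck
{
    // Returns the average RTT in milliseconds over a few pings, or -1 if none succeeded.
    public static long AverageRoundtripMs(string host, int attempts = 5)
    {
        using (var pingSender = new Ping())
        {
            var times = new List<long>();
            for (int i = 0; i < attempts; i++)
            {
                PingReply reply = pingSender.Send(host, 1000);   // 1 s timeout per attempt
                if (reply.Status == IPStatus.Success)
                    times.Add(reply.RoundtripTime);
            }
            return times.Count > 0 ? (long)times.Average() : -1;
        }
    }
}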
