Serial <> Ethernet converter and SerialPort.Write() - c#

I'm trying to achieve maximum throughput on a serial port. I believe my C# code is causing a buffer overrun condition. SerialPort.Write() is usually a blocking method.
The problem is that the unit/driver doing the Ethernet-to-serial conversion doesn't block for the time it takes to transmit the message. It doesn't appear to block at all, until it ends up blocking forever once too much data is written to it too fast; then the SerialPort needs to be disposed before it will work again. Another issue is that BytesToWrite is always 0 directly after the write. A driver problem?
So, how do I get around this issue?
I tried doing a Sleep directly after the write for the time it would take to send the message out, but it doesn't work.
com.Write(buffer, 0, length);
double sleepTime = ((length + 1) * .000572916667) * 1000; //11 bits, 19.2K baud
Thread.Sleep((int) sleepTime);
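For reference, the 0.000572916667 constant above is just 11 bits / 19200 baud. A minimal sketch of the same pacing idea, deriving the per-byte time from the port's configured baud rate instead of a hard-coded constant (the helper class and method names are my own; as noted, sleeping gives no feedback from the converter, so it only paces writes and cannot guarantee the converter keeps up):

using System;
using System.IO.Ports;
using System.Threading;

static class PacedWriter
{
    // Write, then sleep for roughly the on-the-wire time of the bytes just queued.
    // 11 bits per byte = start + 8 data + parity + stop, as in the snippet above.
    public static void Write(SerialPort com, byte[] buffer, int length)
    {
        com.Write(buffer, 0, length);

        double msPerByte = 11 * 1000.0 / com.BaudRate;             // ~0.573 ms per byte at 19200 baud
        int sleepMs = (int)Math.Ceiling((length + 1) * msPerByte);
        Thread.Sleep(sleepMs);                                     // crude pacing only; no feedback from the converter
    }
}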
I realize there may be some delay between when the unit receives the message and when it sends it out the COM port. Perhaps this is the reason why the driver does not block the .Write call?
I could wait for the message to be ack'd by the node. Problem is I'm dealing with thousands of nodes and some messages are broadcast globally. It is not feasible to wait for everyone to ack. What to do?
Any ideas?

Related

How solid is the Mono SerialPort class?

I have an application that, among other things, uses the SerialPort to communicate with a Digi XBee coordinator radio.
The code for this works rock solid on the desktop under .NET.
Under Mono running on a Quark board and WindRiver Linux, I get about a 99% failure rate when attempting to receive and decode messages from other radios in the network due to checksum validation errors.
Things I have tested:
I'm using polling for the serial port, not events, since event-driven serial is not supported in Mono. So the problem is not event related.
The default USB Coordinator uses an FTDI chipset, but I swapped out to use a proto board and a Prolific USB to serial converter and I see the same failure rate. I think this eliminates the FTDI driver as the problem.
I changed the code to never try duplex communication. It's either sending or receiving. Same errors.
I changed the code to read one byte at a time instead of in blocks sized by the size identifier in the incoming packet. Same errors.
I see this with a variety of remote devices (smart plug, wall router, LTH), so it's not remote-device specific.
The error occurs with solicited or unsolicited messages coming from other devices.
I looked at some of the raw packets that fail a checksum and manual calculation gets the same result, so the checksum calculation itself is right.
Looking at the data I see what appear to be packet headers mid-packet (i.e. inside the length indicated in the packet header). This makes me think that I'm "missing" some bytes, causing data from subsequent packets to be read into earlier packets.
Again, this works fine on the desktop, but for completeness, this is the core of the receiver code (with error checking removed for brevity):
do
{
    byte[] buffer;

    // find the packet start
    byte @byte = 0;
    do
    {
        @byte = (byte)m_port.ReadByte();
    } while (@byte != PACKET_DELIMITER);

    // read the two-byte length field
    int read = 0;
    while (read < 2)
    {
        read += m_port.Read(lengthBuffer, read, 2 - read);
    }
    var length = lengthBuffer.NetworkToHostUShort(0);

    // get the packet data
    buffer = new byte[length + 4];
    buffer[0] = PACKET_DELIMITER;
    buffer[1] = lengthBuffer[0];
    buffer[2] = lengthBuffer[1];

    read = 0; // reset before reading the packet body
    do
    {
        read += m_port.Read(buffer, 3 + read, (buffer.Length - 3) - read);
    } while (read < (length + 1));

    m_frameQueue.Enqueue(buffer);
    m_frameReadyEvent.Set();
} while (m_port.BytesToRead > 0);
I can only think of two places where the failure might be happening - the Mono SerialPort implementation or the WindRiver serial port driver that's sitting above the USB stack. I'm inclined to think that WindRiver has a good driver.
To add to the confusion, we're running Modbus Serial on the same device (in a different application) via Mono and that works fine for days, which somewhat vindicates Mono.
Has anyone else got any experience with the Mono SerialPort? Is it solid? Flaky? Any ideas on what could be going on here?
m_port.Read(lengthBuffer, 0, 2);
That's a bug: you have no guarantee whatsoever that you'll actually read two bytes. Getting just one is very common; serial ports are slow. You must use the return value of Read() to check how many bytes you actually got. Note how you did it right in the loop that reads the packet body. Beyond looping, the simple alternative is to just call ReadByte() twice.
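To make that concrete, here is a minimal sketch of the kind of read loop being described, written as a hypothetical ReadExactly extension method (not part of the framework):

using System.IO.Ports;

static class SerialPortExtensions
{
    // Loop until exactly "count" bytes have arrived; SerialPort.Read may return
    // fewer bytes than requested (often just one on a slow link).
    public static void ReadExactly(this SerialPort port, byte[] buffer, int offset, int count)
    {
        int read = 0;
        while (read < count)
        {
            read += port.Read(buffer, offset + read, count - read);
        }
    }
}

With that helper the two-byte length read becomes m_port.ReadExactly(lengthBuffer, 0, 2); any ReadTimeout set on the port still applies to each underlying Read call.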

TCP segments disappearing

I've run into a problem that Googling doesn't seem to solve. To keep it simple: I have a client written in C# and a server running on Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only about a dozen of them. If I put a large enough sleep in the loop, everything turns out fine. The buffer is small, about 30 B. I've read about Nagle's algorithm and delayed ACKs, but it doesn't explain what I'm seeing.
for(int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); // problem solver, but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect some of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream oriented. This means that recv can read any number of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist. Sent buffers can be split or merged.
There is no way to get message behavior from TCP, and no way to make a single recv call read at least N bytes. Message semantics are constructed by the application protocol, often by using fixed-size messages or a length prefix (a sketch of the latter follows below). You can read at least N bytes by doing a read loop.
Remove that assumption from your code.
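For illustration, a minimal sketch of the length-prefix approach, assuming both sides agree on a 4-byte big-endian length header (the 30-byte buffers in the question carry no such prefix, so the sender would need a matching change):

using System.IO;
using System.Net.Sockets;

static class Framing
{
    // Fill "buffer" completely, looping because a TCP read may return fewer
    // bytes than requested.
    static void ReadExactly(NetworkStream stream, byte[] buffer)
    {
        int read = 0;
        while (read < buffer.Length)
        {
            int n = stream.Read(buffer, read, buffer.Length - read);
            if (n == 0) throw new EndOfStreamException("connection closed");
            read += n;
        }
    }

    // Receive one length-prefixed message: a 4-byte big-endian length, then the payload.
    public static byte[] ReceiveMessage(NetworkStream stream)
    {
        var header = new byte[4];
        ReadExactly(stream, header);
        int length = (header[0] << 24) | (header[1] << 16) | (header[2] << 8) | header[3];

        var payload = new byte[length];
        ReadExactly(stream, payload);
        return payload;
    }
}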
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will be sent.
In your case, since the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain all of the data.
When you add a Thread.Sleep(100), you receive 100 packets on the server side because the Nagle algorithm won't wait that long for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your socket: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack; it was my server code. Somewhere in the data manipulation I used the strncpy() function on a buffer that stores messages. Every message had a \0 at the end. strncpy() copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
When I used the delay between send() calls on the client, the messages didn't get coalesced, so strncpy() worked on a buffer holding a single message and everything went smoothly. That "phenomenon" led me to think that the rate of the send calls was causing my problems.
Again, thanks for the help; your comments got me thinking. :)

C# "using" SerialPort transmit with data loss

I'm new to this forum, and I have a question that has been bothering me for a while.
My setup is a serial-enabled character display connected to my PC with a USB/UART converter. I'm transmitting bytes to the display via the SerialPort class in a separate write-buffer thread, in a C++ style:
private void transmitThread(){
    while(threadAlive){
        if(q.Count > 0){ // Queue not empty
            byte[] b = q.Dequeue();
            s.Write(b,0,b.Length);
            System.Threading.Thread.Sleep(100);
        }
        else{ // Queue empty
            System.Threading.Thread.Sleep(10);
        }
    }
}
Assuming the serial port is already open, this works perfectly and transmits all the data to the display. There is, though, no exception handling at all in this snippet. Therefore I was looking into using a typical C# feature, the 'using' statement, and only opening the port when needed, like so:
private void transmitThread(){
    while(threadAlive){
        if(q.Count > 0){ // Queue not empty
            byte[] b = q.Dequeue();
            using(s){ // using the serialPort
                s.Open();
                s.Write(b,0,b.Length);
                s.Close();
            }
            System.Threading.Thread.Sleep(100);
        }
        else{ // Queue empty
            System.Threading.Thread.Sleep(10);
        }
    }
}
The problem with this function is that it only transmits a random amount of the data, typically about one third of an 80-byte array. I have tried different thread priority settings, but nothing changes.
Am I missing something important, or do I simply close the port too fast after a transmit request?
I hope you can help me. Thanks :)
No, that was a Really Bad Idea. The things that go wrong, roughly in the order you'll encounter them:
the serial port driver discards any bytes left in the transmit buffer that were not yet transmitted when you close the port. Which is what you are seeing now.
the MSDN article for SerialPort.Close() warns that you must "wait a while" before opening the port again. There's an internal worker thread that needs to shut down. The amount of time you have to wait is not specified and is variable, depending on machine load.
closing a port allows another program to grab the port and open it. Serial ports cannot be shared; your program will fail when you try to open it again.
Serial ports were simply not designed to be opened and closed on-the-fly. Only open it at the start of your program, close it when it ends. Not calling Close() at all is quite acceptable and avoids a deadlock scenario.
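Following that advice, a rough sketch of the transmit thread from the question with the port opened once at startup and the exception handling the question was after (the broad catch clause is only illustrative; s, q and threadAlive are the fields from the snippets above):

// s is opened once, e.g. at program startup: s.Open();
private void transmitThread(){
    while(threadAlive){
        if(q.Count > 0){ // Queue not empty
            byte[] b = q.Dequeue();
            try{
                s.Write(b, 0, b.Length);          // port stays open for the program's lifetime
            }
            catch(Exception ex){                   // e.g. TimeoutException, InvalidOperationException
                Console.WriteLine("Serial write failed: " + ex.Message);
            }
            System.Threading.Thread.Sleep(100);
        }
        else{ // Queue empty
            System.Threading.Thread.Sleep(10);
        }
    }
}
// s.Close(), if called at all, happens once when the program shuts down.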
I think you're missing the point of the using block. A typical using block will look like this:
using (var resource = new SomeResource())
{
    resource.DoSomething();
}
The opening happens at the very beginning. Typically as part of the constructor. But sometimes on the first line of the using block.
But the big red flag I see is that the closing happens automatically. You don't need the .Close() call.
If the successful operation of your serial device depends on the calls to Thread.Sleep, then perhaps the thread is being interrupted at some point, enough to make the data transmission fall out of sync with the device. There are most likely ways to solve this, but the first thing I would do is try using the .NET SerialPort class directly. Its Write method is very similar to what you want to do, and the documentation includes code examples.

Does delaying block data receiving?

I am working on a project on Visual Studio C#.
I am collecting data from a device connected to PC via serial port.
First I send a request command and wait for a response.
The device takes about 1 second to respond after the request command is sent.
The thing is, the device may not be reachable and may not respond sometimes.
In order to wait for a response (if there is one) and not send the next request command too early, I add a delay using the System.Threading.Thread method.
My question is: if I make that delay longer, do I lose incoming serial port data?
The Delay function I use is:
private void Delay(byte WaitMiliSec)
{
    // WaitTime is incremented by a WaitTimer that ticks every 100 ms
    WaitTime = 0;
    while (WaitTime < WaitMiliSec)
    {
        System.Threading.Thread.Sleep(25);
        Application.DoEvents();
    }
}
No, you won't lose any data. The serial port has its own buffer, which does not depend on your application at all; the OS and the hardware handle this for you.
I would suggest refactoring the data send/receive into its own task/thread (see the sketch below). That way you don't need Application.DoEvents().
If you post some more of your send/receive code I might be able to help you with this.
PS: it seems to me that your code will not work anyway (WaitTime is always zero), but I guess it's just a snippet, right?
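A minimal sketch of that suggestion, assuming the request/response is moved onto a worker thread (the class and method names, timeout value and buffer size are my own; the point is that only the worker blocks, so Application.DoEvents() is no longer needed):

using System;
using System.IO.Ports;

static class DeviceClient
{
    // Send a request and wait up to timeoutMs for a reply; returns an empty
    // array if the device does not answer in time.
    public static byte[] RequestResponse(SerialPort port, byte[] request, int timeoutMs)
    {
        port.ReadTimeout = timeoutMs;                // e.g. 1500 ms: the device needs ~1 s to answer
        port.Write(request, 0, request.Length);

        var response = new byte[256];
        try
        {
            // returns as soon as at least one byte has arrived; blocks only this worker thread
            int n = port.Read(response, 0, response.Length);
            Array.Resize(ref response, n);
            return response;
        }
        catch (TimeoutException)
        {
            return new byte[0];                      // no response within the timeout
        }
    }
}

Called from its own thread (for example new Thread(...).Start()), the UI stays responsive. A real reader would also loop until the full response frame has arrived rather than taking a single Read.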

C# BeginSend within a foreach loop issue

I have a group of "Packets", which are custom classes that are converted to byte[] and then sent to the client. When a client joins, they are updated with the "catch-up packets" that were sent before the user joined. Think of it as a chat room where you are brought up to date with the previous conversation.
My issue is that on the client end we do not receive all of the information; sometimes none at all.
Below is pseudo-C# code for what I'm doing; it looks like this:
lock(CatchUpQueue.SyncRoot)
{
    foreach(Packet packet in CatchUpQueue)
    {
        // data is the packet serialized to byte[] (pseudocode).
        // If I put a Console.WriteLine("I am sending packets"); here, it works fine
        // with up to 2 client sockets; otherwise it fails again.
        clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    }
}
Is this some sort of throttling issue, or an issue with sending too many times in a row; i.e., if there are 4 packets in the queue, it calls BeginSend 4 times?
I have searched for a similar topic and cannot find one. Thank you for your help.
Edit: I would also like to point out that sending between clients continues normally for any sends after the client connects, but for some reason the packets within this loop are not all sent.
I would suspect that you are flooding the TCP connection with packets and overflowing its send buffer, at which point it will return errors rather than sending the data.
The idea of async I/O is not to let you send an unlimited number of data packets simultaneously, but to allow your foreground thread to continue processing while a linear sequence of one or more I/O operations occurs in the background.
As the TCP stream is a serial stream, try respecting that and send each packet in turn. That is, after BeginSend, use the async callback to detect when the send has completed before you send again (see the sketch below this answer). You are effectively doing this by adding a Sleep, but it is not a very good solution: you will either be sending packets more slowly than possible, or you may not sleep for long enough and packets will be lost again.
Or, if you don't need the I/O to run in the background, use your simple foreach loop, but with a synchronous rather than an asynchronous send.
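To illustrate the callback-chaining suggestion, a minimal sketch using a hypothetical ChainedSender wrapper (not from the question) that only issues the next BeginSend once EndSend has completed the previous one:

using System;
using System.Collections.Generic;
using System.Net.Sockets;

class ChainedSender
{
    private readonly Socket _socket;
    private readonly Queue<byte[]> _pending = new Queue<byte[]>();
    private bool _sending;

    public ChainedSender(Socket socket) { _socket = socket; }

    public void Enqueue(byte[] data)
    {
        lock (_pending)
        {
            _pending.Enqueue(data);
            if (_sending) return;      // a send is already in flight; the callback will continue
            _sending = true;
        }
        SendNext();
    }

    private void SendNext()
    {
        byte[] data;
        lock (_pending)
        {
            if (_pending.Count == 0) { _sending = false; return; }
            data = _pending.Dequeue();
        }
        _socket.BeginSend(data, 0, data.Length, SocketFlags.None, OnSent, null);
    }

    private void OnSent(IAsyncResult ar)
    {
        _socket.EndSend(ar);           // completes the previous send
        SendNext();                    // only now start the next one
    }
}

The lock around the internal queue plays the role of lock(CatchUpQueue.SyncRoot) in the question; at most one send is ever outstanding, so the socket's send buffer is never flooded.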
Okay, apparently a fix (which still has me confused) is to Thread.Sleep for a number of milliseconds for each packet I am sending.
So...
for(int i = 0; i < PacketQueue.Count; i++)
{
    Packet packet = PacketQueue[i];
    clientSocket.BeginSend(data, 0, data.Length, SocketFlags.None, new AsyncCallback(EndSend), data);
    Thread.Sleep(PacketQueue.Count);
}
I assume that for some reason the loop stops some of the calls from happening... Well I will continue to work with this and try to find the real answer.
