WinForms serial port eats data - C#

I have a serial port that receives color data from an Arduino board at 115200 baud. At a small scale (a 1-byte request from the Arduino asking the PC to send the next command, implemented for synchronization) it works fine, but when I request a lot of data (393 bytes) and the Arduino sends it, the serial port seems to just eat the data, and BytesToRead usually equals 5 or 6.
Code:
void GetCurrentState()
{
    int i = 0;
    Color WrittenColor;           // Color to write to the array
    byte red;
    byte green;
    byte blue;

    AddToQueue(new byte[] { 6 }); // Queue the command that requests the data from the Arduino

    while (i <= StripLength)      // Read data until the array is filled completely
    {
        Console.WriteLine($"Bytes to read: {SerialPort1.BytesToRead}"); // Debug
        if (SerialPort1.BytesToRead >= 3) // If a full color is ready to read and construct, do it; otherwise wait for more data
        {
            red = (byte)SerialPort1.ReadByte();   // Read red component
            green = (byte)SerialPort1.ReadByte(); // Read green component
            blue = (byte)SerialPort1.ReadByte();  // Read blue component
            WrittenColor = Color.FromArgb(red, green, blue); // Build the color
            SavedState[i] = WrittenColor;         // Store it
            i++;                                  // Advance to the next LED
        }
    }
}

You might try buffering the serial port data into another stream in memory, and then reading from that stream. If you are actually transferring enough data to need that high a baud rate, it's possible that you are not reading the data fast enough. (I know that's not a massive data rate, but most hobby applications can get away with less.)
Have you tried a lower baud rate? Or are you stuck with this one?
You might find this post helpful:
Reading serial port faster
I would probably have one thread reading the serial port in a loop and queuing the data into a thread-safe ConcurrentQueue, and another thread reading from that queue and doing useful things with the data. And, as you are already doing, use a queue to send commands, but I would use another thread for sending them as well (you might already be doing this).
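For illustration, here is a minimal sketch of that producer/consumer split. The field and method names (m_rxQueue, ReaderLoop, ConsumerLoop, m_running) are placeholders of mine, not part of the original code, and it assumes the usual System.Collections.Concurrent, System.Threading and System.Drawing usings:

// Hypothetical sketch: one thread drains the port, another consumes the bytes.
private readonly ConcurrentQueue<byte> m_rxQueue = new ConcurrentQueue<byte>();
private volatile bool m_running = true;

private void ReaderLoop()            // Run this on its own thread
{
    byte[] chunk = new byte[256];
    while (m_running)
    {
        // Read() blocks until at least one byte is available (default ReadTimeout is infinite)
        int count = SerialPort1.Read(chunk, 0, chunk.Length);
        for (int n = 0; n < count; n++)
            m_rxQueue.Enqueue(chunk[n]);
    }
}

private void ConsumerLoop()          // Run this on a second thread
{
    var rgb = new List<byte>(3);
    while (m_running)
    {
        byte b;
        while (rgb.Count < 3 && m_rxQueue.TryDequeue(out b))
            rgb.Add(b);

        if (rgb.Count == 3)
        {
            Color color = Color.FromArgb(rgb[0], rgb[1], rgb[2]);
            // ... store or use the color, e.g. write it into SavedState ...
            rgb.Clear();
        }
        else
        {
            Thread.Sleep(1);         // No complete color yet; avoid busy-spinning
        }
    }
}

The point of the split is that the reader thread never does anything slow, so the serial driver's buffer is drained as fast as the data arrives, and any slower processing happens on the consumer side.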

Related

TCP sender machine receiver-window-size shrinking to 0 on windows machine after sending tiny amounts of data for a few minutes

I'm writing an app that runs on a Windows laptop and sends TCP bytes in a loop as a "homemade keep-alive" message to keep a connection active (the server machine will disconnect after 15 seconds of no TCP data received). The server machine will send the laptop small chunks of data (about 0.5 KB/second) as long as a connection is alive (according to the server documentation, this is ideally an "echo" packet, but I was unable to find how to accomplish that in .NET).
My problem is that when I view this data in Wireshark I can see good network activity; then, after a few minutes, the "win" value (the receive window size available on the laptop) shrinks from 65K to 0 in decrements of about 240 bytes per packet. Why is this happening and how can I prevent it? I can't seem to get the keep-alive flags in .NET to work, so this was supposed to be my workaround. I do not see any missed ACK messages, and my data rate is about 2 Kb/sec, so I don't understand why the laptop's window size is dropping. I assume there is a misconception on my part about TCP and/or Windows/.NET use of TCP sockets, since I have no experience with TCP (I've always used UDP).
TcpClient client = new TcpClient(iPEndpoint);
// Socket s = client.Client; none of these flags actually work on the keep-alive feature
// s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
// s.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 10);

// Translate the passed message into ASCII and store it as a byte array.
// byte[] data = System.Text.Encoding.ASCII.GetBytes(message);

IPAddress ipAdd = IPAddress.Parse("192.168.1.10");
IPEndPoint ipEndPoint = new IPEndPoint(ipAdd, 13000);
client.Connect(ipEndPoint);
NetworkStream stream = client.GetStream();

// Send the message to the connected TcpServer.
bool finished = false;
while (!finished)
{
    try
    {
        stream.Write(data, 0, data.Length);
        Thread.Sleep(5000);
    }
    catch (System.IO.IOException ioe)
    {
        if (ioe.InnerException is System.Net.Sockets.SocketException)
        {
            client.Dispose();
            client = new TcpClient(iPEndpoint);
            client.Connect(ipEndPoint);
            stream = client.GetStream();
            Console.Write("reconnected");
            // This immediately fails again after leaving this catch and re-entering the
            // while loop, because the send window is still 0.
        }
    }
}
You should really be familiar with RFC 793, Transmission Control Protocol, which is the definition of TCP. It explains the window and how it is used for flow control:
Flow Control:
TCP provides a means for the receiver to govern the amount of data
sent by the sender. This is achieved by returning a "window" with
every ACK indicating a range of acceptable sequence numbers beyond the
last segment successfully received. The window indicates an allowed
number of octets that the sender may transmit before receiving further
permission.
-and-
To govern the flow of data between TCPs, a flow control mechanism is
employed. The receiving TCP reports a "window" to the sending TCP.
This window specifies the number of octets, starting with the
acknowledgment number, that the receiving TCP is currently prepared to
receive.
-and-
Window: 16 bits
The number of data octets beginning with the one indicated in the
acknowledgment field which the sender of this segment is willing to
accept.
The window size is dictated by the receiver of the data in its ACK segments where it acknowledges receipt of the data. If your laptop receive window shrinks to 0, it is setting the window to that because it has no more space to receive, and it needs time to process and free up space in the receive buffer. When it has more space, it will send an ACK segment with a larger window.
Segment  Receive  Test
Length   Window
-------  -------  -------------------------------------------
   0        0     SEG.SEQ = RCV.NXT
   0       >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
  >0        0     not acceptable
  >0       >0     RCV.NXT =< SEG.SEQ < RCV.NXT+RCV.WND
                  or RCV.NXT =< SEG.SEQ+SEG.LEN-1 < RCV.NXT+RCV.WND
Note that when the receive window is zero no segments should be acceptable except ACK segments. Thus, it is possible for a TCP to maintain a zero receive window while transmitting data and receiving ACKs. However, even when the receive window is zero, a TCP must process the RST and URG fields of all incoming segments.
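One thing the snippet above never does is read from the stream, yet by your own description the server is pushing about 0.5 KB/second to the laptop. If that incoming data is never consumed, the socket's receive buffer fills up and the advertised window shrinks to 0 exactly as described. As a minimal, hypothetical sketch (the method name and buffer size are mine, not from the question), draining the stream on a background thread keeps the window open:

// Hypothetical sketch: keep reading whatever the server sends so the receive buffer
// (and therefore the advertised window) never fills up.
void StartDrainLoop(NetworkStream stream)
{
    var drain = new Thread(() =>
    {
        byte[] scratch = new byte[4096];
        try
        {
            int n;
            // Read() blocks until data arrives and returns 0 when the peer closes the connection.
            while ((n = stream.Read(scratch, 0, scratch.Length)) > 0)
            {
                // Discard (or process) the n bytes received here.
            }
        }
        catch (IOException)
        {
            // Connection dropped; let the sender's reconnect logic take over.
        }
    });
    drain.IsBackground = true;
    drain.Start();
}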

Can't get data out of FTDI FT201X using i2c

I have a project using the FTDI FT201X as a USB-to-I2C slave, with an AVR microcontroller as the I2C master. I'm using C# with WPF on .NET Core 3.1 on a Windows 10 machine. Basically, everything with the FTDI chip works fine except that I can't successfully get data from the PC to the FTDI chip no matter what I try. The D2XX Write function reports success and returns no error, but there is never any data in the buffer when I try to read it on the I2C side.
I've since written a small test program in an attempt to isolate the issue, but the problem remains. Basically, when a button is clicked we open the device by serial number, write a command to the device's buffer, handshake with the AVR to let it know to read, and then wait for the AVR to drive a handshake pin low, meaning it has received the data.
public class USBLibrary
{
    byte targetDeviceCount = 0;
    FTDI.FT_STATUS ftStatus = FTDI.FT_STATUS.FT_OK;
    public FTDI connectedUSBDevice;

    // Called from button click event
    public void ConnectUSB()
    {
        bool isOK = true;
        byte numOfBytes = 1;
        uint bytesWritten = 0;
        bool usbInPinIsHigh = false;  // Tracks USB In pin
        byte lowMask = 0b00010000;    // CBUS 0 is output (4-7), all pins low (0-3) (default setting)
        byte highMask = 0b00010001;   // CBUS 0 is output (4-7), CBUS 3 is high
        byte inPinMask = 0b00001000;  // AND with pin states to get input pin value (Bus3)
        byte pinStates = 0;           // Used to get the current pin values
        double timeout = 0;

        // Create new instance of the FTDI device class
        connectedUSBDevice = new FTDI();

        // Open the FTDI device by its serial number
        ftStatus = connectedUSBDevice.OpenBySerialNumber("P00001");

        /*** Write to Device ***/
        byte[] firmwareCmd = new byte[numOfBytes];
        firmwareCmd[0] = 128;         // 128 is the Get Firmware command
        // firmwareCmd[1] = 61;       // Just testing

        // Write the firmware command to the Tx buffer
        ftStatus = connectedUSBDevice.Write(firmwareCmd, numOfBytes, ref bytesWritten);
        Trace.WriteLine(bytesWritten);

        // Handshake with device
        isOK = DeviceHandshake(lowMask, highMask, inPinMask);

        // Check if handshake failed
        if (isOK == false)
        {
            return;
        }

        Task.Delay(10);

        // Wait until message is sent
        while ((usbInPinIsHigh == false) && (timeout <= 1000))
        {
            Task.Delay(1);

            // Check for USB In pin to go high. Signals FW transfer is complete and ready to retrieve.
            ftStatus = connectedUSBDevice.GetPinStates(ref pinStates);

            // Is the input pin high or low?
            if ((pinStates & inPinMask) == inPinMask) // In pin high
            {
                usbInPinIsHigh = true; // Means the uC finished sending data
            }
            timeout++;
        }

        // TEST: displays timeout amount for testing
        Trace.WriteLine("Timeout=" + timeout);
        ftStatus = connectedUSBDevice.Close();
    }
}
NOTE: For this code, I've taken out a lot of the error checking code for clarity. Also, the handshake code is not shown because it shouldn't be relevant: raise output pin, listen for AVR to raise output pin, lower output pin, listen for AVR to lower output pin.
On the AVR side, we simply poll for the FT201X's pin to go high and then handshake with the chip. Then we simply read. The read function always returns 0.
I doubt the problem is with i2c as there are 3 IO Expander chips controlling LEDs and buttons and we can read and write to those fine. Further, the FT chip has a function called Get USB State where you can check to see the device's status by sending the command and reading the result via i2c. When I do this, I always get back the correct 0x03 "Configured" state. So we can read from the chip via i2c.
There's also a function that will return the # of bytes in the buffer waiting to be read...when I do this, it always says 0 bytes.
And for good measure I replaced the chip with a new one in case it was bad and again we had the same results.
Is there anything I'm missing in terms of setting up the chip beyond using FT_Prog, like an initialization procedure or setting registers or something? Or do I need to somehow push the byte I write to the front of the queue or something before it can be read? Anybody seen anything like this before?
Given that nothing I've tried has affected the results, I'm either missing a key part of the process or something is wrong with their driver or this version of the chip. It's been 3 weeks, I'm out of ideas, and my hair is patchy from ripping out large chunks. Please save my hair.
Check with an oscilloscope that your I2C master is actually providing a clock to your slave (the FT201X). Try exercising only the I2C (strip out the GPIO handshaking) and see whether you can isolate the problem that way. I assume you are already very familiar with the FT201X datasheet. Good luck!
Check the latency timer setting. It's described in this document: https://www.ftdichip.com/Support/Documents/AppNotes/AN232B-04_DataLatencyFlow.pdf. Section 3.3 describes a scenario in which no data is made available to the application at all.
“While the host controller is waiting for one of the above conditions to occur, NO data is received by our driver and hence the user's application. The data, if there is any, is only finally transferred after one of the above conditions has occurred.”
You can use the latency timer to work around this, if you're hitting it. Try setting it to 1 ms, its lowest value. If your data has a delimiter character, consider setting that as an event character; you may get even better results.
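As a rough sketch of what that might look like with the FTD2XX_NET wrapper used in the question (I am assuming it exposes SetLatency and SetCharacters; verify against your version of the wrapper):

// Hypothetical sketch: shorten the latency timer and optionally set an event character.
// Assumes the FTD2XX_NET wrapper provides SetLatency() and SetCharacters(); check your version.
ftStatus = connectedUSBDevice.SetLatency(1);   // 1 ms, the minimum latency timer value
if (ftStatus != FTDI.FT_STATUS.FT_OK)
{
    Trace.WriteLine("SetLatency failed: " + ftStatus);
}

// If your protocol has a delimiter byte, making it the event character causes the chip
// to flush its buffer to the host as soon as that byte arrives.
// ftStatus = connectedUSBDevice.SetCharacters(0x0A, true, 0x00, false);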
Did this issue ever get resolved?
I'm experiencing the same problem with an FT200X, except that the "bytes available" function (0x0C) returns the correct byte count sent from the host PC, but I can't read the actual bytes using the read procedure described in the datasheet.
I also have several other I2C devices on the bus, all working fine.

How solid is the Mono SerialPort class?

I have an application that, among other things, uses the SerialPort to communicate with a Digi XBee coordinator radio.
The code for this works rock solid on the desktop under .NET.
Under Mono running on a Quark board and WindRiver Linux, I get about a 99% failure rate when attempting to receive and decode messages from other radios in the network due to checksum validation errors.
Things I have tested:
I'm using polling for the serial port, not events, since event-driven serial is not supported in Mono. So the problem is not event related.
The default USB Coordinator uses an FTDI chipset, but I swapped out to use a proto board and a Prolific USB to serial converter and I see the same failure rate. I think this eliminates the FTDI driver as the problem.
I changed the code to never try duplex communication. It's either sending or receiving. Same errors.
I changed the code to read one byte at a time instead of in blocks sized by the size identifier in the incoming packet. Same errors.
I see this with a variety of remote devices (smart plug, wall router, LTH), so it's not remote-device specific.
The error occurs with solicited or unsolicited messages coming from other devices.
I looked at some of the raw packets that fail a checksum and manual calculation gets the same result, so the checksum calculation itself is right.
Looking at the data I see what appear to be packet headers mid-packet (i.e. inside the length indicated in the packet header). This makes me think that I'm "missing" some bytes, causing subsequent packet data to be getting read into earlier packets.
Again, this works fine on the desktop, but for completeness, this is the core of the receiver code (with error checking removed for brevity):
do
{
    byte[] buffer;

    // find the packet start
    byte delimiter = 0;
    do
    {
        delimiter = (byte)m_port.ReadByte();
    } while (delimiter != PACKET_DELIMITER);

    // read the 2-byte length field
    int read = 0;
    while (read < 2)
    {
        read += m_port.Read(lengthBuffer, read, 2 - read);
    }
    var length = lengthBuffer.NetworkToHostUShort(0);

    // get the packet data
    buffer = new byte[length + 4];
    buffer[0] = PACKET_DELIMITER;
    buffer[1] = lengthBuffer[0];
    buffer[2] = lengthBuffer[1];
    do
    {
        read += m_port.Read(buffer, 3 + read, (buffer.Length - 3) - read);
    } while (read < (length + 1));

    m_frameQueue.Enqueue(buffer);
    m_frameReadyEvent.Set();
} while (m_port.BytesToRead > 0);
I can only think of two places where the failure might be happening - the Mono SerialPort implementation or the WindRiver serial port driver that's sitting above the USB stack. I'm inclined to think that WindRiver has a good driver.
To add to the confusion, we're running Modbus Serial on the same device (in a different application) via Mono and that works fine for days, which somewhat vindicates Mono.
Has anyone else got any experience with the Mono SerialPort? Is it solid? Flaky? Any ideas on what could be going on here?
m_port.Read(lengthBuffer, 0, 2);
That's a bug: you have no guarantee whatsoever that you'll actually read two bytes. Getting just one is very common; serial ports are slow. You must use the return value of Read() to check how many bytes you actually got. Note how you did it right in your second usage. Beyond looping, the simple alternative is to just call ReadByte() twice.
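For illustration, a small helper along those lines (a sketch of mine, not from the original answer) that loops until exactly the requested number of bytes has arrived:

// Hypothetical helper: keep calling Read() until exactly 'count' bytes have been received.
static void ReadExact(SerialPort port, byte[] buffer, int offset, int count)
{
    int read = 0;
    while (read < count)
    {
        // Read() returns as soon as at least one byte is available (or throws on ReadTimeout),
        // so the return value must be accumulated.
        read += port.Read(buffer, offset + read, count - read);
    }
}

// Usage for the length field:
// ReadExact(m_port, lengthBuffer, 0, 2);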

Serial Port Trigger DataReceived when certain amounts of bytes received

I am trying to write a program that updates a Windows Forms UI every time new data comes in on a serial port, but I am struggling to understand how the serial port works and how I can use it the way I want.
I have an external device sending 8 bytes at 1 Hz to my serial port, and I wish to use the DataReceived event from the SerialPort class. When I debug my code, the event is triggered more or less randomly, depending on what the program is doing at the time. The code as it stands is below:
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    //byte[] rxbyte = new byte[1];
    byte[] rxbyte = new byte[8];
    byte currentbyte;

    port.Read(rxbyte, 0, port.BytesToRead);

    currentbyte = rxbyte[0];
    int channel = (currentbyte >> 6) & 3;    // 3 = binary 11; keeps the 2 bits left after the shift
    int msb_2bit = (currentbyte >> 0) & 255; // ANDs all bits in the byte

    currentbyte = rxbyte[1];
    int val = ((msb_2bit << 8) | (currentbyte << 0));

    // Extra stuff
    SetText_tmp1(val.ToString());
}
I want to have exactly 8 bytes in the receive buffer before I call the Read function, but I am not sure how to do this (I've never used the SerialPort class before), and I want to do all the data manipulation only once I have the entire 8 bytes. Is there a built-in way to raise the event only when a certain number of bytes are in the buffer? Or is there another way to obtain only 8 bytes, but not more, and leave the remaining bytes for the next invocation?
Yeah, you are not coding this correctly. You cannot predict how many bytes you are going to receive. So just don't process the received bytes until you've got them all. Like this:
private byte[] rxbyte = new byte[8];
private int rxcount = 0;

private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    rxcount += port.Read(rxbyte, rxcount, 8 - rxcount);
    if (rxcount < 8) return;
    rxcount = 0;
    // Process rxbyte content
    // ...
}
Set the ReceivedBytesThreshold property to 8. As in port.ReceivedBytesThreshold = 8;
An effective way to handle this is to add a timer to the class that ticks at maybe 9 times a second, and to eliminate the serial port event handler completely.
At each timer tick, have the code check the serial port for received bytes. If there are some, grab them from the serial port and append them to the end of a buffer maintained in the class as a data member.
When the buffer has eight or more bytes in it, the timer-tick logic takes the first 8 bytes out of the buffer and uses them to update the user interface. Any remaining bytes are moved up to the head of the buffer.
The timer-tick routine can also maintain a counter that increments each time a tick arrives and there is no data waiting at the serial port. When this counter reaches a value of, say, 3 or 4, the code resets the data buffer to empty and resets the counter to zero. When data is actually seen from the serial port, the counter is also reset to zero. The purpose of this counter mechanism is to synchronize the receive buffer with the 1 Hz data stream so the receive process does not get out of sync with which byte represents the start of the 8-byte message.
Note that this method is superior to the serial port data-received event because it keeps your program in control. I've already described the ability to synchronize with the data-stream bursts, which is not possible by simply setting the serial port's received-data threshold to a count like 8. Another advantage is that the timer-tick code can include additional handling, such as signalling a timeout if no data arrives from the serial port for, say, 2 or 3 seconds.
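As a rough sketch of the timer approach described above (the names, tick interval, and UpdateUi method are illustrative, not from the answer):

// Hypothetical sketch of the polling-timer approach using a Windows Forms timer.
private readonly System.Windows.Forms.Timer m_pollTimer = new System.Windows.Forms.Timer();
private readonly List<byte> m_rxBuffer = new List<byte>();
private int m_idleTicks = 0;

private void StartPolling()
{
    m_pollTimer.Interval = 110;       // roughly 9 ticks per second
    m_pollTimer.Tick += PollTimer_Tick;
    m_pollTimer.Start();
}

private void PollTimer_Tick(object sender, EventArgs e)
{
    int available = port.BytesToRead;
    if (available > 0)
    {
        var chunk = new byte[available];
        port.Read(chunk, 0, available);
        m_rxBuffer.AddRange(chunk);
        m_idleTicks = 0;              // Data seen: reset the idle counter
    }
    else if (++m_idleTicks >= 4)
    {
        m_rxBuffer.Clear();           // Quiet gap between 1 Hz bursts: resynchronize
        m_idleTicks = 0;
    }

    while (m_rxBuffer.Count >= 8)
    {
        byte[] message = m_rxBuffer.GetRange(0, 8).ToArray();
        m_rxBuffer.RemoveRange(0, 8); // Remaining bytes slide to the head of the buffer
        UpdateUi(message);            // Hypothetical UI-update method
    }
}

Because the Windows Forms timer ticks on the UI thread, the tick handler can update the form's controls directly without needing Invoke.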

UDP data transmission slower than TCP

I'm currently writing a prototype application in C#/.NET 4 where I need to transfer an unknown amount of data. The data is read in from a text file and then serialized into a byte array.
Now I need to implement both transmission methods, UDP and TCP. The transmission works fine both ways, but I am struggling a bit with UDP. I assumed that a UDP transfer would be much faster than TCP, but in fact my tests showed that the UDP transfer is about 7 to 8 times slower.
I tested the transmission with a 12-megabyte file: the TCP transfer took about 1 second, whereas the UDP transfer took about 7 seconds.
In the application I use plain sockets to transmit the data. Since a UDP datagram only allows a maximum of 65,535 bytes per message, I split the serialized byte array of the file into several parts, each the size of the socket's SendBufferSize, and then transfer each part using the Socket.Send() method.
Here is the code for the Sender part.
while (startOffset < data.Length)
{
    if ((startOffset + payloadSize) > data.Length)
    {
        payloadSize = data.Length - startOffset;
    }

    byte[] subMessageBytes = new byte[payloadSize + 16];
    byte[] messagePrefix = new UdpMessagePrefix(data.Length, payloadSize, messageCount, messageId).ToByteArray();

    Buffer.BlockCopy(messagePrefix, 0, subMessageBytes, 0, 16);
    Buffer.BlockCopy(data, startOffset, subMessageBytes, messageOffset, payloadSize);

    messageId++;
    startOffset += payloadSize;
    udpClient.Send(subMessageBytes, subMessageBytes.Length);
    messages.Add(subMessageBytes);
}
This code simply copies the next part to be sent into a byte array and then calls the Send method on the socket. My first guess was that the splitting/copying of the byte arrays was slowing down the performance, but I isolated and tested the splitting code and it took only a few milliseconds, so that was not the problem.
int receivedMessageCount = 1;
Dictionary<int, byte[]> receivedMessages = new Dictionary<int, byte[]>();

while (receivedMessageCount != totalMessageCount)
{
    byte[] data = udpClient.Receive(ref remoteIpEndPoint);
    UdpMessagePrefix p = UdpMessagePrefix.FromByteArray(data);
    receivedMessages.Add(p.MessageId, data);
    //Console.WriteLine("Received packet: " + receivedMessageCount + " (ID: " + p.MessageId + ")");
    receivedMessageCount++;
    //Console.WriteLine("ReceivedMessageCount: " + receivedMessageCount);
}
Console.WriteLine("Done...");
return receivedMessages;
This is the server-side code where I receive the UDP messages. Each message has a few bytes as a prefix in which the total number of messages and the size are stored. So I simply call udpClient.Receive in a loop until I have received the number of messages specified in the prefix.
My assumption here is that I may not have implemented the UDP transmission code "efficiently" enough... Maybe one of you already sees a problem in the code snippets, or has some other suggestion or hint as to why my UDP transmission is slower than TCP.
Thanks in advance!
While a UDP datagram can be up to 64 KB, the actual wire frames are usually 1500 bytes (the normal Ethernet MTU). That also has to fit an IP header of at least 20 bytes and a UDP header of 8 bytes, leaving you with 1472 bytes of usable payload.
What you are seeing is the result of your OS network stack fragmenting the UDP datagrams on the sender side and then reassembling them on the receiver side. That takes time, hence your results.
TCP, on the other hand, does its own packetization and tries to find the path MTU, so it is more efficient in this case.
Limit your data chunks to 1472 bytes and measure again.
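Concretely, with the 16-byte UdpMessagePrefix from your sender code, that means capping the file payload per datagram at 1472 - 16 = 1456 bytes. A sketch of the sizing (the constant names are mine):

// Hypothetical sizing: keep each datagram within one Ethernet frame to avoid IP fragmentation.
const int EthernetMtu   = 1500;   // typical Ethernet MTU
const int IpHeaderSize  = 20;     // minimum IPv4 header
const int UdpHeaderSize = 8;      // UDP header
const int PrefixSize    = 16;     // UdpMessagePrefix from the question

const int MaxDatagram = EthernetMtu - IpHeaderSize - UdpHeaderSize; // 1472 bytes of UDP payload
const int MaxPayload  = MaxDatagram - PrefixSize;                   // 1456 bytes of file data

int payloadSize = MaxPayload;     // use this instead of the socket's SendBufferSize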
I think you should measure CPU usage and network throughput for the duration of the test.
If the CPU is pegged, that is your problem: turn on a profiler.
If the network (cable) is pegged, that is a different class of problem. I wouldn't know what to do about it ;-)
If neither is pegged, run a profiler and see where most of the wall-clock time is spent. There must be some waiting going on.
If you don't have a profiler, just hit break 10 times in the debugger and see where it stops most often.
Edit: My response to your measurement: we know that 99% of all execution time is spent receiving data, but we don't yet know whether the CPU is busy. Look in Task Manager and see which process is busy.
My guess is that it is the System process. This is the Windows kernel, and probably the UDP component of it.
This might have to do with packet fragmentation. IP packets have a certain maximum size, like 1472 bytes. Your UDP packets are being fragmented and reassembled on the receiving machine. I am surprised that this takes so much CPU time.
Try sending packets with a total size of 1000 and of 1472 (try both!) and report the results.
