I have an application that, among other things, uses the SerialPort to communicate with a Digi XBee coordinator radio.
The code for this works rock solid on the desktop under .NET.
Under Mono running on a Quark board and WindRiver Linux, I get about a 99% failure rate when attempting to receive and decode messages from other radios in the network due to checksum validation errors.
Things I have tested:
I'm using polling for the serial port, not events, since event-driven serial is not supported in Mono. So the problem is not event related.
The default USB Coordinator uses an FTDI chipset, but I swapped out to use a proto board and a Prolific USB to serial converter and I see the same failure rate. I think this eliminates the FTDI driver as the problem.
I changed the code to never try duplex communication. It's either sending or receiving. Same errors.
I changed the code to read one byte at a time instead of in blocks sized by the size identifier in the incoming packet. Same errors.
I see this with a variety of remote devices (smart plug, wall router, LTH), so it's not remote-device specific.
The error occurs with solicited or unsolicited messages coming from other devices.
I looked at some of the raw packets that fail a checksum and manual calculation gets the same result, so the checksum calculation itself is right.
Looking at the data, I see what appear to be packet headers mid-packet (i.e. inside the length indicated in the packet header). This makes me think I'm "missing" some bytes, causing subsequent packet data to be read into earlier packets.
Again, this works fine on the desktop, but for completeness, this is the core of the receiver code (with error checking removed for brevity):
do
{
    // find the packet start delimiter
    byte @byte = 0;
    do
    {
        @byte = (byte)m_port.ReadByte();
    } while (@byte != PACKET_DELIMITER);

    // read the two-byte, big-endian length field; Read() may return
    // fewer bytes than requested, so loop until we have both
    var lengthBuffer = new byte[2];
    int read = 0;
    while (read < 2)
    {
        read += m_port.Read(lengthBuffer, read, 2 - read);
    }
    var length = lengthBuffer.NetworkToHostUShort(0);

    // get the packet data: delimiter + length field + payload + checksum
    var buffer = new byte[length + 4];
    buffer[0] = PACKET_DELIMITER;
    buffer[1] = lengthBuffer[0];
    buffer[2] = lengthBuffer[1];

    read = 0;
    do
    {
        read += m_port.Read(buffer, 3 + read, (buffer.Length - 3) - read);
    } while (read < (length + 1));

    m_frameQueue.Enqueue(buffer);
    m_frameReadyEvent.Set();
} while (m_port.BytesToRead > 0);
I can only think of two places where the failure might be happening - the Mono SerialPort implementation or the WindRiver serial port driver that's sitting above the USB stack. I'm inclined to think that WindRiver has a good driver.
To add to the confusion, we're running Modbus Serial on the same device (in a different application) via Mono and that works fine for days, which somewhat vindicates Mono.
Has anyone else got any experience with the Mono SerialPort? Is it solid? Flaky? Any ideas on what could be going on here?
m_port.Read(lengthBuffer, 0, 2);
That's a bug; you have no guarantee whatsoever that you'll actually read two bytes. Getting just one is very common, since serial ports are slow. You must use the return value of Read() to check how many bytes you actually got. Note how you did it right in your second usage. Beyond looping, the simple alternative is to just call ReadByte() twice.
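For illustration, a minimal sketch of both approaches, assuming a SerialPort field named m_port as in the question:

// Option 1: loop, using Read()'s return value, until both bytes arrive.
var lengthBuffer = new byte[2];
int read = 0;
while (read < 2)
{
    read += m_port.Read(lengthBuffer, read, 2 - read);
}

// Option 2: ReadByte() blocks until a single byte is available (or the
// read times out), so two calls are guaranteed to yield two bytes.
lengthBuffer[0] = (byte)m_port.ReadByte();
lengthBuffer[1] = (byte)m_port.ReadByte();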
I have a project using the FTDI FT201X as a USB-to-I2C slave, and the I2C master is an AVR microcontroller. I'm using C# WPF on .NET Core 3.1 on a Windows 10 machine. Basically, everything with the FTDI chip works fine except that I can't successfully get data sent from the PC to the FTDI chip, no matter what I try. The D2XX Write function says it was successful and returns no error, but there is never any data in the buffer when I try to read.
I've since written a small test program in an attempt to isolate the issue, but the problem remains. Basically, when a button is clicked we open the device by serial number, write a command to the device's buffers, handshake with the AVR to let it know to read, and then wait for the AVR to drive a handshake pin low, meaning it has received the data.
using System.Diagnostics;
using System.Threading.Tasks;
using FTD2XX_NET; // FTDI's managed D2XX wrapper

public class USBLibrary
{
    byte targetDeviceCount = 0;
    FTDI.FT_STATUS ftStatus = FTDI.FT_STATUS.FT_OK;
    public FTDI connectedUSBDevice;

    // Called from button click event
    public void ConnectUSB()
    {
        bool isOK = true;
        byte numOfBytes = 1;
        uint bytesWritten = 0;
        bool usbInPinIsHigh = false; // Tracks USB In Pin
        byte lowMask = 0b00010000;   // CBUS 0 is output (4-7), all pins low (0-3) (Default Setting)
        byte highMask = 0b00010001;  // CBUS 0 is output (4-7), CBUS 3 is high
        byte inPinMask = 0b00001000; // AND with pin states to get input pin value (Bus3)
        byte pinStates = 0;          // Used to get the current pin values
        double timeout = 0;

        // Create new instance of the FTDI device class
        connectedUSBDevice = new FTDI();

        // Open the device by its serial number
        ftStatus = connectedUSBDevice.OpenBySerialNumber("P00001");

        /*** Write to Device ***/
        byte[] firmwareCmd = new byte[numOfBytes];
        firmwareCmd[0] = 128; // 128 is Get Firmware Command
        // firmwareCmd[1] = 61; // Just Testing

        // Write Firmware Command to Tx buffer
        ftStatus = connectedUSBDevice.Write(firmwareCmd, numOfBytes, ref bytesWritten);
        Trace.WriteLine(bytesWritten);

        // Handshake with Device
        isOK = DeviceHandshake(lowMask, highMask, inPinMask);

        // Check if handshake failed
        if (isOK == false)
        {
            return;
        }

        Task.Delay(10).Wait(); // note: Task.Delay without Wait()/await would not actually pause here

        // Wait until message is sent
        while ((usbInPinIsHigh == false) && (timeout <= 1000))
        {
            Task.Delay(1).Wait();
            // Check for USB In pin to go high. Signals FW transfer is complete and to retrieve.
            ftStatus = connectedUSBDevice.GetPinStates(ref pinStates);
            // Is input pin high or low?
            if ((pinStates & inPinMask) == inPinMask) // In pin high
            {
                usbInPinIsHigh = true; // Means uC finished sending data
            }
            timeout++;
        }

        // TEST: displays timeout amount for testing
        Trace.WriteLine("Timeout=" + timeout);
        ftStatus = connectedUSBDevice.Close();
    }
}
NOTE: For this code, I've taken out a lot of the error checking code for clarity. Also, the handshake code is not shown because it shouldn't be relevant: raise output pin, listen for AVR to raise output pin, lower output pin, listen for AVR to lower output pin.
On the AVR side, we simply poll for the FT201X's pin to go high and then handshake with the chip. Then we simply read. The read function always returns 0.
I doubt the problem is with I2C, as there are 3 IO expander chips controlling LEDs and buttons on the same bus, and we can read and write to those fine. Further, the FT chip has a function called Get USB State where you can check the device's status by sending the command and reading the result via I2C. When I do this, I always get back the correct 0x03 "Configured" state. So we can read from the chip via I2C.
There's also a function that will return the # of bytes in the buffer waiting to be read...when I do this, it always says 0 bytes.
And for good measure I replaced the chip with a new one in case it was bad and again we had the same results.
Is there anything I'm missing in terms of setting up the chip beyond using FT_Prog, like an initialization procedure or registers that need to be set? Or do I need to somehow push the byte I write to the front of the queue before it can be read? Has anybody seen anything like this before?
Given that nothing I've tried has affected the results, I'm either missing a key part of the process or something is wrong with their driver or this version of the chip. It's been 3 weeks, I'm out of ideas, and my hair is patchy from ripping out large chunks. Please save my hair.
Check with an oscilloscope that your I2C master provides a clock to your slave (FT201X). Try controlling only the I2C (strip out the GPIO handshaking) and see if you can isolate the problem that way. I assume you are already very familiar with the FT201X datasheet. Good luck!
Check the latency timer setting. It's described in this document: https://www.ftdichip.com/Support/Documents/AppNotes/AN232B-04_DataLatencyFlow.pdf. Section 3.3 describes a scenario in which no data will be made available to the app.
“While the host controller is waiting for one of the above conditions to occur, NO data is received by our driver and hence the user's application. The data, if there is any, is only finally transferred after one of the above conditions has occurred.”
You can use the latency timer to work around it, if you’re hitting this. Try setting it to 1ms, its lowest value. If your data has a delimiter character, consider setting that as an event character and you might get even better results.
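For reference, a minimal sketch of what that might look like with FTDI's FTD2XX_NET managed wrapper; the SetLatency and SetCharacters calls and the event character chosen here are assumptions, so verify them against the wrapper version you have installed:

// Sketch: lower the latency timer and set an event character.
FTDI device = new FTDI();
FTDI.FT_STATUS status = device.OpenBySerialNumber("P00001");

// Latency timer: 1 ms is the minimum (the default is 16 ms).
status = device.SetLatency(1);

// Optional: flush buffered data to the host as soon as a delimiter
// byte is seen (0x0A here is just an example delimiter).
status = device.SetCharacters(0x0A, true, 0, false);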
Did this issue ever get resolved?
I'm experiencing the same issue with an FT200X, except that the "bytes available" function (0x0C) returns the correct byte count sent from the host PC; I still can't read the actual bytes using the read procedure described in the datasheet.
I also have several other I2C devices on the bus, all working fine.
I've run into a problem that googling can't seem to solve. To keep it simple: I have a client written in C# and a server running on Linux written in C. The client calls Send(buffer) in a loop 100 times. The problem is that the server receives only a dozen of them. If I put a big enough sleep in the loop, everything turns out fine. The buffer is small, about 30 bytes. I read about Nagle's algorithm and delayed ACK, but it doesn't answer my problem.
for (int i = 0; i < 100; i++)
{
    try
    {
        client.Send(oneBuffer, 0, oneBuffer.Length, SocketFlags.None);
    }
    catch (SocketException socE)
    {
        if ((socE.SocketErrorCode == SocketError.WouldBlock)
            || (socE.SocketErrorCode == SocketError.NoBufferSpaceAvailable)
            || (socE.SocketErrorCode == SocketError.IOPending))
        {
            Console.WriteLine("Never happens :(");
        }
    }
    Thread.Sleep(100); // problem solver - but why??
}
It looks like the send buffer gets full and rejects data until it empties again, in both blocking and non-blocking mode. Even better, I never get any exception!? I would expect one of those exceptions to be raised, but nothing. :( Any ideas? Thanks in advance.
TCP is stream-oriented. This means that recv can read any number of bytes between one and the total number of bytes outstanding (sent but not yet read). "Messages" do not exist. Sent buffers can be split or merged.
There is no way to get message behavior from TCP. There is no way to make recv read at least N bytes. Message semantics are constructed by the application protocol. Often, by using fixed-size messages or a length prefix. You can read at least N bytes by doing a read loop.
Remove that assumption from your code.
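For illustration, a minimal sketch of length-prefixed framing over a NetworkStream; the two-byte prefix and the helper names are examples, not part of the question's code:

using System.IO;
using System.Net.Sockets;

// Send: prefix each message with its length as a 2-byte big-endian value.
static void SendMessage(NetworkStream stream, byte[] payload)
{
    byte[] prefix = { (byte)(payload.Length >> 8), (byte)(payload.Length & 0xFF) };
    stream.Write(prefix, 0, 2);
    stream.Write(payload, 0, payload.Length);
}

// Receive: read exactly 'count' bytes by looping; Read() may return fewer.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0) throw new EndOfStreamException("connection closed");
        read += n;
    }
    return buffer;
}

static byte[] ReceiveMessage(NetworkStream stream)
{
    byte[] prefix = ReadExactly(stream, 2);
    int length = (prefix[0] << 8) | prefix[1];
    return ReadExactly(stream, length);
}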
I think this issue is due to the Nagle algorithm:
The Nagle algorithm is designed to reduce network traffic by causing the socket to buffer small packets and then combine and send them in one packet under certain circumstances. A TCP packet consists of 40 bytes of header plus the data being sent. When small packets of data are sent with TCP, the overhead resulting from the TCP header can become a significant part of the network traffic. On heavily loaded networks, the congestion resulting from this overhead can result in lost datagrams and retransmissions, as well as excessive propagation time caused by congestion. The Nagle algorithm inhibits the sending of new TCP segments when new outgoing data arrives from the user if any previously transmitted data on the connection remains unacknowledged.
Calling the client.Send function doesn't mean a TCP segment will be sent.
In your case, as the buffers are small, the Nagle algorithm will regroup them into larger segments. Check on the server side that the dozen buffers received contain all of the data.
When you add a Thread.Sleep(100), you will receive 100 packets on the server side because the Nagle algorithm won't wait any longer for further data.
If you really need low latency in your application, you can explicitly disable the Nagle algorithm for your TcpClient: set the NoDelay property to true. Add this line at the beginning of your code:
client.NoDelay = true;
I was naive to think there was a problem with the TCP stack. It was my server code. Somewhere in the data manipulation I used the strncpy() function on the buffer that stores messages. Every message contained \0 at the end. strncpy copied only the first message (the first string) out of the buffer, regardless of the count that was given (the buffer length). That resulted in me thinking I had lost messages.
When I used the delay between send() calls on the client, messages didn't get buffered, so strncpy() worked on a buffer with one message and everything went smoothly. That "phenomenon" led me into thinking that the rate of Send calls was causing my problems.
Again, thanks for the help; your comments made me wonder. :)
I have a serial port app that reads a weighing machine.
public void Read()
{
    while (Puerto.BytesToRead > 0)
    {
        try
        {
            string inputData = Puerto.ReadExisting();
            dataReceived = inputData; // note: overwrites any previously read chunk
        }
        catch (TimeoutException) { }
    }
}
the returned string looks like this:
It has other strange chars in it. How can I parse it or get clean data out of it? All I need is the 0.52lb.
I have no idea what weighing machine it is or what the serial port specs are, but if it is a black box to you too, then check the following:
- check if you have a technical spec that explains what comes out of the RS232 port
- take several (10?) samples with one sample weight and see if the same number of bytes is delivered each time
- if you see the number of bytes staying constant (barring discrepancies from the 0.52lb text changing to 0.5lb once in a while), it is likely that the garbage following the weight is additional binary data
- if not, and you see the weight (text) at the same offset each time, you can just scrape the output (see the sketch below)
this is complete reverse engineering; I suggest going after the technical spec and doing more insightful data handling though.
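If scraping is the only option, a minimal sketch, assuming the weight always appears as a decimal number followed by "lb" somewhere in the received text (the pattern is a guess based on the sample value "0.52lb"; adjust it to your machine's actual output):

using System.Text.RegularExpressions;

// Pull the first "<number>lb" token out of the raw serial data.
static string ExtractWeight(string raw)
{
    Match m = Regex.Match(raw, @"\d+(\.\d+)?\s*lb");
    return m.Success ? m.Value : null;
}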
This could be anything - a bug in the weighing machine, some sort of hardware issue, or a problem in how the serial port is configured. I would suspect a configuration problem. Make sure all the settings are correct (BaudRate, Handshake, Parity, StopBits). Also, try connecting to the same serial port device using another program (e.g. see http://helpdeskgeek.com/windows-7/windows-7-hyperterminal/ ) and see if you see the same garbage data.
I'm currently writing a prototype application in C#/.NET 4 where I need to transfer an unknown amount of data. The data is read in from a text file and then serialized into a byte array.
Now I need to implement both transmission methods, UDP and TCP. The transmission works fine both ways, but I'm struggling with UDP. I assumed that transmission over UDP would be much faster than over TCP, but in fact my tests showed that the UDP transmission is about 7 to 8 times slower.
I tested the transmission with a 12-megabyte file; the TCP transmission took about 1 second, whereas the UDP transmission took about 7 seconds.
In the application I use plain sockets to transmit the data. Since UDP only allows a maximum of 65,535 bytes per message, I split the serialized byte array of the file into several parts, where each part has the size of the socket's SendBufferSize, and then I transfer each part using the Socket.Send() method.
Here is the code for the Sender part.
while (startOffset < data.Length)
{
    if ((startOffset + payloadSize) > data.Length)
    {
        payloadSize = data.Length - startOffset;
    }

    // 16-byte prefix followed by the payload chunk
    byte[] subMessageBytes = new byte[payloadSize + 16];
    byte[] messagePrefix = new UdpMessagePrefix(data.Length, payloadSize, messageCount, messageId).ToByteArray();
    Buffer.BlockCopy(messagePrefix, 0, subMessageBytes, 0, 16);
    Buffer.BlockCopy(data, startOffset, subMessageBytes, 16, payloadSize);

    messageId++;
    startOffset += payloadSize;
    udpClient.Send(subMessageBytes, subMessageBytes.Length);
    messages.Add(subMessageBytes);
}
This code simply copies the next part to be sent into a byte array and then calls the Send method on the socket. My first guess was that the splitting/copying of the byte arrays was slowing down the performance, but I isolated and tested the splitting code and it took only a few milliseconds, so that was not causing the problem.
int receivedMessageCount = 1;
Dictionary<int, byte[]> receivedMessages = new Dictionary<int, byte[]>();
while (receivedMessageCount != totalMessageCount)
{
    byte[] data = udpClient.Receive(ref remoteIpEndPoint);
    UdpMessagePrefix p = UdpMessagePrefix.FromByteArray(data);
    receivedMessages.Add(p.MessageId, data);
    //Console.WriteLine("Received packet: " + receivedMessageCount + " (ID: " + p.MessageId + ")");
    receivedMessageCount++;
    //Console.WriteLine("ReceivedMessageCount: " + receivedMessageCount);
}
Console.WriteLine("Done...");
return receivedMessages;
This is the server-side code where I receive the UDP messages. Each message has a prefix of a few bytes where the total number of messages and the size are stored. So I simply call socket.Receive in a loop until I have received the number of messages specified in the prefix.
My assumption here is that I may not have implemented the UDP transmission code "efficiently" enough... Maybe one of you already sees a problem in the code snippets, or has any other suggestion or hint as to why my UDP transmission is slower than TCP.
Thanks in advance!
While a UDP datagram can be up to 64K, the actual wire frames are usually 1500 bytes (the normal Ethernet MTU). That also has to fit an IP header of at least 20 bytes and a UDP header of 8 bytes, leaving you with 1472 bytes of usable payload.
What you are seeing is the result of your OS network stack fragmenting the UDP datagrams on the sender side and then re-assembling them on the receiver side. That takes time, thus your results.
TCP, on the other hand, does its own packetization and tries to find path MTU, so it's more efficient in this case.
Limit your data chunks to 1472 bytes and measure again.
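For example, with the 16-byte prefix from the question's code, the chunking arithmetic might look like this (a sketch, assuming a standard 1500-byte Ethernet MTU):

// Keep each datagram within one Ethernet frame to avoid IP fragmentation.
const int Mtu = 1500;       // typical Ethernet MTU
const int IpHeader = 20;    // minimum IPv4 header
const int UdpHeader = 8;    // UDP header
const int PrefixSize = 16;  // the question's UdpMessagePrefix

const int MaxDatagram = Mtu - IpHeader - UdpHeader;       // 1472 bytes
const int MaxPayloadPerChunk = MaxDatagram - PrefixSize;  // 1456 bytes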
I think you should measure CPU usage and network throughput for the duration of the test.
If the CPU is pegged, that is your problem: turn on a profiler.
If the network (cable) is pegged, that is a different class of problem. I wouldn't know what to do about it ;-)
If neither is pegged, run a profiler and see where most wall-clock time is spent. There must be some waiting going on.
If you don't have a profiler, just hit break 10 times in the debugger and see where it stops most often.
Edit: My response to your measurement: we know that 99% of all execution time is spent receiving data, but we don't know yet whether the CPU is busy. Look in Task Manager and see which process is busy.
My guess is it's the System process. That is the Windows kernel, and probably the UDP component of it.
This might have to do with packet fragmentation. IP packets have a certain maximum size, around 1500 bytes on Ethernet, which leaves about 1472 bytes for UDP payload. Your UDP packets are being fragmented and reassembled on the receiving machine. I am surprised that is taking so much CPU time, though.
Try sending packets with a total size of 1000 and of 1472 (try both!) and report the results.
This is code I'm using to test a webserver on an embedded product that hasn't been behaving well when an HTTP request comes in fragmented across multiple TCP packets:
/* This is all within a loop that cycles size_chunk up to the size of the whole
 * test request, in order to test all possible fragment sizes. */
TcpClient client_sensor = new TcpClient(NAME_MODULE, 80);
client_sensor.Client.NoDelay = true; /* SHOULD force the TCP socket to send the packets in exactly the chunks we tell it to, rather than buffering the output. */
/* I have also tried just "client_sensor.NoDelay = true", with no luck. */
client_sensor.Client.SendBufferSize = size_chunk; /* Added in a desperate attempt to fix the problem before posting my shameful ignorance on stackoverflow. */
for (int j = 0; j < TEST_HEADERS.Length; j += size_chunk)
{
    String request_fragment = TEST_HEADERS.Substring(j, (TEST_HEADERS.Length < j + size_chunk) ? (TEST_HEADERS.Length - j) : size_chunk);
    client_sensor.Client.Send(Encoding.ASCII.GetBytes(request_fragment));
    client_sensor.GetStream().Flush();
}
/* Test stuff goes here, check that the embedded web server responded correctly, etc. */
Looking at Wireshark, I see only one TCP packet go out, containing the entire test header, rather than the roughly (header length / chunk size) packets I expect. I have used NoDelay to turn off the Nagle algorithm before, and it usually works just like I expect it to. The online documentation for NoDelay at http://msdn.microsoft.com/en-us/library/system.net.sockets.tcpclient.nodelay%28v=vs.90%29.aspx definitely states "Sends data immediately upon calling NetworkStream.Write" in its associated code sample, so I think I've been using it correctly all this time.
This happens whether or not I step through the code. Is the .NET runtime optimizing away my packet fragmentation?
I'm running x64 Windows 7, .NET Framework 3.5, Visual Studio 2010.
TcpClient.NoDelay does not mean that blocks of bytes will not be aggregated into a single packet. It means that blocks of bytes will not be delayed in order to aggregate into a single packet.
If you want to force a packet boundary, use Stream.Flush.
Grr. It was my antivirus getting in the way. A recent update caused it to start interfering with the sending of HTTP requests to port 80 by buffering all output until the final "\r\n\r\n" marker was seen, regardless of how the OS was trying to handle the outbound TCP traffic. I should have checked that first, but I've been using this same antivirus program for years and never had this problem before, so I didn't even think of it. Everything works just the way it used to when I disable the antivirus.
The MSDN docs show setting the TcpClient.NoDelay = true, not the TcpClient.Client.NoDelay property. Did you try that?
Your test code is just fine (I assume you are sending valid HTTP). What you should check is why the TCP server is not behaving well when reading from the TCP connection. TCP is a stream protocol; that means you cannot make any assumptions about the size of data packets unless you explicitly specify those sizes in your data protocol. For instance, you can prefix all your data packets with a fixed-size (2-byte) prefix that contains the size of the data to be received.
When reading HTTP, the read is usually done in several phases: read the HTTP request line, read the HTTP headers, read the HTTP content (if applicable). The first two parts do not have any size specifications, but they do have a special delimiter (CRLF).
Here is some info on how HTTP can be read and parsed.
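As a rough illustration of the delimiter-based phase, a sketch of reading from a socket until the blank line that ends the HTTP headers; the helper name is made up for the example, and a real server would buffer and scan far more efficiently than this byte-at-a-time loop:

using System.IO;
using System.Net.Sockets;
using System.Text;

// Read one byte at a time until the CRLFCRLF that terminates the header block.
static string ReadHttpHeaders(NetworkStream stream)
{
    var sb = new StringBuilder();
    while (!sb.ToString().EndsWith("\r\n\r\n"))
    {
        int b = stream.ReadByte();
        if (b < 0) throw new EndOfStreamException("connection closed before headers ended");
        sb.Append((char)b);
    }
    return sb.ToString();
}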