I wrote a small C# app that reads a series of numbers sent by an Arduino board over a COM port.
Question:
If the Arduino sends a value every 500 ms but my C# program reads a value only every 1 s, doesn't the C# program fall behind the Arduino? If so, is the data sent from the Arduino stored in a buffer, or is it simply discarded?
[Edit]
Below is the code I use to read from the COM port:
System.Windows.Forms.Timer tCOM;
...
tCOM.Interval = 1000;
tCOM.Tick += new System.EventHandler(this.timer1_Tick);
...
SerialPort port = new SerialPort();
port.PortName = defaultPortName;
port.BaudRate = 9600;
port.Open();
...
private void timer1_Tick(object sender, EventArgs e)
{
    log("Time to read from COM");
    // read a string from the serial port
    string l;
    if ((l = port.ReadLine()) != null)
    {
        ...
    }
}
Serial port communications normally require flow control: a way for the transmitter to know that the receiver is ready to receive data. This is often overlooked, especially in Arduino projects, and it tends to work out okay because serial ports are very slow and modern machines are very fast compared to the kind of machines that first started using serial ports.
But clearly, in your scenario, something is going to go bang! after a while. Your Arduino will cause a buffer overflow condition when the receive buffer in the PC fills to capacity, and that causes irretrievable loss of data. Listening for a notification of this condition is something else that's often skipped: register an event handler for the SerialPort.ErrorReceived event. You'd expect a SerialError.Overrun notification in this case. There's no clean way to recover from this condition; a full protocol reset is required.
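A minimal sketch of wiring up that notification (the handler body is illustrative, not a recovery recipe):

using System;
using System.IO.Ports;

void WatchForOverrun(SerialPort port)
{
    port.ErrorReceived += delegate(object sender, SerialErrorReceivedEventArgs e)
    {
        if (e.EventType == SerialError.Overrun)
        {
            // The driver dropped incoming bytes; the stream can no longer
            // be trusted. The only clean recovery is a full protocol reset.
            Console.WriteLine("Overrun: received data was lost");
        }
    };
}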
There are two basic ways to implement flow control on serial ports and avoid this error. The most common is hardware handshaking, using the RTS (Request To Send) and CTS (Clear To Send) signals, provided by Handshake.RequestToSend. The PC automatically turns the RTS signal off when its receive buffer gets too full; your Arduino must pay attention to the CTS signal and not send anything while it is off.
The second way is software handshaking, where the receiver sends a special byte to indicate whether it is ready to receive data. Provided by Handshake.XOnXOff, which uses the standard control characters Xon (Ctrl+Q) and Xoff (Ctrl+S). This is suitable only when the communication protocol doesn't otherwise use these control codes in its data; in other words, when you transmit text instead of binary data.
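For reference, either mode is a one-line property assignment before opening the port; a sketch showing both ("COM3" is a placeholder port name, pick one mode):

SerialPort port = new SerialPort("COM3", 9600);
// Hardware flow control: the PC drops RTS when its buffer fills up;
// the device must watch CTS and pause while it is off.
port.Handshake = Handshake.RequestToSend;
// ...or software flow control (text protocols only):
// port.Handshake = Handshake.XOnXOff;
port.Open();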
The third way is a completely different, and also very common, approach: make the device send something only when the PC asks for it, a master-slave protocol. Having enough room in the receive buffer for the response is then easy to guarantee. You define specific commands in your protocol that the PC sends to query for a specific data item.
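A sketch of what the PC side of such an exchange might look like (the command string is made up; your protocol defines its own):

// Master-slave sketch: the PC asks, the device answers, so the reply
// size is bounded and the receive buffer can never silently overflow.
string QueryDevice(SerialPort port, string command)
{
    port.WriteLine(command);   // e.g. a hypothetical "GETTEMP"
    return port.ReadLine();    // the device replies with a single line
}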
When you open a serial port for input, a buffer (queue) is automatically created to hold incoming data until your program reads it. This buffer is typically 4096 bytes in size (though that may vary with the version of Windows, the serial port driver, etc.).
A 4096-byte buffer is sufficient in almost all situations. At the highest standard baud rate (115200 baud) it holds more than 300 ms of data (the buffer is a FIFO: first in, first out), so as long as your program services the serial port at least three times a second, no data should be lost. In your particular case, because you read the port only once per second, you may lose data if the buffer fills before your next read.
However, in exceptional circumstances it may be useful to increase the size of the serial input buffer. Windows provides a way to request a larger buffer, but there is no guarantee that the request will be granted.
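In .NET the request goes through SerialPort.ReadBufferSize, set before the port is opened; a sketch ("COM3" and the 16 KB figure are example values):

SerialPort port = new SerialPort("COM3", 115200);
// Ask the driver for a 16 KB input buffer instead of the usual 4 KB.
// This is a request, not a guarantee; the driver may grant less.
port.ReadBufferSize = 16384;
port.Open();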
Personally, I prefer to have a continuous stream of data from the Arduino and decide in my C# app what to do with it; that way, at least I am sure I do not lose information due to limitations of the hardware involved.
Update:
Playing with Arduinos quite often, I also agree with the third option given by Hans in his answer. Basically, your app should send the Arduino a command to print out (Serial.print or Serial.println) the data you need, and then be ready to read it.
Related
I'm writing a C# program that reads data from a USB GPS logger, which shows up as a new COM port when I plug it in. I've managed to write some code that listens to the COM port and fires an event when it receives data, but I have a few problems with this approach:
The event listener is way too slow: I get only about one result per second, which takes forever if there are thousands of tracks on the logger.
Since I don't know how much data the logger contains, how do I know when to stop the event listener without losing data? I would also like to write all the data to a CSV file, but since I don't know when to stop listening, I also don't know when to call my writer function.
I actually don't understand why this happens over a COM port, since the logger already contains all the data I need. I just want to extract it all at once. Is there a way to accomplish this? Thanks in advance!
I believe there is not much you can do about this.
You can't change the behavior of the USB device, since this is a driver issue.
The reason it is recognized as a COM port is probably that the manufacturer didn't want to deal with the tedious task of writing custom drivers for the device.
So instead they used a chip that translates the data from the microchip to serial communication, emulating RS-232, which is far easier to handle. That's easier for you too: you probably wouldn't be able to read the data or interact with a custom USB device without proper documentation.
The usual baud rate for RS-232 is 9600, i.e. a minimum of 9600 bits per second.
Assuming it is an 8- or 16-bit device, that works out to 1200 or 600 integers per second.
So depending on how much data you read per result, I think 1 result/second is rather slow.
Hope this helps.
I would like to implement a rather simple function that returns the byte array read from a serial port, e.g.
byte[] o = readAllDataFromSerialPort();
Implementing the actual serial port functions is done. I use the serial port to receive some data and process that data through the DataReceived event.
sp = new SerialPort(portname, 9600, System.IO.Ports.Parity.None, 8, System.IO.Ports.StopBits.One);
sp.Handshake = Handshake.None;
sp.DataReceived += new SerialDataReceivedEventHandler(serialDataReceived);
I check the received data for a "message end" packet in order to then close the serial port, so something like:
if (data == "UA") sp.Close();
So basically, what I would like to do is wait for the closure before returning the data, so that at the top level the program doesn't progress until the data has arrived. However, I cannot wrap my head around how to implement this "waiting" in an effective and elegant way, as I'm relying on events for my data. Any hints, clues, or examples would be much appreciated.
Serial ports themselves are not open or closed; the Open and Close functions open and close a handle to the serial port driver.
While no handle to the driver is open, all input from the port is ignored.
The only way you can determine whether you have received all the data is to design a protocol that provides you with a guaranteed way to detect the end of a transmission.
You can do this in one of three ways:
Select a unique terminator that marks the end of your message (see the sketch after this list),
Include a length near the beginning of your message that indicates the amount of remaining data, or
Wait long enough (how long depends on the device and data rate) to be sure no more data is pending.
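The terminator option is the easiest to use with SerialPort, since ReadTo blocks until the marker arrives; a minimal sketch using the "UA" end package from your own example:

// Blocks until the "UA" end-of-message marker arrives, then returns
// everything received before it. Consider also setting sp.ReadTimeout
// so a dead device cannot block you forever.
string message = sp.ReadTo("UA");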
A further reason for the Open/Close metaphor is that a serial port is typically an exclusive resource: only a single process can hold the handle at a time, which prevents inadvertent, incompatible (and possibly dangerous) access to the device at the other end of the port. You should keep the port open throughout your program, to prevent the connected device from becoming inaccessible because another program opens it inappropriately.
The lack of hot-plugging facilities (and, in fact, of device identification) makes serial ports much more static, so keeping the device open should not be a problem.
You seem to favour the third option. Implement it by resetting a timer each time data is received; when the timer expires, assume the transmission is complete.
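A minimal sketch of that timer reset, assuming your serialDataReceived handler and sp field; the 500 ms interval and the OnTransferComplete name are placeholders to tune and fill in:

private System.Timers.Timer idle;

private void SetupIdleTimer()
{
    // 500 ms of silence = transmission complete; tune to your device.
    idle = new System.Timers.Timer(500);
    idle.AutoReset = false;
    idle.Elapsed += delegate { OnTransferComplete(); };  // your own completion logic
}

private void serialDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // ...append sp.ReadExisting() to your buffer here...
    idle.Stop();    // push the deadline back on every chunk received
    idle.Start();
}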
As it says in the SerialPort.Close() documentation:
The best practice for any application is to wait for some amount of time after calling the Close method before attempting to call the Open method, as the port may not be closed instantly.
There is no way to wait for it to be closed; you can call that a "bug" or "working as designed".
It is a bad practice to Open and Close a SerialPort over and over again with the same program. You should keep the SerialPort open.
If you really want to close it and will open it again later, you can add a small sleep before returning, but sleeps without a clear justification are bad practice.
I found this nice post https://stackoverflow.com/a/10210279/717559
with a nice quote:
This is the worst possible practice for "best practice" advice since it doesn't at all specify exactly how long you are supposed to wait.
Here's the scenario: I have a C# application that reads from a COM port. Most of the time I use devices with a serial adapter on a machine with serial ports. However, machines with serial ports are increasingly difficult to come by, so I have started using machines with a USB-to-serial connection.
In some cases, the C# code I have works just fine with a "true" serial connection but fails with a USB/serial connection. The data comes in fragmented: the first part of the data arrives (maybe the first 1 or 2 characters) and then nothing else.
I'm using something basic like comport.ReadExisting() to pick up the data from the port. Is this part of the problem? Are there other methods that would guarantee all the data is read into a single string?
Finally, I want to add that I've already played around with some of the USB/serial settings in Device Manager, AND the data comes in fine when using good ol' HyperTerminal... so it has to be something in the code.
With USB-serial converters, you MUST set your receive timeout, because the data can sit in the USB device for a long time where Windows doesn't know about it. Real serial ports hold data in the 16550 FIFO where the Windows driver can see it.
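For example (500 ms is an arbitrary value; pick what suits your device):

// Without a timeout, reads can appear to hang while bytes sit in the
// USB converter. A TimeoutException then tells you no data arrived in time.
comport.ReadTimeout = 500;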
Most of the time you would want to use the SerialPort.DataReceived event (http://msdn.microsoft.com/en-us/library/system.io.ports.serialport.datareceived.aspx).
This requires you to combine the chunks of data manually, since you may receive your string in parts; but once you detect the boundary of a single 'record', you can fire off processing of that record while the IO keeps receiving events.
This gives you asynchronous IO that doesn't block your threads, and thus allows more efficient handling of the data in general. Whether it helps in your specific situation I don't know, but it has helped me in the past with data reading issues, low speeds, thread pooling, and lock-ups.
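A sketch of that pattern, assuming newline-terminated records; swap in whatever boundary your device actually uses (ProcessRecord is a placeholder for your own handler):

using System.IO.Ports;
using System.Text;

StringBuilder pending = new StringBuilder();

void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    SerialPort port = (SerialPort)sender;
    pending.Append(port.ReadExisting());    // grab whatever has arrived

    string buffered = pending.ToString();
    int boundary = buffered.IndexOf('\n');  // assumed record boundary
    while (boundary >= 0)
    {
        ProcessRecord(buffered.Substring(0, boundary));  // your handler
        buffered = buffered.Substring(boundary + 1);
        boundary = buffered.IndexOf('\n');
    }
    pending.Length = 0;
    pending.Append(buffered);               // keep the incomplete tail
}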
I have written an application in C# that receives UDP packets continuously streamed from an electronic device.
The electronic device is connected directly to the PC running the application; there are no switches or other network devices in between.
I imagine the PC's network stack works as follows:
The network interface (network card) receives the data, processes it up to the transport layer, puts the packets in a buffer, and sends an interrupt to the OS.
The OS handles the interrupt at some point and empties the network interface's buffer (i.e., all packets in the buffer are drained).
The OS signals the .NET framework that data is available.
My program picks up the packets and process them.
Now, since I use UDP, I would expect to see packet loss, duplicates, etc., and I do.
The dropouts seem to occur whenever the PC does something else (I remote control the PC using Remote Desktop over a second network interface in the PC).
Can I be certain that packets received by the OS will be passed on to my application? (Intuitively I think yes, because the OS uses RAM as a buffer, so there will be plenty of space.)
If so, I suspect the OS grabs all the packets from the network interface that have accumulated since the interface raised the interrupt and puts them in RAM. If this is true, does .NET then iterate through each of these packets, invoking the callbacks in .NET and in my program associated with a packet being received?
Intuitively I would also blame the network interface for packet loss due to its buffer overflowing before the OS can empty it. Is this correct?
I also suspect that packet sniffers receive the data AFTER the OS has received it, so the data received by a packet sniffer and the data received by my program should be identical.
I have written a test program that streams 1 kB UDP packets to my own network interface at a rate of 1000 Hz. The program also receives the data and checks whether it has been received in order (the packets contain a packet number). I can conclude that they are received out of order, and that more are received out of order whenever the computer is doing something else. (I didn't expect this behavior, so I didn't write a routine to check for lost packets; I will do that tomorrow.)
To wrap up:
How can I reduce the packet loss? Is it a good idea to create a buffer in C#, or can I rely on the OS to have a buffer?
Can I set the buffer size of the OS?
Can I set the buffer size of the network interface?
Thanks.
A lot of questions; let me answer what I can (in no particular order).
The larger the packet, the higher the chance of fragmentation and the higher the number of fragments. In theory, if you lose one fragment you've lost the whole packet: the underlying layers can't reassemble it, so it's dropped.
A number of sources suggest keeping UDP packets to around 576 bytes (see http://www.faqs.org/rfcs/rfc791.html) to ensure fragmentation never occurs.
There is a fairly large buffer in the OS (4 MB by default, if my memory is correct), but it's at the kernel level and not easily viewed. However, you can adjust the ReceiveBufferSize on the Socket class to expand this value. On a 32-bit machine I would keep that value no higher than 8 MB; on 64-bit you can go somewhat higher. I think for our application we went to 64 MB.
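For example, on a UdpClient's underlying socket (the port number is a placeholder; 8 MB matches the 32-bit guidance above):

UdpClient client = new UdpClient(5000);             // placeholder port
// Ask the OS for an 8 MB kernel receive buffer; the OS may cap it.
client.Client.ReceiveBufferSize = 8 * 1024 * 1024;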
UDP does not give you any packet ordering whatsoever; if you need ordered packets, you are immediately back in TCP land (or writing UDP apps that essentially reimplement TCP/IP, which loses you the value of UDP).
Make sure your calls to BeginReceive are handled asynchronously, even if you are just dumping the data into a buffer internally. If you handle them synchronously and your receive code processes packets more slowly than they arrive, you will eventually lose data.
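A sketch of that pattern: re-arm BeginReceive immediately and queue the datagram, so the receive path never waits on your processing (the port number is a placeholder, and the worker thread that drains the queue is not shown):

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

class UdpReceiver
{
    private UdpClient client = new UdpClient(5000);   // placeholder port
    private Queue<byte[]> queue = new Queue<byte[]>();

    public void Start()
    {
        client.BeginReceive(OnReceive, null);
    }

    private void OnReceive(IAsyncResult ar)
    {
        IPEndPoint remote = null;
        byte[] datagram = client.EndReceive(ar, ref remote);

        client.BeginReceive(OnReceive, null);         // re-arm immediately

        lock (queue)
        {
            queue.Enqueue(datagram);                  // worker thread drains this
        }
    }
}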
Lastly, while tweaking the network interface sometimes helps (badly configured driver settings, for example), you will generally find it is at least as challenged as the rest of your network hardware, meaning most tweaks you make there are negated by network issues upstream.
Here's some background on what I'm trying to do:
Open a serial port from a mobile device to a Bluetooth printer.
Send an EPL/2 form to the Bluetooth printer, so that it understands how to treat the data it is about to receive.
Once the form has been received, send some data to the printer which will be printed on label stock.
Repeat step 3 as many times as necessary for each label to be printed.
Step 2 only happens the first time, since the form does not need to precede each label. My issue is that when I send the form, if I send the label data too quickly it will not print. Sometimes I get "Bluetooth Failure: Radio Non-Operational" printed on the label instead of the data I sent.
I have found a way around the issue by doing the following:
for (int attempt = 0; attempt < 3; attempt++)
{
    try
    {
        serialPort.Write(labelData);
        break;
    }
    catch (TimeoutException ex)
    {
        // Log info or display info based on ex.Message
        Thread.Sleep(3000);
    }
}
So basically, I can catch a TimeoutException and retry the write after waiting a certain amount of time (three seconds seems to work every time, but any less and it seems to throw the exception on every attempt). After three attempts I just assume something is wrong with the serial port and let the user know.
This way seems to work ok, but I'm sure there's a better way to handle this. There are a few properties in the SerialPort class that I think I need to use, but I can't really find any good documentation or examples of how to use them. I've tried playing around with some of the properties, but none of them seem to do what I'm trying to achieve.
Here's a list of the properties I have played with:
CDHolding
CtsHolding
DsrHolding
DtrEnable
Handshake
RtsEnable
I'm sure some combination of these will handle what I'm trying to do more gracefully.
I'm using C# (the 2.0 framework), a Zebra QL 220+ Bluetooth printer and a Windows Mobile 6 handheld device, if that makes any difference for solutions.
Any suggestions would be appreciated.
[UPDATE]
I should also note that the mobile device uses Bluetooth 2.0, whereas the printer only supports version 1.1. I'm assuming the speed difference is what causes the printer to lag behind in receiving the data.
Well, I've found a way to do this based on the two suggestions already given. I need to set up my serial port object as follows:
serialPort.Handshake = Handshake.RequestToSendXOnXOff;
serialPort.WriteTimeout = 10000; // Could use a lower value here.
Then I just need to do the write call:
serialPort.Write(labelData);
Since the Zebra printer supports software flow control, it sends an XOff value to the mobile device when its buffer is nearly full. The mobile device then waits for an XOn value from the printer, which effectively notifies the mobile device that it can continue transmitting.
By setting the write timeout property, I give the whole transmission a time allowance before a write timeout exception is thrown. You would still want to catch the timeout, as in the sample code in the question, but it is no longer necessary to loop three (or any arbitrary number of) times, trying the write each time, since the software flow control starts and stops the serial port transmission for you.
Flow control is the correct answer here, and it may not be present, implemented, or applicable on your Bluetooth connection.
Check out the Zebra specification and see whether the printer implements, or you can turn on, software flow control (XON/XOFF), which lets you see when the various buffers are getting full.
Further, the Bluetooth radio is unlikely to be able to transmit faster than about 250 kbps at the maximum. You might consider artificially limiting the link to 9600 bps; this gives the radio a lot of breathing room for retransmits, error correction and detection, and its own flow control.
If all else fails, the hack you're using right now isn't bad, but I'd call Zebra tech support and find out what they recommend before giving up.
-Adam
The issue is likely not with the serial port code but with the underlying Bluetooth stack. The port you're using is purely virtual, and it's unlikely that any of the handshaking is even implemented (it would be largely meaningless). CTS/RTS and DTR/DSR simply don't apply to what you're working on.
The underlying issue is that when you create the virtual port, it has to bind to the Bluetooth stack underneath and connect to the paired serial device. The port itself has no idea how long that might take, and it's probably set up to do this asynchronously (though how is purely up to the device OEM) to prevent the caller from locking up for a long period when there is no paired device or the paired device is out of range.
While your code may feel like a hack, it's probably the best, most portable way to do what you're doing.
You could use a Bluetooth stack API to check whether the device is there and alive before connecting, but there is no standardization of stack APIs, so the Widcomm and Microsoft APIs differ in how you'd do that, and Widcomm is proprietary and expensive. You'd end up with a mess of code that tries to discover the stack type, dynamically loads an appropriate verifier class, and has it call the stack to look for the device. In light of that, your simple poll seems much cleaner, and you don't have to shell out a few $k for the Widcomm SDK.