C# General Serial Port Communication

If I have a port that I need to read from quite a lot, and the returning amount of data varies from one line to many lines, what is the best way to read / sift through what I am reading?
After some research I found out what I was supposed to do.
string Xposition = "";

threeAxisPort.ReadExisting();               // flush anything left over in the buffer
threeAxisPort.WriteLine("rx");              // ask the controller for the X position
Thread.Sleep(50);                           // give it time to answer
Xposition = threeAxisPort.ReadExisting();   // read the complete reply
threeAxisPort.WriteLine("end");
What I ended up doing was clearing the port of everything using ReadExisting, waiting 50 ms so I didn't flood the port, and then calling ReadExisting again. With the additional 50 ms wait there was no leftover output from the motors, and the returned block of text was exactly what I needed. However, I will keep thinking about a more dynamic way to handle that string, because in a worst-case scenario the ReadExisting call will pick up something I don't want.
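For a more dynamic approach, one option is to let the DataReceived event accumulate output and only hand off complete lines. This is only a rough sketch, assuming the controller terminates each reply with '\n'; HandleResponseLine is a hypothetical handler you would write yourself (needs using System.Text and System.IO.Ports):

private readonly StringBuilder _rxBuffer = new StringBuilder();

private void ThreeAxisPort_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Append whatever has arrived so far.
    _rxBuffer.Append(threeAxisPort.ReadExisting());

    // Hand off complete lines; keep any partial line for the next event.
    string buffered = _rxBuffer.ToString();
    int newline;
    while ((newline = buffered.IndexOf('\n')) >= 0)
    {
        string line = buffered.Substring(0, newline).TrimEnd('\r');
        buffered = buffered.Substring(newline + 1);
        HandleResponseLine(line);   // hypothetical: route "rx" replies, ignore the rest
    }
    _rxBuffer.Clear();
    _rxBuffer.Append(buffered);
}

// Wire it up once after opening the port:
// threeAxisPort.DataReceived += ThreeAxisPort_DataReceived;

That way nothing is thrown away, and a stray line from the motors can be filtered out instead of being swallowed by a timed ReadExisting.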

Related

C#: reading serial port - BytesToRead

I am modifying a C# based UI that interfaces to a small PIC microcontroller tester device.
The UI consists of a couple of buttons that initiate a test by sending a "command" to the microcontroller via a serial port connection. Every 250 milliseconds, the UI polls the serial interface looking for a brief message comprising test results from the PIC. The message is displayed in a text box.
The code I inherited is as follows:
try
{
    btr = serialPort1.BytesToRead;
    if (btr > 0)
        Thread.Sleep(300);

    btr = serialPort1.BytesToRead;
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
What would be the rationale for the first several lines consisting of three calls to BytesToRead and two 300 millisecond sleep calls before finally reading the serial port? Unless I am interpreting the code incorrectly, any successful read from the serial port will take more than 600 milliseconds, which seems peculiar to me.
It is a dreadful hack around the behavior of SerialPort.Read(). Which returns only the number of bytes actually received. Usually just 1 or 2, serial ports are slow and modern PCs are very fast. So by calling Thread.Sleep(), the code is delaying the UI thread long enough to get the Read() call to return more bytes. Hopefully all of them, whatever the protocol looks like. Usually works, not always. And in the posted code it didn't work and the programmer just arbitrarily delayed twice as long. Ugh.
The great misery of course is that the UI thread is pretty catatonic when it is forced to sleep. Pretty noticeable, it gets very slow to paint and to respond to user input.
This needs to be repaired by first paying attention to the protocol. The PIC needs to either send a fixed number of bytes in its response, so you can simply count them off, or give the PC a way to detect that the full response is received. Usually done by sending a unique byte as the last byte of a response (SerialPort.NewLine) or by including the length of the response as a byte value at the start of the message. Specific advice is hard to give, you didn't describe the protocol at all.
You can keep the hacky code and move it into a worker thread so it won't affect the UI so badly. You get one for free from the SerialPort.DataReceived event. But that tends to produce two problems instead of solving the core issue.
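For illustration, a rough sketch of the terminator-based fix, assuming the PIC ends each result message with a newline and that this is a WinForms UI with a TextBox named textBox1 (both assumptions, since the protocol wasn't described):

serialPort1.NewLine = "\n";          // assumed terminator sent by the PIC
serialPort1.ReadTimeout = 1000;      // don't hang forever on a broken response

serialPort1.DataReceived += (sender, args) =>
{
    try
    {
        // DataReceived runs on a thread-pool thread, so the UI never sleeps.
        string message = serialPort1.ReadLine();   // blocks until the terminator arrives
        textBox1.BeginInvoke(new Action(() => textBox1.Text = message));
    }
    catch (TimeoutException)
    {
        // Incomplete response; log it or retry instead of guessing with Sleep().
    }
};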
If that code was initially in a loop, it might have been a way to wait for the PIC to collect data.
If you have real hardware to test on, I would suggest you remove both Sleeps.
@TomWr you are right, from what I'm reading this is the case.
Your snippet below with my comments:
try
{
    // Let's check how many bytes are available on the Serial Port
    btr = serialPort1.BytesToRead;

    // Something? Alright then, let's wait 300 ms.
    if (btr > 0)
        Thread.Sleep(300);

    // Let's check again whether there are bytes available on the Serial Port
    btr = serialPort1.BytesToRead;

    // ... and if so, wait (maybe again) for 300 ms.
    // Please note that at this point about 600 ms may have accumulated
    // (if we actually already waited previously).
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            // Seems like useless overhead; could directly use an
            // Encoding and the ReadExisting() method of the SerialPort.
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
My guess is the same as what idstam already mentioned above: it is basically there to check whether data has been sent by your device, and to fetch it.
You can easily refactor this code with the appropriate SerialPort methods, because there are much better and more concise ways to check whether data is available on the Serial Port.
Instead of "I'm checking how many bytes are on the port; if there is something, I wait 300 ms and then do the same thing again", which miserably ends up as:
"So yep, 2 times 300 ms = 600 ms, or just once (depending on whether there was data the first time), or maybe nothing at all, depending on the device you are communicating with through this UI, which can feel really sluggish since Thread.Sleep is blocking the UI..."
First, let's consider for a moment that you want to keep as much of the same codebase as possible: why not just wait for 600 ms once?
Or why not just use the ReadTimeout property and catch the timeout exception? Not that clean, but at least better in terms of readability, and you get your String directly instead of using some Convert.ToChar() calls...
I sense that the code has been ported from C or C++ (or at least the rationale behind it has) by someone who mostly has an embedded software background.
Anyway, back to the checks on the number of available bytes: unless the Serial Port data is flushed in another Thread / BackgroundWorker / Task handler, I don't see any reason to check it twice, especially in the way it is coded.
To make it faster? Not really, because there is an additional delay if there actually is data on the Serial Port. It does not make much sense to me.
Another way to make your snippet slightly better is to poll using the ReadExisting().
Otherwise you can also consider asynchronous methods using the SerialPort BaseStream.
All in all, it's pretty hard to say without access to the rest of your codebase, i.e. the context.
If you had more information about what are the objectives / protocol it could give some hints about what to do. Otherwise, I just can say this seems to be poorly coded, once again, out of context.
I second (and even third) what Hans mentioned about UI responsiveness, in the sense that I really hope your snippet is running on a thread that is not the UI thread (although you mentioned in your post that the UI is polling, I still hope the snippet runs in another worker).
If that really is the UI thread, then it will be blocked every time there is a Thread.Sleep call, which makes the UI unresponsive to user interactions and may frustrate your end user.
It might also be worth subscribing to the DataReceived event and performing whatever you want/need in the handler (e.g. using a buffer, comparing values, etc.).
Please note that Mono still does not raise this event, but if you are running against a plain MS .NET implementation it works perfectly fine, without extra multi-threading hassle.
In short:
Check which thread(s) run your snippet, and mind the UI responsiveness.
If it is the UI thread, use another thread via Thread, BackgroundWorker (ThreadPool) or Task.
Or use the asynchronous methods of the SerialPort BaseStream to avoid the hassles of UI thread synchronization (see the sketch after this list).
Check whether the objectives really justify a double 300 ms Thread.Sleep call.
You can fetch the String directly (e.g. with ReadExisting or ReadLine) instead of gathering the bytes yourself, if the chosen encoding can fulfill your needs.
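A rough sketch of the BaseStream route, assuming serialPort1 is already open and the data is ASCII; error handling and message framing are left out (needs using System.Text and System.Threading.Tasks):

// Read from the port's underlying stream asynchronously so the UI thread is never blocked.
private async Task ReadLoopAsync()
{
    byte[] buffer = new byte[256];
    while (serialPort1.IsOpen)
    {
        int n = await serialPort1.BaseStream.ReadAsync(buffer, 0, buffer.Length);
        if (n > 0)
        {
            string chunk = Encoding.ASCII.GetString(buffer, 0, n);   // assumed encoding
            // Accumulate chunks into a full message here, then marshal the result
            // to the UI thread (BeginInvoke / Dispatcher) once it is complete.
        }
    }
}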

Communication between LAN and Microcontroller

So I have a kind of strange problem. I'm using LAN for the communication with a microcontroller. Everything was working perfectly, meaning I can send and receive data. For receiving data I'm using a simple method: Thread.Sleep(1) in a for loop in which I keep checking client.GetStream().DataAvailable for true, where client is a TcpClient.
Now, for one process I have to send to and receive from the microcontroller at a higher baud rate. I was using 9600 for all other operations and everything was fine. Now, at 115200, client.GetStream().DataAvailable always seems to be false.
What could be the problem?
PS: Another way to communicate with the microcontroller (selected by the user) is serial communication. That still works fine at the higher baud rate.
Here is a code snippet:
using (client = new TcpClient(IP_String, LAN_Port))
{
    client.SendTimeout = 200;
    client.ReceiveTimeout = 200;
    stream = client.GetStream();
    .
    .
    bool OK = false;
    stream.Write(ToSend, 0, ToSend.Length);
    for (int j = 0; j < 1000; j++)
    {
        if (stream.DataAvailable)
        {
            OK = true;
            break;
        }
        Thread.Sleep(1);
    }
    .
    .
}
EDIT:
While monitoring the communication with a listening device, I realized that the bits actually arrive and that the device actually answers. The one and only problem seems to be that the DataAvailable flag is not being raised. I should probably find another way to check data availability. Any ideas?
I've been trying to think of things I've seen that act this way...
I've seen serial chips that say they'll do 115,200, but actually won't. See what happens if you drop the baud rate one notch. Either way you'll learn something.
Some microcontrollers "bit-bang" the serial port by having the CPU raise and lower the data pin and essentially go through the bits, banging 1 or 0 onto the serial pin. When a byte comes in, they read it, and do the same thing.
This does save money (no serial chip) but it is an absolute hellish nightmare to actually get working reliably. 115,200 may push a bit-banger too hard.
This might be a subtle microcontroller problem. Say you have a receiving serial chip which asserts a pin when a byte has come in, usually something like DRQ* for "Data Request" (the * in DRQ* means a 0-volt is "we have a byte" condition) (c'mon, people, a * isn't always a pointer:-). Well, DRQ* requests an interrupt, the firmware & CPU interrupt, it reads the serial chip's byte, and stashes it into some handy memory buffer. Then it returns from interrupt.
A problem can emerge if you're getting data very fast. Let's assume data has come in, serial chip got a byte ("#1" in this example), asserted DRQ*, we interrupted, the firmware grabs and stashes byte #1, and returns from interrupt. All well and good. But think what happens if another byte comes winging in while that first interrupt is still running. The serial chip now has byte #2 in it, so it again asserts the already-asserted DRQ* pin. The interrupt of the first byte completes. What happens?
You hang.
This is because it's the -edge- of DRQ*, physically going from 5V to 0V, that actually causes the CPU interrupt. On the second byte, DRQ* started at 0 and was set to 0. So DRQ* is (still) asserted, but there's no -edge- to tell the interrupt hardware/CPU that another byte is waiting. And now, of course, all the rest of the incoming data is also dropped.
See why it gets worse at higher speeds? The interrupt routine is fielding data more and more quickly, and typically doing circular I/O buffer calculations within the interrupt handler, and it must be fast and efficient, because fast input can push the interrupt handler to where a full new byte comes in before the interrupt finishes.
This is why it's a good idea to check DRQ* during the interrupt handler to see if another byte (#2) is already waiting (if so, just read it in, to clear the serial chip's DRQ*, and stash the byte in memory right then), or use "level triggering" for interrupts, not "edge triggering". Edge triggering definitely has good uses, but you need to watch out for this.
I hope this is helpful. It sure took me long enough to figure it out the first time. Now I take great care on stuff like this.
Good luck, let me know how it goes.
thanks,
Dave Small

C# How to receive information from serial port that doesn't end in "\r" or "\n"

In my application I have a serial port object and a listbox. In the DataReceived event, I send serialPort.ReadLine() to the listbox. If I write an "n" character to the serial port, nothing gets added to the listbox, because what gets received doesn't end in "\r" or "\n".
What is the correct way to read information from a serial port? (Keep in mind that I need to keep the full string/char[] of the last thing received.)
The 'correct' way depends heavily on implementation.
The SerialPort.ReadLine() method expects a CR/LF as a means to delimit a payload unit. And by "thing", I imagine that you mean exactly that: a message, payload or package (as in one meaningful, functional unit of information).
What SerialPort.ReadLine() does is to wrap the whole 'receive everything coming from the buffer and wait for a end-of-payload mark before continuing' mechanism for you.
If you'd rather have the raw incoming content as soon as it arrives, then you may consider changing your code to use SerialPort.Read() instead.
If your message consists of an exact amount of bytes (sometimes the case with sensor data protocols) you can define the bytes you expect - but you should set a timeout in this case.
serialPort.ReadTimeout = timeOut;
serialPort.Read(responseBytes, 0, bytesExpected);
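Since Read() may return fewer bytes than requested, a loop finishes the job. This is only a sketch; timeOut, bytesExpected and serialPort are placeholders from the snippet above:

// Read exactly bytesExpected bytes, or throw TimeoutException if the device stalls.
byte[] responseBytes = new byte[bytesExpected];
int total = 0;

serialPort.ReadTimeout = timeOut;
while (total < bytesExpected)
{
    total += serialPort.Read(responseBytes, total, bytesExpected - total);
}
// responseBytes now holds the complete fixed-length message.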

Socket and ports setup for high-speed audio/video streaming

I have a one-on-one connection between a server and a client. The server is streaming real-time audio/video data.
My question may sound weird, but should I use multiple ports/sockets or only one? Is it faster to use multiple ports, or does a single one offer better performance? Should I have a port only for messages, one for video and one for audio, or is it simpler to package the whole thing over a single port?
One of my current problems is that I need to first send the size of the current frame, as the size in bytes may change from one frame to the next. I'm fairly new to networking, but I haven't found any mechanism that would automatically detect the correct range for a specific object being transmitted. For example, if I send a 2934-byte packet, do I really need to tell the receiver the size of that packet?
I first tried to package the frames as fast as they were coming in, but I found that the receiving end would sometimes not get the appropriate number of bytes. Most of the time it would read faster than I send, getting only a partial frame. What's the best way to get exactly the appropriate number of bytes as quickly as possible?
Or am I looking too low and there's a higher-level class/framework used to handle object transmission?
I think it is better to use a common message structure and send the data in an interleaved fashion. This may work better than a multiple-port mechanism.
eg:
class Data
{
    DataType Type;    // Audio / Video
    int Size;         // size of the data buffer
    byte[] Buffer;    // payload, depends on the type
}
'DataType' and 'Size' are always of a constant size. On the client side, read 'DataType' and 'Size' first, and then read the specified number of bytes of the corresponding data (audio/video).
Just making something up off the top of my head. Shove "packets" like this down the wire:
1 byte - packet type (audio or video)
2 bytes - data length
(whatever else you need)
|
| (raw data)
|
So whenever you get one of these packets on the other end, you know exactly how much data to read, and where the beginning of the next packet should start.
[430 byte audio L packet]
[430 byte audio R packet]
[1000 byte video packet]
[20 byte control packet]
[2000 byte video packet]
...
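For what it's worth, a rough C# sketch of writing one such packet; the type codes and the NetworkStream parameter are assumptions, and the 2-byte length caps payloads at 65,535 bytes:

// Write one length-prefixed packet to the wire.
// packetType is a made-up code, e.g. 0 = audio L, 1 = audio R, 2 = video, 3 = control.
static void SendPacket(NetworkStream stream, byte packetType, byte[] payload)
{
    stream.WriteByte(packetType);                      // 1 byte  - packet type
    stream.WriteByte((byte)(payload.Length >> 8));     // 2 bytes - data length (big-endian)
    stream.WriteByte((byte)(payload.Length & 0xFF));
    stream.Write(payload, 0, payload.Length);          // raw data
}

The receiver reads the 3 header bytes first, then exactly 'length' more bytes, so it always knows where the next packet starts.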
But why re-invent the wheel? There are protocols to do these things already.

What is the minimum number of bytes that will cause Socket.Receive to return?

We are using an application protocol which specifies the length indicator of the message in the first 4 bytes. Socket.Receive will return as much data as is in the protocol stack at the time, or block until data is available. This is why we have to continuously read from the socket until we receive the number of bytes in the length indicator. Socket.Receive will return 0 if the other side closed the connection. I understand all that.
Is there a minimum number of bytes that has to be read? The reason I ask is that, from the documentation, it seems entirely possible that the entire length indicator (4 bytes) might not be available when Socket.Receive returns. We would then have to keep trying. It would be more efficient to minimize the number of times we call Socket.Receive, because it has to copy things in and out of buffers. So is it safer to get a single byte at a time for the length indicator, is it safe to assume that 4 bytes will always be available, or should we keep trying to get 4 bytes using an offset variable?
The reason that I think that there may be some sort of default minimum level is that I came across a variable called ReceiveLowWater that I can set in the socket options. But this appears to only apply to BSD. See SO_RCVLOWAT on MSDN.
It isn't really that important but I am trying to write unit tests. I have already wrapped a standard .Net Socket behind an interface.
is it safe to assume that 4 bytes will always be available
NO. Never. What if someone is testing your protocol with, say, telnet and a keyboard? Or over a really slow or busy connection? You can receive one byte at a time, or a split "length indicator" spread over multiple Receive() calls. This isn't a unit-testing matter; it's basic socket behavior that causes problems in production, especially under stressful situations.
or should we keep trying to get 4 bytes using an offset variable?
Yes, you should. For your convenience, you can use the Socket.Receive() overload that allows you to specify a number of bytes to be read so you won't read too much. But please note it can return less than required, that's what the offset parameter is for, so it can continue to write in the same buffer:
byte[] lenBuf = new byte[4];
int offset = 0;
while (offset < lenBuf.Length)
{
    int received = socket.Receive(lenBuf, offset, lenBuf.Length - offset, 0);
    if (received == 0)
    {
        // connection gracefully closed, do your thing to handle that
        // (break or throw here, otherwise this loop would spin forever)
        break;
    }
    offset += received;
}
// Here you're ready to parse lenBuf
The reason that I think that there may be some sort of default minimum level is that I came across a variable called ReceiveLowWater that I can set in the socket options. But this appears to only apply to BSD.
That is correct. The "receive low water" flag is only included for backwards compatibility and does nothing apart from throwing errors, as per MSDN (search for SO_RCVLOWAT):
"This option is not supported by the Windows TCP/IP provider. If this option is used on Windows Vista and later, the getsockopt and setsockopt functions fail with WSAEINVAL. On earlier versions of Windows, these functions fail with WSAENOPROTOOPT." So I guess you'll have to use the offset.
It's a shame, because it could enhance performance. However, as @cdleonard pointed out in a comment, the performance penalty from keeping an offset variable will be minimal, as you'll usually receive the four bytes at once.
No, there isn't a minimum buffer size, the length in the receive just needs to match the actual space.
If you send a length in four bytes before the message's actual data, the recipient needs to handle the cases where 1, 2, 3 or 4 bytes are returned, keep repeating the read until all four bytes are received, and then repeat the procedure to receive the actual data.
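A sketch of that two-step receive; the big-endian byte order of the length prefix is an assumption, and ReceiveExactly is a made-up helper name:

// Receive exactly 'count' bytes, repeating Receive() until the buffer is full.
static byte[] ReceiveExactly(Socket socket, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int received = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (received == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed
        offset += received;
    }
    return buffer;
}

// Usage: read the 4-byte length indicator, then the message body.
byte[] lenBuf = ReceiveExactly(socket, 4);
int messageLength = (lenBuf[0] << 24) | (lenBuf[1] << 16) | (lenBuf[2] << 8) | lenBuf[3];
byte[] message = ReceiveExactly(socket, messageLength);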
