So I have a somewhat strange problem. I'm using LAN to communicate with a microcontroller. Everything was working perfectly, meaning I could send and receive data. For receiving data I use a simple method: Thread.Sleep(1) in a for loop in which I keep checking client.GetStream().DataAvailable for true, where client is a TcpClient.
Now, for one process I have to send to and receive from the microcontroller at a higher baud rate. I was using 9600 for all other operations and everything was fine. With 115200, client.GetStream().DataAvailable always seems to be false.
What could be the problem?
PS: Another way to communicate with the microcontroller (all chosen by the user) is serial communication. That still works fine at the higher baud rate.
Here is a code snippet:
using (client = new TcpClient(IP_String, LAN_Port))
{
    client.SendTimeout = 200;
    client.ReceiveTimeout = 200;
    stream = client.GetStream();
    .
    .
    bool OK = false;
    stream.Write(ToSend, 0, ToSend.Length);
    for (int j = 0; j < 1000; j++)
    {
        if (stream.DataAvailable)
        {
            OK = true;
            break;
        }
        Thread.Sleep(1);
    }
    .
    .
}
EDIT:
While monitoring the communication with a listening device I realized that the bits actually arrive and that the device actually answers. The one and only problem seems to be that the DataAvailable flag is not being raised. I should probably find another way to check data availability. Any ideas?
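One alternative I could try, sketched below, is to skip the flag entirely and do a blocking Read() with a ReadTimeout; the buffer size and timeout are placeholders. Read() returns as soon as any bytes arrive and throws an IOException when the timeout expires. Another option might be client.Client.Poll(timeoutMicroseconds, SelectMode.SelectRead) combined with client.Client.Available.

// Sketch: replace the DataAvailable polling loop with a blocking read.
// Buffer size and timeout are placeholders; IOException needs using System.IO.
stream.ReadTimeout = 1000;                 // give the device up to 1 s to answer
byte[] buffer = new byte[4096];
bool OK = false;
try
{
    int read = stream.Read(buffer, 0, buffer.Length);   // blocks until data arrives or the timeout expires
    OK = read > 0;                                       // 0 means the peer closed the connection
}
catch (IOException)
{
    // timeout elapsed without any data
}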
I've been trying to think of things I've seen that act this way...
I've seen serial chips that say they'll do 115,200, but actually won't. See what happens if you drop the baud rate one notch. Either way you'll learn something.
Some microcontrollers "bit-bang" the serial port by having the CPU raise and lower the data pin and essentially go through the bits, banging 1 or 0 onto the serial pin. When a byte comes in, they read it, and do the same thing.
This does save money (no serial chip) but it is an absolute hellish nightmare to actually get working reliably. 115,200 may push a bit-banger too hard.
This might be a subtle microcontroller problem. Say you have a receiving serial chip which asserts a pin when a byte has come in, usually something like DRQ* for "Data Request" (the * in DRQ* means it is active-low: 0 volts is the "we have a byte" condition) (c'mon, people, a * isn't always a pointer :-). Well, DRQ* requests an interrupt, the CPU takes the interrupt, and the firmware reads the serial chip's byte and stashes it into some handy memory buffer. Then it returns from the interrupt.
A problem can emerge if you're getting data very fast. Let's assume data has come in, serial chip got a byte ("#1" in this example), asserted DRQ*, we interrupted, the firmware grabs and stashes byte #1, and returns from interrupt. All well and good. But think what happens if another byte comes winging in while that first interrupt is still running. The serial chip now has byte #2 in it, so it again asserts the already-asserted DRQ* pin. The interrupt of the first byte completes. What happens?
You hang.
This is because it's the -edge- of DRQ*, physically going from 5V to 0V, that actually causes the CPU interrupt. On the second byte, DRQ* started at 0 and was set to 0. So DRQ* is (still) asserted, but there's no -edge- to tell the interrupt hardware/CPU that another byte is waiting. And now, of course, all the rest of the incoming data is also dropped.
See why it gets worse at higher speeds? The interrupt routine is fielding data more and more quickly, and typically doing circular I/O buffer calculations within the interrupt handler, and it must be fast and efficient, because fast input can push the interrupt handler to where a full new byte comes in before the interrupt finishes.
This is why it's a good idea to check DRQ* during the interrupt handler to see if another byte (#2) is already waiting (if so, just read it in, to clear the serial chip's DRQ*, and stash the byte in memory right then), or use "level triggering" for interrupts, not "edge triggering". Edge triggering definitely has good uses, but you need to watch out for this.
I hope this is helpful. It sure took me long enough to figure it out the first time. Now I take great care on stuff like this.
Good luck, let me know how it goes.
thanks,
Dave Small
I am modifying a C# based UI that interfaces to a small PIC microcontroller tester device.
The UI consists of a couple of buttons that initiate a test by sending a "command" to the microcontroller via a serial port connection. Every 250 milliseconds, the UI polls the serial interface looking for a brief message consisting of test results from the PIC. The message is displayed in a text box.
The code I inherited is as follows:
try
{
    btr = serialPort1.BytesToRead;
    if (btr > 0)
        Thread.Sleep(300);
    btr = serialPort1.BytesToRead;
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
What would be the rationale for the first several lines consisting of three calls to BytesToRead and two 300 millisecond sleep calls before finally reading the serial port? Unless I am interpreting the code incorrectly, any successful read from the serial port will take more than 600 milliseconds, which seems peculiar to me.
It is a dreadful hack around the behavior of SerialPort.Read(), which returns only the number of bytes actually received. That is usually just 1 or 2, since serial ports are slow and modern PCs are very fast. So by calling Thread.Sleep(), the code delays the UI thread long enough for the Read() call to return more bytes. Hopefully all of them, whatever the protocol looks like. Usually works, not always. And in the posted code it didn't work, and the programmer just arbitrarily delayed twice as long. Ugh.
The great misery of course is that the UI thread is pretty catatonic when it is forced to sleep. Pretty noticeable, it gets very slow to paint and to respond to user input.
This needs to be repaired by first paying attention to the protocol. The PIC needs to either send a fixed number of bytes in its response, so you can simply count them off, or give the PC a way to detect that the full response is received. Usually done by sending a unique byte as the last byte of a response (SerialPort.NewLine) or by including the length of the response as a byte value at the start of the message. Specific advice is hard to give, you didn't describe the protocol at all.
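For instance, if the PIC can be made to terminate every response with a line feed (an assumption; the question does not describe the protocol), the read side collapses to something like this sketch:

// Sketch: let ReadLine() do the waiting instead of Thread.Sleep.
serialPort1.NewLine = "\n";        // assumed terminator sent by the PIC
serialPort1.ReadTimeout = 1000;    // fail fast instead of sleeping blindly
try
{
    string response = serialPort1.ReadLine();   // returns once the full line has arrived
    // ... parse and display the test results ...
}
catch (TimeoutException)
{
    // no complete response arrived in time
}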
You can keep the hacky code and move it into a worker thread so it won't affect the UI so badly. You get one for free from the SerialPort.DataReceived event. But that tends to produce two problems instead of solving the core issue.
If that code initially was in a loop it might have been a way to wait for the PIC to collect data.
If you have real hardware to test on, I would suggest you remove both Sleeps.
@TomWr you are right; from what I'm reading, this is the case.
Your snippet below with my comments:
try
{
    // Let's check how many bytes are available on the serial port
    btr = serialPort1.BytesToRead;

    // Something there? Alright then, let's wait 300 ms.
    if (btr > 0)
        Thread.Sleep(300);

    // Let's check again whether there are bytes available on the serial port
    btr = serialPort1.BytesToRead;

    // ... and if so, wait (maybe again) for 300 ms.
    // Note that at this point we may have accumulated about 600 ms
    // (if we already waited the first time).
    if (btr > 0)
    {
        Thread.Sleep(300);
        btr = serialPort1.BytesToRead;
        numbytes = serialPort1.Read(stuffchar, 0, btr);
        for (x = 0; x < (numbytes); x++)
        {
            // Seems like useless overhead; could directly use an Encoding
            // and the SerialPort's ReadExisting() method instead.
            cc = (stuffchar[x]);
            stuff += Convert.ToString(Convert.ToChar((stuffchar[x])));
        }
My guess is the same as what idstam already mentioned above: it is basically there to check whether data has been sent by your device and to fetch it.
You can easily refactor this code with the appropriate SerialPort methods, because there are much better and more concise ways to check whether data is available on the serial port.
The current logic is "I check how many bytes are on the port; if there is something, I wait 300 ms, and then I do the same thing again." That miserably ends up as "two times 300 ms = 600 ms, or just once (depending on whether anything was there the first time), or maybe no wait at all, depending on the device you are communicating with through this UI, which can feel really sluggish since Thread.Sleep blocks the UI..."
First, assuming for a moment that you want to keep as much of the same codebase as possible, why not just wait 600 ms once?
Or why not just use the ReadTimeout property and catch the timeout exception? Not that clean, but at least better in terms of readability, and you get your string directly instead of going through Convert.ToChar() calls...
I sense that the code has been ported from C or C++ (or at least follows that rationale) by someone with mostly an embedded-software background.
Anyway, back to the checks on the number of available bytes: unless the serial port data is being flushed by another Thread / BackgroundWorker / Task handler, I don't see any reason to check it twice, especially the way it is coded.
To make it faster? Not really, because it adds a delay whenever data actually is on the serial port. It does not make much sense to me.
Another way to make your snippet slightly better is to poll using ReadExisting().
Otherwise you can also consider asynchronous methods on the SerialPort's BaseStream.
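A minimal sketch of that asynchronous route, using BaseStream.ReadAsync; the buffer size, the encoding and the accumulation step are assumptions:

// Sketch: read from the port's underlying stream without blocking the UI thread.
// Must run inside an async method.
byte[] buffer = new byte[256];
int n = await serialPort1.BaseStream.ReadAsync(buffer, 0, buffer.Length);
if (n > 0)
{
    string chunk = System.Text.Encoding.ASCII.GetString(buffer, 0, n);
    // append 'chunk' to whatever you are accumulating, then decide whether the message is complete
}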
All in all it's pretty hard to say without access to the rest of your codebase, i.e. the context.
If you give more information about the objectives / protocol, it could provide some hints about what to do. Otherwise I can only say that this seems to be poorly coded; once again, that's out of context.
I second (and even third) what Hans mentioned about UI responsiveness: I really hope your snippet runs on a thread other than the UI thread (although you mentioned in your post that the UI is polling, I still hope the snippet runs on another worker).
If it really is the UI thread, it will be blocked on every Thread.Sleep call, which makes the UI unresponsive to user interactions and may frustrate your end users.
It might also be worth subscribing to the DataReceived event and doing whatever you want/need in the handler (e.g. buffering the data and comparing values, etc.).
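A rough sketch of that approach; the end-of-message marker and the resultTextBox control are assumptions, not something from your code:

// Sketch: accumulate incoming data in the DataReceived handler.
// The handler is raised on a background thread, so marshal to the UI thread before touching controls.
private readonly StringBuilder rxBuffer = new StringBuilder();   // needs using System.Text;

private void serialPort1_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    rxBuffer.Append(serialPort1.ReadExisting());
    if (rxBuffer.ToString().Contains("\n"))                      // assumed end-of-message marker
    {
        string message = rxBuffer.ToString();
        rxBuffer.Clear();
        BeginInvoke(new Action(() => resultTextBox.Text = message));   // hypothetical TextBox on the form
    }
}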
Please note that Mono still does not raise this event, but if you are running against a plain MS .NET implementation this works perfectly fine without the hassle of managing the threading yourself.
In short:
Check which thread(s) take care of your snippet and mind the UI responsiveness.
If it is the UI thread, then use another thread via Thread, BackgroundWorker (ThreadPool) or Task.
Consider the stream's asynchronous methods in order to avoid the hassles of UI-thread synchronization.
Check whether the objectives really warrant a double 300 ms Thread.Sleep call.
You can fetch the string directly (if the chosen encoding fulfills your needs) instead of gathering the bytes yourself and converting them later; see the sketch below.
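For that last point, a minimal sketch (assuming the port's encoding, ASCII by default, is acceptable for the PIC's messages):

// Sketch: replace the Convert.ToChar loop with a single call.
// ReadExisting() decodes whatever has arrived using serialPort1.Encoding.
string stuff = serialPort1.ReadExisting();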
In my application I have a serial port object and a listbox. In the DataReceived event, I send serialPort.ReadLine() to the listbox. If I write an "n" character to the serial port, nothing gets added to the listbox, because what gets received doesn't end in "\r" or "\n".
What is the correct way to read information from a serial port? (Keep in mind that I need to keep the full string/char[] of the last thing received.)
The 'correct' way depends heavily on implementation.
The SerialPort.ReadLine() method expects a CR/LF as a means to define a payload unit. And, by thing, I imagine that you mean exactly that - a message, payload or package (as in one meaningful, functional unit of information.)
What SerialPort.ReadLine() does is to wrap the whole 'receive everything coming from the buffer and wait for a end-of-payload mark before continuing' mechanism for you.
If you'd rather have the raw incoming content as soon as it arrives, then you may consider changing your code to use SerialPort.Read() instead.
If your message consists of an exact number of bytes (sometimes the case with sensor data protocols), you can define the bytes you expect, but you should set a timeout in this case.
serialPort.ReadTimeout = timeOut;
serialPort.Read(responseBytes, 0, bytesExpected);
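Because Read() may return fewer bytes than requested, a small loop like the one below collects exactly the expected count or throws TimeoutException if the data stops coming. It is only a sketch; it assumes serialPort is your open SerialPort instance and reuses the bytesExpected and timeOut names from the snippet above.

// Sketch: read exactly 'bytesExpected' bytes from the port, or time out.
byte[] responseBytes = new byte[bytesExpected];
serialPort.ReadTimeout = timeOut;
int total = 0;
while (total < bytesExpected)
{
    // Read() returns as soon as at least one byte is available, so keep appending.
    total += serialPort.Read(responseBytes, total, bytesExpected - total);
}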
We are using an application protocol which specifies the length indicator of the message in the first 4 bytes. Socket.Receive will return as much data as is in the protocol stack at the time, or block until data is available. This is why we have to continuously read from the socket until we receive the number of bytes in the length indicator. Socket.Receive will return 0 if the other side closed the connection. I understand all that.
Is there a minimum number of bytes that has to be read? The reason I ask is that, from the documentation, it seems entirely possible that the entire length indicator (4 bytes) might not be available when Socket.Receive returns. We would then have to keep trying. It would be more efficient to minimize the number of times we call Socket.Receive, because it has to copy things in and out of buffers. So is it safer to get a single byte at a time for the length indicator, is it safe to assume that 4 bytes will always be available, or should we keep trying to get 4 bytes using an offset variable?
The reason I think there may be some sort of default minimum level is that I came across a variable called ReceiveLowWater that I can set in the socket options. But this appears to only apply to BSD. See SO_RCVLOWAT on MSDN.
It isn't really that important but I am trying to write unit tests. I have already wrapped a standard .Net Socket behind an interface.
is it safe to assume that 4 bytes will always be available
NO. Never. What if someone is testing your protocol with, say, telnet and a keyboard? Or over a really slow or busy connection? You can receive one byte at a time, or a split "length indicator" over multiple Receive() calls. This isn't a unit-testing matter, it's a basic socket matter that causes problems in production, especially under stressful situations.
or should we keep trying to get 4 bytes using an offset variable?
Yes, you should. For your convenience, you can use the Socket.Receive() overload that allows you to specify a number of bytes to be read so you won't read too much. But please note it can return less than required, that's what the offset parameter is for, so it can continue to write in the same buffer:
byte[] lenBuf = new byte[4];
int offset = 0;
while (offset < lenBuf.Length)
{
    int received = socket.Receive(lenBuf, offset, lenBuf.Length - offset, SocketFlags.None);
    if (received == 0)
    {
        // connection gracefully closed; handle that and break out,
        // otherwise this loop would spin forever
        break;
    }
    offset += received;
}
// Here you're ready to parse lenBuf
The reason I think there may be some sort of default minimum level is that I came across a variable called ReceiveLowWater that I can set in the socket options. But this appears to only apply to BSD.
That is correct, the "receive low water" flag is only included for backwards compatibility and does nothing apart from throwing errors, as per MSDN, search for SO_RCVLOWAT:
This option is not supported by the Windows TCP/IP provider. If this option is used on Windows Vista and later, the getsockopt and setsockopt functions fail with WSAEINVAL. On earlier versions of Windows, these functions fail with WSAENOPROTOOPT". So I guess you'll have to use the offset.
It's a shame, because it could enhance performance. However, as @cdleonard pointed out in a comment, the performance penalty from keeping an offset variable will be minimal, as you'll usually receive the four bytes at once.
No, there isn't a minimum buffer size, the length in the receive just needs to match the actual space.
If you send a length in four bytes before the message's actual data, the recipient needs to handle the cases where 1, 2, 3 or 4 bytes are returned and keep repeating the read until all four bytes are received, then repeat the procedure to receive the actual data.
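A sketch of that procedure wrapped in a small helper; the little-endian byte order of the length field is an assumption, so adjust it to your protocol:

using System;
using System.Net.Sockets;

static class SocketReader
{
    // Read exactly 'count' bytes into 'buffer', or return false if the peer closed the connection.
    public static bool ReceiveExact(Socket socket, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int received = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (received == 0)
                return false;            // graceful close before the full message arrived
            offset += received;
        }
        return true;
    }

    // Usage: first the 4-byte length indicator, then the payload it announces.
    public static byte[] ReceiveMessage(Socket socket)
    {
        byte[] lenBuf = new byte[4];
        if (!ReceiveExact(socket, lenBuf, lenBuf.Length))
            return null;
        int length = BitConverter.ToInt32(lenBuf, 0);   // assumes a little-endian length field
        byte[] payload = new byte[length];
        return ReceiveExact(socket, payload, length) ? payload : null;
    }
}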
Is there a way I can get a voltage input using RS232? What I am doing now is I have 2 PCs hooked up via RS232 to communicate using the GND, TX and RX pins. I've also connected DTR to a switch, and when the switch is pressed I'm able to get a voltage output to drive an LED. What I want to do now is, when the LED lights up, somehow get an input and change some things in my program. Is it possible to do that? Thanks in advance.
P.S. I'm doing C# programming.
Not directly. Your RS232 port communicates using digital signals at fixed voltage levels. You need an analog-to-digital converter (ADC) between your voltage source and the RS232 port. Check out sparkfun.com for plenty of options.
Yes, there are several inputs. First of all, which pins are inputs depends on whether the RS232 device is a DCE or a DTE. Originally this standard was put together to connect a terminal to a modem. The DCE is the modem and the DTE is the terminal. You wire the pins 1 to 1. Hence Tx can be a receiver or a transmitter.
A PC is typically a DTE. With that in mind this info may make more sense now: http://en.wikipedia.org/wiki/RS-232
The voltages are nominally ±7 V; the actual typical range is ±3 V to ±15 V. Be warned: a negative voltage is a logic 1.
As for C# you have access to all of the pins. Check out the class SerialPort.
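A minimal sketch of reading those pins from C# (the port name is a placeholder); the input lines show up as read-only properties and through the PinChanged event:

using System;
using System.IO.Ports;

// Sketch: watch the CTS/DSR/CD input pins of a serial port. "COM1" is a placeholder.
using (var port = new SerialPort("COM1"))
{
    port.PinChanged += (s, e) => Console.WriteLine("Pin changed: " + e.EventType);
    port.Open();

    Console.WriteLine("CTS: " + port.CtsHolding + "  DSR: " + port.DsrHolding + "  CD: " + port.CDHolding);
    port.DtrEnable = true;   // drive the DTR output high (e.g. to light the LED)

    Console.ReadLine();      // keep the port open while watching for pin changes
}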
Good luck
Rob
Yes it is possible. I did it.
Just connect the minus (ground) of the voltage source to the RS232 RX pin (refer to any pinout on the Internet to be sure where the RX pin is). When you apply a voltage pulse to this pin, you see some meaningless activity on the port. An OnComm 'Receive' event appears, but it has no start bit and no stop bit; it is completely meaningless. Still, any program that monitors RS232 (COM) ports will show that something is happening on the port (such as '00' in hex or '[00]' in ASCII).
As I said, it has no start and stop bits and its voltage is not the nominal voltage needed for RS232 communication, so you must detect it with a subroutine like this (you cannot handle it any other way):
Create a timer with a 1 ms interval and put this in its event handler:

' Don't forget to disable the timer while handling the pulse
Timer1.Enabled = False
If MSComm1.InBufferCount > 0 Then
    ' the voltage has appeared at RX - put your code here
    ' for example: buttonPressCount = buttonPressCount + 1
End If
WaitForTheActionEnd:
If MSComm1.InBufferCount > 0 Then
    ' there is still some meaningless voltage
    ' wait until the pulse ends
    DoEvents
    GoTo WaitForTheActionEnd
End If
' Don't forget to enable the timer again
Timer1.Enabled = True
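For a C# SerialPort, a rough equivalent of the same polling idea might look like the sketch below; the port name, line settings and the pulse-handling code are assumptions, not part of the original answer.

using System;
using System.IO.Ports;
using System.Threading;

class RxPulseDetector
{
    static void Main()
    {
        // "COM1" and 9600 baud are placeholders; adjust to your setup.
        using (var port = new SerialPort("COM1", 9600))
        {
            port.Open();
            int pulseCount = 0;

            while (true)
            {
                // Any garbage byte in the receive buffer means a pulse hit the RX pin.
                if (port.BytesToRead > 0)
                {
                    pulseCount++;
                    Console.WriteLine("Pulse detected, count = " + pulseCount);

                    // Wait for the pulse to end and throw the garbage away.
                    while (port.BytesToRead > 0)
                    {
                        port.DiscardInBuffer();
                        Thread.Sleep(10);
                    }
                }
                Thread.Sleep(1);
            }
        }
    }
}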
I hope this will be helpful.
Sedat K.
If I have a port that I need to read from quite a lot, and the returning amount of data varies from one line to many lines, what is the best way to read / sift through what I am reading?
After some research I found out what I was supposed to do.
string Xposition;
Xposition = "";
threeAxisPort.ReadExisting();
threeAxisPort.WriteLine("rx");
Thread.Sleep(50);
Xposition = threeAxisPort.ReadExisting();
threeAxisPort.WriteLine("end");
What I ended up doing was clearing the port of everything using ReadExisting(), then waiting 50 ms so as not to flood the port, and then doing another ReadExisting() call. With the additional 50 ms wait there was no leftover data from the motors, and the returned block of text was exactly what I needed. However, I will still be thinking up a more dynamic way to handle that string, because in a worst-case scenario the ReadExisting() will pick up something I don't want.
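If the controller terminates each reply with a known character (an assumption; check the motor controller's protocol), SerialPort.ReadTo() would avoid the fixed 50 ms wait. A sketch:

// Sketch, assuming the controller ends each reply with '\r' (adjust to your protocol).
threeAxisPort.DiscardInBuffer();           // drop stale data instead of a throwaway ReadExisting()
threeAxisPort.ReadTimeout = 500;           // don't hang forever if the reply never comes
threeAxisPort.WriteLine("rx");

string Xposition;
try
{
    Xposition = threeAxisPort.ReadTo("\r");   // blocks only until the terminator arrives
}
catch (TimeoutException)
{
    Xposition = string.Empty;                 // no complete reply within the timeout
}
threeAxisPort.WriteLine("end");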