Serial port communication in .NET - C#

I am using C# to receive data from a serial port, but there are some problems. I'm new to this, so I need some help.
First of all, I want to know which of these methods are event driven:
ReadExisting()
Read()
ReadByte()
ReadChar()
ReadLine()
ReadTo()
How can I take the required data from the input stream of this port?
I have static sized protocols. Can I use a special character to mark the limits of the protocol data, and which character would be suitable for this?
How do I handle this exception:
C# SerialPort System.ObjectDisposedException, safe handle has been closed in System.DLL

None of these methods is "event driven"; you'd call them from the DataReceived event handler, which is raised when the serial port has at least one byte of data available to read.
Not sure what "static sized" means. If the device sends a fixed number of bytes then you'd use the Read() method to read them. Pay attention to the return value: you'll get only as many bytes as are available. Store them in a byte[] and append to it in the next DataReceived event until you've got them all.
If the device sends characters rather than bytes then you can usually take advantage of the NewLine property. Set it to the character or string that terminates the response. A linefeed ("\n") is by far the most typical choice. Read the response with ReadLine(). No buffering is required in that case.
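As a minimal sketch of that setup (the port name, baud rate, and handler body are placeholders, not from the answer):
SerialPort port = new SerialPort("COM1", 9600);
port.NewLine = "\n";   // the terminator the device appends to each response
port.DataReceived += (s, e) =>
{
    // ReadLine() returns one complete response; no manual buffering needed
    string response = port.ReadLine();
    // handle the response here
};
port.Open();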
You'll get the ObjectDisposed exception when you close a form but don't ensure that the device stops sending data. Be sure to use only BeginInvoke in the DataReceived event, not Invoke. And don't call BeginInvoke if the form's IsDisposed property is true.
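A sketch of that guard inside the DataReceived handler (the listbox update is a hypothetical example of UI work):
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    string line = port.ReadLine();
    if (this.IsDisposed) return;   // form already closed; don't touch it
    // BeginInvoke queues the UI update without blocking this event thread,
    // so closing the form can't deadlock the way Invoke can.
    this.BeginInvoke(new Action(() => listBox1.Items.Add(line)));
}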

I can't add much to Hans' answer except to say that one of the biggest traps I have seen is that people tend to expect that when the DataReceived event fires, all of the bytes they are waiting for are present.
For example, if your message protocol is 20 bytes long, the DataReceived event fires and you try to read 20 bytes. They may all be there, they may not. Pretty likely they won't be, depending on your baud rate.
You need to check the BytesToRead property of the port you are reading from, and Read that amount into your buffer. If and when more bytes are available, the DataReceived event will fire again.
Note that the DataReceived event will fire when the number of bytes to receive is at least equal to the ReceivedBytesThreshold property of the serial port. By default I believe this is set to a value of 1.
If you set this to 10 for example, the event will fire when there are 10 or more bytes waiting to be received, but not fewer. This may or may not cause problems, and it is my personal preference to leave this property value set to 1, so that all data received will fire the event, even if only 1 byte is received.
Do not make the mistake that this will cause the event to fire for every single byte received - it won't do that.
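Putting those points together, here is a sketch of accumulating a fixed-size message across several DataReceived firings (the 20-byte length follows the example above; the field names are assumptions):
private readonly byte[] message = new byte[20];   // fixed protocol size, assumed
private int count;

private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Drain everything available; DataReceived won't necessarily re-fire
    // for bytes already sitting in the driver buffer.
    while (port.BytesToRead > 0)
    {
        int toRead = Math.Min(port.BytesToRead, message.Length - count);
        count += port.Read(message, count, toRead);
        if (count == message.Length)
        {
            count = 0;
            // a complete 20-byte message now sits in 'message'; hand it off here
        }
    }
}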

Related


Communication between LAN and Microcontroller

So I have a kind of strange problem. I'm using LAN for communication with a microcontroller. Everything was working perfectly, meaning I can send and receive data. For receiving data I'm using a simple method: Thread.Sleep(1) in a for loop in which I keep checking client.GetStream().DataAvailable for true, where client is a TcpClient.
Now, for one process I have to send and receive to the microcontroller at a higher baud rate. I was using 9600 for all other operations and everything was fine. Now, with 115200, client.GetStream().DataAvailable always seems to be false.
What could be the problem?
PS: Another way to communicate with the microcontroller (all chosen by the user) is serial communication. This still works fine at the higher baud rate.
Here is a code snippet:
using (client = new TcpClient(IP_String, LAN_Port))
{
    client.SendTimeout = 200;
    client.ReceiveTimeout = 200;
    stream = client.GetStream();
    .
    .
    bool OK = false;
    stream.Write(ToSend, 0, ToSend.Length);
    for (int j = 0; j < 1000; j++)
    {
        if (stream.DataAvailable)
        {
            OK = true;
            break;
        }
        Thread.Sleep(1);
    }
    .
    .
}
EDIT:
While monitoring the communication with a listening device, I realized that the bits actually arrive and that the device actually answers. The one and only problem seems to be that the DataAvailable flag is not being raised. I should probably find another way to check data availability. Any ideas?
I've been trying to think of things I've seen that act this way...
I've seen serial chips that say they'll do 115,200, but actually won't. See what happens if you drop the baud rate one notch. Either way you'll learn something.
Some microcontrollers "bit-bang" the serial port by having the CPU raise and lower the data pin and essentially go through the bits, banging 1 or 0 onto the serial pin. When a byte comes in, they read it, and do the same thing.
This does save money (no serial chip) but it is an absolute hellish nightmare to actually get working reliably. 115,200 may push a bit-banger too hard.
This might be a subtle microcontroller problem. Say you have a receiving serial chip which asserts a pin when a byte has come in, usually something like DRQ* for "Data Request" (the * in DRQ* means that 0 volts is the "we have a byte" condition) (c'mon, people, a * isn't always a pointer :-). Well, DRQ* requests an interrupt, the firmware and CPU take the interrupt, read the serial chip's byte, and stash it into some handy memory buffer. Then they return from the interrupt.
A problem can emerge if you're getting data very fast. Let's assume data has come in, serial chip got a byte ("#1" in this example), asserted DRQ*, we interrupted, the firmware grabs and stashes byte #1, and returns from interrupt. All well and good. But think what happens if another byte comes winging in while that first interrupt is still running. The serial chip now has byte #2 in it, so it again asserts the already-asserted DRQ* pin. The interrupt of the first byte completes. What happens?
You hang.
This is because it's the -edge- of DRQ*, physically going from 5V to 0V, that actually causes the CPU interrupt. On the second byte, DRQ* started at 0 and was set to 0. So DRQ* is (still) asserted, but there's no -edge- to tell the interrupt hardware/CPU that another byte is waiting. And now, of course, all the rest of the incoming data is also dropped.
See why it gets worse at higher speeds? The interrupt routine is fielding data more and more quickly, and typically doing circular I/O buffer calculations within the interrupt handler, and it must be fast and efficient, because fast input can push the interrupt handler to where a full new byte comes in before the interrupt finishes.
This is why it's a good idea to check DRQ* during the interrupt handler to see if another byte (#2) is already waiting (if so, just read it in, to clear the serial chip's DRQ*, and stash the byte in memory right then), or use "level triggering" for interrupts, not "edge triggering". Edge triggering definitely has good uses, but you need to watch out for this.
I hope this is helpful. It sure took me long enough to figure it out the first time. Now I take great care on stuff like this.
Good luck, let me know how it goes.
thanks,
Dave Small

C# How to receive information from serial port that doesn't end in "\r" or "\n"

In my application I have a serial port object and a listbox. In the DataReceived event, I send serialPort.ReadLine() to the listbox. If I write an "n" character to the serial port, nothing gets added to the listbox, because what is received doesn't end in "\r" or "\n".
What is the correct way to read information from a serial port? (Keep in mind that I need to keep the full string/char[] of the last thing received.)
The 'correct' way depends heavily on implementation.
The SerialPort.ReadLine() method expects a CR/LF as a means to delimit a payload unit. And by "thing", I imagine you mean exactly that: a message, payload, or package (as in one meaningful, functional unit of information).
What SerialPort.ReadLine() does is wrap the whole "receive everything coming from the buffer and wait for an end-of-payload mark before continuing" mechanism for you.
If you'd rather have the raw incoming content as soon as it arrives, then you may consider changing your code to use SerialPort.Read() instead.
If your message consists of an exact number of bytes (sometimes the case with sensor data protocols), you can define the bytes you expect, but you should set a timeout in this case:
serialPort.ReadTimeout = timeOut;
serialPort.Read(responseBytes, 0, bytesExpected);
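Since Read() may return fewer bytes than requested, a fuller sketch of that fixed-length pattern could look like this (the names and timeout value are placeholders):
byte[] responseBytes = new byte[bytesExpected];
serialPort.ReadTimeout = 500;   // ms; Read() throws TimeoutException if the device stalls
int got = 0;
while (got < bytesExpected)
{
    // Read() returns as soon as at least one byte is available
    got += serialPort.Read(responseBytes, got, bytesExpected - got);
}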

What is the minimum number of bytes that will cause Socket.Receive to return?

We are using an application protocol which specifies the length indicator of the message in the first 4 bytes. Socket.Receive will return as much data as is in the protocol stack at the time, or block until data is available. This is why we have to continuously read from the socket until we receive the number of bytes in the length indicator. Socket.Receive will return 0 if the other side closed the connection. I understand all that.
Is there a minimum number of bytes that has to be read? The reason I ask is that from the documentation it seems entirely possible that the entire length indicator (4 bytes) might not be available when Socket.Receive returns. We would then have to keep trying. It would be more efficient to minimize the number of times we call Socket.Receive, because it has to copy things in and out of buffers. So is it safer to get a single byte at a time for the length indicator, is it safe to assume that 4 bytes will always be available, or should we keep trying to get 4 bytes using an offset variable?
The reason I think there may be some sort of default minimum level is that I came across a ReceiveLowWater variable that I can set in the socket options, but this appears to only apply to BSD. See SO_RCVLOWAT on MSDN.
It isn't really that important but I am trying to write unit tests. I have already wrapped a standard .Net Socket behind an interface.
is it safe to assume that 4 bytes will always be available
NO. Never. What if someone is testing your protocol with, say, telnet and a keyboard? Or over a really slow or busy connection? You can receive one byte at a time, or a "length indicator" split over multiple Receive() calls. This isn't a unit-testing matter, it's a basic socket matter that causes problems in production, especially under stressful situations.
or should we keep trying to get 4 bytes using an offset variable?
Yes, you should. For your convenience, you can use the Socket.Receive() overload that allows you to specify a number of bytes to be read, so you won't read too much. But please note that it can return less than required; that's what the offset parameter is for, so it can continue writing into the same buffer:
byte[] lenBuf = new byte[4];
int offset = 0;
while (offset < lenBuf.Length)
{
    int received = socket.Receive(lenBuf, offset, lenBuf.Length - offset, SocketFlags.None);
    if (received == 0)
    {
        // connection gracefully closed; handle that and stop reading,
        // otherwise Receive() keeps returning 0 and the loop never ends
        break;
    }
    offset += received;
}
// Here you're ready to parse lenBuf
The reason that I think that there may be some sort of default minimum level is that I came across a ReceiveLowWater variable that I can set in the socket options. But this appears to only apply to BSD.
That is correct; the "receive low water" option is only included for backwards compatibility and does nothing apart from throwing errors, as per MSDN (search for SO_RCVLOWAT):
"This option is not supported by the Windows TCP/IP provider. If this option is used on Windows Vista and later, the getsockopt and setsockopt functions fail with WSAEINVAL. On earlier versions of Windows, these functions fail with WSAENOPROTOOPT."
So I guess you'll have to use the offset. It's a shame, because it could enhance performance. However, as @cdleonard pointed out in a comment, the performance penalty from keeping an offset variable will be minimal, as you'll usually receive the four bytes at once.
No, there isn't a minimum buffer size; the length passed to the receive call just needs to fit in the actual buffer space.
If you send a length in four bytes before the message's actual data, the recipient needs to handle the cases where 1, 2, 3, or 4 bytes are returned, keep repeating the read until all four bytes are received, and then repeat the procedure to receive the actual data.
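To make that concrete, here is a sketch of the two-phase read (it assumes a big-endian 4-byte length header and using directives for System.Net and System.Net.Sockets; adjust to your actual protocol):
// Read exactly 'count' bytes, or return false if the peer closed the connection.
static bool ReceiveExactly(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int received = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (received == 0) return false;   // graceful close
        offset += received;
    }
    return true;
}

// Phase 1: the 4-byte length indicator. Phase 2: the payload itself.
byte[] header = new byte[4];
if (ReceiveExactly(socket, header, 4))
{
    int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
    byte[] payload = new byte[length];
    ReceiveExactly(socket, payload, length);
}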

Receiving data in TCP

If I send 1000 bytes over TCP, is it guaranteed that the receiver will get the entire 1000 bytes "together"? Or might he first get only 500 bytes and receive the remaining bytes later?
EDIT: The question comes from the application's point of view. If the 1000 bytes are reassembled into a single buffer before they reach the application, then I don't care whether they were fragmented on the way.
See Transmission Control Protocol:
TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer.
A "stream" means that there is no message boundary from the receiver's point of view. You could get one 1000 byte message or one thousand 1 byte messages depending on what's underneath and how often you call read/select.
Edit: Let me clarify from the application's point of view. No, TCP will not guarantee that a single read gives you all of the 1000 bytes (or 1 MB, or 1 GB) the sender may have sent. Thus, a protocol above TCP usually contains a fixed-length header with the total content length in it. For example, you could always send 1 byte that indicates the total length of the content in bytes, which would support up to 255 bytes.
As other answers indicated, TCP is a stream protocol -- every byte sent will be received (once and in the same order), but there are no intrinsic "message boundaries" -- whether all bytes are sent in a single .send call, or multiple ones, they might still be received in one or multiple .receive calls.
So, if you need "message boundaries", you need to impose them on top of the TCP stream, IOW, essentially, at application level. For example, if you know the bytes you're sending will never contain a \0, null-terminated strings work fine; various methods of "escaping" let you send strings of bytes which obey no such limitations. (There are existing protocols for this but none is really widespread or widely accepted).
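For instance, a sketch of length-prefix framing on the send side (the big-endian header is an assumption; the receiver must read the 4 bytes back with the matching byte order):
// Prefix each message with its length so the receiver can find the
// boundary no matter how TCP splits the stream into reads.
static void SendMessage(Socket socket, byte[] body)
{
    byte[] header = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(body.Length));
    socket.Send(header);   // 4-byte big-endian length
    socket.Send(body);
}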
Basically, as far as TCP goes, it only guarantees that the data sent from one end will arrive at the other end in the same order.
Now, usually what you'll have to do is keep looping, appending to an internal buffer, until you have received your 1000-byte "packet",
because the recv command, as mentioned, returns how much has actually been received.
Usually you'll also have to implement a protocol on top of TCP to make sure you send data at an appropriate speed, because if you send() all the data in one run it can overload the underlying networking stack, which will cause complications.
So usually the protocol includes a tiny acknowledgement packet sent back to confirm that the 1000-byte packet was received.
You decide in your protocol how many bytes each message shall contain; in your case it's 1000. The following is working C# code to achieve this. The method returns once 1000 bytes have been read; receiving 0 bytes is treated as an abort signal, which you can tailor to your needs.
Usage:
strMsg = ReadData(thisTcpClient.Client, 1000, out bDisconnected);
Following is the method:
string ReadData(Socket sckClient, int nBytesToRead, out bool bShouldDisconnect)
{
    bShouldDisconnect = false;
    byte[] byteBuffer = new byte[nBytesToRead];
    Array.Clear(byteBuffer, 0, byteBuffer.Length);
    int nDataRead = 0;
    int nStartIndex = 0;
    while (nDataRead < nBytesToRead)
    {
        int nBytesRead = sckClient.Receive(byteBuffer, nStartIndex, nBytesToRead - nStartIndex, SocketFlags.None);
        if (0 == nBytesRead)
        {
            bShouldDisconnect = true;
            // 0 bytes received; assuming disconnect signal
            break;
        }
        nDataRead += nBytesRead;
        nStartIndex += nBytesRead;
    }
    return Encoding.Default.GetString(byteBuffer, 0, nDataRead);
}
Let us know if this didn't help you (0: Good luck.
Yes, there is a chance of receiving the data part by part. I hope this MSDN article and the following example (taken from the article, for quick review) will be helpful to you if you are using Windows sockets.
void CChatSocket::OnReceive(int nErrorCode)
{
    CSocket::OnReceive(nErrorCode);
    DWORD dwReceived;
    if (IOCtl(FIONREAD, &dwReceived))
    {
        if (dwReceived >= dwExpected) // Process only if you have enough data
            m_pDoc->ProcessPendingRead();
    }
    else
    {
        // Error handling here
    }
}
TCP guarantees that the receiver will get all 1000 bytes, but not necessarily in order on the wire (though it will appear in order to the receiving application) and not necessarily all at once (unless you craft the packet yourself and make it so).
That said, for a payload as small as 1000 bytes, there is a good chance it'll be sent in one packet as long as you send it in one call, though for larger transmissions it may not.
The only thing that the TCP layer guarantees is that the receiver will receive:
all the bytes transmitted by the sender
in the same order
There are no guarantees at all about how the bytes might be split up into "packets". All the stuff you might read about MTU, packet fragmentation, maximum segment size, or whatever else is all below the layer of TCP sockets, and is irrelevant. TCP provides a stream service only.
With reference to your question, this means that the receiver may receive the first 500 bytes, then the next 500 bytes later. Or, the receiver might receive the data one byte at a time, if that's what it asks for. This is the reason that the recv() function takes a parameter that tells it how much data to return, instead of it telling you how big a packet is.
The Transmission Control Protocol guarantees delivery of all packets by requiring the receiver to acknowledge each packet to the sender. By this definition, the receiver will always receive the payload in chunks when the size of the payload exceeds the MTU (maximum transmission unit).
For more information please read Transmission Control Protocol.
The IP packets may get fragmented in transit.
So the destination machine may receive multiple packets, which will be reassembled by the TCP/IP stack. Depending on the network API you are using, the data will be given to you either reassembled or as raw packets.
It depends on the established MTU (maximum transmission unit). If your established connection (once handshaked) uses an MTU of 512 bytes, you will need two or more TCP packets to send 1000 bytes.
