Serial connection falling out of sync: Arduino and C# connection - c#

I am writing a simple app where an Arduino sends the data it reads to a C# program using simple Serial.write() calls. I send the data from the Arduino with a special syntax, and the C# program reads and interprets it and eventually graphs the data.
For example, the Arduino program sends
Serial.write("DATA 1 3 90;");
which means: the X and Y values in chart 1 are 3 and 90, respectively.
And the C# program reads it using
private async void Serialport_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    await serialport.BaseStream.ReadAsync(buffer, 0, 1024);
    await Task.Run(() => Graph()); // Graph function graphs the data
}
In the Graph() function, I convert the buffer to a string with Encoding.ASCII.GetString(buffer); and interpret the data according to the syntax. But for some reason, either the C# program doesn't read fast enough or the Arduino doesn't send fast enough, and the messages are sometimes cut off. For example, one packet I received is:
DATA 2 5 90;DATA 5 90 10;DATA 1 1 <|-------- I cannot get the last Y Value
And the next chunk of data starts with
75;DATA 5 4 60;DATA 14 5 6;DATA
^
|
+==================== the missing Y value is here
BTW, all the packets are 32 bytes.
So I either need to get the data line by line, but I cannot do that because the Arduino sends it too fast:
Serial.write("DATA 1 3 90;");
Serial.write("DATA 2 4 40;");
comes to C# as DATA 1 3 90;DATA 2 4 40; as one whole block.
Or I need to get it all at once?
(I prefer getting it line by line.)
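For reference, one delay-free way to keep the messages intact is to buffer the incoming characters and only hand complete, ';'-terminated messages to the parser. A minimal sketch of that idea (the StringBuilder accumulator and the ParseAndGraph helper are illustrative names, not code from the question):
// Delay-free framing sketch (assumptions: every message ends with ';',
// ParseAndGraph is a hypothetical parse/plot helper).
// Requires: using System.Text; using System.IO.Ports;
private readonly StringBuilder pending = new StringBuilder();

private void Serialport_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    // Append whatever has arrived; it may hold zero, one or many complete
    // messages plus a trailing fragment that is kept for the next event.
    pending.Append(serialport.ReadExisting());

    string all = pending.ToString();
    int last = all.LastIndexOf(';');
    if (last < 0)
        return; // no complete message yet

    foreach (string message in all.Substring(0, last)
                                  .Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries))
    {
        ParseAndGraph(message); // e.g. "DATA 1 3 90"
    }

    pending.Clear();
    pending.Append(all.Substring(last + 1)); // keep the incomplete tail
}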
UPDATE:
When delay(1000); is added between the sends, the data is processed correctly. Without the delays, the Arduino sends the data too fast, and the data clumps together and gets cut off. How can I make sure there is no delay in the data, yet the data stays reliable and uninterrupted?
UPDATE 2:
When the buffer size is increased to 100 * 1024 * 1024, as well as the readCount in the ReadAsync method, the read message is much larger but still has interruptions.
I can give you any extra information.
PS. I didn't include the whole code because it is a large block, but I can give it piece by piece if you tell me which part you want.
Any help is appreciated.

Related

How to keep C# and Arduino communication in sync?

I am writing a simple app where an Arduino sends the data it reads to a C# program using simple Serial.write() calls. I send the data from the Arduino with a special syntax, and the C# program reads and interprets it and eventually graphs the data.
For example, the Arduino program sends
Serial.write("DATA 1 3 90;");
which means: the X and Y values in chart 1 are 3 and 90, respectively. And the C# program reads it using
private async void Serialport_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    await serialport.BaseStream.ReadAsync(buffer, 0, 1024);
    await Task.Run(() => Graph()); // Graph function graphs the data
}
In the Graph() function, I convert the buffer to a string with Encoding.ASCII.GetString(buffer); and interpret the data according to the syntax. But for some reason, either the C# program doesn't read fast enough or the Arduino doesn't send fast enough, and the messages are sometimes cut off. For example, one packet I received is:
DATA 2 5 90;DATA 5 90 10;DATA 1 1 <|-------- I cannot get the last Y Value
And the next chunk of data starts with
75;DATA 5 4 60;DATA 14 5 6;DATA
^
|
+==================== the missing Y value is here
BTW, all the packets are 32 bytes.
So I either need to get the data line by line, but I cannot do that because the Arduino sends it too fast:
Serial.write("DATA 1 3 90;");
Serial.write("DATA 2 4 40;");
comes to C# as DATA 1 3 90;DATA 2 4 40; as one whole block. Or I need to get it all at once?
(I prefer getting it line by line.)
UPDATE:
When delay(1000); is added between the sends, the data is processed correctly. Without the delays, the Arduino sends the data too fast, and the data clumps together and gets cut off. How can I make sure there is no delay in the data, yet the data stays reliable and uninterrupted?
UPDATE 2:
When the buffer size is increased to 100 * 1024 * 1024, as well as the readCount in the ReadAsync method, the read message is much larger but still has interruptions.
I can give you any extra information.
UPDATE 3:
I'll try to answer the comments:
First, I am already doing that; I forgot to mention it. I split the incoming data on the semicolons and interpret the pieces one by one with a foreach loop.
Second, I am using the buffer globally instead of passing it as an argument to the Graph() function.
Third, I'll try Serial.println() and give you feedback on it.
UPDATE 4:
I was able to solve the problem by replacing ReadAsync() with SerialPort.ReadLine() and Serial.write() with Serial.println(). The solution was suggested by Hans Passant.
UPDATE 5:
I still encounter the same problem, but in a different form. The Graph() function is async, so when I write the data very fast I miss some data while trying to graph and read other data.
Also, because the Graph() function is async, the chart control plots the points out of order and the graph gets messy.
This is easily solvable if I add delay(50) between the Serial.println() calls.
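As an alternative to the delay(50) workaround, a common pattern is to parse on the serial thread and marshal only the plotting to the UI thread, which keeps the points in arrival order. A rough sketch (assuming a WinForms Chart named chart1; the series-index mapping is an assumption):
// Sketch only: chart1 and the series index mapping are illustrative.
// Control.Invoke executes synchronously on the UI thread, so points are
// added in the same order the messages arrived.
private void PlotPoint(int chartIndex, double x, double y)
{
    chart1.Invoke((Action)(() =>
    {
        chart1.Series[chartIndex].Points.AddXY(x, y);
    }));
}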
PS. I didn't include the whole code because it is a large block, but I can give it piece by piece if you tell me which part you want.
Any help is appreciated.

Socket and ports setup for high-speed audio/video streaming

I have a one-on-one connection between a server and a client. The server is streaming real-time audio/video data.
My question may sound weird, but should I use multiple ports/sockets or only one? Is it faster to use multiple ports, or does a single one offer better performance? Should I have one port only for messages, one for video and one for audio, or is it simpler to package the whole thing into a single port?
One of my current problems is that I need to first send the size of the current frame, as the size - in bytes - may change from one frame to the next. I'm fairly new to networking, but I haven't found any mechanism that would automatically detect the correct range for a specific object being transmitted. For example, if I send a packet 2934 bytes long, do I really need to tell the receiver the size of that packet?
I first tried to package the frames as fast as they were coming in, but I found that the receiving end would sometimes not get the appropriate number of bytes. Most of the time it would read faster than I send them, getting only a partial frame. What's the best way to get exactly the appropriate number of bytes as quickly as possible?
Or am I looking too low and there's a higher-level class/framework used to handle object transmission?
I think it is better to use an object mechanism and send the data in an interleaved fashion. This mechanism may work faster than a multiple-port mechanism.
eg:
class Data
{
    DataType Type;      // Audio or Video
    int Size;           // size of the data buffer
    byte[] DataBuffer;  // contents depend on the type
}
'DataType' and 'Size' are always of constant size. On the client side, take the 'DataType' and 'Size' and then read the specified size of the corresponding sent data (Audio/Video).
Just making something up off the top of my head. Shove "packets" like this down the wire:
1 byte - packet type (audio or video)
2 bytes - data length
(whatever else you need)
|
| (raw data)
|
So whenever you get one of these packets on the other end, you know exactly how much data to read, and where the beginning of the next packet should start.
[430 byte audio L packet]
[430 byte audio R packet]
[1000 byte video packet]
[20 byte control packet]
[2000 byte video packet]
...
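For illustration, a rough reading loop for a header like this could look as follows (a sketch only: the 2-byte length is assumed big-endian and the helper names are made up):
// Requires: using System.IO;
static byte[] ReadExactly(Stream stream, int count)
{
    // Loop because a single Read may return fewer bytes than requested.
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException();
        offset += read;
    }
    return buffer;
}

static void ReadOnePacket(Stream stream)
{
    byte packetType = ReadExactly(stream, 1)[0];   // 1 byte - audio or video
    byte[] len = ReadExactly(stream, 2);           // 2 bytes - data length
    int length = (len[0] << 8) | len[1];           // assumed big-endian
    byte[] payload = ReadExactly(stream, length);  // exactly one frame
    // hand payload off to the audio or video decoder based on packetType
}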
But why re-invent the wheel? There are protocols to do these things already.

Improper format reading the serial port in C#

I am reading data from an Arduino board using C#.
In C#, I have the following:
// Write a string
port.Write("x");
Thread.Sleep(50);
SerialPortRead();
... and in SerialPortRead() I have:
private static void SerialPortRead()
{
    port.DataReceived += new SerialDataReceivedEventHandler(port_DataReceived);
}

private static void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    Console.WriteLine(port.ReadExisting());
}
and the output would look something like:
329327
32
7
327
3
26
327
3
26
32
7
What did I do wrong? The output should be around 326-329, where this value comes from a compass hooked up to one of the pins that I am reading from the Arduino.
Note that:
In the Arduino code, I have a serial read routine that watches for the input character x and returns the value of the compass.
I'd guess that you are reading faster than the compass is writing, so your program believes it is receiving two readings, while if it had delayed reading a little it would have received only one.
This hypothesis is supported by the fact that if you group the read data into groups of three digits, you get headings in the range you expect.
Try adding a delay before reading after sending the trigger character, or, even better, add a control character to signal that the compass has written the entire heading, and then read until you receive that character.
Edit: I now noticed that you already have a sleep - so the option remaining would be a separation/termination char.
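A tiny sketch of that termination-character approach, assuming the Arduino ends each reading with a newline:
// Trigger one reading, then block until the terminator arrives.
port.Write("x");                      // ask the Arduino for one compass value
string heading = port.ReadTo("\n");   // returns only a complete reading, e.g. "327"
Console.WriteLine(heading);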

Fragmented length prefix causes next data read from buffer use incorrect message length

I'm one of those guys who come here to find answers to questions that others have asked, and I think I have never asked anything myself, but after two days of searching unsuccessfully I decided it's time to ask something myself. So here it is...
I have a TCP server and client written in C#, .NET 4, asynchronous sockets using SocketAsyncEventArgs. I have a length-prefixed message framing protocol. Overall everything works just fine, but one issue keeps bugging me.
The situation is like this (I will use small numbers just as an example):
Let's say the server has a send buffer 16 bytes long.
It sends a message which is 6 bytes long and prefixes it with a 4-byte length prefix. The total message length is 6+4=10.
The client reads the data and receives a buffer 16 bytes long (yes, 10 bytes of data and 6 bytes equal to zero).
The received buffer looks like this: 6 0 0 0 56 21 33 1 5 7 0 0 0 0 0 0
So I read the first 4 bytes, which is my length prefix, determine that my message is 6 bytes long, read it as well, and everything is fine so far. Then I have 16-10=6 bytes left to read. All of them are zeroes. I read 4 of them as the next length prefix, so it's a zero-length message, which is allowed as a keep-alive packet.
Remaining data to read: 0 0
Now the issue "kicks in". I have only 2 remaining bytes to read, and they are not enough to complete a 4-byte length prefix, so I read those 2 bytes and wait for more incoming data. The server is not aware that I'm still reading a length prefix (I'm just reading all those zeroes in the buffer) and sends another message, correctly prefixed with 4 bytes. The client assumes the first 2 bytes of that new data are the missing 2 bytes of the prefix. I receive the data on the client side and read the first two bytes to complete the 4-byte length buffer. The result is something like this:
lengthBuffer = new byte[4]{0, 0, 42, 0}
which then translates into a message length of 2752512. So my code will continue reading the next 2752512 bytes to complete the message...
In every single message-framing example I have seen, zero-length messages are supported as keep-alives, and every example I've seen doesn't do anything more than I do. The problem is that I do not know how much data I have to read when I receive it from the server. Since I have a buffer partially filled with zeroes, I have to read it all, as those zeroes could be keep-alives sent from the other end of the connection.
I could drop zero-length messages, stop reading the buffer after the first empty message, and use custom messages for my keep-alive mechanism; that should fix this issue. But I want to know if I am missing something or doing something wrong, since every code example I've seen seems to have the same issue (?)
UPDATE
Marc Gravell, you sir took the words right out of my mouth. I was about to update that the issue is with sending the data. The problem is that initially, when exploring .NET sockets and SocketAsyncEventArgs, I came across this sample: http://archive.msdn.microsoft.com/nclsamples/Wiki/View.aspx?title=socket%20performance
It uses a reusable pool of buffers. It simply takes the predefined maximum number of client connections allowed, for example 10, takes the maximum single buffer size, for example 512, and creates one large buffer for all of them. So 512 * 10 * 2 (for send and receive) = 10240.
So we have byte[] buff = new byte[10240];
Then, for each client that connects, it assigns a piece of this large buffer. The first connected client gets the first 512 bytes for data-reading operations and the next 512 bytes (offset 512) for data-sending operations. Therefore the code ended up with an already-allocated send buffer whose size is 512 (exactly the number the client later receives as BytesTransferred). This buffer is populated with data, and all the remaining space out of these 512 bytes is sent as zeroes.
Strangely enough, this example is from MSDN. The reason there is a single huge buffer is to avoid fragmenting heap memory when a buffer gets pinned and the GC can't move it, or something like that.
Comment from BufferManager.cs in the provided example (see link above):
This class creates a single large buffer which can be divided up and
assigned to SocketAsyncEventArgs objects for use with each socket I/O
operation. This enables bufffers to be easily reused and gaurds
against fragmenting heap memory.
So the issue is pretty much clear. Any suggestions on how I should resolve this are welcome :) Is it true what they say about fragmented heap memory? Is it OK to create a data buffer "on the fly"? If so, will I have memory issues when the server scales to a few hundred or even thousands of clients?
I guess the problem is that you are treating the trailing zeros in the buffer you read as data. This is not data. It is garbage. No one ever sent it to you.
The Stream.Read call returns the number of bytes actually read. You should not interpret the rest of the buffer in any way.
The problem is that I do not know how much data I have to read when I
receive it from the server.
Yes, you do: Use the return value from Stream.Read.
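In other words, only the first bytesRead bytes of the buffer belong to the stream; a small sketch with illustrative names:
byte[] buffer = new byte[16];
int bytesRead = stream.Read(buffer, 0, buffer.Length);
// Only buffer[0 .. bytesRead - 1] was sent by the peer.
// Append exactly that slice to whatever accumulates your framed messages;
// never parse the remainder of the array.
messageAccumulator.Append(buffer, 0, bytesRead); // hypothetical accumulator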
That sounds simply like a bug in either your send or receive code. You should only get BytesTransferred for the data that was actually sent, or some number smaller than that if it arrives in fragments. The first thing I would wonder is: did you set up the send correctly? I.e., if you have an oversized buffer, a correct implementation might look like:
args.SetBuffer(buffer, 0, actualBytesToSend);
if (!socket.SendAsync(args)) { /* whatever */ }
where actualBytesToSend can be much less than buffer.Length. My initial suspicion is that you are doing something like:
args.SetBuffer(buffer, 0, buffer.Length);
and therefore sending more data than you have actually populated.
I should emphasize: there is something wrong in either your send or receive; I do not believe, at least without an example, that there is some fundamental underlying bug in the BCL here - I use the async API extensively, and it works fine - but you do need to accurately track the data you are sending and receiving at all points.
"Now server is not aware that I'm still reading length prefix (I'm just reading all those zeroes in the buffer) and sends another message correctly prefixed with 4 bytes.".
Why? How does the server know what you are and aren't reading? If the server retransmits any part of a message it is in error. TCP already does that for you.
There seems to be something radically wrong with your server.

Why is the calculated checksum not matching the BCC sent over the serial port?

I've got a little application written in C# that listens on a SerialPort for information to come in. The information comes in as: STX + data + ETX + BCC. We then calculate the BCC of the transmission packet and compare. The function is:
private bool ConsistencyCheck(byte[] buffer)
{
    byte expected = buffer[buffer.Length - 1];
    byte actual = 0x00;
    for (int i = 1; i < buffer.Length - 1; i++)
    {
        actual ^= buffer[i];
    }
    if ((expected & 0xFF) != (actual & 0xFF))
    {
        if (AppTools.Logger.IsDebugEnabled)
        {
            AppTools.Logger.Warn(String.Format("ConsistencyCheck failed: Expected: #{0} Got: #{1}", expected, actual));
        }
    }
    return (expected & 0xFF) == (actual & 0xFF);
}
And it seems to work, more or less. It correctly excludes the STX and the BCC and correctly includes the ETX in its calculation. It works a very large percentage of the time; however, we have at least two machines we are running this on, both Windows 2008 64-bit, on which the BCC calculation NEVER adds up. Pulling from a recent log, in one case the byte 20 was sent and I calculated 16, and in another 11 was sent and I calculated 27.
I'm absolutely stumped as to what is going on here. Is there perhaps a 64 bit or Windows 2008 "gotcha" I'm missing here? Any help or even wild ideas would be appreciated.
EDIT:
Here's the code that reads the data in:
private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    // Retrieve the number of bytes in the buffer
    int bytes = serialPort.BytesToRead;
    // Create a byte array to hold the awaiting data
    byte[] received = new byte[bytes];
    // Read the data and store it
    serialPort.Read(received, 0, bytes);
    DataReceived(received);
}
And the DataReceived() function takes that data and appends it to a global StringBuilder object. It then stays in the StringBuilder until it's passed to those various functions, at which point .ToString() is called on it.
EDIT2: Changed the code to reflect my altered routines that operate on bytes/byte arrays rather than strings.
EDIT3: I still haven't figured this out, and I've gotten more test data with completely inconsistent results (the amount I'm off from the sent checksum varies each time, with no pattern). It feels like I'm just calculating the checksum wrong, but I don't know how.
The buffer is defined as a string, while I suspect the data you are transmitting is bytes. I would recommend using byte arrays (even if you are sending ASCII/UTF/whatever encoding), and only converting the data to a string after the checksum is valid.
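A sketch of that byte-array approach, assembling a full STX ... ETX + BCC frame before checking it (the STX/ETX values and the List<byte> accumulator are assumptions for illustration):
// Requires: using System.Collections.Generic; using System.IO.Ports;
// Note: this sketch assumes the ETX value never occurs inside the data.
private const byte STX = 0x02, ETX = 0x03;            // assumed control values
private readonly List<byte> frame = new List<byte>();

private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    int count = serialPort.BytesToRead;
    byte[] chunk = new byte[count];
    serialPort.Read(chunk, 0, count);
    frame.AddRange(chunk);                             // keep raw bytes, no string conversion

    // A complete frame is STX ... ETX followed by one BCC byte.
    int etx = frame.IndexOf(ETX);
    while (etx >= 0 && frame.Count > etx + 1)
    {
        byte[] complete = frame.GetRange(0, etx + 2).ToArray();
        frame.RemoveRange(0, etx + 2);
        if (ConsistencyCheck(complete))
        {
            // only now decode complete[1 .. etx - 1] as text
        }
        etx = frame.IndexOf(ETX);
    }
}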
Computing a BCC is not standard; it is customer-defined. We program interfaces for our customers and have found many different algorithms: sum, XOR, masking, leaving out STX, ETX, or both, or leaving out all known bytes. For example, a package structure might be "STX, machine code, command code, data, ..., data, ETX, BCC", and the calculation of the BCC (customer specified!) might be "the binary sum of all bytes from the command code to the last data byte, inclusive, masked with 0xCD".
That is, we first add all the variable bytes. It makes no sense to include STX, ETX, or the machine code: if those bytes do not match, the frame is discarded anyway, since their values are checked as they arrive to make sure the frame starts and ends correctly and is addressed to the receiving machine. Summing only the bytes that can change also saves time, which matters when working with slow 4- or 8-bit microcontrollers. And note that in this example the bytes are summed, not XORed; other customers want something else. Second, after we have the sum (which can be 16 bits wide if it is not truncated during the addition), we mask it (bitwise AND) with the key (in this example 0xCD).
This kind of scheme is frequently used in all kinds of closed systems, such as ATMs (connecting a serial keyboard to an ATM), for protection reasons, on top of encryption and other things. So you really have to check (read "crack") how your two machines compute their (non-standard) BCCs.
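As a concrete illustration of the example scheme just described (a sum, not XOR, of the bytes from the command code through the last data byte, masked with 0xCD; the layout and mask are that example's, not a standard):
// Frame layout assumed: [0]=STX, [1]=machine code, [2]=command code,
// [3 .. n-3]=data, [n-2]=ETX, [n-1]=BCC.
static byte ComputeExampleBcc(byte[] frame)
{
    int sum = 0;
    for (int i = 2; i <= frame.Length - 3; i++)
        sum += frame[i];           // sum command code and data bytes only
    return (byte)(sum & 0xCD);     // bitwise AND with the customer-specified key
}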
Make sure you have the port set to accept null bytes somewhere in your port setup code. (This may be the default value; I'm not sure.)
port.DiscardNull = false;
Also, check the type of data arriving at the serial port, and accept only character data:
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType == SerialData.Chars)
    {
        // Your existing code
    }
}
