.NET SerialPort Write/Read optimization - c#

I have a UART device that I write a command to (via System.IO.Ports.SerialPort); the device then responds immediately.
So basically my approach is:
Write to SerialPort -> await Task.Delay -> Read from the port.
// The port is open all the time.
public async Task<byte[]> WriteAndRead(byte[] command)
{
    port.Write(command, 0, command.Length);
    await Task.Delay(timeout);
    var msglen = port.BytesToRead;
    if (msglen > 0)
    {
        byte[] message = new byte[msglen];
        int readbytes = 0;
        while (port.Read(message, readbytes, msglen - readbytes) <= 0)
            ;
        return message;
    }
    return null;
}
This works fine on my computer. But if I try it on another computer, for example, the BytesToRead property sometimes doesn't match: there are empty bytes in the data, or the answer is incomplete. (E.g. I get two bytes where I expect one: 0xBB, 0x00 or 0x00, 0xBB.)
I've also looked into the SerialPort.DataReceived event, but it fires too often and (as far as I understand) isn't really useful for this write-and-read approach, since I expect the answer from the device immediately.
Is there a better approach to a write-and-read?

Read the Remarks carefully at https://msdn.microsoft.com/en-us/library/ms143549(v=vs.110).aspx
You should not rely on the BytesToRead value to indicate message length.
You should know how much data you expect to read in order to decompose the message.
Also, as @itsme85 noticed, you are not updating readbytes, and therefore you always write the received bytes to the beginning of your array. Proper code that updates readbytes should look like this:
int r;
while (readbytes < msglen && (r = port.Read(message, readbytes, msglen - readbytes)) > 0)
{
    readbytes += r;
}
However, while you are reading the data, more data can arrive and your "message" might still be incomplete.
Rethink what you want to achieve.
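As a rough illustration of the "know how much you expect" approach, here is a minimal sketch that reads a fixed, expected reply length and uses the port's ReadTimeout instead of relying on BytesToRead; the expectedLength parameter and the 500 ms timeout are assumptions made for this example, not part of the original question:

// Sketch: write a command, then read exactly the number of bytes the
// protocol says the reply will have (expectedLength is assumed to be known).
public byte[] WriteAndReadExact(SerialPort port, byte[] command, int expectedLength)
{
    port.ReadTimeout = 500;                 // assumed timeout in milliseconds
    port.Write(command, 0, command.Length);

    byte[] reply = new byte[expectedLength];
    int readbytes = 0;
    while (readbytes < expectedLength)
    {
        // Read blocks until at least one byte arrives or ReadTimeout elapses
        // (a TimeoutException is thrown in the latter case).
        readbytes += port.Read(reply, readbytes, expectedLength - readbytes);
    }
    return reply;
}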

Related

C# - Stream.Read reading null bytes

When receiving multipart data from the browser (with a size greater than ~2 KB), I start receiving null '\0' bytes after the first few valid chunks when I use:
_Stream.Read(ByteArray, Offset, ContentLength);
But, if I divide the ContentLength into small buffers (around 2KB each) AND add a delay of 1ms after each call to Read(), then it works fine:
for (int i = 0; i < x; i++)
{
    _Stream.Read(ByteArray, Offset * i, BufferSize);
    System.Threading.Thread.Sleep(1);
}
But adding a delay is quite slow. How can I prevent reading null bytes? How can I know how many bytes the browser has written?
Thanks
The 0x00 bytes were not actually received; those positions in your array were simply never written to.
Stream.Read() returns the number of bytes actually read, which is in your case often less than BufferSize. Small amounts of data typically arrive in a single message, in which case the problem does not occur.
The delay might "work" in your test scenario because by then the network layer has buffered more than BufferSize data. It will probably fail in a production environment.
So you'll need to change your code into something like:
int remaining = ContentLength;
int offset = 0;
while (remaining > 0)
{
    int bytes = _Stream.Read(ByteArray, offset, remaining);
    if (bytes == 0)
    {
        throw new ApplicationException("Server disconnected before the expected amount of data was received");
    }
    offset += bytes;
    remaining -= bytes;
}

How to get the Socket Buffer size in C#

I have socket code communicating over TCP/IP. The machine I am communicating with has data in its buffer. At present I am trying to get that data using this code:
byte[] data = new byte[1024];
int recv = sock.Receive(data);
stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas more data is in the machine's buffer. Is this because I have used int recv = sock.Receive(data); and data is 1024 bytes?
If yes, how can I get the total buffer size and read it all into a string?
If you think you are missing some data, then you need to check recv and almost certainly: loop. Fortunately, ASCII is always single byte - in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    // process recv-many bytes
    // ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
Keep in mind that there is no guarantee that stringData will be any particular entire unit of work; what you send is not always what you receive, and that could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process.
Note, however, Receive always tries to return something (at least one byte), unless the inbound stream has closed - and will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do synchronous versus asynchronous receive (i.e. read synchronously while data is available, otherwise request an asynchronous read).
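For the synchronous-versus-asynchronous decision mentioned above, a minimal sketch might look like the following; the ProcessChunk helper and the OnReceiveCompleted callback are illustrative assumptions, not part of the original answer:

byte[] data = new byte[1024];

// Drain whatever is already buffered without blocking...
while (sock.Available > 0)
{
    int recv = sock.Receive(data, 0, Math.Min(data.Length, sock.Available), SocketFlags.None);
    ProcessChunk(data, recv);   // hypothetical handler for the bytes just read
}

// ...then switch to an asynchronous receive so the thread is not blocked
// waiting for the next bytes to arrive.
sock.BeginReceive(data, 0, data.Length, SocketFlags.None, OnReceiveCompleted, sock);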
Try something along these lines:
StringBuilder sbContent = new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data)) > 0)
{
    sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
To actually receive the data, you can loop along these lines:
byte[] data = new byte[tcpSocket.ReceiveBufferSize];
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
    // process receiveBytes bytes from data
}

Understanding the NetworkStream.EndRead()-example from MSDN

I tried to understand the MSDN example for NetworkStream.EndRead(). There are some parts that I do not understand.
So here is the example (copied from MSDN):
// Example of EndRead, DataAvailable and BeginRead.
public static void myReadCallBack(IAsyncResult ar)
{
    NetworkStream myNetworkStream = (NetworkStream)ar.AsyncState;
    byte[] myReadBuffer = new byte[1024];
    String myCompleteMessage = "";
    int numberOfBytesRead;

    numberOfBytesRead = myNetworkStream.EndRead(ar);
    myCompleteMessage =
        String.Concat(myCompleteMessage, Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));

    // message received may be larger than buffer size so loop through until you have it all.
    while (myNetworkStream.DataAvailable)
    {
        myNetworkStream.BeginRead(myReadBuffer, 0, myReadBuffer.Length,
            new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack),
            myNetworkStream);
    }

    // Print out the received message to the console.
    Console.WriteLine("You received the following message : " +
        myCompleteMessage);
}
It uses BeginRead() and EndRead() to read asynchronously from the network stream.
The whole thing is invoked by calling
myNetworkStream.BeginRead(someBuffer, 0, someBuffer.Length, new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack), myNetworkStream);
somewhere else (not displayed in the example).
What I think it should do is print the whole message received from the NetworkStream in a single WriteLine (the one at the end of the example). Notice that the string is called myCompleteMessage.
Now when I look at the implementation some problems arise for my understanding.
First of all: the example allocates a new method-local buffer myReadBuffer. Then EndRead() is called, which writes the received data into the buffer that BeginRead() was supplied with. This is NOT the myReadBuffer that was just allocated; how should the network stream know about it? So in the next line, numberOfBytesRead bytes from the empty buffer are appended to myCompleteMessage, which has the current value "". In the last line this message, consisting of a lot of '\0's, is printed with Console.WriteLine.
This doesn't make any sense to me.
The second thing I do not understand is the while-loop.
BeginRead is an asynchronous call. So no data is immediately read. So as I understand it, the while loop should run quite a while until some asynchronous call is actually executed and reads from the stream so that there is no data available any more. The documentation doesn't say that BeginRead immediately marks some part of the available data as being read, so I do not expect it to do so.
This example does not improve my understanding of those methods. Is this example wrong or is my understanding wrong (I expect the latter)? How does this example work?
I think the while loop around the BeginRead shouldn't be there. You don't want to execute BeginRead more than once before the EndRead is done. Also, the buffer needs to be declared outside the BeginRead callback, because you may need more than one read per packet/buffer.
There are some things you need to think about, such as: how long are my messages/blocks (fixed size)? Should I prefix each message with its length (variable size), i.e. <datalength><data><datalength><data>? A sketch of that length-prefixed framing follows the pseudo example below.
Don't forget it is a streaming connection, so multiple or partial messages/packets can be read in one read.
Pseudo example:
int bytesNeeded;
int bytesRead;

public void Start()
{
    bytesNeeded = 40; // you need to know how many bytes to expect
    bytesRead = 0;
    BeginReading();
}

public void BeginReading()
{
    myNetworkStream.BeginRead(
        someBuffer, bytesRead, bytesNeeded - bytesRead,
        new AsyncCallback(EndReading),
        myNetworkStream);
}

public void EndReading(IAsyncResult ar)
{
    int numberOfBytesRead = myNetworkStream.EndRead(ar);
    if (numberOfBytesRead == 0)
    {
        // disconnected
        return;
    }
    bytesRead += numberOfBytesRead;
    if (bytesRead == bytesNeeded)
    {
        // Handle buffer
        Start();
    }
    else
    {
        BeginReading();
    }
}
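And here is a minimal sketch of the length-prefixed (variable size) framing mentioned above, written against a blocking NetworkStream for brevity; the ReadExactly helper and the 4-byte little-endian length header are assumptions made for this example:

// Sketch: read one <datalength><data> frame from a blocking NetworkStream.
// A 4-byte little-endian length prefix is assumed.
public static byte[] ReadFrame(NetworkStream stream)
{
    byte[] header = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(header, 0);
    return ReadExactly(stream, length);
}

// Helper (hypothetical): keep reading until exactly 'count' bytes have arrived.
private static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = stream.Read(buffer, read, count - read);
        if (n == 0)
            throw new IOException("Connection closed before the full frame was received.");
        read += n;
    }
    return buffer;
}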

C# Serial Port communication

I'm trying to handle a serial port using the SerialPort class.
The application requires us to receive one command first and then give a reply within 20 ms; the problem is that there is a delay (up to 15 ms) between the actual command arriving and us reading it, so we don't have time to send the reply back.
The length of the command we need to read is fixed at 20 bytes, and we poll one byte from the input buffer each time:
serialPort.Read(input, 0, 1)
I don't know what is wrong with this process.
Why read one byte at a time? If you're expecting 20 bytes, you can write:
byte[] buffer = new byte[20];
int bytesRead;
int totalBytesRead = 0;
while (totalBytesRead < buffer.Length &&
       (bytesRead = serialPort.Read(buffer, totalBytesRead, buffer.Length - totalBytesRead)) != 0)
{
    totalBytesRead += bytesRead;
}
At that point, you have all 20 bytes or you've reached the end of the stream.
What do you mean by "there is a delay(up to 15ms) between the command we read, and the actual command,"?
Are you using the DataReceived event? I had a similar error some time ago; apparently some of the functionality is not invoked without using the event handler.
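For reference, a minimal sketch of wiring up that event and accumulating the fixed 20-byte command inside the handler; the HandleCommand helper and the commandBuffer field are assumptions made for this example:

// Sketch: accumulate the fixed-length 20-byte command in the DataReceived handler.
private readonly byte[] commandBuffer = new byte[20];  // assumed fixed command length
private int commandBytesRead;

public void Attach(SerialPort serialPort)
{
    serialPort.DataReceived += (sender, e) =>
    {
        SerialPort port = (SerialPort)sender;
        while (port.BytesToRead > 0 && commandBytesRead < commandBuffer.Length)
        {
            commandBytesRead += port.Read(commandBuffer, commandBytesRead,
                                          commandBuffer.Length - commandBytesRead);
        }

        if (commandBytesRead == commandBuffer.Length)
        {
            HandleCommand(commandBuffer);   // hypothetical: build and send the reply here
            commandBytesRead = 0;           // ready for the next command
        }
    };
}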

TCP Socket Communication Limit

Is there any limit on the size of data that can be received by a TCP client?
With TCP socket communication, the server is sending more data but the client is only getting 4 KB and then stopping.
I'm guessing that you're doing exactly 1 Send and exactly 1 Receive.
You need to do multiple reads, there is no guarantee that a single read from the socket will contain everything.
The Receive method will read as much data as is available, up to the size of the buffer. But it will return when it has some data so your program can use it.
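In practice that means looping on Receive until the sender indicates it is done, along these lines (a minimal sketch, assuming clientSocket is a connected Socket and the client simply wants to collect everything until the server closes its side of the connection):

// Sketch: keep calling Receive until the server shuts down its side (Receive returns 0).
var ms = new System.IO.MemoryStream();
byte[] buffer = new byte[4096];
int received;
while ((received = clientSocket.Receive(buffer)) > 0)
{
    ms.Write(buffer, 0, received);
}
byte[] allData = ms.ToArray();   // everything the server sent before closing the connection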
You may consider splitting your read/writes over multiple calls. I've definitely had some problems with TcpClient in the past. To fix that we use a wrapped stream class with the following read/write methods:
public override int Read(byte[] buffer, int offset, int count)
{
    int totalBytesRead = 0;
    int chunkBytesRead = 0;
    do
    {
        chunkBytesRead = _stream.Read(buffer, offset + totalBytesRead, Math.Min(__frameSize, count - totalBytesRead));
        totalBytesRead += chunkBytesRead;
    } while (totalBytesRead < count && chunkBytesRead > 0);
    return totalBytesRead;
}

public override void Write(byte[] buffer, int offset, int count)
{
    int bytesSent = 0;
    do
    {
        int chunkSize = Math.Min(__frameSize, count - bytesSent);
        _stream.Write(buffer, offset + bytesSent, chunkSize);
        bytesSent += chunkSize;
    } while (bytesSent < count);
}

// _stream is the wrapped stream
// __frameSize is a constant; we use 4096 since it's easy to allocate.
No, it should be fine. I suspect that your code to read from the client is flawed, but it's hard to say without you actually showing it.
No limit, TCP socket is a stream.
There's no limit on the amount of data with TCP in theory, BUT since we're limited by physical resources (i.e. memory), implementations such as Microsoft Winsock use something called the "TCP window size".
That means that when you send something with Winsock's send() function, for example (and you didn't set any options on the socket handle), the data is first copied to the socket's temporary send buffer. Only when the receiving side has acknowledged that it got the data will Winsock reuse that memory.
So you might flood this buffer by sending faster than it frees up, and then: error!
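On the .NET side, the corresponding per-socket send/receive buffer sizes can be inspected or adjusted like this (a minimal sketch; the 64 KB value is just an illustrative choice):

Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// Default OS-provided buffer sizes for this socket.
Console.WriteLine("SendBufferSize:    {0}", sock.SendBufferSize);
Console.WriteLine("ReceiveBufferSize: {0}", sock.ReceiveBufferSize);

// Enlarging the buffers can help when one side temporarily sends faster
// than the other side reads (64 KB here is an arbitrary example value).
sock.SendBufferSize = 64 * 1024;
sock.ReceiveBufferSize = 64 * 1024;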
