I'm trying to handle a serial port using the SerialPort class.
The application requires us to receive one command first and then send a reply within 20 ms; the problem is that there is a delay (up to 15 ms) between the command we read and the actual command, so we don't have time to send the reply back.
The length of the command we need to read is fixed at 20 bytes, and we poll one byte from the input buffer each time:
serialPort.Read(input, 0, 1);
I don't know what is wrong with this process.
Why read one byte at a time? If you're expecting 20 bytes, you can write:
byte[] buffer = new byte[20];
int totalBytesRead = 0;
while (totalBytesRead < buffer.Length)
{
    int bytesRead = serialPort.Read(buffer, totalBytesRead, buffer.Length - totalBytesRead);
    if (bytesRead == 0)
        break; // port closed / end of stream before the full command arrived
    totalBytesRead += bytesRead;
}
At that point, you have all 20 bytes or you've reached the end of the stream.
What do you mean by "there is a delay (up to 15 ms) between the command we read and the actual command"?
Are you using the DataReceived event? I had a similar issue some time ago; apparently some of the functionality is not invoked without using the event handler.
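If you only need to react once a full 20-byte command has arrived, a rough sketch of the event-based pattern might look like this (the handler wiring and field names are assumptions; the 20-byte length comes from your question):
private readonly byte[] _command = new byte[20];
private int _received;

// Wire up once, e.g. after opening the port:
// serialPort.DataReceived += OnDataReceived;
private void OnDataReceived(object sender, SerialDataReceivedEventArgs e)
{
    var port = (SerialPort)sender;
    while (port.BytesToRead > 0 && _received < _command.Length)
    {
        _received += port.Read(_command, _received, _command.Length - _received);
    }
    if (_received == _command.Length)
    {
        _received = 0;
        // Full 20-byte command is available here; send the reply immediately.
    }
}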
Related
I have a UART device to which I write a command (via System.IO.Ports.SerialPort); the device then responds immediately.
So basically my approach is:
Write to SerialPort -> await Task.Delay -> Read from the port.
// The port is open all the time.
public async Task<byte[]> WriteAndRead(byte[] command)
{
    port.Write(command, 0, command.Length);
    await Task.Delay(timeout);
    var msglen = port.BytesToRead;
    if (msglen > 0)
    {
        byte[] message = new byte[msglen];
        int readbytes = 0;
        while (port.Read(message, readbytes, msglen - readbytes) <= 0)
            ;
        return message;
    }
    return null;
}
This works fine on my computer. But if I try it on another computer, for example, the BytesToRead property sometimes doesn't match the actual answer: there are empty bytes in it, or the answer is incomplete. (E.g. I get two bytes when I expect one: 0xBB, 0x00 or 0x00, 0xBB.)
I've also looked into the SerialPort.DataReceived event, but it fires too often and is (as far as I understand) not really useful for this write-and-read approach, since I expect the answer from the device immediately.
Is there a better approach to this write-and-read?
Carefully read the Remarks in https://msdn.microsoft.com/en-us/library/ms143549(v=vs.110).aspx
You should not rely on the BytesToRead value to indicate message length.
You should know how much data you expect to read in order to decompose the message.
Also, as #itsme85 noticed, you are not updating readbytes, so you always write the received bytes to the beginning of your array. Proper code that updates readbytes would look like this:
int r;
while (readbytes < msglen)
{
    // keep reading until the whole expected length has arrived
    r = port.Read(message, readbytes, msglen - readbytes);
    readbytes += r;
}
However, while you are reading that data, more data can arrive, and your "message" might still be incomplete.
Rethink what you want to achieve.
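For example, if you know the expected response length for each command, you can read exactly that many bytes and let ReadTimeout handle the waiting; a rough sketch (expectedLength is an assumed parameter, not part of your original code):
// Assumes port.ReadTimeout has been set (e.g. to 100 ms);
// Read then throws a TimeoutException if the device stalls.
public byte[] WriteAndRead(byte[] command, int expectedLength)
{
    port.Write(command, 0, command.Length);

    byte[] message = new byte[expectedLength];
    int readbytes = 0;
    while (readbytes < expectedLength)
    {
        // Read blocks until at least one byte is available.
        readbytes += port.Read(message, readbytes, expectedLength - readbytes);
    }
    return message;
}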
When receiving multipart data from the browser (anything larger than ~2 KB), I start receiving null '\0' bytes after the first few valid chunks when I use:
_Stream.Read(ByteArray, Offset, ContentLength);
But if I divide the ContentLength into small buffers (around 2 KB each) AND add a delay of 1 ms after each call to Read(), then it works fine:
for (int i = 0; i < x; i++)
{
    _Stream.Read(ByteArray, Offset * i, BufferSize);
    System.Threading.Thread.Sleep(1);
}
But adding a delay is quite slow. How do I prevent reading null bytes? How can I know how many bytes the browser has written?
Thanks
The 0x00 bytes were not actually received; those positions in your buffer were simply never written to.
Stream.Read() returns the number of bytes actually read, which in your case is often less than BufferSize. Small amounts of data typically arrive in a single chunk, in which case the problem does not occur.
The delay might "work" in your test scenario because by then the network layer has buffered more than BufferSize bytes. It will likely fail in a production environment.
So you'll need to change your code into something like:
int remaining = ContentLength;
int offset = 0;
while (remaining > 0)
{
    int bytes = _Stream.Read(ByteArray, offset, remaining);
    if (bytes == 0)
    {
        throw new ApplicationException("Server disconnected before the expected amount of data was received");
    }
    offset += bytes;
    remaining -= bytes;
}
I need to transfer a large data buffer via UDP (User Datagram Protocol). The buffer is divided into datagrams of 1452 bytes. The first two bytes of each datagram are the datagram number (++dtgrNr % 65536), used to detect lost datagrams. I'd like to control the bitrate as follows:
e.g., for 1 MBps the buffer length (bufferSize) is 100 kB and the timer function SendBuffer is called every 100 ms;
for 2 MBps the buffer length is 200 kB, and so on.
void SendBuffer() // simplified version
{
    int current_idx = 0;
    while (current_idx < bufferSize)
    {
        GetCounterBytes(counter, ref b1, ref b2); // obtain the two datagram-counter bytes
        datagram[0] = b1;
        datagram[1] = b2;
        Array.Copy(buffer, current_idx, datagram, 2, datagramSize);
        try
        {
            int sentLen = udpClient.Send(datagram, datagram.Length);
            if (sentLen <= 0)
                Console.WriteLine("error");
        }
        catch { /* ... */ }
        current_idx += datagramSize;
    }
}
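For reference, GetCounterBytes simply splits the 16-bit counter into its two bytes, something like this (sketched here from the description above; the exact byte order is an assumption, since the original isn't shown):
// Hypothetical reconstruction: high byte first, then low byte.
void GetCounterBytes(int counter, ref byte b1, ref byte b2)
{
    b1 = (byte)((counter >> 8) & 0xFF); // high byte of (counter % 65536)
    b2 = (byte)(counter & 0xFF);        // low byte
}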
The problem is with sending the datagrams: it seems that some of them are never sent.
The Send method is called inside a loop; could that cause some timeout problem?
But the sentLen value is always > 0, and the catch block is never reached.
Could you help me with the problem?
Any suggestions?
Best regards
I have socket code communicating over TCP/IP. The machine I am communicating with has data in its buffer, and at present I am trying to read that buffer data using this code:
byte[] data = new byte[1024];
int recv = sock.Receive(data);
stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas there is more data in the machine's buffer. Is this because I used int recv = sock.Receive(data); and data is 1024 bytes?
If yes, how do I get the total buffer size and retrieve it into a string?
If you think you are missing some data, then you need to check recv and almost certainly loop. Fortunately, ASCII is always single-byte; in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    // process recv-many bytes
    // ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
Keep in mind that there is no guarantee that stringData will be any particular complete unit of work; what you send is not always what you receive in one piece: it could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process, as sketched below.
Note, however, that Receive always tries to return something (at least one byte) unless the inbound stream has closed, and it will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do a synchronous versus asynchronous receive (i.e. read synchronously while data is available, otherwise request an asynchronous read).
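A minimal sketch of that back-buffer idea, assuming newline-terminated messages (the framing rule and the ProcessLine handler are assumptions; use whatever delimiter or length prefix your protocol actually defines):
StringBuilder backBuffer = new StringBuilder();
byte[] data = new byte[1024];
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    backBuffer.Append(Encoding.ASCII.GetString(data, 0, recv));
    // Extract complete lines; keep any trailing partial line buffered.
    int newline;
    while ((newline = backBuffer.ToString().IndexOf('\n')) >= 0)
    {
        string line = backBuffer.ToString(0, newline);
        backBuffer.Remove(0, newline + 1);
        ProcessLine(line); // hypothetical handler for one complete frame
    }
}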
Try something along these lines:
StringBuilder sbContent = new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data)) > 0)
{
    sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
For the actual data, you can loop on Receive like this:
byte[] data = new byte[tcpSocket.ReceiveBufferSize];
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
    // process receiveBytes bytes from data
}
Is there any limit on the size of data that can be received by a TCP client?
With TCP socket communication, the server is sending more data, but the client is only getting 4 KB and stopping.
I'm guessing that you're doing exactly 1 Send and exactly 1 Receive.
You need to do multiple reads; there is no guarantee that a single read from the socket will return everything.
The Receive method reads as much data as is available, up to the size of the buffer, but it returns as soon as it has some data so your program can use it.
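For example, a minimal receive loop that accumulates everything until the peer shuts down the connection (the sock variable and the 4096 buffer size are assumptions):
byte[] buffer = new byte[4096];
using (var received = new MemoryStream())
{
    int n;
    // Receive returns 0 only once the remote side has finished sending.
    while ((n = sock.Receive(buffer)) > 0)
    {
        received.Write(buffer, 0, n);
    }
    byte[] allData = received.ToArray();
    // process allData
}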
You may consider splitting your reads/writes over multiple calls. I've definitely had some problems with TcpClient in the past. To fix that, we use a wrapped stream class with the following read/write methods:
public override int Read(byte[] buffer, int offset, int count)
{
    int totalBytesRead = 0;
    int chunkBytesRead = 0;
    do
    {
        chunkBytesRead = _stream.Read(buffer, offset + totalBytesRead, Math.Min(__frameSize, count - totalBytesRead));
        totalBytesRead += chunkBytesRead;
    } while (totalBytesRead < count && chunkBytesRead > 0);
    return totalBytesRead;
}
public override void Write(byte[] buffer, int offset, int count)
{
    int bytesSent = 0;
    do
    {
        int chunkSize = Math.Min(__frameSize, count - bytesSent);
        _stream.Write(buffer, offset + bytesSent, chunkSize);
        bytesSent += chunkSize;
    } while (bytesSent < count);
}
// _stream is the wrapped stream
// __frameSize is a constant; we use 4096 since it's easy to allocate.
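Usage is then the normal Stream pattern; for example (the wrappedStream name and payload size are just placeholders):
// Read an expected 10,000-byte payload in full through the wrapper:
byte[] payload = new byte[10000];
int got = wrappedStream.Read(payload, 0, payload.Length);
if (got < payload.Length)
{
    // The connection closed before the full payload arrived.
}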
No, it should be fine. I suspect that your code to read from the client is flawed, but it's hard to say without you actually showing it.
No limit; a TCP socket is a stream.
There's no limit on the amount of data with TCP in theory, BUT since we're limited by physical resources (i.e. memory), implementations such as Microsoft's Winsock use something called the "TCP window size".
That means that when you send something with Winsock's send() function, for example (without setting any options on the socket handle), the data is first copied to the socket's temporary send buffer. Only when the receiving side has acknowledged that it received the data will Winsock reuse that memory.
So you might flood this buffer by sending faster than it is freed, and then you get an error!
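In .NET you can inspect and adjust these per-socket buffers through Socket properties; a small sketch (the 256 KB figure is just an example, not a recommendation):
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine("Default send buffer: {0} bytes", s.SendBufferSize);
// A larger send buffer makes it less likely that a burst of sends
// outruns the space freed by the receiver's acknowledgements.
s.SendBufferSize = 256 * 1024;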