When receiving multipart data from the browser (anything larger than ~2 KB), I start receiving null '\0' bytes after the first few valid chunks when I use:
_Stream.Read(ByteArray, Offset, ContentLength);
But if I divide the ContentLength into small buffers (around 2 KB each) and add a delay of 1 ms after each call to Read(), then it works fine:
for(int i = 0; i < x; i++)
{
_Stream.Read(ByteArray, Offset * i, BufferSize);
System.Threading.Thread.Sleep(1);
}
But adding a delay is quite slow. How can I prevent reading null bytes, and how can I know how many bytes the browser has actually written?
Thanks
The 0x00 bytes were not actually received; those positions in your buffer were simply never written to.
Stream.Read() returns the number of bytes actually read, which in your case is often less than BufferSize. Small amounts of data typically arrive in a single message, in which case the problem does not occur.
The delay might "work" in your test scenario because by then the network layer has buffered more than BufferSize data. It will probably fail in a production environment.
So you'll need to change your code into something like:
int remaining = ContentLength;
int offset = 0;
while (remaining > 0)
{
    int bytes = _Stream.Read(ByteArray, offset, remaining);
    if (bytes == 0)
    {
        throw new ApplicationException("Server disconnected before the expected amount of data was received");
    }
    offset += bytes;
    remaining -= bytes;
}
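As an aside, if you can target .NET 7 or later, Stream has a built-in helper that performs the same loop for you and throws EndOfStreamException if the stream ends early:

// .NET 7+ only; equivalent to the manual loop above.
_Stream.ReadExactly(ByteArray, 0, ContentLength);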
Kindly bear with me for this confusing question; it's as hard to describe as it has been involved and tiresome to debug. Read it and you'll know why.
I've been chasing this issue for over a month now without much progress. I'm using an STM32 (an STM32F103C8 mounted on a Blue Pill board) to communicate with a C# app through an FT232R serial-USB converter. The complete communication protocol is a bit complex, so what follows is a simplified version of the code that reproduces my problem quite accurately.
The STM32 does the following.
In the initial setup:
Serial.begin at 2000000 (yes, it's very high, but I've analyzed it with an oscilloscope and the signal is very healthy; impedance matching and clock jitter are very accurate).
Waits for a command from the C# end to enter the loop.
In the loop, it does the following:
TX a byte buffer of length N on the serial port. The packet structure is 0xAA, N bytes, 1 checksum byte.
Repeat the loop.
And on the C# side (pseudo-code):
new Thread(() => { while (true) { IOTick(); Thread.Sleep(30); } }).Start();
IOTick() is defined as:
{
    while (SerialPortObject.BytesToRead > 1)
    {
        header = read();
        if (header != 0xAA) continue;
        byte[] buffer = new byte[N + 1];
        receivedBytes = readBytes(buffer, N + 1, Timeout = 500ms); // receivedBytes is never less than N + 1 for a timeout greater than 120 ms
        // Use the N = 16 payload bytes; check the Nth byte against the checksum. Doesn't take too much CPU time.
        // Raise a "packet received" software event.
    }
}
readBytes is defined as
int readBytes(byte[] buffer, int count, int timeout)
{
    var st = DateTime.Now;
    for (int i = 0; i < count; i++)
    {
        var b_ = read(timeout);
        if (b_ == -1)
            return i;
        buffer[i] = (byte)b_;
        timeout -= (int)(DateTime.Now - st).TotalMilliseconds; // decrease the remaining timeout
    }
    return count;
}
int buffer2ReadIndex = 0;
byte[] buffer2 = new byte[0];

int read(int timeout)
{
    DateTime start = DateTime.Now;
    if (buffer2.Length == 0)
    {
        while (SerialPortObject.BytesToRead <= 0)
        {
            if ((DateTime.Now - start).TotalMilliseconds > timeout)
                return -1;
            System.Threading.Thread.Sleep(30);
        }
        buffer2 = new byte[SerialPortObject.BytesToRead];
        SerialPortObject.Read(buffer2, 0, buffer2.Length);
    }
    if (buffer2.Length > 0)
    {
        var b = buffer2[buffer2ReadIndex];
        buffer2ReadIndex++;
        if (buffer2ReadIndex >= buffer2.Length)
        {
            buffer2ReadIndex = 0;
            buffer2 = new byte[0];
        }
        return b;
    }
    return -1;
}
Now, everything works as expected: the packet-received software event is triggered no later than every ~30 ms (the Windows tick time). The problem starts if I have to wait between packet TXs on the STM side. At first I suspected that the I2C work I was doing between packet TXs was causing some hardware or software conflict that corrupted the serial data. But then I noticed the same thing happens if I merely introduce a delay of 1 millisecond using Arduino's delay() between packet TXs. At that rate, almost 1K packets should be received every second. Instead, roughly 1 out of 10 packets after a successful header reception is either not delivered completely or delivered with a corrupted checksum, causing the C# app to lose track of the packet header. Hunting for the next header obviously requires flushing some bytes, losing some packets along the way. Even that wouldn't sound too bad for an app that can afford 5% packet loss; strangely, though, when this anomaly occurs, the packet-received event stalls for more than 1 second after every couple hundred consecutive events.
I'm completely blind here. I even tried it at 115200 baud; it shows the same loss, with a slightly lower loss ratio. It should be noted that at 9600 baud the issue doesn't happen. This is the only hint I've got right now.
It looks like I've found an answer.
After digging deep into the SerialPort and SerialPort.BaseStream classes, and after some documentation reading and benchmarking, here is what I've observed:
SerialPort.BytesToRead updates are not uniform, and the DataReceived event seems to follow it. When bytes are coming in at ~200 kHz (baud = 2 Mbps), it is updated almost instantaneously (or within 30 ms, worst case). When they are coming in at ~20 kHz or slower (evenly spaced in time by a microcontroller), SerialPort.BytesToRead can take up to 400 ms to update, and this happens only after a dozen or so 30 ms updates.
Observing this, I can say that SerialPort.BytesToRead is updated under one of two conditions: some amount of time has passed since the data arrived (and this time is not constrained to 30 ms), or the data is coming in too fast.
This is strange behavior. No data is lost while this anomaly is occurring. Not surprisingly, 0.06% of bytes are lost when working at full bandwidth (200 kBps at a baud of 2 Mbps).
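If that update latency matters, a workaround worth trying (my suggestion, not something benchmarked above) is to skip BytesToRead polling entirely and read from SerialPort.BaseStream, which completes as soon as bytes are available. A minimal sketch, where ProcessPacketBytes is a hypothetical stand-in for the packet parser:

// Read via BaseStream instead of polling BytesToRead.
// 'port' is an open SerialPort; ProcessPacketBytes is hypothetical.
byte[] buf = new byte[4096];
while (true)
{
    int n = await port.BaseStream.ReadAsync(buf, 0, buf.Length);
    if (n == 0)
        break; // port closed
    ProcessPacketBytes(buf, n);
}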
I have a UART device to which I write a command (via System.IO.Ports.SerialPort); the device then responds immediately.
So basically my approach is:
write to the SerialPort -> await Task.Delay -> read from the port.
//The port is open all the time.
public async Task<byte[]> WriteAndRead(byte[] command)
{
    port.Write(command, 0, command.Length);
    await Task.Delay(timeout);
    var msglen = port.BytesToRead;
    if (msglen > 0)
    {
        byte[] message = new byte[msglen];
        int readbytes = 0;
        while (port.Read(message, readbytes, msglen - readbytes) <= 0)
            ;
        return message;
    }
    return null;
}
This works fine on my computer. But if I try it on another computer, for example, the BytesToRead property is sometimes mismatched: there are empty bytes in it, or the answer is incomplete (e.g., I get two bytes where I expect one: 0xBB, 0x00 or 0x00, 0xBB).
I've also looked into the SerialPort.DataReceived event, but it fires too often and is (as far as I understand) not really useful for this write-and-read approach, as I expect the answer immediately from the device.
Is there a better approach to a write-and-read?
Read carefully the Remarks in https://msdn.microsoft.com/en-us/library/ms143549(v=vs.110).aspx
You should not rely on the BytesToRead value to indicate message length.
You should know how much data you expect to read in order to decompose the message.
Also, as @itsme85 noticed, you are not updating readbytes, and therefore you are always writing the received bytes to the beginning of your array. Proper code that updates readbytes should look like this:
int r;
while (readbytes < msglen && (r = port.Read(message, readbytes, msglen - readbytes)) > 0)
{
    readbytes += r;
}
However, while you are reading the data, more data can arrive, and your "message" might still be incomplete.
Rethink what you want to achieve.
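For example, if the device's reply length is known up front, a fixed-length read loop avoids trusting BytesToRead altogether. A sketch, where ReadExact is a hypothetical helper and port.ReadTimeout is assumed to be set:

// Read exactly 'count' bytes from an open SerialPort; SerialPort.Read
// throws TimeoutException if no byte arrives within port.ReadTimeout.
static byte[] ReadExact(SerialPort port, int count)
{
    byte[] buf = new byte[count];
    int got = 0;
    while (got < count)
        got += port.Read(buf, got, count - got);
    return buf;
}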
I need to transfer a large data buffer via UDP (User Datagram Protocol). The buffer is divided into 1452-byte datagrams, and the first two bytes of each datagram hold the datagram number (++dtgrNr % 65536), used to detect lost datagrams. I'd like to control the bitrate as follows:
e.g., for 1 MBps the buffer length (bufferSize) is 100 kB and the timer function SendBuffer is called every 100 ms;
for 2 MBps the buffer length is 200 kB, and so on.
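For context, a minimal sketch of the pacing setup this describes, feeding the simplified SendBuffer shown below (the System.Timers.Timer usage is my assumption, not the original code):

// A 100 ms timer sends one bufferSize chunk per tick,
// so 100 kB x 10 ticks/s = 1 MBps.
var timer = new System.Timers.Timer(100);
timer.Elapsed += (s, e) => SendBuffer();
timer.Start();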
SendBuffer() // simplified version
{
    int current_idx = 0;
    while (current_idx < bufferSize)
    {
        GetCounterBytes(counter, ref b1, ref b2); // obtaining the two datagram-counter bytes
        datagram[0] = b1;
        datagram[1] = b2;
        Array.Copy(buffer, current_idx, datagram, 2, datagramSize);
        try
        {
            int sentLen = udpClient.Send(datagram, datagram.Length);
            if (sentLen <= 0)
                Console.WriteLine("error");
        }
        catch .........
        current_idx += datagramSize;
    }
}
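For reference, the GetCounterBytes helper used above is assumed to behave like this hypothetical sketch (the big-endian byte order is my assumption):

// Split the wrapped datagram counter (++dtgrNr % 65536) into two bytes.
static void GetCounterBytes(int counter, ref byte b1, ref byte b2)
{
    b1 = (byte)((counter >> 8) & 0xFF); // high byte
    b2 = (byte)(counter & 0xFF);        // low byte
}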
The problem is with sending the datagrams: it seems that some of them are not sent.
Send is called inside the loop; could that cause some timeout problem?
But the sentLen value is always > 0 and the catch block is never reached.
Could you help me with the problem?
Any suggestions?
Best regards
I have a race condition or something like it. I mean, if I set a breakpoint before reading from the COM port, everything is good; but when I disable it, the code freezes. Writing:
public void Send(ComMessage message)
{
    byte[] bytes = message.Serialise();
    if (!_outputPort.IsOpen)
        _outputPort.Open();
    try
    {
        byte[] size = BitConverter.GetBytes(bytes.Length);
        _outputPort.Write(size, 0, size.Length);
        _outputPort.Write(bytes, 0, bytes.Length);
    }
    finally
    {
        if (_outputPort != _inputPort)
            _outputPort.Close();
    }
}
Reading:
private void InputPortOnDataReceived(object sender, SerialDataReceivedEventArgs serialDataReceivedEventArgs)
{
    var port = (SerialPort)sender;
    byte[] sizeBuffer = new byte[sizeof(long)];
    port.Read(sizeBuffer, 0, sizeBuffer.Length);
    int length = BitConverter.ToInt32(sizeBuffer, 0);
    byte[] buffer = new byte[length];
    int i = 0;
    while (i < length)
    {
        int readed = port.Read(buffer, i, length - i);
        i += readed;
    }
    var message = ComMessage.Deserialize(buffer);
    MessageReceived(this, message);
}
For example, a message is 625 bytes long. If I set the breakpoint, port.BytesToRead equals 625, but if I disable it, the byte count is 621.
Strangely, it works for a small number of bytes (short messages), but not for long ones.
Please advise.
A message is 625 bytes long. If I set the breakpoint,
port.BytesToRead equals 625, but if I disable it, the byte count is 621.
You never check the return value of the first Read to see how many bytes it actually read; it may have read fewer than sizeof(long) bytes. However, that is not the source of your problem. Your main problem is that you are allocating a buffer of sizeof(long), but long is Int64, while you are calling ToInt32 (and writing an Int32 in your sender).
The reason your byte count is 621 instead of 625 is that the first 4 bytes of your message are sitting in sizeBuffer[4] through sizeBuffer[7], which you never process.
To fix this, you should use sizeof(int), or even better, to make it more obvious that the buffer is for the ToInt32 call, sizeof(Int32).
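Putting that together, a corrected sketch of the size-prefix read, now also looping until all four length bytes are in:

byte[] sizeBuffer = new byte[sizeof(int)]; // the sender writes an Int32 length prefix
int got = 0;
while (got < sizeBuffer.Length)
    got += port.Read(sizeBuffer, got, sizeBuffer.Length - got);
int length = BitConverter.ToInt32(sizeBuffer, 0);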
Is there any limit on the size of data that can be received by a TCP client?
With TCP socket communication, the server is sending more data, but the client is only getting 4K and then stopping.
I'm guessing that you're doing exactly 1 Send and exactly 1 Receive.
You need to do multiple reads; there is no guarantee that a single read from the socket will contain everything.
The Receive method will read as much data as is available, up to the size of the buffer, but it returns as soon as it has some data so your program can use it.
You may consider splitting your reads/writes over multiple calls. I've definitely had some problems with TcpClient in the past; to fix them, we use a wrapped stream class with the following read/write methods:
public override int Read(byte[] buffer, int offset, int count)
{
    int totalBytesRead = 0;
    int chunkBytesRead = 0;
    do
    {
        chunkBytesRead = _stream.Read(buffer, offset + totalBytesRead, Math.Min(__frameSize, count - totalBytesRead));
        totalBytesRead += chunkBytesRead;
    } while (totalBytesRead < count && chunkBytesRead > 0);
    return totalBytesRead;
}

public override void Write(byte[] buffer, int offset, int count)
{
    int bytesSent = 0;
    do
    {
        int chunkSize = Math.Min(__frameSize, count - bytesSent);
        _stream.Write(buffer, offset + bytesSent, chunkSize);
        bytesSent += chunkSize;
    } while (bytesSent < count);
}

// _stream is the wrapped stream
// __frameSize is a constant; we use 4096 since it's easy to allocate.
No, it should be fine. I suspect that your code to read from the client is flawed, but it's hard to say without you actually showing it.
No limit; a TCP socket is a stream.
There's no limit on data size with TCP in theory, BUT since we're limited by physical resources (i.e., memory), implementations such as Microsoft's Winsock use something called the "TCP window size".
That means that when you send something with Winsock's send() function, for example (without setting any options on the socket handle), the data is first copied to the socket's temporary send buffer. Only once the receiving side has acknowledged that it got the data will Winsock reuse that memory.
So you might flood this buffer by sending faster than it frees up, and then: error!
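In .NET terms, Socket.Send returns the number of bytes actually accepted into that send buffer, which can be less than you asked for (notably on non-blocking sockets), so a defensive loop is a common sketch:

// Keep calling Send until the whole payload has been handed
// to the socket's send buffer.
static void SendAll(System.Net.Sockets.Socket socket, byte[] data)
{
    int sent = 0;
    while (sent < data.Length)
        sent += socket.Send(data, sent, data.Length - sent, System.Net.Sockets.SocketFlags.None);
}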