I need to transfer a large data buffer via UDP (User Datagram Protocol). The buffer is divided into 1452-byte datagrams. The first two bytes of each datagram are the datagram number (++dtgrNr % 65536), used to detect lost datagrams. I'd like to control the bitrate as follows:
e.g., for 1 MBps the buffer length (bufferSize) is 100 kB and the timer function SendBuffer is called every 100 ms;
for 2 MBps the buffer length is 200 kB, and so on.
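To make the scaling explicit, the arithmetic is simply the following (the variable names are only for illustration):

    int timerIntervalMs = 100;                       // SendBuffer fires every 100 ms
    long targetBytesPerSecond = 1000000;             // e.g., 1 MBps
    // bytes per tick = rate * interval: 1 MBps * 0.1 s = 100 kB
    int bufferSize = (int)(targetBytesPerSecond * timerIntervalMs / 1000);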
void SendBuffer() // simplified version
{
    int current_idx = 0;
    while (current_idx < bufferSize)
    {
        GetCounterBytes(counter, ref b1, ref b2); // obtain the two bytes of the datagram counter
        datagram[0] = b1;
        datagram[1] = b2;
        Array.Copy(buffer, current_idx, datagram, 2, datagramSize);
        try
        {
            int sentLen = udpClient.Send(datagram, datagram.Length);
            if (sentLen <= 0)
                Console.WriteLine("error");
        }
        catch (SocketException ex)
        {
            Console.WriteLine(ex.Message);
        }
        current_idx += datagramSize;
    }
}
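For reference, one way the 100 ms timer itself could be wired up (a sketch only; this is not my exact timer code):

    using System.Threading;

    // Fire SendBuffer every 100 ms; bufferSize is sized so that ten
    // sends per second add up to the target bitrate.
    var timer = new Timer(_ => SendBuffer(), null, 0, 100);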
The problem is with sending the datagrams. It seems that some of them are not sent.
The Send method is called inside the loop; could that be causing some timeout problem?
But the sentLen value is always > 0 and the catch block is never entered.
Could you help me with the problem?
Any suggestions?
Best regards
Kindly bear with me for this confusing question; I'm finding it as hard to describe as it is involved and tiresome. Read it and you'll know why.
I've been hounding this issue for over a month now without much progress. I'm using an STM32 (an STM32F103C8 mounted on a BluePill board) to communicate with a C# app through an FT232R serial-USB converter. The complete communication protocol is a bit complex, so I'm writing here a simplified version of the code that demonstrates my problem quite accurately.
The STM32 does the following.
In the initial setup:
Serial.begin at 2000000 (yes, it's very high, but I've analyzed it with an oscilloscope and the signal is very healthy; impedance matching and clock jitter are very good).
Waits for a command from the C# end to enter the loop.
In the loop, it does the following:
TX a byte buffer of length N on the serial port. The packet structure is 0xAA, N bytes, 1 checksum byte.
Repeat the loop.
And on the C# side (pseudocode):
new Thread(() => { while (true) { IOTick(); Thread.Sleep(30); } }).Start();
IOTick() is defined as:
void IOTick()
{
    while (SerialPortObject.BytesToRead > 1)
    {
        header = read();
        if (header != 0xAA) continue;
        byte[] buffer = new byte[N + 1];
        receivedBytes = readBytes(buffer, N + 1, timeout: 500); // receivedBytes is never less than N + 1 for a timeout greater than 120 ms
        // Use the N = 16 bytes; compare the Nth byte against the checksum. Doesn't take much CPU time.
        // Raise the packet-received software event.
    }
}
readBytes is defined as:
int readBytes(byte[] buffer, int count, int timeout)
{
    var st = DateTime.Now;
    for (int i = 0; i < count; i++)
    {
        var b_ = read(timeout);
        if (b_ == -1)
            return i;
        buffer[i] = (byte)b_;
        // Shrink the remaining timeout by the time spent in this iteration.
        timeout -= (int)(DateTime.Now - st).TotalMilliseconds;
        st = DateTime.Now;
    }
    return count;
}
int buffer2ReadIndex = 0;
byte[] buffer2 = new byte[0];

int read(int timeout)
{
    DateTime start = DateTime.Now;
    if (buffer2.Length == 0)
    {
        // Wait for data to show up, polling every 30 ms.
        while (SerialPortObject.BytesToRead <= 0)
        {
            if ((DateTime.Now - start).TotalMilliseconds > timeout)
                return -1;
            System.Threading.Thread.Sleep(30);
        }
        // Drain everything available into a local cache.
        buffer2 = new byte[SerialPortObject.BytesToRead];
        SerialPortObject.Read(buffer2, 0, buffer2.Length);
    }
    if (buffer2.Length > 0)
    {
        // Serve the next cached byte; reset the cache once exhausted.
        var b = buffer2[buffer2ReadIndex];
        buffer2ReadIndex++;
        if (buffer2ReadIndex >= buffer2.Length)
        {
            buffer2ReadIndex = 0;
            buffer2 = new byte[0];
        }
        return b;
    }
    return -1;
}
Now, everything works as expected. The packet-received software event is triggered no later than every ~30 ms (the Windows tick time). The problem starts if I have to wait between each packet TX on the STM side. First, I suspected that the I2C I was using for some tasks between packet TXs was causing some hardware or software conflict with the serial data, corrupting it. But then I noticed that the same thing happens if I merely introduce a delay of 1 millisecond with Arduino's delay() between each packet TX. Almost 1K packets should now be received every second, yet roughly 1 out of 10 packets after a successful header detection is either not delivered completely or delivered with a corrupted checksum, causing the C# app to lose track of the packet header. Re-acquiring the header obviously requires flushing some bytes, losing some packets in the process. Even this wouldn't sound too bad for an app that can afford 5% packet loss; strangely, though, when this anomaly occurs, the packet-received software event can stall for more than 1 second after every couple hundred consecutive events.
I'm completely blind here. I even tried 115200 baud: the same loss occurs, with a slightly lower loss ratio. It should be noted that at 9600 baud the issue doesn't happen. This is the only hint I've got right now.
It looks like I've found an answer.
After digging deep into the SerialPort and SerialPort.BaseStream classes, and after some documentation reading and benchmarking, here is what I've observed:
SerialPort.BytesToRead does not update uniformly, and the DataReceived event seems to follow it. When bytes are coming in at ~200 kHz (baud = 2 Mbps), it updates almost instantaneously (within 30 ms in the worst case). When they are coming in at ~20 kHz or slower (evenly spaced in time by the microcontroller), SerialPort.BytesToRead can take up to 400 ms to update, and this happens only after a dozen 30 ms updates.
So, observing this, I can say that SerialPort.BytesToRead updates on one of two conditions: some amount of time has passed since the data arrived (and this time is not constrained to 30 ms), or the data is coming in too fast.
This is strange behavior. No data is lost while this anomaly occurs. Not surprisingly, 0.06% of bytes are lost when working at full bandwidth (200 kBps at a baud rate of 2 Mbps).
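In case anyone wants to reproduce the measurement, a minimal probe along the lines of the benchmarking above looks like this (a sketch; the port name and baud rate are placeholders):

    using System;
    using System.Diagnostics;
    using System.IO.Ports;

    class BytesToReadProbe
    {
        static void Main()
        {
            using (var port = new SerialPort("COM3", 2000000))
            {
                port.Open();
                int last = 0;
                var sw = Stopwatch.StartNew();
                while (true)
                {
                    // Log how long each BytesToRead change takes to become visible.
                    int now = port.BytesToRead;
                    if (now != last)
                    {
                        Console.WriteLine("BytesToRead {0} -> {1} after {2} ms", last, now, sw.ElapsedMilliseconds);
                        last = now;
                        sw.Restart();
                    }
                }
            }
        }
    }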
On receiving multipart data from the browser (anything larger than ~2 KB), I start receiving null ('\0') bytes after the first few valid chunks when I use:
_Stream.Read(ByteArray, Offset, ContentLength);
But if I divide the ContentLength into small buffers (around 2 KB each) AND add a delay of 1 ms after each call to Read(), then it works fine:
for (int i = 0; i < x; i++)
{
    _Stream.Read(ByteArray, Offset * i, BufferSize);
    System.Threading.Thread.Sleep(1);
}
But adding a delay is quite slow. How can I prevent reading null bytes? How can I know how many bytes the browser has written?
Thanks
The 0x00 bytes were not actually received; those positions in your buffer were simply never written to.
Stream.Read() returns the number of bytes actually read, which in your case is often less than BufferSize. Small amounts of data typically arrive in a single chunk, in which case the problem does not occur.
The delay might "work" in your test scenario because by then the network layer has buffered more than BufferSize bytes. It will probably fail in a production environment.
So you'll need to change your code into something like:
int remaining = ContentLength;
int offset = 0;
while (remaining > 0)
{
    int bytes = _Stream.Read(ByteArray, offset, remaining);
    if (bytes == 0)
    {
        throw new ApplicationException("Server disconnected before the expected amount of data was received");
    }
    offset += bytes;
    remaining -= bytes;
}
I've been working with Windows Store app development in C# recently, and I've come across a problem with sockets.
I need to be able to read data of unknown length from a DataReader().
It sounds simple enough, but I've not been able to find a solution after a few days of searching.
Here's my current receiving code. (It's a little sloppy; I need to clean it up after I find a solution to this problem. And yes, a bit of it is from the Microsoft example.)
DataReader reader = new DataReader(args.Socket.InputStream);
try
{
    while (true)
    {
        // Read first 4 bytes (length of the subsequent string).
        uint sizeFieldCount = await reader.LoadAsync(sizeof(uint));
        if (sizeFieldCount != sizeof(uint))
        {
            // The underlying socket was closed before we were able to read the whole data.
            return;
        }

        // Read the string.
        uint stringLength = reader.ReadUInt32();
        uint actualStringLength = await reader.LoadAsync(stringLength);
        if (stringLength != actualStringLength)
        {
            // The underlying socket was closed before we were able to read the whole data.
            return;
        }

        // Display the string on the screen. The event is invoked on a non-UI thread,
        // so we need to marshal the text back to the UI thread.
        //MessageBox.Show("Received data: " + reader.ReadString(actualStringLength));
        MessageBox.updateList(reader.ReadString(actualStringLength));
    }
}
catch (Exception exception)
{
    // If this is an unknown status it means that the error is fatal and retry will likely fail.
    if (SocketError.GetStatus(exception.HResult) == SocketErrorStatus.Unknown)
    {
        throw;
    }
    MessageBox.Show("Read stream failed with error: " + exception.Message);
}
You are going along the right lines: read the first int to find out how many bytes are to be sent.
Franky Boyle is correct - without a signalling mechanism it is impossible to ever know the length of a stream. That's why it is called a stream!
No socket implementation (including WinSock) will ever be clever enough to know when a client has finished sending data. The client could be having a cup of tea halfway through sending the data!
Your server and its sockets will never know! What are they going to do? Wait forever? I suppose they could wait until the client has 'closed' the connection? But your client could have had a blue screen, and the server will never get that TCP close packet; it will just sit there thinking it is getting more data one day.
I have never used a DataReader - I have never even heard of that class! Use NetworkStream instead.
From memory, I have written code like this in the past. I am just typing, not checking syntax.
using (MemoryStream receivedData = new MemoryStream())
{
    using (NetworkStream networkStream = new NetworkStream(connectedSocket))
    {
        // This is your mechanism to find out how many bytes
        // the client wants to send.
        int totalBytesToRead = networkStream.ReadByte();

        byte[] readBuffer = new byte[1024]; // Up to you the length!
        int totalBytesRead = 0;
        int bytesReadInThisTcpWindow = 0;

        // The length of the TCP window of the client is usually
        // the number of bytes that will be pushed through
        // to your server in one Socket.Read method call.
        // For example, if the client's TCP window was 777 bytes, a:
        //     int bytesRead = networkStream.Read(readBuffer, 0, int.MaxValue);
        // would give bytesRead == 777.
        // If they were sending a large file, you would have to make
        // it up from the many 777s.
        // If it were a small file under 777 bytes, your bytesRead
        // would be the total small length of, say, 500.
        while ((bytesReadInThisTcpWindow = networkStream.Read(readBuffer, 0, readBuffer.Length)) > 0)
        {
            // When bytesReadInThisTcpWindow reaches 0 the client has
            // disconnected or failed to send the promised number of
            // bytes within the timeout dictated by your Windows server
            // internals (important to stop here so waiting threads
            // don't pile up and kill your server).
            receivedData.Write(readBuffer, 0, bytesReadInThisTcpWindow);
            totalBytesRead = totalBytesRead + bytesReadInThisTcpWindow;
        }

        if (totalBytesRead == totalBytesToRead)
        {
            // We have our data!
        }
    }
}
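For completeness, the matching sender side under the same single-length-byte convention could look like this (my sketch; the answer above only shows the receiving side):

    using System.Net.Sockets;

    static void SendWithLengthPrefix(Socket connectedSocket, byte[] payload)
    {
        // One length byte caps the payload at 255 bytes; switch to a
        // 4-byte prefix for anything larger.
        using (NetworkStream networkStream = new NetworkStream(connectedSocket))
        {
            networkStream.WriteByte((byte)payload.Length); // the promised byte count
            networkStream.Write(payload, 0, payload.Length);
        }
    }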
I have socket code communicating over TCP/IP. The machine I am communicating with has data in its buffer, and at present I am trying to get that data using this code:
byte[] data = new byte[1024];
int recv = sock.Receive(data);
stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas more data is present in the machine's buffer. Is this because I have used int recv = sock.Receive(data); and data is 1024 bytes?
If so, how can I get the total buffer size and retrieve all of it into a string?
If you think you are missing some data, then you need to check recv and almost certainly loop. Fortunately, ASCII is always single-byte; in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    // process recv-many bytes
    // ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
Keep in mind that there is no guarantee that stringData will be any particular entire unit of work; what you send is not always what you receive in one chunk - it could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process, as sketched below.
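To make the back-buffer idea concrete, here is a minimal sketch assuming newline-delimited messages (the framing choice is mine for illustration; your protocol may differ):

    using System.Collections.Generic;
    using System.Text;

    class LineFramer
    {
        private readonly List<byte> _backBuffer = new List<byte>();

        // Append freshly received bytes, then return every complete
        // newline-terminated line; leftover bytes stay buffered until
        // the rest of the frame arrives.
        public List<string> Feed(byte[] data, int count)
        {
            for (int i = 0; i < count; i++)
                _backBuffer.Add(data[i]);

            var lines = new List<string>();
            int start = 0;
            for (int i = 0; i < _backBuffer.Count; i++)
            {
                if (_backBuffer[i] == (byte)'\n')
                {
                    lines.Add(Encoding.ASCII.GetString(_backBuffer.GetRange(start, i - start).ToArray()));
                    start = i + 1;
                }
            }
            _backBuffer.RemoveRange(0, start);
            return lines;
        }
    }

    // Usage inside the receive loop:
    // var framer = new LineFramer();
    // while ((recv = sock.Receive(data)) > 0)
    //     foreach (var line in framer.Feed(data, recv))
    //         ProcessLine(line); // ProcessLine is your own handler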
Note, however, that Receive always tries to return something (at least one byte) unless the inbound stream has closed - and it will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do a synchronous or an asynchronous receive (i.e., read synchronously while data is available, otherwise request an asynchronous read).
Try something along these lines:
StringBuilder sbContent = new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data)) > 0)
{
    sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
To read the actual data, you can loop on a condition like this:
byte[] data = new byte[tcpSocket.ReceiveBufferSize];
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
    // process receiveBytes bytes from data
}
Is there any limit on the size of data that can be received by a TCP client?
With TCP socket communication, the server is sending more data, but the client is only getting 4K and stopping.
I'm guessing that you're doing exactly one Send and exactly one Receive.
You need to do multiple reads; there is no guarantee that a single read from the socket will return everything.
The Receive method reads as much data as is available, up to the size of the buffer, but it returns as soon as it has some data so your program can use it.
You may consider splitting your reads/writes over multiple calls. I've definitely had some problems with TcpClient in the past. To fix them, we use a wrapped stream class with the following read/write methods:
public override int Read(byte[] buffer, int offset, int count)
{
    int totalBytesRead = 0;
    int chunkBytesRead = 0;
    do
    {
        chunkBytesRead = _stream.Read(buffer, offset + totalBytesRead, Math.Min(__frameSize, count - totalBytesRead));
        totalBytesRead += chunkBytesRead;
    } while (totalBytesRead < count && chunkBytesRead > 0);
    return totalBytesRead;
}

public override void Write(byte[] buffer, int offset, int count)
{
    int bytesSent = 0;
    do
    {
        int chunkSize = Math.Min(__frameSize, count - bytesSent);
        _stream.Write(buffer, offset + bytesSent, chunkSize);
        bytesSent += chunkSize;
    } while (bytesSent < count);
}

// _stream is the wrapped stream
// __frameSize is a constant; we use 4096 since it's easy to allocate.
No, it should be fine. I suspect that your code for reading from the client is flawed, but it's hard to say without you actually showing it.
No limit; a TCP socket is a stream.
There's no limit on data size with TCP in theory, BUT since we're limited by physical resources (i.e., memory), implementations such as Microsoft's Winsock use something called the "TCP window size".
That means that when you send something with Winsock's send() function, for example (without setting any options on the socket handle), the data is first copied to the socket's temporary buffer. Only when the receiving side has acknowledged that it got that data will Winsock reuse this memory.
So you might flood this buffer by sending faster than it frees up - and then: error!
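As a small illustration, you can inspect (and enlarge) those buffers from C# (a sketch; the defaults are OS-dependent):

    using System;
    using System.Net.Sockets;

    class BufferSizes
    {
        static void Main()
        {
            using (var sock = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp))
            {
                Console.WriteLine("SendBufferSize:    {0}", sock.SendBufferSize);
                Console.WriteLine("ReceiveBufferSize: {0}", sock.ReceiveBufferSize);
                // A larger send buffer gives a fast sender more headroom
                // before send() blocks (or fails on a non-blocking socket).
                sock.SendBufferSize = 64 * 1024;
            }
        }
    }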