Cannot send large packets from ESP32 - C#

I have an ESP32 with a camera attached and want to send the images over TCP. I tried it using WiFiClient (the standard TCP client implementation for the ESP's WiFi library, I think). But when I send the image using client.write, only the first few thousand bytes are actually received by my C# server (for now I am writing the image to a file, where I can see that basically the whole file is just null bytes). The total image is always around 90 KB, but I thought the TCP implementation would automatically split it into multiple packets. I then tried to split it into multiple packets myself (splitting after 1000 bytes) and I was able to open the file (before that, all gallery programs said it was an unknown format), but it heavily impacted performance. I know the images are fine, since when I print them in hex over Serial and convert them to images, they work.
Here is the (simplified) code for the Camera module:
// Pack the lowest 3 bytes of the image size, little-endian (the highest byte is always 0).
info[0] = (uint8_t)(_jpg_buf_len & 0xFF);
info[1] = (uint8_t)((_jpg_buf_len & 0xFF00) >> 8);
info[2] = (uint8_t)((_jpg_buf_len & 0xFF0000) >> 16);
// client.write returns the number of bytes written (unsigned), so compare against the expected count.
if (client.write(info, 3) != 3) fatal("Error writing the image size packet.");
if (client.write(_jpg_buf, _jpg_buf_len) != _jpg_buf_len) fatal("Error writing image packet.");
And here is the part of the C# TCP Server that receives the packets:
NetworkStream stream = c.GetStream();
// The info buffer first written by the ESP, containing the lowest 3 bytes of the image size integer.
byte[] buffInfo = new byte[4];
// Reads the lowest 3 bytes into the first 3 places in the buffer, since the highest byte is always 0.
await stream.ReadAsync(buffInfo, 0, 3);
int imgLen = (int)BitConverter.ToUInt32(buffInfo, 0);
// This always displays the correct lengths.
Console.WriteLine("Received image with a length of {0}.", imgLen);
byte[] buffImg = new byte[imgLen];
await stream.ReadAsync(buffImg, 0, imgLen);
// When the buffer is now written to a file, basically the whole image is null bytes and it cannot be viewed.
Do I need to manually split the huge buffers up, or is there a more performant solution to this problem?

I found the issue. The ESP is actually splitting the data into packets, but since the ESP is much slower than the client, the client receives only part of the data at a time. When it tries to read the whole image in one call, it cannot read the full length of bytes and the rest of the buffer stays filled with null bytes. Then it tries to read the next image length, but the bytes it reads still belong to the previous image (they just arrived late), so it gets really weird sizes. Here is the code I use now:
NetworkStream s = c.GetStream();
byte[] buffInfo = new byte[4];
await s.ReadAsync(buffInfo, 0, 3);
int imgLen = (int)BitConverter.ToUInt32(buffInfo, 0);
byte[] buffImg = new byte[imgLen];
int remLen = imgLen;
while (remLen > 0)
{
    remLen -= await s.ReadAsync(buffImg, imgLen - remLen, remLen);
}
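For reference, a small helper along these lines also covers the 3-byte header read (which can arrive in pieces just like the image) and avoids an endless loop if the client disconnects. This is only a sketch; the name ReadExactlyAsync is my own, and it relies on a zero-byte read meaning the peer closed the connection:
using System;
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

// Hypothetical helper: read exactly `count` bytes, or throw if the connection is closed early.
static async Task ReadExactlyAsync(NetworkStream stream, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int read = await stream.ReadAsync(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("Connection closed before the expected data arrived.");
        offset += read;
    }
}

// Usage sketch: loop the 3-byte header read as well, since it can also arrive in pieces.
byte[] buffInfo = new byte[4];
await ReadExactlyAsync(s, buffInfo, 3);
int imgLen = (int)BitConverter.ToUInt32(buffInfo, 0);
byte[] buffImg = new byte[imgLen];
await ReadExactlyAsync(s, buffImg, imgLen);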

Related

How to read binary data from TCP stream?

I have a device that sends data to another device via TCP. When I receive the data and try the Encoding.Unicode.GetString() method on the byte array, it turns into unreadable text.
Only the first frame of the TCP packet (the preamble in the header) can be converted to text (sender TCP docs, packet data).
This is my code so far. I have also tried decoding as ASCII, with no better results.
NetworkStream stream = tcpClient.GetStream();
int i;
string data;
Byte[] buffer = new Byte[1396];
while ((i = stream.Read(buffer, 0, buffer.Length)) != 0)
{
    data = System.Text.Encoding.Unicode.GetString(buffer, 0, i);
    data = data.ToUpper();
    Console.WriteLine($"Data: {data}");
}
This just prints the same unreadable string seen in the "packet data" link above. Why is this happening? The official device documentation says it is encoded in little-endian. Am I missing something? I am new to handling TCP data transmission.
There is nothing in the linked documentation to indicate that there is any textual data at all, with the exception of the "preamble", which is a fixed four-letter ASCII string, or an integer with the equivalent value, whichever you prefer.
It specifies a binary header with a bunch of mostly 32-bit integers, followed by a sequence of frames, where each frame has three 32-bit numbers.
So I would suggest wrapping your buffer in a MemoryStream and using BinaryReader to read the values, according to the format specification.
Note that network communication typically uses big-endian encoding, but both Windows and your device use little-endian, so you should not have to bother with endianness.
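For illustration, a minimal sketch of that approach could look like the following; the field names (preamble, frameCount and the three per-frame values) are placeholders, since the actual layout has to come from the device documentation:
using System;
using System.IO;

static void ParseMessage(byte[] buffer, int length)
{
    using (var ms = new MemoryStream(buffer, 0, length))
    using (var reader = new BinaryReader(ms))
    {
        // BinaryReader reads little-endian, which matches both Windows and the device.
        uint preamble = reader.ReadUInt32();  // the fixed four-byte ASCII identifier
        int frameCount = reader.ReadInt32();  // placeholder header field
        for (int f = 0; f < frameCount; f++)
        {
            int a = reader.ReadInt32();       // three 32-bit values per frame
            int b = reader.ReadInt32();
            int c = reader.ReadInt32();
            Console.WriteLine($"Frame {f}: {a}, {b}, {c}");
        }
    }
}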

TCP IP socket.receive, data received in .Net application but not in Unity

I send data via TCP from a .NET application to Unity, but not all bytes are received. The same code works in a simple .NET WPF application. Why is there a difference in Unity? Both are based on .NET 4.7.
// Data sent from the .NET application:
byte[] ba = memoryStream.ToArray();
var buffer = BitConverter.GetBytes(ba.Length);
stm.Write(buffer, 0, buffer.Length);
stm.Write(ba, 0, ba.Length);
// Data receive works in .NET, but in Unity not all bytes are received:
Socket s;
byte[] buffersizeinbytes = new byte[32];
TcpListener myList = new TcpListener(ipAd, 8001);
s = myList.AcceptSocket();
(...)
int k = s.Receive(buffersizeinbytes);
int size = BitConverter.ToInt32(buffersizeinbytes, 0); // size of the following buffer
byte[] buffer = new byte[size];
int receivedByteCount = s.Receive(buffer);
While it may seem confusing that your code works in one application but not in Unity, that is not the core of your problem. You seem to be making the assumption that when you send chunks of data, you will receive them in the same chunks. That is not the case.
Calling Receive will result in you getting some data, up to a maximum of the amount you ask for, but you may not get all. The return value will tell you exactly how much you did actually get. If you expect more, you will have to call Receive again, until you have all the data you expect.
There are various overloads of Receive which allow you to specify an offset into a buffer. So if you're expecting 32 bytes of data, but you get only 16, you can call Receive again, with the same buffer, but specify an offset so your buffer will be filled from its first empty entry onward.
So it's not so much that Unity is doing anything strange; rather, you lucked out and everything happened to work without issue in your other application.
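As a sketch of that receive-until-complete loop (the helper name ReceiveExactly and the surrounding setup are my own, assuming a connected Socket s as in the question):
using System;
using System.Net.Sockets;

// Read exactly `count` bytes into `buffer`, calling Receive again with an offset
// until the buffer is full; a return of 0 means the peer closed the connection.
static void ReceiveExactly(Socket s, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int received = s.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (received == 0)
            throw new SocketException((int)SocketError.ConnectionReset);
        offset += received;
    }
}

// Usage: first read the 4-byte length prefix, then the payload.
byte[] lengthBytes = new byte[4];
ReceiveExactly(s, lengthBytes, 4);
int size = BitConverter.ToInt32(lengthBytes, 0);
byte[] payload = new byte[size];
ReceiveExactly(s, payload, size);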

Why is my DeflateStream not receiving data correctly over TCP?

I have a TcpClient class on a client and server setup on my local machine. I have been using the NetworkStream to facilitate communications back and forth between the two successfully.
Moving forward, I am trying to implement compression in the communications. I've tried GZipStream and DeflateStream and have decided to focus on DeflateStream. However, the connection now hangs without reading data.
I have tried four different implementations that all failed because the server side does not read the incoming data and the connection times out. I will focus on the two implementations I have tried most recently, which to my knowledge should work.
The client is broken down to this request. There are two separate implementations, one with a StreamWriter and one without.
textToSend = ENQUIRY + START_OF_TEXT + textToSend + END_OF_TEXT;
// Send XML Request
byte[] request = Encoding.UTF8.GetBytes(textToSend);
using (DeflateStream streamOut = new DeflateStream(netStream, CompressionMode.Compress, true))
{
    //using (StreamWriter sw = new StreamWriter(streamOut))
    //{
    //    sw.Write(textToSend);
    //    sw.Flush();
    streamOut.Write(request, 0, request.Length);
    streamOut.Flush();
    //}
}
The server receives the request and I do:
1.) a quick read of the first character, then, if it matches what I expect,
2.) continue reading the rest.
The first read works correctly, and if I wanted to read the whole stream, it would all be there. However, I only want to read the first character, evaluate it, and then continue in my LongReadStream method.
When I try to continue reading the stream, there is no data to be read. I am guessing that the data is being lost during the first read, but I'm not sure how to determine that. All this code works correctly when I use the plain NetworkStream.
Here is the server-side code.
private void ProcessRequests()
{
    // This method reads the first byte of data correctly and if I want to
    // I can read the entire request here. However, I want to leave
    // all that data until I want it below in my LongReadStream method.
    if (QuickReadStream(_netStream, receiveBuffer, 1) != ENQUIRY)
    {
        // Invalid Request, close connection
        clientIsFinished = true;
        _client.Client.Disconnect(true);
        _client.Close();
        return;
    }
    while (!clientIsFinished) // Keep reading text until client sends END_TRANSMISSION
    {
        // Inside this method there is no data and the connection times out waiting for data
        receiveText = LongReadStream(_netStream, _client);
        // Continue talking with Client...
    }
    _client.Client.Shutdown(SocketShutdown.Both);
    _client.Client.Disconnect(true);
    _client.Close();
}

private string LongReadStream(NetworkStream stream, TcpClient c)
{
    bool foundEOT = false;
    StringBuilder sbFullText = new StringBuilder();
    int readLength, totalBytesRead = 0;
    string currentReadText;
    c.ReceiveBufferSize = DEFAULT_BUFFERSIZE * 100;
    byte[] bigReadBuffer = new byte[c.ReceiveBufferSize];
    while (!foundEOT)
    {
        using (var decompressStream = new DeflateStream(stream, CompressionMode.Decompress, true))
        {
            //using (StreamReader sr = new StreamReader(decompressStream))
            //{
            //    currentReadText = sr.ReadToEnd();
            //}
            readLength = decompressStream.Read(bigReadBuffer, 0, c.ReceiveBufferSize);
            currentReadText = Encoding.UTF8.GetString(bigReadBuffer, 0, readLength);
            totalBytesRead += readLength;
        }
        sbFullText.Append(currentReadText);
        if (currentReadText.EndsWith(END_OF_TEXT))
        {
            foundEOT = true;
            sbFullText.Length = sbFullText.Length - 1;
        }
        else
        {
            sbFullText.Append(currentReadText);
        }
        // Validate data code removed for simplicity
    }
    c.ReceiveBufferSize = DEFAULT_BUFFERSIZE;
    c.ReceiveTimeout = timeOutMilliseconds;
    return sbFullText.ToString();
}

private string QuickReadStream(NetworkStream stream, byte[] receiveBuffer, int receiveBufferSize)
{
    using (DeflateStream zippy = new DeflateStream(stream, CompressionMode.Decompress, true))
    {
        int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
        var returnValue = Encoding.UTF8.GetString(receiveBuffer, 0, bytesIn);
        return returnValue;
    }
}
EDIT
NetworkStream has an underlying Socket property, which has an Available property. MSDN says this about the Available property:
Gets the amount of data that has been received from the network and is
available to be read.
Before the call below Available is 77. After reading 1 byte the value is 0.
//receiveBufferSize = 1
int bytesIn = zippy.Read(receiveBuffer, 0, receiveBufferSize);
There doesn't seem to be any documentation about DeflateStream consuming the whole underlying stream, and I don't know why it would do such a thing when there are explicit calls to read specific numbers of bytes.
Does anyone know why this happens, or if there is a way to preserve the underlying data for a future read? Based on this 'feature' and a previous article I read stating that a DeflateStream must be closed to finish sending (flushing won't work), it seems DeflateStreams may be limited in their use for networking, especially if one wishes to counter DoS attacks by testing incoming data before accepting a full stream.
The basic flaw I can see looking at your code is a possible misunderstanding of how the network stream and compression work.
I think your code might work if you kept working with one DeflateStream. However, you use one in your quick read and then you create another one.
I will try to explain my reasoning with an example. Assume you have 8 bytes of original data to be sent over the network in compressed form. Now let's assume, for the sake of argument, that each byte (8 bits) of original data is compressed to 6 bits. Now let's see what your code does with this.
From the network stream, you can't read less than 1 byte. You can't take 1 bit only. You take 1 byte, 2 bytes, or any number of bytes, but not bits.
But if you want to receive just 1 byte of the original data, you need to read the first whole byte of compressed data. However, only 6 bits of that compressed byte represent the first byte of uncompressed data. The last 2 bits already belong to the second byte of original data.
Now if you cut the stream there, what is left is 5 bytes in the network stream that do not make any sense on their own and can't be decompressed.
The deflate algorithm is more complex than that, and thus it makes perfect sense that it does not allow you to stop reading from the NetworkStream at one point and continue with a new DeflateStream from the middle. There is a context for the decompression that must be present in order to decompress the data to its original form. Once you dispose of the first DeflateStream in your quick read, this context is gone and you can't continue.
So, to resolve your issue, try to create only one DeflateStream, pass it to your functions, and dispose of it at the end.
This is broken in many ways.
You are assuming that a read call will read the exact number of bytes you want. It might, however, deliver everything in one-byte chunks.
DeflateStream has an internal buffer. It can't be any other way: input bytes do not correspond 1:1 to output bytes, so there must be some internal buffering. You must use one such stream.
Same issue with UTF-8: UTF-8 encoded strings cannot be split at arbitrary byte boundaries, so sometimes your Unicode data will be garbled.
Don't touch ReceiveBufferSize; it does not help in any way.
You cannot reliably flush a deflate stream, I think, because the output might end at a partial byte position. You probably should devise a message framing format in which you prepend the compressed length as an uncompressed integer, then send the compressed deflate data after the length. That is decodable in a reliable way.
Fixing these issues is not easy.
Since you seem to control both client and server, you should discard all of this and not devise your own network protocol. Use a higher-level mechanism such as web services, HTTP, or protobuf. Anything is better than what you have there.
Basically there are a few things wrong with the code I posted above. The first is that when I read data, I'm not doing anything to make sure the data is ALL being read in. Per the Microsoft documentation:
The Read operation reads as much data as is available, up to the
number of bytes specified by the size parameter.
In my case I was not making sure my reads would get all the data I expected.
This can be accomplished simply with this code.
byte[] data = new byte[packageSize];
bytesRead = _netStream.Read(data, 0, packageSize);
while (bytesRead < packageSize)
    bytesRead += _netStream.Read(data, bytesRead, packageSize - bytesRead);
On top of this problem, I had a fundamental issue with using DeflateStream: namely, I should not use DeflateStream to write directly to the underlying NetworkStream. The correct approach is to first use the DeflateStream to compress the data into a byte array, then send that byte array over the NetworkStream directly.
Using this approach helped to correctly compress data over the network and properly read the data on the other end.
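As a sketch of what the send side of this approach could look like (the method name and framing details below are my own illustration of the description above, not the original code): compress into a MemoryStream first, then write the size header followed by the compressed bytes.
using System;
using System.IO;
using System.IO.Compression;
using System.Net.Sockets;

// Sketch: compress `payload` into a byte array, then send it with a size header.
static void SendCompressed(NetworkStream netStream, byte[] payload)
{
    byte[] compressed;
    using (var ms = new MemoryStream())
    {
        using (var deflate = new DeflateStream(ms, CompressionMode.Compress, true))
        {
            deflate.Write(payload, 0, payload.Length);
        } // disposing the DeflateStream completes the compressed data
        compressed = ms.ToArray();
    }

    netStream.Write(BitConverter.GetBytes(compressed.Length), 0, 4); // compressed size
    netStream.Write(BitConverter.GetBytes(payload.Length), 0, 4);    // uncompressed size
    netStream.Write(compressed, 0, compressed.Length);
}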
You may point out that I must know the size of the data, and that is true. Every call has an 8-byte 'header' that includes the size of the compressed data and the size of the data when it is uncompressed, although I think the second was ultimately not needed.
The code for this is here. Note that the variable packageSize serves two purposes.
int packageSize = streamIn.Read(sizeOfDataInBytes, 0, 4);
while (packageSize != 4)
{
    packageSize += streamIn.Read(sizeOfDataInBytes, packageSize, 4 - packageSize);
}
packageSize = BitConverter.ToInt32(sizeOfDataInBytes, 0);
With this information I can correctly use the code I showed you first to get the contents fully.
Once I have the full compressed byte array, I can decompress the incoming data like so:
var output = new MemoryStream();
using (var stream = new MemoryStream(bufferIn))
{
    using (var decompress = new DeflateStream(stream, CompressionMode.Decompress))
    {
        decompress.CopyTo(output);
    }
}
output.Position = 0;
var unCompressedArray = output.ToArray();
output.Close();
output.Dispose();
return Encoding.UTF8.GetString(unCompressedArray);

Problems with TcpListener reading data from socket in c#

I am having trouble reading large messages sent to a TcpListener from a TcpClient using BeginReceive/BeginSend in C#.
I have tried to prepend a four-byte length header to my messages, but on occasion I do not receive it in the first packet, which is causing problems.
For example, I sent a serialized object that contains the values [204, 22, 0, 0] as the first four bytes of the byte array in my BeginSend. What I receive on the server in the BeginReceive is [17, 0, 0, 0]. I have checked Wireshark when sending simple strings and the messages are going through, so there is a problem in my code. Another example: when I send "A b c d e f g h i j k l m n o p q r s t u v w x y z 1 2 3 4 5 6 7 8 9 0" as a test, I consistently receive "p q r . . . 8 9 0".
I thought that if packets were received out of order or if one was dropped, TCP would handle the retransmission of the lost packets and/or reorder them before handing them up. This would mean that my first four bytes should always contain my message size in the header. However, looking at the above examples, this isn't the case, or it is and my code is messed up somewhere.
After the code below, I just look at whether it is sending a particular command or type of object and then respond based on what was received.
Once I have the core functionality in place and get a better understanding of the issue, I can begin to refactor, but this really has me at a standstill.
I've been banging my head against a wall for days trying to debug this. I have read several articles and questions about similar problems, but I haven't found a way to apply the suggested fixes to this particular case.
Thanks in advance for your assistance with this.
The YahtzeeClient in the following code is just a wrapper around TcpClient with player information.
private void ReceiveMessageCallback(IAsyncResult AR)
{
    byte[] response = new byte[0];
    byte[] header = new byte[4];
    YahtzeeClient c = (YahtzeeClient)AR.AsyncState;
    try
    {
        // If the client is connected
        if (c.ClientSocket.Client.Connected)
        {
            int received = c.ClientSocket.Client.EndReceive(AR);
            // If we didn't receive a message or a message has finished sending,
            // reset the messageSize to zero to prepare for the next message.
            if (received == 0)
            {
                messageSize = 0;
                // Do we need to do anything else here to prepare for a new message?
                // Clean up buffers?
            }
            else
            {
                // Temporary buffer to trim any blanks from the message received.
                byte[] tempBuff;
                // Hacky way to track if this is the first message in a series of messages.
                // If messageSize is currently set to 0 then get the new message size.
                if (messageSize == 0)
                {
                    tempBuff = new byte[received - 4];
                    // This will store the body of the message on the *first run only*.
                    byte[] body = new byte[received - 4];
                    // Only copy the first four bytes to get the length of the message.
                    Array.Copy(_buffer, header, 4);
                    // Copy the remainder of the message into the body.
                    Array.Copy(_buffer, 4, body, 0, received - 4);
                    messageSize = BitConverter.ToInt32(header, 0);
                    Array.Copy(body, tempBuff, body.Length);
                }
                else
                {
                    // Since this isn't the first message containing the header packet,
                    // we want to copy the entire contents of the byte array.
                    tempBuff = new byte[received];
                    Array.Copy(_buffer, tempBuff, received);
                }
                // messageReceived will store the entire contents of all messages in this transmission.
                // If it is a new message then initialize the array.
                if (messageReceived == null || messageReceived.Length == 0)
                {
                    Array.Resize(ref messageReceived, 0);
                }
                // Store the message in the array.
                messageReceived = AppendToArray(tempBuff, messageReceived);
                if (messageReceived.Length < messageSize)
                {
                    // Begin receiving again to get the rest of the packets for this stream.
                    c.ClientSocket.Client.BeginReceive(_buffer, 0, _buffer.Length, SocketFlags.None, ReceiveMessageCallback, c);
                    // Break out of the function. We do not want to proceed until we have a complete transmission.
                    return;
                }
                // Send it to the console
                string message = Encoding.UTF8.GetString(messageReceived);
Marked as resolved. The solution was to wrap each message in a header and an end-of-message terminator, then modify the code to look for these indicators.
The reason for using raw sockets was project constraints that ruled out web services.
You are having problems because you don't know the size of the first message, and sometimes you get more, sometimes less, sometimes some of what was left over from the previous read...
An easy solution is to ALWAYS send the message size before the actual message content, something like:
[MYHEADER][32-bit integer][Message content]
Let's suppose that MYHEADER is ASCII, just a dummy identifier. In this case I would:
1: Try to receive 12 bytes to catch the entire header (MYHEADER + 32-bit integer) and don't do anything until you have them. After that, if the header identifier is NOT MYHEADER, I would assume the message got corrupted and do something like reset the connection.
2: After confirming that the header is OK, check the 32-bit integer for the message size and allocate the necessary buffer. (You might want to limit memory usage here, something like 6 MB max, and if your messages go beyond this, add an index after the 32-bit integer to specify the message part...)
3: Try to receive until you have the message size specified in the header (a sketch follows below).
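A minimal sketch of that scheme (assuming a connected Socket, treating MYHEADER as a literal 8-character ASCII identifier, and reusing the usual receive-until-complete loop):
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Receive the fixed 12-byte header (8 ASCII characters + 32-bit length), then the body.
static byte[] ReceiveFramedMessage(Socket socket)
{
    byte[] header = new byte[12];
    ReceiveAll(socket, header, 12);

    string id = Encoding.ASCII.GetString(header, 0, 8);
    if (id != "MYHEADER")
        throw new InvalidDataException("Corrupted stream: header identifier mismatch.");

    int messageSize = BitConverter.ToInt32(header, 8);
    // Sanity limit as suggested above (6 MB) to avoid huge allocations.
    if (messageSize < 0 || messageSize > 6 * 1024 * 1024)
        throw new InvalidDataException("Unreasonable message size: " + messageSize);

    byte[] body = new byte[messageSize];
    ReceiveAll(socket, body, messageSize);
    return body;
}

// Loop until `count` bytes have arrived; a zero return means the peer closed the socket.
static void ReceiveAll(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int n = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (n == 0) throw new EndOfStreamException("Connection closed mid-message.");
        offset += n;
    }
}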
You don't seem to have a good understanding of the fact that TCP is a stream of bytes without message boundaries. For example, your header reading will fail if you get fewer than 4 bytes in that read.
A very simple way to receive a length prefixed message is using BinaryReader:
var length = br.ReadInt32();
var data = br.ReadBytes(length);
That's all. Make yourself familiar with all the standard BCL IO classes.
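For context, a rough sketch of how that could be wired up on both ends (the length-prefix framing and the names payload, tcpClient and client are my own assumptions, not from the answer):
using System.IO;
using System.Text;

// Sender: 4-byte little-endian length prefix, then the raw message bytes.
using (var bw = new BinaryWriter(tcpClient.GetStream(), Encoding.UTF8, leaveOpen: true))
{
    bw.Write(payload.Length);
    bw.Write(payload);
}

// Receiver: ReadBytes loops internally until `length` bytes arrive or the stream ends.
using (var br = new BinaryReader(client.GetStream(), Encoding.UTF8, leaveOpen: true))
{
    var length = br.ReadInt32();
    var data = br.ReadBytes(length);
}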
Usually it is best not to use sockets at all. Use a higher level protocol. WCF is good.

Sending files over TCP/ .NET SSLStream is slow/not working

I'm writing a server/client application which works with SSL (over SslStream) and has to do many things (not only file receiving/sending). Currently it works like this: there is only one connection. I always send the data from the client/server using SSLStream.WriteLine() and receive it using SSLStream.ReadLine(), because that way I can send all information over one connection and I can send from all threads without corrupting the data.
Now I wanted to implement file sending and receiving. Like other things in my client/server apps, every message has a prefix (like cl_files or something) and a base64-encoded content part (prefix and content are separated by |). I implemented the file sharing like this: the uploader sends the receiver a message with the total file size, and after that the uploader sends the base64-encoded parts of the file with the prefix r.
My problem is that the file sharing is really slow. I get around 20 KB/s from localhost to localhost. I also have another problem: if I increase the size of the base64-encoded parts of the file (which makes file sharing faster), the prefix r no longer reaches the receiver (so the data couldn't be identified).
How can I make it faster?
Any help will be greatly appreciated.
My (probably bad) code for the client:
// It's running inside a Thread
FileInfo x = new FileInfo(ThreadInfos.Path);
long size = x.Length; // gets total size
long cursize = 0;
FileStream fs = new FileStream(ThreadInfos.Path, FileMode.Open);
int readblocks = 0;
while (cursize < size)
{
    byte[] buffer = new byte[4096];
    readblocks = fs.Read(buffer, 0, 4096);
    ServerConnector.send("r", getBase64FromBytes(buffer)); // sends the encoded data with the prefix r over SSLStream.WriteLine
    cursize = cursize + Convert.ToInt64(readblocks);
    ThreadInfos.wait.setvalue((cursize / size) * 100); // outputs value to the gui
}
fs.Close();
For the Server:
case "r"://switch case for prefixes
if (isreceiving)
{
byte[] buffer = getBytesFromBase64(splited[1]);//splited ist the received Line over ReadLine splitted by the seperator "|"
rsize = rsize + buffer.LongLength;
writer.Write(buffer, 0, buffer.Length);//it writes the decoded data into the file
if (rsize == rtotalsize)//checks if file is completed
{
writer.Close();
}
}
break;
Your problem stems from the fact that you are performing what is essentially a binary operation through a text protocol, and you are exacerbating that problem by doing it over an encrypted channel. I'm not going to re-invent this for you, but here are some options...
Consider converting to an HTTPS client/server model instead of reinventing the wheel. This will give you a well-defined model for PUT/GET operations on files.
If you can not (or will not) convert to HTTPS, consider other client/server libraries that provide a secure transport and well-defined protocol for binary data. For example, I often use protobuf-csharp-port and protobuf-csharp-rpc to provide a secure protocol and transport within our datacenter or local network.
If you are stuck with your transport being a raw SslStream, try using a well-defined and proven binary serialization framework like protobuf-csharp-port or protobuf-net to define your protocol.
Lastly, if you must continue with the framework you have, try some HTTP-like tricks: write a name/value pair as text that defines the raw binary content that follows.
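A rough sketch of that last idea (the header format and the helper below are my own, assuming the SslStream is written to by one thread at a time):
using System.Net.Security;
using System.Text;

// Write a short text header describing the binary payload, then the raw bytes themselves.
static void SendFileChunk(SslStream ssl, byte[] chunk, int count)
{
    // Header line such as "r;length=4096\n": readable as text, followed by the raw binary.
    byte[] headerBytes = Encoding.ASCII.GetBytes($"r;length={count}\n");
    ssl.Write(headerBytes, 0, headerBytes.Length);
    ssl.Write(chunk, 0, count); // no base64 expansion, no line-based framing
    ssl.Flush();
}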
First of all, base64 over SSL will be slow anyway; SSL itself is slower than raw transport. File transfers are not done over base64 nowadays; the HTTP protocol is much more stable than anything else, and most libraries on all platforms are very stable. Base64 takes more space than the actual data, plus the time to encode.
Also, the following line may be a problem.
ThreadInfos.wait.setvalue((cursize / size) * 100); // outputs value to the gui
If this line is blocking, then it will slow things down for every 4 KB. Updating for every 4 KB is also not right; unless the progress value differs from the previous value by a significant amount, there is no need to update the UI.
I'd try gzip compression before sending and decompression after receiving. From my experience, it helps. Some code like this could help:
using (GZipStream stream = new GZipStream(sslStream, CompressionMode.Compress))
{
    stream.Write(...);
    stream.Flush();
    stream.Close();
}
Warning: it may interfere with SSL if the Flush is not done, and it will need some testing... and I didn't try to compile the code.
I think Akash Kava is right.
while (cursize < size) {
DateTime start = DateTime.Now;
byte[] buffer = new byte[4096];
readblocks = fs.Read(buffer, 0, 4096);
ServerConnector.send("r", getBase64FromBytes(buffer));
DateTime end = DateTime.Now;
Console.Writline((end-start).TotalSeconds);
cursize = cursize + Convert.ToInt64(readblocks);
ThreadInfos.wait.setvalue((csize / size) * 100);
end = DateTime.Now;
Console.Writline((end-start).TotalSeconds);
}
By doing this you can find out where the bottleneck is.
Also, the way you are sending data packets to the server is not robust.
Is it possible to paste your implementation of
ThreadInfos.wait.setvalue((cursize / size) * 100);
