I am receiving a byte array every second from a source in my WinForms application via UART. Let's say it looks like the example below. The sequence 99,98 marks the start of a new packet. Each packet is of variable length, but it always starts with this 99,98 id. I want to copy each individual packet into a receivedBuffer and then process the packets individually.
second 1:
{56,42,43,76,125,56,34,234,12,3,5,76,8,0,99,98,234,56,211,122,22,4,7,89,76,64,12,3,5,99,98,0,6,125}
second 2:
{6,125,56,34,234,12}
So in the above example, in second 1 I receive some garbage values first, then one full packet, and then a partial packet. In second 2 I receive the remainder of the second packet.
(PS: packet 1 is 99,98,234,56,211,122,22,4,7,89,76,64,12,3,5)
A packet continues until you receive the next 99,98 id bytes.
Sounds like a protocol that's difficult to handle: you can only rely on a "begin of packet" marker; you have neither a "packet length" nor an "end of packet" marker.
So, reasonably, what you can do is:
start with an empty array / MemoryStream
every time you get some bytes, add them to the array
parse the new array content looking for the "begin of packet" marker; if you find two (or more) of them, extract the bytes included between them, and remove all bytes preceding the last "begin of packet" found
wait for the next bytes
Note that you aren't able to tell when the last packet is complete just by parsing the received bytes.
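For illustration, a minimal sketch of those steps in C#; PacketAssembler, OnBytesReceived and ProcessPacket are illustrative names, and a List<byte> stands in for the array / MemoryStream:

using System.Collections.Generic;

class PacketAssembler
{
    private readonly List<byte> receivedBuffer = new List<byte>();

    // Call this with each chunk of bytes as it arrives from the UART.
    public void OnBytesReceived(byte[] chunk)
    {
        receivedBuffer.AddRange(chunk);

        int start = FindHeader(0);
        if (start < 0)
            return;                            // no packet start seen yet

        int next;
        while ((next = FindHeader(start + 2)) >= 0)
        {
            // Bytes from 'start' up to (but not including) 'next' are one complete packet.
            byte[] packet = receivedBuffer.GetRange(start, next - start).ToArray();
            ProcessPacket(packet);
            start = next;
        }

        // Drop garbage/consumed bytes before the last header; the partial packet stays buffered.
        receivedBuffer.RemoveRange(0, start);
    }

    private int FindHeader(int from)
    {
        for (int i = from; i < receivedBuffer.Count - 1; i++)
            if (receivedBuffer[i] == 99 && receivedBuffer[i + 1] == 98)
                return i;
        return -1;
    }

    private void ProcessPacket(byte[] packet)
    {
        // Placeholder: handle one complete packet (starting with 99,98) here.
    }
}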
I have a bitmap which I converted to bytes using BinaryFormatter and a MemoryStream, then sent the bytes to the TCP server. When I try to convert it back to a Bitmap I get this error: System.Runtime.Serialization.SerializationException: End of Stream encountered before parsing was completed. I tried converting the bitmap to bytes and back to a bitmap on the client side, just to check whether the error was due to the conversion, but everything worked fine. So I think the problem is that the server is receiving the byte array in chunks rather than in one big array. My question is: how do I check that the byte array has been fully sent?
As you say, the data can absolutely be sent in multiple chunks, and you need a way of knowing when all the data has been received. HTTP uses Content-Length in the headers to let the client know when all data has arrived. Since you control both sides, you can transform your image into a byte array and check its size; let's say it's 5000 bytes.
Then you create an int (or a long if necessary, probably not in this case), set it to 5000, send that first (as bytes), and then send the rest of the data. You will then have created your own header. On the other side you start by reading the exact number of bytes for an int (or a long, etc., if that's what you chose). Then you transform those bytes into an int, and you know 5000 bytes are coming. Keep reading until you have all 5000 bytes. This can always be optimized, but it's a simple way to do it.
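For illustration, a minimal sketch of that home-made header over a Stream; LengthPrefixed, Send, Receive and ReadExact are illustrative names (note that BitConverter uses the machine's byte order, so both ends must agree):

using System;
using System.IO;

static class LengthPrefixed
{
    // Sender: write a 4-byte length header, then the payload itself.
    public static void Send(Stream stream, byte[] payload)
    {
        byte[] header = BitConverter.GetBytes(payload.Length); // 4 bytes for an int
        stream.Write(header, 0, header.Length);
        stream.Write(payload, 0, payload.Length);
    }

    // Receiver: read the 4-byte header, then loop until the full payload has arrived.
    public static byte[] Receive(Stream stream)
    {
        byte[] header = ReadExact(stream, 4);
        int length = BitConverter.ToInt32(header, 0);
        return ReadExact(stream, length);
    }

    // Loops until exactly 'count' bytes have been read, since a single Read
    // call may return fewer bytes than requested.
    static byte[] ReadExact(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}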
I'm setting up a way to communicate between a server and a client. How I am working it at the moment, is that a stream's first byte will contain an indicator of what is coming and then looking up that request's class I can determine the length of the request:
stream.Read(message, 0, 1);
if (message[0] == <byte representation of a known class>)
{
    stream.Read(message, 0, Class.RequestSize);
}
I'm curious how to handle the case where the class size is not known, or where the data is corrupt after reading a known request.
I'm thinking that I could insert some sort of delimiter into the stream, but since a byte can only hold a value between 0 and 255, I'm not sure how to go about creating a unique delimiter. Do I want to place a pattern into the stream to represent the end of a message? How can I be sure that this pattern is unique enough not to be mistaken for actual data?
There are different approaches to this. One option would be to send the length of the class name, and possibly of the whole packet, first (e.g. always in the first byte). This way you can just read that byte, and then read n more bytes to get the class name.
With this approach you don't end up reading a lot of data that a malicious client sends you with the intent to DoS your application, and you can quickly determine whether you have read enough to handle the packet or whether it's not yet complete.
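For illustration, a sketch of what reading such a first-byte length might look like; the framing layout and ReadClassName are assumptions for this example, not a standard:

using System.IO;
using System.Text;

// Hypothetical framing: [1 byte: class-name length][class name][rest of packet].
static string ReadClassName(Stream stream)
{
    int nameLength = stream.ReadByte();    // first byte: length of the class name
    if (nameLength <= 0)
        throw new EndOfStreamException();  // stream closed (-1) or zero-length name
    byte[] nameBytes = new byte[nameLength];
    int offset = 0;
    while (offset < nameLength)            // Read may return fewer bytes than requested
    {
        int read = stream.Read(nameBytes, offset, nameLength - offset);
        if (read == 0)
            throw new EndOfStreamException();
        offset += read;
    }
    return Encoding.ASCII.GetString(nameBytes);
}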
There are some low-value control bytes that are intended specifically for use as delimiters. Start of Text and End of Text have (hex) values of 0x02 and 0x03 respectively, and there's Start of Heading paired with End of Transmission, 0x01 and 0x04; you could use these.
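If you go that route, a minimal sketch of delimiter framing (Framing and Frame are illustrative names); note it only works if the payload can never itself contain the delimiter values, which is exactly the uniqueness concern from the question:

using System;

static class Framing
{
    const byte STX = 0x02; // Start of Text
    const byte ETX = 0x03; // End of Text

    // Wraps a payload between STX and ETX. Only safe when the payload is
    // guaranteed never to contain 0x02 or 0x03 (e.g. printable text only).
    public static byte[] Frame(byte[] payload)
    {
        byte[] framed = new byte[payload.Length + 2];
        framed[0] = STX;
        Array.Copy(payload, 0, framed, 1, payload.Length);
        framed[framed.Length - 1] = ETX;
        return framed;
    }
}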
I just read an article that says TcpClient.Read() may not get all the sent bytes in one read. How do you account for this?
For example, the server can write a string to the tcp stream. The client reads half of the string's bytes, and then reads the other half in another read call.
How do you know when you need to combine the byte arrays received in both calls?
"How do you know when you need to combine the byte arrays received in both calls?"
You need to decide this at the protocol level. There are four common models:
Close-on-finish: each side can only send a single "message" per connection. After sending the message, they close the sending side of the socket. The receiving side keeps reading until it reaches the end of the stream.
Length-prefixing: Before each message, include the number of bytes in the message. This could be in a fixed-length format (e.g. always 4 bytes) or some compressed format (e.g. 7 bits of size data per byte, top bit set for the final byte of size data). Then there's the message itself. The receiving code will read the size, then read that many bytes.
Chunking: Like length-prefixing, but in smaller chunks. Each chunk is length-prefixed, with a final chunk indicating "end of message".
End-of-message signal: Keep reading until you see the terminator for the message. This can be a pain if the message has to be able to include arbitrary data, as you'd need to include an escaping mechanism in order to represent the terminator data within the message.
Additionally, less commonly, there are protocols where each message is always a particular size - in which case you just need to keep going until you've read that much data.
In all of these cases, you basically need to loop, reading data into some sort of buffer until you've got enough of it, however you determine that. You should always use the return value of Read to note how many bytes you actually read, and always check whether it's 0, in which case you've reached the end of the stream.
Also note that this doesn't just affect network streams - for anything other than a local MemoryStream (which will always read as much data as you ask for in one go, if it's in the stream at all), you should assume that data may only become available over the course of multiple calls.
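For illustration, a sketch of the first model combined with that looping advice; ReadUntilClosed is an illustrative name, not a BCL method:

using System.IO;

// "Close-on-finish" sketch: keep reading until Read returns 0, which means the
// sender has closed its side of the connection and the whole message has arrived.
static byte[] ReadUntilClosed(Stream stream)
{
    using (MemoryStream ms = new MemoryStream())
    {
        byte[] buffer = new byte[4096];
        int read;
        while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);   // only copy the bytes actually read this call
        }
        return ms.ToArray();
    }
}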
You should call Read() in a loop. The condition of that loop would check whether there is still any data available to be read.
That is kind of hard to answer, because you can never know when data will arrive; that's why I usually use a thread for receiving data in my chat program. But you should be able to use something similar to this:
// myReadBuffer is a byte[]; myCompleteMessage collects the text received so far
var myCompleteMessage = new StringBuilder();
int numberOfBytesRead;

do
{
    numberOfBytesRead = myNetworkStream.Read(myReadBuffer, 0, myReadBuffer.Length);
    myCompleteMessage.AppendFormat("{0}",
        Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
}
while (myNetworkStream.DataAvailable);
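One caveat: DataAvailable only reports whether some bytes are already buffered locally at that moment; it does not tell you whether the complete message has arrived, so this loop can still finish mid-message. For framed protocols, combine it with one of the strategies described above (a length prefix, a terminator, etc.).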
Look at this source!
I am using the server-client model to communicate with a hardware board using socket programming.
I receive data from the board using the Read() method of the NetworkStream class, which fills a buffer up to a specified maximum size and returns the length of the valid data in the buffer. I have set the maximum buffer size to a sufficiently large number.
The board sends a set of messages every 100ms. Each message consists of a 2-byte constant header and a variable number of data bytes after the header.
The problem is that I do not receive the messages one by one! Instead, I receive a buffer that may contain 2 or 3 messages, or one message scattered across two buffers.
Currently, I am using a DFA which gathers the content of messages using the constant header bytes (we do not know the length of the messages, we just know the header bytes), but the problem is that the data bytes may randomly contain the header bytes too!
Is there any efficient way to gather the bytes of each message from the buffers using some specific stream or class? How can I overcome this problem?
You need to add an additional buffer component between your consumer DFA and the socket client.
Whenever data is available from the NetworkStream, the buffer component reads it and appends it to its own private buffer, incrementing an "available bytes" counter. The buffer component needs to expose at least the following functionality to its users:
a BytesAvailable property -- this returns the value of the counter
a PeekBytes(int count) method -- this returns the first count bytes of the buffer, if that much is available at least, and does not modify the counter or the buffer
a ReadBytes(int count) method -- as above, but it decrements the counter by count and removes the bytes read from the buffer so that subsequent PeekBytes calls will never read them again
Keep in mind that you don't need to be able to service an arbitrarily high count parameter; it is enough if you can service a count as long as the longest message it would be possible to receive at all times.
Obviously the buffer component needs to keep some kind of data structure that allows "wrapping around"; you might want to look into a circular (ring) buffer implementation, or you can just use two fixed buffers of size N, where N is the length of the longest message, and switch from one to the other as they fill up. Be careful to stop pulling data from the NetworkStream if your buffers become full, and only continue after the DFA has called ReadBytes to free up some buffer space.
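For illustration, a minimal sketch of such a buffer component; it uses a List<byte> for brevity where a real implementation would more likely use a circular buffer, as noted above:

using System;
using System.Collections.Generic;

class ReceiveBuffer
{
    private readonly List<byte> data = new List<byte>();

    // The "available bytes" counter.
    public int BytesAvailable { get { return data.Count; } }

    // Call this whenever new bytes have been read from the NetworkStream.
    public void Append(byte[] bytes, int count)
    {
        for (int i = 0; i < count; i++)
            data.Add(bytes[i]);
    }

    // Returns the first 'count' bytes without consuming them.
    public byte[] PeekBytes(int count)
    {
        if (count > data.Count)
            throw new InvalidOperationException("Not enough data buffered.");
        return data.GetRange(0, count).ToArray();
    }

    // As PeekBytes, but consumes the bytes so they are never returned again.
    public byte[] ReadBytes(int count)
    {
        byte[] result = PeekBytes(count);
        data.RemoveRange(0, count);
        return result;
    }
}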
Whenever your DFA needs to read data, it will first ask your buffer stage how much data it has accumulated and then proceed accordingly. It would look something like this:
if (BytesAvailable < 2)
    return; // no header to read, cannot do anything

// peek at the header -- do not remove it from the buffer!
header = PeekBytes(2);

// calculate the full message length X based on the header;
// if this is not possible from the header alone, you might want to do this
// iteratively, or you might want to change the header so that it is possible
length = X;

if (BytesAvailable < 2 + length)
    return; // no full message buffered yet, cannot continue

header = ReadBytes(2);       // now remove the header from the buffer
message = ReadBytes(length); // and remove the whole message as well
This way your DFA will only ever deal with whole messages.
I'm one of those guys who come here to find answers to questions that others have asked; I don't think I've ever asked anything myself. But after two days of searching unsuccessfully, I decided it's time to ask something. So here it is...
I have a TCP server and client written in C#, .NET 4, asynchronous sockets using SocketAsyncEventArgs. I have a length-prefixed message framing protocol. Overall everything works just fine, but one issue keeps bugging me.
The situation is like this (I will use small numbers just as an example):
Let's say the server has a send buffer 16 bytes long.
It sends a message which is 6 bytes long and prefixes it with a 4-byte length prefix. The total message length is 6+4=10.
The client reads the data and receives a buffer 16 bytes long (yes: 10 bytes of data and 6 bytes equal to zero).
Received buffer looks like this: 6 0 0 0 56 21 33 1 5 7 0 0 0 0 0 0
So I read the first 4 bytes, which are my length prefix, determine that my message is 6 bytes long, read it as well, and everything is fine so far. Then I have 16-10=6 bytes left to read, all of them zeroes. I read 4 of them, since that's my length prefix, so it's a zero-length message, which is allowed as a keep-alive packet.
Remaining data to read: 0 0
Now the issue "kicks in". I got only 2 remaining bytes to read, they are not enough to complete a 4 byte-long length prefix buffer. So I read those 2 bytes, and wait for more incoming data. Now server is not aware that I'm still reading length prefix (I'm just reading all those zeroes in the buffer) and sends another message correctly prefixed with 4 bytes. And the client is assuming the server sends those missing 2 bytes. I receive the data on the client side, and read first two bytes to form a complete 4 byte length buffer. The results are something like that
lengthBuffer = new byte[4]{0, 0, 42, 0}
This then translates into a message length of 2752512, so my code will continue reading the next 2752512 bytes to complete the message...
In every single message-framing example I have seen, zero-length messages are supported as keep-alives, and every example I've seen does nothing more than I do. The problem is that I do not know how much data I have to read when I receive it from the server. Since I have a buffer partially filled with zeroes, I have to read it all, as those zeroes could be keep-alives sent from the other end of the connection.
I could drop zero-length messages, stop reading the buffer after the first empty message, and use custom messages for my keep-alive mechanism; that should fix the issue. But I want to know whether I am missing something or doing something wrong, since every code example I've seen seems to have the same issue (?)
UPDATE
Marc Gravell, you sir pulled the words right out of my mouth. I was about to add an update that the issue is with sending the data. The problem is that initially, when exploring .NET sockets and SocketAsyncEventArgs, I came across this sample: http://archive.msdn.microsoft.com/nclsamples/Wiki/View.aspx?title=socket%20performance
It uses a reusable pool of buffers. It takes the predefined maximum number of allowed client connections, for example 10, takes the maximum size of a single buffer, for example 512, and creates one large buffer for all of them: 512 * 10 * 2 (for send and receive) = 10240.
So we have byte[] buff = new byte[10240];
Then, for each client that connects, it assigns a piece of this large buffer. The first connected client gets the first 512 bytes for data-reading operations and the next 512 bytes (offset 512) for data-sending operations. So the code ends up with an already-allocated send buffer of size 512 (exactly the number the client later receives as BytesTransferred). This buffer is populated with data, and all the remaining space out of those 512 bytes is sent as zeroes.
Strangely enough, this example is from MSDN. The reason for the single huge buffer is to avoid fragmenting heap memory when a buffer gets pinned and the GC can't collect it, or something like that.
Comment from BufferManager.cs in the provided example (see link above):
This class creates a single large buffer which can be divided up and assigned to SocketAsyncEventArgs objects for use with each socket I/O operation. This enables buffers to be easily reused and guards against fragmenting heap memory.
So the issue is pretty much clear now. Any suggestions on how I should resolve this are welcome :) Is it true what they say about fragmented heap memory? Is it OK to create a data buffer "on the fly"? If so, will I have memory issues when the server scales to a few hundred or even thousands of clients?
I guess the problem is that you are treating the trailing zeros in the buffer you read as data. This is not data. It is garbage. No one ever sent it to you.
The Stream.Read call returns the number of bytes actually read. You should not interpret the rest of the buffer in any way.
"The problem is that I do not know how much data I have to read when I receive it from the server."
Yes, you do: Use the return value from Stream.Read.
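For example (buffer and ProcessReceived are placeholders for your own buffer and handling code):

int read = stream.Read(buffer, 0, buffer.Length);
// Only buffer[0 .. read - 1] holds bytes received by this call;
// anything past that is stale content from earlier reads, not data.
ProcessReceived(buffer, read);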
That sounds simply like a bug in either your send or your receive code. You should only get BytesTransferred equal to the data that was actually sent, or some smaller number if it arrives in fragments. The first thing I would wonder is: did you set up the send correctly? i.e. if you have an oversized buffer, a correct implementation might look like:
args.SetBuffer(buffer, 0, actualBytesToSend);
if (!socket.SendAsync(args)) { /* whatever */ }
where actualBytesToSend can be much less than buffer.Length. My initial suspicion is that you are doing something like:
args.SetBuffer(buffer, 0, buffer.Length);
and therefore sending more data than you have actually populated.
I should emphasize: there is something wrong in either your send or receive; I do not believe, at least without an example, that there is some fundamental underlying bug in the BCL here - I use the async API extensively, and it works fine - but you do need to accurately track the data you are sending and receiving at all points.
"Now server is not aware that I'm still reading length prefix (I'm just reading all those zeroes in the buffer) and sends another message correctly prefixed with 4 bytes.".
Why? How does the server know what you are and aren't reading? If the server retransmits any part of a message, it is in error. TCP already does that for you.
There seems to be something radically wrong with your server.