Based on the advice of @Len-Holgate in this question, I'm asynchronously requesting 0-byte reads, and in the callback, accepting the available bytes with synchronous reads, since I know the data is available and the reads won't block. This seems efficient and wonderful.
But then I add the option for SslStream, and the approach falls apart. The zero-byte read is fine, but the SslStream decrypts the bytes, leaving a zero byte count in the TcpClient's buffer (appropriately so), and I cannot determine how many bytes are now available for reading in the SslStream.
Is there a simple trick around this?
Some code, just for context:
sslStream.BeginRead(this.zeroByteBuffer, 0, 0, DataAvailable, this);
And after the EndRead() (which correctly returns 0), DataAvailable contains:
// by now this is 0, because sslStream has already consumed the bytes
available = myTcpClient.Available;

if (0 < available) // Never occurs
{
    // this part can be distractingly complicated, but
    // it's based on the available byte count
    sslStream.Read(...);
}
And due to the protocol, I need to evaluate the data byte by byte and decode variable-width Unicode and so on. I don't want to have to read byte-by-byte asynchronously!
If I understood correctly, your messages are delimited by a certain character, and you are already using a StringBuilder to cover the case when a message is fragmented into multiple pieces.
You could consider ignoring the delimiter when reading data, appending any data to the local StringBuilder as it becomes available, and then inspecting the StringBuilder for the delimiter character. When found, you can extract a single message using sb.ToString(0, delimiterIndex) and sb.Remove(0, delimiterIndex), repeating until no delimiters remain.
This would also cover the case when two messages are received simultaneously.
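A rough sketch of that idea, assuming the incoming bytes have already been decoded to text and that '\n' is the delimiter (the class and method names here are illustrative, not from the original code):

using System;
using System.Text;

class DelimitedMessageBuffer
{
    // '\n' stands in for whatever delimiter character the protocol defines.
    private const char Delimiter = '\n';
    private readonly StringBuilder sb = new StringBuilder();

    // Call this with each chunk of decoded text as it arrives from the stream.
    public void Append(string chunk)
    {
        sb.Append(chunk);

        int delimiterIndex;
        while ((delimiterIndex = IndexOfDelimiter()) >= 0)
        {
            // Extract one complete message (without the delimiter)...
            string message = sb.ToString(0, delimiterIndex);

            // ...and remove it, plus the delimiter, from the buffer.
            sb.Remove(0, delimiterIndex + 1);

            OnMessage(message);
        }
    }

    private int IndexOfDelimiter()
    {
        for (int i = 0; i < sb.Length; i++)
            if (sb[i] == Delimiter) return i;
        return -1;
    }

    // Placeholder for whatever the application does with a complete message.
    private void OnMessage(string message) => Console.WriteLine(message);
}

Because the loop keeps extracting until no delimiter is left, two messages arriving in one chunk are handled the same way as a message split across chunks.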
FileStream.Read() returns the number of bytes read, but... is there any situation, other than having reached the end of the file, where it will read fewer bytes than the number requested and not throw an exception?
the documentation says:
The Read method returns zero only after reaching the end of the stream. Otherwise, Read always reads at least one byte from the stream before returning. If no data is available from the stream upon a call to Read, the method will block until at least one byte of data can be returned. An implementation is free to return fewer bytes than requested even if the end of the stream has not been reached.
But this doesn't quite explain in what situations data would be unavailable and cause the method to block until it can read again. I mean, shouldn't most situations where data is unavailable force an exception?
What are real situations where comparing the number of bytes read against the number of expected bytes could differ (assuming that we're already checking for end of file when we mention number of bytes expected)?
EDIT: A bit more information. The reason I'm asking this is that I've come across a bit of code where the developer did something like this:
bytesExpected = (remainingBytesInFile > 94208) ? 94208 : remainingBytesInFile;

while (bytesRead < bytesExpected)
{
    bytesRead += fileStream.Read(buffer, bytesRead, bytesExpected - bytesRead);
}
Now, I can't see any advantage to having this while loop at all. I'd expect it to throw an exception if it can't read the expected number of bytes (bearing in mind it already takes into account that there are that many bytes left to read).
What reason could one possibly have for something like this? I'm sure I'm missing something.
The documentation you quote is for Stream.Read; FileStream derives from Stream. Since FileStream is a stream, it should obey the stream contract. Not all streams do, but unless you have a very good reason, you should stick to it.
In a typical file stream, you'll only get a return value smaller than count when you reach the end of file (and it's a pretty simple way of checking for the end of file).
However, in a NetworkStream, for example, you keep reading in a loop until the method returns zero - signalling the end of stream. The same works for file streams - you know you're at the end of the file when Read returns zero.
Most importantly, FileStream isn't just for what you'd consider files - it's also for pseudo-files like standard input/output pipes and COM ports (try opening a FileStream on PRN, for example). In that case, you're not reading a file with a fixed length, and the behaviour is the same as with NetworkStream.
Finally, don't forget that FileStream isn't sealed. It's perfectly fine for you to implement a virtualized file system, for example - and it's perfectly fine if your virtualized file system doesn't support seeking or checking the length of a file.
EDIT:
To address your edit: this is exactly how you're supposed to read any stream. There's nothing wrong with it. If there's nothing else to read in a stream, the Read method will simply return 0, and you know the stream is over. The only thing is, it seems the developer tries to fill the buffer completely, one buffer at a time - this only makes sense if you explicitly need to partition the file into 94208-byte chunks and pass each byte[] somewhere for further processing.
If that's not the case, you don't really need to fill the whole buffer - you just keep reading (and probably writing on some other side) until Read returns 0. And indeed, by default, FileStream will always fill the whole buffer unless it's built around a pipe handle - but since that's a possibility, you shouldn't rely on the "real file" behaviour. So as long as you need those byte[] chunks for something non-stream-like (e.g. parsing messages), this is entirely fine. If you're only using the stream as an actual stream, and streaming the data somewhere else, there isn't much point - you only need one while loop to read the file.
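For the pure streaming case, a minimal sketch of that single read loop might look like this (the method name and the 81920-byte chunk size are arbitrary choices, not from the original code):

using System.IO;

static void CopyInChunks(Stream source, Stream destination)
{
    byte[] buffer = new byte[81920]; // arbitrary chunk size
    int read;
    // Read returns 0 only at the end of the stream; any smaller-than-requested
    // count before that is normal and simply means "that's what was available".
    while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        destination.Write(buffer, 0, read);
    }
}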
Your expectations would only apply to the case where the stream is reading data off a no-latency source. Other I/O sources can be slow, which is why the Read method may not always be able to return immediately. That doesn't mean there is an error (so no exception), just that it has to wait for data to arrive.
Examples: network stream, file stream on slow disk, etc.
(UPDATE, HDD example) To give an example specific to files (since your case is FileStream, although Read is defined on Stream, so all implementations should fulfil the requirements): mechanical hard drives go to "sleep" when not active (especially on battery-powered devices, i.e. laptops). Spinning up can take a second or so. That is not an IOException, but your read would have to wait for a second before any data is read.
The simple answer is that with a FileStream it probably never happens.
However, keep in mind that the Read method is inherited from Stream, which serves as the base for many other streams, like NetworkStream, and in that case you may not be able to read as many bytes as you requested simply because they haven't been received from the network yet.
So, as the documentation says, it all depends on the implementation of the specific type of stream - FileStream, NetworkStream, etc.
I'm setting up a way to communicate between a server and a client. The way I'm working it at the moment is that a stream's first byte contains an indicator of what is coming, and by looking up that request's class I can determine the length of the request:
stream.Read(message, 0, 1);
if (message[0] == <byte representation of a known class>)
{
    stream.Read(message, 0, Class.RequestSize);
}
I'm curious how to handle the case where the class size is not known, or where, after reading a known request, the data is corrupt.
I'm thinking that I can insert in some sort of delimiter into the stream, but since a byte can only be between 0-255, I'm not sure how to go about creating a unique delimiter. Do I want to place a pattern into the stream to represent the end of a message? How can I be sure that this pattern is unique enough to not be mistaken for actual data?
There are different approaches to this. One option would be to send the length of the class name, and possibly of the whole packet, first (e.g. always in the first byte). This way you can just read that byte and then read n more bytes to get the class name.
With this approach you don't end up reading a lot of data that a malicious client sends you with the intent of DoSing your application, and you can quickly determine whether you have read enough to handle the packet or whether it isn't yet complete.
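As a minimal sketch of that idea, assuming a single length byte followed by the class name in UTF-8 (the helper name ReadClassName and the encoding are assumptions, not part of the original question):

using System.IO;
using System.Text;

// Reads one length-prefixed class name: first byte = length, then that many bytes.
static string ReadClassName(Stream stream)
{
    int length = stream.ReadByte();
    if (length < 0)
        throw new EndOfStreamException("Connection closed before the length byte arrived.");

    byte[] nameBytes = new byte[length];
    int offset = 0;
    while (offset < nameBytes.Length)
    {
        int read = stream.Read(nameBytes, offset, nameBytes.Length - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed in the middle of the name.");
        offset += read;
    }
    return Encoding.UTF8.GetString(nameBytes);
}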
There are some low-level bytes which are intended specifically as delimiters. Start of Text and End of Text have the (hex) values 0x02 and 0x03 respectively, and you have Start of Heading coupled with End of Transmission, 0x01 and 0x04; you could use these.
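A tiny sketch of framing a text payload with those control bytes (purely illustrative; it assumes the payload itself never contains STX or ETX):

using System.Text;

static class ControlCharFraming
{
    const byte Stx = 0x02; // Start of Text
    const byte Etx = 0x03; // End of Text

    // Wraps an ASCII message in STX ... ETX so the receiver can find message boundaries.
    public static byte[] Frame(string message)
    {
        byte[] body = Encoding.ASCII.GetBytes(message);
        byte[] framed = new byte[body.Length + 2];
        framed[0] = Stx;
        body.CopyTo(framed, 1);
        framed[framed.Length - 1] = Etx;
        return framed;
    }
}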
We are using an application protocol which specifies the length of the message in a length indicator in the first 4 bytes. Socket.Receive will return as much data as is in the protocol stack at the time, or block until data is available. This is why we have to continuously read from the socket until we have received the number of bytes in the length indicator. Socket.Receive will return 0 if the other side closed the connection. I understand all that.
Is there a minimum number of bytes that has to be read? The reason I ask is that, from the documentation, it seems entirely possible that the entire length indicator (4 bytes) might not be available when Socket.Receive returns. We would then have to keep trying. It would be more efficient to minimize the number of times we call Socket.Receive, because it has to copy things in and out of buffers. So is it safer to get a single byte at a time for the length indicator, is it safe to assume that 4 bytes will always be available, or should we keep trying to get 4 bytes using an offset variable?
The reason I think there may be some sort of default minimum is that I came across a variable called ReceiveLowWater that I can set in the socket options. But this appears to only apply to BSD. MSDN: see SO_RCVLOWAT.
It isn't really that important but I am trying to write unit tests. I have already wrapped a standard .Net Socket behind an interface.
is it safe to assume that 4 bytes will always be available
NO. Never. What if someone is testing your protocol with, say, telnet and a keyboard? Or over a really slow or busy connection? You can receive one byte at a time, or a "length indicator" split across multiple Receive() calls. This isn't a unit-testing matter, it's a basic socket matter that causes problems in production, especially under stressful conditions.
or should we keep trying to get 4 bytes using an offset variable?
Yes, you should. For your convenience, you can use the Socket.Receive() overload that allows you to specify a number of bytes to be read, so you won't read too much. But please note that it can return fewer bytes than requested; that's what the offset parameter is for, so the next call can continue writing into the same buffer:
byte[] lenBuf = new byte[4];
int offset = 0;
while (offset < lenBuf.Length)
{
    int received = socket.Receive(lenBuf, offset, lenBuf.Length - offset, SocketFlags.None);
    if (received == 0)
    {
        // connection gracefully closed, do your thing to handle that
        // (and break out, otherwise this loop would spin forever)
        break;
    }
    offset += received;
}
// Here you're ready to parse lenBuf
The reason that I think that there may be some sort of default minimum level is that I came across a varaible called ReceiveLowWater variable that I can set in the socket options. But this appears to only apply to BSD.
That is correct; the "receive low water" option is only included for backwards compatibility and does nothing apart from throwing errors, as per MSDN (search for SO_RCVLOWAT):
"This option is not supported by the Windows TCP/IP provider. If this option is used on Windows Vista and later, the getsockopt and setsockopt functions fail with WSAEINVAL. On earlier versions of Windows, these functions fail with WSAENOPROTOOPT."
So I guess you'll have to use the offset.
It's a shame, because it could enhance performance. However, as @cdleonard pointed out in a comment, the performance penalty from keeping an offset variable will be minimal, as you'll usually receive all four bytes at once.
No, there isn't a minimum buffer size; the length in the receive call just needs to match the actual space available.
If you send a length in four bytes before the message's actual data, the recipient needs to handle the cases where 1, 2, 3 or 4 bytes are returned, keep repeating the read until all four bytes are received, and then repeat the procedure to receive the actual data.
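A minimal sketch of that two-stage pattern, factored into a helper (it assumes a big-endian 4-byte length and a connected Socket; the method names are placeholders, not from the original post):

using System.IO;
using System.Net.Sockets;

// Keeps calling Receive until exactly count bytes have arrived, or the peer closes.
static void ReceiveExactly(Socket socket, byte[] buffer, int count)
{
    int offset = 0;
    while (offset < count)
    {
        int received = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (received == 0)
            throw new EndOfStreamException("Connection closed before the full message arrived.");
        offset += received;
    }
}

static byte[] ReceiveMessage(Socket socket)
{
    // First the 4-byte length indicator...
    byte[] lengthBuffer = new byte[4];
    ReceiveExactly(socket, lengthBuffer, 4);
    int length = (lengthBuffer[0] << 24) | (lengthBuffer[1] << 16)
               | (lengthBuffer[2] << 8) | lengthBuffer[3];

    // ...then the payload itself, using exactly the same loop.
    byte[] payload = new byte[length];
    ReceiveExactly(socket, payload, length);
    return payload;
}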
I just read an article that says TCPClient.Read() may not get all the sent bytes in one read. How do you account for this?
For example, the server can write a string to the tcp stream. The client reads half of the string's bytes, and then reads the other half in another read call.
how do you know when you need to combine the byte arrays received in both calls?
how do you know when you need to combine the byte arrays received in both calls?
You need to decide this at the protocol level. There are four common models:
Close-on-finish: each side can only send a single "message" per connection. After sending the message, they close the sending side of the socket. The receiving side keeps reading until it reaches the end of the stream.
Length-prefixing: Before each message, include the number of bytes in the message. This could be in a fixed-length format (e.g. always 4 bytes) or some compressed format (e.g. 7 bits of size data per byte, top bit set for the final byte of size data). Then there's the message itself. The receiving code will read the size, then read that many bytes.
Chunking: Like length-prefixing, but in smaller chunks. Each chunk is length-prefixed, with a final chunk indicating "end of message".
End-of-message signal: Keep reading until you see the terminator for the message. This can be a pain if the message has to be able to include arbitrary data, as you'd need to include an escaping mechanism in order to represent the terminator data within the message.
Additionally, less commonly, there are protocols where each message is always a particular size - in which case you just need to keep going until you've read that much data.
In all of these cases, you basically need to loop, reading data into some sort of buffer until you've got enough of it, however you determine that. You should always use the return value of Read to note how many bytes you actually read, and always check whether it's 0, in which case you've reached the end of the stream.
Also note that this doesn't just affect network streams - for anything other than a local MemoryStream (which will always read as much data as you ask for in one go, if it's in the stream at all), you should assume that data may only become available over the course of multiple calls.
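To make the compressed length-prefix variant mentioned above concrete, here is a sketch of one possible encoding: 7 bits of size per byte, top bit set on the final size byte, with the low 7 bits written first (the ordering and the method names are illustrative choices, not a standard library API):

using System.IO;

// Writes a length using 7 bits per byte; the top bit marks the final size byte.
static void WriteLength(Stream stream, int length)
{
    while (length >= 0x80)
    {
        stream.WriteByte((byte)(length & 0x7F)); // top bit clear: more size bytes follow
        length >>= 7;
    }
    stream.WriteByte((byte)(length | 0x80));     // top bit set: this is the last size byte
}

static int ReadLength(Stream stream)
{
    int length = 0;
    int shift = 0;
    while (true)
    {
        int b = stream.ReadByte();
        if (b < 0) throw new EndOfStreamException();
        length |= (b & 0x7F) << shift;
        if ((b & 0x80) != 0) return length;      // top bit set: the size is complete
        shift += 7;
    }
}

After ReadLength, you would read exactly that many bytes in a loop, just as with a fixed 4-byte prefix.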
You should call read() in a loop. The condition of that loop would check if there is still any data available to be read.
That is kind of hard to answer, because you can never know when data will arrive, which is why I usually use a thread for receiving data in my chat program. But you should be able to use something similar to this:
do
{
    numberOfBytesRead = myNetworkStream.Read(myReadBuffer, 0, myReadBuffer.Length);
    myCompleteMessage.AppendFormat("{0}",
        Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
}
while (myNetworkStream.DataAvailable);
Look at this source!
This code works:
TcpClient tcpClient = (TcpClient)client;
NetworkStream clientStream = tcpClient.GetStream();
byte[] message = new byte[5242880];
int bytesRead;
bytesRead = clientStream.Read(message, 0, 909699);
But this returns the wrong number of bytes:
bytesRead = clientStream.Read(message, 0, 5242880);
Why? How can I fix it?
(the real data size is 1475186; the code returns 11043 as the number of bytes read)
If this is a TCP based stream, then the answer is that the rest of the data simply didn't arrive yet.
TCP is stream oriented. That means there is no relation between the number of Send/Write calls, and the number of receive events. Multiple writes can be combined together, and single writes can be split.
If you want to work with messages on TCP, you need to implement your own packeting algorithm on top of it. Typical strategies to achieve this are:
Prefix each packet with its length; usually used with binary data.
Use a separation sequence such as a line break; usually used with text data.
If you want to read all data in a blocking way, you can loop until DataAvailable is true but a subsequent call to Read returns 0. (Hope I remembered that part correctly, haven't done any network programming in a while.)
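As an illustration of the first strategy above, a sketch of the sending side (the 4-byte big-endian prefix and the method name are assumptions, not part of any particular library):

using System.IO;

// Sends one framed message: a 4-byte big-endian length, then the payload itself.
static void WriteMessage(Stream stream, byte[] payload)
{
    byte[] prefix =
    {
        (byte)(payload.Length >> 24),
        (byte)(payload.Length >> 16),
        (byte)(payload.Length >> 8),
        (byte)payload.Length
    };
    stream.Write(prefix, 0, prefix.Length);
    stream.Write(payload, 0, payload.Length);
}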
From MSDN:
The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter.
I.e. you have to call the Read() method in a loop until you have received all the data. Have a look at the sample code in MSDN.
You need to loop, reading bytes from the message, until the Available property on the TcpClient or the DataAvailable property of the NetworkStream is 0 (= no more bytes left).
Read the Documentation:
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
So it could be that the connection was shut down, which is why you get a different number each time; in any case, you can check the result to see whether that is the reason.
I think the answers already here address your specific question quite well but, perhaps more generally: if you are trying to send data over a NetworkStream object for the purposes of network communication, check out the open-source library networkComms.net.