When does TcpClient's NetworkStream finish one read operation? - c#

I am working on a project that involves client server communication via TCP and Google Protocol Buffer. On the client side, I am basically using NetworkStream.Read() to do blocking read from server via a byte array buffer.
According to MSDN documentation,
This method reads data into the buffer parameter and returns the number of bytes successfully read. If no data is available for reading, the Read method returns 0. The Read operation reads as much data as is available, up to the number of bytes specified by the size parameter. If the remote host shuts down the connection, and all available data has been received, the Read method completes immediately and returns zero bytes.
It is the same case with async read (NetworkStream.BeginRead and EndRead). My question is: when does Read()/EndRead() return? It seems like it should return after all the bytes in the buffer have been filled, but in my own testing that is not the case: the number of bytes read in one operation varies a lot. I think that makes sense, because if there is a pause on the server side when sending messages, the client should not wait until the read buffer has been filled. Does Read()/EndRead() inherently have some timeout mechanism?
I was trying to find out how Mono implements Read() in NetworkStream and traced it until an extern method Receive_internal() is called.

It reads whatever data is available on the NetworkStream, or until the buffer is full, whichever comes first. You have already noticed this behaviour.
So you will need to process all the bytes and see whether the message is complete. You do this by framing a message. See .NET question about asynchronous socket operations and message framing on how you can do this.
As for the timeout question: if you are asking whether BeginRead has a timeout, I would say no, because it just waits for data to arrive on the stream and puts it into a buffer, after which you can process the incoming bytes.
The number of bytes available on the read action depends on things like your network (e.g. latency, proxy throttling) and the client sending the data.
BeginRead behaviour summary:
1. Call BeginRead() -> wait for bytes to arrive on the stream...
2. One or more bytes arrive on the stream
3. The byte(s) from step 2 are copied into the buffer that was supplied
4. Call EndRead() -> the byte(s) in the buffer can now be processed
The most common practice is to repeat all these steps again.
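A minimal blocking read loop that copes with this behaviour might look like the following (a sketch; the helper name is mine, not part of the framework):

```csharp
using System.IO;

static class StreamHelpers
{
    // Read() may return anywhere between 1 byte and 'count' bytes, so loop
    // until the requested number of bytes has actually been received.
    public static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)  // remote host shut down the connection
                throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```

The same loop works identically for the synchronous and the Begin/EndRead style, because both report how many bytes were actually placed in the buffer.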

If Read was waiting for a full buffer of data, you could easily deadlock if the remote party expects your response but you are waiting for a full buffer which will never come.
According to this logic it must return without ever blocking if data is available. Even if it is just a single byte that is available.
Assume the server sends one message (100 bytes) every 50 ms; how many bytes are read on the client side in one NetworkStream.Read() call?
Each call will return between one byte and the number of bytes available without blocking. Nothing, nothing, nothing else is guaranteed. In practice you will get one or multiple network packets at once. It doesn't make sense for the stack to withhold available bytes.
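To make message boundaries explicit regardless of how the bytes arrive, a common approach is a four-byte length prefix. A sketch (the class and method names are illustrative; the prefix is written in network byte order):

```csharp
using System;
using System.IO;
using System.Net;

static class Framing
{
    // Writes one message: a 4-byte big-endian length, then the payload.
    public static void WriteMessage(Stream stream, byte[] payload)
    {
        byte[] prefix = BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        stream.Write(prefix, 0, 4);
        stream.Write(payload, 0, payload.Length);
    }

    // Reads one message, looping because a single Read call may return
    // fewer bytes than requested.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 4);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
        return ReadExactly(stream, length);
    }

    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }
}
```

With this framing, it no longer matters whether a 100-byte message arrives in one Read call or ten; the reader only hands complete messages up to the application.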


C# Socket.Send( ): does it send all data or not?

I was reading about sockets from a book called "C# Network Programming" by Richard Blum. The following excerpt states that the Send() method is not guaranteed to send all the data passed to it.
byte[] data = new byte[1024];
int sent = socket.Send(data);
On the basis of this code, you might be tempted to presume that the
entire 1024-byte data buffer was sent to the remote device... but this
might be a bad assumption. Depending on the size of the internal TCP
buffer and how much data is being transferred, it is possible that not
all the data supplied to the Send() method was actually sent.
However, when I went and looked at the Microsoft documentation https://msdn.microsoft.com/en-us/library/w93yy28a(v=vs.110).aspx it says:
If you are using a connection-oriented protocol, Send will block until
all of the bytes in the buffer are sent, unless a time-out was set
So which is it? The book was published in 2004, so has it changed since then?
I'm planning to use asynchronous sockets, so my next question is, would BeginSend() send all data?
All you had to do was read the rest of the exact same paragraph you quoted. There's even an exception to your quote given in the very same sentence.
If you are using a connection-oriented protocol, Send will block until all of the bytes in the buffer are sent, unless a time-out was set by using Socket.SendTimeout. If the time-out value was exceeded, the Send call will throw a SocketException. In nonblocking mode, Send may complete successfully even if it sends less than the number of bytes in the buffer. It is your application's responsibility to keep track of the number of bytes sent and to retry the operation until the application sends the bytes in the buffer.
For BeginSend, the behavior is also described:
Your callback method should invoke the EndSend method. When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception.
That's not a very nice design and defeats the whole point of a callback! Consider using SendAsync instead (and then you still need to check the BytesTransferred property).
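For the nonblocking case the documentation describes, a loop that tracks how many bytes the stack has accepted might be sketched like this (SendAll is my name, not a framework method):

```csharp
using System.Net.Sockets;

static class Sender
{
    // Keeps calling Send, advancing the offset, until every byte in
    // 'data' has been handed to the TCP stack.
    public static void SendAll(Socket socket, byte[] data)
    {
        int sent = 0;
        while (sent < data.Length)
        {
            sent += socket.Send(data, sent, data.Length - sent, SocketFlags.None);
        }
    }
}
```

In blocking mode the loop will usually run only once, but writing it this way is safe in both modes.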
Both of the resources you quoted are correct. I think the wording could have been better though.
In the MSDN docs it is also written that
There is also no guarantee that the data you send will appear on the
network immediately. To increase network efficiency, the underlying
system may delay transmission until a significant amount of outgoing
data is collected.
So the Send method blocks until the underlying system has room to buffer your data for a network send.
A successful completion of the Send method means that the underlying
system has had room to buffer your data for a network send.

TcpClient wait for CRLF

I'm writing a class library to communicate with a PLC using TCP. The communication is based on sending a data string terminated by a CRLF and then waiting for an acknowledge string (also terminated by a CRLF) to confirm the data was received (yes, I know this is also included in the TCP/IP protocol, but that is another discussion).
Currently I'm facing two major problems:
I'm setting the TcpClient.SendTimeout property, however it looks like when the data is sent (by TcpClient.Client.Send), the sender does not wait for the receiver to read the data. Why?
Because the sender is not waiting, an acknowledge string and the next data string can be sent immediately after one another, so the receiver gets two packages in one read. Is there a way to read the buffer only up to the first CRLF (the acknowledge) and leave the next data string in the buffer for the next TcpClient.Client.Read call?
Thanks in advance,
Mark
TCP is a streaming protocol. There are no packets that you can program against. The receiver must be able to decode the data no matter in what chunks it arrives. Assume one byte chunks, for example.
Here, it seems the receiver can just read until it finds a CRLF. StreamReader can do that.
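A sketch of that approach: create one StreamReader per connection and reuse it across reads, so any bytes received past the first CRLF stay buffered inside the reader for the next ReadLine call (the class and method names are illustrative):

```csharp
using System.IO;
using System.Text;

static class AckReader
{
    // Create exactly one reader per connection and keep reusing it;
    // constructing a new reader would discard whatever the old one
    // had already buffered past the last CRLF.
    public static StreamReader Create(Stream stream) =>
        new StreamReader(stream, Encoding.ASCII);

    // ReadLine blocks until a line terminator arrives and strips it;
    // it returns null when the remote side closes the connection.
    public static string ReadAck(StreamReader reader) => reader.ReadLine();
}
```

If a chunk arrives containing "ACK\r\nDATA\r\n", the first call returns "ACK" and the second returns "DATA", which is exactly the behaviour the question asks for.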
the sender does not wait for the receiver to read the data
TCP is asynchronous. When your Send completes the receiver hasn't necessarily processed the data. This is impossible to ensure at the TCP stack level. The receiving app might have called Receive and gotten the data but it might not have processed it. The TCP stack can't know.
You must design your protocol so that this information is not needed.
I just read one byte till
That can work but it is very CPU intensive and inefficient.

Network stream reading before all data has been written

My Server will send a message using networkStream.Write(messageBytes);
My Client will receive the message using networkStream.Read(). The client reads byte by byte, searching for a marker sequence; when it has found the sequence, it reads the rest of the header.
The header contains a payload length. Once I have this I read the stream using the payload length to get the payload.
My problem is that when I write the data, my client tries to read straight away, and all the data may not have been written by that point.
So I end up with a message with the end effectively chopped off:
"blablablablabl\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"
Is there any way to wait for all the information to be received?
Thanks.
This doesn't really make sense. Socket read functions tell you how much data is available / has been read into the buffer, so if you're processing a bunch of NULs in the buffer because the rest of the data hasn't been received, that's on you for not checking how much data has been received. If you're expecting more data, keep receiving until you get as much as you're expecting (avoid overwriting the previously-received chunks by advancing the pointer into the receive buffer or copying to a new buffer, etc).
System network buffers are not infinite; if you don't read them from the Windows socket into your own memory, they will stop receiving.
In that case, you might want to create a memory storage for your data:
byte[] data = new byte[dataLength];
And keep pushing data into the array until it is all read up.
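"Keep pushing data into the array" means tracking an offset and passing it to Read so that later chunks don't overwrite earlier ones. A sketch (ReadAll is an illustrative name):

```csharp
using System.IO;

static class PayloadReader
{
    // Fill 'data' from the stream, advancing the write offset on each
    // read so previously received chunks are preserved.
    public static int ReadAll(Stream stream, byte[] data)
    {
        int offset = 0;
        while (offset < data.Length)
        {
            int read = stream.Read(data, offset, data.Length - offset);
            if (read == 0) break;  // remote side closed the connection
            offset += read;
        }
        return offset;  // number of bytes actually received
    }
}
```

Comparing the return value with dataLength tells you whether the connection closed before the full payload arrived, instead of silently leaving trailing zero bytes in the buffer.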

PHP to C# - Socket - File corrupts randomly

I have a TCP connection in which the client is PHP and the server is C#.
This socket connection transfers an image to the socket server, but
randomly, sometimes, the transfer gets corrupted [the image hash is different].
PHP Client
$file = file_get_contents('img.bmp');
socket_write($socket, $file.$cordinates); // sends the image + some other data
$recv = socket_read($socket, 500, PHP_BINARY_READ); // read the server response
This stream always transfers a Bitmap image plus some other data.
C#
this.DataSocket = this.Listner.Accept();
int filelength = this.DataSocket.Receive(this.buffer, this.buffer.Length, SocketFlags.None);
I noticed that in a fresh browser [newly opened] this never failed, but when I used the service several times in quick succession in the same browser, it tended to fail.
When I checked with a different browser or a new instance of the browser, it never failed in the first few attempts.
I thought it was a problem with caching, but I disabled caching using headers and the same problem exists.
You can't simply expect to write an entire file to the socket at once, nor can you expect to read the file from the socket in one operation. The socket read and write APIs for just about any network programming API from BSD sockets to WinSock to .NET network classes are all going to transmit or receive data up to the desired byte count.
If you look at the documentation for PHP socket_write for example:
Returns the number of bytes successfully written to the socket or FALSE on failure. The error code can be retrieved with socket_last_error(). This code may be passed to socket_strerror() to get a textual explanation of the error.
Note:
It is perfectly valid for socket_write() to return zero which means no bytes have been written. Be sure to use the == operator to check for FALSE in case of an error.
You will typically want to choose a block size like 4096 or 16384 and loop transmitting or receiving that block size until you get the desired number of bytes transmitted or received. Your code will have to check the return value of the send or receive function you're calling and adjust your file pointer accordingly. If transmit returns 0, that could just mean the send buffer is full (not fatal) so continue sending (might want a Sleep(0) delay). If receive returns 0, this usually means the other side has cleanly closed the connection.
One of the most critical flaws in your simple network code usage is that you're not sending the size of the file before you send the file data, so there's no way for the receiver to know how much to read before sending their response. For a simple operation like this, I'd suggest just sending a binary 32bit integer (4 bytes). This would be part of the schema for your operation. So the receiver would first read 4 bytes and from that know how many more bytes need to be read (one buffer size at a time). The receiver keeps reading until they have that many bytes.
I hope this helps. It would be great if socket code were as simple as the usage you attempted, but unfortunately it isn't. You have to select a buffer size, and then keep reading or writing buffers of that size until you get what you want, and you have to convey to the other side how much data you plan on sending.
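On the C# side, the scheme described above (a 32-bit length followed by the payload) might be sketched like this; the method names are mine, and the PHP sender is assumed to write the 4-byte length in network byte order first:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

static class FileReceiver
{
    // First read the 4-byte length header, then keep receiving until
    // exactly that many payload bytes have arrived.
    public static byte[] ReceiveFile(Socket socket)
    {
        byte[] header = ReceiveExactly(socket, 4);
        int length = IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
        return ReceiveExactly(socket, length);
    }

    static byte[] ReceiveExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0)  // connection closed before the full payload arrived
                throw new SocketException((int)SocketError.ConnectionReset);
            received += n;
        }
        return buffer;
    }
}
```

This removes the assumption that a single Receive call returns the whole file, which is the most likely cause of the intermittent hash mismatches.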
That you think caching has anything to do with the problem implies that either there is a lot of functionality outside of the code you've published which is affecting the result or that you are a very long way from understanding the problem.
Without knowing the structure of bmp files, my first concern would be how you separate the file from the additional info sent. A few things you could try...
If '$cordinates' (sic) is a fixed size, then put this at the front of the message, not the back
Log the size sent from PHP and the size received.
base64 encode the binary file before sending it (and decode at the receiving end)
None of the above solutions worked for me, but I found out that creating a new instance every time after a request solves the problem. I don't think that's a reliable way, though.
I tried the client using ASP.NET but got the same results. I think it's not a problem with the PHP client; it's surely a problem with the socket server.

Minimum guaranteed bytes sent in one go via NetworkStream

I am using the NetworkStream of a TcpClient to send bytes from a .NET server to a Silverlight Client that receives them via a Socket. Sending from the client is done via Socket.SendAsync.
My question is: what is the minimum number of bytes I can expect to receive "in one go" on both sides, without needing to send the message length too and piece multi-read messages together myself?
Thanks,
Andrej
You absolutely should send the message length. Network streams are precisely that - streams of information. Basically there are three ways of recognising the end of a message:
Prefix the data with the length (don't forget that it's at least theoretically possible that you won't even get all of the length data in one packet)
Use a message delimiter/terminator
Only send one message each way per connection, and close the connection afterwards
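For option 1, note the parenthetical above: even the four length bytes must themselves be read in a loop, since they can arrive split across packets. A sketch (ReadInt32Prefix is an illustrative name):

```csharp
using System;
using System.IO;
using System.Net;

static class Prefix
{
    // Even a 4-byte length prefix can arrive split across reads,
    // so accumulate until all four bytes are in hand.
    public static int ReadInt32Prefix(Stream stream)
    {
        byte[] header = new byte[4];
        int offset = 0;
        while (offset < 4)
        {
            int read = stream.Read(header, offset, 4 - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return IPAddress.NetworkToHostOrder(BitConverter.ToInt32(header, 0));
    }
}
```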
It depends on your network settings, but the default MTU for Ethernet links is 1500 bytes, and most modems take something off that, so most home connections end up with about 1460 bytes of payload per packet.
The optimum size for your situation can be calculated, but people can always have their own settings, so there's no guarantee that you get the optimal packet size for all clients. And none of this guarantees how many bytes one receive call returns; you still need framing.
