Socket.Send has a delay? - C#

I am building an application that sends data back to the other side.
In my application, I call the
System.Net.Sockets.Socket.Send(byte[]) method.
My customer told me there is a 530 ms delay before this packet is received. However, I have logged everything up to the call to System.Net.Sockets.Socket.Send(byte[])
and measured that it takes only about 15 ms to reach the Send call. My customer advised me to:
Flush after sending, but I don't see a flush method on Socket?
Fill up the data buffer before sending; otherwise, if the data is short, I have to force the transmission.
Is either piece of advice correct? I also see that Send takes another parameter, SocketFlags. Would using SocketFlags help here?

This is a common problem with the Nagle algorithm, which tries to 'glue' small writes together into a single packet. It is possible that your code suffers from it as well.
Try disabling it by setting the SocketOptionName.NoDelay option with the SetSocketOption method.
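A minimal sketch of doing that, assuming socket is an already-connected System.Net.Sockets.Socket:

// Disable the Nagle algorithm so small sends are not held back
// waiting to be coalesced with later data.
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);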

C# Socket.Send(): does it send all data or not?

I was reading about sockets from a book called "C# Network Programming" by Richard Blum. The following excerpt states that the Send() method is not guaranteed to send all the data passed to it.
byte[] data = new byte[1024];
int sent = socket.Send(data);
On the basis of this code, you might be tempted to presume that the
entire 1024-byte data buffer was sent to the remote device... but this
might be a bad assumption. Depending on the size of the internal TCP
buffer and how much data is being transferred, it is possible that not
all the data supplied to the Send() method was actually sent.
However, when I went and looked at the Microsoft documentation https://msdn.microsoft.com/en-us/library/w93yy28a(v=vs.110).aspx it says:
If you are using a connection-oriented protocol, Send will block until
all of the bytes in the buffer are sent, unless a time-out was set
So which is it? The book was published in 2004, so has it changed since then?
I'm planning to use asynchronous sockets, so my next question is, would BeginSend() send all data?
All you had to do was read the rest of the exact same paragraph you quoted. There's even an exception to your quote given in the very same sentence.
If you are using a connection-oriented protocol, Send will block until all of the bytes in the buffer are sent, unless a time-out was set by using Socket.SendTimeout. If the time-out value was exceeded, the Send call will throw a SocketException. In nonblocking mode, Send may complete successfully even if it sends less than the number of bytes in the buffer. It is your application's responsibility to keep track of the number of bytes sent and to retry the operation until the application sends the bytes in the buffer.
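In nonblocking mode, a minimal sketch of the retry loop the documentation describes (socket and data are assumed to be an existing Socket and byte array):

int totalSent = 0;
while (totalSent < data.Length)
{
    // Send may transmit fewer bytes than requested; advance by the actual count.
    totalSent += socket.Send(data, totalSent, data.Length - totalSent, SocketFlags.None);
}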
For BeginSend, the behavior is also described:
Your callback method should invoke the EndSend method. When your application calls BeginSend, the system will use a separate thread to execute the specified callback method, and will block on EndSend until the Socket sends the number of bytes requested or throws an exception.
That's not a very nice design and defeats the whole point of a callback! Consider using SendAsync instead (and then you still need to check the BytesTransferred property).
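A sketch of that pattern (the socket and data variables are assumptions, not from the question):

var args = new SocketAsyncEventArgs();
args.SetBuffer(data, 0, data.Length);
args.Completed += (sender, e) =>
{
    // BytesTransferred may be less than requested; a real implementation
    // would adjust the buffer and call SendAsync again for the remainder.
    Console.WriteLine($"Sent {e.BytesTransferred} bytes");
};
if (!socket.SendAsync(args))
{
    // false means the operation completed synchronously and
    // the Completed event will not be raised.
    Console.WriteLine($"Sent {args.BytesTransferred} bytes synchronously");
}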
Both of the resources you quoted are correct. I think the wording could have been better though.
In the MSDN docs it is also written that
There is also no guarantee that the data you send will appear on the
network immediately. To increase network efficiency, the underlying
system may delay transmission until a significant amount of outgoing
data is collected.
So the Send method blocks until the underlying system has room to buffer your data for a network send.
A successful completion of the Send method means that the underlying
system has had room to buffer your data for a network send.

TcpClient wait for CRLF

I'm writing a class library to communicate with a PLC over TCP. The communication is based on sending a data string terminated by a CRLF and then waiting for an acknowledge string (also terminated by a CRLF) to confirm the data was received (yes, I know acknowledgement is also built into the TCP/IP protocol, but that is another discussion).
Currently I'm facing two major problems:
I'm setting the TcpClient.SendTimeout property; however, it looks like when the data is sent (by TcpClient.Client.Send), the sender does not wait for the receiver to read the data. Why?
Because the sender is not waiting, an acknowledge string and then immediately the next data string can be sent. So the receiver gets two messages at once. Is there a way to read the buffer only up to the first CRLF (the acknowledge) and leave the next data string in the buffer for the next TcpClient.Client.Read call?
Thanks in advance,
Mark
TCP is a streaming protocol. There are no packets that you can program against. The receiver must be able to decode the data no matter in what chunks it arrives. Assume one byte chunks, for example.
Here, it seems the receiver can just read until it finds a CRLF. StreamReader can do that.
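A minimal sketch, assuming client is a connected TcpClient:

// StreamReader.ReadLine consumes bytes up to the next line terminator and
// keeps the rest buffered for the following call. Note that it reads ahead,
// so do all reads on this connection through the same reader.
var reader = new StreamReader(client.GetStream(), Encoding.ASCII);
string ack = reader.ReadLine();  // the acknowledge string
string data = reader.ReadLine(); // the next data string, read separately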
the sender does not wait for the receiver the data to be read
TCP is asynchronous. When your Send completes the receiver hasn't necessarily processed the data. This is impossible to ensure at the TCP stack level. The receiving app might have called Receive and gotten the data but it might not have processed it. The TCP stack can't know.
You must design your protocol so that this information is not needed.
I just read one byte at a time till I find the CRLF
That can work but it is very CPU intensive and inefficient.

Forcing a TCP stream to send buffer contents

I am using C# TCP sockets to send data between a client and a server.
The problem, as I perceive it, is that TCP is a stream protocol and will not push (send) data unless there is a sufficient amount of it.
For instance, say I want to send some data; the content doesn't matter, let's just say it's 8 bytes long. The behaviour I am seeing is that no matter how long I wait, it won't send that data unless I push more behind it, presumably until I fill the TCP buffer.
So my question is: if I want to send a small amount of data via TCP, do I need to append garbage to the end to force the socket to send (I wouldn't feel good about this), or is there an alternative way to force the front segment of the stream to send?
Thanks in advance. I am still learning TCP, so excuse the ignorance.
Don't set NoDelay unless you are an expert at TCP/IP and understand its full ramifications. If you haven't read Stevens, don't even think about it.
Here's an example question: if you establish a socket connection and send 8 bytes on it, are the 8 bytes immediately sent or does the Nagle algorithm wait for more data to send? The answer is "the 8 bytes are sent immediately" - but don't consider messing with Nagle until you understand exactly why that is the answer.
Here's another question: in a standard command/response protocol, how much Nagle delay is applied to each packet? The answer: none. Again, you should research why Nagle causes no delays in this common scenario.
If you're not seeing the data sent by 250 milliseconds (the maximum delay caused by Nagle in the worst possible scenario), then there is something else wrong.
You could set the NoDelay property (I think it's what disables Nagle).
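For reference, a one-line sketch on a TcpClient:

var client = new TcpClient();
client.NoDelay = true; // disables the Nagle algorithm for this connection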

PHP to C# - Socket - File corrupts randomly

I have a TCP connection whose client is PHP and whose server is C#.
The connection transfers an image to the socket server, but
randomly the transfer sometimes gets corrupted [the image hash is different].
PHP Client
$file = file_get_contents('img.bmp');
socket_write($socket, $file.$cordinates); // sends the image + some other data
$recv = socket_read($socket, 500, PHP_BINARY_READ); // read the server response
This stream always transfers a bitmap image + some other data.
C#
this.DataSocket = this.Listner.Accept();
int filelength = this.DataSocket.Receive(this.buffer, this.buffer.Length, SocketFlags.None);
I observed that with a fresh browser [newly opened] this never failed, but when I used the service several times in quick succession from the same browser, it tended to fail.
When I checked with a different browser or a new browser instance, it never failed in the first few attempts.
I thought it was a caching problem, but I disabled caching using headers and the same problem exists.
You can't simply expect to write an entire file to the socket at once, nor can you expect to read the file from the socket in one operation. The socket read and write APIs for just about any network programming API from BSD sockets to WinSock to .NET network classes are all going to transmit or receive data up to the desired byte count.
If you look at the documentation for PHP socket_write for example:
Returns the number of bytes successfully written to the socket or FALSE on failure. The error code can be retrieved with socket_last_error(). This code may be passed to socket_strerror() to get a textual explanation of the error.
Note:
It is perfectly valid for socket_write() to return zero which means no bytes have been written. Be sure to use the == operator to check for FALSE in case of an error.
You will typically want to choose a block size like 4096 or 16384 and loop transmitting or receiving that block size until you get the desired number of bytes transmitted or received. Your code will have to check the return value of the send or receive function you're calling and adjust your file pointer accordingly. If transmit returns 0, that could just mean the send buffer is full (not fatal) so continue sending (might want a Sleep(0) delay). If receive returns 0, this usually means the other side has cleanly closed the connection.
One of the most critical flaws in your simple network code usage is that you're not sending the size of the file before you send the file data, so there's no way for the receiver to know how much to read before sending their response. For a simple operation like this, I'd suggest just sending a binary 32bit integer (4 bytes). This would be part of the schema for your operation. So the receiver would first read 4 bytes and from that know how many more bytes need to be read (one buffer size at a time). The receiver keeps reading until they have that many bytes.
I hope this helps. It would be great if socket code were as simple as the usage you attempted, but unfortunately it isn't. You have to select a buffer size, and then keep reading or writing buffers of that size until you get what you want, and you have to convey to the other side how much data you plan on sending.
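A minimal sketch of such a receive loop on the C# side (the names dataSocket and ReceiveExactly are illustrative, not from the question):

// Read exactly count bytes, looping because Receive may return fewer.
static byte[] ReceiveExactly(Socket socket, int count)
{
    var buffer = new byte[count];
    int received = 0;
    while (received < count)
    {
        int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
        if (n == 0)
            throw new InvalidOperationException("Connection closed before all data arrived.");
        received += n;
    }
    return buffer;
}

// First read the 4-byte length prefix, then the payload itself.
// BitConverter is little-endian on common platforms; the PHP sender
// must pack the length the same way (e.g. pack('V', $len)).
int fileLength = BitConverter.ToInt32(ReceiveExactly(dataSocket, 4), 0);
byte[] fileData = ReceiveExactly(dataSocket, fileLength);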
That you think caching has anything to do with the problem implies that either there is a lot of functionality outside of the code you've published which is affecting the result or that you are a very long way from understanding the problem.
Without knowing the structure of bmp files, my first concern would be how you separate the file from the additional info sent. A few things you could try...
If '$cordinates' (sic) is a fixed size, then put this at the front of the message, not the back
Log the size sent from PHP and the size received.
base64 encode the binary file before sending it (and decode at the receiving end)
None of the above solutions worked for me, but I found out that creating a new instance after every request solves the problem. However, I don't think that's a reliable approach.
I tried the client using ASP.NET with the same results, so I think it's not a problem with the PHP client; it's surely a problem with the socket server.

How to send large data using C# UdpClient?

I'm trying to send a large amount of data (more than 50 MB) using C# UdpClient.
So at first I split the data into 65507 byte blocks and send them in a loop.
for (int i = 0; i < packetCount; i++)
    myUdpClient.Send(blocks[i], blocks[i].Length, remoteEndPoint);
My problem is that only the first packets can be received.
While the first packets are being sent, the network load increases rapidly to 100%, and then the remaining packets cannot be received.
I want to get as much data throughput as possible.
I'm sorry for my English!
Thanks for your help in advance.
All those people saying to use TCP are wrong here. TCP is reliable and the window is maintained by the kernel, so it's a fairly "set and forget" protocol; but when it comes to someone wanting to use 100% of their throughput, TCP will not do (it throttles too hard, and waiting for ACKs wastes a large share of the bandwidth because of the RTT).
To the original question: you are sending UDP packets nonstop in that for-loop, so the send buffer fills up and any new data is dropped immediately without even attempting to go onto the wire. You are also splitting your data into segments that are too large.

I would recommend building your own throttle mechanism that starts off at around 2k segments per second and slowly ramps up. Each segment contains a SEQ (sequence identifier, used for acknowledgements) and an OFF (the offset inside the file for this data set). As the data is tagged, the sender keeps track of these tags. When the other side receives a segment, it stores the SEQ number in an ACK list; any missing SEQ numbers are placed on a NACK timer list, and when the timer runs out (if they still haven't been received) they move to a NACK list. The receiver should send around 5 ACKs from the ACK list along with up to 5 NACKs in a single transmission every couple of seconds. If the sender receives any NACKs, it should immediately throttle down and resend the missing fragments before continuing. Data that has been ACKed can be freed from memory.
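A heavily hedged sketch of the tagging idea; the 8-byte SEQ/OFF header, 2 KB chunk size, and crude Thread.Sleep pacing are all assumptions, not part of this answer:

const int ChunkSize = 2048;
for (int seq = 0, off = 0; off < data.Length; seq++, off += ChunkSize)
{
    int len = Math.Min(ChunkSize, data.Length - off);
    var segment = new byte[8 + len];
    BitConverter.GetBytes(seq).CopyTo(segment, 0); // SEQ: sequence identifier
    BitConverter.GetBytes(off).CopyTo(segment, 4); // OFF: offset within the file
    Array.Copy(data, off, segment, 8, len);
    myUdpClient.Send(segment, segment.Length, remoteEndPoint);
    Thread.Sleep(1); // crude throttle; a real one would adapt to ACK/NACK feedback
}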
Good luck!
I don't know about the .NET implementation specifically (it might be buffering your data), but a UDP datagram is normally limited by the link MTU, which is 1500 on normal Ethernet (subtract 20 bytes for the IP header and 8 bytes for the UDP header, leaving 1472 bytes of payload).
UDP is explicitly allowed to drop and reorder the datagrams, and there's no flow control as in TCP.
Exceeding the socket send buffer on the sender side will make the network stack ignore subsequent send attempts until buffer space is available again (you need to check the return value of send() to detect this).
Edit:
I would strongly recommend going with TCP for large file transfers. TCP gives you sequencing (you don't have to keep track of dropped and re-ordered packets.) It has advanced flow control (so fast sender does not overwhelm a slow receiver.) It also does Path MTU discovery (i.e. finds out optimal data packetization and avoids IP fragmentation.) Otherwise you would have to re-implement most of these features yourself.
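A minimal sketch of the TCP alternative (the host, port, and file name are placeholders):

// TCP provides sequencing, retransmission, and flow control, and
// CopyTo streams the file without loading all of it into memory.
using (var client = new TcpClient("192.0.2.10", 9000))
using (var stream = client.GetStream())
using (var file = File.OpenRead("big.dat"))
{
    file.CopyTo(stream);
}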
I hate to say it but you need to sleep the thread. You are overloading your throughput. UDP is not very good for lossless data transfer. UDP is for when you don't mind dropping some packets.
Reliably - no, you won't do it with UDP.
As far as I understand, this makes sense for sending to multiple computers at a time (broadcasting).
In this case,
establish a TCP connection with each of them,
split the data into blocks,
give each block an ID,
send list of IDs to each computer with TCP connection,
broadcast data with UDP,
inform clients (via TCP) that data transmission is over,
then clients should ask for the dropped packets to be resent
