C# Async TCP sockets: Handling buffer size and huge transfers

When using a blocking TCP socket, I don't have to specify a buffer size. For example:
using (var client = new TcpClient())
{
    client.Connect(ServerIp, ServerPort);
    using (var reader = new BinaryReader(client.GetStream()))
    using (var writer = new BinaryWriter(client.GetStream()))
    {
        var byteCount = reader.ReadInt32();
        reader.ReadBytes(byteCount);
    }
}
Notice how the remote host could have sent any number of bytes.
However, when using async TCP sockets, I need to create a buffer and thus hardcode a maximum size:
var buffer = new byte[BufferSize];
socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, callback, null);
I could simply set the buffer size to, say, 1024 bytes. That'll work if I only need to receive small chunks of data. But what if I need to receive a 10 MB serialized object? I could set the buffer size to 10*1024*1024... but that would waste a constant 10 MB of RAM for as long as the application is running. This is silly.
So, my question is: How can I efficiently receive big chunks of data using async TCP sockets?

The two examples are not equivalent: your blocking code assumes the remote end sends a 32-bit length prefix for the data to follow. If the same protocol applies to the async version, just read that length (blocking or not), then allocate the buffer and initiate the asynchronous I/O.
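For instance, a minimal sketch of that idea, assuming the same 4-byte length prefix (Process and ReceiveExact are illustrative names, not library calls). Note that a single BeginReceive may return fewer bytes than requested, so the callback keeps reading until the buffer is full:
private static void ReadMessage(Socket socket)
{
    byte[] header = new byte[4];
    ReceiveExact(socket, header, 0, () =>
    {
        // Consider validating byteCount against a sane maximum (see Edit 0 below).
        int byteCount = BitConverter.ToInt32(header, 0);
        byte[] payload = new byte[byteCount];
        ReceiveExact(socket, payload, 0, () => Process(payload)); // Process is a placeholder
    });
}

// Keeps issuing BeginReceive until 'buffer' is completely filled,
// because a single receive may deliver fewer bytes than requested.
private static void ReceiveExact(Socket socket, byte[] buffer, int offset, Action done)
{
    socket.BeginReceive(buffer, offset, buffer.Length - offset, SocketFlags.None, ar =>
    {
        int read = socket.EndReceive(ar);
        if (read == 0)
            return; // connection closed
        if (offset + read < buffer.Length)
            ReceiveExact(socket, buffer, offset + read, done);
        else
            done();
    }, null);
}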
Edit 0:
Let me also add that allocating buffers of user-entered, and especially of network-supplied, size is a recipe for disaster. An obvious problem is a denial-of-service attack: a client requests a huge buffer and holds on to it, say by sending data very slowly, preventing other allocations and/or slowing the whole system.
Common wisdom here is accepting a fixed amount of data at a time and parsing as you go. That of course affects your application-level protocol design.
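A rough sketch of that fixed-buffer style, assuming some incremental Parse step of your own design and an arbitrary 4 KB buffer:
// Receive into one fixed-size buffer, allocated once, and hand whatever
// arrives to an incremental parser; no allocation ever depends on
// peer-supplied sizes.
private static readonly byte[] buffer = new byte[4096];

private static void StartReceiving(Socket socket)
{
    socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, socket);
}

private static void OnReceive(IAsyncResult ar)
{
    Socket socket = (Socket)ar.AsyncState;
    int read = socket.EndReceive(ar);
    if (read == 0)
        return;              // connection closed
    Parse(buffer, read);     // placeholder: parse as you go
    socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, socket);
}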

EDITED
The best approach I found for this problem, after a long analysis, is the following:
First, you need to set the buffer size in order to receive data from the server/client.
Second, you need to find the upload/download speed for that connection.
Third, you need to calculate how many seconds the connection timeout should last, based on the size of the package to be sent or received.
Set the buffer size
The buffer size can be set in two ways: arbitrarily or objectively. If the information to be received is text based, is not large, and does not require character-by-character processing, then an arbitrary pre-set buffer size is optimal. If the information to be received needs to be processed character by character, and/or is large, an objective buffer size is the optimal choice.
// In this example I used a Socket wrapped inside a NetworkStream for simplicity,
// stability, and asynchronous operability purposes.
// This can be done like this:
//
// For the server:
//
// Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// server.ReceiveBufferSize = 18000;
// IPEndPoint iPEndPoint = new IPEndPoint(IPAddress.Any, port);
// server.Bind(iPEndPoint);
// server.Listen(3000);
// Socket handler = server.Accept(); // wrap the accepted socket, not the listener
//
// NetworkStream ns = new NetworkStream(handler);
//
// For the client:
//
// Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// client.Connect("127.0.0.1", 80);
//
// NetworkStream ns = new NetworkStream(client);
// An objective buffer size is based on the size of the payload (for example,
// a file's size), so the buffer is neither bigger than the payload (which
// would leave trailing null bytes) nor smaller than it (which would
// truncate the data).
//
// This example uses a simple alternating request/response convention at the
// application level: whichever side just sent a message must read the reply
// before sending again, and vice versa. (TCP's own SYN, SYN-ACK, ACK
// handshake only establishes the connection; it does not impose this
// alternation.)
// [SENDER]
byte[] file = new byte[18032];
byte[] file_length = Encoding.UTF8.GetBytes(file.Length.ToString());
await Sender.WriteAsync(file_length, 0, file_length.Length);
byte[] receiver_response = new byte[1800];
await Sender.ReadAsync(receiver_response, 0, receiver_response.Length);
await Sender.WriteAsync(file, 0, file.Length);
// [SENDER]
// [RECEIVER]
byte[] file_length = new byte[1800];
await Receiver.ReadAsync(file_length, 0, file_length.Length);
byte[] encoded_response = Encoding.UTF8.GetBytes("OK");
await Receiver.WriteAsync(encoded_response, 0, encoded_response.Length);
byte[] file = new byte[Convert.ToInt32(Encoding.UTF8.GetString(file_length))];
await Receiver.ReadAsync(file, 0, file.Length);
// [RECEIVER]
The buffers that receive the payload length use an arbitrary buffer size. The length of the payload to be sent is converted to a string, and the string is converted to a UTF-8 encoded byte array. The received length is converted back to a string and then to an integer, which sets the length of the buffer that will receive the payload. Because the length information travels in a buffer larger than the information itself, the receiver must strip the trailing null padding (the TrimEnd('\0') call above) before parsing, so the extra bytes do not corrupt the parsed value.
Get the upload/download speed of the connection and calculate the Socket receive and send buffer size
First, make a class that is responsible for calculating the buffer size for each connection.
class Internet_Speed_Checker
{
    public async Task<bool> Optimum_Buffer_Size(System.Net.Sockets.NetworkStream stream, int payload_length)
    {
        System.Diagnostics.Stopwatch latency_counter = new System.Diagnostics.Stopwatch();
        byte[] test_payload = new byte[2048];

        // The application-level protocol here alternates reads and writes.
        // To test the connection, the client and server exchange a package
        // of the same size. If this side began the conversation by sending,
        // do the test as a write-read sequence; if it began by receiving,
        // do it as a read-write sequence (shown here).
        latency_counter.Start();
        await stream.ReadAsync(test_payload, 0, test_payload.Length);
        await stream.WriteAsync(test_payload, 0, test_payload.Length);
        latency_counter.Stop();

        int bytes_per_second = (int)(test_payload.Length * (1000 / latency_counter.Elapsed.TotalMilliseconds));
        // Seconds the transfer should take at the measured speed, plus one second of grace.
        int optimal_connection_timeout = (payload_length / bytes_per_second) * 1000 + 1000;
        // Bandwidth-delay product (see the links below).
        double optimal_buffer_size_double = ((bytes_per_second / 125000.0) * (latency_counter.Elapsed.TotalMilliseconds / 1000)) * 1048576;
        int optimal_buffer_size = (int)optimal_buffer_size_double + 1024;

        // If you want to upload data to the client/server:
        //     client.SendBufferSize = optimal_buffer_size;
        //     client.SendTimeout = optimal_connection_timeout;
        // If you want to download data from the client/server:
        //     client.ReceiveBufferSize = optimal_buffer_size;
        //     client.ReceiveTimeout = optimal_connection_timeout;
        return true;
    }
}
The method above makes sure the data transmitted between client and server uses an appropriate socket buffer size and connection timeout. When data is sent through a socket with an async Read/Write operation, the information is segmented into packets. The socket buffer size has a default value, but that default does not account for the fact that the upload/download speed of the connection varies. To keep throughput up and avoid spurious timeouts on slow links, the buffer size and timeout should be set in accordance with the speed of the connection; the example also shows how to calculate the timeout in relation to that speed. The buffer sizes for upload and download can be set via socket.SendBufferSize = ... and socket.ReceiveBufferSize = ... respectively.
For more information on the equations and principles used, see:
https://www.baeldung.com/cs/calculate-internet-speed-ping
https://docs.oracle.com/cd/E36784_01/html/E37476/gnkor.html#:~:text=You%20can%20calculate%20the%20correct,value%20of%20the%20connection%20latency.

Related

socket time out in socket programming

I have a hardware device that connects to a laptop (the server) over an ad hoc network.
When the server sends data alone, it works correctly, and when the client sends data alone, it works correctly too.
But when the server and client send data at the same time, a timeout occurs after a while,
usually after 33-35 packets.
I changed the transfer rate of the hardware, but it still disconnects,
even though the hardware supports full duplex.
After the timeout, I ping the hardware and it is no longer reachable on the port,
while the port on the server is still open.
What can I do?
byte[] bytes = new byte[512];
IPHostEntry ipHost = Dns.GetHostEntry("");
// Gets the first IP address associated with the local host
IPAddress add = ipHost.AddressList[3];
TcpListener tcpListener = new TcpListener(add, 6000);
tcpListener.Start();
TcpClient tcpClient = tcpListener.AcceptTcpClient();
NetworkStream stream = tcpClient.GetStream();
String data = null;
while (true)
{
    int j = 0;
    int i;
    while ((i = stream.Read(bytes, 0, bytes.Length)) != 0)
    {
        j = j + 1;
        // Translate data bytes to an ASCII string.
        data = System.Text.Encoding.ASCII.GetString(bytes, 0, i);
        AddItem("j=" + j + " Received: " + data);
        // Send back a response.
        byte[] msg = System.Text.Encoding.ASCII.GetBytes("thanks");
        stream.Write(msg, 0, msg.Length);
        AddItem("Sent: thanks");
    }
    // Shutdown and end connection
    tcpClient.Close();
    break;
}
The standard socket calls are all blocking, so if both participants are sending to each other, they each wait for their opposite to receive the message they send, causing a deadlock.
In .NET, there are three typical solutions:
Microsoft has a parallel API for asynchronous socket activity. It requires more overhead code than your example, but handles just about everything in a Windows-like manner.
You can handle the asynchronous activity yourself by testing for readable data with Socket.Select() before you write. This is a typical polling approach, but you're doing everything yourself and need to make sure there's no starvation or other bias.
Put your Read and Write code in different threads, so that blocking one doesn't block the entire program, as sketched below.
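A rough sketch of the third option, assuming stream is the NetworkStream from the code above (HandleReceived stands in for your own processing):
// A dedicated reader thread: a blocking Read here can never deadlock
// against a Write happening on another thread.
Thread readerThread = new Thread(() =>
{
    byte[] buffer = new byte[512];
    int n;
    while ((n = stream.Read(buffer, 0, buffer.Length)) > 0)
        HandleReceived(buffer, n);   // placeholder for your processing
});
readerThread.IsBackground = true;
readerThread.Start();

// Meanwhile this thread is free to write without waiting on the reader:
byte[] msg = Encoding.ASCII.GetBytes("thanks");
stream.Write(msg, 0, msg.Length);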

Asynchronous Client Socket, increasing the buffer size

I've implemented an Asynchronous Client Socket extremely similar to this example. Is there any reason why I cannot dramatically increase this buffer size? In this example the buffer size is 256 bytes. In many cases, my application ends up receiving data that is 5,000++ bytes of data. Should I increase the buffer size? Are there any reasons why I should NOT increase the buffer size?
Every once in a long while I'll get some issue where the data comes in out of order or a chunk is missing (yet to be confirmed exactly which it is). For example, one time I received some corrupt data that looks like this:
Slice Id="0" layotartX='100'
The attribute called layotartX does not exist in my data; it was supposed to say layout=... but the layout got cut off and other data was appended to it later. I counted the bytes and noticed that it was cut off at exactly 256 bytes, which just so happens to be my buffer size. It's very possible that increasing my buffer size would prevent this problem from happening (data coming in out of order??). Anyway, as stated in the first paragraph, I'm just asking if there is any reason I should NOT increase the buffer size to, say, 5,000 or even 10,000 bytes.
Adding some code. Below is my modified ReceiveCallback function (see the linked example code above for the rest of the classes). When ReceiveCallback receives data, it calls the ReceiveSomeData function, which I've also posted below. For some reason, every once in a while I get data out of order or pieces missing. The ReceiveSomeData function is in a class called MyChitterChatter, and the ReceiveCallback function is in a class called AsyncClient. So when you see the ReceiveSomeData function locking "this", it's locking the MyChitterChatter instance. Is this where my problem could be lying?
private static void ReceiveCallback(IAsyncResult ar)
{
    AppDelegate appDel = (AppDelegate)UIApplication.SharedApplication.Delegate;
    try
    {
        // Retrieve the state object and the client socket
        // from the asynchronous state object.
        StateObject state = (StateObject)ar.AsyncState;
        Socket client = state.workSocket;
        // Read data from the remote device.
        int bytesRead = client.EndReceive(ar);
        if (bytesRead > 0)
        {
            // There might be more data, so store the data received so far.
            string stuffWeReceived = Encoding.ASCII.GetString(state.buffer, 0, bytesRead);
            string debugString = "~~~~~ReceiveCallback~~~~~~ " + stuffWeReceived + " len = " + stuffWeReceived.Length + " bytesRead = " + bytesRead;
            Console.WriteLine(debugString);
            // Send this data to be received
            appDel.wallInteractionScreen.ChitterChatter.ReceiveSomeData(stuffWeReceived);
            // Get the rest of the data.
            client.BeginReceive(state.buffer, 0, StateObject.BufferSize, SocketFlags.None,
                new AsyncCallback(ReceiveCallback), state);
        }
        else
        {
            // Signal that all bytes have been received.
            receiveDone.Set();
        }
    }
    catch (Exception e)
    {
        Console.WriteLine("Error in AsyncClient ReceiveCallback: ");
        Console.WriteLine(e.Message);
        Console.WriteLine(e.StackTrace);
    }
}

public void ReceiveSomeData(string data)
{
    lock (this)
    {
        DataList_New.Add(data);
        // Update the keepalive when we receive ANY data at all
        IsConnected = true;
        LastDateTime_KeepAliveReceived = DateTime.Now;
    }
}
Yes, you absolutely should increase the buffer size to something much closer to what you expect to get in a single read. 32k or 64k would be a fine choice for most uses.
Having said that, data never arrives "out of order" or "missing a chunk" if you're using a TCP/IP socket; if you see something like that, it's a bug in your code, not a bug in the socket. Share your code if you want help.
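To illustrate the kind of bug to look for: a message cut at exactly the 256-byte buffer size, with later bytes appended elsewhere, usually means each read is being treated as one complete message. A sketch of the usual fix, assuming (hypothetically) that your messages end with a '\n' terminator (ProcessCompleteMessage is a placeholder):
// Accumulate whatever each read delivers; hand off only complete,
// terminator-delimited messages and keep the remainder buffered.
private static readonly StringBuilder backBuffer = new StringBuilder();

private static void OnChunk(string chunk)   // called with each read's text
{
    backBuffer.Append(chunk);
    string all = backBuffer.ToString();
    int end;
    while ((end = all.IndexOf('\n')) >= 0)  // '\n' is an assumed terminator
    {
        ProcessCompleteMessage(all.Substring(0, end));  // placeholder
        all = all.Substring(end + 1);
    }
    backBuffer.Clear();
    backBuffer.Append(all);
}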

How to get the Socket Buffer size in C#

I have socket code communicating over TCP/IP. The machine I am communicating with has data in its buffer. At present I am trying to get that buffered data using this code.
byte[] data = new byte[1024];
int recv = sock.Receive(data);
string stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas more data is present in the machine's buffer. Is this because I used int recv = sock.Receive(data); and data is 1024 bytes?
If yes, how do I get the total buffer size and retrieve it into a string?
If you think you are missing some data, then you need to check recv and almost certainly: loop. Fortunately, ASCII is always single byte - in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    // process recv-many bytes
    // ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
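(If the data were in a multi-byte encoding such as UTF-8 rather than ASCII, a System.Text.Decoder would handle the partial-character problem; a minimal sketch, assuming the same sock and a 1024-byte buffer:)
// A Decoder carries the bytes of an incomplete character over to the
// next call, so a character split across two reads decodes correctly.
byte[] data = new byte[1024];
char[] chars = new char[Encoding.UTF8.GetMaxCharCount(data.Length)];
Decoder decoder = Encoding.UTF8.GetDecoder();
int recv;
while ((recv = sock.Receive(data)) > 0)
{
    int charCount = decoder.GetChars(data, 0, recv, chars, 0);
    string stringData = new string(chars, 0, charCount);
    // process stringData (it may still be a partial logical message)
}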
Keep in mind that there is no guarantee that stringData will be any particular entire unit of work; what you send is not always what you receive, and that could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process.
Note, however, Receive always tries to return something (at least one byte), unless the inbound stream has closed - and will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do synchronous versus asynchronous receive (i.e. read synchronously while data is available, otherwise request an asynchronous read).
Try something along these lines:
StringBuilder sbContent = new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data)) > 0)
{
    sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
For the actual data you can loop on a condition like this:
byte[] data = new byte[tcpSocket.ReceiveBufferSize];
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
    // process receiveBytes-many bytes
}

c# raw socket ip header total length

I am using udp raw sockets.
I wish to read only the first, for example, 64 bytes of every packet.
ipaddr = IPAddress.Parse( "10.1.2.3" );
sock = new Socket(AddressFamily.InterNetwork, SocketType.Raw, ProtocolType.IP);
sock.Bind(new IPEndPoint(ipaddr, 0));
sock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.HeaderIncluded, true);
sock.IOControl(IOControlCode.ReceiveAll, BitConverter.GetBytes(RCVALL_IPLEVEL), null);
sock.ReceiveBufferSize = 32768;
byte[] buffer = new byte[64]; // max IP header, plus tcp/udp ports
while (!bTheEnd)
{
    int ret = sock.Receive(buffer, buffer.Length, SocketFlags.None);
    ...
}
I receive the packets, but all with the IP header's "total length" <= 64.
If I use a bigger buffer (byte[] buffer = new byte[32768]), I get the right "total length" (now its value is <= 32768).
The goal is to get all the packets, only the IP header, with their correct packet length;
my routine must not cause packet fragmentation in the TCP/IP stack.
SocketFlags.Peek means the data returned is left intact for a subsequent read - that's why you get the same data when you read again. To read subsequent packets, don't use Peek; just perform a regular read with no special flags.
According to documentation:
If the datagram you receive is larger than the size of the buffer
parameter, buffer gets filled with the first part of the message, the
excess data is lost and a SocketException is thrown.
Is that the behavior you're after?
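If not, a common workaround is to receive each datagram into a buffer big enough for any IPv4 packet and take the real length from the header itself, rather than from the count returned by Receive. A sketch along those lines, reusing sock and bTheEnd from the question:
// Buffer sized to the IPv4 maximum (65535 bytes), so no datagram is
// ever truncated; then read the real length out of the header.
byte[] buffer = new byte[65535];
while (!bTheEnd)
{
    int ret = sock.Receive(buffer, buffer.Length, SocketFlags.None);
    int headerLength = (buffer[0] & 0x0F) * 4;       // IHL is in 32-bit words
    int totalLength = (buffer[2] << 8) | buffer[3];  // big-endian 16-bit field at offset 2
    // Inspect only the header plus ports, e.g. the first 64 bytes of 'buffer'.
}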

How to read all requested data using NetworkStream.BeginRead?

Here is the code of an async server. The client sends a header (the size of the data block to follow) and then the data block.
The server asynchronously reads first the header and then the data block.
After reading the data block, I need to run BeginRead for the header part again, to keep the reads asynchronous.
PROBLEM:
When DataCallBack fires, in the line
int bytesRead = ns.EndRead(result);
I do not get the whole buffer I asked to read in
mc.Client.GetStream().BeginRead(mc.DataBuffer, 0, size, new AsyncCallback(DataCallBack), mc);
If the client sends 1 MB of data, I can get different values of bytesRead.
QUESTION:
How do I force BeginRead to read all the requested data from the connection? It should then start the next header/data loop.
MyClient is simply a wrapper over TcpClient.
CODE:
public void DoAcceptTcpClientCallback(IAsyncResult ar)
{
    TcpListener listener = (TcpListener)ar.AsyncState;
    TcpClient client = listener.EndAcceptTcpClient(ar);
    client.NoDelay = false;
    // client.ReceiveBufferSize = 1024*1024;
    listener.BeginAcceptTcpClient(new AsyncCallback(DoAcceptTcpClientCallback), listener);
    MyClient mc = new MyClient(client);
    ContinueRead(0, mc);
}

public void ContinueRead(int size, MyClient mc)
{
    if (size != 0)
    {
        mc.DataBuffer = new byte[size];
        mc.Client.GetStream().BeginRead(mc.DataBuffer, 0, size, new AsyncCallback(DataCallBack), mc);
        return; // wait for the data before asking for the next header
    }
    mc.Client.GetStream().BeginRead(mc.HeaderBuffer, 0, 4, new AsyncCallback(HeaderCallBack), mc);
}

private void HeaderCallBack(IAsyncResult result)
{
    MyClient mc = (MyClient)result.AsyncState;
    NetworkStream ns = mc.Stream;
    int bytesRead = ns.EndRead(result);
    if (bytesRead == 0)
        throw new Exception();
    mc.TotalLengs = BitConverter.ToInt32(mc.HeaderBuffer, 0);
    ContinueRead(mc.TotalLengs, mc);
}

private void DataCallBack(IAsyncResult result)
{
    MyClient mc = (MyClient)result.AsyncState;
    NetworkStream ns = mc.Stream;
    int bytesRead = ns.EndRead(result);
    if (bytesRead == 0)
        throw new Exception();

    // BAD CODE - MAKES THE ASYNC READING SYNC
    while (bytesRead < mc.TotalLengs)
    {
        bytesRead += ns.Read(mc.DataBuffer, bytesRead, mc.TotalLengs - bytesRead);
    }
    // END BAD CODE

    ContinueRead(0, mc);
    ProcessPacket(mc.DataBuffer, mc.IP);
}
"If client send 1MB of Data I can get different number of "bytesRead"."
Yes...this is simply how TCP works under the hood. You can't change this. TCP guarantees the order of packets, not how they are grouped. The hardware and traffic conditions along the route the packets travel determine how that data is grouped (or un-grouped).
"How to force "BeginRead" to read all data from connection."
TCP has no idea how much data is being sent. As far as it is concerned, the connection is simply an endless stream of bytes; therefore it cannot read "all data" since there is no end to the data (from its perspective). TCP also has no notion of what a "complete message" is with respect to your application. It is up to you, the programmer, to develop a protocol that allows your application to know when all data has been sent.
If you are expecting a certain number of bytes, then keep a running sum of the values returned by EndRead() and stop when that magic number is hit.
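For instance, DataCallBack could be reworked along these lines (DataBytesReceived is an assumed running-total field added to MyClient; everything else follows the question's code):
// Re-issue BeginRead at the current offset until the whole buffer is
// filled; no blocking Read needed, the callback just continues itself.
private void DataCallBack(IAsyncResult result)
{
    MyClient mc = (MyClient)result.AsyncState;
    NetworkStream ns = mc.Stream;
    int bytesRead = ns.EndRead(result);
    if (bytesRead == 0)
        throw new Exception();

    mc.DataBytesReceived += bytesRead;   // assumed running-total field on MyClient
    if (mc.DataBytesReceived < mc.TotalLengs)
    {
        // Ask for the remainder, starting where the last read stopped.
        ns.BeginRead(mc.DataBuffer, mc.DataBytesReceived,
                     mc.TotalLengs - mc.DataBytesReceived,
                     new AsyncCallback(DataCallBack), mc);
        return;
    }

    mc.DataBytesReceived = 0;            // reset for the next message
    ContinueRead(0, mc);
    ProcessPacket(mc.DataBuffer, mc.IP);
}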
