IBuffer in UWP for TCP messages - c#

I need to transfer a string over a TCP connection. For this I serialize my object (a list with over 10,000 lines) into a single string, without indentation. But the large string won't transfer completely (as I understand it, due to the buffer size). So MSDN, on this page (https://learn.microsoft.com/ru-ru/windows/uwp/networking/sockets), tells me to use IBuffer to transfer my string in pieces. Here is the code:
// More efficient way to send packets.
// This way enables the system to do batched sends
IList<IBuffer> packetsToSend = PreparePackets();
var outputStream = stream.OutputStream;

int i = 0;
Task[] pendingTasks = new Task[packetsToSend.Count];
foreach (IBuffer packet in packetsToSend)
{
    pendingTasks[i++] = outputStream.WriteAsync(packet).AsTask();
}

// Now, wait for all of the pending writes to complete
await Task.WhenAll(pendingTasks);
What is the method PreparePackets()? How do I prepare packets from my string?
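(For reference: MSDN does not show the body of PreparePackets(). A minimal sketch of what it might look like, assuming the serialized string is UTF-8 encoded and split into fixed-size chunks; the 8 KB chunk size and the signature are assumptions, not part of the MSDN sample:)

// Sketch only. Needs System.Text, System.Collections.Generic, Windows.Storage.Streams,
// and System.Runtime.InteropServices.WindowsRuntime (for the AsBuffer() extension).
static IList<IBuffer> PreparePackets(string payload, int chunkSize = 8192)
{
    byte[] bytes = Encoding.UTF8.GetBytes(payload);
    var packets = new List<IBuffer>();

    for (int offset = 0; offset < bytes.Length; offset += chunkSize)
    {
        int count = Math.Min(chunkSize, bytes.Length - offset);
        // AsBuffer wraps a slice of the byte[] as an IBuffer without copying.
        packets.Add(bytes.AsBuffer(offset, count));
    }
    return packets;
}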
Edit: I've found a solution using DataReader and DataWriter, as described in Albahari (end of chapter 16).

I've found a solution using DataReader and DataWriter, as described in Albahari (end of chapter 16).
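For reference, the DataWriter/DataReader approach typically boils down to a length-prefixed exchange along these lines (a sketch reconstructing the idea, not Albahari's exact code; socket here stands for the connected StreamSocket):

// Sender: write a 4-byte length prefix, then the string itself.
var writer = new DataWriter(socket.OutputStream);
uint length = writer.MeasureString(payload);   // byte count of the encoded string
writer.WriteUInt32(length);
writer.WriteString(payload);
await writer.StoreAsync();

// Receiver: load the prefix first, then exactly that many bytes.
var reader = new DataReader(socket.InputStream);
await reader.LoadAsync(sizeof(uint));
uint incoming = reader.ReadUInt32();
await reader.LoadAsync(incoming);
string received = reader.ReadString(incoming);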

Related

StreamSocket on WinRT doesn't receive all data

I have a server running MPD (music player daemon), which communicates via sockets. Now I'm trying to implement the MPD protocol in a Windows Store app. Basically, I send a command and receive a list whose last line is "OK". As long as the received list is smaller than the receive buffer, everything is fine. But if I need to load data which is bigger than the buffer, the weird stuff starts.
When calling SendCommand the first time, I receive only part of the data; the rest is received when calling SendCommand a second time. When called once more, I receive all the data as expected. When doing this in a WPF program on the same machine, everything works fine.
This is my code:
public async Task<string> SendCommand(MpdProtocol.MpdCommand command)
{
    DataWriter writer = new DataWriter(streamSocket.OutputStream);
    string res = string.Empty;

    writer.WriteString(command.ToString());
    await writer.StoreAsync();
    res = await ReadResponse();

    writer.DetachBuffer();
    writer.DetachStream();
    return res;
}

private async Task<string> ReadResponse()
{
    DataReader reader = new DataReader(streamSocket.InputStream);
    reader.InputStreamOptions = InputStreamOptions.Partial;
    StringBuilder response = new StringBuilder();

    const uint MAX_BUFFER = 8 * 1024;
    uint returnBuffer = 0;
    do
    {
        returnBuffer = await reader.LoadAsync(MAX_BUFFER);
        response.Append(reader.ReadString(reader.UnconsumedBufferLength));
    } while (returnBuffer >= MAX_BUFFER);

    reader.DetachBuffer();
    reader.DetachStream();
    return response.ToString();
}
I've played around with the ReadResponse method but nothing worked.
Can someone point me to the right direction?
Finally, I've found a solution to get the communication working: read the stream byte by byte with reader.ReadByte() and check each received line (they are separated by "\n") for being "OK".
The problem is reader.ReadString. This method is good when you know how long the string you're trying to receive is. In my case I have no idea about the size of the string; all I know is that the last line will be "OK".
My error was to believe - as Peter mentioned - that as long as there is data to retrieve, the receive buffer will be filled completely, and only the last call to reader.ReadString will be smaller than the max buffer size. I also tried to rewrite the ReadResponse function in different ways, but nothing worked with reader.ReadString.
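For reference, a minimal sketch of that byte-by-byte approach (a reconstruction, not the poster's exact code), assuming the streamSocket field from the question: read one byte at a time, track the current line, and stop when a line is exactly "OK" (MPD error responses ending in an "ACK ..." line are not handled here):

private async Task<string> ReadResponse()
{
    var reader = new DataReader(streamSocket.InputStream);
    reader.InputStreamOptions = InputStreamOptions.Partial;

    var bytes = new List<byte>();   // the whole response
    var line = new List<byte>();    // the current line

    while (true)
    {
        if (reader.UnconsumedBufferLength == 0)
        {
            // With Partial, LoadAsync returns as soon as any bytes are available.
            if (await reader.LoadAsync(1024) == 0) break;   // stream closed
        }

        byte b = reader.ReadByte();
        bytes.Add(b);

        if (b == (byte)'\n')
        {
            // The response is complete once a line consists of exactly "OK".
            if (Encoding.UTF8.GetString(line.ToArray()) == "OK") break;
            line.Clear();
        }
        else
        {
            line.Add(b);
        }
    }

    reader.DetachStream();
    return Encoding.UTF8.GetString(bytes.ToArray());
}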

Read unknown length by DataReader

I've been working with windows app store programming in c# recently, and I've come across a problem with sockets.
I need to be able to read data with an unknown length from a DataReader().
It sounds simple enough, but I've not been able to find a solution after a few days of searching.
Here's my current receiving code (A little sloppy, need to clean it up after I find a solution to this problem. And yes, a bit of this is from the Microsoft example)
DataReader reader = new DataReader(args.Socket.InputStream);
try
{
    while (true)
    {
        // Read first 4 bytes (length of the subsequent string).
        uint sizeFieldCount = await reader.LoadAsync(sizeof(uint));
        if (sizeFieldCount != sizeof(uint))
        {
            // The underlying socket was closed before we were able to read the whole data.
            return;
        }

        // Read the string.
        uint stringLength = reader.ReadUInt32();
        uint actualStringLength = await reader.LoadAsync(stringLength);
        if (stringLength != actualStringLength)
        {
            // The underlying socket was closed before we were able to read the whole data.
            return;
        }

        // Display the string on the screen. The event is invoked on a non-UI thread,
        // so we need to marshal the text back to the UI thread.
        //MessageBox.Show("Received data: " + reader.ReadString(actualStringLength));
        MessageBox.updateList(reader.ReadString(actualStringLength));
    }
}
catch (Exception exception)
{
    // If this is an unknown status it means that the error is fatal and retry will likely fail.
    if (SocketError.GetStatus(exception.HResult) == SocketErrorStatus.Unknown)
    {
        throw;
    }
    MessageBox.Show("Read stream failed with error: " + exception.Message);
}
You are going down the right lines - read the first INT to find out how many bytes are to be sent.
Franky Boyle is correct - without a signalling mechanism it is impossible to ever know the length of a stream. That's why it is called a stream!
No socket implementation (including WinSock) will ever be clever enough to know when a client has finished sending data. The client could be having a cup of tea halfway through sending the data!
Your server and its sockets will never know! What are they going to do? Wait forever? I suppose they could wait until the client had 'closed' the connection? But your client could have had a blue screen, and the server would never get that TCP close packet; it would just sit there thinking it might get more data one day.
I have never used a DataReader - I have never even heard of that class! Use NetworkStream instead.
From memory, I have written code like this in the past. I am just typing, with no checking of syntax.
using (MemoryStream receivedData = new MemoryStream())
{
    using (NetworkStream networkStream = new NetworkStream(connectedSocket))
    {
        int totalBytesToRead = networkStream.ReadByte();
        // This is your mechanism to find out how many bytes
        // the client wants to send.

        byte[] readBuffer = new byte[1024]; // Up to you the length!
        int totalBytesRead = 0;
        int bytesReadInThisTcpWindow = 0;

        // The length of the TCP window of the client is usually
        // the number of bytes that will be pushed through
        // to your server in one SOCKET.READ method call.
        // For example, if the client's TCP window was 777 bytes, a:
        //     int bytesRead =
        //         networkStream.Read(readBuffer, 0, int.MaxValue);
        //     bytesRead would be 777.
        // If they were sending a large file, you would have to make
        // it up from the many 777s.
        // If it were a small file under 777 bytes, your bytesRead
        // would be the total small length of say 500.
        while
        (
            (
                bytesReadInThisTcpWindow =
                    networkStream.Read(readBuffer, 0, readBuffer.Length)
            ) > 0
        )
        // If bytesReadInThisTcpWindow == 0 then the client
        // has disconnected or failed to send the promised number
        // of bytes within the timeout dictated by your Windows
        // server internals (important to stop here so lots of
        // waiting threads don't kill your server)
        {
            receivedData.Write(readBuffer, 0, bytesReadInThisTcpWindow);
            totalBytesRead = totalBytesRead + bytesReadInThisTcpWindow;
        }

        if (totalBytesRead == totalBytesToRead)
        {
            // We have our data!
        }
    }
}
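For completeness, a sketch of the matching sender side under the same assumption as the snippet above: the length is sent as a single byte (to pair with the ReadByte() call), so payloads are limited to 255 bytes; a real protocol would use a 4-byte length prefix instead.

using (NetworkStream networkStream = new NetworkStream(connectedSocket))
{
    byte[] payload = Encoding.UTF8.GetBytes("hello");
    networkStream.WriteByte((byte)payload.Length);    // the promised byte count
    networkStream.Write(payload, 0, payload.Length);  // the data itself
}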

Receive the latest UDP packet in C#

I'm using Unity to visualize a simulation where the data from the simulation is being sent to it via UDP packets from Simulink. The problem I'm having stems from the rate at which Simulink sends out UDP packets and the rate at which my script in Unity tries to receive data from the UDP client.
For my Unity script, I create a thread that executes a simple function with a while loop and sleeps for the same amount of time as the client's timeout (which I set arbitrarily):
public void Start() {
    // Setup listener.
    this.mSenderAddress = IPAddress.Parse("127.0.0.1");
    this.mSender = new IPEndPoint(this.mSenderAddress, 30001);

    // Setup background UDP listener thread.
    this.mReceiveThread = new Thread(new ThreadStart(ReceiveData));
    this.mReceiveThread.IsBackground = true;
    this.mReceiveThread.Start();
}

// Function to receive UDP data.
private void ReceiveData() {
    try {
        // Setup UDP client.
        this.mClient = new UdpClient(30001);
        this.mClient.Client.ReceiveTimeout = 250;

        // While thread is still alive.
        while (Thread.CurrentThread.IsAlive) {
            try {
                // Grab the data.
                byte[] data = this.mClient.Receive(ref this.mSender);

                // Convert the data from bytes to doubles.
                double[] convertedData = new double[data.Length / 8];
                for (int ii = 0; ii < convertedData.Length; ii++)
                    convertedData[ii] = BitConverter.ToDouble(data, 8 * ii);

                // DO WHATEVER WITH THE DATA

                // Sleep the thread.
                Thread.Sleep(this.mClient.Client.ReceiveTimeout);
            } catch (SocketException) {
                continue;
            }
        }
    } catch (Exception e) {
        Debug.Log(e.ToString());
    }
}
Here, if the timeout/sleep time is greater than the interval at which Simulink sends out UDP packets, my visualization falls behind the simulation, because it reads the next packet that was sent out rather than the last packet that was sent out. The packets are effectively treated as a queue.
Is there any way to just get data from the last packet received? I know that there is at least one way around this, because if I use a Rate Transfer Block set to a sample time equal to or greater than the UdpClient timeout it will work, but I'd like to make it more robust than that.
Since my packets contain the full state of my simulation (position, orientation, time, etc.), it doesn't matter if I never use data from intermediate packets, so long as I get the most up-to-date data, which would be from the last packet.
UDP is unreliable, and packets are not guaranteed to be received in the same order they are sent. My suggestion is to use TCP, or to put some sort of sequence number in your packet headers, keep reading the UDP packets, and only select the newest packet.
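To illustrate the sequence-number idea, here is a sketch that could replace the body of the while loop in ReceiveData. It assumes the sender prepends a 4-byte sequence counter to every packet; that layout is an assumption, not part of the question's Simulink setup.

// Block until at least one packet arrives (honours ReceiveTimeout)...
byte[] newestPacket = this.mClient.Receive(ref this.mSender);
uint newestSequence = BitConverter.ToUInt32(newestPacket, 0);

// ...then drain anything else already queued, keeping the highest sequence number.
while (this.mClient.Available > 0)
{
    byte[] data = this.mClient.Receive(ref this.mSender);
    uint sequence = BitConverter.ToUInt32(data, 0);
    if (sequence > newestSequence)
    {
        newestSequence = sequence;
        newestPacket = data;
    }
}

// The payload of doubles starts after the 4-byte sequence number.
double[] state = new double[(newestPacket.Length - 4) / 8];
for (int i = 0; i < state.Length; i++)
    state[i] = BitConverter.ToDouble(newestPacket, 4 + 8 * i);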

Async Sockets, Receive multiple files on same connection

I have a server that is going to transfer multiple files to a client over a single connection. The packet from server is in the following format:
unique_packet_id | file_content
I have an OnDataReceived function which I need to work like this:
public class TRACK_ID {
    public string id;
    public string unknown_identifier;
}

List<TRACK_ID> TRACKER = new List<TRACK_ID>();

public void OnDataReceived(IAsyncResult asyn)
{
    try
    {
        log("OnDataReceived");
        SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
        int iRx = theSockId.thisSocket.EndReceive(asyn);
        // .. read the data into data_chunk

        // if separator found, that means we got a first chunk with the id
        if (data_chunk.Contains("|") == true)
        {
            // extract unique_packet_id from the data
            // bind unique_packet_id to some kind of identifier? how!!??
            TRACK_ID new_track = new TRACK_ID();
            new_track.id = unique_packet_id;
            new_track.unknown_identifier = X;
            TRACKER.Add(new_track);
        } else {
            // no separator found - we're getting the rest of the data
            // determine the unique_packet_id of the incoming data so we can distinguish data/files
            string current_packet_id = "";
            for (int i = 0; i < TRACKER.Count; i++) {
                if (TRACKER[i].unknown_identifier == X) {
                    current_packet_id = TRACKER[i].id; // we found our packet id!
                    break;
                }
            }
            // now we have our packet_id and know where to store the buffer
        }

        // WaitForData.. (re-arm the receive here)
    }
    catch (Exception) { /* ... */ }
}
I need a variable X that will allow me to track where to store each incoming buffer.
If I closed the connection for each file, I could bind unique_packet_id to socket_id (socket_id would be X), but since I'm using the same connection, socket_id always stays the same, so I have to use something else for this X variable.
The only other solution I can think of is sending the unique_packet_id in each chunk of data, but that doesn't seem like the best way to do it: I would have to split the file buffer into chunks and append the id to each chunk. Are there any other ways to accomplish this? Thanks!
You didn't say if you're using a stream socket or a datagram socket.
If you're using a datagram socket (e.g. you're using UDP/IP) then you will always receive a whole packet all at once, so you can identify the data because it goes with the unique_packet_id that was found before the | at the beginning of the current packet.
If you're using a stream socket (e.g. you're using TCP/IP) then I think you have a problem. Your packet format isn't delimited or escaped, so how will you know where one packet ends and the next one begins?
If you are using a stream socket, you need to use, for example, a packet format like this:
unique packet ID (say, in ASCII, terminated with CRLF, or whatever you choose)
content length (same format)
packet payload
The receiver can find the end of the packet because it knows how many bytes will be part of the payload.
You will also need to be prepared for the case where you get one of your packets in small pieces. For example, your callback function might be called once with part of the unique packet ID, called again with the rest of the header and part of the payload, and again with the rest of the payload and the complete following packet tacked on to the end. Or you may get three whole packets and part of a fourth in a single call to your callback function.
The other possible solution you mention, that of sending the unique_packet_id in each chunk of data, is not possible, because the sender doesn't know how the data will be chunked up when it is delivered to the receiver.
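To illustrate that buffering, here is a sketch of receive-side framing for the header format suggested above (the helper names IndexAfterSecondCrlf and StorePayload are hypothetical, and System.Linq is assumed). Bytes from every receive call are appended to one buffer, and complete packets are peeled off the front whenever enough data has accumulated, which copes with headers split across reads and with several packets arriving in a single read.

List<byte> pending = new List<byte>();

void OnBytesReceived(byte[] chunk, int count)
{
    pending.AddRange(chunk.Take(count));

    while (true)
    {
        // Header is two CRLF-terminated ASCII lines: packet id, then payload length.
        int headerEnd = IndexAfterSecondCrlf(pending);   // hypothetical helper
        if (headerEnd < 0) return;                       // header not complete yet

        string[] header = Encoding.ASCII
            .GetString(pending.Take(headerEnd).ToArray())
            .Split(new[] { "\r\n" }, StringSplitOptions.RemoveEmptyEntries);
        string packetId = header[0];
        int payloadLength = int.Parse(header[1]);

        if (pending.Count < headerEnd + payloadLength) return;   // payload not complete yet

        byte[] payload = pending.Skip(headerEnd).Take(payloadLength).ToArray();
        pending.RemoveRange(0, headerEnd + payloadLength);

        StorePayload(packetId, payload);                 // hypothetical sink
    }
}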

C# Async TCP sockets: Handling buffer size and huge transfers

When using a blocking TCP socket, I don't have to specify a buffer size. For example:
using (var client = new TcpClient())
{
    client.Connect(ServerIp, ServerPort);
    using (reader = new BinaryReader(client.GetStream()))
    using (writer = new BinaryWriter(client.GetStream()))
    {
        var byteCount = reader.ReadInt32();
        reader.ReadBytes(byteCount);
    }
}
Notice how the remote host could have sent any number of bytes.
However, when using async TCP sockets, I need to create a buffer and thus hardcode a maximum size:
var buffer = new byte[BufferSize];
socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, callback, null);
I could simply set the buffer size to, say, 1024 bytes. That'll work if I only need to receive small chunks of data. But what if I need to receive a 10 MB serialized object? I could set the buffer size to 10*1024*1024... but that would waste a constant 10 MB of RAM for as long as the application is running. This is silly.
So, my question is: How can I efficiently receive big chunks of data using async TCP sockets?
The two examples are not equivalent - your blocking code assumes the remote end sends the 32-bit length of the data to follow. If the same protocol is valid for the async case, just read that length (blocking or not), then allocate the buffer and initiate the asynchronous I/O.
Edit 0:
Let me also add that allocating buffers of user-entered, and especially of network-supplied, size is a recipe for disaster. An obvious problem is a denial-of-service attack where a client requests a huge buffer and holds on to it - say, by sending data very slowly - preventing other allocations and/or slowing the whole system.
The common wisdom here is to accept a fixed amount of data at a time and parse as you go. That of course affects your application-level protocol design.
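As an illustration of both points - read the 32-bit length first, sanity-cap it before allocating, then read until the full payload has arrived - here is a sketch (the method names and the 50 MB cap are mine, and NetworkStream is used for brevity; the same pattern applies to Socket.ReceiveAsync):

async Task<byte[]> ReceiveMessageAsync(NetworkStream stream)
{
    byte[] lengthPrefix = await ReadExactlyAsync(stream, 4);
    int messageLength = BitConverter.ToInt32(lengthPrefix, 0);

    // Never trust a network-supplied size blindly; the cap here is arbitrary.
    if (messageLength < 0 || messageLength > 50 * 1024 * 1024)
        throw new InvalidDataException("Unreasonable message length.");

    return await ReadExactlyAsync(stream, messageLength);
}

async Task<byte[]> ReadExactlyAsync(NetworkStream stream, int count)
{
    byte[] result = new byte[count];
    int total = 0;
    while (total < count)
    {
        // Each ReadAsync returns whatever is currently available, so the payload
        // arrives a bounded piece at a time regardless of its total size.
        int read = await stream.ReadAsync(result, total, count - total);
        if (read == 0) throw new EndOfStreamException();
        total += read;
    }
    return result;
}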
EDITED
The best approach for this problem that I found, after a long analysis, was the following:
First, you need to set the buffer size in order to receive data from the server/client.
Second, you need to find the upload/download speed for that connection.
Third, you need to calculate how many seconds the connection timeout should last, in accordance with the size of the package to be sent or received.
Set the buffer size
The buffer size can be set in two ways: arbitrarily or objectively. If the information to be received is text-based, not large, and does not require character comparison, then an arbitrary pre-set buffer size is optimal. If the information to be received needs to be processed character by character, and/or is large, an objective buffer size is the optimal choice.
// In this example I used a Socket wrapped inside a NetworkStream for simplicity,
// stability, and asynchronous operability purposes.
// This can be done like this:
//
// For the server:
//
//     Socket server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
//     server.ReceiveBufferSize = 18000;
//     IPEndPoint iPEndPoint = new IPEndPoint(IPAddress.Any, port);
//     server.Bind(iPEndPoint);
//     server.Listen(3000);
//     Socket connection = server.Accept();
//
//     NetworkStream ns = new NetworkStream(connection);
//
// For the client:
//
//     Socket client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
//     client.Connect("127.0.0.1", 80);
//
//     NetworkStream ns = new NetworkStream(client);

// The goal is to set an objective buffer size based on the file's size, so that we
// neither receive extra null characters because the buffer is bigger than the file,
// nor a corrupted file because the buffer is smaller than the file.

// TCP opens a connection with the SYN, SYN-ACK, ACK handshake; within a TCP
// connection, if the client or server began the exchange by sending a message,
// the next message within that exchange must be read, and if the client or
// server began the exchange by receiving a message, the next message must be sent.
// [SENDER]
byte[] file = new byte[18032];
byte[] file_length = Encoding.UTF8.GetBytes(file.Length.ToString());

await Sender.WriteAsync(file_length, 0, file_length.Length);

byte[] receiver_response = new byte[1800];
await Sender.ReadAsync(receiver_response, 0, receiver_response.Length);

await Sender.WriteAsync(file, 0, file.Length);
// [/SENDER]

// [RECEIVER]
byte[] file_length = new byte[1800];
await Receiver.ReadAsync(file_length, 0, file_length.Length);

byte[] encoded_response = Encoding.UTF8.GetBytes("OK");
await Receiver.WriteAsync(encoded_response, 0, encoded_response.Length);

// Trim the trailing null bytes before parsing the announced length.
byte[] file = new byte[Convert.ToInt32(Encoding.UTF8.GetString(file_length).Trim('\0'))];
await Receiver.ReadAsync(file, 0, file.Length);
// [/RECEIVER]
The buffers used to receive the payload length have an arbitrary size. The length of the payload to be sent is converted to a string, and the string is then converted into a UTF-8 encoded byte array. The received payload length is converted back into a string and then into an integer, which is used to set the length of the buffer that will receive the payload. The length travels as string, then byte[], then string, then int, to avoid corrupting the value even though the length information is sent in a buffer that is larger than the information itself: when the receiver converts the byte[] content to a string, the extra (null) characters are trimmed off and the value stays the same.
Get the upload/download speed of the connection and calculate the Socket receive and send buffer size
First, make a class that is responsible for calculating the buffer size for each connection.
// (The Socket / NetworkStream setup is the same as in the previous snippet.)
class Internet_Speed_Checker
{
    public async Task Optimum_Buffer_Size(System.Net.Sockets.NetworkStream socket, int payload_length)
    {
        System.Diagnostics.Stopwatch latency_counter = new System.Diagnostics.Stopwatch();
        byte[] test_payload = new byte[2048];

        // TCP opens a connection with the SYN, SYN-ACK, ACK handshake; within a
        // TCP connection, if the client or server began the exchange by sending
        // a message, the next message within that exchange must be read, and if
        // it began the exchange by receiving a message, the next message must be sent.
        //
        // In order to test the connection, the client and server must send and
        // receive a package of the same size. If the client or server began the
        // connection by sending a message, it must do this connection test by
        // initiating a write-read sequence; otherwise it must do the test by
        // initiating a read-write sequence.
        latency_counter.Start();
        await socket.ReadAsync(test_payload, 0, test_payload.Length);
        await socket.WriteAsync(test_payload, 0, test_payload.Length);
        latency_counter.Stop();

        int bytes_per_second = (int)(test_payload.Length * (1000 / latency_counter.Elapsed.TotalMilliseconds));

        int optimal_connection_timeout = (payload_length / bytes_per_second) * 1000 + 1000;
        double optimal_buffer_size_double = (((double)bytes_per_second / 125000) * (latency_counter.Elapsed.TotalMilliseconds / 1000)) * 1048576;
        int optimal_buffer_size = (int)optimal_buffer_size_double + 1024;

        // If you want to upload data to the client/server:
        //     client.SendBufferSize = optimal_buffer_size;
        //     client.SendTimeout = optimal_connection_timeout;
        // If you want to download data from the client/server:
        //     client.ReceiveBufferSize = optimal_buffer_size;
        //     client.ReceiveTimeout = optimal_connection_timeout;
    }
}
The method above ensures that the data transmitted between the client buffer and the server buffer uses an appropriate socket buffer size and connection timeout in order to avoid data corruption and fragmentation. When data is sent through a socket with an async read/write operation, the information is segmented into packets. The packet size has a default value, but it does not account for the fact that the upload/download speed of the connection varies. To avoid data corruption and to keep the download/upload speed of the connection optimal, the packet size must be set in accordance with the speed of the connection. The example above also shows how to calculate the timeout in relation to the connection speed. The packet size for download/upload can be set using socket.ReceiveBufferSize = ... and socket.SendBufferSize = ..., respectively.
For more information related to the equations and principles used check:
https://www.baeldung.com/cs/calculate-internet-speed-ping
https://docs.oracle.com/cd/E36784_01/html/E37476/gnkor.html#:~:text=You%20can%20calculate%20the%20correct,value%20of%20the%20connection%20latency.
