C# NetworkStream ReadAsync, data split between buffers

I'm currently using the ReadAsync method on NetworkStream, and so far it's been pretty stable. The problem occurs when the data received from the server is larger than the buffer itself.
ReadCallBack assumes that the buffer contains a complete and valid message, since a new buffer is created on every call to BeginAsyncRead. So when the response is too large for one buffer, it gets split across multiple reads and thus multiple buffers.
My ReadCallBack then treats each buffer as an independent message (instead of stitching them together) and ends up failing deserialization on both.
My question is: what is the correct way to handle messages that are bigger than the buffer, especially if you want to do the reads asynchronously?
A character itself could even be split between two buffers, so this gets quite tricky.
Below is my code for ReadAsync and ReadCallBack.
public async Task ReadForever() {
    while (this.IsConnected) {
        await BeginAsyncRead();
    }
}

public async Task BeginAsyncRead() {
    if (this.Stream.CanRead) {
        try {
            ReceiverState readState = new ReceiverState(this.Stream);
            var task = this.Stream.ReadAsync(readState.buffer, 0, ReceiverState.bufferSize);
            await task.ContinueWith(bytesRead => ReadCallBack(bytesRead.Result, readState));
        }
        catch (Exception e) {
            Console.WriteLine("Error");
        }
    } else {
        Console.WriteLine("Unreadable stream");
    }
}

private void ReadCallBack(int numBytesRead, ReceiverState state) {
    if (numBytesRead > 0) {
        string payload = Encoding.ASCII.GetString(state.buffer, 0, numBytesRead);
        string[] responses = payload.Split(new string[] {"\n"}, StringSplitOptions.RemoveEmptyEntries);
        foreach (string res in responses) {
            var t = Task.Run(() => this.OnRead(res));
        }
    } else {
        Console.WriteLine("Corrupted number of bytes received");
    }
}
Note: All responses sent by the server contain a newline at the end.
Note: ReadForever is called inside Task.Run; my application can receive messages from the server as notifications, so I must always be reading for incoming messages (I don't know when a notification may arrive).
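One common approach, shown here as a hedged sketch rather than the poster's actual code (it reuses the IsConnected, Stream and OnRead members from the question as assumptions, and assumes ASCII, newline-delimited responses as described above): accumulate raw bytes in a per-connection buffer and only hand a message off once its terminating newline has arrived; anything after the last newline stays buffered for the next read.

    private readonly List<byte> _pending = new List<byte>();

    public async Task ReadForever()
    {
        byte[] buffer = new byte[4096];
        while (this.IsConnected)
        {
            int bytesRead = await this.Stream.ReadAsync(buffer, 0, buffer.Length);
            if (bytesRead == 0)
                break; // the remote side closed the connection

            for (int i = 0; i < bytesRead; i++)
            {
                if (buffer[i] == (byte)'\n')
                {
                    // A full message has arrived; hand it off and reset the accumulator.
                    string message = Encoding.ASCII.GetString(_pending.ToArray());
                    _pending.Clear();
                    if (message.Length > 0)
                        _ = Task.Run(() => this.OnRead(message));
                }
                else
                {
                    // Partial message: keep the byte until its newline shows up.
                    _pending.Add(buffer[i]);
                }
            }
        }
    }

Because the leftover bytes survive between reads, a message that spans two buffers (or a character split across them, if you later move to a multi-byte encoding and decode whole messages at once) is reassembled before deserialization.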

Related

Receiving data from a serial port using ReadAsync

I'm trying to communicate with a piece of hardware over a serial port and I've noticed that it's quite tricky. After reading up on some of the SerialPort caveats, I discovered this post by Ben Voigt on alternative approaches to handle serial communication in .NET with the SerialPort class.
However, this code is quite old (8 years or so as of today) and the Stream class now exposes asynchronous read/write support, so I was wondering how the code in that blog could be re-written to account for the following scenarios:
The device periodically sends a status update through the port (auto generated)
I can send a byte array that represents a command that doesn't need a response
I can send a byte array that represents a command that does need a response, and the device will send that reply data through the port
So far, I have a simple read loop that looks a bit like this:
public async IAsyncEnumerable<byte> ReadBytesAsync()
{
    while (_port.IsOpen)
    {
        byte[]? buffer = new byte[1];
        try
        {
            await _port.BaseStream.ReadAsync(buffer, 0, 1);
        }
        catch (Exception e)
        {
            // Do stuff
        }
        if (buffer is not null) yield return buffer[0];
        await Task.Delay(1); // Wait a bit before reading again
    }
}
I also have a GetRepliesAsync() method that await foreaches over the read bytes and constructs the right data types for the replies.
The autogenerated status update gets read correctly when received, but if I use _port.BaseStream.WriteAsync(commandBytes, 0, commandBytes.Length), the device receives the update but I somehow don't receive its reply through the base stream.
What could I be doing wrong, and how can I actually read the received bytes?
UPDATES
1. Modified read and added delay
I got the code to work somewhat with the following approach:
while (_port.IsOpen)
{
    if (cancellationToken.IsCancellationRequested) yield break;
    int bufferLength = 2048; // Arbitrary buffer length
    byte[] buffer = new byte[bufferLength];
    byte[]? nextBytes = null;
    try
    {
        // Partially using Ben Voigt's approach
        int bytesRead = await _port.BaseStream
            .ReadAsync(buffer.AsMemory(0, bufferLength), cancellationToken);
        nextBytes = new byte[bytesRead];
        Buffer.BlockCopy(buffer, 0, nextBytes, 0, bytesRead);
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
        Console.WriteLine("Terminating serial read polling...");
        yield break;
    }
    if (nextBytes is not null)
    {
        for (int i = 0; i < nextBytes.Length; i++)
        {
            yield return nextBytes[i];
        }
    }
    // Introduce delay in case a write operation is going on
    await Task.Delay(200, cancellationToken);
}
But I still don't see a way to guarantee that each write will happen during the 200 millisecond delay.
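One way to avoid relying on a fixed delay at all is to serialize each command/response exchange explicitly. The following is a hedged sketch, not the poster's code: the SerialExchange name, the Channel usage, the assumption that exactly one reply frame comes back per command, and the assumption that the existing read loop can tell reply frames apart from the periodic status updates are all mine. The read loop stays the only reader; it pushes complete reply frames into a channel, and the sender writes the command and then awaits the next frame instead of sleeping.

    using System.IO.Ports;
    using System.Threading.Channels;

    public sealed class SerialExchange
    {
        private readonly SerialPort _port;
        private readonly SemaphoreSlim _commandLock = new(1, 1);
        private readonly Channel<byte[]> _replies = Channel.CreateUnbounded<byte[]>();

        public SerialExchange(SerialPort port) => _port = port;

        // Called by the existing read loop whenever it has assembled a full reply frame
        // (as opposed to an unsolicited status update).
        public void OnReplyFrame(byte[] frame) => _replies.Writer.TryWrite(frame);

        public async Task<byte[]> SendCommandAsync(byte[] command, CancellationToken ct)
        {
            await _commandLock.WaitAsync(ct); // one command in flight at a time
            try
            {
                await _port.BaseStream.WriteAsync(command, ct);
                return await _replies.Reader.ReadAsync(ct); // wait for the matching reply
            }
            finally
            {
                _commandLock.Release();
            }
        }
    }

With that shape, reads and writes no longer have to be squeezed into alternating 200 ms windows; the reply simply arrives through the read loop whenever the device sends it.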

How to handle incoming TCP messages with a Kestrel ConnectionHandler?

I want to create a TCP listener for my .NET Core project. I'm using Kestrel and configured a new ConnectionHandler for this via
kestrelServerOptions.ListenLocalhost(5000, builder =>
{
    builder.UseConnectionHandler<MyTCPConnectionHandler>();
});
So what I have so far is
internal class MyTCPConnectionHandler : ConnectionHandler
{
    public override async Task OnConnectedAsync(ConnectionContext connection)
    {
        IDuplexPipe pipe = connection.Transport;
        PipeReader pipeReader = pipe.Input;
        while (true)
        {
            ReadResult readResult = await pipeReader.ReadAsync();
            ReadOnlySequence<byte> readResultBuffer = readResult.Buffer;
            foreach (ReadOnlyMemory<byte> segment in readResultBuffer)
            {
                // read the current message
                string messageSegment = Encoding.UTF8.GetString(segment.Span);
                // send back an echo
                await pipe.Output.WriteAsync(segment);
            }
            if (readResult.IsCompleted)
            {
                break;
            }
            pipeReader.AdvanceTo(readResultBuffer.Start, readResultBuffer.End);
        }
    }
}
When sending messages from a TCP client to the server application the code works fine. The line await pipe.Output.WriteAsync(segment); is acting like an echo for now.
Some questions come up
What response should I send back to the client so that it does not run into a timeout?
When should I send back the response? When readResult.IsCompleted returns true?
How should I change the code to fetch the whole message sent by the client? Should I store each messageSegment in a List<string> and join it to a single string when readResult.IsCompleted returns true?
That is entirely protocol dependent; in many cases, you're fine to do nothing; in others, there will be specific "ping"/"pong" frames to send if you just want to say "I'm still here".
The "when" is entirely protocol dependent; waiting for readResult.IsCompleted means that you're waiting for the inbound socket to be marked as closed, which means you won't send anything until the client closes their outbound socket; for single-shot protocols, that might be fine, but in most cases you'll want to look for a single inbound frame, reply to that frame, and repeat.
It sounds like you might indeed be writing a one-shot channel, i.e. the client only sends one thing to the server, and after that the server only sends one thing to the client; in that case, you do something like:
while (true)
{
    var readResult = await pipeReader.ReadAsync();
    var readResultBuffer = readResult.Buffer;
    if (readResult.IsCompleted)
    {
        // TODO: not shown; process readResult.Buffer
        // tell the pipe that we consumed everything, and exit
        pipeReader.AdvanceTo(readResultBuffer.End, readResultBuffer.End);
        break;
    }
    else
    {
        // wait for the client to close their outbound; tell
        // the pipe that we couldn't consume anything
        pipeReader.AdvanceTo(readResultBuffer.Start, readResultBuffer.End);
    }
}
As for:
Should I store each messageSegment in a List<string> and join it to a single string when readResult.IsCompleted returns true?
The first thing to consider here is that it is not necessarily the case that each buffer segment contains an exact number of characters. Since you are using UTF-8, which is a multi-byte encoding, a segment might contain fractions of characters at the start and end, so: decoding it is a bit more involved than that.
Because of this, it is common to check IsSingleSegment on the buffer; if this is true, you can just use simple code:
if (buffer.IsSingleSegment)
{
    string message = Encoding.UTF8.GetString(buffer.FirstSpan);
    DoSomethingWith(message);
}
else
{
    // ... more complex
}
The discontiguous buffer case is much harder; basically, you have two choices here:
1. linearize the segments into a contiguous buffer, probably leasing an oversized buffer from ArrayPool<byte>.Shared, and use UTF8.GetString on the correct portion of the leased buffer
2. use the GetDecoder() API on the encoding, and use that to populate a new string, which on older frameworks means overwriting a newly allocated string, or in newer frameworks means using the string.Create API
Frankly, "1" is much simpler. For example (untested):
public static string GetString(in this ReadOnlySequence<byte> payload,
    Encoding encoding = null)
{
    encoding ??= Encoding.UTF8;
    return payload.IsSingleSegment ? encoding.GetString(payload.FirstSpan)
        : GetStringSlow(payload, encoding);

    static string GetStringSlow(in ReadOnlySequence<byte> payload, Encoding encoding)
    {
        // linearize
        int length = checked((int)payload.Length);
        var oversized = ArrayPool<byte>.Shared.Rent(length);
        try
        {
            payload.CopyTo(oversized);
            return encoding.GetString(oversized, 0, length);
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(oversized);
        }
    }
}
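To tie this back to the "read one frame, reply, repeat" point above, here is a hedged sketch (my own illustration, not from the answer) of a handler for newline-delimited UTF-8 messages. It assumes the GetString helper above lives in a static extensions class, and uses the AdvanceTo(consumed, examined) pattern so that partial frames stay in the pipe until more data arrives.

    public override async Task OnConnectedAsync(ConnectionContext connection)
    {
        PipeReader input = connection.Transport.Input;
        PipeWriter output = connection.Transport.Output;
        while (true)
        {
            ReadResult result = await input.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Extract as many complete frames as the buffer currently holds.
            while (TryReadLine(ref buffer, out ReadOnlySequence<byte> line))
            {
                string message = line.GetString();       // whole message, segments handled
                byte[] reply = Encoding.UTF8.GetBytes(message + "\n");
                await output.WriteAsync(reply);          // one reply per inbound frame
            }

            // Consumed: everything up to the last complete frame.
            // Examined: the whole buffer, so ReadAsync waits for more data.
            input.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
            {
                break;
            }
        }
        await input.CompleteAsync();
        await output.CompleteAsync();
    }

    private static bool TryReadLine(ref ReadOnlySequence<byte> buffer, out ReadOnlySequence<byte> line)
    {
        SequencePosition? newline = buffer.PositionOf((byte)'\n');
        if (newline == null) { line = default; return false; }
        line = buffer.Slice(0, newline.Value);
        buffer = buffer.Slice(buffer.GetPosition(1, newline.Value)); // skip the '\n'
        return true;
    }

The key detail is that "consumed" only advances past complete frames, while "examined" covers the whole buffer, so the pipe neither drops partial frames nor spins on data it has already seen.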

How to write to MemoryStream if many messages are sent

Here is how I read data from my stream now:
public List<ServerClient> clients = new List<ServerClient>();

while (true)
{
    Update();
}

private void Update()
{
    //Console.WriteLine("Call");
    if (!serverStarted)
    {
        return;
    }
    foreach (ServerClient c in clients.ToList())
    {
        // Is the client still connected?
        if (!IsConnected(c.tcp))
        {
            c.tcp.Close();
            disconnectList.Add(c);
            Console.WriteLine(c.connectionId + " has disconnected.");
            CharacterLogout(c.connectionId);
            continue;
            //Console.WriteLine("Check for connection?\n");
        }
        else
        {
            // Check for message from Client.
            NetworkStream s = c.tcp.GetStream();
            if (s.DataAvailable)
            {
                string data = c.streamReader.ReadLine();
                if (data != null)
                {
                    OnIncomingData(c, data);
                }
            }
            //continue;
        }
    }
    for (int i = 0; i < disconnectList.Count - 1; i++)
    {
        clients.Remove(disconnectList[i]);
        disconnectList.RemoveAt(i);
    }
}
When data is read it is sent to the OnIncomingData function, which processes it. I don't have problems there.
Here is how I send data to the stream:
public void Send(string header, Dictionary<string, string> data)
{
    if (stream.CanRead)
    {
        socketReady = true;
    }
    if (!socketReady)
    {
        return;
    }
    JsonData SendData = new JsonData();
    SendData.header = "1x" + header;
    foreach (var item in data)
    {
        SendData.data.Add(item.Key.ToString(), item.Value.ToString());
    }
    SendData.connectionId = connectionId;
    string json = JsonConvert.SerializeObject(SendData);
    var howManyBytes = json.Length * sizeof(Char);
    writer.WriteLine(json);
    writer.Flush();
    Debug.Log("Client World:" + json);
}
Here is my ServerClient class:
public class ServerClient
{
    public TcpClient tcp;
    public int accountId;
    public StreamReader streamReader;
    public int connectionId;

    public ServerClient(TcpClient clientSocket)
    {
        tcp = clientSocket;
    }
}
Here is my OnConnection function:
private void OnConnection(IAsyncResult ar)
{
    connectionIncrementor++;
    TcpListener listener = (TcpListener)ar.AsyncState;
    NetworkStream s = clients[clients.Count - 1].tcp.GetStream();
    clients.Add(new ServerClient(listener.EndAcceptTcpClient(ar)));
    clients[clients.Count - 1].connectionId = connectionIncrementor;
    clients[clients.Count - 1].streamReader = new StreamReader(s, true);
    StartListening();
    // Send a message to everyone, say someone has connected!
    Dictionary<string, string> SendDataBroadcast = new Dictionary<string, string>();
    SendDataBroadcast.Add("connectionId", clients[clients.Count - 1].connectionId.ToString());
    Broadcast("001", SendDataBroadcast, clients, clients[clients.Count - 1].connectionId);
    Console.WriteLine(clients[clients.Count - 1].connectionId + " has connected.");
}
Normally everything works fine. However, if I send more requests per second, a problem occurs: the received message is not full and complete. The server only gets a portion of the message sent.
From Debug.Log("Client World:" + json); I can see that the message is full and complete, but on the server I can see that it is not.
This does not happen if I send fewer requests.
So for that reason I think I should create a MemoryStream, put each message there, and read it afterwards. However, I'm really not sure how to do that. Can you help?
The whole code is not very good, but I'll concentrate on your specific problem. It's most likely related to data buffering by StreamReader. StreamReader has a buffer size (which you can pass to the constructor) that defaults to 1024 bytes. When you call ReadLine, it's perfectly possible for the stream reader to read more than one line from the underlying stream.

In your case, you have a while loop in which you enumerate connected clients, and in every iteration of the loop you create a new StreamReader and read one line from it. When the message rate is low, all looks fine, because between your loop iterations only one line arrives. Now suppose the client quickly sent two JSON messages, each 800 bytes, and they both arrived at your socket. Now you call StreamReader.ReadLine. Because the buffer size is 1024, it will read 1024 bytes from the socket (NetworkStream) and return the first 800 to you as a line. You process that line and discard the StreamReader, going to the next iteration of your while loop. By doing that you also discard part of the next message (the 224 bytes that were already read from the socket into the StreamReader's buffer).

I think from that it should be clear how to solve the problem: don't create a new StreamReader every time; create one per client (for example, store it as a member of ServerClient) and reuse it.
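A minimal sketch of that suggestion (my own illustration, reusing the names from the question and omitting the broadcast code): create the reader once, from the newly accepted client's own stream, and keep it on the ServerClient for the lifetime of the connection.

    private void OnConnection(IAsyncResult ar)
    {
        connectionIncrementor++;
        TcpListener listener = (TcpListener)ar.AsyncState;
        ServerClient client = new ServerClient(listener.EndAcceptTcpClient(ar));
        client.connectionId = connectionIncrementor;
        // One StreamReader per client, created from that client's own stream and
        // reused on every Update() pass, so bytes it has buffered are never thrown away.
        client.streamReader = new StreamReader(client.tcp.GetStream(), Encoding.UTF8);
        clients.Add(client);
        StartListening();
    }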
The client looks more suspicious to me than the server.
StreamWriter is not thread-safe. Are you calling it in a thread-safe manner when using ClientWorldServer.Send? Lock or queue your calls to ClientWorldServer.Send using a lock, a BlockingCollection, or some other synchronisation primitive. There is also a thread-safe wrapper for StreamWriter you might be able to use.
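For example, a minimal sketch of the locking approach (illustrative only; the _writeLock field is an assumption, and BuildJson stands in for the existing serialization code in Send):

    private readonly object _writeLock = new object();

    public void Send(string header, Dictionary<string, string> data)
    {
        string json = BuildJson(header, data); // assumed helper for the existing serialization code
        lock (_writeLock) // only one thread may write a line at a time
        {
            writer.WriteLine(json);
            writer.Flush();
        }
    }

    // Alternatively, TextWriter.Synchronized(writer) gives a synchronized wrapper,
    // though WriteLine and Flush are then still two separate synchronized calls.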

networkstream always empty!

Hey, I'm writing a server-client program,
but when my client sends something, it never reaches my server!
I'm sending like this:
public void Send(string s)
{
    char[] chars = s.ToCharArray();
    byte[] bytes = chars.CharToByte();
    nstream.Write(bytes, 0, bytes.Length);
    nstream.Flush();
}
and receiving in a background thread like this:
void CheckIncoming(object dd)
{
    RecievedDelegate d = (RecievedDelegate)dd;
    try
    {
        while (true)
        {
            List<byte> bytelist = new List<byte>();
            System.Threading.Thread.Sleep(1000);
            int ssss;
            ssss = nstream.ReadByte();
            if (ssss > 1)
            {
                System.Diagnostics.Debugger.Break();
            }
            if (bytelist.Count != 0)
            {
                d.Invoke(bytelist.ToArray());
            }
        }
    }
    catch (Exception exp)
    {
        MSGBOX("ERROR:\n" + exp.Message);
    }
}
The ssss int is never > 1.
What's happening here?
NetworkStream.Flush() actually has no effect:
The Flush method implements the Stream..::.Flush method; however, because NetworkStream is not buffered, it has no affect [sic] on network streams. Calling the Flush method does not throw an exception
How much data is being sent?
If you don't send enough data it may remain buffered at the network level, until you close the stream or write more data.
See the TcpClient.NoDelay property for a way to disable this, if you are only going to be sending small chunks of data and require low latency.
You should change the check of the return value to if (ssss >= 0).
ReadByte returns a value greater than or equal to 0 if it succeeds in reading a byte (source).
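Putting those fixes together, a sketch of a corrected receive loop (my own illustration; RecievedDelegate, MSGBOX and nstream are from the question): test for end-of-stream with -1 and actually store each byte that was read.

    void CheckIncoming(object dd)
    {
        RecievedDelegate d = (RecievedDelegate)dd;
        var bytelist = new List<byte>();
        try
        {
            int next;
            while ((next = nstream.ReadByte()) >= 0) // -1 means the stream was closed
            {
                bytelist.Add((byte)next);
                if (nstream.DataAvailable) continue; // keep draining the current burst
                d.Invoke(bytelist.ToArray());        // hand off what has arrived so far
                bytelist.Clear();
            }
        }
        catch (Exception exp)
        {
            MSGBOX("ERROR:\n" + exp.Message);
        }
    }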
To elaborate on Marc's comment: How is nstream created? Maybe there is an underlying class that does not flush.
Well, I'm creating a TcpClient and using GetStream() to get the NetworkStream.

Are there well-known patterns for asynchronous network code in C#?

I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple:
The proxy server and clients both use the same message format; in the code I use a ProxyMessage class to represent both requests from clients and responses generated by the server:
public class ProxyMessage
{
    int Length;  // message length (not including the length bytes themselves)
    string Body; // an XML string containing a request/response

    // writes this message instance in the proper network format to stream
    // (helper for response messages)
    WriteToStream(Stream stream) { ... }
}
The messages are as simple as could be: the length of the body + the message body.
I have a separate ProxyClient class that represents a connection to a client. It handles all the interaction between the proxy and a single client.
What I'm wondering is: are there design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in the processing of the current message. In my current code, I do all of this work in my callback function for TcpClient.BeginRead, and manage the state of the buffer and the current message processing state with the help of a few instance variables.
The code for my callback function that I'm passing to BeginRead is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?).
private enum BufferStates
{
    GetMessageLength,
    GetMessageBody
}

// The read buffer. Initially 4 bytes because we are initially
// waiting to receive the message length (a 32-bit int) from the client
// on first connecting. By constraining the buffer length to exactly 4 bytes,
// we make the buffer management a bit simpler, because
// we don't have to worry about cases where the buffer might contain
// the message length plus a few bytes of the message body.
// Additional bytes will simply be buffered by the OS until we request them.
byte[] _buffer = new byte[4];

// A count of how many bytes read so far in a particular BufferState.
int _totalBytesRead = 0;

// The state of our buffer processing. Initially, we want
// to read in the message length, as it's the first thing
// a client will send
BufferStates _bufferState = BufferStates.GetMessageLength;

// ...ADDITIONAL CODE OMITTED FOR BREVITY...

// This is called every time we receive data from
// the client.
private void ReadCallback(IAsyncResult ar)
{
    try
    {
        int bytesRead = _tcpClient.GetStream().EndRead(ar);
        if (bytesRead == 0)
        {
            // No more data/socket was closed.
            this.Dispose();
            return;
        }

        // The state passed to BeginRead is used to hold a ProxyMessage
        // instance that we use to build up the message
        // as it arrives.
        ProxyMessage message = (ProxyMessage)ar.AsyncState;
        if (message == null)
            message = new ProxyMessage();

        switch (_bufferState)
        {
            case BufferStates.GetMessageLength:
                _totalBytesRead += bytesRead;
                // if we have the message length (a 32-bit int)
                // read it in from the buffer, grow the buffer
                // to fit the incoming message, and change
                // state so that the next read will start appending
                // bytes to the message body
                if (_totalBytesRead == 4)
                {
                    int length = BitConverter.ToInt32(_buffer, 0);
                    message.Length = length;
                    _totalBytesRead = 0;
                    _buffer = new byte[message.Length];
                    _bufferState = BufferStates.GetMessageBody;
                }
                break;
            case BufferStates.GetMessageBody:
                string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead);
                _totalBytesRead += bytesRead;
                message.Body += bodySegment;
                if (_totalBytesRead >= message.Length)
                {
                    // Got a complete message.
                    // Notify anyone interested.
                    // Pass a response ProxyMessage object with the event
                    // so that receivers of OnReceiveMessage
                    // can send a response back to the client after processing
                    // the request.
                    ProxyMessage response = new ProxyMessage();
                    OnReceiveMessage(this, new ProxyMessageEventArgs(message, response));
                    // Send the response to the client
                    response.WriteToStream(_tcpClient.GetStream());
                    // Re-initialize our state so that we're
                    // ready to receive additional requests...
                    message = new ProxyMessage();
                    _totalBytesRead = 0;
                    _buffer = new byte[4]; // message length is 32-bit int (4 bytes)
                    _bufferState = BufferStates.GetMessageLength;
                }
                break;
        }

        // Wait for more data...
        _tcpClient.GetStream().BeginRead(_buffer, 0, _buffer.Length, this.ReadCallback, message);
    }
    catch
    {
        // do nothing
    }
}
So far, my only real thought is to extract the buffer-related stuff into a separate MessageBuffer class and simply have my read callback append new bytes to it as they arrive. The MessageBuffer would then worry about things like the current BufferState and fire an event when it received a complete message, which the ProxyClient could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example).
We create a wrapper around Stream (the base class of NetworkStream, which is what TcpClient.GetStream() returns). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes) we check whether we have a full message (4 bytes + message body length). When we do, we raise a MessageReceived event with the message body and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations.
public class MessageStream : IMessageStream, IDisposable
{
    public MessageStream(Stream stream)
    {
        if (stream == null)
            throw new ArgumentNullException("stream", "Stream must not be null");
        if (!stream.CanWrite || !stream.CanRead)
            throw new ArgumentException("Stream must be readable and writable", "stream");

        this.Stream = stream;
        this.readBuffer = new byte[512];
        messageBuffer = new List<byte>();
        stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
    }

    // These belong to the ReadCallback thread only.
    private byte[] readBuffer;
    private List<byte> messageBuffer;

    private void ReadCallback(IAsyncResult result)
    {
        int bytesRead = Stream.EndRead(result);
        messageBuffer.AddRange(readBuffer.Take(bytesRead));
        if (messageBuffer.Count >= 4)
        {
            int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32
            // Keep buffering until we get a full message.
            if (messageBuffer.Count >= length + 4)
            {
                // Note: LINQ's Skip() does not mutate the list, so the header and body
                // have to be removed explicitly once the message has been handled.
                byte[] body = messageBuffer.Skip(4).Take(length).ToArray();
                messageBuffer.RemoveRange(0, length + 4);
                OnMessageReceived(new MessageEventArgs(body));
            }
        }
        // FIXME below is kinda hacky (I don't know the proper way of doing things...)
        // Don't bother reading again. We don't have stream access.
        if (disposed)
            return;
        try
        {
            Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
        }
        catch (ObjectDisposedException)
        {
            // DO NOTHING
            // Ends read loop.
        }
    }

    public Stream Stream
    {
        get;
        private set;
    }

    public event EventHandler<MessageEventArgs> MessageReceived;

    protected virtual void OnMessageReceived(MessageEventArgs e)
    {
        var messageReceived = MessageReceived;
        if (messageReceived != null)
            messageReceived(this, e);
    }

    public virtual void SendMessage(Message message)
    {
        // Have fun ...
    }

    // Dispose stuff here
}
I think the design you've used is fine; that's roughly how I would have done the same sort of thing. I don't think you'd gain much by refactoring into additional classes/structs, and from what I've seen you'd actually make the solution more complex by doing so.
The only comment I'd have is whether the two reads, where the first is always the message length and the second is always the body, are robust enough. I'm always wary of approaches like that: if they somehow get out of sync due to an unforeseen circumstance (such as the other end sending the wrong length), it's very difficult to recover. Instead I'd do a single read with a big buffer so that I always get all the available data from the network, and then inspect the buffer to extract complete messages. That way, if things do go wrong, the current buffer can just be thrown away to get things back to a clean state, and only the current messages are lost rather than the whole service stopping.
Actually, at the moment you would have a problem if your message body was big and arrived in two separate receives, and the next message in line sent its length at the same time as the second half of the previous body. If that happened, your message length would end up appended to the body of the previous message and you'd be in the situation described in the previous paragraph.
You can use yield return to automate the generation of a state machine for asynchronous callbacks. Jeffrey Richter promotes this technique through his AsyncEnumerator class, and I've played around with the idea here.
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail here.
