I want to create a TCP listener for my .NET Core project. I'm using Kestrel and configured a new ConnectionHandler for this via
kestrelServerOptions.ListenLocalhost(5000, builder =>
{
builder.UseConnectionHandler<MyTCPConnectionHandler>();
});
So what I have so far is
internal class MyTCPConnectionHandler : ConnectionHandler
{
public override async Task OnConnectedAsync(ConnectionContext connection)
{
IDuplexPipe pipe = connection.Transport;
PipeReader pipeReader = pipe.Input;
while (true)
{
ReadResult readResult = await pipeReader.ReadAsync();
ReadOnlySequence<byte> readResultBuffer = readResult.Buffer;
foreach (ReadOnlyMemory<byte> segment in readResultBuffer)
{
// read the current message
string messageSegment = Encoding.UTF8.GetString(segment.Span);
// send back an echo
await pipe.Output.WriteAsync(segment);
}
if (readResult.IsCompleted)
{
break;
}
pipeReader.AdvanceTo(readResultBuffer.Start, readResultBuffer.End);
}
}
}
When sending messages from a TCP client to the server application, the code works fine. The line await pipe.Output.WriteAsync(segment); acts as an echo for now.
Some questions come up:
1. What response should I send back to the client so that it does not run into a timeout?
2. When should I send back the response? When readResult.IsCompleted returns true?
3. How should I change the code to fetch the whole message sent by the client? Should I store each messageSegment in a List<string> and join it to a single string when readResult.IsCompleted returns true?
1. That is entirely protocol dependent; in many cases, you're fine to do nothing; in others, there will be specific "ping"/"pong" frames to send if you just want to say "I'm still here".
2. The "when" is also entirely protocol dependent; waiting for readResult.IsCompleted means that you're waiting for the inbound socket to be marked as closed, which means you won't send anything until the client closes their outbound socket. For single-shot protocols, that might be fine, but in most cases you'll want to look for a single inbound frame, and reply to that frame (and repeat); a sketch of that is shown after the one-shot example below.
3. It sounds like you might indeed be writing a one-shot channel, i.e. the client only sends one thing to the server, and after that the server only sends one thing to the client; in that case, you do something like:
while (true)
{
    var readResult = await pipeReader.ReadAsync();
    if (readResult.IsCompleted)
    {
        // TODO: not shown; process readResult.Buffer
        // tell the pipe that we consumed everything, and exit
        pipeReader.AdvanceTo(readResult.Buffer.End, readResult.Buffer.End);
        break;
    }
    else
    {
        // wait for the client to close their outbound; tell
        // the pipe that we couldn't consume anything
        pipeReader.AdvanceTo(readResult.Buffer.Start, readResult.Buffer.End);
    }
}
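If instead you need to reply per inbound frame (the case described in point 2 above), a minimal sketch, assuming newline-delimited messages and reusing the connection's Transport from the question (untested), might look like this:

PipeReader input = connection.Transport.Input;
PipeWriter output = connection.Transport.Output;
while (true)
{
    ReadResult result = await input.ReadAsync();
    ReadOnlySequence<byte> buffer = result.Buffer;
    SequencePosition? eol;
    while ((eol = buffer.PositionOf((byte)'\n')) != null)
    {
        // one complete frame, excluding the '\n'
        ReadOnlySequence<byte> frame = buffer.Slice(0, eol.Value);
        // reply to this frame (here: a plain echo, frame + '\n')
        foreach (ReadOnlyMemory<byte> segment in frame)
        {
            await output.WriteAsync(segment);
        }
        await output.WriteAsync(new byte[] { (byte)'\n' });
        // skip past the frame and its delimiter
        buffer = buffer.Slice(buffer.GetPosition(1, eol.Value));
    }
    // consumed: everything up to the last complete frame; examined: everything
    input.AdvanceTo(buffer.Start, buffer.End);
    if (result.IsCompleted)
    {
        break;
    }
}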
As for:
Should I store each messageSegment in a List<string> and join it to a single string when readResult.IsCompleted returns true?
The first thing to consider here is that it is not necessarily the case that each buffer segment contains an exact number of characters. Since you are using UTF-8, which is a multi-byte encoding, a segment might contain fractions of characters at the start and end, so: decoding it is a bit more involved than that.
Because of this, it is common to check IsSingleSegment on the buffer; if this is true, you can just use simple code:
if (buffer.IsSingleSegment)
{
string message = Encoding.UTF8.GetString(buffer.FirstSpan);
DoSomethingWith(message);
}
else
{
// ... more complex
}
The discontiguous buffer case is much harder; basically, you have two choices here:
1. linearize the segments into a contiguous buffer, probably leasing an oversized buffer from ArrayPool<byte>.Shared, and use UTF8.GetString on the correct portion of the leased buffer
2. use the GetDecoder() API on the encoding, and use that to populate a new string, which on older frameworks means overwriting a newly allocated string, or on newer frameworks means using the string.Create API
Frankly, "1" is much simpler. For example (untested):
public static string GetString(in this ReadOnlySequence<byte> payload,
Encoding encoding = null)
{
encoding ??= Encoding.UTF8;
return payload.IsSingleSegment ? encoding.GetString(payload.FirstSpan)
: GetStringSlow(payload, encoding);
static string GetStringSlow(in ReadOnlySequence<byte> payload, Encoding encoding)
{
// linearize
int length = checked((int)payload.Length);
var oversized = ArrayPool<byte>.Shared.Rent(length);
try
{
payload.CopyTo(oversized);
return encoding.GetString(oversized, 0, length);
}
finally
{
ArrayPool<byte>.Shared.Return(oversized);
}
}
}
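With that helper in place, the read loop could, for example, turn a complete buffer into text like this (a hypothetical usage; it assumes the extension lives in an imported static class):

// once a full message has been buffered
string message = readResult.Buffer.GetString();               // defaults to UTF-8
string ascii = readResult.Buffer.GetString(Encoding.ASCII);   // or pick an encoding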
While setting up a TCP server-client connection, I realized that the server's receive function hangs if the client does not send a '\n', but the client does not block if the server doesn't send one. I tried searching for an explanation without finding a proper answer, so I came here to ask for your help.
I am using the same function to exchange data for both server and client, but I don't know why it works for one and doesn't for the other...
Here is my function in C#:
public bool sendToClient(int i, string msg)
{
try
{
(clientSockets.ElementAt(i)).mSocket.Send(Encoding.ASCII.GetBytes(msg));
}
catch(Exception e)
{
Console.WriteLine(e.Data.ToString());
return false;
}
return true;
}
private string getMessageFromConnection(Socket s)
{
byte[] buff;
string msg = "";
int k;
do
{
buff = new byte[100];
k = s.Receive(buff, 100, SocketFlags.None);
msg += Encoding.ASCII.GetString(buff, 0, k);
} while (k >= 100);
return msg;
}
The sockets are plain SOCK_STREAM sockets, and clientSockets is a list of Client objects, each holding that client's info, including its socket.
I understand that one solution would be to detect a particular character to end the message, but I would like to know the reason behind it because I also had this issue using C.
Thanks in advance.
Your while loop continues only as long as you're reading exactly 100 bytes, and it seems that you intend to use that to detect the end of a message.
This will fail if the message is exactly 100 bytes, or any multiple of 100 bytes (in which case it will append a subsequent message to it).
But even worse: there is no guarantee that the socket will return 100 bytes, even if more data is still on its way. Receive does not wait until the underlying buffer has reached 100 bytes; it returns whatever it has available at that point.
You're going to have to either include a header that indicates the message length, or have a terminator character that indicates the end of the message.
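For example, a length-prefixed variant of the receive function might look roughly like this (a sketch only; it assumes the other side sends a 4-byte little-endian length header, which is not part of the original code):

// Hypothetical sketch: read a 4-byte length header, then exactly that many bytes.
private static byte[] ReceiveLengthPrefixed(Socket s)
{
    byte[] header = ReceiveExactly(s, 4);
    int length = BitConverter.ToInt32(header, 0);
    return ReceiveExactly(s, length);
}

private static byte[] ReceiveExactly(Socket s, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Receive may return fewer bytes than requested; loop until we have them all
        int read = s.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed early
        offset += read;
    }
    return buffer;
}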
I have noticed the new System.IO.Pipelines and am trying to port existing, stream-based code over to it. The problems with streams are well understood, but at the same time the Stream API comes with a rich ecosystem of related classes.
The example provided here is a small TCP echo server.
https://blogs.msdn.microsoft.com/dotnet/2018/07/09/system-io-pipelines-high-performance-io-in-net/
A snippet of the code is attached here:
private static async Task ProcessLinesAsync(Socket socket)
{
Console.WriteLine($"[{socket.RemoteEndPoint}]: connected");
var pipe = new Pipe();
Task writing = FillPipeAsync(socket, pipe.Writer);
Task reading = ReadPipeAsync(socket, pipe.Reader);
await Task.WhenAll(reading, writing);
Console.WriteLine($"[{socket.RemoteEndPoint}]: disconnected");
}
private static async Task FillPipeAsync(Socket socket, PipeWriter writer)
{
const int minimumBufferSize = 512;
while (true)
{
try
{
// Request a minimum of 512 bytes from the PipeWriter
Memory<byte> memory = writer.GetMemory(minimumBufferSize);
int bytesRead = await socket.ReceiveAsync(memory, SocketFlags.None);
if (bytesRead == 0)
{
break;
}
// Tell the PipeWriter how much was read
writer.Advance(bytesRead);
}
catch
{
break;
}
// Make the data available to the PipeReader
FlushResult result = await writer.FlushAsync();
if (result.IsCompleted)
{
break;
}
}
// Signal to the reader that we're done writing
writer.Complete();
}
private static async Task ReadPipeAsync(Socket socket, PipeReader reader)
{
while (true)
{
ReadResult result = await reader.ReadAsync();
ReadOnlySequence<byte> buffer = result.Buffer;
SequencePosition? position = null;
do
{
// Find the EOL
position = buffer.PositionOf((byte)'\n');
if (position != null)
{
var line = buffer.Slice(0, position.Value);
ProcessLine(socket, line);
// This is equivalent to position + 1
var next = buffer.GetPosition(1, position.Value);
// Skip what we've already processed including \n
buffer = buffer.Slice(next);
}
}
while (position != null);
// We sliced the buffer until no more data could be processed
// Tell the PipeReader how much we consumed and how much we left to process
reader.AdvanceTo(buffer.Start, buffer.End);
if (result.IsCompleted)
{
break;
}
}
reader.Complete();
}
private static void ProcessLine(Socket socket, in ReadOnlySequence<byte> buffer)
{
if (_echo)
{
Console.Write($"[{socket.RemoteEndPoint}]: ");
foreach (var segment in buffer)
{
Console.Write(Encoding.UTF8.GetString(segment.Span));
}
Console.WriteLine();
}
}
When using streams, you could easily add SSL/TLS to your code just by wrapping it in SslStream. How is this intended to be solved with Pipelines?
Named pipes are a network protocol, just as HTTP, FTP, and SMTP are. Let's look at the .NET Framework for some quick examples:
SSL is leveraged by HTTP connections automatically, depending on the base URI. If the URI begins with "HTTPS:", SSL is used.
SSL is leveraged by FTP connections manually, by setting the
EnableSsl property to true prior to calling GetResponse().
SSL is leveraged by SMTP in the same way as FTP.
But what if we are using a different network protocol, such as pipes? Right off the bat we know there is nothing similar to an "HTTPS" prefix. Furthermore, we can read the documentation for System.IO.Pipelines and see that there is no "EnableSsl" property. However, in both .NET Framework and .NET Core, the SslStream class is available. This class allows you to build an SslStream out of almost any available Stream.
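For reference, wrapping the stream of an ordinary TCP connection looks roughly like this (a server-side sketch; the serverCertificate parameter is an assumption, not something from the question):

// Sketch: wrap a TcpClient's NetworkStream in an SslStream on the server side.
// "serverCertificate" is a hypothetical X509Certificate2 that you load yourself.
static async Task<SslStream> WrapWithTlsAsync(TcpClient client, X509Certificate2 serverCertificate)
{
    var ssl = new SslStream(client.GetStream(), leaveInnerStreamOpen: false);
    await ssl.AuthenticateAsServerAsync(serverCertificate);
    return ssl; // reads/writes on this stream are now encrypted
}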
Also available in both .NET Framework and .NET Core is the System.IO.Pipes Namespace. The classes available in the Pipes namespace are pretty helpful.
AnonymousPipeClientStream
AnonymousPipeServerStream
NamedPipeClientStream
NamedPipeServerStream
PipeStream
All of these classes return some kind of object which inherits from Stream, and can thus be used in the constructor for a SslStream.
How does this relate back to the System.IO.Pipelines Namespace? Well... it doesn't. None of the Classes, Structs, or Interfaces defined in the System.IO.Pipelines namespace inherit from Stream. So we cannot use the SslStream class directly.
Instead, we have access to PipeReaders and PipeWriters. Sometimes we only have one of these available to us, but lets consider a bi-directional pipe so that we have access to both at the same time.
The System.IO.Pipelines namespace helpfully provides an IDuplexPipe interface. If we want to wrap the PipeReader and PipeWriter in an SSL stream, we will need to define a new type that implements IDuplexPipe.
In this new type:
We will define an SslStream.
We will use generic pipes as input and output buffers.
The PipeReader will use the reader of the input buffer. We will use this input buffer to get data from the SSL stream.
The PipeWriter will use the writer of the output buffer. We will use this output buffer to send data to the SSL stream.
Here is an example in pseudocode:
class SslStreamDuplexPipe : IDuplexPipe
{
    SslStream sslStream;
    Pipe inputBuffer;   // data read from the SslStream, destined for our consumer
    Pipe outputBuffer;  // data written by our producer, destined for the SslStream

    public PipeReader Input => inputBuffer.Reader;
    public PipeWriter Output => outputBuffer.Writer;

    async Task ReadDataFromSslStreamAsync()
    {
        while (true)
        {
            // copy whatever the SslStream gives us into the input pipe
            Memory<byte> memory = inputBuffer.Writer.GetMemory(2048);
            int bytes = await sslStream.ReadAsync(memory);
            if (bytes == 0) break;
            inputBuffer.Writer.Advance(bytes);
            await inputBuffer.Writer.FlushAsync();
        }
    }

    // ...and the reverse loop to copy outputBuffer.Reader into the SslStream
}
As you can see, we are still using the SslStream class from the System.Net.Security namespace; it just took us a few more steps.
Does this mean that you are basically still using streams? Yep! But, once you have fully implemented your SslStreamDuplexPipe class, you get to work with only pipes. No need to wrap an SslStream around everything.
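As an aside, later versions of System.IO.Pipelines (the 3.0-era packages and onward) ship Stream adapters that do most of this for you; here is a sketch assuming those APIs are available to you:

// Sketch, assuming a System.IO.Pipelines version that has PipeReader.Create/PipeWriter.Create.
using System.IO.Pipelines;
using System.Net.Security;

sealed class SslDuplexPipe : IDuplexPipe
{
    public SslDuplexPipe(SslStream ssl)
    {
        Input = PipeReader.Create(ssl);   // pulls decrypted bytes out of the stream
        Output = PipeWriter.Create(ssl);  // pushes bytes into the stream, which encrypts them
    }

    public PipeReader Input { get; }
    public PipeWriter Output { get; }
}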
Marc Gravell wrote a much, much more detailed explanation of this. The first of the 3 parts can be found here: https://blog.marcgravell.com/2018/07/pipe-dreams-part-1.html
Additionally, you can read about the various .NET classes mentioned:
https://learn.microsoft.com/en-us/dotnet/api/system.io.pipelines.pipe?view=dotnet-plat-ext-2.1
https://learn.microsoft.com/en-us/dotnet/framework/network-programming/using-secure-sockets-layer
https://learn.microsoft.com/en-us/dotnet/api/system.net.security.sslstream?view=netcore-2.2
https://learn.microsoft.com/en-us/dotnet/api/system.io.pipes?view=netcore-2.2
I have a server that is going to transfer multiple files to a client over a single connection. The packet from server is in the following format:
unique_packet_id | file_content
I have an OnDataReceived function which I need to work like this:
public class TRACK_ID {
public string id;
public string unknown_identifier;
}
List<TRACK_ID> TRACKER = new List<TRACK_ID>();
public void OnDataReceived(IAsyncResult asyn)
{
try
{
log("OnDataReceived");
SocketPacket theSockId = (SocketPacket)asyn.AsyncState;
int iRx = theSockId.thisSocket.EndReceive(asyn);
// .. read the data into data_chunk
// if separator found, that means we got the first chunk with the id
if (data_chunk.Contains("|"))
{
// extract unique_packet_id from the data
// bind unique_packet_id to some kind of identifier? how!!??
TRACK_ID new_track = new TRACK_ID();
new_track.id = unique_packet_id;
new_track.unknown_identifier = X;
TRACKER.Add(new_track);
} else {
// no separator found - we're getting the rest of the data
// determine the unique_packet_id of the incoming data so we can distinguish data/files
string current_packet_id = "";
for(int i=0; i<TRACKER.Count; i++){
if(TRACKER[i].unknown_identifier == X){
current_packet_id = TRACKER[i].id; // we found our packet id!
break;
}
}
// now we got our packet_id and know where to store the buffer
}
WaitForData..
}
}
I need a variable X that will allow me to track where to store each incoming buffer
If I closed connection for each file, I could bind unique_packet_id to socket_id (socket_id would be X), but since I'm using the same connection, socket_id always stays the same so I have to use something else for this X variable.
The only other solution I can think of is sending the unique_packet_id in each chunk of data, but that does not seem like the best way to do it: I would have to split the file buffer into chunks and append the id to each chunk. Are there any other ways to accomplish this? Thanks!
You didn't say if you're using a stream socket or a datagram socket.
If you're using a datagram socket (e.g. you're using UDP/IP) then you will always receive a whole packet all at once, so you can identify the data because it goes with the unique_packet_id that was found before the | at the beginning of the current packet.
If you're using a stream socket (e.g. you're using TCP/IP) then I think you have a problem. Your packet format isn't delimited or escaped, so how will you know where one packet ends and the next one begins?
If you are using a stream socket, you need to use, for example, a packet format like this:
unique packet ID (say, in ASCII, terminated with CRLF, or whatever you choose)
content length (same format)
packet payload
The receiver can find the end of the packet because it knows how many bytes will be part of the payload.
You will also need to be prepared for the case where you get one of your packets in small pieces. For example, your callback function might be called once with part of the unique packet ID, called again with the rest of the header and part of the payload, and again with the rest of the payload and the complete following packet tacked on to the end. Or you may get three whole packets and part of a fourth in a single call to your callback function.
The other possible solution you mention, that of sending the unique_packet_id in each chunk of data, is not possible, because the sender doesn't know how the data will be chunked up when it is delivered to the receiver.
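To make that concrete, here is a rough sketch of how the receiving side might accumulate bytes and pull complete frames out of a stream socket, assuming the header format suggested above (an ASCII packet id line, an ASCII length line, then the payload); none of these names come from the original code:

// Hypothetical framing parser for a stream socket.
// Frame layout assumed: "<id>\r\n<length>\r\n<payload bytes>"
using System.Collections.Generic;
using System.Text;

class FrameParser
{
    private readonly List<byte> _pending = new List<byte>();

    // Feed every received chunk in here; complete frames come out.
    public IEnumerable<(string Id, byte[] Payload)> Feed(byte[] chunk, int count)
    {
        for (int i = 0; i < count; i++) _pending.Add(chunk[i]);
        while (true)
        {
            int firstEol = IndexOfCrLf(_pending, 0);
            if (firstEol < 0) yield break;                           // id line not complete yet
            int secondEol = IndexOfCrLf(_pending, firstEol + 2);
            if (secondEol < 0) yield break;                          // length line not complete yet
            string id = Encoding.ASCII.GetString(_pending.GetRange(0, firstEol).ToArray());
            int length = int.Parse(Encoding.ASCII.GetString(
                _pending.GetRange(firstEol + 2, secondEol - firstEol - 2).ToArray()));
            int payloadStart = secondEol + 2;
            if (_pending.Count < payloadStart + length) yield break; // payload not complete yet
            byte[] payload = _pending.GetRange(payloadStart, length).ToArray();
            _pending.RemoveRange(0, payloadStart + length);
            yield return (id, payload);
        }
    }

    private static int IndexOfCrLf(List<byte> buffer, int start)
    {
        for (int i = start; i + 1 < buffer.Count; i++)
            if (buffer[i] == (byte)'\r' && buffer[i + 1] == (byte)'\n') return i;
        return -1;
    }
}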
Hi guys, I currently have an issue when using a Socket to send data.
I am getting a very strange issue when a client sends to the server, for example, "HEART" as a heartbeat: after the first send, the server starts receiving a whole lot of \0 characters, not always the same amount, before the HEART. I am using queues to queue up sends so that on slow connections it waits for the current send to be done before it sends the next, but for tiny lengths like that I'm a bit confused.
public void Send(string data)
{
if (Connected)
{
SendQueue.Enqueue(data);
if (t.ThreadState == ThreadState.Stopped)
{
t = new Thread(new ThreadStart(SendData));
t.Start();
}
else if (t.ThreadState == ThreadState.Unstarted)
t.Start();
}
}
and the SendData function
private void SendData()
{
if (sending)
return;
sending = true;
while (SendQueue.Count > 0)
{
if (ClientSocket.Connected)
{
byte[] data = Networking.StringToByte((string)SendQueue.Dequeue());
ClientSocket.Send(data);
}
}
sending = false;
}
I don't think it's the sending function, because I've debugged it and the byte array always holds the correct info.
The receiving end is even simpler.
public string Receive()
{
string msg = "";
if (Connected)
{
byte[] data = new byte[1024];
while (ClientSocket.Available > 0)
{
ClientSocket.Receive(data);
msg += Networking.ByteToString(data).Trim();
}
}
return msg;
}
If anyone could point out where I'm going wrong, or whether I've gone about this the entirely wrong way, that would be great.
Thanks, guys.
I will remind people that it's seemingly random lengths of \0 every 2 seconds (in this example, for the HEART heartbeat message).
This piece of code can't be correct:
byte[] data = new byte[1024];
while (ClientSocket.Available > 0)
{
ClientSocket.Receive(data);
msg += Networking.ByteToString(data).Trim();
}
It seems you do not take into account how much data you actually receive: you have to look at the return value of ClientSocket.Receive. I don't know what your Networking.ByteToString does, but it cannot know how much data you actually received, so it will probably convert the entire buffer, all 1024 bytes, and that's likely where all your zeroes come from. It could be that you're doing something similar on the sending side.
You also might need to keep in mind that TCP is stream oriented, not packet oriented. If you do 1 Send call, that can take several Receive calls to read, or 1 Receive call might read the data from many Send calls.
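A minimal fix that keeps the structure of the original Receive method might look like this (a sketch; it assumes Networking.ByteToString just wraps Encoding.GetString, so the encoding call is made directly here):

public string Receive()
{
    var msg = new StringBuilder();
    if (Connected)
    {
        byte[] data = new byte[1024];
        while (ClientSocket.Available > 0)
        {
            // Receive reports how many bytes it actually wrote into the buffer;
            // decode only that many bytes instead of the full 1024-byte array.
            int read = ClientSocket.Receive(data);
            msg.Append(Encoding.ASCII.GetString(data, 0, read));
        }
    }
    return msg.ToString();
}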
I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple:
The proxy server and clients both use the same message format; in the code I use a ProxyMessage class to represent both requests from clients and responses generated by the server:
public class ProxyMessage
{
public int Length;  // message length (not including the length bytes themselves)
public string Body; // an XML string containing a request/response
// writes this message instance in the proper network format to stream
// (helper for response messages)
public void WriteToStream(Stream stream) { ... }
}
The messages are as simple as could be: the length of the body + the message body.
I have a separate ProxyClient class that represents a connection to a client. It handles all the interaction between the proxy and a single client.
What I'm wondering is: are there any design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in the processing of the current message. In my current code, I do all of this work in my callback function for TcpClient.BeginRead, and manage the state of the buffer and the current message processing state with the help of a few instance variables.
The code for my callback function that I'm passing to BeginRead is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?).
private enum BufferStates
{
GetMessageLength,
GetMessageBody
}
// The read buffer. Initially 4 bytes because we are initially
// waiting to receive the message length (a 32-bit int) from the client
// on first connecting. By constraining the buffer length to exactly 4 bytes,
// we make the buffer management a bit simpler, because
// we don't have to worry about cases where the buffer might contain
// the message length plus a few bytes of the message body.
// Additional bytes will simply be buffered by the OS until we request them.
byte[] _buffer = new byte[4];
// A count of how many bytes read so far in a particular BufferState.
int _totalBytesRead = 0;
// The state of our buffer processing. Initially, we want
// to read in the message length, as it's the first thing
// a client will send
BufferStates _bufferState = BufferStates.GetMessageLength;
// ...ADDITIONAL CODE OMITTED FOR BREVITY...
// This is called every time we receive data from
// the client.
private void ReadCallback(IAsyncResult ar)
{
try
{
int bytesRead = _tcpClient.GetStream().EndRead(ar);
if (bytesRead == 0)
{
// No more data/socket was closed.
this.Dispose();
return;
}
// The state passed to BeginRead is used to hold a ProxyMessage
// instance that we use to build to up the message
// as it arrives.
ProxyMessage message = (ProxyMessage)ar.AsyncState;
if(message == null)
message = new ProxyMessage();
switch (_bufferState)
{
case BufferStates.GetMessageLength:
_totalBytesRead += bytesRead;
// if we have the message length (a 32-bit int)
// read it in from the buffer, grow the buffer
// to fit the incoming message, and change
// state so that the next read will start appending
// bytes to the message body
if (_totalBytesRead == 4)
{
int length = BitConverter.ToInt32(_buffer, 0);
message.Length = length;
_totalBytesRead = 0;
_buffer = new byte[message.Length];
_bufferState = BufferStates.GetMessageBody;
}
break;
case BufferStates.GetMessageBody:
string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead);
_totalBytesRead += bytesRead;
message.Body += bodySegment;
if (_totalBytesRead >= message.Length)
{
// Got a complete message.
// Notify anyone interested.
// Pass a response ProxyMessage object to
// with the event so that receivers of OnReceiveMessage
// can send a response back to the client after processing
// the request.
ProxyMessage response = new ProxyMessage();
OnReceiveMessage(this, new ProxyMessageEventArgs(message, response));
// Send the response to the client
response.WriteToStream(_tcpClient.GetStream());
// Re-initialize our state so that we're
// ready to receive additional requests...
message = new ProxyMessage();
_totalBytesRead = 0;
_buffer = new byte[4]; //message length is 32-bit int (4 bytes)
_bufferState = BufferStates.GetMessageLength;
}
break;
}
// Wait for more data...
// continue filling the buffer from where the previous read left off
_tcpClient.GetStream().BeginRead(_buffer, _totalBytesRead, _buffer.Length - _totalBytesRead, this.ReadCallback, message);
}
catch
{
// do nothing
}
}
So far, my only real thought is to extract the buffer-related stuff into a separate MessageBuffer class and simply have my read callback append new bytes to it as they arrive. The MessageBuffer would then worry about things like the current BufferState and fire an event when it received a complete message, which the ProxyClient could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example).
We create a wrapper around Stream (the base class of NetworkStream, which is what TcpClient.GetStream() gives you, or whatever). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes) we check if we have a full message (4 bytes + message body length). When we do, we raise a MessageReceived event with the message body, and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations.
public class MessageStream : IMessageStream, IDisposable
{
public MessageStream(Stream stream)
{
if(stream == null)
throw new ArgumentNullException("stream", "Stream must not be null");
if(!stream.CanWrite || !stream.CanRead)
throw new ArgumentException("Stream must be readable and writable", "stream");
this.Stream = stream;
this.readBuffer = new byte[512];
messageBuffer = new List<byte>();
stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
// These belong to the ReadCallback thread only.
private byte[] readBuffer;
private List<byte> messageBuffer;
private volatile bool disposed; // set by Dispose(); stops the read loop
private void ReadCallback(IAsyncResult result)
{
int bytesRead = Stream.EndRead(result);
messageBuffer.AddRange(readBuffer.Take(bytesRead));
while(messageBuffer.Count >= 4)
{
int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32
// Keep buffering until we get a full message.
if(messageBuffer.Count < length + 4)
break;
// Skip/Take are non-destructive, so materialize the body first...
OnMessageReceived(new MessageEventArgs(messageBuffer.Skip(4).Take(length).ToArray()));
// ...then remove the header and body from the buffer.
messageBuffer.RemoveRange(0, length + 4);
}
}
// FIXME below is kinda hacky (I don't know the proper way of doing things...)
// Don't bother reading again. We don't have stream access.
if(disposed)
return;
try
{
Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
catch(ObjectDisposedException)
{
// DO NOTHING
// Ends read loop.
}
}
public Stream Stream
{
get;
private set;
}
public event EventHandler<MessageEventArgs> MessageReceived;
protected virtual void OnMessageReceived(MessageEventArgs e)
{
var messageReceived = MessageReceived;
if(messageReceived != null)
messageReceived(this, e);
}
public virtual void SendMessage(Message message)
{
// Have fun ...
}
// Dispose stuff here
}
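Usage is then along these lines (a hypothetical example; listener, HandleMessage, and the exact shape of MessageEventArgs are assumptions, since they are not defined above):

// wrap the stream of an accepted TcpClient
TcpClient client = listener.AcceptTcpClient();
var messages = new MessageStream(client.GetStream());
messages.MessageReceived += (sender, e) =>
{
    // e carries one complete, length-prefixed message body
    HandleMessage(e);
};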
I think the design you've used is fine; that's roughly how I would do it, and how I have done the same sort of thing myself. I don't think you'd gain much by refactoring into additional classes/structs, and from what I've seen you'd actually make the solution more complex by doing so.
The only comment I'd have is whether the two reads, where the first is always the message length and the second is always the body, are robust enough. I'm always wary of approaches like that: if they somehow get out of sync due to an unforeseen circumstance (such as the other end sending the wrong length), it's very difficult to recover. Instead I'd do a single read with a big buffer, so that I always get all the available data from the network, and then inspect the buffer to extract complete messages. That way, if things do go wrong, the current buffer can just be thrown away to get things back to a clean state, and only the current messages are lost rather than the whole service stopping.
Actually, at the moment you would have a problem if your message body was big and arrived in two separate receives, and the next message in line sent its length at the same time as the second half of the previous body. If that happened, the next message's length would end up appended to the body of the previous message, and you'd be in the situation described in the previous paragraph.
You can use yield return to automate the generation of a state machine for asynchronous callbacks. Jeffrey Richter promotes this technique through his AsyncEnumerator class, and I've played around with the idea here.
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail here.