I am trying to send various bits of PC information, such as free HDD space and total RAM, to a Windows service over TCP. I have the following code, which basically creates a string of information delimited by |, ready for processing within the Windows service's TCP server so it can be put into a SQL table.
Is it best to do this as I have done or is there a better way?
public static void Main(string[] args)
{
Program stc = new Program(clientType.TCP);
stc.tcpClient(serverAddress, Environment.MachineName.ToString() + "|" + FormatBytes(GetTotalFreeSpace("C:\\")).ToString());
Console.WriteLine("The TCP server is disconnected.");
}
public void tcpClient(String serverName, String whatEver)
{
try
{
//Create an instance of TcpClient.
TcpClient tcpClient = new TcpClient(serverName, tcpPort);
//Create a NetworkStream for this tcpClient instance.
//This is only required for TCP stream.
NetworkStream tcpStream = tcpClient.GetStream();
if (tcpStream.CanWrite)
{
Byte[] inputToBeSent = System.Text.Encoding.ASCII.GetBytes(whatEver);
tcpStream.Write(inputToBeSent, 0, inputToBeSent.Length);
tcpStream.Flush();
}
while (tcpStream.CanRead && !DONE)
{
//We need the DONE condition here because there is possibility that
//the stream is ready to be read while there is nothing to be read.
if (tcpStream.DataAvailable)
{
Byte[] received = new Byte[512];
int nBytesReceived = tcpStream.Read(received, 0, received.Length);
// Only decode the bytes that were actually read, not the whole buffer.
String dataReceived = System.Text.Encoding.ASCII.GetString(received, 0, nBytesReceived);
Console.WriteLine(dataReceived);
DONE = true;
}
}
}
catch (Exception e)
{
Console.WriteLine("An Exception has occurred.");
Console.WriteLine(e.ToString());
}
}
Thanks
Because TCP is stream-based, it is important to have some indicator in the message to signal the other end when it has read the complete message. There are two traditional ways of doing this. First, you could have some special byte pattern at the end of each message. When the other end reads the data, it knows that it has read a full message when that special byte pattern is seen. Using this mechanism requires a byte pattern that is not likely to be included in the actual message. The other way is to include the length of the data at the beginning of the message. This is the way I do it. All my TCP messages include a short header structured like this:
class MsgHeader
{
short syncPattern; // e.g., 0xFDFD
short msgType; // useful if you have different messages
int msgLength; // length of the message minus header
}
When the other side starts receiving data, it reads the first 8 bytes, verifies the sync pattern (for the sake of sanity), and then uses the message length to read the actual message. Once the message has been read, it processes the message based on the message type.
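To make the receive side concrete, here is a sketch of reading one header-prefixed message. The `ReadExact` helper and the class name are illustrative, not an existing API; the offsets match the MsgHeader layout above (2-byte sync, 2-byte type, 4-byte length).

```csharp
using System;
using System.IO;

static class Framing
{
    // Read exactly count bytes, looping because a single Read call
    // may return fewer bytes than requested.
    public static byte[] ReadExact(Stream stream, int count)
    {
        var buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0)
                throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }

    // Read one message: the 8-byte header first, then the body.
    public static byte[] ReadMessage(Stream stream)
    {
        byte[] header = ReadExact(stream, 8);
        short syncPattern = BitConverter.ToInt16(header, 0);   // offset 0: sync
        if (syncPattern != unchecked((short)0xFDFD))
            throw new InvalidDataException("Bad sync pattern");
        // offset 2 holds msgType; dispatch on it once the body is read.
        int msgLength = BitConverter.ToInt32(header, 4);       // offset 4: length
        return ReadExact(stream, msgLength);
    }
}
```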
I'd suggest creating a class that gathers the system information you're interested in and is capable of encoding/decoding it, something like:
using System;
using System.Text;
class SystemInfo
{
private string machineName;
private int freeSpace;
private int processorCount;
// Private so no one can create it directly.
private SystemInfo()
{
}
// This is a static method now. Call SystemInfo.Encode() to use it.
public static byte[] Encode()
{
// Convert the machine name to an ASCII-based byte array.
var machineNameAsByteArray = Encoding.ASCII.GetBytes(Environment.MachineName);
// *THIS IS IMPORTANT* The easiest way to encode a string value so that it
// can be easily decoded is to prepend the length of the string. Otherwise,
// you're left guessing on the decode side about how long the string is.
// Calculate the message length. This does *NOT* include the size of
// the message length itself.
// NOTE: As new fields are added to the message, account for their
// respective size here and encode them below.
var messageLength = sizeof(int) + // length of machine name string
machineNameAsByteArray.Length + // the machine name value
sizeof(int) + // free space
sizeof(int); // processor count
// Calculate the required size of the byte array. This *DOES* include
// the size of the message length.
var byteArraySize = messageLength + // message itself
sizeof(int); // 4-byte message length field
// Allocate the byte array.
var bytes = new byte[byteArraySize];
// The offset is used to keep track of where the next field should be
// placed in the byte array.
var offset = 0;
// Encode the message length (a very simple header).
Buffer.BlockCopy(BitConverter.GetBytes(messageLength), 0, bytes, offset, sizeof(int));
// Increment offset by the number of bytes added to the byte array.
// Note that the increment is equal to the value of the last parameter
// in the preceding BlockCopy call.
offset += sizeof(int);
// Encode the length of machine name to make it easier to decode.
Buffer.BlockCopy(BitConverter.GetBytes(machineNameAsByteArray.Length), 0, bytes, offset, sizeof(int));
// Increment the offset by the number of bytes added.
offset += sizeof(int);
// Encode the machine name as an ASCII-based byte array.
Buffer.BlockCopy(machineNameAsByteArray, 0, bytes, offset, machineNameAsByteArray.Length);
// Increment the offset. See the pattern?
offset += machineNameAsByteArray.Length;
// Encode the free space.
Buffer.BlockCopy(BitConverter.GetBytes(GetTotalFreeSpace("C:\\")), 0, bytes, offset, sizeof(int));
// Increment the offset.
offset += sizeof(int);
// Encode the processor count.
Buffer.BlockCopy(BitConverter.GetBytes(Environment.ProcessorCount), 0, bytes, offset, sizeof(int));
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return bytes;
}
// Static method. Call it as SystemInfo.Decode(myReceivedByteArray);
public static SystemInfo Decode(byte[] message)
{
// When decoding, the presumption is that your socket code read the first
// four bytes from the socket to determine the length of the message. It
// then allocated a byte array of that size and read the message into that
// byte array. So the byte array passed into this function does *NOT* have
// the 4-byte message length field at the front of it. It makes no sense
// in this class anyway.
// Create the SystemInfo object to be populated and returned.
var si = new SystemInfo();
// Use the offset to navigate through the byte array.
var offset = 0;
// Extract the length of the machine name string since that is the first
// field encoded in the message.
var machineNameLength = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the machine name now that we know its length.
si.machineName = Encoding.ASCII.GetString(message, offset, machineNameLength);
// Increment the offset.
offset += machineNameLength;
// Extract the free space.
si.freeSpace = BitConverter.ToInt32(message, offset);
// Increment the offset.
offset += sizeof(int);
// Extract the processor count.
si.processorCount = BitConverter.ToInt32(message, offset);
// No reason to do this, but it completes the pattern.
offset += sizeof(int);
return si;
}
}
To encode the data, call the Encode method like this:
byte[] msg = SystemInfo.Encode();
To decode the data once it's been read from the socket, call the Decode method like this:
SystemInfo si = SystemInfo.Decode(msg);
As to your actual code, I'm not sure why you're reading from the socket after writing to it unless you're expecting a return value.
A few things to consider. Hope this helps.
EDIT
First of all, use the MsgHeader if you feel you need it. The example above simply uses the message length as the header, i.e., it does not include a sync pattern or a message type. Whether you need to use this additional information is up to you.
For every new field you add to the SystemInfo class, the overall size of the message will increase, obviously. Thus, the messageLength value needs to be adjusted accordingly. For example, if you add an int to include the number of processors, messageLength will increase by sizeof(int). Then, to add it to the byte array, simply use the same System.Buffer.BlockCopy call. I've adjusted the example to show this in a little more detail, including making the method static.
My code is designed to get data from a serial device and print its contents in a Windows Forms application. The IDE I use is Visual Studio 2019 Community.
The device sends a variable-size packet. First I have to decode the packet "header" to get crucial information for further processing, namely the first packet channel as well as the packet length.
Since the packet contains neither a line ending nor a fixed character at the end, the functions SerialPort.ReadTo() and SerialPort.ReadLine() are not useful. Therefore only SerialPort.Read(buf, offset, count) can be used.
Since sending rather large packets (512 bytes) takes time, I've implemented a function to calculate a desired wait time, defined as (1000 ms / baud rate * (8 * byte count)) + 100 ms.
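That formula can be sketched as follows. This is only an assumed shape for the calculation, since the actual getWaitTime implementation is not shown; the default baud rate is also an assumption.

```csharp
static class SerialTiming
{
    // Sketch of the wait-time formula described above:
    // (1000 ms / baud rate * (8 * byte count)) + 100 ms margin.
    // Note a real 8N1 serial frame is closer to 10 bit times per byte
    // (start and stop bits); the formula here uses 8, as in the question.
    public static int GetWaitTime(int byteCount, int baudRate = 115200)
    {
        return (int)(1000.0 / baudRate * (8 * byteCount)) + 100;
    }
}
```

At 115200 baud the transmission time for a 7-byte header is well under a millisecond, so the fixed 100 ms margin dominates the delay.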
While testing, I experienced delays much longer than the desired wait times, so I implemented time measurements for different parts of the function.
In regular cases (with the desired wait times) I expect a log to the console like this:
Load Header(+122ms) Load Data (+326ms) Transform (+3ms)
But it's only like this for a variable number of records, usually 10; after that, the execution times are much worse:
Load Header(+972ms) Load Data (+990ms) Transform (+2ms)
Here you can see the complete function:
private void decodeWriteResponse(int identifier, object sender, SerialDataReceivedEventArgs e)
{
/* MEASURE TIME NOW */
long start = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
var serialPort = (SerialPort)sender; //new serial port object
int delay = ComPort.getWaitTime(7); //Returns the wait time (1s/baudrate * bytecount *8) +100ms Additional
Task.Delay(delay).Wait(); //wait until the device has send all its data!
byte[] databytes = new byte[6]; //new buffer
try
{
serialPort.Read(databytes, 0, 6); //read the data
}
catch (Exception) { }
/* MEASURE TIME NOW */
long between_header = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
/* Read the Data from Port */
int rec_len = databytes[1] | databytes[2] << 8; //Extract number of channels
int start_chnl = databytes[3] | databytes[4] << 8; //Extract the first channel
delay = ComPort.getWaitTime(rec_len+7); //get wait time
Task.Delay(delay).Wait(); //wait until the device has send all its data!
byte[] buf = new byte[rec_len-3]; //new buffer
try
{
serialPort.Read(buf, 0, rec_len-3); //read the data
}
catch (Exception) {}
/* MEASURE TIME NOW */
long after_load = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
/* Now perform spectrum analysis */
decodeSpectrumData(buf, start_chnl, rec_len-4);
/*MEASURE TIME NOW */
long end = new DateTimeOffset(DateTime.UtcNow).ToUnixTimeMilliseconds();
Form.rtxtDataArea.AppendText("Load Header(+" + (between_header - start).ToString() + "ms) Load Data (+" + (after_load - between_header).ToString() + "ms) Transform (+" + (end - after_load) + "ms)\n");
/*Update the Write handler */
loadSpectrumHandler(1);
}
What could cause this issue?
I already tested this with "Debug" in Visual Studio and as a standalone "Release" build, but there is no difference.
Instead of trying to figure out how long a message will take to arrive at the port, why not just read the data in a loop until you have it all? For example, read the header and calculate the msg size. Then read that number of bytes. Ex:
// See if there are at least enough bytes for a header
if (serialPort.BytesToRead >= 6) {
byte[] databytes = new byte[6];
serialPort.Read(databytes, 0, 6);
// Parse the header - you have to create this function
int calculatedMsgSize = ValidateHeader(databytes);
byte [] msg = new byte[calculatedMsgSize];
int bytesRead = 0;
while (bytesRead < calculatedMsgSize) {
if (serialPort.BytesToRead > 0) {
bytesRead += serialPort.Read(msg, bytesRead,
Math.Min(calculatedMsgSize - bytesRead, serialPort.BytesToRead));
}
}
}
// You should now have a complete message
HandleMsg(msg);
}
I have socket code which is communicating through TCP/IP. The machine I am communicating with has data in its buffer. At present I am trying to get that buffer data using this code.
byte[] data = new byte[1024];
int recv = sock.Receive(data);
stringData = Encoding.ASCII.GetString(data, 0, recv);
But this code retrieves only 11 lines of data, whereas more data is there in the machine's buffer. Is this because I have used int recv = sock.Receive(data); and data is 1024 bytes?
If yes, how do I get the total buffer size and retrieve it into a string?
If you think you are missing some data, then you need to check recv and almost certainly: loop. Fortunately, ASCII is always single byte - in most other encodings you would also have to worry about receiving partial characters.
A common approach is basically:
int recv;
while((recv = sock.Receive(data)) > 0)
{
// process recv-many bytes
// ... stringData = Encoding.ASCII.GetString(data, 0, recv);
}
Keep in mind that there is no guarantee that stringData will be any particular entire unit of work; what you send is not always what you receive, and that could be a single character, 14 lines, or the second half of one word and the first half of another. You generally need to maintain your own back-buffer of received data until you have a complete logical frame to process.
Note, however, Receive always tries to return something (at least one byte), unless the inbound stream has closed - and will block to do so. If this is a problem, you may need to check the available buffer (sock.Available) to decide whether to do synchronous versus asynchronous receive (i.e. read synchronously while data is available, otherwise request an asynchronous read).
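The back-buffer idea above can be sketched as a small helper that is fed each received chunk and yields only complete lines. The class name and the '\n' delimiter are illustrative, not something the protocol here defines; substitute whatever logical-frame boundary your protocol uses.

```csharp
using System.Collections.Generic;
using System.Text;

// Sketch: feed this each chunk as it comes back from Receive; it holds
// partial data in a back-buffer and yields only complete lines.
class LineAssembler
{
    private readonly StringBuilder backBuffer = new StringBuilder();

    // Note: this is an iterator, so the chunk is consumed when the
    // result is enumerated.
    public IEnumerable<string> Feed(string chunk)
    {
        backBuffer.Append(chunk);
        int newline;
        while ((newline = backBuffer.ToString().IndexOf('\n')) >= 0)
        {
            string line = backBuffer.ToString(0, newline);
            backBuffer.Remove(0, newline + 1);
            yield return line;
        }
    }
}
```

Inside the Receive loop you would call `assembler.Feed(Encoding.ASCII.GetString(data, 0, recv))` and process each yielded line; half a line received now is simply held until the rest arrives.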
Try something along these lines:
StringBuilder sbContent=new StringBuilder();
byte[] data = new byte[1024];
int numBytes;
while ((numBytes = sock.Receive(data))>0)
{
sbContent.Append(Encoding.UTF8.GetString(data, 0, numBytes));
}
// use sbContent.ToString()
Socket tcpSocket = new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
Console.WriteLine(" ReceiveBufferSize {0}", tcpSocket.ReceiveBufferSize);
For the actual data you can use a loop like this:
int receiveBytes;
while ((receiveBytes = tcpSocket.Receive(data)) > 0)
{
// process receiveBytes bytes from data
}
I tried to understand the MSDN example for NetworkStream.EndRead(). There are some parts that I do not understand.
So here is the example (copied from MSDN):
// Example of EndRead, DataAvailable and BeginRead.
public static void myReadCallBack(IAsyncResult ar ){
NetworkStream myNetworkStream = (NetworkStream)ar.AsyncState;
byte[] myReadBuffer = new byte[1024];
String myCompleteMessage = "";
int numberOfBytesRead;
numberOfBytesRead = myNetworkStream.EndRead(ar);
myCompleteMessage =
String.Concat(myCompleteMessage, Encoding.ASCII.GetString(myReadBuffer, 0, numberOfBytesRead));
// message received may be larger than buffer size so loop through until you have it all.
while(myNetworkStream.DataAvailable){
myNetworkStream.BeginRead(myReadBuffer, 0, myReadBuffer.Length,
new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack),
myNetworkStream);
}
// Print out the received message to the console.
Console.WriteLine("You received the following message : " +
myCompleteMessage);
}
It uses BeginRead() and EndRead() to read asynchronously from the network stream.
The whole thing is invoked by calling
myNetworkStream.BeginRead(someBuffer, 0, someBuffer.Length, new AsyncCallback(NetworkStream_ASync_Send_Receive.myReadCallBack), myNetworkStream);
somewhere else (not displayed in the example).
What I think it should do is print the whole message received from the NetworkStream in a single WriteLine (the one at the end of the example). Notice that the string is called myCompleteMessage.
Now when I look at the implementation some problems arise for my understanding.
First of all: the example allocates a new method-local buffer myReadBuffer. Then EndRead() is called, which writes the received message into the buffer that BeginRead() was supplied with. This is NOT the myReadBuffer that was just allocated. How should the network stream know of it? So in the next line, numberOfBytesRead bytes from the empty buffer are appended to myCompleteMessage, which has the current value "". In the last line this message, consisting of a lot of '\0's, is printed with Console.WriteLine.
This doesn't make any sense to me.
The second thing I do not understand is the while-loop.
BeginRead is an asynchronous call. So no data is immediately read. So as I understand it, the while loop should run quite a while until some asynchronous call is actually executed and reads from the stream so that there is no data available any more. The documentation doesn't say that BeginRead immediately marks some part of the available data as being read, so I do not expect it to do so.
This example does not improve my understanding of those methods. Is this example wrong or is my understanding wrong (I expect the latter)? How does this example work?
I think the while loop around the BeginRead shouldn't be there. You don't want to execute BeginRead more than once before the EndRead is done. Also, the buffer needs to be specified outside the BeginRead, because you may need more than one read per packet/buffer.
There are some things you need to think about, like how long your messages/blocks are (fixed size), or whether you should prefix them with a length (variable size): <datalength><data><datalength><data>
Don't forget it is a Streaming connection, so multiple/partial messages/packets can be read in one read.
Pseudo example:
int bytesNeeded;
int bytesRead;
public void Start()
{
bytesNeeded = 40; // you need to know how many bytes you're expecting
bytesRead = 0;
BeginReading();
}
public void BeginReading()
{
myNetworkStream.BeginRead(
someBuffer, bytesRead, bytesNeeded - bytesRead,
new AsyncCallback(EndReading),
myNetworkStream);
}
public void EndReading(IAsyncResult ar)
{
int numberOfBytesRead = myNetworkStream.EndRead(ar);
if(numberOfBytesRead == 0)
{
// disconnected
return;
}
bytesRead += numberOfBytesRead;
if(bytesRead == bytesNeeded)
{
// Handle buffer
Start();
}
else
BeginReading();
}
I've a tcp based client-server application. I'm able to send and receive strings, but don't know how to send an array of bytes instead.
I'm using the following function to send a string from the client to the server:
static void Send(string msg)
{
try
{
StreamWriter writer = new StreamWriter(client.GetStream());
writer.WriteLine(msg);
writer.Flush();
}
catch
{
}
}
Communication example
Client sends a string:
Send("CONNECTED| 84.56.32.14")
Server receives a string:
void clientConnection_ReceivedEvent(Connection client, String Message)
{
string[] cut = Message.Split('|');
switch (cut[0])
{
case "CONNECTED":
Invoke(new _AddClient(AddClient), client, null);
break;
case "STATUS":
Invoke(new _Status(Status), client, cut[1]);
break;
}
}
I need some help to modify the functions above in order to send and receive an array of bytes in addition to strings. I want to make a call like this:
Send("CONNECTED | 15.21.21.32", myByteArray);
Just use Stream - no need for a writer here. Basic sending is simple:
stream.Write(data, 0, data.Length);
However, you probably need to think about "framing", i.e. how it knows where each sub-message starts and ends. With strings this is often special characters (maybe a new line) - but this is rarely possible in raw binary. A common approach is to precede the message with the number of bytes to follow, in a pre-defined way (maybe a network-byte-order fixed 4-byte unsigned integer, for example).
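On the sending side, that length-prefix convention can be sketched like this. The SendFramed name is illustrative, and the 4-byte network-byte-order prefix is just the pre-defined convention mentioned above, not something your existing protocol requires.

```csharp
using System;
using System.IO;
using System.Net;

static class Wire
{
    // Sketch: write the payload length as a 4-byte network-byte-order
    // (big-endian) integer, then the payload itself. The receiver reads
    // the 4-byte prefix first and then knows exactly how much to read.
    public static void SendFramed(Stream stream, byte[] payload)
    {
        byte[] lengthPrefix =
            BitConverter.GetBytes(IPAddress.HostToNetworkOrder(payload.Length));
        stream.Write(lengthPrefix, 0, lengthPrefix.Length);
        stream.Write(payload, 0, payload.Length);
    }
}
```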
Reading: again, use the Stream Read method, but understand that you always need to check the return value; just because you say "read at most 20 bytes" doesn't mean you get that many, even if more is coming - you could read 3, 3, 3, 11 bytes for example (unlikely, but you see what I mean). For example, to read exactly 20 bytes:
var buffer = new byte[...];
int count = 20, read, offset = 0;
while(count > 0 && (read = source.Read(buffer, offset, count)) > 0) {
offset += read;
count -= read;
}
if(count != 0) throw new EndOfStreamException();
Since you seem new to networking you might want to use WCF or another framework. I've just written an article about my own framework: http://blog.gauffin.org/2012/05/griffin-networking-a-somewhat-performant-networking-library-for-net
You need to use a header for your packets as Mark suggested, since TCP uses streams and not packets. i.e. there is not 1-1 relation between send and receive operations.
This is the same problem I'm having. I only code the client, and the server accepts byte arrays as proper data. The messages start with an ASCII STX character followed by a bunch of bytes of any values except the STX and ETX characters. The message ends with an ETX ASCII character. In C I could do this in my sleep, but I'm learning C# on the job. I don't understand why you would send bunches of double-byte Unicode characters when single-byte ASCII codes work just as well, wasting double the bandwidth.
I recently wrote a quick-and-dirty proof-of-concept proxy server in C# as part of an effort to get a Java web application to communicate with a legacy VB6 application residing on another server. It's ridiculously simple:
The proxy server and clients both use the same message format; in the code I use a ProxyMessage class to represent both requests from clients and responses generated by the server:
public class ProxyMessage
{
public int Length; // message length (not including the length bytes themselves)
public string Body; // an XML string containing a request/response
// Writes this message instance in the proper network format to stream
// (helper for response messages).
public void WriteToStream(Stream stream) { ... }
}
The messages are as simple as could be: the length of the body + the message body.
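Under that format, WriteToStream might look roughly like the following sketch. The real implementation isn't shown in the question, so the class name and the details here (a 4-byte little-endian length followed by an ASCII body) are assumptions.

```csharp
using System;
using System.IO;
using System.Text;

// Sketch of the "length of the body + the message body" format.
public class ProxyMessageSketch
{
    public string Body { get; set; }

    public void WriteToStream(Stream stream)
    {
        byte[] body = Encoding.ASCII.GetBytes(Body);
        // 4-byte length prefix (not counting the prefix itself)...
        stream.Write(BitConverter.GetBytes(body.Length), 0, 4);
        // ...then the body.
        stream.Write(body, 0, body.Length);
    }
}
```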
I have a separate ProxyClient class that represents a connection to a client. It handles all the interaction between the proxy and a single client.
What I'm wondering is: are there design patterns or best practices for simplifying the boilerplate code associated with asynchronous socket programming? For example, you need to take some care to manage the read buffer so that you don't accidentally lose bytes, and you need to keep track of how far along you are in the processing of the current message. In my current code, I do all of this work in my callback function for TcpClient.BeginRead, and manage the state of the buffer and the current message processing state with the help of a few instance variables.
The code for my callback function that I'm passing to BeginRead is below, along with the relevant instance variables for context. The code seems to work fine "as-is", but I'm wondering if it can be refactored a bit to make it clearer (or maybe it already is?).
private enum BufferStates
{
GetMessageLength,
GetMessageBody
}
// The read buffer. Initially 4 bytes because we are initially
// waiting to receive the message length (a 32-bit int) from the client
// on first connecting. By constraining the buffer length to exactly 4 bytes,
// we make the buffer management a bit simpler, because
// we don't have to worry about cases where the buffer might contain
// the message length plus a few bytes of the message body.
// Additional bytes will simply be buffered by the OS until we request them.
byte[] _buffer = new byte[4];
// A count of how many bytes read so far in a particular BufferState.
int _totalBytesRead = 0;
// The state of the our buffer processing. Initially, we want
// to read in the message length, as it's the first thing
// a client will send
BufferStates _bufferState = BufferStates.GetMessageLength;
// ...ADDITIONAL CODE OMITTED FOR BREVITY...
// This is called every time we receive data from
// the client.
private void ReadCallback(IAsyncResult ar)
{
try
{
int bytesRead = _tcpClient.GetStream().EndRead(ar);
if (bytesRead == 0)
{
// No more data/socket was closed.
this.Dispose();
return;
}
// The state passed to BeginRead is used to hold a ProxyMessage
// instance that we use to build to up the message
// as it arrives.
ProxyMessage message = (ProxyMessage)ar.AsyncState;
if(message == null)
message = new ProxyMessage();
switch (_bufferState)
{
case BufferStates.GetMessageLength:
_totalBytesRead += bytesRead;
// if we have the message length (a 32-bit int)
// read it in from the buffer, grow the buffer
// to fit the incoming message, and change
// state so that the next read will start appending
// bytes to the message body
if (_totalBytesRead == 4)
{
int length = BitConverter.ToInt32(_buffer, 0);
message.Length = length;
_totalBytesRead = 0;
_buffer = new byte[message.Length];
_bufferState = BufferStates.GetMessageBody;
}
break;
case BufferStates.GetMessageBody:
string bodySegment = Encoding.ASCII.GetString(_buffer, _totalBytesRead, bytesRead);
_totalBytesRead += bytesRead;
message.Body += bodySegment;
if (_totalBytesRead >= message.Length)
{
// Got a complete message.
// Notify anyone interested.
// Pass a response ProxyMessage object to
// with the event so that receivers of OnReceiveMessage
// can send a response back to the client after processing
// the request.
ProxyMessage response = new ProxyMessage();
OnReceiveMessage(this, new ProxyMessageEventArgs(message, response));
// Send the response to the client
response.WriteToStream(_tcpClient.GetStream());
// Re-initialize our state so that we're
// ready to receive additional requests...
message = new ProxyMessage();
_totalBytesRead = 0;
_buffer = new byte[4]; //message length is 32-bit int (4 bytes)
_bufferState = BufferStates.GetMessageLength;
}
break;
}
// Wait for more data...
_tcpClient.GetStream().BeginRead(_buffer, 0, _buffer.Length, this.ReadCallback, message);
}
catch
{
// do nothing
}
}
So far, my only real thought is to extract the buffer-related stuff into a separate MessageBuffer class and simply have my read callback append new bytes to it as they arrive. The MessageBuffer would then worry about things like the current BufferState and fire an event when it received a complete message, which the ProxyClient could then propagate further up to the main proxy server code, where the request can be processed.
I've had to overcome similar problems. Here's my solution (modified to fit your own example).
We create a wrapper around Stream (the base class of NetworkStream, which you obtain from a TcpClient or similar). It monitors reads. When some data is read, it is buffered. When we receive a length indicator (4 bytes) we check if we have a full message (4 bytes + message body length). When we do, we raise a MessageReceived event with the message body, and remove the message from the buffer. This technique automatically handles fragmented messages and multiple-messages-per-packet situations.
public class MessageStream : IMessageStream, IDisposable
{
public MessageStream(Stream stream)
{
if(stream == null)
throw new ArgumentNullException("stream", "Stream must not be null");
if(!stream.CanWrite || !stream.CanRead)
throw new ArgumentException("Stream must be readable and writable", "stream");
this.Stream = stream;
this.readBuffer = new byte[512];
messageBuffer = new List<byte>();
stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
// These belong to the ReadCallback thread only.
private byte[] readBuffer;
private List<byte> messageBuffer;
private void ReadCallback(IAsyncResult result)
{
int bytesRead = Stream.EndRead(result);
messageBuffer.AddRange(readBuffer.Take(bytesRead));
if(messageBuffer.Count >= 4)
{
int length = BitConverter.ToInt32(messageBuffer.Take(4).ToArray(), 0); // 4 bytes per int32
// Keep buffering until we get a full message.
if(messageBuffer.Count >= length + 4)
{
// Note: LINQ's Skip() does not mutate the list, so remove the
// consumed bytes explicitly.
messageBuffer.RemoveRange(0, 4);
OnMessageReceived(new MessageEventArgs(messageBuffer.Take(length)));
messageBuffer.RemoveRange(0, length);
}
}
// FIXME below is kinda hacky (I don't know the proper way of doing things...)
// Don't bother reading again. We don't have stream access.
if(disposed)
return;
try
{
Stream.BeginRead(readBuffer, 0, readBuffer.Length, new AsyncCallback(ReadCallback), null);
}
catch(ObjectDisposedException)
{
// DO NOTHING
// Ends read loop.
}
}
public Stream Stream
{
get;
private set;
}
public event EventHandler<MessageEventArgs> MessageReceived;
protected virtual void OnMessageReceived(MessageEventArgs e)
{
var messageReceived = MessageReceived;
if(messageReceived != null)
messageReceived(this, e);
}
public virtual void SendMessage(Message message)
{
// Have fun ...
}
// Dispose stuff here
}
I think the design you've used is fine; that's roughly how I would and have done the same sort of thing. I don't think you'd gain much by refactoring into additional classes/structs, and from what I've seen you'd actually make the solution more complex by doing so.
The only comment I'd have is as to whether the two reads, where the first is always the message length and the second always the body, is robust enough. I'm always wary of approaches like that, as if they somehow get out of sync due to an unforeseen circumstance (such as the other end sending the wrong length) it's very difficult to recover. Instead I'd do a single read with a big buffer so that I always get all the available data from the network and then inspect the buffer to extract out complete messages. That way if things do go wrong, the current buffer can just be thrown away to get things back to a clean state, and only the current messages are lost rather than stopping the whole service.
Actually, at the moment you would have a problem if your message body was big and arrived in two separate receives, and the next message in line sent its length at the same time as the second half of the previous body. If that happened, your message length would end up appended to the body of the previous message and you'd be in the situation described in the previous paragraph.
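The "single read into a big buffer, then extract complete messages" approach can be sketched as follows; the class and method names are illustrative, and it assumes the same 4-byte little-endian length prefix used in the question.

```csharp
using System;
using System.Collections.Generic;

// Sketch: accumulate raw received bytes and pull out every complete
// length-prefixed message, keeping any partial tail for the next call.
class MessageExtractor
{
    private readonly List<byte> buffer = new List<byte>();

    public List<byte[]> Feed(byte[] received, int count)
    {
        for (int i = 0; i < count; i++)
            buffer.Add(received[i]);

        var messages = new List<byte[]>();
        while (buffer.Count >= 4)
        {
            int length = BitConverter.ToInt32(buffer.ToArray(), 0);
            if (buffer.Count < 4 + length)
                break; // wait for the rest of the body
            messages.Add(buffer.GetRange(4, length).ToArray());
            buffer.RemoveRange(0, 4 + length);
        }
        return messages;
    }
}
```

The read callback then just calls `Feed(_buffer, bytesRead)` and processes whatever complete messages come back; a length arriving in the same receive as the tail of the previous body is handled naturally.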
You can use yield return to automate the generation of a state machine for asynchronous callbacks. Jeffrey Richter promotes this technique through his AsyncEnumerator class, and I've played around with the idea here.
There's nothing wrong with the way you've done it. For me, though, I like to separate the receiving of the data from the processing of it, which is what you seem to be thinking with your proposed MessageBuffer class. I have discussed that in detail here.