I'm creating an application which will send and receive vast amounts of data via UDP packets. I want to be able to scale the processing of these requests by adding or removing worker threads. However, when I add a second thread and start waiting for data, I get the following error.
“Only one usage of each socket address (protocol/network address/port) is normally permitted”
The Code:
static void Main(string[] args)
{
Console.WriteLine("Running...");
Thread threadA = new Thread(new ThreadStart(ProcessMessage));
Thread threadB = new Thread(new ThreadStart(ProcessMessage));
threadA.Start();
threadB.Start();
Console.WriteLine("Press any key.");
Console.ReadLine();
}
private static void ProcessMessage()
{
using (var udpClient = new UdpClient(11000))
{
var sender = new System.Net.IPEndPoint(IPAddress.Any, 11000);
var data = udpClient.Receive(ref sender);
// Do work
}
}
The error is very clear but then my question is: How do I correctly divide the work among multiple threads?
Thanks for the help and sorry for any spelling mistakes.
I'm not a native English speaker.
The comment from C.Gonzalez was very helpful, so I will put it here as the correct answer.
The usual way to handle high-volume UDP is to have one thread that does nothing but consume data as fast as it can, putting the content of each message into a queue. (Any moderately new hardware can receive data as fast as the network can send it.) One or more other threads empty the queue and process the data. If a response is necessary, it can be placed in an outgoing queue for another dedicated thread to send out (you can use the same socket from this dedicated thread). Again, any relatively new hardware can send and receive in full duplex at wire speed with ease.
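A minimal sketch of that layout, assuming a BlockingCollection<byte[]> as the queue; the class and method names here are mine, not from the answer:

using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;

static class UdpPipeline
{
    // Shared queue between the single receiver and the worker threads.
    static readonly BlockingCollection<byte[]> Queue = new BlockingCollection<byte[]>();

    static void Main()
    {
        var receiver = new Thread(Receive) { IsBackground = true };
        receiver.Start();

        for (int i = 0; i < 2; i++)                      // scale workers up or down here
            new Thread(Work) { IsBackground = true }.Start();

        Console.WriteLine("Press Enter to quit.");
        Console.ReadLine();
    }

    // The only thread that owns the socket; it just dequeues datagrams as fast as possible.
    static void Receive()
    {
        using (var udpClient = new UdpClient(11000))
        {
            var sender = new IPEndPoint(IPAddress.Any, 0);
            while (true)
                Queue.Add(udpClient.Receive(ref sender));
        }
    }

    // Workers never touch the socket; they only drain the queue.
    static void Work()
    {
        foreach (var datagram in Queue.GetConsumingEnumerable())
        {
            // Do work with the datagram...
        }
    }
}

Because only the receiving thread ever touches the socket, port 11000 is opened exactly once, which avoids the "only one usage of each socket address" error from the question.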
This is to a degree a "basics of TCP" question, yet I have not found a convincing answer elsewhere, and I believe I have a reasonable understanding of the TCP basics. I'm not sure whether combining several questions (or one question plus a request to confirm a couple of points) is against the rules; I hope not.
I am trying to write a C# TCP client that communicates with an existing app containing a TCP server (I don't have access to its code, so no WCF). I need to connect to it, send and receive as new information comes in or goes out, and ultimately disconnect. Using the following MSDN code as an example, which lists asynchronous Send and Receive methods (or just TcpClient), and ignoring the connect and disconnect as trivial, how can I best go about continuously checking for newly received packets while also sending when needed?
I initially used TcpClient and GetStream(), and the MSDN code still seems to require the loop-and-sleep approach described below (counter-intuitively), where I run the receive method in a loop in a separate thread with a Sleep(10) milliseconds, and Send in the main (or a third) thread as needed. This lets me send fine, and the receive method effectively polls at regular intervals for new packets. The received packets are then added to a queue.
Is this really the best solution? Shouldn't there be a DataAvailable event equivalent (or something I'm missing in the MSDN code) that lets us receive when, and only when, there is new data available?
As an afterthought, I noticed that the socket could be cut from the other side without the client becoming aware until the next failed send. To clarify, then: the client is obliged to send regular keepalives (receiving isn't sufficient, only sending) to determine whether the socket is still alive, and the frequency of the keepalive determines how soon I will know that the link is down. Is that correct? I tried Poll(), Socket.Connected, etc., only to discover why each just doesn't help.
Lastly, to confirm (I believe not, but good to make sure): in the above scenario of sending on demand and receiving if TcpClient.DataAvailable every ten seconds, can there be data loss when sending and receiving at the same time? If I try to send while I am receiving, will one fail, overwrite the other, or produce any other unwanted behaviour?
There's nothing wrong necessarily with grouping questions together, but it does make answering the question more challenging... :)
The MSDN article you linked shows how to do a one-and-done TCP communication, that is, one send and one receive. You'll also notice it uses the Socket class directly, whereas most people, including myself, will suggest using the TcpClient class instead. You can always get the underlying Socket via the Client property should you need to configure a certain socket option (e.g., via SetSocketOption()).
The other aspect about the example to note is that while it uses threads to execute the AsyncCallback delegates for both BeginSend() and BeginReceive(), it is essentially a single-threaded example because of how the ManualResetEvent objects are used. For repeated exchange between a client and server, this is not what you want.
Alright, so you want to use TcpClient. Connecting to the server (e.g., TcpListener) should be straightforward - use Connect() if you want a blocking operation or BeginConnect() if you want a non-blocking operation. Once the connection is established, use the GetStream() method to get the NetworkStream object to use for reading and writing. Use the Read()/Write() operations for blocking I/O and the BeginRead()/BeginWrite() operations for non-blocking I/O. Note that BeginRead() and BeginWrite() use the same AsyncCallback mechanism employed by the BeginReceive() and BeginSend() methods of the Socket class.
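For example, the blocking flavour of that sequence looks roughly like this (the host name, port and message are placeholders):

using System.Net.Sockets;
using System.Text;

internal static class TcpClientSketch
{
    internal static void ConnectAndExchange()
    {
        var client = new TcpClient();
        client.Connect("server.example.com", 5000);        // or BeginConnect() for non-blocking

        NetworkStream stream = client.GetStream();

        byte[] request = Encoding.ASCII.GetBytes("HELLO");
        stream.Write(request, 0, request.Length);           // blocking write

        byte[] buffer = new byte[4096];
        int read = stream.Read(buffer, 0, buffer.Length);    // blocks; returns 0 when the peer closes
    }
}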
One of the key things to note at this point is this little blurb in the MSDN documentation for NetworkStream:
Read and write operations can be performed simultaneously on an instance of the NetworkStream class without the need for synchronization. As long as there is one unique thread for the write operations and one unique thread for the read operations, there will be no cross-interference between read and write threads and no synchronization is required.
In short, because you plan to read and write from the same TcpClient instance, you'll need two threads for doing this. Using separate threads will ensure that no data is lost while receiving data at the same time someone is trying to send. The way I've approached this in my projects is to create a top-level object, say Client, that wraps the TcpClient and its underlying NetworkStream. This class also creates and manages two Thread objects, passing the NetworkStream object to each during construction. The first thread is the Sender thread. Anyone wanting to send data does so via a public SendData() method on the Client, which routes the data to the Sender for transmission. The second thread is the Receiver thread. This thread publishes all received data to interested parties via a public event exposed by the Client. It looks something like this:
Client.cs
public sealed partial class Client : IDisposable
{
// Called by producers to send data over the socket.
public void SendData(byte[] data)
{
_sender.SendData(data);
}
// Consumers register to receive data.
public event EventHandler<DataReceivedEventArgs> DataReceived;
public Client()
{
_client = new TcpClient(...);
_stream = _client.GetStream();
_receiver = new Receiver(_stream);
_sender = new Sender(_stream);
_receiver.DataReceived += OnDataReceived;
}
private void OnDataReceived(object sender, DataReceivedEventArgs e)
{
var handler = DataReceived;
if (handler != null) handler(this, e); // re-raise event via the captured copy
}
private TcpClient _client;
private NetworkStream _stream;
private Receiver _receiver;
private Sender _sender;
}
Client.Receiver.cs
public sealed partial class Client
{
private sealed class Receiver
{
internal event EventHandler<DataReceivedEventArgs> DataReceived;
internal Receiver(NetworkStream stream)
{
_stream = stream;
_thread = new Thread(Run);
_thread.Start();
}
private void Run()
{
// main thread loop for receiving data...
}
private NetworkStream _stream;
private Thread _thread;
}
}
Client.Sender.cs
public sealed partial class Client
{
private sealed class Sender
{
internal void SendData(byte[] data)
{
// transition the data to the thread and send it...
}
internal Sender(NetworkStream stream)
{
_stream = stream;
_thread = new Thread(Run);
_thread.Start();
}
private void Run()
{
// main thread loop for sending data...
}
private NetworkStream _stream;
private Thread _thread;
}
}
Notice that these are three separate .cs files but define different aspects of the same Client class. I use the Visual Studio trick described here to nest the respective Receiver and Sender files under the Client file. In a nutshell, that's the way I do it.
Regarding the NetworkStream.DataAvailable/Thread.Sleep() question. I would agree that an event would be nice, but you can effectively achieve this by using the Read() method in combination with an infinite ReadTimeout. This will have no adverse impact on the rest of your application (e.g., UI) since it's running in its own thread. However, this complicates shutting down the thread (e.g., when the application closes), so you'd probably want to use something more reasonable, say 10 milliseconds. But then you're back to polling, which is what we're trying to avoid in the first place. Here's how I do it, with comments for explanation:
private sealed class Receiver
{
private void Run()
{
try
{
// ShutdownEvent is a ManualResetEvent signaled by
// Client when it's time to close the socket.
while (!ShutdownEvent.WaitOne(0))
{
try
{
// We could use the ReadTimeout property and let Read()
// block. However, if no data is received prior to the
// timeout period expiring, an IOException occurs.
// While this can be handled, it leads to problems when
// debugging if we are wanting to break when exceptions
// are thrown (unless we explicitly ignore IOException,
// which I always forget to do).
if (!_stream.DataAvailable)
{
// Give up the remaining time slice.
Thread.Sleep(1);
}
else if (_stream.Read(_data, 0, _data.Length) > 0)
{
// Raise the DataReceived event w/ data...
}
else
{
// The connection has closed gracefully, so stop the
// thread.
ShutdownEvent.Set();
}
}
catch (IOException ex)
{
// Handle the exception...
}
}
}
catch (Exception ex)
{
// Handle the exception...
}
finally
{
_stream.Close();
}
}
}
As far as 'keepalives' are concerned, there is unfortunately not a way around the problem of knowing when the other side has exited the connection silently except to try sending some data. In my case, since I control both the sending and receiving sides, I've added a tiny KeepAlive message (8 bytes) to my protocol. This is sent every five seconds from both sides of the TCP connection unless other data is already being sent.
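As a rough illustration only (this is not the author's actual code), such a keepalive could be driven by a timer that routes through the same Sender thread; the payload, the five-second check and the _lastSendUtc bookkeeping are assumptions:

// Illustrative only: an 8-byte keepalive pushed through the Sender every 5 seconds,
// skipped when application data has gone out recently.
private static readonly byte[] KeepAliveMessage = new byte[8];    // placeholder payload
private System.Threading.Timer _keepAliveTimer;
private DateTime _lastSendUtc;                                    // assumed to be updated by SendData()

private void StartKeepAlive()
{
    _keepAliveTimer = new System.Threading.Timer(_ =>
    {
        if ((DateTime.UtcNow - _lastSendUtc) >= TimeSpan.FromSeconds(5))
            _sender.SendData(KeepAliveMessage);
    }, null, TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(5));
}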
I think I've addressed all the facets that you touched on. I hope you find this helpful.
I am developing a C# Windows Service that will receive financial quotations from a feeder that uses TCP. My project must receive and process a large volume of data, for I will be tracking 140 different assets that are used to update an SQL database every second.
I am using a loop to poll data from the socket, in a BackgroundWorker thread:
try
{
// Send START command with the assets.
if (!AeSocket.AeSocket.Send(Encoding.ASCII.GetBytes(String.Format("{0}{1}START{1}BC|ATIVO{1}{2}{3}", GlobalData.GlobalData.Id, GlobalData.GlobalData.Tab, AeSocket.AeSocket.GetAssets(),
GlobalData.GlobalData.Ret))).Contains("OK"))
{
throw new Exception("Advise was no accepted.");
}
// Poll the socket and send all received strings to the queue for processing in the background.
while (true)
{
// Make sure the connection to the socket is still active.
if (!AeSocket.AeSocket.Client.Connected)
{
throw new Exception("The connection was closed.");
}
// If no data is available in the socket, loop and keep waiting.
if (!AeSocket.AeSocket.Client.Poll(-1, SelectMode.SelectRead))
{
continue;
}
// There are data waiting to be read.
var data = new Byte[AeSocket.AeSocket.ReadBufferSize];
var bytes = AeSocket.AeSocket.Client.Receive(data, 0);
AeSocket.AeSocket.Response = Encoding.Default.GetString(data, 0, bytes);
// Push into the queue for further processing in a different thread.
GlobalData.GlobalData.RxQueue.Add(AeSocket.AeSocket.Response);
}
}
catch
{
backgroundWorkerMain.CancelAsync();
}
finally
{
AeSocket.AeSocket.Client.Close();
AeSocket.AeSocket.Client.Dispose();
}
The received data is processed in a separate thread to avoid blocking the socket-receiving activity, given the high data volume. I am using a BlockingCollection (RxQueue).
This collection is being observed as shown in the following code snippet:
// Subscribe to the queue for string processing in another thread.
// This is a blocking observable queue, so it is run in this background worker thread.
GlobalData.GlobalData.Disposable = GlobalData.GlobalData.RxQueue
.GetConsumingEnumerable()
.ToObservable()
.Subscribe(c =>
{
try
{
ProcessCommand(c);
}
catch
{
// Any error will stop the processing.
backgroundWorkerMain.CancelAsync();
}
});
The data is then added to a ConcurrentDictionary, to be read asynchronously by a one-second timer and saved to an SQL database:
// Add or update the quotation record in the dictionary.
GlobalData.GlobalData.QuotDict.AddOrUpdate(dataArray[0], quot, (k, v) => quot);
The one-second system timer runs in another BackgroundWorker thread and saves the quotation data read from the ConcurrentDictionary.
// Update the gridViewAssetQuotations.
var list = new BindingList<MyAssetRecord>();
foreach (var kv in GlobalData.GlobalData.QuotDict)
{
// Add the data in a database table...
}
Is this a good approach to this situation?
Is using a BlockingCollection as an asynchronous queue, and a ConcurrentDictionary to allow asynchronous reading from another thread, a good way of doing this?
What about the way I poll data from the socket:
// If no data is available in the socket, loop and keep waiting.
if (!AeSocket.AeSocket.Client.Poll(-1, SelectMode.SelectRead))
{
continue;
}
Is there a better way of doing it?
Also, I have to send a KeepAlive command every 4 seconds to the TCP server. Could I do it completely asynchronously, outside the polling loop above, using another system timer, or must it be synchronized with the polling operation? The server only allows a connection on one port.
Thanks in advance for any suggestions.
Eduardo Quintana
Your loop will lead to high processor usage. Put a sleep (say 1 ms) in the loop when there is no data to process; increase the sleep time if there is still no data in the next cycle, and decrease it when you do get data. That way the reader loop auto-adjusts to the data traffic and saves processor cycles. The rest looks good.
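A sketch of that adaptive back-off, reusing the identifiers from the question; the running flag stands in for your cancellation check, the bounds are arbitrary, and Poll(0, ...) replaces the blocking Poll(-1, ...) so that the sleep actually applies:

int sleepMs = 1;
while (running)                                            // placeholder for your cancellation check
{
    if (!AeSocket.AeSocket.Client.Poll(0, SelectMode.SelectRead))
    {
        Thread.Sleep(sleepMs);
        sleepMs = Math.Min(sleepMs * 2, 50);               // back off while idle, capped at 50 ms
        continue;
    }

    sleepMs = 1;                                           // traffic again: poll aggressively
    var data = new byte[AeSocket.AeSocket.ReadBufferSize];
    var bytes = AeSocket.AeSocket.Client.Receive(data, 0);
    GlobalData.GlobalData.RxQueue.Add(Encoding.Default.GetString(data, 0, bytes));
}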
I have run into an issue with slow C# start-up time causing UDP packets to drop initially. Below is what I have done to mitigate this start-up delay: I essentially wait an additional 10 ms between the first two packet transmissions. This fixes the initial drops, at least on my machine. My concern is that a slower machine may need a longer delay.
private void FlushPacketsToNetwork()
{
MemoryStream packetStream = new MemoryStream();
while (packetQ.Count != 0)
{
byte[] packetBytes = packetQ.Dequeue().ToArray();
packetStream.Write(packetBytes, 0, packetBytes.Length);
}
byte[] txArray = packetStream.ToArray();
udpSocket.Send(txArray);
txCount++;
ExecuteStartupDelay();
}
// socket takes too long to transmit unless I give it some time to "warm up"
private void ExecuteStartupDelay()
{
if (txCount < 3)
{
timer.SpinWait(10e-3);
}
}
So I am wondering: is there a better approach to letting C# fully load all of its dependencies? I really don't mind if it takes several seconds to completely load; I just do not want to do any high-bandwidth transmissions until C# is ready for full speed.
Additional relevant details
This is a console application, the network transmission is run from a separate thread, and the main thread just waits for a key press to terminate the network transmitter.
In the Program.Main method I have tried to get the most performance from my application by using the highest priorities reasonable:
public static void Main(string[] args)
{
Process.GetCurrentProcess().PriorityClass = ProcessPriorityClass.High;
...
Thread workerThread = new Thread(new ThreadStart(worker.Run));
workerThread.Priority = ThreadPriority.Highest;
workerThread.Start();
...
Console.WriteLine("Press any key to stop the stream...");
WaitForKeyPress();
worker.RequestStop = true;
workerThread.Join();
}
Also, the socket settings I am currently using are shown below:
udpSocket = new Socket(targetEndPoint.Address.AddressFamily,
SocketType.Dgram,
ProtocolType.Udp);
udpSocket.Ttl = ttl;
udpSocket.SendBufferSize = 1024 * 1024;
udpSocket.Blocking = true;
udpSocket.Connect(targetEndPoint);
The default SendBufferSize is 8192, so I went ahead and increased it to a megabyte, but this setting did not seem to have any effect on the dropped packets at the beginning.
From the comments I learned that TCP is not an option for you (because of its inherent transmission delays), and you also do not want to lose packets due to the other side not being fully loaded.
So you actually need to implement some of the features present in TCP (retransmission), but in a more robust and lightweight fashion. I also assume that you are in control of the receiving side.
I propose that you send some predetermined number of packets and then wait for confirmation. For instance, every packet can carry an id that constantly grows. Every N packets, the receiving application sends the number of the last received packet back to the sender. After receiving this number, the sender knows whether it is necessary to repeat the last N packets.
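A very rough sketch of the sender side of that idea; the 4-byte id prefix, the window size and the resend handling are assumptions rather than a complete protocol, and a connected UDP Socket (as in the question) is assumed:

// Illustrative only: prefix each datagram with a growing sequence number and
// keep the last N datagrams around so they can be resent on request.
private const int WindowSize = 32;                         // "N" from the text, arbitrary
private uint _nextId;
private readonly Dictionary<uint, byte[]> _recent = new Dictionary<uint, byte[]>();

private void SendWithId(Socket udpSocket, byte[] payload)
{
    uint id = _nextId++;
    byte[] datagram = new byte[4 + payload.Length];
    BitConverter.GetBytes(id).CopyTo(datagram, 0);         // 4-byte growing sequence number
    payload.CopyTo(datagram, 4);

    _recent[id] = datagram;                                // keep for possible retransmission
    _recent.Remove(id - WindowSize);                       // forget anything older than N
    udpSocket.Send(datagram);
}

// Called when the receiver reports the last id it actually got.
private void OnAck(Socket udpSocket, uint lastReceivedId)
{
    for (uint id = lastReceivedId + 1; id < _nextId; id++)
    {
        byte[] datagram;
        if (_recent.TryGetValue(id, out datagram))
            udpSocket.Send(datagram);                      // resend the missing packets as-is
    }
}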
This approach should not hurt your bandwidth very much and you will get some sort of information about received data (although not guaranteed).
Otherwise, it is best to switch to TCP. By the way, did you try using TCP? How badly does it hurt your bandwidth?
I am making a game that generates a great deal of data traffic, such as 2D positions and much other data.
I am using a very simple class to help me listen on port 8080 (UDP), and a method to send datagrams:
public static void SendToHostUDP(string Msg)
{
UdpClient udpClient = new UdpClient();
udpClient.Connect(Main.HostIP, 8080);
byte[] sdBytes = Encoding.ASCII.GetBytes(Msg);
udpClient.BeginSend(sdBytes, sdBytes.Length, CallBack, udpClient);
Main.UDPout += sdBytes.Length / 1000f;
}
public static void SendToClientUDP(string Msg, IPAddress ip)
{
UdpClient udpClient = new UdpClient();
udpClient.Connect(ip, 8080);
byte[] sdBytes = Encoding.ASCII.GetBytes(Msg);
udpClient.BeginSend(sdBytes, sdBytes.Length, CallBack, udpClient);
Main.UDPout += sdBytes.Length / 1000f;
}
public static void CallBack(IAsyncResult ar)
{
}
The listener class is just a very simple one:
public class NetReciever
{
public TcpListener tcpListener;
public Thread listenThread;
private Action actionToPerformTCP;
private Action actionToPerformUDP;
public UdpClient udpClient;
public Thread UDPThread;
TimerAction UDPPacketsCounter;
int UDPPacketsCounts;
private BackgroundWorker bkgUDPListener;
string msg;
public NetReciever(IPAddress IP)
{
this.tcpListener = new TcpListener(IPAddress.Any, 25565);
this.udpClient = new UdpClient(8080);
this.UDPThread = new Thread(new ThreadStart(UDPListen));
this.listenThread = new Thread(new ThreadStart(ListenForClients));
this.listenThread.Start();
UDPPacketsCounter = new TimerAction(CountUDPPackets, 1000, false);
this.UDPThread.Start();
}
public void CountUDPPackets()
{
UDPPacketsCounts = 0;
}
public void Abort()
{
UDPThread.Abort();
udpClient.Close();
listenThread.Abort();
tcpListener.Stop();
}
public void UDPListen()
{
while (true)
{
IPEndPoint RemoteIPEndPoint = new IPEndPoint(IPAddress.Any, 0);
byte[] receiveBytesUDP = udpClient.Receive(ref RemoteIPEndPoint);
if (receiveBytesUDP != null)
{
UDPPacketsCounts++;
Main.UDPin += receiveBytesUDP.Length / 1000f;
}
if (Main.isHost)
{
Main.Server.processUDP(Encoding.ASCII.GetString(receiveBytesUDP), RemoteIPEndPoint.Address.ToString());
}
else
{
if (RemoteIPEndPoint.Address.ToString() == Main.HostIP)
{
Program.game.ProcessUDP(Encoding.ASCII.GetString(receiveBytesUDP));
}
}
}
}
So basically, when there is 1 player there will be approximately 60 packets/s in and 60 packets/s out.
It acts like this:
Listen for packets.
Validate the packets.
Process the packet data, e.g. storing some of the positions, sending back some packets...
So it just loops like this.
And the problems are here:
Firstly, when there are 2 players (host + 1 client), there are significant FPS drops at some points, and the host experiences stuttering of all audio (as during a blue screen) for a moment.
Secondly, when there are 2+ players (host + 1+ clients), the host FPS drops to 5-20 and it lags constantly, though without freezing.
I have read some articles about async, and this is already threaded, isn't it?
There are also BeginReceive and EndReceive; I don't really understand how and why I need to use them.
Could someone kindly provide some examples to explain how to process these kinds of data and send/receive packets? I don't really want to use libraries, because I want to know what is going on.
P.S.: How does the networking system in Terraria work? It uses TCP but it is smooth! How?
P.P.S.: What is buffering? Why do I need to set buffering, and how? What does it change?
P.P.P.S.: I think there is something to be tuned and changed in how the packets are sent, because it just looks so simple.
The asynchronous concept is definitely something you want to look into here. The issue could be that, with everything running on the same thread, certain UI actions (such as graphics rendering, hence your FPS loss, or playing sound, hence your stuttering) may well be waiting for other parts of the program, such as network communications.
Normally you'd separate the threads out, so that the UI and sound side of things can run on their own, without depending on anything else. Have a read of some of the MSDN threading examples, then try putting your longer-running processes on a separate thread from your UI and see how that helps:
http://msdn.microsoft.com/en-us/library/aa645740(v=vs.71).aspx
If you truly want to create a networked game, there is no way around learning more about network programming than you seem to know so far.
A good start is http://gafferongames.com/networking-for-game-programmers/sending-and-receiving-packets/. While this is for C++ (I think someone may have ported it to C# as well), the theory behind it all is explained very well.
It might also be worth reading up on WinAPI socket programming. This is more technical than reading tutorials on how to do network programming in C#, but it also makes things clearer than using wrappers that obscure what's really going on behind the scenes.
Edit:
Basically it's up to you whether you use a background thread for listening for packets or use BeginReceive with an AsyncCallback method. The drawback of the latter is that you will eventually need to call EndReceive, at which point it will still block your application until the actual receive is finished. Creating your own thread and using blocking mode will obviously not block your UI/business-logic (main) thread, but you will need to program the cross-thread communication yourself.
I also found a simple tutorial for a UDP client-server app using threading and blocking mode here.
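For the BeginReceive/EndReceive route, a minimal UdpClient sketch might look like this; the class name and the processing hand-off are placeholders:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

public class AsyncUdpListener
{
    private readonly UdpClient _udpClient = new UdpClient(8080);

    public void Start()
    {
        _udpClient.BeginReceive(OnReceive, null);          // returns immediately, no thread blocked
    }

    private void OnReceive(IAsyncResult ar)
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] data = _udpClient.EndReceive(ar, ref remote);   // completes this receive

        _udpClient.BeginReceive(OnReceive, null);          // immediately queue the next receive

        string msg = Encoding.ASCII.GetString(data);
        // ProcessUDP(msg, remote) -- hand the message to your game logic here (placeholder).
    }
}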
I'm currently in the process of developing a C# Socket server that can accept multiple connections from multiple client computers. The objective of the server is to allow clients to "subscribe" and "un-subscribe" from server events.
So far I've taken a jolly good look over here: http://msdn.microsoft.com/en-us/library/5w7b7x5f(v=VS.100).aspx and http://msdn.microsoft.com/en-us/library/fx6588te.aspx for ideas.
All the messages I send are encrypted, so I take the string message that I wish to send, convert it into a byte[] array and then encrypt the data before pre-pending the message length to the data and sending it out over the connection.
One thing that strikes me as an issue is this: on the receiving end it seems possible that Socket.EndReceive() (or the associated callback) could return when only half of the message has been received. Is there an easy way to ensure each message is received "complete" and only one message at a time?
EDIT: For example, I take it .NET / Windows sockets does not "wrap" the messages to ensure that a single message sent with Socket.Send() is received in one Socket.Receive() call? Or does it?
My implementation so far:
private void StartListening()
{
IPHostEntry ipHostInfo = Dns.GetHostEntry(Dns.GetHostName());
IPEndPoint localEP = new IPEndPoint(ipHostInfo.AddressList[0], Constants.PortNumber);
Socket listener = new Socket(localEP.Address.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
listener.Bind(localEP);
listener.Listen(10);
while (true)
{
// Reset the event.
this.listenAllDone.Reset();
// Begin waiting for a connection
listener.BeginAccept(new AsyncCallback(this.AcceptCallback), listener);
// Wait for the event.
this.listenAllDone.WaitOne();
}
}
private void AcceptCallback(IAsyncResult ar)
{
// Get the socket that handles the client request.
Socket listener = (Socket) ar.AsyncState;
Socket handler = listener.EndAccept(ar);
// Signal the main thread to continue.
this.listenAllDone.Set();
// Accept the incoming connection and save a reference to the new Socket in the client data.
CClient client = new CClient();
client.Socket = handler;
lock (this.clientList)
{
this.clientList.Add(client);
}
while (true)
{
this.readAllDone.Reset();
// Begin waiting on data from the client.
handler.BeginReceive(client.DataBuffer, 0, client.DataBuffer.Length, 0, new AsyncCallback(this.ReadCallback), client);
this.readAllDone.WaitOne();
}
}
private void ReadCallback(IAsyncResult asyn)
{
CClient theClient = (CClient)asyn.AsyncState;
// End the receive and get the number of bytes read.
int iRx = theClient.Socket.EndReceive(asyn);
if (iRx != 0)
{
// Data was read from the socket.
// So save the data
byte[] recievedMsg = new byte[iRx];
Array.Copy(theClient.DataBuffer, recievedMsg, iRx);
this.readAllDone.Set();
// Decode the message recieved and act accordingly.
theClient.DecodeAndProcessMessage(recievedMsg);
// Go back to waiting for data.
this.WaitForData(theClient);
}
}
Yes, it is possible that you'll get only part of a message in a single receive; it can be even worse: during transfer, only part of a message may actually be sent. You usually see this under bad network conditions or heavy network load.
To be clear, at the network level TCP guarantees to deliver your data in the specified order, but it does not guarantee that the data arrives in the same chunks you sent. There are many reasons for this: software (take a look at Nagle's algorithm, for example), hardware (different routers along the route), and the OS implementation. So, in general, you should never assume which part of the data has already been transferred or received.
Sorry for the long introduction; here is some advice:
Try the relatively new API for high-performance socket servers; there are samples here: Networking Samples for .NET v4.0.
Do not assume you always send a full packet. Socket.EndSend() returns the number of bytes actually scheduled to be sent, and under heavy network load this can be as little as 1-2 bytes. So you have to resend the rest of the buffer when required (see the sketch after the quoted warning below).
There is a warning about this on MSDN:
There is no guarantee that the data you send will appear on the network immediately. To increase network efficiency, the underlying system may delay transmission until a significant amount of outgoing data is collected. A successful completion of the BeginSend method means that the underlying system has had room to buffer your data for a network send.
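Here is the hedged sketch referred to above for the "resend the rest" case; passing state via a Tuple is just one way to do it:

// Illustrative only: keep calling BeginSend() until the whole buffer has been accepted.
private void SendAll(Socket socket, byte[] buffer)
{
    socket.BeginSend(buffer, 0, buffer.Length, SocketFlags.None, OnSent,
                     Tuple.Create(socket, buffer, 0));
}

private void OnSent(IAsyncResult ar)
{
    var state = (Tuple<Socket, byte[], int>)ar.AsyncState;
    int sent = state.Item3 + state.Item1.EndSend(ar);      // bytes accepted so far

    if (sent < state.Item2.Length)                         // partial send: push the rest
        state.Item1.BeginSend(state.Item2, sent, state.Item2.Length - sent,
                              SocketFlags.None, OnSent,
                              Tuple.Create(state.Item1, state.Item2, sent));
}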
Do not assume you always receive a full packet. Join the received data in some kind of buffer and analyze it only when it contains enough data.
Usually, for binary protocols, I add a field indicating how much data is incoming, a field with the message type (you could instead use a fixed length per message type, but that is generally not a good idea, e.g. because of versioning problems), a version field where applicable, and a CRC field at the end of the message.
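For illustration only, such a framing routine might look like this; the field sizes are arbitrary choices and Crc32() is a placeholder, not a framework method:

using System.IO;

// Illustrative layout: [length:4][type:2][version:2][payload:n][crc:4]
private static byte[] Frame(ushort type, ushort version, byte[] payload)
{
    using (var ms = new MemoryStream())
    using (var w = new BinaryWriter(ms))
    {
        w.Write(payload.Length);             // 4-byte payload length
        w.Write(type);                       // 2-byte message type
        w.Write(version);                    // 2-byte protocol version
        w.Write(payload);                    // the body itself
        w.Write(Crc32(payload));             // 4-byte checksum; Crc32() is a placeholder
        return ms.ToArray();
    }
}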
It is not strictly required reading, and it is a bit old and applies directly to Winsock, but it may be worth studying: the Winsock Programmer's FAQ.
Take a look at Protocol Buffers; they are worth learning: http://code.google.com/p/protobuf-csharp-port/, http://code.google.com/p/protobuf-net/
Hope it helps.
P.S. Sadly, the MSDN sample you refer to in the question effectively ruins the async paradigm, as stated in other answers.
Your code is very wrong. Doing loops like that defeats the purpose of asynchronous programming. Async IO is used so as not to block the thread, letting it continue with other work. By looping like that, you are blocking the thread.
void StartListening()
{
_listener.BeginAccept(OnAccept, null);
}
void OnAccept(IAsyncResult res)
{
var clientSocket = listener.EndAccept(res);
//begin accepting again
_listener.BeginAccept(OnAccept, null);
clientSocket.BeginReceive(xxxxxx, OnReceive, clientSocket);
}
void OnReceive(IAsyncResult res)
{
var socket = (Socket)res.AsyncState;
var bytesRead = socket.EndReceive(res);
socket.BeginReceive(xxxxx, OnReceive, socket);
//handle buffer here.
}
Note that I've removed all error handling to make the code cleaner. That code does not block any thread and is therefore much more efficient. I would break the code up into two classes: the server-handling code and the client-handling code. That makes it easier to maintain and extend.
The next thing to understand is that TCP is a stream protocol. It does not guarantee that a message arrives in one Receive call. Therefore you must know either how large a message is or where it ends.
The first solution is to prefix each message with a header, which you parse first and then continue reading until you have the complete body/message.
The second solution is to put some control character at the end of each message and continue reading until the control character is read. Keep in mind that you should encode that character if it can exist in the actual message.
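A hedged sketch of the length-prefix variant (4-byte length header; HandleMessage() is a placeholder and the buffer handling is deliberately simple rather than efficient):

// Accumulate incoming bytes and only hand out complete, length-prefixed messages.
private readonly MemoryStream _pending = new MemoryStream();

private void OnBytesReceived(byte[] chunk, int count)
{
    _pending.Write(chunk, 0, count);                       // append whatever arrived

    while (true)
    {
        byte[] all = _pending.ToArray();
        if (all.Length < 4) return;                        // not even a full header yet

        int bodyLength = BitConverter.ToInt32(all, 0);     // 4-byte length prefix
        if (all.Length < 4 + bodyLength) return;           // body still incomplete

        byte[] message = new byte[bodyLength];
        Array.Copy(all, 4, message, 0, bodyLength);
        HandleMessage(message);                            // placeholder for your processing

        // Keep any bytes belonging to the next message.
        _pending.SetLength(0);
        _pending.Write(all, 4 + bodyLength, all.Length - 4 - bodyLength);
    }
}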
You need to send fixed-length messages or include the length of the message in the header. Try to have something that allows you to clearly identify the start of a packet.
I found 2 very useful links:
http://vadmyst.blogspot.com/2008/03/part-2-how-to-transfer-fixed-sized-data.html
C# Async TCP sockets: Handling buffer size and huge transfers