C# asynchronous BeginSend method

I am a newbie in socket programming. I am developing a server-client application.
I am using asynchronous TCP/IP sockets. But now I am facing a problem. On my client side I receive data into a 2 KB byte array via the BeginReceive method. It works perfectly if the data size is below or equal to 2 KB, but the problem occurs when the data size exceeds 2 KB. Please give me some solution.

This is perfectly normal - you shouldn't expect to get all the data in one call, whether you're using synchronous or asynchronous calls, and whether you have a lot of data or a little.
You should keep reading until the read call indicates that there's no more data - or until you've got everything you need. If your protocol needs more than one request/response on a connection, you should either length-prefix each message so that the other side knows how much to read, or have some sort of delimiter to indicate the end of a message. Length-prefixing is much easier when it's suitable, but it doesn't easily support streaming - you have to end up with length-prefixed "chunks" and a final chunk to indicate when you're done.
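For illustration, here's a minimal synchronous sketch of the length-prefix approach - the helper names are made up, not framework APIs, and the same loop structure carries over to BeginReceive/EndReceive:

using System;
using System.IO;

static byte[] ReadExact(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Read returns however many bytes happen to be available,
        // possibly fewer than requested - hence the loop.
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new EndOfStreamException("Connection closed mid-message");
        offset += read;
    }
    return buffer;
}

static byte[] ReadMessage(Stream stream)
{
    // 4-byte length prefix, then exactly that many bytes of body.
    int length = BitConverter.ToInt32(ReadExact(stream, 4), 0);
    return ReadExact(stream, length);
}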

I agree with Jon's answer, wrt the fact that you shouldn't expect all your data in one read.
Here are some blogs that have helped me with this problem in the past:
Aviad Ezra has an excellent series on Asynchronous Sockets:
.NET Sockets - Two Way - Single Client
.NET Sockets in Two Directions with Multiple Client Support
Sending Typed (Serialized) Messages over .NET Sockets
This blog is particularly useful if you decide to go the length-prefixed route; the author uses a MemoryStream as his temporary storage between reads:
How to Transfer Variable Length Messages With Async Sockets
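For a rough idea of that technique, here's my own hypothetical sketch (the Receiver class, the 2 KB buffer, and the 4-byte prefix are illustrative, not the blog author's code): each BeginReceive callback appends whatever arrived to a MemoryStream, and complete length-prefixed messages are peeled off the front as they become available.

using System;
using System.IO;
using System.Net.Sockets;

class Receiver
{
    readonly Socket socket;
    readonly byte[] readBuffer = new byte[2048];
    readonly MemoryStream pending = new MemoryStream();

    public Receiver(Socket socket) { this.socket = socket; }

    public void Start()
    {
        socket.BeginReceive(readBuffer, 0, readBuffer.Length, SocketFlags.None, OnReceive, null);
    }

    void OnReceive(IAsyncResult ar)
    {
        int bytesRead = socket.EndReceive(ar);
        if (bytesRead == 0)
            return; // remote side closed the connection

        pending.Write(readBuffer, 0, bytesRead);

        // Peel off as many complete length-prefixed messages as have arrived.
        byte[] message;
        while (TryExtractMessage(out message))
            Process(message);

        Start(); // post the next read
    }

    bool TryExtractMessage(out byte[] message)
    {
        message = null;
        byte[] data = pending.ToArray();
        if (data.Length < 4)
            return false;
        int length = BitConverter.ToInt32(data, 0);
        if (data.Length < 4 + length)
            return false;

        message = new byte[length];
        Array.Copy(data, 4, message, 0, length);

        // Keep whatever bytes follow the extracted message.
        pending.SetLength(0);
        pending.Position = 0;
        pending.Write(data, 4 + length, data.Length - (4 + length));
        return true;
    }

    void Process(byte[] message)
    {
        // Application-specific handling goes here.
    }
}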

Related

How do I write a small gap in a NetworkStream?

I have a server that sends telemetry data of varying size to receivers, or as I'll call them, clients, via NetworkStream. I aim for maximum speed, so I want minimal delays. The frequency is uncontrolled, so I'd like to use an infinite loop in my clients, that uses NetworkStream.Read to read the data, process it, then repeat.
My problem is that sometimes, if two packets are sent very quickly, and a client has a slow internet connection, the two packets will be received as a continuous stream, resulting in unprocessable data. A half-solution I found (mostly to confirm that this is indeed the error) is to have a small delay after/before each transmission, using System.Threading.Thread.Sleep(100), but not only do I find Sleep a botched solution, it's also inconsistent, as it slows down clients with a good connection, and the problem may persist with an even worse connection.
What I'd like to do is to have the server send a gap between each transmission, providing a separation regardless of internet speed, as NetworkStream.Read should finish after the current continuous stream ends. I don't have a deep understanding of how a NetworkStream works, and have no idea what a few bytes of empty stream look like or how it could be implemented. Any ideas?
I would strongly advise changing the protocol instead if you possibly can. TCP is a stream-based protocol, and any attempt to effectively ignore that is likely to be unreliable.
Instead, I'd suggest changing to make it a stream of messages where each message has a prefix indicating the length of the body of the message. That way it doesn't matter if a particular message is split across multiple packets or received in the same packet as other messages. It also makes reading easier for clients: they know exactly how much data to read, so can just do that in a simple loop.
If you're concerned that the length prefix will introduce too much overhead (if the data is often small) you could potentially make the protocol slightly more complicated with a single message containing a whole batch of information (multiple telemetry items).
But fundamentally, it's worth assuming that the data will be split into multiple packets, combined again etc. Don't assume that one write operation corresponds to one read operation.
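If you adopt the length prefix, the sending side is tiny. A minimal sketch, assuming a 4-byte BitConverter prefix that both ends agree on (including byte order):

using System;
using System.IO;

static void WriteMessage(Stream stream, byte[] body)
{
    // Length prefix first, then the body; the reader mirrors this order.
    byte[] prefix = BitConverter.GetBytes(body.Length);
    stream.Write(prefix, 0, prefix.Length);
    stream.Write(body, 0, body.Length);
}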
You don't specifically mention the ProtocolType you're using on your NetworkStream but TCP is bound to fail your requirements. Intermediate routers/switches have no way to know your intent is to separate packets by time and will not respect that desire. Furthermore, TCP, being stream oriented, delivers packets in order, and it has error correction against dropped packets and corrupt packets. On any occurrence of one or the other it will hold all further packets until the error packet is retransmitted - and then you'll probably get them all in a bunch.
Use UDP and implement throttling on the receiving (i.e., client) side - throwing out data if you're falling behind.

Simultaneous use of TCP and UDP sockets

I have created a simple server that currently uses a TCP socket for all my packeting needs. If some data transfer is best used with TCP and others with UDP, what is the most effective way to use them in tandem? Can you simply have 2 sockets? I cannot find example code that shows best practices for this, and it seems like something that would be difficult to debug if done improperly. Thanks!
There is nothing to prevent you from having 2 listening sockets, one each for TCP and UDP. However, the communication method is generally chosen in advance for a given application. For example, the syslog protocol is almost always implemented only with UDP. HTTP on the other hand is pretty much always implemented only over TCP. There are a few protocols that often support both (NTP, DNS are examples).
It would be extremely rare to find a single application that allows a single logical data stream to use both at the same time, and it would be nigh on impossible to ever get that working reliably.
But otherwise if you do support both mechanisms, debugging is straightforward because each can be treated and debugged in isolation.
TCP is far easier to use if you require reliable sequenced delivery -- and for most applications the obvious choice. Every byte sent via TCP is guaranteed to be delivered in the order you send it (or you will receive a distinct error notification), and the peer operating systems between the two machines will cooperate in attempting retries as needed in case any packets are dropped en route. A lot of stuff you don't need to worry about here. Downsides are: (a) some increased overhead, and (b) "message" boundaries are not respected (that is, TCP delivers a stream of bytes; the receiver won't necessarily get them in the same discrete chunks in which they were sent so you must impose message boundaries yourself).
UDP doesn't guarantee delivery. That is, packets may still be dropped, but with no notice to the sender and it is your (i.e. your application's) responsibility to handle that appropriately. Likewise, it is possible for packets to show up in a different order than you sent them (due to different routing paths, for example). On the other hand, the packet you send is the packet you receive so your message boundaries remain intact.
So UDP is often chosen for short "one-shot" or single message notifications, where it's easy to set a timeout at application level, or where each message stands alone and a lost message is non-critical. While TCP is usually the better choice when there is a long-lasting connection or large continuous data streams need to be sent (file transfers and so forth).
Unless you use a raw socket, you must have two separate sockets for UDP and TCP communication.
TCP and UDP are independent OSI Level 4 protocols.
If you use a socket with default options, it's most likely a TCP socket.
You specify the protocol you want to use when you create a socket - SOCK_STREAM is for TCP and SOCK_DGRAM is for UDP.
C# may have more readable options instead of SOCK_STREAM and SOCK_DGRAM.
If you do use raw sockets, you can distinguish between protocols by parsing IP headers.
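For reference, the C# spellings of those socket types look like this:

using System.Net.Sockets;

// SOCK_STREAM equivalent:
Socket tcpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
// SOCK_DGRAM equivalent:
Socket udpSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);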

C# game client with asynchronous sockets

I'm developing a small online game in C#. Currently I am using simple sync TCP sockets. But now (because this is some kind of "learning project") I want to convert to asynchronous sockets. In the client I have the method: byte[] SendAndReceive(Opcode op, byte[] data).
But when I use async sockets this isn't possible anymore.
For example my MapManager class first checks if a map is locally in a folder (checksum) and if it isn't, the map will be downloaded from the server.
So my question:
Is there any good way to send some data and get the answer without saving the received data to some kind of buffer and polling till this buffer isn't null?
Check out IO Completion Ports and the SocketAsyncEventArgs that goes with it. It raises events when data has been transferred, but you still need a buffer. Just no polling. It's fast and pretty efficient.
http://www.codeproject.com/Articles/83102/C-SocketAsyncEventArgs-High-Performance-Socket-Cod
and another example on MSDN
http://msdn.microsoft.com/en-us/library/system.net.sockets.socketasynceventargs.aspx
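Here's a minimal sketch of that event-driven receive pattern, assuming an already-connected Socket (the 2 KB buffer is arbitrary):

using System;
using System.Net.Sockets;

void StartReceive(Socket socket)
{
    var args = new SocketAsyncEventArgs();
    args.SetBuffer(new byte[2048], 0, 2048);
    args.UserToken = socket;
    args.Completed += OnReceiveCompleted;
    // ReceiveAsync returns false when the operation completed synchronously;
    // in that case Completed will not fire, so call the handler directly.
    if (!socket.ReceiveAsync(args))
        OnReceiveCompleted(socket, args);
}

void OnReceiveCompleted(object sender, SocketAsyncEventArgs args)
{
    var socket = (Socket)args.UserToken;
    if (args.SocketError != SocketError.Success || args.BytesTransferred == 0)
        return; // error or remote close
    // Received data sits in args.Buffer from args.Offset,
    // args.BytesTransferred bytes long - process it here.
    if (!socket.ReceiveAsync(args)) // post the next receive
        OnReceiveCompleted(socket, args);
}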
A code example of what you have would help, but I'd suggest using a new thread for each socket connection with a thread manager. Lmk if that makes sense or if that's applicable here. :)

Good UDP relay server design

I'm creating a UDP server that needs to receive UDP packets from various clients, and then forward them to other clients. I'm using C# so each UDP socket is a UdpClient for me. I think I want 2 UdpClient objects, one for receiving and one for sending. The receiving socket will be bound to a known port, and the sender will not be bound at all.
The server will get each packet, lookup the username in the packet data, and then based on a routing list the server maintains, it will forward the packet to 1 or more other clients.
I start the listening UdpClient with:
UdpClient udpListener = new UdpClient(new IPEndPoint(ListenerIP, UdpListenerPort));
udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
My callback looks like:
void OnUDPRead(IAsyncResult ar)
{
UdpClient udpListener = (UdpClient)ar.AsyncState;
try
{
IPEndPoint remoteEndPoint = null;
byte[] packet = udpListener.EndReceive(ar, ref remoteEndPoint);
// Get connection based on data in packet
// Get connections routing table (from memory)
// ***Send to each client in routing table***
}
catch (Exception)
{
// Note: swallowing exceptions here hides receive errors.
}
udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
}
The server will need to process 100s of packets per second (this is for VOIP). The main part I'm not sure about, is what to do at the "Send to each client" part above. Should I use UdpClient.Send or UdpClient.BeginSend? Since this is for a real-time protocol, it wouldn't make sense to have several sends pending/buffered for any given client.
Since I only have one BeginReceive() posted at any given time, I assume my OnUDPRead function will never be called while already executing. Unless UdpClient works differently than TcpClient in this area? Since I don't give it a single buffer (I don't give it any buffer), I suppose it could fire OnUDPRead several times with only one BeginReceive posted?
I don't see any reason why you can't use one socket, the straightforward way. Async I/O doesn't appear to me to offer any startling benefit in this situation. Just use non-blocking mode. Receive a packet, send it. If send() causes EAGAIN or EWOULDBLOCK or whatever the C# manifestation of that is, just drop it.
In your plan, the sending socket will be bound automatically the first time you call send(). Clients will reply to that port. So it needs to be a port you are reading on. And you already have one of those.
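A sketch of that single-socket, synchronous approach - the port number and the LookupRoutes helper are placeholders for the routing-table logic described in the question:

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

static void RunRelay()
{
    var relay = new UdpClient(new IPEndPoint(IPAddress.Any, 9000)); // port is illustrative
    while (true)
    {
        IPEndPoint sender = null;
        byte[] packet = relay.Receive(ref sender); // blocks until a datagram arrives
        foreach (IPEndPoint target in LookupRoutes(packet)) // hypothetical routing lookup
            relay.Send(packet, packet.Length, target);
    }
}

static IEnumerable<IPEndPoint> LookupRoutes(byte[] packet)
{
    // Hypothetical: parse the username from the packet and return
    // the endpoints in its routing list.
    yield break;
}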
It very much depends on scale - i.e. how many concurrent RTP streams you want to relay. I have developed exactly this solution as part of a media gateway for our SIP switch. Despite initial reservations, I have it working very efficiently using C# (1000 concurrent streams on a single server, 50K datagrams per second using 20ms PTime).
Through trial-and-error, I determined the following:
1) Using Socket.ReceiveFromAsync is much better than using synchronous approaches, since you don't need individual threads for each stream. As packets are received, they are scheduled through the threadpool.
2) Creating/disposing the SocketAsyncEventArgs structures takes time - better to allocate a pool of structures at startup and 'borrow' them from the pool (see the sketch after this list).
3) Similarly for any other objects required during processing (e.g. a RTPPacket class). Allocating these dynamically at such a high rate results in GC issues pretty quickly with any sort of real load. If you avoid all dynamic allocation, and instead use pools of objects you manage yourself, this issue goes away.
4) If you need to do any clocking to co-ordinate input and output streams, transcoding, mixing or file playback, don't try to use the standard .NET timers. I wrapped the Win32 Multimedia timers from winmm.dll - these were much more precise.
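To make points 2) and 3) concrete, here's a rough sketch of the pooling idea - my own illustration with made-up sizes, not the production code described here:

using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

class SaeaPool
{
    readonly ConcurrentBag<SocketAsyncEventArgs> pool = new ConcurrentBag<SocketAsyncEventArgs>();

    public SaeaPool(int count, int bufferSize, EventHandler<SocketAsyncEventArgs> completed)
    {
        // Allocate everything once at startup so nothing is allocated
        // (or garbage collected) on the per-packet path.
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize);
            args.Completed += completed;
            pool.Add(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        return pool.TryTake(out args) ? args : null; // caller decides what to do on exhaustion
    }

    public void Return(SocketAsyncEventArgs args)
    {
        pool.Add(args);
    }
}

The Completed handler returns each structure to the pool once its packet has been fully processed, so the same buffers circulate for the lifetime of the server.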
I ended up with an architecture where components could expose IRTPSource and IRTPSink interfaces, which could then be hooked-up to create the graph required. I created RTP input/output components, mixer, player, recorder, (simple) transcoder, playout buffer etc. - and then wired them up using these two interfaces. The RTP packets are then clocked through the graph using the MM timers.
C# proved to be a good choice for what would typically be approached using C or C++.
Mike
What EJP said.
Most all the literature on scaling socket servers with async IO is written with TCP scenarios in mind. I did a lot of research last year on how to scale a UDP socket server, and found that IOCP/epoll wasn't as appropriate for UDP as it is for TCP. See this answer here.
So it's actually quite simple to scale a UDP socket server using multiple threads rather than trying to do any sort of asynchronous I/O. And oftentimes, one single thread pumping out recvfrom/sendto calls is good enough.
I would suggest that the server have one synchronous socket per client. Then have several threads do a Socket.Select call on a different subset of client sockets. I suppose you could have a single socket for all clients, but then you'll get some out-of-ordering problems if you try to process audio packets on that socket from multiple threads.
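A rough sketch of one such worker thread, assuming its subset of sockets is non-empty (buffer size and timeout are arbitrary):

using System.Collections.Generic;
using System.Net.Sockets;

static void PumpSubset(List<Socket> subset)
{
    var buffer = new byte[2048];
    while (true)
    {
        // Select trims the list down to the readable sockets, so pass a copy.
        var readable = new List<Socket>(subset);
        Socket.Select(readable, null, null, 1000000); // timeout in microseconds
        foreach (Socket s in readable)
        {
            int n = s.Receive(buffer);
            // process n bytes of this client's audio here...
        }
    }
}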
Also consider using the lower level Socket class instead of UdpClient.
But if you really want to scale out - you may want to do the core work in C/C++. With C#, more layers of abstraction on top of winsock means more buffer copies and higher latency. There are techniques by which you can use winsock to forward packets from one socket to another without incurring a buffer copy.
A good design would be not to reinvent the wheel :)
There are already lots of RTP media proxy solutions: Kamailio, FreeSWITCH, Asterisk, and some other open-source projects. I have great experience with FreeSWITCH, and would highly recommend it as a media proxy for SIP calls.
If your signalling protocol is not SIP, there are some solutions for other protocols as well.

Forcing a TCP stream to send buffer contents

I am using C# TCP sockets to send data between a client and server.
Now the problem, as I see it or as I perceive it, is that TCP is a stream protocol and will not push (send) data unless there is a sufficient amount of it.
For instance, say I wanted to send some data; whatever it is doesn't matter, let's just say it's 8 bytes long. The behaviour I am seeing is that no matter how long I wait, it won't send that data unless I push more behind it, presumably until I fill the TCP buffer.
So my question is: if I want to send a small amount of data via TCP, do I need to append garbage to the end to force the socket to send? (I wouldn't feel good about this.) Or is there an alternative way I can force the front segment of the stream to send?
Thanks in advance. I am still learning TCP, so excuse the ignorance.
Don't set NoDelay unless you are an expert at TCP/IP and understand its full ramifications. If you haven't read Stevens, don't even think about it.
Here's an example question: if you establish a socket connection and send 8 bytes on it, are the 8 bytes immediately sent or does the Nagle algorithm wait for more data to send? The answer is "the 8 bytes are sent immediately" - but don't consider messing with Nagle until you understand exactly why that is the answer.
Here's another question: in a standard command/response protocol, how much Nagle delay is applied to each packet? The answer: none. Again, you should research why Nagle causes no delays in this common scenario.
If you're not seeing the data sent within 250 milliseconds (the maximum delay caused by Nagle in the worst possible scenario), then there is something else wrong.
You could set the NoDelay property (I think it's what disables Nagle).
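It is - NoDelay maps to the TCP_NODELAY option, which disables the Nagle algorithm. Setting it looks like this:

using System.Net.Sockets;

var client = new TcpClient();
client.NoDelay = true; // disables the Nagle algorithm on the underlying socket
// Or on a raw Socket:
// socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.NoDelay, true);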
