How to send large data using C# UdpClient?

I'm trying to send a large amount of data (more than 50 MB) using the C# UdpClient.
So first I split the data into 65507-byte blocks and send them in a loop:
for (int i = 0; i < packetCount; i++)
    myUdpClient.Send(blocks[i], blocks[i].Length, remoteEndPoint);
My problem is that only the first packets are received.
While the first packets are being sent, the network load spikes to 100%, and after that the remaining packets are not received.
I want to get as much data throughput as possible.
I'm sorry for my English!
Thanks for your help in advance.

To all those people saying to use TCP: that misses the point here. Yes, TCP is reliable, and since the kernel maintains the window it's a fairly "set and forget" protocol; but when the goal is using 100% of your throughput, TCP will not do. It throttles too hard, and the wait for an ACK automatically trashes at least 50% of the bandwidth because of the RTT.
To the original question: you are sending UDP packets non-stop in that for-loop, so the send buffer fills up, and any new data is then dropped immediately without even attempting to go on the line. You are also splitting your data into blocks that are too large. I would recommend building your own throttling mechanism that starts off at around 2K segments per second and slowly ramps up. Each "segment" contains a SEQ (a sequence identifier used for acknowledgements, or ACKs) and an OFF (the offset inside the file for this data block). As the data is tagged, the sender keeps track of these tags. When the other side gets a segment, it stores the SEQ number in an ACK list; any missing SEQ numbers are placed on a NACK timer list, and when a timer runs out (if the segment still hasn't been received) the number moves to a NACK list. Every couple of seconds or so the receiver sends up to 5 ACKs from the ACK list along with up to 5 NACKs in a single transmission. If the sender receives any NACKs, it should immediately throttle down and resend the missing fragments before continuing. Data that has been ACKed can be freed from memory. A sketch of such a segment is shown below.
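A minimal sketch of building one such segment (the 4-byte SEQ / 8-byte OFF layout and the helper name are my own assumptions for illustration, not part of any standard):
using System;

// Hypothetical wire layout: [4-byte SEQ][8-byte OFF][payload bytes].
static byte[] BuildSegment(uint seq, long offset, byte[] payload, int count)
{
    byte[] segment = new byte[4 + 8 + count];
    BitConverter.GetBytes(seq).CopyTo(segment, 0);    // SEQ: for ACK/NACK bookkeeping
    BitConverter.GetBytes(offset).CopyTo(segment, 4); // OFF: position within the file
    Array.Copy(payload, 0, segment, 12, count);       // the data itself
    return segment;
}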
Good luck!

I don't know about the .NET implementation specifically (it might be buffering your data), but a UDP datagram is normally limited by the link MTU, which is 1500 bytes on normal Ethernet (subtract 20 bytes for the IP header and 8 bytes for the UDP header, leaving 1472 bytes of payload).
UDP is explicitly allowed to drop and reorder datagrams, and there's no flow control as in TCP.
Exceeding the socket send buffer on the sender side will make the network stack ignore the following send attempts until buffer space is available again (you need to check the return value of send() for that).
Edit:
I would strongly recommend going with TCP for large file transfers. TCP gives you sequencing (you don't have to keep track of dropped and re-ordered packets), advanced flow control (so a fast sender does not overwhelm a slow receiver), and Path MTU discovery (i.e. it finds the optimal data packetization and avoids IP fragmentation). Otherwise you would have to re-implement most of these features yourself.
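As a rough illustration of how little code the TCP route needs (the host name, port and file name below are placeholders), a minimal sketch:
using System.IO;
using System.Net.Sockets;

// Minimal sketch: stream a large file over TCP and let the protocol
// handle sequencing, flow control and retransmission for us.
using (TcpClient client = new TcpClient("remoteHost", 9000)) // placeholder host/port
using (NetworkStream ns = client.GetStream())
using (FileStream fs = File.OpenRead("bigfile.bin"))         // placeholder file
{
    fs.CopyTo(ns); // CopyTo runs the read/write loop internally
}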

I hate to say it, but you need to sleep the thread. You are overloading your throughput. UDP is not very good for lossless data transfer; UDP is for when you don't mind dropping some packets.

Reliably? No, you won't do that with UDP alone.
As far as I understand, UDP makes sense here if you're sending to multiple computers at a time (broadcasting).
In this case,
establish a TCP connection with each of them,
split the data into blocks,
give each block an ID,
send list of IDs to each computer with TCP connection,
broadcast data with UDP,
inform clients (via TCP) that data transmission is over,
then the clients should ask for the dropped packets to be resent (a minimal sketch of the broadcast step follows below).
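A minimal sketch of the UDP broadcast step, where block holds the current block's bytes and blockId its ID from the list above (the port and the 4-byte ID prefix are assumptions for illustration):
using System;
using System.Net;
using System.Net.Sockets;

// Broadcast one block, prefixed with its ID so clients can report
// missing IDs back over their TCP connections.
UdpClient udp = new UdpClient { EnableBroadcast = true };
IPEndPoint broadcastEP = new IPEndPoint(IPAddress.Broadcast, 9001); // placeholder port
byte[] datagram = new byte[4 + block.Length];
BitConverter.GetBytes(blockId).CopyTo(datagram, 0); // block-ID header
block.CopyTo(datagram, 4);
udp.Send(datagram, datagram.Length, broadcastEP);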

Related

How do I write a small gap in NetworkStream?

I have a server that sends telemetry data of varying size to receivers, or as I'll call them, clients, via NetworkStream. I aim for maximum speed, so I want minimal delays. The frequency is uncontrolled, so I'd like to use an infinite loop in my clients, that uses NetworkStream.Read to read the data, process it, then repeat.
My problem is that sometimes, if two packets are sent very quickly and a client has a slow internet connection, the two packets are received as one continuous stream, resulting in unprocessable data. A half-solution I found (mostly to confirm that this is indeed the error) is to add a small delay before/after each transmission using System.Threading.Thread.Sleep(100); but not only do I find Sleep a botchy solution, it's also inconsistent, as it slows down clients with a good connection, and the problem may persist on an even worse connection.
What I'd like is to have the server send a gap between transmissions, providing a separation regardless of connection speed, since NetworkStream.Read should finish when the current continuous stream ends. I don't deeply understand the workings of a NetworkStream, and I have no idea what a few bytes of empty stream would look like or how this could be implemented. Any ideas?
I would strongly advise changing the protocol instead, if you possibly can. TCP is a stream-based protocol, and any attempt to effectively ignore that is likely to be unreliable.
Instead, I'd suggest changing to make it a stream of messages where each message has a prefix indicating the length of the body of the message. That way it doesn't matter if a particular message is split across multiple packets or received in the same packet as other messages. It also makes reading easier for clients: they know exactly how much data to read, so can just do that in a simple loop.
If you're concerned that the length prefix will introduce too much overhead (if the data is often small) you could potentially make the protocol slightly more complicated with a single message containing a whole batch of information (multiple telemetry items).
But fundamentally, it's worth assuming that the data will be split into multiple packets, combined again etc. Don't assume that one write operation corresponds to one read operation.
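A minimal sketch of such length-prefixed framing (the 4-byte length header and helper names are assumptions; note that BitConverter uses the machine's endianness, so a real protocol should pin one):
using System;
using System.IO;

// Write one message as [4-byte length][body].
static void WriteMessage(Stream stream, byte[] body)
{
    stream.Write(BitConverter.GetBytes(body.Length), 0, 4);
    stream.Write(body, 0, body.Length);
}

// Read one message; loop because a single Read may return
// fewer bytes than requested.
static byte[] ReadMessage(Stream stream)
{
    int length = BitConverter.ToInt32(ReadExactly(stream, 4), 0);
    return ReadExactly(stream, length);
}

static byte[] ReadExactly(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new EndOfStreamException("connection closed mid-message");
        offset += read;
    }
    return buffer;
}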
You don't specifically mention the ProtocolType you're using on your NetworkStream, but TCP is bound to fail your requirements. Intermediate routers/switches have no way to know that your intent is to separate packets by time, and will not respect that desire. Furthermore, TCP, being stream-oriented, delivers packets in order, and it has error correction for dropped and corrupt packets. On any occurrence of either, it will hold all further packets until the errored packet is retransmitted, and then you'll probably get them all in a bunch.
Use UDP and implement throttling on the receiving (i.e., client) side - throwing out data if you're falling behind.

Good UDP relay server design

I'm creating a UDP server that needs to receive UDP packets from various clients, and then forward them to other clients. I'm using C# so each UDP socket is a UdpClient for me. I think I want 2 UdpClient objects, one for receiving and one for sending. The receiving socket will be bound to a known port, and the sender will not be bound at all.
The server will get each packet, lookup the username in the packet data, and then based on a routing list the server maintains, it will forward the packet to 1 or more other clients.
I start the listening UdpClient with:
UdpClient udpListener = new UdpClient(new IPEndPoint(ListenerIP, UdpListenerPort));
udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
My callback looks like:
void OnUDPRead(IAsyncResult ar)
{
    UdpClient udpListener = (UdpClient)ar.AsyncState;
    try
    {
        IPEndPoint remoteEndPoint = null;
        byte[] packet = udpListener.EndReceive(ar, ref remoteEndPoint);
        // Get connection based on data in packet
        // Get connection's routing table (from memory)
        // ***Send to each client in routing table***
    }
    catch (Exception ex)
    {
        // Swallowing exceptions here hides errors; at minimum, log ex.
    }
    // Post the next receive so the listener keeps running.
    udpListener.BeginReceive(new AsyncCallback(OnUDPRead), udpListener);
}
The server will need to process hundreds of packets per second (this is for VoIP). The main part I'm not sure about is what to do at the "Send to each client" part above. Should I use UdpClient.Send or UdpClient.BeginSend? Since this is for a real-time protocol, it wouldn't make sense to have several sends pending/buffered for any given client.
Since I only have one BeginReceive() posted at any given time, I assume my OnUDPRead function will never be called while already executing. Unless UdpClient works differently than TcpClient in this area? Since I don't give it a single buffer (I don't give it any buffer), I suppose it could fire OnUDPRead several times with only one BeginReceive posted?
I don't see any reason why you can't use one socket, the straightforward way. Async I/O doesn't appear to offer any startling benefit in this situation. Just use non-blocking mode: receive a packet, send it. If send() causes EAGAIN or EWOULDBLOCK (or whatever the C# manifestation of that is), just drop it.
In your plan, the sending socket will be bound automatically the first time you call send(). Clients will reply to that port. So it needs to be a port you are reading on. And you already have one of those.
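In C# terms, the "drop it" behaviour might look roughly like this (a sketch; SocketError.WouldBlock is the .NET counterpart of EAGAIN/EWOULDBLOCK):
socket.Blocking = false; // non-blocking mode, as suggested above
try
{
    socket.SendTo(packet, clientEndPoint);
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode != SocketError.WouldBlock) throw;
    // Send buffer full: drop this packet, acceptable for real-time audio.
}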
It very much depends on scale - i.e. how many concurrent RTP streams you want to relay. I have developed exactly this solution as part of a media gateway for our SIP switch. Despite initial reservations, I have it working very efficiently using C# (1000 concurrent streams on a single server, 50K datagrams per second using 20ms PTime).
Through trial-and-error, I determined the following:
1) Using Socket.ReceiveFromAsync is much better than using synchronous approaches, since you don't need an individual thread for each stream. As packets are received, they are scheduled through the threadpool.
2) Creating/disposing the SocketAsyncEventArgs structures takes time; it is better to allocate a pool of structures at startup and 'borrow' them from the pool (a sketch of such a pool follows this list).
3) The same goes for any other objects required during processing (e.g. an RTPPacket class). Allocating these dynamically at such a high rate runs into GC issues pretty quickly under any sort of real load. If you avoid all dynamic allocation, and instead use pools of objects you manage yourself, this issue goes away.
4) If you need to do any clocking to co-ordinate input and output streams, transcoding, mixing or file playback, don't try to use the standard .NET timers. I wrapped the Win32 multimedia timers from winmm.dll; these were much more precise.
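A minimal sketch of point 2 (the class and member names here are my own; a production pool would also need a policy for exhaustion):
using System;
using System.Collections.Concurrent;
using System.Net.Sockets;

// Pre-allocate SocketAsyncEventArgs once at startup; rent/return them
// on the receive path so steady-state processing never allocates.
class SaeaPool
{
    private readonly ConcurrentBag<SocketAsyncEventArgs> _pool =
        new ConcurrentBag<SocketAsyncEventArgs>();

    public SaeaPool(int count, int bufferSize, EventHandler<SocketAsyncEventArgs> completed)
    {
        for (int i = 0; i < count; i++)
        {
            var args = new SocketAsyncEventArgs();
            args.SetBuffer(new byte[bufferSize], 0, bufferSize); // fixed buffer per args
            args.Completed += completed;
            _pool.Add(args);
        }
    }

    public SocketAsyncEventArgs Rent()
    {
        SocketAsyncEventArgs args;
        return _pool.TryTake(out args) ? args : null; // null = pool exhausted
    }

    public void Return(SocketAsyncEventArgs args)
    {
        _pool.Add(args);
    }
}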
I ended up with an architecture where components could expose IRTPSource and IRTPSink interfaces, which could then be hooked-up to create the graph required. I created RTP input/output components, mixer, player, recorder, (simple) transcoder, playout buffer etc. - and then wired them up using these two interfaces. The RTP packets are then clocked through the graph using the MM timers.
C# proved to be a good choice for what would typically be approached using C or C++.
Mike
What EJP said.
Most of the literature on scaling socket servers with async I/O is written with TCP scenarios in mind. I did a lot of research last year on how to scale a UDP socket server, and found that IOCP/epoll isn't as appropriate for UDP as it is for TCP. See this answer here.
So it's actually quite simple to scale a UDP socket server: just use multiple threads rather than trying to do any sort of asynchronous I/O. And oftentimes, one single thread pumping recvfrom/sendto calls is good enough.
I would suggest that the server have one synchronous socket per client, and several threads each doing a Socket.Select call on a different subset of the client sockets. I suppose you could have a single socket for all clients, but then you'll get some out-of-ordering problems if you try to process audio packets from that socket on multiple threads.
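A sketch of that pattern (the buffer size and the one-second timeout are arbitrary choices here):
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

// One worker thread per subset of client sockets.
void PumpSubset(List<Socket> subset)
{
    var readable = new List<Socket>();
    byte[] buffer = new byte[2048];
    while (true)
    {
        readable.Clear();
        readable.AddRange(subset);
        // Select trims the list down to the sockets that have data.
        Socket.Select(readable, null, null, 1000000); // timeout in microseconds
        foreach (Socket s in readable)
        {
            EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
            int len = s.ReceiveFrom(buffer, ref remote);
            // ... process the audio packet in buffer[0..len) ...
        }
    }
}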
Also consider using the lower-level Socket class instead of UdpClient.
But if you really want to scale out, you may want to do the core work in C/C++. With C#, more layers of abstraction on top of Winsock mean more buffer copies and higher latency. There are techniques by which you can use Winsock to forward packets from one socket to another without incurring a buffer copy.
A good design would be not to re-invent the wheel. :)
There are already lots of RTP media proxy solutions: Kamailio, FreeSWITCH, Asterisk, and some other open-source projects. I have great experience with FreeSWITCH, and would highly recommend it as a media proxy for SIP calls.
If your signalling protocol is not SIP, there are some solutions for other protocols as well.

Forcing a TCP stream to send buffer contents

I am using C# TCP sockets to send data between a client and server.
Now the problem, as I see it or as I perceive it, is that TCP is a stream protocol and will not push (send) data unless there is a sufficient amount of it.
For instance, say I wanted to send some data, whatever it is doesn't matter, let's just say it's 8 bytes long. The behaviour I am seeing is that no matter how long I wait, it won't send that data unless I push more behind it, presumably until I fill the TCP buffer.
So my question is: if I want to send a small amount of data via TCP, do I need to append garbage to the end to force the socket to send (I wouldn't feel good about this), or is there an alternative way I can force the front segment of the stream to send?
Thanks in advance. I am still learning TCP, so excuse the ignorance.
Don't set NoDelay unless you are an expert at TCP/IP and understand its full ramifications. If you haven't read Stevens, don't even think about it.
Here's an example question: if you establish a socket connection and send 8 bytes on it, are the 8 bytes immediately sent or does the Nagle algorithm wait for more data to send? The answer is "the 8 bytes are sent immediately" - but don't consider messing with Nagle until you understand exactly why that is the answer.
Here's another question: in a standard command/response protocol, how much Nagle delay is applied to each packet? The answer: none. Again, you should research why Nagle causes no delays in this common scenario.
If you're not seeing the data sent within 250 milliseconds (the maximum delay caused by Nagle in the worst possible scenario), then something else is wrong.
You could set the NoDelay property (I think that's what disables Nagle).
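For reference, it is one property on the socket (also exposed on TcpClient):
socket.NoDelay = true;    // disables Nagle's algorithm on a Socket
tcpClient.NoDelay = true; // same thing via TcpClient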

Receiving packets in UDP

Let's say my program sends 1000 bytes over the network (UDP). Is it guaranteed that the receiver will receive the 1000 bytes in one "batch"? Or will he perhaps need to perform several "reads" until he receives the entire message? If the latter is true, how can I ensure that the packets of the same message don't get mixed up (arrive out of order), or does the protocol guarantee it?
Edit: that is, is it possible that my message will be split into several packets? (What if I try to send a 10000 MB message, what happens then?)
You will get it all or nothing.
But there is no particular guarantee that you will receive packets exactly once in the order they were transmitted; packet loss, reordering and (less often) duplication are all possible.
There is a maximum datagram size (65,507 bytes of data); send()ing packets of larger sizes will return an error.
You must provide enough buffer to receive the entire frame in one call.
UDP packets CAN be fragmented into multiple IP fragments, but the OS will drop an incomplete packet. This is therefore transparent to the application.
The receiver will get the entire packet in one call. The packet length is limited, even in theory:
Length: a 16-bit field that specifies the length in bytes of the entire datagram: header and data. The minimum length is 8 bytes, since that's the length of the header. The field size sets a theoretical limit of 65,535 bytes (8-byte header + 65,527 bytes of data) for a UDP datagram. The practical limit for the data length, which is imposed by the underlying IPv4 protocol, is 65,507 bytes.
However, the real limit is much, much lower; it is usually safe to assume 512 bytes. See What is the largest Safe UDP Packet Size on the Internet.
UDP, unlike TCP, is not a reliable protocol. It provides no built-in mechanism to ensure that the packets arrive in the proper order, or even arrive at all. That said, you can write your send/recv routines in a lock-step fashion, where every time a packet is sent, the sender must wait to receive an ACK before sending again. If an ACK is not received within some specified timeout, the packet must be resent. This way you ensure that packets are received in the proper order. (For more information, check out the RFC for the TFTP protocol, which uses this strategy.)
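A minimal sketch of that lock-step send loop, where block holds the current data block (the timeout, address and port are placeholders; real TFTP also numbers its blocks and ACKs):
using System.Net;
using System.Net.Sockets;

UdpClient udp = new UdpClient();
udp.Client.ReceiveTimeout = 2000; // ms; a timed-out Receive throws SocketException
IPEndPoint server = new IPEndPoint(IPAddress.Parse("192.0.2.1"), 69); // placeholder

bool acked = false;
while (!acked)
{
    udp.Send(block, block.Length, server); // (re)send the current block
    try
    {
        IPEndPoint from = null;
        byte[] ack = udp.Receive(ref from); // block until ACK or timeout
        acked = true;                       // got it, move on to the next block
    }
    catch (SocketException)
    {
        // Timeout: loop around and resend the same block.
    }
}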
Finally, if possible, you may want to consider using TCP instead.
Data sent using UDP is grouped into packets, so if you send x bytes, then IF the receiver receives the packet, it will receive x bytes.
However, your packets might not even arrive, or they may arrive out of order.
With UDP Lite you can request to receive partially corrupted packets. This can be useful for video and VoIP services.

C# socket abnormal latency

I am working on a little online multiplayer pong game with C# and XNA.
I use sockets to transfer data between two computers on my personal LAN. It works fine.
The issue is speed: the transfer is slow.
When I ping the second computer, it shows a latency of 2 ms.
I set up a little timer inside my code, and it shows a latency of about 200 ms.
Even when the server and the client are on the same computer (using 127.0.0.1), the latency is still about 15 ms. I consider this slow.
Here are some fragments of my code:
server = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
server.Bind(new IPEndPoint(IPAddress.Any, port));
server.Listen(1);
// Begin Accept
server.BeginAccept(new AsyncCallback(ClientAccepted), null);
In ClientAccepted, I set up a NetworkStream, a StreamReader and a StreamWriter.
This is how I send a message when I want to update the player's location :
string message = "P" + "\n" + player.Position + "\n";
byte[] data = Encoding.ASCII.GetBytes(message);
ns.BeginWrite(data, 0, data.Length, new AsyncCallback(EndUpdate), null);
The only thing EndUpdate does is call EndWrite.
This is how I receive data :
message = sr.ReadLine();
It doesn't block my game, since it's on a second thread.
These are the things I tried:
- Use IP instead of TCP
- Use binary messages instead of text
- Use IPv6 instead of IPv4
Nothing really helped.
Any thoughts about how I can improve the latency?
Thank you
The latency you're seeing is most likely due to Nagle's algorithm, which is used to improve efficiency when sending lots of small packets over TCP. Essentially, the TCP stack collects small packets together and coalesces them into a larger packet before transmitting. This obviously means delaying for a small interval (up to 200 ms) to see if any more data is sent, before sending what's waiting in the output buffers.
So try switching off Nagle by setting the NoDelay property to true on the socket.
However, be aware that if you do a LOT of small writes to the socket, you may lose performance because of the TCP header overhead and the per-packet processing. In that case, you'd want to collect your writes together into batches where possible.
There are several other meta-issues to consider:
Timer accuracy: many system timers only update every 15 ms or so. I can't remember the precise value, but if your timings are always multiples of about 15 ms, be suspicious that your timer is not high-precision. This is probably not your issue, but keep it in mind.
If you're testing on the same computer and it's a single-core machine, the thread/process switching frequency will dominate your ping times. There's not much to do but try adding Sleep(0) calls to allow thread/process swaps after sending data.
TCP/IP transmission settings, as alluded to earlier with SocketOptionName.NoDelay. Also, as mentioned earlier, consider UDP if you're continuously updating state. Order isn't guaranteed, but for volatile data missed packets are acceptable, as the state will be overridden soon anyway. To avoid acting on stale packets, add an incrementing value to each packet and never use a packet labelled earlier than the current state (see the sketch after this list).
The difference between TCP and UDP should not account for 200 ms, but a combination of other factors can.
Polling vs. Select: if you're polling for received data, the frequency of polling will naturally interfere with the receive rate. I use Socket.Select in a dedicated network thread to wait for incoming data and handle it as soon as possible.
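A sketch of the stale-packet check from the third point above (the 4-byte sequence header and the ApplyState handler are hypothetical):
using System;

uint lastSeq = 0;

void OnPacket(byte[] packet)
{
    uint seq = BitConverter.ToUInt32(packet, 0); // assumed 4-byte sequence header
    if (seq <= lastSeq)
        return;        // stale: a newer state update has already been applied
    lastSeq = seq;
    ApplyState(packet); // hypothetical: apply the rest of the payload
}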
Most networked games don't use TCP but UDP for communications. The problem with that is that data can be easily dropped, which is something you have to account for.
With TCP, there are a number of interactions between the hosts to guarantee that data arrives in order (another thing you have to account for when using UDP: the fact that data is not ordered).
On top of that, the timer in your code has to wait until the bytes bubble up from the unmanaged socket layer through unmanaged code, and so on. There is going to be some overhead there that you aren't going to be able to overcome (the unmanaged-to-managed transition).
You said you tried "IP", and you presumably did that by specifying ProtocolType.Ip, but I don't know what that really means (i.e. I don't know what protocol is chosen 'under the hood') when you combine that with SocketType.Stream.
As well as what casperOne said, if you're using TCP then see whether setting SocketOptionName.NoDelay makes any difference to you.
