UDP Client receives only 1 message - C#

I have a server-client application that I am currently working on. The server receives data fine over a WAN, and the client seems to receive data too, but only one communication ever arrives. Is there anything about a WAN that would make a client always receive the first return UDP communication and none of the subsequent ones? Thanks for the help.
Client UDP listening code:
private void receiveUDP()
{
    System.Net.IPEndPoint test = new System.Net.IPEndPoint(System.Net.IPAddress.Any, UDP_PORT_NUMBER);
    System.Net.EndPoint serverIP = (System.Net.EndPoint)test;
    server.Bind(serverIP);
    //server.Ttl = 50;
    EndPoint RemoteServ = (EndPoint)listenUDP;
    do
    {
        byte[] content = new byte[1024];
        int data = server.ReceiveFrom(content, ref RemoteServ);
        // Use only the bytes actually received; GetString(content) alone
        // would include the unused remainder of the 1024-byte buffer.
        string message = Encoding.ASCII.GetString(content, 0, data);
        ProcessCommands(message);
    } while (true);
}

This is a bit of a stab in the dark (since you don't provide enough code to say definitively what's going on), but there's one major reason why you might consistently see some UDP datagrams fail to arrive over a WAN while others always arrive successfully. That reason is MTU: the Maximum Transmission Unit which can be sent in a single UDP datagram. This can easily produce behaviour such as what you're seeing if, for example, your first datagram is a short "I accept your connection" message and you then follow it with datagrams containing large files; the first (small) datagram is smaller than the MTU and is delivered, while the following (large) datagrams are larger than the MTU and are discarded en route.
For UDP over a WAN, the MTU will not be higher than about 1500 bytes, and in many situations may be as low as 1200 bytes. Any datagram larger than that will be silently dropped somewhere between the endpoints. To send large blocks of data via UDP, you need to chop them up into pieces smaller than the MTU of the network segment across which you're transmitting them.
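As a rough sketch of that chunking idea (the helper name SendChunked and the 1200-byte payload size are my assumptions, not anything from the question's code), splitting a buffer into MTU-safe datagrams might look like this:

using System;
using System.Net;
using System.Net.Sockets;

static class UdpChunkSender
{
    // A conservative payload size that stays under typical WAN MTUs
    // once IP and UDP headers are accounted for.
    const int MaxPayload = 1200;

    // Hypothetical helper: splits data into MTU-safe datagrams.
    public static void SendChunked(UdpClient client, IPEndPoint target, byte[] data)
    {
        for (int offset = 0; offset < data.Length; offset += MaxPayload)
        {
            int size = Math.Min(MaxPayload, data.Length - offset);
            byte[] chunk = new byte[size];
            Buffer.BlockCopy(data, offset, chunk, 0, size);
            client.Send(chunk, chunk.Length, target);
        }
    }
}

Note that this only addresses datagram size; UDP still gives no delivery or ordering guarantees, so a real protocol would also need sequence numbers and retransmission on top.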
On a LAN, you can usually get away with sending datagrams of any size. But as soon as they're sent over the Internet or otherwise through heterogeneous networks, they're likely to be silently discarded.
If you do need to send large files, you might choose to transmit them via TCP instead; TCP automatically manages chopping data up to fit within the MTU, and ensures that its packets are all received and received in order: guarantees that you will not get from datagrams sent via UDP.
As I mentioned above, this is a complete stab in the dark and may not be related to your troubles at all. But it is the elephant in the room when all we have to go on is that the first packet always arrives successfully and later packets never do.

Related

How does a TCP packet arrive when using the Socket API in C#

I have been reading about TCP packets and how they can be split up any number of times during their voyage. I took this to mean I would have to implement some kind of buffer on top of the buffer used for the actual network traffic, in order to store the data from each ReceiveAsync() until enough is available to parse a message. BTW, I am sending length-prefixed, protobuf-serialized messages over TCP.
Then I read that the lower layers (Ethernet? IP?) will actually re-assemble packets transparently.
My question is, in C#, am I guaranteed to receive a full "message" over TCP? In other words, if I send 32 bytes, will I necessarily receive those 32 bytes in "one-go" (one call to ReceiveAsync())? Or do I have to "store" each receive until the number of bytes received is equal to the length-prefix?
Also, could I receive more than one message in a single call to ReceiveAsync()? Say one "protobuf message" is 32 bytes. I send 2 of them. Could I potentially receive 48 bytes in "one go" and then 16 in another?
I know this question shows up easily on Google, but I can never tell whether the answers are in the correct context (talking about the actual TCP protocol, or about how C# exposes network traffic to the programmer).
Thanks.
TCP is a stream protocol - it transmits a stream of bytes. That's all. Absolutely no message framing / grouping is implied. In fact, you should forget that Ethernet packets or IP datagrams even exist when writing code using a TCP socket.
You may find yourself with 1 byte available, or 10,000 bytes available to read. The beauty of the (synchronous) Berkeley sockets API is that you, as an application programmer, don't need to worry about this. Since you're using a length-prefixed message format (good job!), simply recv() as many bytes as you're expecting. If there are more bytes available than the application requests, the kernel will keep the rest buffered until the next call. If there are fewer bytes available than required, the thread will either block or the call will indicate that fewer bytes were received; in that case, you can simply wait and read again until the data is available.
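As a sketch of that pattern (the helper name ReceiveExact is mine; it assumes an already connected Socket), a blocking loop that keeps calling Receive until the requested byte count has arrived:

using System.Net.Sockets;

static class SocketExtensions
{
    // Hypothetical helper: blocks until exactly `count` bytes have been read,
    // looping because a single Receive may return fewer bytes than requested.
    public static void ReceiveExact(this Socket socket, byte[] buffer, int count)
    {
        int received = 0;
        while (received < count)
        {
            int n = socket.Receive(buffer, received, count - received, SocketFlags.None);
            if (n == 0) // the peer closed the connection mid-message
                throw new SocketException((int)SocketError.ConnectionReset);
            received += n;
        }
    }
}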
The problem with async APIs is that they require the application to track a lot more state itself. Even this Microsoft example of Asynchronous Client Sockets is far more complicated than it needs to be. With async APIs you still control the amount of data you're requesting from the kernel, but when your async callback fires, you then need to know the next amount of data to request.
Note that C#'s async/await in .NET 4.5 makes asynchronous processing easier, as you can write it in a synchronous style. Have a look at this answer, where the author comments:
Socket.ReceiveAsync is a strange one. It has nothing to do with async/await features in .net4.5. It was designed as an alternative socket API that wouldn't thrash memory as hard as BeginReceive/EndReceive, and only needs to be used in the most hardcore of server apps.
TCP is a stream-based octet protocol. So, from the application's perspective, you can only read or write bytes to the stream.
I have been reading about TCP packet and how they can be split up any number of times during their voyage.
TCP packets are a network implementation detail. They're used for efficiency (it would be very inefficient to send one byte at a time). Packet fragmentation is done at the device driver / hardware level, and is never exposed to applications. An application never knows what a "packet" is or where its boundaries are.
I took this to assume I would have to implement some kind of buffer on top of the buffer used for the actual network traffic in order to store each ReceiveAsync() until enough data is available to parse a message.
Yes. Because "message" is not a TCP concept. It's purely an application concept. Most application protocols do define a kind of "message" because it's easier to reason about.
Some application protocols, however, do not define the concept of a "message"; they treat the TCP stream as an actual stream, not a sequence of messages.
In order to support both kinds of application protocols, TCP/IP APIs have to be stream-based.
BTW, I am sending length-prefixed, protobuf-serialized messages over TCP.
That's good. Length prefixing is much easier to deal with than the alternatives, IMO.
My question is, in C#, am I guaranteed to receive a full "message" over TCP?
No.
Or do I have to "store" each receive until the number of bytes received is equal to the length-prefix? Also, could I receive more than one message in a single call to ReceiveAsync()?
Yes, and yes.
Even more fun:
You can get only part of your length prefix (assuming a multi-byte length prefix).
You can get any number of messages at once.
Your buffer can contain part of a message, or part of a message's length prefix.
The next read may not finish the current message, or even the current message's length prefix.
For more information on the details, see my TCP/IP .NET FAQ, particularly the sections on message framing and some example code for length-prefixed messages.
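As a minimal async sketch of length-prefixed framing (my own illustration, not the FAQ's code; it assumes a 4-byte little-endian length prefix and an already connected NetworkStream):

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading.Tasks;

static class Framing
{
    public static async Task<byte[]> ReadMessageAsync(NetworkStream stream)
    {
        // Even the 4-byte prefix may arrive split across reads.
        byte[] prefix = await ReadExactAsync(stream, 4);
        int length = BitConverter.ToInt32(prefix, 0);
        // The body may likewise arrive in any number of pieces.
        return await ReadExactAsync(stream, length);
    }

    static async Task<byte[]> ReadExactAsync(NetworkStream stream, int count)
    {
        byte[] buffer = new byte[count];
        int read = 0;
        while (read < count)
        {
            int n = await stream.ReadAsync(buffer, read, count - read);
            if (n == 0) throw new EndOfStreamException("Connection closed mid-message.");
            read += n;
        }
        return buffer;
    }
}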
I strongly recommend using only asynchronous APIs in production; the synchronous alternative of having two threads per connection negatively impacts scalability.
Oh, and I also always recommend using SignalR if possible. Raw TCP/IP socket programming is always complex.
My question is, in C#, am I guaranteed to receive a full "message" over TCP?
No. You will not receive a full message. A single send does not result in a single receive. You must keep reading on the receiving side until you have received everything you need.
See the example here; it keeps the read data in a buffer and keeps checking whether there is more data to be read:
private static void ReceiveCallback(IAsyncResult ar)
{
    try
    {
        // Retrieve the state object and the client socket
        // from the asynchronous state object.
        StateObject state = (StateObject)ar.AsyncState;
        Socket client = state.workSocket;

        // Read data from the remote device.
        int bytesRead = client.EndReceive(ar);
        if (bytesRead > 0)
        {
            // There might be more data, so store the data received so far.
            state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

            // Get the rest of the data.
            client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                new AsyncCallback(ReceiveCallback), state);
        }
        else
        {
            // All the data has arrived; put it in response.
            if (state.sb.Length > 1)
            {
                response = state.sb.ToString();
            }
            // Signal that all bytes have been received.
            receiveDone.Set();
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.ToString());
    }
}
See this MSDN article and this article for more details. The second link goes into more detail and also has sample code.

TCP message arrival on the same socket

I've got two services that communicate using a TCP socket (the one that initiates the connection is a C++ Windows service, and the receiver uses a C# TCP stream), and they might not use the same TCP connection all the time. Some of the time I'm getting half messages (where the number of bytes is miscalculated somehow) under heavy network load.
I have several questions in order to resolve the issue:
Can I be sure that messages (not packets...) follow one another? For example, if I send message 1 (which was received as a half), then message 2, can I be sure that the second half of message 1 won't arrive after message 2?
Please answer separately for the case where all messages share the same TCP connection and the case where they do not.
Is there any difference whether the sender and the receiver are on the same station?
TCP is a stream protocol that delivers a stream of bytes. It does not know (or care) about your message boundaries.
To send the data, TCP breaks the stream up into arbitrarily sized packets [it's not really arbitrary, and it is really complicated] and sends these reliably to the other end.
When you read data from a TCP socket, you get whatever data 1) has arrived, and 2) will fit in the buffer you have provided.
On the receive side you need to use code to reassemble complete messages from the TCP stream. You may have to write this yourself or you may find that it is already written for you by whatever library (if any) you are using to interact with the socket.
TCP is a streaming protocol; it doesn't have "messages" or "packets" or other boundaries. Sometimes when you receive data you might not get all that was sent; other times you could get more than one "message" in the received stream.
You have to design your protocol to handle these things, for example by including special message terminators or a fixed-size header that includes the data size.
If you don't get all of a message at once, then you have to receive again, maybe even multiple times, to get all the data that was sent.
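As an illustration of the terminator approach (a sketch of my own, assuming newline-terminated ASCII messages; the class name is hypothetical):

using System.Collections.Generic;
using System.Text;

// Accumulates raw receive buffers and returns only complete,
// newline-terminated messages; the incomplete tail is kept for later.
class LineAssembler
{
    private readonly StringBuilder _pending = new StringBuilder();

    public List<string> Feed(byte[] data, int count)
    {
        _pending.Append(Encoding.ASCII.GetString(data, 0, count));
        var messages = new List<string>();
        string all = _pending.ToString();
        int start = 0, nl;
        while ((nl = all.IndexOf('\n', start)) >= 0)
        {
            messages.Add(all.Substring(start, nl - start));
            start = nl + 1;
        }
        _pending.Clear();
        _pending.Append(all, start, all.Length - start); // keep the incomplete tail
        return messages;
    }
}

Each call to Feed takes whatever one receive returned; the caller processes the complete messages it gets back and simply calls Feed again after the next receive.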
And to answer your question: you can be sure that the stream is received in the order it was sent, or the TCP layer will give you an error. If byte A was sent before byte B, it's guaranteed that they will arrive in that order. There is also no theoretical difference whether the sender and receiver are on the same system or on different continents.

C# Receiving Packet in System.Sockets

In my client-server application I wondered how to make a packet and send it to the server via the client.
Then, on the server, I recognize which packet it is and send the proper reply.
But then I came across the topic below, and it made me worry that I might run into this problem:
The Problem
One of the most common beginner mistakes for people designing
protocols for TCP/IP is that they assume that message boundaries are
preserved. For example, they assume a single "Send" will result in a
single "Receive".
Some TCP/IP documentation is partially to blame. Many people read
about how TCP/IP preserves packets - splitting them up when necessary
and re-ordering and re-assembling them on the receiving side. This is
perfectly true; however, a single "Send" does not send a single
packet.
Local machine (loopback) testing confirms this misunderstanding,
because usually when client and server are on the same machine they
communicate quickly enough that single "sends" do in fact correspond
to single "receives". Unfortunately, this is only a coincidence.
This problem usually manifests itself when attempting to deploy a
solution to the Internet (increasing latency between client and
server) or when trying to send larger amounts of data (requiring
fragmentation). Unfortunately, at this point, the project is usually
in its final stages, and sometimes the application protocol has even
been published!
True story: I once worked for a company that developed custom client/server software.
The original communications code had made this
common mistake. However, they were all on dedicated networks with
high-end hardware, so the underlying problem only happened very
rarely. When it did, the operators would just chalk it up to "that
buggy Windows OS" or "another network glitch" and reboot. One of my
tasks at this company was to change the communication to include a lot
more information; of course, this caused the problem to manifest
regularly, and the entire application protocol had to be changed to
fix it. The truly amazing thing is that this software had been used in
countless 24x7 automation systems for 20 years; it was fundamentally
broken and no one noticed.
So how could I send something like an AUTH_CALC,VALUE1=10,VALUE2=12 packet and receive it from the server in a safe way?
And if you want an example of what I am doing, here it is below:
[CLIENT]
Send(Encoding.ASCII.GetBytes("1001:UN=user123&PW=123456")); // 1001 is the ID

[SERVER]
private void OnReceivePacket(byte[] arg1, Wrapper Client)
{
    try
    {
        int ID;
        string V = Encoding.ASCII.GetString(arg1).Split(':')[0];
        int.TryParse(V, out ID);
        switch (ID)
        {
            case 1001: // Login packet
                AppendToRichEditControl("LOGIN PACKET RECEIVED");
                break;
            case 1002:
                // Other IDs
                break;
            default:
                break;
        }
    }
    catch { }
}
So, is this a good way to structure a message and handle it on the server?
Also, which encoding is better to use: ASCII or UTF-8?
The best approach is to use a length indicator. Suppose you are sending a file of 10,000 bytes: first send the length of the file and receive an ack (an "OK" string) from the other side, then keep sending the bytes chunk by chunk (4096-byte chunks, say): send 4096 bytes at a time twice, then the remaining roughly 2,000 bytes in the last chunk. On the receiving side there is no guarantee that one receive will deliver a whole 4096 bytes, so you need to keep reading until you have 4096 bytes and then proceed with the next 4096 bytes.
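A sketch of the sending side of that idea (the 8-byte length header and 4096-byte chunk size are my assumptions, and the answer's ack step is omitted for brevity):

using System;
using System.IO;
using System.Net.Sockets;

static class FileSender
{
    public static void SendFile(NetworkStream stream, string path)
    {
        using (FileStream file = File.OpenRead(path))
        {
            // 8-byte length prefix so the receiver knows how much to expect.
            byte[] header = BitConverter.GetBytes(file.Length);
            stream.Write(header, 0, header.Length);

            byte[] buffer = new byte[4096];
            int read;
            while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                stream.Write(buffer, 0, read); // TCP handles segmentation and ordering
        }
    }
}

The receiver would read the 8-byte header first, then loop on its receive call until that many bytes have been accumulated.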

How can I send data over the internet using a socket?

I would like to send data over the internet through a desktop application. I know a little bit about sockets. I have transferred data within the LAN, but now I want to transfer it over the internet. What is the best way to transfer both large and small quantities of data?
My system is connected to a server which has access to the internet. My system's IP address is dynamic. I don't know how to send data to another system connected to the internet. Do I need to find the router address? (My IP address is generated as 192.168.1.15.)
Is using a socket enough, or is HTTP required?
A socket is enough if no firewalls/proxies are involved.
But as the Internet is involved (not the fastest connection), I suggest that for the sake of convenience you opt for remoting over HTTP. That way, even if the setup changes in the future and firewalls/proxies enter the equation, you won't need to worry.
If all you want to do is transfer raw data from one machine to another it's very easy to do using a TCP socket.
Here's a quick example.
Server:
ThreadPool.QueueUserWorkItem(StartTCPServer);

private static void StartTCPServer(object state) {
    TcpListener tcpServer = new TcpListener(IPAddress.Parse("192.168.1.15"), 5442);
    tcpServer.Start();
    TcpClient client = tcpServer.AcceptTcpClient();
    Console.WriteLine("Client connection accepted from " + client.Client.RemoteEndPoint + ".");

    StreamWriter sw = new StreamWriter("destination.txt");
    byte[] buffer = new byte[1500];
    int bytesRead = 1;
    while (bytesRead > 0) {
        bytesRead = client.GetStream().Read(buffer, 0, 1500);
        if (bytesRead == 0) {
            break;
        }
        sw.BaseStream.Write(buffer, 0, bytesRead);
        Console.WriteLine(bytesRead + " written.");
    }
    sw.Close();
}
Client:
StreamReader sr = new StreamReader("source.txt");
TcpClient tcpClient = new TcpClient();
tcpClient.Connect(new IPEndPoint(IPAddress.Parse("192.168.1.15"), 5442));
byte[] buffer = new byte[1500];
long bytesSent = 0;
while (bytesSent < sr.BaseStream.Length) {
    int bytesRead = sr.BaseStream.Read(buffer, 0, 1500);
    tcpClient.GetStream().Write(buffer, 0, bytesRead);
    Console.WriteLine(bytesRead + " bytes sent.");
    bytesSent += bytesRead;
}
tcpClient.Close();
Console.WriteLine("finished");
Console.ReadLine();
More information about your connection needs is required in order to give you an appropriate solution. There are many protocols at your disposal and there are trade-offs for all of them. You will probably choose one of these two transport layers:
UDP - This is a send-and-forget method of sending packets. Good for streaming media that doesn't necessarily have to be 100% correct.
The good:
No connection required.
Very lightweight.
The bad:
No guarantee of your packet reaching the destination (although most of the time they make it).
Packets can arrive out of the order in which you sent them.
No guarantee that their contents are the same as when you sent the packet.
TCP - This is a connection-based protocol that guarantees predictable behavior.
The good:
You will know for sure whether the packet has reached the destination or not.
Packets will arrive in the order you sent them.
You are guaranteed that 99.999999999% of the time your packets will arrive with their contents unaltered.
Flow control - if the machine sending packets is sending too quickly, the receiving machine is able to throttle the sender's packet-sending rate.
The bad:
Requires a connection to be established.
Considerably more overhead than UDP.
The list of pros and cons is by no means complete but it should be enough information to give you the ability to make an informed decision. If possible, you should take advantage of application layer-based protocols that already exist, such as HTTP if you are transferring ASCII text, FTP if you are transferring files, and so on.
You can do it with .NET's Socket class, or you can work with the more convenient TcpClient class.
First, though, you need to figure out what server you intend to communicate with. Is it an HTTP server or an FTP server? Both HTTP and FTP are application-level protocols implemented on top of (using) sockets, which are really a transport-layer interface.
Your local IP address, or the address of the router, really doesn't matter. You do, however, need to know the IP address of the remote host you intend to connect to. You can obtain this by calling:
IPHostEntry host = Dns.GetHostEntry(hostname);
You might also want to think about other issues when working with sockets, such as using timeouts to mask failure, the possibility of resuming uploads/downloads when transferring large files, and so on. If you spend some time looking on the net, you should be able to find higher-level HTTP/FTP APIs that will let you work with file transfers much more easily.
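For instance, a sketch of basic timeout settings on a TcpClient (the host, port, and five-second values are placeholders; these timeouts apply to the synchronous Read/Write calls):

using System.Net.Sockets;

TcpClient client = new TcpClient();
client.ReceiveTimeout = 5000; // ms; a blocking Read now throws instead of waiting forever
client.SendTimeout = 5000;    // ms; a blocking Write throws if the peer stops reading
client.Connect("example.com", 80);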
Judging by your question, you seem pretty new to sockets, so reading this might also help.
In your question you mix different things. Sockets are an abstraction for network communication. You will certainly need a socket to communicate over the network, though you may not see that a socket is being used (as in a web browser). HTTP is a communication protocol; it is what goes through the communication channel.
Visual Studio has a lot of well-made facilities for creating and consuming SOAP XML web services. I'd look into it if I were you. Sure, there is some overhead, but coding against it is extremely easy.
Of course, I'm not sure how well that would scale if you had to transfer, say, tens or hundreds of megabytes of data across slow internet connections. It does offer asynchronous I/O, but I don't think you can get a progress indicator, and there most definitely isn't resume functionality.
Added: You can also continue using your socket. There is no extra work involved in connecting to a server across the internet. Just specify the server's IP address, and away you go. Your OS will take care of all the gory details like routers, missing packets, etc.
First you should decide which protocol you want to use, TCP or UDP. Then you have two options: 1. use Socket (lower level), or 2. use a class like TcpClient or UdpClient (which represents a slightly higher abstraction).
I'd suggest the second option, at least to begin with.
What you need to know depends heavily on many parts of your infrastructure.
If you want to send data to a server that is transparently connected to the internet, it is as easy as connecting to its IP address.
If you want to connect to some friend behind a broadband connection, things get tricky: you usually have to configure both of your routers (or at least the target one) for NAT.
Familiarize yourself with NAT and the basics of IP routing.
The details you provided are not sufficient to describe exactly what you want to do.

How does NetworkStream work in two directions?

I've read an example of a TCP echo server, and some things are unclear to me.
TcpClient client = null;
NetworkStream netStream = null;
try {
    client = listener.AcceptTcpClient();
    netStream = client.GetStream();
    int totalBytesEchoed = 0;
    while ((bytesRcvd = netStream.Read(rcvBuffer, 0, rcvBuffer.Length)) > 0) {
        netStream.Write(rcvBuffer, 0, bytesRcvd);
        totalBytesEchoed += bytesRcvd;
    }
    netStream.Close();
    client.Close();
} catch {
    netStream.Close();
}
When the server receives a packet (in the while loop), it reads the data into rcvBuffer and writes it back to the stream.
What confuses me is the chronological order of messages in the communication. Is the data written with netStream.Write() sent immediately to the client (who may even still be sending), or only after the data the client has already written to the stream has been processed?
The following question may clarify the previous one: if a client sends some data by writing to the stream, is that data moved to a message queue on the server side, waiting to be read, so that the stream is actually "empty"? That would explain why the server can immediately write to the stream: because the data coming from the stream is actually buffered elsewhere...?
A TCP connection is, in principle, full duplex. So you are dealing with 2 separate channels, and yes, both sides could be writing at the same time.
Hint: The method call NetworkStream.Read is blocking in that example.
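To make the full-duplex point concrete, here is a small sketch of my own (assuming an already connected NetworkStream) where the two directions are serviced by independent tasks:

using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

static class DuplexDemo
{
    public static async Task RunAsync(NetworkStream stream)
    {
        // Writing on one channel...
        Task writer = Task.Run(async () =>
        {
            byte[] hello = Encoding.ASCII.GetBytes("hello\n");
            await stream.WriteAsync(hello, 0, hello.Length);
        });

        // ...while reading on the other, concurrently.
        Task reader = Task.Run(async () =>
        {
            byte[] buffer = new byte[1024];
            int n = await stream.ReadAsync(buffer, 0, buffer.Length);
            // n bytes (possibly fewer than the peer sent) are now in buffer.
        });

        await Task.WhenAll(writer, reader);
    }
}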
The book is absolutely correct: raw access to TCP streams does not imply any sort of extra "chunking"; in this example, for instance, a single byte could easily be processed at a time. However, performing the reading and writing in batches (normally with exposed buffers) allows more efficient processing (often as a result of fewer system calls). The network layer and network hardware also employ their own forms of buffers.
There is actually no guarantee that data written with Write() will be written before more Reads() successfully complete: even if data is flushed in one layer, that does not imply it is flushed in another, and there is absolutely no guarantee that the data has made its way back to the client. This is where higher-level protocols come into play.
With this echo example, the data is simply shoved through as fast as it can be. Both the Write and the Read will block based upon the underlying network stack (the send and receive buffers in particular), each with its own series of buffers.
[This simplifies things a bit, of course; one could always look at the TCP protocol itself, which does impose transmission characteristics on the actual packet flow.]
You are right that technically, when performing a Read() operation, you are not reading bits off the wire. You are basically reading buffered data (chunks received by TCP and arranged in the correct order). When sending, you can call Flush(), which in theory should send data immediately, but modern TCP stacks have a bit of logic for gathering data into appropriately sized packets and bursting them onto the wire.
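That gathering logic is Nagle's algorithm; when latency matters more than throughput, .NET lets you turn it off (a one-line sketch; the endpoint values are placeholders):

using System.Net.Sockets;

TcpClient client = new TcpClient("example.com", 5442);
client.NoDelay = true; // disable Nagle's algorithm: small writes go out immediately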
As Henk Holterman explained, TCP is a full-duplex protocol (if supported by all the underlying infrastructure), so sending and receiving data is more a matter of when your server/client reads and writes data. It's not as if, when your server sends data, the client will read it immediately; the client can be sending its own data and only then perform a Read(). In that case the data stays in the network buffer longer, and can be discarded after some time if no one wants to read it. At least, that's what I've experienced when dealing with my super-duper server/client library (-:
