C# Async Sockets Concept

These are my listening and connection-accepting functions:

Socket Listen;

public void StartListening()
{
    IPEndPoint ep = new IPEndPoint(IPAddress.Any, PortNumber);
    Listen.Bind(ep);
    Listen.Listen(10);
    Listen.BeginAccept(new AsyncCallback(NewConnection), null);
}

public void NewConnection(IAsyncResult asyn)
{
    Socket Accepted = Listen.EndAccept(asyn);
    Listen.BeginAccept(new AsyncCallback(NewConnection), null);
    SomeFunction(Accepted);
}
The code works fine; I traced it to see how it handles different clients and I understand the flow. However, I don't understand how one socket can serve different clients. Does it time-multiplex between the clients over the socket?
I read on MSDN that Accepted in my code can only be used for the established connection and for nothing further, and that is the part I don't understand. What actually happens when a client tries to connect to the server socket? Does EndAccept return a totally different socket with a different port to establish the connection, while the same socket keeps listening to accept more requests at the same time?

What you've said is mostly correct. The Accepted socket is not the same as Listen. After EndAccept you can kick off another BeginAccept async call with your listen socket, and you can use the newly created socket to communicate with your remote peer.
One correction, though: the accepted socket does not get a different local port. It shares the listener's local port, and the OS tells connections apart by the full (local IP, local port, remote IP, remote port) tuple. To verify, compare the endpoints of the listening and accepted sockets: their local ports are the same, while each accepted socket has a distinct remote endpoint.
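You can inspect the endpoints yourself with a minimal, self-contained loopback sketch (the class name and use of an ephemeral port here are illustrative choices, not from the question):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class AcceptDemo
{
    // Accept one loopback connection and return the three endpoints involved:
    // the listener's local endpoint, and the accepted socket's local and
    // remote endpoints.
    public static (IPEndPoint ListenerLocal, IPEndPoint AcceptedLocal, IPEndPoint AcceptedRemote) Run()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0)); // port 0 = pick a free port
        listener.Listen(1);
        var listenEp = (IPEndPoint)listener.LocalEndPoint;

        var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect(listenEp);
        Socket accepted = listener.Accept(); // a brand-new socket, distinct from listener

        var result = ((IPEndPoint)listener.LocalEndPoint,
                      (IPEndPoint)accepted.LocalEndPoint,
                      (IPEndPoint)accepted.RemoteEndPoint);
        accepted.Close();
        client.Close();
        listener.Close();
        return result;
    }

    static void Main()
    {
        var (listenerLocal, acceptedLocal, acceptedRemote) = Run();
        // The accepted socket reuses the listener's local port; only the
        // remote endpoint distinguishes one client connection from another.
        Console.WriteLine($"listener local:  {listenerLocal}");
        Console.WriteLine($"accepted local:  {acceptedLocal}");
        Console.WriteLine($"accepted remote: {acceptedRemote}");
    }
}
```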

Related

Managed Sockets not behaving as (I) expected

So I've been having a tough time finding documentation on exactly how sockets should behave when you have two bound to the same endpoint, but one of them is also connected to a remote endpoint.
The sockets are UDP IPv4, running on .NET Core 2.2/3 on Linux x64.
What I have been able to gather from various sources is that the connected socket should always and only receive datagrams from the endpoint it is connected to, and the "unconnected" socket will receive everything else.
I vaguely remember reading that the kernel socket implementation assigns "points" to each socket when a datagram arrives, and the socket with the higher score (most specific match) gets the data. If two sockets get the same score, the datagrams are "load balanced" between them.
I put together a small test:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class Program
{
    static void Main(string[] args)
    {
        var localEp = new IPEndPoint(IPAddress.Loopback, 1114);
        var remoteEp = new IPEndPoint(IPAddress.Loopback, 1115);

        // Socket bound to local EP, not connected; should receive from everyone
        var notConnected = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        notConnected.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        notConnected.Bind(localEp);

        // Socket bound and connected; should receive only from its remote EP
        var connected = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        connected.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        connected.Bind(localEp);
        connected.Connect(remoteEp);

        var notConnectedTask = Task.Run(() => Receive(notConnected, "Not Connected"));
        var connectedTask = Task.Run(() => Receive(connected, "Connected"));

        // Remote socket to send to connected socket
        var remote1 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        remote1.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        remote1.Bind(remoteEp);
        remote1.Connect(localEp);

        // Remote socket to send to notConnected socket
        var remote2 = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        remote2.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
        remote2.Bind(new IPEndPoint(IPAddress.Loopback, 1116));
        remote2.Connect(localEp);

        for (int i = 0; i < 10; i++)
        {
            // This should be received by the connected socket only
            remote1.Send(Encoding.Default.GetBytes($"message {i} to connected socket"));
            // This should be received by the unconnected socket only
            remote2.Send(Encoding.Default.GetBytes($"message {i} to notConnected socket"));
        }
        remote1.Send(Encoding.Default.GetBytes("end"));
        remote2.Send(Encoding.Default.GetBytes("end"));

        Task.WaitAll(notConnectedTask, connectedTask);
    }

    public static void Receive(Socket sock, string name)
    {
        EndPoint ep = new IPEndPoint(IPAddress.Any, 0);
        var buf = new byte[1024];
        Console.WriteLine($"{name} is listening...");
        while (true)
        {
            var rcvd = sock.ReceiveFrom(buf, ref ep);
            var msg = Encoding.Default.GetString(buf, 0, rcvd);
            Console.WriteLine($"{name} => {msg}");
            if (msg == "end")
                return;
        }
    }
}
To my surprise and chagrin, the result was nothing close to what I expected:
Connected is listening...
Not Connected is listening...
Connected => message 0 to connected socket
Connected => message 0 to notConnected socket
Connected => message 1 to connected socket
Connected => message 1 to notConnected socket
Connected => message 2 to connected socket
Connected => message 2 to notConnected socket
Connected => message 3 to connected socket
Connected => message 3 to notConnected socket
Connected => message 4 to connected socket
Connected => message 4 to notConnected socket
Connected => message 5 to connected socket
Connected => message 5 to notConnected socket
Connected => message 6 to connected socket
Connected => message 6 to notConnected socket
Connected => message 7 to connected socket
Connected => message 7 to notConnected socket
Connected => message 8 to connected socket
Connected => message 8 to notConnected socket
Connected => message 9 to connected socket
Connected => message 9 to notConnected socket
Connected => end
Not only did the notConnected socket not receive anything, but the connected socket got everything...
So neither of my expectations seems true: no load balancing, and no point system.
I had posted a comment on SO asking this question and got a reply that almost confirmed my expectation:
I think if one is connected to a remote endpoint, then all datagrams
originating from that remote endpoint will end up at the connected
socket. The unconnected one would only catch datagrams from other
remote endpoints.
And I also have an e-mail from the OpenSSL mailing list mostly confirming it as well...
I suppose the above test should answer my question definitively, but it just seems so wrong!
Perhaps I made a mistake in the code, or I'm just missing something. I'd appreciate a bit of guidance.
Am I completely wrong about how sockets work?
EDIT
So I just re-ran my test, exactly as above, and the result is almost the same, except that the socket receiving the data is the "notConnected" socket.
Binding the notConnected socket after the connected socket also has no effect.
I vaguely remember reading that the kernel socket implementation assigns "points" to each socket when a dgram arrives, and the socket with the higher score (most specific route) gets the data. If two socket get the same score, the dgrams are "load balanced" between the sockets.
I would be curious to know where you read that. Because it's not consistent with anything I've ever read or heard about reusing socket addresses.
Rather, my understanding has always been that if you reuse a socket address, the behavior is undefined/non-deterministic. For example:
Once the second socket has successfully bound, the behavior for all sockets bound to that port is indeterminate.
When I run your test code, I get behavior opposite from which you report. In particular the "Not Connected" socket is the one that receives all of the traffic.
When I modify the code so that both sockets call Connect(), one each to each of the remote endpoint addresses, only one socket winds up getting any datagrams. This is also consistent with my understanding and the previous test. In particular, Connect() on a connectionless-protocol socket operates at the socket level, filtering out any datagrams the socket receives before the application sees them.
So, on my computer the "Not Connected" socket is the one that's getting all the traffic, and if I tell it to connect to one of the remote endpoints that is sending datagrams, then while it still receives all of the traffic, my application sees only those datagrams it asked for with the Connect() call. The other datagrams are discarded.
(As an aside: in my view, "connecting" a socket that is using a connectionless protocol should be considered simply a convenience, and should not be viewed as actually connecting the socket. The same socket can still send datagrams to other remote endpoints, via SendTo(), and the socket is still receiving traffic from other remote endpoints, your program is just not getting to see that traffic.)
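That socket-level filtering is easy to demonstrate in isolation. The sketch below is my own minimal reconstruction on loopback (not the poster's test): a UDP socket is connected to one peer, and a datagram from a different peer never reaches the application.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpConnectFilter
{
    public static string Run()
    {
        // Receiver bound to an ephemeral loopback port.
        var recv = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        recv.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        var recvEp = (IPEndPoint)recv.LocalEndPoint;

        // Two senders, each with its own source port.
        var wanted = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        wanted.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        var other = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        other.Bind(new IPEndPoint(IPAddress.Loopback, 0));

        // "Connect" the receiver to one peer: from now on, datagrams from
        // any other source address/port are filtered out before the
        // application sees them.
        recv.Connect((IPEndPoint)wanted.LocalEndPoint);

        other.SendTo(Encoding.ASCII.GetBytes("from other"), recvEp);
        wanted.SendTo(Encoding.ASCII.GetBytes("from wanted"), recvEp);

        recv.ReceiveTimeout = 2000;
        var buf = new byte[64];
        int n = recv.Receive(buf); // only the "wanted" datagram is delivered
        return Encoding.ASCII.GetString(buf, 0, n);
    }

    static void Main() => Console.WriteLine(Run()); // prints "from wanted"
}
```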
In the past I have also seen traffic to reused socket addresses delivered randomly, i.e. sometimes one socket gets the traffic and sometimes the other does. The fact that there is at least some socket that is consistently receiving the traffic is an improvement over that!
But nonetheless, I don't believe you should have any reason to ever expect SocketOptionName.ReuseAddress to work reliably. It's not documented to do so, and in my experience it does not. Both the results you report, as well as the different results I obtained with the same code, are entirely consistent with the "non-deterministic" nature of reusing socket addresses.
If you have seen anything that claims that reusing socket addresses can and/or should produce some deterministic result, I would say that reference is simply incorrect.
So reading over the Linux man pages for SO_REUSEPORT, I came across this:
For UDP sockets, the use of this option can provide better
distribution of incoming datagrams to multiple processes (or
threads) as compared to the traditional technique of having
multiple processes compete to receive datagrams on the same
socket.
So it would seem that SO_REUSEPORT is what I need, rather than SO_REUSEADDR. Which is unfortunate, because SO_REUSEPORT is not available on Windows...
Also, to confirm Peter Duniho's answer, straight from the horse's mouth:
The motivating case for so_reuseport in UDP would be something like a
DNS server. An alternative would be to recv on the same socket from
multiple threads. As in the case of TCP, the load across these
threads tends to be disproportionate and we also see a lot of
contention on the socket lock. Note that SO_REUSEADDR already allows
multiple UDP sockets to bind to the same port, however there is no
provision to prevent hijacking and nothing to distribute packets
across all the sockets sharing the same bound port.
To sum it up: multiple UDP sockets bound to the same endpoint with SO_REUSEADDR set will have undefined behavior; there is no way to tell where the data will end up.
Multiple UDP sockets bound with SO_REUSEPORT will see the datagrams distributed in a sort of "load balanced" way.
As I still don't know how one connected-and-bound socket and one merely bound socket will behave with SO_REUSEPORT, I will test my scenario above with SO_REUSEPORT and update this answer.
So this commit to the Linux kernel does in fact implement the "socket scoring" system I thought I had read about. Specifically, static int compute_score seems to take a UDP table and find the socket with the highest score for a given datagram. This should guarantee that a connected socket will receive datagrams from its endpoint, even when another socket is also bound to the same local endpoint.
This is a gist I created to test this case. It works as I had hoped, with the connected socket always receiving datagrams from its remote endpoint.
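For anyone wanting to try SO_REUSEPORT from C#: .NET has no named SocketOptionName for it, so the sketch below sets it through its raw option value. This is an assumption-laden, Linux-only sketch; the constant 15 is the Linux value of SO_REUSEPORT and is not portable to other platforms.

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class ReusePortSketch
{
    // Linux-specific: SO_REUSEPORT is socket option 15 at the SOL_SOCKET
    // level. The cast to SocketOptionName passes the raw value through
    // to setsockopt; this only makes sense on Linux.
    const int SO_REUSEPORT = 15;

    public static Socket BindReusePort(IPEndPoint ep)
    {
        var s = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        // The option must be set before Bind on every socket sharing the port.
        s.SetSocketOption(SocketOptionLevel.Socket, (SocketOptionName)SO_REUSEPORT, true);
        s.Bind(ep);
        return s;
    }

    static void Main()
    {
        // Bind the first socket to an ephemeral port, then bind a second
        // socket to the same port; with SO_REUSEPORT both binds succeed and
        // the kernel distributes incoming datagrams between the two sockets.
        using var a = BindReusePort(new IPEndPoint(IPAddress.Loopback, 0));
        var port = ((IPEndPoint)a.LocalEndPoint).Port;
        using var b = BindReusePort(new IPEndPoint(IPAddress.Loopback, port));
        Console.WriteLine($"two sockets bound to port {port}");
    }
}
```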

synchronous multiple socket client clarification

I am going to create multiple synchronous clients, and I need some explanation of the code below.
When I create a socket as below and call Connect, what happens at the network level?
I believe that when we create a socket and call Connect, a TCP/IP connection (a "tunnel") is made between the client socket and the server socket.
Once this sender socket connects to the server, that client and server have a unique tunnel between them; if I create another client, it will have another unique tunnel.
If we get an error saying the client is not connected, should we always reconnect using the existing socket (sender), so that we get back the same connection and data we had? And should we avoid creating a new socket, since that would give us a new tunnel and we would lose the previous connection and data?
Socket sender = new Socket(AddressFamily.InterNetwork,
    SocketType.Stream, ProtocolType.Tcp);
sender.Connect(remoteEndpoint);
Please clarify if I am wrong.
What you call a tunnel is really called a connection. Broken connections cannot be revived; data loss is to be expected.
When you reuse an existing socket object to connect again, you are creating a new connection. Reusing socket objects is not recommended (by me) because it is confusing.
Note that TCP does not know what a socket is; the spec does not contain that word. Sockets are an OS-level concept.
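A sketch of what reconnecting then looks like in practice: create a fresh socket for each attempt instead of reusing the old object. The retry count and delay below are arbitrary illustrative choices, not from the question.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class Reconnect
{
    // Each attempt uses a brand-new socket. A broken TCP connection cannot
    // be revived, and any unacknowledged data on the old one is lost; the
    // server simply sees the retry as a new client connection.
    public static Socket ConnectWithRetry(EndPoint remote, int attempts = 3, int delayMs = 500)
    {
        for (int i = 0; i < attempts; i++)
        {
            var s = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
            try
            {
                s.Connect(remote);
                return s; // connected: hand the new socket to the caller
            }
            catch (SocketException)
            {
                s.Close();           // discard the failed socket entirely
                Thread.Sleep(delayMs); // back off before retrying
            }
        }
        throw new SocketException((int)SocketError.TimedOut);
    }
}
```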

Uniting TCP and UDP in C#

So ideally, I am looking for a way to unite TCP and UDP on the server and manage both connections under individual client threads. Currently, I am wondering if it is possible to accept a TCP connection and set up UDP communication from it.
Here is my ideal setup:
Client connects to the server over TCP via TcpClient.Connect()
Server accepts the TCP connection via TcpListener
When the server accepts the TCP connection, it also gets the IPEndPoint from the TCP connection
and uses that to begin UDP communication with:
serverUDPSocket.BeginReceiveFrom(byteData, 0, 1024,
    SocketFlags.None, ref (EndPoint)ThatIPEndpointThatIJustMentioned,
    new AsyncCallback(client.receiveUDP),
    (EndPoint)ThatIPEndpointThatIJustMentioned);
So this is where I run into a bit of a theoretical issue: from my understanding, the endpoints for TCP and UDP would be formatted differently. How can I resolve this? I would like to avoid having the client connect over UDP on a separate thread and then uniting those threads under a single managing class.
EDIT:
Here is the code that I am trying to implement:
// Listening for TCP
TcpClient newclient = listenTCP.AcceptTcpClient(); // accept the client
Client clientr = new Client(newclient); // Client class manages the connection
clientr.actionThread = new Thread(clientr.action); // this thread manages the data flow from the client via the TCP stream
clientr.actionThread.Start(clientr);

// The sketchy part: I am trying to get the endpoint from the TCP connection
// to set up a UDP "connection". I am unsure about the compatibility, as UDP
// and TCP sockets are different.
EndPoint endPoint = newclient.Client.RemoteEndPoint;
UDPSocket.BeginReceiveFrom(new byte[1024], 0, 1024, SocketFlags.None, ref endPoint,
    new AsyncCallback(clientr.receiveUDP), null); // the AsyncCallback is like the manager thread for UDP (same as in TCP)
clients.Add(clientr);
There is no problem with creating two listeners in one application, even if they use different protocols. I suppose you are not asking whether you can do it on the same port (there is no point in doing that anyway).
However, a blocking listener consumes a thread, so it needs its own thread if the application also has a GUI or other work to do (for example, calculations in the meantime).
If you want to do everything in one thread, you must first receive a message from the first listener and then set up the second one. There is no way to run two blocking listeners in one thread at the same time, because the first listener will consume the whole thread while waiting for a message.
This was due to a lack of understanding of UDP on my part at the code level.
I ended up using the other method I described, where the server accepts initial UDP packets individually and then directs the communication (endpoint plus message) to a managing client class by comparing IP addresses.
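That demultiplexing approach can be sketched roughly as follows. This is my own minimal reconstruction, not the poster's code: one UDP socket receives everything, each datagram is routed by the sender's endpoint, and the first packet from an unknown endpoint creates a new client entry (a plain list of messages stands in for the poster's managing client class).

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using System.Text;

class UdpDemux
{
    // Per-client state, keyed by the client's remote endpoint.
    readonly Dictionary<EndPoint, List<string>> clients = new Dictionary<EndPoint, List<string>>();
    readonly Socket udp;

    public UdpDemux(int port)
    {
        udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        udp.Bind(new IPEndPoint(IPAddress.Loopback, port));
    }

    public EndPoint LocalEndPoint => udp.LocalEndPoint;

    // Receive one datagram and file it under the sender's endpoint.
    public EndPoint PumpOne()
    {
        var buf = new byte[1024];
        EndPoint sender = new IPEndPoint(IPAddress.Any, 0);
        int n = udp.ReceiveFrom(buf, ref sender);
        if (!clients.TryGetValue(sender, out var inbox))
            clients[sender] = inbox = new List<string>(); // first packet = new client
        inbox.Add(Encoding.ASCII.GetString(buf, 0, n));
        return sender;
    }

    public IReadOnlyList<string> MessagesFrom(EndPoint ep) => clients[ep];

    static void Main()
    {
        var demux = new UdpDemux(0);
        var target = demux.LocalEndPoint;

        // Two "clients", each sending from its own ephemeral port.
        var alice = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        var bob = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
        alice.SendTo(Encoding.ASCII.GetBytes("hi from alice"), target);
        bob.SendTo(Encoding.ASCII.GetBytes("hi from bob"), target);

        var first = demux.PumpOne();
        var second = demux.PumpOne();
        Console.WriteLine($"{first}: {demux.MessagesFrom(first)[0]}");
        Console.WriteLine($"{second}: {demux.MessagesFrom(second)[0]}");
    }
}
```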

Server/client sockets

Ok, so I am new to socket programming and I'm making a game that will run from a server. I am trying to get a hundred clients to run off my server. Should I make one listener instance, or one for every client? I've also tried making a hundred listeners on 100 different ports, but when I run my server I get an error while starting the listeners. The game is going to be a 3D RPG/MMORPG, though most of the game logic is in the clients. What do you think I should do?
If you are going to use TCP sockets, you should create one listener socket (i.e. create a socket, bind it to a specific port, and call Listen() on it). Then, each time you Accept a connection, you get another socket, which you use for receiving/sending data from/to that client:
// create listening socket
Socket socketListener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
IPEndPoint ipLocal = new IPEndPoint(IPAddress.Any, 30120); // use port 30120

// bind to local IP address
socketListener.Bind(ipLocal);

// start listening
socketListener.Listen(4);

while (true) // loop that accepts client connections
{
    Socket socketWorker = socketListener.Accept();
    HandleClientConnection(socketWorker); // your routine where you communicate with a client
}
Also, consider using sockets in asynchronous mode, that will be more efficient in terms of performance.
You only ever have one listener per server endpoint. The listener then creates a new socket for each accepted client; note that this socket shares the listener's local port and is identified by the remote client's address and port. It is this connection socket you actually use for communication.
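The same accept loop can also be written asynchronously, which scales better to a hundred clients. The sketch below is an illustration using the Task-based AcceptAsync rather than the BeginAccept pattern from the first question; the bounded client count is only there to keep the example finite.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncAcceptLoop
{
    // One listening socket; every accepted client gets its own socket and
    // its own task, so a slow client never blocks new accepts.
    public static async Task ServeAsync(Socket listener, Action<Socket> handle, int maxClients)
    {
        for (int i = 0; i < maxClients; i++)
        {
            Socket client = await listener.AcceptAsync(); // new socket per client
            _ = Task.Run(() =>
            {
                using (client) handle(client); // per-client work runs concurrently
            });
        }
    }

    static async Task Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 0));
        listener.Listen(100);

        var serving = ServeAsync(listener, c => Console.WriteLine($"serving {c.RemoteEndPoint}"), 1);

        // Simulate one client connecting.
        var client = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        client.Connect((IPEndPoint)listener.LocalEndPoint);
        await serving;
    }
}
```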

receiving data from tcp port

I have some problems with a device bought for signal capturing (Vilistus). Its software is supposed to send data to a TCP port (#123) during capture. I wrote C# code with a TcpListener to receive the data from that port, but the program blocks at the AcceptTcpClient() call and no data is ever received.
TcpClient client = this.tcpListener.AcceptTcpClient();
It sounds like no client is connecting to the listener. When you call AcceptTcpClient on a TcpListener, the application blocks there waiting for a client to connect, which seems to be the issue you're having.
You can get around this by calling BeginAcceptTcpClient, which frees the application to continue executing while you wait for a client. When a client then connects, a delegate is called and you can start processing the client, reading data, etc. For example:
class Comms
{
    TcpListener listener;
    TcpClient client;

    // Starts listening for a TCP client
    public void StartListener()
    {
        listener = new TcpListener(IPAddress.Any, 123);
        listener.Start(); // the listener must be started before accepting
        listener.BeginAcceptTcpClient(new AsyncCallback(ClientCallback), listener);
    }

    // Callback for when a client connects on the port
    void ClientCallback(IAsyncResult result)
    {
        listener = (TcpListener)result.AsyncState;
        try
        {
            client = listener.EndAcceptTcpClient(result);
            // From here you can access the stream etc. and read data
        }
        catch (SocketException e)
        {
            // Handle any exceptions here
        }
    }
}
Provided the client connects correctly, you will then be free to access the client's NetworkStream and read and write data. Here's a quick reference and example for you to look at:
http://msdn.microsoft.com/en-us/library/system.net.sockets.tcplistener.beginaccepttcpclient.aspx
When it comes to reading and writing data, you will have a similar lock-up problem with the client's NetworkStream Read and Write functions. For this you have two possible solutions:
1. Set a timeout for the read and write functions, via the ReadTimeout and WriteTimeout properties of the NetworkStream.
2. Use a callback method similar to BeginAcceptTcpClient, via the BeginRead and BeginWrite functions of the NetworkStream.
Personally I prefer the second option, as the first won't free the program until the timeout occurs, but both are viable and it depends which you'd prefer to implement. You can go here for more information about the NetworkStream:
http://msdn.microsoft.com/en-us/library/system.net.sockets.networkstream.aspx
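Both options might look roughly like the sketch below. ReadAsync is shown here as the modern Task-based equivalent of the BeginRead/EndRead pair described above; the buffer size and timeout are arbitrary illustrative choices.

```csharp
using System;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class StreamReadOptions
{
    // Option 1: blocking read that gives up after a timeout instead of
    // hanging forever. Read throws an IOException once the timeout expires.
    public static string ReadWithTimeout(NetworkStream stream, int timeoutMs = 5000)
    {
        stream.ReadTimeout = timeoutMs;
        var buf = new byte[1024];
        int n = stream.Read(buf, 0, buf.Length);
        return Encoding.ASCII.GetString(buf, 0, n);
    }

    // Option 2: asynchronous read; the code after await plays the role of
    // the BeginRead callback, and the program stays free in the meantime.
    public static async Task<string> ReadAsynchronously(NetworkStream stream)
    {
        var buf = new byte[1024];
        int n = await stream.ReadAsync(buf, 0, buf.Length);
        return Encoding.ASCII.GetString(buf, 0, n);
    }
}
```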
Hope this helps
It sounds strange that the device would be the client. Are you sure you should not use tcpClient.Connect() to connect to the device on port 123?
Why?
How would the device know that you have a server running? It can't keep trying to connect forever.
How would the device know that port 123 is free for your application on your PC?
