Only one use of each socket address (proto/ip/port) - c#

Good day all
Info:
Topic: Multicast
First off, I have found the solution, but I do not understand why it is the solution.
**Scope:** (with any cluttering/unnecessary code removed)
new_socket()
{
    // SND_LOCAL_IP = 10.0.0.30 - the local network adapter's IP
    // SND_MCAST_PORT = 80 - the port used to send multicast packets
    //_SND_LOCAL_EP = new IPEndPoint(SND_LOCAL_IP, SND_MCAST_PORT); <problem>
    _SND_LOCAL_EP = new IPEndPoint(SND_LOCAL_IP, 0); // <fixed>
}
init_socket()
{
    _SND_Socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    // Join the multicast group on the local adapter (note: this happens before Bind)
    _SND_Socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(_SND_MCAST_IP, _SND_LOCAL_IP));
    _SND_Socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.ReuseAddress, true);
    _SND_Socket.ExclusiveAddressUse = false;
    _SND_Socket.Bind(_SND_LOCAL_EP); // <<< ===== PROBLEM LINE =====
}
The problem:
My listener runs on a separate thread started from a form_load event, so it initializes in the same way as my SND_Socket does; however, changing the SND_Socket.Bind() port to 0 allows me to receive these multicast packets.
According to the MSDN definition, setting ExclusiveAddressUse should not alleviate this problem (since the receive and send sockets are initialized in the same way):
true if the Socket allows only one socket to bind to a specific port; otherwise, false. The default is true for Windows Server 2003 and Windows XP Service Pack 2, and false for all other versions.
and further on, this is confirmed in the Remarks:
If ExclusiveAddressUse is false, multiple sockets can use the Bind method to bind to a specific port; however only one of the sockets can perform operations on the network traffic sent to the port. If more than one socket attempts to use the Bind(EndPoint) method to bind to a particular port, then the one with the more specific IP address will handle the network traffic sent to that port.
If ExclusiveAddressUse is true, the first use of the Bind method to attempt to bind to a particular port, regardless of Internet Protocol (IP) address, will succeed; all subsequent uses of the Bind method to attempt to bind to that port will fail until the original bound socket is destroyed.
This property must be set before Bind is called; otherwise an InvalidOperationException will be thrown.
Why does
Socket.ExclusiveAddressUse = false
not allow the SND_Socket to listen on the same IP and port as the Listener_Socket, and furthermore, why does setting the port to 0 in the SND_Socket.Bind() solve this problem?

Without a good Minimal, Complete, and Verifiable Example it's impossible to know for sure what problem you're even having, never mind what the cause would be. Lacking that, some observations/comments related to the question as stated:
The ExclusiveAddressUse property affects not the socket on which it's set, but any other socket bound after that socket. It prevents any other socket from using the same port number, something it would otherwise be able to do through the ReuseAddress socket option.
The ReuseAddress socket option does affect the socket on which it's set. It's what allows a socket to bind to the same port to which some other socket on the same adapter has already been bound.
One would typically not use both of those options at the same time. Either you want the sockets to cooperate, where one allows the other to reuse the same port number, or you want to prohibit any other socket from using the same port number.
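To make the cooperative case concrete, here is a minimal sketch of two receivers sharing one port, both opting in before Bind (the port is a placeholder; whether both actually see each datagram depends on the platform and on whether the traffic is multicast):

using System.Net;
using System.Net.Sockets;

Socket MakeSharedSocket()
{
    var s = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
    // ReuseAddress lives at the Socket option level and must be set before Bind
    s.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
    s.Bind(new IPEndPoint(IPAddress.Any, 5000));
    return s;
}

Socket a = MakeSharedSocket();
Socket b = MakeSharedSocket();   // second Bind succeeds because both sockets opted in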
Binding to port 0 can in some cases alleviate issues that might otherwise occur when misusing the address-exclusivity options. With the incomplete question, I cannot infer what specific problem you are having. But binding to port 0 causes the socket to select a unique port number, which will of course avoid any problems with port number conflicts.
Other than that, the biggest issue I see in your code is that you are attempting to join the multicast group before you call Bind(). You should be doing it the other way around, i.e. bind the socket, and then join the multicast group.
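For illustration, a minimal sketch of that bind-then-join order; the group address, adapter address, and port are placeholders, not values from your code:

using System.Net;
using System.Net.Sockets;

IPAddress localIp = IPAddress.Parse("10.0.0.30");   // local adapter (placeholder)
IPAddress groupIp = IPAddress.Parse("239.0.0.1");   // multicast group (placeholder)
Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
sock.Bind(new IPEndPoint(IPAddress.Any, 5000));     // bind first...
sock.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership,
    new MulticastOption(groupIp, localIp));         // ...then join the group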
Most likely, you should not be using ReuseAddress at all. Your sockets should have unique port numbers. You may use ExclusiveAddressUse, as a preventative measure to ensure you get an exception if some code does try to bind a socket to a port that's already in use.
I recommend that you start by closely following the example found on MSDN on the documentation page for the MulticastOption Class. Once you have a working example using that code, then you can adjust the code to suit your specific needs.

Related

UdpClient.Receive from unknown (random) port

I'm using UdpClient to send a message and listen for a response, like this:
Client = gcnew UdpClient();
HostEndPoint = gcnew IPEndPoint(IPAddress::Parse("192.168.0.20"), 52381);
Client->Connect(HostEndPoint);
Client->Send(Message, Message->Length);
Bytes = Client->Receive(HostEndPoint);
I have two similar devices but they respond differently. In the first case, the destination responds on the same port as I send to. So, for example, sending with a random source port of 49542, this happens:
Request: 192.168.1.10:49542 > 192.168.1.20:52381
Response: 192.168.1.20:52381 > 192.168.1.10:49542
And with the above code I get the response as expected.
The other similar device, however, responds from a random port (which changes whenever it is power-cycled), like this:
Request: 192.168.1.10:49542 > 192.168.1.20:52381
Response: 192.168.1.20:46468 > 192.168.1.10:49542
And in this case, I do not receive the response; Receive() will time out. I believe I understand why nothing is received: the .NET docs suggest that once you use an IPEndPoint with UdpClient() or Connect(), responses from any other endpoint are filtered out. So I'm not even sure why Receive has an IPEndPoint parameter.
I have monitored the communication with WireShark and I can see the messages in both directions. So I know the device is responding I just can't figure out how to receive it in my code.
The best solution, I think, would be to receive any response that arrives at my random source port (49542 above), and perhaps additionally to filter on the destination IP, though that may not be needed. Alternatively, I could listen for any response from the destination IP on any port, since I don't see how to know what port the device will respond from.
As best I can figure out, you have to indicate a port number for Receive(IPEndPoint), which is usually the destination port of the message you sent in the first place - as it is in my code sample (which works with the first device). The random port chosen by Connect can't be listened to - that's the receiving port - but I think Receive filters on the port the device sends from, which is unknown.
I tend to think that the fact that I can't find any information about how this can be done suggests that it can't be done, because devices aren't supposed to respond from a random port. But I've discussed this issue with the manufacturer and they insist it's correct behavior.
Also note, I've tried creating a second UdpClient to listen for the response from the destination IP, but it also requires a port to be defined, and as far as I can tell there's no way to know what port to listen on. I have tried UdpClient()->Client->RemoteEndPoint, but I'm pretty sure this is the endpoint I started with, which has the known port, not the random one.
I believe I have solved this. The trick is to not use Connect, but instead to pass HostEndPoint in the Send command. This approach works with both devices.
Client = gcnew UdpClient();
HostEndPoint = gcnew IPEndPoint(IPAddress::Parse("192.168.0.20"), 52381);
// Client->Connect(HostEndPoint);
Client->Send(Message, Message->Length, HostEndPoint);
Bytes = Client->Receive(HostEndPoint);
Although it's working, I'm not completely sure what messages it will allow through. It might allow any port from the specified IP, or it might allow any response from the specified IP that is directed to the initial random source port. I'm not sure.
Also, I initially thought I might have to set HostEndPoint->Port = 0 between Send and Receive. That works, but it isn't necessary.
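For what it's worth: without Connect, a UDP socket has no remote-endpoint filter at all, so it accepts a datagram from any IP and any port, as long as it is addressed to your bound local port; the ref parameter of Receive is then filled in with the sender's actual address. A C# sketch of the same pattern (the device address and port are placeholders):

using System.Net;
using System.Net.Sockets;

UdpClient client = new UdpClient();   // unconnected: no peer filter
IPEndPoint host = new IPEndPoint(IPAddress.Parse("192.168.0.20"), 52381); // placeholder
byte[] message = { 0x01, 0x02 };
client.Send(message, message.Length, host);   // destination supplied per call
IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
byte[] reply = client.Receive(ref remote);    // remote now holds the sender's real IP/port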

How to run client and server UDP listeners on the same machine

Both client and server send and receive on a given port. In production they are on separate machines and there is no problem. In development it would be a great deal more convenient to run them on the same machine and avoid the need for deployment and setting up and tearing down a remote debug session.
I tried this
var uc = new UdpClient();
var ep = new IPEndPoint(address, port);
uc.ExclusiveAddressUse = false;
uc.Client.Bind(ep);
and it doesn't barf but I still can't bind multiple listeners to the same endpoint. After the fact I discovered that ExclusiveAddressUse defaults to false anyhow so this approach produces nothing but extra code.
Is this possible and if so how?
You obviously can't use the same port on the same machine; just use an #if directive for debug builds and change your ports accordingly.
The following might help
Client
#if DEBUG
UdpClient client = new UdpClient(34534);
#else
UdpClient client = new UdpClient();
#endif
UdpClient Constructor (Int32)
Initializes a new instance of the UdpClient class and binds it to the local port number provided.
Remarks
This constructor creates an underlying Socket and binds it to the port number from which you intend to communicate. Use this constructor if you are only interested in setting the local port number. The underlying service provider will assign the local IP address. If you pass 0 to the constructor, the underlying service provider will assign a port number. If this constructor is used, the UdpClient instance is set with an address family of IPv4 that cannot be changed or overwritten by a connect method call with an IPv6 target.
Disclaimer, totally untested, just read the documentation, possibly wrong :)
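As an aside: if you do pass 0 and later need to know which port was actually assigned, you can read it back from the bound socket. A small sketch (the cast to IPEndPoint is the standard way to get at the port):

using System;
using System.Net;
using System.Net.Sockets;

UdpClient client = new UdpClient(0);   // provider assigns a free port
int port = ((IPEndPoint)client.Client.LocalEndPoint).Port;
Console.WriteLine("Bound to port " + port);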

Mono Socket.Bind before connecting to use a specific interface

I'm attempting to connect to a remote server using a specific local interface. My logs tell me everything is working as intended, but checking with netstat, every connection is using the default interface.
I'm using the following code to bind a TcpClient to a specific Local Endpoint
Console.WriteLine("Binding to {0}", connectionArgs.LocalBindingInterface.ToString());
client = new TcpClient(connectionArgs.LocalBindingInterface);
Console.WriteLine("Bound to {0}", client.Client.LocalEndPoint.ToString());
Where connectionArgs.LocalBindingInterface is an IPEndPoint specified as such
IPEndPoint[] localEndPoints = new IPEndPoint[2];
localEndPoints[0] = new IPEndPoint(IPAddress.Parse("192.168.0.99"), 0);
localEndPoints[1] = new IPEndPoint(IPAddress.Parse("192.168.0.100"), 0);
The IP addresses listed here are not the actual addresses.
When I check my logs, this is the info I get:
Binding to 192.168.0.99:0
Bound to 192.168.0.99:59252
Binding to 192.168.0.100:0
Bound to 192.168.0.100:53527
But when I run netstat -n -p --tcp -a, I get:
tcp 0 0 192.168.0.98:39948 remote_addr_here:443 ESTABLISHED 17857/mono
tcp 0 0 192.168.0.98:60009 remote_addr_here:443 ESTABLISHED 17857/mono
Clearly something's wrong here. None of the ports nor the interfaces match. Netstat is run with sudo, so I can't assume it's wrong. I also tried manually creating a socket, calling its Bind method, and setting the TcpClient's Client property to the manually bound socket, but I get the same result.
Is there something I'm doing wrong here? Is there a different way to force a Socket to use a specific local EndPoint on Mono?
I'm running this app as a non-root user, mono --version is Mono JIT compiler version 3.2.8 (Debian 3.2.8+dfsg-4ubuntu1.1), server's ubuntu version is Ubuntu 14.04.3 LTS
Edit 1:
Added an extra logging call after calling TcpClient.Connect()
Binding to 192.168.0.100:59000
Bound to 192.168.0.100:59000
After connect bound to 192.168.0.98:55484
Bottom line: you can't do this, not at the socket level.
The routing of outbound traffic is determined by the network routing configuration. You would have to create an explicit routing table entry for your destination to force a specific adapter to be used.
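On a Linux host like the one in the question, such an entry is typically created with iproute2, e.g. sudo ip route add 203.0.113.0/24 dev eth1 src 192.168.0.100 (the interface name and addresses here are placeholders; the right policy depends entirely on your network).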
You can bind to a specific IP address, but this only causes inbound traffic to be filtered, i.e. you'll only receive traffic sent to that IP address.
There are related questions you may want to read as well:
How to stop behaviour: C++ Socket sendto changes interface — context is C++ and not constrained to Windows, but it has what is IMHO the most direct, most relevant notes on the topic.
Using a specific network interface for a socket in windows — fairly poor question and answer both, frankly. But it does contain some quotes and links that you might find useful anyway.
Arguably, this question might have been closed as a duplicate of one of those, or perhaps even another similar question. But those two don't really answer the question in an accurate, C#/.NET-specific way, and I didn't actually find any others that seemed any better.

Why is my UDP socket disconnecting

Edit: Yes I know that UDP doesn't technically connect, but you can still use it to set the default target for Send(), which is what I'm doing here.
Basically I have this problem that between calls to MySocket.Send(), MySocket is becoming disconnected, i.e. the Connected variable becomes false (I know that Connected isn't necessarily up to date, but since no data is actually getting through, I know it's telling the truth).
The strange thing is that the RemoteEndPoint variable is still set correctly, but when I call Send(), no data is received by the other computer. However, if I call Connect() again, the socket does connect, and I'm able to send data (at least until the next time the user does something that causes another call to Send()).
Can anyone tell me why a socket would spontaneously disconnect?
The line where I connect it is:
opep = new IPEndPoint(Opponent.Address, 1000);
Listener.Connect(opep);
I don't see anything here that could be garbage collected for example to cause this issue.
Thanks!
UDP doesn't set up a connection. You should check out the following link for more info
Difference between TCP and UDP?
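In .NET terms, Connect on a UDP socket performs no handshake at all; it just records a default remote endpoint for Send and filters inbound datagrams to that peer. A minimal sketch of what the call amounts to (the address and port are placeholders):

using System.Net;
using System.Net.Sockets;

Socket sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
sock.Connect(new IPEndPoint(IPAddress.Parse("192.168.0.20"), 1000)); // sets default peer only
sock.Send(new byte[] { 0x01 });   // goes to the endpoint recorded above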

How to tell when a Socket has been disconnected

On the client side I need to know when/if my socket connection has been broken. However, the Socket.Connected property always returns true, even after the server side has disconnected and I've tried sending data through it. Can anyone help me figure out what's going on here? I need to know when a socket has been disconnected.
Socket serverSocket = null;
TcpListener listener = new TcpListener(1530);
listener.Start();
listener.BeginAcceptSocket(new AsyncCallback(delegate(IAsyncResult result)
{
Debug.WriteLine("ACCEPTING SOCKET CONNECTION");
TcpListener currentListener = (TcpListener)result.AsyncState;
serverSocket = currentListener.EndAcceptSocket(result);
}), listener);
Socket clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, and it is
clientSocket.Connect("localhost", 1530);
Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be TRUE, and it is
Thread.Sleep(1000);
serverSocket.Close();//closing the server socket here
Thread.Sleep(1000);
clientSocket.Send(new byte[0]);//sending data should cause the socket to update its Connected property.
Debug.WriteLine("client socket connected: " + clientSocket.Connected);//should be FALSE, but its always TRUE
After doing some testing, it appears that the documentation for Socket.Connected is wrong, or at least misleading. clientSocket.Connected will only become false after clientSocket.Close() is called. I think this is a throwback to the original C Berkeley sockets API and its terminology. A socket is bound when it has a local address associated with it, and a socket is connected when it has a remote address associated with it. Even though the remote side has closed the connection, the local socket still has the association and so it is still "connected".
However, here is a method that does work:
!(socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0)
It relies on the fact that a closed connection will be marked as readable even though no data is available.
If you want to detect conditions such as broken network cables or computers abruptly being turned off, the situation is a bit more complex. Under those conditions, your computer never receives a packet indicating that the socket has closed. It needs to detect that the remote side has vanished by sending packets and noticing that no response comes back. You can do this at the application level as part of your protocol, or you can use the TCP KeepAlive option. Using TCP keep-alive from .NET isn't particularly easy; you're probably better off building a keep-alive mechanism into your protocol (alternatively, you could ask a separate question for "How do I enable TCP keep-alive in .NET and set the keep-alive interval?").
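Wrapped up as a helper, that check might look like this (a sketch; the method name is mine):

using System.Net.Sockets;

static bool IsConnected(Socket socket)
{
    // A connection closed by the peer shows up as readable with zero bytes
    // available; a healthy idle connection is not readable at all, and a
    // healthy connection with pending data has Available > 0.
    bool closedByPeer = socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0;
    return !closedByPeer;
}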
Just write to your socket as normal. You'll know when it's disconnected by the Exception that says your data couldn't be delivered.
If you don't have anything to write...then who cares if it's disconnected? It may be disconnected now, but it may come back before you need it - why bother tearing it down and then looping a reconnect until the link is repaired...especially when you didn't have anything to say anyway?
If it bothers you, implement a keep alive in your protocol. Then you'll have something to say every 30 seconds or so.
Maybe a solution is to send some dummy data through it and check whether it times out?
I recommend stripping out the higher-level language stuff and exploring what happens at the lower-level I/O.
The lowest I've explored was while writing isectd (find it on SourceForge). Using the select() system call, a descriptor for a closed socket becomes read-ready, and when isectd would then attempt the recv(), the socket's disconnected state could be confirmed.
As a solution, I recommend not writing your own socket IO and use someone else's middleware. There are lots of good candidates out there. Don't forget to consider simple queuing services as well.
PS. I would have provided URLs to all the above but my reputation (1) doesn't allow it.
Does the clientSocket.Send() method wait for the packet to be either ack'd or nack'd?
If not, your code is flying on to the next line while the socket is still trying to figure out what is going on.
