How to tell when a Socket has been disconnected - C#

On the client side I need to know when/if my socket connection has been broken. However, the Socket.Connected property always returns true, even after the server side has disconnected and I've tried sending data through it. Can anyone help me figure out what's going on here? I need to know when a socket has been disconnected.
Socket serverSocket = null;
TcpListener listener = new TcpListener(1530);
listener.Start();
listener.BeginAcceptSocket(new AsyncCallback(delegate(IAsyncResult result)
{
    Debug.WriteLine("ACCEPTING SOCKET CONNECTION");
    TcpListener currentListener = (TcpListener)result.AsyncState;
    serverSocket = currentListener.EndAcceptSocket(result);
}), listener);

Socket clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be FALSE, and it is
clientSocket.Connect("localhost", 1530);
Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be TRUE, and it is
Thread.Sleep(1000);

serverSocket.Close(); // closing the server socket here
Thread.Sleep(1000);

clientSocket.Send(new byte[0]); // sending data should cause the socket to update its Connected property
Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be FALSE, but it's always TRUE

After doing some testing, it appears that the documentation for Socket.Connected is wrong, or at least misleading. clientSocket.Connected will only become false after clientSocket.Close() is called. I think this is a throwback to the original C Berkeley sockets API and its terminology. A socket is bound when it has a local address associated with it, and a socket is connected when it has a remote address associated with it. Even though the remote side has closed the connection, the local socket still has the association, and so it is still "connected".
However, here is a method that does work:
!(socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0)
It relies on the fact that a closed connection will be marked as readable even though no data is available.
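For convenience you can wrap that check in an extension method; here is a minimal sketch (the SocketExtensions/IsConnected names are my own):

using System.Net.Sockets;

public static class SocketExtensions
{
    // A remotely closed connection becomes read-ready with zero bytes
    // available, which is exactly the condition tested here.
    public static bool IsConnected(this Socket socket)
    {
        try
        {
            return !(socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0);
        }
        catch (SocketException)
        {
            // Poll itself can fail on a dead socket; treat that as disconnected.
            return false;
        }
    }
}

Note that this is only a snapshot: the connection can still die between the check and your next Send.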
If you want to detect conditions such as broken network cables or computers abruptly being turned off, the situation is a bit more complex. Under those conditions, your computer never receives a packet indicating that the socket has closed. It needs to detect that the remote side has vanished by sending packets and noticing that no response comes back. You can do this at the application level as part of your protocol, or you can use the TCP KeepAlive option. Using TCP Keep Alive from .NET isn't particularly easy; you're probably better off building a keep-alive mechanism into your protocol (alternately, you could ask a separate question for "How do I enable TCP Keep Alive in .NET and set the keep alive interval?").
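For what it's worth, turning keep-alive on from managed code looks roughly like this. The three Tcp* option names only exist on .NET Core 3.0 and later (older frameworks have to drop down to Socket.IOControl with SIO_KEEPALIVE_VALS), and the interval values here are just examples:

// Enable keep-alive probes on a connected socket.
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// Tune the probe timing (requires .NET Core 3.0+; values are examples).
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 30);      // idle seconds before the first probe
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 5);   // seconds between probes
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 3); // failed probes before the connection is declared dead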

Just write to your socket as normal. You'll know when it's disconnected by the Exception that says your data couldn't be delivered.
If you don't have anything to write... then who cares if it's disconnected? It may be disconnected now, but come back up before you need it; why bother tearing it down and then looping a reconnect until the link is repaired, especially when you didn't have anything to say anyway?
If it bothers you, implement a keep-alive in your protocol. Then you'll have something to say every 30 seconds or so.

Maybe a solution is to send some dummy data through it and check whether it times out?

I recommend stripping out the higher-level language stuff and exploring what happens at the lower-level I/O.
The lowest level I've explored was while writing isectd (find it on SourceForge). Using the select() system call, a descriptor for a closed socket becomes read-ready, and when isectd then attempts the recv(), the socket's disconnected state is confirmed.
As a solution, I recommend not writing your own socket IO and use someone else's middleware. There are lots of good candidates out there. Don't forget to consider simple queuing services as well.
PS. I would have provided URLs to all the above but my reputation (1) doesn't allow it.

Does the clientSocket.Send() method wait for the packet to be ACKed/NACKed?
If not, your code is flying onto the next line while the socket is still trying to figure out what is going on.

Related

.NET socket keeps sending keep-alives after closing

I know this isn't exactly a bug, but I'm wondering if there is something I can do about it. I have a TCP server serving HTTP requests, and I am using socket keep-alive as a heartbeat to detect whether the client is still connected, so I know when I can close its connection, since I won't be getting any further requests from it. My issue is that when I close the connection for some other reason (the server shutting down or something), the underlying socket keeps sending keep-alive packets and remains alive even though I have closed it in code.
This is how I setup the keep alive in code:
// I don't think the specific values here are important, I was just experimenting with it
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 10);
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 5);
socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 3);
And then this is how I am closing it:
await socket.DisconnectAsync(false);
socket.Close();
socket.Dispose();
Once I have closed the socket, I can see the first two close packets fire off in Wireshark. Port 5050 is my server, and 56068 is the Chrome client.
But then Chrome will keep the socket alive for quite a while afterwards rather than just closing it. The socket stays in the FIN_WAIT_2 state for a long time, waiting for Chrome to finally finish the close.
Then finally, after a long while, Chrome sends the second half of the close packets; the socket is then actually closed and no longer appears in netstat -ano.
Is there a reason Chrome would be keeping that socket open for so long, and is there anything I can do about it? Maybe this is sort of a non-issue, and I can just ignore it, but it just feels wrong to me that the socket stays open for so long after the server has said "I'm done". Or maybe I have a fundamental misunderstanding of something.

Self-healing SslStream

I'm writing a service that needs to maintain a long-running SSL connection to a remote server. I need this connection to be self-healing: if it's disconnected for any reason, then the next time it's written to, it should reconnect. I've tried this:
bool isConnected = client.Connected && client.Client.Poll(0, SelectMode.SelectWrite) && stream.CanWrite;
if (!isConnected)
{
    this.connected = false;
    GetConnection();
}
stream.Write(bytes, 0, bytes.Length);
stream.Flush();
But I find it doesn't act as I would expect. If I simulate a network outage by disabling my wifi, I'm still able to write to the stream with stream.Write() for approximately 20 seconds. Then, the next time I try to write, none of client.Connected, client.Client.Poll(), or stream.CanWrite returns false, but when I go to write to the stream I get a socket exception. Finally, if I try to recreate the connection, I get this exception: An existing connection was forcibly closed by the remote host.
I would appreciate any help creating a long-running SslStream that can withstand network failures. Thanks!
From a 10,000-foot point of view:
The reason you can still write to the stream after shutting down your wifi is that there is a network buffer holding the data for transmission. stream.Write/stream.Flush succeeding means the network interface (the TCP/IP stack) has accepted the data and buffered it for transmission, not that the data has reached its target.
It takes time for the TCP/IP stack to notice a full media disconnection (connection lost/reset), because even if there is no physical link, TCP/IP treats this as a temporary issue in the network and keeps retrying for a while (the network could be dropping packets at some point and the stack will keep retrying).
If you think about this in reverse, you wouldn't want all your programs to fail whenever there is a network hiccup (this happens too often on the internet), so TCP/IP takes its time notifying the app layer that the connection has become invalid (after retrying several times and waiting a reasonable amount of time).
You can always reconnect to the server when the SslStream fails and continue sending data, although you will find it's not as easy as that: there are several scenarios where you send data that is never received by the server, and others where the server receives the data but you never receive any ACK from it. So depending on your needs, self-healing alone may not be enough.
Self-healing is simple to implement; data consistency and reliability are harder, and usually require the server to support some kind of reliable messaging mechanism to ensure all data has been sent and received.
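As a rough sketch of the reconnect-on-failure idea, reusing the GetConnection() method and connected flag from the question (everything else here is assumed, and the retry is deliberately best-effort):

// Attempt the write; if the stream has died, rebuild the connection once and retry.
// Best-effort only: data buffered by the old connection may never have reached
// the server, so the protocol must tolerate duplicates/resends.
private void WriteWithReconnect(byte[] bytes)
{
    try
    {
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();
    }
    catch (IOException) // SslStream wraps socket failures in System.IO.IOException
    {
        this.connected = false;
        GetConnection(); // re-establishes client and stream, as in the question
        stream.Write(bytes, 0, bytes.Length);
        stream.Flush();
    }
}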
The underlying protocol for SSL is TCP. TCP will usually only send data if the application wants it to deliver data, or if it needs to reply to data received from the other side by sending an ACK. This means that a broken connection, like a lost link, will not be noticed until you try to send some data. But you will not notice it immediately, because:
A write to the socket only delivers the data to the OS kernel, and it returns success if this delivery was successful.
The kernel then tries to deliver the data to the peer and waits for the ACK from the other side.
If it does not get an ACK, it retries the delivery, and only after several unsuccessful retries does the kernel declare the connection broken.
Only after the connection is marked broken by the kernel will the next write or read return the error from kernel to user space, e.g. returning EPIPE on a write.
This means that if you want to know up front whether the connection is still alive, you have to make sure there is a regular data exchange on the connection. At the TCP level you might set TCP_KEEPALIVE, but this might use an interval of some hours between packets. At the SSL layer you might try the infamous heartbeat extension, but most peers will not understand it. The last choice is to implement some kind of heartbeat in your own application.
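A sketch of what such an application-level heartbeat could look like on the client side (the sentinel byte, the 30-second interval, and the assumption that the server ignores or echoes it are all mine):

// Write one heartbeat byte periodically; a dead connection then surfaces as an
// IOException here rather than on the next real payload.
// Requires System.Net.Security, System.Threading, System.Threading.Tasks.
private static readonly byte[] Heartbeat = { 0xFF }; // assumed sentinel value

private async Task HeartbeatLoopAsync(SslStream stream, CancellationToken ct)
{
    while (!ct.IsCancellationRequested)
    {
        await Task.Delay(TimeSpan.FromSeconds(30), ct);
        await stream.WriteAsync(Heartbeat, 0, Heartbeat.Length, ct);
        await stream.FlushAsync(ct);
    }
}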
As for the self-healing: when reconnecting, you get a new TCP connection, and you also need to do a full SSL handshake, because the last SSL connection was not cleanly closed and thus cannot be resumed. The server has no idea that this new connection is just a continuation of the old one, so you have to implement some kind of meta-connection spanning multiple TCP connections inside your application layer, on both client and server. Inside this meta-connection you need your own data tracking to detect which data were really accepted by the peer and which were only sent but never explicitly acknowledged because the connection broke. Sounds like a kind of TCP on top of TCP.

Weird behavior of async tcp socket

I have an async server socket listening on some port. Then, from another PC, I connect to the server and send 1 byte. Everything works fine, but there's a strange behavior. When I pull out the network cable and try to send 1 byte (before the OS realizes the cable was pulled out), I don't get any exception/error, and as expected, the server doesn't receive that packet. Is this how sockets are supposed to work? Does this mean that in case of connection loss some packets can be lost (because I don't get an exception and don't know that the request was not sent)?
Here's the code:
private void button3_Click(object sender, EventArgs e)
{
    var b = new byte[1] { 1 };
    client.BeginSend(b, 0, b.Length, 0, new AsyncCallback(SendCallback), client);
}

private void SendCallback(IAsyncResult ar)
{
    Socket client = (Socket)ar.AsyncState;
    int bytesSent = client.EndSend(ar);
    this.Invoke(new MethodInvoker(() => { MessageBox.Show(bytesSent.ToString() + " bytes sent"); }));
}
How could the sender possibly tell the packet was not received? It sent it out into a black hole and is waiting for a reply. As long as there is no reply, it cannot know whether the packet was received or never will be.
This is usually solved with timeouts. Eventually, the TCP stack will declare the connection dead, or your subsequent reads will time out.
Sending does not guarantee delivery at all.
Have the other side send you a confirmation. Your confirmation read will time out eventually.
Or, Shutdown(Send) the socket. This ensures delivery and will throw an exception (after a timeout). You should Shutdown(Both) a socket anyway before closing it to make sure you get notified of all errors.
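In code, that orderly shutdown looks roughly like this (the timeout and buffer size are illustrative):

client.Shutdown(SocketShutdown.Send); // flush pending data and send FIN; the peer's read returns 0
client.ReceiveTimeout = 5000;         // illustrative; don't wait forever for the peer

try
{
    // Draining until Receive returns 0 confirms the peer closed its side;
    // a SocketException here surfaces delivery errors before Close can hide them.
    while (client.Receive(new byte[64]) > 0) { }
}
finally
{
    client.Close();
}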
Windows maintains a socket buffer in the kernel at the socket layer. A successful send at the application layer simply means the data has been copied into the kernel buffer; the data in this buffer is then pushed by the TCP stack to the remote application. On Windows, if a packet is dropped, TCP resends the data three times, after which the TCP stack notifies the application that the connection is closed. The interval between retries is decided by the RTT: the first retry comes after 1*RTT, the second after 2*RTT, and the third after 3*RTT. In your case, the 1 byte you sent was simply copied into the kernel buffer and success was indicated. It will take 3*RTT before the socket is flagged as closed, and this close is only reported when you invoke a socket API or monitor the socket for a close event. If you queue a second send once 3*RTT have elapsed after pulling the cable, the send should throw the exception. Another way to get an indication of the send failure immediately is to set the send socket buffer size to zero (SetSocketOption(.., SendBuffer, ..)) so that the TCP stack uses your buffer directly and indicates a failure immediately.
I assume you created a TCP socket.
In that case, your client will not be notified of a disconnection (unless the remote application sent some kind of logout message).
Looking at the Connected property of the TcpClient :
The Connected property gets the connection state of the Client socket
as of the last I/O operation. When it returns false, the Client socket
was either never connected, or is no longer connected.
"As of the last I/O" operation : only an (unsuccessful) Read from the Client will help you to detect the disconnection. Many applications implement "pings" to detect some disconnections.
Yes, this is the way TCP sockets are supposed to work. No, this does not mean that the packet is necessarily lost.
What is happening under the covers is this. When you call BeginSend, the byte you are sending is passed to the operating system's TCP stack. The TCP stack then sends the data in a packet.
When the packet is not acknowledged in a reasonable length of time, the TCP stack automatically resends the packet. This happens repeatedly as long as no acknowledgement packet is received.
If you plug the cable back in, one of these resends will get through, and the server will belatedly see the data. This is the desired behavior. Remember that TCP was designed to transmit military data during the Cold War; the idea was that even if part of the network got nuked, the routing system would eventually adjust to find another path to the recipient, and the data would eventually get through.
If you don't plug the cable back in, most TCP stacks will eventually give up, terminating the connection. This takes several minutes, however.
More information on TCP retransmission timeout can be found in RFC 1122 here:
https://www.rfc-editor.org/rfc/rfc1122#page-95

Why is my UDP socket disconnecting

Edit: Yes I know that UDP doesn't technically connect, but you can still use it to set the default target for Send(), which is what I'm doing here.
Basically I have this problem: between calls to MySocket.Send(), MySocket is becoming disconnected, i.e. the Connected variable becomes false (I know that Connected isn't necessarily up to date, but since no data is getting through, I know that it's telling the truth).
The strange thing is that the RemoteEndPoint variable is still set correctly, but when I call Send(), no data is received by the other computer. However, if I call Connect() again, the socket does connect, and I'm able to send data (at least until the next time the user does something that causes another call to Send()).
Can anyone tell me why a socket would spontaneously disconnect?
The line where I connect it is:
opep = new IPEndPoint(Opponent.Address, 1000);
Listener.Connect(opep);
I don't see anything here that could, for example, be garbage collected and cause this issue.
Thanks!
UDP doesn't set up a connection. You should check out the following link for more info
Difference between TCP and UDP?

Confused about Sockets with UDP Protocol in C#

I've just started learning Sockets through various Google searches, but I'm having some problems figuring out how to properly use Sockets in C# and I'm in need of some help.
I have a test application (Windows Forms), and in a different class (which is actually in its own .dll, but that's irrelevant) I have all the server/client code for my sockets code.
Question 1)
On my test application, on the server part, the user can click the "start listening" button and the server part of my sockets application should start listening for connections on the specified address and port, so far so good.
However, the application will be blocked and I can't do anything until someone connects to the server. What if no one connects? How should I handle that? I could specify a receive timeout, but what then? It throws an exception; what can I do with that? What I would like is to have some sort of activity on the main application so the user knows the application didn't freeze and is waiting for connections. But if a connection doesn't come, it should time out and close everything.
Maybe I should use asynchronous calls to the send/receive methods, but they seem confusing and I was not able to make them work; only the synchronous ones work (I'll post my current code below).
Question 2)
Do I need to close anything when a send/receive call times out? As you'll see in my current code, I have a bunch of closes on the socket, but this doesn't feel right somehow. It also doesn't feel right when an operation times out and I don't close the socket.
To conclude my two questions: I would like an application that doesn't block, so the user knows the server is waiting for a connection (with a little marquee animation, for instance). If a connection is never established after a period of time, I want to close everything that should be closed. When a connection is established, or if it doesn't happen after a period of time, I would like to inform the main application of the result.
Here's some of my code; the rest is similar. The Packet class is a custom class that represents my custom data unit; it's just a bunch of properties based on enums for now, with methods to convert them to bytes and back into properties.
The function that starts to listen for connections is something like this:
public void StartListening(string address, int port)
{
    try
    {
        byte[] bufferBytes = new byte[32];

        if (address.Equals("0.0.0.0"))
        {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        }
        else
        {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }

        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);

        if (numBytesReceived == 0)
        {
            udpSocket.Close();
            return;
        }

        Packet syncPacket = new Packet(bufferBytes);
        if (syncPacket.PacketType != PacketType.Control)
        {
            udpSocket.Close();
            return;
        }
    }
    catch
    {
        if (udpSocket != null)
        {
            udpSocket.Close();
        }
    }
}
I'm sure that I have a bunch of unnecessary code but I'm new at this and I'm not sure what do, any help fixing up my code and how to solve the issues above is really appreciated.
EDIT:
I should probably have stated that my requirements are to use UDP and implement these things myself in the application layer. You can consider this homework, but I haven't tagged it as such because the code is irrelevant and will not be part of my grade; my problem (my question) is in "how to code" this, as my Sockets experience is minimal and it isn't taught.
However, I must say that I think I solved my problem for now... I was using threading in the demo application, which was giving me some problems; now I'm using it in the protocol connections, which makes more sense, and I can easily change my custom protocol class properties and read them from the demo application.
I have specified a timeout and throw a SocketException if it is reached. Whenever an exception like this is caught, the socket connection is closed. I'm just talking about the connection handshake, nothing more. If no exceptions are caught, the code probably went smoothly and the connection is established.
Please adapt your answers accordingly. Right now it doesn't make sense for me to mark any of them as the accepted answer; I hope you understand.
You have got stuff a bit wrong.
First of all, UDP is connection-less. You do not connect or disconnect. All you do is send and receive (and you must specify the destination each time). You should also know that the only thing UDP promises is that a complete message arrives in each read. UDP does not guarantee that your messages arrive in the correct order, or that they arrive at all.
TCP, on the other hand, is connection-based. You connect, send/receive, and finally disconnect. TCP is stream-based (while UDP is message-based), which means that you can get half a message in the first read and the other half in the second read. TCP promises you that everything will arrive, and in the correct order (or it will die trying ;). So using TCP means that you should have some kind of logic to know when a complete message has arrived, and a buffer that you use to build up the complete message.
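For example, a common way to reassemble complete messages over TCP is a length prefix. A minimal sketch (the 4-byte length prefix is just one possible convention, not anything your Packet class requires):

using System;
using System.Net.Sockets;

static class Framing
{
    // Read exactly one length-prefixed message from a connected TCP socket.
    public static byte[] ReadMessage(Socket socket)
    {
        byte[] lengthBytes = ReadExactly(socket, 4);
        int length = BitConverter.ToInt32(lengthBytes, 0);
        return ReadExactly(socket, length);
    }

    // A single Receive may return fewer bytes than asked for, so loop.
    static byte[] ReadExactly(Socket socket, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = socket.Receive(buffer, offset, count - offset, SocketFlags.None);
            if (read == 0) // peer closed mid-message
                throw new SocketException((int)SocketError.ConnectionReset);
            offset += read;
        }
        return buffer;
    }
}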
The next big question was about blocking. Since you are new at this, I recommend that you use Threads to handle sockets. Put the listener socket in one thread and each connecting socket in a separate thread (5 connected clients = 5 threads).
I also recommend that you use TCP, since building complete messages from a stream is easier than ordering messages and building a transaction system yourself (which will be needed if you want to make sure that all messages arrive to/from clients).
Update
You still got UDP wrong. Close doesn't do anything other than cleaning up system resources. You should do something like this instead:
public void MySimpleServer(string address, int port)
{
    try
    {
        byte[] bufferBytes = new byte[32];

        if (address.Equals("0.0.0.0"))
        {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        }
        else
        {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }

        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);

        while (serverCanRun)
        {
            int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);

            // just means that one of the clients closed the connection using Shutdown;
            // it doesn't mean that we can't continue to receive
            if (numBytesReceived == 0)
                continue;

            // same here, loop to receive from another client
            Packet syncPacket = new Packet(bufferBytes);
            if (syncPacket.PacketType != PacketType.Control)
                continue;

            HandlePacket(syncPacket, remoteEndPoint);
        }
    }
    catch
    {
        if (udpSocket != null)
        {
            udpSocket.Close();
        }
    }
}
See? Since there is no connection, it's just a waste of time to close a UDP socket and start listening on another one. The same socket can receive from ALL UDP clients that know the correct port and address. That's what the remoteEndPoint is for: it tells you which client sent the message.
Update 2
Small update to make a summary of all my comments.
UDP is connectionless. You can never detect whether a connection has been established or disconnected. The Close method on a UDP socket will only free system resources. A call to client.Close() will not notify the server socket (as it would with TCP).
The best way to check if a connection is open is to create a ping/pong style of packet: the client sends a PING message and the server responds with a PONG. Remember that UDP will not try to resend your messages if they do not arrive, so you need to resend the PING a couple of times before assuming that the server is down (if you do not receive a PONG). A rough sketch of that check follows below.
As for clients closing: you need to send your own message to the server telling it that the client is going to stop talking to it. For reliability the same thing applies here: keep resending the BYE message until you receive a reply.
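Here is the PING/PONG check sketched out (the packet contents, timeout, and retry count are all made up):

using System.Net;
using System.Net.Sockets;
using System.Text;

// Returns true if the server answered a PING with a PONG within 3 attempts.
static bool ServerIsAlive(Socket udpSocket, EndPoint server)
{
    byte[] ping = Encoding.ASCII.GetBytes("PING");
    byte[] reply = new byte[64];
    udpSocket.ReceiveTimeout = 1000; // ms per attempt

    for (int attempt = 0; attempt < 3; attempt++)
    {
        udpSocket.SendTo(ping, server);
        try
        {
            EndPoint from = new IPEndPoint(IPAddress.Any, 0);
            int n = udpSocket.ReceiveFrom(reply, ref from);
            if (n >= 4 && Encoding.ASCII.GetString(reply, 0, 4) == "PONG")
                return true;
        }
        catch (SocketException)
        {
            // Timed out: UDP gives no delivery guarantee, so just retry.
        }
    }
    return false;
}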
IMHO it's mandatory that you implement a transactional system on top of UDP if you want reliability. SIP (google RFC 3261) is an example of a protocol which uses transactions over UDP.
From your description I feel you should use TCP sockets instead of UDP. The difference is
TCP - You wait for a connection at a particular IP:port; a user can connect to it, and until the socket is closed you can communicate by sending and receiving information. This is like calling someone on the phone.
UDP - You wait for a message at some IP:port. A user who wants to communicate just sends a message through UDP. You will receive the message through UDP. The order of delivery is not guaranteed. This is more like sending snail mail to someone; there is no dedicated communication channel established.
Now coming to your problem
Server
Create a Socket with TCP family.
Either create a thread and accept the connection in that thread, or use the BeginAccept APIs of Socket (see the sketch after these lists).
In the main thread you can still display the ticker or whatever you want to do.
Client
Connect to the server.
Communicate by sending and receiving data.
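A minimal sketch of the non-blocking accept on the server side, so the UI thread stays free for the ticker (the port number is arbitrary):

using System.Net;
using System.Net.Sockets;

TcpListener listener = new TcpListener(IPAddress.Any, 1234); // arbitrary port
listener.Start();

// BeginAcceptTcpClient returns immediately; the callback runs on a thread-pool
// thread once a client connects, so the main thread can keep updating the UI.
listener.BeginAcceptTcpClient(ar =>
{
    TcpClient client = listener.EndAcceptTcpClient(ar);
    // hand this client off to its own thread / handler here
}, null);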
