What's a reliable way to verify that a socket is open? - c#

I'm developing a C# application, working with TCP sockets.
While debugging, I arrive in this piece of source code:
if ((_socket != null) && (_socket.Connected))
{
    Debug.WriteLine($"..."); // <= my breakpoint is here.
    return true;
}
In my watch-window, the value of _socket.RemoteEndPoint is:
_socket.RemoteEndPoint {10.1.0.160:50001} System.Net.EndPoint {...}
Still, in commandline, when I run netstat -aon | findstr /I "10.1.0.160", I just see this:
TCP 10.1.13.200:62720 10.1.0.160:3389 ESTABLISHED 78792
TCP 10.1.13.200:63264 10.1.0.160:445 ESTABLISHED 4
=> the remote endpoint "10.1.0.160:50001" is not visible in netstat result.
As netstat doesn't seem reliable for testing TCP sockets, what tool can I use instead?
(For your information: even after letting the program run further, there is still no netstat entry.)

The documentation of Socket.Connected says:
The value of the Connected property reflects the state of the connection as of the most recent operation. If you need to determine the current state of the connection, make a nonblocking, zero-byte Send call. If the call returns successfully or throws a WSAEWOULDBLOCK error code (10035), then the socket is still connected; otherwise, the socket is no longer connected.
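A minimal sketch of that documented zero-byte probe, assuming the socket is an ordinary System.Net.Sockets.Socket; even this only tells you the connection hadn't failed as of the probe, not that the peer is reachable right now:

using System.Net.Sockets;

// Sketch of the zero-byte, non-blocking send probe described in the docs.
static bool LooksConnected(Socket socket)
{
    if (socket == null || !socket.Connected)
        return false;

    bool blockingState = socket.Blocking;
    try
    {
        byte[] tmp = new byte[1];
        socket.Blocking = false;
        socket.Send(tmp, 0, SocketFlags.None);   // send zero bytes, non-blocking
        return true;
    }
    catch (SocketException ex)
    {
        // 10035 (WSAEWOULDBLOCK) means the send buffer is full,
        // but the socket itself is still connected.
        return ex.SocketErrorCode == SocketError.WouldBlock;
    }
    finally
    {
        socket.Blocking = blockingState;
    }
}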
So if Connected returns true, the socket has been "connected" at some time in the past, but is not necessarily still alive right now.
That's because it's not possible to detect with certainty whether your TCP connection is still alive without contacting the other side in one way or another. A TCP connection is somewhat "virtual": the two sides just exchange packets, but there is no hard link between them. When one side decides to finish the communication, it sends a packet and waits for a response from the other side. If all goes well, both sides agree that the connection is closed.
However, if side A does NOT send this close packet, for example because it crashed or its network died, the other side B has no way to figure out that the connection is no longer active UNTIL it tries to send some data to A. Then that send will fail, and B knows the connection is dead.
So if you really need to know whether the other side is still alive, you have to send some data to it. You can use keepalive, which is available on TCP sockets (and basically does the same thing: sends some data from time to time). Or, if you always write on this connection first (say the other side is a server and you make requests to it from time to time, but do not expect any data between those requests), then just don't check whether the other side is alive; you will find out the next time you attempt to write.
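A sketch of turning on TCP keepalive from C#: the basic KeepAlive option is long-standing, while the three tuning options below exist only on newer runtimes (roughly .NET Core 3.0+), and the interval values are purely illustrative:

using System.Net.Sockets;

// Sketch: enable TCP keepalive so the stack periodically probes the peer.
static void EnableKeepAlive(Socket socket)
{
    socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

    // Optional tuning (illustrative values, .NET Core 3.0+ only): start probing
    // after 30 s of idle, probe every 5 s, give up after 3 failed probes.
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 30);
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 5);
    socket.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveRetryCount, 3);
}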

Related

Check if NamedPipeClientStream write is successful

Basically the title... I'd like to have some feedback on whether the NamedPipeServerStream object successfully received a value. This is the starting code:
static void Main(string[] args)
{
    Console.WriteLine("Client running!");
    NamedPipeClientStream npc = new NamedPipeClientStream("somename");
    npc.Connect();
    // npc.WriteTimeout = 1000; does not work, says it is not supported for this stream
    byte[] message = Encoding.UTF8.GetBytes("Message");
    npc.Write(message, 0, message.Length);
    int response = npc.ReadByte();
    Console.WriteLine("response: " + response);
}
I've implemented a small echo message from the NamedPipeServerStream on every read. I imagine I could add some async timeout to check whether npc.ReadByte() returned a value within, let's say, 200 ms, similar to how TCP packets are ACKed.
Is there a better way of checking whether namedPipeClientStream.Write() was successful?
I'd like to have some feedback on whether the NamedPipeServerStream object successfully received a value
The only way to know for sure that the data you sent was received and successfully processed by the client at the remote endpoint, is for your own application protocol to include such acknowledgements.
As a general rule, you can assume that if your send operations are completing successfully, the connection remains viable and the remote endpoint is getting the data. If something happens to the connection, you'll eventually get an error while sending data.
However, this assumption only goes so far. Network I/O is buffered, usually at several levels. Any of your send operations almost certainly involve doing nothing more than placing the data in a local buffer for the network layer. The method call for the operation will return as soon as the data has been buffered, without regard for whether the remote endpoint has received it (and in fact, almost never will have by the time your call returns).
So if and when such a call throws an exception or otherwise reports an error, it's entirely possible that some of the previously sent data has also been lost in transit.
How best to address this possibility depends on what you're trying to do. But in general, you should not worry about it at all. It will typically not matter if a specific transmission has been received. As long as you can continue transmitting without error, the connection is fine, and asking for acknowledgement is just unnecessary overhead.
If you want to handle the case where an error occurs, invalidating the connection and forcing you to retry, and you want to make the broader operation resumable (e.g. you're streaming some data to the remote endpoint and want to ensure all of the data has been received, without having to resend data that has already been received), then you should build the ability to resume into your application protocol. On reconnecting, the remote endpoint reports the number of bytes it has received so far, or the most recent message ID, or whatever else your application protocol needs in order to know where to start sending again.
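For completeness, a rough sketch of the echo-plus-timeout idea the asker mentioned; npc is the connected NamedPipeClientStream from the question, the server is assumed to echo one acknowledgement byte per message, and the 200 ms timeout is an arbitrary illustrative value (note that cancelling an in-flight pipe read is not honoured on every runtime):

using System;
using System.IO.Pipes;
using System.Threading;
using System.Threading.Tasks;

// Sketch only: send a message and wait up to 200 ms for a one-byte
// application-level acknowledgement echoed back by the server.
static async Task<bool> WriteWithAckAsync(NamedPipeClientStream npc, byte[] message)
{
    await npc.WriteAsync(message, 0, message.Length);
    await npc.FlushAsync();

    var ackBuffer = new byte[1];
    using (var cts = new CancellationTokenSource(TimeSpan.FromMilliseconds(200)))
    {
        try
        {
            int read = await npc.ReadAsync(ackBuffer, 0, 1, cts.Token);
            return read == 1; // the server echoed an ack byte in time
        }
        catch (OperationCanceledException)
        {
            return false; // no acknowledgement within the timeout
        }
    }
}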
See also this very closely related question (arguably even an actual duplicate; though it doesn't mention named pipes specifically, pretty much all network I/O involves similar issues):
Does TcpClient write method guarantees the data are delivered to server?
There's a good answer there, as well as links to even more useful Q&A in that answer.

Self-healing SslStream

I'm writing a service that needs to maintain a long-running SSL connection to a remote server. I need this connection to be self-healing; that is, if it's disconnected for any reason, then the next time it's written to it will reconnect. I've tried this:
bool isConnected = client.Connected && client.Client.Poll(0, SelectMode.SelectWrite) && stream.CanWrite;
if (!isConnected)
{
    this.connected = false;
    GetConnection();
}
stream.Write(bytes, 0, bytes.Length);
stream.Flush();
But I find it doesn't act as I would expect. If I simulate a network outage by disabling my wifi, I'm still able to write to the stream with stream.Write() for approximately 20 seconds. The next time I try to write to it, none of client.Connected, client.Client.Poll(), or stream.CanWrite returns false, yet when I go to write to the stream I get a socket exception. Finally, if I try to recreate the connection, I get this exception: An existing connection was forcibly closed by the remote host.
I would appreciate any help creating a long-running SslStream that can withstand network failure. Thanks!
From a 10,000-foot point of view:
The reason you can still write to the stream after shutting down your wifi is that there is a network buffer holding the data for transmission. Success of stream.Write/stream.Flush means the network interface (TCP/IP stack) has accepted the data and buffered it for transmission, not that the data has reached its target.
It takes time for the TCP/IP stack to notice a full media disconnection (connection lost/reset), because even if there is no physical link, TCP/IP will treat this as a temporary issue in the network and will keep retrying for a while (the network could be dropping packets at some point, and the stack keeps retrying).
If you think about this the other way around, you wouldn't want all your programs to fail whenever there is a network hiccup (this happens all too often on the internet), so TCP/IP takes its time before notifying the application layer that the connection has become invalid (after retrying several times and waiting a reasonable amount of time).
You can always reconnect to the server when the SslStream fails and continue sending data, although you will find it is not as easy as that, because there are scenarios where you send data that is never received by the server, and others where the server receives the data but you never receive any ACK from it at all. So depending on your needs, self-healing alone may not be enough.
Self-healing is simple to implement; data consistency and reliability are harder and usually require the server to support some kind of reliable messaging mechanism to ensure all data has been sent and received.
The underlying protocol for SSL is TCP. TCP will usually only send data if the application wants it to deliver data, or if it needs to reply to data received from the other side by sending an ACK. This means that a broken connection, like a lost link, will not be noticed until you try to send some data. Even then you will not notice it immediately, because:
A write to the socket only delivers the data to the OS kernel and returns success if this delivery was successful.
The kernel will then try to deliver the data to the peer and will wait for the ACK from the peer.
If it does not get an ACK, it will retry delivery, and only after several unsuccessful retries will the kernel declare the connection broken.
Only after the connection is marked broken by the kernel will the next write or read return the error from the kernel to user space, for example EPIPE on a write.
This means that if you want to know up front whether the connection is still alive, you have to make sure there is regular data exchange on the connection. At the TCP level you might set TCP_KEEPALIVE, but this might use an interval of several hours between keepalive packets. At the SSL layer you might try to use the infamous heartbeat extension, but most peers will not understand it. The last choice is to implement some kind of heartbeat in your own application.
As for the self-healing: when reconnecting, you get a new TCP connection and you also need to do a full SSL handshake, because the last SSL connection was not cleanly closed and thus cannot be resumed. The server has no idea that this new connection is a continuation of the old one, so you have to implement some kind of meta-connection spanning multiple TCP connections inside your application layer, on both client and server. Inside this meta-connection you need your own data tracking to detect which data were really accepted by the peer and which were only sent but never explicitly acknowledged because the connection broke. Sounds like a kind of TCP on top of TCP.
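As an illustration only (not the asker's actual code), a write wrapper in the spirit of the question might simply treat any I/O failure as "connection dead, rebuild and retry once". GetConnection() here stands in for the asker's reconnect/handshake logic, and, as explained above, data written shortly before the failure may still have been lost, so a real protocol needs resume/acknowledgement bookkeeping on top of this:

using System;
using System.IO;
using System.Net.Security;
using System.Net.Sockets;

// Illustrative sketch: retry a write once after rebuilding the SslStream.
class SelfHealingWriter
{
    private SslStream stream;

    public void Write(byte[] bytes)
    {
        try
        {
            stream.Write(bytes, 0, bytes.Length);
            stream.Flush();
        }
        catch (Exception ex) when (ex is IOException || ex is SocketException || ex is ObjectDisposedException)
        {
            // The old connection is dead; rebuild it and try again once.
            // Data "written" just before the failure may still have been lost.
            stream = GetConnection();
            stream.Write(bytes, 0, bytes.Length);
            stream.Flush();
        }
    }

    // Placeholder for the asker's logic: recreate the TcpClient, wrap it in an
    // SslStream and redo the full SSL handshake (AuthenticateAsClient).
    private SslStream GetConnection()
    {
        throw new NotImplementedException();
    }
}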

Why is my UDP socket disconnecting

Edit: Yes, I know that UDP doesn't technically connect, but you can still use it to set the default target for Send(), which is what I'm doing here.
Basically I have this problem that between calls to MySocket.Send(), MySocket is becoming disconnected, i.e. the Connected variable becomes false (I know that Connected isn't necessarily up to date, but no data is being sent in between, so I know that it's telling the truth).
The strange thing is that the RemoteEndPoint variable is still set correctly, but when I call Send(), no data is received by the other computer. However, if I call Connect() again, the socket does connect, and I'm able to send data (at least until the next time the user does something that causes another call to Send()).
Can anyone tell me why a socket would spontaneously disconnect?
The line where I connect it is:
opep = new IPEndPoint(Opponent.Address, 1000);
Listener.Connect(opep);
I don't see anything here that could be garbage collected for example to cause this issue.
Thanks!
UDP doesn't set up a connection. You should check out the following link for more info
Difference between TCP and UDP?
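To illustrate what "connect" means for UDP, a small sketch (the address and port are placeholders taken from the question): Connect() only records a default destination for Send(); no packets are exchanged, so Connected only means "Connect() was called", not that the peer is reachable:

using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch: "connecting" a UDP socket just fixes the default remote endpoint.
var udp = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
udp.Connect(new IPEndPoint(IPAddress.Parse("10.1.0.160"), 1000));

// Send() now goes to that endpoint, but no handshake ever happened, so the
// datagram can silently vanish without Connected ever becoming false.
udp.Send(Encoding.UTF8.GetBytes("hello"));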

Confused about Sockets with UDP Protocol in C#

I've just started learning sockets through various Google searches, but I'm having some problems figuring out how to properly use sockets in C# and I'm in need of some help.
I have a test application (Windows Forms), and in a different class (which is actually in its own .dll, but that's irrelevant) I have all the server/client code for my sockets code.
Question 1)
In my test application, on the server side, the user can click the "start listening" button and the server part of my sockets application should start listening for connections on the specified address and port; so far so good.
However, the application blocks and I can't do anything until someone connects to the server. What if no one connects? How should I handle that? I could specify a receive timeout, but then what? It throws an exception; what can I do with that? What I would like is to have some sort of activity in the main application so the user knows the application didn't freeze and is waiting for connections. But if a connection doesn't come, it should time out and close everything.
Maybe I should use asynchronous calls to the send/receive methods, but they seem confusing and I was not able to make them work; only the synchronous ones work (I'll post my current code below).
Question 2)
Do I need to close anything when a send/receive call times out? As you'll see in my current code, I have a bunch of closes on the socket, but this doesn't feel right somehow. But it also doesn't feel right when an operation times out and I don't close the socket.
In conclusion of my two questions.... I would like an application that doesn't block so the user knows the server is waiting for a connection (with a little marquee animation for instance). If a connection is never established after a period of time, I want to close everything that should be closed. When a connection is established or if it doesn't happen after a period of time, I would like to inform the main application of the result.
Here's some of my code, the rest is similar. The Packet class is a custom class that represents my custom data unit, it's just a bunch of properties based on enums for now, with methods to convert them to bytes and back into properties.
The function that starts to listen for connections is something like this:
public void StartListening(string address, int port) {
    try {
        byte[] bufferBytes = new byte[32];
        if (address.Equals("0.0.0.0")) {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        } else {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }
        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);
        if (numBytesReceived == 0) {
            udpSocket.Close();
            return;
        }
        Packet syncPacket = new Packet(bufferBytes);
        if (syncPacket.PacketType != PacketType.Control) {
            udpSocket.Close();
            return;
        }
    } catch {
        if (udpSocket != null) {
            udpSocket.Close();
        }
    }
}
I'm sure I have a bunch of unnecessary code, but I'm new at this and I'm not sure what to do; any help fixing up my code and solving the issues above is really appreciated.
EDIT:
I should probably have stated that my requirements are to use UDP and implement these things myself in the application layer. You can consider this homework, but I haven't tagged it as such because the code is irrelevant and will not be part of my grade; my problem (my question) is in "how to code", as my sockets experience is minimal and it's not taught.
However, I must say that I think I've solved my problem for now. I was using threading in the demo application, which was giving me some problems; now I'm using it in the protocol connections, which makes more sense, and I can easily change my custom protocol class properties and read them from the demo application.
I have specified a timeout and throw a SocketException if it is reached. Whenever an exception like this is caught, the socket connection is closed. I'm just talking about the connection handshake, nothing more. If no exceptions are caught, the code probably went smoothly and the connection is established.
Please adapt your answers accordingly. Right now it doesn't make sense for me to mark any of them as the accepted answer; I hope you understand.
You have got a few things a bit wrong.
First of all, UDP is connectionless. You do not connect or disconnect; all you do is send and receive (and you must specify the destination each time). You should also know that the only thing UDP promises is that a complete message arrives on each read. UDP does not guarantee that your messages arrive in the correct order, or that they arrive at all.
TCP, on the other hand, is connection-based. You connect, send/receive and finally disconnect. TCP is stream-based (while UDP is message-based), which means that you can get half a message in the first read and the other half in the second read. TCP promises you that everything will arrive, and in the correct order (or it will die trying ;). So using TCP means that you should have some kind of logic to know when a complete message has arrived, and a buffer that you use to build up the complete message.
The next big question was about blocking. Since you are new at this, I recommend that you use Threads to handle sockets. Put the listener socket in one thread and each connecting socket in a separate thread (5 connected clients = 5 threads).
I also recommend that you use TCP, since it's easier to build complete messages than to order messages and build a transaction system (which will be needed if you want to make sure that all messages arrive to/from clients).
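As a hedged sketch of the framing logic mentioned above (the method names and the 4-byte length prefix are my own choices, not from the question): each message is sent as a length prefix followed by the payload, and the reader keeps reading until a full frame has arrived, because a single TCP read may return only part of a message:

using System;
using System.IO;
using System.Net.Sockets;

// Sketch: simple length-prefixed framing over a TCP NetworkStream.
static void SendFrame(NetworkStream stream, byte[] payload)
{
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // 4 bytes, host byte order
    stream.Write(lengthPrefix, 0, 4);
    stream.Write(payload, 0, payload.Length);
}

static byte[] ReadFrame(NetworkStream stream)
{
    byte[] lengthPrefix = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(lengthPrefix, 0);
    return ReadExactly(stream, length);
}

// TCP is a byte stream, so keep reading until the requested count has arrived.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0) throw new IOException("Connection closed before the full frame arrived.");
        offset += read;
    }
    return buffer;
}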
Update
You still got UDP wrong. Close doesn't do anything other than cleaning up system resources. You should do something like this instead:
public void MySimpleServer(string address, int port)
{
    try
    {
        byte[] bufferBytes = new byte[32];
        if (address.Equals("0.0.0.0")) {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        } else {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }
        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);
        while (serverCanRun)
        {
            int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);

            // A zero-byte read just means that one of the clients closed its socket
            // using Shutdown; it doesn't mean that we can't continue to receive.
            if (numBytesReceived == 0)
                continue;

            Packet syncPacket = new Packet(bufferBytes);

            // Same here: loop around to receive from another client.
            if (syncPacket.PacketType != PacketType.Control)
                continue;

            HandlePacket(syncPacket, remoteEndPoint);
        }
    }
    catch
    {
        if (udpSocket != null)
        {
            udpSocket.Close();
        }
    }
}
See? Since there is no connection, it's just a waste of time to close one UDP socket and start listening on another. The same socket can receive from ALL UDP clients that know the correct address and port. That's what remoteEndPoint is for: it tells you which client sent the message.
Update 2
Small update to make a summary of all my comments.
UDP is connectionless. You can never detect whether a connection has been established or disconnected. The Close method on a UDP socket will only free system resources. A call to client.Close() will not notify the server socket (as it would with TCP).
The best way to check whether a connection is open is to create a ping/pong style of packet, i.e. the client sends a PING message and the server responds with a PONG. Remember that UDP will not try to resend your messages if they do not arrive; therefore you need to resend the PING a couple of times before assuming that the server is down (if you do not receive a PONG).
As for clients closing, you need to send your own message to the server telling it that the client is going to stop talking to it. For reliability the same thing applies here: keep resending the BYE message until you receive a reply.
IMHO, it's mandatory that you implement a transactional system for UDP if you want reliability. SIP (google rfc3261) is an example of a protocol which uses transactions over UDP.
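A rough sketch of that ping/pong check over UDP; udpSocket and remoteEndPoint correspond to the objects in the code above, and the 500 ms timeout and 3 retries are arbitrary illustrative values:

using System.Net;
using System.Net.Sockets;
using System.Text;

// Sketch: resend PING a few times and treat the peer as down if no PONG arrives,
// because UDP gives no delivery guarantee.
static bool PeerIsAlive(Socket udpSocket, EndPoint remoteEndPoint)
{
    byte[] ping = Encoding.ASCII.GetBytes("PING");
    byte[] buffer = new byte[32];
    udpSocket.ReceiveTimeout = 500; // milliseconds

    for (int attempt = 0; attempt < 3; attempt++)
    {
        udpSocket.SendTo(ping, remoteEndPoint);
        try
        {
            EndPoint from = new IPEndPoint(IPAddress.Any, 0);
            int read = udpSocket.ReceiveFrom(buffer, ref from);
            if (read > 0 && Encoding.ASCII.GetString(buffer, 0, read) == "PONG")
                return true;
        }
        catch (SocketException)
        {
            // Timed out; retry.
        }
    }
    return false;
}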
From your description, I feel you should use TCP sockets instead of UDP. The difference is:
TCP - You wait for a connection at a particular IP:port; a user can connect to it, and until the socket is closed you can communicate by sending and receiving information. This is like calling someone on the phone.
UDP - You wait for a message at some IP:port. A user who wants to communicate just sends a message through UDP, and you receive it through UDP. The order of delivery is not guaranteed. This is more like sending snail mail to someone; there is no dedicated communication channel established.
Now coming to your problem
Server
Create a Socket with the TCP family.
Either create a thread and accept the connection in that thread, or use the BeginAccept APIs of Socket (a sketch follows after the Client steps below).
In the main thread you can still display the ticker or do whatever else you want.
Client
Connect to the server.
Communicate by sending and receiving data.
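A minimal sketch of the server side of that recipe, using TcpListener here for brevity instead of a raw Socket with BeginAccept; HandleClient is a hypothetical placeholder for the per-client receive loop, and the port is whatever you choose:

using System.Net;
using System.Net.Sockets;
using System.Threading;

// Sketch: accept connections on a background thread so the UI thread stays free.
void StartServer(int port)
{
    var listener = new TcpListener(IPAddress.Any, port);
    listener.Start();

    new Thread(() =>
    {
        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();   // blocks this thread only
            new Thread(() => HandleClient(client)) { IsBackground = true }.Start();
        }
    }) { IsBackground = true }.Start();
}

// Placeholder for your per-client logic: read/write using client.GetStream().
void HandleClient(TcpClient client) { }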

TcpClient.BeginRead/TcpClient.EndRead doesn't throw exception when internet disconnected

I'm using TcpListener to accept & read from TcpClient.
The problem is that when reading from a TcpClient, TcpClient.BeginRead / TcpClient.EndRead doesn't throw an exception when the internet is disconnected. It throws an exception only if the client's process has ended or the connection has been closed by the server or the client.
The system generally has no chance to know that the connection is broken. The only reliable way to know this is to attempt to send something. When you do, the packet is sent, then lost or bounced, and your system learns that the connection is no longer available and reports the problem back to you via an error code or exception (depending on the environment). Reading is usually not enough, because reading only checks the state of the input buffer and doesn't send a packet to the remote side.
As far as I know, low-level sockets don't notify you in such cases. You should provide your own timeout implementation or ping the server periodically.
If you want to know about when the network status changes you can subscribe to the System.Net.NetworkInformation.NetworkChange.NetworkAvailabilityChanged event. This is not specific to the internet, just the local network.
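A tiny sketch of that subscription; note it only reports local network availability, not the health of a particular TCP connection:

using System.Net.NetworkInformation;

// Sketch: react to local network availability changes.
NetworkChange.NetworkAvailabilityChanged += (sender, e) =>
{
    if (!e.IsAvailable)
    {
        // The local network went down; mark connections as suspect
        // and/or start your reconnect logic here.
    }
};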
EDIT
Sorry, I misunderstood. The concept of "connected" really doesn't exist the more you think about it. This post does a great job of going into more details about that. There is a Connected property on the TcpClient but MSDN says (emphasis mine):
Because the Connected property only reflects the state of the connection as of the most recent operation, you should attempt to send or receive a message to determine the current state. After the message send fails, this property no longer returns true. Note that this behavior is by design. You cannot reliably test the state of the connection because, in the time between the test and a send/receive, the connection could have been lost. Your code should assume the socket is connected, and gracefully handle failed transmissions.
Basically the only way to check for a client connection it to try to send data. If it goes through, you're connected. If it fails, you're not.
I don't think you'd want BeginRead and EndRead throwing exceptions, as these should be used in multithreaded scenarios.
You probably need to implement some other mechanism to respond to the dropping of a connection.
