Socket remains connected after the remote computer closes it - C#

I've already searched but didn't solve my issue.
I'm simulating a TCP network on my localhost. The server listens on a port and the client connects to it. The problem is that when I close the socket on the client, Socket.Connected remains true on the server. I need to know when clients are disconnected.
I suppose that when I call Socket.Close in the client app, a TCP FIN packet is sent to the server, right? But it seems it isn't.
Can you give me a solution?
P.S. I already called shutdown before closing, but the problem still persists.

There is no notification-based way to know whether a client has disconnected; that is the nature of TCP/IP communication. The usual way to know whether a client is connected is to write data to the client connection. If you get an error, you can assume the client has disconnected. You can refine the heuristic by looking for specific exceptions.
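For example, a minimal sketch of that write-and-catch check (the method name, probe payload and the exact set of error codes here are my own choices, not from the question):

using System;
using System.Net.Sockets;

// Probe the client connection with a small write and treat specific
// socket errors as "the client is gone".
static bool IsClientStillConnected(Socket client, byte[] probe)
{
    try
    {
        client.Send(probe);
        return true;
    }
    catch (SocketException ex) when (
        ex.SocketErrorCode == SocketError.ConnectionReset ||
        ex.SocketErrorCode == SocketError.ConnectionAborted ||
        ex.SocketErrorCode == SocketError.Shutdown)
    {
        return false;   // the usual "client is gone" errors
    }
    catch (ObjectDisposedException)
    {
        return false;   // the socket was already closed on our side
    }
}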

While I have no practical experience with socket programming in C#, it seems that Socket.Close() does not send pending data and, by implication, does not send the FIN packet. (That is, in my opinion, a bit misleading, because the Close semantics seem to differ from Stream.Close(), which calls Dispose(true), which tries to flush if possible. Correct me if I'm wrong.)
The MSDN documentation states:
For connection-oriented protocols, it is recommended that you call Shutdown before calling the Close method. This ensures that all data is sent and received on the connected socket before it is closed.
If you need to call Close without first calling Shutdown, you can ensure that data queued for outgoing transmission will be sent by setting the DontLinger socket option to false and specifying a non-zero time-out interval. Close will then block until this data is sent or until the specified time-out expires. If you set DontLinger to false and specify a zero time-out interval, Close releases the connection and automatically discards outgoing queued data.
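In code that translates to roughly the following sketch. Here 'client' is assumed to be a connected, connection-oriented Socket, and LingerState is the managed counterpart of the linger options the quote mentions:

using System.Net.Sockets;

static class GracefulClose
{
    // Recommended sequence from the documentation: Shutdown first
    // (sends the FIN and flushes queued data), then Close.
    public static void ShutdownThenClose(Socket client)
    {
        client.Shutdown(SocketShutdown.Both);
        client.Close();
    }

    // Variant from the quote: make Close itself block until pending data
    // is sent or the timeout expires, by enabling linger with a non-zero timeout.
    public static void CloseWithLinger(Socket client, int timeoutSeconds = 5)
    {
        client.LingerState = new LingerOption(true, timeoutSeconds);
        client.Close();
    }
}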

Related

How to keep a TCP Socket alive and detect a disconnect?

In my C# program, I use TCP Sockets for communication.
How does the Socket know that there is no more connection when the other side hasn't properly called Shutdown/Close or whatever?
Like for example when Internet connection is lost.
What I learned about TCP is that it sends keep-alive packets. What are the standard values for this, how frequently are they sent, where can I set the interval, and how can I set the disconnect timeout (the time to wait before the connection is considered disconnected when nothing is received)?
If a socket doesn't send or receive any data then by definition the socket is alive and open. A TCP connection can sit idle forever, and as long as each end knows its current state, it will still work.
The issue you can run into is where an intermediate device (such as a stateful firewall) maintains a timeout for the TCP connection that has nothing to do with the end devices, and the end devices have no visibility into it. If, after two, three or even four days, one device wants to send data on the TCP channel and an intermediate device fails to pass it on, then and only then will the socket "disconnect".
In relation to your question about TCP keep-alives: this is operating-system dependent.
Here's a good write up on the Windows way: https://blogs.technet.microsoft.com/nettracer/2010/06/03/things-that-you-may-want-to-know-about-tcp-keepalives/
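For reference, a minimal C# sketch of the per-socket tuning that Windows write-up describes; the 30-second/1-second values in the usage comment are only examples, not standard defaults:

using System;
using System.Net.Sockets;

static class KeepAliveConfig
{
    // Enable TCP keep-alive on a connected socket and, on Windows, override the
    // system-wide timing via SIO_KEEPALIVE_VALS (a 12-byte struct of three
    // little-endian 32-bit values: on/off, idle time in ms, probe interval in ms).
    public static void Enable(Socket socket, uint idleMs, uint intervalMs)
    {
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

        byte[] keepAliveValues = new byte[12];
        BitConverter.GetBytes(1u).CopyTo(keepAliveValues, 0);         // enable
        BitConverter.GetBytes(idleMs).CopyTo(keepAliveValues, 4);     // idle time before first probe
        BitConverter.GetBytes(intervalMs).CopyTo(keepAliveValues, 8); // interval between probes
        socket.IOControl(IOControlCode.KeepAliveValues, keepAliveValues, null);
    }
}

// Example: start probing after 30 s of silence, then every second.
//   KeepAliveConfig.Enable(connectedSocket, 30000, 1000);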

Disconnecting socket after a certain amount of time with no data received

I'm just making my server disconnect sockets that send no data after a certain amount of time, like 20 seconds.
I wonder whether working with timers is a good approach for that, or whether there is something for it in the socket library. Running a timer on the server for every socket seems heavy.
Is it unsafe to make the client program handle that, e.g. every client disconnects itself after not sending data for a while?
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably already have a keep-alive system that periodically sends a message client->server and vice versa when there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state and then close the socket if it gets too far from DateTime.Now, as in the sketch below.
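A minimal sketch of that idea (the class and member names are mine, not from the question):

using System;
using System.Net.Sockets;

// Per-connection state: remember when data was last received and let a single
// periodic sweep (one timer for all clients, not one per socket) close idle ones.
class ClientState
{
    public Socket Socket { get; }
    public DateTime LastReceivedUtc { get; private set; } = DateTime.UtcNow;

    public ClientState(Socket socket) => Socket = socket;

    public void MarkDataReceived() => LastReceivedUtc = DateTime.UtcNow;

    public bool IsIdle(TimeSpan timeout) => DateTime.UtcNow - LastReceivedUtc > timeout;
}

// In the periodic sweep:
//   foreach (var client in clients)
//       if (client.IsIdle(TimeSpan.FromSeconds(20)))
//           client.Socket.Close();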
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer, that a connection with no data being sent or received is going to waste your threads: that points closer to your real problem. The number of connections your server has should have no relation to how many threads the server uses to service those connections. The only thing an open connection "wastes" is basically a bit of memory (depending on how much memory you need per connection, plus the socket itself with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of load, you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into those issues, you probably want to drop TCP anyway and rewrite everything in UDP (or preferably, some ready-made solution on top of UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server and, when you finish retrieving all the data you need, close the socket (on the client side).
The server will eventually find the socket's resources released, but you can check the socket's Connected property to release the resources sooner.
When a client disconnects from the server, the server can get a disconnect event. It looks like:
socket.on('disconnect', function () {
    // Disconnect event handling
});
On the client side you can also detect the disconnect event, in which case you need to reconnect to the server.

Poll a connected client from a server in C#

I am trying to poll a connection from a client connected to my server.
When you use Poll you need to give it a socket connection, but on the server's side the socket is bound to its own IP address and a specific port. Making another socket to connect on the same port but with the client's IP address won't work, since you can't have multiple connections on the same socket.
I am just wondering what would be a good way to constantly be checking if a client is still connected to the server and also when it disconnects?
I was thinking some sort of timeout check or something. I just wanted to know if there was any generic or proper way of achieving this.
I have tried Socket.Poll but it does not seem to achieve what I want.
To restate my question, how do you check if a client is connected on the server side using TCP sockets in C#?
socket.Receive will just return 0.
From MSDN
If you are using a connection-oriented Socket, the Receive method will read as much data as is available, up to the size of the buffer. If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
There is also the Connected property in the Socket class if you need it.
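In other words, a receive loop on the server side can use the zero-byte read as its disconnect signal, roughly like this (a sketch, not the asker's code):

using System;
using System.Net.Sockets;

static class ReceiveLoop
{
    // Blocking receive loop: a zero-byte read means the remote side has shut
    // the connection down gracefully, so we close our end and stop.
    public static void Run(Socket connection, Action<byte[], int> onData)
    {
        var buffer = new byte[4096];
        while (true)
        {
            int read = connection.Receive(buffer);
            if (read == 0)
            {
                connection.Close();
                return;
            }
            onData(buffer, read);
        }
    }
}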
There are two kinds of sockets: For listening and for connections. If a client has connected you must have an instance of Socket that is associated with that connection.
Use this socket to periodically send data and receive an acknowledgement back. This is the only way to make 100% sure that the connection is still open.
In particular, the Connected property cannot be used to detect a lost connection, because if the network cable is unplugged neither of the two parties is notified in any way. The property can't be accurate in principle.
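A sketch of that periodic check; the one-byte ping/pong "protocol" here is invented purely for illustration, and a real application would fold this into its own message format:

using System;
using System.Net.Sockets;

static class Heartbeat
{
    public static bool IsPeerAlive(Socket connection, int timeoutMs = 2000)
    {
        try
        {
            connection.Send(new byte[] { 0x01 });   // ping
            connection.ReceiveTimeout = timeoutMs;  // bound the wait for the reply
            var reply = new byte[1];
            return connection.Receive(reply) == 1;  // 0 bytes means the peer closed
        }
        catch (SocketException)
        {
            return false;                           // send/receive failed: treat as lost
        }
    }
}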

Is it possible that repeated OnConnected will call before previous OnDisconnected?

Imagine some spherical horse in a vacuum:
I lost control of my client application, maybe some error happened, and I tried to re-enter the hub immediately.
Is it possible that OnConnected starts faster than OnDisconnected and I turn up twice on the server?
Edited:
Sorry, I didn't say that I meant the SignalR library. I think that if my application doesn't call stop(), the server will wait about 30 seconds by default, and I can connect to the server again before OnDisconnected is called. Isn't that so?
You'll have to take it from the client's side. Also note that if you're using TCP, the following would take place:
TCP ensures that your packets arrive in the order they were sent. So let's imagine that at the same moment the "horse" hit the vacuum and the connection broke, your server is sending the next packet that would check the connection (if you implemented your server well enough, that is).
Here, there are two things that may happen:
The client has already recovered and can respond in time, meaning the interval during which the connection had problems was small enough that the next packet from the server hadn't arrived yet. So, responding to your question, there's no disconnection in the first place.
The next packet from the server arrived but the client is not responding (the connection is severed). The server would instantly take note of this, raising the OnDisconnected event. If the client recovers at virtually the same moment the server takes note, it would then initiate another connection (OnConnected).
So there's no chance that the client would turn up twice. If anything, the disconnection interval will be small enough that the server doesn't notice the problem in the first place.
Again, another protocol may behave differently, but TCP is well designed to guarantee a well-established connection and communication between a server and clients.
It's worth mentioning that many of the communication frameworks (if not all) use TCP implicitly by default.
A client can connect a second time while the first connection is open (it will have a separate connection id though).
If the client doesn't manage to notify the server that it's closing the connection, the server will wait for a certain amount of time before removing the connection (DisconnectTimeout).
So in that case, if you restart the connection immediately, it will be a new logical connection to the server with a new connection id.
SignalR will also try to reconnect to the existing connection when it is lost, in which case it would retain its connection id once reconnected. I would recommend reading the entire article about SignalR connection lifetime events.
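For context, a sketch of those lifetime events and the timeout setting, assuming SignalR 2.x:

using System;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class MyHub : Hub
{
    public override Task OnConnected()
    {
        // A client that reconnects after a hard failure can land here with a
        // brand-new ConnectionId while its old logical connection is still
        // waiting for the DisconnectTimeout to expire.
        Console.WriteLine("Connected: " + Context.ConnectionId);
        return base.OnConnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // stopCalled == false means the client never notified the server and
        // this fired only after the DisconnectTimeout elapsed.
        Console.WriteLine("Disconnected: " + Context.ConnectionId);
        return base.OnDisconnected(stopCalled);
    }
}

// At startup, the timeout itself is configurable (roughly 30 seconds by default):
//   GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);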

Do TCP sockets automatically close after some time if no data is sent?

I have a client-server situation where the client opens a TCP socket to the server, and sometimes long periods of time pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client and it seems to succeed, but the client never receives it, and after a few minutes it looks like the client then gets disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: To note, this is with peers on the same computer. The computer is behind a NAT that forwards a range of ports to it. The client connects to the server via DNS, i.e. it uses mydomain.net plus the port to connect.
On Windows, sockets with no data sent are a big source of trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can be set system-wide (otherwise the default is a useless two hours) or per socket with the later WinSock API.
Therefore, many applications send an occasional byte of data every now and then (to be disregarded by the peer) just so the network layer declares disconnection when an ACK is not received (after all the retransmissions and the ACK timeout handled by that layer).
Answering your question: no, the sockets do not disconnect automatically.
Yet you must be careful with the above issue. What complicates it further is that this behavior is very hard to test. For example, if you set everything correctly and expect to detect disconnection properly, you cannot test it by disconnecting the physical layer: the NIC will sense the carrier loss and the socket layer will close all application sockets that relied on it. A good way to test it is to connect the two computers with three cable legs and two switches in between and disconnect the middle leg, thus physically separating the machines without causing a carrier loss.
There is a timeout built into TCP, but you can adjust it; see the SendTimeout and ReceiveTimeout properties of the Socket class. I have a suspicion that isn't your problem, though. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that router timeout, it will block all incoming traffic (having cleared the forwarding entry from its memory, it no longer knows which computer to send the traffic to); the outgoing connection will also likely get a different source port when re-established, so the server may not recognize it as the same connection.
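For completeness, the Socket timeouts mentioned above are plain millisecond properties; note they only bound blocking Send/Receive calls and do nothing to keep a NAT mapping alive (a sketch, with an illustrative 30-second value):

using System.Net.Sockets;

static class SocketTimeouts
{
    // Throw a SocketException if a blocking Send or Receive takes longer than 30 s.
    public static void Configure(Socket socket)
    {
        socket.SendTimeout = 30000;
        socket.ReceiveTimeout = 30000;
    }
}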
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnects due to inactivity, but this may generate some extra packets.
This sample code does it under Linux:
int val = 1;
....
// After creating the socket
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)))
    fprintf(stderr, "setsockopt failure : %d", errno);
Regards.
TCP sockets don't automatically close at all; TCP connections, however, do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.
