Preventing TIME_WAIT using .NET 'Async' API - c#

I have a problem. I've developed a client and server wrapper for my personal use, but unfortunately, due to insufficient knowledge of network programming, I have TIME_WAIT problems during connect on the client. My client tries to make multiple connections to the same host within a short period of time. I have found out that the main reason for this is that I'm trying to reuse the socket, and it goes into the TIME_WAIT state because I'm closing the connection without a graceful shutdown. I would like to know the correct pattern for closing a connection with .NET sockets when I'm using the 'Async' APIs intensively, i.e. functions like ConnectAsync, AcceptAsync, SendAsync, ReceiveAsync and DisconnectAsync (DisconnectAsync reuses the socket).

I have found out that it is impossible to prevent TIME_WAIT. Either the server or the client will have the problem anyway, depending only on who initiates the closure of the connection first. If it's the client who closes the connection, there will be no TIME_WAIT on the server. If it's the server who closes first, then there will be no TIME_WAIT on the client. So the only option left is to use SO_REUSEADDR, but in that case it is still impossible to use the reused address to contact the previously disconnected host.
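For reference, the close-for-reuse pattern with the 'Async' APIs mentioned in the question would look roughly like the sketch below. This is only an assumption-laden illustration of Shutdown followed by DisconnectAsync with DisconnectReuseSocket; it does not by itself avoid TIME_WAIT:

// Assumes: using System.Net.Sockets;
// Graceful close that keeps the handle reusable for a later ConnectAsync.
socket.Shutdown(SocketShutdown.Both);

var disconnectArgs = new SocketAsyncEventArgs();
disconnectArgs.DisconnectReuseSocket = true;      // ask the OS to keep the socket reusable
disconnectArgs.Completed += (sender, e) =>
{
    // The same socket instance can now be handed to ConnectAsync again.
};

if (!socket.DisconnectAsync(disconnectArgs))
{
    // Returned false: the operation completed synchronously, so the socket
    // is already disconnected and reusable at this point.
}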

You can use SO_REUSEADDR on the socket to get around this. See Socket.SetSocketOption for details; it's the ReuseAddress option you need to set.
By the way, you don't really mean reuse the socket, do you? Once you get an error, you have to close it and open a new one.
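For illustration, a minimal sketch of setting that option before binding (the port number here is just a placeholder):

// Assumes: using System.Net; using System.Net.Sockets;
var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

// SO_REUSEADDR: allow binding an address/port that still has a socket in TIME_WAIT.
listener.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);

listener.Bind(new IPEndPoint(IPAddress.Any, 5000));
listener.Listen(100);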

Related

Socket remains connected after the remote computer closes it

I've already searched, but I couldn't solve my issue.
I'm simulating a TCP network on my localhost. The server listens on a port and the client connects to it. The problem is that when I close the socket from the client, Socket.Connected remains true on the server. I need to know when clients are disconnected.
I suppose that when I call Socket.Close in the client app, a TCP FIN packet is sent to the server, right? But it seems like it isn't.
Can you give me a solution?
P.S. I already call Shutdown before closing, but the problem still persists.
There is no notification-based way to know if a client is disconnected; that is the nature of TCP/IP communication. The usual way to find out whether a client is still connected is to write data to the client connection. If you get an error, you can assume that the client is disconnected. You can streamline the heuristics by looking for specific exceptions.
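As an illustration of that heuristic, here is a hedged sketch that combines a short poll with the exception check; it is one possible way to express it, not a canonical API:

// Assumes: using System; using System.Net.Sockets;
static bool IsClientAlive(Socket client)
{
    try
    {
        // Poll(SelectRead) returning true while Available == 0 means the peer
        // closed the connection gracefully (a read would return 0 bytes).
        bool closed = client.Poll(1000 /* microseconds */, SelectMode.SelectRead)
                      && client.Available == 0;
        return !closed;
    }
    catch (SocketException)
    {
        return false;   // e.g. connection reset by peer
    }
    catch (ObjectDisposedException)
    {
        return false;   // socket already closed on our side
    }
}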
While I have no practical experience with socket programming in C#, it seems that Socket.Close() does not send pending data and, by implication, doesn't send the FIN packet. (That is, in my opinion, a bit misleading, because the Close semantics seem to differ from Stream.Close(), which calls Dispose(true), which tries to flush if possible. Correct me if I'm wrong.)
The MSDN documentation states:
For connection-oriented protocols, it is recommended that you call Shutdown before calling the Close method. This ensures that all data is sent and received on the connected socket before it is closed.
If you need to call Close without first calling Shutdown, you can ensure that data queued for outgoing transmission will be sent by setting the DontLinger socket option to false and specifying a non-zero time-out interval. Close will then block until this data is sent or until the specified time-out expires. If you set DontLinger to false and specify a zero time-out interval, Close releases the connection and automatically discards outgoing queued data.
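Put together, the recommended sequence looks roughly like the sketch below (the linger timeout of 10 seconds is an arbitrary example value):

// Assumes: using System.Net.Sockets;
// Graceful shutdown: send FIN after any queued data, drain what the peer
// still sends, then close.
socket.Shutdown(SocketShutdown.Send);
var buffer = new byte[1024];
while (socket.Receive(buffer) > 0) { }
socket.Close();

// Alternative if you must call Close directly: enable linger so that Close
// blocks until pending data is sent or the timeout expires.
socket.LingerState = new LingerOption(true, 10);
socket.Close();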

Network tcp socket application retry method

I'm writing a Windows-based client (C++) and server (C#) application, which communicate with each other via TCP packets. The client sends data and the server needs to acknowledge it.
For this purpose I make a single socket() and connect() call at client startup, for the client's lifetime. Some error checking and retries have been added to the send() and recv() calling methods. Note that a client sends one set (multiple packets) of data and then quits.
Now my questions are:
1. If the server is running continuously (e.g. as a Windows service) on some PC, do I really need to consider connection breakdown (network failure) and create a new socket and connect accordingly from the client?
2. If so, do I need to resend the data from the start, or from the point where the client last failed to communicate?
I want to know the general methods people use for dealing with this kind of situation in network applications.
do I really need to consider connection breakdown and create a new socket and connect accordingly from the client?
That depends on how precious your data is. If you want to make sure it ended up at the server and an error occurred while sending, then you can consider it "not sent".
If so, do I need to resend the data from the start, or from the point where the client last failed to communicate?
That depends entirely on how your application logic and application protocol work. From your description we can't know how you send your data or how the server would recognize data it has already seen.
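For the "consider it not sent" case, here is a hedged C# sketch (the question's client is C++, but the idea carries over; the host, port, back-off and attempt count are placeholders) that reconnects with a fresh socket and resends the whole set:

// Assumes: using System.Net.Sockets; using System.Threading;
static void SendWithRetry(byte[] payload, string host, int port, int maxAttempts = 3)
{
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        try
        {
            socket.Connect(host, port);
            socket.Send(payload);                  // resend the full set on every attempt
            socket.Shutdown(SocketShutdown.Both);  // reading the server's ack is omitted here
            return;
        }
        catch (SocketException)
        {
            if (attempt == maxAttempts) throw;     // give up after the last attempt
            Thread.Sleep(1000 * attempt);          // simple linear back-off
        }
        finally
        {
            socket.Close();
        }
    }
}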
do I really need to consider connection breakdown (network failure) and create a new socket and connect accordingly from the client?
You certainly do not need to create a new socket after a connection shutdown; you can use the existing socket to connect anew.

Disconnecting a socket after a certain amount of time with no data received

I want my server to disconnect sockets that have sent no data for a certain amount of time, say 20 seconds.
I wonder whether working with timers is a good approach for that, or whether the socket library has something specifically for it. Running a timer on the server for every socket seems heavy.
Is it unsafe to let the client program handle this instead, i.e. every client disconnects itself after not sending data for a while?
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
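A sketch of that "last data received time" idea, assuming one periodic check on the server rather than a timer per socket (the type and member names here are illustrative):

// Assumes: using System; using System.Collections.Generic; using System.Net.Sockets;
class ClientState
{
    public Socket Socket;
    public DateTime LastReceived = DateTime.UtcNow;   // refresh this in every receive callback
}

// Run this from a single timer for all connections:
static void CloseIdleClients(IEnumerable<ClientState> clients, TimeSpan idleLimit)
{
    foreach (var client in clients)
    {
        if (DateTime.UtcNow - client.LastReceived > idleLimit)
        {
            try { client.Socket.Shutdown(SocketShutdown.Both); }
            catch (SocketException) { /* connection already dead */ }
            client.Socket.Close();
        }
    }
}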
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer, that a connection with no data sent or received is going to waste threads: that points closer to your real problem. The number of connections your server has should have no relation to how many threads the server uses to service them. So the only things an open connection "wastes" are a bit of memory (depending on how much per-connection state you keep, plus the socket itself with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of load, you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into issues with those, you probably want to drop TCP anyway and rewrite everything on top of UDP (or preferably, some ready-made solution on top of UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server and, when you have finished retrieving all the data you need, close the socket (on the client side).
The server will eventually find that the socket's resources have been released, but you can check the socket's Connected property to release the resources sooner.
When a client disconnects from the server, the server can get a disconnect event. It looks like this:
socket.on('disconnect', function () {
// Disconnect event handling
});
On the client side you can also catch a disconnect event, in which case you need to reconnect to the server.

Is it possible that a repeated OnConnected will be called before the previous OnDisconnected?

Imagine some spherical horse in a vacuum:
I lost control of my client application; maybe some error happened. And I tried to re-enter the hub immediately.
Is it possible that OnConnected starts faster than OnDisconnected and I turn up twice on the server?
Edited:
Sorry, I didn't say that I meant the SignalR library. I think that if my application doesn't call stop(), the server will wait about 30 seconds by default. And I can connect to the server again before OnDisconnected is called, can't I?
You'll have to look at it from the client's side; also note that if you're using TCP, the following takes place:
TCP ensures that your packets arrive in the order they were sent. So let's imagine that at the same moment the "horse" hits the vacuum and the connection breaks, your server is sending the next packet that would check the connection (assuming you implemented your server well enough, that is).
Here, there are two things that may happen:
The client has already recovered and can respond in time, meaning the interval during which the connection had problems was small enough that the next packet from the server hasn't arrived yet. So, answering your question, there is no disconnection in the first place.
The next packet from the server arrived, but the client is not responding (the connection is severed). The server would immediately take note of this, raising the OnDisconnected event. If the client recovers at virtually the same time the server takes note, it would initiate another connection (OnConnected).
So there's no chance that the client would turn up twice. If anything, the disconnection interval will be small enough for the server not to notice the problem in the first place.
Again, another protocol may behave differently, but TCP is well designed to guarantee a well-established connection and communication between a server and its clients.
It's worth mentioning that many of the communication frameworks (if not all) use TCP implicitly by default.
A client can connect a second time while the first connection is open (it will have a separate connection id though).
If the client doesn't manage to notify the server that it's closing the connection, the server will wait for a certain amount of time before removing the connection (DisconnectTimeout).
So in that case, if you restart the connection immediately, it will be a new logical connection to the server with a new connection id.
SignalR will also try to reconnect to the existing connection when it is lost, in which case it would retain its connection id once reconnected. I would recommend reading the entire article about SignalR connection lifetime events.
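For reference, the lifetime events mentioned above can be observed by overriding the hub methods. This sketch assumes classic ASP.NET SignalR 2.x (Microsoft.AspNet.SignalR), and the hub name is illustrative:

// Assumes: using System.Threading.Tasks; using Microsoft.AspNet.SignalR;
public class ExampleHub : Hub
{
    public override Task OnConnected()
    {
        // Context.ConnectionId is new for every logical connection, so a client
        // that restarts immediately shows up here with a different id.
        return base.OnConnected();
    }

    public override Task OnReconnected()
    {
        // Same connection id: the client recovered within the timeouts.
        return base.OnReconnected();
    }

    public override Task OnDisconnected(bool stopCalled)
    {
        // stopCalled == false means the server waited out DisconnectTimeout.
        return base.OnDisconnected(stopCalled);
    }
}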

Permanent TCP connection or connection establishment at request processing

I am developing a TCP server which shall communicate with the client when specified tasks are finished. So I open a socket on the server and the client connects to it.
That connection can be used for data transfers back to the client, too. That part is quite okay.
But what about connection aborts and the like?
My thought was to connect to the server each time the client has to communicate with it. But how can I then send data back to the client?
Shall I open a socket on the client side, too?
EDIT:
I have considered WCF, too. I think it could be a very good way to implement a server-client hierarchy.
What do you think?
It depends on the rest of your requirements. If we're talking a message that is in no rush that might be sent once a day, the right solution might be for the client to connect to the server periodically and check if there are any messages. If we're talking something that's more common and more in a rush, the right solution might be for the client to keep a connection open to the server at all times. In some cases, the right solution might be for the server to make a 'backwards' connection to the client, if possible -- perhaps with an option to fall back to a persistent connection from the client to the server if the 'backwards' connection isn't possible.
See this article on Push technology, particularly the section on long polling.
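If the "client checks periodically" option fits your timing requirements, a hedged sketch of it could look like this (host, port, interval and the "ASK" request line are placeholders for whatever application protocol you define):

// Assumes: using System; using System.IO; using System.Net.Sockets; using System.Threading.Tasks;
static async Task PollForMessagesAsync(string host, int port, TimeSpan interval)
{
    while (true)
    {
        try
        {
            using (var client = new TcpClient())
            {
                await client.ConnectAsync(host, port);
                using (var stream = client.GetStream())
                using (var writer = new StreamWriter(stream) { AutoFlush = true })
                using (var reader = new StreamReader(stream))
                {
                    await writer.WriteLineAsync("ASK");           // "anything for me?"
                    string reply = await reader.ReadLineAsync();  // null if the server just closes
                    if (reply != null)
                    {
                        // handle the pending message here
                    }
                }
            }
        }
        catch (SocketException)
        {
            // server unreachable; try again on the next round
        }
        await Task.Delay(interval);
    }
}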
From a runtime point of view, having the server connect to the client requires a network environment that supports this (firewall/IDS etc.).
If you can't be sure that this is always the case, then this option is ruled out, IMO.
As for the client keeping the connection open: I think this is a good option... you need to make sure that the client implementation detects any connection problems and automatically reconnects...
Whatever solution you implement, you might need a queue of events per client... depending on your requirements, these queues might even need to be persistent...
WCF can work in all the ways I described and offers several things (like serialization, optional session management, transport security etc.) which help build a robust and maintainable system... although a pure TCP/IP-based solution might be better performance-wise...
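And a rough sketch of the "client keeps the connection open and reconnects automatically" option (server address and back-off are placeholders, error handling trimmed):

// Assumes: using System; using System.IO; using System.Net.Sockets; using System.Threading.Tasks;
class PersistentClient
{
    public async Task RunAsync(string host, int port)
    {
        while (true)
        {
            var client = new TcpClient();
            try
            {
                await client.ConnectAsync(host, port);
                await ReceiveLoopAsync(client.GetStream());   // returns/throws when the link drops
            }
            catch (SocketException) { }
            catch (IOException) { }
            finally
            {
                client.Close();
            }
            await Task.Delay(TimeSpan.FromSeconds(5));        // back off before reconnecting
        }
    }

    private static async Task ReceiveLoopAsync(NetworkStream stream)
    {
        var buffer = new byte[4096];
        int read;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            // handle 'read' bytes of server-pushed data here
        }
    }
}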
