I'm working with TcpClient to communicate with a hardware device.
The communication to the device may drop for a period of 30 seconds or so, as part of a testing process. This is fine and sometimes intended.
The problem begins when I'm sending data to the device while communication is down. Because I'm using TcpClient, I get an IO exception and the connection is dropped. The connection on the device side is still open, though.
How can I:
Reconnect to the open connection at the device? Creating a new TcpClient will create a new connection on the device side, which is unwanted...
Perhaps make TCP retransmissions take longer than 30 seconds? (Windows 7)
Your best method is to exchange a session identifier, or have some other way to track connections, and have code to handle resumes. You can increase the value of your SendTimeout property, but the receive side could still end up timing out the connection on its end.
You cannot reopen a specific connection with TcpClient once it is closed. The only other way you might do this (raw sockets code) seems to me to be more trouble than it's worth.
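A minimal sketch of that session-identifier idea, assuming the device firmware can be taught to accept a session token when a connection is re-established (the RESUME handshake and its format here are hypothetical, not part of any library):

using System;
using System.Net.Sockets;
using System.Text;
using System.Threading;

static NetworkStream ConnectWithResume(string host, int port, string sessionId)
{
    // Hypothetical handshake: the first line on a new connection names the
    // session, so the device can resume its old state instead of treating
    // this as a brand-new client.
    var client = new TcpClient(host, port);
    NetworkStream stream = client.GetStream();
    byte[] hello = Encoding.ASCII.GetBytes("RESUME " + sessionId + "\n");
    stream.Write(hello, 0, hello.Length);
    return stream;
}

static NetworkStream ConnectWithRetry(string host, int port, string sessionId)
{
    while (true)
    {
        try { return ConnectWithResume(host, port, sessionId); }
        catch (SocketException)
        {
            Thread.Sleep(TimeSpan.FromSeconds(5)); // the link may be down ~30s; keep trying
        }
    }
}

The device still sees a new TCP connection either way; the session token is what lets it associate that connection with the old state.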
I'm having trouble understanding the proper flow when a TcpListener and TcpClient communicate with each other.
I have a program that starts up a TcpListener on port 8088 and listens for TcpClient connections.
I have another program that creates a TcpClient (let's call it DataSender1) and connects to the TcpListener in the other program and sends it data.
If DataSender1 is a program that runs constantly (like a service) and pushes data to the TcpListener regularly over time, when should the TcpListener close DataSender1's connection? Should I close the connection after I receive and process each distinct message from DataSender1? Or, whenever I decide to shut down the TcpListener? Do I create a new connection each time DataSender1 is ready to send data?
I'm running into an issue in an actual application where the TcpListener aborts the connection whenever my TcpClient sends 202 messages. I have no idea why it constantly aborts on message 202.
If DataSender1 is a program that runs constantly (like a service) and pushes data to the TcpListener regularly over time, when should the TcpListener close DataSender1's connection?
Unless your communication protocol dictates that the TcpListener should close the connection, it shouldn't close the connection at all, unless an I/O error occurs on the connection, the TcpClient has disconnected its end of the connection, or the TcpListener is being shut down. The TcpClient should decide when to close the connection, such as when it is not going to be sending/receiving any data for a while.
If the connection is going to sit idle for a while, but should still be left open for whatever reason, then the TcpClient and TcpListener should implement some kind of ping/pong messaging between them. Or you could just enable TCP keep-alives at the transport layer.
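For example, enabling transport-layer keep-alives is one call on the underlying Socket in .NET. A sketch, where the timing values are illustrative and the Tcp-level options require .NET Core 3.0 or later:

using System.Net.Sockets;

// 'client' is a connected TcpClient on either side of the connection.
client.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// On .NET Core 3.0+ the probe timing can also be tuned per socket (seconds):
client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 20);
client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 5);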
Should I close the connection after I receive and process each distinct message from DataSender1?
There is overhead in tearing down a connection and re-establishing a new connection. So try to avoid that as much as possible. If you are sending data frequently, there is no good reason to close the connection.
For example, HTTP is a stateless protocol, so in HTTP 0.9, and in HTTP 1.0 by default, a connection is closed after a response is sent, and a new request requires a new connection. But as the Internet evolved, WWW pages grew in complexity, and HTTP traffic increased, it turned out to be more efficient to leave HTTP connections open and reuse them whenever possible (especially when SSL/TLS is used). That is why persistent connections are the default behavior in HTTP 1.1.
Most other TCP-based Internet protocols depend on persistent connections to maintain user state.
If you are designing your own custom communication protocol between your TcpClient and TcpListener, then you get to decide how your connections are to be managed.
Do I create a new connection each time DataSender1 is ready to send data?
Only if it is not already connected to the TcpListener. Whether it disconnects afterwards depends on your design and architectural needs.
I'm running into an issue in an actual application where the TcpListener aborts the connection whenever my TcpClient sends 202 messages. I have no idea why it constantly aborts on message 202.
Then you likely have a bug in your TcpListener code that you need to find and fix, which is exactly what a debugger is for. Put some breakpoints in the TcpListener's code and step through it to see what the TcpListener does when message 202 is received.
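For what it's worth, a very common source of bugs like this is assuming that one Read call returns exactly one message; TCP is a byte stream, so messages can arrive split across reads or coalesced into one. A minimal length-prefixed framing sketch for the receiving side (an illustration of the technique, not necessarily your bug):

using System.IO;

// 'reader' should be created once per connection, e.g. new BinaryReader(client.GetStream()).
static byte[] ReadMessage(BinaryReader reader)
{
    int length = reader.ReadInt32();  // 4-byte little-endian length prefix written by the sender
    return reader.ReadBytes(length);  // reads until the whole payload has arrived (or the stream ends)
}

If message 202 happens to be the first one that straddles a TCP segment boundary, unframed reading would fail at exactly that point.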
In my C# program, I use TCP Sockets for communication.
How does the Socket know that there is no more connection when the other side hasn't properly called Shutdown/Close or whatever?
Like for example when Internet connection is lost.
What I learned about TCP is that it sends keep-alive packets. What are the standard values for this, how frequently are they sent, where can I set the interval, and how can I set the disconnect timeout (the time to wait before the connection is considered disconnected when nothing is received)?
If a socket doesn't send or receive any data, then by definition the socket is alive and open. A TCP connection can sit idle forever, and as long as each end knows about its current state, it will still work.
The issue you can run into is where an intermediate device (such as a stateful firewall) maintains a timeout for the TCP connection that has nothing to do with the end devices, and that the end devices have no visibility into. If, after two, three or even four days, one device wants to send data on the TCP channel and an intermediate device fails to pass it on, then and only then will the socket "disconnect".
In relation to your question about TCP keep-alives - this is operating-system dependent.
Here's a good write up on the Windows way: https://blogs.technet.microsoft.com/nettracer/2010/06/03/things-that-you-may-want-to-know-about-tcp-keepalives/
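For reference, the per-socket equivalent of those Windows settings can be applied from .NET via Socket.IOControl with the SIO_KEEPALIVE_VALS layout. A Windows-only sketch, times in milliseconds:

using System;
using System.Net.Sockets;

static void SetKeepAlive(Socket socket, uint timeMs, uint intervalMs)
{
    // struct tcp_keepalive { u_long onoff; u_long keepalivetime; u_long keepaliveinterval; }
    byte[] values = new byte[12];
    BitConverter.GetBytes(1u).CopyTo(values, 0);          // enable keep-alives
    BitConverter.GetBytes(timeMs).CopyTo(values, 4);      // idle time before the first probe
    BitConverter.GetBytes(intervalMs).CopyTo(values, 8);  // interval between probes
    socket.IOControl(IOControlCode.KeepAliveValues, values, null);
}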
I'm just making my server disconnect sockets that send no data after a certain amount of time, like 20 seconds.
I wonder whether working with timers is good for that, or whether there is something special for it in the socket library? Working with a timer on the server for every socket makes it heavy.
Is it unsafe to make the client program handle that? For example every client disconnects after not sending data for a while.
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
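A sketch of that bookkeeping (type and field names here are illustrative):

using System;

class ConnectionState
{
    public DateTime LastReceived = DateTime.UtcNow; // update after every successful read
}

static bool IsIdle(ConnectionState state, TimeSpan limit)
{
    return DateTime.UtcNow - state.LastReceived > limit;
}

One periodic timer that sweeps all connections with a check like IsIdle is much cheaper than a dedicated timer per socket.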
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer ("connection without data send and receive, I think it gonna waste your threads") - that points closer to your real problem: the number of connections your server has should have no relation to how many threads the server uses to service those connections. So the only thing an open connection "wastes" is basically a bit of memory (depending on the amount of memory you need per connection, plus the socket cost with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of load, you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into issues with those, you probably want to drop TCP anyway and rewrite everything in UDP (or preferably, some ready-made solution on top of UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server, and when you finish retrieving all the data you need, close the socket (on the client side).
The server will eventually find the socket's resources released, but you can check the socket's Connected property to release the resources sooner.
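In .NET, that orderly client-side close looks roughly like this (Shutdown sends the FIN before the handle is released):

client.Client.Shutdown(SocketShutdown.Send); // tell the server we are done sending
client.Close();                              // release the socket's resources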
When the client disconnects from the server, the server can get a disconnect event. It looks like:
socket.on('disconnect', function () {
    // Disconnect event handling
});
On the client side you can also detect the disconnect event, on which you need to reconnect to the server.
I have a client server situation where the client opens a TCP socket to the server, and sometimes long periods of time will pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client, and it seems to be successful, but the client never receives it, and after a few minutes, it looks like the client then gets disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: To note, this is with peers on the same computer. The computer is behind a NAT that forwards a range of ports to this computer. The client that connects to the server opens the connection via DNS, i.e. it uses mydomain.net and the port to connect.
On Windows, sockets with no data sent are a big source for trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can be set system-wide (otherwise, the default is a useless two hours) or per-socket with the newer Winsock API.
Therefore, many applications send some occasional byte of data every now and then (to be disregarded by the peer) only to make the network layer declare disconnection after the ACK is not received (after all due retransmissions done by the layer and the ACK timeout).
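A sketch of such an application-level heartbeat in C# (the peer must read and discard the byte; the zero value and the 30-second period are arbitrary choices):

using System;
using System.IO;
using System.Net.Sockets;
using System.Threading;

// Returns the timer so the caller can keep a reference and dispose it on close.
static Timer StartHeartbeat(NetworkStream stream)
{
    return new Timer(_ =>
    {
        try { stream.Write(new byte[] { 0 }, 0, 1); } // a failed write surfaces a dead link
        catch (IOException) { /* connection is gone; run reconnect logic here */ }
    }, null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
}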
Answering your question: no, the sockets do not disconnect automatically.
Yet, you must be careful with the above issue. What complicates it further is that testing this behavior is very hard. For example, if you set everything correctly and expect to detect disconnection properly, you cannot test it by disconnecting the physical layer: the NIC will sense the carrier loss, and the socket layer will signal closure to all application sockets that relied on it. A good way to test it is to connect two computers with three legs and two switches in between, then disconnect the middle leg, thus preventing carrier loss but still physically disconnecting the machines.
There is a timeout built in to TCP, and you can adjust it; see the SendTimeout and ReceiveTimeout properties of the Socket class, but I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that timeout, the router will block all incoming traffic (having cleared the forwarding information from its memory, it no longer knows which computer to send the traffic to); the outgoing connection will also likely get a different source port, so the server may not recognize it as the same connection.
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnects due to inactivity, but this may generate some extra packets.
This sample code does it under Linux:
int val = 1;
....
// After creating the socket
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)))
    fprintf(stderr, "setsockopt failure : %d", errno);
TCP sockets don't automatically close at all; however, TCP connections do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.
I have a service which communicates through tcpListener.
The problem is that when the user restarts the service, an "Address already in use" exception is thrown, and the service cannot be started for a couple of minutes or so.
Is there any way of telling the system to terminate the old connection so I can open a new one? (I can't just use random ports, because there is no way for the service to notify the clients which port to use, so we must depend on a predefined port.)
Set the SO_REUSEADDR socket option before binding to the listening port. It looks like the corresponding .NET code is something like:
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, 1);
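With TcpListener specifically, the option has to go on the underlying socket before Start() binds it; a sketch (8088 is just an example port):

using System.Net;
using System.Net.Sockets;

var listener = new TcpListener(IPAddress.Any, 8088);
listener.Server.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
listener.Start();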
There is a reason sockets are not used for some time after they are closed.
A socket is identified by a 4-tuple: source and destination port, source and destination IP.
Let's say you close a socket forcefully while the client was busy sending data to the server.
If you wait 5 seconds and re-open the server on the same port, and the same client sends data to the same 4-tuple, the server will get packets with wrong TCP sequence numbers, and the connections will get reset.
You are shooting yourself in the foot :)
This is why connections sit in a TIME_WAIT state for 2-4 minutes (depending on the distro) until they can be used again.
Just to be clear, I'm talking about sockets, not just the listening TCP port.