I have a client-server communication setup between a mobile device and a PC (the server).
In the communication I have four sockets: two of these are to send and receive data, and the other two are for some kind of keep-alive, since I need to detect disconnections as fast as I can.
As long as the connection is OK, the data travels without any problem. But I want to establish some priority to make sure that the keep-alive channel (remember: two sockets) is always sending data, unless the connection between server and client is dead.
How can I achieve this?
Thanks for any help.
I would question your setup with four sockets.
First, having a separate connection for discovering when the remote end dies does not give you any advantage; in fact it introduces a race condition where the "keep-alive" connection goes down but the "data" connection is still intact. Implement periodic heartbeats over the same data connection when there's no activity (see the sketch below).
Second, two independent data connections between the same nodes compete for bandwidth. Network stacks usually don't optimize across connection boundaries, so you get twice the TCP overhead for no gain. Implement data exchange over the same TCP connection - you'll get better throughput (maybe at the expense of a small latency increase, but only a good measurement would tell).
Last, but not least, four connections require four listening TCP ports, and thus potentially four holes in a firewall somewhere. Reduce that to a single port, and the administrator of that firewall will forever be your friend.
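As a minimal sketch of that single-connection approach (the framing here is made up, not anything from the question): a one-byte frame type distinguishes heartbeats from application data, so both share one TCP stream.

using System;
using System.Net.Sockets;

enum FrameType : byte { Heartbeat = 0, Data = 1 }

static class Framing
{
    // Writes one frame: [type:1][length:4][payload:length].
    public static void Send(NetworkStream stream, FrameType type, byte[] payload)
    {
        if (payload == null) payload = new byte[0];
        byte[] header = new byte[5];
        header[0] = (byte)type;
        BitConverter.GetBytes(payload.Length).CopyTo(header, 1);
        stream.Write(header, 0, header.Length);
        if (payload.Length > 0)
            stream.Write(payload, 0, payload.Length);
    }
}

A timer on either side can call Send(stream, FrameType.Heartbeat, null) whenever nothing has been written for a while; the receiver simply discards heartbeat frames and resets its idle timer.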
When using TCP for transmission, your TCP protocol stack will inform you whenever you try to send data and the (TCP) connection is broken. If you control both the server and the client code, you may well implement your heartbeat in between your data transmissions over TCP.
If TCP's connection failure detection on the respective devices is too slow for your purpose, you can implement some single-packet ping-pong scheme between client and server, like the ICMP echo request a.k.a. "ping" - or, if ICMP is not an option, sending UDP packets back and forth may do the trick.
In any case you will need some kind of timeout mechanism (one is already implemented in the TCP stack), which implies that the detection of a broken connection will be delayed, with the delay bounded by the timeout duration.
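As a minimal sketch of such a ping-pong check (the port and payload are placeholders; the other side is assumed to echo the datagram back), the ReceiveTimeout bounds how long a broken link can go undetected:

using System.Net;
using System.Net.Sockets;
using System.Text;

static class UdpPing
{
    public static bool IsPeerAlive(string host, int port, int timeoutMs)
    {
        using (var udp = new UdpClient())
        {
            udp.Client.ReceiveTimeout = timeoutMs;      // bounds the detection delay
            udp.Connect(host, port);
            udp.Send(Encoding.ASCII.GetBytes("ping"), 4);
            try
            {
                IPEndPoint remote = null;
                udp.Receive(ref remote);                // expect the echoed datagram
                return true;
            }
            catch (SocketException)                     // timed out: assume the link is down
            {
                return false;
            }
        }
    }
}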
I created extended TCP server and TCP client classes (in C#) for communication over the network for my project.
As far as I understand, a client cannot really know that a server is down unless it requests something that expects a reply and does not get one.
In our application, time and availability (of the server) are critical factors, as it involves heavy machinery for automation. Hence, according to the design discussion, the server is supposed to send its "Heart Beat" periodically, so that if a client does not receive anything from the server for a period of time, it will:
Start to attempt its own recovery actions, and if those still fail,
Raise an alarm to the service officer in the control room.
I am supposed to implement the "heart beat" part in the server, and I have a simple implementation for sending it:
public void SendHeartBeatToAllClients(byte[] hbdata) {
    foreach (Socket socket in clientNoSocketList.Select(x => x.Value).ToList())
        socket.Send(hbdata);
}
So far it works fine, but one thing that worries me is that the heart beat data (hbdata) is short (only a few pre-arranged bytes, to keep the chatter across many machines small) and self-defined. Since the server also sends other data besides the hbdata, and considering possible latency or other unexpected cases, there is always a possibility for this hbdata to get mixed up with the rest. Also, in my "heart beat" implementation, the client does not need to reply anything to the server.
So here are my questions:
Is my worry unfounded (since it works fine so far)? Is there any flaw?
Is Ping a better or a common way to have such heart beat functionality over TCP? Why or why not?
If Ping is to be implemented, considering that Ping expects a reply, is there a way to implement a reply-less Ping?
Any suggestions for making the heart beat robust while using the smallest amount of data possible?
This is probably the hardest question to answer. Can you provide a little more detail? Why do you think that your server can't handle sending more than a few bytes? Are we talking thousands of machines here? Is everything on a local LAN, or does this go across multiple networks, or the internet?
Ping is an ICMP echo request - ping is very commonly used by network monitoring software, etc., to ensure that clients are online. Typically you do not need to implement your own if you are just pinging for network access (see: https://msdn.microsoft.com/en-us/library/system.net.networkinformation.ping(v=vs.110).aspx).
Also note that ping is not over TCP at all, but rather ICMP, a somewhat different protocol, used for network diagnostics among other things. But that brings me to number 3...
Ping without a reply is kind of pointless. For what you have in mind, I think the protocol you want is UDP - you can send an arbitrary datagram with no need for any kind of handshake or reply (TCP by definition involves establishing a session with a handshake) - it just sends. These would be Sockets with SocketType.Dgram instead of SocketType.Stream, and ProtocolType.Udp instead of Tcp or Icmp. If you want to get a little more involved, you can use Broadcast to send the same thing to the entire LAN, or Multicast to send to a specific group of clients.
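For example, here is a minimal sketch of that datagram approach with a hypothetical client list (none of these names come from your code): the socket just fires the heart-beat bytes at each client and never waits for anything.

using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;

static class UdpHeartBeat
{
    public static void Send(IEnumerable<IPEndPoint> clients, byte[] hbdata)
    {
        using (var socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Dgram, ProtocolType.Udp))
        {
            foreach (IPEndPoint client in clients)
                socket.SendTo(hbdata, client);   // fire-and-forget, no reply expected
        }
    }
}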
Again, are you sure you need to be that concerned about limiting traffic, etc., here?
Personally, I would flip it around, and have the clients "Check In" at a set interval, reporting a status code to the server. If the server notices a client hasn't checked in for a while, it should send a message to the client and expect a reply.
If you really are having issues scaling that up, I would have the server send the "Heart beats" via UDP at a set interval, and if the client thinks it's missing them, have a mechanism for it to hit the server and ask for a reply - and then if it doesn't get a response, raise the alarm.
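Here is a minimal sketch of the "check in" idea, with a made-up status byte and interval (neither comes from your protocol): each client pushes a tiny status report on a timer, and a failed send is its cue to start recovery.

using System;
using System.Net.Sockets;
using System.Threading;

class CheckInClient
{
    private readonly Socket _server;   // an already-connected TCP socket
    private readonly Timer _timer;

    public CheckInClient(Socket server, TimeSpan interval)
    {
        _server = server;
        _timer = new Timer(_ => CheckIn(), null, interval, interval);
    }

    private void CheckIn()
    {
        try
        {
            _server.Send(new byte[] { 0x01 });   // 0x01 = "alive and well" (made-up code)
        }
        catch (SocketException)
        {
            // The send failed: start local recovery actions / raise the alarm here.
        }
    }
}

On the server side, all that is needed is a per-client "last heard from" timestamp to compare against the same interval.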
Edit: just saw Prabhu's answer - he's right, ping will only tell you if the computer is up, you definitely want something inside the actual application to report back, not just the status of the network connection.
in my "heart beat" implementation, the client does not need reply anything to the server.
Application-level keep-alives need to be two-way, don't they? What the above enables is that clients can be sure the server is alive and healthy when they receive a heart beat. If the client does not respond, the server will not know the true status of the client. If a client becomes unreachable, heart beats pile up in the server's send buffer, and the server application will be oblivious to that fact.
Is my worry unfounded (since it works fine so far)? Is there any flaw?
A small payload shouldn't be a problem. It's better that the heart beats are small.
Is Ping a better or a common way to have such heart beat functionality over TCP? Why or why not?
Ping will come back positive even if the client application is down, as long as the system itself is healthy.
I'm making my server disconnect sockets that have sent no data for a certain amount of time, say 20 seconds.
I wonder whether working with timers is a good approach, or whether there is something for this in the socket library. Running a timer on the server for every socket seems heavy.
Is it unsafe to make the client program handle this, for example having every client disconnect itself after not sending data for a while?
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
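Here's a minimal sketch of that bookkeeping, with a hypothetical per-client state class (the names are made up): each receive updates a timestamp, and a periodic sweep closes anything idle for too long.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Sockets;

class ClientState
{
    public Socket Socket;
    public DateTime LastReceived;   // update whenever Receive returns data
}

static class IdleSweep
{
    public static void CloseIdleClients(List<ClientState> clients, TimeSpan maxIdle)
    {
        var idle = clients.Where(c => DateTime.Now - c.LastReceived > maxIdle).ToList();
        foreach (ClientState client in idle)
        {
            client.Socket.Close();   // or Shutdown(SocketShutdown.Both) first
            clients.Remove(client);
        }
    }
}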
But the more important question is "Why?". The best solution depends on what your reasons are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse: a closed TCP socket lingers for more like 2-4 minutes, so when you disconnect a client after 20 seconds and it reconnects, the server is now holding two sockets for it instead of one. Oops.
As for your comment on the deleted answer - that a connection without data being sent or received is "going to waste your threads" - it points closer to your real problem: the number of connections your server has should have no relation to how many threads the server uses to service those connections. So the only thing an open connection "wastes" is basically a bit of memory (depending on the amount of memory you need per connection, plus the socket with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of "load", you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into issues with those, you probably want to drop TCP anyway and rewrite everything on UDP (or preferably, some ready-made solution on top of UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server and, when you finish retrieving all the data you need, close the socket (on the client side).
The server will eventually find that the socket's resources have been released, but you can check the socket's Connected property to release the resources sooner.
When the client disconnects from the server, the server gets a disconnect event. It looks like this:
socket.on('disconnect', function () {
    // Disconnect event handling
});
On the client side you can also detect the disconnect event, at which point you need to reconnect to the server.
We currently have different applications that talk to each other over the network with a stateless protocol, comparable to HTTP. Many applications send messages to a single application that listens on one port only.
We now need to change this in order to have a well-defined connection from A to B, so there will be fixed ports for specific communication partners. We need to know instantly when the connection is lost or established, so we can't rely on periodic checks with keep-alive messages and timeouts or the like.
What protocol or technique can I use to implement this behaviour? The whole thing is used for sending and receiving data from sensors and other devices, so there's more or less constant traffic with a relatively low average bandwidth (approximately 10 - 20 Mbit/s on a central, heavily used component).
Thanks for your suggestions!
we can't rely on periodic checks with keep-alive messages and timeouts or the like
You have to, as TCP doesn't let you know about a broken connection until you try to read or write.
See also How to test for a broken connection of TCPClient after being connected?
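For what it's worth, a commonly used .NET-level probe (just a sketch; it only catches an orderly close or reset, not a silently dead link, so the keep-alive traffic is still needed) combines Poll with Available: a socket that reports readable but has zero bytes pending has been closed by the peer.

using System.Net.Sockets;

static class ConnectionProbe
{
    // Returns true if the peer appears to have closed the connection.
    public static bool LooksDisconnected(Socket socket)
    {
        bool readable = socket.Poll(0, SelectMode.SelectRead);
        return readable && socket.Available == 0;
    }
}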
I have a client-server situation where the client opens a TCP socket to the server, and sometimes long periods of time pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client and it seems to be successful, but the client never receives it, and after a few minutes it looks like the client gets disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: Note that this is with peers on the same computer. The computer is behind a NAT that forwards a range of ports to this computer. The client that connects to the server opens the connection via DNS, i.e. it uses mydomain.net and the port to connect.
On Windows, sockets with no data sent are a big source of trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can be set system-wide (otherwise a useless default of two hours applies), or per socket with the later WinSock API.
Therefore, many applications send an occasional byte of data every now and then (to be disregarded by the peer), just to make the network layer declare a disconnection when the ACK is not received (after all due retransmissions by the layer and the ACK timeout).
Answering your question: no, the sockets do not disconnect automatically.
Yet, you must be careful with the above issue. What complicates it further is that testing this behavior is very hard. For example, if you set everything correctly and expect to detect disconnection properly, you cannot test it by disconnecting the physical layer: the NIC will sense the carrier loss, and the socket layer will signal all application sockets that relied on it to close. A good way to test it is to connect the two computers with three cable segments and two switches in between, then disconnect the middle segment; that prevents carrier loss while still physically disconnecting the machines.
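In .NET, a minimal sketch of the per-socket route mentioned above (the timing values are made up): enable keep-alive and, on Windows, shorten the probe timing via the SIO_KEEPALIVE_VALS control code.

using System;
using System.Net.Sockets;

static class KeepAliveConfig
{
    public static void Enable(Socket socket, uint idleMs, uint intervalMs)
    {
        // Turn TCP keep-alive on for this socket.
        socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

        // SIO_KEEPALIVE_VALS takes three 32-bit values: on/off, idle time before
        // the first probe, and the interval between probes (milliseconds).
        byte[] values = new byte[12];
        BitConverter.GetBytes(1u).CopyTo(values, 0);
        BitConverter.GetBytes(idleMs).CopyTo(values, 4);
        BitConverter.GetBytes(intervalMs).CopyTo(values, 8);
        socket.IOControl(IOControlCode.KeepAliveValues, values, null);
    }
}

KeepAliveConfig.Enable(socket, 10000, 1000), for example, would start probing after ten idle seconds and retry every second.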
There is a timeout built in to TCP, and you can adjust it; see the SendTimeout and ReceiveTimeout properties of the Socket class. But I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that router timeout, it will block all incoming traffic (as it has cleared the forwarding information from its memory, so it no longer knows which computer to send the traffic to); the outgoing connection will also likely get a different source port, so the server may not recognize it as the same connection.
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnection due to inactivity, but this may generate some extra packets.
This sample code does it under Linux:
int val = 1;
....
// After creating the socket
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)))
    fprintf(stderr, "setsockopt failure : %d", errno);
Regards.
TCP sockets don't automatically close at all; however, TCP connections do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.
When I was experimenting with C# and WCF, one of the things I kept reading about was how unscalable it is to have clients keep a constantly open connection to the server. And although WCF allows that, it seems that the recommended best practice is to use 'per call' as opposed to 'per session' instance management if you want any kind of decent scalability. (Please correct me if I'm wrong.)
However, from what I understand, IRC uses constant client connections to the server, and IRC servers (well, networks of servers) service hundreds of thousands of clients at any given time. So in that case, is there actually nothing 'bad' about keeping constant client connections to the server?
As long as you don't follow the one-thread-per-connection architecture, a server can support quite a large number of concurrent TCP connections.
IRC doesn't require much per connection state, beyond the TCP send and receive windows.
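As a sketch of what avoiding one-thread-per-connection can look like in C# (a hypothetical echo handler, not IRC): async accept and read calls let a small thread pool service many long-lived connections.

using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

static class AsyncServer
{
    public static async Task RunAsync(int port)
    {
        var listener = new TcpListener(IPAddress.Any, port);
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client);                 // no dedicated thread per client
        }
    }

    private static async Task HandleAsync(TcpClient client)
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
            {
                // Placeholder protocol handling: echo back whatever arrived.
                await stream.WriteAsync(buffer, 0, read);
            }
        }
    }
}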
If you need real-time duplex communication (IRC is a chat protocol), then keeping a TCP connection alive is a relevant option. However, a TCP connection brings network overhead, and operating systems have practical limits on the number of concurrent open TCP connections. WCF is commonly used in SOAP/HTTP/RPC contexts where duplex communication is not required, but it certainly offers suitable bindings and channels for that as well. To answer your question: there is nothing bad about keeping the connection open if you have real-time, duplex requirements for your communication.
Yes, such an architecture is feasible, but... the "ping? pong!" exchange was invented for a reason: to let both parties know that the other is still there. You cannot actually tell whether a client is quiet because it simply has nothing to say, or because it is disconnected and you are waiting for a TCP timeout.
UPD: "hundreds of thousands of clients" is possible on IRCnet only because of server networks. For a single machine, the C10K problem is still an issue.