I have an application that consists of numerous systems using UDP clients in remote locations. All clients send UDP packets to a central location for processing. In my application, it is critical that the central location knows what time the packet was sent by the remote location.
From a design perspective, would it be "safe" to assume that the central location could timestamp the packets as they arrive and use that as the "sent time"? Since the app uses UDP, shouldn't the packets either arrive almost immediately or not arrive at all? The other option would be to set up some kind of time syncing on each remote location. The disadvantage is that I would then need to continually ensure that the time syncing is working on each of potentially hundreds of remote locations.
My question is whether timestamping the UDP packets at the central location to determine "sent time" is a potential flaw. Is it possible to experience any delay with UDP?
For seconds-level resolution you can timestamp the packet when you receive it, but you still need a sequence number to reject re-ordered or duplicate packets.
This can make your remote stations less complex, as they won't need a battery-backed clock or synchronisation techniques.
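A minimal sketch of that receiver-side approach in Python, assuming each client prepends a 4-byte big-endian sequence number to its payload (that framing and the port are made up for illustration):

import socket, struct, time

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

last_seq = {}  # highest sequence number seen so far, per client address

while True:
    data, addr = sock.recvfrom(2048)
    recv_time = time.time()          # receiver-side approximation of "sent time"
    seq = struct.unpack("!I", data[:4])[0]
    if seq <= last_seq.get(addr, -1):
        continue                     # drop duplicates and re-ordered packets
    last_seq[addr] = seq
    payload = data[4:]
    # hand (payload, recv_time) to the processing stage here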
For millisecond resolution you would want to measure the round-trip time (RTT) and use it to estimate each remote clock's offset from the clock on the receiver.
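A rough sketch of such an NTP-style offset estimate, assuming the remote host answers a probe with its own clock reading packed as a big-endian double (the probe format is an assumption):

import socket, struct, time

def estimate_offset(sock, remote_addr):
    """Estimate remote_clock - local_clock from one probe round trip."""
    t0 = time.time()
    sock.sendto(b"PROBE", remote_addr)
    data, _ = sock.recvfrom(64)          # remote replies with its clock reading
    t1 = time.time()
    remote_ts = struct.unpack("!d", data[:8])[0]
    rtt = t1 - t0
    # Assume the reply was generated halfway through the round trip.
    return remote_ts - (t0 + rtt / 2.0)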
Unless you are using the precision time protocol (PTP) in a controlled environment you can never trust the clock of remote hosts.
There is always a delay in transmission, and UDP packets do not have guaranteed delivery, nor are they guaranteed to arrive in sequence.
I would need more information about the context to recommend a better solution.
One option would be to require that the client clocks are synchronized with an external atomic clock. To ensure this, and to make your UDP more robust, the server can reject any packets that arrive "late" (as determined by the difference between the server's clock, also externally synced, and the packet timestamp).
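A minimal sketch of that rejection check, assuming the packet carries the sender timestamp as a big-endian double in its first 8 bytes and a tolerance of 2 seconds (both the framing and the tolerance are made up):

import socket, struct, time

MAX_SKEW = 2.0   # seconds; tune to expected network delay plus clock error

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    data, addr = sock.recvfrom(2048)
    sent_ts = struct.unpack("!d", data[:8])[0]
    if abs(time.time() - sent_ts) > MAX_SKEW:
        continue   # reject "late" (or badly out-of-sync) packets
    # process(data[8:], sent_ts) would go here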
If your server is acking packets, it can report to the client that it is (possibly) out of sync so that the client can re-sync itself.
If your server is not acking packets, your whole scheme is probably going to fail anyhow due to dropped or out-of-order packets.
How can you check whether the message has reached the destination? I came up with a solution of my own, but as I'm not a pro on this kind of topic I'd like to know some other ways.
My solution is: (client side) send the packet; if no acknowledgement has been received within the timeout, send it once more. (Server side) if the message received is correct, send an acknowledgement; if it is not received on the other side, the client sends again.
[Diagram of the algorithm omitted.]
In short, both sides send the message twice.
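A minimal sketch of this stop-and-wait scheme in Python, assuming a one-byte ACK, a fixed timeout, and a hypothetical is_valid() check (none of which come from the question):

import socket

def send_reliably(sock, data, addr, timeout=1.0, retries=2):
    """Send a datagram and wait for a 1-byte ACK, retransmitting on timeout."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(data, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if ack == b"\x01":
                return True        # acknowledged
        except socket.timeout:
            pass                   # no ACK in time, so retransmit
    return False                   # give up after all retries

def serve(sock):
    """Server side: acknowledge every valid message."""
    while True:
        data, addr = sock.recvfrom(2048)
        if is_valid(data):         # your own validity check (hypothetical)
            sock.sendto(b"\x01", addr)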
any other ideas?
It depends on your application. But looking at the diagram you attached, what you describe is essentially re-implementing TCP.
However, if you really want to use UDP instead of TCP, you have to let go of the ACK thing.
Assume you are continuously streaming images to a remote destination, and you don't mind some frame loss as long as the stream stays as fast as possible. You can use UDP for that. Just also consider how reliable the transmission line (physical layer) is, to predict the outcome.
But if your application is not that time-critical and needs the highest possible reliability, then use TCP.
Below is a comparison of UDP and TCP.
The UDP protocol is an extremely simple protocol that allows for the sending of datagrams over IP. UDP is preferable to TCP for the delivery of time-critical data as many of the reliability features of TCP tend to come at the cost of higher latency and delays caused by the unconditional re-sending of data in the event of packet loss.
In contrast to TCP, which presents the programmer with one ordered octet stream per connected peer, UDP provides a packet-based interface with no concept of a connected peer. Datagrams arrive containing a source address, and programmers are expected to track conceptual peer “connections” manually.
TCP guarantees that either a given octet will be delivered to the connected peer, or the connection will be broken and the programmer notified. UDP does not guarantee that any given packet will be delivered, and no notification is provided in the case of lost packets.
TCP guarantees that every octet sent will be received in the order that it was sent. UDP does not guarantee that transmitted packets will be received in any particular order, although the underlying protocols such as IP imply that packets will generally be received in the order transmitted in the absence of routing and/or hardware errors.
TCP places no limits on the size of transmitted data. UDP directly exposes the programmer to several implementation-specific (but also standardized) packet size limits. Creating packets of sizes that exceed these limits increases the chances that packets will either be fragmented or simply dropped. Fragmentation is undesirable because if any of the individual fragments of a datagram are lost, the datagram as a whole is automatically discarded. Working out a safe maximum size for datagrams is not trivial due to various overlapping standards.
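As a rough illustration of staying under those limits, one conservative approach is to cap each datagram's payload well below the common 1500-byte Ethernet MTU (the 1400-byte budget below is an assumption, not a standard):

import socket

MAX_PAYLOAD = 1400  # conservative budget under a 1500-byte Ethernet MTU

def send_chunked(sock, data, addr):
    """Split data into datagrams small enough to avoid IP fragmentation."""
    for i in range(0, len(data), MAX_PAYLOAD):
        sock.sendto(data[i:i + MAX_PAYLOAD], addr)

The receiver would still need sequence numbers to reassemble the chunks and to detect lost ones.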
I'm just making my server disconnect sockets that send no data after a certain amount of time, like 20 seconds.
I wonder whether working with timers is a good approach for that, or whether there is something built into the socket library for it. Running a timer on the server for every socket seems heavy.
Is it unsafe to make the client program handle that? For example, every client disconnects itself after not sending data for a while.
This should be very easy to implement as part of your keep-alive checking. Unless you're completely ignoring the issue of dropped connections, you probably have a keep-alive system that periodically sends a message client->server and vice versa if there's been no communication. It should be trivial to add a simple "last data received time" value to the socket state, and then close the socket if it gets too far from DateTime.Now.
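A minimal sketch of that bookkeeping in Python rather than the .NET types mentioned above (the 20-second limit mirrors the question; the data structures are made up):

import time

IDLE_LIMIT = 20.0   # seconds without data before the client is dropped

last_seen = {}      # connection -> time of last received data

def on_data(conn):
    last_seen[conn] = time.time()

def reap_idle(connections):
    """Close connections that have been silent for too long."""
    now = time.time()
    for conn in list(connections):
        if now - last_seen.get(conn, now) > IDLE_LIMIT:
            conn.close()
            connections.remove(conn)
            last_seen.pop(conn, None)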
But the more important question is "Why?". The best solution depends on what your reasons for this are in the first place. Do you want to make the server usable to more clients by dumping those that aren't sending data? You'll probably make everything worse, since the timeouts for TCP sockets are more like 2-4 minutes, so when you disconnect the client after 20s and it reconnects, it will now be using two server-side ports, instead of one. Oops.
As for your comment on the deleted answer ("a connection without data send and receive i think it gonna waste your threads"), that points closer to your real problem: the number of connections your server has should have no relation to how many threads the server uses to service those connections. So the only thing an open connection "wastes" is a bit of memory (depending on the amount of state you need per connection, plus the socket cost with its buffers) and a TCP port. This can be an issue in some applications, but if you ever get to that level of "load", you can probably congratulate yourself already. You will much more likely run out of other resources before getting anywhere close to the port limits (an assumption based on the fact that it sounds like you're making an MMO game). If you really do run into those issues, you probably want to drop TCP anyway and rewrite everything in UDP (or preferably, some ready solution on top of UDP).
The Client-Server model describes how a client should connect to a server and perform requests.
What I would recommend is to connect to the server, and when you finish retrieving all the data you need, close the socket (on the client side).
The server will eventually find the socket's resources released, but you can check for the socket's Connected property to release the resources sooner.
When the client disconnects from the server, the server can get a disconnect event. It looks like:
socket.on('disconnect', function () {
// Disconnect event handling
});
On the client side you can also detect the disconnect event, in which case you need to reconnect to the server.
I have a client-server communication between a mobile and a PC(server).
In the communication I have four sockets: two of these are to send and receive data, and the other two are for some kind of keep-alive, since I need to detect disconnections as fast as I can.
As long as the connection is OK, the data will travel without any problem. But I want to establish some priority, in order to be sure that the keep-alive channel (remember: two sockets) is always sending data unless the connection between server and client is dead.
How can I achieve this?
Thanks for any help.
I would question your setup with four sockets.
First, having a separate connection for discovering when the remote end dies does not give you any advantage, but in fact introduces a race condition where that "keep-alive" connection goes down while the "data" connection is still intact. Implement periodic heartbeats over the same data connection when there's no activity.
Then, two independent data connections between the same nodes compete for bandwidth. Network stacks usually don't optimize across connection boundaries, so you get twice the TCP overhead for no gain. Implement data exchange over the same TCP connection; you'll get better throughput (maybe at the expense of a small latency increase, but only a good measurement would tell).
Last, but not least, four connections require four listening TCP ports, thus potentially four holes in a firewall somewhere. Reduce that to a single port, and administrator of that firewall will forever be your friend.
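A minimal sketch of such an in-band heartbeat, assuming a hypothetical length-prefixed framing where an empty message counts as a heartbeat (the framing, interval, and outbox queue are all made up):

import struct, time

HEARTBEAT_INTERVAL = 2.0   # send a heartbeat after this much idle time

def send_msg(sock, payload):
    """4-byte length prefix, then payload; an empty payload is a heartbeat."""
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def sender_loop(sock, outbox):
    last_sent = time.time()
    while True:
        if outbox:
            send_msg(sock, outbox.pop(0))   # real data resets the idle clock
            last_sent = time.time()
        elif time.time() - last_sent >= HEARTBEAT_INTERVAL:
            send_msg(sock, b"")             # heartbeat rides the data connection
            last_sent = time.time()
        else:
            time.sleep(0.05)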
When using TCP for transmission, your TCP protocol stack will inform you whenever you try to send data over a broken (TCP) connection. If you control both server and client code, you may well implement your heartbeat in between your data transmissions over TCP.
If TCP's connection failure detection on the respective devices is too slow for your purpose, you can implement a single-packet ping-pong scheme between client and server, like the ICMP echo request a.k.a. "ping"; or if ICMP is not an option, maybe sending UDP packets back and forth will do the trick.
In any case you will need some kind of timeout mechanism (which is already implemented in the TCP stack), which implies that the detection of a broken connection will be delayed, with the delay bounded by the timeout duration.
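A rough sketch of such a ping-pong probe over UDP (the message bytes and timeout are made up, and the peer is assumed to echo a reply):

import socket, time

def udp_ping(addr, timeout=0.5):
    """Return the round-trip time in seconds, or None if no reply arrived."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        t0 = time.time()
        sock.sendto(b"PING", addr)
        sock.recvfrom(16)              # peer is assumed to echo something back
        return time.time() - t0
    except socket.timeout:
        return None                    # treat as a (possibly) broken link
    finally:
        sock.close()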
I have a client server situation where the client opens a TCP socket to the server, and sometimes long periods of time will pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client, and it seems to be successful, but the client never receives it, and after a few minutes, it looks like the client then gets disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: To note, this is with peers on the same computer. The computer is behind a NAT that forwards a range of ports to it. The client connects to the server via DNS, i.e. it uses mydomain.net and the port to connect.
On Windows, sockets with no data sent are a big source for trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can be set system-wide (otherwise the default is a useless two hours), or per socket with the later Winsock API (WSAIoctl with SIO_KEEPALIVE_VALS).
Therefore, many applications send an occasional byte of data every now and then (to be disregarded by the peer) only to make the network layer declare disconnection after an ACK is not received (after all due retransmissions done by the layer and the ACK timeout).
Answering your question: no, the sockets do not disconnect automatically.
Yet you must be careful with the above issue. What complicates it further is that this behavior is very hard to test. For example, if you set everything correctly and expect to detect disconnection properly, you cannot test it by disconnecting the physical layer: the NIC will sense the carrier loss, and the socket layer will signal to close all application sockets that relied on it. A good way to test it is to connect two computers with three cable legs and two switches in between, then disconnect the middle leg; this prevents carrier loss but still physically disconnects the machines.
There is a timeout built into TCP, and you can adjust it; see SendTimeout and ReceiveTimeout of the Socket class. But I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that router timeout, it will block all incoming traffic (having cleared the forwarding information from its memory, it no longer knows which computer to send the traffic to); also, a new outgoing connection will likely have a different source port, so the server may not recognize it as the same connection.
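For illustration, a rough Python analogue of those timeout knobs (the host is the one mentioned in the question; the port number and 30-second value are arbitrary):

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.settimeout(30.0)   # one deadline covering connect/send/recv calls
try:
    sock.connect(("mydomain.net", 12345))   # port number is hypothetical
    data = sock.recv(4096)   # raises socket.timeout if nothing arrives in time
except socket.timeout:
    print("no traffic within 30s; reconnect or give up")
finally:
    sock.close()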
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnection due to inactivity, but this may generate some extra packets.
This sample code does it under Linux:
#include <stdio.h>
#include <errno.h>
#include <sys/socket.h>

int val = 1;

/* ... after creating the socket s ... */
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)))
    fprintf(stderr, "setsockopt failure : %d", errno);
Regards.
TCP sockets don't automatically close at all; however, TCP connections do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.
I am about to develop a network measurement tool. The objective is to make a tool which can measure the response time between a client and a server machine (from the client side). It is a side-application to a main application: if the main application experiences a response time from the server above a certain threshold, the tool will be kicked alive and will perform network connectivity tests to determine whether the client-server connection is stable (it might be unstable due to the network being wireless, etc.).
The tests I need to perform are not just ping operations, but also transmitting packets of different sizes.
I have however very little experience in communications technology.
Is the ICMP protocol the way to go? And if yes, is it possible to send packets of different sizes (to measure whether the network is able to transfer e.g. 2 MB of data in a reasonable time)?
I have a second concern. What should I look out for with regard to firewalls? It would be a shame to develop an application which works fine on my local network but, as soon as it is used out in real life, fails miserably because the tests are blocked by a firewall.
I hope my questions aren't too noobish, but know that any help is much appreciated.
All the best
/Sagi
To keep clear of firewalls, you should run the test over the same protocol and port your application already uses, and add a new type of message to your application that the server answers as soon as it reads it: you should program your own ping measurements.
The client then measures the round-trip time, computes your ping, and relays it back to the server. This also gives a better reading with some ISPs that give ICMP packets a huge advantage over other traffic in their QoS setup, artificially creating (faking) lower latency. And you would not have to worry about a firewall disallowing your ICMP packets, because your application must already be allowed to connect on the standard port it uses.
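A minimal sketch of such an in-protocol ping over TCP (the message tag and payload size are made up, and the server is assumed to echo the probe back unchanged):

import socket, time

def app_ping(sock, payload_size=32):
    """Measure RTT through the application's own connection and port."""
    probe = b"PING" + bytes(payload_size)   # grow payload_size to test bigger transfers
    t0 = time.time()
    sock.sendall(probe)
    received = 0
    while received < len(probe):            # read until the full echo has returned
        chunk = sock.recv(4096)
        if not chunk:
            raise ConnectionError("server closed the connection")
        received += len(chunk)
    return time.time() - t0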
Also, most games work this way (Half-Life, Age of Empires etc.), and not by sending standard ping packets.