I'm looking to enhance a WPF app that currently receives and broadcasts UDP datagrams within its own network by adding a bridge feature that shares packets with instances of the app on other subnets.
Essentially, I want one instance to connect to another and then receive broadcast UDP packets for as long as the connection is maintained. For that, I figure a TCP connection will work, at least for the handshake and periodic acks. However, I do not need TCP's overhead for the data itself, which could be transmitted multiple times per second for hours or longer (however long the two instances stay connected).
Given these requirements, does it make sense to use a hybrid where TCP handles the handshaking and acks but the data is sent via UDP (I assume the TCP connection can be configured to stay open indefinitely)? Or would the additional overhead of sending the datagrams as TCP payloads be negligible? Or should I implement syn/ack functionality within the UDP transmission itself? Is there an established standard for this sort of connection?
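For concreteness, the sender side of the hybrid I have in mind would look roughly like this (a sketch only; the port numbers and payload are made up for illustration):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;

class HybridBridgeSender
{
    // Hypothetical port numbers, for illustration only.
    const int ControlPort = 9000; // TCP: handshake and periodic acks
    const int DataPort    = 9001; // UDP: the high-rate data itself

    static async Task RunAsync(IPAddress peer, CancellationToken ct)
    {
        // Long-lived TCP control connection. Keep-alive makes the OS probe
        // the peer periodically, so a dead peer is detected even while the
        // control channel is otherwise idle.
        using var control = new TcpClient();
        await control.ConnectAsync(peer, ControlPort);
        control.Client.SetSocketOption(SocketOptionLevel.Socket,
                                       SocketOptionName.KeepAlive, true);

        using var data = new UdpClient();
        data.Connect(new IPEndPoint(peer, DataPort));

        while (!ct.IsCancellationRequested)
        {
            // Fire-and-forget datagrams; occasional loss is acceptable here.
            byte[] payload = Encoding.UTF8.GetBytes("sample packet");
            await data.SendAsync(payload, payload.Length);
            await Task.Delay(100, ct); // several packets per second
        }
    }
}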
I have created my own WebSocket class that extends the abstract System.Net.WebSockets.WebSocket class. My class uses TcpListener and TcpClient to communicate. After this server receives an HTTP GET request asking to upgrade to a WebSocket connection, I am able to complete the handshake successfully and communicate with a WebSocket client.
Separately, I have a simple HTTP server that sends and receives HTTP requests using HttpListener and HttpClient.
Now I want to combine them.
I would like this HTTP server, upon receiving a WebSocket upgrade request, to transfer the "connection" to my WebSocket server to handle. However, I am struggling to conceptually understand what a TCP "connection" is.
I know that I can create a TcpClient from an existing Socket, but I am unsure how to retrieve the HttpListener's underlying socket (maybe it can't be exposed?). And for that matter, I am unsure what would happen if I tried to have a TcpClient and an HttpListener sharing the same socket.
So how do I construct a TcpListener from an existing HttpListener?
However, I am struggling to conceptually understand what a TCP "connection" is.
RFC 793, Transmission Control Protocol (and the subsequent RFCs that update it) is the standard for TCP. It explains what a TCP connection is, and later goes into more detail:
Multiplexing:
To allow for many processes within a single Host to use TCP
communication facilities simultaneously, the TCP provides a set of
addresses or ports within each host. Concatenated with the network and
host addresses from the internet communication layer, this forms a
socket. A pair of sockets uniquely identifies each connection.
That is, a socket may be simultaneously used in multiple connections.
The binding of ports to processes is handled independently by each
Host. However, it proves useful to attach frequently used processes
(e.g., a "logger" or timesharing service) to fixed sockets which are
made known to the public. These services can then be accessed through
the known addresses. Establishing and learning the port addresses of
other processes may involve more dynamic mechanisms.
Connections:
The reliability and flow control mechanisms described above require
that TCPs initialize and maintain certain status information for each
data stream. The combination of this information, including
sockets, sequence numbers, and window sizes, is called a
connection. Each connection is uniquely specified by a pair of
sockets identifying its two sides.
When two processes wish to communicate, their TCP's must first
establish a connection (initialize the status information on each
side). When their communication is complete, the connection is
terminated or closed to free the resources for other uses.
Since connections must be established between unreliable hosts and
over the unreliable internet communication system, a handshake
mechanism with clock-based sequence numbers is used to avoid erroneous
initialization of connections.
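In .NET terms, that status information lives in the kernel; the Socket that a listener's Accept returns is just a handle to one such connection, identified by its (local, remote) endpoint pair. A minimal sketch (the port is arbitrary):

using System;
using System.Net;
using System.Net.Sockets;

class ConnectionDemo
{
    static void Main()
    {
        // One listening socket; every accepted connection shares its local port.
        var listener = new TcpListener(IPAddress.Loopback, 8080);
        listener.Start();

        while (true)
        {
            // Each accepted Socket *is* a connection: the unique pair of
            // sockets (endpoints) the RFC describes.
            Socket conn = listener.AcceptSocket();
            Console.WriteLine($"{conn.LocalEndPoint} <-> {conn.RemoteEndPoint}");
            // Handing the connection to other code just means handing over
            // this Socket object, e.g. via new NetworkStream(conn, true).
        }
    }
}

As for your concrete question: HttpListener does not expose its underlying socket, so you cannot build a TcpListener from it. If HttpListener otherwise fits your needs, HttpListenerContext.AcceptWebSocketAsync is the supported route for WebSocket upgrades; otherwise, keep your TcpListener as the single entry point and route each request after reading its headers.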
I had an argument with a colleague over which to choose.
We have two processes running on the same machine.
Named pipes and UDP sockets are both kernel objects, so as far as I understand the overhead is about the same.
The advantage of UDP is that if tomorrow we separate those two processes onto two different computers, I won't have to change anything.
I think named pipe performance is better, since there is no need to use a network card to send information on the same machine (am I right that sending to localhost will use the network card?).
Can anyone advise us, please?
Thanks
Before implementing, consider the following points:
Named pipes:
Named pipes provide interprocess communication between a pipe server and one or more pipe clients.
They support message-based communication and allow multiple clients to connect simultaneously to the server process using the same pipe name.
Named pipes also support impersonation, which enables connecting processes to use their own permissions on remote servers.
User Datagram Protocol:
User Datagram Protocol (UDP) is a simple protocol that makes a best effort to deliver data to a remote host.
UDP is a connectionless protocol, so UDP datagrams sent to the remote endpoint are not guaranteed to arrive, nor are they guaranteed to arrive in the sequence in which they are sent.
Applications that use UDP must be prepared to handle missing, duplicate, and out-of-sequence datagrams.
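To make the comparison concrete, here is a minimal sketch of each option (the pipe name and UDP port are arbitrary). Note also that traffic sent to localhost is handled by the loopback interface inside the OS network stack and never reaches the network card, so the NIC is not a factor on a single machine:

using System;
using System.IO;
using System.IO.Pipes;
using System.Net;
using System.Net.Sockets;
using System.Text;

class IpcSketch
{
    // Named pipe: connection-oriented and reliable.
    static void PipeServer()
    {
        using var server = new NamedPipeServerStream("demo-pipe");
        server.WaitForConnection();
        using var reader = new StreamReader(server);
        Console.WriteLine(reader.ReadLine());
    }

    static void PipeClient()
    {
        using var client = new NamedPipeClientStream(".", "demo-pipe");
        client.Connect();
        using var writer = new StreamWriter(client) { AutoFlush = true };
        writer.WriteLine("hello over the pipe");
    }

    // UDP: connectionless and unreliable, but it works unchanged across
    // machines once IPAddress.Loopback is replaced with the remote host.
    static void UdpReceiver()
    {
        using var udp = new UdpClient(9100);
        var remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] data = udp.Receive(ref remote);
        Console.WriteLine(Encoding.UTF8.GetString(data));
    }

    static void UdpSender()
    {
        using var udp = new UdpClient();
        byte[] data = Encoding.UTF8.GetBytes("hello over udp");
        udp.Send(data, data.Length, new IPEndPoint(IPAddress.Loopback, 9100));
    }
}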
I am designing a client-server app in C#.
The client connects and communicates with the server through a TCP socket.
On the server side I am using the Socket.Accept() method to handle new connections from clients. When a client connects, the server uses a random port to communicate with the client.
So my question is: how many clients can the server handle in this kind of setup?
Is there another approach I should use in order to handle lots of clients?
This is practically limited by the OS. You have to test this. On Windows you must use fully asynchronous socket IO at this scale. You will probably be limited by memory usage.
On the TCP level there is no practical limit. There can be one connection for each combination of (server port, server IP, client port, client IP). So with one server port and one server IP you can serve an essentially unlimited number of clients, as long as each individual client opens fewer than ~65k connections.
You do not need to pick a random port on the server. This is a common misconception.
On the server side I am using the Socket.Accept() method to handle new connections from clients. When a client connects, the server uses a random port to communicate with the client.
Not unless you open another, pointless, connection from server to client, and you won't be doing that for firewall reasons. The accepted socket uses the same local port number as the listening socket. Contrary to several answers and comments here.
Your question is therefore founded on a misconception. Whatever you run out of, and it could be memory, thread handles, socket handles, socket buffer space, CPUs, CPU power, virtual memory, disk space, ..., it won't be TCP ports.
EDIT Adherents of the new-random-port theory need to explain the following netstat output:
TCP 127.0.0.4:8009 0.0.0.0:0 LISTENING
TCP 127.0.0.4:8009 127.0.0.1:53777 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53793 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53794 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53795 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53796 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53798 ESTABLISHED
TCP 127.0.0.4:8009 127.0.0.1:53935 ESTABLISHED
and show where in RFC 793 it says anything about allocating a new port to an accepted socket, and where in the TCP connect-handshake exchange the new port number is conveyed.
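The same thing can be observed programmatically; a minimal C# sketch (the port is arbitrary):

using System;
using System.Net;
using System.Net.Sockets;

class SamePortDemo
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8009);
        listener.Start();

        // Connect two clients to ourselves.
        using var a = new TcpClient("127.0.0.1", 8009);
        using var b = new TcpClient("127.0.0.1", 8009);

        // Both accepted sockets report local port 8009, matching the netstat
        // output above: the random port is on the client side of each connection.
        for (int i = 0; i < 2; i++)
        {
            Socket s = listener.AcceptSocket();
            Console.WriteLine($"local {s.LocalEndPoint}  remote {s.RemoteEndPoint}");
        }
    }
}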
You may like to see this question I asked in a similar vein: https://softwareengineering.stackexchange.com/questions/234672/is-there-are-problem-holding-large-numbers-of-open-socket-connections-for-length, and particularly some of the comments.
The answer seems to be that there is no practical limit. Each connection is identified by the combination (client address, client port, server address, server port), and each port can take about 64K values, so the number of combinations is extremely large. There really are servers out there with extremely large numbers of open connections, but to get there you have to solve a number of other interesting problems. The question above contains a link to an article about a million-connection server. See also How to retain one million simultaneous TCP connections?. And do a web search for the C10K problem.
What you probably cannot do is use synchronous sockets with a thread per connection, because you run into thread limits, not port limits. You have to use asynchronous socket IO with a thread pool. And you will have to write one to throw away, just to find out how to do it.
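As a starting point, a minimal asynchronous accept loop in C# might look like the following (a sketch only; a real server needs error handling, backpressure, and connection limits):

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AsyncEchoServer
{
    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000); // arbitrary port
        listener.Start();

        while (true)
        {
            // No thread is blocked per connection; the thread pool
            // multiplexes many clients over a few threads.
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client); // fire-and-forget per connection
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        using (client)
        {
            NetworkStream stream = client.GetStream();
            var buffer = new byte[4096];
            int n;
            while ((n = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                await stream.WriteAsync(buffer, 0, n); // echo back
        }
    }
}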
I have a client-server communication between a mobile and a PC(server).
In the communication I have four sockets: two of these are to send and receive data, and the other two are for some kind of keep-alive, since I need to detect disconnections as fast as I can.
As long as the connection is OK, the data travels without any problem. But I want to establish some priority, to be sure that the keep-alive channel (remember: two sockets) is always sending data unless the connection between server and client is dead.
How can I achieve this?
Thanks for any help.
I would question your setup with four sockets.
First, having a separate connection for discovering when the remote end dies does not give you any advantage; in fact it introduces a race condition where the "keep-alive" connection goes down but the "data" connection is still intact. Implement periodic heartbeats over the same data connection when there's no activity, as sketched below.
Then, two independent data connections between the same nodes compete for bandwidth. Network stacks usually don't optimize across connection boundaries, so you pay the TCP overhead twice for no gain. Implement the data exchange over the same TCP connection; you'll get better throughput (maybe at the expense of a small latency increase, but only a good measurement would tell).
Last, but not least, four connections require four listening TCP ports, and thus potentially four holes in a firewall somewhere. Reduce that to a single port, and the administrator of that firewall will forever be your friend.
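To make the heartbeat suggestion concrete, here is a minimal sketch over a single TCP stream. The one-byte message tags and the intervals are invented for illustration, and a real implementation needs proper message framing:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

class Heartbeat
{
    // Hypothetical one-byte message tags, invented for this sketch.
    const byte Ping = 0x02, Pong = 0x03;

    static async Task RunAsync(Stream stream, CancellationToken ct)
    {
        TimeSpan idle = TimeSpan.FromSeconds(5);       // ping after this much silence
        TimeSpan deadAfter = TimeSpan.FromSeconds(15); // give up after this much

        DateTime lastReceived = DateTime.UtcNow;

        // Reader: any inbound byte proves the peer is alive.
        // NB: this assumes one tagged message per read, i.e. no real framing.
        _ = Task.Run(async () =>
        {
            var buf = new byte[4096];
            while (await stream.ReadAsync(buf, 0, buf.Length, ct) > 0)
            {
                lastReceived = DateTime.UtcNow;
                if (buf[0] == Ping) // answer pings so the peer sees us alive
                    await stream.WriteAsync(new[] { Pong }, 0, 1, ct);
            }
        }, ct);

        // Pinger: only speaks up when the line has gone quiet.
        while (!ct.IsCancellationRequested)
        {
            await Task.Delay(idle, ct);
            if (DateTime.UtcNow - lastReceived > deadAfter)
                throw new TimeoutException("peer considered dead");
            await stream.WriteAsync(new[] { Ping }, 0, 1, ct);
        }
    }
}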
When using TCP for transmission, your TCP protocol stack will inform you whenever you try to send data and the (TCP) connection is broken. If you control both server and client code, you may well implement your heartbeat in between your data transmissions over TCP.
If TCP's connection-failure detection on the respective devices is too slow for your purposes, you can implement a single-packet ping-pong scheme between client and server, like the ICMP echo request a.k.a. "ping" - or, if ICMP is not an option, sending UDP packets back and forth may do the trick.
In any case you will need some kind of timeout mechanism (one is already implemented in the TCP stack), which implies that the detection of a broken connection will be delayed, with the delay bounded by the timeout duration.
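A UDP ping-pong along those lines could be as small as this sketch (the port and timeout are arbitrary, and the peer is assumed to echo the probe byte back):

using System;
using System.Net;
using System.Net.Sockets;

class UdpPing
{
    // Returns true if the peer echoed our probe within the timeout.
    static bool ProbePeer(IPEndPoint peer)
    {
        using var udp = new UdpClient();
        udp.Client.ReceiveTimeout = 2000; // ms; tune to your latency budget
        udp.Send(new byte[] { 0x01 }, 1, peer);
        try
        {
            var from = new IPEndPoint(IPAddress.Any, 0);
            udp.Receive(ref from); // peer is expected to echo the byte back
            return true;
        }
        catch (SocketException) // timeout: peer (or path) considered down
        {
            return false;
        }
    }
}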
I have a client-server situation where the client opens a TCP socket to the server, and sometimes long periods pass with no data being sent between them. I have run into an issue where the server tries to send data to the client and seems to succeed, but the client never receives it, and after a few minutes the client appears to get disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: Note that both peers are on the same computer. The computer is behind a NAT that forwards a range of ports to it. The client opens the connection via DNS, i.e. it uses mydomain.net plus the port to connect.
On Windows, sockets over which no data is sent for long periods are a big source of trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can be set system-wide (otherwise it defaults to a uselessly long two hours) or per connection with the later Winsock API.
Therefore, many applications send an occasional byte of data every now and then (to be disregarded by the peer), purely so that the network layer will declare a disconnection once the ACK is not received (after all due retransmissions done by the layer and the ACK timeout).
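In .NET, keep-alive can be enabled per socket, and on .NET Core 3.0 and later the probe timing can also be overridden per socket rather than system-wide; a sketch, assuming that runtime (the values are arbitrary):

using System.Net.Sockets;

static class KeepAliveConfig
{
    public static void Enable(Socket socket)
    {
        // Turn on TCP keep-alive probes for this socket.
        socket.SetSocketOption(SocketOptionLevel.Socket,
                               SocketOptionName.KeepAlive, true);

        // .NET Core 3.0+ only: override the two-hour default per socket.
        socket.SetSocketOption(SocketOptionLevel.Tcp,
                               SocketOptionName.TcpKeepAliveTime, 30);      // idle seconds before first probe
        socket.SetSocketOption(SocketOptionLevel.Tcp,
                               SocketOptionName.TcpKeepAliveInterval, 5);   // seconds between probes
        socket.SetSocketOption(SocketOptionLevel.Tcp,
                               SocketOptionName.TcpKeepAliveRetryCount, 3); // failed probes before the connection is declared dead
    }
}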
Answering your question: no, the sockets do not disconnect automatically.
Yet you must be careful with the above issue. What complicates it further is that this behavior is very hard to test. For example, if you set everything correctly and expect to detect disconnection properly, you cannot test it by unplugging the physical layer: the NIC will sense the carrier loss, and the socket layer will signal all application sockets relying on it to close. A good way to test it is to connect the two computers through two switches (three cable segments in total) and unplug the middle segment, which physically disconnects the machines while preventing any carrier loss.
There is a timeout built into TCP, and you can adjust it; see SendTimeout and ReceiveTimeout on the Socket class. But I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that timeout, the router will block all incoming traffic (having cleared the forwarding information from its memory, it no longer knows which computer to send the traffic to), and the outgoing connection will likely get a different source port afterwards, so the server may not recognize it as the same connection.
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnection due to inactivity, though this may generate some extra packets.
This sample code does it under Linux:
int val = 1;
....
/* After creating socket s, enable periodic TCP keep-alive probes on it.
   Requires <sys/socket.h> for setsockopt and <errno.h> for errno. */
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)) != 0)
    fprintf(stderr, "setsockopt failure: %d\n", errno);
Regards.
TCP sockets don't automatically close at all; TCP connections, however, do get dropped. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.