I am trying to poll a connection from a client connected to my server.
When you use poll you need to give it a socket connection, but on the server's side the socket is bound to its own IP address and a specific port. Making another socket to connect on the same port but with the client's IP address won't work, since you can't have multiple connections on the same socket.
I am just wondering what a good way would be to constantly check whether a client is still connected to the server, and to detect when it disconnects?
I was thinking some sort of timeout check or something. I just wanted to know if there was any generic or proper way of achieving this.
I have tried Socket.Poll but it does not seem to achieve what I want.
To restate my question, how do you check if a client is connected on the server side using TCP sockets in C#?
socket.Receive will just return 0 when the client has disconnected gracefully.
From MSDN
If you are using a connection-oriented Socket, the Receive method will read as much data as is available, up to the size of the buffer. If the remote host shuts down the Socket connection with the Shutdown method, and all available data has been received, the Receive method will complete immediately and return zero bytes.
There is also the Connected property on the Socket class if you need it.
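For example, a minimal sketch of that check (assuming clientSocket is the Socket you got back from Accept() for that client; the names are illustrative, not from the question's code):

// requires: using System.Net.Sockets;
byte[] buffer = new byte[4096];
int received = clientSocket.Receive(buffer);
if (received == 0)
{
    // The client shut the connection down gracefully.
    clientSocket.Close();
}
else
{
    // 'received' bytes of application data are in 'buffer'.
}

Note that an abrupt disconnect (crash, cable pull) surfaces as a SocketException on Receive/Send rather than a zero return.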
There are two kinds of sockets: for listening and for connections. If a client has connected, you must have an instance of Socket that is associated with that connection.
Use this socket to periodically send data and receive an acknowledgement back. This is the only way to make 100% sure that the connection is still open.
In particular, the Connected property cannot be used to detect a lost connection, because if the network cable is unplugged neither of the two parties is notified in any way. The property cannot be accurate in principle; it only reflects the state of the connection as of the most recent operation.
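A rough sketch of such a periodic check, assuming connection is the Socket for that client and that the peer sends something back for every "PING" (the message format and the 5-second timeout are made up for illustration):

// requires: using System.Net.Sockets; using System.Text;
bool IsStillConnected(Socket connection)
{
    try
    {
        connection.ReceiveTimeout = 5000;                    // wait at most 5 s for the reply
        connection.Send(Encoding.ASCII.GetBytes("PING"));
        byte[] ack = new byte[16];
        return connection.Receive(ack) > 0;                  // 0 bytes means an orderly shutdown
    }
    catch (SocketException)
    {
        return false;                                        // send/receive failed: treat as disconnected
    }
}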
Is it possible to have one socket router that would pass incoming sockets to socket servers?
I need this because I want to have multiple servers for handling sockets but only one port for clients to connect to, so if one of the servers goes down, the router would send the connection to another healthy socket server.
Is this even possible, and how? Or is there any other solution for my problem?
Yes, this is definitely possible, and it is widely used.
Basically you need to have one TCP server running on the one port you want clients to connect to. Then you can listen for connections. When a connection is established you can reroute packets to whichever other server you choose.
Listen on a port with your main server
Accept connections as they come in.
When a connection is opened, read the data in another thread. Take that data and send it to whichever other server you want.
Basically, it is a proxy.
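A bare-bones sketch of such a proxy in C# (the backend host and port are placeholders and there is no error handling; a production proxy also needs health checks and a policy for picking a backend):

// requires: using System.Net; using System.Net.Sockets; using System.Threading;
var listener = new TcpListener(IPAddress.Any, 8080);   // the one public port clients connect to
listener.Start();

while (true)
{
    TcpClient client = listener.AcceptTcpClient();
    new Thread(() =>
    {
        using (client)
        using (var backend = new TcpClient("backend1.internal", 9000))   // placeholder backend server
        {
            // Relay bytes in both directions until one side closes.
            var up = new Thread(() => client.GetStream().CopyTo(backend.GetStream()));
            up.Start();
            backend.GetStream().CopyTo(client.GetStream());
            up.Join();
        }
    }).Start();
}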
Preface:
I have an asynchronous socket server where we receive telemetry data, and when the remote devices do not send us data, we have the ability to send commands to request data. The listener and command processing are done on separate threads. The listener listens on one port, while the commands send on a different port.
My overall question is: Is it possible with C# to check if a socket is connected without having to call a "connect" method in the first place? Our customers' devices will establish a connection to the server and will remain connected at all times (unless service coverage drops, the battery drains, etc.). I'd like to avoid having to keep track of all the connected socket objects in memory if possible.
To be honest I'm not even sure if what I'm asking is feasible. I'd like to hear people's thoughts.
If you know the socket information, you could probably invoke GetExtendedTcpTable and get the state of the socket ("established" or not).
For an example of pinvoking this function, see:
http://social.msdn.microsoft.com/Forums/en-US/netfxnetcom/thread/5b8eccd3-4db5-44a1-be29-534ed34a588d/
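If all you need is the connection state and not the owning process, a managed route that reads the same information without manual P/Invoke is IPGlobalProperties; a small sketch, with placeholder endpoint values:

// requires: using System.Linq; using System.Net; using System.Net.NetworkInformation;
var remote = new IPEndPoint(IPAddress.Parse("203.0.113.10"), 5000);   // placeholder client endpoint

bool established = IPGlobalProperties.GetIPGlobalProperties()
    .GetActiveTcpConnections()
    .Any(c => c.RemoteEndPoint.Equals(remote) && c.State == TcpState.Established);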
I've no idea how I would go about this but I'm assuming that it is possible in some way, shape or form.
If I have a server that allows multiple connections to it through one port, is there a way I can keep some sort of log of the connections, so that I could choose a certain connection to send a message to?
And if this is possible, is it also possible to do the same with connections through different ports?
How would I go about this? I'm fairly new to C# so not very experienced - any help is greatly appreciated!
Basically I want 3 clients to connect to a server. The clients will all send a message to the server, and the server will wait for a message from each client before replying to them, in the order in which the messages were sent.
I hope this makes more sense now.
If you are using TCP/IP, this is very much possible - the Socket that listens for incoming connections only does that - it does not handle the communication with each individual socket. Instead, the Accept() and BeginAccept() methods return a new Socket instance for each client that connects.
So the Socket instance you get when a client connects only receives messages from that client, and sending a message on that socket sends it to only that client.
Keeping track of which connection sent what - and which came first - will be more of a challenge, but definitely possible.
If you are using UDP, though, things are a bit different: you would need to use a custom means of identifying each client.
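For the TCP case, a rough sketch of keeping that "log" of connections so you can later pick one client to send to (keying by remote endpoint is just one option; the endpoint string below is a placeholder):

// requires: using System.Collections.Generic; using System.Net.Sockets; using System.Text;
var clients = new Dictionary<string, Socket>();

listenerSocket.Listen(10);                // listenerSocket is your bound listening Socket
while (true)                              // normally runs on its own thread
{
    Socket client = listenerSocket.Accept();              // one Socket per connected client
    clients[client.RemoteEndPoint.ToString()] = client;
}

// Elsewhere, send to one specific client by its key:
// clients["192.0.2.7:51234"].Send(Encoding.ASCII.GetBytes("hello"));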
I have a client server situation where the client opens a TCP socket to the server, and sometimes long periods of time will pass with no data being sent between them. I have encountered an issue where the server tries to send data to the client, and it seems to be successful, but the client never receives it, and after a few minutes, it looks like the client then gets disconnected.
Do I need to send some kind of keep alive packet every once in a while?
Edit: To note, this is with peers on the same computer. The computer is behind a NAT that forwards the range of ports used to this computer. The client that connects to the server opens the connection via DNS, i.e. it uses mydomain.net and the port to connect.
On Windows, sockets with no data sent are a big source of trouble in many applications and must be handled correctly.
The problem is that SO_KEEPALIVE's period can only be set system-wide (otherwise the default is a useless two hours) or per socket with the later Winsock API.
Therefore, many applications send an occasional byte of data every now and then (to be disregarded by the peer), only so that the network layer declares the connection dead once the ACK is not received (after all due retransmissions by the layer and the ACK timeout).
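For reference, this is roughly how the per-socket keep-alive time can be shortened from C# on Windows, via IOControlCode.KeepAliveValues (the 30-second / 1-second values are arbitrary examples):

// requires: using System; using System.Net.Sockets;
// Byte layout: on/off flag, idle time before first probe (ms), interval between probes (ms).
byte[] keepAlive = new byte[12];
BitConverter.GetBytes((uint)1).CopyTo(keepAlive, 0);       // enable keep-alive
BitConverter.GetBytes((uint)30000).CopyTo(keepAlive, 4);   // 30 s of idle time before probing
BitConverter.GetBytes((uint)1000).CopyTo(keepAlive, 8);    // 1 s between unanswered probes
socket.IOControl(IOControlCode.KeepAliveValues, keepAlive, null);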
Answering your question: no, the sockets do not disconnect automatically.
Yet, you must be careful with the above issue. What complicates it further is that testing this behavior is very hard. For example, if you set everything correctly and you expect to detect disconnection properly, you cannot test it by disconnecting the physical layer. This is because the NIC will sense the carrier loss and the socket layer will signal to close all application sockets that relied on it. A good way to test it is to connect two computers with three legs and two switches in between, then disconnect the middle leg, thus preventing carrier loss but still physically disconnecting the machines.
There is a timeout built in to TCP, and you can adjust it; see the SendTimeout and ReceiveTimeout properties of the Socket class, but I have a suspicion that is not your problem. A NAT router may also have an expiration time for TCP connections before it removes them from its port-forwarding table. If no traffic passes within that timeout, the router will block all incoming traffic (it has cleared the forwarding information from its memory, so it does not know which computer to send the traffic to), and the next outgoing connection will likely have a different source port, so the server may not recognize it as the same connection.
It's safer to use the keep-alive option (SO_KEEPALIVE under Linux) to prevent disconnection due to inactivity, but this may generate some extra packets.
This sample code does it under Linux:
int val = 1;
....
// After creating the socket
if (setsockopt(s, SOL_SOCKET, SO_KEEPALIVE, (char *)&val, sizeof(val)))
fprintf(stderr, "setsockopt failure : %d", errno);
Regards.
TCP sockets don't automatically close at all. However, TCP connections do. But if this is happening between peers on the same computer, the connection should never be dropped as long as both peers exist and have their sockets open.
I am supposed to connect to an external server using UDP sockets in C#.
I could not understand these two lines in the server's usage notes:
"Use of dedicated sockets is enforced."
and
"If the server looses UDP connectivity with the client, it will ..."
I thought that UDP sockets were connectionless!
So what does "looses connectivity" mean, and how do I avoid it?
Is there a known way to ensure "dedicated sockets"?
Thanks
"Use of dedicated sockets is
enforced."
To me this says, create one unique socket for each connection and use it throughout that connection.
EDIT: Just to expand on this, from the server's point of view.
UDP sockets are not identified by the remote address, but only by the local address, although each message has an associated remote address. (source).
That way the server can distinguish which client each message came from. Because the remote address is made up of an IP address and port combination, you should use the same socket throughout your communication with the server; if you don't, it's possible you would be assigned a different port the next time you change the underlying socket.
"If the server looses UDP connectivity
with the client, it will ..."
It is possible to lose UDP connectivity, e.g. when either of the endpoints in the connection is lost; say I go to the server and pull the plug.
EDIT2:
Dan Bryant makes an excellent point in the comments, which ties in with what I was saying above.
One thing worth noting is that it's possible for a call to a UDP socket to throw a SocketException with SocketError.ConnectionReset as the error code. UDP does not have any sort of session with structured connect/disconnect, but it does use a dynamically-assigned remote port to allow replies, which is a kind of 'connection'.
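A tiny sketch of handling that case (udpSocket is assumed to be the UDP socket you are already using):

// requires: using System.Net.Sockets;
byte[] buffer = new byte[1024];
try
{
    int n = udpSocket.Receive(buffer);        // Send can surface the same error
}
catch (SocketException ex)
{
    if (ex.SocketErrorCode == SocketError.ConnectionReset)
    {
        // A previous datagram was rejected (ICMP port unreachable); the "connection"
        // to that remote port is gone, but the socket itself is still usable.
    }
}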
After 2 hours of trying different (maybe random) solutions:
The server wants you to introduce yourself on a port other than the one you will use to actually send data; that is the "dedicated sockets" part.
You know which IP and port you are sending the start info to, but you do not know which will be used for the actual data transmission.
Solution
1- You create your socket with a known IPEndPoint, and send the 'start message' on it.
2- Then wait to receive from Any IP...
3- The server will respond with a 'welcome message', stating the endpoint it will use (reported through the ref remoteEP parameter of Socket.ReceiveFrom()).
4- You must then change the port you are sending on to the remote endpoint's port + 1 (why? is this a standard convention or something?).
5- At last you can send and receive normally using these ports (a rough code sketch of these steps follows).
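A rough sketch of those steps (the server address, ports, and message contents are placeholders; the real values come from the server's documentation):

// requires: using System.Net; using System.Net.Sockets; using System.Text;
var sock = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
var introEndPoint = new IPEndPoint(IPAddress.Parse("198.51.100.20"), 4000);   // known "intro" endpoint

// 1- Send the start message to the known endpoint.
sock.SendTo(Encoding.ASCII.GetBytes("START"), introEndPoint);

// 2/3- Wait for the welcome message; ReceiveFrom fills remoteEP with the endpoint the server replied from.
byte[] buf = new byte[1024];
EndPoint remoteEP = new IPEndPoint(IPAddress.Any, 0);
int len = sock.ReceiveFrom(buf, ref remoteEP);

// 4- Switch to the data port: the port the server replied from, plus one.
var serverEP = (IPEndPoint)remoteEP;
var dataEndPoint = new IPEndPoint(serverEP.Address, serverEP.Port + 1);

// 5- Normal send/receive from here on.
sock.SendTo(Encoding.ASCII.GetBytes("DATA"), dataEndPoint);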