Confused about Sockets with UDP Protocol in C#

I've just started learning Sockets through various Google searches, but I'm having some problems figuring out how to properly use Sockets in C# and I'm in need of some help.
I have a test application (Windows Forms) and in a different class (which is actually in its own .dll, but that's irrelevant) I have all the server/client code for my sockets code.
Question 1)
On my test application, on the server part, the user can click the "start listening" button and the server part of my sockets application should start listening for connections on the specified address and port, so far so good.
However, the application will be blocked and I can't do anything until someone connects to the server. What if no one connects? How should I handle that? I could specify a receive timeout, but what then? It throws an exception; what can I do with that? What I would like is to have some sort of activity on the main application so the user knows the application hasn't frozen and is waiting for connections. But if a connection doesn't come, it should time out and close everything.
Maybe I should use asynchronous calls to the send/receive methods, but they seem confusing and I was not able to make them work; only the synchronous ones work (I'll post my current code below).
Question 2)
Do I need to close anything when a send/receive call times out? As you'll see in my current code, I have a bunch of closes on the socket, but this doesn't feel right somehow. But it also doesn't feel right when an operation times out and I don't close the socket.
To sum up my two questions: I would like an application that doesn't block, so the user knows the server is waiting for a connection (with a little marquee animation, for instance). If a connection is never established after a period of time, I want to close everything that should be closed. When a connection is established, or if it doesn't happen after a period of time, I would like to inform the main application of the result.
Here's some of my code, the rest is similar. The Packet class is a custom class that represents my custom data unit, it's just a bunch of properties based on enums for now, with methods to convert them to bytes and back into properties.
The function that starts to listen for connections is something like this:
public void StartListening(string address, int port) {
    try {
        byte[] bufferBytes = new byte[32];

        if(address.Equals("0.0.0.0")) {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        } else {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }

        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);

        int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);

        if(numBytesReceived == 0) {
            udpSocket.Close();
            return;
        }

        Packet syncPacket = new Packet(bufferBytes);

        if(syncPacket.PacketType != PacketType.Control) {
            udpSocket.Close();
            return;
        }
    } catch {
        if(udpSocket != null) {
            udpSocket.Close();
        }
    }
}
I'm sure I have a bunch of unnecessary code, but I'm new at this and I'm not sure what to do; any help fixing up my code and solving the issues above is really appreciated.
EDIT:
I should probably have stated that my requirements are to use UDP and implement these things myself in the application layer. You can consider this as homework, but I haven't tagged it as such because the code is irrelevant and will not be part of my grade; my problem (my question) is in "how to code", as my Sockets experience is minimal and it's not taught.
However, I must say that I think I solved my problem for now... I was using threading in the demo application, which was giving me some problems; now I'm using it in the protocol connections, which makes more sense, and I can easily change my custom protocol class properties and read those from the demo application.
I have specified a timeout, and a SocketException is thrown when it is reached. Whenever an exception like this is caught, the socket connection is closed. I'm just talking about the connection handshake, nothing more. If no exceptions are caught, the code probably went smoothly and the connection is established.
Please adapt your answers accordingly. Right now it doesn't make sense for me to mark any of them as the accepted answer, hope you understand.

You have got a few things wrong.
First of all, UDP is connection-less. You do not connect or disconnect; all you do is send and receive (and you must specify the destination each time). You should also know that the only thing UDP promises is that a complete message arrives on each read. UDP does not guarantee that your messages arrive in the correct order, or that they arrive at all.
TCP, on the other hand, is connection-based. You connect, send/receive and finally disconnect. TCP is stream-based (while UDP is message-based), which means that you can get half a message in the first read and the other half in the second read. TCP promises you that everything will arrive, and in the correct order (or it will die trying ;). So using TCP means that you should have some kind of logic to know when a complete message has arrived, and a buffer that you use to build up the complete message.
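A tiny sketch of that "buffer until a complete message arrives" idea, using a 4-byte length prefix as the framing. The prefix is an application-level convention invented for this example; TCP itself gives you no message boundaries:

using System;
using System.IO;
using System.Net.Sockets;

// Reads one length-prefixed message from a TCP stream.
static byte[] ReadMessage(NetworkStream stream)
{
    byte[] lengthBytes = ReadExactly(stream, 4);
    int length = BitConverter.ToInt32(lengthBytes, 0);
    return ReadExactly(stream, length);
}

// Keeps calling Read until exactly 'count' bytes have arrived.
static byte[] ReadExactly(NetworkStream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived");
        offset += read;   // Read may return fewer bytes than requested
    }
    return buffer;
}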
The next big question was about blocking. Since you are new at this, I recommend that you use threads to handle the sockets. Put the listener socket in one thread and each connected socket in a separate thread (5 connected clients = 5 threads).
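For example, here is a minimal sketch of wiring that up in the WinForms test application. The names (btnStartListening_Click, statusLabel, server.StartListening) are placeholders, and it assumes StartListening is changed to let its timeout exception propagate instead of swallowing it:

private Thread listenThread;

private void btnStartListening_Click(object sender, EventArgs e)
{
    // run the blocking socket work on a background thread so the UI never blocks
    listenThread = new Thread(ListenLoop) { IsBackground = true };
    listenThread.Start();
    statusLabel.Text = "Waiting for a client...";   // marquee/label keeps updating normally
}

private void ListenLoop()
{
    try
    {
        // the blocking Bind/ReceiveFrom calls live here, off the UI thread
        server.StartListening("0.0.0.0", 12345);
        ReportStatus("Handshake completed");
    }
    catch (SocketException)
    {
        ReportStatus("Timed out waiting for a client");
    }
}

private void ReportStatus(string text)
{
    // marshal back to the UI thread before touching any controls
    Invoke(new MethodInvoker(() => statusLabel.Text = text));
}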
I also recommend that you use TCP, since it's easier to build complete messages than to reorder messages and build a transaction system (which will be needed if you want to make sure that all messages arrive to/from the clients).
Update
You still got UDP wrong. Close doesn't do anything other than cleaning up system resources. You should do something like this instead:
public void MySimpleServer(string address, int port)
{
    try
    {
        byte[] bufferBytes = new byte[32];

        if(address.Equals("0.0.0.0")) {
            udpSocket.Bind(new IPEndPoint(IPAddress.Any, port));
        } else {
            udpSocket.Bind(new IPEndPoint(IPAddress.Parse(address), port));
        }

        remoteEndPoint = new IPEndPoint(IPAddress.Any, 0);

        while (serverCanRun)
        {
            int numBytesReceived = udpSocket.ReceiveFrom(bufferBytes, ref remoteEndPoint);

            // just means that one of the clients closed its socket using Shutdown.
            // doesn't mean that we can't continue to receive.
            if(numBytesReceived == 0)
                continue;

            Packet syncPacket = new Packet(bufferBytes);

            // same here, loop to receive from another client.
            if (syncPacket.PacketType != PacketType.Control)
                continue;

            HandlePacket(syncPacket, remoteEndPoint);
        }
    } catch {
        if(udpSocket != null) {
            udpSocket.Close();
        }
    }
}
See? Since there is no connection, it's just a waste of time to close a UDP socket and start listening with another one. The same socket can receive from ALL UDP clients that know the correct port and address. That's what the remoteEndPoint is for: it tells you which client sent the message.
Update 2
Small update to make a summary of all my comments.
UDP is connectionless. You can never detect whether a connection has been established or disconnected. The Close method on a UDP socket will only free system resources. A call to client.Close() will not notify the server socket (as it would with TCP).
The best way to check if a connection is open is to create a ping/pong style of packet, i.e. the client sends a PING message and the server responds with a PONG. Remember that UDP will not try to resend your messages if they do not arrive. Therefore you need to resend the PING a couple of times before assuming that the server is down (if you do not receive a PONG).
As for clients closing, you need to send your own message to the server telling it that the client is going to stop talking to it. For reliability the same thing applies here: keep resending the BYE message until you receive a reply.
imho it's mandatory that you implement a transactional system for UDP if you want reliability. SIP (google rfc3261) is an example of a protocol which uses transactions over UDP.
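A minimal client-side sketch of that PING/PONG check (the "PING"/"PONG" strings, the timeout and the retry count are assumptions for illustration, not a fixed protocol):

using System.Net;
using System.Net.Sockets;
using System.Text;

// Sends a PING and waits for a PONG, retrying a few times because UDP
// datagrams may be lost. Returns false if the server never answers.
static bool ServerIsAlive(IPEndPoint server)
{
    using (var udp = new UdpClient())
    {
        udp.Client.ReceiveTimeout = 1000;           // 1 s per attempt
        byte[] ping = Encoding.ASCII.GetBytes("PING");
        for (int attempt = 0; attempt < 3; attempt++)
        {
            udp.Send(ping, ping.Length, server);    // may be dropped, hence the retries
            try
            {
                var remote = new IPEndPoint(IPAddress.Any, 0);
                byte[] reply = udp.Receive(ref remote);
                if (Encoding.ASCII.GetString(reply) == "PONG")
                    return true;
            }
            catch (SocketException)
            {
                // timed out, try again
            }
        }
        return false;                               // no PONG after 3 tries
    }
}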

From your description I feel you should use TCP sockets instead of UDP. The difference is
TCP - You wait for a connection at a particular IP:Port. Some user can connect to it, and until the socket is closed you can communicate by sending and receiving information. This is like calling someone on the phone.
UDP - You wait for a message at some IP:Port. A user who wants to communicate just sends a message through UDP. You will receive the message through UDP. The order of delivery is not guaranteed. This is more like sending snail mail to someone. There is no dedicated communication channel established.
Now coming to your problem
Server
Create a Socket with TCP family.
Either create a thread and accept the connection in that thread, or use the BeginAccept APIs of Socket.
In the main thread you can still display the ticker or whatever you want to do.
Client
Connect to the server.
Communicate by sending and receiving data.
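A minimal sketch of the BeginAccept approach (listenerSocket, the port and OnClientAccepted are placeholder names, not from the question):

using System;
using System.Net;
using System.Net.Sockets;

private Socket listenerSocket;

private void StartAccepting()
{
    listenerSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    listenerSocket.Bind(new IPEndPoint(IPAddress.Any, 12345));
    listenerSocket.Listen(10);

    // returns immediately; the callback runs on a thread-pool thread,
    // so the UI thread is free to show a ticker/marquee
    listenerSocket.BeginAccept(OnClientAccepted, listenerSocket);
}

private void OnClientAccepted(IAsyncResult ar)
{
    var listener = (Socket)ar.AsyncState;
    Socket client = listener.EndAccept(ar);

    // keep accepting further clients
    listener.BeginAccept(OnClientAccepted, listener);

    // hand 'client' off to whatever reads/writes the data
}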

Related

HTTP request over GPRS and TCP dropping packets

I'm writing a server for a biometric fingerprint device that connects via GPRS. The server receives GET and POST requests from the device and then will perform the required actions.
With the POST requests, the device should attach some additional data to the request.
The problem is, when I connect the device to the server via LAN, all the data comes through fine. When I connect via GPRS, the request body doesn't get picked up by my server.
On the left is when I connect via LAN: the body of the message is attached. On the right is via GPRS: everything remains the same, however, there is no body.
I ran Wireshark over the LAN and the GPRS connections. The packets, when I drill down, all have the body attached, but on Wireshark, over GPRS, I get messages like the above, with out-of-order warnings and RST, ACK and sometimes PSH, ACK flags.
Contrasted with the LAN packets, which have none of these problems.
This is the code I'm using to read from the TcpListener:
try
{
    if (tcp == null)
    {
        this.tcp = new TcpListener(IPAddress.Parse(serverIP), port);
    }
    this.tcp.Start();
    listening = true;

    while (listening)
    {
        Socket mySocket = null;

        // Blocks until a client has connected to the server
        try
        {
            mySocket = this.tcp.AcceptSocket();
            Thread.Sleep(500);

            byte[] bReceive = new byte[1024 * 1024 * 2];
            mySocket.Receive(bReceive);

            Analysis(bReceive, mySocket);
        }
        catch(Exception ex)
        {
            MessageBox.Show(ex.Message);
        }
    }

    this.tcp.Stop();
This is the original code I got from their developer. I've tried various combinations of async, TcpClient and different socket options such as KeepAlive and DontLinger, but none seem to cure this problem.
Other than manually capturing the packets in C# to get the body, are there any C# classes I can use to read the entire request?
TCP is a stream-oriented protocol. Everybody knows that, but a lot of developers do not take it into account when they implement a TCP receiver.
When a sender calls Send("ABCDEFG") and the client calls Receive(buffer), the buffer may contain "ABCDEFG" or "ABCD" or "A" or whatever substring of the original data that begins with "A". TCP is a stream of data without any information about message boundaries.
A receiver that needs to receive a message whose length is unknown at compile time (like an HTTP request) must contain logic that receives the header, parses it, and then waits until the complete message has been received.
But you don't need to implement it yourself. .NET has the HttpListener class that already contains this logic. Moreover, there are libraries with REST support. It is reinventing the wheel to implement a REST server starting from TcpListener and raw sockets.
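If you do stay at the raw socket level, the receive loop has to keep reading until the whole body indicated by Content-Length has arrived. A rough sketch of that idea (assuming ASCII headers, a Content-Length header, and no chunked transfer encoding; purely illustrative):

using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Text.RegularExpressions;

// Reads an HTTP request from a NetworkStream until the full body has arrived.
static string ReadHttpRequest(NetworkStream stream)
{
    var data = new MemoryStream();
    var buffer = new byte[8192];
    int headerEnd = -1;

    // 1. Read until the blank line that terminates the headers
    while (headerEnd < 0)
    {
        int n = stream.Read(buffer, 0, buffer.Length);
        if (n == 0) break;                          // connection closed
        data.Write(buffer, 0, n);
        string soFar = Encoding.ASCII.GetString(data.ToArray());
        headerEnd = soFar.IndexOf("\r\n\r\n");
    }

    string text = Encoding.ASCII.GetString(data.ToArray());
    if (headerEnd < 0) return text;

    // 2. Parse Content-Length and keep reading until the body is complete
    int contentLength = 0;
    var match = Regex.Match(text, @"Content-Length:\s*(\d+)", RegexOptions.IgnoreCase);
    if (match.Success) contentLength = int.Parse(match.Groups[1].Value);

    int bodyBytes = (int)data.Length - (headerEnd + 4);
    while (bodyBytes < contentLength)
    {
        int n = stream.Read(buffer, 0, buffer.Length);
        if (n == 0) break;
        data.Write(buffer, 0, n);
        bodyBytes += n;
    }
    return Encoding.ASCII.GetString(data.ToArray());
}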
I ended up implementing the server as async using the code in the link below. This is so I could implement the logic to reread the stream if the end of the stream hasn't been reached. I just had to change the end of file condition for my circumstances.
Here's a link to the article:
https://learn.microsoft.com/en-us/dotnet/framework/network-programming/asynchronous-server-socket-example

Does the TcpClient Write method guarantee the data is delivered to the server?

I have a separate thread on both client and server that are reading/writing data to/from a socket.
I am using the synchronous TcpClient (as suggested in the documentation):
https://msdn.microsoft.com/cs-cz/library/system.net.sockets.tcpclient%28v=vs.110%29.aspx
When the connection is closed, .Read()/.Write() throw an exception. Does that mean that if the .Write() method does not throw, the data was delivered correctly to the other party, or do I need to implement custom ACK logic?
I read the documentation for both the Socket and TcpClient classes and neither of them describes this case.
All that a returning send() call (or any wrapper you use, like Socket or TcpClient) means on a streaming, blocking internet socket is that the bytes have been placed in the sending machine's buffer.
MSDN Socket.Send():
A successful completion of the Send method means that the underlying system has had room to buffer your data for a network send.
And:
The successful completion of a send does not indicate that the data was successfully delivered.
For .NET, the underlying implementation is WinSock2, documentation: send():
The successful completion of a send function does not indicate that the data was successfully delivered and received to the recipient. This function only indicates the data was successfully sent.
A call to send() returning does not mean the data was successfully delivered to the other side and read by the consuming application.
When data is not acknowledged in time, or when the other party sends a RST, the Socket (or whichever wrapper) will end up in a faulted state, making the next send() or recv() fail.
So in order to answer your question:
Does it mean that when .Write() method does not throw the data were delivered
correctly to the other party or do I need to implement custom ACK logic?
No, it doesn't, and yes, you should - if it's important to your application that it knows another party has read that particular message.
This would for example be the case if a server-sent message indicates a state change of some sort on the client, which the client must apply to remain in sync. If the client doesn't acknowledge that message, the server cannot know for certain that the client has an up-to-date state.
In that case you could alter your protocol so that certain messages have a required response which the receiver must return. Do note that implementing an application protocol is surprisingly easy to do wrong. If you're inclined, you could implement the various protocol-dictated message flows using a state machine, for both the server and the client.
Of course there are other solutions to that problem, such as giving each state a unique identifier, which is verified with the server before attempting any operation involving that state, triggering the retry of the earlier failed synchronization.
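As a very rough sketch of such a required-response exchange (the message framing and the "echo the message id back" convention are invented for illustration, not part of any standard API):

using System.IO;
using System.Net.Sockets;
using System.Text;

// Sends one framed message and blocks until the peer echoes its id back,
// or the read times out. Frame: [int32 messageId][int32 length][payload].
static bool SendWithAck(NetworkStream stream, int messageId, byte[] payload, int timeoutMs)
{
    stream.ReadTimeout = timeoutMs;

    using (var writer = new BinaryWriter(stream, Encoding.UTF8, leaveOpen: true))
    {
        writer.Write(messageId);
        writer.Write(payload.Length);
        writer.Write(payload);
    }

    try
    {
        using (var reader = new BinaryReader(stream, Encoding.UTF8, leaveOpen: true))
        {
            int ackedId = reader.ReadInt32();      // blocks until the peer replies or times out
            return ackedId == messageId;           // peer echoes the id it processed
        }
    }
    catch (IOException)
    {
        return false;                              // timed out or connection failed
    }
}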
See also How to check the capacity of a TCP send buffer to ensure data delivery, Finding out if a message over tcp was delivered, C socket: does send wait for recv to end?
@CodeCaster's answer is correct and highlights the .NET documentation specifying the behavior of .Write(). Here is some complete test code to prove that he is right and that the other answers saying things like "TCP guarantees message delivery" are unambiguously wrong:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;

namespace TestEvents
{
    class Program
    {
        static void Main(string[] args)
        {
            // Server: Start listening for incoming connections
            const int PORT = 4411;
            var listener = new TcpListener(IPAddress.Any, PORT);
            listener.Start();

            // Client: Connect to listener
            var client = new TcpClient();
            client.Connect(IPAddress.Loopback, PORT);

            // Server: Accept incoming connection from client
            TcpClient server = listener.AcceptTcpClient();

            // Server: Send a message back to client to prove we're connected
            const string msg = "We are now connected";
            NetworkStream serverStream = server.GetStream();
            serverStream.Write(ASCIIEncoding.ASCII.GetBytes(msg), 0, msg.Length);

            // Client: Receive message from server to prove we're connected
            var buffer = new byte[1024];
            NetworkStream clientStream = client.GetStream();
            int n = clientStream.Read(buffer, 0, buffer.Length);
            Console.WriteLine("Received message from server: " + ASCIIEncoding.ASCII.GetString(buffer, 0, n));

            // Client: Close connection and wait a little to make sure we won't ACK any more of server's messages
            Console.WriteLine("Client is closing connection");
            clientStream.Dispose();
            client.Close();
            Thread.Sleep(5000);
            Console.WriteLine("Client has closed his end of the connection");

            // Server: Send a message to client that client could not possibly receive
            serverStream.Write(ASCIIEncoding.ASCII.GetBytes(msg), 0, msg.Length);
            Console.WriteLine(".Write has completed on the server side even though the client will never receive the message. server.Client.Connected=" + server.Client.Connected);

            // Let the user see the results
            Console.ReadKey();
        }
    }
}
The thing to note is that execution proceeds normally all the way through the program and serverStream has no indication that the second .Write was not successful. This is despite the fact there is no way that second message can ever be delivered to its recipient. For a more detailed look at what's going on, you can replace IPAddress.Loopback with a longer route to your computer (like, have your router route port 4411 to your development computer and use the externally-visible IP address of your modem) and monitor that port in Wireshark. Here's what the output looks like:
Port 51380 is a randomly-chosen port representing the client TcpClient in the code above. There are double packets because this setup uses NAT on my router. So, the first SYN packet is my computer -> my external IP. The second SYN packet is my router -> my computer. The first PSH packet is the first serverStream.Write. The second PSH packet is the second serverStream.Write.
One might claim that the client does ACK at the TCP level with the RST packet, but 1) this is irrelevant to the use of TcpClient since that would mean TcpClient is ACKing with a closed connection and 2) consider what happens when the connection is completely disabled in the next paragraph.
If I comment out the lines that dispose the stream and close the client, and instead disconnect from my wireless network during the Thread.Sleep, the console prints the same output and I get this from Wireshark:
Basically, .Write returns without Exception even though no PSH packet was even dispatched, let alone had received an ACK.
If I repeat the process above but disable my wireless card instead of just disconnecting, THEN the second .Write throws an Exception.
Bottom line, @CodeCaster's answer is unambiguously correct on all levels, and more than one of the other answers here is incorrect.
TcpClient uses the TCP protocol, which itself guarantees data delivery. If the data is not delivered, you will get an exception. If no exception is thrown, the data has been delivered.
Please see the description of the TCP protocol here: http://en.wikipedia.org/wiki/Transmission_Control_Protocol
Every time data is sent, the sending computer waits for the acknowledgement packet to arrive, and if it does not arrive, it will retry the send until it is either successful, times out, or a permanent network failure has been detected (for example, a cable disconnect). In the latter two cases an exception will be thrown.
Therefore, TCP offers guaranteed data delivery in the sense that you always know whether the destination received your data or not.
So, to answer your question, you do NOT need to implement custom ACK logic when using TcpClient, as it would be redundant.
A guarantee is never possible. What you can know is that the data has left your computer for delivery to the other side, in the order it was sent.
If you want a very reliable system, you should implement acknowledgement logic yourself, based on your needs.
I agree with Denis.
From the documentation (and my experience): this method will block until all bytes have been written, or throw an exception on error (such as a disconnect). If the method returns, you are guaranteed that the bytes were delivered and read by the other side at the TCP level.
Vojtech - I think you missed the documentation since you need to look at the Stream you're using.
See: MSDN NetworkStream.Write method, in the remarks section:
The Write method blocks until the requested number of bytes is sent or a SocketException is thrown.
Notes
Assuring that the message was actually read properly by the listening application is another issue and the framework cannot guarantee this.
In async methods (such as BeginWrite or WriteAsync) it's a different ballgame since the method returns immediately and the mechanism to assure completion is different (EndWrite or task completion paired with a code).

Self-healing SslStream

I'm writing a service that needs to maintain a long-running SSL connection to a remote server. I need this connection to be self-healing, that is, if it's disconnected for any reason then the next time it's written to it will reconnect. I've tried this:
bool isConnected = client.Connected && client.Client.Poll(0, SelectMode.SelectWrite) && stream.CanWrite;
if (!isConnected)
{
    this.connected = false;
    GetConnection();
}
stream.Write(bytes, 0, bytes.Length);
stream.Flush();
But I find it doesn't act as I would expect. If I simulate a network outage by disabling my wifi, I'm still able to write to the stream with stream.Write() for approximately 20 seconds. Then the next time I try to write to it, none of client.Connected, client.Client.Poll(), or stream.CanWrite return false, but when I go to write to the stream I get a socket exception. Finally, if I try to recreate the connection, I get this exception: An existing connection was forcibly closed by the remote host.
I would appreciate any help creating a long-running SslStream that can withstand network failure. Thanks!
From a 10,000-foot point of view:
The reason you can still write to the stream after shutting down your wifi is that there is a network buffer holding the data for transmission. stream.Write/stream.Flush succeeding means the network interface (TCP/IP stack) has accepted the data and it has been buffered for transmission, not that the data has reached its target.
It takes time for the TCP/IP stack to notice a full media disconnection (connection lost/reset), because even if there is no physical link, TCP/IP sees this as a temporary issue in the network and keeps retrying for a while (the network could be dropping packets at some point and the stack will keep retrying).
If you think about this the other way around, you wouldn't want all your programs to fail whenever there is a network hiccup (this happens often enough on the internet), so TCP/IP takes its time to notify the application layer that the connection has become invalid (after retrying several times and waiting a reasonable amount of time).
You can always reconnect to the server when the SslStream fails and continue sending data, although you will find it is not as easy as that, because there are several scenarios where you send and the data is not received by the server, and others where the server receives the data but you never receive any ACK from the server at all... So depending on your needs, self-healing alone may not be enough.
Self-healing is simple to implement; data consistency and reliability are harder, and usually require the server to support some kind of reliable messaging mechanism to ensure all data has been sent and received.
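For the self-healing part alone, a rough sketch could be a write wrapper that reconnects and retries once on failure (Reconnect() stands in for whatever re-creates your TcpClient and SslStream; the single-retry policy is only illustrative and says nothing about what the server actually received):

// Rough sketch of a "reconnect and retry once" write wrapper.
private void SelfHealingWrite(byte[] bytes)
{
    try
    {
        sslStream.Write(bytes, 0, bytes.Length);
        sslStream.Flush();
    }
    catch (IOException)          // the write failed, the connection is gone
    {
        Reconnect();             // new TCP connection + full SSL handshake
        sslStream.Write(bytes, 0, bytes.Length);   // retry once; may still fail
        sslStream.Flush();
    }
}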
The underlying protocol for SSL is TCP. TCP will usually only send data if the application wants it to deliver data, or if it needs to reply to data received from the other side by sending an ACK. This means that a broken connection, like a lost link, will not be noticed until you try to send any data. And even then you will not notice immediately, because:
A write to the socket will only deliver the data to the OS kernel and return success if this delivery was successful.
The kernel will then try to deliver the data to the peer and will wait for the ACK from the other side.
If it does not get any ACK, it will retry delivering the data, and only after several unsuccessful retries will the kernel declare the connection broken.
Only after the connection is marked broken by the kernel will the next write or read return the error from kernel to user space, like returning EPIPE when doing a write.
This means, if you want to know up front if the connection is still alive, you have to make sure that you get a regular data exchange on the connection. At the TCP level you might set TCP_KEEPALIVE, but this might use an interval of some hours between keep-alive packets. At the SSL layer you might try to use the infamous heartbeat extension, but most peers will not understand it. The last choice is to implement some kind of heartbeat in your own application.
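For completeness, enabling TCP keep-alive from .NET on Windows looks roughly like this; the 12-byte blob is the Winsock tcp_keepalive structure and the intervals below are only illustrative:

using System;
using System.Net.Sockets;

// Enable TCP keep-alive on a Socket (Windows-specific).
// Layout: on/off flag, idle time before the first probe (ms), interval between probes (ms).
static void EnableKeepAlive(Socket socket, uint idleMs, uint intervalMs)
{
    byte[] values = new byte[12];
    BitConverter.GetBytes(1u).CopyTo(values, 0);           // enable
    BitConverter.GetBytes(idleMs).CopyTo(values, 4);
    BitConverter.GetBytes(intervalMs).CopyTo(values, 8);
    socket.IOControl(IOControlCode.KeepAliveValues, values, null);
}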
As for the self-healing: when reconnecting you get a new TCP connection, and you also need to do a full SSL handshake, because the last SSL connection was not cleanly closed and thus cannot be resumed. The server has no idea that this new connection is just a continuation of the old one, so you have to implement some kind of meta-connection spanning multiple TCP connections inside your application layer, on both client and server. Inside this meta-connection you need your own data tracking to detect which data was really accepted by the peer and which was only sent but never explicitly accepted because the connection broke. Sounds like a kind of TCP on top of TCP.

Weird behavior of async tcp socket

I have an async server socket listening on some port. Then from another PC, I connect to the server and send 1 byte. Everything works fine, but there's a strange behavior. When I pull out the network cable and try to send 1 byte (before the OS realizes the cable was pulled out), I don't get any exception/error and, as expected, the server doesn't receive that packet. Is this how sockets are supposed to work? Does this mean that in case of connection loss some packets can be lost (because I don't get an exception and don't know that the request was not sent)?
Here's the code:
private void button3_Click(object sender, EventArgs e)
{
    var b = new byte[1] { 1 };
    client.BeginSend(b, 0, b.Length, 0, new AsyncCallback(SendCallback), client);
}

private void SendCallback(IAsyncResult ar)
{
    Socket client = (Socket)ar.AsyncState;
    int bytesSent = client.EndSend(ar);
    this.Invoke(new MethodInvoker(() => { MessageBox.Show(bytesSent.ToString() + " bytes sent"); }));
}
How could the sender possibly tell the packet was not received? It sent it out into a black hole and waits for a reply. As long as there is no reply, it cannot know whether the packet was received or never will be.
This is usually solved with timeouts. Eventually, the TCP stack will declare the connection dead, or your subsequent reads time out.
Sending does not guarantee delivery at all.
Have the other side send you a confirmation. Your confirmation read will time out eventually.
Or, Shutdown(Send) the socket. This ensures delivery and will throw an exception (after a timeout). You should Shutdown(Both) a socket anyway before closing it to make sure you get notified of all errors.
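In code, that graceful close from the suggestion above is just:

// Shut down both directions first so errors are reported, then release the socket.
socket.Shutdown(SocketShutdown.Both);
socket.Close();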
Windows at the socket layer maintains a socket buffer in the kernel. A successful send at the application layer simply means the data was copied into the kernel buffer. The data in this buffer is pushed by the TCP stack to the remote application. In Windows, if a packet is dropped, TCP resends the data 3 times, after which the TCP stack notifies the application that the connection is closed. The interval between retries is decided by the RTT: the first retry is after 1*RTT, the second after 2*RTT and the third after 3*RTT. In your case, the 1 byte you sent was simply copied into the kernel buffer and success was indicated. It will take about 3*RTT to indicate that the socket is closed, and this connection close is only notified if you invoke a socket API or are monitoring the socket for a close event. After pulling the cable, if you queue a second send after roughly 3*RTT, the send should throw the exception. Another way to get an indication of send failure immediately is to set the send socket buffer size to zero (SetSocketOption(.., SendBuffer, ..)) so that the TCP stack uses your buffer directly and indicates a failure immediately.
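That zero-send-buffer suggestion is a one-liner (whether the throughput trade-off is acceptable depends on your application):

// Per the answer above: with a zero-length send buffer the TCP stack uses your
// buffer directly, so a send failure is indicated on the call itself rather than
// being masked by kernel buffering (at the cost of throughput).
socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.SendBuffer, 0);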
I assume you created a TCP socket.
In that case, your client will not be notified in case of disconnection (except if the peer application sent some kind of logout message).
Looking at the Connected property of the TcpClient :
The Connected property gets the connection state of the Client socket
as of the last I/O operation. When it returns false, the Client socket
was either never connected, or is no longer connected.
"As of the last I/O" operation : only an (unsuccessful) Read from the Client will help you to detect the disconnection. Many applications implement "pings" to detect some disconnections.
Yes, this is the way TCP sockets are supposed to work. No, this does not mean that the packet is necessarily lost.
What is happening under the covers is this. When you call BeginSend, the byte you are sending is passed to the operating system's TCP stack. The TCP stack then sends the data in a packet.
When the packet is not acknowledged in a reasonable length of time, the TCP stack automatically resends the packet. This happens repeatedly as long as no acknowledgement packet is received.
If you plug the cable back in, one of these resends will get through, and the server will belatedly see the data. This is the desired behavior. Remember that TCP was designed to transmit military data during the cold war; the idea was that even if part of the network got nuked, the routing system would eventually adjust to find another path to the recipient, and the data would eventually get through.
If you don't plug the cable back in, most TCP stacks will eventually give up, terminating the connection. This takes several minutes, however.
More information on TCP retransmission timeout can be found in RFC 1122 here:
https://www.rfc-editor.org/rfc/rfc1122#page-95

How to tell when a Socket has been disconnected

On the client side I need to know when/if my socket connection has been broken. However, the Socket.Connected property always returns true, even after the server side has disconnected and I've tried sending data through it. Can anyone help me figure out what's going on here? I need to know when a socket has been disconnected.
Socket serverSocket = null;
TcpListener listener = new TcpListener(1530);
listener.Start();
listener.BeginAcceptSocket(new AsyncCallback(delegate(IAsyncResult result)
{
    Debug.WriteLine("ACCEPTING SOCKET CONNECTION");
    TcpListener currentListener = (TcpListener)result.AsyncState;
    serverSocket = currentListener.EndAcceptSocket(result);
}), listener);

Socket clientSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);

Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be FALSE, and it is
clientSocket.Connect("localhost", 1530);
Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be TRUE, and it is

Thread.Sleep(1000);
serverSocket.Close(); // closing the server socket here
Thread.Sleep(1000);

clientSocket.Send(new byte[0]); // sending data should cause the socket to update its Connected property.
Debug.WriteLine("client socket connected: " + clientSocket.Connected); // should be FALSE, but its always TRUE
After doing some testing, it appears that the documentation for Socket.Connected is wrong, or at least misleading. clientSocket.Connected will only become false after clientSocket.Close() is called. I think this is a throwback to the original C Berkeley sockets API and its terminology. A socket is bound when it has a local address associated with it, and a socket is connected when it has a remote address associated with it. Even though the remote side has closed the connection, the local socket still has the association and so it is still "connected".
However, here is a method that does work:
!(socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0)
It relies on the fact that a closed connection will be marked as readable even though no data is available.
If you want to detect conditions such as broken network cables or computers abruptly being turned off, the situation is a bit more complex. Under those conditions, your computer never receives a packet indicating that the socket has closed. It needs to detect that the remote side has vanished by sending packets and noticing that no response comes back. You can do this at the application level as part of your protocol, or you can use the TCP KeepAlive option. Using TCP Keep Alive from .NET isn't particularly easy; you're probably better off building a keep-alive mechanism into your protocol (alternately, you could ask a separate question for "How do I enable TCP Keep Alive in .NET and set the keep alive interval?").
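Wrapped up as a small helper (the method name is just for illustration), the check from above looks like this:

using System.Net.Sockets;

// Returns false once the remote side has closed the connection gracefully.
// It cannot detect pulled cables or crashed hosts; see the keep-alive discussion above.
static bool IsConnected(Socket socket)
{
    try
    {
        return !(socket.Poll(0, SelectMode.SelectRead) && socket.Available == 0);
    }
    catch (SocketException)
    {
        return false;
    }
}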
Just write to your socket as normal. You'll know when it's disconnected by the Exception that says your data couldn't be delivered.
If you don't have anything to write...then who cares if it's disconnected? It may be disconnected now, but come back before you need it - why bother tearing it down, and then looping a reconnect until the link is repaired...especially when you didn't have anything to say anyway?
If it bothers you, implement a keep alive in your protocol. Then you'll have something to say every 30 seconds or so.
Maybe a solution is to send some dummy data through it and check if it times out?
I recommend stripping out the higher-level language stuff and exploring what happens at the lower-level IO.
The lowest level I've explored was while writing isectd (find it on SourceForge). Using the select() system call, a descriptor for a closed socket becomes read-ready, and when isectd then attempts the recv(), the socket's disconnected state can be confirmed.
As a solution, I recommend not writing your own socket IO and use someone else's middleware. There are lots of good candidates out there. Don't forget to consider simple queuing services as well.
PS. I would have provided URLs to all the above but my reputation (1) doesn't allow it.
Does the clientSocket.Send() method wait for the packet to either be ACK'd or NACK'd?
If not, your code is flying on to the next line while the socket is still trying to figure out what is going on.
