C# NetworkStream won't throw exception

I am writing a simple C# server and client, but it doesn't throw an exception when the client is disconnected. It will continue reading from the client, thinking that the client is still there. It also no longer blocks when the client is gone. I expect it to throw an exception if the client is no longer available.
private TcpListener server;
private NetworkStream stream;
private TcpClient client;
private byte[] buffer = new byte[1];

server = new TcpListener(serverIp, _portNumber);
server.Start();
client = server.AcceptTcpClient();  // accept the incoming client connection
stream = client.GetStream();

// The part I want to throw an exception when the client is gone, but doesn't.
try
{
    stream.Read(buffer, 0, 1);
}
catch (Exception e)
{
#if (DEBUG)
    Debug.Log("Failed Reading " + e.Message);
#endif
    return 0;
}
Any help would be appreciated.

If the client goes away without notifying the server, there is nothing you can do except time out. The server has no way of finding out that the client is gone.
You need to live with this fact: set a timeout.
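A minimal sketch of that, using the client, stream and buffer from the question (the 5-second value is arbitrary):
// Either of these makes a blocking Read fail with an IOException
// (wrapping a timed-out SocketException) instead of waiting forever.
client.ReceiveTimeout = 5000;   // milliseconds, on the underlying socket
stream.ReadTimeout = 5000;      // milliseconds, on the NetworkStream

try
{
    int read = stream.Read(buffer, 0, 1);
    if (read == 0)
    {
        // 0 bytes means the client closed the connection gracefully.
    }
}
catch (IOException)
{
    // Timed out or the connection broke: treat the client as gone.
}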

It should throw an exception when the connection is closed and there is no more data in the buffer to read. Apparently there is still data available; once that buffered data is drained and the connection is closed, you should get an exception.
Also, I see you read data one byte at a time. Why not first check whether the connection is alive:
client.Client.Poll(0, SelectMode.SelectRead)
then check whether there is data available and read the correct amount:
int available = client.Client.Available;
if (available > 0)
{
    var buffer = new byte[available];
    stream.Read(buffer, 0, available);
}
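Combining the two gives a common (though not bulletproof) liveness check; a sketch based on the same client and stream as the question:
// If the socket is reported readable but no data is available,
// the remote side has closed (or reset) the connection.
bool readable = client.Client.Poll(0, SelectMode.SelectRead);
if (readable && client.Client.Available == 0)
{
    // Peer is gone: clean up our side.
    client.Close();
}
else if (client.Client.Available > 0)
{
    var buffer = new byte[client.Client.Available];
    stream.Read(buffer, 0, buffer.Length);
}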

Related

Why does my C# TcpClient fail to receive content from the server if the server sends before reading everything from the client?

In an application I'm working on I want to disconnect clients that are trying to send me packets that are too large.
Just before disconnecting them I want to send them a message informing them about the reason for disconnecting them.
The issue I am running into is that the client cannot receive this server message if the server does not first read everything the client has sent. I do not understand why this is happening.
I've managed to narrow it down to a very small test setup where the problem is demonstrated.
The StreamUtil class is a simple wrapper class that helps to get around the TCP message boundary problem, basically on the sender side it sends the size of each message first and then the message itself, and on the receiver side it receives the size of the message first and then the message.
The client uses a ReadKey command to simulate some time between sending and receiving, since in my real application these two actions are not immediately back to back either.
Here is a test case that works:
Run server as shown below
Run the client as shown below; it will show a "Press key" message. WAIT, do not press a key yet
Turn off the server, since everything is already in the client's receive buffer anyway (I validated this using a packet sniffer)
Press key on the client -> client correctly shows the messages from the server.
This is what I was expecting; so far so good, no problem yet.
Now in the server code, comment out the 2nd receive call and repeat the steps above.
Steps 1 and 2 complete successfully; no errors sending from client to server.
On step 3, however, the client crashes on the read from the server, EVEN though the server reply HAS arrived at the client (again validated with a packet sniffer).
If I do a partial shutdown (e.g. socket.Shutdown(SocketShutdown.Send)) without closing the socket on the server, everything works.
1: I just cannot get my head around WHY not processing the line of text from the client on the server causes the client to fail on receiving the text sent back from the server.
2: If I send content from server to client but STOP the server before actually closing the socket, this content never arrives, but the bytes have already been transmitted to the server side... (see ReadKey in server to simulate, basically I block there and then just quit the server)
If anyone could shed light on these two issues, I'd deeply appreciate it.
Client:
class TcpClientDemo
{
    public static void Main (string[] args)
    {
        Console.WriteLine ("Starting....");
        TcpClient client = new TcpClient();
        try
        {
            client.Connect("localhost", 56789);
            NetworkStream stream = client.GetStream();
            StreamUtil.SendString(stream, "Client teststring...");
            Console.WriteLine("Press key to initiate receive...");
            Console.ReadKey();
            Console.WriteLine("server reply:" + StreamUtil.ReceiveString(stream));
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
        finally
        {
            client.Close();
        }
        Console.WriteLine("Client ended");
        Console.ReadKey(true);
    }
}
Server:
class TcpServerDemo
{
    public static void Main (string[] args)
    {
        TcpListener listener = new TcpListener (IPAddress.Any, 56789);
        listener.Start ();
        Console.WriteLine ("Waiting for clients to serve...");
        while (true)
        {
            TcpClient client = null;
            NetworkStream stream = null;
            try
            {
                client = listener.AcceptTcpClient();
                stream = client.GetStream();
                //question 1: Why does commenting this line prevent the client from receiving the server reply??
                Console.WriteLine("client string:" + StreamUtil.ReceiveString(stream));
                StreamUtil.SendString(stream, "...Server reply goes here...");
                //question 2: If I close the server program without actually calling client.Close (while on this line), the client program crashes as well, why?
                //Console.ReadKey();
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
                break;
            }
            finally
            {
                if (stream != null) stream.Close();
                if (client != null) client.Close();
                Console.WriteLine("Done serving this client, everything closed.");
            }
        }
        listener.Stop();
        Console.WriteLine("Server ended.");
        Console.ReadKey(true);
    }
}
StreamUtil:
public class StreamUtil
{
    public static byte[] ReadBytes (NetworkStream pStream, int byteCount) {
        byte[] bytes = new byte[byteCount];
        int bytesRead = 0;
        int totalBytesRead = 0;
        try {
            while (
                totalBytesRead != byteCount &&
                (bytesRead = pStream.Read (bytes, totalBytesRead, byteCount - totalBytesRead)) > 0
            ) {
                totalBytesRead += bytesRead;
                Console.WriteLine("Read/Total:" + bytesRead + "/" + totalBytesRead);
            }
        } catch (Exception e) {
            Console.WriteLine(e.Message);
        }
        return (totalBytesRead == byteCount) ? bytes : null;
    }

    public static void SendString (NetworkStream pStream, string pMessage) {
        byte[] sendPacket = Encoding.ASCII.GetBytes (pMessage);
        pStream.Write (BitConverter.GetBytes (sendPacket.Length), 0, 4);
        pStream.Write (sendPacket, 0, sendPacket.Length);
    }

    public static string ReceiveString (NetworkStream pStream) {
        int byteCountToRead = BitConverter.ToInt32(ReadBytes (pStream, 4), 0);
        Console.WriteLine("Byte count to read:" + byteCountToRead);
        byte[] receivePacket = ReadBytes (pStream, byteCountToRead);
        return Encoding.ASCII.GetString (receivePacket);
    }
}
The client fails because it detects that the socket was already closed.
If C# socket operations detect a closed connection during an earlier operation, an exception is thrown on the next operation, which can mask data that would otherwise have been received.
The StreamUtil class does a couple of things when the connection is closed before/during a read:
Exceptions from the reads are swallowed
A read of zero bytes isn't treated as the end of the stream
These obfuscate what's happening when an unexpected close hits the client.
Changing ReadBytes so it no longer swallows exceptions and instead throws a mock socket-closed exception when it reads zero bytes (e.g. if (bytesRead == 0) throw new SocketException(10053);) makes the outcome clearer, I think.
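A sketch of ReadBytes with those two changes applied (10053 is WSAECONNABORTED, used here purely as an illustrative stand-in for a real socket error):
public static byte[] ReadBytes (NetworkStream pStream, int byteCount) {
    byte[] bytes = new byte[byteCount];
    int totalBytesRead = 0;
    while (totalBytesRead != byteCount) {
        // Let IOExceptions from the read bubble up instead of swallowing them.
        int bytesRead = pStream.Read (bytes, totalBytesRead, byteCount - totalBytesRead);
        // A zero-byte read means the peer closed the connection; surface it as an error.
        if (bytesRead == 0) throw new SocketException (10053); // WSAECONNABORTED
        totalBytesRead += bytesRead;
    }
    return bytes;
}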
Edit
I missed something subtle in your examples - your first example causes a TCP RST flag to be sent as soon as the server closes the connection, because the socket is closed while data is still waiting to be read.
The RST flag results in a closedown that doesn't preserve pending data.
This blog has some discussion based on a very similar scenario (web server sending a HTTP error).
So I don't think there's an easy fix; the options are:
As you already tried, shut down the socket on the server before closing, to force a FIN to be sent before the RST (see the sketch after this list)
Read the data in question but never process it (taking up bandwidth for no reason)
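A minimal sketch of the first option on the server side, using the client variable from the server code above (note the asker reported that the half close alone, without an immediate Close, is what made the reply arrive):
// Half close: tell the client we are done sending, which emits a FIN.
client.Client.Shutdown(SocketShutdown.Send);
// Either drain whatever the client still has in flight, or delay the Close();
// closing immediately while unread data is buffered can still trigger a RST.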

How do I detect that my attempt to send data via TLS secured socket has failed?

This sample app creates a client-server connection via a TLS secured socket and sends some data over it:
static void Main(string[] args)
{
    try
    {
        var listenerThread = new Thread(ListenerThreadEntry);
        listenerThread.Start();
        Thread.Sleep(TimeSpan.FromSeconds(1));

        var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
        socket.Connect("localhost", Port);
        var rawStream = new NetworkStream(socket);
        using (var sslStream = new SslStream(rawStream, false, VerifyServerCertificate))
        {
            var certificate = new X509Certificate(CertsPath + @"test.cer");
            var certificates = new X509CertificateCollection(new[] { certificate });
            sslStream.AuthenticateAsClient("localhost", certificates, SslProtocols.Tls, false);
            using (var writer = new StreamWriter(sslStream))
            {
                writer.WriteLine("TEST");
                writer.Flush();
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
        socket.Shutdown(SocketShutdown.Both);
        socket.Disconnect(false);
        Console.WriteLine("Success! Well, not really.");
    }
    catch (Exception exc)
    {
        Console.WriteLine(exc);
    }
}

private static bool VerifyServerCertificate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    return true;
}

static void ListenerThreadEntry()
{
    try
    {
        var listener = new TcpListener(IPAddress.Any, Port);
        listener.Start();
        var client = listener.AcceptTcpClient();
        var serverCertificate = new X509Certificate2(CertsPath + @"\test.pfx");
        var sslStream = new SslStream(client.GetStream(), false);
        sslStream.AuthenticateAsServer(serverCertificate, false, SslProtocols.Tls, false);
        client.Close(); // terminate the connection
        using (var reader = new StreamReader(sslStream))
        {
            var line = reader.ReadLine();
            Console.WriteLine("> " + line);
        }
    }
    catch (Exception exc)
    {
        Console.WriteLine(exc);
    }
}
The trick is that the connection is terminated from the server side immediately after handshake. And the problem is that the client side knows nothing about it; I'd expect the client side to raise an exception when it tries to send data over the closed connection, but it doesn't.
So, the question is: how do I detect such cases, when the connection was interrupted and data didn't really reach the server?
It is not possible to know which packets have arrived under the TCP model. TCP is a stream-oriented protocol, not a packet-oriented protocol; that is, it behaves like a bi-directional pipe. If you write 7 bytes, and then write 5 bytes, it's possible the other end will just get 12 bytes all at once. Worse, TCP's reliable delivery only guarantees that if the data arrives, it will do so in the correct order without duplication or rearrangement, and that if the data does not arrive, it will be resent.
If the connection is broken unexpectedly, TCP does not guarantee that you will know exactly what data was lost, nor is it reasonably possible to provide that information. The only thing the client knows is "I never received an acknowledgement for byte number N [and presumably not for the previous n bytes either], despite resending them multiple times." That is not enough information to determine whether byte N (and the other missing bytes) arrived at the server. It's possible that they did arrive and then the connection dropped, before the server could acknowledge them. It is also possible that they did not arrive at all. TCP cannot provide you with this information, because only the server knows it, and you are not connected to the server any longer.
Now, if you close the socket properly, using shutdown(2) or the .NET equivalent, then data in flight will be pushed through if possible, and the other end will error out promptly. Generally, we try to ensure that both sides agree on when to shut down the connection. In HTTP, this is done with the Connection: Close header, in FTP with the QUIT command, and so on. If one side shuts down unexpectedly, it may still cause data to be lost, because shutdown does not normally wait for acknowledgements.
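The practical consequence is that delivery has to be confirmed by the application protocol itself. A minimal sketch of that idea for the sample above, where the "ACK" reply is an assumed protocol detail and not part of the original code:
writer.WriteLine("TEST");
writer.Flush();
// Only an application-level reply proves the server actually processed the data:
// if the server tore the connection down right after the handshake, this read
// throws (or returns null on a clean close) instead of the write silently "succeeding".
var reader = new StreamReader(sslStream);
string ack = reader.ReadLine();
if (ack != "ACK")
    throw new IOException("Server did not acknowledge the data.");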

Reset TCP connection if server closes/crashes mid connection

I have a TCP connection like the following:
public void ConnectToServer()
{
    string mac = GetUID();
    while (true)
    {
        try
        {
            tcpClient = new TcpClient("xx.x.xx.xxx", xxxx);
            networkstream = new SslStream(tcpClient.GetStream());
            networkstream.AuthenticateAsClient("xx.x.xx.xxx");
            networkstream.Write(Encoding.UTF8.GetBytes("0002:" + mac + "\r\n"));
            networkstream.Flush();
            string serverMessage = ReadMessage(networkstream);
            Console.WriteLine("MESSAGE FROM SERVER: " + serverMessage);
        }
        catch (Exception e)
        {
            tcpClient.GetStream().Close();
            tcpClient.Close();
        }
    }
}
This works fine and I can send and receive data to/from the server.
What I need help with: if the server isn't running when the client starts, the client will wait and then connect once the server is up. But if both the client and server are running and everything is working, and I then close the server, the client will not reconnect (because I don't have anything to handle that event yet).
I have seen some answers on here that suggest polling and such. Is that the only way? The ReadMessage method that I call gets into an infinite loop as well. I can post that code if need be.
I would really like to detect when the server closes/crashes, close the stream and the TcpClient, and reconnect ASAP.
Here is my ReadMessage:
static string ReadMessage(SslStream sslStream)
{
    if (sslStream.CanRead)
    {
        byte[] buffer = new byte[2048];
        StringBuilder messageData = new StringBuilder();
        int bytes = -1;
        string message_type = null;
        string actual_message = null;
        do
        {
            try
            {
                Console.WriteLine("LENGTH: " + buffer.Length);
                bytes = sslStream.Read(buffer, 0, buffer.Length);
                Decoder decoder = Encoding.UTF8.GetDecoder();
                char[] chars = new char[decoder.GetCharCount(buffer, 0, bytes)];
                decoder.GetChars(buffer, 0, bytes, chars, 0);
                messageData.Append(chars);
                message_type = messageData.ToString().Substring(0, 5);
                actual_message = messageData.ToString().Substring(5);
                if (message_type.Equals("0001:"))
                {
                    m_Window pop = new m_Window();
                    pop.callHttpPost(null, new EventArgs());
                }
                if (messageData.ToString().IndexOf("\r\n") != -1)
                {
                    break;
                }
            }
            catch (Exception e)
            {
                Console.WriteLine("ERROR: " + e.Message);
            }
        } while (bytes != 0);
        return messageData.ToString();
    }
    return ("CONNECTION HAS BEEN LOST");
}
With TCP you have two kinds of server disconnect:
the server is closed
the server crashes
When the server is closed, you are going to receive 0 bytes on your client socket; this is how you know that the peer has closed its end of the socket, which is called a half close.
But things get uglier if the server crashes.
When that happens you again have several possibilities.
If you don't send anything from the client to the server, you have no way to find out that the server has indeed crashed.
The only way to find out that the server crashed is by letting the client send something or by activating keep-alive. If you send something to a server socket that no longer exists, you will have to wait a rather long time, because TCP retransmits several times while waiting for a server response. Once TCP has retried enough times, it finally bails out, and if you have a blocking socket you will see that the send failed, which means you should close your socket.
Actually there is a third possible server disconnect, a reset, but it is used only in exceptional cases. I assume here that on a graceful server shutdown a normal close is executed on the socket at the server end, which ends up with a FIN being sent instead of a RST, the exceptional case.
Now back to your situation: if the server crashes, it is inherent in the design of TCP, with all those retransmission timeouts and increasing delays, that you will have to wait some time to actually detect that there is a problem. If the server is gracefully closed and started up again, this is not the case; you detect it immediately by receiving 0 bytes.
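A sketch of both detection paths, using the field names from the question (the keep-alive probe intervals can additionally be tuned via Socket.IOControl, which is left out here):
// Graceful close: Read returns 0 once the server has sent its FIN.
int bytes = networkstream.Read(buffer, 0, buffer.Length);
if (bytes == 0)
{
    // Server closed its end: close our side and go back to the reconnect loop.
    tcpClient.Close();
}

// Crash / dead network: enable TCP keep-alive so the OS probes the connection
// and a later Read or Write fails with an IOException instead of hanging forever.
tcpClient.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);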

How to reconnect a TcpClient as soon as the network is unplugged and then plugged back in

I have a TcpClient that I want to reconnect automatically as soon as the network disconnects and then comes back, but I am not sure how to achieve it.
Here is my function:
private void Conn()
{
    try
    {
        client = new TcpClient();
        client.Connect(new IPEndPoint(IPAddress.Parse(ip), intport));
        // Tell the thread to sleep for 1 sec.
        Thread.Sleep(1000);
    }
    catch (Exception ex)
    {
        // Log the error here.
        client.Close();
    }
    try
    {
        using (NetworkStream stream = client.GetStream())
        {
            byte[] notify = Encoding.ASCII.GetBytes("Hello");
            stream.Write(notify, 0, notify.Length);
        }
        byte[] data = new byte[1024];
        while (true)
        {
            {
                int numBytesRead = stream.Read(data, 0, data.Length);
                if (numBytesRead > 0)
                {
                    data = Encoding.ASCII.GetString(data, 0, numBytesRead);
                }
            }
        }
    }
    catch (Exception ex) { }
}
Also, how reliable is while (true) for receiving continuous data from the TCP/IP machine? In my testing this code stops responding or receiving data after a while.
Please help me get uninterrupted data.
Thanks.
You are immediately disposing of the NetworkStream after you have written something. This closes the socket. Don't do that. Rather, put the TcpClient in a using statement.
The way you read data is exactly right. The loop will exit when Read returns 0, which indicates a graceful shutdown of the connection by the remote side. If this is unexpected, the problem lies with the remote side.
Catch SocketException only and examine its error code property to find out the exact error.
It is not possible to reliably detect network errors. You have to wait for an exception to notice a connection failure. After that, you need to periodically try establishing a connection again to find out when the network becomes available again.
I believe Windows provides some network-interface-level events to detect unplugged cables, but those are unreliable.
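Putting those points together, a reconnect loop could look roughly like this; a sketch rather than a drop-in replacement, with ip, intport and the message handling assumed from the question:
while (true)
{
    try
    {
        using (var client = new TcpClient())
        {
            client.Connect(new IPEndPoint(IPAddress.Parse(ip), intport));
            NetworkStream stream = client.GetStream();
            byte[] notify = Encoding.ASCII.GetBytes("Hello");
            stream.Write(notify, 0, notify.Length);

            var data = new byte[1024];
            int numBytesRead;
            // Read returns 0 on a graceful remote close; a broken link throws.
            while ((numBytesRead = stream.Read(data, 0, data.Length)) > 0)
            {
                string text = Encoding.ASCII.GetString(data, 0, numBytesRead);
                // ... handle the received text ...
            }
        }
    }
    catch (SocketException ex)
    {
        // Log the exact failure, e.g. ConnectionRefused while the server is down.
        Console.WriteLine(ex.SocketErrorCode);
    }
    catch (IOException)
    {
        // Connection dropped mid-read/write.
    }
    // Wait a little before trying to connect again.
    Thread.Sleep(1000);
}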

Reading from closed NetworkStream doesn't cause any exception

I'm trying to create a rather simple client-server application, but for communication I want to use binary serialized objects. The communication itself seems rather fine, but when I close the stream on the client's side, the server doesn't really notice it and keeps on trying to read the stream.
server side (class Server, executed in separate thread):
listening for connections
listener = new TcpListener(IPAddress.Parse("127.0.0.1"), this.Port);
listener.Start();
while (!interrupted)
{
    Console.WriteLine("Waiting for client");
    TcpClient client = listener.AcceptTcpClient();
    AddClient(client);
    Console.WriteLine("Client connected");
}
adding the client:
public void AddClient(TcpClient socket)
{
    Client client = new Client(this, socket);
    this.clients.Add(client);
    client.Start();
}
listening for messages (deep inside Client class):
BinaryFormatter deserializer = new BinaryFormatter();
while (!interrupted)
{
    System.Diagnostics.Debug.WriteLine("Waiting for the message...");
    AbstractMessage msg = (AbstractMessage)deserializer.Deserialize(stream);
    System.Diagnostics.Debug.WriteLine("Message arrived: " + msg.GetType());
    raiseEvent(msg);
}
unit test:
Server server = new Server(6666);
server.Start();
Thread.Sleep(500);
TcpClient client = new TcpClient("127.0.0.1", 6666);
var message = new IntroductionMessage();
byte[] arr = message.Serialize();
client.GetStream().Write(arr, 0, arr.Length);
Thread.Sleep(500);
Assert.AreEqual(1, server.Clients.Count);
client.GetStream().Close();
client.Close();
Thread.Sleep(1000);
Assert.AreEqual(0, server.Clients.Count);
server.Stop();
so the message gets read properly, but then, when I close the stream, deserializer.Deserialize(stream) doesn't appear to throw any exceptions... so should it just not be read this way, or should I close the client in a different way?
Assuming the stream used in the Server for deserializing the message is a NetworkStream (which is the type of the stream returned by TcpClient.GetStream()), you should do two things:
Define a specific message for "connection ends". When the server receives and deserializes this message, exit the while-loop. To make this work, the client obviously needs to send such a message before closing its TcpClient connection. (You might choose a different mechanism working in a similar manner -- but why not use the message mechanism you already have in place...)
Set a ReadTimeout on the NetworkStream, so in case the connection gets lost and the client is unable to send a "connection ends" message, the server will hit the timeout and realize that the client is "dead".
The code in your server for listening to client messages should look similar to this:
//
// Time out 1 minute after receiving the last message
//
stream.ReadTimeout = 60 * 1000;
BinaryFormatter deserializer = new BinaryFormatter();
try
{
    while (!interrupted)
    {
        System.Diagnostics.Debug.WriteLine("Waiting for the message...");
        AbstractMessage msg = (AbstractMessage)deserializer.Deserialize(stream);
        System.Diagnostics.Debug.WriteLine("Message arrived: " + msg.GetType());
        //
        // Exit the while-loop when receiving a "connection ends" message.
        // Adapt this condition to whatever is appropriate for your
        // AbstractMessage type.
        //
        if (msg is ConnectionEndsMessage) break;
        raiseEvent(msg);
    }
}
catch (IOException ex)
{
    // ... handle timeout and other IOExceptions here ...
}
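On the client side this means sending the agreed "connection ends" message before closing. A sketch following the unit test above, where ConnectionEndsMessage is a hypothetical message type using the same Serialize() convention as IntroductionMessage:
// Tell the server we are done, so its read loop can exit immediately
// instead of waiting for the ReadTimeout to expire.
var goodbye = new ConnectionEndsMessage();
byte[] arr = goodbye.Serialize();
client.GetStream().Write(arr, 0, arr.Length);
client.GetStream().Close();
client.Close();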
