Unable to make 2 parallel TCP requests to the same TCP Client - c#

Error:
Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall
Situation
There is a TCP server.
My web application connects to this TCP server using the code below:
TcpClientInfo = new TcpClient();
_result = TcpClientInfo.BeginConnect(<serverAddress>, <portNumber>, null, null);
bool success = _result.AsyncWaitHandle.WaitOne(20000, true);
if (!success)
{
    TcpClientInfo.Close();
    throw new Exception("Connection Timeout: Failed to establish connection.");
}
NetworkStreamInfo = TcpClientInfo.GetStream();
NetworkStreamInfo.ReadTimeout = 20000;
Two users use the same application from two different locations to access information from this server at the SAME TIME.
The server takes around 2 seconds to reply.
Both connect,
but one of the users gets the above error,
"Unable to read data from the transport connection: A blocking operation was interrupted by a call to WSACancelBlockingCall",
when trying to read data from the stream.
How can I resolve this issue?
Should I use a better way of connecting to the server?
Or is there nothing I can do on the client because it's a server issue?
If it is a server issue, how should the server handle requests to avoid this problem?

This looks Windows-specific to me, which isn't my strong point, but...
You don't show us the server code, only the client code. I can only assume, then, that your server code accepts a socket connection, does its magic, sends something back, and closes the client connection. If this is your case, then that's the problem.
The accept() call is a blocking one that waits for the next client connection attempt and binds to it. The OS may create and manage a queue of pending connection attempts, but the server can still only accept one connection at a time.
If you want to be able to handle multiple simultaneous requests, you have to change your server to call accept(), and when a new connection comes in, launch a worker thread/process to handle the request and go back to the top of the loop where the accept() is. So the main loop hands off the actual work to another thread/process so it can get back to the business of waiting for the next connection attempt.
Real server applications are more complex than this. They launch a bunch of "worker bee" threads/processes in a pool and reuse them for future requests. Web servers do this, for instance.
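As a rough illustration (not the poster's actual server code; the port number and the HandleClient method are placeholders), that accept-and-hand-off loop might look like this in C#:
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

var listener = new TcpListener(IPAddress.Any, 9000); // example port
listener.Start();

while (true)
{
    // Block until the next client connects...
    TcpClient client = listener.AcceptTcpClient();

    // ...then hand the connection to a worker task so the loop can
    // immediately go back to accepting the next connection attempt.
    Task.Run(() => HandleClient(client)); // HandleClient: read request, send reply, close client
}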
If my assumptions about your server code are wrong, please enlighten us as to what it looks like.

Just a thought.
If your server takes 2 seconds to respond, shouldn't the timeout value be 2000 instead of 20000 (which is 20 seconds)? The first argument to AsyncWaitHandle.WaitOne() is in milliseconds.
If you are waiting 20 seconds, maybe your server is disconnecting you for being idle?

Related

Multiple client with async TCP listener in C#

I have a problem with an async TCP listener in C#. The main problem is that I want to create an async TCP listener in order to handle multiple connections. I have tons of requests from devices and web pages, and I also have to use a database to record specific information from these connections (reads/writes to/from SQL Server).
The scenario of our task is this: a REST request is posted from a webpage with a unique identifier to our Web API. Our Web API then makes a TCP connection to our listener, and we must halt this connection until we get another connection from a device with that same unique identifier. We then send the data we received earlier (from the webpage connection) to this connected device, and again we must halt this connection too. After processing the data, the device sends some other data back, and we must send that data to the webpage connection we halted before.
How can I find a halted connection in our listener?
Is there a better solution for us (other than using an async TCP listener)?
Because of some customer constraints we are unable to use SignalR or a self-hosted Web API in C#.
Regards,
Sara
'Halt' isn't the best word to describe what you need. If you need two-way communication with a web page over a REST request, you simply need to keep that request pending until the response is ready (not recommended, it could take really long and the connection could be dropped due to network conditions). Do reconsider your choice of avoiding SignalR. However, if need be, you can keep the request thread waiting. To do that, you'd need either a TaskCompletionSource (if you're processing the request within a Task) or a synchronization primitive such as a ManualResetEvent. I can't really give you more details without knowing the conditions your code will run under.
On the device side of things, again you need two way communication. You could implement this in one of two ways:
The device opens a TCP connection and keeps it open. The server receives the ID, and then sends the data back over the connection. The device then processes this data in some way and sends its response back to the server over the same connection and terminates the connection.
The device makes the equivalent of a REST GET request to the server to grab the data from the web page. It then processes the data and makes the equivalent of a POST request to send its own data back to the server.
After this is done, you still have the connection from the web page waiting for a response. Simply let it know the transaction has completed, using TaskCompletionSource.SetResult or ManualResetEvent.Set. The server can then write whatever data it needs in the response to the web page's request and close that connection too.
Also note that there is no such thing as a halted connection. You just intentionally delay writing a response.
EDIT: You can't really hold the connection (not with the normal execution flow of most web servers at least), but you can stop the thread processing that connection. This is a heavily simplified (and completely inappropriate for any real system) example:
// ConnectionManager.cs
public static Dictionary<Guid, TaskCompletionSource<DataToSendToWebPage>> connectionTCSs
    = new Dictionary<Guid, TaskCompletionSource<DataToSendToWebPage>>();

// WebPageRequestHandler.cs
async Task HandleClientRequest()
{
    // do some stuff
    var tcs = new TaskCompletionSource<DataToSendToWebPage>();
    ConnectionManager.connectionTCSs[deviceID] = tcs;
    var result = await tcs.Task; // This is where you wait for the other flow to complete
    // Write response to connection
}

// DeviceRequestHandler.cs
void HandleRequest()
{
    // do stuff
    ConnectionManager.connectionTCSs[clientID].SetResult(result);
}
The general idea is that you keep the thread (or task) processing the web page request waiting, and then signal it to continue from the other thread when the device's connection is handled and data is received.

Self-healing SslStream

I'm writing a service that needs to maintain a long-running SSL connection to a remote server. I need this connection to be self-healing, that is, if it's disconnected for any reason then the next time it's written to it will reconnect. I've tried this:
bool isConnected = client.Connected && client.Client.Poll(0, SelectMode.SelectWrite) && stream.CanWrite;
if (!isConnected)
{
    this.connected = false;
    GetConnection();
}
stream.Write(bytes, 0, bytes.Length);
stream.Flush();
But I find it doesn't act as I would expect. If I simulate a network outage by disabling my wifi, I'm still able to write to the stream with stream.Write() for approximately 20 seconds. The next time I try to write to it, none of client.Connected, client.Client.Poll(), or stream.CanWrite returns false, yet when I go to write to the stream I get a socket exception. Finally, if I try to recreate the connection, I get this exception: An existing connection was forcibly closed by the remote host.
I would appreciate any help creating a long-running SslStream that can withstand network failure. Thanks!
From a 10,000-foot point of view:
The reason you can still write to the stream after shutting down your wifi is that there is a network buffer holding the data for transmission. A successful stream.Write/stream.Flush means the network interface (the TCP/IP stack) has accepted the data and buffered it for transmission, not that the data has reached its target.
It takes time for the TCP/IP stack to notice a full media disconnection (connection lost/reset), because even with no physical link TCP/IP treats this as a temporary network issue and keeps retrying for a while (the network could be dropping packets at some point and the stack will keep retrying).
If you think about it in reverse, you wouldn't want all your programs to fail on every network hiccup (these happen all the time on the internet), so TCP/IP takes its time before notifying the application layer that the connection has become invalid (after retrying several times and waiting a reasonable amount of time).
You can always reconnect to the server when the SslStream fails and continue sending data, although you will find it is not as easy as that: there are scenarios where you send data that the server never receives, and others where the server receives the data but you never get an ACK back at all... So depending on your needs, self-healing alone may not be enough.
Self-healing is simple to implement; data consistency and reliability are harder and usually require the server to support some kind of reliable messaging mechanism to ensure all data has been sent and received.
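A minimal sketch of the reconnect-on-failure idea, assuming _client, _stream, serverHost and serverPort are fields/settings of your service (they are not from the original code):
// using System.IO; using System.Net.Security; using System.Net.Sockets;
private SslStream EnsureConnected()
{
    if (_client != null && _client.Connected && _stream != null)
        return _stream;

    _client?.Close();
    _client = new TcpClient(serverHost, serverPort);
    _stream = new SslStream(_client.GetStream());
    _stream.AuthenticateAsClient(serverHost); // a full SSL handshake is needed on every reconnect
    return _stream;
}

public void WriteWithRetry(byte[] bytes)
{
    try
    {
        EnsureConnected().Write(bytes, 0, bytes.Length);
    }
    catch (IOException)
    {
        // The old connection was already dead; rebuild it once and resend.
        // Note: the server may or may not have received the first attempt.
        _client?.Close();
        _client = null;
        EnsureConnected().Write(bytes, 0, bytes.Length);
    }
}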
The underlying protocol for SSL is TCP. TCP will usually only send data if the application wants it to deliver data, or if it needs to reply to data received from the other side by sending an ACK. This means that a broken connection, like a lost link, will not be noticed until you try to send data. Even then you will not notice immediately, because:
A write to the socket only delivers the data to the OS kernel and returns success if that delivery was successful.
The kernel then tries to deliver the data to the peer and waits for the ACK.
If it does not get an ACK it retries the delivery, and only after several unsuccessful retries does the kernel declare the connection broken.
Only after the connection is marked broken by the kernel will the next write or read return the error from kernel to user space, such as EPIPE on a write.
This means that if you want to know up front whether the connection is still alive, you have to make sure there is regular data exchange on the connection. At the TCP level you might set TCP_KEEPALIVE, but this might use an interval of several hours between keep-alive packets. At the SSL layer you might try the infamous heartbeat extension, but most peers will not understand it. The last choice is to implement some kind of heartbeat in your own application.
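For reference, turning on TCP keep-alive for the TcpClient from the question might look roughly like this (a sketch only; the probe timing is controlled by OS settings unless you override it, and the per-socket tuning shown in the comments requires .NET Core 3.0 or later):
// Enable TCP keep-alive on the underlying socket.
client.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);

// On .NET Core 3.0+ the probe timing can also be tuned, e.g.:
// client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveTime, 30);     // seconds idle before probing
// client.Client.SetSocketOption(SocketOptionLevel.Tcp, SocketOptionName.TcpKeepAliveInterval, 5);  // seconds between probes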
As for the self-healing: when reconnecting you get a new TCP connection, and you also need to do a full SSL handshake because the last SSL connection was not cleanly closed and thus cannot be resumed. The server has no idea that this new connection is a continuation of the old one, so you have to implement some kind of meta-connection spanning multiple TCP connections inside your application layer, on both client and server. Inside this meta-connection you need your own data tracking to detect which data were really accepted by the peer and which were only sent but never explicitly acknowledged because the connection broke. Sounds like a kind of TCP on top of TCP.

How can you get Socket.Shutdown to raise a SocketException?

MSDN states that Socket.Shutdown can throw a SocketException. I've had this happen to me in production recently after introducing a load balancer between my clients and my server. But I cannot reproduce it in testing without a load balancer. Can you?
Some background - I have a server application written in C# that uses TCP sockets to communicate with clients. The application protocol is very simple for the server: accept connection, read request, send response, wait for client shutdown (read expecting 0 bytes), shutdown.
This code has been in production without issue for many years. However after introducing a load balancer in front of multiple server machines one of the server processes crashed due to an unhandled SocketException that was raised when the server called Socket.Shutdown. The particular client had timed out whilst waiting for the server to respond and attempted to close the connection early. The exception message on the server was "An existing connection was forcibly closed by the remote host." It is not unusual for the client to do this, but obviously prior to the load balancer the server was raising this error at a different point in the code. Still it's clearly a server bug and the fix is obvious - handle the exception.
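A minimal sketch of that fix (socket here stands in for the server's connected client socket):
try
{
    socket.Shutdown(SocketShutdown.Both);
}
catch (SocketException ex)
{
    // The peer (or something in between, such as a load balancer) has already
    // reset the connection; nothing is left to do except note it and close.
    Console.WriteLine("Shutdown after connection reset: " + ex.SocketErrorCode);
}
finally
{
    socket.Close();
}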
However using a test client application (also written in C#), I cannot find a sequence of operations that will cause the server to raise an exception during Socket.Shutdown. It appears that the load balancer did something unusual to the TCP packets, but still, I dislike using that as excuse for failing to reproduce the issue.
I can run both server and client code in debug and I have WireShark watching the packets.
On the client side, after the connection is established, the operations are:
Socket.Send() // single call
Socket.Receive() // this one times out in our scenario
Socket.XXX() // various choices as described below
On the server side, after the connection is established, the operations are:
1) Socket.Receive() //multiple calls until complete message is received
2) // Processing...
3) Socket.Send() // single call
4) Socket.Receive() // single call expecting 0 bytes
5) Socket.Shutdown()
Presume each call is wrapped with try..catch(SocketException)
A) If I pause the server during step 2, wait for the client to time out, and initiate a client shutdown using Socket.Shutdown(SocketShutdown.Send), a FIN packet is sent to the server. When the server resumes processing, all the calls succeed (3 through 5) because that's a perfectly acceptable TCP flow.
B) If I pause the server during step 2, wait for the client to time out, and initiate a client shutdown using Socket.Shutdown(SocketShutdown.Both) or Socket.Close(), again a FIN packet is sent to the server. When the server resumes processing, step 3 succeeds, but it causes the client to send an RST packet in response, as it is not accepting more data. If this RST arrives before step 4, Socket.Receive throws and step 5 succeeds. If it arrives after step 4, Socket.Receive succeeds (returns 0 bytes), and step 5 still succeeds.
C) If the client has "Don't Linger" set (linger enabled with a 0 timeout), and I pause the server during processing, wait for the client to time out, and initiate a client shutdown using Socket.Shutdown(SocketShutdown.Both) or Socket.Close(), an RST packet is immediately sent to the server. When the server resumes processing, steps 3 and 4 fail, but step 5 still succeeds.
I think what puzzles me most is that Socket.Shutdown appears to ignore my test client RST packets and yet evidently my load balancer was able to send a RST packet that was not ignored. What am I missing? What else can I try?

Should I close a socket (TCPIP) after every transaction?

I have written a TCP/IP server that uses a FileSystemWatcher and fills a queue with data parsed from new files picked up by the FSW.
A single client will connect to this server and ask for data from the queue (no other client will need to connect at any time). If no data exists, the client will wait (1 second) and try again.
Both client and server are written asynchronously - my question is: should the client create a new socket for each transaction (inside the while loop), or just leave the socket open (outside the while loop)?
client.Connect();
while (bCollectData)
{
    // ... communicate ...
    Thread.Sleep(1000);
}
client.Shutdown(SocketShutdown.Both);
client.Close();
I would suggest you leave the socket open, and even better, block on it on the client so that you don't have to Thread.Sleep. When the server has some data it will send the message to the client.
The code will look something like this
var buffer = new byte[4096];
while (bCollectData)
{
    int bytesRead = _socket.Receive(buffer); // this call blocks until the server sends data
    // ... process the message, then wait again on the next iteration.
}
Using this approach you will get all messages immediately and avoid unneeded traffic between client and server (the polling messages whose only purpose is to report that the server has no data).
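On the server side, the push could be sketched like this (hypothetical names: _clientSocket for the single connected client and _queue for the FileSystemWatcher output; neither is from the question):
// Called whenever the FileSystemWatcher pipeline has parsed a new file.
void OnDataParsed(byte[] payload)
{
    if (_clientSocket != null && _clientSocket.Connected)
    {
        _clientSocket.Send(payload); // the client is blocked in Receive() and wakes up here
    }
    else
    {
        _queue.Enqueue(payload);     // keep the data until the client (re)connects
    }
}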
I would leave the socket open outside the loop, reconnecting every iteration seems like a waste of resources.
I would not close the socket. Every time you connect you incur the handshake overhead again.

Check if a server is available

I'm looking for a way to check if a server is still available.
We have an offline application that saves data on the server, but if the server connection drops (it happens occasionally), we have to save the data to a local database instead of the online database.
So we need a continuous check to see whether the server is still available.
We are using C# for this application.
Checking with SqlConnection.Open is not really an option because it takes about 20 seconds before an error is thrown; we can't wait that long. I'm also using some HTTP services.
Just use the System.Net.NetworkInformation.Ping class. If your server does not respond to ping (because for some reason you decided to block ICMP echo requests), you'll have to invent your own service for this. Personally, I'm all for not blocking ICMP echo requests, and I think this is the way to go. The ping command has been used for ages to check the reachability of hosts.
using System.Net.NetworkInformation;
var ping = new Ping();
var reply = ping.Send("google.com", 60 * 1000); // 1 minute time out (in ms)
// or...
reply = ping.Send(new IPAddress(new byte[]{127,0,0,1}), 3000);
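The snippet only sends the ping; you still need to inspect the result, e.g.:
bool serverUp = reply != null && reply.Status == IPStatus.Success;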
If the connection is as unreliable as you say, I would not use a separate check, but make saving the data locally part of the exception handling.
I mean, if the connection fails and throws an exception, you switch strategies and save the data locally.
If you check first and the connection drops afterwards (when you actually save the data), you would still run into an exception you need to handle, so the initial check was unnecessary. The check would only be useful if you could assume that after a successful check the connection is up and stays up.
From your question it appears the purpose of connecting to the server is to use its database. Your priority must be to check whether you can successfully connect to the database. It doesn't matter if you can PING the server or get an HTTP response (as suggested in other answers), your process will fail unless you successfully establish a connection to the database. You mention that checking a database connection takes too long, why don't you just change the Connection Timeout setting in your application's connection string to a more impatient value such as 5 seconds (Connection Timeout=5)?
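As a rough sketch (the server and database names below are placeholders, not from the question):
using System.Data.SqlClient;

// "Connection Timeout=5" caps the wait at about 5 seconds instead of the default 15.
const string connectionString =
    "Server=myServer;Database=myDb;Integrated Security=true;Connection Timeout=5";

static bool CanReachDatabase()
{
    try
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open(); // fails within the timeout if the server is unreachable
            return true;
        }
    }
    catch (SqlException)
    {
        return false; // switch to the local database instead
    }
}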
If this is an sql server then you can just try to open a new connection to it. If the SqlConnection.Open method fails then you can check the error message to determine if the server is unavailable.
What you are doing now is:
use distant server
if distant server fails, resort to local cache
How to determine if the server is available? Use a catch block. That's the simplest to code.
If you actually have a local database (and not, for example, a list of transactions or data waiting to be inserted), I would turn the design around:
use the local database
regularly synchronize the local database and the distant database
I'll let you be the judge on concurrency constraints and other stuff related to your application to pick a solution.
Since you want to see if the database server is there, either catch the error when you attempt to connect to the database, or use a socket and attempt a raw connection to the server on some port. I'd suggest the database, as that is the resource you actually need.
