I am trying to establish TCP connections to a number of IPs in parallel, as fast as possible. I have converted some older code to use the Async CTP for that purpose, introducing the parallelism.
Changes to Design and Speed, and Accessing Successful Connections?
My question is three-fold:
How bad is the following flow / what should I change?
i.e. the await starts a bunch of parallel TcpRequester tasks, but within each TcpRequester there is a tcpClient.BeginConnect, as well as another thread being spawned for reading (if the connection is successful), and the writing to the stream is done with a Wait/Pulse mechanism in a while loop.
Secondly, how could I make the process of connecting to a number of targets faster?
Currently, if the IP:port targets are not actually running any servers, I get "All Done" printed about 18 seconds after the start when trying to connect to about 500 local targets (which are not listening on those ports, and thus fail).
How could I access the WriteToQueue method of the successful connections from the mothership?
Async Mothership Trying to Connect to All Targets in Parallel
// First get a bunch of IPAddress:Port targets
var endpoints = EndPointer.Get();
// Try connect to all those targets
var tasks = from t in endpoints select TcpRequester.ConnectAsync(t);
await TaskEx.WhenAll(tasks);
Debug.WriteLine("All Done");
Static Accessor for Individual TcpRequest Tasks
public static Task<TcpRequester> ConnectAsync(IPEndPoint endPoint)
{
var tcpRequester = Task<TcpRequester>.Factory.StartNew(() =>
{
var request = new TcpRequester();
request.Connect(endPoint);
return request;
}
);
return tcpRequester;
}
TcpRequester with BeginConnect TimeOut and new Thread for Reading
public void Connect(IPEndPoint endPoint)
{
    TcpClient tcpClient = null;
    Stream stream = null;
    try
    {
        using (tcpClient = new TcpClient())
        {
            tcpClient.ReceiveTimeout = 1000;
            tcpClient.SendTimeout = 1000;
            IAsyncResult ar = tcpClient.BeginConnect(endPoint.Address, endPoint.Port, null, null);
            WaitHandle wh = ar.AsyncWaitHandle;
            try
            {
                if (!wh.WaitOne(TimeSpan.FromMilliseconds(1000), false))
                {
                    throw new TimeoutException();
                }
                if (tcpClient.Client != null)
                {
                    // Success
                    tcpClient.EndConnect(ar);
                }
                if (tcpClient.Connected)
                {
                    stream = tcpClient.GetStream();
                }
                // Start to read the stream until told to close, or until the remote side closes
                ThreadStart reader = () => Read(stream);
                // Reading is done in a separate thread
                var thread = new Thread(reader);
                thread.Start();
                // See Writer method below
                Writer(stream);
            }
            finally
            {
                wh.Close();
            }
        }
    }
    catch (Exception)
    {
        if (tcpClient != null)
            tcpClient.Close();
    }
}
Writing to Stream with Wait and Pulse
readonly Object _writeLock = new Object();
public void WriteToQueue(String message)
{
_bytesToBeWritten.Add(Convert(message));
lock (_writeLock)
{
Monitor.Pulse(_writeLock);
}
}
void Writer(Stream stream)
{
while (!_halt)
{
while (_bytesToBeWritten.Count > 0 && !_halt)
{
// Write method does the actual writing to the stream:
if (Write(stream, _bytesToBeWritten.ElementAt(0)))
{
_bytesToBeWritten.RemoveAt(0);
} else
{
Discontinue();
}
}
if (!(_bytesToBeWritten.Count > 0) && !_halt)
{
lock (_writeLock)
{
Monitor.Wait(_writeLock);
}
}
}
Debug.WriteLine("Discontinuing Writer and TcpRequester");
}
There are a few red flags that pop out at a cursory glance.
You have this Stream that is accepting reads and writes, but there is no clear indication that the operations have been synchronized appropriately. The documentation does state that a Stream's instance methods are not safe for multithreaded operations.
There does not appear to be synchronization around operations involving _bytesToBeWritten.
Acquiring a lock solely to execute Monitor.Wait and Monitor.Pulse is a little weird, if not downright incorrect. It is basically equivalent to using a ManualResetEvent.
It is almost never correct to use Monitor.Wait without a while loop. To understand why you have to understand the purpose of pulsing and waiting on a lock. That is really outside the scope of this answer.
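For reference, the canonical shape looks roughly like this (a sketch using the poster's field names, not a drop-in replacement):
lock (_writeLock)
{
    // Re-check the condition in a loop while holding the lock: a Pulse only means
    // "something may have changed", not "the condition now holds".
    while (_bytesToBeWritten.Count == 0 && !_halt)
    {
        Monitor.Wait(_writeLock);
    }
    // Here the condition is guaranteed to hold, and the lock is still held.
}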
It appears that the Writer and WriteToQueue methods are an attempt at a producer-consumer queue. The .NET BCL already contains the innards for this via the BlockingCollection class.
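A minimal sketch of such a replacement, assuming _bytesToBeWritten held byte arrays and that Convert, Write, and Discontinue are the poster's existing members (BlockingCollection lives in System.Collections.Concurrent):
readonly BlockingCollection<byte[]> _outgoing = new BlockingCollection<byte[]>();

public void WriteToQueue(String message)
{
    // Producer side: Add is thread-safe, so no explicit locking or pulsing is needed.
    _outgoing.Add(Convert(message));
}

void Writer(Stream stream)
{
    // Consumer side: GetConsumingEnumerable blocks until an item arrives and
    // completes once CompleteAdding() is called (e.g. from Discontinue()).
    foreach (var bytes in _outgoing.GetConsumingEnumerable())
    {
        if (!Write(stream, bytes))
        {
            Discontinue();
            break;
        }
    }
    Debug.WriteLine("Discontinuing Writer and TcpRequester");
}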
For what it is worth I see nothing flagrantly wrong with the general approach and usage of the await keyword.
Related
I'm currently working on an ASP.NET Core web app, which consists of a web server and two long-running services: a TCP server (for managing my own clients) and a TCP client (integration with an external platform).
Both services run alongside the web server. I achieved that by making them inherit from BackgroundService and registering them in DI like this:
services.AddHostedService(provider => provider.GetService<TcpClientService>());
services.AddHostedService(provider => provider.GetService<TcpServerService>());
Unfortunately, during development I ran into a weird issue (which doesn't let me sleep at night, so at this point I beg for your help). For some reason, async code in TcpClientService blocks the execution of the other services (the web server and the TCP server).
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
namespace ClientService.AsyncPoblem
{
public class TcpClientService : BackgroundService
{
private readonly ILogger<TcpClientService> _logger;
private bool Connected { get; set; }
private TcpClient TcpClient { get; set; }
public TcpClientService(ILogger<TcpClientService> logger)
{
_logger = logger;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
while (!stoppingToken.IsCancellationRequested)
{
try
{
if (Connected)
{
await Task.Delay(100, stoppingToken); // check every 100ms if still connected
}
else
{
TcpClient = new TcpClient("localhost", 1234);
HandleClient(TcpClient); // <-- Call causing the issue
_logger.Log(LogLevel.Debug, "After call");
}
}
catch (Exception e)
{
// log the exception, wait for 3s and try again
_logger.Log(LogLevel.Critical, "An error occured while trying to connect with server.");
_logger.Log(LogLevel.Critical, e.ToString());
await Task.Delay(3000, stoppingToken);
}
}
}
private async Task HandleClient(TcpClient client)
{
Connected = true;
await using var ns = client.GetStream();
using var streamReader = new StreamReader(ns);
var msgBuilder = new StringBuilder();
bool reading = false;
var buffer = new char[1024];
while (!streamReader.EndOfStream)
{
var res = await streamReader.ReadAsync(buffer, 0, 1024);
foreach (var value in buffer)
{
if (value == '\x02')
{
msgBuilder.Clear();
reading = true;
}
else if (value == '\x03')
{
reading = false;
if (msgBuilder.Length > 0)
{
Console.WriteLine(msgBuilder);
msgBuilder.Clear();
}
}
else if (value == '\x00')
{
break;
}
else if (reading)
{
msgBuilder.Append(value);
}
}
Array.Clear(buffer, 0, buffer.Length);
}
Connected = false;
}
}
}
The call causing the issue is in the else branch of the ExecuteAsync method:
else
{
TcpClient = new TcpClient("localhost", 1234);
HandleClient(TcpClient); // <-- Call causing the issue
_logger.Log(LogLevel.Debug, "After call");
}
The code reads properly from the socket, but it blocks the initialization of the WebServer and TcpServer. In fact, even the log call is never reached. Whether I put await in front of HandleClient() or not, the code behaves the same.
I've done some tests and figured out that this piece of code no longer blocks ("After call" shows up in the log):
else
{
TcpClient = new TcpClient("localhost", 1234);
await Task.Delay(1);
HandleClient(TcpClient); // <- moving Task.Delay into HandleClient also works
_logger.Log(LogLevel.Debug, "After call");
}
This also works like a charm (if I await Task.Run(), it blocks the "After call" log, but the rest of the app starts with no problem):
else
{
tcpClient = new TcpClient("localhost", 6969);
Connected = true;
Task.Run(() => ReceiveAsync(tcpClient));
_logger.Log(LogLevel.Debug, "After call");
}
There are a couple more combinations that make it work, but my question is: why do the other methods work (especially the 1 ms delay, which completely shuts down my brain) while firing HandleClient() without await doesn't? I know that fire-and-forget may not be the most elegant solution, but it should work and do its job, shouldn't it? I searched for almost a month and still didn't find a single explanation. At this point I have a hard time falling asleep at night, because I have no one to ask and can't stop thinking about it.
Update
(Sorry for disappearing for over a day without any answers)
After many, many hours of investigation, I started debugging once again. Every time I hit the while loop in HandleClient(), I lost control of the debugger; the program seemed to keep working, but it never reached await streamReader.ReadAsync(). At some point I decided to change the condition in the while loop to true (I have no idea why I didn't think of trying it before), and everything began to work as expected. Messages were read from the TCP socket, and the other services fired up without any issues.
Here is the piece of code causing the issue:
while (!streamReader.EndOfStream) <----- issue
{
var res = await streamReader.ReadAsync(buffer, 0, 1024);
// ...
After that observation, I decided to print out the result of EndOfStream before reaching the loop, to see what happens
Console.WriteLine(streamReader.EndOfStream);
while (!streamReader.EndOfStream)
{
var res = await streamReader.ReadAsync(buffer, 0, 1024);
// ...
Now the exact same thing was happening, but before even reaching the loop!
Explanation
Note:
I'm not a senior programmer, especially when it comes to dealing with asynchronous TCP communication, so I might be wrong here, but I will try to do my best.
streamReader.EndOfStream is not a regular field; it is a property, and it has logic inside its getter.
This is what it looks like on the inside:
public bool EndOfStream
{
get
{
ThrowIfDisposed();
CheckAsyncTaskInProgress();
if (_charPos < _charLen)
{
return false;
}
// This may block on pipes!
int numRead = ReadBuffer();
return numRead == 0;
}
}
The EndOfStream getter is a synchronous method. To detect whether the stream has ended, it calls ReadBuffer(). Since there is no data in the buffer yet and the stream hasn't ended, the method hangs until there is some data to read. It cannot be used in an asynchronous context; it will always block (unfortunately, because it seems to be the only way to instantly detect an interrupted connection, a broken cable, or the end of the stream).
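The general shape of the workaround seems to be to drop the EndOfStream check entirely and loop on ReadAsync instead, which returns 0 only once the remote side has closed the stream (a rough sketch; the character handling is the same STX/ETX parsing as in HandleClient above):
var buffer = new char[1024];
int read;
// ReadAsync awaits without blocking a thread; a return value of 0 means the connection was closed.
while ((read = await streamReader.ReadAsync(buffer, 0, buffer.Length)) > 0)
{
    for (var i = 0; i < read; i++)
    {
        // same '\x02' / '\x03' framing logic as above, applied to buffer[i]
    }
}
Connected = false;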
I don't have a finished piece of code yet; I need to rewrite it and add some broken-connection detection. I will post my solution as soon as I finish.
I would like to thank everyone for trying to help me, and especially #RoarS., who took the biggest part in the discussion and spent some of his own time taking a closer look at my issue.
This is poorly documented behaviour of the BackgroundService class. All registered IHostedService instances are started sequentially, in the order they were registered. The application will not start until each IHostedService has returned from StartAsync. A BackgroundService is an IHostedService that starts your ExecuteAsync task before returning from StartAsync. An async method runs synchronously until its first await on an incomplete task, and only then returns to its caller.
TL;DR: if you don't await anything in your ExecuteAsync method, the server will never start.
Since you aren't awaiting that async method, your code boils down to:
while(true)
HandleClient(...);
(Do you really want to spawn an infinite number of TcpClients as fast as the CPU will go?) There's a really easy fix:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
await Task.Yield();
// ...
}
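One possible shape for the fixed method, assuming you also want to await HandleClient so that its failures land in the existing catch block (a sketch based on the question's code, not a tested drop-in):
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    // Yield right away so StartAsync can return and the remaining hosted
    // services (web server, TCP server) are able to start.
    await Task.Yield();
    while (!stoppingToken.IsCancellationRequested)
    {
        try
        {
            TcpClient = new TcpClient("localhost", 1234);
            // Awaiting here means we only try to reconnect after the
            // connection drops, and exceptions are observed below.
            await HandleClient(TcpClient);
        }
        catch (Exception e)
        {
            _logger.Log(LogLevel.Critical, "An error occured while trying to connect with server.");
            _logger.Log(LogLevel.Critical, e.ToString());
            await Task.Delay(3000, stoppingToken);
        }
    }
}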
I'm building a server app that accepts incoming TCP connections (roughly 300 unique clients). It's important to note that I do not have control over the clients.
I have found that some of the connecting clients remain idle for quite some time after making the initial connection and sending the first status update. When they remain idle for over 5 mins the application's CPU usage jumps to over 90% and remains there.
To address this issue I built in a cancellation token that is triggered after 4 minutes, which allows me to kill the connection. The client then detects this and reconnects about a minute later. This solves the high CPU usage issue, but it has the side effect of high memory usage: there seems to be a memory leak. I suspect the resources are being held by the previous socket object.
I have a client object that contains the socket connection and information about the connected client; it also manages the incoming messages. There is also a manager class that accepts the incoming connections. It creates the client object, assigns the socket to it, and adds the client object to a concurrent dictionary. Every 10 seconds it checks the dictionary for clients that have been set to _closeConnection = true and calls their Dispose method.
Here is some of the client object code:
public void StartCommunication()
{
Task.Run(async () =>
{
ArraySegment<byte> buffer = new ArraySegment<byte>(new byte[75]);
while (IsConnected)
{
try
{
// This is where I suspect the memory leak originates - I suspect this call is not properly cleaned up when the object is disposed
var result = await SocketTaskExtensions.ReceiveAsync(ClientConnection.Client, buffer, SocketFlags.None).WithCancellation(cts.Token);
if (result > 0)
{
var message = new ClientMessage(buffer.Array, true);
if(message.IsValid)
HandleClientMessage(message);
}
}
catch (OperationCanceledException)
{
_closeConnection = true;
DisconnectReason = "Client has not reported in 4 mins";
}
catch (Exception e)
{
_closeConnection = true;
DisconnectReason = "Error during receive operation";
}
}
});
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
_closeConnection = true;
cts.Cancel();
// Explicitly kill the underlying socket
if (ClientConnection.Client != null)
{
ClientConnection.Client.Close();
}
ClientConnection.Close();
cts.Dispose();
}
}
Task Extension Method:
public static async Task<T> WithCancellation<T>(this Task<T> task, CancellationToken cancellationToken)
{
var tcs = new TaskCompletionSource<bool>();
using (cancellationToken.Register(s => ((TaskCompletionSource<bool>)s).TrySetResult(true), tcs))
{
if (task != await Task.WhenAny(task, tcs.Task))
{
throw new OperationCanceledException(cancellationToken);
}
}
return task.Result;
}
Manager Code:
public bool StartListener()
{
_listener = new TcpListenerEx(IPAddress.Any, Convert.ToInt32(_serverPort));
_listener.Start();
Task.Run(async () =>
{
while (_maintainConnection) // <--- boolean flag to exit loop
{
try
{
HandleClientConnection(await _listener.AcceptTcpClientAsync());
}
catch (Exception e)
{
//<snip>
}
}
});
return true;
}
private void HandleClientConnection(TcpClient tcpClient)
{
Task.Run(async () =>
{
try
{
// Create new Coms object
var client = new ClientComsAsync();
client.ClientConnection = tcpClient;
// Start client communication
client.StartCommunication();
//_clients is the ConcurrentDictionary
ClientComsAsync existingClient;
if (_clients.TryGetValue(client.ClientName, out existingClient) && existingClient != null)
{
if (existingClient.IsConnected)
existingClient.SendHeatbeat();
if (!existingClient.IsConnected)
{
// Call Dispose on existing client
CleanUpClient(existingClient, "Reconnected with new connection");
}
}
}
catch (Exception e)
{
//<snip>
}
finally
{
//<snip>
}
});
}
private void CleanUpClient(ClientComsAsync client, string reason)
{
ClientComsAsync _client;
_clients.TryRemove(client.ClientName, out _client);
if (_client != null)
{
_client.Dispose();
}
}
When they remain idle for over 5 mins the application's CPU usage jumps to over 90% and remains there.
To address this issue I built in a cancellation token that is triggered after 4 mins.
The proper response is to solve the high CPU usage problem.
Looks to me like it's here:
while (IsConnected)
{
try
{
var result = await SocketTaskExtensions.ReceiveAsync(ClientConnection.Client, buffer, SocketFlags.None);
if (result > 0)
{
...
}
}
catch ...
{
...
}
}
Sockets are weird, and dealing with raw TCP/IP sockets is quite difficult to do correctly. On a side note, I always encourage devs to use something more standard like HTTP or WebSockets, but in this case you don't control the clients, so that's not an option.
Specifically, your code is not handling the case where result == 0. If the client devices gracefully closed their socket, you'd see a result of 0, immediately loop back and keep getting a result of 0 - a tight loop that uses up CPU.
This is, of course, assuming that IsConnected remains true. And that may be possible...
You don't show where IsConnected is set in your code, but I suspect it's in the error handling after sending the heartbeat message. So here's why that may not work as expected... I suspect that the client devices are closing their sending stream (your receiving stream) while keeping their receiving stream (your sending stream) open. This is one way to shut down a socket, sometimes considered "more polite" because it allows the other side to continue sending data even though this side is done sending. (This is from the client device perspective, so the "other side" is your code, and "this side" is the client device).
And this is perfectly legal socket-wise because each connected socket is two streams, not one, each of which can be independently closed. If this happens, your heartbeats will still be sent and received without error (and likely just silently discarded by the client device), IsConnected will remain true, and the read loop will become synchronous and eat up your CPU.
To resolve, add a check for result == 0 in your read loop and clean up the client just the same as if a heartbeat failed to send.
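For example, the receive loop from the question could be adjusted roughly like this (a sketch; the DisconnectReason text is just illustrative):
while (IsConnected)
{
    try
    {
        var result = await SocketTaskExtensions.ReceiveAsync(ClientConnection.Client, buffer, SocketFlags.None).WithCancellation(cts.Token);
        if (result == 0)
        {
            // 0 bytes received means the remote side closed its sending stream:
            // treat it like a failed heartbeat and let the manager dispose this client.
            _closeConnection = true;
            DisconnectReason = "Remote side closed the connection";
            break;
        }
        var message = new ClientMessage(buffer.Array, true);
        if (message.IsValid)
            HandleClientMessage(message);
    }
    catch (OperationCanceledException)
    {
        _closeConnection = true;
        DisconnectReason = "Client has not reported in 4 mins";
    }
    catch (Exception)
    {
        _closeConnection = true;
        DisconnectReason = "Error during receive operation";
    }
}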
This is a follow-up question to this question. On the next level, I now want to use maximal task concurrency to connect to expected hosts on a large set of IP addresses, using TCP/IP on a specific port.
My own research, as well as community references, has led me to key articles, for example:
How to check TCP/IP port availability using C# (Socket Communication)
Checking if ip with port is available?
How to set the timeout for a TcpClient?
A very impressive solution for large-scale pinging: Multithreading C# GUI ping example
And of course the precursor to this question: C#, Maximize Thread Concurrency
This allowed me to set up my own code, which works fine but currently takes a full 30 seconds to finish scanning 255 IPs on a single specific port. Given that the test machine has 8 logical cores, this observation suggests that my construct actually runs at most 8 concurrent tasks (255/8 = 31.85).
The function I wrote returns a list of responding IPs, {IPs}, which is a subset of the list of all IP:port targets to check, {IP_Ports}. This is my current code; it works fine but is not yet suitable for use on larger networks due to what I suspect is a lack of efficient task concurrency:
// Check remote host connectivity
public static class CheckRemoteHost
{
// Private Class members
private static bool AllDone = false;
private static object lockObj = new object();
private static List<string> IPs;
// Wrapper: manage async method <TCP_check>
public static List<string> TCP(Dictionary<string, int> IP_Ports, int TimeoutInMS = 100)
{
// Locals
IPs = new List<string>();
// Perform remote host check
AllDone = false;
TCP_check(IP_Ports, TimeoutInMS);
while (!AllDone) { Thread.Sleep(50); }
// Finish
return IPs;
}
private static async void TCP_check(Dictionary<string, int> IP_Ports, int timeout)
{// async worker method: check remote host via TCP-IP
// Build task-set for parallel IP queries
var tasks = IP_Ports.Select(host => TCP_IPAndUpdateAsync(host.Key, host.Value, timeout));
// Start execution queue
await Task.WhenAll(tasks).ContinueWith(t =>
{
AllDone = true;
});
}
private static async Task TCP_IPAndUpdateAsync(string ip, int port, int timeout)
{// method to call IP-check
// Run method asynchronously
await Task.Run(() =>
{
// Locals
TcpClient client;
IAsyncResult result;
bool success;
try
{
client = new TcpClient();
result = client.BeginConnect(ip, port, null, null);
success = result.AsyncWaitHandle.WaitOne(TimeSpan.FromMilliseconds(timeout));
if (success)
{
lock (lockObj)
{
IPs.Add(ip);
}
}
}
catch (Exception e)
{
// do nothing
}
});
}
}// end public static class CheckRemoteHost
So my question is: how can I maximize the task concurrency of requesting a response using TCP/IP at Port X such that I can obtain very fast IP-Port network scans on large internal networks?
Details
The default task scheduler is usually the ThreadPool scheduler. That means the number of concurrent tasks will be limited by the available threads in the pool.
Remarks
The thread pool provides new worker threads or I/O completion threads on demand until it reaches the minimum for each category. By default, the minimum number of threads is set to the number of processors on a system. When the minimum is reached, the thread pool can create additional threads in that category or wait until some tasks complete. Beginning with the .NET Framework 4, the thread pool creates and destroys threads in order to optimize throughput, which is defined as the number of tasks that complete per unit of time. Too few threads might not make optimal use of available resources, whereas too many threads could increase resource contention.
(Source: https://msdn.microsoft.com/en-us/library/system.threading.threadpool.getminthreads(v=vs.110).aspx)
You are likely just under the threshold at which the threadpool would spin up new threads, since tasks keep completing. Hence you only have 8 concurrent tasks running at once.
Solutions
1. Use ConnectAsync with a timeout.
Instead of creating a separate task that blocks waiting for the connect, you can call ConnectAsync and race it against a delay to create a timeout. ConnectAsync doesn't block threadpool threads.
public static async Task<bool> ConnectAsyncWithTimeout(this Socket socket, string host, int port, int timeout = 0)
{
if (timeout < 0)
throw new ArgumentOutOfRangeException("timeout");
try
{
var connectTask = socket.ConnectAsync(host, port);
var res = await Task.WhenAny(connectTask, Task.Delay(timeout));
await res;
return connectTask == res && connectTask.IsCompleted && !connectTask.IsFaulted;
}
catch(SocketException se)
{
return false;
}
}
Example usage
private static async Task TCP_IPAndUpdateAsync(string ip, int port, int timeout)
{// method to call IP-check
var client = new TcpClient();
var success = await client.Client.ConnectAsyncWithTimeout(ip, port, timeout);
if (success)
{
lock (lockObj)
{
IPs.Add(ip);
}
}
}
2. Use long running tasks.
Using Task.Factory.StartNew you can specify that the task is LongRunning. The threadpool task scheduler will then create a new dedicated thread for the task instead of using a pool thread. This gets around the 8-thread limit you are hitting. However, it is not a good solution if you plan to naively create thousands of tasks, since at that point the bottleneck will be thread context switches. You could, however, split all of the work between, say, 100 tasks.
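A sketch of that option, reusing the question's blocking check (CheckHostBlocking is an illustrative name standing in for the existing BeginConnect/WaitOne body):
var tasks = IP_Ports.Select(host => Task.Factory.StartNew(
    () => CheckHostBlocking(host.Key, host.Value, timeout), // the existing blocking check
    CancellationToken.None,
    TaskCreationOptions.LongRunning, // ask the scheduler for a dedicated thread
    TaskScheduler.Default));
await Task.WhenAll(tasks);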
3. Use non-blocking connect
This method doesn't require creating multiple tasks. Instead, you can issue multiple connects from a single thread and check the status of several sockets at once. This method is a bit more involved, though. If you'd rather go with this approach and want a more complete example, leave a comment letting me know. Here is a quick snippet showing how to use the API.
var socket = new Socket(SocketType.Stream, ProtocolType.Tcp);
socket.Blocking = false;
try
{
socket.Connect("127.0.0.1", 12345);
}
catch(SocketException se)
{
//Ignore the "A non-blocking socket operation could not be completed immediately" error
if (se.ErrorCode != 10035)
throw;
}
//Check the connection status of the socket.
var writeCheck = new List<Socket>() { socket };
var errorCheck = new List<Socket>() { socket };
Socket.Select(null, writeCheck, errorCheck, 0);
if (writeCheck.Contains(socket))
{
//Connection opened successfully.
}
else if (errorCheck.Contains(socket))
{
//Connection refused
}
else
{
//Connect still pending
}
Yesterday I came across a strange problem which gave me quite a headache. I have a server application with a Server class, which in turn is derived from a Connection class. The Connection class provides information about the connection state and the ability to close the connection:
public bool Connected
{
get
{
if (connection != null)
{
lock (lockObject)
{
bool blockingState = connection.Blocking;
try
{
connection.Blocking = false;
connection.Send(new byte[1], 1, 0);
}
catch (SocketException e)
{
if (!e.NativeErrorCode.Equals(10035))
{
return false;
}
//is connected, but would block
}
finally
{
connection.Blocking = blockingState;
}
return connection.Connected;
}
}
return false;
}
}
public virtual void CloseConnection()
{
if (Connected)
{
lock (lockObject)
{
connection.Close();
}
}
}
The Server class is responsible for actually sending and receiving data:
private void ConnectAndPollForData()
{
    try
    {
        TcpListener listener = new TcpListener(Port);
        listener.Start();
        while (true)
        {
            connection = listener.AcceptSocket();
            string currentBuffr = string.Empty;
            const int READ_BUFFER_SIZE = 1024;
            byte[] readBuffr = new byte[READ_BUFFER_SIZE];
            while (Connected)
            {
                int bytesReceived;
                lock (lockObject)
                {
                    bytesReceived = connection.Receive(readBuffr, READ_BUFFER_SIZE, SocketFlags.None);
                }
                currentBuffr += ASCIIEncoding.ASCII.GetString(readBuffr, 0, bytesReceived);
                //do stuff
            }
        }
    }
    catch (ThreadAbortException)
    {
        Thread.ResetAbort();
    }
    finally
    {
    }
}
public void SendString(string stringToSend)
{
stringToSend += "\r\n";
if(Connected)
{
lock(lockObject)
{
connection.Send(ASCIIEncoding.UTF7.GetBytes(stringToSend));
}
}
}
There is no other explicit access to the connection object. The ConnectAndPollForData function executes in a separate thread. Whenever I ran the host in this version (I am currently using a non-thread-safe version, which causes other problems) it hung after quite a few lines received via TCP. Pausing the debugger showed me that one thread was trying to execute the code within the lock in Connected, while the other was trying to receive data within the lock in ConnectAndPollForData. This behavior seems strange to me, as I would expect it to execute the code within the first lock and then do the second. There do seem to be similar problems when using callbacks, as in Deadlocking lock() method or 'Deadlock' with only one locked object?, but the situation here is a bit different, for in my situation (I think) the code within the locks should not emit any events that themselves try to obtain a lock on the object.
Let's assume it gets the lock in the second method first. So it is holding the lock, and waiting for data. It is unclear whether this is directly receiving the data sent by the first method, or whether this is looking for a reply from an unrelated server - a reply to the message sent in the first method. But either way, I'm assuming that there won't be data incoming until the outbound message is sent.
Now consider: the outbound message can't be sent, because you are holding an exclusive lock.
So yes, you've deadlocked yourself. Basically, don't do that. There is no need to synchronize between inbound and outbound socket operations, even on the same socket. And since it makes very little sense to have concurrent readers on the same socket, or concurrent writers, I'm guessing you don't actually need those locks at all.
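To illustrate, the two hot paths can simply run without the shared lock; checking the return value of Receive also replaces the Connected probe (a sketch of the methods from the question, error handling omitted):
public void SendString(string stringToSend)
{
    stringToSend += "\r\n";
    // A send on one thread does not conflict with a receive on another
    // thread on the same socket, so no lock is needed here.
    connection.Send(ASCIIEncoding.UTF7.GetBytes(stringToSend));
}

// inside ConnectAndPollForData:
while (true)
{
    int bytesReceived = connection.Receive(readBuffr, READ_BUFFER_SIZE, SocketFlags.None);
    if (bytesReceived == 0)
        break; // the remote side closed the connection
    currentBuffr += ASCIIEncoding.ASCII.GetString(readBuffr, 0, bytesReceived);
    //do stuff
}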
I have been writing a command-line program in C# that uses multiple TCP clients that all connect to the same server. Each client resides in its own thread. At the moment I am trying to work out an effective method of spreading, say, 5 requests a second efficiently between, let's say, 4 threads.
My code currently looks like the following but I still end up with requests overlapping each other. Does anyone have any idea how to prevent these overlaps effectively?
// Max connections is 4, interval is 200
// Loop once to give tcp clients chance to connect
var tcpClients = new TcpClient[_maxConnections];
for(int i = 0; i < _maxConnections; i++)
{
tcpClients[i] = new TcpClient();
tcpClients[i].Connect(host, port);
}
// Loop again to setup tasks
for(int i = 0; i < _maxConnections; i++)
{
Task.Factory.StartNew(TcpHandler, tcpClients[i]);
// Sleep so every task starts separate from each other.
Thread.Sleep(_interval);
}
And then the TcpHandler code looks like:
public static void TcpHandler(Object o)
{
// active is already declared
while(_active)
{
var tcpClient = (TcpClient) o;
// .. do some send and receive...
Console.WriteLine("Something here..");
Thread.Sleep(_interval * _maxConnections);
}
}
So as you can see, I am sleeping to provide sufficient space between each thread's execution, yet now and then they still overlap.
How can I make these threads run in parallel without any overlap, limited to 5 times a second spread across all 4?
Or am I going about this all wrong?
Presuming each client requires a separate thread, and that only one thread may be communicating with the server at a given time (no overlap), a lock in the TcpHandler method should suffice:
// Max connections is 4, interval is 200
// Loop once to give tcp clients chance to connect
var tcpClients = new TcpClient[_maxConnections];
// dedicated lock object
static readonly object lockObject = new object();
And then in your TcpHandler method
public static void TcpHandler(Object o)
{
// active is already declared
while(_active)
{
//DO NON-SOCKET RELATED STUFF HERE
// ... code ...
//
//DO SOCKET RELATED STUFF HERE
lock(lockObject)
{
var tcpClient = (TcpClient) o;
// .. do some send and receive...
Console.WriteLine("Something here..");
Thread.Sleep(_interval * _maxConnections);
}
}
}
I am not quite sure why you are doing this, but I have used System.Timers (actually an array of timers) in Windows services and staggered the starts (intervals).
In the Elapsed event, maybe you could use a lock(myObject) { } so they don't overlap?
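A rough sketch of that idea, reusing the variable names from the question (the gate object is new):
// One timer per client; each fires every (_interval * _maxConnections) ms,
// but the starts are staggered by _interval so the ticks are spread out.
var timers = new System.Timers.Timer[_maxConnections];
var gate = new object(); // shared lock so no two ticks overlap

for (int i = 0; i < _maxConnections; i++)
{
    var tcpClient = tcpClients[i];
    var timer = new System.Timers.Timer(_interval * _maxConnections);
    timer.Elapsed += (sender, args) =>
    {
        lock (gate)
        {
            // .. do some send and receive on tcpClient ..
        }
    };
    timers[i] = timer;
    timer.Start();
    Thread.Sleep(_interval); // stagger the next timer's start
}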
Gina
I think you are using Sleep to manage connection times. Why not instead set up a "maximum connection delay", then use BeginConnect and a Timer to look after the connection?
For example:
// Set up a timer and a field to track the client we are connecting
TcpClient connectionOpening;
_connecting = true;
_connected = false;
connectionOpening = tcpClient;
_timer.Change(5000, Timeout.Infinite);
tcpClient.BeginConnect(host, port, ClientConnectCallback, tcpClient);

void ClientConnectCallback(IAsyncResult ar)
{
    _timer.Change(Timeout.Infinite, Timeout.Infinite);
    TcpClient tcp = (TcpClient)ar.AsyncState;
    try
    {
        // If we have timed out, the timer tick will already have aborted the connect
        tcp.EndConnect(ar);
        _connected = true;
        _connecting = false;
        // We are now connected... do the rest you want to do:
        // get the stream and BeginRead etc.
    }
    catch (Exception ex) // use the proper exceptions: IOException, SocketException etc.
    {
        if (!_connecting)
        {
            // We terminated the connection because our timer ticked.
        }
        else
        {
            // Some other problem that we weren't expecting
        }
    }
}

void TimerTick(object state)
{
    _connecting = false;
    _connected = false;
    connectionOpening.Close();
}