Suppose I have a Queue of Tasks, and each Task has a locking object (SyncObject) which controls access to a shared resource; the Queue can contain multiple Tasks that share the same SyncObject instance. I also have N concurrent threads that should dequeue Tasks and process them in queue order, which means acquiring the lock on each SyncObject in the order of the queue.
Code explanation:
abstract class Task
{
    public readonly Object SyncObject = new Object();
}

Queue<Task> taskQueue = new Queue<Task>();
Object queueLock = new Object();

void TakeItemThreadedMethod()
{
    Task task;
    lock (queueLock) task = taskQueue.Dequeue();

    // Between these lines is my problem:
    // another thread can dequeue the next task, which may share the same SyncObject,
    // and acquire the lock on it before this thread, even though this task was first in the queue.
    lock (task.SyncObject)
    {
        //Do some work here
    }
}
How can I start processing Tasks (acquiring the Task.SyncObject lock) that share the same SyncObject in the order they were in the queue?
It sounds like potentially your queue shouldn't contain individual tasks - but queues of tasks, where each subqueue is "all the tasks which share a sync-lock".
Your processor would therefore:
Take a subqueue off the main queue
Dequeue the first task off the subqueue and process it
When it's finished, put the subqueue back at the end of the main queue (or anywhere, actually - work out how you want the scheduling to work)
This will ensure that only one task per subqueue is ever executed at a time.
You'll probably need a map from lock to subqueue, so that anything creating work can add it to the right subqueue. You'd need to atomically work out when to remove a subqueue from the map (and not put it back on the main queue), assuming you require that functionality at all.
EDIT: As an optimization for the above, you could put the subqueue itself into whatever you're using as the shared sync lock. It could have a reference to either "the single task to next execute" or "a queue of tasks" - only creating the queue lazily. You'd then put the sync lock (which wouldn't actually need to be used as a lock any more) on the queue, and each consumer would just ask it for the next task to execute. If only a single task is available, it's returned (and the "next task" variable set to null). If there are multiple tasks available, the first is dequeued.
When a producer adds a new task, either the "first task" variable is set to the task to execute if it was previously null, or a queue is created if there wasn't a queue but there was already a task, or the task is just added to the queue if one already existed. That solves the inefficiency of unnecessary queue creation.
Again, the tricky part will be working out how to atomically throw away the shared resource lock - because you only want to do so after processing the last item, but equally you don't want to miss a task because you happened to add it at the wrong time. It shouldn't be too bad, but equally you'll need to think about it carefully.
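To make the idea concrete, here is a rough sketch of the basic (pre-optimization) scheme - my names, not code from the question; a single gate lock stands in for finer-grained synchronization, and the consumer spins where real code would wait on a signal:

// Each distinct SyncObject owns one subqueue; only whole subqueues circulate
// on the main queue, so at most one task per SyncObject is in flight at a time.
Queue<Queue<Task>> mainQueue = new Queue<Queue<Task>>();
Dictionary<object, Queue<Task>> subqueues = new Dictionary<object, Queue<Task>>();
object gate = new object();

void Produce(Task task)
{
    lock (gate)
    {
        Queue<Task> subqueue;
        if (!subqueues.TryGetValue(task.SyncObject, out subqueue))
        {
            subqueue = new Queue<Task>();
            subqueues[task.SyncObject] = subqueue;
            mainQueue.Enqueue(subqueue);   // a new subqueue becomes schedulable
        }
        subqueue.Enqueue(task);            // in-flight subqueues are re-queued by the consumer
    }
}

void Consume()
{
    while (true)
    {
        Queue<Task> subqueue;
        Task task;
        lock (gate)
        {
            if (mainQueue.Count == 0) continue;   // real code would Monitor.Wait here, not spin
            subqueue = mainQueue.Dequeue();       // this consumer now owns the subqueue
            task = subqueue.Dequeue();
        }

        // process task ...

        lock (gate)
        {
            if (subqueue.Count > 0) mainQueue.Enqueue(subqueue);  // reschedule remaining work
            else subqueues.Remove(task.SyncObject);               // atomically retire the subqueue
        }
    }
}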
How about this approach:
use a list instead of a queue
have each worker thread loop in order through the list until it can find an "unlocked" task
Something like (untested):
abstract class Task
{
    public readonly Object SyncObject = new Object();
}

List<Task> taskList = new List<Task>();

void TakeItemThreadedMethod()
{
    Task task = null;
    bool found = false;
    try
    {
        // loop until we find a task whose SyncObject is free
        while (!found)
        {
            lock (taskList)
            {
                for (int i = 0; i < taskList.Count; i++)
                {
                    object syncObj = taskList[i].SyncObject;
                    if (found = Monitor.TryEnter(syncObj))
                    {
                        for (int x = 0; x < taskList.Count; x++)
                        {
                            if (Object.ReferenceEquals(
                                syncObj, taskList[x].SyncObject))
                            {
                                task = taskList[x];
                                taskList.RemoveAt(x);
                                break;
                            }
                        }
                        break;
                    }
                }
            }
        }
        // process the task...
        DoWork(task);
    }
    finally
    {
        if (found) Monitor.Exit(task.SyncObject);
    }
}

void QueueTask(Task task)
{
    lock (taskList)
    {
        taskList.Add(task);
    }
}
I used the QueuedLock class suggested by Matthew Brindley, with a slight modification: I split the Enter function into TakeTicket and a blocking Enter.
Now I can call TakeTicket inside the shared queueLock without blocking the whole queue.
Modified code:
abstract class Task
{
    public readonly QueuedLock SyncObject = new QueuedLock();
}

Queue<Task> taskQueue = new Queue<Task>();
Object queueLock = new Object();

void TakeItemThreadedMethod()
{
    Task task;
    int ticket;
    lock (queueLock)
    {
        task = taskQueue.Dequeue();
        ticket = task.SyncObject.TakeTicket();
    }
    task.SyncObject.Enter(ticket);
    //Do some work here
    task.SyncObject.Exit();
}
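For reference, the QueuedLock itself isn't shown in the question. Based on the description (Matthew Brindley's ticket lock, with Enter split into a non-blocking TakeTicket and a blocking Enter), it would look roughly like this:

class QueuedLock
{
    private readonly object innerLock = new object();
    private int ticketsCount = 0;   // last ticket handed out
    private int ticketToRide = 1;   // ticket currently allowed to enter

    public int TakeTicket()
    {
        // Non-blocking: just reserve a place in line.
        return Interlocked.Increment(ref ticketsCount);
    }

    public void Enter(int myTicket)
    {
        Monitor.Enter(innerLock);
        while (myTicket != ticketToRide)
        {
            Monitor.Wait(innerLock);
        }
        // Returns while still holding innerLock; Exit releases it.
    }

    public void Exit()
    {
        ticketToRide++;
        Monitor.PulseAll(innerLock);
        Monitor.Exit(innerLock);
    }
}

Enter returns while still holding the inner monitor, so the protected work runs under it until Exit advances the ticket and pulses the waiters.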
I need a piece of code that can be executed by only one thread at a time, based on a parameter key:
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    var semaphore = Semaphores.GetOrAdd(valueKey, s => new SemaphoreSlim(1, 1));
    try
    {
        await semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        semaphore.Release(); // Exception here - System.ObjectDisposedException
        if (semaphore.CurrentCount > 0 && Semaphores.TryRemove(valueKey, out semaphore))
        {
            semaphore?.Dispose();
        }
    }
}
From time to time I get this error:
The semaphore has been disposed. : System.ObjectDisposedException: The semaphore has been disposed.
at System.Threading.SemaphoreSlim.CheckDispose()
at System.Threading.SemaphoreSlim.Release(Int32 releaseCount)
at Project.GetValueWithBlockAsync[TModel](String valueKey, Func`1 valueAction)
Every case I can imagine here looks thread-safe to me. Please help - what case did I miss?
You have a thread race here, where another task is trying to acquire the same semaphore, and acquires it when you Release - i.e. another thread is awaiting the semaphore.WaitAsync(). The check against CurrentCount is a race condition, and it could go either way depending on timing. The check for TryRemove is irrelevant, as the competing thread already got the semaphore out - it was, after all, awaiting the WaitAsync().
As discussed in the comments, you have a couple of race conditions here.
Thread 1 holds the lock and Thread 2 is waiting on WaitAsync(). Thread 1 releases the lock, and then checks semaphore.CurrentCount, before Thread 2 is able to acquire it.
Thread 1 holds the lock, releases it, and checks semaphore.CurrentCount, which passes. Thread 2 enters GetValueWithBlockAsync, calls Semaphores.GetOrAdd and fetches the semaphore. Thread 1 then calls Semaphores.TryRemove and disposes the semaphore.
You really need locking around the decision to remove an entry from Semaphores -- there's no way around this. You also don't have a way of tracking whether any threads have fetched a semaphore from Semaphores (and are either currently waiting on it, or haven't yet got to that point).
One way is to do something like this: have a lock which is shared between everyone, but which is only needed when fetching/creating a semaphore and deciding whether to dispose it. We manually keep track of how many threads currently have an interest in a particular semaphore. When a thread has released the semaphore, it acquires the shared lock to check whether anyone else currently has an interest in that semaphore, and disposes it only if no one else does.
private static readonly object semaphoresLock = new();
private static readonly Dictionary<string, State> semaphores = new();

private async Task<TModel> GetValueWithBlockAsync<TModel>(string valueKey, Func<Task<TModel>> valueAction)
{
    State state;
    lock (semaphoresLock)
    {
        if (!semaphores.TryGetValue(valueKey, out state))
        {
            state = new();
            semaphores[valueKey] = state;
        }
        state.Count++;
    }

    try
    {
        await state.Semaphore.WaitAsync();
        return await valueAction();
    }
    finally
    {
        state.Semaphore.Release();
        lock (semaphoresLock)
        {
            state.Count--;
            if (state.Count == 0)
            {
                semaphores.Remove(valueKey);
                state.Semaphore.Dispose();
            }
        }
    }
}

private class State
{
    public int Count { get; set; }
    public SemaphoreSlim Semaphore { get; } = new(1, 1);
}
The other option, of course, is to let Semaphores grow. Maybe you have a periodic operation to go through and clear out anything which isn't being used, but this will of course need to be protected to ensure that a thread doesn't suddenly become interested in a semaphore which is being cleared up.
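A sketch of what that protection could look like if you keep the State.Count bookkeeping from above but drop the removal from the finally block (the sweep method itself is hypothetical):

private static void CleanupUnusedSemaphores()
{
    lock (semaphoresLock)
    {
        // Count only changes under semaphoresLock, so an entry with Count == 0
        // cannot gain a new interested thread while we hold the lock.
        var unused = semaphores.Where(kvp => kvp.Value.Count == 0).ToList();
        foreach (var kvp in unused)
        {
            kvp.Value.Semaphore.Dispose();
            semaphores.Remove(kvp.Key);
        }
    }
}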
I have a MessagesManager thread to which different threads may send messages; this MessagesManager thread is responsible for publishing those messages inside SendMessageToTcpIP() (the entry point of the MessagesManager thread).
class MessagesManager : IMessageNotifier
{
    private readonly AutoResetEvent _waitTillMessageQueueEmptyARE = new AutoResetEvent(false);
    private ConcurrentQueue<string> MessagesQueue = new ConcurrentQueue<string>();

    public void PublishMessage(string Message)
    {
        MessagesQueue.Enqueue(Message);
        _waitTillMessageQueueEmptyARE.Set();
    }

    public void SendMessageToTcpIP()
    {
        // keep waiting till a new message comes
        while (MessagesQueue.Count() == 0)
        {
            _waitTillMessageQueueEmptyARE.WaitOne();
        }

        // Copy the concurrent queue into a local queue, dequeuing each item
        // as it is inserted into the local queue
        Queue<string> localMessagesQueue = new Queue<string>();
        while (!MessagesQueue.IsEmpty)
        {
            string message;
            bool isRemoved = MessagesQueue.TryDequeue(out message);
            if (isRemoved)
                localMessagesQueue.Enqueue(message);
        }

        // Use the local queue for further processing
        while (localMessagesQueue.Count() != 0)
        {
            TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
            Thread.Sleep(2000);
        }
    }
}
The different threads (3-4) send their messages by calling PublishMessage(string Message) (using the same MessagesManager object). Once a message comes in, I push it into the concurrent queue and notify SendMessageToTcpIP() by calling _waitTillMessageQueueEmptyARE.Set(). Inside SendMessageToTcpIP(), I copy the messages from the concurrent queue into a local queue and then publish them one by one.
QUESTIONS: Is it thread-safe to do enqueuing and dequeuing this way? Could there be strange effects because of it?
While this is probably thread-safe, there are built-in classes in .NET to help with the "many producers, one consumer" pattern, like BlockingCollection. You can rewrite your class like this:
class MessagesManager : IDisposable {

    // note that your ConcurrentQueue is still in play, passed to the constructor
    private readonly BlockingCollection<string> MessagesQueue = new BlockingCollection<string>(new ConcurrentQueue<string>());

    public MessagesManager() {
        // start consumer thread here
        new Thread(SendLoop) {
            IsBackground = true
        }.Start();
    }

    public void PublishMessage(string Message) {
        // no need to notify here, will be done for you
        MessagesQueue.Add(Message);
    }

    private void SendLoop() {
        // this blocks until new items are available
        foreach (var item in MessagesQueue.GetConsumingEnumerable()) {
            // ensure that you handle exceptions here, or the whole thing will break on exception
            TcpIpMessageSenderClient.ConnectAndSendMessage(item.PadRight(80, ' '));
            Thread.Sleep(2000); // only if you are sure this is required
        }
    }

    public void Dispose() {
        // this will "complete" GetConsumingEnumerable, so your thread will complete
        MessagesQueue.CompleteAdding();
        MessagesQueue.Dispose();
    }
}
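For completeness, a minimal usage sketch with the names above:

var manager = new MessagesManager();
manager.PublishMessage("hello");  // safe to call from any producer thread
// ... on shutdown:
manager.Dispose();                // completes GetConsumingEnumerable

One caveat: Dispose here disposes the collection immediately after CompleteAdding, while the consumer thread may still be draining it; a stricter version would keep a reference to the consumer thread and Join it before disposing.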
.NET already provides ActionBlock<T>, which allows posting messages to a buffer and processing them asynchronously. By default, only one message is processed at a time.
Your code could be rewritten as:
//In an initialization function
ActionBlock<string> _hmiAgent = new ActionBlock<string>(async msg =>
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg.PadRight(80, ' '));
    await Task.Delay(2000);
});
//In some other thread ...
foreach ( ....)
{
    _hmiAgent.Post(someMessage);
}

// When the application closes
_hmiAgent.Complete();
await _hmiAgent.Completion;
ActionBlock offers many benefits - you can specify a limit to the number of items it can accept in a buffer and specify that multiple messages can be processed in parallel. You can also combine multiple blocks in a processing pipeline. In a desktop application, a message can be posted to a pipeline in response to an event, get processed by separate blocks and results posted to a final block that updates the UI.
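Those limits are configured through ExecutionDataflowBlockOptions; for example (the values here are illustrative):

var options = new ExecutionDataflowBlockOptions
{
    BoundedCapacity = 100,        // Post returns false when full; await SendAsync to wait instead
    MaxDegreeOfParallelism = 4    // process up to 4 messages concurrently
};
var _hmiAgent = new ActionBlock<string>(msg => { /* ... */ }, options);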
Padding, for example, could be performed by an intermediary TransformBlock<TIn,TOut>. This transformation is trivial and the cost of using the block is greater than that of the method, but it's just an illustration:
//In an initialization function
TransformBlock<string, string> _hmiAgent = new TransformBlock<string, string>(
    msg => msg.PadRight(80, ' '));

ActionBlock<string> _tcpBlock = new ActionBlock<string>(async msg =>
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(msg);
    await Task.Delay(2000);
});

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
_hmiAgent.LinkTo(_tcpBlock, linkOptions);
The posting code doesn't change at all:
_hmiAgent.Post(someMessage);
When the application terminates, we need to wait for the _tcpBlock to complete:
_hmiAgent.Complete();
await _tcpBlock.Completion;
Visual Studio 2015+ itself uses TPL Dataflow for such scenarios.
Bar Arnon provides a better example in TPL Dataflow Is The Best Library You're Not Using, which shows how both synchronous and asynchronous methods can be used in a block.
The code is thread-safe since both ConcurrentQueue and AutoResetEvent are thread-safe. Your strings are only ever read, never written to, so this code is thread-safe.
However, you have to make sure you call SendMessageToTcpIP in some sort of loop.
Otherwise, you have a dangerous race condition - some messages may get lost:
while (!MessagesQueue.IsEmpty)
{
    string message;
    bool isRemoved = MessagesQueue.TryDequeue(out message);
    if (isRemoved)
        localMessagesQueue.Enqueue(message);
}

//<<--- what happens if another thread enqueues a message here?

while (localMessagesQueue.Count() != 0)
{
    TcpIpMessageSenderClient.ConnectAndSendMessage(localMessagesQueue.Dequeue().PadRight(80, ' '));
    Thread.Sleep(2000);
}
Other than that, AutoResetEvent is an extremely heavy object: it uses a kernel object to synchronize threads, so every call is a system call, which may be costly. Consider using a user-mode synchronization object instead (doesn't .NET provide some sort of condition variable?).
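To answer that parenthetical: Monitor.Wait/Pulse is the closest thing .NET has to a condition variable. A sketch of the same producer/consumer signal built on it - a plain Queue suffices because every access happens under the lock (names mirror the question's):

private readonly object gate = new object();
private readonly Queue<string> messages = new Queue<string>();

public void PublishMessage(string message)
{
    lock (gate)
    {
        messages.Enqueue(message);
        Monitor.Pulse(gate);          // wake one waiting consumer
    }
}

public void SendMessageToTcpIP()
{
    while (true)
    {
        string message;
        lock (gate)
        {
            while (messages.Count == 0)
                Monitor.Wait(gate);   // releases gate while waiting, reacquires on Pulse
            message = messages.Dequeue();
        }
        TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
    }
}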
This is a refactored code snippet showing how I would implement this functionality:
class MessagesManager {
    private readonly AutoResetEvent messagesAvailableSignal = new AutoResetEvent(false);
    private readonly ConcurrentQueue<string> messageQueue = new ConcurrentQueue<string>();

    public void PublishMessage(string Message) {
        messageQueue.Enqueue(Message);
        messagesAvailableSignal.Set();
    }

    public void SendMessageToTcpIP() {
        while (true) {
            messagesAvailableSignal.WaitOne();
            while (!messageQueue.IsEmpty) {
                string message;
                if (messageQueue.TryDequeue(out message)) {
                    TcpIpMessageSenderClient.ConnectAndSendMessage(message.PadRight(80, ' '));
                }
            }
        }
    }
}
Points to note here:
This drains the queue completely: if there is at least one message, it will process all of them
The 2000 ms Thread.Sleep is removed
I need to implement a sort of task buffer. Basic requirements are:
Process tasks in a single background thread
Receive tasks from multiple threads
Process ALL received tasks, i.e. make sure the buffer is drained of buffered tasks after a stop signal is received
Order of tasks received per thread must be maintained
I was thinking of implementing it using a Queue like below. I would appreciate feedback on the implementation. Are there any brighter ideas for implementing such a thing?
public class TestBuffer
{
    private readonly object queueLock = new object();
    private Queue<Task> queue = new Queue<Task>();
    private bool running = false;

    public TestBuffer()
    {
    }

    public void start()
    {
        Thread t = new Thread(new ThreadStart(run));
        t.Start();
    }

    private void run()
    {
        running = true;
        bool run = true;
        while (run)
        {
            Task task = null;
            // Lock queue before doing anything
            lock (queueLock)
            {
                // If the queue is currently empty and it is still running
                // we need to wait until we're told something changed
                if (queue.Count == 0 && running)
                {
                    Monitor.Wait(queueLock);
                }

                // Check there is something in the queue
                // Note - there might not be anything in the queue if we were
                // waiting for something to change and the queue was stopped
                if (queue.Count > 0)
                {
                    task = queue.Dequeue();
                }
            }

            // If something was dequeued, handle it
            if (task != null)
            {
                handle(task);
            }

            // Lock the queue again and check whether we need to run again
            // Note - Make sure we drain the queue even if we are told to stop
            // before it is empty
            lock (queueLock)
            {
                run = queue.Count > 0 || running;
            }
        }
    }

    public void enqueue(Task toEnqueue)
    {
        lock (queueLock)
        {
            queue.Enqueue(toEnqueue);
            Monitor.PulseAll(queueLock);
        }
    }

    public void stop()
    {
        lock (queueLock)
        {
            running = false;
            Monitor.PulseAll(queueLock);
        }
    }

    public void handle(Task dequeued)
    {
        dequeued.execute();
    }
}
You can actually handle this with the out-of-the-box BlockingCollection.
It is designed to have 1 or more producers, and 1 or more consumers. In your case, you would have multiple producers and one consumer.
When you receive a stop signal, have that signal handler
Signal producer threads to stop
Call CompleteAdding on the BlockingCollection instance
The consumer thread will continue to run until all queued items are removed and processed, then it will encounter the condition that the BlockingCollection is complete. When the thread encounters that condition, it just exits.
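A sketch of how that maps onto the question's API - Task here is the question's own class with an execute() method, not System.Threading.Tasks.Task:

public class BlockingBuffer
{
    private readonly BlockingCollection<Task> queue = new BlockingCollection<Task>();
    private Thread worker;

    public void start()
    {
        worker = new Thread(() =>
        {
            // Blocks while empty; the loop ends once CompleteAdding has been
            // called and every remaining item has been drained.
            foreach (Task task in queue.GetConsumingEnumerable())
                task.execute();
        });
        worker.Start();
    }

    public void enqueue(Task task)
    {
        queue.Add(task);  // safe from multiple producer threads
    }

    public void stop()
    {
        queue.CompleteAdding();  // stop signal: reject new items, drain the rest
        worker.Join();           // block until the buffer is fully drained
    }
}

Per-producer ordering is preserved because Add feeds the underlying ConcurrentQueue in FIFO order.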
You should think about ConcurrentQueue, which is, in fact, FIFO. If it's not suitable, try some of its relatives in Thread-Safe Collections. By using these you can avoid some risks.
I suggest you take a look at TPL DataFlow. BufferBlock is what you're looking for, but it offers so much more.
Look at my lightweight implementation of a thread-safe FIFO queue; it's a non-blocking synchronisation tool that uses the thread pool - better than creating your own threads in most cases, and better than blocking sync tools such as locks and mutexes. https://github.com/Gentlee/SerialQueue
Usage:
var queue = new SerialQueue();
var result = await queue.Enqueue(() => /* code to synchronize */);
You could use Rx on .NET 3.5 for this. It might never have come out of RC, but I believe it is stable* and in use by many production systems. Even if you don't need Subject, the Rx release for .NET 3.5 provides primitives (like the concurrent collections) that didn't ship with the .NET Framework until 4.0.
Alternative to Rx (Reactive Extensions) for .net 3.5
* - Nit picker's corner: Except for maybe advanced time windowing, which is out of scope, but buffers (by count and time), ordering, and schedulers are all stable.
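For illustration only, a rough Rx sketch of the buffer (namespaces and exact APIs shifted between Rx releases, so treat the details as assumptions): an EventLoopScheduler provides the single background thread, and ObserveOn preserves arrival order.

var tasks = new Subject<Task>();
// Subject.Synchronize serializes OnNext calls coming from multiple producer threads.
var input = Subject.Synchronize(tasks);

var subscription = input
    .ObserveOn(new EventLoopScheduler())  // one dedicated thread, FIFO order
    .Subscribe(task => task.execute());

input.OnNext(someTask);   // from any producer thread
input.OnCompleted();      // stop signal; already-queued tasks still run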
Super simple question, but I just wanted some clarification. I want to be able to restart a thread using AutoResetEvent, so I call the following sequence of methods on my AutoResetEvent.
setupEvent.Reset();
setupEvent.Set();
I know it's really obvious, but MSDN doesn't state in their documentation that the Reset method restarts the thread, just that it sets the state of the event to non-signaled.
UPDATE:
Yes, the other thread is waiting at WaitOne(). I'm assuming that when it gets signaled it will resume at the exact point it left off, which is what I don't want - I want it to restart from the beginning. The following example from this valuable resource illustrates this:
static EventWaitHandle _ready = new AutoResetEvent(false);
static EventWaitHandle _go = new AutoResetEvent(false);
static readonly object _locker = new object();
static string _message;

static void Main()
{
    new Thread(Work).Start();

    _ready.WaitOne();                 // First wait until worker is ready
    lock (_locker) _message = "ooo";
    _go.Set();                        // Tell worker to go

    _ready.WaitOne();
    lock (_locker) _message = "ahhh"; // Give the worker another message
    _go.Set();

    _ready.WaitOne();
    lock (_locker) _message = null;   // Signal the worker to exit
    _go.Set();
}

static void Work()
{
    while (true)
    {
        _ready.Set();                 // Indicate that we're ready
        _go.WaitOne();                // Wait to be kicked off...
        lock (_locker)
        {
            if (_message == null) return; // Gracefully exit
            Console.WriteLine(_message);
        }
    }
}
If I understand this example correctly, notice how the Main thread will resume where it left off when the Work thread signals it, but in my case, I would want the Main thread to restart from the beginning.
UPDATE 2:
@Jaroslav Jandek - It's quite involved, but basically I have a CopyDetection thread that runs a FileSystemWatcher to monitor a folder for any new files that are moved or copied into it. My second thread is responsible for replicating the structure of that particular folder into another folder. So my CopyDetection thread has to block that thread from working while a copy/move operation is in progress. When the operation completes, the CopyDetection thread restarts the second thread so it can re-duplicate the folder structure with the newly added files.
UPDATE 3:
@SwDevMan81 - I actually didn't think about that, and it would work save for one caveat. In my program, the source folder being duplicated is emptied once the duplication process is complete. That's why I have to block and restart the second thread when new items are added to the source folder, so it has a chance to re-parse the folder's new structure properly.
To address this, I'm thinking of adding a flag that signals that it is safe to delete the source folder's contents. I guess I could put the delete operation on its own Cleanup thread.
@Jaroslav Jandek - My apologies, I thought it would be a simple matter to restart a thread on a whim. To answer your questions: I'm not deleting the source folder, only its contents; it's a requirement from my employer that unfortunately I cannot change. Files in the source folder are getting moved, but not all of them - only files that are properly validated by another process; the rest must be purged, i.e. the source folder is emptied. Also, the reason for replicating the source folder structure is that some of the files are contained within a very strict sub-folder hierarchy that must be preserved in the destination directory. Again, sorry for making it complicated. All of these mechanisms are in place, have been tested, and are working, which is why I didn't feel the need to elaborate on them. I only need to detect when new files are added so I may properly halt the other processes while the copy/move operation is in progress; then I can safely replicate the source folder structure and resume processing.
So thread 1 monitors and thread 2 replicates while other processes modify the monitored files.
Concurrent file access aside, you can't continue replicating after a change. So a successful replication only occurs when there is a long enough delay between modifications. Replication cannot be stopped immediately, since you replicate in chunks.
So the result of monitoring should be a command (file copy, file delete, file move, etc.).
The result of a successful replication should be an execution of a command.
Considering multiple operations can occur, you need a queue (or queued dictionary - to only perform 1 command on a file) of commands.
// T1:
somethingChanged(string path, CT commandType)
{
    commandQueue.AddCommand(path, commandType);
}

// T2:
while (whatever)
{
    var command = commandQueue.Peek();
    if (command.Execute()) commandQueue.Remove();
    else // operation failed, do what you like.
}
Now you may ask how to create a thread-safe queue, but that probably belongs in another question (there are many implementations on the web).
EDIT (queue-less version with whole-dir replication - can be combined with the queue approach):
If you do not need multiple operations (e.g. you always replicate the whole directory) and expect the replication to always either finish or fail and cancel, you can do:
private volatile bool shouldStop = true;

// T1:
directoryChanged()
{
    // StopReplicating
    shouldStop = true;
    workerReady.WaitOne(); // Wait for the worker to stop replicating.

    // StartReplicating
    shouldStop = false;
    replicationStarter.Set();
}

// T2:
while (whatever)
{
    replicationStarter.WaitOne();
    ... // prepare; check shouldStop in a few places so the worker does not do unnecessary work.
    if (!shouldStop)
    {
        foreach (var file in files)
        {
            if (shouldStop) break;
            // Copy the file or whatever.
        }
    }
    workerReady.Set();
}
I think this example clarifies (to me anyway) how reset events work:
var resetEvent = new ManualResetEvent(false);
var myclass = new MyAsyncClass();
myclass.MethodFinished += delegate
{
    resetEvent.Set();
};
myclass.StartAsyncMethod();
resetEvent.WaitOne(); // We want to wait until the event fires to go on
Assume that MyAsyncClass runs the method on another thread and fires the event when complete.
This basically turns the asynchronous "StartAsyncMethod" into a synchronous one. Many times I find a real-life example more useful.
The main difference between AutoResetEvent and ManualResetEvent is that AutoResetEvent doesn't require you to call Reset(): it automatically sets the state back to non-signaled after releasing a waiting thread. The next call to WaitOne() blocks whenever the state is non-signaled, whether that happened automatically or via Reset().
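A tiny illustration of that difference:

var auto = new AutoResetEvent(true);
Console.WriteLine(auto.WaitOne(0));   // True  - consumes the signal
Console.WriteLine(auto.WaitOne(0));   // False - state was reset automatically

var manual = new ManualResetEvent(true);
Console.WriteLine(manual.WaitOne(0)); // True
Console.WriteLine(manual.WaitOne(0)); // True  - stays signaled until Reset()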
You just need to make it loop like the other Thread does. Is this what you are looking for?
class Program
{
    static AutoResetEvent _ready = new AutoResetEvent(false);
    static AutoResetEvent _go = new AutoResetEvent(false);
    static Object _locker = new Object();
    static string _message = "Start";
    static AutoResetEvent _exitClient = new AutoResetEvent(false);
    static AutoResetEvent _exitWork = new AutoResetEvent(false);

    static void Main()
    {
        new Thread(Work).Start();
        new Thread(Client).Start();
        Thread.Sleep(3000); // Run for 3 seconds then finish up
        _exitClient.Set();
        _exitWork.Set();
        _ready.Set(); // Make sure we're not still blocking
        _go.Set();
    }

    static void Client()
    {
        List<string> messages = new List<string>() { "ooo", "ahhh", null };
        int i = 0;
        while (!_exitClient.WaitOne(0)) // Gracefully exit if triggered
        {
            _ready.WaitOne();  // First wait until worker is ready
            lock (_locker) _message = messages[i++];
            _go.Set();         // Tell worker to go
            if (i == 3) { i = 0; }
        }
    }

    static void Work()
    {
        while (!_exitWork.WaitOne(0)) // Gracefully exit if triggered
        {
            _ready.Set();   // Indicate that we're ready
            _go.WaitOne();  // Wait to be kicked off...
            lock (_locker)
            {
                if (_message != null)
                {
                    Console.WriteLine(_message);
                }
            }
        }
    }
}
I'm working with the new Parallel.For, which creates multiple threads to perform the same operation.
If one of the threads fails, it means that I'm working "too fast" and I need to put all the threads to rest for a few seconds.
Is there a way to perform something like Thread.Sleep - only to do the same on all threads at once?
This is a direct answer to the question, except for the Parallel.For bit.
It really is a horrible pattern; you should probably be using a proper synchronization mechanism, and get the worker threads to, without preemption, occasionally check if they need to 'back off.'
In addition, this uses Thread.Suspend and Thread.Resume, which are both deprecated, and with good reason (from Thread.Suspend):
"Do not use the Suspend and Resume methods to synchronize the activities of threads. You have no way of knowing what code a thread is executing when you suspend it. If you suspend a thread while it holds locks during a security permission evaluation, other threads in the AppDomain might be blocked. If you suspend a thread while it is executing a class constructor, other threads in the AppDomain that attempt to use that class are blocked. Deadlocks can occur very easily."
(Untested)
public class Worker
{
    private readonly Thread[] _threads;
    private readonly object _locker = new object();
    private readonly TimeSpan _tooFastSuspensionSpan;
    private DateTime _lastSuspensionTime;

    public Worker(int numThreads, TimeSpan tooFastSuspensionSpan)
    {
        _tooFastSuspensionSpan = tooFastSuspensionSpan;
        _threads = Enumerable.Repeat(new ThreadStart(DoWork), numThreads)
                             .Select(ts => new Thread(ts))
                             .ToArray();
    }

    public void Run()
    {
        foreach (var thread in _threads)
        {
            thread.Start();
        }
    }

    private void DoWork()
    {
        while (!IsWorkComplete())
        {
            try
            {
                // Do work here
            }
            catch (TooFastException)
            {
                SuspendAll();
            }
        }
    }

    private void SuspendAll()
    {
        lock (_locker)
        {
            // We don't want N near-simultaneous failures causing a sleep duration of N * _tooFastSuspensionSpan.
            // 1 second is arbitrary. We can't be deterministic about it since we are forcefully suspending threads.
            var now = DateTime.Now;
            if (now.Subtract(_lastSuspensionTime) < _tooFastSuspensionSpan + TimeSpan.FromSeconds(1))
                return;

            _lastSuspensionTime = now;

            var otherThreads = _threads.Where(t => t.ManagedThreadId != Thread.CurrentThread.ManagedThreadId).ToArray();

            foreach (var otherThread in otherThreads)
                otherThread.Suspend();

            Thread.Sleep(_tooFastSuspensionSpan);

            foreach (var otherThread in otherThreads)
                otherThread.Resume();
        }
    }
}
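For contrast, a sketch of the cooperative approach suggested at the top of this answer, using ManualResetEventSlim (.NET 4+) so workers pause themselves at a safe point instead of being forcefully suspended - field names mirror the Worker class above, and the same debounce logic from SuspendAll could be applied:

private readonly ManualResetEventSlim _gate = new ManualResetEventSlim(true);

private void DoWork()
{
    while (!IsWorkComplete())
    {
        _gate.Wait();   // safe point: blocks here while a back-off is in effect
        try
        {
            // Do work here
        }
        catch (TooFastException)
        {
            BackOff();
        }
    }
}

private void BackOff()
{
    _gate.Reset();                         // all workers pause at their next Wait()
    Thread.Sleep(_tooFastSuspensionSpan);  // rest for the configured duration
    _gate.Set();                           // resume everyone
}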
You need to create an inventory of your worker threads, and then perhaps you can use the Thread.Suspend and Resume methods. Mind you, using Suspend can be dangerous (for example, a thread may have acquired a lock before suspending). Suspend/Resume have been marked obsolete due to such issues.