I found this helper class:
public sealed class QueuedLock
{
    private object innerLock;
    private volatile int ticketsCount = 0;
    private volatile int ticketToRide = 1;

    public QueuedLock()
    {
        innerLock = new Object();
    }

    public void Enter()
    {
        int myTicket = Interlocked.Increment(ref ticketsCount);
        Monitor.Enter(innerLock);
        while (true)
        {
            if (myTicket == ticketToRide)
            {
                return;
            }
            else
            {
                Monitor.Wait(innerLock);
            }
        }
    }

    public void Exit()
    {
        Interlocked.Increment(ref ticketToRide);
        Monitor.PulseAll(innerLock);
        Monitor.Exit(innerLock);
    }
}
Is there a synchronization class that guarantees FIFO order in C#?
It works nicely for me, but I've just read some comments on this topic about the use of volatile variables, and I'm not sure whether there is an issue here or not. I understand the code, and it does the right job in my application, but I admit I can't rule out a negative impact of this approach in the future.
I haven't seen many ways of doing this so it would be nice to have a complete and safe version ready to be used in any situation.
So basically, the question is, can this approach be improved? Does it have any subtle issues?
For a multithreaded FIFO queue, .NET already provides ConcurrentQueue<T>; I would recommend using it instead of rolling your own low-level locks.
MSDN: ConcurrentQueue
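For illustration, a minimal ConcurrentQueue<T> sketch (the values are arbitrary): Enqueue and TryDequeue are thread-safe and preserve FIFO order without any explicit locking.

```csharp
using System;
using System.Collections.Concurrent;

class ConcurrentQueueDemo
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();

        // Any number of threads may call Enqueue/TryDequeue concurrently.
        queue.Enqueue(1);
        queue.Enqueue(2);

        // TryDequeue returns false instead of blocking when the queue is empty.
        if (queue.TryDequeue(out int first))
            Console.WriteLine(first); // prints 1: items leave in arrival order
    }
}
```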
You can also review the TPL Dataflow BufferBlock class for this purpose:
// Hand-off through a BufferBlock<T>
private static BufferBlock<int> m_buffer = new BufferBlock<int>();

// Producer
private static void Producer()
{
    while (true)
    {
        int item = Produce();
        // Store the message in the FIFO queue
        m_buffer.Post(item);
    }
}

// Consumer
private static async Task Consumer()
{
    while (true)
    {
        int item = await m_buffer.ReceiveAsync();
        Process(item);
    }
}

// Main
public static void Main()
{
    var p = Task.Factory.StartNew(Producer);
    var c = Consumer();
    Task.WaitAll(p, c);
}
Related
I have a wrapper class around serial port which looks something like this:
static class HASPCLass
{
    private static SerialPort m_port;
    private static bool m_initialized;
    private static int m_baudRate;
    static readonly object _syncObject = new object();

    public DoInitialization(int baudRate /*also could be other params*/)
    {
        lock (_syncObject)
        {
            if (!m_initialized)
            {
                Initialize(baudRate);
            }
        }
    }

    private Initialize(int baudrate /*also could have other params*/)
    {
        m_port.open(..);
        m_baudRate = baudRate;
        m_initialized = true;
    }

    private Uninitialize()
    {
        m_port.close();
        m_initialized = false;
    }

    public void Read(byte[] buff)
    {
        lock (_syncObject)
        {
            //Other custom read stuff
            m_port.Read(buff);
        }
    }

    public void Write(byte[] buff)
    {
        lock (_syncObject)
        {
            //Other write related code
            m_port.Write(buff);
        }
    }

    public void Close()
    {
        lock (_syncObject)
        {
            if (m_initialized)
            {
                Uninitialize();
            }
        }
    }
}
I tried to make this class thread safe: someone initializes it, reads and writes may then happen from other threads, and in the end someone calls Close.
Now imagine I have two additional static methods in another class which do something like this:
public static void function1()
{
    HASPClass.Read(...);
    // Some other code
    HASPClass.Write(...);
}

public static void function2()
{
    HASPClass.Read(...);
    // Some other code
    HASPClass.Write(...);
}
For overall thread safety I also enclosed these functions in locks:
public static void function1()
{
    lock (otherlock1)
    {
        HASPClass.Read(...);
        // Some other code
        HASPClass.Write(...);
    }
}

public static void function2()
{
    lock (otherlock1)
    {
        HASPClass.Read(...);
        // Some other code
        HASPClass.Write(...);
    }
}
This is because the order in which reads and writes are called might be relevant for the HASP.
My question is: is now my final approach (of using function1 and function2) correct/thread safe?
Since you essentially use a singleton, you are fine without additional locks, as long as the functions do not use resources in // Some other code that themselves need locking.
The class itself is thread safe because it guards all uses of its fields with the same lock; that is as tight as it gets. But make sure not to introduce deadlocks in the code behind the comments.
In general you should also make sure no one closes your object before all threads are done with it.
Besides, the code example is more or less inconsistent: the methods are not declared static and have no return types, and so on.
Edit: From the higher perspective of needing to issue commands in a particular order, I correct my statement: yes, you do need the outer lock.
But beware of deadlocks.
A more explicit way this can go wrong (though I don't see it happening in your example code):
Suppose two threads can hold the lock, and your device always sends you 1, except that after you transmit 2 to it, it sends you 2.
Thread 1 first tries to read a 1 and then a 2 from the device without releasing the lock.
Now suppose the action taken after receiving 1 somehow starts Thread 2, which wants to transmit 2 to the device. It cannot, because Thread 1 still holds the lock; and Thread 1 will wait forever, because Thread 2 can never transmit.
The most common case for this is GUI events used with Invoke (which leads to another thread executing code).
Imagine I have two additional static methods from other class ... To ensure thread safety do I have to put additional locks ... ?
No.
A lock does not care about the calling method or the stack trace - it only concerns the current thread. Since you already put locks in the critical sections, there is no point in putting higher level locks in your case.
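A tiny sketch of that point: Monitor locks in C# are reentrant per thread, so it makes no difference which method on the call stack originally took the lock (the class and method names here are illustrative):

```csharp
using System;

class ReentrancyDemo
{
    static readonly object sync = new object();

    static void Outer()
    {
        lock (sync)
        {
            Inner(); // the same thread re-enters the same lock without deadlocking
        }
    }

    static void Inner()
    {
        lock (sync)
        {
            Console.WriteLine("still fine"); // prints "still fine"
        }
    }

    static void Main() => Outer();
}
```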
You don't want a thread-safe class, you want a message queue.
From the comments I see your concern is that reads and writes get interleaved: one thread writes, and another issues a read before the writer thread has read its response.
In that scenario the best you can do is create a queue of operations. When a write must be followed by a read, you enqueue a combined Read+Write operation in a single call; the sequence is then guaranteed to follow the correct order, and you only need to lock the queue.
Something like this:
Queue:
public class SerialQueue
{
    SerialPort sp;
    ManualResetEvent processQueue = new ManualResetEvent(false);
    Queue<QueueCommand> queue = new Queue<QueueCommand>();

    public event EventHandler<ReadEventArgs> ReadSuccess;
    public event EventHandler<IdEventArgs> WriteSuccess;

    public SerialQueue()
    {
        ThreadPool.QueueUserWorkItem(ProcessQueueThread);
        sp = new SerialPort(); // Initialize it according to your needs.
        sp.Open();
    }

    void ProcessQueueThread(object state)
    {
        while (true)
        {
            processQueue.WaitOne();
            QueueCommand cmd;
            while (true)
            {
                lock (queue)
                {
                    if (queue.Count > 0)
                        cmd = queue.Dequeue();
                    else
                    {
                        processQueue.Reset();
                        break;
                    }
                }

                if (cmd.Operation == SerialOperation.Write || cmd.Operation == SerialOperation.WriteRead)
                {
                    sp.Write(cmd.BytesToWrite, 0, cmd.BytesToWrite.Length);
                    if (WriteSuccess != null)
                        WriteSuccess(this, new IdEventArgs { Id = cmd.Id });
                }

                if (cmd.Operation == SerialOperation.Read || cmd.Operation == SerialOperation.WriteRead)
                {
                    byte[] buffer = new byte[cmd.BytesToRead];
                    sp.Read(buffer, 0, buffer.Length);
                    if (ReadSuccess != null)
                        ReadSuccess(this, new ReadEventArgs { Id = cmd.Id, Data = buffer });
                }
            }
        }
    }

    public void EnqueueCommand(QueueCommand Command)
    {
        lock (queue)
        {
            queue.Enqueue(Command);
            processQueue.Set();
        }
    }
}
QueueCommand:
public class QueueCommand
{
    public QueueCommand()
    {
        Id = Guid.NewGuid();
    }

    public Guid Id { get; set; }
    public SerialOperation Operation { get; set; }
    public int BytesToRead { get; set; }
    public byte[] BytesToWrite { get; set; }
}
Enums:
public enum SerialOperation
{
    Read,
    Write,
    WriteRead
}
Event arguments:
public class IdEventArgs : EventArgs
{
    public Guid Id { get; set; }
}

public class ReadEventArgs : IdEventArgs
{
    public byte[] Data { get; set; }
}
To use the queue you instantiate it and hook up the WriteSuccess and ReadSuccess events.
SerialQueue queue = new SerialQueue();
queue.ReadSuccess += (o, args) => { /*Do whatever you need to do with the read data*/ };
queue.WriteSuccess += (o, args) => { /*Do whatever you need to do after the write */ };
Note that each QueueCommand has a property named Id which is a unique Guid, it allows you to track when the commands are executed.
Now, when you want to perform a read you do:
QueueCommand cmd = new QueueCommand { Operation = SerialOperation.Read, BytesToRead = 1024 };
queue.Enqueue(cmd);
At this point the queue adds the command and sets the reset event; when the reset event is set, the thread processing the commands resumes execution (if it wasn't already running) and processes all available commands in the queue.
For a write you will do:
QueueCommand cmd = new QueueCommand { Operation = SerialOperation.Write, BytesToWrite = new byte[]{ 1, 10, 40 } };
And for a write followed by a read you will do:
QueueCommand cmd = new QueueCommand { Operation = SerialOperation.WriteRead, BytesToWrite = new byte[]{ 1, 10, 40 }, BytesToRead = 230 };
I have been working with serial ports for years in multithreaded environments, and this is the only way to ensure sequentiality between sent commands and received responses; otherwise you will mix up responses from different commands.
Remember this is just a base implementation, you need to add error handling and customize it to your needs.
The thread safety of a method has nothing to do with serial port operations as such (see this interesting discussion: What Makes a Method Thread-safe? What are the rules?).
In the end, I think the lock(_syncObject) in your first class is not necessary (though I don't know the rest of your code!) if you call the methods the way you do, because the Read() and Write() calls are already enclosed in a sync-lock on the same object (I'm assuming your lock object is declared as private static readonly object otherlock1 = new object();).
In my opinion, if you only call function1 and function2 in the rest of your code, your approach is definitely thread-safe (provided that your // Some other code doesn't spawn another thread performing thread-unsafe operations on the same variables that function1 and function2 are working on).
Talking about the serial port protocol: what happens if your // Some other code fails for some reason? For example, a computation error between your HASPClass.Read(...) and HASPClass.Write(...). This might not affect the thread-safety itself, but it could damage the sequence of the read/write operations (though only you can know the details of that).
First of all, using singletons in such a manner is bad practice. You should consider using something like this.
public sealed class SerialPortExt
{
    private readonly SerialPort _serialPort;
    private readonly object _serialPortLock = new object();

    public SerialPortExt(SerialPort serialPort)
    {
        _serialPort = serialPort;
    }

    public void DoSomething()
    {
    }

    public IDisposable Lock()
    {
        return new DisposableLock(_serialPortLock);
    }
}
Where DisposableLock looks like this.
public sealed class DisposableLock : IDisposable
{
    private readonly object _lock;

    public DisposableLock(object @lock)
    {
        _lock = @lock;
        Monitor.Enter(_lock);
    }

    #region Implementation of IDisposable

    public void Dispose()
    {
        Monitor.Exit(_lock);
    }

    #endregion
}
Then you can work with your instance in the following way.
class Program
{
    static void Main()
    {
        var serialPortExt = new SerialPortExt(new SerialPort());

        var tasks =
            new[]
            {
                Task.Run(() => DoSomething(serialPortExt)),
                Task.Run(() => DoSomething(serialPortExt))
            };

        Task.WaitAll(tasks);
    }

    public static void DoSomething(SerialPortExt serialPortExt)
    {
        using (serialPortExt.Lock())
        {
            serialPortExt.DoSomething();
            Thread.Sleep(TimeSpan.FromSeconds(5));
        }
    }
}
Since I cannot try out your code and it wouldn't compile as posted, I would advise making your wrapper into a singleton and performing the locking from there.
Here is an example of your sample code converted to a singleton class based on MSDN Implementing Singleton in C#:
public class HASPCLass
{
    private static SerialPort m_port;
    private static bool m_initialized;
    private static int m_baudRate;
    static readonly object _syncObject = new object();

    private static HASPCLass _instance;
    public static HASPCLass Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (_syncObject)
                {
                    if (_instance == null)
                    {
                        _instance = new HASPCLass();
                    }
                }
            }
            return _instance;
        }
    }

    public void DoInitialization(int baudRate /*also could be other params*/)
    {
        if (!m_initialized)
        {
            Initialize(baudRate);
        }
    }

    private void Initialize(int baudrate /*also could have other params*/)
    {
        m_port.Open();
        m_baudRate = baudrate;
        m_initialized = true;
    }

    private void Uninitialize()
    {
        m_port.Close();
        m_initialized = false;
    }

    public void Read(byte[] buff)
    {
        m_port.Read(buff, 0, buff.Length);
    }

    public void Write(byte[] buff)
    {
        m_port.Write(buff, 0, buff.Length);
    }

    public void Close()
    {
        if (m_initialized)
        {
            Uninitialize();
        }
    }
}
Notice that locking is only applied on the instance of HASPCLass.
The outer if (_instance == null) check is there because, when multiple threads race to access the singleton while it is still null, only one of them should pay the cost of the lock and construct it; the second check inside the lock guards against two threads both passing the first check (double-checked locking). These modifications have already made your HASPCLass thread safe! Now consider adding more functions, for example for setting the port name and other properties as needed.
Generally, in this kind of situation, you would use a Mutex.
A mutex permits mutual exclusion on shared resources.
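A minimal sketch of that idea, assuming a single Mutex guards the shared resource (the method name is illustrative). Unlike lock/Monitor, a Mutex can also be named and shared across processes:

```csharp
using System;
using System.Threading;

class MutexDemo
{
    private static readonly Mutex mutex = new Mutex();

    static void AccessSharedResource()
    {
        mutex.WaitOne(); // acquire
        try
        {
            // ... touch the serial port / shared state here ...
        }
        finally
        {
            mutex.ReleaseMutex(); // always release, even if an exception is thrown
        }
    }

    static void Main()
    {
        AccessSharedResource();
        AccessSharedResource(); // acquire/release works repeatedly
    }
}
```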
I need a synchronizing class that behaves exactly like the AutoResetEvent class, but with one minor exception:
A call to the Set() method must release all waiting threads, not just one.
How can I construct such a class? I am simply out of ideas.
Martin.
So you have multiple threads doing a .WaitOne() and you want to release all of them?
Use the ManualResetEvent class, and all the waiting threads will be released.
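A minimal sketch of that suggestion (the thread count is arbitrary): all waiters block on the same ManualResetEvent, and a single Set releases every one of them, because a ManualResetEvent stays signaled until it is Reset.

```csharp
using System;
using System.Threading;

class ManualResetEventDemo
{
    static void Main()
    {
        var gate = new ManualResetEvent(false); // start non-signaled
        int released = 0;

        var threads = new Thread[3];
        for (int i = 0; i < 3; i++)
        {
            threads[i] = new Thread(() =>
            {
                gate.WaitOne();                       // every thread parks here
                Interlocked.Increment(ref released);
            });
            threads[i].Start();
        }

        gate.Set(); // stays signaled: ALL waiters get through, not just one

        foreach (var t in threads) t.Join();
        Console.WriteLine(released); // prints 3
    }
}
```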
Thank you very much for all your thoughts and inputs, which I have read with great interest. I did some more searching here on Stack Overflow, and suddenly I found this, which turned out to be just what I was looking for. By cutting it down to just the two methods I need, I ended up with this small class:
public sealed class Signaller
{
    public void PulseAll()
    {
        lock (_lock)
        {
            Monitor.PulseAll(_lock);
        }
    }

    public bool Wait(TimeSpan maxWaitTime)
    {
        lock (_lock)
        {
            return Monitor.Wait(_lock, maxWaitTime);
        }
    }

    private readonly object _lock = new object();
}
and it does exactly what it should! I'm amazed that a solution could be that simple, and I love such simplicity. It's beautiful. Thank you, Matthew Watson!
Martin.
Two things you might try.
One is to use a Barrier object, conditionally adding threads to it and signalling them.
The other is a publisher/subscriber setup, as in Rx: each thread waits on an object that it adds to a collection; when you want to call 'Set', loop over a snapshot of that collection and signal each member.
Or you could try bears.
If the event is being referenced by all threads in a common field or property, you could replace the common field or property with a new non-signaled event and then signal the old one. It has some cost to it since you'll be regularly creating new synchronization objects, but it would work. Here's an example of how I would do that:
public static class Example
{
    private static volatile bool stopRunning;
    private static ReleasingAutoResetEvent myEvent;

    public static void RunExample()
    {
        using (Example.myEvent = new ReleasingAutoResetEvent())
        {
            WaitCallback work = new WaitCallback(WaitThread);
            for (int i = 0; i < 5; ++i)
            {
                ThreadPool.QueueUserWorkItem(work, i.ToString());
            }

            Thread.Sleep(500);
            for (int i = 0; i < 3; ++i)
            {
                Example.myEvent.Set();
                Thread.Sleep(5000);
            }

            Example.stopRunning = true;
            Example.myEvent.Set();
        }
    }

    private static void WaitThread(object state)
    {
        while (!Example.stopRunning)
        {
            Example.myEvent.WaitOne();
            Console.WriteLine("Thread {0} is released!", state);
        }
    }
}
public sealed class ReleasingAutoResetEvent : IDisposable
{
    private volatile ManualResetEvent manualResetEvent = new ManualResetEvent(false);

    public void Set()
    {
        ManualResetEvent eventToSet = this.manualResetEvent;
        this.manualResetEvent = new ManualResetEvent(false);
        eventToSet.Set();
        eventToSet.Dispose();
    }

    public bool WaitOne()
    {
        return this.manualResetEvent.WaitOne();
    }

    public bool WaitOne(int millisecondsTimeout)
    {
        return this.manualResetEvent.WaitOne(millisecondsTimeout);
    }

    public bool WaitOne(TimeSpan timeout)
    {
        return this.manualResetEvent.WaitOne(timeout);
    }

    public void Dispose()
    {
        this.manualResetEvent.Dispose();
    }
}
Another more lightweight solution you could try that uses the Monitor class to lock and unlock objects is below. However, I'm not as happy with the cleanup story for this version of ReleasingAutoResetEvent since Monitor may hold a reference to it and keep it alive indefinitely if it is not properly disposed.
There are a few limitations/gotchas with this implementation. First, the thread that creates this object will be the only one that will be able to signal it with a call to Set; other threads that attempt to do the same thing will receive a SynchronizationLockException. Second, the thread that created it will never be able to wait on it successfully since it already owns the lock. This will only be an effective solution if you have exactly one controlling thread and several other waiting threads.
public static class Example
{
    private static volatile bool stopRunning;
    private static ReleasingAutoResetEvent myEvent;

    public static void RunExample()
    {
        using (Example.myEvent = new ReleasingAutoResetEvent())
        {
            WaitCallback work = new WaitCallback(WaitThread);
            for (int i = 0; i < 5; ++i)
            {
                ThreadPool.QueueUserWorkItem(work, i.ToString());
            }

            Thread.Sleep(500);
            for (int i = 0; i < 3; ++i)
            {
                Example.myEvent.Set();
                Thread.Sleep(5000);
            }

            Example.stopRunning = true;
            Example.myEvent.Set();
        }
    }

    private static void WaitThread(object state)
    {
        while (!Example.stopRunning)
        {
            Example.myEvent.WaitOne();
            Console.WriteLine("Thread {0} is released!", state);
        }
    }
}
public sealed class ReleasingAutoResetEvent : IDisposable
{
    private volatile object lockObject = new object();

    public ReleasingAutoResetEvent()
    {
        Monitor.Enter(this.lockObject);
    }

    public void Set()
    {
        object objectToSignal = this.lockObject;
        object objectToLock = new object();
        Monitor.Enter(objectToLock);
        this.lockObject = objectToLock;
        Monitor.Exit(objectToSignal);
    }

    public void WaitOne()
    {
        object objectToMonitor = this.lockObject;
        Monitor.Enter(objectToMonitor);
        Monitor.Exit(objectToMonitor);
    }

    public bool WaitOne(int millisecondsTimeout)
    {
        object objectToMonitor = this.lockObject;
        bool succeeded = Monitor.TryEnter(objectToMonitor, millisecondsTimeout);
        if (succeeded)
        {
            Monitor.Exit(objectToMonitor);
        }
        return succeeded;
    }

    public bool WaitOne(TimeSpan timeout)
    {
        object objectToMonitor = this.lockObject;
        bool succeeded = Monitor.TryEnter(objectToMonitor, timeout);
        if (succeeded)
        {
            Monitor.Exit(objectToMonitor);
        }
        return succeeded;
    }

    public void Dispose()
    {
        Monitor.Exit(this.lockObject);
    }
}
I am about to use a BlockingCollection as below and just wanted to check that it is suitable regarding thread safety etc. I was also wondering whether I need a CancellationTokenSource for anything.
Thanks
public class MyApp
{
    private BlockingCollection<int> blockingCollection;

    public void Start()
    {
        blockingCollection = new BlockingCollection<int>();
        var task = Task.Factory.StartNew(ProcessData);
    }

    public void Add(int value)
    {
        blockingCollection.Add(value); // Called from a thread that receives input
    }

    private void ProcessData()
    {
        foreach (var item in blockingCollection.GetConsumingEnumerable())
        {
            ...
        }
    }

    public void Finish()
    {
        blockingCollection.CompleteAdding();
    }
}
Obviously, you can use a cancellation token to support the graceful cancellation pattern in your code:
private readonly CancellationTokenSource cts = new CancellationTokenSource();

public void Start()
{
    blockingCollection = new BlockingCollection<int>();
    var task = Task.Factory.StartNew(ProcessData, cts.Token);
}

private void ProcessData()
{
    foreach (var item in blockingCollection.GetConsumingEnumerable(cts.Token))
    {
        cts.Token.ThrowIfCancellationRequested();
        // ...
    }
}

public void Cancel()
{
    cts.Cancel();
}
Yes, BlockingCollection itself is thread-safe. From MSDN:
IProducerConsumerCollection<T> represents a collection that allows for thread-safe adding and removing of data. BlockingCollection<T> is used as a wrapper for an IProducerConsumerCollection<T> instance, allowing removal attempts from the collection to block until data is available to be removed.
Ok, this doesn't say much about the actual code using it, but from what I see in your code it's used correctly.
I have a multithreaded program that opens a few threads to query an external CRM and saves the results in an in-memory IDictionary in order to speed up the system.
I'm a little confused about multithreading and critical sections. I want my class QueryThreadProcess to have a thread which runs the query, and to manage starting and stopping that query. It holds an object of type Query and saves the results in a list.
The class QueryManager kills or starts all the processes; basically, collection-wide methods.
I have a feeling that the private members of QueryThreadProcess are shared between all threads. How can I make them private to each thread, yet still kill each thread separately from an external class?
I don't want to lock, because I want all the threads to run in parallel.
Here is my manager class:
public class QueryManager
{
    private IDictionary<int, QueryThreadProcess> _queries;

    public QueryManager()
    {
        _queries = new Dictionary<int, QueryThreadProcess>();
    }

    public void Start()
    {
        CreateQueryThreadsFromDb();
        StartAllThreads();
    }

    private void StartAllThreads()
    {
        if (_queries != null && _queries.Count > 0)
        {
            StopThreadsAndWaitForKill();
        }
        foreach (var query in _queries)
            query.Value.Start();
    }

    private void CreateQueryThreadsFromDb()
    {
        var queries = new QueryProvider().GetAllQueries();
        if (_queries != null && _queries.Count > 0)
        {
            StopThreadsAndWaitForKill();
            _queries.Clear();
        }
        foreach (var query in queries)
            _queries.Add(query.Id, new QueryThreadProcess(query));
    }

    private void StopThreadsAndWaitForKill()
    {
        KillAllThreads();
        while (!AreAllThreadsKilled()) { }
    }

    private void KillAllThreads()
    {
        foreach (var query in _queries)
            query.Value.Kill();
    }

    private bool AreAllThreadsKilled()
    {
        return _queries.All(query => query.Value.IsKilled);
    }

    public IList<User> GetQueryResultById(int id)
    {
        return _queries[id].Result;
    }
}
and here is my class for QueryProcesses which holds the threads that do the actual query:
using System.Collections.Generic;
using System.Threading;
using Intra.BLL.MessageProviders;
using Intra.BO;
using Intra.BO.Messages;

namespace Intra.BLL.QueryProcess
{
    internal class QueryThreadProcess
    {
        private readonly Thread _thread;
        private readonly Query _query;
        private bool _isStoppingQuery = false;
        private bool _isKilled = true;
        private IList<User> _result;
        private readonly object _objSync = new object();

        public QueryThreadProcess(Query query)
        {
            _query = query;
            _thread = new Thread(RetrieveQueries);
        }

        public void Start()
        {
            _isStoppingQuery = true;
            while (!_isKilled) { }
            _isStoppingQuery = false;
            _thread.Start();
        }

        private void RetrieveQueries()
        {
            const string BROKERNAME = "bla";
            _isKilled = false;
            while (!_isStoppingQuery)
            {
                Broker broker = new BrokerProvider().GetBrokerByName(BROKERNAME);
                var users = new QueryProvider().GetUserObjectsByQuery(_query, ParaTokenGenerator.GetBrokerAuthToken(broker));
                _result = users;
            }
            _isKilled = true;
        }

        public bool IsKilled
        {
            get { return _isKilled; }
        }

        public IList<User> Result
        {
            get
            {
                lock (_objSync)
                    return _result;
            }
        }

        public void Kill()
        {
            _isStoppingQuery = true;
        }
    }
}
It doesn't really answer your question, but a more modern approach using the Task Parallel Library of .NET 4 could save you some headaches. Controlling threads yourself isn't necessary; you could refactor your classes down to a few lines of code and get rid of the described problems.
.NET 4 has ThreadLocal<T> which may be of interest to you
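A small sketch of what ThreadLocal<T> gives you: each thread gets its own copy of the value, so per-worker state never leaks between threads (the values here are illustrative).

```csharp
using System;
using System.Threading;

class ThreadLocalDemo
{
    static void Main()
    {
        // The value factory runs once per thread that touches .Value.
        var perThread = new ThreadLocal<int>(() => 0);

        var worker = new Thread(() => perThread.Value = 42);
        worker.Start();
        worker.Join();

        // The worker's write is invisible here: the main thread has its own copy.
        Console.WriteLine(perThread.Value); // prints 0
    }
}
```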
The _thread and _query fields probably don't matter, as they are declared readonly and are not changed once each thread is running. They are not shared between worker threads, since they are private to the class and you create a separate instance of the class for each thread.
_isStoppingQuery and _isKilled are accessed by both the worker thread and the controlling thread. As such, they should be declared volatile to ensure they are not cached in a processor register and don't suffer from execution reordering.
There is a potential issue with _result. The lock in the Result getter is not enough to protect the contents of _result: it only protects the reference to the list, not the list itself. However, as your worker thread only overwrites the reference each cycle, it may not be an issue. I would probably do away with the lock on _objSync and declare _result volatile too.
I have a small background thread which runs for the application's lifetime; however, when the application is shut down, the thread should exit gracefully.
The problem is that the thread runs some code at an interval of 15 minutes, which means it sleeps a lot.
In order to get it out of its sleep, I toss an interrupt at it. My question is whether there's a better approach, since interrupts generate a ThreadInterruptedException.
Here's the gist of my code (somewhat pseudo):
public class BackgroundUpdater : IDisposable
{
    private Thread myThread;
    private const int intervalTime = 900000; // 15 minutes

    public void Dispose()
    {
        myThread.Interrupt();
    }

    public void Start()
    {
        myThread = new Thread(ThreadedWork);
        myThread.IsBackground = true; // To ensure the app doesn't wait for the thread to exit
        myThread.Priority = ThreadPriority.BelowNormal;
        myThread.Start();
    }

    private void ThreadedWork()
    {
        try
        {
            while (true)
            {
                Thread.Sleep(intervalTime); // 15 minutes
                DoWork();
            }
        }
        catch (ThreadInterruptedException)
        {
        }
    }
}
There's absolutely a better way - either use Monitor.Wait/Pulse instead of Sleep/Interrupt, or use an Auto/ManualResetEvent. (You'd probably want a ManualResetEvent in this case.)
Personally I'm a Wait/Pulse fan, probably due to it being like Java's wait()/notify() mechanism. However, there are definitely times where reset events are more useful.
Your code would look something like this:
private readonly object padlock = new object();
private volatile bool stopping = false;

public void Stop() // Could make this Dispose if you want
{
    stopping = true;
    lock (padlock)
    {
        Monitor.Pulse(padlock);
    }
}

private void ThreadedWork()
{
    while (!stopping)
    {
        DoWork();
        lock (padlock)
        {
            Monitor.Wait(padlock, TimeSpan.FromMinutes(15));
        }
    }
}
For more details, see my threading tutorial, in particular the pages on deadlocks, waiting and pulsing, the page on wait handles. Joe Albahari also has a tutorial which covers the same topics and compares them.
I haven't looked in detail yet, but I wouldn't be surprised if Parallel Extensions also had some functionality to make this easier.
You could use an event to check whether the process should end, like this:
var eventX = new AutoResetEvent(false);
while (true)
{
    if (eventX.WaitOne(900000, false))
    {
        break;
    }
    DoWork();
}
There is the CancellationTokenSource class in .NET 4 and later, which simplifies this task a bit.
private readonly CancellationTokenSource cancellationTokenSource =
    new CancellationTokenSource();

private void Run()
{
    while (!cancellationTokenSource.IsCancellationRequested)
    {
        DoWork();
        cancellationTokenSource.Token.WaitHandle.WaitOne(
            TimeSpan.FromMinutes(15));
    }
}

public void Stop()
{
    cancellationTokenSource.Cancel();
}
Don't forget that CancellationTokenSource is disposable, so make sure you dispose it properly.
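A minimal sketch of the disposal point: wrapping the source in a using block (or disposing it from your own Dispose) releases the kernel handle that its WaitHandle may have allocated, even if an exception is thrown in between.

```csharp
using System;
using System.Threading;

class CtsDisposeDemo
{
    static void Main()
    {
        using (var cts = new CancellationTokenSource())
        {
            // ... hand cts.Token to the worker, later request cancellation ...
            cts.Cancel();
            Console.WriteLine(cts.IsCancellationRequested); // prints True
        } // Dispose runs here, releasing any kernel handle the source allocated
    }
}
```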
One method might be to add a cancel event or delegate that the thread subscribes to. When the cancel event is invoked, the thread can stop itself.
I absolutely like Jon Skeet's answer. However, this might be a bit easier to understand and should also work:
public class BackgroundTask : IDisposable
{
    private readonly CancellationTokenSource cancellationTokenSource;
    private bool stop;

    public BackgroundTask()
    {
        this.cancellationTokenSource = new CancellationTokenSource();
        this.stop = false;
    }

    public void Stop()
    {
        this.stop = true;
        this.cancellationTokenSource.Cancel();
    }

    public void Dispose()
    {
        this.cancellationTokenSource.Dispose();
    }

    private void ThreadedWork(object state)
    {
        using (var syncHandle = new ManualResetEventSlim())
        {
            while (!this.stop)
            {
                syncHandle.Wait(TimeSpan.FromMinutes(15), this.cancellationTokenSource.Token);
                if (!this.cancellationTokenSource.IsCancellationRequested)
                {
                    // DoWork();
                }
            }
        }
    }
}
Or, including waiting for the background task to actually have stopped (in this case, Dispose must be invoked by a thread other than the one the background task is running on; and of course this is not perfect code, as it requires the worker thread to actually have started):
using System;
using System.Threading;

public class BackgroundTask : IDisposable
{
    private readonly ManualResetEventSlim threadedWorkEndSyncHandle;
    private readonly CancellationTokenSource cancellationTokenSource;
    private bool stop;

    public BackgroundTask()
    {
        this.threadedWorkEndSyncHandle = new ManualResetEventSlim();
        this.cancellationTokenSource = new CancellationTokenSource();
        this.stop = false;
    }

    public void Dispose()
    {
        this.stop = true;
        this.cancellationTokenSource.Cancel();
        this.threadedWorkEndSyncHandle.Wait();
        this.cancellationTokenSource.Dispose();
        this.threadedWorkEndSyncHandle.Dispose();
    }

    private void ThreadedWork(object state)
    {
        try
        {
            using (var syncHandle = new ManualResetEventSlim())
            {
                while (!this.stop)
                {
                    syncHandle.Wait(TimeSpan.FromMinutes(15), this.cancellationTokenSource.Token);
                    if (!this.cancellationTokenSource.IsCancellationRequested)
                    {
                        // DoWork();
                    }
                }
            }
        }
        finally
        {
            this.threadedWorkEndSyncHandle.Set();
        }
    }
}
If you see any flaws or disadvantages compared to Jon Skeet's solution, I'd like to hear them, as I always enjoy learning ;-)
I guess this is slower and uses more memory, and should thus not be used at large scale or on short timeframes. Any others?