I am making a library that allows access to a system-wide shared resource and would like a mutex-like lock on it. I have used the Mutex class in the past to synchronize operations in different threads or processes.
In UI applications a problem can occur. The library I'm making is used in multiple products, some of which are plugins hosted in the same UI application. Because of this, the UI thread is the same for each instance of the library - so mutex.WaitOne() will return true even if the resource is already being accessed, because a Mutex is re-entrant on the thread that already owns it.
The 'resource' is the user's attention. I don't want more than one specific child window open regardless of which host process wants to open it. Additionally, it may be a different thread that knows when the mutex can be released (child window closed).
Is there a class, or pattern I can apply, that will allow me to easily solve this?
To summarize my intentions, this might be the ideal fictional class:
var specialMutex = new SpecialMutex("UserToastNotification");
specialMutex.WaitOne(0); // Returns true only once, even on the same thread,
                         // and is respected across different processes.
specialMutex.Release();  // Can be called from threads other than the one
                         // that called WaitOne().
Yes, Release looks dangerous, but it's only called by the resource.
I think you want a Semaphore that has an initial value of 1. Any call to WaitOne() on a Semaphore tries to decrement the count, regardless of the thread. And any call to Release, regardless of the thread that calls it, results in incrementing the count.
So if a single thread initializes a semaphore with a value of 1 and then calls WaitOne, the count will go to 0. If that same thread calls WaitOne again on the same semaphore, the thread will block, waiting for a release.
Some other thread could come along and call Release to increment the count.
So, whereas a Semaphore isn't exactly like a Mutex, it might be similar enough to let your program work.
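A minimal sketch of that idea. Note that a semaphore constructed without a name is only visible within one process; passing a name as the third constructor argument (e.g. "UserToastNotification") makes it a system-wide named semaphore on Windows, which is what the cross-process requirement calls for:

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Initial count 1, maximum count 1: behaves like a mutex,
        // but with no thread affinity. Add a name as a third argument
        // to make it visible across processes.
        var gate = new Semaphore(1, 1);

        bool first = gate.WaitOne(0);  // true: count goes 1 -> 0
        bool second = gate.WaitOne(0); // false: count is already 0,
                                       // even though we're on the same thread

        Console.WriteLine(first);
        Console.WriteLine(second);

        // Release can legally be called from a different thread
        // than the one that called WaitOne.
        var releaser = new Thread(() => gate.Release());
        releaser.Start();
        releaser.Join();

        Console.WriteLine(gate.WaitOne(0)); // acquired again
    }
}
```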
You could use a compare/exchange operation to accomplish this. Something like this:
class Lock
{
    private int locked = 0;

    public bool Enter() { return Interlocked.CompareExchange(ref locked, 1, 0) == 0; }
    public void Leave() { Interlocked.CompareExchange(ref locked, 0, 1); }
}
Here, Enter will only ever return true once, regardless of which thread it is called from, until you call Leave.
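For example (renaming the class to SpinGate here to avoid confusion with the lock keyword; also note that an Interlocked flag like this is per-process, so unlike a named Semaphore or Mutex it will not coordinate across processes):

```csharp
using System;
using System.Threading;

class SpinGate
{
    private int locked = 0;

    public bool Enter() { return Interlocked.CompareExchange(ref locked, 1, 0) == 0; }
    public void Leave() { Interlocked.CompareExchange(ref locked, 0, 1); }
}

class Program
{
    static void Main()
    {
        var gate = new SpinGate();
        Console.WriteLine(gate.Enter()); // first caller wins
        Console.WriteLine(gate.Enter()); // refused until Leave is called
        gate.Leave();                    // any thread may call this
        Console.WriteLine(gate.Enter()); // acquired again
    }
}
```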
Related
For multiple threads wait, can anyone compare the pros and cons of using WaitHandle.WaitAll and Thread.Join?
WaitHandle.WaitAll has a 64-handle limit, so that is obviously a huge limitation. On the other hand, it is a convenient way to wait for many signals in a single call. Thread.Join does not require creating any additional WaitHandle instances, and since it can be called individually on each thread, the 64-handle limit does not apply.
Personally, I have never used WaitHandle.WaitAll. I prefer a more scalable pattern when I want to wait on multiple signals. You can create a counting mechanism that counts up or down, and once a specific value is reached you signal a single shared event. The CountdownEvent class conveniently packages all of this into a single class.
var finished = new CountdownEvent(1);
for (int i = 0; i < NUM_WORK_ITEMS; i++)
{
    finished.AddCount();
    SpawnAsynchronousOperation(
        () =>
        {
            try
            {
                // Place logic to run in parallel here.
            }
            finally
            {
                finished.Signal();
            }
        });
}
finished.Signal();
finished.Wait();
Update:
The reason why you want to signal the event from the main thread is subtle. Basically, you want to treat the main thread as if it were just another work item. After all, it is running concurrently along with the real work items.
Consider for a moment what might happen if we did not treat the main thread as a work item. It goes through one iteration of the for loop and adds a count to our event (via AddCount), indicating that we have one pending work item, right? Let's say SpawnAsynchronousOperation completes and gets the work item queued on another thread. Now imagine the main thread gets preempted before swinging around to the next iteration of the loop. The thread executing the work item gets its fair share of the CPU, starts humming along, and actually completes the work item. The Signal call in the work item decrements our pending work item count to zero, which changes the state of the CountdownEvent to signalled. In the meantime the main thread wakes up, goes through all remaining iterations of the loop, and hits the Wait call - but since the event got prematurely signalled, it passes right on by even though there are still pending work items.
Again, avoiding this subtle race condition is easy when you treat the main thread as a work item. That is why the CountdownEvent is initialized with one count and the Signal method is called before the Wait.
I like Brian's answer as a comparison of the two mechanisms.
If you are on .NET 4, it would be worthwhile exploring the Task Parallel Library to achieve task parallelism via System.Threading.Tasks, which allows you to manage tasks across multiple threads at a higher level of abstraction. The signalling you asked about to manage thread interactions is hidden or much simplified, and you can concentrate on properly defining what each Task consists of and how to coordinate them.
This may seem off-topic, but as Microsoft themselves say in the MSDN docs:
in the .NET Framework 4, tasks are the preferred API for writing multi-threaded, asynchronous, and parallel code.
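As a rough sketch of how the fan-out/wait pattern above collapses under the TPL (the work items here are just placeholder console writes):

```csharp
using System;
using System.Threading.Tasks;

class Program
{
    const int NUM_WORK_ITEMS = 4;

    static void Main()
    {
        var tasks = new Task[NUM_WORK_ITEMS];

        for (int i = 0; i < NUM_WORK_ITEMS; i++)
        {
            int id = i; // capture a copy, not the loop variable
            tasks[i] = Task.Factory.StartNew(() =>
            {
                // Place logic to run in parallel here.
                Console.WriteLine("work item " + id);
            });
        }

        // No CountdownEvent bookkeeping and no 64-handle limit to manage.
        Task.WaitAll(tasks);
        Console.WriteLine("all done");
    }
}
```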
The WaitAll mechanism involves kernel-mode objects. I don't think the same is true for the Join mechanism. I would prefer Join, given the opportunity.
Technically, though, the two are not equivalent. IIRC, Join can only operate on one thread at a time. WaitAll can block on the signalling of multiple kernel objects at once.
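Joining each thread individually also sidesteps WaitAll's 64-handle limit entirely; a quick sketch:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

class Program
{
    static void Main()
    {
        var threads = new List<Thread>();
        for (int i = 0; i < 100; i++) // well past WaitAll's 64-handle limit
        {
            var t = new Thread(() => Thread.Sleep(10)); // placeholder work
            t.Start();
            threads.Add(t);
        }

        // Join blocks on one thread at a time; no wait-handle array needed.
        foreach (var t in threads)
            t.Join();

        Console.WriteLine("all threads finished");
    }
}
```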
We have an Ultrasound machine application where the Ultrasound object is created on the UI thread. A Singleton implementation would have been good here, but regardless, that isn't what we have.
Recently, the set methods changed such that they automatically stop and restart the ultrasound machine, which can take between 10 and 100 ms depending on the state of the machine. In most cases this isn't too bad a problem; however, it still causes the UI thread to block for up to 100 ms. Additionally, these methods are not thread-safe and must be called on the same thread where the object was initialized.
The largest issue this is now causing is unresponsive buttons in the UI, especially sliders, which may try to update variables many times as you drag the bar. As a result, sliders in particular will stutter and update very slowly as they make many set calls through databound properties.
What is a good way to create a thread specifically for the creation and work for this Ultrasound object, which will persist through the lifetime of the application?
A current temporary workaround involves spawning a Timer, and invoking a parameter update once we have detected the slider hasn't moved for 200ms, however a Timer would then have to be implemented for every slider and seems like a very messy solution which solves unresponsive sliders, but still blocks the UI thread occasionally.
One thing that's really great about programming the GUI is that you don't have to worry about multiple threads mucking things up for you (assuming you've got CheckForIllegalCrossThreadCalls = true, as you should). It's all single-threaded, operating by means of a message pump (queue) that processes incoming messages one-by-one.
Since you've indicated that you need to synchronize method calls that are not written to be thread-safe (totally understandable), there's no reason you can't implement your own message pump to deal with your Ultrasound object.
A naive, very simplistic version might look something like this (the BlockingCollection<T> class is great if you're on .NET 4.0 or have installed Rx extensions; otherwise, you can just use a plain vanilla Queue<T> and do your own locking). Warning: this is just a quick skeleton I've thrown together just now; I make no promises as to its robustness or even correctness.
class MessagePump<T>
{
    // In your case you would set this to your Ultrasound object.
    // You could just as easily design this class to be "object-agnostic";
    // but I think that coupling an instance to a specific object makes it
    // clearer what the purpose of the MessagePump<T> is.
    private T _obj;

    private BlockingCollection<Action<T>> _workItems;
    private Thread _thread;

    public MessagePump(T obj)
    {
        _obj = obj;

        // Note: the default underlying data store for a BlockingCollection<T>
        // is a FIFO ConcurrentQueue<T>, which is what we want.
        _workItems = new BlockingCollection<Action<T>>();

        _thread = new Thread(ProcessQueue);
        _thread.IsBackground = true;
        _thread.Start();
    }

    public void Submit(Action<T> workItem)
    {
        _workItems.Add(workItem);
    }

    private void ProcessQueue()
    {
        for (;;)
        {
            Action<T> workItem = _workItems.Take();
            try
            {
                workItem(_obj);
            }
            catch
            {
                // Put in some exception handling mechanism so that
                // this thread is always running. One idea would be to
                // raise an event containing the Exception object on a
                // threadpool thread. You definitely don't want to raise
                // the event from THIS thread, though, since then you
                // could hit ANOTHER exception, which would defeat the
                // purpose of this catch block.
            }
        }
    }
}
Then what would happen is: every time you want to interact with your Ultrasound object in some way, you do so through this message pump, by calling Submit and passing in some action that works with your Ultrasound object. The Ultrasound object then receives all messages sent to it synchronously (by which I mean, one at a time), while operating on its own non-GUI thread.
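As a rough, self-contained sketch of that flow (a StringBuilder stands in for the Ultrasound object, the pump loop is inlined, and CompleteAdding is used only so the demo can drain and exit; a real pump would keep running for the application's lifetime):

```csharp
using System;
using System.Collections.Concurrent;
using System.Text;
using System.Threading;

class Program
{
    static void Main()
    {
        // A minimal inline pump: one consumer thread draining a FIFO of actions.
        var device = new StringBuilder();
        var queue = new BlockingCollection<Action<StringBuilder>>();
        var pump = new Thread(() =>
        {
            foreach (var action in queue.GetConsumingEnumerable())
                action(device); // one at a time, always on this thread
        });
        pump.IsBackground = true;
        pump.Start();

        // The "UI thread" submits work and never blocks on it.
        queue.Add(d => d.Append("stop;"));
        queue.Add(d => d.Append("set-depth;"));
        queue.Add(d => d.Append("restart;"));

        queue.CompleteAdding(); // demo only: let the pump drain and exit
        pump.Join();
        Console.WriteLine(device.ToString());
    }
}
```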
You should maintain a dedicated UltraSound thread, which creates the UltraSound object and then listens for callbacks from other threads.
You should maintain a thread-safe queue of delegates and have the UltraSound thread repeatedly execute and remove the first delegate in the queue.
This way, the UI thread can post actions to the queue, which will then be executed asynchronously by the UltraSound thread.
I'm not sure I fully understand the setup, but here is my attempt at a solution:
How about having the event handler for the slider check the time of the last event and wait 50ms before processing a user adjustment, so that only the most recent value is processed?
Then have a thread using a while loop and waiting on an AutoResetEvent trigger from the GUI. It would then create the object and set it?
I was reading Threading from within a class with static and non-static methods and I am in a similar situation.
I have a static method that pulls data from a resource and creates some runtime objects based on the data.
static class Worker
{
    public static MyObject DoWork(string filename)
    {
        MyObject mo = new MyObject();
        // ... does some work
        return mo;
    }
}
The method takes a while (in this case it is reading 5-10 MB files) and returns an object.
I want to take this method and use it in a multiple thread situation so I can read multiple files at once. Design issues / guidelines aside, how would multiple threads access this code?
Let's say I have something like this...
class ThreadedWorker
{
    public void Run()
    {
        Thread t = new Thread(OnRun);
        t.Start();
    }

    void OnRun()
    {
        MyObject mo = Worker.DoWork("somefilename");
        mo.WriteToConsole();
    }
}
Does the static method run for each thread, allowing for parallel execution?
Yes, the method should be able to run fine in multiple threads. The only thing you should worry about is accessing the same file in multiple threads at the same time.
You should distinguish between static methods and static fields in this case. Each call to a static method gets its own stack frame and its own local variables. That means that in your sample, each call will operate on its own MyObject instance, and the calls will have nothing to do with each other. This also means that there is no problem with executing them on different threads.
If the static method is written to be thread safe, then it can be called from any thread or even passed to a thread pool.
You have to keep in mind that .NET objects don't live on threads (with the exception of structs located on a thread's stack) - paths of execution do. So, if a thread can access an instance of an object, it can call an instance method. Any thread can call a static method, because all it needs to know about is the type of the object.
One thing you should keep in mind when executing static methods concurrently is static fields, which exist only once. So, if the method reads and writes static fields, concurrency issues can occur.
However, there is an attribute called ThreadStaticAttribute which says that for each thread there is a separate field. This can be helpful in some particular scenarios.
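For example, with [ThreadStatic] each thread sees its own copy of the field (a small sketch; the Counter class here is hypothetical):

```csharp
using System;
using System.Threading;

static class Counter
{
    // Without [ThreadStatic] this field would be shared by all threads.
    [ThreadStatic]
    private static int callCount;

    public static int Increment()
    {
        callCount++;
        return callCount;
    }
}

class Program
{
    static void Main()
    {
        Counter.Increment();
        Counter.Increment();
        Console.WriteLine(Counter.Increment()); // third call on the main thread

        var t = new Thread(() =>
        {
            // A fresh copy: this thread's count starts at zero.
            Console.WriteLine(Counter.Increment());
        });
        t.Start();
        t.Join();
    }
}
```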
Local variables are separate for each thread, so you don't need to worry about them. But be aware of external resources like files, which can be problematic when accessed concurrently.
Aside from the code aspect, which has already been answered, you also need to consider the I/O aspect of accessing the file.
A note on architecture and how I have completed this task in the past - not suggesting that this is the one right approach or that it is necessarily appropriate for your application. However, I thought my notes might be helpful for your thought process:
Set up a ManualResetEvent field, call it ActivateReader or something similar, this will become more obvious further on. Initialize it as false.
Set up a boolean field, call it TerminateReaderThread. Initialize it as false, again this will become more obvious further on.
Set up a Queue<string> field, call it Files and initialize it.
My main application thread checks to see if there's a lock on the files queue before writing each of the relevant file paths into it. Once the file's been written, the reset event is tripped indicating to the queue reader thread that there are unread files in the queue.
I then set up a thread to act as a queue reader. This thread waits for the ManualResetEvent to be tripped using the WaitOne() method - a blocking call that unblocks once the event is tripped. Once it is tripped, the thread checks whether a thread shutdown has been initiated (by checking the TerminateReaderThread field). If a shutdown has been initiated, the thread shuts down gracefully; otherwise it reads the next item from the queue and spawns a worker thread to process the file. I then lock the queue before checking whether any items are left. If no items are left, I reset the ManualResetEvent, which will pause our thread on the next go-around. I then unlock the queue so the main thread can continue writing to it.
Each instance of the worker thread attempts to gain an exclusive lock on the file it was initiated with until some timeout elapses, if the lock is successful, it processes the file, if it's unsuccessful, it either retries as necessary, throws an exception and terminates itself. In the event of an exception, the thread can add the file to the end of the queue so another thread can pick it up again at a later point. Be aware that if you do this, then you need to consider the endless loop an I/O read issue could cause. In such an event a dictionary of failed files with counters of how many times they've failed could be useful so that if some limit was reached you could cease to re-add the file to the end of the queue.
Once my application decides the reader thread is no longer needed, it sets the TerminateReaderThread field to true. Next time the reader thread cycles to the start of its process, its shutdown process will be activated.
The static method will run on the thread that you call it from, so each thread executes it in parallel - as long as the function is re-entrant, meaning execution can safely re-enter the function while execution from another thread (or further up the stack) is already inside it.
Since your function is static, you can't access instance members, which would be one way of making it non-re-entrant. A static field that maintained state between calls would be another way of making it non-re-entrant.
Each time you enter, you create a new MyObject, so each flow of execution is dealing with its own MyObject instance, which is good. It means they won't be trying to access the same object at the same time (which would lead to race conditions).
The only thing you're sharing between multiple calls is the Console itself. If you call the method on multiple threads, their output will be interleaved on the console. And you could potentially act on the same file (in your example the filename is hard-coded), though you'd probably be acting on multiple files. Successive threads would probably fail to open a file if a previous one still has it open.
I have a multi-thread C# application that uses some recursive functions in a dll. The problem that I have is how to cleanly stop the recursive functions.
The recursive functions are used to traverse our SCADA system's hierarchical 'SCADA Object' data. Traversing the data takes a long time (10s of minutes) depending on the size of our system and what we need to do with the data.
When I start the work I create a background thread so the GUI stays responsive. Then the background worker handles the calling of the recursive function in the dll.
I can send a cancel request to the background worker using CancelAsync, but the background worker can't check the CancellationPending flag because it is blocked waiting for the dll's recursive function to finish.
Typically there is only 1 recursive function active at a time but there are dozens of recursive functions that are used at various times by different background workers.
As a quick (and really shameful) hack I added a global 'CodeEnabled' flag to the dll. So when the GUI does the CancelAsync it also sets the 'CodeEnabled' flag to false. (I know I need some of those bad code offsets). Then the dll's recursive loop checks the 'CodeEnabled' flag and returns to the background worker which is finally able to stop.
I don't want to move the recursive logic to the background worker thread because I need it in other places (e.g. other background workers).
What other approach should be used for this type of problem?
It depends on the design, really. Much recursion can be replaced with (for example) a local stack (Stack<>) or queue (Queue<>), in which case a cancel flag can be held locally without too much pain. Another option is to use some kind of progress event that allows subscribers to set a cancel flag. A third option is to pass some kind of context class into the function(s), with a (volatile or synchronized) flag that can be set.
In any of these cases you should have relatively easy access to a cancel flag to exit the recursion.
FooContext ctx = new FooContext();
BeginSomeRecursiveFunction(ctx);
...
ctx.Cancel = true; // or ctx.Cancel(), whatever
with (in your function that accepts the context):
if (ctx.Cancel) return; // or maybe throw something
                        // like an OperationCanceledException
blah...
CallMyself(ctx); // and further down the rabbit hole we go...
Another interesting option is to use iterator blocks for your long function rather than regular code; then your calling code can simply stop iterating when it has had enough.
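A sketch of the iterator-block idea: the traversal recursion lives inside an iterator, so nothing is computed until the caller pulls the next element, and cancelling is simply breaking out of the loop (TreeNode here is a hypothetical stand-in for a SCADA object):

```csharp
using System;
using System.Collections.Generic;

class TreeNode
{
    public string Name;
    public List<TreeNode> Children = new List<TreeNode>();
}

class Program
{
    // The recursion happens lazily; no work runs until the caller
    // asks for the next element.
    static IEnumerable<TreeNode> Traverse(TreeNode node)
    {
        yield return node;
        foreach (var child in node.Children)
            foreach (var descendant in Traverse(child))
                yield return descendant;
    }

    static void Main()
    {
        var root = new TreeNode { Name = "root" };
        root.Children.Add(new TreeNode { Name = "a" });
        root.Children.Add(new TreeNode { Name = "b" });

        int visited = 0;
        foreach (var node in Traverse(root))
        {
            visited++;
            if (visited == 2) break; // "cancel": just stop iterating
        }
        Console.WriteLine(visited);
    }
}
```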
Well, it seems to me that you need to propagate the "stop now" state down the recursive calls. You could have some sort of cancellation token which you pass down the recursive calls, and also keep hold of in the UI thread. Something as simple as this:
public class CancellationToken
{
    private volatile bool cancelled;

    public bool IsCancelled { get { return cancelled; } }
    public void Cancel() { cancelled = true; }
}
(I'm getting increasingly wary of volatility and lock-free coding; I would be tempted to use a lock here instead of a volatile variable, but I've kept it here for the sake of simplicity.)
So you'd create the cancellation token, pass it in, and then at the start of each recursive method call you'd have:
if (token.IsCancelled)
{
    return null; // Or some other dummy value, or throw an exception
}
Then you'd just call Cancel() from the UI thread. Basically it's just a way of sharing the state of "should this task continue".
The choice of whether to propagate a dummy return value back or throw an exception is an interesting one. In some ways this isn't exceptional - you must be partially expecting it, or you wouldn't pass the cancellation token in the first place - but at the same time exceptions have the behaviour you want in terms of unwinding the stack to somewhere that can recognise the cancellation easily.
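Putting the pieces together, a recursive walk that checks the token at each level might look like this (CountDown is a hypothetical stand-in for the recursive DLL call, and the cancellation is triggered from within the walk only to keep the demo single-threaded; in the real application the UI thread would call Cancel):

```csharp
using System;
using System.Threading;

public class CancellationToken
{
    private volatile bool cancelled;
    public bool IsCancelled { get { return cancelled; } }
    public void Cancel() { cancelled = true; }
}

class Program
{
    // A hypothetical recursive traversal that honours the token.
    static int CountDown(int depth, CancellationToken token)
    {
        if (token.IsCancelled) return -1; // dummy value signalling "cancelled"
        if (depth == 0) return 0;
        if (depth == 5) token.Cancel();   // simulate a cancel arriving mid-walk
        return CountDown(depth - 1, token);
    }

    static void Main()
    {
        var token = new CancellationToken();
        Console.WriteLine(CountDown(10, token)); // unwinds early with the dummy value
        Console.WriteLine(token.IsCancelled);
    }
}
```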
I like the previous answers, but here's another.
I think you're asking how to have a different cancel flag for each thread.
Assuming that the threads which you might want to cancel each have some kind of ThreadId then, instead of having a single global 'CodeEnabled' flag, you could have a global thread-safe dictionary of flags, where the TheadId values are used as the dictionary's keys.
A thread would then query the dictionary to see whether its flag has been set.
I've got an abstract class that spawns an infinitely-looping thread in its constructor. What's the best way to make sure this thread is aborted when the class is done being used?
Should I implement IDisposable and simply use this?
public void Dispose()
{
    this.myThread.Abort();
}
I read that Abort() is evil. Should I instead have Dispose() set a private bool flag that the thread checks for true to exit its loop?
public void Dispose()
{
    this.abort = true;
}

// in the thread's loop...
if (this.abort)
{
    break;
}
Use the BackgroundWorker class instead?
I would like to expand on the answer provided by "lc" (which is otherwise great).
To use his approach you also need to mark the boolean flag as volatile. That introduces a memory barrier, which ensures that each time your background thread reads the variable it grabs it from memory (as opposed to a register), and that when the variable is written the data is transferred between CPU caches.
Instead of having an infinite loop, use a boolean flag that can be set by the main class as the loop condition. When you're done, set the flag so the loop can gracefully exit. If you implement IDisposable, set the flag and wait for the thread to terminate before returning.
You could implement a cancellable BackgroundWorker class, but essentially it will accomplish the same thing.
If you really want, you can give the thread a window of time to finish after sending the signal. If it still doesn't terminate, you can abort it, like Windows does on shutdown.
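A sketch of that combination - a volatile flag, a graceful wait in Dispose, and Abort only as a last resort (the Worker class and its loop body are illustrative placeholders; in the demo the loop always exits within the window, so Abort is never actually reached):

```csharp
using System;
using System.Threading;

class Worker : IDisposable
{
    private volatile bool abort;   // volatile: see the note above
    private readonly Thread thread;

    public Worker()
    {
        thread = new Thread(Loop);
        thread.IsBackground = true;
        thread.Start();
    }

    private void Loop()
    {
        while (!abort)
        {
            Thread.Sleep(1); // stand-in for the real periodic work
        }
    }

    public void Dispose()
    {
        abort = true;
        // Give the loop a window to exit gracefully...
        if (!thread.Join(TimeSpan.FromSeconds(2)))
            thread.Abort(); // ...and only then resort to Abort.
    }
}

class Program
{
    static void Main()
    {
        using (var w = new Worker())
        {
            Thread.Sleep(50); // the "lifetime" of the owning class
        } // Dispose sets the flag and joins
        Console.WriteLine("stopped cleanly");
    }
}
```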
The reason I believe Thread.Abort is considered "evil" is that it leaves things in an undefined state. Unlike killing an entire process, however, the remaining threads continue to run and could run into problems.
I'd suggest the BackgroundWorker method. It's relatively simple to implement and cleans up well.