In my application I have a queue which fires notifications whenever there are any changes to it. When there are simultaneous operations on the queue, the event handler fires multiple times, and that's okay; what I don't want is multiple concurrent executions of the processing that the handler kicks off.
Below is the code for the event handler:
private async void NotificationQueue_Changed(object sender, EventArgs e)
{
    if (!IsQueueInProcess)
        await ProcessQueue();
}
In the ProcessQueue method I set IsQueueInProcess to true, and whenever it completes it is set back to false. Now, the problem is that when multiple event notifications fire simultaneously, multiple ProcessQueue calls start executing, which I don't want. I want to make sure that only one execution of ProcessQueue is in progress at any given time.
Given your statement that this event is raised whenever there are any changes to the queue, and that the queue can be used concurrently (i.e. there are multiple producers adding things to the queue), it seems likely to me that the best way to address this would be to abandon the event-based behavior altogether. Instead, use BlockingCollection<T>, with a thread dedicated to processing the queue via GetConsumingEnumerable(). That method will block the thread as long as the queue is empty, and will allow the thread to remove and process items in the queue any time any other thread adds something to it. The collection itself is thread-safe, so using it you would not require any additional thread synchronization (for the handling of the queue, that is; it's possible that processing an item involves thread interactions, but there's nothing in your question that describes that aspect, so I can't say anything one way or the other about that).
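If you go that route, a minimal consumer sketch might look like the following. QueueItem and ProcessItem are placeholders, since the question doesn't show the actual item type or processing logic, and you'd need using System.Collections.Concurrent and System.Threading:
// Minimal sketch only; "QueueItem" and ProcessItem stand in for the real item type and processing.
private readonly BlockingCollection<QueueItem> _queue = new BlockingCollection<QueueItem>();

private void StartConsumer()
{
    var consumer = new Thread(() =>
    {
        // Blocks while the collection is empty; yields items as producers add them,
        // until CompleteAdding() is called on the collection.
        foreach (QueueItem item in _queue.GetConsumingEnumerable())
        {
            ProcessItem(item);
        }
    });
    consumer.IsBackground = true;
    consumer.Start();
}

// Producers then just call _queue.Add(item); no event and no extra locking is needed.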
That said, taking the question literally, the simplest approach would be to include a semaphore:
private readonly SemaphoreSlim _semaphore = new SemaphoreSlim(1);

private async void NotificationQueue_Changed(object sender, EventArgs e)
{
    if (_semaphore.Wait(0))
    {
        try
        {
            await ProcessQueue();
        }
        finally
        {
            // Release even if ProcessQueue() throws, so processing isn't blocked forever.
            _semaphore.Release();
        }
    }
}
The above attempts to acquire the semaphore's lock. With a timeout of 0 milliseconds, it will return immediately even if the semaphore could not be acquired. The return value indicates whether the semaphore was successfully acquired or not.
In this way, as long as there is no outstanding queue-processing operation, the current event handler invocation can acquire the semaphore and will call the ProcessQueue() method. When that operation completes, the continuation will release the semaphore. Until that happens, no other invocation of the event handler will be able to acquire the semaphore, and thus will not initiate processing of the queue.
I'll note that nothing here guarantees a solution to threads racing with each other in a way that ensures the queue is always either empty or always has some processing operation acting on it. That's up to you: the ProcessQueue() method needs whatever synchronization is required to guarantee that if a thread has modified the queue and caused this event to be raised, and the current round of processing cannot observe that change, then that thread will not fail to initiate another round of processing.
Or put another way, you need to make sure that for any thread that is going to raise that event, either its change to the queue will be observed by the current processing operation, or that thread will initiate a new one.
There's not enough context in your question for anyone to be able to address that concern specifically. I will just point out that it's a common enough thing for someone to overlook when trying to implement this sort of system. IMHO, all the more reason to just have a dedicated thread using BlockingCollection<T> to consume elements added to the queue. :)
See also the related question How to avoid reentrancy with async void event handlers?. That is a slightly different question, in that its accepted answer ensures each invocation of the event handler eventually results in the operation it initiates being executed. Your scenario is simpler, since you simply want to skip initiating a new operation, but you may still find some useful insight there.
I agree with Peter that abandoning event-based notifications is the best solution, and that you should move to a producer/consumer queue. However, I recommend one of the TPL Dataflow blocks instead of BlockingCollection<T>.
In particular, ActionBlock<T> should work quite nicely:
// T here stands for whatever type the queue's items are. If ProcessQueueItem is an
// instance method, initialize this in the constructor instead, since C# field
// initializers can't reference instance members.
private readonly ActionBlock<T> notificationQueue = new ActionBlock<T>(async t =>
{
    await ProcessQueueItem(t);
});
By default, TPL Dataflow blocks have a concurrency limit of 1.
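For completeness, a rough sketch of how the producers might hand items to the block; the queueItem variable is an assumption for whatever your producers currently push onto the queue, and this requires the TPL Dataflow NuGet package (System.Threading.Tasks.Dataflow):
// Fire-and-forget post from synchronous producer code:
notificationQueue.Post(queueItem);

// Or, from async producer code (this also respects a BoundedCapacity, if one is configured):
await notificationQueue.SendAsync(queueItem);
Because the block's default MaxDegreeOfParallelism is 1, the posted items are processed one at a time in order, which is exactly the "only one execution at a time" behavior asked for.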
Related
I have a button with an async click handler that awaits an async method. Here's how it looks:
private async void Button1_OnClick(object sender, RoutedEventArgs e)
{
    await IpChangedReactor.UpdateIps();
}
Here's how IpChangedReactor.UpdateIps() looks:
public async Task UpdateIps()
{
    await UpdateCurrentIp();
    await UpdateUserIps();
}
It's async all the way down.
Now I have a DispatcherTimer that repeatedly awaits IpChangedReactor.UpdateIps() in its Tick event handler.
Let's say I clicked the button. The event handler awaits UpdateIps and returns to the caller, which means WPF continues doing other things. In the meantime, if the timer fires, it calls UpdateIps again, and now both invocations are running at the same time. The way I see it, that's similar to using two threads. Can race conditions happen? (Part of me says no, because it's all running on the same thread, but it's confusing.)
I know that async methods don't necessarily run on separate threads, but in this case it's pretty confusing.
If I used synchronous methods here, it would work as expected: the timer's Tick event would run only after the first call completed.
Can someone enlighten me?
Since both calls run on the UI thread, the code is "thread safe" in the traditional sense: there won't be any exceptions or corrupted data.
However, can there be logical race conditions? Sure. You could easily have this flow (or any other):
UpdateCurrentIp() - button
UpdateCurrentIp() - Timer
UpdateUserIps() - Timer
UpdateUserIps() - button
From the method names it doesn't seem to be a real issue, but that depends on the actual implementation of those methods.
Generally you can avoid these problems by synchronizing calls using a SemaphoreSlim, or an AsyncLock (How to protect resources that may be used in a multi-threaded or async environment?):
using (await _asyncLock.LockAsync())
{
await IpChangedReactor.UpdateIps();
}
In this case though, it seems that simply avoiding starting a new update when one is currently running is good enough:
if (_isUpdating) return;

_isUpdating = true;
try
{
    await IpChangedReactor.UpdateIps();
}
finally
{
    _isUpdating = false;
}
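To show how that guard would be shared, here is a sketch; Button1_OnClick is from the question, while IpChangedTimer_Tick and UpdateIpsGuardedAsync are invented names for illustration:
private bool _isUpdating;

private async void Button1_OnClick(object sender, RoutedEventArgs e)
{
    await UpdateIpsGuardedAsync();
}

private async void IpChangedTimer_Tick(object sender, EventArgs e)
{
    await UpdateIpsGuardedAsync();
}

private async Task UpdateIpsGuardedAsync()
{
    // Both handlers run on the UI thread, so a plain bool flag is enough here.
    if (_isUpdating) return;

    _isUpdating = true;
    try
    {
        await IpChangedReactor.UpdateIps();
    }
    finally
    {
        _isUpdating = false;
    }
}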
I can think of a number of ways to handle this issue.
1. Do not handle it
Like i3arnon says, it might not be a problem to have multiple calls to these methods running at the same time; it all depends on the implementation of the update methods. As you write, it's very much the same problem you face in real multi-threaded concurrency. If having multiple async operations in flight at once is not a problem for these methods, you can ignore the reentrancy issue.
2. Block the timer and wait for running tasks to finish
You can disable the timer, or block the calls into the event handler, while you know an async task is running. You can use a simple state field, or any kind of locking/signaling primitive, for this. This makes sure you only have a single operation running at any given time.
3. Cancel any ongoing async operations
If you want to cancel async operations that are already running, you can use a CancellationToken to stop them and then start a new operation. This is described here: How to cancel a Task in await?
This makes sense if the operation takes a long time to finish and you want to avoid spending time completing an operation that is already "obsolete".
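As a sketch of option 3, something like the following could work; it assumes UpdateIps has (or can be given) an overload that accepts a CancellationToken, which the question's version doesn't show, and IpChangedTimer_Tick is an invented handler name:
private CancellationTokenSource _cts;

private async void IpChangedTimer_Tick(object sender, EventArgs e)
{
    if (_cts != null)
        _cts.Cancel();                 // cancel the previous update, if any

    _cts = new CancellationTokenSource();
    try
    {
        await IpChangedReactor.UpdateIps(_cts.Token);
    }
    catch (OperationCanceledException)
    {
        // This update was superseded by a newer one; nothing to do.
    }
}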
4. Queue the requests
If it's important to actually run all the updates, and you need synchronization, you can queue the tasks and work them off one by one. Consider adding some sort of backpressure handling if you go down this route.
I have a WPF (MVVM) project where I have multiple view-models, each with a button that launches a different analysis on the same data source, which in this case is a file. The file cannot be shared, so if the buttons are pressed at nearly the same time the second call will fail.
I need a way to queue the button clicks so that each analysis can be run sequentially, but I can't seem to get it to work. I tried using a static Semaphore, SemaphoreSlim and Mutex, but they appear to stop everything (the Wait() function appears to block the currently running analysis). I tried a lock() command with a static object but it didn't seem to block either event (I get the file share error). I also tried a thread pool (with a max concurrent thread count of 1), but it gives threading errors updating the UI (this may be solvable with Invoke() calls).
My question is what might be considered best practice in this situation with WPF?
EDIT: I created a mockup which exhibits the problem I'm having. It is at http://1drv.ms/1s4oQ1T.
What you need here is an asynchronous queue, so that you can enqueue these tasks without actually having anything blocking your threads. SemaphoreSlim actually has a WaitAsync method that makes creating such a queue rather simple:
public class TaskQueue
{
    private readonly SemaphoreSlim semaphore;

    public TaskQueue()
    {
        semaphore = new SemaphoreSlim(1);
    }

    public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            return await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }

    public async Task Enqueue(Func<Task> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
This allows you to enqueue operations that will be all executed sequentially, rather than in parallel, and without blocking any threads at any time. The operations can also be any type of asynchronous operation, whether that is CPU bound work in another thread, IO bound work, etc.
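As a possible usage sketch from the view-models: the shared TaskQueue instance and the RunAnalysisAsync method below are placeholders for however your view-models actually share state and run their work.
// Shared by all view-models so the analyses serialize against each other.
private static readonly TaskQueue AnalysisQueue = new TaskQueue();

private async void AnalyzeButton_Click(object sender, RoutedEventArgs e)
{
    await AnalysisQueue.Enqueue(() => RunAnalysisAsync());
    // Execution resumes on the UI thread here, so updating bound properties is safe.
}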
I would do two things to solve this problem:
First, encapsulate the analysis operations in a command pattern. If you aren't familiar with it, the simplest implementation is an interface with a single function Execute. When you want to perform an analysis operation, just create one of these. You could also use the built-in ICommand interface to help, but be aware that this interface has more to it than the generic command pattern.
Of course, creation is only half the battle, so after doing so I would add it to a BlockingCollection. This collection is .NET's solution to the Producer-Consumer problem. Have a background thread that consumes this collection (executing the command objects contained within) using a foreach on the collection's GetConsumingEnumerable method and your buttons will "feed" it.
foreach (var item in bc.GetConsumingEnumerable())
{
    item.Execute();
}
MSDN for Blocking Collection: http://msdn.microsoft.com/en-us/library/dd267312(v=vs.110).aspx
Now, all the semaphores, waits, etc. are done for you, and you can just add an operation to the queue (if it needs to be a queue, consider using ConcurrentQueue as the backing collection for BlockingCollection) and return on the UI thread. The background thread will pick the task up and run it.
You will need to Invoke any UI updates from the background thread of course, no getting around that issue :).
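A rough sketch of that wiring is below; IAnalysisCommand and the AnalysisWorker class are invented names for illustration, not an existing API, and this needs using System.Collections.Concurrent and System.Threading:
public interface IAnalysisCommand
{
    void Execute();
}

public static class AnalysisWorker
{
    // Backed by a FIFO ConcurrentQueue<T> by default, so commands run in the order they are added.
    public static readonly BlockingCollection<IAnalysisCommand> Queue =
        new BlockingCollection<IAnalysisCommand>();

    // Call once, e.g. at application startup.
    public static void Start()
    {
        var thread = new Thread(() =>
        {
            foreach (var command in Queue.GetConsumingEnumerable())
            {
                command.Execute();   // one analysis at a time
            }
        });
        thread.IsBackground = true;
        thread.Start();
    }
}

// A button handler then just does something like:
// AnalysisWorker.Queue.Add(new SomeAnalysisCommand(...));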
I'd recommend a queue, in a scheduling object shared by the view-models, with a consumer task that waits on the queue to have an item added to it. When a button is pressed, the view-model adds a work item to the queue. The consumer task takes one item from the queue each time, does the analysis contained in the work item, and then checks the queue for another item, waiting for more work items to be added if there are no work items to be processed.
I'm new to C# & threading and I recently started working on a utility that's using multiple threads. I have some event handling logic being done by one thread, and then a GUI on a separate thread that is observing the event handler and receiving notifications when new events are received.
When the GUI is manually closed by the user I detach it so that it's no longer observing the event handler. However, the next time the event handler receives an event it thinks it still has something in its list of observers. I added some printouts/breakpoints and it seems to go into NotifyObservers and hit the foreach loop, then go into the Detach method and empty the observer list; then, when it goes back into NotifyObservers, the observer it tries to access has already been disposed and it gets an exception.
I saw on this page that you're supposed to use locks to prevent race conditions from occurring and I tried using one on the observers list before the foreach in NotifyObservers and it still gets the exception. I'm thinking it might have something to do with locking not being able to prevent the GUI from closing on the other thread, so the other thread does not wait when I try to lock, but I'm new to this so I'm not really sure. I tried throwing a bunch of other locks around in these methods as well and nothing seemed to have any effect.
I've included the code for the three methods involved below. Detach and NotifyObservers are in my event handler, and HandleClosing is in my observer:
protected void HandleClosing(object sender, EventArgs e)
{
    handler.Detach(this);
}

public void Detach(SubscriberObserver observer)
{
    observers.Remove(observer);
}

public void NotifyObservers()
{
    foreach (SubscriberObserver observer in observers)
    {
        observer.Invoke(new Action(() => { observer.Notify(); }));
    }
}
I don't know what type your observer collection has, but I assume it's some kind of thread-safe collection, which might behave the following way when you iterate over it with the foreach loop: it locks itself, creates an IEnumerable copy of itself, and then unlocks itself. The iteration then runs over the elements of the copy, so removing an element from the collection after the copy has been created doesn't matter; the loop will still encounter the removed element.
To fix your race condition you need to hold a lock on the collection for the whole iteration, and you should also perform the removal inside a lock on the same object. You can create a lock object for this sole purpose, or you can lock on ICollection.SyncRoot if your collection implements it.
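A minimal sketch of that locking, using a dedicated lock object (the field name is illustrative); it uses BeginInvoke rather than Invoke so this thread doesn't sit inside the lock waiting for the UI thread, which might itself be waiting for the same lock in Detach (see the note on BeginInvoke below):
private readonly object observersLock = new object();

public void Detach(SubscriberObserver observer)
{
    lock (observersLock)
    {
        observers.Remove(observer);
    }
}

public void NotifyObservers()
{
    lock (observersLock)
    {
        foreach (SubscriberObserver observer in observers)
        {
            // Asynchronous dispatch: does not block while the UI thread is busy.
            observer.BeginInvoke(new Action(() => observer.Notify()));
        }
    }
}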
If observer is a Control, you might be encountering a deadlock when calling Invoke. Try calling BeginInvoke instead. A quote from MSDN: "The difference between the two methods is that a call to Invoke is a blocking call while a call to BeginInvoke is not. In most cases it is more efficient to call BeginInvoke because the secondary thread can continue to execute without having to wait for the primary UI thread to complete its work updating the user interface."
Often in my code I start threads which basically look like this:
void WatchForSomething()
{
    while (true)
    {
        if (SomeCondition)
        {
            // Raise event to handle the condition
            OnSomeCondition();
        }
        Thread.Sleep(100);
    }
}
just to know whether some condition is true or not (for example, if I have a badly coded library with no events, just boolean variables, and I need a "live view" of them).
I wonder if there is a better way to accomplish this kind of work, like a Windows function I can hook into that runs my methods every x seconds. Or should I code a global event for my app that is raised every x seconds and let it call my methods like this:
// Event from Windows or self-made
TicEvent += new TicEventHandler(WatchForSomething);
and then this method:
void WatchForSomething()
{
    if (SomeCondition)
    {
        // Raise event to handle the condition
        OnSomeCondition();
    }
}
So, I hope this doesn't get closed for being a "subjective question" or something; I just want to know what the best practice for this kind of work is.
There isn't necessarily a "best way" to write long-running event processing code. It depends on what kind of application you are developing.
The first example you show is the idiomatic way in which you would often see the main method of a long-running thread written. While it's generally desirable to use a mutex or waitable event synchronization primitive rather than a call to Sleep() - it is otherwise a typical pattern used to implement event processing loops. The benefit of this approach is that it allows specialized processing to run on a separate thread - allowing your application's main thread to perform other tasks or remain responsive to user input. The downside of this approach is that it may require the use of memory barriers (such as locks) to ensure that shared resources are not corrupted. It also makes it more difficult to update your UI, since you must generally marshal such calls back to the UI thread.
The second approach is often used as well - particularly in systems that already have an event-driven API such as WinForms, WPF, or Silverlight. Using a timer object or the Idle event is the typical way to make periodic background checks if there is no user-initiated event that triggers your processing. The benefit here is that it's easy to interact with and update user interface objects (since they are directly accessible from the same thread), and it mitigates the need for locks and mutexes to protect data. One potential downside of this approach is that if the processing to be performed is time-consuming, it can make your application unresponsive to user input.
If you are not writing applications that have a user interface (such as services) then the first form is used much more often.
As an aside ... when possible, it's better to use a synchronization object like an EventWaitHandle or Semaphore to signal when work is available to be processed. This allows you to avoid using Thread.Sleep and/or Timer objects. It reduces the average latency between when work is available to be performed and when event processing code is triggered, and it minimizes the overhead of using background threads, since they can be more efficiently scheduled by the runtime environment and won't consume any CPU cycles until there's work to do.
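A sketch of such a signal-driven loop is below; it assumes the producing side can call SignalWork() whenever there is something new to check, which may not be possible with a library that only exposes boolean flags, and it needs using System.Threading:
private readonly AutoResetEvent workAvailable = new AutoResetEvent(false);

// Called by whatever code makes new work available.
public void SignalWork()
{
    workAvailable.Set();
}

private void WatchForSomething()
{
    while (true)
    {
        workAvailable.WaitOne();   // sleeps without polling until signaled
        if (SomeCondition)
        {
            OnSomeCondition();
        }
    }
}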
It's also worth mentioning that if the processing you do is in response to communications with external sources (MessageQueues, HTTP, TCP, etc) you can use technologies like WCF to provide the skeleton of your event handling code. WCF provides base classes that make it substantially easier to implement both Client and Server systems that asynchronously respond to communication event activity.
If you have a look at Reactive Extensions, it provides an elegant way of doing this using the observable pattern.
var timer = Observable.Interval(TimeSpan.FromMilliseconds(100));
timer.Subscribe(tick => OnSomeCondition());
A nice thing about observables is the ability to compose and combine further observables from existing ones, and even use LINQ expressions to create new ones. For example, if you wanted to have a second timer that was in sync with the first, but only triggering every 1 second, you could say
var seconds = from tick in timer where tick % 10 == 0 select tick;
seconds.Subscribe(tick => OnSomeOtherCondition());
By the way, Thread.Sleep is probably never a good idea.
A basic problem with Thread.Sleep that people are usually not aware of, is that the internal implementation of Thread.Sleep does not pump STA messages. The best and easiest alternative, if you have to wait a given time and can't use a kernel sync object, is to replace Thread.Sleep with Thread.Join on the current thread, with the wanted timeout. Thread.Join will behave the same, i.e. the thread would wait the wanted time, but in the meantime STA objects will be pumped.
Why is this important? A somewhat detailed explanation follows.
Sometimes, without you even knowing it, one of your threads may have created an STA COM object. (For example, this sometimes happens behind the scenes when you use Shell APIs.) Now suppose a thread of yours has created an STA COM object and is now in a call to Thread.Sleep.
If at some point the COM object has to be deleted (which can happen at an unexpected time, driven by the GC), the finalizer thread will try calling the object's destructor. This call will be marshalled to the object's STA thread, which is blocked.
Now, in fact, you have a blocked finalizer thread. In this situation objects can't be freed from memory, and bad things will follow.
So the bottom line: Thread.Sleep=bad. Thread.Join=reasonable alternative.
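Concretely, the replacement suggested above is just a one-liner:
// Instead of Thread.Sleep(100):
Thread.CurrentThread.Join(100);   // waits ~100 ms while still pumping STA messages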
The first example you show is a rather inelegant way to implement a periodic timer. .NET has a number of timer objects that make this kind of thing almost trivial. Look into System.Windows.Forms.Timer, System.Timers.Timer and System.Threading.Timer.
For example, here's how you'd use a System.Threading.Timer to replace your first example:
System.Threading.Timer MyTimer = new System.Threading.Timer(CheckCondition, null, 100, 100);
void CheckCondition(object state)
{
if (SomeCondition())
{
OnSomeCondition();
}
}
That code will call CheckCondition every 100 milliseconds (or thereabouts).
You don't provide a lot of background on why you're doing this or what you're trying to accomplish, but if it's possible, you might want to look into creating a Windows service.
Use a BackgroundWorker for an additional measure of thread safety:
BackgroundWorker bw = new BackgroundWorker();
bw.WorkerSupportsCancellation = true;
bw.WorkerReportsProgress = true;
.
.
.
private void bw_DoWork(object sender, DoWorkEventArgs e)
{
    BackgroundWorker worker = sender as BackgroundWorker;
    for (;;)
    {
        if (worker.CancellationPending == true)
        {
            e.Cancel = true;
            break;
        }
        else
        {
            // Perform a time consuming operation and report progress.
            System.Threading.Thread.Sleep(100);
        }
    }
}
For more info visit: http://msdn.microsoft.com/en-us/library/cc221403%28v=vs.95%29.aspx
A very simple way of waiting without blocking other threads/tasks is:
(new ManualResetEvent(false)).WaitOne(500); //Waits 500ms
We have an Ultrasound machine application where the Ultrasound object is currently created on the UI thread. A singleton implementation would have been good here, but regardless, it isn't one.
Recently, the set methods changed so that they automatically stop and restart the ultrasound machine, which can take between 10-100 ms depending on the state of the machine. For most cases this isn't too bad a problem, but it's still causing the UI thread to block for up to 100 ms. Additionally, these methods are not thread-safe and must be called on the same thread where the object was initialized.
The largest issue this is now causing is unresponsive buttons in the UI, especially sliders, which may try to update variables many times as you drag the bar. As a result, sliders especially will stutter and update very slowly as they make many set calls through data-bound properties.
What is a good way to create a thread specifically for the creation and work for this Ultrasound object, which will persist through the lifetime of the application?
A current temporary workaround involves spawning a Timer and invoking a parameter update once we detect the slider hasn't moved for 200 ms. However, a Timer would then have to be implemented for every slider, which seems like a very messy solution that fixes the unresponsive sliders but still blocks the UI thread occasionally.
One thing that's really great about programming the GUI is that you don't have to worry about multiple threads mucking things up for you (assuming you've got CheckForIllegalCrossThreadCalls = true, as you should). It's all single-threaded, operating by means of a message pump (queue) that processes incoming messages one-by-one.
Since you've indicated that you need to synchronize method calls that are not written to be thread-safe (totally understandable), there's no reason you can't implement your own message pump to deal with your Ultrasound object.
A naive, very simplistic version might look something like this (the BlockingCollection<T> class is great if you're on .NET 4.0 or have installed Rx extensions; otherwise, you can just use a plain vanilla Queue<T> and do your own locking). Warning: this is just a quick skeleton I've thrown together just now; I make no promises as to its robustness or even correctness.
class MessagePump<T>
{
    // In your case you would set this to your Ultrasound object.
    // You could just as easily design this class to be "object-agnostic";
    // but I think that coupling an instance to a specific object makes it clearer
    // what the purpose of the MessagePump<T> is.
    private T _obj;

    private BlockingCollection<Action<T>> _workItems;

    private Thread _thread;

    public MessagePump(T obj)
    {
        _obj = obj;

        // Note: the default underlying data store for a BlockingCollection<T>
        // is a FIFO ConcurrentQueue<T>, which is what we want.
        _workItems = new BlockingCollection<Action<T>>();

        _thread = new Thread(ProcessQueue);
        _thread.IsBackground = true;
        _thread.Start();
    }

    public void Submit(Action<T> workItem)
    {
        _workItems.Add(workItem);
    }

    private void ProcessQueue()
    {
        for (;;)
        {
            Action<T> workItem = _workItems.Take();
            try
            {
                workItem(_obj);
            }
            catch
            {
                // Put in some exception handling mechanism so that
                // this thread is always running. One idea would be to
                // raise an event containing the Exception object on a
                // threadpool thread. You definitely don't want to raise
                // the event from THIS thread, though, since then you
                // could hit ANOTHER exception, which would defeat the
                // purpose of this catch block.
            }
        }
    }
}
Then what would happen is: every time you want to interact with your Ultrasound object in some way, you do so through this message pump, by calling Submit and passing in some action that works with your Ultrasound object. The Ultrasound object then receives all messages sent to it synchronously (by which I mean, one at a time), while operating on its own non-GUI thread.
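A hypothetical usage sketch, where the Ultrasound type and its SetGain/SetDepth methods are placeholders for whatever the real object exposes:
var ultrasoundPump = new MessagePump<Ultrasound>(ultrasound);

// From the UI thread, e.g. in a slider's value-changed handler:
ultrasoundPump.Submit(u => u.SetGain(newGain));
ultrasoundPump.Submit(u => u.SetDepth(newDepth));

// Each submitted action runs one at a time on the pump's dedicated thread,
// so the UI thread never blocks on the 10-100 ms stop/restart.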
You should maintain a dedicated UltraSound thread, which creates the UltraSound object and then listens for callbacks from other threads.
You should maintain a thread-safe queue of delegates and have the UltraSound thread repeatedly execute and remove the first delegate in the queue.
This way, the UI thread can post actions to the queue, which will then be executed asynchronously by the UltraSound thread.
I'm not sure I fully understand the setup, but here is my attempt at a solution:
How about having the event handler for the slider check the time of the last event and wait 50 ms before processing a user adjustment, only processing the most recent value?
Then have a thread with a while loop that waits on an AutoResetEvent triggered from the GUI; it would then create the object and apply the setting.