C# event handling: best practice to avoid thread contention and threadpool draining

When events fire, their handlers run on threads from the thread pool. So if you have a bunch of events that fire faster than their handlers return, you drain your thread pool. Whenever an event handler method has no other control limiting the rate of threads entering, has no guarantee of returning quickly, and isn't painstakingly implemented as 100% thread-safe code, it's probably best to add some thread control. The obvious simple thing would be to lock() inside the event handler, but if you do that, every thread after the first blocks in a queue waiting to enter the lock region, hogging threads from the pool. It is probably better to detect that another thread is already inside the method and quickly abort instead.
The question is: I have a way of detecting another thread already running, and quickly aborting the subsequent threads. But it doesn't seem very C#-ish due to the use of "const" and manually handling a locking flag at a low level. Is there a better way?
This is basically a direct replication of the lock() functionality, but using a non-blocking Interlocked.Exchange, instead of using the blocking Monitor.Enter()
using System.Threading;

public class FooGoo
{
    private const int LOCKED = 0;            // could use any arbitrary value; I chose 0
    private const int UNLOCKED = LOCKED + 1; // any arbitrary value, != LOCKED

    private static int _myLock = UNLOCKED;

    void myEventHandler()
    {
        int previousValue = Interlocked.Exchange(ref _myLock, LOCKED);
        if (previousValue == UNLOCKED)
        {
            try
            {
                // some handling code, which may or may not return quickly
                // maybe not threadsafe
            }
            finally
            {
                _myLock = UNLOCKED;
            }
        }
        else
        {
            // another thread is executing right now. So I will abort.
            //
            // optional and environment-specific: maybe you want to
            // queue some event information or set a flag or something,
            // so you remember later that this thread aborted
        }
    }
}

So far, this is the best answer I have found. Does there exist any shorthand equivalent of a non-blocking lock() to shorten this up?
static readonly object _myLock = new object(); // must be initialized, or
                                               // Monitor.TryEnter(null) throws

void myMethod()
{
    if (Monitor.TryEnter(_myLock)) // returns immediately, lock taken or not
    {
        try
        {
            // Do stuff
        }
        finally
        {
            Monitor.Exit(_myLock);
        }
    }
    else
    {
        // then I failed to get the lock. Optionally do stuff.
    }
}
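As an aside (not from the original post): on .NET 4 and later, Monitor.TryEnter has an overload taking a ref bool that records whether the lock was taken, which stays correct even if an exception interrupts the attempt. A minimal sketch:

static readonly object _myLock = new object();

void myMethod()
{
    bool lockTaken = false;
    try
    {
        Monitor.TryEnter(_myLock, ref lockTaken); // returns immediately
        if (lockTaken)
        {
            // Do stuff
        }
        else
        {
            // failed to get the lock; optionally do stuff
        }
    }
    finally
    {
        if (lockTaken)
            Monitor.Exit(_myLock);
    }
}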

Related

Better approach to concurrently "do or wait and skip"

I wonder if there is a better solution for this task. One has a function which is called concurrently by some number of threads, but if some thread is already executing the code, the other threads should skip that part of the code and wait until that thread finishes the execution. Here is what I have for now:
int _flag = 0;
readonly ManualResetEventSlim Mre = new ManualResetEventSlim();

void Foo()
{
    if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
    {
        Mre.Reset();
        try
        {
            // do stuff
        }
        finally
        {
            Mre.Set();
            Interlocked.Exchange(ref _flag, 0);
        }
    }
    else
    {
        Mre.Wait();
    }
}
What I want to achieve is faster execution, lower overhead and prettier look.
You could use a combination of an AutoResetEvent and a Barrier to do this.
You can use the AutoResetEvent to ensure that only one thread enters a "work" method.
The Barrier is used to ensure that all the threads wait until the one that entered the "work" method has returned from it.
Here's some sample code:
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Demo
{
    class Program
    {
        const int TASK_COUNT = 3;
        static readonly Barrier barrier = new Barrier(TASK_COUNT);
        static readonly AutoResetEvent gate = new AutoResetEvent(true);

        static void Main()
        {
            Parallel.Invoke(task, task, task);
        }

        static void task()
        {
            while (true)
            {
                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the gate.");

                // This bool is just for test purposes to prevent the same thread from doing the
                // work every time!
                bool didWork = false;

                if (gate.WaitOne(0))
                {
                    work();
                    didWork = true;
                    gate.Set();
                }

                Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is waiting at the barrier.");
                barrier.SignalAndWait();

                if (didWork)
                    Thread.Sleep(10); // Give a different thread a chance to get past the gate!
            }
        }

        static void work()
        {
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is entering work()");
            Thread.Sleep(3000);
            Console.WriteLine(Thread.CurrentThread.ManagedThreadId + " is leaving work()");
        }
    }
}
However, it may well be that the Task Parallel Library has a better, higher-level solution. It's worth reading up on it a bit.
First of all, the waiting threads wouldn't do anything; they only wait, and after they get the signal from the event they simply move out of the method, so you should add a while loop (a sketch of this fix follows below). After that, you can use an AutoResetEvent instead of the manual one, as @MatthewWatson suggested. You may also consider a SpinWait inside the loop, which is a lightweight solution.
Second, why use an int when the flag field is boolean in nature?
Third, why not use simple locking, as @grrrrrrrrrrrrr suggested? That is exactly what you are doing here: forcing other threads to wait for one. If your code should be written by only one thread at a given time, but can be read by multiple threads, you can use the ReaderWriterLockSlim object for such synchronization.
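A minimal sketch of that first suggestion (my illustration, not the commenter's code): waiting threads loop and re-check the flag after waking, so a thread that arrives mid-run waits for the current execution instead of slipping through on a stale signal.

int _flag = 0;
// start signaled so early waiters don't block before the first run
readonly ManualResetEventSlim Mre = new ManualResetEventSlim(true);

void Foo()
{
    while (true)
    {
        if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
        {
            Mre.Reset();
            try
            {
                // do stuff
            }
            finally
            {
                Interlocked.Exchange(ref _flag, 0); // clear the flag first...
                Mre.Set();                          // ...then wake the waiters
            }
            return;
        }

        Mre.Wait(); // block until the working thread signals completion

        if (Thread.VolatileRead(ref _flag) == 0)
            return; // the run we waited for has finished; skip as intended
        // otherwise another thread has already re-entered: loop and wait again
    }
}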
What I want to achieve is faster execution, lower overhead and prettier look.
faster execution
Unless your "do stuff" is extremely fast, this code shouldn't have any major overhead.
lower overhead
Again, Interlocked.Exchange/CompareExchange have very low overhead, as does a manual-reset event.
prettier look
Correct multi-threaded C# code rarely looks pretty when compared to correct single-threaded C# code. The language idioms are just not there yet.
That said: if you have a really fast operation ("a few tens of cycles"), e.g. moving a linked-list head, then you can spin (although without knowing exactly what your code is doing, I can't say if this is correct):
if (Interlocked.CompareExchange(ref _flag, 1, 0) == 0)
{
    try
    {
        // do stuff that is very quick.
    }
    finally
    {
        Interlocked.Exchange(ref _flag, 0);
    }
}
else
{
    SpinWait.SpinUntil(() => _flag == 0);
}
The first thing that springs to mind is to change it to use a lock. This won't skip the code, but it will cause each thread that reaches it to pause while the first thread executes its stuff. This way the lock will also automatically be released in the case of an exception.
readonly object syncer = new object();

void Foo()
{
    lock (syncer)
    {
        // Do stuff
    }
}

Is there such a synchronization tool as "single-item-sized async task buffer"?

Many times in UI development I handle events in such a way that when an event first comes in, I immediately start processing, but if a processing operation is already in progress, I wait for it to complete before I process another event. If more than one event occurs before the operation completes, I only process the most recent one.
The way I typically do this is that my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put the current event arguments in another field that is basically a one-item-sized buffer, and when the current processing pass completes I check whether there is another event to process, looping until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it otherwise works fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
    private Func<Task> next = null;
    private bool isRunning = false;

    public async void Run(Func<Task> action)
    {
        if (isRunning)
            next = action;
        else
        {
            isRunning = true;
            try
            {
                await action();
                while (next != null)
                {
                    var nextCopy = next;
                    next = null;
                    await nextCopy();
                }
            }
            finally
            {
                isRunning = false;
            }
        }
    }

    private static Lazy<EventThrottler> defaultInstance =
        new Lazy<EventThrottler>(() => new EventThrottler());

    public static EventThrottler Default
    {
        get { return defaultInstance.Value; }
    }
}
Because the class is generally going to be used exclusively from the UI thread, there will usually need to be only one instance, so I added a convenience property for a default instance. But since it may still make sense to have more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
    public void SomeEventHandler(object sender, EventArgs args)
    {
        EventThrottler.Default.Run(async () =>
        {
            await Task.Delay(1000);
            // do other stuff
        });
    }
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to instance fields of the type in a lock block, but not actually executing the function inside a lock block. That last part is important, not just for performance (to ensure we're not blocking other callers from simply setting the next field), but also to avoid issues with that action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing stuff in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
    private object key = new object();
    private Func<Task> next = null;
    private bool isRunning = false;

    public void Run(Func<Task> action)
    {
        bool shouldStartRunning = false;
        lock (key)
        {
            if (isRunning)
                next = action;
            else
            {
                isRunning = true;
                shouldStartRunning = true;
            }
        }

        Action<Task> continuation = null;
        continuation = task =>
        {
            Func<Task> nextCopy = null;
            lock (key)
            {
                if (next != null)
                {
                    nextCopy = next;
                    next = null;
                }
                else
                {
                    isRunning = false;
                }
            }
            if (nextCopy != null)
                nextCopy().ContinueWith(continuation);
        };

        if (shouldStartRunning)
            action().ContinueWith(continuation);
    }
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
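To make the trampoline idea concrete, here is a minimal sketch (my own illustration, not Rx's actual CurrentThreadScheduler): the first call on a thread drains a thread-local queue, while nested calls merely enqueue and return.

using System;
using System.Collections.Generic;

public static class Trampoline
{
    [ThreadStatic]
    private static Queue<Action> queue;

    public static void Schedule(Action work)
    {
        if (queue != null)
        {
            queue.Enqueue(work); // trampoline already running on this
            return;              // thread: enqueue and return immediately
        }

        queue = new Queue<Action>();
        queue.Enqueue(work);
        try
        {
            while (queue.Count > 0)
                queue.Dequeue()(); // may re-entrantly enqueue more work
        }
        finally
        {
            queue = null; // mark the trampoline as stopped
        }
    }
}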
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of one, as every item added to a non-empty queue results in the existing item being evicted.
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
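For illustration, a minimal collapsing-queue sketch (my own; it assumes items are keyed by whatever makes them "equivalent", and it is not thread-safe; synchronization would be layered on top):

using System.Collections.Generic;

public class CollapsingQueue<TKey, TItem>
{
    private readonly LinkedList<TKey> order = new LinkedList<TKey>();
    private readonly Dictionary<TKey, TItem> items = new Dictionary<TKey, TItem>();

    public void Enqueue(TKey key, TItem item)
    {
        if (!items.ContainsKey(key))
            order.AddLast(key); // new key: append to the FIFO order
        items[key] = item;      // existing key: the newer item evicts the older
    }

    public bool TryDequeue(out TItem item)
    {
        if (order.Count == 0)
        {
            item = default(TItem);
            return false;
        }
        TKey key = order.First.Value;
        order.RemoveFirst();
        item = items[key];
        items.Remove(key);
        return true;
    }
}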
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
    var broadcastBlock = new BroadcastBlock<T>(null);
    var actionBlock = new ActionBlock<T>(
        executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
    broadcastBlock.LinkTo(actionBlock);
    return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
    i =>
    {
        Console.WriteLine(i);
        Thread.Sleep(i);
    });

processor(100);
processor(1);
processor(2);
Output:
100
2
(While 100 is being processed, 1 is buffered in the BroadcastBlock and then overwritten by 2, so the ActionBlock never sees it.)

Scheduling a method to run rather than lock in C#

I have a method (let's call it "CheckAll") that is called from multiple areas of my program, and can therefore be called for a 2nd time before the 1st time has completed.
To get around this I have implemented a "lock" that (if I understand it correctly), halts the 2nd thread until the 1st thread has completed.
However what I really want is for this 2nd call to return to the calling method immediately (rather than halt the thread), and to schedule CheckAll to be run again once it has completed the 1st time.
I could setup a timer to do this but that seems cumbersome and difficult. Is there a better way?
Easy/cheap implementation.
private readonly object SyncRoot = new object();
private Thread checkThread = null;
private int requests = 0;

void CheckAll()
{
    lock (SyncRoot)
    {
        if (checkThread != null && checkThread.ThreadState == ThreadState.Running)
        {
            requests++;
            return;
        }
        else
        {
            CheckAllImpl();
        }
    }
}

void CheckAllImpl()
{
    // start a new thread and run the following code in it.
    checkThread = new Thread(new ThreadStart(() =>
    {
        while (true)
        {
            // 1. Do whatever CheckAll needs to do.
            // 2.
            lock (SyncRoot)
            {
                requests--;
                if (!(requests > 0))
                    break;
            }
        }
    }));
    checkThread.Start();
}
Just on a side note, this can have some race conditions. A better implementation would be to use ConcurrentQueue, introduced in .NET 4, which handles all the threading craziness for you.
UPDATE: Here's a more 'cool' implementation using ConcurrentQueue (turns out we don't need TPL.)
public class CheckAllService
{
    // Make sure you don't create multiple
    // instances of this class. Make it a singleton.

    // Holds all the pending requests
    private ConcurrentQueue<object> requests = new ConcurrentQueue<object>();
    private object syncLock = new object();
    private Thread checkAllThread;

    /// <summary>
    /// Requests to Check All. This request is async,
    /// and will be serviced when all pending requests
    /// are serviced (if any).
    /// </summary>
    public void RequestCheckAll()
    {
        requests.Enqueue("Process this Scotty...");
        lock (syncLock)
        {   // Lock is to make sure we don't create multiple threads.
            if (checkAllThread == null ||
                checkAllThread.ThreadState != ThreadState.Running)
            {
                checkAllThread = new Thread(new ThreadStart(ListenAndProcessRequests));
                checkAllThread.Start();
            }
        }
    }

    private void ListenAndProcessRequests()
    {
        while (requests.Count != 0)
        {
            object thisRequestData;
            requests.TryDequeue(out thisRequestData);
            try
            {
                CheckAllImpl();
            }
            catch (Exception ex)
            {
                // TODO: Log error?
                // Can't afford to fail.
                // Failing the thread will cause all
                // waiting requests to delay until another
                // request comes in.
            }
        }
    }

    protected void CheckAllImpl()
    {
        throw new NotImplementedException("Check all is not gonna write it-self...");
        // TODO: Check All
    }
}
NOTE: I use a real Thread instead of a TPL Task because a Task doesn't hold on to a dedicated thread as an optimization; it runs on background thread-pool threads. That means that when your application closes, any waiting CheckAll requests are ignored. (I got bitten hard by this when I thought I was being smart by calling my logging methods in a Task once; it dropped a couple dozen log records when closing. The CLR checks and waits for any running foreground threads when gracefully exiting, but not for background ones.)
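A tiny sketch of the difference (my illustration; ProcessRemainingWork is a hypothetical method):

// A foreground Thread keeps the process alive until it finishes:
var t = new Thread(ProcessRemainingWork); // IsBackground defaults to false
t.Start();

// Task work runs on background thread-pool threads, which the CLR
// abandons at process exit, so this may be cut short:
Task.Factory.StartNew(ProcessRemainingWork);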
Happy Coding...
Use a separate thread to call CheckAll() in a loop that also waits on a semaphore. A 'PerformCheck()' method signals the semaphore.
Your system can then make as many calls to 'PerformCheck()' as it might wish, from any thread, and CheckAll() will be run exactly as many times as there are PerformCheck() calls, but with no blocking on PerformCheck().
No flags, no limits, no locking, no polling.
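A minimal sketch of this scheme (my own; the class shape and names are assumptions, not the answerer's code):

using System.Threading;

public class CheckScheduler
{
    // Max count is effectively unbounded, so no PerformCheck() signal is lost.
    private readonly Semaphore signal = new Semaphore(0, int.MaxValue);

    public CheckScheduler()
    {
        var worker = new Thread(() =>
        {
            while (true)
            {
                signal.WaitOne(); // sleep until someone requests a check
                CheckAll();       // runs once per PerformCheck() call
            }
        });
        // Background for brevity; use a foreground thread if pending
        // checks must complete before the process exits.
        worker.IsBackground = true;
        worker.Start();
    }

    // Never blocks: just releases the semaphore and returns.
    public void PerformCheck()
    {
        signal.Release();
    }

    private void CheckAll()
    {
        // ... the real work ...
    }
}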
You can set up a flag for this.
When CheckAll() runs, have it assign a flag variable (which may be global) a particular value at the end of the method. Say CheckAll() is called from a() and is then, immediately afterwards, going to be called from b(): when it was called from a(), CheckAll() sets the flaga variable at the end, and b() checks the value of flaga before making its own call. Something like this:
public void a()
{
    CheckAll();
}

public void b()
{
    // ...
    // (put a condition here that checks whether flaga was set to 1
    //  by the method CheckAll())
    CheckAll();
}

public void CheckAll()
{
    // ...
    flaga = 1;
}

how to obtain a lock in two places but release in one place?

I'm a newbie in C#. I need to obtain a lock in two methods, but release it in one method. Will that work?
public void obtainLock()
{
    Monitor.Enter(lockObj);
}

public void obtainReleaseLock()
{
    lock (lockObj)
    {
        // doStuff
    }
}
In particular, can I call obtainLock and then obtainReleaseLock? Is such "double locking" allowed in C#? These two methods are always called from the same thread; however, lockObj is used in another thread for synchronization.
Update: given all the comments, what do you think of code like this? Is it ideal?
public void obtainLock()
{
    if (needCallMonitorExit == false)
    {
        Monitor.Enter(lockObj);
        needCallMonitorExit = true;
    }
    // doStuff
}

public void obtainReleaseLock()
{
    try
    {
        lock (lockObj)
        {
            // doAnotherStuff
        }
    }
    finally
    {
        if (needCallMonitorExit == true)
        {
            needCallMonitorExit = false;
            Monitor.Exit(lockObj);
        }
    }
}
Yes, locks are "re-entrant", so a call can "double-lock" (your phrase) the lockObj. However, note that it needs to be released exactly as many times as it is taken; you will need to ensure that there is a corresponding "ReleaseLock" to match "ObtainLock".
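For instance, a hypothetical releaseLock to pair with obtainLock might look like this (my sketch, not the answerer's code):

public void releaseLock()
{
    // must run exactly once per obtainLock() call,
    // or the lock is never truly released
    Monitor.Exit(lockObj);
}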
I do, however, suggest it is easier to let the caller lock(...) on some property you expose, though:
public object SyncLock { get { return lockObj; } }
now the caller can (instead of obtainLock()):
lock (something.SyncLock)
{
    // ...
}
much easier to get right. Because this is the same underlying lockObj that is used internally, this synchronizes against either usage, even if obtainReleaseLock (etc) is used inside code that locked against SyncLock.
With the context clearer (comments), it seems that maybe Wait and Pulse are the way to do this:
void SomeMethodThatMightNeedToWait()
{
    lock (lockObj)
    {
        if (needSomethingSpecialToHappen)
        {
            Monitor.Wait(lockObj);
            // ^^^ this ***releases*** the lock (however many times needed), and
            // enters the pending-queue; when *another* thread "pulses", it
            // enters the ready-queue; when the lock is *available*, it
            // reacquires the lock (back to as many times as it held it
            // previously) and resumes work
        }
        // do some work, happy that something special happened, and
        // we have the lock
    }
}

void SomeMethodThatMightSignalSomethingSpecial()
{
    lock (lockObj)
    {
        // do stuff
        Monitor.PulseAll(lockObj);
        // ^^^ this moves **all** items from the pending-queue to the ready-queue;
        // note there is also Pulse(...) which moves a *single* item
    }
}
Note that when using Wait you might want to use the overload that accepts a timeout, to avoid waiting forever; note also it is quite common to have to loop and re-validate, for example:
lock (lockObj)
{
    while (needSomethingSpecialToHappen)
    {
        Monitor.Wait(lockObj);
        // at this point, we know we were pulsed, but maybe another waiting
        // thread beat us to it! re-check the condition, and continue; this might
        // also be a good place to check for some "abort" condition (and
        // remember to do a PulseAll() when aborting)
    }
    // do some work, happy that something special happened, and we have the lock
}
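For completeness, a sketch of the timeout overload mentioned above (my illustration): Monitor.Wait returns false if no pulse arrived within the timeout (the lock is still reacquired before the method returns), letting you bail out instead of waiting forever.

lock (lockObj)
{
    while (needSomethingSpecialToHappen)
    {
        if (!Monitor.Wait(lockObj, TimeSpan.FromSeconds(30)))
        {
            // no pulse within 30 seconds; we hold the lock again here,
            // so we can safely give up (or log, retry, etc.)
            throw new TimeoutException("No pulse received within 30 seconds.");
        }
    }
    // do some work, happy that something special happened, and we have the lock
}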
You would have to use the Monitor for this functionality. Note that you open yourself up to deadlocks and race conditions if you aren't careful with your locks; having them taken and released in separate areas of code can be risky.
Monitor.Exit(lockObj);
Only one owner can hold the lock at a given time; it is exclusive. While the locking can be chained, the more important component is making sure you obtain and release the lock the proper number of times, avoiding difficult-to-diagnose threading issues.
When you wrap your code via lock { ... }, you are essentially calling Monitor.Enter and Monitor.Exit as scope is entered and departed.
When you explicitly call Monitor.Enter you are obtaining the lock and at that point you would need to call Monitor.Exit to release the lock.
This doesn't work.
The code
lock (lockObj)
{
    // do stuff
}
is translated to something like
Monitor.Enter(lockObj);
try
{
    // do stuff
}
finally
{
    Monitor.Exit(lockObj);
}
That means that your code enters the lock twice but releases it only once. According to the documentation, the lock is only really released by the thread if Exit is called as many times as Enter, which is not the case in your code.
Summary: your code will not deadlock on the call to obtainReleaseLock, but the lock on lockObj will never be released by the thread. You would need an explicit call to Monitor.Exit(lockObj) so that the number of calls to Monitor.Enter matches the number of calls to Monitor.Exit.

.NET 3.5 C# does not offer what I need for locking: Count async saves until 0 again

I have some records that I want to save to the database asynchronously. I organize them into batches, then send them. As time passes, the batches are processed.
In the meanwhile, the user can keep working. There are some critical operations that I want to lock him out of while any save batch is still running asynchronously.
The save is done using a TableServiceContext and method .BeginSave() - but I think this should be irrelevant.
What I want to do is increase a lock count whenever an async save is started, and decrease it when the save completes, so that the count is zero as soon as all saves have finished. I want to lock out the critical operation as long as the count is not zero. Furthermore, I want to qualify the lock, by business object for example.
I did not find a .NET 3.5 C# locking mechanism that fulfils this requirement. A semaphore does not have a method to check whether the count is 0; otherwise a semaphore with an unlimited max count would do.
Actually the Semaphore does have a method for checking to see if the count is zero: use the WaitOne method with a zero timeout. It will return a value indicating whether the semaphore was acquired. If it returns false then it was not acquired, which implies that its count is zero.
var s = new Semaphore(5, 5);

while (s.WaitOne(0))
{
    Console.WriteLine("acquired");
}

Console.WriteLine("no more left to acquire");
I'm presuming that when you say lock the user out, it is not a literal "lock" whilst the operations are completed as this would block the UI thread and freeze the application when the lock was encountered. I'm assuming you mean that some state can be checked so that UI controls can be disabled/enabled.
Something like the following code could be used which is lock free:
public class BusyState
{
    private int isBusy;

    public void SignalTaskStarted()
    {
        Interlocked.Increment(ref isBusy);
    }

    public void SignalTaskFinished()
    {
        if (Interlocked.Decrement(ref isBusy) < 0)
        {
            throw new InvalidOperationException("No tasks started.");
        }
    }

    public bool IsBusy()
    {
        return Thread.VolatileRead(ref isBusy) > 0;
    }
}

public class BusinessObject
{
    private readonly BusyState busyState = new BusyState();

    public void Save()
    {
        //Raise a "Started" event to disable UI controls...

        //Start a few async tasks which call CallbackFromAsyncTask when finished.
        //Start task 1
        busyState.SignalTaskStarted();
        //Start task 2
        busyState.SignalTaskStarted();
        //Start task 3
        busyState.SignalTaskStarted();
    }

    private void CallbackFromAsyncTask()
    {
        busyState.SignalTaskFinished();
        if (!busyState.IsBusy())
        {
            //Raise a "Completed" event to enable UI controls...
        }
    }
}
The counting aspect is encapsulated in BusyState, which is then used in the business object to signal tasks starting and stopping. The Started and Completed events can be hooked to disable and enable UI controls, locking the user out whilst the async operations complete.
There are obviously loads of caveats here for handling error conditions, etc. So just some basic outline code...
What is the purpose of the lock count if the only logic involves whether or not the value is non-zero?
If you want to do this on a type-by-type basis, you could take this approach:
public class BusinessObject1
{
    private static readonly object lockObject = new object();
    public static object SyncRoot { get { return lockObject; } }
}
(following the same pattern for other business objects)
If you then enclose your save and your critical operations in a block like this:
lock (BusinessObject1.SyncRoot)
{
    // do work
}
You will make saving and the critical operations mutually exclusive tasks.
Since you wanted it granular, you can cascade the locks like this (just make sure every code path acquires them in the same order, or you risk deadlock):
lock (BusinessObject1.SyncRoot)
lock (BusinessObject2.SyncRoot)
lock (BusinessObject3.SyncRoot)
{
    // do work
}
