I have a method (let's call it "CheckAll") that is called from multiple areas of my program, and can therefore be called for a 2nd time before the 1st time has completed.
To get around this I have implemented a "lock" that (if I understand it correctly), halts the 2nd thread until the 1st thread has completed.
However what I really want is for this 2nd call to return to the calling method immediately (rather than halt the thread), and to schedule CheckAll to be run again once it has completed the 1st time.
I could set up a timer to do this, but that seems cumbersome and difficult. Is there a better way?
Easy/cheap implementation.
private Thread checkThread = null;
private int requests = 0;

void CheckAll()
{
    lock (SyncRoot)
    {
        if (checkThread != null && checkThread.ThreadState == ThreadState.Running)
        {
            requests++;
            return;
        }
        else
        {
            CheckAllImpl();
        }
    }
}
void CheckAllImpl()
{
    // Start a new thread and run the check loop in it.
    checkThread = new Thread(new ThreadStart(() =>
    {
        while (true)
        {
            // 1. Do whatever CheckAll needs to do.
            // 2. Then see whether more requests arrived in the meantime.
            lock (SyncRoot)
            {
                requests--;
                if (!(requests > 0))
                    break;
            }
        }
    }));
    checkThread.Start();
}
Just as a side note, this can have some race conditions. A better implementation would be to use ConcurrentQueue, introduced in .NET 4, which handles all the threading craziness for you.
UPDATE: Here's a more 'cool' implementation using ConcurrentQueue (turns out we don't need the TPL).
public class CheckAllService
{
    // Make sure you don't create multiple
    // instances of this class. Make it a singleton.

    // Holds all the pending requests.
    private ConcurrentQueue<object> requests = new ConcurrentQueue<object>();

    private object syncLock = new object();
    private Thread checkAllThread;

    /// <summary>
    /// Requests a Check All. This request is async,
    /// and will be serviced after all pending requests
    /// have been serviced (if any).
    /// </summary>
    public void RequestCheckAll()
    {
        requests.Enqueue("Process this Scotty...");
        lock (syncLock)
        {   // The lock makes sure we don't create multiple threads.
            if (checkAllThread == null ||
                checkAllThread.ThreadState != ThreadState.Running)
            {
                checkAllThread = new Thread(new ThreadStart(ListenAndProcessRequests));
                checkAllThread.Start();
            }
        }
    }

    private void ListenAndProcessRequests()
    {
        object thisRequestData;
        while (requests.TryDequeue(out thisRequestData))
        {
            try
            {
                CheckAllImpl();
            }
            catch (Exception ex)
            {
                // TODO: Log error?
                // Can't afford to fail.
                // Failing the thread would delay all
                // waiting requests until another
                // request comes in.
            }
        }
    }

    protected void CheckAllImpl()
    {
        throw new NotImplementedException("Check all is not gonna write itself...");
        // TODO: Check All
    }
}
NOTE: I use a real Thread instead of a TPL Task because, as an optimization, a Task doesn't hold on to a dedicated thread. Without a real (foreground) thread, any CheckAll requests still waiting when your application closes are simply ignored. (I got bitten hard by this when I thought I was so smart calling my logging methods in a Task once; it silently dropped a couple of dozen log records at shutdown. The CLR waits for any running foreground threads when exiting gracefully, but not for pending thread-pool work.)
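To illustrate the point, here is a small sketch of mine (not from the original answer; assumes a console app). A foreground Thread keeps the process alive until it finishes, while thread-pool work started via Task.Run can be abandoned when Main returns:
using System;
using System.Threading;
using System.Threading.Tasks;

class ForegroundVsTask
{
    static void Main()
    {
        // Thread-pool work (background): abandoned at process exit.
        Task.Run(() => { Thread.Sleep(2000); Console.WriteLine("task done"); });

        // Foreground thread (the default): the CLR waits for it before exiting.
        new Thread(() => { Thread.Sleep(1000); Console.WriteLine("thread done"); }).Start();

        // When Main returns, "thread done" is always printed; "task done" never
        // is, because the process exits once the last foreground thread finishes.
    }
}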
Happy Coding...
Use a separate thread to call CheckAll() in a loop that also waits on a semaphore. A 'PerformCheck()' method signals the semaphore.
Your system can then make as many calls to 'PerformCheck()' as it might wish, from any thread, and CheckAll() will be run exactly as many times as there are PerformCheck() calls, but with no blocking on PerformCheck().
No flags, no limits, no locking, no polling.
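A minimal sketch of that design (the class and member names here are illustrative, not from the original answer; SemaphoreSlim is one choice, a classic Semaphore works the same way):
using System.Threading;

public class Checker
{
    private readonly SemaphoreSlim signal = new SemaphoreSlim(0);

    public Checker()
    {
        var worker = new Thread(() =>
        {
            while (true)
            {
                signal.Wait();  // sleeps until PerformCheck() signals
                CheckAll();     // runs exactly once per signal, in order
            }
        });
        worker.IsBackground = true; // don't keep the process alive
        worker.Start();
    }

    // Callable from any thread; never blocks.
    public void PerformCheck()
    {
        signal.Release();
    }

    private void CheckAll()
    {
        // ... the actual checking work ...
    }
}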
You can set up a flag for this.
When CheckAll() runs, have it set a flag at the end of the method (for example a global variable, flaga). Then if CheckAll() is called from one method, say a(), and is about to be called again immediately afterwards from another method, say b(), b() can first test the flaga value that a()'s call set, and only proceed once it has the expected value. Something like this:
public void a()
{
    CheckAll();
}

public void b()
{
    // ...
    // (put a condition here: check whether flaga == 1,
    //  i.e. whether the CheckAll() call from a() has completed)
    CheckAll();
}

public void CheckAll()
{
    // ...
    flaga = 1;
}
Many times in UI development I handle events in such a way that when an event first arrives I immediately start processing, but if a processing operation is already in progress I wait for it to complete before I process another event. If more than one event occurs before the operation completes, I only process the most recent one.
The way I typically do this is that my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put my current event arguments in another field that is basically a one-item-sized buffer, and when the current processing pass completes I check whether there is another event to process, looping until I am done.
Now this seems a bit too repetitive, and possibly not the most elegant way to do it, though it otherwise works fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
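For reference, a minimal sketch of the pattern described above (the field and method names, including ProcessAsync, are made up for illustration; it assumes everything runs on the UI thread, so plain fields are safe):
private bool isProcessing;
private EventArgs pending;  // one-item buffer: only the most recent event survives

private async void OnSomeEvent(object sender, EventArgs args)
{
    if (isProcessing)
    {
        pending = args;     // overwrite whatever was buffered before
        return;
    }

    isProcessing = true;
    try
    {
        while (args != null)
        {
            await ProcessAsync(args);  // hypothetical processing method
            args = pending;            // most recent event seen meanwhile, or null
            pending = null;
        }
    }
    finally
    {
        isProcessing = false;
    }
}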
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
    private Func<Task> next = null;
    private bool isRunning = false;

    public async void Run(Func<Task> action)
    {
        if (isRunning)
            next = action;
        else
        {
            isRunning = true;
            try
            {
                await action();
                while (next != null)
                {
                    var nextCopy = next;
                    next = null;
                    await nextCopy();
                }
            }
            finally
            {
                isRunning = false;
            }
        }
    }

    private static Lazy<EventThrottler> defaultInstance =
        new Lazy<EventThrottler>(() => new EventThrottler());
    public static EventThrottler Default
    {
        get { return defaultInstance.Value; }
    }
}
Because the class is, at least generally, going to be used exclusively from the UI thread there will generally need to be only one, so I added a convenience property of a default instance, but since it may still make sense for there to be more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
    public void SomeEventHandler(object sender, EventArgs args)
    {
        EventThrottler.Default.Run(async () =>
        {
            await Task.Delay(1000);
            //do other stuff
        });
    }
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important, not just for performance and to ensure we're not blocking items from merely setting the next field, but also to avoid issues with that action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing stuff in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
    private object key = new object();
    private Func<Task> next = null;
    private bool isRunning = false;

    public void Run(Func<Task> action)
    {
        bool shouldStartRunning = false;
        lock (key)
        {
            if (isRunning)
                next = action;
            else
            {
                isRunning = true;
                shouldStartRunning = true;
            }
        }

        Action<Task> continuation = null;
        continuation = task =>
        {
            Func<Task> nextCopy = null;
            lock (key)
            {
                if (next != null)
                {
                    nextCopy = next;
                    next = null;
                }
                else
                {
                    isRunning = false;
                }
            }
            if (nextCopy != null)
                nextCopy().ContinueWith(continuation);
        };

        if (shouldStartRunning)
            action().ContinueWith(continuation);
    }
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue will result in the existing item being evicted.
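To make that concrete, here is a minimal sketch of mine (not from the original answer) of a trampoline draining a one-slot collapsing buffer; it assumes single-threaded use, as in the UI scenario:
public class CollapsingTrampoline
{
    private bool running;
    private Action pending;  // one-slot collapsing queue: the newest item wins

    public void Schedule(Action work)
    {
        pending = work;         // collapse: replace any stale pending item
        if (running) return;    // the trampoline below is already draining

        running = true;
        try
        {
            while (pending != null)
            {
                var item = pending;
                pending = null;
                item();         // may call Schedule() again; that just sets pending
            }
        }
        finally
        {
            running = false;
        }
    }
}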
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
    var broadcastBlock = new BroadcastBlock<T>(null);
    var actionBlock = new ActionBlock<T>(
        executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
    broadcastBlock.LinkTo(actionBlock);
    return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
    i =>
    {
        Console.WriteLine(i);
        Thread.Sleep(i);
    });

processor(100);
processor(1);
processor(2);
Output (note that 1 is never printed: it was overwritten by 2 while 100 was still being processed):
100
2
When events trigger, they use threads from the threadpool. So if you have a bunch of events that trigger faster than they return, you drain your threadpool. So whenever you have an event handler method that has no other control limiting the rate of threads entering, that has no guarantee of returning quickly, and where you're not painstakingly implementing 100% thread-safe code inside that method, it's probably best to implement some thread control. The obvious simple thing to do would be to lock() inside the event handling method, but if you do that, all the threads after the first one will block in a queue, waiting to enter the lock region, hogging all your threads from the threadpool. It is probably better to detect that another thread is inside this method, and quickly abort instead.
The question is: I have a way of detecting another thread already running, and quickly aborting the subsequent threads. But it doesn't seem very C#-ish due to the use of "const" and manually handling a locking flag at a low level. Is there a better way?
This is basically a direct replication of the lock() functionality, but using a non-blocking Interlocked.Exchange, instead of using the blocking Monitor.Enter()
public class FooGoo
{
    private const int LOCKED = 0;            // could use any arbitrary value; I choose 0
    private const int UNLOCKED = LOCKED + 1; // any arbitrary value, != LOCKED

    private static int _myLock = UNLOCKED;

    void myEventHandler()
    {
        int previousValue = Interlocked.Exchange(ref _myLock, LOCKED);
        if (previousValue == UNLOCKED)
        {
            try
            {
                // some handling code, which may or may not return quickly
                // maybe not threadsafe
            }
            finally
            {
                _myLock = UNLOCKED;
            }
        }
        else
        {
            // another thread is executing right now. So I will abort.
            //
            // optional and environment-specific: maybe you want to
            // queue some event information or set a flag or something,
            // so you remember later that this thread aborted
        }
    }
}
So far, this is the best answer I have found. Does there exist any shorthand equivalent of a non-blocking lock() to shorten this up?
static object _myLock = new object(); // must be initialised before use

void myMethod()
{
    if (Monitor.TryEnter(_myLock))
    {
        try
        {
            // Do stuff
        }
        finally
        {
            Monitor.Exit(_myLock);
        }
    }
    else
    {
        // then I failed to get the lock. Optionally do stuff.
    }
}
Say I have the following code (please assume all the appropriate import statements):
public class CTestClass {
    // Properties
    protected Object LockObj;
    public ConcurrentDictionary<String, String> Prop_1;
    protected System.Timers.Timer Timer_1;

    // Methods
    public CTestClass () {
        LockObj = new Object ();

        Prop_1 = new ConcurrentDictionary<String, String> ();
        Prop_1.TryAdd ("Key_1", "Value_1");

        Timer_1 = new System.Timers.Timer ();
        Timer_1.Interval = (1000 * 60); // One minute
        Timer_1.Elapsed += new ElapsedEventHandler ((s, t) => Method_2 ());
        Timer_1.Enabled = true;
    } // End CTestClass ()

    public void Method_1 () {
        // Do something that requires Prop_1 to be read
        // But *__do not__* lock Prop_1
    } // End Method_1 ()

    public void Method_2 () {
        lock (LockObj) {
            // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
        }
    } // End Method_2 ()
} // End CTestClass

// Main class
public class Program {
    public static void Main (string[] Args) {
        CTestClass TC = new CTestClass ();

        ParallelEnumerable.Range (0, 10)
            .ForAll (s => {
                TC.Method_1 ();
            });
    }
}
I understand it is possible to use MethodBase.GetCurrentMethod, but (short of doing messy book-keeping with global variables) is it possible to solve the problem without reflection?
Thanks in advance for your assistance.
EDIT
(a) Corrected an error with the scope of LockObj
(b) Adding a bit more by way of explanation (taken from my comment below)
I have corrected my code (in my actual project) and placed LockObj as a class property. The trouble is, Method_2 is actually fired by a System.Timers.Timer, and when it is ready to fire, it is quite possible that Method_1 is already executing. But in that event it is important to wait for Method_1 to finish executing before proceeding with Method_2.
I agree that the minimum working example I have tried to create does not make this latter point clear. Let me see if I can edit the MWE.
CODE EDITING FINISHED
ONE FINAL EDIT
I am using Visual Studio 2010 and .NET 4.0, so I do not have the async/await features that would have made my life a lot easier.
As pointed out above, you should become more familiar with the different synchronization primitives that exist in .NET.
You don't solve such problems with reflection or by analyzing which method is currently running, but by using a signaling primitive that informs anyone interested that the method is running/has ended.
First of all, ConcurrentDictionary is thread safe, so you don't need to lock for producing/consuming. So, if you only care about accessing your dictionary, no additional locking is necessary.
However, if you need to mutually exclude the execution of Method_1 and Method_2, you should declare the lock object as a class member and lock each function body using it; but, as I said, this is not needed if you are going to use ConcurrentDictionary.
If you really need to know which method is executing at any moment, you could inspect each thread's stack frames, but that is going to be slow and, I believe, unnecessary in this case.
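A minimal sketch of that suggestion (the field name methodLock is illustrative), if you did want Method_1 and Method_2 to mutually exclude each other:
public class CTestClass
{
    // One lock object, declared as a class member and shared by both methods.
    private readonly object methodLock = new object();

    public ConcurrentDictionary<string, string> Prop_1 =
        new ConcurrentDictionary<string, string>();

    public void Method_1()
    {
        lock (methodLock)   // excludes Method_2 while running
        {
            // read Prop_1
        }
    }

    public void Method_2()
    {
        lock (methodLock)   // excludes Method_1 while running
        {
            // use Prop_1
        }
    }
}
Note that this also serializes concurrent Method_1 calls against each other, which is stricter than the question asked for; the Mutex-based answer further below avoids that.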
The term you're looking for is Thread Synchronisation. There are many ways to achieve this in .NET.
One of which (lock) you've discovered.
In general terms, the lock object should be accessible by all threads needing it, and initialised before any thread tries to lock it.
The lock() syntax ensures that only one thread can continue at a time for that lock object. Any other threads which try to lock that same object will halt until they can obtain the lock.
There is no ability to time out or otherwise cancel the waiting for the lock (except by terminating the thread or process).
By way of example, here's a simpler form:
public class ThreadSafeCounter
{
    private object _lockObject = new Object(); // Initialise once
    private int count = 0;

    public void Increment()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count++;
        }
    }

    public void Decrement()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count--;
        }
    }

    public int Read()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            return count;
        }
    }
}
You can see this as a sort of variant of the classic readers/writers problem where the readers don't consume the product of the writers. I think you can do it with the help of an int variable and three Mutex.
One Mutex (mtxExecutingMeth2) guards the execution of Method2 and blocks the execution of both Method2 and Method1. Method1 must release it immediately, since otherwise you could not have other parallel executions of Method1. But this means that you have to tell Method2 when there are Method1's executing, and this is done using the mtxThereAreMeth1 Mutex, which is released only when there are no more Method1's executing. This is controlled by the value of numMeth1, which has to be protected by another Mutex (mtxNumMeth1).
I didn't give it a try, so I hope I didn't introduce some race conditions. Anyway it should at least give you an idea of a possible direction to follow.
And this is the code:
protected int numMeth1 = 0;
protected Mutex mtxNumMeth1 = new Mutex();
protected Mutex mtxExecutingMeth2 = new Mutex();
protected Mutex mtxThereAreMeth1 = new Mutex();

public void Method_1()
{
    // if this is the first execution of Method1, tell Method2 that it has to wait
    mtxNumMeth1.WaitOne();
    if (numMeth1 == 0)
        mtxThereAreMeth1.WaitOne();
    numMeth1++;
    mtxNumMeth1.ReleaseMutex();

    // check if Method2 is executing and release the Mutex immediately in order to avoid
    // blocking other Method1's
    mtxExecutingMeth2.WaitOne();
    mtxExecutingMeth2.ReleaseMutex();

    // Do something that requires Prop_1 to be read
    // But *__do not__* lock Prop_1

    // if this is the last Method1 executing, tell Method2 that it can execute
    mtxNumMeth1.WaitOne();
    numMeth1--;
    if (numMeth1 == 0)
        mtxThereAreMeth1.ReleaseMutex();
    mtxNumMeth1.ReleaseMutex();
}

public void Method_2()
{
    mtxThereAreMeth1.WaitOne();
    mtxExecutingMeth2.WaitOne();

    // Do something with Prop_1 *__only if__* Method_1 () is not currently executing

    mtxExecutingMeth2.ReleaseMutex();
    mtxThereAreMeth1.ReleaseMutex();
}
I'm a newbie in C#. I need to obtain a lock in 2 methods, but release it in one method. Will that work?
public void obtainLock() {
    Monitor.Enter(lockObj);
}

public void obtainReleaseLock() {
    lock (lockObj) {
        // doStuff
    }
}
Especially, can I call obtainLock and then obtainReleaseLock? Is "doubleLock" allowed in C#? These two methods are always called from the same thread; however, lockObj is used in another thread for synchronization.
Update: after all the comments, what do you think of code like this? Is it ideal?
public void obtainLock() {
    if (needCallMonitorExit == false) {
        Monitor.Enter(lockObj);
        needCallMonitorExit = true;
    }
    // doStuff
}

public void obtainReleaseLock() {
    try {
        lock (lockObj) {
            // doAnotherStuff
        }
    } finally {
        if (needCallMonitorExit == true) {
            needCallMonitorExit = false;
            Monitor.Exit(lockObj);
        }
    }
}
Yes, locks are "re-entrant", so a call can "double-lock" (your phrase) the lockObj. However, note that it needs to be released exactly as many times as it is taken; you will need to ensure that there is a corresponding "ReleaseLock" to match "ObtainLock".
I do, however, suggest it is easier to let the caller lock(...) on some property you expose, though:
public object SyncLock { get { return lockObj; } }
now the caller can (instead of obtainLock()):
lock(something.SyncLock) {
    //...
}
much easier to get right. Because this is the same underlying lockObj that is used internally, this synchronizes against either usage, even if obtainReleaseLock (etc) is used inside code that locked against SyncLock.
With the context clearer (comments), it seems that maybe Wait and Pulse are the way to do this:
void SomeMethodThatMightNeedToWait() {
    lock(lockObj) {
        if(needSomethingSpecialToHappen) {
            Monitor.Wait(lockObj);
            // ^^^ this ***releases*** the lock (however many times needed), and
            // enters the pending-queue; when *another* thread "pulses", it
            // enters the ready-queue; when the lock is *available*, it
            // reacquires the lock (back to as many times as it held it
            // previously) and resumes work
        }
        // do some work, happy that something special happened, and
        // we have the lock
    }
}

void SomeMethodThatMightSignalSomethingSpecial() {
    lock(lockObj) {
        // do stuff
        Monitor.PulseAll(lockObj);
        // ^^^ this moves **all** items from the pending-queue to the ready-queue
        // note there is also Pulse(...) which moves a *single* item
    }
}
Note that when using Wait you might want to use the overload that accepts a timeout, to avoid waiting forever; note also it is quite common to have to loop and re-validate, for example:
lock(lockObj) {
    while(needSomethingSpecialToHappen) {
        Monitor.Wait(lockObj);
        // at this point, we know we were pulsed, but maybe another waiting
        // thread beat us to it! re-check the condition, and continue; this might
        // also be a good place to check for some "abort" condition (and
        // remember to do a PulseAll() when aborting)
    }
    // do some work, happy that something special happened, and we have the lock
}
You would have to use the Monitor for this functionality. Note that you open yourself up to deadlocks and race conditions if you aren't careful with your locks; having them taken and released in separate areas of code can be risky.
Monitor.Exit(lockObj);
Only one owner can hold the lock at a given time; it is exclusive. While the locking can be chained the more important component is making sure you obtain and release the proper number of times, avoiding difficult to diagnose threading issues.
When you wrap your code via lock { ... }, you are essentially calling Monitor.Enter and Monitor.Exit as scope is entered and departed.
When you explicitly call Monitor.Enter you are obtaining the lock and at that point you would need to call Monitor.Exit to release the lock.
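For example (a small sketch, not part of the original answer), a re-entrantly acquired Monitor is only released once Exit has been called as many times as Enter:
object lockObj = new object();

Monitor.Enter(lockObj);   // acquired; count = 1
Monitor.Enter(lockObj);   // same thread re-acquires; count = 2
Monitor.Exit(lockObj);    // count = 1; other threads are still blocked
Monitor.Exit(lockObj);    // count = 0; lock is actually released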
This doesn't work.
The code
lock(lockObj)
{
    // do stuff
}
is translated to something like
Monitor.Enter(lockObj);
try
{
    // do stuff
}
finally
{
    Monitor.Exit(lockObj);
}
That means that your code enters the lock twice but releases it only once. According to the documentation, the lock is only really released by the thread if Exit is called as many times as Enter, which is not the case in your code.
Summary: Your code will not deadlock on the call to obtainReleaseLock, but the lock on lockObj will never be released by the thread. You would need to have an explicit call to Monitor.Exit(lockObj), so the calls to Monitor.Enter matches the number of calls to Monitor.Exit.
I have some records that I want to save to a database asynchronously. I organize them into batches, then send them. As time passes, the batches are processed.
In the meantime the user can keep working. There are some critical operations that I want to lock him out of while any save batch is still running asynchronously.
The save is done using a TableServiceContext and the method .BeginSave() - but I think this should be irrelevant.
What I want to do is whenever an async save is started, increase a lock count, and when it completes, decrease the lock count so that it will be zero as soon as all have finished. I want to lock out the critical operation as long as the count is not zero. Furthermore I want to qualify the lock - by business object, for example.
I did not find a .NET 3.5 C# locking method that fulfils this requirement. A semaphore does not contain a method to check whether the count is 0. Otherwise a semaphore with an unlimited max count would do.
Actually, the Semaphore does have a way of checking whether the count is zero. Use the WaitOne method with a zero timeout. It returns a value indicating whether the semaphore was acquired. If it returns false then it was not acquired, which implies that its count is zero.
var s = new Semaphore(5, 5);
while (s.WaitOne(0))
{
    Console.WriteLine("acquired");
}
Console.WriteLine("no more left to acquire");
I'm presuming that when you say "lock the user out", it is not a literal lock held while the operations complete, as this would block the UI thread and freeze the application when the lock was encountered. I'm assuming you mean that some state can be checked so that UI controls can be disabled/enabled.
Something like the following code could be used which is lock free:
public class BusyState
{
    private int isBusy;

    public void SignalTaskStarted()
    {
        Interlocked.Increment(ref isBusy);
    }

    public void SignalTaskFinished()
    {
        if (Interlocked.Decrement(ref isBusy) < 0)
        {
            throw new InvalidOperationException("No tasks started.");
        }
    }

    public bool IsBusy()
    {
        return Thread.VolatileRead(ref isBusy) > 0;
    }
}

public class BusinessObject
{
    private readonly BusyState busyState = new BusyState();

    public void Save()
    {
        //Raise a "Started" event to disable UI controls...

        //Start a few async tasks which call CallbackFromAsyncTask when finished.
        //Start task 1
        busyState.SignalTaskStarted();
        //Start task 2
        busyState.SignalTaskStarted();
        //Start task 3
        busyState.SignalTaskStarted();
    }

    private void CallbackFromAsyncTask()
    {
        busyState.SignalTaskFinished();
        if (!busyState.IsBusy())
        {
            //Raise a "Completed" event to enable UI controls...
        }
    }
}
The counting aspect is encapsulated in BusyState, which is then used in the business object to signal tasks starting and stopping. The "Started" and "Completed" events could be hooked up to enable and disable UI controls, locking the user out whilst the async operations complete.
There are obviously loads of caveats here for handling error conditions, etc. So just some basic outline code...
What is the purpose of the lock count if the only logic involves whether or not the value is non-zero?
If you want to do this on a type-by-type basis, you could take this approach:
public class BusinessObject1
{
    private static readonly object lockObject = new object();
    public static object SyncRoot { get { return lockObject; } }
}
(following the same pattern for other business objects)
If you then enclose your save and your critical operations in a block like this:
lock(BusinessObject1.SyncRoot)
{
    // do work
}
You will make saving and the critical operations mutually exclusive tasks.
Since you wanted it granular, you can cascade the locks like this:
lock(BusinessObject1.SyncRoot)
    lock(BusinessObject2.SyncRoot)
        lock(BusinessObject3.SyncRoot)
        {
            // do work
        }