Multithreading: How to check if static class is busy - c#

I have a static class with a static function IsDataCorrect() that performs an HTTP request.
The function can be called from multiple threads at the same time. I want the first thread to perform the request, while the others are rejected (meaning they get false as the return value; they should not just be blocked!) until half a second after the first thread has finished the request.
After that, the next winning thread should be able to perform the next request, the others should be rejected, and so on.
This is my approach; could someone please confirm whether it is reasonable?
static class MyClass
{
    private static bool IsBusy = false;
    private static object lockObject = new object();

    public static bool IsDataCorrect(string testString)
    {
        lock (lockObject)
        {
            if (IsBusy) return false;
            IsBusy = true;
        }
        var uri = $"https://something.com";
        bool htmlCheck = GetDocFromUri(uri, 2);
        var t = new Thread(WaitBeforeFree);
        t.Start();
        //Fast Evaluations
        //...
        return htmlCheck;
    }

    private static void WaitBeforeFree()
    {
        Thread.Sleep(500);
        IsBusy = false;
    }
}

Your threads would still be serialized when checking the IsBusy flag, since only one thread at a time can check it due to the synchronization on lockObject. Instead, you can simply attempt to acquire the lock; then you don't need a flag, because the lock itself serves that purpose. Second, instead of launching a new thread every time just to sleep and reset the flag, I would use a check on a DateTime field.
static class MyClass
{
    private static DateTime NextEntry = DateTime.Now;
    private static ReaderWriterLockSlim timeLock = new ReaderWriterLockSlim();
    private static object lockObject = new object();

    public static bool IsDataCorrect(string testString)
    {
        bool tryEnterSuccess = false;
        try
        {
            try
            {
                timeLock.EnterReadLock();
                if (DateTime.Now < NextEntry) return false;
            }
            finally
            {
                timeLock.ExitReadLock();
            }
            Monitor.TryEnter(lockObject, ref tryEnterSuccess);
            if (!tryEnterSuccess) return false;
            var uri = $"https://something.com";
            bool htmlCheck = GetDocFromUri(uri, 2);
            //Fast Evaluations
            //...
            try
            {
                timeLock.EnterWriteLock();
                NextEntry = DateTime.Now.AddMilliseconds(500);
            }
            finally
            {
                timeLock.ExitWriteLock();
            }
            return htmlCheck;
        }
        finally
        {
            if (tryEnterSuccess) Monitor.Exit(lockObject);
        }
    }
}
This is more efficient because no new threads are launched, and access to the DateTime field is safe yet concurrent, so threads only stop when they absolutely have to. Otherwise, everything keeps moving along with minimal resource usage.
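For illustration, here is a minimal usage sketch (not part of the original answer; it assumes the MyClass above together with the GetDocFromUri helper from the question): several threads call IsDataCorrect at once, only one per window actually performs the HTTP work, and the rest are rejected with false.
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

class Demo
{
    static void Main()
    {
        // Ten concurrent callers; at most one does the request per 500 ms window.
        var results = new ConcurrentBag<bool>();
        Parallel.For(0, 10, _ => results.Add(MyClass.IsDataCorrect("test")));
        Console.WriteLine($"accepted: {results.Count(r => r)}, rejected: {results.Count(r => !r)}");
    }
}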

I see you solved the problem correctly, but I think there is still room to make it correct, efficient, and simple at the same time. :)
How about this way?
EDIT: edited to make the calm-down behavior easier to see and part of the example.
public static class ConcurrentCoordinationExtension
{
    private static int _executing = 0;

    public static bool TryExecuteSequentially(this Action actionToExecute)
    {
        // Compare _executing with zero; if zero, set it to 1 and
        // return the original value as the result.
        // Zero returned means a successful entry; non-zero means somebody else is executing.
        if (Interlocked.CompareExchange(ref _executing, 1, 0) != 0) return false;
        try
        {
            actionToExecute.Invoke();
            return true;
        }
        finally
        {
            Interlocked.Exchange(ref _executing, 0);
        }
    }

    public static bool TryExecuteSequentially(this Func<bool> actionToExecute)
    {
        // Compare _executing with zero; if zero, set it to 1 and
        // return the original value as the result.
        // Zero returned means a successful entry; non-zero means somebody else is executing.
        if (Interlocked.CompareExchange(ref _executing, 1, 0) != 0) return false;
        try
        {
            return actionToExecute.Invoke();
        }
        finally
        {
            Interlocked.Exchange(ref _executing, 0);
        }
    }
}
class Program
{
    static void Main(string[] args)
    {
        DateTime last = DateTime.MinValue;
        Func<bool> operation = () =>
        {
            // calm-down condition: skip if the last run was less than 500 ms ago
            if (DateTime.UtcNow - last < TimeSpan.FromMilliseconds(500)) return false;
            last = DateTime.UtcNow;
            // some stuff you want to process sequentially
            return true;
        };
        operation.TryExecuteSequentially();
    }
}


Collection was modified; enumeration operation may not execute even though the collection was modified exclusively in lock statements

I have the following base code. ActionMonitor can be used by anyone, in any setting, whether single-threaded or multi-threaded.
using System;
using System.Collections.Generic;

public class ActionMonitor
{
    public ActionMonitor()
    {
    }

    private object _lockObj = new object();

    public void OnActionEnded()
    {
        lock (_lockObj)
        {
            IsInAction = false;
            foreach (var trigger in _triggers)
                trigger();
            _triggers.Clear();
        }
    }

    public void OnActionStarted()
    {
        IsInAction = true;
    }

    private ISet<Action> _triggers = new HashSet<Action>();

    public void ExecuteAfterAction(Action action)
    {
        lock (_lockObj)
        {
            if (IsInAction)
                _triggers.Add(action);
            else
                action();
        }
    }

    public bool IsInAction
    {
        get; private set;
    }
}
On exactly one occasion, when I examined a crash on a client's machine, I found that an exception had been thrown at:
System.Core: System.InvalidOperationException: Collection was modified; enumeration operation may not execute. at
System.Collections.Generic.HashSet`1.Enumerator.MoveNext() at
WPFApplication.ActionMonitor.OnActionEnded()
My reaction when seeing this stack trace: this is unbelievable! This must be a .NET bug!
Although ActionMonitor can be used in a multithreaded setting, the crash above shouldn't occur: every modification of _triggers (the collection) happens inside a lock statement. This guarantees that one cannot iterate over the collection and modify it at the same time.
And if _triggers happened to contain an Action that involves ActionMonitor, then we might get a deadlock, but it would never crash.
I have seen this crash exactly once, so I can't reproduce the problem at all. But based on my understanding of multithreading and the lock statement, this exception should never occur.
Am I missing something here? Or is it known that .NET can behave in a very quirky way when System.Action is involved?
You didn't shield your code against the following call:
private static ActionMonitor _actionMonitor;

static void Main(string[] args)
{
    _actionMonitor = new ActionMonitor();
    _actionMonitor.OnActionStarted();
    _actionMonitor.ExecuteAfterAction(Foo1);
    _actionMonitor.ExecuteAfterAction(Foo2);
    _actionMonitor.OnActionEnded();
    Console.ReadLine();
}

private static void Foo1()
{
    _actionMonitor.OnActionStarted();
    // Notice that if you called _actionMonitor.OnActionEnded() here instead of _actionMonitor.OnActionStarted(), you would get a StackOverflowException
    _actionMonitor.ExecuteAfterAction(Foo3);
}

private static void Foo2()
{
}

private static void Foo3()
{
}
FYI - that's the scenario Damien_The_Unbeliever is talking about in the comments.
To fix that issue, only two things come to mind:
Don't call it like this; it's your class and your code calling it, so make sure you stick to your own rules.
Take a copy of the _triggers collection and enumerate that.
About point 1, you could track whether OnActionEnded is running and throw an exception if OnActionStarted is called while it runs:
private bool _isRunning = false;

public void OnActionEnded()
{
    lock (_lockObj)
    {
        try
        {
            _isRunning = true;
            IsInAction = false;
            foreach (var trigger in _triggers)
                trigger();
            _triggers.Clear();
        }
        finally
        {
            _isRunning = false;
        }
    }
}

public void OnActionStarted()
{
    lock (_lockObj)
    {
        if (_isRunning)
            throw new NotSupportedException();
        IsInAction = true;
    }
}
About point 2, how about this
public class ActionMonitor
{
    public ActionMonitor()
    {
    }

    private object _lockObj = new object();

    public void OnActionEnded()
    {
        lock (_lockObj)
        {
            IsInAction = false;
            // Swap the set for a fresh one and run the copied triggers.
            // Triggers added while we are running end up in the new set,
            // so we keep looping until no new triggers were added.
            while (true)
            {
                var tmpTriggers = _triggers;
                _triggers = new HashSet<Action>();
                if (tmpTriggers.Count == 0)
                    break;
                foreach (var trigger in tmpTriggers)
                    trigger();
            }
        }
    }

    public void OnActionStarted()
    {
        lock (_lockObj) //fix the error #EricLippert talked about in comments
            IsInAction = true;
    }

    private ISet<Action> _triggers = new HashSet<Action>();

    public void ExecuteAfterAction(Action action)
    {
        lock (_lockObj)
        {
            if (IsInAction)
                _triggers.Add(action);
            else
                action();
        }
    }

    public bool IsInAction
    {
        get; private set;
    }
}
This guarantees that one cannot iterate over the collection and modify it at the same time.
No. You have a reentrancy problem.
Consider what happens if, inside the call to trigger() (same thread, so the lock is already held), you modify the collection:
foreach (var trigger in _triggers)
    trigger(); // _triggers modified in here
In fact, if you look at your full call stack, you will be able to find the frame that is enumerating the collection (by the time the exception happens, the code that modified the collection has already been popped off the stack).
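To make the reentrancy concrete, here is a minimal single-threaded sketch (not the original code) that raises the same exception: the delegate being invoked adds to the very set that is currently being enumerated.
using System;
using System.Collections.Generic;

class ReentrancyDemo
{
    static readonly HashSet<Action> _triggers = new HashSet<Action>();

    static void Main()
    {
        // The registered trigger modifies the collection while it is being enumerated.
        _triggers.Add(() => _triggers.Add(() => { }));

        foreach (var trigger in _triggers)
            trigger(); // throws InvalidOperationException: Collection was modified
    }
}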

How to buffer a burst of events into fewer resulting actions

I want to reduce multiple events into a single delayed action. After some trigger occurs, I expect more similar triggers to come, but I prefer not to repeat the resulting delayed action. The action waits, to give the burst a chance to complete.
The question: How can I do it in an elegant, reusable way?
Until now I have used a property to flag the event and trigger a delayed action, like below:
public void SomeMethod()
{
    SomeFlag = true; //this will intentionally return to the caller before completing the resulting buffered actions.
}

private bool someFlag;

public bool SomeFlag
{
    get { return someFlag; }
    set
    {
        if (someFlag != value)
        {
            someFlag = value;
            if (value)
                SomeDelayedMethod(5000);
        }
    }
}

public async void SomeDelayedMethod(int delay)
{
    //some buffered work.
    await Task.Delay(delay);
    SomeFlag = false;
}
Below is a shorter way, but still not generic or reusable... I want something concise that packages the action and the flag, and keeps the current behavior (returning to the caller before execution is complete, like today). I also need to be able to pass an object reference to this action.
public void SerializeAccountsToConfig()
{
    if (!alreadyFlagged)
    {
        alreadyFlagged = true;
        SerializeDelayed(5000, Serialize);
    }
}

public async void SerializeDelayed(int delay, Action whatToDo)
{
    await Task.Delay(delay);
    whatToDo();
}

private bool alreadyFlagged;

private void Serialize()
{
    //some buffered work.
    //string json = JsonConvert.SerializeObject(Accounts, Formatting.Indented);
    //Settings1.Default.Accounts = json;
    //Settings1.Default.Save();
    alreadyFlagged = false;
}
Here's a thread-safe and reusable solution.
You can create an instance of DelayedSingleAction, passing the action you want performed to the constructor. I believe this is thread safe, though there is a tiny risk that it will restart the timer just before commencing the action; I think that risk would exist no matter what the solution is.
public class DelayedSingleAction
{
    private readonly Action _action;
    private readonly long _millisecondsDelay;
    private long _syncValue = 1;

    public DelayedSingleAction(Action action, long millisecondsDelay)
    {
        _action = action;
        _millisecondsDelay = millisecondsDelay;
    }

    private Task _waitingTask = null;

    private void DoActionAndClearTask(Task _)
    {
        Interlocked.Exchange(ref _syncValue, 1);
        _action();
    }

    public void PerformAction()
    {
        if (Interlocked.Exchange(ref _syncValue, 0) == 1)
        {
            _waitingTask = Task.Delay(TimeSpan.FromMilliseconds(_millisecondsDelay))
                               .ContinueWith(DoActionAndClearTask);
        }
    }

    public Task Complete()
    {
        return _waitingTask ?? Task.FromResult(0);
    }
}
See this dotnetfiddle for an example which invokes one action continuously from multiple threads.
https://dotnetfiddle.net/el14wZ
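For readers who don't want to follow the link, this is roughly what such a test looks like (a local sketch, assuming the DelayedSingleAction class above): many concurrent callers, but the action runs only once per delay window.
var delayed = new DelayedSingleAction(
    () => Console.WriteLine($"{DateTime.Now:h:mm:ss.fff} action ran"),
    500);

// Hammer PerformAction from many threads; only the first call per window arms the timer.
Parallel.For(0, 1000, _ => delayed.PerformAction());

delayed.Complete().Wait(); // wait for the pending delayed action to finish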
Since you're interested in Rx, here is a simple console app sample:
static void Main(string[] args)
{
    // event source
    var burstEvents = Observable.Interval(TimeSpan.FromMilliseconds(50));

    var subscription = burstEvents
        .Buffer(TimeSpan.FromSeconds(3)) // collect events 3 seconds
        //.Buffer(50) // or collect 50 events
        .Subscribe(events =>
        {
            //Console.WriteLine(events.First()); // take only first event
            // or process event collection
            foreach (var e in events)
                Console.Write(e + " ");
            Console.WriteLine();
        });

    Console.ReadLine();
    return;
}
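If your bursts are separated by quiet gaps (unlike the continuous Interval source above), Rx's Throttle operator (a debounce) may be a closer fit. This is an additional sketch, not part of the original answer; note that Throttle never fires for a source that keeps emitting faster than the quiet period. You could replace the Buffer subscription in the sample above with:
// Emits only the last event of a burst once the source has been quiet for 500 ms.
var settled = burstEvents
    .Throttle(TimeSpan.FromMilliseconds(500))
    .Subscribe(e => Console.WriteLine($"burst settled, last event: {e}"));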
Based on the solution proposed by Andrew, here is a more generic solution.
Declaration and instance creation of the delayed action:
public DelayedSingleAction<Account> SendMailD;
Create the instance inside a function or in the constructor (this can be a collection of such actions each working on a different object):
SendMailD = new DelayedSingleAction<Account>(SendMail, AccountRef, 5000);
Repeatedly call this action:
SendMailD.PerformAction();
SendMail is the action you will "burst control". Its signature matches:
public int SendMail(Account A)
{}
Here is the updated class
public class DelayedSingleAction<T>
{
    private readonly Func<T, int> actionOnObj;
    private T tInstance;
    private readonly long millisecondsDelay;
    private long _syncValue = 1;

    public DelayedSingleAction(Func<T, int> ActionOnObj, T TInstance, long MillisecondsDelay)
    {
        actionOnObj = ActionOnObj;
        tInstance = TInstance;
        millisecondsDelay = MillisecondsDelay;
    }

    private Task _waitingTask = null;

    private void DoActionAndClearTask(Task _)
    {
        Console.WriteLine(string.Format("{0:h:mm:ss.fff} DelayedSingleAction Resetting SyncObject: Thread {1} for {2}", DateTime.Now, System.Threading.Thread.CurrentThread.ManagedThreadId, tInstance));
        Interlocked.Exchange(ref _syncValue, 1);
        actionOnObj(tInstance);
    }

    public void PerformAction()
    {
        if (Interlocked.Exchange(ref _syncValue, 0) == 1)
        {
            Console.WriteLine(string.Format("{0:h:mm:ss.fff} DelayedSingleAction Starting the timer: Thread {1} for {2}", DateTime.Now, System.Threading.Thread.CurrentThread.ManagedThreadId, tInstance));
            _waitingTask = Task.Delay(TimeSpan.FromMilliseconds(millisecondsDelay)).ContinueWith(DoActionAndClearTask);
        }
    }

    public Task Complete()
    {
        return _waitingTask ?? Task.FromResult(0);
    }
}

How to ensure a certain method doesn't get executed if some other method is running?

Let's say I have two methods:
getCurrentValue(int valueID)
updateValues(int changedComponentID)
These two methods are called on the same object independently by separate threads.
getCurrentValue() simply does a database look-up for the current valueID.
"Values" change if their corresponding components change. The updateValues() method updates those values that depend on the component that just changed, i.e. changedComponentID. This is a database operation and takes time.
While this update operation is going on, I do not want to return a stale value by doing a lookup from the database; I want to wait until the update method has completed. At the same time, I don't want two update operations to happen simultaneously, or an update to happen while a read is going on.
So I'm thinking of doing it this way:
[MethodImpl(MethodImplOptions.Synchronized)]
public int getCurrentValue(int valueID)
{
    while (updateOperationIsGoingOn)
    {
        // do nothing
    }
    readOperationIsGoingOn = true;
    value = // read value from DB
    readOperationIsGoingOn = false;
    return value;
}

[MethodImpl(MethodImplOptions.Synchronized)]
public void updateValues(int componentID)
{
    while (readOperationIsGoingOn)
    {
        // do nothing
    }
    updateOperationIsGoingOn = true;
    // update values in DB
    updateOperationIsGoingOn = false;
}
I'm not sure whether this is a correct way of doing it. Any suggestions? Thanks.
That's not the correct way. Like this, you are doing an active wait (busy waiting), needlessly burning CPU.
You should use a lock instead:
static object _syncRoot = new object();

public int getCurrentValue(int valueID)
{
    int value;
    lock (_syncRoot)
    {
        value = 0; // read value from DB
    }
    return value;
}

public void updateValues(int componentID)
{
    lock (_syncRoot)
    {
        // update values in DB
    }
}
Create a static object outside of both of these methods. Then use a lock statement on that object; when one method is accessing the protected code, the other method will wait for the lock to release.
private static object _lockObj = new object();

public int getCurrentValue(int valueID)
{
    int value;
    lock (_lockObj)
    {
        value = 0; // read value from DB
    }
    return value;
}

public void updateValues(int componentID)
{
    lock (_lockObj)
    {
        // update values in DB
    }
}
private static readonly object _lock = new object();

public int getCurrentValue(int valueID)
{
    try
    {
        Monitor.Enter(_lock);
        int value = 0; // read value from DB
        return value;
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}

public void updateValues(int componentID)
{
    try
    {
        Monitor.Enter(_lock);
        // update values in DB
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}
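As an alternative sketch (not from the answers above): if concurrent reads are acceptable and only updates need exclusivity, ReaderWriterLockSlim expresses that directly. ReadValueFromDb and UpdateValuesInDb are hypothetical placeholders for the database calls.
private static readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();

public int getCurrentValue(int valueID)
{
    _rwLock.EnterReadLock();   // many readers may hold this at the same time
    try
    {
        return ReadValueFromDb(valueID); // hypothetical DB read
    }
    finally
    {
        _rwLock.ExitReadLock();
    }
}

public void updateValues(int componentID)
{
    _rwLock.EnterWriteLock();  // exclusive: blocks readers and other writers
    try
    {
        UpdateValuesInDb(componentID); // hypothetical DB update
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}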

Synchronizing thread communication?

Just for the heck of it, I'm trying to emulate how JRuby generators work using threads in C#.
Also, I'm fully aware that C# has built-in support for yield return; I'm just toying around a bit.
I guess it's some sort of poor man's coroutine: keeping multiple call stacks alive using threads (even though none of the call stacks should execute at the same time).
The idea is like this:
The consumer thread requests a value
The worker thread provides a value and yields back to the consumer thread
Repeat until the worker thread is done
So, what would be the correct way of doing the following?
//example
class Program
{
    static void Main(string[] args)
    {
        ThreadedEnumerator<string> enumerator = new ThreadedEnumerator<string>();
        enumerator.Init(() =>
        {
            for (int i = 1; i < 100; i++)
            {
                enumerator.Yield(i.ToString());
            }
        });

        foreach (var item in enumerator)
        {
            Console.WriteLine(item);
        }

        Console.ReadLine();
    }
}

//naive threaded enumerator
public class ThreadedEnumerator<T> : IEnumerator<T>, IEnumerable<T>
{
    private Thread enumeratorThread;
    private T current;
    private bool hasMore = true;
    private bool isStarted = false;
    AutoResetEvent enumeratorEvent = new AutoResetEvent(false);
    AutoResetEvent consumerEvent = new AutoResetEvent(false);

    public void Yield(T item)
    {
        //wait for consumer to request a value
        consumerEvent.WaitOne();
        //assign the value
        current = item;
        //signal that we have yielded the requested
        enumeratorEvent.Set();
    }

    public void Init(Action userAction)
    {
        Action WrappedAction = () =>
        {
            userAction();
            consumerEvent.WaitOne();
            enumeratorEvent.Set();
            hasMore = false;
        };
        ThreadStart ts = new ThreadStart(WrappedAction);
        enumeratorThread = new Thread(ts);
        enumeratorThread.IsBackground = true;
        isStarted = false;
    }

    public T Current
    {
        get { return current; }
    }

    public void Dispose()
    {
        enumeratorThread.Abort();
    }

    object System.Collections.IEnumerator.Current
    {
        get { return Current; }
    }

    public bool MoveNext()
    {
        if (!isStarted)
        {
            isStarted = true;
            enumeratorThread.Start();
        }
        //signal that we are ready to receive a value
        consumerEvent.Set();
        //wait for the enumerator to yield
        enumeratorEvent.WaitOne();
        return hasMore;
    }

    public void Reset()
    {
        throw new NotImplementedException();
    }

    public IEnumerator<T> GetEnumerator()
    {
        return this;
    }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return this;
    }
}
Ideas?
There are many ways to implement the producer/consumer pattern in C#.
The best way, I guess, is using TPL (Task, BlockingCollection). See an example here.
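Since the linked example isn't shown, here is a minimal BlockingCollection-based producer/consumer sketch (my own illustration of the approach, not the answer's code): the worker produces values on a background task and the consumer enumerates them in order, blocking only while nothing is available.
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ProducerConsumerDemo
{
    static void Main()
    {
        using (var values = new BlockingCollection<string>(boundedCapacity: 10))
        {
            var producer = Task.Run(() =>
            {
                for (int i = 1; i < 100; i++)
                    values.Add(i.ToString()); // blocks if the buffer is full
                values.CompleteAdding();      // signals "no more items"
            });

            // Blocks until an item is available; ends once CompleteAdding has been
            // called and the collection is drained.
            foreach (var item in values.GetConsumingEnumerable())
                Console.WriteLine(item);

            producer.Wait();
        }
    }
}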

How can I write a conditional lock in C#?

The thing is, I've been using the lock statement to protect a critical part of my code, but now I realize I could allow concurrent execution of that critical code if some conditions are met.
Is there a way to condition the lock?
bool locked = false;
if (condition)
{
    Monitor.Enter(lockObject);
    locked = true;
}
try
{
    // possibly critical section
}
finally
{
    if (locked) Monitor.Exit(lockObject);
}
EDIT: yes, there is a race condition unless you can assure that the condition is constant while threads are entering.
I'm no threading expert, but it sounds like you might be looking for something like this (double-checked locking). The idea is to check the condition both before and after acquiring the lock.
private static object lockHolder = new object();

if (ActionIsValid())
{
    lock (lockHolder)
    {
        if (ActionIsValid())
        {
            DoSomething();
        }
    }
}
Action doThatThing = someMethod;

if (condition)
{
    lock (thatThing)
    {
        doThatThing();
    }
}
else
{
    doThatThing();
}
Actually, to avoid a race condition, I'd be tempted to use a ReaderWriterLockSlim here: treat concurrent access as a read lock and exclusive access as a write lock. That way, if the conditions change, you won't end up with some inappropriate code still executing blindly in the region (under the false assumption that it is safe). A bit verbose, but
(formatted for space):
if (someCondition) {
    lockObj.EnterReadLock();
    try { Foo(); }
    finally { lockObj.ExitReadLock(); }
} else {
    lockObj.EnterWriteLock();
    try { Foo(); }
    finally { lockObj.ExitWriteLock(); }
}
If you have many methods/properties that require conditional locking, you don't want to repeat the same pattern over and over again. I propose the following trick:
Non-repetitive conditional-lock pattern
With a private helper struct implementing IDisposable we can encapsulate the condition/lock without measurable overhead.
public void DoStuff()
{
    using (ConditionalLock())
    {
        // Thread-safe code
    }
}
It's quite easy to implement. Here's a sample class demonstrating this pattern:
public class Counter
{
    private static readonly int MAX_COUNT = 100;
    private readonly bool synchronized;
    private int count;
    private readonly object lockObject = new object();
    private int lockCount;

    public Counter(bool synchronized)
    {
        this.synchronized = synchronized;
    }

    public int Count
    {
        get
        {
            using (ConditionalLock())
            {
                return count;
            }
        }
    }

    public int LockCount
    {
        get
        {
            using (ConditionalLock())
            {
                return lockCount;
            }
        }
    }

    public void Increase()
    {
        using (ConditionalLock())
        {
            if (count < MAX_COUNT)
            {
                Thread.Sleep(10);
                ++count;
            }
        }
    }

    private LockHelper ConditionalLock() => new LockHelper(this);

    // This is where the magic happens!
    private readonly struct LockHelper : IDisposable
    {
        private readonly Counter counter;
        private readonly bool lockTaken;

        public LockHelper(Counter counter)
        {
            this.counter = counter;
            lockTaken = false;
            if (counter.synchronized)
            {
                Monitor.Enter(counter.lockObject, ref lockTaken);
                counter.lockCount++;
            }
        }

        private void Exit()
        {
            if (lockTaken)
            {
                Monitor.Exit(counter.lockObject);
            }
        }

        void IDisposable.Dispose() => Exit();
    }
}
Now, let's create a small sample program demonstrating its correctness.
class Program
{
    static void Main(string[] args)
    {
        var onlyOnThisThread = new Counter(synchronized: false);
        IncreaseToMax(onlyOnThisThread);

        var onManyThreads = new Counter(synchronized: true);
        var t1 = Task.Factory.StartNew(() => IncreaseToMax(onManyThreads));
        var t2 = Task.Factory.StartNew(() => IncreaseToMax(onManyThreads));
        var t3 = Task.Factory.StartNew(() => IncreaseToMax(onManyThreads));
        Task.WaitAll(t1, t2, t3);

        Console.WriteLine($"Counter(false) => Count = {onlyOnThisThread.Count}, LockCount = {onlyOnThisThread.LockCount}");
        Console.WriteLine($"Counter(true) => Count = {onManyThreads.Count}, LockCount = {onManyThreads.LockCount}");
    }

    private static void IncreaseToMax(Counter counter)
    {
        for (int i = 0; i < 1000; i++)
        {
            counter.Increase();
        }
    }
}
Output:
Counter(false) => Count = 100, LockCount = 0
Counter(true) => Count = 100, LockCount = 3002
Now you can let the caller decide whether locking (costly) is needed.
I'm guessing you've got some code that looks a little like this:
private Monkey GetScaryMonkey(int numberOfHeads) {
    Monkey ape = null;
    lock (this) {
        ape = new Monkey();
        ape.AddHeads(numberOfHeads);
    }
    return ape;
}
To make this conditional couldn't you just do this:
private Monkey GetScaryMonkey(int numberOfHeads) {
    if (numberOfHeads > 1) {
        lock (this) {
            return CreateNewMonkey(numberOfHeads);
        }
    }
    return CreateNewMonkey(numberOfHeads);
}
Should work, no?
Use the double-checked locking pattern, as suggested above. That's the trick, IMO. :)
Make sure you have your lock object as a static, as shown in not.that.dave.foley.myopenid.com's example.
