How to make a group of operations atomic without using Lock - C#

I have a state variable whose fields of interest are thread-safe and kept fresh using a ReaderWriterLockSlim. I have fine-grained the access to each field since I also use them separately elsewhere.
While individual access to the fields works fine, I also need to make a couple of operations atomic. Do I need an additional lock in this case?
I have a state variable that already contains thread-safe fields:
internal class State {
private ReaderWriterLockSlim lck = new ReaderWriterLockSlim();
private bool cmd_PopUp;
private bool cmd_Shutdown;
private bool cmd_Delay;
public bool CMD_PopUp {
get {
try {
this.lck.EnterReadLock();
return this.cmd_PopUp;
} finally {
this.lck.ExitReadLock();
}
}
set {
try {
this.lck.EnterWriteLock();
this.cmd_PopUp = value;
} finally {
this.lck.ExitWriteLock();
}
}
}
//same goes for the other booleans
}
This is already thread-safe, and it also ensures that a thread reads from memory rather than from a stale local cache. However, I need multiple operations to be done atomically.
Atomic operations
public async Task Run(State state)
{
while(true)
{
//do I need a lock here?
if (state.CMD_Delay) {
state.CMD_Delay = false;
state.CMD_PopUp = false;
//end of potential lock ?
} else if (state.CMD_Shutdown) { //same here
state.CMD_PopUp = false;
state.CMD_Shutdown = false;
await SomeAction();
}
}
}
As you can see, inside the while loop I have an if-else where the group of operations needs to be atomic. Should I use an additional lock, or is there some other lightweight solution?

It would make the most sense to me to re-use your existing lock:
internal class State
{
...
public void SetDelayState()
{
try {
this.lck.EnterWriteLock();
this.cmd_Delay = false;
this.cmd_PopUp = false;
} finally {
this.lck.ExitWriteLock();
}
}
public void SetShutdownState()
{
try {
this.lck.EnterWriteLock();
this.cmd_PopUp = false;
this.cmd_Shutdown = false;
} finally {
this.lck.ExitWriteLock();
}
}
}
In other words, move all atomic operations to be members of your State type.
Side note: you almost certainly do not need a reader/writer lock. Reader/writer locks should only be used when all of the following are true:
Some code paths are read-only and others are read/write.
Readers far outnumber writers.
There are a large number of consecutive readers.
If any of those are not true, then lock is the better choice.
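For example, a minimal sketch of what the same State type might look like with a plain lock (reusing the field and method names from above; this is illustrative, not the poster's code):
internal class State
{
    private readonly object lck = new object();
    private bool cmd_PopUp;
    private bool cmd_Delay;
    public bool CMD_PopUp
    {
        get { lock (this.lck) { return this.cmd_PopUp; } }
        set { lock (this.lck) { this.cmd_PopUp = value; } }
    }
    // The grouped operation stays atomic because it runs under the same lock.
    public void SetDelayState()
    {
        lock (this.lck)
        {
            this.cmd_Delay = false;
            this.cmd_PopUp = false;
        }
    }
    // same idea for the other booleans and for SetShutdownState
}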

Related

What is the most performant way to make the results of a cached computation thread-safe?

(Apologies if this was answered elsewhere; it seems like it would be a common problem, but it turns out to be hard to search for since terms like "threading" and "cache" produce overwhelming results.)
I have an expensive computation whose result is accessed frequently but changes infrequently. Thus, I cache the resulting value. Here's some C# pseudocode of what I mean:
int? _cachedResult = null;
int GetComputationResult()
{
if(_cachedResult == null)
{
// Do the expensive computation.
_cachedResult = /* Result of expensive computation. */;
}
return _cachedResult.Value;
}
Elsewhere in my code, I will occasionally set _cachedResult back to null because the input to the computation has changed and thus the cached result is no longer valid and needs to be re-computed. (Which means I can't use Lazy<T> since Lazy<T> doesn't support being reset.)
This works fine for single-threaded scenarios, but of course it's not at all thread-safe. So my question is: What is the most performant way to make GetComputationResult thread-safe?
Obviously I could just put the whole thing in a lock() block, but I suspect there might be a better way? (Something that would do an atomic check to see if the result needs to be recomputed and only lock if it does?)
Thanks a lot!
You can use the double-checked locking pattern:
// Thread-safe (uses double-checked locking pattern for performance)
public class Memoized<T>
{
Func<T> _compute;
volatile bool _cached;
volatile bool _startedCaching;
volatile StrongBox<T> _cachedResult; // Need reference type
object _cacheSyncRoot = new object();
public Memoized(Func<T> compute)
{
_compute = compute;
}
public T Value {
get {
if (_cached) // Fast path
return _cachedResult.Value;
lock (_cacheSyncRoot)
{
if (!_cached)
{
_startedCaching = true;
_cachedResult = new StrongBox<T>(_compute());
_cached = true;
}
}
return _cachedResult.Value;
}
}
public void Invalidate()
{
if (!_startedCaching)
{
// Fast path: already invalidated
Thread.MemoryBarrier(); // need to release
if (!_startedCaching)
return;
}
lock (_cacheSyncRoot)
_cached = _startedCaching = false;
}
}
This particular implementation matches your description of what it should do in corner cases: If the cache has been invalidated, the value should only be computed once, by a single thread, and other threads should wait. However, if the cache is invalidated concurrently with the cached value being accessed, the stale cached value may be returned.
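Usage might look something like this (a sketch; ExpensiveComputation stands in for your own int-returning method):
var cached = new Memoized<int>(ExpensiveComputation);
int first = cached.Value;    // computed on first access
int second = cached.Value;   // served from the cache
cached.Invalidate();         // the next read of Value recomputes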
Perhaps this will provide some food for thought. :)
It is a generic class that can compute the data asynchronously or synchronously.
It allows fast reads thanks to the spin lock.
It does not do heavy work inside the spin lock: it just returns the Task and, if necessary, creates and starts the Task on the default TaskScheduler to avoid inlining.
Task combined with SpinLock is a pretty powerful combination that can solve some problems in a lock-free way.
using System;
using System.Threading;
using System.Threading.Tasks;
namespace Example
{
class OftenReadSometimesUpdate<T>
{
private Task<T> result_task = null;
private SpinLock spin_lock = new SpinLock(false);
private TResult LockedFunc<TResult>(Func<TResult> locked_func)
{
TResult t_result = default(TResult);
bool gotLock = false;
if (locked_func == null) return t_result;
try
{
spin_lock.Enter(ref gotLock);
t_result = locked_func();
}
finally
{
if (gotLock) spin_lock.Exit();
gotLock = false;
}
return t_result;
}
public Task<T> GetComputationAsync()
{
return
LockedFunc(GetComputationTaskLocked)
;
}
public T GetComputationResult()
{
return
LockedFunc(GetComputationTaskLocked)
.Result
;
}
public OftenReadSometimesUpdate<T> InvalidateComputationResult()
{
return
this
.LockedFunc(InvalidateComputationResultLocked)
;
}
public OftenReadSometimesUpdate<T> InvalidateComputationResultLocked()
{
result_task = null;
return this;
}
private Task<T> GetComputationTaskLocked()
{
if (result_task == null)
{
result_task = new Task<T>(HeavyComputation);
result_task.Start(TaskScheduler.Default);
}
return result_task;
}
protected virtual T HeavyComputation()
{
//a heavy computation
return default(T);//return some result of computation
}
}
}
You could simply reassign the Lazy<T> to achieve a reset:
Lazy<int> lazyResult = new Lazy<int>(GetComputationResult);
public int Result { get { return lazyResult.Value; } }
public void Reset()
{
lazyResult = new Lazy<int>(GetComputationResult);
}

Thread safety in Services set to ConcurrencyMode.Multiple

I am in the middle of developing a WCF application which is hosting a custom object for many clients to access. It's basically working, but because I need to deal with thousands of simultaneous clients, I need the service to be able to handle concurrent read calls (updates will be infrequent). I have added some thread safety by locking on a private field while updating the object.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
private object updateLock = new object();
private SortedList<string, DateTime> dates = new SortedList<string, DateTime>();
public DateTime GetDate(string key)
{
if (this.dates.ContainsKey(key))
{
return this.dates[key];
}
else
{
return DateTime.MinValue;
}
}
public void SetDate(string key, DateTime expirationDate)
{
lock (this.updateLock)
{
if (this.dates.ContainsKey(key))
{
this.dates[key] = expirationDate;
}
else
{
this.dates.Add(key, expirationDate);
}
}
}
}
My problem is how to make GetDate thread-safe without locking, so that concurrent calls to GetDate can execute, but so that an exception won't randomly occur when a value is removed from the collection after the ContainsKey check but before the value is read.
Catching the exception and dealing with it is possible, but I would still prefer to prevent it.
Any ideas?
There is a lock specifically designed for this, ReaderWriterLockSlim (or ReaderWriterLock if you are targeting a version earlier than .NET 4.0).
This lock allows concurrent reads, but locks out readers (and other writers) while a write is happening.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
private ReaderWriterLockSlim updateLock = new ReaderWriterLockSlim();
private SortedList<string, DateTime> dates = new SortedList<string, DateTime>();
public DateTime GetDate(string key)
{
try
{
this.updateLock.EnterReadLock();
if (this.dates.ContainsKey(key))
{
return this.dates[key];
}
else
{
return DateTime.MinValue;
}
}
finally
{
this.updateLock.ExitReadLock();
}
}
public void SetDate(string key, DateTime expirationDate)
{
try
{
this.updateLock.EnterWriteLock();
if (this.dates.ContainsKey(key))
{
this.dates[key] = expirationDate;
}
else
{
this.dates.Add(key, expirationDate);
}
}
finally
{
this.updateLock.ExitWriteLock();
}
}
}
There is also "Try" versions of the locks that support timeouts, you just check the returned bool to see if you took the lock.
UPDATE: Another solution is to use a ConcurrentDictionary; this does not require any locks on your side at all. ConcurrentDictionary uses locks internally, but they are shorter-lived than the ones you could take yourself, and there is also the potential that Microsoft could use some form of unsafe methods to optimize it even more. I don't know exactly what kind of locks it takes internally.
You will need to do some rewriting to make your operations atomic, though:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
private ConcurrentDictionary<string, DateTime> dates = new ConcurrentDictionary<string, DateTime>();
public DateTime GetDate(string key)
{
DateTime result;
if (this.dates.TryGetValue(key, out result))
{
return result;
}
else
{
return DateTime.MinValue;
}
}
public void SetDate(string key, DateTime expirationDate)
{
this.dates.AddOrUpdate(key, expirationDate, (usedKey, oldValue) => expirationDate);
}
}
UPDATE 2: Out of curiosity I looked under the hood to see what ConcurrentDictionary does. It only locks on the set of buckets the element belongs to, so you only get lock contention if two objects happen to share the same hash-bucket lock.
There are normally Environment.ProcessorCount * 4 lock buckets, but you can set the number by hand using the constructor that takes a concurrencyLevel.
Here is how it decides which lock to use
private void GetBucketAndLockNo(int hashcode, out int bucketNo, out int lockNo, int bucketCount, int lockCount)
{
bucketNo = (hashcode & 2147483647) % bucketCount;
lockNo = bucketNo % lockCount;
}
lockCount is equal to the concurrencyLevel set in the constructor.
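If you want to set it yourself, that constructor can be called like this (the capacity value here is just an illustrative guess):
var dates = new ConcurrentDictionary<string, DateTime>(
    concurrencyLevel: Environment.ProcessorCount * 4,
    capacity: 100);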
I would suggest you use a ReaderWriterLockSlim; the documentation for it provides an example that is almost exactly what you want (http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx).
But, something like this:
public DateTime GetDate(string key)
{
cacheLock.EnterReadLock();
try
{
if (this.dates.ContainsKey(key))
{
return this.dates[key];
}
else
{
return DateTime.MinValue;
}
}
finally
{
cacheLock.ExitReadLock();
}
}
public void SetDate(string key, DateTime expirationDate)
{
cacheLock.EnterWriteLock();
try
{
if (this.dates.ContainsKey(key))
{
this.dates[key] = expirationDate;
}
else
{
this.dates.Add(key, expirationDate);
}
}
finally
{
cacheLock.ExitWriteLock();
}
}
ReaderWriterLockSlim is much more performant than using a lock and differentiates between reads and writes, so if no writes are occurring the read becomes non-blocking.
If you really can't afford locking for reads, you could (inside the lock) make a copy of the list, update it accordingly, and then replace the old list. The worst thing that could happen now would be that some of the reads would be a bit out of date, but they should never throw.
lock (this.updateLock)
{
var temp = <copy list here>
if (temp.ContainsKey(key))
{
temp[key] = expirationDate;
}
else
{
temp.Add(key, expirationDate);
}
this.dates = temp;
}
Not very efficient, but if you're not doing it too often it might not matter.
I had the same situation and used ReaderWriterLockSlim; it works well for these kinds of situations.

Entity Framework concurrency refresh and update

I have written a .NET + EF application. Everything works fine on a single thread. On multiple threads - it's another story.
In my EF object I have an integer counter. This property is marked as "Concurrency Mode = Fixed". Basically, what I'm trying to do is update this counter on several threads.
Like this operation:
this.MyCounter -= 1;
Because its concurrency mode has been changed to "Fixed", an OptimisticConcurrencyException is thrown when I try to update a property that has already changed.
In order to solve these concurrency problems, I'm using this code:
while (true)
{
try
{
this.UsageAmount -= 1; // Change the local EF object value and call SaveChanges().
break;
}
catch (OptimisticConcurrencyException)
{
Logger.Output(LoggerLevel.Trace, this, "concurrency conflict detected.");
EntityContainer.Instance.Entities.Refresh(RefreshMode.StoreWins, this.InnerObject);
}
}
The result of this code is an infinite loop (or at least it looks like one). Every call to this.UsageAmount -= 1 throws an OptimisticConcurrencyException, which causes the loop to run again.
My EntityContainer.Instance.Entities is a singleton class that provides an EF context PER THREAD. This means that every thread has a unique context. The code:
public sealed class EntityContainer
{
#region Singleton Implementation
private static Dictionary<Thread, EntityContainer> _instance = new Dictionary<Thread,EntityContainer> ();
private static object syncRoot = new Object();
public static EntityContainer Instance
{
get
{
if (!_instance.ContainsKey(Thread.CurrentThread))
{
lock (syncRoot)
{
if (!_instance.ContainsKey(Thread.CurrentThread))
_instance.Add(Thread.CurrentThread, new EntityContainer());
}
}
return _instance[Thread.CurrentThread];
}
}
private EntityContainer()
{
Entities = new anticopyEntities2();
}
#endregion
anticopyEntities2 _entities;
public anticopyEntities2 Entities
{
get
{
//return new anticopyEntities2();
return _entities;
}
private set
{
_entities = value;
}
}
}
BTW, after calling the Entities.Refresh method it looks like it's working (the object state is Unchanged and the property value is exactly what exists in the database).
How can I solve this concurrency problem?
I solved this in some code that I wrote for a multi-instance Azure web role by using a semaphore that I save in my database. Here's the code I use to take the semaphore. I had to add in some extra code to handle the race condition that happens between my competing instances. I also added a time-based release in case my semaphore gets stuck locked because of some error.
var semaphore = SemaphoreRepository.FetchMySemaphore(myContext);
var past = DateTime.UtcNow.AddHours(-1);
//check the lock and bail out if it is in use; ignore the lock if it is stale
if (semaphore == null || (semaphore.InUse && (semaphore.ModifiedDate.HasValue && semaphore.ModifiedDate > past)))
{
return;
}
//Update semaphore to hold lock
try
{
semaphore.InUse = true;
semaphore.OverrideAuditing = true;
semaphore.ModifiedDate = DateTime.UtcNow;
myContext.Entry(semaphore).State = EntityState.Modified;
myContext.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
//concurrency exception handling: another instance beat us in the race, so exit
return;
}
catch (DBConcurrencyException)
{
return;
}
//Do work here ...
My semaphore model looks like this:
using System.ComponentModel.DataAnnotations;
public class Semaphore : MyEntityBase //contains audit properties
{
[Required]
[ConcurrencyCheck]
public bool InUse { get; set; }
public string Description { get; set; }
}
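The release side is not shown above; under the same assumptions (the same semaphore entity and context used when taking the lock), it might look roughly like this:
//Release the semaphore once the work is done
try
{
    semaphore.InUse = false;
    semaphore.ModifiedDate = DateTime.UtcNow;
    myContext.Entry(semaphore).State = EntityState.Modified;
    myContext.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
    //another instance touched the row; the stale-lock timeout above will eventually free it
}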

What is wrong with this solution to locking and managing locked exceptions?

My objective is a convention for thread-safe functionality and exception handling within my application. I'm relatively new to the concept of thread management/multithreading. I am using .NET 3.5
I wrote the following helper method to wrap all my locked actions after reading this article http://blogs.msdn.com/b/ericlippert/archive/2009/03/06/locks-and-exceptions-do-not-mix.aspx, which was linked in response to this question, Monitor vs lock.
My thought is that if I use this convention consistently in my application, it will be easier to write thread-safe code and to handle errors within thread safe code without corrupting the state.
public static class Locking
{
private static readonly Dictionary<object,bool> CorruptionStateDictionary = new Dictionary<object, bool>();
private static readonly object CorruptionLock = new object();
public static bool TryLockedAction(object lockObject, Action action, out Exception exception)
{
if (IsCorrupt(lockObject))
{
exception = new LockingException("Cannot execute locked action on a corrupt object.");
return false;
}
exception = null;
Monitor.Enter(lockObject);
try
{
action.Invoke();
}
catch (Exception ex)
{
exception = ex;
}
finally
{
lock (CorruptionLock) // I don't want to release the lockObject until its corruption-state is updated.
// As long as the calling class locks the lockObject via TryLockedAction(), this should work
{
Monitor.Exit(lockObject);
if (exception != null)
{
if (CorruptionStateDictionary.ContainsKey(lockObject))
{
CorruptionStateDictionary[lockObject] = true;
}
else
{
CorruptionStateDictionary.Add(lockObject, true);
}
}
}
}
return exception == null;
}
public static void Uncorrupt(object corruptLockObject)
{
if (IsCorrupt(corruptLockObject))
{
lock (CorruptionLock)
{
CorruptionStateDictionary[corruptLockObject] = false;
}
}
else
{
if(!CorruptionStateDictionary.ContainsKey(corruptLockObject))
{
throw new LockingException("Uncorrupt() is not valid on object that have not been corrupted.");
}
else
{
// The object has previously been uncorrupted.
// My thought is to ignore the call.
}
}
}
public static bool IsCorrupt(object lockObject)
{
lock(CorruptionLock)
{
return CorruptionStateDictionary.ContainsKey(lockObject) && CorruptionStateDictionary[lockObject];
}
}
}
I use a LockingException class for ease of debugging.
public class LockingException : Exception
{
public LockingException(string message) : base(message) { }
}
Here is an example usage class to show how I intend to use this.
public class ExampleUsage
{
private readonly object ExampleLock = new object();
public void ExecuteLockedMethod()
{
Exception exception;
bool valid = Locking.TryLockedAction(ExampleLock, ExecuteMethod, out exception);
if (!valid)
{
bool revalidated = EnsureValidState();
if (revalidated)
{
Locking.Uncorrupt(ExampleLock);
}
}
}
private void ExecuteMethod()
{
//does something, maybe throws an exception
}
public bool EnsureValidState()
{
// code to make sure the state is valid
// if there is an exception returns false,
return true;
}
}
Your solution seems to add nothing but complexity due to a race in the TryLockedAction:
if (IsCorrupt(lockObject))
{
exception = new LockingException("Cannot execute locked action on a corrupt object.");
return false;
}
exception = null;
Monitor.Enter(lockObject);
The lockObject might become "corrupted" while we are still waiting on the Monitor.Enter, so there is no protection.
I'm not sure what behaviour you'd like to achieve, but it would probably help to separate locking and state management:
class StateManager
{
public bool IsCorrupted
{
get;
set;
}
public void Execute(Action body, Func<bool> fixState)
{
if (this.IsCorrupted)
{
// use some Exception-derived class here.
throw new Exception("Cannot execute action on a corrupted object.");
}
try
{
body();
}
catch (Exception)
{
this.IsCorrupted = true;
if (fixState())
{
this.IsCorrupted = false;
}
throw;
}
}
}
public class ExampleUsage
{
private readonly object ExampleLock = new object();
private readonly StateManager stateManager = new StateManager();
public void ExecuteLockedMethod()
{
lock (ExampleLock)
{
stateManager.Execute(ExecuteMethod, EnsureValidState);
}
}
private void ExecuteMethod()
{
//does something, maybe throws an exception
}
public bool EnsureValidState()
{
// code to make sure the state is valid
// if there is an exception returns false,
return true;
}
}
Also, as far as I understand, the point of the article is that state management is harder in the presence of concurrency. However, it's still an object-state correctness issue, which is orthogonal to the locking, and you probably need a completely different approach to ensuring correctness. E.g., instead of changing some complex state within a locked code region, create a new state object and, if that succeeded, switch to the new state with a single, simple reference assignment:
public class ExampleUsage
{
private ExampleUsageState state = new ExampleUsageState();
public void ExecuteLockedMethod()
{
var newState = this.state.ExecuteMethod();
this.state = newState;
}
}
public class ExampleUsageState
{
public ExampleUsageState ExecuteMethod()
{
//does something, maybe throws an exception
return new ExampleUsageState(); // return the newly built state
}
}
Personally, I tend to think that manual locking is hard enough that each case where you need it should be treated individually (so there is not much need for generic state-management solutions), and that it is a low-level enough tool that it should be used really sparingly.
Though it looks reliable, I have three concerns:
1) The performance cost of Invoke() on every locked action could be severe.
2) What if the action (the method) requires parameters? A more complex solution will be necessary.
3) Does the CorruptionStateDictionary grow endlessly? I think the Uncorrupt() method should probably remove the object rather than set the flag to false.
Move the IsCorrupt test and the Monitor.Enter inside the try.
Move the corruption-set handling out of finally and into the catch block (it should only execute if an exception has been thrown).
Don't release the primary lock until after the corruption flag has been set (leave that in the finally block).
Don't restrict the exception to the calling thread; either rethrow it or add it to the corruption dictionary by replacing the bool with the custom exception, and return it with the IsCorrupt check.
For Uncorrupt, simply remove the item.
There are also some issues with the locking sequencing (see below).
That should cover all the bases:
public static class Locking
{
private static readonly Dictionary<object, Exception> CorruptionStateDictionary = new Dictionary<object, Exception>();
private static readonly object CorruptionLock = new object();
public static bool TryLockedAction(object lockObject, Action action, out Exception exception)
{
var lockTaken = false;
exception = null;
try
{
Monitor.Enter(lockObject, ref lockTaken);
if (IsCorrupt(lockObject))
{
exception = new LockingException("Cannot execute locked action on a corrupt object.");
return false;
}
action.Invoke();
}
catch (Exception ex)
{
var corruptionLockTaken = false;
exception = ex;
try
{
Monitor.Enter(CorruptionLock, ref corruptionLockTaken);
if (CorruptionStateDictionary.ContainsKey(lockObject))
{
CorruptionStateDictionary[lockObject] = ex;
}
else
{
CorruptionStateDictionary.Add(lockObject, ex);
}
}
finally
{
if (corruptionLockTaken)
{
Monitor.Exit(CorruptionLock);
}
}
}
finally
{
if (lockTaken)
{
Monitor.Exit(lockObject);
}
}
return exception == null;
}
public static void Uncorrupt(object corruptLockObject)
{
var lockTaken = false;
try
{
Monitor.Enter(CorruptionLock, ref lockTaken);
if (IsCorrupt(corruptLockObject))
{
CorruptionStateDictionary.Remove(corruptLockObject);
}
}
finally
{
if (lockTaken)
{
Monitor.Exit(CorruptionLock);
}
}
}
public static bool IsCorrupt(object lockObject)
{
Exception ex = null;
return IsCorrupt(lockObject, out ex);
}
public static bool IsCorrupt(object lockObject, out Exception ex)
{
var lockTaken = false;
ex = null;
try
{
Monitor.Enter(CorruptionLock, ref lockTaken);
if (CorruptionStateDictionary.ContainsKey(lockObject))
{
ex = CorruptionStateDictionary[lockObject];
}
return CorruptionStateDictionary.ContainsKey(lockObject);
}
finally
{
if (lockTaken)
{
Monitor.Exit(CorruptionLock);
}
}
}
}
The approach I would suggest would be to have a lock-state-manager object, with an "inDangerState" field. An application that needs to access a protected resource starts by using the lock-manager-object to acquire the lock; the manager will acquire the lock on behalf of the application and check the inDangerState flag. If it's set, the manager will throw an exception and release the lock while unwinding the stack. Otherwise the manager will return an IDisposable to the application which will release the lock on Dispose, but which can also manipulate the danger state flag. Before putting the locked resource into a bad state, one should call a method on the IDisposable which will set inDangerState and return a token that can be used to re-clear it once the locked resource is restored to a safe state. If the IDisposable is Dispose'd before the inDangerState flag is re-cleared, the resource will be 'stuck' in 'danger' state.
An exception handler which can restore the locked resource to a safe state should use the token to clear the inDangerState flag before returning or propagating the exception. If the exception handler cannot restore the locked resource to a safe state, it should propagate the exception while inDangerState is set.
That pattern seems simpler than what you suggest, but seems much better than assuming either that all exceptions will corrupt the locked resource, or that none will.
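A minimal sketch of that pattern might look like the following (all names here, such as GuardedLock, BeginDanger, and MarkSafe, are illustrative rather than taken from the original post):
using System;
using System.Threading;
public sealed class GuardedLock
{
    private readonly object gate = new object();
    private bool inDangerState;
    // Acquire the lock; throw if a previous holder left the resource in a bad state.
    public Holder Acquire()
    {
        Monitor.Enter(this.gate);
        if (this.inDangerState)
        {
            Monitor.Exit(this.gate);
            throw new InvalidOperationException("The protected resource is in a corrupted state.");
        }
        return new Holder(this);
    }
    public sealed class Holder : IDisposable
    {
        private readonly GuardedLock owner;
        internal Holder(GuardedLock owner) { this.owner = owner; }
        // Call before putting the resource into a potentially bad state.
        public DangerToken BeginDanger()
        {
            this.owner.inDangerState = true;
            return new DangerToken(this.owner);
        }
        // Disposing without MarkSafe leaves the resource flagged as dangerous for the next caller.
        public void Dispose() { Monitor.Exit(this.owner.gate); }
    }
    public sealed class DangerToken
    {
        private readonly GuardedLock owner;
        internal DangerToken(GuardedLock owner) { this.owner = owner; }
        // Call only once the resource is known to be back in a safe state.
        public void MarkSafe() { this.owner.inDangerState = false; }
    }
}
Usage would then be something like this (MutateProtectedResource is a hypothetical operation that may throw):
using (var holder = guard.Acquire())
{
    var token = holder.BeginDanger();
    MutateProtectedResource();   // may throw
    token.MarkSafe();            // only reached if the mutation succeeded
}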

Is there a "try to lock, skip if timed out" operation in C#?

I need to try to lock on an object, and if it's already locked, just continue (after a timeout, or without one).
The C# lock statement is blocking.
Ed's got the right function for you. Just don't forget to call Monitor.Exit(). You should use a try-finally block to guarantee proper cleanup.
if (Monitor.TryEnter(someObject))
{
try
{
// use object
}
finally
{
Monitor.Exit(someObject);
}
}
I believe that you can use Monitor.TryEnter().
The lock statement just translates to a Monitor.Enter() call and a try/finally block.
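Roughly, the compiler expands lock (obj) { ... } into something like the following (the exact pattern varies between compiler versions):
bool lockTaken = false;
try
{
    Monitor.Enter(obj, ref lockTaken);
    // body of the lock statement
}
finally
{
    if (lockTaken) Monitor.Exit(obj);
}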
I had the same problem; I ended up creating a class TryLock that implements IDisposable, and then using a using statement to control the scope of the lock:
public class TryLock : IDisposable
{
private object locked;
public bool HasLock { get; private set; }
public TryLock(object obj)
{
if (Monitor.TryEnter(obj))
{
HasLock = true;
locked = obj;
}
}
public void Dispose()
{
if (HasLock)
{
Monitor.Exit(locked);
locked = null;
HasLock = false;
}
}
}
And then use the following syntax to lock:
var obj = new object();
using (var tryLock = new TryLock(obj))
{
if (tryLock.HasLock)
{
Console.WriteLine("Lock acquired..");
}
}
Consider using AutoResetEvent and its method WaitOne with a timeout input.
static AutoResetEvent autoEvent = new AutoResetEvent(true);
if(autoEvent.WaitOne(0))
{
//start critical section
Console.WriteLine("no other thread here, do your job");
Thread.Sleep(5000);
//end critical section
autoEvent.Set();
}
else
{
Console.WriteLine("A thread working already at this time.");
}
See https://msdn.microsoft.com/en-us/library/cc189907(v=vs.110).aspx
https://msdn.microsoft.com/en-us/library/system.threading.autoresetevent(v=vs.110).aspx and https://msdn.microsoft.com/en-us/library/cc190477(v=vs.110).aspx
You'll probably find this out for yourself now that the others have pointed you in the right direction, but TryEnter can also take a timeout parameter.
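For example (a sketch; the one-second timeout is arbitrary):
if (Monitor.TryEnter(someObject, TimeSpan.FromSeconds(1)))
{
    try
    {
        // use object
    }
    finally
    {
        Monitor.Exit(someObject);
    }
}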
Jeff Richter's "CLR Via C#" is an excellent book on details of CLR innards if you're getting into more complicated stuff.
Based on Derek's answer, here is a little helper method:
private bool TryExecuteLocked(object lockObject, Action action)
{
if (!Monitor.TryEnter(lockObject))
return false;
try
{
action();
}
finally
{
Monitor.Exit(lockObject);
}
return true;
}
Usage:
private object _myLockObject = new object();
private void Usage()
{
if (TryExecuteLocked(_myLockObject, ()=> DoCoolStuff()))
{
Console.WriteLine("Hurray!");
}
}
