Entity Framework concurrency refresh and update - C#

I have written a .NET + EF application. Everything works fine on a single thread. On multiple threads - it's another story.
In my EF object I have an integer counter. This property is marked as "Concurrency Mode = Fixed". Basically, what I'm trying to do is update this counter on several threads.
Like this operation:
this.MyCounter -= 1;
Because its concurrency mode has been changed to "Fixed", an OptimisticConcurrencyException is thrown when I try to update a property that has already changed.
To solve this concurrency problem, I'm using this code:
while (true)
{
    try
    {
        this.UsageAmount -= 1; // Change the local EF object value and call SaveChanges().
        break;
    }
    catch (OptimisticConcurrencyException)
    {
        Logger.Output(LoggerLevel.Trace, this, "concurrency conflict detected.");
        EntityContainer.Instance.Entities.Refresh(RefreshMode.StoreWins, this.InnerObject);
    }
}
The result of this code is an infinite loop (or at least it looks like one). Every call to this.UsageAmount -= 1 throws an OptimisticConcurrencyException, which causes the loop to run again.
My EntityContainer.Instance.Entities comes from a singleton class that provides an EF context PER THREAD. This means that every thread has a unique context. The code:
public sealed class EntityContainer
{
    #region Singleton Implementation
    private static Dictionary<Thread, EntityContainer> _instance = new Dictionary<Thread, EntityContainer>();
    private static object syncRoot = new Object();

    public static EntityContainer Instance
    {
        get
        {
            if (!_instance.ContainsKey(Thread.CurrentThread))
            {
                lock (syncRoot)
                {
                    if (!_instance.ContainsKey(Thread.CurrentThread))
                        _instance.Add(Thread.CurrentThread, new EntityContainer());
                }
            }
            return _instance[Thread.CurrentThread];
        }
    }

    private EntityContainer()
    {
        Entities = new anticopyEntities2();
    }
    #endregion

    anticopyEntities2 _entities;

    public anticopyEntities2 Entities
    {
        get
        {
            //return new anticopyEntities2();
            return _entities;
        }
        private set
        {
            _entities = value;
        }
    }
}
BTW, after calling the Entities.Refresh method it looks like it works (the object state is Unchanged and the property value is exactly what exists in the database).
How can I solve this concurrency problem?

I solved this in some code that I wrote for a multi-instance Azure web role by using a semaphore that I save in my database. Here's the code I use to get the semaphore. I had to add in some extra code to handle the race condition that happens between my competing instances. I also added a time-based release in case my semaphore gets stuck locked because of some error.
var semaphore = SemaphoreRepository.FetchMySemaphore(myContext);
var past = DateTime.UtcNow.AddHours(-1);

// Check the lock and bail out if it is in use. Ignore the lock if it is stale.
if (semaphore == null || (semaphore.InUse && (semaphore.ModifiedDate.HasValue && semaphore.ModifiedDate > past)))
{
    return;
}

// Update the semaphore to hold the lock.
try
{
    semaphore.InUse = true;
    semaphore.OverrideAuditing = true;
    semaphore.ModifiedDate = DateTime.UtcNow;
    myContext.Entry(semaphore).State = EntityState.Modified;
    myContext.SaveChanges();
}
catch (DbUpdateConcurrencyException)
{
    // Concurrency exception: another instance beat us in the race. Exit.
    return;
}
catch (DBConcurrencyException)
{
    return;
}

// Do work here ...
My semaphore model looks like this:
using System.ComponentModel.DataAnnotations;

public class Semaphore : MyEntityBase // contains audit properties
{
    [Required]
    [ConcurrencyCheck]
    public bool InUse { get; set; }

    public string Description { get; set; }
}

Related

How to make a group of operations atomic without using Lock

I have a state variable whose fields of interest are thread-safe and kept fresh using a ReaderWriterLockSlim. I have fine-grained the access to all fields since I also use them separately elsewhere.
While individual access to the fields works fine, I also need to make a couple of operations atomic. In this case, do I need an additional lock?
I have a state variable that already contains thread-safe fields:
internal class State
{
    private ReaderWriterLockSlim lck = new ReaderWriterLockSlim();
    private bool cmd_PopUp;
    private bool cmd_Shutdown;
    private bool cmd_Delay;

    public bool CMD_PopUp
    {
        get
        {
            try
            {
                this.lck.EnterReadLock();
                return this.cmd_PopUp;
            }
            finally
            {
                this.lck.ExitReadLock();
            }
        }
        set
        {
            try
            {
                this.lck.EnterWriteLock();
                this.cmd_PopUp = value;
            }
            finally
            {
                this.lck.ExitWriteLock();
            }
        }
    }

    // same goes for the other booleans
}
This is already thread-safe, and I am also ensuring the thread reads from memory rather than a local cache. However, I need multiple operations to be done atomically.
Atomic operations
public async Task Run(State state)
{
    while (true)
    {
        // do I need a lock here?
        if (state.CMD_Delay)
        {
            state.CMD_Delay = false;
            state.CMD_PopUp = false;
            // end of potential lock?
        }
        else if (state.CMD_Shutdown) // same here
        {
            state.CMD_PopUp = false;
            state.CMD_Shutdown = false;
            await SomeAction();
        }
    }
}
As you can see, in my while loop I have an if-else where I need the group of operations to be atomic. Should I use an additional lock, or is there some other lightweight solution?
It would make the most sense to me to re-use your existing lock:
internal class State
{
    ...
    public void SetDelayState()
    {
        try
        {
            this.lck.EnterWriteLock();
            this.cmd_Delay = false;
            this.cmd_PopUp = false;
        }
        finally
        {
            this.lck.ExitWriteLock();
        }
    }

    public void SetShutdownState()
    {
        try
        {
            this.lck.EnterWriteLock();
            this.cmd_PopUp = false;
            this.cmd_Shutdown = false;
        }
        finally
        {
            this.lck.ExitWriteLock();
        }
    }
}
In other words, move all atomic operations to be members of your State type.
Side note: you almost certainly do not need a reader/writer lock. Reader/writer locks should only be used when all of the following are true:
Some code paths are read-only and others are read/write.
Readers far outnumber writers.
There are a large number of consecutive readers.
If any of those are not true, then lock is the better choice.
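To illustrate that side note, here is a minimal sketch (mine, not the answer's) of the same State type using a plain lock; the field and method names mirror the code above:
internal class State
{
    private readonly object sync = new object();
    private bool cmd_PopUp;
    private bool cmd_Shutdown;
    private bool cmd_Delay;

    public bool CMD_PopUp
    {
        get { lock (sync) { return this.cmd_PopUp; } }
        set { lock (sync) { this.cmd_PopUp = value; } }
    }

    // ... same pattern for CMD_Shutdown and CMD_Delay ...

    // Atomic group operations, as in the answer above.
    public void SetDelayState()
    {
        lock (sync)
        {
            this.cmd_Delay = false;
            this.cmd_PopUp = false;
        }
    }

    public void SetShutdownState()
    {
        lock (sync)
        {
            this.cmd_PopUp = false;
            this.cmd_Shutdown = false;
        }
    }
}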

Collection was modified; enumeration operation may not execute even though the collection was modified exclusively in lock statements

I have the following base code. The ActionMonitor can be used by anyone, in whatever setting, regardless of single-thread or multi-thread.
using System;
using System.Collections.Generic;

public class ActionMonitor
{
    public ActionMonitor()
    {
    }

    private object _lockObj = new object();

    public void OnActionEnded()
    {
        lock (_lockObj)
        {
            IsInAction = false;
            foreach (var trigger in _triggers)
                trigger();
            _triggers.Clear();
        }
    }

    public void OnActionStarted()
    {
        IsInAction = true;
    }

    private ISet<Action> _triggers = new HashSet<Action>();

    public void ExecuteAfterAction(Action action)
    {
        lock (_lockObj)
        {
            if (IsInAction)
                _triggers.Add(action);
            else
                action();
        }
    }

    public bool IsInAction
    {
        get; private set;
    }
}
On exactly one occasion, when I examined a crash on a client's machine, an exception was thrown at:
System.Core: System.InvalidOperationException Collection was modified; enumeration operation may not execute. at
System.Collections.Generic.HashSet`1.Enumerator.MoveNext() at
WPFApplication.ActionMonitor.OnActionEnded()
My reaction when seeing this stack trace: this is unbelievable! This must be a .NET bug!
Because although ActionMonitor can be used in a multithreading setting, the crash above shouldn't occur: all modifications of _triggers (the collection) happen inside a lock statement. This guarantees that one cannot iterate over the collection and modify it at the same time.
And if _triggers happened to contain an Action that involves ActionMonitor, then we might get a deadlock, but it should never crash.
I have seen this crash exactly once, so I can't reproduce the problem at all. But based on my understanding of multithreading and the lock statement, this exception should never occur.
Am I missing something here? Or is it known that .NET can behave in a very quirky way when System.Action is involved?
You didn't shield your code against the following call:
private static ActionMonitor _actionMonitor;

static void Main(string[] args)
{
    _actionMonitor = new ActionMonitor();
    _actionMonitor.OnActionStarted();
    _actionMonitor.ExecuteAfterAction(Foo1);
    _actionMonitor.ExecuteAfterAction(Foo2);
    _actionMonitor.OnActionEnded();
    Console.ReadLine();
}

private static void Foo1()
{
    _actionMonitor.OnActionStarted();
    // Notice that if you called _actionMonitor.OnActionEnded() here instead of
    // _actionMonitor.OnActionStarted(), you would get a StackOverflowException.
    _actionMonitor.ExecuteAfterAction(Foo3);
}

private static void Foo2()
{
}

private static void Foo3()
{
}
FYI, that's the scenario Damien_The_Unbeliever is talking about in the comments.
To fix that issue, only two things come to mind:
Don't call it like this; it's your class and your code is calling it, so make sure you stick to your own rules.
Take a copy of the _triggers collection and enumerate that instead.
About point 1, you could track whether OnActionEnded is running and throw an exception if OnActionStarted is called while it is:
private bool _isRunning = false;

public void OnActionEnded()
{
    lock (_lockObj)
    {
        try
        {
            _isRunning = true;
            IsInAction = false;
            foreach (var trigger in _triggers)
                trigger();
            _triggers.Clear();
        }
        finally
        {
            _isRunning = false;
        }
    }
}

public void OnActionStarted()
{
    lock (_lockObj)
    {
        if (_isRunning)
            throw new NotSupportedException();
        IsInAction = true;
    }
}
About point 2, how about this:
public class ActionMonitor
{
    public ActionMonitor()
    {
    }

    private object _lockObj = new object();

    public void OnActionEnded()
    {
        lock (_lockObj)
        {
            IsInAction = false;
            // Swap the set out before enumerating so triggers can safely add new
            // triggers while we run. Anything added during the loop ends up in the
            // fresh set, so keep looping until the set stays empty.
            while (true)
            {
                var tmpTriggers = _triggers;
                _triggers = new HashSet<Action>();
                if (tmpTriggers.Count == 0)
                    break;
                foreach (var trigger in tmpTriggers)
                    trigger();
            }
        }
    }

    public void OnActionStarted()
    {
        lock (_lockObj) // fix the error @EricLippert talked about in the comments
            IsInAction = true;
    }

    private ISet<Action> _triggers = new HashSet<Action>();

    public void ExecuteAfterAction(Action action)
    {
        lock (_lockObj)
        {
            if (IsInAction)
                _triggers.Add(action);
            else
                action();
        }
    }

    public bool IsInAction
    {
        get; private set;
    }
}
This guarantees that one cannot iterate over the collection and modify it at the same time.

No. You have a reentrancy problem.
Consider what happens if, inside the call to trigger (same thread, so the lock is already held), you modify the collection:
foreach (var trigger in _triggers)
    trigger(); // _triggers modified in here
In fact, if you look at your full call stack, you will be able to find the frame that is enumerating the collection. (By the time the exception happens, the code that modified the collection has been popped off the stack.)

What is the most performant way to make the results of a cached computation thread-safe?

(Apologies if this was answered elsewhere; it seems like it would be a common problem, but it turns out to be hard to search for since terms like "threading" and "cache" produce overwhelming results.)
I have an expensive computation whose result is accessed frequently but changes infrequently. Thus, I cache the resulting value. Here's some C# pseudocode of what I mean:
int? _cachedResult = null;

int GetComputationResult()
{
    if (_cachedResult == null)
    {
        // Do the expensive computation.
        _cachedResult = /* Result of expensive computation. */;
    }
    return _cachedResult.Value;
}
Elsewhere in my code, I will occasionally set _cachedResult back to null because the input to the computation has changed and thus the cached result is no longer valid and needs to be re-computed. (Which means I can't use Lazy<T> since Lazy<T> doesn't support being reset.)
This works fine for single-threaded scenarios, but of course it's not at all thread-safe. So my question is: What is the most performant way to make GetComputationResult thread-safe?
Obviously I could just put the whole thing in a lock() block, but I suspect there might be a better way? (Something that would do an atomic check to see if the result needs to be recomputed and only lock if it does?)
Thanks a lot!
You can use the double-checked locking pattern:
using System;
using System.Runtime.CompilerServices; // StrongBox<T>
using System.Threading;

// Thread-safe (uses double-checked locking pattern for performance)
public class Memoized<T>
{
    Func<T> _compute;
    volatile bool _cached;
    volatile bool _startedCaching;
    volatile StrongBox<T> _cachedResult; // Need reference type
    object _cacheSyncRoot = new object();

    public Memoized(Func<T> compute)
    {
        _compute = compute;
    }

    public T Value
    {
        get
        {
            if (_cached) // Fast path
                return _cachedResult.Value;

            lock (_cacheSyncRoot)
            {
                if (!_cached)
                {
                    _startedCaching = true;
                    _cachedResult = new StrongBox<T>(_compute());
                    _cached = true;
                }
            }
            return _cachedResult.Value;
        }
    }

    public void Invalidate()
    {
        if (!_startedCaching)
        {
            // Fast path: already invalidated
            Thread.MemoryBarrier(); // need to release
            if (!_startedCaching)
                return;
        }

        lock (_cacheSyncRoot)
            _cached = _startedCaching = false;
    }
}
This particular implementation matches your description of what it should do in corner cases: If the cache has been invalidated, the value should only be computed once, by a single thread, and other threads should wait. However, if the cache is invalidated concurrently with the cached value being accessed, the stale cached value may be returned.
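For example, usage of the Memoized<T> class above might look like this; ExpensiveComputation is a hypothetical placeholder for the real work:
var cached = new Memoized<int>(() => ExpensiveComputation()); // hypothetical expensive function
int a = cached.Value;  // computed once, by a single thread
int b = cached.Value;  // fast path: returns the cached result without locking
cached.Invalidate();   // the input changed, so the next read recomputes
int c = cached.Value;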
Perhaps this will provide some food for thought :)
It is a generic class.
The class can compute data asynchronously or synchronously.
It allows fast reads thanks to the spin lock.
It does not perform heavy work inside the spin lock, just returning the Task and, if necessary, creating and starting the Task on the default TaskScheduler to avoid inlining.
A Task combined with a SpinLock is a pretty powerful combination that can solve some problems in a lock-free way.
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Example
{
    class OftenReadSometimesUpdate<T>
    {
        private Task<T> result_task = null;
        private SpinLock spin_lock = new SpinLock(false);

        private TResult LockedFunc<TResult>(Func<TResult> locked_func)
        {
            TResult t_result = default(TResult);
            bool gotLock = false;
            if (locked_func == null) return t_result;
            try
            {
                spin_lock.Enter(ref gotLock);
                t_result = locked_func();
            }
            finally
            {
                if (gotLock) spin_lock.Exit();
                gotLock = false;
            }
            return t_result;
        }

        public Task<T> GetComputationAsync()
        {
            return LockedFunc(GetComputationTaskLocked);
        }

        public T GetComputationResult()
        {
            return LockedFunc(GetComputationTaskLocked).Result;
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResult()
        {
            return this.LockedFunc(InvalidateComputationResultLocked);
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResultLocked()
        {
            result_task = null;
            return this;
        }

        private Task<T> GetComputationTaskLocked()
        {
            if (result_task == null)
            {
                result_task = new Task<T>(HeavyComputation);
                result_task.Start(TaskScheduler.Default);
            }
            return result_task;
        }

        protected virtual T HeavyComputation()
        {
            // a heavy computation
            return default(T); // return some result of computation
        }
    }
}
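A possible usage sketch of the class above (my own illustration, with made-up names), deriving from it and overriding HeavyComputation:
class ExpensiveValueCache : OftenReadSometimesUpdate<int>
{
    protected override int HeavyComputation()
    {
        return 42; // stand-in for the real expensive work
    }
}

// elsewhere:
var cache = new ExpensiveValueCache();
int first = cache.GetComputationResult();  // starts the Task and blocks for the result
int again = cache.GetComputationResult();  // returns the already-computed Task's result
cache.InvalidateComputationResult();       // the next read recomputes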
You could simply reassign the Lazy<T> to achieve a reset:
Lazy<int> lazyResult = new Lazy<int>(GetComputationResult);

public int Result { get { return lazyResult.Value; } }

public void Reset()
{
    lazyResult = new Lazy<int>(GetComputationResult);
}
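One caveat, which is my assumption rather than something stated in this answer: if Reset() can run on a different thread than the readers, publishing the new Lazy<int> through a volatile field helps ensure the reassignment is promptly visible to other threads:
// Hedged sketch: the volatile modifier makes the reassignment in Reset()
// visible to readers on other threads without them caching a stale reference.
private volatile Lazy<int> lazyResult;

public void Reset()
{
    lazyResult = new Lazy<int>(GetComputationResult);
}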

How to ensure a certain method doesn't get executed if some other method is running?

Let's say I have two methods -
getCurrentValue(int valueID)
updateValues(int changedComponentID)
These two methods are called on the same object independently by separate threads.
getCurrentValue() simply does a database lookup for the current valueID.
"Values" change if their corresponding components change. The updateValues() method updates those values that depend on the component that just changed, i.e. changedComponentID. This is a database operation and takes time.
While this update operation is going on, I do not want to return a stale value by doing a lookup from the database; I want to wait until the update method has completed. At the same time, I don't want two update operations to happen simultaneously, or an update to happen while a read is going on.
So, I'm thinking of doing it this way -
[MethodImpl(MethodImplOptions.Synchronized)]
public int getCurrentValue(int valueID)
{
    while (updateOperationIsGoingOn)
    {
        // do nothing
    }
    readOperationIsGoingOn = true;
    value = // read value from DB
    readOperationIsGoingOn = false;
    return value;
}

[MethodImpl(MethodImplOptions.Synchronized)]
public void updateValues(int componentID)
{
    while (readOperationIsGoingOn)
    {
        // do nothing
    }
    updateOperationIsGoingOn = true;
    // update values in DB
    updateOperationIsGoingOn = false;
}
I'm not sure whether this is a correct way of doing it. Any suggestions? Thanks.
That's not the correct way. Like this you are doing an "active wait" (busy-waiting), effectively burning CPU.
You should use a lock instead:
static object _syncRoot = new object();

public int getCurrentValue(int valueID)
{
    int value;
    lock (_syncRoot)
    {
        value = // read value from DB
    }
    return value;
}

public void updateValues(int componentID)
{
    lock (_syncRoot)
    {
        // update values in DB
    }
}
Create a static object outside of both of these methods. Then use a lock statement on that object; when one method is accessing the protected code, the other method will wait for the lock to release.
private static object _lockObj = new object();

public int getCurrentValue(int valueID)
{
    int value;
    lock (_lockObj)
    {
        value = // read value from DB
    }
    return value;
}

public void updateValues(int componentID)
{
    lock (_lockObj)
    {
        // update values in DB
    }
}
private static readonly object _lock = new object();

public int getCurrentValue(int valueID)
{
    try
    {
        Monitor.Enter(_lock);
        int value = // read value from DB
        return value;
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}

public void updateValues(int componentID)
{
    try
    {
        Monitor.Enter(_lock);
        // update values in DB
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}
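If concurrent reads are allowed to overlap and only updates need exclusivity, a ReaderWriterLockSlim is another option. A minimal sketch under that assumption (mine, not from the answers above), keeping the method shapes used here:
private static readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();

public int getCurrentValue(int valueID)
{
    _rwLock.EnterReadLock(); // multiple readers may enter; blocked while a writer holds the lock
    try
    {
        int value = 0; // read value from DB (placeholder)
        return value;
    }
    finally
    {
        _rwLock.ExitReadLock();
    }
}

public void updateValues(int componentID)
{
    _rwLock.EnterWriteLock(); // exclusive: waits for readers and other writers
    try
    {
        // update values in DB
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}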

What is wrong with this solution to locking and managing locked exceptions?

My objective is a convention for thread-safe functionality and exception handling within my application. I'm relatively new to the concept of thread management/multithreading. I am using .NET 3.5
I wrote the following helper method to wrap all my locked actions after reading this article http://blogs.msdn.com/b/ericlippert/archive/2009/03/06/locks-and-exceptions-do-not-mix.aspx, which was linked in response to this question, Monitor vs lock.
My thought is that if I use this convention consistently in my application, it will be easier to write thread-safe code and to handle errors within thread safe code without corrupting the state.
public static class Locking
{
    private static readonly Dictionary<object, bool> CorruptionStateDictionary = new Dictionary<object, bool>();
    private static readonly object CorruptionLock = new object();

    public static bool TryLockedAction(object lockObject, Action action, out Exception exception)
    {
        if (IsCorrupt(lockObject))
        {
            exception = new LockingException("Cannot execute locked action on a corrupt object.");
            return false;
        }
        exception = null;
        Monitor.Enter(lockObject);
        try
        {
            action.Invoke();
        }
        catch (Exception ex)
        {
            exception = ex;
        }
        finally
        {
            // I don't want to release the lockObject until its corruption-state is updated.
            // As long as the calling class locks the lockObject via TryLockedAction(), this should work.
            lock (CorruptionLock)
            {
                Monitor.Exit(lockObject);
                if (exception != null)
                {
                    if (CorruptionStateDictionary.ContainsKey(lockObject))
                    {
                        CorruptionStateDictionary[lockObject] = true;
                    }
                    else
                    {
                        CorruptionStateDictionary.Add(lockObject, true);
                    }
                }
            }
        }
        return exception == null;
    }

    public static void Uncorrupt(object corruptLockObject)
    {
        if (IsCorrupt(corruptLockObject))
        {
            lock (CorruptionLock)
            {
                CorruptionStateDictionary[corruptLockObject] = false;
            }
        }
        else
        {
            if (!CorruptionStateDictionary.ContainsKey(corruptLockObject))
            {
                throw new LockingException("Uncorrupt() is not valid on objects that have not been corrupted.");
            }
            else
            {
                // The object has previously been uncorrupted.
                // My thought is to ignore the call.
            }
        }
    }

    public static bool IsCorrupt(object lockObject)
    {
        lock (CorruptionLock)
        {
            return CorruptionStateDictionary.ContainsKey(lockObject) && CorruptionStateDictionary[lockObject];
        }
    }
}
I use a LockingException class for ease of debugging.
public class LockingException : Exception
{
    public LockingException(string message) : base(message) { }
}
Here is an example usage class to show how I intend to use this.
public class ExampleUsage
{
    private readonly object ExampleLock = new object();

    public void ExecuteLockedMethod()
    {
        Exception exception;
        bool valid = Locking.TryLockedAction(ExampleLock, ExecuteMethod, out exception);
        if (!valid)
        {
            bool revalidated = EnsureValidState();
            if (revalidated)
            {
                Locking.Uncorrupt(ExampleLock);
            }
        }
    }

    private void ExecuteMethod()
    {
        // does something, maybe throws an exception
    }

    public bool EnsureValidState()
    {
        // code to make sure the state is valid
        // if there is an exception, returns false
        return true;
    }
}
Your solution seems to add nothing but complexity due to a race in the TryLockedAction:
if (IsCorrupt(lockObject))
{
    exception = new LockingException("Cannot execute locked action on a corrupt object.");
    return false;
}
exception = null;
Monitor.Enter(lockObject);
The lockObject might become "corrupted" while we are still waiting on the Monitor.Enter, so there is no protection.
I'm not sure what behaviour you'd like to achieve, but it would probably help to separate locking from state management:
class StateManager
{
    public bool IsCorrupted
    {
        get;
        set;
    }

    public void Execute(Action body, Func<bool> fixState)
    {
        if (this.IsCorrupted)
        {
            // use some Exception-derived class here.
            throw new Exception("Cannot execute action on a corrupted object.");
        }
        try
        {
            body();
        }
        catch (Exception)
        {
            this.IsCorrupted = true;
            if (fixState())
            {
                this.IsCorrupted = false;
            }
            throw;
        }
    }
}

public class ExampleUsage
{
    private readonly object ExampleLock = new object();
    private readonly StateManager stateManager = new StateManager();

    public void ExecuteLockedMethod()
    {
        lock (ExampleLock)
        {
            stateManager.Execute(ExecuteMethod, EnsureValidState);
        }
    }

    private void ExecuteMethod()
    {
        // does something, maybe throws an exception
    }

    public bool EnsureValidState()
    {
        // code to make sure the state is valid
        // if there is an exception, returns false
        return true;
    }
}
Also, as far as I understand, the point of the article is that state management is harder in the presence of concurrency. However, it's still just an object-state correctness issue, which is orthogonal to the locking, and you probably need a completely different approach to ensuring correctness. For example, instead of changing some complex state within a locked code region, create a new state and, if that succeeded, switch to the new state in a single, simple reference assignment:
public class ExampleUsage
{
    private ExampleUsageState state = new ExampleUsageState();

    public void ExecuteLockedMethod()
    {
        var newState = this.state.ExecuteMethod();
        this.state = newState;
    }
}

public class ExampleUsageState
{
    public ExampleUsageState ExecuteMethod()
    {
        // does something, maybe throws an exception
        return new ExampleUsageState(); // returns the resulting state
    }
}
Personally, I always tend to think that manual locking is hard enough that each case where you need it should be treated individually (so there is little need for generic state-management solutions), and a low-level enough tool that it should be used really sparingly.
Though it looks reliable, I have three concerns:
1) The performance cost of Invoke() on every locked action could be severe.
2) What if the action (the method) requires parameters? A more complex solution will be necessary.
3) Does the CorruptionStateDictionary grow endlessly? I think the Uncorrupt() method should probably remove the object rather than set the value to false.
Move the IsCorrupt test and the Monitor.Enter inside the try.
Move the corruption-set handling out of finally and into the catch block (this should only execute if an exception has been thrown).
Don't release the primary lock until after the corruption flag has been set (leave the release in the finally block).
Don't restrict the exception to the calling thread; either rethrow it or add it to the corruption dictionary by replacing the bool with the custom exception, and return it with the IsCorrupt check.
For Uncorrupt, simply remove the item.
There are some issues with the locking sequencing (see below).
That should cover all the bases:
public static class Locking
{
    private static readonly Dictionary<object, Exception> CorruptionStateDictionary = new Dictionary<object, Exception>();
    private static readonly object CorruptionLock = new object();

    public static bool TryLockedAction(object lockObject, Action action, out Exception exception)
    {
        var lockTaken = false;
        exception = null;
        try
        {
            Monitor.Enter(lockObject, ref lockTaken);
            if (IsCorrupt(lockObject))
            {
                exception = new LockingException("Cannot execute locked action on a corrupt object.");
                return false;
            }
            action.Invoke();
        }
        catch (Exception ex)
        {
            var corruptionLockTaken = false;
            exception = ex;
            try
            {
                Monitor.Enter(CorruptionLock, ref corruptionLockTaken);
                if (CorruptionStateDictionary.ContainsKey(lockObject))
                {
                    CorruptionStateDictionary[lockObject] = ex;
                }
                else
                {
                    CorruptionStateDictionary.Add(lockObject, ex);
                }
            }
            finally
            {
                if (corruptionLockTaken)
                {
                    Monitor.Exit(CorruptionLock);
                }
            }
        }
        finally
        {
            if (lockTaken)
            {
                Monitor.Exit(lockObject);
            }
        }
        return exception == null;
    }

    public static void Uncorrupt(object corruptLockObject)
    {
        var lockTaken = false;
        try
        {
            Monitor.Enter(CorruptionLock, ref lockTaken);
            if (IsCorrupt(corruptLockObject))
            {
                CorruptionStateDictionary.Remove(corruptLockObject);
            }
        }
        finally
        {
            if (lockTaken)
            {
                Monitor.Exit(CorruptionLock);
            }
        }
    }

    public static bool IsCorrupt(object lockObject)
    {
        Exception ex = null;
        return IsCorrupt(lockObject, out ex);
    }

    public static bool IsCorrupt(object lockObject, out Exception ex)
    {
        var lockTaken = false;
        ex = null;
        try
        {
            Monitor.Enter(CorruptionLock, ref lockTaken);
            if (CorruptionStateDictionary.ContainsKey(lockObject))
            {
                ex = CorruptionStateDictionary[lockObject];
            }
            return CorruptionStateDictionary.ContainsKey(lockObject);
        }
        finally
        {
            if (lockTaken)
            {
                Monitor.Exit(CorruptionLock);
            }
        }
    }
}
The approach I would suggest would be to have a lock-state-manager object, with an "inDangerState" field. An application that needs to access a protected resource starts by using the lock-manager-object to acquire the lock; the manager will acquire the lock on behalf of the application and check the inDangerState flag. If it's set, the manager will throw an exception and release the lock while unwinding the stack. Otherwise the manager will return an IDisposable to the application which will release the lock on Dispose, but which can also manipulate the danger state flag. Before putting the locked resource into a bad state, one should call a method on the IDisposable which will set inDangerState and return a token that can be used to re-clear it once the locked resource is restored to a safe state. If the IDisposable is Dispose'd before the inDangerState flag is re-cleared, the resource will be 'stuck' in 'danger' state.
An exception handler which can restore the locked resource to a safe state should use the token to clear the inDangerState flag before returning or propagating the exception. If the exception handler cannot restore the locked resource to a safe state, it should propagate the exception while inDangerState is set.
That pattern seems simpler than what you suggest, and much better than assuming either that all exceptions will corrupt the locked resource or that none will.
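A minimal sketch of the pattern described above, as I read it (all type and member names here are my own invention, not the answer's):
using System;
using System.Threading;

public sealed class LockStateManager
{
    private readonly object _sync = new object();
    private bool _inDangerState;

    // Acquire the lock on behalf of the caller; throw if the resource is flagged as corrupted.
    public Holder Acquire()
    {
        Monitor.Enter(_sync);
        if (_inDangerState)
        {
            Monitor.Exit(_sync);
            throw new InvalidOperationException("The protected resource is in a corrupted state.");
        }
        return new Holder(this);
    }

    public sealed class Holder : IDisposable
    {
        private readonly LockStateManager _owner;
        internal Holder(LockStateManager owner) { _owner = owner; }

        // Call before a mutation that could leave the resource corrupted.
        public DangerToken EnterDangerState()
        {
            _owner._inDangerState = true;
            return new DangerToken(_owner);
        }

        public void Dispose()
        {
            // If EnterDangerState was called and never cleared, the resource
            // stays "stuck" in the danger state for later callers.
            Monitor.Exit(_owner._sync);
        }
    }

    public sealed class DangerToken
    {
        private readonly LockStateManager _owner;
        internal DangerToken(LockStateManager owner) { _owner = owner; }

        // Call once the resource has been restored to a safe state.
        public void Clear() { _owner._inDangerState = false; }
    }
}
Usage would be roughly: using (var holder = manager.Acquire()) { var token = holder.EnterDangerState(); /* mutate the resource */ token.Clear(); } so that disposing the holder without clearing the token leaves the resource flagged as corrupted for subsequent callers.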
