I'm trying to implement something that manages a pool of resources, such that the calling code can request an object and will be given one from the pool if it's available, or else it will be made to wait. However, I'm having trouble getting the synchronization to work correctly. What I have in my pool class is something like this (where autoEvent is an AutoResetEvent, initially set as signaled):
public Foo GetFooFromPool()
{
autoEvent.WaitOne();
var foo = Pool.FirstOrDefault(p => !p.InUse);
if (foo != null)
{
foo.InUse = true;
autoEvent.Set();
return foo;
}
else if (Pool.Count < Capacity)
{
System.Diagnostics.Debug.WriteLine("count {0}\t capacity {1}", Pool.Count, Capacity);
foo = new Foo() { InUse = true };
Pool.Add(foo);
autoEvent.Set();
return foo;
}
else
{
return GetFooFromPool();
}
}
public void ReleaseFoo(Foo p)
{
p.InUse = false;
autoEvent.Set();
}
The idea is that when you call GetFooFromPool, you wait until signaled, then you try to find an existing Foo that is not in use. If you find one, we set it to InUse and then fire a signal so other threads can proceed. If we don't find one, we check to see if the pool is full. If not, we create a new Foo, add it to the pool, and signal again. If neither of those conditions is satisfied, we are made to wait again by calling GetFooFromPool again.
Now in ReleaseFoo we just set InUse back to false, and signal the next thread waiting in GetFooFromPool (if any) to try and get a Foo.
The problem seems to be in how I'm managing the size of the pool. With a capacity of 5, I'm ending up with 6 Foos. I can see my debug line print count 0 a couple of times, and count 1 might also appear a couple of times. So clearly I have multiple threads getting into the block when, as far as I can see, they shouldn't be able to.
What am I doing wrong here?
Edit: A double-checked lock like this:
else if (Pool.Count < Capacity)
{
lock(locker)
{
if (Pool.Count < Capacity)
{
System.Diagnostics.Debug.WriteLine("count {0}\t capacity {1}", Pool.Count, Capacity);
foo = new Foo() { InUse = true };
Pool.Add(foo);
autoEvent.Set();
return foo;
}
}
}
does seem to fix the problem, but I'm not sure it's the most elegant way to do it.
As was already mentioned in the comments, a counting semaphore is your friend.
Combine this with a concurrent stack and you've got a nice, simple, thread-safe implementation where you can still lazily allocate your pool items.
The bare-bones implementation below provides an example of this approach. Note that another advantage here is that you do not need to "contaminate" your pool items with an InUse member as a flag to track stuff.
Note that, as a micro-optimization, a stack is preferred over a queue in this case, because it will provide the most recently returned instance from the pool, which may still be in, e.g., the L1 cache.
public class GenericConcurrentPool<T> : IDisposable where T : class
{
private readonly SemaphoreSlim _sem;
private readonly ConcurrentStack<T> _itemsStack;
private readonly Action<T> _onDisposeItem;
private readonly Func<T> _factory;
public GenericConcurrentPool(int capacity, Func<T> factory, Action<T> onDisposeItem = null)
{
_itemsStack = new ConcurrentStack<T>(new T[capacity]); // pre-filled with nulls, so items are created lazily on first checkout
_factory = factory;
_onDisposeItem = onDisposeItem;
_sem = new SemaphoreSlim(capacity);
}
public async Task<T> CheckOutAsync()
{
await _sem.WaitAsync();
return Pop();
}
public T CheckOut()
{
_sem.Wait();
return Pop();
}
public void CheckIn(T item)
{
Push(item);
_sem.Release();
}
public void Dispose()
{
_sem.Dispose();
if (_onDisposeItem != null)
{
T item;
while (_itemsStack.TryPop(out item))
{
if (item != null)
_onDisposeItem(item);
}
}
}
private T Pop()
{
T item;
var result = _itemsStack.TryPop(out item);
Debug.Assert(result);
return item ?? _factory();
}
private void Push(T item)
{
Debug.Assert(item != null);
_itemsStack.Push(item);
}
}
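Usage might look something like this (a minimal sketch; Foo here is a stand-in for your actual resource type):
var pool = new GenericConcurrentPool<Foo>(5, () => new Foo());

var foo = pool.CheckOut(); // blocks if all 5 items are currently checked out
try
{
    // ... use foo ...
}
finally
{
    pool.CheckIn(foo); // return it so a waiting thread can proceed
}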
There are a few problems with what you're doing, but your specific race condition is likely caused by a situation like the following. Imagine you have a capacity of one.
1) There is one unused item in the pool.
2) Thread #1 grabs it and signals the event.
3) Thread #2 finds no available item and gets inside the capacity block. It does not add the item yet.
4) Thread #1 returns the item to the pool and signals the event.
5) Repeat steps 1, 2, and 3 using two other threads (e.g. #3, #4).
6) Thread #2 adds an item to the pool.
7) Thread #4 adds an item to the pool.
There are now two items in a pool with a capacity of one.
Your implementation has other potential issues, however.
Depending on how your Pool.Count and Add() are synchronized, you might not see an up-to-date value.
You could potentially have multiple threads grab the same unused item.
Controlling access with an AutoResetEvent opens you up to difficult-to-find issues (like this one), because you are trying to use a lockless solution instead of just taking a lock and using Monitor.Wait() and Monitor.Pulse() for this purpose; a sketch of that approach follows below.
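For comparison, here is a rough sketch of what the original GetFooFromPool/ReleaseFoo pair might look like rewritten around a lock with Monitor.Wait() and Monitor.Pulse(); it reuses the Pool, Capacity and Foo members from the question and is illustrative only:
private readonly object locker = new object();

public Foo GetFooFromPool()
{
    lock (locker)
    {
        while (true)
        {
            var foo = Pool.FirstOrDefault(p => !p.InUse);
            if (foo != null)
            {
                foo.InUse = true;
                return foo;
            }
            if (Pool.Count < Capacity)
            {
                foo = new Foo() { InUse = true };
                Pool.Add(foo);
                return foo;
            }
            Monitor.Wait(locker); // releases the lock and sleeps until a Foo is released
        }
    }
}

public void ReleaseFoo(Foo p)
{
    lock (locker)
    {
        p.InUse = false;
        Monitor.Pulse(locker); // wake one thread waiting in GetFooFromPool
    }
}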
Related
I have a scenario in which I need to create a number of threads dynamically, based on a configurable variable. I can only run that many threads at a time, and as soon as one of the threads completes, I need to assign the next queued method to that same thread.
Can anyone help me resolve the above scenario with an example?
I have been researching for a week but have not been able to find a concrete solution.
There are many ways to approach this, but which is best depends on your specific problem.
However, let's assume that you have a collection of items that you want to do some work on, with a separate thread processing each item - up to a maximum number of simultaneous threads that you specify.
One very simple way to do that is to use PLINQ via AsParallel() and WithDegreeOfParallelism(), as the following console application demonstrates:
using System;
using System.Linq;
using System.Threading;
namespace Demo
{
static class Program
{
static void Main()
{
int maxThreads = 4;
var workItems = Enumerable.Range(1, 100);
var parallelWorkItems = workItems.AsParallel().WithDegreeOfParallelism(maxThreads);
parallelWorkItems.ForAll(worker);
}
static void worker(int value)
{
Console.WriteLine($"Worker {Thread.CurrentThread.ManagedThreadId} is processing {value}");
Thread.Sleep(1000); // Simulate work.
}
}
}
If you run this and inspect the output, you'll see that multiple threads are processing the work items, but the maximum number of threads is limited to the specified value.
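If you prefer the Task Parallel Library's imperative style, Parallel.ForEach with ParallelOptions.MaxDegreeOfParallelism gives the same cap on concurrency (a sketch using the same workItems and worker from the example above):
var options = new ParallelOptions { MaxDegreeOfParallelism = maxThreads };
Parallel.ForEach(workItems, options, worker); // at most maxThreads items processed at once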
You should have a look at thread pooling. See this link for more information: Threadpooling in .NET. You will most likely have to work with callbacks to call a method as soon as the work in one thread is done.
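For example, queueing work to the built-in thread pool might look like the sketch below; ProcessJob, OnJobCompleted and someJob are hypothetical placeholders for your own work method, completion callback and state object:
// Queue a work item; the delegate runs on a thread-pool thread.
ThreadPool.QueueUserWorkItem(state =>
{
    ProcessJob(state);     // hypothetical: do the actual work
    OnJobCompleted(state); // hypothetical: callback fired when the work is done
}, someJob);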
There might be a smarter solution for you using async/await, depending on what you are trying to achieve. But since you explicitly ask about threads, here is a short class that does what you want:
public class MultiThreadWorker : IDisposable
{
private readonly ConcurrentQueue<Action> _actions = new ConcurrentQueue<Action>();
private readonly List<Thread> _threads = new List<Thread>();
private bool _disposed;
private void ThreadFunc()
{
while (true)
{
Action action;
while (!_actions.TryDequeue(out action)) Thread.Sleep(100); // poll for work; see the note below about Monitor.Wait/Pulse
action();
}
}
public MultiThreadWorker(int numberOfThreads)
{
for (int i = 0; i < numberOfThreads; i++)
{
Thread t = new Thread(ThreadFunc);
_threads.Add(t);
t.Start();
}
}
public void Dispose()
{
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
_disposed = true;
foreach (Thread t in _threads)
t.Abort();
if (disposing)
GC.SuppressFinalize(this);
}
public void Enqueue(Action action)
{
if (_disposed)
throw new ObjectDisposedException("MultiThreadWorker");
_actions.Enqueue(action);
}
}
This class starts the required number of threads when instantiated, like this:
int requiredThreadCount = 16; // your configured value
MultiThreadWorker mtw = new MultiThreadWorker(requiredThreadCount);
It then uses a ConcurrentQueue<T> to keep track of the tasks to do. You can add methods to the queue via
mtw.Enqueue(() => DoThisTask());
I made it IDisposable to make sure the threads are stopped in the end. Of course this would need a little improvement, since aborting threads like this is not best practice.
The ThreadFunc itself repeatedly checks whether there are queued actions and executes them. This could also be improved a little by using Monitor.Pulse and Monitor.Wait, etc.
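For instance, the polling loop could be replaced with a Monitor-based wait so that idle threads sleep until work actually arrives. A sketch of that variation, swapping the ConcurrentQueue for a plain Queue guarded by a lock:
private readonly Queue<Action> _actions = new Queue<Action>();
private readonly object _lock = new object();

private void ThreadFunc()
{
    while (true)
    {
        Action action;
        lock (_lock)
        {
            while (_actions.Count == 0)
                Monitor.Wait(_lock); // sleep until Enqueue pulses
            action = _actions.Dequeue();
        }
        action();
    }
}

public void Enqueue(Action action)
{
    lock (_lock)
    {
        _actions.Enqueue(action);
        Monitor.Pulse(_lock); // wake one waiting worker thread
    }
}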
And as I said, async/await may lead to better solutions, but you asked for threads explicitly.
Many times in UI development I handle events in such a way that when an event first comes - I immediately start processing, but if there is one processing operation in progress - I wait for it to complete before I process another event. If more than one event occurs before the operation completes - I only process the most recent one.
The way I typically do that is: my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put my current event arguments in another field that is basically a one-item-sized buffer; when the current processing pass completes, I check whether there is another event to process, and I loop until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it seems to otherwise work fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
private Func<Task> next = null;
private bool isRunning = false;
public async void Run(Func<Task> action)
{
if (isRunning)
next = action;
else
{
isRunning = true;
try
{
await action();
while (next != null)
{
var nextCopy = next;
next = null;
await nextCopy();
}
}
finally
{
isRunning = false;
}
}
}
private static Lazy<EventThrottler> defaultInstance =
new Lazy<EventThrottler>(() => new EventThrottler());
public static EventThrottler Default
{
get { return defaultInstance.Value; }
}
}
Because the class is, at least generally, going to be used exclusively from the UI thread, there will generally need to be only one, so I added a convenience property for a default instance. But since it may still make sense for there to be more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
public void SomeEventHandler(object sender, EventArgs args)
{
EventThrottler.Default.Run(async () =>
{
await Task.Delay(1000);
//do other stuff
});
}
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important, not just for performance and to ensure we're not blocking callers that are merely setting the next field, but also to avoid issues with that action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing stuff in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
private object key = new object();
private Func<Task> next = null;
private bool isRunning = false;
public void Run(Func<Task> action)
{
bool shouldStartRunning = false;
lock (key)
{
if (isRunning)
next = action;
else
{
isRunning = true;
shouldStartRunning = true;
}
}
Action<Task> continuation = null;
continuation = task =>
{
Func<Task> nextCopy = null;
lock (key)
{
if (next != null)
{
nextCopy = next;
next = null;
}
else
{
isRunning = false;
}
}
if (nextCopy != null)
nextCopy().ContinueWith(continuation);
};
if (shouldStartRunning)
action().ContinueWith(continuation);
}
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue will result in the existing item being evicted.
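A size-1 collapsing queue in which all events are equivalent boils down to a single slot that newer items overwrite; a minimal sketch (the class name and members are illustrative, not from any library):
public class CollapsingSlot<T> where T : class
{
    private readonly object _lock = new object();
    private T _latest;

    // Producer: newer items evict any unprocessed older one.
    public void Publish(T item)
    {
        lock (_lock)
        {
            _latest = item; // collapse: replace, don't accumulate
            Monitor.Pulse(_lock);
        }
    }

    // Consumer: blocks until an item is available, then takes it.
    public T Take()
    {
        lock (_lock)
        {
            while (_latest == null)
                Monitor.Wait(_lock);
            var item = _latest;
            _latest = null;
            return item;
        }
    }
}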
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
var broadcastBlock = new BroadcastBlock<T>(null);
var actionBlock = new ActionBlock<T>(
executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
broadcastBlock.LinkTo(actionBlock);
return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
i =>
{
Console.WriteLine(i);
Thread.Sleep(i);
});
processor(100);
processor(1);
processor(2);
Output:
100
2
I have a class in C# like this:
public class MyClass
{
public void Start() { ... }
public void Method_01() { ... }
public void Method_02() { ... }
public void Method_03() { ... }
}
When I call the "Start()" method, an external class starts working and creates many parallel threads, which call "Method_01()" and "Method_02()" from the above class. After the external class finishes its work, "Method_03()" is run in another parallel thread.
The threads for "Method_01()" and "Method_02()" are created before the thread for "Method_03()", but there is no guarantee that they end before the "Method_03()" thread starts. I mean that "Method_01()" or "Method_02()" may lose their CPU turn while "Method_03()" gets the CPU and runs to completion.
In the "Start()" method I know the total number of threads that are supposed to be created to run "Method_01()" and "Method_02()". The question is: I'm searching for a way, using a semaphore or mutex, to ensure that the first statement of "Method_03()" runs exactly after all threads running "Method_01()" or "Method_02()" have ended.
Three options that come to mind are:
Keep an array of Thread instances and call Join on all of them from Method_03 (see the sketch after this list).
Use a single CountdownEvent instance and call Wait from Method_03.
Allocate one ManualResetEvent for each Method_01 or Method_02 call and call WaitHandle.WaitAll on all of them from Method_03 (this is not very scalable).
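The first option is the simplest if you already hold references to the worker threads. A sketch (the threads array is a hypothetical field populated when the threads are created in Start):
private Thread[] threads; // hypothetical: filled in Start() as the workers are created

private void Method_03()
{
    foreach (Thread t in threads)
        t.Join(); // blocks until that thread has terminated
    // Add your logic here.
}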
I prefer to use a CountdownEvent because it is a lot more versatile and is still super scalable.
public class MyClass
{
private CountdownEvent m_Finished = new CountdownEvent(1); // initial count of 1 stands in for the Start thread itself
public void Start()
{
// The initial count of 1 (set in the constructor) marks this thread as active;
// starting from 0 would not work, because AddCount throws once the event is set.
for (int i = 0; i < NUMBER_OF_THREADS; i++)
{
m_Finished.AddCount(); // Increment to indicate another active thread.
new Thread(Method_01).Start();
}
for (int i = 0; i < NUMBER_OF_THREADS; i++)
{
m_Finished.AddCount(); // Increment to indicate another active thread.
new Thread(Method_02).Start();
}
new Thread(Method_03).Start();
m_Finished.Signal(); // Signal to indicate that this thread is done.
}
private void Method_01()
{
try
{
// Add your logic here.
}
finally
{
m_Finished.Signal(); // Signal to indicate that this thread is done.
}
}
private void Method_02()
{
try
{
// Add your logic here.
}
finally
{
m_Finished.Signal(); // Signal to indicate that this thread is done.
}
}
private void Method_03()
{
m_Finished.Wait(); // Wait for all signals.
// Add your logic here.
}
}
This appears to be a perfect job for Tasks. Below I assume that Method01 and Method02 are allowed to run concurrently, with no specific order of invocation or finishing (no guarantee, just typed from memory without testing):
int cTaskNumber01 = 3, cTaskNumber02 = 5;
Task tMaster = new Task(() => {
for (int tI = 0; tI < cTaskNumber01; ++tI)
new Task(Method01, TaskCreationOptions.AttachedToParent).Start();
for (int tI = 0; tI < cTaskNumber02; ++tI)
new Task(Method02, TaskCreationOptions.AttachedToParent).Start();
});
// after master and its children are finished, Method03 is invoked
tMaster.ContinueWith(t => Method03()); // wrapped in a lambda because ContinueWith expects an Action<Task>
// let it go...
tMaster.Start();
What it sounds like you need to do is to create a ManualResetEvent (initialized to unset) or some other WaitHandle for each of Method_01 and Method_02, and then have Method_03's thread use WaitHandle.WaitAll on the set of handles.
Alternatively, if you can reference the Thread variables used to run Method_01 and Method_02, you could have Method_03's thread use Thread.Join to wait on both. This assumes, however, that those threads are actually terminated when they complete execution of Method_01 and Method_02; if they are not, you need to resort to the first solution I mention.
Why not use a static variable, static volatile int threadRuns, which is initialized with the number of threads that will run Method_01 and Method_02?
Then you modify each of those two methods to decrement threadRuns just before exit:
...
lock(typeof(MyClass)) {
--threadRuns;
}
...
Then in the beginning of Method_03 you wait until threadRuns is 0 and then proceed:
while(threadRuns != 0)
Thread.Sleep(10);
Did I understand the question correctly?
There is actually an alternative in the Barrier class, which is new in .NET 4.0. This simplifies how you can do the signalling across multiple threads.
You could do something like the following code, but this is mostly useful when synchronizing different processing threads.
public class Synchro
{
private Barrier _barrier;
public void Start(int numThreads)
{
_barrier = new Barrier((numThreads * 2)+1);
for (int i = 0; i < numThreads; i++)
{
new Thread(Method1).Start();
new Thread(Method2).Start();
}
new Thread(Method3).Start();
}
public void Method1()
{
//Do some work
_barrier.SignalAndWait();
}
public void Method2()
{
//Do some other work.
_barrier.SignalAndWait();
}
public void Method3()
{
_barrier.SignalAndWait();
//Do some other cleanup work.
}
}
I would also like to suggest, since your problem statement was quite abstract, that actual problems solved using CountdownEvent are often now better solved using the new Parallel or PLINQ capabilities. If you were actually processing a collection or something similar in your code, you might have something like the following.
public class Synchro
{
public void Start(List<someClass> collection)
{
new Thread(() => Method3(collection)).Start();
}
public void Method1(someClass item)
{
//Do some work.
}
public void Method2(someClass item)
{
//Do some other work.
}
public void Method3(List<someClass> collection)
{
//Do your work on each item in parallel threads.
Parallel.ForEach(collection, x => { Method1(x); Method2(x); });
//Do some work on the total collection like sorting or whatever.
}
}
I have a List<Job> newJobs. Some threads add items to that list, and another thread removes items from it if it's not empty. I have a ManualResetEvent newJobEvent, which is set when items are added to the list and reset when items are removed from it.
Adding items to the list is performed in the following way:
lock(syncLock){
newJobs.Add(job);
}
newJobEvent.Set();
Jobs removal is performed in the following way:
if (newJobs.Count==0)
newJobEvent.WaitOne();
lock(syncLock){
job = newJobs.First();
newJobs.Remove(job);
/*do some processing*/
}
newJobEvent.Reset();
When the line
job=newJobs.First()
is executed I sometimes get an exception that the list is empty. I guess that the check:
if (newJobs.Count==0)
newJobEvent.WaitOne();
should also be inside the lock statement, but I'm afraid of deadlocks on the line newJobEvent.WaitOne();
How can I solve it?
Many thanks and sorry for the long post!
You are right. Calling WaitOne inside a lock could lead to a deadlock. And the check to see if the list is empty needs to be done inside the lock; otherwise there could be a race with another thread trying to remove an item. Now, your code looks suspiciously like the producer-consumer pattern, which is usually implemented with a blocking queue. If you are using .NET 4.0 then you can take advantage of the BlockingCollection class.
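With BlockingCollection, the producer-consumer plumbing collapses to a couple of lines (a sketch, assuming .NET 4.0 and the same Job type):
private BlockingCollection<Job> m_Queue = new BlockingCollection<Job>();

public void Add(Job item)
{
    m_Queue.Add(item); // thread-safe; never blocks for an unbounded collection
}

public Job Take()
{
    return m_Queue.Take(); // blocks until an item is available
}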
However, let me go over a couple of ways you can do it yourself. The first uses a List and a ManualResetEvent to demonstrate how this could be done using the data structures in your question. Notice the use of a while loop in the Take method.
public class BlockingJobsCollection
{
private List<Job> m_List = new List<Job>();
private ManualResetEvent m_Signal = new ManualResetEvent(false);
public void Add(Job item)
{
lock (m_List)
{
m_List.Add(item);
m_Signal.Set();
}
}
public Job Take()
{
while (true)
{
lock (m_List)
{
if (m_List.Count > 0)
{
Job item = m_List.First();
m_List.Remove(item);
if (m_List.Count == 0)
{
m_Signal.Reset();
}
return item;
}
}
m_Signal.WaitOne();
}
}
}
But this is not how I would do it. I would go with the simpler solution below, which uses Monitor.Wait and Monitor.Pulse. Monitor.Wait is useful because it can be called inside a lock. In fact, it is supposed to be used that way.
public class BlockingJobsCollection
{
private Queue<Job> m_Queue = new Queue<Job>();
public void Add(Job item)
{
lock (m_Queue)
{
m_Queue.Enqueue(item);
Monitor.Pulse(m_Queue);
}
}
public Job Take()
{
lock (m_Queue)
{
while (m_Queue.Count == 0)
{
Monitor.Wait(m_Queue);
}
return m_Queue.Dequeue();
}
}
}
Not answering your question, but if you are using .NET Framework 4, you can use the new ConcurrentQueue, which does all the locking for you.
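For example (a minimal sketch; note that TryDequeue returns immediately rather than blocking, so the consumer still needs its own wait strategy):
var newJobs = new ConcurrentQueue<Job>();

// Producer thread: no explicit lock needed.
newJobs.Enqueue(newJob);

// Consumer thread: TryDequeue returns false instead of blocking when the queue is empty.
Job job;
if (newJobs.TryDequeue(out job))
{
    /* do some processing */
}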
Regarding your question:
One scenario that I can think of causing such a problem is the following:
The insertion thread enters the lock, calls newJobs.Add, leaves the lock.
Context switch to the removal thread. It checks for emptiness, sees an item, enters the locked area, removes the item, and resets the event - which hasn't even been set yet.
Context switch back to the insertion thread, the event is set.
Context switch back to the removal thread. It checks for emptiness, sees no items, waits for the event (which is already set), tries to get the first item... Bang!
Set and reset the event inside the lock and you should be fine.
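In other words, something along these lines (a sketch of the corrected insertion and removal code from the question):
// Insertion: signal inside the lock.
lock (syncLock)
{
    newJobs.Add(job);
    newJobEvent.Set();
}

// Removal: the emptiness check, removal, and event reset happen atomically.
while (true)
{
    lock (syncLock)
    {
        if (newJobs.Count > 0)
        {
            job = newJobs.First();
            newJobs.Remove(job);
            if (newJobs.Count == 0)
                newJobEvent.Reset();
            break;
        }
    }
    newJobEvent.WaitOne(); // wait outside the lock to avoid deadlock
}
/* do some processing */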
I don't see why removal, in the case of zero objects, should wait for one to be added and then remove it; that seems illogical.
I have some records that I want to save to the database asynchronously. I organize them into batches, then send them. As time passes, the batches are processed.
In the meantime the user can keep working. There are some critical operations that I want to lock him out of while any save batch is still running asynchronously.
The save is done using a TableServiceContext and its .BeginSave() method - but I think this should be irrelevant.
What I want to do is: whenever an async save is started, increase a lock count, and when it completes, decrease the lock count, so that it will be zero as soon as all saves have finished. I want to lock out the critical operations as long as the count is not zero. Furthermore, I want to qualify the lock, for example by business object.
I did not find a .NET 3.5 C# locking primitive that fulfils this requirement. A semaphore does not contain a method to check whether the count is 0; otherwise, a semaphore with an unlimited max count would do.
Actually, the Semaphore does have a way of checking whether the count is zero. Use the WaitOne method with a zero timeout. It returns a value indicating whether the semaphore was acquired. If it returns false, then it was not acquired, which implies that its count is zero.
var s = new Semaphore(5, 5);
while (s.WaitOne(0))
{
Console.WriteLine("acquired");
}
Console.WriteLine("no more left to acquire");
I'm presuming that when you say "lock the user out", you don't mean a literal lock while the operations complete, as that would block the UI thread and freeze the application when the lock was encountered. I'm assuming you mean that some state can be checked so that UI controls can be disabled/enabled.
Something like the following code could be used which is lock free:
public class BusyState
{
private int isBusy;
public void SignalTaskStarted()
{
Interlocked.Increment(ref isBusy);
}
public void SignalTaskFinished()
{
if (Interlocked.Decrement(ref isBusy) < 0)
{
throw new InvalidOperationException("No tasks started.");
}
}
public bool IsBusy()
{
return Thread.VolatileRead(ref isBusy) > 0;
}
}
public class BusinessObject
{
private readonly BusyState busyState = new BusyState();
public void Save()
{
//Raise a "Started" event to disable UI controls...
//Start a few async tasks which call CallbackFromAsyncTask when finished.
//Start task 1
busyState.SignalTaskStarted();
//Start task 2
busyState.SignalTaskStarted();
//Start task 3
busyState.SignalTaskStarted();
}
private void CallbackFromAsyncTask()
{
busyState.SignalTaskFinished();
if (!busyState.IsBusy())
{
//Raise a "Completed" event to enable UI controls...
}
}
}
The counting aspect is encapsulated in BusyState, which is then used in the business object to signal tasks starting and stopping. The "Started" and "Completed" events can be hooked up to disable and enable UI controls, locking the user out whilst the async operations complete.
There are obviously loads of caveats here for handling error conditions, etc. So just some basic outline code...
What is the purpose of the lock count if the only logic involves whether or not the value is non-zero?
If you want to do this on a type-by-type basis, you could take this approach:
public class BusinessObject1
{
private static readonly object lockObject = new object();
public static object SyncRoot { get { return lockObject; } }
}
(following the same pattern for other business objects)
If you then enclose your save and your critical operations in a block like this:
lock(BusinessObject1.SyncRoot)
{
// do work
}
You will make saving and the critical operations mutually exclusive tasks.
Since you wanted it granular, you can cascade the locks like this (always acquiring them in the same order everywhere, to avoid deadlocks):
lock(BusinessObject1.SyncRoot)
lock(BusinessObject2.SyncRoot)
lock(BusinessObject3.SyncRoot)
{
// do work
}