Creating multiple syncLock variables for an instance - C#

I have two internal properties that use lazy loading of backing fields and are used in a multi-threaded application, so I have implemented a double-checked locking scheme as per this MSDN article.
Now, firstly, assuming that this is an appropriate pattern: all the examples show creating a single lock object for an instance. If my two properties are independent of each other, would it not be more efficient to create a lock instance for each property?
It occurs to me that maybe there is only one in order to avoid deadlocks or race conditions. An obvious situation doesn't come to mind, but I'm sure someone can show me one... (I'm not very experienced with multi-threaded code, obviously.)
private List<SomeObject1> _someProperty1;
private List<SomeObject2> _someProperty2;
private readonly object _syncLockSomeProperty1 = new object();
private readonly object _syncLockSomeProperty2 = new object();

internal List<SomeObject1> SomeProperty1
{
    get
    {
        if (_someProperty1 == null)
        {
            lock (_syncLockSomeProperty1)
            {
                if (_someProperty1 == null)
                {
                    _someProperty1 = new List<SomeObject1>();
                }
            }
        }
        return _someProperty1;
    }
    set
    {
        _someProperty1 = value;
    }
}

internal List<SomeObject2> SomeProperty2
{
    get
    {
        if (_someProperty2 == null)
        {
            lock (_syncLockSomeProperty2)
            {
                if (_someProperty2 == null)
                {
                    _someProperty2 = new List<SomeObject2>();
                }
            }
        }
        return _someProperty2;
    }
    set
    {
        _someProperty2 = value;
    }
}

If your properties are truly independent, then there's no harm in using independent locks for each of them.

If the two properties (or, more specifically, their initializers) are independent of each other, as in the sample code you provided, it makes sense to have two different lock objects. However, if initialization occurs rarely, the effect will be negligible.
Note that you should protect the setter's code as well. The lock statement imposes a so-called memory barrier, which is indispensable, especially on multi-CPU and/or multi-core systems, to prevent race conditions.

Yes, if they are independent of each other, this would indeed be more efficient, as access to one won't block access to the other. You're also on the money about the risk of a deadlock if that independence turned out to be false.
Presuming that _someProperty1 = new List<SomeObject1>(); isn't the real code for assigning to _someProperty1 (hardly worth the lazy load, is it?), the question is: can the code that fills SomeProperty1 ever call the code that fills SomeProperty2, or vice versa, through any code path, no matter how bizarre?
If only one can call the other, there can't be a deadlock; but if they can both call each other (or 1 calls 2, 2 calls 3 and 3 calls 1, and so on), then a deadlock can definitely happen.
As a rule, I'd start with broad locks (one lock for all locked tasks) and then make the locks narrower as an optimisation as needed. In cases where you have, say, 20 methods which need locking, judging the safety can be harder (also, you begin to fill memory just with lock objects).
Note that there are also two issues with your code:
First, you don't lock in your setter. Possibly this is fine (you just want your lock to prevent multiple heavy calls to the loading method, and don't actually care if there are overwrites between the set and the get); possibly this is a disaster.
Second, depending on the CPU running it, double-checked locking as you have written it can have issues with read/write reordering, so you should either have a volatile field or call a memory barrier. See http://blogs.msdn.com/b/brada/archive/2004/05/12/130935.aspx
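As an aside: since .NET 4, Lazy<T> will do the double-checked locking (including the memory-barrier details) for you. A minimal sketch with placeholder types, assuming the setter can be dropped:

```csharp
using System;
using System.Collections.Generic;

class Holder
{
    // isThreadSafe: true makes Lazy<T> handle the locking and
    // memory barriers internally; the factory runs at most once.
    private readonly Lazy<List<string>> _someProperty1 =
        new Lazy<List<string>>(() => new List<string>(), isThreadSafe: true);

    internal List<string> SomeProperty1
    {
        get { return _someProperty1.Value; }
    }
}
```

Note this trades away the setter: Lazy<T> caches the first value it creates, so a settable property would need a different design.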
Edit:
It's also worth considering whether it's really needed at all.
Consider that the operation itself breaks down into three steps:
1. A bunch of stuff is done.
2. An object is created based on that bunch of stuff.
3. That object is assigned to the field.
Steps 1 and 2 happen on a single thread, and step 3 is atomic. Therefore, the advantages of locking are:
If performing step 1 and/or 2 has its own threading issues, and isn't protected from them by its own locks, then locking is 100% necessary.
If it would be disastrous for something to have acted upon a value obtained in steps 1 and 2, and then later to do so with steps 1 and 2 being repeated, locking is 100% necessary.
Locking will prevent the waste of steps 1 and 2 being done multiple times.
So, if we can rule out the first two cases as an issue (it takes a bit of analysis, but it's often possible), then we've only the wasted work of the third case to worry about. Now, maybe this is a big worry. However, if it would rarely come up, and would not be that much of a waste when it did, then the gains of not locking outweigh the gains of locking.
If in doubt, locking is probably the safer approach, but it's possible that just living with the occasional wasted operation is better.


Is this a valid way to make a custom type thread safe? And general threading questions

I have a few general questions about dealing with threads. I have been looking around but haven't really seen any answers to them.
When dealing with multiple variables in a class you want to be thread-safe, are you supposed to have one "lock object" for every variable you want to lock in the class? Like this?
static readonly object lockForVarA = new object();
private float varA;
static readonly object lockForVarB = new object();
private float varB;
Also, is this a valid way to make a custom type thread-safe?
public class SomeClass
{
    public SomeClass()
    {
        // Do some kind of work, e.g. load an assembly
    }
}
public class SomeOtherClass : BaseClassFiringFromRandomThread
{
    static readonly object someClassLock = new object();
    SomeClass someClass;

    // This is fired from any available thread, can be fired multiple times and even at the same time
    public override void Init()
    {
        lock (someClassLock)
        {
            if (someClass == null)
                someClass = new SomeClass();
        }
    }
}
This code is in the constructor of a class that can be called from any thread at any time
When dealing with multiple variables in a class you want to be thread-safe, are you supposed to have one "lock object" for every variable you want to lock in the class?
There are two rules:
Be "fine-grained". Have as many locks as possible, one for each variable. Access the variable under its lock every time you use it. Lock as little code as possible to ensure scalability. If you forget to lock a variable, you'll cause a race condition, and if you get the lock ordering wrong, you'll cause a deadlock, so make sure you get it perfect.
Be "coarse-grained". Have just one lock, and put all the critical sections under that lock. Having many locks decreases contention but increases the chance of deadlocks and other errors, so have as few locks as possible, with as much code as possible in each. Of course, this also increases the risk of deadlocks since now there is lots more code inside the locks that can have inversions, and it decreases scalability.
As you have no doubt noticed, the standard advice is completely contradictory. That's because locks are terrible.
My advice: if you don't share variables across threads then you don't need to have any locks at all.
Also is this a valid way to handle thread safing a custom type?
The code looks reasonable so far, but if your intention is to lazy-load some logic then do not write your own threading logic. Just use Lazy<T> and make it do the work. It was written by experts.
Always use the highest-level tool designed by experts that is available to you. Rolling your own threading primitives is a recipe for disaster.
Whatever you do, do not take the advice in the other answer that says you must use double-checked locking. There are no circumstances in which you must use double-checked locking. Single-checked locking is safer, easier, and more likely to be correct. Only use double-checked locking when (1) you have overwhelming empirical evidence that contention is the cause of a measurable, user-impacting performance problem that will be fixed by going low-lock, and (2) you can explain what rules in the C# memory model make double-checked locking safe.
If you can't do (1) then you have no reason to do double-checked locking, and if you can't do (2), you can't do it with any confidence of safety.
You need to use a double-checked locking pattern. There's no need to acquire your someClassLock lock once someClass has been initialised, and locking there will just cause unnecessary contention.
if (someClass == null)
{
    lock (someClassLock)
    {
        if (someClass == null)
            someClass = new SomeClass();
    }
}
You need the inner if check because it is possible that a concurrent thread created someClass after the first null check but before the lock was acquired.
Of course, you need to also ensure that SomeClass is written in a way that is itself threadsafe, but this will safely ensure that only one instance of someClass is created.
An alternative method is to use Lazy<T> with a suitable LazyThreadSafetyMode.
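For instance, a hedged sketch of the Lazy<T> route (SomeClass and the field name are stand-ins for the question's types):

```csharp
using System;
using System.Threading;

public class SomeClass { }

public class SomeOtherClass
{
    // ExecutionAndPublication guarantees the factory runs exactly once,
    // even when Init() is fired concurrently from several threads.
    private static readonly Lazy<SomeClass> someClass =
        new Lazy<SomeClass>(() => new SomeClass(),
                            LazyThreadSafetyMode.ExecutionAndPublication);

    public void Init()
    {
        var instance = someClass.Value; // first caller creates it; later callers just read
    }
}
```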

Threads operating on the one instance of a method

I'm creating an app where I have a 50x50 map. On this map I can add dots, which are new instances of the class "dot". Every dot has its own thread, and every thread connected with a specific dot runs the "explore" method of the class. In this method there is another method, "check_place(x,y)", which is responsible for checking whether some place on the map has already been discovered. If not, the static class variable "num_discovered" should be incremented. This single "check_place(x,y)" method should be accessible in real time by every thread started in the app.
Constructor:
public dot(Form1 F)
{
    // ...
    thread = new System.Threading.Thread(new System.Threading.ThreadStart(explore)); // thread executing the explore method of the robot class
    thread.Start();
}
check_place(x,y) method:
static void check_place(int x, int y)
{
    lock (ob)
    {
        if (discovered[x, y] == false)
        {
            discovered[x, y] = true;
            num_discovered += 1;
        }
    }
}
In the explore method I'm invoking the check_place(x, y) method like this:
dot.check_place(x, y);
Is it enough to achieve a situation where at any one time only one dot can check whether a place has already been discovered?
Is it enough to achieve a situation where at any one time only one dot can check whether a place has already been discovered?
Yes. But what's the point?
If threads are spending all of their time waiting on other threads, what have you gained from being multi-threaded?
There are three (sometimes overlapping) reasons to spawn more threads:
To make use of more than one core at the same time: overall throughput increases.
To have work done while another thread is waiting on something else (typically I/O from file, DB or network): overall throughput increases.
To respond to user interaction while work is being done: overall throughput decreases, but it feels faster to the user, as they are reacted to separately.
Here the last doesn't apply.
If your "checking" involved I/O then the second might apply, and this strategy might make sense.
The first could well apply, but because all the threads are spending most of their time waiting on other threads, you don't gain an improvement in throughput.
Indeed, because there is overhead involved in setting up threads and switching between them, this code will be slower than just having one thread do everything: If only one thread can work at a time, then only have one thread!
So your use of a lock here is correct in that it prevents corruption and errors, but pointless in that it makes everything too slow.
What to do about this:
If your real case involves I/O or other reasons why the threads in fact spend most of their time out of each others' way, then what you have is fine.
Otherwise you've got two options.
Easy: Just use one thread.
Hard: Have finer locking.
One way to have finer locking would be to do double-checking:
static void check_place(int x, int y)
{
    if (!discovered[x, y])
        lock (ob)
            if (!discovered[x, y])
            {
                discovered[x, y] = true;
                num_discovered += 1;
            }
}
Now at the very least some threads will skip past some cases where discovered[x, y] is true without holding up the other threads.
This is useful when a thread is going to get a result at the end of the locked period. It's still not good enough here though, because it's just going to move on quickly to a case where it fights for the lock again.
If our lookup of discovered were itself thread-safe and that thread-safety was finely grained, then we could make some progress:
static void check_place(int x, int y)
{
    if (discovered.SetIfFalse(x, y))
        Interlocked.Increment(ref num_discovered);
}
So far though we've just moved the problem around; how do we make SetIfFalse thread-safe without using a single lock and causing the same problem?
There are a few approaches. We could use striped locks, or low-locking concurrent collections.
It seems that you have a fixed-size structure of 50×50, in which case this isn't too hard:
private class DotMap
{
    // ints because we can't use Interlocked with bools
    private int[][] _map = new int[50][];
    public DotMap()
    {
        for (var i = 0; i != 50; ++i)
            _map[i] = new int[50];
    }
    public bool SetIfFalse(int x, int y)
    {
        return Interlocked.CompareExchange(ref _map[x][y], 1, 0) == 0;
    }
}
Now our advantages are:
All of our locking is much lower-level (but note that Interlocked operations will still slow down in the face of contention, albeit not as much as lock).
Much of our locking is out of the way of other locking. Specifically, the locking in SetIfFalse allows separate areas to be checked without being in each other's way at all.
This is neither a panacea (such approaches still suffer in the face of contention, and bring their own costs) nor easy to generalise to other cases (changing SetIfFalse to something that does anything more than check and change that single value is not easy). It's still quite likely that even on a machine with a lot of cores this would be slower than the single-threaded approach.
Another possibility is to not make SetIfFalse thread-safe at all, but to ensure that the threads are each partitioned from the others so that they never hit the same values, and that the structure is safe in the case of such multi-threaded access (fixed arrays of elements at or above machine word size are thread-safe when threads only ever hit different indices, but mutable structures where one can Add and/or Remove are not).
In all, you've got the right idea about how to use lock to keep threads from causing errors, and that is the approach to use 98% of the time when something lends itself well to multithreading because it involves threads waiting on something else. Your example, though, hits that lock too much to benefit from multiple cores, and creating code that does is not trivial.
Your performance on this could potentially be pretty bad - I recommend using Task.Run here to increase efficiency when you need to run your explore method on multiple threads in parallel.
As far as locking and thread safety go: if the lock in check_place is the only place you're setting bools in the discovered variable and setting the num_discovered variable, the existing code will work. If you start setting them from somewhere else in the code, you will need to use locks there as well.
Also, when reading from these variables, you should read the values into local variables inside locks that use the same lock object, to maintain thread safety there as well.
I have other suggestions but those are the two most basic things you need here.

lock keyword on a LINQ Parallel.ForEach<> loop

This is more of a conceptual question. I was wondering whether using a lock inside a Parallel.ForEach<> loop would take away the benefits of parallelizing a foreach loop.
Here is some sample code where I have seen it done.
Parallel.ForEach<KeyValuePair<string, XElement>>(binReferences.KeyValuePairs, reference =>
{
    lock (fileLockObject)
    {
        if (fileLocks.ContainsKey(reference.Key) == false)
        {
            fileLocks.Add(reference.Key, new object());
        }
    }
    RecursiveBinUpdate(reference.Value, testPath, reference.Key, maxRecursionCount, ref recursionCount);
    lock (fileLocks[reference.Key])
    {
        reference.Value.Document.Save(reference.Key);
    }
});
Where fileLockObject and fileLocks are as follows.
private static object fileLockObject = new object();
private static Dictionary<string, object> fileLocks = new Dictionary<string, object>();
Does this technique completely make the loop not parallel?
I would like to see your thoughts on this.
It means all of the work inside the lock can't be done in parallel. This greatly harms the performance here, yes. Since the entire body is not all locked (and not all locked on the same object), there is still some parallelization here, though. Whether the parallelization that you do get adds enough benefit to surpass the overhead that comes with managing the threads and synchronizing around the locks is something you really just need to test yourself with your specific data.
That said, it looks like what you're doing (at least in the first locked block, which is the one I'd be more concerned with, as every thread is locking on the same object) is locking access to a Dictionary. You can instead use a ConcurrentDictionary, which is specifically designed to be used from multiple threads, and will minimize the amount of synchronization that needs to be done.
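A sketch of how that per-key lock creation could look with ConcurrentDictionary; the class and method names here are made up, and the Action stands in for the question's Document.Save call:

```csharp
using System;
using System.Collections.Concurrent;

public static class FileLockExample
{
    // ConcurrentDictionary replaces Dictionary + fileLockObject:
    // GetOrAdd is atomic, so no outer lock is needed to create entries.
    private static readonly ConcurrentDictionary<string, object> fileLocks =
        new ConcurrentDictionary<string, object>();

    public static void SaveUnderPerKeyLock(string key, Action save)
    {
        var fileLock = fileLocks.GetOrAdd(key, _ => new object());
        lock (fileLock)
        {
            save(); // e.g. reference.Value.Document.Save(key)
        }
    }
}
```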
if I used a lock ... would that take away the benefits of parallelizing a foreach loop?
Proportionally. When RecursiveBinUpdate() is a big chunk of work (and independent), it will still pay off. The locking part could be less than 1%, or 99%. Look up Amdahl's law; it applies here.
But worse, your code is not thread-safe. Of your two operations on fileLocks, only the first is actually inside a lock.
lock (fileLockObject)
{
    if (fileLocks.ContainsKey(reference.Key) == false)
    {
        ...
    }
}
and
lock (fileLocks[reference.Key]) // this access to fileLocks[] is not protected
Change the second part to:
lock (fileLockObject)
{
    reference.Value.Document.Save(reference.Key);
}
And the use of ref recursionCount as a parameter looks suspicious too. It might work with Interlocked.Increment, though.
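To illustrate that last point, a hedged sketch (the counter name is borrowed from the question; everything else is made up):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CounterExample
{
    static int recursionCount = 0;

    static void Main()
    {
        // Each iteration bumps the shared counter atomically;
        // a plain recursionCount++ here could lose updates under contention.
        Parallel.For(0, 1000, _ => Interlocked.Increment(ref recursionCount));
        Console.WriteLine(recursionCount); // 1000
    }
}
```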
The "locked" portion of the loop will end up running serially. If the RecursiveBinUpdate function is the bulk of the work, there may be some gain, but it would be better if you could figure out how to handle the lock generation in advance.
When it comes to locks, there's no difference in the way PLINQ/TPL threads have to wait to gain access. So, in your case, it only makes the loop not parallel in those areas that you're locking and any work outside those locks is still going to execute in parallel (i.e. all the work in RecursiveBinUpdate).
Bottom line, I see nothing substantially wrong with what you're doing here.

When would you ever use nested locking?

I was reading in Albahari's excellent eBook on threading and came across the following scenario he mentions that "a thread can repeatedly lock the same object in a nested (reentrant) fashion"
lock (locker)
    lock (locker)
        lock (locker)
        {
            // Do something...
        }
as well as
static readonly object _locker = new object();

static void Main()
{
    lock (_locker)
    {
        AnotherMethod();
        // We still have the lock - because locks are reentrant.
    }
}

static void AnotherMethod()
{
    lock (_locker) { Console.WriteLine("Another method"); }
}
From the explanation, other threads will block on the first (outermost) lock, and it is released only after the outer lock has been exited.
He states "nested locking is useful when one method calls another within a lock"
Why is this useful? When would you NEED to do this and what problem does it solve?
Let's say you have two public methods, A() and B(), which both need the same lock.
Furthermore, let's say that A() calls B()
Since the client can also call B() directly, you need to lock in both methods.
Therefore, when A() is called, B() will take the lock a second time.
It's not so much that it's useful to do so, as it's useful to be allowed to. Consider how you may often have public methods that call other public methods. If the public method called into locks, and the public method calling into it needs to lock on the wider scope of what it does, then being able to use recursive locks means you can do so.
There are some cases where you might feel like using two lock objects, but you're going to be using them together and hence if you make a mistake, there's a big risk of deadlock. If you can deal with the wider scope being given to the lock, then using the same object for both cases - and recursing in those cases where you'd be using both objects - will remove those particular deadlocks.
However...
This usefulness is debatable.
On the first case, I'll quote from Joe Duffy:
Recursion typically indicates an over-simplification in your synchronization design that often leads to less reliable code. Some designs use lock recursion as a way to avoid splitting functions into those that take locks and those that assume locks are already taken. This can admittedly lead to a reduction in code size and therefore a shorter time-to-write, but results in a more brittle design in the end.
It is always a better idea to factor code into public entry-points that take non-recursive locks, and internal worker functions that assert a lock is held. Recursive lock calls are redundant work that contributes to raw performance overhead. But worse, depending on recursion can make it more difficult to understand the synchronization behavior of your program, in particular at what boundaries invariants are supposed to hold. Usually we’d like to say that the first line after a lock acquisition represents an invariant “safe point” for an object, but as soon as recursion is introduced this statement can no longer be made confidently. This in turn makes it more difficult to ensure correct and reliable behavior when dynamically composed.
(Joe has more to say on the topic elsewhere in his blog, and in his book on concurrent programming).
The second case is balanced by the cases where recursive lock entry just makes different types of deadlock happen, or pushes the rate of contention so high that there might as well be deadlocks. (This guy says he'd prefer it to just hit a deadlock the first time you recursed; I disagree - I'd much prefer it to throw a big exception that brought my app down with a nice stack trace.)
One of the worst things is that it simplifies at the wrong time: when you're writing code, it can be simpler to use lock recursion than to split things out and think more deeply about just what should be locking when. However, when you're debugging code, the fact that leaving a lock does not mean releasing that lock complicates things. What a bad way around: it's when we think we know what we're doing that complicated code is a temptation to be enjoyed in your off-time so you don't indulge while on the clock, and it's when we realise we've messed up that we most want things to be nice and simple.
You really don't want to mix them with condition variables.
Hey, POSIX-threads only has them because of a dare!
At least the lock keyword means we avoid the possibility of not having a matching Monitor.Exit() for every Monitor.Enter(), which makes some of the risks less likely. Up until the time you need to do something outside of that model.
With more recent locking classes, .NET does its bit to help people avoid lock recursion without blocking those who use older coding patterns. ReaderWriterLockSlim has a constructor overload that lets you use recursion, but the default is LockRecursionPolicy.NoRecursion.
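A small sketch of that default in action (illustrative only):

```csharp
using System;
using System.Threading;

class RecursionPolicyDemo
{
    static void Main()
    {
        // Default policy is NoRecursion: re-entering throws LockRecursionException.
        var strict = new ReaderWriterLockSlim();
        strict.EnterReadLock();
        try
        {
            strict.EnterReadLock(); // throws
        }
        catch (LockRecursionException)
        {
            Console.WriteLine("recursion rejected");
        }
        strict.ExitReadLock();

        // Opt in explicitly if you really want recursion.
        var recursive = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
        recursive.EnterReadLock();
        recursive.EnterReadLock(); // fine
        recursive.ExitReadLock();
        recursive.ExitReadLock();
    }
}
```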
Often in dealing with issues of concurrency we have to make a decision between a more fraught technique that could potentially give us better concurrency but which requires much more care to be sure of correctness vs a simpler technique that could potentially give worse concurrency but where it is easier to be sure of the correctness. Using locks recursively gives us a technique where we will hold locks longer and have less good concurrency, and also be less sure of correctness and have harder debugging.
If you have a resource that you want exclusive control over, but many methods act upon this resource, a method might not be able to assume that the resource is locked, so it will lock it within its own body. If it's locked in both the outer method and the inner method, then it produces a situation similar to the example in the book. I cannot see a time where I would want to lock twice in the same code block.

When to use the lock thread in C#?

I have a server which handles multiple incoming socket connections and creates 2 different threads which store the data in XML format.
I was using the lock statement for thread safety in almost every event handler called asynchronously, and in the 2 threads, in different parts of the code. Sadly, using this approach my application slows down significantly.
I tried not using lock at all, and the server is very fast in execution; even the file storage seems to be faster. But the program crashes, for reasons I don't understand, after 30 sec - 1 min of work.
So I thought that the best way is to use fewer locks, or to use them only where strictly necessary. As such, I have 2 questions:
Is the lock needed only when I write to the publicly accessed variables (C# lists), or also when I read from them?
Is the lock needed only in the asynchronous threads created by the socket handler, or in other places too?
Could someone give me some practical guidelines about how to proceed? I'll not post the whole code this time; there's no sense in posting about 2,500 lines of code.
You ever sit in your car or on the bus at a red light when there's no cross traffic? Big waste of time, right? A lock is like a perfect traffic light. It is always green except when there is traffic in the intersection.
Your question is "I spend too much time in traffic waiting at red lights. Should I just run the red light? Or even better, should I remove the lights entirely and just let everyone drive through the intersection at highway speeds without any intersection controls?"
If you're having a performance problem with locks then removing locks is the last thing you should do. You are waiting at that red light precisely because there is cross traffic in the intersection. Locks are extraordinarily fast if they are not contended.
You can't eliminate the light without eliminating the cross traffic first. The best solution is therefore to eliminate the cross traffic. If the lock is never contended then you'll never wait at it. Figure out why the cross traffic is spending so much time in the intersection; don't remove the light and hope there are no collisions. There will be.
If you can't do that, then adding more finely-grained locks sometimes helps. That is, maybe you have every road in town converging on the same intersection. Maybe you can split that up into two intersections, so that code can be moving through two different intersections at the same time.
Note that making the cars faster (getting a faster processor) or making the roads shorter (eliminating code path length) often makes the problem worse in multithreaded scenarios. Just as it does in real life; if the problem is gridlock then buying faster cars and driving them on shorter roads gets them to the traffic jam faster, but not out of it faster.
Is the lock needed only when I write to the publicly accessed variables (C# lists), or also when I read from them?
Yes (even when you read).
Is the lock needed only in the asynchronous threads created by the socket handler, or in other places too?
Yes. Wherever code accesses a section of code which is shared, always lock.
This sounds like you may not be locking individual objects, but locking one thing for all lock situations.
If so, put in smart discrete locks by creating individual unique objects which relate to and lock only certain sections at a time, and don't interfere with other threads in other sections.
Here is an example:
// This class simulates the use of two different thread-safe resources and how to lock them
// for thread safety without blocking other threads that need different resources.
public class SmartLocking
{
    private string StrResource1 { get; set; }
    private string StrResource2 { get; set; }
    private readonly object _Lock1 = new object();
    private readonly object _Lock2 = new object();

    public void DoWorkOn1(string change)
    {
        lock (_Lock1)
        {
            StrResource1 = change;
        }
    }

    public void DoWorkOn2(string change2)
    {
        lock (_Lock2)
        {
            StrResource2 = change2;
        }
    }
}
Always use a lock when you access members (either read or write). If you are iterating over a collection while another thread is removing items from it, things can go wrong quickly.
A suggestion: when you want to iterate a collection, copy all the items to a new collection under the lock and then iterate the copy. I.e.
List<SomeItem> newcollection;
lock (mycollection)
{
    // Copy from mycollection to newcollection
    newcollection = new List<SomeItem>(mycollection);
}
foreach (var item in newcollection)
{
    // Do stuff
}
Likewise, only take the lock at the moment you are actually writing to the list.
The reason that you need to lock while reading is: let's say you are making a change to one property, and it gets read twice while the writing thread is inside its lock, once right before the change and once after. Then we will have inconsistent results.
I hope that helps,
Basically this can be answered pretty simple:
You need to lock all the things that are accessed by different threads. It actually doesn't really matter whether it's about reading or writing. If you are reading and another thread is overwriting the data at the same time, the data read may become invalid and you could end up performing invalid operations.
