lock variable in C#

I know when to use a lock block in C#, but what is not clear to me is the lock variable in the parentheses of the lock statement. Consider the following Singleton design pattern from DoFactory:
class LoadBalancer
{
    private static LoadBalancer _instance;
    private List<string> _servers = new List<string>();
    private static object syncLock = new object();

    public static LoadBalancer GetLoadBalancer()
    {
        // Support multithreaded applications through
        // 'Double checked locking' pattern which (once
        // the instance exists) avoids locking each
        // time the method is invoked
        if (_instance == null)
        {
            lock (syncLock)
            {
                if (_instance == null)
                {
                    _instance = new LoadBalancer();
                }
            }
        }
        return _instance;
    }
}
In the above code, I cannot understand why we do not use _instance as the lock variable instead of syncLock?

In the above code, I cannot understand why we do not use _instance as the lock variable instead of syncLock?
Well you'd be trying to lock on a null reference, for one thing...
Additionally, you'd be locking via a mutable field, which is a bad idea. (If the field changes value, then another thread can get into the same code. Not great.) It's generally a good idea to lock via a readonly field.
Also, you'd be locking on a reference which is also returned to callers, who could hold the lock for arbitrarily long periods. I prefer to lock on a reference which is never exposed outside the class - like syncLock.
Finally, this implementation is broken anyway due to the CLI memory model. (It may or may not be broken in the MS implementation, which is a bit stronger. I wouldn't like to bet either way.) The _instance variable should be marked as volatile if you really want to use this approach... but I'd personally avoid double-checked locking anyway. See my article on the singleton pattern for more details.
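For reference, here is a minimal sketch of the kind of lock-free alternative that answer points to, relying on the CLR's static initialization guarantees instead of double-checked locking. The class name is reused from the question purely for illustration; this is not the DoFactory code.

using System.Collections.Generic;

public sealed class LoadBalancer
{
    // The CLR runs this initializer exactly once, before first use,
    // so no explicit locking is needed to create the instance.
    private static readonly LoadBalancer _instance = new LoadBalancer();

    // An explicit static constructor keeps the type out of beforefieldinit,
    // so initialization is deferred until the type is first used.
    static LoadBalancer() { }

    private readonly List<string> _servers = new List<string>();

    private LoadBalancer() { }

    public static LoadBalancer GetLoadBalancer()
    {
        return _instance;
    }
}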

I haven't used the lock statement much myself.
But my guess is that it's because you can't lock on the instance while it is null.
So you have to make the static object syncLock, which is used only for locking the part where you actually create the _instance variable.

This is a general style preference of mine - wherever possible, only lock on objects specifically created for the purpose of locking.
http://www.yoda.arachsys.com/csharp/singleton.html

Related

How does a singleton property with a lock ensure thread safety?

I rarely use singletons, but in this case one is appropriate. While trying to investigate the best implementation thereof, I came across this bit of code, which has left me believing I improperly understand how brackets encapsulate a "scope."
public sealed class Singleton
{
    private static Singleton instance = null;
    private static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
I'm confused what happens when I attempt to access "Instance." Say I'm working on a logging singleton (my useful application for a singleton) and it has a method "WriteLine(string line)"
When I call:
Singleton.Instance.WriteLine("Hello!");
Does it maintain the lock during the execution of the entire WriteLine method?
What if I assign the instance to an external variable like:
Singleton Console = Singleton.Instance;
Now there's a constant reference to the singleton outside of the singleton. Is Console.WriteLine("Hello!") also completely thread safe like Singleton.Instance.WriteLine("Hello!")?
Anyway, I'm just confused how this makes the singleton thread safe and whether or not it's only thread safe when the property is explicitly accessed. I thought Singleton.Instance.WriteLine("...") would pull out the Instance first, thereby leaving the scope of the lock, and then execute WriteLine on the returned instance, therefore performing the write after the lock has been released.
Any help on clearing up my misunderstanding of how this functions would be appreciated.
Does Singleton.Instance.WriteLine("Hello!"); maintain the lock during the execution of the entire method of WriteLine?
No, the lock guards only the creation of your singleton. WriteLine executes unlocked (unless, of course, it obtains its own lock internally).
Is Console.WriteLine("Hello!") also completely thread safe like Singleton.Instance.WriteLine("Hello!")?
It is equally as safe or unsafe as Singleton.Instance, because the lock is not maintained outside of Instance's getter.
Anyway, I'm just confused how this makes the singleton thread safe
The lock makes the process of obtaining the instance of your singleton thread-safe. Making the methods of your singleton thread-safe is a separate concern that does not depend on whether your object is a singleton or not. There is no simple, turn-key, one-size-fits-all solution for making a thread-unsafe object behave in a thread-safe way. You address it one method at a time.
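To illustrate that last point, a hypothetical logging singleton like the one described in the question might make its WriteLine thread-safe with its own, separate lock. All names below are made up for illustration; this is only a sketch.

using System.Collections.Generic;

public sealed class Logger
{
    private static readonly Logger instance = new Logger();
    public static Logger Instance { get { return instance; } }

    private readonly object writeLock = new object();       // guards lines only
    private readonly List<string> lines = new List<string>();

    private Logger() { }

    // Thread safety of WriteLine has nothing to do with how the singleton
    // was obtained; the method needs its own synchronization.
    public void WriteLine(string line)
    {
        lock (writeLock)
        {
            lines.Add(line);
        }
    }
}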
No, the lock ends upon the return; anything you do with the Instance is "outside" the lock.
The advantage of the lock at that point is only one:
it guarantees that only one instance of Singleton is ever created.
Note that in general, it is better to use the Lazy<T> class. To obtain the same result you would have to use it like:
public static readonly Lazy<Singleton> Instance = new Lazy<Singleton>(() => new Singleton());
(Lazy<T> can work in three modes; the default one, ExecutionAndPublication, is equivalent to that code. The factory delegate is needed here because the Singleton constructor is private, and callers access the instance via Instance.Value.)
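If you would rather keep call sites written as Singleton.Instance instead of Instance.Value, one option (just a sketch, with the thread-safety mode spelled out explicitly) is to hide the Lazy<T> behind a property:

using System;
using System.Threading;

public sealed class Singleton
{
    private Singleton() { }

    // The factory delegate is required because the constructor is private;
    // ExecutionAndPublication (the default mode) guarantees that only one
    // instance is ever published, even under contention.
    private static readonly Lazy<Singleton> lazy =
        new Lazy<Singleton>(() => new Singleton(),
                            LazyThreadSafetyMode.ExecutionAndPublication);

    // Wrapping the Lazy<T> in a property keeps call sites as Singleton.Instance.
    public static Singleton Instance
    {
        get { return lazy.Value; }
    }
}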
Any help on clearing up my misunderstanding of how this functions would be appreciated.
In Head First Design Patterns, there's a great example of a thread-safe singleton that uses "code magnets" where you can consider all the possible ways two threads can execute the same code. It's done with three columns, one for each of the two threads, and a third column for the value of the supposed singleton that is returned. It's an exercise where you position the code fragments vertically to show the sequence of operations between the two threads. I'll try to reproduce it here with limited formatting in SO and with your code example.
The code fragments (without the lock) would be:
get{
if (instance == null){
instance =
new Singleton(); }
return instance; }
You can find one possible execution that results in two instances of the class being returned, because of the way the threads execute:
Thread One                  Thread Two                  Value of instance
get{                                                    null
                            get{                        null
if (instance == null){                                  null
                            if (instance == null){      null
instance =
  new Singleton(); }                                    Object_1
return instance; }                                      Object_1
                            instance =
                              new Singleton(); }        Object_2
                            return instance; }          Object_2
With the lock just after get {, Thread Two would not be able to continue (as above), until Thread One has executed the return instance; and releases the lock:
Thread One                          Thread Two                  Value of instance
get{ [takes lock]                                               null
                                    get{ [blocks on lock]       null
if (instance == null){                                          null
instance =
  new Singleton(); }                                            Object_1
return instance; } [releases lock]                              Object_1
                                    [continues]
                                    if (instance == null) {     Object_1
                                    return instance; }          Object_1
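If you want to observe the first interleaving outside of a thought experiment, a small, purely illustrative harness along these lines (all names invented here, and the Thread.Sleep only widens the race window for the demo) can occasionally catch the unlocked getter constructing more than one instance; counting constructor calls is the easiest way to see it:

using System;
using System.Threading;
using System.Threading.Tasks;

public sealed class RacySingleton
{
    public static int ConstructorCalls;          // instrumentation only
    private static RacySingleton instance;       // note: no lock

    private RacySingleton() { Interlocked.Increment(ref ConstructorCalls); }

    public static RacySingleton Instance
    {
        get
        {
            if (instance == null)
            {
                Thread.Sleep(1);                 // widen the race window for the demo
                instance = new RacySingleton();
            }
            return instance;
        }
    }
}

class Program
{
    static void Main()
    {
        Parallel.For(0, 8, _ => { var s = RacySingleton.Instance; });
        Console.WriteLine("Constructor ran " + RacySingleton.ConstructorCalls + " time(s).");
    }
}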

Locking a private static object

I am wondering which of the following pieces of code is best:
private static volatile OrderedDictionary _instance;
private static readonly Object SyncLock = new Object();

private static OrderedDictionary Instance
{
    get { return _instance ?? (_instance = new OrderedDictionary()); }
}

public static Mea Add(Double pre, Double rec)
{
    lock (SyncLock)
    {
        ...
    }
}
Or is it OK, and perhaps even better, to just use the following?
private static volatile OrderedDictionary _instance;

private static OrderedDictionary Instance
{
    get { return _instance ?? (_instance = new OrderedDictionary()); }
}

public static Mea Add(Double pre, Double rec)
{
    lock (Instance)
    {
        ...
    }
}
Based on Mike Strobel's answer, I have made the following changes:
public static class Meas
{
    private static readonly OrderedDictionary Instance = new OrderedDictionary();
    private static readonly Object SyncLock = new Object();

    public static Mea Add(Double pre, Double rec)
    {
        lock (SyncLock)
        {
            Instance.Add(pre, rec);
            ...
        }
    }
}
Mike Strobel's advice is good advice. To sum up:
Lock only objects that are specifically intended to be locks.
Those lock objects should be private readonly fields that are initialized in their declarations.
Do not try to roll your own threadsafe lazy initialization. Use the Lazy<T> type; it was designed by experts who know what they are doing.
Lock all accesses to the protected variable.
Violate these sensible guidelines only when both of the following two conditions are true: (1) you have an empirically demonstrated, customer-impacting performance problem and solid proof that going with a more complex low-lock thread safety system is the only reasonable solution to the problem, and (2) you are a leading expert on the implications of processor optimizations on low-lock code. For example, if you are Grant Morrison or Joe Duffy.
The two pieces of code are not equivalent. The former ensures that all threads will always use the same lock object. The latter locks a lazily-initialized object, and there is absolutely nothing preventing multiple instantiations of your _instance dictionary, resulting in contents being lost.
What is the purpose of the lock? Does it serve a purpose other than to guarantee single initialization of the dictionary? Ignoring that it fails to accomplish this in the second example: if that is its sole intended purpose, then you may consider simply using the Lazy<T> class or a double-checked locking pattern.
But since this is a static member (and does not appear to capture outer generic parameters), it will presumably only be instantiated once per AppDomain. In that case, just mark it as readonly and initialize it in the declaration. You're probably not saving much this way.
Since you are concerned with best practices: you should never use the lock construct on a mutable value; this goes for both static and instance fields, as well as locals. It is especially bad practice to lock on a volatile field, as the presence of that keyword indicates that you expect the underlying value to change. If you're going to lock on a field, it should almost always be a readonly field. It's also considered bad practice to lock on a method result; that applies to properties too, as properties are effectively a pair of specially-named accessor methods.
If you do not expose the Instance to other classes, the second approach is okay (but not equivalent). It's best practice to keep the lock object private to the class that is using it as a lock object; you only run into issues if other classes can also take this object as a lock object.
(For completeness, and regarding @Scott Chamberlain's comment:)
This assumes that the class of Instance does not use lock (this), which would in any case be bad practice.
Nevertheless, the property could cause problems: the null-coalescing operator compiles to a null check plus an assignment, so you could run into race conditions. You might want to read more about this. Also consider removing the lazy initialization altogether, if possible.
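If the lazy initialization really has to stay, one race-free way to write that getter is System.Threading.LazyInitializer (my suggestion, not something from the answers above); it performs the compare-and-swap for you. A minimal sketch, reusing the Meas class name from the question:

using System.Collections.Specialized;
using System.Threading;

public static class Meas
{
    private static OrderedDictionary _instance;   // no volatile needed here

    private static OrderedDictionary Instance
    {
        // EnsureInitialized publishes exactly one dictionary, even if
        // several threads race through this getter at the same time.
        get { return LazyInitializer.EnsureInitialized(ref _instance, () => new OrderedDictionary()); }
    }
}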

Have I been doing locks wrong this whole time?

I was reading this article about thread safety in Singletons, and it got me thinking that I don't understand the lock method.
In the second version, the author has this:
public sealed class Singleton
{
    private static Singleton instance = null;
    private static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
Whereas I would've done something more like this:
public sealed class Singleton
{
    private static Singleton instance = null;

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (instance)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
Why would you use a padlock object, rather than locking the actual object you want to lock?
What would you expect to happen the first time you accessed the Instance property, before you've got an object to lock on?
(Hint: lock(null) goes bang...)
As a separate measure, I almost always avoid locking on "the actual object" - because typically there may well be other code which that reference is exposed to, and I don't necessarily know what that's going to lock on. Even if your version did work, what would happen if some external code wrote:
// Make sure only one thread is ever in this code...
lock (Singleton.Instance)
{
    // Do stuff
}
Now no-one else can even get the instance while that code is executing, because they'll end up blocked in the getter. The lock in the getter isn't meant to be guarding against that - it's only meant to be guarding against multiple access within the getter.
The more tightly you can keep control of your locks, the easier it is to reason about them and to avoid deadlocks.
I very occasionally take a lock on a "normal" object if:
I'm not exposing that reference outside that class
I have confidence that the type itself (which will always have a reference to this, of course) won't lock on itself.
(All of this is a reason to avoid locking on this, too, of course...)
Fundamentally, I believe the idea of allowing you to lock on any object was a bad idea in Java, and it was a bad move to copy it in .NET :(
Locking on this or any other non-private object is dangerous because it can lead to deadlocks if someone else also tries to use that object for synchronization.
It's not terribly likely, which is why people can be in the habit of doing it for years without ever getting bitten. But it is still possible, and the cost of a private instance of object probably isn't great enough to justify running the risk.
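As a contrived sketch of how that risk plays out (every name here is invented): if a class locks on this internally and a caller also uses the same instance for its own synchronization while waiting on work, the two pieces of code deadlock each other.

using System;
using System.Threading;

public class Worker
{
    // Internal synchronization on 'this' - exactly the pattern being warned about.
    public void DoWork()
    {
        lock (this)
        {
            Console.WriteLine("working");
        }
    }
}

class Program
{
    static void Main()
    {
        var worker = new Worker();

        var t = new Thread(() =>
        {
            Thread.Sleep(100);   // let Main take the lock first
            worker.DoWork();     // blocks: Main already holds 'worker' as a lock
        });
        t.Start();

        lock (worker)            // caller uses the instance for its own synchronization...
        {
            t.Join();            // ...and waits for a thread that needs that lock: deadlock
        }
    }
}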
For locking you can use any kind of object; the type really doesn't matter. It is just used as a flag, and only one thread can hold it at a time.
It is of course possible to use this as the lock parameter. But I guess the main reason it's not recommended is that another class somewhere could also use an instance of your class as a lock object, and that can cause strange problems.

locking the object inside a property, c#

public ArrayList InputBuffer
{
    get { lock (this.in_buffer) { return this.in_buffer; } }
}
Is this.in_buffer locked during a call to InputBuffer.Clear?
Or does the property simply lock the in_buffer object while it's getting the reference to it, after which the lock exits and that reference is used to call Clear?
No, the property locks the reference while it's getting that reference. Pretty pointless, to be honest... this is more common:
private readonly object mutex = new object();
private Foo foo = ...;

public Foo Foo
{
    get
    {
        lock (mutex)
        {
            return foo;
        }
    }
}
That lock would only cover the property access itself, and wouldn't provide any protection for operations performed with the Foo. However, it's not the same as not having the lock at all, because so long as the variable is only written while holding the same lock, it ensures that any time you read the Foo property, you're accessing the most recent value of the property... without the lock, there's no memory barrier and you could get a "stale" result.
This is pretty weak, but worth knowing about.
Personally I try to make very few types thread-safe, and those tend to have more appropriate operations... but if you wanted to write code which did modify and read properties from multiple threads, this is one way of doing so. Using volatile can help too, but the semantics of it are hideously subtle.
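For completeness, here is a sketch of the full pattern being described, with the write also going through the same lock; the wrapper class and property names are mine, not from the answer, and Foo stands in for whatever mutable type you are holding.

public class Foo { /* some mutable type of your own */ }

public class Holder
{
    private readonly object mutex = new object();
    private Foo foo;

    // Reads and writes share one lock, so a reader always observes the most
    // recently written reference; Foo's own members still need their own
    // synchronization.
    public Foo Current
    {
        get { lock (mutex) { return foo; } }
        set { lock (mutex) { foo = value; } }
    }
}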
The object is locked inside the braces of the lock call, and then it is unlocked.
In this case the only code in the lock call is return this.in_buffer;.
So in this case, the in_buffer is not locked during a call to InputBuffer.Clear.
One solution to your problem, using extension methods, is as follows.
// Extension methods must live in a static class; the lock object lives here too.
static class EMClass
{
    private static readonly object _bufLock = new object();

    public static void LockedClear(this ArrayList a)
    {
        lock (_bufLock)
        {
            a.Clear();
        }
    }
}
Now when you do:
a.LockedClear();
The Clear call will be done in a lock.
You must ensure that the buffer is only ever accessed inside locks on _bufLock.
In addition to what others have said about the scope of the lock, remember that you aren't locking the object, you are only locking based on the object instance named.
Common practice is to have a separate lock mutex as Jon Skeet exemplifies.
If you must guarantee synchronized execution while the collection is being cleared, expose a method that clears the collection, have clients call that, and don't expose your underlying implementation details. (Which is good practice anyway - look up encapsulation.)
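A sketch of what that encapsulation might look like for the buffer in the question; the class and method names are made up for illustration.

using System.Collections;

public class Connection
{
    private readonly object bufLock = new object();
    private readonly ArrayList inBuffer = new ArrayList();

    // Callers never see the ArrayList itself; every operation on it
    // happens under the same private lock.
    public void ClearInput()
    {
        lock (bufLock)
        {
            inBuffer.Clear();
        }
    }

    public void AddInput(object item)
    {
        lock (bufLock)
        {
            inBuffer.Add(item);
        }
    }
}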

Using the same lock for multiple methods

I haven't had any issues using the same lock for multiple methods so far, but I'm wondering if the following code might actually have issues (performance?) that I'm not aware of:
private static readonly object lockObj = new object();

public int GetValue1(int index)
{
    lock (lockObj)
    {
        // Collection 1 read and/or write
    }
}

public int GetValue2(int index)
{
    lock (lockObj)
    {
        // Collection 2 read and/or write
    }
}

public int GetValue3(int index)
{
    lock (lockObj)
    {
        // Collection 3 read and/or write
    }
}
The 3 methods and the collections are not related in any way.
In addition, will it be a problem if this lockObj is also used by a singleton (in the Instance property)?
Edit: To clarify my question on using the same lock object in a Singleton class:
private static readonly object SyncObject = new object();

public static MySingleton Instance
{
    get
    {
        lock (SyncObject)
        {
            if (_instance == null)
            {
                _instance = new MySingleton();
            }
        }
        return _instance;
    }
}

public int MyMethod()
{
    lock (SyncObject)
    {
        // Read or write
    }
}
Will this cause issues?
If the methods are unrelated as you state, then use a different lock for each one; otherwise it's inefficient (since there's no reason for different methods to lock on the same object, as they could safely execute concurrently).
Also, it seems that these are instance methods locking on a static object -- was that intended? I have a feeling that's a bug; instance methods should (usually) only lock on instance fields.
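A sketch of what that suggestion amounts to, with one instance-level lock per unrelated collection; the field names are purely illustrative.

using System.Collections.Generic;

public class ValueStore
{
    private readonly object lock1 = new object();
    private readonly object lock2 = new object();
    private readonly List<int> collection1 = new List<int>();
    private readonly List<int> collection2 = new List<int>();

    // Each collection has its own guard, so a reader of collection 1
    // never waits on a writer of collection 2.
    public int GetValue1(int index)
    {
        lock (lock1) { return collection1[index]; }
    }

    public int GetValue2(int index)
    {
        lock (lock2) { return collection2[index]; }
    }
}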
Regarding the Singleton design pattern:
While locking can be safe for those, a better practice is delayed initialization of the field, like this:
private static object sharedInstance;

public static object SharedInstance
{
    get
    {
        if (sharedInstance == null)
            Interlocked.CompareExchange(ref sharedInstance, new object(), null);
        return sharedInstance;
    }
}
This way it's a little bit faster (both because interlocked methods are faster, and because the initialization is delayed), but still thread-safe.
By using the same object to lock on in all of those methods, you are serializing all access to code in all of the threads.
That is... code running GetValue1() will block other code in a different thread from running GetValue2() until it's done. If you add even more code that locks on the same object instance, you'll end up with effectively a single-threaded application at some point.
Shared lock locks other non-related calls
If you use the same lock, then locking in one method unnecessarily blocks the others as well. If they're not related at all, this is a problem, since they have to wait for each other when they shouldn't.
Bottleneck
This may pose a bottleneck when these methods are frequently called. With separate locks they would run independently, but sharing the same lock means they must wait for the lock to be released more often than necessary (up to three times more often, since three unrelated methods now contend for one lock).
To create a thread-safe singleton, use this technique.
You don't need a lock.
In general, each lock should be used as little as possible.
The more methods lock on the same thing, the more likely you are to end up waiting for it when you don't really need to.
Good question. There are pros and cons of making locks more fine grained vs more coarse grained, with one extreme being a separate lock for each piece of data and the other extreme being one lock for the entire program. As other posts point out, the disadvantage of reusing the same locks is in general you may get less concurrency (though it depends on the case, you may not get less concurrency).
However, the disadvantage of using more locks is in general you make deadlock more likely. There are more ways to get deadlocks the more locks you have involved. For example, acquiring two locks at the same time in separate threads but in the opposite order is a potential deadlock which wouldn't happen if only one lock were involved. Of course sometimes you may fix a deadlock by breaking one lock into two, but usually fewer locks means fewer deadlocks. There's also added code complexity of having more locks.
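A minimal sketch of that opposite-order case (the names are invented for the example); run it and the two threads block each other forever:

using System.Threading;

class Demo
{
    private static readonly object lockA = new object();
    private static readonly object lockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(100);   // give t2 time to take lockB
                lock (lockB) { }     // waits for t2, which is waiting for lockA
            }
        });
        var t2 = new Thread(() =>
        {
            lock (lockB)
            {
                Thread.Sleep(100);   // give t1 time to take lockA
                lock (lockA) { }     // waits for t1, which is waiting for lockB
            }
        });
        t1.Start();
        t2.Start();
        t1.Join();                   // never returns: classic lock-ordering deadlock
        t2.Join();
    }
}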
In general these two factors need to be balanced. It's common to use one lock per class for convenience if it doesn't cause any concurrency issues. In fact, doing so is a design pattern called a monitor.
I would say the best practice is to favor fewer locks for code simplicity's sake and make additional locks if there's a good reason (such as concurrency, or a case where it's more simple or fixes a deadlock).
