How does a singleton property with a lock ensure thread safety? - c#

I rarely use singletons, but in this case one is appropriate. While investigating the best implementation, I came across this bit of code, which has left me suspecting I misunderstand how braces delimit the scope of a lock.
public sealed class Singleton
{
    private static Singleton instance = null;
    private static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
I'm confused about what happens when I access "Instance." Say I'm working on a logging singleton (my one genuinely useful application for a singleton) and it has a method "WriteLine(string line)".
When I call:
Singleton.Instance.WriteLine("Hello!");
Does it maintain the lock during the execution of the entire "WriteLine" method?
What if I assign the instance to an external variable like:
Singleton Console = Singleton.Instance;
Now there's a constant reference to the singleton outside of the singleton. Is Console.WriteLine("Hello!") also completely thread safe like Singleton.Instance.WriteLine("Hello!")?
Anyway, I'm just confused about how this makes the singleton thread safe, and whether it's only thread safe when the property is explicitly accessed. I thought Singleton.Instance.WriteLine("...") would retrieve Instance first, thereby leaving the scope of the lock, and then execute WriteLine on the returned instance, performing the write after the lock has been released.
Any help on clearing up my misunderstanding of how this functions would be appreciated.

Does Singleton.Instance.WriteLine("Hello!"); maintain the lock during the execution of the entire method of WriteLine?
No, the lock guards only the creation of your singleton. WriteLine executes unlocked (unless, of course, it obtains its own lock internally).
Is Console.WriteLine("Hello!") also completely thread safe like Singleton.Instance.WriteLine("Hello!")?
It is exactly as safe or unsafe as Singleton.Instance.WriteLine("Hello!"), because the lock is not held outside of Instance's getter.
Anyway, I'm just confused how this makes the singleton thread safe
The lock makes the process of obtaining the instance of your singleton thread-safe. Making the methods of your singleton thread-safe is a separate concern that does not depend on whether your object is a singleton or not. There is no simple turn-key, one-size-fits-all solution for making a thread-unsafe object behave in a thread-safe way; you address it one method at a time.
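For example, here is a minimal sketch of that per-method approach, using a hypothetical logging singleton (LogSingleton and writeLock are illustrative names, not code from the question):
using System;

public sealed class LogSingleton
{
    private static readonly LogSingleton instance = new LogSingleton();
    private readonly object writeLock = new object();

    private LogSingleton() { }

    public static LogSingleton Instance
    {
        get { return instance; }
    }

    // Each call serializes on writeLock; this is what makes WriteLine
    // itself thread-safe, independently of how Instance was obtained.
    public void WriteLine(string line)
    {
        lock (writeLock)
        {
            Console.WriteLine(line);
        }
    }
}
Calling LogSingleton.Instance.WriteLine("Hello!") is then safe because WriteLine takes its own lock, not because of anything the Instance getter does.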

No, the lock ends when the getter returns; anything you do with the Instance happens "outside" the lock.
The lock at that point serves only one purpose:
It guarantees that only one instance of Singleton can ever be created.
Note that in general it is better to use the Lazy<T> class. To obtain the same result you would use it like this (passing a factory delegate, since the constructor is not public):
public static readonly Lazy<Singleton> Instance = new Lazy<Singleton>(() => new Singleton());
(Lazy<T> can work in three thread-safety modes; the default one, ExecutionAndPublication, is equivalent to that code.)
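Put together, a Lazy<T>-based version of the class in the question might look like the sketch below; exposing a plain property over the Lazy<T> keeps the call site unchanged:
using System;

public sealed class Singleton
{
    // ExecutionAndPublication (the default) guarantees the factory
    // delegate runs at most once, even under contention.
    private static readonly Lazy<Singleton> lazy =
        new Lazy<Singleton>(() => new Singleton());

    private Singleton() { }

    // Callers still write Singleton.Instance; Lazy<T> handles the
    // thread-safe, one-time construction internally.
    public static Singleton Instance
    {
        get { return lazy.Value; }
    }
}
Client code reads Singleton.Instance exactly as before, and the first access triggers the one-time construction.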

Any help on clearing up my misunderstanding of how this functions would be appreciated.
In Head First Design Patterns, there's a great example of a thread-safe singleton that uses "code magnets" where you can consider all the possible ways two threads can execute the same code. It's done with three columns, one for each of the two threads, and a third column for the value of the supposed singleton that is returned. It's an exercise where you position the code fragments vertically to show the sequence of operations between the two threads. I'll try to reproduce it here with limited formatting in SO and with your code example.
The code fragments (without the lock) would be:
get{
if (instance == null){
instance =
new Singleton(); }
return instance; }
You can find one possible execution that results in two instances of the class being returned, because of the way the threads execute:
Thread One                  Thread Two                  Value of instance
get{                                                    null
                            get{                        null
if (instance == null){                                  null
                            if (instance == null){      null
instance =
  new Singleton(); }                                    Object_1
return instance; }                                      Object_1
                            instance =
                              new Singleton(); }        Object_2
                            return instance; }          Object_2
With the lock just after get {, Thread Two would not be able to continue (as above), until Thread One has executed the return instance; and releases the lock:
Thread One                          Thread Two                  Value of instance
get{ [takes lock]                                               null
                                    get{ [blocks on lock]       null
if (instance == null){                                          null
instance =
  new Singleton(); }                                            Object_1
return instance; } [releases lock]                              Object_1
                                    [continues]
                                    if (instance == null) {     Object_1
                                    return instance; }          Object_1

Related

Multi threading in case of singleton class C#

I want to know how a singleton works in the case of multi-threading.
Suppose two threads enter the instantiation code shown below. The first thread enters the instantiation code, locks that part, and proceeds with its operation; during that time the other thread waits.
Once the first thread completes its operation, the second thread will enter the instantiation code. Now I want to know: who takes responsibility for releasing the lock, since the first thread has completed its operation? And will the second thread create a new instance, or will it share the first thread's instance?
Code :
public sealed class Singleton
{
    private static Singleton instance = null;

    // adding locking object
    private static readonly object syncRoot = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}
once the first thread completes its operation, the second thread will enter the instantiation code; now I want to know who takes responsibility for releasing the lock, since the first thread has completed its operation
Each thread acquires and releases the lock individually. The first thread acquires the lock; while it has the lock, the second thread cannot acquire it.
Once the first thread has released the lock (which happens when execution leaves the block of code controlled by the lock statement), the second thread can then acquire the lock and execute the code. It will then release the lock again, when it's done with it.
…and will the second thread create a new instance, or will it share the first thread's instance?
In this particular implementation, the singleton is initialized only once, even if the second thread initially observes the backing field to be null. By using lock, the code ensures that only one thread will ever actually create the singleton instance.
There are variations in which more than one initialization could occur, but IMHO these are inferior ways to do it.
For that matter, it's my opinion that in the context of .NET, even the double-checked lock above is needlessly complicated. In most cases it suffices to use a simple field initializer; in the cases where that's not efficient enough, one can use the Lazy<T> class.
For a much more in-depth discussion, see the related Stack Overflow question Thread Safe C# Singleton Pattern and references provided there.
Another good way to implement this is with a static constructor; the runtime guarantees the type initializer runs only once.
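A sketch of that static-constructor approach (which also illustrates the simple field-initializer suggestion from the previous answer):
public sealed class Singleton
{
    // The CLR guarantees the type initializer runs exactly once,
    // so no explicit locking is needed here.
    private static readonly Singleton instance = new Singleton();

    // An explicit static constructor tells the compiler not to mark
    // the type beforefieldinit, so initialization is deferred until
    // the first use of the type.
    static Singleton() { }

    private Singleton() { }

    public static Singleton Instance
    {
        get { return instance; }
    }
}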
What you've used here is called double-check locking, a fairly common serialization pattern for multi-threaded code. It works.
A lock is released automatically once you fall out of the lock scope.
Assuming there is contention, one thread would test->acquire->test->initialize->release, and the next would simply test->acquire->test->release: no double-initialization.

lock variable in C#

I know when to use a lock block in C#, but what is not clear to me is the lock variable in the parentheses of the lock statement. Consider the following Singleton design pattern from DoFactory:
class LoadBalancer
{
    private static LoadBalancer _instance;
    private List<string> _servers = new List<string>();
    private static object syncLock = new object();

    public static LoadBalancer GetLoadBalancer()
    {
        // Support multithreaded applications through
        // 'Double checked locking' pattern which (once
        // the instance exists) avoids locking each
        // time the method is invoked
        if (_instance == null)
        {
            lock (syncLock)
            {
                if (_instance == null)
                {
                    _instance = new LoadBalancer();
                }
            }
        }
        return _instance;
    }
}
In the above code, I can't understand why we don't use _instance as the lock variable instead of syncLock?
In the above code, I can't understand why we don't use _instance as the lock variable instead of syncLock?
Well you'd be trying to lock on a null reference, for one thing...
Additionally, you'd be locking via mutable field, which is a bad idea. (If the field changes value, then another thread can get into the same code. Not great.) It's generally a good idea to lock via a readonly field.
Also, you'd be locking on a reference which is also returned to callers, who could hold the lock for arbitrarily long periods. I prefer to lock on a reference which is never exposed outside the class - like syncLock.
Finally, this implementation is broken anyway due to the CLI memory model. (It may or may not be broken in the MS implementation, which is a bit stronger. I wouldn't like to bet either way.) The _instance variable should be marked as volatile if you really want to use this approach... but I'd personally avoid double-checked locking anyway. See my article on the singleton pattern for more details.
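For reference, a minimal sketch of what marking the field volatile would look like here (other members of LoadBalancer omitted; this is still double-checked locking, which the answer above advises against):
class LoadBalancer
{
    // volatile ensures the write of the fully constructed instance is
    // visible to other threads before they can observe a non-null field.
    private static volatile LoadBalancer _instance;
    private static readonly object syncLock = new object();

    private LoadBalancer() { }

    public static LoadBalancer GetLoadBalancer()
    {
        if (_instance == null)
        {
            lock (syncLock)
            {
                if (_instance == null)
                {
                    _instance = new LoadBalancer();
                }
            }
        }
        return _instance;
    }
}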
I haven't used the lock statement much myself.
But my guess is that it's because you can't lock on the instance while it is still null.
So you have to create the static object 'syncLock', which is used only for locking the part where you actually create the 'instance' variable.
This is a general style preference of mine - wherever possible, only lock on objects specifically created for the purpose of locking.
http://www.yoda.arachsys.com/csharp/singleton.html

Have I been doing locks wrong this whole time?

I was reading this article about thread safety in singletons, and it got me thinking that I don't understand the lock statement.
In the second version, the author has this:
public sealed class Singleton
{
    private static Singleton instance = null;
    private static readonly object padlock = new object();

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (padlock)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
Whereas I would've done something more like this:
public sealed class Singleton
{
    private static Singleton instance = null;

    Singleton()
    {
    }

    public static Singleton Instance
    {
        get
        {
            lock (instance)
            {
                if (instance == null)
                {
                    instance = new Singleton();
                }
                return instance;
            }
        }
    }
}
Why would you use a padlock object, rather than locking the actual object you want to lock?
What would you expect to happen the first time you accessed the Instance property, before you've got an object to lock on?
(Hint: lock(null) goes bang...)
As a separate measure, I almost always avoid locking on "the actual object" - because typically there may well be other code which that reference is exposed to, and I don't necessarily know what that's going to lock on. Even if your version did work, what would happen if some external code wrote:
// Make sure only one thread is ever in this code...
lock (Singleton.Instance)
{
    // Do stuff
}
Now no-one else can even get the instance while that code is executing, because they'll end up blocked in the getter. The lock in the getter isn't meant to be guarding against that - it's only meant to be guarding against multiple access within the getter.
The more tightly you can keep control of your locks, the easier it is to reason about them and to avoid deadlocks.
I very occasionally take a lock on a "normal" object if:
I'm not exposing that reference outside that class
I have confidence that the type itself (which will always have a reference to this, of course) won't lock on itself.
(All of this is a reason to avoid locking on this, too, of course...)
Fundamentally, I believe the idea of allowing you to lock on any object was a bad idea in Java, and it was a bad move to copy it in .NET :(
Locking on this or any other non-private object is dangerous because it can lead to deadlocks if someone else also tries to use that object for synchronization.
It's not terribly likely, which is why people can be in the habit of doing it for years without ever getting bitten. But it is still possible, and the cost of a dedicated private object instance is so small that it isn't worth running the risk.
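A minimal sketch of the milder form of that hazard (unwanted blocking rather than a true deadlock; Counter and the timings are invented for illustration): a type that locks on this can be stalled by any caller that locks on the same instance.
using System;
using System.Threading;

class Counter
{
    private int count;

    public void Increment()
    {
        lock (this)                // locks on the publicly visible instance
        {
            count++;
        }
    }
}

class Program
{
    static void Main()
    {
        var counter = new Counter();

        var worker = new Thread(() =>
        {
            counter.Increment();   // blocks until Main releases the lock below
            Console.WriteLine("Increment finally ran.");
        });

        lock (counter)             // external code uses the same object as a lock
        {
            worker.Start();
            Thread.Sleep(2000);    // holds the lock "for a while"
            Console.WriteLine("External code still holds the lock.");
        }

        worker.Join();
    }
}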
For locking you can use any kind of object; the type really doesn't matter. It's just used as a token, and only one thread can hold it at a time.
It is of course possible to use this as the lock object. But I guess the main reason it's not recommended is that some other class somewhere could also use an instance of your class as a lock object, and that can cause strange problems.

locking the object inside a property, c#

public ArrayList InputBuffer
{
    get { lock (this.in_buffer) { return this.in_buffer; } }
}
is this.in_buffer locked during a call to InputBuffer.Clear?
or does the property simply lock the in_buffer object while it's getting the reference to it; the lock exits, and then that reference is used to Clear?
No, the property locks the reference while it's getting that reference. Pretty pointless, to be honest... this is more common:
private readonly object mutex = new object();
private Foo foo = ...;

public Foo Foo
{
    get
    {
        lock (mutex)
        {
            return foo;
        }
    }
}
That lock would only cover the property access itself, and wouldn't provide any protection for operations performed with the Foo. However, it's not the same as not having the lock at all, because so long as the variable is only written while holding the same lock, it ensures that any time you read the Foo property, you're accessing the most recent value of the property... without the lock, there's no memory barrier and you could get a "stale" result.
This is pretty weak, but worth knowing about.
Personally I try to make very few types thread-safe, and those tend to have more appropriate operations... but if you wanted to write code which did modify and read properties from multiple threads, this is one way of doing so. Using volatile can help too, but the semantics of it are hideously subtle.
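Continuing the Foo example above, a sketch of what "only written while holding the same lock" looks like in practice is a property whose getter and setter share one mutex:
private readonly object mutex = new object();
private Foo foo;

public Foo Foo
{
    get
    {
        lock (mutex)
        {
            return foo;    // reads always observe the latest written value
        }
    }
    set
    {
        lock (mutex)
        {
            foo = value;   // writes happen under the same lock as reads
        }
    }
}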
The object is locked inside the braces of the lock call, and then it is unlocked.
In this case the only code in the lock call is return this.in_buffer;.
So in this case, the in_buffer is not locked during a call to InputBuffer.Clear.
One solution to your problem, using extension methods, is as follows.
static class EMClass
{
    // Extension methods must live in a static class, and the lock
    // object must be static and initialized.
    private static readonly object _bufLock = new object();

    public static void LockedClear(this ArrayList a)
    {
        lock (_bufLock)
        {
            a.Clear();
        }
    }
}
Now when you do:
a.LockedClear();
The Clear call will be done in a lock.
You must ensure that the buffer is only ever accessed while holding _bufLock.
In addition to what others have said about the scope of the lock, remember that you aren't locking the object's data; you are only synchronizing on the object reference you name.
Common practice is to have a separate lock mutex as Jon Skeet exemplifies.
If you must guarantee synchronized execution while the collection is being cleared, expose a method that clears the collection, have clients call that, and don't expose your underlying implementation details. (Which is good practice anyway - look up encapsulation.)
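A sketch of that advice, with a hypothetical Connection class that keeps both the buffer and the lock private:
using System.Collections;

public class Connection
{
    private readonly object bufferLock = new object();
    private readonly ArrayList inBuffer = new ArrayList();

    // All access to inBuffer goes through methods that take bufferLock,
    // so callers never see (or lock on) the collection itself.
    public void ClearInputBuffer()
    {
        lock (bufferLock)
        {
            inBuffer.Clear();
        }
    }

    public void Enqueue(object item)
    {
        lock (bufferLock)
        {
            inBuffer.Add(item);
        }
    }
}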

Thread-Safe lazy instantiating using MEF

// Member Variable
private static readonly object _syncLock = new object();

// Now inside a static method
foreach (var lazyObject in plugins)
{
    if ((string)lazyObject.Metadata["key"] = "something")
    {
        lock (_syncLock)
        {
            // It seems the `IsValueCreated` is not up-to-date
            if (!lazyObject.IsValueCreated)
                lazyObject.value.DoSomething();
        }
        return lazyObject.value;
    }
}
Here I need synchronized access per iteration. Many threads iterate this loop, and based on the key they are looking for, a lazy instance is created and returned.
The lazyObject should not be created more than once. Although the Lazy class is meant to ensure that, and despite the lock, under heavy threading I see more than one instance created (I track this with an Interlocked.Increment on a volatile static int and log it somewhere). The problem is that I don't have access to the definition of the Lazy instances; MEF decides how the Lazy class creates objects. I should note that the CompositionContainer has a thread-safe option in its constructor, which is already used.
My questions:
1) Why doesn't the lock work?
2) Should I use an array of locks instead of a single lock to improve performance?
Is the default constructor of T in your Lazy complex? MEF uses LazyThreadSafetyMode.PublicationOnly, which means each thread accessing the uninitialised Lazy will generate a new() on T until the first one completes the initialisation. That value is then returned for all threads currently accessing .Value, and their own new() instances are discarded. If your constructor is complex (perhaps doing too much?), you should redefine it to do minimal construction work and move configuration to another method.
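A small sketch of the PublicationOnly behaviour described above (the counter, sleep, and degree of parallelism are purely illustrative; the number of factory runs you observe depends on timing):
using System;
using System.Threading;
using System.Threading.Tasks;

class Demo
{
    static int constructions;

    static void Main()
    {
        var lazy = new Lazy<object>(() =>
        {
            Interlocked.Increment(ref constructions);
            Thread.Sleep(100);          // make the race easier to observe
            return new object();
        }, LazyThreadSafetyMode.PublicationOnly);

        // Several threads race to initialise; each may run the factory,
        // but they all end up with the same published instance.
        var values = new object[4];
        Parallel.For(0, 4, i => values[i] = lazy.Value);

        bool allSame = ReferenceEquals(values[0], values[1])
                    && ReferenceEquals(values[1], values[2])
                    && ReferenceEquals(values[2], values[3]);

        Console.WriteLine("Factory ran " + constructions + " time(s).");
        Console.WriteLine("All threads saw the same instance: " + allSame);
    }
}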
You need to think about the method as a whole. Should you consider:
// assumes a Mutex field on the containing class; a plain lock/Monitor would also work
private static readonly Mutex mutex = new Mutex();

public IPlugin GetPlugin(string key)
{
    mutex.WaitOne();
    try
    {
        var plugin = plugins
            .Where(l => (string)l.Metadata["key"] == key)
            .Select(l => l.Value)
            .FirstOrDefault();
        return plugin;
    }
    finally
    {
        mutex.ReleaseMutex();
    }
}
You also need to consider that if plugins is not read-only then you need to synchronise access to that instance too, otherwise it may be modified on another thread, causing your code to fall over.
There is a specific constructor of Lazy<T, TMetadata> for such scenarios, where you define a LazyThreadSafetyMode when constructing a Lazy instance... Otherwise, the lock might not work for many different reasons, e.g. if this is not the only place where the Value property of this Lazy<T> instance is ever accessed.
Btw, you've got a typo in the if statement (a single = where you presumably meant ==)...
