Multi-threading in the case of a singleton class in C#

I want to know how a singleton works in a multi-threaded scenario.
Suppose two threads reach the instantiation code shown below. The first thread enters it, takes the lock, and proceeds with its work; meanwhile the other thread waits.
Once the first thread completes its work, the second thread enters the instantiation code. Who takes responsibility for releasing the lock, given that the first thread has already finished? And will the second thread create a new instance, or will it share the instance created by the first thread?
Code :
public sealed class Singleton
{
    private static Singleton instance = null;

    // Locking object used to synchronize instantiation.
    private static readonly object syncRoot = new object();

    private Singleton() { }

    public static Singleton Instance
    {
        get
        {
            if (instance == null)
            {
                lock (syncRoot)
                {
                    if (instance == null)
                    {
                        instance = new Singleton();
                    }
                }
            }
            return instance;
        }
    }
}

Once the first thread completes its work, the second thread enters the instantiation code. Who takes responsibility for releasing the lock, given that the first thread has already finished its work?
Each thread acquires and releases the lock individually. The first thread acquires the lock; while it has the lock, the second thread cannot acquire it.
Once the first thread has released the lock (which happens when execution leaves the block of code controlled by the lock statement), the second thread can then acquire the lock and execute the code. It will then release the lock again, when it's done with it.
…and will the second thread create a new instance, or will it share the instance created by the first thread?
In this particular implementation, the singleton is initialized only once, even if the second thread initially observes the backing field to be null. By using lock, the code ensures that only one thread will ever actually create the singleton instance.
There are variations in which more than one initialization could occur, but IMHO these are inferior ways to do it.
That said, it's my opinion that in the context of .NET, even the double-checked lock above is needlessly complicated. In most cases it suffices to use a simple field initializer. In the cases where that's not efficient enough, one can use the Lazy<T> class.
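For illustration, here is a minimal sketch of both of those simpler alternatives (the class names are mine, for illustration only):

public sealed class EagerSingleton
{
    // The runtime guarantees the static field initializer runs exactly once,
    // so no explicit locking is needed.
    public static readonly EagerSingleton Instance = new EagerSingleton();

    private EagerSingleton() { }
}

public sealed class LazySingleton
{
    // Lazy<T> is thread safe by default (ExecutionAndPublication):
    // the factory delegate is invoked at most once, on first access.
    private static readonly Lazy<LazySingleton> lazy =
        new Lazy<LazySingleton>(() => new LazySingleton());

    public static LazySingleton Instance => lazy.Value;

    private LazySingleton() { }
}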
For a much more in-depth discussion, see the related Stack Overflow question Thread Safe C# Singleton Pattern and references provided there.

Another good way to implement this is to use a static constructor; the runtime then guarantees the initialization runs exactly once.
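As a sketch of that suggestion (illustrative only, not code from the question):

public sealed class Singleton
{
    public static Singleton Instance { get; } = new Singleton();

    // An explicit static constructor prevents the compiler from marking the
    // type beforefieldinit, so initialization is deferred until the type's
    // static members are first used.
    static Singleton() { }

    private Singleton() { }
}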

What you've used here is called double-checked locking, a fairly common serialization pattern for multi-threaded code. It works.
A lock is released automatically once you fall out of the lock scope.
Assuming there is contention, one thread would test->acquire->test->initialize->release, and the next would simply test->acquire->test->release: no double-initialization.

Related

Is this immutable object threadsafe?

I have a class which loads some data from a server and transforms it. The class contains a method that reloads this data from the server.
I'm not sure whether the reload is thread-safe, but I read that I might need to add the volatile keyword or use locks.
public class Tenants : ITenants
{
    private readonly string url = "someurl";
    private readonly IHttpClientFactory httpClientFactory;
    private ConfigParser parser;

    public Tenants(IHttpClientFactory httpClientFactory)
    {
        this.httpClientFactory = httpClientFactory;
    }

    public async Task Refresh()
    {
        TConfig[] data = await ConfigLoader.GetData(httpClientFactory.CreateClient(), url);
        parser = new ConfigParser(data);
    }

    public async Task<TConfig> GetSettings(string name)
    {
        if (parser == null)
            await Refresh();

        return parser.GetSettings(name);
    }
}
public class ConfigParser
{
    private readonly ImmutableDictionary<string, TConfig> configs;

    public ConfigParser(TConfig[] configs)
    {
        this.configs = configs.ToImmutableDictionary(s => s.name, v => v);
    }

    public TConfig GetSettings(string name)
    {
        if (!configs.ContainsKey(name))
        {
            return null;
        }
        return configs[name];
    }
}
The Tenants class will be injected as a singleton into other classes via DI/IoC.
I think this design makes it thread-safe.
It is fully atomic and immutable, with no exposed members that can be changed by any consuming code (TConfig is also immutable).
I also don't think I need a lock: if two threads try to set the reference at the same time, the last one wins, which I am happy with.
And I don't know enough to tell whether I need volatile. But from what I understood about it, I won't need it, as there is only one reference to parser that I care about, and it is never exposed outside this class.
But I think some of my statements/assumptions above could be wrong.
EDIT:
From your comments I can deduce that you do not understand the difference between immutability and thread safety.
Immutability means an instance of an object cannot be mutated (its internal or external state cannot change).
Thread safety means multiple threads can access the class/method without causing errors like race conditions, deadlocks, or unexpected behavior such as something that should execute only once executing twice.
Immutable objects are thread safe, but something doesn't have to be immutable to be thread safe.
Your Tenants class is neither immutable nor thread safe because:
Its internal state can change after instantiation.
It contains unexpected behavior: the request that fetches the config can be executed twice, where it should only happen once.
If you read my answer below you can determine that if you are OK with the request happening twice (which you shouldn't be), you don't have to do anything, but you could add the volatile keyword to the parser field to prevent SOME scenarios, though definitely not all.
You don't see any locks in immutable objects because there is no writing happening to the state of the object.
When there are write operations in an object, it is not immutable anymore (like your Tenants class). To make an object like that thread safe, you need to lock the write operations that can cause errors, like the unexpected behavior of something that should execute only once executing twice.
ConfigParser seems to be thread safe; Tenants, however, definitely isn't.
Your Tenants class is also not immutable, since it exposes methods which change the state of the class (both the GetSettings and Refresh methods).
If two threads call GetSettings at the same time while parser is null, two requests will be made to fetch the ConfigParser data. You can be OK with this, but it is bad practice, and it also means the method is not thread safe.
If you are fine with the request being executed twice you could use volatile here:
The volatile keyword indicates that a field might be modified by multiple threads that are executing at the same time. The compiler, the runtime system, and even hardware may rearrange reads and writes to memory locations for performance reasons. Fields that are declared volatile are not subject to these optimizations. Adding the volatile modifier ensures that all threads will observe volatile writes performed by any other thread in the order in which they were performed.
Volatile will prevent threads from having outdated values. This means you could prevent some of the extra requests from happening (from the threads which still think parser is null), but it will not completely prevent a method or instruction from being executed multiple times at the same time.
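Applied to the class in the question, that is just a change to the field declaration, for example:

// volatile prevents threads from reading a stale (cached/reordered) value of
// the parser reference, but it does not serialize the Refresh calls themselves.
private volatile ConfigParser parser;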
In this situation you need to lock:
The lock statement acquires the mutual-exclusion lock for a given object, executes a statement block, and then releases the lock. While a lock is held, the thread that holds the lock can again acquire and release the lock. Any other thread is blocked from acquiring the lock and waits until the lock is released.
Meaning you can prevent multiple threads from executing a method or instruction multiple times at the same time.
Unfortunately, you can't use await inside a lock.
What you want to do is:
If Refresh needs to be called:
    If another thread is already working on the Refresh:
        Wait for the other thread to finish, and do not call Refresh.
        Continue with the result from the other thread.
    If no other thread is already working on the Refresh:
        Invoke the Refresh method.
I have written a library for this called TaskSynchronizer. You can use it to accomplish a truly thread-safe version of your Tenants class.
Example:
public static TaskSynchronizer Synchronizer = new TaskSynchronizer();

public static async Task DoWork()
{
    await Task.Delay(100); // Some heavy work.
    Console.WriteLine("Work done!");
}

public static async Task WorkRequested()
{
    using (Synchronizer.Acquire(DoWork, out var task)) // Synchronize the call to work.
    {
        await task;
    }
}

static void Main(string[] args)
{
    var tasks = new List<Task>();
    for (var i = 0; i < 2; i++)
    {
        tasks.Add(WorkRequested());
    }
    Task.WaitAll(tasks.ToArray());
}
will output:
Work done!
I.e. the async DoWork method has only been invoked once, even though it was requested twice at the same time.
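If you prefer to stay within the BCL rather than take a library dependency, here is a minimal sketch of the same idea for the Tenants class above, using SemaphoreSlim with a double check (my illustration, not the original answer's code):

public class Tenants : ITenants
{
    private readonly string url = "someurl";
    private readonly IHttpClientFactory httpClientFactory;
    private readonly SemaphoreSlim refreshLock = new SemaphoreSlim(1, 1);
    private volatile ConfigParser parser;

    public Tenants(IHttpClientFactory httpClientFactory)
    {
        this.httpClientFactory = httpClientFactory;
    }

    public async Task Refresh()
    {
        await refreshLock.WaitAsync();
        try
        {
            await RefreshCore();
        }
        finally
        {
            refreshLock.Release();
        }
    }

    public async Task<TConfig> GetSettings(string name)
    {
        if (parser == null)
        {
            await refreshLock.WaitAsync();
            try
            {
                // Double check: another caller may have loaded the config
                // while this thread was waiting on the semaphore.
                if (parser == null)
                    await RefreshCore();
            }
            finally
            {
                refreshLock.Release();
            }
        }

        return parser.GetSettings(name);
    }

    // Assumes the caller already holds refreshLock (SemaphoreSlim is not reentrant).
    private async Task RefreshCore()
    {
        TConfig[] data = await ConfigLoader.GetData(httpClientFactory.CreateClient(), url);
        parser = new ConfigParser(data);
    }
}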

Use Mutex to synchronize C# object: issue with call to ReleaseMutex() in C# object destructor

I have application code in which I use a mutex to synchronise some code during the creation of an object. The object's constructor acquires the mutex and ONLY releases it when the object is no longer needed, so one place to release the mutex would be in the object's destructor. As I debugged the code using 2 instances of the app, the 1st instance acquires the mutex first and the 2nd instance sits and waits (mut.WaitOne()). The user then closes the 1st app instance. At this point, the 2nd instance's mut.WaitOne() throws the exception: "The wait completed due to an abandoned mutex." This happens before mut.ReleaseMutex() was called in the 1st instance (I know it because it hit my breakpoint in the object destructor code before calling ReleaseMutex()). It appears that the mutex was released before ReleaseMutex() was called, thus causing the exception. How would I resolve this race condition? Thank you for your help.
public sealed class MyObject
{
    static ExtDeviceDriver devDrv;
    private Mutex mut = new Mutex(false, myMutex);

    public MyObject()
    {
        mut.WaitOne();
        // Thread safe code here.
        devDrv = new ExtDeviceDriver();
    }

    ~MyObject()
    {
        mut.ReleaseMutex();
    }
}
Your approach is flawed; this is not how you should be (attempting to) perform synchronization. If you want to prevent multiple application instances... then do that, not this. If you need to synchronize specific calls then do so at the most narrow scope possible.
I can still create a race condition in your approach simply by copying the original reference and making function calls with it on another thread. Nothing is actually synchronized aside from the constructor; the only thing you are preventing is the creation of more than a single instance of your class, and if I attempt to create a second instance, I get a deadlock. Not very nice.
Research the IDisposable pattern. Finalizers are not deterministic. This is not C++, that is not a destructor, you cannot rely upon it executing when you want it to.
Secondly, your mutex should be static. Each instance is getting their own mutex, so the mutex you synchronized around in instance 1 is different than that of instance 2. This needs to be a shared resource.
From the docs:
An abandoned mutex often indicates a serious error in the code. When a thread exits without releasing the mutex, the data structures protected by the mutex might not be in a consistent state. The next thread to request ownership of the mutex can handle this exception and proceed, if the integrity of the data structures can be verified.
In the case of a system-wide mutex, an abandoned mutex might indicate that an application has been terminated abruptly (for example, by using Windows Task Manager).
And it goes on to say this regarding local v system mutexes...
Mutexes are of two types: local mutexes, which are unnamed, and named system mutexes. A local mutex exists only within your process. It can be used by any thread in your process that has a reference to the Mutex object that represents the mutex. Each unnamed Mutex object represents a separate local mutex.
It sounds to me like you want a system mutex. How about telling us which calls need to be synchronized so that we can show you how to do it? Here is a very basic example:
class Foo
{
    static readonly Mutex _mut = new Mutex(false);
    static ExtDeviceDriver devDrv;

    public Foo()
    {
        _mut.WaitOne();
        // Thread safe code here.
        devDrv = new ExtDeviceDriver();
        _mut.ReleaseMutex();
    }

    public void SomeSynchronizedMethod()
    {
        // Synchronize this call.
        _mut.WaitOne();
        devDrv.DoSomething();
        _mut.ReleaseMutex();
    }
}
Destructors are not deterministic; there is no guarantee by the CLR as to when they will run. Instead, implement IDisposable in your class, and force callers to use its instances in using(...) blocks. That will ensure that your implementation of Dispose gets called when it needs to.
E.g.,
public sealed class MyObject : IDisposable
{
    static ExtDeviceDriver devDrv;
    private Mutex mut = new Mutex(false, myMutex);

    public MyObject()
    {
        mut.WaitOne();
        // Thread safe code here.
        devDrv = new ExtDeviceDriver();
    }

    public void Dispose()
    {
        mut.ReleaseMutex();
    }
}
Then callers would just need to do this:
using (var x = new MyObject())
{
    // etc
}
When the execution flow exits the using block, Dispose will be called, regardless of exceptions or anything else.

What is exactly a "thread-safe type"? When do we need to use the "lock" statement?

I read all documentation about thread-safe types and the "lock" statement, but I am still not getting it 100%.
When exactly do I need to use the "lock" statement? How does it relate to (non-)thread-safe types? Thank you.
Imagine an instance of a class with a global variable in it. Imagine two threads call a method on that object at exactly the same time, and that method updates the global variable inside.
The likelihood is that value in the variable will get corrupted. Different languages and compilers/interpreters will deal with this in different ways (or not at all...) but the point is that you get "undesired" and "unpredictable" results.
Now imagine that the method obtains a "lock" on the variable before attempting to read from or write to it. The first thread to call the method will get a "lock" on the variable, the second thread to call the method will have to wait until the lock is released by the first thread. While you still have a race condition (i.e. the second thread might overwrite the value from the first) at least you have predictable results because no two threads (that are unaware of each other) can modify the value at the same time.
You use the lock statement to obtain that lock on the variable. Typically you'd define a separate object variable and use that for the lock object:
public class MyThreadSafeClass
{
    private readonly object lockObject = new object();
    private string mySharedString;

    public void ThreadSafeMethod(string newValue)
    {
        lock (lockObject)
        {
            // Once one thread has got inside this lock statement,
            // any others will have to wait outside for their turn...
            mySharedString = newValue;
        }
    }
}
A type is deemed "thread-safe" if it applies the principle that no corruption will occur if shared data is accessed by multiple threads at the same time.
Beware the difference between "immutable" and "thread-safe". Thread-safe says that you have coded for the scenario and won't get corruption if two threads access shared state at the same time, whereas immutability is simply saying you return a new object rather than modifying it. Immutable objects are thread-safe, but not all thread-safe objects are immutable.
Thread safe code means code that can be accessed with many threads and still operate correctly.
In C#, this normally requires some sort of synchronization mechanism. A simple one is the lock statement (which is behind the scenes a call to Monitor.Enter). A code block that is surrounded by a lock block can only be accessed by one thread at a time.
Any use of a type that is not thread safe requires you to manage synchronization yourself.
A good resource to learn about threading in C# is the free eBook by Joe Albahari, found here.
http://en.wikipedia.org/wiki/Thread_safety

How to use lock on an IList

I have a static collection which implements the IList interface. This collection is used throughout the application, including adding/removing items.
Due to multi-threading issues, I wonder what I can do to ensure that the list is modified by only one thread at a time, e.g. when one thread tries to add an item, another thread should not be deleting an item at the same time.
I also wonder what the difference is between lock(this) and lock(privateObject). Which one is better in my case?
Thank you.
The lock(this) will lock on the entire instance while lock(privateObject) will only lock that specific instance variable. The second one is the better choice since locking on the entire instance will prevent other threads from being able to do anything with the object.
From MSDN:
In general, avoid locking on a public type, or instances beyond your code's control. The common constructs lock (this), lock (typeof (MyType)), and lock ("myLock") violate this guideline:
lock (this) is a problem if the instance can be accessed publicly.
lock (typeof (MyType)) is a problem if MyType is publicly accessible.
lock ("myLock") is a problem since any other code in the process using the same string will share the same lock.
Best practice is to define a private object to lock on, or a private static object variable to protect data common to all instances.
In this particular case, the collection is static which effectively means there is a single instance but that still doesn't change how the lock(this) and lock(privateObject) would behave.
By using lock(this) even in a static collection you are still locking the entire instance. In this case, as soon as thread A acquires a lock for method Foo() all other threads will have to wait to perform any operation on the collection.
Using lock(privateObject) means that as soon as thread A acquires a lock for method Foo() all other threads can perform any other operation except Foo() without waiting. Only when another thread tries to perform method Foo() will it have to wait until thread A has completed its Foo() operation and released the lock.
The lock keyword is a little confusing. The object expression in the lock statement is really just an identification mechanism for creating critical sections. It is not the subject of the lock, nor is it in any way guaranteed to be safe for multi-threaded operations just because it is referenced by the statement.
So lock(this) is creating a critical section identified by the class instance containing the currently executing method, whereas lock(privateObject) is identified by an object that is (presumably anyway) private to the class. The former is more risky because a caller of your class could inadvertently define their own critical sections using a lock statement that uses that class instance as the lock object. That could lead to unintended threading problems including, but not limited to, deadlocks and bottlenecks.
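A contrived illustration of that risk (the class and method names here are hypothetical):

public class Repository
{
    public void Update()
    {
        lock (this) // Critical section identified by the instance itself.
        {
            // ... mutate internal state ...
        }
    }
}

public static class Caller
{
    public static void Misuse(Repository repo)
    {
        // A consumer can unknowingly use the same instance as its own lock object.
        lock (repo)
        {
            // While this block runs, every repo.Update() call on other threads is
            // blocked, even though this caller never intended to serialize them.
            // Combined with other locks, this can escalate into a deadlock.
        }
    }
}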
You mentioned that you were concerned with multiple threads modifying the collection at the same time. I should point out that you should be equally concerned with threads reading the collection, even if they are not modifying it. It is likely that you will need some of the same safeguards in place to protect the collection during reads as you would during writes.
Add a private member to the class that methods lock on.
E.g.
public class MyClass : IList
{
    private object syncRoot = new object();

    public void Add(object value)
    {
        lock (this.syncRoot)
        {
            // Add code here
        }
    }

    public void Remove(object value)
    {
        lock (this.syncRoot)
        {
            // Remove code here
        }
    }
}
This will ensure that access to the list is synchronized between threads for the adding and removing cases, while maintaining access to the list. This will still let enumerators access the list while another thread modifies it, but that then opens another issue where an enumerator will throw an exception if the collection is modified during the enumeration.
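One way to guard enumeration as well (my own sketch, not part of the original answer; it assumes an ArrayList backing field named innerList) is to copy the list under the same lock and enumerate the snapshot:

public IEnumerator GetEnumerator()
{
    object[] snapshot;
    lock (this.syncRoot)
    {
        // Take a consistent copy while holding the lock; enumeration then
        // happens on the snapshot, outside the lock.
        snapshot = innerList.ToArray();
    }
    return snapshot.GetEnumerator();
}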

Does lock(){} lock a resource, or does it lock a piece of code?

I'm still confused... When we write something like this:
Object o = new Object();
var resource = new Dictionary<int, SomeclassReference>();
...and have two blocks of code that lock o while accessing resource...
// Code one
lock (o)
{
    // read from resource
}

// Code two
lock (o)
{
    // write to resource
}
Now, if I have two threads, with one thread executing code which reads from resource and another writing to it, I would want to lock resource such that when it is being read, the writer would have to wait (and vice versa: if it is being written to, readers would have to wait). Will the lock construct help me? ...or should I use something else?
(I'm using Dictionary for the purposes of this example, but could be anything)
There are two cases I'm specifically concerned about:
two threads trying to execute same line of code
two threads trying to work on the same resource
Will lock help in both conditions?
Most of the other answers address your code example, so I'll try to answer you question in the title.
A lock is really just a token. Whoever has the token may take the stage so to speak. Thus the object you're locking on doesn't have an explicit connection to the resource you're trying to synchronize around. As long as all readers/writers agree on the same token it can be anything.
When trying to lock on an object (i.e. by calling Monitor.Enter on an object) the runtime checks if the lock is already held by a thread. If this is the case the thread trying to lock is suspended, otherwise it acquires the lock and proceeds to execute.
When a thread holding a lock exits the lock scope (i.e. calls Monitor.Exit), the lock is released and any waiting threads may now acquire the lock.
Finally a couple of things to keep in mind regarding locks:
Lock as long as you need to, but no longer.
If you use Monitor.Enter/Exit instead of the lock keyword, be sure to place the call to Exit in a finally block so the lock is released even in the case of an exception.
Exposing the object to lock on makes it harder to get an overview of who is locking and when. Ideally synchronized operations should be encapsulated.
Yes, using a lock is the right way to go. You can lock on any object, but as mentioned in other answers, locking on your resource itself is probably the easiest and safest.
However, you may want to use a read/write lock pair instead of just a single lock, to decrease concurrency overhead.
The rationale for that is that if you have only one thread writing, but several threads reading, you do not want a read operation to block an other read operation, but only a read block a write or vice-versa.
Now, I am more a java guy, so you will have to change the syntax and dig up some doc to apply that in C#, but rw-locks are part of the standard concurrency package in Java, so you could write something like:
public class ThreadSafeResource<T> implements Resource<T> {
    private final Lock rlock;
    private final Lock wlock;
    private final Resource<T> res;

    public ThreadSafeResource(Resource<T> res) {
        this.res = res;
        ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
        this.rlock = rwl.readLock();
        this.wlock = rwl.writeLock();
    }

    public T read() {
        rlock.lock();
        try { return res.read(); }
        finally { rlock.unlock(); }
    }

    public T write(T t) {
        wlock.lock();
        try { return res.write(t); }
        finally { wlock.unlock(); }
    }
}
If someone can come up with a C# code sample...
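In C#, the analogous construct is ReaderWriterLockSlim. A minimal sketch of the same wrapper, assuming an IResource<T> interface that mirrors the Java Resource<T> above:

public class ThreadSafeResource<T> : IResource<T>
{
    private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
    private readonly IResource<T> res;

    public ThreadSafeResource(IResource<T> res)
    {
        this.res = res;
    }

    public T Read()
    {
        rwLock.EnterReadLock();  // Many readers may hold this concurrently.
        try { return res.Read(); }
        finally { rwLock.ExitReadLock(); }
    }

    public T Write(T value)
    {
        rwLock.EnterWriteLock(); // Exclusive: blocks readers and other writers.
        try { return res.Write(value); }
        finally { rwLock.ExitWriteLock(); }
    }
}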
Both blocks of code are locked here. If thread one locks the first block, and thread two tries to get into the second block, it will have to wait.
The lock (o) { ... } statement is compiled to this:
Monitor.Enter(o);
try { ... }
finally { Monitor.Exit(o); }
The call to Monitor.Enter() will block the thread if another thread has already called it. It will only be unblocked after that other thread has called Monitor.Exit() on the object.
Will lock help in both conditions?
Yes.
Does lock(){} lock a resource, or does
it lock a piece of code?
lock (o)
{
    // read from resource
}
is syntactic sugar for
Monitor.Enter(o);
try
{
    // read from resource
}
finally
{
    Monitor.Exit(o);
}
The Monitor class holds the collection of objects that you are using to synchronize access to blocks of code.
For each synchronizing object, Monitor keeps:
A reference to the thread that currently holds the lock on the synchronizing object; i.e. it is this thread's turn to execute.
A "ready" queue - the list of threads that are blocking until they are given the lock for this synchronizing object.
A "wait" queue - the list of threads that block until they are moved to the "ready" queue by Monitor.Pulse() or Monitor.PulseAll().
So, when a thread calls lock(o), it is placed in o's ready queue, until it is given the lock on o, at which time it continues executing its code.
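To make the "wait" queue concrete, here is a small sketch (mine, not from the original answer) of the classic producer/consumer pattern that moves a thread from the wait queue back to the ready queue:

private static readonly object gate = new object();
private static readonly Queue<int> items = new Queue<int>();

static void Consumer()
{
    lock (gate)
    {
        // Wait releases the lock and parks this thread in gate's wait queue;
        // a Pulse moves it to the ready queue, after which it re-acquires the lock.
        while (items.Count == 0)
            Monitor.Wait(gate);

        int item = items.Dequeue();
        Console.WriteLine($"Consumed {item}");
    }
}

static void Producer(int value)
{
    lock (gate)
    {
        items.Enqueue(value);
        Monitor.Pulse(gate); // Wake one waiting thread, if any.
    }
}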
And that should work, assuming that you only have one process involved. You will want to use a Mutex if you need it to work across more than one process.
Oh, and the "o" object should be a single shared instance, in scope everywhere the lock is needed, because what is REALLY being locked is that object; if you create a new one, then that new one will not be locked yet.
The way you have it implemented is an acceptable way to do what you need to do. One way to improve it would be to use lock() on the dictionary itself, rather than on a second object used to synchronize the dictionary. That way, rather than passing around an extra object, the resource itself keeps track of whether there's a lock on its own monitor.
Using a separate object can be useful in some cases, such as synchronizing access to outside resources, but in cases like this it's overhead.
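As a sketch of that suggestion, using the resource dictionary from the question as its own lock object (SomeclassReference is the placeholder type from the question):

var resource = new Dictionary<int, SomeclassReference>();

// Code one: read from resource.
lock (resource)
{
    resource.TryGetValue(42, out SomeclassReference value);
}

// Code two: write to resource.
lock (resource)
{
    resource[42] = new SomeclassReference();
}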
