Why doesn't locking on the same object cause a deadlock? [duplicate] - c#

Possible Duplicate:
Re-entrant locks in C#
If I write some code like this:
class Program {
    static void Main(string[] args) {
        Foo();
        Console.ReadLine();
    }

    static void Foo() {
        lock (_lock) {
            Console.WriteLine("Foo");
            Bar();
        }
    }

    static void Bar() {
        lock (_lock) {
            Console.WriteLine("Bar");
        }
    }

    private static readonly object _lock = new object();
}
I get as output:
Foo
Bar
I expected this to deadlock, because Foo acquires the lock and then Bar tries to acquire the same lock while Foo still holds it. But this doesn't happen.
Does the locking mechanism simply allow this because the code is executed on the same thread?

For the same thread a lock is always reentrant, so the thread can lock an object as often as it wants.

Because you have only one thread here.
lock is shorthand for:
bool lockWasTaken = false;
var temp = obj;
try {
    Monitor.Enter(temp, ref lockWasTaken);
    // your thread-safe code
}
finally { if (lockWasTaken) Monitor.Exit(temp); }
Monitor.Enter acquires the monitor on the object passed as the parameter. If another thread has executed an Enter on the object but has not yet executed the corresponding Exit, the current thread will block until the other thread releases the object. It is legal for the same thread to invoke Enter more than once without it blocking; however, an equal number of Exit calls must be invoked before other threads waiting on the object will unblock.
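A minimal sketch of that balancing rule, using explicit Monitor calls (the class name is illustrative, not from the answer):
using System;
using System.Threading;

class NestedMonitorDemo
{
    private static readonly object _lock = new object();

    static void Main()
    {
        Monitor.Enter(_lock); // first acquisition by this thread
        Monitor.Enter(_lock); // same thread: does not block, the lock count is now 2
        Console.WriteLine("Held twice by thread " + Thread.CurrentThread.ManagedThreadId);
        Monitor.Exit(_lock);  // count drops to 1; other threads would still be blocked
        Monitor.Exit(_lock);  // count drops to 0; the lock is actually released here
    }
}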

In short: it is a reentrant lock. If a thread has already acquired a lock, it does not wait when it wants to acquire that lock again. This is very much needed; otherwise it would turn simple recursive functions into a nightmare.
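For illustration, a small sketch of the recursive case this alludes to (the names are illustrative):
using System;

class RecursiveLockDemo
{
    private static readonly object _sync = new object();

    static void Main()
    {
        CountDown(3);
    }

    // Each recursive call re-acquires _sync on the same thread.
    // Because the lock is reentrant, this does not deadlock.
    static void CountDown(int n)
    {
        lock (_sync)
        {
            Console.WriteLine(n);
            if (n > 0)
            {
                CountDown(n - 1);
            }
        }
    }
}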

The lock statement is smarter than that, and it is designed to prevent just this. The lock is "owned" by the thread once it gets inside of it, so anytime it reaches another lock statement that locks on the same object it will realize that it already has access to that lock.

Related

Why can threads change instance data if it is blocked in another thread?

I began to study lock and immediately a question arose.
The docs.microsoft.com documentation says:
The lock statement acquires the mutual-exclusion lock for a given
object, executes a statement block, and then releases the lock. While
a lock is held, the thread that holds the lock can again acquire and
release the lock. Any other thread is blocked from acquiring the lock
and waits until the lock is released.
I made a simple example showing that a method on another thread that does not use the lock keyword can easily change the data of an instance while that instance is held under a lock by a method on the first thread. If you uncomment the lock in the second method, the work is done as expected. I had thought that a lock would block access to an instance from other threads even if they don't take a lock on that instance in their own methods.
Questions:
Do I understand correctly that locking an instance on one thread still allows another thread to modify data on that instance, unless that other thread also locks on the same instance? If so, what does such locking give you in general, and why is it done this way?
What does this mean in simpler terms: "While a lock is held, the thread that holds the lock can again acquire and release the lock"?
using System;
using System.Threading;
using System.Threading.Tasks;

namespace ConsoleApp1
{
    class A
    {
        public int a;
    }

    class Program
    {
        static void Main(string[] args)
        {
            A myA = new A();

            void MyMethod1()
            {
                lock (myA)
                {
                    for (int i = 0; i < 10; i++)
                    {
                        Thread.Sleep(500);
                        myA.a += 1;
                        Console.WriteLine($"Work MyMethod1 a = {myA.a}");
                    }
                }
            }

            void MyMethod2()
            {
                //lock (myA)
                {
                    for (int i = 0; i < 10; i++)
                    {
                        Thread.Sleep(500);
                        myA.a += 100;
                        Console.WriteLine($"Work MyMethod2 a = {myA.a}");
                    }
                }
            }

            Task t1 = Task.Run(MyMethod1);
            Thread.Sleep(100);
            Task t2 = Task.Run(MyMethod2);
            Task.WaitAll(t1, t2);
        }
    }
}
Locks are cooperative: they rely on all parties that can change the data to cooperate and take the lock before attempting to change it. Note that the lock does not care what you change inside it. It is fairly common to use a surrogate lock object when protecting some data structure, e.g.:
private object myLockObject = new object();
private int a;
private int b;

public void TransferMoney(int amount)
{
    lock (myLockObject)
    {
        if (a > amount)
        {
            a -= amount;
            b += amount;
        }
    }
}
Because of this, locks are very flexible: you can protect any kind of operation, but you need to write your code correctly (applied to the question's code in the sketch below).
Because of this it is also important to be careful when using locks. Locks should preferably be private, to prevent unrelated code from taking the lock. The code inside the lock should be fairly short and should not call code outside the class. This is done to avoid deadlocks: if arbitrary code runs while the lock is held, it may do things like take other locks or wait for events.
While locks are very useful, there are also other synchronization primitives that can be used depending on your use case.
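Applied to the question's example, the cooperative fix is simply to uncomment the lock so that MyMethod2 takes the same lock object as MyMethod1 (a sketch of MyMethod2 only, assuming the surrounding program from the question):
void MyMethod2()
{
    lock (myA) // cooperate: take the same lock object as MyMethod1
    {
        for (int i = 0; i < 10; i++)
        {
            Thread.Sleep(500);
            myA.a += 100;
            Console.WriteLine($"Work MyMethod2 a = {myA.a}");
        }
    }
}
With this change the two loops no longer interleave: whichever task takes the lock first runs all ten of its iterations before the other can start.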
What does this mean in simpler terms? "While a lock is held, the thread that holds the lock can again acquire and release the lock."
It means that you can do this:
lock (locker)
{
    lock (locker)
    {
        lock (locker)
        {
            // Do something while holding the lock
        }
    }
}
You can acquire the lock many times, and then release it an equal number of times. This is called reentrancy. The lock statement is reentrant, because the underlying Monitor class is reentrant by design. Other synchronization primitives, like the SemaphoreSlim, are not reentrant.
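For contrast, a minimal sketch of that non-reentrancy, using a SemaphoreSlim with a single slot (the class and field names are illustrative):
using System;
using System.Threading;

class SemaphoreReentrancyDemo
{
    private static readonly SemaphoreSlim gate = new SemaphoreSlim(1, 1);

    static void Main()
    {
        gate.Wait(); // take the single slot
        // A plain gate.Wait() here would block forever: SemaphoreSlim has no
        // notion of an owning thread, so it cannot recognise the re-entry.
        bool acquiredAgain = gate.Wait(TimeSpan.Zero); // try again without blocking
        Console.WriteLine(acquiredAgain);              // prints False
        gate.Release();
    }
}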

c# why put object in the lock statement

Can someone clarify this for me:
The statements inside the lock are protected; no other thread can go through until the current one has finished and released the lock. So what is the object inside the lock used for?
lock (obj)
{
    // statement
}
Does that mean that obj is locked and cannot be used from anywhere else until the lock has finished its work?
I've made a very simple class to illustrate what the object in the lock is there for.
using System.IO;

public class Account
{
    private decimal _balance = 0m;
    private object _transactionLock = new object();
    private object _saveLock = new object();

    public void Deposit(decimal amount)
    {
        lock (_transactionLock)
        {
            _balance += amount;
        }
    }

    public void Withdraw(decimal amount)
    {
        lock (_transactionLock)
        {
            _balance -= amount;
        }
    }

    public void Save()
    {
        lock (_saveLock)
        {
            File.WriteAllText(@"C:\Balance.txt", _balance.ToString());
        }
    }
}
You'll notice that I have three lock statements, but only two lock objects.
The lock (_transactionLock) lines make those two regions of code mutually exclusive: only one thread at a time may be inside either of them (and, since the lock is reentrant, that thread could re-enter the locked region). Other threads are blocked no matter which of the lock (_transactionLock) statements they hit, as long as some thread already holds the lock.
The second lock object, used in lock (_saveLock), is there to show that the object in the lock statement identifies the lock. So if a thread were inside one of the lock (_transactionLock) statements, nothing would stop another thread from entering the lock (_saveLock) block (unless yet another thread were already there).
Read up on semaphores and monitors. When it comes to multi-threading, you want to protect the critical section of the code so that the object in question is not accessed while an operation is being performed on it. The critical section is what is enclosed inside the lock.
This is all done to avoid deadlocks and livelocks. Once again, you only need the lock if your application is multi-threaded.

Monitor.Enter vs Monitor.Wait

I'm still unsure about the differences between these two calls. From MSDN:
Monitor.Enter(Object) Acquires an exclusive lock on the specified object.
Monitor.Wait(Object) Releases the lock on an object and blocks the current thread until it reacquires the lock.
From that I assume that Monitor.Wait is the same as Monitor.Enter except that it releases the lock on the object first before reacquiring it.
Does the current thread have to have the lock in the first place? How could a different thread force a release on a lock of an object? Why would the same thread want to reacquire a lock?
According to MSDN: Monitor.Wait Method(Object)
SynchronizationLockException: The calling thread does not own the lock for the specified object.
In other words: You can only call Monitor.Wait(Object), when you already own the lock, whereas you call Monitor.Enter(Object) in order to acquire the lock.
As for why Monitor.Wait is needed: if your thread realizes that it is lacking information to continue execution (e.g. it's waiting for a signal), you might want to let other threads enter the critical section, because not all threads have the same prerequisites.
For the waiting thread to continue execution, you will need to call Monitor.Pulse(Object) or Monitor.PulseAll(Object) while still holding the lock (Pulse, like Wait, throws a SynchronizationLockException if the calling thread does not own the lock).
Keep in mind that the next thread to acquire the lock after a pulse, once the lock is released, is not necessarily the thread that received the pulse.
Also keep in mind that receiving a pulse is not equivalent to having your condition met. You might still need to wait just a little longer:
// make sure to synchronize this correctly ;)
while (ConditionNotMet)
{
    Monitor.Wait(mutex);
    if (ConditionNotMet)      // We woke up, but our condition is still not met
        Monitor.Pulse(mutex); // Perhaps another waiting thread wants to wake up?
}
Consider this example:
public class EnterExitExample
{
    private object myLock = new object();
    private bool running;

    private void ThreadProc1()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do stuff here...
            }
            Thread.Yield();
        }
    }

    private void ThreadProc2()
    {
        while (running)
        {
            lock (myLock)
            {
                // Do other stuff here...
            }
            Thread.Yield();
        }
    }
}
Now you have two threads, each waiting for the lock, doing its stuff, then releasing the lock. The lock (myLock) syntax is just sugar for Monitor.Enter(myLock) and Monitor.Exit(myLock) in a try/finally (as shown earlier).
Let us now look at a more complicated example, where Wait and Pulse come into play.
public class PulseWaitExample
{
    private Queue<object> queue = new Queue<object>();
    private bool running;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                Monitor.Wait(queue);
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}
What do we have here?
The producer produces an object whenever it feels like it. As soon as it has one, it takes the lock on the queue, enqueues the object, and then calls Pulse.
At the same time, the consumer does NOT hold the lock; it gave it up by calling Wait. As soon as it receives a Pulse on that object, it re-acquires the lock and does its consuming work.
So what you have here is a direct thread-to-thread notification that there is something for the consumer to do. Without it, all you could do is have the consumer keep polling the collection to see whether there is anything to do yet. Using Wait, you can make sure there is.
As Cristi mentioned, naive wait/pulse code does not work, because it misses the crucial point here: the monitor is NOT a message queue. If you Pulse and no one is waiting, the pulse is LOST.
The right philosophy is that you are waiting for a condition, and if the condition is not satisfied, there is a way to wait for it without eating CPU and without holding the lock. Here, the condition for the consumer is that there is something in the queue.
See https://ideone.com/tWqTS1, which works (a fork of Cristi's example).
public class PulseWaitExample
{
    private Queue<object> queue = new Queue<object>();
    private bool running;

    private void ProducerThreadProc()
    {
        while (running)
        {
            object produced = ...; // Do production stuff here.
            lock (queue)
            {
                queue.Enqueue(produced);
                Monitor.Pulse(queue);
            }
        }
    }

    private void ConsumerThreadProc()
    {
        while (running)
        {
            object toBeConsumed;
            lock (queue)
            {
                // here is the fix: only Wait when the queue is actually empty
                // (with several consumers, a while loop would be safer than an if)
                if (queue.Count == 0)
                {
                    Monitor.Wait(queue);
                }
                toBeConsumed = queue.Dequeue();
            }
            // Do consuming stuff with toBeConsumed here.
        }
    }
}
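For comparison (not part of the original answers), the same producer/consumer hand-off can be written with BlockingCollection&lt;T&gt;, which wraps this wait/pulse bookkeeping for you; a minimal sketch with illustrative names:
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BlockingCollectionSketch
{
    private static readonly BlockingCollection<object> queue = new BlockingCollection<object>();

    static void Main()
    {
        var consumer = Task.Run(() =>
        {
            // GetConsumingEnumerable blocks while the collection is empty
            // and completes once CompleteAdding has been called.
            foreach (object item in queue.GetConsumingEnumerable())
            {
                Console.WriteLine("Consumed " + item);
            }
        });

        for (int i = 0; i < 5; i++)
        {
            queue.Add(i); // nothing is lost if the consumer isn't waiting yet
        }
        queue.CompleteAdding();
        consumer.Wait();
    }
}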

How to tell in C# if a class method is currently executing

Say I have the following code (please assume all the appropriate import statements):
public class CTestClass {
    // Properties
    protected Object LockObj;
    public ConcurrentDictionary<String, String> Prop_1;
    protected System.Timers.Timer Timer_1;

    // Methods
    public CTestClass () {
        LockObj = new Object ();
        Prop_1 = new ConcurrentDictionary<String, String> ();
        Prop_1.TryAdd ("Key_1", "Value_1");
        Timer_1 = new System.Timers.Timer ();
        Timer_1.Interval = (1000 * 60); // One minute
        Timer_1.Elapsed += new ElapsedEventHandler ((s, t) => Method_2 ());
        Timer_1.Enabled = true;
    } // End CTestClass ()

    public void Method_1 () {
        // Do something that requires Prop_1 to be read
        // But *__do not__* lock Prop_1
    } // End Method_1 ()

    public void Method_2 () {
        lock (LockObj) {
            // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
        }
    } // End Method_2 ()
} // End CTestClass

// Main class
public class Program {
    public static void Main (string[] Args) {
        CTestClass TC = new CTestClass ();
        ParallelEnumerable.Range (0, 10)
            .ForAll (s => {
                TC.Method_1 ();
            });
    }
}
I understand it is possible to use MethodBase.GetCurrentMethod, but (short of doing messy book-keeping with global variables) is it possible to solve the problem without reflection?
Thanks in advance for your assistance.
EDIT
(a) Corrected an error with the scope of LockObj
(b) Adding a bit more by way of explanation (taken from my comment below)
I have corrected my code (in my actual project) and placed LockObj as a class property. The trouble is, Method_2 is actually fired by a System.Timers.Timer, and when it is ready to fire, it is quite possible that Method_1 is already executing. But in that event it is important to wait for Method_1 to finish executing before proceeding with Method_2.
I agree that the minimum working example I have tried to create does not make this latter point clear. Let me see if I can edit the MWE.
CODE EDITING FINISHED
ONE FINAL EDIT
I am using Visual Studio 2010 and .NET 4.0, so I do not have the async/await features that would have made my life a lot easier.
As pointed out above, you should become more familiar with the different synchronization primitives that exist in .NET.
You don't solve such problems with reflection or by analyzing which method is currently running, but by using a signaling primitive that informs anyone interested that the method is running or has ended.
First of all, ConcurrentDictionary is thread-safe, so you don't need to lock it for producing/consuming. So if you only care about accessing your dictionary, no additional locking is necessary.
However, if you need to mutually exclude the execution of Method_1 and Method_2, you should declare the lock object as a class member and lock each method body with it; but, as I said, that is not needed if you are only going to use the ConcurrentDictionary.
If you really need to know which method is executing at any given moment, you can inspect the stack frames of each thread, but that will be slow and, I believe, unnecessary in this case.
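On the first point, a small sketch (my own, not from the answer) of using ConcurrentDictionary without any external lock; the key and counts are illustrative:
using System;
using System.Collections.Concurrent;
using System.Linq;

class ConcurrentDictionaryDemo
{
    static void Main()
    {
        var dict = new ConcurrentDictionary<string, int>();

        // Many threads can call AddOrUpdate concurrently without an external lock.
        ParallelEnumerable.Range(0, 1000)
            .ForAll(i => dict.AddOrUpdate("hits", 1, (key, current) => current + 1));

        int hits;
        if (dict.TryGetValue("hits", out hits))
        {
            Console.WriteLine(hits); // prints 1000
        }
    }
}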
The term you're looking for is Thread Synchronisation. There are many ways to achieve this in .NET.
One of which (lock) you've discovered.
In general terms, the lock object should be accessible by all threads needing it, and initialised before any thread tries to lock it.
The lock() syntax ensures that only one thread can continue at a time for that lock object. Any other threads which try to lock that same object will halt until they can obtain the lock.
With the lock statement there is no ability to time out or otherwise cancel the waiting for the lock (except by terminating the thread or process); Monitor.TryEnter offers a timeout if you need one (see the sketch after the example below).
By way of example, here's a simpler form:
public class ThreadSafeCounter
{
    private object _lockObject = new Object(); // Initialise once
    private int count = 0;

    public void Increment()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count++;
        }
    }

    public void Decrement()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            count--;
        }
    }

    public int Read()
    {
        lock (_lockObject) // Only one thread touches count at a time
        {
            return count;
        }
    }
}
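As noted above, the lock statement itself has no timeout, but Monitor.TryEnter does; a minimal sketch (names are illustrative, not from the answer):
using System;
using System.Threading;

class TryEnterDemo
{
    private static readonly object _lockObject = new object();

    static void Main()
    {
        if (Monitor.TryEnter(_lockObject, TimeSpan.FromSeconds(1)))
        {
            try
            {
                Console.WriteLine("Got the lock within one second");
            }
            finally
            {
                Monitor.Exit(_lockObject);
            }
        }
        else
        {
            Console.WriteLine("Gave up waiting for the lock");
        }
    }
}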
You can see this as a variant of the classic readers/writers problem where the readers don't consume the product of the writers. I think you can do it with the help of an int variable and three Mutexes.
One Mutex (mtxExecutingMeth2) guards the execution of Method_2 and blocks the execution of both Method_2 and Method_1. Method_1 must release it immediately, since otherwise you could not have other parallel executions of Method_1. But this means that you have to tell Method_2 when there are Method_1's executing, and this is done with the mtxThereAreMeth1 Mutex, which is released only when there are no more Method_1's executing. That is controlled by the value of numMeth1, which has to be protected by another Mutex (mtxNumMeth1).
I didn't give it a try, so I hope I didn't introduce any race conditions. Anyway, it should at least give you an idea of a possible direction to follow.
And this is the code:
protected int numMeth1 = 0;
protected Mutex mtxNumMeth1 = new Mutex();
protected Mutex mtxExecutingMeth2 = new Mutex();
protected Mutex mtxThereAreMeth1 = new Mutex();

public void Method_1()
{
    // if this is the first execution of Method1, tells Method2 that it has to wait
    mtxNumMeth1.WaitOne();
    if (numMeth1 == 0)
        mtxThereAreMeth1.WaitOne();
    numMeth1++;
    mtxNumMeth1.ReleaseMutex();

    // check if Method2 is executing and release the Mutex immediately in order to avoid
    // blocking other Method1's
    mtxExecutingMeth2.WaitOne();
    mtxExecutingMeth2.ReleaseMutex();

    // Do something that requires Prop_1 to be read
    // But *__do not__* lock Prop_1

    // if this is the last Method1 executing, tells Method2 that it can execute
    mtxNumMeth1.WaitOne();
    numMeth1--;
    if (numMeth1 == 0)
        mtxThereAreMeth1.ReleaseMutex();
    mtxNumMeth1.ReleaseMutex();
}

public void Method_2()
{
    mtxThereAreMeth1.WaitOne();
    mtxExecutingMeth2.WaitOne();
    // Do something with Prop_1 *__only if__* Method_1 () is not currently executing
    mtxExecutingMeth2.ReleaseMutex();
    mtxThereAreMeth1.ReleaseMutex();
}
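The same shape (many concurrent Method_1 calls, an exclusive Method_2) can also be expressed with ReaderWriterLockSlim, treating Method_1 as a reader and Method_2 as a writer. This is a sketch of an alternative, not the answer's code; the field name is illustrative:
protected ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();

public void Method_1()
{
    rwLock.EnterReadLock(); // many Method_1 calls may hold the read lock at once
    try
    {
        // Do something that requires Prop_1 to be read
    }
    finally
    {
        rwLock.ExitReadLock();
    }
}

public void Method_2()
{
    rwLock.EnterWriteLock(); // waits until no Method_1 is running, then excludes them
    try
    {
        // Do something with Prop_1 while no Method_1 is executing
    }
    finally
    {
        rwLock.ExitWriteLock();
    }
}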

Confusion about the lock statement in C#

This is from MSDN:
The lock keyword ensures that one thread does not enter a critical section of code while another thread is in the critical section.
Does "a critical section" have to be the same as "the critical section"?
Or does it mean:
The lock keyword ensures that one thread does not enter any critical section of code guarded by an object while another thread is in any critical section guarded by the same object?
class Program
{
    static void Main(string[] args)
    {
        TestDifferentCriticalSections();
        Console.ReadLine();
    }

    private static void TestDifferentCriticalSections()
    {
        Test lo = new Test();

        Thread t1 = new Thread(() =>
        {
            lo.MethodA();
        });
        t1.Start();

        Thread t2 = new Thread(() =>
        {
            lo.MethodB();
        });
        t2.Start();
    }
}

public class Test
{
    private object obj = new object();

    public Test()
    { }

    public void MethodA()
    {
        lock (obj)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread.Sleep(500);
                Console.WriteLine("A");
            }
        }
    }

    public void MethodB()
    {
        lock (obj)
        {
            for (int i = 0; i < 5; i++)
            {
                Thread.Sleep(500);
                Console.WriteLine("B");
            }
        }
    }
}
The question is confusingly worded and the answers so far are not particularly clear either. Let me rephrase the question into several questions:
(1) Does the lock statement ensure that no more than one thread is in the body of the lock statement at any one time?
No. For example:
static readonly object lock1 = new object();
static readonly object lock2 = new object();
static int counter = 0;

static object M()
{
    int c = Interlocked.Increment(ref counter);
    return c % 2 == 0 ? lock1 : lock2;
}

...

lock (M()) { Critical(); }
It is possible for two threads to both be in the body of the lock statement at the same time, because the lock statement locks on two different objects. Thread Alpha can call M() and get lock1, and then thread Beta can call M() and get lock2.
(2) Assuming that my lock statement always locks on the same object, does a lock statement ensure that no more than one "active" thread is in the body of the lock at any one time?
Yes. If you have:
static readonly object lock1 = new object();
...
lock(lock1) { Critical(); }
then thread Alpha can take the lock, and thread Beta will block until the lock is available before entering the lock body.
(3) Assuming that I have two lock statements, and both lock statements lock on the same object every time, does a lock statement ensure that no more than one "active" thread is in the body of either lock at any one time?
Yes. If you have:
static readonly object lock1 = new object();
...
static void X()
{
    lock (lock1) { CriticalX(); }
}

static void Y()
{
    lock (lock1) { CriticalY(); }
}
then if thread Alpha is in X and takes the lock, and thread Beta is in Y, then thread Beta will block until the lock is available before entering the lock body.
(4) Why are you putting "active" in "scare quotes"?
To call attention to the fact that it is possible for a waiting thread to be in the lock body. You can use the Monitor.Wait method to "pause" a thread that is in a lock body, and allow a blocked thread to become active and enter that lock body (or a different lock body that locks the same object). The waiting thread will stay in its "waiting" state until pulsed. At some time after it is pulsed, it rejoins the "ready" queue and blocks until there is no "active" thread in the lock. It then resumes at the point where it left off.
You put a lock on an object. If another thread tries to enter a critical section guarded by that same object at the same time, it will block until the lock is released.
Example:
public static object DatabaseLck = new object();

lock (DatabaseLck) {
    results = db.Query<T>(query).ToList();
}
Or
lock (DatabaseLck) {
    results = db.Query<T>(string.Format(query, args)).ToList();
}
Neither of those code blocks can run at the same time as the other, BECAUSE they use the same lock object. If you used a different lock object for each, they could run at the same time.
It is one and the same critical section.
lock (synclock)
{
    // the critical section protected by the lock statement
    // Only one thread can access this at any one time
}
See lock Statement on MSDN:
The lock keyword marks a statement block as a critical section by obtaining the mutual-exclusion lock for a given object, executing a statement, and then releasing the lock.
Or does it mean: The lock keyword ensures that one thread does not enter any critical section of code while another thread is in any critical section. ?
No. It does not mean that. It means the critical section protected by that lock and that lock alone.
Update, following code example:
If you use a single object to lock on, all critical sections guarded by it are locked together, causing other threads to block until the lock is released. In your code example, once the lock in MethodA has been entered, all other threads reaching that lock and the lock in MethodB will block until the lock is released (this happens because you lock on the same object in both methods).
It does not mean any critical section, though you can protect two blocks of code from being entered by more than one thread at the same time by locking them both with the same object. This is a common paradigm: you may want to lock your collection for both clears and writes.
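A minimal sketch of that paradigm (the class and field names are illustrative, not from the answer):
using System.Collections.Generic;

public class GuardedList
{
    private readonly object _sync = new object();
    private readonly List<string> _items = new List<string>();

    public void Add(string item)
    {
        lock (_sync) // writes and clears share one lock object, so they never interleave
        {
            _items.Add(item);
        }
    }

    public void Clear()
    {
        lock (_sync)
        {
            _items.Clear();
        }
    }
}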
No, it means that another thread won't enter THE critical section protected by this lock statement.
The critical section is defined by the programmer, and in this case you could read it as: the section protected by a lock.
So, as a translation:
The lock keyword ensures that one thread does not enter a section of code protected by a lock while another thread is in that section of code (protected by the same lock).
The critical section that it is talking about is the section guarded by the lock statements.
Any critical section that is locking on the same object will be blocked from getting access.
It is also important that all the threads involved lock on the same instance of the lock object; if the state you are protecting is static, the lock object should be static too.
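A small sketch of that distinction (type and member names are illustrative, not from the answer):
public class Counter
{
    // Guards per-instance state: each Counter has its own lock.
    private readonly object _instanceLock = new object();
    private int _instanceCount;

    // Guards shared static state: one lock for all instances.
    private static readonly object _staticLock = new object();
    private static int _totalCount;

    public void Increment()
    {
        lock (_instanceLock)
        {
            _instanceCount++;
        }
        lock (_staticLock)
        {
            _totalCount++;
        }
    }
}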
