Multiple code blocks locked by the same object - C#

If I have something like this:
private readonly object objectLock = new object();

public void MethodA()
{
    lock (objectLock)
    {
        // do something
    }
}

public void MethodB()
{
    lock (objectLock)
    {
        // do something
    }
}
If I have two threads and both come in at the same time, the first thread calls MethodA and the second calls MethodB. Whichever gets there first locks objectLock, and I assume the other thread sits there waiting until objectLock is no longer locked.

Yes, your explanation is right. (And if the lock is already taken by another thread, both of your threads sit waiting, and an arbitrary one of them gets the lock as soon as it is released.)
(Slightly off-topic) I would advise against locking whole methods if they do anything non-trivial. Try to keep the locked section of code as small and as fast as possible.
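As an illustration of keeping the locked section small (a sketch; the field and method names are mine, not from the question), do the expensive work outside the lock and only guard the touch of shared state:
private readonly object objectLock = new object();
private readonly List<string> sharedResults = new List<string>();

public void MethodA(string input)
{
    // Expensive, thread-local work happens outside the lock.
    string result = input.ToUpperInvariant();

    // Only the update of shared state is guarded.
    lock (objectLock)
    {
        sharedResults.Add(result);
    }
}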

That is correct.
However, it is not objectLock that is locked (nor the object itself); it is the code blocks that are locked.
Think of the object passed to the lock keyword as a key that unlocks multiple doors but only grants access to a single room at a time.
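To make the analogy concrete, here is a sketch (illustrative, not from the answers): give each method its own lock object and the two blocks no longer exclude each other; share one object, as in the question, and they do.
private readonly object lockA = new object();
private readonly object lockB = new object();

public void MethodA()
{
    lock (lockA) // independent key: does not block MethodB
    {
        // do something
    }
}

public void MethodB()
{
    lock (lockB) // different key, so it can run while MethodA runs
    {
        // do something
    }
}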

You're absolutely right! But be careful with locks. Locks may make your program thread-safe (meaning no errors on concurrent accesses), but it takes much more effort to make your program take real advantage of running on a multi-core system.

Yes, you are right, as Monitor.Enter and Monitor.Exit are called on the same object (objectLock) behind the scenes. Remember, it is the code block that is synchronized, not objectLock itself.

You're correct. If this isn't desirable, then consider that:
lock (objectLock)
{
    // do something
}
Is equivalent to:
Monitor.Enter(objectLock);
try
{
    // do something
}
finally
{
    Monitor.Exit(objectLock);
}
You can replace this with:
if (Monitor.TryEnter(objectLock, 250)) // Don't wait more than 250 ms
{
    try
    {
        // do something
    }
    finally
    {
        Monitor.Exit(objectLock);
    }
}
else
{
    // fallback code
}
It's also worth looking at the overloads of TryEnter(), and the other synchronisation objects such as ReaderWriterLockSlim.
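For instance, one of those TryEnter overloads reports the acquisition through a ref bool, which stays correct even if an exception interrupts the call; a sketch (the timeout value is illustrative):
bool lockTaken = false;
try
{
    // This overload sets lockTaken even if the acquisition is interrupted.
    Monitor.TryEnter(objectLock, TimeSpan.FromMilliseconds(250), ref lockTaken);
    if (lockTaken)
    {
        // do something
    }
    else
    {
        // fallback code
    }
}
finally
{
    if (lockTaken)
    {
        Monitor.Exit(objectLock);
    }
}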

Related

Does lock in C# have RAII-like behavior?

It is a little bit confusing. In C#, for managing multithreading, we have mutexes and we have lock, and in addition I found this RAII-style lock implementation:
public class ReaderWriterLockSlim_ScopedLockRead : IDisposable
{
    ReaderWriterLockSlim m_myLock;

    public ReaderWriterLockSlim_ScopedLockRead(ReaderWriterLockSlim myLock)
    {
        m_myLock = myLock;
        m_myLock.EnterReadLock();
    }

    public void Dispose()
    {
        m_myLock.ExitReadLock();
    }
}

public class ReaderWriterLockSlim_ScopedLockWrite : IDisposable
{
    ReaderWriterLockSlim m_myLock;

    public ReaderWriterLockSlim_ScopedLockWrite(ReaderWriterLockSlim myLock)
    {
        m_myLock = myLock;
        m_myLock.EnterWriteLock();
    }

    public void Dispose()
    {
        m_myLock.ExitWriteLock();
    }
}
I would like to understand the difference between them. As I see it, a mutex is the most basic multithreading primitive: you call mutex.lock() and then must not forget to call mutex.release(), which is inconvenient because an error in the middle of execution can skip the release. So we have lock(obj) { }, which as far as I can see is a kind of RAII object with the same behavior: if you get an error in the middle, under the hood it will still call the release, and all is nice.
But what about the custom implementation that I posted? It looks the same as lock(obj) { }, just with the difference that we have read and write behavior, like in the write state it is possible for a few threads to get access to the method and in the read state just one by one...
Am I right here?
For locking, it is important that every lock that is acquired is also released (no matter whether the code it was protecting threw any exceptions). So normally, no matter what lock you use, it will look something like this:
myLock.MyAcquireMethod();
try
{
    // ...
}
finally
{
    myLock.MyReleaseMethod();
}
Now, for the Monitor locking mechanism in C# there is a keyword to make this easier: lock, which basically wraps the acquiring and releasing in one code block.
So this:
lock (myObj)
{
    // ...
}
Is just a more convenient way of writing this:
Monitor.Enter(myObj);
try
{
    // ...
}
finally
{
    Monitor.Exit(myObj);
}
Sadly, for the other locks (and because Monitor has its limitations, we don't always want to use it) we don't have such a handy shorthand for the whole thing. To solve that, the ReaderWriterLockSlim_ScopedLockRead and ReaderWriterLockSlim_ScopedLockWrite wrappers implement IDisposable, which gives you this try/finally mechanism through using (using guarantees that Dispose() is called on the IDisposable no matter whether the code ran to completion or an exception occurred).
So instead of:
m_myLock.EnterWriteLock();
try
{
    // ...
}
finally
{
    m_myLock.ExitWriteLock();
}
You're now able to do this:
using (new ReaderWriterLockSlim_ScopedLockWrite(m_myLock))
{
    // ...
}
Hope this answers your question!
As a bonus, a warning on the Monitor class in C#: this locking mechanism is re-entrant at the thread level, meaning the thread holding the lock is allowed to acquire it multiple times (though it also has to release it the same number of times), which allows you to do something like this:
private readonly object _myLock = new object();

public void MyLockedMethod1()
{
    lock (_myLock)
    {
        MyLockedMethod2();
    }
}

public void MyLockedMethod2()
{
    lock (_myLock)
    {
        // ...
    }
}
So no matter whether MyLockedMethod2 is called directly or through MyLockedMethod1 (which might need the lock for other things as well), you have thread safety.
However, these days a lot of people use async/await, where a method can be resumed on a different thread. That breaks the Monitor if the thread that acquired it is not the thread releasing it, so do not write something like this (the compiler will in fact reject an await inside a lock block):
public async Task MyLockedMethod()
{
    lock (_myLock)
    {
        await MyAsyncMethod();
    }
}
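A common async-friendly alternative, not part of the original answer and shown here only as a sketch, is SemaphoreSlim, which has no thread affinity (the field name is illustrative):
private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);

public async Task MyLockedMethodAsync()
{
    await _gate.WaitAsync(); // asynchronous acquire; any thread may release it later
    try
    {
        await MyAsyncMethod();
    }
    finally
    {
        _gate.Release();
    }
}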
Anyway there is a lot of documentation about this if you would like to learn more.
This is not it at all. The reader-writer lock is an implementation used in a specific context:
Everyone can read at the same time without one reader blocking another, but a reader blocks anyone who wants to write.
No one can read or write while even one thread is writing at that time.
It is exactly as Wikipedia describes it here, and it is not specific to C# or any other language; this is just the C# flavor of a reader-writer lock:
A readers–writer lock is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access.
Check the Microsoft docs here for more information.
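A minimal sketch of those semantics (the dictionary and method names are illustrative, not from the posts): any number of threads may hold the read lock at once, while the write lock is exclusive.
private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
private readonly Dictionary<string, int> _values = new Dictionary<string, int>();

public int Read(string key)
{
    _rwLock.EnterReadLock(); // many readers can be inside at the same time
    try
    {
        return _values.TryGetValue(key, out int value) ? value : 0;
    }
    finally
    {
        _rwLock.ExitReadLock();
    }
}

public void Write(string key, int value)
{
    _rwLock.EnterWriteLock(); // exclusive: waits for readers and blocks new ones
    try
    {
        _values[key] = value;
    }
    finally
    {
        _rwLock.ExitWriteLock();
    }
}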

Do Monitor.TryEnter and lock() work together?

I am looking at existing code that uses TryEnter in one method and lock in others, like this:
private readonly object xmppLock = new object();

void f1()
{
    if (Monitor.TryEnter(xmppLock))
    {
        try
        {
            // Do stuff
        }
        finally
        {
            Monitor.Exit(xmppLock);
        }
    }
}

void f2()
{
    lock (xmppLock)
    {
        // Do stuff
    }
}
Is this okay?
lock is just syntactic sugar for Monitor.Enter/Monitor.Exit, so yes, it will work fine.
The Visual Basic SyncLock and C# lock statements use Monitor.Enter to take the lock and Monitor.Exit to release it. The advantage of using the language statements is that everything in the lock or SyncLock block is included in a Try statement.
(That said, it's considered poor form to lock on something public like a Type object.)
Yes, these two constructs will work together. The C# lock keyword is just a thin wrapper over the Monitor.Enter and Monitor.Exit methods.
Note: I would absolutely avoid using a Type instance as the value to lock on. Doing so is very fragile as it makes it very easy for two completely unrelated pieces of code to be unexpectedly locking on the same object. This can lead to deadlocks.
lock will block until the resource is available.
TryEnter (with no timeout) will not do anything if the lock is already held; it simply returns false.
Depending on your needs, you have to use one or the other.
In your case, f2() will always do whatever it does, no matter how long it has to wait. f1() will return immediately if there is lock contention.
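If you want a middle ground, note that TryEnter also has timeout overloads. A sketch (the 500 ms value is illustrative) of an f1 that waits a bounded time before giving up:
void f1()
{
    if (Monitor.TryEnter(xmppLock, TimeSpan.FromMilliseconds(500)))
    {
        try
        {
            // Do stuff
        }
        finally
        {
            Monitor.Exit(xmppLock);
        }
    }
    else
    {
        // Could not get the lock within 500 ms; skip or report.
    }
}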

C# thread synchronisation

I need to use a lock object, but it is already used by another thread. I wish to wait until the lock object is free, but I have no idea how to do this.
I found something like this:
if (Monitor.TryEnter(_lock))
{
    try
    {
        // do work
    }
    finally
    {
        Monitor.Exit(_lock);
    }
}
But that just checks and moves on; I wish to wait until the lock object is free.
Either use this:
Monitor.Enter(_lock);
try
{
    // do work
}
finally
{
    Monitor.Exit(_lock);
}
or, more preferably, the lock keyword:
lock (_lock)
{
    // do work
}
In fact, those code snippets generate essentially the same code: the compiler translates the second one into the first. However, the second is preferred because it is far more readable.
UPDATE:
The lock belongs to the thread that acquired it. That means nested usage of the lock statement is possible:
void MethodA()
{
    lock (_lock)
    {
        // ...
        MethodB();
    }
}

void MethodB()
{
    lock (_lock)
    {
        // ...
    }
}
The above code will not block.
You can use Monitor.Enter.
From the docs:
Use Enter to acquire the Monitor on the object passed as the parameter. If another thread has executed an Enter on the object but has not yet executed the corresponding Exit, the current thread will block until the other thread releases the object.
I agree with Daniel Hilgarth: the lock syntax is preferred.
Regarding your question:
I wish to wait while the lock object will be free but have no idea how to do this.
As per the MSDN description:
lock ensures that one thread does not enter a critical section while another thread is in the critical section of code. If another thread attempts to enter a locked code, it will wait (block) until the object is released.
i.e. the code you have already does what you want it to.

Is there a construct similar to a lock in C# that skips over a block of code rather than blocking?

In the piece of code that I'm working on, another developer's library fires off one of my object's methods at regular, scheduled intervals. I've run into problems where the previous call into my object's method has not completed by the time another interval is reached and a second call is made into my method to execute again; the two threads then end up stepping on each other. I'd like to be able to wrap the method's implementation with a check to see whether it is in the middle of processing and skip over the block if so.
A lock is similar to what I want, but doesn't quite cover it because a lock will block and the call into my method will pick up as soon as the previous instance releases the lock. That's not what I want to happen because I could potentially end up with a large number of these calls backed up and all waiting to process one by one. Instead, I'd like something similar to a lock, but without the block so that execution will continue after the block of code that would normally be surrounded by the lock.
What I came up with was a counter to be used with Interlocked.Increment and Interlocked.Decrement to allow me to use a simple if statement to determine whether execution on the method should continue.
public class Processor
{
    private long _numberOfThreadsRunning = 0;

    public void PerformProcessing()
    {
        long currentThreadNumber = Interlocked.Increment(ref _numberOfThreadsRunning);
        if (currentThreadNumber == 1)
        {
            // Do something...
        }
        Interlocked.Decrement(ref _numberOfThreadsRunning);
    }
}
I feel like I'm overthinking this and there may be a simpler solution out there.
You could call Monitor.TryEnter and just continue if it returns false.
public class Processor
{
    private readonly object lockObject = new object();

    public void PerformProcessing()
    {
        if (Monitor.TryEnter(lockObject) == true)
        {
            try
            {
                // Do something...
            }
            finally
            {
                Monitor.Exit(lockObject);
            }
        }
    }
}
How about adding a flag to the object? In the method, set the flag to true to indicate that the method is executing; at the very end of the method, reset it to false. Then you can check the status of the flag to know whether the method can run. (The check and the set need to happen atomically, as in the sketch below.)
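A minimal sketch of that flag approach, using Interlocked.CompareExchange so that the check and the set happen atomically (the field name and values are illustrative, not from the answer):
public class Processor
{
    // 0 = idle, 1 = running
    private int _isRunning;

    public void PerformProcessing()
    {
        // Atomically set the flag to 1 only if it is currently 0.
        if (Interlocked.CompareExchange(ref _isRunning, 1, 0) != 0)
        {
            return; // another call is already in progress; skip this one
        }
        try
        {
            // Do something...
        }
        finally
        {
            Interlocked.Exchange(ref _isRunning, 0); // clear the flag
        }
    }
}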

How to make a method exclusive in a multithreaded context?

I have a method which should be executed in an exclusive fashion. Basically, it's a multi-threaded application where the method is invoked periodically by a timer, but it could also be manually triggered by a user action.
Let's take an example:
The timer elapses, so the method is called. The task could take a few seconds.
Right after, the user clicks on some button, which should trigger the same task: BAM. It does nothing since the method is already running.
I used the following solution:
public void DoRecurentJob()
{
    if (!Monitor.TryEnter(this.lockObject))
    {
        return;
    }
    try
    {
        // Do work
    }
    finally
    {
        Monitor.Exit(this.lockObject);
    }
}
Where lockObject is declared like that:
private readonly object lockObject = new object();
Edit: There will be only one instance of the object that holds this method, so I updated the lock object to be non-static.
Is there a better way to do this? Or is this one just wrong for some reason?
This looks reasonable if you are just interested in not having the method run in parallel. There's nothing to stop it from running again immediately afterwards, say if you pushed the button half a microsecond after the timer thread executed Monitor.Exit().
And having the lock object as a readonly static also makes sense.
You could also use a Mutex or Semaphore if you want it to work across processes (with a slight performance penalty), or if you need to allow some number other than one of simultaneous threads to run your piece of code.
There are other signalling constructs that would work, but your example looks like it does the trick, and in a simple and straightforward manner.
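For the cross-process case, a minimal sketch using a named Mutex (the mutex name and the method name are illustrative):
public void DoRecurrentJobCrossProcess()
{
    using (var mutex = new Mutex(false, @"Global\MyAppRecurrentJob"))
    {
        // Zero timeout: skip the work if any process already holds the mutex.
        if (!mutex.WaitOne(TimeSpan.Zero))
        {
            return;
        }
        try
        {
            // Do work
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}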
Minor nit: if the lockObject variable is static, then "this.lockObject" shouldn't compile. It also feels slightly odd (and should at least be heavily documented) that although this is an instance method, it has distinctly type-wide behaviour as well. Possibly make it a static method which takes an instance as the parameter?
Does it actually use the instance data? If not, make it static. If it does, you should at least return a boolean to say whether or not you did the work with the instance - I find it hard to imagine a situation where I want some work done with a particular piece of data, but I don't care if that work isn't performed because some similar work was being performed with a different piece of data.
I think it should work, but it does feel a little odd. I'm not generally a fan of using manual locking, just because it's so easy to get wrong - but this does look okay. (You need to consider asynchronous exceptions between the "if" and the "try" but I suspect they won't be a problem - I can't remember the exact guarantees made by the CLR.)
I think Microsoft recommends using the lock statement, instead of using the Monitor class directly. It gives a cleaner layout and ensures the lock is released in all circumstances.
public class MyClass
{
// Used as a lock context
private readonly object myLock = new object();
public void DoSomeWork()
{
lock (myLock)
{
// Critical code section
}
}
}
If your application requires the lock to span all instances of MyClass you can define the lock context as a static field:
private static readonly object myLock = new object();
The code is fine, but I would agree with changing the method to be static, as that conveys the intention better. It feels odd that all the instances of a class share a method that runs synchronized between them, yet that method isn't static.
Remember you can always make the static synchronized method protected or private, leaving it visible only to the instances of the class.
public class MyClass
{
    private static readonly object lockObject = new object();

    public void AccessResource()
    {
        OneAtATime(this);
    }

    private static void OneAtATime(MyClass instance)
    {
        if (!Monitor.TryEnter(lockObject))
            return; // another call is already in progress

        try { /* ... */ }
        finally { Monitor.Exit(lockObject); }
    }
}
This is a good solution, although I'm not really happy with the static lock. Right now you're not waiting for the lock, so you won't get into trouble with deadlocks. But making locks too visible can easily get you into trouble the next time you have to edit this code. Also, this isn't a very scalable solution.
I usually try to make all the resources I try to protect from being accessed by multiple threads private instance variables of a class and then have a lock as a private instance variable too. That way you can instantiate multiple objects if you need to scale.
A more declarative way of doing this is using the MethodImplOptions.Synchronized specifier on the method to which you wish to synchronize access:
[MethodImpl(MethodImplOptions.Synchronized)]
public void OneAtATime() { }
However, this method is discouraged for several reasons, most of which can be found here and here (chief among them that it locks on this, or on the Type for static methods, both of which outside code can also lock on). I'm posting this so you won't feel tempted to use it. In Java, synchronized is a keyword, so it may come up when reviewing threading patterns.
We have a similar requirement, with the added requirement that if the long-running process is requested again, it should enqueue to perform another cycle after the current cycle is complete. It's similar to this:
https://codereview.stackexchange.com/questions/16150/singleton-task-running-using-tasks-await-peer-review-challenge
private bool queued = false;
private bool running = false;
private object thislock = new object();

void Enqueue()
{
    queued = true;
    while (Dequeue())
    {
        try
        {
            // do work
        }
        finally
        {
            running = false;
        }
    }
}

bool Dequeue()
{
    lock (thislock)
    {
        if (running || !queued)
        {
            return false;
        }
        else
        {
            queued = false;
            running = true;
            return true;
        }
    }
}
