I am looking at some existing code that uses Monitor.TryEnter in one method and lock in others, like this:
private readonly object xmppLock = new object();
void f1()
{
if (Monitor.TryEnter(xmppLock))
{
try
{
// Do stuff
}
finally
{
Monitor.Exit(xmppLock);
}
}
}
void f2()
{
lock(xmppLock)
{
// Do stuff
}
}
Is this okay?
lock is just syntactic sugar for Monitor.Enter/Monitor.Exit in a try/finally, so yes, it will work fine.
The Visual Basic SyncLock and C# lock statements use Monitor.Enter to take the lock and Monitor.Exit to release it. The advantage of using the language statements is that everything in the lock or SyncLock block is included in a Try statement.
(That said, it's considered poor form to lock on something public like a Type object.)
Yes, these two constructs will work together. The C# lock keyword is just a thin wrapper over the Monitor.Enter and Monitor.Exit methods, so it interoperates fine with Monitor.TryEnter on the same lock object.
Note: I would absolutely avoid using a Type instance as the value to lock on. Doing so is very fragile as it makes it very easy for two completely unrelated pieces of code to be unexpectedly locking on the same object. This can lead to deadlocks.
lock will block until the lock is available.
TryEnter (with no timeout) returns false immediately if the lock is already held, so the protected code is simply skipped.
Depending on your needs you have to use one or the other.
In your case, f2() will always run its body, waiting as long as necessary to acquire the lock; f1() will return immediately, doing nothing, if there is lock contention.
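As a quick illustration (this demo class and its names are mine, not from the question), a thread calling TryEnter while another thread holds the lock gets false back immediately, whereas lock simply waits:
using System;
using System.Threading;
class TryEnterDemo
{
    private static readonly object demoLock = new object();
    static void Main()
    {
        // Hold the lock on the main thread while a worker tries to take it.
        lock (demoLock)
        {
            var worker = new Thread(() =>
            {
                // TryEnter returns false straight away: the main thread still holds the lock.
                bool taken = Monitor.TryEnter(demoLock);
                Console.WriteLine("TryEnter while contended: " + taken);   // usually False
                if (taken) Monitor.Exit(demoLock);
                // lock() blocks here until the main thread leaves its lock block.
                lock (demoLock)
                {
                    Console.WriteLine("lock() acquired after the main thread released it");
                }
            });
            worker.Start();
            Thread.Sleep(200);   // keep holding the lock so the worker's TryEnter sees contention
        }
    }
}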
It is a little bit confusing. In C#, for managing multithreading, we have Mutex and we have lock, and in addition I found this RAII-style lock implementation:
public class ReaderWriterLockSlim_ScopedLockRead : IDisposable
{
ReaderWriterLockSlim m_myLock;
public ReaderWriterLockSlim_ScopedLockRead(ReaderWriterLockSlim myLock)
{
m_myLock = myLock;
m_myLock.EnterReadLock();
}
public void Dispose()
{
m_myLock.ExitReadLock();
}
}
public class ReaderWriterLockSlim_ScopedLockWrite : IDisposable
{
ReaderWriterLockSlim m_myLock;
public ReaderWriterLockSlim_ScopedLockWrite(ReaderWriterLockSlim myLock)
{
m_myLock = myLock;
m_myLock.EnterWriteLock();
}
public void Dispose()
{
m_myLock.ExitWriteLock();
}
}
I would like to understand the difference between them. As I see it, a mutex is the most basic multithreading primitive: you call mutex.lock() and then must not forget to call mutex.release(). Calling mutex.release() reliably is not so easy, because an error in the middle of execution can skip it. That is what lock(obj) { } gives us: as far as I can see it is a kind of RAII object with the same behavior, but if you get an error in the middle it will still call the release under the hood, and all is nice.
But what about the custom implementation that I posted above? It looks the same as lock(obj) { }, just with the difference that we have read and write behavior, like in the write state it is possible for a few threads to get access to the method, and in the read state just one at a time...
Am I right here?
So for locking, it's important that every lock that is acquired is also released, no matter whether the code it was protecting threw an exception. So normally, no matter what lock you use, it'll look something like this:
myLock.MyAcquireMethod();
try
{
//...
}
finally
{
myLock.MyReleaseMethod();
}
Now, for the Monitor locking mechanism in C#, there is a keyword to make this easier: lock, which basically wraps the acquiring and releasing in one code block.
So this:
lock(myObj)
{
//...
}
Is just a more convenient way of writing this:
Monitor.Enter(myObj);
try
{
//...
}
finally
{
Monitor.Exit(myObj);
}
Sadly, for the other locks (and because Monitor has its limitations, we don't always want to use it) we don't have such a handy shorthand. To solve that, the ReaderWriterLockSlim_ScopedLockRead wrapper implements IDisposable, which gives you the same try/finally mechanism: using guarantees that Dispose() is called on the IDisposable whether the code ran to completion or an exception occurred.
So instead of:
m_myLock.EnterWriteLock();
try
{
//...
}
finally
{
m_myLock.ExitWriteLock();
}
You're now able to do this:
using(new ReaderWriterLockSlim_ScopedLockWrite(m_myLock))
{
//...
}
Hope this answers your question!
As a bonus, a warning about the Monitor class in C#: this locking mechanism is re-entrant at the thread level, meaning the thread holding the lock is allowed to acquire it multiple times (though it also has to release it multiple times), which allows you to do something like this:
private readonly object _myLock = new object();
public void MyLockedMethod1()
{
lock(_myLock)
{
MyLockedMethod2();
}
}
public void MyLockedMethod2()
{
lock(_myLock)
{
//...
}
}
So whether MyLockedMethod2 is called directly or through MyLockedMethod1 (which might need the lock for other stuff as well), you get thread-safety.
However, these days a lot of people use async/await, where a method can resume on a different thread. That breaks Monitor, because the thread that acquired the lock may not be the thread releasing it; in fact the C# compiler refuses to compile an await directly inside a lock block, so you cannot write something like this:
public async Task MyLockedMethod()
{
lock(_myLock)
{
await MyAsyncMethod();
}
}
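If you do need to serialize access around an await, one common alternative (my addition, not part of the original answer) is SemaphoreSlim, which does not care which thread releases it. A minimal sketch, with made-up method names:
using System.Threading;
using System.Threading.Tasks;
class AsyncLockedExample
{
    // SemaphoreSlim(1, 1) acts as an async-friendly mutual-exclusion lock.
    private readonly SemaphoreSlim _gate = new SemaphoreSlim(1, 1);
    public async Task MyLockedMethodAsync()
    {
        await _gate.WaitAsync();   // asynchronously wait for exclusive access
        try
        {
            await MyAsyncMethod(); // fine: Release does not care which thread runs the continuation
        }
        finally
        {
            _gate.Release();
        }
    }
    private Task MyAsyncMethod() => Task.Delay(10); // stand-in for real async work
}
Note that, unlike Monitor, SemaphoreSlim is not re-entrant, so the nested-call pattern shown earlier would deadlock with it.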
Anyway there is a lot of documentation about this if you would like to learn more.
This is not it at all. The reader-writer lock is an implementation used for a specific scenario:
Any number of threads can read at the same time without blocking each other, but readers block anyone who wants to write.
No one can read or write while even one thread is writing.
It is exactly as Wikipedia describes it, and it is not specific to C# or any other language. This is just the C# flavor of a reader-writer lock:
A readers–writer lock is a synchronization primitive that solves one of the readers–writers problems. An RW lock allows concurrent access for read-only operations, while write operations require exclusive access.
Check Microsoft docs here for more information
If I have something like this:
private readonly object objectLock = new object();
public void MethodA()
{
lock(objectLock)
{
//do something
}
}
public void MethodB()
{
lock(objectLock)
{
//do something
}
}
If I have two threads and both come in at the same time, the first thread calls MethodA and the second calls MethodB. Whichever gets there first locks objectLock; I assume the other thread sits there waiting until objectLock is released.
Yes, your explanation is right -- unless the lock is already taken (in which case both threads sit waiting, and an arbitrary one gets the lock as soon as it's unlocked).
(Slightly offtopic) I would advise not to lock the whole methods if they are doing something non-trivial. Try to keep the "locking" section of code as small and as fast as possible.
That is correct.
However, it is not objectLock (nor the object it refers to) that is locked; it is the code blocks that are locked.
Think of the object that is passed to the lock keyword as a key that unlocks multiple doors but only grants access to one room at a time.
You're absolutely right! But be careful with locks. Locks may make your program thread-safe (meaning no errors on concurrent accesses), but it takes much more effort to make your program take real advantage of running on a multi-core system.
Yes, you are right, as Monitor.Enter and Monitor.Exit are called on the same object, objectLock, behind the scenes. Remember it is the code block that is synchronized, not the objectLock.
You're correct. If this isn't desirable, then consider that:
lock(objectLock)
{
//do something
}
Is equivalent to:
Monitor.Enter(objectLock);
try
{
//do something
}
finally
{
Monitor.Exit(objectLock);
}
You can replace this with:
if(Monitor.TryEnter(objectLock, 250))//Don't wait more than 250ms
{
try
{
//do something
}
finally
{
Monitor.Exit(objectLock);
}
}
else
{
//fallback code
}
It's also worth looking at the overloads of TryEnter(), and the other synchronisation objects such as ReaderWriterLockSlim.
I haven't had any issues using the same lock for multiple methods so far, but I'm wondering if the following code might actually have issues (performance?) that I'm not aware of:
private static readonly object lockObj = new object();
public int GetValue1(int index)
{
lock(lockObj)
{
// Collection 1 read and/or write
}
}
public int GetValue2(int index)
{
lock(lockObj)
{
// Collection 2 read and/or write
}
}
public int GetValue3(int index)
{
lock(lockObj)
{
// Collection 3 read and/or write
}
}
The 3 methods and the collections are not related in any way.
In addition, will it be a problem if this lockObj is also used by a singleton (in the Instance property)?
Edit: To clarify my question on using the same lock object in a Singleton class:
private static readonly object SyncObject = new object();
public static MySingleton Instance
{
get
{
lock (SyncObject)
{
if (_instance == null)
{
_instance = new MySingleton();
}
}
return _instance;
}
}
public int MyMethod()
{
lock (SyncObject)
{
// Read or write
}
}
Will this cause issues?
If the methods are unrelated as you state, then use a different lock for each one; otherwise it's inefficient (since there's no reason for different methods to lock on the same object, as they could safely execute concurrently).
Also, it seems that these are instance methods locking on a static object -- was that intended? I have a feeling that's a bug; instance methods should (usually) only lock on instance fields.
Regarding the Singleton design pattern:
While locking can be safe for those, a better practice is delayed initialization of the field, like this:
private static object sharedInstance;
public static object SharedInstance
{
get
{
if (sharedInstance == null)
Interlocked.CompareExchange(ref sharedInstance, new object(), null);
return sharedInstance;
}
}
This way it's a little bit faster (both because interlocked methods are faster, and because the initialization is delayed), but still thread-safe.
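Another option worth mentioning (my addition, not from the answer above) is Lazy<T>, which gives you thread-safe, lazy initialization without writing any locking code yourself. A minimal sketch:
using System;
public sealed class MySingleton
{
    // The default LazyThreadSafetyMode (ExecutionAndPublication) guarantees
    // the factory delegate runs at most once, even under concurrent first access.
    private static readonly Lazy<MySingleton> _lazyInstance =
        new Lazy<MySingleton>(() => new MySingleton());
    public static MySingleton Instance => _lazyInstance.Value;
    private MySingleton() { }
}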
By using the same object to lock on in all of those methods, you are serializing all access to code in all of the threads.
That is... code running GetValue1() will block other code in a different thread from running GetValue2() until it's done. If you add even more code that locks on the same object instance, you'll end up with effectively a single-threaded application at some point.
Shared lock locks other non-related calls
If you use the same lock, then locking in one method unnecessarily blocks the others as well. If they're not related at all, this is a problem, since they have to wait for each other, which they shouldn't.
Bottleneck
This may pose a bottleneck when these methods are frequently called. With separate locks they would run independently, but sharing the same lock means they must wait for the lock to be released more often than required (up to three times more often).
To create a thread-safe singleton, use this technique.
You don't need a lock.
In general, each lock should be used as little as possible.
The more methods lock on the same thing, the more likely you are to end up waiting for it when you don't really need to.
Good question. There are pros and cons of making locks more fine grained vs more coarse grained, with one extreme being a separate lock for each piece of data and the other extreme being one lock for the entire program. As other posts point out, the disadvantage of reusing the same locks is in general you may get less concurrency (though it depends on the case, you may not get less concurrency).
However, the disadvantage of using more locks is in general you make deadlock more likely. There are more ways to get deadlocks the more locks you have involved. For example, acquiring two locks at the same time in separate threads but in the opposite order is a potential deadlock which wouldn't happen if only one lock were involved. Of course sometimes you may fix a deadlock by breaking one lock into two, but usually fewer locks means fewer deadlocks. There's also added code complexity of having more locks.
In general these two factors need to be balanced. It's common to use one lock per class for convenience if it doesn't cause any concurrency issues. In fact, doing so is a design pattern called a monitor.
I would say the best practice is to favor fewer locks for code simplicity's sake, and add locks only if there's a good reason (such as concurrency, a case where it's simpler, or fixing a deadlock).
I'm still confused... When we write something like this:
Object o = new Object();
var resource = new Dictionary<int , SomeclassReference>();
...and have two blocks of code that lock o while accessing resource...
//Code one
lock(o)
{
// read from resource
}
//Code two
lock(o)
{
// write to resource
}
Now, if I have two threads, with one thread executing code which reads from resource and another writing to it, I would want to lock resource such that when it is being read, the writer would have to wait (and vice versa: if it is being written to, readers would have to wait). Will the lock construct help me? ...or should I use something else?
(I'm using Dictionary for the purposes of this example, but could be anything)
There are two cases I'm specifically concerned about:
two threads trying to execute same line of code
two threads trying to work on the same resource
Will lock help in both conditions?
Most of the other answers address your code example, so I'll try to answer your question in the title.
A lock is really just a token. Whoever has the token may take the stage so to speak. Thus the object you're locking on doesn't have an explicit connection to the resource you're trying to synchronize around. As long as all readers/writers agree on the same token it can be anything.
When trying to lock on an object (i.e. by calling Monitor.Enter on it), the runtime checks whether the lock is already held by another thread. If so, the thread trying to lock is suspended; otherwise it acquires the lock and proceeds to execute.
When a thread holding a lock exits the lock scope (i.e. calls Monitor.Exit), the lock is released and any waiting threads may now acquire the lock.
Finally a couple of things to keep in mind regarding locks:
Lock as long as you need to, but no longer.
If you use Monitor.Enter/Exit instead of the lock keyword, be sure to place the call to Exit in a finally block so the lock is released even in the case of an exception.
Exposing the object to lock on makes it harder to get an overview of who is locking and when. Ideally synchronized operations should be encapsulated.
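For example, here is a minimal sketch (the class and its members are made up for illustration) of encapsulating both the lock token and the resource so callers never see either:
using System.Collections.Generic;
public class SynchronizedStore
{
    // Both the lock token and the protected resource are private:
    // callers can only go through the synchronized methods below.
    private readonly object _sync = new object();
    private readonly Dictionary<int, string> _items = new Dictionary<int, string>();
    public void Add(int key, string value)
    {
        lock (_sync)
        {
            _items[key] = value;
        }
    }
    public bool TryGet(int key, out string value)
    {
        lock (_sync)
        {
            return _items.TryGetValue(key, out value);
        }
    }
}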
Yes, using a lock is the right way to go. You can lock on any object, but as mentioned in other answers, locking on your resource itself is probably the easiest and safest.
However, you may want use a read/write lock pair instead of just a single lock, to decrease concurrency overhead.
The rationale for that is that if you have only one thread writing but several threads reading, you do not want a read operation to block another read operation, but only have a read block a write, or vice versa.
Now, I am more of a Java guy, so you will have to change the syntax and dig up some docs to apply this in C#, but rw-locks are part of the standard concurrency package in Java, so you could write something like:
public class ThreadSafeResource<T> implements Resource<T> {
private final Lock rlock;
private final Lock wlock;
private final Resource<T> res;
public ThreadSafeResource(Resource<T> res) {
this.res = res;
ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
this.rlock = rwl.readLock();
this.wlock = rwl.writeLock();
}
public T read() {
rlock.lock();
try { return res.read(); }
finally { rlock.unlock(); }
}
public T write(T t) {
wlock.lock();
try { return res.write(t); }
finally { wlock.unlock(); }
}
}
If someone can come up with a C# code sample...
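Here is a rough C# equivalent using ReaderWriterLockSlim (the IResource<T> interface is my stand-in for the Java Resource<T> above):
using System.Threading;
public interface IResource<T>
{
    T Read();
    T Write(T value);
}
public class ThreadSafeResource<T> : IResource<T>
{
    private readonly ReaderWriterLockSlim _rwLock = new ReaderWriterLockSlim();
    private readonly IResource<T> _res;
    public ThreadSafeResource(IResource<T> res)
    {
        _res = res;
    }
    public T Read()
    {
        _rwLock.EnterReadLock();          // many readers may hold this at once
        try { return _res.Read(); }
        finally { _rwLock.ExitReadLock(); }
    }
    public T Write(T value)
    {
        _rwLock.EnterWriteLock();         // exclusive: blocks readers and other writers
        try { return _res.Write(value); }
        finally { _rwLock.ExitWriteLock(); }
    }
}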
Both blocks of code are locked here. If thread one locks the first block, and thread two tries to get into the second block, it will have to wait.
The lock (o) { ... } statement is compiled to this:
Monitor.Enter(o);
try { ... }
finally { Monitor.Exit(o); }
The call to Monitor.Enter() will block the thread if another thread has already called it. It will only be unblocked after that other thread has called Monitor.Exit() on the object.
Will lock help in both conditions?
Yes.
Does lock(){} lock a resource, or does it lock a piece of code?
lock(o)
{
// read from resource
}
is syntactic sugar for
Monitor.Enter(o);
try
{
// read from resource
}
finally
{
Monitor.Exit(o);
}
The Monitor class holds the collection of objects that you are using to synchronize access to blocks of code.
For each synchronizing object, Monitor keeps:
A reference to the thread that currently holds the lock on the synchronizing object; i.e. it is this thread's turn to execute.
A "ready" queue - the list of threads that are blocking until they are given the lock for this synchronizing object.
A "wait" queue - the list of threads that block until they are moved to the "ready" queue by Monitor.Pulse() or Monitor.PulseAll().
So, when a thread calls lock(o) while another thread holds the lock, it is placed in o's ready queue until it is given the lock on o, at which point it continues executing its code.
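To illustrate what that "wait" queue is for, here is a small sketch of my own (not from the answer) using Monitor.Wait and Monitor.Pulse: Wait releases the lock and parks the thread in the wait queue until a Pulse moves it to the ready queue.
using System.Collections.Generic;
using System.Threading;
class BlockingQueue<T>
{
    private readonly object _sync = new object();
    private readonly Queue<T> _queue = new Queue<T>();
    public void Enqueue(T item)
    {
        lock (_sync)
        {
            _queue.Enqueue(item);
            Monitor.Pulse(_sync);      // move one waiting consumer from the wait queue to the ready queue
        }
    }
    public T Dequeue()
    {
        lock (_sync)
        {
            while (_queue.Count == 0)
            {
                Monitor.Wait(_sync);   // releases the lock, waits for a Pulse, then reacquires the lock
            }
            return _queue.Dequeue();
        }
    }
}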
And that should work, assuming that you only have one process involved. You will want to use a Mutex if you want this to work across more than one process.
Oh, and the "o" object should be a single shared instance, in scope everywhere the lock is needed, as what is REALLY being locked on is that object; if you create a new one, then that new one will not be locked yet.
The way you have it implemented is an acceptable way to do what you need to do. One way to improve it would be to use lock() on the dictionary itself, rather than on a second object used to synchronize the dictionary. That way, rather than passing around an extra object, the resource itself keeps track of whether there's a lock on its own monitor.
Using a separate object can be useful in some cases, such as synchronizing access to outside resources, but in cases like this it's overhead.
A common pattern in C++ is to create a class that wraps a lock: the lock is either implicitly taken when the object is created, or taken explicitly afterwards. When the object goes out of scope, the dtor automatically releases the lock.
Is it possible to do this in C#? As far as I understand there are no guarantees on when a dtor in C# will run after the object goes out of scope.
Clarification:
Any lock in general, spinlock, ReaderWriterLock, whatever.
Calling Dispose myself defeats the purpose of the pattern - to have the lock released as soon as we exit scope - no matter if we called return in the middle, threw exception or whatnot.
Also, as far as I understand using will still only queue object for GC, not destroy it immediately...
To amplify Timothy's answer, the lock statement does create a scoped lock using a monitor. Essentially, this translates into something like this:
lock(_lockKey)
{
// Code under lock
}
// is equivalent to this
Monitor.Enter(_lockKey);
try
{
// Code under lock
}
finally
{
Monitor.Exit(_lockKey);
}
In C# you rarely use the dtor for this kind of pattern (see the using statement/IDisposable). One thing you may notice in the code is that if an async exception happens between the Monitor.Enter and the try, it looks like the monitor will not be released. The JIT actually makes a special guarantee that if a Monitor.Enter immediately precedes a try block the async exception will not happen until the try block thus ensuring the release.
Your understanding regarding using is incorrect, this is a way to have scoped actions happen in a deterministic fashion (no queuing to the GC takes place).
C# supplies the lock keyword, which provides an exclusive lock; if you want different kinds of locking (e.g. read/write), you'll have to use the using statement approach.
P.S. This thread may interest you.
It's true that you don't know exactly when the dtor is going to run... but, if you implement the IDisposable interface, and then use either a 'using' block or call 'Dispose()' yourself, you will have a place to put your code.
Question: When you say "lock", do you mean a thread lock so that only one thread at a time can use the object? As in:
lock (_myLockKey) { ... }
Please clarify.
For completeness there is another way to achieve a similar RAII effect without using using and IDisposable. In C# using is usually clearer (see also here for some more thoughts), but in other languages (e.g. Java), or even in C# if using is not appropriate for some reason, it's useful to know.
It's an idiom called "Execute Around" and the idea is that you call a method that does the pre and post stuff (e.g. locking/unlocking your threads, or setting up and committing/ closing your DB connection etc), and you pass into that method a delegate that will implement the operations you want to occur in between.
e.g.:
funkyObj.InOut( delegate{ System.Console.WriteLine( "middle bit" ); } );
Depending on what the InOut method does, the output might be something like:
first bit
middle bit
last bit
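For reference, a minimal sketch of what such an InOut method might look like (FunkyObj and InOut are just names carried over from the example, not a real API):
using System;
public class FunkyObj
{
    // "Execute Around": do the fixed pre/post work here, run the caller's code in between.
    public void InOut(Action middle)
    {
        Console.WriteLine("first bit");    // e.g. acquire a lock, open a connection, ...
        try
        {
            middle();
        }
        finally
        {
            Console.WriteLine("last bit"); // ... release/close, even if 'middle' throws
        }
    }
}
With that, the funkyObj.InOut(...) call above prints the three lines shown.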
As I say, this answer is for completeness only; the previous suggestions of using with IDisposable, as well as the lock keyword, are going to be better 99% of the time.
It's a shame that, while .Net has gone further than many other modern OO languages in this regard (I'm looking at you, Java), it still places the responsibility for RAII on the client code (i.e. the code that uses using), whereas in C++ the destructor always runs at the end of the scope.
Why would you want a scoped lock in the first place? Suppose you have the following code:
lock(obj) {
... some logic goes here
}
If an exception happens inside the try block that the compiler inserts for lock, it often means that you now have corrupted state, and other threads will continue to work with that corrupted state. It is better to let the program hang to signal the problem.
Another issue is that try incurs some performance penalty, but this is usually a much lesser problem, if a problem at all.
Jeffrey Richter specifically advises not to use the lock statement.
I've been really bothered by the fact that using is something the developer has to remember to do; at best you get a warning, which most people never bother to promote to an error. So I've been toying with an idea like this: it forces the client to at least TRY to do things correctly. Fortunately and unfortunately, it's a closure, so the client could still keep a copy of the resource and try to use it again later, but this code at least tries to push the client in the right direction...
public class MyLockedResource : IDisposable
{
private MyLockedResource()
{
Console.WriteLine("initialize");
}
public void Dispose()
{
Console.WriteLine("dispose");
}
public delegate void RAII(MyLockedResource resource);
static public void Use(RAII raii)
{
using (MyLockedResource resource = new MyLockedResource())
{
raii(resource);
}
}
public void test()
{
Console.WriteLine("test");
}
}
Good usage:
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
});
Bad usage! (Unfortunately, this can't be prevented...)
MyLockedResource res = null;
MyLockedResource.Use(delegate(MyLockedResource resource)
{
resource.test();
res = resource;
res.test();
});
res.test();