Is the "lock" statement in C# time-consuming? - c#

I have a method which is called many times by other methods to hash data. Inside the method, some lock statements are used. Could you please let me know whether the lock statement is time-consuming, and what is the best way to improve it?
P.S.: I have been looking for a way to avoid using the lock statement in this method.

Your question is not answerable. It depends entirely on whether the lock is contended or not.
Let me put it this way: you're asking "does it take a long time to enter the bathroom?" without telling us how many people are already in line to use it. If there is never anyone in line, not long at all. If there are usually twenty people waiting to get in, perhaps very long indeed.

The lock statement itself is not very time-consuming, but it may cause contention over shared data.
Followup: if you need to protect shared data, use a lock. Lock-free code is wicked difficult to do correctly, as this article illustrates.

You might find this article on threads interesting and relevant. One of the claims that it makes is "Locking itself is very fast: a lock is typically obtained in tens of nanoseconds assuming no blocking."

The lock statement itself is syntactic sugar that uses a Monitor behind the scenes.
This in itself is usually not overly resource intensive, but it can become a problem if you have many reads but few writes to your variable across multiple threads: with a plain lock, each read has to wait for the others to finish before it can complete. In scenarios where you are frequently getting the variable from multiple threads but rarely setting it, you might want to look at using a ReaderWriterLockSlim object to manage concurrent reads and exclusive writes.
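A minimal sketch of that pattern (the field names are illustrative, not from the question): any number of readers proceed concurrently, while a writer gets exclusive access.
private readonly ReaderWriterLockSlim _cacheLock = new ReaderWriterLockSlim();
private string _cachedValue;

public string ReadValue()
{
    _cacheLock.EnterReadLock(); // many threads can hold the read lock at once
    try { return _cachedValue; }
    finally { _cacheLock.ExitReadLock(); }
}

public void WriteValue(string value)
{
    _cacheLock.EnterWriteLock(); // exclusive: waits for readers to drain
    try { _cachedValue = value; }
    finally { _cacheLock.ExitWriteLock(); }
}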

I landed here for a slightly different question.
In a piece of code that can be run as single-threaded or multi-threaded, should I refactor the code to remove the lock statement (i.e. is the lock statement costless when there is no parallelism)?
This is my test
class Program
{
    static void Main(string[] args)
    {
        var startingTime = DateTime.Now;
        (new Program()).LockMethod();
        Console.WriteLine("Elapsed {0}", (DateTime.Now - startingTime).TotalMilliseconds);
        Console.ReadLine();
    }

    private void LockMethod()
    {
        int a = 0;
        for (int i = 0; i < 10000000; i++)
        {
            lock (this)
            {
                a++; // costless operation
            }
        }
    }
}
To be sure that this code is not optimized, I decompiled it. There were no optimizations at all (apart from a++ being changed to ++a).
RESULT: 10 million uncontended lock acquisitions take about 160 ms, which is roughly 16 ns to acquire an uncontended lock.
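For what it's worth, if you rerun this measurement, Stopwatch gives better resolution than DateTime.Now. A sketch of the same loop using it (printing a at the end keeps the JIT from dropping the loop entirely):
var gate = new object();
int a = 0;
var sw = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < 10000000; i++)
{
    lock (gate) { a++; } // same costless operation as above
}
sw.Stop();
Console.WriteLine("Locked loop: {0} ms (a = {1})", sw.ElapsedMilliseconds, a);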

Related

lock keyword on a LINQ Parallel.ForEach<> loop

This is more of a conceptual question. I was wondering whether using a lock inside a Parallel.ForEach<> loop would take away the benefits of parallelizing a foreach loop.
Here is some sample code where I have seen it done.
Parallel.ForEach<KeyValuePair<string, XElement>>(binReferences.KeyValuePairs, reference =>
{
    lock (fileLockObject)
    {
        if (fileLocks.ContainsKey(reference.Key) == false)
        {
            fileLocks.Add(reference.Key, new object());
        }
    }

    RecursiveBinUpdate(reference.Value, testPath, reference.Key, maxRecursionCount, ref recursionCount);

    lock (fileLocks[reference.Key])
    {
        reference.Value.Document.Save(reference.Key);
    }
});
Where fileLockObject and fileLocks are as follows.
private static object fileLockObject = new object();
private static Dictionary<string, object> fileLocks = new Dictionary<string, object>();
Does this technique completely make the loop not parallel?
I would like to see your thoughts on this.
It means all of the work inside the lock can't be done in parallel. This greatly harms the performance here, yes. Since the entire body is not locked (and what is locked is not all on the same object), there is still some parallelization here, though. Whether the parallelization that you do get adds enough benefit to surpass the overhead that comes with managing the threads and synchronizing around the locks is something you really just need to test yourself with your specific data.
That said, it looks like what you're doing (at least in the first locked block, which is the one I'd be more concerned with, as every thread is locking on the same object) is locking access to a Dictionary. You can instead use a ConcurrentDictionary, which is specifically designed to be used from multiple threads, and will minimize the amount of synchronization that needs to be done.
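For illustration, a sketch of that change, assuming fileLocks is switched to a ConcurrentDictionary<string, object> (from System.Collections.Concurrent):
private static ConcurrentDictionary<string, object> fileLocks =
    new ConcurrentDictionary<string, object>();

// Inside the Parallel.ForEach body, GetOrAdd atomically returns the
// existing per-file lock object or inserts a new one, which replaces
// the first locked block entirely.
lock (fileLocks.GetOrAdd(reference.Key, _ => new object()))
{
    reference.Value.Document.Save(reference.Key);
}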
if I used a lock ... would that take away the benefits of parallelizing a foreach loop.
Proportionally. When RecursiveBinUpdate() is a big chunk of work (and independent), then it will still pay off. The locking part could be less than 1%, or 99%. Look up Amdahl's law; it applies here.
But worse, your code is not thread-safe. Of your 2 operations on fileLocks, only the first is actually inside a lock.
lock (fileLockObject)
{
    if (fileLocks.ContainsKey(reference.Key) == false)
    {
        ...
    }
}
and
lock (fileLocks[reference.Key]) // this access to fileLocks[] is not protected
change the 2nd part to:
lock (fileLockObject)
{
    reference.Value.Document.Save(reference.Key);
}
and the use of ref recursionCount as a parameter looks suspicious too. It might work with Interlocked.Increment, though.
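For instance, a one-line sketch of that idea:
// Atomic increment of the shared counter; safe from concurrent
// Parallel.ForEach iterations without any lock.
Interlocked.Increment(ref recursionCount);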
The "locked" portion of the loop will end up running serially. If the RecursiveBinUpdate function is the bulk of the work, there may be some gain, but it would be better if you could figure out how to handle the lock generation in advance.
When it comes to locks, there's no difference in the way PLINQ/TPL threads have to wait to gain access. So, in your case, it only makes the loop not parallel in those areas that you're locking and any work outside those locks is still going to execute in parallel (i.e. all the work in RecursiveBinUpdate).
Bottom line, I see nothing substantially wrong with what you're doing here.

Is Interlocked.CompareExchange really faster than a simple lock?

I came across a ConcurrentDictionary implementation for .NET 3.5 (I'm sorry, I can't find the link right now) that uses this approach for locking:
var current = Thread.CurrentThread.ManagedThreadId;
while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
// PROCESS SOMETHING HERE
if (current != Interlocked.Exchange(ref owner, 0))
    throw new UnauthorizedAccessException("Thread had access to cache even though it shouldn't have.");
Instead of the traditional lock:
lock (lockObject)
{
    // PROCESS SOMETHING HERE
}
The question is: Is there any real reason for doing this? Is it faster or have some hidden benefit?
PS: I know there's a ConcurrentDictionary in some later versions of .NET, but I can't use it in a legacy project.
Edit:
In my specific case, what I'm doing is just manipulating an internal Dictionary class in such a way that it's thread safe.
Example:
public bool RemoveItem(TKey key)
{
    // open lock
    var current = Thread.CurrentThread.ManagedThreadId;
    while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }

    // real processing starts here (entries is a regular Dictionary)
    var found = entries.Remove(key);

    // verify lock
    if (current != Interlocked.Exchange(ref owner, 0))
        throw new UnauthorizedAccessException("Thread had access to cache even though it shouldn't have.");

    return found;
}
As #doctorlove suggested, this is the code: https://github.com/miensol/SimpleConfigSections/blob/master/SimpleConfigSections/Cache.cs
There is no definitive answer to your question. I would answer: it depends.
What the code you've provided is doing is:
wait for an object to be in a known state (threadId == 0 == no current work)
do work
set back the known state to the object
another thread now can do work too, because it can go from step 1 to step 2
As you've noted, you have a loop in the code that actually does the "wait" step. You don't block the thread until you can access your critical section; you just burn CPU instead. Try replacing your processing (in your case, a call to Remove) with Thread.Sleep(2000), and you'll see the other "waiting" thread consuming all of one of your CPUs for 2 s in that loop.
Which means that which one is better depends on several factors. For example: how many concurrent accesses are there? How long does the operation take to complete? How many CPUs do you have?
I would use lock instead of Interlocked, because it's way easier to read and maintain. The exception would be a case where you've got a piece of code called millions of times, and a particular use case where you're sure Interlocked is faster.
So you'll have to measure both approaches yourself. If you don't have time for this, then you probably don't need to worry about performance, and you should use lock.
Your CompareExchange sample code doesn't release the lock if an exception is thrown by "PROCESS SOMETHING HERE".
For this reason as well as the simpler, more readable code, I would prefer the lock statement.
You could rectify the problem with a try/finally, but this makes the code even uglier.
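A sketch of that try/finally rectification, keeping the question's spin-acquire but releasing unconditionally:
var current = Thread.CurrentThread.ManagedThreadId;
while (Interlocked.CompareExchange(ref owner, current, 0) != current) { }
try
{
    // PROCESS SOMETHING HERE
}
finally
{
    Interlocked.Exchange(ref owner, 0); // released even if the body throws
}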
The linked ConcurrentDictionary implementation has a bug: it will fail to release the lock if the caller passes a null key, potentially leaving other threads spinning indefinitely.
As for efficiency, your CompareExchange version is essentially a spinlock, which can be efficient if threads are only likely to be blocked for short periods. But inserting into a managed dictionary can take a relatively long time, since it may be necessary to resize the dictionary. Therefore, IMHO, this isn't a good candidate for a spinlock, which can be wasteful, especially on a single-processor system.
A little bit late... I have read your sample, but in short:
Fastest to slowest MT synchronization:
Interlocked.* => This is an atomic CPU instruction. Can't be beaten if it is sufficient for your need.
SpinLock => Uses Interlocked behind the scenes and is really fast, but it burns CPU while waiting. Do not use it for code that waits a long time (it is usually used to prevent thread switching around a lock that protects a quick action). If you often have to wait more than one thread quantum, I would suggest going with lock.
Lock => The slowest, but easier to use and read than SpinLock. The instruction itself is very fast, but if it can't acquire the lock, it will relinquish the CPU. Behind the scenes, it will do a WaitForSingleObject on a kernel object (CriticalSection), and then Windows will give CPU time to the thread only when the lock is freed by the thread that acquired it.
Have fun with MT!
The docs for the Interlocked class tell us it "Provides atomic operations for variables that are shared by multiple threads."
The theory is that an atomic operation can be faster than a lock. Albahari gives some further details on interlocked operations, stating that they are faster.
Note that Interlocked provides a "smaller" interface than lock - see a previous question here.
Yes.
The Interlocked class offers atomic operations, which means they do not block other code like a lock does, because they don't really need to.
When you lock a block of code, you want to make sure no 2 threads are in it at the same time; that means that while a thread is inside, all other threads wait to get in, which uses resources (CPU time and idle threads).
The atomic operations, on the other hand, do not need to block other atomic operations, because they are atomic. It's conceptually a single CPU operation; the next ones just go in after the previous, and you're not wasting threads on waiting. (By the way, that's why it's limited to very basic operations like Increment, Exchange, etc.)
I think a lock (which is a Monitor underneath) uses Interlocked to know if the lock is already taken, but it can't make the actions inside it atomic.
In most cases, though, the difference is not critical. But you need to verify that for your specific case.
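To make the difference concrete, a minimal sketch of the two ways to increment a shared counter:
private int counter;
private readonly object gate = new object();

public void IncrementWithLock()
{
    // Mutual exclusion around the read-modify-write; other threads block.
    lock (gate) { counter++; }
}

public void IncrementAtomically()
{
    // Single atomic CPU operation; no blocking, no lock object needed.
    Interlocked.Increment(ref counter);
}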
Interlocked is faster - as already explained in other comments - and you can also define how the wait is implemented (e.g. SpinWait.SpinOnce, SpinWait.SpinUntil, Thread.Sleep, etc.) once the lock fails the first time. Also, if the code within the lock is expected to run without the possibility of a crash (custom code/delegates/resource resolution or allocation/events/unexpected code executing during the lock), and you are not going to catch the exception to allow your software to continue execution, the try/finally is also skipped, so there is some extra speed there. lock(something) makes sure that if you catch the exception from outside, that something gets unlocked, just like using in C# makes sure that when execution exits the block, for whatever reason, the "used" disposable object gets disposed.
One important difference between lock and Interlocked.CompareExchange is how they can be used in async environments.
Async operations cannot be awaited inside a lock, as they can easily lead to deadlocks if the thread that continues execution after the await is not the same one that originally acquired the lock.
This is not a problem with Interlocked, however, because nothing is "acquired" by a thread.
Another solution for asynchronous code, which may provide better readability than Interlocked, is a semaphore, as described in this blog post:
https://blog.cdemi.io/async-waiting-inside-c-sharp-locks/
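A minimal sketch of that approach: a SemaphoreSlim(1, 1) acts as an async-compatible mutex (SomeAsyncOperation here is a hypothetical placeholder for the work being protected).
private static readonly SemaphoreSlim _mutex = new SemaphoreSlim(1, 1);

public async Task DoWorkAsync()
{
    await _mutex.WaitAsync(); // awaiting here is legal, unlike inside lock
    try
    {
        await SomeAsyncOperation(); // hypothetical async call
    }
    finally
    {
        _mutex.Release();
    }
}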

Can the same object lock be checked at the same time by concurrent threads?

How come if I have a statement like this:
private int sharedValue = 0;

public void SomeMethodOne()
{
    lock (this) { sharedValue++; }
}

public void SomeMethodTwo()
{
    lock (this) { sharedValue--; }
}
So for a thread to get into a lock, it must first check whether another thread is operating on it. If none is, it can enter, and it then has to write something to memory; this surely cannot be atomic, as it needs both to read and to write.
So how come it's impossible for one thread to be reading the lock while the other is writing its ownership to it?
To simplify: why can't two threads both get into a lock at the same time?
It looks like you are basically asking how the lock works. How can the lock maintain internal state in an atomic manner without already having the lock built? It seems like a chicken-and-egg problem at first, does it not?
The magic all happens because of a compare-and-swap (CAS) operation. The CAS operation is a hardware level instruction that does 2 important things.
It generates a memory barrier so that instruction reordering is constrained.
It compares the contents of a memory address with another value and if they are equal then the original value is replaced with a new value. It does all of this in an atomic manner.
At the most fundamental level this is how the trick is accomplished. It is not that all other threads are blocked from reading while another is writing. That is totally the wrong way to think about it. What actually happens is that all threads are acting as writers simultaneously. The strategy is more optimistic than it is pessimistic. Every thread is trying to acquire the lock by performing this special kind of write called a CAS. You actually have access to a CAS operation in .NET via the Interlocked.CompareExchange (ICX) method. Every synchronization primitive can be built from this single operation.
If I were going to write a Monitor-like class (that is what the lock keyword uses behind the scenes) from scratch entirely in C# I could do it using the Interlocked.CompareExchange method. Here is an overly simplified implementation. Please keep in mind that this is most certainly not how the .NET Framework does it.1 The reason I present the code below is to show you how it could be done in pure C# code without the need for CLR magic behind the scenes and because it might get you thinking about how Microsoft could have implemented it.
public class SimpleMonitor
{
    private int m_LockState = 0;

    public void Enter()
    {
        int iterations = 0;
        while (!TryEnter())
        {
            if (iterations < 10) Thread.SpinWait(4 << iterations);
            else if (iterations % 20 == 0) Thread.Sleep(1);
            else if (iterations % 5 == 0) Thread.Sleep(0);
            else Thread.Yield();
            iterations++;
        }
    }

    public void Exit()
    {
        if (!TryExit())
        {
            throw new SynchronizationLockException();
        }
    }

    public bool TryEnter()
    {
        return Interlocked.CompareExchange(ref m_LockState, 1, 0) == 0;
    }

    public bool TryExit()
    {
        return Interlocked.CompareExchange(ref m_LockState, 0, 1) == 1;
    }
}
This implementation demonstrates a couple of important things.
It shows how the ICX operation is used to atomically read and write the lock state.
It shows how the waiting might occur.
Notice how I used Thread.SpinWait, Thread.Sleep(0), Thread.Sleep(1) and Thread.Yield while the lock is waiting to be acquired. The waiting strategy is overly simplified, but it does approximate a real-life algorithm already implemented in the BCL. I intentionally kept the code simple in the Enter method above to make it easier to spot the crucial bits. This is not how I would have normally implemented this, but I am hoping it drives home the salient points.
Also note that my SimpleMonitor above has a lot of problems. Here are just a few.
It does not handle nested locking.
It does not provide Wait or Pulse methods like the real Monitor class. They are really hard to do right.
1The CLR will actually use a special block of memory that exists on each reference type. This block of memory is referred to as the "sync block". The Monitor will manipulate bits in this block of memory to acquire and release the lock. This action may require a kernel event object. You can read more about it on Joe Duffy's blog.
lock in C# is syntactic sugar that uses a Monitor behind the scenes to do the actual locking.
You can read more about Monitor here: http://msdn.microsoft.com/en-us/library/system.threading.monitor.aspx. The Enter method of the Monitor ensures that only one thread can enter the critical section at a time:
Acquires a lock for an object. This action also marks the beginning of a critical section. No other thread can enter the critical section unless it is executing the instructions in the critical section using a different locked object.
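For reference, since C# 4 the compiler expands lock (obj) { /* body */ } into roughly this pattern:
bool lockTaken = false;
try
{
    Monitor.Enter(obj, ref lockTaken); // sets lockTaken atomically with acquiring
    // body
}
finally
{
    if (lockTaken) Monitor.Exit(obj); // released even if the body throws
}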
BTW, you should avoid locking on this (lock(this)). You should use a private variable on a class (static or non-static) to protect the critical section. You can read more in the same link provided above but the reason is:
When selecting an object on which to synchronize, you should lock only on private or internal objects. Locking on external objects might result in deadlocks, because unrelated code could choose the same objects to lock on for different purposes.
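Applied to the code in the question, that means something like this (the field name is illustrative):
// A private object that no external code can lock on.
private readonly object m_SyncRoot = new object();

public void SomeMethodOne()
{
    lock (m_SyncRoot) { sharedValue++; }
}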

Double locking specific case C#

I am struggling to choose between these 2 approaches, and many of the answers here favor one or the other.
I need guidance to choose the best one for my situation.
lock (lockObject)
lock (lockObject2)
{
    // two critical portions of code
}
versus:
lock (lockObject)
{
    // first critical portion for lockObject

    lock (lockObject2)
    {
        // critical portion for lockObject2 and lockObject
    }
}
The second example is marked as a bad practice by Coverity, and I want to switch to the first if it is OK to do so.
Which of the 2 is the best (and by best I mean code quality and fewer problems in the long run)? And why?
Edit 1: The first lock is only used for this case in particular.
"Best" is subjective and depends on context. One problem with either is that you can risk deadlocks if some code uses the same locks in a different order. If you have nested locks, then personally at a minimum I'd be using a lock-with-timeout (and raising an exception) - an exception is better than a deadlock. The advantage of taking out both locks immediately is that you know you can get them, before you start doing the work. The advantage of not doing that is that you reduce the time the lock on lockObject2 is held.
Personally, I would be looking for ways to make it:
lock (lockObject)
{
    // critical portion for lockObject
}

lock (lockObject2)
{
    // critical portion for lockObject2
}
This has the advantages of both, without the disadvantages of either - if you can restructure the code to do it.
Depends on the case you are facing:
If it is possible for the first lock to be released without the second lock ever being taken, this is a possible speedup for your application, because the second lock won't be taken without being used.
If both locks are always used, I would prefer the first case for readability.
Edit: I think it's not possible to tell without more information.
To hold the locks as briefly as possible, you need to lock as late as possible. Anyway, there may be cases where you want to take both locks at the same time.
Edit: I said the wrong thing.
The first will take the first lock, then take the second lock for the portion of code, and then unlock both automatically.
The second example will take the first lock, then take the second, and then, once both portions of code have ended, both are unlocked.
Personally, I would prefer the second option, since it will automatically unlock both locks. But as someone else commented, beware of deadlocks. These can happen if another thread takes the locks in the reverse order.

Best Practices Concerning Method Locking

I have a method whose access must be synchronized, allowing only one thread at a time to go through it. Here is my current implementation:
private Boolean m_NoNeedToProceed;
private Object m_SynchronizationObject = new Object();

public void MyMethod()
{
    lock (m_SynchronizationObject)
    {
        if (m_NoNeedToProceed)
            return;
Now I was thinking about changing it a little bit like so:
private Boolean m_NoNeedToProceed;
private Object m_SynchronizationObject = new Object();

public void MyMethod()
{
    if (m_NoNeedToProceed)
        return;

    lock (m_SynchronizationObject)
    {
Shouldn't it be better to do a quick return before locking it so that calling threads can proceed without waiting for the previous one to complete the method call?
Shouldn't it be better to do a quick return before locking it...
No. A lock is not just a mutual-exclusion mechanism, it's also a memory barrier1. Without a lock, you could introduce a data race if any of the concurrent threads tries to modify the variable2.
BTW, locks have a good performance when there is no contention, so you wouldn't be gaining much performance anyway. As always, refrain from making assumptions about performance, especially this "close to the metal". If in doubt, measure!
...so that calling threads can proceed without waiting for the previous one to complete the method call?
This just means you are holding the lock for longer than necessary. Release the lock as soon as the shared memory no longer needs protection (which might be sooner than the method exit), and you won't need to try to artificially circumvent it.
1 I.e. triggers a cache coherency mechanism so all CPU cores see the "same" memory.
2 For example, one thread writes to the variable, but that change lingers in one core's write buffer for some time, so other threads on other cores don't see it immediately.
Yes, as long as m_NoNeedToProceed doesn't have any race conditions associated with it.
If the method takes a long time to run, and some threads do not need to actually access the critical sections of the method, then it would be best to let them return early without taking the lock.
Yes, it's better to do that before you lock.
Make m_NoNeedToProceed volatile.
Just a disclaimer: volatile doesn't make it thread safe. It just introduces a memory barrier so the check sees a value that may have been changed on another processor.
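Putting the two answers together, a sketch of the early return combined with a re-check under the lock (the classic double-checked pattern):
private volatile Boolean m_NoNeedToProceed;
private Object m_SynchronizationObject = new Object();

public void MyMethod()
{
    if (m_NoNeedToProceed) // cheap early exit; volatile keeps the read fresh
        return;

    lock (m_SynchronizationObject)
    {
        if (m_NoNeedToProceed) // re-check: another thread may have just set it
            return;

        // ... protected work ...
    }
}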
