I'm having thread contention in an OLTP app. While reviewing the code involved, I found the following:
lock (_pendingTransactions)
{
    transaction.EndPointRequest.Request.Key = (string)MessageComparer.GenerateKey(transaction.EndPointRequest.Request);
    if (!_pendingTransactions.ContainsKey(transaction.EndPointRequest.Request.Key))
    {
        _pendingTransactions.Add(transaction.EndPointRequest.Request.Key, transaction);
        return true;
    }
    else
    {
        return false;
    }
}
As you can see in the snippet, the lock is taken on an object that is modified within the lock block. Is there anything wrong with that? Has anyone had problems doing something like this?
Using locking in this way is often discouraged, with the recommendation being to use a dedicated lock field (class member variable). A dedicated lock field is of type Object and usually looks like this:
private object _pendingTransactionLock = new object();
If the object itself has some threading awareness, this lock variable might belong in _pendingTransactions' implementation class. Otherwise, it might belong alongside _pendingTransactions in the field's declaring class.
You don't say what type _pendingTransactions is. If it is a built-in collection class that provides a SyncRoot property, that might be a good choice to lock on.
See Jon Skeet's Choosing What to Lock On.
Generally speaking, one will take a lock on an object specifically because one is going to modify (or read) it, so there's nothing inherently wrong with this.
The key generation can probably be moved outside the lock block to reduce the time the lock is held. Other than that, this is an almost canonical example of a lock protecting a collection: acquire the lock, check whether the key exists, add it if not, release the lock.
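A minimal sketch of that refactoring, assuming MessageComparer.GenerateKey has no side effects that need to be inside the critical section:

// Compute the key outside the critical section so the lock is held
// only for the dictionary check-and-add.
transaction.EndPointRequest.Request.Key = (string)MessageComparer.GenerateKey(transaction.EndPointRequest.Request);

lock (_pendingTransactions)
{
    if (_pendingTransactions.ContainsKey(transaction.EndPointRequest.Request.Key))
    {
        return false;
    }

    _pendingTransactions.Add(transaction.EndPointRequest.Request.Key, transaction);
    return true;
}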
Related
Assuming I have an object A containing
// ...
private List<double> someList = new List<double>();
// ...
public List<double> SomeList
{
get { lock (this) { return someList; } }
}
// ...
would it be thread-safe to perform operations on the list as in the code below, knowing that several operations could be executed simultaneously by different threads?
A.SomeList.Add(2.0);
or
A.SomeList.RemoveAt(0);
In other words, when is the lock released?
There is no thread safety here.
The lock is released as soon as the block it protects is finished, just before the property returns, so the calls to Add and RemoveAt are not protected by the lock.
The lock you've shown in the question isn't of much use.
To make list operations thread safe you need to implement your own Add/Remove/etc methods wrapping those of the list.
// The list itself stays private; every operation on it goes through a
// method that takes the same lock.
private readonly List<double> _list = new List<double>();

public void Add(double item)
{
    lock (_list)
    {
        _list.Add(item);
    }
}
Also, it's a good idea to hide the list itself from the consumers of your class, i.e. make the field private.
The lock is released when you exit the body of the lock statement. That means that your code is not thread-safe.
In other words, you can be sure that two threads won't be executing return someList on the same object at the same time. But it's certainly possible for one thread to execute Add() at the same time as another thread executes RemoveAt(), which is what makes it not thread-safe.
The lock is released when the code inside the lock is finished executing.
Also, locking on this only affects the current instance of the object.
Ok, just for the hell of it.
There IS a way to make your object threadsafe, by using architectures that are already threadsafe.
For example, you could make your object a single threaded COM object. The COM object will be thread safe, but you'll pay with performance (the price of being lazy and not managing your own locks).
Create a COM Object in C#
...others have said this already, but just to formalize the problem a bit...
First, lock (this) {...} defines a 'scope', much like using () {} - the lock (on this, in this case) is held only for the code inside the block. And that's a 'good thing' :) - if you couldn't rely on that, the whole locks/synchronization concept would be pretty much useless,
lock (this) { return something; } is something of an oxymoron - it returns a reference whose lock is released the very moment it returns,
And the problem, I think, is a misunderstanding of how it works. lock() is not 'persisted' in the state of the object, so it isn't something you could return. Take a look at how it's implemented: How does lock work exactly? - the answer explains it. It's more of a 'critical section' - i.e. you protect certain parts of the code that use the variable, not the variable itself. Any type of synchronization requires 'synchronization objects' to hold the locks, which are released once the lock is no longer needed. Take a look at this post https://stackoverflow.com/a/251668/417747 - Esteban formulated it very well,
"Finally, there is the common misconception that lock(this) actually modifies the object passed as a parameter, and in some way makes it read-only or inaccessible. This is false. The object passed as a parameter to lock merely serves as a key" (this is a quote)
You either (usually) lock 'private code' inside a class method or property - to synchronize access to something you're doing inside - where the thing being accessed is a private member (again, usually), so that nobody else can access it without going through your synchronized code.
Or you make a thread-safe 'structure' - like a list - which is already 'synchronized' inside, so that you can access it in a thread-safe manner. But there is no such thing (or it's almost never used) as passing locks around, or having one place lock the variable while another part of the code unlocks it (in that case it's usually some type of EventWaitHandle that's used to synchronize things between 'distant' pieces of code, where one fires off and another waits, etc.).
In your case, I think the choice is to go with the 'synchronized structure', i.e. a list that handles the locking internally.
I have been learning about locking on threads, and I have not found an explanation of why creating a plain System.Object, locking it, and carrying out whatever actions are required inside the lock provides thread safety.
Example
object obj = new object();
lock (obj) {
    //code here
}
At first I thought that it was just being used as a placeholder in examples and was meant to be swapped out with the type you are dealing with. But in examples such as the one Dennis Phillips points out, there doesn't appear to be anything different from actually using an instance of Object.
So taking an example of needing to update a private dictionary, what does locking an instance of System.Object do to provide thread safety, as opposed to actually locking the dictionary (I know locking the dictionary in this case could cause synchronization issues)?
What if the dictionary was public?
//what if this was public?
private Dictionary<string, string> someDict = new Dictionary<string, string>();
var obj = new Object();
lock (obj) {
//do something with the dictionary
}
The lock itself provides no safety whatsoever for the Dictionary<TKey, TValue> type. What a lock does is essentially
For every use of lock(objInstance) only one thread will ever be in the body of the lock statement for a given object (objInstance)
If every use of a given Dictionary<TKey, TValue> instance occurs inside a lock, and every one of those locks uses the same object, then you know that only one thread at a time is ever accessing or modifying the dictionary. This is critical to preventing multiple threads from reading and writing to it at the same time and corrupting its internal state.
There is one giant problem with this approach though: you have to make sure every use of the dictionary occurs inside a lock and uses the same object. If you forget even one, you've created a potential race condition; there will be no compiler warnings, and the bug will likely remain undiscovered for some time.
In the second sample you showed, you're using a local object instance (var indicates a method local) as a lock parameter for an object field. This is almost certainly the wrong thing to do. The local lives only for the lifetime of the method, so two calls to the method will lock on different locals, and both callers will be able to enter the lock simultaneously.
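A minimal sketch of the difference, using hypothetical method and field names (AddBroken, AddSafe, dictLock are illustrative, not from the question); the first method provides no protection at all, the second does:

private readonly Dictionary<string, string> someDict = new Dictionary<string, string>();
private readonly object dictLock = new object();

public void AddBroken(string key, string value)
{
    var obj = new Object();   // new lock object per call: every caller gets its own lock,
    lock (obj)                // so two threads can be inside this block at the same time
    {
        someDict[key] = value;
    }
}

public void AddSafe(string key, string value)
{
    lock (dictLock)           // one shared lock object: only one thread at a time
    {
        someDict[key] = value;
    }
}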
It used to be common practice to lock on the shared data itself:
private Dictionary<string, string> someDict = new Dictionary<string, string>();
lock (someDict)
{
    //do something with the dictionary
}
But the (somewhat theoretical) objection is that other code, outside of your control, could also lock on someDict and then you might have a deadlock.
So it is recommended to use a (very) private object, declared in 1-to-1 correspondence with the data, as a stand-in for the lock. As long as all code that accesses the dictionary locks on obj, thread-safety is guaranteed.
// the following 2 lines belong together!!
private Dictionary<string, string> someDict = new Dictionary<string, string>();
private object obj = new Object();
// multiple code segments like this
lock (obj)
{
//do something with the dictionary
}
So the purpose of obj is to act as a proxy for the dictionary, and since its Type doesn't matter we use the simplest type, System.Object.
What if the dictionary was public?
Then all bets are off: any code could access the Dictionary, and code outside the containing class isn't even able to lock on the guard object. And before you start looking for fixes, that simply is not a sustainable pattern. Use a ConcurrentDictionary or keep a normal one private.
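For illustration, a minimal sketch of the ConcurrentDictionary approach (TryRegister is a hypothetical method name); TryAdd performs the check and the add atomically, so no external lock is needed for this check-then-add pattern:

using System.Collections.Concurrent;

private readonly ConcurrentDictionary<string, string> someDict =
    new ConcurrentDictionary<string, string>();

public bool TryRegister(string key, string value)
{
    // Returns false if the key is already present; the check and the add
    // happen atomically inside the dictionary, so no lock is required here.
    return someDict.TryAdd(key, value);
}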
The object which is used for locking does not have to be related to the objects that are modified inside the lock. It could be anything, but it should be private and not a string, as public objects could be locked by external code and, because of interning, two locks could end up using the same string by mistake.
So far as I understand it, the use of a plain object is simply to have something to lock on (an internally lockable object). To better explain this: say you have two methods within a class, both of which access the Dictionary but may be running on different threads. To prevent both methods from modifying the Dictionary at the same time (and potentially corrupting it), you can lock on some object to control the flow. This is better illustrated by the following example:
private readonly object mLock = new object();

public void FirstMethod()
{
    while (/* Running some operations */)
    {
        // Get the lock
        lock (mLock)
        {
            // Add to the dictionary
            mSomeDictionary.Add("Key", "Value");
        }
    }
}

public void SecondMethod()
{
    while (/* Running some operation */)
    {
        // Get the lock
        lock (mLock)
        {
            // Remove from dictionary
            mSomeDictionary.Remove("Key");
        }
    }
}
The use of the lock(...) statement in both methods on the same object prevents the two methods from accessing the resource at the same time.
The important rules for the object you lock on are:
It must be an object visible only to the code that needs to lock on it. This avoids other code also locking on it.
This rules out strings that could be interned, and Type objects.
This rules out this in most cases, and the exceptions are too few and offer too little benefit, so just don't use this.
Note also that some code internal to the framework locks on Types and on this, so while "it's okay as long as nobody else does it" is true, it's already too late.
It must be static to protect static operations; it may be an instance member to protect instance operations (including those internal to an instance that is held in a static).
You don't want to lock on a value type. If you really wanted to, you could lock on a particular boxing of it, but I can't think of anything this would gain beyond proving that it's technically possible - it will still lead to code that is less clear about just what locks on what.
You don't want to lock on a field that you may change while the lock is held, as you'll no longer hold the lock on what you appear to hold it on (it's just about plausible that there's a practical use for this effect, but there will be a mismatch between what the code appears to do at first read and what it really does, which is never good).
The same object must be used to lock on all operations that may conflict with each other.
While you can have correctness with overly broad locks, you can get better performance with finer-grained ones. E.g. if you had a lock protecting 6 operations and realised that 2 of those operations couldn't interfere with the other 4, you could switch to 2 lock objects and gain from reduced contention (or crash and burn if you were wrong in that analysis!). A sketch of this kind of split follows at the end of this answer.
The first point rules out locking on anything that is either visible or which could be made visible (e.g. a private instance that is returned by a protected or public member should be considered public as far as this analysis goes, anything captured by a delegate could end up elsewhere, and so on).
The last two points can mean that there's no obvious "type you are dealing with", as you put it, because locks don't protect objects, they protect operations done on objects, and you may either have more than one object affected, or the same object affected by more than one group of operations that must be locked.
Hence it can be good practice to have an object that exists purely to lock on. Since it does nothing else, it can't get mixed up with other semantics or written over when you don't expect. And since it does nothing else, it may as well be the lightest reference type that exists in .NET: System.Object.
Personally, I do prefer to lock on an object related to an operation when it clearly fits the bill of the "type you are dealing with" and none of the other concerns apply, as it seems to me to be quite self-documenting, but to others the risk of doing it wrong outweighs that benefit.
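As a rough illustration of the finer-grained point above (hypothetical names, not from the question): two independent pieces of state, each guarded by its own lock object, so operations on one never block operations on the other:

private readonly object _statsLock = new object();
private readonly object _cacheLock = new object();
private int _requestCount;
private readonly Dictionary<string, string> _cache = new Dictionary<string, string>();

public void RecordRequest()
{
    lock (_statsLock)          // protects only the counter
    {
        _requestCount++;
    }
}

public void CacheValue(string key, string value)
{
    lock (_cacheLock)          // protects only the cache; doesn't block RecordRequest
    {
        _cache[key] = value;
    }
}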
I read all the documentation about thread-safe types and the "lock" statement, but I'm still not getting it 100%.
When exactly do I need to use the "lock" statement? How does it relate to (non-)thread-safe types? Thank you.
Imagine an instance of a class with a global variable in it. Imagine two threads call a method on that object at exactly the same time, and that method updates the global variable inside.
The likelihood is that the value in the variable will get corrupted. Different languages and compilers/interpreters will deal with this in different ways (or not at all...), but the point is that you get "undesired" and "unpredictable" results.
Now imagine that the method obtains a "lock" on the variable before attempting to read from or write to it. The first thread to call the method will get a "lock" on the variable, the second thread to call the method will have to wait until the lock is released by the first thread. While you still have a race condition (i.e. the second thread might overwrite the value from the first) at least you have predictable results because no two threads (that are unaware of each other) can modify the value at the same time.
You use the lock statement to obtain that lock on the variable. Typically you'd define a separate object variable and use that for the lock object:
public class MyThreadSafeClass
{
    private readonly object lockObject = new object();
    private string mySharedString;

    public void ThreadSafeMethod(string newValue)
    {
        lock (lockObject)
        {
            // Once one thread has got inside this lock statement, any others will have to wait outside for their turn...
            mySharedString = newValue;
        }
    }
}
A type is deemed "thread-safe" if it applies the principle that no corruption will occur if shared data is accessed by multiple threads at the same time.
Beware the difference between "immutable" and "thread-safe". Thread-safe says that you have coded for the scenario and won't get corruption if two threads access shared state at the same time, whereas immutability is simply saying you return a new object rather than modifying it. Immutable objects are thread-safe, but not all thread-safe objects are immutable.
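As a rough sketch of that distinction (the Price type below is hypothetical, not from the question): an immutable type never changes after construction, so there is nothing for two threads to corrupt:

public sealed class Price
{
    private readonly decimal _amount;
    private readonly string _currency;

    public Price(decimal amount, string currency)
    {
        _amount = amount;
        _currency = currency;
    }

    public decimal Amount { get { return _amount; } }
    public string Currency { get { return _currency; } }

    // "Modification" returns a new instance instead of mutating this one.
    public Price WithAmount(decimal newAmount)
    {
        return new Price(newAmount, _currency);
    }
}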
Thread-safe code means code that can be accessed by many threads and still operate correctly.
In C#, this normally requires some sort of synchronization mechanism. A simple one is the lock statement (which is behind the scenes a call to Monitor.Enter). A code block that is surrounded by a lock block can only be accessed by one thread at a time.
Any use of a type that is not thread safe requires you to manage synchronization yourself.
A good resource to learn about threading in C# is the free eBook by Joe Albahari, found here.
http://en.wikipedia.org/wiki/Thread_safety
Is it necessary to protect access to a single variable of a reference type in a multi-threaded application? I currently lock that variable like this:
private readonly object _lock = new object();
private MyType _value;
public MyType Value
{
get { lock (_lock) return _value; }
set { lock (_lock) _value = value; }
}
But I'm wondering if this is really necessary? Isn't assignment of a value to a field atomic? Can anything go wrong if I don't lock in this case?
P.S.: MyType is an immutable class: all the fields are set in the constructor and don't change. To change something, a new instance is created and assigned to the variable above.
Being atomic is rarely enough.
I generally want to get the latest value for a variable, rather than potentially see a stale one - so some sort of memory barrier is required, both for reading and writing. A lock is a simple way to get this right, at the cost of potentially losing some performance due to contention.
I used to believe that making the variable volatile would be enough in this situation. I'm no longer convinced this is the case. Basically I now try to avoid writing lock-free code when shared data is involved, unless I'm able to use building blocks written by people who really understand these things (e.g. Joe Duffy).
There is the volatile keyword for this. Whether it's safe without it depends on the scenario. But the compiler can do funny stuff, such as reordering operations, so even a read or write of a single field may be unsafe.
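For reference, a minimal sketch of what the volatile alternative would look like for the property in the question (Holder is a hypothetical class name; whether this is sufficient depends on the scenario, as this answer and the one above note):

public class Holder
{
    // volatile constrains reordering for reads and writes of the reference
    // itself; it does not make compound (read-modify-write) operations atomic.
    private volatile MyType _value;

    public MyType Value
    {
        get { return _value; }
        set { _value = value; }
    }
}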
It can be an issue. It's not just the assignment itself you have to be concerned with. Due to caching, concurrent threads might see an old version of the object if you don't lock. So whether a lock is necessary will depend on precisely how you use it, and you don't show that.
Here's a free sample chapter of "Concurrent Programming on Windows" which explains this issue in detail.
It all depends on whether the property will be accessed by multiple threads. Also, reads and writes of some variables are atomic operations; for a purely atomic operation there is no need to use a lock.
In your case, since the type is immutable, I think a lock is not necessary.
I'm using C# & .NET 3.5. What is the difference between OptionA and OptionB?
class MyClass
{
    private object m_Locker = new object();
    private Dictionary<string, object> m_Hash = new Dictionary<string, object>();

    public void OptionA()
    {
        lock (m_Locker)
        {
            // Do something with the dictionary
        }
    }

    public void OptionB()
    {
        lock (m_Hash)
        {
            // Do something with the dictionary
        }
    }
}
I'm starting to dabble in threading (primarily for creating a cache for a multi-threaded app, NOT using the HttpCache class, since it's not attached to a web site), and I see the OptionA syntax in a lot of the examples online, but I don't understand what reason, if any, there is to do that over OptionB.
Option B uses the object to be protected to create a critical section. In some cases, this more clearly communicates the intent. If used consistently, it guarantees only one critical section for the protected object will be active at a time:
lock (m_Hash)
{
// Across all threads, I can be in one and only one of these two blocks
// Do something with the dictionary
}
lock (m_Hash)
{
// Across all threads, I can be in one and only one of these two blocks
// Do something with the dictionary
}
Option A is less restrictive. It uses a secondary object to create a critical section for the object to be protected. If multiple secondary objects are used, it's possible to have more than one critical section for the protected object active at a time.
private object m_LockerA = new object();
private object m_LockerB = new object();
lock (m_LockerA)
{
// It's possible this block is active in one thread
// while the block below is active in another
// Do something with the dictionary
}
lock (m_LockerB)
{
// It's possible this block is active in one thread
// while the block above is active in another
// Do something with the dictionary
}
Option A is equivalent to Option B if you use only one secondary object. As far as reading the code goes, Option B's intent is clearer. If you're protecting more than one object, Option B isn't really an option.
It's important to understand that lock(m_Hash) does NOT prevent other code from using the hash. It only prevents other code from running that is also using m_Hash as its locking object.
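For example (SafeAdd and UnsafeAdd are hypothetical method names), nothing stops code like this from touching the hash while another thread holds the lock:

public void SafeAdd(string key, object value)
{
    lock (m_Hash)
    {
        m_Hash.Add(key, value);
    }
}

public void UnsafeAdd(string key, object value)
{
    // No lock taken: this can run concurrently with SafeAdd on another thread
    // and corrupt the dictionary, even though SafeAdd "locked" m_Hash.
    m_Hash.Add(key, value);
}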
One reason to use option A is because classes are likely to have private variables that you will use inside the lock statement. It is much easier to just use one object which you use to lock access to all of them instead of trying to use finer grain locks to lock access to just the members you will need. If you try to go with the finer grained method you will probably have to take multiple locks in some situations and then you need to make sure you are always taking them in the same order to avoid deadlocks.
Another reason to use option A is because it is possible that the reference to m_Hash will be accessible outside your class. Perhaps you have a public property which supplies access to it, or maybe you declare it as protected and derived classes can use it. In either case once external code has a reference to it, it is possible that the external code will use it for a lock. This also opens up the possibility of deadlocks since you have no way to control or know what order the lock will be taken in.
Actually, it is not a good idea to lock on an object if you are also using its members.
Jeffrey Richter wrote in his book "CLR via C#" that there is no guarantee that the class of the object you are using for synchronization won't use lock(this) in its own implementation (interestingly, that was the synchronization approach recommended by Microsoft for some time... until they found it was a mistake), so it is always a good idea to use a special, separate object for synchronization. So, as you can see, OptionB will not give you a guarantee of deadlock safety.
So, OptionA is much safer than OptionB.
It's not what you're "locking"; it's the code contained between the lock { ... } braces that's important and that you're preventing from being executed concurrently.
If one thread takes out a lock() on any object, it prevents other threads from obtaining a lock on the same object, and hence prevents the second thread from executing the code between the braces.
So that's why most people just create a junk object to lock on; it prevents other threads from obtaining a lock on that same junk object.
I think the scope of the variable you "pass" in will determine the scope of the lock.
i.e. a lock on an instance variable applies per instance of the class, whereas a lock on a static variable applies across the whole AppDomain.
Looking at the implementation of the collections (using Reflector), the pattern seems to follow that an instance variable called SyncRoot is declared and used for all locking operations in respect of the instance of the collection.
Well, it depends on what you want to lock (i.e. make thread-safe).
Normally I would choose OptionB to provide thread-safe access to m_Hash ONLY. Whereas I would use OptionA for locking a value type, which can't be used with lock directly, or when I have a group of objects that need locking together but I don't want to lock the whole instance by using lock(this).
Locking the object that you're using is simply a matter of convenience. An external lock object can make things simpler, and is also needed if the shared resource is private, like with a collection (in which case you use the ICollection.SyncRoot object).
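For illustration, a rough sketch of the SyncRoot idea; for the generic Dictionary, the property is exposed through the non-generic ICollection interface:

using System.Collections;

// All readers and writers lock on the same SyncRoot object that the
// collection itself exposes.
lock (((ICollection)m_Hash).SyncRoot)
{
    // Do something with the dictionary
}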
OptionA is the way to go here, as long as everywhere in your code that you access m_Hash you use m_Locker to lock on.
Now imagine this case: you lock on the object, and that object, in one of the functions you call, has a lock(this) section in its own code. In that case you can easily end up with a deadlock.