I'm currently implementing a thread-safe dictionary in C# that uses immutable AVL trees as buckets internally. The idea is to provide fast read access without a lock, because in my application context we add entries to this dictionary only at startup; afterwards, values are mostly read (but there are still a few writes).
I've structured my TryGetValue and GetOrAdd methods in the following way:
public sealed class FastReadThreadSafeDictionary<TKey, TValue> where TKey : IEquatable<TKey>
{
private readonly object _bucketContainerLock = new object();
private ImmutableBucketContainer<TKey, TValue> _bucketContainer;
public bool TryGetValue(TKey key, out TValue value)
{
var bucketContainer = _bucketContainer;
return bucketContainer.TryFind(key.GetHashCode(), key, out value);
}
public bool GetOrAdd(TKey key, Func<TValue> createValue, out TValue value)
{
createValue.MustNotBeNull(nameof(createValue));
var hashCode = key.GetHashCode();
lock (_bucketContainerLock)
{
ImmutableBucketContainer<TKey, TValue> newBucketContainer;
if (_bucketContainer.GetOrAdd(hashCode, key, createValue, out value, out newBucketContainer) == false)
return false;
_bucketContainer = newBucketContainer;
return true;
}
}
// Other members omitted for sake of brevity
}
As you can see, I don't use a lock in TryGetValue because reference assignment in .NET runtimes is an atomic operation by design. By copying the reference of the field _bucketContainer to a local variable, I'm sure I can safely access the instance because it is immutable. In GetOrAdd, I use a lock to access the private _bucketContainer so I can ensure that a value is not created twice (i.e. if two or more threads are trying to add a value, only one can actually create a new ImmutableBucketContainer with the added value because of the lock).
I use Microsoft Chess for testing concurrency and in one of my tests, MCUT (Microsoft Concurrency Unit Testing) reports a data race in GetOrAdd when I exchange the new bucket container with the old one:
[DataRaceTestMethod]
public void ReadWhileAdd()
{
var testTarget = new FastReadThreadSafeDictionary<int, object>();
var writeThread = new Thread(() =>
{
for (var i = 5; i < 10; i++)
{
object value;
testTarget.GetOrAdd(i, () => new object(), out value);
Thread.Sleep(0);
}
});
var readThread = new Thread(() =>
{
object value;
testTarget.TryGetValue(5, out value);
Thread.Sleep(0);
testTarget.TryGetValue(7, out value);
Thread.Sleep(10);
testTarget.TryGetValue(9, out value);
});
readThread.Start();
writeThread.Start();
readThread.Join();
writeThread.Join();
}
MCUT reports the following message:
23> Test result: DataRace
23> ReadWhileAdd() (Context=, TestType=MChess): [DataRace]Found data race at GetOrAdd:FastReadThreadSafeDictionary.cs(68)
which is the assignment _bucketContainer = newBucketContainer; in GetOrAdd.
My actual question is: why is the assignment _bucketContainer = newBucketContainer a race condition? Threads currently executing TryGetValue always make a copy of the _bucketContainer field and thus shouldn't be bothered with the update (except that the searched value might be added to the _bucketContainer just after the copy takes place, but this doesn't matter with the data race). And in GetOrAdd, there is an explicit lock to prevent concurrent access. Is this a bug in Chess or am I missing something very obvious?
As mentioned by #CodesInChaos in the comments of the question, I missed a volatile read in TryGetValue. The method now looks like this:
public bool TryGetValue(TKey key, out TValue value)
{
var bucketContainer = Volatile.Read(ref _bucketContainer);
return bucketContainer.TryFind(key.GetHashCode(), key, out value);
}
This volatile read is necessary because the compiler, the JIT, and the CPU are free to cache values and reorder reads and writes, which can lead to a data race. The guarantees also depend on the target CPU architecture: x86 and x64 have a strong memory model in which ordinary reads already have acquire semantics, but that is not true of weaker architectures such as ARM or Itanium. That's why the read access has to be synchronized with other threads using a memory barrier, which Volatile.Read performs internally (note that lock statements also use memory barriers internally). Joseph Albahari wrote a comprehensive tutorial on this here: http://www.albahari.com/threading/part4.aspx
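The whole pattern (lock-free volatile read, locked copy-and-publish write) can be reduced to a minimal sketch. The `Snapshot` type below is a hypothetical stand-in for the immutable bucket container, not the actual implementation from the question:

```csharp
using System;
using System.Threading;

// Sketch of the lock-free-read / locked-write publication pattern.
public sealed class PublishedState
{
// Immutable: once constructed, a Snapshot never changes.
private sealed class Snapshot
{
public readonly int Version;
public Snapshot(int version) { Version = version; }
}

private readonly object _writeLock = new object();
private Snapshot _current = new Snapshot(0);

// Readers take a volatile snapshot; the acquire fence guarantees they
// see a fully constructed object, with no lock required.
public int ReadVersion()
{
var snapshot = Volatile.Read(ref _current);
return snapshot.Version;
}

// Writers serialize on the lock and publish a brand-new immutable object.
public void Publish()
{
lock (_writeLock)
{
var next = new Snapshot(_current.Version + 1);
Volatile.Write(ref _current, next); // release fence on publish
}
}
}
```

Readers never block writers and vice versa; the only contention is among writers on `_writeLock`.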
I'm trying to create my own cache implementation for an API. It is the first time I have worked with ConcurrentDictionary, and I do not know if I am using it correctly. In a test, something threw an error, and so far I have not been able to reproduce it. Maybe a concurrency professional who knows ConcurrentDictionary can look at the code and find what may be wrong. Thank you!
private static readonly ConcurrentDictionary<string, ThrottleInfo> CacheList = new ConcurrentDictionary<string, ThrottleInfo>();
public override void OnActionExecuting(HttpActionContext actionExecutingContext)
{
if (CacheList.TryGetValue(userIdentifier, out var throttleInfo))
{
if (DateTime.Now >= throttleInfo.ExpiresOn)
{
if (CacheList.TryRemove(userIdentifier, out _))
{
//TODO:
}
}
else
{
if (throttleInfo.RequestCount >= defaultMaxRequest)
{
actionExecutingContext.Response = ResponseMessageExtension.TooManyRequestHttpResponseMessage();
}
else
{
throttleInfo.Increment();
}
}
}
else
{
if (CacheList.TryAdd(userIdentifier, new ThrottleInfo(Seconds)))
{
//TODO:
}
}
}
public class ThrottleInfo
{
private int _requestCount;
public int RequestCount => _requestCount;
public ThrottleInfo(int addSeconds)
{
Interlocked.Increment(ref _requestCount);
ExpiresOn = ExpiresOn.AddSeconds(addSeconds);
}
public void Increment()
{
// this is about as thread safe as you can get.
// From MSDN: Increments a specified variable and stores the result, as an atomic operation.
Interlocked.Increment(ref _requestCount);
// you can return the result of Increment if you want the new value,
//but DO NOT set the counter to the result :[i.e. counter = Interlocked.Increment(ref counter);] This will break the atomicity.
}
public DateTime ExpiresOn { get; } = DateTime.Now;
}
If I understand what you are trying to do: if ExpiresOn has passed, remove the entry; otherwise update it, or add it if it does not exist.
You can certainly take advantage of the AddOrUpdate method to simplify some of your code.
Take a look here for some good examples: https://learn.microsoft.com/en-us/dotnet/standard/collections/thread-safe/how-to-add-and-remove-items
Hope this helps.
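Here is a rough sketch of what AddOrUpdate could look like for this throttling scenario. `Entry` is a hypothetical, simplified stand-in for the ThrottleInfo class, not the original code:

```csharp
using System;
using System.Collections.Concurrent;

public static class ThrottleSketch
{
// Immutable entry: a new instance is published on every update, so the
// value factories below can safely race without corrupting state.
public sealed class Entry
{
public readonly int Count;
public readonly DateTime ExpiresOn;
public Entry(int count, DateTime expiresOn) { Count = count; ExpiresOn = expiresOn; }
}

public static Entry Record(ConcurrentDictionary<string, Entry> cache,
string user, TimeSpan window, DateTime now)
{
return cache.AddOrUpdate(
user,
// addValueFactory: first request starts a new window
_ => new Entry(1, now + window),
// updateValueFactory: restart the window if expired, else increment
(_, old) => now >= old.ExpiresOn
? new Entry(1, now + window)
: new Entry(old.Count + 1, old.ExpiresOn));
}
}
```

Note that AddOrUpdate's factories may be invoked more than once under contention; because `Entry` is immutable and the factories are side-effect-free, a discarded extra invocation is harmless here.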
The ConcurrentDictionary is sufficient as a thread-safe container only in cases where (1) the whole state that needs protection is its internal state (the keys and values it contains), and only if (2) this state can be mutated atomically using the specialized API it offers (GetOrAdd, AddOrUpdate). In your case the second requirement is not met, because you need to remove keys conditionally depending on the state of their value, and this scenario is not supported by the ConcurrentDictionary class.
So your current cache implementation is not thread-safe. The fact that it throws exceptions only sporadically is incidental. It would still be non-thread-safe even if it never threw, because it would not be error-proof: it could occasionally (or permanently) transition to a state incompatible with its specification (returning expired values, for example).
Regarding the ThrottleInfo class, it suffers from a visibility bug that could remain unobserved if you tested the class extensively in one machine, and then suddenly emerge when you deployed your app in another machine with a different CPU architecture. The non-volatile private int _requestCount field is exposed through the public property RequestCount, so there is no guarantee (based on the C# specification) that all threads will see its most recent value. You can read this article by Igor Ostrovsky about the peculiarities of the memory models, which may convince you (like me) that employing lock-free techniques (using the Interlocked class in this case) with multithreaded code is more trouble than it's worth. If you read it and like it, there is also a part 2 of this article.
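For comparison, a plain lock-based version of the counter avoids those visibility pitfalls entirely, because every read and write of the shared state happens under the same lock. This is only an illustrative sketch (names are mine, not from the original post):

```csharp
using System;

// Lock-based alternative to the Interlocked/volatile counter. All mutable
// state is read and written under one lock, so there are no fences or
// memory-model subtleties to reason about.
public sealed class ThrottleCounter
{
private readonly object _sync = new object();
private int _requestCount;
private readonly DateTime _expiresOn;

public ThrottleCounter(DateTime expiresOn) { _expiresOn = expiresOn; }

// Returns the incremented value atomically.
public int Increment()
{
lock (_sync) { return ++_requestCount; }
}

// The lock on the read guarantees every thread sees the latest value.
public int RequestCount
{
get { lock (_sync) { return _requestCount; } }
}

public DateTime ExpiresOn => _expiresOn; // immutable, safe without a lock
}
```

An uncontended lock costs a few tens of nanoseconds, which is rarely worth trading for the fragility of lock-free code in a scenario like this.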
I have the following function which is intended to "memoize" argument-less functions. Meaning to only call the function once, and then return the same result all other times.
private static Func<T> Memoize<T>(Func<T> func)
{
var lockObject = new object();
var value = default(T);
var inited = false;
return () => {
if (inited)
return value;
lock (lockObject) {
if (!inited) {
value = func();
inited = true;
}
}
return value;
};
}
Can I be certain that if a thread reads "inited == true" outside the lock, it will then read the "value" which was written before "inited" was set to true?
Note: Double-checked locking in .NET covers the fact the it should work and this question mainly to check if my implementation is correct and maybe get better alternatives.
No, because inited is not volatile. volatile gives you the memory release and acquire fences you need in order to establish the correct happens-before relationship.
If there's no release fence before inited is set to true, then the value may not be completely written by the time another thread reads inited and sees it as true, which could result in a half-constructed object being returned. Similarly, if there's a release fence but no corresponding acquire fence before reading inited in the first check, it's possible that the object is fully constructed, but that the CPU core that saw inited as true hasn't yet seen the memory effects of value being written (cache coherency does not necessarily mandate that the effects of consecutive writes are seen in order on other cores). This would again potentially result in a half-constructed object being returned.
This is, by the way, an instance of the already very well-documented double-checked locking pattern.
Instead of using a lambda that captures local variables (which makes the compiler generate an implicit class holding the closed-over variables in non-volatile fields), I suggest explicitly creating your own class with a volatile field for the initialization flag.
private class Memoized<T>
{
public T value;
public volatile bool inited;
}
private static Func<T> Memoize<T>(Func<T> func)
{
var memoized = new Memoized<T>();
return () => {
if (memoized.inited)
return memoized.value;
lock (memoized) {
if (!memoized.inited) {
memoized.value = func();
memoized.inited = true;
}
}
return memoized.value;
};
}
Of course, as others have mentioned, Lazy<T> exists for this very purpose. Use it instead of rolling your own, but it's always a good idea to know the theory behind how something works.
I think you would be better off using the standard Lazy<T> class to implement the functionality you need, as in:
private static Func<T> Memoize<T>(Func<T> func)
{
var lazyValue = new Lazy<T>(func, isThreadSafe: true);
return () => lazyValue.Value;
}
No, that code is not safe. The compiler is free to reorder the writes to value and inited; so is the memory system. This means that another thread might see inited set to true whilst value is still at its default.
This pattern is called double-checked locking, and is discussed by Albahari under Lazy Initialization. The recommended solution is to use the built-in Lazy<T> class. An equivalent implementation would be the following:
private static Func<T> Memoize<T>(Func<T> func)
{
var lazy = new Lazy<T>(func);
return () => lazy.Value;
}
I have the following code to cache instances of some class in a Concurrent Dictionary to which I use in a multi threaded application.
Simply put, when I instantiate the class with the id parameter, it first checks whether an instance of PrivateClass with the given id exists in the dictionary; if not, it creates an instance of PrivateClass (which takes a long time, sometimes a couple of seconds) and adds it to the dictionary for future use.
public class SomeClass
{
private static readonly ConcurrentDictionary<int, PrivateClass> SomeClasses =
new ConcurrentDictionary<int, PrivateClass>();
private readonly PrivateClass _privateClass;
public SomeClass(int cachedInstanceId)
{
if (!SomeClasses.TryGetValue(cachedInstanceId, out _privateClass))
{
_privateClass = new PrivateClass(); // This takes long time
SomeClasses.TryAdd(cachedInstanceId, _privateClass);
}
}
public int SomeCalculationResult()
{
return _privateClass.CalculateSomething();
}
private class PrivateClass
{
internal PrivateClass()
{
// this takes long time
}
internal int CalculateSomething()
{
// Calculates and returns something
}
}
}
My question is: do I need to add a lock around the generation and assignment part of the outer class's constructor to make this code thread-safe, or is it good as it is?
Update:
After SLaks's suggestion, tried to use GetOrAdd() method of ConcurrentDictionary with the combination of Lazy, but unfortunately the constructor of the PrivateClass still called more than once. See https://gist.github.com/3500955 for the test code.
Update 2:
You can see the final solution here:
https://gist.github.com/3501446
You're misusing ConcurrentDictionary.
In multi-threaded code, you should never check for the presence of an item and then add it if it's not there.
If two threads run that code at once, they will both end up adding it.
In general, there are two solutions to this kind of problem: you can wrap all of that code in a lock, or you can redesign it to do the whole thing in one atomic operation.
ConcurrentDictionary is designed for exactly this kind of scenario.
You should simply call
_privateClass = SomeClasses.GetOrAdd(cachedInstanceId, key => new PrivateClass());
Locking is not necessary, but what you're doing is not thread-safe. Instead of first checking the dictionary for presence of an item and then adding it if necessary, you should use ConcurrentDictionary.GetOrAdd() to do it all in one atomic operation.
Otherwise, you're exposing yourself to the same problem that you'd have with a regular dictionary: another thread might add an entry to SomeClasses after you check for existence but before you insert.
Your sample code at https://gist.github.com/3500955 using ConcurrentDictionary and Lazy<T> is incorrect - you're writing:
private static readonly ConcurrentDictionary<int, PrivateClass> SomeClasses =
new ConcurrentDictionary<int, PrivateClass>();
public SomeClass(int cachedInstanceId)
{
_privateClass = SomeClasses.GetOrAdd(cachedInstanceId, (key) => new Lazy<PrivateClass>(() => new PrivateClass(key)).Value);
}
..which should have been:
private static readonly ConcurrentDictionary<int, Lazy<PrivateClass>> SomeClasses =
new ConcurrentDictionary<int, Lazy<PrivateClass>>();
public SomeClass(int cachedInstanceId)
{
_privateClass = SomeClasses.GetOrAdd(cachedInstanceId, (key) => new Lazy<PrivateClass>(() => new PrivateClass(key))).Value;
}
You need to use ConcurrentDictionary<TKey, Lazy<TVal>>, and not ConcurrentDictionary<TKey, TVal>.
The point is that you only access the Value of the Lazy after the correct Lazy object has been returned from the GetOrAdd() - sending in the Value of the Lazy object to the GetOrAdd function defeats the whole purpose of using it.
Edit: Ah - you got it in https://gist.github.com/mennankara/3501446 :)
While i was looking at some legacy application code i noticed it is using a string object to do thread synchronization. I'm trying to resolve some thread contention issues in this program and was wondering if this could lead so some strange situations. Any thoughts ?
private static string mutex= "ABC";
internal static void Foo(Rpc rpc)
{
lock (mutex)
{
//do something
}
}
String literals like that (from the code) are "interned". This means all occurrences of "ABC" point to the same object, even across AppDomains (thx Steven for the tip).
If you have a lot of string mutexes in different locations but with the same text, they will all lock on the same object.
The intern pool conserves string storage. If you assign a literal string constant to several variables, each variable is set to reference the same constant in the intern pool instead of referencing several different instances of String that have identical values.
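A quick sketch of why this matters: two independently declared literals with the same text are the same object, so two seemingly unrelated lock statements would contend on the same monitor.

```csharp
using System;

public static class InternDemo
{
public static void Main()
{
string a = "ABC";
string b = "ABC"; // declared independently, e.g. in another class

// Both literals resolve to the same interned instance, so lock(a)
// and lock(b) would acquire the same monitor.
Console.WriteLine(ReferenceEquals(a, b)); // True for literals

// A string built at runtime with the same value is a distinct object...
string c = new string(new[] { 'A', 'B', 'C' });
Console.WriteLine(ReferenceEquals(a, c)); // False

// ...unless it is explicitly interned.
Console.WriteLine(ReferenceEquals(a, string.Intern(c))); // True
}
}
```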
It's better to use:
private static readonly object mutex = new object();
Also, since your string is not const or readonly, you can change it. So (in theory) it is possible to bypass your mutex: change mutex to point to another reference, and a second thread can then enter the critical section, because lock now acquires the monitor of a different object. Example:
private static string mutex = "1";
private static string mutex2 = "1"; // for 'lock' mutex2 and mutex are the same
private static void CriticalButFlawedMethod() {
lock(mutex) {
mutex += "."; // Hey, now mutex points to another reference/object
// You are free to re-enter
...
}
}
To answer your question (as some others already have), there are some potential problems with the code example you provided:
private static string mutex= "ABC";
The variable mutex is not immutable.
The string literal "ABC" will refer to the same interned object reference everywhere in your application.
In general, I would advise against locking on strings. However, there is a case I've ran into where it is useful to do this.
There have been occasions where I have maintained a dictionary of lock objects where the key is something unique about some data that I have. Here's a contrived example:
void Main()
{
var a = new SomeEntity{ Id = 1 };
var b = new SomeEntity{ Id = 2 };
Task.Run(() => DoSomething(a));
Task.Run(() => DoSomething(a));
Task.Run(() => DoSomething(b));
Task.Run(() => DoSomething(b));
}
ConcurrentDictionary<int, object> _locks = new ConcurrentDictionary<int, object>();
void DoSomething(SomeEntity entity)
{
var mutex = _locks.GetOrAdd(entity.Id, id => new object());
lock(mutex)
{
Console.WriteLine("Inside {0}", entity.Id);
// do some work
}
}
The goal of code like this is to serialize concurrent invocations of DoSomething() within the context of the entity's Id. The downside is the dictionary. The more entities there are, the larger it gets. It's also just more code to read and think about.
I think .NET's string interning can simplify things:
void Main()
{
var a = new SomeEntity{ Id = 1 };
var b = new SomeEntity{ Id = 2 };
Task.Run(() => DoSomething(a));
Task.Run(() => DoSomething(a));
Task.Run(() => DoSomething(b));
Task.Run(() => DoSomething(b));
}
void DoSomething(SomeEntity entity)
{
lock(string.Intern("dee9e550-50b5-41ae-af70-f03797ff2a5d:" + entity.Id))
{
Console.WriteLine("Inside {0}", entity.Id);
// do some work
}
}
The difference here is that I am relying on the string interning to give me the same object reference per entity id. This simplifies my code because I don't have to maintain the dictionary of mutex instances.
Notice the hard-coded UUID string that I'm using as a namespace. This is important if I choose to adopt the same approach of locking on strings in another area of my application.
Locking on strings can be a good idea or a bad idea depending on the circumstances and the attention that the developer gives to the details.
If you need to lock a string, you can create an object that pairs the string with an object that you can lock with.
class LockableString
{
public string _String;
public object MyLock; //Provide a lock to the data in.
public LockableString()
{
MyLock = new object();
}
}
My 2 cents:
ConcurrentDictionary is 1.5X faster than interned strings. I did a benchmark once.
To solve the "ever-growing dictionary" problem you can use a dictionary of semaphores instead of a dictionary of objects. AKA use ConcurrentDictionary<string, SemaphoreSlim> instead of <string, object>. Unlike the lock statements, Semaphores can track how many threads have locked on them. And once all the locks are released - you can remove it from the dictionary. See this question for solutions like that: Asynchronous locking based on a key
Semaphores are even better because you can even control the concurrency level. Like, instead of "limiting to one concurrent run" - you can "limit to 5 concurrent runs". Awesome free bonus isn't it? I had to code an email-service that needed to limit the number of concurrent connections to a server - this came very very handy.
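A minimal sketch of that keyed-semaphore idea (the cleanup policy is omitted here; a real implementation needs reference counting to remove idle entries safely, as described in the linked question):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Keyed locking with semaphores: at most maxConcurrency callers may run the
// protected section per key, and SemaphoreSlim also works with async code.
public static class KeyedSemaphore
{
private static readonly ConcurrentDictionary<string, SemaphoreSlim> Semaphores =
new ConcurrentDictionary<string, SemaphoreSlim>();

public static async Task RunAsync(string key, int maxConcurrency, Func<Task> action)
{
// If two threads race in GetOrAdd, the losing factory's semaphore is
// simply discarded and garbage-collected; only the winner is stored.
var semaphore = Semaphores.GetOrAdd(key, _ => new SemaphoreSlim(maxConcurrency, maxConcurrency));
await semaphore.WaitAsync();
try
{
await action();
}
finally
{
semaphore.Release();
}
}
}
```

With `maxConcurrency: 1` this behaves like a per-key async lock; raising it gives the "limit to N concurrent runs" behavior described above.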
I imagine that locking on interned strings could lead to memory bloat if the strings generated are many and all unique. Another approach that should be more memory-efficient, and that solves the immediate deadlock issue, is a ConditionalWeakTable, which lets the lock objects be collected once their key strings are no longer referenced. One caveat: ConditionalWeakTable compares keys by reference, not by value, so callers must pass the same string instance (for example an interned or otherwise cached key) to receive the same lock object.
// Returns an object to lock on, based on a string key.
// Note: keys are compared by reference, so equal-but-distinct string
// instances yield different lock objects.
private static readonly ConditionalWeakTable<string, object> _weakTable = new ConditionalWeakTable<string, object>();
public static object GetLock(string value)
{
if (value == null) throw new ArgumentNullException(nameof(value));
return _weakTable.GetOrCreateValue(value);
}
I have a function that returns an entry from a dictionary, based on the key (name); if the entry doesn't exist, it returns a newly created one.
The question I have is with the "double lock": SomeFunction locks _dictionary to check for the existence of the key, then calls a function that also locks the same dictionary. It seems to work, but I am not sure whether there is a potential problem with this approach.
public Machine SomeFunction(string name)
{
lock (_dictionary)
{
if (!_dictionary.ContainsKey(name))
return CreateMachine(name);
return _dictionary[name];
}
}
private Machine CreateMachine(string name)
{
Machine ms = new Machine(name);
lock (_dictionary)
{
_dictionary.Add(name, ms);
}
return ms;
}
That's guaranteed to work - locks are recursive in .NET. Whether it's really a good idea or not is a different matter... how about this instead:
public Machine SomeFunction(string name)
{
lock (_dictionary)
{
Machine result;
if (!_dictionary.TryGetValue(name, out result))
{
result = CreateMachine(name);
_dictionary[name] = result;
}
return result;
}
}
// This is now *just* responsible for creating the machine,
// not for maintaining the dictionary. The dictionary manipulation
// is confined to the above method.
private Machine CreateMachine(string name)
{
return new Machine(name);
}
No problem here, the lock is re-entrant by the same thread. Not all sync objects have thread affinity, Semaphore for example. But Mutex and Monitor (lock) are fine.
New since .NET 4.0, check out ConcurrentDictionary: a thread-safe collection of key/value pairs that can be accessed by multiple threads concurrently. More info at https://msdn.microsoft.com/en-us/library/dd287191(v=vs.110).aspx .