Thread safety in services set to ConcurrencyMode.Multiple - C#

I am in the middle of developing a WCF application that hosts a custom object for many clients to access. It basically works, but because I need to deal with thousands of simultaneous clients, the service must be able to handle concurrent read calls (updates will be infrequent). I have added some thread safety by locking a private field while updating the object.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
    private object updateLock = new object();
    private SortedList<string, DateTime> dates = new SortedList<string, DateTime>();

    public DateTime GetDate(string key)
    {
        if (this.dates.ContainsKey(key))
        {
            return this.dates[key];
        }
        else
        {
            return DateTime.MinValue;
        }
    }

    public void SetDate(string key, DateTime expirationDate)
    {
        lock (this.updateLock)
        {
            if (this.dates.ContainsKey(key))
            {
                this.dates[key] = expirationDate;
            }
            else
            {
                this.dates.Add(key, expirationDate);
            }
        }
    }
}
My problem is how to make GetDate thread safe without locking, so that concurrent calls to GetDate can execute, but so that an exception won't randomly occur when a value is removed from the collection after the check but before the value is read.
Catching the exception and dealing with it is possible, but I would still prefer to prevent it.
Any ideas?

There is a lock specifically designed for this: ReaderWriterLockSlim (or ReaderWriterLock if you are targeting a version earlier than .NET 3.5).
This lock allows concurrent reads, but locks out the readers (and other writers) while a write is happening.
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
    private ReaderWriterLockSlim updateLock = new ReaderWriterLockSlim();
    private SortedList<string, DateTime> dates = new SortedList<string, DateTime>();

    public DateTime GetDate(string key)
    {
        // Enter the lock outside the try block: if EnterReadLock throws,
        // the finally block must not try to exit a lock we never took.
        this.updateLock.EnterReadLock();
        try
        {
            if (this.dates.ContainsKey(key))
            {
                return this.dates[key];
            }
            else
            {
                return DateTime.MinValue;
            }
        }
        finally
        {
            this.updateLock.ExitReadLock();
        }
    }

    public void SetDate(string key, DateTime expirationDate)
    {
        this.updateLock.EnterWriteLock();
        try
        {
            if (this.dates.ContainsKey(key))
            {
                this.dates[key] = expirationDate;
            }
            else
            {
                this.dates.Add(key, expirationDate);
            }
        }
        finally
        {
            this.updateLock.ExitWriteLock();
        }
    }
}
There is also "Try" versions of the locks that support timeouts, you just check the returned bool to see if you took the lock.
UPDATE: Another solution is to use a ConcurrentDictionary; this does not require any explicit locks at all. ConcurrentDictionary does use locks internally, but they are held for much shorter periods than the ones you would take yourself, and Microsoft could potentially optimize it further with unsafe techniques; I don't know exactly what kind of locks it takes internally.
You will need to do some rewriting to make your operations atomic, though:
[ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
public sealed class TestServerConfig : ConfigBase, ITestInterface
{
    private ConcurrentDictionary<string, DateTime> dates = new ConcurrentDictionary<string, DateTime>();

    public DateTime GetDate(string key)
    {
        DateTime result;
        if (this.dates.TryGetValue(key, out result))
        {
            return result;
        }
        else
        {
            return DateTime.MinValue;
        }
    }

    public void SetDate(string key, DateTime expirationDate)
    {
        this.dates.AddOrUpdate(key, expirationDate, (usedKey, oldValue) => expirationDate);
    }
}
UPDATE2: Out of curiosity I looked under the hood to see what ConcurrentDictionary does. It locks only on a subset of the buckets, so you get lock contention only when two objects happen to share the same hash-bucket lock.
There are normally Environment.ProcessorCount * 4 lock buckets, but you can set the number yourself using the constructor that takes a concurrencyLevel.
Here is how it decides which lock to use:
private void GetBucketAndLockNo(int hashcode, out int bucketNo, out int lockNo, int bucketCount, int lockCount)
{
    bucketNo = (hashcode & 2147483647) % bucketCount;
    lockNo = bucketNo % lockCount;
}
lockCount is equal to the concurrencyLevel set in the constructor.
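If you expect heavy contention, a minimal sketch of setting the level by hand (the numbers here are illustrative guesses, not recommendations):

// concurrencyLevel controls how many internal lock buckets are created;
// capacity is the initial number of elements before a resize is needed.
var dates = new ConcurrentDictionary<string, DateTime>(
    concurrencyLevel: Environment.ProcessorCount * 8,
    capacity: 10000);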

I would suggest you use a ReaderWriterLockSlim, the documentation for which provides an example that is almost exactly what you want ( http://msdn.microsoft.com/en-us/library/system.threading.readerwriterlockslim.aspx ).
Something like this:
public DateTime GetDate(string key)
{
    cacheLock.EnterReadLock();
    try
    {
        if (this.dates.ContainsKey(key))
        {
            return this.dates[key];
        }
        else
        {
            return DateTime.MinValue;
        }
    }
    finally
    {
        cacheLock.ExitReadLock();
    }
}

public void SetDate(string key, DateTime expirationDate)
{
    cacheLock.EnterWriteLock();
    try
    {
        if (this.dates.ContainsKey(key))
        {
            this.dates[key] = expirationDate;
        }
        else
        {
            this.dates.Add(key, expirationDate);
        }
    }
    finally
    {
        cacheLock.ExitWriteLock();
    }
}
ReaderWriterLockSlim performs much better than a plain lock for read-heavy workloads because it differentiates between reads and writes: when no write is occurring, reads do not block each other.

If you really can't afford locking for reads, you could (inside the lock) make a copy of the list, update the copy, and then replace the old list with it. The worst thing that could happen now is that some reads would be slightly out of date, but they should never throw.
lock (this.updateLock)
{
    // Copy the current list; readers keep using the old instance meanwhile.
    var temp = new SortedList<string, DateTime>(this.dates);
    if (temp.ContainsKey(key))
    {
        temp[key] = expirationDate;
    }
    else
    {
        temp.Add(key, expirationDate);
    }
    this.dates = temp; // swap in the new list (a single reference assignment)
}
Not very efficient, but if you're not doing it too often it might not matter. (For readers to reliably observe the new list, the dates field should be declared volatile: reference assignment is atomic, but cross-thread visibility is not otherwise guaranteed.)

I had the same situation and used ReaderWriterLockSlim; it works well in these kinds of situations.

Related

Performance of ConcurrentBag, many reads, rare modifications

I'm trying to build a model where there will be multiple reads of an entire collection and rare additions and modifications to it.
I thought I might use ConcurrentBag in .NET, as I've read the documentation and it's supposed to be good for concurrent reads and writes.
The code would look like this:
public class Cache
{
    ConcurrentBag<string> cache = new ConcurrentBag<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        return cache.ToList();
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        // add to the ConcurrentBag
    }

    public void Remove(string entryToRemove)
    {
        // remove from the ConcurrentBag
    }
}
However, I've decompiled the ConcurrentBag class, and in GetEnumerator a lock is always taken, which means any call to GetAllEntries will lock the entire collection and will not scale.
I'm thinking of getting around this by coding it in this manner instead, using a plain list:
public class Cache
{
    private object guard = new object();
    IList<string> cache = new List<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        var currentCache = cache;
        return currentCache;
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        lock (guard)
        {
            cache.Add(newEntry);
        }
    }

    public void Remove(string entryToRemove)
    {
        lock (guard)
        {
            cache.Remove(entryToRemove);
        }
    }
}
Since Add and Remove are rarely called, I don't care too much about locking access to the list there. On Get I might get a stale version of the list, but again I don't care; it will be fine for the next request.
Is the second implementation a good way to go?
EDIT
I've run a quick performance test and the results are the following:
Setup: populated the in-memory collection with 10000 strings.
Action: GetAllEntries concurrently 50000 times.
Result:
00:00:35.2393871 to finish operation using ConcurrentBag (first implementation)
00:00:00.0036959 to finish operation using normal list (second implementation)
Code below:
class Program
{
    static void Main(string[] args)
    {
        // warm up the caches and the stopwatch
        var cacheWithBag = new CacheWithBag();
        var cacheWithList = new CacheWithList();
        cacheWithBag.Add("abc");
        cacheWithBag.GetAllEntries();
        cacheWithList.Add("abc");
        cacheWithList.GetAllEntries();

        var sw = new Stopwatch();
        // warm up the stopwatch as well
        sw.Start();
        // initialize caches (rare writes, so no real reason to measure here)
        for (int i = 0; i < 50000; i++)
        {
            // Guid.NewGuid() rather than new Guid(): the latter would produce
            // the same empty Guid on every iteration.
            cacheWithBag.Add(Guid.NewGuid().ToString());
            cacheWithList.Add(Guid.NewGuid().ToString());
        }
        sw.Stop();

        // measure
        var program = new Program();
        sw.Restart(); // Restart, not Start, so the init loop above is excluded
        program.Run(cacheWithBag).Wait();
        sw.Stop();
        Console.WriteLine(sw.Elapsed);

        sw.Restart();
        program.Run2(cacheWithList).Wait();
        sw.Stop();
        Console.WriteLine(sw.Elapsed);
    }

    public async Task Run(CacheWithBag cache1)
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < 10000; i++)
        {
            tasks.Add(Task.Run(() => cache1.GetAllEntries()));
        }
        await Task.WhenAll(tasks);
    }

    public async Task Run2(CacheWithList cache)
    {
        List<Task> tasks = new List<Task>();
        for (int i = 0; i < 10000; i++)
        {
            tasks.Add(Task.Run(() => cache.GetAllEntries()));
        }
        await Task.WhenAll(tasks);
    }

    public class CacheWithBag
    {
        ConcurrentBag<string> cache = new ConcurrentBag<string>();

        // this method gets called frequently
        public IEnumerable<string> GetAllEntries()
        {
            return cache.ToList();
        }

        // this method gets rarely called
        public void Add(string newEntry)
        {
            cache.Add(newEntry);
        }
    }

    public class CacheWithList
    {
        private object guard = new object();
        IList<string> cache = new List<string>();

        // this method gets called frequently
        public IEnumerable<string> GetAllEntries()
        {
            var currentCache = cache;
            return currentCache;
        }

        // this method gets rarely called
        public void Add(string newEntry)
        {
            lock (guard)
            {
                cache.Add(newEntry);
            }
        }

        public void Remove(string entryToRemove)
        {
            lock (guard)
            {
                cache.Remove(entryToRemove);
            }
        }
    }
}
To improve on InBetween's solution:
class Cache
{
    ImmutableHashSet<string> cache = ImmutableHashSet.Create<string>();

    public IEnumerable<string> GetAllEntries()
    {
        return cache;
    }

    public void Add(string newEntry)
    {
        ImmutableInterlocked.Update(ref cache, (set, item) => set.Add(item), newEntry);
    }

    public void Remove(string entryToRemove)
    {
        ImmutableInterlocked.Update(ref cache, (set, item) => set.Remove(item), entryToRemove);
    }
}
This performs only atomic operations (no locking) and uses the .NET Immutable types.
In your current scenario, where Add and Remove are rarely called, I'd consider the following approach:
public class Cache
{
    private object guard = new object();
    private SomeImmutableCollection<string> cache = new SomeImmutableCollection<string>();

    // this method gets called frequently
    public IEnumerable<string> GetAllEntries()
    {
        return cache;
    }

    // this method gets rarely called
    public void Add(string newEntry)
    {
        lock (guard)
        {
            cache = cache.Add(newEntry);
        }
    }

    public void Remove(string entryToRemove)
    {
        lock (guard)
        {
            cache = cache.Remove(entryToRemove);
        }
    }
}
The fundamental change here is that cache is now an immutable collection, which means it can never change. Concurrency problems with the collection itself simply disappear: something that can't change is inherently thread safe.
Also, depending on how rare calls to Add and Remove are, you could even consider removing the lock in both of them, because all it does now is avoid a race between Add and Remove that could lose one of the updates. If that scenario is very improbable, you could get away with it. That said, I very much doubt the few nanoseconds an uncontended lock takes are a relevant factor here. ;)
SomeImmutableCollection can be any of the collections found in System.Collections.Immutable that better suit your needs.
Instead of a lock on a guard object to protect a simple container, you should consider ReaderWriterLockSlim, which is optimized and very performant for the read/write scenario: multiple readers are allowed at the same time, but only one writer is allowed, and it blocks other readers and writers. It is very useful in your scenario, where you read a lot but write only rarely.
Note that you can start as a reader and then, for some reason, decide to become a writer (upgrade the slim lock) within your "reading" code.
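A minimal sketch of that upgrade pattern, assuming a simple list-backed cache (AddIfMissing is a hypothetical helper, not from the answer):

private readonly ReaderWriterLockSlim rwLock = new ReaderWriterLockSlim();
private readonly List<string> cache = new List<string>();

public void AddIfMissing(string entry)
{
    // The upgradeable read lock coexists with plain read locks,
    // but can be promoted to a write lock without being released.
    rwLock.EnterUpgradeableReadLock();
    try
    {
        if (!cache.Contains(entry))
        {
            rwLock.EnterWriteLock(); // upgrade: now exclusive
            try
            {
                cache.Add(entry);
            }
            finally
            {
                rwLock.ExitWriteLock();
            }
        }
    }
    finally
    {
        rwLock.ExitUpgradeableReadLock();
    }
}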

How to dynamically lock strings but remove the lock objects from memory

I have the following situation:
I have a lot of threads in my project, and each thread processes one "key" at a time.
Two threads cannot process the same "key" at the same time, but my project processes a LOT of keys, so I can't keep all of them in memory. I need to record in memory that a thread is processing a "key", and if another thread tries to process the same "key", it has to wait in a lock clause.
Now I have the following structure:
public class Lock
{
    private static object _lockObj = new object();
    private static List<object> _lockListValues = new List<object>();

    public static void Execute(object value, Action action)
    {
        lock (_lockObj)
        {
            if (!_lockListValues.Contains(value))
                _lockListValues.Add(value);
        }
        lock (_lockListValues.First(x => x.Equals(value)))
        {
            action.Invoke();
        }
    }
}
It is working fine; the problem is that the keys are never removed from memory. The biggest trouble is the multithreading aspect, because a "key" can be processed at any time.
How could I solve this without a global lock independent of the keys?
Sorry, but no, this is not the way it should be done.
First, you speak about keys, but you store them as type object in a List and then search for them with LINQ. That is exactly what a dictionary is for.
Second, regarding the object model: it is usually best to wrap the locking of some data in a small class, to keep it nice and clean, like this:
using System.Collections.Concurrent;

public class LockedObject<T>
{
    public readonly T data;
    public readonly int id;
    private readonly object obj = new object();

    public LockedObject(int id, T data)
    {
        this.id = id;
        this.data = data;
    }

    // Usually, if you have an Action related to some data,
    // it is better for the action to receive that data as a parameter.
    public void InvokeAction(Action<T> action)
    {
        lock (obj)
        {
            action(data);
        }
    }
}

// Now this is a concurrency-safe object applying some action to the given
// data, no matter how it is stored. But the best idea is still to keep the
// instances in a concurrent dictionary:
//
//   ConcurrentDictionary<int, LockedObject<T>> dict =
//       new ConcurrentDictionary<int, LockedObject<T>>();
//
// You can insert, read and remove the objects concurrently.
But the best thing is yet to come! :) You can make it lock-free quite easily.
EDIT1:
ConcurrentInvoke, a dictionary-like collection for concurrency-safe invocation of actions over data. Only one action at a time can run for a given key.
using System;
using System.Threading;
using System.Collections.Concurrent;

public class ConcurrentInvoke<TKey, TValue>
{
    // we hate lock() :)
    private class Data<TData>
    {
        public readonly TData data;
        private int flag;

        private Data(TData data)
        {
            this.data = data;
        }

        public static bool Contains<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key)
        {
            return dict.ContainsKey(key);
        }

        public static bool TryAdd<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, TData data)
        {
            return dict.TryAdd(key, new Data<TData>(data));
        }

        // Removal fails if the key does not exist, a removal of the same key
        // is already in progress, or an action for the key is in progress.
        public static bool TryRemove<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> action_removed = null)
        {
            Data<TData> data = null;
            if (!dict.TryGetValue(key, out data)) return false;
            var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
            if (!access) return false;
            Data<TData> data2 = null;
            var removed = dict.TryRemove(key, out data2);
            Interlocked.Exchange(ref data.flag, 0);
            if (removed && action_removed != null) action_removed(key, data2.data);
            return removed;
        }

        // Invocation fails if the key does not exist, a removal of the key
        // is in progress, or another action for the key is in progress.
        public static bool TryInvokeAction<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> invoke_action = null)
        {
            Data<TData> data = null;
            if (invoke_action == null || !dict.TryGetValue(key, out data)) return false;
            var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
            if (!access) return false;
            invoke_action(key, data.data);
            Interlocked.Exchange(ref data.flag, 0);
            return true;
        }
    }

    private readonly ConcurrentDictionary<TKey, Data<TValue>> dict =
        new ConcurrentDictionary<TKey, Data<TValue>>();

    public bool Contains(TKey key)
    {
        return Data<TValue>.Contains(dict, key);
    }

    public bool TryAdd(TKey key, TValue value)
    {
        return Data<TValue>.TryAdd(dict, key, value);
    }

    public bool TryRemove(TKey key, Action<TKey, TValue> removed = null)
    {
        return Data<TValue>.TryRemove(dict, key, removed);
    }

    public bool TryInvokeAction(TKey key, Action<TKey, TValue> invoke)
    {
        return Data<TValue>.TryInvokeAction(dict, key, invoke);
    }
}
Usage:

ConcurrentInvoke<int, string> concurrent_invoke = new ConcurrentInvoke<int, string>();
concurrent_invoke.TryAdd(1, "string 1");
concurrent_invoke.TryAdd(2, "string 2");
concurrent_invoke.TryAdd(3, "string 3");
concurrent_invoke.TryRemove(1);
concurrent_invoke.TryInvokeAction(3, (key, value) =>
{
    Console.WriteLine("InvokingAction[key: {0}, value: {1}]", key, value);
});
I modified a KeyedLock class that I posted in another question, to use internally the Monitor class instead of SemaphoreSlims. I expected that using a specialized mechanism for synchronous locking would offer better performance, but I can't actually see any difference. I am posting it anyway because it has the added convenience feature of releasing the lock automatically with the using statement. This feature adds no significant overhead in the case of synchronous locking, so there is no reason to omit it.
Another reason that justifies this separate implementation is that the Monitor has different semantics than the SemaphoreSlim. The Monitor is reentrant while the SemaphoreSlim is not: a single thread is allowed to enter the Monitor multiple times before finally exiting an equal number of times. This is not possible with a SemaphoreSlim; if a thread attempts to Wait on a SemaphoreSlim(1, 1) a second time, it will most likely deadlock.
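A tiny illustration of the difference (my sketch, not from the original answer):

var gate = new object();
lock (gate)
{
    lock (gate) { /* fine: Monitor is reentrant on the same thread */ }
}

var semaphore = new SemaphoreSlim(1, 1);
semaphore.Wait();
// semaphore.Wait(); // would block forever: SemaphoreSlim is not reentrant
semaphore.Release();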
The KeyedMonitor class stores internally only the locking objects that are currently in use, plus a small pool of locking objects that have been released and can be reused. This pool significantly reduces memory allocations under heavy usage, at the cost of some added synchronization overhead.
public class KeyedMonitor<TKey>
{
    private readonly Dictionary<TKey, (object, int)> _perKey;
    private readonly Stack<object> _pool;
    private readonly int _poolCapacity;

    public KeyedMonitor(IEqualityComparer<TKey> keyComparer = null,
        int poolCapacity = 10)
    {
        _perKey = new Dictionary<TKey, (object, int)>(keyComparer);
        _pool = new Stack<object>(poolCapacity);
        _poolCapacity = poolCapacity;
    }

    public ExitToken Enter(TKey key)
    {
        var locker = GetLocker(key);
        Monitor.Enter(locker);
        return new ExitToken(this, key);
    }

    // Abort-safe API
    public void Enter(TKey key, ref bool lockTaken)
    {
        try { }
        finally // Abort-safe block
        {
            var locker = GetLocker(key);
            try { Monitor.Enter(locker, ref lockTaken); }
            finally { if (!lockTaken) ReleaseLocker(key, withMonitorExit: false); }
        }
    }

    public bool TryEnter(TKey key, int millisecondsTimeout)
    {
        var locker = GetLocker(key);
        bool acquired = false;
        try { acquired = Monitor.TryEnter(locker, millisecondsTimeout); }
        finally { if (!acquired) ReleaseLocker(key, withMonitorExit: false); }
        return acquired;
    }

    public void Exit(TKey key) => ReleaseLocker(key, withMonitorExit: true);

    private object GetLocker(TKey key)
    {
        object locker;
        lock (_perKey)
        {
            if (_perKey.TryGetValue(key, out var entry))
            {
                int counter;
                (locker, counter) = entry;
                counter++;
                _perKey[key] = (locker, counter);
            }
            else
            {
                lock (_pool) locker = _pool.Count > 0 ? _pool.Pop() : null;
                if (locker == null) locker = new object();
                _perKey[key] = (locker, 1);
            }
        }
        return locker;
    }

    private void ReleaseLocker(TKey key, bool withMonitorExit)
    {
        object locker; int counter;
        lock (_perKey)
        {
            if (_perKey.TryGetValue(key, out var entry))
            {
                (locker, counter) = entry;
                // It is important to allow a possible SynchronizationLockException
                // to be surfaced before modifying the internal state of the class.
                // That's why the Monitor.Exit should be called here.
                // Exiting the Monitor while holding the inner lock should be safe.
                if (withMonitorExit) Monitor.Exit(locker);
                counter--;
                if (counter == 0)
                    _perKey.Remove(key);
                else
                    _perKey[key] = (locker, counter);
            }
            else
            {
                throw new InvalidOperationException("Key not found.");
            }
        }
        if (counter == 0)
            lock (_pool) if (_pool.Count < _poolCapacity) _pool.Push(locker);
    }

    public readonly struct ExitToken : IDisposable
    {
        private readonly KeyedMonitor<TKey> _parent;
        private readonly TKey _key;

        public ExitToken(KeyedMonitor<TKey> parent, TKey key)
        {
            _parent = parent; _key = key;
        }

        public void Dispose() => _parent?.Exit(_key);
    }
}
Usage example:
var locker = new KeyedMonitor<string>();

using (locker.Enter("Hello"))
{
    DoSomething(); // with the "Hello" resource
}
Although the KeyedMonitor class is thread-safe, it is not as robust as using the lock statement directly, because it offers no resilience in case of a ThreadAbortException. An aborted thread could leave the class in a corrupted internal state. I don't consider this to be a big issue, since the Thread.Abort method has become obsolete in the current version of the .NET platform (.NET 5).
For an explanation about why the IDisposable ExitToken struct is not boxed by the using statement, you can look here: If my struct implements IDisposable will it be boxed when used in a using statement? If this was not the case, the ExitToken feature would add significant overhead.
Caution: please don't store anywhere the ExitToken value returned by the KeyedMonitor.Enter method. There is no protection against misuse of this struct (like disposing it multiple times). The intended usage of this method is shown in the example above.
Update: I added an Enter overload that allows taking the lock with thread-abort resilience, albeit with an inconvenient syntax:
bool lockTaken = false;
try
{
    locker.Enter("Hello", ref lockTaken);
    DoSomething();
}
finally
{
    if (lockTaken) locker.Exit("Hello");
}
As with the underlying Monitor class, the lockTaken is always true after a successful invocation of the Enter method. The lockTaken can be false only if the Enter throws an exception.

What is the most performant way to make the results of a cached computation thread-safe?

(Apologies if this was answered elsewhere; it seems like it would be a common problem, but it turns out to be hard to search for since terms like "threading" and "cache" produce overwhelming results.)
I have an expensive computation whose result is accessed frequently but changes infrequently. Thus, I cache the resulting value. Here's some C# pseudocode of what I mean:
int? _cachedResult = null;

int GetComputationResult()
{
    if (_cachedResult == null)
    {
        // Do the expensive computation.
        _cachedResult = /* Result of expensive computation. */;
    }
    return _cachedResult.Value;
}
Elsewhere in my code, I will occasionally set _cachedResult back to null because the input to the computation has changed and thus the cached result is no longer valid and needs to be re-computed. (Which means I can't use Lazy<T> since Lazy<T> doesn't support being reset.)
This works fine for single-threaded scenarios, but of course it's not at all thread-safe. So my question is: What is the most performant way to make GetComputationResult thread-safe?
Obviously I could just put the whole thing in a lock() block, but I suspect there might be a better way? (Something that would do an atomic check to see if the result needs to be recomputed and only lock if it does?)
Thanks a lot!
You can use the double-checked locking pattern:
// Thread-safe (uses the double-checked locking pattern for performance)
public class Memoized<T>
{
    Func<T> _compute;
    volatile bool _cached;
    volatile bool _startedCaching;
    volatile StrongBox<T> _cachedResult; // Need a reference type
    object _cacheSyncRoot = new object();

    public Memoized(Func<T> compute)
    {
        _compute = compute;
    }

    public T Value
    {
        get
        {
            if (_cached) // Fast path
                return _cachedResult.Value;

            lock (_cacheSyncRoot)
            {
                if (!_cached)
                {
                    _startedCaching = true;
                    _cachedResult = new StrongBox<T>(_compute());
                    _cached = true;
                }
            }
            return _cachedResult.Value;
        }
    }

    public void Invalidate()
    {
        if (!_startedCaching)
        {
            // Fast path: already invalidated
            Thread.MemoryBarrier(); // need to release
            if (!_startedCaching)
                return;
        }
        lock (_cacheSyncRoot)
            _cached = _startedCaching = false;
    }
}
This particular implementation matches your description of what it should do in corner cases: If the cache has been invalidated, the value should only be computed once, by a single thread, and other threads should wait. However, if the cache is invalidated concurrently with the cached value being accessed, the stale cached value may be returned.
Perhaps this will provide some food for thought. :)
It's a generic class that can compute its data asynchronously or synchronously, and it allows fast reads thanks to a spinlock.
It does no heavy work inside the spinlock; it just returns the Task and, if necessary, creates and starts it on the default TaskScheduler to avoid inlining.
A Task combined with a spinlock is a pretty powerful combination that can solve some problems in a lock-free way.
using System;
using System.Threading;
using System.Threading.Tasks;

namespace Example
{
    class OftenReadSometimesUpdate<T>
    {
        private Task<T> result_task = null;
        private SpinLock spin_lock = new SpinLock(false);

        private TResult LockedFunc<TResult>(Func<TResult> locked_func)
        {
            TResult t_result = default(TResult);
            bool gotLock = false;
            if (locked_func == null) return t_result;
            try
            {
                spin_lock.Enter(ref gotLock);
                t_result = locked_func();
            }
            finally
            {
                if (gotLock) spin_lock.Exit();
                gotLock = false;
            }
            return t_result;
        }

        public Task<T> GetComputationAsync()
        {
            return LockedFunc(GetComputationTaskLocked);
        }

        public T GetComputationResult()
        {
            return LockedFunc(GetComputationTaskLocked).Result;
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResult()
        {
            return this.LockedFunc(InvalidateComputationResultLocked);
        }

        public OftenReadSometimesUpdate<T> InvalidateComputationResultLocked()
        {
            result_task = null;
            return this;
        }

        private Task<T> GetComputationTaskLocked()
        {
            if (result_task == null)
            {
                result_task = new Task<T>(HeavyComputation);
                result_task.Start(TaskScheduler.Default);
            }
            return result_task;
        }

        protected virtual T HeavyComputation()
        {
            // a heavy computation
            return default(T); // return some result of the computation
        }
    }
}
You could simply reassign the Lazy<T> to achieve a reset:
Lazy<int> lazyResult = new Lazy<int>(GetComputationResult);

public int Result { get { return lazyResult.Value; } }

public void Reset()
{
    lazyResult = new Lazy<int>(GetComputationResult);
}

Lock usage of all methods

I have methods that are accessed from multiple threads at the same time, and I want to make sure that only one thread can be inside the body of any method.
Can this code be refactored into something more generic (apart from locking inside the State property)?
public class StateManager : IStateManager
{
    private readonly object _lock = new object();

    public Guid? GetInfo1()
    {
        lock (_lock)
        {
            return State.Info1;
        }
    }

    public void SetInfo1(Guid guid)
    {
        lock (_lock)
        {
            State.Info1 = guid;
        }
    }

    public Guid? GetInfo2()
    {
        lock (_lock)
        {
            return State.Info2;
        }
    }

    public void SetInfo2(Guid guid)
    {
        lock (_lock)
        {
            State.Info2 = guid;
        }
    }
}
Maybe something like:
private void LockAndExecute(Action action)
{
    lock (_lock)
    {
        action();
    }
}
Then your methods might look like this:
public void DoSomething()
{
    LockAndExecute(() => Console.WriteLine("DoSomething"));
}

public int GetSomething()
{
    int i = 0;
    LockAndExecute(() => i = 1);
    return i;
}
I'm not sure that's really saving you very much, however, and return values are a bit of a pain, although you could work around that by adding another overload like this:
private T LockAndExecute<T>(Func<T> function)
{
    lock (_lock)
    {
        return function();
    }
}
So now my GetSomething method is a lot cleaner:
public int GetSomething()
{
    return LockAndExecute(() => 1);
}
Again, I'm not sure you're gaining much in terms of less typing, but at least you know every call is locking on the same object.
While the gains may be pretty minimal when all you need to do is lock, I can imagine a case where you have a bunch of methods that look something like this:
public void DoSomething()
{
    // check some preconditions
    // maybe do some logging
    try
    {
        // do actual work here
    }
    catch (SomeException e)
    {
        // do some error handling
    }
}
In that case, extracting all the precondition checking and error handling into one place could be pretty useful:
private void CheckExecuteAndHandleErrors(Action action)
{
    // preconditions
    // logging
    try
    {
        action();
    }
    catch (SomeException e)
    {
        // handle errors
    }
}
Use an Action or Func delegate, creating a method like this:
public T ExecuteMethodThreadSafe<T>(Func<T> methodToExecute)
{
    lock (_lock)
    {
        return methodToExecute.Invoke();
    }
}
and using it like
public Guid? GetInfo2()
{
    return ExecuteMethodThreadSafe(() => State.Info2);
}
I would like to add what I ended up putting together, using some of the ideas presented by Matt and Abhinav in order to generalize this and make it as seamless as possible to implement.
private static readonly object Lock = new object();

public static void ExecuteMethodThreadSafe<T>(this T @object, Action<T> method)
{
    lock (Lock)
    {
        method(@object);
    }
}

public static TResult ExecuteMethodThreadSafe<T, TResult>(this T @object, Func<T, TResult> method)
{
    lock (Lock)
    {
        return method(@object);
    }
}
Which can then be extended in ways like this (if you want):
private static readonly Random Random = new Random();

public static T GetRandom<T>(Func<Random, T> method) => Random.ExecuteMethodThreadSafe(method);
And then when implemented could look something like this:
var bounds = new Collection<int>();
bounds.ExecuteMethodThreadSafe(list => list.Add(15));     // using the base method
int x = GetRandom(random => random.Next(-10, bounds[0])); // using the extended method
int y = GetRandom(random => random.Next(bounds[0]));      // works with any method overload

Cache class and modified collections

I've written a generic Cache class designed to return an in-memory object and only evaluate src (an IQueryable, or a function returning an IQueryable) occasionally. It's used in a couple of places in my app where fetching a lot of data via Entity Framework is expensive.
It is called like this:
// class level
private static CachedData<Foo> fooCache = new CachedData<Foo>();

// method level
var results = fooCache.GetData("foos", fooRepo.Include("Bars"));
Although it appeared to work OK in testing, running on a busy web server I'm seeing "Collection was modified; enumeration operation may not execute." errors in the code that consumes the results.
This must be because one thread is overwriting the results object inside the lock while another is enumerating it outside the lock.
I'm guessing my only solution is to return a copy of the results to each consumer rather than the original, that the copy cannot be made while inside the Fetch lock, but that multiple copies could be made simultaneously.
Can anyone suggest a better way, or help with the locking strategy, please?
public class CachedData<T> where T : class
{
    private static Dictionary<string, IEnumerable<T>> DataCache { get; set; }
    public static Dictionary<string, DateTime> Expire { get; set; }
    public int TTL { get; set; }
    private object lo = new object();

    public CachedData()
    {
        TTL = 600;
        Expire = new Dictionary<string, DateTime>();
        DataCache = new Dictionary<string, IEnumerable<T>>();
    }

    public IEnumerable<T> GetData(string key, Func<IQueryable<T>> src)
    {
        var bc = brandKey(key);
        if (!DataCache.ContainsKey(bc)) Fetch(bc, src);
        if (DateTime.Now > Expire[bc]) Fetch(bc, src);
        return DataCache[bc];
    }

    public IEnumerable<T> GetData(string key, IQueryable<T> src)
    {
        var bc = brandKey(key);
        if ((!DataCache.ContainsKey(bc)) || (DateTime.Now > Expire[bc])) Fetch(bc, src);
        return DataCache[bc];
    }

    private void Fetch(string key, IQueryable<T> src)
    {
        lock (lo)
        {
            if ((!DataCache.ContainsKey(key)) || (DateTime.Now > Expire[key])) ExecuteFetch(key, src);
        }
    }

    private void Fetch(string key, Func<IQueryable<T>> src)
    {
        lock (lo)
        {
            if ((!DataCache.ContainsKey(key)) || (DateTime.Now > Expire[key])) ExecuteFetch(key, src());
        }
    }

    private void ExecuteFetch(string key, IQueryable<T> src)
    {
        if (!DataCache.ContainsKey(key)) DataCache.Add(key, src.ToList());
        else DataCache[key] = src.ToList();

        if (!Expire.ContainsKey(key)) Expire.Add(key, DateTime.Now.AddSeconds(TTL));
        else Expire[key] = DateTime.Now.AddSeconds(TTL);
    }

    private string brandKey(string key, int? brandid = null)
    {
        return string.Format("{0}/{1}", brandid ?? Config.BrandID, key);
    }
}
I usually use a ConcurrentDictionary<TKey, Lazy<TValue>>. That gives you a lock per key, which makes the hold-the-lock-while-fetching strategy viable and also avoids cache stampedes: it guarantees that only one evaluation per key will ever happen. Lazy<T> automates the locking entirely.
Regarding your expiration logic: you could set up a timer that cleans the dictionary (or rewrites it entirely) every X seconds.
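A minimal sketch of that pattern (the class name and the timer interval are illustrative, not from the answer):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Threading;

public class LazyCache<T>
{
    private readonly ConcurrentDictionary<string, Lazy<List<T>>> cache =
        new ConcurrentDictionary<string, Lazy<List<T>>>();
    private readonly Timer cleaner;

    public LazyCache(TimeSpan ttl)
    {
        // Periodically throw everything away; the next GetData per key refetches.
        cleaner = new Timer(_ => cache.Clear(), null, ttl, ttl);
    }

    public List<T> GetData(string key, Func<IQueryable<T>> src)
    {
        // GetOrAdd may race and build two Lazy wrappers, but only the one that
        // wins publication is ever evaluated, so src runs at most once per key.
        return cache.GetOrAdd(key, _ => new Lazy<List<T>>(() => src().ToList())).Value;
    }
}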
