Locking multiple objects: scenarios and risks - C#

I want to load some objects from the database and cache them. It's simple:
public class Dal {
    public Entity GetEntity(int id) {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item == null) {
            item = LoadEntityFromDatabase(id);
            Cache.Add(cacheKey, item);
        }
        return item;
    }
}
It's pretty simple. But when multiple threads access Dal.GetEntity, I want to lock on an object that is bound to the id. The scenario I can think of is something like this:
public class Dal {
    private static readonly ConcurrentDictionary<int, object>
        _locks = new ConcurrentDictionary<int, object>();

    public Entity GetEntity(int id) {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item == null) {
            var lockObject = _locks.GetOrAdd(id, new object());
            lock (lockObject) {
                if (item == null) {
                    // all exceptions are handled in this block.
                    item = LoadEntityFromDatabase(id);
                    Cache.Add(cacheKey, item);
                }
            }
            // * Here is the removing problem:
            _locks.TryRemove(id, out lockObject);
        }
        return item;
    }
}
The goal is: I want to lock an id while one thread is fetching it from the database. In other words, I want to prevent requesting the same object from the database twice. It seems to work, I think. But I'm not exactly a multithreading programmer, nor a lock guy. So, the question is: what risks am I taking with this scenario?
Also, I have this problem: how do I prevent the dictionary from getting larger and larger? As you can see, where I marked with a *, I'm trying to remove used items with the TryRemove method. But it seems a stupid move: what if it tries to remove an object while the object is locked by another thread? Am I right?

I would strongly advise you not to mix ConcurrentDictionary and locks.
Either work with locks and a normal Dictionary, or work with a ConcurrentDictionary and stay lock-free.
You are currently risking races, e.g. when three threads act on the same record: thread A removes the old lock object, thread B is still using the old lock object, and thread C creates a new one, so B and C might work in parallel.

Something like this (not tested):
public class Dal
{
    private sealed class Locker
    {
        private static readonly object DictionaryLocker = new object();
        private static readonly Dictionary<int, Locker>
            Locks = new Dictionary<int, Locker>();

        private readonly object _lockerObject;
        private int _num;

        private Locker()
        {
            _num = 0;
            _lockerObject = new object();
        }

        public static object StartLock(int id)
        {
            Locker locker;
            lock (DictionaryLocker)
            {
                if (!Locks.TryGetValue(id, out locker))
                {
                    locker = new Locker();
                    Locks.Add(id, locker);
                }
                ++locker._num;
            }
            return locker._lockerObject;
        }

        public static void EndLock(int id)
        {
            lock (DictionaryLocker)
            {
                Locker locker = Locks[id];
                --locker._num;
                if (locker._num == 0)
                    Locks.Remove(id);
            }
        }
    }
    public Entity GetEntity(int id)
    {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item != null)
            return item;

        object lockObject = Locker.StartLock(id);
        try
        {
            lock (lockObject)
            {
                // Re-check the cache: another thread may have loaded the entity
                // while we were waiting for the lock.
                item = Cache.Get(cacheKey) as Entity;
                if (item == null)
                {
                    // all exceptions are handled in this block.
                    item = LoadEntityFromDatabase(id);
                    Cache.Add(cacheKey, item);
                }
            }
        }
        finally
        {
            Locker.EndLock(id);
        }
        return item;
    }
}
EDIT: Changed to Dictionary.
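For comparison, here is a minimal sketch (not from the original answers) of a common alternative that avoids explicit lock blocks: let ConcurrentDictionary.GetOrAdd hand every caller the same Lazy<Entity>, so LoadEntityFromDatabase runs at most once per id. Cache, Entity and LoadEntityFromDatabase are assumed to be the same members shown in the question.
using System;
using System.Collections.Concurrent;
using System.Threading;

public class Dal
{
    private static readonly ConcurrentDictionary<int, Lazy<Entity>> _pending =
        new ConcurrentDictionary<int, Lazy<Entity>>();

    public Entity GetEntity(int id)
    {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item != null)
            return item;

        // Every thread asking for the same id gets the same Lazy instance,
        // so the database load runs only once per id.
        var lazy = _pending.GetOrAdd(id, key => new Lazy<Entity>(
            () => LoadEntityFromDatabase(key),
            LazyThreadSafetyMode.ExecutionAndPublication));
        try
        {
            item = lazy.Value; // only the first caller actually hits the database
            Cache.Add(cacheKey, item);
            return item;
        }
        finally
        {
            // Drop the pending entry once the value is cached (or the load failed),
            // so the dictionary does not grow without bound.
            Lazy<Entity> removed;
            _pending.TryRemove(id, out removed);
        }
    }
}
The trade-off is an extra dictionary entry per id only while the load is in flight; once the entry is removed, a later cache miss simply creates a new Lazy.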

Related

Will this C# code work in thread-safe mode?

I need to be able to get the Project by ID and safely change its properties. I am not a specialist in multi-threading, so please help me with this.
public static class Application
{
private static ConcurrentDictionary<string, Project> projects = new ConcurrentDictionary<string, Project>();
private static readonly object locker = new object();
public static Project GetProjectByGuid(string guid)
{
if (guid == null) return null;
lock (locker)
{
return projects.GetValueOrDefault(guid, null);
}
}
public static void AddOrUpdateProject(Project project)
{
Project dbProject;
lock (locker)
{
dbProject = GetProjectByGuid(project.Guid);
if (dbProject == null)
{
projects[project.Guid] = project;
}
}
if (dbProject != null)
{
lock (dbProject.locker)
{
dbProject.Name = project.Name;
dbProject.Users = project.Users;
}
}
}
}
Since the Project object is not thread safe, the answer to your question is no, you will not be able to make this thread safe.
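If you still need per-project updates, here is a minimal sketch of one common workaround (not from the original answer): treat Project as effectively immutable and atomically replace the stored reference, so readers never observe a half-updated object. Project and its Guid property are assumed from the question.
using System.Collections.Concurrent;

public static class Application
{
    private static readonly ConcurrentDictionary<string, Project> projects =
        new ConcurrentDictionary<string, Project>();

    public static Project GetProjectByGuid(string guid)
    {
        if (guid == null) return null;
        Project project;
        return projects.TryGetValue(guid, out project) ? project : null;
    }

    public static void AddOrUpdateProject(Project project)
    {
        // The whole reference is swapped atomically, so readers always see
        // either the old or the new Project, never a partially updated one.
        projects.AddOrUpdate(project.Guid, project, (guid, existing) => project);
    }
}
Readers that already hold a reference to the old Project simply keep seeing the old values, which is often acceptable; if it is not, the Project class itself has to do its own locking, which is the point of the answer above.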

MVC 5 HttpContext.Current.Cache method fails when hit by 2 threads at the same time

I'm using the following service method in order to cache the result of a query:
private readonly CoreDbContext _dbContext;
public EcommerceProductService()
{
_dbContext = GetDbContext();
}
public IEnumerable<EcommerceProduct> GetAllCached()
{
var cachedResult = HttpContext.Current.Cache["EcommerceProductService.GetAllCached"] as IEnumerable<EcommerceProduct>;
if (cachedResult == null)
{
var result = _dbContext.EcommerceProducts.ToList();
HttpContext.Current.Cache.Insert("EcommerceProductService.GetAllCached", result);
return result;
}
return cachedResult;
}
In a certain page, this method is called simultaneously by 2 threads (because I need to display the entire collection of products twice, but with different filters).
Oddly, the first time I launch the application only one of the threads "wins" and receives the list of products, while the other receives null. If I refresh the page they both start to work fine (because at that point they fetch the result from the cache), but the first time it's only one or the other, they never both work.
I also tried to wrap the entire code in a lock statement, but it didn't change a thing. What am I missing?
private readonly CoreDbContext _dbContext;
private static readonly object Locker = new object();
public EcommerceProductService()
{
_dbContext = GetDbContext();
}
public IEnumerable<EcommerceProduct> GetAllCached()
{
lock (Locker)
{
var cachedResult = HttpContext.Current.Cache["EcommerceProductService.GetAllCached"] as IEnumerable<EcommerceProduct>;
if (cachedResult == null)
{
var result = _dbContext.EcommerceProducts.ToList();
HttpContext.Current.Cache.Insert("EcommerceProductService.GetAllCached", result);
return result;
}
return cachedResult;
}
}
Change your _locker declaration to
private static object _locker = new object();
As you have it now, each instance of EcommerceProductService assigns a new value to it, rendering locking useless.
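For reference, a minimal sketch of the usual pattern: a static lock object, a re-check inside the lock, and a fresh DbContext per load (DbContext instances are not thread-safe, so sharing _dbContext between the two threads is itself a likely source of trouble). The cache key, CoreDbContext and GetDbContext helper are assumed from the question.
private const string CacheKey = "EcommerceProductService.GetAllCached";
private static readonly object Locker = new object();

public IEnumerable<EcommerceProduct> GetAllCached()
{
    var cached = HttpContext.Current.Cache[CacheKey] as IEnumerable<EcommerceProduct>;
    if (cached != null)
        return cached;

    lock (Locker)
    {
        // Re-check: another thread may have populated the cache while we waited.
        cached = HttpContext.Current.Cache[CacheKey] as IEnumerable<EcommerceProduct>;
        if (cached != null)
            return cached;

        using (var db = GetDbContext()) // assumed helper from the question
        {
            var result = db.EcommerceProducts.ToList();
            HttpContext.Current.Cache.Insert(CacheKey, result);
            return result;
        }
    }
}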

How to dynamically lock strings but remove the lock objects from memory

I have the following situation:
I have a lot of threads in my project, and each thread processes one "key" at a time.
Two threads cannot process the same "key" at the same time, but my project processes a LOT of keys, so I can't keep all the "keys" in memory. I only need to record that a thread is currently processing a "key", and if another thread tries to process the same "key", that thread should wait in a lock clause.
Now I have the following structure:
public class Lock
{
private static object _lockObj = new object();
private static List<object> _lockListValues = new List<object>();
public static void Execute(object value, Action action)
{
lock (_lockObj)
{
if (!_lockListValues.Contains(value))
_lockListValues.Add(value);
}
lock (_lockListValues.First(x => x.Equals(value)))
{
action.Invoke();
}
}
}
It is working fine; the problem is that the keys aren't being removed from memory. The biggest difficulty is the multi-threading, because a "key" can be processed at any time.
How could I solve this without a global lock independent of the keys?
Sorry, but no, this is not the way it should be done.
First, you speak about keys, but you store them as object in a List and then search it with LINQ. That is exactly what a dictionary is for.
Second, regarding the object model: it is usually best to wrap the locking of some data in a small class, to keep it nice and clean, like this:
using System;
using System.Collections.Concurrent;

public class LockedObject<T>
{
    public readonly T data;
    public readonly int id;
    private readonly object obj = new object();

    public LockedObject(int id, T data)
    {
        this.id = id;
        this.data = data;
    }

    // Usually, if you have an Action related to some data,
    // it is better to receive that data as a parameter.
    public void InvokeAction(Action<T> action)
    {
        lock (obj)
        {
            action(data);
        }
    }
}

// Now it is a concurrency-safe object that applies an action to the given data,
// no matter how it is stored. But still, this is the best idea:
ConcurrentDictionary<int, LockedObject<T>> dict =
    new ConcurrentDictionary<int, LockedObject<T>>();
// You can insert, read and remove the objects concurrently.
But the best thing is yet to come! :) You can make it lock-free quite easily.
EDIT 1:
ConcurrentInvoke, a dictionary-like collection for concurrency-safe invocation of an action over data. There can be only one action at a time on a given key.
using System;
using System.Threading;
using System.Collections.Concurrent;
public class ConcurrentInvoke<TKey, TValue>
{
//we hate lock() :)
private class Data<TData>
{
public readonly TData data;
private int flag;
private Data(TData data)
{
this.data = data;
}
public static bool Contains<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key)
{
return dict.ContainsKey(key);
}
public static bool TryAdd<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, TData data)
{
return dict.TryAdd(key, new Data<TData>(data));
}
// Cannot remove if: the key does not exist, a remove of the key is
// already in progress, or an action invoke for the key is in progress.
public static bool TryRemove<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> action_removed = null)
{
Data<TData> data = null;
if (!dict.TryGetValue(key, out data)) return false;
var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
if (!access) return false;
Data<TData> data2 = null;
var removed = dict.TryRemove(key, out data2);
Interlocked.Exchange(ref data.flag, 0);
if (removed && action_removed != null) action_removed(key, data2.data);
return removed;
}
// Cannot invoke if: the key does not exist, a remove of the key is
// already in progress, or an action invoke for the key is in progress.
public static bool TryInvokeAction<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> invoke_action = null)
{
Data<TData> data = null;
if (invoke_action == null || !dict.TryGetValue(key, out data)) return false;
var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
if (!access) return false;
invoke_action(key, data.data);
Interlocked.Exchange(ref data.flag, 0);
return true;
}
}
private readonly ConcurrentDictionary<TKey, Data<TValue>> dict =
    new ConcurrentDictionary<TKey, Data<TValue>>();
public bool Contains(TKey key)
{
return Data<TValue>.Contains(dict, key);
}
public bool TryAdd(TKey key, TValue value)
{
return Data<TValue>.TryAdd(dict, key, value);
}
public bool TryRemove(TKey key, Action<TKey, TValue> removed = null)
{
return Data<TValue>.TryRemove(dict, key, removed);
}
public bool TryInvokeAction(TKey key, Action<TKey, TValue> invoke)
{
return Data<TValue>.TryInvokeAction(dict, key, invoke);
}
}
ConcurrentInvoke<int, string> concurrent_invoke = new ConcurrentInvoke<int, string>();
concurrent_invoke.TryAdd(1, "string 1");
concurrent_invoke.TryAdd(2, "string 2");
concurrent_invoke.TryAdd(3, "string 3");
concurrent_invoke.TryRemove(1);
concurrent_invoke.TryInvokeAction(3, (key, value) =>
{
Console.WriteLine("InvokingAction[key: {0}, value: {1}]", key, value);
});
I modified a KeyedLock class that I posted in another question, to use internally the Monitor class instead of SemaphoreSlims. I expected that using a specialized mechanism for synchronous locking would offer better performance, but I can't actually see any difference. I am posting it anyway because it has the added convenience feature of releasing the lock automatically with the using statement. This feature adds no significant overhead in the case of synchronous locking, so there is no reason to omit it.
Another reason that justifies this separate implementation is that the Monitor has different semantics than the SemaphoreSlim. The Monitor is reentrant while the SemaphoreSlim is not. A single thread is allowed to enter the Monitor multiple times, before finally Exiting an equal number of times. This is not possible with a SemaphoreSlim. If a thread makes an attempt to Wait a second time on a SemaphoreSlim(1, 1), it will most likely deadlock.
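A tiny sketch of that difference (illustrative only, not part of the KeyedMonitor code; requires using System.Threading):
var gate = new object();
lock (gate)
{
    lock (gate)
    {
        // Fine: Monitor (the lock statement) is reentrant on the same thread.
    }
}

var semaphore = new SemaphoreSlim(1, 1);
semaphore.Wait();
// semaphore.Wait(); // Would block forever: SemaphoreSlim has no notion of an
//                   // owning thread, so the same thread waiting again deadlocks.
semaphore.Release();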
The KeyedMonitor class stores internally only the locking objects that are currently in use, plus a small pool of locking objects that have been released and can be reused. This pool reduces significantly the memory allocations under heavy usage, at the cost of some added synchronization overhead.
public class KeyedMonitor<TKey>
{
private readonly Dictionary<TKey, (object, int)> _perKey;
private readonly Stack<object> _pool;
private readonly int _poolCapacity;
public KeyedMonitor(IEqualityComparer<TKey> keyComparer = null,
int poolCapacity = 10)
{
_perKey = new Dictionary<TKey, (object, int)>(keyComparer);
_pool = new Stack<object>(poolCapacity);
_poolCapacity = poolCapacity;
}
public ExitToken Enter(TKey key)
{
var locker = GetLocker(key);
Monitor.Enter(locker);
return new ExitToken(this, key);
}
// Abort-safe API
public void Enter(TKey key, ref bool lockTaken)
{
try { }
finally // Abort-safe block
{
var locker = GetLocker(key);
try { Monitor.Enter(locker, ref lockTaken); }
finally { if (!lockTaken) ReleaseLocker(key, withMonitorExit: false); }
}
}
public bool TryEnter(TKey key, int millisecondsTimeout)
{
var locker = GetLocker(key);
bool acquired = false;
try { acquired = Monitor.TryEnter(locker, millisecondsTimeout); }
finally { if (!acquired) ReleaseLocker(key, withMonitorExit: false); }
return acquired;
}
public void Exit(TKey key) => ReleaseLocker(key, withMonitorExit: true);
private object GetLocker(TKey key)
{
object locker;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
int counter;
(locker, counter) = entry;
counter++;
_perKey[key] = (locker, counter);
}
else
{
lock (_pool) locker = _pool.Count > 0 ? _pool.Pop() : null;
if (locker == null) locker = new object();
_perKey[key] = (locker, 1);
}
}
return locker;
}
private void ReleaseLocker(TKey key, bool withMonitorExit)
{
object locker; int counter;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
(locker, counter) = entry;
// It is important to allow a possible SynchronizationLockException
// to be surfaced before modifying the internal state of the class.
// That's why the Monitor.Exit should be called here.
// Exiting the Monitor while holding the inner lock should be safe.
if (withMonitorExit) Monitor.Exit(locker);
counter--;
if (counter == 0)
_perKey.Remove(key);
else
_perKey[key] = (locker, counter);
}
else
{
throw new InvalidOperationException("Key not found.");
}
}
if (counter == 0)
lock (_pool) if (_pool.Count < _poolCapacity) _pool.Push(locker);
}
public readonly struct ExitToken : IDisposable
{
private readonly KeyedMonitor<TKey> _parent;
private readonly TKey _key;
public ExitToken(KeyedMonitor<TKey> parent, TKey key)
{
_parent = parent; _key = key;
}
public void Dispose() => _parent?.Exit(_key);
}
}
Usage example:
var locker = new KeyedMonitor<string>();
using (locker.Enter("Hello"))
{
DoSomething(); // with the "Hello" resource
}
Although the KeyedMonitor class is thread-safe, it is not as robust as using the lock statement directly, because it offers no resilience in case of a ThreadAbortException. An aborted thread could leave the class in a corrupted internal state. I don't consider this to be a big issue, since the Thread.Abort method has become obsolete in the current version of the .NET platform (.NET 5).
For an explanation about why the IDisposable ExitToken struct is not boxed by the using statement, you can look here: If my struct implements IDisposable will it be boxed when used in a using statement? If this was not the case, the ExitToken feature would add significant overhead.
Caution: please don't store anywhere the ExitToken value returned by the KeyedMonitor.Enter method. There is no protection against misuse of this struct (like disposing it multiple times). The intended usage of this method is shown in the example above.
Update: I added an Enter overload that allows taking the lock with thread-abort resilience, albeit with an inconvenient syntax:
bool lockTaken = false;
try
{
locker.Enter("Hello", ref lockTaken);
DoSomething();
}
finally
{
if (lockTaken) locker.Exit("Hello");
}
As with the underlying Monitor class, the lockTaken is always true after a successful invocation of the Enter method. The lockTaken can be false only if the Enter throws an exception.

Locking a thread by user

I need a way to lock C# threads by user.
I have my data object, and I create a new instance for every user.
Every user has several threads that use this object, and during I/O operations I want to lock this object instance for that user only.
Using a simple lock {} locks all the object instances, therefore blocking other users.
I need some simple solution.
Edit
I build a new instance of MyDataObj per user;
then I run a job that updates some data in MyDataObj every minute.
Using lockObj as the lock locks the data for all the users (although it's not a static variable).
I only need to lock the data for the current user.
This is the code sample:
public sealed class MyDataObj
{
private static readonly Dictionary<object, MyDataObj> _instances = new Dictionary<object, MyDataObj>();
public object lockObj = new object();
public bool jobRunning = false;
private string data = string.Empty;
//// --------- constructor -------------------
private MyDataObj(int key)
{
LoadMyDataObj(key);
}
public static MyDataObj GetInstance(int key)
{
lock (_instances)
{
MyDataObj instance;
if (_instances.TryGetValue(key, out instance))
{
instance = _instances[key];
return instance;
}
instance = new MyDataObj(key);
return instance;
}
}
private void LoadMyDataObj(int key)
{
// get the data from db
}
public void UpdateMyData(string newData)
{
lock (lockObj)
{
this.data = newData;
}
}
public string ReadMyData()
{
lock (lockObj)
{
return this.data;
}
}
public class ActionObject
{
MyDataObj myDataObj;
int UserID;
//// --------- constructor -------------------
public ActionObject(int userid)
{
this.UserID = userid;
myDataObj = MyDataObj.GetInstance(userid);
if (!myDataObj.jobRunning)
{
jobs jbs = new jobs(myDataObj);
System.Threading.Thread RunJob = new System.Threading.Thread(new System.Threading.ThreadStart(jbs.dominutesAssignment));
RunJob.Start();
myDataObj.jobRunning = true;
}
}
public ActionObject()
{
myDataObj = MyDataObj.GetInstance(this.UserID);
myDataObj.UpdateMyData("some data");
}
}
public class jobs
{
MyDataObj myDataObj = null;
public jobs(MyDataObj grp)
{
this.myDataObj = grp;
}
public void dominutesAssignment()
{
while (true)
{
myDataObj.ReadMyData();
System.Threading.Thread.Sleep(1000);
}
}
}
}
I need a way to lock c# threads by user. I have my data object and I create new instance for every user
Create one lock per user. Or if the user exists longer than the threads: Use the user object as the lock.
lock (userOrTheUserObject)
{
//Do some op
}
Every user has several threads that use this object and in I.O. operations
That sounds more like you should use asynchronous I/O instead of creating several threads (which will be less efficient).
I want to lock this object instance for this user only. Using simple Lock {} is locking all the object instances, there for blocking other user.
If the object is shared between all users you HAVE to lock it using lock; it won't be protected otherwise. The other option is to redesign the object so it is not shared.
I need some simple solution.
There are no simple threading solutions.
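If one lock per user is the route you take, a minimal sketch (assuming the int user id used elsewhere in the question) could look like this:
using System.Collections.Concurrent;

public static class UserLocks
{
    // One lock object per user id; GetOrAdd hands every thread the same
    // object for the same user, so only that user's threads contend.
    private static readonly ConcurrentDictionary<int, object> _locks =
        new ConcurrentDictionary<int, object>();

    public static object For(int userId)
    {
        return _locks.GetOrAdd(userId, _ => new object());
    }
}

// Usage inside MyDataObj (or anywhere the user id is known):
// lock (UserLocks.For(userId))
// {
//     // I/O work for this user only; other users are not blocked.
// }
Note that the dictionary grows by one small object per user; if the number of users is unbounded, the cleanup problem discussed in the first question above applies here too.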
You can use Monitor. In this sample anyone but user 1 can execute the DoIt method concurrently. While user 1 is executing DoIt, no one else can enter it. A weak point is that if user 1 tries to execute DoIt while user 2 is already executing it, user 2 continues its execution. Also, you have to handle exceptions properly, otherwise there may be deadlocks.
private static readonly object lockObj = new Object();
public void Do(int userId)
{
Monitor.Enter(lockObj);
if (userId != 1)
Monitor.Exit(lockObj);
try
{
DoIt();
}
finally
{
if (userId == 1)
Monitor.Exit(lockObj);
}
}
public void DoIt()
{
// Do It
}

Better way to handle read-only access to state with another thread?

This is a design question, not a bug fix problem.
The situation is this. I have a lot of collections and objects contained in one class. Their contents are only changed by a single message handler thread. There is one other thread which is doing rendering. Each frame it iterates through some of these collections and draws to the screen based on the value of these objects. It does not alter the objects in any way, it is just reading their values.
Now, while the rendering is being done, if any of the collections are altered, my foreach loops in the rendering method fail. How should I make this thread safe? Edit: So I have to lock the collections around each foreach loop I run on them. This works, but it seems like a lot of repetitive code to solve this problem.
As a short, contrived example:
class State
{
public object LockObjects;
public List<object> Objects;
// Called by message handler thread
void HandleMessage()
{
lock (LockObjects)
{
Objects.Add(new object());
}
}
}
class Renderer
{
State m_state;
// Called by rendering thread
void Render()
{
lock (m_state.LockObjects)
{
foreach (var obj in m_state.Objects)
{
DrawObject(obj);
}
}
}
}
This is all well and good, but I'd rather not put locks on all my state collections if there's a better way. Is this "the right" way to do it or is there a better way?
A better way is to use separate lists for the two threads and synchronize them with an auto-reset event, for example. That keeps contention on your message handler thread very low and lets you have many render/message handler threads:
class State : IDisposable
{
private List<object> _objects;
private ReaderWriterLockSlim _locker;
private object _cacheLocker;
private List<object> _objectsCache;
private Thread _synchronizeThread;
private AutoResetEvent _synchronizationEvent;
private bool _abortThreadToken;
public State()
{
_objects = new List<object>();
_objectsCache = new List<object>();
_cacheLocker = new object();
_locker = new ReaderWriterLockSlim();
_synchronizationEvent = new AutoResetEvent(false);
_abortThreadToken = false;
_synchronizeThread = new Thread(Synchronize);
_synchronizeThread.Start();
}
private void Synchronize()
{
while (!_abortThreadToken)
{
_synchronizationEvent.WaitOne();
int objectsCacheCount;
lock (_cacheLocker)
{
objectsCacheCount = _objectsCache.Count;
}
if (objectsCacheCount > 0)
{
_locker.EnterWriteLock();
lock (_cacheLocker)
{
_objects.AddRange(_objectsCache);
_objectsCache.Clear();
}
_locker.ExitWriteLock();
}
}
}
public IEnumerator<object> GetEnumerator()
{
_locker.EnterReadLock();
foreach (var o in _objects)
{
yield return o;
}
_locker.ExitReadLock();
}
// Called by message handler thread
public void HandleMessage()
{
lock (_cacheLocker)
{
_objectsCache.Add(new object());
}
_synchronizationEvent.Set();
}
public void Dispose()
{
_abortThreadToken = true;
_synchronizationEvent.Set();
}
}
Or (the simpler way) you can use ReaderWriterLockSlim (or just plain locks, if you are sure you have only one reader), as in the following code:
class State
{
List<object> m_objects = new List<object>();
ReaderWriterLockSlim locker = new ReaderWriterLockSlim();
public IEnumerator<object> GetEnumerator()
{
locker.EnterReadLock();
foreach (var o in Objects)
{
yield return o;
}
locker.ExitReadLock();
}
private List<object> Objects
{
get { return m_objects; }
set { m_objects = value; }
}
// Called by message handler thread
public void HandleMessage()
{
locker.EnterWriteLock();
Objects.Add(new object());
locker.ExitWriteLock();
}
}
Hmm... have you tried a ReaderWriterLockSlim? Enclose each collection with one of these, and ensure you start a read or write operation each time you access it.
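A minimal sketch of that idea (the List<object> collection name is taken from the question; the try/finally guarantees the lock is released even if an exception is thrown):
using System;
using System.Collections.Generic;
using System.Threading;

class State
{
    private readonly List<object> _objects = new List<object>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // Message handler thread: exclusive write access.
    public void HandleMessage()
    {
        _lock.EnterWriteLock();
        try { _objects.Add(new object()); }
        finally { _lock.ExitWriteLock(); }
    }

    // Render thread: shared read access; multiple readers may run in parallel.
    public void RenderWith(Action<object> draw)
    {
        _lock.EnterReadLock();
        try
        {
            foreach (var obj in _objects)
                draw(obj);
        }
        finally { _lock.ExitReadLock(); }
    }
}
Compared with the iterator-based GetEnumerator above, doing the whole foreach inside the method keeps the read-lock scope explicit and avoids leaking the lock if the caller abandons enumeration early.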
