Suppose I have a Singleton that loads resources into memory when created, and performs operations on that data when its methods are called.
Now suppose that I want to be able to tell the Singleton to release those resources, because I don't expect to use them in the near future, but also to be able to load those resources back in when the time comes. And I want it all to be thread safe.
What would be the best way to approach this problem?
Would this example work?:
// Singleton implementation
...

private IDisposable resource;
private bool loadingResources;

private IDisposable Resource {
    get => resource ?? throw new CustomException();
}

// Method A
public void A() {
    var resource = Resource; // Throws CustomException if resource is null
    // Do stuff
}

// Method B
public void B() {
    var resource = Resource;
    // Do stuff
}

public void ReleaseResources() {
    if (resource != null)
        lock (thislock) {
            //resource.Dispose();
            resource = null;
        }
}

public void LoadResources() {
    if (!loadingResources && resource == null)
        lock (thislock)
            if (!loadingResources && resource == null)
            {
                loadingResources = true;
                // Load resources
                resource = CreateResource();
                loadingResources = false;
            }
}
I would suggest separating the resource handling from the actual usage. Assuming the resource requires disposal, this could look something like:
public class DisposableWrapper<T> where T : IDisposable
{
    private readonly Func<T> resourceFactory;
    private T resource;
    private bool constructed;
    private object lockObj = new object();
    private int currentUsers = 0;

    public DisposableWrapper(Func<T> resourceFactory)
    {
        this.resourceFactory = resourceFactory;
    }

    public O Run<O>(Func<T, O> func)
    {
        lock (lockObj)
        {
            // Lazily create the resource on first use.
            if (!constructed)
            {
                resource = resourceFactory();
                constructed = true;
            }
            currentUsers++;
        }
        try
        {
            return func(resource);
        }
        catch
        {
            // Swallow exceptions from the delegate and return a default value.
            return default;
        }
        finally
        {
            Interlocked.Decrement(ref currentUsers);
        }
    }

    public void Run(Action<T> action)
    {
        lock (lockObj)
        {
            if (!constructed)
            {
                resource = resourceFactory();
                constructed = true;
            }
            currentUsers++;
        }
        try
        {
            action(resource);
        }
        finally
        {
            Interlocked.Decrement(ref currentUsers);
        }
    }

    public bool TryRelease()
    {
        lock (lockObj)
        {
            // Only dispose when no caller is currently inside Run.
            if (currentUsers == 0 && constructed)
            {
                constructed = false;
                resource.Dispose();
                resource = default;
                return true;
            }
            return false;
        }
    }
}
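Usage could then look something like this (a rough sketch; MemoryStream just stands in for whatever disposable resource you actually load):

var wrapper = new DisposableWrapper<MemoryStream>(() => new MemoryStream());

// Consumers never touch the resource directly; they pass delegates in.
wrapper.Run(stream => stream.WriteByte(42));
long length = wrapper.Run(stream => stream.Length);

// Succeeds only when no caller is currently inside Run.
bool released = wrapper.TryRelease();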
If the resource does not require disposal, I would suggest using Lazy<T> instead. Releasing the resources would then simply mean replacing the existing Lazy<T> instance with a new one, letting the old object be cleaned up by the garbage collector.
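A minimal sketch of that idea (ExpensiveData is a hypothetical placeholder for the non-disposable resource type):

using System;
using System.Threading;

public sealed class ExpensiveData { /* placeholder for the loaded data */ }

public sealed class ResourceHolder
{
    // ExecutionAndPublication guarantees the factory runs at most once per Lazy<T>
    // instance, even when several threads request the value concurrently.
    private Lazy<ExpensiveData> lazy = CreateLazy();

    private static Lazy<ExpensiveData> CreateLazy() =>
        new Lazy<ExpensiveData>(() => new ExpensiveData(),
            LazyThreadSafetyMode.ExecutionAndPublication);

    public ExpensiveData Resource => lazy.Value;

    // "Releasing" just swaps in a fresh Lazy<T>; the old data becomes
    // unreachable and is reclaimed by the garbage collector.
    public void Release() => Interlocked.Exchange(ref lazy, CreateLazy());
}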
I have the following situation:
I have a lot of threads in my project, and each thread processes one "key" at a time.
Two threads cannot process the same "key" at the same time, but my project processes a LOT of keys, so I can't keep them all in memory. I only need to record in memory that a thread is currently processing a "key", so that if another thread tries to process the same "key" it waits in a lock clause.
Now I have the following structure:
public class Lock
{
    private static object _lockObj = new object();
    private static List<object> _lockListValues = new List<object>();

    public static void Execute(object value, Action action)
    {
        lock (_lockObj)
        {
            if (!_lockListValues.Contains(value))
                _lockListValues.Add(value);
        }

        lock (_lockListValues.First(x => x.Equals(value)))
        {
            action.Invoke();
        }
    }
}
It is working fine; the problem is that the keys are never removed from memory. The tricky part is the multithreading, because a "key" can start being processed at any time.
How could I solve this without a global lock that is independent of the keys?
Sorry, but no, this is not the way it should be done.
First, you speak about keys, but you store them as object in a List and then search that list with LINQ. A dictionary is the right data structure for that kind of lookup.
Second, the object model: it is usually best to wrap the locking of some data in a dedicated class, to keep it nice and clean, like:
using System;
using System.Collections.Concurrent;

public class LockedObject<T>
{
    public readonly T data;
    public readonly int id;

    private readonly object obj = new object();

    public LockedObject(int id, T data)
    {
        this.id = id;
        this.data = data;
    }

    // Usually, if you have an Action related to some data,
    // it is better to receive that data as a parameter.
    public void InvokeAction(Action<T> action)
    {
        lock (obj)
        {
            action(data);
        }
    }
}

// Now it is a concurrency-safe object applying some action
// on the given data, no matter how it is stored.
// But still, this is the best idea:

ConcurrentDictionary<int, LockedObject<T>> dict =
    new ConcurrentDictionary<int, LockedObject<T>>();

// You can insert, read and remove all objects concurrently.
But the best thing is yet to come! :) You can make it lock-free, and very easily!
EDIT1:
ConcurrentInvoke is a dictionary-like collection for concurrency-safe invocation of actions over data. Only one action at a time can run for a given key.
using System;
using System.Threading;
using System.Collections.Concurrent;

public class ConcurrentInvoke<TKey, TValue>
{
    // we hate lock() :)
    private class Data<TData>
    {
        public readonly TData data;
        private int flag;

        private Data(TData data)
        {
            this.data = data;
        }

        public static bool Contains<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key)
        {
            return dict.ContainsKey(key);
        }

        public static bool TryAdd<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, TData data)
        {
            return dict.TryAdd(key, new Data<TData>(data));
        }

        // Cannot remove if:
        //   the key does not exist,
        //   a remove of the key is already in progress,
        //   an invoke of an action for the key is in progress.
        public static bool TryRemove<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> action_removed = null)
        {
            Data<TData> data = null;
            if (!dict.TryGetValue(key, out data)) return false;
            var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
            if (!access) return false;
            Data<TData> data2 = null;
            var removed = dict.TryRemove(key, out data2);
            Interlocked.Exchange(ref data.flag, 0);
            if (removed && action_removed != null) action_removed(key, data2.data);
            return removed;
        }

        // Cannot invoke if:
        //   the key does not exist,
        //   a remove of the key is already in progress,
        //   an invoke of an action for the key is in progress.
        public static bool TryInvokeAction<TTKey>(ConcurrentDictionary<TTKey, Data<TData>> dict, TTKey key, Action<TTKey, TData> invoke_action = null)
        {
            Data<TData> data = null;
            if (invoke_action == null || !dict.TryGetValue(key, out data)) return false;
            var access = Interlocked.CompareExchange(ref data.flag, 1, 0) == 0;
            if (!access) return false;
            invoke_action(key, data.data);
            Interlocked.Exchange(ref data.flag, 0);
            return true;
        }
    }

    private readonly ConcurrentDictionary<TKey, Data<TValue>> dict =
        new ConcurrentDictionary<TKey, Data<TValue>>();

    public bool Contains(TKey key)
    {
        return Data<TValue>.Contains(dict, key);
    }

    public bool TryAdd(TKey key, TValue value)
    {
        return Data<TValue>.TryAdd(dict, key, value);
    }

    public bool TryRemove(TKey key, Action<TKey, TValue> removed = null)
    {
        return Data<TValue>.TryRemove(dict, key, removed);
    }

    public bool TryInvokeAction(TKey key, Action<TKey, TValue> invoke)
    {
        return Data<TValue>.TryInvokeAction(dict, key, invoke);
    }
}
ConcurrentInvoke<int, string> concurrent_invoke = new ConcurrentInvoke<int, string>();

concurrent_invoke.TryAdd(1, "string 1");
concurrent_invoke.TryAdd(2, "string 2");
concurrent_invoke.TryAdd(3, "string 3");

concurrent_invoke.TryRemove(1);

concurrent_invoke.TryInvokeAction(3, (key, value) =>
{
    Console.WriteLine("InvokingAction[key: {0}, value: {1}]", key, value);
});
I modified a KeyedLock class that I posted in another question, to use internally the Monitor class instead of SemaphoreSlims. I expected that using a specialized mechanism for synchronous locking would offer better performance, but I can't actually see any difference. I am posting it anyway because it has the added convenience feature of releasing the lock automatically with the using statement. This feature adds no significant overhead in the case of synchronous locking, so there is no reason to omit it.
Another reason that justifies this separate implementation is that the Monitor has different semantics than the SemaphoreSlim. The Monitor is reentrant while the SemaphoreSlim is not: a single thread is allowed to enter the Monitor multiple times before finally exiting it an equal number of times. This is not possible with a SemaphoreSlim; if a thread attempts to Wait on a SemaphoreSlim(1, 1) a second time, it will most likely deadlock.
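As a quick illustration (not part of the KeyedMonitor class), the following reentrant pattern works with Monitor/lock, but would deadlock if _gate were a SemaphoreSlim(1, 1) being waited on synchronously:

private readonly object _gate = new object();

public void Outer()
{
    lock (_gate)   // first acquisition
    {
        Inner();   // same thread re-enters the same lock: allowed by Monitor
    }
}

public void Inner()
{
    lock (_gate)   // second acquisition by the owning thread succeeds immediately
    {
        // ...
    }
}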
The KeyedMonitor class stores internally only the locking objects that are currently in use, plus a small pool of locking objects that have been released and can be reused. This pool reduces significantly the memory allocations under heavy usage, at the cost of some added synchronization overhead.
public class KeyedMonitor<TKey>
{
    private readonly Dictionary<TKey, (object, int)> _perKey;
    private readonly Stack<object> _pool;
    private readonly int _poolCapacity;

    public KeyedMonitor(IEqualityComparer<TKey> keyComparer = null,
        int poolCapacity = 10)
    {
        _perKey = new Dictionary<TKey, (object, int)>(keyComparer);
        _pool = new Stack<object>(poolCapacity);
        _poolCapacity = poolCapacity;
    }

    public ExitToken Enter(TKey key)
    {
        var locker = GetLocker(key);
        Monitor.Enter(locker);
        return new ExitToken(this, key);
    }

    // Abort-safe API
    public void Enter(TKey key, ref bool lockTaken)
    {
        try { }
        finally // Abort-safe block
        {
            var locker = GetLocker(key);
            try { Monitor.Enter(locker, ref lockTaken); }
            finally { if (!lockTaken) ReleaseLocker(key, withMonitorExit: false); }
        }
    }

    public bool TryEnter(TKey key, int millisecondsTimeout)
    {
        var locker = GetLocker(key);
        bool acquired = false;
        try { acquired = Monitor.TryEnter(locker, millisecondsTimeout); }
        finally { if (!acquired) ReleaseLocker(key, withMonitorExit: false); }
        return acquired;
    }

    public void Exit(TKey key) => ReleaseLocker(key, withMonitorExit: true);

    private object GetLocker(TKey key)
    {
        object locker;
        lock (_perKey)
        {
            if (_perKey.TryGetValue(key, out var entry))
            {
                int counter;
                (locker, counter) = entry;
                counter++;
                _perKey[key] = (locker, counter);
            }
            else
            {
                lock (_pool) locker = _pool.Count > 0 ? _pool.Pop() : null;
                if (locker == null) locker = new object();
                _perKey[key] = (locker, 1);
            }
        }
        return locker;
    }

    private void ReleaseLocker(TKey key, bool withMonitorExit)
    {
        object locker; int counter;
        lock (_perKey)
        {
            if (_perKey.TryGetValue(key, out var entry))
            {
                (locker, counter) = entry;
                // It is important to allow a possible SynchronizationLockException
                // to be surfaced before modifying the internal state of the class.
                // That's why the Monitor.Exit should be called here.
                // Exiting the Monitor while holding the inner lock should be safe.
                if (withMonitorExit) Monitor.Exit(locker);
                counter--;
                if (counter == 0)
                    _perKey.Remove(key);
                else
                    _perKey[key] = (locker, counter);
            }
            else
            {
                throw new InvalidOperationException("Key not found.");
            }
        }
        if (counter == 0)
            lock (_pool) if (_pool.Count < _poolCapacity) _pool.Push(locker);
    }

    public readonly struct ExitToken : IDisposable
    {
        private readonly KeyedMonitor<TKey> _parent;
        private readonly TKey _key;

        public ExitToken(KeyedMonitor<TKey> parent, TKey key)
        {
            _parent = parent; _key = key;
        }

        public void Dispose() => _parent?.Exit(_key);
    }
}
Usage example:
var locker = new KeyedMonitor<string>();
using (locker.Enter("Hello"))
{
    DoSomething(); // with the "Hello" resource
}
Although the KeyedMonitor class is thread-safe, it is not as robust as using the lock statement directly, because it offers no resilience in case of a ThreadAbortException. An aborted thread could leave the class in a corrupted internal state. I don't consider this to be a big issue, since the Thread.Abort method has become obsolete in the current version of the .NET platform (.NET 5).
For an explanation about why the IDisposable ExitToken struct is not boxed by the using statement, you can look here: If my struct implements IDisposable will it be boxed when used in a using statement? If this was not the case, the ExitToken feature would add significant overhead.
Caution: please don't store the ExitToken value returned by the KeyedMonitor.Enter method anywhere. There is no protection against misuse of this struct (like disposing it multiple times). The intended usage of this method is shown in the example above.
Update: I added an Enter overload that allows taking the lock with thread-abort resilience, albeit with an inconvenient syntax:
bool lockTaken = false;
try
{
    locker.Enter("Hello", ref lockTaken);
    DoSomething();
}
finally
{
    if (lockTaken) locker.Exit("Hello");
}
As with the underlying Monitor class, the lockTaken is always true after a successful invocation of the Enter method. The lockTaken can be false only if the Enter throws an exception.
I have an ASP.NET MVC program with the following singleton class:
public sealed class Foo
{
    private static volatile Foo _instance;
    private static object syncRoot = new Object();
    private List<Obj> _objList;

    private Foo()
    {
        _objList = new List<Obj>();
    }

    public static Foo Instance
    {
        get
        {
            if (_instance == null)
            {
                lock (syncRoot)
                {
                    _instance = new Foo();
                }
            }
            return _instance;
        }
    }

    public void AddObjToList(Obj _object)
    {
        lock (_instance)
        {
            _objList.Add(_object);
        }
    }

    public void FindAndRemoveObj(string id)
    {
        lock (_instance)
        {
            Obj _object = null;
            _object = _objList.FirstOrDefault(t => t.UniKey == id);
            if (_object != null)
            {
                _objList.Remove(_object);
            }
        }
    }
}
The first time a class gets the instance of this singleton it returns a new/clean instance of the Foo class, as expected, and populates the list. However, a second class that should remove items from the same list is receiving a new instance with an empty list.
This code has the lock in the wrong place:
if (_instance == null)
{
    lock (syncRoot)
    {
        _instance = new Foo();
    }
}
While thread 1 is creating _instance, a second thread will block on the lock and then create _instance anew once the lock is released, because there is no second null check inside the lock.
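A corrected getter re-checks the field inside the lock (classic double-checked locking), or you could simply use Lazy<Foo>. Roughly:

public static Foo Instance
{
    get
    {
        if (_instance == null)
        {
            lock (syncRoot)
            {
                // Re-check inside the lock: another thread may have already
                // created the instance while this one was waiting.
                if (_instance == null)
                    _instance = new Foo();
            }
        }
        return _instance;
    }
}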
Another thing to be careful about is that you should never rely on static variables in IIS across server lookups. The application pool can be recycled at any time.
The conclusion was that ASP.NET is creating new application domains, which causes a new singleton object each time. I looked into syncing objects between domains, but that felt too much like a workaround, so I decided to add a boolean column that is updated with the confirmation or failure.
Thanks everyone
I want to load some objects from the database and cache them. It's simple:
public class Dal {
    public Entity GetEntity(int id) {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item == null) {
            item = LoadEntityFromDatabase(id);
            Cache.Add(cacheKey, item);
        }
        return item;
    }
}
It's pretty simple. But when multiple threads want to access Dal.GetEntity, I want to lock on an object that is bound to the id. The scenario I can think of is something like this:
public class Dal {
    private static readonly ConcurrentDictionary<int, object>
        _locks = new ConcurrentDictionary<int, object>();

    public Entity GetEntity(int id) {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item == null) {
            var lockObject = _locks.GetOrAdd(id, new object());
            lock (lockObject) {
                if (item == null) {
                    // all exceptions are handled in this block.
                    item = LoadEntityFromDatabase(id);
                    Cache.Add(cacheKey, item);
                }
            }
            // * Here is the removing problem:
            _locks.TryRemove(id, out lockObject);
        }
        return item;
    }
}
The goal: I want to lock on an id while one thread is fetching it from the database. In other words, I want to prevent requesting the same object from the database twice. It seems to work, I think. But I'm not exactly a multithread programmer, nor a lock guy. So the question is: what risks am I taking with this scenario?
Also, I have this problem: how do I prevent the dictionary from getting larger and larger? As you can see, where I marked with a *, I'm trying to remove used items with the TryRemove method. But it seems a stupid move: what if it tries to remove an object while the object is locked by another thread? Am I right?
I would strongly advise you not to mix ConcurrentDictionary and locks.
Either work with locks and a normal Dictionary, or work with a ConcurrentDictionary and stay lock-free.
You are currently risking races, e.g. when three threads act on the same record: thread A removes the old lock, thread B uses the old lock, and thread C creates a new lock, so B and C might work in parallel.
Something like this (not tested):
public class Dal
{
    private sealed class Locker
    {
        private static readonly object DictionaryLocker = new object();
        private static readonly Dictionary<int, Locker>
            Locks = new Dictionary<int, Locker>();

        private readonly object _lockerObject;
        private int _num;

        private Locker()
        {
            _num = 0;
            _lockerObject = new object();
        }

        public static object StartLock(int id)
        {
            Locker locker;
            lock (DictionaryLocker)
            {
                if (!Locks.TryGetValue(id, out locker))
                {
                    locker = new Locker();
                    Locks.Add(id, locker);
                }
                ++locker._num;
            }
            return locker._lockerObject;
        }

        public static void EndLock(int id)
        {
            lock (DictionaryLocker)
            {
                Locker locker = Locks[id];
                --locker._num;
                if (locker._num == 0)
                    Locks.Remove(id);
            }
        }
    }

    public Entity GetEntity(int id)
    {
        var cacheKey = string.Format(".cache.key.{0}", id);
        var item = Cache.Get(cacheKey) as Entity;
        if (item != null)
            return item;

        object lockObject = Locker.StartLock(id);
        lock (lockObject)
        {
            // all exceptions are handled in this block.
            item = LoadEntityFromDatabase(id);
            Cache.Add(cacheKey, item);
            Locker.EndLock(id);
        }
        return item;
    }
}
EDIT: Changed to a plain Dictionary.
I need a way to lock C# threads by user.
I have my data object and I create a new instance for every user.
Every user has several threads that use this object, and during I/O operations I want to lock this object instance for this user only.
Using a simple lock {} locks all the object instances, therefore blocking other users.
I need some simple solution.
Edit
I build a new instance of MyDataObj per user,
then run a job that updates some data in MyDataObj every minute.
Using lockObj as the lock locks the data for all the users (although it's not a static variable).
I need to lock the data only for the current user.
This is the code sample:
public sealed class MyDataObj
{
    private static readonly Dictionary<object, MyDataObj> _instances = new Dictionary<object, MyDataObj>();

    public object lockObj = new object();
    public bool jobRunning = false;
    private string data = string.Empty;

    //// --------- constructor -------------------
    private MyDataObj(int key)
    {
        LoadMyDataObj(key);
    }

    public static MyDataObj GetInstance(int key)
    {
        lock (_instances)
        {
            MyDataObj instance;
            if (_instances.TryGetValue(key, out instance))
            {
                instance = _instances[key];
                return instance;
            }
            instance = new MyDataObj(key);
            return instance;
        }
    }

    private void LoadMyDataObj(int key)
    {
        // get the data from db
    }

    public void UpdateMyData(string newData)
    {
        lock (lockObj)
        {
            this.data = newData;
        }
    }

    public string ReadMyData()
    {
        lock (lockObj)
        {
            return this.data;
        }
    }

    public class ActionObject
    {
        MyDataObj myDataObj;
        int UserID;

        //// --------- constructor -------------------
        public ActionObject(int userid)
        {
            this.UserID = userid;
            myDataObj = MyDataObj.GetInstance(userid);
            if (!myDataObj.jobRunning)
            {
                jobs jbs = new jobs(myDataObj);
                System.Threading.Thread RunJob = new System.Threading.Thread(new System.Threading.ThreadStart(jbs.dominutesAssignment));
                RunJob.Start();
                myDataObj.jobRunning = true;
            }
        }

        public ActionObject()
        {
            myDataObj = MyDataObj.GetInstance(this.UserID);
            myDataObj.UpdateMyData("some data");
        }
    }

    public class jobs
    {
        MyDataObj myDataObj = null;

        public jobs(MyDataObj grp)
        {
            this.myDataObj = grp;
        }

        public void dominutesAssignment()
        {
            while (true)
            {
                myDataObj.ReadMyData();
                System.Threading.Thread.Sleep(1000);
            }
        }
    }
}
I need a way to lock C# threads by user. I have my data object and I create a new instance for every user
Create one lock per user. Or if the user exists longer than the threads: Use the user object as the lock.
lock (userOrTheUserObject)
{
    // Do some op
}
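If there is no convenient user object to lock on, a per-user lock map is one way to get "one lock per user"; a rough sketch (the UserLocks name is made up for illustration):

using System.Collections.Concurrent;

public static class UserLocks
{
    private static readonly ConcurrentDictionary<int, object> Locks =
        new ConcurrentDictionary<int, object>();

    // Returns the same lock object for the same user id every time.
    public static object For(int userId) => Locks.GetOrAdd(userId, _ => new object());
}

// Usage: only threads working on the same user block each other.
lock (UserLocks.For(userId))
{
    // Do the IO for this user.
}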
Every user has several threads that use this object and in I.O. operations
That sounds more like you should use asynchronous IO instead of creating several threads (which will be less efficient).
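For example, instead of dedicating a thread per user to blocking IO, the operation could be awaited so that a small number of thread-pool threads can serve many users (a hypothetical sketch; the method name and path parameter are made up):

using System.IO;
using System.Threading.Tasks;

public static async Task SaveUserDataAsync(string path, string data)
{
    // The thread is released back to the pool while the OS performs the write.
    using (var writer = new StreamWriter(path))
    {
        await writer.WriteAsync(data);
    }
}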
I want to lock this object instance for this user only. Using a simple lock {} locks all the object instances, therefore blocking other users.
If the object is shared between all users you HAVE to lock it using lock; the lock won't be very effective otherwise. The other option is to redesign the object so that it is not shared.
I need some simple solution.
There are no simple threading solutions.
You can use Monitor. In this sample anyone but user 1 can execute the DoIt method concurrently. While user 1 is executing DoIt, no one else can enter it. A weak point is that if user 1 tries to execute DoIt while user 2 is already executing it, user 2 continues its execution. Also, you have to handle exceptions properly, otherwise there may be deadlocks.
private static readonly object lockObj = new Object();

public void Do(int userId)
{
    Monitor.Enter(lockObj);
    if (userId != 1)
        Monitor.Exit(lockObj);
    try
    {
        DoIt();
    }
    finally
    {
        if (userId == 1)
            Monitor.Exit(lockObj);
    }
}

public void DoIt()
{
    // Do It
}