I have a class whose methods are accessed from multiple threads at the same time, and I want to make sure that only one thread can be inside the body of any of these methods at a time.
Can this code be refactored into something more generic (apart from locking inside the State property)?
public class StateManager : IStateManager
{
    private readonly object _lock = new object();

    public Guid? GetInfo1()
    {
        lock (_lock)
        {
            return State.Info1;
        }
    }

    public void SetInfo1(Guid guid)
    {
        lock (_lock)
        {
            State.Info1 = guid;
        }
    }

    public Guid? GetInfo2()
    {
        lock (_lock)
        {
            return State.Info2;
        }
    }

    public void SetInfo2(Guid guid)
    {
        lock (_lock)
        {
            State.Info2 = guid;
        }
    }
}
Maybe something like:
private void LockAndExecute(Action action)
{
    lock (_lock)
    {
        action();
    }
}
Then your methods might look like this:
public void DoSomething()
{
    LockAndExecute(() => Console.WriteLine("DoSomething"));
}

public int GetSomething()
{
    int i = 0;
    LockAndExecute(() => i = 1);
    return i;
}
I'm not sure that's really saving you very much, however, and return values are a bit of a pain.
You could work around that by adding another method like this:
private T LockAndExecute<T>(Func<T> function)
{
    lock (_lock)
    {
        return function();
    }
}
So now my GetSomething method is a lot cleaner:
public int GetSomething()
{
    return LockAndExecute(() => 1);
}
Again, not sure you are gaining much in terms of less typing, but at least you know every call is locking on the same object.
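For example, with both helpers in place, the original StateManager methods collapse to one-liners; a sketch, assuming the helpers live inside StateManager:
public Guid? GetInfo1()
{
    // Func<T> overload carries the return value out of the lock
    return LockAndExecute(() => State.Info1);
}

public void SetInfo1(Guid guid)
{
    // Action overload for the setter
    LockAndExecute(() => { State.Info1 = guid; });
}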
While your gains may be pretty minimal in the case where all you need to do is lock, I could imagine a case where you had a bunch of methods something like this:
public void DoSomething()
{
    // check some preconditions
    // maybe do some logging
    try
    {
        // do actual work here
    }
    catch (SomeException e)
    {
        // do some error handling
    }
}
In that case, extracting all the precondition checking and error handling into one place could be pretty useful:
private void CheckExecuteAndHandleErrors(Action action)
{
    // preconditions
    // logging
    try
    {
        action();
    }
    catch (SomeException e)
    {
        // handle errors
    }
}
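Each public method then shrinks to just its actual work; a sketch, where DoActualWork is a hypothetical placeholder for the real body:
public void DoSomething()
{
    // preconditions, logging and error handling all happen in one place
    CheckExecuteAndHandleErrors(() => DoActualWork());
}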
Using an Action or Func delegate.
Creating a method like
public T ExecuteMethodThreadSafe<T>(Func<T> methodToExecute)
{
    lock (_lock)
    {
        // the return is required so the result reaches the caller
        return methodToExecute.Invoke();
    }
}
and using it like
public Guid? GetInfo2()
{
    return ExecuteMethodThreadSafe(() => State.Info2);
}
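The setters need a void counterpart; a minimal sketch, assuming the same _lock field as above:
public void ExecuteMethodThreadSafe(Action methodToExecute)
{
    lock (_lock)
    {
        methodToExecute();
    }
}

public void SetInfo2(Guid guid)
{
    ExecuteMethodThreadSafe(() => { State.Info2 = guid; });
}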
I would like to add what I ended up putting together, using some of the ideas presented by Matt and Abhinav in order to generalize this and make it as seamless as possible to implement.
// (These are declared in a static class so they can be used as extension methods.)
private static readonly object Lock = new object();

public static void ExecuteMethodThreadSafe<T>(this T @object, Action<T> method) {
    lock (Lock) {
        method(@object);
    }
}

public static TResult ExecuteMethodThreadSafe<T, TResult>(this T @object, Func<T, TResult> method) {
    lock (Lock) {
        return method(@object);
    }
}
Which can then be extended in ways like this (if you want):
private static readonly Random Random = new Random();
public static T GetRandom<T>(Func<Random, T> method) => Random.ExecuteMethodThreadSafe(method);
And then when implemented could look something like this:
var bounds = new Collection<int>();
bounds.ExecuteMethodThreadSafe(list => list.Add(15)); // using the base method
int x = GetRandom(random => random.Next(-10, bounds[0])); // using the extended method
int y = GetRandom(random => random.Next(bounds[0])); // works with any method overload
I want to reduce multiple events into a single delayed action. After some trigger occurs I expect more similar triggers to come, but I prefer not to repeat the resulting delayed action. The action waits, to give the burst a chance to complete.
The question: How can I do it in an elegant, reusable way?
Until now I have used a property to flag the event and trigger a delayed action, like below:
public void SomeMethod()
{
    SomeFlag = true; // this will intentionally return to the caller before completing the resulting buffered actions.
}

private bool someFlag;

public bool SomeFlag
{
    get { return someFlag; }
    set
    {
        if (someFlag != value)
        {
            someFlag = value;
            if (value)
                SomeDelayedMethod(5000);
        }
    }
}

public async void SomeDelayedMethod(int delay)
{
    // some buffered work.
    await Task.Delay(delay);
    SomeFlag = false;
}
Below is a shorter way, but it is still not generic or reusable. I want something concise that packages the action and the flag, keeps the current behaviour (returning to the caller before execution is complete), and lets me pass an object reference to the action.
public void SerializeAccountsToConfig()
{
    if (!alreadyFlagged)
    {
        alreadyFlagged = true;
        SerializeDelayed(5000, Serialize);
    }
}

public async void SerializeDelayed(int delay, Action whatToDo)
{
    await Task.Delay(delay);
    whatToDo();
}

private bool alreadyFlagged;

private void Serialize()
{
    // some buffered work.
    //string json = JsonConvert.SerializeObject(Accounts, Formatting.Indented);
    //Settings1.Default.Accounts = json;
    //Settings1.Default.Save();
    alreadyFlagged = false;
}
Here's a thread-safe and reusable solution.
You can create an instance of DelayedSingleAction, passing the action you want performed to the constructor. I believe this is thread-safe, though there is a tiny risk that it will restart the timer just before the action commences; I think that risk would exist no matter what the solution is.
public class DelayedSingleAction
{
    private readonly Action _action;
    private readonly long _millisecondsDelay;
    private long _syncValue = 1;

    public DelayedSingleAction(Action action, long millisecondsDelay)
    {
        _action = action;
        _millisecondsDelay = millisecondsDelay;
    }

    private Task _waitingTask = null;

    private void DoActionAndClearTask(Task _)
    {
        Interlocked.Exchange(ref _syncValue, 1);
        _action();
    }

    public void PerformAction()
    {
        if (Interlocked.Exchange(ref _syncValue, 0) == 1)
        {
            _waitingTask = Task.Delay(TimeSpan.FromMilliseconds(_millisecondsDelay))
                .ContinueWith(DoActionAndClearTask);
        }
    }

    public Task Complete()
    {
        return _waitingTask ?? Task.FromResult(0);
    }
}
See this dotnetfiddle for an example which invokes one action continuously from multiple threads.
https://dotnetfiddle.net/el14wZ
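Applied to the Serialize scenario from the question, it might be wired up like this (a sketch; the containing class name and field name are illustrative):
private DelayedSingleAction _serializeDelayed;

public AccountsManager() // hypothetical containing class
{
    // Serialize will run at most once per 5-second burst of calls.
    _serializeDelayed = new DelayedSingleAction(Serialize, 5000);
}

public void SerializeAccountsToConfig()
{
    // Returns to the caller immediately; the delayed action is scheduled at most once per burst.
    _serializeDelayed.PerformAction();
}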
Since you're interested in Rx, here is a simple console app sample:
static void Main(string[] args)
{
    // event source
    var burstEvents = Observable.Interval(TimeSpan.FromMilliseconds(50));

    var subscription = burstEvents
        .Buffer(TimeSpan.FromSeconds(3)) // collect events for 3 seconds
        //.Buffer(50) // or collect 50 events
        .Subscribe(events =>
        {
            //Console.WriteLine(events.First()); // take only the first event
            // or process the event collection
            foreach (var e in events)
                Console.Write(e + " ");
            Console.WriteLine();
        });

    Console.ReadLine();
}
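If the goal is strictly one action per burst rather than fixed time windows, Rx's Throttle operator is worth mentioning too; a sketch (note that it only fires after the source has been quiet for the given interval, so it suits bursty sources rather than a continuous Interval like the one above):
var settled = burstEvents
    .Throttle(TimeSpan.FromSeconds(3)) // emit only the last event, 3 seconds after the burst stops
    .Subscribe(e => Console.WriteLine("Burst finished, last event: " + e));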
Based on the solution proposed by Andrew, here is a more generic version.
Declaration and instance creation of the delayed action:
public DelayedSingleAction<Account> SendMailD;
Create the instance inside a function or in the constructor (this can be a collection of such actions each working on a different object):
SendMailD = new DelayedSingleAction<Account>(SendMail, AccountRef, 5000);
Repeatedly call this action:
SendMailD.PerformAction();
SendMail is the action whose bursts you want to control. Its signature matches:
public int SendMail(Account A)
{}
Here is the updated class
public class DelayedSingleAction<T>
{
    private readonly Func<T, int> actionOnObj;
    private T tInstance;
    private readonly long millisecondsDelay;
    private long _syncValue = 1;

    public DelayedSingleAction(Func<T, int> ActionOnObj, T TInstance, long MillisecondsDelay)
    {
        actionOnObj = ActionOnObj;
        tInstance = TInstance;
        millisecondsDelay = MillisecondsDelay;
    }

    private Task _waitingTask = null;

    private void DoActionAndClearTask(Task _)
    {
        Console.WriteLine(string.Format("{0:h:mm:ss.fff} DelayedSingleAction Resetting SyncObject: Thread {1} for {2}", DateTime.Now, System.Threading.Thread.CurrentThread.ManagedThreadId, tInstance));
        Interlocked.Exchange(ref _syncValue, 1);
        actionOnObj(tInstance);
    }

    public void PerformAction()
    {
        if (Interlocked.Exchange(ref _syncValue, 0) == 1)
        {
            Console.WriteLine(string.Format("{0:h:mm:ss.fff} DelayedSingleAction Starting the timer: Thread {1} for {2}", DateTime.Now, System.Threading.Thread.CurrentThread.ManagedThreadId, tInstance));
            _waitingTask = Task.Delay(TimeSpan.FromMilliseconds(millisecondsDelay)).ContinueWith(DoActionAndClearTask);
        }
    }

    public Task Complete()
    {
        return _waitingTask ?? Task.FromResult(0);
    }
}
Let me try to simplify my question:
I have four classes: Admins, Users, Players, Roles
The database returns the names of methods that I will need to execute. For example, if Admins_GetName is returned, then the GetName() method will need to be executed on the Admins class.
If Players_GetRank is returned, then the GetRank() method will need to be called on the Players class.
I don't want to write a huge IF or SWITCH statement with all my business logic in it. What would be the most efficient solution WITHOUT using reflection? If possible, I would like to avoid the performance hit that reflection brings.
Keep in mind that the methods may have different parameters, but all of them will return strings.
Here is what I'm thinking of doing now:
1) Have a method with a switch statement that will break apart the database value and find the class and method I need to execute.
Something like:
switch (DbValue)
{
    case "Admins_GetName":
        // declare a delegate to Admins.GetName();
        return;
    case "Players_GetRank":
        // declare a delegate to Players.GetRank();
        return;
    // ...
    // etc.
}
// return the class/method reference;
2) Pass the declaration from above to:
var myValue = retrievedMethod.Invoke();
Can you suggest the best way to accomplish this, or help me with the correct syntax for my idea of how to implement it?
Thank you.
Needs a little more context; for example, do all the methods in question have the same signature? In the general case, reflection is the most appropriate tool for this, and as long as you aren't calling it in a tight loop, it will be fine.
Otherwise, the switch statement approach is reasonable, but has some maintenance overhead. If that is problematic, I would be tempted to build a delegate cache at runtime, for example:
using System;
using System.Collections.Generic;

public class Program
{
    public string Bar { get; set; }

    static void Main()
    {
        var foo = new Foo();
        FooUtils.Execute(foo, "B");
        FooUtils.Execute(foo, "D");
    }
}

static class FooUtils
{
    public static void Execute(Foo foo, string methodName)
    {
        methodCache[methodName](foo);
    }

    static readonly Dictionary<string, Action<Foo>> methodCache;

    static FooUtils()
    {
        methodCache = new Dictionary<string, Action<Foo>>();
        foreach (var method in typeof(Foo).GetMethods())
        {
            if (!method.IsStatic && method.ReturnType == typeof(void)
                && method.GetParameters().Length == 0)
            {
                methodCache.Add(method.Name, (Action<Foo>)
                    Delegate.CreateDelegate(typeof(Action<Foo>), method));
            }
        }
    }
}

public class Foo
{
    public void A() { Console.WriteLine("A"); }
    public void B() { Console.WriteLine("B"); }
    public void C() { Console.WriteLine("C"); }
    public void D() { Console.WriteLine("D"); }
    public string Ignored(int a) { return ""; }
}
That approach can be extended to multiple target types by using generics:
static class FooUtils
{
    public static void Execute<T>(T target, string methodName)
    {
        MethodCache<T>.Execute(target, methodName);
    }

    static class MethodCache<T>
    {
        public static void Execute(T target, string methodName)
        {
            methodCache[methodName](target);
        }

        static readonly Dictionary<string, Action<T>> methodCache;

        static MethodCache()
        {
            methodCache = new Dictionary<string, Action<T>>();
            foreach (var method in typeof(T).GetMethods())
            {
                if (!method.IsStatic && method.ReturnType == typeof(void)
                    && method.GetParameters().Length == 0)
                {
                    methodCache.Add(method.Name, (Action<T>)
                        Delegate.CreateDelegate(typeof(Action<T>), method));
                }
            }
        }
    }
}
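Usage is unchanged, but it now works for any type that exposes parameterless void methods; a short sketch reusing the Foo class from above:
var foo = new Foo();
FooUtils.Execute(foo, "A"); // reflection runs once per type, in MethodCache<Foo>'s static constructor
FooUtils.Execute(foo, "C"); // subsequent calls are plain cached-delegate invocations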
I would like to implement lazy loading on properties with PostSharp.
To make it short, instead of writing
SomeType _field = null;

private SomeType Field
{
    get
    {
        if (_field == null)
        {
            _field = LongOperation();
        }
        return _field;
    }
}
I would like to write
[LazyLoadAspect]
private SomeType Field
{
    get
    {
        return LongOperation();
    }
}
So I can see that I need to emit some code into the class to generate the backing field, as well as into the getter method to implement the null check.
With PostSharp, I was considering overriding CompileTimeInitialize, but I am missing the knowledge needed to get a handle on the compiled code.
EDIT:
The question can be extended to any parameterless method like:
SomeType _lazyLoadedField = null;

SomeType LazyLoadableMethod()
{
    if (_lazyLoadedField == null)
    {
        // Long operations code...
        _lazyLoadedField = someType;
    }
    return _lazyLoadedField;
}
would become
[LazyLoad]
SomeType LazyLoadableMethod()
{
    // Long operations code...
    return someType;
}
After our comments, I think I know what you want now.
[Serializable]
public class LazyLoadGetter : LocationInterceptionAspect, IInstanceScopedAspect
{
    private object backing;

    public override void OnGetValue(LocationInterceptionArgs args)
    {
        if (backing == null)
        {
            args.ProceedGetValue();
            backing = args.Value;
        }
        args.Value = backing;
    }

    public object CreateInstance(AdviceArgs adviceArgs)
    {
        return this.MemberwiseClone();
    }

    public void RuntimeInitializeInstance()
    {
    }
}
Test code
public class test
{
    [LazyLoadGetter]
    public int MyProperty { get { return LongOperation(); } }
}
Thanks to DustinDavis's answer and comments, I was able to work on my own implementation, and I just wanted to share it here to help other people.
The main differences from the original answer are:
Implemented the suggested "only run the operation once" behaviour (that is the purpose of the lock).
Made the initialization status of the backing field more reliable by moving that responsibility to a boolean flag.
Here is the code:
[Serializable]
public class LazyLoadAttribute : LocationInterceptionAspect, IInstanceScopedAspect
{
    // Concurrent accesses management.
    private readonly object _locker = new object();
    // The backing field where the loaded value is stored the first time.
    private object _backingField;
    // More reliable than checking _backingField for null, as the result of the loading could itself be null.
    private bool _hasBeenLoaded = false;

    public override void OnGetValue(LocationInterceptionArgs args)
    {
        if (_hasBeenLoaded)
        {
            // Job already done.
            args.Value = _backingField;
            return;
        }

        lock (_locker)
        {
            // Once the lock is acquired, check whether the value was loaded in the meantime.
            if (_hasBeenLoaded)
            {
                args.Value = _backingField;
                return;
            }

            // First call to the getter => need to load it.
            args.ProceedGetValue();
            // Indicate that we loaded it.
            _hasBeenLoaded = true;
            // Store the result.
            _backingField = args.Value;
        }
    }

    public object CreateInstance(AdviceArgs adviceArgs)
    {
        return MemberwiseClone();
    }

    public void RuntimeInitializeInstance() { }
}
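Usage mirrors the earlier test class; a minimal sketch, with SomeType and LongOperation standing in for the real property type and the expensive call:
public class Test
{
    [LazyLoad]
    public SomeType Field { get { return LongOperation(); } }
}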
I think the requirement cannot be accurately described as 'lazy loading', but is a special case of a more general caching aspect with in-AppDomain storage but without eviction. A general caching aspect would be able to handle method parameters.
I'm writing a wrapper around a 3rd party library, and it has a method to scan the data it manages. The method takes a callback method that it calls for each item in the data that it finds.
e.g. The method is essentially: void Scan(Action<object> callback);
I want to wrap it and expose a method like IEnumerable<object> Scan();
Is this possible without resorting to a separate thread to do the actual scan and a buffer?
You can do this quite simply with Reactive:
class Program
{
    static void Main(string[] args)
    {
        foreach (var x in CallBackToEnumerable<int>(Scan))
            Console.WriteLine(x);
    }

    static IEnumerable<T> CallBackToEnumerable<T>(Action<Action<T>> functionReceivingCallback)
    {
        return Observable.Create<T>(o =>
        {
            // Schedule this onto another thread, otherwise it will block:
            Scheduler.Later.Schedule(() =>
            {
                functionReceivingCallback(o.OnNext);
                o.OnCompleted();
            });
            return () => { };
        }).ToEnumerable();
    }

    public static void Scan(Action<int> act)
    {
        for (int i = 0; i < 100; i++)
        {
            // Delay to prove this is working asynchronously.
            Thread.Sleep(100);
            act(i);
        }
    }
}
Remember that this doesn't take care of things like cancellation, since the callback method doesn't really allow it. A proper solution would require work on the part of the external library.
You should investigate the Rx project — this allows an event source to be consumed as an IEnumerable.
I'm not sure if it allows vanilla callbacks to be presented as such (it's aimed at .NET events) but it would be worth a look as it should be possible to present a regular callback as an IObservable.
Here is a blocking enumerator (the Scan method needs to run in a separate thread)
public class MyEnumerator : IEnumerator<object>
{
    private readonly Queue<object> _queue = new Queue<object>();
    private ManualResetEvent _event = new ManualResetEvent(false);

    public void Callback(object value)
    {
        lock (_queue)
        {
            _queue.Enqueue(value);
            _event.Set();
        }
    }

    public void Dispose()
    {
    }

    public bool MoveNext()
    {
        _event.WaitOne();
        lock (_queue)
        {
            Current = _queue.Dequeue();
            if (_queue.Count == 0)
                _event.Reset();
        }
        return true;
    }

    public void Reset()
    {
        _queue.Clear();
    }

    public object Current { get; private set; }

    object IEnumerator.Current
    {
        get { return Current; }
    }
}
static void Main(string[] args)
{
    var enumerator = new MyEnumerator();
    // As noted above, Scan would normally be started on a separate thread;
    // it is called inline here only to keep the example short.
    Scan(enumerator.Callback);
    while (enumerator.MoveNext())
    {
        Console.WriteLine(enumerator.Current);
    }
}
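For comparison, on .NET 4.5+ the same idea can be packaged with BlockingCollection<T>, which handles the queue, the locking, the blocking wait, and also lets the enumeration end; a sketch, assuming the callback-taking Scan shown earlier:
static IEnumerable<object> ScanAsEnumerable(Action<Action<object>> scan)
{
    var buffer = new BlockingCollection<object>();

    // Run the callback-based scan on a background task and close the collection when it finishes.
    Task.Run(() =>
    {
        try { scan(item => buffer.Add(item)); }
        finally { buffer.CompleteAdding(); }
    });

    // Blocks until items arrive and completes once CompleteAdding has been called.
    return buffer.GetConsumingEnumerable();
}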
You could wrap it in a simple IEnumerable<Object>, but I would not recommend it. IEnumerable implies that you can run multiple enumerators over the same source, which you can't in this case.
How about this one:
IEnumerable<Object> Scan()
{
    List<Object> objList = new List<Object>();
    Action<Object> action = (obj) => { objList.Add(obj); };
    Scan(action);
    return objList;
}
Take a look at the yield keyword, which allows you to write a method that looks like it returns an IEnumerable but actually does processing for each returned value.
The thing is, I've been using the lock statement to protect a critical part of my code, but now I realize I could allow concurrent execution of that critical code if some conditions are met.
Is there a way to make the lock conditional?
bool locked = false;
if (condition) {
    Monitor.Enter(lockObject);
    locked = true;
}
try {
    // possibly critical section
}
finally {
    if (locked) Monitor.Exit(lockObject);
}
EDIT: yes, there is a race condition unless you can ensure that the condition is constant while threads are entering.
I'm no threading expert, but it sounds like you might be looking for something like this (double-checked locking). The idea is to check the condition both before and after acquiring the lock.
private static object lockHolder = new object();

if (ActionIsValid()) {
    lock (lockHolder) {
        if (ActionIsValid()) {
            DoSomething();
        }
    }
}
Action doThatThing = someMethod;
if (condition)
{
    lock (thatThing)
    {
        doThatThing();
    }
}
else
{
    doThatThing();
}
Actually, to avoid a race condition, I'd be tempted to use a ReaderWriterLockSlim here: treat concurrent access as a read lock, and exclusive access as a write lock. That way, if the conditions change you won't end up with some inappropriate code still executing blindly in the region (under the false assumption that it is safe). It's a bit verbose, but here it is
(formatted for space):
if (someCondition) {
    lockObj.EnterReadLock();
    try { Foo(); }
    finally { lockObj.ExitReadLock(); }
} else {
    lockObj.EnterWriteLock();
    try { Foo(); }
    finally { lockObj.ExitWriteLock(); }
}
If you have many methods/properties that require conditional locking, you don't want to repeat the same pattern over and over again. I propose the following trick:
Non-repetitive conditional-lock pattern
With a private helper struct implementing IDisposable we can encapsulate the condition/lock without measurable overhead.
public void DoStuff()
{
    using (ConditionalLock())
    {
        // Thread-safe code
    }
}
It's quite easy to implement. Here's a sample class demonstrating this pattern:
public class Counter
{
    private static readonly int MAX_COUNT = 100;
    private readonly bool synchronized;
    private int count;
    private readonly object lockObject = new object();
    private int lockCount;

    public Counter(bool synchronized)
    {
        this.synchronized = synchronized;
    }

    public int Count
    {
        get
        {
            using (ConditionalLock())
            {
                return count;
            }
        }
    }

    public int LockCount
    {
        get
        {
            using (ConditionalLock())
            {
                return lockCount;
            }
        }
    }

    public void Increase()
    {
        using (ConditionalLock())
        {
            if (count < MAX_COUNT)
            {
                Thread.Sleep(10);
                ++count;
            }
        }
    }

    private LockHelper ConditionalLock() => new LockHelper(this);

    // This is where the magic happens!
    private readonly struct LockHelper : IDisposable
    {
        private readonly Counter counter;
        private readonly bool lockTaken;

        public LockHelper(Counter counter)
        {
            this.counter = counter;
            lockTaken = false;
            if (counter.synchronized)
            {
                Monitor.Enter(counter.lockObject, ref lockTaken);
                counter.lockCount++;
            }
        }

        private void Exit()
        {
            if (lockTaken)
            {
                Monitor.Exit(counter.lockObject);
            }
        }

        void IDisposable.Dispose() => Exit();
    }
}
Now, let's create a small sample program demonstrating its correctness.
class Program
{
    static void Main(string[] args)
    {
        var c1 = new Counter(synchronized: false);
        IncreaseToMax(c1);

        var c2 = new Counter(synchronized: true);
        var t1 = Task.Factory.StartNew(() => IncreaseToMax(c2));
        var t2 = Task.Factory.StartNew(() => IncreaseToMax(c2));
        var t3 = Task.Factory.StartNew(() => IncreaseToMax(c2));
        Task.WaitAll(t1, t2, t3);

        Console.WriteLine($"Counter(false) => Count = {c1.Count}, LockCount = {c1.LockCount}");
        Console.WriteLine($"Counter(true) => Count = {c2.Count}, LockCount = {c2.LockCount}");
    }

    private static void IncreaseToMax(Counter counter)
    {
        for (int i = 0; i < 1000; i++)
        {
            counter.Increase();
        }
    }
}
Output:
Counter(false) => Count = 100, LockCount = 0
Counter(true) => Count = 100, LockCount = 3002
Now you can let the caller decide whether locking (costly) is needed.
I'm guessing you've got some code that looks a little like this:
private Monkey GetScaryMonkey(int numberOfHeads) {
    Monkey ape = null;
    lock (this) {
        ape = new Monkey();
        ape.AddHeads(numberOfHeads);
    }
    return ape;
}
To make this conditional couldn't you just do this:
private Monkey GetScaryMonkey(int numberOfHeads) {
    if (numberOfHeads > 1) {
        lock (this) {
            return CreateNewMonkey(numberOfHeads);
        }
    }
    return CreateNewMonkey(numberOfHeads);
}
Should work, no?
Use the double-checked locking pattern, as suggested above. That's the trick, IMO :)
Make sure your lock object is static, as shown in not.that.dave.foley.myopenid.com's example.