Return value if match, otherwise continue - C#

This is probably not possible, but the OCD in me wants to at least ask if there is a way :)
I have this method:
public async Task<List<Strategy>> Handle(StrategyList query, CancellationToken cancellationToken)
{
return _attemptHandler.List();
}
I have now added some Attempts to help with error handling, so my code changed to this:
public async Task<Attempt<List<Strategy>>> Handle(StrategyList query, CancellationToken cancellationToken)
{
var attempt = _attemptHandler.List();
if (attempt.Failed) return attempt.Error;
return attempt.Result.ToList();
}
Think of an Attempt like an IdentityResult.
What I would like to do is completely remove that second line so it becomes something like this:
public async Task<Attempt<List<Strategy>>> Handle(StrategyList query, CancellationToken cancellationToken)
{
var attempt = _attemptHandler.List().ThrowIfError();
return attempt.Result.ToList();
}
So basically, if there was an error trying to get the list, return that error (from inside the ThrowIfError method), but if there wasn't, continue on to return attempt.Result.ToList().
Is this possible?
You might be asking why. The use case above doesn't look like much, but there are places where I have to check multiple attempts, and I would like to do that without writing the same code over and over (i.e. if (attempt.Failed) return attempt.Error;).
Here is an example of multiple attempts:
public async Task<Attempt<Strategy>> Handle(StrategySave query, CancellationToken cancellationToken)
{
var request = query.Model;
_strategyValidator.Validate(request);
if (request.Id == 0)
{
var attempt = _attemptHandler.Create(request);
if (attempt.Failed) return attempt.Error;
}
else
{
var attempt = _attemptHandler.List();
if (attempt.Failed) return attempt.Error;
var strategy = attempt.Result.ToList().SingleOrDefault(m => m.Id.Equals(query.Model.Id));
if (strategy == null) return new NotFoundError(nameof(Strategy), query.Model.Id.ToString());
strategy.Url = request.Url;
var updateAttempt = _attemptHandler.Update(strategy);
if (updateAttempt.Failed) return updateAttempt.Error;
}
var saveAttempt = await _attemptHandler.SaveChangesAsync();
if (saveAttempt.Failed) return saveAttempt.Error;
return request;
}

Here is:
- a simple implementation of an Attempt<TResult> class that lets you do what you want to achieve, and
- a unit test that demonstrates how it is used.
To simplify, the example uses a List<string> as the result type. The HandleAsync method corresponds to your Handle method. MakeAttemptAsync() is comparable to your attemptHandler.List().
/// <summary>
/// Utility class that helps shorten the calling code.
/// </summary>
public static class Attempt
{
public static async Task<Attempt<TResult>> ResultAsync<TResult>(Task<TResult> task)
{
return await Attempt<TResult>.ResultAsync(task);
}
public static Attempt<TResult> ResultOf<TResult>(Func<TResult> func)
{
return Attempt<TResult>.ResultOf(func);
}
}
/// <summary>
/// Represents a successful or failed attempt.
/// </summary>
/// <typeparam name="TResult">The result type.</typeparam>
public class Attempt<TResult>
{
private Attempt(TResult result, bool success, Exception exception)
{
Result = result;
Success = success;
Exception = exception;
}
public TResult Result { get; }
public bool Success { get; }
public Exception Exception { get; }
public static async Task<Attempt<TResult>> ResultAsync(Task<TResult> task)
{
try
{
TResult result = await task;
return new Attempt<TResult>(result, true, null);
}
catch (Exception ex)
{
return new Attempt<TResult>(default, false, ex);
}
}
public static Attempt<TResult> ResultOf(Func<TResult> func)
{
try
{
TResult result = func();
return new Attempt<TResult>(result, true, null);
}
catch (Exception ex)
{
return new Attempt<TResult>(default, false, ex);
}
}
}
public class AttemptsTests
{
private static readonly List<string> SuccessList = new List<string> { "a", "b", "c" };
/// <summary>
/// Simple demonstrator for a short, synchronous handler making use of the
/// Attempt class, called with flag equal to true or false to simulate
/// success or failure of the MakeAttemptAsync method.
/// </summary>
private static Attempt<List<string>> Handle(bool flag)
{
return Attempt.ResultOf(() => MakeAttempt(flag));
}
/// <summary>
/// Simple demonstrator for a short, asynchronous handler making use of the
/// Attempt class, called with flag equal to true or false to simulate
/// success or failure of the MakeAttemptAsync method.
/// </summary>
private static async Task<Attempt<List<string>>> HandleAsync(bool flag)
{
Task<List<string>> task = MakeAttemptAsync(flag);
return await Attempt.ResultAsync(task);
}
/// <summary>
/// Simple dummy method that returns a List or throws an exception.
/// </summary>
private static List<string> MakeAttempt(bool flag)
{
return flag
? SuccessList
: throw new Exception("Failed attempt");
}
/// <summary>
/// Simple dummy method that returns a successful or failed task.
/// </summary>
private static Task<List<string>> MakeAttemptAsync(bool flag)
{
return flag
? Task.FromResult(SuccessList)
: Task.FromException<List<string>>(new Exception("Failed attempt"));
}
[Fact]
public void Handle_Failure_ExceptionReturned()
{
Attempt<List<string>> attempt = Handle(false);
Assert.False(attempt.Success);
Assert.Null(attempt.Result);
Assert.Equal("Failed attempt", attempt.Exception.Message);
}
[Fact]
public void Handle_Success_ListReturned()
{
Attempt<List<string>> attempt = Handle(true);
Assert.True(attempt.Success);
Assert.Equal(SuccessList, attempt.Result);
Assert.Null(attempt.Exception);
}
[Fact]
public async Task HandleAsync_Failure_ExceptionReturned()
{
Attempt<List<string>> attempt = await HandleAsync(false);
Assert.False(attempt.Success);
Assert.Null(attempt.Result);
Assert.Equal("Failed attempt", attempt.Exception.Message);
}
[Fact]
public async Task HandleAsync_Success_ListReturned()
{
Attempt<List<string>> attempt = await HandleAsync(true);
Assert.True(attempt.Success);
Assert.Equal(SuccessList, attempt.Result);
Assert.Null(attempt.Exception);
}
}
Update 2020-01-26
I amended the above example by adding a new static Attempt utility class that helps shorten the calling code. For example, instead of writing:
return await Attempt<List<string>>.ResultAsync(task);
you can write:
return await Attempt.ResultAsync(task);
as TResult is implicit from the task parameter. Secondly, I added a ResultOf method that takes a Func<TResult>, so you don't need to use Task.FromResult to turn a synchronous result into a task.
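Applied to the handler from the question, the call site could then shrink to something like this (a sketch; ListStrategiesAsync() is a hypothetical method that returns the raw Task<List<Strategy>> and faults on failure, playing the role of MakeAttemptAsync above):
public Task<Attempt<List<Strategy>>> Handle(StrategyList query, CancellationToken cancellationToken)
{
    // Any exception thrown inside ListStrategiesAsync() becomes a failed Attempt.
    return Attempt.ResultAsync(ListStrategiesAsync());
}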

You can look into "Railway Oriented Programming", which is exactly what you are trying to achieve.
For example, with multiple attempts where the next attempt should be executed only when the previous one succeeds:
public Attempt<List<Strategy>> Process(params AttemptHandler[] handlers)
{
var attempt = default(Attempt<List<Strategy>>);
foreach(var handler in handlers)
{
attempt = handler.List();
if (attempt.Failed)
{
return attempt.Error;
}
}
return attempt.Result.ToList();
}
Instead of using null as the default value for the attempt variable, use an "empty" attempt object, so that an empty attempt is returned if no handlers were provided.
Usage
var attempt = Process(_handler1, _handler2, _handler3);
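A common way to express this chaining without repeating the if (attempt.Failed) return attempt.Error; check is a small Then (a.k.a. Bind) extension that short-circuits on failure. A sketch, assuming an Attempt<T> with Failed/Result/Error members and the same implicit conversions used in the question:
public static class AttemptExtensions
{
    // Runs the next step only if the previous attempt succeeded;
    // otherwise the error is propagated unchanged.
    public static Attempt<TNext> Then<T, TNext>(
        this Attempt<T> attempt, Func<T, Attempt<TNext>> next)
    {
        if (attempt.Failed) return attempt.Error;   // relies on the implicit Error conversion
        return next(attempt.Result);
    }
}
The update branch of the StrategySave handler could then be written as a single chain, e.g. _attemptHandler.List().Then(list => FindStrategy(list, query.Model.Id)).Then(strategy => _attemptHandler.Update(strategy)), where FindStrategy is a hypothetical helper returning an Attempt<Strategy>.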

Related

What should be locked for the same string? [duplicate]

I'm attempting to figure out an issue that has been raised with my ImageProcessor library here where I am getting intermittent file access errors when adding items to the cache.
System.IO.IOException: The process cannot access the file 'D:\home\site\wwwroot\app_data\cache\0\6\5\f\2\7\065f27fc2c8e843443d210a1e84d1ea28bbab6c4.webp' because it is being used by another process.
I wrote a class designed to perform an asynchronous lock based upon a key generated by a hashed url but it seems I have missed something in the implementation.
My locking class
public sealed class AsyncDuplicateLock
{
/// <summary>
/// The collection of semaphore slims.
/// </summary>
private static readonly ConcurrentDictionary<object, SemaphoreSlim> SemaphoreSlims
= new ConcurrentDictionary<object, SemaphoreSlim>();
/// <summary>
/// Locks against the given key.
/// </summary>
/// <param name="key">
/// The key that identifies the current object.
/// </param>
/// <returns>
/// The disposable <see cref="Task"/>.
/// </returns>
public IDisposable Lock(object key)
{
DisposableScope releaser = new DisposableScope(
key,
s =>
{
SemaphoreSlim locker;
if (SemaphoreSlims.TryRemove(s, out locker))
{
locker.Release();
locker.Dispose();
}
});
SemaphoreSlim semaphore = SemaphoreSlims.GetOrAdd(key, new SemaphoreSlim(1, 1));
semaphore.Wait();
return releaser;
}
/// <summary>
/// Asynchronously locks against the given key.
/// </summary>
/// <param name="key">
/// The key that identifies the current object.
/// </param>
/// <returns>
/// The disposable <see cref="Task"/>.
/// </returns>
public Task<IDisposable> LockAsync(object key)
{
DisposableScope releaser = new DisposableScope(
key,
s =>
{
SemaphoreSlim locker;
if (SemaphoreSlims.TryRemove(s, out locker))
{
locker.Release();
locker.Dispose();
}
});
Task<IDisposable> releaserTask = Task.FromResult(releaser as IDisposable);
SemaphoreSlim semaphore = SemaphoreSlims.GetOrAdd(key, new SemaphoreSlim(1, 1));
Task waitTask = semaphore.WaitAsync();
return waitTask.IsCompleted
? releaserTask
: waitTask.ContinueWith(
(_, r) => (IDisposable)r,
releaser,
CancellationToken.None,
TaskContinuationOptions.ExecuteSynchronously,
TaskScheduler.Default);
}
/// <summary>
/// The disposable scope.
/// </summary>
private sealed class DisposableScope : IDisposable
{
/// <summary>
/// The key
/// </summary>
private readonly object key;
/// <summary>
/// The close scope action.
/// </summary>
private readonly Action<object> closeScopeAction;
/// <summary>
/// Initializes a new instance of the <see cref="DisposableScope"/> class.
/// </summary>
/// <param name="key">
/// The key.
/// </param>
/// <param name="closeScopeAction">
/// The close scope action.
/// </param>
public DisposableScope(object key, Action<object> closeScopeAction)
{
this.key = key;
this.closeScopeAction = closeScopeAction;
}
/// <summary>
/// Disposes the scope.
/// </summary>
public void Dispose()
{
this.closeScopeAction(this.key);
}
}
}
Usage - within a HttpModule
private readonly AsyncDuplicateLock locker = new AsyncDuplicateLock();
using (await this.locker.LockAsync(cachedPath))
{
// Process and save a cached image.
}
Can anyone spot where I have gone wrong? I'm worried that I am misunderstanding something fundamental.
The full source for the library is stored on GitHub here.
As the other answerer noted, the original code is removing the SemaphoreSlim from the ConcurrentDictionary before it releases the semaphore. So, you've got too much semaphore churn going on - they're being removed from the dictionary when they could still be in use (not acquired, but already retrieved from the dictionary).
The problem with this kind of "mapping lock" is that it's difficult to know when the semaphore is no longer necessary. One option is to never dispose the semaphores at all; that's the easy solution, but may not be acceptable in your scenario. Another option - if the semaphores are actually related to object instances and not values (like strings) - is to attach them using ephemerons; however, I believe this option would also not be acceptable in your scenario.
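For reference, the "ephemerons" option in .NET would be ConditionalWeakTable (System.Runtime.CompilerServices); a minimal sketch, only meaningful when callers lock on the same object instance rather than on equal-but-distinct keys like strings:
private static readonly ConditionalWeakTable<object, SemaphoreSlim> Semaphores =
    new ConditionalWeakTable<object, SemaphoreSlim>();

public async Task<IDisposable> LockAsync(object key)
{
    // One semaphore per key instance; the table entry is collected together
    // with the key, so nothing ever has to be removed explicitly.
    SemaphoreSlim semaphore = Semaphores.GetValue(key, _ => new SemaphoreSlim(1, 1));
    await semaphore.WaitAsync().ConfigureAwait(false);
    return new Releaser(semaphore);
}

private sealed class Releaser : IDisposable
{
    private readonly SemaphoreSlim _semaphore;
    public Releaser(SemaphoreSlim semaphore) { _semaphore = semaphore; }
    public void Dispose() { _semaphore.Release(); }
}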
So, we do it the hard way. :)
There are a few different approaches that would work. I think it makes sense to approach it from a reference-counting perspective (reference-counting each semaphore in the dictionary). Also, we want to make the decrement-count-and-remove operation atomic, so I just use a single lock (making the concurrent dictionary superfluous):
public sealed class AsyncDuplicateLock
{
private sealed class RefCounted<T>
{
public RefCounted(T value)
{
RefCount = 1;
Value = value;
}
public int RefCount { get; set; }
public T Value { get; private set; }
}
private static readonly Dictionary<object, RefCounted<SemaphoreSlim>> SemaphoreSlims
= new Dictionary<object, RefCounted<SemaphoreSlim>>();
private SemaphoreSlim GetOrCreate(object key)
{
RefCounted<SemaphoreSlim> item;
lock (SemaphoreSlims)
{
if (SemaphoreSlims.TryGetValue(key, out item))
{
++item.RefCount;
}
else
{
item = new RefCounted<SemaphoreSlim>(new SemaphoreSlim(1, 1));
SemaphoreSlims[key] = item;
}
}
return item.Value;
}
public IDisposable Lock(object key)
{
GetOrCreate(key).Wait();
return new Releaser { Key = key };
}
public async Task<IDisposable> LockAsync(object key)
{
await GetOrCreate(key).WaitAsync().ConfigureAwait(false);
return new Releaser { Key = key };
}
private sealed class Releaser : IDisposable
{
public object Key { get; set; }
public void Dispose()
{
RefCounted<SemaphoreSlim> item;
lock (SemaphoreSlims)
{
item = SemaphoreSlims[Key];
--item.RefCount;
if (item.RefCount == 0)
SemaphoreSlims.Remove(Key);
}
item.Value.Release();
}
}
}
Here is a KeyedLock class that is less convenient and more error prone, but also less allocatey than Stephen Cleary's AsyncDuplicateLock. It maintains internally a pool of SemaphoreSlims, that can be reused by any key after they are released by the previous key. The capacity of the pool is configurable, and by default is 10.
This class is not allocation-free, because the SemaphoreSlim class allocates memory (quite a lot actually) every time the semaphore cannot be acquired synchronously because of contention.
The lock can be requested both synchronously and asynchronously, and can also be requested with cancellation and timeout. These features are provided by exploiting the existing functionality of the SemaphoreSlim class.
public class KeyedLock<TKey>
{
private readonly Dictionary<TKey, (SemaphoreSlim, int)> _perKey;
private readonly Stack<SemaphoreSlim> _pool;
private readonly int _poolCapacity;
public KeyedLock(IEqualityComparer<TKey> keyComparer = null, int poolCapacity = 10)
{
_perKey = new Dictionary<TKey, (SemaphoreSlim, int)>(keyComparer);
_pool = new Stack<SemaphoreSlim>(poolCapacity);
_poolCapacity = poolCapacity;
}
public async Task<bool> WaitAsync(TKey key, int millisecondsTimeout,
CancellationToken cancellationToken = default)
{
var semaphore = GetSemaphore(key);
bool entered = false;
try
{
entered = await semaphore.WaitAsync(millisecondsTimeout,
cancellationToken).ConfigureAwait(false);
}
finally { if (!entered) ReleaseSemaphore(key, entered: false); }
return entered;
}
public Task WaitAsync(TKey key, CancellationToken cancellationToken = default)
=> WaitAsync(key, Timeout.Infinite, cancellationToken);
public bool Wait(TKey key, int millisecondsTimeout,
CancellationToken cancellationToken = default)
{
var semaphore = GetSemaphore(key);
bool entered = false;
try { entered = semaphore.Wait(millisecondsTimeout, cancellationToken); }
finally { if (!entered) ReleaseSemaphore(key, entered: false); }
return entered;
}
public void Wait(TKey key, CancellationToken cancellationToken = default)
=> Wait(key, Timeout.Infinite, cancellationToken);
public void Release(TKey key) => ReleaseSemaphore(key, entered: true);
private SemaphoreSlim GetSemaphore(TKey key)
{
SemaphoreSlim semaphore;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
int counter;
(semaphore, counter) = entry;
_perKey[key] = (semaphore, ++counter);
}
else
{
lock (_pool) semaphore = _pool.Count > 0 ? _pool.Pop() : null;
if (semaphore == null) semaphore = new SemaphoreSlim(1, 1);
_perKey[key] = (semaphore, 1);
}
}
return semaphore;
}
private void ReleaseSemaphore(TKey key, bool entered)
{
SemaphoreSlim semaphore; int counter;
lock (_perKey)
{
if (_perKey.TryGetValue(key, out var entry))
{
(semaphore, counter) = entry;
counter--;
if (counter == 0)
_perKey.Remove(key);
else
_perKey[key] = (semaphore, counter);
}
else
{
throw new InvalidOperationException("Key not found.");
}
}
if (entered) semaphore.Release();
if (counter == 0)
{
Debug.Assert(semaphore.CurrentCount == 1);
lock (_pool) if (_pool.Count < _poolCapacity) _pool.Push(semaphore);
}
}
}
Usage example:
var locker = new KeyedLock<string>();
await locker.WaitAsync("Hello");
try
{
await DoSomethingAsync();
}
finally
{
locker.Release("Hello");
}
The implementation uses tuple deconstruction, which requires at least C# 7.
The KeyedLock class could easily be modified to become a KeyedSemaphore that allows more than one concurrent operation per key. It would just need a maximumConcurrencyPerKey parameter in the constructor, which would be stored and passed to the constructor of the SemaphoreSlims.
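For illustration, that modification might look roughly like this (a sketch, not tested; only the changed parts are shown):
private readonly int _maxConcurrencyPerKey;

public KeyedSemaphore(int maximumConcurrencyPerKey,
    IEqualityComparer<TKey> keyComparer = null, int poolCapacity = 10)
{
    _maxConcurrencyPerKey = maximumConcurrencyPerKey;
    _perKey = new Dictionary<TKey, (SemaphoreSlim, int)>(keyComparer);
    _pool = new Stack<SemaphoreSlim>(poolCapacity);
    _poolCapacity = poolCapacity;
}

// In GetSemaphore, new semaphores would be created with the stored limit:
// if (semaphore == null)
//     semaphore = new SemaphoreSlim(_maxConcurrencyPerKey, _maxConcurrencyPerKey);
// The Debug.Assert in ReleaseSemaphore would likewise compare CurrentCount
// against _maxConcurrencyPerKey instead of 1.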
Note: The SemaphoreSlim class, when misused, throws a SemaphoreFullException. This happens when the semaphore is released more times than it has been acquired. The KeyedLock implementation of this answer behaves differently in case of misuse: it throws an InvalidOperationException("Key not found."). This happens because when a key is released as many times as it has been acquired, the associated semaphore is removed from the dictionary. If this implementation ever throws a SemaphoreFullException, it would be an indication of a bug.
I wrote a library called AsyncKeyedLock to fix this common problem. The library currently supports using it with the type object (so you can mix different types together) or using generics to get a more efficient solution. It allows for timeouts, cancellation tokens, and also pooling so as to reduce allocations. Underlying it uses a ConcurrentDictionary and also allows for setting the initial capacity and concurrency for this dictionary.
I have benchmarked this against the other solutions provided here and it is more efficient, in terms of speed, memory usage (allocations) as well as scalability (internally it uses the more scalable ConcurrentDictionary). It's being used in a number of systems in production and used by a number of popular libraries.
The source code is available on GitHub and packaged at NuGet.
The approach here is to basically use the ConcurrentDictionary to store an IDisposable object which has a counter on it and a SemaphoreSlim. Once this counter reaches 0, it is removed from the dictionary and either disposed or returned to the pool (if pooling is used). Monitor is used to lock this object when either the counter is being incremented or decremented.
Usage example:
var locker = new AsyncKeyedLocker<string>(o =>
{
o.PoolSize = 20;
o.PoolInitialFill = 1;
});
string key = "my key";
// asynchronous code
using (await locker.LockAsync(key, cancellationToken))
{
...
}
// synchronous code
using (locker.Lock(key))
{
...
}
Download from NuGet.
For a given key:
1. Thread 1 calls GetOrAdd, adds a new semaphore and acquires it via Wait.
2. Thread 2 calls GetOrAdd, gets the existing semaphore and blocks on Wait.
3. Thread 1 releases the semaphore, only after having called TryRemove, which removed the semaphore from the dictionary.
4. Thread 2 now acquires the semaphore.
5. Thread 3 calls GetOrAdd for the same key as threads 1 and 2. Thread 2 is still holding the semaphore, but the semaphore is not in the dictionary, so thread 3 creates a new semaphore and both threads 2 and 3 access the same protected resource.
You need to adjust your logic. The semaphore should only be removed from the dictionary when it has no waiters.
Here is one potential solution, minus the async part:
public sealed class AsyncDuplicateLock
{
private class LockInfo
{
private SemaphoreSlim sem;
private int waiterCount;
public LockInfo()
{
sem = null;
waiterCount = 1;
}
// Lazily create the semaphore
private SemaphoreSlim Semaphore
{
get
{
var s = sem;
if (s == null)
{
s = new SemaphoreSlim(0, 1);
var original = Interlocked.CompareExchange(ref sem, s, null);
// If someone else already created a semaphore, return that one
if (original != null)
return original;
}
return s;
}
}
// Returns true if successful
public bool Enter()
{
if (Interlocked.Increment(ref waiterCount) > 1)
{
Semaphore.Wait();
return true;
}
return false;
}
// Returns true if this lock info is now ready for removal
public bool Exit()
{
if (Interlocked.Decrement(ref waiterCount) <= 0)
return true;
// There was another waiter
Semaphore.Release();
return false;
}
}
private static readonly ConcurrentDictionary<object, LockInfo> activeLocks = new ConcurrentDictionary<object, LockInfo>();
public static IDisposable Lock(object key)
{
// Get the current info or create a new one
var info = activeLocks.AddOrUpdate(key,
(k) => new LockInfo(),
(k, v) => v.Enter() ? v : new LockInfo());
DisposableScope releaser = new DisposableScope(() =>
{
if (info.Exit())
{
// Only remove this exact info, in case another thread has
// already put its own info into the dictionary
((ICollection<KeyValuePair<object, LockInfo>>)activeLocks)
.Remove(new KeyValuePair<object, LockInfo>(key, info));
}
});
return releaser;
}
private sealed class DisposableScope : IDisposable
{
private readonly Action closeScopeAction;
public DisposableScope(Action closeScopeAction)
{
this.closeScopeAction = closeScopeAction;
}
public void Dispose()
{
this.closeScopeAction();
}
}
}
I rewrote Stephen Cleary's answer as this:
public sealed class AsyncLockList {
readonly Dictionary<object, SemaphoreReferenceCount> Semaphores = new Dictionary<object, SemaphoreReferenceCount>();
SemaphoreSlim GetOrCreateSemaphore(object key) {
lock (Semaphores) {
if (Semaphores.TryGetValue(key, out var item)) {
item.IncrementCount();
} else {
item = new SemaphoreReferenceCount();
Semaphores[key] = item;
}
return item.Semaphore;
}
}
public IDisposable Lock(object key) {
GetOrCreateSemaphore(key).Wait();
return new Releaser(Semaphores, key);
}
public async Task<IDisposable> LockAsync(object key) {
await GetOrCreateSemaphore(key).WaitAsync().ConfigureAwait(false);
return new Releaser(Semaphores, key);
}
sealed class SemaphoreReferenceCount {
public readonly SemaphoreSlim Semaphore = new SemaphoreSlim(1, 1);
public int Count { get; private set; } = 1;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void IncrementCount() => Count++;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public void DecrementCount() => Count--;
}
sealed class Releaser : IDisposable {
readonly Dictionary<object, SemaphoreReferenceCount> Semaphores;
readonly object Key;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public Releaser(Dictionary<object, SemaphoreReferenceCount> semaphores, object key) {
Semaphores = semaphores;
Key = key;
}
public void Dispose() {
lock (Semaphores) {
var item = Semaphores[Key];
item.DecrementCount();
if (item.Count == 0)
Semaphores.Remove(Key);
item.Semaphore.Release();
}
}
}
}
Inspired by this previous answer, here is a version that supports async wait:
public class KeyedLock<TKey>
{
private readonly ConcurrentDictionary<TKey, LockInfo> _locks = new();
public int Count => _locks.Count;
public async Task<IDisposable> WaitAsync(TKey key, CancellationToken cancellationToken = default)
{
// Get the current info or create a new one.
var info = _locks.AddOrUpdate(key,
// Add
k => new LockInfo(),
// Update
(k, v) => v.Enter() ? v : new LockInfo());
try
{
await info.Semaphore.WaitAsync(cancellationToken);
return new Releaser(() => Release(key, info, true));
}
catch (OperationCanceledException)
{
// The semaphore wait was cancelled, release the lock.
Release(key, info, false);
throw;
}
}
private void Release(TKey key, LockInfo info, bool isCurrentlyLocked)
{
if (info.Leave())
{
// This was the last lock for the key.
// Only remove this exact info, in case another thread has
// already put its own info into the dictionary
// Note that this call to Remove(entry) is in fact thread safe.
var entry = new KeyValuePair<TKey, LockInfo>(key, info);
if (((ICollection<KeyValuePair<TKey, LockInfo>>)_locks).Remove(entry))
{
// This exact info was removed.
info.Dispose();
}
}
else if (isCurrentlyLocked)
{
// There is another waiter.
info.Semaphore.Release();
}
}
private class LockInfo : IDisposable
{
private SemaphoreSlim _semaphore = null;
private int _refCount = 1;
public SemaphoreSlim Semaphore
{
get
{
// Lazily create the semaphore.
var s = _semaphore;
if (s is null)
{
s = new SemaphoreSlim(1, 1);
// Assign _semaphore if its current value is null.
var original = Interlocked.CompareExchange(ref _semaphore, s, null);
// If someone else already created a semaphore, return that one
if (original is not null)
{
s.Dispose();
return original;
}
}
return s;
}
}
// Returns true if successful
public bool Enter()
{
if (Interlocked.Increment(ref _refCount) > 1)
{
return true;
}
// This lock info is not valid anymore - its semaphore is or will be disposed.
return false;
}
// Returns true if this lock info is now ready for removal
public bool Leave()
{
if (Interlocked.Decrement(ref _refCount) <= 0)
{
// This was the last lock
return true;
}
// There is another waiter
return false;
}
public void Dispose() => _semaphore?.Dispose();
}
private sealed class Releaser : IDisposable
{
private readonly Action _dispose;
public Releaser(Action dispose) => _dispose = dispose;
public void Dispose() => _dispose();
}
}

Singleton dictionary is not being singleton

I have a dictionary in a singleton class. I am saving the (username, access token) pair there; every time I access that dictionary from the method that creates the token, it shows all the credentials that I've stored there.
But when I access it from another class in another project of the solution, the dictionary shows up empty. Can anybody tell me why this happens?
This is the class that manages the dictionary:
public class UserAccessToken
{
public Dictionary<string, string> UserDictionary { get; set; }
private static UserAccessToken _instance;
private UserAccessToken() { }
public static UserAccessToken Instance
{
get
{
if (_instance == null)
_instance = new UserAccessToken
{
UserDictionary = new Dictionary<string, string>()
};
return _instance;
}
}
}
This is the method that inserts the key/value pair into the dictionary:
public override Task TokenEndpointResponse(OAuthTokenEndpointResponseContext context)
{
var accessToken = context.AccessToken;
if (context.Properties.Dictionary.ContainsKey("userName"))
{
var username = context.Properties.Dictionary["userName"];
// If I access here multiple times the singleton works
UserAccessToken.Instance.UserDictionary[username] = accessToken;
}
return Task.FromResult<object>(null);
}
This is the method where I access the dictionary; from here I can see that it's empty:
private bool IsTokenValid(HttpContextBase httpContext)
{
var username = httpContext.User.Identity.Name;
var userTokens = UserAccessToken.Instance.UserDictionary;
var tokenToAccess = httpContext.Request.Headers["Authorization"];
tokenToAccess = tokenToAccess.Replace("Bearer ", "");
if (userTokens.ContainsKey(username))
{
var token = userTokens[username];
if (token == tokenToAccess) return true;
}
return false;
}
I already solved my problem, but I'll leave my solution here in case it could be useful for somebody.
The problem is that running two different projects means two different processes, so what I wanted to do is pretty much impossible with an in-process singleton. I used Redis for this and it is working well.
This is an example of Redis use:
public class CallManagerCache : ICallManagerMethods{
private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer
.Connect(CloudConfigurationManager.GetSetting("RedisConnectionString")));
public static ConnectionMultiplexer cacheConnection
{
get
{
return lazyConnection.Value;
}
}
/// <summary>
/// Saves the value in the cache: get the connection, then StringSetAsync stores the value.
/// </summary>
/// <param name="id">The call id used to build the cache key.</param>
/// <param name="data">The serialized instance data.</param>
/// <param name="instanceForCallback">Whether to prefix the key with "Call_".</param>
/// <param name="timespan">Optional expiry; defaults to 15 minutes.</param>
public async Task UpdateCallInstance(int id, byte[] data, bool instanceForCallback = false, TimeSpan? timespan = null)
{
var cache = cacheConnection.GetDatabase();
await cache.StringSetAsync(instanceForCallback ? $"Call_{id}" : id.ToString(), data, timespan ?? new TimeSpan(0, 0, 15, 0));
}
/// <summary>
/// Gets the value from the cache: get the connection, then StringGetAsync reads the value.
/// </summary>
/// <param name="id">The call id used to build the cache key.</param>
/// <param name="isForCallback">Whether the key was prefixed with "Call_".</param>
public async Task<CallInstance> GetById(int id, bool isForCallback)
{
var cache = cacheConnection.GetDatabase();
var callInstance = new CallInstance
{
Id = id,
InstanceData = await cache.StringGetAsync(isForCallback ? $"Call_{id}" : id.ToString())
};
return callInstance;
}
}
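For completeness, calling code for this cache might look roughly like this (a sketch; the byte[] payload is assumed to be whatever serialized form the caller uses):
// inside an async method
var cache = new CallManagerCache();

// Store serialized data under the key "Call_42" for the default 15 minutes.
byte[] data = Encoding.UTF8.GetBytes("{\"state\":\"ringing\"}");
await cache.UpdateCallInstance(42, data, instanceForCallback: true);

// Read it back later - from the same process or from a different one.
CallInstance instance = await cache.GetById(42, isForCallback: true);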

using statement with asynchronous Task inside not await

I created an IDisposable object that logs the execution time on Dispose.
Then I created a RealProxy that uses this object to log the execution time of any method call on my business class.
My business class uses async methods, so execution leaves the using statement before the task completes, and the logged execution time is wrong.
Here's my code:
/// <summary>
/// Logging proxy with stopwatch
/// </summary>
/// <typeparam name="T"></typeparam>
public sealed class LoggingProxy<T> : RealProxy// where T : MarshalByRefObject
{
private ILog _logger;
private readonly T _instance;
private LoggingProxy(T instance, ILog logger)
: base(typeof(T))
{
_logger = logger;
_instance = instance;
}
/// <summary>
/// Create the Transparent proy for T
/// </summary>
/// <param name="type">An implementation type of T</param>
/// <returns>T instance</returns>
public static T Create(ILog logger)
{
logger.DebugFormat("[{0}] Instantiate {1}", "LoggingProxy", typeof(T).Name);
var instance = (T)Activator.CreateInstance(typeof(T), logger);
//return the proxy with execution timing if debug enable in logger
if (logger.IsDebugEnabled)
return (T)new LoggingProxy<T>(instance, logger).GetTransparentProxy();
else
return instance;
}
/// <summary>
/// Invoke the logging method using Stopwatch to log execution time
/// </summary>
/// <param name="msg"></param>
/// <returns></returns>
public override IMessage Invoke(IMessage msg)
{
var methodCall = (IMethodCallMessage)msg;
var method = (MethodInfo)methodCall.MethodBase;
string methodName = method.Name;
string className = method.DeclaringType.Name;
//return directly methods inherited from Object
if (method.DeclaringType.Name.Equals("Object"))
{
var result = method.Invoke(_instance, methodCall.Args);
return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
}
using (var logContext = _logger.DebugTiming("[{0}] Execution time for {1}", className, methodName))
{
_logger.DebugFormat("[{0}] Call method {1}", className, methodName);
//execute the method
//var result = method.Invoke(_instance, methodCall.Args);
object[] arg = methodCall.Args.Clone() as object[];
var result = method.Invoke(_instance, arg);
//wait for the task to end before logging the timing
if (result is Task)
(result as Task).Wait();
return new ReturnMessage(result, arg, 0, methodCall.LogicalCallContext, methodCall);
}
}
}
The _logger.DebugTiming method starts a stopwatch and logs the elapsed time on Dispose.
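For context, a DebugTiming helper of that kind might look roughly like this (a hypothetical sketch; the real implementation is not shown in the question):
public static class LogTimingExtensions
{
    public static IDisposable DebugTiming(this ILog logger, string format, params object[] args)
    {
        string message = string.Format(format, args);
        var stopwatch = Stopwatch.StartNew();
        // The returned scope logs the elapsed time when it is disposed.
        return new DisposableAction(() =>
            logger.DebugFormat("{0}: {1}ms", message, stopwatch.ElapsedMilliseconds));
    }

    private sealed class DisposableAction : IDisposable
    {
        private readonly Action _onDispose;
        public DisposableAction(Action onDispose) { _onDispose = onDispose; }
        public void Dispose() { _onDispose(); }
    }
}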
The only way I found to make it work with async methods is to use this:
//wait the task ends before log the timing
if (result is Task)
(result as Task).Wait();
But I have the feeling that I'm throwing away all the benefits of async methods by doing that.
Do you have a suggestion for a proper implementation?
Any idea of the real impact of calling Wait() in the proxy?
You can use Task.ContinueWith():
public override IMessage Invoke(IMessage msg)
{
var methodCall = (IMethodCallMessage)msg;
var method = (MethodInfo)methodCall.MethodBase;
string methodName = method.Name;
string className = method.DeclaringType.Name;
//return directly methods inherited from Object
if (method.DeclaringType.Name.Equals("Object"))
{
var result = method.Invoke(_instance, methodCall.Args);
return new ReturnMessage(result, null, 0, methodCall.LogicalCallContext, methodCall);
}
var logContext = _logger.DebugTiming("[{0}] Execution time for {1}", className, methodName);
bool disposeLogContext = true;
try
{
_logger.DebugFormat("[{0}] Call method {1}", className, methodName);
//execute the method
//var result = method.Invoke(_instance, methodCall.Args);
object[] arg = methodCall.Args.Clone() as object[];
var result = method.Invoke(_instance, arg);
//dispose the log context (and log the timing) only when the task completes
if (result is Task) {
disposeLogContext = false;
((Task)result).ContinueWith(t => logContext.Dispose());
}
return new ReturnMessage(result, arg, 0, methodCall.LogicalCallContext, methodCall);
}
finally
{
if (disposeLogContext)
logContext.Dispose();
}
}
Don't call Wait() on the task - that changes the behavior and might lead to deadlocks.

What is keeping the TPL from spinning up another thread

Using the command pattern with commands that return tasks that just sleep for 5 seconds, the total time for the 3 tasks to complete is ~15 seconds.
What am I doing that is keeping this code from executing in "parallel"?
The calling code
var timer = new Stopwatch();
timer.Start();
var task = CommandExecutor.ExecuteCommand(new Fake());
var task2 = CommandExecutor.ExecuteCommand(new Fake());
var task3 = CommandExecutor.ExecuteCommand(new Fake());
Task.WaitAll(new Task[]
{
task, task2, task3
});
timer.Stop();
Debug.Print("{0}ms for {1},{2},{3} records", timer.ElapsedMilliseconds, task.Result.Count, task2.Result.Count(), task3.Result.Count());
The Commands being executed
public class Fake : Command<Task<Dictionary<string, string[]>>>
{
public override string ToString()
{
return string.Format("Fake");
}
protected override void Execute()
{
new System.Threading.ManualResetEvent(false).WaitOne(5000);
Result = Task.Factory.StartNew(() => new Dictionary<string, string[]>());
}
}
The Command Abstraction
public abstract class Command
{
public void Run()
{
try
{
var timer = new Stopwatch();
timer.Start();
//Debug.Print("{0}-{1}", ToString(), "Executing");
Execute();
timer.Stop();
Debug.Print("{0}-{1} Duration: {2}ms", ToString(), "Done", timer.ElapsedMilliseconds.ToString(CultureInfo.InvariantCulture));
}
catch (Exception ex)
{
Debug.Print("Error processing task:" + ToString(), ex);
}
}
public abstract override string ToString();
protected abstract void Execute();
}
/// <summary>
/// A command with a return value
/// </summary>
/// <typeparam name="T"></typeparam>
public abstract class Command<T> : Command
{
public T Result { get; protected set; }
public T GetResult()
{
Run();
return Result;
}
}
The Command Executor
public class CommandExecutor
{
/// <summary>
/// Executes the command.
/// </summary>
/// <param name="cmd">The CMD.</param>
public static void ExecuteCommand(Command cmd)
{
cmd.Run();
}
/// <summary>
/// Executes the command for commands with a result.
/// </summary>
/// <typeparam name="TResult">The type of the result.</typeparam>
/// <param name="cmd">The CMD.</param>
/// <returns></returns>
public static TResult ExecuteCommand<TResult>(Command<TResult> cmd)
{
ExecuteCommand((Command) cmd);
return cmd.Result;
}
}
The problem is that you aren't waiting inside the actual Task; you're waiting in the method that creates the task, before it actually provides that task:
protected override void Execute()
{
new System.Threading.ManualResetEvent(false).WaitOne(5000);
Result = Task.Factory.StartNew(() => new Dictionary<string, string[]>());
}
should instead be:
protected override void Execute()
{
Result = Task.Factory.StartNew(() =>
{
new System.Threading.ManualResetEvent(false).WaitOne(5000);
return new Dictionary<string, string[]>();
});
}
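With that change the three commands overlap, so Task.WaitAll returns after roughly 5 seconds instead of ~15. The same behaviour is easy to see in isolation with a stripped-down sketch (illustrative only, using Task.Delay in place of the fake work):
var timer = Stopwatch.StartNew();
Task[] tasks = Enumerable.Range(0, 3)
    .Select(_ => Task.Run(async () =>
    {
        // The simulated work runs inside the task, not before the task is created.
        await Task.Delay(5000);
    }))
    .ToArray();
Task.WaitAll(tasks);
timer.Stop();
Debug.Print("{0}ms total", timer.ElapsedMilliseconds); // ~5000ms, not ~15000ms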

Efficient signaling Tasks for TPL completions on frequently reoccurring events

I'm working on a simulation system that, among other things, allows for the execution of tasks in discrete simulated time steps. Execution all occurs in the context of the simulation thread, but, from the perspective of an 'operator' using the system, these tasks should behave asynchronously. Thankfully the TPL, with the handy 'async/await' keywords, makes this fairly straightforward. I have a primitive method on the Simulation like this:
public Task CycleExecutedEvent()
{
lock (_cycleExecutedBroker)
{
if (!IsRunning) throw new TaskCanceledException("Simulation has been stopped");
return _cycleExecutedBroker.RegisterForCompletion(CycleExecutedEventName);
}
}
This is basically creating a new TaskCompletionSource and then returning a Task. The purpose of this Task is to execute its continuation when the next 'ExecuteCycle' on the simulation occurs.
I then have some extension methods like this:
public static async Task WaitForDuration(this ISimulation simulation, double duration)
{
double startTime = simulation.CurrentSimulatedTime;
do
{
await simulation.CycleExecutedEvent();
} while ((simulation.CurrentSimulatedTime - startTime) < duration);
}
public static async Task WaitForCondition(this ISimulation simulation, Func<bool> condition)
{
do
{
await simulation.CycleExecutedEvent();
} while (!condition());
}
These are very handy, then, for building sequences from an 'operator' perspective, taking actions based on conditions and waiting for periods of simulated time. The issue I'm running into is that CycleExecuted occurs very frequently (roughly every few milliseconds if I'm running at fully accelerated speed). Because these 'wait' helper methods register a new 'await' on each cycle, this causes a large turnover in TaskCompletionSource instances.
I've profiled my code and I've found that roughly 5.5% of my total CPU time is spent within these completions, of which only a negligible percentage is spent in the 'active' code. Effectively all of the time is spent registering new completions while waiting for the triggering conditions to be valid.
My question: how can I improve performance here while still retaining the convenience of the async/await pattern for writing 'operator behaviors'? I'm thinking I need something like a lighter-weight and/or reusable TaskCompletionSource, given that the triggering event occurs so frequently.
I've been doing a bit more research and it sounds like a good option would be to create a custom implementation of the Awaitable pattern, which could tie directly into the event, eliminating the need for a bunch of TaskCompletionSource and Task instances. The reason it could be useful here is that there are a lot of different continuations awaiting the CycleExecutedEvent and they need to await it frequently. So ideally I'm looking at a way to just queue up continuation callbacks, then call back everything in the queue whenever the event occurs. I'll keep digging, but I welcome any help if folks know a clean way to do this.
For anybody browsing this question in the future, here is the custom awaiter I put together:
public sealed class CycleExecutedAwaiter : INotifyCompletion
{
private readonly List<Action> _continuations = new List<Action>();
public bool IsCompleted
{
get { return false; }
}
public void GetResult()
{
}
public void OnCompleted(Action continuation)
{
_continuations.Add(continuation);
}
public void RunContinuations()
{
var continuations = _continuations.ToArray();
_continuations.Clear();
foreach (var continuation in continuations)
continuation();
}
public CycleExecutedAwaiter GetAwaiter()
{
return this;
}
}
And in the Simulator:
private readonly CycleExecutedAwaiter _cycleExecutedAwaiter = new CycleExecutedAwaiter();
public CycleExecutedAwaiter CycleExecutedEvent()
{
if (!IsRunning) throw new TaskCanceledException("Simulation has been stopped");
return _cycleExecutedAwaiter;
}
It's a bit funny, as the awaiter never reports IsCompleted as true, but simply continues to invoke completions as they are registered; still, it works well for this application. This reduces the CPU overhead from 5.5% to 2.1%. It will likely still require some tweaking, but it's a nice improvement over the original.
The await keyword doesn't work just on Tasks, it works on anything that follows the awaitable pattern. For details, see Stephen Toub's article await anything;.
The short version is that the type has to have a GetAwaiter() method that returns a type that implements INotifyCompletion and also has an IsCompleted property and a GetResult() method (void-returning, if the await expression shouldn't have a value). For an example, see TaskAwaiter.
If you create your own awaitable, you could return the same object every time, avoiding the overhead of allocating many TaskCompletionSources.
Here is my version of a ReusableAwaiter<T> that simulates a TaskCompletionSource:
public sealed class ReusableAwaiter<T> : INotifyCompletion
{
private Action _continuation = null;
private T _result = default(T);
private Exception _exception = null;
public bool IsCompleted
{
get;
private set;
}
public T GetResult()
{
if (_exception != null)
throw _exception;
return _result;
}
public void OnCompleted(Action continuation)
{
if (_continuation != null)
throw new InvalidOperationException("This ReusableAwaiter instance has already been listened");
_continuation = continuation;
}
/// <summary>
/// Attempts to transition the completion state.
/// </summary>
/// <param name="result"></param>
/// <returns></returns>
public bool TrySetResult(T result)
{
if (!this.IsCompleted)
{
this.IsCompleted = true;
this._result = result;
if (_continuation != null)
_continuation();
return true;
}
return false;
}
/// <summary>
/// Attempts to transition the exception state.
/// </summary>
/// <param name="exception"></param>
/// <returns></returns>
public bool TrySetException(Exception exception)
{
if (!this.IsCompleted)
{
this.IsCompleted = true;
this._exception = exception;
if (_continuation != null)
_continuation();
return true;
}
return false;
}
/// <summary>
/// Reset the awaiter to initial status
/// </summary>
/// <returns></returns>
public ReusableAwaiter<T> Reset()
{
this._result = default(T);
this._continuation = null;
this._exception = null;
this.IsCompleted = false;
return this;
}
public ReusableAwaiter<T> GetAwaiter()
{
return this;
}
}
And here is the test code.
class Program
{
static readonly ReusableAwaiter<int> _awaiter = new ReusableAwaiter<int>();
static void Main(string[] args)
{
Task.Run(() => Test());
Console.ReadLine();
_awaiter.TrySetResult(22);
Console.ReadLine();
_awaiter.TrySetException(new Exception("ERR"));
Console.ReadLine();
}
static async void Test()
{
int a = await AsyncMethod();
Console.WriteLine(a);
try
{
await AsyncMethod();
}
catch(Exception ex)
{
Console.WriteLine(ex.Message);
}
}
static ReusableAwaiter<int> AsyncMethod()
{
return _awaiter.Reset();
}
}
Do you really need to receive the WaitForDuration event on a different thread? If not, you could just register a callback (or an event) with _cycleExecutedBroker and receive notification synchronously. In the callback you can test any condition you like, and only if that condition turns out to be true, notify a different thread (using a task or message or whatever mechanism). I understand the condition you test for rarely evaluates to true, so you avoid most cross-thread calls that way.
I guess the gist of my answer is: Try to reduce the amount of cross-thread messaging by moving computation to the "source" thread.
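A sketch of that idea, assuming the simulation exposes a plain CycleExecuted event (an Action) that fires synchronously on the simulation thread (names are illustrative only):
public Task WaitForDuration(double duration)
{
    var tcs = new TaskCompletionSource<bool>();
    double startTime = CurrentSimulatedTime;
    Action handler = null;
    handler = () =>
    {
        // Cheap check, runs synchronously on the simulation thread every cycle.
        if (CurrentSimulatedTime - startTime < duration) return;
        CycleExecuted -= handler;   // unsubscribe once the condition holds
        tcs.TrySetResult(true);     // single cross-thread notification
    };
    CycleExecuted += handler;
    return tcs.Task;
}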
