TransactionScope-like functionality - C#

I am looking to set up something very similar to TransactionScope: it creates a version on a service and deletes or commits it at the end of the scope. Every SQL statement run inside a transaction scope internally looks at some connection pool / transaction storage to determine whether it is in the scope and reacts appropriately; the caller doesn't need to pass the transaction into every call. That is the functionality I am looking for.
Here is a little more about it: https://blogs.msdn.microsoft.com/florinlazar/2005/04/19/transaction-current-and-ambient-transactions/
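For reference, the ambient pattern described in that article looks roughly like this (a minimal TransactionScope sketch):
using System;
using System.Transactions;

// Code inside the scope can reach the transaction through Transaction.Current
// without it being passed in as a parameter.
using (var scope = new TransactionScope())
{
    Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier);
    scope.Complete(); // commit; without this, Dispose rolls back
}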
Here is the basic disposable class:
public sealed class VersionScope : IDisposable
{
private readonly GeodatabaseVersion _version;
private readonly VersionManager _versionManager;
public VersionScope(Configuration config)
{
_versionManager = new VersionManager(config);
_version = _versionManager.GenerateTempVersion();
_versionManager.Create(_version);
_versionManager.VerifyValidVersion(_version);
_versionManager.ServiceReconcilePull();
_versionManager.ReconcilePull(_version);
}
public void Dispose()
{
_versionManager.Delete(_version);
}
public void Complete()
{
_versionManager.ReconcilePush(_version);
}
}
I want the ability for all the code I've written thus far to not have any concept of being in a version. I just want to include a simple
Version = GetCurrentVersionWithinScope()
at the lowest level of the code.
What is the safest way of implementing something like this, with little risk of using the wrong version when there are multiple instances running in memory simultaneously?
My very naive approach would be to find a unique identifier for the block of memory a process is running in, store the current working version in a global array or concurrent dictionary keyed by that identifier, and then, in the code that needs the current version, look the version up by that identifier.
Edit:
Example of usage:
using (var scope = new VersionScope(_config))
{
AddFeature(); // This has no concept of scope passed to it, and could error out forcing a dispose() without a complete()
scope.Complete();
}

The most straightforward approach would be to use ThreadStatic or ThreadLocal to store the current version in thread-local storage. That way multiple threads will not interfere with each other. For example, suppose we have this Version class:
public class Version {
public Version(int number) {
Number = number;
}
public int Number { get; }
public override string ToString() {
return "Version " + Number;
}
}
Then the implementation of VersionScope can go like this:
public sealed class VersionScope : IDisposable {
private bool _isCompleted;
private bool _isDisposed;
// note ThreadStatic attribute
[ThreadStatic] private static Version _currentVersion;
public static Version CurrentVersion => _currentVersion;
public VersionScope(int version) {
_currentVersion = new Version(version);
}
public void Dispose() {
if (_isCompleted || _isDisposed)
return;
var v = _currentVersion;
if (v != null) {
DeleteVersion(v);
}
_currentVersion = null;
_isDisposed = true;
}
public void Complete() {
if (_isCompleted || _isDisposed)
return;
var v = _currentVersion;
if (v != null) {
PushVersion(v);
}
_currentVersion = null;
_isCompleted = true;
}
private void DeleteVersion(Version version) {
Console.WriteLine($"Version {version} deleted");
}
private void PushVersion(Version version) {
Console.WriteLine($"Version {version} pushed");
}
}
This will work, but it will not support nested scopes, which is not good. To fix that, we need to store the previous scope when starting a new one, and restore it on Complete or Dispose:
public sealed class VersionScope : IDisposable {
private bool _isCompleted;
private bool _isDisposed;
private static readonly ThreadLocal<VersionChain> _versions = new ThreadLocal<VersionChain>();
public static Version CurrentVersion => _versions.Value?.Current;
public VersionScope(int version) {
var cur = _versions.Value;
// remember previous versions if any
_versions.Value = new VersionChain(new Version(version), cur);
}
public void Dispose() {
if (_isCompleted || _isDisposed)
return;
var cur = _versions.Value;
if (cur != null) {
DeleteVersion(cur.Current);
// restore previous
_versions.Value = cur.Previous;
}
_isDisposed = true;
}
public void Complete() {
if (_isCompleted || _isDisposed)
return;
var cur = _versions.Value;
if (cur != null) {
PushVersion(cur.Current);
// restore previous
_versions.Value = cur.Previous;
}
_isCompleted = true;
}
private void DeleteVersion(Version version) {
Console.WriteLine($"Version {version} deleted");
}
private void PushVersion(Version version) {
Console.WriteLine($"Version {version} pushed");
}
// just a class to store previous versions
private class VersionChain {
public VersionChain(Version current, VersionChain previous) {
Current = current;
Previous = previous;
}
public Version Current { get; }
public VersionChain Previous { get; }
}
}
That's already something you can work with. Sample usage (I use a single thread here, but multiple threads doing this separately would not interfere with each other):
static void Main(string[] args) {
PrintCurrentVersion(); // no version
using (var s1 = new VersionScope(1)) {
PrintCurrentVersion(); // version 1
s1.Complete();
PrintCurrentVersion(); // no version, 1 is already completed
using (var s2 = new VersionScope(2)) {
using (var s3 = new VersionScope(3)) {
PrintCurrentVersion(); // version 3
} // version 3 deleted
PrintCurrentVersion(); // back to version 2
s2.Complete();
}
PrintCurrentVersion(); // no version, all completed or deleted
}
Console.ReadKey();
}
private static void PrintCurrentVersion() {
Console.WriteLine("Current version: " + VersionScope.CurrentVersion);
}
This, however, will not work when you are using async calls, because ThreadLocal is tied to a thread, but an async method can span multiple threads. There is a similar construct named AsyncLocal, whose value flows through asynchronous calls. So we can add a constructor parameter to VersionScope indicating whether we need async flow or not. TransactionScope works in a similar way - there is a TransactionScopeAsyncFlowOption you pass into the TransactionScope constructor indicating whether it should flow through async calls.
The modified version looks like this:
public sealed class VersionScope : IDisposable {
private bool _isCompleted;
private bool _isDisposed;
private readonly bool _asyncFlow;
// thread local versions
private static readonly ThreadLocal<VersionChain> _tlVersions = new ThreadLocal<VersionChain>();
// async local versions
private static readonly AsyncLocal<VersionChain> _alVersions = new AsyncLocal<VersionChain>();
// to get current version, first check async local storage, then thread local
public static Version CurrentVersion => _alVersions.Value?.Current ?? _tlVersions.Value?.Current;
// helper method
private VersionChain CurrentVersionChain => _asyncFlow ? _alVersions.Value : _tlVersions.Value;
public VersionScope(int version, bool asyncFlow = false) {
_asyncFlow = asyncFlow;
var cur = CurrentVersionChain;
// remember previous versions if any
if (asyncFlow) {
_alVersions.Value = new VersionChain(new Version(version), cur);
}
else {
_tlVersions.Value = new VersionChain(new Version(version), cur);
}
}
public void Dispose() {
if (_isCompleted || _isDisposed)
return;
var cur = CurrentVersionChain;
if (cur != null) {
DeleteVersion(cur.Current);
// restore previous
if (_asyncFlow) {
_alVersions.Value = cur.Previous;
}
else {
_tlVersions.Value = cur.Previous;
}
}
_isDisposed = true;
}
public void Complete() {
if (_isCompleted || _isDisposed)
return;
var cur = CurrentVersionChain;
if (cur != null) {
PushVersion(cur.Current);
// restore previous
if (_asyncFlow) {
_alVersions.Value = cur.Previous;
}
else {
_tlVersions.Value = cur.Previous;
}
}
_isCompleted = true;
}
private void DeleteVersion(Version version) {
Console.WriteLine($"Version {version} deleted");
}
private void PushVersion(Version version) {
Console.WriteLine($"Version {version} pushed");
}
// just a class to store previous versions
private class VersionChain {
public VersionChain(Version current, VersionChain previous) {
Current = current;
Previous = previous;
}
public Version Current { get; }
public VersionChain Previous { get; }
}
}
Sample usage of scopes with async flow:
static void Main(string[] args) {
Test();
Console.ReadKey();
}
static async void Test() {
PrintCurrentVersion(); // no version
using (var s1 = new VersionScope(1, asyncFlow: true)) {
await Task.Delay(100);
PrintCurrentVersion(); // version 1
await Task.Delay(100);
s1.Complete();
await Task.Delay(100);
PrintCurrentVersion(); // no version, 1 is already completed
using (var s2 = new VersionScope(2, asyncFlow: true)) {
using (var s3 = new VersionScope(3, asyncFlow: true)) {
PrintCurrentVersion(); // version 3
} // version 3 deleted
await Task.Delay(100);
PrintCurrentVersion(); // back to version 2
s2.Complete();
}
await Task.Delay(100);
PrintCurrentVersion(); // no version, all completed or deleted
}
}
private static void PrintCurrentVersion() {
Console.WriteLine("Current version: " + VersionScope.CurrentVersion);
}

Use of IDisposable like this is somewhat questionable. (See Is it abusive to use IDisposable and "using" as a means for getting "scoped behavior" for exception safety?)
I myself find it useful for some things. This is the pattern I use:
class LevelContext
{
private int _level;
public int CurrentLevel
{
get { return _level; }
set { _level = value < 0 ? 0 : value; }
}
public ILevel NewLevel(int depth = 1)
{
return new Level(this, depth);
}
/// <summary>
/// Provides an interface that calling code can use to handle level objects.
/// </summary>
public interface ILevel : IDisposable
{
LevelContext Owner { get; }
int Depth { get; }
void Close();
}
/// <summary>
/// Private class that provides an easy way to scope levels by allowing
/// them to participate in the "using" construct. Creation of a Level results in an
/// increase in owner's level, while disposal returns owner's level to what it was before.
/// </summary>
class Level : ILevel
{
public Level(LevelContext owner, int depth)
{
Owner = owner;
Depth = depth;
PreviousLevel = owner.CurrentLevel;
Owner.CurrentLevel += Depth;
}
public LevelContext Owner { get; private set; }
public int Depth { get; private set; }
public int PreviousLevel { get; private set; }
public void Close()
{
if (Owner != null)
{
Owner.CurrentLevel = PreviousLevel;
Owner = null;
}
}
void IDisposable.Dispose()
{
Close();
}
}
}
Then the calling code looks like this:
static void Main(string[] args)
{
var lc = new LevelContext();
Console.WriteLine(lc.CurrentLevel);
using (lc.NewLevel())
Console.WriteLine(lc.CurrentLevel);
Console.WriteLine(lc.CurrentLevel);
}
So in your case, you are correct - you need to create something that tracks the current version. That something should get updated when VersionScopes are created and disposed.
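For example, a minimal sketch of that idea applied to the VersionScope from the question, using AsyncLocal so the current version also flows across await points (GeodatabaseVersion, VersionManager and Configuration are the types from the question):
using System;
using System.Threading;

public sealed class VersionScope : IDisposable
{
    // The ambient slot; low-level code reads this instead of taking a parameter.
    private static readonly AsyncLocal<GeodatabaseVersion> _current = new AsyncLocal<GeodatabaseVersion>();

    public static GeodatabaseVersion GetCurrentVersionWithinScope() => _current.Value;

    private readonly VersionManager _versionManager;
    private readonly GeodatabaseVersion _version;
    private bool _done;

    public VersionScope(Configuration config)
    {
        _versionManager = new VersionManager(config);
        _version = _versionManager.GenerateTempVersion();
        _versionManager.Create(_version);
        _current.Value = _version; // publish to the ambient slot
    }

    public void Complete()
    {
        if (_done) return;
        _versionManager.ReconcilePush(_version);
        _current.Value = null;
        _done = true;
    }

    public void Dispose()
    {
        if (_done) return;
        _versionManager.Delete(_version); // rollback path when Complete was never called
        _current.Value = null;
        _done = true;
    }
}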

Related

Ripple Effect: OutOfMemoryException

I have been trying to learn about Roslyn and see if it works for my needs.
In a very simple project I am trying to create a simple 'Ripple Effect', where each iteration causes a new assembly to be loaded; eventually, after 500 iterations, it crashes with an OutOfMemoryException.
Is there a way to do this without causing it to explode?
class Program
{
static void Main(string[] args)
{
string code = @"
IEnumerable<double> combined = A.Concat(B);
return combined.Average();
";
Globals<double> globals = new Globals<double>()
{
A = new double[] { 1, 2, 3, 4, 5 },
B = new double[] { 1, 2, 3, 4, 5 },
};
ScriptOptions options = ScriptOptions.Default;
Assembly systemCore = typeof(Enumerable).Assembly;
options = options.AddReferences(systemCore);
options = options.AddImports("System");
options = options.AddImports("System.Collections.Generic");
options = options.AddImports("System.Linq");
var ra = CSharpScript.RunAsync(code, options, globals).Result;
for (int i = 0; i < 1000; i++)
{
ra = ra.ContinueWithAsync(code).Result;
}
}
}
public class Globals<T>
{
public IEnumerable<T> A;
public IEnumerable<T> B;
}
Exception Image
Every time you use the CSharpScript.Run or Evaluate methods you are actually loading a new script (a .dll), which happens to be quite large. In order to avoid this you need to cache the script that you are executing, like so:
_script = CSharpScript.Create<TR>(code, opts, typeof(Globals<T>)); // Other options may be needed here
Having _script cached, you can now execute it with:
_script.RunAsync(new Globals<T> {A = a, B = b}); // The script will compile here in the first execution
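Applied to the loop from the question, a minimal sketch could look like this (note it drops the ContinueWithAsync state chaining, so every run starts from fresh globals):
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.CodeAnalysis.CSharp.Scripting;
using Microsoft.CodeAnalysis.Scripting;

public class Globals<T>
{
    public IEnumerable<T> A;
    public IEnumerable<T> B;
}

class Program
{
    static void Main()
    {
        string code = @"
            IEnumerable<double> combined = A.Concat(B);
            return combined.Average();";

        ScriptOptions options = ScriptOptions.Default
            .AddReferences(typeof(Enumerable).Assembly)
            .AddImports("System", "System.Collections.Generic", "System.Linq");

        // Compile once...
        var script = CSharpScript.Create<double>(code, options, typeof(Globals<double>));

        // ...then run it many times without loading a new assembly per iteration.
        var globals = new Globals<double>
        {
            A = new double[] { 1, 2, 3, 4, 5 },
            B = new double[] { 1, 2, 3, 4, 5 },
        };
        for (int i = 0; i < 1000; i++)
        {
            double result = script.RunAsync(globals).Result.ReturnValue;
        }
    }
}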
If you only have a few scripts to load with your application each time, this is the easiest thing to do. However, a better solution is to use a separate AppDomain and load the script in isolation. Here is one way of doing it:
Create a script executor proxy as a MarshalByRefObject:
public class ScriptExecutor<TP, TR> : CrossAppDomainObject, IScriptExecutor<TP, TR>
{
private readonly Script<TR> _script;
private int _currentClients;
public DateTime TimeStamp { get; }
public int CurrentClients => _currentClients;
public string Script => _script.Code;
public ScriptExecutor(string script, DateTime? timestamp = null, bool eagerCompile = false)
{
if (string.IsNullOrWhiteSpace(script))
throw new ArgumentNullException(nameof(script));
var opts = ScriptOptions.Default.AddImports("System");
_script = CSharpScript.Create<TR>(script, opts, typeof(Host<TP>)); // Other options may be needed here
if (eagerCompile)
{
var diags = _script.Compile();
Diagnostic firstError;
if ((firstError = diags.FirstOrDefault(d => d.Severity == DiagnosticSeverity.Error)) != null)
{
throw new ArgumentException($"Provided script can't compile: {firstError.GetMessage()}");
}
}
if (timestamp == null)
timestamp = DateTime.UtcNow;
TimeStamp = timestamp.Value;
}
public void Execute(TP parameters, RemoteCompletionSource<TR> completionSource)
{
Interlocked.Increment(ref _currentClients);
_script.RunAsync(new Host<TP> {Args = parameters}).ContinueWith(t =>
{
if (t.IsFaulted && t.Exception != null)
{
completionSource.SetException(t.Exception.InnerExceptions.ToArray());
Interlocked.Decrement(ref _currentClients);
}
else if (t.IsCanceled)
{
completionSource.SetCanceled();
Interlocked.Decrement(ref _currentClients);
}
else
{
completionSource.SetResult(t.Result.ReturnValue);
Interlocked.Decrement(ref _currentClients);
}
});
}
}
public class Host<T>
{
public T Args { get; set; }
}
Create a proxy object to share data between script execution app domain and the main domain:
public class RemoteCompletionSource<T> : CrossAppDomainObject
{
private readonly TaskCompletionSource<T> _tcs = new TaskCompletionSource<T>();
public void SetResult(T result) { _tcs.SetResult(result); }
public void SetException(Exception[] exception) { _tcs.SetException(exception); }
public void SetCanceled() { _tcs.SetCanceled(); }
public Task<T> Task => _tcs.Task;
}
Create this helper abstract type that all the other remote ones need to inherit from:
public abstract class CrossAppDomainObject : MarshalByRefObject, IDisposable
{
private bool _disposed;
/// <summary>
/// Gets an enumeration of nested <see cref="MarshalByRefObject"/> objects.
/// </summary>
protected virtual IEnumerable<MarshalByRefObject> NestedMarshalByRefObjects
{
get { yield break; }
}
~CrossAppDomainObject()
{
Dispose(false);
}
/// <summary>
/// Disconnects the remoting channel(s) of this object and all nested objects.
/// </summary>
private void Disconnect()
{
RemotingServices.Disconnect(this);
foreach (var tmp in NestedMarshalByRefObjects)
RemotingServices.Disconnect(tmp);
}
public sealed override object InitializeLifetimeService()
{
//
// Returning null designates an infinite non-expiring lease.
// We must therefore ensure that RemotingServices.Disconnect() is called when
// it's no longer needed otherwise there will be a memory leak.
//
return null;
}
public void Dispose()
{
GC.SuppressFinalize(this);
Dispose(true);
}
protected virtual void Dispose(bool disposing)
{
if (_disposed)
return;
Disconnect();
_disposed = true;
}
}
Here is how we use it:
public static IScriptExecutor<T, R> CreateExecutor<T, R>(AppDomain appDomain, string script)
{
var t = typeof(ScriptExecutor<T, R>);
var executor = (ScriptExecutor<T, R>)appDomain.CreateInstanceAndUnwrap(t.Assembly.FullName, t.FullName, false, BindingFlags.CreateInstance, null,
new object[] {script, null, true}, CultureInfo.CurrentCulture, null);
return executor;
}
public static AppDomain CreateSandbox()
{
var setup = new AppDomainSetup { ApplicationBase = AppDomain.CurrentDomain.SetupInformation.ApplicationBase };
var appDomain = AppDomain.CreateDomain("Sandbox", null, setup, AppDomain.CurrentDomain.PermissionSet);
return appDomain;
}
string script = @"int Square(int number) {
return number*number;
}
Square(Args)";
var domain = CreateSandbox();
var executor = CreateExecutor<int, int>(domain, script);
using (var src = new RemoteCompletionSource<int>())
{
executor.Execute(5, src);
Console.WriteLine($"{src.Task.Result}");
}
Note the usage of RemoteCompletionSource within a using block. If you forget to dispose it, you will have memory leaks, because instances of this object in the other domain (not the caller's) will never get GCed.
Disclaimer: I took the idea of RemoteCompletionSource from
here, and the idea for CrossAppDomainObject from public-domain code.

C# Parallel.For not executing every step

I have been working on a mock-up for an import service which currently runs in sequence. However, my mock-up seems to exhibit a strange problem whereby sometimes one or two items in the for loop aren't executed.
class Service
{
private Thread _worker;
private bool _stopping;
private CancellationTokenSource _cts;
private ParallelOptions _po;
private Repository _repository;
public void Start(Repository repository)
{
_repository = repository;
_cts = new CancellationTokenSource();
_po = new ParallelOptions {
CancellationToken = _cts.Token
};
_worker = new Thread(ProcessImport);
_worker.Start();
}
public void Stop()
{
_stopping = true;
_cts.Cancel();
if(_worker != null && _worker.IsAlive)
_worker.Join();
}
private void ProcessImport()
{
while (!_stopping)
{
var import = _repository.GetInProgressImport();
if (import == null)
{
Thread.Sleep(1000);
continue;
}
try
{
Parallel.For(0, 1000, _po, i => Work.DoWork(i, import, _cts.Token, _repository));
}
catch (OperationCanceledException)
{
// Unmark batch so it can be started again
var batch = _repository.GetBatch(import.BatchId);
batch.Processing = false;
_repository.UpdateBatch(batch);
Console.WriteLine("Aborted import {0}", import.ImportId);
}
catch (Exception ex)
{
Console.WriteLine("Something went wrong: {0}", ex.Message);
}
}
}
}
class Work
{
public static void DoWork(int i, Import import, CancellationToken ct, Repository repository)
{
// Simulate doing some work
Thread.Sleep(100);
HandleAbort(ct);
Thread.Sleep(100);
HandleAbort(ct);
Thread.Sleep(100);
// Update the batch
var batch = repository.GetBatch(import.BatchId);
batch.Processed++;
if (batch.Processed == batch.Total)
{
batch.Finished = DateTime.Now;
batch.Processing = false;
}
repository.UpdateBatch(batch);
}
private static void HandleAbort(CancellationToken ct)
{
if (!ct.IsCancellationRequested)
return;
ct.ThrowIfCancellationRequested();
}
}
With this code, I often find that the batches never complete and that batch.Processed ends up at 999 or 998.
Can anyone shed any light on what I've done wrong?
Thanks in advance.
Edit:
To be clear about the repository/batch objects - I believe that in my current mock-up they are thread-safe:
class Repository
{
private ConcurrentBag<Batch> _batchData = new ConcurrentBag<Batch>();
private ConcurrentBag<Import> _importData = new ConcurrentBag<Import>();
public void CreateImport(Import import)
{
_importData.Add(import);
}
public Import GetInProgressImport()
{
var import = _importData
.Join(_batchData, i => i.BatchId, b => b.BatchId, (i, b) => new
{
Import = i,
Batch = b
})
.Where(j => j.Batch.Processed < j.Batch.Total && !j.Batch.Processing)
.OrderByDescending(j => j.Batch.Total - j.Batch.Processed)
.ThenBy(j => j.Batch.BatchId - j.Batch.BatchId)
.Select(j => j.Import)
.FirstOrDefault();
if (import == null)
return null;
// mark the batch as processing
var batch = GetBatch(import.BatchId);
batch.Processing = true;
UpdateBatch(batch);
return import;
}
public List<Import> ListImports()
{
return _importData.ToList();
}
public void CreateBatch(Batch batch)
{
_batchData.Add(batch);
}
public Batch GetBatch(Int64 batchId)
{
return _batchData.FirstOrDefault(b => b.BatchId == batchId);
}
public void UpdateBatch(Batch batch)
{
var batchData = _batchData.First(b => b.BatchId == batch.BatchId);
batchData.Total = batch.Total;
batchData.Processed = batch.Processed;
batchData.Started = batch.Started;
batchData.Finished = batch.Finished;
batchData.Processing = batch.Processing;
}
}
class Import
{
public Int64 ImportId { get; set; }
public Int64 BatchId { get; set; }
}
class Batch
{
public Int64 BatchId { get; set; }
public int Total { get; set; }
public int Processed { get; set; }
public DateTime Created { get; set; }
public DateTime Started { get; set; }
public DateTime Finished { get; set; }
public bool Processing { get; set; }
}
This is only a mock-up, so there is no DB or other persistence behind my repository.
Also, I'm not completing my batch based on the value of i, but rather on the number of iterations of the loop (the work actually having been done), as indicated by the Processed property of the batch object.
Thanks
Solution:
I had forgotten about the need to synchronise the update of the batch. It should look like:
class Work
{
private static object _sync = new object();
public static void DoWork(int i, Import import, CancellationToken ct, Repository repository)
{
// Do work
Thread.Sleep(100);
HandleAbort(ct);
Thread.Sleep(100);
HandleAbort(ct);
Thread.Sleep(100);
lock (_sync)
{
// Update the batch
var batch = repository.GetBatch(import.BatchId);
batch.Processed++;
if (batch.Processed == batch.Total)
{
batch.Finished = DateTime.Now;
batch.Processing = false;
}
repository.UpdateBatch(batch);
}
}
private static void HandleAbort(CancellationToken ct)
{
if (!ct.IsCancellationRequested)
return;
ct.ThrowIfCancellationRequested();
}
}
Looks like lost updates on batch.Processed: increments are not atomic, so batch.Processed++ is racy. Use Interlocked.Increment.
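A minimal sketch of that fix, assuming Processed is changed from an auto-property into a field so it can be passed by ref:
using System;
using System.Threading;

class Batch
{
    public long BatchId;
    public int Total;
    public int Processed;        // field rather than property, so "ref" works
    public bool Processing;
    public DateTime Finished;
}

static class BatchProgress
{
    public static void CompleteOneItem(Batch batch)
    {
        // Atomic increment avoids the lost update caused by batch.Processed++
        int done = Interlocked.Increment(ref batch.Processed);
        if (done == batch.Total)
        {
            batch.Finished = DateTime.Now;
            batch.Processing = false;
        }
    }
}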
It seems to me that you don't have a good understanding of threading yet, and it is very dangerous to perform such elaborate threading without one. The mistakes you make are hard to test for, but production will find them.
According to MSDN, the overloads of Parallel.For specify the second integer as toExclusive, meaning the loop counts up to but does not include that value. In other words, 999 is the highest index you will see, not 1000 - but note also that by starting at 0, your loop does execute 1,000 times.
At a glance, your code is parallel, so make sure you're not seeing the "999" call out of order with the "998" one - because it is executed in parallel, your code is inherently unordered and can easily end up being rearranged quite randomly. Also, read up on lock, as your code may be accessing values it should be waiting for.
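A small demo of the iteration-count point - the body runs 1,000 times even though the highest index is 999, and the indexes complete in no particular order:
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        int iterations = 0;
        // toExclusive = 1000, so indexes are 0..999 and the body executes 1,000 times.
        Parallel.For(0, 1000, i => Interlocked.Increment(ref iterations));
        Console.WriteLine(iterations); // prints 1000
    }
}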

Deadlock detection ordered lock implementation

I am trying to implement a mechanism where lock ordering is automatically checked and an exception is thrown at runtime when locks are acquired out of order, in order to avoid deadlocks. A reference implementation is below. Please let me know if you see any issues with this implementation. Many thanks.
public class someResource
{
private OrderedLock lock1 = new OrderedLock(1);
private OrderedLock lock2 = new OrderedLock(2);
public void lockInOrder()
{
lock1.AcquireWriteLock();
lock2.AcquireWriteLock();
// do something
lock1.ReleaseWriteLock();
lock2.ReleaseWriteLock();
}
public void lockOutOfOrder()
{
lock2.AcquireReadLock();
lock1.AcquireReadLock(); // throws exception
// read something
lock2.ReleaseReadLock();
lock1.ReleaseReadLock();
}
}
public class OrderedLock : IDisposable
{
private static readonly ConcurrentDictionary<int, object> createdLocks = new ConcurrentDictionary<int, object>();
[ThreadStatic]
private static ISet<int> acquiredLocks;
private readonly ThreadLocal<int> refCount = new ThreadLocal<int>(false);
private readonly ReaderWriterLockSlim locker = new ReaderWriterLockSlim(LockRecursionPolicy.SupportsRecursion);
private readonly int id;
/// <exception cref="InvalidOperationException">Duplicate identifier detected</exception>
public OrderedLock(int id)
{
if (!createdLocks.TryAdd(id, null))
{
throw new InvalidOperationException("Duplicate identifier detected");
}
this.id = id;
this.refCount.Value = 0;
}
public void AcquireReadLock()
{
this.CheckLockOrder();
this.locker.EnterReadLock();
}
public void AcquireWriteLock()
{
this.CheckLockOrder();
this.locker.EnterWriteLock();
}
public void ReleaseReadLock()
{
this.refCount.Value--;
this.locker.ExitReadLock();
if (this.refCount.Value == 0)
{
acquiredLocks.Remove(this.id);
}
}
public void ReleaseWriteLock()
{
this.refCount.Value--;
this.locker.ExitWriteLock();
if (this.refCount.Value == 0)
{
acquiredLocks.Remove(this.id);
}
}
public void Dispose()
{
while (this.locker.IsWriteLockHeld)
{
this.ReleaseWriteLock();
}
while (this.locker.IsReadLockHeld)
{
ReleaseReadLock();
}
this.locker.Dispose();
this.refCount.Dispose();
GC.SuppressFinalize(this);
}
/// <exception cref="InvalidOperationException">Invalid order of locking detected</exception>
private void CheckLockOrder()
{
if (acquiredLocks == null)
{
acquiredLocks = new HashSet<int>();
}
if (!acquiredLocks.Contains(this.id))
{
if (acquiredLocks.Any() && acquiredLocks.Max() > this.id)
{
throw new InvalidOperationException("Invalid order of locking detected");
}
acquiredLocks.Add(this.id);
}
this.refCount.Value++;
}
}

Multithreading BlockingCollection Alternatives to GetConsumingEnumerable() Producer-Consumer

I have a situation with multiple producers and multiple consumers. The producers enter jobs into a queue. I chose BlockingCollection and it works great, since I need the consumers to wait for a job to be found. However, if I use the GetConsumingEnumerable() feature, the order of the items in the collection changes... this is not what I need.
It even says on MSDN (http://msdn.microsoft.com/en-us/library/dd287186.aspx)
that it does not preserve the order of the items.
Does anyone know an alternative for this situation?
I see that the Take method is available, but does it also provide a 'wait' condition for the consumer threads?
MSDN (http://msdn.microsoft.com/en-us/library/dd287085.aspx) says
'A call to Take may block until an item is available to be removed.' Is it better to use TryTake? I really need the thread to wait and keep checking for a job.
Take blocks the thread until something becomes available.
TryTake, as the name implies, tries to do so and returns a bool indicating whether it succeeded.
That allows for more flexibility when using it:
while (goingOn) {
    if (q.TryTake(out var item)) {
        Process(item);
    }
    else {
        DoSomething_Usefull_OrNotUseFull_OrEvenSleep();
    }
}
instead of
while (goingOn) {
    // we'll wait till this ever happens and only then we:
    var x = q.Take();
    Process(x);
}
My vote goes to TryTake :-)
EXAMPLE:
public class ProducerConsumer<T> {
public struct Message {
public T Data;
}
private readonly ThreadRunner _producer;
private readonly ThreadRunner _consumer;
public ProducerConsumer(Func<T> produce, Action<T> consume) {
var q = new BlockingCollection<Message>();
_producer = new Producer(produce,q);
_consumer = new Consumer(consume,q);
}
public void Start() {
_producer.Run();
_consumer.Run();
}
public void Stop() {
_producer.Stop();
_consumer.Stop();
}
private class Producer : ThreadRunner {
public Producer(Func<T> produce, BlockingCollection<Message> q) : base(q) {
_produce = produce;
}
private readonly Func<T> _produce;
public override void Worker() {
try {
while (KeepRunning) {
var item = _produce();
MessageQ.TryAdd(new Message{Data = item});
}
}
catch (ThreadInterruptedException) {
WasInterrupted = true;
}
}
}
public abstract class ThreadRunner {
protected readonly BlockingCollection<Message> MessageQ;
protected ThreadRunner(BlockingCollection<Message> q) {
MessageQ = q;
}
protected Thread Runner;
protected bool KeepRunning = true;
public bool WasInterrupted;
public abstract void Worker();
public void Run() {
Runner = new Thread(Worker);
Runner.Start();
}
public void Stop() {
KeepRunning = false;
Runner.Interrupt();
Runner.Join();
}
}
class Consumer : ThreadRunner {
private readonly Action<T> _consume;
public Consumer(Action<T> consume,BlockingCollection<Message> q) : base(q) {
_consume = consume;
}
public override void Worker() {
try {
while (KeepRunning) {
Message message;
if (MessageQ.TryTake(out message, TimeSpan.FromMilliseconds(100))) {
_consume(message.Data);
}
else {
//There's nothing in the Q so I have some spare time...
//Excellent moment to update my statisics or update some history to logfiles
//for now we sleep:
Thread.Sleep(TimeSpan.FromMilliseconds(100));
}
}
}
catch (ThreadInterruptedException) {
WasInterrupted = true;
}
}
}
}
USAGE:
[Fact]
public void ConsumerShouldConsume() {
var produced = 0;
var consumed = 0;
Func<int> produce = () => {
Thread.Sleep(TimeSpan.FromMilliseconds(100));
produced++;
return new Random(2).Next(1000);
};
Action<int> consume = c => { consumed++; };
var t = new ProducerConsumer<int>(produce, consume);
t.Start();
Thread.Sleep(TimeSpan.FromSeconds(5));
t.Stop();
Assert.InRange(produced,40,60);
Assert.InRange(consumed, 40, 60);
}

Run code once before and after ALL tests in xUnit.net

TL;DR - I'm looking for xUnit's equivalent of MSTest's AssemblyInitialize (aka the ONE feature it has that I like).
Specifically, I'm looking for it because I have some Selenium smoke tests which I would like to be able to run with no other dependencies. I have a Fixture that will launch IIS Express for me and kill it on disposal, but doing this before every test hugely bloats the runtime.
I would like to trigger this code once at the start of testing, and dispose of it (shutting down the process) at the end. How could I go about doing that?
Even if I can get programmatic access to something like "how many tests are currently being run" I can figure something out.
As of Nov 2015 xUnit 2 is out, so there is a canonical way to share features between tests. It is documented here.
Basically you'll need to create a class doing the fixture:
public class DatabaseFixture : IDisposable
{
public DatabaseFixture()
{
Db = new SqlConnection("MyConnectionString");
// ... initialize data in the test database ...
}
public void Dispose()
{
// ... clean up test data from the database ...
}
public SqlConnection Db { get; private set; }
}
A dummy class bearing the CollectionDefinition attribute.
This class allows xUnit to create a test collection, and it will use the given fixture for all test classes in the collection.
[CollectionDefinition("Database collection")]
public class DatabaseCollection : ICollectionFixture<DatabaseFixture>
{
// This class has no code, and is never created. Its purpose is simply
// to be the place to apply [CollectionDefinition] and all the
// ICollectionFixture<> interfaces.
}
Then you need to add the collection name to all your test classes.
The test classes can receive the fixture through the constructor.
[Collection("Database collection")]
public class DatabaseTestClass1
{
DatabaseFixture fixture;
public DatabaseTestClass1(DatabaseFixture fixture)
{
this.fixture = fixture;
}
}
It's a bit more verbose than MSTest's AssemblyInitialize, since you have to declare on each test class which test collection it belongs to, but it's also more modular (and with MSTest you still need to put TestClass on your classes).
Note: the samples have been taken from the documentation.
To execute code on assembly initialization, one can do the following (tested with xUnit 2.3.1):
using Xunit.Abstractions;
using Xunit.Sdk;
[assembly: Xunit.TestFramework("MyNamespace.MyClassName", "MyAssemblyName")]
namespace MyNamespace
{
public class MyClassName : XunitTestFramework
{
public MyClassName(IMessageSink messageSink)
:base(messageSink)
{
// Place initialization code here
}
public new void Dispose()
{
// Place tear down code here
base.Dispose();
}
}
}
See also https://github.com/xunit/samples.xunit/tree/master/AssemblyFixtureExample
Create a static field and implement a finalizer.
You can use the fact that xUnit creates an AppDomain to run your test assembly and unloads it when it's finished. Unloading the app domain will cause the finalizer to run.
I am using this method to start and stop IISExpress.
public sealed class ExampleFixture
{
public static ExampleFixture Current = new ExampleFixture();
private ExampleFixture()
{
// Run at start
}
~ExampleFixture()
{
Dispose();
}
public void Dispose()
{
GC.SuppressFinalize(this);
// Run at end
}
}
Edit: Access the fixture using ExampleFixture.Current in your tests.
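A minimal usage sketch under that approach (the test class and method names are hypothetical):
using Xunit;

public class SmokeTests
{
    public SmokeTests()
    {
        // Touching the static field forces the fixture (and IIS Express) to start on first use.
        _ = ExampleFixture.Current;
    }

    [Fact]
    public void SiteResponds()
    {
        // ... drive the site started by the fixture ...
    }
}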
It's not possible to do this in the framework today. It is a feature planned for 2.0.
To make this work before 2.0, you would have to perform significant re-architecture of the framework, or write your own runners that recognize your own special attributes.
I use AssemblyFixture (NuGet).
It provides an IAssemblyFixture<T> interface that replaces IClassFixture<T> wherever you want the object's lifetime to match that of the test assembly.
Example:
public class Singleton { }
public class TestClass1 : IAssemblyFixture<Singleton>
{
readonly Singleton _singleton;
public TestClass1(Singleton singleton)
{
_singleton = singleton;
}
[Fact]
public void Test1()
{
//use singleton
}
}
public class TestClass2 : IAssemblyFixture<Singleton>
{
readonly Singleton _singleton;
public TestClass2(Singleton singleton)
{
//same singleton instance of TestClass1
_singleton = singleton;
}
[Fact]
public void Test2()
{
//use singleton
}
}
I was quite annoyed at not having the option to execute things at the end of all the xUnit tests. Some of the options here are not as great, as they involve changing all your tests or putting them under one collection (meaning they get executed synchronously). But Rolf Kristensen's answer gave me the information needed to get to this code. It's a bit long, but you only need to add it to your test project; no other code changes are necessary:
using Siderite.Tests;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Text;
using Xunit;
using Xunit.Abstractions;
using Xunit.Sdk;
[assembly: TestFramework(
SideriteTestFramework.TypeName,
SideriteTestFramework.AssemblyName)]
namespace Siderite.Tests
{
public class SideriteTestFramework : ITestFramework
{
public const string TypeName = "Siderite.Tests.SideriteTestFramework";
public const string AssemblyName = "Siderite.Tests";
private readonly XunitTestFramework _innerFramework;
public SideriteTestFramework(IMessageSink messageSink)
{
_innerFramework = new XunitTestFramework(messageSink);
}
public ISourceInformationProvider SourceInformationProvider
{
set
{
_innerFramework.SourceInformationProvider = value;
}
}
public void Dispose()
{
_innerFramework.Dispose();
}
public ITestFrameworkDiscoverer GetDiscoverer(IAssemblyInfo assembly)
{
return _innerFramework.GetDiscoverer(assembly);
}
public ITestFrameworkExecutor GetExecutor(AssemblyName assemblyName)
{
var executor = _innerFramework.GetExecutor(assemblyName);
return new SideriteTestExecutor(executor);
}
private class SideriteTestExecutor : ITestFrameworkExecutor
{
private readonly ITestFrameworkExecutor _executor;
private IEnumerable<ITestCase> _testCases;
public SideriteTestExecutor(ITestFrameworkExecutor executor)
{
this._executor = executor;
}
public ITestCase Deserialize(string value)
{
return _executor.Deserialize(value);
}
public void Dispose()
{
_executor.Dispose();
}
public void RunAll(IMessageSink executionMessageSink, ITestFrameworkDiscoveryOptions discoveryOptions, ITestFrameworkExecutionOptions executionOptions)
{
_executor.RunAll(executionMessageSink, discoveryOptions, executionOptions);
}
public void RunTests(IEnumerable<ITestCase> testCases, IMessageSink executionMessageSink, ITestFrameworkExecutionOptions executionOptions)
{
_testCases = testCases;
_executor.RunTests(testCases, new SpySink(executionMessageSink, this), executionOptions);
}
internal void Finished(TestAssemblyFinished executionFinished)
{
// do something with the run test cases in _testcases and the number of failed and skipped tests in executionFinished
}
}
private class SpySink : IMessageSink
{
private readonly IMessageSink _executionMessageSink;
private readonly SideriteTestExecutor _testExecutor;
public SpySink(IMessageSink executionMessageSink, SideriteTestExecutor testExecutor)
{
this._executionMessageSink = executionMessageSink;
_testExecutor = testExecutor;
}
public bool OnMessage(IMessageSinkMessage message)
{
var result = _executionMessageSink.OnMessage(message);
if (message is TestAssemblyFinished executionFinished)
{
_testExecutor.Finished(executionFinished);
}
return result;
}
}
}
}
The highlights:
- assembly: TestFramework instructs xUnit to use your framework, which proxies to the default one
- SideriteTestFramework also wraps the executor in a custom class that then wraps the message sink
- in the end, the Finished method is executed with the list of test cases run and the result from the xUnit message
More work could be done here. If you want to execute stuff without caring about the tests run, you could inherit from XunitTestFramework and just wrap the message sink.
You can use the IUseFixture interface to make this happen. All of your tests must also inherit the TestBase class. You can also use OneTimeFixture directly from your test.
public class TestBase : IUseFixture<OneTimeFixture<ApplicationFixture>>
{
protected ApplicationFixture Application;
public void SetFixture(OneTimeFixture<ApplicationFixture> data)
{
this.Application = data.Fixture;
}
}
public class ApplicationFixture : IDisposable
{
public ApplicationFixture()
{
// This code run only one time
}
public void Dispose()
{
// Here is run only one time too
}
}
public class OneTimeFixture<TFixture> where TFixture : new()
{
// This value does not share between each generic type
private static readonly TFixture sharedFixture;
static OneTimeFixture()
{
// Constructor will call one time for each generic type
sharedFixture = new TFixture();
var disposable = sharedFixture as IDisposable;
if (disposable != null)
{
AppDomain.CurrentDomain.DomainUnload += (sender, args) => disposable.Dispose();
}
}
public OneTimeFixture()
{
this.Fixture = sharedFixture;
}
public TFixture Fixture { get; private set; }
}
EDIT: Fixed the problem where a new fixture was created for each test class.
Does your build tool provide such a feature?
In the Java world, when using Maven as a build tool, we use the appropriate phases of the build lifecycle. E.g. in your case (acceptance tests with Selenium-like tools), one can make good use of the pre-integration-test and post-integration-test phases to start/stop a webapp before/after one's integration-tests.
I'm pretty sure the same mechanism can be set up in your environment.
The method described by Jared Kells does not work under .NET Core, because it is not guaranteed that finalizers will be called - and, in fact, the finalizer is not called for the code above. Please see:
Why does the Finalize/Destructor example not work in .NET Core?
https://github.com/dotnet/runtime/issues/16028
https://github.com/dotnet/runtime/issues/17836
https://github.com/dotnet/runtime/issues/24623
So, based on the great answer above, here is what I ended up doing (replace saving to file as necessary):
public class DatabaseCommandInterceptor : IDbCommandInterceptor
{
private static ConcurrentDictionary<DbCommand, DateTime> StartTime { get; } = new();
public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) => Log(command, interceptionContext);
public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) => Log(command, interceptionContext);
public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) => Log(command, interceptionContext);
private static void Log<T>(DbCommand command, DbCommandInterceptionContext<T> interceptionContext)
{
var parameters = new StringBuilder();
foreach (DbParameter param in command.Parameters)
{
if (parameters.Length > 0) parameters.Append(", ");
parameters.Append($"{param.ParameterName}:{param.DbType} = {param.Value}");
}
var data = new DatabaseCommandInterceptorData
{
CommandText = command.CommandText,
CommandType = $"{command.CommandType}",
Parameters = $"{parameters}",
Duration = StartTime.TryRemove(command, out var startTime) ? DateTime.Now - startTime : TimeSpan.Zero,
Exception = interceptionContext.Exception,
};
DbInterceptorFixture.Current.LogDatabaseCall(data);
}
public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<int> interceptionContext) => OnStart(command);
public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) => OnStart(command);
public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<object> interceptionContext) => OnStart(command);
private static void OnStart(DbCommand command) => StartTime.TryAdd(command, DateTime.Now);
}
public class DatabaseCommandInterceptorData
{
public string CommandText { get; set; }
public string CommandType { get; set; }
public string Parameters { get; set; }
public TimeSpan Duration { get; set; }
public Exception Exception { get; set; }
}
/// <summary>
/// All times are in milliseconds.
/// </summary>
public record DatabaseCommandStatisticalData
{
public string CommandText { get; }
public int CallCount { get; init; }
public int ExceptionCount { get; init; }
public double Min { get; init; }
public double Max { get; init; }
public double Mean { get; init; }
public double StdDev { get; init; }
public DatabaseCommandStatisticalData(string commandText)
{
CommandText = commandText;
CallCount = 0;
ExceptionCount = 0;
Min = 0;
Max = 0;
Mean = 0;
StdDev = 0;
}
/// <summary>
/// Calculates k-th moment for n + 1 values: M_k(n + 1)
/// based on the values of k, n, mkn = M_k(N), and x(n + 1).
/// The sample adjustment (replacement of n -> (n - 1)) is NOT performed here
/// because it is not needed for this function.
/// Note that k-th moment for a vector x will be calculated in Wolfram as follows:
/// Sum[x[[i]]^k, {i, 1, n}] / n
/// </summary>
private static double MknPlus1(int k, int n, double mkn, double xnp1) =>
(n / (n + 1.0)) * (mkn + (1.0 / n) * Math.Pow(xnp1, k));
public DatabaseCommandStatisticalData Updated(DatabaseCommandInterceptorData data) =>
CallCount == 0
? this with
{
CallCount = 1,
ExceptionCount = data.Exception == null ? 0 : 1,
Min = data.Duration.TotalMilliseconds,
Max = data.Duration.TotalMilliseconds,
Mean = data.Duration.TotalMilliseconds,
StdDev = 0.0,
}
: this with
{
CallCount = CallCount + 1,
ExceptionCount = ExceptionCount + (data.Exception == null ? 0 : 1),
Min = Math.Min(Min, data.Duration.TotalMilliseconds),
Max = Math.Max(Max, data.Duration.TotalMilliseconds),
Mean = MknPlus1(1, CallCount, Mean, data.Duration.TotalMilliseconds),
StdDev = Math.Sqrt(
MknPlus1(2, CallCount, Math.Pow(StdDev, 2) + Math.Pow(Mean, 2), data.Duration.TotalMilliseconds)
- Math.Pow(MknPlus1(1, CallCount, Mean, data.Duration.TotalMilliseconds), 2)),
};
public static string Header { get; } =
string.Join(TextDelimiter.VerticalBarDelimiter.Key,
new[]
{
nameof(CommandText),
nameof(CallCount),
nameof(ExceptionCount),
nameof(Min),
nameof(Max),
nameof(Mean),
nameof(StdDev),
});
public override string ToString() =>
string.Join(TextDelimiter.VerticalBarDelimiter.Key,
new[]
{
$"\"{CommandText.Replace("\"", "\"\"")}\"",
$"{CallCount}",
$"{ExceptionCount}",
$"{Min}",
$"{Max}",
$"{Mean}",
$"{StdDev}",
});
}
public class DbInterceptorFixture
{
public static readonly DbInterceptorFixture Current = new();
private bool _disposedValue;
private ConcurrentDictionary<string, DatabaseCommandStatisticalData> DatabaseCommandData { get; } = new();
private static IMasterLogger Logger { get; } = new MasterLogger(typeof(DbInterceptorFixture));
/// <summary>
/// Will run once at start up.
/// </summary>
private DbInterceptorFixture()
{
AssemblyLoadContext.Default.Unloading += Unloading;
}
/// <summary>
/// A dummy method to call in order to ensure that static constructor is called
/// at some more or less controlled time.
/// </summary>
public void Ping()
{
}
public void LogDatabaseCall(DatabaseCommandInterceptorData data) =>
DatabaseCommandData.AddOrUpdate(
data.CommandText,
_ => new DatabaseCommandStatisticalData(data.CommandText).Updated(data),
(_, d) => d.Updated(data));
private void Unloading(AssemblyLoadContext context)
{
if (_disposedValue) return;
GC.SuppressFinalize(this);
_disposedValue = true;
SaveData();
}
private void SaveData()
{
try
{
File.WriteAllLines(
#"C:\Temp\Test.txt",
DatabaseCommandData
.Select(e => $"{e.Value}")
.Prepend(DatabaseCommandStatisticalData.Header));
}
catch (Exception e)
{
Logger.LogError(e);
}
}
}
and then register DatabaseCommandInterceptor once somewhere in the tests:
DbInterception.Add(new DatabaseCommandInterceptor());
I also prefer calling DbInterceptorFixture.Current.Ping() in the base test class, though I don't think that this is needed.
The interface IMasterLogger is just a strongly typed wrapper around log4net, so just replace it with your favorite one.
The value of TextDelimiter.VerticalBarDelimiter.Key is just '|' and it sits in what we call a closed set.
PS: If I screwed up the statistics, please comment and I will update the answer.
Just use a static constructor - that's all you need to do; it runs just once.
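A minimal sketch of that suggestion; note it only gives you one-time setup, not a hook at the end of the run:
using System;
using Xunit;

public class SmokeTests
{
    // Runs exactly once, the first time any member of this class is used.
    static SmokeTests()
    {
        Console.WriteLine("one-time setup");
    }

    [Fact]
    public void Test1()
    {
        // ... test body ...
    }
}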
