Verify methods are called in order - C#

I have the following method:
public async Task DeleteAmendment(int amendmentHeaderId, int userId)
{
    // Delete the corresponding version records.
    await _amendmentVersionService.DeleteForAmendmentAsync(amendmentHeaderId);

    // Delete the corresponding lifecycle records.
    await _amendmentLifecycleService.DeleteForAmendmentAsync(amendmentHeaderId);

    // Delete the amendment header record itself.
    await _amendmentHeaderService.DeleteAsync(amendmentHeaderId, userId);
}
I am trying to verify that the methods are called in order.
I have tried setting up callbacks (see below):
AmendmentVersionService.Setup(x => x.DeleteForAmendmentAsync(It.IsAny<int>()))
    .Callback(() => ServiceCallbackList.Add("AmendmentVersionService"));

AmendmentLifecycleService.Setup(x => x.DeleteForAmendmentAsync(It.IsAny<int>()))
    .Callback(() => ServiceCallbackList.Add("AmendmentLifecycleService"));

AmendmentHeaderService.Setup(x => x.DeleteAsync(It.IsAny<int>(), It.IsAny<int>()))
    .Callback(() => ServiceCallbackList.Add("AmendmentHeaderService"));
But the list only contains the string "AmendmentVersionService"
Any ideas?

One way to achieve the same goal with a different approach would be to have three tests, one per call. It's a bit dirty, but as a fallback solution it could get you out of the woods (the first case is sketched below).
For the first call:
Set up call 2 to throw an exception of a custom type TestException.
Assert only call 1 was performed.
Expect the TestException to be thrown.
For the second call:
Set up call 3 to throw an exception of a custom type TestException.
Assert calls 1 and 2 were performed.
Expect the TestException to be thrown.
For the third call:
Set up all calls to complete normally.
Assert calls 1, 2 and 3 were performed.
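Here is a minimal sketch of the first of those tests, assuming Moq with NUnit-style assertions; the mock properties and ServiceCallbackList mirror the question's fixture, while sut (the class under test) and the test framework are assumptions:

// Call 1 succeeds and records itself.
AmendmentVersionService
    .Setup(x => x.DeleteForAmendmentAsync(It.IsAny<int>()))
    .Returns(Task.CompletedTask)
    .Callback(() => ServiceCallbackList.Add("AmendmentVersionService"));

// Call 2 blows up with the custom exception type.
AmendmentLifecycleService
    .Setup(x => x.DeleteForAmendmentAsync(It.IsAny<int>()))
    .Throws(new TestException());

// Act and assert: the exception from call 2 bubbles up through DeleteAmendment.
Assert.ThrowsAsync<TestException>(() => sut.DeleteAmendment(1, 2));

// Only call 1 made it into the list, so call 1 ran before call 2.
CollectionAssert.AreEqual(new[] { "AmendmentVersionService" }, ServiceCallbackList);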

You could use continuations (below), but really, if you need to guarantee that these things happen in order then they should not be async operations. Typically you would want async operations to be able to run at the same time:
public async Task DeleteAmendment(int amendmentHeaderId, int userId)
{
    await Task.Run(async () =>
    {
        // Delete the corresponding version records.
        await _amendmentVersionService.DeleteForAmendmentAsync(amendmentHeaderId);
    }).ContinueWith(async _ =>
    {
        // Delete the corresponding lifecycle records.
        await _amendmentLifecycleService.DeleteForAmendmentAsync(amendmentHeaderId);
    }).Unwrap().ContinueWith(async _ =>
    {
        // Delete the amendment header record itself.
        await _amendmentHeaderService.DeleteAsync(amendmentHeaderId, userId);
    }).Unwrap();
}

Your problem is that you will never be able to know whether a method was performed as a result of the previous one finishing (awaited), or whether you were just lucky enough not to suffer from a race condition (call made without await, or no ContinueWith).
The only way you can actually test it for sure is by replacing the default TaskScheduler with an implementation of your own, which will not queue the subsequent task. If the subsequent task gets called, then your code is wrong. If not, that means that the task is really executed as a result of the previous one completing.
We have done this in Testeroids, a test framework a friend and I built.
By doing so, your custom TaskScheduler can perform the tasks sequentially, on a single thread (to really highlight the timeline problems you could have), and record which tasks were scheduled and in which order.
It will require a lot of effort on your part if you want to be that thorough, but at least you get the idea.
In order to replace the default TaskScheduler, you can get inspired by the work we did on Testeroids:
https://github.com/Testeroids/Testeroids/blob/c5f3f02e8078db649f804d94c37cdab3df89fed4/solution/src/app/Testeroids/TplTestPlatformHelper.cs
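For illustration, here is a stripped-down sketch of the idea (this is not the Testeroids implementation, just a scheduler that runs queued tasks inline on the calling thread and records the order in which they were queued, so a test can assert on that order):

using System.Collections.Generic;
using System.Threading.Tasks;

public sealed class RecordingInlineTaskScheduler : TaskScheduler
{
    // Every task handed to this scheduler, in the order it was queued.
    public List<Task> Scheduled { get; } = new List<Task>();

    protected override void QueueTask(Task task)
    {
        Scheduled.Add(task);
        TryExecuteTask(task); // run sequentially on the current thread
    }

    protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued)
    {
        if (!taskWasPreviouslyQueued)
        {
            Scheduled.Add(task);
        }
        return TryExecuteTask(task);
    }

    protected override IEnumerable<Task> GetScheduledTasks() => Scheduled;
}

A test can then point the code under test at this scheduler (for example via a TaskFactory built around it) and assert on the contents of Scheduled afterwards.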

Thanks to Stephen Brickner ...
I made all my calls synchronous, which made the callbacks in the Moq setups work like a dream.
Thanks for all your help much appreciated!

Related

Executing Task based methods in Observable chain => IObservable<IObservable<Unit>>

I have a lot of code that is reactive but needs to call into Task based methods.
For example, in this snippet PracticeIdChanged is an IObservable. When PracticeIdChanged fires, I want the system to react by reloading some stuff and have code that looks like this:
PracticeIdChanged.Subscribe(async x => {
    SelectedChargeInfo.Item = null;
    await LoadAsync().ConfigureAwait(false);
});
Although it seems to work OK, I get warnings about executing async code in the Subscribe. Additionally, I consider this a code smell, as I am mixing two separate threading models, which I think may come back to bite me later.
Refactoring like this works, even with no combination methods like .Merge(), .Switch(), or .Concat():
PracticeIdChanged
    .Do(_ => SelectedChargeInfo.Item = null)
    .Select(_ => LoadAsync().ToObservable())
    .Subscribe();
When PracticeIdChanged fires, the LoadAsync method executes.
The Select results in an IObservable<IObservable<Unit>>, which looks odd. Is this OK, or does it require some combination function like .Merge() or .Switch()?
In many places I use SelectMany to execute the Task-based method, but it requires the selector to return Task<TResult>, which would require changing the signature of the Task-based method in the example above, which I do not want to do.
It depends on what kind of notifications you expect to get from the resulting sequence, and what kind of behavior you want in case of errors. In your example you .Subscribe() to the sequence without passing any handler whatsoever (onNext/onError/onCompleted), indicating that you are not interested in being notified of anything. You don't care about the completion of the asynchronous operations; all of them become essentially fire-and-forget. Also, a failure of one asynchronous operation will have no consequence for the rest: the already-started asynchronous operations will continue running (they won't get canceled), and starting new asynchronous operations will not be impeded. Finally, a failure of the source sequence (PracticeIdChanged) will result in an unhandled exception that will crash the process. If that's the behavior you want, then your current setup is what you need.
For comparison, let's consider this setup:
await PracticeIdChanged
    .Do(_ => SelectedChargeInfo.Item = null)
    .Select(_ => Observable.FromAsync(ct => LoadAsync(ct)))
    .Merge()
    .DefaultIfEmpty();
This setup assumes that the LoadAsync method has a CancellationToken parameter. The resulting sequence is awaited. The await will complete when all LoadAsync operations have completed, or any one of them has failed, or if the source sequence has failed. In case of failure, all currently running asynchronous operations will receive a cancellation signal, so that they can bail out quickly. The await will not wait for their completion though. Only the first error that occurred will be propagated as an exception. This exception can be handled by wrapping the await in a try/catch block. There is no possibility for an uncatchable, process-crashing, unhandled exception.
The purpose of the DefaultIfEmpty at the end of the chain is to prevent an InvalidOperationException in case the source sequence emits zero elements. It's a workaround for the strange "feature" of empty observable sequences: they throw when awaited or otherwise waited on, synchronously or asynchronously.
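For completeness, the awaited version above could be wrapped like this to handle that single propagated error (a sketch only; the logging line is illustrative and not part of the original code):

try
{
    await PracticeIdChanged
        .Do(_ => SelectedChargeInfo.Item = null)
        .Select(_ => Observable.FromAsync(ct => LoadAsync(ct)))
        .Merge()
        .DefaultIfEmpty();
}
catch (Exception ex)
{
    // Only the first error reaches this point; any still-running LoadAsync calls
    // have already been signalled to cancel via their CancellationToken.
    Console.WriteLine($"Reload failed: {ex}");
}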

How does this ConcurrentDictionary + Lazy<Task<T>> code work?

There are various posts/answers that say that the .NET/.NET Core ConcurrentDictionary GetOrAdd method is not thread-safe when the Func delegate is used to calculate the value to insert into the dictionary, if the key didn't already exist.
I'm under the belief that when using the factory method of a ConcurrentDictionary's GetOrAdd method, it could be called multiple times "at the same time/in really quick succession" if a number of requests occur at the "same time". This could be wasteful, especially if the call is "expensive". (#panagiotis-kanavos explains this better than I). With this assumption, I'm struggling to understand how some sample code I made, seems to work.
I've created a working sample on .NET Fiddle but I'm stuck trying to understand how it works.
A common recommendation I've read is to have a Lazy<Task<T>> value in the ConcurrentDictionary. The idea is that the Lazy prevents concurrent callers from each executing the underlying method.
The main part of the code which does the heavy lifting is this:
public static async Task<DateTime> GetDateFromCache()
{
    var result = await _cache.GetOrAdd("someDateTime", new Lazy<Task<DateTime>>(async () =>
    {
        // NOTE: I've made this method take 2 seconds to run, each time it's called.
        var someData = await GetDataFromSomeExternalDependency();
        return DateTime.UtcNow;
    })).Value;
    return result;
}
This is how I read this:
Check if someDateTime key exists in the dictionary.
If yes, return that. <-- That's a thread-safe atomic action. Yay!
If no, then here we go ....
Create an instance of a Lazy<Task<DateTime>> (which is basically instant)
Return that Lazy instance. (so far, the actual 'expensive' operation hasn't been called, yet.)
Now get the Value, which is a Task<DateTime>.
Now await this task .. which finally does the 'expensive' call. It waits 2 seconds .. and then returns the result (some point in Time).
Now this is where I'm all wrong. Because I'm assuming (above) that the value in the key/value pair is a Lazy<Task<DateTime>> ... which the await would call each time. If the await is called one at a time (because the Lazy protects other callers from all calling at the same time), then I would have thought that the result would be a different DateTime with each independent call.
So can someone please explain where I'm wrong in my thinking, please?
(please refer to the full running code on .NET Fiddle).
Because I'm assuming (above) that the value in the key/value is a Lazy<Task<DateTime>>
Yes, that is true.
which the await would call each time. If the await is called one at a time (because the Lazy protects other callers from all calling at the same time), then I would have thought that the result would be a different DateTime with each independent call.
await is not a call; it is more like "continue execution when the result is available". Accessing Lazy.Value creates the task the first time it is accessed (subsequent accesses return the same task), and this initiates the call to GetDataFromSomeExternalDependency that eventually returns the DateTime. You can await the task however many times you want and get the same result.
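A minimal sketch that illustrates the point, with a two-second delay standing in for GetDataFromSomeExternalDependency (run inside any async method):

var lazy = new Lazy<Task<DateTime>>(async () =>
{
    await Task.Delay(2000);      // stand-in for the expensive external call
    return DateTime.UtcNow;
});

var first = await lazy.Value;    // the factory runs here, exactly once
var second = await lazy.Value;   // same Task instance, same cached result

Console.WriteLine(first == second); // True: both awaits observe the same completed value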

Observable.Range being repeated?

New to Rx -- I have a sequence that appears to be functioning correctly except for the fact that it appears to repeat.
I think I'm missing something around calls to Select() or SelectMany() that triggers the range to re-evaluate.
Explanation of Code & What I'm trying to Do
For all numbers, loop through a method that retrieves data (paged from a database).
Eventually, this data will be empty (I only want to keep processing while it retrieves data).
For each of those records retrieved, I only want to process ones that should be processed
Of those that should be processed, I'd like to process up to x of them in parallel (according to a setting).
I want to wait until the entire sequence is completed to exit the method (hence the wait call at the end).
Problem With the Code Below
I run the code through with a data set that I know only has 1 item.
So, page 0 returns 1 item, and page 1 returns 0 items.
My expectation is that the process runs once for the one item.
However, I see that both page 0 and 1 are called twice and the process thus runs twice.
I think this has something to do with a call that is causing the range to re-evaluate beginning from 0, but I can't figure out what it is.
The Code
var query = Observable.Range(0, int.MaxValue)
    .Select(pageNum =>
    {
        _etlLogger.Info("Calling GetResProfIDsToProcess with pageNum of {0}", pageNum);
        return _recordsToProcessRetriever.GetResProfIDsToProcess(pageNum, _processorSettings.BatchSize);
    })
    .TakeWhile(resProfList => resProfList.Any())
    .SelectMany(records => records.Where(x => _determiner.ShouldProcess(x)))
    .Select(resProf => Observable.Start(async () => await _schoolDataProcessor.ProcessSchoolsAsync(resProf)))
    .Merge(maxConcurrent: _processorSettings.ParallelProperties)
    .Do(async trackingRequests =>
    {
        await CreateRequests(trackingRequests.Result, createTrackingPayload);
        var numberOfAttachments = SumOfRequestType(trackingRequests.Result, TrackingRecordRequestType.AttachSchool);
        var numberOfDetachments = SumOfRequestType(trackingRequests.Result, TrackingRecordRequestType.DetachSchool);
        var numberOfAssignmentTypeUpdates = SumOfRequestType(trackingRequests.Result,
            TrackingRecordRequestType.UpdateAssignmentType);
        _etlLogger.Info("Extractor generated {0} attachments, {1} detachments, and {2} assignment type changes.",
            numberOfAttachments, numberOfDetachments, numberOfAssignmentTypeUpdates);
    });
var subscription = query.Subscribe(
    trackingRequests =>
    {
        // Nothing really needs to happen here. Technically we're just doing something when it's done.
    },
    () =>
    {
        _etlLogger.Info("Finished! Woohoo!");
    });

await query.Wait();
This is because you subscribe to the sequence twice: once at query.Subscribe(...) and again at query.Wait().
Observable.Range(0, int.MaxValue) is a cold observable. Every time you subscribe to it, it will be evaluated again. You could make the observable hot by publishing it with Publish(), then subscribe to it, then Connect(), and then Wait(). This does add the risk of an InvalidOperationException if you call Wait() after the last element has already been yielded. A better alternative is LastOrDefaultAsync().
That would get you something like this:
var connectable = query.Publish();
var subscription = connectable.Subscribe(...);
subscription = new CompositeDisposable(connectable.Connect(), subscription);
await connectable.LastOrDefaultAsync();
Or you can avoid await and return a task directly with ToTask() (do remove async from your method signature).
return connectable.LastOrDefaultAsync().ToTask();
Once converted to a task, you can synchronously wait for it with Wait() (do not confuse Task.Wait() with Observable.Wait()).
connectable.LastOrDefaultAsync().ToTask().Wait();
However, most likely you do not want to wait at all! Waiting in an async context makes little sense. What you should do is put the remainder of the code that needs to run after the sequence completes in the OnCompleted() part of the subscription (see the sketch below). If you have (clean-up) code that needs to run even when you unsubscribe (Dispose), consider Observable.Using or the Finally(...) method to ensure this code runs.
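A rough sketch of that shape, reusing the names from the question (the per-item handler stays empty, as in the original):

var subscription = query
    .Finally(() => _etlLogger.Info("Sequence terminated (completed, failed, or unsubscribed)."))
    .Subscribe(
        trackingRequests =>
        {
            // Per-item handling, if any.
        },
        () =>
        {
            // Everything that used to run after 'await query.Wait()' goes here instead.
            _etlLogger.Info("Finished! Woohoo!");
        });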
As already mentioned, the cause of the Observable.Range being repeated is that you're subscribing twice: once with .Subscribe(...) and once with .Wait().
In this kind of circumstance I would go with a very simple blocking call to get the values. Just do this:
var results = query.ToArray().Wait();
The .ToArray() turns a multi-valued IObservable<T> into a single-valued IObservable<T[]>. The .Wait() turns this into T[]. It's the easy way to ensure only one subscription, blocking, and getting all of the values out.
In your case you may not need all values, but I think this is a good habit to get into.

What is causing this particular method to deadlock?

As best as I can, I opt for async all the way down. However, I am still stuck using ASP.NET Membership which isn't built for async. As a result my calls to methods like string[] GetRolesForUser() can't use async.
In order to build roles properly I depend on data from various sources so I am using multiple tasks to fetch the data in parallel:
public override string[] GetRolesForUser(string username) {
    ...
    Task.WaitAll(taskAccounts, taskContracts, taskOtherContracts, taskMoreContracts, taskSomeProduct);
    ...
}
All of these tasks are simply fetching data from a SQL Server database using the Entity Framework. However, the introduction of that last task (taskSomeProduct) is causing a deadlock, while none of the other tasks ever did.
Here is the method that causes a deadlock:
public async Task<int> SomeProduct(IEnumerable<string> ids) {
    var q = from c in this.context.Contracts
            join p in this.context.Products
                on c.ProductId equals p.Id
            where ids.Contains(c.Id)
            select p.Code;

    // Adding .ConfigureAwait(false) fixes the problem here.
    var codes = await q.ToListAsync();

    var slotCount = codes.Sum(p => char.GetNumericValue(p, p.Length - 1));
    return Convert.ToInt32(slotCount);
}
However, this method (which looks very similar to all the other methods) isn't causing deadlocks:
public async Task<List<CustomAccount>> SomeAccounts(IEnumerable<string> ids) {
    return await this.context.Accounts
        .Where(o => ids.Contains(o.Id))
        .ToListAsync()
        .ToCustomAccountListAsync();
}
I'm not quite sure what it is about that one method that is causing the deadlock. Ultimately they are both doing the same task of querying the database. Adding ConfigureAwait(false) to the one method does fix the problem, but I'm not quite sure what differentiates it from the other methods, which execute fine.
Edit
Here is some additional code which I originally omitted for brevity:
public static Task<List<CustomAccount>> ToCustomAccountListAsync(this Task<List<Account>> sqlObjectsTask) {
    var sqlObjects = sqlObjectsTask.Result;
    var customObjects = sqlObjects.Select(o => PopulateCustomAccount(o)).ToList();
    return Task.FromResult<List<CustomAccount>>(customObjects);
}
The PopulateCustomAccount method simply returns a CustomAccount object from the database Account object.
In ToCustomAccountListAsync you call Task.Result. That's a classic deadlock. Use await.
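For example, a minimal sketch of that fix, awaiting the incoming task instead of blocking on .Result:

public static async Task<List<CustomAccount>> ToCustomAccountListAsync(this Task<List<Account>> sqlObjectsTask) {
    // Await instead of .Result so the ASP.NET synchronization context is never blocked.
    var sqlObjects = await sqlObjectsTask.ConfigureAwait(false);
    return sqlObjects.Select(o => PopulateCustomAccount(o)).ToList();
}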
This is not an answer, but I have a lot to say and it wouldn't fit in comments.
Some fact: EF context is not thread safe and doesn't support parallel execution:
While thread safety would make async more useful it is an orthogonal feature. It is unclear that we could ever implement support for it in the most general case, given that EF interacts with a graph composed of user code to maintain state and there aren't easy ways to ensure that this code is also thread safe.
For the moment, EF will detect if the developer attempts to execute two async operations at one time and throw.
Some prediction:
You say that:
The parallel execution of the other four tasks has been in production for months without deadlocking.
They can't be executing in parallel. One possibility is that the thread pool cannot assign more than one thread to your operations; in that case they would be executed sequentially. Or it could be the way you are initializing your tasks; I'm not sure. Assuming they are executed sequentially (otherwise you would have recognized the exception I'm talking about), there is another problem:
Task.WaitAll hanging with multiple awaitable tasks in ASP.NET
So maybe it isn't about that specific task SomeProduct, but it always happens on the last task? Well, if they executed in parallel there wouldn't be a "last task", but as I've already pointed out, they must be running sequentially, considering they have been in production for quite a long time.

.Net RX: tracking progress of parallel execution

I need to execute multiple long-running operations in parallel and would like to report progress in some way. From my initial research, it seems that IObservable fits this model. The idea is that I call a method that returns an IObservable<int>, where each int is the reported percent complete. Parallel execution starts immediately upon exiting the method, and the observable must be a hot observable so that all subscribers see the same progress information at a given point in time; e.g. a late subscriber may only learn that the whole execution is complete and there is no more progress to track.
The closest approach to this problem that I found is to use Observable.ForkJoin and Observable.Start, but I can't work out how to combine them into a single observable that I can return from the method.
Please share your ideas of how this can be achieved, or maybe there is another approach to this problem using .NET Rx.
To make a hot observable, I would probably start with a method that uses a BehaviorSubject as the return value and the way the operations report progress. If you just want the example, skip to the end. The rest of this answer explains the steps.
I will assume for the sake of this answer that your long-running operations do not have their own way to be called asynchronously. If they do, the next step may be a little different. The next thing to do is to send the work to another thread using an IScheduler. You may allow the caller to select where the work happens by making an overload that takes the scheduler as a parameter if desired (in which case the overload that does not will pick a default scheduler). There are quite a few overloads of IScheduler.Schedule, of which several are extension methods, so you should look through them to see which is most appropriate for your situation; I'm using the one that takes only an Action here. If you have multiple operations that can all run in parallel, you can call scheduler.Schedule multiple times.
The hardest part of this will probably be determining what the progress is at any given point. If you have multiple operations going on at once, you will probably need to keep track of how many have completed to know what the current progress is. With the information you provided, I can't be more specific than that.
Finally, if your operations are cancellable, you may want to take a CancellationToken as a parameter. You can use this to cancel the operation while it is in the scheduler's queue before it starts. If you write your operation code correctly, it can use the token for cancellation as well.
IObservable<int> DoStuff(/*args*/,
                         CancellationToken cancel,
                         IScheduler scheduler)
{
    var progress = new BehaviorSubject<int>(0);
    // If you don't take it as a parameter, pick a scheduler:
    // IScheduler scheduler = Scheduler.ThreadPool;

    var disp = scheduler.Schedule(() =>
    {
        // do stuff that needs to run on another thread
        // report progress
        progress.OnNext(25);
    });
    var disp2 = scheduler.Schedule(...);

    // If the operation is cancelled before the scheduler has started it,
    // you need to dispose the return from the Schedule calls.
    var allOps = new CompositeDisposable(disp, disp2);
    cancel.Register(allOps.Dispose);

    return progress;
}
Here is one approach:
// Set up a method to do some work
// and report its own partial progress.
Func<string, IObservable<int>> doPartialWork =
    (arg) => Observable.Create<int>(obsvr => {
        return Scheduler.TaskPool.Schedule(arg, (sched, state) => {
            var progress = 0;
            var cancel = new BooleanDisposable();
            while (progress < 10 && !cancel.IsDisposed)
            {
                // do work with arg
                Thread.Sleep(550);
                obsvr.OnNext(1); // report one unit of progress
                progress++;
            }
            obsvr.OnCompleted();
            return cancel;
        });
    });
var myArgs = new[] { "Arg1", "Arg2", "Arg3" };

// Run all the partial bits of work.
// Use SelectMany to get a flat stream of partial progress notifications.
var xsOfPartialProgress =
    myArgs.ToObservable(Scheduler.NewThread)
        .SelectMany(arg => doPartialWork(arg))
        .Replay().RefCount();

// Use Scan to get a running aggregation of progress.
var xsProgress = xsOfPartialProgress
    .Scan(0d, (prog, nextPartial)
        => prog + (nextPartial / (myArgs.Length * 10d)));
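Usage might then look something like this (illustrative only): because each argument contributes ten partial notifications and the Scan divides by myArgs.Length * 10, the running total is a fraction between 0 and 1.

xsProgress.Subscribe(
    p => Console.WriteLine("Overall progress: {0:P0}", p),
    () => Console.WriteLine("All work completed."));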
