UI should not freeze during background processing - C#

I am working on a legacy application (a WinForms app) where I need to improve performance.
In this application we use the MVP pattern, and the Shell uses reflection to find which presenter it needs to call to satisfy the user request. So there is a function which does the following tasks:
Find the appropriate presenter
Iterate through all of its methods to find the default method reference
Prepare an array for the method's parameter inputs
Call the default method on the presenter
Return the presenter reference
Here is some code...
public object FindPresenter(Type pType, string action, Dictionary<string, object> paramDictonary, string callerName = null)
{
    if (pType == null)
    {
        throw new ArgumentNullException("pType");
    }

    var presenterTypeName = pType.Name;
    var presenter = _presenterFactory.Create(pType);
    if (presenter == null)
    {
        throw new SomeException(string.Format("Unable to resolve presenter"));
    }
    presenter.CallerName = callerName;

    // Check each interface for the named method
    MethodInfo method = null;
    foreach (var i in presenter.GetType().GetInterfaces())
    {
        method = i.GetMethod(action, BindingFlags.FlattenHierarchy | BindingFlags.Public | BindingFlags.Instance);
        if (method != null) break;
    }
    if (method == null)
    {
        throw new SomeException(string.Format("No action method found"));
    }

    // Match up parameters
    var par = method.GetParameters();
    object[] results = null;
    if (paramDictonary != null)
    {
        if (par.Length != paramDictonary.Count)
            throw new ArgumentException("Parameter mis-match");

        results = (from d in paramDictonary
                   join p in par on d.Key equals p.Name
                   orderby p.Position
                   select d.Value).ToArray();
    }

    // Attach release action
    presenter.ReleaseAction = () => _presenterFactory.Release(presenter);

    // Invoke target method
    method.Invoke(presenter, results);
    return presenter;
}
This method takes around 15-20 seconds to complete and freezes the UI. I want to refactor this method with some async processing so the UI does not freeze while it runs. As I need to return the presenter reference, I thought of using Wait() or Join(), but they will again lock the UI.
Please note, I am using .NET 4.0.

Unless you have millions of presenter types to search through, which is highly doubtful, and unless your average presenter has millions of parameters, which is also highly doubtful, there is nothing in the code that I see above that should take 15 seconds to execute.
So, the entirety of the delay is not in the code that you are showing us, but in one of the functions it invokes.
That could be in the very suspicious looking _presenterFactory.Create(pType); or in the implementation of paramDictionary if by any chance it happens to be a roll-your-own dictionary instead of a standard hash dictionary, or in the invocation of method.Invoke(presenter, results); itself.
Most likely in the last.
So, first of all, profile your code to find what the real culprit is.
Then, restructure your code so that the lengthy process happens on a separate worker thread. This may require that you pull considerable parts of your application out of the GUI thread and into that worker thread. Nobody said GUI programming was easy.
If the culprit is method.Invoke(), then this looks like something that you may be able to move to another thread relatively easily: Start that thread right before returning, and make sure everything that happens with each one of those presenter objects is thread safe.
But of course, if these presenters are trying to actually render stuff in the GUI, then you will have to go and refactor all of them too, to separate their computationally expensive logic from the rendering logic, because WinForms controls can only be touched from the thread that created them.
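For what it's worth, a minimal .NET 4.0-compatible sketch of that restructuring (no async/await available) could look like the following. OnPresenterReady is a hypothetical callback that consumes the presenter back on the UI thread, and this only works if FindPresenter and the invoked presenter method don't touch UI controls:
// Sketch only: call this from the UI thread so the scheduler captures the WinForms context.
var uiScheduler = TaskScheduler.FromCurrentSynchronizationContext();
Task.Factory.StartNew(() => FindPresenter(pType, action, paramDictonary))
    .ContinueWith(t =>
    {
        if (t.Exception != null)
        {
            // Errors from the worker thread surface here, on the UI thread.
            MessageBox.Show(t.Exception.InnerException.Message);
            return;
        }
        OnPresenterReady(t.Result); // hypothetical callback, runs on the UI thread
    }, uiScheduler);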

The easiest approach is to use Application.DoEvents(); in your foreach loop to keep the UI responsive during long-running processing.
You may refer to MSDN: System.Windows.Forms.Application.DoEvents.
But before using it you must also read Keeping your UI Responsive and the Dangers of Application.DoEvents.
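As a rough illustration only (workItems and ProcessItem are placeholders, not from the question's code), the idea is simply to pump the message queue between iterations:
foreach (var item in workItems)
{
    ProcessItem(item);      // long-running work on the UI thread
    Application.DoEvents(); // let pending UI messages (paint, input) be processed
}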

Well, based on your comment: "My question is how can I refector this by putting some tasks in background which do not lock the UI."
Try this:
class Program
{
    static void Main(string[] args)
    {
        Task t1 = Task.Run(() => FindPresenter(typeof(Program), "boo"))
            .ContinueWith(x => UsePresenter(x.Result));

        while (true)
        {
            Thread.Sleep(200);
            Console.WriteLine("I am the ui thread. Press a key to exit.");
            if (Console.KeyAvailable)
                break;
        }
    }

    static object FindPresenter(Type type, string action)
    {
        // Your FindPresenter logic here
        Thread.Sleep(1000);
        return (object)"I'm a presenter";
    }

    static void UsePresenter(object presenter)
    {
        Console.WriteLine(presenter.ToString());
        Console.WriteLine("Done using presenter.");
    }
}
Sorry, I am not a WinForms guy... In WPF we have the Dispatcher for thread-affinity issues. You might try this:
System.Windows.Threading.Dispatcher and WinForms?
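For completeness, the WinForms counterpart to the WPF Dispatcher is the control's own marshalling methods (Control.Invoke/BeginInvoke) or a UI TaskScheduler. A hedged sketch, assuming this code runs inside a Form and UsePresenter touches the UI:
Task.Run(() => FindPresenter(typeof(Program), "boo"))
    .ContinueWith(t =>
    {
        // Marshal back to the WinForms UI thread; roughly what Dispatcher.BeginInvoke does in WPF.
        this.BeginInvoke((Action)(() => UsePresenter(t.Result)));
    });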

Related

How to unit test that tasks are run synchronously

In my code I have a method such as:
void PerformWork(List<Item> items)
{
    HostingEnvironment.QueueBackgroundWorkItem(async cancellationToken =>
    {
        foreach (var item in items)
        {
            await itemHandler.PerformIndividualWork(item);
        }
    });
}
Where Item is just a known model and itemHandler just does some work based off of the model (the ItemHandler class is defined in a separately maintained code base, shipped as a NuGet package I'd rather not modify).
The purpose of this code is to have work done for a list of items in the background but synchronously.
As part of the work, I would like to create a unit test to verify that when this method is called, the items are handled synchronously. I'm pretty sure the issue can be simplified down to this:
await MyTask(1);
await MyTask(2);
Assert.IsTrue(/* MyTask with arg 1 was completed before MyTask with arg 2 */);
The first part of this code I can easily unit test is that the sequence is maintained. For example, using NSubstitute I can check method call order on the library code:
Received.InOrder(() =>
{
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "First item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Second item"));
    itemHandler.PerformIndividualWork(Arg.Is<Item>(arg => arg.Name == "Third item"));
});
But I'm not quite sure how to ensure that they aren't run in parallel. I've had several ideas which seem bad like mocking the library to have an artificial delay when PerformIndividualWork is called and then either checking a time elapsed on the whole background task being queued or checking the timestamps of the itemHandler received calls for a minimum time between the calls. For instance, if I have PerformIndividualWork mocked to delay 500 milliseconds and I'm expecting three items, then I could check elapsed time:
stopwatch.Start();
// I have an interface instead of directly calling HostingEnvironment, so I can access the task being queued here
backgroundTask.Invoke(...);
stopwatch.Stop();
Assert.IsTrue(stopwatch.ElapsedMilliseconds > 1500);
But that doesn't feel right and could lead to false positives. Perhaps the solution lies in modifying the code itself; however, I can't think of a way of meaningfully changing it to make this sort of unit test (testing that tasks are run in order) possible. We'll definitely have system/integration testing to ensure the issue caused by asynchronous handling of the individual items doesn't happen, but I would like to have testing at this level as well.
Not sure if this is a good idea, but one approach could be to use an itemHandler that will detect when items are handled in parallel. Here is a quick and dirty example:
public class AssertSynchronousItemHandler : IItemHandler
{
    private volatile int concurrentWork = 0;
    public List<Item> Items = new List<Item>();

    public Task PerformIndividualWork(Item item) =>
        Task.Run(() =>
        {
            var result = Interlocked.Increment(ref concurrentWork);
            if (result != 1)
            {
                throw new Exception($"Expected 1 work item running at a time, but got {result}");
            }
            Items.Add(item);
            var after = Interlocked.Decrement(ref concurrentWork);
            if (after != 0)
            {
                throw new Exception($"Expected 0 work items running once this item finished, but got {after}");
            }
        });
}
There are probably big problems with this, but the basic idea is to check how many items are already being handled when we enter the method, then decrement the counter and check there are still no other items being handled. With threading stuff I think it is very hard to make guarantees about things from tests alone, but with enough items processed this can give us a little confidence that it is working as expected:
[Fact]
public void Sample()
{
    var handler = new AssertSynchronousItemHandler();
    var subject = new Subject(handler);
    var input = Enumerable.Range(0, 100).Select(x => new Item(x.ToString())).ToList();

    subject.PerformWork(input);

    // With the code from the question we don't have a way of detecting
    // when `PerformWork` finishes. If we can't change this we need to make
    // sure we wait "long enough". Yes this is yuck. :)
    Thread.Sleep(1000);

    Assert.Equal(input, handler.Items);
}
If I modify PerformWork to do things in parallel I get the test failing:
public void PerformWork2(List<Item> items)
{
    Task.WhenAll(
        items.Select(item => itemHandler.PerformIndividualWork(item))
    ).Wait(2000);
}
// ---- System.Exception : Expected 1 work item running at a time, but got 4
That said, if it is very important to run synchronously and it is not apparent from glancing at the implementation with async/await then maybe it is worth using a more obviously synchronous design, like a queue serviced by only one thread, so that you're guaranteed synchronous execution by design and people won't inadvertently change it to async during refactoring (i.e. it is deliberately synchronous and documented that way).
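If you go that way, a minimal sketch of such a design (assuming the Item and IItemHandler types from the question; the class name here is illustrative) is a blocking queue drained by a single consumer:
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class SequentialWorkQueue
{
    private readonly BlockingCollection<Item> _queue = new BlockingCollection<Item>();
    private readonly IItemHandler _itemHandler;

    public SequentialWorkQueue(IItemHandler itemHandler)
    {
        _itemHandler = itemHandler;
        // Exactly one consumer loop, so items are handled strictly one at a time, in order.
        Task.Run(ConsumeAsync);
    }

    public void Enqueue(Item item) => _queue.Add(item);

    private async Task ConsumeAsync()
    {
        foreach (var item in _queue.GetConsumingEnumerable())
        {
            await _itemHandler.PerformIndividualWork(item);
        }
    }
}
With a design like this, the sequential behavior is a structural property of the class rather than an accident of how one loop happens to be written.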

Calling WPF from a native C++ application causes async calls to lose the context

I'm calling a C# WPF dialog from a Win32 native application by using a C++/CLI wrapper. We have some other dialogs which work fine; this one, however, is showing some peculiar behavior - it's also more complex than the others.
Right at the beginning, after the C++/CLI code calls into the C# code, we set the context:
var currentContext = SynchronizationContext.Current;
if (currentContext == null && Thread.CurrentThread.IsThreadPoolThread)
{
    throw new InvalidOperationException("Can't set a dispatcher sync context on thread pool threads");
}

SynchronizationContext context = null;

// WPF needs to make sure that all GUI actions are done on the main GUI thread.
// As we are being called by unmanaged C++ there is no SynchronizationContext set.
// This check makes sure we will be back on the GUI thread after an await.
if (currentContext == null)
{
    context = new DispatcherSynchronizationContext();
    if (logger != null)
    {
        logger.Trace(string.Format("Applying DispatcherSyncContext on thread: {0}", Thread.CurrentThread.ManagedThreadId));
    }
    SynchronizationContext.SetSynchronizationContext(context);
    Debug.Assert(Dispatcher.FromThread(Thread.CurrentThread) != null);
}
else if (!(currentContext is DispatcherSynchronizationContext))
{
    LogWarning(
        "Current SynchronizationContext is not a DispatcherSynchronizationContext and may cause exceptions. Current context: " + currentContext,
        logger);
    context = currentContext;
}
I'm not sure if we should be checking for the Dispatcher rather than for the context, but as far as I know, this should work.
Well, it does - most of the time.
There is an async call to load data from the DB which looks like this:
public async Task<bool> AddStuff(IEnumerable<Stuff> stuffs)
{
    // Stuff is a POCO - no logic..
    CheckIfOnDispatcher(); // all well

    if (!stuffs.Any())
    {
        return false;
    }

    var failureText = await Task.Run(() =>
    {
        return GetStuffDetails(DataService, stuffs);
    }).ConfigureAwait(true);

    CheckIfOnDispatcher(); // here it's not on the dispatcher anymore
The check method was added to debug what's happening:
[Conditional("DEBUG")]
void CheckIfOnDispatcher()
{
    if (Dispatcher.FromThread(Thread.CurrentThread) == null)
    {
        UserFeedback.ShowFeedback($"Not on GUI: {Thread.CurrentThread.ManagedThreadId}");
    }
}
After the await Task.Run, but only sometimes, execution will not return to the main thread but continues on a worker thread.
This behavior really only happens with the Win32 client; if we call it from a Windows Forms client or a C# WPF client, there is no problem.
It looks like the main thread of the Win32 MFC application has some special behavior, but I can't find any documentation about it.
Are we setting the context in the wrong way?
I've also tried opening the dialog minimized from the beginning and hiding it again (instead of setting the context), but that did not work well either.
PS: UserFeedback calls MessageBox, which also works without a Dispatcher...
TIA,
Marco

Process a call in a WPF application on the main thread

My WPF application connects to my legacy application through communication pipes. The WPF application allows the user to plot locations on a map using a button on the interface. So when the user clicks the button in the WPF user interface, a pipe message is sent to the legacy application to allow the user to plot locations on the map. When the user plots locations on the map with the mouse, the coordinates are sent back to the WPF application through the two-way communication pipe. When my WPF application receives the coordinates, it needs to process them and perform the workflows accordingly. Some errors might occur, so the application might need to show an error message, or in some cases it might need to clear collections that were created on the application's main thread. So there is a whole branch of code that gets executed when coordinates are received.
How can I bring my WPF application back to the main thread so that, when coordinates are received, user actions like showing a message box can be performed?
Right now I am getting exceptions like "collection was created in a different thread".
I know I can use this code to show a message on the main thread or clear collections:
Application.Current.Dispatcher.Invoke((Action)(() => { PointsCollection.Clear(); }));
Application.Current.Dispatcher.Invoke((Action)(() => { MessageBox.Show("Error"); }));
but this won't work in unit testing, and I would also have to do this in a lot of places. Is there a better way?
public void PipeClientMessageReceived(int type, string message)
{
    var command = (PipeCommand)type;
    switch (command)
    {
        case PipeCommand.Points:
        {
            string[] tokens = message.Split(':');
            var x = Convert.ToDouble(tokens[0]);
            var y = Convert.ToDouble(tokens[1]);
            SetSlotCoordinates(new Point2D(x, y));
        }
        break;
    }
}
The SetSlotCoordinates method actually does all the work to process the coordinates. I tried wrapping this call in Application.Current.Dispatcher, but with no success.
Application.Current.Dispatcher.Invoke((Action)(() => { SetSlotCoordinates(new Point2D(x, y)); }));
Unfortunately, the question is not very clear. What issue exists with unit testing that you believe prevents you from using Dispatcher.Invoke()? When you tried using Dispatcher.Invoke() on the call to SetSlotCoordinates(), in what way was there "no success"?
Basically, the use of Dispatcher.Invoke() (or its asynchronous sibling, Dispatcher.BeginInvoke()) should do the job for you. However, if you're able, I would recommend using the newer async/await pattern.
Without a complete code example, it's impossible to give you the exact code. But it would look something like this:
async Task ReceiveFromPipe(Stream pipeStream, int bufferSize)
{
    byte[] buffer = new byte[bufferSize];
    int byteCount;

    while ((byteCount = await pipeStream.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        int type;
        string message;

        if (TryCompleteMessage(buffer, byteCount, out type, out message))
        {
            PipeClientMessageReceived(type, message);
        }
    }
}
Using this technique, and assuming that the ReceiveFromPipe() method is called from the UI thread, you will already be on the UI thread when the read from the pipe completes, making everything else "just work".
Note: I've glossed over details such as how exactly you maintain your buffer of incoming data until a complete message is received...I've assumed that's encapsulated in the hypothetical TryCompleteMessage() method. The above is for illustration purposes, and of course you'd have to adapt to your own specific code.
Also, you may find it makes more sense to do more of the processing in the background thread, in which case you'd put the actual receive and that processing into a separate async method; in that case, that method would still call ReadAsync(), but you could call ConfigureAwait(false) on the return value of that, so that the switch back to the UI thread didn't happen until that separate async method returned. For example:
async Task ReceiveFromPipe(Stream pipeStream, int bufferSize)
{
    Action action;

    while ((action = await ReceivePoint2D(pipeStream, bufferSize)) != null)
    {
        action();
    }
}

async Task<Action> ReceivePoint2D(Stream pipeStream, int bufferSize)
{
    byte[] buffer = new byte[bufferSize];
    int byteCount;

    while ((byteCount = await pipeStream
        .ReadAsync(buffer, 0, buffer.Length).ConfigureAwait(false)) > 0)
    {
        int type;
        string message;

        if (TryCompleteMessage(buffer, byteCount, out type, out message))
        {
            return PipeClientMessageReceived(type, message);
        }
    }

    return null;
}
public Action PipeClientMessageReceived(int type, string message)
{
    var command = (PipeCommand)type;
    switch (command)
    {
        case PipeCommand.Points:
        {
            string[] tokens = message.Split(':');
            var x = Convert.ToDouble(tokens[0]);
            var y = Convert.ToDouble(tokens[1]);
            return () => SetSlotCoordinates(new Point2D(x, y));
        }
    }

    return null;
}
In the above example, the asynchronous code does everything except the call to SetSlotCoordinates(). For that, it wraps the call in an Action delegate, returning that to the UI thread where the UI thread can then invoke it. Of course, you don't have to return an Action delegate; that was just the most convenient way I saw to adapt the code you already have. You can return any value or object and let the UI thread handle it appropriately.
Finally, with respect to all of the above, note that nowhere in the code is an explicit dependency on the UI thread. While I'm not sure what issue you are concerned with respect to unit testing, the above should be much more easily adapted to unit testing scenarios where no Dispatcher is available or you'd prefer not to use it for some reason.
If you want to stick with explicit use of Dispatcher, then you should be more specific about what exactly isn't working.
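If the unit-testing concern is the main objection to Dispatcher.Invoke(), one common alternative (a sketch under that assumption, not part of the original code) is to capture a SynchronizationContext once on the UI thread and marshal through it, so tests can substitute a context that runs callbacks inline:
using System.Threading;

public class PipeMessageProcessor
{
    private readonly SynchronizationContext _uiContext;

    // Production: pass SynchronizationContext.Current captured on the UI thread
    // (a DispatcherSynchronizationContext in WPF). Tests: pass a stub whose
    // Post/Send execute the callback immediately.
    public PipeMessageProcessor(SynchronizationContext uiContext)
    {
        _uiContext = uiContext;
    }

    public void OnCoordinatesReceived(double x, double y)
    {
        // Marshal to the captured context instead of referencing Application.Current.Dispatcher.
        _uiContext.Post(_ => SetSlotCoordinates(new Point2D(x, y)), null);
    }

    private void SetSlotCoordinates(Point2D point) { /* existing processing from the question */ }
}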

How can I prevent synchronous continuations on a Task?

I have some library (socket networking) code that provides a Task-based API for pending responses to requests, based on TaskCompletionSource<T>. However, there's an annoyance in the TPL in that it seems to be impossible to prevent synchronous continuations. What I would like to be able to do is either:
tell a TaskCompletionSource<T> that it should not allow callers to attach with TaskContinuationOptions.ExecuteSynchronously, or
set the result (SetResult / TrySetResult) in a way that specifies that TaskContinuationOptions.ExecuteSynchronously should be ignored, using the pool instead
Specifically, the issue I have is that the incoming data is being processed by a dedicated reader, and if a caller can attach with TaskContinuationOptions.ExecuteSynchronously they can stall the reader (which affects more than just them). Previously, I have worked around this by some hackery that detects whether any continuations are present, and if they are it pushes the completion onto the ThreadPool, however this has significant impact if the caller has saturated their work queue, as the completion will not get processed in a timely fashion. If they are using Task.Wait() (or similar), they will then essentially deadlock themselves. Likewise, this is why the reader is on a dedicated thread rather than using workers.
So; before I try and nag the TPL team: am I missing an option?
Key points:
I don't want external callers to be able to hijack my thread
I can't use the ThreadPool as an implementation, as it needs to work when the pool is saturated
The example below produces output (ordering may vary based on timing):
Continuation on: Main thread
Press [return]
Continuation on: Thread pool
The problem is the fact that a random caller managed to get a continuation on "Main thread". In the real code, this would be interrupting the primary reader; bad things!
Code:
using System;
using System.Threading;
using System.Threading.Tasks;

static class Program
{
    static void Identify()
    {
        var thread = Thread.CurrentThread;
        string name = thread.IsThreadPoolThread
            ? "Thread pool" : thread.Name;
        if (string.IsNullOrEmpty(name))
            name = "#" + thread.ManagedThreadId;
        Console.WriteLine("Continuation on: " + name);
    }

    static void Main()
    {
        Thread.CurrentThread.Name = "Main thread";
        var source = new TaskCompletionSource<int>();
        var task = source.Task;

        task.ContinueWith(delegate {
            Identify();
        });
        task.ContinueWith(delegate {
            Identify();
        }, TaskContinuationOptions.ExecuteSynchronously);

        source.TrySetResult(123);

        Console.WriteLine("Press [return]");
        Console.ReadLine();
    }
}
New in .NET 4.6:
.NET 4.6 contains a new TaskCreationOptions: RunContinuationsAsynchronously.
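With that option the completion source can be created so that no continuation is inlined on the thread that completes it, for example:
// .NET 4.6+: continuations attached to source.Task run asynchronously,
// even if a caller asks for TaskContinuationOptions.ExecuteSynchronously.
var source = new TaskCompletionSource<int>(TaskCreationOptions.RunContinuationsAsynchronously);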
Since you're willing to use Reflection to access private fields...
You can mark the TCS's Task with the TASK_STATE_THREAD_WAS_ABORTED flag, which would cause all continuations not to be inlined.
const int TASK_STATE_THREAD_WAS_ABORTED = 134217728;
var stateField = typeof(Task).GetField("m_stateFlags", BindingFlags.NonPublic | BindingFlags.Instance);
stateField.SetValue(task, (int) stateField.GetValue(task) | TASK_STATE_THREAD_WAS_ABORTED);
Edit:
Instead of using Reflection emit, I suggest you use expressions. This is much more readable and has the advantage of being PCL-compatible:
var taskParameter = Expression.Parameter(typeof(Task));
const string stateFlagsFieldName = "m_stateFlags";
var setter =
    Expression.Lambda<Action<Task>>(
        Expression.Assign(Expression.Field(taskParameter, stateFlagsFieldName),
            Expression.Or(Expression.Field(taskParameter, stateFlagsFieldName),
                Expression.Constant(TASK_STATE_THREAD_WAS_ABORTED))),
        taskParameter).Compile();
Without using Reflection:
If anyone's interested, I've figured out a way to do this without Reflection, but it is a bit "dirty" as well, and of course carries a non-negligible perf penalty:
try
{
    Thread.CurrentThread.Abort();
}
catch (ThreadAbortException)
{
    source.TrySetResult(123);
    Thread.ResetAbort();
}
I don't think there's anything in the TPL which provides explicit API control over the continuations triggered by TaskCompletionSource.SetResult. I decided to keep my initial answer for controlling this behavior for async/await scenarios.
Here is another solution which imposes asynchrony upon ContinueWith, if the tcs.SetResult-triggered continuation takes place on the same thread SetResult was called on:
public static class TaskExt
{
    static readonly ConcurrentDictionary<Task, Thread> s_tcsTasks =
        new ConcurrentDictionary<Task, Thread>();

    // SetResultAsync
    static public void SetResultAsync<TResult>(
        this TaskCompletionSource<TResult> @this,
        TResult result)
    {
        s_tcsTasks.TryAdd(@this.Task, Thread.CurrentThread);
        try
        {
            @this.SetResult(result);
        }
        finally
        {
            Thread thread;
            s_tcsTasks.TryRemove(@this.Task, out thread);
        }
    }

    // ContinueWithAsync, TODO: more overrides
    static public Task ContinueWithAsync<TResult>(
        this Task<TResult> @this,
        Action<Task<TResult>> action,
        TaskContinuationOptions continuationOptions = TaskContinuationOptions.None)
    {
        return @this.ContinueWith((Func<Task<TResult>, Task>)(t =>
        {
            Thread thread = null;
            s_tcsTasks.TryGetValue(t, out thread);

            if (Thread.CurrentThread == thread)
            {
                // same thread which called SetResultAsync, avoid potential deadlocks
                // using thread pool
                return Task.Run(() => action(t));
                // not using thread pool (TaskCreationOptions.LongRunning creates a normal thread)
                // return Task.Factory.StartNew(() => action(t), TaskCreationOptions.LongRunning);
            }
            else
            {
                // continue on the same thread
                var task = new Task(() => action(t));
                task.RunSynchronously();
                return Task.FromResult(task);
            }
        }), continuationOptions).Unwrap();
    }
}
Updated to address the comment:
I don't control the caller - I can't get them to use a specific continue-with variant: if I could, the problem would not exist in the first place
I wasn't aware you don't control the caller. Nevertheless, if you don't control it, you're probably not passing the TaskCompletionSource object directly to the caller, either. Logically, you'd be passing the token part of it, i.e. tcs.Task. In which case, the solution might be even easier, by adding another extension method to the above:
// ImposeAsync, TODO: more overrides
static public Task<TResult> ImposeAsync<TResult>(this Task<TResult> @this)
{
    return @this.ContinueWith(new Func<Task<TResult>, Task<TResult>>(antecedent =>
    {
        Thread thread = null;
        s_tcsTasks.TryGetValue(antecedent, out thread);

        if (Thread.CurrentThread == thread)
        {
            // continue on a pool thread
            return antecedent.ContinueWith(t => t,
                TaskContinuationOptions.None).Unwrap();
        }
        else
        {
            return antecedent;
        }
    }), TaskContinuationOptions.ExecuteSynchronously).Unwrap();
}
Use:
// library code
var source = new TaskCompletionSource<int>();
var task = source.Task.ImposeAsync();
// ...

// client code
task.ContinueWith(delegate
{
    Identify();
}, TaskContinuationOptions.ExecuteSynchronously);
// ...

// library code
source.SetResultAsync(123);
This actually works for both await and ContinueWith (fiddle) and is free of reflection hacks.
What about instead of doing
var task = source.Task;
you do this instead
var task = source.Task.ContinueWith<Int32>( x => x.Result );
Thus you are always adding one continuation which will be executed asynchronously and then it doesn't matter if the subscribers want a continuation in the same context. It's sort of currying the task, isn't it?
The simulate abort approach looked really good, but led to the TPL hijacking threads in some scenarios.
I then had an implementation that was similar to checking the continuation object, but just checking for any continuation since there are actually too many scenarios for the given code to work well, but that meant that even things like Task.Wait resulted in a thread-pool lookup.
Ultimately, after inspecting lots and lots of IL, the only safe and useful scenario is the SetOnInvokeMres scenario (manual-reset-event-slim continuation). There are lots of other scenarios:
some aren't safe, and lead to thread hijacking
the rest aren't useful, as they ultimately lead to the thread-pool
So in the end, I opted to check for a non-null continuation-object; if it is null, fine (no continuations); if it is non-null, special-case check for SetOnInvokeMres - if it is that: fine (safe to invoke); otherwise, let the thread-pool perform the TrySetComplete, without telling the task to do anything special like spoofing abort. Task.Wait uses the SetOnInvokeMres approach, which is the specific scenario we want to try really hard not to deadlock.
Type taskType = typeof(Task);
FieldInfo continuationField = taskType.GetField("m_continuationObject", BindingFlags.Instance | BindingFlags.NonPublic);
Type safeScenario = taskType.GetNestedType("SetOnInvokeMres", BindingFlags.NonPublic);
if (continuationField != null && continuationField.FieldType == typeof(object) && safeScenario != null)
{
    var method = new DynamicMethod("IsSyncSafe", typeof(bool), new[] { typeof(Task) }, typeof(Task), true);
    var il = method.GetILGenerator();
    var hasContinuation = il.DefineLabel();
    il.Emit(OpCodes.Ldarg_0);
    il.Emit(OpCodes.Ldfld, continuationField);
    Label nonNull = il.DefineLabel(), goodReturn = il.DefineLabel();
    // check if null
    il.Emit(OpCodes.Brtrue_S, nonNull);
    il.MarkLabel(goodReturn);
    il.Emit(OpCodes.Ldc_I4_1);
    il.Emit(OpCodes.Ret);
    // check if it is a SetOnInvokeMres - if so, we're OK
    il.MarkLabel(nonNull);
    il.Emit(OpCodes.Ldarg_0);
    il.Emit(OpCodes.Ldfld, continuationField);
    il.Emit(OpCodes.Isinst, safeScenario);
    il.Emit(OpCodes.Brtrue_S, goodReturn);
    il.Emit(OpCodes.Ldc_I4_0);
    il.Emit(OpCodes.Ret);
    IsSyncSafe = (Func<Task, bool>)method.CreateDelegate(typeof(Func<Task, bool>));
}
If you can and are ready to use reflection, this should do it:
public static class MakeItAsync
{
    static public void TrySetAsync<T>(this TaskCompletionSource<T> source, T result)
    {
        var continuation = typeof(Task).GetField("m_continuationObject", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
        var continuations = (List<object>)continuation.GetValue(source.Task);

        foreach (object c in continuations)
        {
            var option = c.GetType().GetField("m_options", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
            var options = (TaskContinuationOptions)option.GetValue(c);
            options &= ~TaskContinuationOptions.ExecuteSynchronously;
            option.SetValue(c, options);
        }

        source.TrySetResult(result);
    }
}
Updated, I posted a separate answer to deal with ContinueWith as opposed to await (because ContinueWith doesn't care about the current synchronization context).
You could use a dumb synchronization context to impose asynchrony upon the continuations triggered by calling SetResult/SetCanceled/SetException on TaskCompletionSource. I believe the current synchronization context (at the point of await tcs.Task) is the criterion the TPL uses to decide whether to make such a continuation synchronous or asynchronous.
The following works for me:
if (notifyAsync)
{
    tcs.SetResultAsync(null);
}
else
{
    tcs.SetResult(null);
}
SetResultAsync is implemented like this:
public static class TaskExt
{
    static public void SetResultAsync<T>(this TaskCompletionSource<T> tcs, T result)
    {
        FakeSynchronizationContext.Execute(() => tcs.SetResult(result));
    }

    // FakeSynchronizationContext
    class FakeSynchronizationContext : SynchronizationContext
    {
        private static readonly ThreadLocal<FakeSynchronizationContext> s_context =
            new ThreadLocal<FakeSynchronizationContext>(() => new FakeSynchronizationContext());

        private FakeSynchronizationContext() { }

        public static FakeSynchronizationContext Instance { get { return s_context.Value; } }

        public static void Execute(Action action)
        {
            var savedContext = SynchronizationContext.Current;
            SynchronizationContext.SetSynchronizationContext(FakeSynchronizationContext.Instance);
            try
            {
                action();
            }
            finally
            {
                SynchronizationContext.SetSynchronizationContext(savedContext);
            }
        }

        // SynchronizationContext methods
        public override SynchronizationContext CreateCopy()
        {
            return this;
        }

        public override void OperationStarted()
        {
            throw new NotImplementedException("OperationStarted");
        }

        public override void OperationCompleted()
        {
            throw new NotImplementedException("OperationCompleted");
        }

        public override void Post(SendOrPostCallback d, object state)
        {
            throw new NotImplementedException("Post");
        }

        public override void Send(SendOrPostCallback d, object state)
        {
            throw new NotImplementedException("Send");
        }
    }
}
SynchronizationContext.SetSynchronizationContext is very cheap in terms of the overhead it adds. In fact, a very similar approach is taken by the implementation of WPF Dispatcher.BeginInvoke.
TPL compares the target synchronization context at the point of await to that of the point of tcs.SetResult. If the synchronization context is the same (or there is no synchronization context at both places), the continuation is called directly, synchronously. Otherwise, it's queued using SynchronizationContext.Post on the target synchronization context, i.e., the normal await behavior. What this approach does is always impose the SynchronizationContext.Post behavior (or a pool thread continuation if there's no target synchronization context).
Updated, this won't work for task.ContinueWith, because ContinueWith doesn't care about the current synchronization context. It however works for await task (fiddle). It also does work for await task.ConfigureAwait(false).
OTOH, this approach works for ContinueWith.

Reactive Extensions: How to observe an IEnumerable method result asynchronously

I've a method that returns an IEnumerable of my business object. In this method I parse the content of a large text file into the business object model. There is no threading stuff in it.
In my ViewModel (WPF) I need to store and display the results of the method.
The store is an ObservableCollection.
Here is the observable code:
private void OpenFile(string file)
{
    _parser = new IhvParser();
    App.Messenger.NotifyColleagues(Actions.ReportContentInfo, new Model.StatusInfoDisplayDTO { Information = "Lade Daten...", Interval = 0 });

    _ihvDataList.Clear();

    var obs = _parser.ParseDataObservable(file)
        .ToObservable(NewThreadScheduler.Default)
        .ObserveOnDispatcher()
        .Subscribe<Ihv>(AddIhvToList, ReportError, ReportComplete);
}
private void ReportComplete()
{
    App.Messenger.NotifyColleagues(Actions.ReportContentInfo, new Model.StatusInfoDisplayDTO { Information = "Daten fertig geladen.", Interval = 3000 });
    RaisePropertyChanged(() => IhvDataList);
}

private void ReportError(Exception ex)
{
    MessageBox.Show("...");
}

private void AddIhvToList(Ihv ihv)
{
    _ihvDataList.Add(ihv);
}
And this is the parser code:
public IEnumerable<Model.Ihv> ParseDataObservable(string file)
{
    using (StreamReader reader = new StreamReader(file))
    {
        var head = reader.ReadLine(); // first line is the header information
        if (!head.Contains("BayBAS") || !head.Contains("2.3.0"))
        {
            _logger.ErrorFormat("Die Datei {0} liegt nicht im BayBAS-Format 2.3.0 vor.");
        }
        else
        {
            while (!reader.EndOfStream)
            {
                var line = reader.ReadLine();
                if (line.Length != 1415)
                {
                    _logger.ErrorFormat("Die Datei {0} liegt nicht im BayBAS-Format 2.3.0 vor.");
                    break;
                }

                var tempIhvItem = Model.Ihv.Parse(line);
                yield return tempIhvItem;
            }

            reader.Close();
        }
    }
}
Why don't I get the results asynchronously? Before I see the results in my DataGrid, all items are parsed and delivered.
Can anybody help?
Andreas
Are you sure this isn't happening asynchronously? Are you assuming this based on what you perceive in the UI, or have you set breakpoints and determined that this is, in fact, the case?
Note that WPF's Dispatcher uses a priority queue, and DispatcherScheduler schedules items with Normal priority, which trumps the priority levels used for input, layout, and rendering. If the results come in quickly enough, then the UI may not get updated until after the last result has been processed: the dispatcher might be too busy processing results to perform layout and rendering of the UI.
You could try overriding the behavior of the DispatcherScheduler to schedule at a custom priority like so:
public class PriorityDispatcherScheduler : DispatcherScheduler
{
    private readonly DispatcherPriority _priority;

    public PriorityDispatcherScheduler(DispatcherPriority priority)
        : this(priority, Dispatcher.CurrentDispatcher) { }

    public PriorityDispatcherScheduler(DispatcherPriority priority, Dispatcher dispatcher)
        : base(dispatcher)
    {
        _priority = priority;
    }

    public override IDisposable Schedule<TState>(TState state, Func<IScheduler, TState, IDisposable> action)
    {
        if (action == null)
            throw new ArgumentNullException("action");

        var d = new SingleAssignmentDisposable();

        this.Dispatcher.BeginInvoke(
            _priority,
            (Action)(() =>
            {
                if (d.IsDisposed)
                    return;
                d.Disposable = action(this, state);
            }));

        return d;
    }
}
And then modify your observable sequence by replacing ObserveOnDispatcher() with ObserveOn(new PriorityDispatcherScheduler(p)), where p is an appropriate priority level (e.g., Background).
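For example, only the ObserveOnDispatcher() line changes (DispatcherPriority.Background is just one plausible choice; the ToObservable/SubscribeOn concerns discussed below still apply):
var obs = _parser.ParseDataObservable(file)
    .ToObservable(NewThreadScheduler.Default)
    .ObserveOn(new PriorityDispatcherScheduler(DispatcherPriority.Background))
    .Subscribe<Ihv>(AddIhvToList, ReportError, ReportComplete);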
Also, this looks highly suspect: ToObservable(NewThreadScheduler.Default). I believe this will cause a new thread to be created every time a result comes in, for the sole purpose of passing it to the dispatcher, after which the new thread will terminate. This is almost certainly not what you intended. I assume you simply wanted the file processed on a separate thread; as written, your code would literally end up creating 1,000 short-lived threads if your IEnumerable yields 1,000 items, none of which would actually be doing the work of reading the file.
Lastly, is OpenFile() being invoked on the dispatcher thread? If so, I believe what is going to happen is as follows:
Dispatcher (on UI thread) will call Subscribe(), which will process the chain of observable operators all the way back to ParseDataObservable(file).
Dispatcher will iterate through your IEnumerable sequence, firing each result into the observable sequence created by ToObservable().
Each result passed into the observable sequence will be scheduled for delivery on the dispatcher (the very same dispatcher that is currently running).
If this is the case, then the entire file will be read before any of the results get passed to AddIhvToList(), because the dispatcher is tied up reading the file and won't get around to processing the results in its queue until it has finished. If this is what is happening, you can try altering your code as follows:
var obs = _parser.ParseDataObservable(file)
    .ToObservable()
    .SubscribeOn(/*NewThread*/Scheduler.Default)
    .ObserveOnDispatcher() // consider using PriorityDispatcherScheduler
    .Subscribe<Ihv>(AddIhvToList, ReportError, ReportComplete);
Injecting SubscribeOn() should ensure that the iteration of your IEnumerable (i.e., the reading of the file) occurs on a separate thread. Scheduler.Default should suffice here, but you could use a NewThreadScheduler if you really need to (you probably don't). The dispatcher thread will return from Subscribe() after everything has been set up, freeing it up to continue processing its queue, i.e., passing the results to AddIhvToList() as they come in. This should give you the asynchronous behavior you desire.
