I'm developing a WinForms app in VS2013, targeting .NET Framework 4.5.1. Here is my reduced code with inline comments about its structure:
// Progress object implementing IProgress<MyProgressData>
var progressCallback = new Progress<MyProgressData>();
// listOfMyList is actually a List<List<MyObject>>: a list of lists of MyObjects,
// where each inner list will be executed as tasks all at once.
// For example, this would be sample structure for list of lists:
// List1
//     MyObject1
//     MyObject2
//     MyObject3
// List2
//     MyObject4
//     MyObject5
//     MyObject6
// List1's and List2's objects would each be executed as tasks all at once, but List1 and List2 themselves
// would be executed one after another (because of resource usage inside TASK CODE)
foreach (var myItem in listOfMyList)
{
    var myList = myItem.ToList();

    // Create a list of tasks to be executed (20 by default; each taking from 30-60 seconds)
    // Here cs is actually MyObject
    var myTasks = myList.Select(cs => Task.Run(async () =>
    {
        // TASK CODE (using cs as an input object and using "cancellationToken.ThrowIfCancellationRequested();"
        // inside execution to cancel executing if requested)
    }, cancellationToken));

    await Task.WhenAll(myTasks); // Wait for all tasks to finish

    // Report progress to main form (this actually calls an event on my form)
    await Task.Run(() => progressCallback.Report(new MyProgressData() { props }), CancellationToken.None);
}
As you can see, I construct the progress object and then I have a list of lists. Each item within the top-level list should execute serially (one after another). Each item's list elements should all execute at once as tasks.
So far so good: all tasks start and WhenAll even waits for them. Or at least I thought so. I have put logging inside the relevant methods to show me the code execution. It turns out that while the progress logic (at the bottom) is executing, the foreach loop starts executing another batch of tasks, which it shouldn't.
Am I missing something here? Does the progress code not block or wait for the Report method to finish executing? Maybe I'm missing something about async/await. Doesn't await make sure that the code won't continue until the awaited method has finished? It won't block the current thread, but it also won't continue executing?
Is it even possible (since it's happening, it probably is) for my foreach loop to continue executing while progress reporting is still in progress?
This code resides inside an async method. It's actually called like this (let's assume this method is async MyProblematicMethod()):
while (true)
{
    var result = await MyProblematicMethod();
    if (result.HasToExitWhile)
        break;
}
Every method up the call chain from MyProblematicMethod uses await to wait for async methods, and none of them is called multiple times.
Based on Glorin's suggestion that IProgress<T>.Report returns immediately after posting the event handler invocation, I've created an almost exact copy of the Progress<T> class, which uses SynchronizationContext.Send instead of Post:
public sealed class ProgressEx<T> : IProgress<T>
{
    private readonly SynchronizationContext _synchronizationContext;
    private readonly Action<T> _handler;
    private readonly SendOrPostCallback _invokeHandlers;

    public event EventHandler<T> ProgressChanged;

    public ProgressEx(SynchronizationContext syncContext)
    {
        // From Progress.cs
        //_synchronizationContext = SynchronizationContext.CurrentNoFlow ?? ProgressStatics.DefaultContext;
        _synchronizationContext = syncContext;
        _invokeHandlers = new SendOrPostCallback(InvokeHandlers);
    }

    public ProgressEx(SynchronizationContext syncContext, Action<T> handler)
        : this(syncContext)
    {
        if (handler == null)
            throw new ArgumentNullException("handler");
        _handler = handler;
    }

    private void OnReport(T value)
    {
        if (_handler == null && ProgressChanged == null)
            return;
        _synchronizationContext.Send(_invokeHandlers, (object)value);
    }

    void IProgress<T>.Report(T value)
    {
        OnReport(value);
    }

    private void InvokeHandlers(object state)
    {
        T e = (T)state;
        Action<T> action = _handler;
        EventHandler<T> eventHandler = ProgressChanged;

        if (action != null)
            action(e);
        if (eventHandler != null)
            eventHandler((object)this, e);
    }
}
This means that ProgressEx<T>.Report will wait for the handler to finish before returning. Maybe not the best solution in all situations, but it worked for me in this case.
To use it, just create a ProgressEx instance with SynchronizationContext.Current as the constructor parameter. However, it must be created on the UI thread, so the right SynchronizationContext gets passed in, e.g. new ProgressEx<MyDataObject>(SynchronizationContext.Current).
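For illustration, a minimal sketch of how it could be wired up (UpdateProgressUi is a hypothetical form method, not from the original code):

// On the UI thread (e.g. in the form's constructor or a Load handler):
var progressCallback = new ProgressEx<MyProgressData>(SynchronizationContext.Current);
progressCallback.ProgressChanged += (sender, data) =>
{
    // Runs on the UI thread; because ProgressEx uses Send, Report does not
    // return to the background code until this handler has completed.
    UpdateProgressUi(data); // hypothetical form method
};

// Later, from the background code (Report is implemented explicitly, hence the cast):
((IProgress<MyProgressData>)progressCallback).Report(new MyProgressData());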
I have a component with the following event:
// MyComponent:
[Parameter] public EventCallback<DataChangedEventArgs> OnTextChanged { get; set; }

protected async void SomeMethod(ChangeEventArgs args)
{
    DataChangedEventArgs evArg = new DataChangedEventArgs(args.Value);
    Console.WriteLine("1");
    await OnTextChanged.InvokeAsync(evArg);
    Items = evArg.Data; // <=== problem here: Data is always null.
    Console.WriteLine("4");
}
When I use and handle the OnTextChanged event on another component, I set the Data property of DataChangedEventArgs:
// AnotherComponent that uses MyComponent:
public async void theTextChanged(DataChangedEventArgs args)
{
    //...
    Console.WriteLine("2");
    args.Data = await GetAPersonAsync(); // or some other object
    Console.WriteLine("3");
}
Now I expect Items to be set to the Person (the line Items = evArg.Data). But Data is always null, which means the changes made to args.Data (which is a reference type) in the event handler are not visible in the invoking method SomeMethod.
Can anyone help me with that?
UPDATE: I think it has something to do with the async method, as the console output is not what I expect (1, 2, 3, 4):
console:
1
2
4
3
You should use async Task instead of async void.
Theodor provided a link to the documentation.
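For illustration, a hedged sketch of that change on the consuming component, using the names from the question:

// AnotherComponent: return Task instead of void so InvokeAsync can actually await the handler
public async Task theTextChanged(DataChangedEventArgs args)
{
    Console.WriteLine("2");
    args.Data = await GetAPersonAsync(); // or some other object
    Console.WriteLine("3");
}

With a Task-returning handler, OnTextChanged.InvokeAsync completes only after the handler finishes, so evArg.Data is populated before SomeMethod reads it and the console prints 1, 2, 3, 4.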
To add some meat to @Mister Magoo's answer, I believe the Blazor code for the event handlers looks something like this (simplified for clarity):
var task = InvokeAsync(EventMethod);
StateHasChanged();
if (task.Status != TaskStatus.RanToCompletion && task.Status != TaskStatus.Canceled)
{
    await task;
    StateHasChanged();
}
If your EventMethod returns void, InvokeAsync returns a completed Task as soon as EventMethod yields. Only one StateHasChanged gets called, because the if block is skipped. Anything that happens in EventMethod after the yield is not reflected in the UI until another component UI update occurs.
I have a static method which can be called from anywhere. During execution it will encounter a Dispatcher.Invoke. Obviously, when this method is called from the UI thread it will deadlock.
Here is a repro:
public static string Test(string text)
{
    return Task.Run(() =>
    {
        App.Current.Dispatcher.Invoke(() => { });
        return text + text;
    }).Result;
}

void Button_Click(object sender, RoutedEventArgs e) => Test("text");
I've read multiple questions and something like 10 answers by @StephenCleary (even some blogs linked from those), yet I fail to understand how to achieve the following:
have a static method which is easy to call and obtain a result from anywhere (e.g. UI event handlers, tasks);
this method should block the caller, and after it the caller's code should continue to run in the same context;
this method shouldn't freeze the UI.
The closest analogy to what Test() should behave like is MessageBox.Show().
Is it achieve-able?
P.S.: to keep the question short I am not attaching my various async/await attempts, nor the one that works for UI calls but looks terrible because it uses DoEvents.
You can not.
Even just 2 of those 3 requirements can't be achieved together - "this method should block the caller" is in conflict with "this method shouldn't freeze UI".
You have to make this method either asynchronous in some way (await, callback) or make it executable in small chunks that block the UI only for short periods of time, using for example a timer to schedule each step.
Just to reiterate what you already know: you can't block a thread and call back into it at the same time, as discussed in many questions such as "await works but calling task.Result hangs/deadlocks".
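For illustration, a minimal sketch of the asynchronous alternative (TestAsync is an assumed name; this is not the blocking behaviour the question asks for, but it avoids both the deadlock and the frozen UI):

public static async Task<string> TestAsync(string text)
{
    // Off-load the work to the thread pool; nothing blocks the UI thread.
    return await Task.Run(() => text + text);
}

private async void Button_Click(object sender, RoutedEventArgs e)
{
    // Execution resumes here on the UI thread once the task completes,
    // so the code after the await still runs in the original context.
    var result = await TestAsync("text");
}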
To achieve something like what MessageBox does (but without creating a window), one can do something like this:
public class Data
{
    public object Lock { get; } = new object();
    public bool IsFinished { get; set; }
}

public static bool Test(string text)
{
    var data = new Data();

    Task.Run(() =>
    {
        Thread.Sleep(1000); // simulate work
        App.Current.Dispatcher.Invoke(() => { });
        lock (data.Lock)
        {
            data.IsFinished = true;
            Monitor.Pulse(data.Lock); // wake up
        }
    });

    if (App.Current.Dispatcher.CheckAccess())
        while (!data.IsFinished)
            DoEvents();
    else
        lock (data.Lock)
            while (!data.IsFinished) // guard against a missed pulse
                Monitor.Wait(data.Lock);

    return false;
}

static void DoEvents() // for wpf
{
    var frame = new DispatcherFrame();
    Dispatcher.CurrentDispatcher.BeginInvoke(DispatcherPriority.Background, new Func<object, object>(o =>
    {
        ((DispatcherFrame)o).Continue = false;
        return null;
    }), frame);
    Dispatcher.PushFrame(frame);
}
The idea is simple: check whether the current thread needs Invoke (i.e. it is the UI thread) and then either run a DoEvents loop or block the thread.
Test() can be called from the UI thread or from another task.
It works (not fully tested though), but it's ugly. I hope this makes my requirements clear, and I still need the answer to my question, if there is any answer better than "no, you can't do this" ;)
I have a task which executes asynchronously. Part of this task adds items to the UI via Dispatcher.BeginInvoke, where I update an ObservableCollection. For thread-safe access to the collection I use a SemaphoreSlim, but since the request to the collection happens on the UI thread and Dispatcher.BeginInvoke also runs on the UI thread, I get a deadlock.
private readonly ObservableCollection<String> parameters = new ObservableCollection<String>();
private readonly SemaphoreSlim semaphore = new SemaphoreSlim(0, 1);

//Called from UI
public ObservableCollection<String> Parameters
{
    get
    {
        semaphore.Wait();
        var result = this.parameters;
        semaphore.Release();
        return result;
    }
}

public async Task Operation()
{
    await semaphore.WaitAsync();

    List<String> stored = new List<String>();
    foreach (var parameter in currentRobot.GetParametersProvider().GetParameters())
    {
        stored.Add(parameter.PropertyName);
    }

    // Can't add all items to the UI at once, because it takes a long time and the UI starts to lag
    foreach (var model in stored)
    {
        await UIDispatcher.BeginInvoke(new Action(() =>
        {
            this.parameters.Add(model);
        }), System.Windows.Threading.DispatcherPriority.Background);
    }

    semaphore.Release();
}
And this is how I get the deadlock:
When I click a button in my program, Operation executes.
When I click another button, the program tries to access the Parameters property.
And I get a deadlock =D
Problem: in the async operation I fill an ObservableCollection via Dispatcher.BeginInvoke for each item separately, because if I add all items at once using the Dispatcher, the UI lags. So I need a synchronization mechanism for access to the Parameters property which will wait until Operation ends.
await ensures the code after it will run in the original SynchronizationContext, in this case on the UI thread. This makes BeginInvoke unnecessary, as the code already runs on the correct thread.
The result is that you are trying to acquire two locks on the same object from the same thread, resulting in deadlock.
If you want thread-safe access to a collection of objects, avoid manually creating locks and use a thread-safe collection like ConcurrentQueue or ConcurrentDictionary.
Apart from that, I can't say I understand what the code tries to achieve as it does nothing in the background or asynchronously. It could easily be a simple method that copies parameters from one collection to another and it would still be thread safe if written properly. You could just write:
var _parameters = new ConcurrentQueue<string>();
....

public void CopyParameters()
{
    foreach (var parameter in currentRobot.GetParametersProvider().GetParameters())
    {
        _parameters.Enqueue(parameter.PropertyName);
    }
}
If you use databinding on the Parameters property, just raise PropertyChanged after you add all entries
What is the real problem you are trying to solve?
UPDATE
It seems the real problem is the UI freezes if you try to add too many items at a time. This isn't a threading problem, it's a WPF problem. There are various solutions, all of which involve raising PropertyChanged only after you finish adding all properties.
If you don't need the old values, just create a list with the new values, replace the old values, then raise the PropertyChanged event, e.g.:
private ObservableCollection<String> _parameters = new ObservableCollection<String>();

public ObservableCollection<String> Parameters
{
    get
    {
        return _parameters;
    }
    private set
    {
        _parameters = value;
        PropertyChanged("Parameters"); // shorthand for raising INotifyPropertyChanged.PropertyChanged
    }
}

public void CopyParameters()
{
    var newParameters = currentRobot.GetParametersProvider()
                                    .GetParameters()
                                    .Select(p => p.PropertyName);
    Parameters = new ObservableCollection<string>(newParameters);
}
Unless you have code that modifies Parameters one item at a time though, you could easily swap ObservableCollection for any other collection type, even a string[] array.
Another option is to subclass ObservableCollection to add support for AddRange, as shown in this SO question
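For illustration, a minimal sketch of such a subclass (the class name and the decision to raise a single Reset notification are assumptions, not taken from the linked question):

using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Collections.Specialized;
using System.ComponentModel;

public class RangeObservableCollection<T> : ObservableCollection<T>
{
    public void AddRange(IEnumerable<T> items)
    {
        // Add items to the underlying collection without per-item notifications...
        foreach (var item in items)
            Items.Add(item);

        // ...then raise a single Reset so bound controls refresh only once.
        OnPropertyChanged(new PropertyChangedEventArgs("Count"));
        OnPropertyChanged(new PropertyChangedEventArgs("Item[]"));
        OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
    }
}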
I want to write a synchronous test that calls into some asynchronous product tasks.
In the example below, DoSomething() is called by a separate thread, and it then raises the SomethingCompleted event.
In my test code, how do I wait for SomethingCompleted to be set?
public event Action<Result> SomethingCompleted;

public void DoSomething()
{
    Something();
    this.SomethingCompleted(new Result("Success"));
}
using (var evt = new ManualResetEvent(false))
{
    Action<Result> handler = _ => evt.Set();
    SomethingCompleted += handler;
    evt.WaitOne();
    SomethingCompleted -= handler; // cut the object reference to help GC
}
If required you can unsubscribe from the event after the wait has completed. That way the event will not keep the delegate and closure instance alive.
You can extract this into a reusable helper method/extension.
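For illustration, a hedged sketch of what such a helper could look like (the class and method names, and the timeout parameter, are assumptions):

using System;
using System.Threading;

public static class EventWaiter
{
    // Blocks the calling thread until the event exposed via subscribe/unsubscribe fires,
    // then returns the value the event was raised with (or default(T) on timeout).
    public static T WaitForEvent<T>(Action<Action<T>> subscribe, Action<Action<T>> unsubscribe, TimeSpan timeout)
    {
        T result = default(T);
        using (var evt = new ManualResetEventSlim(false))
        {
            Action<T> handler = arg => { result = arg; evt.Set(); };
            subscribe(handler);
            try
            {
                evt.Wait(timeout);
            }
            finally
            {
                unsubscribe(handler); // always detach so the delegate doesn't keep objects alive
            }
        }
        return result;
    }
}

// usage sketch:
var result = EventWaiter.WaitForEvent<Result>(
    h => yourObj.SomethingCompleted += h,
    h => yourObj.SomethingCompleted -= h,
    TimeSpan.FromSeconds(30));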
// test case
public void Test()
{
    var yourObj = new YourObj();
    var done = false;
    Result result = null;

    yourObj.SomethingCompleted += (finalResult) =>
    {
        result = finalResult;
        done = true;
    };

    yourObj.DoSomething();

    while (!done) Thread.Sleep(200);

    if (result != theExpectedResult) kaboom();
}
What about subscribing to the event and "polling" in a loop until the result becomes available? This should work.
You're using the wrong type of event. The event you're using is a callback. In the code you supplied, the delegates attached to the SomethingCompleted event are going to be called on the same thread as DoSomething.
What you want is thread synchronization, using an event like AutoResetEvent or ManualResetEvent, which have nothing to do with the framework/language-level event that you're using. You could do something like this:
void MainThreadProc()
{
    // create thread synchronization event
    using (var evt = new ManualResetEvent(false))
    {
        // start background operation
        ThreadPool.QueueUserWorkItem(BackgroundThreadProc, evt);

        // this line will block until BackgroundThreadProc sets the event
        evt.WaitOne();
    }

    // resume main thread operations here
    ...
}

void BackgroundThreadProc(object o)
{
    // get event from argument
    var evt = (ManualResetEvent)o;

    // do the deed
    ...

    // set event to signal completion
    evt.Set();
}
This is just one of a number of different ways to do this. Alternatives include Parallel LINQ or the Task Parallel Library (both of which are best used with parallel operations, not just a single background operation). If you don't want to block the main thread, look at BackgroundWorker.
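As a further alternative, if the test method can be made async, a TaskCompletionSource turns the event into something awaitable without blocking any thread; a hedged sketch using the names from the question:

public async Task TestAsync()
{
    var yourObj = new YourObj();
    var tcs = new TaskCompletionSource<Result>();

    // Complete the task when the event fires.
    yourObj.SomethingCompleted += r => tcs.TrySetResult(r);

    yourObj.DoSomething(); // or whatever kicks off the background work

    // Await instead of blocking; no thread is tied up while waiting.
    Result result = await tcs.Task;
}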
Many times in UI development I handle events in such a way that when an event first comes in, I immediately start processing, but if a processing operation is already in progress, I wait for it to complete before I process another event. If more than one event occurs before the operation completes, I only process the most recent one.
The way I typically do that: my process method has a loop, and in my event handler I check a field that indicates whether I am currently processing something. If I am, I put the current event arguments in another field that is basically a one-item buffer; when the current processing pass completes, I check whether there is another event to process and loop until I am done.
Now this seems a bit too repetitive and possibly not the most elegant way to do it, though it seems to otherwise work fine for me. I have two questions then:
Does what I need to do have a name?
Is there some reusable synchronization type out there that could do that for me?
I'm thinking of adding something to the set of async coordination primitives by Stephen Toub that I included in my toolkit.
So first, we'll handle the case that you described in which the method is always used from the UI thread, or some other synchronization context. The Run method can itself be async to handle all of the marshaling through the synchronization context for us.
If we're running we just set the next stored action. If we're not, then we indicate that we're now running, await the action, and then continue to await the next action until there is no next action. We ensure that whenever we're done we indicate that we're done running:
public class EventThrottler
{
    private Func<Task> next = null;
    private bool isRunning = false;

    public async void Run(Func<Task> action)
    {
        if (isRunning)
            next = action;
        else
        {
            isRunning = true;
            try
            {
                await action();
                while (next != null)
                {
                    var nextCopy = next;
                    next = null;
                    await nextCopy();
                }
            }
            finally
            {
                isRunning = false;
            }
        }
    }

    private static Lazy<EventThrottler> defaultInstance =
        new Lazy<EventThrottler>(() => new EventThrottler());

    public static EventThrottler Default
    {
        get { return defaultInstance.Value; }
    }
}
Because the class is, at least generally, going to be used exclusively from the UI thread there will generally need to be only one, so I added a convenience property of a default instance, but since it may still make sense for there to be more than one in a program, I didn't make it a singleton.
Run accepts a Func<Task> with the idea that it would generally be an async lambda. It might look like:
public class Foo
{
    public void SomeEventHandler(object sender, EventArgs args)
    {
        EventThrottler.Default.Run(async () =>
        {
            await Task.Delay(1000);
            // do other stuff
        });
    }
}
Okay, so, just to be verbose, here is a version that handles the case where the event handlers are called from different threads. I know you said that you assume they're all called from the UI thread, but I generalized it a bit. This means locking over all access to instance fields of the type in a lock block, but not actually executing the function inside of a lock block. That last part is important not just for performance, to ensure we're not blocking callers that just want to set the next field, but also to avoid issues with the action itself calling Run, so that it doesn't need to deal with re-entrancy issues or potential deadlocks. This pattern, of doing work in a lock block and then responding based on conditions determined inside the lock, means setting local variables to indicate what should be done after the lock ends.
public class EventThrottlerMultiThreaded
{
    private object key = new object();
    private Func<Task> next = null;
    private bool isRunning = false;

    public void Run(Func<Task> action)
    {
        bool shouldStartRunning = false;
        lock (key)
        {
            if (isRunning)
                next = action;
            else
            {
                isRunning = true;
                shouldStartRunning = true;
            }
        }

        Action<Task> continuation = null;
        continuation = task =>
        {
            Func<Task> nextCopy = null;
            lock (key)
            {
                if (next != null)
                {
                    nextCopy = next;
                    next = null;
                }
                else
                {
                    isRunning = false;
                }
            }
            if (nextCopy != null)
                nextCopy().ContinueWith(continuation);
        };

        if (shouldStartRunning)
            action().ContinueWith(continuation);
    }
}
Does what I need to do have a name?
What you're describing sounds a bit like a trampoline combined with a collapsing queue. A trampoline is basically a loop that iteratively invokes thunk-returning functions. An example is the CurrentThreadScheduler in the Reactive Extensions. When an item is scheduled on a CurrentThreadScheduler, the work item is added to the scheduler's thread-local queue, after which one of the following things will happen:
If the trampoline is already running (i.e., the current thread is already processing the thread-local queue), then the Schedule() call returns immediately.
If the trampoline is not running (i.e., no work items are queued/running on the current thread), then the current thread begins processing the items in the thread-local queue until it is empty, at which point the call to Schedule() returns.
A collapsing queue accumulates items to be processed, with the added twist that if an equivalent item is already in the queue, then that item is simply replaced with the newer item (resulting in only the most recent of the equivalent items remaining in the queue, as opposed to both). The idea is to avoid processing stale/obsolete events. Consider a consumer of market data (e.g., stock ticks). If you receive several updates for a frequently traded security, then each update renders the earlier updates obsolete. There is likely no point in processing earlier ticks for the same security if a more recent tick has already arrived. Thus, a collapsing queue is appropriate.
In your scenario, you essentially have a trampoline processing a collapsing queue for which all incoming events are considered equivalent. This results in an effective maximum queue size of 1, as every item added to a non-empty queue will result in the existing item being evicted.
Is there some reusable synchronization type out there that could do that for me?
I do not know of an existing solution that would serve your needs, but you could certainly create a generalized trampoline or event loop capable of supporting pluggable scheduling strategies. The default strategy could use a standard queue, while other strategies might use a priority queue or a collapsing queue.
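For illustration, a minimal sketch of the collapsing part on its own: a one-slot "queue" where a newer item evicts the pending one (the names and locking strategy are my own assumptions):

// A collapsing buffer of size one.
public class CollapsingSlot<T>
{
    private readonly object gate = new object();
    private bool hasItem;
    private T item;

    // A newer item replaces whatever is pending.
    public void Enqueue(T newItem)
    {
        lock (gate)
        {
            item = newItem;
            hasItem = true;
        }
    }

    // Hands out the most recent item, if any, and empties the slot.
    public bool TryDequeue(out T result)
    {
        lock (gate)
        {
            result = item;
            bool had = hasItem;
            hasItem = false;
            item = default(T);
            return had;
        }
    }
}

A trampoline (such as the EventThrottler above) would drain this slot in a loop after each work item completes.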
What you're describing sounds very similar to how TPL Dataflow's BroadcastBlock behaves: it always remembers only the last item that you sent to it. If you combine it with an ActionBlock that executes your action and has capacity only for the item currently being processed, you get what you want (the method needs a better name):
// returns send delegate
private static Action<T> CreateProcessor<T>(Action<T> executedAction)
{
    var broadcastBlock = new BroadcastBlock<T>(null);
    var actionBlock = new ActionBlock<T>(
        executedAction, new ExecutionDataflowBlockOptions { BoundedCapacity = 1 });
    broadcastBlock.LinkTo(actionBlock);
    return item => broadcastBlock.Post(item);
}
Usage could be something like this:
var processor = CreateProcessor<int>(
    i =>
    {
        Console.WriteLine(i);
        Thread.Sleep(i);
    });

processor(100);
processor(1);
processor(2);
Output:
100
2