Queue function calls to write char for char - c#

What should it do
I'm trying to write some text character by character into a TextBlock. I'm using this code for it:
void WriteTextCharForChar(String text)
{
    new Thread(() =>
    {
        foreach (Char c in text)
        {
            TxtDisplayAppendText(c.ToString());
            Thread.Sleep(rnd.Next(20, 100));
        }
        Thread.CurrentThread.Abort();
    }).Start();
}
The problem
The problem is that the text, of course, gets mixed up when this method is called more than once. What I need is some kind of queue, or a way to wait until the current text has finished being written to the TextBlock.
Of course I'm open to any other solution that gets this working. Thank you!

So there are several options here.
You could add a lock around the work that the thread does, so that there is never more than one running at a time.
This has several problems though:
The items aren't necessarily processed in order
You're creating lots of threads, all of which are spending almost all of their time waiting; this is very wasteful of resources.
You could create a thread-safe queue (a BlockingCollection would be best) and then have a single thread reading from it and writing out the results, while the UI thread just adds to the queue.
This also has problems. Most notably you're creating a new thread that's going to be spending basically all of its time waiting around. This is probably better than #1, but not by a lot.
You could avoid using multiple threads entirely and do everything asynchronously. The Task Parallel Library gives you a lot of tools to help with this. This is the best option as it results in the creation of 0 extra threads.
So first we'll create a helper method that will handle the writing of the text itself, so that another method can be the one to handle "scheduling" these calls. This method will be much easier to write using await. A key point to note is that, rather than blocking the current thread for an unknown period of time, it will use Task.Delay to continue execution at a point in the future without blocking the thread:
private async Task WriteText(string text)
{
    foreach (char c in text)
    {
        TxtDisplayAppendText(c.ToString());
        await Task.Delay(rnd.Next(20, 100));
    }
}
Now for your method. We can manage our queue through what will in effect be a linked list of tasks. If we have a single field of type Task representing the "previous task", we can have each method call add a continuation to that task and then set itself as the previous task. The next call will set itself as the continuation of that, and so on. Each continuation will fire when the previous task completes, or will run immediately if the previous task has already finished, so this effectively gives us our "queue". Note that since WriteTextCharForChar is being called from the UI thread, these calls are already synchronized, so there's no need to lock around the manipulation of this task.
private Task previousWrite = Task.FromResult(false); // already-completed task

private void WriteTextCharForChar(String text)
{
    previousWrite = previousWrite.ContinueWith(t => WriteText(text))
        .Unwrap();
}
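For illustration, here is a minimal sketch of how the queued writer might be driven; the button handlers and the TxtDisplayAppendText implementation below are assumptions, not part of the original question:

// Hypothetical usage: each click is queued behind the previous write,
// so the two strings never interleave in the TextBlock.
private void BtnHello_Click(object sender, RoutedEventArgs e)
{
    WriteTextCharForChar("Hello, ");
}

private void BtnWorld_Click(object sender, RoutedEventArgs e)
{
    WriteTextCharForChar("world!");
}

// Assumed implementation of the helper the question calls:
private void TxtDisplayAppendText(string text)
{
    TxtDisplay.Text += text; // TxtDisplay is assumed to be the TextBlock
}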

Related

Pausing a thread safely [duplicate]

This question already has answers here: When to use Task.Delay, when to use Thread.Sleep? (10 answers). Closed 3 years ago.
I was wondering if there is any problem with pausing a thread for a defined period of time at every iteration (I am running a continuous loop).
My first choice was Task.Delay, but I don't know if there could be any issues. Should I just go for Thread.Sleep or an EventWaitHandle instead?
class UpdateThread {
    private Thread thread;
    Fabric.Client client;

    public UpdateThread(Fabric.Client client) {
    }

    public void Run() {
        thread = new Thread(new ThreadStart(async () => await UpdateAsync()));
    }

    public async Task UpdateAsync() {
        while (true) {
            await Task.Delay(Constants.REFRESH_INTERVAL);
        }
    }
}
What are the downsides to the above mentioned methods ?
P.S: This thread is running alongside a Windows Forms application (thread)
There is a potential problem with the ThreadStart delegate that you pass to the Thread's constructor, which is defined as public delegate void ThreadStart(). The fact that you provide an async void lambda for it makes it a fire-and-forget call. I.e., it's asynchronous but it doesn't return a Task to observe for result or exceptions.
Your new thread will most likely end as soon as the execution flow inside it hits the first await something, be it await Task.Delay or anything else. So, technically, you're not pausing a thread here. The logical execution after that await will continue on a random thread pool thread, which will most likely be different from the thread you initially created.
You'd be better off just using Task.Run instead of new Thread. The former has an overload for async Task lambdas, which you should normally be using instead of async void anyway. Thus, you could pass your UpdateAsync directly to Task.Run and get the proper exception-propagation logic for async methods.
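As a rough sketch of that suggestion (the Fabric.Client field and Constants.REFRESH_INTERVAL come from the question; the class and field names here are illustrative):

public class UpdateLoop {
    private readonly Fabric.Client client;
    private Task runningTask; // keeps the returned Task observable

    public UpdateLoop(Fabric.Client client) {
        this.client = client;
    }

    public void Run() {
        // Task.Run has an overload taking Func<Task>, so the async lambda's
        // Task is captured instead of being fired and forgotten.
        runningTask = Task.Run(() => UpdateAsync());
    }

    private async Task UpdateAsync() {
        while (true) {
            // ... refresh work using client would go here ...
            await Task.Delay(Constants.REFRESH_INTERVAL);
        }
    }
}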
If for some reason you still want to stick with new Thread and pass an async void lambda to it, make sure to observe all exceptions thrown by UpdateAsync. Otherwise, they will be thrown "out-of-band" on a random pool thread. Also note that creating a new thread (and then almost instantly ending it) is a rather expensive runtime operation. OTOH, when using Task.Run, you normally just borrow/return an existing thread from/to the thread pool, which is much faster.
That said, in this particular case you may as well just use Thread.Sleep instead of async methods and Task.Delay, to avoid having to deal with asynchrony and thread switching at all. It's a client-side WinForms application where you normally don't care (to a reasonable extent) about scaling, i.e., the number of busy or blocked threads.
In this case you should use Task.Delay, because Thread.Sleep would send a Thread from the .NET ThreadPool to sleep and that is most likely not what you want. You are also mixing lower-level Thread with higher-level Task. You don't need to start a new thread. It is enough to just call UpdateAsync() without calling Wait() or similar.
Use Thread.Sleep when you want to block the current thread.
Use Task.Delay when you want a logical delay without blocking the current thread.
Source
I prefer handling such cases with Thread.Sleep because it's lower level and feels more efficient to me, but that's just a personal preference.
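To make the difference concrete, a minimal illustration (purely for demonstration, inside an async method):

private async Task PauseExamplesAsync()
{
    // Blocks the calling thread for one second; nothing else can run on it.
    Thread.Sleep(1000);

    // Logically pauses for one second without holding a thread; the method
    // yields at the await and resumes later.
    await Task.Delay(1000);
}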

Why is it not possible to join a task?

I recently ran into a deadlock issue similar to the one described here: An async/await example that causes a deadlock
This question is not a duplicate of that one.
Because I have access to the async part of the code, I have been able to use the solution described by Stephen Cleary in his blog post here: http://blog.stephencleary.com/2012/07/dont-block-on-async-code.html
So I added ConfigureAwait(false) on the first awaited Task.
I can't use the second solution (put async everywhere) because of the amount of impacted code.
But I keep asking myself: what if I can't modify the called async code (e.g., an external API)?
So I came up with a solution to avoid the deadlock: a way to Join the task. By Join I mean a way to wait for the targeted task to end without blocking other tasks that run on the current thread.
Here is the code:
public static class TaskExtensions
{
    public static void Join(this Task task)
    {
        var currentDispatcher = Dispatcher.CurrentDispatcher;
        while (!task.IsCompleted)
        {
            // Call back into the dispatcher to allow other tasks on the current thread to run
            currentDispatcher.Invoke(delegate { }, DispatcherPriority.SystemIdle);
        }
    }
}
I want to emphasize that I'm not sure that the correct name for this method is Join.
My question is: why is Task.Wait() not implemented this way, or why can't it optionally be used this way?
This Join method is simply busy-waiting for the task to complete, with an added equivalent of Application.DoEvents(). By repeatedly calling into the dispatcher like that, you have essentially implemented a nested message pump that keeps the UI alive.
This is a really bad idea because it drives the CPU to 100%.
You really need to treat the UI message loop correctly and get off the UI thread when waiting. await is great for that.
You say that you really want to avoid making all the code async-aware because it would be a lot of work. Maybe you can make smart use of the await Task.Run(() => OldSynchronousCode()); pattern to avoid much of that work. Since this runs on the UI thread, the frequency of such calls should be very low, which means the overhead it adds is also very low and isn't an issue.
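A sketch of that pattern, assuming a WPF button handler; OldSynchronousCode and ResultLabel are stand-ins for the caller's existing blocking code and UI element:

private async void RunButton_Click(object sender, RoutedEventArgs e)
{
    // The blocking call runs on a thread-pool thread; the UI thread is
    // released at the await instead of spinning a nested message pump.
    string result = await Task.Run(() => OldSynchronousCode());
    ResultLabel.Content = result;
}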

C# await on a List<T> Count

I am upgrading some legacy WinForms code and I am trying to figure out the "right way", as of .NET 4.6.1, to refactor the following.
The current code is doing a tight while(true) loop while checking a bool property. This property puts a lock() on a generic List<T> and then returns true if it has no items (list.Count == 0).
The loop has the dreaded Application.DoEvents() in it to make sure the message pump continues processing, otherwise it would lock up the application.
Clearly, this needs to go.
My confusion is how to start a basic refactoring so that it can still check the queue length while executing on a thread, without burning CPU for no reason. A delay between checks is fine here, even a "long" one like 100ms+.
I was going to go with an approach that makes the method async and lets a Task run to do the check:
await Task.Run(() => KeepCheckingTheQueue());
Of course, this keeps me in the situation of the method needing to ... loop to check the state of the queue.
Between the waiting, awaiting, and various other methods that can be used to move this stuff to the thread pool ... any suggestion on how best to handle this?
What I need is how best to "poll" a boolean member (or property) while freeing the UI, without the DoEvents().
The answer you're asking for:
private async Task WaitUntilAsync(Func<bool> func)
{
    while (!func())
        await Task.Delay(100);
}
await WaitUntilAsync(() => list.Count == 0);
However, polling like this is a really poor approach. If you can describe the actual problem your code is solving, then you can get better solutions.
For example, if the list represents some queue of work and your code wants to asynchronously wait until it's done, then this can be better coded using an explicit signal (e.g., TaskCompletionSource<T>) or a true producer/consumer queue (e.g., TPL Dataflow).
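As a rough sketch of the explicit-signal idea (the class and member names below are illustrative, not from the question):

// Waiters are signalled when the list drains, instead of polling list.Count.
public class PendingWork
{
    private readonly object gate = new object();
    private readonly List<string> items = new List<string>();
    private TaskCompletionSource<bool> drained = new TaskCompletionSource<bool>();

    public void Add(string item)
    {
        lock (gate)
        {
            items.Add(item);
            if (items.Count == 1)
                drained = new TaskCompletionSource<bool>(); // re-arm the signal
        }
    }

    public void Complete(string item)
    {
        lock (gate)
        {
            items.Remove(item);
            if (items.Count == 0)
                drained.TrySetResult(true); // wake anyone awaiting WhenEmptyAsync
        }
    }

    // Callers await this instead of looping on list.Count.
    public Task WhenEmptyAsync()
    {
        lock (gate)
        {
            return items.Count == 0 ? Task.CompletedTask : (Task)drained.Task;
        }
    }
}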
It's generally not a good idea for client code to worry about locking a collection (or to sprinkle your code with lock() blocks everywhere) before querying it. It's best to encapsulate that complexity away.
Instead, I recommend using one of the .NET concurrent collections, such as ConcurrentBag. There's no need to create a Task, which is somewhat expensive.
If your collection does not change much you might want to consider one of the immutable thread-safe collections such as ImmutableList<>.
EDIT: Upon reading your comments, I suggest you use a WinForms Timer, OnApplicationIdle, or a BackgroundWorker. The problem with async is that you still need to call it periodically. Using a timer or app-idle callback offers the benefit of running on the GUI thread.
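A rough sketch of the timer suggestion; the field names and the OnQueueDrained callback are hypothetical. A System.Windows.Forms.Timer ticks on the UI thread, so no Invoke is needed:

private readonly System.Windows.Forms.Timer queueTimer = new System.Windows.Forms.Timer();
private readonly object lockObject = new object();           // assumed: the lock guarding the list
private readonly List<object> workList = new List<object>(); // assumed: the queue being watched

private void Form1_Load(object sender, EventArgs e)
{
    queueTimer.Interval = 100; // a "long" delay between checks is fine per the question
    queueTimer.Tick += (s, args) =>
    {
        bool empty;
        lock (lockObject)
            empty = workList.Count == 0;

        if (empty)
        {
            queueTimer.Stop();
            OnQueueDrained(); // hypothetical: whatever should happen once the queue is empty
        }
    };
    queueTimer.Start();
}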
Depending on the use case, you could start a background thread or a background worker. Or maybe even a timer.
Those execute on a different thread and therefore don't block the execution of your other form-related code. Invoke back to the original thread if you have to perform actions on the UI thread.
I would also recommend avoiding locking as much as possible, for example by doing a check before actually taking the lock:
if (list.Count == 0)
{
    lock (lockObject)
    {
        if (list.Count == 0)
        {
            // execute your code here
        }
    }
}
That way you are only locking if you really need to and you avoid unnecessary blocking of your application.
I think what you're after here is the ability to await Task.Yield().
class TheThing {
    private readonly List<int> _myList = new List<int>();

    public async Task WaitForItToNotBeEmpty() {
        bool hadItems;
        do {
            await Task.Yield();
            lock (_myList) // Other answers have touched upon this locking concern
                hadItems = _myList.Count != 0;
        } while (!hadItems);
    }

    // ...
}

C# Async - How many objects are created here?

I am trying to understand the code I wrote,
for (int i = 0; i < 5; i++)
{
    ExecuteCMD("/c robocopy C:\\Source D:\\Source /MIR", true);
}

public async void ExecuteCMD(string cmdLine, bool waitForExit)
{
    await Task.Factory.StartNew(() =>
    {
        ExecuteProcess proc = new ExecuteProcess(cmdLine, waitForExit);
        proc.Execute();
    });
}
The async method ExecuteCMD runs in a loop 5 times. I know async doesn't create new threads. So are there 5 objects created with the same name ('proc') on the same thread? Please explain.
Many thanks in advance!
Do you mean your ExecuteProcess proc object? That is a local variable of your lambda function, so there is no conflict in your code.
The lambda
() =>
{
    ExecuteProcess proc = new ExecuteProcess(cmdLine, waitForExit);
    proc.Execute();
}
is called 5 times, but every call creates only one instance of ExecuteProcess for the variable proc.
You are using Task.Factory.StartNew, so you will most likely (see Stephen Cleary's comment) end up on the default TaskScheduler, which happens to execute work on thread pool threads. Your ExecuteProcess allocation and Execute call will therefore occur 5 times, as expected, on thread pool threads (see above point re default scheduler) - and most likely in parallel to each other, and in parallel to your for loop (this last part might be difficult to wrap your head around, but that's the whole problem with async void - the execution order is non-deterministic; more on that later).
You are sort of right in that async/await does not necessarily create new threads. async/await is all about chaining tasks and their continuations so that they execute in correct order with respect to each other. Where the actual Task runs is determined by how that Task is created. Here you are explicitly requesting your work to be pushed out to the thread pool, because that is where Tasks created by Task.Factory.StartNew execute.
Others have pointed out that you might be using async void erroneously - possibly due to lack of understanding. async void is only good for scheduling work in a fire-and-forget manner, and this work needs to be self-contained, complete with its own exception handling and concurrency controls. Because your async void will run unobserved, in parallel to the rest of your code. It's like saying: "I want this piece of code to run at some point in the future. I don't need to know when it completes or whether it raised any exceptions - I'll just wait until it hits the first await, and then carry on executing my own work - and the rest of the async void will proceed on its own, in parallel, without supervision". Because of this, if you put some code after your for loop, it will most likely execute before your ExecuteProcess work, which may or may not be what you want.
Here's a step-by-step view of what actually happens in your application.
You hit the first iteration of the for loop on the main thread. The runtime calls ExecuteCMD, as if it were any other, synchronous method call. It enters the method and executes the code preceding the await, still as part of the first for loop iteration, on the main thread. Then it schedules some work on the thread pool via Task.Factory.StartNew. This work will execute at some point in the future on, let's say, thread pool thread #1. The Task returned by Task.Factory.StartNew is then awaited. This task cannot possibly complete synchronously, so the await schedules the rest of your async void to run in the future (on the main thread, after the task created by Task.Factory.StartNew has completed) and yields. At this point your ExecuteProcess work probably hasn't even started yet, but the main thread is already free to jump to the second iteration of the for loop. It does exactly that, which results in another task being scheduled to run on, say, thread pool thread #2, at some point in the future - followed by yet another continuation scheduled to run on the main thread. The for loop then jumps to the next item.
By the time your for loop ends, you will most likely have 5 Tasks waiting to execute on thread pool threads, followed by 5 continuation Tasks waiting to execute on the main thread. They will all complete at some point in the future. Your code won't know when because you told it that you don't care (by using async void).
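If the intent is to run the five copies one after another and know when each finishes, a hedged rewrite would return a Task and await it (ExecuteProcess is the questioner's own type; the method names here are illustrative):

// Returns a Task so the caller can await completion and observe exceptions.
public Task ExecuteCmdAsync(string cmdLine, bool waitForExit)
{
    return Task.Run(() =>
    {
        ExecuteProcess proc = new ExecuteProcess(cmdLine, waitForExit);
        proc.Execute();
    });
}

// The loop now runs the copies sequentially and deterministically.
private async Task RunCopiesAsync()
{
    for (int i = 0; i < 5; i++)
    {
        await ExecuteCmdAsync("/c robocopy C:\\Source D:\\Source /MIR", true);
    }
}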

Best practice for queueing events in WPF

I have a WPF (MVVM) project where I have multiple view-models, each with a button that launches different analyses on the same data source, which in this case is a file. The file cannot be shared, so if the buttons are pressed near the same time the second call will fail.
I need a way to queue the button clicks so that each analysis can be run sequentially, but I can't seem to get it to work. I tried using a static Semaphore, SemaphoreSlim and Mutex, but they appear to stop everything (the Wait() function appears to block the currently running analysis). I tried a lock() command with a static object but it didn't seem to block either event (I get the file share error). I also tried a thread pool (with a max concurrent thread count of 1), but it gives threading errors updating the UI (this may be solvable with Invoke() calls).
My question is what might be considered best practice in this situation with WPF?
EDIT: I created a mockup which exhibits the problem I'm having. It is at http://1drv.ms/1s4oQ1T.
What you need here is an asynchronous queue, so that you can enqueue these tasks without actually having anything blocking your threads. SemaphoreSlim actually has a WaitAsync method that makes creating such a queue rather simple:
public class TaskQueue
{
    private SemaphoreSlim semaphore;

    public TaskQueue()
    {
        semaphore = new SemaphoreSlim(1);
    }

    public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            return await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }

    public async Task Enqueue(Func<Task> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
This allows you to enqueue operations that will all be executed sequentially rather than in parallel, without blocking any threads at any time. The operations can also be any type of asynchronous operation, whether that is CPU-bound work on another thread, IO-bound work, etc.
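For example, each view-model's button handler could funnel its analysis through one shared queue; the AnalysisScheduler holder and RunAnalysisOnFileAsync method below are assumptions for illustration:

// One shared queue for all view-models; analyses run strictly one at a time.
public static class AnalysisScheduler
{
    public static TaskQueue Queue { get; } = new TaskQueue();
}

// In a view-model's button/command handler:
private async void RunAnalysis_Click(object sender, RoutedEventArgs e)
{
    // RunAnalysisOnFileAsync stands in for the real analysis; only one
    // queued operation can touch the shared file at any moment.
    await AnalysisScheduler.Queue.Enqueue(() => RunAnalysisOnFileAsync());
}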
I would do two things to solve this problem:
First, encapsulate the analysis operations in a command pattern. If you aren't familiar with it, the simplest implementation is an interface with a single function Execute. When you want to perform an analysis operation, just create one of these. You could also use the built-in ICommand interface to help, but be aware that this interface has more to it than the generic command pattern.
Of course, creation is only half the battle, so after doing so I would add it to a BlockingCollection. This collection is .NET's solution to the Producer-Consumer problem. Have a background thread that consumes this collection (executing the command objects contained within) using a foreach on the collection's GetConsumingEnumerable method and your buttons will "feed" it.
foreach (var item in bc.GetConsumingEnumerable())
{
    item.Execute();
}
MSDN for Blocking Collection: http://msdn.microsoft.com/en-us/library/dd267312(v=vs.110).aspx
Now, all the semaphores, waits, etc. are done for you, and you can just add an operation to the queue (if it needs to be a queue, consider using ConcurrentQueue as the backing collection for BlockingCollection) and return on the UI thread. The background thread will pick the task up and run it.
You will need to Invoke any UI updates from the background thread of course, no getting around that issue :).
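A minimal sketch of that setup, under the assumption of a simple command interface as described (names illustrative):

public interface IAnalysisCommand
{
    void Execute();
}

public class AnalysisQueue
{
    // ConcurrentQueue backing keeps FIFO order for the queued analyses.
    private readonly BlockingCollection<IAnalysisCommand> commands =
        new BlockingCollection<IAnalysisCommand>(new ConcurrentQueue<IAnalysisCommand>());

    public AnalysisQueue()
    {
        // Single long-running consumer thread: executes commands one at a time.
        Task.Factory.StartNew(() =>
        {
            foreach (var command in commands.GetConsumingEnumerable())
            {
                command.Execute();
            }
        }, TaskCreationOptions.LongRunning);
    }

    // Called from the UI thread by each button handler.
    public void Enqueue(IAnalysisCommand command)
    {
        commands.Add(command);
    }
}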
I'd recommend a queue in a scheduling object shared by the view-models, with a consumer task that waits for an item to be added to the queue. When a button is pressed, the view-model adds a work item to the queue. The consumer task takes one item from the queue at a time, does the analysis contained in that work item, and then checks the queue for another item, waiting if there is nothing left to process.
