I have a scenario where I'm doing some Actor-Model kind of message queuing where I want a method to insert a Task or delegate into a queue (possibly the new ConcurrentQueue), wait for some other process to process the queue, execute the task and then return the result, preferably without locking. The method might be called both synchronously and asynchronously, and only one queued action may run at a time.
I can't wrap my head around how to accomplish this in a somewhat performant manner, please help :)
EDIT
Here's an attempt; does anyone see any problems with this approach (exception handling excluded)? Also, I can imagine this has quite a lot of overhead compared to simply locking; how does it compare to, for instance, using asynchronous delegates?
public partial class Form1 : Form
{
    private BlockingCollection<Task<int>> blockingCollection =
        new BlockingCollection<Task<int>>(new ConcurrentQueue<Task<int>>());
    private int i = 0;

    public Form1()
    {
        InitializeComponent();
        Task.Factory.StartNew(() =>
        {
            foreach (var task in blockingCollection.GetConsumingEnumerable())
            {
                task.Start();
                task.Wait();
            }
        });
    }

    public int Queue()
    {
        var task = new Task<int>(new Func<int>(DoSomething));
        this.blockingCollection.Add(task);
        task.Wait();
        return task.Result;
    }

    public int DoSomething()
    {
        return Interlocked.Increment(ref this.i);
    }

    private void button1_Click(object sender, EventArgs e)
    {
        Task.Factory.StartNew(() => Console.Write(this.Queue()));
    }
}
The TPL should do that for you - just call Wait() on your Task<T> - however, there is no way to do this without blocking; by definition, in your scenario that is exactly what you want to do. Blocking might be implemented via a lock, but there are other ways too - the TPL hides this. Personally, in a similar scenario I do it with a custom queue and a mini-pool of objects I can use to lock against (never exposed outside the wrapper).
You might also want to look at the C# 5 async/await stuff.
But note: if you aren't going to do anything useful while you are waiting, you might as well run that code directly on the current thread - unless the issue is thread-bound, for example a multiplexer. If you are interested, later today (or over the weekend) I intend to release the multiplexer that Stack Overflow uses to talk to redis, which (in synchronous mode, at least) has exactly the problems you describe.
As a side note; if you can work with a callback (from the other thread), and not have to wait on completion, that can be more efficient overall. But it doesn't fit every scenario.
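For example, a rough sketch of a callback-based variant of the queue in the question might look like this (untested; QueueItem and QueueWithCallback are illustrative names I've made up, not an existing API):

// Requires System.Collections.Concurrent.
public class QueueItem
{
    public Func<int> Work;        // operation to run on the queue's worker thread
    public Action<int> Callback;  // invoked by the worker thread with the result
}

private readonly BlockingCollection<QueueItem> workQueue =
    new BlockingCollection<QueueItem>(new ConcurrentQueue<QueueItem>());

// Producer side: enqueue and return immediately instead of calling task.Wait()
public void QueueWithCallback(Func<int> work, Action<int> callback)
{
    workQueue.Add(new QueueItem { Work = work, Callback = callback });
}

// Consumer loop (a single thread, so queued actions still run one at a time):
// foreach (var item in workQueue.GetConsumingEnumerable())
//     item.Callback(item.Work());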
Related
I am trying to do the following :
I have a server that is supposed to get many messages from a queue and process them. Now what I want is to create a new thread for every message and those threads will handle the response to the queue, I just want my server (core thread) to be just listening to messages and creating threads, not caring of what happens to them.
How can I achieve this? I know I can use the Thread class to create a thread, but then the application just keeps listening to the thread until it finishes.
Also, I can create an async method and run it, but what happens when it finishes? Also, the method is supposed to be static if I want it to be async, but in my current application that is not a solution since I use many non-static variables in this method.
Any ideas would be appreciated.
Unless you have very specific reason, I'd recommend using Tasks instead of Threads.
They will likely run in the background anyway, but they produce less CPU/memory overhead and (in my opinion) are easier to handle when exceptions occur.
Task t = Task.Run(() => ProcessMessage(message));
Maybe take a look at this introduction
What do you mean by
I know I can use the Thread class to create a thread but then the application just keeps listening to the thread until it finishes.
Just spawn the thread and let it run:
{
    Thread t = new Thread(Foo);
    t.Start();
}

public void Foo()
{ }
This won't make the main thread listen to the child thread; it just spawns them and continues working on the following instructions.
BTW, there are tons of results on how to create and run threads.
Since I don't like it when others do this, here are simple examples of each approach (asynchronous / task-based); pick whichever you like.
Asynchronous Implementation
static void Main()
{
    while (true)
    {
        string data = SomeMethodThatReturnsTheNextDataFromQueue();
        ProcessDataAsync(data);
    }
}
async private void ProcessDataAsync(string msg)
{
    // The *await* keyword returns to the caller and allows the main thread to continue looping.
    bool result = await ParseDataAndSaveSomewhere(msg);
    return;
}
Task-Based Implementation
static void Main()
{
    while (true)
    {
        string data = SomeMethodThatReturnsTheNextDataFromQueue();
        Task task = new Task(() => { ProcessData(data); });
        task.Start();
    }
}
private void ProcessData(string data)
{
// Do work
}
I have several processes that need to run in the background of a Windows Forms application because they take too much time, and I do not want to freeze the user interface until they completely finish. I would like to have an indicator to show the progress of each operation; so far I have a form to show the progress of each operation, but my operations run synchronously.
So my question is: what is the easiest way to run these operations (which access the database) asynchronously?
I forgot one important feature that the application requires: the user will have the option to cancel any operation at any time. I think this requirement complicates the application a lot, at least with my current skills, so basically I would like to emphasize that I need a solution that is easy to understand and easy to implement. I am aware there are good practices to follow, but at this point I would like some working code; later, with more time, I would refactor it.
.NET 4 added the Task Parallel Library, which provides a very clean mechanism for making synchronous operations asynchronous.
It allows you to wrap the sync operation into a Task, which you can then either wait on, or use with a continuation (some code that executes when the task completes).
This will often look something like:
Task processTask = Task.Factory.StartNew(() => YourProcess(foo, bar));
Once you have the task, you have quite a few options, including blocking:
// Do other work, then:
processTask.Wait(); // This blocks until the task is completed
Or, if you want a continuation (code to run when it's complete):
processTask.ContinueWith( t => ProcessCompletionMethod());
You can also use this to combine multiple asynchronous operations, and complete when any or all of them are finished, etc.
Note that using Task or Task<T> in this way has another huge advantage - if you later migrate to .NET 4.5, your API will work as-is, with no code changes, with the new async/await language features coming in C# 5.
I forgot one important feature that the application requires, the user will have the option to cancel any operation at any time.
The TPL was also designed, from its inception, to work nicely with the new cooperative cancellation model in .NET 4. This allows you to have a CancellationTokenSource which can be used to cancel any or all of your tasks.
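A minimal sketch of what that can look like (workItems and Process are placeholder names, not part of your code):

var cts = new CancellationTokenSource();

Task processTask = Task.Factory.StartNew(() =>
{
    foreach (var item in workItems)
    {
        // cooperative cancellation: check the token between units of work
        cts.Token.ThrowIfCancellationRequested();
        Process(item);
    }
}, cts.Token);

// Later, e.g. from a Cancel button handler:
cts.Cancel();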
Well in C# there are several ways to accomplish this
Personally I would recommend you to try the Reactive Extensions
http://msdn.microsoft.com/en-us/data/gg577609.aspx
You can actually do something like this:
https://stackoverflow.com/a/10804404/1268570
I created this for you. It is really easy, although it is not thread-safe, but it should be a good starting point.
In a form
var a = Observable.Start(() => Thread.Sleep(8000)).StartAsync(CancellationToken.None);
var b = Observable.Start(() => Thread.Sleep(15000)).StartAsync(CancellationToken.None);
var c = Observable.Start(() => Thread.Sleep(3000)).StartAsync(CancellationToken.None);
Manager.Add("a", a.ObserveOn(this).Subscribe(x => MessageBox.Show("a done")));
Manager.Add("b", b.ObserveOn(this).Subscribe(x => MessageBox.Show("b done")));
Manager.Add("c", c.ObserveOn(this).Subscribe(x => MessageBox.Show("c done")));
private void button1_Click(object sender, EventArgs e)
{
Manager.Cancel("b");
}
Manager utility
public static class Manager
{
    private static IDictionary<string, IDisposable> runningOperations;

    static Manager()
    {
        runningOperations = new Dictionary<string, IDisposable>();
    }

    public static void Add(string key, IDisposable runningOperation)
    {
        if (runningOperations.ContainsKey(key))
        {
            throw new ArgumentOutOfRangeException("key");
        }
        runningOperations.Add(key, runningOperation);
    }

    public static void Cancel(string key)
    {
        IDisposable value = null;
        if (runningOperations.TryGetValue(key, out value))
        {
            value.Dispose();
            runningOperations.Remove(key);
        }
    }
}
If the ORM/database API doesn't come with async methods itself, have a look at the BackgroundWorker Class. It supports both cancellation (CancelAsync/CancellationPending) and progress reporting (ReportProgress/ProgressChanged).
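Roughly like this, for example (DoOneDatabaseStep and progressBar1 are placeholders for your own work and UI):

var worker = new BackgroundWorker
{
    WorkerReportsProgress = true,
    WorkerSupportsCancellation = true
};

worker.DoWork += (s, e) =>
{
    for (int i = 0; i < 100; i++)
    {
        if (worker.CancellationPending) { e.Cancel = true; return; }
        DoOneDatabaseStep(i);            // a slice of the long-running operation
        worker.ReportProgress(i + 1);
    }
};

worker.ProgressChanged += (s, e) => progressBar1.Value = e.ProgressPercentage; // runs on the UI thread
worker.RunWorkerCompleted += (s, e) => { /* inspect e.Cancelled / e.Error, update the UI */ };

worker.RunWorkerAsync();
// From a Cancel button: worker.CancelAsync();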
public void EnqueueTask(int[] task)
{
    lock (_locker)
    {
        _taskQ.Enqueue(task);
        Monitor.PulseAll(_locker);
    }
}
So, here I'm adding elements to my queue and then threads do some work with them. How can I add items to my queue asynchronously?
If you're using .NET 4, have a look at the new thread-safe collections; they are mostly non-blocking, so they will probably avoid the need for an async add.
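For example, something along these lines with ConcurrentQueue<T> (a sketch only; no lock or Pulse is needed for the add itself, though waking a waiting consumer is a separate concern, which is where BlockingCollection<T> comes in):

private readonly ConcurrentQueue<int[]> _taskQ = new ConcurrentQueue<int[]>();

public void EnqueueTask(int[] task)
{
    _taskQ.Enqueue(task);   // thread-safe and non-blocking
}

// Consumer side:
int[] work;
if (_taskQ.TryDequeue(out work))
{
    // process work
}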
Since you're using Queue<T> (recommended), Queue.Synchronized can't be used.
But besides that, I would use the thread pool. Your EnqueueTask method kind of implies that the threading logic is handled outside of your "TaskQueue" class (the method implies that it is a queue of tasks).
Your implementation also implies that it is not "here" that we want to add the logic but rather somewhere else; the code you have there isn't really blocking for long, so I would turn things upside down.
It also implies that the thing taking items off the queue is already on another thread, since you use "PulseAll" to wake that thread up.
E.g.
public void StartQueueHandler()
{
    new Thread(StartWorker).Start();
}

private int[] Dequeue()
{
    lock (_locker)
    {
        while (_taskQ.Count == 0) Monitor.Wait(_locker);
        return _taskQ.Dequeue();
    }
}

private void StartWorker(object obj)
{
    while (_keepProcessing)
    {
        // Handle thread abort or have another "shut down" mechanism.
        int[] work = Dequeue();
        // If work should be done in parallel without results:
        ThreadPool.QueueUserWorkItem(o => DoWork(work));
        // If work should be done sequentially according to the queue:
        DoWork(work);
    }
}
Maybe something like this could work:
void AddToQueue(Queue queue, string mess) {
var t = new Thread(() => Queue.Synchronized(queue).Enqueue(mess));
t.Start();
}
The new thread ensures that your current thread does not block.
Queue.Synchronized handles all locking of the queue.
It could be replaced with your locker code, which might give better performance.
The code from your question seems to indicate that you are attempting to implement a blocking queue. I make that observation from the call to Monitor.PulseAll after the Queue<T>.Enqueue; this is the normal pattern for signalling the dequeuing thread. If that is the case, then the best option is to use the BlockingCollection class, which is available in .NET 4.0.
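Roughly, the pattern from your code translates to something like this (an untested sketch; the names mirror the question's code):

private BlockingCollection<int[]> _taskQ = new BlockingCollection<int[]>();

public void EnqueueTask(int[] task)
{
    _taskQ.Add(task);   // no explicit lock or Monitor.PulseAll needed
}

private void StartWorker()
{
    // GetConsumingEnumerable blocks until an item is available (or the collection is completed)
    foreach (int[] work in _taskQ.GetConsumingEnumerable())
    {
        DoWork(work);
    }
}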
Microsoft just announced the new C# Async feature. Every example I've seen so far is about asynchronously downloading something from HTTP. Surely there are other important async things?
Suppose I'm not writing a new RSS client or Twitter app. What's interesting about C# Async for me?
Edit I had an Aha! moment while watching Anders' PDC session. In the past I have worked on programs that used "watcher" threads. These threads sit waiting for something to happen, like watching for a file to change. They aren't doing work, they're just idle, and notify the main thread when something happens. These threads could be replaced with await/async code in the new model.
Ooh, this sounds interesting. I'm not playing with the CTP just yet, just reviewing the whitepaper. After seeing Anders Hejlsberg's talk about it, I think I can see how it could prove useful.
As I understand it, async makes asynchronous calls easier to read and implement, very much in the same way that writing iterators is easier right now (as opposed to writing out the functionality by hand). This is essential for blocking processes, since no useful work can be done until they are unblocked: if you were downloading a file, you cannot do anything useful until you have that file, so the thread goes to waste. Consider how you would call a function which you know will block for an undetermined length of time and return some result, and then process it (e.g., store the results in a file). How would you write that? Here's a simple example:
static object DoSomeBlockingOperation(object args)
{
    // block for 5 minutes
    Thread.Sleep(5 * 60 * 1000);
    return args;
}

static void ProcessTheResult(object result)
{
    Console.WriteLine(result);
}

static void CalculateAndProcess(object args)
{
    // let's calculate! (synchronously)
    object result = DoSomeBlockingOperation(args);
    // let's process!
    ProcessTheResult(result);
}
Ok good, we have it implemented. But wait, the calculation takes minutes to complete. What if we wanted an interactive application and needed to do other things while the calculation took place (such as rendering the UI)? This is no good: since we called the function synchronously, we have to wait for it to finish, effectively freezing the application while the thread waits to be unblocked.
The answer is to call the expensive function asynchronously. That way we're not bound to waiting for the blocking operation to complete. But how do we do that? We'd call the function asynchronously and register a callback function to be called when it is unblocked, so we can process the result.
static void CalculateAndProcessAsyncOld(object args)
{
    // obtain a delegate to call asynchronously
    Func<object, object> calculate = DoSomeBlockingOperation;
    // define the callback when the call completes so we can process afterwards
    AsyncCallback cb = ar =>
    {
        Func<object, object> calc = (Func<object, object>)ar.AsyncState;
        object result = calc.EndInvoke(ar);
        // let's process!
        ProcessTheResult(result);
    };
    // let's calculate! (asynchronously)
    calculate.BeginInvoke(args, cb, calculate);
}
Note: sure, we could start another thread to do this, but that would mean we're spawning a thread that just sits there waiting to be unblocked before doing some useful work. That would be a waste.
Now the call is asynchronous and we don't have to worry about waiting for the calculation to finish before processing; it's done asynchronously and will finish when it can. As an alternative to calling the code asynchronously directly, you could use a Task:
static void CalculateAndProcessAsyncTask(object args)
{
    // create a task
    Task<object> task = new Task<object>(DoSomeBlockingOperation, args);
    // define the callback when the call completes so we can process afterwards
    task.ContinueWith(t =>
    {
        // let's process!
        ProcessTheResult(t.Result);
    });
    // let's calculate! (asynchronously)
    task.Start();
}
Now we called our function asynchronously. But what did it take to get it that way? First of all, we needed the delegate/task to be able to call it asynchronously, we needed a callback function to be able to process the results, and then we had to make the call. We've turned a two-line function call into much more just to call something asynchronously. Not only that, the logic in the code has gotten more complex than it was or needs to be. Although using a task helped simplify the process, we still needed to do extra work to make it happen. We just want to run asynchronously and then process the result. Why can't we just do that? Well, now we can:
// need to have an asynchronous version
static async Task<object> DoSomeBlockingOperationAsync(object args)
{
    // it is my understanding that async will take this method and convert it to a task automatically
    return DoSomeBlockingOperation(args);
}

static async void CalculateAndProcessAsyncNew(object args)
{
    // let's calculate! (asynchronously)
    object result = await DoSomeBlockingOperationAsync(args);
    // let's process!
    ProcessTheResult(result);
}
Now, this was a very simplified example with simple operations (calculate, process). Imagine if each operation couldn't conveniently be put into a separate function, but instead consisted of hundreds of lines of code. That's a lot of added complexity just to gain the benefit of asynchronous calling.
Another practical example used in the whitepaper is using it on UI apps. Modified to use the above example:
private async void doCalculation_Click(object sender, RoutedEventArgs e) {
doCalculation.IsEnabled = false;
await DoSomeBlockingOperationAsync(GetArgs());
doCalculation.IsEnabled = true;
}
If you've done any UI programming (be it WinForms or WPF) and attempted to call an expensive function within a handler, you'll know this is handy. Using a background worker for this wouldn't be that helpful, since the background thread would just be sitting there waiting until it can work.
Suppose you had a way to control some external device, let's say a printer. And you wanted to restart the device after a failure. Naturally it will take some time for the printer to start up and be ready for operation. You might have to account for the restart not helping and attempt to restart again. You have no choice but to wait for it, unless you do it asynchronously.
static async void RestartPrinter()
{
Printer printer = GetPrinter();
do
{
printer.Restart();
printer = await printer.WaitUntilReadyAsync();
} while (printer.HasFailed);
}
Imagine writing the loop without async.
One last example I have. Imagine if you had to do multiple blocking operations in a function and wanted to call asynchronously. What would you prefer?
static void DoOperationsAsyncOld()
{
    Task op1 = new Task(DoOperation1Async);
    op1.ContinueWith(t1 =>
    {
        Task op2 = new Task(DoOperation2Async);
        op2.ContinueWith(t2 =>
        {
            Task op3 = new Task(DoOperation3Async);
            op3.ContinueWith(t3 =>
            {
                DoQuickOperation();
            });
            op3.Start();
        });
        op2.Start();
    });
    op1.Start();
}
static async void DoOperationsAsyncNew()
{
await DoOperation1Async();
await DoOperation2Async();
await DoOperation3Async();
DoQuickOperation();
}
Read the whitepaper, it actually has a lot of practical examples like writing parallel tasks and others.
I can't wait to start playing with this either in the CTP or when .NET 5.0 finally makes it out.
The main scenarios are any scenario that involves high latency. That is, lots of time between "ask for a result" and "obtain a result". Network requests are the most obvious example of high latency scenarios, followed closely by I/O in general, and then by lengthy computations that are CPU bound on another core.
However, there are potentially other scenarios that this technology will mesh nicely with. For example, consider scripting the logic of an FPS game. Suppose you have a button click event handler. When the player clicks the button, you want to play a siren for two seconds to alert the enemies, and then open the door for ten seconds. Wouldn't it be nice to say something like:
button.Disable();
await siren.Activate();
await Delay(2000);
await siren.Deactivate();
await door.Open();
await Delay(10000);
await door.Close();
await Delay(1000);
button.Enable();
Each task gets queued up on the UI thread, so nothing blocks, and each one resumes the click handler at the right point after its job is finished.
I've found another nice use-case for this today: you can await user interaction.
For example, if one form has a button that opens another form:
Form toolWindow;
async void button_Click(object sender, EventArgs e) {
if (toolWindow != null) {
toolWindow.Focus();
} else {
toolWindow = new Form();
toolWindow.Show();
await toolWindow.OnClosed();
toolWindow = null;
}
}
Granted, this isn't really any simpler than
toolWindow.Closed += delegate { toolWindow = null; }
But I think it nicely demonstrates what await can do. And once the code in the event handler is non-trivial, await makes programming much easier. Think about the user having to click a sequence of buttons:
async void ButtonSeries()
{
for (int i = 0; i < 10; i++) {
Button b = new Button();
b.Text = i.ToString();
this.Controls.Add(b);
await b.OnClick();
this.Controls.Remove(b);
}
}
Sure, you could do this with normal event handlers, but it would require you to take apart the loop and convert it into something much harder to understand.
Remember that await can be used with anything that gets completed at some point in the future. Here's the extension method Button.OnClick() to make the above work:
public static AwaitableEvent OnClick(this Button button)
{
    return new AwaitableEvent(h => button.Click += h, h => button.Click -= h);
}

sealed class AwaitableEvent
{
    Action<EventHandler> register, deregister;

    public AwaitableEvent(Action<EventHandler> register, Action<EventHandler> deregister)
    {
        this.register = register;
        this.deregister = deregister;
    }

    public EventAwaiter GetAwaiter()
    {
        return new EventAwaiter(this);
    }
}

sealed class EventAwaiter
{
    AwaitableEvent e;
    Action callback;

    public EventAwaiter(AwaitableEvent e) { this.e = e; }

    public bool BeginAwait(Action callback)
    {
        this.callback = callback;
        e.register(Handler);
        return true;
    }

    public void Handler(object sender, EventArgs e)
    {
        callback();
    }

    public void EndAwait()
    {
        e.deregister(Handler);
    }
}
Unfortunately it doesn't seem possible to add the GetAwaiter() method directly to EventHandler (allowing await button.Click;) because then the method wouldn't know how to register/deregister that event.
It's a bit of boilerplate, but the AwaitableEvent class can be re-used for all events (not just UI). And with a minor modification and adding some generics, you could allow retrieving the EventArgs:
MouseEventArgs e = await button.OnMouseDown();
I could see this being useful with some more complex UI gestures (drag'n'drop, mouse gestures, ...) - though you'd have to add support for cancelling the current gesture.
There are some samples and demos in the CTP that don't use the Net, and even some that don't do any I/O.
And it does apply to all multithreaded / parallel problem areas (that already exist).
Async and Await are a new (easier) way of structuring all parallel code, be it CPU-bound or I/O bound. The biggest improvement is in areas where before C#5 you had to use the APM (IAsyncResult) model, or the event model (BackgroundWorker, WebClient). I think that is why those examples lead the parade now.
A GUI clock is a good example; say you want to draw a clock, that updates the time shown every second. Conceptually, you want to write
while true do
    sleep for 1 second
    display the new time on the clock
and with await (or with F# async) to asynchronously sleep, you can write this code to run on the UI thread in a non-blocking fashion.
http://lorgonblog.wordpress.com/2010/03/27/f-async-on-the-client-side/
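In C# terms, the same loop might look roughly like this (clockLabel is an assumed Label on the form; in the CTP the delay method is TaskEx.Delay rather than Task.Delay):

async void RunClock()
{
    while (true)
    {
        await Task.Delay(1000);   // asynchronous sleep; the UI thread stays responsive
        clockLabel.Text = DateTime.Now.ToLongTimeString();
    }
}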
The async extensions are useful in some cases when you have an asynchronous operation. An asynchronous operation has a definite start and completion. When asynchronous operations complete, they may have a result or an error. (Cancellation is treated as a special kind of error).
Asynchronous operations are useful in three situations (broadly speaking):
Keeping your UI responsive. Any time you have a long-running operation (whether CPU-bound or I/O-bound), make it asynchronous.
Scaling your servers. Using asynchronous operations judiciously on the server side may help your servers to scale; e.g., asynchronous ASP.NET pages may make use of async operations. However, this is not always a win; you need to evaluate your scalability bottlenecks first.
Providing a clean asynchronous API in a library or shared code. async is excellent for reusability.
As you begin to adopt the async way of doing things, you'll find the third situation becoming more common. async code works best with other async code, so asynchronous code kind of "grows" through the codebase.
There are a couple of types of concurrency where async is not the best tool:
Parallelization. A parallel algorithm may use many cores (CPUs, GPUs, computers) to solve a problem more quickly.
Asynchronous events. Asynchronous events happen all the time, independent of your program. They often do not have a "completion." Normally, your program will subscribe to an asynchronous event stream, receive some number of updates, and then unsubscribe. Your program can treat the subscribe and unsubscribe as a "start" and "completion", but the actual event stream never really stops.
Parallel operations are best expressed using PLINQ or Parallel, since they have a lot of built-in support for partitioning, limited concurrency, etc. A parallel operation may easily be wrapped in an awaitable by running it from a ThreadPool thread (Task.Factory.StartNew).
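For instance, something like this (a sketch; items and ProcessItem are placeholder names) inside an async method:

Task parallelWork = Task.Factory.StartNew(() =>
    Parallel.ForEach(items, item => ProcessItem(item)));

// ... do other things ...
await parallelWork;   // the awaiting (e.g. UI) thread is not blocked while the loop runs on the thread pool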
Asynchronous events do not map well to asynchronous operations. One problem is that an asynchronous operation has a single result at its point of completion. Asynchronous events may have any number of updates. Rx is the natural language for dealing with asynchronous events.
There are some mappings from an Rx event stream to an asynchronous operation, but none of them are ideal for all situations. It's more natural to consume asynchronous operations by Rx, rather than the other way around. IMO, the best way of approaching this is to use asynchronous operations in your libraries and lower-level code as much as possible, and if you need Rx at some point, then use Rx from there on up.
Here is probably a good example of how not to use the new async feature (and it's not writing a new RSS client or Twitter app): mid-method overload points in a virtual method call. To be honest, I am not sure there is any way to create more than a single overload point per method.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Threading;

namespace AsyncText
{
    class Program
    {
        static void Main(string[] args)
        {
            Derived d = new Derived();
            TaskEx.Run(() => d.DoStuff()).Wait();
            System.Console.Read();
        }

        public class Base
        {
            protected string SomeData { get; set; }

            protected async Task DeferProcessing()
            {
                await TaskEx.Run(() => Thread.Sleep(1));
                return;
            }

            public async virtual Task DoStuff()
            {
                Console.WriteLine("Begin Base");
                Console.WriteLine(SomeData);
                await DeferProcessing();
                Console.WriteLine("End Base");
                Console.WriteLine(SomeData);
            }
        }

        public class Derived : Base
        {
            public async override Task DoStuff()
            {
                Console.WriteLine("Begin Derived");
                SomeData = "Hello";
                var x = base.DoStuff();
                SomeData = "World";
                Console.WriteLine("Mid 1 Derived");
                await x;
                Console.WriteLine("EndDerived");
            }
        }
    }
}
Output Is:
Begin Derived
Begin Base
Hello
Mid 1 Derived
End Base
World
EndDerived
With certain inheritance hierarchies (namely when using the command pattern) I find myself wanting to do stuff like this occasionally.
Here is an article showing how to use the 'async' syntax in a non-networked scenario that involves UI and multiple actions.
There is a sequence of FORMs (some UI) that should get downloaded using a service.
Currently, this download happens on a BackgroundWorker thread.
Now, since the performance is slow, we decided to categorize the FORMs into 2 groups and start downloading them in parallel using another BackgroundWorker on top of the existing thread.
Now, the scenario is that either of these BackgroundWorkers should wait for the other to complete.
So, how do I implement that?
I tried with AutoResetEvent, but I could not achieve this.
Any help is appreciated.
I don't think that the scenario is really that one BackgroundWorker should wait for another. What you really want is to fire some UI event after (and only after) both of them complete. It's a subtle but important difference; the second version is a lot easier to code.
public class Form1 : Form
{
    private object download1Result;
    private object download2Result;

    private void BeginDownload()
    {
        // Next two lines are only necessary if this is called multiple times
        download1Result = null;
        download2Result = null;

        bwDownload1.RunWorkerAsync();
        bwDownload2.RunWorkerAsync();
    }

    private void bwDownload1_RunWorkerCompleted(object sender,
        RunWorkerCompletedEventArgs e)
    {
        download1Result = e.Result;
        if (download2Result != null)
            DisplayResults();
    }

    private void bwDownload2_RunWorkerCompleted(object sender,
        RunWorkerCompletedEventArgs e)
    {
        download2Result = e.Result;
        if (download1Result != null)
            DisplayResults();
    }

    private void DisplayResults()
    {
        // Do something with download1Result and download2Result
    }
}
Note that those object references should be strongly typed; I just used object because I don't know what you're downloading.
This is really all you need; the RunWorkerCompleted event runs in the foreground thread so you actually don't need to worry about synchronization or race conditions in there. No need for lock statements, AutoResetEvent, etc. Just use two member variables to hold the results, or two boolean flags if the result of either can actually be null.
You should be able to use two AutoResetEvent objects and the WaitHandle.WaitAll function to wait for both to complete. Call Set on the AutoResetEvent objects in the respective RunWorkerCompleted event handlers.
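A rough sketch of that idea, using ManualResetEvent instead of AutoResetEvent since each handle is only ever set once, and waiting on a thread-pool thread rather than the UI thread (see the warning further down):

ManualResetEvent done1 = new ManualResetEvent(false);
ManualResetEvent done2 = new ManualResetEvent(false);

bwDownload1.RunWorkerCompleted += (s, e) => done1.Set();
bwDownload2.RunWorkerCompleted += (s, e) => done2.Set();

ThreadPool.QueueUserWorkItem(_ =>
{
    WaitHandle.WaitAll(new WaitHandle[] { done1, done2 });  // never do this on the UI thread
    // both downloads are finished; marshal back with Control.BeginInvoke to update the UI
});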
Jeffrey Richter is THE guru when it comes to multithreading, and he's written an amazing library called the Power Threading Library, which makes tasks like downloading n files asynchronously and continuing after they have all completed (or one, or some of them) really simple.
Take a little time out to watch the video, learn about it and you won't regret it. Using the power threading library (which is free and has a Silverlight and Compact Framework version also) also makes your code easier to read, which is a big advantage when doing any async stuff.
Good luck,
Mark
int completedCount = 0;

void threadProc1() { // your thread1 proc
    // do something
    ....
    Interlocked.Increment(ref completedCount);   // ++ alone is not thread-safe across threads
    while (completedCount < 2) Thread.Sleep(10); // busy-wait until both threads have finished
    // now both threads are done
}

void threadProc2() { // your thread2 proc
    // do something
    ....
    Interlocked.Increment(ref completedCount);
    while (completedCount < 2) Thread.Sleep(10);
    // now both threads are done
}
Just use 2 BackgroundWorker objects, and have each one alert the UI when it completes. That way you can display a spinner, progress bar, whatever on the UI and update it as download results come back from the threads. You will also avoid any risks of thread deadlocking, etc.
By the way, just so we are all clear, you should NEVER call a blocking function such as WaitAll from the UI thread. It will cause the UI to completely lock up, which will make your users wonder WTF is going on :)