Fire and forget with TPL - C#

I have the following code:
public bool Func()
{
...
var task = httpClient.PostAsync(...);
var onlyOnRanToCompletionTask = task
.ContinueWith(
t => OnPostAsyncSuccess(t, notification, provider, account.Name),
TaskContinuationOptions.OnlyOnRanToCompletion);
var onlyOnFaultedTask = task
.ContinueWith(
t => OnPostAsyncAggregateException(t, notification.EntityId),
TaskContinuationOptions.OnlyOnFaulted);
return true;
}
Func is not async and I would like to have something like fire and forget, but with a continuation. I don't want to make that function async. The scenario is that this function is called in a loop over a group of objects to handle them. My concern is that, for example, onlyOnRanToCompletionTask might be garbage collected once Func finishes executing.
Thank you in advance!

Your code should perform as expected. Simple example:
void Main()
{
HandlingMyFuncAsync();
Console.WriteLine("Doing some work, while 'fire and forget job is performed");
Console.ReadLine();
}
public void HandlingMyFuncAsync()
{
var task = MyFuncAsync();
task.ContinueWith(t => Console.WriteLine(t.Result), TaskContinuationOptions.OnlyOnRanToCompletion);
}
public async Task<string> MyFuncAsync()
{
await Task.Delay(5000);
return "A";
}
produces
Doing some work, while 'fire and forget' job is performed
[then, after 5 sec]
A
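The same pattern covers the faulted branch from your question as well; a minimal sketch (reusing MyFuncAsync from above, with Console output standing in for your error handler):
public void HandlingMyFuncAsyncWithErrorHandling()
{
    var task = MyFuncAsync();
    // runs only if the task completes successfully
    task.ContinueWith(t => Console.WriteLine(t.Result), TaskContinuationOptions.OnlyOnRanToCompletion);
    // runs only if the task faults, so the exception is observed instead of being lost
    task.ContinueWith(t => Console.WriteLine(t.Exception.GetBaseException().Message), TaskContinuationOptions.OnlyOnFaulted);
}
Both continuations survive the method returning: a pending task holds references to its registered continuations, so they are not collected just because the local variables go out of scope.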

If I were you I'd look at using Microsoft's Reactive Framework (Rx) for this. Rx is designed for this kind of thing.
Here's some basic code:
void Main()
{
int[] source = new [] { 1, 2, 3 };
IObservable<int> query =
from s in source.ToObservable()
from u in Observable.Start(() => Func(s))
select s;
IDisposable subscription =
query
.Subscribe(
x => Console.WriteLine($"!{x}!"),
() => Console.WriteLine("Done."));
}
public void Func(int x)
{
Console.WriteLine($"+{x} ");
Thread.Sleep(TimeSpan.FromSeconds(1.0));
Console.WriteLine($" {x}-");
}
When I run that I get this kind of output:
+1
+2
+3
1-
!1!
2-
!2!
3-
!3!
Done.
It triggers off all of the calls and waits for the results to come in. It also lets you know when it is done.
Here's a more practical example showing how to use async and Rx to return a result:
void Main()
{
int[] source = new [] { 1, 2, 3 };
var query =
from s in source.ToObservable()
from u in Observable.FromAsync(() => Func(s))
select new { s, u };
IDisposable subscription =
query
.Subscribe(
x => Console.WriteLine($"!{x.s},{x.u}!"),
() => Console.WriteLine("Done."));
}
public async Task<int> Func(int x)
{
Console.WriteLine($"+{x} ");
await Task.Delay(TimeSpan.FromSeconds(1.0));
Console.WriteLine($" {x}-");
return 10 * x;
}
That gives:
+1
+2
+3
1-
!1,10!
3-
2-
!2,20!
!3,30!
Done.
Without seeing your full code I can't easily give you a complete example that you can use, but Rx will also handle opening and closing your DB contexts using the Observable.Using operator.
You can get Rx by Nugetting "System.Reactive".
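For the DB context part, a rough sketch of what Observable.Using could look like (MyDbContext, ids and LoadAsync here are placeholders, not from your code):
IObservable<Result> results =
    Observable.Using(
        () => new MyDbContext(),            // created when the subscription starts
        ctx => ids.ToObservable()
                  .SelectMany(id => Observable.FromAsync(() => LoadAsync(ctx, id))));
// the context is disposed when the inner sequence completes or the subscription is disposed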

Related

C# LanguageExt - combine multiple async calls into one grouped call

I have a method that looks up an item asynchronously from a datastore:
class MyThing {}
Task<Try<MyThing>> GetThing(string thingId) {...}
I want to look up multiple items from the datastore, and wrote a new method to do this. I also wrote a helper method that will take multiple Try<T> and combine their results into a single Try<IEnumerable<T>>.
public static class TryExtensions
{
public static Try<IEnumerable<T>> Collapse<T>(this IEnumerable<Try<T>> items)
{
var failures = items.Fails().ToArray();
return failures.Any() ?
Try<IEnumerable<T>>(new AggregateException(failures)) :
Try(items.Select(i => i.Succ(a => a).Fail(Enumerable.Empty<T>())));
}
}
async Task<Try<MyThing[]>> GetThings(IEnumerable<string> ids)
{
var results = new List<Try<MyThing>>();
foreach (var id in ids)
{
var thing = await GetThing(id);
results.Add(thing);
}
return results.Collapse().Map(p => p.ToArray());
}
Another way to do it would be like this;
async Task<Try<MyThing[]>> GetThings(IEnumerable<string> ids)
{
var tasks = ids.Select(async id => await GetThing(id)).ToArray();
await Task.WhenAll(tasks);
return tasks.Select(t => t.Result).Collapse().Map(p => p.ToArray());
}
The problem with this is that all the tasks will run in parallel and I don't want to hammer my datastore with lots of parallel requests. What I really want is to make my code functional, using monadic principles and features of LanguageExt. Does anyone know how to achieve this?
Update
Thanks for the suggestion @MatthewWatson, this is what it looks like with the SemaphoreSlim:
async Task<Try<MyThing[]>> GetThings(IEnumerable<string> ids)
{
    var mutex = new SemaphoreSlim(1);
    var tasks = ids.Select(async id =>
    {
        await mutex.WaitAsync();
        try { return await GetThing(id); }
        finally { mutex.Release(); }
    }).ToArray();
    await Task.WhenAll(tasks);
    return tasks.Select(t => t.Result).Collapse().Map(p => p.ToArray());
}
The problem is, this is still not very monadic/functional, and it ends up with more lines of code than my original version with a foreach block.
In the "Another way" you almost achieved your goal when you called:
var tasks = ids.Select(async id => await GetThing(id)).ToArray();
Except that Tasks doesn't run sequentially so you will end up with many queries hitting your datastore, which is caused by .ToArray() and Task.WhenAll. Once you called .ToArray() it allocated and started the Tasks already, so if you can "tolerate" one foreach to achieve the sequential tasks running, like this:
public static class TaskExtensions
{
public static async Task RunSequentially<T>(this IEnumerable<Task<T>> tasks)
{
foreach (var task in tasks) await task;
}
}
That said, running a "loop" of queries is not good practice in general. Unless you are in some background service or another special scenario, pushing this down to the database engine through WHERE thingId IN (...) is usually the better option. Even with a large number of thingIds, you can slice them into batches of 10s or 100s to narrow the WHERE IN footprint.
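For example, a sketch of that batching idea (GetThingsByIdsAsync is a hypothetical datastore call that takes a whole batch of ids and issues one WHERE thingId IN (...) query; Chunk requires .NET 6+):
async Task<Try<MyThing[]>> GetThingsBatched(IEnumerable<string> ids, int batchSize = 100)
{
    var results = new List<Try<MyThing[]>>();
    foreach (var batch in ids.Chunk(batchSize))
    {
        // one IN (...) query per batch instead of one query per id
        results.Add(await GetThingsByIdsAsync(batch));
    }
    return results.Collapse().Map(r => r.SelectMany(x => x).ToArray());
}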
Back to our RunSequentially: I would have liked to make it more functional, for example like this:
tasks.ToList().ForEach(async task => await task);
but sadly that would still run the tasks more or less in parallel.
So the final usage should be:
async Task<Try<MyThing[]>> GetThings(IEnumerable<string> ids)
{
var tasks = ids.Select(id => GetThing(id));// remember don't use .ToArray or ToList...
await tasks.RunSequentially();
return tasks.Select(t => t.Result).Collapse().Map(p => p.ToArray());
}
Another, admittedly overkill, functional solution is to get lazy with a Queue, recursively!
Instead of GetThing, add a lazy version, GetLazyThing, that returns Lazy<Task<Try<MyThing>>> simply by wrapping GetThing:
new Lazy<Task<Try<MyThing>>>(() => GetThing(id))
Now using couple extensions/functions:
public static async Task RecRunSequentially<T>(this IEnumerable<Lazy<Task<T>>> tasks)
{
var queue = tasks.EnqueueAll();
await RunQueue(queue);
}
public static Queue<T> EnqueueAll<T>(this IEnumerable<T> list)
{
var queue = new Queue<T>();
list.ToList().ForEach(m => queue.Enqueue(m));
return queue;
}
public static async Task RunQueue<T>(Queue<Lazy<Task<T>>> queue)
{
if (queue.Count > 0)
{
var task = queue.Dequeue();
await task.Value; // this unwraps the Lazy object content
await RunQueue(queue);
}
}
Finally:
var lazyTasks = ids.Select(id => GetLazyThing(id));
await lazyTasks.RecRunSequentially();
// Now collapse and map as you like
Update
However, if you don't like the fact that EnqueueAll and RunQueue are not "pure", we can take the following approach with the same Lazy trick:
public static async Task AwaitSequentially<T>(this Lazy<Task<T>>[] array, int index = 0)
{
if (array == null || index < 0 || index >= array.Length) return;
await array[index].Value;
await AwaitSequentially(array, index + 1); // ++index is not pure :)
}
Now:
var lazyTasks = ids.Select(id => GetLazyThing(id));
await lazyTasks.ToArray().AwaitSequentially();
// Now collapse and map as you like

Unwrapping IObservable<Task<T>> into IObservable<T> with order preservation

Is there a way to unwrap the IObservable<Task<T>> into IObservable<T> keeping the same order of events, like this?
Tasks: ----a-------b--c----------d------e---f---->
Values: -------A-----------B--C------D-----E---F-->
Let's say I have a desktop application that consumes a stream of messages, some of which require heavy post-processing:
IObservable<Message> streamOfMessages = ...;
IObservable<Task<Result>> streamOfTasks = streamOfMessages
.Select(async msg => await PostprocessAsync(msg));
IObservable<Result> streamOfResults = ???; // unwrap streamOfTasks
I imagine two ways of dealing with that.
First, I can subscribe to streamOfTasks using the asynchronous event handler:
streamOfTasks.Subscribe(async task =>
{
var result = await task;
Display(result);
});
Second, I can convert streamOfTasks using Observable.Create, like this:
var streamOfResults =
from task in streamOfTasks
from value in Observable.Create<Result>(async (obs, cancel) =>
{
var v = await task;
obs.OnNext(v);
// TODO: don't know when to call obs.OnComplete()
})
select value;
streamOfResults.Subscribe(result => Display(result));
Either way, the order of messages is not preserved: some later messages that
don't need any post-processing come out faster than earlier messages that
require post-processing. Both my solutions handle the incoming messages
in parallel, but I'd like them to be processed sequentially, one by one.
I can write a simple task queue to process just one task at a time,
but perhaps that's overkill. It seems like I'm missing something obvious.
Update: I wrote a sample console program to demonstrate my approaches. None of the solutions so far preserves the original order of events. Here is the output of the program:
Timer: 0
Timer: 1
Async handler: 1
Observable.Create: 1
Observable.FromAsync: 1
Timer: 2
Async handler: 2
Observable.Create: 2
Observable.FromAsync: 2
Observable.Create: 0
Async handler: 0
Observable.FromAsync: 0
Here is the complete source code:
// "C:\Program Files (x86)\MSBuild\14.0\Bin\csc.exe" test.cs /r:System.Reactive.Core.dll /r:System.Reactive.Linq.dll /r:System.Reactive.Interfaces.dll
using System;
using System.Reactive;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using System.Threading.Tasks;
class Program
{
static void Main()
{
Console.WriteLine("Press ENTER to exit.");
// the source stream
var timerEvents = Observable.Timer(TimeSpan.Zero, TimeSpan.FromSeconds(1));
timerEvents.Subscribe(x => Console.WriteLine($"Timer: {x}"));
// solution #1: using async event handler
timerEvents.Subscribe(async x =>
{
var result = await PostprocessAsync(x);
Console.WriteLine($"Async handler: {x}");
});
// solution #2: using Observable.Create
var processedEventsV2 =
from task in timerEvents.Select(async x => await PostprocessAsync(x))
from value in Observable.Create<long>(async (obs, cancel) =>
{
var v = await task;
obs.OnNext(v);
})
select value;
processedEventsV2.Subscribe(x => Console.WriteLine($"Observable.Create: {x}"));
// solution #3: using FromAsync, as answered by @Enigmativity
var processedEventsV3 =
from msg in timerEvents
from result in Observable.FromAsync(() => PostprocessAsync(msg))
select result;
processedEventsV3.Subscribe(x => Console.WriteLine($"Observable.FromAsync: {x}"));
Console.ReadLine();
}
static async Task<long> PostprocessAsync(long x)
{
// some messages require long post-processing
if (x % 3 == 0)
{
await Task.Delay(TimeSpan.FromSeconds(2.5));
}
// and some don't
return x;
}
}
Combining @Enigmativity's simple approach with @VMAtm's idea of attaching the counter and some code snippets from this SO question, I came up with this solution:
// usage
var processedStream = timerEvents.SelectAsync(async t => await PostprocessAsync(t));
processedStream.Subscribe(x => Console.WriteLine($"Processed: {x}"));
// my sample console program prints the events ordered properly:
Timer: 0
Timer: 1
Timer: 2
Processed: 0
Processed: 1
Processed: 2
Timer: 3
Timer: 4
Timer: 5
Processed: 3
Processed: 4
Processed: 5
....
Here is my SelectAsync extension method that transforms IObservable<TSource> into IObservable<TResult>, keeping the original order of events:
public static IObservable<TResult> SelectAsync<TSource, TResult>(
this IObservable<TSource> src,
Func<TSource, Task<TResult>> selectorAsync)
{
// using local variable for counter is easier than src.Scan(...)
var counter = 0;
var streamOfTasks =
from source in src
from result in Observable.FromAsync(async () => new
{
Index = Interlocked.Increment(ref counter) - 1,
Result = await selectorAsync(source)
})
select result;
// buffer the results coming out of order
return Observable.Create<TResult>(observer =>
{
var index = 0;
var buffer = new Dictionary<int, TResult>();
return streamOfTasks.Subscribe(item =>
{
buffer.Add(item.Index, item.Result);
TResult result;
while (buffer.TryGetValue(index, out result))
{
buffer.Remove(index);
observer.OnNext(result);
index++;
}
});
});
}
I'm not particularly satisfied with my solution as it looks too complex to me, but at least it doesn't require any external dependencies. I'm using a simple Dictionary here to buffer and reorder the task results, because the subscriber doesn't need to be thread-safe (the subscriptions are never called concurrently).
Any comments or suggestions are welcome. I'm still hoping to find a native Rx way of doing this without a custom buffering extension method.
The Rx library contains three operators that can unwrap an observable sequence of tasks: Concat, Merge and Switch. All three accept a single source argument of type IObservable<Task<T>> and return an IObservable<T>. Here are their descriptions from the documentation:
Concat
Concatenates all task results, as long as the previous task terminated successfully.
Merge
Merges results from all source tasks into a single observable sequence.
Switch
Transforms an observable sequence of tasks into an observable sequence producing values only from the most recent observable sequence. Each time a new task is received, the previous task's result is ignored.
In other words, Concat returns the results in their original order, Merge returns the results in order of completion, and Switch filters out any results from tasks that didn't complete before the next task was emitted. So your problem can be solved by just using the built-in Concat operator; no custom operator is needed.
var streamOfResults = streamOfTasks
.Select(async task =>
{
var result1 = await task;
var result2 = await PostprocessAsync(result1);
return result2;
})
.Concat();
The tasks are already started before they are emitted by streamOfTasks; in other words, they emerge in a "hot" state. So the fact that the Concat operator awaits them one after the other has no consequence for the concurrency of the operations; it only affects the order of their results. This would be a consideration if, instead of hot tasks, you had cold observables, like those created by the Observable.FromAsync and Observable.Create methods, in which case Concat would execute the operations sequentially.
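For comparison, a sketch of that cold variant, where Concat both preserves the order and runs the post-processing one message at a time:
IObservable<Result> streamOfResults = streamOfMessages
    .Select(msg => Observable.FromAsync(() => PostprocessAsync(msg))) // cold: nothing starts yet
    .Concat();                                                        // subscribes to one inner observable at a time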
Is the following simple approach an answer for you?
IObservable<Result> streamOfResults =
from msg in streamOfMessages
from result in Observable.FromAsync(() => PostprocessAsync(msg))
select result;
To maintain the order of events you can funnel your stream into a TransformBlock from TPL Dataflow. The TransformBlock would execute your post-processing logic and will maintain the order of its output by default.
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
using NUnit.Framework;
namespace HandlingStreamInOrder {
[TestFixture]
public class ItemHandlerTests {
[Test]
public async Task Items_Are_Output_In_The_Same_Order_As_They_Are_Input() {
var itemHandler = new ItemHandler();
var timerEvents = Observable.Timer(TimeSpan.Zero, TimeSpan.FromMilliseconds(250));
timerEvents.Subscribe(async x => {
var data = (int)x;
Console.WriteLine($"Value Produced: {x}");
var dataAccepted = await itemHandler.SendAsync((int)data);
if (dataAccepted) {
InputItems.Add(data);
}
});
await Task.Delay(5000);
itemHandler.Complete();
await itemHandler.Completion;
CollectionAssert.AreEqual(InputItems, itemHandler.OutputValues);
}
private IList<int> InputItems {
get;
} = new List<int>();
}
public class ItemHandler {
public ItemHandler() {
var options = new ExecutionDataflowBlockOptions() {
BoundedCapacity = DataflowBlockOptions.Unbounded,
MaxDegreeOfParallelism = Environment.ProcessorCount,
EnsureOrdered = true
};
PostProcessBlock = new TransformBlock<int, int>((Func<int, Task<int>>)PostProcess, options);
var output = PostProcessBlock.AsObservable().Subscribe(x => {
Console.WriteLine($"Value Output: {x}");
OutputValues.Add(x);
});
}
public async Task<bool> SendAsync(int data) {
return await PostProcessBlock.SendAsync(data);
}
public void Complete() {
PostProcessBlock.Complete();
}
public Task Completion {
get { return PostProcessBlock.Completion; }
}
public IList<int> OutputValues {
get;
} = new List<int>();
private IPropagatorBlock<int, int> PostProcessBlock {
get;
}
private async Task<int> PostProcess(int data) {
if (data % 3 == 0) {
await Task.Delay(TimeSpan.FromSeconds(2));
}
return data;
}
}
}
Rx and TPL Dataflow can easily be combined here, and TPL Dataflow preserves the order of events by default, so your code could be something like this:
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
static async Task<long> PostprocessAsync(long x) { ... }
IObservable<long> streamOfMessages = ...;
var streamOfTasks = new TransformBlock<long, long>(async msg =>
await PostprocessAsync(msg)
// set the concurrency level for messages to handle
, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });
// easily convert block into observable
IObservable<long> streamOfResults = streamOfTasks.AsObservable();
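The only wiring left is feeding the source stream into the block and consuming the ordered output; a minimal sketch:
// push each incoming value into the block (posting order is preserved)
streamOfMessages.Subscribe(msg => streamOfTasks.Post(msg));
// consume the post-processed results in the original order
streamOfResults.Subscribe(x => Console.WriteLine($"Processed: {x}"));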
Edit: Rx extensions were meant to be a reactive pipeline of events for UI. As such applications are generally single-threaded, messages are handled in order. But in general, events in C# aren't thread-safe, so you have to provide some additional logic to preserve the order.
If you don't like the idea of introducing another dependency, you need to track the operation number with the Interlocked class, something like this:
// counter for operations started
int operationNumber = 0;
// counter for operations completed
int doneNumber = 0;
...
var currentOperationNumber = Interlocked.Increment(ref operationNumber) - 1;
...
// wait until it is this operation's turn
while (Interlocked.CompareExchange(ref doneNumber, currentOperationNumber, currentOperationNumber) != currentOperationNumber)
{
    // spin once here
}
// handle event
Interlocked.Increment(ref doneNumber);
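Put together, that could look roughly like this (a sketch only, using PostprocessAsync and Display from the question):
int operationNumber = 0; // operations started
int doneNumber = 0;      // operations completed
streamOfMessages.Subscribe(async msg =>
{
    var currentOperationNumber = Interlocked.Increment(ref operationNumber) - 1;
    var result = await PostprocessAsync(msg);
    // wait until every earlier operation has published its result
    while (Interlocked.CompareExchange(ref doneNumber, currentOperationNumber, currentOperationNumber) != currentOperationNumber)
    {
        await Task.Yield(); // spin
    }
    Display(result);
    Interlocked.Increment(ref doneNumber);
});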

Is it OK to do some async/await inside some .NET Parallel.ForEach() code?

Given the following code, is it OK to do async/await inside a Parallel.ForEach ?
eg.
Parallel.ForEach(names, async name =>
{
// Do some stuff...
var foo = await GetStuffFrom3rdPartyAsync(name);
// Do some more stuff, with the foo.
});
or is there some gotcha's that I need to be made aware of?
EDIT: No idea if this compiles, btw. Just pseudo-code, thinking out loud.
No, it doesn't make sense to combine async with Parallel.ForEach.
Consider the following example:
private void DoSomething()
{
var names = Enumerable.Range(0,10).Select(x=> "Somename" + x);
Parallel.ForEach(names, async(name) =>
{
await Task.Delay(1000);
Console.WriteLine("Name {0} completed",name);
});
Console.WriteLine("Parallel ForEach completed");
}
What output would you expect?
Name Somename3 completed
Name Somename8 completed
Name Somename4 completed
...
Parallel ForEach completed
That's not what will happen. It will output:
Parallel ForEach completed
Name Somename3 completed
Name Somename8 completed
Name Somename4 completed
...
Why? Because when the lambda hits the first await, it actually returns; Parallel.ForEach doesn't know it is asynchronous and treats it as having run to completion. The code after the await runs as a continuation on another thread, not on the "parallel processing" thread.
Stephen Toub addressed this here.
From the name, I'm assuming that GetStuffFrom3rdPartyAsync is I/O-bound. The Parallel class is specifically for CPU-bound code.
In the asynchronous world, you can start multiple tasks and then (asynchronously) wait for them all to complete using Task.WhenAll. Since you're starting with a sequence, it's probably easiest to project each element to an asynchronous operation, and then await all of those operations:
await Task.WhenAll(names.Select(async name =>
{
// Do some stuff...
var foo = await GetStuffFrom3rdPartyAsync(name);
// Do some more stuff, with the foo.
}));
A close alternative might be this:
static void ForEach<T>(IEnumerable<T> data, Func<T, Task> func)
{
var tasks = data.Select(item =>
Task.Run(() => func(item)));
Task.WaitAll(tasks.ToArray());
}
// ...
ForEach(names, name => GetStuffFrom3rdPartyAsync(name));
Ideally, you shouldn't be using a blocking call like Task.WaitAll if you can make the whole chain of method calls async, "all the way down" the current call stack:
var tasks = data.Select(item =>
Task.Run(() => func(item)));
await Task.WhenAll(tasks.ToArray());
Furthermore, if you don't do any CPU-bound work inside GetStuffFrom3rdPartyAsync, Task.Run may be redundant:
var tasks = data.Select(item => func(item));
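Putting those pieces together, a non-blocking version of the helper might look like this (ForEachConcurrentlyAsync is just an illustrative name):
static async Task ForEachConcurrentlyAsync<T>(IEnumerable<T> data, Func<T, Task> func)
{
    // start all operations without Task.Run, then await them together
    var tasks = data.Select(item => func(item));
    await Task.WhenAll(tasks);
}
// ...
await ForEachConcurrentlyAsync(names, name => GetStuffFrom3rdPartyAsync(name));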
As pointed out by @Sriram Sakthivel, there are some problems with using Parallel.ForEach with asynchronous lambdas. Stephen Toub's ForEachAsync can do the equivalent. He talks about it here, but here is the code:
public static class Extensions
{
public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
{
return Task.WhenAll(
from partition in Partitioner.Create(source).GetPartitions(dop)
select Task.Run(async delegate {
using (partition) while (partition.MoveNext()) await body(partition.Current);
}));
}
}
It uses the Partitioner class to create a load-balancing partitioner (docs) and lets you specify the degree of parallelism with the dop parameter. To see the difference between it and Parallel.ForEach, try the following code.
class Program
{
public static async Task GetStuffParallelForEach()
{
var data = Enumerable.Range(1, 10);
Parallel.ForEach(data, async i =>
{
await Task.Delay(1000 * i);
Console.WriteLine(i);
});
}
public static async Task GetStuffForEachAsync()
{
var data = Enumerable.Range(1, 10);
await data.ForEachAsync(5, async i =>
{
await Task.Delay(1000 * i);
Console.WriteLine(i);
});
}
static void Main(string[] args)
{
//GetStuffParallelForEach().Wait(); // Finished printed before work is complete
GetStuffForEachAsync().Wait(); // Finished printed after all work is done
Console.WriteLine("Finished");
Console.ReadLine();
}
}
If you run GetStuffForEachAsync, the program waits for all work to finish. If you run GetStuffParallelForEach, the line "Finished" is printed before the work is done.

Task.Factory.StartNew(someMethod(withParam)).continueWith(sameMethod(differentParam)).Wait()

What is the correct syntax for parallelizing the following code?
static void Main(string[] args)
{
Task.Factory.StartNew(
() =>
doOne(SelectedTask.option1)
.ContinueWith(
task =>
doOne(SelectedTask.option1)).Wait()
);
}
The same method uses the enum SelectedTask to decide which code to execute:
enum SelectedTask
{
option1,
option2
}
static void doOne(SelectedTask lunch)
{
switch (lunch)
{
case SelectedTask.option1:
Console.WriteLine("option1");
break;
case SelectedTask.option2:
Console.WriteLine("option2");
break;
default:
break;
}
}
Do you want your doOne calls to occur concurrently? Then you can just start them straight from the task factory:
// Start two concurrent tasks
var task1 = Task.Factory.StartNew(() => doOne(SelectedTask.option1));
var task2 = Task.Factory.StartNew(() => doOne(SelectedTask.option2));
// Block the current thread until all tasks complete
Task.WaitAll(task1, task2);
Do you want your doOne calls to occur sequentially? Then you can chain them using ContinueWith:
// Start a chain of tasks
var task1 = Task.Factory.StartNew(() => doOne(SelectedTask.option1));
var task2 = task1.ContinueWith(t => doOne(SelectedTask.option2));
// Block the current thread until the last task completes
task2.Wait();
The code in the title of your post (with a couple of fixes) is essentially performing the exact same function as my sequential task chain above:
Task.Factory.StartNew(() => doOne(SelectedTask.option1))
.ContinueWith(t => doOne(SelectedTask.option2))
.Wait();
Answer to your question below.
If I understand correctly, you want to be able to run a task for a variable list of SelectedTasks in parallel:
List<SelectedTask> selectedTaskOptions = new List<SelectedTask>()
{
SelectedTask.option1,
SelectedTask.option2,
SelectedTask.option3,
SelectedTask.option4,
SelectedTask.option5
};
RunAllSelectedTaskOptions(selectedTaskOptions);
RunAllSelectedTaskOptions accepts and runs a list of SelectedTasks:
public void RunAllSelectedTaskOptions(List<SelectedTask> selectedTaskOptions)
{
List<Task> createdTasks = new List<Task>();
foreach(var taskOption in selectedTaskOptions)
{
createdTasks.Add(Task.Factory.StartNew(() => doOne(taskOption)));
}
Task.WaitAll(createdTasks.ToArray());
}
Another way of implementing RunAllSelectedTaskOptions would be to use Parallel.ForEach, which will execute in parallel and will block until the slowest/last iteration has completed:
public void RunAllSelectedTaskOptions(List<SelectedTask> selectedTaskOptions)
{
Parallel.ForEach(selectedTaskOptions, taskOption => doOne(taskOption));
}
I assume you are talking about parallelizing the two doOne calls?
If so, then you will need to do something like this:
var task1 = Task.Factory.StartNew(() => doOne(SelectedTask.option1));
var task2 = Task.Factory.StartNew(() => doOne(SelectedTask.option2));
var taskList = new List<Task>{task1, task2};
Task.WaitAll(taskList.ToArray());
*The above code is fairly accurate but the syntax has not been validated.

Is there any way to start task using ContinueWith task?

My code:
var r = from x in new Task<int>(() => 1)
from y in new Task<int>(() => x + 1)
select y;
r.ContinueWith(x => Console.WriteLine(x.Result)).Start();
or
new Task<int>(() => 1)
.ContinueWith(x => x.Result + 1)
.ContinueWith(x => Console.WriteLine(x.Result))
.Start();
Exception:
Start may not be called on a continuation task.
So I need to start the first task. Is there any way to call the last task's Start method to run all the tasks?
Any reason not to use Task.Factory.StartNew for the first task? Yes, it's inconsistent - but it's fundamentally a different kind of task, in terms of being started explicitly rather than just as a continuation.
I'm not really sure what's wrong with just writing this:
var t1 = new Task<int>(() => 1);
var r = from x in t1
from y in new Task<int>(() => x + 1)
select y;
r.ContinueWith(x => Console.WriteLine(x.Result));
t1.Start();
or this:
var t = new Task<int>(() => 1);
t.ContinueWith(x => x.Result + 1)
 .ContinueWith(x => Console.WriteLine(x.Result));
t.Start();
That directly expresses what you actually want to do. (It's the initial task that you want to kick off. So what's wrong with invoking Start on that initial task?) Why are you looking for a syntax that obscures that?
EDIT: fixed first example...
EDIT 2 to add:
So I realise now that LinqToTasks expects task selectors to return running tasks. So the second from clause in your first example returns a task that nothing will ever run. So what you actually need is this:
var t1 = new Task<int>(() => 1);
var r = from x in t1
from y in Task<int>.Factory.StartNew(() => x + 1)
select y;
r.ContinueWith(x => Console.WriteLine(x.Result));
t1.Start();
Nothing else is going to call Start on the tasks produced in these from clauses. Since the relevant selectors don't actually get executed until the previous task completes, you're still in control of when to kick off the root task.
That appears to work, but it's pretty ugly. But that appears to be how LinqToTasks is designed... I think I'd stick with the regular function call syntax.
The problem is that selecting tasks with LINQ will only build a deferred query; nothing runs until it is enumerated!
So here's what you need to do:
var query =
    from i in Enumerable.Range(1, 4)
    let task = Task.Factory.StartNew(() => Tuple.Create(i, IsPrime(i))) // put a breakpoint here
    select task.ContinueWith(t => {
        Console.WriteLine("{0} {1} prime.", t.Result.Item1, t.Result.Item2 ? "is" : "is not");
    });
// breakpoint never hit yet
query.ToArray(); // breakpoint hit here 4 times
// all tasks are now running and continuations will start
Task.WaitAll(query.ToArray()); // breakpoint hit 4 more times!!
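If you only want one set of tasks, materialize the query a single time and wait on that array instead; a small sketch:
var tasks = query.ToArray(); // the tasks and their continuations are created exactly once here
Task.WaitAll(tasks);         // block until all the continuations have completed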
I had the same problem today. I wanted to create a wrapper task that handles an error from an inner task. This is what I came up with:
var task = new Task(delegate
{
    throw e; // e is whatever exception your inner work throws
});
var continuation = task.ContinueWith(t =>
{
    if (task.IsCanceled)
    {
        Debug.WriteLine("IsCanceled: " + task.GetType());
    }
    else if (task.IsFaulted)
    {
        Debug.WriteLine("IsFaulted: " + task.GetType());
    }
    else if (task.IsCompleted)
    {
        Debug.WriteLine("IsCompleted: " + task.GetType());
    }
}, TaskContinuationOptions.ExecuteSynchronously); // or consider removing ExecuteSynchronously if your continuation task is going to take long
var wrapper = new Task(() =>
{
    task.Start();
    continuation.Wait();
});
return wrapper;
The key features here are that:
- the continuation part runs after the original task, as you want;
- the wrapper is startable. Continuation tasks created with ContinueWith() are not startable.
A less key feature of this example is that the exception is logged and discarded (which solves my problem, not yours). You might want to do something different when exceptions occur in the continuation, such as rethrowing it as an exception of the current task so that it bubbles out.
As far as I'm aware, there's no sensible way to compose non-started tasks provided by the framework. The simplest solution I can think of is extension methods. Here are some examples which you could build on if you need this functionality.
Warning: Just as with passing around and composing tons of lambdas, if you find yourself needing these, it often means you are missing a type in your design that would simplify your code. Ask yourself what you gained by creating the subtasks.
/// <summary>
/// Compose tasks without starting them.
/// Waiting on the returned task waits for both components to complete.
/// An exception in the first task will stop the second task running.
/// </summary>
public static class TaskExtensions
{
public static Task FollowedBy(this Task first, Task second)
{
return FollowedBy(first,
() =>
{
second.Start();
second.Wait();
});
}
public static Task FollowedBy(this Task first, Action second)
{
return new Task(
() =>
{
if (first.Status == TaskStatus.Created) first.Start();
first.Wait();
second();
});
}
public static Task<T> FollowedBy<T>(this Task first, Task<T> second)
{
return new Task<T>(
() =>
{
if (first.Status == TaskStatus.Created) first.Start();
first.Wait();
second.Start();
return second.Result;
});
}
public static Task FollowedBy<T>(this Task<T> first, Action<T> second)
{
return new Task(
() =>
{
if (first.Status == TaskStatus.Created) first.Start();
var firstResult = first.Result;
second(firstResult);
});
}
public static Task<TSecond> FollowedBy<TFirst, TSecond>(this Task<TFirst> first, Func<TFirst, TSecond> second)
{
return new Task<TSecond>(
() =>
{
if (first.Status == TaskStatus.Created) first.Start();
return second(first.Result);
});
}
}
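A minimal usage sketch of those extensions (the composed task still has to be started explicitly):
var pipeline = new Task<int>(() => 1)
    .FollowedBy(x => x + 1)                 // Func<int, int> overload, returns Task<int>
    .FollowedBy(x => Console.WriteLine(x)); // Action<int> overload, returns Task
pipeline.Start(); // nothing runs until the outer task is started
pipeline.Wait();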
The answer is simple: ContinueWith automatically starts the continuation task, and the first task needs to be running.
var r = Task.Run(() => 1)
    .ContinueWith(x => x.Result + 1)
    .ContinueWith(x => Console.WriteLine(x.Result));
ContinueWith returns a task that starts by checking whether the previous task is done. This code works in the same way as the code below:
var firstTask = new Task<int>(() => 1);
firstTask.Start();
var firstAwaiter = firstTask.GetAwaiter();
var secondTask = new Task<int>(() => firstTask.Result + 1);
firstAwaiter.OnCompleted(() =>
{
    secondTask.Start();
});
var secondAwaiter = secondTask.GetAwaiter();
var thirdTask = new Task(() => Console.WriteLine(secondTask.Result));
secondAwaiter.OnCompleted(() =>
{
    thirdTask.Start();
});
So if the first task is not completed, the next task will not be started.
And you do not need to start the ContinueWith continuation yourself.
