Why doesn't simple multi-tasking work when multi-threading does? - c#

This works:
var finalList = new List<string>();
var list = new List<int> {1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ... 999999};
var init = 0;
var limitPerThread = 5;
var countDownEvent = new CountdownEvent(list.Count);
for (var i = 0; i < list.Count; i++)
{
    var listToFilter = list.Skip(init).Take(limitPerThread).ToList();
    new Thread(delegate()
    {
        Foo(listToFilter);
        countDownEvent.Signal();
    }).Start();
    init += limitPerThread;
}
//wait for all to finish
countDownEvent.Wait();

private static void Foo(List<int> listToFilter)
{
    var listDone = Boo(listToFilter);
    lock (Object)
    {
        finalList.AddRange(listDone);
    }
}
This doesn't:
var taskList = new List<Task>();
for (var i = 0; i < list.Count; i++)
{
    var listToFilter = list.Skip(init).Take(limitPerThread).ToList();
    var task = Task.Factory.StartNew(() => Foo(listToFilter));
    taskList.Add(task);
    init += limitPerThread;
}
//wait for all to finish
Task.WaitAll(taskList.ToArray());
This process ends up creating at least 700 threads. When I run it using Thread, it works and creates all of them. But with Task it doesn't. It seems like it's not starting multiple Tasks asynchronously.
I really want to know why... any ideas?
EDIT
Another version with PLINQ (as suggested).
var taskList = new List<Task>(list.Count);
Parallel.ForEach(taskList, t =>
{
    var listToFilter = list.Skip(init).Take(limitPerThread).ToList();
    Foo(listToFilter);
    init += limitPerThread;
    t.Start();
});
Task.WaitAll(taskList.ToArray());
EDIT2:
public static List<Communication> Foo(List<Dispositive> listToPing)
{
    var listResult = new List<Communication>();
    foreach (var item in listToPing)
    {
        var listIps = item.listIps;
        var communication = new Communication
        {
            IdDispositive = item.Id
        };
        try
        {
            for (var i = 0; i < listIps.Count(); i++)
            {
                var oPing = new Ping().Send(listIps.ElementAt(i).IpAddress, 10000);
                if (oPing != null)
                {
                    if (oPing.Status.Equals(IPStatus.TimedOut) && listIps.Count() > i+1)
                        continue;
                    if (oPing.Status.Equals(IPStatus.TimedOut))
                    {
                        communication.Result = "NOK";
                        break;
                    }
                    communication.Result = oPing.Status.Equals(IPStatus.Success) ? "OK" : "NOK";
                    break;
                }
                if (listIps.Count() > i+1)
                    continue;
                communication.Result = "NOK";
                break;
            }
        }
        catch
        {
            communication.Result = "NOK";
        }
        finally
        {
            listResult.Add(communication);
        }
    }
    return listResult;
}

Tasks are NOT multithreading. They can be used for that, but mostly they're actually used for the opposite - multiplexing on a single thread.
To use tasks for multithreading, I suggest using Parallel LINQ. It has many optimizations in it already, such as intelligent partitioning of your lists and only spawning as many threads as there are CPU cores, etc.
To understand Task and async, think of it this way - a typical workload often includes IO that needs to be waited upon. Maybe you read a file, or query a webservice, or access a database, or whatever. The point is - your thread gets to wait a loooong time (in CPU cycles at least) until you get a response from some faraway destination.
In the Olden Days™ that meant that your thread was getting locked down (suspended) until that response came. If you wanted to do something else in the meantime, you needed to spawn a new thread. That's doable, but not too efficient. Each OS thread carries a significant overhead (memory, kernel resources) with it. And you could end up with several threads actively burning the CPU, which means the OS needs to switch between them so that each gets a bit of CPU time, and these "context switches" are pretty expensive.
async changes that workflow. Now you can have multiple workloads executing on the same thread. While one piece of work is awaiting the result from a faraway source, another can step in and use that thread to do something else useful. When that second workload gets to its own await, the first can awaken and continue.
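For example, a minimal sketch of that idea (the method name and URLs here are hypothetical, just to mark the await points):
// While each await is pending, the calling thread is free to run other work.
async Task<int> TotalLengthAsync(HttpClient client)
{
    var a = await client.GetStringAsync("https://example.com/a"); // thread is released here
    var b = await client.GetStringAsync("https://example.com/b"); // and here again
    return a.Length + b.Length; // the continuation resumes after each response arrives
}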
After all, it doesn't make sense to spawn more threads than there are CPU cores. You're not going to get more work done that way. Just the opposite - more time will be spent on switching the threads and less time will be available for useful work.
That is what Task/async/await was originally designed for. However, Parallel LINQ has also taken advantage of it and reused it for multithreading. In this case you can look at it this way - the other threads are the "faraway destination" that your main thread is waiting on.
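As a rough sketch of that PLINQ suggestion applied to the question's code (assuming Boo is the question's filter method and is safe to call from multiple threads):
var finalList = list
    .AsParallel()                                      // partitions the list across roughly one worker per core
    .WithDegreeOfParallelism(Environment.ProcessorCount)
    .SelectMany(n => Boo(new List<int> { n }))         // run the filter over each partition's items
    .ToList();                                         // merge the partial results; no manual locking needed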

Tasks are executed on the Thread Pool. This means that a handful of threads will serve a large number of tasks. You have multi-threading, but not a thread for every task spawned.
You should use tasks. You should aim to use about as many threads as you have CPU cores. Generally, the thread pool does this for you.
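A minimal sketch that keeps the question's batching but lets the thread pool choose how many threads to use (assuming Boo, finalList, limitPerThread and the lock object from the question):
var batches = Enumerable.Range(0, (list.Count + limitPerThread - 1) / limitPerThread)
    .Select(b => list.Skip(b * limitPerThread).Take(limitPerThread).ToList());

Parallel.ForEach(batches, batch =>
{
    var done = Boo(batch);
    lock (Object) // same shared-list lock as in the question's Foo
    {
        finalList.AddRange(done);
    }
});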

How did you measure the performance? Do you think that 700 threads will work faster than 700 tasks executed by 4 threads? No, they will not.
It seems like it's not starting multiple Tasks async
How did you come up with this? As others suggested in the comments and in other answers, you probably need to remove the thread creation: after creating 700 threads you'll degrade your system's performance, as your threads will fight each other for processor time without getting any work done faster.
So you need to add async/await for your IO operations inside the Foo method, using the SendPingAsync version. Also, your method can be simplified, as the many checks for the listIps.Count() > i + 1 condition are useless - you already do that in the for loop's condition:
public static async Task<List<Communication>> Foo(List<Dispositive> listToPing)
{
    var listResult = new List<Communication>();
    foreach (var item in listToPing)
    {
        var listIps = item.listIps;
        var communication = new Communication
        {
            IdDispositive = item.Id
        };
        try
        {
            var ping = new Ping();
            communication.Result = "NOK";
            for (var i = 0; i < listIps.Count(); i++)
            {
                var oPing = await ping.SendPingAsync(listIps.ElementAt(i).IpAddress, 10000);
                if (oPing != null)
                {
                    if (oPing.Status.Equals(IPStatus.Success))
                    {
                        communication.Result = "OK";
                        break;
                    }
                }
            }
        }
        catch
        {
            communication.Result = "NOK";
        }
        finally
        {
            listResult.Add(communication);
        }
    }
    return listResult;
}
Another problem with your code is that the PLINQ version isn't thread-safe:
init += limitPerThread;
This can fail while executing in parallel. You may introduce some helper method, like in this answer:
private static async Task<List<PingReply>> PingAsync(IEnumerable<string> theListOfIPs)
{
    // A Ping instance only supports one operation at a time, so create one per request.
    var tasks = theListOfIPs.Select(ip => new Ping().SendPingAsync(ip, 10000));
    var results = await Task.WhenAll(tasks);
    return results.ToList();
}
And do this kind of check (try/catch logic removed for simplicity):
public static async Task<List<Communication>> Foo(List<Dispositive> listToPing)
{
    var listResult = new List<Communication>();
    foreach (var item in listToPing)
    {
        var listIps = item.listIps;
        var communication = new Communication
        {
            IdDispositive = item.Id
        };
        var check = await PingAsync(listIps.Select(ip => ip.IpAddress));
        communication.Result = check.Any(p => p.Status.Equals(IPStatus.Success)) ? "OK" : "NOK";
        listResult.Add(communication);
    }
    return listResult;
}
And you should probably use Task.Run instead of Task.Factory.StartNew to be sure that you aren't blocking the UI thread.
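A minimal sketch of that change for the loop in the question (Foo and listToFilter as defined there):
// Task.Run queues the work to the thread pool via TaskScheduler.Default and unwraps
// async delegates, which makes it a safer default than Task.Factory.StartNew here.
var task = Task.Run(() => Foo(listToFilter));
taskList.Add(task);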

Related

await Task.Run taking longer than expected

The method below is supposed to run for the duration (in milliseconds) passed in for case 0:, but what I'm seeing is that the method may take up to 2 seconds to run for a 400 ms duration. Is it possible that Task.Run is taking a long time to start? If so, is there a better way?
private static async void PulseWait(int duration, int axis){
await Task.Run(() =>
{
try
{
var logaction = true;
switch (axis)
{
case 0:
var sw1 = Stopwatch.StartNew();
if (duration > 0) duration += 20; // allowance for the call to the mount
while (sw1.Elapsed.TotalMilliseconds <= duration) { } // wait out the duration
_isPulseGuidingRa = false;
logaction = false;
break;
case 1:
var axis2Stopped = false;
var loopcount = 0;
switch (SkySettings.Mount)
{
case MountType.Simulator:
while (!axis2Stopped && loopcount < 30)
{
loopcount++;
var statusy = new CmdAxisStatus(MountQueue.NewId, Axis.Axis2);
var axis2Status = (AxisStatus)MountQueue.GetCommandResult(statusy).Result;
axis2Stopped = axis2Status.Stopped;
if (!axis2Stopped) Thread.Sleep(10);
}
break;
case MountType.SkyWatcher:
while (!axis2Stopped && loopcount < 30)
{
loopcount++;
var statusy = new SkyIsAxisFullStop(SkyQueue.NewId, AxisId.Axis2);
axis2Stopped = Convert.ToBoolean(SkyQueue.GetCommandResult(statusy).Result);
if (!axis2Stopped) Thread.Sleep(10);
}
break;
default:
throw new ArgumentOutOfRangeException();
}
_isPulseGuidingDec = false;
logaction = false;
break;
}
var monitorItem = new MonitorEntry
{ Datetime = HiResDateTime.UtcNow, Device = MonitorDevice.Telescope, Category = MonitorCategory.Mount, Type = MonitorType.Data, Method = MethodBase.GetCurrentMethod().Name, Thread = Thread.CurrentThread.ManagedThreadId, Message = $"PulseGuide={logaction}" };
MonitorLog.LogToMonitor(monitorItem);
}
catch (Exception)
{
_isPulseGuidingDec = false;
_isPulseGuidingRa = false;
}
});}
Log showing the time taken...
33652,2019:07:12:01:15:35.590,13,AxisPulse,Axis1,0.00208903710815278,400,0,True <<--line just before PulseWait is called with 400ms duration
33653,2019:07:12:01:15:35.591,13,SendRequest,:I1250100
33654,2019:07:12:01:15:35.610,13,ReceiveResponse,:I1250100,=
33655,2019:07:12:01:15:36.026,13,SendRequest,:I1B70100
33656,2019:07:12:01:15:36.067,13,ReceiveResponse,:I1B70100,=
33657,2019:07:12:01:15:36.067,13,SendRequest,:j1
33658,2019:07:12:01:15:36.120,13,ReceiveResponse,:j1,=DDCDBD
33659,2019:07:12:01:15:36.120,13,SendRequest,:j2
33660,2019:07:12:01:15:36.165,13,ReceiveResponse,:j2,=67CF8A
33661,2019:07:12:01:15:36.467,13,SendRequest,:j1
33662,2019:07:12:01:15:36.484,13,ReceiveResponse,:j1,=10CEBD
33663,2019:07:12:01:15:36.484,13,SendRequest,:j2
33664,2019:07:12:01:15:36.501,13,ReceiveResponse,:j2,=67CF8A
33665,2019:07:12:01:15:36.808,13,SendRequest,:j1
33666,2019:07:12:01:15:36.842,13,ReceiveResponse,:j1,=3CCEBD
33667,2019:07:12:01:15:36.842,13,SendRequest,:j2
33668,2019:07:12:01:15:36.868,13,ReceiveResponse,:j2,=67CF8A
33669,2019:07:12:01:15:37.170,13,SendRequest,:j1
33670,2019:07:12:01:15:37.188,13,ReceiveResponse,:j1,=6BCEBD
33671,2019:07:12:01:15:37.188,13,SendRequest,:j2
33672,2019:07:12:01:15:37.204,13,ReceiveResponse,:j2,=67CF8A
33673,2019:07:12:01:15:37.221,5,b__0,PulseGuide=False <<--PulseWait is finished 1.631 s after start
The purpose of async and await is to make things easy. But just like everything that makes things easy, it comes at the cost of giving up full control over what's going on. Here, it's really a cost of asynchronous programming in general. The point of asynchronous programming is to free up the current thread so that the current thread can go off and do something else. But if something else is done on the current thread, then the continuation of what you were doing must wait until that is done. (i.e. what comes after the await may not happen immediately after the task completes.)
So while asynchronous programming will help overall performance (like increasing the overall throughput of a web app), it will actually hurt the performance of any one specific task. If every millisecond counts to you, you might be able to do the low-level work yourself, like creating a Thread (if this really needs to be run on a separate thread).
Here is a simple example that demonstrates this:
var s = new Stopwatch();
// Test the time it takes to run an empty method on a
// different thread with Task.Run and await it.
s.Start();
await Task.Run(() => { });
s.Stop();
Console.WriteLine($"Time of Task.Run: {s.ElapsedMilliseconds}ms");
// Test the time it takes to create a new thread directly
// and wait for it.
s.Restart();
var t = new Thread(() => { });
t.Start();
t.Join();
s.Stop();
Console.WriteLine($"Time of new Thread: {s.ElapsedMilliseconds}ms");
The output will vary, but it looks something like this:
Time of Task.Run: 8ms
Time of new Thread: 0ms
In an application with lots of other things going on, that 8ms could be much more if some other operation uses the thread during the await.
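One way to see that effect (illustrative only, not part of the original post) is to saturate the thread pool with blocking work first and then time Task.Run again:
// Occupy the pool threads with blocking work so the next Task.Run has to queue up.
for (int i = 0; i < Environment.ProcessorCount * 4; i++)
{
    _ = Task.Run(() => Thread.Sleep(2000));
}
var sw = Stopwatch.StartNew();
await Task.Run(() => { });   // now has to wait for a free pool thread (or for the pool to grow)
sw.Stop();
Console.WriteLine($"Time of Task.Run under load: {sw.ElapsedMilliseconds}ms");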
That's not to say that you should use Thread. t.Join() is not an asynchronous operation. It will block the thread. So if PulseWait runs on the UI thread (if this is a UI app), it will lock the UI thread, which is a bad user experience. In that case, you may not be able to get around the cost of using asynchronous code.
If this is not an application with a UI, then I don't see why you need to do all that on a different thread at all. Maybe you can just.... not do that.
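If a dedicated thread really is needed but the caller still has to await it without blocking, one option (a sketch, not something the answer above prescribes) is to wrap the thread in a TaskCompletionSource:
static Task RunOnDedicatedThread(Action work)
{
    var tcs = new TaskCompletionSource<object>();
    var thread = new Thread(() =>
    {
        try { work(); tcs.SetResult(null); }
        catch (Exception ex) { tcs.SetException(ex); }
    })
    { IsBackground = true };
    thread.Start();
    return tcs.Task; // awaiting this does not block the calling (UI) thread
}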

C# Scan tree recursively with multiple threads

I'm scanning some directory for items. I've just read the Multithreaded Directory Looping in C# question, but I still want to make it multithreaded. Even though everyone says the drive will be the bottleneck, I have some points:
The drives may mostly be "single threaded", but how do you know what they are going to bring up in the future?
How do you know whether the different sub-paths you are scanning are on the same physical drive?
I'm using an abstraction layer (even two) over System.IO so that I can later reuse the code in different scenarios.
So my first idea was to use Task, and the first dummy implementation was this:
public async Task Scan(bool recursive = false) {
var t = new Task(() => {
foreach (var p in path.scan) Add(p);
if (!recursive) return;
var tks = new Task[subs.Count]; var i = 0;
foreach (var s in subs) tks[i++] = s.Scan(true);
Task.WaitAll(tks);
}); t.Start();
await t;
}
I don't like the idea of creating a Task for each item and generally this doesn't seem ideal, but this was just for a test as Tasks are advertised to automatically manage the threads...
This method works, but it's very slow. It takes over 5 s to complete, while the single-threaded version below takes around 0.5 s to complete the whole program on the same data set:
public void Scan2(bool recursive = false) {
foreach (var p in path.scan) Add(p);
if (!recursive) return;
foreach (var s in subs) s.Scan2(true);
}
I wonder what really goes wrong with the first method. The machine is not under load, CPU usage is insignificant, the drive is fine... I've tried profiling it with NProfiler, but it doesn't tell me much besides that the program sits in Task.WaitAll(tks) all the time.
I also wrote a thread-locked counting mechanism that is invoked during the addition of each item. Maybe the problem is with it?
#region SubCouting
public Dictionary<Type, int> counters = new Dictionary<Type, int>();
private object cLock = new object();
private int _sc = 0;
public int subCount => _sc;
private void inCounter(Type t) {
lock (cLock) {
if (!counters.ContainsKey(t)) counters.Add(t, 1);
counters[t]++;
_sc++;
}
if (parent) parent.inCounter(t);
}
#endregion
But even if threads are waiting here, wouldn't the execution time be similar to the single-threaded version, as opposed to 10x slower?
I'm not sure how to approach this. If I don't want to use tasks, do I need to manage threads manually or is there already some library that would fit nicely for the job?
I think you almost got it. Task.WaitAll(tks) is the problem. You block one thread-pool thread for it, since this is a synchronous operation. You soon run out of threads: all of them are just waiting for tasks that have no threads left to run on. You can solve this with async - replace the blocking wait with await Task.WhenAll(...), which frees the thread while it waits. With some CPU-bound workload the multithreaded version is significantly faster; when it is purely IO-bound it is roughly equal.
ConcurrentBag<string> result = new ConcurrentBag<string>();
List<string> result2 = new List<string>();
public async Task Scan(string path)
{
await Task.Run(async () =>
{
var subs = Directory.GetDirectories(path);
await Task.WhenAll(subs.Select(s => Scan(s)));
result.Add(Enumerable.Range(0, 1000000).Sum(i => path[i % path.Length]).ToString());
});
}
public void Scan2(string path)
{
result2.Add(Enumerable.Range(0, 1000000).Sum(i => path[i % path.Length]).ToString());
var subs = Directory.GetDirectories(path);
foreach (var s in subs) Scan2(s);
}
private async void button4_Click(object sender, EventArgs e)
{
string dir = @"d:\tmp";
System.Diagnostics.Stopwatch st = new System.Diagnostics.Stopwatch();
st.Start();
await Scan(dir);
st.Stop();
MessageBox.Show(st.ElapsedMilliseconds.ToString());
st = new System.Diagnostics.Stopwatch();
st.Start();
Scan2(dir);
st.Stop();
MessageBox.Show(st.ElapsedMilliseconds.ToString());
MessageBox.Show(result.OrderBy(x => x).SequenceEqual(result2.OrderBy(x => x)) ? "OK" : "ERROR");
}

TPL DataFlow Workflow

I have just started reading TPL Dataflow and it is really confusing for me. There are so many articles on this topic which I read but I am unable to digest it easily. May be it is difficult and may be I haven't started to grasp the idea.
The reason why I started looking into this is that I wanted to implement a scenario where parallel tasks could be run but in order and found that TPL Dataflow can be used as this.
I am practicing TPL and TPL Dataflow both and am at very beginners level so I need help from experts who could guide me to the right direction. In the test method written by me I have done the following thing,
private void btnTPLDataFlow_Click(object sender, EventArgs e)
{
Stopwatch watch = new Stopwatch();
watch.Start();
txtOutput.Clear();
ExecutionDataflowBlockOptions execOptions = new ExecutionDataflowBlockOptions();
execOptions.MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded;
ActionBlock<string> actionBlock = new ActionBlock<string>(async v =>
{
await Task.Delay(200);
await Task.Factory.StartNew(
() => txtOutput.Text += v + Environment.NewLine,
CancellationToken.None,
TaskCreationOptions.None,
scheduler
);
}, execOptions);
for (int i = 1; i < 101; i++)
{
actionBlock.Post(i.ToString());
}
actionBlock.Complete();
watch.Stop();
lblTPLDataFlow.Text = Convert.ToString(watch.ElapsedMilliseconds / 1000);
}
Now the procedure is both parallel and asynchronous (not freezing my UI), but the output generated is not in order, whereas I have read that TPL Dataflow keeps the order of the elements by default. So my guess is that the Task which I have created is the culprit, and it is not outputting the strings in the correct order. Am I right?
If this is the case, then how do I make this both asynchronous and in order?
I have tried to separate the code and distribute it into different methods, but this attempt failed, as only one string was output to the textbox and nothing else happened.
private async void btnTPLDataFlow_Click(object sender, EventArgs e)
{
Stopwatch watch = new Stopwatch();
watch.Start();
await TPLDataFlowOperation();
watch.Stop();
lblTPLDataFlow.Text = Convert.ToString(watch.ElapsedMilliseconds / 1000);
}
public async Task TPLDataFlowOperation()
{
var actionBlock = new ActionBlock<int>(async values => txtOutput.Text += await ProcessValues(values) + Environment.NewLine,
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded, TaskScheduler = scheduler });
for (int i = 1; i < 101; i++)
{
actionBlock.Post(i);
}
actionBlock.Complete();
await actionBlock.Completion;
}
private async Task<string> ProcessValues(int i)
{
await Task.Delay(200);
return "Test " + i;
}
I know I have written a bad piece of code but this is the first time I am experimenting with TPL Dataflow.
How do I make this Asynchronous and in order?
This is something of a contradiction. You can make concurrent tasks start in order, but you can't really guarantee that they will run or complete in order.
Let's examine your code and see what's happening.
First, you've selected DataflowBlockOptions.Unbounded. This tells TPL Dataflow that it shouldn't limit the number of tasks that it allows to run concurrently. Therefore, each of your tasks will start at more-or-less the same time, in order.
Your asynchronous operation begins with await Task.Delay(200). This will cause your method to be suspended and then resume after about 200 ms. However, this delay is not exact, and will vary from one invocation to the next. Also, the mechanism by which your code is resumed after the delay presumably takes a variable amount of time. Because of this random variation in the actual delay, the next bit of code to run is now not in order, resulting in the discrepancy you're seeing.
You might find this example interesting. It's a console application to simplify things a bit.
class Program
{
static void Main(string[] args)
{
OutputNumbersWithDataflow();
OutputNumbersWithParallelLinq();
Console.ReadLine();
}
private static async Task HandleStringAsync(string s)
{
await Task.Delay(200);
Console.WriteLine("Handled {0}.", s);
}
private static void OutputNumbersWithDataflow()
{
var block = new ActionBlock<string>(
HandleStringAsync,
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded });
for (int i = 0; i < 20; i++)
{
block.Post(i.ToString());
}
block.Complete();
block.Completion.Wait();
}
private static string HandleString(string s)
{
// Perform some computation on s...
Thread.Sleep(200);
return s;
}
private static void OutputNumbersWithParallelLinq()
{
var myNumbers = Enumerable.Range(0, 20).AsParallel()
.AsOrdered()
.WithExecutionMode(ParallelExecutionMode.ForceParallelism)
.WithMergeOptions(ParallelMergeOptions.NotBuffered);
var processed = from i in myNumbers
select HandleString(i.ToString());
foreach (var s in processed)
{
Console.WriteLine(s);
}
}
}
The first set of numbers is calculated using a method rather similar to yours—with TPL Dataflow. The numbers are out-of-order.
The second set of numbers, output by OutputNumbersWithParallelLinq(), doesn't use Dataflow at all. It relies on the Parallel LINQ features built into .NET. This runs my HandleString() method on background threads, but keeps the data in order through to the end.
The limitation here is that PLINQ doesn't let you supply an async method. (Well, you could, but it wouldn't give you the desired behavior.) HandleString() is a conventional synchronous method; it just gets executed on a background thread.
And here's a more complex Dataflow example that does preserve the correct order:
private static void OutputNumbersWithDataflowTransformBlock()
{
Random r = new Random();
var transformBlock = new TransformBlock<string, string>(
async s =>
{
// Make the delay extra random, just to be sure.
await Task.Delay(160 + r.Next(80));
return s;
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded });
// For a GUI application you should also set the
// scheduler here to make sure the output happens
// on the correct thread.
var outputBlock = new ActionBlock<string>(
s => Console.WriteLine("Handled {0}.", s),
new ExecutionDataflowBlockOptions
{
SingleProducerConstrained = true,
MaxDegreeOfParallelism = 1
});
transformBlock.LinkTo(outputBlock, new DataflowLinkOptions { PropagateCompletion = true });
for (int i = 0; i < 20; i++)
{
transformBlock.Post(i.ToString());
}
transformBlock.Complete();
outputBlock.Completion.Wait();
}

Alternate to Dataflow BroadcastBlock with guaranteed delivery

I need to have some kind of object that acts like a BroadcastBlock, but with guaranteed delivery. So I used an answer from this question. But I don't really understand the execution flow here. I have a console app. Here is my code:
static void Main(string[] args)
{
ExecutionDataflowBlockOptions execopt = new ExecutionDataflowBlockOptions { BoundedCapacity = 5 };
List<ActionBlock<int>> blocks = new List<ActionBlock<int>>();
for (int i = 0; i <= 10; i++)
blocks.Add(new ActionBlock<int>(num =>
{
int coef = i;
Console.WriteLine(Thread.CurrentThread.ManagedThreadId + ". " + num * coef);
}, execopt));
ActionBlock<int> broadcaster = new ActionBlock<int>(async num =>
{
foreach (ActionBlock<int> block in blocks) await block.SendAsync(num);
}, execopt);
broadcaster.Completion.ContinueWith(task =>
{
foreach (ActionBlock<int> block in blocks) block.Complete();
});
Task producer = Produce(broadcaster);
List<Task> ToWait = new List<Task>();
foreach (ActionBlock<int> block in blocks) ToWait.Add(block.Completion);
ToWait.Add(producer);
Task.WaitAll(ToWait.ToArray());
Console.ReadLine();
}
static async Task Produce(ActionBlock<int> broadcaster)
{
for (int i = 0; i <= 15; i++) await broadcaster.SendAsync(i);
broadcaster.Complete();
}
Each number must be handled sequentially, so I can't use MaxDegreeOfParallelism in the broadcaster block. But all the ActionBlocks that receive the number can run in parallel.
So here is the question:
In the output I can see different thread IDs. Do I understand correctly that it works as follows:
Execution hits await block.SendAsync(num); in the broadcaster.
If the current block is not ready to accept the number, execution exits the broadcaster and hangs at Task.WaitAll.
When the block accepts the number, the rest of the foreach statement in the broadcaster is executed on a thread-pool thread.
And the same until the end.
Each iteration of the foreach is executed on the thread pool, but actually it happens sequentially.
Am I right or wrong in my understanding?
How can I change this code to send the number to all blocks asynchronously?
To make sure that if one of the blocks is not ready to receive the number at the moment, I won't wait for it, and all the others that are ready will receive the number. And that all blocks can run in parallel. And guarantee delivery.
Assuming you want the broadcaster to handle one item at a time while enabling the target blocks to receive that item concurrently, you need to change the broadcaster to offer the number to all blocks at the same time and then asynchronously wait for all of them together to accept it before moving on to the next number:
var broadcaster = new ActionBlock<int>(async num =>
{
    var tasks = new List<Task>();
    foreach (var block in blocks)
    {
        tasks.Add(block.SendAsync(num));
    }
    await Task.WhenAll(tasks);
}, execopt);
Now, in this case where you don't have work after the await you can slightly optimize while still returning an awaitable task:
ActionBlock<int> broadcaster = new ActionBlock<int>(
num => Task.WhenAll(blocks.Select(block => block.SendAsync(num))), execopt);

C# RX (System.Reactive) - Async - Publish an IEnumerable<DataRow> to multiple observing data handlers

I'm new to RX.
I'd like to traverse an IEnumerable and publish to multiple DataHandlers that process the data in their respective threads.
Below is my sample program. The publish works and a new thread is created, but the 3 RowHandlers are all running in 1 thread. I need 3 threads. What is the best way to implement this?
class Program
{
public class MyDataGenerator
{
public IEnumerable<int> myData()
{
//Heavy lifting....Don't want to process more than once.
yield return 1;
yield return 2;
yield return 3;
yield return 4;
yield return 5;
yield return 6;
}
}
static void Main(string[] args)
{
MyDataGenerator h = new MyDataGenerator();
Console.WriteLine("Thread id " + Thread.CurrentThread.ManagedThreadId.ToString());
//
var shared = h.myData().ToObservable().Publish();
///////////////////////////////
// Row Handling Requirements
//
// 1. Single Scan of IEnumerable.
// 2. Row handlers process data in their own threads.
// 3. OK if scanning thread blocks while data is processed
//
//Create the RowHandlers
MyRowHandler rn1 = new MyRowHandler();
rn1.ido = shared.Subscribe(i => rn1.processID(i));
MyRowHandler rn2 = new MyRowHandler();
rn2.ido = shared.Subscribe(i => rn2.processID(i));
MyRowHandler rn3 = new MyRowHandler();
rn3.ido = shared.Subscribe(i => rn3.processID(i));
//
shared.Connect();
}
public class MyRowHandler
{
public IDisposable ido = null;
public void processID(int i)
{
var o = Observable.Start(() =>
{
Console.WriteLine(String.Format("Start Thread ID {0} Int{1}", Thread.CurrentThread.ManagedThreadId, i));
Thread.Sleep(30);
Console.WriteLine("Done Thread ID"+Thread.CurrentThread.ManagedThreadId.ToString());
}
);
o.First();
}
}
}
Discovery:
The coding speed and code quality gains one receives from Rx come at the expense of performance. Tasks/delegates are without a doubt many times faster. That means the most important thing one needs to learn about Rx is when to use Rx. Below is a draft summary guideline. For large volumes I can see a use for Rx in chunking, combining, and other many-stream/many-handler models; however, basic async work should not use Rx.
I'd post an image with a matrix guideline, but the site won't let me post images.
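A minimal illustration of the "chunking" idea mentioned above (illustrative only, not from the original post):
var source = Observable.Range(1, 100);
source.Buffer(10)                        // chunk the stream into lists of 10 items
      .ObserveOn(Scheduler.TaskPool)     // hand each chunk to a pool thread
      .Subscribe(chunk => Console.WriteLine("Processing " + chunk.Count + " items"));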
If I understand your sequencing requirements correctly and you want three parallel running scans, you can just observe on the TaskPool and subscribe from there;
...
//Create the RowHandlers
MyRowHandler rn1 = new MyRowHandler();
rn1.ido = shared.ObserveOn(Scheduler.TaskPool).Subscribe(i => rn1.processID(i));
...
Note that since you're then running asynchronously and your main thread doesn't wait for the scans to get done, your program will terminate right away unless you for example put a Console.ReadKey() at the end of the program.
EDIT: Regarding running the same thread "all the way", you're scheduling a bit strangely for that. If you drop the observable in the rowhandler, you can use Scheduler.NewThread and get good results;
...
var rowHandler1 = new MyRowHandler();
rowHandler1.ido = shared.ObserveOn(Scheduler.NewThread).Subscribe(rowHandler1.ProcessID);
...
public void ProcessID(int i)
{
Console.WriteLine(String.Format("Start Thread ID {0} Int{1}", Thread.CurrentThread.ManagedThreadId, i));
Thread.Sleep(30);
Console.WriteLine("Done Thread ID" + Thread.CurrentThread.ManagedThreadId.ToString(CultureInfo.InvariantCulture));
}
That will give each subscription its own thread, and stay with it.
