How to run a batch of Tasks? - C#

I have a lot of Tasks that I run simultaneously, but sometimes there are too many of them to run at the same time. So I want to run them in batches of 100 Tasks, but I'm not sure how to modify my code.
Here is my current code:
protected void ValidateFile(List<MyFile> validFiles, MyFile file)
{
// do something
validFiles.Add(file);
}
internal Task ValidateFilesAsync(List<MyFile> validFiles, SplashScreenManager splashScreen, MyFile file)
{
return Task.Run(() => ValidateFile(validFiles, file)).ContinueWith(
t => splashScreen?.SendCommand(SplashScreen.SplashScreenCommand.IncreaseGeneralActionValue,
1));
}
var validFiles = new List<MyFile>();
var tasks = new List<Task>();
foreach (var file in filesToValidate)
{
tasks.Add(ValidateFilesAsync(validFiles, splashScreenManager, file));
}
Task.WaitAll(tasks.ToArray());
I'm not very good with Tasks, so the code may not be optimal, but somehow it works.
I found that I can use Parallel.ForEach with the MaxDegreeOfParallelism option, but for this I would have to turn ValidateFile into an Action and remove ValidateFilesAsync, and in that case I would lose the ContinueWith functionality that I use to advance the progress bar in the GUI.
How can I restrict the number of simultaneously running Tasks and, if possible, keep the ContinueWith-like functionality?

I would suggest using Parallel.ForEach. I'm not sure what you mean by "have to turn ValidateFile into an Action"; just change your foreach body to a lambda. There should be plenty of examples easily available.
To update the UI there are several ways to do it (a sketch follows this list):
Use SynchronizationContext
Start a task using a task scheduler that runs the task on the UI thread
Update the progress variable on the background thread, and use a timer to poll the progress variable from the UI thread.
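For example, here is a minimal sketch combining Parallel.ForEach with IProgress<T>, which posts its callback to the SynchronizationContext it was created on (the UI thread). The names MyFile, filesToValidate and the splash screen come from the question; the ConcurrentBag and the degree of parallelism are my assumptions:
// Sketch only. ConcurrentBag replaces List<MyFile> because List<T>.Add
// is not thread-safe when called from parallel iterations.
var validFiles = new ConcurrentBag<MyFile>();

// Progress<T> captures the current SynchronizationContext (create it on
// the UI thread), so the callback below runs on the UI thread.
IProgress<int> progress = new Progress<int>(_ =>
    splashScreen?.SendCommand(
        SplashScreen.SplashScreenCommand.IncreaseGeneralActionValue, 1));

await Task.Run(() =>
    Parallel.ForEach(filesToValidate,
        new ParallelOptions { MaxDegreeOfParallelism = 100 },
        file =>
        {
            ValidateFile(validFiles, file); // assumes ValidateFile now takes the bag
            progress.Report(1);             // replaces the ContinueWith progress step
        }));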
Keep in mind that IO, like reading files, may not improve much, if at all, by reading in parallel. Spinning disks are inherently serial and have fairly large seek times. SSDs are inherently parallel, but I would still not expect any huge performance gains from reading in parallel, especially not going up to 100 concurrent reads.

Related

How to ensure parallel tasks dequeue unique entries from ConcurrentQueue<T>?

Hi, I have a ConcurrentQueue that is loaded with files from a database. These files are to be processed by parallel Tasks that dequeue the files. However, I run into issues where, after some time, I start getting tasks that dequeue the same file at the same time (which leads to "used by another process" errors on the file). And I also get more tasks than are supposed to be allocated. I have even seen 8 tasks running at once, which should not be happening. The active tasks limit is 5.
Rough code:
private void ParseQueuedTDXFiles()
{
while (_signalParseQueuedFilesEvent.WaitOne())
{
Task.Run(() => SetParsersTask());
}
}
The _signalParseQueuedFilesEvent is set on a timer in a Windows Service.
The above function then calls SetParsersTask. This is why I use a ConcurrentDictionary to track how many active tasks there are and make sure they stay below _ActiveTasksLimit:
private void SetParsersTask()
{
if (_ConcurrentqueuedTdxFilesToParse.Count > 0)
{
if (_activeParserTasksDict.Count < _ActiveTasksLimit) // ConcurrentDictionary used to control how many Tasks should run
{
int parserCountToStart = _ActiveTasksLimit - _activeParserTasksDict.Count;
Parallel.For(0, parserCountToStart, parserToStart =>
{
lock(_concurrentQueueLock)
Task.Run(() => PrepTdxParser());
});
}
}
}
Which then calls this function which dequeues the Concurrent Queue:
private void PrepTdxParser()
{
TdxFileToProcessData fileToProcess;
lock (_concurrentQueueLock)
_ConcurrentqueuedTdxFilesToParse.TryDequeue(out fileToProcess);
if (!string.IsNullOrEmpty(fileToProcess.TdxFileName))
{
LaunchTDXParser(fileToProcess);
}
}
I even put a lock on _ConcurrentqueuedTdxFilesToParse even though I know it doesn't need one. All to make sure that I never run into a situation where two Tasks are dequeuing the same file.
This function is where I add and remove Tasks as well as launch the file parser for the dequeued file:
private void LaunchTDXParser(TdxFileToProcessData fileToProcess)
{
string fileName = fileToProcess.TdxFileName;
Task startParserTask = new Task(() => ConfigureAndStartProcess(fileName));
_activeParserTasksDict.TryAdd(fileName, startParserTask);
startParserTask.Start();
Task.WaitAll(startParserTask);
_activeParserTasksDict.TryRemove(fileName, out Task taskToBeRemoved);
}
Can you guys help me understand why I am getting the same file dequeued in two different Tasks? And why I am getting more Tasks than the _ActiveTasksLimit?
There are a number of red flags in this¹ code:
Using a WaitHandle. This tool is too primitive. I've never seen a problem solved with WaitHandles that couldn't be solved in a simpler way without them.
Launching Task.Run tasks in a fire-and-forget fashion.
Launching a Parallel.For loop without configuring the MaxDegreeOfParallelism. This practically guarantees that the ThreadPool will get saturated.
Protecting a queue (_queuedTdxFilesToParse) with a lock (_concurrentQueueLock) only partially. If the queue is a Queue<T>, you must protect it on each and every operation, otherwise the behavior of the program is undefined. If the queue is a ConcurrentQueue<T>, there is no need to protect it because it is thread-safe by itself.
Calling Task.Factory.StartNew and Task.Start without configuring the scheduler argument.
So I am not surprised that your code is not working as expected. I can't point to a specific error that needs to be fixed. For me the whole approach is dubious and needs to be reworked/scrapped. Some concepts and tools that you might want to research before attempting to rewrite this code:
The producer-consumer pattern.
The BlockingCollection<T> class.
The TPL Dataflow library.
Optionally you could consider familiarizing yourself with asynchronous programming. It can help reduce the number of threads that your program uses while running, resulting in a more efficient and scalable program. Two powerful asynchronous tools are the Channel<T> class and the Parallel.ForEachAsync API (available from .NET 6 and later); a minimal Channel<T> sketch is shown below the footnote.
¹ This answer was intended for a related question that is now deleted.
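As an illustration of the producer-consumer pattern with Channel<T>, here is a minimal self-contained sketch; WorkFile and all the counts are made-up stand-ins, not the question's real types. The channel hands each item to exactly one reader, so duplicate dequeues cannot happen, and the number of consumers caps the parallelism:
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class ChannelSketch
{
    // Hypothetical stand-in for the question's TdxFileToProcessData.
    record WorkFile(string Name);

    static async Task Main()
    {
        var channel = Channel.CreateBounded<WorkFile>(100);

        // Single producer: the only place that enqueues work.
        var producer = Task.Run(async () =>
        {
            for (int i = 0; i < 20; i++)
                await channel.Writer.WriteAsync(new WorkFile($"file{i}.tdx"));
            channel.Writer.Complete(); // signal: no more items
        });

        // Five consumers (cf. the question's _ActiveTasksLimit): the channel
        // hands each item to exactly one of them.
        var consumers = new Task[5];
        for (int i = 0; i < consumers.Length; i++)
        {
            consumers[i] = Task.Run(async () =>
            {
                await foreach (var file in channel.Reader.ReadAllAsync())
                    Console.WriteLine($"Processing {file.Name}");
            });
        }

        await producer;
        await Task.WhenAll(consumers);
    }
}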
So I fixed my problem. The solution was, first, to not add more parallelism than need be. I was trying to create a situation where private void SetParsersTask() would not be held up by tasks that still needed to finish processing a file. So I foolishly threw in Parallel.For in addition to Task.Start, which is already parallel. I fixed this by generating fire-and-forget Tasks in a normal for loop instead of Parallel.For:
private void SetParsersTask()
{
if (_queuedTdxFilesToParse.Count > 0)
{
if (_activeParserTasksDict.Count < _tdxParsersInstanceCount)
{
int parserCountToStart = _tdxParsersInstanceCount - _activeParserTasksDict.Count;
_queuedTdxFilesToParse = new ConcurrentQueue<TdxFileToProcessData>(_queuedTdxFilesToParse.Distinct());
for (int i = 0; i < parserCountToStart; i++)
{
Task.Run(() => PrepTdxParser());
}
}
}
}
After that I was still getting the occasional duplicate files, so I moved the queue loading to another long-running thread. For that thread I use an AutoResetEvent so that the queue is populated only once at any instant in time, as opposed to potentially having another task load it with duplicate files. It could be that both my enqueue and dequeue were responsible, and now it's addressed:
var _loadQueueTask = Task.Factory.StartNew(() => LoadQueue(), TaskCreationOptions.LongRunning);
private void LoadQueue()
{
while (_loadConcurrentQueueEvent.WaitOne())
{
if (_queuedTdxFilesToParse.Count < _tdxParsersInstanceCount)
{
int numFilesToGet = _tdxParsersInstanceCount - _activeParserTasksDict.Count;
var filesToAdd = ServiceDBHelper.GetTdxFilesToEnqueueForProcessingFromDB(numFilesToGet);
foreach (var fileToProc in filesToAdd)
{
ServiceDBHelper.UpdateTdxFileToProcessStatusAndUpdateDateTime(fileToProc.TdxFileName, 1, DateTime.Now);
_queuedTdxFilesToParse.Enqueue(fileToProc);
}
}
}
}
Thanks to Theo for pointing me to additional tools and making me look closer at my parallel loops.

EventHub ForEach Parallel Async

Always managing to confuse myself working with async, I'm after a bit of validation/confirmation here that I'm doing what I think I'm doing in the following scenarios.
given the following trivial example:
// pretend / assume these are json msgs or something ;)
var strEvents = new List<string> { "event1", "event2", "event3" };
I can post each event to an Event Hub simply as follows:
foreach (var e in strEvents)
{
// Do some things
outEventHub.Add(e); // ICollector
}
The foreach will run on a single thread and execute everything inside it sequentially; the posting to the Event Hub will also remain on the same thread too, I guess?
Changing ICollector to IAsyncCollector, I can achieve the following:
foreach (var e in strEvents)
{
// Do some things
await outEventHub.AddAsync(e);
}
I think I am right here in saying that the foreach will run on a single thread, and the actual sending to the Event Hub will be pushed off elsewhere? Or at least not block that same thread.
Changing to Parallel.ForEach, as these events will be arriving 100+ or so at a time:
Parallel.ForEach(events, async (e) =>
{
// Do some things
await outEventHub.AddAsync(e);
});
Starting to get a bit hazy now, as I am not sure what is really going on... AFAIK each event gets its own thread (within the bounds of the hardware), and steps within that thread do not block it... so this trivial example aside.
Finally, I thought, I could turn them all into Tasks:
private static async Task DoThingAsync(string e, IAsyncCollector<string> outEventHub)
{
await outEventHub.AddAsync(e);
}
var t = new List<Task>();
foreach (var e in strEvents)
{
t.Add(DoThingAsync(e, outEventHub));
}
await Task.WhenAll(t);
Now I am really hazy, and I think this is prepping everything on a single thread and then running everything at exactly the same time, on any thread available?
I appreciate that in order to determine which is right for the job at hand, benchmarking is required... but an explanation of what the framework is doing in each situation would be super helpful for me right now.
Parallel != async
This is the main idea here. Both of them have their uses, and they can be used together, but they are very different. You are mostly right with your assumptions, but let me clarify:
Simple foreach
This is non-parallel and non-async. Nothing to talk about.
Await inside foreach
This is async code that is non-parallel.
foreach (var e in strEvents)
{
// Do some things
await outEventHub.AddAsync(e);
}
This will all take place on a single thread. It takes an event, starts adding it to your event hub, and while that is being completed (I'm guessing it does some sort of network IO) it hands the thread back to the thread pool (or the UI if it was called on a UI thread) so it can do other work while waiting on AddAsync to return. But as you said, it is not parallel at all.
Parallel Foreach (async)
This one is a trap! In short, Parallel.ForEach is designed for synchronous workloads. We'll get back to this, but first let's assume you used it with non-async code.
Parallel foreach (sync)
A.k.a. Parallel but not async.
Parallel.ForEach(events, (e) =>
{
// Do some things
outEventHub.Add(e);
});
Each item will get its own "Task", but they won't each spawn a thread. Creating threads is expensive, and in an optimal case there is no point in having more threads than CPU cores. Instead, these tasks run on the ThreadPool, which has just about as many threads as is optimal. Each thread takes a task, works on it, then takes another one, and so on.
You can think of it as - on a 4 core machine - having 4 workers around a pile of tasks, so 4 of them are being run at a time. You can imagine that this is not ideal in case of IO bound workloads (which this most likely is). If your network is slow, you can have all 4 threads blocked on trying to send the event out, while they could be doing useful work. This leads us to...
Tasks
Async and potentially parallel (depends on the usage).
Your description is correct here too, except for the ThreadPool part: the main thread kicks off all the tasks at once, and they then run on the pool's threads. While they are running, the main thread is released and can do other work as needed. Up to this point it is the same as the Parallel.ForEach case. But:
What happens is that a ThreadPool thread picks up a task, does the necessary preprocessing, then sends out the network request asynchronously. This means that the task will not block while waiting for the network, but instead releases the ThreadPool thread to pick up another work item. When the network request completes, the task's continuation (the remaining code lines after the network request) is scheduled back onto the list of tasks.
You can see that theoretically this is the most efficient process, so fast that you have to be careful not to flood your network.
Back to Parallel.ForEach and async
At this point you should be able to spot the problem. All your async lambda async (e) => { await outEventHub.AddAsync(e); } is doing is kicking off the work; it returns right after it hits the await. (Remember that async/await releases threads while waiting.) Parallel.ForEach returns right after it has started all of them. But nothing is awaiting these tasks! They become fire-and-forget, which is usually bad practice. It is as if you deleted the await Task.WhenAll call from your Task example.
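As an aside: on .NET 6 and later, Parallel.ForEachAsync does what the Parallel.ForEach + async lambda combination only appears to do; it awaits each async body and caps the concurrency. A minimal sketch, reusing the question's strEvents and outEventHub names (the degree of parallelism is an arbitrary choice):
await Parallel.ForEachAsync(strEvents,
    new ParallelOptions { MaxDegreeOfParallelism = 8 },
    async (e, ct) =>
    {
        // Properly awaited by ForEachAsync, not fire-and-forget.
        await outEventHub.AddAsync(e, ct);
    });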
I hope this cleared up most things for you; if not, let me know what to improve on.
Why don't you send those events asynchronously in parallel, like this:
var tasks = new List<Task>();
foreach( var e in strEvents )
{
tasks.Add(outEventHub.AddAsync(e));
}
await Task.WhenAll(tasks);
await outEventHub.FlushAsync();
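If you also want to cap how many sends are in flight at once (the question mentions 100+ events arriving at a time), a SemaphoreSlim is a common throttle. A sketch, where the limit of 100 is an arbitrary assumption:
var throttle = new SemaphoreSlim(100); // at most 100 concurrent AddAsync calls

async Task SendThrottledAsync(string e)
{
    await throttle.WaitAsync();
    try { await outEventHub.AddAsync(e); }
    finally { throttle.Release(); } // free the slot for the next event
}

await Task.WhenAll(strEvents.Select(SendThrottledAsync));
await outEventHub.FlushAsync();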

Using Task.Yield to overcome ThreadPool starvation while implementing producer/consumer pattern

Answering the question: Task.Yield - real usages?
I proposed using Task.Yield to allow a pool thread to be reused by other tasks, in a pattern like this:
CancellationTokenSource cts;
void Start()
{
cts = new CancellationTokenSource();
// run async operation
var task = Task.Run(() => SomeWork(cts.Token), cts.Token);
// wait for completion
// after the completion handle the result/ cancellation/ errors
}
async Task<int> SomeWork(CancellationToken cancellationToken)
{
int result = 0;
bool loopAgain = true;
while (loopAgain)
{
// do something ... means a substantial work or a micro batch here - not processing a single byte
loopAgain = /* check for loop end && */ !cancellationToken.IsCancellationRequested;
if (loopAgain) {
// reschedule the task to the threadpool and free this thread for other waiting tasks
await Task.Yield();
}
}
cancellationToken.ThrowIfCancellationRequested();
return result;
}
void Cancel()
{
// request cancellation
cts.Cancel();
}
But one user wrote
I don't think using Task.Yield to overcome ThreadPool starvation while
implementing producer/consumer pattern is a good idea. I suggest you
ask a separate question if you want to go into details as to why.
Does anybody know why it is not a good idea?
There are some good points left in the comments to your question. Being the user you quoted, I'd just like to sum it up: use the right tool for the job.
Using the ThreadPool doesn't feel like the right tool for executing multiple continuous CPU-bound tasks, even if you try to organize some cooperative execution by turning them into state machines which yield CPU time to each other with await Task.Yield(). Thread switching is rather expensive; by doing await Task.Yield() in a tight loop you add significant overhead. Besides, you should never take over the whole ThreadPool, as the .NET framework (and the underlying OS process) may need it for other things. On a related note, TPL even has the TaskCreationOptions.LongRunning option that requests that the task not run on a ThreadPool thread (instead, it creates a normal thread with new Thread() behind the scenes).
That said, using a custom TaskScheduler with limited parallelism on some dedicated, out-of-pool threads with thread affinity for individual long-running tasks might be a different thing. At least, await continuations would be posted on the same thread, which should help reduce the switching overhead. This reminds me of a different problem I was trying to solve a while ago with ThreadAffinityTaskScheduler.
Still, depending on a particular scenario, it's usually better to use an existing well-established and tested tool. To name a few: Parallel Class, TPL Dataflow, System.Threading.Channels, Reactive Extensions.
There is also a whole range of existing industrial-strength solutions to deal with Publish-Subscribe pattern (RabbitMQ, PubNub, Redis, Azure Service Bus, Firebase Cloud Messaging (FCM), Amazon Simple Queue Service (SQS) etc).
After a bit of debating on the issue with other users - who are worried about the context switching and its influence on the performance - I see what they are worried about.
But I meant the do something ... inside the loop to be a substantial task - usually in the form of a message handler which reads a message from the queue and processes it. Message handlers are usually user-defined, and the message bus executes them using some sort of dispatcher. The user can implement a handler which executes synchronously (nobody knows what the user will do), and without Task.Yield that would block the thread while processing those synchronous handlers in a loop.
Not to be empty-worded, I added tests to GitHub: https://github.com/BBGONE/TestThreadAffinity
They compare the ThreadAffinityTaskScheduler, the default .NET TaskScheduler with a BlockingCollection, and the default .NET TaskScheduler with Threading.Channels.
The tests show that for ultra-short jobs the performance degradation is around 15%. To use Task.Yield without performance degradation (even a small one), avoid extremely short tasks; if a task is too short, combine several short tasks into a bigger batch.
[The price of context switch] = [context switch duration] / ([job duration]+[context switch duration]).
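For illustration with hypothetical numbers: if a context switch costs 5 µs, a 500 µs job pays 5 / (500 + 5) ≈ 1%, while a 30 µs job pays 5 / 35 ≈ 14% - close to the ~15% degradation measured for the ultra-short jobs.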
In that case the influence of task switching on performance is negligible, and it buys better task cooperation and responsiveness of the system.
For long-running tasks it is better to use a custom scheduler which executes tasks on its own dedicated thread pool (like the WorkStealingTaskScheduler).
For mixed jobs - which can contain different parts: short-running CPU-bound, asynchronous, and long-running code - it is better to split the task into subtasks:
private async Task HandleLongRunMessage(TestMessage message, CancellationToken token = default(CancellationToken))
{
// SHORT SYNCHRONOUS TASK - execute as is on the default thread (from thread pool)
CPU_TASK(message, 50);
// IO BOUND ASYNCH TASK - used as is
await Task.Delay(50);
// BUT WRAP the LONG SYNCHRONOUS TASK inside the Task
// which is scheduled on the custom thread pool
// (to save threadpool threads)
await Task.Factory.StartNew(() => {
CPU_TASK(message, 100000);
}, token, TaskCreationOptions.DenyChildAttach, _workStealingTaskScheduler);
}

Async/await performance

I'm working on performance optimization of a program which makes extensive use of async/await. Generally speaking, it downloads thousands of JSON documents over HTTP in parallel, parses them, and builds some response using this data. We experience some performance issues when handling many requests simultaneously (e.g. downloading 1000 JSONs): a simple HTTP request can then take a few minutes.
I wrote a small console app to test it on a simplified example:
class Program
{
static void Main(string[] args)
{
for (int i = 0; i < 100000; i++)
{
Task.Run(IoBoundWork);
}
Console.ReadKey();
}
private static async Task IoBoundWork()
{
var sw = Stopwatch.StartNew();
await Task.Delay(1000);
Console.WriteLine(sw.Elapsed);
}
}
And I can see similar behavior here.
The question is why "await Task.Delay(1000)" eventually takes 23 seconds.
Task.Delay isn't broken, but you're performing 100,000 tasks which each take some time. It's the call to Console.WriteLine that is causing the problem in this particular case. Each call is cheap, but they're accessing a shared resource, so they aren't very highly parallelizable.
If you remove the call to Console.WriteLine, all the tasks complete very quickly. I changed your code to return the elapsed time that each task observes, and then print just a single line of output at the end - the maximum observed time. On my computer, without any Console.WriteLine call, I see output of about 1.16 seconds, showing very little inefficiency:
using System;
using System.Linq;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
class Program
{
static void Main(string[] args)
{
ThreadPool.SetMinThreads(50000, 50000);
var tasks = Enumerable.Repeat(0, 100000)
.Select(_ => Task.Run(IoBoundWork))
.ToArray();
Task.WaitAll(tasks);
var maxTime = tasks.Max(t => t.Result);
Console.WriteLine($"Max: {maxTime}");
}
private static async Task<double> IoBoundWork()
{
var sw = Stopwatch.StartNew();
await Task.Delay(1000);
return sw.Elapsed.TotalSeconds;
}
}
You can then modify IoBoundWork to do different tasks, and see the effect. Examples of work to try:
CPU work (do something actively "hard" for the CPU, but briefly)
Synchronous sleeping (so the thread is blocked, but the CPU isn't)
Synchronous IO which doesn't have any shared bottlenecks (although that's generally hard, given that the disk or network is likely to end up being a shared resource bottleneck even if you're writing to different files etc)
Synchronous IO with a shared bottleneck such as Console.WriteLine
Asynchronous IO (await foo.WriteAsync(...) etc)
You can also try removing the call to Task.Delay(1000) or changing it. I found that by removing it entirely, the result was very small - whereas replacing it with Task.Yield was very similar to Task.Delay. It's worth remembering that as soon as your async method has to actually "pause" you're effectively doubling the task scheduling problem - instead of scheduling 100,000 operations, you're scheduling 200,000.
You'll see a different pattern in each case. Fundamentally, you're starting 100,000 tasks, asking them all to wait for a second, then asking them all to do something. That causes issues in terms of continuation scheduling that's async/await specific, but also plain resource management of "Performing 100,000 tasks each of which needs to write to the console is going to take a while."
If your problem is performance, async-await is the wrong solution.
async-await is all about availability: availability to handle the screen and user input, availability to handle HTTP requests, etc.
The synchronization work behind async-await will use more resources and take more time than simply blocking until the operation completes.
Your HTTP server will handle more requests because fewer threads will be blocked waiting for operations to complete, but each request will take slightly longer.

Best way to work on 15000 work items that need 1-2 I/O calls each

I have a C#/.NET 4.5 application that does work on around 15,000 items that are all independent of each other. Each item has a relatively small amount of CPU work to do (no more than a few milliseconds) and 1-2 I/O calls to WCF services implemented in .NET 4.5 with a SQL Server 2008 backend. I assume they will queue concurrent requests that they can't process quickly enough? These I/O operations can take anywhere from a few milliseconds to a full second. The work item then has a little more CPU work (less than 100 milliseconds) and it is done.
I am running this on a quad-core machine with hyper-threading. Using the Task Parallel Library, I am trying to get the best performance out of the machine that I can, with as little waiting on I/O as possible, by running those operations asynchronously and doing the CPU work in parallel.
Synchronously, with no parallel processing and no async operations, the application takes around 9 hours to run. I believe I can speed this up to under an hour, but I am not sure if I am going about this the right way.
What is the best way to do the work per item in .NET? Should I make 15,000 threads and have them doing all the work with context switching? Or should I just make 8 threads (as many logical cores as I have) and go about it that way? Any help on this would be greatly appreciated.
My usual suggestion is TPL Dataflow.
You can use an ActionBlock with an async operation and set the parallelism as high as you need it to be:
var block = new ActionBlock<WorkItem>(async wi =>
{
DoWork(wi);
await Task.WhenAll(DoSomeWorkAsync(wi), DoOtherWorkAsync(wi));
},
new ExecutionDataflowBlockOptions{ MaxDegreeOfParallelism = 1000 });
foreach (var workItem in workItems)
{
block.Post(workItem);
}
block.Complete();
await block.Completion;
That way you can test and tweak MaxDegreeOfParallelism until you find the number that fits your specific situation the most.
For CPU-intensive work, having higher parallelism than your core count doesn't help, but for I/O (and other async operations) it definitely does. So if your CPU-intensive work is short, I would go with at least 1000.
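If you want to give each kind of work its own limit, a pipeline of linked Dataflow blocks is a natural extension. This is a sketch using the same placeholder names (WorkItem, DoWork, DoSomeWorkAsync); both MaxDegreeOfParallelism values are assumptions to tune:
// CPU-bound stage: parallelism capped at the core count.
var cpuStage = new TransformBlock<WorkItem, WorkItem>(wi =>
{
    DoWork(wi);
    return wi;
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = Environment.ProcessorCount });

// I/O-bound stage: much higher parallelism is fine for truly async I/O.
var ioStage = new ActionBlock<WorkItem>(wi => DoSomeWorkAsync(wi),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 1000 });

cpuStage.LinkTo(ioStage, new DataflowLinkOptions { PropagateCompletion = true });

foreach (var workItem in workItems)
    cpuStage.Post(workItem);
cpuStage.Complete();
await ioStage.Completion;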
You definitely don't want to kick off 15000 threads and let them all thrash it out. If you can make your I/O methods completely async - meaning I/O completion ports based - then you can get some very nice controlled parallelism going on.
If you have to tie up threads whilst waiting for I/O you're going to be massively limiting your ability to process the items.
TaskFactory taskFactory = new TaskFactory(new WorkStealingTaskScheduler(Environment.ProcessorCount));
public Job[] GetJobs() { return new Job[15000]; }
public async Task ProcessJobs(Job[] jobs)
{
var jobTasks = jobs.Select(j => StartJob(j));
await Task.WhenAll(jobTasks);
}
private async Task StartJob(Job j)
{
var initialCpuResults = await taskFactory.StartNew(() => j.DoInitialCpuWork());
var wcfResult = await DoIOCalls(initialCpuResults);
await taskFactory.StartNew(() => j.DoLastCpuWork(wcfResult));
}
private async Task<bool> DoIOCalls(Result r)
{
// Sequential...
await myWcfClientProxy.DoIOAsync(...); // These MUST be fully IO completion port based methods [not Task.Run etc] to achieve good throughput
await mySQLServerClient.DoIOAsync(...);
// or in Parallel...
// await Task.WhenAll(myWcfClientProxy.DoIOAsync(...), mySQLServerClient.DoIOAsync(...));
return true;
}
