Continuous parallel work between two collections of objects - C#

I am trying to simulate work between two collections asynchronously and in parallel. I have a ConcurrentQueue of customers and a collection of workers. I need each worker to take a Customer from the queue, perform work on it, and, once done, take another customer right away.
I decided to use an event-based approach: the workers perform an action on a customer, the customer holds an event handler that fires when the customer is done, and that event would hopefully fire the DoWork method once again. That way I can parallelize the workers taking customers from the queue. But I can't figure out how to pass a customer into DoWork in OnCustomerFinished()! The worker obviously shouldn't depend on a queue of customers.
public class Worker
{
    public async Task DoWork(ConcurrentQueue<Customer> cust)
    {
        await Task.Run(async () =>
        {
            if (cust.TryDequeue(out Customer temp))
            {
                // Note: without the await, Task.Delay(5000) would do nothing here
                await Task.Delay(5000);
                temp.IsDone = true;
            }
        });
    }

    public void OnCustomerFinished()
    {
        // This is where I'm stuck
        DoWork(~HOW TO PASS THE QUEUE OF CUSTOMER HERE?~);
    }
}
// Edit - This is the Customer class
public class Customer
{
    private bool _isDone = false;

    public EventHandler<EventArgs> CustomerFinished;

    public bool IsDone
    {
        private get { return _isDone; }
        set
        {
            _isDone = value;
            if (_isDone)
            {
                OnCustomerFinished();
            }
        }
    }

    protected virtual void OnCustomerFinished()
    {
        if (CustomerFinished != null)
        {
            CustomerFinished(this, EventArgs.Empty);
        }
    }
}

.NET already has pub/sub and worker mechanisms in the form of Dataflow blocks and, more recently, Channels.
Dataflow
Dataflow blocks from the System.Threading.Tasks.Dataflow namespace are the "old" way (2012 and later) of building workers and pipelines of workers. Each block has an input and/or output buffer. Each message posted to the block is processed by one or more tasks in the background. For blocks with outputs, the output of each iteration is stored in the output buffer.
Blocks can be combined into pipelines similar to a CMD or PowerShell pipeline, with each block running on its own task(s).
In the simplest case an ActionBlock can be used as a worker:
void ProcessCustomer(Customer customer)
{
    ....
}

var block = new ActionBlock<Customer>(cust => ProcessCustomer(cust));
That's it. There's no need to manually dequeue or poll.
The producer method can start sending customer instances to the block. Each of them will be processed in the background, in the order they were posted:
foreach (var customer in bigCustomerList)
{
    block.Post(customer);
}
When done, e.g. when the application terminates, the producer only needs to call Complete() on the block and wait for any remaining entries to complete.
block.Complete();
await block.Completion;
Blocks can work with asynchronous methods too.
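For instance, a hedged sketch of an async worker block with several concurrent workers - ProcessCustomerAsync and the degree of parallelism are illustrative assumptions, not part of the original question:

// Hypothetical async handler; stands in for whatever "work on a customer" means.
async Task ProcessCustomerAsync(Customer customer)
{
    await Task.Delay(5000);   // simulate work
    customer.IsDone = true;
}

// Up to 4 customers are processed concurrently by the block's worker tasks.
var block = new ActionBlock<Customer>(
    cust => ProcessCustomerAsync(cust),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4 });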
Channels
Channels are a newer mechanism, built into .NET Core 3 and available as a NuGet package for earlier .NET Framework and .NET Core versions. The producer writes to a channel using a ChannelWriter and the consumer reads from the channel using a ChannelReader. This may seem a bit strange until you realize it allows some powerful patterns.
The producer could be something like this, e.g. a producer that "produces" all customers in a list with a 0.5 sec delay:

ChannelReader<Customer> Producer(IEnumerable<Customer> customers, CancellationToken token = default)
{
    // Create a channel that can buffer an unlimited number of entries
    var channel = Channel.CreateUnbounded<Customer>();
    var writer = channel.Writer;
    // Start a background task to produce the data
    _ = Task.Run(async () =>
    {
        foreach (var customer in customers)
        {
            // Exit gracefully in case of cancellation
            if (token.IsCancellationRequested)
            {
                return;
            }
            await writer.WriteAsync(customer, token);
            await Task.Delay(500);
        }
    }, token)
    // Ensure we complete the writer no matter what
    .ContinueWith(t => writer.Complete(t.Exception));
    return channel.Reader;
}
That's a bit more involved but notice that the only thing the function needs to return is the ChannelReader. The cancellation token is useful for terminating the producer early, eg after a timeout or if the application closes.
When the writer completes, all the channel's readers will also complete.
The consumer only needs that ChannelReader to work:

async Task Consumer(ChannelReader<Customer> reader, CancellationToken token = default)
{
    while (await reader.WaitToReadAsync(token))
    {
        while (reader.TryRead(out var customer))
        {
            // Process the customer
        }
    }
}
Should the writer complete, WaitToReadAsync will return false and the loop will exit.
In .NET Core 3 the ChannelReader supports IAsyncEnumerable through the ReadAllAsync method, making the code even simpler:

async Task Consumer(ChannelReader<Customer> reader, CancellationToken token = default)
{
    await foreach (var customer in reader.ReadAllAsync(token))
    {
        // Process the customer
    }
}
The reader created by the producer can be passed directly to the consumer:

var customers = new[] { ...... };
var reader = Producer(customers);
await Consumer(reader);
Intermediate steps can read from a previous channel reader and publish data to the next, e.g. an order generator:

ChannelReader<Order> CustomerOrders(ChannelReader<Customer> reader, CancellationToken token = default)
{
    var channel = Channel.CreateUnbounded<Order>();
    var writer = channel.Writer;
    // Start a background task to produce the data
    _ = Task.Run(async () =>
    {
        await foreach (var customer in reader.ReadAllAsync(token))
        {
            // Somehow create an order for the customer
            var order = new Order(...);
            await writer.WriteAsync(order, token);
        }
    }, token)
    // Ensure we complete the writer no matter what
    .ContinueWith(t => writer.Complete(t.Exception));
    return channel.Reader;
}
Again, all we need to do is pass the readers from one method to the next:

var customers = new[] { ...... };
var customerReader = Producer(customers);
var orderReader = CustomerOrders(customerReader);
await ConsumeOrders(orderReader);
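Since the original question wants several workers draining the same queue, multiple consumers can read from a single reader concurrently; a minimal sketch, with the worker count of 4 as an arbitrary assumption:

var customerReader = Producer(customers);
// Each Consumer call competes for items from the same reader,
// so four "workers" drain the queue concurrently.
var workers = Enumerable.Range(0, 4)
    .Select(_ => Consumer(customerReader))
    .ToArray();
await Task.WhenAll(workers);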


Parallel.ForEach MaxDegreeOfParallelism Strange Behavior with Increasing "Chunking"

I'm not sure if the title makes sense, it was the best I could come up with, so here's my scenario.
I have an ASP.NET Core app that I'm using more as a shell and for DI configuration. In Startup it adds a bunch of IHostedServices as singletons, along with their dependencies, also as singletons, with minor exceptions for SqlConnection and DbContext which we'll get to later. The hosted services are groups of similar services that:
Listen for incoming reports from GPS devices and put into a listening buffer.
Parse items out of the listening buffer and put into a parsed buffer.
Eventually there's a single service that reads the parsed buffer and actually processes the parsed reports. It does this by passing the report it took out of the buffer to a handler and awaiting its completion before moving to the next. This has worked well for the past year, but it appears we're running into a scalability issue now because it's processing one report at a time, and the average time to process is 62 ms on the server, which includes the Dapper trip to the database to get the data needed and the EF Core trip to save changes.
If, however, the handler decides that a report's information requires triggering background jobs, then I suspect it takes 100 ms or more to complete. Over time, the buffer fills up faster than the handler can process it, to the point of holding tens if not hundreds of thousands of reports until they can be processed. This is an issue because notifications are delayed, and because it has the potential for data loss if the buffer is still full by the time the server restarts at midnight.
All that being said, I'm trying to figure out how to make the processing parallel. After lots of experimentation yesterday, I settled on using Parallel.ForEach over the buffer using GetConsumingEnumerable(). This works well, except for a weird behavior I don't know what to do about, or even what to call. As the buffer is filled and the ForEach iterates over it, it will begin to "chunk" the processing into ever-increasing multiples of two. The size of the chunking is affected by the MaxDegreeOfParallelism setting. For example (N# = next # of reports in buffer):
MDP = 1
N3 = 1 at a time
N6 = 2 at a time
N12 = 4 at a time
...
MDP = 2
N6 = 1 at a time
N12 = 2 at a time
N24 = 4 at a time
...
MDP = 4
N12 = 1 at a time
N24 = 2 at a time
N48 = 4 at a time
...
MDP = 8 (my CPU core count)
N24 = 1 at a time
N48 = 2 at a time
N96 = 4 at a time
...
This is arguably worse than the serial execution I have now, because by the end of the day it will buffer and wait for, say, half a million reports before actually processing them.
Is there a way to fix this? I'm not very experienced with Parallel.ForEach, so from my point of view this is strange behavior. Ultimately I'm looking for a way to process the reports in parallel as soon as they are in the buffer, so if there are other ways to accomplish this I'm all ears. This is roughly what I have for the code. The handler that processes the reports uses IServiceProvider to create a scope and get an instance of SqlConnection and DbContext. Thanks in advance for any suggestions!
public sealed class GpsReportService :
IHostedService {
private readonly GpsReportBuffer _buffer;
private readonly Config _config;
private readonly GpsReportHandler _handler;
private readonly ILogger _logger;
public GpsReportService(
GpsReportBuffer buffer,
Config config,
GpsReportHandler handler,
ILogger<GpsReportService> logger) {
_buffer = buffer;
_config = config;
_handler = handler;
_logger = logger;
}
public Task StartAsync(
CancellationToken cancellationToken) {
_logger.LogInformation("GPS Report Service => starting");
Task.Run(Process, cancellationToken).ConfigureAwait(false);// Is ConfigureAwait here correct usage?
_logger.LogInformation("GPS Report Service => started");
return Task.CompletedTask;
}
public Task StopAsync(
CancellationToken cancellationToken) {
_logger.LogInformation("GPS Parsing Service => stopping");
_buffer.CompleteAdding();
_logger.LogInformation("GPS Parsing Service => stopped");
return Task.CompletedTask;
}
// ========================================================================
// Utilities
// ========================================================================
private void Process() {
var options = new ParallelOptions {
MaxDegreeOfParallelism = 8,
CancellationToken = CancellationToken.None
};
Parallel.ForEach(_buffer.GetConsumingEnumerable(), options, async report => {
try {
await _handler.ProcessAsync(report).ConfigureAwait(false);
} catch (Exception e) {
if (_config.IsDevelopment) {
throw;
}
_logger.LogError(e, "GPS Report Service");
}
});
}
private async Task ProcessAsync() {
while (!_buffer.IsCompleted) {
try {
var took = _buffer.TryTake(out var report, 10);
if (!took) {
continue;
}
await _handler.ProcessAsync(report!).ConfigureAwait(false);
} catch (Exception e) {
if (_config.IsDevelopment) {
throw;
}
_logger.LogError(e, "GPS Report Service");
}
}
}
}
public sealed class GpsReportBuffer :
BlockingCollection<GpsReport> {
}
You can't use Parallel methods with async delegates - at least, not yet.
Since you already have a "pipeline" style of architecture, I recommend looking into TPL Dataflow. A single ActionBlock may be all that you need, and once you have that working, other blocks in TPL Dataflow may replace other parts of your pipeline.
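A rough sketch of what that could look like here, reusing the question's _handler and _logger fields - the block itself and its options are an assumption, not code from the question:

var block = new ActionBlock<GpsReport>(
    async report =>
    {
        try {
            await _handler.ProcessAsync(report).ConfigureAwait(false);
        } catch (Exception e) {
            _logger.LogError(e, "GPS Report Service");
        }
    },
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 8 });

// Producers call block.Post(report) instead of adding to the buffer;
// on shutdown: block.Complete(); await block.Completion;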
If you prefer to stick with your existing buffer, then you should use asynchronous concurrency instead of Parallel:
private async Task ProcessAsync() {
  var throttler = new SemaphoreSlim(8);
  var tasks = _buffer.GetConsumingEnumerable()
      .Select(async report =>
      {
        await throttler.WaitAsync();
        try {
          await _handler.ProcessAsync(report).ConfigureAwait(false);
        } catch (Exception e) {
          if (_config.IsDevelopment) {
            throw;
          }
          _logger.LogError(e, "GPS Report Service");
        }
        finally {
          throttler.Release();
        }
      })
      .ToList();
  await Task.WhenAll(tasks);
}
You have an event stream processing/dataflow problem, not a parallelism problem. If you use the appropriate classes, like the Dataflow blocks, Channels, or Reactive Extensions, the problem is simplified a lot.
Even if you want to use a single buffer and a fat worker method though, the appropriate buffer class is the asynchronous Channel, not BlockingCollection. The code could become as simple as:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    await foreach (GpsMessage msg in _reader.ReadAllAsync(stoppingToken))
    {
        await _handler.ProcessAsync(msg);
    }
}
The first option shows how to use Dataflow to create a pipeline. The second, how to use a Channel instead of BlockingCollection to process multiple queued items concurrently.
A pipeline with Dataflow
Once you break the process into independent methods, it's easy to create a pipeline of processing steps using any library.
async IAsyncEnumerable<GpsMessage> Poller(DateTime time, IList<Device> devices, CancellationToken token = default)
{
    foreach (var device in devices)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        var msg = await device.ReadMessage();
        yield return msg;
    }
}
GpsReport Parser(GpsMessage msg)
{
//Do some parsing magic.
return report;
}
async Task<GpsReport> Enrich(GpsReport report, string connectionString, CancellationToken token = default)
{
    // Depend on connection pooling to eliminate the cost of connections.
    // We may have to use a pool of opened connections otherwise.
    using var con = new SqlConnection(connectionString);
    // Dapper takes the cancellation token through a CommandDefinition
    var extraData = await con.QueryAsync<Extra>(
        new CommandDefinition(sql, new { deviceId = report.DeviceId }, cancellationToken: token));
    report.Extra = extraData;
    return report;
}
async Task BulkImport(GpsReport[] reports, CancellationToken token = default)
{
    using var bcp = new SqlBulkCopy(...);
    using var reader = ObjectReader.Create(reports);
    ...
    await bcp.WriteToServerAsync(reader, token);
}
In the BulkImport method I use FastMember's ObjectReader to create an IDataReader wrapper over the reports so I can use them with SqlBulkCopy. Another option would be to convert them to a DataTable, but that would create an extra copy of the data in memory.
Combining all these with Dataflow is relatively easy.
var execOptions = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 10
};

// Note: TransformManyBlock accepts an IAsyncEnumerable-returning delegate
// in newer versions of the Dataflow package (.NET 6+).
_poller = new TransformManyBlock<DateTime, GpsMessage>(time => Poller(time, devices));
_parser = new TransformBlock<GpsMessage, GpsReport>(b => Parser(b), execOptions);
_enricher = new TransformBlock<GpsReport, GpsReport>(rpt => Enrich(rpt, connStr), execOptions);
_batch = new BatchBlock<GpsReport>(50);
_bcpBlock = new ActionBlock<GpsReport[]>(reports => BulkImport(reports));
Each block has an input and/or output buffer (ActionBlock has only an input buffer). Each block processes the messages in its input buffer with one or more worker tasks. By default, each block uses only one worker task, but that can be changed. The message order is maintained, so if we use e.g. 10 worker tasks for the parser block, the messages will still be emitted in the order they were received.
Next comes linking the blocks:

var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
_poller.LinkTo(_parser, linkOptions);
_parser.LinkTo(_enricher, linkOptions);
_enricher.LinkTo(_batch, linkOptions);
_batch.LinkTo(_bcpBlock, linkOptions);
After that, a timer can be used to "ping" the head block, the poller, whenever we want:
private void Ping(object state)
{
    _poller.Post(DateTime.Now);
}

public Task StartAsync(CancellationToken stoppingToken)
{
    _logger.LogInformation("Timed Hosted Service running.");
    _timer = new Timer(Ping, null, TimeSpan.Zero,
        TimeSpan.FromSeconds(5));
    return Task.CompletedTask;
}
To stop the pipeline gracefully, we call Complete() on the head block and await the Completion task of the last block. Assuming the hosted service is similar to the timed background service example:

public async Task StopAsync(CancellationToken cancellationToken)
{
    ....
    _timer?.Change(Timeout.Infinite, 0);
    _poller.Complete();
    await _bcpBlock.Completion;
    ...
}
Using Channel as an Async queue
A Channel is a far better alternative for asynchronous publisher/subscriber scenarios than BlockingCollection. Roughly, it's an asynchronous queue that goes to extremes to prevent the publisher from reading, or the subscriber from writing, by forcing callers to use the ChannelWriter and ChannelReader classes. In fact, it's quite common to only pass those classes around, never the Channel instance itself.
In your publishing code, you can create a Channel<T> and pass its Reader to the GpsReportService service. Let's assume the publisher is another service that implements an IGpsPublisher interface:

public interface IGpsPublisher
{
    ChannelReader<GpsMessage> Reader { get; }
}

and the implementation:

Channel<GpsMessage> _channel = Channel.CreateUnbounded<GpsMessage>();

public ChannelReader<GpsMessage> Reader => _channel.Reader;
private async void Ping(object state)
{
    // Note: devices and token are assumed to be fields on the publisher
    // service; they are not shown in the original answer.
    foreach (var device in devices)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        var msg = await device.ReadMessage();
        await _channel.Writer.WriteAsync(msg);
    }
}
public Task StartAsync(CancellationToken stoppingToken)
{
    _timer = new Timer(Ping, null, TimeSpan.Zero,
        TimeSpan.FromSeconds(5));
    return Task.CompletedTask;
}

public Task StopAsync(CancellationToken cancellationToken)
{
    _timer?.Change(Timeout.Infinite, 0);
    _channel.Writer.Complete();
    return Task.CompletedTask;
}
This can be passed to GpsReportService as a dependency that will be resolved by the DI container:

public sealed class GpsReportService : BackgroundService
{
    private readonly ChannelReader<GpsMessage> _reader;

    public GpsReportService(
        IGpsPublisher publisher,
        ...)
    {
        _reader = publisher.Reader;
        ...
    }
And used:

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    await foreach (GpsMessage msg in _reader.ReadAllAsync(stoppingToken))
    {
        await _handler.ProcessAsync(msg);
    }
}
When the publisher completes, the subscriber loop will also complete after all messages are processed.
To process in parallel, you can start multiple loops concurrently:
async Task Process(ChannelReader<GpsMessage> reader, CancellationToken token)
{
    await foreach (GpsMessage msg in reader.ReadAllAsync(token))
    {
        await _handler.ProcessAsync(msg);
    }
}

protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
    var tasks = Enumerable.Range(0, 10)
        .Select(_ => Process(_reader, stoppingToken))
        .ToArray();
    await Task.WhenAll(tasks);
}
Explaining the pipeline
I have a similar situation: every 15 minutes I request air ticket sales reports from airlines (actually GDSs), parse them to extract data and ticket numbers, download the ticket record for each ticket to get some extra data and save everything to the database. I have to do that for 20+ cities (ticket reports are per city) with each report having from 10 to over 100K tickets.
This almost begs for a pipeline. Using your example, you can create a pipeline with the following steps/blocks:
Listen for GPS messages and emit the unparsed message.
Parse the message and emit the parsed message
Load any extra data needed per message and emit a combined record
Handle the combined record and emit the result
(Optional) batch results
Save the results to the database
All three options (Dataflow, Channels, Rx) take care of buffering between the steps. Dataflow is a some-assembly-required library for pipelines processing independent events, Rx is ready-made for analyzing streams of events where time is important (e.g. to calculate average speed in a sliding window), and Channels are Lego bricks that can do anything but need to be put together.
Why not Parallel.ForEach
Parallel.ForEach is meant for data parallelism, not async operations. It's meant to process large chunks of in-memory data, independent of each other. Amdahl's Law explains that parallelization benefits are limited by the synchronous part of an operation, so all data-parallelism libraries try to reduce that part by partitioning the data, and by using one core/machine/node to process each partition.
Parallel.ForEach also works by partitioning the data and using roughly one worker task per CPU core, to reduce synchronization between cores. It will even use the current thread, which leads to the mistaken assumption that it's blocking. When all cores are busy, why not use the thread? It won't be able to run anyway.
Parallel.ForEach employs chunk partitioning by default, which is intended to reduce synchronization overhead in CPU-intensive applications, but can result in problematic behavior in some usage scenarios. Chunk partitioning can be disabled by passing a Partitioner<T> instead of an IEnumerable<T> as the argument:
Parallel.ForEach(Partitioner.Create(_buffer.GetConsumingEnumerable(),
EnumerablePartitionerOptions.NoBuffering), options, ...
You can also find a custom partitioner, tailored specifically for BlockingCollection<T>s, in this article: ParallelExtensionsExtras Tour – #4 – BlockingCollectionExtensions
That said, Parallel.ForEach is not async-friendly, meaning that it doesn't understand async delegates. The lambda passed is async void, which is something to avoid. So I would recommend using an ActionBlock<T> instead.

Creating a class that runs tasks sequentially [duplicate]

I know that asynchronous programming has seen a lot of changes over the years. I'm somewhat embarrassed that I let myself get this rusty at just 34 years old, but I'm counting on StackOverflow to bring me up to speed.
What I am trying to do is manage a queue of "work" on a separate thread, but in such a way that only one item is processed at a time. I want to post work on this thread and it doesn't need to pass anything back to the caller. Of course I could simply spin up a new Thread object and have it loop over a shared Queue object, using sleeps, interrupts, wait handles, etc. But I know things have gotten better since then. We have BlockingCollection, Task, async/await, not to mention NuGet packages that probably abstract a lot of that.
I know that "What's the best..." questions are generally frowned upon so I'll rephrase it by saying "What is the currently recommended..." way to accomplish something like this using built-in .NET mechanisms preferably. But if a third party NuGet package simplifies things a bunch, it's just as well.
I considered a TaskScheduler instance with a fixed maximum concurrency of 1, but it seems there is probably a much less clunky way to do that by now.
Background
Specifically, what I am trying to do in this case is queue an IP geolocation task during a web request. The same IP might wind up getting queued for geolocation multiple times, but the task will know how to detect that and skip out early if it's already been resolved. But the request handler is just going to throw these () => LocateAddress(context.Request.UserHostAddress) calls into a queue and let the LocateAddress method handle duplicate work detection. The geolocation API I am using doesn't like to be bombarded with requests, which is why I want to limit it to a single concurrent task at a time. However, it would be nice if the approach allowed easy scaling to more concurrent tasks with a simple parameter change.
To create an asynchronous single-degree-of-parallelism queue of work, you can simply create a SemaphoreSlim initialized to one, and have the enqueuing method await the acquisition of that semaphore before starting the requested work.
public class TaskQueue
{
    private readonly SemaphoreSlim semaphore;

    public TaskQueue()
    {
        semaphore = new SemaphoreSlim(1);
    }

    public async Task<T> Enqueue<T>(Func<Task<T>> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            return await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }

    public async Task Enqueue(Func<Task> taskGenerator)
    {
        await semaphore.WaitAsync();
        try
        {
            await taskGenerator();
        }
        finally
        {
            semaphore.Release();
        }
    }
}
Of course, to have a fixed degree of parallelism other than one simply initialize the semaphore to some other number.
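For example, applied to the geolocation scenario from the question - LocateAddressAsync here is a hypothetical async version of LocateAddress:

private readonly TaskQueue geoQueue = new TaskQueue();

// Request handlers enqueue lookups; the queue runs one at a time.
public Task QueueLookup(string ipAddress)
{
    return geoQueue.Enqueue(() => LocateAddressAsync(ipAddress));
}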
Your best option as I see it is using TPL Dataflow's ActionBlock:
var actionBlock = new ActionBlock<string>(address =>
{
    if (!IsDuplicate(address))
    {
        LocateAddress(address);
    }
});

actionBlock.Post(context.Request.UserHostAddress);
TPL Dataflow is a robust, thread-safe, async-ready and very configurable actor-based framework (available as a NuGet package).
Here's a simple example for a more complicated case. Let's assume you want to:
Enable concurrency (limited to the available cores).
Limit the queue size (so you won't run out of memory).
Have both LocateAddress and the queue insertion be async.
Cancel everything after an hour.
var actionBlock = new ActionBlock<string>(async address =>
{
    if (!IsDuplicate(address))
    {
        await LocateAddressAsync(address);
    }
}, new ExecutionDataflowBlockOptions
{
    BoundedCapacity = 10000,
    MaxDegreeOfParallelism = Environment.ProcessorCount,
    CancellationToken = new CancellationTokenSource(TimeSpan.FromHours(1)).Token
});

await actionBlock.SendAsync(context.Request.UserHostAddress);
Actually you don't need to run tasks on one thread; you need them to run serially (one after another), FIFO. The TPL doesn't have a class for that, but here is my very lightweight, non-blocking implementation with tests: https://github.com/Gentlee/SerialQueue
That repo also has Servy's implementation; tests show it is twice as slow as mine, and it doesn't guarantee FIFO.
Example:
private readonly SerialQueue queue = new SerialQueue();

async Task SomeAsyncMethod()
{
    var result = await queue.Enqueue(DoSomething);
}
Use BlockingCollection<Action> to create a producer/consumer pattern with one consumer (only one thing running at a time, like you want) and one or many producers.
First define a shared queue somewhere:
BlockingCollection<Action> queue = new BlockingCollection<Action>();
In your consumer Thread or Task you take from it:
// This will block until there's an item available
Action itemToRun = queue.Take();
Then from any number of producers on other threads, simply add to the queue:
queue.Add(() => LocateAddress(context.Request.UserHostAddress));
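A minimal consumer-loop sketch to go with it - GetConsumingEnumerable blocks until items arrive and ends when CompleteAdding() is called on the queue:

Task.Run(() =>
{
    // Runs each queued action one at a time, in FIFO order.
    foreach (Action itemToRun in queue.GetConsumingEnumerable())
    {
        itemToRun();
    }
});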
I'm posting a different solution here. To be honest I'm not sure whether this is a good solution.
I'm used to using BlockingCollection to implement a producer/consumer pattern, with a dedicated thread consuming those items. It's fine if there is always data coming in and the consumer thread won't sit there doing nothing.
I encountered a scenario where one of the applications would like to send emails on a different thread, but the total number of emails is not that big.
My initial solution was to have a dedicated consumer thread (created by Task.Run()), but a lot of the time it just sits there and does nothing.
Old solution:
private readonly BlockingCollection<EmailData> _Emails =
    new BlockingCollection<EmailData>(new ConcurrentQueue<EmailData>());

// producer can add data here
public void Add(EmailData emailData)
{
    _Emails.Add(emailData);
}

public void Run()
{
    // create a consumer thread
    Task.Run(() =>
    {
        foreach (var emailData in _Emails.GetConsumingEnumerable())
        {
            SendEmail(emailData);
        }
    });
}

// sending email implementation
private void SendEmail(EmailData emailData)
{
    throw new NotImplementedException();
}
As you can see, if there are not enough emails to be sent (and that is my case), the consumer thread will spend most of its time sitting there doing nothing at all.
I changed my implementation to:
// create an empty task
private Task _SendEmailTask = Task.Run(() => { });

// caller will dispatch the email to here
// ContinueWith will use a thread pool thread (different from the
// _SendEmailTask thread) to send this email
private void Add(EmailData emailData)
{
    // Note: reassigning _SendEmailTask is not thread-safe if Add can be
    // called from multiple threads at once; see the sketch below.
    _SendEmailTask = _SendEmailTask.ContinueWith((t) =>
    {
        SendEmail(emailData);
    });
}

// actual implementation
private void SendEmail(EmailData emailData)
{
    throw new NotImplementedException();
}
It's no longer a producer/consumer pattern, but it won't have a thread sitting there doing nothing; instead, every time there is an email to send, it will use a thread pool thread to do it.
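If Add can be called from multiple threads at once, the read-modify-write of the task field needs synchronization; a minimal sketch, assuming a lock is acceptable here:

private Task _sendEmailTask = Task.CompletedTask;
private readonly object _chainLock = new object();

public void Add(EmailData emailData)
{
    lock (_chainLock)
    {
        // Append to the chain atomically, preserving FIFO order.
        _sendEmailTask = _sendEmailTask.ContinueWith(_ => SendEmail(emailData));
    }
}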
My lib. It can:
Run random in queue list
Multi queue
Run prioritize first
Re-queue
Event all queue completed
Cancel running or cancel wait for running
Dispatch event to UI thread
public interface IQueue
{
    bool IsPrioritize { get; }
    bool ReQueue { get; }

    /// <summary>
    /// Don't use async
    /// </summary>
    /// <returns></returns>
    Task DoWork();

    bool CheckEquals(IQueue queue);
    void Cancel();
}

public delegate void QueueComplete<T>(T queue) where T : IQueue;
public delegate void RunComplete();
public class TaskQueue<T> where T : IQueue
{
    readonly List<T> Queues = new List<T>();
    readonly List<T> Runnings = new List<T>();

    [Browsable(false), DefaultValue((string)null)]
    public Dispatcher Dispatcher { get; set; }

    public event RunComplete OnRunComplete;
    public event QueueComplete<T> OnQueueComplete;

    int _MaxRun = 1;
    public int MaxRun
    {
        get { return _MaxRun; }
        set
        {
            bool flag = value > _MaxRun;
            _MaxRun = value;
            if (flag && Queues.Count != 0) RunNewQueue();
        }
    }

    public int RunningCount
    {
        get { return Runnings.Count; }
    }

    public int QueueCount
    {
        get { return Queues.Count; }
    }

    public bool RunRandom { get; set; } = false;

    // need lock Queues first
    void StartQueue(T queue)
    {
        if (null != queue)
        {
            Queues.Remove(queue);
            lock (Runnings) Runnings.Add(queue);
            queue.DoWork().ContinueWith(ContinueTaskResult, queue);
        }
    }

    void RunNewQueue()
    {
        lock (Queues) // Prioritize
        {
            // ToList() avoids modifying Queues while enumerating it
            // (StartQueue removes items from Queues).
            foreach (var q in Queues.Where(x => x.IsPrioritize).ToList()) StartQueue(q);
        }

        if (Runnings.Count >= MaxRun) return; // other
        else if (Queues.Count == 0)
        {
            if (Runnings.Count == 0 && OnRunComplete != null)
            {
                if (Dispatcher != null && !Dispatcher.CheckAccess()) Dispatcher.Invoke(OnRunComplete);
                else OnRunComplete.Invoke(); // on completed
            }
            else return;
        }
        else
        {
            lock (Queues)
            {
                T queue;
                if (RunRandom) queue = Queues.OrderBy(x => Guid.NewGuid()).FirstOrDefault();
                else queue = Queues.FirstOrDefault();
                StartQueue(queue);
            }
            if (Queues.Count > 0 && Runnings.Count < MaxRun) RunNewQueue();
        }
    }

    void ContinueTaskResult(Task Result, object queue_obj) => QueueCompleted((T)queue_obj);

    void QueueCompleted(T queue)
    {
        lock (Runnings) Runnings.Remove(queue);
        if (queue.ReQueue) lock (Queues) Queues.Add(queue);
        if (OnQueueComplete != null)
        {
            if (Dispatcher != null && !Dispatcher.CheckAccess()) Dispatcher.Invoke(OnQueueComplete, queue);
            else OnQueueComplete.Invoke(queue);
        }
        RunNewQueue();
    }

    public void Add(T queue)
    {
        if (null == queue) throw new ArgumentNullException(nameof(queue));
        lock (Queues) Queues.Add(queue);
        RunNewQueue();
    }

    public void Cancel(T queue)
    {
        if (null == queue) throw new ArgumentNullException(nameof(queue));
        lock (Queues) Queues.RemoveAll(o => o.CheckEquals(queue));
        lock (Runnings) Runnings.ForEach(o => { if (o.CheckEquals(queue)) o.Cancel(); });
    }

    public void Reset(T queue)
    {
        if (null == queue) throw new ArgumentNullException(nameof(queue));
        Cancel(queue);
        Add(queue);
    }

    public void ShutDown()
    {
        MaxRun = 0;
        lock (Queues) Queues.Clear();
        lock (Runnings) Runnings.ForEach(o => o.Cancel());
    }
}
I know this thread is old, but it seems all the present solutions are extremely onerous. The simplest way I could find uses the LINQ Aggregate function to create a daisy-chained list of tasks.
var arr = new int[] { 1, 2, 3, 4, 5 };
var queue = arr.Aggregate(Task.CompletedTask,
    (prev, item) => prev.ContinueWith(antecedent => PerformWorkHere(item)));
The idea is to get your data into an IEnumerable (I'm using an int array) and then reduce that enumerable to a chain of tasks, starting with a default, completed task.
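One caveat: if the work is itself asynchronous, ContinueWith chains on the method call rather than on the returned task, so the inner task must be unwrapped. PerformWorkHereAsync below is a hypothetical async version of PerformWorkHere:

var queue = arr.Aggregate(Task.CompletedTask,
    (prev, item) => prev.ContinueWith(_ => PerformWorkHereAsync(item)).Unwrap());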

Is there a better, maybe more reliable pattern than "fire and forget" to handle variable number of asynchronous tasks concurrently?

I have following code:
while (!cancellationToken.IsCancellationRequested)
{
var connection = await listener.AcceptAsync(cancellationToken);
HandleConnectionAsync(connection, cancellationToken)
.FireAndForget(HandleException);
}
The FireAndForget is an extension method:
public static async void FireAndForget(this ValueTask task, Action<Exception> exceptionHandler)
{
try
{
await task.ConfigureAwait(false);
}
catch (Exception e)
{
exceptionHandler.Invoke(e);
}
}
The while loop is the server lifecycle. When a new connection is accepted, it starts a "background task" to handle that connection, and then the while loop goes back to accepting new connections without awaiting anything - without pausing the lifecycle.
I cannot await HandleConnectionAsync (pause the lifecycle) here, because I want to immediately accept another connection (if there is one) and be able to handle multiple connections concurrently. HandleConnectionAsync is I/O bound and handles one connection at a time until it is closed (the task completes after some time).
The connections have to be handled separately - I don't want a situation where an error while handling one connection has any influence on other connections.
The "fire and forget" solution I have here works, but the general rule is to always await asynchronous methods and never use async void.
It seems like I've broken the rules, so is there a better, maybe more reliable way to handle a variable number of asynchronous I/O bound tasks (the number of tasks varies over time) concurrently in the situation described here?
More information:
Each call to AcceptAsync allocates system resources even before returning the connection, and I want to avoid that whenever possible (the connection may not be returned for hours - the code may "await" for hours - until some external client decides to connect to my server). It is better to assume that this is a method I don't want called concurrently/in parallel - just one AcceptAsync at a time is enough.
Please take into account that I can have millions of clients per day connecting to and disconnecting from my server, and the server (the while loop) can run for many, many days.
I don't know how many connections I will need to handle at a specific time
I do know the maximum number of connections my program will be able to handle concurrently
If I hit the maximum number of connections, AcceptAsync won't return a new connection until some other active connection closes, so I don't need to worry about that; but any solution based on this limit has to take into account that active connections may close and I still need to handle new ones - the number of connections varies over time. "Fire and forget" has no issues with that.
The code for HandleConnectionAsync is not relevant - it just handles one connection at a time until it is closed (the task completes after some time) and is I/O bound. Of course we can start multiple HandleConnectionAsync tasks to handle multiple connections concurrently - which is what I did with "fire and forget".
I'm assuming that changing to something like SignalR isn't an acceptable solution. That would be my first recommendation.
Custom server sockets is a scenario where some kind of "fire and forget" is acceptable. I'm considering adding a "task manager" kind of type to AsyncEx to make this kind of solution easier, but haven't done it yet.
The bottom line is that you need to manage your list of connections yourself. The "connection" object can include a Task that represents the handling loop; that's fine. It's also useful (especially for debugging or management purposes) to have other properties on there as well, such as the remote IP.
So I would approach it something like this:
private readonly object _mutex = new object();
private readonly List<State> _connections = new List<State>();

private void Add(State state)
{
    lock (_mutex)
        _connections.Add(state);
}

private void Remove(State state)
{
    lock (_mutex)
        _connections.Remove(state);
}

public async Task RunAsync(CancellationToken cancellationToken)
{
    while (true)
    {
        var connection = await listener.AcceptAsync(cancellationToken);
        Add(new State(this, connection, cancellationToken));
    }
}

private sealed class State
{
    private readonly Parent _parent;

    public State(Parent parent, Connection connection, CancellationToken cancellationToken)
    {
        _parent = parent;
        Task = ExecuteAsync(connection, cancellationToken);
    }

    // Not static: it needs _parent to remove itself when done.
    private async Task ExecuteAsync(Connection connection, CancellationToken cancellationToken)
    {
        try { await HandleConnectionAsync(connection, cancellationToken); }
        finally { _parent.Remove(this); }
    }

    public Task Task { get; }

    // other properties as desired, e.g., RemoteAddress
}
You now have a collection of connections. You can either ignore the tasks in the State objects (as the code above is doing), which is just like fire-and-forget. Or you can await them all at some point. E.g.:
public async Task RunAsync(CancellationToken cancellationToken)
{
    try
    {
        while (true)
        {
            var connection = await listener.AcceptAsync(cancellationToken);
            Add(new State(this, connection, cancellationToken));
        }
    }
    catch (OperationCanceledException)
    {
        // Wait for all connections to cancel.
        // I'm not really sure why you would *want* to do this, though.
        List<State> connections;
        lock (_mutex) { connections = _connections.ToList(); }
        await Task.WhenAll(connections.Select(x => x.Task));
    }
}
Then it's easy to extend the State object so you can do things that are sometimes useful for a server app to do, e.g.:
List all remote addresses this server has connections to.
Wait until a specific connection is done.
...
Notes:
Use one pattern for cancellation. Passing the token will result in an OperationCanceledException, which is the normal cancellation pattern. The code was also formerly doing a while (!IsCancellationRequested), resulting in a successful completion on cancellation, which is not the normal cancellation pattern. So I removed that, so the code is no longer using two cancellation patterns.
When working with raw sockets, in the general case, you need to be constantly reading (even when you're writing) and periodically writing (even if you have no data to send). So your HandleConnectionAsync should start an asynchronous reader and an asynchronous writer and then use Task.WhenAll (see the sketch after these notes).
I removed the call to HandleException because (probably) whatever it does should be handled by State.ExecuteAsync. It's not hard to add it back in if necessary.
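A minimal sketch of that shape, with ReceiveLoopAsync and SendLoopAsync as hypothetical stand-ins for the real read/write loops:

private static async Task HandleConnectionAsync(Connection connection, CancellationToken cancellationToken)
{
    // Run the reader and writer concurrently; the handler completes when both do.
    var reading = ReceiveLoopAsync(connection, cancellationToken);
    var writing = SendLoopAsync(connection, cancellationToken);
    await Task.WhenAll(reading, writing);
}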
If there is a limit to the maximum number of allowed concurrent tasks, you should use SemaphoreSlim:
int allowedConcurrent = //..
var semaphore = new SemaphoreSlim(allowedConcurrent);
var tasks = new List<Task>();

while (!cancellationToken.IsCancellationRequested)
{
    Func<Task> func = async () =>
    {
        var connection = await listener.AcceptAsync(cancellationToken);
        try
        {
            await HandleConnectionAsync(connection, cancellationToken);
        }
        finally
        {
            // Release even if the handler throws, or the slot leaks.
            semaphore.Release();
        }
    };
    await semaphore.WaitAsync(); // Will return immediately if the number of concurrent tasks does not exceed allowed
    tasks.Add(func());
}

await Task.WhenAll(tasks);
This will accumulate the tasks into a list, then Task.WhenAll can wait for them all to complete. (Note that the list grows for the lifetime of the loop, which may be a concern for a server that runs for days.)
First things first:
Don't do async void...
Then you can implement a producer/consumer pattern for this. The pseudocode below is just a guide; you need to make sure your Consumer is a singleton in your app.
public class Data
{
    public Uri Url { get; set; }
}

public class Producer
{
    private Consumer _consumer = new Consumer();

    public void DoStuff()
    {
        var data = new Data();
        _consumer.Enqueue(data);
    }
}

public class Consumer
{
    private readonly List<Data> _toDo = new List<Data>();
    private bool _stop = false;

    public Consumer()
    {
        Task.Factory.StartNew(Loop);
    }

    private async Task Loop()
    {
        while (!_stop)
        {
            Data toDo = null;
            lock (_toDo)
            {
                if (_toDo.Any())
                {
                    toDo = _toDo.First();
                    _toDo.RemoveAt(0);
                }
            }
            if (toDo != null)
            {
                await DoSomething(toDo);
            }
            Thread.Sleep(TimeSpan.FromSeconds(1));
        }
    }

    private async Task DoSomething(Data toDo)
    {
        // YOUR ASYNC STUFF HERE
    }

    public void Enqueue(Data data)
    {
        lock (_toDo)
        {
            _toDo.Add(data);
        }
    }
}
So your calling method produces what you need for the background task, and the consumer performs it; that's another fire and forget.
You should also consider what happens if something goes wrong at an application level: should you persist the Data enqueued via Consumer.Enqueue() so that if the app restarts it can do the missing jobs?
Hope this helps

Producer/consumer pattern in .NET with the ability to wait for submitted tasks/jobs to complete

I am trying to get my head around the following design, but fail to get a clear picture.
I have a number of producers submitting tasks/jobs to a queue. A consumer/worker would then pick these up and complete them. For now, there is only one consumer/worker.
So far, this sounds like the standard producer/consumer pattern which could be done with a BlockingCollection.
However, some producers might want to submit a task/job and be able to wait for its completion (or submit multiple tasks/jobs and wait for some or all of them, etc.), while other producers would just "fire&forget" their tasks/jobs.
(Note that this is not waiting for the queue to be empty, but waiting for a particular task/job).
How would this be done? In all examples I have seen, producers just post data to the queue using BlockingCollection.Add().
Any help would be highly appreciated.
A common approach is to wrap your work operations using a TaskCompletionSource whose Task can be returned to the caller and awaited on for completion.
public class ProducerConsumerQueue
{
    private readonly BlockingCollection<Action> queue = new BlockingCollection<Action>();

    public Task Produce(Action work)
    {
        var tcs = new TaskCompletionSource<bool>();
        Action action = () =>
        {
            try
            {
                work();
                tcs.SetResult(true);
            }
            catch (Exception ex)
            {
                tcs.SetException(ex);
            }
        };
        queue.Add(action);
        return tcs.Task;
    }

    public void RunConsumer(CancellationToken token)
    {
        while (true)
        {
            token.ThrowIfCancellationRequested();
            var action = queue.Take(token);
            action();
        }
    }
}
That said, you should consider leveraging the task infrastructure provided by TPL itself, rather than coming up with your own structures. If your only requirement is having a bounded number of consumers, you could use a LimitedConcurrencyLevelTaskScheduler.
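A brief usage sketch of the queue above: a producer that cares about completion awaits the returned task, while a fire-and-forget producer simply ignores it (DoJob and the job variables are hypothetical):

var pcq = new ProducerConsumerQueue();

// Fire & forget: ignore the returned task.
pcq.Produce(() => DoJob(jobA));

// Submit and wait for this particular job to complete.
await pcq.Produce(() => DoJob(jobB));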

Nested transactions in StatefulService to save async state of aborted transaction

I have a ReliableQueue<MyTask> which is enqueued into in a different scope, and I'm dequeuing tasks in a transaction, then I want to run some long-running calculations on each task.
The problem here is that if my queue transaction is aborted, I don't want to lose the instance of the long calculation. It will keep running in the background, independently, and I just want to check whether it has completed once I retry processing the tasks.
Code segment:
protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var queue = await StateManager.GetOrAddAsync<IReliableQueue<MyTask>>(...);
    while (!cancellationToken.IsCancellationRequested)
    {
        using (var transaction = ...)
        {
            var myTaskConditional = await queue.TryDequeueAsync(transaction);
            if (!myTaskConditional.HasValue)
            {
                break;
            }
            await DoLongProcessing(myTaskConditional.Value);
            await transaction.CommitAsync();
        }
    }
}
private async Task DoLongProcessing(MyTask myTask)
{
    var dict = await StateManager.GetOrAddAsync<IReliableDictionary<Guid, Guid>>(...);
    ConditionalValue<Guid> guidConditional;
    using (var transaction = ...)
    {
        guidConditional = await dict.TryGetValueAsync(transaction, myTask.TaskGuid);
        if (guidConditional.HasValue)
        {
            await transaction.CommitAsync();
            // continue handling knowing we already started; continue to wait for the result
            await WaitForCalculationFinish(guidConditional.Value);
        }
        else
        {
            // start handling knowing we never handled this task; create a new guid and store it in dict
            // (the original code passed runGuid to itself; passing the task is the likely intent)
            var runGuid = await StartRunningCalculation(myTask);
            await dict.AddAsync(transaction, myTask.TaskGuid, runGuid);
            await transaction.CommitAsync();
            await WaitForCalculationFinish(runGuid);
        }
    }
}
My concern: I'm using nested transactions and that's not recommended.
Is there actually a risk of deadlock here if I'm using the transactions solely for the ReliableQueue or ReliableDictionary separately?
Is there a better intended design for what I'm trying to achieve?
You should not be doing anything long-running within a transaction. Take a look at the priority queue service I published. Take the item out of the queue and place it into a collection while doing work; then, when done, either put it back into the queue or complete the work.
