I am learning about the TPL Dataflow Library. So far it's exactly what I was looking for.
I've created a simple class (below) that performs the following functions:
Upon execution of ImportPropertiesForBranch, I go to a third-party API and get a list of properties.
An XML list is returned and deserialized into an array of property data (id, API endpoint, lastupdated). There are around 400+ properties (as in houses).
I then use a Parallel.For to SendAsync the property data into my propertyBufferBlock
The propertyBufferBlock is linked to a propertyXmlBlock (which itself is a TransformBlock).
The propertyXmlBlock then (asynchronously) goes back to the API (using the api endpoint supplied in the property data) and fetches the property xml for deserialization.
Once the XML is returned and becomes available, we can then deserialize it.
Later, I'll add more TransformBlocks to persist it to a data store.
So my questions are:
Are there any potential bottlenecks or areas of the code that could be troublesome? I'm aware that I've not included any error handling or cancellation (this is to come).
Is it ok to await async calls inside a TransformBlock, or is this a bottleneck?
Although the code works, I am worried about the buffering and asynchronicity of the Parallel.For, BufferBlock, and async in the TransformBlock. I'm not sure it's the best way and I may be mixing up some concepts.
Any guidance, improvements, and pitfall advice welcomed.
using System.Diagnostics;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;
using My.Interfaces;
using My.XmlService.Models;

namespace My.ImportService
{
    public class ImportService
    {
        private readonly IApiService _apiService;
        private readonly IXmlService _xmlService;
        private readonly IRepositoryService _repositoryService;

        public ImportService(IApiService apiService,
            IXmlService xmlService,
            IRepositoryService repositoryService)
        {
            _apiService = apiService;
            _xmlService = xmlService;
            _repositoryService = repositoryService;
            ConstructPipeline();
        }

        private BufferBlock<propertiesProperty> propertyBufferBlock;
        private TransformBlock<propertiesProperty, string> propertyXmlBlock;
        private TransformBlock<string, propertyType> propertyDeserializeBlock;
        private ActionBlock<propertyType> propertyCompleteBlock;

        public async Task<bool> ImportPropertiesForBranch(string branchName, int branchUrlId)
        {
            var propertyListXml = await _apiService.GetPropertyListAsync(branchUrlId);

            if (string.IsNullOrEmpty(propertyListXml))
                return false;

            var properties = _xmlService.DeserializePropertyList(propertyListXml);

            if (properties?.property == null || properties.property.Length == 0)
                return false;

            // limited to the first 20 for testing
            Parallel.For(0, 20,
                new ParallelOptions { MaxDegreeOfParallelism = 3 },
                i => propertyBufferBlock.SendAsync(properties.property[i]));

            propertyBufferBlock.Complete();
            await propertyCompleteBlock.Completion;

            return true;
        }

        private void ConstructPipeline()
        {
            propertyBufferBlock = GetPropertyBuffer();
            propertyXmlBlock = GetPropertyXmlBlock();
            propertyDeserializeBlock = GetPropertyDeserializeBlock();
            propertyCompleteBlock = GetPropertyCompleteBlock();

            propertyBufferBlock.LinkTo(
                propertyXmlBlock,
                new DataflowLinkOptions { PropagateCompletion = true });

            propertyXmlBlock.LinkTo(
                propertyDeserializeBlock,
                new DataflowLinkOptions { PropagateCompletion = true });

            propertyDeserializeBlock.LinkTo(
                propertyCompleteBlock,
                new DataflowLinkOptions { PropagateCompletion = true });
        }

        private BufferBlock<propertiesProperty> GetPropertyBuffer()
        {
            return new BufferBlock<propertiesProperty>();
        }

        private TransformBlock<propertiesProperty, string> GetPropertyXmlBlock()
        {
            return new TransformBlock<propertiesProperty, string>(async propertiesProperty =>
                {
                    Debug.WriteLine($"getting xml {propertiesProperty.prop_id}");
                    var propertyXml = await _apiService.GetXmlAsStringAsync(propertiesProperty.url);
                    return propertyXml;
                },
                new ExecutionDataflowBlockOptions
                {
                    MaxDegreeOfParallelism = 1,
                    BoundedCapacity = 2
                });
        }

        private TransformBlock<string, propertyType> GetPropertyDeserializeBlock()
        {
            return new TransformBlock<string, propertyType>(xmlAsString =>
                {
                    Debug.WriteLine($"deserializing");
                    var propertyType = _xmlService.DeserializeProperty(xmlAsString);
                    return propertyType;
                },
                new ExecutionDataflowBlockOptions
                {
                    MaxDegreeOfParallelism = 1,
                    BoundedCapacity = 2
                });
        }

        private ActionBlock<propertyType> GetPropertyCompleteBlock()
        {
            return new ActionBlock<propertyType>(propertyType =>
                {
                    Debug.WriteLine($"complete {propertyType.id}");
                    Debug.WriteLine(propertyType.address.display);
                },
                new ExecutionDataflowBlockOptions
                {
                    MaxDegreeOfParallelism = 1,
                    BoundedCapacity = 2
                });
        }
    }
}
You're actually doing a few things the wrong way:
i => propertyBufferBlock.SendAsync(properties.property[i])
You need to await this method; otherwise you're creating too many simultaneous tasks without ever observing them.
Also this line:
MaxDegreeOfParallelism = 1
will limit your blocks to sequential execution, which can degrade performance.
As you said in the comments, you switched to the synchronous Post method and limited the capacity of the blocks by setting BoundedCapacity. This variant should be used with caution, as you need to check its return value, which states whether the message has been accepted or not.
As for your worries about awaiting async methods inside the blocks: it's absolutely OK, and should be done as in any other case of async method usage.
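For illustration, here is a minimal sketch of the Post caution above, assuming propertyBufferBlock was created with a BoundedCapacity (the names follow the question's code):

// Hedged sketch, not the asker's exact code: assumes propertyBufferBlock was
// constructed with a BoundedCapacity, so Post can return false when it is full.
foreach (var property in properties.property.Take(20))
{
    // Post returns immediately; false means the bounded block declined the
    // message, and the item is silently lost unless you handle it.
    if (!propertyBufferBlock.Post(property))
    {
        // SendAsync instead waits asynchronously for capacity, so awaiting it
        // guarantees the item is eventually accepted (or declined on completion).
        await propertyBufferBlock.SendAsync(property);
    }
}
propertyBufferBlock.Complete();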
Are there any potential bottlenecks or areas of the code that could be troublesome?
In general your approach looks good. The potential bottleneck is that you are limiting parallel processing of your blocks with MaxDegreeOfParallelism = 1. Based on the description of the problem, each item can be processed independently of the others, so you can process multiple items at a time.
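For example (a sketch; sensible numbers depend on the workload and on what the third-party API tolerates):

// Sketch: let the I/O-bound XML-fetching block run several downloads at once.
// I/O-bound blocks can use more parallelism than there are CPU cores, since
// the threads mostly wait on the network; CPU-bound blocks rarely benefit
// from more than Environment.ProcessorCount.
var options = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 4,
    BoundedCapacity = 8 // keep some backpressure so the pipeline doesn't balloon
};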
Is it ok to await async calls inside a TransformBlock or is this a bottleneck?
It is perfectly fine because TPL DataFlow supports async operations.
Although the code works, I am worried about the buffering and asynchronicity of the Parallel.For, BufferBlock, and async in the TransformBlock. I'm not sure it's the best way and I may be mixing up some concepts.
One potential problem in your code that could make you shoot yourself in the foot is calling an async method in Parallel.For and then calling propertyBufferBlock.Complete(). The problem here is that Parallel.For does not support async actions: the way you invoke it will call propertyBufferBlock.SendAsync and move on before the returned task is completed. This means that by the time Parallel.For exits, some operations might still be running and their items not yet added to the buffer block. If you then call propertyBufferBlock.Complete(), those pending items will throw an exception and won't be added for processing. You will get an unobserved exception.
You could use ForEachAsync from this blog post to ensure that all items are added to the block before completing it. But if you are still limiting processing to one operation, you can simply add items one at a time. I am not sure how propertyBufferBlock.SendAsync is implemented, but it may internally restrict adding to one item at a time, in which case parallel adding would not make any sense.
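For example, a minimal sketch of feeding the block one item at a time (names follow the question's code):

// Sketch: a plain foreach with await guarantees every item has been accepted
// by the block before Complete() is called, avoiding the Parallel.For race.
foreach (var property in properties.property.Take(20))
{
    await propertyBufferBlock.SendAsync(property);
}
propertyBufferBlock.Complete();
await propertyCompleteBlock.Completion;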
Related
I'm implementing a simple data loader over HTTP, following tips from my previous question C# .NET Parallel I/O operation (with throttling), answered by Throttling asynchronous tasks.
I split loading and deserialization, assuming that one may be slower or faster than the other. I also want to throttle downloading, but I don't want to throttle deserialization. Therefore I'm using two blocks and one buffer.
Unfortunately I'm facing a problem where this pipeline sometimes processes fewer messages than were sent (I know from the target server that I made exactly n requests, but I end up with fewer responses).
My method looks like this (no error handling):
public async Task<IEnumerable<DummyData>> LoadAsync(IEnumerable<Uri> uris)
{
    IList<DummyData> result;

    using (var client = new HttpClient())
    {
        var buffer = new BufferBlock<DummyData>();

        var downloader = new TransformBlock<Uri, string>(
            async u => await client.GetStringAsync(u),
            new ExecutionDataflowBlockOptions
            { MaxDegreeOfParallelism = _maxParallelism });

        var deserializer = new TransformBlock<string, DummyData>(
            s => JsonConvert.DeserializeObject<DummyData>(s),
            new ExecutionDataflowBlockOptions
            { MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded });

        var linkOptions = new DataflowLinkOptions { PropagateCompletion = true };
        downloader.LinkTo(deserializer, linkOptions);
        deserializer.LinkTo(buffer, linkOptions);

        foreach (Uri uri in uris)
        {
            await downloader.SendAsync(uri);
        }

        downloader.Complete();
        await downloader.Completion;
        buffer.TryReceiveAll(out result);
    }

    return result;
}
So to be more specific: I have 100 URLs to load, but I get 90-99 responses. No errors, and the server handled 100 requests. This happens randomly; most of the time the code behaves correctly.
There are three issues with your code:
Awaiting the completion of the first block of the pipeline (downloader) instead of the last (buffer).
Using the TryReceiveAll method for retrieving the messages of the buffer block. The correct way to retrieve all messages from an unlinked block without introducing race conditions is to use the methods OutputAvailableAsync and TryReceive in a nested loop (a sketch follows the next point's code). You can find examples here and here.
In case of a timeout the HttpClient throws an unexpected TaskCanceledException, and the TPL Dataflow blocks happen to ignore exceptions of this type. The combination of these two unfortunate realities means that by default any timeout occurrence will remain unobserved. To fix this problem you could change your code like this:
var downloader = new TransformBlock<Uri, string>(async url =>
{
    try
    {
        return await client.GetStringAsync(url);
    }
    catch (OperationCanceledException) { throw new TimeoutException(); }
},
new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = _maxParallelism });
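And, returning to the second point, here is a minimal sketch of the drain loop (using the question's buffer and DummyData type; the null filter passed to TryReceive simply accepts any item):

// Sketch: drain an unlinked block without racing the producer.
// OutputAvailableAsync asynchronously waits until a message is available or
// the block has completed; the inner TryReceive takes whatever is buffered.
var result = new List<DummyData>();
while (await buffer.OutputAvailableAsync())
{
    while (buffer.TryReceive(null, out DummyData item))
    {
        result.Add(item);
    }
}
await buffer.Completion; // propagates any exception from the pipeline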
A fourth unrelated issue is the use of the MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded option for the deserializer block. In the (hopefully unlikely) case that the deserializer is slower than the downloader, the deserializer will start queuing more and more work on the ThreadPool, keeping it permanently starved. This will not be good for the performance and the responsiveness of your application, or for the health of the system as a whole. In practice there is rarely a reason to configure a CPU-bound block with a MaxDegreeOfParallelism larger than Environment.ProcessorCount.
I have a C# requirement for individually processing a 'great many' (perhaps > 100,000) records. Running this process sequentially is proving to be very slow, with each record taking a good second or so to complete (with a timeout error set at 5 seconds).
I would like to try running these tasks asynchronously by using a set number of worker 'threads' (I use the term 'thread' here cautiously as I am not sure if I should be looking at a thread, or a task or something else).
I have looked at the ThreadPool, but I can't imagine it could queue the volume of requests required. My ideal pseudo code would look something like this...
public void ProcessRecords() {
    SetMaxNumberOfThreads(20);
    MyRecord rec;
    while ((rec = GetNextRecord()) != null) {
        var task = WaitForNextAvailableThreadFromPool(ProcessRecord(rec));
        task.Start();
    }
}
I will also need a mechanism that the processing method can report back to the parent/calling class.
Can anyone point me in the right direction with perhaps some example code?
A possible simple solution would be to use a TPL Dataflow block, which is a higher abstraction over the TPL with configuration options for degree of parallelism and so forth. You simply create the block (an ActionBlock in this case), Post everything to it, wait asynchronously for completion, and TPL Dataflow handles all the rest for you:
var block = new ActionBlock<MyRecord>(
    rec => ProcessRecord(rec),
    new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 20 });

MyRecord rec;
while ((rec = GetNextRecord()) != null)
{
    block.Post(rec);
}

block.Complete();
await block.Completion;
Another benefit is that the block starts working as soon as the first record arrives and not only when all the records have been received.
If you need to report back on each record you can use a TransformBlock to do the actual processing and link an ActionBlock to it that does the updates:
var transform = new TransformBlock<MyRecord, Report>(rec =>
{
    ProcessRecord(rec);
    return GenerateReport(rec);
}, new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 20 });

var reporter = new ActionBlock<Report>(report =>
{
    RaiseEvent(report); // Or any other mechanism...
});

transform.LinkTo(reporter, new DataflowLinkOptions { PropagateCompletion = true });

MyRecord rec;
while ((rec = GetNextRecord()) != null)
{
    transform.Post(rec);
}

transform.Complete();
await transform.Completion;
Have you thought about using parallel processing with Actions?
i.e., create a method to process a single record, add each record's method call as an Action to a list, and then perform a Parallel.ForEach on the list.
Dim list As New List(Of Action)
list.Add(New Action(Sub() MyMethod(myParameter)))
Parallel.ForEach(list, Sub(t) t.Invoke())
This is in VB.NET, but I think you get the gist.
At work one of our processes uses a SQL database table as a queue. I've been designing a queue reader to check the table for queued work, update the row status when work starts, and delete the row when the work is finished. I'm using Parallel.Foreach to give each process its own thread and setting MaxDegreeOfParallelism to 4.
When the queue reader starts up it checks for any unfinished work and loads it into a list, then it does a Concat with that list and a method that returns an IEnumerable, which runs in an infinite loop checking for new work to do. The idea is that the unfinished work should be processed first and the new work can then be picked up as threads become available. However, what I'm seeing is that FetchQueuedWork will change dozens of rows in the queue table to 'Processing' immediately but only work on a few items at a time.
What I expected to happen was that FetchQueuedWork would only get new work and update the table when a slot opened up in the Parallel.Foreach. What's really odd to me is that it behaves exactly as I would expect when I run the code in my local developer environment, but in production I get the above problem.
I'm using .Net 4. Here is the code:
public void Go()
{
    List<WorkData> unfinishedWork = WorkData.LoadUnfinishedWork();
    IEnumerable<WorkData> work = unfinishedWork.Concat(FetchQueuedWork());
    Parallel.ForEach(work, new ParallelOptions { MaxDegreeOfParallelism = 4 }, DoWork);
}

private IEnumerable<WorkData> FetchQueuedWork()
{
    while (true)
    {
        var workUnit = WorkData.GetQueuedWorkAndSetStatusToProcessing();
        yield return workUnit;
    }
}

private void DoWork(WorkData workUnit)
{
    if (!workUnit.Loaded)
    {
        System.Threading.Thread.Sleep(5000);
        return;
    }
    Work();
}
I suspect that the default (Release mode?) behaviour is to buffer the input. You might need to create your own partitioner and pass it the NoBuffering option:
List<WorkData> unfinishedWork = WorkData.LoadUnfinishedWork();
IEnumerable<WorkData> work = unfinishedWork.Concat(FetchQueuedWork());
var options = new ParallelOptions { MaxDegreeOfParallelism = 4 };
var partitioner = Partitioner.Create(work, EnumerablePartitionerOptions.NoBuffering);
Parallel.ForEach(partitioner, options, DoWork);
Blorgbeard's solution is correct when it comes to .NET 4.5 - hands down.
If you are constrained to .NET 4, you have a few options:
Replace your Parallel.ForEach with work.AsParallel().WithDegreeOfParallelism(4).ForAll(DoWork). PLINQ is more conservative when it comes to buffering items, so this should do the trick.
Write your own enumerable partitioner (good luck).
Create a grotty semaphore-based hack such as this:
(Side-effecting Select used for the sake of brevity)
public void Go()
{
    const int MAX_DEGREE_PARALLELISM = 4;

    using (var semaphore = new SemaphoreSlim(MAX_DEGREE_PARALLELISM, MAX_DEGREE_PARALLELISM))
    {
        List<WorkData> unfinishedWork = WorkData.LoadUnfinishedWork();
        IEnumerable<WorkData> work = unfinishedWork
            .Concat(FetchQueuedWork())
            .Select(w =>
            {
                // Side-effect: bad practice, but easier
                // than writing your own IEnumerable.
                semaphore.Wait();
                return w;
            });

        // You still need to specify MaxDegreeOfParallelism
        // here so as not to saturate your thread pool when
        // Parallel.ForEach's load balancer kicks in.
        Parallel.ForEach(work, new ParallelOptions { MaxDegreeOfParallelism = MAX_DEGREE_PARALLELISM }, workUnit =>
        {
            try
            {
                this.DoWork(workUnit);
            }
            finally
            {
                semaphore.Release();
            }
        });
    }
}
I am confused about the difference between sending items through Post() or SendAsync(). My understanding is that in all cases, once an item reaches the input buffer of a data block, control is returned to the calling context, correct? Then why would I ever need SendAsync? If my assumption is incorrect, then I wonder, on the contrary, why anyone would ever use Post() if the whole idea of using data blocks is to establish a concurrent and async environment.
I understand, of course, the technical difference: Post() returns a bool, whereas SendAsync returns an awaitable Task of bool. But what implications does that have? When would the return of a bool (which I understand is a confirmation of whether the item was placed in the queue of the data block or not) ever be delayed? I understand the general idea of the async/await concurrency framework, but here it does not make a whole lot of sense, because other than a bool, the results of whatever is done to the passed-in item are never returned to the caller; instead they are placed in an "out-queue" and either forwarded to linked data blocks or discarded.
And is there any performance difference between the two methods when sending items?
To see the difference, you need a situation where blocks will postpone their messages. In this case, Post will return false immediately, whereas SendAsync will return a Task that will be completed when the block decides what to do with the message. The Task will have a true result if the message is accepted, and a false result if not.
One example of a postponing situation is a non-greedy join. A simpler example is when you set BoundedCapacity:
[TestMethod]
public void Post_WhenNotFull_ReturnsTrue()
{
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    var result = block.Post(13);
    Assert.IsTrue(result);
}

[TestMethod]
public void Post_WhenFull_ReturnsFalse()
{
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    block.Post(13);
    var result = block.Post(13);
    Assert.IsFalse(result);
}

[TestMethod]
public void SendAsync_WhenNotFull_ReturnsCompleteTask()
{
    // This is an implementation detail; technically, SendAsync could return a task
    // that would complete "quickly" instead of already being completed.
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    var result = block.SendAsync(13);
    Assert.IsTrue(result.IsCompleted);
}

[TestMethod]
public void SendAsync_WhenFull_ReturnsIncompleteTask()
{
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    block.Post(13);
    var result = block.SendAsync(13);
    Assert.IsFalse(result.IsCompleted);
}

[TestMethod]
public async Task SendAsync_BecomesNotFull_CompletesTaskWithTrueResult()
{
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    block.Post(13);
    var task = block.SendAsync(13);
    block.Receive();
    var result = await task;
    Assert.IsTrue(result);
}

[TestMethod]
public async Task SendAsync_BecomesDecliningPermanently_CompletesTaskWithFalseResult()
{
    var block = new BufferBlock<int>(new DataflowBlockOptions { BoundedCapacity = 1 });
    block.Post(13);
    var task = block.SendAsync(13);
    block.Complete();
    var result = await task;
    Assert.IsFalse(result);
}
The documentation makes this reasonably clear, IMO. In particular, for Post:
This method will return once the target block has decided to accept or decline the item, but unless otherwise dictated by special semantics of the target block, it does not wait for the item to actually be processed.
And:
For target blocks that support postponing offered messages, or for blocks that may do more processing in their Post implementation, consider using SendAsync, which will return immediately and will enable the target to postpone the posted message and later consume it after SendAsync returns.
In other words, while both are asynchronous with respect to processing the message, SendAsync allows the target block to decide whether or not to accept the message asynchronously too.
It sounds like SendAsync is a generally "more asynchronous" approach, and one which is probably encouraged in general. What isn't clear to me is why both are required, as it certainly sounds like Post is broadly equivalent to using SendAsync and then just waiting on the result. As noted in comments, there is one significant difference: if the buffer is full, Post will immediately reject, whereas SendAsync doesn't.
Suppose you have a TransformBlock with configured parallelism and want to stream data through the block. The input data should be created only when the pipeline can actually start processing it. (And it should be released the moment it leaves the pipeline.)
Can I achieve this? And if so how?
Basically I want a data source that works as an iterator.
Like so:
public IEnumerable<Guid> GetSourceData()
{
    // In reality this should also be an async task, but yield return
    // does not work in combination with async/await...
    Func<ICollection<Guid>> GetNextBatch = () =>
        Enumerable.Range(0, 100).Select(x => Guid.NewGuid()).ToArray();

    while (true)
    {
        var batch = GetNextBatch();
        if (batch == null || !batch.Any()) break;

        foreach (var guid in batch)
            yield return guid;
    }
}
This would result in roughly 100 records in memory at a time. OK, more if the blocks you append to this data source keep them in memory for a while, but you have a chance to hold only a subset (/stream) of the data.
Some background information:
I intend to use this in combination with Azure Cosmos DB, where the source could be all objects in a collection, or a change feed. Needless to say, I don't want all of those objects stored in memory. So this can't work:
using System.Threading.Tasks.Dataflow;

public async Task ExampleTask()
{
    Func<Guid, object> TheActualAction = text => text.ToString();

    var config = new ExecutionDataflowBlockOptions
    {
        BoundedCapacity = 5,
        MaxDegreeOfParallelism = 15
    };

    var throttler = new TransformBlock<Guid, object>(TheActualAction, config);
    var output = new BufferBlock<object>();
    throttler.LinkTo(output);

    throttler.Post(Guid.NewGuid());
    throttler.Post(Guid.NewGuid());
    throttler.Post(Guid.NewGuid());
    throttler.Post(Guid.NewGuid());
    //...

    throttler.Complete();
    await throttler.Completion;
}
The above example is not good because I add all the items without knowing whether they are actually being 'used' by the transform block. Also, I don't really care about the output buffer: I understand that I need to send the output somewhere so I can await completion, but I have no use for the buffer after that. It should just forget about everything it gets...
Post() will return false if the target is full, without blocking. While this could be used in a busy-wait loop, it's wasteful. SendAsync(), on the other hand, will wait if the target is full:
public async Task ExampleTask()
{
    var config = new ExecutionDataflowBlockOptions
    {
        BoundedCapacity = 50,
        MaxDegreeOfParallelism = 15
    };

    var block = new ActionBlock<Guid>(TheActualAction, config);

    while (/* some condition */)
    {
        var data = await GetDataFromCosmosDB();
        await block.SendAsync(data);

        // Wait a bit if we want to use polling
        await Task.Delay(...);
    }

    block.Complete();
    await block.Completion;
}
It seems you want to process data at a defined degree of parallelism (MaxDegreeOfParallelism = 15). TPL dataflow is very clunky to use for such a simple requirement.
There's a very simple and powerful pattern that might solve your problem. It's a parallel async foreach loop as described here: https://blogs.msdn.microsoft.com/pfxteam/2012/03/05/implementing-a-simple-foreachasync-part-2/
public static Task ForEachAsync<T>(this IEnumerable<T> source, int dop, Func<T, Task> body)
{
    return Task.WhenAll(
        from partition in Partitioner.Create(source).GetPartitions(dop)
        select Task.Run(async delegate
        {
            using (partition)
                while (partition.MoveNext())
                    await body(partition.Current);
        }));
}
You can then write something like:
var dataSource = ...; //some sequence
await dataSource.ForEachAsync(15, async item => await ProcessItem(item));
Very simple.
You can dynamically reduce the DOP by using a SemaphoreSlim. The semaphore acts as a gate that only lets N concurrent threads/tasks in. N can be changed dynamically.
So you would use ForEachAsync as the basic workhorse and then add additional restrictions and throttling on top.
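A rough sketch of that combination (dataSource and ProcessItem are the placeholders from the example above; the permit bookkeeping is deliberately simple):

// Sketch: run ForEachAsync with a generous dop and gate each body behind a
// SemaphoreSlim; the number of available permits is the effective DOP.
var gate = new SemaphoreSlim(15, 15); // initially allow 15 concurrent items

Task processing = dataSource.ForEachAsync(15, async item =>
{
    await gate.WaitAsync();
    try
    {
        await ProcessItem(item);
    }
    finally
    {
        gate.Release();
    }
});

// Elsewhere, throttle down at runtime by permanently consuming permits:
await gate.WaitAsync(); // at most 14 items now run concurrently

await processing;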