We're trying to serialize processing of a list of business objects using a Saga.
Right now, without a Saga, we simply loop through a list of objects and fire off a bus.Send(new ProcessBusinessObject(obj)) asynchronously to have handlers execute. So the processing happens more or less in parallel, depending on this setting, I believe:
endpointConfiguration.LimitMessageProcessingConcurrencyTo( 4 );
This has worked fine, but the number of concurrent handlers is now hard on the database.
It would be OK to trigger these handlers in series, i.e. to continue with the next one only when the current process has finished (failed or succeeded). We don't want to set the concurrency to 1, since that would affect all handlers in the endpoint.
The idea is to use the Scatter/Gather pattern and a Saga to keep track of the number of objects, update the state machine with counts (total count, failed count, success count), and finally fire an event when the list is done/empty.
The problem is:
A) I'm not sure how to keep track of the list in the saga. The SagaData would need a List to hold all the objects, and an entry would be removed whenever a handler signals it's done processing. But saga data does not support hierarchical data, hence no List<T> or nested collections. I believe this is still the case in NSB v7.
And B) Is this use of a saga feasible or overkill, or is there a much simpler way to accomplish this?
We are using Sql Server persistence and transport and NSB 7.
Any input is much appreciated!
I think you are looking to do something like the following. Mind you, depending on the persistence layer you are using, you might need to separate the actual import from updating the saga state. I have blogged about this here.
Saga data can also store a List<T>, but I think in most scenarios you can get away with counts. Another important note (although it should be obvious): if a message fails to process and goes to the error queue (e.g. an uncaught exception in ImportData), the whole saga will be left incomplete until that message is retried and processed.
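For reference, the saga data could be as simple as this (a sketch, since MySagaData is referenced by the saga below but was not shown; the counter properties match the ones used in the handlers):
public class MySagaData : ContainSagaData
{
    public Guid JobID { get; set; }
    public int ObjectsToImport { get; set; }
    public int SuccessImport { get; set; }
    public int FailedImport { get; set; }
}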
public class MySaga : Saga<MySagaData>,
    IAmStartedByMessages<StartTheProcess>,
    IHandleMessages<ImportData>,
    IHandleMessages<ImportFinished>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<MySagaData> mapper)
    {
        //Correlate subsequent messages back to this saga instance via JobID
        mapper.ConfigureMapping<ImportData>(message => message.JobID).ToSaga(saga => saga.JobID);
        mapper.ConfigureMapping<ImportFinished>(message => message.JobID).ToSaga(saga => saga.JobID);
    }
    public async Task Handle(StartTheProcess message, IMessageHandlerContext context)
    {
        Data.ObjectsToImport = message.ObjectCount;
        Data.JobID = Guid.NewGuid(); //To generate a correlation ID to connect future messages back to this saga instance
        foreach (var id in message.ObjectIdsToImport)
        {
            await context.SendLocal(new ImportData
            {
                JobID = Data.JobID, //You need this to correlate messages back to the saga
                //Anything else you need to pass on to ImportData
                ObjectIdToImport = id
            });
        }
    }
    public async Task Handle(ImportData message, IMessageHandlerContext context)
    {
        //import the data and increment the appropriate counter
        var result = ImportData(message.ObjectIdToImport);
        if (result == Result.Success)
        {
            Data.SuccessImport++;
        }
        else
        {
            Data.FailedImport++;
        }
        await CheckIfFinished(context);
    }
    public Task Handle(ImportFinished message, IMessageHandlerContext context)
    {
        //do any post cleanups and mark the saga as complete
        MarkAsComplete();
        return Task.CompletedTask;
    }
    private async Task CheckIfFinished(IMessageHandlerContext context)
    {
        if (Data.SuccessImport + Data.FailedImport == Data.ObjectsToImport)
        {
            //Everything is done
            await context.SendLocal(new ImportFinished { JobID = Data.JobID });
        }
    }
}
I'm not sure if the title makes sense; it was the best I could come up with, so here's my scenario.
I have an ASP.NET Core app that I'm using more as a shell and for DI configuration. In Startup it adds a bunch of IHostedServices as singletons, along with their dependencies, also as singletons, with minor exceptions for SqlConnection and DbContext which we'll get to later. The hosted services are groups of similar services that:
Listen for incoming reports from GPS devices and put into a listening buffer.
Parse items out of the listening buffer and put into a parsed buffer.
Eventually there's a single service that reads the parsed buffer and actually processes the parsed reports. It does this by passing the report it took out of the buffer to a handler and awaiting it to complete before moving to the next. This has worked well for the past year, but it appears we're running into a scalability issue now because it's processing one report at a time, and the average time to process is 62ms on the server, which includes the Dapper trip to the database to get the data needed and the EF Core trip to save changes.
If however the handler decides that a report's information requires triggering background jobs, then I suspect it takes 100ms or more to complete. Over time, the buffer fills up faster than the handler can process to the point of holding 10s if not 100s of thousands of reports until they can be processed. This is an issue because notifications are delayed and because it has the potential for data loss if the buffer is still full by the time the server restarts at midnight.
All that being said, I'm trying to figure out how to make the processing parallel. After lots of experimentation yesterday, I settled on using Parallel.ForEach over the buffer using GetConsumingEnumerable(). This works well, except for a weird behavior I don't know what to do about or even what to call. As the buffer is filled and the ForEach is iterating over it, it will begin to "chunk" the processing into ever-increasing multiples of two. The size of the chunking is affected by the MaxDegreeOfParallelism setting. For example (N# = next # of reports in buffer):
MDP = 1
N3 = 1 at a time
N6 = 2 at a time
N12 = 4 at a time
...
MDP = 2
N6 = 1 at a time
N12 = 2 at a time
N24 = 4 at a time
...
MDP = 4
N12 = 1 at a time
N24 = 2 at a time
N48 = 4 at a time
...
MDP = 8 (my CPU core count)
N24 = 1 at a time
N48 = 2 at a time
N96 = 4 at a time
...
This is arguably worse than the serial execution I have now because by the end of the day it will buffer and wait for, say, half a million reports before actually processing them.
Is there a way to fix this? I'm not very experienced with Parallel.ForEach, so from my point of view this is strange behavior. Ultimately I'm looking for a way to process the reports in parallel as soon as they are in the buffer, so if there are other ways to accomplish this I'm all ears. This is roughly what I have for the code. The handler that processes the reports does use IServiceProvider to create a scope and get an instance of SqlConnection and DbContext. Thanks in advance for any suggestions!
public sealed class GpsReportService :
IHostedService {
private readonly GpsReportBuffer _buffer;
private readonly Config _config;
private readonly GpsReportHandler _handler;
private readonly ILogger _logger;
public GpsReportService(
GpsReportBuffer buffer,
Config config,
GpsReportHandler handler,
ILogger<GpsReportService> logger) {
_buffer = buffer;
_config = config;
_handler = handler;
_logger = logger;
}
public Task StartAsync(
CancellationToken cancellationToken) {
_logger.LogInformation("GPS Report Service => starting");
Task.Run(Process, cancellationToken).ConfigureAwait(false);// Is ConfigureAwait here correct usage?
_logger.LogInformation("GPS Report Service => started");
return Task.CompletedTask;
}
public Task StopAsync(
CancellationToken cancellationToken) {
_logger.LogInformation("GPS Parsing Service => stopping");
_buffer.CompleteAdding();
_logger.LogInformation("GPS Parsing Service => stopped");
return Task.CompletedTask;
}
// ========================================================================
// Utilities
// ========================================================================
private void Process() {
var options = new ParallelOptions {
MaxDegreeOfParallelism = 8,
CancellationToken = CancellationToken.None
};
Parallel.ForEach(_buffer.GetConsumingEnumerable(), options, async report => {
try {
await _handler.ProcessAsync(report).ConfigureAwait(false);
} catch (Exception e) {
if (_config.IsDevelopment) {
throw;
}
_logger.LogError(e, "GPS Report Service");
}
});
}
private async Task ProcessAsync() {
while (!_buffer.IsCompleted) {
try {
var took = _buffer.TryTake(out var report, 10);
if (!took) {
continue;
}
await _handler.ProcessAsync(report!).ConfigureAwait(false);
} catch (Exception e) {
if (_config.IsDevelopment) {
throw;
}
_logger.LogError(e, "GPS Report Service");
}
}
}
}
public sealed class GpsReportBuffer :
BlockingCollection<GpsReport> {
}
You can't use Parallel methods with async delegates - at least, not yet.
Since you already have a "pipeline" style of architecture, I recommend looking into TPL Dataflow. A single ActionBlock may be all that you need, and once you have that working, other blocks in TPL Dataflow may replace other parts of your pipeline.
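For illustration, a minimal ActionBlock replacing the Parallel.ForEach loop might look like the sketch below (the handler and report types are from your code; the capacity value is just an example):
// using System.Threading.Tasks.Dataflow;
// One async-friendly worker pool: up to 8 reports in flight at a time.
var processor = new ActionBlock<GpsReport>(
    async report => await _handler.ProcessAsync(report),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 8,
        BoundedCapacity = 1000 // back-pressure: SendAsync waits when the buffer is full
    });
// Producer side: post reports to the block instead of the BlockingCollection.
await processor.SendAsync(report);
// Shutdown: complete the block and wait for in-flight work to drain.
processor.Complete();
await processor.Completion;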
If you prefer to stick with your existing buffer, then you should use asynchronous concurrency instead of Parallel:
private async Task ProcessAsync() {
var throttler = new SemaphoreSlim(8);
var tasks = _buffer.GetConsumingEnumerable()
.Select(async report =>
{
await throttler.WaitAsync();
try {
await _handler.ProcessAsync(report).ConfigureAwait(false);
} catch (Exception e) {
if (_config.IsDevelopment) {
throw;
}
_logger.LogError(e, "GPS Report Service");
}
finally {
throttler.Release();
}
})
.ToList();
await Task.WhenAll(tasks);
}
You have an event stream processing/dataflow problem, not a parallelism problem. If you use the appropriate classes, like the Dataflow blocks, Channels, or Reactive Extensions the problem is simplified a lot.
Even if you want to use a single buffer and a fat worker method though, the appropriate buffer class is the asynchronous Channel, not BlockingCollection. The code could become as simple as:
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
await foreach(GpsMessage msg in _reader.ReadAllAsync(stoppingToken))
{
await _handler.ProcessAsync(msg);
}
}
The first option shows how to use Dataflow to create a pipeline. The second shows how to use Channel instead of BlockingCollection to process multiple queued items concurrently.
A pipeline with Dataflow
Once you break the process into independent methods, it's easy to create a pipeline of processing steps using any library.
async Task<IEnumerable<GpsMessage>> Poller(DateTime time, IList<Device> devices, CancellationToken token = default)
{
    var messages = new List<GpsMessage>();
    foreach (var device in devices)
    {
        if (token.IsCancellationRequested)
        {
            break;
        }
        var msg = await device.ReadMessage();
        messages.Add(msg);
    }
    return messages;
}
GpsReport Parser(GpsMessage msg)
{
    //Do some parsing magic to build the report from the message fields.
    var report = new GpsReport();
    return report;
}
async Task<GpsReport> Enrich(GpsReport report, string connectionString, CancellationToken token = default)
{
    //Depend on connection pooling to eliminate the cost of connections
    //We may have to use a pool of opened connections otherwise
    using var con = new SqlConnection(connectionString);
    //Dapper needs a CommandDefinition to accept a CancellationToken
    var extraData = await con.QueryAsync<Extra>(
        new CommandDefinition(sql, new { deviceId = report.DeviceId }, cancellationToken: token));
    report.Extra = extraData;
    return report;
}
async Task BulkImport(GpsReport[] reports, CancellationToken token = default)
{
    using var bcp = new SqlBulkCopy(...);
    using var reader = ObjectReader.Create(reports);
    ...
    await bcp.WriteToServerAsync(reader, token);
}
In the BulkImport method I use FastMember's ObjectReader to create an IDataReader wrapper over the reports so I can use them with SqlBulkCopy. Another option would be to convert them to a DataTable, but that would create an extra copy of the data in memory.
Combining all these with Dataflow is relatively easy.
var execOptions = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 10
};
_poller = new TransformManyBlock<DateTime, GpsMessage>(time => Poller(time, devices));
_parser = new TransformBlock<GpsMessage, GpsReport>(b => Parser(b), execOptions);
_enricher = new TransformBlock<GpsReport, GpsReport>(rpt => Enrich(rpt, connStr), execOptions);
_batch = new BatchBlock<GpsReport>(50);
_bcpBlock = new ActionBlock<GpsReport[]>(reports => BulkImport(reports));
Each block has an input buffer, and all except ActionBlock have an output buffer as well. Each block takes care of processing the messages in its input buffer. By default, each block uses only one worker task, but that can be changed. The message order is maintained, so if we use e.g. 10 worker tasks for the parser block, the messages will still be emitted in the order they were received.
Next comes linking the blocks.
var linkOptions=new DataflowLinkOptions {PropagateCompletion=true};
_poller.LinkTo(_parser, linkOptions);
_parser.LinkTo(_enricher, linkOptions);
_enricher.LinkTo(_batch, linkOptions);
_batch.LinkTo(_bcpBlock, linkOptions);
After that, a timer can be used to "ping" the head block, the poller, whenever we want:
private void Ping(object state)
{
_poller.Post(DateTime.Now);
}
public Task StartAsync(CancellationToken stoppingToken)
{
_logger.LogInformation("Timed Hosted Service running.");
_timer = new Timer(Ping, null, TimeSpan.Zero,
TimeSpan.FromSeconds(5));
return Task.CompletedTask;
}
To stop the pipeline gracefully, we call Complete() on the head block and await the Completion task on the last block. Assuming the hosted service is similar to the timed background service example:
public async Task StopAsync(CancellationToken cancellationToken)
{
    ....
    _timer?.Change(Timeout.Infinite, 0);
    _poller.Complete();
    await _bcpBlock.Completion;
    ...
}
Using Channel as an Async queue
A Channel is a far better alternative for asynchronous publisher/subscriber scenarios than BlockingCollection. Roughly, it's an asynchronous queue that goes to extremes to prevent the publisher from reading, or the subscriber from writing, by forcing callers to use the ChannelWriter and ChannelReader classes. In fact, it's quite common to only pass those classes around, never the Channel instance itself.
In your publishing code, you can create a Channel<T> and pass its Reader to the GpsReportService service. Let's assume the publisher is another service that implements an IGpsPublisher interface:
public interface IGpsPublisher
{
ChannelReader<GpsMessage> Reader { get; }
}
and the implementation
Channel<GpsMessage> _channel = Channel.CreateUnbounded<GpsMessage>();
CancellationTokenSource _cts = new CancellationTokenSource(); //used to stop the polling loop
public ChannelReader<GpsMessage> Reader => _channel.Reader;
private async void Ping(object state)
{
    //devices is assumed to be a field on the publisher
    foreach (var device in devices)
    {
        if (_cts.IsCancellationRequested)
        {
            break;
        }
        var msg = await device.ReadMessage();
        await _channel.Writer.WriteAsync(msg);
    }
}
public Task StartAsync(CancellationToken stoppingToken)
{
_timer = new Timer(Ping, null, TimeSpan.Zero,
TimeSpan.FromSeconds(5));
return Task.CompletedTask;
}
public Task StopAsync(CancellationToken cancellationToken)
{
    _timer?.Change(Timeout.Infinite, 0);
    _cts.Cancel();
    _channel.Writer.Complete();
    return Task.CompletedTask;
}
This can be passed to GpsReportService as a dependency that will be resolved by the DI container:
public sealed class GpsReportService : BackgroundService
{
private readonly ChannelReader<GpsMessage> _reader;
public GpsReportService(
IGpsPublisher publisher,
...)
{
_reader = publisher.Reader;
...
}
And used
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
await foreach(GpsMessage msg in _reader.ReadAllAsync(stoppingToken))
{
await _handler.ProcessAsync(msg);
}
}
Once the publisher completes, the subscriber loop will also complete once all messages are processed.
To process in parallel, you can start multiple loops concurrently:
async Task Process(ChannelReader<GpsMessage> reader, CancellationToken token)
{
await foreach(GpsMessage msg in reader.ReadAllAsync(token))
{
await _handler.ProcessAsync(msg);
}
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
var tasks=Enumerable.Range(0,10)
.Select(_ => Process(_reader, stoppingToken))
.ToArray();
await Task.WhenAll(tasks);
}
Explaining the pipeline
I have a similar situation: every 15 minutes I request air ticket sales reports from airlines (actually GDSs), parse them to extract data and ticket numbers, download the ticket record for each ticket to get some extra data and save everything to the database. I have to do that for 20+ cities (ticket reports are per city) with each report having from 10 to over 100K tickets.
This almost begs for a pipeline. Using your example, you can create a pipeline with the following steps/blocks:
Listen for GPS messages and emit the unparsed message.
Parse the message and emit the parsed message
Load any extra data needed per message and emit a combined record
Handle the combined record and emit the result
(Optional) batch results
Save the results to the database
All three options (Dataflow, Channels, Rx) take care of buffering between the steps. Dataflow is a some-assembly-required library for pipelines processing independent events, Rx is ready-made for analyzing streams of events where time is important (e.g. to calculate average speed in a sliding window), and Channels are Lego bricks that can do anything but need to be put together.
Why not Parallel.ForEach
Parallel.ForEach is meant for data parallelism, not async operations: processing large chunks of in-memory data, independent of each other. Amdahl's Law explains that parallelization benefits are limited by the synchronous part of an operation, so all data parallelism libraries try to reduce that by partitioning, using one core/machine/node to process each partition.
Parallel.ForEach also works by partitioning the data and using roughly one worker task per CPU core to reduce synchronization between cores. It will even use the current thread, which leads to the mistaken assumption that it's blocking. When all cores are busy, why not use the thread? It won't be able to run anyway.
Parallel.ForEach employs chunk partitioning by default, which is intended to reduce the synchronization overhead in CPU-intensive applications, but can result in problematic behavior in some usage scenarios. Chunk partitioning can be disabled by passing a Partitioner<T> as argument instead of an IEnumerable<T>:
Parallel.ForEach(Partitioner.Create(_buffer.GetConsumingEnumerable(),
EnumerablePartitionerOptions.NoBuffering), options, ...
You can also find a custom partitioner, tailored specifically for BlockingCollection<T>s, in this article: ParallelExtensionsExtras Tour – #4 – BlockingCollectionExtensions
That said, Parallel.ForEach is not async-friendly, meaning that it doesn't understand async delegates. The lambda passed would be async void, which is something to avoid. So I would recommend using an ActionBlock<T> instead.
I have an azure function that reads from a ServiceBus topic and calls a 3rd party service. If the service is down, I would like to wait 5 minutes before trying to call it again with the same message. How can I add a delay so the azure function doesn't abandon the message and immediately pick it back up again?
public static void Run([ServiceBusTrigger("someTopic",
"someSubscription", AccessRights.Manage, Connection =
"ServiceBusConnection")] BrokeredMessage message)
{
CallService(bodyOfBrokeredMessage); //service is down
//How do I add a delay so the message won't be reprocessed immediately thus quickly exhausting it's max delivery count?
}
One option is to create a new message and submit that message to the queue but set the ScheduledEnqueueTimeUtc to be five minutes in the future.
[FunctionName("DelayMessage")]
public static async Task DelayMessage(
[ServiceBusTrigger("MyQueue", AccessRights.Listen, Connection = "MyConnection")]BrokeredMessage originalMessage,
[ServiceBus("MyQueue", AccessRights.Send, Connection = "MyConnection")]IAsyncCollector<BrokeredMessage> newMessages,
TraceWriter log)
{
//handle any kind of error scenario
var newMessage = originalMessage.Clone();
newMessage.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
await newMessages.AddAsync(newMessage);
}
You can now use the fixed delay retry, which was added to Azure Functions around November 2020 (preview).
[FunctionName("MyFunction")]
[FixedDelayRetry(10, "00:05:00")] // retries with a 5-minute delay
public static void Run([ServiceBusTrigger("someTopic",
"someSubscription", AccessRights.Manage, Connection =
"ServiceBusConnection")] BrokeredMessage message)
{
CallService(bodyOfBrokeredMessage); //service is down
}
As Josh said, you could simply clone the original message, set up the scheduled enqueue time, send the clone and complete the original.
Well, it’s a shame that sending the clone and completing the original are not an atomic operation, so there is a very slim chance of us seeing the original again should the handling process crash at just the wrong moment.
And the other issue is that DeliveryCount on the clone will always be 1, because this is a brand new message. So we could infinitely resubmit and never get round to dead-lettering this message.
Fortunately, that can be fixed by adding our own resubmit count as a property of the message:
[FunctionName("DelayMessage")]
public static async Task DelayMessage([ServiceBusTrigger("MyQueue", AccessRights.Listen, Connection = "MyConnection")]BrokeredMessage originalMessage,
[ServiceBus("MyQueue", AccessRights.Send, Connection = "MyConnection")]IAsyncCollector<BrokeredMessage> newMessages,TraceWriter log)
{
//handle any kind of error scenario
int resubmitCount = originalMessage.Properties.ContainsKey("ResubmitCount") ? (int)originalMessage.Properties["ResubmitCount"] : 0;
if (resubmitCount > 5)
{
Console.WriteLine("DEAD-LETTERING");
originalMessage.DeadLetter("Too many retries", $"ResubmitCount is {resubmitCount}");
}
else
{
var newMessage = originalMessage.Clone();
newMessage.ScheduledEnqueueTimeUtc = DateTime.UtcNow.AddMinutes(5);
await newMessages.AddAsync(newMessage);
}
}
For more details, you could refer to this article.
Also, it's quite easy to implement the wait/retry/dequeue next pattern in a LogicApp since this type of flow control is exactly what LogicApps was designed for. Please refer to this SO thread.
I have an observer module which takes care of subscriptions for a reactive stream I have created from Kafka. Sadly I need to Poll in order to receive messages from Kafka, so I need to dedicate one background thread to that. My first solution was this one:
public void Poll()
{
if (Interlocked.Exchange(ref _state, POLLING) == NOTPOLLING)
{
Task.Run(() =>
{
while (CurrentSubscriptions.Count != 0)
{
_consumer.Poll(TimeSpan.FromSeconds(1));
}
_state = NOTPOLLING;
});
}
}
Now my reviewer suggested that I should use a Task because it has statuses and can be checked to see whether it is running or not. This led to this code:
public void Poll()
{
// checks for statuses: WaitingForActivation, WaitingToRun, Running
if (_runningStatuses.Contains(_pollingTask.Status)) return;
_pollingTask.Start(); // this obviously throws exception once Task already completes and then I want to start it again
}
The Task remained pretty much the same but the check changed. Since my logic is that I want to start polling when I have subscriptions and stop when I don't, I need to sort of re-use the Task. But since I can't, I am wondering: do I need to go back to my first implementation, or is there any other neat way of doing this that I am missing?
I am wondering do I need to go back to my first implementation or is there any other neat way of doing this that right now I am missing?
Your first implementation looks fine. You might use a ManualResetEventSlim instead of the enum and Interlocked.Exchange, but that's essentially the same... as long as you have just two states.
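For example, the first version could be rewritten roughly like this (a sketch; note that IsSet followed by Reset is not an atomic test-and-set, so this assumes Poll is not called from multiple threads at once):
private readonly ManualResetEventSlim _notPolling = new ManualResetEventSlim(true);
public void Poll()
{
    if (!_notPolling.IsSet) return; // a poll loop is already running
    _notPolling.Reset();            // claim the polling slot
    Task.Run(() =>
    {
        while (CurrentSubscriptions.Count != 0)
        {
            _consumer.Poll(TimeSpan.FromSeconds(1));
        }
        _notPolling.Set();          // release the slot
    });
}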
I think I made a compromise and replaced the Interlocked API with [MethodImpl(MethodImplOptions.Synchronized)]. It lets me have a simple method body without Interlocked code that could confuse a newcomer or inexperienced colleague.
[MethodImpl(MethodImplOptions.Synchronized)]
public void Poll()
{
if (!_polling)
{
_polling = true;
new Task(() =>
{
while (_currentSubscriptions.Count != 0)
{
_consumer.Poll(TimeSpan.FromSeconds(1));
}
_polling = false;
}, TaskCreationOptions.LongRunning).Start();
}
}
Imagine a WebForms application with a main method named CreateAll(). I can describe the method's tasks step by step as follows:
1) Stores to database (update/create DB items 3-4 times)
2) Starts a new thread
3) result1 = Calls a SOAP service; using a timeout threshold it checks the status, and after x minutes it continues (the status should now be OK; anything else means failure)
4) Stores to database (update/create DB items 3-4 times)
5) result2 = Calls a SOAP service (in a fire-and-forget way)
6) Updates a configuration file (with data actually taken from result1)
7) Using callback requests, the front end checks the state of result2 every x seconds and the UI shows a progress bar. If the process is finished (100%) it means success
I am considering that all of these are tasks that can be grouped by their type. Basically the several types of actions are:
Type1: DB transaction
Type2: Service communication/transaction
Type3: Config file I/O transactions
I want to add a rollback/retry mechanism to the existing implementation, and to use a task-oriented architecture and refactor the existing legacy code.
I found that something like the Memento Design Pattern or the Command Pattern in C# could help for this purpose. I also found the MSDN Retry Pattern description interesting. I don't really know, and I want someone to lead me to the safest and best decision...
Can you suggest the best way for this case to keep the existing implementation and flow, but wrap it in a general and abstract retry/rollback/task-list implementation?
The final implementation must be able to retry in every case (whatever the task, or a general failure such as a timeout, throughout the whole CreateAll process), and there would also be a rollback decision list so the app is able to roll back all the tasks that were accomplished.
I want some examples of how to break up this coupled code.
PseudoCode that might be helpful:
class something
{
static result CreateAll(object1 obj1, object2 obj2 ...)
{
//Save to database obj1
//...
//Update to database obj1
//
//NEW THREAD
//Start a new thread with obj1, obj2 ...CreateAll
//...
}
void CreateAllAsync()
{
//Type1 Save to database obj1
//...
//Type1 Update to database obj2
//Type2 Call Web Service to create obj1 on the service (not async)
while (state != null && now < times)
{
if (status == "OK")
break;
else
//Wait for X seconds
}
//Check status continue or general failure
//Type1 Update to database obj2 and obj1
//Type2 Call Web Service to create obj2 on the service (fire and forget)
//Type3 Update Configuration File
//Type1 Update to database obj2 and obj1
//..
return;
}
//Then the UI takes the responsibility to check the status of result2
Look at using Polly for retry scenarios, which seems to align well with your pseudocode. At the end of this answer is a sample from the documentation. You can do all sorts of retry scenarios, retries with waits, etc. For example, you could retry a complete transaction a number of times, or alternatively retry a set of idempotent actions a number of times and then write compensation logic if/when the retry policy finally fails.
A memento pattern is more for the undo-redo logic you would find in a word processor (Ctrl-Z and Ctrl-Y).
Other helpful patterns to look at are a simple queue, a persistent queue, or even a service bus to give you eventual consistency without having to have the user wait for everything to complete successfully.
// Retry three times, calling an action on each retry
// with the current exception and retry count
Policy
.Handle<DivideByZeroException>()
.Retry(3, (exception, retryCount) =>
{
// do something
});
A sample based on your Pseudo-Code may look as follows:
static bool CreateAll(object1 obj1, object2 obj2)
{
    // Policy to retry 3 times, waiting 5 seconds between retries.
    var policy =
        Policy
            .Handle<SqlException>()
            .WaitAndRetry(3, count =>
            {
                return TimeSpan.FromSeconds(5);
            });
    policy.Execute(() => UpdateDatabase1(obj1));
    policy.Execute(() => UpdateDatabase2(obj2));
    return true;
}
You can opt for the Command pattern, where each command contains all the necessary information like connection string, service URL, retry count etc. On top of this, you can consider Rx or Dataflow blocks to do the plumbing.
High level view:
Update: The intention is to have Separation of Concerns. Retry logic is confined to one class, which is a decorator wrapping an existing command.
You can do more analysis and come up with proper command, invoker and receiver objects, and add rollback functionality.
public abstract class BaseCommand
{
public abstract RxObservables Execute();
}
public class DBCommand : BaseCommand
{
public override RxObservables Execute()
{
return new RxObservables();
}
}
public class WebServiceCommand : BaseCommand
{
public override RxObservables Execute()
{
return new RxObservables();
}
}
public class RetryCommand : BaseCommand // Decorator over an existing db/web command
{
    private readonly BaseCommand _baseCommand;
    public RetryCommand(BaseCommand baseCommand)
    {
        _baseCommand = baseCommand;
    }
    public override RxObservables Execute()
    {
        //retry using Polly or custom retry logic around the inner command
        return _baseCommand.Execute();
    }
}
public class TaskDispatcher
{
private readonly BaseCommand _baseCommand;
public TaskDispatcher(BaseCommand baseCommand)
{
_baseCommand = baseCommand;
}
public RxObservables ExecuteTask()
{
return _baseCommand.Execute();
}
}
public class Orchestrator
{
public void Orchestrate()
{
var taskDispatcherForDb = new TaskDispatcher(new RetryCommand(new DBCommand()));
var taskDispatcherForWeb = new TaskDispatcher(new RetryCommand(new WebServiceCommand()));
var dbResultStream = taskDispatcherForDb.ExecuteTask();
var webResultStream = taskDispatcherForWeb.ExecuteTask();
}
}
For me this sounds like 'Distributed Transactions', since you have different resources (database, service communication, file I/O) and want to make a transaction that possibly involves all of them.
In C# you could solve this with the Microsoft Distributed Transaction Coordinator. For every resource you need a resource manager. For databases like SQL Server, and for file I/O, resource managers are already available, as far as I know. For others you can develop your own.
As an example, to execute these transactions you can use the TransactionScope class like this:
using (TransactionScope ts = new TransactionScope())
{
//all db code here
// if an error occurs jump out of the using block and it will dispose and rollback
ts.Complete();
}
(Example taken from here)
To develop your own resource manager, you have to implement IEnlistmentNotification and that can be a fairly complex task. Here is a short example.
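To give a rough idea of the shape involved, a bare-bones volatile resource manager might look like this (a simplified sketch, not production-ready):
using System.Transactions;
public class ConfigFileResource : IEnlistmentNotification
{
    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Do the work tentatively (e.g. write to a temp file), then vote to commit.
        preparingEnlistment.Prepared();
    }
    public void Commit(Enlistment enlistment)
    {
        // Make the tentative work permanent (e.g. swap the temp file in).
        enlistment.Done();
    }
    public void Rollback(Enlistment enlistment)
    {
        // Discard the tentative work.
        enlistment.Done();
    }
    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
// Enlisted inside an ambient transaction:
// Transaction.Current.EnlistVolatile(new ConfigFileResource(), EnlistmentOptions.None);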
Some code that may help you to achieve your goal.
public static class Retry
{
public static void Do(
Action action,
TimeSpan retryInterval,
int retryCount = 3)
{
Do<object>(() =>
{
action();
return null;
}, retryInterval, retryCount);
}
public static T Do<T>(
Func<T> action,
TimeSpan retryInterval,
int retryCount = 3)
{
var exceptions = new List<Exception>();
for (int retry = 0; retry < retryCount; retry++)
{
try
{
if (retry > 0)
Thread.Sleep(retryInterval);
return action();
}
catch (Exception ex)
{
exceptions.Add(ex);
}
}
throw new AggregateException(exceptions);
}
}
Call and retry as below:
int result = Retry.Do(SomeFunctionWhichReturnsInt, TimeSpan.FromSeconds(1), 4);
Ref: http://gist.github.com/KennyBu/ac56371b1666a949daf8
Well...sounds like a really, really nasty situation. You can't open a transaction, write something to the database and go walk your dog in the park. Because transactions have this nasty habit of locking resources for everyone. This eliminates your best option: distributed transactions.
I would execute all operations and prepare a reverse script as I go. If the operation is a success I would purge the script; otherwise I would run it. But this is open to potential pitfalls, and the script must be ready to handle them. For example: what if in the meantime someone already updated the records you added, or calculated an aggregate based on your values?
Still: building a reverse script is the simple solution, no rocket science there. Just
List<Command> reverseScript;
and then, if you need to rollback:
using (TransactionScope tx= new TransactionScope()) {
foreach(Command cmd in reverseScript) cmd.Execute();
tx.Complete();
}
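Building the script simply follows the forward execution; for instance (hypothetical command names, just to illustrate the idea):
var reverseScript = new List<Command>();
// Step 1: create obj1 in the database and record how to undo it.
SaveToDatabase(obj1);
reverseScript.Insert(0, new DeleteFromDatabaseCommand(obj1.Id));
// Step 2: create obj1 on the remote service and record the compensating call.
CallCreateService(obj1);
reverseScript.Insert(0, new CallDeleteServiceCommand(obj1.Id));
// On failure, execute the script; inserting at index 0 makes the undo run newest-first.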
In my .NET 4.0 library I have a piece of code that sends data over the network and waits for a response. In order to not block the calling code the method returns a Task<T> that completes when the response is received so that the code can call the method like this:
// Send the 'message' to the given 'endpoint' and then wait for the response
Task<IResult> task = sender.SendMessageAndWaitForResponse(endpoint, message);
task.ContinueWith(
t =>
{
// Do something with t.Result ...
});
The underlying code uses a TaskCompletionSource so that it can wait for the response message without having to spin up a thread only to have it sit there idling until the response comes in:
private readonly Dictionary<int, TaskCompletionSource<IResult>> m_TaskSources
= new Dictionary<int, TaskCompletionSource<IResult>>();
public Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message)
{
var source = new TaskCompletionSource<IResult>(TaskCreationOptions.None);
m_TaskSources.Add(endpoint, source);
// Send the message here ...
return source.Task;
}
When the response is received it is processed like this:
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
if (m_TaskSources.ContainsKey(endpoint))
{
var source = m_TaskSources[endpoint];
source.SetResult(value);
m_TaskSources.Remove(endpoint);
}
}
Now I want to add a time-out so that the calling code won't wait indefinitely for the response. However, on .NET 4.0 that is somewhat messy because there is no easy way to time-out a task. So I was wondering if Rx would be able to do this more easily. I came up with the following:
private readonly Dictionary<int, Subject<IResult>> m_SubjectSources
= new Dictionary<int, Subject<IResult>>();
private Task<IResult> SendMessageAndWaitForResponse(int endpoint, object message, TimeSpan timeout)
{
var source = new Subject<IResult>();
m_SubjectSources.Add(endpoint, source);
// Send the message here ...
return source.Timeout(timeout).ToTask();
}
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
if (m_SubjectSources.ContainsKey(endpoint))
{
var source = m_SubjectSources[endpoint];
source.OnNext(value);
source.OnCompleted();
m_SubjectSources.Remove(endpoint);
}
}
This all seems to work without issue; however, I've seen several questions stating that Subject should be avoided, so now I'm wondering if there is a more Rx-y way to achieve my goal.
The advice to avoid using Subject in Rx is often overstated. There has to be a source for events in Rx, and it's fine for it to be a Subject.
The issue with Subject is generally when it is used in between two Rx queries that could otherwise be joined, or where there is already a well-defined conversion to IObservable<T> (such as Observable.FromEventXXX or Observable.FromAsyncXXX etc.).
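For instance, if the response already surfaced as a .NET event, you would wrap the event rather than pump a Subject by hand (a sketch with a hypothetical ResponseReceived event):
// Wraps a hypothetical .NET event as an IObservable<T>; no Subject needed.
IObservable<EventPattern<ResponseEventArgs>> responses =
    Observable.FromEventPattern<ResponseEventArgs>(
        h => sender.ResponseReceived += h,
        h => sender.ResponseReceived -= h);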
If you want, you can do away with the Dictionary and multiple Subjects with the approach below. This uses a single subject and returns a filtered query to the client.
It's not "better" per se, Whether this makes sense will depend on the specifics of your scenario, but it saves spawning lots of subjects, and gives you a nice option for monitoring all results in a single stream. If you were dispatching results serially (say from a message queue) this could make sense.
// you only need to synchronize if you are receiving results in parallel
private ISubject<Tuple<int, IResult>> results = // not readonly: reinitialised after OnError (see below)
    Subject.Synchronize(new Subject<Tuple<int, IResult>>());
private Task<IResult> SendMessageAndWaitForResponse(
int endpoint, object message, TimeSpan timeout)
{
// your message processing here, I'm just echoing a second later
Task.Delay(TimeSpan.FromSeconds(1)).ContinueWith(t => {
CompleteWaitForResponseResponse(endpoint, new Result { Value = message });
});
return results.Where(r => r.Item1 == endpoint)
.Select(r => r.Item2)
.Take(1)
.Timeout(timeout)
.ToTask();
}
public void CompleteWaitForResponseResponse(int endpoint, IResult value)
{
results.OnNext(Tuple.Create(endpoint,value));
}
Where I defined a class for results like this:
public class Result : IResult
{
public object Value { get; set; }
}
public interface IResult
{
object Value { get; set; }
}
EDIT - In response to additional questions in the comments.
No need to dispose of the single Subject - it won't leak and will be garbage collected when it goes out of scope.
ToTask does accept a cancellation token - but that's really for cancellation from the client side.
If the remote side disconnects, you can send the error to all clients with results.OnError(exception); you'll want to instantiate a new subject instance at the same time.
Something like:
private void OnRemoteError(Exception e)
{
results.OnError(e);
}
This will manifest as a faulted task to all clients in the expected manner.
It's pretty thread-safe too, because clients subscribing to a subject that has previously sent OnError will get the error back immediately; it's dead from that point. Then, when ready, you can reinitialise with:
private void OnInitialiseConnection()
{
// ... your connection logic
// reinitialise the subject...
results = Subject.Synchronize(new Subject<Tuple<int,IResult>>());
}
For individual client errors, you could consider extending your IResult interface to include errors as data. You can then optionally project this to a fault for just that client by extending the Rx query in SendMessageAndWaitForResponse. For example, add an Exception and a HasError property to IResult so that you can do something like:
return results.Where(r => r.Item1 == endpoint)
.SelectMany(r => r.Item2.HasError
? Observable.Throw<IResult>(r.Item2.Exception)
: Observable.Return(r.Item2))
.Take(1)
.Timeout(timeout)
.ToTask();