What is the best way to implement a Retry Wrapper in C#?

We currently have a naive RetryWrapper which retries a given func upon the occurrence of an exception:
public T Repeat<T, TException>(Func<T> work, TimeSpan retryInterval, int maxExecutionCount = 3) where TException : Exception
{
...
And for the retryInterval we are using the below logic to "wait" before the next attempt.
_stopwatch.Start();
while (_stopwatch.Elapsed <= retryInterval)
{
    // "do nothing" -- except it actually burns a lot of CPU, especially if retryInterval is high
}
_stopwatch.Reset();
I don't particularly like this logic. Ideally, I would also prefer the retry logic NOT to happen on the main thread. Can you think of a better way?
Note: I am happy to consider answers for .Net >= 3.5

So long as your method signature returns a T, the main thread will have to block until all retries are completed. However, you can reduce CPU usage by having the thread sleep instead of busy-waiting on a stopwatch:
Thread.Sleep(retryInterval);
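For context, the blocking Repeat might then look like this (a minimal sketch of the elided method body, assuming the last failure should propagate to the caller):
public T Repeat<T, TException>(Func<T> work, TimeSpan retryInterval, int maxExecutionCount = 3) where TException : Exception
{
    for (var i = 0; ; ++i)
    {
        try { return work(); }
        catch (TException)
        {
            // out of attempts: let the last exception propagate
            if (i + 1 >= maxExecutionCount) throw;
        }
        Thread.Sleep(retryInterval); // blocks the calling thread, but burns no CPU
    }
}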
If you are willing to change your API, you can make it so that you don't block the main thread. For example, you could use an async method:
public async Task<T> RepeatAsync<T, TException>(Func<T> work, TimeSpan retryInterval, int maxExecutionCount = 3) where TException : Exception
{
    for (var i = 0; ; ++i)
    {
        try { return work(); }
        catch (TException)
        {
            // allow the program to continue in this case, unless we're out
            // of attempts: then let the last exception propagate
            if (i + 1 >= maxExecutionCount) throw;
        }

        // this will use a system timer under the hood, so no thread is consumed while
        // waiting
        await Task.Delay(retryInterval);
    }
}
This can be consumed synchronously with:
RepeatAsync<T, TException>(work, retryInterval).Result;
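Note that .Result wraps any failure in an AggregateException; calling GetAwaiter().GetResult() instead rethrows the original exception:
var result = RepeatAsync<T, TException>(work, retryInterval).GetAwaiter().GetResult();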
However, you can also start the task and then wait for it later:
var task = RepeatAsync<T, TException>(work, retryInterval);
// do other work here
// later, if you need the result, just do
var result = task.Result;
// or, if the current method is async:
var result = await task;
// alternatively, you could just schedule some code to run asynchronously
// when the task finishes:
task.ContinueWith(t =>
{
    if (t.IsFaulted) { /* log t.Exception */ }
    else { /* success case */ }
});

Consider using the Transient Fault Handling Application Block
The Microsoft Enterprise Library Transient Fault Handling Application Block lets developers make their applications more resilient by adding robust transient fault handling logic. Transient faults are errors that occur because of some temporary condition such as network connectivity issues or service unavailability. Typically, if you retry the operation that resulted in a transient error a short time later, you find that the error has disappeared.
It is available as a NuGet package.
using Microsoft.Practices.TransientFaultHandling;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.TransientFaultHandling;
...

// Define your retry strategy: retry 5 times, starting 1 second apart
// and adding 2 seconds to the interval each retry.
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));

// Define your retry policy using the retry strategy and the Windows Azure storage
// transient fault detection strategy.
var retryPolicy = new RetryPolicy<StorageTransientErrorDetectionStrategy>(retryStrategy);

// Receive notifications about retries.
retryPolicy.Retrying += (sender, args) =>
{
    // Log details of the retry.
    var msg = String.Format("Retry - Count:{0}, Delay:{1}, Exception:{2}",
        args.CurrentRetryCount, args.Delay, args.LastException);
    Trace.WriteLine(msg, "Information");
};

try
{
    // Do some work that may result in a transient fault.
    retryPolicy.ExecuteAction(() =>
    {
        // Your method goes here!
    });
}
catch (Exception)
{
    // All the retries failed.
}
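The StorageTransientErrorDetectionStrategy above is specific to Azure storage; for a general-purpose retry wrapper you can supply your own detection strategy. A minimal sketch (treating TimeoutException and WebException as transient is just an assumption for illustration):
public class MyTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex)
    {
        // assumption: in this scenario only timeouts and web errors are worth retrying
        return ex is TimeoutException || ex is WebException;
    }
}

// usage:
// var retryPolicy = new RetryPolicy<MyTransientErrorDetectionStrategy>(retryStrategy);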

How about using a timer instead of a stopwatch?
For example:
TimeSpan retryInterval = new TimeSpan(0, 0, 5);
DateTime startTime;
DateTime retryTime;
Timer checkInterval = new Timer(); // a System.Windows.Forms.Timer, given the Tick event below

private void waitMethod()
{
    checkInterval.Interval = 1000;
    checkInterval.Tick += checkInterval_Tick;
    startTime = DateTime.Now;
    retryTime = startTime + retryInterval;
    checkInterval.Start();
}

void checkInterval_Tick(object sender, EventArgs e)
{
    if (DateTime.Now >= retryTime)
    {
        checkInterval.Stop();
        // Retry Interval Elapsed
    }
}

Related

HttpClient.SendAsync processes two requests at a time when the limit is higher

I have a Windows service that reads data from the database and processes this data using multiple REST API calls.
Originally, this service ran on a timer where it would read unprocessed data from the database and process it using multiple threads limited using SemaphoreSlim. This worked well except that the database read had to wait for all processing to finish before reading again.
ServicePointManager.DefaultConnectionLimit = 10;
Original that works:
// Runs every 5 seconds on a timer
private void ProcessTimer_Elapsed(object sender, ElapsedEventArgs e)
{
    var hasLock = false;
    try
    {
        Monitor.TryEnter(timerLock, ref hasLock);
        if (hasLock)
        {
            ProcessNewData();
        }
        else
        {
            log.Info("Failed to acquire lock for timer."); // This happens all of the time
        }
    }
    finally
    {
        if (hasLock)
        {
            Monitor.Exit(timerLock);
        }
    }
}
public void ProcessNewData()
{
    var unproceesedItems = GetDatabaseItems();
    if (unproceesedItems.Count > 0)
    {
        var downloadTasks = new Task[unproceesedItems.Count];
        var maxThreads = new SemaphoreSlim(semaphoreSlimMinMax, semaphoreSlimMinMax); // semaphoreSlimMinMax = 10 is max threads
        for (var i = 0; i < unproceesedItems.Count; i++)
        {
            maxThreads.Wait();
            var iClosure = i;
            downloadTasks[i] = Task.Run(async () =>
            {
                try
                {
                    await ProcessItemsAsync(unproceesedItems[iClosure]);
                }
                catch (Exception ex)
                {
                    // handle exception
                }
                finally
                {
                    maxThreads.Release();
                }
            });
        }
        Task.WaitAll(downloadTasks);
    }
}
To improve efficiency, I rewrote the service to run GetDatabaseItems in a separate thread from the rest, with a ConcurrentDictionary of unprocessed items between them that GetDatabaseItems fills and ProcessNewData empties.
The problem is that while 10 unprocessed items are sent to ProcessItemsAsync, they are processed two at a time instead of all 10.
The code inside of ProcessItemsAsync calls var response = await client.SendAsync(request); where the delay occurs. All 10 threads make it to this code but come out of it two at a time. None of this code changed between the old version and the new.
Here is the code in the new version that did change:
public void Start()
{
    ServicePointManager.DefaultConnectionLimit = maxSimultaneousThreads; // 10

    // Start getting unprocessed data
    getUnprocessedDataTimer.Interval = getUnprocessedDataInterval; // 5 seconds
    getUnprocessedDataTimer.Elapsed += GetUnprocessedData; // writes data into a ConcurrentDictionary
    getUnprocessedDataTimer.Start();

    cancellationTokenSource = new CancellationTokenSource();

    // Create a new thread to process data
    Task.Factory.StartNew(() =>
    {
        try
        {
            ProcessNewData(cancellationTokenSource.Token);
        }
        catch (Exception ex)
        {
            // error handling
        }
    }, TaskCreationOptions.LongRunning);
}
private void ProcessNewData(CancellationToken token)
{
    // Check if task has been canceled.
    while (!token.IsCancellationRequested)
    {
        if (unprocessedDictionary.Count > 0)
        {
            try
            {
                var throttler = new SemaphoreSlim(maxSimultaneousThreads, maxSimultaneousThreads); // maxSimultaneousThreads = 10
                var tasks = unprocessedDictionary.Select(async item =>
                {
                    await throttler.WaitAsync(token);
                    try
                    {
                        if (unprocessedDictionary.TryRemove(item.Key, out var value))
                        {
                            await ProcessItemsAsync(value);
                        }
                    }
                    catch (Exception ex)
                    {
                        // handle error
                    }
                    finally
                    {
                        throttler.Release();
                    }
                });
                Task.WhenAll(tasks);
            }
            catch (OperationCanceledException)
            {
                break;
            }
        }
        Thread.Sleep(1000);
    }
}
Environment
.NET Framework 4.7.1
Windows Server 2016
Visual Studio 2019
Attempts to fix:
I tried the following with the same bad result (two await client.SendAsync(request) completing at a time):
Set Max threads and ServicePointManager.DefaultConnectionLimit to 30
Manually create threads using Thread.Start()
Replace async/await pattern with sync HttpClient calls
Call data processing using Task.Run(async () => and Task.WaitAll(downloadTasks);
Replace the new long-running thread for ProcessNewData with a timer
What I want is to run GetUnprocessedData and ProcessNewData concurrently with an HttpClient connection limit of 10 (set in config) so that 10 requests are processed at the same time.
Note: the issue is similar to HttpClient.GetAsync executes only 2 requests at a time?, but here DefaultConnectionLimit is increased and the service runs on a Windows Server. The original code also creates more than 2 connections when it runs.
Update
I went back to the original project to make sure it still worked; it did. I added a new timer to perform some unrelated operations and the HttpClient issue came back. I removed the timer and everything worked. I added a new thread to do parallel processing and the problem came back.
This is not a direct answer to your question, but a suggestion for simplifying your service that could make the debugging of any problem easier. My suggestion is to implement the producer-consumer pattern using an iterator for producing the unprocessed items, and a parallel loop for consuming them. Ideally the parallel loop would have async delegates, but since you are targeting the .NET Framework you don't have access to the .NET 6 method Parallel.ForEachAsync. So I will suggest the slightly wasteful approach of using a synchronous parallel loop that blocks threads. You could use either the Parallel.ForEach method, or the PLINQ like in the example below:
private IEnumerable<Item> Iterator(CancellationToken token)
{
    while (true)
    {
        Task delayTask = Task.Delay(5000, token);
        foreach (Item item in GetDatabaseItems()) yield return item;
        delayTask.GetAwaiter().GetResult();
    }
}

public void Start()
{
    //...
    ThreadPool.SetMinThreads(degreeOfParallelism, Environment.ProcessorCount);
    new Thread(() =>
    {
        try
        {
            Partitioner
                .Create(Iterator(token), EnumerablePartitionerOptions.NoBuffering)
                .AsParallel()
                .WithDegreeOfParallelism(degreeOfParallelism)
                .WithCancellation(token)
                .ForAll(item => ProcessItemAsync(item).GetAwaiter().GetResult());
        }
        catch (OperationCanceledException) { } // Ignore
    }).Start();
}
The Iterator fetches unprocessed items from the database in batches, and yields them one by one. The database won't be hit more frequently than once every 5 seconds.
The PLINQ query is going to fetch a new item from the Iterator each time it has a worker available, according to the WithDegreeOfParallelism policy. The setting EnumerablePartitionerOptions.NoBuffering ensures that it won't try to fetch more items in advance.
The ThreadPool.SetMinThreads is used in order to boost the availability of ThreadPool threads, since the PLINQ is going to use lots of them. Without it the ThreadPool will not be able to satisfy the demand immediately, although it will gradually inject more threads and eventually will catch up. But since you already know how many threads you'll need, you can configure the ThreadPool from the start.
In case you dislike the idea of blocking threads, you can find a simple substitute of the Parallel.ForEachAsync here, based on the TPL Dataflow library. It requires installing a NuGet package.
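For reference, such a substitute typically has roughly the following shape (a sketch built on ActionBlock, not necessarily the exact implementation behind the link):
// Requires the System.Threading.Tasks.Dataflow NuGet package.
public static Task ForEachAsync<T>(IEnumerable<T> source, int degreeOfParallelism, Func<T, Task> body)
{
    var block = new ActionBlock<T>(body, new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = degreeOfParallelism
    });
    foreach (T item in source) block.Post(item);
    block.Complete();
    return block.Completion; // completes when all posted items have been processed
}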
The issue turned out to be the place where ServicePointManager.DefaultConnectionLimit is set.
In the version where HttpClient was only doing two requests at a time, ServicePointManager.DefaultConnectionLimit was being set before the threads were being created but after the HttpClient was initialized.
Once I moved it into the constructor before the HttpClient is initialized, everything started working.
Thank you very much to @Theodor Zoulias for the help.
TLDR; Set ServicePointManager.DefaultConnectionLimit before initializing the HttpClient.
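In code, the fix amounts to something like this (a sketch; MyService, maxSimultaneousThreads and httpClient are illustrative names):
public MyService()
{
    // Must run before the first HttpClient/request is created;
    // ServicePoints that already exist keep the old limit (the default is 2).
    ServicePointManager.DefaultConnectionLimit = maxSimultaneousThreads;

    httpClient = new HttpClient(); // now picks up the configured limit
}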

Pattern for cyclic calls of async operations

I want to do some I/O based async operations periodically.
It should not run as fast as possible but with a configurable delay between the cycles.
So far I have come up with two different approaches, and I am wondering which one is better in terms of resource consumption.
Approach 1 with Task.Run()
internal class Program
{
    private static void Main(string[] args)
    {
        for (var i = 0; i < 80; i++)
        {
            var handler = new CommunicationService();
            handler.Start();
        }
        Console.ReadLine();
    }
}
internal class CommunicationService
{
    private readonly HttpClient _httpClient = new HttpClient(new HttpClientHandler());

    public void Start()
    {
        Run();
    }

    private void Run()
    {
        Task.Run(async () =>
        {
            try
            {
                var result = await _httpClient.GetAsync(someUri);
                await Task.Delay(TimeSpan.FromSeconds(configurableValue));
            }
            catch (Exception ex)
            {
                Console.Error.WriteLine(ex);
                Run();
            }
            Run();
        });
    }
}
So the async operation is wrapped in a Task.Run() in a fire-and-forget style, so that it can be started without blocking.
Approach 2 with EventHandler
internal class CommunicationService
{
    private event EventHandler CommunicationHandler;
    private readonly HttpClient _httpClient = new HttpClient(new HttpClientHandler());

    public void Start()
    {
        CommunicationHandler = (o, events) => Communicate();
        OnCommunicationTriggered();
    }

    private async void Communicate()
    {
        try
        {
            var result = await _httpClient.GetAsync(someUri);
            await Task.Delay(TimeSpan.FromSeconds(configurableValue));
        }
        catch (Exception ex)
        {
            Console.Error.WriteLine(ex);
            OnCommunicationTriggered();
        }
        OnCommunicationTriggered();
    }

    private void OnCommunicationTriggered()
    {
        CommunicationHandler.Invoke(this, EventArgs.Empty);
    }
}
With this approach, wrapping in Task.Run() is not necessary, but is it therefore better?
I created a .NET console application for both approaches, recorded the following performance counters over a few minutes, and did not see that much difference, to be honest:
\Process(Events)\% Processor Time (approach 2 ~20% higher)
\Process(Events)\Private Bytes (almost equal, approach 2 slightly lower)
\Process(Events)\Thread Count (approach 2 ~25% lower)
.NET CLR LocksAndThreads(Events)\# of current logical Threads (almost equal, approach 2 slightly higher)
.NET CLR LocksAndThreads(Events)\# of current physical Threads (almost equal, approach 2 slightly higher)
.NET CLR LocksAndThreads(Events)\Contention Rate / sec (approach 2 ~50% higher)
Am I missing the point here with these counters?
Both are really doing the same thing. The event option seems to add an unneeded layer of complexity, and there is no significant difference in resource consumption.
A more appropriate option would be to use a System.Timers.Timer or System.Threading.Timer. This makes the code easier to read and understand since it expresses intent. Behind the scenes, all of the alternatives result in more or less the same thing.
You will need to consider how you count the time. Should the execution time be included in the timing interval or not? Often the interval is much longer than the execution time, so it does not matter. If it does matter, you might need to set the timer to only trigger once, and reset the timer once your operation has completed.
According to the accepted answer, here is my new approach:
internal class EventHandlerService
{
    private Timer _timer;
    private TimeSpan refreshTime = TimeSpan.FromSeconds(5);

    public void Start()
    {
        _timer = new Timer(Communicate, null, 0, (int)refreshTime.TotalMilliseconds);
    }

    private void Communicate(object stateInfo)
    {
        Task.Run(async () =>
        {
            _timer.Change(Timeout.InfiniteTimeSpan, Timeout.InfiniteTimeSpan); // stop the timer
            Console.WriteLine($"Starting at {DateTime.UtcNow.ToString("O")}");
            var stopWatch = new Stopwatch();
            stopWatch.Start();
            try
            {
                await Task.Delay(TimeSpan.FromSeconds(1));
            }
            catch (Exception ex)
            {
            }
            finally
            {
                Console.WriteLine($"Finishing at {DateTime.UtcNow.ToString("O")} after: {stopWatch.Elapsed}");
                var dueTime = refreshTime.Subtract(stopWatch.Elapsed);
                Console.WriteLine($"Calced dueTime to: {dueTime.TotalSeconds} at {DateTime.UtcNow.ToString("O")}");
                _timer.Change(Math.Max((int)dueTime.TotalMilliseconds, 0), (int)refreshTime.TotalMilliseconds); // restart the timer
            }
        });
    }
}
With this approach my needs are covered: the actual refresh/timer period never falls below 5 seconds, but if the handler takes longer than 5 seconds, the next execution is triggered without delay.

Running two tasks with Parallel.Invoke and add a timeout in case one task takes longer

I'm calling two functions that rely on some external web services. For now, they run in parallel and when they both complete the execution resumes. However, if the external servers take too much time to process the requests, it could lock my code for a while.
I want to add a timeout so that if the servers take more than 10 seconds to respond then just continue on. This is what I have, how can I add a timeout?
Parallel.Invoke(
    () => FunctionThatCallsServer1(TheParameter),
    () => FunctionThatCallsServer2(TheParameter)
);
RunThisFunctionNoMatterWhatAfter10Seconds();
I don't think there's an easy way of timing out a Parallel.Invoke once the functions have started, which clearly they will have done after ten seconds here. Parallel.Invoke waits for the functions to complete even if you cancel, so you would have to find a way to complete the functions early.
However, under the covers Parallel.Invoke uses Tasks, and if you use Tasks directly instead of Parallel.Invoke then you can provide a timeout. The code below shows how:
Task task1 = Task.Run(() => FunctionThatCallsServer1(TheParameter));
Task task2 = Task.Run(() => FunctionThatCallsServer2(TheParameter));
// 10000 is timeout in ms, allTasksCompleted is true if they completed, false if timed out
bool allTasksCompleted = Task.WaitAll(new[] { task1, task2 }, 10000);
RunThisFunctionNoMatterWhatAfter10Seconds();
One slight difference from Parallel.Invoke is that if you have a VERY large number of functions, Parallel.Invoke will manage the Task creation better than just blindly creating a Task for every function as here: it creates a limited number of Tasks and re-uses them as the functions complete. This won't be an issue with just a few functions to call, as above. Also note that the timeout only abandons the wait; the tasks themselves keep running in the background until their functions return.
You will need to create an instance of CancellationTokenSource, and at creation time you can configure your timeout, like:
var cts = new CancellationTokenSource(timeout);
Then you will need to create an instance of ParallelOptions where you set ParallelOptions.CancellationToken to the token of the CancellationTokenSource, like:
var options = new ParallelOptions
{
    CancellationToken = cts.Token,
};
Then you can call Parallel.Invoke with the options and your actions:
try
{
    Parallel.Invoke(
        options,
        () => FunctionThatCallsServer1(cts.Token),
        () => FunctionThatCallsServer2(cts.Token)
    );
}
catch (OperationCanceledException ex)
{
    // timeout reached
    Console.WriteLine("Timeout");
    throw;
}
But you will also need to hand the token to the called server functions and handle the timeout in these actions as well. This is because Parallel.Invoke only checks whether the token is cancelled before it starts each action. That means that if all actions are started before the timeout occurs, the Parallel.Invoke call will block for as long as the actions need to finish.
Update:
A good way to test the cancellation is to define FunctionThatCallsServer1 like this:
static void FunctionThatCallsServer1(CancellationToken token)
{
    var endTime = DateTime.Now.AddSeconds(5);
    while (DateTime.Now < endTime)
    {
        token.ThrowIfCancellationRequested();
        Thread.Sleep(1);
    }
}
Below is the code:
using System;
using System.Threading.Tasks;

namespace Algorithums
{
    public class Program
    {
        public static void Main(string[] args)
        {
            ParelleTasks();
            Console.WriteLine("Main");
            Console.ReadLine();
        }

        private static void ParelleTasks()
        {
            Task t = Task.Run(() =>
            {
                FunctionThatCallsServers();
                Console.WriteLine("Task ended after 20 Seconds");
            });
            try
            {
                Console.WriteLine("About to wait for 10 sec completion of task {0}", t.Id);
                bool result = t.Wait(10000);
                Console.WriteLine("Wait completed normally: {0}", result);
                Console.WriteLine("The task status: {0:G}", t.Status);
            }
            catch (OperationCanceledException e)
            {
                Console.WriteLine("Error: " + e.ToString());
            }
            RunThisFunctionNoMatterWhatAfter10Seconds();
        }

        private static bool FunctionThatCallsServers()
        {
            Parallel.Invoke(
                () => FunctionThatCallsServer1(),
                () => FunctionThatCallsServer2()
            );
            return true;
        }

        private static void FunctionThatCallsServer1()
        {
            System.Threading.Thread.Sleep(20000);
            Console.WriteLine("FunctionThatCallsServer1");
        }

        private static void FunctionThatCallsServer2()
        {
            System.Threading.Thread.Sleep(20000);
            Console.WriteLine("FunctionThatCallsServer2");
        }

        private static void RunThisFunctionNoMatterWhatAfter10Seconds()
        {
            Console.WriteLine("RunThisFunctionNoMatterWhatAfter10Seconds");
        }
    }
}

How to determine whether Task.Run is completed within a loop

This may be an odd question and it is really for my educational purpose so I can apply it in future scenarios that may come up.
I am using C#.
I am stress testing so this is not quite production code.
I upload data to my server via a web service.
I start the service off using a Task.Run.
I check to see if the Task is completed before allowing the next Task.Run to begin.
This is done within a loop.
However, because I am using a Task declared at module level, will the result not be affected?
I could declare a local Task variable, but I want to see how far I can get with this question first.
If the Task could raise an event to say it is completed, then this may not be an issue?
This is my code:
//module declaration:
private static Task webTask = Task.Run(() => { System.Windows.Forms.Application.DoEvents(); });

//in a function called via a timer
if (webTask.IsCompleted)
{
    //keep count of completed tasks
}
webTask = Task.Run(() =>
{
    try
    {
        wcf.UploadMotionDynamicRaw(bytes); //my web service
    }
    catch (Exception ex)
    {
        //deal with error
    }
});
IMO you do not need the timer. Using a task continuation, you subscribe to the completion:
System.Threading.Tasks.Task
    .Run(() =>
    {
        // simulate processing
        for (var i = 0; i < 10; i++)
        {
            Console.WriteLine("do something {0}", i + 1);
        }
    })
    .ContinueWith(t => Console.WriteLine("done."));
The output is:
do something 1
do something 2
.
.
do something 9
do something 10
done
Your code could look like this:
var webTask = Task.Run(() =>
{
    try
    {
        wcf.UploadMotionDynamicRaw(bytes); //my web service
    }
    catch (Exception ex)
    {
        //deal with error
    }
}).ContinueWith(t => taskCounter++);
With task continuations you could even differentiate between failed and successful results, if you want to count only successful tasks, using the TaskContinuationOptions.
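For example (a sketch; note that for OnlyOnFaulted to ever fire, the exception must be allowed to escape the task body instead of being caught inside it):
var webTask = Task.Run(() => wcf.UploadMotionDynamicRaw(bytes)); // no try/catch inside
webTask.ContinueWith(t => taskCounter++, TaskContinuationOptions.OnlyOnRanToCompletion);
webTask.ContinueWith(t => Console.Error.WriteLine(t.Exception), TaskContinuationOptions.OnlyOnFaulted);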
You can wait for your task to complete by awaiting your task like this
await webTask;
that will asynchronously wait for 'webTask' to complete. Instead of the timer you can use await Task.Delay which will asynchronously wait for the delay to expire. I would also consider making the wcf call asynchronous so you don't have to call inside Task.Run. See this question for some tips.
I'd rewrite the code as follows:
public async Task UploadAsync()
{
    while (true)
    {
        await Task.Delay(1000); // this is essentially your timer

        // wait for the webTask to complete asynchronously
        await webTask;

        //keep count of completed tasks

        webTask = Task.Run(() =>
        {
            try
            {
                // consider generating an asynchronous method for this if possible.
                wcf.UploadMotionDynamicRaw(bytes); //my web service
            }
            catch (Exception ex)
            {
                //deal with error
            }
        });
    }
}

Restarting a task in the background if certain errors occur

I am using some REST requests using Mono.Mac (3.2.3) to communicate with a server, and as a retry mechanism I am quietly attempting to give the HTTP actions multiple tries if they fail, or time out.
I have the following;
var tries = 0;
while (tries <= ALLOWED_TRIES)
{
    try
    {
        postTask.Start();
        tries++;
        if (!postTask.Wait(Timeout))
        {
            throw new TimeoutException("Operation timed out");
        }
        break;
    }
    catch (Exception e)
    {
        if (tries > ALLOWED_TRIES)
        {
            throw new Exception("Failed to access Resource.", e);
        }
    }
}
Where the task uses parameters of the parent method like so;
var postTask = new Task<HttpWebResponse>(() => { return someStuff(foo, bar); },
    Task.Factory.CancellationToken,
    Task.Factory.CreationOptions);
The problem seems to be that the task does not want to be run again with postTask.Start() after its first completion (and subsequent failure). Is there a simple way of doing this, or am I misusing tasks in this way? Is there some sort of method that resets the task to its initial state, or am I better off using a factory of some sort?
You're indeed misusing the Task here, for a few reasons:
You cannot run the same task more than once. When it's done, it's done.
It is not recommended to construct a Task object manually; there's Task.Run and Task.Factory.StartNew for that.
You should not use Task.Run/Task.Factory.StartNew for a task which does IO-bound work. They are intended for CPU-bound work, as they "borrow" a thread from the ThreadPool to execute the task action. Instead, use pure async Task-based APIs for this, which do not need a dedicated thread to complete.
For example, below you can call GetResponseWithRetryAsync from the UI thread and still keep the UI responsive:
async Task<HttpWebResponse> GetResponseWithRetryAsync(string url, int retries)
{
    if (retries < 0)
        throw new ArgumentOutOfRangeException();

    while (true)
    {
        // a WebRequest cannot be reused after it has been submitted,
        // so create a fresh one for each attempt
        var request = WebRequest.Create(url);
        try
        {
            var result = await request.GetResponseAsync();
            return (HttpWebResponse)result;
        }
        catch (Exception ex)
        {
            if (--retries == 0)
                throw; // rethrow last error
            // otherwise, log the error and retry
            Debug.Print("Retrying after error: " + ex.Message);
        }
    }
}
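Usage from another async method might look like this (the URL and retry count are placeholders):
var response = await GetResponseWithRetryAsync("http://example.com", 3);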
More reading:
"Task.Factory.StartNew" vs "new Task(...).Start".
Task.Run vs Task.Factory.StartNew.
I would recommend doing something like this:
private int retryCount = 3;
private readonly TimeSpan delay = TimeSpan.FromSeconds(5);
...

public async Task OperationWithBasicRetryAsync()
{
    int currentRetry = 0;

    for (;;)
    {
        try
        {
            // Calling external service.
            await TransientOperationAsync();

            // Return or break.
            break;
        }
        catch (Exception ex)
        {
            Trace.TraceError("Operation Exception");

            currentRetry++;

            // Check if the exception thrown was a transient exception
            // based on the logic in the error detection strategy.
            // Determine whether to retry the operation, as well as how
            // long to wait, based on the retry strategy.
            if (currentRetry > this.retryCount || !IsTransient(ex))
            {
                // If this is not a transient error
                // or we should not retry, re-throw the exception.
                throw;
            }
        }

        // Wait to retry the operation.
        // Consider calculating an exponential delay here and
        // using a strategy best suited for the operation and fault.
        await Task.Delay(delay);
    }
}

// Async method that wraps a call to a remote service (details not shown).
private async Task TransientOperationAsync()
{
    ...
}
This code is from the Retry Pattern Design from Microsoft. You can check it out here: https://msdn.microsoft.com/en-us/library/dn589788.aspx
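Regarding the exponential delay mentioned in the comments, a common way to compute it is something like the following sketch (the 1-second base and the jitter range are arbitrary choices):
private static readonly Random jitterer = new Random();

private TimeSpan ComputeDelay(int attempt)
{
    // 1s, 2s, 4s, 8s, ... plus a little random jitter so that many
    // clients retrying at the same time don't hit the server in lockstep
    var exponential = TimeSpan.FromSeconds(Math.Pow(2, attempt - 1));
    var jitter = TimeSpan.FromMilliseconds(jitterer.Next(0, 250));
    return exponential + jitter;
}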
