What could be the reason behind the failure of the task? - c#

I have a class named TableStorageController.cs
public class TableStorageController
{
private static Dictionary<string, BlockingCollection<CloudModelDetail>> s_dictionary = new Dictionary<string, BlockingCollection<CloudModelDetail>>();
private static StorageAccount s_azureStorageAccount;
private readonly CloudModelDetail _cloudModelDetail;
private static CancellationTokenSource s_cancellationTokenSource = new CancellationTokenSource();
private static int s_retailerId;
/// <summary>
/// Static task to log transactions to Azure Table Storage after every 5 minutes.
/// </summary>
static TableStorageController()
{
try
{
Task.Run(async () =>
{
while (true)
{
await Task.Delay(300000, s_cancellationTokenSource.Token);
foreach (var retaileridkey in s_dictionary.Keys)
{
var batchOperation = new TableBatchOperation();
while (s_dictionary[retaileridkey].Count != 0 && batchOperation.Count < 101)
{
batchOperation.InsertOrMerge(s_dictionary[retaileridkey].Take());
}
if (batchOperation?.Count != 0)
await s_azureStorageAccount.VerifyCloudTable.ExecuteBatchAsync(batchOperation);
}
}
});
}
catch (Exception ex)
{
s_log.Fatal("Azure task run failed", ex);
}
}
}
This task is meant to run every 5 minutes and log whatever items are present in the dictionary to Azure Table Storage.
Locally, when I run my code, I can see it gets triggered every 5 minutes.
But somehow, after deployment to another environment (Production), it fails.
Can anyone point out what I am missing?
Note: I have never seen the "Azure task run failed" exception logged.

Try putting the exception handling inside the Task.Run block. You do not await the call to Task.Run, so exceptions will go unnoticed. And since await Task.Delay is already non-blocking, I do not see why you need the extra Task at all. Try this:
// A static constructor cannot be async, so move the loop into a separate
// static method and kick it off from the static constructor instead:
static TableStorageController()
{
    LogTransactionsLoopAsync();
}

static async void LogTransactionsLoopAsync()
{
    try
    {
        while (true)
        {
            await Task.Delay(TimeSpan.FromMinutes(5), s_cancellationTokenSource.Token);
            foreach (var retaileridkey in s_dictionary.Keys)
            {
                var batchOperation = new TableBatchOperation();
                while (s_dictionary[retaileridkey].Count != 0 && batchOperation.Count < 101)
                {
                    batchOperation.InsertOrMerge(s_dictionary[retaileridkey].Take());
                }
                if (batchOperation.Count != 0)
                    await s_azureStorageAccount.VerifyCloudTable.ExecuteBatchAsync(batchOperation);
            }
        }
    }
    catch (Exception ex)
    {
        s_log.Fatal("Azure task run failed", ex);
    }
}
By the way, you could also use a timer instead of Task.Delay with a CancellationToken.
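For example, a minimal sketch of the timer-based variant (s_log is the logger used in the question; FlushToTableStorageAsync is a hypothetical helper containing the batch-insert loop from the question's code):
// Keeping the Timer in a static field prevents it from being garbage collected.
private static readonly Timer s_flushTimer = new Timer(
    async _ =>
    {
        try
        {
            await FlushToTableStorageAsync();
        }
        catch (Exception ex)
        {
            s_log.Fatal("Azure task run failed", ex);
        }
    },
    null,
    TimeSpan.FromMinutes(5),
    TimeSpan.FromMinutes(5));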

Related

Alternatives for Monitor (Wait, PulseAll) in Async Tasks in C#

I implemented task synchronization using Monitor in C#.
However, I have read that Monitor should not be used in asynchronous operations.
In the code below, how do I replace the Monitor methods Wait and PulseAll with a construct that works with Task (asynchronous operations)?
I have read that the SemaphoreSlim.WaitAsync and Release methods can help.
But how do they fit into the sample below, where multiple tasks need to wait on a lock object and releasing the lock wakes up all waiting tasks?
private bool m_condition = false;
private readonly Object m_lock = new Object();
private async Task<bool> SyncInteralWithPoolingAsync(
SyncDatabase db,
List<EntryUpdateInfo> updateList)
{
List<Task> activeTasks = new List<Task>();
int addedTasks = 0;
int removedTasks = 0;
foreach (EntryUpdateInfo entryUpdateInfo in updateList)
{
Monitor.Enter(m_lock);
//If 5 tasks are waiting in ProcessEntryAsync method
if(m_count >= 5)
{
//Do some batch processing to obtain values to set for adapterEntry.AdapterEntryId in ProcessEntryAsync
//.......
//.......
m_condition = true;
Monitor.PulseAll(m_lock); // Wakes all waiters AFTER lock is released
}
Monitor.Exit(m_lock);
removedTasks += activeTasks.RemoveAll(t => t.IsCompleted);
Task processingTask = Task.Run(
async () =>
{
await this.ProcessEntryAsync(
entryUpdateInfo,
db)
.ContinueWith(this.ProcessEntryCompleteAsync)
.ConfigureAwait(false);
});
activeTasks.Add(processingTask);
addedTasks++;
}
}
private async Task<bool> ProcessEntryAsync(SyncDatabase db, EntryUpdateInfo entryUpdateInfo)
{
SyncEntryAdapterData adapterEntry =
updateInfo.Entry.AdapterEntries.FirstOrDefault(e => e.AdapterId == this.Config.Id);
if (adapterEntry == null)
{
adapterEntry = new SyncEntryAdapterData()
{
SyncEntry = updateInfo.Entry,
AdapterId = this.Config.Id
};
updateInfo.Entry.AdapterEntries.Add(adapterEntry);
}
m_condition = false;
Monitor.Enter(m_lock);
while (!m_condition)
{
m_count++;
Monitor.Wait(m_lock);
}
m_count--;
adapterEntry.AdapterEntryId = .... //Set value obtained from batch processing
Monitor.Exit(m_lock);
}
private void ProcessEntryCompleteAsync(Task<bool> task, object context)
{
EntryProcessingContext ctx = (EntryProcessingContext)context;
try
{
string message;
if (task.IsCanceled)
{
Logger.Warning("Processing was cancelled");
message = "The change was cancelled during processing";
}
else if (task.Exception != null)
{
Exception ex = task.Exception;
Logger.Warning("Processing failed with {0}: {1}", ex.GetType().FullName, ex.Message);
message = "An error occurred while synchronzing the changed.";
}
else
{
message = "The change was successfully synchronized";
if (task.Result)
{
//Processing
//...
//...
}
}
}
catch (Exception e)
{
Logger.Info(
"Caught an exception while completing entry processing. " + e);
}
finally
{
}
}
Thanks
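As an aside (an editorial sketch, not from the original thread): one async-friendly way to approximate Monitor.Wait/PulseAll is a TaskCompletionSource that every waiter awaits and that is completed, then replaced, when you want to "pulse" everyone. All names below are illustrative:
// Minimal async "wait / pulse-all" primitive (illustrative, not the question's code).
public sealed class AsyncPulseAll
{
    private readonly object _gate = new object();
    private TaskCompletionSource<bool> _tcs =
        new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);

    // Used instead of Monitor.Wait: the returned task completes when PulseAll is called.
    public Task WaitAsync()
    {
        lock (_gate)
        {
            return _tcs.Task;
        }
    }

    // Used instead of Monitor.PulseAll: wakes every current waiter and
    // installs a fresh TaskCompletionSource for future waiters.
    public void PulseAll()
    {
        lock (_gate)
        {
            _tcs.TrySetResult(true);
            _tcs = new TaskCompletionSource<bool>(TaskCreationOptions.RunContinuationsAsynchronously);
        }
    }
}
A task that previously looped on while (!m_condition) Monitor.Wait(m_lock) would instead do await pulse.WaitAsync() inside that loop, and the batching code would call pulse.PulseAll() after setting m_condition.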

Using Parallel ForEach to POST in API and single Handle each Response

I would like to know whether the code I produced is good practice and does not produce leaks. I have more than 7000 Participant objects, which I will push individually, handling each response to save the "external" id in our database. First I use Parallel.ForEach on the list pParticipant:
Parallel.ForEach(pParticipant, participant =>
{
try
{
//Create
if (participant.id == null)
{
ExecuteRequestCreate(res, participant);
}
else
{//Update
ExecuteRequestUpdate(res, participant);
}
}
catch (Exception ex)
{
LogHelper.Log("Fail Parallel ", ex);
}
});
Then I make a classic (non-async) request, but afterwards I need to "handle" the response (print to the console, save to a text file asynchronously, and update my database):
private async void ExecuteRequestCreate(Uri pRes, ParticipantDo pParticipant)
{
try
{
var request = SetRequest(pParticipant);
//lTaskAll.Add(Task.Run(() => { ExecuteAll(request, pRes, pParticipant); }));
//Task.Run(() => ExecuteAll(request, pRes, pParticipant));
var result = RestExecute(request, pRes);
await HandleResult(result, pParticipant);
//lTaskHandle.Add(Task.Run(() => { HandleResult(result, pParticipant); }));
}
catch (Exception e)
{
lTaskLog.Add(LogHelper.Log(e.Message + " " + e.InnerException));
}
}
Should I run a new task for handling the result (as in the commented-out code)? Will it improve performance?
In the commented-out code you can see that I created a list of tasks so I can wait for them all at the end (lTaskLog holds all my tasks that write to a text file):
int nbtask = lTaskHandle.Count;
try
{
Task.WhenAll(lTaskHandle).Wait();
Task.WhenAll(lTaskLog).Wait();
}
catch (Exception ex)
{
LogHelper.Log("Fail on API calls tasks", ex);
}
I don't have any interface; it is a console program.
I would like to know if the code I produced is good practice
No; you should avoid async void and also avoid Parallel for async work.
Here's a similar top-level method that uses asynchronous concurrency instead of Parallel:
var tasks = pParticipant
    .Select(async participant =>
    {
        try
        {
            //Create
            if (participant.id == null)
            {
                await ExecuteRequestCreateAsync(res, participant);
            }
            else
            {
                //Update
                await ExecuteRequestUpdateAsync(res, participant);
            }
        }
        catch (Exception ex)
        {
            LogHelper.Log("Fail Parallel ", ex);
        }
    })
    .ToList();
await Task.WhenAll(tasks);
And your work methods should be async Task instead of async void:
private async Task ExecuteRequestCreateAsync(Uri pRes, ParticipantDo pParticipant)
{
    try
    {
        var request = SetRequest(pParticipant);
        var result = await RestExecuteAsync(request, pRes);
        await HandleResult(result, pParticipant);
    }
    catch (Exception e)
    {
        LogHelper.Log(e.Message + " " + e.InnerException);
    }
}
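One caveat (an editorial note, not part of the original answer): the Select/WhenAll pattern above starts all 7000 requests at once. If that overwhelms the API, a common variation is to throttle the concurrency with SemaphoreSlim, for example:
// Sketch: limit the number of concurrent requests to 20.
// ExecuteRequestCreateAsync/ExecuteRequestUpdateAsync are the methods above.
var throttle = new SemaphoreSlim(20);
var tasks = pParticipant
    .Select(async participant =>
    {
        await throttle.WaitAsync();
        try
        {
            if (participant.id == null)
                await ExecuteRequestCreateAsync(res, participant);
            else
                await ExecuteRequestUpdateAsync(res, participant);
        }
        catch (Exception ex)
        {
            LogHelper.Log("Fail Parallel ", ex);
        }
        finally
        {
            throttle.Release();
        }
    })
    .ToList();
await Task.WhenAll(tasks);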

Correct way of starting a forever-running loop with Async

I have some existing code I wrote a while ago; it works, but I dislike the fact that the thread I start remains in a loop.
This piece of code is a consumer for an IBM MQ queue, waiting for messages to be processed. The problem I have is with the following code:
private Task ExecuteQueuePolling(CancellationToken cancellationToken)
{
return Task.Factory.StartNew(() =>
{
ConnectToAccessQueue();
Logger.Debug($"Accessed to the queue {queueName}");
Logger.DebugFormat("Repeating timer started, checking frequency: {checkingFrequency}",
checkingFrequency);
while (!cancellationToken.IsCancellationRequested)
{
Logger.Trace( () => "Listening on queues for new messages");
// isChecking = true;
var mqMsg = new MQMessage();
var mqGetMsgOpts = new MQGetMessageOptions
{ WaitInterval = (int)checkingFrequency.TotalMilliseconds };
// 15 second limit for waiting
mqGetMsgOpts.Options |= MQC.MQGMO_WAIT | MQC.MQGMO_FAIL_IF_QUIESCING |
MQC.MQCNO_RECONNECT_Q_MGR | MQC.MQOO_INPUT_AS_Q_DEF;
try
{
mqQueue.Get(mqMsg, mqGetMsgOpts);
if (string.Compare(mqMsg.Format, MQC.MQFMT_STRING, StringComparison.Ordinal) == 0)
{
var text = mqMsg.ReadString(mqMsg.MessageLength);
Logger.Debug($"Message received : [{text}]");
Message message = new Message { Content = text };
foreach (var observer in observers)
observer.OnNext(message);
}
else
{
Logger.Warn("Non-text message");
}
}
catch (MQException ex)
{
if (ex.Message == MQC.MQRC_NO_MSG_AVAILABLE.ToString())
{
Logger.Trace("No messages available");
//nothing to do, empty queue
}
else if (ex.Message == MQC.MQRC_CONNECTION_BROKEN.ToString())
{
Logger.ErrorException("MQ Exception, trying to reconnect", ex);
throw new ReconnectException();
}
}
Thread.Sleep(100);
}
},cancellationToken);
}
//Calling method
try
{
string queueManagerName = configuration.GetValue<string>("IBMMQ:QUEUE_MANAGER_NAME");
// var queueManager = new MQQueueManager(queueManagerName,dictionary2);
QueueMonitor monitor = new QueueMonitor(configuration, "IMPORTER_RECEIVER_TEST");
//_subscription = monitor.Subscribe(receiver);
await monitor.StartAsync(cts.Token).ConfigureAwait(false);
}
catch (Exception e)
{
log.Error(e, "Error creating the queue monitor or it's subscription");
}
finally
{
WaitForCancel(cts);
}
The call to await monitor.StartAsync(cts.Token).ConfigureAwait(false); remains pending.
How should I modify my code so that the call returns and the task continues to loop in the background?
Thanks in advance
Here is how you can simplify your code by replacing Thread.Sleep with Task.Delay:
private async Task ExecuteQueuePolling(CancellationToken cancellationToken)
{
    while (true)
    {
        // Process mqQueue here
        await Task.Delay(100, cancellationToken);
    }
}
Task.Delay has the advantage that it accepts a CancellationToken, so in case of cancellation the loop will exit immediately. This could be important if the polling of the MQ were lazier (for example, every 5 seconds).
private static Task _runningTask;
static void Main(string[] args)
{
var cts = new CancellationTokenSource();
_runningTask = ExecuteQueuePolling(cts.Token);
WaitForCancel(cts);
}
private static void WaitForCancel(CancellationTokenSource cts)
{
var spinner = new SpinWait();
while (!cts.IsCancellationRequested
&& _runningTask.Status == TaskStatus.Running) spinner.SpinOnce();
}
private static Task ExecuteQueuePolling(CancellationToken cancellationToken)
{
var t = new Task(() =>
{
while (!cancellationToken.IsCancellationRequested)
; // your code
if (cancellationToken.IsCancellationRequested)
throw new OperationCanceledException();
}, cancellationToken, TaskCreationOptions.LongRunning);
t.Start();
return t;
}

Stop hanging synchronous method

There is a method HTTP_actions.put_import() in XenAPI, which is synchronous and supports cancellation via its delegate.
I have the following method:
private void UploadImage(.., Func<bool> isTaskCancelled)
{
try
{
HTTP_actions.put_import(
cancellingDelegate: () => isTaskCancelled(),
...);
}
catch (HTTP.CancelledException exception)
{
}
}
It so happens that in some cases this method HTTP_actions.put_import hangs and doesn't react to isTaskCancelled(). In that case the whole application also hangs.
I can run this method in a separate thread and kill it forcefully once I receive a cancellation signal, but the method doesn't always hang, and sometimes I want to cancel it gracefully. Only when the method is really hanging do I want to kill it myself.
What is the best way to handle such situation?
I wrote a blog post about the solution below: http://pranayamr.blogspot.in/2017/12/abortcancel-task.html
I tried a lot of solutions over the last two hours and came up with the working solution below; please try it out.
class Program
{
// Captures the thread running the request, which needs to be aborted
// in case it takes too much time
static Thread threadToCancel = null;
static async Task<string> DoWork(CancellationToken token)
{
var tcs = new TaskCompletionSource<string>();
//enable this for your use
//await Task.Factory.StartNew(() =>
//{
// //Capture the thread
// threadToCancel = Thread.CurrentThread;
// HTTP_actions.put_import(...);
//});
//tcs.SetResult("Completed");
//return tcs.Task.Result;
// Comment out this whole block; it is just used for testing
await Task.Factory.StartNew(() =>
{
//Capture the thread
threadToCancel = Thread.CurrentThread;
//Simulate work (usually from 3rd party code)
for (int i = 0; i < 100000; i++)
{
Console.WriteLine($"value {i}");
}
Console.WriteLine("Task finished!");
});
tcs.SetResult("Completed");
return tcs.Task.Result;
}
public static void Main()
{
var source = new CancellationTokenSource();
CancellationToken token = source.Token;
DoWork(token);
Task.Factory.StartNew(()=>
{
while(true)
{
if (token.IsCancellationRequested && threadToCancel!=null)
{
threadToCancel.Abort();
Console.WriteLine("Thread aborted");
}
}
});
// Here 1000 can be replaced by the number of milliseconds after which you
// want to abort the thread that is calling your long-running method.
source.CancelAfter(1000);
Console.ReadLine();
}
}
Here is my final implementation (based on Pranay Rana's answer).
public class XenImageUploader : IDisposable
{
public static XenImageUploader Create(Session session, IComponentLogger parentComponentLogger)
{
var logger = new ComponentLogger(parentComponentLogger, typeof(XenImageUploader));
var taskHandler = new XenTaskHandler(
taskReference: session.RegisterNewTask(UploadTaskName, logger),
currentSession: session);
return new XenImageUploader(session, taskHandler, logger);
}
private XenImageUploader(Session session, XenTaskHandler xenTaskHandler, IComponentLogger logger)
{
_session = session;
_xenTaskHandler = xenTaskHandler;
_logger = logger;
_imageUploadingHasFinishedEvent = new AutoResetEvent(initialState: false);
_xenApiUploadCancellationReactionTime = new TimeSpan();
}
public Maybe<string> Upload(
string imageFilePath,
XenStorage destinationStorage,
ProgressToken progressToken,
JobCancellationToken cancellationToken)
{
_logger.WriteDebug("Image uploading has started.");
var imageUploadingThread = new Thread(() =>
UploadImageOfVirtualMachine(
imageFilePath: imageFilePath,
storageReference: destinationStorage.GetReference(),
isTaskCancelled: () => cancellationToken.IsCancellationRequested));
imageUploadingThread.Start();
using (new Timer(
callback: _ => WatchForImageUploadingState(imageUploadingThread, progressToken, cancellationToken),
state: null,
dueTime: TimeSpan.Zero,
period: TaskStatusUpdateTime))
{
_imageUploadingHasFinishedEvent.WaitOne(MaxTimeToUploadSvm);
}
cancellationToken.PerformCancellationIfRequested();
return _xenTaskHandler.TaskIsSucceded
? new Maybe<string>(((string) _xenTaskHandler.Result).GetOpaqueReferenceFromResult())
: new Maybe<string>();
}
public void Dispose()
{
_imageUploadingHasFinishedEvent.Dispose();
}
private void UploadImageOfVirtualMachine(string imageFilePath, XenRef<SR> storageReference, Func<bool> isTaskCancelled)
{
try
{
_logger.WriteDebug("Uploading thread has started.");
HTTP_actions.put_import(
progressDelegate: progress => { },
cancellingDelegate: () => isTaskCancelled(),
timeout_ms: -1,
hostname: new Uri(_session.Url).Host,
proxy: null,
path: imageFilePath,
task_id: _xenTaskHandler.TaskReference,
session_id: _session.uuid,
restore: false,
force: false,
sr_id: storageReference);
_xenTaskHandler.WaitCompletion();
_logger.WriteDebug("Uploading thread has finished.");
}
catch (HTTP.CancelledException exception)
{
_logger.WriteInfo("Image uploading has been cancelled.");
_logger.WriteInfo(exception.ToDetailedString());
}
_imageUploadingHasFinishedEvent.Set();
}
private void WatchForImageUploadingState(Thread imageUploadingThread, ProgressToken progressToken, JobCancellationToken cancellationToken)
{
progressToken.Progress = _xenTaskHandler.Progress;
if (!cancellationToken.IsCancellationRequested)
{
return;
}
_xenApiUploadCancellationReactionTime += TaskStatusUpdateTime;
if (_xenApiUploadCancellationReactionTime >= TimeForXenApiToReactOnCancel)
{
_logger.WriteWarning($"XenApi didn't cancel for {_xenApiUploadCancellationReactionTime}.");
if (imageUploadingThread.IsAlive)
{
try
{
_logger.WriteWarning("Trying to forcefully abort uploading thread.");
imageUploadingThread.Abort();
}
catch (Exception exception)
{
_logger.WriteError(exception.ToDetailedString());
}
}
_imageUploadingHasFinishedEvent.Set();
}
}
private const string UploadTaskName = "Xen image uploading";
private static readonly TimeSpan TaskStatusUpdateTime = TimeSpan.FromSeconds(1);
private static readonly TimeSpan TimeForXenApiToReactOnCancel = TimeSpan.FromSeconds(10);
private static readonly TimeSpan MaxTimeToUploadSvm = TimeSpan.FromMinutes(20);
private readonly Session _session;
private readonly XenTaskHandler _xenTaskHandler;
private readonly IComponentLogger _logger;
private readonly AutoResetEvent _imageUploadingHasFinishedEvent;
private TimeSpan _xenApiUploadCancellationReactionTime;
}
HTTP_actions.put_import calls HTTP_actions.put, which calls HTTP.put, which calls HTTP.CopyStream.
The cancelling delegate is passed down to CopyStream, which checks whether the delegate is non-null (i.e. was actually passed) and whether it returns true. However, it only does this in the while condition, so the chances are that the blocking Read on the Stream is what is hanging. It could also occur in the progressDelegate, if one is used.
To get around this, put the call to HTTP_actions.put_import() inside a task or background thread and then separately check for cancellation or a return from the task/thread.
Interestingly enough, a quick glance at that CopyStream code revealed a bug to me. If the function that works out whether the process has been cancelled returns a different value based on some check it is making, you can actually get the loop to exit without a CancelledException being thrown. The result of the cancellation-check delegate should be stored in a local variable.
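Roughly, the pattern being described looks like the sketch below (paraphrased for illustration only; this is not the actual XenAPI source):
// Illustrative only: the cancellation delegate is evaluated twice, once in the
// loop condition and once when deciding whether to throw. If its return value
// changes between those two calls, the loop can exit without an exception.
static void CopyStreamSketch(Stream input, Stream output, Func<bool> cancellingDelegate)
{
    var buffer = new byte[8192];
    int read;
    while ((cancellingDelegate == null || !cancellingDelegate()) &&
           (read = input.Read(buffer, 0, buffer.Length)) > 0)   // the blocking Read
    {
        output.Write(buffer, 0, read);
    }

    // Second evaluation of the delegate; storing its result from the loop in a
    // local variable would avoid the inconsistency (the real code throws
    // HTTP.CancelledException here).
    if (cancellingDelegate != null && cancellingDelegate())
        throw new OperationCanceledException();
}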

Does This Asp.Net WebAPI controller method get swapped out of app pool correctly?

I've got a Post method in my Web API 2 controller that does an insert into a database but often has to retry for many seconds before it succeeds. Basically, that causes a lot of sleeps between the retries. Effectively, it is running the code below. My question is: is this code correct, so that I can have thousands of these running at the same time without using up my IIS app pool?
public async Task<HttpResponseMessage> Post()
{
try
{
HttpContent requestContent = Request.Content;
string json = await requestContent.ReadAsStringAsync();
Thread.Sleep(30000);
//InsertInTable(json);
}
catch (Exception ex)
{
throw ex;
}
return new HttpResponseMessage(HttpStatusCode.OK);
}
* Added by Peter to try Stephen's suggestion of await Task.Delay. This shows the error that you cannot put await in a catch block.
public async Task<HttpResponseMessage> PostXXX()
{
HttpContent requestContent = Request.Content;
string json = await requestContent.ReadAsStringAsync();
bool success = false;
int retriesMax = 30;
int retries = retriesMax;
while (retries > 0)
{
try
{
// DO ADO.NET EXECUTE THAT MAY FAIL AND NEED RETRY
retries = 0;
}
catch (SqlException exception)
{
// exception is a deadlock
if (exception.Number == 1205)
{
await Task.Delay(1000);
retries--;
}
// exception is not a deadlock
else
{
throw;
}
}
}
return new HttpResponseMessage(HttpStatusCode.OK);
}
* More added by Peter, trying the Enterprise Library's Transient Fault Handling block; missing class (StorageTransientErrorDetectionStrategy class not found).
public async Task<HttpResponseMessage> Post()
{
HttpContent requestContent = Request.Content;
string json = await requestContent.ReadAsStringAsync();
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1),
TimeSpan.FromSeconds(2));
var retryPolicy =
new RetryPolicy<StorageTransientErrorDetectionStrategy>(retryStrategy);
try
{
// Do some work that may result in a transient fault.
retryPolicy.ExecuteAction(
() =>
{
// Call a method that uses Windows Azure storage and which may
// throw a transient exception.
Thread.Sleep(10000);
});
}
catch (Exception)
{
// All the retries failed.
}
***** Code that causes SQL Server to spin out of control with open connections:
try
{
await retryPolicy.ExecuteAsync(
async () =>
{
// this opens a SqlServer Connection and Transaction.
// if fails, rolls back and rethrows exception so
// if deadlock, this retry loop will handle correctly
// (caused sqlserver to spin out of control with open
// connections so replacing with simple call and
// letting sendgrid retry non 200 returns)
await InsertBatchWithTransaction(sendGridRecordList);
});
}
catch (Exception)
{
Utils.Log4NetSimple("SendGridController:POST Retries all failed");
}
and the async insert code (with some ...'s)
private static async Task
InsertBatchWithTransaction(List<SendGridRecord> sendGridRecordList)
{
using (
var sqlConnection =
new SqlConnection(...))
{
await sqlConnection.OpenAsync();
const string sqlInsert =
#"INSERT INTO SendGridEvent...
using (SqlTransaction transaction =
sqlConnection.BeginTransaction("SampleTransaction"))
{
using (var sqlCommand = new SqlCommand(sqlInsert, sqlConnection))
{
sqlCommand.Parameters.Add("EventName", SqlDbType.VarChar);
sqlCommand.Transaction = transaction;
try
{
foreach (var sendGridRecord in sendGridRecordList)
{
sqlCommand.Parameters["EventName"].Value =
GetWithMaxLen(sendGridRecord.EventName, 60);
await sqlCommand.ExecuteNonQueryAsync();
}
transaction.Commit();
}
catch (Exception)
{
transaction.Rollback();
throw;
}
}
}
}
}
No. At the very least, you want to replace Thread.Sleep with await Task.Delay. Thread.Sleep will block a thread pool thread in that request context, doing nothing. Using await allows that thread to return to the thread pool to be used for other requests.
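Applied to the original Post method, the minimal change is just the one line (the rest of the code is the question's own):
public async Task<HttpResponseMessage> Post()
{
    HttpContent requestContent = Request.Content;
    string json = await requestContent.ReadAsStringAsync();
    await Task.Delay(30000);   // was Thread.Sleep(30000): the thread returns to the pool while waiting
    //InsertInTable(json);
    return new HttpResponseMessage(HttpStatusCode.OK);
}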
You might also want to consider the Transient Fault Handling Application Block.
Update: You can't use await in a catch block; this is a limitation of the C# language in VS2013 (the next version will likely allow this, as I note on my blog). So for now, you have to do something like this:
private async Task RetryAsync(Func<Task> action, int retries = 30)
{
while (retries > 0)
{
try
{
await action();
return;
}
catch (SqlException exception)
{
// exception is not a deadlock
if (exception.Number != 1205)
throw;
}
await Task.Delay(1000);
retries--;
}
throw new Exception("Retry count exceeded");
}
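For completeness, the controller action could call that helper like this (InsertInTableAsync is a hypothetical async version of the question's commented-out InsertInTable(json)):
public async Task<HttpResponseMessage> Post()
{
    HttpContent requestContent = Request.Content;
    string json = await requestContent.ReadAsStringAsync();
    // Retries the insert on SQL deadlocks (error 1205), as implemented above.
    await RetryAsync(() => InsertInTableAsync(json));
    return new HttpResponseMessage(HttpStatusCode.OK);
}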
To use the Transient Fault Handling Application Block, first you define what errors are "transient" (should be retried). According to your example code, you only want to retry when there's a SQL deadlock exception:
private sealed class DatabaseDeadlockTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
public bool IsTransient(Exception ex)
{
var sqlException = ex as SqlException;
if (sqlException == null)
return false;
return sqlException.Number == 1205;
}
public static readonly DatabaseDeadlockTransientErrorDetectionStrategy Instance = new DatabaseDeadlockTransientErrorDetectionStrategy();
}
Then you can use it as such:
private static async Task RetryAsync()
{
var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
var retryPolicy = new RetryPolicy(DatabaseDeadlockTransientErrorDetectionStrategy.Instance, retryStrategy);
try
{
// Do some work that may result in a transient fault.
await retryPolicy.ExecuteAsync(
async () =>
{
// TODO: replace with async ADO.NET calls.
await Task.Delay(1000);
});
}
catch (Exception)
{
// All the retries failed.
}
}
