The best way (pattern) to execute something while an exception is thrown - c#

Below is some code that tries to encapsulate the logic of re-running something while an exception keeps being caught.
Is there a pattern or something else for doing that? Or what improvements would you suggest to this code?
public static void DoWhileFailing(int triesAmount, int pauseAmongTries, Action codeToTryRun) {
bool passed = false;
Exception lastException = null;
for (int i = 0; !passed && i < triesAmount; i++) {
try {
if (i > 0) {
Thread.Sleep(pauseAmongTries);
}
codeToTryRun();
passed = true;
} catch(Exception e) {
lastException = e;
}
}
if (!passed && lastException != null) {
throw new Exception(String.Format("Something failed more than {0} times. That is the last exception caught.", triesAmount), lastException);
}
}

I would re-write this to eliminate a few variables, but in general your code is OK:
public static void DoWhileFailing(int triesAmount, int pauseAmongTries, Action codeToTryRun) {
if (triesAmount <= 0) {
throw new ArgumentException("triesAmount");
}
Exception ex = null;
for (int i = 0; i < triesAmount; i++) {
try {
codeToTryRun();
return;
} catch(Exception e) {
ex = e;
}
Thread.Sleep(pauseAmongTries);
}
throw new Exception(String.Format("Something failed more than {0} times. That is the last exception caught.", triesAmount), ex);
}

I wrote the following code to do basically the same thing. It also lets you specify the type of Exception to catch and a Func that determines whether the current iteration should throw the exception or continue retrying.
public static void RetryBeforeThrow<T>(
this Action action,
Func<T, int, bool> shouldThrow,
int waitTime) where T : Exception
{
if (action == null)
throw new ArgumentNullException("action");
if (shouldThrow == null)
throw new ArgumentNullException("shouldThrow");
if (waitTime <= 0)
throw new ArgumentException("Should be greater than zero.", "waitTime");
int tries = 0;
do
{
try
{
action();
return;
}
catch (T ex)
{
if (shouldThrow(ex, ++tries))
throw;
Thread.Sleep(waitTime);
}
}
while (true);
}
Then you can call it like this
Action a = () =>
{
//do stuff
};
a.RetryBeforeThrow<Exception>((e, i) => i >= 5, 1000);
You can specify any exception type, and you can inspect the exception in the Func to decide whether it is one you want to throw or one you want to keep retrying on. This also gives you the ability to throw your own exceptions from your Action to stop the retries from occurring.
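For example (a hedged sketch of my own, not from the original post, assuming a resource that throws WebException), the predicate can give up immediately on a 404 while retrying other web failures:
// Sketch only: retry up to 5 times, but rethrow right away on a 404.
// Requires: using System.Net;
Action fetch = () =>
{
    // call some unreliable resource here
};
fetch.RetryBeforeThrow<WebException>(
    (ex, attempt) => attempt >= 5 ||
        (ex.Response as HttpWebResponse)?.StatusCode == HttpStatusCode.NotFound,
    1000);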

I don't see anything wrong with the code, I would just question your assumptions.
A few problems that I see:
Clients need to understand the failure modes of the called action to choose the right parameters.
The action could fail intermittently, which is a capacity killer. Code like this can't scale well.
Clients can wait for an indeterminate amount of time for the action to complete.
All exceptions but the last will be swallowed, which could hide important diagnostic information.
Depending on your needs, your code might suffice, but for a more robust way to encapsulate an unreliable resource, take a look at the circuit breaker pattern.
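To illustrate the idea (a minimal sketch of my own, not a full or production-ready implementation of the pattern): after a number of consecutive failures the circuit "opens" and calls fail fast until a cool-down period has elapsed, instead of hammering the unreliable resource.
public class CircuitBreaker
{
    private readonly int _failureThreshold;
    private readonly TimeSpan _openDuration;
    private int _consecutiveFailures;
    private DateTime _openedAt;

    public CircuitBreaker(int failureThreshold, TimeSpan openDuration)
    {
        _failureThreshold = failureThreshold;
        _openDuration = openDuration;
    }

    public void Execute(Action action)
    {
        // While the circuit is open, fail fast instead of calling the resource.
        if (_consecutiveFailures >= _failureThreshold &&
            DateTime.UtcNow - _openedAt < _openDuration)
        {
            throw new InvalidOperationException("Circuit is open; failing fast.");
        }
        try
        {
            action();
            _consecutiveFailures = 0; // success closes the circuit again
        }
        catch
        {
            _consecutiveFailures++;
            if (_consecutiveFailures >= _failureThreshold)
                _openedAt = DateTime.UtcNow; // open the circuit
            throw;
        }
    }
}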

Related

Assign passed functions result to object with variable type in C#

For an integration I'm running as a service once a day, I need to assign the result of API calls to local variables. However, those APIs might at any time decide to throw a 401 error, in which case I just want to try again, up to three times.
I've got a functioning code to do that:
List<APIEntityProject> projectList = null;
private bool SetProjectList(){
const int maxRetries = 3;
const int RetryPause = 3000;
int retries = 0;
do
{
try
{
projectList = ProjApi.GetProject(activeWorkspace.WorkspaceCode);
}
catch (ApiException e)
{
if (e.ErrorCode == 401) // Unauthorized error (e.g. user doesn't have access to this Workspace)
{
Log.Warning("Unauthorized error while fetching projects from Workspace, try {retries}",retries);
retries++;
System.Threading.Thread.Sleep(RetryPause * retries);//Waits 3 and then 6 seconds before retrying.
}
else throw;
}
} while (projectList == null && retries < maxRetries);
if (retries == maxRetries)
{
Log.Error("An error has occured while trying to retrieve affected Projects, skipped document");
errorCount++;
return false;
}
return true;
}
But unfortunately I need to replicate this logic so often that I would like to put it in a function, e.g. RetryNTimes (similar to this solution):
List<APIEntityProject> projectList = null;
List<APIEntityWBS> WBSList = null;
List<APIEntityScopeItem> SIList = null;
List<APIEntityScopeAssignment> SAList = null;
List<APIEntityActivity> ActList = null;
...
RetryNTimes(projectList,ProjApi.GetProject(activeWorkspace.WorkspaceCode),3,3000,"ProjectList");
RetryNTimes(WBSList, WBSApi.GetAllWBS(activeProject.ProjectID),3,3000,"WBSList");
RetryNTimes(SIList, SIApi.GetAllScopeItems(activeProject.ProjectID),3,3000,"ScopeItemsList");
RetryNTimes(SAList, SAApi.GetAllScopeAssignments(activeProject.ProjectID),3,3000,"ScopeAssignmentsList");
RetryNTimes(ActList, ActApi.GetAllActivities(activeProject.ProjectID),3,3000,"ActivityList");
...
private bool RetryNTimes(T object, Func<T> func, int times, int WaitInterval, string etext){
do
{
try
{
object = func();
}
catch (ApiException e)
{
if (e.ErrorCode == 401)
{
retries++;
Log.Warning("Unauthorized error while fetching {APIErrorSubject}, try {retries}",eText,retries);
System.Threading.Thread.Sleep(RetryPause * retries);//Waits 3 and then 6 seconds before retrying.
}
else throw;
}
} while (object == null || retries < maxRetries);
if (retries == maxRetries)
{
Log.Error("An error has occured while trying to retrieve {APIErrorSubject}, skipped document",eText);
errorCount++;
return false;
}
return true;
}
I've also read through typedef and function pointers, but I'm not sure if it's possible to do this with variable types.
Any ideas?
That article refers to the C language. In C# you can use delegates. Here's a link to start you off.
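For illustration, a minimal sketch of that idea (my own, reusing the ProjApi call from the question): the work is passed as a Func<T> so the helper can invoke it again on each retry.
static T RetryNTimes<T>(Func<T> func, int times, int waitInterval)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return func(); // invoke the delegate; on success we are done
        }
        catch
        {
            if (attempt >= times) throw; // out of attempts, rethrow
            System.Threading.Thread.Sleep(waitInterval);
        }
    }
}
// Usage: wrap the call in a lambda so it is passed, not executed eagerly.
var projectList = RetryNTimes(() => ProjApi.GetProject(activeWorkspace.WorkspaceCode), 3, 3000);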
Based on the idea of asawyer and by looking through some other examples of delegates I've been able to make it work.
static T2 TryNTimes<T1,T2>(Func<T1,T2> func,T1 obj, int times, int WaitInterval)
{
while (times > 0)
{
try
{
T2 result = func.Invoke(obj);
return result;
}
catch (Exception e)
{
if (--times <= 0)
throw;
System.Threading.Thread.Sleep(WaitInterval * times);
}
}
return default;
}
Now I need only 2 steps in my main function
activeWorkspace = TryNTimes(WrkApi.WorkspaceCodeWorkspaceCodeFindByName17, ServiceSettings.sqlConnection.Workspace, 3, 3000)[0];
ProjectList = TryNTimes(WrkApi.GetProjectsByWorkspaceCode, activeWorkspace.code, 3, 3000);
The first one can still throw an error, because the default return value is a null/empty list and you can't take the 0th element of it then. But I guess I can find another way around that issue.
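One hedged way around that (my own sketch, not part of the original solution, assuming the call returns a list) is to guard the result before indexing into it:
var workspaces = TryNTimes(WrkApi.WorkspaceCodeWorkspaceCodeFindByName17,
    ServiceSettings.sqlConnection.Workspace, 3, 3000);
if (workspaces == null || workspaces.Count == 0)
{
    Log.Error("No workspace found, skipped document");
    return false; // or handle the missing workspace however fits the caller
}
activeWorkspace = workspaces[0];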

Attempt and retry

How can I formalize this into something more generic, where I can specify a set of exceptions to throw on and a set of exceptions to retry on, all while improving the code's readability?
private const int RetrySegmentCount = 3;
private const int SecondsBetweenRetry = 30;
var retryCounter = 0;
while (true)
{
try
{
ExecuteProcessThatMayThrow();
break;
}
catch (NotSupportedException) // Do no retry if this is thrown
{
throw;
}
catch (Exception)
{
if (retryCounter < RetrySegmentCount)
{
retryCounter++;
Thread.Sleep(SecondsBetweenRetry * 1000);
}
else
{
throw;
}
}
}
An ideal syntax, in pseudocode, might be:
Repeat(3, 30, [NotSupportedException], [Exception]) => ExecuteProcessThatMayThrow();
Repeat(3, 30) => ExecuteProcessThatMayThrow(); // This will repeat on all
Repeat(3, 30, [NotSupportedException, VeryBadException], [RetryableException]) => ExecuteProcessThatMayThrow();
You can create a reusable method that produces different results depending on the error type. Here is a slightly modified version of what I use.
This method handles the different conditions and retries:
public static bool TryExecute(Action action, int retry, int secondBeforeRetry, List<Type> notSupportedExceptions, List<Type> veryBadExceptions, List<Type> retryableExceptions)
{
var success = false;
// keep trying to run the action
for (int i = 0; i < retry; i++)
{
try
{
// run action
action.Invoke();
// if it reached here it was successful
success = true;
// break the loop
break;
}
catch (Exception ex)
{
// if the exception is not retryable
if (!retryableExceptions.Contains(ex.GetType()))
{
// if its a not supported exception
if (notSupportedExceptions.Contains(ex.GetType()))
{
throw new Exception("No supported");
}
else if (veryBadExceptions.Contains(ex.GetType()))
{
throw new Exception("Very bad");
}
}
else
{
System.Threading.Thread.Sleep(secondBeforeRetry * 1000);
}
}
}
return success;
}
Calling this method is very easy, as the lists can all easily be changed to optional parameters. Here is an example:
// sample action that force an error to be thrown
var a = new Action(() =>
{
var test = "";
var test2 = test[3]; // throw out of range exception
});
try
{
var success = TryExecute(a, 5, 30, new List<Type>() { typeof(IndexOutOfRangeException) }, new List<Type>(), new List<Type>());
}
catch (Exception ex)
{
// handle whatever you want
}

Dataflow (TPL) - exception handling issue?

I'm not sure if I'm doing something wrong or it's an issue with Dataflow, but I can't work out when Receive() throws an exception.
When I run this test:
public class AsyncProblem
{
[Fact]
public void AsyncVsAwaiterProblem()
{
var max = 1000;
var noOfExceptions = 0;
for (int i = 0; i < max; i++)
{
try
{
Await().Wait();
}
catch
{
noOfExceptions++;
}
}
Assert.Equal(max,noOfExceptions);
}
public async Task Await()
{
bool firstPassed = false;
var divideBlock = new TransformBlock<int, int>((x) =>
{
if (firstPassed)
throw new ArgumentException("error");
firstPassed = true;
return 0;
});
divideBlock.Post(2);
divideBlock.Post(3); // this should cause failure;
divideBlock.Complete();
while (await divideBlock.OutputAvailableAsync())
{
var value = divideBlock.Receive(); // this should throw exception on second call
}
try
{
divideBlock.Completion.Wait();
}
catch
{
}
}
}
I'm getting inconsistent results, first run:
Xunit.Sdk.EqualException: Assert.Equal() Failure
Expected: 1000
Actual: 127
then run again:
Xunit.Sdk.EqualException: Assert.Equal() Failure
Expected: 1000
Actual: 14
Can someone confirm that it's not an "on my machine" only issue?
Gist: https://gist.github.com/plentysmart/1c2ed2e925cc3f690f61
Actually, I think the confusion is due to the OutputAvailableAsync behavior. This method will return false when there will never be any more output.
When a block faults (i.e., as the result of an exception from the transformation delegate), it will clear both input and output buffers. This causes OutputAvailableAsync to return false.
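In other words (a small sketch based on the test code above), once OutputAvailableAsync() returns false, the exception that faulted the block can be observed by awaiting its Completion task:
while (await divideBlock.OutputAvailableAsync())
{
    var value = divideBlock.Receive();
}
try
{
    // The exception thrown by the transform delegate is stored on the
    // Completion task; awaiting (or Wait-ing on) it is what rethrows it.
    await divideBlock.Completion;
}
catch (Exception)
{
    // handle/log the ArgumentException that faulted the block
}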

Retry policy within ITargetBlock<TInput>

I need to introduce a retry policy to the workflow. Let's say there are 3 blocks that are connected in such a way:
var executionOptions = new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 3 };
var buffer = new BufferBlock<int>();
var processing = new TransformBlock<int, int>(..., executionOptions);
var send = new ActionBlock<int>(...);
buffer.LinkTo(processing);
processing.LinkTo(send);
So there is a buffer which accumulates data and sends it to the transform block, which processes no more than 3 items at a time, and then the result is sent to the action block.
Transient errors are possible during processing in the transform block, and I want to retry the failing item several times if the error is transient.
I know that blocks generally are not retryable (the delegates passed into the blocks could be made retryable), and one of the options is to wrap the passed delegate to support retrying.
I also know that there is a very good library, TransientFaultHandling.Core, that provides retry mechanisms for transient faults. This is an excellent library, but not in my case. If I wrap the delegate that is passed to the transform block in the RetryPolicy.ExecuteAsync method, the message inside the transform block will be locked, and until the retry either completes or fails, the transform block won't be able to receive a new message. Imagine that all 3 messages enter retrying (say, the next retry attempt is in 2 minutes) and fail: the transform block will be stuck until at least one message leaves it.
The only solution I see is to extend TransformBlock (actually, ITargetBlock would be enough too) and do the retry manually (like from here):
do
{
try { return await transform(input); }
catch
{
if( numRetries <= 0 ) throw;
else Task.Delay(timeout).ContinueWith(t => processing.Post(message));
}
} while( numRetries-- > 0 );
i.e. to put the message into the transform block again with a delay, but in this case the retry context (number of retries left, etc.) also has to be passed into the block. Sounds too complex...
Does anyone see a simpler approach to implementing a retry policy for a workflow block?
I think you pretty much have to do that: you have to track the remaining number of retries for a message, and you have to schedule the retried attempt somehow.
But you could make this better by encapsulating it in a separate method. Something like:
// it's a private class, so public fields are okay
private class RetryingMessage<T>
{
public T Data;
public int RetriesRemaining;
public readonly List<Exception> Exceptions = new List<Exception>();
}
public static IPropagatorBlock<TInput, TOutput>
CreateRetryingBlock<TInput, TOutput>(
Func<TInput, Task<TOutput>> transform, int numberOfRetries,
TimeSpan retryDelay, Action<IEnumerable<Exception>> failureHandler)
{
var source = new TransformBlock<TInput, RetryingMessage<TInput>>(
input => new RetryingMessage<TInput>
{ Data = input, RetriesRemaining = numberOfRetries });
// TransformManyBlock, so that we can propagate zero results on failure
TransformManyBlock<RetryingMessage<TInput>, TOutput> target = null;
target = new TransformManyBlock<RetryingMessage<TInput>, TOutput>(
async message =>
{
try
{
return new[] { await transform(message.Data) };
}
catch (Exception ex)
{
message.Exceptions.Add(ex);
if (message.RetriesRemaining == 0)
{
failureHandler(message.Exceptions);
}
else
{
message.RetriesRemaining--;
Task.Delay(retryDelay)
.ContinueWith(_ => target.Post(message));
}
return null;
}
});
source.LinkTo(
target, new DataflowLinkOptions { PropagateCompletion = true });
return DataflowBlock.Encapsulate(source, target);
}
I have added code to track the exceptions, because I think that failures should not be ignored, they should be at the very least logged.
Also, this code doesn't work very well with completion: if there are retries waiting for their delay and you Complete() the block, it will immediately complete and the retries will be lost. If that's a problem for you, you will have to track outstanding retries and complete target when source completes and no retries are waiting.
In addition to svick's excellent answer, there are a couple of other options:
You can use TransientFaultHandling.Core - just set MaxDegreeOfParallelism to Unbounded so the other messages can get through (a rough sketch follows after these options).
You can modify the block output type to include failure indication and a retry count, and create a dataflow loop, passing a filter to LinkTo that examines whether another retry is necessary. This approach is more complex; you'd have to add a delay to your block if it is doing a retry, and add a TransformBlock to remove the failure/retry information for the rest of the mesh.
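A rough sketch of that first option (my own illustration; retryPolicy and TransformAsync are placeholders for the RetryPolicy.ExecuteAsync wrapper and transformation mentioned in the question):
var executionOptions = new ExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = DataflowBlockOptions.Unbounded
};
// Messages waiting on a retry delay no longer block other messages,
// because the block is no longer limited to 3 concurrent operations.
var processing = new TransformBlock<int, int>(
    x => retryPolicy.ExecuteAsync(() => TransformAsync(x)),
    executionOptions);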
Here are two methods CreateRetryTransformBlock and CreateRetryActionBlock that operate under these assumptions:
The caller wants all items to be processed, even if some of them have repeatedly failed.
The caller wants to know about all exceptions that occurred, even for items that finally succeeded (not applicable for the CreateRetryActionBlock).
The caller may want to set an upper limit to the number of total retries, after which the block should transition to a faulted state.
The caller wants to be able to set all available options of a normal block, including the MaxDegreeOfParallelism, BoundedCapacity, CancellationToken and EnsureOrdered, on top of the options related to the retry functionality.
The implementation below uses a SemaphoreSlim to control the level of concurrency between operations that are attempted for the first time, and previously faulted operations that are retried after their delay duration has elapsed.
public class RetryExecutionDataflowBlockOptions : ExecutionDataflowBlockOptions
{
/// <summary>The limit after which an item is returned as failed.</summary>
public int MaxAttemptsPerItem { get; set; } = 1;
/// <summary>The delay duration before retrying an item.</summary>
public TimeSpan RetryDelay { get; set; } = TimeSpan.Zero;
/// <summary>The limit after which the block transitions to a faulted
/// state (unlimited is the default).</summary>
public int MaxRetriesTotal { get; set; } = -1;
}
public readonly struct RetryResult<TInput, TOutput>
{
public readonly TInput Input { get; }
public readonly TOutput Output { get; }
public readonly bool Success { get; }
public readonly Exception[] Exceptions { get; }
public bool Failed => !Success;
public Exception FirstException => Exceptions != null ? Exceptions[0] : null;
public int Attempts =>
Exceptions != null ? Exceptions.Length + (Success ? 1 : 0) : 1;
public RetryResult(TInput input, TOutput output, bool success,
Exception[] exceptions)
{
Input = input;
Output = output;
Success = success;
Exceptions = exceptions;
}
}
public class RetryLimitException : Exception
{
public RetryLimitException(string message, Exception innerException)
: base(message, innerException) { }
}
public static IPropagatorBlock<TInput, RetryResult<TInput, TOutput>>
CreateRetryTransformBlock<TInput, TOutput>(
Func<TInput, Task<TOutput>> transform,
RetryExecutionDataflowBlockOptions dataflowBlockOptions)
{
if (transform == null) throw new ArgumentNullException(nameof(transform));
if (dataflowBlockOptions == null)
throw new ArgumentNullException(nameof(dataflowBlockOptions));
int maxAttemptsPerItem = dataflowBlockOptions.MaxAttemptsPerItem;
int maxRetriesTotal = dataflowBlockOptions.MaxRetriesTotal;
TimeSpan retryDelay = dataflowBlockOptions.RetryDelay;
if (maxAttemptsPerItem < 1) throw new ArgumentOutOfRangeException(
nameof(dataflowBlockOptions.MaxAttemptsPerItem));
if (maxRetriesTotal < -1) throw new ArgumentOutOfRangeException(
nameof(dataflowBlockOptions.MaxRetriesTotal));
if (retryDelay < TimeSpan.Zero) throw new ArgumentOutOfRangeException(
nameof(dataflowBlockOptions.RetryDelay));
var cancellationToken = dataflowBlockOptions.CancellationToken;
var exceptionsCount = 0;
var semaphore = new SemaphoreSlim(
dataflowBlockOptions.MaxDegreeOfParallelism);
async Task<(TOutput, Exception)> ProcessOnceAsync(TInput item)
{
await semaphore.WaitAsync(); // Preserve the SynchronizationContext
try
{
var result = await transform(item).ConfigureAwait(false);
return (result, null);
}
catch (Exception ex)
{
if (maxRetriesTotal != -1)
{
if (Interlocked.Increment(ref exceptionsCount) > maxRetriesTotal)
{
throw new RetryLimitException($"The max retry limit " +
$"({maxRetriesTotal}) has been reached.", ex);
}
}
return (default, ex);
}
finally
{
semaphore.Release();
}
}
async Task<Task<RetryResult<TInput, TOutput>>> ProcessWithRetryAsync(
TInput item)
{
// Creates a two-stages operation. Preserves the context on every await.
var (result, firstException) = await ProcessOnceAsync(item);
if (firstException == null) return Task.FromResult(
new RetryResult<TInput, TOutput>(item, result, true, null));
return RetryStageAsync();
async Task<RetryResult<TInput, TOutput>> RetryStageAsync()
{
var exceptions = new List<Exception>();
exceptions.Add(firstException);
for (int i = 2; i <= maxAttemptsPerItem; i++)
{
await Task.Delay(retryDelay, cancellationToken);
var (result, exception) = await ProcessOnceAsync(item);
if (exception != null)
exceptions.Add(exception);
else
return new RetryResult<TInput, TOutput>(item, result,
true, exceptions.ToArray());
}
return new RetryResult<TInput, TOutput>(item, default, false,
exceptions.ToArray());
};
}
// The input block awaits the first stage of each operation
var input = new TransformBlock<TInput, Task<RetryResult<TInput, TOutput>>>(
item => ProcessWithRetryAsync(item), dataflowBlockOptions);
// The output block awaits the second (and final) stage of each operation
var output = new TransformBlock<Task<RetryResult<TInput, TOutput>>,
RetryResult<TInput, TOutput>>(t => t, dataflowBlockOptions);
input.LinkTo(output, new DataflowLinkOptions { PropagateCompletion = true });
// In case of failure ensure that the input block is faulted too,
// so that its input/output queues are emptied, and any pending
// SendAsync operations are aborted
PropagateFailure(output, input);
return DataflowBlock.Encapsulate(input, output);
async void PropagateFailure(IDataflowBlock block1, IDataflowBlock block2)
{
try { await block1.Completion.ConfigureAwait(false); }
catch (Exception ex) { block2.Fault(ex); }
}
}
public static ITargetBlock<TInput> CreateRetryActionBlock<TInput>(
Func<TInput, Task> action,
RetryExecutionDataflowBlockOptions dataflowBlockOptions)
{
if (action == null) throw new ArgumentNullException(nameof(action));
var block = CreateRetryTransformBlock<TInput, object>(async input =>
{
await action(input).ConfigureAwait(false); return null;
}, dataflowBlockOptions);
var nullTarget = DataflowBlock.NullTarget<RetryResult<TInput, object>>();
block.LinkTo(nullTarget);
return block;
}
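For completeness, here is a hedged usage sketch (my own example, not part of the original answer), using the block and options types defined above: items are processed with up to 3 attempts each, and the RetryResult tells the downstream block what happened.
var block = CreateRetryTransformBlock<int, string>(async x =>
{
    await Task.Delay(100);
    if (x % 2 == 0) throw new InvalidOperationException($"Item {x} failed");
    return $"Processed {x}";
}, new RetryExecutionDataflowBlockOptions
{
    MaxDegreeOfParallelism = 2,
    MaxAttemptsPerItem = 3,
    RetryDelay = TimeSpan.FromSeconds(1)
});
var printer = new ActionBlock<RetryResult<int, string>>(r =>
    Console.WriteLine(r.Success
        ? r.Output
        : $"Failed after {r.Attempts} attempts: {r.FirstException.Message}"));
block.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });
for (int i = 0; i < 10; i++) block.Post(i);
block.Complete();
printer.Completion.Wait();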

List<T> - Get Chunk Number being executed

I am breaking a list into chunks and processing it as below:
foreach (var partialist in breaklistinchunks(chunksize))
{
try
{
// do something
}
catch
{
// print error
}
}
public static class IEnumerableExtensions
{
public static IEnumerable<List<T>> BreakListinChunks<T>(this IEnumerable<T> sourceList, int chunkSize)
{
List<T> chunkReturn = new List<T>(chunkSize);
foreach (var item in sourceList)
{
chunkReturn.Add(item);
if (chunkReturn.Count == chunkSize)
{
yield return chunkReturn;
chunkReturn = new List<T>(chunkSize);
}
}
if (chunkReturn.Any())
{
yield return chunkReturn;
}
}
}
If there is an error, I wish to run the chunk again. Is it possible to find the particular chunk number where we received the error and run that chunk again?
The batches have to be executed in sequential order. So if batch #2 generates an error, then I need to be able to run 2 again; if it fails again, I just need to get out of the loop for good.
List<Chunk> failedChunks = new List<Chunk>();
foreach (var partialist in breaklistinchunks(chunksize))
{
try
{
//do something
}
catch
{
//print error
failedChunks.Add(partialist);
}
}
// attempt to re-process failed chunks here
I propose this answer based on your comment to Aaron's answer.
The batches have to be executed in sequential order. So if 2 is a problem, then I need to be able to run 2 again; if it fails again, I just need to get out of the loop for good.
foreach (var partialist in breaklistinchunks(chunksize))
{
int fails = 0;
bool success = false;
do
{
try
{
// do your action
success = true; // should be on the last line before the 'catch'
}
catch
{
fails += 1;
// do something about error before running again
}
} while (!success && fails < 2);
// exit the iteration if not successful and fails is 2
if (!success && fails >= 2)
break;
}
I made a possible solution for you if you don't mind switching from Enumerable to Queue, which kind of fits given the requirements...
void Main()
{
var list = new Queue<int>();
list.Enqueue(1);
list.Enqueue(2);
list.Enqueue(3);
list.Enqueue(4);
list.Enqueue(5);
var random = new Random();
int chunksize = 2;
foreach (var chunk in list.BreakListinChunks(chunksize))
{
foreach (var item in chunk)
{
try
{
if(random.Next(0, 3) == 0) // 1 in 3 chance of error
throw new Exception(item + " is a problem");
else
Console.WriteLine (item + " is OK");
}
catch (Exception ex)
{
Console.WriteLine (ex.Message);
list.Enqueue(item);
}
}
}
}
public static class IEnumerableExtensions
{
public static IEnumerable<List<T>> BreakListinChunks<T>(this Queue<T> sourceList, int chunkSize)
{
List<T> chunkReturn = new List<T>(chunkSize);
while(sourceList.Count > 0)
{
chunkReturn.Add(sourceList.Dequeue());
if (chunkReturn.Count == chunkSize || sourceList.Count == 0)
{
yield return chunkReturn;
chunkReturn = new List<T>(chunkSize);
}
}
}
}
Outputs
1 is a problem
2 is OK
3 is a problem
4 is a problem
5 is a problem
1 is a problem
3 is OK
4 is OK
5 is OK
1 is a problem
1 is OK
One possibility would be to use a for loop instead of a foreach loop and use the counter as a means to determine where an error occurred. Then you could continue from where you left off.
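For example (a hedged sketch of my own; Process stands in for "do something", and sourceList/chunkSize are the inputs from the question):
var chunks = sourceList.BreakListinChunks(chunkSize).ToList();
int failuresForCurrentChunk = 0;
for (int i = 0; i < chunks.Count; i++)
{
    try
    {
        Process(chunks[i]);          // placeholder for the real work
        failuresForCurrentChunk = 0; // reset once this chunk succeeds
    }
    catch
    {
        Console.WriteLine($"Chunk #{i} failed");
        if (++failuresForCurrentChunk >= 2)
            break;                   // give up for good after the second failure
        i--;                         // step back so the same chunk runs again
    }
}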
You can use break to exit out of the loop as soon as a chunk fails twice:
foreach (var partialList in breaklistinchunks(chunksize))
{
if(!TryOperation(partialList) && !TryOperation(partialList))
{
break;
}
}
private bool TryOperation<T>(List<T> list)
{
try
{
// do something
}
catch
{
// print error
return false;
}
return true;
}
You could even make the loop into a one-liner with LINQ, but it is generally bad practice to combine LINQ with side-effects, and it's not very readable:
breaklistinchunks(chunksize).TakeWhile(x => TryOperation(x) || TryOperation(x));
