We have interceptors on data-changing methods that flush changes to the database after the methods have run. In case of a deadlock, we would like to rerun the methods.
In this simplified example I catch SqlExceptions and in case of a deadlock I try calling Proceed() again.
try {
    invocation.Proceed();
    if (!isReadOnly) {
        log.Trace("Flushing the unit of work.");
        session.Flush();
    }
} catch (GenericADOException ex) {
    var sqle = ADOExceptionHelper.ExtractDbException(ex) as SqlException;
    if (sqle != null) {
        if (sqle.Number == deadlockVictim) {
            invocation.Proceed();
        }
    }
}
This fails because Castle notices that I'm trying to call Proceed() a second time and throws an exception starting with:
This is a DynamicProxy2 error: invocation.Proceed() has been called
more times than expected.
How can I call Proceed again after catching the exception?
It is not possible without an ugly hack that resets the interceptor index of your interceptor chain, but if you really need to, you must:
get into the IInvocation implementation and change the currentInterceptorIndex field (it has changed name since the article was published) in whichever way you want; the article's author uses an extension method
call Proceed() again after resolving whatever problem occurred in your DB
This is really, really, really not recommended; at the very least, set some kind of upper limit on the retry loop so a call can never get stuck in a proceed-reset-proceed cycle.
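For illustration only, a rough sketch of that hack. It assumes the private field really is called currentInterceptorIndex and that the invocation derives from Castle.DynamicProxy.AbstractInvocation; both of these are internals that can change between DynamicProxy versions.

// NOT recommended: rewinds DynamicProxy's private interceptor index via reflection
// so Proceed() may be called again, with an upper limit on retries.
// Requires: using System.Reflection; using Castle.DynamicProxy;
private static readonly FieldInfo InterceptorIndexField =
    typeof(AbstractInvocation).GetField("currentInterceptorIndex",
        BindingFlags.Instance | BindingFlags.NonPublic);

private const int MaxRetries = 3;

public void Intercept(IInvocation invocation)
{
    // Remember where the chain stood when this interceptor was entered, so we can rewind to it.
    var startIndex = InterceptorIndexField.GetValue(invocation);

    for (var attempt = 0; ; attempt++)
    {
        try
        {
            invocation.Proceed();
            if (!isReadOnly)
            {
                log.Trace("Flushing the unit of work.");
                session.Flush();
            }
            return;
        }
        catch (GenericADOException ex)
        {
            var sqle = ADOExceptionHelper.ExtractDbException(ex) as SqlException;
            if (sqle == null || sqle.Number != deadlockVictim || attempt >= MaxRetries)
            {
                throw;
            }
            // Rewind the chain so the next Proceed() is accepted again.
            InterceptorIndexField.SetValue(invocation, startIndex);
        }
    }
}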
I would like to globally catch any exception thrown in my models and controllers, because I currently repeat the following logic in every action method:
public ActionResult SomeActionMethod(SomeViewModel someViewModel)
{
    try
    {
        // Do operation that may throw exception
    }
    catch (BLLException e)
    {
        ModelState.AddModelError("Error", e.Message);
    }
    catch (Exception e)
    {
        _log.Info(e);
        return RedirectToAction("ErrorPage", "ErrorControler");
    }
    return View(someViewModel);
}
The business logic layer throws exceptions that the user can do something about, so a message describing the exception is displayed to them. All other exceptions are written to the server log and the user gets an error page.
Since that logic repeats in every controller, I decided to move it to a global exception handler. The question is: is it possible to return from the exception handler attribute to the line in the action where the exception was thrown? I would like to achieve something like:
public class ExceptionGlobalHandler : HandleErrorAttribute
{
    public override void OnException(ExceptionContext filterContext)
    {
        if (filterContext.Exception is BLLException)
        {
            filterContext.Controller.ViewData.ModelState.AddModelError("Error", filterContext.Exception.Message);
            // Continue executing where the exception was thrown
        }
        else
        {
            _log.Info(filterContext.Exception);
            filterContext.ExceptionHandled = true;
            filterContext.Result = new RedirectToRouteResult(
                new RouteValueDictionary { { "controller", "ErrorControler" }, { "action", "ErrorPage" } });
        }
    }
}
Is this a clean solution, and what is the best way of doing it? What do you think about this kind of approach?
Unfortunately I don't think there is a way in C# to do this; however, you could build in some logic to do it for you.
In your action's code, you'd have to defer the exception: record it internally when it's caught and only throw it at the end of the method. For example:
public void SomeAction()
{
    Exception innerEx = null;
    try
    {
        // some code that may/may not cause exceptions.
    }
    catch (Exception e)
    {
        innerEx = e;
    }
    // some more execution code, equivalent to your "carry on at line x"
    if (innerEx != null)
    {
        throw innerEx;
    }
}
Obviously this means your action's code would have to change, in addition to the wrapper you're using, but that, I think, is the unfortunate problem you have :(
TBH, I think you should rewrite the action's code anyway, because a program that crashes should stop executing, and developers will generally put error-checking code within their methods.
(To avoid confusion, I've been using Action to mean the System.Action class, NOT the MVC action class, because I know quite little about MVC, though I hope it makes sense anyway :P)
I would also say there may be something you've been missing: have you considered separating the methods you're calling into multiple separate calls? Then you could still run them all, reacting appropriately to an exception from one of the calls while the others carry on happily.
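A hypothetical sketch of that last idea (the step bodies are placeholders for your separate pieces of work):

// Run each piece of work independently so one failure doesn't stop the others.
// Requires: using System; using System.Collections.Generic;
Action stepOne = () => { /* first independent call */ };
Action stepTwo = () => { /* second independent call */ };

var errors = new List<Exception>();
foreach (var step in new[] { stepOne, stepTwo })
{
    try
    {
        step();
    }
    catch (Exception e)
    {
        errors.Add(e); // react to this step's failure; the remaining steps still run
    }
}

if (errors.Count > 0)
{
    // decide how to report or handle the collected failures
}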
Apparently, some exceptions can simply get lost when using nested using statements. Consider this simple console app:
using System;

namespace ConsoleApplication
{
    public class Throwing : IDisposable
    {
        int n;

        public Throwing(int n)
        {
            this.n = n;
        }

        public void Dispose()
        {
            var e = new ApplicationException(String.Format("Throwing({0})", this.n));
            Console.WriteLine("Throw: {0}", e.Message);
            throw e;
        }
    }

    class Program
    {
        static void DoWork()
        {
            // ...
            using (var a = new Throwing(1))
            {
                // ...
                using (var b = new Throwing(2))
                {
                    // ...
                    using (var c = new Throwing(3))
                    {
                        // ...
                    }
                }
            }
        }

        static void Main(string[] args)
        {
            AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
            {
                // this doesn't get called
                Console.WriteLine("UnhandledException: {0}", e.ExceptionObject);
            };

            try
            {
                DoWork();
            }
            catch (Exception e)
            {
                // this handles Throwing(1) only
                Console.WriteLine("Handle: {0}", e.Message);
            }

            Console.ReadLine();
        }
    }
}
Each instance of Throwing throws when it gets disposed of. AppDomain.CurrentDomain.UnhandledException never gets called.
The output:
Throw: Throwing(3)
Throw: Throwing(2)
Throw: Throwing(1)
Handle: Throwing(1)
I would like to at least be able to log the missing Throwing(2) and Throwing(3). How do I do this without resorting to a separate try/catch for each using (which would rather defeat the convenience of using)?
In real life, those objects are often instances of classes over which I have no control. They may or may not throw, but in case they do, I'd like to have a way of observing such exceptions.
This question came along while I was looking at reducing the level of nested using. There's a neat answer suggesting aggregating exceptions. It's interesting how this is different from the standard behavior of nested using statements.
[EDITED] This question appears to be closely related:
Should you implement IDisposable.Dispose() so that it never throws?
There's a code analyzer warning for this: CA1065, "Do not raise exceptions in unexpected locations". The Dispose() method is on that list. There is also a strong warning in the Framework Design Guidelines, chapter 9.4.1:
AVOID throwing an exception from within Dispose(bool) except under critical situations where the containing process has been corrupted (leaks, inconsistent shared state, etc.).
This goes wrong because the using statement calls Dispose() inside a finally block. An exception raised in a finally block has an unpleasant side effect: it replaces the active exception if the finally block was entered while the stack was being unwound because of an earlier exception. That is exactly what you see happening here.
Repro code:
class Program {
    static void Main(string[] args) {
        try {
            try {
                throw new Exception("You won't see this");
            }
            finally {
                throw new Exception("You'll see this");
            }
        }
        catch (Exception ex) {
            Console.WriteLine(ex.Message);
        }
        Console.ReadLine();
    }
}
What you are noticing is a fundamental problem in the design of Dispose and using, for which no nice solution as yet exists. IMHO the best design would be to have a version of Dispose which receives as an argument any exception which may be pending (or null, if none is pending), and can either log or encapsulate that exception if it needs to throw one of its own. Otherwise, if you have control of both the code which could cause an exception within the using as well as within the Dispose, you may be able to use some sort of outside data channel to let the Dispose know about the inner exception, but that's rather hokey.
It's too bad there's no proper language support for code associated with a finally block (either explicitly, or implicitly via using) to know whether the associated try completed properly and if not, what went wrong. The notion that Dispose should silently fail is IMHO very dangerous and wrongheaded. If an object encapsulates a file which is open for writing, and Dispose closes the file (a common pattern) and the data cannot be written, having the Dispose call return normally would lead the calling code to believe the data was written correctly, potentially allowing it to overwrite the only good backup. Further, if files are supposed to be closed explicitly and calling Dispose without closing a file should be considered an error, that would imply that Dispose should throw an exception if the guarded block would otherwise complete normally, but if the guarded block fails to call Close because an exception occurred first, having Dispose throw an exception would be very unhelpful.
If performance isn't critical, you could write a wrapper method in VB.NET which would accept two delegates (of types Action and an Action<Exception>), call the first within a try block, and then call the second in a finally block with the exception that occurred in the try block (if any). If the wrapper method was written in VB.NET, it could discover and report the exception that occurred without having to catch and rethrow it. Other patterns would be possible as well. Most usages of the wrapper would involve closures, which are icky, but the wrapper could at least achieve proper semantics.
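Since that was written, C# itself has gained exception filters (C# 6), which can observe a pending exception without catching it, so the wrapper no longer has to be written in VB.NET. A minimal sketch of the idea, with names of my own invention:

// The filter runs before the stack unwinds, so we can record the pending exception
// without catching it, then hand it to the cleanup delegate in the finally block.
static void GuardedUsing(Action body, Action<Exception> cleanup)
{
    Exception pending = null;
    try
    {
        body();
    }
    catch (Exception e) when ((pending = e) == null)
    {
        // Never reached: the filter stores the exception and evaluates to false,
        // so the exception keeps propagating after the finally block runs.
    }
    finally
    {
        cleanup(pending); // pending stays null if body completed normally
    }
}

Here cleanup would call Dispose and can decide whether to log, swallow, or wrap its own failure depending on whether pending is null.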
An alternative wrapper design would avoid closures, but would require that clients use it correctly and would provide little protection against incorrect usage. Its usage pattern would look something like:
var dispRes = new DisposeResult();
...
try
{
    // the following could be in some nested routine which took dispRes as a parameter
    using (var dispWrap = new DisposeWrap(dispRes, /* ... other disposable resources */))
    {
        ...
    }
}
catch (...)
{
}
finally
{
}
if (dispRes.Exception != null)
    ... // handle cleanup failures here
The problem with this approach is that there's no way to ensure that anyone will ever evaluate dispRes.Exception. One could use a finalizer to log cases where dispRes gets abandoned without ever having been examined, but there would be no way to distinguish cases where that occurred because an exception kicked code out past the if test from cases where the programmer simply forgot the check.
PS--Another case where Dispose really should know whether exceptions occur is when IDisposable objects are used to wrap locks or other scopes where an object's invariants may temporarily be invalidated but are expected to be restored before code leaves the scope. If an exception occurs, code should often have no expectation of resolving the exception, but should nonetheless take action based upon it, leaving the lock neither held nor released but rather invalidated, so that any present or future attempt to acquire it will throw an exception. If there are no future attempts to acquire the lock or other resource, the fact that it is invalid should not disrupt system operation. If the resource is critically necessary to some part of the program, invalidating it will cause that part of the program to die while minimizing the damage it does to anything else. The only way I know to really implement this case with nice semantics is to use icky closures. Otherwise, the only alternative is to require explicit invalidate/validate calls and hope that any return statements within the part of the code where the resource is invalid are preceded by calls to validate.
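A small sketch of that last "explicit validate" alternative; every name here is illustrative rather than taken from any particular library. The scope counts as failed unless the guarded block explicitly reaches Complete(), so Dispose can tell the difference between normal and abnormal exit.

// Requires: using System; using System.Threading;
public sealed class LockScope : IDisposable
{
    private readonly object _gate;
    private bool _completed;

    public LockScope(object gate)
    {
        _gate = gate;
        Monitor.Enter(_gate);
    }

    public void Complete()
    {
        _completed = true; // call this as the last statement of the guarded block
    }

    public void Dispose()
    {
        if (!_completed)
        {
            // The guarded block did not finish normally; mark the protected state as
            // invalid here (e.g. set a flag that future acquisitions check and throw on).
        }
        Monitor.Exit(_gate);
    }
}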
Maybe some helper function that lets you write code similar to using:
void UsingAndLog<T>(Func<T> creator, Action<T> action) where T : IDisposable
{
    T item = creator();
    try
    {
        action(item);
    }
    finally
    {
        try { item.Dispose(); }
        catch (Exception ex)
        {
            // Log/pick which one to throw.
        }
    }
}
UsingAndLog(() => new FileStream(...), item =>
{
    // code that you'd write inside using
    item.Write(...);
});
Note that I'd probably not go this route and would just let exceptions from Dispose overwrite my exceptions from the code inside a normal using. If a library throws from Dispose against the strong recommendations not to do so, there is a very good chance that this is not its only issue, and the usefulness of such a library needs to be reconsidered.
I thought this approach would be safe, in that it wouldn't allow exceptions to propagate. A colleague of mine suggested that the exceptions may need to be observed on the main thread, and should thus be passed up to the main thread. Is that the answer? Can you see how an exception could leak through this?
private static void InvokeProcessHandlers<T>(List<T> processHandlers, Action<T> action)
{
    // Loop through process handlers asynchronously, giving them each their own chance to do their thing.
    Task.Factory.StartNew(() =>
    {
        foreach (T handler in processHandlers)
        {
            try
            {
                action.Invoke(handler);
            }
            catch (Exception ex)
            {
                try
                {
                    EventLog.WriteEntry(ResourceCommon.LogSource,
                        String.Format(CultureInfo.CurrentCulture, "An error occurred in a pre- or post-process interception handler: {0}", ex.ToString()),
                        EventLogEntryType.Error);
                }
                catch (Exception)
                {
                    // Eat it. Nothing else we can do. Something is seriously broken.
                }
                continue; // Don't let one handler failure stop the rest from processing.
            }
        }
    });
}
By the way, a stack trace does indeed show an exception leaking from this method.
The exception is an AccessViolationException, and I believe it has to do with the code that calls this method:
InvokeProcessHandlers<IInterceptionPostProcessHandler>(InterceptionPostProcessHandlers, handler => handler.Process(methodCallMessage, methodReturnMessage));
The getter for InterceptionPostProcessHandlers contains this:
_interceptionPreprocessHandlers = ReflectionUtility.GetObjectsForAnInterface<IInterceptionPreprocessHandler>(Assembly.GetExecutingAssembly());
Just make sure to check the parameters for null references before you iterate.
Other than that, there is nothing wrong here, since log writing is not something that should stop execution. I would, however, recommend making it cleaner and more maintainable by encapsulating the logging in a method like:
bool Logger.TryLog(params);
Inside that method, do the try with a catch that returns false. If you want to handle the failure in client code, you can; if you don't, never mind; either way you call the logger in a clean, encapsulated way.
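A minimal sketch of that wrapper, reusing the EventLog-based logging from the question; the method and parameter names are placeholders:

// Returns false instead of throwing if logging itself fails,
// so a broken event log can never take down the caller.
// Requires: using System; using System.Diagnostics;
public static bool TryLog(string source, string message, EventLogEntryType type)
{
    try
    {
        EventLog.WriteEntry(source, message, type);
        return true;
    }
    catch (Exception)
    {
        return false;
    }
}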
A colleague of mine suggested that the exceptions may need to be
observed on the main thread, and should thus be passed up to the main
thread.
How can it be "passed up to the main thread"? The main thread is away and doing its own thing.
The best you can do is to make it configurable and accept an ExceptionHandler delegate that is called.
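For instance, a sketch of the question's method with a caller-supplied delegate; onError is an assumed parameter name, not an existing API:

// Let the caller decide what happens when a handler fails.
// Requires: using System; using System.Collections.Generic; using System.Threading.Tasks;
private static void InvokeProcessHandlers<T>(List<T> processHandlers, Action<T> action, Action<Exception> onError)
{
    Task.Factory.StartNew(() =>
    {
        foreach (T handler in processHandlers)
        {
            try
            {
                action(handler);
            }
            catch (Exception ex)
            {
                try { onError(ex); }
                catch { /* the caller's handler failed too; nothing left to do */ }
            }
        }
    });
}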
I'm developing a .NET WebForms application that makes heavy use of web services to communicate with a database on an outside server.
So I'm trying to find the best way to deal with disconnections and failures when calling a WS method.
For now, I've made a proxy function, a kind of extra layer, for every WS method I call, that repeats the specific WS call in a loop until it succeeds.
For both sync and async calls I've solved my problem, but I've added an annoying extra layer on top of my web service layer, with extra maintenance and a lot of redundant code.
I refuse to believe there's no existing solution for this standard situation, but I can't find it anywhere.
Any ideas?
Following, an example of my extra layer (Sync):
public static int WsMethod(string param1, int param2)
{
    while (true)
    {
        try
        {
            return new Webpoint().WsMethod(param1, param2);
        }
        catch (Exception)
        {
            Thread.Sleep(new TimeSpan(0, 0, sleep_seconds));
        }
    }
}
And Async:
public static void WsMethodAsync(string param1, int param2, WsMethodCompletedEventHandler handler)
{
    while (true)
    {
        try
        {
            var server = new Webpoint();
            server.WsMethodCompleted += delegate(object sender, WsMethodCompletedEventArgs args)
            {
                if (args.Error != null)
                {
                    Thread.Sleep(new TimeSpan(0, 0, sleep_seconds));
                    WsMethodAsync(param1, param2, handler);
                }
                else
                {
                    handler(sender, args);
                }
            };
            server.WsMethodAsync(param1, param2);
            return;
        }
        catch (Exception)
        {
            Thread.Sleep(new TimeSpan(0, 0, sleep_seconds));
        }
    }
}
I would not recommend this pattern. If there is some problem with the parameters of your call, this will run forever.
Normally I would catch the few expected exceptions (CommunicationException, SocketException, whatever you need) and return some status code for them (Ok, NoNetwork, or whatever).
Or wrap all expected exceptions in a MyCommunicationException and throw that (to hide implementation details from the caller and make exception handling easier for it), as sketched below.
But give control back to the caller and let the caller decide how to go on. Don't catch the other, unexpected exceptions (or, if you do catch them, rethrow them).
The caller can then decide to retry once, three times, or as often as it likes.
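A sketch of that wrapping idea, reusing the question's Webpoint proxy. MyCommunicationException is a hypothetical exception type of your own, and the exact exception types an ASMX proxy throws may differ from the ones caught here:

// Catch only the expected communication failures, wrap them, and let
// the caller decide whether and how often to retry.
public static int WsMethod(string param1, int param2)
{
    try
    {
        return new Webpoint().WsMethod(param1, param2);
    }
    catch (System.Net.WebException ex)
    {
        throw new MyCommunicationException("The web service could not be reached.", ex);
    }
    catch (System.Net.Sockets.SocketException ex)
    {
        throw new MyCommunicationException("A network error occurred.", ex);
    }
    // Anything else is unexpected and is left to propagate unchanged.
}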
If something were genuinely wrong with the service, or the connection thereto, or the request being made, then this would repeat indefinitely without ever telling you what's wrong.
What are the implications of the service call failing? How often does it really fail? And, most importantly, for what reason does it fail? If the reason is something that can be fixed, it should be fixed. Not worked around.
As a simple example, if this back-end service call is something initiated by a user of the website (say, they're trying to fetch some data to edit) then if the call fails you just present an error to the user. Something like:
"I'm sorry, but that data is not available at this time. The support team has been notified of this problem. Please try your request again. If the problem persists, contact the help desk at 800-555-1234."
Now, this shouldn't just be a single generic error to show the user no matter what happens. The code needs to be robust enough to discern one kind of error from another. If the service is unreachable, this error applies. If the service is saying that the request is invalid, then there's something wrong either with what the user is doing or with what your code is doing, and that needs to be fixed. Etc.
How you deal with the errors and maintain a usable application is ultimately up to you and the business overall. But I honestly can't recommend the approach you outline on the question. That approach doesn't solve anything, it just ignores the problem until it gets worse. You need to determine the root cause of the errors and address that, not ignore them.
Also, any time an error is suppressed/ignored, a kitten dies.
I'm writing a wrapper around a fairly large unmanaged API. Almost every imported method returns a common error code when it fails. For now, I'm doing this:
ErrorCode result = Api.Method();
if (result != ErrorCode.SUCCESS) {
    throw Helper.ErrorToException(result);
}
This works fine. The problem is, I have so many unmanaged method calls that this gets extremely frustrating and repetitive. So, I tried switching to this:
public static void ApiCall(Func<ErrorCode> apiMethod) {
    ErrorCode result = apiMethod();
    if (result != ErrorCode.SUCCESS) {
        throw Helper.ErrorToException(result);
    }
}
Which allows me to cut down all of those calls to one line:
Helper.ApiCall(() => Api.Method());
There are two immediate problems with this, however. First, if my unmanaged method makes use of out parameters, I have to initialize the local variables first because the method call is actually inside a delegate. I would like to be able to simply declare an out destination without initializing it.
Second, if an exception is thrown, I really have no idea where it came from. The debugger jumps into the ApiCall method and the stack trace only shows the method that contains the call to ApiCall rather than the delegate itself. Since I could have many API calls in a single method, this makes debugging difficult.
I then thought about using PostSharp to wrap all of the unmanaged calls with the error code check, but I'm not sure how that would be done with extern methods. If it ends up simply creating a wrapper method for each of them, then I would have the same exception problem as with the ApiCall method, right? Plus, how would the debugger know how to show me the site of the thrown exception in my code if it only exists in the compiled assembly?
Next, I tried implementing a custom marshaler that would intercept the return value of the API calls and check the error code there. Unfortunately, you can't apply a custom marshaler to return values. But I think that would have been a really clean solution if it had worked.
[return: MarshalAs(UnmanagedType.CustomMarshaler, MarshalTypeRef = typeof(ApiMethod))]
public static extern ErrorCode Method();
Now I'm completely out of ideas. What are some other ways that I could handle this?
Follow the ErrorHandler class from the Visual Studio 2010 SDK. It existed in earlier versions, but the new one has CallWithCOMConvention(Action), which may prove valuable depending on how your API interacts with other managed code.
Of the available methods, I recommend implementing the following:
Succeeded(int)
(Failed() is just !Succeeded(), so you can skip it)
ThrowOnFailure(int)
(Throws a proper exception for your return code)
CallWith_MyErrorCode_Convention(Action) and CallWith_MyErrorCode_Convention(Func<int>)
(like CallWithCOMConvention, but for your error codes; a rough sketch follows this list)
IsCriticalException(Exception)
(used by CallWith_MyErrorCode_Convention)
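This is not the SDK's actual implementation, just a sketch of what a convention wrapper of that shape could look like for the question's ErrorCode. IsCriticalException is your own check from the list above, and ErrorCode.UNEXPECTED is a hypothetical member standing in for "managed code threw":

// Run the delegate and translate non-critical exceptions back into an ErrorCode
// for callers that expect the error-code convention.
public static ErrorCode CallWithErrorCodeConvention(Action action)
{
    try
    {
        action();
        return ErrorCode.SUCCESS;
    }
    catch (Exception ex)
    {
        if (IsCriticalException(ex))
        {
            throw; // never swallow critical failures such as OutOfMemoryException
        }
        return ErrorCode.UNEXPECTED; // hypothetical "managed code threw" value
    }
}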
What happens if you don't check ErrorCode.SUCCESS? Will your code quickly fail and throw an exception? Can you tell which unmanaged API failed if your managed code throws? If so, consider not checking for errors and just letting the runtime throw when your unmanaged API fails.
If this is not the case, I suggest biting the bullet and following your first idea. I know you called it "frustrating and repetitive", but after coming from a project with a "clever" macro solution to a similar problem, I can say that hiding return-value checks and exception wrapping behind such helpers is the doorway to insanity: exception messages and stack traces become misleading, you can't trace the code, performance suffers, and your code becomes optimized for errors and goes off the rails upon success.
If a particular return value is an error, throw a unique exception right there. If it might not be an error, let it go and throw only if it becomes an error. You said you wanted to reduce the check to one line?
if (Api.Method() != ErrorCode.SUCCESS) throw new MyWrapperException("Api.Method broke because ...");
Your proposal also throws the same exception if any method returns the same "common error code". This is another debugging nightmare; for APIs which return the same error codes from multiple calls, do this:
ErrorCode returnValue = Api.Method1();
switch (returnValue)
{
    case ErrorCode.SUCCESS: break;
    case ErrorCode.TIMEOUT: throw new MyWrapperException("Api.Method1 timed out in situation 1.");
    case ErrorCode.MOONPHASE: throw new MyWrapperException("Api.Method1 broke because of the moon's phase.");
    default: throw new MyWrapperException(string.Format("Api.Method1 returned {0}.", returnValue));
}

returnValue = Api.Method2();
switch (returnValue)
{
    case ErrorCode.SUCCESS: break;
    case ErrorCode.TIMEOUT: throw new MyWrapperException("Api.Method2 timed out in situation 2, which is different from situation 1.");
    case ErrorCode.MONDAY: throw new MyWrapperException("Api.Method2 broke because of Mondays.");
    default: throw new MyWrapperException(string.Format("Api.Method2 returned {0}.", returnValue));
}
Verbose? Yup. Frustrating? No, what's frustrating is trying to debug an app that throws the same exception from every line whatever the error.
I think the easy way is to add an additional layer.
class Api
{
    ....

    private static extern ErrorCode Method(); // changing Method to private

    public static void NewMethod() // NewMethod is void, because errors are converted to exceptions
    {
        ErrorCode result = Method();
        if (result != ErrorCode.SUCCESS) {
            throw Helper.ErrorToException(result);
        }
    }

    ....
}
Create a private property to hold the ErrorCode value, and throw the exception from the setter.
class Api
{
    private static ErrorCode _result;

    private static ErrorCode Result
    {
        get { return _result; }
        set
        {
            _result = value;
            if (_result != ErrorCode.SUCCESS)
            {
                throw Helper.ErrorToException(_result);
            }
        }
    }

    public static void NewMethod()
    {
        Result = Api.Method();
        Result = Api.Method2();
    }
}
Write a T4 template to do the generation for you.
Your existing code is actually really, really close. If you use an expression tree to hold the lambda, instead of a Func delegate, then your Helper.ApiCall can pull out the identity of the function that was called and add that to the exception it throws. For more information on expression trees and some very good examples, Google Marc Gravell.
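A rough sketch of that idea, reusing the question's ErrorCode, Api, and Helper.ErrorToException; the wrapping exception type and message format are just for illustration:

// Accept an expression tree instead of a plain delegate so the called method's
// name can be recovered for diagnostics. Note that Compile() adds a per-call cost.
// Requires: using System; using System.Linq.Expressions;
public static void ApiCall(Expression<Func<ErrorCode>> apiMethod)
{
    ErrorCode result = apiMethod.Compile()();
    if (result != ErrorCode.SUCCESS)
    {
        var call = apiMethod.Body as MethodCallExpression;
        string name = call != null ? call.Method.Name : "<unknown API call>";
        throw new InvalidOperationException(
            string.Format("{0} failed with {1}.", name, result),
            Helper.ErrorToException(result));
    }
}

// Usage keeps the same shape as before:
// Helper.ApiCall(() => Api.Method());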