try catch bad form? - c#

I think I sort of know the answer to this, but there are always many ways to do things (some of which are clearly wrong :) )...
I have a little recursive function to find an employee's manager's ID. This is being used in an import script, and it may be that the person's immediate manager has left (been disabled), so we need to find that employee's (the manager's) manager, and so on, so we can assign stuff to them. In case it isn't obvious, EmployeesToDisable is a generic list of the employees that are marked as disabled in this import.
I guess what I am really asking is: is the overhead associated with catching an exception too much of a trade-off to make in this instance, and should I be doing this differently?
This does work fine, but it feels like bad form...
I have code thus:
private Guid getMyEnabledManagersID(OnlineEmployee e)
{
    Employee manager;
    try
    {
        // See if Employee e's manager is in the disabled list.
        manager = (from emp in EmployeesToDisable
                   where emp.EmployeeID.Equals(e.ManagerID)
                   select emp).Single();
        // Yes they are, so we need to call this again.
        return getMyEnabledManagersID(manager);
    }
    catch
    {
        return e.ManagerID;
    }
}

Leaving aside the recursion, you should perhaps just use SingleOrDefault and test for null. Actually, you probably don't need the full employee object - you can probably suffice with just returning the id (throughout), i.e.
private Guid getMyEnabledManagersID(Guid managerId)
{
    var disabled = (from emp in EmployeesToDisable
                    where emp.EmployeeID == managerId
                    select (Guid?)emp.ManagerID).SingleOrDefault();
    return disabled == null ? managerId : getMyEnabledManagersID(disabled.Value);
}
Actually, the biggest concern I have with the original form is that it isn't specific to the type of exception; it could be "thread abort", "zombie connection", "deadlock", etc.

As others have pointed out, never do that. That is a "worst practice". The exception is there to tell you that you have a logic bug in your program. By catching the exception and continuing on, you hide the logic bug.
Only use Single when you know, positively and for sure, that there is exactly one element in the sequence. If there might be other numbers of elements in the list then use First, FirstOrDefault, SingleOrDefault, or write your own sequence operator; it's not hard to do.
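For what it's worth, such a custom operator might look something like the following. This is just a sketch; SingleOrFallback is a made-up name, not a BCL method:

using System;
using System.Collections.Generic;

public static class SequenceExtensions
{
    // Returns the only element, or a fallback when the sequence is empty.
    // Two or more elements still throw, because that genuinely would be a bug.
    public static T SingleOrFallback<T>(this IEnumerable<T> source, T fallback)
    {
        using (IEnumerator<T> e = source.GetEnumerator())
        {
            if (!e.MoveNext())
                return fallback;
            T result = e.Current;
            if (e.MoveNext())
                throw new InvalidOperationException("Sequence contains more than one element.");
            return result;
        }
    }
}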
The main reasons to not use this worst practice are:
1) As I said, it hides a bug; those exceptions should never be caught because they should never be thrown in a working program. The exceptions are there to help you debug your program, not to control its flow.
2) Using exceptions as control flow like this makes it difficult to debug a program. Debuggers are often configured to stop on any exception, whether it is handled or not. Lots of "expected" exceptions make that harder. Exceptions should never be expected, they should be exceptional; that's why they're called "exceptions".
3) The catch catches everything, including stuff that probably indicates a fatal error that should be reported to the user.

Apart from the other answers:
Entering a try block is cheap, but actually throwing and catching an exception is a very costly operation; a simple if test is far faster than relying on a catch to redirect the logic.
Try/catch should not be part of the business logic; it should only be part of error handling.

You could use FirstOrDefault() instead of Single and handle the returned null value.
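A sketch of that approach, borrowing the Guid-based signature from the answer above (EmployeesToDisable and the property names come from the question, and this assumes Employee is a reference type so FirstOrDefault() yields null on no match):

private Guid getMyEnabledManagersID(Guid managerId)
{
    // Null when this manager is not in the disabled list.
    var disabled = EmployeesToDisable.FirstOrDefault(emp => emp.EmployeeID == managerId);
    return disabled == null ? managerId : getMyEnabledManagersID(disabled.ManagerID);
}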


Exiting a Class From C# Using a Catch

I'm looking for the best method of handling errors in a c# winforms class that I have. The gist of the application is that it has a data analyzer that analyzes the data for statistics and other such stuff. However, I'm looking for the proper way of handling an ABORT.
For example, I have the class called Analyzer
namespace PHOEBE
{
    public class Analyzer
    {
        public Analyzer()
        {
            DoAnalysis();
            DoFurtherAnalysis();
        }

        public void DoAnalysis()
        {
            try
            {
                Convert.ToInt32("someNumber...."); // obviously fails..
            }
            catch
            {
                // ERROR OCCURRED, ABORT ALL ANALYSIS
                return;
            }
        }
    }
}
Obviously, when DoAnalysis() is called, there will be an error that occurs. The catch block will catch the exception. However, when this catch occurs, it will return to the constructor and run DoFurtherAnalysis(). This is a problem.
I know that you could do return values from each method, where each value indicates a certain outcome (i.e. 1 = success, 0 = fail). However, a lot of the methods I call use return values already. I could also use a boolean that gets flagged when an error occurs and check that value before calling the next method from the constructor, but checking this value each time is annoying and repetitive.
I was really hoping for some sort of "abort mechanism" that I could use. Are there any other ways of working around this? Any interesting work-arounds for this?
Assume this class is being called from a form.
Just let the exception propagate up - you should only catch the exception if you can actually handle it. Exceptions are the "abort mechanism" in .NET. You're currently swallowing the signal that everything's gone wrong, and returning as if all were well.
Generally I find catching exceptions to be pretty rare - usually it's either at the top level (to stop a whole server from going down just because of one request) or in order to transform an exception of one kind into another in order to maintain appropriate abstractions.
I was really hoping for some sort of "abort mechanism" that I could use. Are there any other ways of working around this? Any interesting work-arounds for this?
Yes, there is. It is called exception handling.
Let's rewrite your code:
namespace PHOEBE
{
    public class Analyzer
    {
        public Analyzer()
        {
            try
            {
                DoAnalysis();
                DoFurtherAnalysis();
            }
            catch
            {
                // ERROR OCCURRED, ABORT ALL ANALYSIS
                return;
            }
        }

        public void DoAnalysis()
        {
            Convert.ToInt32("someNumber...."); // obviously fails..
        }
    }
}
Now the constructor will abort and not run the second method, since the exception will "bubble through" and be caught where you want it.
On an unrelated note: please catch the most specific exception possible, in this case a FormatException.
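Applied to the rewritten constructor above, that might look like the following sketch (Console.Error is just a stand-in for however you want to report the abort):

public Analyzer()
{
    try
    {
        DoAnalysis();
        DoFurtherAnalysis();
    }
    catch (FormatException ex)
    {
        // Only the parse failure we anticipate from Convert.ToInt32 is
        // handled; anything unexpected still propagates to the caller.
        Console.Error.WriteLine("Analysis aborted: " + ex.Message);
    }
}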
You are subverting the existing "abort" mechanism by catching an exception that you are not doing anything about and swallowing it.
You should not use a try{}catch{} block in this case; let the exception bubble up and cause the application to abort.
The easiest work-around is don't catch the exception. If that were to happen, it'd go straight past the DoFurtherAnalysis() function and out to the original caller.
I don't see anything annoying about returning and checking a bool return value from the function. It's a much better solution than having some tricky internal state management that you will, for sure, mess up when you come back to your code after a couple of months.
Make code simple and straightforward. It's not annoying, it's good.
In your specific case, if you just want to abort everything, simply don't catch the exception; it will abort your program.
use a try...catch in the constructor?
Well, you've got several issues mixed up here. First, it looks like you do possibly very expensive processing from your constructor. If that processing can throw, you really don't want to call it from your constructor, because you don't even have the option of returning an error code.
Second (and you'll read this in many threads here), how you handle errors really depends on the application and the expectations of your users. Some errors could be corrected by changes to inputs. Others might happen in the middle of the night if analysis takes a long time, and you might want to continue with another analysis, logging this error.
So I think we're really going to punt back to you for more information about the above.
You could just move DoFurtherAnalysis() into the try block.
And I would do this entire process somewhere other than the constructor.
Only thing I ever do in the constructor is initialize properties.
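Putting those two suggestions together, a sketch of the reshaped class might look like this (TryRunAnalysis is a made-up name; the method bodies come from the question):

public class Analyzer
{
    public Analyzer()
    {
        // Initialize properties only; no heavy work here.
    }

    public bool TryRunAnalysis()
    {
        try
        {
            DoAnalysis();
            DoFurtherAnalysis();
            return true;
        }
        catch (FormatException)
        {
            return false; // the caller decides how to report the abort
        }
    }

    private void DoAnalysis()
    {
        Convert.ToInt32("someNumber...."); // obviously fails..
    }

    private void DoFurtherAnalysis() { /* ... */ }
}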

Bad Form to use try/catch as a test?

I get the feeling that this is, but I wanted to confirm-
is it bad form to do something like:
try
{
    SqlUpload(table);
}
catch (PrimaryKeyException pke)
{
    DeleteDuplicatesInTable(table);
    SqlUpload(table);
}
catch (Exception ex)
{
    Console.Write(ex);
}
Is it better to do this to potentially save on efficiency in case the table doesn't have duplicates, or is it better to run the delete duplicates bit anyway? (I'm assuming conditions where the table will upload as long as there are no duplicates within the table itself). Also, given how try-catch statements impact performance, would it even be faster to do it this way at all?
I apologize for the crude nature of this example, but it was just to illustrate a point.
Exceptions can be used correctly in transaction management, but that is not the case in this example. At first glance, this appeared similar to what Linq2Sql's DataContext class does in its call to SubmitChanges(). However, that analogy was incorrect. (Please see Chris Marisic's comment to my post for an accurate criticism of the comparison.)
On exceptions
In general, if there is some issue that is likely to be encountered, you should check for it first. Exceptions should be used when a response is truly "exceptional" (meaning it is unexpected in the context of proper usage). If the proper usage of a function within a completely valid context throws an exception, then you are probably using exceptions incorrectly.
Excerpt from DataContext.SubmitChanges
This code shows an example of the correct usage of exceptions in transaction management.
Note: Just because Microsoft does it, doesn't automatically mean its right. However, their code does have a pretty good track record.
DbTransaction transaction = null;
bool flag = false; // declared earlier in the actual method; included so the excerpt stands alone
try
{
    try
    {
        if (this.provider.Connection.State == ConnectionState.Open)
        {
            this.provider.ClearConnection();
        }
        if (this.provider.Connection.State == ConnectionState.Closed)
        {
            this.provider.Connection.Open();
            flag = true;
        }
        transaction = this.provider.Connection.BeginTransaction(IsolationLevel.ReadCommitted);
        this.provider.Transaction = transaction;
        new ChangeProcessor(this.services, this).SubmitChanges(failureMode);
        this.AcceptChanges();
        this.provider.ClearConnection();
        transaction.Commit();
    }
    catch
    {
        if (transaction != null)
        {
            try
            {
                transaction.Rollback();
            }
            catch
            {
            }
        }
        throw;
    }
    return;
}
finally
{
    this.provider.Transaction = null;
    if (flag)
    {
        this.provider.Connection.Close();
    }
}
Yes, this type of code is considered bad form in .NET.
You would be better off writing code similar to:
if (HasPrimaryKeyViolations(table))
    DeletePrimaryKeyViolations(table);
SqlUpload(table);
Reading this code, I would assume that primary key violations are an exceptional case and not expected. If they are expected, you should remove the duplicates beforehand; do not rely on exception handling for an expected case.
In all common languages/interpreters/compilers, exception handling is implemented to have a minimal performance impact when an exception isn't raised; under the hood, adding an exception handler is usually just pushing a single value onto a stack or something similar. Just adding a try block won't usually have a performance impact.
On the other hand, when an exception is actually raised, things can get very slow very fast. It's the trade-off for being able to add the try block so cheaply, and it's usually seen as acceptable, because you only take the performance hit if something unexpected has already gone wrong somewhere else.
So, in theory, if there is a condition that you expect to happen, use an if instead. It's semantically better, because it expresses to the reader that the bad condition is probably going to happen from time to time (e.g., the user types in some invalid input), while the try expresses something that you hope never happens (the data source is corrupt). As above, it's also going to be easier on performance.
Of course, rules are defined by their exceptions (no pun intended). In practice, there are two situations where this becomes the wrong answer:
First, if you are performing a complex operation like parsing a file, and it's all-or-nothing: if one field in the file is corrupt, you want to bail on the whole operation. Exceptions allow you to jump out of the whole complex process up to an exception handler encapsulating the entire parse. Sure, you could litter the parsing code with checks and return values and checks on the return values, but it's going to be a lot cleaner just to throw the exception and let it rise up to the top of the operation. Even if you expect that the input is going to be bad sometimes, if there isn't a reasonable way to handle the error exactly at the point where it occurs, use exceptions to let the error rise up to a more appropriate place to handle it. That's really what exceptions were for in the first place: getting rid of all that error-handling code down in the details, and moving it to one consolidated, reasonable place.
Second, a library might not let you make the choice. For example, int.TryParse is a good alternative to int.Parse if the input hasn't already been vetted. On the other hand, a library might not have a non-exception-throwing option. In that case, don't brew up your own code to check without exceptions; though it might be bad form to use exception handling to check for an expected condition, it's worse form to duplicate the functionality of the library just to avoid an exception. Just use the try/catch, and maybe add a snide comment about how you didn't WANT to do it, but the library authors MADE you :).
For your particular example, probably stick with the exception. While exception handling isn't considered "fast", it's still faster than a round trip to the database, and there isn't going to be a reasonable way to check for that exception without sending the command anyway. Plus, hitting a database is interfacing with an external system; that in itself is a pretty good reason to expect the unexpected. Exception handling almost always makes sense when you are leaving your particular controlled environment.
Or, more specifically to your example, you may consider using a stored procedure with a MERGE statement, to use the source data in the table to update or insert as appropriate; the update will be a little friendlier on all fronts than doing a delete-then-insert for existing keys.
Try/catch is expensive when an exception is actually thrown, so don't use it as a control structure. Instead, use triggers at the database level.
One problem is that any exception thrown by the call to SqlUpload() inside the catch block will crash the application, because there is no further exception handling around it.
You'll probably get a few different opinions on this, but mine is that try...catch should be used for things that shouldn't normally happen (although sometimes it's unavoidable). So by that rule, you should ask if the duplicates are in the table by normal execution of the program or if they should not exist given allowable execution of the program.
Just to clarify, I'm saying "normal" usage of the program, not "correct" usage (e.g. when I test it the duplicates don't appear (correct usage) but when the customer uses it they do (perhaps incorrect but normal), so I need to get rid of them). It's more that the duplicates would only appear in a way that the program cannot control (e.g. sometimes someone goes into the database and adds a duplicate row (hopefully not a normal situation)).
Something like duplicate rows, though, is likely a symptom of some other bug, so you shouldn't mask it but try and find the root cause so that the deletion isn't necessary.

Try - Catch return strategy

When using try-catch in methods, if you want your application to continue even when errors occur, is it okay to return a default value from the catch block and log the error for later review?
For Example:
public static string GetNameById(int id)
{
    string name;
    try
    {
        // Connect to DB and get name - some error occurred
    }
    catch (Exception ex)
    {
        Log(ex);
        name = String.Empty;
    }
    return name;
}
Example 2:
public static int GetIdByName(string name)
{
    int id;
    try
    {
        // Connect to DB and get id - some error occurred
    }
    catch (Exception ex)
    {
        Log(ex);
        id = 0;
    }
    return id;
}
Is it okay to return a default value (depending on the return type of the method) so that the application logic that required the result from this method does not crash and keeps going?
Thanks in advance...
Regards.
The advice for exception handling is that mostly you should only catch exceptions that you can do something about (e.g. retry an operation, try a different approach, elevate security etc). If you have logging elsewhere in your application at a global level, this kind of catch-log-return is unnecessary.
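For the cases where you genuinely can do something about the exception, say retrying a transient timeout, a sketch might look like this (LoadNameFromDb is a hypothetical stand-in for the actual data access):

public static string GetNameById(int id)
{
    const int maxAttempts = 3;
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return LoadNameFromDb(id); // hypothetical data access call
        }
        catch (TimeoutException)
        {
            if (attempt >= maxAttempts)
                throw; // out of retries: let the failure surface
            // Transient failure: back off briefly and try again.
            System.Threading.Thread.Sleep(100 * attempt);
        }
    }
}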
Personally - typically - in this situation I'd do this:
public static string GetNameById(int id)
{
    string name;
    try
    {
        // Connect to DB and get name - some error occurred
    }
    catch (Exception ex)
    {
        Log(ex);
        throw; // Re-throw the exception, don't throw a new one...
    }
    return name;
}
So as usual - it depends.
Be aware of other pitfalls though, such as the calling method not being aware that there was a failure, and continuing to perform work on the assumption that the method throwing the exception actually worked. At this point you start the conversation about "return codes vs. throwing exceptions", which you'll find a lot of resources for both on SO.com and the internets in general.
I do not think that is a good solution. In my opinion it would be better to let the caller handle the exception. Alternatively you can catch the exception in the method and throw a custom exception (with the caught exception as the inner exception).
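A sketch of that wrapping approach (NameLookupException is a hypothetical custom exception type, and LoadNameFromDb again stands in for the data access):

public static string GetNameById(int id)
{
    try
    {
        return LoadNameFromDb(id);
    }
    catch (Exception ex)
    {
        // The original failure travels along as the inner exception.
        throw new NameLookupException("Failed to look up name for id " + id, ex);
    }
}

public class NameLookupException : Exception
{
    public NameLookupException(string message, Exception inner)
        : base(message, inner) { }
}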
Another way of going about it would be to make a TryGet method, such as:
public static bool TryGetNameById(int id, out string name)
{
    try
    {
        // Connect to DB and get name - some error occurred
        name = actualName; // 'actualName' being the value read from the DB
        return true;
    }
    catch (Exception ex)
    {
        Log(ex);
        name = String.Empty;
        return false;
    }
}
I think this approach is more intention revealing. The method name itself communicates that it may not always be able to produce a useful result. In my opinion this is better than returning some default value that the caller has to be able to know about to do the correct business logic.
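A possible call site, following the same convention the BCL uses for int.TryParse:

string name;
if (TryGetNameById(42, out name))
{
    Console.WriteLine("Found: " + name);
}
else
{
    // The failure was already logged inside the method; the caller
    // decides what "not found" means for the business logic.
    Console.WriteLine("Lookup failed.");
}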
My opinion is that I'll never mute errors.
If an exception is thrown, it should be treated as a fatal error.
Maybe the problem is throwing exceptions for things that aren't exceptional. For example, business validation shouldn't be throwing such exceptions.
I'd prefer to validate the business rules, translate the failures into "broken rules", and transmit them to the presentation or wherever; that would save CPU and memory because there's no stack trace.
Another situation is a data connection loss, or another situation that leaves the application in a wrong state. Then I'd prefer to log the error and prompt the user to re-open the application, or maybe the application may restart itself.
I want to make one suggestion: have you ever heard about PostSharp? Check it out; you can implement that exception logging with an AOP paradigm.
It is advised that you only catch errors that you can handle and recover from. Catching and consuming exceptions that you cannot handle is bad form. However, some environments / applications define strict rules that go against this ideal behaviour.
In those cases where you don't have a choice, you can do what you like with return values; it is up to your application to work with them.
Based on the assumption that your application can handle any sort of failure in trying to get an ID, then returning a default ID is a good enough choice. You can go further and use something like the special case pattern to return empty / missing objects, but for simple data types this seems unwarranted.
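For reference, a minimal sketch of that special case pattern (all names here are illustrative, not from the question):

public class Employee
{
    public Employee(string name) { Name = name; }
    protected Employee() { }

    public string Name { get; protected set; }
    public virtual bool IsMissing { get { return false; } }
}

// Returned instead of null (or a bare default) when a lookup finds
// nothing, so callers can keep working with a harmless, well-known object.
public class MissingEmployee : Employee
{
    public MissingEmployee() { Name = string.Empty; }
    public override bool IsMissing { get { return true; } }
}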
This depends very much on the context and on the way your calling method is designed. I used to return default values in the past, as you are doing now, and I understood only afterwards that it was deeply wrong...
You should imagine the catch block throwing the exception back, after logging it properly; then the calling method could have another try/catch and, depending on the severity of the error, could inform the user or behave accordingly.
The fact is that if you return an empty string, in some cases the caller may "think" there is a user with an empty name, while it would probably be better to notify the user that the record was not found, for example.
Depending on the way your logger works, you could log the exception only where you handle it and not in all catches...
That depends on what you want. If it's important to you that the exception is logged but everything else keeps working, then this is, in my honest opinion, OK.
On the other hand, if an exception occurs, you have to make sure it does not have an impact on the further working of your application.
The point of a try/catch is to allow you to catch and handle errors in a graceful manner, allowing application execution to continue rather than simply crashing the application and stopping execution.
Therefore it is perfectly acceptable to return a default value. However, be sure that all following code will continue to function if a default value is returned, rather than causing further errors.
E.g. in your example, if the DB connection fails, ensure there are no further commands to edit / retrieve values from the database.
It REALLY depends. In most scenarios it is not; the problem is that if there is a scenario-specific issue, you may never find out. Sometimes a recovery attempt is good. This normally depends on circumstances and the actual logic.
The scenarios where "swallow and document" is really valid are rare and far between in the normal world. They come up more often when writing driver-type things (like loaded modules talking to an external system). Whether a return value or a default(T) equivalent makes sense also depends.
I would say in 99% of the cases let the exception run up. In 1% of the cases it may well make sense to return some sensible default.
It is better to log the exception but then re-throw it, so that it can be properly handled in the upper layers.
Note: the "log the exception" part is of course optional and depends a lot on your handling strategy (will you log it again somewhere else?).
Instead of making the app not crash by swallowing exceptions, it is better to let them pass and to try to find the root cause of why they were thrown in the first place.
Depends on what you want your app to do...
For example, displaying a friendly "Cannot Login" message when you get a SqlException while trying to connect to the database is OK.
Handling all errors is sometimes considered bad, although people will disagree...
Your application encountered an exception you did not expect, so you can no longer be 100% sure what state your application is in, which lines of code executed and which didn't, etc. Further reading on this: http://blogs.msdn.com/b/larryosterman/archive/2005/05/31/423507.aspx.
And more here : http://blogs.msdn.com/b/oldnewthing/archive/2011/01/20/10117963.aspx
I think the answer would have to be "it depends".
In some cases it may be acceptable to return an empty string from a function in case of an error. If you are looking up somebody's address to display then an empty string works fine rather than crashing the whole thing.
In other cases it may not work so well. If you are returning a string to use for a security lookup, for example (e.g. getSecurityGroup), then you are going to have very undesired behaviour if you return the wrong thing, and you might be better off keeping the error thrown and letting the user know something has gone wrong rather than pretending otherwise.
Even worse might be if you are persisting data provided to you by the user. Something goes wrong when you are getting their data, you return a default and store that without even telling them... That's gotta be bad.
So I think you need to look at each method where you are considering this and decide if it makes sense to throw an error or return a default value.
In fact, as a more general rule, any time you catch an error you should think hard about whether it is an acceptable error and continuing is permissible, or whether it is a show-stopping error and you should just give up.

CRUD operations: do you notify whether the insert, update, etc. went well?

I have a simple question for you (I hope) :)
I have pretty much always used void as a "return" type when doing CRUD operations on data.
E.g. consider this code:
public void Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
}
and then consider this code:
public bool Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
    return true;
}
It actually just comes down to whether you should notify that something was inserted (and went well) or not ?
I typically go with the first option there.
Given your code, if something goes wrong with the insert there will be an Exception thrown.
Since you have no try/catch block around the Data Access code, the calling code will have to handle that Exception...thus it will know both if and why it failed. If you just returned true/false, the calling code will have no idea why there was a failure (it may or may not care).
I think it would make more sense if, in the case where item == null, you returned false. That would indicate a case that you expect to happen not infrequently, and that therefore you don't want to raise an exception for, but that the calling code can handle via the "false" return value.
As it stands, you'll return "true" or there'll be an exception - that doesn't really help you much.
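In other words, something like this sketch, which keeps the names from the question; the null argument becomes an expected, reportable case, while genuine database failures still surface as exceptions:

public bool Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item"));
        return false; // expected case: signalled via the return value
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges(); // failures here still throw
    return true;
}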
Don't fight the framework you happen to be in. If you are writing C code, where return values are the most common mechanism for communicating errors (for lack of a better built in construct), then use that.
.NET base class libraries use exceptions to communicate errors, and their absence means everything is okay. Because almost all code uses the BCL, much of it will be written to expect exceptions; if it has to call a library written as if C# were C, with no support for exceptions, each invocation will need to be wrapped in an if (!myObject.DoSomething) { Console.WriteLine("Damn"); } block.
For the next developer to use your code (which could be you after a few years when you've forgotten how you originally did it), it will be a pain to start writing all the calling code to take advantage of having error conditions passed as return values, as changes to values in an output parameter, as custom events, as callbacks, as messages to queue or any of the other imaginable ways to communicate failure or lack thereof.
I think it depends. Imagine that your user wants to add a new post to a forum, and the add fails for some reason; if you don't tell the user, they will never know that something went wrong. The best way is to throw another exception with a nice message for them.
And if it does not relate to the user, and you have already logged it to the database log, you shouldn't care about returning anything any more.
I think it is a good idea to notify the user whether the operation went well or not. Regardless of how much you test your code and try to think outside the box, it is most likely that during its existence the software will encounter a problem you did not cater for, thus making it behave incorrectly. Notifications, in my opinion, allow the user to take action, a sort of Plan B, if you like, when the program fails. This action can either be a simple workaround, or informing people from the IT department so that they can fix it.
I'd rather click that extra "OK" button than learn that something went wrong when it is too late.
You should stick with void; if you need more data, use out parameters for it, since you may need specific data (and it can be more than one number/string), while the exception mechanism is a good solution for handling errors.
So if you want to know how many rows were affected, whether a stored procedure returned something, etc., a single return value will limit you; use out parameters instead.

What is the real overhead of try/catch in C#?

So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact?
Three points to make here:
Firstly, there is little or NO performance penalty in actually having try-catch blocks in your code. This should not be a consideration when trying to avoid having them in your application. The performance hit only comes into play when an exception is thrown.
When an exception is thrown, in addition to the stack-unwinding operations that others have mentioned, you should be aware that a whole bunch of runtime/reflection-related work happens in order to populate the members of the exception class, such as the stack trace object and the various type members.
I believe that this is one of the reasons why the general advice, if you are going to rethrow the exception, is to just throw; rather than throw the exception again or construct a new one, as in those cases all of that stack information is regathered, whereas with the simple throw it is all preserved.
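The difference in a sketch (DoWork and Log are placeholders):

try
{
    DoWork();
}
catch (Exception ex)
{
    Log(ex);
    throw;       // rethrows and preserves the original stack trace
    // throw ex; // would reset the stack trace to this line - avoid it
}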
I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, this would be a decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another; there is a lot going on.
I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (so not your typical, expected path through the program).
Are you asking about the overhead of using try/catch/finally when exceptions aren't thrown, or the overhead of using exceptions to control process flow? The latter is somewhat akin to using a stick of dynamite to light a toddler's birthday candle, and the associated overhead falls into the following areas:
You can expect additional cache misses due to the thrown exception accessing resident data not normally in the cache.
You can expect additional page faults due to the thrown exception accessing non-resident code and data not normally in your application's working set.
For example, throwing the exception will require the CLR to find the location of the finally and catch blocks based on the current IP and the return IP of every frame until the exception is handled, plus the filter block.
There is additional construction cost and name resolution in order to create the frames for diagnostic purposes, including reading of metadata etc.
Both of the above items typically access "cold" code and data, so hard page faults are probable if you have memory pressure at all:
The CLR tries to put code and data that is used infrequently far from data that is used frequently, to improve locality, so this works against you because you're forcing the cold to be hot.
The cost of the hard page faults, if any, will dwarf everything else.
Typical catch situations are often deep, therefore the above effects would tend to be magnified (increasing the likelihood of page faults).
As for the actual impact of the cost, this can vary a lot depending on what else is going on in your code at the time. Jon Skeet has a good summary here, with some useful links. I tend to agree with his statement that if you get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.
Contrary to theories commonly accepted, try/catch can have significant performance implications, and that's whether an exception is thrown or not!
It disables some automatic optimisations (by design), and in some cases injects debugging code, as you can expect from a debugging aid. There will always be people who disagree with me on this point, but the language requires it and the disassembly shows it so those people are by dictionary definition delusional.
It can impact negatively upon maintenance. This is actually the most significant issue here, but since my last answer (which focused almost entirely on it) was deleted, I'll try to focus on the less significant issue (the micro-optimisation) as opposed to the more significant issue (the macro-optimisation).
The former has been covered in a couple of blog posts by Microsoft MVPs over the years, and I trust you could find them easily; since StackOverflow cares so much about content, I'll provide links to some of them as supporting evidence:
Performance implications of try/catch/finally (and part two), by Peter Ritchie explores the optimisations which try/catch/finally disables (and I'll go further into this with quotes from the standard)
Performance Profiling Parse vs. TryParse vs. ConvertTo by Ian Huff states blatantly that "exception handling is very slow" and demonstrates this point by pitting Int.Parse and Int.TryParse against each other... To anyone who insists that TryParse uses try/catch behind the scenes, this ought to shed some light!
There's also this answer which shows the difference between disassembled code with- and without using try/catch.
It seems so obvious that there is an overhead which is blatantly observable in code generation, and that overhead even seems to be acknowledged by people whom Microsoft values! Yet here I am, repeating the internet...
Yes, there are dozens of extra MSIL instructions for one trivial line of code, and that doesn't even cover the disabled optimisations so technically it's a micro-optimisation.
I posted an answer years ago which got deleted as it focused on the productivity of programmers (the macro-optimisation).
This is unfortunate as no saving of a few nanoseconds here and there of CPU time is likely to make up for many accumulated hours of manual optimisation by humans. Which does your boss pay more for: an hour of your time, or an hour with the computer running? At what point do we pull the plug and admit that it's time to just buy a faster computer?
Clearly, we should be optimising our priorities, not just our code! In my last answer I drew upon the differences between two snippets of code.
Using try/catch:
int x;
try {
    x = int.Parse("1234");
}
catch {
    return;
}
// some more code here...
Not using try/catch:
int x;
if (int.TryParse("1234", out x) == false) {
    return;
}
// some more code here
Consider, from the perspective of a maintenance developer, which is more likely to waste your time: if not profiling/optimisation (covered above), which likely wouldn't even be necessary if it weren't for the try/catch problem, then scrolling through source code... One of those has four extra lines of boilerplate garbage!
As more and more fields are introduced into a class, all of this boilerplate garbage accumulates (both in source and disassembled code) well beyond reasonable levels. Four extra lines per field, and they're always the same lines... Were we not taught to avoid repeating ourselves? I suppose we could hide the try/catch behind some home-brewed abstraction, but... then we might as well just avoid exceptions (i.e. use Int.TryParse).
This isn't even a complex example; I've seen attempts at instantiating new classes in try/catch. Consider that all of the code inside of the constructor might then be disqualified from certain optimisations that would otherwise be automatically applied by the compiler. What better way to give rise to the theory that the compiler is slow, as opposed to the compiler is doing exactly what it's told to do?
Assuming an exception is thrown by said constructor, and some bug is triggered as a result, the poor maintenance developer then has to track it down. That might not be such an easy task, as unlike the spaghetti code of the goto nightmare, try/catch can cause messes in three dimensions, as it could move up the stack into not just other parts of the same method, but also other classes and methods, all of which will be observed by the maintenance developer, the hard way! Yet we are told that "goto is dangerous", heh!
In the end, try/catch has its benefit, which is that it's designed to disable optimisations! It is, if you will, a debugging aid! That's what it was designed for, and it's what it should be used as...
I guess that's a positive point too. It can be used to disable optimizations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.
What optimisations do try, catch and finally disable? a.k.a. how are try, catch and finally useful as debugging aids?
They're write barriers. This comes from the standard:
12.3.3.13 Try-catch statements
For a statement stmt of the form:
try try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
• The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the beginning of catch-block-i (for any i) is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) v is definitely assigned at the end-point of try-block and every catch-block-i (for every i from 1 to n).
In other words, at the beginning of each try statement:
all assignments made to visible objects prior to entering the try statement must be complete, which requires a thread lock for a start, making it useful for debugging race conditions!
the compiler isn't allowed to:
eliminate unused variable assignments which have definitely been assigned to before the try statement
reorganise or coalesce any of its inner assignments (i.e. see my first link, if you haven't already done so).
hoist assignments over this barrier, to delay assignment to a variable which it knows won't be used until later (if at all) or to pre-emptively move later assignments forward to make other optimisations possible...
A similar story holds for each catch statement; suppose within your try statement (or a constructor or function it invokes, etc) you assign to that otherwise pointless variable (let's say, garbage=42;), the compiler can't eliminate that statement, no matter how irrelevant it is to the observable behaviour of the program. The assignment needs to have completed before the catch block is entered.
For what it's worth, finally tells a similarly degrading story:
12.3.3.14 Try-finally statements
For a try statement stmt of the form:
try try-block
finally finally-block
• The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the beginning of finally-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) either:
o v is definitely assigned at the end-point of try-block
o v is definitely assigned at the end-point of finally-block
If a control flow transfer (such as a goto statement) is made that begins within try-block, and ends outside of try-block, then v is also considered definitely assigned on that control flow transfer if v is definitely assigned at the end-point of finally-block. (This is not an only if—if v is definitely assigned for another reason on this control flow transfer, then it is still considered definitely assigned.)
12.3.3.15 Try-catch-finally statements
Definite assignment analysis for a try-catch-finally statement of the form:
try try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
finally finally-block
is done as if the statement were a try-finally statement enclosing a try-catch statement:
try {
try
try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
}
finally finally-block
In my experience the biggest overhead is in actually throwing an exception and handling it. I once worked on a project where code similar to the following was used to check if someone had a right to edit some object. This HasRight() method was used everywhere in the presentation layer, and was often called for 100s of objects.
bool HasRight(string rightName, DomainObject obj) {
    try {
        CheckRight(rightName, obj);
        return true;
    }
    catch (Exception ex) {
        return false;
    }
}

void CheckRight(string rightName, DomainObject obj) {
    if (!_user.Rights.Contains(rightName))
        throw new Exception();
}
When the test database got fuller with test data, this led to a very visible slowdown when opening new forms, etc.
So I refactored it to the following, which, according to later quick 'n' dirty measurements, is about two orders of magnitude faster:
bool HasRight(string rightName, DomainObject obj) {
    return _user.Rights.Contains(rightName);
}

void CheckRight(string rightName, DomainObject obj) {
    if (!HasRight(rightName, obj))
        throw new Exception();
}
So in short, using exceptions in the normal process flow is about two orders of magnitude slower than using similar process flow without exceptions.
Not to mention if it's inside a frequently-called method it may affect the overall behavior of the application.
For example, I consider the use of Int32.Parse a bad practice in most cases, since it throws exceptions for something that can easily be checked otherwise.
So, to conclude everything written here:
1) Use try..catch blocks to catch unexpected errors - almost no performance penalty.
2) Don't use exceptions for expected errors if you can avoid it.
I wrote an article about this a while back because there were a lot of people asking about this at the time. You can find it and the test code at http://www.blackwasp.co.uk/SpeedTestTryCatch.aspx.
The upshot is that there is a tiny amount of overhead for a try/catch block but so small that it should be ignored. However, if you are running try/catch blocks in loops that are executed millions of times, you may want to consider moving the block to outside of the loop if possible.
The key performance issue with try/catch blocks is when you actually catch an exception. This can add a noticeable delay to your application. Of course, when things are going wrong, most developers (and a lot of users) recognise the pause as an exception that is about to happen! The key here is not to use exception handling for normal operations. As the name suggests, they are exceptional and you should do everything you can to avoid them being thrown. You should not use them as part of the expected flow of a program that is functioning correctly.
I made a blog entry about this subject last year.
Check it out. The bottom line is that there is almost no cost for a try block if no exception occurs, while on my laptop an exception cost about 36μs to throw and catch. That might be less than you expected, but keep in mind that those results were on a shallow stack. Also, the first exception thrown is really slow.
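If you want to reproduce that kind of number yourself, here is a rough sketch, not a rigorous benchmark; timings will vary by machine, runtime, and stack depth:

using System;
using System.Diagnostics;

class ExceptionCostDemo
{
    static void Main()
    {
        const int n = 100000;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < n; i++)
        {
            try { throw new InvalidOperationException(); }
            catch (InvalidOperationException) { }
        }
        sw.Stop();
        Console.WriteLine("{0:F1} us per thrown exception",
            sw.Elapsed.TotalMilliseconds * 1000.0 / n);
    }
}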
It is vastly easier to write, debug, and maintain code that is free of compiler error messages, code-analysis warning messages, and routine accepted exceptions (particularly exceptions that are thrown in one place and accepted in another). Because it is easier, the code will on average be better written and less buggy.
To me, that programmer and quality overhead is the primary argument against using try-catch for process flow.
The computer overhead of exceptions is insignificant in comparison, and usually tiny in terms of the application's ability to meet real-world performance requirements.
I really like Hafthor's blog post, and to add my two cents to this discussion: it's always been easy for me to have the DATA LAYER throw only one type of exception (DataAccessException). This way my BUSINESS LAYER knows what exception to expect and catches it. Then, depending on further business rules (i.e. whether my business object participates in the workflow, etc.), I may throw a new exception (BusinessObjectException) or proceed without re-throwing.
I'd say don't hesitate to use try..catch whenever it is necessary, and use it wisely!
For example, this method participates in a workflow...
Comments?
public bool DeleteGallery(int id)
{
    try
    {
        using (var transaction = new DbTransactionManager())
        {
            try
            {
                transaction.BeginTransaction();
                _galleryRepository.DeleteGallery(id, transaction);
                _galleryRepository.DeletePictures(id, transaction);
                FileManager.DeleteAll(id);
                transaction.Commit();
            }
            catch (DataAccessException ex)
            {
                Logger.Log(ex);
                transaction.Rollback();
                throw new BusinessObjectException("Cannot delete gallery. Ensure business rules and try again.", ex);
            }
        }
    }
    catch (DbTransactionException ex)
    {
        Logger.Log(ex);
        throw new BusinessObjectException("Cannot delete gallery.", ex);
    }
    return true;
}
We can read in Programming Language Pragmatics by Michael L. Scott that modern compilers do not add any overhead in the common case, that is, when no exception occurs; all of the work is done at compile time.
But when an exception is thrown at run time, the runtime needs to perform a binary search over the exception tables to find the correct handler, and this happens for every new throw.
But exceptions are exceptional, and this cost is perfectly acceptable. If you tried to do exception handling without exceptions, using return error codes instead, you would probably need an if statement for every subroutine, and that would incur a genuine run-time overhead: an if statement compiles to a few assembly instructions that are executed every time you enter your subroutines.
Hope this helps. This information is based on the cited book; for more information, refer to Chapter 8.5, Exception Handling.
Let us analyse one of the biggest possible costs of a try/catch block when used where it shouldn't need to be used:
int x;
try {
    x = int.Parse("1234");
}
catch {
    return;
}
// some more code here...
And here's the one without try/catch:
int x;
if (int.TryParse("1234", out x) == false) {
    return;
}
// some more code here
Not counting the insignificant whitespace, one might notice that these two equivalent pieces of code are almost exactly the same length in bytes. The latter contains 4 bytes less indentation. Is that a bad thing?
To add insult to injury, a student decides to loop while the input can be parsed as an int. The solution without try/catch might be something like:
while (int.TryParse(...))
{
    ...
}
But how does this look when using try/catch?
try {
    for (;;)
    {
        x = int.Parse(...);
        ...
    }
}
catch
{
    ...
}
Try/catch blocks are magical ways of wasting indentation, and we still don't even know the reason it failed! Imagine how the person doing the debugging feels when code continues to execute past a serious logical flaw rather than halting with a nice, obvious exception error. Try/catch blocks are a lazy man's data validation/sanitisation.
One of the smaller costs is that try/catch blocks do indeed disable certain optimizations: http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx.
