IndexOutOfRangeException without array access - C#

I have a customer report of an IndexOutOfRangeException, but the line number at which it is reported has no array access! The line is of the form:
using (XyzConnection conn = new XyzConnection(anObject.aProperty.anotherProperty))
XyzConnection, anObject, etc. are made-up replacement names, but the construct is essentially the same.
Can the above itself throw IndexOutOfRangeException?
Is it possible that the array access (and exception) are in some code called from above line, i.e. the constructor or one of the property getters? How can I identify the correct location?
I should mention that the problem cannot be reproduced in development environment and I cannot install Visual Studio on the customer's machine.

Can the above itself throw IndexOutOfRangeException?
That line can't in and of itself throw the exception.
Some code inside the XyzConnection constructor could be doing it. Or the property getter for anObject.aProperty could be throwing it, or the getter for aProperty.anotherProperty. My bet would be on one of the property getters.
They could be being inlined by the JIT compiler, in which case you wouldn't see them in the stack trace no matter what PDBs you had. This is actually quite common, as property getters are usually small and simple, which makes them ideal candidates for inlining.
I'd recommend a solid code review of those two property getters, followed by the XyzConnection constructor.
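If you can ship a diagnostic build to the customer, one way to test the inlining theory is to forbid inlining on the suspect getters so they keep their own stack frames. A sketch using the real MethodImplAttribute, with hypothetical type and property names standing in for the renamed ones:
using System.Runtime.CompilerServices;

class AnObject
{
    private SubObject _aProperty;    // hypothetical backing field

    public SubObject aProperty
    {
        // Forbid JIT inlining so this getter appears in stack traces.
        [MethodImpl(MethodImplOptions.NoInlining)]
        get { return _aProperty; }
    }
}

class SubObject
{
    private string _anotherProperty;

    public string anotherProperty
    {
        [MethodImpl(MethodImplOptions.NoInlining)]
        get { return _anotherProperty; }
    }
}
With inlining off, the next customer stack trace should name the getter (or the constructor) rather than the using line.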

The first thing that comes to mind is that the PDB doesn't match the DLL version being used. Nothing about that line of code hints at an IndexOutOfRangeException. Without seeing the code surrounding that call, and the constructor declaration itself, I doubt there's much help people can give.
One other thing to check in terms of misleading line numbers: if the code uses try/catch blocks, ensure that catch blocks which re-raise exceptions use "throw;", not "throw ex;" (where ex is the caught exception). The latter causes the exception's stack trace to be regenerated, which is both time-consuming and overwrites potentially useful information.
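A minimal illustration of the difference (a sketch with hypothetical method names; the exact frames shown will also depend on inlining and PDBs):
using System;

class RethrowDemo
{
    static void Explode()
    {
        throw new IndexOutOfRangeException();
    }

    static void BadRethrow()
    {
        try { Explode(); }
        catch (Exception ex) { throw ex; }  // resets the trace to this line
    }

    static void GoodRethrow()
    {
        try { Explode(); }
        catch { throw; }                    // original throw site preserved
    }

    static void Main()
    {
        try { BadRethrow(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); }  // points at BadRethrow only

        try { GoodRethrow(); }
        catch (Exception ex) { Console.WriteLine(ex.StackTrace); }  // still points into Explode
    }
}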

Related

Is there a way to predict that `Marshal.GetExceptionForHR(code, pointers)` would throw an `AccessViolationException`

I have a whole sequence of interlinked questions. I would like to know the answers to all of them independently, so *yes*, some of them appear to be X-Y questions, but I do want to know the solution to them anyway! See the end of the question for the list of other questions in this set.
I have a situation in which calling Marshal.GetExceptionForHR(code, pointers) throws an AccessViolationException.
Apparently this is a VeryBadThing. So bad that try...catch just shrugs at me and wanders off for a fag ... which I didn't know was a thing :(
Suppose a premise of "calling .GetExceptionForHR() is what I want to do" (see other questions for a discussion of whether this is a good idea!).
Suppose further that whatever causes the AccessViolation is unavoidable in some scenarios.
Given those premises, is there any way to predict in advance that calling the method will blow up?
Some equivalent of TryGetExceptionForHR() that would return false rather than blowing up my entire program.
Other questions in this problem set.
C#: How do I prevent a pipe reading error, when closing the exe on one end of a DuplexChannel connected by Named Pipes?
Ascertain reason for Channel Fault in WCF+NamedPipes
Is using the `Marshal` static class in otherwise normal C# code unwise?
Is there a way to predict that `Marshal.GetExceptionForHR(code, pointers)` would throw an `AccessViolationException`
Is this a suitable use of `HandleProcessCorruptedStateExceptions`
Just having a look at the Microsoft docs: Marshal.GetExceptionForHR isn't documented to throw; it returns the exception object rather than throwing it.
Instead, they recommend using Marshal.ThrowExceptionForHR if you want an exception thrown that you can then handle. It does not throw when given a success HRESULT.
If you want to use Marshal.GetExceptionForHR, then double-check that the correct pointer is being used: IntPtr(0) will use the current IErrorInfo interface, while IntPtr(-1) will attempt to reconstruct the exception from just the code.
It's also possible that, rather than throwing unexpected exceptions, GetExceptionForHR just returns the unexpected exception from a catch block, e.g.:
catch (Exception ex)
{
    return ex;
}
Unfortunately, I can't be 100% certain at this point. I'm trying to track down the Marshal class source on Microsoft's GitHub, and I'll run some attempts of my own to see if I can reproduce the same effect.
Related links:
ThrowExceptionForHR
GetExceptionForHR
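On the question's TryGetExceptionForHR idea: on .NET Framework 4 and later, AccessViolationException is a corrupted-state exception, so it can only be caught from a method opted in via the HandleProcessCorruptedStateExceptions attribute (and it cannot be caught at all on .NET Core and modern .NET). A hypothetical wrapper, sketched under those assumptions:
using System;
using System.Runtime.ExceptionServices;
using System.Runtime.InteropServices;
using System.Security;

static class MarshalHelper
{
    // Hypothetical "Try" wrapper: returns false instead of letting the
    // corrupted-state exception take down the process. .NET Framework only.
    [HandleProcessCorruptedStateExceptions]
    [SecurityCritical]
    public static bool TryGetExceptionForHR(int hr, IntPtr errorInfo, out Exception result)
    {
        try
        {
            result = Marshal.GetExceptionForHR(hr, errorInfo);
            return true;
        }
        catch (AccessViolationException)
        {
            result = null;
            return false;
        }
    }
}
Note this only makes the crash catchable; it does not make the underlying access violation safe, so treat it as a diagnostic stopgap.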

What does "throw new NotImplementedException();" do exactly?

I have a class 'b' that inherits from class 'a'. In class 'a' there is some code that performs an action if an event is not null. I need that code to fire in class 'b' during specific times in the application. So in 'b' I subscribed to the event with a new Handler.
If I leave the autogenerated event handler 'as is' in class 'b' with the throw new NotImplementedException(); line, the code works/runs as expected. As soon as I remove the throw statement, the application no longer works as expected.
So, what is the throw new NotImplementedException doing besides throwing the exception?
I realize I'm probably trying to solve my coding problem the wrong way at this point, and I am sure I will find a better way to do it (I'm still learning), but my question remains. Why does that line change the outcome of code?
EDIT:
I realize I wasn't very specific with my code. Unfortunately, because of strict policies, I can't be. I have in class 'a' an if statement:
if (someEvent != null)
When the code 'works', the if statement returns true. When it isn't working as expected, it returns false. In class 'b', the only time the application 'works' (i.e. the if statement returns true) is when I have the throw new NotImplementedException(); line in class 'b's event method that was autogenerated when I attached the new event handler.
Think about this: what if you want to add two integers with the following method...
private int Add(int x, int y)
{
}
...and have no code inside to do so (the method doesn't even return an integer, so it won't even compile). This is what NotImplementedException is used for: a throw new NotImplementedException(); body lets the method compile while still failing loudly if it is ever called.
NotImplementedException is simply an exception that is used when the code for what you're trying to do isn't written yet. It's often used in snippets as a placeholder until you fill in the body of whatever has been generated with actual code.
It's good practice to use a NotImplementedException as opposed to an empty block of code, because it will very clearly throw an error alerting you that that section of your code isn't complete yet. If it were blank, the method might run, nothing would happen, and that's sometimes a pain to debug.
It is simply an exception; why it makes your application "work" depends entirely on the code handling any exceptions.
It is not a "special" exception as opposed to a normal exception (other than being derived from Exception like the rest). You tend to see it with code generation as a placeholder for implementing the member it is throwing inside. It is a lot easier to do this than have code generation try to understand the member structure in order to output compiling code.
When you say "no longer works as expected", I am assuming it compiles. If removing this stops the code from compiling then the chances are good you have a compilation error about a return value.
Perhaps the code that triggers the event expects a certain response from handlers, and if there are no handlers, or an exception occurs, it defaults the response and carries on. In your case there is a handler and no exception, so it expects a fuller response?
Complete guesswork.
If there is code in a that you need to use in b, consider making the method that houses the code protected and optionally virtual if you need to override the behaviour.
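A sketch of that suggestion, with hypothetical names standing in for the code that can't be shared:
class A
{
    // The logic that currently runs when someEvent != null, pulled out
    // so that subclasses can invoke it directly.
    protected virtual void DoAction()
    {
        // ... shared behaviour ...
    }
}

class B : A
{
    public void AtTheSpecificTimes()
    {
        DoAction();  // no event subscription or placeholder throw needed
    }
}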
NotImplementedException, I believe, carries a special meaning: "this code is still in development and needs to be changed". Developers use this exception to make code compilable but not executable (to avoid data damage).
Information about this exception type can be found in documentation, it explains the meaning very detailed: https://learn.microsoft.com/en-us/dotnet/api/system.notimplementedexception?view=netcore-3.1
Some development tools, like ReSharper for example, generate new members with a NotImplementedException inside, thus protecting you from executing code that is not ready. At the same time they highlight these exceptions the same way as a "//todo:" comment.
For other situations, for example when you deliberately don't implement part of an interface or a virtual member, or don't handle some paths in switch/case or if/else statements, you would more likely use NotSupportedException, ArgumentOutOfRangeException, ArgumentNullException, InvalidOperationException, etc.
At the end, consider this situation to understand the purpose of NotImplementedException:
We are writing a banking application, and at the moment we are implementing the money-transfer feature. We have:
public void Transfer(Account sourceAccount, Account destAccount, decimal sum)  // Account is a stand-in type
{
    sourceAccount.Credit(sum);
    destAccount.Debit(sum);
}
Here we are calling two methods which do not exist yet. We generate both with a default NotImplementedException body and go to the first one (Credit) to implement it. Let's say the implementation took some time; we have written tests for it and even done several manual tests. We completely forget about the second method, Debit, and deploy our application to beta, or even to production. Testers or users start using the application and soon reach the money-transfer functionality. They try to call Transfer, which shows them a generic message: "We are so sorry, we got some errors." At the same time we, as the development team, receive a notification about a NotImplementedException, with a stack trace pointing us at the method Debit. Now we implement it and release a new version which can actually do money transfers.
Now imagine what would happen if there were no exception there: users would lose a lot of money trying to make those transfers, trying several times, each time throwing money into a hole.
Depending on the purpose of the application this can be a bigger or smaller problem: maybe a button on your calculator doesn't work, or maybe it was a scientific calculator and we just missed some important math while computing the code for a vaccine against some aggressive virus.
The NotImplementedException is a way of declaring that a particular method of an interface or base class is simply not implemented in your type. This is the exception form of the E_NOTIMPL error code.
In general an implementation shouldn't be throwing a NotImplementedException unless it's a specifically supported scenario for that particular interface. In the vast majority of scenarios this is not the case and types should fully implement interfaces.
In terms of what it's doing, though: it's simply throwing an exception. It's hard to say why the program keeps functioning in the face of the exception and breaks without it unless you give us a bit more information.

Exceptions: When to use, timing, overall use

I'll try to phrase my question so it doesn't end up as a simple argumentative thread.
I've recently jumped into an application coded in C# and I'm discovering its exception mechanism. I've had a few bad experiences with exceptions, such as the following:
// _sValue is a string
try
{
return float.Parse(_sValue);
}
catch
{
return 0;
}
I changed it into :
float l_fParsedValue = 0.0f;
if (float.TryParse(_sValue, out l_fParsedValue))
{
return l_fParsedValue;
}
else
{
return 0;
}
As a result, my Output window in Visual Studio is no longer flooded with messages like
First chance System.FormatException blabla
when strings like '-' arrive in the snippet. I think the second snippet is cleaner.
Going a step further, I've often seen exceptions used like: "I do whatever I want in this try-catch; if anything goes wrong, catch."
Now, in order not to get stuck with misconceptions, I would like you to help me clearly define how and when to use exceptions, and when to stick with old-school "if...else".
Thanks in advance for your help!
You should throw exceptions in exceptional cases, i.e. when something unexpected happens. If you expect a function to throw an exception regularly, that's most likely bad design.
In your example it is very clear that TryParse is better, since the exception evidently occurs often.
But when parsing a file, for example, I expect it to be almost always valid. So I usually use Parse, catch the exception, and wrap it in an InvalidDataException with the caught exception as the inner exception. This usually simplifies the parsing code a lot, even if it may be bad style.
I recommend Eric Lippert's blog entry: Vexing exceptions
In the case of Parse()/TryParse(), it's better not to wait for the exception: use TryParse and act on incorrect input yourself.
Exceptions should be used for exceptional behavior, not for flow-control. A basic guideline would be that if normal program flow regularly runs into exceptions you're doing something wrong.
However, it is important to note that merely having a try { } catch { } present will not in itself affect performance negatively. Only when an exception is actually thrown and the stack trace has to be computed will you see (in some cases quite severe) performance degradation.
Another point that hasn't been examined in depth so far is that Exceptions have a cost. They subvert the normal flow of control in the program and there is some resource use as a result.
A simple test would be to write a program that loops over your original float.Parse code with some invalid data, and compare how long it takes to run versus the TryParse version - there will be a small but noticeable difference.
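A minimal sketch of such a test using Stopwatch (the iteration count is arbitrary, and real timings will vary by machine and runtime):
using System;
using System.Diagnostics;

class ParseBenchmark
{
    static void Main()
    {
        const int iterations = 100000;
        const string bad = "-";
        float f;

        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            try { f = float.Parse(bad); }
            catch (FormatException) { f = 0; }  // swallow, like the original snippet
        }
        sw.Stop();
        Console.WriteLine("Parse + catch: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            if (!float.TryParse(bad, out f)) { f = 0; }
        }
        sw.Stop();
        Console.WriteLine("TryParse:      " + sw.ElapsedMilliseconds + " ms");

        Console.WriteLine(f);  // keep f observable
    }
}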
A snippet that sticks in my mind when making decisions about exceptions is from this article:
Almost Rule #1
When deciding if you should throw an exception, pretend that the throw statement makes the computer beep 3 times and sleep for 2 seconds. If you still want to throw under those circumstances, go for it.
Programs that use exceptions as part of their normal processing suffer from all the readability and maintainability problems of classic spaghetti code.
— Andy Hunt and Dave Thomas
I think there is no simple right answer about how/when to use exceptions. It depends on an architecture of the application you're working on and other factors.
I can suggest you read the chapters 8.3 Error-Handling Techniques and 8.4 Exceptions of the book Code Complete.
Ahh, if only it were that simple! But, alas, the decision of when to use exceptions is more often subjective than not. Still, there are guidelines you can use. For example, Microsoft has some.
All in all, the rule of thumb for throwing exceptions is - you only throw an exception when a function cannot do what it was supposed to do. Basically each function has a contract - it has a legal range of input values and a legal range of output values. When the input values are invalid, or it cannot provide the expected output values, you should throw an exception.
Note that there is a slippery point here - should validation (of user input) errors also be thrown as exceptions? Some schools of thought say no (Microsoft included), some say yes. Your call. Each approach has its benefits and drawbacks, and it's up to you to decide how you will structure your code.
A rule of thumb for catching exceptions is - you should only catch exceptions that you can handle. Now, this is also slippery. Is displaying the error message to the user also "handling" it? But what if it's the infamous StackOverflowException or OutOfMemoryException? You can hardly display anything then. And those might not be the only exceptions that can leave the whole system in an unusable state. So again - your call.

Throw/do-not-throw an exception based on a parameter - why is this not a good idea?

I was digging around in MSDN and found this article which had one interesting bit of advice: Do not have public members that can either throw or not throw exceptions based on some option.
For example:
Uri ParseUri(string uriValue, bool throwOnError)
Now of course I can see that in 99% of cases this would be horrible, but is its occasional use justified?
One case I have seen it used is with an "AllowEmpty" parameter when accessing data in the database or a configuration file. For example:
object LoadConfigSetting(string key, bool allowEmpty);
In this case, the alternative would be to return null. But then the calling code would be littered with null reference checks. (And the method would also preclude the ability to actually allow null as a specifically configurable value, if you were so inclined.)
What are your thoughts? Why would this be a big problem?
I think it's definitely a bad idea to have a throw/no-throw decision based on a boolean, mainly because it requires developers looking at a piece of code to have functional knowledge of the API to determine what the boolean means. This is bad on its own, but when it changes the underlying error handling it makes it very easy for developers to make mistakes while reading code.
It would be much better, and more readable, to have two APIs in this case:
Uri ParseUriOrThrow(string value);
bool TryParseUri(string value, out Uri uri);
In this case it's 100% clear what these APIs do.
Article on why booleans are bad as parameters: http://blogs.msdn.com/jaredpar/archive/2007/01/23/boolean-parameters.aspx
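For what it's worth, the framework's own Uri type follows exactly this split, with a throwing constructor and a non-throwing static Uri.TryCreate (both real APIs):
using System;

class UriDemo
{
    static void Main()
    {
        // Throwing form: invalid input raises UriFormatException.
        Uri good = new Uri("http://example.com/");
        Console.WriteLine(good);

        // Non-throwing form: invalid input just returns false.
        Uri parsed;
        if (!Uri.TryCreate("not a uri", UriKind.Absolute, out parsed))
        {
            Console.WriteLine("could not parse");
        }
    }
}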
It's usually best to choose one error-handling mechanism and stick with it consistently. Allowing this sort of flip-flop code can't really improve the lives of developers.
In the above example, what happens if parsing fails and throwOnError is false? Now the user has to guess whether null is going to be returned, or God knows what...
True there's an ongoing debate between exceptions and return values as the better error handling method, but I'm pretty certain there's a consensus about being consistent and sticking with whatever choice you make. The API can't surprise its users and error handling should be part of the interface, and be as clearly defined as the interface.
It's kind of nasty from a readability standpoint. Developers tend to expect every method to throw an exception, and if they want to ignore it, they'll catch it themselves. With the 'boolean flag' approach, every single method needs to implement this exception-inhibiting semantic.
However, I think the MSDN article is strictly referring to 'throwOnError' flags. In these cases either the error is ignored inside the method itself (bad, as it's hidden) or some kind of null/error object is returned (bad, because you're not using exceptions to handle the error, which is inconsistent and itself error-prone).
Whereas your example seems fine to me. An exception indicates a failure of the method to perform its duty - there is no return value. However the 'allowEmpty' flag changes the semantics of the method - so what would have been an exception ('Empty value') is now expected and legal. Plus, if you had thrown an exception, you wouldn't easily be able to return the config data. So it seems OK in this case.
In any public API it is a really bad idea to have two ways to check for a faulty condition, because then it becomes non-obvious what will happen when the error occurs. Just looking at the code will not help; you have to understand the semantics of the flag parameter (and nothing prevents it from being an expression).
If checking for null is not an option, and if I need to recover from this specific failure, I prefer to create a specific exception so that I can catch it later and handle it appropriately. In any other case I throw a general exception.
Another example in line with this is the set of TryParse methods on some of the value types:
bool DateTime.TryParse(string text, out DateTime)
Having a dontThrowException parameter defeats the whole point of exceptions (in any language). If the calling code wants to have:
public static void Main()
{
FileStream myFile = File.Open("NonExistent.txt", FileMode.Open, FileAccess.Read);
}
they're welcome to (C# doesn't even have checked exceptions). In Java the same thing would be accomplished with:
public static void main(String[] args) throws FileNotFoundException
{
FileInputStream fs = new FileInputStream("NonExistent.txt");
}
Either way, it's the caller's job to decide how to handle (or not) the exception, not the callee's.
In the linked article there is a note that exceptions should not be used for flow of control, which seems to be implied in the example questions. Exceptions should reflect method-level failure. Having a signature flag that says it is OK to throw an error suggests the design has not been thought through.
Jeffrey Richter's book CLR via C# points out: "you should throw an exception when the method cannot complete its task as indicated by its name".
His book also points out a very common error: people tend to write code that catches everything (in his words, "A ubiquitous mistake of developers who have not been properly trained on the proper use of exceptions is to use catch blocks too often and improperly. When you catch an exception, you're stating that you expected this exception, you understand why it occurred, and you know how to deal with it.")
That has made me try to code for exceptions that I can expect and can handle in my logic; otherwise it should be an error.
Validate your arguments and prevent the exceptions, and only catch what you can handle.
I would aver that it's often useful to have a parameter which indicates whether a failure should cause an exception or simply return an error indication, since such a parameter can easily be passed from an outer routine to an inner one. Consider something like:
Byte[] ReadPacket(bool DontThrowIfNone) // Documented as returning null if none
{
    int len = ReadByte(DontThrowIfNone); // Documented as returning -1 if nothing
    if (len < 0) return null;            // No data, and the caller asked us not to throw
    // ... read 'len' more bytes and return them ...
}
If something like a TimeoutException while reading the data should cause an exception, that exception should be thrown within ReadByte() or ReadMultiBytes(). If, however, such a lack of data should be considered normal, then ReadByte() or ReadMultiBytes() should not throw an exception. If one simply used the do/try pattern, the ReadPacket and TryReadPacket routines would need almost identical code, one using Read* methods and the other using TryRead* methods. Icky.
It may be better to use an enumeration rather than a boolean.
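A sketch of that enumeration variant, with hypothetical names; it keeps the pass-through property of the parameter while making call sites self-documenting:
enum MissingDataBehavior
{
    Throw,
    ReturnNone
}

class PacketReader
{
    // Same pass-through shape as ReadPacket above, but the call site now
    // reads ReadPacket(MissingDataBehavior.ReturnNone) instead of ReadPacket(true).
    public byte[] ReadPacket(MissingDataBehavior onMissing)
    {
        int len = ReadByte(onMissing);
        if (len < 0)
            return null;
        byte[] packet = new byte[len];
        // ... read 'len' more bytes into packet ...
        return packet;
    }

    int ReadByte(MissingDataBehavior onMissing)
    {
        // Hypothetical primitive: returns -1 on missing data when asked
        // not to throw; otherwise it would throw a timeout-style exception.
        return -1;
    }
}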

What is the real overhead of try/catch in C#?

So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact?
Three points to make here:
Firstly, there is little or NO performance penalty in merely having try-catch blocks in your code. This should not be a consideration when deciding whether to use them. The performance hit only comes into play when an exception is thrown.
When an exception is thrown, in addition to the stack-unwinding operations that others have mentioned, you should be aware that a whole bunch of runtime/reflection-related work happens in order to populate the members of the exception class, such as the stack trace object and the various type members.
I believe that this is one of the reasons why the general advice, if you are going to rethrow the exception, is to just throw; rather than throw the exception again or construct a new one: in those cases all of that stack information is regathered, whereas with the simple throw it is all preserved.
I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, this would be decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another, there is a lot going on.
I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (so not your typical, expected path through the program).
Are you asking about the overhead of using try/catch/finally when exceptions aren't thrown, or the overhead of using exceptions to control process flow? The latter is somewhat akin to using a stick of dynamite to light a toddler's birthday candle, and the associated overhead falls into the following areas:
You can expect additional cache misses due to the thrown exception accessing resident data not normally in the cache.
You can expect additional page faults due to the thrown exception accessing non-resident code and data not normally in your application's working set.
For example, throwing the exception will require the CLR to find the location of the finally and catch blocks based on the current IP and the return IP of every frame until the exception is handled, plus the filter block.
There is additional construction cost and name resolution in order to create the frames for diagnostic purposes, including reading of metadata etc.
Both of the above items typically access "cold" code and data, so hard page faults are probable if you have memory pressure at all:
The CLR tries to put code and data that is used infrequently far from data that is used frequently, to improve locality; this works against you because you're forcing the cold to be hot.
The cost of the hard page faults, if any, will dwarf everything else.
Typical catch situations are often deep, so the above effects tend to be magnified (increasing the likelihood of page faults).
As for the actual impact of the cost, this can vary a lot depending on what else is going on in your code at the time. Jon Skeet has a good summary here, with some useful links. I tend to agree with his statement that if you get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.
Contrary to theories commonly accepted, try/catch can have significant performance implications, and that's whether an exception is thrown or not!
It disables some automatic optimisations (by design), and in some cases injects debugging code, as you can expect from a debugging aid. There will always be people who disagree with me on this point, but the language requires it and the disassembly shows it so those people are by dictionary definition delusional.
It can impact negatively upon maintenance. This is actually the most significant issue here, but since my last answer (which focused almost entirely on it) was deleted, I'll try to focus on the less significant issue (the micro-optimisation) as opposed to the more significant issue (the macro-optimisation).
The former has been covered in a couple of blog posts by Microsoft MVPs over the years, and I trust you could find them easily; still, since StackOverflow cares so much about content, I'll provide links to some of them as supporting evidence:
Performance implications of try/catch/finally (and part two), by Peter Ritchie explores the optimisations which try/catch/finally disables (and I'll go further into this with quotes from the standard)
Performance Profiling Parse vs. TryParse vs. ConvertTo by Ian Huff states blatantly that "exception handling is very slow" and demonstrates this point by pitting Int.Parse and Int.TryParse against each other... To anyone who insists that TryParse uses try/catch behind the scenes, this ought to shed some light!
There's also this answer which shows the difference between disassembled code with- and without using try/catch.
It seems so obvious that there is an overhead which is blatantly observable in code generation, and that overhead even seems to be acknowledged by people whom Microsoft values! Yet I am repeating the internet...
Yes, there are dozens of extra MSIL instructions for one trivial line of code, and that doesn't even cover the disabled optimisations so technically it's a micro-optimisation.
I posted an answer years ago which got deleted as it focused on the productivity of programmers (the macro-optimisation).
This is unfortunate as no saving of a few nanoseconds here and there of CPU time is likely to make up for many accumulated hours of manual optimisation by humans. Which does your boss pay more for: an hour of your time, or an hour with the computer running? At what point do we pull the plug and admit that it's time to just buy a faster computer?
Clearly, we should be optimising our priorities, not just our code! In my last answer I drew upon the differences between two snippets of code.
Using try/catch:
int x;
try {
x = int.Parse("1234");
}
catch {
return;
}
// some more code here...
Not using try/catch:
int x;
if (int.TryParse("1234", out x) == false) {
return;
}
// some more code here
Consider, from the perspective of a maintenance developer, which is more likely to waste your time: if not profiling and optimisation (covered above), which likely wouldn't even be necessary if it weren't for the try/catch problem, then scrolling through source code... One of those snippets has four extra lines of boilerplate garbage!
As more and more fields are introduced into a class, all of this boilerplate garbage accumulates (both in source and disassembled code) well beyond reasonable levels. Four extra lines per field, and they're always the same lines... Were we not taught to avoid repeating ourselves? I suppose we could hide the try/catch behind some home-brewed abstraction, but... then we might as well just avoid exceptions (i.e. use Int.TryParse).
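The home-brewed abstraction being alluded to might look something like this (a sketch; the point above is that needing it at all is questionable):
using System;

static class Safe
{
    // Wraps the try/catch boilerplate once, instead of repeating it per field.
    public static T OrDefault<T>(Func<T> parse, T fallback)
    {
        try { return parse(); }
        catch { return fallback; }
    }
}

// Usage: int x = Safe.OrDefault(() => int.Parse("1234"), 0);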
This isn't even a complex example; I've seen attempts at instantiating new classes in try/catch. Consider that all of the code inside of the constructor might then be disqualified from certain optimisations that would otherwise be automatically applied by the compiler. What better way to give rise to the theory that the compiler is slow, as opposed to the compiler is doing exactly what it's told to do?
Assuming an exception is thrown by said constructor, and some bug is triggered as a result, the poor maintenance developer then has to track it down. That might not be such an easy task, as unlike the spaghetti code of the goto nightmare, try/catch can cause messes in three dimensions, as it could move up the stack into not just other parts of the same method, but also other classes and methods, all of which will be observed by the maintenance developer, the hard way! Yet we are told that "goto is dangerous", heh!
At the end, I'll mention that try/catch does have its benefit: it's designed to disable optimisations! It is, if you will, a debugging aid! That's what it was designed for, and that's what it should be used as...
I guess that's a positive point too. It can be used to disable optimizations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.
What optimisations do try, catch and finally disable?
A.K.A
How are try, catch and finally useful as debugging aids?
They're write barriers. This comes from the standard:
12.3.3.13 Try-catch statements
For a statement stmt of the form:
try try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
• The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the beginning of catch-block-i (for any i) is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) v is definitely assigned at the end-point of try-block and every catch-block-i (for every i from 1 to n).
In other words, at the beginning of each try statement:
all assignments made to visible objects prior to entering the try statement must be complete, which requires a thread lock for a start, making it useful for debugging race conditions!
the compiler isn't allowed to:
eliminate unused variable assignments which have definitely been assigned to before the try statement
reorganise or coalesce any of its inner assignments (i.e. see my first link, if you haven't already done so).
hoist assignments over this barrier, to delay assignment to a variable which it knows won't be used until later (if at all) or to pre-emptively move later assignments forward to make other optimisations possible...
A similar story holds for each catch statement: suppose within your try statement (or a constructor or function it invokes, etc.) you assign to an otherwise pointless variable (say, garbage = 42;); the compiler can't eliminate that statement, no matter how irrelevant it is to the observable behaviour of the program. The assignment needs to have completed before the catch block is entered.
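A small example of the definite-assignment rules quoted above: x counts as definitely assigned after the statement only because both the try block and the catch block assign it.
using System;

class DefiniteAssignmentDemo
{
    static int MightThrow()
    {
        throw new InvalidOperationException();
    }

    static void Main()
    {
        int x;
        try
        {
            x = MightThrow();  // may not complete...
        }
        catch
        {
            x = -1;            // ...so this branch must assign too
        }
        Console.WriteLine(x);  // OK: x is definitely assigned on every path
    }
}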
For what it's worth, finally tells a similarly degrading story:
12.3.3.14 Try-finally statements
For a try statement stmt of the form:
try try-block
finally finally-block
• The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the beginning of finally-block is the same as the definite assignment state of v at the beginning of stmt.
• The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) either:
o v is definitely assigned at the end-point of try-block
o v is definitely assigned at the end-point of finally-block
If a control flow transfer (such as a goto statement) is made that begins within try-block, and ends outside of try-block, then v is also considered definitely assigned on that control flow transfer if v is definitely assigned at the end-point of finally-block. (This is not an only if—if v is definitely assigned for another reason on this control flow transfer, then it is still considered definitely assigned.)
12.3.3.15 Try-catch-finally statements
Definite assignment analysis for a try-catch-finally statement of the form:
try try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
finally finally-block
is done as if the statement were a try-finally statement enclosing a try-catch statement:
try {
try
try-block
catch ( ... ) catch-block-1
...
catch ( ... ) catch-block-n
}
finally finally-block
In my experience the biggest overhead is in actually throwing an exception and handling it. I once worked on a project where code similar to the following was used to check whether someone had the right to edit some object. This HasRight() method was used everywhere in the presentation layer, and was often called for hundreds of objects.
bool HasRight(string rightName, DomainObject obj) {
try {
CheckRight(rightName, obj);
return true;
}
catch (Exception ex) {
return false;
}
}
void CheckRight(string rightName, DomainObject obj) {
if (!_user.Rights.Contains(rightName))
throw new Exception();
}
When the test database got fuller with test data, this led to a very visible slowdown when opening new forms, etc.
So I refactored it to the following, which - according to later quick 'n dirty measurements - is about two orders of magnitude faster:
bool HasRight(string rightName, DomainObject obj) {
return _user.Rights.Contains(rightName);
}
void CheckRight(string rightName, DomainObject obj) {
if (!HasRight(rightName, obj))
throw new Exception();
}
So, in short: using exceptions in the normal process flow is about two orders of magnitude slower than using similar process flow without exceptions.
Not to mention that if it's inside a frequently called method, it may affect the overall behaviour of the application.
For example, I consider the use of Int32.Parse bad practice in most cases, since it throws exceptions for input that can easily be validated otherwise.
So to conclude everything written here:
1) Use try..catch blocks to catch unexpected errors - almost no performance penalty.
2) Don't use exceptions for expected errors if you can avoid it.
I wrote an article about this a while back because there were a lot of people asking about this at the time. You can find it and the test code at http://www.blackwasp.co.uk/SpeedTestTryCatch.aspx.
The upshot is that there is a tiny amount of overhead for a try/catch block, but so small that it should be ignored. However, if you are running try/catch blocks in loops that execute millions of times, you may want to consider moving the block outside the loop if possible.
The key performance issue with try/catch blocks is when you actually catch an exception. This can add a noticeable delay to your application. Of course, when things are going wrong, most developers (and a lot of users) recognise the pause as an exception that is about to happen! The key here is not to use exception handling for normal operations. As the name suggests, they are exceptional and you should do everything you can to avoid them being thrown. You should not use them as part of the expected flow of a program that is functioning correctly.
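What "moving the block outside the loop" looks like in practice (a sketch with hypothetical names; note the two forms also differ in behaviour):
using System;

class LoopHoistDemo
{
    static void Process(string s)
    {
        // hypothetical work that may throw FormatException
    }

    static void Main()
    {
        string[] items = { "a", "b", "c" };

        // Per-iteration try/catch: survives individual failures, but sets up
        // the protected region on every pass.
        for (int i = 0; i < items.Length; i++)
        {
            try { Process(items[i]); }
            catch (FormatException) { /* handle and continue */ }
        }

        // Hoisted form: one protected region, but the loop now stops at the
        // first failure, so only use it when that semantic is acceptable.
        try
        {
            for (int i = 0; i < items.Length; i++)
                Process(items[i]);
        }
        catch (FormatException)
        {
            // handle
        }
    }
}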
I made a blog entry about this subject last year.
Check it out. The bottom line is that there is almost no cost for a try block if no exception occurs - and on my laptop, an exception cost about 36μs. That might be less than you expected, but keep in mind that those results were on a shallow stack. Also, the first exception thrown is really slow.
It is vastly easier to write, debug, and maintain code that is free of compiler error messages, code-analysis warning messages, and routinely accepted exceptions (particularly exceptions that are thrown in one place and caught in another). Because it is easier, the code will on average be better written and less buggy.
To me, that programmer and quality overhead is the primary argument against using try-catch for process flow.
The computer overhead of exceptions is insignificant in comparison, and usually tiny in terms of the application's ability to meet real-world performance requirements.
I really like Hafthor's blog post, and to add my two cents to this discussion: I've always found it easy to have the DATA LAYER throw only one type of exception (DataAccessException). This way my BUSINESS LAYER knows what exception to expect and can catch it. Then, depending on further business rules (e.g. whether my business object participates in a workflow), I may throw a new exception (BusinessObjectException) or proceed without re-throwing.
I'd say don't hesitate to use try..catch whenever it is necessary and use it wisely!
For example, this method participates in a workflow...
Comments?
public bool DeleteGallery(int id)
{
try
{
using (var transaction = new DbTransactionManager())
{
try
{
transaction.BeginTransaction();
_galleryRepository.DeleteGallery(id, transaction);
_galleryRepository.DeletePictures(id, transaction);
FileManager.DeleteAll(id);
transaction.Commit();
}
catch (DataAccessException ex)
{
Logger.Log(ex);
transaction.Rollback();
throw new BusinessObjectException("Cannot delete gallery. Ensure business rules and try again.", ex);
}
}
}
catch (DbTransactionException ex)
{
Logger.Log(ex);
throw new BusinessObjectException("Cannot delete gallery.", ex);
}
return true;
}
We can read in Programming Language Pragmatics by Michael L. Scott that modern compilers add no overhead in the common case, i.e. when no exception occurs; all the work is done at compile time.
But when an exception is thrown at run time, the runtime performs a binary search of the exception tables to find the correct handler, and this happens for every new throw.
But exceptions are exceptional, and this cost is perfectly acceptable. If you tried to do exception handling without exceptions and used return error codes instead, you would probably need an if statement for every subroutine, and that would incur a real run-time overhead: an if statement compiles to a few assembly instructions, executed every time you enter your subroutines.
This information is based on the cited book; for more, refer to Chapter 8.5, Exception Handling.
Let us analyse one of the biggest possible costs of a try/catch block when used where it shouldn't need to be used:
int x;
try {
x = int.Parse("1234");
}
catch {
return;
}
// some more code here...
And here's the one without try/catch:
int x;
if (int.TryParse("1234", out x) == false) {
return;
}
// some more code here
Not counting the insignificant whitespace, one might notice that these two equivalent pieces of code are almost exactly the same length in bytes. The latter contains 4 bytes less indentation. Is that a bad thing?
To add insult to injury, a student decides to loop while the input can be parsed as an int. The solution without try/catch might be something like:
while (int.TryParse(...))
{
...
}
But how does this look when using try/catch?
try {
for (;;)
{
x = int.Parse(...);
...
}
}
catch
{
...
}
Try/catch blocks are magical ways of wasting indentation, and we still don't even know the reason it failed! Imagine how the person doing the debugging feels when code continues to execute past a serious logical flaw rather than halting with a nice, obvious exception error. Try/catch blocks are a lazy man's data validation/sanitisation.
One of the smaller costs is that try/catch blocks do indeed disable certain optimizations: http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx
