Raising an error if the GetValue() method fails - C#

I have inherited a WCF service which acts as a file cache (each file representing the results of a request to a third party API). At the moment if the file doesn't exist the code creates a new request to create the data and it also raises an exception to the client code.
I think the idea is that the clients would come back to request the file again and by then it would be available to them (it takes a couple of seconds to generate the file).
I think there's a code smell here and I should rewrite this part. At the moment the exception is getting caught and bubbled up through a couple of methods. I think I should be establishing at source whether the file exists and pass that information up the call stack.
At the WCF interface I currently have a GetValue() method, though there are two options I think I could use to replace it:
return null if the file does not exist.
Use a bool TryGetValue(string key, out string value) method
Does anyone have any preferences/recommendations?
Thanks

The "TryGet" approach is a little more explicit. With the null-returning approach, you have to document that the method returns null for such and such a reason, and this requires developers to read the documentation. As we all know, some people are allergic to reading documentation.
Another advantage of the "TryGet" approach is that you can use an enum rather than a bool, to give even more information to the caller about why and how the method failed (or succeeded).
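For illustration, here is a minimal sketch of what a "TryGet" method with an enum status could look like in this scenario. CacheStatus, GetCachePath and QueueGeneration are made-up names for this sketch, not part of the existing service:

using System.IO;

public enum CacheStatus
{
    Hit,         // the cached file existed and value has been filled in
    Generating,  // the file is not there yet; a generation request has been queued
    Failed       // the third-party request failed outright
}

public CacheStatus TryGetValue(string key, out string value)
{
    string path = GetCachePath(key);   // hypothetical helper mapping a key to a file path
    if (File.Exists(path))
    {
        value = File.ReadAllText(path);
        return CacheStatus.Hit;
    }

    value = null;
    QueueGeneration(key);              // hypothetical helper that kicks off the slow request
    return CacheStatus.Generating;
}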
Jeffrey Richter's definition of an exception (from CLR via C#): when an action member cannot complete its task, the member should throw an exception. An exception means that an action member failed to complete the task it was supposed to perform, as indicated by its name. My question is: should I keep the GetValue method available to the client and raise an error when the data is unavailable, or remove it and replace it with TryGetValue()?
Jeffrey Richter's definition is not helpful when you are determining the design of your API, because that includes determining what the tasks of each action member should be.
In your design, you are expecting the value to be unavailable as a matter of course; this means that it is not an exceptional situation for the value to be unavailable. I would therefore use only the TryGet... pattern.
But, truth be told, I would pursue a different approach altogether. Suppose somebody tries this approach:
while (!TryGetValue(key, out value)) {}
or:
SomeType value;
bool flag = false;
while (!flag)
{
    try
    {
        value = GetValue(key);
        flag = true;
    }
    catch {}
}
Your WCF service is going to get a lot of hits. It would probably be better to look into an asynchronous model, so the client is notified through a callback when the result is ready, rather than inviting the client to poll the service continually.
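For example, here is a rough sketch of a task-based contract so the client simply awaits the result instead of polling. It assumes .NET 4.5's support for Task-returning WCF operations; GetCachePath and GenerateFileAsync are hypothetical helpers standing in for the real cache lookup and the slow third-party request:

using System.IO;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IFileCache
{
    [OperationContract]
    Task<string> GetValueAsync(string key);
}

public class FileCache : IFileCache
{
    public async Task<string> GetValueAsync(string key)
    {
        string path = GetCachePath(key);          // hypothetical helper
        if (!File.Exists(path))
        {
            // run the slow generation once, instead of telling the client to come back later
            await GenerateFileAsync(key, path);   // hypothetical helper
        }
        return File.ReadAllText(path);
    }

    private string GetCachePath(string key) { return key + ".cache"; }          // placeholder
    private Task GenerateFileAsync(string key, string path) { return Task.FromResult(0); } // placeholder
}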

Related

Error communication and recovery approaches in .NET

I am trying to do error communication and recovery in my C# code without using Exceptions.
To give an example, suppose there is a Func A, which can be called by Func B or Func C or other functions. Func A has to be designed keeping reuse in mind. (This application has an evolving library where new features will keep getting added over a period of time)
If Func A is not able to do what it is supposed to do, it returns an int, where any non-zero value indicates failure. I also want to communicate the reason for failure. The caller function can use this information in multiple ways:
It can show the error message to the user,
It may display its own error message more relevant to its context
It may itself return an int value indicating failure to further ancestor caller functions.
It may try to recover from the error, using some intelligent algorithm.
Hypothetically, any function on which other functions depend may need to communicate multiple things to its caller function so it can take appropriate action, including a status code, an error message, and other variables indicating the state of the data. Returning everything as a delimited string may not allow the caller function to retrieve the information without parsing the string (which will lead to its own problems and is not recommended).
The only other way is to return an object containing member variables for all the required information. This may lead to too many 'state' objects, as each function will need to have its own state object.
I want to understand how this requirement can be designed in the most elegant way. Note that at the time of coding, Func A may not know whether the caller function will have the intelligence to recover from the error or not, so I do not want to throw exceptions. Also, I want to see whether such a design is possible (and elegant at the same time) without using exceptions.
If the only way is to communicate using data objects for each function, then is that the way professional libraries are written? Can there be a generic data object? Note that new functions may be added in the future, which may have different state variables or supporting information about their errors.
Also note that since the function's return value is a 'state' object, the actual data what it is supposed to return may need to be passed as a ref or out parameter.
Is there a design pattern for this?
I have read the following articles before posting this question:
http://blogs.msdn.com/b/ricom/archive/2003/12/19/44697.aspx
Do try/catch blocks hurt performance when exceptions are not thrown?
Error Handling without Exceptions
I have read many other articles as well, which suggest not to use exceptions for code flow control, or for errors which are recoverable. Also, throwing exceptions has its own cost. Moreover, if the caller function wants to recover from an exception thrown by each of the called functions, it will have to surround each function call with a try/catch block, as a generic try/catch block will not allow execution to 'continue' from the line after the one that failed.
EDIT:
A few specific questions:
I need to write an application which will synchronize two different databases: one is a proprietary database, and the other is a SQL Server database. I want to encapsulate reusable functions in a separate layer.
The functionality is like this: the proprietary application can have many databases. Some information from each of these databases needs to be pushed to a single common SQL Server database. The proprietary application's databases can be read only while the application's GUI is open, and only through XML.
The algorithm is like this:
Read the list of open databases in the proprietary application.
For each database, start the sync process.
Check whether the user currently logged in to this database has the Sync permission. (Note: each database may be opened using a different user id.)
Read data from this database.
Transfer data to SQL Server
Proceed to next database.
While developing this application, I will be writing several reusable functions, like ReadUserPermission, ReadListOfDatabases, etc.
In this case, if ReadUserPermission finds that the permission does not exist, the caller should log this and proceed to the next open database. If ReadListOfDatabases is not able to establish a connection with the proprietary application, the caller should automatically start the application, and so on.
So which error conditions should be communicated using exceptions, and which using return codes?
Note the reusable functions may be used in other projects, where the caller may have different error recovery requirements or capabilities, so that has to be kept in mind.
EDIT:
For all those advocating exceptions, I ask them:
If Func A calls Func B, C, D, E, F, G and Func B throws an exception on some error condition, but Func A can recover from this error and would like to continue the rest of the execution (i.e. call Func C, D, ...), how does exception handling allow you to do this 'elegantly'? The only solution seems to be to wrap each of the calls to B, C, D, ... in its own try/catch block, so that the remaining statements get executed.
Please also read these 2 comments:
https://stackoverflow.com/a/1279137/1113579
https://stackoverflow.com/a/1272547/1113579
Note that I am not averse to using exceptions, if error recovery and execution of the remaining code can be achieved elegantly and without impacting performance. Also, a slight performance impact is not a concern, but I would prefer the design to be scalable and elegant.
EDIT:
OK, based on "Zdeslav Vojkovic"'s comments, I am now thinking about using exceptions.
If I were to use exceptions, can you give some use cases where I should not use an exception but use return codes instead? Note: I am talking about return codes, not the data the function is supposed to return. Is there any use case for using return codes to indicate success/failure, or none at all? That will help me understand better.
One use case for exceptions that I have understood from "Zdeslav Vojkovic" is when the callee function wants to compulsorily notify the caller function of some condition and interrupt the caller's execution. In the absence of exceptions, the caller may or may not choose to examine the return codes, but with exceptions the caller function must necessarily handle the exception if it wants to continue execution.
EDIT:
I had another interesting idea.
Any callee function which wants to let its callers recover from an error can raise an event, check the event data after the event has been handled, and then decide whether or not to throw an exception. Error codes will not be used at all; exceptions will be used only for unrecovered errors. Basically, when a callee function is unable to do what its contract says, it asks for "help" in the form of any available event handlers. If it is still not able to fulfil the contract, it throws an exception. The advantage is that the overhead of throwing exceptions is reduced, and exceptions are thrown only when neither the callee function nor any of its caller functions is able to recover from the error.
Suppose the caller function does not want to handle the error, but the caller's caller does. A custom event dispatcher would ensure that event handlers are called in the reverse order of registration, i.e. the most recently registered handler is called before the other registered handlers; if that handler is able to resolve the error, the subsequent handlers are not called at all. On the other hand, if the most recent handler cannot resolve the error, the chain propagates to the next handler.
Please give feedback on this approach.
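A minimal sketch of the idea, to make it concrete. ErrorRecoveryEventArgs, RecoverableError and TryConnect are made-up names. Note that ordinary .NET events invoke handlers in registration order; the reverse-order, stop-on-first-success dispatch described above would need a custom dispatcher rather than a plain event:

using System;

public class ErrorRecoveryEventArgs : EventArgs
{
    public string ErrorCode { get; set; }
    public bool Handled { get; set; }     // a handler sets this if it resolved the problem
}

public class DatabaseReader
{
    public event EventHandler<ErrorRecoveryEventArgs> RecoverableError;

    public void ReadData()
    {
        if (!TryConnect())
        {
            // Ask any registered callers for "help" before giving up.
            var args = new ErrorRecoveryEventArgs { ErrorCode = "CONNECT_FAILED" };
            var handlers = RecoverableError;
            if (handlers != null)
                handlers(this, args);

            // Still unresolved: the contract cannot be fulfilled, so throw.
            if (!args.Handled || !TryConnect())
                throw new InvalidOperationException("Could not connect to the database.");
        }

        // ... read the data here ...
    }

    private bool TryConnect() { /* placeholder for the real connection attempt */ return false; }
}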
How about a common FunctionResult object that you use as an out param on all your methods that you don't want to throw exceptions in?
public class FuncResultInfo
{
    public bool ExecutionSuccess { get; set; }
    public string ErrorCode { get; set; }
    public ErrorEnum Error { get; set; }
    public string CustomErrorMessage { get; set; }

    public FuncResultInfo()
    {
        this.ExecutionSuccess = true;
    }

    public enum ErrorEnum
    {
        ErrorFoo,
        ErrorBar,
    }
}

public static class Factory
{
    public static int GetNewestItemId(out FuncResultInfo funcResInfo)
    {
        var i = 0;
        funcResInfo = new FuncResultInfo();
        if (true) // whatever you are doing to decide if the function fails
        {
            funcResInfo.ExecutionSuccess = false; // mark the failure
            funcResInfo.Error = FuncResultInfo.ErrorEnum.ErrorFoo;
            funcResInfo.ErrorCode = "234";
            funcResInfo.CustomErrorMessage = "Er mah gawds, it done blewed up!";
        }
        else
        {
            i = 5; // whatever.
        }
        return i;
    }
}
Make sure all of your functions that can fail without exceptions have that out param for FuncResultInfo
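For completeness, a possible calling pattern for that out parameter might look like this; the caller decides locally whether to log, retry, show CustomErrorMessage, or pass the same FuncResultInfo further up:

FuncResultInfo info;
int newestId = Factory.GetNewestItemId(out info);

if (!info.ExecutionSuccess)
{
    Console.WriteLine("{0} (code {1})", info.CustomErrorMessage, info.ErrorCode);
}
else
{
    Console.WriteLine("Newest item id: {0}", newestId);
}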
"is it the way professional libraries are written?"
No, professional libraries are written by using exceptions for error handling - I am not sure if there is a pattern for your suggested approach, but I consider it an anti-pattern (in .NET). After all, .NET itself is a professional framework and it uses exceptions. Besides, .NET developers are used to exceptions. Do you think that your library is really so special that it can force its users to learn a completely different way of error handling?
What you just did is reinvent the COM error handling. If that is what you want to do then check this and ISupportErrorInfo interface for some ideas.
Why do you want to do this? I bet it is a performance 'optimization'.
Fear of performance issues with regard to exception handling is almost always a premature optimization. You will create an awkward API where each return value must be handled via ref/out parameters and which will hurt every user of your lib, just to solve the problem which likely doesn't exist at all.
"Func A may not know whether the caller function will have the
intelligence to recover from the error or not, so I do not want to
throw exceptions"
So you want to ensure that the caller silently allows Func A to mess up the system invariants and just carries on happily? That only makes it much harder to debug the seemingly impossible bug which happens in another function later on because of it.
There are scenarios where it makes sense to avoid exceptions, but there should be a good justification for that. Exceptions are a good idea.
EDIT: I see that you have added that you "have read many other articles also, which suggest not to use exceptions for code flow control". That is correct, exceptions in .NET are not for code flow but for error handling.
You ask:
If Func A calls Func B, C, D, E, F and it has to encapsulate each call with try/catch because it can recover from the error or still wants to execute the remaining function calls, then aren't so many try/catch statements awkward?
Not more awkward than the alternative. You are making the mistake of assuming that you can simply handle all errors returned from functions in the same way, but you usually can't.
Consider the case where you need to handle every function separately - the worst-case scenario, and code is usually not written like that:
Result x, y;
try {
    x = Function1();
}
catch (SomeException e) {
    // handle error
}
try {
    y = Function2();
}
catch (SomeOtherException e) {
    // handle error
}
against:
int error;
Result x, y;
error = Function1(out x);
if (error != SOME_KNOWN_ISSUE) {
    // handle error
}
error = Function2(out y);
if (error != SOME_KNOWN_ISSUE) {
    // handle error
}
Not a big difference. And please don't tell me that you would not check the error code.
However, if you decide to ignore all errors (a horrible idea) then exceptions are simpler:
try {
    var x = Function1();
    var y = Function2();
    var z = Function3();
}
catch (Exception e) {
    // you can still see the message here and possibly rethrow
}
vs
Result1 r1;
Function1(out r1);
Result2 r2;
Function2(out r2);
Result3 r3;
Function3(out r3);
// and here you still don't know whether there was an error
Can you elaborate on what you mean by "I need predictability with regard to time constraints"?
In some system-level or real-time software, you can't afford the stack unwinding involved in exception handling, as you can't guarantee its duration, and that could violate your timing requirements. But this is never the case in .NET, as garbage collection is far worse in this regard.
Also, when you say "In .NET I would always use the exceptions for error handling", can you explain how, or what you define as an error condition? Is a recoverable situation an error condition or not?
#shambulater already gave a great example in the comments. In FileStream, a missing file is not recoverable and it will throw. In the client of FileStream it might be recoverable or not, depending on context. Some clients will ignore it, some will exit the app, some will wrap it in another exception and let someone upstream decide.
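A short sketch of that point: FileStream treats the missing file as unrecoverable and throws, while this particular caller decides the situation is recoverable in its own context. settingsPath, LoadSettings and UseDefaultSettings are hypothetical names:

using System.IO;

try
{
    using (var stream = new FileStream(settingsPath, FileMode.Open, FileAccess.Read))
    {
        LoadSettings(stream);       // hypothetical
    }
}
catch (FileNotFoundException)
{
    UseDefaultSettings();           // this client can recover; another might rethrow or exit
}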
When will you not use exceptions?
In those cases where I would also not return an error code.
I use the FunctionResult approach extensively in MS Access and it works wonderfully. I consider it far better than error handling. For a start, each error message is application specific and is not the usual off-target default error message. If the error propagates up a call list of functions, the error messages can be daisy-chained together. The eventual error message looks like a call stack, but is cleaner, e.g. [Could not read photos from Drive F:, Could not read files, Drive not ready]. Wacko, I have just discovered that some drives can be mounted but not ready. I could not have unit tested for that error as I didn't know that such an error could occur (it means the SD card reader is empty). Yet even without prior knowledge of this error, I could write an application that handled it gracefully.
My method is to call a method in a class that is written as a function returning a boolean value. The return value is set to True in the last line of the function, so if the function exits before the last line, it is by default unsuccessful. In my code, calling the function looks like: if getphotos(folderID) then ... do something ... else report error. Inside the class module is a module-level error variable (Str mEM) which is read via a getter, so the class has an .em property which holds the error message. I also have a comment variable which is sometimes used like an error message; for example, if the folder is empty, the code that looked for photos worked but did not return any photos. That would not be an error, but it is something that I might want to communicate to the calling program. If there was an error, the user would get an error message and the calling procedure would exit. In contrast, if there was a comment, such as 'no photos', then I might skip trying to read the photo metadata, for example. How does Zdeslav Vojkovic handle subtleties like that with exceptions?
I am moving to C#, hence finding this thread. I like the certainty of knowing why function calls failed (I interact with databases and filing systems all the time, so I struggle to cover my projects with unit tests). I do agree with Zdeslav Vojkovic about using exceptions where their use is standard, but I will not be doing so in my own code. I am looking for a clean design pattern that allows me to validate parameters within the called function and to inform the caller if the parameters were not right.
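A rough C# sketch of the pattern this comment describes: a boolean return plus error/comment properties read after the call. PhotoReader, GetPhotos, ErrorMessage and Comment are made-up names following the description above, not an established library pattern:

using System.Collections.Generic;

public class PhotoReader
{
    public string ErrorMessage { get; private set; }   // the ".em" equivalent
    public string Comment { get; private set; }        // non-error information, e.g. "no photos"

    public bool GetPhotos(string folderId, out List<string> photos)
    {
        photos = new List<string>();
        ErrorMessage = null;
        Comment = null;

        if (string.IsNullOrEmpty(folderId))
        {
            ErrorMessage = "No folder id was supplied.";   // parameter validation reported, not thrown
            return false;
        }

        // ... read the photo file names here ...

        if (photos.Count == 0)
            Comment = "no photos";                          // worked, but nothing for the caller to process

        return true;   // only reached if everything above succeeded
    }
}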

Why does Type.GetInterfaces() sometimes not return a valid list?

We inherited a somewhat poorly-designed WCF service that we want to improve. One problem with it is that it has over a hundred methods (on two different interfaces), most of which we suspect are not used. We decided to put some logging on each of the methods to track when and how they're called. To make the tracing code refactor-friendly and typo-proof, we implemented it like so:
public void LogUsage()
{
    try
    {
        MethodBase callingMethod = new StackTrace().GetFrame(1).GetMethod();
        string interfaceName = callingMethod.DeclaringType.GetInterfaces()[0].Name;
        _loggingDao.LogUsage(interfaceName, callingMethod.Name, GetClientAddress(), GetCallingUrl());
    }
    catch (Exception exception)
    {
        _legacyLogger.Error("Error in usage tracking", exception);
    }
}
LogUsage() is then called at the start of each method we want to trace.
The service is very high traffic, on the order of 500,000+ calls/day. 99.95% of the time, this code executes beautifully. But the other 0.05% of the time, GetInterfaces() returns an empty (but not null) array.
Why would GetInterfaces() occasionally return inconsistent results?
This may seem so trivial - a 0.05% error rate is something we can usually only dream of. But the whole point is to identify all the service touchpoints, and if this error is always coming out of one (or a few) method calls, then our tracing is incomplete. I've tried to reproduce this error in my development environment by calling each and every method on the service, but to no avail.
StackTrace is notoriously unreliable, especially in multi-threaded environments. Or rather, it is highly reliable, but isn't very practical at times. Asking for the 'last method that was called' can have unexpected results. Try logging the DeclaringType. You might be surprised what you find there. Note that while this is a 0.05% failure rate now, it might easily increase with the complexity of your application.
In order to properly implement reusable tracing code, you'll need to rely on the .NET 4.5 Caller Information feature, use a dynamic proxy (e.g. Castle DynamicProxy), or use an AOP framework such as PostSharp. Alternatively, you can just code the tracing by hand.
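A sketch of the Caller Information alternative (.NET 4.5+): the compiler injects the caller's member name at compile time, so no StackTrace walk is needed at runtime. Note that the attributes supply the member name, file path and line number, but not the interface name, so that has to come from somewhere else; "IMyService" below is just a placeholder:

using System.Runtime.CompilerServices;

public void LogUsage([CallerMemberName] string methodName = "",
                     [CallerFilePath] string filePath = "",
                     [CallerLineNumber] int lineNumber = 0)
{
    // "IMyService" stands in for the interface being traced
    _loggingDao.LogUsage("IMyService", methodName, GetClientAddress(), GetCallingUrl());
}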
From Eric Lippert (who works on the C# compiler team at Microsoft), in response to Getting Type T from a StackFrame:
The stack frame does not actually tell you who called your method. The stack frame tells you where control is going to return to. The stack frame is the reification of continuation. The fact that who called the method and where control will return to are almost always the same thing is the source of your confusion, but I assure you that they need not be the same.
The whole post is worth reading...

CRUD operations: do you notify whether the insert, update, etc. went well?

I have a simple question for you (I hope) :)
I have pretty much always used void as a "return" type when doing CRUD operations on data.
E.g. consider this code:
public void Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
}
and then consider this code:
public bool Insert(IAuctionItem item) {
    if (item == null) {
        AuctionLogger.LogException(new ArgumentNullException("item is null"));
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();
    return true;
}
It actually just comes down to whether you should notify that something was inserted (and went well) or not?
I typically go with the first option there.
Given your code, if something goes wrong with the insert there will be an Exception thrown.
Since you have no try/catch block around the Data Access code, the calling code will have to handle that Exception...thus it will know both if and why it failed. If you just returned true/false, the calling code will have no idea why there was a failure (it may or may not care).
I think it would make more sense if, in the case where item == null, you returned false. That would indicate a case that you expect to happen not infrequently, and that you therefore don't want to raise an exception for, but which the calling code can handle via the false return value.
As it stands, you'll either return true or there'll be an exception - which doesn't really help you much.
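For illustration, the Insert method reshaped along those lines might look something like this (still assuming SubmitChanges() throws on genuine data-access failures):

public bool Insert(IAuctionItem item) {
    if (item == null) {
        return false;   // expected, not-infrequent case: report it rather than throw
    }
    _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
    _dataStore.DataContext.SubmitChanges();   // real failures here still surface as exceptions
    return true;
}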
Don't fight the framework you happen to be in. If you are writing C code, where return values are the most common mechanism for communicating errors (for lack of a better built in construct), then use that.
.NET base class libraries use exceptions to communicate errors, and their absence means everything is okay. Because almost all code uses the BCL, much of it will be written to expect exceptions; but when it gets to a library written as if C# were C with no support for exceptions, each invocation will need to be wrapped in an if (!myObject.DoSomething()) { Console.WriteLine("Damn"); } block.
For the next developer to use your code (which could be you after a few years when you've forgotten how you originally did it), it will be a pain to start writing all the calling code to take advantage of having error conditions passed as return values, as changes to values in an output parameter, as custom events, as callbacks, as messages to queue or any of the other imaginable ways to communicate failure or lack thereof.
I think it depends. Imagine that your user wants to add a new post to a forum, and the add fails for some reason. If you don't tell the user, they will never know that something went wrong. The best way is to throw another exception with a nice message for them.
And if it does not relate to the user, and you have already logged it to the database log, you shouldn't care about returning anything any more.
I think it is a good idea to notify the user whether the operation went well or not. Regardless of how much you test your code and try to think outside the box, it is likely that during its existence the software will encounter a problem you did not cater for, making it behave incorrectly. The use of notifications, in my opinion, allows the user to take action - a sort of Plan B, if you like, for when the program fails. This action can either be a simple workaround, or informing people in the IT department so that they can fix it.
I'd rather click that extra "OK" button than learn that something went wrong when it is too late.
You should stick with void; if you need more data, use variables (e.g. out parameters) for it, since you may need specific data (and it can be more than one number or string), and the exception mechanism is a good solution for handling errors.
So if you want to know how many rows were affected, whether a stored procedure returned something, etc., a single return value will limit you.

Error handling: should I throw an exception or handle it at the source?

I have this sort of format
asp.net MVC View -> Service Layer -> Repository.
So the view calls the service layer, which has business/validation logic in it, which in turn calls the repository.
Now my service layer methods usually have a bool return type so that I can return true if the database query went through successfully, or false if it failed. Then a generic message is shown to the user.
I of course will log the error with elmah. However I am not sure how I should get to this point.
Like right now, my repository has void return types for update, create and delete.
So say an update fails: should I have a try/catch in my repository that throws the error, and then my service layer catches it, does the Elmah signaling, and returns false?
Or should I have these repository methods return a bool, try/catch the error in the repository, and then return true or false to the service layer, which in turn returns true or false to the view?
Exception handling still confuses me: how to handle the errors, and when to throw versus when to catch them.
The rule of thumb I always use is:
At low levels, throw when an operation cannot complete due to exceptional circumstances.
In middle layers, catch multiple exception types and rewrap in a single exception type.
Handle exceptions at the last responsible moment.
DOCUMENT!
Here's an example in pseudocode for a multi-layer ASP.NET MVC app (UI, Controller, Logic, Security, Repository):
User clicks submit button.
Controller action is executed and calls into the Logic (business) layer.
Logic method calls into Security with the current User credentials.
    User is invalid:
        Security layer throws SecurityException.
        Logic layer catches, wraps in LogicException with a more generic error message.
        Controller catches LogicException, redirects to Error page.
    User is valid and Security returns:
        Logic layer calls into the Repository to complete the action.
            Repository fails:
                Repository throws RepositoryException.
                Logic layer catches, wraps in LogicException with a more generic error message.
                Controller catches LogicException, redirects to Error page.
            Repository succeeds:
                Logic layer returns.
                Controller redirects to the Success view.
Notice, the Logic layer only throws a single exception type -- LogicException. Any lower-level exceptions that bubble up are caught, wrapped in a new instance of LogicException, which is thrown. This gives us many advantages.
First, the stack trace is accessible. Second, callers only have to deal with a single exception type rather than multiple exceptions. Third, technical exception messages can be massaged for display to users while still retaining the original exception messages. Lastly, only the code responsible for handling user input can truly know what the user's intent was and determine what an appropriate response is when an operation fails. The Repository doesn't know if the UI should display the error page or request the user try again with different values. The controller knows this.
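A minimal sketch of that "catch, wrap, rethrow a single type" idea. LogicException, RepositoryException, OrderLogic and the helper methods are illustrative names; the walkthrough above does not prescribe exact types:

using System;
using System.Security;

public class LogicException : Exception
{
    public LogicException(string message, Exception inner) : base(message, inner) { }
}

public class RepositoryException : Exception
{
    public RepositoryException(string message) : base(message) { }
}

public class OrderLogic
{
    public void PlaceOrder(string userName)
    {
        try
        {
            CheckPermission(userName);   // stand-in for the security call; may throw SecurityException
            SaveOrder(userName);         // stand-in for the repository call; may throw RepositoryException
        }
        catch (SecurityException ex)
        {
            // Wrap: the controller only ever sees LogicException, while the original
            // exception and its stack trace survive as InnerException.
            throw new LogicException("You are not allowed to place orders.", ex);
        }
        catch (RepositoryException ex)
        {
            throw new LogicException("The order could not be saved.", ex);
        }
    }

    private void CheckPermission(string userName) { /* placeholder */ }
    private void SaveOrder(string userName) { /* placeholder */ }
}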
By the way, nothing says you can't do this:
try
{
    var result = DoSomethingOhMyWhatIsTheReturnType();
}
catch (LogicException e)
{
    if (e.InnerException is SqlException)
    {
        // handle sql exceptions
    }
    else if (e.InnerException is InvalidCastException)
    {
        // handle cast exceptions
    }
    // blah blah blah
}
I like to think of exception handling this way: you define your method signature according to what you expect the method to do. If the method cannot do that, it must throw an exception. So if you expect something to fail based on the input data you have (ignoring the ambient state), then your method signature must indicate whether the operation succeeded or failed. But if your method is not expected to fail based on the input you have (again, ignoring all the other ambient state), then an exception is in order when the method does fail.
Consider these two APIs:
int int.Parse(string integerValue);
// In this case, the method will return an int or it will die!
// That means your data must be valid for this method to function.

bool int.TryParse(string integerValue, out int number);
// In this case, we expect the data we passed in might not be fully valid, hence a boolean.
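A small usage contrast for those two APIs; configuredPort and userInput are hypothetical inputs:

string configuredPort = "8080";          // we assert this is valid; bad data here is a bug
int port = int.Parse(configuredPort);    // so letting Parse throw is appropriate

string userInput = "not-a-number";       // bad data is expected here
int requested;
if (!int.TryParse(userInput, out requested))
{
    requested = port;                    // no exception; just fall back to the configured value
}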
While returning an error (or success) code is often the better way, exceptions have one huge advantage over returning codes or silently suppressing errors: at least you can't just ignore them!
Don't abuse exceptions for simple flow control - that would be the silliest thing to do.
But if a function of yours really runs into an "exceptional" problem, then definitely throw an exception. The caller must then either handle it explicitly and thus know what's going on, or it'll bomb out on him.
Just returning an error code is dangerous, since the caller might not bother inspecting the code and could possibly still go on - even if, in your app's logic, there's really something wrong that needs to be dealt with.
So: don't abuse exceptions, but if a real exception happens that requires the caller to do something about it, I would definitely recommend using that mechanism for signalling exceptional conditions.
As for handling exceptions: handle those that you can really deal with. E.g. if you try to save a file and get a security exception, show the user a dialog asking for some other location to save to (since he might not have permissions to save to where he wanted to).
However, exceptions you can't really deal with (what do you want to do about an OutOfMemoryException, really?) should be left untouched - maybe a caller further up the call stack can handle those - or not.
Marc
First of all there is no one way and there certainly isn't a perfect one so don't overthink it.
In general you want to use exceptions for exceptional cases (exceptions incur a performance overhead so overusing them especially in "loopy" situations can have a perf impact). So let's say the repository cannot connect to the database server for some reason. Then you would use an exception. But if the repository executes a search for some object by id and the object is not found then you would want to return null instead of throwing an exception saying that object with ID x doesn't exist.
Same thing for the validation logic. Since it's validating, it is assumed that sometimes the input won't validate, so in that case it would be good to return false from the validation service (or perhaps a more complex type including some additional information as to why it didn't validate). But if the validation logic includes checking whether a username is taken or not and it can't do this for some reason, then you would throw an exception.
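For the "more complex type" mentioned above, one possible shape is sketched below. ValidationResult, ValidateUserName and the member names are hypothetical:

using System.Collections.Generic;

public class ValidationResult
{
    public bool IsValid { get; set; }
    public List<string> Errors { get; private set; }

    public ValidationResult()
    {
        IsValid = true;
        Errors = new List<string>();
    }
}

public ValidationResult ValidateUserName(string userName)
{
    var result = new ValidationResult();

    if (string.IsNullOrWhiteSpace(userName))
    {
        result.IsValid = false;
        result.Errors.Add("User name is required.");   // expected outcome: returned, not thrown
    }

    // By contrast, if the "is this name already taken?" check could not reach the
    // database at all, that would be unexpected and could legitimately throw.
    return result;
}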
So say an update fails: should I have a try/catch in my repository that throws the error, and then my service layer catches it, does the Elmah signaling, and returns false?
Why would the update fail? If there is a perfectly fine reason for this happening that's part of the normal process, then don't throw an exception. If it happens for a strange reason (let's say something removed the record being updated before it could be updated), then an exception seems logical; there really is no way to recover from that situation.

Throw/do-not-throw an exception based on a parameter - why is this not a good idea?

I was digging around in MSDN and found this article which had one interesting bit of advice: Do not have public members that can either throw or not throw exceptions based on some option.
For example:
Uri ParseUri(string uriValue, bool throwOnError)
Now of course I can see that in 99% of cases this would be horrible, but is its occasional use justified?
One case I have seen it used is with an "AllowEmpty" parameter when accessing data in the database or a configuration file. For example:
object LoadConfigSetting(string key, bool allowEmpty);
In this case, the alternative would be to return null. But then the calling code would be littered with null reference checks. (And the method would also preclude the ability to actually allow null as a specifically configurable value, if you were so inclined.)
What are your thoughts? Why would this be a big problem?
I think it's definitely a bad idea to have a throw / no throw decision based on a boolean, namely because it requires developers looking at a piece of code to have a functional knowledge of the API to determine what the boolean means. This is bad on its own, but when it changes the underlying error handling it can make it very easy for developers to make mistakes while reading code.
It would be much better and more readable to have 2 APIs in this case.
Uri ParseUriOrThrow(string value);
bool TryParseUri(string value, out Uri uri);
In this case it's 100% clear what these APIs do.
Article on why booleans are bad as parameters: http://blogs.msdn.com/jaredpar/archive/2007/01/23/boolean-parameters.aspx
It's usually best to choose one error handling mechanism and stick with it consistently. Allowing this sort of flip-flop code can't really improve the life of developers.
In the above example, what happens if parsing fails and throwOnError is false? Now the user has to guess whether null is going to be returned, or God knows what...
True, there's an ongoing debate between exceptions and return values as the better error-handling method, but I'm pretty certain there's a consensus about being consistent and sticking with whatever choice you make. The API can't surprise its users, and error handling should be part of the interface, defined as clearly as the interface itself.
It's kind of nasty from a readability standpoint. Developers tend to expect every method to throw an exception, and if they want to ignore the exception, they'll catch it themselves. With the 'boolean flag' approach, every single method needs to implement this exception-inhibiting semantic.
However, I think the MSDN article is strictly referring to 'throwOnError' flags. In these cases either the error is ignored inside the method itself (bad, as it's hidden) or some kind of null/error object is returned (bad, because you're not using exceptions to handle the error, which is inconsistent and itself error-prone).
Whereas your example seems fine to me. An exception indicates a failure of the method to perform its duty - there is no return value. However the 'allowEmpty' flag changes the semantics of the method - so what would have been an exception ('Empty value') is now expected and legal. Plus, if you had thrown an exception, you wouldn't easily be able to return the config data. So it seems OK in this case.
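To make that concrete, a sketch of how the allowEmpty flag changes what counts as exceptional; ReadRawSetting is a hypothetical lookup of the stored value:

public object LoadConfigSetting(string key, bool allowEmpty)
{
    string raw = ReadRawSetting(key);      // hypothetical helper
    if (string.IsNullOrEmpty(raw))
    {
        if (allowEmpty)
            return null;                   // expected and legal when the caller opts in
        throw new InvalidOperationException("Setting '" + key + "' is empty.");
    }
    return raw;
}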
In any public API it is really a bad idea to have two ways to check for a faulty condition because then it becomes non-obvious what will happen if the error occurs. Just by looking at the code will not help. You have to understand the semantics of the flag parameter (and nothing prevents it from being an expression).
If checking for null is not an option, and if I need to recover from this specific failure, I prefer to create a specific exception so that I can catch it later and handle it appropriately. In any other case I throw a general exception.
Another example in line with this could be the set of TryParse methods on some of the value types:
bool DateTime.TryParse(string text, out DateTime result)
Having a dontThrowException parameter defeats the whole point of exceptions (in any language). If the calling code wants to have:
public static void Main()
{
    FileStream myFile = File.Open("NonExistent.txt", FileMode.Open, FileAccess.Read);
}
they're welcome to (C# doesn't even have checked exceptions). In Java the same thing would be accomplished with:
public static void main(String[] args) throws FileNotFoundException
{
    FileInputStream fs = new FileInputStream("NonExistent.txt");
}
Either way, it's the caller's job to decide how to handle (or not) the exception, not the callee's.
In the linked article there is a note that exceptions should not be used for flow of control - which seems to be implied in the example in the question. Exceptions should reflect method-level failure. Having a signature that says it is OK to throw an error suggests the design has not been thought out.
Jeffrey Richter's book CLR via C# points out: "you should throw an exception when the method cannot complete its task as indicated by its name".
His book also points out a very common error: people tend to write code that catches everything (in his words, "A ubiquitous mistake of developers who have not been properly trained on the proper use of exceptions is to use catch blocks too often and improperly. When you catch an exception, you're stating that you expected this exception, you understand why it occurred, and you know how to deal with it.").
That has made me try to code for exceptions that I can expect and handle in my logic; otherwise, it should be an error.
Validate your arguments and prevent the exceptions, and only catch what you can handle.
I would aver that it's often useful to have a parameter which indicates whether a failure should cause an exception or simply return an error indication, since such a parameter can easily be passed from an outer routine to an inner one. Consider something like:
Byte[] ReadPacket(bool DontThrowIfNone) // Documented as returning null if none
{
    int len = ReadByte(DontThrowIfNone); // Documented as returning -1 if nothing
    if (len < 0)
        return null;
    // ... otherwise read 'len' more bytes and return the assembled packet ...
}
If something like a TimeoutException while reading the data should cause an exception, such an exception should be thrown within ReadByte() or ReadMultiBytes(). If, however, such a lack of data should be considered normal, then ReadByte() or ReadMultiBytes() should not throw an exception. If one simply used the Do/Try pattern, the ReadPacket and TryReadPacket routines would need almost identical code, but with one using Read* methods and the other using TryRead* methods. Icky.
It may be better to use an enumeration rather than a boolean.
