Business Validation Logic Code Smell - c#

Consider the following code:
    partial class OurBusinessObject {
        partial void OnOurPropertyChanged() {
            if(ValidateOurProperty(this.OurProperty) == false) {
                this.OurProperty = OurBusinessObject.Default.OurProperty;
            }
        }
    }
That is, when the value of OurProperty in OurBusinessObject is changed, if the value is not valid, set it to be the default value. This pattern strikes me as code smell but others here (at my employer) do not agree. What are your thoughts?
Edited to add: I've been asked to add an explanation for why this is thought to be okay. The idea was that rather than having the producers of the business object validate the data, the business object could validate its own properties, and set clean default values in cases when the validation failed. Further, it was thought, if the validation rules change, the business object producers won't have to change their logic as the business object will take care of validating and cleaning the data.

It is absolutely horrible. Good luck trying to debug issues in Production. The only thing it can lead to is to cover bugs, which will just pop up somewhere else, where it will not be obvious at all where they are coming from.

I think I have to agree with you. This could definitely lead to issues where the logic unexpectedly returns to the defaults, which could be very difficult to debug.
At the very least, this behavior should be logged, but this seems more like a case for throwing an exception.

To me this looks like the symptom, rather than the actual problem. What's really going on is that the setter for OurProperty fails to preserve the original value for use in the OnOurPropertyChanged event. If you do that, suddenly it becomes easier to make better choices about how to proceed.
For that matter, what you really want is an OnOurPropertyChanging event that is raised from the setter before the assignment actually takes place. This way you can allow or deny the assignment in the first place. Otherwise there is a small amount of time where your object is not valid, and that means the type is not thread safe and you can't count on consistency if you consider concurrency a concern.

Definitely a questionable practice.
How would an invalid value ever get assigned to this property? Wouldn't that indicate there's a bug somewhere in the calling code, in which case you'd probably want to know right away? Or that a user input something incorrectly in which case they should be informed right away?
In general, "failing fast" makes tracking down bugs a lot easier. Silently assigning a default behind the scenes is akin to "magic" and is only going to cause confusion to whoever has to maintain the codebase.

Distaste for the term 'code smell' aside, you might be right - depending on where it's coming from, silently changing the value is probably not a good thing. It would be better to ensure your value is valid instead of just reverting to the default.

I would highly recommend refactoring it to validate before setting the property.
You could always have a method that was more like:
    T GetValidValueForProperty<T>(T suggestedValue, T currentValue);
or even:
    T GetValidValueForProperty<T>(string propertyName, T suggestedValue, T currentValue);
If you do that, before you set the property, you could pass it to the business logic to validate, and the business logic could return the default property value (your current behavior) OR (more reasonable in most cases), return the currentValue, so setting had no effect.
This would be used more like:
    T OurProperty
    {
        get
        {
            return this.propertyBackingField;
        }
        set
        {
            this.propertyBackingField = this.GetValidValueForProperty(value, this.propertyBackingField);
        }
    }
It doesn't really matter what you do, but it is important to validate before you change your current value. If you change your value before you determine whether the new value is good, you're asking for trouble in the long term.

It may or may not "smell", but I'm leaning more towards, "Yes it smells".
Does setting OurProperty to the default have a logical reason for doing so or is it simply convenient to do so in code? It is possible, however unlikely in practice, to contrive a scenario where this would be expected behavior, but I'm guessing that in most cases you should be throwing an exception and handling it cleanly somewhere.
Does setting the value to default get you closer to or move you away from the functional specifications description of how the application is supposed to work?

You are validating a change after it has been done? Validation should be done before the business property is altered.
Answering your question: the solution presented in that code snippet can generate big issues in production, you don't know whether the default value appeared there due to invalid input or just because something else set the value to the default.

It's hard to say without knowing the context or business rules. Generally speaking though, you should just validate at time of input, and maybe once more before persistence, but the way you're doing it won't really allow you to validate since you're not allowing a property to contain an invalid value.

I think your validation logic should raise an exception if asked to use an invalid value. If your consumer wants to use a default value, it should ask for it explicitly, either through a special, documented value or through another method.
The only kind of exceptions I can think would be forgivable would be, like, normalizing case, like in email fields to detect duplicates.

Furthermore, why in the world is this partial? Using partial classes for anything but a generated code framework is itself a code smell since you're likely using them to hide complexity which should be split up anyways!

I agree with Grzenio and would add that the best way to handle a validation error down in the domain layer (aka business objects) is to generate an exception. That exception could propagate all the way up into the UI layer where it could be handled and interactively rectified with the user. However, depending on the capabilities and technologies involved, this may not always be feasible, in which case, you probably should be validating up in the UI layer (possibly in addition to the domain layer). It's less than ideal, but might be your only viable option. In any case, setting it to a default value is a horrible thing to do and will lead to subtle bugs that will be near impossible to diagnose. If done on a broad scale, you'll have an unmaintainable system in no time (especially if you have no unit tests backing you up).

An argument that I have against this is the following. Suppose the user/producer of the business object accidentally inputs an invalid value. Then this pattern will gloss over that fact and default to clean data. But the right way to handle this is to throw an error and have the user/producer verify/clean their input data.

I'd say, implement PropertyChanging and allow the business logic to approve/deny a value, and then afterwards, throw an exception for invalid values.
This way, you don't ever have an invalid value. That, and you should never change a user's information. What if a user adds an entry to the database, and keeps track of it for his own records? Your code would re-assign the value to the default, and he's now tracking the wrong information. It's better to inform the user ASAP.
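The approach several answers above converge on (validate before assignment, raise a "changing" hook that can veto, throw on invalid input, and make defaulting an explicit call) can be sketched as follows. This is an added example, not code from the question or from any answer; the validation rule, the default value, `OurPropertyChangingEventArgs` and `ResetOurPropertyToDefault` are hypothetical names chosen for illustration.

```csharp
using System;

// Added illustration. Only OurBusinessObject and OurProperty come from the
// question; every other name and the validation rule are assumptions.
public sealed class OurPropertyChangingEventArgs : EventArgs
{
    public OurPropertyChangingEventArgs(string currentValue, string proposedValue)
    {
        CurrentValue = currentValue;
        ProposedValue = proposedValue;
    }

    public string CurrentValue { get; }
    public string ProposedValue { get; }

    // A handler sets this to true to veto the assignment.
    public bool Cancel { get; set; }
}

public sealed class OurBusinessObject
{
    // Constructed default value for the example.
    public const string DefaultOurProperty = "unassigned";

    private string _ourProperty = DefaultOurProperty;

    // Raised before the backing field is touched, so the object is never
    // observable in an invalid state.
    public event EventHandler<OurPropertyChangingEventArgs> OurPropertyChanging;

    public string OurProperty
    {
        get { return _ourProperty; }
        set
        {
            // Fail fast: the caller learns immediately that its input was bad.
            if (!IsValidOurProperty(value))
            {
                throw new ArgumentException(
                    "OurProperty must be non-blank and at most 32 characters.",
                    nameof(value));
            }

            var args = new OurPropertyChangingEventArgs(_ourProperty, value);
            OurPropertyChanging?.Invoke(this, args);
            if (args.Cancel)
            {
                throw new InvalidOperationException(
                    "The change to OurProperty was rejected by a business rule.");
            }

            // Only a validated, approved value ever reaches the field.
            _ourProperty = value;
        }
    }

    // Defaulting is something the caller asks for, never a silent side effect.
    public void ResetOurPropertyToDefault()
    {
        _ourProperty = DefaultOurProperty;
    }

    // Assumed rule, standing in for the question's ValidateOurProperty.
    private static bool IsValidOurProperty(string candidate)
    {
        return !string.IsNullOrWhiteSpace(candidate) && candidate.Length <= 32;
    }
}

public static class Program
{
    public static void Main()
    {
        var obj = new OurBusinessObject();
        obj.OurProperty = "widget-42";

        try
        {
            obj.OurProperty = "   "; // invalid: whitespace only
        }
        catch (ArgumentException ex)
        {
            // The failure is visible at the call site that caused it.
            Console.WriteLine("Rejected: " + ex.Message);
        }

        // The previous valid value is untouched by the failed assignment.
        Console.WriteLine("Current value: " + obj.OurProperty);

        obj.ResetOurPropertyToDefault();
        Console.WriteLine("After explicit reset: " + obj.OurProperty);
    }
}
```

Because a rejected value never reaches the backing field, a default found in stored data can only have come from an explicit reset, which removes the ambiguity the answers warn about.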

Related

CRUD operations; do you notify whether the insert,update etc. went well?

I have a simple question for you (I hope) :)
I have pretty much always used void as a "return" type when doing CRUD operations on data.
Eg. Consider this code:
    public void Insert(IAuctionItem item) {
        if (item == null) {
            AuctionLogger.LogException(new ArgumentNullException("item is null"));
        }
        _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
        _dataStore.DataContext.SubmitChanges();
    }
and then consider this code:
    public bool Insert(IAuctionItem item) {
        if (item == null) {
            AuctionLogger.LogException(new ArgumentNullException("item is null"));
        }
        _dataStore.DataContext.AuctionItems.InsertOnSubmit((AuctionItem)item);
        _dataStore.DataContext.SubmitChanges();
        return true;
    }
It actually just comes down to whether you should notify that something was inserted (and went well) or not ?
I typically go with the first option there.
Given your code, if something goes wrong with the insert there will be an Exception thrown.
Since you have no try/catch block around the Data Access code, the calling code will have to handle that Exception...thus it will know both if and why it failed. If you just returned true/false, the calling code will have no idea why there was a failure (it may or may not care).
I think it would make more sense if in the case where "item == null" that you returned "false". That would indicate that it was a case that you expect to happen not infrequently, and that therefore you don't want it to raise an exception but the calling code could handle the "false" return value.
As it stands, you'll return "true" or there'll be an exception - that doesn't really help you much.
Don't fight the framework you happen to be in. If you are writing C code, where return values are the most common mechanism for communicating errors (for lack of a better built in construct), then use that.
.NET base class libraries use Exceptions to communicate errors and their absence means everything is okay. Because almost all code uses the BCL, much of it will be written to expect exceptions, except when it gets to a library written as if C# was C with no support for Exceptions, each invocation will need to be wrapped in a if(!myObject.DoSomething){ System.Writeline("Damn");} block.
For the next developer to use your code (which could be you after a few years when you've forgotten how you originally did it), it will be a pain to start writing all the calling code to take advantage of having error conditions passed as return values, as changes to values in an output parameter, as custom events, as callbacks, as messages to queue or any of the other imaginable ways to communicate failure or lack thereof.
I think it depends. Imagine that your user wants to add a new post onto a forum. And the adding fails for some reason, then if you don't tell the user, they will never know that something is wrong. The best way is to throw another exception with a nice message for them.
And if it does not relate to the user, and you already logged it out to the database log, you shouldn't care about return or not any more.
I think it is a good idea to notify the user if the operation went well or not. Regardless how much you test your code and try to think out of the box, it is most likely that during its existence the software will encounter a problem you did not cater for, thus making it behave incorrectly. The use of notifications, in my opinion, allows the user to take action, a sort of Plan B if you like when the program fails. This action can either be a simple work around or else, inform people from the IT department so that they can fix it.
I'd rather click that extra "OK" button than learn that something went wrong when it is too late.
You should stick with void, if you need more data - use variables for it, as either you'll need specific data (and it can be more than one number/string) and an exception mechanism is a good solution for handling errors.
So.. if you want to know how many rows were affected, if a sp returned something etc... - a return type will limit you..

null objects vs. empty objects

[ This is a result of Best Practice: Should functions return null or an empty object? but I'm trying to be very general. ]
In a lot of legacy (um...production) C++ code that I've seen, there is a tendency to write a lot of NULL (or similar) checks to test pointers. Many of these get added near the end of a release cycle when adding a NULL-check provides a quick fix to a crash caused by the pointer dereference--and there isn't a lot of time to investigate.
To combat this, I started to write code that took a (const) reference parameter instead of the (much) more common technique of passing a pointer. No pointer, no desire to check for NULL (ignoring the corner case of actually having a null reference).
In C#, the same C++ "problem" is present: the desire to check every unknown reference against null (ArgumentNullException) and to quickly fix NullReferenceExceptions by adding a null check.
It seems to me, one way to prevent this is to avoid null objects in the first place by using empty objects (String.Empty, EventArgs.Empty) instead. Another would be to throw an exception rather than return null.
I'm just starting to learn F#, but it appears there are far fewer null objects in that environment. So maybe you don't really have to have a lot of null references floating around?
Am I barking up the wrong tree here?
Passing non-null just to avoid a NullReferenceException is trading a straightforward, easy-to-solve problem ("it blows up because it's null") for a much more subtle, hard-to-debug problem ("something several calls down the stack is not behaving as expected because much earlier it got some object which has no meaningful information but isn't null").
NullReferenceException is a wonderful thing! It fails hard, loud, fast, and it's almost always quick and easy to identify and fix. It's my favorite exception, because I know when I see it, my task is only going to take about 2 minutes. Contrast this with a confusing QA or customer report trying to describe strange behavior that has to be reproduced and traced back to the origin. Yuck.
It all comes down to what you, as a method or piece of code, can reasonably infer about the code which called you. If you are handed a null reference, and you can reasonably infer what the caller might have meant by null (maybe an empty collection, for example?) then you should definitely just deal with the nulls. However, if you can't reasonably infer what to do with a null, or what the caller means by null (for example, the calling code is telling you to open a file and gives the location as null), you should throw an ArgumentNullException.
By maintaining proper coding practices like this at every "gateway" point (logical bounds of functionality in your code), NullReferenceExceptions should be much more rare.
I tend to be dubious of code with lots of NULLs, and try to refactor them away where possible with exceptions, empty collections, Java Optionals, and so on.
The "Introduce Null Object" pattern in Martin Fowler's Refactoring (page 260) may also be helpful. A Null Object responds to all the methods a real object would, but in a way that "does the right thing". So rather than always check an Order to see if order.getDiscountPolicy() is NULL, make sure the Order has a NullDiscountPolicy in these cases. This streamlines the control logic.
Null gets my vote. Then again, I'm of the 'fail-fast' mindset.
String.IsNullOrEmpty(...) is very helpful too, I guess it catches either situation: null or empty strings. You could write a similar function for all your classes you're passing around.
If you are writing code that returns null as an error condition, then don't: generally, you should throw an exception instead - far harder to miss.
If you are consuming code that you fear may return null, then mostly these are boneheaded exceptions: perhaps do some Debug.Assert checks at the caller to sense-check the output during development. You shouldn't really need vast numbers of null checks in your production, but if some 3rd party library returns lots of nulls unpredictably, then sure: do the checks.
In 4.0, you might want to look at code-contracts; this gives you much better control to say "this argument should never be passed in as null", "this function never returns null", etc - and have the system validate those claims during static analysis (i.e. when you build).
The thing about null is that it doesn't come with meaning. It is merely the absence of an object.
So, if you really mean an empty string/collection/whatever, always return the relevant object and never null. If the language in question allows you to specify that, do so.
In the case where you want to return something that means not a value specifiable with the static type, then you have a number of options. Returning null is one answer, but without a meaning is a little dangerous. Throwing an exception may actually be what you mean. You might want to extend the type with special cases (probably with polymorphism, that is to say the Special Case Pattern (a special case of which is the Null Object Pattern)). You might want to wrap the return value in an type with more meaning. Or you might want to pass in a callback object. There usually are many choices.
I'd say it depends. For a method returning a single object, I'd generally return null. For a method returning a collection, I'd generally return an empty collection (non-null). These are more along the lines of guidelines than rules, though.
If you are serious about wanting to program in a "nullless" environment, consider using extension methods more often, they are immune to NullReferenceExceptions and at least "pretend" that null isn't there anymore:
    // Must be declared inside a static class to be usable as an extension method.
    public static string GetExtension(this string s)
    {
        return (new FileInfo(s ?? "")).Extension;
    }
which can be called as:
    // this code will never throw, not even when somePath becomes null
    string somePath = GetDataFromElseWhereCanBeNull();
    textBoxExtension.Text = somePath.GetExtension();
I know, this is only convenience and many people correctly consider it violation of OO principles (though the "founder" of OO, Bertrand Meyer, considers null evil and completely banished it from his OO design, which is applied to the Eiffel language, but that's another story). EDIT: Dan mentions that Bill Wagner (More Effective C#) considers it bad practice and he's right. Ever considered the IsNull extension method ;-) ?
To make your code more readable, another hint may be in place: use the null-coalescing operator more often to designate a default when an object is null:
    // load settings
    WriteSettings(currentUser.Settings ?? new Settings());

    // example of some readonly property
    public string DisplayName
    {
        get
        {
            return (currentUser ?? User.Guest).DisplayName;
        }
    }
None of these take the occasional check for null away (and ?? is nothing more than a hidden if-branch). I prefer as little null in my code as possible, simply because I believe it makes the code more readable. When my code gets cluttered with if-statements for null, I know there's something wrong in the design and I refactor. I suggest anybody to do the same, but I know that opinions vary wildly on the matter.
(Update) Comparison with exceptions
Not mentioned in the discussion so far is the similarity with exception handling. When you find yourself ubiquitously ignoring null whenever you consider it's in your way, it is basically the same as writing:
    try
    {
        //...code here...
    }
    catch (Exception) {}
which has the effect of removing any trace of the exceptions only to find it raises unrelated exceptions much later in the code. Though I consider it good to avoid using null, as mentioned before in this thread, having null for exceptional cases is good. Just don't hide them in null-ignore-blocks, it will end up having the same effect as the catch-all-exceptions blocks.
For the exception protagonists they usually stem from transactional programming and strong exception safety guarantees or blind guidelines. In any decent complexity, ie. async workflow, I/O and especially networking code they are simply inappropriate. The reason why you see Google style docs on the matter in C++, as well as all good async code 'not enforcing it' (think your favourite managed pools as well).
There is more to it and while it might look like a simplification, it really is that simple. For one you will get a lot of exceptions in something that wasn't designed for heavy exception use.. anyway I digress, read upon on this from the world's top library designers, the usual place is boost (just don't mix it up with the other camp in boost that loves exceptions, because they had to write music software :-).
In your instance, and this is not Fowler's expertise, an efficient 'empty object' idiom is only possible in C++ due to available casting mechanism (perhaps but certainly not always by means of dominance ).. On the other hand, in your null type you are capable of throwing exceptions and doing whatever you want while preserving the clean call site and structure of code.
In C# your choice can be a single instance of a type that is either good or malformed; as such it is capable of throwing exceptions or simply running as is. So it might or might not violate other contracts ( up to you what you think is better depending on the quality of code you're facing ).
In the end, it does clean up call sites, but don't forget you will face a clash with many libraries (and especially returns from containers/Dictionaries, end iterators spring to mind, and any other 'interfacing' code to the outside world ). Plus null-as-value checks are extremely optimised pieces of machine code, something to keep in mind but I will agree any day wild pointer usage without understanding constness, references and more is going to lead to different kind of mutability, aliasing and perf problems.
To add, there is no silver bullet, and crashing on null reference or using a null reference in managed space, or throwing and not handling an exception is an identical problem, despite what managed and exception world will try to sell you. Any decent environment offers a protection from those (heck you can install any filter on any OS you want, what else do you think VMs do), and there are so many other attack vectors that this one has been overhammered to pieces. Enter x86 verification from Google yet again, their own way of doing much faster and better 'IL', 'dynamic' friendly code etc..
Go with your instinct on this, weigh the pros and cons and localise the effects.. in the future your compiler will optimise all that checking anyway, and far more efficiently than any runtime or compile-time human method (but not as easily for cross-module interaction).
I try to avoid returning null from a method wherever possible. There are generally two kinds of situations - when null result would be legal, and when it should never happen.
In the first case, when no result is legal, there are several solutions available to avoid null results and null checks that are associated with them: Null Object pattern and Special Case pattern are there to return substitute objects that do nothing, or do some specific thing under specific circumstances.
If it is legal to return no object, but still there are no suitable substitutes in terms of Null Object or Special Case, then I typically use the Option functional type - I can then return an empty option when there is no legal result. It is then up to the client to see what is the best way to deal with empty option.
Finally, if it is not legal to have any object returned from a method, simply because the method cannot produce its result if something is missing, then I choose to throw an exception and cut further execution.
How are empty objects better than null objects? You're just renaming the symptom. The problem is that the contracts for your functions are too loosely defined "this function might return something useful, or it might return a dummy value" (where the dummy value might be null, an "empty object", or a magic constant like -1.) But no matter how you express this dummy value, callers still have to check for it before they use the return value.
If you want to clean up your code, the solution should be to narrow down the function so that it doesn't return a dummy value in the first place.
If you have a function which might return a value, or might return nothing, then pointers are a common (and valid) way to express this. But often, your code can be refactored so that this uncertainty is removed. If you can guarantee that a function returns something meaningful, then callers can rely on it returning something meaningful, and then they don't have to check the return value.
You can't always return an empty object, because 'empty' is not always defined. For example what does it mean for an int, float or bool to be empty?
Returning a NULL pointer is not necessarily a bad practice, but I think it's a better practice to return a (const) reference (where it makes sense to do so of course).
And recently I've often used a Fallible class:
    Fallible<std::string> theName = obj.getName();
    if (theName)
    {
        // ...
    }
There are various implementations available for such a class (check Google Code Search), I also created my own.

Should I check whether particular key is present in Dictionary before accessing it?

Should I check whether particular key is present in Dictionary if I am sure it will be added in dictionary by the time I reach the code to access it?
There are two ways I can access the value in dictionary
1. checking ContainsKey method. If it returns true then I access using indexer [key] of dictionary object.
or
2. TryGetValue which will return true or false as well as return value through out parameter.
(2nd will perform better than 1st if I want to get value. Benchmark.)
However if I am sure that the function which is accessing global dictionary will surely have the key then should I still check using TryGetValue or without checking I should use indexer[].
Or I should never assume that and always check?
Use the indexer if the key is meant to be present - if it's not present, it will throw an appropriate exception, which is the right behaviour if the absence of the key indicates a bug.
If it's valid for the key not to be present, use TryGetValue instead and react accordingly.
(Also apply Marc's advice about accessing a shared dictionary safely.)
If the dictionary is global (static/shared), you should be synchronizing access to it (this is important; otherwise you can corrupt it).
Even if your thread is only reading data, it needs to respect the locks of other threads that might be editing it.
However; if you are sure that the item is there, the indexer should be fine:
    Foo foo;
    lock(syncLock) {
        foo = data[key];
    }
    // use foo...
Otherwise, a useful pattern is to check and add in the same lock:
    Foo foo;
    lock(syncLock) {
        if(!data.TryGetValue(key, out foo)) {
            foo = new Foo(key);
            data.Add(key, foo);
        }
    }
    // use foo...
Here we only add the item if it wasn't there... but inside the same lock.
Always check. Never say never. I assume your application is not that performance critical that you will have to save the checking time.
TIP: If you decide not to check, at least use Debug.Assert( dict.ContainsKey( key ) ); This will only be compiled when in Debug mode, your release build will not contain it. That way you could at least have the check when debugging.
Still: if possible, just check it :-)
EDIT: There have been some misconceptions here. By "always check" I did not only mean using an if somewhere. Handling an exception properly was also included in this. So, to be more precise: never take anything for granted, expect the unexpected. Check by ContainsKey or handle the potential exception, but do SOMETHING in case the element is not contained.
Personally I'd check the key is there, regardless of whether or not you are SURE it is, some may say this check is superfluous and that dictionary will throw an exception which you can catch, but imho you should not rely on that exception, you should check yourself and then either throw your own exception which means something or a result object with a success flag and reason inside... the failure mechanism is really implementation dependent.
Surely the answer is "it all depends on the situation". You need to balance the risk that the key will be missing from the dictionary (low for small systems where there is limited access to the data, where you can rely on the order things are done, larger for larger systems, multiple programmers accessing the same data, especially with read/write/delete access, where threads are involved and order cannot be guaranteed or where data originates externally and reading can fail) with the impact of the risk (safety-critical systems, commercial releases or systems that a business will rely on compared with something made for fun, for a one-off job and/or for your use only) and with any requirements for speed, size and laziness.
If I were making a system to control railway signalling I would want to be safe against all possible and impossible errors, and safe from errors in the error-handling and so on (Murphy's 2nd law: "what can't go wrong will go wrong".) If I'm chucking stuff together for fun, even if size and speed are not an issue I will be MUCH more relaxed about stuff like this - I will want to get to the fun stuff.
Of course, sometimes this is the fun stuff in itself.
TryGetValue is the same code as indexing it by key, except the former returns a default value (for the out parameter) where the latter throws an exception. Use TryGetValue and you'll get consistent checks with absolutely no performance loss.
Edit: As Jon said, if you know it will always have the key, then you can index it and let it throw the appropriate exception. However, if you can provide better context information by throwing it yourself with a detailed message, that would be preferable.
There's 2 trains of thought on this from a performance point of view.
1) Avoid exceptions where possible, as exceptions are expensive - i.e. check before you try to retrieve a specific key from the dictionary, whether it exists or not. Better approach in my opinion if there's a fair chance it may not exist. This would prevent fairly common exceptions.
2) If you're confident the item will exist in there 99% of the time, then don't check for its existence before accessing it. The 1% of times when it doesn't exist, an exception will be thrown but you've saved time for the other 99% of the time by not checking.
What I'm saying is, optimise for the majority if there is a clear one. If there is any real degree in uncertainty about an item existing, then check before retrieving.
If you know that the dictionary normally contains the key, you don't have to check for it before accessing it.
If something would be wrong and the dictionary doesn't contain the items that it should, you can let the dictionary throw the exception. The only reason for checking for the key first would be if you want to take care of this problem situation yourself without getting the exception. Letting the dictionary throw the exception and catch that is however a perfectly valid way of handling the situation.
I think Marc and Jon have it (as usual) pretty sewn up. Since you also mention performance in your question it might be worth considering how you lock the dictionary.
The straightforward lock serialises all read access which may not be desirable if read is massively frequent and writes are relatively few. In that case using a ReaderWriterLockSlim might be better. The downside is the code is a little more complex and writes are slightly slower.
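The answers in this thread combine three ideas: use the indexer when a missing key is a bug, use TryGetValue when absence is legitimate, and guard a shared dictionary so the check and the add happen under one lock, optionally with ReaderWriterLockSlim so readers do not serialise each other. The following is an added example, not code from any answer; `SharedLookup`, its method names and the sample keys are hypothetical.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Added illustration: a shared dictionary guarded by ReaderWriterLockSlim.
// The class and member names are assumptions made for this example.
public sealed class SharedLookup<TKey, TValue> : IDisposable
{
    private readonly Dictionary<TKey, TValue> _data = new Dictionary<TKey, TValue>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // For keys that must exist: a missing key is a bug, so let the indexer
    // throw KeyNotFoundException instead of hiding the problem.
    public TValue GetRequired(TKey key)
    {
        _lock.EnterReadLock();
        try { return _data[key]; }
        finally { _lock.ExitReadLock(); }
    }

    // For keys that may legitimately be absent: the caller decides what to do.
    public bool TryGet(TKey key, out TValue value)
    {
        _lock.EnterReadLock();
        try { return _data.TryGetValue(key, out value); }
        finally { _lock.ExitReadLock(); }
    }

    // Check and add under one upgradeable lock, so no other thread can insert
    // the same key between the lookup and the Add.
    public TValue GetOrAdd(TKey key, Func<TKey, TValue> factory)
    {
        _lock.EnterUpgradeableReadLock();
        try
        {
            TValue existing;
            if (_data.TryGetValue(key, out existing))
            {
                return existing;
            }

            _lock.EnterWriteLock();
            try
            {
                TValue created = factory(key);
                _data.Add(key, created);
                return created;
            }
            finally { _lock.ExitWriteLock(); }
        }
        finally { _lock.ExitUpgradeableReadLock(); }
    }

    public void Dispose()
    {
        _lock.Dispose();
    }
}

public static class Program
{
    public static void Main()
    {
        using (var lookup = new SharedLookup<string, int>())
        {
            int factoryCalls = 0;

            // Eight concurrent callers race to create the same entry.
            Parallel.For(0, 8, i =>
            {
                lookup.GetOrAdd("config", k =>
                {
                    Interlocked.Increment(ref factoryCalls);
                    return k.Length;
                });
            });
            Console.WriteLine("Factory calls: " + factoryCalls);
            Console.WriteLine("Required key: " + lookup.GetRequired("config"));

            int value;
            Console.WriteLine("Optional key present: " + lookup.TryGet("missing", out value));

            try
            {
                lookup.GetRequired("missing"); // absence here indicates a bug
            }
            catch (KeyNotFoundException)
            {
                Console.WriteLine("Missing required key surfaced as an exception.");
            }
        }
    }
}
```

Only one thread at a time can hold the upgradeable lock, so concurrent GetOrAdd calls serialise with each other while plain readers continue to run in parallel with the lookup.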

Is it true I should not do "long running" things in a property accessor?

And if so, why?
and what constitutes "long running"?
Doing magic in a property accessor seems like my prerogative as a class designer. I always thought that is why the designers of C# put those things in there - so I could do what I want.
Of course it's good practice to minimize surprises for users of a class, and so embedding truly long running things - eg, a 10-minute monte carlo analysis - in a method makes sense.
But suppose a prop accessor requires a db read. I already have the db connection open. Would db access code be "acceptable", within the normal expectations, in a property accessor?
Like you mentioned, it's a surprise for the user of the class. People are used to being able to do things like this with properties (contrived example follows:)
foreach (var item in bunchOfItems)
    foreach (var slot in someCollection)
        slot.Value = item.Value;
This looks very natural, but if item.Value actually is hitting the database every time you access it, it would be a minor disaster, and should be written in a fashion equivalent to this:
foreach (var item in bunchOfItems)
{
    var temp = item.Value;
    foreach (var slot in someCollection)
        slot.Value = temp;
}
Please help steer people using your code away from hidden dangers like this, and put slow things in methods so people know that they're slow.
There are some exceptions, of course. Lazy-loading is fine as long as the lazy load isn't going to take some insanely long amount of time, and sometimes making things properties is really useful for reflection- and data-binding-related reasons, so maybe you'll want to bend this rule. But there's not much sense in violating the convention and violating people's expectations without some specific reason for doing so.
In addition to the good answers already posted, I'll add that the debugger automatically displays the values of properties when you inspect an instance of a class. Do you really want to be debugging your code and have database fetches happening in the debugger every time you inspect your class? Be nice to the future maintainers of your code and don't do that.
Also, this question is extensively discussed in the Framework Design Guidelines; consider picking up a copy.
A db read in a property accessor would be fine - that's actually the whole point of lazy-loading. I think the most important thing would be to document it well so that users of the class understand that there might be a performance hit when accessing that property.
You can do whatever you want, but you should keep the consumers of your API in mind. Accessors and mutators (getters and setters) are expected to be very light weight. With that expectation, developers consuming your API might make frequent and chatty calls to these properties. If you are consuming external resources in your implementation, there might be an unexpected bottleneck.
For consistency's sake, it's good to stick with convention for public APIs. If your implementations will be exclusively private, then there's probably no harm (other than an inconsistent approach to solving problems privately versus publicly).
It is just a "good practice" not to make property accessors that take a long time to execute.
That's because properties look like fields to the caller, and hence the caller (a user of your API, that is) usually assumes there is nothing more than just a "return smth;"
If you really need some "action" behind the scenes, consider creating a method for that...
I don't see what the problem is with that, as long as you provide XML documentation so that the Intellisense notifies the object's consumer of what they're getting themselves into.
I think this is one of those situations where there is no one right answer. My motto is "Saying always is almost always wrong." You should do what makes the most sense in any given situation without regard to broad generalizations.
A database access in a property getter is fine, but try to limit the amount of times the database is hit through caching the value.
There are many times that people use properties in loops without thinking about the performance, so you have to anticipate this use. Programmers don't always store the value of a property when they are going to use it many times.
Cache the value returned from the database in a private variable, if it is feasible for this piece of data. This way the accesses are usually very quick.
This isn't directly related to your question, but have you considered going with a load once approach in combination with a refresh parameter?
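class Example
{
    private bool userNameLoaded = false;
    private string userName = "";

    // Pass refresh = true to force a reload on this call.
    public string UserName(bool refresh)
    {
        // Only clear the flag on refresh; never mark the value as loaded
        // here, or a first call with refresh = false would skip the load.
        if (refresh)
            userNameLoaded = false;
        return UserName();
    }

    public string UserName()
    {
        if (!userNameLoaded)
        {
            /*
            userName=SomeDBMethod();
            */
            userNameLoaded = true;
        }
        return userName;
    }
}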

Business Objects, Validation And Exceptions

I’ve been reading a few questions and answers regarding exceptions and their use. Seems to be a strong opinion that exceptions should be raised only for exceptional, unhandled cases. So that led me to wondering how validation works with business objects.
Let’s say I have a business object with getters/setters for the properties on the object. Let’s say I need to validate that the value is between 10 and 20. This is a business rule so it belongs in my business object. So that seems to imply to me that the validation code goes in my setter. Now I have my UI databound to the properties of the data object. The user enters 5, so the rule needs to fail and the user is not allowed to move out of the textbox. The UI is databound to the property so the setter is going to be called, rule checked and failed. If I raised an exception from my business object to say the rule failed, the UI would pick that up. But that seems to go against the preferred usage for exceptions. Given that it’s a setter, you aren’t really going to have a ‘result’ for the setter. If I set another flag on the object then that would imply the UI has to check that flag after each UI interaction.
So how should the validation work?
Edit: I've probably used an over-simplified example here. Something like the range check above could be handled easily by the UI but what if the validation was more complicated, e.g. the business object calculates a number based on the input and if that calculated number is out of range it should be rejected. This is more complicated logic that should not be in the UI.
There is also the consideration of further data entered based on a field already entered, e.g. I have to enter an item on the order to get certain information like stock on hand, current cost, etc. The user may require this information to make decisions on further entry (like how many units to order) or it may be required in order for further validation to be done. Should a user be able to enter other fields if the item isn't valid? What would be the point?
You want to delve a bit in the remarkable work of Paul Stovell concerning data validation. He summed up his ideas at one time in this article. I happen to share his point of view on the matter, which I implemented in my own libraries.
Here are, in Paul's words, the cons to throwing exceptions in the setters (based on a sample where a Name property should not be empty) :
There may be times where you actually need to have an empty name. For example, as the default value for a "Create an account" form.
If you're relying on this to validate any data before saving, you'll miss the cases where the data is already invalid. By that, I mean, if you load an account from the database with an empty name and don't change it, you might not ever know it was invalid.
If you aren't using data binding, you have to write a lot of code with try/catch blocks to show these errors to the user. Trying to show errors on the form as the user is filling it out becomes very difficult.
I don't like throwing exceptions for non-exceptional things. A user setting the name of an account to "Supercalafragilisticexpialadocious" isn't an exception, it's an error. This is, of course, a personal thing.
It makes it very difficult to get a list of all the rules that have been broken. For example, on some websites, you'll see validation messages such as "Name must be entered. Address must be entered. Email must be entered". To display that, you're going to need a lot of try/catch blocks.
And here are basic rules for an alternative solution :
There is nothing wrong with having an invalid business object, so long as you don't try to persist it.
Any and all broken rules should be retrievable from the business object, so that data binding, as well as your own code, can see if there are errors and handle them appropriately.
Assuming that you have separate validation and persist (i.e. save to database) code, I would do the following:
The UI should perform validation. Don't throw exceptions here. You can alert the user to errors and prevent the record from being saved.
Your database save code should throw invalid argument exceptions for bad data. It makes sense to do it here, since you cannot proceed with the database write at this point. Ideally this should never happen since the UI should prevent the user from saving, but you still need it to ensure database consistency. Also you might be calling this code from something other than the UI (e.g. batch updates) where there is no UI data validation.
I've always been a fan of Rocky Lhotka's approach in the CSLA framework (as mentioned by Charles). In general, whether it's driven by the setter or by calling an explicit Validate method, a collection of BrokenRule objects is maintained internally by the business object. The UI simply needs to check an IsValid method on the object, which in turn checks the number of BrokenRules, and handle it appropriately. Alternatively, you could easily have the Validate method raise an event which the UI could handle (probably the cleaner approach). You can also use the list of BrokenRules to display error messages to the user either in summary form or next to the appropriate field. Although the CSLA framework is written in .NET, the overall approach can be used in any language.
I don't think throwing an Exception is the best idea in this case. I definitely follow the school of thought that says Exceptions should be for exceptional circumstances, which a simple validation error is not. Raising an OnValidationFailed event would be the cleaner choice, in my opinion.
By the way, I have never liked the idea of not letting the user leave a field when it is in an invalid state. There are so many situations where you might need to leave the field temporarily (perhaps to set some other field first) before going back and fixing the invalid field. I think it's just an unnecessary inconvenience.
You might want to move the validation outside of the getters and setters. You could have a function or property called IsValid that would run all the validation rules. It would populate a dictionary or hashtable with all of the "Broken Rules". This dictionary would be exposed to the outside world, and you can use it to populate your error messages.
This is the approach that is taken in CSLA.Net.
Exceptions should not be thrown as a normal part of validation. Validation invoked from within business objects is a last line of defense, and should only happen if the UI fails to check something. As such they can be treated like any other runtime exception.
Note that there's a difference between defining validation rules and applying them. You might want to define (i.e. code or annotate) your business rules in your business logic layer but invoke them from the UI so that they can be handled in a manner appropriate to that particular UI. The manner of handling will vary for different UIs, e.g. form-based web apps vs. ajax web apps. Exception-on-set validation offers very limited options for handling.
Many applications duplicate their validation rules, such as in javascript, domain object constraints and database constraints. Ideally this information will only be defined once, but implementing this can be a challenge and requires lateral thinking.
Perhaps you should look at having both client-side and server-side validation. If anything slips past the client-side validation you can then feel free to throw an exception if your business object would be made invalid.
One approach I've used was to apply custom attributes to business object properties, which described the validation rules. e.g.:
[MinValue(10), MaxValue(20)]
public int Value { get; set; }
The attributes can then be processed and used to automatically create both client-side and server-side validation methods, to avoid the problem of duplicating business logic.
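One way the attribute processing described in the answer above could look, as an added example that is not the poster's code. MinValue and MaxValue are the poster's attribute names; their definitions here, the Reading class and the RangeRules helper are assumptions.

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical definitions for the [MinValue]/[MaxValue] attributes.
[AttributeUsage(AttributeTargets.Property)]
public sealed class MinValueAttribute : Attribute
{
    public int Value { get; private set; }
    public MinValueAttribute(int value) { Value = value; }
}

[AttributeUsage(AttributeTargets.Property)]
public sealed class MaxValueAttribute : Attribute
{
    public int Value { get; private set; }
    public MaxValueAttribute(int value) { Value = value; }
}

public class Reading
{
    [MinValue(10), MaxValue(20)]
    public int Value { get; set; }
}

public static class RangeRules
{
    // Server-side check: the authoritative one, run on every request
    // whether or not the client applied its own hints.
    public static List<string> Validate(object target)
    {
        var errors = new List<string>();
        foreach (PropertyInfo p in target.GetType().GetProperties())
        {
            if (p.PropertyType != typeof(int)) continue;
            int actual = (int)p.GetValue(target, null);
            var min = (MinValueAttribute)Attribute.GetCustomAttribute(p, typeof(MinValueAttribute));
            var max = (MaxValueAttribute)Attribute.GetCustomAttribute(p, typeof(MaxValueAttribute));
            if (min != null && actual < min.Value)
                errors.Add(p.Name + " must be at least " + min.Value + ".");
            if (max != null && actual > max.Value)
                errors.Add(p.Name + " must be at most " + max.Value + ".");
        }
        return errors;
    }

    // Client-side hint generated from the same attributes, so the rule is
    // written once. A client can bypass it, which is why Validate still runs.
    public static string ToHtmlInput(Type type, string propertyName)
    {
        PropertyInfo p = type.GetProperty(propertyName);
        if (p == null) throw new ArgumentException("Unknown property.", "propertyName");
        var min = (MinValueAttribute)Attribute.GetCustomAttribute(p, typeof(MinValueAttribute));
        var max = (MaxValueAttribute)Attribute.GetCustomAttribute(p, typeof(MaxValueAttribute));
        string html = "<input type=\"number\" name=\"" + p.Name + "\"";
        if (min != null) html += " min=\"" + min.Value + "\"";
        if (max != null) html += " max=\"" + max.Value + "\"";
        return html + ">";
    }
}

public static class Program
{
    public static void Main()
    {
        Console.WriteLine(RangeRules.ToHtmlInput(typeof(Reading), "Value"));

        // Constructed input: a value the client-side hint would have rejected.
        var posted = new Reading { Value = 5 };
        foreach (string error in RangeRules.Validate(posted))
            Console.WriteLine(error);
    }
}
```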
I'd definitely advocate both client and server-side validation (or validating at the various layers). This is especially important when communicating across physical tiers or processes, as the cost of throwing exceptions becomes increasingly expensive. Also, the further down the chain you wait for validation, the more time is wasted.
As to whether or not to use exceptions for data validation: I think it's OK to use exceptions in process (though still not preferable), but outside of process, call a method to validate the business object (e.g. before saving) and have the method return the success of the operation along with any validation errors. Errors aren't exceptional.
Microsoft throw exceptions from business objects when validation fails. At least, that's how the Enterprise Library's Validation Application Block works.
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

public class Customer
{
    [StringLengthValidator(0, 20)]
    public string CustomerName;

    public Customer(string customerName)
    {
        this.CustomerName = customerName;
    }
}
Your business objects should throw exceptions for bad inputs, but those exceptions should never be thrown in the course of a normal program run. I know that sounds contradictory, so I shall explain.
Each public method should validate its inputs, and throw "ArgumentException"s when they are incorrect. (And private methods should validate their inputs with "Debug.Assert()"s to ease development, but that's another story.) This rule about validating inputs to public methods (and properties, of course) is true for every layer of the application.
The requirements of the software interface should be spelled out in the interface documentation, of course, and it is the job of the calling code to make sure the arguments are correct and the exceptions will never be thrown, which means the UI needs to validate the inputs before handing them to the business object.
While the rules given above should almost never be broken, sometimes business object validation can be very complex, and that complexity shouldn't be foisted onto the UI. In that case it's good for the BO's interface to allow some leeway in what it accepts and then provide for an explicit Validate(out string[]) predicate to check the properties and give feedback on what needs to be changed. But notice in this case that there are still well-defined interface requirements and no exceptions need ever be thrown (assuming the calling code follows the rules).
Following this latter system, I almost never do early validation on property setters, since that sort of complicates the use of the properties (but in the case given in the question, I might). (As an aside, please don't prevent me from tabbing out of a field just because it has bad data in it. I get claustrophobic when I can't tab around a form! I'll go back and fix it in a minute, I promise! OK, I feel better now, sorry.)
It depends on what sort of validation you will be performing and where. I think that each layer of the application can be easily protected from bad data and it's too easy to do for it not to be worth it.
Consider a multi-tiered application and the validation requirements/facilities of each layer. The middle layer, Object, is the one that seems to be up for debate here.
Database
protects itself from an invalid state with column constraints and referential integrity, which will cause the application's database code to throw exceptions
Object
?
ASP.NET/Windows Forms
protects the form's state (not the object) using validator routines and/or controls without using exceptions (winforms does not ship with validators, but there's an excellent series at msdn describing how to implement them)
Say you have a table with a list of hotel rooms, and each row has a column for the number of beds called 'beds'. The most sensible data type for that column is an unsigned small integer*. You also have a plain ole object with an Int16* property called 'Beds'. The issue is that you can stick -4555 into an Int16, but when you go to persist the data to a database you're going to get an Exception. Which is fine - my database shouldn't be allowed to say that a hotel room has less than zero beds, because a hotel room can't have less than zero beds.
* If your database can represent it, but let's assume it can
* I know you can just use a ushort in C#, but for the purpose of this example, let's assume you can't
There's some confusion as to whether objects should represent your business entity, or whether they should represent the state of your form. Certainly in ASP.NET and Windows Forms, the form is perfectly capable of handling and validating its own state. If you've got a text box on an ASP.NET form that is going to be used to populate that same Int16 field, you've probably put a RangeValidator control on your page which tests the input before it gets assigned to your object. It prevents you from entering a value less than zero, and probably prevents you from entering a value greater than, say, 30, which hopefully would be enough to cater for the worst flea-infested hostel you can imagine. On postback, you would probably be checking the IsValid property of the page before building your object, thereby preventing your object from ever representing less than zero beds and preventing your setter from ever being called with a value it shouldn't hold.
But your object is still capable of representing less than zero beds, and again, if you were using the object in a scenario not involving the layers which have validation integrated into them (your form and your DB) you're outta luck.
Why would you ever be in this scenario? It must be a pretty exceptional set of circumstances! Your setter therefore needs to throw an exception when it receives invalid data. It should never be thrown, but it could be. You could be writing a Windows Form to manage the object to replace the ASP.NET form and forget to validate the range before populating the object. You could be using the object in a scheduled task where there is no user interaction at all, and which saves to a different, but related, area of the database rather than the table which the object maps to. In the latter scenario, your object can enter a state where it is invalid, but you won't know until the results of other operations start to be affected by the invalid value. If you're checking for them and throwing exceptions, that is.
I tend to believe business objects should throw exceptions when passed values that violate its business rules. It however seems that winforms 2.0 data binding architecture assumes the opposite and so most people are rail-roaded into supporting this architecture.
I agree with shabbyrobe's last answer that business objects should be built to be usable and to work correctly in multiple environments and not just the winforms environment, e.g., the business object could be used in a SOA type web service, a command line interface, asp.net, etc. The object should behave correctly and protect itself from invalid data in all these cases.
An aspect that is often overlooked is also what happens in managing the collaborations between objects in 1-1, 1-n or n-n relationships: should these also accept the addition of invalid collaborators and just maintain an invalid state flag which should be checked, or should they actively refuse to add invalid collaborations? I have to admit that I'm heavily influenced by the Streamlined Object Modeling (SOM) approach of Jill Nicola et al. But what else is logical.
The next thing is how to work with windows forms. I'm looking at creating a UI wrapper for the business objects for these scenarios.
As Paul Stovell's article mentioned, you can implement error-free validation in your business objects by implementing the IDataErrorInfo interface. Doing so will allow user error notification by WinForm's ErrorProvider and WPF's binding with validation rules. The logic to validate your object's properties is stored in one method, instead of in each of your property getters, and you do not necessarily have to resort to frameworks like CSLA or Validation Application Block.
As far as stopping the user from changing focus out of the textbox is concerned:
First of all, this is usually not the best practice. A user may want to fill out the form out of order, or, if a validation rule is dependent on the results of multiple controls, the user may have to fill in a dummy value just to get out of one control to set another control. That said, this can be implemented by setting the Form's AllowValidate property to its default, EnableAllowFocusChange and subscribing to the Control.Validating event:
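private void textBox1_Validating(object sender, CancelEventArgs e)
{
    // Flag the control and keep focus on it while the text box is empty.
    if (textBox1.Text == String.Empty)
    {
        errorProvider1.SetError(sender as Control, "Can not be empty");
        e.Cancel = true;
    }
    else
    {
        // Clear any earlier error once the value is acceptable.
        errorProvider1.SetError(sender as Control, "");
    }
}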
Using rules stored in the business object for this validation is a little more tricky since the Validating event is called before the focus changes and the data bound business object is updated.
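The broken-rules approach that several answers above describe (Paul Stovell's two rules, the CSLA-style IsValid check, and IDataErrorInfo), combined with the save-time guard, as an added example that is not code from any of the posts. OrderLine, OrderLineStore and the ItemCode rule are assumptions; the 10 to 20 range is the one from the question.

```csharp
using System;
using System.Collections.Generic;
using System.ComponentModel;

// Hypothetical business object: setters never throw, and the object may be
// invalid in memory; every broken rule can be retrieved from it.
public class OrderLine : IDataErrorInfo
{
    public string ItemCode { get; set; }
    public int Quantity { get; set; }

    // Rules are re-evaluated on demand, so data that was loaded in an
    // invalid state is reported too, not only data changed through a setter.
    public IDictionary<string, string> GetBrokenRules()
    {
        var broken = new Dictionary<string, string>();
        if (string.IsNullOrWhiteSpace(ItemCode))
            broken["ItemCode"] = "Item code must be entered.";
        if (Quantity < 10 || Quantity > 20)
            broken["Quantity"] = "Quantity must be between 10 and 20.";
        return broken;
    }

    public bool IsValid { get { return GetBrokenRules().Count == 0; } }

    // IDataErrorInfo members, read by WinForms ErrorProvider and WPF bindings.
    public string Error
    {
        get { return string.Join(" ", GetBrokenRules().Values); }
    }

    public string this[string columnName]
    {
        get
        {
            string message;
            return GetBrokenRules().TryGetValue(columnName, out message)
                ? message
                : string.Empty;
        }
    }
}

// Hypothetical persistence boundary. It re-checks the rules itself because
// callers such as batch jobs have no UI validation in front of them.
public static class OrderLineStore
{
    private static readonly List<OrderLine> saved = new List<OrderLine>();

    public static void Save(OrderLine line)
    {
        if (line == null) throw new ArgumentNullException("line");
        if (!line.IsValid)
            throw new ArgumentException("Refusing to persist: " + line.Error, "line");
        saved.Add(line); // stands in for the database write
    }
}

public static class Program
{
    public static void Main()
    {
        // Constructed input: the user types 5 and leaves the item blank.
        var line = new OrderLine { Quantity = 5 };
        Console.WriteLine("IsValid: " + line.IsValid);
        foreach (var rule in line.GetBrokenRules())
            Console.WriteLine(rule.Key + ": " + rule.Value);

        try { OrderLineStore.Save(line); }
        catch (ArgumentException ex) { Console.WriteLine(ex.Message); }

        line.ItemCode = "A-100";
        line.Quantity = 12;
        OrderLineStore.Save(line);
        Console.WriteLine("Saved once valid: " + line.IsValid);
    }
}
```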
You might like to consider the approach taken by the Spring framework. If you're using Java (or .NET), you can use Spring as-is, but even if you're not, you could still use that pattern; you'd just have to write your own implementation of it.
Throwing an exception in your case is fine. You can consider the case a true exception because something is trying to set an integer to a string (for example). The business rules' lack of knowledge of your views means that they should consider this case exceptional and return that back to the view.
Whether or not you validate your input values before you send them through to the business layer is up to you, I think that as long as you follow the same standard throughout your application then you will end up with clean and readable code.
You could use the spring framework as specified above, just be careful as much of the linked document was indicating writing code that is not strongly typed, I.E. you may get errors at run time that you could not pick up at compile time. This is something I try to avoid as much as possible.
The way we do it here currently is that we take all the input values from the screen, bind them to a data model object and throw an exception if a value is in error.
In my experience, validation rules are seldom universal across all screens/forms/processes in an application. Scenarios like this are common: on the add page, it may be ok for a Person object not to have a last name, but on the edit page it must have a last name. That being the case I've come to believe that validation should happen outside of an object, or the rules should be injected into the object so the rules can change given a context. Valid/Invalid should be an explicit state of the object after validation or one that can be derived by checking a collection for failed rules. A failed business rule is not an exception IMHO.
Have you considered raising an event in the setter if the data is invalid? That would avoid the problem of throwing an exception and would eliminate the need to explicitly check the object for an "invalid" flag. You could even pass an argument indicating which field failed validation, to make it more reusable.
The handler for the event should be able to take care of putting focus back onto the appropriate control if needed, and it could contain any code needed to notify the user of the error. Also, you could simply decline to hook up the event handler and be free to ignore the validation failure if needed.
In my opinion this is an example where throwing an exception is okay. Your property probably does not have any context by which to correct the problem, as such an exception is in order and the calling code should handle the situation, if possible.
If the input goes beyond the business rule implemented by the business object, I'd say it's a case not handled by the business object. Therefore I'd throw an exception. Even though the setter would "handle" a 5 in your example, the business object won't.
For more complex combinations of input, a validation method is required though, or else you'll end up with quite complex validations scattered about all over the place.
In my opinion you'll have to decide which way to go depending on the complexity of the allowed/disallowed input.
I think it depends on how much your business model is important. If you want to go the DDD way, your model is the most important thing. Therefore, you want it to be in a valid state at all times.
In my opinion, most people are trying to do too much (communicate with the views, persist to the database, etc.) with the domain objects but sometimes you need more layers and a better separation of concerns i.e., one or more View Models. Then you can apply validation without exceptions on your View Model (the validation could be different for different contexts e.g., web services/web site/etc.) and keep exception validations inside your business model (to keep the model from being corrupted). You would need one (or more) Application Service layer to map your View Model with your Business Model. The business objects should not be polluted with validation attributes often related to specific frameworks e.g., NHibernate Validator.
