using the db to prevent errors in a UI presentation - c#

I am going through this MSDN article by noted DDD expert Udi Dahan, where he makes a great observation that he said took him years to realize: "Bringing all e-mail addresses into memory would probably get you locked up by the performance police. Even having the domain model call some service, which calls the database, to see if the e-mail address is there is unnecessary. A unique constraint in the database would suffice."
In a LOB presentation that captures some add or edit scenario, you wouldn't normally enable the Save action until all edits were considered valid, so the first trade-off of the approach above is that you have to enable Save anyway and be prepared to notify the user if the uniqueness constraint is violated. But how best to do that, say with NHibernate?
I figure it needs to follow the lines of the pseudo-code below. Does anyone do something along these lines now?
Cheers,
Berryl
try
{
    // attempt the save; assumes an open NHibernate session and transaction
    session.SaveOrUpdate(entity);
    transaction.Commit();
}
catch (GenericADOException ex)
{
    // e.g. "Abort due to constraint violation\r\ncolumn {0} is not unique", columnName
    // (1) determine which db column violated uniqueness
    // (2) potentially map the column name to something meaningful to the user
    // (3) throw something that can be translated into a BrokenRule for the UI presentation
    // (4) reset the NHibernate session
}

The pessimistic approach is to check for uniqueness before saving; the optimistic approach is to attempt the save and handle the exception. The biggest problem with the optimistic approach is that you have to be able to parse the exception returned by the database to know that it's a specific unique constraint violation rather than any of the myriad other things that can go wrong.
For this reason, it's much easier to check uniqueness before saving. It's a trivial database call to make this check: select 1 where email = 'newuser@somewhere.com'. It's also a better user experience to notify the user that the value is a duplicate (perhaps they already registered with the site?) before making them fill out the rest of the form and click Save.
The unique constraint should definitely be in place, but the UI should check that the address is unique at the time it is entered on the form.
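For the NHibernate case in the question, a minimal sketch of that pre-save check might look like this (the User entity, its Email property, and the query helper are illustrative, not from the original post):
using System.Linq;
using NHibernate;
using NHibernate.Linq;
public class User
{
    public virtual int Id { get; set; }
    public virtual string Email { get; set; }
}
public static class UserQueries
{
    // Pessimistic check: ask the database whether the address is already taken
    // before enabling Save. The unique constraint in the database stays in place
    // as the final guard against a race between the check and the insert.
    public static bool IsEmailTaken(ISession session, string email)
    {
        return session.Query<User>().Any(u => u.Email == email);
    }
}
The Save button can stay enabled; if the check passes but another user grabs the address between the check and the commit, the database's unique constraint still catches it, which is where the catch block from the question comes in.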


Restful API thread clashing

I currently have a shopping cart API that adds an item to the table when it doesn't exist, and then increases the qty column each time the same item is added again:
var exist = _context.Carts.Any(a => a.CartID == dto.CartSesID && a.SweetID == dto.SweetID);
if (!exist)
{
    // Create a new cart item if no cart item exists
    var cartItem = new Cart
    {
        SweetID = dto.SweetID,
        CartID = dto.CartSesID,
        Qty = qty,
        DateCreated = DateTime.Now
    };
    _context.Carts.Add(cartItem);
}
else
{
    var cartItem = _context.Carts.FirstOrDefault(a => a.CartID == dto.CartSesID && a.SweetID == dto.SweetID);
    // If the item does exist in the cart, adjust the quantity
    if (type == "plus")
        cartItem.Qty = cartItem.Qty + qty;
    if (type == "minus")
        cartItem.Qty = cartItem.Qty - qty;
    if (cartItem.Qty == 0)
        _context.Carts.Remove(cartItem);
}
// Save changes
_context.SaveChanges();
The problem is that the if (!exist) check seems to think the item doesn't exist when the button is clicked multiple times too fast (maybe one request hasn't finished when the next has started?), resulting in the same item being inserted as several rows instead of a single row with an incremented quantity.
Does anyone know an ideal fix?
You have a race condition here. When two requests follow one another rapidly, the first request may not have committed its changes to the DB before the second one queries for the existence of the item in question.
You need to apply some concurrency control to resolve this. Basically there are two ways to go:
1. Serializing the changes made to the DB. Again, two main options:
1.1. most RDBMSs support the serializable isolation level for transactions, or
1.2. you can use some locking mechanism. This can take place at the DB level (e.g. table locking) or the application level (.NET locking constructs), depending on your application architecture.
2. You can apply a trial-and-error (or, to be more precise, trial-and-retry-on-error) approach, aka optimistic concurrency control.
Obviously, serializing has a negative impact on performance (especially option 1.1), so usually optimistic concurrency control is preferred and the other options are reserved for special cases.
Luckily, EF has built-in support for optimistic concurrency handling. All the details are discussed in this MSDN article.
In this particular case you have an even simpler option. You need to define a compound unique constraint on the (CartID, SweetID) fields. Doing this guarantees that no duplicates can be inserted into the table. You get an exception when such an attempt is detected, and by catching it you can handle the situation according to your requirements, e.g. you can initiate an update instead (but keep in mind that even in this case you need optimistic concurrency checking to make the process absolutely fail-safe!).
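A minimal sketch of that, assuming EF6 against SQL Server; the index name, string length, and key property are illustrative, and the Cart shape is taken from the question:
using System;
using System.ComponentModel.DataAnnotations;
using System.ComponentModel.DataAnnotations.Schema;
public class Cart
{
    public int Id { get; set; }
    // Compound unique index over (CartID, SweetID): the database now rejects a
    // duplicate row even when two requests race past the Any(...) existence check.
    [MaxLength(128)]
    [Index("IX_Cart_CartID_SweetID", 1, IsUnique = true)]
    public string CartID { get; set; }
    [Index("IX_Cart_CartID_SweetID", 2, IsUnique = true)]
    public int SweetID { get; set; }
    public int Qty { get; set; }
    public DateTime DateCreated { get; set; }
}
At save time a duplicate insert then surfaces as a DbUpdateException wrapping a SqlException whose Number is 2601 or 2627 (SQL Server's duplicate-key error numbers), which can be caught around the existing _context.SaveChanges() call:
try
{
    _context.SaveChanges();
}
catch (System.Data.Entity.Infrastructure.DbUpdateException ex)
{
    var sql = ex.GetBaseException() as System.Data.SqlClient.SqlException;
    if (sql == null || (sql.Number != 2601 && sql.Number != 2627))
        throw;
    // A concurrent request inserted the same (CartID, SweetID) row first:
    // re-query the row and apply the quantity change as an update instead.
}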
Footnote
Disabling the button just masks the problem, indeed. On the server side you cannot trust what JS does on the client side, because it is completely out of your control: JS can be disabled or modified by the user easily.
Update
Reading through my answer again, I feel there's a conclusion I should add:
In this particular case I think the best you can do is to
set the unique constraint as I suggested, and simply swallow the exception thrown when a duplicate insert is detected, AND
disable the submit button on the client side for the duration of the submit.
This way you ensure that no invalid data can be stored in your DB even if someone manipulates your JS code. At the same time, you don't need to overcomplicate your data persisting logic.

Enforcing invariants for child entities in concurrent editing environments

Given the invariant that a child collection cannot exceed x number of items, how can the domain guarantee such an invariant is enforced in a concurrent/web environment? Let's look at a (classic) example:
We have a Manager with Employees. The (hypothetical) invariant states that a Manager cannot have more than seven direct reports (Employees). We might implement this (naively) like so:
public class Manager
{
    // Let us assume that the employee list is mapped (somehow) from a persistence layer
    public IList<Employee> employees { get; private set; }

    public Manager(...)
    {
        ...
    }

    public void AddEmployee(Employee employee)
    {
        if (employees.Count() < 7)
        {
            employees.Add(employee);
        }
        else
        {
            throw new OverworkedManagerException();
        }
    }
}
Until recently, I had considered this approach to be good enough. However, it seems there is an edge-case that makes it possible for the database to store more than seven employees and thus break the invariant. Consider this series of events:
Person A goes to edit Manager in UI
(6 employees in memory, 6 employees in database)
Person B goes to edit Manager in UI
(6 employees in memory, 6 employees in database)
Person B adds Employee and saves changes
(7 employees in memory, 7 employees in database)
Person A adds Employee and saves changes
(7 employees in memory, 8 employees in database)
When the domain object is once again pulled from the database, the Manager constructor may (or may not) reinforce the Employee count invariant on the collection, but either way we now have a discrepancy between our data and what our invariant expects. How do we prevent this situation from happening? How do we recover from this cleanly?
Consider this series of events:
Person A goes to edit Manager in UI
(6 employees in memory, 6 employees in database)
Person B goes to edit Manager in UI
(6 employees in memory, 6 employees in database)
Person B adds Employee and saves changes
(7 employees in memory, 7 employees in database)
Person A adds Employee and saves changes
(7 employees in memory, 8 employees in database)
The simplest approach is to implement the database writes as a compare and swap operation. All writes are working with a stale copy of the aggregate (after all, we're looking at the aggregate in memory, but the book of record is the durable copy on disk). The key idea is that when we actually perform the write, we are also checking that the stale copy we were working with is still the live copy in the book of record.
(For instance, in an event-sourced system, you don't just append to the stream, you append at a specific position in the stream -- i.e., where you expect the tail pointer to be. So in a race, only one write gets to commit to the tail position; the other fails on a concurrency conflict and starts over.)
The analog to this in a web environment might be to use an ETag, and verify that the ETag is still valid when you perform the write. The winner gets a successful response; the loser gets a 412 Precondition Failed.
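A minimal sketch of that compare-and-swap write with plain ADO.NET against SQL Server (table, column, and parameter names are all illustrative): the UPDATE carries the version we loaded, and zero affected rows means we lost the race.
using System.Data.SqlClient;
public static bool TryCompareAndSwap(SqlConnection connection, int managerId,
                                     int versionReadAtLoad, int newReportCount)
{
    const string sql =
        @"UPDATE Managers
             SET DirectReports = @reports, Version = Version + 1
           WHERE Id = @id AND Version = @expectedVersion";
    using (var cmd = new SqlCommand(sql, connection))
    {
        cmd.Parameters.AddWithValue("@reports", newReportCount);
        cmd.Parameters.AddWithValue("@id", managerId);
        cmd.Parameters.AddWithValue("@expectedVersion", versionReadAtLoad);
        // One affected row: our stale copy was still the live copy, so the write won.
        // Zero rows: someone else committed first (the HTTP analog of a 412
        // Precondition Failed), so reload the aggregate and retry or report.
        return cmd.ExecuteNonQuery() == 1;
    }
}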
An improvement on this is to use a better model for your domain. Udi Dahan wrote:
A microsecond difference in timing shouldn’t make a difference to core business behaviors
Specifically, if your model ends up in a different state just because commands A and B happen to be processed in a different order, your model probably doesn't match your business very well.
The analog in your example would be that both commands should succeed, but the second of the two should also set a flag that notes that the aggregate is currently out of compliance. This approach prevents idiocies when an addEmployee command and a removeEmployee command happen to get ordered the wrong way around in the transport layer.
The (hypothetical) invariant states that a Manager cannot have more than seven direct reports
A thing to be wary of -- even in hypothetical examples -- is whether or not the database is the book of record. The database seldom gets veto power over the real world. If the real world is the book of record, you probably shouldn't be rejecting changes.
How do we prevent this situation from happening?
You implement this behavior in your Repository implementation: when you load the Aggregate, you also keep track of the Aggregate's version. The version can be implemented as a unique key constraint over the Aggregate's Id and an integer sequence number. Every Aggregate has its own sequence number (initially every Aggregate has sequence number 0). Before the Repository tries to persist the Aggregate, it increments the sequence number; if a concurrent persist has occurred, the database behind the Repository will throw a "unique key constraint violated" kind of exception and the persist will not occur.
Then (if you have designed the Aggregate as a pure, side-effect-free object, as you should in DDD!), you can transparently retry the command execution, re-running all the Aggregate's domain code and thus re-checking the invariants. Please note that the operation must be retried only if a "unique constraint violation" infrastructure exception occurs, not when the Aggregate throws a domain exception.
How do we recover from this cleanly?
You could retry the command execution until no "unique constraint violation" is thrown.
I've implemented this retrying in PHP here: https://github.com/xprt64/cqrs-es/blob/master/src/Gica/Cqrs/Command/CommandDispatcher/ConcurrentProofFunctionCaller.php
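A hedged C# equivalent of that retry loop; ConcurrencyException is a stand-in for whatever "unique constraint violated" exception your persistence infrastructure throws:
using System;
public class ConcurrencyException : Exception { }   // placeholder for your infrastructure's conflict exception
public static class RetryingDispatcher
{
    public static void Dispatch(Action executeCommand, int maxAttempts = 5)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                // Re-runs all of the Aggregate's domain code, so the invariants
                // are re-checked against freshly loaded state on every attempt.
                executeCommand();
                return;
            }
            catch (ConcurrencyException) when (attempt < maxAttempts)
            {
                // Retry only on the infrastructure conflict; domain exceptions
                // thrown by the Aggregate are deliberately not caught here.
            }
        }
    }
}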
This is not so much a DDD problem as a persistence layer problem. There are multiple ways to look at this.
From a traditional ACID/strong consistency perspective
You need to have a look at your particular database's available concurrency and isolation strategies, possibly reflected in your ORM capabilities. Some of them will allow you to detect such conflicts and throw an exception as Person A saves their changes at step 4.
As I said in my comment, in a typical web application that uses the Unit of Work pattern (via an ORM or otherwise), this shouldn't happen quite as often as you seem to imply. Entities don't stay in memory, tracked by the UoW, all along steps 1 to 4; they are reloaded at steps 3 and 4. Transactions 3 and 4 would have to be concurrent for the problem to occur.
Weaker, lock-free consistency
You have a few options here.
Last-one-wins, where the 7 employees from Person A will erase those from Person B. This can be viable in certain business contexts. You can do it by persisting the change as an employees = <new list> assignment instead of an employees.Add.
Relying on version numbers, as @VoiceOfUnreason described.
Eventual consistency with compensation, where something else in the application checks the invariant (employees.Count() < 7) after the fact, outside of Person A's and B's transactions. A compensating action has to be taken if a violation of the rule is detected, like rolling back the last operation and notifying Person A that the manager would have been overworked.

Is checking rows affected count after database action (insert, update, delete) overkill?

Lately in apps I've been developing I have been checking the number of rows affected by an insert, update, or delete to the database and logging an error if the number is unexpected. For example, on a simple insert, update, or delete of one row, if any number of rows other than one is returned from an ExecuteNonQuery() call, I will consider that an error and log it. Also, I realize now as I type this that I do not even try to roll back the transaction if that happens, which is not the best practice and should definitely be addressed. Anyways, here's code to illustrate what I mean:
I'll have a data layer function that makes the call to the db:
public static int DLInsert(Person person)
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");
    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Insert_Person"))
    {
        db.AddInParameter(dbCommand, "@FirstName", DbType.String, person.FirstName);
        db.AddInParameter(dbCommand, "@LastName", DbType.String, person.LastName);
        db.AddInParameter(dbCommand, "@Address", DbType.String, person.Address);
        return db.ExecuteNonQuery(dbCommand);
    }
}
Then a business layer call to the data layer function:
public static bool BLInsert(Person person)
{
    if (DLInsert(person) != 1)
    {
        // log exception
        return false;
    }
    return true;
}
And in the code-behind or view (I do both webforms and mvc projects):
if (BLInsert(person))
{
    // carry on as normal with whatever other code after successful insert
}
else
{
    // throw an exception that directs the user to one of my custom error pages
}
The more I use this type of code, the more I feel like it is overkill. Especially in the code-behind/view. Is there any legitimate reason to think a simple insert, update, or delete wouldn't actually modify the correct number of rows in the database? Is it more plausible to only worry about catching an actual SqlException and then handling that, instead of doing the monotonous check for rows affected every time?
Thanks. Hope you all can help me out.
UPDATE
Thanks everyone for taking the time to answer. I still haven't 100% decided what setup I will use going forward, but here's what I have taken away from all of your responses.
Trust the DB and the .NET libraries to handle a query and do their job as they were designed to do.
Use transactions in my stored procedures to roll back the query on any errors, and potentially use RAISERROR to throw those errors back to the .NET code as a SqlException, which could handle them with a try/catch. This approach would replace the problematic return-code checking.
Would there be any issue with the second bullet point that I am missing?
I guess the question becomes, "Why are you checking this?" If it's just because you don't trust the database to perform the query, then it's probably overkill. However, there could exist a logical reason to perform this check.
For example, I worked at a company once where this method was employed to check for concurrency errors. When a record was fetched from the database to be edited in the application, it would come with a LastModified timestamp. Then the standard CRUD operations in the data access layer would include a WHERE LastModified = @LastModified clause when doing an UPDATE, and check the record-modified count. If no record was updated, it would assume a concurrency error had occurred.
I felt it was kind of sloppy for concurrency checking (especially the part about assuming the nature of the error), but it got the job done for the business.
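In the question's Enterprise Library style, that check boils down to something like the sketch below (the stored procedure name, the Id and LastModified members, and the thrown exception type are illustrative):
public static void DLUpdate(Person person)
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");
    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Update_Person"))
    {
        db.AddInParameter(dbCommand, "@PersonId", DbType.Int32, person.Id);
        db.AddInParameter(dbCommand, "@LastName", DbType.String, person.LastName);
        // The procedure's UPDATE includes "WHERE LastModified = @LastModified",
        // so a stale timestamp matches zero rows.
        db.AddInParameter(dbCommand, "@LastModified", DbType.DateTime, person.LastModified);
        if (db.ExecuteNonQuery(dbCommand) == 0)
            throw new System.Data.DBConcurrencyException("Person was modified by another user.");
    }
}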
What concerns me more in your example is the structure of how this is being accomplished. The 1 or 0 being returned from the data access code is a "magic number." That should be avoided. It's leaking an implementation detail from the data access code into the business logic code. If you do want to keep using this check, I'd recommend moving the check into the data access code and throwing an exception if it fails. In general, return codes should be avoided.
Edit: I just noticed a potentially harmful bug in your code as well, related to my last point above. What if more than one record is changed? It probably won't happen on an INSERT, but could easily happen on an UPDATE. Other parts of the code might assume that != 1 means no record was changed. That could make debugging very problematic :)
On the one hand, most of the time everything should behave the way you expect, and on those times the additional checks don't add anything to your application. On the other hand, if something does go wrong, not knowing about it means that the problem may become quite large before you notice it. In my opinion, the little bit of additional protection is worth the little bit of extra effort, especially if you implement a rollback on failure. It's kinda like an airbag in your car... it doesn't really serve a purpose if you never crash, but if you do it could save your life.
I've always preferred to RAISERROR in my sprocs and handle exceptions rather than counting. This way, if you update a sproc to do something else, like logging/auditing, you don't have to worry about keeping the row counts in check.
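On the C# side that approach reduces to a try/catch around the existing call; a rough sketch, assuming the procedure raises its errors with RAISERROR at severity 16 or higher (which ADO.NET surfaces as a SqlException) and a hypothetical LogError helper:
// requires using System.Data.SqlClient;
try
{
    db.ExecuteNonQuery(dbCommand);
}
catch (SqlException ex)
{
    // The text passed to RAISERROR in the procedure arrives in ex.Message and the
    // error number in ex.Number, so the caller can log or translate it as needed.
    LogError(ex);   // hypothetical logging helper
    throw;
}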
Though if you like the second check in your code or would prefer not to deal with exceptions/raiserror, I've seen teams return 0 on successful sproc executions for every sproc in the db, and return another number otherwise.
It is absolutely overkill. You should trust that your core platform (.NET libraries, SQL Server) works correctly; you shouldn't be worrying about that.
Now, there are some related instances where you might want to test, like if transactions are correctly rolled back, etc.
If there is a need for that check, why not do it within the database itself? You save yourself a round trip and it's done at a more 'centralized' stage: if you check in the database, you can be sure it's applied consistently from any application that hits that database, whereas if you put the logic in the UI, you need to make sure that every UI application hitting that particular database applies the correct logic, and does so consistently.

Business Validation Logic Code Smell

Consider the following code:
partial class OurBusinessObject
{
    partial void OnOurPropertyChanged()
    {
        if (ValidateOurProperty(this.OurProperty) == false)
        {
            this.OurProperty = OurBusinessObject.Default.OurProperty;
        }
    }
}
That is, when the value of OurProperty in OurBusinessObject is changed, if the value is not valid, set it to be the default value. This pattern strikes me as code smell but others here (at my employer) do not agree. What are your thoughts?
Edited to add: I've been asked to add an explanation for why this is thought to be okay. The idea was that rather than having the producers of the business object validate the data, the business object could validate its own properties, and set clean default values in cases when the validation failed. Further, it was thought, if the validation rules change, the business object producers won't have to change their logic as the business object will take care of validating and cleaning the data.
It's absolutely horrible. Good luck trying to debug issues in production. The only thing it can do is cover up bugs, which will just pop up somewhere else, where it will not be obvious at all where they are coming from.
I think I have to agree with you. This could definitely lead to issues where the logic unexpectedly returns to the defaults, which could be very difficult to debug.
At the very least, this behavior should be logged, but this seems more like a case for throwing an exception.
To me this looks like the symptom, rather than the actual problem. What's really going on is that the setter for OurProperty fails to preserve the original value for use in the OnOurPropertyChanged event. If you do that, suddenly it becomes easier to make better choices about how to proceed.
For that matter, what you really want is an OnOurPropertyChanging event that is raised from the setter before the assignment actually takes place. This way you can allow or deny the assignment in the first place. Otherwise there is a small window of time where your object is not valid, which means the type is not thread-safe and you can't count on consistency if concurrency is a concern.
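A minimal sketch of that shape (the event type and names are illustrative, not an existing framework API): the setter raises a 'changing' notification before the assignment, and only assigns if nothing vetoed it.
using System;
public class PropertyChangingCancelEventArgs : System.ComponentModel.CancelEventArgs
{
    public PropertyChangingCancelEventArgs(object oldValue, object newValue)
    {
        OldValue = oldValue;
        NewValue = newValue;
    }
    public object OldValue { get; private set; }
    public object NewValue { get; private set; }
}
public class OurBusinessObject
{
    private string ourProperty;
    public event EventHandler<PropertyChangingCancelEventArgs> OurPropertyChanging;
    public string OurProperty
    {
        get { return this.ourProperty; }
        set
        {
            var args = new PropertyChangingCancelEventArgs(this.ourProperty, value);
            OurPropertyChanging?.Invoke(this, args);
            // The assignment only happens if no subscriber (or rule) vetoed it,
            // so the object never holds an invalid value, even transiently.
            if (!args.Cancel)
                this.ourProperty = value;
        }
    }
}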
Definitely a questionable practice.
How would an invalid value ever get assigned to this property? Wouldn't that indicate there's a bug somewhere in the calling code, in which case you'd probably want to know right away? Or that a user input something incorrectly in which case they should be informed right away?
In general, "failing fast" makes tracking down bugs a lot easier. Silently assigning a default behind the scenes is akin to "magic" and is only going to cause confusion to whoever has to maintain the codebase.
Distaste for the term 'code smell' aside, you might be right - depending on where it's coming from, silently changing the value is probably not a good thing. It would be better to ensure your value is valid instead of just reverting to the default.
I would highly recommend refactoring it to validate before setting the property.
You could always have a method that was more like:
T GetValidValueForProperty<T>(T suggestedValue, T currentValue);
or even:
T GetValidValueForProperty<T>(string propertyName, T suggestedValue, T currentValue);
If you do that, before you set the property, you could pass it to the business logic to validate, and the business logic could return the default property value (your current behavior) OR (more reasonable in most cases), return the currentValue, so setting had no effect.
This would be used more like:
T OurProperty
{
    get
    {
        return this.propertyBackingField;
    }
    set
    {
        this.propertyBackingField = this.GetValidValueForProperty(value, this.propertyBackingField);
    }
}
It doesn't really matter what you do, but it is important to validate before you change your current value. If you change your value before you determine whether the new value is good, you're asking for trouble in the long term.
It may or may not "smell", but I'm leaning more towards, "Yes it smells".
Does setting OurProperty to the default have a logical reason for doing so or is it simply convenient to do so in code? It is possible, however unlikely in practice, to contrive a scenario where this would be expected behavior, but I'm guessing that in most cases you should be throwing an exception and handling it cleanly somewhere.
Does setting the value to default get you closer to or move you away from the functional specifications description of how the application is supposed to work?
You are validating a change after it has been made? Validation should be done before the business property is altered.
Answering your question: the solution presented in that code snippet can generate big issues in production; you don't know whether the default value appeared there due to invalid input or just because something else set the value to the default.
It's hard to say without knowing the context or business rules. Generally speaking though, you should just validate at time of input, and maybe once more before persistence, but the way you're doing it won't really allow you to validate since you're not allowing a property to contain an invalid value.
I think your validation logic should raise an exception if asked to use an invalid value. If your consumer wants to use a default value, it should ask for it explicitly, either through a special, documented value or through another method.
The only kinds of exceptions I can think of that would be forgivable would be things like normalizing case, e.g. in email fields, to detect duplicates.
Furthermore, why in the world is this partial? Using partial classes for anything but generated code is itself a code smell, since you're likely using them to hide complexity that should be split up anyway!
I agree with Grzenio and would add that the best way to handle a validation error down in the domain layer (aka business objects) is to generate an exception. That exception could propagate all the way up into the UI layer where it could be handled and interactively rectified with the user. However, depending on the capabilities and technologies involved, this may not always be feasible, in which case, you probably should be validating up in the UI layer (possibly in addition to the domain layer). It's less than ideal, but might be your only viable option. In any case, setting it to a default value is a horrible thing to do and will lead to subtle bugs that will be near impossible to diagnose. If done on a broad scale, you'll have an unmaintainable system in no time (especially if you have no unit tests backing you up).
An argument that I have against this is the following. Suppose the user/producer of the business object accidentally inputs an invalid value. Then this pattern will gloss over that fact and default to clean data. But the right way to handle this is to throw an error and have the user/producer verify/clean their input data.
I'd say, implement PropertyChanging and allow the business logic to approve/deny a value, and then afterwards, throw an exception for invalid values.
This way, you don't ever have an invalid value. That, and you should never change a user's information. What if a user adds an entry to the database, and keeps track of it for his own records? Your code would reassign the value to the default, and he's now tracking the wrong information. It's better to inform the user ASAP.

Business Objects, Validation And Exceptions

I've been reading a few questions and answers regarding exceptions and their use. There seems to be a strong opinion that exceptions should be raised only for exceptional, unhandled cases. So that led me to wondering how validation works with business objects.
Let's say I have a business object with getters/setters for the properties on the object. Let's say I need to validate that the value is between 10 and 20. This is a business rule, so it belongs in my business object. That seems to imply to me that the validation code goes in my setter. Now I have my UI databound to the properties of the data object. The user enters 5, so the rule needs to fail and the user is not allowed to move out of the textbox. The UI is databound to the property, so the setter is going to be called, the rule checked, and the rule failed. If I raised an exception from my business object to say the rule failed, the UI would pick that up. But that seems to go against the preferred usage for exceptions. Given that it's a setter, you aren't really going to have a 'result' for the setter. If I set another flag on the object, then that would imply the UI has to check that flag after each UI interaction.
So how should the validation work?
Edit: I've probably used an over-simplified example here. Something like the range check above could be handled easily by the UI, but what if the validation were more complicated? E.g. the business object calculates a number based on the input, and if that calculated number is out of range it should be rejected. This is more complicated logic that should not be in the UI.
There is also the consideration of further data entered based on a field already entered. E.g. I have to enter an item on the order to get certain information like stock on hand, current cost, etc. The user may require this information to make decisions on further entry (like how many units to order), or it may be required in order for further validation to be done. Should a user be able to enter other fields if the item isn't valid? What would be the point?
You'll want to delve a bit into the remarkable work of Paul Stovell concerning data validation. He summed up his ideas in this article. I happen to share his point of view on the matter, and implemented it in my own libraries.
Here are, in Paul's words, the cons to throwing exceptions in the setters (based on a sample where a Name property should not be empty) :
There may be times where you actually need to have an empty name. For example, as the default value for a "Create an account" form.
If you're relying on this to validate any data before saving, you'll miss the cases where the data is already invalid. By that, I mean, if you load an account from the database with an empty name and don't change it, you might not ever know it was invalid.
If you aren't using data binding, you have to write a lot of code with try/catch blocks to show these errors to the user. Trying to show errors on the form as the user is filling it out becomes very difficult.
I don't like throwing exceptions for non-exceptional things. A user setting the name of an account to "Supercalafragilisticexpialadocious" isn't an exception, it's an error. This is, of course, a personal thing.
It makes it very difficult to get a list of all the rules that have been broken. For example, on some websites, you'll see validation messages such as "Name must be entered. Address must be entered. Email must be entered". To display that, you're going to need a lot of try/catch blocks.
And here are basic rules for an alternative solution :
There is nothing wrong with having an invalid business object, so long as you don't try to persist it.
Any and all broken rules should be retrievable from the business object, so that data binding, as well as your own code, can see if there are errors and handle them appropriately.
Assuming that you have separate validation and persist (i.e. save to database) code, I would do the following:
The UI should perform validation. Don't throw exceptions here. You can alert the user to errors and prevent the record from being saved.
Your database save code should throw invalid argument exceptions for bad data. It makes sense to do it here, since you cannot proceed with the database write at this point. Ideally this should never happen since the UI should prevent the user from saving, but you still need it to ensure database consistency. Also you might be calling this code from something other than the UI (e.g. batch updates) where there is no UI data validation.
I've always been a fan of Rocky Lhotka's approach in the CSLA framework (as mentioned by Charles). In general, whether it's driven by the setter or by calling an explicit Validate method, a collection of BrokenRule objects is maintained internally by the business object. The UI simply needs to check an IsValid method on the object, which in turn checks the number of BrokenRules, and handle it appropriately. Alternatively, you could easily have the Validate method raise an event which the UI could handle (probably the cleaner approach). You can also use the list of BrokenRules to display error messages to the user, either in summary form or next to the appropriate field. Although the CSLA framework is written in .NET, the overall approach can be used in any language.
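A rough sketch of that shape, not the actual CSLA API; BrokenRule, the rule list, and IsValid are modeled on the description above:
using System.Collections.Generic;
public class BrokenRule
{
    public string Property { get; set; }
    public string Description { get; set; }
}
public abstract class BusinessBase
{
    private readonly List<BrokenRule> brokenRules = new List<BrokenRule>();
    public IReadOnlyList<BrokenRule> BrokenRules { get { return brokenRules; } }
    public bool IsValid { get { return brokenRules.Count == 0; } }
    // Derived objects record failures here instead of throwing; the UI checks
    // IsValid and shows BrokenRules next to the offending fields or in a summary.
    protected void AddBrokenRule(string property, string description)
    {
        brokenRules.Add(new BrokenRule { Property = property, Description = description });
    }
    protected void ClearBrokenRules()
    {
        brokenRules.Clear();
    }
}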
I don't think throwing an Exception is the best idea in this case. I definitely follow the school of thought that says Exceptions should be for exceptional circumstances, which a simple validation error is not. Raising an OnValidationFailed event would be the cleaner choice, in my opinion.
By the way, I have never liked the idea of not letting the user leave a field when it is in an invalid state. There are so many situations where you might need to leave the field temporarily (perhaps to set some other field first) before going back and fixing the invalid field. I think it's just an unnecessary inconvenience.
You might want to move the validation outside of the getters and setters. You could have a function or property called IsValid that would run all the validation rules. It would populate a dictionary or hashtable with all of the "broken rules". This dictionary would be exposed to the outside world, and you could use it to populate your error messages.
This is the approach that is taken in CSLA.Net.
Exceptions should not be thrown as a normal part of validation. Validation invoked from within business objects is a last line of defense, and should only happen if the UI fails to check something. As such they can be treated like any other runtime exception.
Note that there's a difference between defining validation rules and applying them. You might want to define (i.e. code or annotate) your business rules in your business logic layer but invoke them from the UI, so that they can be handled in a manner appropriate to that particular UI. The manner of handling will vary for different UIs, e.g. form-based web apps vs. AJAX web apps. Exception-on-set validation offers very limited options for handling.
Many applications duplicate their validation rules, such as in JavaScript, domain object constraints, and database constraints. Ideally this information will only be defined once, but implementing that can be a challenge and requires lateral thinking.
Perhaps you should look at having both client-side and server-side validation. If anything slips past the client-side validation you can then feel free to throw an exception if your business object would be made invalid.
One approach I've used was to apply custom attributes to business object properties, which described the validation rules. e.g.:
[MinValue(10), MaxValue(20)]
public int Value { get; set; }
The attributes can then be processed and used to automatically create both client-side and server-side validation methods, to avoid the problem of duplicating business logic.
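A hedged sketch of the server-side half, using hypothetical MinValue/MaxValue attributes rather than any particular validation library; reflection reads the attributes so the range rule lives in exactly one place:
using System;
using System.Collections.Generic;
using System.Reflection;
[AttributeUsage(AttributeTargets.Property)]
public class MinValueAttribute : Attribute
{
    public MinValueAttribute(int min) { Min = min; }
    public int Min { get; private set; }
}
[AttributeUsage(AttributeTargets.Property)]
public class MaxValueAttribute : Attribute
{
    public MaxValueAttribute(int max) { Max = max; }
    public int Max { get; private set; }
}
public static class RangeValidator
{
    // Returns one message per violated rule; an empty list means the object passed.
    public static List<string> Validate(object target)
    {
        var errors = new List<string>();
        foreach (PropertyInfo prop in target.GetType().GetProperties())
        {
            object raw = prop.GetValue(target, null);
            if (!(raw is int))
                continue;
            int value = (int)raw;
            var min = (MinValueAttribute)Attribute.GetCustomAttribute(prop, typeof(MinValueAttribute));
            if (min != null && value < min.Min)
                errors.Add(prop.Name + " must be at least " + min.Min);
            var max = (MaxValueAttribute)Attribute.GetCustomAttribute(prop, typeof(MaxValueAttribute));
            if (max != null && value > max.Max)
                errors.Add(prop.Name + " must be at most " + max.Max);
        }
        return errors;
    }
}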
I'd definitely advocate both client- and server-side validation (or validating at the various layers). This is especially important when communicating across physical tiers or processes, as the cost of throwing exceptions becomes increasingly expensive. Also, the further down the chain you wait to validate, the more time is wasted.
As to whether to use exceptions or not for data validation: I think it's OK to use exceptions in process (though still not preferable), but out of process, call a method to validate the business object (e.g. before saving) and have the method return the success of the operation along with any validation errors. Errors aren't exceptional.
Microsoft throw exceptions from business objects when validation fails. At least, that's how the Enterprise Library's Validation Application Block works.
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;
public class Customer
{
    [StringLengthValidator(0, 20)]
    public string CustomerName;

    public Customer(string customerName)
    {
        this.CustomerName = customerName;
    }
}
Your business objects should throw exceptions for bad inputs, but those exceptions should never be thrown in the course of a normal program run. I know that sounds contradictory, so I shall explain.
Each public method should validate its inputs, and throw "ArgumentException"s when they are incorrect. (And private methods should validate their inputs with "Debug.Assert()"s to ease development, but that's another story.) This rule about validating inputs to public methods (and properties, of course) is true for every layer of the application.
The requirements of the software interface should be spelled out in the interface documentation, of course, and it is the job of the calling code to make sure the arguments are correct and the exceptions will never be thrown, which means the UI needs to validate the inputs before handing them to the business object.
While the rules given above should almost never be broken, sometimes business object validation can be very complex, and that complexity shouldn't be foisted onto the UI. In that case it's good for the BO's interface to allow some leeway in what it accepts and then provide for an explicit Validate(out string[]) predicate to check the properties and give feedback on what needs to be changed. But notice in this case that there are still well-defined interface requirements and no exceptions need ever be thrown (assuming the calling code follows the rules).
Following this latter system, I almost never do early validation on property setters, since that sort of complicates the use of the properties (but in the case given in the question, I might). (As an aside, please don't prevent me from tabbing out of a field just because it has bad data in it. I get claustrophobic when I can't tab around a form! I'll go back and fix it in a minute, I promise! OK, I feel better now, sorry.)
It depends on what sort of validation you will be performing and where. I think that each layer of the application can be easily protected from bad data, and it's too easy to do for it not to be worth it.
Consider a multi-tiered application and the validation requirements/facilities of each layer. The middle layer, Object, is the one that seems to be up for debate here.
Database
protects itself from an invalid state with column constraints and referential integrity, which will cause the application's database code to throw exceptions
Object
?
ASP.NET/Windows Forms
protects the form's state (not the object) using validator routines and/or controls without using exceptions (winforms does not ship with validators, but there's an excellent series at msdn describing how to implement them)
Say you have a table with a list of hotel rooms, and each row has a column for the number of beds called 'beds'. The most sensible data type for that column is an unsigned small integer*. You also have a plain ole object with an Int16* property called 'Beds'. The issue is that you can stick -4555 into an Int16, but when you go to persist the data to a database you're going to get an Exception. Which is fine - my database shouldn't be allowed to say that a hotel room has less than zero beds, because a hotel room can't have less than zero beds.
* If your database can represent it, but let's assume it can
* I know you can just use a ushort in C#, but for the purpose of this example, let's assume you can't
There's some confusion as to whether objects should represent your business entity, or whether they should represent the state of your form. Certainly in ASP.NET and Windows Forms, the form is perfectly capable of handling and validating its own state. If you've got a text box on an ASP.NET form that is going to be used to populate that same Int16 field, you've probably put a RangeValidator control on your page which tests the input before it gets assigned to your object. It prevents you from entering a value less than zero, and probably prevents you from entering a value greater than, say, 30, which hopefully would be enough to cater for the worst flea-infested hostel you can imagine. On postback, you would probably be checking the IsValid property of the page before building your object, thereby preventing your object from ever representing less than zero beds and preventing your setter from ever being called with a value it shouldn't hold.
But your object is still capable of representing less than zero beds, and again, if you were using the object in a scenario not involving the layers which have validation integrated into them (your form and your DB) you're outta luck.
Why would you ever be in this scenario? It must be a pretty exceptional set of circumstances! Your setter therefore needs to throw an exception when it receives invalid data. It should never be thrown, but it could be. You could be writing a Windows Form to manage the object to replace the ASP.NET form and forget to validate the range before populating the object. You could be using the object in a scheduled task where there is no user interaction at all, and which saves to a different, but related, area of the database rather than the table which the object maps to. In the latter scenario, your object can enter a state where it is invalid, but you won't know until the results of other operations start to be affected by the invalid value. If you're checking for them and throwing exceptions, that is.
I tend to believe business objects should throw exceptions when passed values that violate their business rules. However, it seems that the WinForms 2.0 data binding architecture assumes the opposite, and so most people are railroaded into supporting this architecture.
I agree with shabbyrobe's last answer that business objects should be built to be usable and to work correctly in multiple environments and not just the winforms environment, e.g., the business object could be used in a SOA type web service, a command line interface, asp.net, etc. The object should behave correctly and protect itself from invalid data in all these cases.
An aspect that is often overlooked is also what happens in managing the collaborations between objects in 1-1, 1-n or n-n relationships: should these also accept the addition of invalid collaborators and just maintain an invalid-state flag that should be checked, or should they actively refuse to add invalid collaborations? I have to admit that I'm heavily influenced by the Streamlined Object Modeling (SOM) approach of Jill Nicola et al. But what else is logical?
The next thing is how to work with windows forms. I'm looking at creating a UI wrapper for the business objects for these scenarios.
As Paul Stovell's article mentioned, you can implement error-free validation in your business objects by implementing the IDataErrorInfo interface. Doing so will allow user error notification by WinForms' ErrorProvider and WPF's binding with validation rules. The logic to validate your object's properties is stored in one method, instead of in each of your property getters, and you do not necessarily have to resort to frameworks like CSLA or the Validation Application Block.
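A minimal sketch of the IDataErrorInfo route, reusing the question's 10-to-20 range rule (the class and property names are illustrative):
using System.ComponentModel;
public class BusinessObject : IDataErrorInfo
{
    public int Value { get; set; }
    // One place holds the rule; WinForms' ErrorProvider and WPF's
    // ValidatesOnDataErrors bindings read it through this indexer.
    public string this[string propertyName]
    {
        get
        {
            if (propertyName == "Value" && (Value < 10 || Value > 20))
                return "Value must be between 10 and 20.";
            return string.Empty;
        }
    }
    public string Error
    {
        get { return this["Value"]; }
    }
}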
As far as stopping the user from changing focus out of the textbox is concerned:
First of all, this is usually not the best practice. A user may want to fill out the form out of order, or, if a validation rule depends on the results of multiple controls, the user may have to fill in a dummy value just to get out of one control to set another. That said, this can be implemented by leaving the form's AutoValidate property at its default (EnablePreventFocusChange) and subscribing to the Control.Validating event:
private void textBox1_Validating(object sender, CancelEventArgs e)
{
    if (textBox1.Text == String.Empty)
    {
        errorProvider1.SetError(sender as Control, "Cannot be empty");
        e.Cancel = true;
    }
    else
    {
        errorProvider1.SetError(sender as Control, "");
    }
}
Using rules stored in the business object for this validation is a little more tricky since the Validating event is called before the focus changes and the data bound business object is updated.
You might like to consider the approach taken by the Spring framework. If you're using Java (or .NET), you can use Spring as-is, but even if you're not, you could still use that pattern; you'd just have to write your own implementation of it.
Throwing an exception in your case is fine. You can consider the case a true exception because something is trying to set an integer to a string (for example). The business rules' lack of knowledge of your views means that they should consider this case exceptional and return that back to the view.
Whether or not you validate your input values before you send them through to the business layer is up to you, I think that as long as you follow the same standard throughout your application then you will end up with clean and readable code.
You could use the Spring framework as specified above; just be careful, as much of the linked document describes writing code that is not strongly typed, i.e. you may get errors at run time that you could not pick up at compile time. This is something I try to avoid as much as possible.
The way we do it here currently is that we take all the input values from the screen, bind them to a data model object and throw an exception if a value is in error.
In my experience, validation rules are seldom universal across all screens/forms/processes in an application. Scenarios like this are common: on the add page, it may be ok for a Person object not to have a last name, but on the edit page it must have a last name. That being the case I've come to believe that validation should happen outside of an object, or the rules should be injected into the object so the rules can change given a context. Valid/Invalid should be an explicit state of the object after validation or one that can be derived by checking a collection for failed rules. A failed business rule is not an exception IMHO.
Have you considered raising an event in the setter if the data is invalid? That would avoid the problem of throwing an exception and would eliminate the need to explicitly check the object for an "invalid" flag. You could even pass an argument indicating which field failed validation, to make it more reusable.
The handler for the event should be able to take care of putting focus back onto the appropriate control if needed, and it could contain any code needed to notify the user of the error. Also, you could simply decline to hook up the event handler and be free to ignore the validation failure if needed.
In my opinion, this is an example where throwing an exception is okay. Your property probably does not have any context by which to correct the problem; as such, an exception is in order and the calling code should handle the situation, if possible.
If the input goes beyond the business rule implemented by the business object, I'd say it's a case not handled by the busines object. Therefore I'd throw an exception. Even though the setter would "handle" a 5 in your example, the business object won't.
For more complex combinations of input, a validation method is required though, or else you'll end up with quite complex validations scattered all over the place.
In my opinion you'll have to decide which way to go depending on the complexity of the allowed/disallowed input.
I think it depends on how much your business model is important. If you want to go the DDD way, your model is the most important thing. Therefore, you want it to be in a valid state at all time.
In my opinion, most people are trying to do too much (communicate with the views, persist to the database, etc.) with the domain objects but sometimes you need more layers and a better separation of concerns i.e., one or more View Models. Then you can apply validation without exceptions on your View Model (the validation could be different for different contexts e.g., web services/web site/etc.) and keep exception validations inside your business model (to keep the model from being corrupted). You would need one (or more) Application Service layer to map your View Model with your Business Model. The business objects should not be polluted with validation attributes often related to specific frameworks e.g., NHibernate Validator.
