CQS with out parameter - C#

In CQS (Command Query Separation) it is common to have commands with a "void" return value and queries with a return type (or so I have learned...).
Now I wonder if this COMMAND is valid then, because basically, we are doing the same thing as in a query, just with the "out" keyword instead of using the return type:
public class LogTrace{
public Guid CorrelationId { get; }
public DateTime Timestamp { get; }
}
public class Logger{
public void Log(string message, out LogTrace trace){
trace = new LogTrace(); // fill the trace properties here
//Log the message (+ trace)
}
}

CQS is not about "not having return types" but about not having queries change data or commands return state - it doesn't prescribe anything about how or when you should use language constructs because it's a language-agnostic concept.
Whether you return data from your command via an explicit return value or implicitly via the use of out parameters is really immaterial; you're still violating the same principle.
If anything, using out is worse: not only do you violate the principle but you also add unnecessary complexity to your code by trying to undermine yourself with a technicality. If you really need to return a value, you should use a proper return value for it.
I'd also question why you are trying to conform to architectural patterns that apparently do not fit with what you're trying to do (or further analyse what makes you think you need to return a value and address that underlying issue, depending on your perspective).

Usually when you do CQS, you generally have ICommandHandler<> and IQueryHandler<> so you can add decorators and get some really powerful cross-cutting concerns implemented. I went a step further and created an ICommandQueryHandler<>. There are gray areas, like stacks/queues, where to get the data you have to change it first. Thus, it is better to follow the standard as best you can, but have something in place that you only use as a last resort when all the other options aren't going to work - and then use that each time, to be consistent.
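For illustration, a minimal sketch of what those handler abstractions and a decorator might look like (the method and type names are just one common convention, not something CQS prescribes):

public interface ICommandHandler<TCommand>
{
    void Execute(TCommand command);
}

public interface IQueryHandler<TQuery, TResult>
{
    TResult Execute(TQuery query);
}

// A decorator wrapping any command handler to add a cross-cutting concern (here: logging).
public class LoggingCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _inner;

    public LoggingCommandHandlerDecorator(ICommandHandler<TCommand> inner)
    {
        _inner = inner;
    }

    public void Execute(TCommand command)
    {
        Console.WriteLine($"Handling {typeof(TCommand).Name}");
        _inner.Execute(command);
    }
}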

Related

.Net Core Binding to a Specific Model given a Specific QueryString Value

I'm in the middle of refactoring an analytics api which needs to allow clients to send events as HTTP GET requests, with their "Hit" data encoded in the URL's QueryString Parameters.
My API's job is to receive these events, and then place the "valid" events onto a queue in another part of the system for processing.
Some Hits have the same shape. The only thing that makes them different is the value of the type parameter, which all events must have at a minimum.
The problem I've encountered is that, based on the Hit type, I'd like to be able to assume the type of each field given to me, which of course requires model binding. Currently, I can only find out which model to validate against after checking the value of type - which risks making the API excessively "stringly typed".
An example route would be:
GET https://my.anonymousanalytics.net/capture?type=startAction&amount=300&so_rep=true
Therefore, my Hit would be:
{
type: "startAction",
amount: 300,
so_rep: true
}
Which, hypothetically, could be bound to the Model StackOverflowStartHitModel
class StackOverflowStartHitModel {
public string type { get; } // Q: Could I force the value of this to be "startAction"?
public int amount { get; }
public bool so_rep { get; }
}
Why am I asking this here? Well I'm normally a JavaScript developer, but everyone who I'd normally turn to for C# wisdom is off work with the flu.
I have experimented with the [FromQuery] attribute decorator, but my concern is that for Hits that are the exact same shape, I might not be able to tell the difference between whether it is a startAction or an endAction, for example.
You're going to need a validation engine of some sort, but don't confuse this with your UI model validation. It sounds like you really have one model with a number of valid states, which is really business logic.
Your model looks like this:
public class StackOverflowModel
{
public string type { get; set;}
public int amount { get; set; }
public bool so_rep { get; set;}
}
It doesn't matter what value your type field has, and you don't need to hard-code it either; it will be captured as-is and can then be checked against the valid states.
There are a number of ways to do this that I can think of.
One option would be to create a list of valid rules (states) and then simply check if your input model matches any of them. One way to implement something like this could be with a library like FluentValidation. You can see an example here: Validation Rules and Business Rules in MVC
Another option would be to use some sort of Pattern Matching techniques like described here: https://learn.microsoft.com/en-us/dotnet/csharp/pattern-matching
Whichever option you go with, make sure you put this validation stuff in a separate class, maybe even a separate project. You can then add tests for each rule that you have to make sure everything works. This will also keep your controller light.
You haven't given examples of valid and invalid states, but I am guessing you're really talking about variations of those 3 parameters such as, when type is "something" then amount can only be < 200 and so_rep can only be "whatever". This can be done quite nicely with the FluentValidation library.
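For illustration, a minimal FluentValidation sketch of such a conditional rule (the threshold of 200 and the "startAction" value are invented for the example):

using FluentValidation;

public class HitModelValidator : AbstractValidator<StackOverflowModel>
{
    public HitModelValidator()
    {
        RuleFor(m => m.type).NotEmpty();

        // Hypothetical business rule: when type is "startAction",
        // amount must be below 200 and so_rep must be true.
        When(m => m.type == "startAction", () =>
        {
            RuleFor(m => m.amount).LessThan(200);
            RuleFor(m => m.so_rep).Equal(true);
        });
    }
}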

Wrapping A Long List of Parameters as a Single Object

Consider the following interface:
public interface SomeRepo
{
public IEnumerable<IThings> GetThingsByParameters(DateTime startDate,
DateTime endDate,
IEnumerable<int> categoryIds,
IEnumerable<int> userIds,
IEnumerable<int> typeIds,
string someStringToFilterBy);
}
Is there any benefit in doing this instead?
public IEnumerable<IThings> GetThingsByParameters(IParameter parameter);
Where IParameter is an object defined as such:
public interface IParameter
{
DateTime startDate { get; }
DateTime endDate { get; }
IEnumerable<int> categoryIds { get; }
IEnumerable<int> userIds { get; }
IEnumerable<int> typeIds { get; }
string someStringToFilterBy { get; }
}
I don't see any benefit in using IParameter other than making it a bit more readable, and the extra layer of complexity doesn't seem to be worth it.
Anything that I may be missing? Thanks.
If that's just for that single place, it may not be worth it all that much.
Creating a class on its own does have some possible benefits, but they're quite dependent on exactly that; whether you would be able to reuse it.
You could add some sort of early data validation to your IParameters implementation (eg. endDate can't be earlier than startDate - it's common sense, you don't need to be a repository object to know that).
If some values are optional and some are not, a Parameters class gives you an opportunity to clearly distinguish these two categories.
It's much easier to find all usages of Parameters in your code than all the occurrences of raw "start date / end date / ids" packs.
This being said, readability isn't a minor concern. I feel that 6 parameters per method is twice too many. And based on experience, I wouldn't bet on it stopping at 6.
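As an illustration of the early-validation and optional/required points above, one possible shape for such a parameter object (a sketch only; the names mirror the question):

using System;
using System.Collections.Generic;
using System.Linq;

public class ThingsQuery
{
    public DateTime StartDate { get; }
    public DateTime EndDate { get; }
    public IEnumerable<int> CategoryIds { get; }    // optional
    public IEnumerable<int> UserIds { get; }        // optional
    public string SomeStringToFilterBy { get; }     // optional

    public ThingsQuery(DateTime startDate, DateTime endDate,
        IEnumerable<int> categoryIds = null,
        IEnumerable<int> userIds = null,
        string someStringToFilterBy = null)
    {
        // Early, common-sense validation lives with the parameters themselves.
        if (endDate < startDate)
            throw new ArgumentException("endDate cannot be earlier than startDate.");

        StartDate = startDate;
        EndDate = endDate;
        CategoryIds = categoryIds ?? Enumerable.Empty<int>();
        UserIds = userIds ?? Enumerable.Empty<int>();
        SomeStringToFilterBy = someStringToFilterBy;
    }
}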
As the book Clean Code (Robert C. Martin) points out, it is not a good idea to use many parameters in a method (the book recommends at most 3). If you have a method that requires that many parameters, you have to rethink your design; it may be a sign that your model needs one more class.
The extreme of that is developing your own expression system where IParameter has a string operator ("Equals", "LessThanOrEqualTo", "Plus", etc.) and then has an array of IParameter[] called Children or something. Of course, if you're going to do that, why not use something built-in like LINQ or C#'s Expressions? If this isn't backed by a database and you need to do string filtering, that's a good option (or use a DataTable's built-in filtering/parsing of expressions if you don't care about performance).
If this is backed by a database, it's usually a bad idea to expose arbitrary querying on a repository that ties to, say, a SQL database, because the end developer may not know which columns are indexed and may write ill-performing queries (particularly if they don't have easy access to production-scale data). It's better to provide specific query methods that take specific arguments and map essentially to one SQL SELECT each, fine-tuning each query (assuming your repository is backed by a SQL database).
This is more performant because now you explicitly control which indexes the end developer can query from by exposing a method that takes in explicit arguments.
This also makes unit-testing dependencies of your repository much easier, because it's easy to mock a strongly typed repository method like that. If you instead allow your services to define their own queries, you end up building a fake in-memory abstraction of the database using LINQ-to-Objects - and that can sometimes give false positives.
There's nothing inherently wrong with it - I just see the typical approach as being very explicit if this is backed by a database, or, if it's all in-memory, leveraging an already-existing filtering/expression system.
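For example, instead of a generic filter object, the repository might expose a handful of deliberately narrow methods (the method names here are hypothetical):

public interface IThingsRepository
{
    // Each method corresponds to one tuned SELECT, so callers can only run
    // queries whose supporting indexes are known to exist.
    IEnumerable<IThings> GetThingsByDateRange(DateTime startDate, DateTime endDate);
    IEnumerable<IThings> GetThingsByCategory(int categoryId, DateTime startDate, DateTime endDate);
}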

How to express domain logic in a domain class?

I'm trying to port a years old MS Access app with spaghetti VBA code to C# and OOP, and I'm struggling to find the best way to put domain logic in my domain classes.
I'll use a Country class as a simple example. It has three properties with different business rules:
CountryCode can't be changed anymore once the country is created, because that would cause issues with a 3rd party app that uses the countries
CountryName can be changed anytime, without any underlying logic or business rules
IsoCode can be changed anytime, but must be exactly 2 characters long
(IsoCode has actually more rules, but in this example let's just assume that "must be exactly 2 characters" is the only rule, for the sake of simplicity)
I created two slightly different versions of the class.
I'm quite inexperienced in object-oriented programming, so I need help deciding:
does it matter which approach I use?
does one of them (or both) have issues that I don't see?
is there a different way that's even better?
Both of my approaches look good to me now, but I don't know if maybe they will cause problems later (the app in question is ten years old, and will probably live on for a long time).
Version 1:
public class Country1
{
public string CountryCode { get; private set; }
public string CountryName { get; set; }
public string IsoCode { get; private set; }
public Country1(string countryCode, string countryName, string isoCode)
{
this.CountryCode = countryCode;
this.CountryName = countryName;
SetIsoCode(isoCode);
}
public void SetIsoCode(string isoCode)
{
if (isoCode.Length != 2)
{
throw new ArgumentException("must be exactly 2 characters!");
}
this.IsoCode = isoCode;
}
}
Version 2:
public class Country2
{
public Country2(string countryCode, string countryName, string isoCode)
{
this.countrycode = countryCode;
this.CountryName = countryName;
this.isocode = isoCode;
}
private readonly string countrycode;
private string isocode;
public string CountryCode
{
get { return this.countrycode; }
}
public string CountryName { get; set; }
public string IsoCode
{
get { return this.isocode; }
set
{
if (value.Length != 2)
{
throw new ArgumentException("must be exactly 2 characters!");
}
this.isocode = value;
}
}
}
Some more background about why I'm asking this and what I want to know:
I have read lots of different opinions about the "right OOP way".
Some say that you shouldn't expose getters and setters at all. I understand why this is a bad idea with setters; that's why CountryCode can only be set from the constructor.
Some say that instead of using any getters and setters, you should use GetXXX and SetXXX methods. I can see that this makes sense in some cases (for example, a SetXXX method with several parameters when you have several values that need to be set together).
But often there are simple values like the CountryName in my example, which is a "dumb" value without any logic. When I have ten of these things in one class, I don't want to create GetXXX and SetXXX methods for each of them.
Then there's stuff like the IsoCode, which has no connection to any of the other properties either (so no need to use a SetXXX method to set it together with another property). But it contains some validation, so I could either make a SetXXX method, or just do the validation (and throw an exception when something is wrong) in the setter.
Is throwing an exception even the best way to notify the caller of an error? Some say it's fine, and some say that you should throw exceptions only for "exceptional" cases.
IMO it's not really exceptional when someone enters an invalid ISO code, but how else should I get the information that an error occurred (including a human-readable error message!) to the client? Is it better to use a response object with an error code and an ErrorMessage string property?
Personally I don't think there is much difference between the property and method implementation. I have a preference for using properties when the values can be set independently. I use setter methods to force more than one value to be provided to your model at the same time when the values depend on each other in some way (e.g. you are setting integers x and y and y cannot be larger than x).
It makes a lot of sense to have simple validation logic (e.g. required fields, fields lengths) in your view-model because this allows you to tie the validation more closely to the user interface (e.g. you could highlight the field that was invalid after it had been filled in).
For WPF Validation see: http://www.codeproject.com/Articles/97564/Attributes-based-Validation-in-a-WPF-MVVM-Applicat
For MVC Validation see: http://www.codeproject.com/Articles/249452/ASP-NET-MVC3-Validation-Basic
I'd strongly recommend this article by Eric Lippert about what kind of condition makes a good candidate for throwing/catching an exception:
http://blogs.msdn.com/b/ericlippert/archive/2008/09/10/vexing-exceptions.aspx
I tend to use properties for single value changes, and SetXXX methods (rarely) when I have some rule that involves more than one value that must be set at a time (just like Andy Skirrow said).
SetXXX methods for everything are a common "Javaish" convention (although they're used in many languages that don't have C#-style getters and setters).
People try to force good practices they learned in one language onto every language they can, even if they don't necessarily fit well or the language has better alternatives.
Try not to be an "OOP maniac". This will only cause you pain in the head. OOP is not a religion (and even religion should not have fanatics IMHO, but this is another story).
Back to the point
The two approaches make no functional difference and will not cause any harm in the future. Take the one you feel is more readable and pleasant to code. That will count much more than "the correct OOP way", because many people have their own definition of the "better way" of doing things.
Validation
If you are using some structural or architectural pattern like MVC, you can use Data Annotation Attributes to enforce validation and, depending on the framework, use it to enforce client side validation as well.
See @Andy Skirrow's answer for links.
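For reference, a minimal sketch of the data-annotation style, assuming System.ComponentModel.DataAnnotations and the Country example from the question (the attribute choices are only illustrative):

using System.ComponentModel.DataAnnotations;

public class CountryEditModel
{
    [Required]
    public string CountryCode { get; set; }

    public string CountryName { get; set; }

    [Required]
    [StringLength(2, MinimumLength = 2, ErrorMessage = "IsoCode must be exactly 2 characters.")]
    public string IsoCode { get; set; }
}

MVC and similar frameworks can evaluate these attributes automatically and surface the error messages on the client side.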

Domain Validation in a CQRS architecture

Danger ... Danger Dr. Smith... Philosophical post ahead
The purpose of this post is to determine if placing the validation logic outside of my domain entities (aggregate root actually) is actually granting me more flexibility or it's kamikaze code
Basically I want to know if there is a better way to validate my domain entities. This is how I am planning to do it but I would like your opinion
The first approach I considered was:
class Customer : EntityBase<Customer>
{
public void ChangeEmail(string email)
{
if(string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
if(!email.IsEmail()) throw new DomainException();
if(email.Contains("@mailinator.com")) throw new DomainException();
}
}
I actually do not like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come and edit my domain entity. This has been a really simple example, but in real life the validation could be more complex.
So following the philosophy of Udi Dahan, making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this
class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
{
private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;
public bool IsSatisfiedBy(Customer customer)
{
return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
}
}
But then I realized that in order to follow this approach I had to mutate my entities first in order to pass the value being validated (in this case the email), and mutating them would cause my domain events to be fired, which I wouldn't like to happen until the new email is valid.
So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:
class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
{
public void IsValid(Customer entity, ChangeEmailCommand command)
{
if(!command.Email.HasValidDomain()) throw new DomainException("...");
}
}
Well, that's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are considered injectable objects they can have external dependencies injected if the validation requires it.
Now the dilemma. I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy unit testing, easy maintenance, domain invariants are explicitly expressed using the Ubiquitous Language, it is easy to extend, validation logic is centralized, and validators can be used together to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue it's a code smell - an anemic domain), I think the trade-off is acceptable.
But there is one thing that I have not figured out how to implement in a clean way: how should I use these components?
Since they will be injected, they won't fit naturally inside my domain entities, so basically I see two options:
Pass the validators to each method of my entity
Validate my objects externally (from the command handler)
I am not happy with option 1, so I will explain how I would do it with option 2:
class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
{
// here I would get the validators required for this command injected
private IEnumerable<IDomainInvariantValidator> validators;
public void Execute(ChangeEmailCommand command)
{
using (var t = this.unitOfWork.BeginTransaction())
{
var customer = this.unitOfWork.Get<Customer>(command.CustomerId);
// here I would validate them, something like this
this.validators.ForEach(x => x.IsValid(customer, command));
// here I know the command is valid
// the call to ChangeEmail will fire domain events as needed
customer.ChangeEmail(command.Email);
t.Commit();
}
}
}
Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?
EDIT
I think it is not clear from my question, but the real problem is: Hiding the domain rules has serious implications in the future maintainability of the application, and also domain rules change often during the life-cycle of the app. Hence implementing them with this in mind would let us extend them easily. Now imagine in the future a rules engine is implemented, if the rules are encapsulated outside of the domain entities, this change would be easier to implement
I am aware that placing the validation outside of my entities breaks encapsulation, as @jgauffin mentioned in his answer, but I think the benefits of placing the validation in individual objects are much more substantial than just keeping the encapsulation of an entity. I think encapsulation makes more sense in a traditional n-tier architecture, where the entities were used in several places of the domain layer; but in a CQRS architecture, when a command arrives, there will be a command handler accessing an aggregate root and performing operations against that aggregate root only, creating a perfect window to place the validation.
I'd like to make a small comparison between the advantages of placing validation inside an entity vs placing it in individual objects:
Validation in Individual objects
Pro. Easy to write
Pro. Easy to test
Pro. It's explicitly expressed
Pro. It becomes part of the Domain design, expressed with the current Ubiquitous Language
Pro. Since it's now part of the design, it can be modeled using UML diagrams
Pro. Extremely easy to maintain
Pro. Makes my entities and the validation logic loosely coupled
Pro. Easy to extend
Pro. Following the SRP
Pro. Following the Open/Close principle
Pro. Not breaking the law of Demeter (mmm)?
Pro. It's centralized
Pro. It could be reusable
Pro. If required, external dependencies can be easily injected
Pro. If using a plug-in model, new validators can be added just by dropping the new assemblies without the need to re-compile the whole application
Pro. Implementing a rules engine would be easier
Con. Breaking encapsulation
Con. If encapsulation is mandatory, we would have to pass the individual validators to the entity (aggregate) method
Validation encapsulated inside the entity
Pro. Encapsulated?
Pro. Reusable?
I would love to read your thoughts about this
I agree with a number of the concepts presented in other responses, but I put them together in my code.
First, I agree that using Value Objects for values that include behavior is a great way to encapsulate common business rules and an e-mail address is a perfect candidate. However, I tend to limit this to rules that are constant and will not change frequently. I'm sure you are looking for a more general approach and e-mail is just an example, so I won't focus on that one use-case.
The key to my approach is recognizing that validation serves different purposes at different locations in an application. Put simply, validate only what is required to ensure that the current operation can execute without unexpected/unintended results. That leads to the question: what validation should occur where?
In your example, I would ask myself if the domain entity really cares that the e-mail address conforms to some pattern and other rules, or do we simply care that 'email' cannot be null or blank when ChangeEmail is called? If the latter, then a simple check to ensure a value is present is all that is needed in the ChangeEmail method.
In CQRS, all changes that modify the state of the application occur as commands, with the implementation in command handlers (as you've shown). I will typically place any 'hooks' into business rules, etc. that validate that the operation MAY be performed in the command handler. I actually follow your approach of injecting validators into the command handler, which allows me to extend/replace the rule set without making changes to the handler. These 'dynamic' rules allow me to define the business rules, such as what constitutes a valid e-mail address, before I change the state of the entity - further ensuring it does not go into an invalid state. But 'invalidity' in this case is defined by the business logic and, as you pointed out, is highly volatile.
Having come up through the CSLA ranks, I found this change difficult to adopt because it does seem to break encapsulation. But I argue that encapsulation is not broken if you take a step back and ask what role validation truly serves in the model.
I've found these nuances to be very important in keeping my head clear on this subject. There is validation to prevent bad data (e.g. missing arguments, null values, empty strings, etc.) that belongs in the method itself, and there is validation to ensure the business rules are enforced. In the case of the former, if the Customer must have an e-mail address, then the only rule I need to be concerned about to prevent my domain object from becoming invalid is to ensure that an e-mail address has been provided to the ChangeEmail method. The other rules are higher-level concerns regarding the validity of the value itself and really have no effect on the validity of the domain entity itself.
This has been the source of a lot of 'discussions' with fellow developers but when most take a broader view and investigate the role validation really serves, they tend to see the light.
Finally, there is also a place for UI validation (and by UI I mean whatever serves as the interface to the application, be it a screen, service endpoint or whatever). I find it perfectly reasonable to duplicate some of the logic in the UI to provide better interactivity for the user. But it is because this validation serves that single purpose that I allow such duplication. However, using injected validator/specification objects promotes reuse in this way without the negative implications of having these rules defined in multiple locations.
Not sure if that helps or not...
I wouldn't suggest throwing big pieces of validation code into your domain. We eliminated most of our awkwardly placed validations by seeing them as a smell of missing concepts in our domain. In the sample code you wrote, I see validation for an e-mail address. A Customer doesn't have anything to do with email validation.
Why not make a value object called Email that does this validation at construction?
My experience is that awkwardly placed validations are hints at missing concepts in your domain. You can catch them in Validator objects, but I prefer a value object because you make the related concept part of your domain.
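A minimal sketch of such an Email value object (the validation shown is deliberately simplistic):

public class Email
{
    public string Value { get; }

    public Email(string value)
    {
        // Construction is the only way to obtain an Email, so an invalid address can never exist.
        if (string.IsNullOrWhiteSpace(value) || !value.Contains("@"))
            throw new ArgumentException("Not a valid e-mail address.", nameof(value));
        Value = value;
    }
}

A Customer.ChangeEmail(Email email) method then never has to re-validate the format itself.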
I am at the beginning of a project and I am going to implement my validation outside my domain entities. My domain entities will contain logic to protect any invariants (such as missing arguments, null values, empty strings, empty collections, etc.). But the actual business rules will live in validator classes. I am of the mindset of @SonOfPirate...
I am using FluentValidation, which essentially gives me a bunch of validators that act on my domain entities: aka, the specification pattern. Also, in accordance with the patterns described in Eric's blue book, I can construct the validators with any data they may need to perform the validations (be it from the database or another repository or service). I would also have the option to inject any dependencies here too. I can also compose and reuse these validators (e.g. an address validator can be reused in both an Employee validator and a Company validator). I have a Validator factory that acts as a "service locator":
public class ParticipantService : IParticipantService
{
public void Save(Participant participant)
{
IValidator<Participant> validator = _validatorFactory.GetValidator<Participant>();
var results = validator.Validate(participant);
//if the participant is valid, register the participant with the unit of work
if (results.IsValid)
{
if (participant.IsNew)
{
_unitOfWork.RegisterNew<Participant>(participant);
}
else if (participant.HasChanged)
{
_unitOfWork.RegisterDirty<Participant>(participant);
}
}
else
{
_unitOfWork.RollBack();
//do something here to indicate the errors: generate an exception (or fault) that contains the validation errors, or return the results
}
}
}
And the validator would contain code, something like this:
public class ParticipantValidator : AbstractValidator<Participant>
{
public ParticipantValidator(DateTime today, int ageLimit, List<string> validCompanyCodes, /*any other stuff you need*/)
{...}
public void BuildRules()
{
RuleFor(participant => participant.DateOfBirth)
.NotNull()
.LessThan(m_today.AddYears(m_ageLimit*-1))
.WithMessage(string.Format("Participant must be older than {0} years of age.", m_ageLimit));
RuleFor(participant => participant.Address)
.NotNull()
.SetValidator(new AddressValidator());
RuleFor(participant => participant.Email)
.NotEmpty()
.EmailAddress();
...
}
}
We have to support more than one type of presentation: websites, WinForms and bulk loading of data via services. Underpinning all of these is a set of services that expose the functionality of the system in a single and consistent way. We do not use Entity Framework or an ORM, for reasons that I will not bore you with.
Here is why I like this approach:
The business rules that are contained in the validators are totally unit testable.
I can compose more complex rules from simpler rules
I can use the validators in more than one location in my system (we support websites and Winforms, and services that expose functionality), so if there is a slightly different rule required for a use case in a service that differs from the websites, then I can handle that.
All the validation is expressed in one location and I can choose how / where to inject and compose this.
You put validation in the wrong place.
You should use ValueObjects for such things.
Watch this presentation http://www.infoq.com/presentations/Value-Objects-Dan-Bergh-Johnsson
It will also teach you about Data as Centers of Gravity.
There is also a sample of how to reuse data validation, for example using static validation methods à la Email.IsValid(string).
I would not call a class which inherits from EntityBase my domain model since it couples it to your persistence layer. But that's just my opinion.
I would not move the email validation logic from the Customer to anything else to follow the Open/Closed principle. To me, following open/closed would mean that you have the following hierarchy:
public class User
{
// some basic validation
public virtual void ChangeEmail(string email);
}
public class Employee : User
{
// validates internal email
public override void ChangeEmail(string email);
}
public class Customer : User
{
// validate external email addresses.
public override void ChangeEmail(string email);
}
Your suggestion moves the control from the domain model to an arbitrary class, hence breaking the encapsulation. I would rather refactor my class (Customer) to comply with the new business rules than do that.
Use domain events to trigger other parts of the system to get a more loosely coupled architecture, but don't use commands/events to violate the encapsulation.
Exceptions
I just noticed that you throw DomainException. That's a way too generic exception. Why don't you use the argument exceptions or FormatException? They describe the error much better. And don't forget to include context information to help you prevent the exception in the future.
Update
Placing the logic outside the class is asking for trouble imho. How do you control which validation rule is used? One part of the code might use SomeVeryOldRule when validating while another uses NewAndVeryStrictRule. It might not be on purpose, but it can and will happen when the code base grows.
It sounds like you have already decided to ignore one of the OOP fundamentals (encapsulation). Go ahead and use a generic / external validation framework, but don't say that I didn't warn you ;)
Update2
Thanks for your patience and your answers; that's the reason why I posted this question. I feel the same - an entity should be responsible for guaranteeing it's in a valid state (and I have done it that way in previous projects) - but the benefits of placing the validation in individual objects are huge, and like I posted there's even a way to use individual objects and keep the encapsulation. Personally I am not so happy with that design, but on the other hand it is not off the table; consider this: ChangeEmail(IEnumerable<IValidator<Customer>> validators, string email). I have not thought through the implementation in detail, though.
That allows the programmer to specify any rules, which may or may not be the currently correct business rules. The developer could just write:
customer.ChangeEmail(new IValidator<Customer>[] { new NonValidatingRule<Customer>() }, "notAnEmail")
which accepts everything. And the rules have to be specified in every single place where ChangeEmail is being called.
If you want to use a rule engine, create a singleton proxy:
public class Validator
{
private static IValidatorEngine _engine;
public static void Assign(IValidatorEngine engine)
{
_engine = engine;
}
public static IValidatorEngine Current { get { return _engine; } }
}
.. and use it from within the domain model methods like
public class Customer
{
public void ChangeEmail(string email)
{
var rules = Validator.Current.GetRulesFor<Customer>("ChangeEmail");
rules.Validate(email);
// valid
}
}
The problem with that solution is that it will become a maintenance nightmare since the rule dependencies are hidden. You can never tell if all rules have been specified and working unless you test every domain model method and each rule scenario for every method.
The solution is more flexible but will imho take a lot more time to implement than to refactor the method whose business rules got changed.
I cannot say what I did is the perfect thing to do, for I am still struggling with this problem myself and fighting one fight at a time. But so far I have been doing the following:
I have basic classes for encapsulating validation:
public interface ISpecification<TEntity> where TEntity : class, IAggregate
{
bool IsSatisfiedBy(TEntity entity);
}
internal class AndSpecification<TEntity> : ISpecification<TEntity> where TEntity: class, IAggregate
{
private ISpecification<TEntity> Spec1;
private ISpecification<TEntity> Spec2;
internal AndSpecification(ISpecification<TEntity> s1, ISpecification<TEntity> s2)
{
Spec1 = s1;
Spec2 = s2;
}
public bool IsSatisfiedBy(TEntity candidate)
{
return Spec1.IsSatisfiedBy(candidate) && Spec2.IsSatisfiedBy(candidate);
}
}
internal class OrSpecification<TEntity> : ISpecification<TEntity> where TEntity : class, IAggregate
{
private ISpecification<TEntity> Spec1;
private ISpecification<TEntity> Spec2;
internal OrSpecification(ISpecification<TEntity> s1, ISpecification<TEntity> s2)
{
Spec1 = s1;
Spec2 = s2;
}
public bool IsSatisfiedBy(TEntity candidate)
{
return Spec1.IsSatisfiedBy(candidate) || Spec2.IsSatisfiedBy(candidate);
}
}
internal class NotSpecification<TEntity> : ISpecification<TEntity> where TEntity : class, IAggregate
{
private ISpecification<TEntity> Wrapped;
internal NotSpecification(ISpecification<TEntity> x)
{
Wrapped = x;
}
public bool IsSatisfiedBy(TEntity candidate)
{
return !Wrapped.IsSatisfiedBy(candidate);
}
}
public static class SpecsExtensionMethods
{
public static ISpecification<TEntity> And<TEntity>(this ISpecification<TEntity> s1, ISpecification<TEntity> s2) where TEntity : class, IAggregate
{
return new AndSpecification<TEntity>(s1, s2);
}
public static ISpecification<TEntity> Or<TEntity>(this ISpecification<TEntity> s1, ISpecification<TEntity> s2) where TEntity : class, IAggregate
{
return new OrSpecification<TEntity>(s1, s2);
}
public static ISpecification<TEntity> Not<TEntity>(this ISpecification<TEntity> s) where TEntity : class, IAggregate
{
return new NotSpecification<TEntity>(s);
}
}
and to use it, I do the following:
command handler:
public class MyCommandHandler : CommandHandler<MyCommand>
{
public override CommandValidation Execute(MyCommand cmd)
{
Contract.Requires<ArgumentNullException>(cmd != null);
var existingAR = Repository.GetById<MyAggregate>(cmd.Id);
if (existingAR.IsNull())
throw new HandlerForDomainEventNotFoundException();
existingAR.DoStuff(cmd.Id
, cmd.Date
...
);
Repository.Save(existingAR, cmd.GetCommitId());
return existingAR.CommandValidationMessages;
}
}
the aggregate:
public void DoStuff(Guid id, DateTime dateX,DateTime start, DateTime end, ...)
{
var is_date_valid = new Is_dateX_valid(dateX);
var has_start_date_greater_than_end_date = new Has_start_date_greater_than_end_date(start, end);
ISpecification<MyAggregate> specs = is_date_valid .And(has_start_date_greater_than_end_date );
if (specs.IsSatisfiedBy(this))
{
var evt = new AgregateStuffed()
{
Id = id
, DateX = dateX
, End = end
, Start = start
, ...
};
RaiseEvent(evt);
}
}
the specification is now embedded in these two classes:
public class Is_dateX_valid : ISpecification<MyAggregate>
{
private readonly DateTime _dateX;
public Is_dateX_valid(DateTime dateX)
{
Contract.Requires<ArgumentNullException>(dateX != DateTime.MinValue);
_dateX = dateX;
}
public bool IsSatisfiedBy(MyAggregate i)
{
if (_dateX> DateTime.Now)
{
i.CommandValidationMessages.Add(new ValidationMessage("datex greater than now"));
return false;
}
return true;
}
}
public class Has_start_date_greater_than_end_date : ISpecification<MyAggregate>
{
private readonly DateTime _start;
private readonly DateTime _end;
public Has_start_date_greater_than_end_date(DateTime start, DateTime end)
{
Contract.Requires<ArgumentNullException>(start != DateTime.MinValue);
Contract.Requires<ArgumentNullException>(end != DateTime.MinValue);
_start = start;
_end = end;
}
public bool IsSatisfiedBy(MyAggregate i)
{
if (_start > _end)
{
i.CommandValidationMessages.Add(new ValidationMessage("start date greater than end date"));
return false;
}
return true;
}
}
This allows me to reuse some validations for different aggregates, and it is easy to test. If you see any flaws in it, I would be really happy to discuss them.
yours,
From my OO experience (I am not a DDD expert) moving your code from the entity to a higher abstraction level (into a command handler) will cause code duplication. This is because every time a command handler gets an email address, it has to instantiate email validation rules. This kind of code will rot after a while, and it will smell very badly. In the current example it might not, if you don't have another command which changes the email address, but in other situations it surely will...
If you don't want to move the rules back to a lower abstraction level, like the entity or an email value object, then I strongly suggest you to reduce the pain by grouping the rules. So in your email example the following 3 rules:
if(string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
if(!email.IsEmail()) throw new DomainException();
if(email.Contains("@mailinator.com")) throw new DomainException();
can be part of an EmailValidationRule group which you can reuse more easily.
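A sketch of what that grouping might look like, reusing the DomainException and IsEmail() helper from the question (the messages are invented):

public class EmailValidationRule
{
    public void Check(string email)
    {
        if (string.IsNullOrWhiteSpace(email)) throw new DomainException("E-mail is required.");
        if (!email.IsEmail()) throw new DomainException("Not a valid e-mail address.");
        if (email.Contains("@mailinator.com")) throw new DomainException("Disposable e-mail domains are not allowed.");
    }
}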
From my point of view there is no explicit answer to the question of where to put the validation logic. It can be part of every object, depending on the abstraction level. In your current case the formal checking of the email address can be part of an EmailValueObject, and the mailinator rule can be part of a higher abstraction level concept in which you state that your user cannot have an email address pointing to that domain. So for example if somebody wants to contact your user without registering, then you can check her email against formal validation, but you don't have to check her email against the mailinator rule. And so on...
So I completely agree with @pjvds, who claimed that this kind of awkwardly placed validation is a sign of a bad design. I don't think you will have any gain by breaking encapsulation, but it's your choice and it will be your pain.
The validation in your example is validation of a value object, not an entity (or aggregate root).
I would separate the validation into distinct areas.
Validate internal characteristics of the Email value object internally.
I adhere to the rule that aggregates should never be in an invalid state. I extend this principle to value objects where practical.
Use createNew() to instantiate an email from user input. This forces it to be valid according to your current rules (the "user@email.com" format, for example).
Use createExisting() to instantiate an email from persistent storage. This performs no validation, which is important - you don't want an exception to be thrown for a stored email that was valid yesterday but invalid today.
class Email
{
private String value_;
// Error codes
const Error E_LENGTH = "An email address must be at least 3 characters long.";
const Error E_FORMAT = "An email address must be in the 'user#email.com' format.";
// Private constructor, forcing the use of factory functions
private Email(String value)
{
this.value_ = value;
}
// Factory functions
static public Email createNew(String value)
{
validateLength(value, E_LENGTH);
validateFormat(value, E_FORMAT);
return new Email(value);
}
static public Email createExisting(String value)
{
return new Email(value);
}
// Static validation methods
static public void validateLength(String value, Error error = E_LENGTH)
{
if (value.length() < 3)
{
throw new DomainException(error);
}
}
static public void validateFormat(String value, Error error = E_FORMAT)
{
if (/* regular expression fails */)
{
throw new DomainException(error);
}
}
}
Validate "external" characteristics of the Email value object externally, e.g., in a service.
class EmailDnsValidator implements IEmailValidator
{
const E_MX_MISSING = "The domain of your email address does not have an MX record.";
private DnsProvider dnsProvider_;
EmailDnsValidator(DnsProvider dnsProvider)
{
dnsProvider_ = dnsProvider;
}
public void validate(String value, Error error = E_MX_MISSING)
{
if (!dnsProvider_.hasMxRecord(/* domain part of email address */))
{
throw new DomainException(error);
}
}
}
class EmailDomainBlacklistValidator implements IEmailValidator
{
const Error E_DOMAIN_FORBIDDEN = "The domain of your email address is blacklisted.";
public void validate(String value, Error error = E_DOMAIN_FORBIDDEN)
{
if (/* domain of value is on the blacklist */)
{
throw new DomainException(error);
}
}
}
Advantages:
Use of the createNew() and createExisting() factory functions allow control over internal validation.
It is possible to "opt out" of certain validation routines, e.g., skip the length check, using the validation methods directly.
It is also possible to "opt out" of external validation (DNS MX records and domain blacklisting). E.g., a project I worked on initially validated the existence of MX records for a domain, but eventually removed this because of the number of customers using "dynamic IP" type solutions.
It is easy to query your persistent store for email addresses that do not fit the current validation rules by running a simple query and treating each email as "new" rather than "existing" - if an exception is thrown, there's a problem. From there you can issue, for example, a FlagCustomerAsHavingABadEmail command, using the exception error message as guidance for the user when they see the message.
Allowing the programmer to supply the error code provides flexibility. For example, when sending a UpdateEmailAddress command, the error of "Your email address must be at least 3 characters long" is self explanatory. However, when updating multiple email addresses (home and work), the above error message does not indicate WHICH email was wrong. Supplying the error code/message allows you to provide richer feedback to the end user.
I wrote a blog post on this topic a while back. The premise of the post was that there are different types of validation. I called them Superficial Validation and Domain Based Command Validation.
This simple version is this. Validating things like 'is it a number' or 'email address' are more often than not just superficial. These can be done before the command reaches the domain entities.
However, where the validation is more tied to the domain then it's right place is in the domain. For example, maybe you have some rules about the weight and type of cargo a certain lorry can take. This sounds much more like domain logic.
Then you have the hybrid types. Things like set based validation. These need to happen before the command is issued or injected into the domain (try to avoid that if at all possible - limiting dependencies is a good thing).
Anyway, you can read the full post here: How To Validate Commands in a CQRS Application
I'm still experimenting with this concept, but you can try decorators. If you use SimpleInjector you can easily inject your own validation classes that run ahead of your command handler; the handler can then assume the command is valid if it got that far. However, this means all validation should be done on the command and not the entities, so the entities won't go into an invalid state. Each command must implement its own validation fully, so similar commands may have duplicated rules, but you could either abstract common rules to share or treat different commands as truly separate.
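A sketch of that decorator idea, assuming the ICommandHandler<> abstraction from the question and a hypothetical ICommandValidator<> abstraction (FluentValidation's IValidator<T> could play that role instead):

// Hypothetical validator abstraction: throws (or collects errors) when the command is invalid.
public interface ICommandValidator<TCommand>
{
    void Validate(TCommand command);
}

public class ValidationCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _decorated;
    private readonly IEnumerable<ICommandValidator<TCommand>> _validators;

    public ValidationCommandHandlerDecorator(ICommandHandler<TCommand> decorated,
        IEnumerable<ICommandValidator<TCommand>> validators)
    {
        _decorated = decorated;
        _validators = validators;
    }

    public void Execute(TCommand command)
    {
        // All registered validators run before the real handler ever sees the command.
        foreach (var validator in _validators)
            validator.Validate(command);
        _decorated.Execute(command);
    }
}

// Registration with SimpleInjector (assuming handlers are registered against ICommandHandler<>):
// container.RegisterDecorator(typeof(ICommandHandler<>), typeof(ValidationCommandHandlerDecorator<>));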
You can use a message based solution with Domain Events as explained here.
Exceptions are not the right mechanism for all validation errors; an invalid entity is not necessarily an exceptional case.
If the validation is not trivial, the logic to validate the aggregate can be executed directly on the server, and while you are trying to set the new input you can raise a Domain Event to tell the user (or the application that is using your domain) why the input is not correct.
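A rough sketch of that idea, raising an event instead of throwing (the event name, RaiseEvent and the aggregate's Id are assumed here, following the earlier examples in this thread):

public class EmailChangeRejected   // domain event describing why the input was not accepted
{
    public Guid CustomerId { get; set; }
    public string Reason { get; set; }
}

public void ChangeEmail(string email)
{
    if (email.Contains("@mailinator.com"))
    {
        RaiseEvent(new EmailChangeRejected { CustomerId = this.Id, Reason = "Disposable e-mail domains are not allowed." });
        return;
    }
    // otherwise apply the change and raise the usual "email changed" event
}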

a basic issue in implementing validations through properties ? Please guide me

thanks for your attention and time.
I want to implement validations in the setters of properties. Here is an issue where your expert help is required, please.
I have an idea of how I will do the validations before setting a value, but I am not sure what to do if the passed value is not correct. Just not setting it is not an acceptable solution, as I want to return an appropriate message to the user (in a label on a web form). My example code is:
private int id;
public int Id
{
get
{ return id; }
set
{
bool result = IsNumber(value);
if (result==false)
{
// What to do if passed data is not valid ? how to give a appropriate message to user that what is wrong ?
}
id = value;
}
}
A thought was to use return, but it is not allowed.
Throwing an exception doesn't look good, as generally we avoid throwing custom errors.
Please guide and help me.
thanks in anticipation
haansi
You could consider throwing an appropriate exception from the property setter. That way it will be clear to the calling party what went wrong, especially assuming you have business rules with respect to setting properties. Of course you expect the caller to do validations; if there is still a problem, then throwing an exception doesn't seem that bad.
"It is valid and acceptable to throw exceptions from a property setter."
Property design guidelines
Best practices: throwing exceptions from properties
What exception to throw from a property setter?
I think you'd better change to another example because:
public int Id
{
get { ... }
set
{
if (!IsNumber(value)) // changes to if (value > 5)
{
//the code here will never be executed
id = value;
}
}
}
If the check is only about the number (type), then your property can very well handle the type safety. E.g. a user won't be able to assign a string to a property accepting an int.
I would suggest that if a property involves certain computations, then one should consider using a method instead. In that case, you will have the option to get some text in return.
One more option is to store all these validation checks in an instance collection (inside the same object). Like:
private List<string> _failedValidations;
//more code
set
{
if (!IsNumber(value))
{
_failedValidations.Add("Invalid value for xxx. Expected... got..");
}
else{
id = value;
}
}
And then, your GUI can read the FailedValidations collection in the end, and display it in a formatted way in some label.
Edit: one more option below.
Sorry, I forgot to mention this before.
You can use an event driven approach also.
Your object can expose an event like "ValidationFailed", and all the GUI objects can subscribe to this event. The object will trigger this event if any validation fails in the setters.
set {
if (!IsNumber(value))
{
RaiseValidationFailed("some message");
}
else{
id = value;
}
}
"RaiseValidationFailed" can collect the message, wrap it up in some event args and trigger "ValidationFailed" event with the message. Then GUI can react to this.
(I can provide full code for this if it's not clear.)
I would argue that you should rethink your approach to validation. The approach that you are suggesting means that every time a property changes, a message will be generated.
How will these messages be collected, stored and presented? Especially if you decide to use your classes in a website?
What if you want to validate your class at any other time?
What if you want to use the same rules in client side validation?
I can understand the appeal of catching an invalid value as early as possible, but it is a lot easier to validate an entire class in one call such as a Validate() method. This way, you have full control over when the validation logic is run.
I would recommend you read up on the two leading approaches to property validation:
Data Annotations
Fluent Validation for .NET 3.0 and above
Fluent Validation for .NET 2.0
Both Data Annotations and FluentValidation are easy to use and they are able to generate well-tested client side validation on web forms and win forms.
In Data Annotations, the validation is added to the properties using attributes. If you prefer to keep your data classes as clean data transfer objects, Fluent validation involves the creation of easily readable rules in Validator classes.
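For comparison, a minimal sketch of the whole-object Validate() approach mentioned earlier (the class and rules are only illustrative):

using System.Collections.Generic;

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }

    // Validate the entire object in one call; the caller decides how to
    // present the messages (label, validation summary, exception, ...).
    public IList<string> Validate()
    {
        var errors = new List<string>();
        if (Id <= 0)
            errors.Add("Id must be a positive number.");
        if (string.IsNullOrWhiteSpace(Name))
            errors.Add("Name is required.");
        return errors;
    }
}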
