Domain Validation in a CQRS architecture - C#

Danger ... Danger Dr. Smith... Philosophical post ahead
The purpose of this post is to determine whether placing the validation logic outside of my domain entities (the aggregate root, actually) is actually granting me more flexibility, or whether it's kamikaze code.
Basically, I want to know if there is a better way to validate my domain entities. This is how I am planning to do it, but I would like your opinion.
The first approach I considered was:
class Customer : EntityBase<Customer>
{
    public void ChangeEmail(string email)
    {
        if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
        if (!email.IsEmail()) throw new DomainException();
        if (email.Contains("@mailinator.com")) throw new DomainException();
    }
}
I actually do not like this validation because, even though I am encapsulating the validation logic in the correct entity, it violates the Open/Closed principle (open for extension but closed for modification), and I have found that when this principle is violated, code maintenance becomes a real pain as the application grows in complexity. Why? Because domain rules change more often than we would like to admit, and if the rules are hidden and embedded in an entity like this, they are hard to test, hard to read, and hard to maintain. But the real reason I do not like this approach is: if the validation rules change, I have to come back and edit my domain entity. This has been a really simple example, but in real life the validation could be much more complex.
So, following Udi Dahan's philosophy of making roles explicit, and the recommendation from Eric Evans in the blue book, the next try was to implement the specification pattern, something like this:
class EmailDomainIsAllowedSpecification : IDomainSpecification<Customer>
{
    private INotAllowedEmailDomainsResolver invalidEmailDomainsResolver;

    public bool IsSatisfiedBy(Customer customer)
    {
        return !this.invalidEmailDomainsResolver.GetInvalidEmailDomains().Contains(customer.Email);
    }
}
But then I realized that, in order to follow this approach, I had to mutate my entities first in order to pass the value being validated (in this case the email), and mutating them would cause my domain events to be fired, which I wouldn't want to happen until the new email is valid.
So after considering these approaches, I came up with this one, since I am going to implement a CQRS architecture:
class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
{
    public void IsValid(Customer entity, ChangeEmailCommand command)
    {
        if (!command.Email.HasValidDomain()) throw new DomainException("...");
    }
}
Well, that's the main idea: the entity is passed to the validator in case we need some value from the entity to perform the validation, the command contains the data coming from the user, and since the validators are injectable objects, they can have external dependencies injected if the validation requires it.
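For instance, a validator that needs external data could take the resolver from the earlier specification example through its constructor. A minimal sketch (the constructor injection and the domain parsing are my assumptions, not part of the original design):

using System.Linq;

class EmailDomainIsAllowedValidator : IDomainInvariantValidator<Customer, ChangeEmailCommand>
{
    private readonly INotAllowedEmailDomainsResolver resolver;

    // The container injects the external dependency, so the rule data
    // (e.g. a blacklist kept in a database) can change without touching the entity.
    public EmailDomainIsAllowedValidator(INotAllowedEmailDomainsResolver resolver)
    {
        this.resolver = resolver;
    }

    public void IsValid(Customer entity, ChangeEmailCommand command)
    {
        var domain = command.Email.Split('@').Last();
        if (this.resolver.GetInvalidEmailDomains().Contains(domain))
            throw new DomainException("The email domain is not allowed.");
    }
}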
Now the dilemma: I am happy with a design like this because my validation is encapsulated in individual objects, which brings many advantages: easy to unit test, easy to maintain, domain invariants are explicitly expressed using the ubiquitous language, easy to extend, validation logic is centralized, and validators can be combined to enforce complex domain rules. And even though I know I am placing the validation of my entities outside of them (you could argue it's a code smell: an anemic domain), I think the trade-off is acceptable.
But there is one thing that I have not figured out how to implement in a clean way: how should I use these components?
Since they will be injected, they won’t fit naturally inside my domain entities, so basically I see two options:
Pass the validators to each method of my entity
Validate my objects externally (from the command handler)
I am not happy with option 1, so I will explain how I would do it with option 2:
class ChangeEmailCommandHandler : ICommandHandler<ChangeEmailCommand>
{
    // here I would get the validators required for this command injected
    private IEnumerable<IDomainInvariantValidator> validators;

    public void Execute(ChangeEmailCommand command)
    {
        using (var t = this.unitOfWork.BeginTransaction())
        {
            var customer = this.unitOfWork.Get<Customer>(command.CustomerId);

            // here I would validate them, something like this
            this.validators.ForEach(x => x.IsValid(customer, command));

            // here I know the command is valid
            // the call to ChangeEmail will fire domain events as needed
            customer.ChangeEmail(command.Email);
            t.Commit();
        }
    }
}
Well, this is it. Can you give me your thoughts about this, or share your experiences with domain entity validation?
EDIT
I think it is not clear from my question, but the real problem is: hiding the domain rules has serious implications for the future maintainability of the application, and domain rules change often during the life-cycle of the app. Hence, implementing them with this in mind would let us extend them easily. Now imagine that in the future a rules engine is implemented: if the rules are encapsulated outside of the domain entities, this change would be much easier to make.
I am aware that placing the validation outside of my entities breaks encapsulation, as @jgauffin mentioned in his answer, but I think that the benefits of placing the validation in individual objects are much more substantial than just keeping the encapsulation of an entity. Now, I think encapsulation made more sense in a traditional n-tier architecture, because the entities were used in several places of the domain layer; but in a CQRS architecture, when a command arrives, there will be a command handler accessing an aggregate root and performing operations against that aggregate root only, creating a perfect window to place the validation.
I'd like to make a small comparison between the advantages of placing validation inside an entity vs. placing it in individual objects.
Validation in Individual objects
Pro. Easy to write
Pro. Easy to test
Pro. It's explicitly expressed
Pro. It becomes part of the Domain design, expressed with the current Ubiquitous Language
Pro. Since it's now part of the design, it can be modeled using UML diagrams
Pro. Extremely easy to maintain
Pro. Makes my entities and the validation logic loosely coupled
Pro. Easy to extend
Pro. Following the SRP
Pro. Following the Open/Closed principle
Pro. Not breaking the Law of Demeter (mmm?)
Pro. It's centralized
Pro. It could be reusable
Pro. If required, external dependencies can be easily injected
Pro. If using a plug-in model, new validators can be added just by dropping the new assemblies without the need to re-compile the whole application
Pro. Implementing a rules engine would be easier
Con. Breaking encapsulation
Con. If encapsulation is mandatory, we would have to pass the individual validators to the entity (aggregate) method
Validation encapsulated inside the entity
Pro. Encapsulated?
Pro. Reusable?
I would love to read your thoughts about this

I agree with a number of the concepts presented in other responses; here is how I put them together in my code.
First, I agree that using Value Objects for values that include behavior is a great way to encapsulate common business rules, and an e-mail address is a perfect candidate. However, I tend to limit this to rules that are constant and will not change frequently. I'm sure you are looking for a more general approach and e-mail is just an example, so I won't focus on that one use case.
The key to my approach is recognizing that validation serves different purposes at different locations in an application. Put simply, validate only what is required to ensure that the current operation can execute without unexpected/unintended results. That leads to the question: what validation should occur where?
In your example, I would ask myself whether the domain entity really cares that the e-mail address conforms to some pattern and other rules, or whether we simply care that 'email' cannot be null or blank when ChangeEmail is called. If the latter, then a simple check to ensure a value is present is all that is needed in the ChangeEmail method.
In CQRS, all changes that modify the state of the application occur as commands, with the implementation in command handlers (as you've shown). I will typically place any 'hooks' into business rules, etc. that validate that the operation MAY be performed in the command handler. I actually follow your approach of injecting validators into the command handler, which allows me to extend/replace the rule set without making changes to the handler. These 'dynamic' rules allow me to define the business rules, such as what constitutes a valid e-mail address, before I change the state of the entity, further ensuring it does not go into an invalid state. But 'invalidity' in this case is defined by the business logic and, as you pointed out, is highly volatile.
Having come up through the CSLA ranks, I found this change difficult to adopt because it does seem to break encapsulation. But I argue that encapsulation is not broken if you take a step back and ask what role validation truly serves in the model.
I've found these nuances to be very important in keeping my head clear on this subject. There is validation to prevent bad data (e.g. missing arguments, null values, empty strings, etc.) that belongs in the method itself, and there is validation to ensure the business rules are enforced. In the case of the former, if the Customer must have an e-mail address, then the only rule I need to be concerned about to prevent my domain object from becoming invalid is to ensure that an e-mail address has been provided to the ChangeEmail method. The other rules are higher-level concerns regarding the validity of the value itself and really have no effect on the validity of the domain entity itself.
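To illustrate the split, here is a hedged sketch (my own, not the poster's code): the entity guards only against bad data, while the volatile business rules stay in the injected validators shown in the question's command handler:

class Customer : EntityBase<Customer>
{
    public string Email { get; private set; }

    public void ChangeEmail(string email)
    {
        // Validation to prevent bad data: the entity only cares that a value is present.
        if (string.IsNullOrWhiteSpace(email))
            throw new ArgumentException("An e-mail address must be provided.", "email");

        this.Email = email;
        // raise domain events as needed
    }
}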
This has been the source of a lot of 'discussions' with fellow developers but when most take a broader view and investigate the role validation really serves, they tend to see the light.
Finally, there is also a place for UI validation (and by UI I mean whatever serves as the interface to the application, be it a screen, service endpoint or whatever). I find it perfectly reasonable to duplicate some of the logic in the UI to provide better interactivity for the user. But it is because this validation serves that single purpose that I allow such duplication. However, using injected validator/specification objects promotes reuse in this way without the negative implications of having these rules defined in multiple locations.
Not sure if that helps or not...

I wouldn't suggest throwing big pieces of code into your domain for validation. We eliminated most of our awkwardly placed validations by seeing them as a smell of missing concepts in our domain. In the sample code you posted, I see validation for an e-mail address. A Customer doesn't have anything to do with email validation.
Why not make a Value Object called Email that does this validation at construction?
My experience is that awkwardly placed validations are hints of missed concepts in your domain. You can catch them in validator objects, but I prefer a value object because it makes the related concept part of your domain.
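A minimal sketch of what such a value object might look like, using the rules from the question (the class shape is illustrative):

public sealed class Email
{
    public string Value { get; private set; }

    public Email(string value)
    {
        // All rules are enforced at construction, so an Email
        // instance can never exist in an invalid state.
        if (string.IsNullOrWhiteSpace(value))
            throw new DomainException("An e-mail address is required.");
        if (!value.Contains("@"))
            throw new DomainException("Not a valid e-mail address.");
        if (value.EndsWith("@mailinator.com"))
            throw new DomainException("Disposable e-mail domains are not allowed.");

        Value = value;
    }
}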

I am at the beginning of a project and I am going to implement my validation outside my domain entities. My domain entities will contain logic to protect any invariants (such as missing arguments, null values, empty strings, empty collections, etc.). But the actual business rules will live in validator classes. I am of the mindset of @SonOfPirate...
I am using FluentValidation, which essentially gives me a bunch of validators that act on my domain entities: aka the specification pattern. Also, in accordance with the patterns described in Eric's blue book, I can construct the validators with any data they may need to perform the validations (be it from the database or another repository or service). I also have the option to inject any dependencies here. I can also compose and reuse these validators (e.g. an address validator can be reused in both an Employee validator and a Company validator). I have a validator factory that acts as a "service locator":
public class ParticipantService : IParticipantService
{
    public void Save(Participant participant)
    {
        IValidator<Participant> validator = _validatorFactory.GetValidator<Participant>();
        var results = validator.Validate(participant);

        // if the participant is valid, register the participant with the unit of work
        if (results.IsValid)
        {
            if (participant.IsNew)
            {
                _unitOfWork.RegisterNew<Participant>(participant);
            }
            else if (participant.HasChanged)
            {
                _unitOfWork.RegisterDirty<Participant>(participant);
            }
        }
        else
        {
            _unitOfWork.RollBack();
            // do something here to indicate the errors: generate an exception (or fault)
            // that contains the validation errors, or return the results
        }
    }
}
And the validator would contain code, something like this:
public class ParticipantValidator : AbstractValidator<Participant>
{
    public ParticipantValidator(DateTime today, int ageLimit, List<string> validCompanyCodes /* any other stuff you need */)
    { ... }

    public void BuildRules()
    {
        RuleFor(participant => participant.DateOfBirth)
            .NotNull()
            .LessThan(m_today.AddYears(m_ageLimit * -1))
            .WithMessage(string.Format("Participant must be older than {0} years of age.", m_ageLimit));

        RuleFor(participant => participant.Address)
            .NotNull()
            .SetValidator(new AddressValidator());

        RuleFor(participant => participant.Email)
            .NotEmpty()
            .EmailAddress();
        ...
    }
}
We have to support more than one type of presentation: websites, WinForms, and bulk loading of data via services. Underpinning all these is a set of services that expose the functionality of the system in a single and consistent way. We do not use Entity Framework or any ORM, for reasons that I will not bore you with.
Here is why I like this approach:
The business rules that are contained in the validators are totally unit testable.
I can compose more complex rules from simpler rules
I can use the validators in more than one location in my system (we support websites and WinForms, and services that expose functionality), so if there is a slightly different rule required for a use case in a service than for the websites, then I can handle that.
All the validation is expressed in one location and I can choose how / where to inject and compose this.

You put validation in the wrong place.
You should use ValueObjects for such things.
Watch this presentation http://www.infoq.com/presentations/Value-Objects-Dan-Bergh-Johnsson
It will also teach you about Data as Centers of Gravity.
There is also a sample of how to reuse data validation, for example by using static validation methods à la Email.IsValid(string).

I would not call a class which inherits from EntityBase my domain model, since that couples it to your persistence layer. But that's just my opinion.
I would not move the email validation logic from the Customer to anything else to follow the Open/Closed principle. To me, following Open/Closed would mean that you have the following hierarchy:
public class User
{
    // some basic validation
    public virtual void ChangeEmail(string email);
}

public class Employee : User
{
    // validates internal email
    public override void ChangeEmail(string email);
}

public class Customer : User
{
    // validates external email addresses
    public override void ChangeEmail(string email);
}
Your suggestion moves the control from the domain model to an arbitrary class, hence breaking the encapsulation. I would rather refactor my class (Customer) to comply with the new business rules than do that.
Use domain events to trigger other parts of the system to get a more loosely coupled architecture, but don't use commands/events to violate the encapsulation.
Exceptions
I just noticed that you throw DomainException. That's a way too generic exception. Why don't you use the argument exceptions or FormatException? They describe the error much better. And don't forget to include context information to help you prevent the exception in the future.
Update
Placing the logic outside the class is asking for trouble imho. How do you control which validation rule is used? One part of the code might use SomeVeryOldRule when validating while another uses NewAndVeryStrictRule. It might not be on purpose, but it can and will happen when the code base grows.
It sounds like you have already decided to ignore one of the OOP fundamentals (encapsulation). Go ahead and use a generic / external validation framework, but don't say that I didn't warn you ;)
Update2
Thanks for your patience and your answers; that's the reason why I posted this question. I feel the same: an entity should be responsible for guaranteeing it's in a valid state (and I have done it that way in previous projects), but the benefits of placing validation in individual objects are huge, and as I posted, there's even a way to use individual objects and keep the encapsulation. Personally I am not so happy with that design, but on the other hand it is not off the table. Consider this: ChangeEmail(IEnumerable<IValidator<Customer>> validators, string email). I have not thought the implementation through in detail, though.
That allows the programmer to specify any rules, which may or may not be the currently correct business rules. The developer could just write
customer.ChangeEmail(new IValidator<Customer>[] { new NonValidatingRule<Customer>() }, "notAnEmail")
which accepts everything. And the rules have to be specified in every single place where ChangeEmail is being called.
If you want to use a rule engine, create a singleton proxy:
public class Validator
{
    private static IValidatorEngine _engine;

    public static void Assign(IValidatorEngine engine)
    {
        _engine = engine;
    }

    public static IValidatorEngine Current { get { return _engine; } }
}
... and use it from within the domain model methods like this:
public class Customer
{
    public void ChangeEmail(string email)
    {
        var rules = Validator.Current.GetRulesFor<Customer>("ChangeEmail");
        rules.Validate(email);
        // valid
    }
}
The problem with that solution is that it will become a maintenance nightmare, since the rule dependencies are hidden. You can never tell whether all rules have been specified and are working unless you test every domain model method and each rule scenario for every method.
The solution is more flexible, but will imho take a lot more time to implement than refactoring the method whose business rules got changed.

I cannot say that what I did is the perfect thing to do, for I am still struggling with this problem myself and fighting one fight at a time. But what I have been doing so far is the following.
I have basic classes for encapsulating validation:
public interface ISpecification<TEntity> where TEntity : class, IAggregate
{
    bool IsSatisfiedBy(TEntity entity);
}

internal class AndSpecification<TEntity> : ISpecification<TEntity> where TEntity : class, IAggregate
{
    private ISpecification<TEntity> Spec1;
    private ISpecification<TEntity> Spec2;

    internal AndSpecification(ISpecification<TEntity> s1, ISpecification<TEntity> s2)
    {
        Spec1 = s1;
        Spec2 = s2;
    }

    public bool IsSatisfiedBy(TEntity candidate)
    {
        return Spec1.IsSatisfiedBy(candidate) && Spec2.IsSatisfiedBy(candidate);
    }
}

internal class OrSpecification<TEntity> : ISpecification<TEntity> where TEntity : class, IAggregate
{
    private ISpecification<TEntity> Spec1;
    private ISpecification<TEntity> Spec2;

    internal OrSpecification(ISpecification<TEntity> s1, ISpecification<TEntity> s2)
    {
        Spec1 = s1;
        Spec2 = s2;
    }

    public bool IsSatisfiedBy(TEntity candidate)
    {
        return Spec1.IsSatisfiedBy(candidate) || Spec2.IsSatisfiedBy(candidate);
    }
}

internal class NotSpecification<TEntity> : ISpecification<TEntity> where TEntity : class, IAggregate
{
    private ISpecification<TEntity> Wrapped;

    internal NotSpecification(ISpecification<TEntity> x)
    {
        Wrapped = x;
    }

    public bool IsSatisfiedBy(TEntity candidate)
    {
        return !Wrapped.IsSatisfiedBy(candidate);
    }
}

public static class SpecsExtensionMethods
{
    public static ISpecification<TEntity> And<TEntity>(this ISpecification<TEntity> s1, ISpecification<TEntity> s2) where TEntity : class, IAggregate
    {
        return new AndSpecification<TEntity>(s1, s2);
    }

    public static ISpecification<TEntity> Or<TEntity>(this ISpecification<TEntity> s1, ISpecification<TEntity> s2) where TEntity : class, IAggregate
    {
        return new OrSpecification<TEntity>(s1, s2);
    }

    public static ISpecification<TEntity> Not<TEntity>(this ISpecification<TEntity> s) where TEntity : class, IAggregate
    {
        return new NotSpecification<TEntity>(s);
    }
}
And to use it, I do the following.
The command handler:
public class MyCommandHandler : CommandHandler<MyCommand>
{
    public override CommandValidation Execute(MyCommand cmd)
    {
        Contract.Requires<ArgumentNullException>(cmd != null);

        var existingAggregate = Repository.GetById<MyAggregate>(cmd.Id);
        if (existingAggregate.IsNull())
            throw new HandlerForDomainEventNotFoundException();

        existingAggregate.DoStuff(cmd.Id
            , cmd.Date
            ...
            );

        Repository.Save(existingAggregate, cmd.GetCommitId());
        return existingAggregate.CommandValidationMessages;
    }
}
The aggregate:
public void DoStuff(Guid id, DateTime dateX, DateTime start, DateTime end, ...)
{
    var is_date_valid = new Is_dateX_valid(dateX);
    var has_start_date_greater_than_end_date = new Has_start_date_greater_than_end_date(start, end);

    ISpecification<MyAggregate> specs = is_date_valid.And(has_start_date_greater_than_end_date);

    if (specs.IsSatisfiedBy(this))
    {
        var evt = new AggregateStuffed()
        {
            Id = id
            , DateX = dateX
            , End = end
            , Start = start
            , ...
        };

        RaiseEvent(evt);
    }
}
The specification is now embedded in these two classes:
public class Is_dateX_valid : ISpecification<MyAggregate>
{
    private readonly DateTime _dateX;

    public Is_dateX_valid(DateTime dateX)
    {
        Contract.Requires<ArgumentNullException>(dateX != DateTime.MinValue);
        _dateX = dateX;
    }

    public bool IsSatisfiedBy(MyAggregate i)
    {
        if (_dateX > DateTime.Now)
        {
            i.CommandValidationMessages.Add(new ValidationMessage("dateX greater than now"));
            return false;
        }
        return true;
    }
}
public class Has_start_date_greater_than_end_date : ISpecification<MyAggregate>
{
    private readonly DateTime _start;
    private readonly DateTime _end;

    public Has_start_date_greater_than_end_date(DateTime start, DateTime end)
    {
        Contract.Requires<ArgumentNullException>(start != DateTime.MinValue);
        Contract.Requires<ArgumentNullException>(end != DateTime.MinValue);
        _start = start;
        _end = end;
    }

    public bool IsSatisfiedBy(MyAggregate i)
    {
        if (_start > _end)
        {
            i.CommandValidationMessages.Add(new ValidationMessage("start date greater than end date"));
            return false;
        }
        return true;
    }
}
This allows me to reuse some validations for different aggregates, and it is easy to test. If you see any flaws in it, I would be really happy to discuss them.
Yours,

From my OO experience (I am not a DDD expert), moving your code from the entity to a higher abstraction level (into a command handler) will cause code duplication. This is because every time a command handler gets an email address, it has to instantiate the email validation rules. This kind of code will rot after a while, and it will smell very badly. In the current example it might not, if you don't have another command which changes the email address, but in other situations it surely will...
If you don't want to move the rules back to a lower abstraction level, like the entity or an email value object, then I strongly suggest you reduce the pain by grouping the rules. So in your email example the following three rules:
if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
if (!email.IsEmail()) throw new DomainException();
if (email.Contains("@mailinator.com")) throw new DomainException();
can be part of an EmailValidationRule group which you can reuse more easily.
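A minimal sketch of such a group (the class shape is my assumption; IsEmail() is the extension method from the question):

public class EmailValidationRule
{
    // Groups the three e-mail rules into one reusable unit.
    public void Validate(string email)
    {
        if (string.IsNullOrWhiteSpace(email)) throw new DomainException("...");
        if (!email.IsEmail()) throw new DomainException("...");
        if (email.Contains("@mailinator.com")) throw new DomainException("...");
    }
}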
From my point of view, there is no explicit answer to the question of where to put the validation logic. It can be part of every object, depending on the abstraction level. In your current case, the formal checking of the email address can be part of an EmailValueObject, and the mailinator rule can be part of a higher abstraction level concept in which you state that your user cannot have an email address pointing to that domain. So, for example, if somebody wants to contact your user without registration, then you can check her email against formal validation, but you don't have to check her email against the mailinator rule. And so on...
So I completely agree with @pjvds, who claimed that this kind of awkwardly placed validation is a sign of bad design. I don't think you will gain anything by breaking encapsulation, but it's your choice and it will be your pain.

The validation in your example is validation of a value object, not an entity (or aggregate root).
I would separate the validation into distinct areas.
Validate internal characteristics of the Email value object internally.
I adhere to the rule that aggregates should never be in an invalid state. I extend this principle to value objects where practical.
Use createNew() to instantiate an email from user input. This forces it to be valid according to your current rules (the "user@email.com" format, for example).
Use createExisting() to instantiate an email from persistent storage. This performs no validation, which is important: you don't want an exception to be thrown for a stored email that was valid yesterday but is invalid today.
class Email
{
    private string value_;

    // Error messages
    const string E_LENGTH = "An email address must be at least 3 characters long.";
    const string E_FORMAT = "An email address must be in the 'user@email.com' format.";

    // Private constructor, forcing the use of factory functions
    private Email(string value)
    {
        this.value_ = value;
    }

    // Factory functions
    public static Email createNew(string value)
    {
        validateLength(value, E_LENGTH);
        validateFormat(value, E_FORMAT);
        return new Email(value);
    }

    public static Email createExisting(string value)
    {
        return new Email(value);
    }

    // Static validation methods
    public static void validateLength(string value, string error = E_LENGTH)
    {
        if (value.Length < 3)
        {
            throw new DomainException(error);
        }
    }

    public static void validateFormat(string value, string error = E_FORMAT)
    {
        if (/* regular expression fails */)
        {
            throw new DomainException(error);
        }
    }
}
Validate "external" characteristics of the Email value object externally, e.g., in a service.
class EmailDnsValidator : IEmailValidator
{
    const string E_MX_MISSING = "The domain of your email address does not have an MX record.";

    private DnsProvider dnsProvider_;

    public EmailDnsValidator(DnsProvider dnsProvider)
    {
        dnsProvider_ = dnsProvider;
    }

    public void validate(string value, string error = E_MX_MISSING)
    {
        if (!dnsProvider_.hasMxRecord(/* domain part of email address */))
        {
            throw new DomainException(error);
        }
    }
}

class EmailDomainBlacklistValidator : IEmailValidator
{
    const string E_DOMAIN_FORBIDDEN = "The domain of your email address is blacklisted.";

    public void validate(string value, string error = E_DOMAIN_FORBIDDEN)
    {
        if (/* domain of value is on the blacklist */)
        {
            throw new DomainException(error);
        }
    }
}
Advantages:
Use of the createNew() and createExisting() factory functions allows control over internal validation.
It is possible to "opt out" of certain validation routines, e.g., skip the length check, by using the validation methods directly.
It is also possible to "opt out" of external validation (DNS MX records and domain blacklisting). E.g., a project I worked on initially validated the existence of MX records for a domain, but eventually removed this because of the number of customers using "dynamic IP" type solutions.
It is easy to query your persistent store for email addresses that do not fit the current validation rules, by running a simple query and treating each email as "new" rather than "existing": if an exception is thrown, there's a problem. From there you can issue, for example, a FlagCustomerAsHavingABadEmail command, using the exception error message as guidance for the user when they see the message.
Allowing the programmer to supply the error code provides flexibility. For example, when sending an UpdateEmailAddress command, the error "Your email address must be at least 3 characters long" is self-explanatory. However, when updating multiple email addresses (home and work), the above error message does not indicate WHICH email was wrong. Supplying the error code/message allows you to provide richer feedback to the end user.

I wrote a blog post on this topic a while back. The premise of the post was that there are different types of validation. I called them Superficial Validation and Domain Based Command Validation.
The simple version is this: validating things like 'is it a number' or 'is it an email address' is more often than not just superficial. These checks can be done before the command reaches the domain entities.
However, where the validation is more tied to the domain, its right place is in the domain. For example, maybe you have some rules about the weight and type of cargo a certain lorry can take. This sounds much more like domain logic.
Then you have the hybrid types: things like set-based validation. These need to happen before the command is issued, or be injected into the domain (try to avoid the latter if at all possible; limiting dependencies is a good thing).
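As a hedged illustration of the 'superficial' kind (not from the linked post; this assumes the standard System.ComponentModel.DataAnnotations attributes), such checks can sit on the command itself and run before any domain object is touched:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;

public class ChangeEmailCommand
{
    [Required, EmailAddress]
    public string Email { get; set; }
}

public static class SuperficialValidation
{
    // Collects validation errors without involving any domain entity.
    public static ICollection<ValidationResult> Validate(object command)
    {
        var results = new List<ValidationResult>();
        Validator.TryValidateObject(command, new ValidationContext(command), results, true);
        return results;
    }
}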
Anyway, you can read the full post here: How To Validate Commands in a CQRS Application

I'm still experimenting with this concept, but you can try decorators. If you use SimpleInjector, you can easily inject your own validation classes that run ahead of your command handler. The command handler can then assume the command is valid if it got that far. However, this means all validation has to be done on the command and not on the entities; the entities won't go into an invalid state. But each command must implement its own validation fully, so similar commands may have duplicated rules. You could either abstract common rules to share them, or treat different commands as truly separate.
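A minimal sketch of that idea (the decorator and the IValidator<TCommand> abstraction are my assumptions; RegisterDecorator is SimpleInjector's mechanism for wrapping handlers):

using System.Collections.Generic;

public class ValidationCommandHandlerDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> decorated;
    private readonly IEnumerable<IValidator<TCommand>> validators;

    public ValidationCommandHandlerDecorator(
        ICommandHandler<TCommand> decorated,
        IEnumerable<IValidator<TCommand>> validators)
    {
        this.decorated = decorated;
        this.validators = validators;
    }

    public void Execute(TCommand command)
    {
        // Every registered validator runs before the real handler sees the command.
        foreach (var validator in this.validators)
            validator.Validate(command);

        this.decorated.Execute(command);
    }
}

// Registration, so every ICommandHandler<T> gets wrapped automatically:
// container.RegisterDecorator(typeof(ICommandHandler<>), typeof(ValidationCommandHandlerDecorator<>));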

You can use a message-based solution with Domain Events, as explained here.
Exceptions are not the right method for all validation errors; an invalid entity is not necessarily an exceptional case.
If the validation is not trivial, the logic to validate the aggregate can be executed directly on the server, and while you are trying to set new input you can raise a Domain Event to tell the user (or the application that is using your domain) why the input is not correct.
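A hedged sketch of what that could look like (the event type and the RaiseEvent infrastructure are assumptions; HasValidDomain() is the extension from the question):

public class EmailRejected
{
    public string Email { get; set; }
    public string Reason { get; set; }
}

// Inside the Customer aggregate:
public void ChangeEmail(string email)
{
    if (!email.HasValidDomain())
    {
        // Instead of throwing, raise an event that explains why the
        // input was rejected, so the caller can react to it.
        RaiseEvent(new EmailRejected { Email = email, Reason = "Domain not allowed." });
        return;
    }

    // ... apply the change and raise the usual domain events
}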

Related

How to model aggregates that will be created in multiple steps, like wizard style

I will use Airbnb as an example.
When you sign up for an Airbnb account, you can become a host by creating a listing. To create a listing, the Airbnb UI guides you through the process of creating a new listing in multiple steps:
It will also remember the furthest step you've reached, so next time you want to resume the process, it will redirect you to where you left off.
I've been struggling to decide whether I should make the listing the aggregate root and define methods for the available steps, or treat each step as its own aggregate root so that they stay small.
Listing as Aggregate Root
public sealed class Listing : AggregateRoot
{
    private List<Photo> _photos;

    public Host Host { get; private set; }
    public PropertyAddress PropertyAddress { get; private set; }
    public Geolocation Geolocation { get; private set; }
    public Pricing Pricing { get; private set; }
    public IReadOnlyList<Photo> Photos => _photos.AsReadOnly();
    public ListingStep LastStep { get; private set; }
    public ListingStatus Status { get; private set; }

    private Listing(Host host, PropertyAddress propertyAddress)
    {
        this.Host = host;
        this.PropertyAddress = propertyAddress;
        this.LastStep = ListingStep.GeolocationAdjustment;
        this.Status = ListingStatus.Draft;

        _photos = new List<Photo>();
    }

    public static Listing Create(Host host, PropertyAddress propertyAddress)
    {
        // validations
        // ...
        return new Listing(host, propertyAddress);
    }

    public void AdjustLocation(Geolocation newGeolocation)
    {
        // validations
        // ...
        if (this.Status != ListingStatus.Draft || this.LastStep < ListingStep.GeolocationAdjustment)
        {
            throw new InvalidOperationException();
        }

        this.Geolocation = newGeolocation;
    }

    ...
}
Most of the complex classes in the aggregate root are just value objects, and ListingStatus is just a simple enum:
public enum ListingStatus : int
{
    Draft = 1,
    Published = 2,
    Unlisted = 3,
    Deleted = 4
}
But ListingStep could be an enumeration class that stores the next step the current step can advance to:
using Ardalis.SmartEnum;

public abstract class ListingStep : SmartEnum<ListingStep>
{
    public static readonly ListingStep GeolocationAdjustment = new GeolocationAdjustmentStep();
    public static readonly ListingStep Amenities = new AmenitiesStep();
    ...

    private ListingStep(string name, int value) : base(name, value) { }

    public abstract ListingStep Next();

    private sealed class GeolocationAdjustmentStep : ListingStep
    {
        public GeolocationAdjustmentStep() : base("Geolocation Adjustment", 1) { }

        public override ListingStep Next()
        {
            return ListingStep.Amenities;
        }
    }

    private sealed class AmenitiesStep : ListingStep
    {
        public AmenitiesStep() : base("Amenities", 2) { }

        public override ListingStep Next()
        {
            return ListingStep.Photos;
        }
    }

    ...
}
The benefit of having everything in the Listing aggregate root is that everything is ensured to have transactional consistency, and the steps are defined as one of the domain concerns.
The drawback is that the aggregate root is huge. On each step, in order to call the listing actions, you have to load up the listing aggregate root, which contains everything.
To me, it sounds like, except for the geolocation adjustment (which might depend on the property address), the steps don't depend on each other. For example, the title and the description of the listing don't care what photos you upload.
So I was thinking: can I treat each step as its own aggregate root?
Each step as its own Aggregate Root
public sealed class Listing : AggregateRoot
{
    public Host Host { get; private set; }
    public PropertyAddress PropertyAddress { get; private set; }

    private Listing(Host host, PropertyAddress propertyAddress)
    {
        this.Host = host;
        this.PropertyAddress = propertyAddress;
    }

    public static Listing Create(Host host, PropertyAddress propertyAddress)
    {
        // Validations
        // ...
        return new Listing(host, propertyAddress);
    }
}

public sealed class ListingGeolocation : AggregateRoot
{
    public Guid ListingId { get; private set; }
    public Geolocation Geolocation { get; private set; }

    private ListingGeolocation(Guid listingId, Geolocation geolocation)
    {
        this.ListingId = listingId;
        this.Geolocation = geolocation;
    }

    public static ListingGeolocation Create(Guid listingId, Geolocation geolocation)
    {
        // Validations
        // ...
        return new ListingGeolocation(listingId, geolocation);
    }
}
...
The benefit of having each step as its own aggregate root is that it keeps the aggregate roots small (to some extent, I even feel like they're too small!), so when they're persisted back to data storage, the performance should be quicker.
The drawback is that I lose the transactional consistency of the listing aggregate. For example, the listing geolocation aggregate only references the listing by its Id. I don't know if I should put a listing value object there instead, so that I have more useful information in the context, like the last step, the listing status, etc.
Close as Opinion-based?
I can't find any example online that shows how to model this wizard-like style in DDD. Also, most examples I've found about splitting a huge aggregate root into multiple smaller ones are about one-to-many relationships, but my example here is mostly about one-to-one relationships (except photos, probably).
I think my question would not be opinion-based, because:
There are only finite ways to go about modeling aggregates in DDD
I've introduced a concrete business model airbnb, as an example.
I've listed 2 approaches I've been thinking.
You can suggest which approach you would take and why, or other approaches different from the two I listed, and the reasons.
Let's discuss a couple of reasons to split up a large-cluster aggregate:
Transactional issues in multi-user environments.
In our case, there's only one Host managing the Listing. Only reviews could be posted by other users. Modelling Review as a separate aggregate allows transactional consistency on the root Listing.
Performance and scalability.
As always, it depends on your specific use case and needs. Although, once the Listing has been created, you would usually query the entire listing in order to present it to the user (apart from perhaps a collapsed reviews section).
Now let's have a look at the candidates for value objects (requiring no identity):
Location
Amenities
Description and title
Settings
Availability
Price
Remember that there are advantages to modeling internal parts as value objects. For one, it greatly reduces overall complexity.
As for the wizard part, the key take away is that the current step needs to be remembered:
..., so next time when you want to resume the process, it will redirect to where you left.
As aggregates are conceptually a unit of persistence, resuming where you left off will require us to persist partially hydrated aggregates. You could indeed store a ListingStep on the aggregate, but does that really make sense from a domain perspective? Do the Amenities need to be specified before the Description and Title? Is this really a concern for the Listing aggregate, or can this perhaps be moved to a Service? When all Listings are created through the use of the same Service, this Service could easily determine where it left off last time.
Pulling this wizard approach into the domain model feels like a violation of the Separation of Concerns principle. The B&B domain experts might very well be indifferent concerning the wizard flow.
Taking all of the above into account, the Listing as aggregate root seems like a good place to start.
UPDATE
I thought about the wizard being the concept of the UI, rather than of the domain, because in theory, since each step doesn't depend on others, you can finish any step in any order.
Indeed, the steps being independent is a clear indication that there's no real invariant, posed by the aggregate, on the order the data is entered. In this case, it's not even a domain concern.
I have no problem modeling those steps as their own aggregate roots, and have the UI determine where it left off last time.
The wizard steps (pages) shouldn't map to aggregates of their own. Following DDD, user actions will typically be forwarded to an Application API/Service, which in turn can delegate work to domain objects and services. The Application Service is only concerned with technical/infrastructure stuff (e.g. persistence), whereas the domain objects and services hold the rich domain logic and knowledge. This is often referred to as the Onion or Hexagonal architecture. Note that the dependencies point inward, so the domain model depends on nothing else, and knows about nothing else.
Another way to think about wizards is that these are basically data collectors. Often at the last step some sort of processing is done, but all steps before that usually just collect data. You could use this feature to wrap all data when the user closes the wizard (prematurely), send it to the Application API and then hydrate the aggregate and persist it until next time the user comes round. That way you only need to perform basic validation on the pages, but no real domain logic is involved.
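A hedged sketch of that 'data collector' idea (ListingDraft, ApplyDraft and the repository shape are hypothetical, not from this answer):

// Plain data holder filled in by the wizard pages; no domain logic involved.
public class ListingDraft
{
    public Guid ListingId { get; set; }
    public Geolocation Geolocation { get; set; }
    public string Title { get; set; }
    public string Description { get; set; }
    // ... whatever the remaining steps collect
}

public class ListingApplicationService
{
    private readonly IListingRepository repository;

    public ListingApplicationService(IListingRepository repository)
    {
        this.repository = repository;
    }

    // Called when the user closes the wizard, even prematurely: hydrate
    // the aggregate from the collected data and persist it until next time.
    public void SaveDraft(ListingDraft draft)
    {
        var listing = repository.GetById(draft.ListingId);
        listing.ApplyDraft(draft.Geolocation, draft.Title, draft.Description);
        repository.Save(listing);
    }
}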
My only concern with that approach is: when all the steps are filled in and the listing is ready to be reviewed and published, who's responsible for it? I thought about the listing aggregate, but it doesn't have all the information.
This is where the Application Service, as a delegator of work, comes into play. By itself it holds no real domain knowledge, but it "knows" all the players involved and can delegate work to them. It's not an unbound context (no pun intended), as you want to keep the transactional scope limited to one aggregate at a time. If not, you'll have to resort to two-stage commits, but that's another story.
To wrap it up, you could store the ListingStatus on Listing and make the invariant behind it a responsibility of the root aggregate. As such, it should have all the information, or be provided with it, to update the ListingStatus accordingly. In other words, it's not about the wizard steps; it's about the nouns and verbs that describe the processes behind the aggregate. In this case, the invariant guards that all data is entered and that the listing is currently in a correct state to be published. From then on, it's illegal to return to, and persist, the aggregate with only partial state or in an incoherent manner.
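For example, a hedged sketch of such an invariant on the Listing root (the Publish method and its completeness checks are illustrative):

// Inside the Listing aggregate root:
public void Publish()
{
    // The root guards the invariant: a listing may only be published
    // once it is a draft and all the required data has been entered.
    if (this.Status != ListingStatus.Draft)
        throw new InvalidOperationException("Only a draft can be published.");
    if (this.Geolocation == null || this.Pricing == null || _photos.Count == 0)
        throw new InvalidOperationException("The listing is incomplete.");

    this.Status = ListingStatus.Published;
}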
Like any other aggregate. It shouldn't care if you collect the needed data in a multistep wizard or in just one screen. It's a UI issue, gathering the data and passing it to the domain at the end of the wizard.
You're trying to design your system based on the UI (the wizard steps)!
In Domain-Driven Design you shouldn't really care about the UI (which is a technical detail); you should look for the bounded contexts, invariants, etc.
For example:
Listing bounded context: property and guests, location, amenities, description and title
Booking bounded context: booking settings, calendar and availability, pricing
Review bounded context:
The listing doesn't have to be a global one; you can display the listings for which you have all the required information from the 'Listing context' and which are available for the search period, etc.
In my experience, DDD was a design methodology that came from a culture of what we'd now call Java backend data modeling. Modern web development has matured and evolved quite a bit since then with Angular/React/Vue frameworks that have their own paradigms about data modeling. Coming from a UX background, I'll elaborate on how to structure UI components that integrate with DDD models.
Separate data from presentation
MVC design works here. Naively, the end result of this workflow is the construction of a Listing domain model. But, I'm sure AirBnB's domain model for a listing is much more complex. Let's approximate that by considering each "step" as a form that constructs independent models. To simplify, let's only consider models for Photo and Location.
Class Photo:          Class Location:
    id                    guid
    src                   geolocation
Provide a view for each model
Think of these UI components as "form" models that should work outside the context of a wizard. All of their fields are nullable; a null field represents an incomplete step. As an invariant, a view is valid iff it can construct a valid instance of the associated model.
Class PhotoView:          Class LocationView:
    id                        guid
    src                       geolocation
    valid { get }             valid { get }
Define the Controller
Now, consider a View-Model WizardView to help orchestrate the independent views into "Wizard" behavior. We already have the independent views taking care of valid/invalid state; now we just need an idea of the "current" step. In the AirBnb UX, it seems like the "current" step is more of a "selected" state, where the list item is expanded and all others are collapsed. Either way, a full page transition or "selected" represents the same state of "this step is active <-> all others are inactive." If _selected is null, traverse steps[] for the first invalid step; a null result means all steps are valid.
A StepView could display a whole page or, in the case of AirBnb, a single list item, where status == view.valid.
Class WizardView:             Class StepView:
    steps[]                       title
    _selected                     view
    selected { get set }          status { get }
    addStep(StepView)
    submit()
The submit() represents whatever handling you want to trigger when all steps are valid and the domain models can be constructed. Notice how I've deferred the actual creation of any real domain model and only maintained "form" or "draft" data structures in the views. Only at the time of submit(), either on button press or as a callback to when the "all valid" event occurs, do these views bubble up data, most likely to make server request. You can construct a higher level Listing model here and make that your request payload. However, it is not the Wizard's job to communicate with the backend. It simply pools all the data together for a proper handler to construct a valid request.
Why? Ideally, the frontend should speak the same domain model that the backend does. At the very least, your UX models should match one-to-one with high-level aggregates. The idea is for the frontend to interface with a high-level layer of abstraction that the backend is not likely to change, while giving it the freedom to decompose and restructure that data in whatever internal domain it needs. In practice, the frontend and backend domains get out of sync, so it's better to leave a layer for data-munging at the request level, so that the UX is internally consistent and coherent.

Self validating and Mapping DTOs and DDD

I'm being exposed to DDD for the first time here. Mostly it seems like good stuff, but there are some things that are bothering me.
The main one at present is the idea that an object validates itself.
Previously, I would have written a service method similar to this:
public class MyService
{
    private IValidator _validator;
    private IDomainMapper _domainMapper;
    private IThingThatDoesSomething _thingThatDoesSomething;
    private IResponseMapper _responseMapper;

    public MyService(IValidator validator, IDomainMapper domainMapper,
        IThingThatDoesSomething thingThatDoesSomething, IResponseMapper responseMapper)
    {
        _validator = validator;
        _domainMapper = domainMapper;
        _thingThatDoesSomething = thingThatDoesSomething;
        _responseMapper = responseMapper;
    }

    public ResponseDTO DoSomething(RequestDto requestDto)
    {
        if (_validator.IsValid(requestDto))
        {
            var domainObject = _domainMapper.Map(requestDto);
            var domainResponse = _thingThatDoesSomething.DoSomething(domainObject);
            return _responseMapper.Map(domainResponse);
        }
        else
        {
            return new ResponseDTO { Valid = false, Errors = /* some error information */ };
        }
    }
}
However, the colleagues who have spent more time than me studying DDD prefer the validation and mapping functionality to sit on the domain object. So the DTO looks like:
public class RequestDto
{
    public string Something { get; set; }

    public DomainObject Map()
    {
        return new DomainObject { something = this.Something };
    }

    public bool IsValid()
    {
        return this.Something == "something valid";
    }
}
This feels really wrong. Firstly, the object now has multiple responsibilities, and secondly, from a domain-driven perspective it seems wrong: I wouldn't expect a letter to arrive at my desk that declares itself to be valid or not, or knows how to convert itself into something else.
Could someone explain why this is good in DDD?
First of all, I think your original application service code looks better without applying your colleagues' suggestions, and this is not DDD related at all. Bear in mind that basic coding principles always apply, whether you're using DDD or writing an n-tier CRUD application. Specifically, for your original code I mean:
You have a separate class for validation - this is good because you can reuse it.
You have a separate class for mapping from/to the data object - this is good as the domain object should not be bothered with any mapping details, especially when it is mapping to a data object.
On the other hand there are a few things which can be done better in terms of DDD:
Your mapper apparently can map in both directions (from domain object to data object and the other way around). In DDD, domain objects are created/materialised in a factory (no specific factory implementation is enforced) or/and in a repository. The important fact is that domain object creation is the responsibility of the domain itself, not of the application service (as in your code).
You are validating the data object with your validator, but I'm not sure you're doing the same input validation in your domain. In DDD, many people take an approach which can be summarized in a sentence: "Never let your domain get into an invalid state." I share that approach. In my projects I tend to have a separate validator for an entity if the validation logic is complex. This validator is used by the domain entity itself to validate the input parameters. For simple validation (e.g. null and empty string checking) I leave it inside the domain object. As for your data object validation, most people (me included) tend to do it as "near" to the user interface as possible, to get a better user experience (e.g. faster response with validation errors).
One more thing worth mentioning: judging by your last code snippet, I think you might've misunderstood your colleagues a bit at some point. Your RequestDto class is not a domain object, so even after getting some suggestions from your colleagues you shouldn't place the validation or the mapping logic inside it. A DTO is a data holder, nothing more.
TL;DR version
Mapping from a domain object to a data object is the responsibility of the application layer (preferably implemented in a separate mapper class).
Domain objects should not be mapped from data objects by mappers in the application layer. Creation of domain objects is the responsibility of the domain itself (by means of factories).
Always validate data coming into your domain. Never let your domain get into an invalid state. Validation logic for entities should be placed inside the domain.
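A minimal sketch of points 2 and 3 combined (the factory and the entity are illustrative, not the poster's code):

public class DomainObjectFactory
{
    // Creation lives in the domain, not in the application service.
    public DomainObject CreateFrom(string something)
    {
        return new DomainObject(something);
    }
}

public class DomainObject
{
    public string Something { get; private set; }

    public DomainObject(string something)
    {
        // The domain validates its own input, so it can never
        // be constructed in an invalid state.
        if (string.IsNullOrWhiteSpace(something))
            throw new ArgumentException("A value is required.", "something");

        Something = something;
    }
}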

capture changes to properties of an object

I have multiple business objects in my application (C#, WinForms, WinXP). When the user executes some action on the UI, each of these objects is modified and updated by different parts of the application. After each modification, I need to first check what has changed and then log these changes made to the object. The purpose of logging this is to create a comprehensive record of the activity going on in the application.
Many of these objects contain lists of other objects, and this nesting can be several levels deep. The two main requirements for any solution would be:
capture changes as accurately as possible
keep the performance cost to a minimum.
An example of a business object:
public class MainClass1
{
    public MainClass1()
    {
        detailCollection1 = new ClassDetailCollection1();
        detailCollection2 = new ClassDetailCollection2();
    }

    private Int64 id;
    public Int64 ID
    {
        get { return id; }
        set { id = value; }
    }

    private DateTime timeStamp;
    public DateTime TimeStamp
    {
        get { return timeStamp; }
        set { timeStamp = value; }
    }

    private string category = string.Empty;
    public string Category
    {
        get { return category; }
        set { category = value; }
    }

    private string action = string.Empty;
    public string Action
    {
        get { return action; }
        set { action = value; }
    }

    private ClassDetailCollection1 detailCollection1;
    public ClassDetailCollection1 DetailCollection1
    {
        get { return detailCollection1; }
    }

    private ClassDetailCollection2 detailCollection2;
    public ClassDetailCollection2 DetailCollection2
    {
        get { return detailCollection2; }
    }

    // more collections here
}

public class ClassDetailCollection1
{
    private List<DetailType1> detailType1Collection;
    public List<DetailType1> DetailType1Collection
    {
        get { return detailType1Collection; }
    }

    private List<DetailType2> detailType2Collection;
    public List<DetailType2> DetailType2Collection
    {
        get { return detailType2Collection; }
    }
}

public class ClassDetailCollection2
{
    private List<DetailType3> detailType3Collection;
    public List<DetailType3> DetailType3Collection
    {
        get { return detailType3Collection; }
    }

    private List<DetailType4> detailType4Collection;
    public List<DetailType4> DetailType4Collection
    {
        get { return detailType4Collection; }
    }
}

// more other types like MainClass1 above...
I can assume that I will have access to the old values and new values of the object.
In that case I can think of two ways to try to do this without being told what has explicitly changed:
1. Use reflection to iterate through all properties of the object and compare them with the corresponding properties of the older object, logging any properties that have changed. This approach seems more flexible, in that I would not have to worry if any new properties are added to any of the objects. But it also seems performance heavy.
2. Log changes in the setters of all the properties of all the objects. Other than the fact that this will need me to change a lot of code, it seems more brute force. It will be maintenance heavy and inflexible if someone updates any of the object types. But it may also be performance light, since I will not need to check what changed and will log exactly the properties that changed.
Suggestions for any better approaches and/or improvements to the above approaches are welcome.
I developed a system like this a few years ago. The idea was to track changes to an object and store those changes in a database, like version control for objects.
The best approach is called Aspect-Oriented Programming, or AOP. You inject "advice" into the setters and getters (actually into all method execution; getters and setters are just special methods), allowing you to "intercept" actions taken on the objects. Look into Spring.NET or PostSharp for .NET AOP solutions.
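To make the idea concrete without tying it to a specific framework, here is a hedged sketch of what the woven-in "advice" effectively does in a setter (hand-rolled and illustrative only; ChangeLog is a hypothetical sink):

using System;

public class MainClass1
{
    private string category = string.Empty;

    public string Category
    {
        get { return category; }
        set
        {
            // This is the work an AOP interceptor would weave in for you:
            // compare the old and new values and record the change.
            if (category != value)
            {
                ChangeLog.Record("MainClass1.Category", category, value);
                category = value;
            }
        }
    }
}

public static class ChangeLog
{
    public static void Record(string property, string oldValue, string newValue)
    {
        // Write to a database, file, etc.
        Console.WriteLine("{0}: '{1}' -> '{2}'", property, oldValue, newValue);
    }
}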
I may not be able to give you a good answer, but I will tell you that in the overwhelming majority of cases, option 1 is NOT a good answer. We're dealing with a very similar reflective "graph-walker" in our project; it seemed like a good idea at the time, but it is a nightmare, for the following reasons:
You know the object changed, but without a high level of knowledge in the reflective "change handling" class about the workings of objects above it, you may not know why. If that information is important to you, you have to give it to the change handler, most likely through a field or property on the domain object, requiring changes to your domain and imparting knowledge to the domain about the business logic.
Changes can affect multiple objects, but logs for changes at every level may not be desired; for instance, the client may not want to see a change to a Borrower's outstanding loan count in the log when a new Loan is approved, but they do want to see changes due to consolidations. Managing rules about logging in these cases requires change handling classes to know about more of the structure than just one object, which can very quickly make a change-handling object VERY big, and VERY brittle.
The requirements of your graph walker are probably more than you know; if your object graph includes backreferences or cross-references, the walker must know where it's been, and the simplest comprehensive way to do that is to keep a list of objects it's processed and check the current object against those it's handled before processing it (making anti-backtracking an N^2 operation). It must also not consider changes to objects in the graph that will not be persisted when you persist the top level (references that are not "cascaded"). NHibernate gives you the ability to plug into its own graph-walker and abide by the cascade rules in your mappings, which helps, but if you're using a roll-your-own DAL, or you DO want to log changes to objects that NHibernate won't cascade to, you're going to have to set this all up yourself.
A piece of logic in a handler may make a change that requires an update to a "parent" object (updating a calculated field, perhaps). Now, you have to go back and re-evaluate the changed object if the change is of interest to another piece of the change handling logic.
If you have logic that requires creation and persistence of a new object, you must do one of two things; attach the new object to the graph somewhere (where it may or may not be picked up by the walker), or persist the new object in its own transaction (if you're using an ORM, the object CANNOT reference an object from the other graph with a "cascade" setting that will cause it to be saved first).
Finally, being highly reflective in both walking the graph and finding the "handlers" for a particular object, passing a complex tree into such a framework is a guaranteed speed bump in your application.
I think you'll save yourself a lot of headaches if you skip the "change handler" reflective pattern, and include the creation of audit logs or any pre-persistence logic in the "unit of work" you're performing up at the business layer, through a set of "audit loggers". This allows the logic making the changes to employ an algorithm selection pattern such as Command or Strategy to tell your audit framework exactly what kind of change is happening, so it can pick the logger that will produce the required logging messages.
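A hedged sketch of those "audit loggers" (all names are illustrative; the logger selection is the Strategy idea mentioned above):

using System.Collections.Generic;

public class LoanApproved { /* describes the change being persisted */ }

public interface IAuditLogger
{
    bool CanHandle(object change);
    void Log(object change);
}

public class LoanApprovedLogger : IAuditLogger
{
    public bool CanHandle(object change) { return change is LoanApproved; }

    public void Log(object change)
    {
        // Write the message the business actually wants to see for this change.
    }
}

// In the unit of work, the business logic states what happened and the
// framework picks the matching logger:
public class AuditTrail
{
    private readonly List<IAuditLogger> loggers = new List<IAuditLogger>();

    public void Record(object change)
    {
        foreach (var logger in loggers)
            if (logger.CanHandle(change))
                logger.Log(change);
    }
}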
See here how adempiere did the changelog: http://wiki.adempiere.net/Change_Log

Validation Framework in .NET that can do edits between fields

From my experience, many validation frameworks in .NET allow you to validate a single field at a time, for doing things like ensuring a field is a postal code or an email address. I usually call these within-field edits.
In my project we often have to do between-field edits, though. For instance, if you have a class like this:
public class Range
{
    public int Min { get; set; }
    public int Max { get; set; }
}
you might want to ensure that Max is greater than Min. You might also want to do some validation against an external object. For instance, given you have a class like this:
public class Person
{
    public string PostalCode { get; set; }
}
and for whatever reason you want to ensure that the postal code exists in a database or in a file provided to you. I have more complex examples, such as one where a user provides a data dictionary and you want to validate your object against that data dictionary.
My question is: can we use any of the existing validation frameworks for .NET (TNValidate, NHibernate Validator), or do we need a rules engine, or what? How do you people in the real world deal with this situation? :-)
There's only one validation framework that I know well and that is Enterprise Library Validation Application Block, or VAB for short. I will answer your questions from the context of the VAB.
First question: Can you do state (between-field) validation in VAB?
Yes, you can. There are multiple ways to do this. You can opt for the self-validation mechanism, as follows:
[HasSelfValidation]
public class Range
{
    public int Min { get; set; }
    public int Max { get; set; }

    [SelfValidation]
    public void ValidateRange(ValidationResults results)
    {
        if (this.Max < this.Min)
        {
            results.AddResult(
                new ValidationResult("Max less than min", this, "", "", null));
        }
    }
}
I must say I personally don't like this type of validation, especially when validating my domain entities, because I like to keep my validation logic separate from my domain logic (and keep my domain classes free of references to any validation framework). However, it needs considerably less code than the alternative, which is writing a custom validator class. Here's an example:
[ConfigurationElementType(typeof(CustomValidatorData))]
public sealed class RangeValidator : Validator
{
    public RangeValidator(NameValueCollection attributes)
        : base(string.Empty, string.Empty) { }

    protected override string DefaultMessageTemplate
    {
        get { throw new NotImplementedException(); }
    }

    protected override void DoValidate(object objectToValidate,
        object currentTarget, string key, ValidationResults results)
    {
        Range range = (Range)currentTarget;
        if (range.Max < range.Min)
        {
            this.LogValidationResult(results,
                "Max less than min", currentTarget, key);
        }
    }
}
After writing this class, you can hook it up in your validation configuration file like this:
<validation>
  <type name="Range" defaultRuleset="Default" assemblyName="[Range Assembly]">
    <ruleset name="Default">
      <validator type="[Namespace].RangeValidator, [Validator Assembly]"
        name="Range Validator" />
    </ruleset>
  </type>
</validation>
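With that configuration in place, consuming code can run the ruleset roughly like this (a sketch using the same ValidationFactory API that appears in an answer further down the page):
Range range = new Range { Min = 10, Max = 5 };
Validator<Range> validator = ValidationFactory.CreateValidator<Range>();
ValidationResults results = validator.Validate(range);
// results.IsValid is false here, because Max < Min trips the rule above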
Second question: How do you do complex validations, possibly involving a database (with VAB)?
The examples I gave for the first question also apply here; you can use the same techniques: self validation and a custom validator. Your scenario of checking a value against a database is actually a simple one, because the validity of your object does not depend on its context; you can simply check the state of the object against the database. It gets more complicated when the context in which an object lives becomes important (but that is possible with VAB too). Imagine, for instance, that you want to enforce that every customer, at any given moment in time, has no more than two unshipped orders. That not only means checking the database, but also accounting for orders added or deleted within that same context. This problem is not VAB-specific; you will face it with every framework you choose. I've written an article that describes the complexities we face in these situations (read and shiver).
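For illustration, a database-backed check can reuse the custom-validator shape from the RangeValidator above; IPostalCodeRepository, its Exists method, and the factory used to obtain it are all hypothetical names standing in for your own data access:
[ConfigurationElementType(typeof(CustomValidatorData))]
public sealed class PostalCodeExistsValidator : Validator
{
    // Hypothetical dependency; in practice, resolve it through your
    // container or a factory of your choosing.
    private readonly IPostalCodeRepository repository;

    public PostalCodeExistsValidator(NameValueCollection attributes)
        : base(string.Empty, string.Empty)
    {
        this.repository = PostalCodeRepositoryFactory.Create(); // assumed factory
    }

    protected override string DefaultMessageTemplate
    {
        get { return "Postal code does not exist"; }
    }

    protected override void DoValidate(object objectToValidate,
        object currentTarget, string key, ValidationResults results)
    {
        Person person = (Person)currentTarget;
        if (!this.repository.Exists(person.PostalCode))
        {
            this.LogValidationResult(results,
                "Postal code does not exist", currentTarget, key);
        }
    }
}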
Third question: How do you people in the real world deal with this situation?
I do these types of validation with the VAB in production code. It works great, but VAB is not very easy to learn. Still, I love what we can do with VAB, and it will only get better when v5.0 comes out. When you want to learn it, start by reading the ValidationHOL.pdf document that you can find in the Hands-On Labs download.
I hope this helps.
I build custom validation controls when I need anything that's not included out of the box. The nice thing here is that these custom validators are reusable and can act on multiple fields. Here's an example I posted to CodeProject of an AtLeastOneOf validator that lets you require that at least one field in a group has a value:
http://www.codeproject.com/KB/validation/AtLeastOneOfValidator.aspx
The code included in the download should work as an easy-to-follow sample of how you could go about it. The downside is that the validation controls included with ASP.NET don't always work well with ASP.NET AJAX.

In domain-driven design, would it be a violation of DDD to put calls to other objects' repositories in a domain object?

I'm currently refactoring some code on a project that is wrapping up, and I ended up putting a lot of business logic into service classes rather than into the domain objects. At this point most of the domain objects are data containers only. I had decided to write most of the business logic in service objects and refactor everything afterwards into better, more reusable, and more readable shapes. That way I could decide which code belongs in domain objects, which code should be spun off into new objects of its own, and which code should stay in a service class. So I have some code:
public decimal CalculateBatchTotal(VendorApplicationBatch batch)
{
    IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;
    return total;
}
This code seems like it would make a good addition to a domain object, because its only input parameter is the domain object itself; it looks like a perfect candidate for some refactoring. The only problem is that this method calls another object's repository, which makes me want to leave it in the service class.
My questions are thus:
Where would you put this code?
Would you break this function up?
Where would someone who's following strict Domain-Driven design put it?
Why?
Thanks for your time.
Edit Note: Can't use an ORM on this one, so I can't use a lazy loading solution.
Edit Note2: I can't alter the constructor to take in parameters, because of how the would-be data layer instantiates the domain objects using reflection (not my idea).
Edit Note3: I don't believe that a batch object should be able to total just any list of applications, it seems like it should only be able to total applications that are in that particular batch. Otherwise, it makes more sense to me to leave the function in the service class.
You shouldn't even have access to the repositories from the domain object.
What you can do is either have the service give the domain object the appropriate info, or give the domain object a delegate that is set by a service or in the constructor, for example:
public class DomainObject
{
    private readonly Func<IList<VendorApplication>> getApplicationsByBatchId;

    public DomainObject(Func<IList<VendorApplication>> getApplicationsByBatchId)
    {
        this.getApplicationsByBatchId = getApplicationsByBatchId;
    }
}
I'm no expert on DDD, but I remember an article from the great Jeremy Miller that answered this very question for me. You typically want logic related to your domain objects inside those objects, but your service class executes the methods that contain this logic. This helped me push domain-specific logic into the entity classes and keep my service classes less bulky (I had found myself putting too much logic inside the service classes, like you mentioned).
Edit: Example
I use the Enterprise Library for simple validation, so in the entity class I will set an attribute like so:
[StringLengthValidator(1, 100)]
public string Username
{
    get { return mUsername; }
    set { mUsername = value; }
}
The entity inherits from a base class that has the following IsValid method, which ensures each object meets its validation criteria:
public bool IsValid()
{
    mResults = new ValidationResults();
    Validate(mResults);
    return mResults.IsValid;
}
[SelfValidation]
public virtual void Validate(ValidationResults results)
{
    if (!object.ReferenceEquals(this.GetType(), typeof(BusinessBase<T>)))
    {
        Validator validator = ValidationFactory.CreateValidator(this.GetType());
        results.AddAllResults(validator.Validate(this));
    }

    // If there are any validation failures, map them into the broken-rules
    // property so the parent class can display them to the end user.
    if (!results.IsValid)
    {
        mBrokenRules = new List<BrokenRule>();
        foreach (Microsoft.Practices.EnterpriseLibrary.Validation.ValidationResult result in results)
        {
            mRule = new BrokenRule();
            mRule.Message = result.Message;
            mRule.PropertyName = result.Key.ToString();
            mBrokenRules.Add(mRule);
        }
    }
}
Next, we execute this IsValid method in the service class's save method, like so:
public void SaveUser(User userObject)
{
    if (userObject.IsValid())
    {
        mRepository.SaveUser(userObject);
    }
}
A more complex example might be a bank account. The deposit logic will live inside the account object, but the service class will call this method.
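A rough sketch of that split (Account, AccountService, and IAccountRepository are illustrative names, not from the original post):
public class Account
{
    public decimal Balance { get; private set; }

    // Domain behavior lives on the entity itself.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentException("Deposit must be positive.");
        Balance += amount;
    }
}

public interface IAccountRepository
{
    Account GetById(int accountId);
    void Save(Account account);
}

public class AccountService
{
    private readonly IAccountRepository repository;

    public AccountService(IAccountRepository repository)
    {
        this.repository = repository;
    }

    // The service only orchestrates: load, invoke domain behavior, persist.
    public void Deposit(int accountId, decimal amount)
    {
        Account account = repository.GetById(accountId);
        account.Deposit(amount);
        repository.Save(account);
    }
}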
Why not pass in an IList<VendorApplication> as the parameter instead of a VendorApplicationBatch? The calling code would presumably come from a service that has access to the AppRepo. That way your repository access stays up where it belongs, while your domain function remains blissfully ignorant of where the data came from.
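Concretely, that refactoring might look like this, keeping the original body but changing the parameter and moving the repository call up to the caller:
// Domain-side: knows nothing about repositories.
public decimal CalculateBatchTotal(IList<VendorApplication> applications)
{
    if (applications == null || applications.Count == 0)
        throw new ArgumentException("There were no applications for this batch.");

    decimal total = 0m;
    foreach (VendorApplication app in applications)
        total += app.Amount;
    return total;
}

// Service-side: the repository access stays here.
IList<VendorApplication> applications = AppRepo.GetByBatchId(batch.Id);
decimal total = CalculateBatchTotal(applications);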
As I understand it (there's not enough info to know if this is the right design), VendorApplicationBatch should contain a lazy-loaded IList inside the domain object, and the logic should stay in the domain.
For Example (air code):
public class VendorApplicationBatch
{
    private IList<VendorApplication> Applications { get; set; }

    public decimal CalculateBatchTotal()
    {
        if (Applications == null || Applications.Count == 0)
            throw new ArgumentException("There were no applications for this batch, that shouldn't be possible");

        decimal total = 0m;
        foreach (VendorApplication app in Applications)
            total += app.Amount;
        return total;
    }
}
This is easily done with an ORM like NHibernate and I think it would be the best solution.
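For what it's worth, the lazy-loaded collection that implies would look roughly like this in an hbm.xml mapping (the key column name and access strategy here are assumptions):
<bag name="Applications" lazy="true">
  <key column="BatchId" />
  <one-to-many class="VendorApplication" />
</bag>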
It seems to me that your CalculateBatchTotal is a service for collections of VendorApplications, and that returning the collection of VendorApplications for a batch fits naturally as a property of the Batch class. So some other service/controller/whatever would retrieve the appropriate collection of VendorApplications from a batch and pass it to the VendorApplicationTotalCalculator service (or something similar). But that may break some DDD aggregate-root rules or some such thing I'm ignorant of (DDD novice).
