How to test DB.Configuration.AutoDetectChangesEnabled = false - c#

I'm trying to write some tests for a class using NSubstitute.
The class constructor is:
public class ClassToTest : IClassToTest
{
private IDatabase DB;
public ClassToTest(IDatabase DB)
{
this.DB = DB;
this.DB.Configuration.AutoDetectChangesEnabled = false;
}
Here is my UnitTests class:
[TestFixture]
public class ClassToTestUnitTests
{
private ClassToTest _testClass;
[SetUp]
public void SetUp()
{
var Db = Substitute.For<IDatabase>();
//Db.Configuration.AutoDetectChangesEnabled = false; <- I've tried to do it like this
var dummyData = Substitute.For<DbSet<Data>, IQueryable<Data>, IDbAsyncEnumerable<Data>>().SetupData(GetData());
Db.Data.Returns(dummyData);
_testClass = new ClassToTest(Db);
}
Whenever I try to test some method, the test fails with a NullReferenceException whose stack trace points to the SetUp method.
When I comment out the
this.DB.Configuration.AutoDetectChangesEnabled = false; line in the ClassToTest constructor, the tests work fine.
Edit:
public interface IInventoryDatabase
{
DbSet<NamesV> NamesV { get; set; }
DbSet<Adress> Adresses { get; set; }
DbSet<RandomData> Randomdata { get; set; }
// (...more DbSets)
System.Data.Entity.Database Database { get; }
DbContextConfiguration Configuration { get; }
int SaveChanges();
}

The reason for the NullReferenceException is that NSubstitute cannot automatically create a substitute for DbContextConfiguration (it can only do that for interfaces and purely virtual classes).
Normally we could work around this by manually configuring this property, something like Db.Configuration.Returns(myConfiguration), but in this case DbContextConfiguration does not seem to have a public constructor, so we are unable to create an instance for myConfiguration.
At this stage I can think of two main options: wrap the problematic class in a more testable adapter class, or switch to testing this at a different level. (My preference is the latter, which I'll explain below.)
The first option involves something like this:
public interface IDbContextConfiguration {
bool AutoDetectChangesEnabled { get; set; }
// ... any other required members here ...
}
public class DbContextConfigurationAdapter : IDbContextConfiguration {
DbContextConfiguration config;
public DbContextConfigurationAdapter(DbContextConfiguration config) {
this.config = config;
}
public bool AutoDetectChangesEnabled {
get { return config.AutoDetectChangesEnabled; }
set { config.AutoDetectChangesEnabled = value; }
}
}
Then we update IInventoryDatabase to use the more testable IDbContextConfiguration type. My objection to this approach is that it can end up requiring a lot of work for something that should be fairly simple. The approach can be very useful where we have behaviours that make sense grouped under a logical interface, but for working with a single AutoDetectChangesEnabled property it seems like unnecessary work.
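If we do go that route, IInventoryDatabase exposes the adapter interface instead, and the substitute works directly. A minimal sketch of the test setup, assuming the constructor now receives the updated interface:
var Db = Substitute.For<IInventoryDatabase>();
// IDbContextConfiguration is an interface, so NSubstitute can substitute for it.
// (NSubstitute will in fact auto-substitute an interface-returning property,
// but being explicit documents the intent.)
Db.Configuration.Returns(Substitute.For<IDbContextConfiguration>());
_testClass = new ClassToTest(Db); // the constructor's AutoDetectChangesEnabled = false now succeeds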
The other option is to test this at a different level. I think the friction in testing the current code is that we are trying to substitute for details of Entity Framework, rather than interfaces we've created for partitioning the logical details of our app. Search for "don't mock types you don't own" for more information on why this can be a problem (I've written about it before here).
One example of testing at a different level is to switch to an in-memory database for testing this part of the code. This will give you much more valuable information: given a known state of the test database, you are demonstrating that the queries return the expected information. This is in contrast to a test that only shows we are calling Entity Framework in the way we think is required.
To combine this approach with mocking (not necessarily required!), we can create a higher level interface and substitute for that for testing our application code, then make an implementation of that interface and test that using the in-memory database. We have then divided the application into two parts that we can test independently: first that our app uses data from the data access interface correctly, and secondly that our implementation of that interface works as expected.
So that would give us something like this:
public interface IAppDatabase {
// These members just for example. Maybe instead of something general like
// `GetAllNames()` we have operations specific to app operations such as
// `UpdateAddress(Guid id, Address newAddress)`, `GetNameFor(SomeParams p)` etc.
Task<List<Name>> GetAllNames();
Task<Address> LookupAddress(Guid id);
}
public class AppDatabase : IAppDatabase {
// ...
public AppDatabase(IInventoryDatabase db) { ... }
public Task<List<Name>> GetAllNames() {
// use `db` and Entity Framework to retrieve data...
}
// ...
}
The AppDatabase class we test with an in-memory database. The rest of the app we test with respect to a substitute IAppDatabase.
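For example, a test of application code that consumes the interface could look like this (a sketch; the Name type, its constructor, and the consuming NameListViewModel class are placeholders):
var db = Substitute.For<IAppDatabase>();
db.GetAllNames().Returns(Task.FromResult(new List<Name> { new Name("Alice") }));
var sut = new NameListViewModel(db); // hypothetical consumer of IAppDatabase
// ... act on sut, then assert it used the returned names as expected ...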
Note that we can skip the mocking step here by using the in-memory database for all relevant tests. Using mocking may be easier than setting up all the required data in the database, or may make tests run faster. Or maybe not -- I suggest considering both options.
Hope this helps.

Related

DataAccessLayer and organize solution

I have an MVC solution. Here are my different files:
Library.DataAccessLayer.LibraryContext.cs :
namespace Library.DataAccessLayer
{
public class LibraryContext : DbContext
{
public LibraryContext() : base("DefaultConnection")
{
}
public DbSet<Book> Books { get; set; }
public DbSet<Author> Authors { get; set; }
}
}
Library.DataAccessLayer.Models.Author.cs :
namespace Library.DataAccessLayer.Models
{
public class Author
{
[Key]
public int AuthorID { get; set; }
[Required]
[StringLength(50)]
[Display(Name = "First Name")]
public string FirstName { get; set; }
.........
}
}
Library.DataAccessLayer.Repositories.AuthorRepository.cs :
namespace Library.DataAccessLayer.Repositories
{
public class AuthorRepository : IDisposable, IAuthorRepository
{
private LibraryContext context;
public AuthorRepository(LibraryContext context)
{
this.context = context;
}
public IEnumerable<Author> GetAuthors()
{
return context.Authors.ToList();
}
public Author GetAuthorById(int id)
{
return .........
}
............
}
}
And in Library.Controllers.AuthorController :
namespace Library.Controllers
{
public class AuthorController : Controller
{
private IAuthorRepository authorRepository;
public AuthorController()
{
this.authorRepository = new AuthorRepository(new LibraryContext());
}
public ActionResult Index()
{
var authors = authorRepository.GetAuthors();
return View(authors);
}
}
}
1/ Is this architecture coherent?
2/ Is it really useful to declare interfaces for my repositories, which are implemented in my repository classes?
3/ In my AuthorRepository, are the declaration and use of the LibraryContext correct?
4/ In my AuthorController, are the declaration and use of AuthorRepository correct?
5/ In which folder should the LibraryContext file go (if necessary and useful)?
6/ Is it good to group repository interfaces and repository classes in the same folder? If not, how should the various folders be separated and named?
7/ How can this be improved?
I need your advice.
Thanks
I find the architecture simple, which is good; as your application grows you may need more layers.
Yes, it really is useful, especially if you are going to use dependency injection, which leads into the next questions.
Your implementation is right. As for dependencies, you should use design patterns such as, again, dependency injection or a factory; all your dependencies should be instantiated outside.
This one is the same as the previous one: the repository should be instantiated outside.
The structure is supposed to accommodate your needs, but I find the example here pretty useful.
Many developers keep them in the same folder; others, like me, keep them in separate files. Personally I create a 'contracts' folder alongside my repositories and keep the interfaces there.
The best way to find a convention that suits you is reading code from other developers; there you will find many styles, structures, architectures and pattern implementations.
I hope you find this useful. May the force be with you!
This question is a better fit for the Code Review site, but I think it deserves an answer here:
Repositories are fine, but you should also consider defining a service layer. Services are responsible for aggregating information using repositories and providing this information using service models. Sending back data models (e.g. Authors) might lead to trouble because:
serialization can fail if navigation properties create cycles
you want to provide more information that is not related to data layer (e.g. some computed stuff)
Example:
class AuthorServiceModel
{
int AuthorId { get; set; }
string FirstName { get; set; }
// ...
}
class LibraryService : ILibraryService // if DI is used
{
public AuthorServiceModel GetAuthorById(int id)
{
// error handling/logging may be put here, if an invalid id is provided
var author = context.Authors.Find(id);
// auto mapping can be used to avoid the typing - check http://automapper.org/
var sm = new AuthorServiceModel { AuthorId = author.AuthorId, FirstName = author.FirstName };
return sm;
}
//
}
The controller will never have to know about your data access layer.
2) Repositories unification - if most of your repositories are doing just the standard operations (get all, get by identifier, update entity, remove entity etc.), you may define a generic typed repository that helps you to avoid the repetition:
class Repository<T> : IRepository<T> where T : class
{
private LibraryContext context;
public IQueryable<T> All => context.Set<T>().AsQueryable();
public IQueryable<T> AllNoTracking => context.Set<T>().AsNoTracking();
public T Get(int id)
{
return context.Set<T>().Find(id);
}
// other methods
}
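The matching interface could look like this (a sketch; note the class constraint, which DbContext.Set<T>() requires):
public interface IRepository<T> where T : class
{
    IQueryable<T> All { get; }
    IQueryable<T> AllNoTracking { get; }
    T Get(int id);
    // plus Add, Remove, SaveChanges... as needed
}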
3) Implementing interfaces can be useful when using Dependency Injection - DI (e.g. Ninject). This removes some coupling between your classes and also allows automated testing (bindings can be changed to mock objects). E.g.:
public class LibraryContext : DbContext, ILibraryContext
{
public LibraryContext() : base("DefaultConnection")
{
}
// other methods here
}
public class AuthorRepository : IDisposable, IAuthorRepository
{
private ILibraryContext context;
// the context will be injected and should not be provided by the caller, if DI is used
public AuthorRepository(ILibraryContext context)
{
this.context = context;
}
// other methods come here
}
public class AuthorController : Controller
{
// this allows for automatic injection based on defined bindings
[Inject]
public IAuthorRepository authorRepository { get; set; }
public AuthorController()
{
// no need for this, as DI takes care of the initialization
// also, controller does not have to know about your data access classes
// this.authorRepository = new AuthorRepository(new LibraryContext());
}
public ActionResult Index()
{
var authors = authorRepository.GetAuthors();
return View(authors);
}
}
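With Ninject, the bindings could be declared in a module along these lines (a sketch; scoping is omitted and depends on your setup):
using Ninject.Modules;

public class DataAccessModule : NinjectModule
{
    public override void Load()
    {
        // In a test configuration these bindings can point at mocks instead.
        Bind<ILibraryContext>().To<LibraryContext>();
        Bind<IAuthorRepository>().To<AuthorRepository>();
    }
}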
4) Grouping of files is a matter of taste, but I usually recommend grouping them semantically (all classes and interfaces that do similar jobs in one folder). Also, the data context and repositories are quite coupled, and they can reside in the same project/assembly.
Also, services can be separated in their own project/assembly.
NOTE: For a more thorough analysis, consider providing all the code related to your patterns and posting your question on Code Review (they deal with complete and working code, not just fragments). There is a good chance that someone will cover all the topics, from naming to the patterns.

Improve design with IOC/DI

I'm currently trying to find a better design for my multi-module solution using DI/IoC, but now I'm somewhat lost. I have a solution where different kinds of entities can be distributed to recipients via different channels.
This is a simplified version of my classes:
#region FTP Module
public interface IFtpService
{
void Upload(FtpAccount account, byte[] data);
}
public class FtpService : IFtpService
{
public void Upload(FtpAccount account, byte[] data)
{
}
}
#endregion
#region Email Module
public interface IEmailService : IDistributionService
{
void Send(IEnumerable<string> recipients, byte[] data);
}
public class EmailService : IEmailService
{
public void Send(IEnumerable<string> recipients, byte[] data)
{
}
}
#endregion
public interface IDistributionService { }
#region GenericDistributionModule
public interface IDistributionChannel
{
void Distribute();
}
public interface IDistribution
{
byte[] Data { get; }
IDistributionChannel DistributionChannel { get; }
void Distribute();
}
#endregion
#region EmailDistributionModule
public class EmailDistributionChannel : IDistributionChannel
{
public void Distribute()
{
// Set some properties
// Call EmailService???
}
public List<string> Recipients { get; set; }
}
#endregion
#region FtpDistributionModule
public class FtpDistributionChannel : IDistributionChannel
{
public void Distribute()
{
// Set some properties
// Call FtpService???
}
public FtpAccount ftpAccount { get; set; }
}
#endregion
#region Program
public class Report
{
public List<ReportDistribution> DistributionList { get; private set; }
public byte[] reportData { get; set; }
}
public class ReportDistribution : IDistribution
{
public Report Report { get; set; }
public byte[] Data { get { return Report.reportData; } }
public IDistributionChannel DistributionChannel { get; private set; }
public void Distribute()
{
DistributionChannel.Distribute();
}
}
class Program
{
static void Main(string[] args)
{
EmailService emailService = new EmailService();
FtpService ftpService = new FtpService();
FtpAccount aAccount;
Report report;
ReportDistribution[] distributions =
{
new ReportDistribution(new EmailDistributionChannel(new List<string> { "test@abc.xyz", "foo@bar.xyz" })),
new ReportDistribution(new FtpDistributionChannel(aAccount))
};
report.DistributionList.AddRange(distributions);
foreach (var distribution in distributions)
{
// Old code:
// if (distribution.DistributionChannel is EmailDistributionChannel)
// {
// emailService.Send(...);
// }else if (distribution.DistributionChannel is FtpDistributionChannel)
// {
// ftpService.Upload(...);
// }else{ throw new NotImplementedException();}
// New code:
distribution.Distribute();
}
}
}
#endregion
In my current solution it is possible to create and store persistent IDistribution POCOs (I'm using a ReportDistribution here) and attach them to the distributable entity (a Report in this example).
E.g. someone wants to distribute an existing Report via email to a set of recipients, so he creates a new ReportDistribution with an EmailDistributionChannel. Later he decides to distribute the same Report via FTP to a specified FtpServer, so he creates another ReportDistribution with an FtpDistributionChannel.
It is possible to distribute the same Report multiple times on the same or different channels.
An Azure Webjob picks up stored IDistribution instances and distributes them. The current, ugly implementation uses if-else to distribute Distributions with a FtpDistributionChannel via a (low-level) FtpService and EmailDistributionChannels with an EmailService.
I'm now trying to implement the interface method Distribute() on FtpDistributionChannel and EmailDistributionChannel. But for this to work, the entities need a reference to the services, and injecting the services into the entities via constructor injection seems to be considered bad style.
Mike Hadlow comes up with three other solutions:
Creating Domain Services. I could e.g. create an FtpDistributionService, inject an FtpService and write a Distribute(FtpDistributionChannel distribution) method (and also an EmailDistributionService). Apart from the drawback mentioned by Mike, how can I select a matching DistributionService based on the IDistribution instance? Replacing my old if-else with another one does not feel right.
Inject IFtpService/EMailService into the Distribute() method. But how should I define the Distribute() method in the IDistribution interface? EmailDistributionChannel needs an IEmailService while FtpDistributionChannel need an IFtpService.
Domain events pattern. I'm not sure how this can solve my problem.
Let me try to explain why I came up with this quite complicated solution:
It started with a simple list of Reports. Soon someone asked me to send reports to some recipients (and store the list of recipients). Easy!
Later, someone else added the requirement to send a report to a FtpAccount. Different FtpAccounts are managed in the application, therefore the selected account should also be stored.
This was to the point where I added the IDistributionChannel abstraction. Everything was still fine.
Then someone needed the possibility to also send some kind of persistent Logfiles via Email. This led to my solution with IDistribution/IDistributionChannel.
If now someone needs to distribute some other kind of data, I can just implement another IDistribution for this data. If another DistributionChannel (e.g. Fax) is required, I implement it and it is available for all distributable entities.
I would really appreciate any help/ideas.
First of all, why do you create an interface for the FtpAccount? The class is isolated and provides no behavior that needs to be abstracted away.
Let's start with your original problem and build from there. The problem, as I interpret it, is that you want to send something to a client using a different set of mediums.
By expressing it in code it can be done like this instead:
public void SendFileToUser(string userName, byte[] file)
{
var distributions = new IDistributor[] { new EmailDistributor(), new FtpDistributor() };
foreach (var distribution in distributions)
{
distribution.Distribute(userName, file);
}
}
See what I did? I added a bit of context, because your original use case was way too generic. It's not often that you want to distribute some arbitrary data to an arbitrary distribution service.
The change that I made introduces a domain and a real problem.
With that change we can also model the rest of the classes a bit different.
public class FtpDistributor : IDistributor
{
private FtpAccountRepository _repository = new FtpAccountRepository();
private FtpClient _client = new FtpClient();
public void Distribute(string userName, byte[] file)
{
var ftpAccount = _repository.GetAccount(userName);
_client.Connect(ftpAccount.Host);
_client.Authenticate(ftpAccount.UserName, ftpAccount.Password);
_client.Send(file);
}
}
See what I did? I moved the responsibility of keeping track of the FTP account to the actual service. In reality you probably have an administration web or similar where the account can be mapped to a specific user.
By doing so I also isolated all handling regarding FTP to within the service and therefore reduced the complexity in the calling code.
The email distributor would work in the same way.
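For example, something like this (a sketch: EmailAddressRepository is a hypothetical stand-in for however you map users to addresses; SmtpClient, MailMessage and Attachment are the stock System.Net.Mail types):
public class EmailDistributor : IDistributor
{
    // Hypothetical lookup, mirroring FtpAccountRepository above.
    private EmailAddressRepository _repository = new EmailAddressRepository();

    public void Distribute(string userName, byte[] file)
    {
        var address = _repository.GetAddress(userName);
        using (var message = new System.Net.Mail.MailMessage("noreply@example.com", address))
        using (var client = new System.Net.Mail.SmtpClient("mail.example.com"))
        {
            message.Subject = "Your file";
            message.Attachments.Add(new System.Net.Mail.Attachment(
                new System.IO.MemoryStream(file), "file.bin"));
            client.Send(message);
        }
    }
}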
When you start to code problems like this, try to go top-down. Otherwise it's easy to create an architecture that seems to be SOLID while it doesn't really solve the actual business problem.
Update
I've read your update, and I don't see why you must use the same classes for the new requirements.
Then someone needed the possibility to also send some kind of persistent Logfiles via Email
That's an entirely different use case and should be separated from the original one. Create new code for it. The SmtpClient in .NET is quite easy to use and does not need to be abstracted away.
If now someone needs to distribute some other kind of data, I can just implement another IDistribution for this data.
Why? What complexity are you trying to hide?
If another DistributionChannel (e.g. Fax) is required, I implement it and it is available for all distributable entities
No. Distributing thing A is not the same as distributing thing B. You can't, for instance, transport parts of a large bridge on an airplane; either a freight ship or a truck is required.
What I'm trying to say is that creating too-generic abstractions/contracts to promote code reuse seems like a good idea, but it usually just makes your application more complex or less readable.
Create abstractions when there are real complexity issues, not beforehand.

Design Pattern decisions - REST API & DAL

I am working on an application that has a WCF REST API with a DAL underneath. Everything is written in C#.
All REST methods are GET, but many of them have a generic string parameter (among other params) that I parse and map to a list object. That works well.
When it comes to mapping to a DTO object, I would like to use a design pattern to instantiate the correct DTO based on the mapped REST params. I'm not sure whether that is possible, since I have that generic string parameter (the param name will not be the same every time)?
Also, based on the created DTO type, I would like to choose the appropriate DB method to call; the command design pattern for this one, I guess?
Thanks for the help,
I can explain more if needed.
I have developed the same kind of application (WCF REST service).
I created a .NET solution and added the projects below:
BusinessLayer
DataAccessLayer
DataService (WCF Service)
EntityLayer
DataService:
public Snapshot GetSnapshot(string symbol, int nocache)
{
Snapshot objSnapshot;
try
{
objSnapshot = (new SnapshotBAL()).GetSnapshot(symbol);
SerializeObject(objSnapshot, localCacheKey);
return objSnapshot;
}
catch (Exception)
{
// Swallows all errors; consider logging before returning null.
return null;
}
}
BusinessLayer:
namespace BusinessLayer
{
public class SnapshotBAL
{
public Snapshot GetSnapshot(string symbol)
{
return (new SnapshotDAL()).GetSnapshot(symbol);
}
}
}
EntityLayer:
namespace EntityLayer
{
public class Snapshot
{
public DateTime time { get; set; }
public double price { get; set; }
}
}
DataAccessLayer:
namespace DataAccessLayer
{
public class SnapshotDAL : PrototypeDB
{
public Snapshot GetSnapshot(string symbol)
{
AddParameter("o_snapshot");
AddParameter("i_symbol", symbol);
return ObjectHelper.FillObject<Snapshot>(typeof(Snapshot), GetReader("A_SM7_V1_P.GetSnapshotQuick"));
}
}
}
The key line in the question is this:
...design pattern to instantiate correct Dto based on mapped REST params
To me this sounds like you want to use the Factory Pattern.
Urgh. Yes I know, cargo cult programming etc, BUT(!), there are good reasons:
You want to initialise a class (the DAL) based upon some settings
You want those settings defined at the top level (REST mapping)
You want lower level code to be totally ignorant of the settings (right?) so that they can change arbitrarily without requiring system wide refactors.
Sure, you could always just pass an instance of the DAL down the stack but that isn't always possible and can get a bit scrappy.
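A minimal sketch of that factory (the IDto interface, the DTO types and the parameter keys are all invented here for illustration):
public interface IDto { }
public class SnapshotDto : IDto { public string Symbol { get; set; } }
public class HistoryDto : IDto { public string Symbol { get; set; } }

public static class DtoFactory
{
    // Inspect the parsed REST parameters and build the matching DTO.
    public static IDto Create(IDictionary<string, string> restParams)
    {
        if (restParams.ContainsKey("snapshot"))
            return new SnapshotDto { Symbol = restParams["snapshot"] };
        if (restParams.ContainsKey("history"))
            return new HistoryDto { Symbol = restParams["history"] };
        throw new ArgumentException("No DTO type matches the given parameters.");
    }
}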
Alternatively...
Consider creating a DAL implementation that can be made aware of the various switches and will delegate calls to the correct DAL implementation. This might actually be lighter weight than a straight up factory.
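A sketch of that alternative (all names invented): a dispatching DAL that owns the mapping from switch values to concrete implementations, so callers never see the settings.
public class DispatchingDal : IDal
{
    private readonly IDictionary<string, IDal> _implementations;

    public DispatchingDal(IDictionary<string, IDal> implementations)
    {
        _implementations = implementations;
    }

    // Route the call to the DAL registered for this switch value.
    public object Fetch(string switchKey, string query)
    {
        return _implementations[switchKey].Fetch(switchKey, query);
    }
}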

Resolution for Model View Presenter Testing... Do I use DTO's or Domain objects or both?

The basic issue is how to test a presenter.
Take:
Domain object (will eventually be persisted to DB)
Base attributes are Id (DB ID, Int/GUID/Whatever) and TransientID (Local ID until saved, GUID)
DomainObject
namespace domain {
public class DomainObject {
private int _id;
private Guid _transientId;
public DomainObject()
{
_transientId = Guid.NewGuid();
}
}
}
PresenterTest:
var repository = Mock.StrictMock();
var view = Mock.StrictMock();
view.Save += null;
var saveEvent = LastCall.Ignore().GetEventRaiser();
var domainObject = new DomainObject() {Id = 0, Attribute = "Blah"};
Mock.ExpectCall(repository.Save(domainObject)).Returns(true);
Mock.ReplayAll();
var sut = new Presenter(repository, view);
saveEvent.Raise(view, EventArgs.Empty);
Mock.Verify();
So the problem here is that the domain object's identity is calculated from the ID, and failing that, from the transientID; there's no way to know what the transientID will be, so I can't have the mock repository check for equality.
The workarounds so far are:
1) LastCall.Ignore and content myself with just testing that the method got called, without testing the content of the call.
2) Write a DTO to test against and save to a service. The service then handles the mapping to the domain.
3) Write a fake test repository that uses custom logic to determine success.
1) doesn't test the majority of the logic. 2) is a lot of extra code for no good purpose. 3) seems potentially brittle.
Right now I'm leaning towards DTOs and a service, on the theory that it gives the greatest isolation between tiers, but it is probably 75% unnecessary...
there's no way to know what the transientID will be so I can't have the mock repository check for equality.
Actually, I think there is an opportunity here.
Instead of calling Guid.NewGuid(), you could create your own GuidFactory class that generates GUIDs. By default, it would use Guid.NewGuid() internally, but you could take control of it for tests.
public static class GuidFactory
{
static Func<Guid> _strategy = () => Guid.NewGuid();
public static Guid Build()
{
return _strategy();
}
public static void SetStrategy(Func<Guid> strategy)
{
_strategy = strategy;
}
}
In your constructor, you replace Guid.NewGuid() with GuidFactory.Build().
In your test setup, you override the strategy to suit your needs -- return a known Guid that you can use elsewhere in your tests or just output the default result to a field.
For example:
public class PseudoTest
{
IList<Guid> GeneratedGuids = new List<Guid>();
public void SetUpTest()
{
GuidFactory.SetStrategy(() =>
{
var result = Guid.NewGuid();
GeneratedGuids.Add(result);
return result;
});
}
public void Test()
{
systemUnderTest.DoSomething();
Assert.AreEqual(GeneratedGuids.Last(), someOtherGuid);
}
}
WPF has helped me realize that you really don't need to do much testing, if any, on the Controller/Presenter/VM. You really should focus all of your tests on the models and services you use. All business logic should be there; the view model or presenter or controller should be as light as possible, with its only role being to transfer data back and forth between the model and the view.
What's the point of testing whether you call a service when a button command makes it to the presenter? Or testing whether an event is wired properly?
Don't get me wrong, I still have a very small test fixture for the view models or controllers, but really the focus of tests should be on the models; let the integration tests verify the success of the view and the presenter.
Skinny controllers/VMs/Presenters.
Fat Models.
This is my answer because I ran into the same issue trying to test view models. I wasted so much time trying to figure out how best to test them, and another developer gave a great talk on Model-View patterns making this argument. Don't spend too much time making tests for these; focus on the models/services.

Where to put global rules validation in DDD

I'm new to DDD, and I'm trying to apply it in real life. There are no questions about validation logic such as null checks, empty-string checks, etc. - those go directly into the entity constructor/property. But where should validation of global rules like 'unique user name' go?
So, we have the entity User:
public class User : IAggregateRoot
{
private string _name;
public string Name
{
get { return _name; }
set { _name = value; }
}
// other data and behavior
}
And a repository for users:
public interface IUserRepository : IRepository<User>
{
User FindByName(string name);
}
Options are:
Inject repository to entity
Inject repository to factory
Create operation on domain service
???
And each option more detailed:
1. Inject repository into entity
I can query the repository in the entity's constructor/property, but I think that keeping a reference to the repository in an entity is a bad smell.
public User(IUserRepository repository)
{
_repository = repository;
}
public string Name
{
get { return _name; }
set
{
if (_repository.FindByName(value) != null)
throw new UserAlreadyExistsException();
_name = value;
}
}
Update: We can use DI to hide the dependency between User and IUserRepository via a Specification object.
2. Inject repository into factory
I can put this verification logic in the UserFactory. But what if we want to change the name of an already existing user?
3. Create operation on domain service
I can create a domain service for creating and editing users. But someone could directly edit the name of a user without calling that service...
public class AdministrationService
{
private IUserRepository _userRepository;
public AdministrationService(IUserRepository userRepository)
{
_userRepository = userRepository;
}
public void RenameUser(string oldName, string newName)
{
if (_userRepository.FindByName(newName) != null)
throw new UserAlreadyExistsException();
User user = _userRepository.FindByName(oldName);
user.Name = newName;
_userRepository.Save(user);
}
}
4. ???
Where do you put global validation logic for entities?
Thanks!
Most of the time it is best to place these kinds of rules in Specification objects.
You can place these Specifications in your domain packages, so anybody using your domain package has access to them. Using a specification, you can bundle your business rules with your entities, without creating difficult-to-read entities with undesired dependencies on services and repositories. If needed, you can inject dependencies on services or repositories into a specification.
Depending on the context, you can build different validators using the specification objects.
The main concern of entities should be keeping track of business state - that's enough of a responsibility, and they shouldn't be concerned with validation.
Example
public class User
{
public string Id { get; set; }
public string Name { get; set; }
}
Two specifications:
public class IdNotEmptySpecification : ISpecification<User>
{
public bool IsSatisfiedBy(User subject)
{
return !string.IsNullOrEmpty(subject.Id);
}
}
public class NameNotTakenSpecification : ISpecification<User>
{
// omitted code to set service; better use DI
private Service.IUserNameService UserNameService { get; set; }
public bool IsSatisfiedBy(User subject)
{
return UserNameService.NameIsAvailable(subject.Name);
}
}
And a validator:
public class UserPersistenceValidator : IValidator<User>
{
private readonly IList<ISpecification<User>> Rules =
new List<ISpecification<User>>
{
new IdNotEmptySpecification(),
new NameNotEmptySpecification(),
new NameNotTakenSpecification()
// and more ... better use DI to fill this list
};
public bool IsValid(User entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(User entity)
{
return Rules.Where(rule => !rule.IsSatisfiedBy(entity))
.Select(rule => GetMessageForBrokenRule(rule));
}
// ...
}
For completeness, the interfaces:
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public interface ISpecification<T>
{
bool IsSatisfiedBy(T subject);
}
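Usage would then be along these lines (sketch):
var validator = new UserPersistenceValidator();
if (!validator.IsValid(user))
{
    // Surface the failures instead of persisting.
    foreach (var message in validator.BrokenRules(user))
        Console.WriteLine(message);
}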
Notes
I think Vijay Patel's earlier answer is in the right direction, but I feel it's a bit off. He suggests that the user entity depend on the specification, where I believe it should be the other way around. This way, you can let the specification depend on services, repositories and context in general, without making your entity depend on them through a specification dependency.
References
A related question with a good answer with example: Validation in a Domain Driven Design.
Eric Evans describes the use of the specification pattern for validation, selection and object construction in chapter 9, pp 145.
This article on the specification pattern with an application in .Net might be of interest to you.
I would not recommend disallowing changes to properties in an entity if it's user input.
For example, if validation did not pass, you can still use the instance to display it in the user interface with validation results, allowing the user to correct the error.
Jimmy Nilsson, in his "Applying Domain-Driven Design and Patterns", recommends validating for a particular operation, not just for persisting. While an entity could be successfully persisted, the real validation occurs when an entity is about to change its state, for example when the 'Ordered' state changes to 'Purchased'.
While being created, the instance must be valid-for-saving, which involves checking for uniqueness. This is different from valid-for-ordering, where not only uniqueness must be checked, but also, for example, the creditworthiness of the client and availability at the store.
So, validation logic should not be invoked on property assignments; it should be invoked upon aggregate-level operations, whether they are persistent or not.
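A sketch of that idea (the checks and method names here are illustrative):
public class Order
{
    public string CustomerId { get; set; }
    public bool IsUnique { get; set; }          // result of an external uniqueness check
    public bool CustomerHasCredit { get; set; } // result of a credit check
    public bool ItemsInStock { get; set; }      // result of an inventory check

    // Valid-for-saving: structural checks and uniqueness suffice to persist.
    public bool IsValidForSaving()
    {
        return !string.IsNullOrEmpty(CustomerId) && IsUnique;
    }

    // Valid-for-ordering: stricter; also requires credit and availability.
    public bool IsValidForOrdering()
    {
        return IsValidForSaving() && CustomerHasCredit && ItemsInStock;
    }
}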
Edit: Judging from the other answers, the correct name for such a 'domain service' is specification. I've updated my answer to reflect this, including a more detailed code sample.
I'd go with option 3: create a domain service specification which encapsulates the actual logic that performs the validation. For example, the specification initially calls a repository, but you could replace that with a web service call at a later stage. Having all that logic behind an abstract specification keeps the overall design more flexible.
To prevent someone from editing the name without validating it, make the specification a required aspect of editing the name. You can achieve this by changing the API of your entity to something like this:
public class User
{
public string Name { get; private set; }
public void SetName(string name, ISpecification<User, string> specification)
{
// Insert basic null validation here.
if (!specification.IsSatisfiedBy(this, name))
{
// Throw some validation exception.
}
this.Name = name;
}
}
public interface ISpecification<TType, TValue>
{
bool IsSatisfiedBy(TType obj, TValue value);
}
public class UniqueUserNameSpecification : ISpecification<User, string>
{
private IUserRepository repository;
public UniqueUserNameSpecification(IUserRepository repository)
{
this.repository = repository;
}
public bool IsSatisfiedBy(User obj, string value)
{
if (value == obj.Name)
{
return true;
}
// Use this.repository for further validation of the name; for example
// (an assumption) treat it as unique when no other user already has it:
return this.repository.FindByName(value) == null;
}
}
Your calling code would look something like this:
var userRepository = IoC.Resolve<IUserRepository>();
var specification = new UniqueUserNameSpecification(userRepository);
user.SetName("John", specification);
And of course, you can mock ISpecification in your unit tests for easier testing.
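For instance, with NSubstitute (used elsewhere on this page) the specification could be stubbed like so (sketch):
var specification = Substitute.For<ISpecification<User, string>>();
specification.IsSatisfiedBy(Arg.Any<User>(), Arg.Any<string>()).Returns(true);
user.SetName("John", specification); // passes validation unconditionally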
I’m not an expert on DDD but I have asked myself the same questions and this is what I came up with:
Validation logic should normally go into the constructor/factory and setters. This way you guarantee that you always have valid domain objects. But if the validation involves database queries that impact your performance, an efficient implementation requires a different design.
(1) Injecting repositories into entities: this can be technically difficult, and it also makes managing application performance very hard due to the fragmentation of your database logic. Seemingly simple operations can now have an unexpected performance impact. It also makes it impossible to optimize your domain objects for operations on groups of the same kind of entities: you can no longer write a single group query; instead you always have individual queries for each entity.
(2) Injecting the repository: you should not put any business logic in repositories. Keep repositories simple and focused. They should act as if they were collections and only contain logic for adding, removing and finding objects (some even spin off the find methods to other objects).
(3) Domain service: this seems the most logical place to handle the validation that requires database querying. A good implementation would make the constructor/factory and the setters involved package private, so that entities can only be created/modified via the domain service.
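In C# terms, "package private" roughly translates to internal members, so the shape could be (a sketch building on the question's types):
public class User
{
    public string Name { get; private set; }
    internal User(string name) { Name = name; }              // only creatable within the assembly
    internal void Rename(string newName) { Name = newName; }
}

public class UserService
{
    private readonly IUserRepository _repository;
    public UserService(IUserRepository repository) { _repository = repository; }

    public void Rename(User user, string newName)
    {
        if (_repository.FindByName(newName) != null)
            throw new UserAlreadyExistsException();
        user.Rename(newName); // internal, so outside callers must go through this service
    }
}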
I would use a Specification to encapsulate the rule. You can then call when the UserName property is updated (or from anywhere else that might need it):
public class UniqueUserNameSpecification : ISpecification
{
public bool IsSatisfiedBy(User user)
{
// Check if the username is unique here
}
}
public class User
{
string _Name;
UniqueUserNameSpecification _UniqueUserNameSpecification; // You decide how this is injected
public string Name
{
get { return _Name; }
set
{
if (_UniqueUserNameSpecification.IsSatisfiedBy(this))
{
_Name = value;
}
else
{
// Execute your custom warning here
}
}
}
}
It won't matter if another developer tries to modify User.Name directly, because the rule will always execute.
Find out more here
In my CQRS Framework, every Command Handler class also contains a ValidateCommand method, which then calls the appropriate business/validation logic in the Domain (mostly implemented as Entity methods or Entity static methods).
So the caller would do it like this:
if (cmdService.ValidateCommand(myCommand) == ValidationResult.OK)
{
// Now we can assume there will be no business reason to reject
// the command
cmdService.ExecuteCommand(myCommand); // Async
}
Every specialized Command Handler contains the wrapper logic, for instance:
public ValidationResult ValidateCommand(MakeCustomerGold command)
{
var result = new ValidationResult();
if (Customer.CanMakeGold(command.CustomerId))
{
// "OK" logic here
} else {
// "Not OK" logic here
}
return result;
}
The ExecuteCommand method of the command handler will then call the ValidateCommand() again, so even if the client didn't bother, nothing will happen in the Domain that is not supposed to.
In short, you have four options:
IsValid method: transition an entity to a state (potentially invalid) and ask it to validate itself.
Validation in application services.
TryExecute pattern.
Execute / CanExecute pattern.
read more here
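As a sketch, the Execute / CanExecute pair could look like this in terms of the question's types:
public class RenameUserCommand
{
    private readonly IUserRepository _repository;
    public RenameUserCommand(IUserRepository repository) { _repository = repository; }

    // CanExecute exposes the rule so callers (e.g. the UI) can check it up front.
    public bool CanExecute(User user, string newName)
    {
        return _repository.FindByName(newName) == null;
    }

    public void Execute(User user, string newName)
    {
        if (!CanExecute(user, newName))
            throw new UserAlreadyExistsException();
        user.Name = newName;
        _repository.Save(user);
    }
}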
Create a method called, for example, IsUserNameValid() and make it accessible from everywhere; I would put it in the user service myself. Doing this will not limit you when future changes arise. It keeps the validation code in one place (the implementation), and other code that depends on it will not have to change if the validation changes. You may find that you need to call this from multiple places later on, such as the UI for visual indication (without having to resort to exception handling), the service layer for correct operations, and the repository (cache, db, etc.) layer to ensure that stored items are valid.
I like option 3. The simplest implementation could look like this:
public interface IUser
{
string Name { get; }
bool IsNew { get; }
}
public class User : IUser
{
public string Name { get; private set; }
public bool IsNew { get; private set; }
}
public class UserService : IUserService
{
public void ValidateUser(IUser user)
{
var repository = RepositoryFactory.GetUserRepository(); // use IoC if needed
if (user.IsNew && repository.UserExists(user.Name))
throw new ValidationException("Username already exists");
}
}
Create operation on domain service
"Or I can create domain service for creating and editing users. But someone can directly edit name of user without calling that service..."
If you properly designed your entities this should not be an issue.
