Where to put global rules validation in DDD (C#)

I'm new to DDD, and I'm trying to apply it in real life. There are no questions about simple validation logic such as null checks, empty-string checks, etc. - that goes directly into the entity constructor/property. But where should validation of global rules like 'Unique user name' go?
So, we have the entity User:
public class User : IAggregateRoot
{
private string _name;
public string Name
{
get { return _name; }
set { _name = value; }
}
// other data and behavior
}
And a repository for users:
public interface IUserRepository : IRepository<User>
{
User FindByName(string name);
}
The options are:
Inject the repository into the entity
Inject the repository into a factory
Create an operation on a domain service
???
And each option in more detail:
1. Inject the repository into the entity
I can query the repository in the entity's constructor/property. But I think keeping a reference to a repository inside an entity is a bad smell.
public User(IUserRepository repository)
{
_repository = repository;
}
public string Name
{
get { return _name; }
set
{
if (_repository.FindByName(value) != null)
throw new UserAlreadyExistsException();
_name = value;
}
}
Update: we can use DI to hide the dependency between User and IUserRepository behind a Specification object.
2. Inject the repository into a factory
I can put this verification logic in a UserFactory. But what if we want to change the name of an already existing user?
3. Create an operation on a domain service
I can create a domain service for creating and editing users. But someone could edit a user's name directly without calling that service...
public class AdministrationService
{
private IUserRepository _userRepository;
public AdministrationService(IUserRepository userRepository)
{
_userRepository = userRepository;
}
public void RenameUser(string oldName, string newName)
{
if (_userRepository.FindByName(newName) != null)
throw new UserAlreadyExistsException();
User user = _userRepository.FindByName(oldName);
user.Name = newName;
_userRepository.Save(user);
}
}
4. ???
Where do you put global validation logic for entities?
Thanks!

Most of the time it is best to place these kinds of rules in Specification objects.
You can place these Specifications in your domain packages, so anybody using your domain package has access to them. Using a specification, you can bundle your business rules with your entities, without creating difficult-to-read entities with undesired dependencies on services and repositories. If needed, you can inject dependencies on services or repositories into a specification.
Depending on the context, you can build different validators using the specification objects.
The main concern of entities should be keeping track of business state - that's enough of a responsibility, and they shouldn't also be concerned with validation.
Example
public class User
{
public string Id { get; set; }
public string Name { get; set; }
}
Two specifications:
public class IdNotEmptySpecification : ISpecification<User>
{
public bool IsSatisfiedBy(User subject)
{
return !string.IsNullOrEmpty(subject.Id);
}
}
public class NameNotTakenSpecification : ISpecification<User>
{
// omitted code to set service; better use DI
private Service.IUserNameService UserNameService { get; set; }
public bool IsSatisfiedBy(User subject)
{
return UserNameService.NameIsAvailable(subject.Name);
}
}
And a validator:
public class UserPersistenceValidator : IValidator<User>
{
private readonly IList<ISpecification<User>> Rules =
new List<ISpecification<User>>
{
new IdNotEmptySpecification(),
new NameNotEmptySpecification(),
new NameNotTakenSpecification()
// and more ... better use DI to fill this list
};
public bool IsValid(User entity)
{
return BrokenRules(entity).Count() == 0;
}
public IEnumerable<string> BrokenRules(User entity)
{
return Rules.Where(rule => !rule.IsSatisfiedBy(entity))
.Select(rule => GetMessageForBrokenRule(rule));
}
// ...
}
For completeness, the interfaces:
public interface IValidator<T>
{
bool IsValid(T entity);
IEnumerable<string> BrokenRules(T entity);
}
public interface ISpecification<T>
{
bool IsSatisfiedBy(T subject);
}
Notes
I think Vijay Patel's earlier answer is in the right direction, but I feel it's a bit off. He suggests that the user entity should depend on the specification, whereas I believe it should be the other way around. This way, you can let the specification depend on services, repositories and context in general, without making your entity depend on them through a specification dependency.
References
A related question with a good answer and example: Validation in a Domain Driven Design.
Eric Evans describes the use of the specification pattern for validation, selection and object construction in chapter 9, p. 145.
This article on the specification pattern with an application in .Net might be of interest to you.

I would not recommend disallowing changes to entity properties when they come from user input.
For example, if validation did not pass, you can still use the instance to display it in the user interface together with the validation results, allowing the user to correct the error.
Jimmy Nilsson, in his "Applying Domain-Driven Design and Patterns", recommends validating for a particular operation, not just for persisting. While an entity could be successfully persisted, the real validation occurs when an entity is about to change its state, for example when the 'Ordered' state changes to 'Purchased'.
While being created, the instance must be valid-for-saving, which involves checking for uniqueness. That is different from valid-for-ordering, where not only uniqueness must be checked, but also, for example, the creditworthiness of the client and availability at the store.
So validation logic should not be invoked on property assignments; it should be invoked by aggregate-level operations, whether they are persistence-related or not.
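A minimal sketch of what such operation-specific validation could look like (the Order, IOrderRepository, ICreditService and IStockService names are my own illustrations, not from the book):
public interface IOperationValidator<T>
{
    IEnumerable<string> BrokenRules(T entity);
}

// Valid-for-saving: structural rules and uniqueness only.
public class ValidForSavingValidator : IOperationValidator<Order>
{
    private readonly IOrderRepository _orders;

    public ValidForSavingValidator(IOrderRepository orders) { _orders = orders; }

    public IEnumerable<string> BrokenRules(Order order)
    {
        if (_orders.FindByNumber(order.Number) != null)
            yield return "Order number must be unique.";
    }
}

// Valid-for-ordering: uniqueness plus creditworthiness and stock availability.
public class ValidForOrderingValidator : IOperationValidator<Order>
{
    private readonly ICreditService _credit;
    private readonly IStockService _stock;

    public ValidForOrderingValidator(ICreditService credit, IStockService stock)
    {
        _credit = credit;
        _stock = stock;
    }

    public IEnumerable<string> BrokenRules(Order order)
    {
        if (!_credit.IsCreditworthy(order.CustomerId))
            yield return "Customer did not pass the credit check.";
        if (!_stock.IsAvailable(order.Lines))
            yield return "Some ordered items are not available at the store.";
    }
}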

Edit: judging from the other answers, the correct name for such a 'domain service' is a specification. I've updated my answer to reflect this, including a more detailed code sample.
I'd go with option 3: create a specification that encapsulates the actual validation logic. For example, the specification might initially call a repository, but you could replace that with a web service call at a later stage. Having all that logic behind an abstract specification will keep the overall design more flexible.
To prevent someone from editing the name without validating it, make the specification a required aspect of editing the name. You can achieve this by changing the API of your entity to something like this:
public class User
{
public string Name { get; private set; }
public void SetName(string name, ISpecification<User, string> specification)
{
// Insert basic null validation here.
if (!specification.IsSatisfiedBy(this, name))
{
// Throw some validation exception.
}
this.Name = name;
}
}
public interface ISpecification<TType, TValue>
{
bool IsSatisfiedBy(TType obj, TValue value);
}
public class UniqueUserNameSpecification : ISpecification<User, string>
{
private IUserRepository repository;
public UniqueUserNameSpecification(IUserRepository repository)
{
this.repository = repository;
}
public bool IsSatisfiedBy(User obj, string value)
{
if (value == obj.Name)
{
return true;
}
// Use this.repository for further validation of the name, e.g.:
return this.repository.FindByName(value) == null;
}
}
Your calling code would look something like this:
var userRepository = IoC.Resolve<IUserRepository>();
var specification = new UniqueUserNameSpecification(userRepository);
user.SetName("John", specification);
And of course, you can mock ISpecification in your unit tests for easier testing.

I’m not an expert on DDD but I have asked myself the same questions and this is what I came up with:
Validation logic should normally go into the constructor/factory and setters. This way you guarantee that you always have valid domain objects. But if the validation involves database queries that impact your performance, an efficient implementation requires a different design.
(1) Injecting into entities: injecting repositories into entities can be technically difficult, and it also makes managing application performance very hard due to the fragmentation of your database logic. Seemingly simple operations can now have an unexpected performance impact. It also makes it impossible to optimize your domain objects for operations on groups of the same kind of entities: you can no longer write a single group query; instead you always issue individual queries for each entity.
(2) Injecting the repository: you should not put any business logic in repositories. Keep repositories simple and focused. They should act as if they were collections and only contain logic for adding, removing and finding objects (some even spin the find methods off into other objects).
(3) Domain service: this seems the most logical place to handle validation that requires database querying. A good implementation would make the constructors/factories and setters involved package private (internal in C#), so that entities can only be created or modified through the domain service.
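A minimal sketch of that idea in C#, where internal plays the role of package private (this assumes the entity and the domain service live in the same assembly, and reuses the IUserRepository and exception from the question):
public class User
{
    public string Name { get; internal set; } // only code in the domain assembly can change this
}

public class UserService
{
    private readonly IUserRepository _users;

    public UserService(IUserRepository users)
    {
        _users = users;
    }

    public void RenameUser(User user, string newName)
    {
        if (_users.FindByName(newName) != null)
            throw new UserAlreadyExistsException();
        user.Name = newName; // allowed here, because we are inside the domain assembly
        _users.Save(user);
    }
}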

I would use a Specification to encapsulate the rule. You can then call it when the user name property is updated (or from anywhere else that might need it):
public class UniqueUserNameSpecification : ISpecification
{
public bool IsSatisfiedBy(User user)
{
// Check if the username is unique here
}
}
public class User
{
string _Name;
UniqueUserNameSpecification _UniqueUserNameSpecification; // You decide how this is injected
public string Name
{
get { return _Name; }
set
{
if (_UniqueUserNameSpecification.IsSatisfiedBy(this))
{
_Name = value;
}
else
{
// Execute your custom warning here
}
}
}
}
It won't matter if another developer tries to modify User.Name directly, because the rule will always execute.
Find out more here

In my CQRS Framework, every Command Handler class also contains a ValidateCommand method, which then calls the appropriate business/validation logic in the Domain (mostly implemented as Entity methods or Entity static methods).
So the caller would do something like this:
if (cmdService.ValidateCommand(myCommand) == ValidationResult.OK)
{
// Now we can assume there will be no business reason to reject
// the command
cmdService.ExecuteCommand(myCommand); // Async
}
Every specialized Command Handler contains the wrapper logic, for instance:
public ValidationResult ValidateCommand(MakeCustomerGold command)
{
var result = new ValidationResult();
if (Customer.CanMakeGold(command.CustomerId))
{
// "OK" logic here
}
else
{
// "Not OK" logic here, e.g. add broken rules to the result
}
return result;
}
The ExecuteCommand method of the command handler then calls ValidateCommand() again, so even if the client didn't bother, nothing that isn't supposed to happen will happen in the Domain.
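A sketch of what that wrapper could look like; the IsValid flag, the repository and the MakeGold method are assumptions for illustration, not part of the original framework:
public void ExecuteCommand(MakeCustomerGold command)
{
    // Re-validate, so the domain stays protected even if the caller skipped ValidateCommand.
    var validation = ValidateCommand(command); // assumes ValidationResult exposes an IsValid flag
    if (!validation.IsValid)
        throw new InvalidOperationException("Command was rejected by validation.");

    // Safe to touch the domain now.
    var customer = _customerRepository.GetById(command.CustomerId); // hypothetical repository
    customer.MakeGold();
    _customerRepository.Save(customer);
}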

In short, you have 4 options:
IsValid method: transition an entity to a state (potentially invalid) and ask it to validate itself.
Validation in application services.
TryExecute pattern.
Execute / CanExecute pattern.
Read more here; a minimal Execute / CanExecute sketch follows below.
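A minimal sketch of the Execute / CanExecute idea (the Account example and all names are illustrative):
public class Account
{
    public decimal Balance { get; private set; }

    // Query: reports whether the operation would succeed, without side effects.
    public bool CanWithdraw(decimal amount)
    {
        return amount > 0 && amount <= Balance;
    }

    // Command: performs the state transition and enforces the same rule defensively.
    public void Withdraw(decimal amount)
    {
        if (!CanWithdraw(amount))
            throw new InvalidOperationException("Withdrawal is not allowed.");
        Balance -= amount;
    }
}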

Create a method called, for example, IsUserNameValid() and make it accessible from everywhere. I would put it in the user service myself. Doing this will not limit you when future changes arise: it keeps the validation code in one place, and other code that depends on it will not have to change if the validation changes. You may find that you need to call this from multiple places later on, such as the UI for visual indication (without having to resort to exception handling), the service layer for correct operations, and the repository (cache, db, etc.) layer to ensure that stored items are valid.
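A minimal sketch of that idea, reusing the IUserRepository from the question (the IUserService interface is just an illustration):
public interface IUserService
{
    bool IsUserNameValid(string userName);
}

public class UserService : IUserService
{
    private readonly IUserRepository _users;

    public UserService(IUserRepository users)
    {
        _users = users;
    }

    public bool IsUserNameValid(string userName)
    {
        // The rule lives in one place: non-empty and not already taken.
        return !string.IsNullOrWhiteSpace(userName)
            && _users.FindByName(userName) == null;
    }
}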

I like option 3. The simplest implementation could look like this:
public interface IUser
{
string Name { get; }
bool IsNew { get; }
}
public class User : IUser
{
public string Name { get; private set; }
public bool IsNew { get; private set; }
}
public class UserService : IUserService
{
public void ValidateUser(IUser user)
{
var repository = RepositoryFactory.GetUserRepository(); // use IoC if needed
if (user.IsNew && repository.UserExists(user.Name))
throw new ValidationException("Username already exists");
}
}

Create domain service
"I can create a domain service for creating and editing users. But someone could edit a user's name directly without calling that service..."
If you design your entities properly, this should not be an issue.

Related

Reduce/remove repetitive business logic checks in ASP.NET Core

Is there a way to reduce/remove constant duplication of user access checks (or some other checks) in a business layer?
Let's consider the following example: a simple CRUD application with one entity, BlogPost:
public class BlogPost
{
public int Id { get; set; }
public string Title { get; set; }
public string Body { get; set; }
public int AuthorId { get; set; }
}
In PUT/DELETE requests, before modifying or deleting the entity, I need to check whether the user making the request is the author of the BlogPost and is therefore permitted to delete/edit it.
So in both UpdateBlogPost and DeleteBlogPost of an imaginary BlogPostService I'll have to write something like this:
var blogPostInDb = _blogPostRepository.GetBlogPost();
if(blogPostInDb == null)
{
// throw exception or do whatever is needed
}
if(blogPostInDb.AuthorId != _currentUser.Id)
{
// throw exception etc...
}
This kind of code will be the same for both the Update and Delete methods, as well as for any other methods that may be added in the future, and the same for all entities.
Is there any way to reduce or completely remove such duplication?
I thought this over and came up with following solutions, but they don't satisfy me fully.
First solution
Using filters. We can create custom filters like [EnsureEntityExists] and [EnsureUserCanManageEntity], but this way we're spreading some of the business logic into our API layer, and it's not flexible enough, since we need to create such a filter for every entity. Perhaps some kind of generic filter could be made using reflection.
There is also another problem with this approach. Let's say we've made such a filter that checks our rules: we fetch the entity from the db, do the checks, throw exceptions and all that stuff, and let the controller method execute. BUT in the service layer we need to fetch the entity again, so we're making two roundtrips to the db. Maybe I'm overthinking this and two roundtrips are fine, given that caching can be applied.
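For what it's worth, a rough sketch of such a filter for BlogPost only (the IBlogPostRepository, ICurrentUser and the "id" action parameter are assumptions; a generic version would need reflection or a common entity interface; uses Microsoft.AspNetCore.Mvc and Microsoft.AspNetCore.Mvc.Filters):
// Applied to an action with [TypeFilter(typeof(EnsureUserCanManageBlogPostFilter))]
public class EnsureUserCanManageBlogPostFilter : IAsyncActionFilter
{
    private readonly IBlogPostRepository _blogPostRepository; // hypothetical repository abstraction
    private readonly ICurrentUser _currentUser;               // hypothetical current-user accessor

    public EnsureUserCanManageBlogPostFilter(IBlogPostRepository blogPostRepository, ICurrentUser currentUser)
    {
        _blogPostRepository = blogPostRepository;
        _currentUser = currentUser;
    }

    public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
    {
        var id = (int)context.ActionArguments["id"]; // assumes the action has an int "id" parameter
        var blogPostInDb = _blogPostRepository.GetBlogPost(id);

        if (blogPostInDb == null)
        {
            context.Result = new NotFoundResult();
            return;
        }
        if (blogPostInDb.AuthorId != _currentUser.Id)
        {
            context.Result = new ForbidResult();
            return;
        }

        await next();
    }
}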
Second solution
Since I'm using CQRS (or at least some form of it), I have the MediatR library and can make use of Pipeline Behaviors, and I could even pass the fetched entity further down the pipeline by mutating TRequest (which I don't want to do). This solution requires a common interface on all requests so the entity id can be retrieved. The roundtrip problem applies here too.
public interface IBlogPostAccess
{
public int Id { get; set; }
}
public class ChangeBlogPostCommand: IRequest, IBlogPostAccess
{
// ...
}
public class DeleteBlogPostCommand: IRequest, IBlogPostAccess
{
// ...
}
public class BlogPostAccessBehavior<TRequest, TResponse> : IPipelineBehavior<TRequest, TResponse> where TRequest : IBlogPostAccess
{
// all necessary stuff (repository, current user) injected via DI
public BlogPostAccessBehavior()
{
}
public async Task<TResponse> Handle(TRequest request, CancellationToken cancellationToken, RequestHandlerDelegate<TResponse> next)
{
var blogPostInDb = _blogPostRepository.GetBlogPost(request.Id);
if(blogPostInDb == null)
{
// throw exception or do whatever is needed
}
if(blogPostInDb.AuthorId != _currentUser.Id)
{
// throw exception etc...
}
return await next();
}
}
Third solution
Create something like a request context service. In a very simplified form it would be a dictionary persisted for the duration of the request, where we can store data (in this case the BlogPost we fetched in the filter/pipeline). This seems lame and reminds me of ViewBag in ASP.NET MVC.
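For completeness, a rough sketch of what such a per-request store could look like (the names are illustrative); registering it as a scoped service means one instance lives for the duration of a request:
// e.g. services.AddScoped<RequestEntityCache>();
public class RequestEntityCache
{
    private readonly Dictionary<string, object> _items = new Dictionary<string, object>();

    public void Set<T>(string key, T value) => _items[key] = value;

    public bool TryGet<T>(string key, out T value)
    {
        if (_items.TryGetValue(key, out var stored) && stored is T typed)
        {
            value = typed;
            return true;
        }
        value = default;
        return false;
    }
}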
Fourth solution
It's more of an enhancement than a solution, but we can use guard clauses or extension methods to reduce the nesting of if statements, as sketched below.
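For example, a couple of guard-style extension methods along those lines (the exception types here are placeholders):
public static class BlogPostGuards
{
    public static BlogPost EnsureExists(this BlogPost post) =>
        post ?? throw new EntityNotFoundException();

    public static BlogPost EnsureOwnedBy(this BlogPost post, int userId) =>
        post.AuthorId == userId ? post : throw new AccessDeniedException();
}

// Usage inside a service method:
// var blogPostInDb = _blogPostRepository.GetBlogPost(id)
//     .EnsureExists()
//     .EnsureOwnedBy(_currentUser.Id);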
Again, maybe I'm overthinking this problem, or it's not a problem at all, or it's a design issue. Any help or thoughts appreciated.
If you are concerned about making many database calls, you could try caching the returned objects per request with something like LazyCache: https://github.com/alastairtree/LazyCache
I would not recommend caching across requests.
For code organization, I would recommend extracting the authorization logic into a separate method and calling that method on each request. The benefit is that if the logic changes, you only need to update it in one place.
For example, something like this:
bool CanEdit(int userId)
{
var user = getUserByUserId(userId);
if (user.IsAdmin) return true;
// depending on where this method lives, it might have access to the blog post here
if (_blogPost.AuthorId == userId) return true;
return false;
}

How to test DB.Configuration.AutoDetectChangesEnabled = false

I'm trying to write some tests for a class using NSubstitute.
Class constructor is:
public class ClassToTest : IClassToTest
{
private IDatabase DB;
public ClassToTest(IDatabase DB)
{
this.DB = DB;
this.DB.Configuration.AutoDetectChangesEnabled = false;
}
Here is my UnitTests class:
[TestFixture]
public class ClassToTestUnitTests
{
private ClassToTest _testClass;
[SetUp]
public void SetUp()
{
var Db = Substitute.For<IDatabase>();
//Db.Configuration.AutoDetectChangesEnabled = false; <- I've tried to do it like this
var dummyData = Substitute.For<DbSet<Data>, IQueryable<Data>, IDbAsyncEnumerable<Data>>().SetupData(GetData());
Db.Data.Returns(dummyData);
_testClass = new ClassToTest(Db);
}
Whenever I try to test some method, the test fails with a NullReferenceException, and the stack trace points to the SetUp method.
When I comment out the
this.DB.Configuration.AutoDetectChangesEnabled = false; line in the ClassToTest constructor, the tests work fine.
Edit:
public interface IInventoryDatabase
{
DbSet<NamesV> NamesV { get; set; }
DbSet<Adress> Adresses { get; set; }
DbSet<RandomData> Randomdata { get; set; }
// (...more DbSets)
System.Data.Entity.Database Database { get; }
DbContextConfiguration Configuration { get; }
int SaveChanges();
}
The reason for the NullReferenceException is that NSubstitute cannot automatically substitute for DbContextConfiguration (it can only do so for purely virtual classes).
Normally we could work around this by manually configuring this property, something like Db.Configuration.Returns(myConfiguration), but in this case DbContextConfiguration does not seem to have a public constructor, so we are unable to create an instance for myConfiguration.
At this stage I can think of two main options: wrap the problematic class in a more testable adapter class; or switch to testing this at a different level. (My preference is the latter which I'll explain below.)
The first option involves something like this:
public interface IDbContextConfiguration {
bool AutoDetectChangesEnabled { get; set; }
// ... any other required members here ...
}
public class DbContextConfigurationAdapter : IDbContextConfiguration {
DbContextConfiguration config;
public DbContextConfigurationAdapter(DbContextConfiguration config) {
this.config = config;
}
public bool AutoDetectChangesEnabled {
get { return config.AutoDetectChangesEnabled; }
set { config.AutoDetectChangesEnabled = value; }
}
}
Then update IInventoryDatabase to use the more testable IDbContextConfiguration type. My objection to this approach is that it can end up requiring a lot of work for something that should be fairly simple. The approach can be very useful where we have behaviours that make sense grouped under a logical interface, but for working with a single AutoDetectChangesEnabled property it seems like unnecessary work.
The other option is to test this at a different level. I think the friction in testing the current code is that we are trying to substitute for details of Entity Framework, rather than interfaces we've created for partitioning the logical details of our app. Search for "don't mock types you don't own" for more information on why this can be a problem (I've written about it before here).
One example of testing at a different level is to switch to an in-memory database for testing this part of the code. This will tell you much more valuable information: given a known state of the test database, you are demonstrating the queries return the expected information. This is in contrast to a test showing we are calling Entity Framework in the way we think is required.
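As a concrete sketch of that option for EF6, the Effort library can back a DbContext with an in-memory connection. This assumes your context exposes a constructor that accepts a DbConnection; the context name and test data below are illustrative:
// NuGet: Effort.EF6 (in-memory provider for Entity Framework 6)
var connection = Effort.DbConnectionFactory.CreateTransient();

// Assumes: public InventoryContext(DbConnection conn) : base(conn, contextOwnsConnection: true) { }
using (var context = new InventoryContext(connection))
{
    context.NamesV.Add(new NamesV { /* known test data */ });
    context.SaveChanges();

    // Exercise the real query code against this context and assert on the results.
}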
To combine this approach with mocking (not necessarily required!), we can create a higher level interface and substitute for that for testing our application code, then make an implementation of that interface and test that using the in-memory database. We have then divided the application into two parts that we can test independently: first that our app uses data from the data access interface correctly, and secondly that our implementation of that interface works as expected.
So that would give us something like this:
public interface IAppDatabase {
// These members just for example. Maybe instead of something general like
// `GetAllNames()` we have operations specific to app operations such as
// `UpdateAddress(Guid id, Address newAddress)`, `GetNameFor(SomeParams p)` etc.
Task<List<Name>> GetAllNames();
Task<Address> LookupAddress(Guid id);
}
public class AppDatabase : IAppDatabase {
// ...
public AppDatabase(IInventoryDatabase db) { ... }
public Task<List<Name>> GetAllNames() {
// use `db` and Entity Framework to retrieve data...
}
// ...
}
The AppDatabase class we test with an in-memory database. The rest of the app we test with respect to a substitute IAppDatabase.
Note that we can skip the mocking step here by using the in-memory database for all relevant tests. Using mocking may be easier than setting up all the required data in the database, or may make tests run faster. Or maybe not -- I suggest considering both options.
Hope this helps.

Pass information from one layer to another

Abstract view: I want to pass information from one layer to another (note: if there's a better title for this thread, let me know).
I have a ViewModel which communicates with my Views and my Service layer.
And I have a Service layer communicating with my persistence layer.
Let's assume I have the following classes:
public class EmployeeViewModel
{
// The following properties are bound to my View (two-way binding)
public Firstname ...
public Lastname ...
public Email ...
public void PerformingSearch()
{
...
EmployeeService.Search(...);
...
}
}
public class EmployeeService
{
public List<Employee> Search(...)
{
// Searching in db
}
}
What is the best practice for handing data from my ViewModel over to my Service layer (e.g. for performing a search)?
I see a few options (from the ViewModel's perspective):
EmployeeService.Search(Firstname, Lastname, Email);
EmployeeService.Search(employeeSearchModel); // In this case I would need another model. How should that model be instantiated?
EmployeeService.Search(this); // Conversion has to be done somewhere
Is there a design pattern for this problem? What is it called? Which option is best? Did I miss anything?
Describing your problem space
Your particular example tells me that your current architecture contains a service layer that more or less acts as a proxy to your data access layer. Without more in-depth knowledge of your architecture, I would suggest a possible solution that keeps things as simple as your environment allows.
Now let's try to pick a strategy to get a possible solution model.
Your user story sounds like: "a user submits information to obtain a list of employees".
Your current use-case simplified:
UI: submits some information that you need to serve;
VM: receives the search terms and passes it next to the service layer;
SL: sends the received data to Data Access Layer (and maybe updates the response values to VM properties);
DAL: looks up information in persistence store and returns the obtained values.
A refactored use-case example:
VM: invokes a query with the needed values encapsulated, and sets the properties to display in the UI.
Looks easier, right?
Enter: Command Query Separation
In short, CQS states that every method should either be a command that performs an action, or a query that returns data to the caller, but not both.
In your particular case we need to focus on queries, where:
Queries: Return a result and do not change the observable state of the system (are free of side effects).
But how does this help you? Let's see.
A very good and detailed explanation of the query side of CQS can be found in the blog post "Meanwhile... on the query side of my architecture" by Steven.
Query concept applied to your problem
Defining an interface for the query object
public interface IQuery<TResult> {}
The query handler definition:
public interface IQueryHandler<TQuery, TResult> where TQuery : IQuery<TResult>
{
TResult Handle(TQuery query);
}
Now here is an implementation of your "search" query object. This is effectively the answer to your "how to pass information" question:
public class FindEmployeeBySearchTextQuery : IQuery<List<Employee>>
{
public string FirstName { get; set; }
public string LastName { get; set; }
public string Email { get; set; }
}
And lastly, a query handler to which you pass your query object:
public class FindEmployeeBySearchTextQueryHandler
: IQueryHandler<FindEmployeeBySearchTextQuery, List<Employee>>
{
private readonly IDbContext db;
public FindEmployeeBySearchTextQueryHandler(IDbContext db)
{
this.db = db;
}
public List<Employee> Handle(FindEmployeeBySearchTextQuery query)
{
return (
from employee in this.db.employees
where employee.FirstName.Contains(query.FirstName) ||
employee.LastName.Contains(query.LastName) ||
employee.Email == query.Email
select employee )
.ToList();
}
}
Note: this Handle() example uses an IDbContext abstraction over Entity Framework's DbContext; rework/modify it according to your needs (ADO.NET, NHibernate, etc.).
And finally in your view model:
public class EmployeeViewModel
{
private readonly IQueryHandler<FindEmployeeBySearchTextQuery, List<Employee>> _queryHandler;
public EmployeeViewModel(IQueryHandler<FindEmployeeBySearchTextQuery, List<Employee>> queryHandler)
{
_queryHandler = queryHandler;
}
public void PerformingSearch()
{
var query = new FindEmployeeBySearchTextQuery
{
FirstName = "John",
LastName = "Doe",
Email = "stack#has.been.over.flowed.com"
};
List<Employee> employees = _queryHandler.Handle(query);
// .. Do further processing of the obtained data
}
}
This example assumes that you are using Dependency Injection.
The IQueryHandler implementation is injected into your view model's constructor, and you then work with the injected implementation.
Using this approach your code becomes cleaner and more use-case driven, with better isolation of responsibilities, which you can easily test and decorate with further cross-cutting concerns.
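For illustration, a possible composition root using Microsoft.Extensions.DependencyInjection (the answer does not prescribe a particular container, so treat these registrations as an assumption):
// Composition-root snippet; the handler also needs whatever IDbContext implementation you use.
var services = new ServiceCollection();

services.AddTransient<IQueryHandler<FindEmployeeBySearchTextQuery, List<Employee>>,
                      FindEmployeeBySearchTextQueryHandler>();
services.AddTransient<EmployeeViewModel>();

var provider = services.BuildServiceProvider();
var viewModel = provider.GetRequiredService<EmployeeViewModel>();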

Design Pattern decisions - REST API & DAL

I am working on an application that has a WCF REST API and, below it, a DAL. Everything is written in C#.
All REST methods are GET, but many of them have a generic string parameter (among other params) that I parse and map to a list object. That works well.
When it comes to mapping to a Dto object, I would like to use some design pattern to instantiate the correct Dto based on the mapped REST params. I'm not sure whether that's possible, since I have that generic string parameter (the param name will not be the same every time)?
Also, based on the created Dto type, I would like to choose the appropriate DB method to call; the command design pattern for this one, I guess?
Thanks for the help,
I can explain more if needed.
I have developed the same kind of application (a WCF REST service).
I created a .NET solution and added the projects below:
BusinessLayer
DataAcessLayer
DataService (WCF Service)
EntityLayer
DataService:
public Snapshot GetSnapshot(string symbol, int nocache)
{
Snapshot objSnapshot;
try
{
objSnapshot = (new SnapshotBAL()).GetSnapshot(symbol);
SerializeObject(objSnapshot, localCacheKey); // cache the serialized result
return objSnapshot;
}
catch (Exception ex)
{
return null;
}
}
BusinessLayer:
namespace BusinessLayer
{
public class SnapshotBAL
{
public Snapshot GetSnapshot(string symbol)
{
return (new SnapshotDAL()).GetSnapshot(symbol);
}
}
}
EntityLayer:
namespace EntityLayer
{
public class Snapshot
{
public DateTime time { get; set; }
public double price { get; set; }
}
}
DataAccessLayer:
namespace DataAccessLayer
{
public class SnapshotDAL : PrototypeDB
{
public Snapshot GetSnapshot(string symbol)
{
AddParameter("o_snapshot");
AddParameter("i_symbol", symbol);
return ObjectHelper.FillObject<Snapshot>(typeof(Snapshot), GetReader("A_SM7_V1_P.GetSnapshotQuick"));
}
}
}
}
The key line in the question is this:
...design pattern to instantiate correct Dto based on mapped REST params
To me this sounds like you want to use the Factory Pattern.
Urgh. Yes I know, cargo cult programming etc, BUT(!), there are good reasons:
You want to initialise a class (the DAL) based upon some settings
You want those settings defined at the top level (REST mapping)
You want lower-level code to be totally ignorant of the settings (right?) so that they can change arbitrarily without requiring system-wide refactors.
Sure, you could always just pass an instance of the DAL down the stack, but that isn't always possible and can get a bit scrappy.
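To make the suggestion concrete, a rough, hypothetical sketch (the DTO types and parameter names are invented for illustration; adapt them to your actual REST mapping):
public class SnapshotRequestDto { public string Symbol { get; set; } }
public class InstrumentRequestDto { public string Isin { get; set; } }

public interface IDtoFactory
{
    object Create(IDictionary<string, string> restParams);
}

public class DtoFactory : IDtoFactory
{
    // Decides which DTO to build from the top-level REST parameters,
    // so lower layers stay ignorant of the parameter names.
    public object Create(IDictionary<string, string> restParams)
    {
        if (restParams.ContainsKey("symbol"))
            return new SnapshotRequestDto { Symbol = restParams["symbol"] };

        if (restParams.ContainsKey("isin"))
            return new InstrumentRequestDto { Isin = restParams["isin"] };

        throw new ArgumentException("Unrecognised parameter combination.");
    }
}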
Alternatively...
Consider creating a DAL implementation that is aware of the various switches and delegates calls to the correct concrete DAL implementation. This might actually be lighter weight than a straight-up factory.

When should I write code in the controller vs. model?

Without a doubt I know what controllers and models are used for. However, I am able to write code that interacts with my db, for example adding users to a table, in either the controller or the model. When should I write code in the controller vs. in the model? Even though both work, which would be the more organized or practical way? Could you please post examples if the answer is ambiguous? Thanks.
For that, you should add a logic layer or logic classes. The controller should determine what the user wants to do and is allowed to do, shuffle that in the right direction (the logic layer), and then determine what to show the user after the logic has run. Putting the logic in a separate layer will help keep your controllers lean and promote code reuse.
In the domain core, we only have models with properties. All logic is performed in a different layer, except for things like a property that returns fields concatenated in a format.
Code that accesses the database should be in a service layer instead of in the Controller or Model.
Accessing Database Entities from Controller
Here is my answer to the question above; you can also read the other answers there on why you should keep this in a separate layer.
namespace MyProject.Web.Controllers
{
public class MyController : Controller
{
private readonly IKittenService _kittenService ;
public MyController(IKittenService kittenService)
{
_kittenService = kittenService;
}
public ActionResult Kittens()
{
// var result = _kittenService.GetLatestKittens(10);
// Return something.
}
}
}
namespace MyProject.Domain.Kittens
{
public class Kitten
{
public string Name {get; set; }
public string Url {get; set; }
}
}
namespace MyProject.Services.KittenService
{
public interface IKittenService
{
IEnumerable<Kitten> GetLatestKittens(int fluffinessIndex=10);
}
}
namespace MyProject.Services.KittenService
{
public class KittenService : IKittenService
{
public IEnumerable<Kitten> GetLatestKittens(int fluffinessIndex=10)
{
using(var db = new KittenEntities())
{
return db.Kittens // the explicit query lives here, behind the service
.Where(kitten => kitten.fluffiness > fluffinessIndex)
.Select(kitten => new Kitten {
Name = kitten.name,
Url = kitten.imageUrl
}).Take(10)
.ToList(); // materialize before the context is disposed
}
}
}
}
ASP.NET MVC, and MVC in general, is a presentation-layer pattern; thus your interaction with the database should be in a layer beyond the presentation layer, usually a data access layer, but it could be a service layer or business layer as well.
