Loading Subrecords in the Repository Pattern - C#

I'm using LINQ to SQL as the underpinning of a Repository-based solution. My implementation is as follows:
IRepository
FindAll
FindByID
Insert
Update
Delete
Then I have extension methods that are used to query the results as such:
WhereSomethingEqualsTrue() ...
My question is as follows:
My User entity has N Roles. Do I create a Roles repository to manage Roles? I worry I'll end up creating dozens of repositories (nearly one per table, except for join tables) if I go this route. Is a repository per table common?

If you are building your Repository to be specific to one Entity (table), such that each Entity has the list of methods in your IRepository interface that you listed above, then what you are really doing is an implementation of the Active Record pattern.
You should definitely not have one Repository per table. You need to identify the Aggregates in your domain model, and the operations that you want to perform on them. Users and Roles are usually tightly related, and generally your application would be performing operations with them in tandem - this calls for a single repository, centered around the User and its set of closely related entities.
I'm guessing from your post that you've seen this example. The problem with this example is that all the repositories are sharing the same CRUD functionality at the base level, but he doesn't go beyond this and implement any of the domain functions. All the repositories in that example look the same - but in reality, real repositories don't all look the same (although they should still be interfaced), there will be specific domain operations associated with each one.
Your repository domain operations should look more like:
userRepository.FindRolesByUserId(int userID)
userRepository.AddUserToRole(int userID, int roleID)
userRepository.FindAllUsers()
userRepository.FindAllRoles()
userRepository.GetUserSettings(int userID)
etc...
These are specific operations that your application wants to perform on the underlying data, and the Repository should provide them. Think of the Repository as representing the set of atomic operations that you would perform on the domain. If you choose to share some functionality through a generic repository and extend specific repositories with extension methods, that's one approach that may work just fine for your app.
A good rule of thumb is that it should be rare for your application to need to instantiate multiple repositories to complete an operation. The need does arise, but if every event handler in your app is juggling six repositories just to take the user's input and correctly instantiate the entities that the input represents, then you probably have design problems.
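A minimal sketch of what such an interfaced, domain-centric repository might look like (the return types and the UserSettings name are illustrative, not prescriptive):

// Illustrative only: a User-centric repository exposing the domain
// operations listed above, rather than one generic CRUD surface per table.
public interface IUserRepository
{
    IEnumerable<User> FindAllUsers();
    IEnumerable<Role> FindAllRoles();
    IEnumerable<Role> FindRolesByUserId(int userID);
    void AddUserToRole(int userID, int roleID);
    UserSettings GetUserSettings(int userID);
}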

Is a Repository per Table common?
No, but you can still have several repositories. You should build a repository around an aggregate.
Also, you might be able to abstract some functionality from all the repositories... and, since you are using Linq-to-Sql, you probably can...
You can implement a base repository which in a generic way implements all this common functionality.
The following example serves only to prove this point. It probably needs a lot of improvement...
interface IRepository<T> : IDisposable where T : class
{
    IEnumerable<T> FindAll(Expression<Func<T, bool>> predicate);
    T FindByID(Expression<Func<T, bool>> predicate);
    void Insert(T e);
    void Update(T e);
    void Delete(T e);
}

class MyRepository<T> : IRepository<T> where T : class
{
    public DataContext Context { get; set; }

    public MyRepository(DataContext context)
    {
        Context = context;
    }

    // Expression trees (rather than plain Func<T, bool>) let LINQ to SQL
    // translate the predicate into SQL instead of filtering in memory.
    public IEnumerable<T> FindAll(Expression<Func<T, bool>> predicate)
    {
        return Context.GetTable<T>().Where(predicate);
    }

    public T FindByID(Expression<Func<T, bool>> predicate)
    {
        return Context.GetTable<T>().SingleOrDefault(predicate);
    }

    public void Insert(T e)
    {
        Context.GetTable<T>().InsertOnSubmit(e);
    }

    public void Update(T e)
    {
        throw new NotImplementedException();
    }

    public void Delete(T e)
    {
        Context.GetTable<T>().DeleteOnSubmit(e);
    }

    public void Dispose()
    {
        Context.Dispose();
    }
}
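A quick usage sketch (User, Name and IsActive are assumed to be a LINQ to SQL mapped entity and its properties; note that inserts and deletes only hit the database once SubmitChanges is called on the underlying DataContext):

using (var repository = new MyRepository<User>(new DataContext("...connection string...")))
{
    // translated to SQL by LINQ to SQL
    var activeUsers = repository.FindAll(u => u.IsActive);

    repository.Insert(new User { Name = "jdoe" });
    repository.Context.SubmitChanges(); // persists the pending insert
}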

To me the repository pattern is about putting a thin wrapper around your data access methodology - LINQ to SQL in your case, NHibernate or hand-rolled in others. What I've found myself doing is creating a repository per table that is extremely simple (like the one bruno lists and you already have). It is responsible for finding things and doing CRUD operations.
But then I have a service level that deals more with aggregate roots, as Johannes mentions. I would have a UserService with a method like GetExistingUser(int id). This would internally call the UserRepository.GetById() method to retrieve the user. If your business process requires the user returned by GetExistingUser() to pretty much always have its Roles populated, then simply have the UserService depend upon both the UserRepository and the RoleRepository. In pseudo code it could look something like this:
public class UserService
{
    private readonly IUserRepository _userRep;
    private readonly IRoleRepository _roleRep;

    public UserService(IUserRepository userRep, IRoleRepository roleRep)
    {
        _userRep = userRep;
        _roleRep = roleRep;
    }

    public User GetById(int id)
    {
        User user = _userRep.GetById(id);
        user.Roles = _roleRep.FindByUser(id);
        return user;
    }
}
The userRep and roleRep would be constructed with your LINQ to SQL bits something like this:
public class UserRep : IUserRepository
{
    private readonly string _conString;

    public UserRep(string connectionStringName)
    {
        // use the named connection string when building your DataContext
        _conString = connectionStringName;
    }

    public User GetById(int id)
    {
        var context = new DataContext(_conString);
        // obviously typing this freeform but you get the idea...
        var user = // linq stuff
        return user;
    }

    public IQueryable<User> FindAll()
    {
        var context = // ... same pattern, delayed execution
    }
}
Personally I would make the repository classes internally scoped and keep the UserService and other XXXXXService classes public, to keep the consumers of the service API honest. So again, I see repositories as more closely linked to the act of talking to a datastore, and the service layer as more closely aligned to the needs of your business process.
I've often found myself overthinking the flexibility of LINQ to Objects and all that stuff, reaching for IQueryable et al. instead of just building service methods that spit out what I actually need. Use LINQ where appropriate, but don't try to make the repository do everything.
public IList<User> ActiveUsersInRole(Role role)
{
    var users = _userRep.FindAll(); // IQueryable<User> - delayed execution
    var activeUsersInRole = users.Where(u => u.IsActive && u.Roles.Contains(role));
    // again, the point is that the service presents a simple
    // interface and delegates responsibility to
    // the repository with its simple methods.
    return activeUsersInRole.ToList();
}
So, that was a bit rambling. Not sure if I really helped any, but my advice is to avoid getting too fancy with extension methods, and just add another layer to keep each of the moving parts pretty simple. Works for me.

If we write our repository layer as detailed as Womp suggests, what do we put in our service layer? Do we have to repeat the same method calls, which would mostly consist of calls to the corresponding repository method, for use in our controllers or code-behinds? This assumes that you have a service layer, where you write your validation, caching, workflow, and authentication/authorization code, right? Or am I way off base?

Related

Where should I put the complex queries using the Repository Pattern?

I have an application in which I use Entity Framework, and I have a class called BaseRepository<T> with a few basic CRUD methods, such as Get, GetAll, Update, Delete, and Insert, and from this class I create my specific repositories, such as BaseRepository<Products>, BaseRepository<People>, BaseRepository<Countries> and many more.
The problem is that when I have complex logic in the service that involves joining several tables and that does not return an entity, but rather an object that is handled in the service (it is neither a DB entity nor a DTO), I find that repositories offering only basic CRUD operations don't help me much.
Where should I put this complex query? In which of the repositories should it be? How do I join these repositories? The problem is that each repository handles a single entity, so what should I do in this case? I've been doing some research and read that returning IQueryable<T> is bad practice, so I rule out the possibility of handing IQueryable<T> for the tables I'm going to join over to the service and doing the join there.
I've researched and found no clear answer. I would like to know how and where these complex queries are organized, since I also want to respect the responsibility of each repository with its respective entity.
I would like to know how and where these complex queries are organized, since I also want to respect the responsibility of each repository with its respective entity.
The complex queries are the responsibility of the code that is requesting the data, not the responsibility of the repository. The single responsibility of the repository is to provide access to the data, not to provide every possible request shape that may be needed in any use case. Do you really want to write methods in your repositories like:
customerRepo.GetCustomerWithLastTenOrdersAndSupportCasesAsDTO()
or
customerRepo.GetCustomerForCustomerSupportHomePage()
Of course not. So your repository provides a property of type IQueryable<T> or DbSet<T> which can be used as a starting point for consumers to add whatever queries they need.
I've been doing some research and read that returning IQueryable is bad practice
Don't believe everything you read. There's really not a good alternative to exposing IQueryable<T> from your repository. And once you digest that, there's not much of a reason to have any repository type other than your DbContext subtype.
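A minimal sketch of that idea (the class, the property name and the anonymous projection are illustrative; it assumes an EF DbContext subtype):

public class Repository<T> where T : class
{
    private readonly DbContext _context;

    public Repository(DbContext context)
    {
        _context = context;
    }

    // The starting point; consumers compose whatever query shape they need.
    public IQueryable<T> Query => _context.Set<T>();
}

// e.g. in a service, joining/projecting into a non-entity result:
// var rows = customers.Query
//     .Where(c => c.IsActive)
//     .Select(c => new { c.Name, OrderCount = c.Orders.Count() })
//     .ToList();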
It's hard to answer without seeing code to understand what you want to achieve; hopefully my answer gives you an idea of how you can use abstract classes to override your queryable Collection. If your requirement is more complex, can you provide more information with example code?
Create your BaseRepository like this -
public abstract class BaseRepository<T> where T : class
{
    public IQueryable<T> Collection { get; set; }
    public readonly DbContext _context;

    public BaseRepository(DbContext context)
    {
        _context = context;
        Collection = SetQueryableCollection();
    }

    public virtual IQueryable<T> SetQueryableCollection() => _context.Set<T>();

    // CRUD operations here, for example:
    public virtual async Task<List<T>> Get()
    {
        return await Collection.ToListAsync();
    }
}
Now, the class that inherits this -
public class ProductRepository : BaseRepository<Product>
{
    public ProductRepository(MyContext context) : base(context)
    {
    }

    // now override the method that sets the collection
    public override IQueryable<Product> SetQueryableCollection() =>
        _context.Set<Product>().Include(p => p.Brand).ThenInclude(...);

    // Things to keep in mind: _context is public the way I've done it. You can change
    // that and instead expose the Collection directly, setting it to your needs per entity type.
}
So now your Get() method uses the overridden method to set the Collection.

Applying Repository Pattern to ReportViewModels

public interface IRepository<TEntity>
{
    TEntity FindById(Guid id);
    void Add(TEntity entity);
    void Remove(TEntity entity);
}
This is a simple generic repository. If I have a Product entity, this repository can insert, update, and delete (using Entity Framework).
But I also have report-based types that I created.
For example:
Products that grouped by salesmen.
Number of orders sent by each shipper.
public class OrdersWithShipper
{
    public string ShipperName { get; set; }
    public string NumberOfOrder { get; set; }
}
And so on.
So I need to create complex queries across several related tables, but the query result object is not represented by the repository's TEntity type.
I have many report types like this. How can I solve this problem?
The direct problem of this question is:
So I need to create complex queries across several related tables, but the query result object is not represented by the repository's TEntity type.
I would say you should not be using the repository pattern here, as it breaks it. E.g. a repository should be dealing with returning and managing the domain object it is designed for, in order to support domain behavior, not random query objects to support reporting behavior.
By not sticking to the same type you will almost certainly end up not knowing where to draw the line, e.g. what query object goes with what repository, etc. So you should just keep it simple.
Apply a different pattern to reporting (or querying) for example. Maybe create classes for your view models (View Model Builders?) that are directly dependent on IDbSet<T> for their querying logic.
Or, abstract further and have a query handler / query provider pattern (this would be my choice).
Look at the answer here:
Well designed query commands and/or specifications
I have used a similar pattern to this with great success.
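To make the query handler idea concrete, a rough sketch might look like this (the DbContext name and the Orders/Shipper shape are assumptions; the report type is the OrdersWithShipper class from the question):

public interface IQueryHandler<TQuery, TResult>
{
    TResult Handle(TQuery query);
}

public class OrdersPerShipperQuery
{
    public DateTime From { get; set; }
}

public class OrdersPerShipperQueryHandler : IQueryHandler<OrdersPerShipperQuery, List<OrdersWithShipper>>
{
    private readonly MyDbContext _context;

    public OrdersPerShipperQueryHandler(MyDbContext context)
    {
        _context = context;
    }

    public List<OrdersWithShipper> Handle(OrdersPerShipperQuery query)
    {
        // group in the database, then shape into the report type in memory
        return _context.Orders
            .Where(o => o.ShippedDate >= query.From)
            .GroupBy(o => o.Shipper.Name)
            .Select(g => new { ShipperName = g.Key, Count = g.Count() })
            .ToList()
            .Select(x => new OrdersWithShipper
            {
                ShipperName = x.ShipperName,
                NumberOfOrder = x.Count.ToString()
            })
            .ToList();
    }
}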
My technique in doing this is to wrap the Repository in a Service class that can accept the ViewModels. In other words, my Controllers are using the Service classes that I make and not the repositories directly.
Also, I use the AutoMapper library to map between the view models and the entities.
Example below:
public class OrderWithShipperProductService
{
    public ProductRepository ProductRepository { get; set; }

    public OrderWithShipperProductService(ProductRepository repo)
    {
        this.ProductRepository = repo;
    }

    public void AddOrderWithShipperProduct(OrderWithShipperProduct model)
    {
        var entity = Mapper.Map<Product>(model);
        ProductRepository.Add(entity);
    }
}
You may need to learn AutoMapper here. But you could also do the mapping yourself in the constructor if you want, or create your own mapping functions.
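For completeness, the mapping used above has to be configured once at startup; with AutoMapper's classic static API that might look like this (newer AutoMapper versions use a MapperConfiguration / IMapper instance instead):

Mapper.Initialize(cfg =>
{
    // view model -> entity, used by Mapper.Map<Product>(model) above
    cfg.CreateMap<OrderWithShipperProduct, Product>();
});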

How can I perform aggregate operations via the repository pattern?

I've seen various blog posts (and much conflicting advice) about the repository pattern, and so I'll start by saying that the code below is probably not following the repository pattern in many people's opinion. However, it's a common-enough implementation, and whether it adheres to Fowler's original definition or not, I'm still interested in understanding more about how this implementation is used in practice.
Suppose I have a project where data access is abstracted via an interface such as the one below, which provides basic CRUD operations.
public interface IGenericRepository<T>
{
    void Add(T entity);
    void Remove(T entity);
    void Update(T entity);
    IEnumerable<T> Fetch(Expression<Func<T, bool>> where);
}
Further suppose that I have a service layer built atop that, for example:
public class FooService
{
    private IGenericRepository<Foo> _fooRepository;
    ...
    public IEnumerable<Foo> GetBrightlyColoredFoos()
    {
        return _fooRepository.Fetch(f => f.Color == "pink" || f.Color == "yellow");
    }
}
Now suppose that I need to know how many brightly colored Foos there are, without actually wanting to enumerate them. Ideally, I want to implement a CountBrightlyColoredFoos() method in my service, but the repository implementation gives me no way to achieve that other than by fetching them all and counting them - which is potentially very inefficient.
I could extend the repository to add a Count() method, but what about other aggregate functions that I might need, such as Min() or Max(), or Sum(), or... you get the idea.
Likewise, what if I wanted to get a list of the distinct Foo colors (SELECT DISTINCT). Again, the simple repository provides no way to do that sort of thing either.
Keeping the repository simple to make it easy to test/mock is very laudable, but how do you then address these requirements? Surely there are only two ways to go - a more complex repository, or a "back-door" for the service layer to use that bypasses the repository (and thus defeats its purpose).
I would say you need to change your design. What you want is one "main" generic repository that has your basic CRUD, plus smaller repositories for each entity. You will then just have to draw a line on where to place certain operations (like Sum, Count, Max, etc.). Most likely not all your entities are going to need to be counted, summed, etc., and most of the time you won't be able to write a generic version of the aggregate functions that applies to all entities.
Base Repository:
public abstract class BaseRep<T> : IBaseRep<T> where T : class
{
    // basic CRUD
}
Foo Repository:
public class FooRep : BaseRep<Foo>, IFooRep
{
    // foo specific functions
}
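For example, the Foo-specific functions could be exactly the aggregates from the question, pushed down to the database instead of computed in memory (this sketch assumes the base class exposes the EF context as _context; the names are illustrative):

public class FooRep : BaseRep<Foo>, IFooRep
{
    public int CountBrightlyColoredFoos()
    {
        // translated to a SELECT COUNT(*) ... WHERE ... by the provider
        return _context.Set<Foo>().Count(f => f.Color == "pink" || f.Color == "yellow");
    }

    public IEnumerable<string> DistinctColors()
    {
        // translated to a SELECT DISTINCT Color ...
        return _context.Set<Foo>().Select(f => f.Color).Distinct().ToList();
    }
}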

Unit of Work + Repository Pattern: The Fall of the Business Transaction Concept

Combining the Unit of Work and Repository patterns is something used fairly widely nowadays. As Martin Fowler says, a purpose of using UoW is to form a Business Transaction while being ignorant of how repositories actually work (being persistence ignorant). I've reviewed many implementations; ignoring specific details (concrete/abstract class, interface, ...), they are more or less similar to what follows:
public class RepositoryBase<T>
{
    private UoW _uow;

    public RepositoryBase(UoW uow) // injecting UoW instance via constructor
    {
        _uow = uow;
    }

    public void Add(T entity)
    {
        // Add logic here
    }

    // + other CRUD methods
}

public class UoW
{
    // Holding one repository per domain entity
    public RepositoryBase<Order> OrderRep { get; set; }
    public RepositoryBase<Customer> CustomerRep { get; set; }
    // + other repositories

    public void Commit()
    {
        // Pseudo code:
        // for all the contained repositories:
        //     store repository changes.
    }
}
Now my problem:
The UoW exposes the public method Commit to store the changes. Also, because each repository holds a shared instance of the UoW, each repository can access the Commit method on the UoW. Calling it from one repository makes all the other repositories store their changes too; as a result, the whole concept of a transaction collapses:
class Repository<T> : RepositoryBase<T>
{
    private UoW _uow;

    public void SomeMethod()
    {
        // some processing or data manipulations here
        _uow.Commit(); // makes other repositories also save their changes
    }
}
I think this should not be allowed. Considering the purpose of the UoW (a business transaction), the Commit method should be exposed only to whoever started the business transaction, for example the Business Layer. What surprised me is that I couldn't find any article addressing this issue; in all of them, Commit can be called by any repo it is injected into.
PS: I know I can tell my developers not to call Commit in a Repository but a trusted Architecture is more reliable than trusted developers!
I do agree with your concerns. I prefer to have an ambient unit of work, where the outermost function opening a unit of work is the one that decides whether to commit or abort. Functions called can open a unit of work scope which automatically enlists in the ambient UoW if there is one, or creates a new one if there is none.
The implementation of the UnitOfWorkScope that I used is heavily inspired by how TransactionScope works. Using an ambient/scoped approach also removes the need for dependency injection.
A method that performs a query looks like this:
public static Entities.Car GetCar(int id)
{
    using (var uow = new UnitOfWorkScope<CarsContext>(UnitOfWorkScopePurpose.Reading))
    {
        return uow.DbContext.Cars.Single(c => c.CarId == id);
    }
}
A method that writes looks like this:
using (var uow = new UnitOfWorkScope<CarsContext>(UnitOfWorkScopePurpose.Writing))
{
    Car c = SharedQueries.GetCar(carId);
    c.Color = "White";
    uow.SaveChanges();
}
Note that the uow.SaveChanges() call will only do an actual save to the database if this is the root (outermost) scope. Otherwise it is interpreted as an "okay vote" that the root scope will be allowed to save the changes.
The entire implementation of the UnitOfWorkScope is available at: http://coding.abel.nu/2012/10/make-the-dbcontext-ambient-with-unitofworkscope/
Make your repositories members of your UoW. Don't let your repositories 'see' your UoW. Let UoW handle the transaction.
Don't pass in the UnitOfWork, pass in an interface that has the methods you need. You can still implement that interface in the original concrete UnitOfWork implementation if you want:
public interface IDbContext
{
    void Add<T>(T entity);
}

public interface IUnitOfWork
{
    void Commit();
}

public class UnitOfWork : IDbContext, IUnitOfWork
{
    public void Add<T>(T entity) { /* ... */ }
    public void Commit() { /* ... */ }
}
public class RepositoryBase<T>
{
    private IDbContext _c;

    public RepositoryBase(IDbContext c)
    {
        _c = c;
    }

    public void Add(T entity)
    {
        _c.Add(entity);
    }
}
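With that split, only the layer that owns the business transaction sees IUnitOfWork, so only it can commit; the repositories cannot. A consumer sketch (the service name and method are illustrative, and all three injected objects are expected to wrap the same underlying UnitOfWork instance):

public class OrderPlacementService
{
    private readonly RepositoryBase<Order> _orders;
    private readonly RepositoryBase<Customer> _customers;
    private readonly IUnitOfWork _uow;

    public OrderPlacementService(RepositoryBase<Order> orders,
                                 RepositoryBase<Customer> customers,
                                 IUnitOfWork uow)
    {
        _orders = orders;
        _customers = customers;
        _uow = uow;
    }

    public void PlaceOrder(Customer customer, Order order)
    {
        _customers.Add(customer);
        _orders.Add(order);
        _uow.Commit(); // one business transaction, committed once, here
    }
}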
EDIT
After posting this I had a rethink. Exposing the Add method in the UnitOfWork implementation means it is a combination of the two patterns.
I use Entity Framework in my own code and the DbContext used there is described as "a combination of the Unit-Of-Work and Repository pattern".
I think it is better to split the two, and that means I need two wrappers around DbContext one for the Unit Of Work bit and one for the Repository bit. And I do the repository wrapping in RepositoryBase.
The key difference is that I do not pass the UnitOfWork to the Repositories, I pass the DbContext. That does mean that the BaseRepository has access to a SaveChanges on the DbContext. And since the intention is that custom repositories should inherit BaseRepository, they get access to a DbContext too. It is therefore possible that a developer could add code in a custom repository that uses that DbContext. So I guess my "wrapper" is a bit leaky...
So is it worth creating another wrapper for the DbContext that can be passed to the repository constructors to close that off? Not sure that it is...
Examples of passing the DbContext:
Implementing the Repository and Unit of Work
Repository and Unit of Work in Entity Framework
John Papa's original source code
Realize it has been a while since this was asked, and people may have died of old age, transferred to management etc. but here goes.
Taking inspiration from databases, transaction controllers and the two phase commit protocol, the following changes to the patterns should work for you.
Implement the unit of work interface described in Fowler's P of EAA book, but inject the repository into each UoW method.
Inject the unit of work into each repository operation.
Each repository operation calls the appropriate UoW operation and injects itself.
Implement the two phase commit methods CanCommit(), Commit() and Rollback() in the repositories.
If required, commit on the UoW can run Commit on each repository or it can commit to the data store itself. It can also implement a 2 phase commit if that is what you want.
Having done this, you can support a number of different configurations depending on how you implement the repositories and the UoW. e.g. from simple data store without transactions, single RDBMs, multiple heterogeneous data stores etc. The data stores and their interactions can be either in the repositories or in the UoW, as the situation requires.
interface IEntity
{
    int Id { get; set; }
}

// Non-generic base so the UoW can refer to repositories of different entity types.
interface IRepository
{
    bool CanCommit(IUnitOfWork uow);
    void Commit(IUnitOfWork uow);
    void Rollback(IUnitOfWork uow);
}

interface IUnitOfWork
{
    void RegisterNew(IRepository repository, IEntity entity);
    void RegisterDirty(IRepository repository, IEntity entity);
    // etc.
    bool Commit();
    bool Rollback();
}

interface IRepository<T> : IRepository where T : IEntity
{
    void Add(T entity, IUnitOfWork uow);
    // etc.
}
User code is always the same regardless of the DB implementations and looks like this:
// ...
var uow = new MyUnitOfWork();
repo1.Add(entity1, uow);
repo2.Add(entity2, uow);
uow.Commit();
Back to the original post. Because we are method injecting the UoW into each repo operation the UoW does not need to be stored by each repository, meaning Commit() on the Repository can be stubbed out, with Commit on the UoW doing the actual DB commit.
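As one possible configuration of the above (the variant where Commit on the UoW drives the registered repositories), a rough sketch, with the per-entity bookkeeping elided:

class MyUnitOfWork : IUnitOfWork
{
    private readonly HashSet<IRepository> _participants = new HashSet<IRepository>();

    public void RegisterNew(IRepository repository, IEntity entity)
    {
        _participants.Add(repository);
        // ... remember (repository, entity) as a pending insert
    }

    public void RegisterDirty(IRepository repository, IEntity entity)
    {
        _participants.Add(repository);
        // ... remember (repository, entity) as a pending update
    }

    public bool Commit()
    {
        // phase 1: every participating repository votes
        if (_participants.Any(r => !r.CanCommit(this)))
        {
            Rollback();
            return false;
        }

        // phase 2: everyone commits
        foreach (var repository in _participants)
            repository.Commit(this);
        return true;
    }

    public bool Rollback()
    {
        foreach (var repository in _participants)
            repository.Rollback(this);
        return true;
    }
}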
In .NET, data access components typically enlist in ambient transactions automatically. Hence, saving changes intra-transactionally becomes separate from committing the transaction to persist the changes.
Put differently - if you create a transaction scope you can let the developers save as much as they want. Until the transaction is committed, the observable state of the database(s) will not be updated (well, what is observable depends on the transaction isolation level).
This shows how to create a transaction scope in C#:
using (TransactionScope scope = new TransactionScope())
{
    // Your logic here. Save inside the transaction as much as you want.
    scope.Complete(); // <-- This will complete the transaction and make the changes permanent.
}
I too have been researching this design pattern recently, and by utilizing the Unit of Work and Generic Repository patterns I was able to extract the Unit of Work "Save Changes" out of the Repository implementation. My code is as follows:
public class GenericRepository<T> where T : class
{
    private MyDatabase _Context;
    private DbSet<T> dbSet;

    public GenericRepository(MyDatabase context)
    {
        _Context = context;
        dbSet = context.Set<T>();
    }

    public T Get(int id)
    {
        return dbSet.Find(id);
    }

    public IEnumerable<T> GetAll()
    {
        return dbSet.ToList();
    }

    public IEnumerable<T> Where(Expression<Func<T, bool>> predicate)
    {
        return dbSet.Where(predicate);
    }
    ...
    ...
}
Essentially all we are doing is passing in the data context and utilizing Entity Framework's DbSet methods for basic Get, GetAll, Add, AddRange, Remove, RemoveRange, and Where.
Now we will create a generic interface to expose these methods.
public interface IGenericRepository<T> where T : class
{
    T Get(int id);
    IEnumerable<T> GetAll();
    IEnumerable<T> Where(Expression<Func<T, bool>> predicate);
    ...
    ...
}
Now we want to create an interface for each entity in Entity Framework and inherit from IGenericRepository, so that each interface expects the method signatures to be implemented within the inheriting repositories.
Example:
public interface ITable1 : IGenericRepository<table1>
{
}
You will follow this same pattern with all of your entities. You will also add any function signatures in these interfaces that are specific to the entities. This would result in the repositories needing to implement the GenericRepository methods and any custom methods defined in the interfaces.
For the Repositories we will implement them like this.
public class Table1Repository : GenericRepository<table1>, ITable1
{
    private MyDatabase _context;

    public Table1Repository(MyDatabase context) : base(context)
    {
        _context = context;
    }
}
In the example repository above I am creating the table1 repository, inheriting GenericRepository with a type of "table1", and then inheriting from the ITable1 interface. This will automatically implement the generic DbSet methods for me, allowing me to focus only on my custom repository methods, if any. As I pass the dbContext to the constructor, I must also pass the dbContext to the base GenericRepository as well.
Now from here I will go and create the Unit of Work repository and Interface.
public interface IUnitOfWork
{
    ITable1 Table1 { get; }
    ...
    ...
    // list all other repository interfaces here.
    void SaveChanges();
}
public class UnitOfWork : IUnitOfWork
{
    private readonly MyDatabase _context;

    public ITable1 Table1 { get; private set; }

    public UnitOfWork(MyDatabase context)
    {
        _context = context;
        // Initialize all of your repositories here
        Table1 = new Table1Repository(_context);
        ...
        ...
    }

    public void SaveChanges()
    {
        _context.SaveChanges();
    }
}
I handle my transaction scope on a custom controller that all other controllers in my system inherit from. This controller inherits from the default MVC controller.
public class DefaultController : Controller
{
    protected IUnitOfWork UoW;

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        UoW = new UnitOfWork(new MyDatabase());
    }

    protected override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        UoW.SaveChanges();
    }
}
By implementing your code this way, every time a request is made to the server a new UnitOfWork is created at the beginning of the action, which automatically creates all the repositories and makes them accessible through the UoW variable in your controller or classes. This also removes SaveChanges() from your repositories and places it within the UnitOfWork class. And lastly, this pattern is able to utilize only a single dbContext throughout the system via dependency injection.
If you are concerned about parent/child updates with a singular context you could utilize stored procedures for your update, insert, and delete functions and utilize entity framework for your access methods.
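If you later move to constructor injection instead of newing the UnitOfWork up in OnActionExecuting, a controller might look like this (ProductsController and its action are just an illustration; it assumes your container is configured to supply IUnitOfWork with a per-request lifetime):

public class ProductsController : Controller
{
    private readonly IUnitOfWork _uow;

    // The container supplies a per-request UnitOfWork.
    public ProductsController(IUnitOfWork uow)
    {
        _uow = uow;
    }

    public ActionResult Index()
    {
        var rows = _uow.Table1.GetAll();
        return View(rows);
    }
}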
In a very simple application
In some applications, the domain model and the database entities are identical, and there is no need to do any data mapping between them. Let's call them "domain entities". In such applications, the DbContext can act both as a repository and a unit of work simultaneously. Instead of doing some complicated patterns, we can simply use the context:
public class CustomerController : Controller
{
    private readonly CustomerContext context; // injected

    [HttpPost]
    public IActionResult Update(CustomerUpdateDetails viewmodel)
    {
        // [Repository] acting like an in-memory domain object collection
        var person = context.Person.Find(viewmodel.Id);

        // [UnitOfWork] keeps track of everything you do during a business transaction
        person.Name = viewmodel.NewName;
        person.AnotherComplexOperationWithBusinessRequirements();

        // [UnitOfWork] figures out everything that needs to be done to alter the database
        context.SaveChanges();

        return Ok();
    }
}
Complex queries on larger apps
If your application gets more complex, you'll start writing some large Linq queries in order to access your data. In that situation, you'll probably need to introduce a new layer that handles these queries, to prevent yourself from copy-pasting them across your controllers. You'll then end up with two different layers: the unit of work pattern implemented by the DbContext, and the repository pattern, which will simply provide some Linq results executing over the former. Your controller is expected to call the repository to get the entities, change their state, and then call the DbContext to persist the changes to the database, but proxying the DbContext.SaveChanges() call through the repository object is an acceptable approximation:
public class PersonRepository
{
    private readonly PersonDbContext context;

    public Person GetClosestTo(GeoCoordinate location) { } // redacted
}

public class PersonController
{
    private readonly PersonRepository repository;
    private readonly PersonDbContext context; // must be the same instance as repository.context

    public IActionResult Action()
    {
        var person = repository.GetClosestTo(new GeoCoordinate());
        person.DoSomething();
        context.SaveChanges();
        // repository.SaveChanges(); would save injecting the DbContext here
    }
}
DDD applications
It gets more interesting when domain models and entities are two different groups of classes. This will happen when you start implementing DDD, as it requires you to define aggregates, which are clusters of domain objects that can be treated as a single unit. The structure of aggregates does not always map perfectly to your relational database schema, as it can provide multiple levels of abstraction depending on the use case you're dealing with.
For instance, an aggregate may allow a user to manage multiple addresses, but in another business context you might want to flatten the model and limit the modeling of the person's address to the latest value only:
public class PersonEntity
{
    [Key]
    public int Id { get; set; }
    public string Name { get; set; }
    public bool IsValid { get; set; }
    public ICollection<AddressEntity> Addresses { get; set; }
}

public class AddressEntity
{
    [Key]
    public int Id { get; set; }
    public string Value { get; set; }
    public DateTime Since { get; set; }
    public PersonEntity Person { get; set; }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CurrentAddressValue { get; private set; }
}
Implementing the unit of work pattern
First let's get back to the definition:
A unit of work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.
The DbContext keeps track of every modification that happens to entities and will persist them to the database once you call the SaveChanges() method. Like in the simpler example, the unit of work is exactly what the DbContext does, and using it as a unit of work is actually how Microsoft suggests you structure a .NET application using DDD.
Implementing the repository pattern
Once again, let's get back to the definition:
A repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection.
The DbContext cannot act as a repository. Although it behaves as an in-memory collection of entities, it does not act as an in-memory collection of domain objects. In that situation, we must implement another class for the repository, one that will act as our in-memory collection of domain models and will map data from entities to domain models. However, you will find a lot of implementations that are simply a projection of the DbSet onto the domain model and that provide IList-like methods which simply map entities back and reproduce the operations on the DbSet<T>.
Although this implementation might be valid in multiple situations, it overemphasizes the collection part of the definition, and not enough the mediator part.
A repository is a mediator between the domain layer and the infrastructure layer, which means its interface is defined in the domain layer. Methods described in the interface are defined in the domain layer, and they all must have a meaning in the business context of the program. Ubiquitous language being a central concept of DDD, these methods must have meaningful names, and perhaps "adding a person" is not the right business way to name this operation.
Also, all persistence-related concepts are strictly limited to the implementation of the repository. The implementation defines how a given business operation translates in the infrastructure layer, as a series of entity manipulations that will eventually be persisted to the database through an atomic database transaction. Also note that the Add operation on a domain model does not necessarily imply an INSERT statement in the database, and a Remove will sometimes end up as an UPDATE or even multiple INSERT statements!
Actually, here is a pretty valid implementation of a repository pattern:
public class Person
{
    public void EnsureEnrollable(IPersonRepository repository)
    {
        if (!repository.IsEnrollable(this))
        {
            throw new BusinessException<PersonError>(PersonError.CannotEnroll);
        }
    }
}

public class PersonRepository : IPersonRepository
{
    private readonly PersonDbContext context;

    public IEnumerable<Person> GetAll()
    {
        return context.Persons.AsNoTracking()
            .Where(person => person.Active)
            .ProjectTo<Person>().ToList();
    }

    public Person Enroll(Person person)
    {
        person.EnsureEnrollable(this);
        context.Persons.Find(person.Id).Active = true;
        context.SaveChanges(); // UPDATE statement
        return person;
    }

    public bool IsEnrollable(Person person)
    {
        return context.Persons.Any(entity => entity.Id == person.Id && !entity.Active);
    }
}
Business transaction
You're saying a purpose of using a unit of work is to form a Business Transaction, which is wrong. The purpose of the unit of work class is to keep track of everything you do during a business transaction that can affect the database, and to alter the database as a result of your work in an atomic operation. The repositories do share the unit of work instance, but bear in mind that dependency injection usually uses a scoped lifetime manager when injecting the DbContext. This means that instances are only shared within the same HTTP request context, and different requests will not share change tracking. Using a singleton lifetime manager would share instances among different HTTP requests, which would wreak havoc in your application.
Calling the unit of work's save changes method from a repository is actually how you are expected to implement a DDD application. The repository is the class that knows about the actual implementation of the persistence layer, and it will orchestrate all database operations to commit/rollback at the end of the transaction. Saving changes from another repository when calling save changes is also the expected behavior of the unit of work pattern. The unit of work accumulates all changes made by all repositories until someone calls a commit or a rollback. If a repository makes changes to the context that are not expected to be persisted in the database, then the problem is not the unit of work persisting these changes, but the repository making these changes.
However, if your application does one atomic save changes that persists change operations from multiple repositories, it probably violates one of the DDD design principles. A repository is a one-to-one mapping with an aggregate, and an aggregate is a cluster of domain objects that can be treated as a single unit. If you are using multiple repositories, then you are trying to modify multiple units of data in a single transaction.
Either your aggregate is designed too small, and you need to make a larger one that holds all the data for your single transaction, with a repository that will handle all that data in a single transaction; or you're trying to make a complex transaction that spans a wide part of your model, and you will need to implement this transaction with eventual consistency.
Yes, this question is a concern to me, and here's how I handle it.
First of all, in my understanding the Domain Model should not know about the Unit of Work. The Domain Model consists of interfaces (or abstract classes) that don't imply the existence of transactional storage. In fact, it does not know about the existence of any storage at all. Hence the term Domain Model.
The Unit of Work is present in the Domain Model Implementation layer. I guess this is my term, and by that I mean a layer that implements Domain Model interfaces by incorporating the Data Access Layer. Usually I use an ORM as the DAL, and therefore it comes with a built-in UoW (Entity Framework's SaveChanges or SubmitChanges method to commit the pending changes). However, that one belongs to the DAL and does not need any inventor's magic.
On the other hand, you are referring to the UoW that you need in the Domain Model Implementation layer, because you need to abstract away the part of "committing changes to the DAL". For that, I would go with Anders Abel's solution (recursive scopes), because it addresses two things you need to solve in one shot:
You need to support saving of aggregates as one transaction, if the aggregate is an initiator of the scope.
You need to support saving of aggregates as part of the parent transaction, if the aggregate is not the initiator of the scope, but is part of it.
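In code, using the UnitOfWorkScope from Anders Abel's answer above, the two cases look roughly like this (UpdateCarColor is a made-up helper that opens its own scope internally):

// Outermost scope: initiates the business transaction and is the only one
// whose SaveChanges actually hits the database.
using (var uow = new UnitOfWorkScope<CarsContext>(UnitOfWorkScopePurpose.Writing))
{
    UpdateCarColor(carId, "White"); // opens a nested scope, which enlists in this one
    uow.SaveChanges();              // commits everything in one transaction
}

static void UpdateCarColor(int carId, string color)
{
    // Nested scope: part of the parent transaction, so its SaveChanges is only a vote.
    using (var uow = new UnitOfWorkScope<CarsContext>(UnitOfWorkScopePurpose.Writing))
    {
        uow.DbContext.Cars.Single(c => c.CarId == carId).Color = color;
        uow.SaveChanges();
    }
}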

MVC3 Exposing repository functionality through service

I've been playing around with ASP.NET MVC 3 a bit and have been struggling to decide where to place my business logic. I've settled on using a service layer for now:
public class AnimalsService : IAnimalsService
{
    private readonly IAnimalsRepository _animalsRepository;

    public AnimalsService(IAnimalsRepository animalsRepository)
    {
        _animalsRepository = animalsRepository;
    }

    public IQueryable<Animal> GetFiveLeggedAnimals()
    {
        ...
    }
}
The controller would look something like this:
public class AnimalsController : Controller
{
    private readonly IAnimalsService _animalsService;

    public AnimalsController(IAnimalsService animalsService)
    {
        _animalsService = animalsService;
    }

    public ViewResult ListFiveLeggedAnimals()
    {
        var animals = _animalsService.GetFiveLeggedAnimals();
        return View(animals);
    }
}
I have basic CRUD logic in the repository (All, Find, UpdateOrInsert, Delete). If I want to use these CRUD methods in my controller:
1) Do I have to create wrapper methods in the service for these repository calls?
2) Would it not make more sense for me to just include the GetFiveLeggedAnimals method and other business logic in the repository?
3) Could I implement the IAnimalsRepository interface in the AnimalsService and then call the base methods (I realise this is possible but I assume it's bad practice)?
1) Do I have to create wrapper methods in the service for these repository calls?
Mostly, yes. Typically, you want to offer CRUD for your domain models in the service layer. This way, the controller does not need to work with the repository directly (in fact, it never should). You can add more sophisticated logic later without having to change the calling code. For example, consider that you wanted to implement a newsfeed: every time a five-legged animal is inserted, you want to create a news item and push it to the five-legged-animal fans. Another common example is email notifications.
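A rough sketch of that kind of wrapper (the INewsFeed collaborator, its PublishFiveLeggedAnimalAdded method, and the Legs property are made up for illustration):

public class AnimalsService : IAnimalsService
{
    private readonly IAnimalsRepository _animalsRepository;
    private readonly INewsFeed _newsFeed; // hypothetical collaborator

    public AnimalsService(IAnimalsRepository animalsRepository, INewsFeed newsFeed)
    {
        _animalsRepository = animalsRepository;
        _newsFeed = newsFeed;
    }

    public void Add(Animal animal)
    {
        // a thin wrapper today...
        _animalsRepository.UpdateOrInsert(animal);

        // ...but the extra behaviour slots in here without touching the controller
        if (animal.Legs == 5)
            _newsFeed.PublishFiveLeggedAnimalAdded(animal);
    }
}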
2) Would it not make more sense for me to just include the GetFiveLeggedAnimals method and other business logic in the repository?
Business logic should be in the Service Layer or in the Domain Model objects themselves, and only there. In fact (see 3), I wouldn't specifically offer an IAnimalRepository at all, if possible.
For instance, in a NoSQL environment, the database driver pretty much is a repository. On the other hand, when using complex ORM mappings and stored procedures (where part of the biz logic is in the DB), you don't really have a choice but to offer explicit interfaces that know about the stored procedures.
I'd go for an IRepository<T> and use the Query Object pattern, if possible. I think LINQ can also be considered a Query Object / Repository based pattern.
3) Could I implement the IAnimalsRepository interface in the AnimalsService and then call the base methods (I realise this is possible but I assume it's bad practice)?
To call the base methods, you'd have to inherit from a concrete implementation, e.g. from ConcreteAnimalsRepository.
Also, if your service implements the IAnimalsRepository interface directly or indirectly, it makes the (unfiltered) CRUD operations available to everyone.
My take: Don't inherit, aggregate. A service layer has a repository, but it isn't a repository itself: The service layer handles all the additional application logic (permissions, notifications) and the repository is a very thin wrapper around the db layer.
As an extreme example, what if deleting something directly was forbidden, and only the service were allowed to make use of it when inserting a newer revision of something? This can easily be built when aggregating.
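A sketch of that extreme example (ReplaceWithNewRevision is a made-up method name; the repository methods are the ones from the question): the repository's Delete stays hidden behind the service, which only uses it as part of inserting a newer revision.

public class AnimalsService : IAnimalsService
{
    private readonly IAnimalsRepository _animalsRepository; // never exposed to callers

    public AnimalsService(IAnimalsRepository animalsRepository)
    {
        _animalsRepository = animalsRepository;
    }

    // There is deliberately no public Delete on the service.
    public void ReplaceWithNewRevision(Animal previous, Animal newRevision)
    {
        _animalsRepository.Delete(previous); // deletion only ever happens in here
        _animalsRepository.UpdateOrInsert(newRevision);
    }
}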
Repository by definition should be a generic collection-like class that abstracts DB interactions. It would contain typical methods for persistence like Get(object id), Add(T), Remove(T) and possibly implement IQueryable<T>.
The service would look like the following code.
public class AnimalsService : IAnimalsService
{
    private readonly IRepository<Animal> _repository;

    public AnimalsService(IRepository<Animal> repository)
    {
        _repository = repository;
    }

    public IEnumerable<Animal> GetFiveLeggedAnimals()
    {
        // animal specific business logic
    }
}
I think it is not good to use simple CRUD operations in the Controller and just have a wrapper in the Service class; you should keep all business logic in the service layer, not in the controller.
For example, say you want to create a new Animal. In the controller you will have a method; look at the example:
// not good design
public ActionResult Create(AnimalInput input)
{
    Animal animal = new Animal { Name = input.Name }; // set the other properties
    // if you have CRUD operations in the service class you will call
    animalService.UpdateOrInsert(animal);
    return RedirectToAction("Index");
}

// better design
public ActionResult Create(AnimalInput input)
{
    animalService.Create(input.Name);
    return RedirectToAction("Index");
}
In the service class implementation you should have something like the following:
public void Create(string name)
{
    Animal animal = new Animal { Name = name };
    animalRepository.UpdateOrInsert(animal);
}
For methods like GetAll or GetFiveLeggedAnimals() you can have a wrapper in the service classes; I think that's OK. And I want to give you some advice: whenever you write code in a controller or a service class, keep in mind how you will test that code, and don't forget about SOLID.
