Summary
This question is about methodology. The answer I am hoping for is a link to the "holy grail" of working with contexts for the scenario described below.
We have been experiencing different problems in our MVC web application project, related to the use of dbContext.
After reading many Q&A threads, blog posts and articles, including proposals involving repositories, dependency injection patterns, Owin, Entity Framework and Ninject, we are still not clear about the right way to work with dbContexts.
Is there any article or demo showing "The Way" to do it in an application more complex than plain "CRUD" operations, with separation between MVC presentation / Domain Entities / Logic / DataAccess layers, including Identity security handling users and role permissions?
Description
Previously, our approach was to create dbContext objects when needed in each repository.
Soon we discovered errors like "dbContext is disposed", since the connection dies together with the repository function. This leaves the retrieved objects only "partially available" to the upper layers of the app (using the .ToList() trick, which is limited because we can access the loaded collections and attributes but cannot navigate further into the objects' child tables, and so on). Also, using 2 contexts from different repositories, we got an exception telling us that 2 contexts were trying to register changes to the same object.
Due to time pressure to deliver prototypes, we created a single static dbContext shared by the whole application, which is called from everywhere whenever needed (Controllers, Models, Logic, DataAccess, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
Problems remain even so: a dbContext can handle only one async operation at a time, and we can have many such calls (e.g. userManager.FindByNameAsync; only async methods are available). Exception: "A second operation started on this context before a previous asynchronous operation completed".
We were thinking about creating the context as the very first step when an action is called in the controller, and then carrying this object like a "relay race" baton to every other layer or function called. That way the connection would live from the "click in the browser" until the response is sent back to it. But we don't like the idea that every single function must take an extra "context" parameter just to share the connection across the layers for the entire operation route.
We are sure that we are not the first ones wondering about what is the right way to use contexts.
Application layers
We have the following (logical) layers, in different workspaces but within the same MVC web app project, from top to bottom:
Views: HTML + Razor + JQuery + CSS. Code here is restricted to the layout, but some HTML might depend on the Role. Method calls are to controllers only, plus utils (like formatting).
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
Controllers: Actions called from the browser result in calls to functions in the Logic layer. Authentication here restricts access to actions or limits what happens inside an action. Controllers work with ViewModels rather than Domain entities, so the ViewModel translation functions are called when communicating with the Logic layer.
Domain Entities: Used by the logic layer, and used by Entity Framework to create the database tables.
Logic Classes: Each Domain entity has an EntityLogic class with all its operations. These form the core, holding the rules that are common to and abstracted from specific consumer clients (ViewModels are unknown here).
Repositories: To access the database. Not sure we need this, since Domain entities are already mapped to database objects by Entity Framework.
Typical scenario
The browser calls an action (POST) in the Products controller to edit a product. The ProductViewModel is used as container of the data.
The controller action is restricted to a collection of roles. Inside the action, depending on the role, a different Logic function is called, and the ProductViewModel is translated to a ProductDomainEntity and passed as a parameter.
The logic EditProduct function calls other functions in different logic classes and also uses localization and security to restrict or filter. The logic may or may not call a Repository to access the data (or use a global context for everything) and deliver the resulting domain entity collections back to the Logic.
Based on the results, the logic may or may not navigate into the results' child collections. The results are returned to the controller action as a domain entity (or a collection of them), and depending on these results the controller may call more Logic, redirect to another action, or respond with a View after translating the results to the right ViewModel.
Where, when and how should we create the dbContext to best support the whole operation?
UPDATE: All classes within the Logic layer are static. The methods are called from controllers simply like this:
UserLogic.GetCompanyUserRoles(user)
, or
user.GetCompanyRoles()
where GetCompanyRoles() is an extension method for User implemented in UserLogic. Thus, having no instances of the Logic classes means there are no constructors through which to receive a dbContext for use inside their methods.
I want a static method inside a static class to know where to get the dbContext instance that is active for the current HttpRequest.
Could Ninject and OnePerRequestHttpModule help with this? Has anyone tried it?
I don't believe there is a "Holy Grail" or magic-bullet answer to this or any other problem with EF / DbContexts. Because of that, I also don't believe there is one definitive answer to your question; any answers will be primarily opinion-based. However, I have found personally that using a CQRS pattern rather than a repository pattern allows for more control and fewer problems when dealing with EF semantics and quirks. Here are a few links that you may (or may not) find helpful:
https://stackoverflow.com/a/21352268/304832
https://stackoverflow.com/a/21584605/304832
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91
https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=92
http://github.com/danludwig/tripod
Some more direct answers:
...This leaves the retrieved objects only "partially available" to the upper layers of the app (using the .ToList() trick, which is limited because we can access the loaded collections and attributes but cannot navigate further into the objects' child tables, and so on). Also, using 2 contexts from different repositories, we got an exception telling us that 2 contexts were trying to register changes to the same object.
The solutions to these problems are to 1) eager load all of the child and navigation properties that you will need when initially executing the query instead of lazy loading, and 2) only work with 1 DbContext instance per HTTP request (inversion of control containers can help with this).
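For illustration, here is a minimal sketch of the eager-loading half of that advice. It assumes a hypothetical AppDbContext with a Products set and Supplier / Categories navigation properties; the lambda Include overload comes from System.Data.Entity in EF6.

using System.Data.Entity;   // for the lambda Include() overload
using System.Linq;

public class ProductQueries
{
    private readonly AppDbContext _db;   // one instance per HTTP request (see below)

    public ProductQueries(AppDbContext db)
    {
        _db = db;
    }

    public Product GetProductWithChildren(int id)
    {
        return _db.Products
            .Include(p => p.Supplier)            // eager load a navigation property
            .Include(p => p.Categories)          // eager load a child collection
            .SingleOrDefault(p => p.Id == id);   // fully populated, safe to hand upward
    }
}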
Due to time pressure to deliver prototypes, we created a single static dbContext shared by the whole application, which is called from everywhere whenever needed (Controllers, Models, Logic, DataAccess, database initializers). We are aware that this is a very dirty workaround, but it has been working better than the previous approach.
This is actually much worse than a "dirty workaround", as you will start to see very strange and hard-to-debug errors once you have a static DbContext instance. I am very surprised to hear that this is working better than your previous approach, but if this one works better it only points to there being even more problems with your previous approach.
We were thinking about creating the context as the very first step when an action is called in the controller, and then carrying this object like a "relay race" baton to every other layer or function called. That way the connection would live from the "click in the browser" until the response is sent back to it. But we don't like the idea that every single function must take an extra "context" parameter just to share the connection across the layers for the entire operation route.
This is what an Inversion of Control container can do for you, so that you don't have to keep passing around instances. If you register your DbContext instance one per HTTP request, you can use the container (and constructor injection) to get at that instance without having to pass it around in method arguments (or worse).
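As a rough sketch of what that registration can look like with Ninject (the Ninject.Web.Common package provides InRequestScope), again assuming a hypothetical AppDbContext:

using Ninject;
using Ninject.Web.Common;
using System.Web.Mvc;

public static class NinjectConfig
{
    public static void RegisterServices(IKernel kernel)
    {
        // one DbContext instance per HTTP request, disposed when the request ends
        kernel.Bind<AppDbContext>().ToSelf().InRequestScope();
    }
}

public class ProductsController : Controller
{
    private readonly AppDbContext _db;

    public ProductsController(AppDbContext db)   // supplied by the container, not passed around by hand
    {
        _db = db;
    }
}

Controllers, handlers and anything else the container resolves during the same request will then receive the same context instance.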
ViewModels: The data container to be exchanged between Controllers and Views. Classes only define attributes, plus functions to convert to and from Domain entities only (Translators).
Little piece of advice: don't declare functions like this on your ViewModels. ViewModels should be dumb data containers, devoid of behavior, even translation behavior. Do the translation in your controllers, or in another layer (like a Query layer). ViewModels can expose derived data properties that are computed from other data properties, but they should have no behavior.
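A small sketch of that split, with purely illustrative names (ProductViewModel, ProductTranslator, Product): the ViewModel only carries data, and the mapping lives outside it.

public class ProductViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal CostPrice { get; set; }

    // Derived data based on other properties is fine; behavior is not.
    public string DisplayName { get { return Name + " (" + Id + ")"; } }
}

// Translation lives in the controller or in a dedicated query/mapping layer:
public static class ProductTranslator
{
    public static Product ToDomain(ProductViewModel vm)
    {
        return new Product { Id = vm.Id, Name = vm.Name, CostPrice = vm.CostPrice };
    }
}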
Logic Classes: Each Domain entity has an EntityLogic class with all its operations. These form the core, holding the rules that are common to and abstracted from specific consumer clients (ViewModels are unknown here).
This could be the fault in your current design. Boiling all of your business rules and logic down into entity-specific classes can get messy, especially when dealing with repositories. What about business rules and logic that span multiple entities, or even aggregates? Which entity logic class would they belong to?
A CQRS approach pushes you out of this mode of thinking about rules and logic, and more into a paradigm of thinking about use cases. Each "browser click" is probably going to boil down to some use case that the user wants to invoke or consume. You can find out what the parameters of that use case are (for example, which child / navigation data to eager load) and then write 1 (one) query handler or command handler to wrap the entire use case. When you find common subroutines that are part of more than one query or command, you can factor those out into extension methods, internal methods, or even other command and query handlers.
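A compressed sketch of one handler per use case, with illustrative names (EditProductCommand, EditProductHandler, AppDbContext); the handler decides what to eager load and applies the rules for that one use case.

using System.Data.Entity;
using System.Linq;

public class EditProductCommand
{
    public int ProductId { get; set; }
    public string NewName { get; set; }
}

public interface IHandleCommand<TCommand>
{
    void Handle(TCommand command);
}

public class EditProductHandler : IHandleCommand<EditProductCommand>
{
    private readonly AppDbContext _db;   // per-request instance from the container

    public EditProductHandler(AppDbContext db)
    {
        _db = db;
    }

    public void Handle(EditProductCommand command)
    {
        var product = _db.Products
            .Include(p => p.Categories)              // eager load what this use case needs
            .Single(p => p.Id == command.ProductId);

        product.Name = command.NewName;              // the rules for this use case live here
        _db.SaveChanges();
    }
}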
If you are looking for a good place to start, I think that you will get the most bang for your buck by first learning how to properly use a good Inversion of Control container (like Ninject or SimpleInjector) to register your EF DbContext so that only 1 instance gets created for each HTTP request. This should help you avoid your disposal and multi-context exceptions at the very least.
I always use a BaseController that contains a dbContext and passes it to the logic functions (I call them Extensions). That way you use only one context per call, and if something fails, nothing is committed and the whole operation effectively rolls back.
Example:
Controller1 inherits BaseController.
Controller1 now has access to the property db, which is a context.
Controller1 contains an action "Action1".
Action1 calls the function "LogicFunctionX(db, value1, Membership.CurrentUserId, true)".
In Action1 you can call other logic functions, or even call them inside "LogicFunctionX", always passing the db property through the functions.
I save the context inside the controller (mostly), after calling all the logic functions.
Note: the argument true that I pass to LogicFunctionX tells it whether to save the context inside the function or not. Like:
if (Save)
    db.SaveChanges();
I had several problems before doing this.
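Roughly, the approach above looks like this; the AppDbContext, ProductLogic and the simplified LogicFunctionX signature are only placeholders for your own types.

using System.Web.Mvc;

public abstract class BaseController : Controller
{
    protected readonly AppDbContext db = new AppDbContext();

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();   // the context dies with the controller
        base.Dispose(disposing);
    }
}

public class Controller1 : BaseController
{
    public ActionResult Action1(int value1)
    {
        // the same context is passed to every logic function in this request
        ProductLogic.LogicFunctionX(db, value1, save: false);
        db.SaveChanges();              // single commit at the end of the action
        return View();
    }
}

public static class ProductLogic
{
    public static void LogicFunctionX(AppDbContext db, int value1, bool save)
    {
        // ... query and modify entities through db here ...
        if (save) db.SaveChanges();
    }
}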
Related
I need my controllers to have several functionalities, e.g.,
Some of the controllers save my entities to a database, so I use the Repository pattern here (passing the Repository in the constructor).
Some of the controllers, on the other hand, do this PLUS send my entities to an external service. In this case I pass the service client in the constructor of the Controller too.
My controller, however, cannot inherit from two classes, e.g.:
DbEntityController: The one that deals with a database repository.
ServiceEntityController: The one that deals with a remote service client.
Can you give me some advice about how to implement a sort of composition pattern for MVC Controllers? I am mainly concerned about the URL routing, etc. Is this going to work if my action methods delegate their tasks to an "inner" controller?
Any ideas will help a lot!
UPDATE:
These are my constructors more or less (I use DI with Unity by the way)
DbEntityController(IDbRepository<TEntity>)
ServiceEntityController(IServiceRepository<TEntity>)
OrderController(IDbRepository<IOrder>, IServiceRepository<IOrder>)
AddressController(IDbRepository<IAddress>)
Somehow, OrderController would need to inherit from both DbEntityController and ServiceEntityController.
I am trying to learn some concepts about DDD and the part of persisting Aggregates is confusing me a bit. I have read various answers on the topic on SO but none of them seem to answer my question.
Let's say I have an Aggregate root of Product. Now I do not want to inject the ProductRepository that will persist this aggregate root into the constructor of the Product class itself. Imagine me writing code like
var prod = new Product(Factory.CreateProductRepository(), name, costprice);
in the UI layer. If I do not want to inject my repository via dependency injection into the Aggregate Root, then the question is: where should this code go? Should I create a class just for persisting this AR? Can anyone suggest the correct and recommended approach to solve this issue?
My concern is not which ORM to use or how to make this AR ORM friendly or easy to persist, my question is around the right use of repositories or any persistence class.
Application Services
You are right, the domain layer should know nothing about persistence. So injecting the repository into Product is indeed a bad idea.
The DDD concept you are looking for is called Application Service. An application service is not part of the domain layer, but lives in the service layer (sometimes called application layer). Application services represent a use case (as opposed to a domain concept) and have the following responsibilities:
Perform input validation
Enforce access control
Perform transaction control
The last point means that an application service will query a repository for an aggregate of a specific type (e.g. by ID), modify it by using one of its methods, and then pass it back to the repository for updating the DB.
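A minimal sketch of such an application service, assuming a hypothetical IProductRepository and a Product aggregate with a Rename method:

using System;

public class ProductApplicationService
{
    private readonly IProductRepository _products;

    public ProductApplicationService(IProductRepository products)
    {
        _products = products;
    }

    public void RenameProduct(Guid productId, string newName)
    {
        // 1. input validation
        if (string.IsNullOrWhiteSpace(newName))
            throw new ArgumentException("A name is required.", "newName");

        // 2. access control would be enforced here

        // 3. transaction control: load the aggregate, invoke domain behavior, persist it
        var product = _products.GetById(productId);
        product.Rename(newName);          // the domain logic lives on the aggregate itself
        _products.Update(product);
    }
}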
Repository Granularity
Concerning your second question
Should I create a class only for persisting this AR?
Yes, creating one repository per aggregate is a common approach. Often, standard repository operations like getById(), update(), delete(), etc. are extracted into a reusable class (either a base class or by aggregation).
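In code, that often looks roughly like the following (illustrative names only): a small generic base interface plus one specific repository per aggregate.

using System;
using System.Collections.Generic;

public interface IRepository<TAggregate>
{
    TAggregate GetById(Guid id);
    void Update(TAggregate aggregate);
    void Delete(TAggregate aggregate);
}

public interface IProductRepository : IRepository<Product>
{
    Product GetBySku(string sku);                        // aggregate-specific query
}

public interface IOrderRepository : IRepository<Order>
{
    IReadOnlyList<Order> GetOpenOrdersForCustomer(Guid customerId);
}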
You can also create additional repositories for non-domain information, e.g. statistical data. In these cases, make sure that you don't accidentally miss a domain concept, however.
I am pretty new to MVC 4; up to this point I have worked mostly with Web Forms in C#. I understand the pattern of MVC, the routing, calling actions and so on.
But what about the actions which are responsible for fetching data from the database, for example by firing stored procedures? I have seen some tutorials where they put the logic for connecting to the database directly in the actions.
However I am thinking of a more centralized way to do it. For example, I can put all the functions which are firing stored procedures in a separate class named DatabaseCoordinator.cs in a folder named Helpers for example. Then I can call them from the actions in the controllers.
That way I will know that I can find all of my database methods in one class, which is a very clean solution, I think (or at least it was in Web Forms). However I want to follow the MVC pattern, and use only models, views and controllers, as the name of the pattern itself implies.
So what is the best practice for that? Should I make a separate class for this, or implement the logic directly in the controllers, or perhaps somewhere else?
You should certainly make a separate repository class to contain all of your data access operations.
There is a good worked example here:
http://www.asp.net/mvc/tutorials/getting-started-with-ef-using-mvc/implementing-the-repository-and-unit-of-work-patterns-in-an-asp-net-mvc-application
I recommend that you put your data access code somewhere other than in your controller. The controller's primary purpose is to gather together the information for display on a page or the reverse - to take the data from the page that is posted back and feed it to the code responsible for business rules and data access.
For most MVC projects (heck, for most projects really!) I build separate class library projects - at minimum one for business rules and data access, though typically I'll make those two separate projects. The purpose of separating the logic is really for simpler future maintenance and reusability. If you keep your various logical parts separate, you can easily swap them out if your logic or database needs to change, or you can easily consume the business rules and data from a new type of user interface; for example, if you decided to implement your project as a Windows forms application in addition to your web system, you could (theoretically) just reuse your business logic and data access logic libraries and only rebuild the user layer. However, if you build your logic into your controller, you really can't reuse that logic without extracting it and converting it to the new application model you're using.
So, simply put, definitely keep 99% of your logic and data access out of your controller. Only put what you must put into your controller, the rest in a separate class, or where appropriate, in separate class libraries.
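For example, a very small sketch of that separation, with illustrative names (PersonRepository, AppDbContext, Person); the repository could just as easily live in a separate class library project:

using System.Collections.Generic;
using System.Linq;
using System.Web.Mvc;

// Data access lives in its own class (or class library), not in the controller.
public class PersonRepository
{
    public IList<Person> GetPersons()
    {
        using (var db = new AppDbContext())
        {
            // could just as well call a stored procedure; the caller doesn't care
            return db.People.ToList();
        }
    }
}

public class PeopleController : Controller
{
    private readonly PersonRepository _repository = new PersonRepository();

    public ActionResult Index()
    {
        var people = _repository.GetPersons();   // no data access code in the controller
        return View(people);
    }
}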
Good luck!
The Controllers and Views tend to stay within the same project, but it's common to split the data access classes and models into their own separate class library, as this allows other projects to utilise them.
This will allow you, in the future, to maybe add a windows forms/wpf interface or maybe a mobile device interface, leveraging the work you already have in the standalone class library.
Another thing to consider, is looking into how to use ViewModels in your MVC application. It's a common technique when Views require more than one domain object. Using View Models in MVC.
Check out the Unit of Work (UOW) pattern combined with the Repository pattern. It doesn't matter whether you ultimately call a stored procedure or an inline LINQ query to return results; your caller shouldn't know or care how GetPersons is ultimately implemented. The UOW pattern combined with the Repository pattern is a very popular way to expose an Entity Framework database in the ASP.NET community. You will find different ways to do it; some are overkill and some just create dependencies with no actual benefit, but you will find a way that feels right to you with these patterns.
After more experience, I would like to change my answer and state that the Repository pattern, and thus the Unit of Work pattern, are pointless layers of abstraction that prevent you from working directly with Entity Framework, which is already your data layer abstraction.
Other than being able to swap out databases (say from Microsoft SQL Server to PostgreSQL; when would this ever happen in the real world?) and controlling the structure of complex queries that you don't want repeated in your code, I see no real value in the repository pattern. To include CreatedBy/ModifiedBy values on Insert/Update you need only override SaveChanges on your Entity Framework context. To encapsulate queries that include business rules, such as where active = 1 and isdeleted = 0, just extend your LINQ queries with extension methods.
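For instance, a small sketch of that last point, assuming a hypothetical Product entity with Active and IsDeleted flags:

using System.Linq;

public static class ProductQueryExtensions
{
    // encapsulates the "active = 1 and isdeleted = 0" rule in one place
    public static IQueryable<Product> ActiveOnly(this IQueryable<Product> query)
    {
        return query.Where(p => p.Active && !p.IsDeleted);
    }
}

// Usage, straight against the EF context:
// var products = db.Products.ActiveOnly().Where(p => p.CostPrice > 10).ToList();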
I think I've hit that "paralysis by analysis" state.
I have an MVC app, using EF as an ORM.
So I'm trying to decide on the best data access pattern, and so far I'm thinking that putting all data access logic into the controllers is the way to go... but it somehow doesn't sound right.
Another option is creating an external repository, handling data interactions.
Here's my pros/cons:
If embedding data access to controllers, I will end up with code like this:
using (var db = new AppDbContext()) // AppDbContext : DbContext, exposing a Users DbSet
{
    User user = db.Users.Where(x => x.Name == "Bob").Single();
    user.Address.Street = "some st";
    db.SaveChanges();
}
So with this I get the full benefits of lazy loading, I close the connection right after I'm done, and I'm flexible with the where clause - all the niceties.
The con - I'm mixing a bunch of stuff in a single method - data checking, data access, UI interactions.
With a Repository, I'm externalizing data access, and in theory I can just replace the repos if I decide to use ADO.NET or go with a different database.
But I don't see a good, clean way to get lazy loading, or to control the DbContext/connection lifetime.
Say I have an IRepository interface with CRUD methods; how would I load a list of addresses that belong to a given user? Adding methods like GetAddressListByUserId looks ugly and wrong, and will force me to create a bunch of methods that are just as ugly and make little sense when using an ORM.
I'm sure this problem has been solved a million times, and I hope there's a solution somewhere...
And one more question on the repository pattern: how do you deal with objects that are properties? E.g. a User has a list of addresses; how would you retrieve that list? Create a repository for the Address? With an ORM the Address object doesn't need a reference back to the User, nor an Id field; with a repo it will have to have all of that. More code, more exposed properties...
The approach you choose depends a lot on the type of project you are going to be working with. For small projects where a Rapid Application Development (RAD) approach is required, it might almost be OK to use your EF model directly in the MVC project and have data access in the controllers, but the more the project grows, the more messy it will become and you will start running into more and more problems. In case you want good design and maintainability, there are several different approaches, but in general you can stick to the following:
Keep your controllers and Views clean. Controllers should only control the application flow and not contain data access or even business logic. Views should only be used for presentation: give a View a ViewModel and it will present it as HTML (no business logic or calculations). A ViewModel per view is a pretty clean way of doing it.
A typical controller action would look like:
public ActionResult UpdateCompany(CompanyViewModel model)
{
    if (ModelState.IsValid)
    {
        Company company = SomeCompanyViewModelHelper
            .MapCompanyViewModelToDomainObject(model);
        companyService.UpdateCompany(company);
        return RedirectToRoute(/* Wherever you go after the company is updated */);
    }
    // Return the same view with highlighted errors
    return View(model);
}
Due to the aforementioned reasons, it is good to abstract your data access (testability, ease of switching the data provider or ORM or whatever, etc.). The Repository pattern is a good choice, but here you also get a few implementation options. There's always been a lot of discussion about generic/non-generic repositories, whether or not one should return IQueryables, etc. But eventually it's for you to choose.
Btw, why do you want lazy loading? As a rule, you know exactly what data you require for a specific view, so why would you choose to fetch it in a deferred way, thus making extra database calls, instead of eager loading everything you need in one call? Personally, I think it's okay to have multiple Get methods for fetching objects with or without children. E.g.
public interface ICompanyRepository
{
    Company Get(int id);
    Company Get(string name);
    Company GetWithEmployees(int id);
    // ...
}
It might seem a bit overkill and you may choose a different approach, but as long as you have a pattern you follow, maintaining the code is much easier.
Personally I do it this way:
I have an abstract Domain layer, which has not just CRUD methods but specialized ones, for example UsersManager.Authenticate(), etc. Internally it uses the data access logic, or a data-access layer abstraction (depending on the level of abstraction I need to have).
It is always better to have at least an abstract dependency. Here are some of its pros:
you can replace one implementation with another at a later time.
you can unit test your controller when needed.
As for the controller itself, let it have two constructors: one taking an abstract domain access class (e.g. a facade of the domain), and another (empty) constructor which chooses the default implementation. This way your controller works well both at web application run-time (calling the empty constructor) and during unit testing (with a mock domain layer injected).
Also, to be able to easily switch to another domain later, be sure to inject the domain creator instead of the domain itself. By localizing the domain layer construction in the domain creator, you can switch to another implementation at any time just by changing the creator (by creator I mean some kind of factory), as sketched below.
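A rough sketch of those two constructors and the injected creator, where IDomainFacade, IDomainFactory and DefaultDomainFactory are placeholder names for your own abstractions:

using System.Web.Mvc;

public class UsersController : Controller
{
    private readonly IDomainFacade _domain;

    // Used by the MVC runtime at run-time: picks the default implementation.
    public UsersController() : this(new DefaultDomainFactory()) { }

    // Used by unit tests (or a container): inject a mock or alternative factory.
    public UsersController(IDomainFactory domainFactory)
    {
        _domain = domainFactory.Create();
    }

    public ActionResult Login(string userName, string password)
    {
        bool ok = _domain.Users.Authenticate(userName, password);
        return ok ? (ActionResult)RedirectToAction("Index", "Home") : View();
    }
}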
I hope this helps.
Addition:
I would not recommend having CRUD methods in the domain layer, because this will become a nightmare once you reach the unit-testing phase, or even more so when you need to change the implementation to a new one later on.
It really comes down to where you want your code. Whether you put the data access for an object behind an IRepository or in the controller doesn't matter: you will still wind up with either a series of GetByXXX calls or the equivalent code. Either way you can lazy load and control the lifetime of the connection. So now you need to ask yourself: where do I want my code to live?
Personally, I would argue for getting it out of the controller. By that I mean moving it to another layer, probably using an IRepository type of pattern where you have a series of GetByXXX calls. Sure, they are ugly. Wrong? I would argue otherwise. At least they are all contained within the same logical layer, rather than being scattered throughout the controllers where they are mixed in with validation code, etc.
I've just started a new project and have naturally opted to use a lot of new tech.
I'm using (Fluent) NHibernate, ASP.NET MVC 3 and am trying to apply the Repository pattern.
I've decided to separate my Business Logic into a separate project and define services which wrap my repositories, so that I can return POCOs instead of the NHibernate proxies and maintain more separation between my front end and DA logic. This will also give me the power to easily provide the same logic as an API later (a requirement).
I have chosen to use a generic IRepository<T> interface where T is one of my NHibernate-mapped Entities, which all implement IEntity (my interface is really only a marker).
The problem is this goes against the aggregate root pattern and I'm starting to feel the pain of the anemic domain model.
If I change an object that hangs off another:
Root <- changed
Child <- changed
In my service I have to do the following:
public void AddNewChild(ChildDto child, int rootId)
{
    var childEntity = Mapper.Map<ChildDto, ChildEntity>(child);
    var rootEntity = _rootRepository.FindById(rootId);

    rootEntity.Children.Add(childEntity);

    _childRepository.SaveOrUpdate(childEntity);
    _rootRepository.SaveOrUpdate(rootEntity);
}
If I don't save the child first I get an exception from NHibernate. I feel like my generic repository (I currently require 5 of them in one service) is not the right way to go.
public Service(IRepository<ThingEntity> thingRepo, IRepository<RootEntity> rootRepo, IRepository<ChildEntity> childRepo, IRepository<CategoryEntity> catRepo, IRepository<ProductEntity> productRepo)
I feel like instead of making my code more flexible, it's making it more brittle. If I add a new table I need to go and change the constructor in all my tests (I'm using DI for the implementation so that's not too bad) but it seems a bit smelly.
Does anyone have any advice on how to restructure this sort of architecture?
Should I be making my repositories more specific? Is the service abstraction layer a step too far?
EDIT: There's some great related questions which are helping:
Repository Pattern Best Practice
repository pattern help
Architectural conundrum
When you have an Aggregate, the Repository is the same for the aggregate parent (root) and its children because the life cycle of the children is controlled by the root object.
Your "Save" method for the root object type should be also directly responsible for persisting the changes to the children records instead of delegating it into yet another repository(ies).
Additionally, in a "proper" Aggregate pattern, the child records have no identity of their own (at least one that is visible outside the Aggregate). This has several direct consequences:
There can be no foreign keys from outside records/aggregates to those children records.
As a result of point 1, every time you save the root object's state, you can delete and recreate the child records in the database. This usually makes your persistence logic easier if you bump into precedence problems.
Note: the reverse case of 1. is not true. Child records in an aggregate can have foreign keys to other root records.
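For illustration, a single repository for the whole aggregate might look roughly like this (the NHibernate ISession usage and the RootEntity/Children names are placeholders; the cascade behavior depends on your mapping):

using System;
using NHibernate;

public class RootRepository
{
    private readonly ISession _session;

    public RootRepository(ISession session)
    {
        _session = session;
    }

    public RootEntity FindById(Guid id)
    {
        return _session.Get<RootEntity>(id);
    }

    public void Save(RootEntity root)
    {
        // With the Children collection mapped with cascade (e.g. all-delete-orphan),
        // saving the root also persists added, changed and removed children.
        _session.SaveOrUpdate(root);
    }
}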
I feel like instead of making my code more flexible, it's making it more brittle. If I add a new table I need to go and change the constructor in all my tests (I'm using DI for the implementation so that's not too bad) but it seems a bit smelly.
Implement the Unit of Work pattern around your repositories. In practice that means you have a unit of work class which holds your repositories, injected via constructor or property injection. Furthermore, it exposes a commit and/or transaction method. Inject only the IUnitOfWork instance into your services. When you add a repository, you only have to change the unit of work, not the business logic (services). A rough sketch follows.
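Something along these lines, with NHibernate and the repository types from the question used purely as placeholders:

using System;
using NHibernate;

public interface IUnitOfWork : IDisposable
{
    IRepository<RootEntity> Roots { get; }
    IRepository<ChildEntity> Children { get; }
    IRepository<ProductEntity> Products { get; }
    void Commit();
}

public class NHibernateUnitOfWork : IUnitOfWork
{
    private readonly ISession _session;
    private readonly ITransaction _transaction;

    public NHibernateUnitOfWork(ISession session,
                                IRepository<RootEntity> roots,
                                IRepository<ChildEntity> children,
                                IRepository<ProductEntity> products)
    {
        _session = session;
        _transaction = session.BeginTransaction();
        Roots = roots;
        Children = children;
        Products = products;
    }

    public IRepository<RootEntity> Roots { get; private set; }
    public IRepository<ChildEntity> Children { get; private set; }
    public IRepository<ProductEntity> Products { get; private set; }

    public void Commit()
    {
        _transaction.Commit();
    }

    public void Dispose()
    {
        _transaction.Dispose();
        _session.Dispose();
    }
}

// A service then takes a single dependency:
// public Service(IUnitOfWork unitOfWork) { ... }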