I don't understand why, with the Unit of Work + Repository pattern, I can use a single data instance (which wraps the DbContext and provides repositories for the relevant entities) per controller in MVC. Without the pattern, I need to create a DbContext instance in every action because of concurrency problems.
Thanks in advance!
You should have one DbContext per request from the user (which makes sense if you think about it).
This is not per controller, however - if it is working that way for you, I'd guess you have the lifecycle set up incorrectly in your Inversion of Control container.
A few code samples might help.
Maybe this question has been explained somewhere, but I could not find a good solution to the following:
I was reading this blog post from Mark Seemann about captive dependencies, and as far as I understand, at the end of the post he concludes that you should never use, or at least should avoid, captive dependencies; otherwise there will be trouble (so far, OK). Here is another post from the Autofac documentation.
They suggest using captive dependencies only on purpose (when you know what you are doing!). This made me think about a situation on my website. I have about 10 services, all of which rely on a DbContext for database operations. I think they could easily be registered as InstancePerLifetimeScope if I can make sure the DbContext is not held in memory forever, attached to my services. (I am using Autofac in my case.) So I thought a good starting point would be to register all of these services as per-lifetime-scope instances and the DbContext as instance-per-request. Then, in my services, I would use something like this:
public class MyService
{
    private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();

    private MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}
And then in my startup class I have:
builder.RegisterType<ApplicationDbContext>().As<IDbContext>().InstancePerRequest();
builder.RegisterType<MyService>().As<IMyService>().InstancePerLifetimeScope();
Does this pattern work correctly? I mean, does it avoid keeping the DbContext attached to a service forever, so that it is disposed at the end of the request? And if it works, is there any performance penalty from this line:
private readonly IDbContext _dbContext = DependencyResolver.Current.GetService<IDbContext>();
compared to constructor injection? (The DbContext makes many calls to the database, so I am afraid that resolving IDbContext every time I want to use it might be resource-consuming.)
The reason I want the DbContext to be instance-per-request rather than instance-per-dependency is that I have implemented the Unit of Work pattern on top of the DbContext object.
A normal method in my controller would look like:
public ActionResult DoSth()
{
    using (var unitOfWork = UnitOfWorkManager.NewUnitOfWork())
    {
        //do stuff
        try
        {
            unitOfWork.Commit();
            return View();
        }
        catch (Exception e)
        {
            unitOfWork.RollBack();
            LoggerService.Log(e);
            return View();
        }
    }
}
If this works fine, there is another issue I am concerned about. If I can make my services per-lifetime-scope instances (except the DbContext), is there any problem with applying async-await to every method inside the services to make them non-blocking? In particular, is there any issue using async-await with the DbContext instance? For example, I would have something like this:
public async Task<MyModel> GetMyModel()
{
    var result = //await on a new task which will use the dbContext instance here
    return Mapper.Map<MyModel>(result);
}
Any advice or suggestion is much appreciated!
I'd approach the issue from a distance.
There are some architectural choices that can make your life easier. In web development it's practical to design your application to have a stateless service layer (all state is persisted in the DB) and to follow the one HTTP request, one business operation principle (in other words, one service method per controller action).
I don't know what your architecture looks like (there's not enough info in your post to determine it), but chances are it meets the criteria I described above.
In that case it's easy to decide which component lifetime to choose: the DbContext and the service classes can be transient (InstancePerDependency in Autofac terminology) or per-request (InstancePerRequest) - it doesn't really matter. The point is that they have the same lifetime, so the problem of captive dependencies doesn't arise at all.
Further implications of the above:
You can just use constructor injection in your service classes without worries. (In any case, the service locator pattern should be the last option, after investigating lifetime-control possibilities like lifetime scopes and Owned<T>.)
EF itself implements the unit of work pattern via SaveChanges, which is suitable in most cases. In practice, you only need to implement a UoW over EF if its transaction handling doesn't meet your needs for some reason; those are rather special cases.
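As a minimal sketch, assuming the ApplicationDbContext from your registration code and a hypothetical Books set, SaveChanges acts as the commit of the unit of work:

using (var context = new ApplicationDbContext())
{
    // All tracked changes are flushed in a single transaction
    // when SaveChanges is called - that call is the "commit".
    context.Books.Add(new Book { Title = "New Book" });

    var existing = context.Books.Find(1); // assume record 1 exists
    existing.Title = "Renamed";

    context.SaveChanges(); // both operations are committed atomically
}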
[...] is there any issue to apply async-await on every method inside of the services to make them non-blocking methods.
If you apply the async-await pattern consistently (I mean all async operations are awaited) straight up to your controller actions (returning Task<ActionResult> instead of ActionResult), there'll be no issues. (However, keep in mind that in ASP.NET MVC 5 async support is not complete - async child actions are not supported.)
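For illustration, a minimal sketch of the pattern applied end-to-end, assuming a hypothetical async method GetMyModelAsync on the service:

public async Task<ActionResult> DoSth()
{
    // The await is propagated all the way up to the action,
    // so no request thread is blocked while the query runs.
    var model = await _myService.GetMyModelAsync();
    return View(model);
}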
The answer, as always, is it depends... This configuration can work if:
Your scopes are created within the request boundary. Is your unit of work creating a scope?
You don't resolve any of your InstancePerLifetimeScope services before creating your scope. Otherwise they potentially live longer than they should if you create multiple scopes within the request.
I personally would just recommend making anything that depends on DbContext (directly or indirectly) InstancePerRequest. Transient would work as well. You definitely want everything within one unit of work to use the same DbContext. Otherwise, because of Entity Framework's first-level cache, different services may retrieve the same database record but operate on different in-memory copies if they're not using the same DbContext. The last update would win in that case.
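To illustrate that failure mode, here's a small hypothetical sketch with two separate contexts loading the same row:

using (var ctx1 = new ApplicationDbContext())
using (var ctx2 = new ApplicationDbContext())
{
    // Each context tracks its own in-memory copy of record 1.
    var copy1 = ctx1.Books.Find(1);
    var copy2 = ctx2.Books.Find(1);

    copy1.Title = "First edit";
    copy2.Title = "Second edit";

    ctx1.SaveChanges(); // writes "First edit"
    ctx2.SaveChanges(); // overwrites it - the last update wins
}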
I would not reference your container in MyService; just constructor-inject it. Container references in your domain or business logic should be used sparingly and only as a last resort.
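For example, a sketch of your MyService rewritten with constructor injection (same IDbContext and MyModel types from your question):

public class MyService : IMyService
{
    private readonly IDbContext _dbContext;

    // Autofac supplies the per-request IDbContext here;
    // the service never touches the container.
    public MyService(IDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public MyModel GetMyModel() => Mapper.Map<MyModel>(_dbContext.MyTable.FirstOrDefault());
}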
Is it correct to create Unit of Work in order to share the DbContext among the Repositories?
If it isn't, what is the recommended approach? I really think sharing the DbContext is sometimes necessary.
I'm asking this because of the answer for this question: In-memory database doesn't save data
Is it correct to create Unit of Work in order to share the DbContext among the Repositories?
It is a design decision, but yes; there is no problem in doing that. It is absolutely valid for code from multiple repositories to execute over one single connection.
I really think it is needed to share the DbContext sometimes.
Absolutely; there are many times when you need to share DbContext.
Your linked answer is really good. I especially like the three points it mentions. The OP of that question is doing some unnecessarily complicated things, like Singleton, Service Locator, and async calls, without understanding how they work. All of these are good tools, but only if they are used at the right time and in the right place.
The following is from your linked answer:
The best thing is that all of these could be avoided if people stopped attempting to create a Unit of Work + Repository pattern over yet another Unit of Work and Repository. Entity Framework Core already implements these:
Yes, this is true. But even so, a repository and UoW may be helpful in some cases. That is a design decision based on business needs. I have explained this in my answers below:
https://stackoverflow.com/a/49850950/5779732
https://stackoverflow.com/a/50877329/5779732
Using an ORM directly in calling code has the following issues:
It makes the code a little more complicated.
Database code gets mixed into the business logic.
Because many ORM objects are used inline in the calling code, it is very hard to unit test.
All of these issues can be overcome by creating concrete repositories in the Data Access Layer. The DAL should expose the concrete repositories to calling code (BLL, services, controllers, whatever) through interfaces. This way, your database and ORM code is fully contained in the DAL, and you can easily unit-test the calling code by mocking the repositories. Refer to this article explaining the benefits of the repository pattern even with ORMs.
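A minimal sketch of that shape; IBookRepository and Book are illustrative names, not from the linked article:

public interface IBookRepository
{
    Book GetById(int id);
    void Add(Book book);
}

// EF-specific code stays inside the DAL; callers see only the
// interface, which is trivial to mock in unit tests.
public class BookRepository : IBookRepository
{
    private readonly ApplicationDbContext _context;

    public BookRepository(ApplicationDbContext context)
    {
        _context = context;
    }

    public Book GetById(int id) => _context.Books.Find(id);

    public void Add(Book book) => _context.Books.Add(book);
}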
Apart from all of the above, one other issue commonly discussed is "What if we decide to change the ORM in the future?" In my personal understanding, this concern is largely unfounded: it happens very rarely and, in most cases, should not be considered during design.
I recommend avoiding overthinking and over-design. Focus on your business objectives.
Refer to this example code to understand how to inject a UoW into repositories. The code sample uses Dapper, but the overall design may still be useful to you.
What you need is a class that contains multiple repositories and creates a UoW. Then, when you have a use case that needs multiple repositories with a shared UoW, this class creates it and passes it to the repositories.
I typically call this class a Service, but I don't think there is a standardized name for it.
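A rough sketch of such a class, reusing the UnitOfWorkManager from the controller example earlier in this thread (the repository constructors are illustrative):

public class BookAuthorService
{
    public void AddBookWithAuthor(Book book, Author author)
    {
        using (var unitOfWork = UnitOfWorkManager.NewUnitOfWork())
        {
            // Both repositories receive the same UoW, so their work
            // runs over one connection and commits together.
            var authorRepository = new AuthorRepository(unitOfWork);
            var bookRepository = new BookRepository(unitOfWork);

            authorRepository.Add(author);
            bookRepository.Add(book);

            unitOfWork.Commit();
        }
    }
}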
I am trying to learn Entity Framework and related patterns. While searching, I came across this site: http://www.asp.net/mvc...
I checked the patterns, but there is one point I could not understand. From what I have read, the DbContext's lifetime should be very short, because it holds an in-memory object model whose changes should be persisted to the database as soon as possible. If not, there will be conflicts in multi-user scenarios.
When I look at the tutorial above, I see that only one UoW is defined per controller. Does this mean that as long as I am on one page of the site doing CRUD operations, I am using the same DbContext? Shouldn't its lifetime be shorter? For example, one UoW could be defined per action.
Could somebody please explain the lifetime of a UoW?
Defining a DbContext as a private class variable vs. defining it as a local variable shouldn't make any difference.
Every time an HTTP request comes in, the controller is initialized (along with any of its class variables) and the action is called. Instances of controllers do not persist between requests, nor do any instances of DbContext.
Check out this article about why you don't have to worry about the lifetime of a DbContext.
EDIT
I realized a caveat to this answer a few days after I posted it, and I would have felt guilty had I not updated it.
The above statement is true if every action uses your DbContext. If only a few of your actions use it, however, you might be better off with a locally-scoped DbContext instead. That prevents a DbContext class variable from being created unnecessarily whenever you call an action that doesn't use it. Will this make your code more efficient? Yes, but insignificantly so, and you'll have to instantiate a DbContext every time you want to use it, which results in slightly messier code than a single class variable at the top.
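For illustration, a sketch contrasting the two styles (the controllers and entity sets are hypothetical):

// Class-variable style: the context is created with the controller,
// even for actions that never use it.
public class BooksController : Controller
{
    private readonly ApplicationDbContext _db = new ApplicationDbContext();

    public ActionResult Index() => View(_db.Books.ToList());

    public ActionResult About() => View(); // _db was created for nothing here
}

// Locally-scoped style: only the actions that need a context create one.
public class AuthorsController : Controller
{
    public ActionResult Index()
    {
        using (var db = new ApplicationDbContext())
        {
            return View(db.Authors.ToList());
        }
    }
}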
Each action call gets a new instance of the controller. Just set a breakpoint in your controller's constructor and you will see that it is hit every time you make a request to any action on the controller.
Generally, a DbContext is scoped per web request in a web application, so injecting a DbContext into your controller will generally give you what you need.
In the example given, the controller takes responsibility for creating the DbContext instance and disposing of it. A better practice is to let the IoC container take responsibility for controlling the lifetime of the DbContext instance, and to use constructor injection to supply the DbContext to the MVC/WebApi controller.
As for a WCF service, my preference is to apply the attribute below
[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
to the service, and also to configure the lifetime of the DbContext so that only one DbContext instance is created per call.
You may need some lifetime management if DI is implemented for a WCF service or MVC/WebApi.
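As a sketch of that PerCall setup (BookService/IBookService are illustrative names, and the container must be wired into WCF instancing, e.g. via the Unity bootstrapper from the reference below):

[ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
public class BookService : IBookService
{
    private readonly IDbContext _dbContext;

    // PerCall means a fresh service instance - and with it a fresh
    // DbContext - for every call; the container handles disposal.
    public BookService(IDbContext dbContext)
    {
        _dbContext = dbContext;
    }
}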
Ref: https://msdn.microsoft.com/en-us/library/dn178463(v=pandp.30).aspx#_Lifetime_Management
This post is also highly recommended for your question.
One DbContext per web request... why?
I have a View where I search my DB for an object (e.g. Books).
My controller for this view depends on a BooksRepository that implements a search method.
Everything works fine. I also have the option to do an advanced search, which presents a larger form in a modal popup. This form has many fields, including a dropdown box to select an 'Author' to search by.
I would like to pass a list of authors in my view model, so in my controller I instantiate my view model and need to call a repository method to bring back the list of authors...
My thinking is that this GetAuthors() method should be in an AuthorRepository...
Is it bad practice to inject multiple repositories into a controller? Or should I have an Author controller injected with the author repository, and call a method on the Author controller from my BookSearch controller?
I think it's perfectly fine to refer to multiple repositories in a controller. A controller's job is to wrap data in a model and pass it to a view, regardless of how it gets the data. Cross-controller calls can get messy.
I don't think it's a bad idea to inject the two repositories you need into the controller. Actually it sounds like a good practice.
But if you feel things are getting out of hand, you might want to create an Application Service to orchestrate a function into which you could inject several repositories. That would also be a way to move logic out of the controller.
But in this case, I think you are doing it right.
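A minimal sketch of that (IAuthorRepository and the view model are illustrative names):

public class BookSearchController : Controller
{
    private readonly IBookRepository _books;
    private readonly IAuthorRepository _authors;

    // Both repositories arrive through the constructor; the controller
    // doesn't care how the data layer is implemented.
    public BookSearchController(IBookRepository books, IAuthorRepository authors)
    {
        _books = books;
        _authors = authors;
    }

    public ActionResult AdvancedSearch()
    {
        var viewModel = new BookSearchViewModel
        {
            Authors = _authors.GetAuthors() // populates the dropdown
        };
        return View(viewModel);
    }
}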
Read this book: http://www.infoq.com/minibooks/domain-driven-design-quickly
Personally, I would think that books and authors are pretty specific entities.... unless you're planning on having an author write a song as well, and you want to have a music repository and a book repository, I would probably keep the authors and books in the same repository, as you're more than likely going to need them both at the same time.
Even then, you could have a music repository and a book repository that both pull from the same author table. There's nothing wrong with that. And no, having more than one repository in a controller is not a "no-no", but unless you're using dependency injection, it can start to get hairy as you add more repositories.
I have a few controllers that reference more than one repository. Just be careful if each of your repositories instantiates its own data context (or EF ObjectContext). Speaking in terms of Entity Framework, if you start navigating entity references with two open contexts, you'll have problems.
Other than that, it works fine for me.
From an architect's point of view:
If you feel your MVC controllers are getting out of hand with dependencies, then it's time to think about two things.
Have a look at the design and determine whether you need facade classes to represent complex subsystems; it's also better for unit testing (there is such a thing as a 4-tier app).
Look at some of the other design patterns that can help solve this issue before it becomes a problem (Strategy with DI, possibly Visitor).
Also, I bet the unit tests in this situation are more of a pain. If you can't unit test something in a simple way, it should be flagged for improvement.
Good luck,
If you take a look at this SO question, I have a question about the next step.
Imagine you have two repositories generating Items and SubItems. I also have a UnitOfWork which acts as a context for changes to (in this simple case) the two different item types.
There seem to be a few ways of creating a UnitOfWork: sometimes it is injected into the repository, sometimes it is produced by a factory (and then either injected or retrieved from the factory).
My question is how does the UnitOfWork notify the repositories that its changes are now to be committed?
I guess I can have the repository subscribe to events on the UnitOfWork for commit/rollback.
Second question: the idea of the unit of work, if I have this right, is to coordinate updates that might conflict. Using my example of Item and SubItem (an Item has a number of SubItems), the UnitOfWork coordinates things so that the Item is written first, allowing the SubItems to be written? Now I seem to need the unit of work to know about the repositories, which seems wrong.
Thanks.
The way I structured my repository was to have the UnitOfWork simply be a "token", spawned by a BeginUnitOfWork() method on the repo, which then has to be passed to pretty much any other method on the repo that makes DB calls. Conceptually, the only thing the token has to know how to do is be disposed; when that happens, the NHibernate session associated with that UoW is closed. It does this by being given a delegate to a protected method on the repo, which it calls back in its Dispose method. What this does for me is completely abstract the actual data-access mechanism: I can implement the same pattern regardless of the back end, and users of the pattern cannot hack the UnitOfWork to get at the actual data-access mechanism.
YMMV; it does require classes that need to perform DB operations to depend on the repository as well as the unit of work. You could use additional delegates to expose methods on the UnitOfWork itself, which would allow it to be the only dependency.
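A rough sketch of the token pattern described above; the names and the callback wiring are illustrative, not the poster's actual code:

public sealed class UnitOfWork : IDisposable
{
    private readonly Action _onDispose;

    // The token only knows how to be disposed; the delegate hides
    // the actual data-access mechanism from callers.
    public UnitOfWork(Action onDispose)
    {
        _onDispose = onDispose;
    }

    public void Dispose()
    {
        _onDispose();
    }
}

public class Repository
{
    private readonly ISessionFactory _sessionFactory; // NHibernate, per the answer
    private ISession _session;

    public Repository(ISessionFactory sessionFactory)
    {
        _sessionFactory = sessionFactory;
    }

    public UnitOfWork BeginUnitOfWork()
    {
        _session = _sessionFactory.OpenSession();
        // Disposing the token closes the session via the delegate.
        return new UnitOfWork(() => _session.Close());
    }

    public Book GetBook(UnitOfWork uow, int id)
    {
        // Requiring the token on every data-access method ensures
        // callers always work inside an open unit of work.
        return _session.Get<Book>(id);
    }
}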