I'm getting stuck into NHibernate at the moment and I'm trying to work out how best to correctly design my Repositories whilst correctly managing the lifetime of the Session.
From the examples I've seen, it seems common practice to inject an ISession into each of the repositories, as follows:
public class SomeRepository : IRepository
{
    private readonly ISession _session;

    public SomeRepository(ISession session)
    {
        _session = session;
    }

    public IList<T> LoadSomeData<T>()
    {
        return _session.Query<T>().Where(...).ToList();
    }
}
So this is fine, but I can see a couple of cases where there will be problems:
1) There may not be 1 Session for the lifetime of the app.
2) When accessing NHibernate we should always wrap our calls in a Transaction - if the transaction faults then we must Rollback the transaction and close the Session.
In both these cases the Repository would be defunct as it references a closed Session.
My particular application is a long-running process which will only occasionally call into NHibernate, so I would expect a high turnover of Sessions rather than keeping one Session open for the lifetime of the app.
I am therefore wondering what the established pattern is for dealing with this situation. I see several potential solutions, but it's hard to tell which is best practice:
1) Rebuild the repositories (and create a new Session) every time the process needs to do some DB work.
2) Inject the Repositories with a SessionFactory. The Repository then exposes this SessionFactory, and the consumer of each repository opens a new Session at the same time as starting a transaction.
3) Create a UnitOfWork class which is injected into the Repositories and is responsible for managing Session lifetimes and Transactions. Each time a call into a Repository is made, it calls into the UnitOfWork, which creates a brand new Session, and the call is executed within a Transaction. The repositories therefore have no knowledge of the Sessions/Transactions.
To me, 3) seems like the best solution, and I've seen a few examples of this online, but in all of the examples they only create 1 session within UnitOfWork, and they don't recreate it if the transaction is rolled back.
In addition, a limitation with 3) is that the UnitOfWork is tied to a particular repository, and so you couldn't have a transaction which made calls into different Repositories.
Hope that makes sense and would appreciate any guidance.
Thanks
Actually, the Session in NHibernate can itself be used as a Unit of Work. But if you want to use this pattern explicitly, here's an implementation: http://nhibernate.info/doc/patternsandpractices/nhibernate-and-the-unit-of-work-pattern.html.
That said, the best solution in your list is 1): create the session and the repositories every time you need to do some work against the DB.
Using the UnitOfWork, it will look like this:
using (var uow = new UnitOfWork())
{
    var rep1 = new SomeRepository1(uow);
    rep1.DoSomeJob();
    var rep2 = new SomeRepository2(uow);
    rep2.DoSomeOtherJob();
}
Using the native ISession:
using (var session = SessionFactory.OpenSession())
using (var tr = session.BeginTransaction())
{
    var rep1 = new SomeRepository1(session);
    rep1.DoSomeJob();
    var rep2 = new SomeRepository2(session);
    rep2.DoSomeOtherJob();
    tr.Commit();
}
Also, you can use the power of DI/IoC containers to manage session lifetimes. If you're interested in that last option, it could look something like the sketch below.
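For instance (a minimal sketch, assuming Autofac and a per-lifetime-scope session; the registrations are illustrative, not taken from your code), the container can own both the ISessionFactory and the short-lived ISessions:

var builder = new ContainerBuilder();

// The session factory is expensive to build, so keep a single instance
// for the lifetime of the application.
builder.Register(c => new Configuration().Configure().BuildSessionFactory())
       .As<ISessionFactory>()
       .SingleInstance();

// Each lifetime scope (e.g. one web request, or one explicit scope in a
// long-running process) gets its own session, which Autofac disposes
// when the scope ends.
builder.Register(c => c.Resolve<ISessionFactory>().OpenSession())
       .As<ISession>()
       .InstancePerLifetimeScope();

Repositories then keep taking ISession in their constructors, as in your first snippet, and never manage its lifetime themselves.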
Apparently (and quite possibly) there's a flaw in my current UnitOfWork implementation, because I get connection errors when making many calls at once.
Exception:
The underlying provider failed on Open.
Inner Exception:
The connection was not closed. The connection's current state is
connecting.
This results in a HTTP 500 response on the client side.
UnitOfWork implementation
public class ScopedUnitOfWork : IUnitOfWork
{
    public Entities Context { get; set; }
    public UnitOfWorkState State { get; set; }

    public ScopedUnitOfWork(IEnvironmentInformationProvider environmentInformationProvider)
    {
        this.Context = new Entities(environmentInformationProvider.ConnectionString);
        this.State = UnitOfWorkState.Initialized;
    }

    public UowScope GetScope()
    {
        this.State = UnitOfWorkState.Working;
        return new UowScope(this);
    }

    public SaveResult Save()
    {
        if (this.State != UnitOfWorkState.Working)
            throw new InvalidOperationException("Not allowed to save out of Scope. Request an UowScope instance by calling method GetScope().");

        this.Context.SaveChanges();
        this.State = UnitOfWorkState.Finished;

        return new SaveResult(ResultCodes.Ok);
    }
}
Working on a single UowScope would solve the issue, but that's not possible given the current circumstances, because each request is completely separate. De facto, each request IS using a UowScope, but apparently things go wrong when the UoW receives many calls at once.
The UoW is injected through Unity IoC, so I suppose it's a singleton in effect.
The question
Is there a way to adapt the UoW so that separate high-frequency requests are not an issue?
Preferably I'd solve this server-side, not client-side. Any tips? Thanks!
Disclaimer
I don't claim I fully understand UoW, so my implementation may need improvement, be gentle :). Any improvements on that are certainly welcome!
UPDATE
I -know- the EF Context is a UoW; I use mine at the Domain level to enable transactional processing of data that is functionally related. It's also by customer demand, so I have no choice.
The issue you have is that the unit of work object is effectively a singleton, as your IoC framework is keeping it around for the duration of your application. This means that your context is also being kept as a singleton, as it's inside the UoW. So you will almost certainly get multiple concurrent calls to your context, which will throw exceptions.
However, I think you are misusing the concept of what a UoW is supposed to do. A UoW is there to provide a container for a group of transactions. For example, let's say you have an eCommerce platform. When you create an order, you will insert a row in the orders table, then as part of the same transaction you will also insert rows into the order items table, update a user's loyalty points, etc. So you should do all this inside a single unit of work, commit it, then destroy it. Let the IoC framework (Unity in this case) create your unit of work for each session.
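For instance (a minimal sketch, assuming the Unity bootstrapper for ASP.NET MVC, which provides PerRequestLifetimeManager; the registration below is illustrative), the fix could be as simple as:

// One ScopedUnitOfWork (and hence one EF context) per HTTP request,
// rather than a single one for the whole application lifetime.
container.RegisterType<IUnitOfWork, ScopedUnitOfWork>(new PerRequestLifetimeManager());

With a per-request lifetime, two concurrent requests can no longer race on the same context, which is exactly the failure mode behind the exception above.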
In the Business Logic Layer of an Entity Framework-based application, all methods acting on the DB should (as I've heard) be wrapped in:
using (FunkyContainer fc = new FunkyContainer())
{
    // do the thing
    fc.SaveChanges();
}
Of course, for my own convenience, those methods often use one another, for the sake of not repeating myself. The risk I see here is the following:
public void MainMethod()
{
    using (FunkyContainer fc = new FunkyContainer())
    {
        // perform some operations on fc
        // modify a few objects downloaded from DB
        int x = HelperMethod();
        // act on fc again
        fc.SaveChanges();
    }
}

public int HelperMethod()
{
    using (FunkyContainer fc2 = new FunkyContainer())
    {
        // act on fc2 and then:
        fc2.SaveChanges();
        return 42;
    }
}
It doesn't look good to me that the container fc2 is created while fc is still open and has not been saved yet. So this leads to my question number one:
Is having multiple containers open at the same time and acting on them carelessly an acceptable practice?
I came to the conclusion that I could write a simple guard-styled object like this:
public sealed class FunkyContainerAccessGuard : IDisposable
{
    private static FunkyContainer GlobalContainer { get; set; }

    public FunkyContainer Container // simply a non-static adapter for syntactic convenience
    {
        get { return GlobalContainer; }
    }

    private bool IsRootOfHierarchy { get; set; }

    public FunkyContainerAccessGuard()
    {
        IsRootOfHierarchy = (GlobalContainer == null);
        if (IsRootOfHierarchy)
            GlobalContainer = new FunkyContainer();
    }

    public void Dispose()
    {
        if (IsRootOfHierarchy)
        {
            GlobalContainer.Dispose();
            GlobalContainer = null;
        }
    }
}
Now the usage would be as follows:
public void MainMethod()
{
    using (FunkyContainerAccessGuard guard = new FunkyContainerAccessGuard())
    {
        FunkyContainer fc = guard.Container;
        // do anything with fc
        int x = HelperMethod();
        fc.SaveChanges();
    }
}

public int HelperMethod()
{
    using (FunkyContainerAccessGuard guard = new FunkyContainerAccessGuard())
    {
        FunkyContainer fc2 = guard.Container;
        // do anything with fc2
        fc2.SaveChanges();
        return 42;
    }
}
When HelperMethod is called by MainMethod, the GlobalContainer has already been created and is used by both methods, so there is no conflict. Moreover, HelperMethod can also be used separately, in which case it creates its own container.
However, this seems like massive overkill to me; so:
Has this problem been already solved in form of some class (IoC?) or at least some nice design pattern?
Thank you.
Is having multiple containers open at the same time and acting on them carelessly an acceptable practice?
Generally this is perfectly acceptable, and sometimes even necessary, but you have to be cautious with it. Having multiple containers at the same time is especially handy when doing multithreaded operations: because of how database access generally works, each thread should have its own DbContext that is not shared with other threads.

One downside of using multiple DbContexts at the same time is that each of them uses a separate DB connection, and connections are sometimes limited, which may lead to the application occasionally being unable to connect to the database. Another downside is that an entity loaded by one DbContext cannot be used with an entity loaded by another DbContext. In your example HelperMethod returns a primitive type, so this is perfectly safe; but if it returned an entity object that MainMethod then assigned, for instance, to some navigation property of an entity created by MainMethod's DbContext, you would get an exception. To overcome this, MainMethod would have to use the Id of the entity returned by HelperMethod to retrieve that entity once more, this time with the fc context.

On the other hand, there is an advantage to using multiple contexts: if one context gets into trouble, for instance because it tried to save something that violated an index constraint, then all subsequent attempts to save changes on it will fail with the same exception, as the faulty change will still be pending. If you use multiple DbContexts and one fails, the others operate independently; this is also why DbContexts should not live long. So generally I would say the best usage rules would be:
Each thread should use a separate DbContext
All methods that executes on the same thread should share the same DbContext
Of course, the above applies only if the job to be done is short; a DbContext should not live long. The best example is web applications: each server request is handled by a separate thread, and the operations that generate the response generally do not take long. In such a case, all methods executed to generate one response should, for convenience, share the same DbContext, but each request should be served by a separate DbContext.
Has this problem been already solved in form of some class (IoC?) or at least some nice design pattern?
What you need to ensure is that your DbContext class is a singleton per thread, with each thread having its own instance of that class. In my opinion, the best way to achieve this is with IoC. For instance, with Autofac in web applications, I register my DbContext with the following rule:
builder
.RegisterType<MyDbContext>()
.InstancePerHttpRequest();
This way Autofac generates one DbContext per request and shares the existing instance within the thread serving that request. You do not need to worry about disposing your DbContext here; the IoC container will do it when the request is over.
Working with multiple connections at the same time is not the right approach most of the time, because:
You can get distributed deadlocks that SQL Server cannot resolve.
You might not see data that was previously written but not yet committed.
You can't share entities across context boundaries (here: methods).
More resource usage.
No ability to transact across context boundaries (here: methods).
These are very severe disadvantages. Usually, the best model is to have one context, connection and transaction for the request that the app is processing (HTTP or WCF request). That's very simple to set up and avoids a lot of issues.
EF is supposed to be used as a live object model. Do not cripple it by reducing it to CRUD.
static FunkyContainer GlobalContainer
That does not work. You shouldn't share a context across requests. Super dangerous. Consider storing a context in HttpContext.Items or whatever is the per-request store in your app.
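A minimal sketch of that suggestion (assuming ASP.NET's System.Web; the class, key, and member names are made up for illustration):

public static class RequestContainer
{
    private const string Key = "FunkyContainer";

    // One container per HTTP request, created lazily and stored in
    // HttpContext.Items, which is private to the current request.
    public static FunkyContainer Current
    {
        get
        {
            var items = System.Web.HttpContext.Current.Items;
            if (items[Key] == null)
                items[Key] = new FunkyContainer();
            return (FunkyContainer)items[Key];
        }
    }

    // Call this from Application_EndRequest in Global.asax so the
    // container is disposed when the request finishes.
    public static void DisposeCurrent()
    {
        var container = System.Web.HttpContext.Current.Items[Key] as FunkyContainer;
        if (container != null)
            container.Dispose();
    }
}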
A Repository as defined by Martin Fowler is supposed to act like an in-memory domain object collection. This allows the application (in theory) to be ignorant of the persistence mechanism.
So under normal circumstances you'd have something like this:
public void MyBusinessLogicMethod()
{
    ...
    IRepository<Customer> repository = myIocContainer.Resolve<IRepository<Customer>>();
    repository.Add(customer);
}
If, however, you have a series of inserts/updates that you wish to perform and want a mechanism to roll them all back should any of them fail, you'd need some sort of UnitOfWork implementation:
public void MyBusinessLogicMethod()
{
    ...
    using (IUnitOfWork uow = new UnitOfWork())
    {
        IRepository<Customer> customerRepo = myIocContainer.Resolve<IRepository<Customer>>(uow);
        customerRepo.Add(customer);

        IRepository<Order> orderRepo = myIocContainer.Resolve<IRepository<Order>>(uow);
        orderRepo.Add(order);

        IRepository<Invoice> invoiceRepo = myIocContainer.Resolve<IRepository<Invoice>>(uow);
        invoiceRepo.Update(invoice);

        uow.Save();
    }
}
However, if you had some bizarre requirement that your Customer Repository was acting against a SQL Server database, your Order Repository against a MySQL database, and your Invoice Repository against a PostgreSQL database, how would you go about handling the Transactions for each database session?
Now this is a bit of a contrived example for sure, but every Repository implementation I've come across seems to know at some level that it's really a particular database and ORM being used.
Imagine another scenario where you have two Repositories, where one is going to a database and the other is calling a web service. The whole point of Repositories is that the application shouldn't care what data source it is going to, but without jumping through some massive hoops I don't see how these scenarios can be accounted for without the application knowing at some level "FYI, this is going to data source x, so we'd better treat it differently".
Is there a pattern or implementation that addresses this issue? It seems to me that if you are using database x and ORM y for your entire application then Repositories work splendidly, but if technical debt forces that course to deviate, the benefits of Repositories are greatly reduced.
In your UnitOfWork, as suggested, you should use a TransactionScope transaction.
In your case it escalates to MSDTC and ensures that all enlisted operations are executed correctly before the commit, or otherwise rolled back.
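A minimal sketch of a UnitOfWork built on System.Transactions (the interface shape follows the usage earlier in this thread; everything else is an assumption):

using System;
using System.Transactions;

public class UnitOfWork : IUnitOfWork, IDisposable
{
    private readonly TransactionScope _scope;

    public UnitOfWork()
    {
        // Any connection opened while this scope is active enlists in the
        // same transaction; connections to multiple servers escalate it
        // to a distributed (MSDTC) transaction.
        _scope = new TransactionScope();
    }

    public void Save()
    {
        // Mark the transaction as complete; the actual commit happens
        // when the scope is disposed.
        _scope.Complete();
    }

    public void Dispose()
    {
        // If Save() was never called, disposing rolls everything back.
        _scope.Dispose();
    }
}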
I have a situation where I need to retrieve data from a query which executes for almost half a minute and bring it to a web page. (There is no way to reduce this time because the maximum amount of optimization has been performed on it)
I use a four-layer architecture along with Entity Framework (EF, Data Access Layer, Biz Logic Layer, UI) for my application.
I'm trying to use the singleton pattern when an instance of the DAL is created (the DAL in turn retrieves data from the database), so that I can re-use this instance and no additional instances will be created within the same session.
How do I go about setting the session state and checking the availability of the instance in the State Server?
private static Singleton instance; // backing field for the null check below

public static Singleton getInstance()
{
    if (instance == null)
        instance = new Singleton();
    return instance;
}
What should reside within the if block? What condition should I check for in the if block? I'm really unsure as to what I must do.
PS: This session must have a timeout of 5 mins. I hear this can be specified in the Web.config file. Is it true?
To be honest, you should rather use the Entity Framework context and create it every time you need access to the database, i.e. in each method. It is optimized to be used that way, and connection pooling will make sure there is no penalty in recreating the EF context each time. This is the best practice.
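In other words (a minimal sketch; MyEntities, ReportRows, and the query are illustrative):

public IList<ReportRow> GetReportData()
{
    // A fresh context per method call: construction is cheap, and the
    // pooled connection is released when the using block ends.
    using (var context = new MyEntities())
    {
        return context.ReportRows.Where(r => r.IsActive).ToList();
    }
}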
But your DAL might be more than just simple DB access. If you want to have it as a singleton, separate for each session, you must create the instance on the first request, store it in the Session, and check whether it's there before using it. With thread safety, the code could look like this:
class DALClass
{
    private static object instanceLock = new object();

    public static DALClass Instance
    {
        get
        {
            // Session state is not reachable from a static member directly,
            // so go through HttpContext.Current (requires System.Web).
            var session = HttpContext.Current.Session;
            if (session["DALInstance"] == null)
            {
                lock (instanceLock)
                {
                    if (session["DALInstance"] == null)
                    {
                        session["DALInstance"] = new DALClass();
                    }
                }
            }
            return (DALClass)session["DALInstance"];
        }
    }
}
It sounds to me like you have a well-defined architecture which would suit dependency injection. Using DI, you could just get your IoC container to return you a singleton object or a transient one. However, be very careful using singletons in a web environment, as they often cause more trouble than they are worth.
If the query you are running contains user-specific data, then I would probably place the results of that query into session within the code which composes the UI part of your application; if you are using a pattern like MVC, that would be in the controller, or in the presenter with MVP.
If these patterns aren't in use, then you could consider placing the information into session inside the business layer, but only if you wrap up the session and inject that dependency into your business object, e.g. something like "IUserSession". The business project should not contain a reference to "System.Web" or anything like that.
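A minimal sketch of what such a wrapper might look like (the interface and implementation names are made up for illustration):

// The business layer depends only on this interface...
public interface IUserSession
{
    T Get<T>(string key);
    void Set<T>(string key, T value);
}

// ...while the web project supplies an implementation backed by
// ASP.NET session state, keeping System.Web out of the business project.
public class AspNetUserSession : IUserSession
{
    public T Get<T>(string key)
    {
        return (T)System.Web.HttpContext.Current.Session[key];
    }

    public void Set<T>(string key, T value)
    {
        System.Web.HttpContext.Current.Session[key] = value;
    }
}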
I'm looking into implementing the IRepository pattern using NHibernate, and I have a question that I've not been able to answer by searching the net.
Assume I have 3 Repositories: PersonRepository, PersonAddressRepository, and PersonAccountRepository. Now assume that business logic dictates that there be a "Deactivate Person" process that calls PersonRepository.Deactivate(), PersonAddressRepository.Deactivate(), and PersonAccountRepository.Deactivate().
I want to be able to do something along the lines of..
using (ITransaction transaction = session.BeginTransaction())
{
    session.Update(Person);
    session.Update(PersonAddress);
    session.Update(PersonAccount);
    transaction.Commit(); // disposing without committing rolls back
}
So that if any of those updates fail, the entire process rolls back within the database. Now, at the moment my understanding of NHibernate is that you can only create a Session per object, so...
var cfg = new Configuration();
cfg.Configure();
cfg.AddAssembly(typeof(Person).Assembly);

ISessionFactory sessionFactory = cfg.BuildSessionFactory();

using (ISession session = sessionFactory.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        session.Save(Person);
    }
}
Is this correct, or am I mistaken? What are the best practices for Transactions in NHibernate with regard to multi-table updates?
Thanks in advance.
You should not create transactions in the repositories or anywhere else "below". Transactions are defined by the application logic. This is one of the most common mistakes I see in transaction handling.
I wrote a transaction service which manages the transactions:
using (TransactionService.CreateTransactionScope())
{
    repositoryA.DoX();
    repositoryB.DoY();
    TransactionService.Commit();
}
The repository gets the session, with an open transaction, from the service:
TransactionService.Session.CreateQuery("...");
Depending on your environment, you may need to make it a bit more complicated. For instance, the session may not be visible to the business logic and should be put behind another interface, etc.
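To make the shape of the service concrete, here is a minimal sketch of one way it could be implemented (the per-thread storage and every member beyond those used above are assumptions):

using System;
using NHibernate;

public static class TransactionService
{
    // One session and transaction per thread; a web app would typically
    // store these per request instead.
    [ThreadStatic] private static ISession _session;
    [ThreadStatic] private static ITransaction _transaction;

    // Assumed to be assigned once at application startup.
    public static ISessionFactory SessionFactory { get; set; }

    public static ISession Session
    {
        get { return _session; }
    }

    public static IDisposable CreateTransactionScope()
    {
        _session = SessionFactory.OpenSession();
        _transaction = _session.BeginTransaction();
        return new Scope();
    }

    public static void Commit()
    {
        _transaction.Commit();
    }

    private sealed class Scope : IDisposable
    {
        public void Dispose()
        {
            // If Commit() was never reached, this rolls the work back.
            if (_transaction != null && _transaction.IsActive)
                _transaction.Rollback();
            _session.Dispose();
            _session = null;
            _transaction = null;
        }
    }
}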
I thought NHibernate understands the System.Transactions.TransactionScope class. Why wouldn't you use that?
One thing you can do (this is how I do it right now) is pass the ISession instance that should be used to your repository instances.
What I will do, in the future, is this:
I have a UnitOfWork class, which is quite generic and is a wrapper around NHibernate's ISession object. This UnitOfWork class contains no 'application' or domain-specific methods.
In a project which uses NHibernate (and my UnitOfWork wrapper), I'll create a set of extension methods on the UnitOfWork class, that look like this:
public static class UnitOfWorkExtension
{
    public static IPersonRepository GetPersonRepository(this UnitOfWork uow)
    {
        return new PersonRepository(uow);
    }

    public static IAccountRepository GetAccountRepository(this UnitOfWork uow)
    {
        return new AccountRepository(uow);
    }
}
Then, this would allow me to do this, for instance:
using (var uow = unitOfWorkFactory.CreateUnitOfWork())
{
    var person = uow.GetPersonRepository().GetPerson(1);
    var accounts = uow.GetAccountRepository().GetAccountsForPerson(person);
}
But, looking at your example, I'm wondering whether you should have a repository for 'PersonAddress' and 'PersonAccount'.
In my humble opinion, Person is an 'aggregate root' which, in your example, consists of a PersonAddress and a PersonAccount, and there should be a single PersonRepository which handles the Person aggregate root, including the PersonAddress and PersonAccount objects - which are, as I see it, not entities but value objects.
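To illustrate the idea (a sketch only; the member names are made up, and cascading to the child objects depends on your mappings):

// One repository per aggregate root: callers never touch the address
// or account data directly, so "Deactivate Person" becomes a single
// call inside a single transaction.
public interface IPersonRepository
{
    Person GetById(int id);
    void Save(Person person);       // persists PersonAddress/PersonAccount via cascades
    void Deactivate(Person person); // updates the whole aggregate
}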