As MSDN confirms, in EF 5 and on, the DbContext class is "a combination of the Unit-Of-Work and Repository patterns." In the web applications I build, I tend to implement the Repository and Unit-Of-Work patterns on top of the existing DbContext class. Lately, like many others out there, I've found that this is overkill in my scenario. I am not worried about the underlying storage mechanism ever changing from SQL Server, and while I appreciate the benefits that unit testing would bring, I still have a lot to learn about it before actually implementing it in a live application.
Thus, my solution is to use the DbContext class directly as the Repository and Unit-Of-Work, and then use StructureMap to inject one instance per request to individual service classes, allowing them to do work on the context. Then in my controllers, I inject each service I need and call the methods necessary by each action accordingly. Also, each request is wrapped in a transaction created off of the DbContext at the beginning of the request and either rolled back if any type of exception occurred (whether it be an EF error or application error) or committed if all is well. A sample code scenario is below.
This sample uses the Territory and Shipper tables from the Northwind sample database. In this sample admin controller, a territory and a shipper are being added at the same time.
Controller
public class AdminController : Controller
{
    private readonly TerritoryService _territoryService;
    private readonly ShipperService _shipperService;

    public AdminController(TerritoryService territoryService, ShipperService shipperService)
    {
        _territoryService = territoryService;
        _shipperService = shipperService;
    }

    // all other actions omitted...

    [HttpPost]
    public ActionResult Insert(AdminInsertViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        var newTerritory = // omitted code to map from viewModel
        var newShipper = // omitted code to map from viewModel

        _territoryService.Insert(newTerritory);
        _shipperService.Insert(newShipper);

        return RedirectToAction("SomeAction");
    }
}
Territory Service
public class TerritoryService
{
    private readonly NorthwindDbContext _dbContext;

    public TerritoryService(NorthwindDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Insert(Territory territory)
    {
        _dbContext.Territories.Add(territory);
    }
}
Shipper Service
public class ShipperService
{
    private readonly NorthwindDbContext _dbContext;

    public ShipperService(NorthwindDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Insert(Shipper shipper)
    {
        _dbContext.Shippers.Add(shipper);
    }
}
Creation of Transaction on Application_BeginRequest()
// _dbContext is an injected instance per request just like in services
HttpContext.Items["_Transaction"] = _dbContext.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted);
Rollback or Commit of Transaction on Application_EndRequest()
var transaction = (DbContextTransaction)HttpContext.Items["_Transaction"];

if (HttpContext.Items["_Error"] != null) // populated on Application_Error() in global
{
    transaction.Rollback();
}
else
{
    transaction.Commit();
}
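For reference, the "_Error" flag used above is set in Application_Error(); a minimal sketch of what that hook could look like in Global.asax (an assumption, matching the key name used above):

protected void Application_Error(object sender, EventArgs e)
{
    // flag the request as failed so Application_EndRequest rolls the transaction back
    HttpContext.Current.Items["_Error"] = Server.GetLastError();
}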
Now this all seems to work well, but the only question I have now is: where is it best to call SaveChanges() on the DbContext? Should I call it in each service-layer method?
public class TerritoryService
{
    // omitted code minus changes to Insert() method below
    public void Insert(Territory territory)
    {
        _dbContext.Territories.Add(territory);
        _dbContext.SaveChanges(); // <== Call it here?
    }
}

public class ShipperService
{
    // omitted code minus changes to Insert() method below
    public void Insert(Shipper shipper)
    {
        _dbContext.Shippers.Add(shipper);
        _dbContext.SaveChanges(); // <== Call it here?
    }
}
Or should I leave the service class Insert() methods as is and just call SaveChanges() right before the transaction is committed?
var transaction = (DbContextTransaction)HttpContext.Items["_Transaction"];

// HttpContext.Items["_Error"] populated on Application_Error() in global
if (HttpContext.Items["_Error"] != null)
{
    transaction.Rollback();
}
else
{
    // _dbContext is an injected instance per request just like in services
    _dbContext.SaveChanges(); // <== Call it here?
    transaction.Commit();
}
Is either way okay? Is it safe to call SaveChanges() more than once since it is wrapped in a transaction? Are there any issues I may run into by doing so? Or is it best to call SaveChanges() just once, right before the transaction is actually committed? I personally would rather call it once at the end, right before the transaction is committed, but I want to be sure I am not missing any gotchas with transactions or doing something wrong. If you read this far, thanks for taking the time to help; I know this was a long question.
You would call SaveChanges() when it's time to commit a single, atomic persistence operation. Since your services don't really know about each other or depend on each other, internally they have no way to guarantee that one or the other is going to commit the changes. So in this setup I imagine they would each have to commit their own changes.
This of course leads to the problem that the overall operation is no longer atomic, even though each individual save is. Consider this scenario:
_territoryService.Insert(newTerritory); // success
_shipperService.Insert(newShipper); // error
In this case you've partially committed the data, leaving the system in a bit of an unknown state.
Which object in this scenario is in control over the atomicity of the operation? In web applications I think that's usually the controller. The operation, after all, is the request made by the user. In most scenarios (there are exceptions, of course) I imagine one would expect the entire request to succeed or fail.
If this is the case and your atomicity belongs at the request level then what I would recommend is getting the DbContext from the IoC container at the controller level and passing it to the services. (They already require it on their constructors, so not a big change there.) Those services can operate on the context, but never commit the context. The consuming code (the controller) can then commit it (or roll it back, or abandon it, etc.) once all of the services have completed their operations.
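As a minimal sketch of that shape (reusing the types from the question; the mapping helpers are hypothetical), the services add entities but never commit, and the controller commits once:

public class AdminController : Controller
{
    private readonly NorthwindDbContext _dbContext;
    private readonly TerritoryService _territoryService;
    private readonly ShipperService _shipperService;

    public AdminController(NorthwindDbContext dbContext,
                           TerritoryService territoryService,
                           ShipperService shipperService)
    {
        _dbContext = dbContext; // the same per-request instance the services received
        _territoryService = territoryService;
        _shipperService = shipperService;
    }

    [HttpPost]
    public ActionResult Insert(AdminInsertViewModel viewModel)
    {
        if (!ModelState.IsValid)
            return View(viewModel);

        _territoryService.Insert(MapTerritory(viewModel)); // hypothetical mapping helper
        _shipperService.Insert(MapShipper(viewModel));     // hypothetical mapping helper

        // the controller owns the atomicity of the request
        _dbContext.SaveChanges();
        return RedirectToAction("SomeAction");
    }
}

Whether the context is injected into the constructor or resolved from the container inside the action is a wiring detail; the point is that commit authority lives in exactly one place.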
While different business objects, services, etc. should each internally maintain their own logic, I find that usually the objects which own the atomicity of operations are at the application level, governed by the business processes being invoked by the users.
You're basically creating a repository here, rather than a service.
To answer your question, you could just ask yourself another question: "How will I be using this functionality?"
Say you're adding a couple of records, removing some records and updating some records, calling your various methods about 30 times in total. If you call SaveChanges() 30 times, you make 30 round-trips to the database, causing a lot of traffic and overhead which COULD be avoided.
I usually recommend doing as few database round-trips as possible and limiting the number of calls to SaveChanges(). Therefore I recommend that you add a Save() method to your repository/service layer and call it from the layer which calls your repository/service layer.
Unless it is absolutely required to save something before doing something else, you shouldn't call it 30 times; you should call it one single time. If saving something first really is necessary, you can still call SaveChanges() at that exact moment of need, from the layer calling your repository/service layer.
Summary/TL;DR: Make a Save() method in your repository/service layer instead of calling SaveChanges() in each repository/service method, as sketched below. This will boost your performance and spare you the unnecessary overhead.
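A sketch of that recommendation, extending the TerritoryService from the question with the suggested Save() method:

public class TerritoryService
{
    private readonly NorthwindDbContext _dbContext;

    public TerritoryService(NorthwindDbContext dbContext)
    {
        _dbContext = dbContext;
    }

    public void Insert(Territory territory)
    {
        _dbContext.Territories.Add(territory); // queued only; no round-trip yet
    }

    public void Save()
    {
        _dbContext.SaveChanges(); // one round-trip for all pending changes
    }
}

// Calling layer: many Insert() calls, one Save()
// _territoryService.Insert(t1);
// _territoryService.Insert(t2);
// _territoryService.Save();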
Related
Apparently (and quite possibly) there's a flaw in my current UnitOfWork implementation, because I get connection errors when making many calls at once.
Exception:
The underlying provider failed on Open.
Inner Exception:
The connection was not closed. The connection's current state is connecting.
This results in a HTTP 500 response on the client side.
UnitOfWork implementation
public class ScopedUnitOfWork : IUnitOfWork
{
    public Entities Context { get; set; }
    public UnitOfWorkState State { get; set; }

    public ScopedUnitOfWork(IEnvironmentInformationProvider environmentInformationProvider)
    {
        this.Context = new Entities(environmentInformationProvider.ConnectionString);
        this.State = UnitOfWorkState.Initialized;
    }

    public UowScope GetScope()
    {
        this.State = UnitOfWorkState.Working;
        return new UowScope(this);
    }

    public SaveResult Save()
    {
        if (this.State != UnitOfWorkState.Working)
            throw new InvalidOperationException("Not allowed to save out of Scope. Request an UowScope instance by calling method GetScope().");

        this.Context.SaveChanges();
        this.State = UnitOfWorkState.Finished;

        return new SaveResult(ResultCodes.Ok);
    }
}
Working on a single UowScope would solve the issue, but that's not possible in the current circumstances, because each request is completely separate. De facto, each request IS using a UowScope, but apparently it goes wrong when the UoW receives many calls at once.
The UoW is injected through Unity IoC, so I suppose it's a singleton in effect.
The question
Is there a way to adapt the UoW so that separate high-frequency requests are not an issue?
Preferably I'd solve this server-side, not client-side. Any tips? Thanks!
Disclaimer
I don't claim I fully understand UoW, so my implementation may need improvement; be gentle :). Any improvements there are certainly welcome!
UPDATE
I -know- the EF context is a UoW; I use mine at the domain level to enable transactional processing of data that is functionally related. It's also a customer demand, so I have no choice.
The issue you have is that the unit-of-work object is effectively a singleton, as your IoC framework keeps it around for the duration of your application. This means that your context is also being kept as a singleton, since it lives inside the UoW. So you will almost certainly get multiple concurrent calls to your context, which will throw exceptions.
However, I think you are misusing the concept of what a UoW is supposed to do. A UoW is there to provide a container for a group of transactions. For example, let's say you have an eCommerce platform. When you create an order, you insert a row in the orders table, then as part of the same transaction you also insert rows into the order items table, update a user's loyalty points, etc. You should do all of this inside a single unit of work, commit it, then destroy it. Let the IoC framework (Unity in this case) create your unit of work for each session.
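For example, with the Unity bootstrapper for ASP.NET MVC (which supplies a PerRequestLifetimeManager), the registration might look like this sketch, so each request gets its own UoW and context:

// requires the Unity integration package for ASP.NET MVC,
// which provides PerRequestLifetimeManager
container.RegisterType<IUnitOfWork, ScopedUnitOfWork>(new PerRequestLifetimeManager());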
I need to update a few tables in my DB in a single transaction and I read that using DbContext.SaveChanges should be the way to do so.
However I also read that the lifetime of the DbContext should be as short as possible because it grows over time as it loads more entities.
Also I read that in order to make it thread-safe, each action should have its own DbContext.
Should I have a DbContext for each table I want to change and call SaveChanges on each DbContext? Wouldn't the last SaveChanges call override the changes of the previous calls?
What is the best way to do it? (I need this for a website)
Entity Framework is not thread-safe. An MVC controller is instantiated per request. Thus if you use one DbContext per request, you're safe as long as you don't manually spawn threads in your controller actions (which you shouldn't do anyway).
Now if you have concurrency in your application, like a reservation system where multiple users are out to access the same scarce resources that can run out (like tickets), you'll have to implement logic around that yourself. No thread safety is going to help you there anyway.
That's why you're being asked for code in comments, because explaining thread safety in general is way too broad, and probably not applicable to your situation.
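To illustrate the kind of application-level concurrency logic meant here, one common EF approach is optimistic concurrency via a rowversion column. The Ticket entity and Reserve action below are a hypothetical sketch, not code from the question:

// hypothetical entity for a reservation scenario
public class Ticket
{
    public int Id { get; set; }
    public int RemainingSeats { get; set; }

    [Timestamp] // rowversion column; EF checks it in the UPDATE's WHERE clause
    public byte[] RowVersion { get; set; }
}

public ActionResult Reserve(int ticketId)
{
    try
    {
        var ticket = db.Tickets.Find(ticketId); // db: the per-request context
        if (ticket.RemainingSeats == 0)
            return new HttpStatusCodeResult(409, "Sold out");

        ticket.RemainingSeats--;
        db.SaveChanges(); // throws if another request updated RowVersion first
        return new HttpStatusCodeResult(200);
    }
    catch (DbUpdateConcurrencyException)
    {
        // the other request won the race: reload and retry, or tell the user
        return new HttpStatusCodeResult(409, "Please try again");
    }
}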
The simple way is to have one DbContext per request. ASP.NET MVC handles the thread safety for you: each controller instance is isolated per request, so you don't have to worry about race conditions. As long as you don't create threads and simply do data transformation in the action method using a single DbContext, you will not have any problems.
Basically, DbContext does very little on its own; it just queues SQL queries to the target database, and it is the database which handles multithreading and race conditions. To protect your data, you should use transactions and add validations in your database to make sure the data is saved correctly.
public abstract class DbContextController : Controller
{
    public AppDbContext DB { get; private set; }

    public DbContextController()
    {
        DB = new AppDbContext();
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            DB.Dispose();
        }
        base.Dispose(disposing);
    }
}
If you inherit any class from DbContextController and use DB throughout the life of the controller, you will not have any problems.
public ActionResult ProcessProducts()
{
    foreach (var p in DB.Products)
    {
        p.Processed = true;
        foreach (var order in p.Orders)
        {
            order.Processed = true;
        }
    }
    DB.SaveChanges();
    return new EmptyResult();
}
However, if you use any threads like in following example,
public ActionResult ProcessProducts()
{
    Parallel.ForEach(DB.Products, p =>
    {
        p.Processed = true;
        // this fails, as the p.Orders query is fired
        // from the same DbContext on multiple threads
        foreach (var order in p.Orders)
        {
            order.Processed = true;
        }
    });
    DB.SaveChanges();
    return new EmptyResult();
}
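If the work genuinely had to be parallelized, one workaround (a sketch, not part of the answer above) is to partition the work by key and give each thread its own context:

public ActionResult ProcessProducts()
{
    List<int> productIds;
    using (var db = new AppDbContext())
    {
        productIds = db.Products.Select(p => p.Id).ToList();
    }

    Parallel.ForEach(productIds, id =>
    {
        // one context per thread, so EF is never touched concurrently
        using (var db = new AppDbContext())
        {
            var product = db.Products.Include("Orders").Single(p => p.Id == id);
            product.Processed = true;
            foreach (var order in product.Orders)
            {
                order.Processed = true;
            }
            db.SaveChanges();
        }
    });

    return new EmptyResult();
}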
In the Business Logic Layer of an Entity Framework-based application, all methods acting on the DB should (as I've heard) be wrapped in:
using (FunkyContainer fc = new FunkyContainer())
{
    // do the thing
    fc.SaveChanges();
}
Of course, for convenience, those methods often use each other, for the sake of not repeating myself. The risk I see here is the following:
public void MainMethod()
{
    using (FunkyContainer fc = new FunkyContainer())
    {
        // perform some operations on fc
        // modify a few objects downloaded from DB
        int x = HelperMethod();
        // act on fc again
        fc.SaveChanges();
    }
}

public int HelperMethod()
{
    using (FunkyContainer fc2 = new FunkyContainer())
    {
        // act on fc2 and then:
        fc2.SaveChanges();
        return 42;
    }
}
It doesn't look good to me that the container fc2 is created while fc is still open and has not been saved yet. So this leads to my question number one:
Is having multiple containers open at the same time and acting on them carelessly an acceptable practice?
I came to a conclusion, that I could write a simple guard-styled object like this:
public sealed class FunkyContainerAccessGuard : IDisposable
{
    private static FunkyContainer GlobalContainer { get; set; }

    public FunkyContainer Container // simply a non-static adapter for syntactic convenience
    {
        get { return GlobalContainer; }
    }

    private bool IsRootOfHierarchy { get; set; }

    public FunkyContainerAccessGuard()
    {
        IsRootOfHierarchy = (GlobalContainer == null);
        if (IsRootOfHierarchy)
            GlobalContainer = new FunkyContainer();
    }

    public void Dispose()
    {
        if (IsRootOfHierarchy)
        {
            GlobalContainer.Dispose();
            GlobalContainer = null;
        }
    }
}
Now the usage would be as following:
public void MainMethod()
{
    using (FunkyContainerAccessGuard guard = new FunkyContainerAccessGuard())
    {
        FunkyContainer fc = guard.Container;
        // do anything with fc
        int x = HelperMethod();
        fc.SaveChanges();
    }
}

public int HelperMethod()
{
    using (FunkyContainerAccessGuard guard = new FunkyContainerAccessGuard())
    {
        FunkyContainer fc2 = guard.Container;
        // do anything with fc2
        fc2.SaveChanges();
        return 42;
    }
}
When HelperMethod is called by MainMethod, the GlobalContainer has already been created and is used by both methods, so there is no conflict. Moreover, HelperMethod can also be used separately, in which case it creates its own container.
However, this seems like a massive overkill to me; so:
Has this problem been already solved in form of some class (IoC?) or at least some nice design pattern?
Thank you.
Is having multiple containers open at the same time and acting on them carelessly an acceptable practice?
Generally this is perfectly acceptable, and sometimes even necessary, but you have to be cautious with it. Having multiple containers at the same time is especially handy when doing multithreaded operations: because of how the DB works, each thread should generally have its own DbContext that is not shared with other threads.

The downside to using multiple DbContexts at the same time is that each of them uses a separate DB connection, and connections are sometimes limited, which may lead to the application occasionally being unable to connect to the database. Another downside is that an entity generated by one DbContext cannot be used with an entity generated by another DbContext. In your example, HelperMethod returns a primitive type, so this is perfectly safe; but if it returned an entity object which MainMethod then assigned, for instance, to some navigation property of an entity created by MainMethod's DbContext, you would get an exception. To overcome this, MainMethod would have to use the Id of the entity returned by HelperMethod to retrieve that entity once more, this time with the fc context.

On the other hand, there is an advantage to using multiple contexts: if one context runs into trouble, for instance it tried to save something that violated an index constraint, then all subsequent attempts to save changes on it will result in the same exception, as the faulty change will still be pending. If you use multiple DbContexts and one fails, the second operates independently; this is why DbContexts should not live long. So generally I would say the best usage rules would be:
Each thread should use a separate DbContext
All methods that executes on the same thread should share the same DbContext
Of course, the above applies when the job to be done is short; a DbContext should not live long. The best example is web applications: each server request is handled by a separate thread, and the operations that generate the response generally do not take long. In that case, all methods executed to generate one response should, for convenience, share the same DbContext, but each request should be served by a separate DbContext.
Has this problem been already solved in form of some class (IoC?) or at least some nice design pattern?
What you need to ensure is that your DbContext class is a singleton per thread, with each thread having its own instance of that class. In my opinion, the best way to ensure this is with IoC. For instance, with Autofac in web applications I register my DbContext with the following rule:
builder
    .RegisterType<MyDbContext>()
    .InstancePerHttpRequest();
This way the Autofac IoC container generates one DbContext per request and shares the existing instance within the request-serving thread. You do not need to worry about disposing your DbContext here; your IoC container will do that when your thread is over.
Working with multiple connections at the same time is not the right approach most of the time, because:
You can get distributed deadlocks that SQL Server cannot resolve.
You might not see data that was previously written but not yet committed.
You can't share entities across context boundaries (here: methods).
More resource usage.
No ability to transact across context boundaries (here: methods).
These are very severe disadvantages. Usually, the best model is to have one context, connection and transaction for the request that the app is processing (HTTP or WCF request). That's very simple to set up and avoids a lot of issues.
EF is supposed to be used as a live object model. Do not cripple it by reducing it to CRUD.
static FunkyContainer GlobalContainer
That does not work. You shouldn't share a context across requests; it's super dangerous. Consider storing the context in HttpContext.Items, or whatever the per-request store is in your app.
I'm struggling to understand the relationship between the Repository and Unit of Work patterns despite this kind of question being asked so many times. Essentially I still don't understand which part would save/commit data changes - the repository or the unit of work?
Since every example I've seen relates to using these in conjunction with a database/OR mapper, let's make a more interesting example: let's persist the data to the file system in data files. According to the patterns, I should be able to do this, because where the data goes is irrelevant.
So for a basic entity:
public class Account
{
    public int Id { get; set; }
    public string Name { get; set; }
}
I imagine the following interfaces would be used:
public interface IAccountRepository
{
    Account Get(int id);
    void Add(Account account);
    void Update(Account account);
    void Remove(Account account);
}

public interface IUnitOfWork
{
    void Save();
}
And I think in terms of usage it would look like this:
IUnitOfWork unitOfWork = // Create concrete implementation here
IAccountRepository repository = // Create concrete implementation here

// Add a new account
Account account = new Account() { Name = "Test" };
repository.Add(account);

// Commit changes
unitOfWork.Save();
Bearing in mind that all data will be persisted to files, where does the logic go to actually add/update/remove this data?
Does it go in the repository via the Add(), Update() and Remove() methods? It sounds logical to me to have all the code which reads/writes files in one place, but then what is the point of the IUnitOfWork interface?
Does it go in the IUnitOfWork implementation, which in this scenario would also be responsible for data change tracking? To me this would suggest that the repository can read files while the unit of work has to write them, but then the logic is split across two places.
A repository can work without a Unit of Work, so it can also have a Save method:
public interface IRepository<T>
{
    T Get(int id);
    void Add(T entity);
    void Update(T entity);
    void Remove(T entity);
    void Save();
}
A Unit of Work is used when you have multiple repositories (possibly with different data contexts). It keeps track of all changes in a transaction until you call the Commit method to persist all changes to the database (a file in this case).
So, when you call Add/Update/Remove on the repository, it only changes the status of the entity, marking it as Added, Removed or Dirty. When you call Commit, the Unit of Work loops through the repositories and performs the actual persistence:
If the repositories share the same data context, the Unit of Work can work directly with the data context for higher performance (opening and writing the file in this case).
If the repositories have different data contexts (different databases or files), the Unit of Work calls each repository's Save method inside the same TransactionScope.
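A minimal sketch of the second variant, assuming a hypothetical ISaveableRepository interface (note that plain file I/O does not automatically enlist in a TransactionScope, so true rollback of file writes needs extra work, as a later answer points out):

using System.Collections.Generic;
using System.Transactions;

public interface ISaveableRepository
{
    void Save(); // writes this repository's pending changes to its own store
}

public class UnitOfWork : IUnitOfWork
{
    private readonly List<ISaveableRepository> _repositories = new List<ISaveableRepository>();

    public void Register(ISaveableRepository repository)
    {
        _repositories.Add(repository);
    }

    public void Save() // the Commit described above
    {
        using (var scope = new TransactionScope())
        {
            foreach (var repository in _repositories)
                repository.Save();

            scope.Complete(); // nothing commits unless every Save succeeded
        }
    }
}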
I'm actually quite new to this but as nobody wiser has posted:
The code which does the CRUD happens in the repositories, as you would expect, but when Account.Add (for example) is called, all that happens is that an Account object is added to the list of things to be added later (the change is tracked).
When unitOfWork.Save() is called, the repositories are allowed to look through their own list of what has changed, or the UoW's list of what has changed (depending on how you choose to implement the pattern), and act appropriately. In your case there might be a List<Account> NewItemsToAdd field that has been tracking what to add based on calls to Add(); when the UoW says it's OK to save, the repository can actually persist the new items as files and, if successful, clear the list of new items to add.
AFAIK the point of the UoW is to manage the Save across multiple repositories (which combined are the logical unit of work that we want to commit).
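A hedged sketch of that idea for the Account example (the NewItemsToAdd list follows the description above; the file format and PersistPendingChanges name are purely illustrative):

using System;
using System.Collections.Generic;
using System.IO;

public class AccountFileRepository : IAccountRepository
{
    private readonly List<Account> _newItemsToAdd = new List<Account>();

    public Account Get(int id)
    {
        // read the file and deserialize the matching record (omitted)
        throw new NotImplementedException();
    }

    public void Add(Account account)
    {
        _newItemsToAdd.Add(account); // tracked only; nothing is written yet
    }

    public void Update(Account account) { /* track as Dirty similarly */ }
    public void Remove(Account account) { /* track as Removed similarly */ }

    // called by the unit of work when the whole operation commits
    public void PersistPendingChanges()
    {
        foreach (var account in _newItemsToAdd)
            File.AppendAllText("accounts.txt",
                account.Id + "|" + account.Name + Environment.NewLine);

        _newItemsToAdd.Clear(); // only clear once the write succeeded
    }
}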
I really like your question.
I've used the UoW/repository pattern with Entity Framework, and it shows how much EF actually does for you (the context tracks the changes until SaveChanges is finally called). To implement this design pattern in your example, you would need to write quite a bit of code to manage the changes.
Ehe, things are tricky. Imagine this scenario: one repo saves something in a DB, another on the file system, and a third to the cloud. How do you commit that?
As a guideline, the UoW should commit things; however, in the above scenario, Commit is just an illusion, as you have three very different things to update. Enter eventual consistency, which means that all things will be consistent eventually (not at the same moment, as you're used to with an RDBMS).
That UoW is called a Saga in a message-driven architecture. The point is that every piece of the saga can be executed at a different time; the saga completes only when all three repositories are updated.
You don't see this approach as often because most of the time you'll work with an RDBMS, but nowadays NoSQL is quite common, so a classic transactional approach is very limited.
So, if you're sure you will work ONLY with ONE RDBMS, use a transaction with the UoW and pass the associated connection to each repository. At the end, the UoW will call Commit.
If you know or expect you might have to work with more than one RDBMS, or with storage that doesn't support transactions, try to familiarize yourself with a message-driven architecture and with the saga concept.
Using the file system can complicate things quite a bit if you want to do it yourself.
Only write when the UoW is committed.
What you have to do is let the repositories enqueue all I/O operations in the UnitOfWork. Something like:
public class UserFileRepository : IUserRepository
{
    private readonly IEnquableUnitOfWork _uow;

    public UserFileRepository(IUnitOfWork unitOfWork)
    {
        _uow = unitOfWork as IEnquableUnitOfWork;
        if (_uow == null)
            throw new NotSupportedException("This repository only works with IEnquableUnitOfWork implementations.");
    }

    public void Add(User user)
    {
        _uow.Append(() => AppendToFile(user));
    }

    public void Update(User user)
    {
        _uow.Append(() => ReplaceInFile(user));
    }
}
By doing so you can get all changes written to the file(s) at the same time.
The reason that you don't need to do that with DB repositories is that the transaction support is built into the DB. Hence you can tell the DB to start a transaction directly and then just use it to fake a Unit Of Work.
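A sketch of what the enqueuing side could look like; IEnquableUnitOfWork is taken from the snippet above, and this implementation is an assumption:

using System;
using System.Collections.Generic;

public interface IEnquableUnitOfWork : IUnitOfWork
{
    void Append(Action ioOperation);
}

public class EnquableUnitOfWork : IEnquableUnitOfWork
{
    private readonly List<Action> _pendingOperations = new List<Action>();

    public void Append(Action ioOperation)
    {
        _pendingOperations.Add(ioOperation); // nothing touches the file system yet
    }

    public void Save()
    {
        // all queued writes happen together, only when the UoW is committed
        foreach (var operation in _pendingOperations)
            operation();

        _pendingOperations.Clear();
    }
}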
Transaction support
This will be complex, as you have to be able to roll back changes in the files and also prevent different threads/transactions from accessing the same files during simultaneous transactions.
Normally, repositories handle all reads and the unit of work handles all writes, but you can certainly handle all reads and writes using only one of the two (though if you use only the repository pattern, maintaining perhaps 10 repositories will be very tedious; worse, it may result in inconsistent reads and in writes being overwritten). The advantage of mixing both is ease of tracing status changes and ease of handling concurrency and consistency problems.
For a better understanding, you can refer to these links: Repository Pattern with Entity Framework 4.1 and Parent/Child Relationships, and
https://softwareengineering.stackexchange.com/questions/263502/unit-of-work-concurrency-how-is-it-handled
I am learning EF and have seen many examples, and during my learning I came to know about using the repository and unit of work patterns. I understand why to use a repository, but I do not really understand what a unit of work is.
Having no understanding of it is making the DAL difficult to understand. Kindly guide me.
Thanks
The DataContext or ObjectContext is the Unit of Work.
So, your DAL will save, delete and retrieve objects and your DataContext/ObjectContext will keep track of your objects, manage transactions and apply changes.
This is an example just to illustrate the idea of the solution.
using (var context = new ObjectContext()) // Unit of Work
{
    var repo = new ProductRepository(context);
    var product = repo.GetXXXXXXX(...);
    ...

    // Do whatever tracking you want to do with the object context. For instance:
    // if (error == false) {
    //     context.DetectChanges();
    //     context.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
    // }
}
And your repository will look like:
public abstract class Repository
{
    public Repository(ObjectContext context)
    {
        CurrentContext = context;
    }

    protected ObjectContext CurrentContext { get; private set; }
}

public class ProductRepository : Repository
{
    public ProductRepository(ObjectContext context) : base(context)
    {
    }

    public Product GetXXXXXX(...)
    {
        return CurrentContext... ; // Do something with the context
    }
}
Another way is to make the unit of work (object context) global:
You need to define what the scope of your unit of work will be. For this example, it will be a web request. In a real-world implementation, I'd use dependency injection for that.
public static class ContextProvider
{
    public static ObjectContext CurrentContext
    {
        get { return (ObjectContext)HttpContext.Current.Items["CurrentObjectContext"]; }
    }

    public static void OpenNew()
    {
        var context = new ObjectContext();
        HttpContext.Current.Items["CurrentObjectContext"] = context;
    }

    public static void CloseCurrent()
    {
        var context = CurrentContext;
        HttpContext.Current.Items["CurrentObjectContext"] = null;

        // Do whatever tracking you want to do with the object context. For instance:
        // if (error == false) {
        //     context.DetectChanges();
        //     context.SaveChanges(SaveOptions.AcceptAllChangesAfterSave);
        // }

        context.Dispose();
    }
}
In this example, ObjectContext is the unit of work and it will live in the current request. In your global asax you could add:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    ContextProvider.OpenNew();
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    ContextProvider.CloseCurrent();
}
In your Repositories, you just call ContextProvider.CurrentContext
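For instance, a repository method might use the ambient context like this (a sketch; the Product entity and its Id property are assumed):

public class ProductRepository
{
    public Product GetById(int id)
    {
        // grabs the ambient per-request context instead of receiving one
        var context = ContextProvider.CurrentContext;
        return context.CreateObjectSet<Product>()
                      .SingleOrDefault(p => p.Id == id);
    }
}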
One of the most common design patterns in enterprise software development is the Unit of Work. According to Martin Fowler, the Unit of Work pattern "maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems."
The Unit of Work pattern isn't necessarily something that you will explicitly build yourself, but the pattern shows up in almost every persistence tool that I'm aware of. The ITransaction interface in NHibernate, the DataContext class in LINQ to SQL, and the ObjectContext class in the Entity Framework are all examples of a Unit of Work. For that matter, the venerable DataSet can be used as a Unit of Work.
For more detailed info, please click here to read this article; it's a good one.
For a tutorial on implementing the Repository and Unit of Work patterns in an ASP.NET MVC (MVC 4 and EF 5) application (part 9 of 10), please click here.
For the EF 6 and MVC 5 tutorial, please click here.
I hope this helps; it helped me!
Unit of Work
Maintains a list of objects affected by a business transaction and coordinates the writing out of changes and the resolution of concurrency problems.
When you're pulling data in and out of a database, it's important to keep track of what you've changed; otherwise, that data won't be written back into the database. Similarly you have to insert new objects you create and remove any objects you delete.

You can change the database with each change to your object model, but this can lead to lots of very small database calls, which ends up being very slow. Furthermore it requires you to have a transaction open for the whole interaction, which is impractical if you have a business transaction that spans multiple requests. The situation is even worse if you need to keep track of the objects you've read so you can avoid inconsistent reads.

A Unit of Work keeps track of everything you do during a business transaction that can affect the database. When you're done, it figures out everything that needs to be done to alter the database as a result of your work.
http://martinfowler.com/eaaCatalog/unitOfWork.html