The context objects generated by Entity Framework are not thread-safe.
What if I use two separate entity contexts, one for each thread (and call SaveChanges() on each) - will this be thread-safe?
// this method is called from several threads concurrently
public void IncrementProperty()
{
    var context = new MyEntities();
    context.SomeObject.SomeIntProperty++;
    context.SaveChanges();
}
I believe entity framework context implements some sort of 'counter' variable which keeps track of whether the current values in the context are fresh or not.
With the code above - called from separate threads - do I still need to lock around the increment/savechanges?
If so, what is the preferred way to accomplish this in this simple scenario?
Using a single Entity Framework context from more than one thread is not thread-safe.
A separate instance of the context for each thread is thread-safe. As long as each thread of execution has its own instance of the EF context, you will be fine.
In your example, you may call that code from any number of threads concurrently and each will be happily working with its own context.
However, I would suggest implementing a 'using' block for this as follows:
// this method is called from several threads concurrently
public void IncrementProperty()
{
    using (var context = new MyEntities())
    {
        context.SomeObject.SomeIntProperty++;
        context.SaveChanges();
    }
}
You can use the factory approach: inject your DbContext as a factory instead of as an instance per se. Take a look at this: https://github.com/vany0114/EF.DbContextFactory
It's safer, and you avoid hardcoding the instance creation into your repositories.
http://elvanydev.com/EF-DbContextFactory/
There is an extension for Ninject that does this in a very easy way: just call kernel.AddDbContextFactory<YourContext>(); you also need to change your repository so it receives a Func<YourContext>.
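For illustration, a minimal sketch of a repository receiving such a factory (the repository, entity, and DbSet names are hypothetical, not part of the linked library):

public class OrderRepository
{
    private readonly Func<YourContext> _contextFactory;

    // The factory is injected instead of a shared context instance,
    // so every operation gets its own short-lived context.
    public OrderRepository(Func<YourContext> contextFactory)
        => _contextFactory = contextFactory;

    public void Add(Order order)
    {
        using (var context = _contextFactory())
        {
            context.Orders.Add(order);
            context.SaveChanges();
        }
    }
}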
I believe "SomeObject.SomeIntProperty" is static. This has nothing to do with Entity being threadsafe. If you are writing to a Static variables in a multithreaded environment you should always wrap them with a double check lock to ensure thread safety.
TL;DR ThreadLocal<T>.Value points to the same location if Thread.CurrentThread stays the same. Is there anything similar for AsyncLocal<T>.Value (e.g. would SynchronizationContext.Current or ExecutionContext.Capture() suffice for all scenarios)?
Imagine we have created some snapshot of a data structure which is kept in thread-local storage (e.g. a ThreadLocal<T> instance) and passed it to an auxiliary class for later use. This auxiliary class is used to restore the data structure to its snapshot state. We don't want to restore this snapshot onto a different thread, so we can check on which thread the auxiliary class was created. For example:
class Storage<T>
{
    private ThreadLocal<ImmutableStack<T>> stackHolder =
        new ThreadLocal<ImmutableStack<T>>(() => ImmutableStack<T>.Empty);

    public IDisposable Push(T item)
    {
        var bookmark = new StorageBookmark<T>(this);
        stackHolder.Value = stackHolder.Value.Push(item);
        return bookmark;
    }

    private class StorageBookmark<TInner> : IDisposable
    {
        private Storage<TInner> owner;
        private ImmutableStack<TInner> snapshot;
        private Thread boundThread;

        public StorageBookmark(Storage<TInner> owner)
        {
            this.owner = owner;
            this.snapshot = owner.stackHolder.Value;
            this.boundThread = Thread.CurrentThread;
        }

        public void Dispose()
        {
            if (Thread.CurrentThread != boundThread)
                throw new InvalidOperationException("Bookmark crossed thread boundary");
            owner.stackHolder.Value = snapshot;
        }
    }
}
With this, we essentially bind StorageBookmark to a specific thread and, therefore, to a specific version of the data structure in ThreadLocal storage. And we did that by ensuring we don't cross the "thread context", with the help of Thread.CurrentThread.
Now, to the question at hand: how can we achieve the same behavior with AsyncLocal<T> instead of ThreadLocal<T>? To be precise, is there anything similar to Thread.CurrentThread which can be checked at construction time and at usage time to verify that the "async context" has not been crossed (meaning AsyncLocal<T>.Value would point to the same object as when the bookmark was constructed)?
It seems either SynchronizationContext.Current or ExecutionContext.Capture() may suffice, but I'm not sure which is better, whether there is a catch, or even whether either would work in all possible situations.
What you're hoping to do is fundamentally contrary to the nature of asynchronous execution context; you're not required to await all Tasks created within your asynchronous context immediately, in the order they were created, or ever at all (and therefore you can't guarantee that they will be), but their creation within the scope of the calling context makes them part of the same asynchronous context, period.
It may be challenging to think of asynchronous execution context as different from thread contexts, but asynchrony is not synonymous with parallelism, which is specifically what logical threads support. Objects stored in thread-local storage that aren't intended to be shared/copied across threads can generally be mutable, because execution within a logical thread will always be able to guarantee relatively constrained sequential logic (some special treatment may be necessary to ensure compile-time optimizations don't mess with you, though this is rare and only necessary in very specific scenarios). For that reason the ThreadLocal in your example doesn't really need to hold an ImmutableStack; it could just be a Stack (which has much better performance), since you don't need to worry about copy-on-write or concurrent access. If the stack were publicly accessible, it would be more concerning that someone could pass it to other threads which could push/pop items, but since it's a private implementation detail here, the ImmutableStack could actually be seen as unnecessary complexity.
Anyway, Execution Context, which is not a concept unique to .NET (implementations on other platforms may differ in some ways, though in my experience never by much), is very much like (and directly related to) the call stack, but in a way that considers new asynchronous tasks to be new calls on the stack which may need both to share the caller's state as it was at the time the operation was started, and to diverge from it, since the caller may continue to create more tasks and create/update state in ways that will not make logical sense when read as a sequential set of instructions. It is generally recommended that anything placed in the ExecutionContext be immutable, though in some cases all copies of the context still pointing to the same instance reference should necessarily share mutable data. HttpContext, for instance, is stored by the default implementation of IHttpContextAccessor using AsyncLocal, so all tasks created in the scope of a single request have access to the same response state. Allowing multiple concurrent contexts to mutate the same reference instance necessarily introduces the possibility of issues, both from concurrency and from the logical order of execution. For instance, multiple tasks trying to set different results on an HTTP response will result in either an exception or unexpected behavior. You can try, to some extent, to help the consumer here, but at the end of the day it is the consumer's responsibility to understand the complexity of the nuanced implementation details they depend on (which is generally a code smell, but sometimes a necessary evil in real-world situations).
That scenario aside, as said, for the sake of ensuring all nested contexts function predictably and safely, it's generally recommended to only store immutable types and to always restore the context to its previous value (as you're doing with your disposable stack mechanism). The easiest way to think of the copy-on-write behavior is as though every single new task, new thread pool work item, and new thread gets its own clone of the context; but if they point to the same reference type (i.e. all have copies of the same reference pointer) they all share the same instance. The copy-on-write is simply an optimization that prevents copying when unnecessary; it can essentially be ignored and you can think of every logical task as having its own copy of the context (which is very much like that ImmutableStack, or a string). If the only way to update anything about the current value that the immutable collection item points to is to reassign it to a new, modified instance, then you never have to worry about cross-context pollution (just like that ImmutableStack you're using).
Your example doesn't show anything about how the data is accessed or what types are passed in for T, so there's no way to see what issue you might face; but if what you're concerned about is nested tasks disposing the "Current" context, or the IDisposable value being assigned to a field somewhere and accessed from a different thread, there are a few things you can try and some points worth considering:
The closest equivalent to your current check would be:
if (stackHolder.Value != this)
    throw new InvalidOperationException("Bookmark disposed out of order or in wrong context");
A simple disposed flag that throws ObjectDisposedException will raise an exception in at least one context if two contexts try to dispose the same bookmark.
Though this generally isn't recommended, if you want to be absolutely certain the object was disposed at least once, you could throw an exception in the finalizer of the IDisposable implementation (being sure to call GC.SuppressFinalize(this) in the Dispose method).
By combining the previous two, while it won't guarantee that it was disposed in the exact same task/method block that created it, you can at least guarantee that an object is disposed once and only once (a minimal sketch of this combination appears after these points).
Due to the fundamental importance of the ways ExecutionContext is supposed to be flowed and controlled, it is the responsibility of the execution engine (typically the runtime, task scheduler, etc., but also any third party using Tasks/Threads in novel ways) to ensure ExecutionContext flow is captured and suppressed where appropriate. If a thread, scheduler, or synchronization migration occurs in the root context, the ExecutionContext should not be flowed into the next logical task the thread/scheduler processes in the context where the task formerly executed. For example, if a task continuation starts on a ThreadPool thread and then awaits a continuation that causes the next logical operations to continue on a different ThreadPool thread or some other I/O completion thread, then when the original thread is returned to the ThreadPool it should not continue to reference/flow the ExecutionContext of the task which is no longer logically executing within it. Assuming no additional tasks are created in parallel and left astray, once execution resumes in the root awaiter it will be the only execution context that continues to have a reference to the context. When a Task completes, so does its execution context (or, rather, its copy of it).
Even if unobserved background tasks are started and never awaited, if the data stored in the AsyncLocal is immutable, the copy-on-write behavior combined with your immutable stack will ensure that parallel clones of execution contexts can never pollute each other.
With the first check in place and immutable types in use, you really don't need to worry about cloned parallel execution contexts unless you're worried about them gaining access to sensitive data from previous contexts; when they Dispose the current item, only the stack of the current execution context (e.g. the nested parallel context, specifically) reverts to the previous value; all cloned contexts (including the parent) are not modified.
If you are worried about nested contexts accessing parent data by disposing things they shouldn't, there are relatively simple patterns you can use to separate the IDisposable from the ambient value, as well as suppression patterns like those used in TransactionScope to, say, temporarily set the current value to null, etc.
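As promised, here is a minimal sketch of the dispose-once combination (disposed flag plus finalizer), written as a drop-in variant of the question's nested StorageBookmark and assuming the surrounding Storage<T> class from above; the context check from the first point could be kept alongside it:

private class StorageBookmark<TInner> : IDisposable
{
    private readonly Storage<TInner> owner;
    private readonly ImmutableStack<TInner> snapshot;
    private int disposed; // 0 = live, 1 = disposed

    public StorageBookmark(Storage<TInner> owner)
    {
        this.owner = owner;
        this.snapshot = owner.stackHolder.Value;
    }

    public void Dispose()
    {
        // Throws in at least one context if two contexts race to dispose the same bookmark.
        if (Interlocked.Exchange(ref disposed, 1) == 1)
            throw new ObjectDisposedException(nameof(StorageBookmark<TInner>));
        GC.SuppressFinalize(this);
        owner.stackHolder.Value = snapshot;
    }

    // Runs only if Dispose was never called; throwing from a finalizer is drastic
    // (it can take the process down), which is why this generally isn't recommended.
    ~StorageBookmark()
    {
        throw new InvalidOperationException("Bookmark was never disposed");
    }
}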
Just to reiterate in a practical way, let's say, for instance, that you store an ImmutableList in one of your bookmarks. If the item stored in the ImmutableList is mutable then context pollution is possible.
var someImmutableListOfMutableItems = unsafeAsyncLocal.Value;
// sets Name for all contexts pointing to the reference type.
someImmutableListOfMutableItems[0].Name = "Jon"; // assigns property setter on shared reference of Person
// notice how unsafeAsyncLocal.Value never had to be reassigned?
Whereas an immutable collection of immutable items will never pollute another context unless something is super fundamentally wrong about how execution context is being flowed (contact a vendor, file a bug report, raise the alarm, etc.)
var someImmutableListOfImmutableItems = safeAsyncLocal.Value;
someImmutableListOfImmutableItems = someImmutableListOfImmutableItems.SetItem(0,
someImmutableListOfImmutableItems[0].SetName("Jon") // SetName returns a new immutable Person instance
); // SetItem returns new immutable list instance
// notice both the item and the collection need to be reassigned. No other context will be polluted here
safeAsyncLocal.Value = someImmutableListOfImmutableItems;
EDIT: Some articles for people who want to read something perhaps more coherent than my ramblings here :)
https://devblogs.microsoft.com/pfxteam/executioncontext-vs-synchronizationcontext/
https://weblogs.asp.net/dixin/understanding-c-sharp-async-await-3-runtime-context
And for some comparison, here's an article about how context is managed in JavaScript, which is single threaded but supports an asynchronous programming model (which I figure might help to illustrate how they relate/differ):
https://blog.bitsrc.io/understanding-execution-context-and-execution-stack-in-javascript-1c9ea8642dd0
The logical call context has the same flow semantics as the execution context, and therefore as AsyncLocal. Knowing that, you can store a value in the logical context to detect when you cross "async context" boundaries:
class Storage<T>
{
    private AsyncLocal<ImmutableStack<T>> stackHolder = new AsyncLocal<ImmutableStack<T>>();

    public IDisposable Push(T item)
    {
        var bookmark = new StorageBookmark<T>(this);
        stackHolder.Value = (stackHolder.Value ?? ImmutableStack<T>.Empty).Push(item);
        return bookmark;
    }

    private class StorageBookmark<TInner> : IDisposable
    {
        private Storage<TInner> owner;
        private ImmutableStack<TInner> snapshot;
        private readonly object id;

        public StorageBookmark(Storage<TInner> owner)
        {
            id = new object();
            this.owner = owner;
            this.snapshot = owner.stackHolder.Value;
            // CallContext lives in System.Runtime.Remoting.Messaging (.NET Framework).
            CallContext.LogicalSetData("AsyncStorage", id);
        }

        public void Dispose()
        {
            if (CallContext.LogicalGetData("AsyncStorage") != id)
                throw new InvalidOperationException("Bookmark crossed async context boundary");
            owner.stackHolder.Value = snapshot;
        }
    }
}
public class Program
{
    static void Main()
    {
        DoesNotThrow().Wait();
        Throws().Wait();
    }

    static async Task DoesNotThrow()
    {
        var storage = new Storage<string>();
        using (storage.Push("hello"))
        {
            await Task.Yield();
        }
    }

    static async Task Throws()
    {
        var storage = new Storage<string>();
        var disposable = storage.Push("hello");
        using (ExecutionContext.SuppressFlow())
        {
            Task.Run(() => { disposable.Dispose(); }).Wait();
        }
    }
}
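On runtimes without CallContext (.NET Core / modern .NET), a separate AsyncLocal<object> marker should flow the same way as the logical call context; here is a hedged sketch of the same check using such a marker (the contextMarker field and its placement are my assumption, not part of the answer above):

// requires System.Threading and System.Collections.Immutable
class Storage<T>
{
    private readonly AsyncLocal<ImmutableStack<T>> stackHolder = new AsyncLocal<ImmutableStack<T>>();
    private readonly AsyncLocal<object> contextMarker = new AsyncLocal<object>();

    public IDisposable Push(T item)
    {
        var bookmark = new StorageBookmark<T>(this);
        stackHolder.Value = (stackHolder.Value ?? ImmutableStack<T>.Empty).Push(item);
        return bookmark;
    }

    private class StorageBookmark<TInner> : IDisposable
    {
        private readonly Storage<TInner> owner;
        private readonly ImmutableStack<TInner> snapshot;
        private readonly object id;

        public StorageBookmark(Storage<TInner> owner)
        {
            id = new object();
            this.owner = owner;
            this.snapshot = owner.stackHolder.Value;
            // The marker flows to child async contexts exactly like the stack itself,
            // and is absent (null) when ExecutionContext flow is suppressed.
            owner.contextMarker.Value = id;
        }

        public void Dispose()
        {
            if (owner.contextMarker.Value != id)
                throw new InvalidOperationException("Bookmark crossed async context boundary");
            owner.stackHolder.Value = snapshot;
        }
    }
}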
I have a FooContext class that captures some HTTP request-specific runtime values from the request inside an ASP.NET Web API application:
public class FooContext
{
    private readonly ISet<string> _set = new HashSet<string>();

    public void AddToSet(string s) => _set.Add(s);

    // Copied so that caller won't modify _set
    public ISet<string> GetStrings() => new HashSet<string>(_set);
}
Multiple consumers depend on this FooContext and will call AddToSet/GetStrings and, depending on the result, run different business logic.
I want to guarantee there will only be one instance of FooContext per HTTP request, so I registered it inside the DI container as request-scoped (using Autofac here as an example, but I guess most containers are roughly the same):
protected override void Load(ContainerBuilder builder)
{
    builder.RegisterType<FooContext>().InstancePerRequest();
}
My understanding is, FooContext is not thread-safe because threads may call GetStrings/AddToSet at the same time on the same FooContext instance (since it is request-scoped). It is not guaranteed that each HTTP request will complete on one single thread.
I do not explicitly create new threads nor call Task.Run() in my application, but I do use a lot of async-await with ConfigureAwait(false), which means the continuation may be on a different thread.
My questions are:
Is it true that FooContext is not thread-safe? Is my understanding above correct?
If this is indeed thread unsafe, and I want to allow multiple readers but only one exclusive writer, should I apply a ReaderWriterLockSlim on the ISet<string>?
Update
Since a commenter notes that my question is unanswerable without showing FooContext's usage, I will do it here. I use FooContext in an IAutofacActionFilter to capture several parameters that are passed to the controller method:
public class FooActionFilter : IAutofacActionFilter
{
    private readonly FooContext _fooContext;

    public FooActionFilter(FooContext fooContext)
        => _fooContext = fooContext;

    public Task OnActionExecutingAsync(
        HttpActionContext actionContext,
        CancellationToken cancellationToken)
    {
        var argument = (string)actionContext.ActionArguments["mystring"];
        _fooContext.AddToSet(argument);
        return Task.CompletedTask;
    }
}
Then in other service classes that control business logic:
public class BarService
{
    private readonly FooContext _fooContext;

    public BarService(FooContext fooContext)
        => _fooContext = fooContext;

    public async Task DoSomething()
    {
        var strings = _fooContext.GetStrings();
        if (strings.Contains("foo"))
        {
            // Do something
        }
    }
}
It is not guaranteed that each HTTP request will complete on one single thread.
When using async/await, the request might run on multiple threads, but the request will flow from one thread to the other, meaning that the request will not run on multiple threads in parallel.
This means that the classes you cache on a per-request basis don't have to be thread-safe, since their state is not accessed in parallel. They do need to be accessible from multiple threads sequentially, though, but this is typically only a problem if you start storing thread IDs or any other thread-affine state (using ThreadLocal<T>, for instance).
So don't do any special synchronization or use any concurrent data structures (such as ConcurrentDictionary), because that will only complicate your code while it is not needed (unless you forget to await some operation, in which case you will accidentally run operations in parallel, which can cause all sorts of problems. Welcome to the beautiful world of async/await).
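To illustrate that last caveat, here is a hedged sketch (with hypothetical method names) of how a forgotten await turns "sequential" request code into parallel access to the same per-request FooContext instance:

public async Task HandleRequestAsync(FooContext fooContext)
{
    // Forgetting 'await' here lets the next line run while AddManyAsync is
    // still touching fooContext on another thread: accidental parallelism.
    var pending = AddManyAsync(fooContext);   // fire-and-forget by mistake
    var strings = fooContext.GetStrings();    // may race with AddToSet calls

    await pending; // awaiting (even this late) is what restores sequential access
}

private async Task AddManyAsync(FooContext fooContext)
{
    foreach (var value in new[] { "foo", "bar" })
    {
        fooContext.AddToSet(value);
        await Task.Delay(10).ConfigureAwait(false);
    }
}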
Multiple consumers depend on this FooContext and will call AddToSet/GetStrings and, depending on the result, run different business logic.
Those consumers can depend on FooContext as long as they have a lifetime that is either Transient or InstancePerRequest. And this holds for their consumers all the way up the call graph. If you violate this rule, you will have a Captive Dependency, which may cause your FooContext instance to be reused by multiple requests, which will cause concurrency problems.
You do have to take some care when working with DI in multi-threaded applications though. This documentation does give some pointers.
I do not explicitly create new threads nor call Task.Run() in my application, but I do use a lot of async-await with ConfigureAwait(false), which means the continuation may be on a different thread.
Yes, so it is being accessed in a multithreaded fashion.
Is it true that FooContext is not thread-safe? Is my understanding above correct?
Yes.
I would say that the calling code is actually in error: ConfigureAwait(false) means "I don't need the request context", but if the code uses FooContext (part of the request context), then it does need the request context.
If this is indeed thread unsafe, and I want to allow multiple readers but only one exclusive writer, should I apply a ReaderWriterLockSlim on the ISet<string>?
Almost certainly not. A lock would probably work just fine.
Using reader/writer locks just because some code does reads and other does writes is a common mistake. Reader/writer locks are only necessary when some code does reads, other does writes, and there is a significant amount of concurrent read access.
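As an illustration of that advice, here is a minimal sketch of FooContext guarded by a plain lock (this is an assumption about how you would apply it, not code from the question):

public class FooContext
{
    private readonly object _gate = new object();
    private readonly ISet<string> _set = new HashSet<string>();

    public void AddToSet(string s)
    {
        lock (_gate)
        {
            _set.Add(s);
        }
    }

    // Still copied so the caller can't modify _set; the copy is made under the lock.
    public ISet<string> GetStrings()
    {
        lock (_gate)
        {
            return new HashSet<string>(_set);
        }
    }
}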
Consider this scenario:
The repository:
// a static object used as the lock target
private static readonly object _sync = new object();

public void Save(object entity)
{
    lock (_sync)
    {
        context.Add(entity);
        context.SaveChanges();
    }
}
The application:
//somewhere
new Thread(() => Repository.Save(entity)).Start();
//...
This "action" occurs several times. Randomly the db context raises an AccessViolationException during saving operation (unhandled exception despite the try-catch o.0).
I have already read that this kind of exception can be caused by a concurrent access on the context object.
The question is: why does this exception occur in my case? Note that the lock should ensure the thread-safe access. How can I resolve this problem?
I have an idea... I think that the context.SaveChanges is not "blocking". When a thread commits the saving another thread enters in the Save method causes a concurrency access...
What you are doing with your DbContext sounds wrong and will inevitably lead to problems as your application grows. The Entity Framework DbContext is designed as a Unit of Work: create it, use it, dispose it; and definitely do not start doing things with it on different threads. A transaction cannot span multiple threads simultaneously - it just can't (or even if there were some funky way of wangling it, it shouldn't); every thread gets its own transaction.
You need to rework things so that whenever you work with your DbContext you do it as:
using (var context = new MyContext())
{
    // 1. get entities
    // 2. work with entities
    // 3. save changes
}
// ALL IN A SINGLE THREAD
The Entity Framework itself will not prevent you from working in some other way, but as you are beginning to discover, at some point things will break in a hard-to-diagnose way.
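Applied to the Save method from the question, a minimal sketch under that unit-of-work approach, assuming the repository no longer holds a shared context or lock (MyContext mirrors the block above):

public void Save(object entity)
{
    // Each call gets its own short-lived context, so no lock is needed
    // and no context instance is ever shared between threads.
    using (var context = new MyContext())
    {
        context.Add(entity);   // mirrors the question's code; in EF 6 you may need context.Set(entity.GetType()).Add(entity)
        context.SaveChanges();
    }
}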
There is a thing that's been bugging me for a long time about Entity Framework.
Last year I wrote a big application for a client using EF. And during the development everything worked great.
We shipped the system in August. But after some weeks I started to see weird memory leaks on the production server. My ASP.NET MVC 4 process was taking up all the resources of the machine after a couple of days running (8 GB). This was not good. I searched around on the net and saw that you should surround all your EF queries and operations in a using() block so that the context can be disposed.
In a day I refactored all my code to use using(), and this solved my problems; since then the process sits at a steady memory usage.
The reason I didn't surround my queries in the first place, however, is that I started my first controllers and repositories from Microsoft's own scaffolds included in Visual Studio, and these did not surround their queries with using; instead they had the DbContext as an instance variable of the controller itself.
First of all: if it's really important to dispose of the context (which wouldn't be strange, the db connection needs to be closed and so on), maybe Microsoft should have this in all their examples!
Now I have started working on a new big project with all my learnings in the back of my head, and I've been trying out the new features of .NET 4.5 and EF 6: async and await. EF 6.0 has all these async methods (e.g. SaveChangesAsync, ToListAsync, and so on).
public Task<tblLanguage> Post(tblLanguage language)
{
    using (var langRepo = new TblLanguageRepository(new Entities()))
    {
        return langRepo.Add(RequestOrganizationTypeEnum, language);
    }
}
In the TblLanguageRepository class:
public async Task<tblLanguage> Add(OrganizationTypeEnum requestOrganizationTypeEnum, tblLanguage language)
{
    ...
    await Context.SaveChangesAsync();
    return languageDb;
}
However, when I now surround my statements in a using() block I get the exception that the DbContext was disposed before the query was able to return. This is expected behaviour: the query runs async, and the using block finishes ahead of the query. But how should I dispose of my context in a proper way while using the async and await features of EF 6?
Please point me in the right direction.
Is using() needed in EF 6? Why do Microsoft's own examples never feature that? How do you use async features and dispose of your context properly?
Your code:
public Task<tblLanguage> Post(tblLanguage language)
{
    using (var langRepo = new TblLanguageRepository(new Entities()))
    {
        return langRepo.Add(RequestOrganizationTypeEnum, language);
    }
}
is disposing the repository before returning a Task. If you make the code async:
public async Task<tblLanguage> Post(tblLanguage language)
{
    using (var langRepo = new TblLanguageRepository(new Entities()))
    {
        return await langRepo.Add(RequestOrganizationTypeEnum, language);
    }
}
then it will dispose the repository just before the Task completes. What actually happens is that when you hit the await, the method returns an incomplete Task (note that the using block is still "active" at this point). Then, when the langRepo.Add task completes, the Post method resumes executing and disposes the langRepo. When the Post method completes, the returned Task is completed.
For more information, see my async intro.
I would go for the 'one DbContext per request' approach and reuse the DbContext within the request. As all tasks should be completed at the end of the request anyway, you can safely dispose of it then.
See e.g.: One DbContext per request in ASP.NET MVC (without IOC container)
Some other advantages:
- some entities might already be materialized in the DbContext from previous queries, saving some extra queries.
- you don't have all those extra using statements cluttering your code.
If you are using proper n-tiered programming patterns, your controller should never even know that a database request is being made. That should all happen in your service layer.
There are a couple of ways to do this. One is to create two constructors per class: one that creates a context and one that accepts an already existing context. This way, you can pass the context around if you're already in the service layer, or create a new one if it's the controller/model calling the service layer.
The other is to create an internal overload of each method and accept the context there.
But, yes, you should be wrapping these in a using.
In theory, the garbage collector SHOULD clean these up without wrapping them, but I don't entirely trust the GC.
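A minimal sketch of the two-constructor idea described above (the service name, context type, and DbSet are hypothetical, not taken from the question):

public class LanguageService : IDisposable
{
    private readonly Entities _context;
    private readonly bool _ownsContext;

    // Called from controllers/models: the service creates and owns its own context.
    public LanguageService() : this(new Entities())
    {
        _ownsContext = true;
    }

    // Called from other services that already have a context to share.
    public LanguageService(Entities context)
    {
        _context = context;
    }

    public async Task<tblLanguage> AddAsync(tblLanguage language)
    {
        _context.tblLanguages.Add(language); // tblLanguages is an assumed DbSet name
        await _context.SaveChangesAsync();
        return language;
    }

    public void Dispose()
    {
        // Only dispose the context if this instance created it.
        if (_ownsContext)
            _context.Dispose();
    }
}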
I agree with @Dirk Boer that the best way to manage DbContext lifetime is with an IoC container that disposes of the context when the HTTP request completes. However, if that is not an option, you could also do something like this:
var dbContext = new MyDbContext();
var results = await dbContext.Set<MyEntity>().ToArrayAsync();
dbContext.Dispose();
The using statement is just syntactic sugar for disposing of an object at the end of a code block. You can achieve the same effect without a using block by simply calling .Dispose yourself.
Come to think of it, you shouldn't get object disposed exceptions if you use the await keyword within the using block:
public async Task<tblLanguage> Post(tblLanguage language)
{
    using (var langRepo = new TblLanguageRepository(new Entities()))
    {
        var returnValue = await langRepo.Add(RequestOrganizationTypeEnum, language);
        await langRepo.SaveChangesAsync();
        return returnValue;
    }
}
If you want to keep your method synchronous but you want to save to the DB asynchronously, don't use the using statement. Like @danludwig said, it is just syntactic sugar. You can call the SaveChangesAsync() method and then dispose the context after the task is completed. One way to do it is this:
//Save asynchronously then dispose the context after
context.SaveChangesAsync().ContinueWith(c => context.Dispose());
Take note that the lambda you pass to ContinueWith() will also be executed asynchronously.
IMHO, it's again an issue caused by the use of lazy loading. After you dispose your context, you can't lazy-load a property anymore, because disposing the context closes the underlying connection to the database server.
If you do have lazy-loading activated and the exception occurs after the using scope, then please see https://stackoverflow.com/a/21406579/870604
I have a project in MVC. We chose EF for our DB transactions. We created some managers for the BLL layer. I found a lot of examples where the "using" statement is used, e.g.:
public Item GetItem(long itemId)
{
    using (var db = new MyEntities())
    {
        return db.Items.Where(it => it.ItemId == itemId && !it.IsDeleted).FirstOrDefault();
    }
}
Here we create a new instance of the DbContext, MyEntities().
We use "using" in order to "ensure the correct use of IDisposable objects."
It's only one method in my manager, but I have more than ten of them.
Every time I call any method from the manager, I'll be using a "using" statement and creating another DbContext in memory. When will the garbage collector (GC) dispose of them? Does anyone know?
But there is an alternative usage of the manager methods.
We create a global variable:
private readonly MyEntities db = new MyEntities();
and use the DbContext in every method without a "using" statement. A method then looks like this:
public Item GetItem(long itemId)
{
    return db.Items.Where(it => it.ItemId == itemId && !it.IsDeleted).FirstOrDefault();
}
Questions:
What is the proper way of using DBcontext variable?
What if we don't use the "using" statement (because it affects performance) - will the GC handle all of that for us?
I'm a "rookie" at EF usage and still haven't found an unequivocal answer to this question.
I think you will find many people suggesting this style of pattern, not just me or Henk.
DBContext handling
Yes, ideally use using statements for DbContext subtypes.
Even better are Unit of Work patterns that are managed with using, that have a context and dispose the context. Just one of many UoW examples, this one from Tom Dykstra.
The Unit of Work manager should be new for each HTTP request.
The context is NOT thread safe so make sure each thread has its own context.
Let EF cache things behind the scenes.
Test context creation times after several HTTP requests. Do you still have a concern?
Expect problems if you store the context in a static: any sort of concurrent access will hurt, and if you are using parallel AJAX calls, assume a 90+% chance of problems with a static single context.
For some performance tips, well worth a read
The proper or best-practice way of using a DbContext variable is with the using statement.
using (var db = new MyEntities())
{
    return db.Items.Where(it => it.ItemId == itemId && !it.IsDeleted).FirstOrDefault();
}
The benefit is that many things are done automatically for us. For example, once the block of code is completed, Dispose is called.
Per the MSDN EF topic Working with DbContext:
The lifetime of the context begins when the instance is created and ends when the instance is either disposed or garbage-collected. Use using if you want all the resources that the context controls to be disposed at the end of the block. When you use using, the compiler automatically creates a try/finally block and calls dispose in the finally block.
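For illustration, the using block from the GetItem example above expands to roughly the following (a sketch of the equivalent try/finally, not decompiled output):

var db = new MyEntities();
try
{
    return db.Items.Where(it => it.ItemId == itemId && !it.IsDeleted).FirstOrDefault();
}
finally
{
    // Dispose runs even if the query throws, closing the underlying
    // connection and releasing the context's resources.
    if (db != null)
        ((IDisposable)db).Dispose();
}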