Isolation level in LINQ to SQL DataContext - C#

Need a suggestion for a dbml file with LINQ. We have a database with a large amount of data, and sometimes a table gets locked, so we need to apply the READ UNCOMMITTED isolation level (we know the disadvantages of this isolation level) on the dbml class.
I have applied the code below in the dbml file as a partial class:
partial class MainDataContext
{
    public MainDataContext()
    {
        base.Connection.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    }
}
Is this a proper way to implement it, or can you give any helpful suggestion on it?
Thanks

If you do that, you will need to attach the transaction to every command on that connection, which isn't something LINQ-to-SQL is going to do for you (although there are ways to make it aware of a transaction instance). Perhaps one option is to use the constructor overload that accepts a connection, and simply supply an already-open connection upon which you've already stomped the isolation level via:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
Of course, if you do that - then it is now your job to dispose the connection properly when you are done: LINQ-to-SQL will assume you are managing the connection lifetime.
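For illustration, here is a minimal sketch of that approach (connectionString is assumed to come from your config; MainDataContext is the generated context from the question):
using (var con = new SqlConnection(connectionString))
{
    con.Open();

    // Stamp the isolation level on the session before handing it to LINQ-to-SQL.
    using (var cmd = con.CreateCommand())
    {
        cmd.CommandText = "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;";
        cmd.ExecuteNonQuery();
    }

    using (var db = new MainDataContext(con))
    {
        // Queries through this context now run at READ UNCOMMITTED,
        // because the isolation level is a property of the session/connection.
    }
} // The connection is yours, so you dispose it - not the data context.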
Another option with a LINQ-to-SQL data context is to use the ExecuteQuery<T>(sql, args) method, which allows you to pass in your own raw TSQL - this obviously means you aren't really using LINQ any more, but it allows you to add NOLOCK etc in a few places where it makes tactical sense (just using the data-context for the materializer). This is more granular, and allows you to focus on your high throughput / highly concurrent tables.
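For example (a sketch; the Customer class, column list, and region parameter are hypothetical):
using (var db = new MainDataContext())
{
    // The data context is used purely as the materializer here; the
    // NOLOCK hint applies only to this one tactical query.
    var customers = db.ExecuteQuery<Customer>(
        "SELECT Id, Name FROM dbo.Customers WITH (NOLOCK) WHERE Region = {0}",
        region).ToList();
}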

You can place the code that interacts with the db in a TransactionScope block and set the desired isolation level for the TransactionScope.
TransactionOptions _transactionOptions = new TransactionOptions() { IsolationLevel = IsolationLevel.Snapshot };
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Required, _transactionOptions))
{
    // your code here
}
And of course, taking this one step further, you can encapsulate the creation of the TransactionScope in a static factory-like method so that it's easier to use wherever it's needed, and in case you want to change the isolation level there will be one single place to change it (see the sketch below). Depending on your requirements, choose what's best for you.
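For example, a minimal sketch of such a factory (the class and method names are placeholders):
public static class TransactionScopeFactory
{
    // One single place to change the isolation level for the whole application.
    public static TransactionScope CreateScope(
        IsolationLevel isolationLevel = IsolationLevel.Snapshot)
    {
        var options = new TransactionOptions { IsolationLevel = isolationLevel };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}

// Usage:
using (TransactionScope transactionScope = TransactionScopeFactory.CreateScope())
{
    // your code here
    transactionScope.Complete();
}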

Related

Which is better to dispose the object?

In case of this method:
public void Delete(int id)
{
    using (var connection = GetOpenConnection())
    {
        connection.Execute($"DELETE FROM MyTable WHERE Id = {id}");
    }
}
Or just:
GetOpenConnection().Execute($"DELETE FROM MyTable WHERE Id = {id}");
I wonder if the second is the best option to ease the maintenance and simplify.
The first option gives you predictability: the connection object returned from GetOpenConnection() will be disposed as soon as connection.Execute finishes.
On the other hand, if you use the second approach, you can hope that the connection will be closed at some point in the future, but you have absolutely no certainty of when, or even if, that is going to happen.
Therefore you should prefer the first approach.
Note: Consider parameterizing your query. Even though in your situation insertion of the id into the query is harmless because the id's type is int, it is a good idea to use parameters consistently throughout your code.
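Assuming the Execute extension method in the question is Dapper's, a parameterized version would look like this:
public void Delete(int id)
{
    using (var connection = GetOpenConnection())
    {
        // The value is sent as the @Id parameter rather than being
        // concatenated into the SQL text.
        connection.Execute("DELETE FROM MyTable WHERE Id = @Id", new { Id = id });
    }
}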
Answering this requires an understanding of how Sql Server (and other databases) use connections, and how ADO.Net uses connection pooling.
Database servers tend to only be able to handle a limited number of active connections at a time. This is partly due to the limited ephemeral ports available on a system, but other factors can come into play as well. This means it's important to make sure connections are either always closed promptly, or that we carefully limit connection use. If we want a database to scale to a large number of users, we have to do both.
.Net addresses this situation in two ways. First, the ADO.Net library you use for database access (System.Data and company) includes a feature called Connection Pooling. This feature pools and caches connections for you, to make it efficient to quickly open and close connections as needed. The feature means you should not try to keep a shared connection object active for the life of an application or session. Let the connection pool handle this, and create a brand new connection object for most trips to the database.
The other way it addresses the issue is with the IDisposable pattern. IDisposable provides an interface with direct support in the runtime via the using keyword, such that you can be sure unmanaged resources for an object — like that ephemeral port on the database server your connection was holding onto — are cleaned up promptly and in a deterministic way, even if an exception is thrown. This feature makes sure all those short-lived connections you create because of the connection pooling feature really are as short-lived as they need to be.
In other words, the using block in the first sample serves an important function. It's a mistake to omit it. On a busy system it can even lead to a denial of service situation for your database.
You get a sense of this in the question title itself, which asks, "Which is better to dispose the object?" Only one of those two samples disposes the object at all.
You could approach the design in this manner.
using (var context = new CustomerFactory().Create())
    return context.RetrieveAll();
Then inside your CustomerContext you would have the dispose logic, the database connection, and your query. You could also inherit from a DbConnectionManager class, which would deal with the connection. When the whole class is disposed, it would dispose the connection manager along with it.
public interface ICustomerRepository : IDisposable
{
    IEnumerable<Customer> RetrieveAll();
}

public interface ICustomerFactory
{
    ICustomerRepository Create();
}

public class CustomerFactory : ICustomerFactory
{
    public ICustomerRepository Create() => new CustomerContext();
}

public class CustomerContext : ICustomerRepository
{
    private IDbConnection dbConnection;

    public CustomerContext()
    {
        // Instantiate your connection manager here.
    }

    public IEnumerable<Customer> RetrieveAll() => dbConnection.Query<Customer>(...);

    // Disposing the repository also releases the underlying connection.
    public void Dispose() => dbConnection?.Dispose();
}
That would be if you want to stub out an expressive call, kind of representing your fluent syntax in option two, without the negative impact.

Nested Transaction Behavior in EF6

I'm currently using TransactionScope to manage transactions in my data layer, but I've been running into issues with nested transactions and async, whereby the connection seems to close during the nested transaction or the transaction is promoted to MSDTC. I've not found the exact problem, but after reading around it looks like this scenario isn't particularly well supported and that I should be using Database.BeginTransaction() instead.
My problem is that I can't find information on how Database.BeginTransaction() works with nested transactions, particularly in my scenario where I want to use the ambient transaction rather than create a new one. My suspicion is that it isn't intended to work this way, and if I want to manage nested transactions I should abstract out transaction management to give me more control.
Not wanting to add in unnecessary layers of abstractions I wanted to know if anyone has experience in this area and could confirm the behavior of Database.BeginTransaction() when nested inside another transaction?
Additional information about my DAL: Based on CQS pattern, I tend to encapsulate Db related code in command or query handlers, so a simplified/contrived example of how this nesting occurs would be:
public class AddBlogPostHandler
{
    private readonly MyDbContext _myDbContext;

    public AddBlogPostHandler(MyDbContext myDbContext)
    {
        _myDbContext = myDbContext;
    }

    public async Task ExecuteAsync(AddBlogPostCommand command)
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            // .. code to create and add a draft blog post to the context
            await _myDbContext.SaveChangesAsync();

            var publishBlogPostCommand = new PublishBlogPostCommand();
            // ..set some variables on the PublishBlogPostCommand
            // ..dispatch to the PublishBlogPostHandler (simplified in this example)
            await PublishBlogPostAsync(publishBlogPostCommand);

            scope.Complete();
        }
    }
}

public class PublishBlogPostHandler
{
    private readonly MyDbContext _myDbContext;

    public PublishBlogPostHandler(MyDbContext myDbContext)
    {
        _myDbContext = myDbContext;
    }

    public async Task ExecuteAsync(PublishBlogPostCommand command)
    {
        using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
        {
            // .. some code to do one set of updates
            await _myDbContext.SaveChangesAsync();

            // .. some other db updates that need to be run separately
            await _myDbContext.SaveChangesAsync();

            scope.Complete();
        }
    }
}
There is no such thing as nested transactions in the sense that the inner one can commit or roll back independently. Nested transactions really only maintain a ref count: at the last commit we get a physical commit, and at the first rollback we get a physical rollback. Just making sure you are aware of that.
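A sketch of that ref-count behavior with nested TransactionScopes:
using (var outer = new TransactionScope())
{
    using (var inner = new TransactionScope(TransactionScopeOption.Required))
    {
        // .. inner work against the ambient transaction
        inner.Complete(); // only a "vote"; nothing is physically committed yet
    }

    // No outer.Complete(): disposing the outer scope rolls back the
    // whole transaction, including everything the inner scope did.
}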
It is important to avoid MSDTC usage. This is possible both with TransactionScope and with BeginTransaction. With the former you need to explicitly Open the connection inside the scope so that EF does not open new connections all the time.
As you have read in the issue this is a flaw in EF (which L2S did not have). Please take the time to comment on the issue to make sure the team is aware that customers are running into this problem.
particularly in my scenario where I want to use the ambient transaction rather than create a new one.
This is perfect for TransactionScope. I think your switch to BeginTransaction is based on a misunderstanding. Maybe you can clarify in the comments.
confirm the behavior of Database.BeginTransaction() when nested inside another transaction
Explained in the first paragraph.
Additional information about my DAL: Based on CQS pattern, I tend to encapsulate Db related code in command or query handlers, so a simplified/contrived example of how this nesting occurs would be:
The code looks fine except for the missing db.Connection.Open() call (as explained above).
This pattern will support executing multiple queries and commands in the same transaction: just wrap another scope around it. Make sure not to open connections twice, e.g. check conn.State before taking action (see the sketch below).
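As a sketch, applied to the handlers above (EF6's Database.Connection is opened explicitly inside the scope, and its State is checked first, as just described):
using (var scope = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
{
    // Open the connection explicitly so EF reuses it for every command in
    // the scope instead of opening a new one each time (which forces MSDTC).
    if (_myDbContext.Database.Connection.State != ConnectionState.Open)
        _myDbContext.Database.Connection.Open();

    await _myDbContext.SaveChangesAsync();
    scope.Complete();
}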

Keep one common DataContext per DAL instead of creating it for each database call?

In my DALs I currently use a new DataContext instance for each method, i.e. I create the context for each data call, then dispose it (with using). I remember reading that this was sort of a best practice.
Now I think I would probably be better off using one common DataContext per DAL, which would require fewer lines to write and would allow me to update changes in the database without attaching the entities to a newly created context.
But I am not sure whether this will impact the performance of the application. Are there negative consequences of this new approach, like maybe "each context reserves a connection to the database" or "there are only a limited number of contexts available per application"?
From what I have read and concluded on my own, the basic rule is: use a single DataContext instance for each short-lived set of operations. This means:
Use a new (separate) instance of DataContext for each operation (transaction) in long-living parent objects, such as DALs. For example, the main form has a DAL which uses a DataContext; the main form is the longest-living object in a desktop application, so having a single DataContext instance serve all the main form's data operations would not be a good solution, due to the ever-growing cache and the risk of the data becoming stale.
Use a single (common) instance of DataContext for all operations in short-living parent objects. For example, if we have a class which executes a set of data operations in a short amount of time (takes data from a database, operates on it, updates it, saves the changes to the database, and gets disposed), we had better create one single instance of the DataContext and use it in all the DAL methods. This applies to web applications and services as well, since they are stateless and executed per request.
Example of when I see a requirement of a common DataContext:
DAL:
// Common DAL DataContext field.
DataContext Context = new DataContext();

public IEnumerable<Record> GetRecords()
{
    var records = Context.Records;
    foreach (var record in records)
    {
        yield return record;
    }
}

public void UpdateData()
{
    Context.SubmitChanges();
}
BLL:
public void ManageData()
{
    foreach (var record in DAL.GetRecords())
    {
        record.IsUpdated = true;
        DAL.UpdateData();
    }
}
With this approach you will end up with a lot of objects created in memory (potentially the whole db), and (which can be even more important) those objects will not correspond to the current values in the db (if the db gets updated outside of your application/machine). So, in order to use memory efficiently and to have up-to-date values for your entities, it's really better to create a data context per transaction.

LINQ to SQL: Reusing DataContext

I have a number of static methods that perform simple operations like insert or delete a record. All these methods follow this template of using:
public static UserDataModel FromEmail(string email)
{
    using (var db = new MyWebAppDataContext())
    {
        db.ObjectTrackingEnabled = false;
        return (from u in db.UserDataModels
                where u.Email == email
                select u).Single();
    }
}
I also have a few methods that need to perform multiple operations that use a DataContext:
public static UserPreferencesDataModel Preferences(string email)
{
    return UserDataModel.Preferences(UserDataModel.FromEmail(email));
}

private static UserPreferencesViewModel Preferences(UserDataModel user)
{
    using (var db = new MyWebAppDataContext())
    {
        var preferences = (from u in db.UserDataModels
                           where u == user
                           select u.Preferences).Single();
        return new UserPreferencesViewModel(preferences);
    }
}
I like that I can divide simple operations into faux stored procedures in my data models with static methods like FromEmail(), but I'm concerned about the cost of having Preferences() invoke two connections (right?) via the two using DataContext statements.
Do I need to be? Is what I'm doing less efficient than using a single using (var db = new MyWebAppDataContext()) statement?
If you examine those "two" operations, you might see that they could be performed in one database roundtrip (see the sketch below). Minimizing database roundtrips is a major performance objective (second to minimizing database IO).
If you have multiple DataContexts, they view the same record differently. Normally, object tracking requires that the same instance is always used to represent a single record; if you have two DataContexts, they each do their own object tracking on their own instances.
Suppose the record changes between DC1 observing it and DC2 observing it. In this case the record will not only have two different instances, but those different instances will have different values. It can be very challenging to express business logic against such a moving target.
You should definitely retire the DataContext after the unit of work, to protect yourself from stale instances of records.
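For example, the two lookups from the question can be folded into a single roundtrip on one context, using only the types already shown there:
public static UserPreferencesViewModel Preferences(string email)
{
    using (var db = new MyWebAppDataContext())
    {
        // One query, one roundtrip: filter by email and project straight
        // to the preferences in a single statement.
        var preferences = (from u in db.UserDataModels
                           where u.Email == email
                           select u.Preferences).Single();
        return new UserPreferencesViewModel(preferences);
    }
}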
Normally you should use one context for one logical unit of work, so have a look at the unit of work pattern, e.g. http://dotnet.dzone.com/news/using-unit-work-pattern-entity
Of course there is some overhead in creating a new DataContext each time, but it's good practice to do as Ludwig stated: one context per unit of work.
It uses connection pooling, so it's not too expensive an operation.
I also think creating a new DataContext each time is the correct way, but this link explains different approaches for handling the data context: Linq to SQL DataContext Lifetime Management
I developed a wrapper component that uses an interface like:
public interface IContextCacher
{
    DataContext GetFromCache();
    void SaveToCache(DataContext ctx);
}
And use a wrapper to instantiate the context: if it exists in the cache, it's pulled from there; otherwise, a new instance is created and pushed to the Save method, and all future calls get the value from the getter.
The actual caching mechanism depends on the type of application. Take, for instance, an ASP.NET web application: it could store the context in the request's Items collection, so it's alive for that request only. A Windows app could pull it from some singleton collection. It could be whatever you want under the covers (see the sketch below).
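For example, a hypothetical per-request implementation for an ASP.NET web application (the cache key is arbitrary):
public class HttpContextCacher : IContextCacher
{
    private const string CacheKey = "__dataContext";

    // HttpContext.Current.Items lives for the duration of a single request,
    // so the cached context never outlives the request that created it.
    public DataContext GetFromCache()
    {
        return HttpContext.Current.Items[CacheKey] as DataContext;
    }

    public void SaveToCache(DataContext ctx)
    {
        HttpContext.Current.Items[CacheKey] = ctx;
    }
}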

How do I implement this command to prevent deadlocks with LINQ to SQL?

I would like to implement SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED in my project which uses LINQ to SQL. My understanding is that this will affect all select statements globally.
Do I put this in my DAL which contains the context object? If so, how?
Thanks!
Mark
You can do this on a per DataContext / unit of work basis like this:
using (var con = new SqlConnection(constr))
{
    con.Open();
    using (var tran = con.BeginTransaction(IsolationLevel.ReadUncommitted))
    {
        using (var db = new MyDataContext(con))
        {
            // You need to set the transaction in .NET 3.5 (not in 4.0).
            db.Transaction = tran;

            // Do your stuff here.
            db.SubmitChanges();
        }
        tran.Commit();
    }
}
Of course you can abstract away the creation, committing, and disposal of the connection and transaction, but this example will work.
Note that this will not set the isolation level globally, just for the LINQ statements that are executed within the context of that particular DataContext class.
LINQ, strictly speaking, is not a database query language. It is a domain query language that can be translated to a DB query language such as SQL. As such, you cannot use LINQ alone to set a database isolation level.
I would look at the tools your ORM gives you; most, at some level, involve an ADO-style SqlConnection and SqlTransaction. You should be able to expose these from your ORM's "session" object, in order to set isolation levels and execute other non-DML database commands.
I like the ideas in this article for creating a base class that will set the desired transaction isolation level for you. In my view, this feature should have been included in the framework.
http://www.codeproject.com/KB/linq/LINQ_Transactions.aspx
Try setting READ COMMITTED SNAPSHOT on the entire database (see the command below).
See here: http://www.codinghorror.com/blog/2008/08/deadlocked.html
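For reference, the option is enabled with a single T-SQL command ("MyDatabase" is a placeholder, and the statement needs exclusive access to the database to complete):
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;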
