MSSQL timeout error - C#

I'm using MSSQL Server 2008 and have to work with several databases at a time. Sometimes the system gives a "Transaction Timeout" error when inserting or updating records, but it works again after a few minutes.
A few users are using different Windows applications to manipulate data in the databases.
I want to know:
Is there any relation between this issue and using multiple databases?
Will this query type (multiple databases linked in one query) affect the timeout?

It highly depends on the reason for the timeout. Frequently it is caused by resources being locked by one application while a second application waits too long. Multiple databases, even in a single instance, use the Distributed Transaction Coordinator:
A transaction within a single instance of the Database Engine that
spans two or more databases is actually a distributed transaction. The
instance manages the distributed transaction internally; to the user,
it operates as a local transaction.
http://technet.microsoft.com/en-us/library/jj856598(v=sql.110).aspx
And the DTC is much slower than working within the scope of a single database, so it keeps data locked for a wider timeframe and can cause timeouts.
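To make the escalation concrete, here is a minimal C# sketch (the connection strings, databases and table names are illustrative, not taken from the question). Opening connections to two different databases inside one TransactionScope promotes the transaction from a lightweight local transaction to an MSDTC transaction on SQL Server 2008:

// requires System.Data.SqlClient and a reference to System.Transactions
using (var scope = new TransactionScope())
{
    // first connection: enlists as a lightweight local (LTM) transaction
    using (var conn1 = new SqlConnection("Server=.;Database=SalesDb;Integrated Security=true"))
    using (var cmd1 = new SqlCommand("INSERT INTO dbo.Orders (Note) VALUES ('a')", conn1))
    {
        conn1.Open();
        cmd1.ExecuteNonQuery();
    }

    // second connection, different database: the transaction escalates to
    // MSDTC here, so its locks are held under the slower coordinator
    using (var conn2 = new SqlConnection("Server=.;Database=AuditDb;Integrated Security=true"))
    using (var cmd2 = new SqlCommand("INSERT INTO dbo.AuditLog (Note) VALUES ('a')", conn2))
    {
        conn2.Open();
        cmd2.ExecuteNonQuery();
    }

    scope.Complete();
}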

If you're using SqlConnection, SqlCommand and transactions, you may want to check and set each of their timeout properties to properly manage your application's behaviour.
See TransactionOptions for IsolationLevel and Timeout (the scope option will likely need to be Required).
See SqlCommand.CommandTimeout if you're using commands.
A good way to use a transaction could be:
TimeSpan timeout = TimeSpan.FromSeconds(300); // 5 minutes
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, timeout))
{
    // Code ...
    scope.Complete(); // without this call, the transaction rolls back on Dispose
}
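If you also need to control the isolation level, a hedged sketch combining TransactionOptions with SqlCommand.CommandTimeout could look like this (connectionString and dbo.UpdateRecords are placeholders, not from the question):

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromSeconds(300) // transaction-level timeout
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.UpdateRecords", conn))
{
    cmd.CommandType = CommandType.StoredProcedure; // CommandType is in System.Data
    cmd.CommandTimeout = 120; // command-level timeout in seconds, independent of the transaction timeout
    conn.Open();
    cmd.ExecuteNonQuery();
    scope.Complete();
}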
Keep in mind that many concurrent transactions can affect SQL Server efficiency. You may eventually need several instances of the same server to spread the load, but the downside is having to keep those instances synchronized.

Related

Why is DbContext.Database.CurrentTransaction always null?

Is there any way to find out whether a DbContext is enlisted in a transaction while enlist=false is set in the connection string?
I was tracing DbContext.Database.CurrentTransaction, but I noticed it is always null.
I know that when enlist=false, opened connections will not enlist themselves in an ambient transaction; is that right?
If (2) is correct, how do I enlist a DbContext in a transaction where TransactionScope is used?
Finally, I noticed that using clones of DependentTransaction with multiple DbContexts and multiple threads while enlist=false will not promote the transaction to a distributed one, but am I still able to commit and roll back if an exception happens while using the dependent transaction with enlist=false?
If (4) is incorrect, is there any way to fully avoid a distributed transaction while still being able to open multiple connections within a single transaction scope?
FYI, currently an Oracle database is employed; in the future, MySQL will be in operation as well.
Thank you
I can't really say anything about 1, 2 and 3, but:
The distribution thing is not absolutely clear to me either. However, MS escalates transactions from the LTM (Lightweight Transaction Manager) to the DTC when certain criteria come into play, for example when you access different databases in one transaction.
The escalation from LTM to DTC, or rather the decision whether an escalation will be forced, is a system decision. So far I haven't found a way to change this behaviour, which is why you need to think about your transactions in general: if there is a way to avoid multiple-database access, you may want to rethink your transactions.
For further information I'd recommend Why is my TransactionScope trying to use MSDTC when used in an EF Code First app?, MSSQL Error 'The underlying provider failed on Open' and How do I use TransactionScope in C#?
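For question 3 specifically, here is a hedged sketch of manual enlistment, assuming EF6 and a provider whose connection supports EnlistTransaction (SqlConnection does; check whether your Oracle provider does too). MyDbContext is a placeholder:

using (var scope = new TransactionScope())
using (var context = new MyDbContext())
{
    // with enlist=false the connection will not auto-enlist, so open it
    // yourself and enlist it explicitly in the ambient transaction
    context.Database.Connection.Open();
    context.Database.Connection.EnlistTransaction(Transaction.Current);

    // ... make changes ...
    context.SaveChanges();

    scope.Complete();
}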

Long running transactions with EF and SQL Server using Read Committed

I'm using EF6.1 and SQL Server on a WPF thick-client application, and by default I'm opening a transaction with each DbContext I instantiate (I commit it and reopen it on every SaveChanges() unless specified otherwise). The isolation level for these transactions is READ COMMITTED (IsolationLevel.ReadCommitted).
By default I'm opening a new context (and thus a new transaction) on each "main view". The application is a kind of fake-MDI app, and each MDI view uses its own DbContext... "main views" (every MDI tab/window) can contain other secondary views (think of small modal windows for specific data entry and the like) which share the same context (and transaction) as the one opened in the main view. I'm using a structure like UseCase -> Views -> ViewModels... generally a "UseCase" opens a DbContext and can spawn multiple views, which share it. Those secondary views usually call SaveChanges() without committing the transaction; that's why I want the transactions in the first place.
I've done some performance tests with a single user on a lab server, and there doesn't seem to be any difference (performance-wise) between opening the transaction when instantiating the context and having no transactions at all (other than the one EF opens by default on SaveChanges()).
I'm no SQL Server expert, so I'm wondering whether there are any implications (when the app is used by multiple users on a production server) of having many long-running transactions open on SQL Server with that isolation level (I understand the implications of other isolation levels which may block reads, but that's not the case here). I'm handling concurrency errors manually when committing the transactions.
Am I doing the right thing here, should I stick to short-lived transactions, or is it just a matter of preference?
I've been trying to find an answer to this but haven't found anything definitive (some people say long-lived transactions are not a good idea, but they don't seem to explain why).
some people say long-lived transactions are not a good idea, but they
don't seem to explain why
Just a couple of reasons:
An MS SQL transaction, depending on its isolation level, can obtain record locks (more common) or even metadata locks (more exotic). The longer a transaction lives, the more locks it can accumulate, so the probability of deadlocks increases.
Also, an uncommitted transaction ties up server resources. The transaction log grows, and log records belonging to active transactions cannot be truncated: the server must remember everything done within the transaction in order to commit it or roll it back.
by default I'm opening a transaction with each DbContext I instantiate
There should be a reason to do this. The only reason I can imagine is non-EF changes to the database which must be consistent with the EF changes. Otherwise, you're doing extra work which is useless at best and could waste database server resources.
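If you do need an explicit transaction (for example, to make non-EF work consistent with the EF changes), a hedged sketch of the short-lived alternative in EF6 keeps the transaction open only around the actual write instead of for the lifetime of the view (MyDbContext is a placeholder; remember that SaveChanges() alone already runs in its own transaction):

using (var context = new MyDbContext())
{
    // read data and modify entities here; no explicit transaction is open yet

    using (var tx = context.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted))
    {
        try
        {
            // non-EF changes that must be consistent with the EF changes
            // would also go here, on the same connection
            context.SaveChanges();
            tx.Commit();
        }
        catch
        {
            tx.Rollback();
            throw;
        }
    }
}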

How do I minimize or inform users of database connection lag / failure?

I'm maintaining an ASP/C# program that uses MS SQL Server 2008 R2 for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An application approval process (for Leave, Sick Leave, Overtime, Undertime, etc.) requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection, or heck, even putting a debug point in VS2005 and letting it hang there long enough, leaves the approval process incomplete. The tables are often just joined together, so a data mismatch - missing data here, a primary key that failed to update there - would render an entire row useless.
Now, I know that there is nothing I can do to prevent this - it is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to
inform the users that something went wrong with the process? A
rollback changes feature (either via program, or SQL), so that any
incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
1. To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
2. To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .Net
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against an RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction, which does the hard work for you.
using (var ts = new TransactionScope())
{
    // do some work here that may or may not succeed

    // if this line is reached, the transaction will commit. If an exception is
    // thrown before this line is reached, the transaction will be rolled back.
    ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing out a transaction from your .Net code (a sketch follows after these steps):
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by seeing that the INSERT was rolled back automatically.
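A sketch of that test (connectionString and dbo.InsertThenFail are hypothetical; the procedure is assumed to INSERT a row and then call RAISERROR):

using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("dbo.InsertThenFail", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    conn.Open();
    cmd.ExecuteNonQuery(); // throws a SqlException when the procedure raises its error
    scope.Complete();      // never reached, so the INSERT is rolled back
}

Query the table afterwards: the inserted row should not be there.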
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of TSQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can do something similar to a TransactionScope but without the security risk that a TransactionScope can introduce.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
    If blnEverythingGoesWell Then
        scope.Commit()
    Else
        scope.Rollback()
    End If
End Using
If you don't call Commit, the default is to roll back the transaction when it is disposed.
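For comparison, a hedged C# equivalent of the same pattern (connectionString and the UPDATE statement are illustrative); note that, unlike with TransactionScope, the command has to be associated with the transaction explicitly:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    using (var cmd = conn.CreateCommand())
    {
        cmd.Transaction = tran; // enlistment is manual with this pattern
        cmd.CommandText = "UPDATE dbo.SomeTable SET Flag = 1";
        try
        {
            cmd.ExecuteNonQuery();
            tran.Commit();
        }
        catch
        {
            tran.Rollback(); // explicit, though Dispose would also roll back
            throw;
        }
    }
}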

Does TransactionScope make a cross-process lock on the database in LINQ queries?

I have several applications running on one machine, along with an MSSQL server.
The applications are of various types: WPF, WCF service, MVC app and so on.
All of them access a single database located on that SQL server.
Access is via simple LINQ-to-SQL class calls.
In each database contact I make some queries, some checks, and some writes.
My question is:
Can I be sure that calls inside those transaction scopes are not running at the same time (i.e. are thread- and process-safe) just by using a simple TransactionScope instance?
Using a transaction scope will obviously make a particular connection transactional. The use of transaction scopes by itself doesn't stop two different processes on a machine doing the same thing at once. It does ensure that all actions performed are either committed or rolled back. The view of the data each process sees depends on the isolation level, which by default is Serializable; that can easily lead to deadlocks. A more practical isolation level is read committed, preferably with snapshot isolation, as this further reduces deadlocks and wait times.
If you want to ensure that only one instance of the application is doing something, you can use a mutex, or a database lock that all the different processes attempt to acquire and, if necessary, wait for.
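A hedged sketch of the database-lock option using SQL Server's sp_getapplock (connectionString and the resource name are arbitrary; every process that requests the same resource name serializes on the lock, and the lock is released when the transaction ends):

using (var scope = new TransactionScope())
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("sp_getapplock", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@Resource", "MyAppCriticalSection");
    cmd.Parameters.AddWithValue("@LockMode", "Exclusive");
    cmd.Parameters.AddWithValue("@LockTimeout", 30000); // ms to wait for the lock
    var ret = cmd.Parameters.Add("@ReturnVal", SqlDbType.Int);
    ret.Direction = ParameterDirection.ReturnValue;

    conn.Open();
    cmd.ExecuteNonQuery();
    if ((int)ret.Value < 0)
        throw new TimeoutException("Could not acquire the application lock.");

    // ... the queries, checks and writes run here, one process at a time ...

    scope.Complete(); // the lock goes away with the transaction
}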

TransactionScope with IsolationLevel set to Serializable is locking all SQL SELECTs

I'm using PowerShell transactions, which create a CommittableTransaction with an IsolationLevel of Serializable. The problem is that while a transaction is executing in this context, all SELECTs against the tables affected by the transaction are blocked on any connection other than the one executing the transaction. I can perform reads from within the transaction, but not anywhere else, including from SSMS and other cmdlet executions. Is this expected behavior? It seems like I'm missing something...
PS Script:
Start-Transaction
Add-Something -UseTransaction
Get-Something #hangs here until timeout
Add-Something -UseTransaction
Undo-Transaction
Serializable transactions will block any updates on the ranges scanned under this isolation level. By itself, the serializable isolation level does not block reads. If you find that reads are blocked, something else must be at play, and it depends on what you do in those scripts.
It sounds as if your database has ALLOW_SNAPSHOT_ISOLATION=OFF. This setting controls the concurrency mechanism used by the database:
ALLOW_SNAPSHOT_ISOLATION=OFF: This is the traditional mode of SQL Server, with lock-based concurrency. This mode may lead to locking problems.
ALLOW_SNAPSHOT_ISOLATION=ON: This has been available since SQL Server 2005 and uses MVCC, pretty similar to what Oracle or PostgreSQL do. This is better for concurrency, as readers do not block writers and writers do not block readers.
Note that these two modes do not behave in the same way, so you must code your transactions assuming one mode or the other.
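For reference, a hedged C# sketch of a reader running under snapshot isolation, assuming the database has been switched with ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON (connectionString and the table name are illustrative). The reader sees a consistent version of the data instead of blocking on the writer's locks:

var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.Snapshot
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.SomeTable", conn))
{
    conn.Open();
    var count = (int)cmd.ExecuteScalar(); // not blocked by a concurrent serializable writer
    scope.Complete();
}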
