SQL Server Management Studio hangs when I have snapshot isolation in code - C#

I have set my database's snapshot_isolation_state_desc to ON.
In C#, I start a new transaction:
var dbTransaction = _myContext.Database.BeginTransaction(IsolationLevel.Snapshot);
// delete
// --- break point
// insert
At the break point, when I go to SQL Server Management Studio and query a table, the query hangs until I complete the transaction. I would like to be able to see the data in the table rather than just hang, but I also want to complete my C# transaction. Am I using the wrong isolation level?
Thanks in advance

Am I using the wrong Isolation level?
Yes. The default isolation level in SSMS is READ COMMITTED, so writers (the app code) will block readers (the SSMS query) unless you have turned on the READ_COMMITTED_SNAPSHOT database option. Each session can run at a different isolation level, and the behavior of each depends on the level chosen for that session.
Set the desired isolation level in the SSMS query window prior to querying the table so that your query is not blocked by the uncommitted change made by the app code:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
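Putting it together, a minimal SSMS-side sketch (this assumes the database already has ALLOW_SNAPSHOT_ISOLATION ON, which it must for the app's Snapshot transaction to start at all; MyTable is a placeholder name):

```sql
-- Run in the SSMS query window before querying the table.
-- Snapshot reads see the last committed version of each row,
-- so they are not blocked by the app's uncommitted delete/insert.
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM MyTable;
```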

Related

Isolation level in C# or SQL - which one will be used?

I have set the isolation level in C# code as ReadCommitted, and I am calling a stored procedure which is timing out for some reason. The stored procedure does not have any SET TRANSACTION ISOLATION LEVEL statement.
In SQL Server, database level isolation level is read committed snapshot.
So which isolation level will be used? The one defined in SQL Server, or the one set from C#?
There is no such thing as a 'database isolation level'. What you describe is a database option called READ_COMMITTED_SNAPSHOT:
READ_COMMITTED_SNAPSHOT { ON | OFF } ON Enables Read-Committed Snapshot option at the database level. When it's enabled, DML statements start generating row versions even when no transaction uses Snapshot Isolation. Once this option is enabled, the transactions specifying the read committed isolation level use row versioning instead of locking.
So when READ_COMMITTED_SNAPSHOT is ON, a transaction that specified the read committed isolation level will instead read row versions, behaving much like snapshot isolation at the statement level.
It is important to understand that there is another database option: ALLOW_SNAPSHOT_ISOLATION that also must be set to ON for Snapshot isolation to occur. See Snapshot Isolation in SQL Server.
When in doubt, you can always check sys.dm_tran_current_transaction which has a column named transaction_is_snapshot:
Snapshot isolation state. This value is 1 if the transaction is started under snapshot isolation. Otherwise, the value is 0.
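As a sketch, checking that DMV from inside a transaction might look like this:

```sql
BEGIN TRANSACTION;

-- transaction_is_snapshot = 1 means this transaction started under
-- true snapshot isolation; it stays 0 for read committed, even with
-- READ_COMMITTED_SNAPSHOT enabled.
SELECT transaction_id, transaction_is_snapshot
FROM sys.dm_tran_current_transaction;

COMMIT TRANSACTION;
```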
Also, there are subtle differences between true snapshot isolation level and read committed isolation that is changed to snapshot by READ_COMMITTED_SNAPSHOT.
Commands to set the transaction isolation level are processed in the order received, so the last one wins. On SQL Server you can also set a default transaction isolation level, but it is just a default.

SET TRANSACTION ISOLATION LEVEL works only with transactions?

In the official example here we have the SET TRANSACTION ISOLATION LEVEL being used in conjunction with an explicitly defined transaction.
My question is, if I execute a query from a SqlCommand, like:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * from MyTable
would I benefit from the new isolation level I set?
Or do I need to explicitly define a transaction like this?
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
BEGIN TRANSACTION;
SELECT * from MyTable
COMMIT TRANSACTION;
UPDATE:
As per Randy Levy's answer, I will update my query as follows:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * from MyTable;
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
This is to overcome possible isolation level leaks when using pooling.
Yes, you would benefit from the transaction isolation level that you set even if not within an explicit BEGIN TRANSACTION. When you set the transaction isolation level it is set on a connection level.
From SET TRANSACTION ISOLATION LEVEL (Transact-SQL):
Only one of the isolation level options can be set at a time, and it
remains set for that connection until it is explicitly changed.
One "gotcha" (issue) that can occur is that the isolation level can leak between different connections when using pooling. If you are explicitly setting an isolation level in one (or some) particular piece(s) of code (but using the default most other places) and also using connection pooling. This can cause strange issues if code expects the default isolation level "A" but obtains a connection that had the isolation level explicitly set to "B".
It seems this issue is now fixed in later versions of SQL Server: SQL Server: Isolation level leaks across pooled connections
The first one
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * from MyTable
will work. The isolation level you set applies to each subsequent transaction, and your SELECT statement is its own implicit transaction.
You would only need to explicitly start a transaction if you needed to ensure some degree of consistency throughout multiple reads. For example if you use SERIALIZABLE then you could wrap multiple SELECTs in a transaction and ensure that the underlying data isn't modified while you're reading it.
Every statement in SQL Server is run in the context of a transaction. When you do something like
select * from [dbo].[foobar];
SQL Server really does:
begin transaction;
select * from [dbo].[foobar];
commit;
So, setting an explicit transaction isolation level does affect transactions. Even the implicit ones that the database engine starts on your behalf!
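A minimal C# sketch of the set-then-reset pattern from the update above (the connection string and table name are placeholders):

```csharp
using System.Data.SqlClient;

// Set the isolation level, run the query, then reset it so a pooled
// connection does not leak READ UNCOMMITTED to its next user.
using (var connection = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
{
    connection.Open();
    var sql = @"SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
                SELECT * FROM MyTable;
                SET TRANSACTION ISOLATION LEVEL READ COMMITTED;";
    using (var command = new SqlCommand(sql, connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // process rows
        }
    }
}
```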

NHibernate IStatelessSession CreateQuery failure

When we released last friday, I received an error which I do not get on acceptance. The error message is:
could not execute update query[SQL: delete from dbo.MyTable where col1=? and col2=? and col3=? and col4=? and col5=?]
My C# code is as follows:
var hqlDelete = "DELETE MyTable m WHERE m.Col1 = :var_1 AND m.Col2 = :var_2 AND m.Col3= :var_3 AND m.Col4 = :var_4 AND m.Col5= :var_5";
var deletedEntities = session.CreateQuery(hqlDelete)
.SetString("var_1", variable1)
.SetString("var_2", variable2)
.SetString("var_3", variable3)
.SetString("var_4", variable4)
.SetString("var_5", variable5)
.ExecuteUpdate();
transaction.Commit();
session.Close();
Now, as I said, the error did not trigger when testing on acceptance. Also, when I test with the production database (code from my developer seat), it works without problems too.
The code is triggered when I call a web service and POST a "measurement" to it. The only difference is that I call the service myself when testing, while on production another company sends measurements to the web service.
I think it might have something to do with the amount of sessions/transactions, but that would not really explain why the variables show up as ? in the error message.
Any ideas? Is there more information I could supply so you can help me with this one?
Edit: InnerExeption is
{"Transaction (Process ID 68) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction."}
Solving deadlocks can be a hard matter, especially when using an ORM. Deadlocks usually occur because locks on database objects are not acquired in the same order by different processes (or threads), causing them to wait on each other.
An ORM does not give you much control over lock acquisition order. You may rework your query ordering, but this can be tedious, especially when caching causes some queries to never hit the database. Moreover, the same ordering would have to be enforced in every other application using the same database.
You may detect deadlock errors and do what the message say: retry the whole process. With NHibernate, this means discarding the current session and retry your whole unit of work.
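A sketch of that retry loop might look like the following (sessionFactory and DoWork are illustrative names; the unit of work is your own code):

```csharp
using NHibernate;
using NHibernate.Exceptions;
using System.Data.SqlClient;

// Retry the whole unit of work when SQL Server reports a deadlock
// (error 1205), discarding the failed session each time.
const int MaxRetries = 3;
for (int attempt = 1; ; attempt++)
{
    try
    {
        using (ISession session = sessionFactory.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            DoWork(session);  // the whole unit of work
            tx.Commit();
        }
        break;  // success
    }
    catch (GenericADOException ex) when (
        ex.InnerException is SqlException sqlEx
        && sqlEx.Number == 1205
        && attempt < MaxRetries)
    {
        // Chosen as deadlock victim: fall through and retry.
    }
}
```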
If your database is SQL Server, there is a default setting which greatly increases deadlock risk: read committed snapshot mode is disabled by default. If it is disabled on your database, you may greatly reduce deadlock risk by enabling it. This mode allows reads under the read committed isolation level to stop issuing read locks.
You may check this setting with
select snapshot_isolation_state_desc, is_read_committed_snapshot_on
from sys.databases
where name = 'YourDbName'
You may enable this setting with
alter database YourDbName
set allow_snapshot_isolation on
alter database YourDbName
set read_committed_snapshot on
This requires that there are no running transactions on the target database. And of course, it requires admin rights on the database.
On an application for which I did not have the option to change this setting, I had to go a more quirky way: setting NHibernate's default isolation mode (the connection.isolation configuration parameter) to ReadUncommitted. My application was mostly read-only, and I elevated the isolation mode explicitly on the few transactions that had to read then write data (using session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted), for example).
You should also check the isolation levels currently used by all applications using the database: are some of them using a higher isolation level than actually required? (RepeatableRead and Serializable should be avoided if possible.) This is a time-consuming process, since it requires a good understanding of isolation levels and studying each use case to determine the appropriate minimal level.

SQL Server - Prevent lost update and deadlocks

I am working on a project with 2 applications developed in C# (.NET framework 4) -
WCF service (exposed to customers)
ASP.NET webforms application (Head office use).
Both applications can select and update rows from a common “accounts” table in a SQL Server 2005 database. Each row in the accounts table holds a customer balance.
The business logic for a request in both applications involves selecting a row from the "accounts" table, doing some processing based on the balance, and then updating the balance in the database. The processing between selecting and updating the balance cannot participate in a transaction.
I realized it is possible that, between selecting the row and updating it, the row could be selected and updated by another request from the same or a different application.
I found this issue described in the below article. I am referring to 2nd scenario of "lost update".
http://www.codeproject.com/Articles/342248/Locks-and-Duration-of-Transactions-in-MS-SQL-Serve
The second scenario is when one transaction (Transaction A) reads a
record and retrieve the value into a local variable and that same
record will be updated by another transaction (Transaction B). And
later Transaction A will update the record using the value in the
local variable. In this scenario the update done by Transaction B can
be considered as a "Lost Update".
I am looking for a way to prevent the above situation and to prevent balance from becoming negative if multiple concurrent requests are received for the same row. A row should be selected and updated by only a single request (from either application) at a time to ensure the balance is consistent.
I am thinking along the lines of blocking access to a row as soon as it has been selected by one request. Based on my research below are my observations.
Isolation levels
With 'Repeatable read' isolation level it is possible for 2 transactions to select a common row.
I tested this by opening 2 SSMS windows. In both windows I started a transaction with the Repeatable read isolation level, followed by a select on a common row. I was able to select the row in each transaction.
Next I tried to update the same row from each transaction. The statements kept running for a few seconds. Then the update from the 1st transaction succeeded, while the update from the 2nd transaction failed with the message below.
Error 1205 : Transaction (Process ID) was deadlocked on lock resources
with another process and has been chosen as the deadlock victim. Rerun
the transaction.
So if I am using a transaction with Repeatable read, it should not be possible for 2 concurrent transactions to update the same row. SQL Server automatically chooses to roll back one of the transactions. Is this correct?
But I would also like to avoid the deadlock error by allowing a particular row to be selected by a single transaction only.
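The two-window experiment described above can be reproduced with a sketch like this (Accounts, Balance, and AccountId are placeholder names):

```sql
-- Run these statements in BOTH SSMS windows (Session 1 and Session 2).
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRANSACTION;

-- Under REPEATABLE READ, both sessions take shared locks on the row
-- that are held until the transaction ends.
SELECT Balance FROM Accounts WHERE AccountId = 1;

-- Now run the update in both sessions: each needs an exclusive lock
-- blocked by the other session's shared lock, so SQL Server resolves
-- the cycle by killing one session with error 1205.
UPDATE Accounts SET Balance = Balance - 10 WHERE AccountId = 1;

COMMIT TRANSACTION;
```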
Rowlock
I found the below answer on Stackoverflow that mentioned use of ROWLOCK hint to prevent deadlock. (see the comment of the accepted answer).
Minimum transaction isolation level to avoid "Lost Updates"
I started a transaction and used a select statement with the ROWLOCK and UPDLOCK hints. Then, in a new SSMS window, I started another transaction and tried to run the same select query (with the same hints). This time I was not able to select the row; the statement kept running in the new SSMS window.
So using ROWLOCK and UPDLOCK within a transaction blocks the row for other select statements that use the same lock hints.
I would appreciate it if someone could answer the below questions.
Are my observations regarding isolation levels and rowlock correct?
For the scenario that I described should I use ROWLOCK and UPDLOCK hints to block access to a row? If not what is the correct approach?
I am planning to place my select and update code in a transaction. The first select query in the transaction will use the ROWLOCK and UPDLOCK hints. This will prevent the record from being selected by another transaction that uses select with the same locks to retrieve the same row.
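For reference, the select-then-update pattern described here might be sketched as follows (Accounts, Balance, AccountId, and the parameters are placeholder names):

```sql
BEGIN TRANSACTION;

-- UPDLOCK takes an update lock on the selected row, so a concurrent
-- transaction running the same statement blocks until we commit,
-- instead of deadlocking on a later lock conversion.
DECLARE @Balance decimal(18, 2);
SELECT @Balance = Balance
FROM Accounts WITH (UPDLOCK, ROWLOCK)
WHERE AccountId = @AccountId;

-- ...processing based on @Balance, e.g. rejecting a negative result...

UPDATE Accounts
SET Balance = @Balance - @Amount
WHERE AccountId = @AccountId;

COMMIT TRANSACTION;
```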
I would suggest the SNAPSHOT isolation level. It is very similar to Oracle's lock management.
See http://www.databasejournal.com/features/mssql/snapshot-isolation-level-in-sql-server-what-why-and-how-part-1.html
If your code is not too complicated, you can probably implement this without any changes. Bear in mind that some visibility may be affected (i.e., dirty reads may not give dirty data).
I find this blanket approach easier and more precise than using query hints all over the place.
Configure the database using:
ALTER DATABASE YourDbName SET ALLOW_SNAPSHOT_ISOLATION ON;
Then use this to prefix your transaction statements:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT

Allow select while transaction is running

Here is the code that modifies a table in one transaction. As far as I know, with IsolationLevel.Serializable reads should not be blocked, yet I can't select records from the table. How can I run the transaction without blocking selects on the table?
TransactionOptions opt = new TransactionOptions();
opt.IsolationLevel = IsolationLevel.Serializable;
using (TransactionScope scope = new TransactionScope(
    TransactionScopeOption.Required, opt))
{
    // inserts record into table
    myObj.MyAction();
    // trying to select the table from Management Studio
    myObj2.MyAction();
    scope.Complete();
}
Have a look at http://msdn.microsoft.com/en-us/library/ms173763.aspx for an explanation of the isolation levels in SQL Server. SERIALIZABLE offers the highest level of isolation and takes range locks on tables which are held until the transaction completes. You'll have to use a lower isolation level to allow concurrent reads during your transaction.
It doesn't matter what isolation level your (insert, update, etc) code is running under - it matters what isolation level the SELECT is running under.
By default, this is READ COMMITTED, so your SELECT query is unable to proceed while there is uncommitted data in the table. You can change the isolation level that the select runs under using SET TRANSACTION ISOLATION LEVEL to allow it to READ UNCOMMITTED, or specify a table hint (NOLOCK).
But whatever you do, it has to be done to the connection/session where the select is running. There's no way for you to tell SQL Server "Please, ignore the settings that other connections have set, just break their expectations".
If you generally want selects to be able to proceed on a database wide basis, you might look into turning on READ_COMMITTED_SNAPSHOT. This is a global change to the database - not something that can or should be toggled on or off for the running of a single statement or set of statements, but it then allow READ COMMITTED queries to continue, without requiring locks.
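The two reader-side options mentioned above might look like this in the SSMS session that runs the SELECT (MyTable is a placeholder name):

```sql
-- Run in the session that issues the SELECT, not the writer's session.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM MyTable;

-- Or, equivalently for a single query, use a table hint:
SELECT * FROM MyTable WITH (NOLOCK);
```

Both read uncommitted data, so the results may include rows the writer later rolls back.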
Serializable is the highest transaction isolation level. It will hold the most restrictive locks.
What are you trying to protect with an isolation level of Serializable?
Read Committed Snapshot might be more appropriate, but we would need more information to be sure.
