Here is the code that modifies a table inside one transaction. As far as I know, with IsolationLevel Serializable reads should not be blocked, but I can't select records from the table while it runs. How can I run the transaction without blocking selects on the table?
TransactionOptions opt = new TransactionOptions();
opt.IsolationLevel = IsolationLevel.Serializable;
using (TransactionScope scope = new TransactionScope(
    TransactionScopeOption.Required, opt))
{
    // inserts record into table
    myObj.MyAction();
    // trying to select the table from Management Studio
    myObj2.MyAction();
    scope.Complete();
}
Have a look at http://msdn.microsoft.com/en-us/library/ms173763.aspx for an explanation of the isolation levels in SQL Server. SERIALIZABLE offers the highest level of isolation and takes range locks on tables which are held until the transaction completes. You'll have to use a lower isolation level to allow concurrent reads during your transaction.
It doesn't matter what isolation level your (insert, update, etc) code is running under - it matters what isolation level the SELECT is running under.
By default, this is READ COMMITTED - so your SELECT query is unable to proceed whilst there is *un*committed data in the table. You can change the isolation level that the select is running under using SET TRANSACTION ISOLATION LEVEL to allow it to READ UNCOMMITTED. Or specify a table hint (NOLOCK).
But whatever you do, it has to be done to the connection/session where the select is running. There's no way for you to tell SQL Server "Please, ignore the settings that other connections have set, just break their expectations".
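For example, in the SSMS session where the SELECT runs (the table name below is just a placeholder), either of these would let the read proceed past uncommitted rows, at the cost of dirty reads:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.MyTable;

-- or, per statement, with a table hint:
SELECT * FROM dbo.MyTable WITH (NOLOCK);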
If you generally want selects to be able to proceed on a database-wide basis, you might look into turning on READ_COMMITTED_SNAPSHOT. This is a global change to the database - not something that can or should be toggled on or off for the running of a single statement or set of statements - but it then allows READ COMMITTED queries to proceed without requiring shared locks.
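A minimal sketch of turning it on, assuming your database is called MyDb (the change waits for exclusive access to the database unless you add WITH ROLLBACK IMMEDIATE, which rolls back other open transactions):
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;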
Serializable is the highest transaction isolation level. It will hold the most restrictive locks.
What are you trying to protect with an isolation level of Serializable?
Read Committed Snapshot might be more appropriate, but we would need more information to be sure.
Related
Our production setup is that we have an application server with applications connecting to a SQL Server 2016 database. On the application server there are several IIS applications which run under a GMSA account. The GMSA account has db_datawriter and db_datareader privileges on the database.
Our team have db_datareader privileges on the same SQL Server database. We require this for production support purposes.
We recently had an incident where a team member invoked a query on SQL Server Management Studio on their local machine:
SELECT * FROM [DatabaseA].[dbo].[TableA] order by CreateDt desc;
TableA has about 1.4m records and there are multiple blob-type columns. CreateDt is a DATETIME2 column.
We have RedGate SQL Monitor configured for the SQL Server Database Server. This raised a long-running query alert that ran for 1738 seconds.
At the same time one of our web applications (.NET 4.6) which exclusively inserts new records to TableA was experiencing constant query timeout errors:
Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
These errors occurred for almost the exact same 1738 second period. This leads me to believe these are connected.
My understanding is that a SELECT query only creates a Shared lock and would not block access to this table for another connection. Is my understanding correct here?
My question is: is db_datareader safe for team members? Is there a lesser privilege that would allow reading data but with absolutely no way to create blocking behaviour?
The presence of SELECT * (SELECT STAR) in a query generally means no index can cover it, leading to a SCAN of the whole table.
With many LOBs (BLOBs, CLOBs or NCLOBs) and many rows, the ORDER BY clause will take a long time to:
generate the entries
sort them on CreateDt
So a read lock (shared lock) is held while all the data of the table is read. This lock accepts other shared locks but prevents an exclusive lock from being taken to modify the data (INSERT, UPDATE, DELETE). This guarantees to other users that the data won't be modified while it is being read.
This locking technique is known as pessimistic locking. The locks are taken before the query starts executing and released at the end, so readers block writers and writers block everyone.
The other technique, which SQL Server can also use, is called optimistic locking. It consists of reading from a copy (version) of the data, without any locking, and verifying at the end of the execution that the data involved in writes has not been modified since the beginning. So there is far less blocking.
To switch to this optimistic (row-versioning) behaviour, you have the choice of allowing it or forcing it:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;
ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON;
In SQL Server, writers block readers, and readers block writers.
This query doesn't have a WHERE clause and will touch the entire table, probably starting with IS (intent shared) locks and eventually escalating to a table-level shared lock that updates/inserts/deletes can't get past while it is held. That lock is likely held for the duration of the very long sort the ORDER BY is causing.
It can be bypassed in several ways, but I don't assume you're actually after how, seeing as whoever ran the query was probably not really thinking straight anyway, and this is not a regular occurrence.
Nevertheless, here are some ways to bypass:
Read Committed Snapshot Isolation
WITH (NOLOCK), but only if you don't really care about the data that is retrieved, as it can return rows twice, return rows that were never committed, and skip rows altogether.
Reducing the columns you return and reading from a non-clustered index instead.
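For the NOLOCK and reduced-column suggestions, rough sketches (the nonclustered index on CreateDt and the Id column are assumptions, not something given in the question):
-- dirty read: takes no shared locks, but may return uncommitted, duplicated or missing rows
SELECT * FROM [DatabaseA].[dbo].[TableA] WITH (NOLOCK) ORDER BY CreateDt DESC;

-- narrower query that a nonclustered index on CreateDt could satisfy without touching the LOB columns
SELECT Id, CreateDt
FROM [DatabaseA].[dbo].[TableA]
ORDER BY CreateDt DESC;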
But to answer your question, yes selects can block inserts.
I have set snapshot isolation on for my database (snapshot_isolation_state_desc = ON).
In C#, when I start a new transaction:
var dbTransaction = _myContext.Database.BeginTransaction(IsolationLevel.Snapshot);
// delete
// --- break point ---
// insert
At the break point, when I go to SQL Server Management Studio and query the table, it hangs until I complete the transaction. I would like to be able to see the data in the table, not just hang. But I also want to complete my C# transaction. Am I using the wrong isolation level?
Thanks in advance
Am I using the wrong Isolation level?
Yes. The default isolation level in SSMS is READ COMMITTED, so writers (the app code) will block readers (the SSMS query) unless you've turned on the READ_COMMITTED_SNAPSHOT database option. Each session can run at a different isolation level, and the behavior of each will depend on the level chosen for that session.
Set the desired isolation level in the SSMS query window prior to querying the table so that your query is not blocked by the uncommitted change made by the app code:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
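For example, in the SSMS query window (the table name is just a placeholder; SNAPSHOT requires ALLOW_SNAPSHOT_ISOLATION to be ON for the database, which you say you have already done):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM dbo.MyTable;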
Does a transaction lock my table when I'm running multiple queries?
Example: if another user tries to send data at the same time that my transaction is running, what will happen?
Also, how can I avoid this while still being sure that all the data has been inserted into the database successfully?
BEGIN TRAN;
INSERT INTO Customers (name) VALUES ('name1');
UPDATE CustomerTrans
SET CustomerName = 'name2';
COMMIT;
You have to implement the transaction smartly. Below are some performance-related points:
Locking, optimistic vs. pessimistic: with pessimistic locking the whole table is locked, but with optimistic locking only the specific row is locked.
Isolation level, Read Committed vs. Read Uncommitted: when the table is locked, it depends on your business scenario; if dirty reads are acceptable, you can use WITH (NOLOCK).
Try to use a WHERE clause in updates and do proper indexing. For any heavy query, check the query plan.
The transaction timeout should be kept short, so that if the table is locked an error is thrown quickly, and in the catch block you can retry (see the sketch after this list).
These are a few things you can do.
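As a rough T-SQL sketch of the retry idea, using the tables from the question (the variable names and retry count are assumptions, and THROW needs SQL Server 2012 or later):
DECLARE @name1 nvarchar(100) = N'name1', @name2 nvarchar(100) = N'name2';
DECLARE @retry int = 3;

WHILE @retry > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN;
        INSERT INTO Customers (name) VALUES (@name1);
        UPDATE CustomerTrans SET CustomerName = @name2;
        COMMIT;
        BREAK;  -- success, leave the retry loop
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK;
        SET @retry -= 1;
        -- only retry deadlocks (1205) and lock timeouts (1222); otherwise rethrow
        IF @retry = 0 OR ERROR_NUMBER() NOT IN (1205, 1222) THROW;
    END CATCH
END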
You cannot avoid multiple users loading data into the database, and it is neither feasible nor sensible to lock the table every time a single user requests it. Actually, you do not have to worry about it, because the DB itself provides mechanisms to avoid such issues. I would recommend reading up on the ACID properties:
Atomicity
Consistency
Isolation
Durability
What may happen is that you suffer a ghost read, which basically means that you cannot read data until the user who is inserting it commits. And if the other user has finished inserting data but has not committed, there is a fair chance that you will not see the changes.
DDL operations, such as creating or dropping objects, are committed automatically at the end of the statement. DML operations, such as UPDATE, INSERT and DELETE, are not automatically committed at the end.
When we released last Friday, I received an error which I do not get on acceptance. The error message is:
could not execute update query[SQL: delete from dbo.MyTable where col1=? and col2=? and col3=? and col4=? and col5=?]
My C# code is as follows:
var hqlDelete = "DELETE MyTable m WHERE m.Col1 = :var_1 AND m.Col2 = :var_2 AND m.Col3= :var_3 AND m.Col4 = :var_4 AND m.Col5= :var_5";
var deletedEntities = session.CreateQuery(hqlDelete)
    .SetString("var_1", variable1)
    .SetString("var_2", variable2)
    .SetString("var_3", variable3)
    .SetString("var_4", variable4)
    .SetString("var_5", variable5)
    .ExecuteUpdate();
transaction.Commit();
session.Close();
Now, as I said, the error did not trigger when testing on acceptance. Also, when I test with the production database (code from my developer seat), it works without problems too.
The code is triggered when I call a web service and POST a "measurement" to it. The only difference is that when testing I call the service myself, while in production another company sends measurements to the web service.
I think it might have something to do with the number of sessions/transactions, but that would not really explain why the variables show up as ? in the error message.
Any ideas? Is there more information I could supply so you can help me with this one?
Edit: InnerExeption is
{"Transaction (Process ID 68) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction."}
Solving deadlocks can be a hard matter, especially when using an ORM. Deadlocks usually occur because locks on database objects are not acquired in the same order by different processes (or threads), causing them to wait for each other.
An ORM does not give you much control over lock acquisition order. You may rework your query ordering, but this can be tedious, especially when caching causes some queries not to hit the DB at all. Moreover, the same ordering would have to be used by every other application using the same database.
You may detect deadlock errors and do what the message say: retry the whole process. With NHibernate, this means discarding the current session and retry your whole unit of work.
If your database is SQL Server, there is a default setting which greatly increases deadlock risk: read committed snapshot mode is disabled. If it is disabled on your database, you may greatly reduce deadlock risk by enabling it. This mode allows reads under the read committed isolation level to stop issuing read locks.
You may check this setting with
select snapshot_isolation_state_desc, is_read_committed_snapshot_on
from sys.databases
where name = 'YourDbName'
You may enable this setting with
alter database YourDbName
set allow_snapshot_isolation on
alter database YourDbName
set read_committed_snapshot on
This requires that there are no running transactions on the target DB. And of course, it requires admin rights on the DB.
On an application where I did not have the option to change this setting, I had to go a quirkier way: setting the NHibernate default isolation mode (the connection.isolation configuration parameter) to ReadUncommitted. My application was mostly read-only, and I elevated the isolation mode explicitly on the few transactions that had to read and then write data (using session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted), for example).
You should also check the isolation modes currently used by all applications accessing the database: are some of them using a higher isolation level than actually required? (RepeatableRead and Serializable should be avoided if possible.) This is a time-consuming process, since it requires a good understanding of isolation levels and a study of each use case to determine the appropriate minimal level.
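One way to check this, assuming you have VIEW SERVER STATE permission, is to look at the isolation level of the currently connected sessions (0 = unspecified, 1 = read uncommitted, 2 = read committed, 3 = repeatable read, 4 = serializable, 5 = snapshot):
select session_id, program_name, transaction_isolation_level
from sys.dm_exec_sessions
where is_user_process = 1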
I am working on a project with 2 applications developed in C# (.NET framework 4) -
WCF service (exposed to customers)
ASP.NET webforms application (Head office use).
Both applications can select and update rows from a common “accounts” table in a SQL Server 2005 database. Each row in the accounts table holds a customer balance.
The business logic for a request in both applications involves selecting a row from the "accounts" table, doing some processing based on the balance, followed by updating the balance in the database. The processing between selecting and updating the balance cannot participate in a transaction.
I realized it is possible that, between selecting the row and updating it, the row could be selected and updated by another request from the same or a different application.
I found this issue described in the below article. I am referring to 2nd scenario of "lost update".
http://www.codeproject.com/Articles/342248/Locks-and-Duration-of-Transactions-in-MS-SQL-Serve
The second scenario is when one transaction (Transaction A) reads a
record and retrieve the value into a local variable and that same
record will be updated by another transaction (Transaction B). And
later Transaction A will update the record using the value in the
local variable. In this scenario the update done by Transaction B can
be considered as a "Lost Update".
I am looking for a way to prevent the above situation and to prevent balance from becoming negative if multiple concurrent requests are received for the same row. A row should be selected and updated by only a single request (from either application) at a time to ensure the balance is consistent.
I am thinking along the lines of blocking access to a row as soon as it has been selected by one request. Based on my research below are my observations.
Isolation levels
With 'Repeatable read' isolation level it is possible for 2 transactions to select a common row.
I tested this by opening 2 SSMS windows. In both windows I started a transaction with the Repeatable Read isolation level, followed by a select on a common row. I was able to select the row in each transaction.
Next I tried to update the same row from each transaction. The statements kept running for a few seconds. Then the update from the 1st transaction succeeded, while the update from the 2nd transaction failed with the message below.
Error 1205 : Transaction (Process ID) was deadlocked on lock resources
with another process and has been chosen as the deadlock victim. Rerun
the transaction.
So if I am using a transaction with Repeatable Read, it should not be possible for 2 concurrent transactions to update the same row; SQL Server automatically chooses to roll back one of the transactions. Is this correct?
But I would also like to avoid the deadlock error by allowing a particular row to be selected by a single transaction only.
Rowlock
I found the below answer on Stackoverflow that mentioned use of ROWLOCK hint to prevent deadlock. (see the comment of the accepted answer).
Minimum transaction isolation level to avoid "Lost Updates"
I started a transaction and used a select statement with ROWLOCK and UPDLOCK. Then in a new SSMS window, I started another transaction and tried to use the same select query (with same locks). This time I was not able to select the row. The statement kept running in the new SSMS window.
So the use of ROWLOCK and UPDLOCK within a transaction seems to block the row for select statements that use the same lock hints.
I would appreciate it if someone could answer the below questions.
Are my observations regarding isolation levels and rowlock correct?
For the scenario that I described should I use ROWLOCK and UPDLOCK hints to block access to a row? If not what is the correct approach?
I am planning to place my select and update code in a transaction. The first select query in the transaction will use the ROWLOCK and UPDLOCK hints. This will prevent the record from being selected by another transaction that uses select with the same locks to retrieve the same row.
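To illustrate, a sketch of that plan (the Accounts table and its column names here are assumptions based on the description, not the real schema):
DECLARE @accountId int = 1, @amount decimal(18,2) = 10.00, @balance decimal(18,2);

BEGIN TRAN;

-- UPDLOCK makes other writers, and other readers using the same hint, wait until this transaction ends
SELECT @balance = Balance
FROM dbo.Accounts WITH (ROWLOCK, UPDLOCK)
WHERE AccountId = @accountId;

-- ... business processing based on @balance ...
IF @balance >= @amount
    UPDATE dbo.Accounts SET Balance = @balance - @amount WHERE AccountId = @accountId;

COMMIT;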
I would suggest the SQL isolation level SNAPSHOT. It is very similar to Oracle's lock management.
See http://www.databasejournal.com/features/mssql/snapshot-isolation-level-in-sql-server-what-why-and-how-part-1.html
If your code is not too complicated, you can probably implement this without any changes. Bear in mind that some visibility may be affected (i.e. dirty reads may no longer return dirty data).
I find this blanket approach easier and more precise than using query hints all over the place.
Configure the database using:
ALTER DATABASE YourDbName SET ALLOW_SNAPSHOT_ISOLATION ON
Then use this to prefix your transaction statements:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
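For example (Accounts table and column names assumed; under SNAPSHOT isolation, if another transaction modifies the same row after this one has read it, the UPDATE below fails with update-conflict error 3960 instead of silently overwriting the other change, so the lost update cannot happen unnoticed):
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

DECLARE @accountId int = 1, @balance decimal(18,2);

BEGIN TRAN;

SELECT @balance = Balance FROM dbo.Accounts WHERE AccountId = @accountId;

-- ... processing based on @balance ...

UPDATE dbo.Accounts SET Balance = @balance - 10.00 WHERE AccountId = @accountId;

COMMIT;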