NHibernate IStatelessSession CreateQuery failure - C#

When we released last Friday, I received an error which I do not get on acceptance. The error message is:
could not execute update query[SQL: delete from dbo.MyTable where col1=? and col2=? and col3=? and col4=? and col5=?]
My C# code is as follows:
var hqlDelete = "DELETE MyTable m WHERE m.Col1 = :var_1 AND m.Col2 = :var_2 AND m.Col3 = :var_3 AND m.Col4 = :var_4 AND m.Col5 = :var_5";
var deletedEntities = session.CreateQuery(hqlDelete)
    .SetString("var_1", variable1)
    .SetString("var_2", variable2)
    .SetString("var_3", variable3)
    .SetString("var_4", variable4)
    .SetString("var_5", variable5)
    .ExecuteUpdate();
transaction.Commit();
session.Close();
Now, as I said, the error did not trigger when testing on acceptance. Also, when I test with the production database (code from my developer seat), it works without problems too.
The code is triggered when I call a web service and POST a "measurement" to it. The only difference is that I call the service myself when testing, while on production another company sends measurements to the web service.
I think it might have something to do with the number of sessions/transactions, but that would not really explain why the variables show up as ? in the error message.
Any ideas? Is there more information I could supply so you can help me with this one?
Edit: The InnerException is
{"Transaction (Process ID 68) was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim. Rerun the transaction."}

Solving deadlocks can be a hard matter, especially when using an ORM. Deadlocks usually occur because locks on database objects are not acquired in the same order by different processes (or threads), causing them to wait for each other.
An ORM does not give you much control over the lock acquisition order. You may rework the ordering of your queries, but this can be tedious, especially when caching causes some of them not to hit the database. Moreover, the same ordering would have to be used by any other application working against the same database.
You may detect deadlock errors and do what the message says: retry the whole process. With NHibernate, this means discarding the current session and retrying your whole unit of work.
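For illustration, here is a minimal retry sketch (the helper name DeadlockRetry.Run and the maxAttempts parameter are made up for the example; it assumes the deadlock surfaces as a GenericADOException wrapping a SqlException with error number 1205):
using System;
using System.Data.SqlClient;      // SqlException
using NHibernate;                 // ISessionFactory, IStatelessSession
using NHibernate.Exceptions;      // GenericADOException

static class DeadlockRetry
{
    // Re-runs the whole unit of work when SQL Server reports a deadlock
    // (error 1205). Each attempt gets a fresh stateless session.
    public static void Run(ISessionFactory factory, Action<IStatelessSession> work, int maxAttempts = 3)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                using (var session = factory.OpenStatelessSession())
                using (var tx = session.BeginTransaction())
                {
                    work(session);
                    tx.Commit();
                    return;
                }
            }
            catch (GenericADOException ex)
                when (ex.InnerException is SqlException sqlEx
                      && sqlEx.Number == 1205      // chosen as deadlock victim
                      && attempt < maxAttempts)
            {
                // The failed session is disposed by the using blocks; loop and retry.
            }
        }
    }
}
The unit of work is then passed as a delegate, for example DeadlockRetry.Run(sessionFactory, s => s.CreateQuery(hqlDelete).SetString("var_1", variable1) /* ... */ .ExecuteUpdate());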
If your database is SQL Server, there is a default setting which greatly increases the risk of deadlocks: read committed snapshot mode being disabled. If it is disabled on your database, you may greatly reduce deadlock risks by enabling it. This mode allows reads under the read committed isolation level to stop issuing read locks.
You may check this setting with
select snapshot_isolation_state_desc, is_read_committed_snapshot_on
from sys.databases
where name = 'YourDbName'
You may enable this setting with
alter database YourDbName
set allow_snapshot_isolation on
alter database YourDbName
set read_committed_snapshot on
This requires that no transaction is running on the target database. And of course, it requires admin rights on the database.
On an application for which changing this setting was not an option, I had to go a more quirky way: setting the NHibernate default isolation mode (the connection.isolation configuration parameter) to ReadUncommitted. My application was mostly read-only, and I elevated the isolation mode explicitly on the few transactions that had to read and then write data (using session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted), for example).
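A minimal sketch of that workaround, assuming a programmatic NHibernate Configuration (the connection.isolation property corresponds to NHibernate.Cfg.Environment.Isolation, and the enum-name string is how I recall NHibernate expecting the value):
using NHibernate;
using NHibernate.Cfg;

// Default every connection to ReadUncommitted (the quirky workaround described above).
var cfg = new Configuration();
cfg.SetProperty(NHibernate.Cfg.Environment.Isolation, "ReadUncommitted");
var sessionFactory = cfg.BuildSessionFactory();

// Elevate the isolation level explicitly for the few read-then-write transactions.
using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction(System.Data.IsolationLevel.ReadCommitted))
{
    // read then write...
    tx.Commit();
}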
You should also check the isolation modes currently used by all applications using the database: do some of them use a higher isolation level than actually required? (RepeatableRead and Serializable should be avoided if possible.) This is a time-consuming process, since it requires a good understanding of isolation levels and a study of each use case to determine the minimal isolation level it actually needs.

Related

nHibernate locking - does LockMode UPGRADE block a subsequent NONE read on the same row

We have a card order process which can take a few seconds to run. Data in this process is read using LockMode.UPGRADE in nHibernate.
A second (webhook) process, which runs with LockMode.NONE, is occasionally being triggered before the first order process completes, creating a race condition: it appears to be using the original row data. It is not being blocked until the first process is complete, so it is getting old data.
Our database is not running with NOWAIT or any of the snapshot / read committed snapshot settings.
My question is: can LockMode.NONE somehow ignore the UPGRADE lock and read the old data (from a cache, perhaps)?
Thanks.
Upgrade locks are for preventing another transaction from acquiring the same data for upgrade too.
A transaction running in read committed mode without additional locks will still be able to read the data. You need an exclusive lock to block read committed reads (provided they are not using snapshot isolation). Read more on Technet.
As far as I know, NHibernate does not provide a way to issue exclusive locks on entity reads. (You will get them by actually updating the entities in the transaction.) You may work around this by using CreateSQLQuery to issue your locks directly in the database, or by acquiring the upgrade lock in your webhook process too.
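For illustration, a minimal sketch of the CreateSQLQuery workaround against SQL Server, using lock hints (the table and column names are made up):
// Inside an open transaction: take an exclusive row lock directly in the database.
// The lock is held until the transaction ends, so read committed readers on other
// connections will wait instead of seeing the old row.
session.CreateSQLQuery(
        "select OrderId from dbo.CardOrders with (xlock, rowlock) where OrderId = :id")
    .SetInt64("id", orderId)
    .UniqueResult();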
But this path will surely lead to deadlocks or lock contention.
More generally, avoiding explicitly locking by adapting code patterns is preferable. I have just written an example here, see the "no explicit lock" solution.
As for your webhook process, is there really any difference between these two situations?
1. It obtains its data, which did not have any update lock on it, but before it processes the data, it gets updated by the order process.
2. It obtains its data, including some rows which had an update lock on them.
Generally you will still have the same trouble to solve, and avoiding 2. will not resolve 1. Here too, it is the way the process works with its data that needs to be adapted to support concurrency. (Hint: using long-running transactions for solving 1. will surely lead to lock contention too.)

How do I minimize or inform users of database connection lag / failure?

I'm maintaining an ASP/C# program that uses MS SQL Server 2008 R2 for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An Application (for Leave, Sick Leave, Overtime, Undertime, etc.) Approval process requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection - or heck, if I put a debug point in VS2005 and let it hang there long enough - leaves the Application Approval Process incomplete. The tables are often just joined together, so a data mismatch - missing data here, a primary key that failed to update there - would mean an entire row would be useless.
Now, I know that there is nothing I can do to prevent this - this is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .Net
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against an RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures to be cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction which does the hard work for you.
using (var ts = new TransactionScope())
{
    // do some work here that may or may not succeed

    // If this line is reached, the transaction will commit. If an exception is
    // thrown before this line is reached, the transaction will be rolled back.
    ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing out a transaction from your .Net code, along the lines of the sketch after these steps.
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by seeing that the INSERT was rolled back automatically.
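A minimal sketch of such a test, assuming a hypothetical stored procedure dbo.InsertMeasurement that performs the INSERT and then deliberately raises an error (the connection string and names are placeholders):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

// Because the procedure throws, ts.Complete() is never reached and the INSERT
// is rolled back when the scope is disposed.
using (var ts = new TransactionScope())
using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
{
    conn.Open(); // the connection enlists in the ambient transaction

    using (var cmd = new SqlCommand("dbo.InsertMeasurement", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.AddWithValue("@Value", 42);
        cmd.ExecuteNonQuery(); // the procedure INSERTs, then raises an error
    }

    ts.Complete(); // never reached in this test
}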
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of T-SQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can do something similar to a TransactionScope but without the security risk that a TransactionScope represents.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
    If blnEverythingGoesWell Then
        scope.Commit()
    Else
        scope.Rollback()
    End If
End Using
If you don't call Commit, the default when the Using block ends is to roll back the transaction.
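For the C# readers, a roughly equivalent sketch (connection, command and everythingGoesWell are placeholders; connection is assumed to be an open SqlConnection):
using (var scope = connection.BeginTransaction())
{
    command.Transaction = scope; // each command must be enlisted in the transaction

    if (everythingGoesWell)
        scope.Commit();
    else
        scope.Rollback();

    // If Commit is never called, disposing the transaction rolls it back.
}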

Does TransactionScope make a cross-process lock on the database in LINQ queries?

I have several applications running on one machine, together with an MSSQL server on that machine.
The applications are of various types: WPF, WCF service, MVC app, and so on.
All of them access the same database, which is located on the SQL server.
Access is done through simple LINQ to SQL class calls.
In each database contact I make some queries, some checks, and some writes.
My question is:
Can I be sure that calls inside those transaction scopes are not running at the same time (are thread and process safe) simply by using a TransactionScope instance?
Using a transaction scope will obviously make a particular connection transactional. The use of transaction scopes in itself doesn't stop two different processes on a machine doing the same thing at once. It does ensure that all actions performed are either committed or rolled back. The view of data each process sees depends on the isolation level, which by default is Serializable; this can easily lead to deadlocks. A more practical isolation level is read committed, preferably with snapshot isolation, as this further reduces deadlocks and wait times.
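As a sketch, the Serializable default can be overridden through TransactionOptions when the scope is created:
using System.Transactions;

var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
{
    // LINQ to SQL / ADO.NET work done here enlists with read committed isolation
    ts.Complete();
}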
If you want to ensure that only one instance of an application is doing something, you can use a mutex, or use a database lock that all the different processes will attempt to acquire and, if necessary, wait for.
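For example, a minimal sketch of the mutex approach using a machine-wide named mutex (the name is made up):
using System;
using System.Threading;

// The Global\ prefix makes the mutex visible to every process on the machine.
using (var mutex = new Mutex(initiallyOwned: false, name: @"Global\MyAppCriticalSection"))
{
    if (mutex.WaitOne(TimeSpan.FromSeconds(30))) // wait for any other process to finish
    {
        try
        {
            // queries, checks and writes that must not run concurrently
        }
        finally
        {
            mutex.ReleaseMutex();
        }
    }
}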

What's the best way to manage concurrency in a database access application?

A while ago, I wrote an application used by multiple users to handle trade creation.
I haven't done development for some time now, and I can't remember how I managed the concurrency between the users. Thus, I'm seeking some advice in terms of design.
The original application had the following characteristics:
One heavy client per user.
A single database.
Access to the database for each user to insert/update/delete trades.
A grid in the application reflecting the trades table, updated each time someone changes a deal.
I am using WPF.
Here's what I'm wondering:
Am I correct in thinking that I shouldn't care about the connection to the database for each application? Considering that there is a singleton in each, I would expect one connection per client with no issue.
How can I go about preventing concurrent accesses? I guess I should lock when modifying the data, but I don't remember how to.
How do I set up the grid to automatically update whenever my database is updated (by another user, for example)?
Thank you in advance for your help!
Consider leveraging connection pooling to reduce the number of connections. See: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
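As a sketch: pooling is on by default for SqlConnection, and the connection string keywords below only make it explicit (server and database names are placeholders). Open connections late and dispose them early so they return to the pool quickly:
using System.Data.SqlClient;

var connectionString =
    "Server=.;Database=Trades;Integrated Security=true;Pooling=true;Max Pool Size=100";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open(); // taken from the pool
    // run commands here
} // returned to the pool here, not physically closed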
Lock as late as possible and release as soon as possible to maximize concurrency. You can use TransactionScope (see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx and http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx) if you have multiple database actions that need to go together to maintain consistency, or just handle them in a database stored procedure. Keep your queries simple. Follow these tips to understand how locking works and how to reduce resource contention and deadlocks: http://www.devx.com/gethelpon/10MinuteSolution/16488
I am not sure about other databases, but for SQL Server you can use SqlDependency; see http://msdn.microsoft.com/en-us/library/a52dhwx7(v=vs.80).aspx
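A minimal sketch of SqlDependency usage for refreshing the grid. It assumes Service Broker is enabled on the database and a query that meets the notification restrictions (explicit column list, two-part table name); the table dbo.Trades and its columns are made up:
using System.Data.SqlClient;

// Call once at application startup, before any dependency is created:
//     SqlDependency.Start(connectionString);

// Then, each time the grid is (re)loaded:
void LoadTradesWithNotification(string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT TradeId, Amount FROM dbo.Trades", conn))
    {
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (sender, e) =>
        {
            // Fired once when the result set changes: call this method again to
            // re-subscribe, then refresh the grid on the UI thread.
        };

        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // populate the grid from the reader
        }
    }
}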
Concurrency is usually handled by the DBMS using locks. Locks are a kind of semaphore that grants exclusive access to a certain resource and causes other accesses to be restricted or queued (only restricted in the case where you use uncommitted reads).
The number of connections itself does not pose a problem as long as you stay well below the max_connections setting of your DBMS. Otherwise, you might have trouble connecting to it for maintenance purposes or for shutting it down.
DBMSes usually use a concept of either table locks (MyISAM) or row locks (InnoDB, most other DBMSes). The type of lock determines the granularity of the lock. Table locks can be very fast but are usually considered inferior to row-level locks.
Row-level locks occur inside a transaction (implicit or explicit). When manually starting a transaction, you begin your transaction scope. Until you close the transaction scope, all changes you make will be attributed to this exact transaction. The changes you make will also obey the ACID paradigm.
Transaction scope and how to use it is a topic far too long for this platform; if you want, I can post some links that carry more information on it.
For the automatic updates, most databases support some kind of trigger mechanism: code that is run on specific actions in the database (for instance the creation of a new record or the change of a record). You could put your code inside such a trigger. However, you should only inform a receiving application of the changes, not actually perform the changes from the trigger, even if the language might make it possible. Remember that the action which fired the trigger is suspended until your trigger code finishes. This means that a lean trigger is best, if one is needed at all.

TransactionScope with IsolationLevel set to Serializable is locking all SQL SELECTs

I'm using PowerShell transactions, which create a CommittableTransaction with an IsolationLevel of Serializable. The problem is that while a transaction is executing in this context, all SELECTs on the tables affected by the transaction are blocked on any connection besides the one executing the transaction. I can perform reads from within the transaction but not anywhere else. This includes SSMS and other cmdlet executions. Is this expected behavior? It seems like I'm missing something...
PS Script:
Start-Transaction
Add-Something -UseTransaction
Get-Something #hangs here until timeout
Add-Something -UseTransaction
Undo-Transaction
Serializable transactions will block any updates on the ranges scanned under this isolation. By itself, the serializable isolation level does not block reads. If you find that reads are blocked, something else must be at play, and it depends on what you do in those scripts.
Sounds as if your database has ALLOW_SNAPSHOT_ISOLATION=OFF. This setting controls the concurrency mechanism used by the database:
ALLOW_SNAPSHOT_ISOLATION=OFF: This is the traditional mode of SQL Server, with lock-based concurrency. This mode may lead to locking problems.
ALLOW_SNAPSHOT_ISOLATION=ON: This has been available since SQL Server 2005 and uses MVCC, pretty similar to what Oracle or PostgreSQL do. This is better for concurrency, as readers do not block writers and writers do not block readers.
Note that these two modes do not behave in the same way, so you must code your transactions assuming one mode or the other.
