I am running into situations in my application where I need to use table lock hints or set the transaction isolation level to something other than the default Read Committed, in order to resolve deadlock issues. I am using a service oriented architecture, with each service call operating as an atomic operation, and Linq To Sql is serving as a lightweight DAL. Each service call calls my Business Layer and declares a new transaction like this:
using (var scope = new TransactionScope())
{
// Get datacontext
// do Business Logic stuff, including database operations via Linq To Sql
// Save transaction
scope.Complete();
}
The problem is that sometimes I have complicated business logic that requires many database operations: some reads, some writes, some reads for the purpose of updating, and so on, all within the same service call, and thus the same transaction.
I have read about the inability of Linq To Sql to add table lock hints to your linq query, with the suggested solution of using TransactionScope isolation levels instead. That's great and all, but in my situation, where each Transaction is for the purpose of an atomic service call, I don't see where this would work. For example, if I need to read one table without locking and dirty reads may be OK, and turn around and do another read for the purpose of updating, and do an update. I don't want to set Read Uncommitted for the entire transaction, only one particular read, so what do I do?
Is there not an extension I can implement that will allow me to add table lock hints, without using views or stored procedures, and without resorting to datacontext.ExecuteQuery("my raw sql string here")?
I think the best answer here is to use multiple Transactions, and batch the transactions that only read "dirty" in one batch, and the updates that require read committed in another batch. If any information needs to cross batches, setup a temporary in memory cache for that data.
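One way to sketch that batching idea (method names here are placeholders for your own Linq To Sql calls; this assumes standard System.Transactions usage) is to give each batch its own TransactionScope with an explicit isolation level:

```csharp
using System.Transactions;

// Batch 1: dirty reads are acceptable here, so use ReadUncommitted.
var dirtyReadOptions = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadUncommitted
};

ReferenceData cache; // hypothetical in-memory cache crossing the batches
using (var scope = new TransactionScope(TransactionScopeOption.Required, dirtyReadOptions))
{
    cache = ReadReferenceData(); // placeholder for your read-only queries
    scope.Complete();
}

// Batch 2: reads-for-update and writes run under the default ReadCommitted.
using (var scope = new TransactionScope())
{
    ApplyUpdates(cache); // placeholder for your update logic
    scope.Complete();
}
```

Note that data read under ReadUncommitted in the first scope may have changed by the time the second scope runs, so the update batch should re-validate anything it depends on.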
Related
I want to read / write into a DB from multiple threads.
After some research, I remembered the ACID rules. Do I need to call myTrans = myConnection.BeginTransaction(); every time I want to read/write from inside a thread, in order to keep this Transaction safe from dirty reads/writes (and myTrans.Commit();)? In normal SQL I would use SET TRANSACTION ISOLATION LEVEL SERIALIZABLE to secure it.
How do I do that in C#?
Thanks in advance
You only need to call BeginTransaction() if you need multiple statements included in the same transaction. For single statements it isn't normally necessary to satisfy the ACID rules, because each individual call (ExecuteReader()/ExecuteScalar()/ExecuteNonQuery()/Fill()) runs in its own implicit transaction.
Even across multiple statements, my tendency is to put the statements into the same long SQL string (or stored procedure) and include any needed transaction instructions as part of the SQL.
In terms of thread safety, the best thing to do is use a separate, brand-new connection object for each transaction and wrap it in a using block. Connections are not thread-safe, so the way to protect them is to give each thread (or each transaction within a thread) its own connection that it doesn't have to share.
Even within a thread, it's better NOT to re-use the same connection. There is a feature called Connection Pooling, where the connection object you see in the C# code is a light-weight wrapper for a much-heavier actual connection that is shared from a pool. Trying to re-use the same connection object throughout a thread or application optimizes for the light thing at the expense of the heavy thing.
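A minimal sketch of that pattern (the connection string and SQL text are placeholders):

```csharp
using System.Data.SqlClient;

// One brand-new connection per unit of work. Connection pooling makes this
// cheap: disposing the wrapper just returns the heavy connection to the pool.
using (var connection = new SqlConnection(connectionString)) // placeholder
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    {
        using (var command = new SqlCommand(
            "UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE Id = 1", // placeholder SQL
            connection, transaction))
        {
            command.ExecuteNonQuery();
        }
        transaction.Commit(); // disposing the transaction without Commit() rolls back
    }
} // Dispose() returns the underlying pooled connection
```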
I am trying to use a DocumentDb write as a part of a transaction like below -
using (var scope = new TransactionScope())
{
//first transaction
//write to document db
//third transaction
}
I observed that if the third transaction fails, the DocumentDb write is not rolled back and I still see the document in the collection. The first transaction (NEventStore in this case) rolls back perfectly. Does anyone know if DocumentDb supports TransactionScope? What if I have a nested transaction?
Thanks!
Edit:
So looks like TransactionScope is not supported with DocumentDb and it knows nothing about them. Is there a way to make DocumentDb transactions part of an external transaction from C#? Has anyone come across this use case before?
Edit 2: Follow-up question and answer here as suggested
DocumentDB operations are independent from TransactionScope. Once an operation returns, it's done. The database service doesn't know anything about TransactionScope and isn't connected to it in any way.
DocumentDB does have a transaction scope of its own, when working with server-side stored procedures. You can have multiple database calls within the stored proc, and if everything is successful, there's an implicit commit upon the stored procedure exiting. If something goes wrong and an exception is thrown, an implicit rollback is executed for all operations performed to the database within the stored procedure's scope.
Lots of SQL users don't know what to do when transactions are not available.
You should implement compensation logic yourself, or use a framework such as Windows Workflow Foundation. Compensation logic is covered by the Enterprise Integration Patterns. You can also use a correlation ID to check whether the larger operation completed.
SQL people manage big operations the same way when a long-running transaction is necessary.
https://www.amazon.com/Enterprise-Integration-Patterns-Designing-Deploying/dp/0321200683/ref=sr_1_1?ie=UTF8&qid=1480917322&sr=8-1&keywords=integration+patterns
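A compensation-based sketch for the DocumentDB case above (all method names are hypothetical placeholders): since the DocumentDB write commits as soon as it returns, you undo it manually if a later step fails.

```csharp
// Hypothetical sketch: DocumentDB writes cannot enlist in a TransactionScope,
// so we compensate manually if a subsequent step fails.
string documentId = null;
try
{
    documentId = WriteToDocumentDb(document); // placeholder; committed on return
    RunThirdTransaction();                    // placeholder for the step that may fail
}
catch
{
    if (documentId != null)
    {
        // Compensating action: undo the already-committed write.
        DeleteFromDocumentDb(documentId);     // placeholder
    }
    throw;
}
```

Note that compensation is not a true rollback: other readers may observe the document during the window between the write and the delete.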
I'm maintaining an ASP/C# program that uses MS SQL Server 2008 R2 for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An Application (for Leave, Sick Leave, Overtime, Undertime, etc.) Approval process requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection (or, heck, leaving a debug point in VS2005 hanging long enough) leaves the Application Approval process incomplete. The tables are often simply joined together, so a data mismatch (missing data here, a primary key that failed to update there) renders an entire row useless.
Now, I know that there is nothing I can do to prevent this; it is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .Net
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against a RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction which does the hard work for you.
using( var ts = new TransactionScope() )
{
// do some work here that may or may not succeed
// if this line is reached, the transaction will commit. If an exception is
// thrown before this line is reached, the transaction will be rolled back.
ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing out a transaction from your .Net code.
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by seeing that the INSERT was rolled back automatically.
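A sketch of that test (the stored procedure name and connection string are placeholders; the procedure is assumed to perform an INSERT and then raise an error):

```csharp
using System.Data;
using System.Data.SqlClient;
using System.Transactions;

try
{
    using (var ts = new TransactionScope())
    using (var connection = new SqlConnection(connectionString)) // placeholder
    {
        connection.Open(); // enlists in the ambient transaction automatically
        using (var command = new SqlCommand("dbo.InsertThenFail", connection)) // hypothetical proc
        {
            command.CommandType = CommandType.StoredProcedure;
            command.ExecuteNonQuery(); // the proc INSERTs, then raises an error
        }
        ts.Complete(); // never reached if the proc throws
    }
}
catch (SqlException)
{
    // The scope was disposed without Complete(), so the INSERT is rolled back.
    // Query the table here to confirm the row is gone.
}
```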
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of TSQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can get behavior similar to a TransactionScope without creating the security risk that a TransactionScope can pose.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
If blnEverythingGoesWell Then
scope.Commit()
Else
scope.Rollback()
End If
End Using
If you don't call Commit, the transaction is rolled back by default when the Using block ends.
A while ago, I wrote an application used by multiple users to handle trades creation.
I haven't done development for some time now, and I can't remember how I managed the concurrency between the users. Thus, I'm seeking some advice in terms of design.
The original application had the following characteristics:
One heavy client per user.
A single database.
Access to the database for each user to insert/update/delete trades.
A grid in the application reflecting the trades table. That grid being updated each time someone changes a deal.
I am using WPF.
Here's what I'm wondering:
Am I correct in thinking that I shouldn't worry about the database connection for each client? Given that each client holds a singleton, I would expect one connection per client with no issue.
How can I go about preventing concurrent access to the data? I guess I should lock when modifying it, but I don't remember how.
How do I set up the grid to automatically update whenever my database is updated (by another user, for example)?
Thank you in advance for your help!
Consider leveraging Connection Pooling to reduce # of connections. See: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
Lock as late as possible and release as soon as possible to maximize concurrency. You can use TransactionScope (see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx and http://blogs.msdn.com/b/dbrowne/archive/2010/05/21/using-new-transactionscope-considered-harmful.aspx) if you have multiple DB actions that need to go together to maintain consistency, or just handle them in a DB stored procedure. Keep your queries simple. Follow these tips to understand how locking works and how to reduce resource contention and deadlocks: http://www.devx.com/gethelpon/10MinuteSolution/16488
I am not sure about other databases, but for SQL Server you can use SqlDependency; see http://msdn.microsoft.com/en-us/library/a52dhwx7(v=vs.80).aspx
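A minimal SqlDependency sketch for the grid-refresh scenario (connection string, query, and RefreshGrid are placeholders; note that SqlDependency only supports a restricted class of queries, e.g. two-part table names and explicit column lists, no SELECT *):

```csharp
using System.Data.SqlClient;

// Call once per application, before subscribing.
SqlDependency.Start(connectionString); // placeholder connection string

void SubscribeToTradeChanges()
{
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(
        "SELECT TradeId, Amount FROM dbo.Trades", connection)) // placeholder query
    {
        var dependency = new SqlDependency(command);
        dependency.OnChange += (sender, e) =>
        {
            // A dependency fires only once; re-subscribe, then refresh the grid.
            // In WPF, marshal back to the UI thread (e.g. via the Dispatcher).
            SubscribeToTradeChanges();
            RefreshGrid(); // placeholder for reloading the trades grid
        };

        connection.Open();
        command.ExecuteReader().Dispose(); // executing the command registers the subscription
    }
}
```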
Concurrency is usually managed by the DBMS using locks. Locks are a kind of semaphore that grants exclusive access to a certain resource and causes other accesses to be restricted or queued (only restricted in the case where you use uncommitted reads).
The number of connections itself does not pose a problem while you are not reaching heights where you might touch on the max_connections setting of your DBMS. Otherwise, you might get a problem connecting to it for maintenance purposes or for shutting it down.
DBMSes usually use a concept of either table locks (MyISAM) or row locks (InnoDB, most other DBMSes). The type of lock determines the volume of the lock. Table locks can be very fast but are usually considered inferior to row level locks.
Row-level locks occur inside a transaction (implicit or explicit). When manually starting a transaction, you begin your transaction scope. Until you manually close the transaction scope, all changes you make will be attributed to this exact transaction. The changes you make will also obey the ACID paradigm.
Transaction scope and how to use it is a topic far too long for this platform, if you want, I can post some links that carry more information on this topic.
For the automatic updates, most databases support some kind of trigger mechanism: code that runs on specific actions against the database (for instance, the creation of a new record or the change of a record). You could put your code inside such a trigger. However, you should only inform a receiving application of the changes, not actually "do" the changes from the trigger, even if the language makes it possible. Remember that the action which fired the trigger is suspended until your trigger code finishes. This means a lean trigger is best, if one is needed at all.
I have more than 20 operations to run in one shot. I want to use a transaction scope for this. Is it possible? And if so, what is the advantage of using the TransactionScope class over a simple transaction?
What is the best practice for using a transaction scope?
The advantages of TransactionScope are:
you don't have to pass a transaction around (ADO.NET should enlist automatically)
which means that you can even use a TransactionScope to add transactions to existing, closed-source code (i.e. no changes are required)
a TransactionScope can (via DTC) span multiple resources (i.e. multiple databases, or a database and an MSMQ server, etc)
But you pay for this a little bit with speed. A connection-based transaction is slightly quicker (not much), but can only span a single resource and needs to be attached manually to all your DAL code. However, if you only talk to a single SQL2005/SQL2008 instance, then it can use the "LTM" - meaning it doesn't have to involve DTC (which is where most of the performance cost comes from) unless it absolutely needs to. Many TransactionScope operations can therefore complete talking only to the database.
If you want to span 20 operations, then TransactionScope should be ideal - it'll save you having to pass the transactions around, and allow each operation to manage their connection locally - making for highly re-usable code. Indeed, TransactionScope can be nested in the ways you expect, so you can have:
void Deposit(...) { /* uses TranScope to do a deposit */ }
void Debit(...) { /* uses TranScope to do a debit */ }
void Transfer(...) { /* uses a TranScope to span Debit/Deposit */ }
To do this using database transactions you'd need to pass the connection and transaction objects to each method, which quickly gets ungainly - especially if you need to code to work with/without an existing open transaction (lots of "if(tran==null)" etc).
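A fleshed-out sketch of that nesting (account parameters and the Execute* DAL calls are placeholders): each method declares its own scope, and the inner scopes enlist in the ambient transaction when Transfer calls them.

```csharp
using System.Transactions;

// Each method declares its own scope; nested scopes join the ambient
// transaction, so Transfer makes Debit + Deposit atomic as a pair.
void Deposit(int accountId, decimal amount)
{
    using (var ts = new TransactionScope())
    {
        ExecuteDeposit(accountId, amount); // placeholder DAL call
        ts.Complete();
    }
}

void Debit(int accountId, decimal amount)
{
    using (var ts = new TransactionScope())
    {
        ExecuteDebit(accountId, amount);   // placeholder DAL call
        ts.Complete();
    }
}

void Transfer(int fromId, int toId, decimal amount)
{
    using (var ts = new TransactionScope())
    {
        Debit(fromId, amount);   // joins Transfer's ambient transaction
        Deposit(toId, amount);   // joins Transfer's ambient transaction
        ts.Complete();           // nothing commits until this root scope completes
    }
}
```

Called on their own, Deposit and Debit each commit independently; called from Transfer, both succeed or both roll back.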
For more information, see Transactions in .net