Consider the following code, which does not roll back the transaction if an exception is caught.
transaction = connection.BeginTransaction();
command.Transaction = transaction;
try {
    // interact with database here
}
catch {}
finally {
    connection.Close();
}
What are the consequences of this, and is it necessary to roll back the transaction?
The best approach is to create your transaction inside a using block, like this:
using (var transaction = connection.BeginTransaction()) // or however you create the transaction you want
{
    // perform your transaction here
    transaction.Commit();
}
If your code fails before the call to Commit, the transaction will automatically be rolled back as the using block is exited.
It will leave an open transaction on the database, which could potentially block other queries.
Taken from here:
Consider the following general guidelines when you use transactions so that you can avoid causing deadlocks:

Always access tables in the same order across transactions in your application. The likelihood of a deadlock increases when you access tables in a different order each time you access them.

Keep transactions as short as possible. Do not make blocking or long-running calls from a transaction. One approach is to run transactions close to the data source. For example, run a transaction from a stored procedure instead of running the transaction from a different computer.

Choose a level of isolation that balances concurrency and data integrity. The highest isolation level, serializable, reduces concurrency and provides the highest level of data integrity. The lowest isolation level, read uncommitted, gives the opposite result.
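In ADO.NET you can choose that isolation level per transaction. A minimal sketch (System.Data.SqlClient and the dbo.Orders table are assumptions for illustration, not from the guidelines above):

using System;
using System.Data;
using System.Data.SqlClient;

class IsolationChoice
{
    static void Run(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();

            // Highest data integrity, lowest concurrency.
            using (var tran = conn.BeginTransaction(IsolationLevel.Serializable))
            {
                var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, tran);
                Console.WriteLine(cmd.ExecuteScalar());
                tran.Commit();
            }

            // Lowest data integrity (dirty reads possible), highest concurrency.
            using (var tran = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
            {
                var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", conn, tran);
                Console.WriteLine(cmd.ExecuteScalar());
                tran.Commit();
            }
        }
    }
}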
Suppose I have a query:
begin tran
-- some other sql code
And then I forget to commit or roll back.
If another client tries to execute a query, what would happen?
As long as you don't COMMIT or ROLLBACK a transaction, it's still "running" and potentially holding locks.
If your client (application or user) closes the connection to the database before committing, any still running transactions will be rolled back and terminated.
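Here is a minimal sketch of that rollback-on-close behavior (System.Data.SqlClient and the dbo.Accounts table are assumptions for illustration):

using System.Data.SqlClient;

class RollbackOnClose
{
    static void Run(string connStr)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            var tran = conn.BeginTransaction();
            new SqlCommand("UPDATE dbo.Accounts SET Balance = 0 WHERE Id = 1",
                           conn, tran).ExecuteNonQuery();
            // no tran.Commit() here
        } // Dispose closes the connection; the server rolls the open transaction back
    }
}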
You can actually try this yourself; it should help you get a feel for how this works.
Open two windows (tabs) in Management Studio; each of them will have its own connection to SQL Server.
Now you can begin a transaction in one window and do some stuff like insert/update/delete, but not yet commit. Then, in the other window, you can see how the database looks from outside the transaction. Depending on the isolation level, the table may be locked until the first window commits, or you may or may not see what the other transaction has done so far, and so on.
Play around with the different isolation levels and the NOLOCK hint to see how they affect the results.
Also see what happens when you throw an error in the transaction.
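The same experiment can also be scripted from .Net with two connections. A minimal sketch (System.Data.SqlClient and a hypothetical dbo.Users table assumed) showing a dirty read from the second connection:

using System;
using System.Data.SqlClient;

class DirtyReadDemo
{
    static void Run(string connStr)
    {
        using (var connA = new SqlConnection(connStr))
        using (var connB = new SqlConnection(connStr))
        {
            connA.Open();
            connB.Open();

            // Connection A: update a row but do not commit yet.
            var tran = connA.BeginTransaction();
            new SqlCommand("UPDATE dbo.Users SET Name = 'Changed' WHERE Id = 1",
                           connA, tran).ExecuteNonQuery();

            // Connection B: READ UNCOMMITTED sees the dirty value; under
            // READ COMMITTED this SELECT would block until A commits or rolls back.
            var read = new SqlCommand(
                "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; " +
                "SELECT Name FROM dbo.Users WHERE Id = 1", connB);
            Console.WriteLine(read.ExecuteScalar()); // prints the uncommitted value

            tran.Rollback(); // A rolls back; the dirty value disappears
        }
    }
}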
It's very important to understand how all this stuff works, or you will be stumped many a time by what SQL Server does.
Have fun! GJ.
Transactions are intended to run completely or not at all. The only way to complete a transaction is to commit; any other way results in a rollback.
Therefore, if you begin a transaction and then do not commit, it will be rolled back when the connection closes (as the transaction was broken off without being marked as complete).
It depends on the isolation level of the incoming transaction.
SQL transaction isolation explained
When you open a transaction nothing gets locked by itself. But if you execute some queries inside that transaction, depending on the isolation level, some rows, tables or pages get locked so it will affect other queries that try to access them from other transactions.
Example of a transaction:
begin try
    begin tran tt;
    -- your SQL statements here
    commit tran tt;
end try
begin catch
    if @@TRANCOUNT > 0
        rollback tran tt;
end catch;
As long as you have not executed COMMIT TRAN tt, the data changes will not be made permanent.
Any uncommitted transaction will keep its locks on the server, and other queries that need the locked resources will block. You either need to roll back the transaction or commit it. Closing out of SSMS will also terminate the transaction, which will allow other queries to execute.
In addition to the potential locking problems, you will also find that your transaction log begins to grow, as it cannot be truncated past the minimum LSN of an active transaction, and if you are using snapshot isolation, your version store in tempdb will grow for similar reasons.
You can use DBCC OPENTRAN to see the details of the oldest open transaction.
I really did forget to commit a transaction. I have a query like the code below.
This stored procedure is called from .Net. When I test the function in the .Net application, the exception is captured in the .Net application.
The exception message is as follows:
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
When I realized the mistake, I tried many times, both in the .Net application and in SQL Server Management Studio (2018). (In SSMS, the output statement successfully outputs the result in the Results tab, but shows the error message in the Messages tab.)
Then I found that the tables used in this transaction were locked. When I select the top 1000 rows without ORDER BY ... DESC, I can get the result; but when I select the top 1000 rows with ORDER BY ... DESC, the query runs for a long time.
When I closed the .Net application, the transaction was not committed (judging by the fact that the data the transaction should have changed was unchanged).
When I close the EXEC ... tab (the one in which the query with the forgotten COMMIT was executed), SSMS pops up a warning window:
There are uncommitted transactions. Do you wish to commit these transactions?
I have tested both the Yes and No choices.
If I click Yes, the transactions are committed.
If I click No, the transactions aren't committed.
After I close the tab, the locks on my table are released, and then I can query successfully.
begin try
    -- some process
    begin transaction
    update ...
    output ...
    insert ...
    -- I was missing this commit statement below
    commit transaction
end try
begin catch
    if xact_state() = -1
    begin
        rollback transaction;
        throw;
    end;
    -- I meant to compare to 1 in the next statement but had mistakenly written -1;
    -- since the THROW above re-raises the error first, the mistake was never triggered
    if xact_state() = 1
    begin
        commit transaction;
    end;
end catch;
The behaviour is not defined, so you must explicitly issue a commit or a rollback:
http://docs.oracle.com/cd/B10500_01/java.920/a96654/basic.htm#1003303
"If auto-commit mode is disabled and you close the connection without explicitly committing or rolling back your last changes, then an implicit COMMIT operation is executed."
HSQLDB performs a rollback:
con.setAutoCommit(false);
Statement stmt = con.createStatement();
stmt.executeUpdate("insert into USER values ('" + insertedUserId + "','Anton','Alaf')");
con.close();
result is
2011-11-14 14:20:22,519 main INFO [SqlAutoCommitExample:55] [AutoCommit enabled = false]
2011-11-14 14:20:22,546 main INFO [SqlAutoCommitExample:65] [Found 0# users in database]
Does a transaction lock my table when I'm running multiple queries?
Example: if another user tries to send data at the same time that my transaction is running, what will happen?
Also, how can I avoid that, while making sure that all the data has been inserted successfully into the database?
Begin Tran;
Insert into Customers (name) values(name1);
Update CustomerTrans
set CustomerName = (name2);
Commit;
You have to implement transactions smartly. Below are some performance-related points:
Locking, optimistic vs. pessimistic: in pessimistic locking the whole table is locked, but in optimistic locking only the specific rows are locked.
Isolation level, Read Committed vs. Read Uncommitted: when a table is locked, and if your business scenario allows it, you can go for dirty reads using the WITH (NOLOCK) hint.
Try to use a WHERE clause in updates and do proper indexing. For any heavy query, check the query plan.
The transaction timeout should be short, so that if the table is locked an error is thrown quickly, and you can retry in your catch block.
These are a few things you can do.
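To the original question of making sure the insert and the update either both succeed or both fail, here is a minimal sketch with SqlTransaction (the table and column names are taken from the question; the parameterization and connection string are assumptions for illustration):

using System.Data.SqlClient;

class AtomicInsertUpdate
{
    static void Run(string connStr, string name1, string name2)
    {
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (var tran = conn.BeginTransaction())
            {
                try
                {
                    var insert = new SqlCommand(
                        "INSERT INTO Customers (name) VALUES (@n)", conn, tran);
                    insert.Parameters.AddWithValue("@n", name1);
                    insert.ExecuteNonQuery();

                    var update = new SqlCommand(
                        "UPDATE CustomerTrans SET CustomerName = @n", conn, tran);
                    update.Parameters.AddWithValue("@n", name2);
                    update.ExecuteNonQuery();

                    tran.Commit(); // both changes become visible atomically
                }
                catch
                {
                    tran.Rollback(); // neither change is applied
                    throw;
                }
            }
        }
    }
}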
You cannot prevent multiple users from loading data into the database, and it is neither feasible nor clever to lock a table every time a single user requests its usage. Actually you do not have to worry about it, because the DB itself provides mechanisms to avoid such issues. I would recommend reading up on the ACID properties:
Atomicity
Consistency
Isolation
Durability
What may happen is that you suffer a ghost read, which basically means that you cannot read data until the user who is inserting it commits. And even if that user has finished inserting data but does not commit, there is a fair chance that you will not see the changes.
DDL operations such as creation, removal, etc. are implicitly committed at the end, whereas DML operations such as update, insert, delete, etc. are not committed at the end.
I have a .net web page in which two stored procs are called.
Both of these have BEGIN-COMMIT transactions in SQL Server. Also, I am calling the second proc multiple times, depending on some if conditions.
I want to wrap this whole process in a single transaction. I have looked around and found that the SqlTransaction and TransactionScope classes in C# can help me in this situation.
But I have never used them (I have always used transactions in SQL Server), and so I do not know whether the transactions in .Net will have problems, given that both my stored procs have their own BEGIN-COMMIT transactions in SQL Server.
If they do conflict, is there a way to get them to work under a single transaction?
Yes, it is possible to call (e.g. existing or legacy) Stored Procs from .Net which use manual BEGIN TRAN, COMMIT TRAN / ROLLBACKs under a .Net TransactionScope, or if you manage the transaction from a SqlTransaction. (Although to state the obvious, if you can avoid using multiple transaction technologies, do so).
For the 'happy case' scenario, what will happen is that @@TRANCOUNT will be increased when the SPROC transactions are called (just as with nested transactions in SQL Server). Transactions are only committed when @@TRANCOUNT hits zero after the outermost commit on the connection, i.e. inner commits simply decrease @@TRANCOUNT. Note however that the same is not true for ROLLBACKs - unless you are using SAVEPOINTs, any rollback will roll back the entire transaction. You'll need to be very careful about matching up your @@TRANCOUNTs.
You sound a bit undecided about TransactionScope vs SqlTransaction. TransactionScope is more versatile, in that it can span both single phase and distributed transactions (using DTC). However, if you only need to coordinate the transaction on a single connection, same database, then SqlTransaction would also be fine.
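A minimal sketch of the TransactionScope approach (the proc names are hypothetical); the procs' inner BEGIN/COMMIT TRAN pairs just raise and lower @@TRANCOUNT, so nothing is durably committed until the scope completes:

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

class WrapTwoProcs
{
    static void Run(string connStr)
    {
        using (var scope = new TransactionScope())
        using (var conn = new SqlConnection(connStr))
        {
            conn.Open(); // enlists in the ambient transaction

            var first = new SqlCommand("dbo.usp_FirstProc", conn)
                { CommandType = CommandType.StoredProcedure };
            first.ExecuteNonQuery();

            for (int i = 0; i < 3; i++) // called multiple times, as in the question
            {
                var second = new SqlCommand("dbo.usp_SecondProc", conn)
                    { CommandType = CommandType.StoredProcedure };
                second.ExecuteNonQuery();
            }

            scope.Complete(); // without this, everything rolls back on Dispose
        }
    }
}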
I get the following SQL exception:
Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim. Rerun the transaction. Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.
I don't have any transactions in any stored procedures; I do the transaction from .Net, and I always call them within a using block.
Have you guys met this before?
A transaction is a transaction, no matter where started. Whether in c# or the RDBMS.
Your using block effectively issues a BEGIN TRANSACTION.
MSDN (for SQL Server 2000, but still valid) recommends that you retry automatically when a deadlock is detected. Rather than write code here, there are many results on Google for you to peruse.
When using TransactionScope you need to be careful, as by default it sets the isolation level to Serializable. When the connection is released back into the pool it will still have that level set, which can seriously harm concurrency.
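A minimal sketch of overriding that default (System.Transactions assumed):

using System.Transactions;

class ScopeWithIsolation
{
    static void Run()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted, // instead of Serializable
            Timeout = TransactionManager.DefaultTimeout
        };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // open connections and do work here
            scope.Complete();
        }
    }
}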
We have a test that runs within a transaction scope. We dispose of the transaction scope at the end to avoid changing the database.
This works fine in most cases.
However, when we use Entity Framework to execute a stored procedure which contains a transaction that is committed inside the stored procedure, we get the following error:
"Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.\r\n "
Is it possible to combine a transaction scope with committing a transaction inside a stored procedure?
While you may or may not be able to solve this particular problem, I'd suggest that avoiding it entirely might be a better option. As you've seen, depending on a transaction to guarantee that your database is in a particular state doesn't always work. Also, because you are using multiple connections to the DB, you've automatically promoted any transactions that do occur to distributed transactions -- a subtle distinction, perhaps, but it changes the nature of the test. You may end up writing code to overcome the particular limitations of distributed transactions that wouldn't otherwise have been needed.
A better strategy would be -- for unit tests, anyway -- to mock out the database dependency, using in-memory mock or fake objects in place of the database. I've done something similar for LINQ to SQL (see my blog entry on the subject). For integration tests, I think you are better off using a test instance and writing setup code that reinitializes the state of the DB to known values before each test, rather than introducing an extra transaction to clean things up. That way, if your cleanup code fails in a test, it won't affect the other tests being run.
I use the following code inside an SP to handle contexts where a transaction may or may not currently be in force:
DECLARE @InTran int
SET @InTran = @@TRANCOUNT
IF @InTran = 0 BEGIN TRANSACTION
/* Stuff happens */
IF @InTran = 0 AND @@TRANCOUNT > 0 COMMIT TRANSACTION
The only thing I'm not sure of is whether @@TRANCOUNT reflects a transaction from a TransactionScope; it's worth a shot though.