SQLite Read Uncommitted in same transaction as write - c#

I'm using SQLite.Net from sqlite.org
I open a connection and begin a transaction. Inside the transaction I insert some rows, and in the same transaction I attempt to read back the data I have just written. I expected a transaction to see its own writes by default, but that does not seem to be the case.
I have opened the database in WAL mode.
How can I read uncommitted data from within the current transaction?

OK, this was my bad - I was actually reading on a different connection ...
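The distinction is easy to demonstrate. Below is a sketch using Python's stdlib sqlite3 driver (the question used a C# wrapper, but the engine behaviour is the same): a connection always sees its own uncommitted writes, while a second connection in WAL mode sees only the last committed snapshot.

```python
import sqlite3, tempfile, os

# Two connections to the same on-disk database in WAL mode.
# isolation_level=None disables the driver's implicit transaction
# handling, so BEGIN/ROLLBACK below are explicit.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE t (id INTEGER)")
reader = sqlite3.connect(path, isolation_level=None)

writer.execute("BEGIN")
writer.execute("INSERT INTO t VALUES (1)")

# The connection that wrote sees its own uncommitted row ...
seen_by_writer = writer.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# ... but a second connection sees only the last committed state.
seen_by_reader = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]

print(seen_by_writer, seen_by_reader)  # 1 0
writer.execute("ROLLBACK")
```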

Related

Return from method without committing transaction [duplicate]

Suppose I have a query:
begin tran
-- some other sql code
And then I forget to commit or roll back.
If another client tries to execute a query, what would happen?
As long as you don't COMMIT or ROLLBACK a transaction, it's still "running" and potentially holding locks.
If your client (application or user) closes the connection to the database before committing, any still running transactions will be rolled back and terminated.
You can actually try this yourself, that should help you get a feel for how this works.
Open two windows (tabs) in Management Studio; each of them will have its own connection to SQL Server.
Now begin a transaction in one window and do some work (insert/update/delete), but don't commit yet. Then, in the other window, you can see how the database looks from outside the transaction. Depending on the isolation level, the table may be locked until the first window commits, or you may (or may not) see what the other transaction has done so far, and so on.
Play around with the different isolation levels and the NOLOCK hint to see how they affect the results.
Also see what happens when you throw an error inside the transaction.
It's very important to understand how all this works, or SQL Server's behaviour will stump you many a time.
Have fun! GJ.
Transactions are intended to run completely or not at all. The only way to complete a transaction is to commit; ending it any other way results in a rollback.
Therefore, if you begin a transaction and never commit it, it will be rolled back when the connection closes (the transaction was broken off without being marked complete).
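The rollback-on-close behaviour is easy to verify. This sketch uses Python's stdlib sqlite3 driver rather than SQL Server, but the principle described above is the same: a transaction left pending when the connection closes is discarded.

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE accounts (id INTEGER, balance INTEGER)")
con.commit()

# This INSERT opens an implicit transaction that is never committed.
con.execute("INSERT INTO accounts VALUES (1, 100)")
con.close()  # close without commit: the pending work is rolled back

# Reconnect: the uncommitted insert is gone.
con = sqlite3.connect(path)
survivors = con.execute("SELECT COUNT(*) FROM accounts").fetchone()[0]
print(survivors)  # 0
```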
It depends on the isolation level of the incoming transaction.
Sql transaction isolation explained
When you open a transaction nothing gets locked by itself. But if you execute some queries inside that transaction, depending on the isolation level, some rows, tables or pages get locked so it will affect other queries that try to access them from other transactions.
Example of a transaction:
begin tran tt
-- your SQL statements
if error occurred
    rollback tran tt
else
    commit tran tt
As long as you have not executed commit tran tt, the data will not be changed.
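The begin / rollback-on-error / commit pattern above can be sketched like this (Python stdlib sqlite3; the table and statements are made up for illustration). Data only changes once the commit executes:

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (n INTEGER)")
con.execute("INSERT INTO t VALUES (1)")

def run_in_tran(statements, fail):
    """begin tran; run statements; rollback on error, else commit."""
    con.execute("BEGIN")
    try:
        for s in statements:
            con.execute(s)
        if fail:
            raise RuntimeError("simulated error")
        con.execute("COMMIT")
    except Exception:
        con.execute("ROLLBACK")

run_in_tran(["UPDATE t SET n = 99"], fail=True)   # error path: rolled back
after_rollback = con.execute("SELECT n FROM t").fetchone()[0]

run_in_tran(["UPDATE t SET n = 99"], fail=False)  # success path: committed
after_commit = con.execute("SELECT n FROM t").fetchone()[0]

print(after_rollback, after_commit)  # 1 99
```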
Any uncommitted transaction will leave locks behind on the server, and other queries that need those locks won't execute. You either need to roll the transaction back or commit it. Closing SSMS will also terminate the transaction, which will allow other queries to execute.
In addition to the locking problems you might cause, you will also find that your transaction log begins to grow, because it cannot be truncated past the minimum LSN of an active transaction; and if you are using snapshot isolation, the version store in tempdb will grow for similar reasons.
You can use DBCC OPENTRAN to see the details of the oldest open transaction.
I really did forget to commit a transaction once. I had a query like the code below.
The stored procedure is called from .NET. When I test the function in the .NET application, the exception is captured there.
The exception message is:
Transaction count after EXECUTE indicates a mismatching number of BEGIN and COMMIT statements. Previous count = 0, current count = 1.
When I realized the mistake, I tried many times, both in the .NET application and in SQL Server Management Studio (2018). (In SSMS, the OUTPUT statement successfully shows the result in the Results tab, but the error message appears in the Messages tab.)
Then I found that the tables used in this transaction were locked. A SELECT TOP 1000 without ORDER BY ... DESC returned results, but with ORDER BY ... DESC it kept running for a long time.
When I closed the .NET application, the transaction was not committed (judging by the data, which was unchanged).
When I closed the EXEC ... tab (which executed the query missing its commit), SSMS popped up a warning window:
There are uncommitted transactions. Do you wish to commit these transactions?
I have tested both the Yes and No choices.
If I click Yes, the transactions are committed.
If I click No, the transactions are not committed.
Either way, after I close the tab the lock on my table is released and I can query it successfully.
begin try
    -- some process
    begin transaction
    update ...
    output ...
    insert ...
    -- I was missing this commit statement below
    commit transaction
end try
begin catch
    if (xact_state()) = -1
    begin
        rollback transaction;
        throw;
    end;
    -- I meant this comparison to be against 1 but had mistakenly
    -- written -1; because of the THROW above, the mistake was
    -- never triggered
    if (xact_state()) = 1
    begin
        commit transaction;
    end;
end catch;
The behaviour is not defined across databases and drivers, so you must explicitly issue a commit or a rollback:
http://docs.oracle.com/cd/B10500_01/java.920/a96654/basic.htm#1003303
"If auto-commit mode is disabled and you close the connection without explicitly committing or rolling back your last changes, then an implicit COMMIT operation is executed."
HSQLDB performs a rollback:
con.setAutoCommit(false);
stmt.executeUpdate("insert into USER values ('" + insertedUserId + "','Anton','Alaf')");
con.close();
result is
2011-11-14 14:20:22,519 main INFO [SqlAutoCommitExample:55] [AutoCommit enabled = false]
2011-11-14 14:20:22,546 main INFO [SqlAutoCommitExample:65] [Found 0# users in database]

ADO.NET SQL Transaction - Only 1 call to database?

I am reviewing some code which inserts data into an MSSQL database (SQL Server 2014) using a stored procedure.
Each row of data results in a call to the database.
This is obviously inefficient for a number of reasons, not least of which is that a network call has to be made for each row of data.
I am surmising that if they simply wrap all these individual SqlCommand calls (ExecuteNonQuery) in a transaction, this will ultimately result in a "batch insert" of sorts, with the call across the connection being made when you actually COMMIT the transaction.
Is this correct? Will this send all the SqlCommands to the server in a single call? I have not been able to find a diagram or documentation outlining the communication between client and server when a transaction is used.
--> Begin Transaction
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ...
--> Commit Transaction (my assumption is that each of the SqlCommands are sent over the connection object at this point)
On a side note, I am more inclined to recommend that the developer rewrite the routine to make use of SqlBulkCopy or table-valued parameters. This would, however, necessitate refactoring the stored procedure.
Thanks in advance.
There will be no batch insert. Neither the client nor the server optimize across statements. ADO.NET has no machinery to understand the SQL that you are sending. It cannot optimize anything. The server could but does not.
There will be a performance gain by using a transaction like this because the inserts do not need to flush the log as long as the transaction is pending.
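That log-flush effect is easy to measure with any database that offers durable commits. This sketch uses Python's stdlib sqlite3 driver (row counts arbitrary): every statement still executes individually, but the batched version pays for only one durable flush at COMMIT instead of one per row.

```python
import sqlite3, tempfile, os, time

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path, isolation_level=None)
con.execute("CREATE TABLE t (n INTEGER)")

# One transaction per row: every COMMIT must flush to disk.
start = time.perf_counter()
for i in range(200):
    con.execute("BEGIN")
    con.execute("INSERT INTO t VALUES (?)", (i,))
    con.execute("COMMIT")
per_row = time.perf_counter() - start

# One transaction around all rows: a single flush at the end.
start = time.perf_counter()
con.execute("BEGIN")
for i in range(200):
    con.execute("INSERT INTO t VALUES (?)", (i,))
con.execute("COMMIT")
batched = time.perf_counter() - start

# On real hardware, batched is typically far smaller than per_row.
total = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 400
```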
There will be no batch insert. All the transaction will do is give you the ability to commit the inserts so that others can see the changes. There are no performance gains, and if anything a performance hit. Without knowing what you are trying to do here, it is hard to give a proper answer.

SQL Server atomic increment

I need to update several rows of one of my tables as an atomic operation.
The update concerns incrementing some values in int columns of certain rows. I need to increment values in several rows as a single action.
What would be the best way to do this?
Answering this question for me comes down to answering the following two:
If I use LINQ to SQL, how do I achieve atomicity of the increment operation (do I use a transaction, or is there a better way)?
Are stored procedures executed atomically (in case I invoke the procedure on the DB)?
I am working in C# with SQL Server.
In SQL Server, atomicity across different operations is achieved by using explicit transactions: the user explicitly starts a transaction with the keywords BEGIN TRANSACTION, and once all the operations have completed without errors, commits it with COMMIT TRANSACTION. In case of an error or exception, you can undo the work anywhere in the ongoing transaction with ROLLBACK TRANSACTION.
Write Ahead Strategy
SQL Server uses a write-ahead strategy to ensure the atomicity of transactions and the durability of data. When we make any changes to the data, SQL Server takes the following steps:
Loads the data pages into the buffer cache.
Updates the copy in the buffer.
Creates a log record in the log cache.
Saves the log record to disk (the log is always written ahead of the data).
Saves the data pages to disk via the checkpoint process.
So if at any point in these steps you decide to ROLLBACK the transaction, the actual data on disk is left unchanged.
My Suggestion
BEGIN TRY
    BEGIN TRANSACTION
    ------ your code here ------
    ---- if everything goes fine (no errors/no exceptions)
    COMMIT TRANSACTION
END TRY
BEGIN CATCH
    ROLLBACK TRANSACTION --< this will roll back any half-done operations
    -- your error-handling code here
END CATCH
I found my answer: the increment cannot be realized through LINQ to SQL directly. However, stored procedures can be called from LINQ, and the increment can be realized there.
My solution was to create a stored procedure that executes the necessary updates within a single WHILE loop in a transaction. This way all the updates execute as a single, atomic operation.
The UPDATE statement is atomic by itself.
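Both points can be seen in a small sketch with Python's stdlib sqlite3 driver (the counters table is made up for illustration): a single multi-row UPDATE is atomic on its own, and wrapping statements in a transaction makes the group atomic.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE counters (id INTEGER PRIMARY KEY, hits INTEGER);
    INSERT INTO counters VALUES (1, 10), (2, 20), (3, 30);
""")

# "with con" commits on success and rolls back on an exception,
# so the increment of both rows happens atomically.
with con:
    con.execute("UPDATE counters SET hits = hits + 1 WHERE id IN (1, 3)")

hits = [r[0] for r in con.execute("SELECT hits FROM counters ORDER BY id")]
print(hits)  # [11, 20, 31]
```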

Is it possible to retrieve the current SQL Transaction in c# when connection was closed?

Is it possible to retrieve the current SQL Transaction in c# when connection was closed?
sqlTransaction.Save("savePoint");
sqlConnection.Close(); // I'm purposely closing it to test
if (sqlConnection.State == ConnectionState.Closed)
{
    sqlConnection.Open();
    // is it possible to resume the sql transaction when I re-open
    // the sql connection? how?
}
SqlTransaction.Save does not 'save' the transaction, instead it creates a transaction savepoint, which is something completely different:
Creates a savepoint in the transaction that can be used to roll back a part of the transaction, and specifies the savepoint name.
A savepoint can be used before the transaction is committed to partially rollback some of the work done by the transaction. A typical example would be an attempt to do an update that may fail, so you create a savepoint before doing the update and in case of failure you rollback to the savepoint, thus preserving all the work done prior to the savepoint.
See Exception handling and nested transactions for an example of how to use savepoints.
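The partial-rollback pattern described above can be sketched with Python's stdlib sqlite3 driver, which supports the same SAVEPOINT / ROLLBACK TO mechanism (names and tables here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (n INTEGER PRIMARY KEY)")

con.execute("BEGIN")
con.execute("INSERT INTO t VALUES (1)")       # work worth preserving

con.execute("SAVEPOINT before_risky")         # like SqlTransaction.Save
con.execute("INSERT INTO t VALUES (2)")       # part of the risky unit
try:
    con.execute("INSERT INTO t VALUES (1)")   # duplicate key: fails
    con.execute("RELEASE before_risky")
except sqlite3.IntegrityError:
    # Undo everything since the savepoint (row 2 included),
    # but keep the work done before it (row 1).
    con.execute("ROLLBACK TO before_risky")

con.execute("COMMIT")
rows = con.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(rows)  # 1
```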
Now back to your question: is there a way for a connection to start a transaction, close, and on re-opening pick up the same transaction? Technically there is, by using the (now deprecated) sp_getbindtoken and sp_bindsession. But this is just a curiosity; there is absolutely no valid scenario for you to attempt to 'reuse' a transaction across two different sessions (two re-opens of a connection).
No, SQL Server will rollback any uncommitted transactions when the connection is terminated.
This seems to be a misunderstanding of a database transaction. Transactions are all-or-nothing conversations with the database. If you close the line of communication with the database by closing the connection, the conversation is over and the changes are not committed (the "nothing" part of "all-or-nothing").
No, I don't think you can do this.

Managing a Network failure during a Transactional SqlBulkCopy

I'm researching SqlClient's SqlBulkCopy in ADO.Net and have the following questions.
What will happen if there is a network error during a SqlBulkCopy operation running as part of a transaction over a huge number of records?
Will the transaction be left open (neither committed, nor rolled back) in the server, until we manually kill it?
What is the best approach for sending a large number of records in two DataTables (InvoiceHeader, InvoiceDetails) in a DataSet to respective SQL Server tables(InvoiceHeader, InvoiceDetails)?
Thank you.
EDIT:
A few details I wanted to add, but forgot:
This is for .Net v3.5; I'm using Enterprise Library for all database interactions.
Assuming you are using a TransactionScope, my understanding is that no, the transaction will not be left open, because SQL Server will detect the ambient transaction and auto enlist. This means that the worst case is that the transaction times out, rolling back. You can change the transaction binding to specify what to do in the event of a timeout (you probably want explicit unbind).
