SQLite only supports 1 transaction? - C#

While using ADO.NET (maybe I'm wrong and that isn't what it's called) I notice that I can only begin a transaction from a connection, and a command has a command.Transaction property that gets me the transaction data but doesn't start a transaction itself. Actually, while looking around I see this in System.Data.SQLite:
// Summary:
// The transaction associated with this command. SQLite only supports one transaction
// per connection, so this property forwards to the command's underlying connection.
[Browsable(false)]
[DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
public SQLiteTransaction Transaction { get; set; }
So SQLite only supports one transaction, period? I tried opening another connection, but then my transaction threw an exception about the DB being locked. So I cannot have more than one concurrent connection either?

One transaction per connection, yes, but a database can have more than one connection (each with its own active transaction).
Update: interesting. I didn't know about shared-cache mode. If your connection is using that mode, only one transaction is available for all the connections using the same shared-cache. See SQLite shared-cache mode.
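To make that concrete, here is a minimal sketch (assuming System.Data.SQLite; the test.db file and items table are made-up names) of two connections to the same database, each with its own transaction. A second concurrent writer, or unlucky timing, can still surface the "database is locked" error the question mentions:

using System;
using System.Data.SQLite;

class TwoConnectionsDemo
{
    static void Main()
    {
        const string cs = "Data Source=test.db"; // hypothetical database file

        using (var connA = new SQLiteConnection(cs))
        using (var connB = new SQLiteConnection(cs))
        {
            connA.Open();
            connB.Open();

            using (var setup = new SQLiteCommand("CREATE TABLE IF NOT EXISTS items (name TEXT)", connA))
                setup.ExecuteNonQuery();

            // Each connection carries its own, independent transaction.
            using (SQLiteTransaction txA = connA.BeginTransaction())
            using (SQLiteTransaction txB = connB.BeginTransaction())
            {
                using (var write = new SQLiteCommand("INSERT INTO items (name) VALUES ('a')", connA))
                {
                    write.Transaction = txA;
                    write.ExecuteNonQuery(); // connA now holds a write (reserved) lock
                }

                using (var read = new SQLiteCommand("SELECT COUNT(*) FROM items", connB))
                {
                    read.Transaction = txB;
                    // Reads on connB are still possible; the uncommitted row from connA
                    // is not visible to this transaction.
                    Console.WriteLine(read.ExecuteScalar());
                }

                // Committing the reader first releases its shared lock so the writer can commit.
                txB.Commit();
                txA.Commit();
            }
        }
    }
}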

I'm not sure about multiple connections; it probably has to do with the fact that a connection locks the file, since SQLite is a file-based DB and not a server-based DB (on a server-based DB the server keeps all of the files locked and deals with concurrent connections).
You can only have one transaction OPEN at a time. This should make intuitive sense, since everything that happens after you begin a transaction is in that transaction, until either a rollback or a commit. Then you can start a new one. SQLite requires every command to run in a transaction, so if you don't manually open one, it will do so for you.
If you're worried about nested transactions, you can fake them with savepoints; see the documentation.
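As a minimal sketch of the savepoint idea (assuming System.Data.SQLite; the file and table names are made up), a "nested" unit of work can be faked with SAVEPOINT / RELEASE / ROLLBACK TO inside one outer transaction:

using System;
using System.Data.SQLite;

class SavepointDemo
{
    static void Main()
    {
        using (var conn = new SQLiteConnection("Data Source=test.db")) // hypothetical file
        {
            conn.Open();
            Exec(conn, null, "CREATE TABLE IF NOT EXISTS items (name TEXT)");

            using (SQLiteTransaction tx = conn.BeginTransaction())     // the one "real" transaction
            {
                Exec(conn, tx, "INSERT INTO items (name) VALUES ('outer')");

                Exec(conn, tx, "SAVEPOINT inner_work");                 // start of the "nested" part
                try
                {
                    Exec(conn, tx, "INSERT INTO items (name) VALUES ('inner')");
                    Exec(conn, tx, "RELEASE SAVEPOINT inner_work");     // keep the nested changes
                }
                catch
                {
                    Exec(conn, tx, "ROLLBACK TO SAVEPOINT inner_work"); // undo only the nested part
                }

                tx.Commit(); // commits whatever survived, including the outer insert
            }
        }
    }

    static void Exec(SQLiteConnection conn, SQLiteTransaction tx, string sql)
    {
        using (var cmd = new SQLiteCommand(sql, conn))
        {
            cmd.Transaction = tx;
            cmd.ExecuteNonQuery();
        }
    }
}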

Within one transaction you can only read/write through one connection until the transaction is done. Therefore you have to pass the connection object around when a business transaction spans several SQL statements, like this:
public class TimeTableService
{
    ITimeTableDataProvider _provider = new TimeTableDataProvider();

    public void CreateLessonPlanner(WizardData wizardData)
    {
        using (var con = _provider.GetConnection())
        using (var trans = new TransactionScope())
        {
            con.Open();
            var weekListA = new List<Week>();
            var weekListB = new List<Week>();
            LessonPlannerCreator.CreateLessonPlanner(weekListA, weekListB, wizardData);
            _provider.DeleteLessonPlanner(wizardData.StartDate, con);
            _provider.CreateLessonPlanner(weekListA, con);
            _provider.CreateLessonPlanner(weekListB, con);
            _provider.DeleteTimeTable(TimeTable.WeekType.A, con);
            _provider.StoreTimeTable(wizardData.LessonsWeekA.ToList<TimeTable>(), TimeTable.WeekType.A, con);
            _provider.DeleteTimeTable(TimeTable.WeekType.B, con);
            _provider.StoreTimeTable(wizardData.LessonsWeekB.ToList<TimeTable>(), TimeTable.WeekType.B, con);
            trans.Complete();
        }
    }
}
The connection and transaction resources are automatically released/closed by the using statement.
In every data provider method you then do:
using (var cmd = new SQLiteCommand("MyStatement", con))
{
    // Create params + ExecuteNonQuery
}
The TransactionScope class (in System.Transactions, available since .NET 2.0) automatically performs a rollback if an exception occurs. Easy handling...

Related

Implement SQL Server 2016 Snapshot Isolation in C#/ADO.NET for SELECT queries

In order to prevent long read operations from blocking short, frequent write operations in my existing mixed OLTP/reporting web application backed by SQL Server 2016, I want to use Snapshot Isolation on a few long-running queries. The queries have already been well indexed and just take a long time to run due to large amounts of data, and while I may use RCSI later, I would like to start by using the less invasive Snapshot Isolation.
My question is: how do I enable Snapshot Isolation on my SELECT queries in C#? It seems I would have to wrap my select queries in a transaction, which just feels totally wrong.
List<Cat> cats = new List<Cat>();
TransactionOptions transactionOption = new TransactionOptions
{
    IsolationLevel = IsolationLevel.Snapshot
};
using (TransactionScope transactionScope = new TransactionScope(TransactionScopeOption.Required, transactionOption))
{
    using (SqlConnection sqlConnection = new SqlConnection(databaseConnectionString))
    {
        using (SqlCommand sqlCommand = sqlConnection.CreateCommand())
        {
            sqlCommand.CommandText = "proc_Select_A_Billion_Cats";
            sqlCommand.CommandType = CommandType.StoredProcedure;
            sqlConnection.Open();
            using (SqlDataReader sqlDataReader = sqlCommand.ExecuteReader())
            {
                while (sqlDataReader.Read())
                {
                    // SOME CODE HERE can read the Cat object in from data reader
                    // Add this cat to the collection
                    cats.Add(cat);
                }
            }
        }
    }
}
From http://www.levibotelho.com/development/plugging-isolation-leaks-in-sql-server/ I can see that in SQL Server 2014 and later the isolation level will be reset when the connection is returned to the pool, so that's good.
But can it be right to wrap a stored proc that does only SELECT in an ADO.NET transaction? Is there not some better way than this to implement Snapshot Isolation in C#?
No, you need to set the isolation level in the BeginTransaction method:
If a database has been enabled for snapshot isolation but is not configured for READ_COMMITTED_SNAPSHOT ON, you must initiate a SqlTransaction using the IsolationLevel.Snapshot enumeration value when calling the BeginTransaction method.
Otherwise, the default Read Committed mode is used and the read may be blocked by transactions modifying the table.
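As a concrete sketch of that advice (the database name in the comment and the column mapping are placeholders; the stored procedure name comes from the question), the SELECT runs inside a SqlTransaction started with IsolationLevel.Snapshot, after snapshot isolation has been enabled on the database:

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public class Cat { /* stub for the question's Cat type */ }

public static class CatRepository
{
    // Assumes the DBA has already run:
    //   ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;
    public static List<Cat> ReadCatsUnderSnapshot(string connectionString)
    {
        var cats = new List<Cat>();
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction =
                   connection.BeginTransaction(IsolationLevel.Snapshot))
            using (SqlCommand command = connection.CreateCommand())
            {
                command.Transaction = transaction;                  // required while a transaction is open
                command.CommandType = CommandType.StoredProcedure;
                command.CommandText = "proc_Select_A_Billion_Cats"; // from the question
                using (SqlDataReader reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        cats.Add(new Cat { /* map columns from the reader here */ });
                    }
                }
                transaction.Commit(); // end the (read-only) snapshot transaction
            }
        }
        return cats;
    }
}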

SQL Timeout Expired When It Shouldn't

I am using the SqlConnection class and running into problems with command time outs expiring.
First off, I am using the SqlCommand.CommandTimeout property to set a command timeout, like so:
command.CommandTimeout = 300;
Also, I have ensured that the Execution Timeout setting is set to 0, so there should be no timeouts on the SQL Server Management Studio side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;
    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();
        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }
        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify that exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), called inside the foreach loop in the using block above, performs another query against the same table in the same database. Here is the query it performs:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) that helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a blocking situation that is causing the query in the InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord(), so I am assuming you are running that on a separate connection. So what happens is:
1. You start a transaction.
2. You truncate the table.
3. InsertCourseLookupRecord() opens another connection and tries to insert data into that table, but the table is locked because your transaction isn't committed yet.
4. The connection in InsertCourseLookupRecord() times out at the timeout value defined for that connection, which defaults to 30 seconds.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this change your function definition to:
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
Take all the connection code out of the function because you're going to use the cmd object. Then when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
You could also just take the code in InsertCourseLookupRecord and put it inside the foreach loop instead, and then reuse the command object in your using block without needing to pass it to a function at all.
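Putting that together, a sketch of what the reworked call site and helper could look like (this mirrors the question's code; the Course property names are guesses):

// Inside the question's using (SqlConnection conn = ...) block, reuse the command:
foreach (Course course in courses)
{
    CourseHelper.InsertCourseLookupRecord(course, command);
}

// And the reworked helper, which uses the caller's command (and therefore its
// connection and transaction) instead of opening a new SqlConnection:
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
{
    cmd.Parameters.Clear(); // only needed if parameters were added for a previous statement
    cmd.CommandText =
        "INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription) " +
        "VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)";
    cmd.Parameters.AddWithValue("@courseid", course.CourseId);           // property names are guesses
    cmd.Parameters.AddWithValue("@contentid", course.ContentId);
    cmd.Parameters.AddWithValue("@name", course.Name);
    cmd.Parameters.AddWithValue("@code", course.Code);
    cmd.Parameters.AddWithValue("@description", course.Description);
    cmd.Parameters.AddWithValue("@url", course.Url);
    cmd.Parameters.AddWithValue("@metakeywords", course.MetaKeywords);
    cmd.Parameters.AddWithValue("@metadescription", course.MetaDescription);
    return cmd.ExecuteNonQuery();
}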
Because you are using two separate SqlConnection objects, you are blocking yourself due to the SqlTransaction you started in your outer code. The queries in InsertCourseLookupRecord (and maybe in GetAllCourses) get blocked by the TRUNCATE TABLE courses call, which has not yet been committed. They wait for the truncate to be committed until their own command timeout expires.
You have a few options.
1. Pass the SqlConnection and SqlTransaction into GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction.
2. Use an "ambient transaction" by getting rid of the SqlTransaction and using a System.Transactions.TransactionScope instead (see the sketch below). This causes all connections opened to the server to share one transaction. It can cause maintenance issues, because depending on what the queries are doing it may need to invoke the Distributed Transaction Coordinator, which may be disabled on some computers (from the looks of what you showed, you would need the DTC, as you have two connections open at the same time).
The best option is to change your code to do option 1, but if you can't, do option 2.
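For option 2, a rough sketch of the reload wrapped in an ambient transaction (a sketch only, reusing the names from the question; note the possible escalation to MSDTC):

using System.Transactions;
// ...
// Replaces conn.BeginTransaction(...) in the question's code; the helpers can keep
// opening their own connections, which now enlist in the same ambient transaction.
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connection))
    {
        conn.Open(); // enlists in the ambient transaction
        using (var command = conn.CreateCommand())
        {
            command.CommandTimeout = 300;
            command.CommandText = "TRUNCATE TABLE courses";
            command.ExecuteNonQuery();
        }
    }

    foreach (Course course in CourseHelper.GetAllCourses())
    {
        // Each call may open its own connection; because it happens inside the scope,
        // it joins the same transaction (likely escalating to MSDTC) instead of being
        // blocked by the uncommitted TRUNCATE.
        CourseHelper.InsertCourseLookupRecord(course);
    }

    scope.Complete(); // nothing is committed unless execution reaches this line
}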
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string)
Please review your connection string; that's the only possibility I can think of.

Start and close SQL connection - at the start and the end, or every time it's needed?

I'm making a web application in C# with ASP.Net, and I'm using an SQL database for it. In the guides I saw (about SqlConnection and SqlCommand) they open and close the SQL connection each time they want to send a query. But I'm not sure this is the best way to handle the connection, because I also learned some PHP, and as far as I know in PHP you open a connection only once, at start. So what is the best way to handle the connection?
You should generally have one connection, and transaction, for the length of the web request.
If you have two connections, you could potentially have inconsistent data. e.g. in the first query you check the bank balance, then you close it. You then add $5 to the balance and save that away in the next connection. What if someone else added $5 between the two connections? It would go missing.
The easiest thing to do is to open the connection in global.asax BeginRequest and save it away into the HttpContext. Whenever you need a connection, pull it from there. Make sure that you .Close() the connection in your EndRequest.
Also, read up here on connection pooling: http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
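A minimal sketch of that approach (assuming an ASP.NET global.asax code-behind; the "Db" items key and connection string are made up), although per-request connections plus the built-in pool is usually simpler:

// Global.asax.cs
using System;
using System.Data.SqlClient;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        var conn = new SqlConnection("my_connection_string"); // placeholder
        conn.Open();
        HttpContext.Current.Items["Db"] = conn; // stash it for this request
    }

    protected void Application_EndRequest(object sender, EventArgs e)
    {
        var conn = HttpContext.Current.Items["Db"] as SqlConnection;
        if (conn != null)
        {
            conn.Close(); // returns the underlying connection to the pool
        }
    }
}

// Anywhere during the request:
// var conn = (SqlConnection)HttpContext.Current.Items["Db"];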
It is not 'expensive' to repeatedly open and close connections, provided the username is the same each time.
ADO.NET already manages connection pooling for you, so you don't need to worry about reusing connections.
You can use a using statement to close and dispose of the connection:
DataTable dt = new DataTable();
using (SqlConnection conn = new SqlConnection("my_connection_string"))
{
    using (SqlDataAdapter adapter = new SqlDataAdapter("SELECT * FROM Table", conn))
    {
        conn.Open();
        adapter.Fill(dt);
    }
}
Normal practice in .NET is to open a SqlConnection and use a SqlCommand, then dispose of both of them.
using (SqlConnection connection = new SqlConnection("myConnectionString"))
{
    using (SqlCommand command = new SqlCommand("dbo.SomeProc", connection))
    {
        // Execute the command; when the code leaves the using block, each is disposed
    }
}
As a connection to the database is an expensive resource, it's better to open the SQL connection as late as possible and close it as early as possible. So you should open the connection just before executing a command and close it once you have executed it.
On the other hand, if you are going to execute many commands in a short interval of time, it's better to open the connection once and close it after all the commands are executed.
For starters, they do not (there is a connection pool being used under the hood), but SqlCommand has a constructor overload that takes a connection object, so you could keep the connection around and pass it in.
Generally:
using (var ts = new TransactionScope())
{
    using (var connection = new SqlConnection(...))
    {
        connection.Open();

        using (var command = new SqlCommand(..., connection))
        {
            ...
        }
        using (var command = new SqlCommand(..., connection))
        {
            ...
        }
    }
    ts.Complete();
}
Of course I would recommend using LINQ-to-SQL or EF at this point.
Try the Data Access Application Block in the Enterprise Library if you do not want to worry about opening and closing connections.

Do I need to wrap a TransactionScope in a try/catch?

The MSDN contains this code example:
// This function takes arguments for 2 connection strings and commands to create a transaction
// involving two SQL Servers. It returns a value > 0 if the transaction is committed, 0 if the
// transaction is rolled back. To test this code, you can connect to two different databases
// on the same server by altering the connection string, or to another 3rd party RDBMS by
// altering the code in the connection2 code block.
static public int CreateTransactionScope(
    string connectString1, string connectString2,
    string commandText1, string commandText2)
{
    // Initialize the return value to zero and create a StringWriter to display results.
    int returnValue = 0;
    System.IO.StringWriter writer = new System.IO.StringWriter();

    try
    {
        // Create the TransactionScope to execute the commands, guaranteeing
        // that both commands can commit or roll back as a single unit of work.
        using (TransactionScope scope = new TransactionScope())
        {
            using (SqlConnection connection1 = new SqlConnection(connectString1))
            {
                // Opening the connection automatically enlists it in the
                // TransactionScope as a lightweight transaction.
                connection1.Open();

                // Create the SqlCommand object and execute the first command.
                SqlCommand command1 = new SqlCommand(commandText1, connection1);
                returnValue = command1.ExecuteNonQuery();
                writer.WriteLine("Rows to be affected by command1: {0}", returnValue);

                // If you get here, this means that command1 succeeded. By nesting
                // the using block for connection2 inside that of connection1, you
                // conserve server and network resources as connection2 is opened
                // only when there is a chance that the transaction can commit.
                using (SqlConnection connection2 = new SqlConnection(connectString2))
                {
                    // The transaction is escalated to a full distributed
                    // transaction when connection2 is opened.
                    connection2.Open();

                    // Execute the second command in the second database.
                    returnValue = 0;
                    SqlCommand command2 = new SqlCommand(commandText2, connection2);
                    returnValue = command2.ExecuteNonQuery();
                    writer.WriteLine("Rows to be affected by command2: {0}", returnValue);
                }
            }

            // The Complete method commits the transaction. If an exception has been thrown,
            // Complete is not called and the transaction is rolled back.
            scope.Complete();
        }
    }
    catch (TransactionAbortedException ex)
    {
        writer.WriteLine("TransactionAbortedException Message: {0}", ex.Message);
    }
    catch (ApplicationException ex)
    {
        writer.WriteLine("ApplicationException Message: {0}", ex.Message);
    }

    // Display messages.
    Console.WriteLine(writer.ToString());

    return returnValue;
}
I'm unclear:
When do I need to catch TransactionAbortedException?
I'm currently storing a connection to the DB over app lifetime instead of opening and closing it each time. Does this conflict with using transactions? (Or is this a flaw in general?)
What do I need to do in case I don't care about whether the value was actually inserted? (I just want to make sure the DB does not get corrupted.)
No, TransactionScope doesn't need try/catch, because you should already be using it via using, and the last step (as per the example) should be to mark it completed. If it hits Dispose() without being completed, then it rolls back.
Re your bullets:
you don't, unless you have a very specific reason to handle that exception; I have never had to handle that scenario in a specific way - it is so unexpected that I'm happy for it to bubble up as a genuine exceptional event
storing your own open connection is not a great idea here - in addition to the potential threading issues and having unnecessary open connections, you're ignoring the fact that there is already connection pooling built in by default (at least with SQL Server). Meaning: you create/open a new SqlConnection as you need it (tightly scoped) and the pool worries about the actual connections - you are potentially adding overhead here, but gaining very little (it is very quick to find an existing connection in the pool). If you check carefully, you'll probably notice that even when you Close() your connection, the db-server reports it in use; that is because your Close() merely put it back into the pool for re-use
nothing much; the transaction should be ACID
It is not good practice to keep an open connection forever.
If you are working on a web application, you will quickly hit the maximum number of connections.
Better to keep the connection instance around, but open the connection, do whatever DB operation you need, and then close the connection again.
Using using () {} blocks is the right way to handle the DB components.

C# SQL Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction

I'm getting this error (Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.) when trying to run a stored procedure from C# on a SQL Server 2005 database. I'm not actively/purposefully using transactions or anything, which is what makes this error weird. I can run the stored procedure from management studio and it works fine. Other stored procedures also work from C#, it just seems to be this one with issues. The error returns instantly, so it can't be a timeout issue. The code is along the lines of:
SqlCommand cmd = null;
try
{
    // Make sure we are connected to the database
    if (_DBManager.CheckConnection())
    {
        cmd = new SqlCommand();
        lock (_DBManager.SqlConnection)
        {
            cmd.CommandText = "storedproc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Connection = _DBManager.SqlConnection;
            cmd.Parameters.AddWithValue("@param", value);
            int affRows = cmd.ExecuteNonQuery();
            ...
        }
    }
    else
    {
        ...
    }
}
catch (Exception ex)
{
    ...
}
It's really got me stumped. Thanks for any help.
It sounds like there is a TransactionScope somewhere that is unhappy. The _DBManager.CheckConnection and _DBManager.SqlConnection usage sounds like you are keeping a SqlConnection hanging around, which I expect is contributing to this.
To be honest, in most common cases you are better off just using the inbuilt connection pooling, and using your connections locally - i.e.
using (var conn = new SqlConnection(...)) // or a factory method
{
    // use it here only
}
Here you get a clean SqlConnection, which will be mapped to an unmanaged connection via the pool, i.e. it doesn't create an actual connection each time (but will do a logical reset to clean it up).
This also allows much more flexible use from multiple threads. Using a static connection in a web app, for example, would be horrendous for blocking.
From the code it seems that you are utilizing an already opened connection. Maybe there's a transaction previously left pending on that same connection.
