SQLite DB Insert Very Slow - c#

I am using an SQLite database and inserting records into it. This takes an extremely long time! I have seen people say they can process a couple thousand records in a minute. I have around 2400 records, and each record takes 30s-2m to complete. Recreating the database is not an option. I have tried creating a single transaction in different ways. I need to use the timer, because I am using a ProgressBar to show me that something is happening. Here is the code I am using:
string con = string.Format(@"Data Source={0}", documentsFolder);
SQLiteConnection sqlconnection = new SQLiteConnection(con);
SQLiteCommand sqlComm = sqlconnection.CreateCommand();
sqlconnection.Open();
SQLiteTransaction transaction = sqlconnection.BeginTransaction();
Timer timer2 = new Timer();
timer2.Interval = 1000;
timer2.Tick += (source, e) =>
{
    URL u = firefox.URLs[count2];
    string newtitle = u.title;
    form.label1.Text = count2 + "/" + pBar.Maximum;
    string c_urls = "insert or ignore into " + table
        + " (id, url, title, visit_count, typed_count, last_visit_time, hidden) values ("
        + dbID + ",'" + u.url + "','" + newtitle + "',1,1, " + ToChromeTime(u.visited) + ", 0)";
    string c_visited = "insert or ignore into " + table2
        + " (id, url, visit_time, transition) values ("
        + dbID2 + "," + dbID + "," + ToChromeTime(u.visited) + ",805306368)";
    sqlComm = new SQLiteCommand(c_urls, sqlconnection);
    sqlComm.ExecuteNonQuery();
    sqlComm = new SQLiteCommand(c_visited, sqlconnection);
    sqlComm.ExecuteNonQuery();
    dbID++;
    dbID2++;
    pBar.Value = count2;
    if (pBar.Maximum == count2)
    {
        pBar.Value = 0;
        timer2.Stop();
        transaction.Commit();
        sqlComm.Dispose();
        sqlconnection.Close();
        sqlconnection.Dispose();
    }
    count2++;
};
timer2.Start();
What am I doing wrong?

This is what I would address, in order. It may or may not fix the problem, but it won't hurt to try (and it might just do some magic):
Ensure the database is not being contended with updates (from another thread, process, or even timer!). Writers will acquire locks, and unclosed or over-long-running transactions can interact in bad ways. (For updates that take "30 seconds to 2 minutes" I would imagine there is an issue obtaining locks. Also ensure the media the DB is on is sufficiently fast, e.g. a local drive.)
The transaction is not actually being used by the commands. Move the transaction inside the timer callback, attach it to the appropriate SQLiteCommands, and dispose of it before the callback ends (use using).
Not all SQLiteCommands are being disposed correctly. Dispose of each and every one. (The use of using simplifies this. Do not let it bleed past the callback.)
Placeholders (bound parameters) are not being used. Not only are they simpler and easier to use, they are also ever so slightly friendlier to SQLite and the adapter.
(Example only; there may be errors in the following code.)
// It's okay to keep long-running SQLite connections.
// In my applications I have a single application-wide connection.
// The more important thing is watching thread access and transactions.
// In any case, we can keep this here.
SQLiteConnection sqlconnection = new SQLiteConnection(con);
sqlconnection.Open();

// In the timer event - remember this runs on the /UI/ thread.
// DO NOT ALLOW CROSS-THREAD ACCESS TO THE SAME SQLite CONNECTION.
// (You have been warned.)
URL u = firefox.URLs[count2];
string newtitle = u.title;
form.label1.Text = count2 + "/" + pBar.Maximum;
try
{
    // This transaction is ONLY kept around for this timer callback.
    // Great care must be taken with long-running transactions in SQLite.
    // SQLite does not have good support for (long-running) concurrent writers
    // because it must obtain exclusive file locks.
    // There are no table/row locks!
    using (var transaction = sqlconnection.BeginTransaction())
    {
        // using ensures cmd will be Disposed as appropriate.
        using (var cmd = sqlconnection.CreateCommand())
        {
            // Attach the command to the transaction.
            cmd.Transaction = transaction;
            // Using placeholders is cleaner. It shouldn't be an issue to
            // re-create the SQLiteCommand because it can be cached in the adapter/driver
            // (although I could be wrong on this; anyway, it's not "this issue" here).
            cmd.CommandText = "insert or ignore into " + table
                + " (id, url, title, visit_count, typed_count, last_visit_time, hidden)"
                + " values (@dbID, @url /* etc., add the other placeholders */)";
            // Add each parameter; easy-peasy.
            cmd.Parameters.AddWithValue("@dbID", dbID);
            cmd.Parameters.AddWithValue("@url", u.url);
            // .. add other parameters
            cmd.ExecuteNonQuery();
        }
        // Do same for other command (runs in the same TX)
        // Then commit TX
        transaction.Commit();
    }
}
catch (Exception)
{
    // Or the TX fails and the exception propagates; disposing an
    // uncommitted transaction rolls it back.
    throw;
}

if (pBar.Maximum == count2)
{
    pBar.Value = 0;
    timer2.Stop();
    // All the other SQLite resources are already
    // cleaned up!
    sqlconnection.Close();
    sqlconnection.Dispose();
}

I'm not sure if this is your problem, but your general pattern of using ADO.NET is wrong - you shouldn't create a new command for each insert (and repeatedly pay for query preparation).
Instead, do the following:
Before the loop:
Create command(s) once.
Create appropriate bound parameters.
In the loop:
Just assign appropriate values to the bound parameters.
And execute the command(s).
You could also consider using less fine-grained transactions: try putting several inserts in the same transaction to minimize paying for transaction durability.
You might also want to take a look at this post.
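As a minimal sketch of that pattern with System.Data.SQLite (the file name, table, and columns here are made up purely for illustration):

using System.Data.SQLite;

// Create the command and its bound parameters once, outside the loop.
using (var connection = new SQLiteConnection("Data Source=history.db"))
{
    connection.Open();
    using (var transaction = connection.BeginTransaction())
    using (var cmd = connection.CreateCommand())
    {
        cmd.Transaction = transaction;
        cmd.CommandText = "insert or ignore into urls (id, url) values (@id, @url)";
        var idParam = cmd.Parameters.Add("@id", System.Data.DbType.Int64);
        var urlParam = cmd.Parameters.Add("@url", System.Data.DbType.String);

        // In the loop: only assign new values and execute.
        for (long id = 0; id < 2400; id++)
        {
            idParam.Value = id;
            urlParam.Value = "http://example.com/" + id;
            cmd.ExecuteNonQuery();
        }

        // One commit for the whole batch keeps the durability cost down.
        transaction.Commit();
    }
}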

You can try one of the following to improve performance:
Wrap all the inserts in a transaction - can help by reducing the number of actual writes to the DB.
Use WAL - the Write-Ahead Log is a journaling mode that speeds up writes and enables concurrency. (Not recommended if your DB is in a network location.)
Synchronous NORMAL - the synchronous mode dictates the frequency at which data is actually flushed to physical storage (fsync() calls). This can be time-consuming on some machines, so how often the flush occurs is critical. Make sure to explicitly open connections with "Synchronous=NORMAL", which is ideal for most scenarios. There is a huge difference between synchronous mode FULL and NORMAL (NORMAL is ~1000 times better). A sketch applying these settings follows this list.
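As an illustration, here is one way to apply the last two settings with System.Data.SQLite. This is a sketch only: the file name is a placeholder, and if your adapter version does not accept the connection-string keywords, the explicit PRAGMAs achieve the same thing.

using System.Data.SQLite;

string dbPath = "history.db"; // placeholder path

// Option 1: request the modes via connection-string keywords.
var con = new SQLiteConnection(
    "Data Source=" + dbPath + ";Journal Mode=WAL;Synchronous=Normal");
con.Open();

// Option 2: issue the PRAGMAs explicitly after opening.
using (var pragma = con.CreateCommand())
{
    pragma.CommandText = "PRAGMA journal_mode=WAL; PRAGMA synchronous=NORMAL;";
    pragma.ExecuteNonQuery();
}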
Find more details in a similar post => What changed between System.Data.SQLite version 1.0.74 and the most recent 1.0.113?

Related

C# - Postgres - Memory leak issue over time

Problem: memory leaks and accumulates over time, eventually reaching 99% of capacity.
This is a question I previously asked on the same topic. I did what the person who answered told me to do, but I am still experiencing memory leak issues. I really do not understand where the memory is accumulating from. I watched Windows Task Manager and found that memory clears periodically, but the accumulation rate is faster than the clearing rate, so memory usage reaches 99% in the end.
Here is my C# code:
var connString = "Host=x.x.x.x;Port=5432;Username=postgres;Password=password;Database=database";
@Info.Trace("PostGre ");
using (var conn = new Npgsql.NpgsqlConnection(connString))
{
    conn.Open();
    int ctr = 0;
    // Insert some data
    using (var cmd = new Npgsql.NpgsqlCommand())
    {
        cmd.Connection = conn;
        var par_1 = cmd.Parameters.Add("@r", NpgsqlTypes.NpgsqlDbType.Timestamp);
        var par_2 = cmd.Parameters.Add("@p", NpgsqlTypes.NpgsqlDbType.Double);
        while (@tag.TerminateTimeScaleLoop == 100)
        {
            @Info.Trace("Pushed Data: PostGre A " + ctr.ToString());
            cmd.CommandText = "INSERT INTO TORQX VALUES (@r,@p)";
            par_1.Value = System.DateTime.Now.ToUniversalTime();
            par_2.Value = @Tag.RigData.Time.TORQX;
            cmd.ExecuteNonQuery();
            ctr = ctr + 1;
        }
    }
    @Info.Trace("Pushed Data: PostGre A Terminated");
    conn.Close();
}
What is causing the memory accumulation? Can I prevent it from accumulating? If preventing accumulation is impossible, can I manually clear memory, and what code would do that?
I have practically no experience with C#; I was assigned to hot-fix this code because the person who wrote it isn't available now. I have lots of experience in Python, but none with C#, so please make your suggestions really explicit... otherwise I will have no clue. Thanks!

If my C# times out with a stored procedure call, does the procedure continue running?

I have just a general type of question. If I have a C# application that calls a SQL Server stored procedure, and the C# application times out, does the procedure call on the server continue running to its completion?
No. Below is a reproduction. When the timeout occurs the running process will be killed, halting it immediately. If you do not have a transaction specified, work that has been done in the stored procedure prior to the timeout will be persisted. Similarly, if the connection to the server is severed by some outside force, SQL Server will kill the running process.
using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=Test;Integrated Security=True"))
{
    conn.Open();
    using (var setupTable = new SqlCommand(@"
        IF NOT EXISTS (
            SELECT *
            FROM
                sys.schemas s
                INNER JOIN sys.tables t ON
                    t.[schema_id] = s.[schema_id]
            WHERE
                s.name = 'dbo' AND
                t.name = 'TimeoutTest')
        BEGIN
            CREATE TABLE dbo.TimeoutTest
            (
                ID int IDENTITY(1,1) PRIMARY KEY,
                CreateDate datetime DEFAULT(getdate())
            );
        END
        -- remove any rows from previous runs
        TRUNCATE TABLE dbo.TimeoutTest;", conn))
    {
        setupTable.ExecuteNonQuery();
    }
    using (var checkProcExists = new SqlCommand(@"
        SELECT COUNT(*)
        FROM
            sys.schemas s
            INNER JOIN sys.procedures p ON
                p.[schema_id] = s.[schema_id]
        WHERE
            s.name = 'dbo' AND
            p.name = 'AddTimeoutTestRows';", conn))
    {
        bool procExists = ((int)checkProcExists.ExecuteScalar()) == 1;
        if (!procExists)
        {
            using (var setupProc = new SqlCommand(@"
                CREATE PROC dbo.AddTimeoutTestRows
                AS
                BEGIN
                    DECLARE @stop_time datetime;
                    SET @stop_time = DATEADD(minute, 1, getdate());
                    WHILE getdate() < @stop_time
                    BEGIN
                        INSERT INTO dbo.TimeoutTest DEFAULT VALUES;
                        -- wait 10 seconds between inserts
                        WAITFOR DELAY '0:00:10';
                    END
                END", conn))
            {
                setupProc.ExecuteNonQuery();
            }
        }
    }
    bool commandTimedOut = false;
    try
    {
        using (var longExecution = new SqlCommand("EXEC dbo.AddTimeoutTestRows;", conn))
        {
            // The time in seconds to wait for the command to execute.
            // Explicitly setting the timeout to 30 seconds for clarity.
            longExecution.CommandTimeout = 30;
            longExecution.ExecuteNonQuery();
        }
    }
    catch (SqlException ex)
    {
        if (ex.Message.Contains("Timeout"))
        {
            commandTimedOut = true;
        }
        else
        {
            throw;
        }
    }
    Console.WriteLine(commandTimedOut.ToString());
    // Wait for an extra 30 seconds to let any execution on the server add more rows.
    Thread.Sleep(30000);
    using (var checkTableCount = new SqlCommand(@"
        SELECT COUNT(*)
        FROM
            dbo.TimeoutTest t;", conn))
    {
        // Only expecting 3, but should be 6 if the server continued on without us.
        int rowCount = (int)checkTableCount.ExecuteScalar();
        Console.WriteLine(rowCount.ToString("#,##0"));
    }
}
Console.ReadLine();
produces the following output
True
3
even though running the stored procedure from Management Studio will add 6 rows in the one minute time frame.
The short answer is yes ... here is some info to back up my claim.
There are actually several places where an application can time out, but one of them is the command execution timeout:
Command execution timeout - this property is the cumulative timeout for all network reads during command execution or processing of the results. A timeout can still occur after the first row is returned, and does not include user processing time, only network read time.
A command is not going to roll back on its own if it times out; you need a transaction around the code for that.
If a timeout can occur while rows are being returned, that means a timeout can occur at any point - and C# is not going to tell SQL Server to stop running the command. Things can be done about that, such as wrapping the command in a transaction.
Source: https://blogs.msdn.microsoft.com/mattn/2008/08/29/sqlclient-timeouts-revealed/ ... and experience
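To make rollback-on-timeout explicit, a sketch along these lines (re-using the proc from the answer above; connectionString is assumed) wraps the call in a SqlTransaction so a timeout leaves no rows behind:

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    using (var cmd = new SqlCommand("EXEC dbo.AddTimeoutTestRows;", conn, tx))
    {
        cmd.CommandTimeout = 30;
        try
        {
            cmd.ExecuteNonQuery();
            tx.Commit();
        }
        catch (SqlException)
        {
            // On a timeout (or any other error) undo whatever the proc did.
            tx.Rollback();
            throw;
        }
    }
}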
If you are using the SqlCommand class, once the app times out the query execution will be rolled back.
My thinking is that it's all about the connection opened by the call to the procedure.
If your code is executed within a using block, or if the connection is garbage collected, then I think the execution of the SP will be rolled back.
Conn = new SqlConnection(ConnStr);
Conn.Open();
myCommand = new SqlCommand();
myCommand.CommandTimeout = 180000;
myCommand.Connection = Conn;
myCommand.CommandType = System.Data.CommandType.StoredProcedure;

Memory Problems in C# using while(true)

I would like to write a client in C# which checks whether a user is logged in on different clients. The client should run 24/7 and refresh a database with some state information for each client.
My problem: the command-line tool takes more and more memory, so I think I am allocating memory somewhere that never gets released.
I suspect it is the ManagementScope I am creating, but I cannot call a Dispose() method on it.
Here is my code:
static void Main(string[] args)
{
    Ping pingSender = new Ping();
    PingOptions options = new PingOptions();
    string sqlconnectionstring = "Data Source=(local)\\SQLEXPRESS;Initial Catalog=clientstat;User ID=...;Password=....;Integrated Security=SSPI";
    SqlConnection clientread = new SqlConnection(sqlconnectionstring);
    clientread.Open();

    // Use the default Ttl value which is 128,
    // but change the fragmentation behavior.
    options.DontFragment = true;
    string username = "";

    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);
    int timeout = 120;

    while (true)
    {
        SqlCommand clientcommand = new SqlCommand("SELECT * FROM Client WHERE StateID = @stateid", clientread);
        clientcommand.Parameters.Add(new SqlParameter("stateid", 1));
        SqlDataReader clientreader = clientcommand.ExecuteReader();

        while (clientreader.Read())
        {
            string ipadress = Convert.ToString(clientreader["IP"]);
            string clientid = Convert.ToString(clientreader["ID"]);
            if (ipadress != string.Empty && clientid != string.Empty)
            {
                // First try to ping the computer
                PingReply reply = pingSender.Send(ipadress, timeout, buffer, options);
                if (reply.Status == IPStatus.Success)
                {
                    try
                    {
                        ManagementScope managementScope = new ManagementScope(@"\\" + ipadress + @"\root\cimv2");
                        managementScope.Options.Username = "....";
                        managementScope.Options.Password = "...";
                        managementScope.Options.EnablePrivileges = true;

                        // ObjectQuery to check if a user is logged on
                        ObjectQuery objectQuery = new ObjectQuery("SELECT * FROM Win32_ComputerSystem");
                        ManagementObjectSearcher managementObjectSearcher = new ManagementObjectSearcher(managementScope, objectQuery);
                        ManagementObjectCollection querycollection = managementObjectSearcher.Get();
                        foreach (ManagementObject mo in querycollection)
                        {
                            // Check the user name here
                            username = Convert.ToString(mo["UserName"]);
                            if (username != "")
                            {
                                Console.WriteLine(ipadress + " " + username);
                            }
                        }
                        querycollection.Dispose();
                        managementObjectSearcher.Dispose();
                    }
                    catch (Exception x)
                    {
                        Console.WriteLine(x.Message);
                    }
                }
            }
            else
            {
                Console.WriteLine(clientid + " has no IP-Adress in Database");
            }
        }
        clientcommand.Dispose();
        clientreader.Close();
        clientreader.Dispose();
    }
}
Any ideas or suggestions for what I can improve here, or what exactly the problem could be?
Thanks in advance
Idea 1:
You have to Dispose the ManagementObject to release the unmanaged COM resources.
Unfortunately, there is a bug in the Dispose implementation of it. Here are more details about it.
Credit should go to this answer, which provides a workaround using GC.Collect(). Unfortunately, it has a cost.
That's why it is better to use a counter and perform the GC.Collect() every n loops, with an n value you tune manually until the performance is acceptable.
Anyway, I would try to invoke the ManagementObject Dispose() using reflection.
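Short of reflection, a sketch of both ideas combined might look like this, using the loop body from the question (the cadence value n is purely illustrative; calling Dispose() directly on the ManagementObject, rather than through IDisposable, hits its own implementation):

const int n = 50;   // collection cadence - tune by hand (assumption)
int loopCount = 0;

// ... inside the outer while (true) loop from the question:
foreach (ManagementObject mo in querycollection)
{
    try
    {
        username = Convert.ToString(mo["UserName"]);
        if (username != "")
        {
            Console.WriteLine(ipadress + " " + username);
        }
    }
    finally
    {
        // Dispose each ManagementObject to release its COM resources.
        mo.Dispose();
    }
}

loopCount++;
if (loopCount % n == 0)
{
    // The GC.Collect() workaround from the linked answer; it costs,
    // so only do it every n iterations.
    GC.Collect();
    GC.WaitForPendingFinalizers();
}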
Idea 2:
In general, re-using an opened connection for several queries is not good, since it prevents the connection-pooling mechanism from working optimally. The SqlConnection may therefore retain resources if used this way.
Instead, include the SqlConnection create/open and close/dispose in the loop, as discussed in this related question.
You should use using (and not invoke Dispose(); it's not needed). The "new" issue would be the nesting, which will look like this:
using (SqlConnection ...)
{
    using (SqlCommand ...)
    {
        using (SqlDataReader ...)
        {
            ...
        }
    }
}
Basically, if you are instancing something which implements IDisposable, put a using there and be assured that .NET will handle memory for you (at least, it will try to).
Try adding a GC.Collect() call after each top-level iteration (just to diagnose the issue) and see if the memory behaves the same. If not, then you don't have an issue; the GC might just be optimistic and delay collections.
Each iteration uses a non-trivial amount of space due to the data reader buffers and whatnot, so if those are simply not collected yet, you will observe memory steadily increasing.
It is just a false alarm, though. If your system becomes memory-constrained or the app triggers some internal GC threshold, collection will happen just fine.
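For the diagnostic, something like this at the bottom of the outer while (true) loop would do:

// Diagnostic only - do not ship this. Forces a full collection so you can
// see whether the growth is just lazily-collected garbage.
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine("Heap after collect: " + GC.GetTotalMemory(false) + " bytes");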
The reason you are seeing this behavior is that you are using a while(true) statement with no break; anywhere in the loop to exit it explicitly, so the loop runs indefinitely and keeps consuming memory. I think you should use a Windows Service instead of while(true) to run it 24/7 and do its work.
I'm not sure why you're creating a new SqlCommand on each loop iteration.
Just parameterize the SqlCommand once, and in each loop iteration set the parameter values, rather than creating a new SqlCommand.
Do that, and let me know how the memory looks. Remember one more thing - the GC won't kick in until it kicks in (i.e. non-determinism rules are in effect), unless you really want to run a GC.Collect in your loop (that's sheer madness, IMHO). In other words, the constant creation/disposal of objects is probably making the memory grow. Remember that a naive Dispose isn't going to make the memory magically shrink. Keep the memory management model of .NET in mind and you should be all right.
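A minimal sketch of that suggestion applied to the question's polling loop (only the relevant part shown; the names come from the question):

// Create the command and its parameter once, before the loop.
SqlCommand clientcommand = new SqlCommand(
    "SELECT * FROM Client WHERE StateID = @stateid", clientread);
SqlParameter stateParam = clientcommand.Parameters.Add("@stateid", SqlDbType.Int);

while (true)
{
    stateParam.Value = 1; // just reassign the value each pass
    using (SqlDataReader clientreader = clientcommand.ExecuteReader())
    {
        while (clientreader.Read())
        {
            // ... same body as before ...
        }
    } // reader disposed every iteration; the command object is reused
}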

C# Multiple threads (Parallel) accessing static MySQL Connection {get;set;} with locks

I have an application that runs a Parallel.ForEach over a DataTable. Within that parallel loop (anywhere from 3-10 concurrent iterations) the class executes either an update or a select statement. I have a MySQL connection with {get;set;}, see below.
public static MySqlConnection Connection { get; set; }
public static MySqlConnection OpenCon(string ServerAddress,int PortAddress, string UserID, string Password,int ConnectionTimeOut)
{
MySqlConnection masterOpenCON = new MySqlConnection("server=" + ServerAddress + ";Port=" + PortAddress + ";UID=" + UserID + ";PASSWORD=" + Password + ";connectiontimeout="+ConnectionTimeOut+";");
masterOpenCON.Open();
return masterOpenCON;
}
Here is my Parallel
Parallel.ForEach(urlTable.AsEnumerable(), drow =>
{
WebSiteCrawlerClass WCC = new WebSiteCrawlerClass();
if (drow.ItemArray[0].ToString().Contains("$"))
{
WCC.linkGrabberwDates(drow.ItemArray[0].ToString(), "www.as.com");
}
});
Now within WCC.linkGrabberwDates a MySQL command is executed like so:
string mysql_UpdateExisting = "update trad_live" + StaticStringClass.tableID + " set price = '" + priceString + "',LastProcessDate = Now() where ListingID = 'AT" + IDValue + "'";
MySQLProcessing.MySQLProcessor.MySQLInsertUpdate(mysql_UpdateExisting, "mysql_UpdateExisting");
And here is MySQLInsertUpdate
public static void MySQLInsertUpdate(string MySQLCommand, string mysqlcommand_name)
{
    try
    {
        MySqlCommand MySQLCommandFunc = new MySqlCommand(MySQLCommand, Connection);
        MySQLCommandFunc.CommandTimeout = 240000;
        MySQLCommandFunc.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        // note: exceptions are currently swallowed
    }
}
The two things that concern me are performance and data integrity. I know adding a lock will slow down performance but will improve data integrity.
I would prefer not to create 10+ connections to the server, so my question is this:
Where would the lock actually be placed in the above code - per SQL statement, or inside MySQLInsertUpdate? And is there a better way besides locks or additional connections per thread?
I do realize that this is currently static, but I am in the process of changing that.
IMO accessing a single static connection in parallel and trying to manage the lock yourself is a bad design.
Why not use the built-in connection pooling? That will ensure there are only X open connections (X being however many you want). If you only want one DB connection, you could set the connection pool min and max sizes to 1.
That would also let you "scale up" the number of concurrent DB queries in configuration.
I also don't see any transaction handling in your code, so your implementation may vary based on how you want to handle that: if there is a failure in one parallel iteration, should all the updates/inserts roll back together, or should each insert/update be its own commit? A sketch of the pooled approach follows.
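A sketch of that approach with Connector/NET (the pool-size keywords are the ones that driver understands; the helper name, credentials, and limits here are illustrative). Each call opens a short-lived pooled connection and uses a bound parameter instead of string concatenation:

using MySql.Data.MySqlClient;

public static class MySQLProcessor
{
    // Pooling is on by default; the pool caps how many physical
    // connections exist no matter how many parallel iterations run.
    private const string ConnString =
        "server=localhost;Port=3306;UID=user;PASSWORD=pass;" +
        "Pooling=true;Minimum Pool Size=1;Maximum Pool Size=10;";

    public static void UpdatePrice(string tableId, string price, string listingId)
    {
        using (var conn = new MySqlConnection(ConnString))
        using (var cmd = conn.CreateCommand())
        {
            conn.Open(); // cheap: grabs a connection from the pool
            cmd.CommandText = "update trad_live" + tableId +
                " set price = @price, LastProcessDate = Now()" +
                " where ListingID = @listingId";
            cmd.Parameters.AddWithValue("@price", price);
            cmd.Parameters.AddWithValue("@listingId", listingId);
            cmd.ExecuteNonQuery();
        } // Dispose returns the connection to the pool
    }
}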

TransactionScope helper that exhausts connection pool without fail - help?

A while back I asked a question about TransactionScope escalating to MSDTC when I wasn't expecting it to. (Previous question)
What it boiled down to was, in SQL2005, in order to use a TransactionScope, you can only instance and open a single SqlConnection within the life of the TransactionScope. With SQL2008, you can instance multiple SqlConnections, but only a single one can be open at any given time. SQL2000 will always escalate to DTC...we don't support SQL2000 in our application, a WinForms app, BTW.
Our solution to the single-connection-only problem was to create a TransactionScope helper class, called LocalTransactionScope (aka 'LTS'). It wraps a TransactionScope and, most importantly, creates and maintains a single SqlConnection instance for our application. The good news is, it works - we can use LTS across disparate pieces of code and they all join the ambient transaction. Very nice. The trouble is, every root LTS instance created will create and effectively kill a connection from the connection pool. By 'effectively kill' I mean it will instance a SqlConnection, which will open a new connection (for whatever reason, it never reuses a connection from the pool), and when that root LTS is disposed, it closes and disposes the SqlConnection, which is supposed to release the connection back to the pool so that it can be reused; however, it clearly never is reused. The pool bloats until it's maxed out, and then the application fails when a max-pool-size+1 connection is established.
Below I've attached a stripped-down version of the LTS code and a sample console application class that will demonstrate the connection pool exhaustion. To watch your connection pool bloat, use SQL Server Management Studio's 'Activity Monitor' or this query:
SELECT DB_NAME(dbid) as 'DB Name',
COUNT(dbid) as 'Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I'm attaching LTS here, and a sample console application that you can use to demonstrate for yourself that it will consume connections from the pool and never re-use nor release them. You will need to add a reference to System.Transactions.dll for LTS to compile.
Things to note: It's the root-level LTS that opens and closes the SqlConnection, which always opens a new connection in the pool. Having nested LTS instances makes no difference because only the root LTS instance establishes a SqlConnection. As you can see, the connection string is always the same, so it should be reusing the connections.
Is there some arcane condition we're not meeting that causes the connections not to be re-used? Is there any solution to this other than turning pooling off entirely?
public sealed class LocalTransactionScope : IDisposable
{
    private static SqlConnection _Connection;
    private TransactionScope _TransactionScope;
    private bool _IsNested;

    public LocalTransactionScope(string connectionString)
    {
        // stripped out a few cases that need to throw an exception
        _TransactionScope = new TransactionScope();
        // we'll use this later in Dispose(...) to determine whether this LTS instance should close the connection.
        _IsNested = (_Connection != null);
        if (_Connection == null)
        {
            _Connection = new SqlConnection(connectionString);
            // This Has Code-Stink. You want to open your connections as late as possible and hold them open for as little
            // time as possible. However, in order to use TransactionScope with SQL2005 you can only have a single
            // connection, and it can only be opened once within the scope of the entire TransactionScope. If you have
            // more than one SqlConnection, or you open a SqlConnection, close it, and re-open it more than once,
            // the TransactionScope will escalate to the MSDTC. SQL2008 allows you to have multiple connections within a
            // single TransactionScope, however you can only have a single one open at any given time.
            // Lastly, let's not forget about SQL2000. Using TransactionScope with SQL2000 will immediately and always escalate to DTC.
            // We've dropped support of SQL2000, so that's not a concern we have.
            _Connection.Open();
        }
    }

    /// <summary>'Completes' the <see cref="TransactionScope"/> this <see cref="LocalTransactionScope"/> encapsulates.</summary>
    public void Complete() { _TransactionScope.Complete(); }

    /// <summary>Creates a new <see cref="SqlCommand"/> from the current <see cref="SqlConnection"/> this <see cref="LocalTransactionScope"/> is managing.</summary>
    public SqlCommand CreateCommand() { return _Connection.CreateCommand(); }

    void IDisposable.Dispose() { this.Dispose(); }

    public void Dispose()
    {
        Dispose(true); GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (disposing)
        {
            _TransactionScope.Dispose();
            _TransactionScope = null;
            if (!_IsNested)
            {
                // last one out closes the door; this would be the root LTS, the first one to be instanced.
                LocalTransactionScope._Connection.Close();
                LocalTransactionScope._Connection.Dispose();
                LocalTransactionScope._Connection = null;
            }
        }
    }
}
This is a Program.cs that will exhibit the connection pool exhaustion:
class Program
{
    static void Main(string[] args)
    {
        // fill in your connection string, but don't monkey with any pooling settings, like
        // "Pooling=false;" or the "Max Pool Size" stuff. Doesn't matter if you use
        // Windows or SQL auth, just make sure you set a Data Source and an Initial Catalog
        string connectionString = "your connection string here";
        List<string> randomTables = new List<string>();
        using (var nonLTSConnection = new SqlConnection(connectionString))
        using (var command = nonLTSConnection.CreateCommand())
        {
            command.CommandType = CommandType.Text;
            command.CommandText = @"SELECT [TABLE_NAME], NEWID() AS [ID]
                                    FROM [INFORMATION_SCHEMA].[TABLES]
                                    WHERE [TABLE_SCHEMA] = 'dbo' and [TABLE_TYPE] = 'BASE TABLE'
                                    ORDER BY [ID]";
            nonLTSConnection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string table = (string)reader["TABLE_NAME"];
                    randomTables.Add(table);
                    if (randomTables.Count > 200) { break; } // got more than enough to test.
                }
            }
            nonLTSConnection.Close();
        }
        // we're going to assume your database had some tables.
        for (int j = 0; j < 200; j++)
        {
            // At j = 100 you'll see it pause, and you'll shortly get an InvalidOperationException with the text of:
            // "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool.
            // This may have occurred because all pooled connections were in use and max pool size was reached."
            string tableName = randomTables[j % randomTables.Count];
            Console.Write("Creating root-level LTS " + j.ToString() + " selecting from " + tableName);
            using (var scope = new LocalTransactionScope(connectionString))
            using (var command = scope.CreateCommand())
            {
                command.CommandType = CommandType.Text;
                command.CommandText = "SELECT TOP 20 * FROM [" + tableName + "]";
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.Write(".");
                    }
                    Console.Write(Environment.NewLine);
                }
                // Complete() must happen while 'scope' is still in scope.
                scope.Complete();
            }
            Thread.Sleep(50);
        }
        Console.ReadKey();
    }
}
The expected TransactionScope/SqlConnection pattern is, according to MSDN:
using (TransactionScope scope = ...)
{
    using (SqlConnection conn = ...)
    {
        conn.Open();
        SqlCommand.Execute(...);
        SqlCommand.Execute(...);
    }
    scope.Complete();
}
So in the MSDN example the connection is disposed inside the scope, before the scope is completed. Your code, though, is different: it disposes the connection after the scope is complete. I'm not an expert in matters of TransactionScope and its interaction with the SqlConnection (I know some things, but your question goes pretty deep), and I can't find any specification of what the correct pattern is. But I'd suggest you revisit your code and dispose of the singleton connection before the outermost scope is completed, similarly to the MSDN sample.
Also, I hope you realize your code will fall apart the moment a second thread comes into play in your application.
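Concretely, one reading of that suggestion is the following rearrangement of the LTS Dispose(bool), releasing the connection before the scope itself is torn down (a sketch only; whether it actually stops the pool bloat would need to be verified):

private void Dispose(bool disposing)
{
    if (disposing)
    {
        if (!_IsNested)
        {
            // Mirror the MSDN pattern: release the connection while the
            // ambient transaction is still alive, i.e. before the
            // TransactionScope itself is disposed.
            LocalTransactionScope._Connection.Close();
            LocalTransactionScope._Connection.Dispose();
            LocalTransactionScope._Connection = null;
        }
        _TransactionScope.Dispose();
        _TransactionScope = null;
    }
}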
Is this code legal?
using (TransactionScope scope = ..)
{
    using (SqlConnection conn = ..)
    using (SqlCommand command = ..)
    {
        conn.Open();
        SqlCommand.Execute(..);
    }
    using (SqlConnection conn = ..) // the same connection string
    using (SqlCommand command = ..)
    {
        conn.Open();
        SqlCommand.Execute(..);
    }
    scope.Complete();
}
