Memory Problems in C# using while(true)

I would like to write a client in C# that checks whether a user is logged in on different clients. The client should run 24/7 and refresh a database with some state information for each client.
My problem is: the command line tool takes more and more memory, so I think I am allocating memory somewhere that never gets released.
I suspect it is the ManagementScope I am creating, but I cannot call a Dispose() method on it.
Here is my Code:
static void Main(string[] args)
{
    Ping pingSender = new Ping();
    PingOptions options = new PingOptions();
    string sqlconnectionstring = "Data Source=(local)\\SQLEXPRESS;Initial Catalog=clientstat;User ID=...;Password=....;Integrated Security=SSPI";
    SqlConnection clientread = new SqlConnection(sqlconnectionstring);
    clientread.Open();

    // Use the default Ttl value which is 128,
    // but change the fragmentation behavior.
    options.DontFragment = true;
    string username = "";

    // Create a buffer of 32 bytes of data to be transmitted.
    string data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa";
    byte[] buffer = Encoding.ASCII.GetBytes(data);
    int timeout = 120;

    while (true)
    {
        SqlCommand clientcommand = new SqlCommand("SELECT * FROM Client WHERE StateID = @stateid", clientread);
        clientcommand.Parameters.Add(new SqlParameter("stateid", 1));
        SqlDataReader clientreader = clientcommand.ExecuteReader();

        while (clientreader.Read())
        {
            string ipadress = Convert.ToString(clientreader["IP"]);
            string clientid = Convert.ToString(clientreader["ID"]);

            if (ipadress != string.Empty && clientid != string.Empty)
            {
                // First try to ping the computer
                PingReply reply = pingSender.Send(ipadress, timeout, buffer, options);
                if (reply.Status == IPStatus.Success)
                {
                    try
                    {
                        ManagementScope managementScope = new ManagementScope(@"\\" + ipadress + @"\root\cimv2");
                        managementScope.Options.Username = "....";
                        managementScope.Options.Password = "...";
                        managementScope.Options.EnablePrivileges = true;

                        // ObjectQuery to check if a user is logged on
                        ObjectQuery objectQuery = new ObjectQuery("SELECT * FROM Win32_ComputerSystem");
                        ManagementObjectSearcher managementObjectSearcher = new ManagementObjectSearcher(managementScope, objectQuery);
                        ManagementObjectCollection querycollection = managementObjectSearcher.Get();

                        foreach (ManagementObject mo in querycollection)
                        {
                            // Check the user name here
                            username = Convert.ToString(mo["UserName"]);
                            if (username != "")
                            {
                                Console.WriteLine(ipadress + " " + username);
                            }
                        }
                        querycollection.Dispose();
                        managementObjectSearcher.Dispose();
                    }
                    catch (Exception x)
                    {
                        Console.WriteLine(x.Message);
                    }
                }
            }
            else
            {
                Console.WriteLine(clientid + " has no IP address in the database");
            }
        }
        clientcommand.Dispose();
        clientreader.Close();
        clientreader.Dispose();
    }
}
Any ideas or suggestions for what I can improve here, or what exactly the problem could be?
Thanks in advance.

Idea 1:
You have to Dispose the ManagementObject to release its unmanaged COM resources. Unfortunately, there is a bug in its Dispose() implementation; here are more details about it.
Credit should go to the answer that provides a workaround using GC.Collect(). Unfortunately, that call is expensive.
That's why it is better to use a counter and perform the GC.Collect() only every n loops, with an n value you tune by hand until the performance is acceptable.
Anyway, I would also try to invoke the ManagementObject's Dispose() using reflection, as sketched below.
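A minimal sketch of both ideas combined (the reflection-based Dispose plus the counter-driven GC.Collect(); the class name and the value of N are assumptions to tune, not part of the original code):

using System;
using System.Management;

static class WmiCleanup
{
    private static int _iterations;
    private const int N = 50; // assumption: tune N until memory stays flat

    // The compile-time Dispose() call can bind to the buggy base
    // implementation, so look the method up on the runtime type instead.
    public static void DisposeViaReflection(ManagementBaseObject mo)
    {
        var dispose = mo.GetType().GetMethod("Dispose", Type.EmptyTypes);
        if (dispose != null)
        {
            dispose.Invoke(mo, null);
        }
    }

    // Call once per outer loop iteration.
    public static void MaybeCollect()
    {
        if (++_iterations % N == 0)
        {
            GC.Collect(); // expensive, hence only every N iterations
            GC.WaitForPendingFinalizers();
        }
    }
}

Inside the foreach over the query results you would call WmiCleanup.DisposeViaReflection(mo) for each ManagementObject, and WmiCleanup.MaybeCollect() once per outer loop iteration.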
Idea 2:
In general, re-using one open connection for several queries is not good, since it prevents the connection pooling mechanism from working optimally. The SqlConnection may therefore retain resources when used this way.
Instead, please include the SqlConnection create/open and close/dispose in the loop, as in this related question and the sketch below.
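A minimal sketch of that change, reusing the names from the question:

while (true)
{
    // Dispose() just returns the pooled connection, so creating and
    // opening the connection per iteration is cheap.
    using (var clientread = new SqlConnection(sqlconnectionstring))
    {
        clientread.Open();
        // ... run this iteration's query and processing here ...
    } // connection goes back to the pool here
}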

You should use using (and not invoke Dispose(); with using it's not needed). The "new" issue would be the nesting, which will look like this:
using (SqlConnection ...)
{
    using (SqlCommand ...)
    {
        using (SqlDataReader ...)
        {
            ...
        }
    }
}
Basically, if you are instantiating something that implements IDisposable, put a using there and be assured that .NET will handle the memory for you (at least, it will try to). Applied to your loop, it looks roughly like the sketch below.
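A sketch with the question's variable names (the ping/WMI work stays unchanged inside the reader loop):

while (true)
{
    using (var clientread = new SqlConnection(sqlconnectionstring))
    {
        clientread.Open();
        using (var clientcommand = new SqlCommand(
            "SELECT * FROM Client WHERE StateID = @stateid", clientread))
        {
            clientcommand.Parameters.Add(new SqlParameter("stateid", 1));
            using (var clientreader = clientcommand.ExecuteReader())
            {
                while (clientreader.Read())
                {
                    // ... ping / WMI checks as before ...
                }
            } // reader disposed
        } // command disposed
    } // connection disposed (returned to the pool)
}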

Try adding a GC.Collect() call after each top-level iteration (just to diagnose the issue) and see if the memory behaves the same. If it doesn't, you don't have an issue; the GC might just be optimistic and delay collections.
Each iteration uses a non-trivial amount of space for the data reader buffers and whatnot, so if those are simply not collected yet, you will observe memory steadily increasing.
It is just a false alarm, though: if your system becomes memory constrained, or the app triggers some internal GC threshold, collection will happen just fine. A diagnostic sketch is below.
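Something like this for the diagnostic run (DoOneIteration is a hypothetical stand-in for the existing loop body):

while (true)
{
    DoOneIteration(); // hypothetical: the existing query/ping/WMI work

    // Diagnostics only -- remove before shipping:
    GC.Collect();
    GC.WaitForPendingFinalizers();
    Console.WriteLine("Managed heap after collect: {0:N0} bytes",
                      GC.GetTotalMemory(false));
}

If the heap size reported here stays flat between iterations, the growth you saw in Task Manager was just deferred collection, not a leak.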

The reason you are getting this behaviour is that you are using a while(true) statement without a break; anywhere in the loop to exit it explicitly, so the loop runs indefinitely and keeps taking memory. I think you should use a Windows Service instead of while(true) to run it 24/7 and do its work; a sketch follows.
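A minimal sketch of that idea (assuming the .NET Framework ServiceBase; RefreshClientStates is a hypothetical stand-in for the loop body):

using System.ServiceProcess;
using System.Timers;

public class ClientStateService : ServiceBase
{
    private Timer _timer;

    protected override void OnStart(string[] args)
    {
        _timer = new Timer(60000); // poll every 60 s instead of spinning
        _timer.Elapsed += (s, e) => RefreshClientStates();
        _timer.Start();
    }

    protected override void OnStop()
    {
        if (_timer != null) _timer.Stop();
    }

    private void RefreshClientStates()
    {
        // ... the SQL/ping/WMI work from the question goes here ...
    }
}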

I'm not sure why you're creating a new SqlCommand on each loop iteration.
Parameterize the SqlCommand once, and inside the loop just set the parameter values rather than creating a new command.
Do that, and let me know how the memory looks (see the sketch below). Remember one more thing: the GC won't kick in until it kicks in (i.e. non-determinism rules are in effect), unless you really want to run a GC.Collect in your loop (that's sheer madness, IMHO). In other words, the constant creation/disposal of objects is probably what is making the memory grow. Remember that a naive Dispose isn't going to make the memory magically shrink. Keep the memory management model of .NET in mind and you should be all right.
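Roughly like this (a sketch reusing the question's names; only the parameter value is set inside the loop):

// Create the command and its parameter once, outside the loop.
using (var clientcommand = new SqlCommand(
    "SELECT * FROM Client WHERE StateID = @stateid", clientread))
{
    var stateParam = clientcommand.Parameters.Add("@stateid", SqlDbType.Int);
    while (true)
    {
        stateParam.Value = 1; // only the value changes between iterations
        using (var clientreader = clientcommand.ExecuteReader())
        {
            while (clientreader.Read())
            {
                // ... process rows as before ...
            }
        }
    }
}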

Related

C# - Postgres - Memory leak issue over time

Problem: the memory leaks and accumulates over time, eventually reaching 99% of capacity.
This is a question I previously asked on the same topic. I did what the person who answered told me to do, but I am still experiencing memory leak issues, and I really do not understand where the memory is accumulating from. I watched the Windows Task Manager and found that memory is periodically cleared, but the accumulation rate is faster than the clearing rate, so memory usage reaches 99% in the end.
Here is my C# code:
var connString = "Host=x.x.x.x;Port=5432;Username=postgres;Password=password;Database=database";
@Info.Trace("PostGre ");
using (var conn = new Npgsql.NpgsqlConnection(connString))
{
    conn.Open();
    int ctr = 0;
    // Insert some data
    using (var cmd = new Npgsql.NpgsqlCommand())
    {
        cmd.Connection = conn;
        var par_1 = cmd.Parameters.Add("@r", NpgsqlTypes.NpgsqlDbType.Timestamp);
        var par_2 = cmd.Parameters.Add("@p", NpgsqlTypes.NpgsqlDbType.Double);
        while (@tag.TerminateTimeScaleLoop == 100)
        {
            @Info.Trace("Pushed Data: PostGre A " + ctr.ToString());
            cmd.CommandText = "INSERT INTO TORQX VALUES (@r,@p)";
            par_1.Value = System.DateTime.Now.ToUniversalTime();
            par_2.Value = @Tag.RigData.Time.TORQX;
            cmd.ExecuteNonQuery();
            ctr = ctr + 1;
        }
    }
    @Info.Trace("Pushed Data: PostGre A Terminated");
    conn.Close();
}
What is causing the memory accumulation? Can I prevent it from accumulating? If preventing accumulation is impossible, can I manually clear the memory, and what code would do that?
I have practically no experience with C#; I was assigned to hot-fix this code because the person who wrote it isn't available right now. I have lots of experience in Python but none with C#, so please make your suggestions really explicit... otherwise I will have no clue. Thanks!

Running Sql Server Agent Jobs from C#

While searching the internet on the above topic, I found two approaches. Both work fine, but I need to know the difference between the two: which one is suitable for which occasion, etc. Our jobs take some time, and I need a way to wait until the job finishes before the next C# line executes.
Approach One
var dbConn = new SqlConnection(myConString);
var execJob = new SqlCommand
{
    CommandType = CommandType.StoredProcedure,
    CommandText = "msdb.dbo.sp_start_job"
};
execJob.Parameters.AddWithValue("@job_name", p0);
execJob.Connection = dbConn;

using (dbConn)
{
    dbConn.Open();
    using (execJob)
    {
        execJob.ExecuteNonQuery();
        Thread.Sleep(5000);
    }
}
Approach Two
using System.Threading;
using Microsoft.SqlServer.Management.Smo;
using Microsoft.SqlServer.Management.Smo.Agent;
var server = new Server(@"localhost\myinstance");
var isStopped = false;
try
{
    server.ConnectionContext.LoginSecure = true;
    server.ConnectionContext.Connect();
    var job = server.JobServer.Jobs[jobName];
    job.Start();
    Thread.Sleep(1000);
    job.Refresh();
    while (job.CurrentRunStatus == JobExecutionStatus.Executing)
    {
        Thread.Sleep(1000);
        job.Refresh();
    }
    isStopped = true;
}
finally
{
    if (server.ConnectionContext.IsOpen)
    {
        server.ConnectionContext.Disconnect();
    }
}
sp_start_job - sample 1
Your first example calls your job via the sp_start_job system stored procedure.
Note that it kicks off the job asynchronously, and the thread sleeps for an arbitrary period of time (5 seconds) before continuing regardless of the job's success or failure.
SQL Server Management Objects (SMO) - sample 2
Your second example uses (and therefore has a dependency on) the SQL Server Management Objects to achieve the same goal.
In the second case, the job also starts running asynchronously, but the subsequent loop watches the job status until it is no longer Executing. Note that the isStopped flag appears to serve no purpose, and the loop could be refactored somewhat as:
job.Start();
do
{
    Thread.Sleep(1000);
    job.Refresh();
} while (job.CurrentRunStatus == JobExecutionStatus.Executing);
You'd probably want to add a break-out of that loop after a certain period of time.
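For example (a sketch assuming a 30-minute ceiling; tune the deadline to your jobs):

var deadline = DateTime.UtcNow.AddMinutes(30); // assumed timeout
job.Start();
do
{
    Thread.Sleep(1000);
    job.Refresh();
} while (job.CurrentRunStatus == JobExecutionStatus.Executing
         && DateTime.UtcNow < deadline);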
Other Considerations
It seems the same permissions are required by each of your examples; essentially, the SMO solution is a wrapper around sp_start_job, but it provides (arguably) more robust code with a clearer purpose.
Use whichever suits you best, or do some profiling and pick the most efficient if performance is a concern.

Memory Mapped File gets deleted from memory

For some reason, when I read from a memory mapped file a couple of times, it just gets randomly deleted from memory, and I don't know what's going on. Is the kernel or the GC removing it from memory? If so, how do I prevent them from doing that?
I am serializing an object to JSON and writing it to memory.
After trying to read a couple of times, I get a FileNotFoundException: Unable to find the specified file.
private const String Protocol = @"Global\";
Code to write to memory mapped file:
public static Boolean WriteToMemoryFile<T>(List<T> data)
{
    try
    {
        if (data == null)
        {
            throw new ArgumentNullException("data", "Data cannot be null");
        }
        var mapName = typeof(T).FullName.ToLower();
        var mutexName = Protocol + typeof(T).FullName.ToLower();
        var serializedData = JsonConvert.SerializeObject(data);
        var capacity = serializedData.Length + 1;
        var mmf = MemoryMappedFile.CreateOrOpen(mapName, capacity);
        var isMutexCreated = false;
        var mutex = new Mutex(true, mutexName, out isMutexCreated);
        if (!isMutexCreated)
        {
            var isMutexOpen = false;
            do
            {
                isMutexOpen = mutex.WaitOne();
            }
            while (!isMutexOpen);
            var streamWriter = new StreamWriter(mmf.CreateViewStream());
            streamWriter.WriteLine(serializedData);
            streamWriter.Close();
            mutex.ReleaseMutex();
        }
        else
        {
            var streamWriter = new StreamWriter(mmf.CreateViewStream());
            streamWriter.WriteLine(serializedData);
            streamWriter.Close();
            mutex.ReleaseMutex();
        }
        return true;
    }
    catch (Exception ex)
    {
        return false;
    }
}
Code to read from memory mapped file:
public static List<T> ReadFromMemoryFile<T>()
{
    try
    {
        var mapName = typeof(T).FullName.ToLower();
        var mutexName = Protocol + typeof(T).FullName.ToLower();
        var mmf = MemoryMappedFile.OpenExisting(mapName);
        var mutex = Mutex.OpenExisting(mutexName);
        var isMutexOpen = false;
        do
        {
            isMutexOpen = mutex.WaitOne();
        }
        while (!isMutexOpen);
        var streamReader = new StreamReader(mmf.CreateViewStream());
        var serializedData = streamReader.ReadLine();
        streamReader.Close();
        mutex.ReleaseMutex();
        var data = JsonConvert.DeserializeObject<List<T>>(serializedData);
        mmf.Dispose();
        return data;
    }
    catch (Exception ex)
    {
        return default(List<T>);
    }
}
The process that created the memory mapped file must keep a reference to it for as long as you want it to live. Using CreateOrOpen is a bit tricky for exactly this reason - you don't know whether disposing the memory mapped file is going to destroy it or not.
You can easily see this at work by adding an explicit mmf.Dispose() to your WriteToMemoryFile method - it will close the file completely. The Dispose method is called from the finalizer of the mmf instance some time after all the references to it drop out of scope.
Or, to make it even more obvious that GC is the culprit, you can try invoking GC explicitly:
WriteToMemoryFile("Hi");
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect();
ReadFromMemoryFile().Dump(); // Nope, the value is lost now
Note that I changed your methods slightly to work with simple strings; you really want to produce the simplest possible code that reproduces the behaviour you observe. Even just having to get JsonConverter is an unnecessary complication, and might cause people to not even try running your code :)
And as a side note, you want to check for AbandonedMutexException when you do Mutex.WaitOne: it's not a failure, it means you took over the mutex. Most applications handle this wrong, leading to deadlocks as well as mutex ownership and lifetime issues :) In other words, treat AbandonedMutexException as success. Oh, and it's a good idea to put things like Mutex.ReleaseMutex in a finally clause, to make sure it actually happens, even if you get an exception. A dead thread or process doesn't matter (that will just cause one of the other contenders to get AbandonedMutexException), but if you get an exception that you "handle" with your return false;, the mutex will not be released until you close all your applications and start again fresh :) A sketch of that pattern follows.
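A sketch of the wait/release pattern just described:

var mutex = Mutex.OpenExisting(mutexName);
var owned = false;
try
{
    try
    {
        owned = mutex.WaitOne();
    }
    catch (AbandonedMutexException)
    {
        // The previous owner died without releasing: we own the mutex now.
        owned = true;
    }
    // ... read or write the view stream here ...
}
finally
{
    if (owned)
        mutex.ReleaseMutex();
}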
Clearly, the problem is that the MMF loses its scope, as explained by Luaan. But nobody has yet explained how to arrange things:
The code that writes to the MMF must run on a separate async thread.
The code that reads from the MMF must notify, once the read has completed, that the MMF has been read. The notification can be a flag in a file, for example.
The async thread running the 'write to MMF' code then keeps running (and keeps the MMF referenced) as long as the MMF has not been read by the second part. We have thereby created the context within which the memory mapped file stays valid; a sketch follows.
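A rough sketch of that arrangement (the flag-file path and the WriteDataTo helper are assumptions for illustration, not part of the question's code):

using System.IO;
using System.IO.MemoryMappedFiles;
using System.Threading;
using System.Threading.Tasks;

// Writer side: keep the MemoryMappedFile referenced until the reader signals.
var mmf = MemoryMappedFile.CreateOrOpen(mapName, capacity);
WriteDataTo(mmf); // hypothetical helper that fills the view stream

Task.Run(() =>
{
    // Wait for the reader's "done" notification (flag-file path assumed).
    while (!File.Exists(@"C:\temp\mmf-read.flag"))
    {
        Thread.Sleep(100);
    }
    mmf.Dispose(); // only now may the mapping be destroyed
});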

SQLite DB Insert Very Slow

I am using an SQLite database and inserting records into it. This takes a hugely long time! I have seen people say they can process a couple of thousand in a minute; I have around 2400 records, and each record takes 30 s to 2 min to complete. Recreating the database is not an option. I have tried to create one transaction in different ways. I need to use the timer because I am using a ProgressBar to show that something is happening. Here is the code I am using:
string con;
con = string.Format(@"Data Source={0}", documentsFolder);
SQLiteConnection sqlconnection = new SQLiteConnection(con);
SQLiteCommand sqlComm = sqlconnection.CreateCommand();
sqlconnection.Open();
SQLiteTransaction transaction = sqlconnection.BeginTransaction();
Timer timer2 = new Timer();
timer2.Interval = 1000;
timer2.Tick += (source, e) =>
{
    URL u = firefox.URLs[count2];
    string newtitle = u.title;
    form.label1.Text = count2 + "/" + pBar.Maximum;
    string c_urls = "insert or ignore into " + table
        + " (id, url, title, visit_count, typed_count, last_visit_time, hidden) values ("
        + dbID + ",'" + u.url + "','" + newtitle + "',1,1, " + ToChromeTime(u.visited) + ", 0)";
    string c_visited = "insert or ignore into " + table2
        + " (id, url, visit_time, transition) values ("
        + dbID2 + "," + dbID + "," + ToChromeTime(u.visited) + ",805306368)";
    sqlComm = new SQLiteCommand(c_urls, sqlconnection);
    sqlComm.ExecuteNonQuery();
    sqlComm = new SQLiteCommand(c_visited, sqlconnection);
    sqlComm.ExecuteNonQuery();
    dbID++;
    dbID2++;
    pBar.Value = count2;
    if (pBar.Maximum == count2)
    {
        pBar.Value = 0;
        timer.Stop();
        transaction.Commit();
        sqlComm.Dispose();
        sqlconnection.Dispose();
        sqlconnection.Close();
    }
    count2++;
};
timer2.Start();
What am I doing wrong?
This is what I would address, in order. It may or may not fix the problem, but it won't hurt to see (and it might just do some magic):
Ensure the database is not being contended with updates (from another thread, process, or even another timer!). Writers acquire locks, and unclosed or over-long-running transactions can interact in bad ways. (For updates that take "30 seconds to 2 minutes" I would imagine there is an issue obtaining locks. Also ensure the media the DB is on is adequate, e.g. a local drive.)
The transaction is not actually being used (??). Move the transaction inside the timer callback, attach it to the appropriate SQLiteCommands, and dispose of it before the callback ends (use using).
Not all the commands are being disposed correctly. Dispose each and every one. (The use of using simplifies this; do not let it bleed past the callback.)
Placeholders are not being used. Not only are they simpler and easier to use, they are also ever so slightly friendlier to SQLite and the adapter.
(Example only; there may be errors in the following code.)
// It's okay to keep long-running SQLite connections.
// In my applications I have a single application-wide connection.
// The more important thing is watching thread-access and transactions.
// In any case, we can keep this here.
SQLiteConnection sqlconnection = new SQLiteConnection(con);
sqlconnection.Open();

// In timer event - remember this is on the /UI/ thread.
// DO NOT ALLOW CROSS-THREAD ACCESS TO THE SAME SQLite CONNECTION.
// (You have been warned.)
URL u = firefox.URLs[count2];
string newtitle = u.title;
form.label1.Text = count2 + "/" + pBar.Maximum;

// This transaction is ONLY kept around for this timer callback.
// Great care must be taken with long-running transactions in SQLite.
// SQLite does not have good support for (long-running) concurrent writers
// because it must obtain exclusive file locks.
// There are no table/row locks!
SQLiteTransaction tx = null;
try {
    tx = sqlconnection.BeginTransaction();
    // using ensures cmd will be Disposed as appropriate.
    using (var cmd = sqlconnection.CreateCommand()) {
        // Using placeholders is cleaner. It shouldn't be an issue to
        // re-create the command, because it can be cached in the adapter/driver
        // (although I could be wrong on this; anyway, it's not "this issue" here).
        cmd.CommandText = "insert or ignore into " + table
            + " (id, url, title, visit_count, typed_count, last_visit_time, hidden)"
            + " values (@dbID, @url, 'etc, add other parameters')";
        // Add each parameter; easy-peasy
        cmd.Parameters.AddWithValue("@dbID", dbID);
        cmd.Parameters.AddWithValue("@url", u.url);
        // .. add other parameters
        cmd.ExecuteNonQuery();
    }
    // Do the same for the other command (it runs in the same TX),
    // then commit the TX.
    tx.Commit();
} catch (Exception ex) {
    // Or fail the TX and propagate the exception ..
    if (tx != null) { tx.Rollback(); }
    throw;
}

if (pBar.Maximum == count2)
{
    pBar.Value = 0;
    timer.Stop();
    // All the other SQLite resources are already cleaned up!
    sqlconnection.Close();
    sqlconnection.Dispose();
}
I'm not sure if this is your problem, but your general pattern of using ADO.NET is wrong: you shouldn't create new command(s) for each insert (and repeatedly pay for query preparation).
Instead, do the following:
Before the loop:
Create command(s) once.
Create appropriate bound parameters.
In the loop:
Just assign appropriate values to the bound parameters.
And execute the command(s).
You could also consider using less fine-grained transactions: try putting several inserts in the same transaction to minimize paying for transaction durability.
You might also want to take a look at this post.
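A sketch of that pattern with System.Data.SQLite (the table and variable names are assumed):

using (var cmd = sqlconnection.CreateCommand())
{
    cmd.CommandText =
        "insert or ignore into urls (id, url) values (@id, @url)";
    var pId  = cmd.Parameters.Add("@id",  DbType.Int64);
    var pUrl = cmd.Parameters.Add("@url", DbType.String);

    using (var tx = sqlconnection.BeginTransaction())
    {
        foreach (var u in urls)      // re-use the prepared command
        {
            pId.Value  = dbID++;
            pUrl.Value = u.url;
            cmd.ExecuteNonQuery();
        }
        tx.Commit();                 // one commit for the whole batch
    }
}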
You can try one of the following to improve performance:
Wrap all the inserts in a transaction - this can help reduce the actual writes to the DB.
Use WAL - the Write-Ahead Log is a journaling mode that speeds up writes and enables concurrency. (Not recommended if your DB is in a network location.)
Synchronous NORMAL - the synchronous mode dictates the frequency at which data is actually flushed to physical storage (fsync() calls). This can be time-consuming on some machines, so the flush frequency is critical. Make sure to explicitly open connections with "Synchronous=NORMAL", which is ideal for most scenarios. There is a huge difference between synchronous mode FULL and NORMAL (NORMAL is ~1000 times better).
Find more details in a similar post => What changed between System.Data.SQLite version 1.0.74 and the most recent 1.0.113?
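For example, a connection string along these lines (a sketch; verify the exact keywords against your System.Data.SQLite version):

// WAL journal + NORMAL synchronous; equivalent to
// PRAGMA journal_mode=WAL; PRAGMA synchronous=NORMAL;
var con = new SQLiteConnection(
    "Data Source=mydb.sqlite;Journal Mode=WAL;Synchronous=Normal;");
con.Open();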

TransactionScope helper that exhausts connection pool without fail - help?

A while back I asked a question about TransactionScope escalating to MSDTC when I wasn't expecting it to. (Previous question)
What it boiled down to was: in SQL2005, in order to use a TransactionScope, you can only instance and open a single SqlConnection within the life of the TransactionScope. With SQL2008, you can instance multiple SqlConnections, but only a single one can be open at any given time. SQL2000 will always escalate to DTC; we don't support SQL2000 in our application, a WinForms app, BTW.
Our solution to the single-connection-only problem was to create a TransactionScope helper class, called LocalTransactionScope (aka 'LTS'). It wraps a TransactionScope and, most importantly, creates and maintains a single SqlConnection instance for our application. The good news is, it works - we can use LTS across disparate pieces of code and they all join the ambient transaction. Very nice. The trouble is, every root LTS instance created will create and effectively kill a connection from the connection pool. By 'effectively kill' I mean it will instance a SqlConnection, which will open a new connection (for whatever reason, it never reuses a connection from the pool), and when that root LTS is disposed, it closes and disposes the SqlConnection, which is supposed to release the connection back to the pool so that it can be reused; however, it clearly never is reused. The pool bloats until it's maxed out, and then the application fails when a max-pool-size+1 connection is established.
Below I've attached a stripped-down version of the LTS code and a sample console application class that will demonstrate the connection pool exhaustion. In order to watch your connection pool bloat, use SQL Server Management Studio's 'Activity Monitor' or this query:
SELECT DB_NAME(dbid) as 'DB Name',
COUNT(dbid) as 'Connections'
FROM sys.sysprocesses WITH (nolock)
WHERE dbid > 0
GROUP BY dbid
I'm attaching LTS here, and a sample console application that you can use to demonstrate for yourself that it will consume connections from the pool and never re-use nor release them. You will need to add a reference to System.Transactions.dll for LTS to compile.
Things to note: It's the root-level LTS that opens and closes the SqlConnection, which always opens a new connection in the pool. Having nested LTS instances makes no difference because only the root LTS instance establishes a SqlConnection. As you can see, the connection string is always the same, so it should be reusing the connections.
Is there some arcane condition we're not meeting that causes the connections not to be re-used? Is there any solution to this other than turning pooling off entirely?
public sealed class LocalTransactionScope : IDisposable
{
    private static SqlConnection _Connection;
    private TransactionScope _TransactionScope;
    private bool _IsNested;

    public LocalTransactionScope(string connectionString)
    {
        // stripped out a few cases that need to throw an exception
        _TransactionScope = new TransactionScope();

        // we'll use this later in Dispose(...) to determine whether this LTS instance should close the connection.
        _IsNested = (_Connection != null);

        if (_Connection == null)
        {
            _Connection = new SqlConnection(connectionString);

            // This Has Code-Stink. You want to open your connections as late as possible and hold them open for as little
            // time as possible. However, in order to use TransactionScope with SQL2005 you can only have a single
            // connection, and it can only be opened once within the scope of the entire TransactionScope. If you have
            // more than one SqlConnection, or you open a SqlConnection, close it, and re-open it more than once,
            // the TransactionScope will escalate to the MSDTC. SQL2008 allows you to have multiple connections within a
            // single TransactionScope, however you can only have a single one open at any given time.
            // Lastly, let's not forget about SQL2000. Using TransactionScope with SQL2000 will immediately and always escalate to DTC.
            // We've dropped support of SQL2000, so that's not a concern we have.
            _Connection.Open();
        }
    }

    /// <summary>'Completes' the <see cref="TransactionScope"/> this <see cref="LocalTransactionScope"/> encapsulates.</summary>
    public void Complete() { _TransactionScope.Complete(); }

    /// <summary>Creates a new <see cref="SqlCommand"/> from the current <see cref="SqlConnection"/> this <see cref="LocalTransactionScope"/> is managing.</summary>
    public SqlCommand CreateCommand() { return _Connection.CreateCommand(); }

    void IDisposable.Dispose() { this.Dispose(); }

    public void Dispose()
    {
        Dispose(true);
        GC.SuppressFinalize(this);
    }

    private void Dispose(bool disposing)
    {
        if (disposing)
        {
            _TransactionScope.Dispose();
            _TransactionScope = null;

            if (!_IsNested)
            {
                // last one out closes the door; this would be the root LTS, the first one to be instanced.
                LocalTransactionScope._Connection.Close();
                LocalTransactionScope._Connection.Dispose();
                LocalTransactionScope._Connection = null;
            }
        }
    }
}
This is a Program.cs that will exhibit the connection pool exhaustion:
class Program
{
    static void Main(string[] args)
    {
        // Fill in your connection string, but don't monkey with any pooling settings, like
        // "Pooling=false;" or the "Max Pool Size" stuff. It doesn't matter whether you use
        // Windows or SQL auth; just make sure you set a Data Source and an Initial Catalog.
        string connectionString = "your connection string here";

        List<string> randomTables = new List<string>();
        using (var nonLTSConnection = new SqlConnection(connectionString))
        using (var command = nonLTSConnection.CreateCommand())
        {
            command.CommandType = CommandType.Text;
            command.CommandText = @"SELECT [TABLE_NAME], NEWID() AS [ID]
                                    FROM [INFORMATION_SCHEMA].[TABLES]
                                    WHERE [TABLE_SCHEMA] = 'dbo' and [TABLE_TYPE] = 'BASE TABLE'
                                    ORDER BY [ID]";
            nonLTSConnection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string table = (string)reader["TABLE_NAME"];
                    randomTables.Add(table);
                    if (randomTables.Count > 200) { break; } // got more than enough to test.
                }
            }
            nonLTSConnection.Close();
        }

        // we're going to assume your database had some tables.
        for (int j = 0; j < 200; j++)
        {
            // At j = 100 you'll see it pause, and you'll shortly get an InvalidOperationException with the text of:
            // "Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool.
            // This may have occurred because all pooled connections were in use and max pool size was reached."
            string tableName = randomTables[j % randomTables.Count];
            Console.Write("Creating root-level LTS " + j.ToString() + " selecting from " + tableName);

            using (var scope = new LocalTransactionScope(connectionString))
            using (var command = scope.CreateCommand())
            {
                command.CommandType = CommandType.Text;
                command.CommandText = "SELECT TOP 20 * FROM [" + tableName + "]";
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.Write(".");
                    }
                    Console.Write(Environment.NewLine);
                }
                scope.Complete(); // must be called while 'scope' is still in scope
            }
            Thread.Sleep(50);
        }
        Console.ReadKey();
    }
}
The expected TransactionScope/SqlConnection pattern is, according to MSDN:
using (TransactionScope scope = ...)
{
    using (SqlConnection conn = ...)
    {
        conn.Open();
        SqlCommand.Execute(...);
        SqlCommand.Execute(...);
    }
    scope.Complete();
}
So in the MSDN example the connection is disposed inside the scope, before the scope is completed. Your code, though, is different: it disposes the connection after the scope is complete. I'm not an expert in matters of TransactionScope and its interaction with the SqlConnection (I know some things, but your question goes pretty deep) and I can't find any specification of the correct pattern. But I'd suggest you revisit your code and dispose the singleton connection before the outermost scope is completed, similarly to the MSDN sample.
Also, I hope you realize your code will fall apart the moment a second thread comes into play in your application.
Is this code legal?
using (TransactionScope scope = ..)
{
    using (SqlConnection conn = ..)
    using (SqlCommand command = ..)
    {
        conn.Open();
        SqlCommand.Execute(..);
    }
    using (SqlConnection conn = ..) // the same connection string
    using (SqlCommand command = ..)
    {
        conn.Open();
        SqlCommand.Execute(..);
    }
    scope.Complete();
}
