I have a .NET application with an Oracle database and NHibernate.
I need to handle some "Connected" event that is raised before NHibernate executes the first DbCommand on the currently used OracleConnection. This is needed for primary context initialization: I have to be sure the context is initialized before any command is executed.
Is there such a possibility in NHibernate?
P.S. I cannot use an Oracle ON LOGON TRIGGER for this purpose.
UPD. The solution is:
public class CustomConnectionProvider : DriverConnectionProvider
{
    public override System.Data.IDbConnection GetConnection()
    {
        var conn = (OracleConnection)base.GetConnection();
        //init context
        return conn;
    }
}
You can use a custom DriverConnectionProvider. Normally this is used to dynamically change the connection but it seems like a good fit for your scenario. Since it's responsible for creating and opening the connection, you are guaranteed that no operations have occurred before it executes.
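To wire the custom provider into NHibernate, it has to be registered as the connection.provider property. Here is a minimal sketch, assuming code-based configuration and that CustomConnectionProvider lives in an assembly called MyApp (the assembly name is a placeholder):
var cfg = new NHibernate.Cfg.Configuration();
cfg.Configure(); // read hibernate.cfg.xml / app.config as usual

// Point NHibernate at the custom provider; equivalent to setting
// <property name="connection.provider">MyApp.CustomConnectionProvider, MyApp</property>
cfg.SetProperty(
    NHibernate.Cfg.Environment.ConnectionProvider,
    typeof(CustomConnectionProvider).AssemblyQualifiedName);

var sessionFactory = cfg.BuildSessionFactory();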
I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name onto a service which will bulk insert some data into it, display the results of the table, then clean up the table.
What I've tried so far:
1. Use SqlTransaction. Cancelling the transaction works, but it puts my query window into a state where I couldn't continue working in it: "The transaction active in this session has been committed or aborted by another session".
2. Use TransactionScope. Same issue as 1.
3. Manually clean up the table in a finally clause by issuing a DROP TABLE SqlCommand. This doesn't seem to get run, though my SqlContext.Pipe.Send() prior to issuing the command does. It doesn't seem to be related to any time constraints: if I issue a Thread.Sleep(2000) before printing another line, it still prints the second line, whereas with command.ExecuteNonQuery() it would stop before printing the second line.
4. Placing the manual cleanup code into a CER or SafeHandle. This doesn't work, as having a CER requires some guarantees, including not allocating additional memory or calling methods that are not decorated with a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
            //emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));
            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            //drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately SQLCLR does not handle query cancellation very well. However, the error message seems to imply that the cancellation does its own ROLLBACK. Have you tried not using a Transaction within the SQLCLR code, but instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps and only give EXECUTE permission to the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
Create a User from that Asymmetric Key
GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And, in the event that the cancellation aborts the execution prior to executing ROLLBACK, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and setting to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName) which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual clean up.
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
This approach is probably the best overall since you probably cannot guarantee either that ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled-back, but if someone executes this via SSMS and it aborts without the ROLLBACK, then they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shutdown / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time that the SQL Server starts (due to tempdb being created new, as a copy of model, upon each start of the SQL Server service).
Stepping back, there are probably much better ways to pass the data out of your CLR procedure:
1) You can simply use the SqlContext Pipe to return a result set without creating a table (see the sketch after this list).
2) You can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a T-SQL wrapper procedure to make this convenient.
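Here is a hedged sketch of option 1; the column names and types are made up for illustration, but the SqlDataRecord/SqlMetaData pattern is the standard way to stream a result set back through the pipe:
[SqlProcedure]
public static void GetDataDirect()
{
    // Describe the shape of the result set we will send back.
    var record = new SqlDataRecord(
        new SqlMetaData("Id", SqlDbType.Int),
        new SqlMetaData("Name", SqlDbType.NVarChar, 50));

    SqlContext.Pipe.SendResultsStart(record);
    try
    {
        // In the real procedure these rows would come from the service call.
        for (int i = 1; i <= 3; i++)
        {
            record.SetInt32(0, i);
            record.SetString(1, "row " + i);
            SqlContext.Pipe.SendResultsRow(record);
        }
    }
    finally
    {
        SqlContext.Pipe.SendResultsEnd();
    }
}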
Anyway using BEGIN TRAN/COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
                //emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));
                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch (Exception ex)
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for use, this would be an ideal match for sp_bindsession. With sp_bindsession, you would call sp_getbindtoken from the context session, pass the token to the server and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporaries and transactions being transparently propagated.
For my data access I use TransactionScopes at the API level to wrap entire operations in a single transaction, so that my SQL operations can be somewhat composable. I have a web project that hosts an API and a separate service library that holds the implementation and the calls to SQL. At the beginning of an Operation (an API entry point) I open the TransactionScope. Whenever a SqlConnection is needed within the processing of the Operation, I ask for the AmbientConnection instead of directly making a new connection. AmbientConnection finds or creates a SqlConnection for the current transaction. Doing this is supposed to allow for good composability while also avoiding invocation of the MSDTC, because the same connection keeps being used for each sub-operation within the transaction. When the transaction is completed (with scope.Complete()), the connection is automatically closed.
The problem is that every once in a while the MSDTC still gets invoked and I cannot figure out why. I've used this approach successfully before and I believe MSDTC was never invoked. Two things seem different to me this time, though: 1) I'm using SQL Server 2008 R1 (10.50.4000) - not my choice - and I'm aware that the MSDTC behavior changed beginning with this version, and perhaps not all the kinks were worked out until later versions. 2) The use of async/await is new, and I believe I have to use TransactionScopeAsyncFlowOption.Enabled to accommodate this new feature in case some part of the implementation is async. Perhaps more measures are necessary.
I tried Pooling=false in the connection string in case it was MSDTC getting invoked because of two independent logical connections handled errantly under a single pooled connection. But that didn't work.
API Operation
// Exposed API composing multiple low-level operations within a single TransactionScope
// independent of any database platform specifics.
[HttpPost]
public async Task<IHttpActionResult> GetMeTheTwoThings()
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required, TransactionScopeAsyncFlowOption.Enabled))
    {
        var result = new TwoThings(
            await serviceLayer.GetThingOne(),
            await serviceLayer.GetThingTwo());
        scope.Complete();
        return Ok(result);
    }
}
Service layer implementation
public async Task<ThingOne> GetThingOne()
{
    using (var cmd = connManagement.AmbientConnection.CreateCommand())
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.CommandText = "dbo.GetThingOne";
        return (ThingOne)(await cmd.ExecuteScalarAsync());
    }
}

public async Task<ThingTwo> GetThingTwo()
{
    using (var cmd = connManagement.AmbientConnection.CreateCommand())
    {
        cmd.CommandType = System.Data.CommandType.StoredProcedure;
        cmd.CommandText = "dbo.GetThingTwo";
        return (ThingTwo)(await cmd.ExecuteScalarAsync());
    }
}
AmbientConnection implementation
internal class SQLConnManagement
{
    readonly string connStr;
    readonly ConcurrentDictionary<Transaction, SqlConnection> txConnections = new ConcurrentDictionary<Transaction, SqlConnection>();

    private SqlConnection CreateConnection(Transaction tx)
    {
        var conn = new SqlConnection(this.connStr);
        // When the transaction completes, close the connection as well
        tx.TransactionCompleted += (s, e) =>
        {
            SqlConnection closing_conn;
            if (txConnections.TryRemove(e.Transaction, out closing_conn))
            {
                closing_conn.Dispose(); // closing_conn == conn
            }
        };
        conn.Open();
        return conn;
    }

    internal SqlConnection AmbientConnection
    {
        get
        {
            var txCurrent = Transaction.Current;
            if (txCurrent == null) throw new InvalidOperationException("An ambient transaction is required.");
            return txConnections.GetOrAdd(txCurrent, CreateConnection);
        }
    }

    public SQLConnManagement(string connStr)
    {
        this.connStr = connStr;
    }
}
Not to overcomplicate the post, but this might be relevant because every time MSDTC has been invoked, the logged stack trace shows that this next mechanism was involved. Certain data I cache with the built-in ObjectCache because it doesn't change often, so I fetch it at most once per minute or so. This is a little fancy, but I don't see why the Lazy generator would be treated any differently from a more typical call, or why this specifically would cause the MSDTC to sometimes be invoked. I've tried LazyThreadSafetyMode.ExecutionAndPublication too, just in case, but that doesn't help (and then the exception just keeps getting delivered as the cached result for subsequent requests before the expiration, of course, which is not desirable).
/// <summary>
/// Cache element that gets the item by key, or if it is missing, creates, caches, and returns the item
/// </summary>
static T CacheGetWithGenerate<T>(ObjectCache cache, string key, Func<T> generator, DateTimeOffset offset) where T : class
{
    var generatorWrapped = new Lazy<T>(generator, System.Threading.LazyThreadSafetyMode.PublicationOnly);
    return ((Lazy<T>)cache.AddOrGetExisting(
        key,
        generatorWrapped,
        offset))?.Value ?? generatorWrapped.Value;
}
public ThingTwo CachedThingTwo
{
    get
    {
        return CacheGetWithGenerate(
            MemoryCache.Default,
            "Services.ThingTwoData",
            () => GetThingTwo(), // ok, GetThingTwo isn't async this time, fudged example
            DateTime.Now.Add(TimeSpan.FromMinutes(1)));
    }
}
Do you know why MSDTC is being invoked?
PublicationOnly means that two connections can be created and one thrown away. I'm surprised you introduced this bug, because you explicitly chose PublicationOnly (as opposed to the default safety mode, which is safe), so you explicitly allowed it.
For some reason I did not see that you had already tried ExecutionAndPublication. Since not using it is a bug, please fix the code in the question.
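For reference, a minimal sketch of the safer construction, using the same names as the question's helper:
// ExecutionAndPublication guarantees the generator runs at most once,
// so only one ambient connection can ever be created for this cache entry.
var generatorWrapped = new Lazy<T>(generator,
    System.Threading.LazyThreadSafetyMode.ExecutionAndPublication);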
CreateConnection is also broken in the sense that, if Open throws, the connection object is not disposed. Probably harmless, but you never know.
Also, audit this code for thread aborts which can happen when ASP.NET times out a request. You are doing very dangerous and brittle things here.
The pattern that I use is an IoC container that injects a connection shared for the entire request. The first client of that connection opens it; the request-end event closes it. Simple, and it does away with all that nasty shared, mutable, multi-threaded state.
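A minimal sketch of that idea without any particular container, using HttpContext.Current.Items as the per-request store; the class name, key, and the Global.asax hook are illustrative assumptions, and a real IoC container would handle this lifetime registration for you:
public static class RequestConnection
{
    private const string Key = "RequestConnection.Sql"; // hypothetical per-request key

    // First caller in the request opens the connection; later callers reuse it.
    public static SqlConnection Current(string connStr)
    {
        var items = System.Web.HttpContext.Current.Items;
        var conn = items[Key] as SqlConnection;
        if (conn == null)
        {
            conn = new SqlConnection(connStr);
            conn.Open();
            items[Key] = conn;
        }
        return conn;
    }

    // Call this from Application_EndRequest in Global.asax.
    public static void CloseIfOpen()
    {
        var conn = System.Web.HttpContext.Current.Items[Key] as SqlConnection;
        conn?.Dispose();
    }
}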
Why are you using a cache for data that you do not want to lose? This is probably the bug. Don't do that.
What is ?.Value ?? generatorWrapped.Value about? The dictionary can never return null. Delete that code. If it could return null then forcing the lazy value would create a second connection so that's a logic bug as well.
I'm creating a WinForms application.
One form is made transparent; this form is used to show some popup message boxes, and, using a timer, it queries the database every second.
Currently I'm creating the database connection inside a using block (the database is PostgreSQL).
Method 1
namespace MyApplication
{
    public partial class frmCheckStatus : Form
    {
        private void timerCheckStatus_Tick(object sender, EventArgs e)
        {
            using (NpgsqlConnection conn = new NpgsqlConnection("My Connection String"))
            {
                conn.Open();
                //Database queries
                //Show popup message
                conn.Close(); //Forcing it to close
            }
        }
    }
}
So every second this connection object is created and disposed.
Note: I'm not using this object for any other purpose or inside any other forms or methods.
Is it good to create a single connection object global to this class, use it inside the timer tick function, and dispose of it in the form close event?
Method 2
namespace MyApplication
{
    public partial class frmCheckStatus : Form
    {
        private NpgsqlConnection conn = new NpgsqlConnection("My Connection String");

        private void timerCheckStatus_Tick(object sender, EventArgs e)
        {
            //Here use conn object for queries.
            conn.Open();
            //Database queries
            //Show popup message
            conn.Close(); //Forcing it to close
        }

        private void frmCheckStatus_FormClosing(object sender, FormClosingEventArgs e)
        {
            conn.Dispose();
        }
    }
}
Which is better, considering memory, resource usage, execution time, etc.? Please give proper reasons for your choice of method.
Looking at the documentation for your connection class (Here), it would appear that this supports connection pooling. This will mean that connections to the same endpoint (same connection string) will reuse existing connections rather than incurring the overhead of creating new ones.
I'm not familiar with your particular connection class, but if the behaviour is anything like the SqlConnection class in ADO.NET, repeatedly creating a new connection to the same connection string should not be particularly expensive (computationally).
As an aside, I would wrap your connection logic in try / finally to ensure the connection gets closed in the event of an application exception.
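A hedged sketch of that, keeping the timer-tick structure from the question:
private void timerCheckStatus_Tick(object sender, EventArgs e)
{
    NpgsqlConnection conn = null;
    try
    {
        conn = new NpgsqlConnection("My Connection String");
        conn.Open();
        //Database queries
        //Show popup message
    }
    finally
    {
        // Runs even if a query throws; with pooling enabled, disposing
        // simply returns the physical connection to the pool.
        if (conn != null)
            conn.Dispose();
    }
}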
I can't see any advantage to instantiating a new connection every time you run a new query. I know it's done often in code, but there is overhead associated with it, however small. If you're running multiple queries from the start of the program to the end of the program, I think you should re-use the existing connection object.
If your goal is to make the connection "disappear" from the server (which I wouldn't generally worry about if this program runs on one machine -- if it runs on dozens, that's another story -- look up PgBouncer), then that should be just as easily accomplished by turning connection pooling off, and then the Close() method would take care of it.
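For example, a hedged sketch reusing the NpgsqlConnectionStringBuilder that appears in the property example below:
var sb = new NpgsqlConnectionStringBuilder();
sb.Host = "hostname.whatever.com";
sb.Database = "postgres";
sb.Pooling = false; // Close()/Dispose() now really closes the server-side connection

using (var conn = new NpgsqlConnection(sb.ToString()))
{
    conn.Open();
    //queries...
} // the connection disappears from the server here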
You kind of asked for pros and cons, and while it's not necessarily harmful to instantiate the connection within the loop, I can't imagine how it could be better.
For what it's worth, you may want to consider carrying the connection as a property (preferably outside of the form class, since you may want to eventually use it elsewhere). Something like this:
private NpgsqlConnection _PgConnection;

public NpgsqlConnection PgConnection
{
    get
    {
        if (_PgConnection == null)
        {
            NpgsqlConnectionStringBuilder sb = new NpgsqlConnectionStringBuilder();
            sb.Host = "hostname.whatever.com";
            sb.Port = 5432;
            sb.UserName = "scott";
            sb.Password = "tiger";
            sb.Database = "postgres";
            sb.Pooling = true;
            _PgConnection = new NpgsqlConnection(sb.ToString());
        }
        if (!_PgConnection.State.Equals(ConnectionState.Open))
            _PgConnection.Open();
        return _PgConnection;
    }
    set { _PgConnection = value; }
}
Then, within your form (or wherever you execute your SQL), you can just call the property:
NpgsqlCommand cmd = new NpgsqlCommand("select 1", Database.PgConnection);
...
Database.PgConnection.Close();
And you don't need to worry if the connection is open or closed, or if it's even been created yet.
The only open question would be whether you want that connection to actually disappear on the server, which would be changed by altering the Pooling property.
I'm working on an ASP.NET MVC application which uses Linq to SQL to connect to one of about 2000 databases. We've noticed in our profiling tools that the application spends a lot of time making connections to the databases, and I suspect this is partly due to connection pool fragmentation as described here: http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.110).aspx
Many Internet service providers host several Web sites on a single server. They may use a single database to confirm a Forms authentication login and then open a connection to a specific database for that user or group of users. The connection to the authentication database is pooled and used by everyone. However, there is a separate pool of connections to each database, which increase the number of connections to the server.

There is a relatively simple way to avoid this side effect without compromising security when you connect to SQL Server. Instead of connecting to a separate database for each user or group, connect to the same database on the server and then execute the Transact-SQL USE statement to change to the desired database.
I am trying to implement this solution in Linq to Sql so we have fewer open connections, and so there is more likely to be a connection available in the pool when we need one. To do that I need to change the database each time Linq to Sql attempts to run a query. Is there any way to accomplish this without refactoring the entire application? Currently we just create a single data context per request, and that data context may open and close many connections. Each time it opens the connection, I'd need to tell it which database to use.
My current solution is more or less like this one - it wraps a SqlConnection object inside a class that inherits from DbConnection. This allows me to override the Open() method and change the database whenever a connection is opened. It works OK for most scenarios, but in a request that makes many updates, I get this error:
System.InvalidOperationException: Transaction does not match connection
My thought was that I would then wrap a DbTransaction object in a similar way to what I did with SqlConnection, and ensure that its connection property would point back to the wrapped connection object. That fixed the error above, but introduced a new one where a DbCommand was unable to cast my wrapped connection to a SqlConnection object. So then I wrapped DbCommand too, and now I get new and exciting errors about the transaction of the DbCommand object being uninitialized.
In short, I feel like I'm chasing specific errors rather than really understanding what's going on in-depth. Am I on the right track with this wrapping strategy, or is there a better solution I'm missing?
Here are the more interesting parts of my three wrapper classes:
public class ScaledSqlConnection : DbConnection
{
    private string _dbName;
    private SqlConnection _sc;

    public override void Open()
    {
        //open the connection, change the database to the one that was passed in
        _sc.Open();
        if (this._dbName != null)
            this.ChangeDatabase(this._dbName);
    }

    protected override DbTransaction BeginDbTransaction(IsolationLevel isolationLevel)
    {
        return new ScaledSqlTransaction(this, _sc.BeginTransaction(isolationLevel));
    }

    protected override DbCommand CreateDbCommand()
    {
        return new ScaledSqlCommand(_sc.CreateCommand(), this);
    }
}

public class ScaledSqlTransaction : DbTransaction
{
    private SqlTransaction _sqlTransaction = null;
    private ScaledSqlConnection _conn = null;

    protected override DbConnection DbConnection
    {
        get { return _conn; }
    }
}

public class ScaledSqlCommand : DbCommand
{
    private SqlCommand _cmd;
    private ScaledSqlConnection _conn;
    private ScaledSqlTransaction _transaction;

    public ScaledSqlCommand(SqlCommand cmd, ScaledSqlConnection conn)
    {
        this._cmd = cmd;
        this._conn = conn;
    }

    protected override DbConnection DbConnection
    {
        get
        {
            return _conn;
        }
        set
        {
            if (value is ScaledSqlConnection)
                _conn = (ScaledSqlConnection)value;
            else
                throw new Exception("Only ScaledSqlConnections can be connections here.");
        }
    }

    protected override DbTransaction DbTransaction
    {
        get
        {
            if (_transaction == null && _cmd.Transaction != null)
                _transaction = new ScaledSqlTransaction(this._conn, _cmd.Transaction);
            return _transaction;
        }
        set
        {
            if (value == null)
            {
                _transaction = null;
                _cmd.Transaction = null;
            }
            else
            {
                if (value is ScaledSqlTransaction)
                    _transaction = (ScaledSqlTransaction)value;
                else
                    throw new Exception("Don't set the transaction of a ScaledDbCommand with " + value.ToString());
            }
        }
    }
}
I don't think that's going to work off a single shared connection.
LINQ to SQL works best with Unit of Work style connections: create your connection, do your atomically grouped work, close the connection as quickly as possible, and reopen it for the next task. If you do that, then passing in a connection string (or using a custom constructor that only passes a table name) is pretty straightforward.
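A minimal sketch of that unit-of-work style; MyDataContext, Customers, and the variable names are illustrative:
// Create the context, do one atomically grouped piece of work, then dispose it.
// The connection is opened lazily and released when the context is disposed.
using (var db = new MyDataContext(connectionString))
{
    var customer = db.Customers.Single(c => c.Id == customerId);
    customer.Name = "New name";
    db.SubmitChanges();
}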
If refactoring your application is a problem, you could use a getter for the cached DataContext 'instance' that creates a new instance each time it is requested, rather than returning the cached/shared instance, and inject the connection string in that getter.
But I'm pretty sure this will not help with your pooling issue. The SQL Server driver pools connections based on the connection string value; since the values still differ, you're right back to having lots of connections active across many separate pools, which will likely result in lots of pool misses and therefore slow connections.
I think I figured out a solution that works for my situation. Rather than wrapping SqlConnection and overriding Open() to change databases, I'm passing the DataContext a new SqlConnection and subscribing to the connection's StateChange event. When the state changes, I check whether the connection has just been opened; if so, I call SqlConnection.ChangeDatabase() to point it to the correct database. I tested this solution and it seems to work: I see only one connection pool for all the databases rather than one pool for each database that has been accessed.
I realize this isn't the ideal solution in an ideal application, but given how this application is structured I think it should make a decent improvement for relatively little cost.
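A hedged sketch of that approach; GetDatabaseNameForCurrentRequest and MyDataContext are made-up placeholders for whatever maps the request to its database and for the generated context class:
// One shared connection string (and therefore one pool) for all the databases.
var conn = new SqlConnection(sharedConnectionString);

// Whenever the connection is (re)opened, switch it to the right database.
conn.StateChange += (sender, e) =>
{
    if (e.CurrentState == ConnectionState.Open)
        ((SqlConnection)sender).ChangeDatabase(GetDatabaseNameForCurrentRequest());
};

using (var db = new MyDataContext(conn))
{
    // queries run against the per-request database while sharing one pool
}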
I think the best way is to use the Unit of Work pattern together with the Repository pattern when working with Entity Framework. Entity Framework has FirstAsync and FirstOrDefaultAsync; this helped me fix the same bug.
https://msdn.microsoft.com/en-us/data/jj819165.aspx
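For illustration only (the context, entity, and repository names are made up), the async query shape the answer refers to looks like this in EF6:
using System.Data.Entity; // EF6 async extension methods
using System.Threading.Tasks;

public class ThingRepository
{
    private readonly MyDbContext context; // hypothetical DbContext

    public ThingRepository(MyDbContext context) { this.context = context; }

    public Task<Thing> GetByIdAsync(int id)
    {
        // Awaitable query instead of a blocking First/FirstOrDefault call.
        return context.Things.FirstOrDefaultAsync(t => t.Id == id);
    }
}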
Please help!
Background info
I have a WPF application which accesses a SQL Server 2005 database. The database is running locally on the machine the application is running on.
Everywhere I use the LINQ DataContext I use a using { } statement, and I pass in the result of a function which returns a SqlConnection object that has been opened and has had a SqlCommand executed on it before being returned to the DataContext constructor, i.e.:
// In the application code
using (DataContext db = new DataContext(GetConnection()))
{
    ... Code
}
where GetConnection looks like this (I've stripped out the 'fluff' from the function to make it more readable, but there is no missing functionality):
// Function which gets an opened connection which is given back to the DataContext constructor
public static System.Data.SqlClient.SqlConnection GetConnection()
{
    System.Data.SqlClient.SqlConnection Conn = new System.Data.SqlClient.SqlConnection(/* The connection string */);
    if (Conn != null)
    {
        try
        {
            Conn.Open();
        }
        catch (System.Data.SqlClient.SqlException SDSCSEx)
        {
            /* Error Handling */
        }
        using (System.Data.SqlClient.SqlCommand SetCmd = new System.Data.SqlClient.SqlCommand())
        {
            SetCmd.Connection = Conn;
            SetCmd.CommandType = System.Data.CommandType.Text;
            string CurrentUserID = System.String.Empty;
            SetCmd.CommandText = "DECLARE @B VARBINARY(36); SET @B = CAST('" + CurrentUserID + "' AS VARBINARY(36)); SET CONTEXT_INFO @B";
            try
            {
                SetCmd.ExecuteNonQuery();
            }
            catch (System.Exception)
            {
                /* Error Handling */
            }
        }
    }
    return Conn;
}
I do not think that the application being a WPF one has any bearing on the issue I am having.
The issue I am having
Despite the SqlConnection being disposed along with the DataContext, in SQL Server Management Studio I can still see loads of open connections with:
status: 'Sleeping'
command: 'AWAITING COMMAND'
last SQL Transact Command Batch: DECLARE @B VARBINARY(36); SET @B = CAST('GUID' AS VARBINARY(36)); SET CONTEXT_INFO @B
Eventually the connection pool gets used up and the application can't continue.
So I can only conclude that somehow running the SqlCommand to set CONTEXT_INFO means that the connection doesn't get disposed of when the DataContext is disposed.
Can anyone spot anything obvious that would be stopping the connections from being closed and disposed of when the DataContext they are used by are disposed?
From MSDN (DataContext Constructor (IDbConnection)):
If you provide an open connection, the DataContext will not close it. Therefore, do not instantiate a DataContext with an open connection unless you have a good reason to do this.
So basically, it looks like your connections are waiting for the GC to finalize them before they are released. If you have lots of code that does this, one approach might be to override Dispose() in the data-context's partial class and close the connection there; just be sure to document that the data-context assumes ownership of the connection!
protected override void Dispose(bool disposing)
{
    if (disposing && this.Connection != null && this.Connection.State == ConnectionState.Open)
    {
        this.Connection.Close();
        this.Connection.Dispose();
    }
    base.Dispose(disposing);
}
Personally, I would happily give it (regular data-context, w/o the hack above) an open connection as long as I was "using" the connection (allowing me to perform multiple operations) - i.e.
using (var conn = GetConnection())
{
    // snip: some stuff involving conn
    using (var ctx = new FooContext(conn))
    {
        // snip: some stuff involving ctx
    }
    // snip: some more stuff involving conn
}
The SqlProvider used by the LINQ DataContext only closes the SQL connection (through SqlConnectionManager.DisposeConnection) if it was the one to open it. If you give an already-open SqlConnection object to the DataContext constructor, it will not close it for you. Thus, you should write:
using (SqlConnection conn = GetConnection())
using (DataContext db = new DataContext(conn))
{
    ... Code
}
I experienced the same issue using the Entity Framework. My ObjectContext was wrapped in a using block.
A connection was established when I called SaveChanges(), but after the using statement went out of scope, I noticed that SQL Server Management Studio still showed an "AWAITING COMMAND" session for the .NET SQL Client.
It looks like this has to do with the behavior of the ADO.NET provider which has connection pooling turned on by default.
From "Using Connection Pooling with SQL Server" on MSDN (emphasis mine):
Connection pooling reduces the number of times that new connections need to be opened. The pooler maintains ownership of the physical connection. It manages connections by keeping alive a set of active connections for each given connection configuration. Whenever a user calls Open on a connection, the pooler looks to see if there is an available connection in the pool. If a pooled connection is available, it returns it to the caller instead of opening a new connection. When the application calls Close on the connection, the pooler returns it to the pooled set of active connections instead of actually closing it. Once the connection is returned to the pool, it is ready to be reused on the next Open call.
Also, ClearAllPools and ClearPool seem useful for explicitly closing pooled connections if needed.
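For example (both are static methods on SqlConnection):
// Empties all connection pools for this provider in the current AppDomain.
SqlConnection.ClearAllPools();

// Or empty only the pool associated with one particular connection.
SqlConnection.ClearPool(connection);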
I think the connection, while no longer referenced, is waiting for the GC to dispose of it fully.
Solution:
Create your own DataContext class which derives from the auto-generated one. (rename the base one so you don't have to change any other code).
In your derived DataContext - add a Dispose() function. In that - dispose the inner connection.
Well, thanks for the help chaps, it has been solved now.
Essentially I took elements from most of the answers above and implemented the DataContext constructor as follows (I had already overloaded the constructors, so it wasn't a big change).
// Variable for storing the connection passed to the constructor
private System.Data.SqlClient.SqlConnection _Connection;

public DataContext(System.Data.SqlClient.SqlConnection Connection) : base(Connection)
{
    // Only set the reference if the connection is Valid and Open during construction
    if (Connection != null)
    {
        if (Connection.State == System.Data.ConnectionState.Open)
        {
            _Connection = Connection;
        }
    }
}

protected override void Dispose(bool disposing)
{
    // Only try closing the connection if it was opened during construction
    if (_Connection != null)
    {
        _Connection.Close();
        _Connection.Dispose();
    }
    base.Dispose(disposing);
}
The reason for doing this rather than some of the suggestions above is that accessing this.Connection in the Dispose method throws an ObjectDisposedException.
And the above works as well as I was hoping!
The Dispose should close the connections, as MSDN points out:
If the SqlConnection goes out of scope, it won't be closed. Therefore, you must explicitly close the connection by calling Close or Dispose. Close and Dispose are functionally equivalent. If the connection pooling value Pooling is set to true or yes, the underlying connection is returned back to the connection pool. On the other hand, if Pooling is set to false or no, the underlying connection to the server is closed.
My guess would be that your problem has something to do with GetConnection().