Can't Drop SQL Database After Exception During Insert Statement - c#

I'm running into an issue with an application I'm working on. Here is a breakdown of what is happening.
1. Create a new database via code (succeeds)
2. Connect to the newly created database and create tables (also succeeds)
3. Insert data into the previously created tables (this fails with an exception)
4. Show an error message to the user
5. Open a new connection to the master database
6. Drop the database created in step 1
Step 6 fails with:
System.Data.SqlClient.SqlException: Cannot drop database "ImportFail" because it is currently in use.
The error message kind of makes sense if you have unclosed connections, or use a connection on the database you want to drop, but from what I can see that is not the case.
I went into SQL Server Management Studio and looked at what is blocking the drop statement. SPID 52 seems to be blocked by SPID 53; 53, however, is the DDL statement that created the database in step 1, which undoubtedly succeeded.
The creation of the database and the subsequent insert statements (or any SqlCommand in our codebase) all go through the following method. I can't see why any connection would linger here, even in case of an exception.

Add SqlConnection.ClearAllPools(); after the exception. This static method physically closes and removes the pooled connections that are holding a shared lock on the database and preventing it from being dropped.
Note that closing/disposing a connection only returns it to the pool; that alone is not enough.
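A minimal sketch of the recovery path (the CreateDatabase/CreateTables/InsertData/ShowError helpers and masterConnectionString are hypothetical stand-ins for the steps above; the key line is ClearAllPools() before the DROP):

// Hypothetical flow illustrating the fix; names are illustrative.
try
{
    CreateDatabase("ImportFail");   // step 1
    CreateTables("ImportFail");     // step 2
    InsertData("ImportFail");       // step 3: throws
}
catch (SqlException ex)
{
    ShowError(ex);                  // step 4

    // Physically close pooled connections that still hold a shared
    // lock on the database; Close()/Dispose() alone only returns
    // them to the pool.
    SqlConnection.ClearAllPools();

    using (var master = new SqlConnection(masterConnectionString))
    {
        master.Open();              // step 5
        using (var cmd = new SqlCommand("DROP DATABASE [ImportFail];", master))
            cmd.ExecuteNonQuery();  // step 6 now succeeds
    }
}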

Related

How to open multiple SQL connections with ADO.NET when handling an NServiceBus message

I have a message handler using NServiceBus that needs to execute SQL code on two different databases. The connection strings have different initial catalogs but are otherwise identical.
When the message is picked up, the first SQL connection opens successfully, but the second causes the following exception to be thrown when .Open() is called.
Network access for Distributed Transaction Manager (MSDTC) has been disabled. Please enable DTC for network access in the security configuration for MSDTC using the Component Services Administrative tool.
We don't use MSDTC.
Here's the code that fails; it fails on connB.Open():
public void Handle(MyMsgCmd message)
{
    using (SqlConnection connA = new SqlConnection(myConnectionStringA))
    {
        connA.Open();
    }

    using (SqlConnection connB = new SqlConnection(myConnectionStringB))
    {
        connB.Open();
    }
}
This same code works perfectly fine when run from a command line application or web application. The exception is only thrown when it's called from NServiceBus.
Each of these connections opens successfully when opened first or by itself, but whenever a second connection is present, the second one always fails to open with the same exception, even when it is known to be good.
Is there additional configuration needed to open more than one connection in sequence with NServiceBus?
It looks like, by default, NServiceBus wraps each message handler in a transaction scope, and that causes queries against different database connections inside the same message handler to fail unless MSDTC is enabled.
I can disable that with BusConfiguration.Transactions().DoNotWrapHandlersExecutionInATransactionScope()
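A minimal sketch of that configuration, assuming the v5-era BusConfiguration API quoted above; the exact call differs between NServiceBus versions:

// NServiceBus v5-style endpoint configuration (sketch).
var busConfiguration = new BusConfiguration();

// Stop wrapping handler execution in a TransactionScope, so two
// connections in one handler no longer escalate to MSDTC.
busConfiguration.Transactions()
    .DoNotWrapHandlersExecutionInATransactionScope();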
You can find more information on transactions in the NServiceBus documentation.
This isn't related exclusively to NServiceBus; we just provide different ways of connecting to a transport (like MSMQ, Azure Service Bus, etc.), a persister, and your own database.
But even without NServiceBus, when connecting to two databases, you either need a distributed transaction or must make sure the transaction is not escalated to a distributed one. The thing is, without distributed transactions, one transaction might commit successfully while the other fails, with the result that your two databases are no longer in sync or consistent.
If orders are stored in DatabaseA and inventory is tracked in DatabaseB, you might deduct 1 from inventory while the order is never stored because its transaction failed. Without distributed transactions, you need to compensate for this yourself.
That's not to say distributed transactions are always the way to go. You're probably not using them because your DBA doesn't like them. MSDTC always puts serializable transactions on your data, which take the heaviest locks; the longer you keep them open, the more concurrently running transactions will have to wait, possibly causing huge performance issues in your software.
On the other hand, it can be very, very difficult to create compensating transactions. And think about the fact that DatabaseA might fail while DatabaseB succeeds. What happens to the message? Is it gone from the queue? Or will it remain in the queue and be processed again? Will DatabaseB succeed again, possibly resulting in duplicate data?
Luckily you're already using NServiceBus. You might want to check out the Outbox feature, which can help solve some of these issues.
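For reference, a sketch of enabling it with the same v5-era configuration object; the exact API varies by version, and the Outbox requires a supported persistence (e.g. NHibernate):

// Sketch: enable the Outbox so message consumption and database
// changes are made atomic without MSDTC.
busConfiguration.EnableOutbox();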

Azure, SQL transactions and connections

The question:
How do you combine LINQ, Azure, and database connections with TransactionScope, without getting transactions elevated to distributed?
Specifically, is workaround/solution 2 (see below) an acceptable solution, or is there a better way?
Background info:
The connection string is fetched from the .config file.
using (var db = new DBDataContext())
using (var scope = new TransactionScope())
{
    // Do DB stuff, then call a method that has more Linq code..
    // ..accessing the _same_ DB, and needs to be inside this transaction.
}
This seems to be best practice, and works fine when debugging on localhost.
When deployed to Azure, the transaction is 'elevated' to distributed as soon as a LINQ query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Note that the called method has its own "using DBDataContext()", but no new transactionscope.
It seems that the connection pool manager is not certain that the new connection is to the same database, even though the connectionstring is identical.
There seem to be three workarounds:
1) Pass a reference to the existing connection
- Not acceptable. There are literally hundreds of methods that invoke the DB; it is not the caller's responsibility.
2) Use a global (data-layer) connection manager
- This is NOT best practice and should be avoided. But why?
3) Use integrated security
- The connection pool manager may recognize the connection as identical to the existing one when integrated security is used.
- I have not tested this, because this solution is unacceptable; we should not be forced into integrated security by this issue.
Edit:
Using Azure SQL Database (NOT SQL Server on Azure VM).
Azure SQL Database does NOT support distributed transactions.
You kind of answered your own question here:
When deployed to Azure, the transaction is 'elevated' to distributed as soon as a LINQ query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Any time a second connection is involved, the transaction is elevated. There may be some optimized cases that circumvent this (I'm not aware of any), but I don't think there's much you can do here. The same connection should be reused.
Think of how it would work without TransactionScope. Your code might look like:
using (var cn = GetDbConnection())
using (var tx = cn.BeginTransaction())
{
    // do stuff with tx...
    using (var cn2 = GetDbConnection())
    {
        // Now think about the transaction scope here...
        // There is no way for cn2 to reuse cn's transaction.
        // It must begin its own transaction. The only way to manage
        // disconnected transactions of this nature is to elevate to
        // a distributed transaction.
    }
}
Edit: With regards to your question about a global connection manager, I'm not sure it's a bad idea, depending on your implementation. For the ASP.NET use case, we typically scope the database context per request. Any code down the chain that requires a connection should have its database context injected.
This ensures the same context (connection) is shared across the entire request. The transaction can then be committed automatically or manually, or automatically rolled back in the case of an exception. This is a pretty simple use case and admittedly may not fit the bill for your scenario, but it's one that has worked pretty well for us.
Edit 2: Using lightweight transactions, you can avoid elevation by closing one connection BEFORE the next is opened. The transaction itself remains open until you call ts.Complete(), even across connections.
https://blogs.msdn.microsoft.com/adonet/2008/03/25/extending-lightweight-transactions-in-sqlclient/
You open outer connection “A”. The pool has no free appropriate connection, so inner connection “z” is set up and enlisted in the transaction, establishing a lightweight transaction. You now close “A”, which sets aside “z” to wait for the transaction to end. Next you open outer connection “B” (you could also open “A” again and get the same results). “B” looks for a free inner connection in the pool attached to the transaction, doesn’t find one, creates inner connection “y” and tries to enlist it in the transaction. The transaction, now finding two different resources trying to enlist, must promote (resources in general, and sql connections in particular, cannot share local transactions). Finally you end the transaction, which sends the commit or rollback across “z”, disconnects it from the transaction and returns it to the pool.
So this brings us to the extensions we added for Sql Server 2008 support. On the server, we added a new connection reset mode that does not roll back local transactions. This allows SqlClient to return the inner connection to the pool to be reused. In our example, when you open “B”, it will find “z” waiting in the pool, associated with the transaction where “A” put it when you closed “A”. “B” appropriates and resets “z” (with the transaction-preserving reset) and happily continues working. Neither System.Transactions nor the server is aware that the application sees “z” as two separate connections. As far as they are concerned, there is only one connection, working on a single local transaction, and no promotion is necessary.
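In code, the close-before-open pattern described above looks roughly like this (a sketch, assuming SQL Server 2008+ and identical connection strings so the pool can hand the second Open() the same enlisted inner connection; requires System.Transactions and System.Data.SqlClient):

using (var ts = new TransactionScope())
{
    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();
        // ... first batch of work ...
    }   // close BEFORE opening the next connection

    using (var cn = new SqlConnection(connectionString))
    {
        cn.Open();  // reuses the enlisted inner connection; no promotion
        // ... second batch of work ...
    }

    ts.Complete();  // the single local transaction commits here
}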

How do I avoid "sleeping" processes in MSSQL?

Currently I'm experiencing a lot of issues with server processes (seen in sp_who2) that are "sleeping" instead of just finishing (being removed) when I connect to my MSSQL database, call a stored procedure, get some data, and then close the connection.
What's the best way in C#/.NET to connect to a MSSQL database, call a Stored Procedure, retrieve data from the Stored Procedure (Data Reader) and then close the connection?
Is there a way to close/dispose so that the "sleeping" processes get killed?
Does it have something to do with me creating new SqlConnections, opening them and closing them all the time?
My flow is as follows:
A request occurs:
1. I create a new SqlConnection instance that connects to my MSSQL database.
2. I call a stored procedure, retrieve the data, and present it to the user.
3. I close the connection with .Close();
I repeat all these steps for each request. Requests happen once every 5-10 seconds (sometimes slower, sometimes faster).
I'm sorry if this question is missing some details, but I hope this is enough to get a somewhat helpful answer.
Thanks!
You need to use SET XACT_ABORT ON or add some client-side rollback code.
When a client timeout event occurs (.NET CommandTimeout, for example), the client sends an "ABORT" to SQL Server. SQL Server then simply abandons the query processing: no transaction is rolled back, no locks are released.
The connection is then returned to the connection pool, so it isn't closed on SQL Server. If it ever is closed (via KILL, client reboot, etc.), the transactions and locks will be cleared. Note that sp_reset_connection doesn't clear them, even though it is advertised to do so.
This detritus from the abort will block other processes.
The way to make SQL Server clear transactions+locks on client timeout (strictly, ABORT events) is to use SET XACT_ABORT ON.
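A sketch of how you might apply this from C# (the stored procedure and connection string names are illustrative; alternatively, put SET XACT_ABORT ON at the top of the procedure itself):

// Requires System.Data and System.Data.SqlClient.
using (var cn = new SqlConnection(connectionString))
{
    cn.Open();

    // Make SQL Server roll back and release locks if the client
    // aborts (e.g. on CommandTimeout).
    using (var cmd = new SqlCommand("SET XACT_ABORT ON;", cn))
        cmd.ExecuteNonQuery();

    using (var cmd = new SqlCommand("dbo.MyStoredProc", cn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                // ... consume rows ...
            }
        }
    }
}   // Dispose closes the connection (returns it to the pool)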
You can verify this by opening two query windows in SSMS:
Window 1:
In the Query > Query Options menu, set a timeout of 5 seconds, then run this:
BEGIN TRAN
UPDATE sometable WITH (TABLOCKX) SET foo = foo WHERE 1 = 0;
WAITFOR DELAY '00:00:10' -- just has to be longer than the timeout
Window 2 (this will wait forever, or until it hits your timeout):
SELECT * FROM sometable

Restore NHibernate after lost Oracle database connection

I have a long-running application that uses NHibernate.ISessionFactory to connect to an Oracle database.
Occasionally the database goes offline (e.g. for weekend maintenance), but even once the database is back online, subsequent queries fail with the following exception (inner exceptions also shown):
NHibernate.Exceptions.GenericADOException: could not execute query
[ select .....]
>> Oracle.ManagedDataAccess.Client.OracleException: ORA-03135: Connection lost contact
>> OracleInternal.Network.NetworkException: ORA-03135: Connection lost contact
>> System.Net.Sockets.SocketException: An established connection
was aborted by the software in your host machine
Restarting the application restores the functionality, but I would like the application to be able to automatically cope without a restart, by "resetting" the connection.
I have tried the following with my ISessionFactory when I hit this exception:
sf.EvictQueries();
sf.Close();
sf = null;
sf = <create new session factory>
but see the same exception after recreating the ISessionFactory. I assume this is because NHibernate is caching the underlying broken connection in some kind of connection pool?
How can I persuade NHibernate to create a genuinely new connection (or even just reset all state completely), and hence allow my application to fix the connection issue itself without an application restart?
EDIT:
Following A_J's answer, note that I am already calling using (var session = _sessionFactory.OpenSession()) for each database request.
I suspect you are opening the ISession (the call to ISessionFactory.OpenSession()) at startup and closing it at application end. This is the wrong approach for any long-running application.
You should scope the session to a shorter unit of work. In a web application, this is generally handled per request. In your case, you need to find what that unit should be; if yours is a Windows service that does some activity on a timer, the Timer_Tick event is a good place.
I cannot suggest what that location could be in your application; you need to find that out on your own.
Edit 1
Looking at your edit and comment, I do not think this has anything to do with NHibernate. It may be that the connection pool is returning a disconnected/stale connection to NHibernate.
Refer to this and this accepted answer.
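A hedged sketch of that recovery, assuming the ODP.NET managed provider (whose OracleConnection exposes a static ClearAllPools()) and ORA-03135's error number 3135: catch the failure, clear the pool so the next open gets a fresh physical connection, then retry the unit of work.

// Sketch: clear stale pooled connections after an Oracle outage.
// Assumes Oracle.ManagedDataAccess.Client and NHibernate.Exceptions.
try
{
    using (var session = _sessionFactory.OpenSession())
    {
        // ... run the query ...
    }
}
catch (GenericADOException ex)
{
    var oracleEx = ex.InnerException as OracleException;
    if (oracleEx == null || oracleEx.Number != 3135)
        throw;

    OracleConnection.ClearAllPools();   // drop dead physical connections
    // ... retry the unit of work with a fresh session ...
}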

BulkInsertCommand failed in Sync Framework 2.1

On occasion I get the following error when trying to synchronize from SQL Express to SQL Server using Sync Framework 2.1. Once a client gets this error, they have to reinitialize the scope. There can't actually be anything wrong with the syntax, as the error states, because the same command runs with no problem for long periods of time (with inserts happening). Any thoughts?
11:18:21 AM Failed to execute the command 'BulkInsertCommand' for table 'XXX'; the transaction was rolled back. Ensure that the command syntax is correct.
11:18:21 AM Microsoft.Synchronization
11:18:21 AM at Microsoft.Synchronization.Data.ChangeHandlerBase.CheckZombieTransaction(String commandName, String table, Exception ex)
From a trace log:
WARNING, OfflineAgentMonitor.vshost, 13, 04/05/2011 11:16:17:224, Bulk command BulkUpdateCommand failed with the following exception. Rows will be retried during single apply. System.Data.SqlClient.SqlException (0x80131904): Trying to pass a table-valued parameter with 19 column(s) where the corresponding user-defined table type requires 20 column(s).
Try enabling Sync Fx tracing and check whether Sync Fx logs the original exception. If I remember right, this exception is normally raised when the DB connection is lost. You should be able to retry the sync without re-provisioning the scope, though.
This happened to me syncing between two SQL Azure databases. The initial cause was that the slave DB grew larger than its provisioned size. I increased the size, but it was a good 20 minutes before the sync stopped throwing the error.
