Restore NHibernate after lost Oracle database connection - c#

I have a long running application that uses NHibernate.ISessionFactory to connect to an Oracle database.
Occasionally the database goes offline (e.g. for weekend maintenance), but even once the database is back online, subsequent queries fail with the following exception (inner exceptions also shown):
NHibernate.Exceptions.GenericADOException: could not execute query
[ select .....]
>> Oracle.ManagedDataAccess.Client.OracleException: ORA-03135: Connection lost contact
>> OracleInternal.Network.NetworkException: ORA-03135: Connection lost contact
>> System.Net.Sockets.SocketException: An established connection was aborted by the software in your host machine
Restarting the application restores the functionality, but I would like the application to be able to automatically cope without a restart, by "resetting" the connection.
I have tried the following with my ISessionFactory when I hit this exception:
sf.EvictQueries();
sf.Close();
sf = null;
sf = <create new session factory>
but see the same exception after recreating the ISessionFactory. I assume this is because NHibernate is caching the underlying broken connection in some kind of connection pool?
How can I persuade NHibernate to create a genuinely new connection (or even just reset all state completely), and hence allow my application to fix the connection issue itself without an application restart?
EDIT:
Following A_J's answer, note that I am already calling using (var session = _sessionFactory.OpenSession()) for each database request.

I suspect you are opening the ISession (the call to ISessionFactory.OpenSession()) at startup and closing it at application exit. That is the wrong approach for any long-running application.
You should scope the session to a much shorter unit of work. In a web application, this is generally handled per request. In your case, you should work out what that unit should be. If yours is a Windows service that does some activity at a specified interval, the Timer_Tick event is a good place.
I cannot suggest what that location could be in your application; you need to find that out on your own.
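As a rough sketch of that pattern (the Order entity and the _sessionFactory field are placeholder names, not taken from the question), each unit of work opens and disposes its own short-lived session:

```csharp
// Sketch: one short-lived ISession per unit of work.
// "Order" and "_sessionFactory" are hypothetical names for illustration.
public IList<Order> LoadOrders()
{
    using (var session = _sessionFactory.OpenSession())
    {
        // The session, and the connection it borrowed from the pool,
        // are released as soon as this block exits.
        return session.CreateCriteria<Order>().List<Order>();
    }
}
```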
Edit 1
Looking at your edit and comment, I do not think this has anything to do with NHibernate. It may be that the connection pool is returning a disconnected/stale connection to NHibernate.
Refer to this and this accepted answer.
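If the pool is indeed handing back dead connections, one hedged workaround is to flush the provider's pools when ORA-03135 is detected. This sketch assumes the managed ODP.NET provider (whose OracleConnection exposes a static ClearAllPools method); RunQuery is a hypothetical stand-in for your data access code:

```csharp
// Sketch: discard pooled (possibly broken) physical connections after
// detecting ORA-03135, so the next open creates a genuinely new one.
try
{
    RunQuery(); // hypothetical method that uses the session factory
}
catch (NHibernate.Exceptions.GenericADOException ex)
    when ((ex.InnerException as Oracle.ManagedDataAccess.Client.OracleException)?.Number == 3135)
{
    Oracle.ManagedDataAccess.Client.OracleConnection.ClearAllPools();
}
```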

Related

How to open multiple sql connections with ADO when handling an nservicebus message

I have a message handler using NServiceBus that needs to execute SQL code on two different databases. The connection strings have different initial catalogs but are otherwise identical.
When the message is picked up, the first SQL connection opens successfully, but the second causes the following exception to be thrown when .Open is called.
Network access for Distributed Transaction Manager (MSDTC) has been
disabled. Please enable DTC for network access in the security
configuration for MSDTC using the Component Services Administrative
tool.
We don't use MSDTC.
Here's the code that fails. It will fail on connB.Open()
public void Handle(MyMsgCmd message)
{
    using (SqlConnection connA = new SqlConnection(myConnectionStringA))
    {
        connA.Open();
    }
    using (SqlConnection connB = new SqlConnection(myConnectionStringB))
    {
        connB.Open();
    }
}
This same code works perfectly fine when run from a command line application or web application. The exception is only thrown when it's called from NServiceBus.
Each of these connections opens successfully when it is opened first or by itself, but whenever a second connection is present, the second one always fails to open with the same exception, even though it is known to be good.
Is there additional configuration needed to open more than one connection in sequence with NServiceBus?
It looks like, by default, NServiceBus wraps each message handler in a transaction scope, and that causes queries against different database connections inside the same message handler to fail unless MSDTC is enabled.
I can disable that with BusConfiguration.Transactions().DoNotWrapHandlersExecutionInATransactionScope()
You can find more information on transactions in the NServiceBus documentation.
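In context, that call fits into endpoint configuration like this (a sketch using the NServiceBus v5-style BusConfiguration API, matching the call above; the rest of the setup is illustrative):

```csharp
// Sketch: disable the ambient transaction scope around handlers
// (NServiceBus v5-style API; surrounding endpoint setup is illustrative).
var busConfiguration = new BusConfiguration();
busConfiguration.Transactions()
    .DoNotWrapHandlersExecutionInATransactionScope();
// ...remaining endpoint configuration, then start the bus.
```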
This isn't related exclusively to NServiceBus, we just provide different ways of connecting to a transport (like MSMQ, Azure Service Bus, etc), a persister and your own database.
But even without NServiceBus, when connecting to two databases you need either a distributed transaction, or to make sure the transaction is not escalated to a distributed one. The problem is that without distributed transactions, one transaction may commit successfully while the other fails, with the result that your two databases are no longer in sync or consistent.
If orders in DatabaseA are stored and inventory is tracked in DatabaseB, you might deduct 1 from inventory, but the order might never be stored because the transaction failed. You need to compensate for this yourself without distributed transactions.
That's not to say distributed transactions are always the way to go. You're probably not using them because your DBA doesn't like them: MSDTC always puts serializable transactions on your data, which take the heaviest locks. The longer you keep them open, the longer concurrently running transactions have to wait, with potentially huge performance consequences for your software.
On the other hand, it can be very, very difficult to create compensating transactions. And just think about the fact that DatabaseA might fail, DatabaseB might succeed. But what happens to the message? Is it gone from the queue? Or will it remain in the queue and be processed again? Will DatabaseB succeed again with the possible result of duplicate data?
Luckily you're already using NServiceBus. You might want to check out the Outbox feature that can help solve some of these issues.
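For reference, enabling the Outbox is a one-liner in newer NServiceBus versions. This sketch assumes the v6+ EndpointConfiguration API, which differs from the BusConfiguration API used elsewhere in this answer; the endpoint name is a placeholder:

```csharp
// Sketch: the Outbox gives handlers consistency between the incoming
// message and database changes without MSDTC
// (NServiceBus v6+ API; endpoint name is illustrative).
var endpointConfiguration = new EndpointConfiguration("MyEndpoint");
endpointConfiguration.EnableOutbox();
```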

Azure, SQL transactions and connections

The question:
How do you combine Linq, Azure and database connections with TransactionScope, without getting transactions elevated to distributed?
Specifically, is workaround/solution 2 (see below) an acceptable solution, or is there a better way?
Background info:
Fetching the connection string from .config file.
using (var db = new DBDataContext())
using (var scope = new TransactionScope())
{
    // Do DB stuff, then call a method that has more Linq code..
    // ..accessing the _same_ DB, and needs to be inside this transaction.
}
This seems to be best practice, and works fine when debugging on localhost.
When deployed to Azure, the transaction is 'elevated' to distributed, as soon as a Linq query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Note that the called method has its own "using DBDataContext()", but no new transactionscope.
It seems that the connection pool manager is not certain that the new connection is to the same database, even though the connectionstring is identical.
There seem to be 3 workarounds:
1) Pass a reference to the existing connection
- Not acceptable. There are literally hundreds of methods that invoke the DB, and it is not the caller's responsibility.
2) Use a global (data layer) connection manager
- This is NOT best practice, and should be avoided. But why?
3) Use integrated security
- The connection pool manager may recognize the connection as identical to the existing connection when using integrated security.
- Have not tested this, because this solution is unacceptable. Should not be forced to use integrated security because of this issue.
Edit:
Using Azure SQL Database (NOT SQL Server on Azure VM).
Azure SQL Database does NOT support distributed transactions.
You kind of answered your own question here:
When deployed to Azure, the transaction is 'elevated' to distributed, as soon as a Linq query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Any time a second connection is involved, the transaction is elevated. There may be some optimized cases that circumvent this (I'm not aware of any), but I don't think there's much you can do here. The same connection should be reused.
Think of how it would work without TransactionScope. Your code might look like:
using (var cn = GetDbConnection())
using (var tx = cn.BeginTransaction())
{
    // do stuff with tx...
    using (var cn2 = GetDbConnection())
    {
        // Now think about the transaction scope here...
        // There is no way for cn2 to reuse cn's transaction.
        // It must begin its own transaction. The only way to manage
        // disconnected transactions of this nature is to elevate to
        // a distributed transaction.
    }
}
Edit: With regards to your question about a global connection manager, I'm not sure it's a bad idea, depending on your implementation. For the ASP.NET use case, we typically scope the database context per request. Any code down the chain that requires a connection should have its database context injected.
This ensures the same context (connection) is shared across the entire request. The transaction can then be committed automatically or manually, or automatically rolled back in the case of an exception. This is a pretty simple use case and admittedly may not fit the bill for your scenario, but it's one that has worked pretty well for us.
Edit 2: With lightweight transactions, you can avoid elevation by closing one connection BEFORE the next is opened. The transaction itself remains open until you call ts.Complete(), even across connections.
https://blogs.msdn.microsoft.com/adonet/2008/03/25/extending-lightweight-transactions-in-sqlclient/
You open outer connection “A”. The pool has no free appropriate connection, so inner connection “z” is set up and enlisted in the transaction, establishing a lightweight transaction. You now close “A”, which sets aside “z” to wait for the transaction to end. Next you open outer connection “B” (you could also open “A” again and get the same results). “B” looks for a free inner connection in the pool attached to the transaction, doesn’t find one, creates inner connection “y” and tries to enlist it in the transaction. The transaction, now finding two different resources trying to enlist, must promote (resources in general, and sql connections in particular, cannot share local transactions). Finally you end the transaction, which sends the commit or rollback across “z”, disconnects it from the transaction and returns it to the pool.
So this brings us to the extensions we added for Sql Server 2008 support. On the server, we added a new connection reset mode that does not roll back local transactions. This allows SqlClient to return the inner connection to the pool to be reused. In our example, when you open “B”, it will find “z” waiting in the pool, associated with the transaction where “A” put it when you closed “A”. “B” appropriates and resets “z” (with the transaction-preserving reset) and happily continues working. Neither System.Transactions nor the server are aware that the application sees “z” as two separate connections. As far as they are concerned, there is only one connection, working on a single local transaction, and no promotion is necessary.
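Putting that together, here is a sketch of the close-before-open pattern (the connection string and the work inside each block are placeholders; on SQL Server 2008+ with identical connection strings, this should stay a lightweight transaction):

```csharp
// Sketch: keep a TransactionScope lightweight by never holding two
// open connections at once. Connection string and queries are placeholders.
using (var ts = new TransactionScope())
{
    using (var cnA = new SqlConnection(connectionString))
    {
        cnA.Open();
        // ...first batch of work...
    } // cnA is closed before the next connection opens

    using (var cnB = new SqlConnection(connectionString)) // same string
    {
        cnB.Open();
        // ...second batch of work, same transaction...
    }

    ts.Complete(); // commit once, across both connections
}
```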

Advantage Data Provider Error 6097 in a windows service

I have a C# Windows service that polls an ADS 9.10 Advantage database periodically during the day using Quartz.NET. The Windows service is still in development and hasn't been put live. On the test box the database is refreshed nightly. While the database is being restored, the Windows service correctly logs errors. Once the restore has finished, the service continues running correctly for a few attempts, but then this error occurs.
Error: Advantage.Data.Provider.AdsException: Error 6097: Bad IP address or port specified in the connection path or in the ADS.INI file. axServerConnect
at Advantage.Data.Provider.AdsInternalConnection.Connect()
at Advantage.Data.Provider.AdsPoolManager.GetConnection(String strConnectionString, AdsInternalConnection& internalConnection, AdsConnectionPool& pool)
at Advantage.Data.Provider.AdsConnection.Open()
The only way to resolve this is to stop and start the service. To me this means something must be caching a bad connection, which I don't understand, as I wrap the connection and command in C# using blocks, thus disposing the connection after it has finished. I've tried turning off connection pooling in the connection string, and flushing the pool:
AdsConnection.FlushConnectionPool(_connectionString)
AdsConnection.FlushConnectionPool()
Please note that I don't use the ADS.ini file, the IP address and port number are in the connection string.
One solution could be use a schedule task rather than the quartz job... but I like quartz so I would like to fix this problem.
The need to restart the service to resolve the problem is probably due to caching of connection error codes by the ADS client. I think the only way around this error-code caching is to use the RETRY_ADS_CONNECTS setting in ads.ini. The ads.ini file can be placed in the same directory as the service.
The 6097 error is returned when the database is down while being refreshed, and is then cached by the ADS ADO client.
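As a minimal sketch, the ads.ini fragment might look like this (the assumption that the setting lives under a [SETTINGS] section follows the usual ads.ini layout; verify against the Advantage client documentation):

```
[SETTINGS]
RETRY_ADS_CONNECTS=1
```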

Stateless WCF service and database connection pooling

The question has been asked before here in StackOverflow, but in my experience, the answers were actually wrong. At least for .NET Framework 4.0 and SQL Server 2005 they are wrong.
I would need help to sort this out once and for all.
The question is - can a stateless WCF service use database connection pooling in some way?
See Can a Stateless WCF service ...
The earlier answers essentially stated that there is no problem and no difference to any other ADO.NET scenarios. However, I have not been able to get a stateless WCF service to use the connection pooling EVER, while I can see it always work outside WCF services. No matter what connection strings or parameters I am trying to use, it does not do it.
Database connection pooling is meant to be enabled by default, so a simple connection string should get me there, for instance on SQL Server Express:
SqlConnection sqlCn = new SqlConnection(@"Data Source=SERVER\SQLEXPRESS; Initial Catalog = xDB; Integrated Security = SSPI;")
Using this connection, in a Windows Form application, if I do 3 consecutive rounds of sqlCn.Open() -- query the database -- sqlCn.Close(), I am getting a long delay (for instance 2 seconds) on the first sqlCn.Open(), and no delays at all on queries and open / close afterwards. Exactly what I expect with database connection pooling.
But if I make 3 calls to a WCF service containing the same sqlCn.Open() -- query the database -- sqlCn.Close() code, I am getting the 2 second initial slow startup for EVERY single call.
My guess is that the connection pooling is entirely controlled by the ADO.NET objects created by my code, and since I am instantiating any ADO.NET classes I use (such as SqlConnection etc) inside my WCF service, they get destroyed when my service call is over and the connection pool along with it.
This may not be true, but if not, is there anything wrong with what I have done?
Anyone have any experience with that?
(Please do test any assumption or theory before posting)
1) Here's the documentation:
http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
When a connection is first opened, a connection pool is created based
on an exact matching algorithm that associates the pool with the
connection string in the connection. Each connection pool is
associated with a distinct connection string. When a new connection is
opened, if the connection string is not an exact match to an existing
pool, a new pool is created. Connections are pooled per process, per
application domain, per connection string and when integrated security
is used, per Windows identity. Connection strings must also be an
exact match; keywords supplied in a different order for the same
connection will be pooled separately.
2) Per the same link, "By default, connection pooling is enabled in ADO.NET."
3) This is completely independent of whether the WCF call in question is stateless or not.
4) Finally:
We strongly recommend that you always close the connection when you
are finished using it so that the connection will be returned to the
pool. You can do this using either the Close or Dispose methods of the
Connection object, or by opening all connections inside a using
statement in C#, or a Using statement in Visual Basic. Connections
that are not explicitly closed might not be added or returned to the
pool.
I managed to resolve it myself.
I had to explicitly add "Pooling = true" (and a non-zero "Min Pool Size") to my connection string. Then it worked consistently. If this was not set, it would sometimes actually work as expected, but mostly not.
I tested it also with different user accounts (SQL Server authentication with user name / password versus "Integrated Security = SSPI"). Both approaches work for a WCF service as long as you set "Pooling = true".
No data if this is a problem only for my installation / SQL Server version / ADO.NET version but it sure did take quite a while to resolve.
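For illustration, the resulting connection string might look like this (the server and catalog names are the ones from the question; the pool-size value is an arbitrary non-zero example):

```csharp
// Sketch: pooling stated explicitly, per the resolution above.
SqlConnection sqlCn = new SqlConnection(
    @"Data Source=SERVER\SQLEXPRESS;Initial Catalog=xDB;" +
    "Integrated Security=SSPI;Pooling=true;Min Pool Size=1;");
```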

Handling DB connection breaks

My WCF service keeps DB connections open for subsequently sending SQL through them. Sometimes a connection becomes broken for various reasons. Previously there was a special timer that checked the connections every minute, but that was not a good solution to the problem. Could you please advise me of some way to keep the connections working properly, or failing that, to reconnect as soon as possible, so that users get a stable service?
Thanks!
EDIT:
Database server is Oracle. I'm connecting to the database server using Devart dotConnect for Oracle.
You don't have to "keep" database connections. Leave the reuse and caching of database connections to the .NET Framework.
Just use this kind of code and dispose the connection as soon as you are finished using it:
using (var connection = new SqlConnection(...))
{
    // Your code here
}
There is no problem in executing the code above for each call to the database. The connection information is cached, and the second "new" connection to the database is very fast.
To read more about connection pooling, you might read this MSDN article.
Edit:
If you use pooling, the connection is not really closed but is put back into the pool. The initial "handshake" between the client and the database is only done once per connection in the pool.
The component you are using supports the connection pooling as well:
Read 1
Read 2
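As a sketch, dotConnect for Oracle accepts pooling keywords directly in the connection string (the host, user and password values here are placeholders, and the keyword names are assumptions based on dotConnect's documented connection-string options):

```csharp
// Sketch: dotConnect for Oracle with pooling enabled.
// Server, user and password values are placeholders.
using (var connection = new Devart.Data.Oracle.OracleConnection(
    "Server=oraHost;User Id=scott;Password=tiger;" +
    "Pooling=true;Min Pool Size=1;Max Pool Size=20;"))
{
    connection.Open();
    // ...your code here...
}
```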