Handling DB connection breaks - C#

My WCF service keeps DB connections open for sending SQL through them later. Sometimes a connection breaks for various reasons. Previously there was a special timer that checked the connections every minute, but that is not a good solution to the problem. Could you please advise me on a way to keep the connections working properly, or at least to reconnect as soon as possible, so that users get a stable service?
Thanks!
EDIT:
The database server is Oracle. I'm connecting to the database server using Devart dotConnect for Oracle.

You don't have to "keep" database connections. Leave the reuse and caching of database connections to the .NET Framework.
Just use this kind of code and dispose the connection as soon as you are finished using it:
using (var connection = new SqlConnection(...))
{
    // Your code here
}
There is no problem in executing the code above for each call to the database. The connection information is cached, and the second "new" connection to the database is very fast.
To read more about connection pooling, you might read this MSDN article.
Edit:
If you use pooling, the connection is not really closed but put back into the pool. The initial "handshake" between the client and the database is only done once per connection in the pool.
The component you are using supports the connection pooling as well:
Read1
Read 2
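Applied to the asker's setup, the same open-late/dispose-early pattern with dotConnect's provider might look like the sketch below. The connection string and SQL are placeholders, and dotConnect pools connections by default, so disposing just returns the physical connection to the pool.

```csharp
using Devart.Data.Oracle; // dotConnect for Oracle provider

class Example
{
    // Hypothetical helper: open per call, dispose as soon as you are done.
    internal static object GetValue(string connStr, string sql)
    {
        using (var connection = new OracleConnection(connStr))
        using (var command = new OracleCommand(sql, connection))
        {
            connection.Open(); // cheap after the first call, thanks to pooling
            return command.ExecuteScalar();
        }
    }
}
```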

Related

24/7 Applications: .NET application open/close connection

I am working on an application that will run 24/7. The application life cycle is simple: whenever a new request comes in, it just updates a record in the database.
The application updates records on different servers and in different databases.
The application handles millions of requests an hour.
For each request it opens and closes a connection, as per the code below.
internal int ExecuteNonQuery(string Query)
{
    using (SqlConnection SqlConn = new SqlConnection(this.ConnectionString))
    {
        using (SqlCommand sqlComm = new SqlCommand(Query, SqlConn))
        {
            SqlConn.Open();
            sqlComm.CommandTimeout = 60;
            sqlComm.ExecuteNonQuery();
            return 0;
        }
    }
}
I want to optimize my code.
I don't want to create a new connection every time a request comes in; for this I have read about the connection pooling mechanism in ADO.NET.
Remember that I have several different SQL connections (at most 10).
Can I use connection pooling? Or can I write my own logic to create a SqlConnection for each connection and keep them open all day?
Also, my application often generates handshake exceptions.
There are plenty of ways to enable connection pooling, as described here.
If you are using MS SQL with the OleDb driver, for example, you have connection pooling out of the box.
Depending on your situation, you can modify the maximum number of connections in your pool. To establish this number, check this answer: Should I set max pool size in database connection string? What happens if I don't?
It's important to say that the connections are not destroyed; they are just returned to the pool. That makes it very efficient, and it's the recommended way to go.
OleDb
The .NET Framework Data Provider for OLE DB automatically pools connections using OLE DB session pooling. Connection string arguments can be used to enable or disable OLE DB services including pooling.
ODBC
Connection pooling for the .NET Framework Data Provider for ODBC is managed by the ODBC Driver Manager that is used for the connection, and is not affected by the .NET Framework Data Provider for ODBC.
OracleClient
The .NET Framework Data Provider for Oracle provides connection pooling automatically for your ADO.NET client application. You can also supply several connection string modifiers to control connection pooling behavior (see "Controlling Connection Pooling with Connection String Keywords," later in this topic).
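To illustrate the "Max Pool Size" suggestion above, the pooling keywords go straight into the connection string. Server name, catalog, and the pool sizes below are placeholders for illustration, not recommendations:

```csharp
using System.Data.SqlClient;

class PoolConfig
{
    // Hypothetical connection string showing the pooling-related keywords.
    // Every distinct connection string gets its own pool, so keep it identical
    // across all call sites that should share a pool.
    const string ConnStr =
        "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI;" +
        "Pooling=true;Min Pool Size=5;Max Pool Size=100;";

    static SqlConnection Create() => new SqlConnection(ConnStr);
}
```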

Azure, SQL transactions and connections

The question:
How do you combine Linq, Azure and database connections with transactionscope, without getting transactions elevated to distributed?
Specifically, is workaround/solution 2 (see below) an acceptable solution, or is there a better way?
Background info:
Fetching the connection string from the .config file:
using (var db = new DBDataContext())
using (var scope = new TransactionScope())
{
    // Do DB stuff, then call a method that has more Linq code..
    // ..accessing the _same_ DB, and needs to be inside this transaction.
}
This seems to be best practice, and works fine when debugging on localhost.
When deployed to Azure, the transaction is 'elevated' to distributed, as soon as a Linq query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Note that the called method has its own "using DBDataContext()", but no new transactionscope.
It seems that the connection pool manager is not certain that the new connection is to the same database, even though the connection string is identical.
There seem to be 3 workarounds:
1) Pass a reference to the existing connection
- Not acceptable. There are literally hundreds of methods that invoke the DB. It is not the caller's responsibility.
2) Use a global (data layer) connection manager
- This is NOT best practice, and should be avoided. But why?
3) Use integrated security
- The connection pool manager may recognize the connection as identical to the existing connection when using integrated security.
- Have not tested this, because this solution is unacceptable. Should not be forced to use integrated security because of this issue.
Edit:
Using Azure SQL Database (NOT SQL Server on Azure VM).
Azure SQL Database does NOT support distributed transactions.
You kind of answered your own question here:
When deployed to Azure, the transaction is 'elevated' to distributed, as soon as a Linq query is executed, despite the fact that we are using the exact same connection string. This causes a runtime exception.
Any time a second connection is involved, the transaction is elevated. There may be some optimized cases that circumvent this (I'm not aware of any), but I don't think there's much you can do here. The same connection should be reused.
Think of how it would work without TransactionScope. Your code might look like:
using (var cn = GetDbConnection())
using (var tx = cn.BeginTransaction())
{
    // do stuff with tx...
    using (var cn2 = GetDbConnection())
    {
        // Now think about the transaction scope here...
        // There is no way for cn2 to reuse cn's transaction.
        // It must begin its own transaction. The only way to manage
        // disconnected transactions of this nature is to elevate to
        // a distributed transaction.
    }
}
Edit: With regards to your question about a global connection manager, I'm not sure it's a bad idea, depending on your implementation. For the ASP.NET use case, we typically scope the database context per request. Any code down the chain that requires a connection should have its database context injected.
This ensures the same context (connection) is shared across the entire request. The transaction can then be committed automatically or manually, or automatically rolled back in the case of an exception. This is a pretty simple use case and admittedly may not fit the bill for your scenario, but it's one that has worked pretty well for us.
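As a minimal sketch of the per-request scoping idea (class and member names are illustrative, not a real framework API), a scoped holder can hand the same connection to everything in one request:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Minimal sketch of a per-request connection holder. In a real app this
// would be registered in your DI container with a "scoped" (per-request)
// lifetime, so every repository in the request shares one connection.
sealed class RequestDbScope : IDisposable
{
    private readonly SqlConnection _connection;

    public RequestDbScope(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
    }

    // Injected into any code down the chain that needs the database.
    public IDbConnection Connection => _connection;

    public void Dispose() => _connection.Dispose();
}
```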
Edit2: Using lightweight transactions, you can avoid elevation by closing one connection BEFORE the next is opened. The transaction itself remains open until you call ts.Complete, even across connections.
https://blogs.msdn.microsoft.com/adonet/2008/03/25/extending-lightweight-transactions-in-sqlclient/
You open outer connection “A”. The pool has no free appropriate connection, so inner connection “z” is set up and enlisted in the transaction, establishing a lightweight transaction. You now close “A”, which sets aside “z” to wait for the transaction to end. Next you open outer connection “B” (you could also open “A” again and get the same results). “B” looks for a free inner connection in the pool attached to the transaction, doesn’t find one, creates inner connection “y” and tries to enlist it in the transaction. The transaction, now finding two different resources trying to enlist, must promote (resources in general, and sql connections in particular, cannot share local transactions). Finally you end the transaction, which sends the commit or rollback across “z”, disconnects it from the transaction and returns it to the pool.
So this brings us to the extensions we added for Sql Server 2008 support. On the server, we added a new connection reset mode that does not roll back local transactions. This allows SqlClient to return the inner connection to the pool to be reused. In our example, when you open “B”, it will find “z” waiting in the pool, associated with the transaction where “A” put it when you closed “A”. “B” appropriates and resets “z” (with the transaction-preserving reset) and happily continues working. Neither System.Transactions nor the server is aware that the application sees “z” as two separate connections. As far as they are concerned, there is only one connection, working on a single local transaction, and no promotion is necessary.
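The close-before-open pattern the blog describes can be sketched as follows. Table name and statements are placeholders, and the no-promotion behavior assumes SQL Server 2008 or later, as explained above:

```csharp
using System.Data.SqlClient;
using System.Transactions;

class LightweightTxDemo
{
    static void Run(string connStr)
    {
        using (var ts = new TransactionScope())
        {
            // Connection "A": opened, used, and CLOSED before the next opens.
            using (var cn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("UPDATE T SET C = 1", cn)) // placeholder SQL
            {
                cn.Open();
                cmd.ExecuteNonQuery();
            } // inner connection parked in the pool, still enlisted in the transaction

            // Connection "B": on SQL Server 2008+, the pool hands back the same
            // enlisted inner connection, so the transaction stays local (no MSDTC).
            using (var cn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("UPDATE T SET C = 2", cn)) // placeholder SQL
            {
                cn.Open();
                cmd.ExecuteNonQuery();
            }

            ts.Complete(); // commit travels over the single inner connection
        }
    }
}
```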

Restore NHibernate after lost Oracle database connection

I have a long running application that uses NHibernate.ISessionFactory to connect to an Oracle database.
Occasionally the database goes offline (e.g. for weekend maintenance), but even once the database is back online, subsequent queries fail with the following exception (inner exceptions also shown):
NHibernate.Exceptions.GenericADOException: could not execute query
[ select .....]
>> Oracle.ManagedDataAccess.Client.OracleException: ORA-03135: Connection lost contact
>> OracleInternal.Network.NetworkException: ORA-03135: Connection lost contact
>> System.Net.Sockets.SocketException: An established connection
was aborted by the software in your host machine
Restarting the application restores the functionality, but I would like the application to be able to automatically cope without a restart, by "resetting" the connection.
I have tried the following with my ISessionFactory when I hit this exception:
sf.EvictQueries();
sf.Close();
sf = null;
sf = <create new session factory>
but see the same exception after recreating the ISessionFactory. I assume this is because NHibernate is caching the underlying broken connection in some kind of connection pool?
How can I persuade NHibernate to create a genuinely new connection (or even just reset all state completely), and hence allow my application to fix the connection issue itself without an application restart?
EDIT:
Following A_J's answer, note that I am already calling using (var session = _sessionFactory.OpenSession()) for each database request.
I suspect you are opening the ISession (the call to ISessionFactory.OpenSession()) at startup and closing it when the application ends. This is the wrong approach for any long-running application.
You should manage the connection over a much shorter lifetime. In a web application, this is generally handled per request. In your case, you should find out what that unit should be. If yours is a Windows service that does some activity after a specified interval, then the Timer_Tick event is a good place.
I cannot suggest what that location could be in your application; you need to find that out on your own.
Edit 1
Looking at your edit and comment, I do not think this has anything to do with NHibernate. It may be that the connection pool is returning a disconnected/stale connection to NHibernate.
Refer this and this accepted answer.
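If stale pooled connections are indeed the cause, two levers in Oracle's managed provider are worth trying; the connection string values below are placeholders, and you should verify the keyword against your provider version:

```csharp
using Oracle.ManagedDataAccess.Client;

class OraclePoolRecovery
{
    // "Validate Connection=true" makes the pool check each connection before
    // handing it out, at the cost of an extra round trip per Open. Data
    // source and credentials here are placeholders.
    const string ConnStr =
        "Data Source=MyTns;User Id=scott;Password=tiger;Validate Connection=true;";

    // Alternatively, after detecting ORA-03135, clear the pool so the next
    // Open creates a genuinely new physical connection.
    static void ResetAfterOutage(string connStr)
    {
        using (var cn = new OracleConnection(connStr))
        {
            OracleConnection.ClearPool(cn);
        }
    }
}
```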

.NET Oracle managed data access connection pooling not working or slow

I recently noticed that when our application does an SQL query to the Oracle database, it always takes at least 200 ms to execute. It doesn't matter how simple or complex the query is, the minimum time is about 200 ms. We're using the Oracle Managed Data Access driver for Oracle 11g.
I then created a simple console application to test the connection. I noticed that if I create the connection like in the example below, every cmd.ExecuteReader call takes the extra 200 ms (opening the connection)?
using (OracleConnection con = new OracleConnection(connStr))
{
    con.Open();
    OracleCommand cmd = con.CreateCommand();
    ...
}
The connection state is always Closed when creating the connection like this (shouldn't it be open if the connections are pooled?).
If I open the connection at the start of the program and then pass the opened connection to the method, the cmd.ExecuteReader takes about 0-5 ms to return. I've tried to add Pooling=true to the connection string but it doesn't seem to do anything (it should be the default anyway).
Does this mean that the connection pooling is not working as it should? Or could there be any other reason why the cmd.ExecuteReader takes the extra 200 ms to execute?
The problem is almost the same as in this issue, except that we're using Oracle: Connection pooling is slower than keeping one connection open
Is your database remote, with the delay caused by the network? In that case connection pooling works, but the problem is that there is always a TCP communication round trip (and not even a TNS packet). Unfortunately this happens with every Open call.
The managed data access implementation communicates in a different way, so the overhead occurs only at the very first Open call; after that, the Open method is free.
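A quick way to confirm where the 200 ms goes is to time Open() in isolation, separately from the query. Connection string is a placeholder; with working pooling, only the first iteration should be slow:

```csharp
using System;
using System.Diagnostics;
using Oracle.ManagedDataAccess.Client;

class OpenLatencyProbe
{
    static void Main()
    {
        // Placeholder connection string -- adjust to your environment.
        const string connStr = "Data Source=MyTns;User Id=scott;Password=tiger;";

        for (int i = 1; i <= 3; i++)
        {
            var sw = Stopwatch.StartNew();
            using (var con = new OracleConnection(connStr))
            {
                con.Open(); // if pooling works, only iteration 1 pays the handshake
            }
            sw.Stop();
            Console.WriteLine($"Open #{i}: {sw.ElapsedMilliseconds} ms");
        }
    }
}
```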
After a lot of testing and research I finally figured out where the extra 200 ms comes from: my virtual machine's network adapter. I'm using VMware Player, and the connection was configured in "NAT" mode. When I changed the connection to "Bridged" mode, the latency disappeared.

Stateless WCF service and database connection pooling

The question has been asked before here in StackOverflow, but in my experience, the answers were actually wrong. At least for .NET Framework 4.0 and SQL Server 2005 they are wrong.
I would need help to sort this out once and for all.
The question is - can a stateless WCF service use database connection pooling in some way?
See Can a Stateless WCF service ...
The earlier answers essentially stated that there is no problem and no difference to any other ADO.NET scenarios. However, I have not been able to get a stateless WCF service to use the connection pooling EVER, while I can see it always work outside WCF services. No matter what connection strings or parameters I am trying to use, it does not do it.
Database connection pooling is meant to be enabled by default, so a simple connection string should get me there, for instance on SQL Server Express:
SqlConnection sqlCn = new SqlConnection(@"Data Source=SERVER\SQLEXPRESS; Initial Catalog=xDB; Integrated Security=SSPI;")
Using this connection, in a Windows Form application, if I do 3 consecutive rounds of sqlCn.Open() -- query the database -- sqlCn.Close(), I am getting a long delay (for instance 2 seconds) on the first sqlCn.Open(), and no delays at all on queries and open / close afterwards. Exactly what I expect with database connection pooling.
But if I make 3 calls to a WCF service containing the same sqlCn.Open() -- query the database -- sqlCn.Close() code, I am getting the 2 second initial slow startup for EVERY single call.
My guess is that the connection pooling is entirely controlled by the ADO.NET objects created by my code, and since I am instantiating any ADO.NET classes I use (such as SqlConnection etc.) inside my WCF service, they get destroyed when my service call is over, and the connection pool along with them.
This may not be true, but if not, is there anything wrong with what I have done?
Anyone have any experience with that?
(Please do test any assumption or theory before posting)
1) Here's the documentation:
http://msdn.microsoft.com/en-us/library/8xx3tyca.aspx
When a connection is first opened, a connection pool is created based
on an exact matching algorithm that associates the pool with the
connection string in the connection. Each connection pool is
associated with a distinct connection string. When a new connection is
opened, if the connection string is not an exact match to an existing
pool, a new pool is created. Connections are pooled per process, per
application domain, per connection string and when integrated security
is used, per Windows identity. Connection strings must also be an
exact match; keywords supplied in a different order for the same
connection will be pooled separately.
2) Per the same link, "By default, connection pooling is enabled in ADO.NET."
3) This is completely independent of whether the WCF call in question is stateless or not.
4) Finally:
We strongly recommend that you always close the connection when you
are finished using it so that the connection will be returned to the
pool. You can do this using either the Close or Dispose methods of the
Connection object, or by opening all connections inside a using
statement in C#, or a Using statement in Visual Basic. Connections
that are not explicitly closed might not be added or returned to the
pool.
I managed to resolve it myself.
I had to explicitly state "Pooling = true" (and add a non-zero "Min Pool Size") in my connection string. Then it was working consistently. If this was not set, it would sometimes actually work as expected, but mostly not.
I tested it also with different user accounts (SQL Server authentication with user name / password versus "Integrated Security = SSPI"). Both approaches work for a WCF service as long as you set "Pooling = true".
No data on whether this is a problem only for my installation / SQL Server version / ADO.NET version, but it sure did take quite a while to resolve.
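For reference, the kind of connection string the answer above describes might look roughly like this; server, catalog, and the pool size are placeholders:

```csharp
using System.Data.SqlClient;

class WcfDbHelper
{
    // Explicit pooling keywords, per the resolution above; values illustrative.
    const string ConnStr =
        @"Data Source=SERVER\SQLEXPRESS;Initial Catalog=xDB;" +
        "Integrated Security=SSPI;Pooling=true;Min Pool Size=2;";

    // Open per service call; disposing returns the connection to the pool.
    internal static SqlConnection Open()
    {
        var cn = new SqlConnection(ConnStr);
        cn.Open();
        return cn;
    }
}
```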
