Connection pooling is slower than keeping one connection open - c#

I have observed an interesting performance behavior of connection pooling in a client application we created. Whenever the user clicks on an object, more object-specific data is loaded from the database. This takes somewhere between 10 and 30 queries per click, depending on the object.
This was done using connection pooling: each query was dispatched on a new connection taken from the pool, and the connection was closed after the query ran.
I analyzed the queries in the profiler for performance optimization and saw that there were a lot of audit login/logout entries. Additionally, performance was not optimal even though the queries themselves were running well (only index seek/scan operators).
Just to try it out, I disabled pooling and modified the code to keep one connection per client application and reuse it. This made the entire application a lot more responsive, and all the audit login/logout entries disappeared from the profiler.
How is this possible? Shouldn't the pooled connections stay open, or, if they actually stay open, at least not be this slow? Is it possible that we are using the SqlConnection class wrong, resulting in pooling being disabled?
I have read the other posts regarding pooling but have not found anything about a perceivable speed difference between pooling connections and reusing the same connection.
SqlConnection con = new SqlConnection(_connectionString);
The connection is handed off to a wrapping class Session which provides transactional functionality.
class Session
{
    Session(SqlConnection connection);
    void Abort();
    void Commit();
}
The connection is closed in Abort() and Commit(). One of these is always called.
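To rule out pooling being switched off by accident, the pooling keywords can be spelled out explicitly. A minimal sketch built on the same _connectionString (the pool-size values are placeholders):

using System.Data.SqlClient;

static SqlConnection OpenPooledConnection(string connectionString)
{
    // Pooling is on by default; these keywords only make the behaviour explicit.
    var builder = new SqlConnectionStringBuilder(connectionString)
    {
        Pooling = true,       // the default
        MinPoolSize = 5,      // optional: keep a few physical connections warm
        MaxPoolSize = 100     // the default
    };

    var con = new SqlConnection(builder.ConnectionString);
    con.Open();               // served from the pool if a matching idle connection exists
    return con;               // closing/disposing later returns it to the pool; the physical login is kept
}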

If I understand you correctly, the connection is being created anew per Session. If you want all instances to share the connection, you should make it static.
Put it in Global.asax:
public static SqlConnection con;
protected void Application_Start(object sender, EventArgs e)
{
con = new SqlConnection(_connectionString);
}
That way you will be sharing the same connection between your sessions.

Related

How to manage SqlConnection in C# for high frequency transaction?

I have an application that connects to a SQL Server database with high frequency. Inside this service there are many scheduled tasks that run every second, and each run executes some query.
I don't understand which solution is better in this situation:
1. Opening a single SqlConnection, keeping it open while the application is running, and executing every query on that connection
2. Opening a new connection each time I want to execute a query and closing it after the query runs (is this suitable for so many scheduled tasks that run every second?)
I tried the second solution, but is there any better choice?
How do ORMs like EF manage connections?
As you can see, I have many services. I can't change the interval, and the interval is important to me, but the code makes a lot of calls and I'm looking for a better way to manage connections to the database. Also, I'm opening the connections with a using statement.
Is there any better solution?
You should use the SQL Server connection pooling feature for that.
It automatically manages, in the background, whether a connection needs to be opened or whether an existing one can be reused.
Documentation: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling?source=recommendations
Example copied from that page
using (SqlConnection connection = new SqlConnection(
"Integrated Security=SSPI;Initial Catalog=Northwind"))
{
connection.Open();
// Pool A is created.
}
using (SqlConnection connection = new SqlConnection(
"Integrated Security=SSPI;Initial Catalog=pubs"))
{
connection.Open();
// Pool B is created because the connection strings differ.
}
using (SqlConnection connection = new SqlConnection(
"Integrated Security=SSPI;Initial Catalog=Northwind"))
{
connection.Open();
// The connection string matches pool A.
}
With the using statement, the connection is closed (returned to the pool) as soon as the block ends, and the next Open() call checks whether a connection in the pool can be reused before physically opening a new one. So the overhead of opening and closing connections disappears.
But after your last edit you seem to have other problems in your current architecture. As the other poster recommends, you can try the WITH (NOLOCK) table hint in your SQL statements. It allows dirty reads, but maybe that's OK for your application.
Alternatively, if all your services use the same SELECT statement, a stored procedure or a caching mechanism could help.
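A minimal sketch of the caching idea (the CachedQuery class and the five-second refresh interval are illustrative assumptions, not part of your code):

using System;
using System.Data;

static class CachedQuery
{
    private static readonly object _lock = new object();
    private static DataTable _cached;
    private static DateTime _cachedAt = DateTime.MinValue;

    // Tasks firing every second reuse the cached result; the database is hit
    // at most once per refresh interval.
    public static DataTable Get(Func<DataTable> loadFromDatabase)
    {
        lock (_lock)
        {
            if (_cached == null || DateTime.UtcNow - _cachedAt > TimeSpan.FromSeconds(5))
            {
                _cached = loadFromDatabase();   // your existing query code
                _cachedAt = DateTime.UtcNow;
            }
            return _cached;
        }
    }
}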
I assume that you are already opening/closing your SQL connections in either a using statement or explicitly in your code (try/catch/finally). If so, you are already making use of connection pooling, as it is enabled in ADO.NET by default ("By default, connection pooling is enabled in ADO.NET").
Therefore I don't think your problem is so much a connection/resource problem as a database concurrency issue. I assume it to be one of two issues:
1. Your code is making so many calls to the SQL Server that it is exhausting all the available connections and nobody else can get one
2. Your code is locking tables in SQL Server, which is causing other code/applications to time out
If it is case 1, try to redesign your code to be "less chatty" with the database. Instead of making several inserts/updates per second, perhaps buffer the changes and make a single insert/update every 3-5 seconds in batch mode (if that is possible, obviously). Or maybe your SQL statements are taking longer than 1 second to execute while you are calling them every second, causing a backlog.
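A rough sketch of that buffering idea (the ChangeBuffer class, the queue-plus-timer design and the SqlBulkCopy flush are illustrative assumptions, not a drop-in for your code):

using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;

static class ChangeBuffer
{
    // Callers enqueue rows as they occur; a timer flushes them in one batch.
    private static readonly ConcurrentQueue<DataRow> _pending = new ConcurrentQueue<DataRow>();

    public static void Enqueue(DataRow row) => _pending.Enqueue(row);

    // Call this from a timer every 3-5 seconds instead of writing on every tick.
    public static void Flush(string connectionString, DataTable schema)
    {
        DataTable batch = schema.Clone();           // empty table with the same columns
        while (_pending.TryDequeue(out DataRow row))
            batch.ImportRow(row);

        if (batch.Rows.Count == 0) return;

        using (var con = new SqlConnection(connectionString))
        using (var bulk = new SqlBulkCopy(con) { DestinationTableName = batch.TableName })
        {
            con.Open();
            bulk.WriteToServer(batch);              // one round trip for the whole batch
        }
    }
}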
If it is case 2, try to redesign the SQL tables in such a way that the "reading" applications are not affected by the "writing" application. Normally this involves a service that periodically writes aggregated data to a read-only table for viewing, or at the very least adding a WITH (NOLOCK) hint to the SELECT clauses to allow dirty reads (i.e. the read won't lock the table, but it may return a slightly out-of-date dataset - eventual consistency).
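A minimal example of the hint from the read side (the table, column and method names are made up):

using System;
using System.Data.SqlClient;

static class AccountReader
{
    // WITH (NOLOCK) performs a dirty read: it takes no shared locks,
    // so writers are not blocked, but the result may be stale or uncommitted.
    public static decimal? GetBalance(string connectionString, int accountId)
    {
        const string sql =
            "SELECT Balance FROM dbo.Accounts WITH (NOLOCK) WHERE Id = @id";

        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(sql, con))
        {
            cmd.Parameters.AddWithValue("@id", accountId);
            con.Open();
            object result = cmd.ExecuteScalar();
            return result == null || result == DBNull.Value
                ? (decimal?)null
                : (decimal)result;
        }
    }
}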
Good luck

Is overutilisation of Sql Connections in C# a problem?

Throughout the program I am currently working on, I realized that whenever I need to access SQL Server, I just type a query string and execute the query using SqlCommand and SqlConnection in C#. At a certain point during the program, I even opened a connection and ran a query inside a for loop.
Is it unhealthy for the program to constantly open and close connections?
(I am not very familiar with the terminology, so you might have some trouble understanding what I am talking about.)
Can doing this very frequently cause any problems?
string queryString = "Some SQL Query";

public void RunQuery()   // method name added so the sample compiles
{
    SqlConnection con = new SqlConnection(conString);
    SqlCommand cmd = new SqlCommand(queryString, con);
    cmd.Parameters.AddWithValue("@SomeParam", someValue);
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
I use this template in almost every class I create (usually to get, update or insert data from/into a DataTable).
Well, is it harmful?
The short answer is yes - it is inefficient to constantly open and close connections. Opening an actual (physical) connection is a very expensive process, and managing a connection for the lifetime of its need (which usually is the lifetime of the application or process using it) is fraught with errors.
That is why connection pooling was introduced a long time ago. There is a layer beneath your application that manages the physical opening/closing of connections in a more efficient way. It also helps prevent an open connection from being lost and left open (which causes lots of problems). Pooling is enabled by default, so you don't need to do anything to use it.
With pooling, you write code to open a connection, use it for the duration of a particular section of code, and then close it. If the connection pool has an open but unused connection, it is reused rather than a new one being opened. When you close the connection, it is simply returned to the pool and made available to the next open attempt. You should also get familiar with the C# using statement.
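A minimal sketch of that pattern (the connection string, table name and CountOrders method are placeholders); note that closing/disposing here only returns the connection to the pool:

using System.Data.SqlClient;

public static int CountOrders(string connectionString)
{
    // Open late, close early: the pool hands back an existing physical connection if one is free.
    using (var con = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders", con))
    {
        con.Open();
        return (int)cmd.ExecuteScalar();
    }   // disposing the connection returns it to the pool
}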

Could Issues Occur Using a Singleton SQLConnection in a Single User .NET Desktop Application?

I have inherited a .NET desktop application and am trying to track down an intermittent bug where database locks are not being released.
The application takes a Singleton approach to establishing a SqlConnection, and I'm wondering if creating and holding open a single instance of a SqlConnection in memory could cause issues in this context.
I know connection pooling makes this code redundant. What I want to know is: given my scenario of each instance of a single-threaded desktop application having a single user, could this approach cause issues?
private static SqlConnection connection;

public static SqlConnection Connection
{
    get
    {
        if (connection == null)
        {
            connection = new SqlConnection();
            connection.ConnectionString = "...";
            connection.Open();
        }
        if (connection.State != ConnectionState.Open)
        {
            connection.Open();
        }
        return connection;
    }
}
Edit #1: I know this is not best practice; I know connections should be opened late and closed early; I know connection pooling makes this code redundant. In my specific instance, is an issue demonstrable?
This is an old and obsolete design, now deprecated in favor of keeping the connection open as little as possible and closing and disposing it immediately after each use with a using block.
That said, the pattern you have shown above should not, by itself, create database locks.
Consider that even with a connection kept open for a longer time, the connection itself is only a channel; most of what happens depends on how you use it with Command and Reader objects, which you should in any case close and dispose as soon as they are no longer needed (see the sketch below).
If some tables are locked in the database, are you sure that is not happening because multiple users are trying to access the same objects at the same time?
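A minimal sketch of that advice, even with a long-lived connection (the query, table and method names are made up):

using System.Data.SqlClient;

public static void PrintNames(SqlConnection sharedConnection)
{
    // Even if the connection itself stays open, scope commands and readers tightly.
    using (var cmd = new SqlCommand("SELECT Name FROM dbo.Customers", sharedConnection))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            System.Console.WriteLine(reader.GetString(0));
        }
    }   // reader and command are disposed here, releasing their resources promptly
}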
Using a Singleton for a SqlConnection object is a really, really bad idea. There is no reason to do this whatsoever.
If you are attempting to avoid the performance hit of new SqlConnection() or connection.Open(), be advised that there really is no hit there, because connection pooling is going on behind the scenes. The pool handles the opening and closing of the expensive physical connections, not the SqlConnection object.
You won't be able to open multiple SqlDataReaders/SqlCommands on the connection at the same time, and you will run into thread-blocking issues if you try to share the same connection object across multiple threads.
The Singleton pattern is the most overused and abused pattern, and there are many side effects of singletons that you may not be aware of. There is a very good talk about the dangers of singletons here: http://www.youtube.com/watch?v=-FRm3VPhseI
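For contrast, a sketch of the usual alternative: share only the connection string and open a short-lived pooled connection per operation (the Db helper and its name are illustrative):

using System.Data.SqlClient;

static class Db
{
    // Only the connection string is shared; each operation gets its own pooled connection.
    private static readonly string _connectionString = "...";

    public static SqlConnection OpenConnection()
    {
        var con = new SqlConnection(_connectionString);
        con.Open();   // cheap: usually satisfied from the pool
        return con;   // the caller disposes it, which returns it to the pool
    }
}

// Usage:
// using (var con = Db.OpenConnection())
// using (var cmd = new SqlCommand("SELECT 1", con))
// {
//     cmd.ExecuteScalar();
// }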

ADO.Net SQL pooling in asp.net-mvc-4 application

I want to optimize the SQL connections in my web application (ASP.NET MVC 4). I read that ADO.NET automatically manages connection pooling, but I'm a bit lost about how exactly to implement that. Is it correct to create a global object with the connection in Application_Start and then pass that connection object to every data access object in my application? Something like this:
protected void Application_Start()
{
...
SqlConnection conn = new SqlConnection("Connection String...");
DAOPeople daoPeople = new DAOPeople(conn);
...
}
That way I avoid creating a new SqlConnection for each DAO. Is that correct?
No, don't do that. You'll end up with a bottleneck at your connection object, as that single connection is shared across all sessions and requests to your app.
For connection pooling, you do the exact opposite: don't try to share or re-use a single connection object; do just create a new SqlConnection every time you need it, open it on the spot, and make sure it's disposed as soon as you're done via a using block. Even though your code looks like you're opening and closing a lot of connections, the connection pooling feature is built in and ensures you keep drawing from a small number of existing connections in the same pool.
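A sketch of how DAOPeople could take a connection string instead of a shared connection (the GetNames method and the People table are made up for illustration):

using System.Collections.Generic;
using System.Data.SqlClient;

public class DAOPeople
{
    private readonly string _connectionString;

    public DAOPeople(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<string> GetNames()
    {
        var names = new List<string>();

        // A new SqlConnection per call; pooling reuses the physical connection underneath.
        using (var con = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("SELECT Name FROM dbo.People", con))
        {
            con.Open();
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    names.Add(reader.GetString(0));
                }
            }
        }

        return names;
    }
}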
That said, if you're on a really large site, you can do a little better. One thing large sites will do to help scale is avoid unnecessary memory allocations, and there is some memory that goes with creating an SqlConnection object. Instead they might, for example, have one main SqlConnection per HTTP request, with the possibility of either enabling MARS or having an additional secondary connection object in the request so they can run some things asynchronously. But this is only something the top 0.1% need to care about, and if you're at this level you're measuring to find out where the proper balance is for your particular site and load.

"open/close" SqlConnection or keep open?

I have my business-logic implemented in simple static classes with static methods. Each of these methods opens/closes SQL connection when called:
public static void DoSomething()
{
using (SqlConnection connection = new SqlConnection("..."))
{
connection.Open();
// ...
connection.Close();
}
}
But I think passing the connection object around and avoiding opening and closing connections saves performance. I made some tests a long time ago with the OleDbConnection class (not sure about SqlConnection), and it definitely helped to work like this (as far as I remember):
//pass around the connection object into the method
public static void DoSomething(SqlConnection connection)
{
    bool openConn = (connection.State == ConnectionState.Open);
    if (!openConn)
    {
        connection.Open();
    }

    // ....

    if (!openConn)
    {
        // close only if this method opened the connection
        connection.Close();
    }
}
So the question is: should I choose method (a) or method (b)? I read in another Stack Overflow question that connection pooling saves this performance cost for me, so I don't have to bother at all...
PS. It's an ASP.NET app - connections exist only during a web request. Not a Windows app or service.
Stick to option a.
The connection pooling is your friend.
Use method (a), every time. When you start scaling your application, the logic that deals with the connection state will become a real pain if you do not.
Connection pooling does what it says on the tin. Just think of what happens when the application scales, and how hard it would be to manually manage the connection open/close state. The connection pool does a fine job of handling this automatically. If you're worried about performance, think about some sort of memory caching mechanism so that nothing gets blocked.
Always close connections as soon as you are done with them, so the underlying database connection can go back into the pool and be available for other callers. Connection pooling is pretty well optimised, so there's no noticeable penalty for doing so. The advice is basically the same as for transactions - keep them short and close them when you're done.
It gets more complicated if you're running into MSDTC issues by using a single transaction around code that uses multiple connections, in which case you actually do have to share the connection object and only close it once the transaction is done with it.
However, you're doing things by hand here, so you might want to investigate tools that manage connections for you, like DataSets, LINQ to SQL, Entity Framework or NHibernate.
Disclaimer: I know this is old, but I found an easy way to demonstrate this fact, so I'm putting in my two cents worth.
If you're having trouble believing that the pooling is really going to be faster, then give this a try:
Add the following somewhere:
using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class TestExtensions
{
    public static void TimedOpen(this SqlConnection conn)
    {
        Stopwatch sw = Stopwatch.StartNew();
        conn.Open();
        Console.WriteLine(sw.Elapsed);
    }
}
Now replace all calls to Open() with TimedOpen() and run your program. For each distinct connection string you have, the console (output) window will show a single long-running open and a bunch of very fast opens.
If you want to label them, you can prepend new StackTrace(true).GetFrame(1) + to the argument of the WriteLine call.
There is a distinction between physical and logical connections. DbConnection is a kind of logical connection, and it uses an underlying physical connection (to Oracle, in my case). Closing/opening the DbConnection doesn't affect your performance, but it makes your code clean and stable - connection leaks become impossible.
Also, you should remember that some database servers limit the number of parallel connections; taking that into account, it is necessary to keep your connections very short-lived.
The connection pool frees you from connection state checking - just open, use, and immediately close them.
Normally you should keep one connection per transaction (no parallel work).
For example, when a user performs a charge action, your application needs to read the user's balance first and then update it; both operations should use the same connection.
Even though ADO.NET has its connection pool and the cost of handing out a pooled connection is very low, reusing the same connection within the transaction is the better choice.
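A minimal sketch of that charge scenario with SqlTransaction (the Accounts table and column names are made up):

using System.Data.SqlClient;

public static void Charge(string connectionString, int accountId, decimal amount)
{
    using (var con = new SqlConnection(connectionString))
    {
        con.Open();
        using (var tx = con.BeginTransaction())
        {
            // Read the balance and update it on the SAME connection and transaction.
            using (var read = new SqlCommand(
                "SELECT Balance FROM dbo.Accounts WHERE Id = @id", con, tx))
            {
                read.Parameters.AddWithValue("@id", accountId);
                var balance = (decimal)read.ExecuteScalar();
                if (balance < amount)
                {
                    tx.Rollback();
                    return;
                }
            }

            using (var update = new SqlCommand(
                "UPDATE dbo.Accounts SET Balance = Balance - @amount WHERE Id = @id", con, tx))
            {
                update.Parameters.AddWithValue("@amount", amount);
                update.Parameters.AddWithValue("@id", accountId);
                update.ExecuteNonQuery();
            }

            tx.Commit();
        }
    }   // disposing returns the connection to the pool
}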
Why not keep only one connection in the application? Because a connection is blocked while it executes a query or command, which means your application could only do one database operation at a time - very poor performance.
One more issue is that your application would always hold a connection, even when the user has merely opened the application and is not doing anything. If many users open your application, the database server will soon exhaust its available connections while your users have not actually done anything.
