"open/close" SqlConnection or keep open? - c#

I have my business logic implemented in simple static classes with static methods. Each of these methods opens/closes a SQL connection when called:
public static void DoSomething()
{
    using (SqlConnection connection = new SqlConnection("..."))
    {
        connection.Open();
        // ...
        connection.Close();
    }
}
But I think passing the connection object around and avoiding repeatedly opening and closing connections would save performance. I made some tests a long time ago with the OleDbConnection class (not sure about SqlConnection), and working like this definitely helped (as far as I remember):
// pass the connection object around into the method
public static void DoSomething(SqlConnection connection)
{
    bool openConn = (connection.State == ConnectionState.Open);
    if (!openConn)
    {
        connection.Open();
    }

    // ....

    // only close the connection if this method opened it
    if (!openConn)
    {
        connection.Close();
    }
}
So the question is - should I choose method (a) or method (b)? I read in another Stack Overflow question that connection pooling saves the performance for me, so I don't have to bother about it at all...
PS. It's an ASP.NET app - connections exist only during a web-request. Not a win-app or service.

Stick to option (a).
The connection pooling is your friend.

Use Method (a), every time. When you start scaling your application, the logic that deals with connection state will become a real pain if you do not.
Connection pooling does what it says on the tin. Just think of what happens when the application scales, and how hard it would be to manually manage the connection open/close state. The connection pool does a fine job of handling this automatically. If you're worried about performance, think about some sort of memory caching mechanism so that nothing gets blocked.

Always close connections as soon as you are done with them, so the underlying database connection can go back into the pool and be available for other callers. Connection pooling is pretty well optimised, so there's no noticeable penalty for doing so. The advice is basically the same as for transactions - keep them short and close them when you're done.
It gets more complicated if you're running into MSDTC issues by using a single transaction around code that uses multiple connections, in which case you actually do have to share the connection object and only close it once the transaction is done with.
However, you're doing things by hand here, so you might want to investigate tools that manage connections for you, like DataSets, LINQ to SQL, Entity Framework or NHibernate.

Disclaimer: I know this is old, but I found an easy way to demonstrate this fact, so I'm putting in my two cents worth.
If you're having trouble believing that the pooling is really going to be faster, then give this a try:
Add the following somewhere:
using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class TestExtensions
{
    public static void TimedOpen(this SqlConnection conn)
    {
        Stopwatch sw = Stopwatch.StartNew();
        conn.Open();
        Console.WriteLine(sw.Elapsed);
    }
}
Now replace all calls to Open() with TimedOpen() and run your program. For each distinct connection string you have, the console (output) window will show a single long-running open and a bunch of very fast opens.
If you want to label them you can add new StackTrace(true).GetFrame(1) + to the call to WriteLine.
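For instance, a hedged sketch of what that labelled WriteLine might look like inside TimedOpen (it assumes the interesting caller is exactly one frame up the stack):

// prints the calling frame (method, file and line) next to the elapsed open time
Console.WriteLine(new StackTrace(true).GetFrame(1) + " " + sw.Elapsed);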

There is a distinction between physical and logical connections. DbConnection is a kind of logical connection, and it uses an underlying physical connection to Oracle. Closing/opening the DbConnection doesn't affect your performance, but it makes your code clean and stable - connection leaks become impossible.
Also remember that some database servers limit the number of parallel connections - with that in mind, it is necessary to keep your connections very short-lived.
The connection pool frees you from connection-state checking - just open, use, and immediately close them.

Normally you should keep one connection per transaction (no parallel work).
For example, when a user performs a charge action, your application needs to find the user's balance first and then update it; both operations should use the same connection.
Even though ADO.NET has its connection pool and the cost of handing out a connection is very low, reusing the same connection within the transaction is the better choice.
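A rough sketch of that idea, assuming SQL Server and an invented Accounts table (the names are purely illustrative): both commands share one connection and one transaction.

public static void Charge(string connectionString, int userId, decimal amount)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            // read the balance on the same connection/transaction
            decimal balance;
            using (SqlCommand read = new SqlCommand(
                "SELECT Balance FROM Accounts WHERE UserId = @UserId", connection, transaction))
            {
                read.Parameters.AddWithValue("@UserId", userId);
                balance = (decimal)read.ExecuteScalar();
            }

            if (balance < amount)
            {
                throw new InvalidOperationException("Insufficient balance.");
            }

            // then update it, still on the same connection/transaction
            using (SqlCommand update = new SqlCommand(
                "UPDATE Accounts SET Balance = Balance - @Amount WHERE UserId = @UserId",
                connection, transaction))
            {
                update.Parameters.AddWithValue("@Amount", amount);
                update.Parameters.AddWithValue("@UserId", userId);
                update.ExecuteNonQuery();
            }

            transaction.Commit();
        }
    }
}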
Why not keep only one connection in the application?
Because a connection blocks while you execute a query or command, so your application would only be able to do one database operation at a time - very poor performance.
Another issue is that your application would always hold a connection, even when the user has merely opened it and is not doing anything. If many users open your application, the database server will soon run out of connections while your users haven't actually done anything.

Related

Is overutilisation of Sql Connections in C# a problem?

Throughout the program I am currently working on, I realized that whenever I need to access SQL Server, I just type a query string and execute the query using SqlCommand and SqlConnection in C#. At a certain point in the program, I even opened a connection and ran a query in a for loop.
Is it unhealthy for the program to constantly open-close connections?
I am not very familiar with the terminology, so you might have some trouble understanding what I am talking about.
Could doing this very frequently cause any problems?
string queryString = "Some SQL Query";

public void RunQuery()
{
    SqlConnection con = new SqlConnection(conString);
    SqlCommand cmd = new SqlCommand(queryString, con);
    cmd.Parameters.AddWithValue("@SomeParam", someValue);
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
I use this template in almost every class I create (usually to get, update, or insert data from/into a DataTable).
Well, is it harmful?
The short answer is yes - it is inefficient to constantly open and close connections. Opening an actual connection is a very expensive process and managing a connection for the lifetime of its need (which usually is the lifetime of the application or process using it) is fraught with errors.
That is why connection pooling was introduced a long time ago. There is a layer beneath your application that will manage the physical opening/closing of connections in a more efficient way. This also helps prevent the chances that an open connection is lost and continues to stay open (which causes lots of problems). By default pooling is enabled so you don't need to do anything to use it.
With pooling, you write code to open a connection, use it for the duration of a particular section of code, and then close it. If the connection pool has an open but unused connection, it will be reused rather than a new one opened. When you close the connection, that simply returns it to the pool and makes it available for the next open attempt. You should also get familiar with the C# using statement.
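For reference, a minimal sketch of the question's template rewritten around using blocks (conString, queryString and someValue are the same assumed values as in the question):

public void RunQuery()
{
    // using guarantees Dispose (and therefore Close) even if an exception is thrown,
    // which simply returns the underlying connection to the pool
    using (SqlConnection con = new SqlConnection(conString))
    using (SqlCommand cmd = new SqlCommand(queryString, con))
    {
        cmd.Parameters.AddWithValue("@SomeParam", someValue);
        con.Open();
        cmd.ExecuteNonQuery();
    }
}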

ADO.Net SQL pooling in asp.net-mvc-4 application

I want to optimize the SQL connections in my web application, which is built in .NET MVC 4. I read that ADO.NET automatically manages connection pooling, but I'm a bit lost on how exactly to implement that. Is it correct to create a global connection object in Application_Start and then pass that connection object to every data-access object in my application? Something like this:
protected void Application_Start()
{
    ...
    SqlConnection conn = new SqlConnection("Connection String...");
    DAOPeople daoPeople = new DAOPeople(conn);
    ...
}
That way I avoid creating a new SqlConnection for each DAO. Is that correct?
No, don't do that. You'll end up with a bottleneck at your connection object, as that single connection is shared across all sessions and requests to your app.
For connection pooling, you do the exact opposite: don't try to share or reuse a single connection object; just create a new SqlConnection every time you need it, open it on the spot, and make sure it's disposed as soon as you're done via a using block. Even though your code looks like it's opening and closing a lot of connections, the connection pooling feature is built in and ensures you keep drawing from a small number of existing physical connections in the same pool.
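As a rough sketch of that advice (DAOPeople, its table and its query are invented for illustration), the DAO can be given the connection string instead of a live connection, and open a short-lived connection inside each method:

public class DAOPeople
{
    private readonly string _connectionString;

    public DAOPeople(string connectionString)
    {
        _connectionString = connectionString;
    }

    public int CountPeople()
    {
        // a new SqlConnection per call; the pool supplies the physical connection
        using (SqlConnection conn = new SqlConnection(_connectionString))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM People", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}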
That said, if you're on a really large site, you can do a little better. One thing large sites will do to help scale is avoid unnecessary memory allocations, and there is some memory that goes with creating an SqlConnection object. Instead they might, for example, have one main SqlConnection per HTTP request, with the possibility of either enabling MARS or having an additional secondary connection object in the request so they can run some things asynchronously. But this is only something the top 0.1% need to care about, and if you're at this level you're measuring to find out where the proper balance is for your particular site and load.

Is the performance of non singleton sql connections better?

This is a follow-up question to: Is it necessary to deconstruct singleton sql connections?
As a few comments there stated, it is bad design to use a singleton for the SQL connection instead of multiple using blocks.
What intrigues me, though, is the statement that the performance of the using variant is better than that of the singleton variant. That it is bad design is clear to me (I know most of the pros and cons of singletons, especially the cons), but the performance statement surprised me.
Normally I would think: opening and closing SQL connections 100-1000 times during a program's run SHOULD be less performant than doing it only once. Thus my question is: is the performance of the non-singleton variant really better, and if so, why?
Singleton example:
public class SimpleClass
{
    // Static variable that must be initialized at run time.
    public static SqlConnection singletonConnection;

    // Static constructor is called at most one time, before any
    // instance constructor is invoked or member is accessed.
    static SimpleClass()
    {
        singletonConnection = new SqlConnection("Data Source.....");
    }
}
Usings example:
using (SqlConnection connection = new SqlConnection("Data ..."))
{
    ....
}
Basically the answer is simple:
Connecting to a database server typically consists of several time-consuming steps. A physical channel such as a socket or a named pipe must be established, the initial handshake with the server must occur, the connection string information must be parsed, the connection must be authenticated by the server, checks must be run for enlisting in the current transaction, and so on. In practice, most applications use only one or a few different configurations for connections.
This means that during application execution, many identical connections will be repeatedly opened and closed. To minimize the cost of opening connections, ADO.NET uses an optimization technique called connection pooling.
You shouldn't use a singleton as some kind of 'performance accelerator', because that is not what it is for. By using it to store one static SQL connection you are exposing yourself to many memory and connection problems. How are you supposed to close the connection? How are you supposed to release the memory it consumes? When that one connection is closed, you are closing it for all application users. How are you planning to reconnect with that approach?
What "connection pooling" basically means is that even if you are creating many SqlConnection objects, as long as they do not differ in connection string, the existing physical connection can be reused.
Some detailed info can be found in the ADO.NET documentation on connection pooling.
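If you want to see the effect of the pool for yourself, one rough experiment (the connection strings below are placeholders; Pooling=false is the SqlClient connection-string keyword that disables pooling) is to time Open() with pooling on and off:

using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class PoolingDemo
{
    public static void TimeOpens(string connectionString)
    {
        for (int i = 0; i < 5; i++)
        {
            Stopwatch sw = Stopwatch.StartNew();
            using (SqlConnection conn = new SqlConnection(connectionString))
            {
                conn.Open();
            }
            Console.WriteLine(sw.Elapsed);
        }
    }

    public static void Main()
    {
        // pooled (the default): only the first Open pays the full physical-connection cost
        TimeOpens("Data Source=.;Initial Catalog=MyDb;Integrated Security=true");

        // pooling disabled: every Open pays the full cost
        TimeOpens("Data Source=.;Initial Catalog=MyDb;Integrated Security=true;Pooling=false");
    }
}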

Good technique for connections with PostgreSQL

I am using Npgsql to access PostgreSQL from .NET. I am concerned about the right way to connect to the database, since in my opinion it is expensive to open a connection and then close it every single time I want to perform a transaction.
So here is the general idea:
public class PostgreSQL
{
    private NpgsqlConnection conn; // <- one connection for this object, open all the time

    public PostgreSQL(string connString)
    {
        conn = new NpgsqlConnection(connString);
        conn.Open();
    }

    // ...here making some queries...

    public void Close() => conn.Close(); // <- use this at the very end of the program
}
As you can see above, I have one connection for an instance of PostgreSQL class.
My question:
Is this approach right? Or should I open and close a connection every single time I want to make a transaction - open as late as possible and close as soon as possible?
If I should open and close connections every single time, should I write a queue that limits the number of concurrent connections? Or will PostgreSQL handle it itself - can I, theoretically, open 200 connections and it will be alright?
Please share your experience with me ^^
EDIT:
I will run 100-200 queries a second.
Npgsql supports connection pooling (the pool size is customizable), so the common pattern:
using (NpgsqlConnection conn = new NpgsqlConnection(...))
{
    ...
}
should be the better choice.
In my opinion you should open a connection at the moment you need it, and close it right after. This prevents a lot of connections on the server from being kept alive.
In my experience, opening a connection doesn't take that much time (a few milliseconds, usually a fraction of your execution time), so you don't have to worry too much.
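To make that concrete, here is a minimal sketch of the class from the question reworked so that each operation opens a short-lived connection (the users table and the query are invented for illustration; Npgsql's built-in pool keeps the repeated opens cheap):

public class PostgreSQL
{
    private readonly string connString;

    public PostgreSQL(string connString)
    {
        this.connString = connString;
    }

    public long CountUsers()
    {
        // open as late as possible, close as soon as possible
        using (var conn = new NpgsqlConnection(connString))
        using (var cmd = new NpgsqlCommand("SELECT count(*) FROM users", conn))
        {
            conn.Open();
            return (long)cmd.ExecuteScalar();
        }
    }
}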

Should I open and close db for each query?

I am using old-school ADO.NET with C#, so there is a lot of this kind of code. Is it better to make one function per query and open and close the database each time, or to run multiple queries with the same connection object? Below is just one query, for example purposes only.
using (SqlConnection connection = new SqlConnection(ConfigurationManager.ConnectionStrings["DBConnectMain"].ConnectionString))
{
    // Add user to database, so they can't vote multiple times
    string sql = "insert into PollRespondents (PollId, MemberId) values (@PollId, @MemberId)";
    SqlCommand sqlCmd = new SqlCommand(sql, connection);
    sqlCmd.Parameters.Add("@PollId", SqlDbType.Int);
    sqlCmd.Parameters["@PollId"].Value = PollId;
    sqlCmd.Parameters.Add("@MemberId", SqlDbType.Int);
    sqlCmd.Parameters["@MemberId"].Value = Session["MemberId"];
    try
    {
        connection.Open();
        Int32 rowsAffected = (int)sqlCmd.ExecuteNonQuery();
    }
    catch (Exception ex)
    {
        //Console.WriteLine(ex.Message);
    }
}
Well, you could measure; but as long as you are using the connections (so they are disposed even if you get an exception), and have pooling enabled (for SQL Server it is enabled by default), it won't matter hugely; closing (or disposing) just returns the underlying connection to the pool. Both approaches work. Sorry, that doesn't help much ;p
Just don't keep an open connection while you do other lengthy non-db work. Close it and re-open it; you may actually get the same underlying connection back, but somebody else (another thread) might have made use of it while you weren't.
For most cases, opening and closing a connection per query is the way to go (as Chris Lively pointed out). However, there are some cases where you'll run into performance bottlenecks with this solution.
For example, when dealing with very large volumes of quick-to-execute queries that depend on previous results, I might suggest executing multiple queries on a single connection. You might encounter this when doing batch processing of data, or data massaging for reporting purposes.
Always be sure to use the 'using' wrapper to avoid memory leaks, though, regardless of which pattern you follow.
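A rough sketch of that batch-processing case (connectionString and orderIds are assumed locals, and the Orders update is invented): several quick, dependent commands share one connection inside a single using block.

using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // many quick commands reuse the same open connection
    foreach (int orderId in orderIds)
    {
        using (SqlCommand cmd = new SqlCommand(
            "UPDATE Orders SET Processed = 1 WHERE OrderId = @OrderId", connection))
        {
            cmd.Parameters.AddWithValue("@OrderId", orderId);
            cmd.ExecuteNonQuery();
        }
    }
} // disposing the connection here returns it to the pool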
If the methods are structured such that a single command is executed within a single method, then Yes: instantiate and dispose of the connection for each command.
If the methods are structured such that you have multiple commands executed in the same block of code, then the outer block needs to be the using clause for the connection.
ADO is very good about connection pooling so instantiating and disposing of the command object is going to be extremely fast and really won't impact performance.
As an example, we have a few pages that will execute up to 50 queries in order to compose the page. Because there is branching code to determine which queries to run, we have each of them wrapped in their own using (connection...) clauses.
We once ripped those out, grabbed one connection object and passed it to the individual methods. This gave exactly zero performance improvement while complicating the hell out of the code with exception clauses everywhere to ensure the connection was properly disposed at the end. At the end of the test, we rolled the code back to how it was before. Much cleaner to know exactly what was going on and when a connection was being used.
Well, as always, it depends. If you have 5 database calls to make within the same method call, you should probably use a single connection.
However, holding onto connection while nothing is happening isn't usually advised from a scalability standpoint.
ADO.NET is old school now? Wow, you just made me feel old. To me Rogue Wave ODBC using Borland C++ on Windows 3.1 is old school.
To answer: in general you want to understand how your data drivers work. Understand concepts such as connection pooling and learn to profile the transaction costs associated with connecting/disconnecting and executing queries. Then take that knowledge and apply it to your situation.
