I came across an article saying that using my SqlConnection like this:
using (SqlConnection sqlConnection = new SqlConnection(Config.getConnectionString()))
{
    using (SqlDataAdapter dataAdapter = new SqlDataAdapter(query, sqlConnection))
    {
        dataAdapter.Fill(dataSet);
    }
}
increases performance because it disposes the objects at the end of your method. So I have been coding with 'using' for a while now, but after chatting with some other developers, they said that creating and destroying the instance multiple times won't really increase performance.
What are the performance implications for SQL Server and system resources if I use 'using' in all of my data access methods? Will SQL Server be hit harder because of the connection being connected and reconnected multiple times?
SqlConnection, by default, has connection pooling enabled. The Dispose() simply releases the connection to the pool sooner. This means other code can then re-use this connection, reducing the connections to the SQL server, and reducing the time to establish a physical connection.
So yes: it can improve overall performance.
The alternatives:
if your code exits cleanly and you always remember to Close() the connection, then probably no difference
if your code throws an exception (that you haven't handled), or you forget to Close() the connection, then you could be leaving unused connections lying around until there is enough memory pressure to trigger GC and the finalizer. This could mean you need more physical connections to the SQL server (a pain), and every time a new underlying connection is needed it has to take the performance hit of establishing the actual database connection
Overall, though - think of IDisposable as a contract; it is your job as a .NET developer to notice IDisposable resources, and actively Dispose() them when you are done, ideally with using if the usage is tightly scoped (like in this case).
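To make the contract concrete, here is a minimal sketch of tightly scoping the connection, command and reader with nested using blocks (the connectionString variable, table and columns are assumed placeholders, not anything from the question):
using (SqlConnection connection = new SqlConnection(connectionString))
using (SqlCommand command = new SqlCommand("SELECT Id, Name FROM Customers", connection))
{
    connection.Open();
    using (SqlDataReader reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // Process each row; reader, command and connection are all
            // disposed automatically when their using blocks exit.
            Console.WriteLine(reader.GetInt32(0) + " " + reader.GetString(1));
        }
    }
} // The underlying connection goes back to the pool here, even if an exception was thrown.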
It has no significant influence on performance in most cases.
All the using() construct does is make sure the SqlConnection is freed / disposed of after it's done its job. That's all there is to it - no magic performance boost.
Sure - creating and disposing objects does cost a bit of performance - but it's either that, or you unnecessarily keep objects in memory and connections to your SQL Server open for much longer than needed.
I would vote for using the using() {...} approach 100% of the time - it's cleaner, it's safer, it's just better programming practice. The performance "hit" you might take is minuscule and not worth the trouble.
Marc
It increases performance only in the sense that, after your connection instance has been disposed, the physical connection in the pool can be re-used by another thread. If you kept it open, then another thread trying to open a connection would add a new physical connection to the pool.
ADO.NET has a feature called connection pooling, so even if you open connections intensively, the underlying connection will most likely not be destroyed, only returned to the pool.
If you are doing several database operations one after another, you should reuse the same connection instead of creating a new one for each. Otherwise you should close the connection as soon as possible, so that it is returned to the connection pool and can be reused.
You should always use a using block for the connection, so that you can be sure it is closed properly. If you fail to close a connection object it will stay in memory until the garbage collector removes it, hogging a database connection. That means that the next operation can't reuse the connection from the pool but has to establish a completely new connection, which takes a lot longer.
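As a rough sketch of the "several operations, one connection" case (the Log table and the SQL text here are just placeholders):
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();

    // Several commands share the same open connection...
    using (SqlCommand insert = new SqlCommand("INSERT INTO Log (Message) VALUES (@msg)", connection))
    {
        insert.Parameters.AddWithValue("@msg", "first operation");
        insert.ExecuteNonQuery();
    }

    using (SqlCommand count = new SqlCommand("SELECT COUNT(*) FROM Log", connection))
    {
        int rows = (int)count.ExecuteScalar();
    }
} // ...and the connection is returned to the pool as soon as the block ends.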
There is a performance improvement if you use using.
E.g.:
using (SqlConnection sqlConnection = new SqlConnection("ConnectionString"))
{
}
The compiler automatically adds a try/finally block.
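Roughly, the compiler rewrites the using block above into an equivalent try/finally, something like this sketch (not the exact generated code):
SqlConnection sqlConnection = new SqlConnection("ConnectionString");
try
{
    // Body of the using block goes here.
}
finally
{
    if (sqlConnection != null)
    {
        // Dispose() closes the connection and returns it to the pool,
        // even if the body above threw an exception.
        ((IDisposable)sqlConnection).Dispose();
    }
}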
I want to optimize the SQL connections in my web application (it's built in .NET MVC 4). I read that ADO.NET automatically manages connection pooling, but I'm a bit lost about how exactly to take advantage of that. Is it correct to create a global connection object in Application_Start and then pass that connection object to every data access object in my application? Something like this:
protected void Application_Start()
{
    ...
    SqlConnection conn = new SqlConnection("Connection String...");
    DAOPeople daoPeople = new DAOPeople(conn);
    ...
}
That way I avoid creating a new SqlConnection for each DAO. Is that correct?
No, don't do that. You'll end up with a bottleneck at your connection object, as that single connection is shared across all sessions and requests to your app.
For connection pooling, you do the exact opposite: don't try to share or re-use a single connection object; do just create a new SqlConnection every time you need it, open it on the spot, and make sure it's disposed as soon as you're done via a using block. Even though your code looks like you're opening and closing a lot of connections, the connection pooling feature is built in and ensures you keep drawing from a small number of existing connections in the same pool.
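For example, a DAOPeople method might look something like this (the CountPeople method, the People table and the "Connection String..." literal are made up for illustration):
public class DAOPeople
{
    public int CountPeople()
    {
        // A brand-new SqlConnection per call; the pool makes this cheap.
        using (SqlConnection conn = new SqlConnection("Connection String..."))
        using (SqlCommand cmd = new SqlCommand("SELECT COUNT(*) FROM People", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}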
That said, if you're on a really large site, you can do a little better. One thing large sites will do to help scale is avoid unnecessary memory allocations, and there is some memory that goes with creating an SqlConnection object. Instead they might, for example, have one main SqlConnection per HTTP request, with the possibility of either enabling MARS or having an additional secondary connection object in the request so they can run some things asynchronously. But this is only something the top 0.1% need to care about, and if you're at this level you're measuring to find out where the proper balance is for your particular site and load.
On a production system, I occasionally find the following error in the log:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
In order to remedy this I increased the maximum pool size to an outrageously high 10,000:
connectionString="metadata=res://*/MyEntities.csdl|res://*/MyEntities.ssdl|res://*/MyEntities.msl;provider=System.Data.SqlClient;provider connection string="data source=localhost;initial catalog=MyDb;integrated security=True;MultipleActiveResultSets=True;Max Pool Size=10000;App=My Service""
But the problem still occurs. What other causes could there be to this error, other than the connection pool being maxed out?
EDIT: before anyone else suggests it, I do always use using(...) { } blocks whenever I open a connection to the DB, e.g.:
using (var db = new MyEntities())
{
// do stuff
}
How are you connecting to the database?
A larger pool will keep your application alive longer, but the root problem is most likely that you're not releasing all of your connections properly. Check that you are closing connections once you're done with them, e.g.
using (SqlConnection myConnection = new SqlConnection(ConnectionString))
{
... perform database query
}
will automatically close the connection when done.
Usually this happens because somewhere in the code you close, for example, a DataReader, but you do not close its associated connection.
In the code above, there are two solutions depending on what you would like to do.
1/ Explicitly close the connection when done.
connection.Close();
2/ Use the connection in a using block; this guarantees that the system disposes of the connection (and closes it) when the code exits the block.
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
// Do work here; connection closed on following line.
}
Always Close() your connections. Doing it every time is a very important practice, and if you're not doing it, you are doing it wrong. Any application can exhaust the connection pool, so make sure you call Close() each time you have opened a connection, so it goes back to the pool. Do not depend on the GC to close the connections for you.
Make sure Close() gets called even when an exception is thrown - a finally block (or a using block) guarantees that, rather than duplicating the call in the try and catch blocks.
From what I have gathered from other sources like this MSDN page, a using (...) { } block on its own is not always enough to keep your application from running out of pooled connections. The reply by William Vaughn states that explicitly closing the connection via the Close() method returns connections to the pool far more quickly than a reliance on garbage collection.
So, while you have done nothing wrong - the using (...) { } block is proper coding - connections can still end up tied up for too long. You may also look into the GC.Collect() method to "force" garbage collection, but as the documentation states, this may cause performance issues (so it might be an option, or it might be trading one problem for another).
There are a lot of non-detailed questions on this one, so here goes.
What is the best practice for connection handling in C# with SQL Server 2008? We have an assembly (which in our case is used by a WCF Service) that makes calls to an SQL Server. In general it seems like you need three objects to do this: The connection object, the command object, and the reader object.
The only reliable way we've been able to get the calls to work is to do the following:
Open the connection.
Create the Command in a using() { } block
Create the Reader to handle the response.
Dispose of the reader.
Implicitly dispose of the Command at the end of the using() block
Close the connection.
We ran into an unusual problem when running the same command multiple times iteratively, where it would complain that there was already a command or reader object attached to the connection that was still open. The only rock solid solution was to close and reopen the connection with every command we did, iterative or just sequential (different commands.)
So this is the question, since I come from a mysql_pconnect background on DB connection handling.
1. Is it going to significantly impact performance to be opening and closing a connection for each command?
2. If so for 1., what is the proper workaround, or code structure, to handle serially repeating a command?
3. Is there any way to reuse a connection, command or reader at all?
4. If not for 3., does this really impact performance or memory usage significantly (as in, our users would notice)?
To answer point 1, if you look at the documentation for SqlConnection you'll see that it explains connection pooling. This means the SQL Server provider keeps a collection of connections readily available, and each SqlConnection you create simply gets the next available one. Therefore, to get the best performance, it is advisable to keep creating SqlConnection objects, use them for short operations, and then dispose of them, thereby returning the underlying connection to the pool.
For point 3, I believe you can re-use an SqlConnection across multiple SqlCommand.ExecuteNonQuery() calls, but while an SqlDataReader is open the connection is tied to that reader and can't be used for anything else until the reader has been closed/disposed.
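A small sketch of re-using one open connection for several non-query commands (widgetIds, the Widgets table and the SQL text are placeholders):
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();

    using (SqlCommand cmd = new SqlCommand("UPDATE Widgets SET Stock = Stock - 1 WHERE Id = @id", conn))
    {
        cmd.Parameters.Add("@id", SqlDbType.Int);

        // The same open connection (and command object) serves every iteration.
        foreach (int id in widgetIds)
        {
            cmd.Parameters["@id"].Value = id;
            cmd.ExecuteNonQuery();
        }
    }
}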
In addition to #PeterMonk's answer:
The "expensive", unmanaged part of the SqlConnection is re-used by the provider (connection pooling) as long as you use the same connection string. So while there is a small overhead to creating a new managed wrapper each time, it isn't actually a 1:1 relationship with creating physical connections to the SQL Server instance, so it isn't as expensive as you might think.
To serially repeat a command that returns a data reader, you must a) always execute the command on the same thread (commands are not thread safe) and b) Close() or Dispose() the DataReader instances before creating the next one. You can do that by putting the DataReaders in a using block as well.
Here is how you put the reader into a using block:
using (var dr = myCommand.ExecuteReader(...)) {
    // Previous discussions have indicated that a close in here,
    // while seemingly redundant, can possibly help with the error
    // you are seeing.
    dr.Close();
}
Another useful technique, as #DavidStratton mentions, is to enable MARS, but be aware that there is overhead associated with keeping result sets open - you still want to close your readers as soon as you are done with them, because unclosed, undisposed readers do represent significant resource allocations on both the server and the client.
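MARS is enabled per connection via a flag in the connection string; a minimal sketch (server and database names are placeholders):
var connectionString =
    "Data Source=localhost;Initial Catalog=MyDb;" +
    "Integrated Security=True;MultipleActiveResultSets=True";

using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    // With MARS on, more than one SqlDataReader can be open on this
    // connection at the same time (at some extra cost on the server).
}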
I have my business-logic implemented in simple static classes with static methods. Each of these methods opens/closes SQL connection when called:
public static void DoSomething()
{
    using (SqlConnection connection = new SqlConnection("..."))
    {
        connection.Open();
        // ...
        connection.Close();
    }
}
But I think passing the connection object around and avoiding opening and closing a connection saves performance. I made some tests a long time ago with the OleDbConnection class (not sure about SqlConnection), and working like this definitely helped (as far as I remember):
//pass around the connection object into the method
public static void DoSomething(SqlConnection connection)
{
    bool openConn = (connection.State == ConnectionState.Open);
    if (!openConn)
    {
        connection.Open();
    }
    // ....
    if (!openConn)
    {
        // Only close the connection if this method was the one that opened it.
        connection.Close();
    }
}
So the question is - should I choose method (a) or method (b)? I read in another Stack Overflow question that connection pooling handles this for me, so I don't have to bother at all...
PS. It's an ASP.NET app - connections exist only during a web-request. Not a win-app or service.
Stick to option a.
The connection pooling is your friend.
Use Method (a), every time. When you start scaling your application, the logic that deals with the state will become a real pain if you do not.
Connection pooling does what it says on the tin. Just think of what happens when the application scales, and how hard would it be to manually manage the connection open/close state. The connection pool does a fine job of automatically handling this. If you're worried about performance think about some sort of memory cache mechanism so that nothing gets blocked.
Always close connections as soon as you are done with them, so the underlying database connection can go back into the pool and be available for other callers. Connection pooling is pretty well optimised, so there's no noticeable penalty for doing so. The advice is basically the same as for transactions - keep them short and close when you're done.
It gets more complicated if you're running into MSDTC issues by using a single transaction around code that uses multiple connections, in which case you actually do have to share the connection object and only close it once the transaction is done with.
However, you're doing things by hand here, so you might want to investigate tools that manage connections for you, like DataSets, Linq to SQL, Entity Framework or NHibernate.
Disclaimer: I know this is old, but I found an easy way to demonstrate this fact, so I'm putting in my two cents worth.
If you're having trouble believing that the pooling is really going to be faster, then give this a try:
Add the following somewhere:
using System;
using System.Data.SqlClient;
using System.Diagnostics;

public static class TestExtensions
{
    public static void TimedOpen(this SqlConnection conn)
    {
        Stopwatch sw = Stopwatch.StartNew();
        conn.Open();
        Console.WriteLine(sw.Elapsed);
    }
}
Now replace all calls to Open() with TimedOpen() and run your program. Now, for each distinct connection string you have, the console (output) window will have a single long running open, and a bunch of very fast opens.
If you want to label them you can add new StackTrace(true).GetFrame(1) + to the call to WriteLine.
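For instance, a call site could look something like this (the connection string is a placeholder); the first open pays the cost of establishing the physical connection, the rest are served from the pool:
for (int i = 0; i < 5; i++)
{
    using (var conn = new SqlConnection("Data Source=localhost;Initial Catalog=MyDb;Integrated Security=True"))
    {
        conn.TimedOpen(); // First iteration is slow; the others are near-instant.
    }
}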
There is a distinction between physical and logical connections. DbConnection is a logical connection, and it uses an underlying physical connection to the database server. Closing/opening the DbConnection doesn't hurt your performance, but it does make your code clean and stable - connection leaks become impossible.
Also, remember that some database servers limit the number of parallel connections - taking that into account, it is necessary to keep your connections very short-lived.
The connection pool frees you from connection state checking - just open, use, and immediately close them.
Normally you should use one connection per transaction (no parallel work).
E.g. when a user performs a charge action, your application needs to read the user's balance first and then update it; both operations should use the same connection, as in the sketch below.
Even though ADO.NET has its connection pool and the cost of handing out a pooled connection is very low, reusing the connection within the transaction is still the better choice.
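A sketch of the charge example with one connection and one transaction (the Accounts table, userId, chargeAmount and connectionString are made-up placeholders):
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tx = conn.BeginTransaction())
    {
        decimal balance;
        using (SqlCommand read = new SqlCommand(
            "SELECT Balance FROM Accounts WHERE UserId = @id", conn, tx))
        {
            read.Parameters.AddWithValue("@id", userId);
            balance = (decimal)read.ExecuteScalar();
        }

        using (SqlCommand update = new SqlCommand(
            "UPDATE Accounts SET Balance = @newBalance WHERE UserId = @id", conn, tx))
        {
            update.Parameters.AddWithValue("@newBalance", balance - chargeAmount);
            update.Parameters.AddWithValue("@id", userId);
            update.ExecuteNonQuery();
        }

        tx.Commit(); // Both steps ran on the same connection and the same transaction.
    }
}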
Why not keep only one connection for the whole application?
Because a connection is blocked while it executes a query or command, which would mean your application can only perform one database operation at a time - very poor performance.
One more issue is that your application would then always hold a connection, even when a user has merely opened the app and isn't doing anything. If many users open your application, the database server will soon run out of connections while your users haven't actually done anything.
I read that .NET uses connection pooling.
For example, if I instantiate a bunch of SqlConnection objects with the same connection string, then internally .NET will know to use the same connection.
Is this correct?
Also, in a big web-based application, any tips on the best way to harness this "power" ?
Setting up the TCP connection between your Web application and SQL Server can be an expensive operation. Connection pooling allows connections to the database to be reused for subsequent data requests. Rather than setting up a new TCP connection on each request, a new connection is set up only when one is not available in the connection pool. When the connection is closed, it is returned to the pool where it remains connected to the database, as opposed to completely tearing down that TCP connection.
Always close your connections when you're finished with them. No matter what anyone says about garbage collection within the Microsoft .NET Framework, always call Close or Dispose explicitly on your connection when you are finished with it. Do not trust the common language runtime (CLR) to clean up and close your connection for you. The CLR will eventually destroy the class and force the connection closed, but you have no guarantee when the garbage collection on the object will actually happen.
To use connection pooling optimally, there are a couple of rules to live by. First, open the connection, do the work, and then close the connection. It's okay to open and close the connection multiple times on each request if you have to, rather than keeping the connection open and passing it around through different methods. Second, use the same connection string (and the same thread identity if you're using integrated authentication). If you don't use the same connection string, for example customizing the connection string based on the logged-in user, you won't get the same optimization value provided by connection pooling. And if you use integrated authentication while impersonating a large set of users, your pooling will also be much less effective.
The .NET CLR data performance counters can be very useful when attempting to track down any performance issues that are related to connection pooling.
http://msdn.microsoft.com/en-us/magazine/cc163854.aspx
If you use the following syntax, whenever the using block is exited the Dispose method will be called, even if an exception occurs.
using(SqlConnection connection = new SqlConnection())
{
    // Work with connection object here.
}
// connection object gets disposed here.
Not sure if this is entirely related, but I just took over a project and noticed the original programming team failed to do something very important.
When you have an SqlConnection, let's call it conn, and you do this:
conn.Open();
and then perform some SQL statement, be it a select, insert or update, it is entirely possible that it will fail. So of course, you should do this:
try { conn.Open(); }
catch (SqlException ex)
{
    //do your logging/exception handling
}
However, people forget to add the finally block:
finally {
    if (conn.State == System.Data.ConnectionState.Open)
        conn.Close();
}
You want to make sure that if an exception is thrown the connection does not stay open, so make sure you close it.