Why so many sp_reset_connection calls with C# connection pooling?

We have a web service coded in C# that makes many calls to an MS SQL Server 2005 database. The code uses using blocks combined with connection pooling.
During a SQL trace we saw many, many calls to "sp_reset_connection". Most of these are short (< 0.5 sec), but sometimes we get calls lasting as long as 9 seconds.
From what I've read, sp_reset_connection is related to connection pooling and basically resets the state of an open connection. My questions:
Why does an open connection need its state reset?
Why so many of these calls?
What could cause a call to sp_reset_connection to take a non-trivial amount of time?
This is quite the mystery to me, and I appreciate any and all help!
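For reference, the data-access pattern looks roughly like this (a simplified sketch; table, column, and method names are placeholders, and the real service has many such calls):

    using System.Data.SqlClient;

    // Each call like this takes a connection from the ADO.NET pool on Open()
    // and returns it to the pool on Dispose(); the next reuse of that pooled
    // connection is what triggers sp_reset_connection on the server.
    public static int GetOrderCount(string connectionString, int customerId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = @id", conn))
        {
            cmd.Parameters.AddWithValue("@id", customerId);
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }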

The reset simply resets things so that you don't have to reconnect to reset them. It wipes the connection clean of things like SET or USE operations so each query has a clean slate.
The connection is still being reused. Here's an extensive list:
sp_reset_connection resets the following aspects of a connection:
It resets all error states and numbers (like @@error)
It stops all EC's (execution contexts) that are child threads of a parent EC executing a parallel query
It will wait for any outstanding I/O operations to complete
It will free any held buffers on the server by the connection
It will unlock any buffer resources that are used by the connection
It will release all memory allocated to and owned by the connection
It will clear any work or temporary tables that are created by the connection
It will kill all global cursors owned by the connection
It will close any open SQL-XML handles
It will delete any open SQL-XML related work tables
It will close all system tables
It will close all user tables
It will drop all temporary objects
It will abort open transactions
It will defect from a distributed transaction when enlisted
It will decrement the reference count for users in the current database, which releases the shared database lock
It will free acquired locks
It will release any handles that may have been acquired
It will reset all SET options to the default values
It will reset the @@rowcount value
It will reset the @@identity value
It will reset any session-level trace options that were set with dbcc traceon()
sp_reset_connection will NOT reset:
Security context, which is why connection pooling matches connections based on the exact connection string
If you entered an application role using sp_setapprole, since application roles can not be reverted
The transaction isolation level(!)
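To see that last point in practice, here is a minimal sketch (assuming System.Data.SqlClient against a throwaway database; the connection string is a placeholder, and newer server/client combinations may behave differently):

    using System;
    using System.Data.SqlClient;

    class IsolationLeakDemo
    {
        // Placeholder connection string; point it at a test database.
        const string Cs = "Data Source=.;Initial Catalog=TestDb;Integrated Security=true";

        static void Main()
        {
            using (var conn = new SqlConnection(Cs))
            {
                conn.Open();
                new SqlCommand("SET TRANSACTION ISOLATION LEVEL SERIALIZABLE", conn)
                    .ExecuteNonQuery();
            } // back to the pool; sp_reset_connection runs when it is next reused

            using (var conn = new SqlConnection(Cs))
            {
                conn.Open(); // most likely the same physical connection
                object level = new SqlCommand(
                    "SELECT CASE transaction_isolation_level WHEN 4 THEN 'SERIALIZABLE' ELSE 'other' END " +
                    "FROM sys.dm_exec_sessions WHERE session_id = @@SPID", conn).ExecuteScalar();
                Console.WriteLine(level); // prints SERIALIZABLE where the reset skips the isolation level
            }
        }
    }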

Here's an explanation of What does sp_reset_connection do? which says, in part "Data access API's layers like ODBC, OLE-DB and SqlClient call the (internal) stored procedure sp_reset_connection when re-using a connection from a connection pool. It does this to reset the state of the connection before it gets re-used." Then it gives some specifics of what that system sproc does. It's a good thing.

sp_reset_connection will get called every time you request a new connection from the pool.
It has to do this since the pool cannot guarantee that the user (you, the programmer, probably :)
has left the connection in a proper state; e.g. returning an old connection with uncommitted transactions would be... bad.
The number of calls should be related to the number of times you fetch a new connection.
As for some calls taking a non-trivial amount of time, I'm not sure. It could be that the server is just very busy processing other things at that moment, or it could be network delays.

Basically the calls are there to clear out state information. If you have ANY open DataReaders it will take a LOT longer. This is because a DataReader is only holding a single row but could still pull more rows; each one has to be drained before the reset can proceed. So make sure you have everything in using() statements and are not leaving readers open in some of your methods.
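A rough sketch of what "everything in using() statements" means here (table and column names are made up); the important part is that the reader is fully read and disposed before the connection goes back to the pool:

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static List<string> LoadNames(string connectionString)
    {
        var names = new List<string>();
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("SELECT Name FROM dbo.Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    names.Add(reader.GetString(0));
            } // reader disposed here, so the reset isn't stuck draining pending rows
            return names;
        }
    }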
How many total connections do you have running when this happens?
If you have a max of 5 and you hit all 5 then calling a reset will block - and it will appear to take a long time. It really is not, it is just blocked waiting on a pooled connection to become available.
Also if you are running on SQL Express you can get blocked due to threading requirements very easily (could also happen in full SQL Server, but much less likely).
What happens if you turn off connection pooling?

Related

Web API one SQL connection to all users

I have a SQL Server database with a 200-concurrent-user limit. I want to keep the first connection created by any user open and use it for all other users through my C# Web API. Is that possible?
SqlConnection is not intended to be used concurrently, so to do what you want would mean synchronizing all access, especially if there are transactions involved, or anything involving temporary tables that live longer than a single command. It can be done, but it isn't a good idea.
Note that SqlConnection is disposable, and when disposed, the underlying connection (which you never see) usually goes back to a pool. If you use 200 SqlConnection instances consecutively (not concurrently), you might have only used a single underlying connection.
If you must put a hard limit on your concurrent connections, you'll have to create your own pool (which might be a pool of one), with your own synchronization code while you lease and release connections. But: it won't be trivial.
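For what it's worth, one possible shape of that lease/release wrapper, sketched with a SemaphoreSlim capping concurrent use (class and method names are illustrative, and the normal ADO.NET pool still sits underneath):

    using System;
    using System.Data.SqlClient;
    using System.Threading;
    using System.Threading.Tasks;

    // Caps the number of concurrently open connections at maxConcurrent.
    public sealed class ConnectionGate
    {
        private readonly SemaphoreSlim _gate;
        private readonly string _connectionString;

        public ConnectionGate(string connectionString, int maxConcurrent)
        {
            _connectionString = connectionString;
            _gate = new SemaphoreSlim(maxConcurrent, maxConcurrent);
        }

        public async Task<T> WithConnectionAsync<T>(Func<SqlConnection, Task<T>> work)
        {
            await _gate.WaitAsync();                 // lease
            try
            {
                using (var conn = new SqlConnection(_connectionString))
                {
                    await conn.OpenAsync();
                    return await work(conn);
                }
            }
            finally
            {
                _gate.Release();                     // release
            }
        }
    }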

Does opening a SQL connection from C# code take time?

Actually I am confused about one thing.
I have a .NET application and use SQL Server 2008 for the database.
In method A I am filling a DataReader.
Inside the while loop I call another method B, passing one property of the result.
At that point I also open a DB connection and close it.
The same thing happens when method B calls another method C:
it also opens a DB connection and closes it.
Filling the final list in method A is taking time.
So my point is: is opening and closing a DB connection a time-consuming process?
Is opening and closing a DB connection a time-consuming process?
Yes. If pooling is enabled and the connection is returned from the pool, then opening and closing costs at least one round-trip to reset the connection. If the connection is not pooled, opening and closing involves a complete SSPI login handshake. If SQL authentication is used and encryption is enabled, there is an additional SSL handshake to establish a secure channel before the login handshake. Even under ideal conditions this takes tens of milliseconds, and it can go up to whole seconds with some network latency added.
A well written ASP.Net application needs one (pooled) connection per request, never more.
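If you want to measure it yourself, here is a quick sketch (connection strings are placeholders; Pooling=false disables the pool so you can compare):

    using System;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class OpenCostDemo
    {
        static void Main()
        {
            const string pooled    = "Data Source=.;Initial Catalog=TestDb;Integrated Security=true";
            const string notPooled = pooled + ";Pooling=false";

            Console.WriteLine("pooled:     " + TimeOpens(pooled, 20) + " ms total");
            Console.WriteLine("not pooled: " + TimeOpens(notPooled, 20) + " ms total");
        }

        static long TimeOpens(string connectionString, int iterations)
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < iterations; i++)
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open(); // pooled: mostly a reset round-trip; unpooled: a full login handshake
                }
            }
            sw.Stop();
            return sw.ElapsedMilliseconds;
        }
    }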
It is generally very bad practice to connect to a third-party application (database, web service or similar) inside any kind of loop. Communication like this always takes a relatively long time.
As the number of elements in the loop grows, the application gets slower and slower, because every iteration pays the full connection overhead again.
A much better approach is to perform one operation for all the elements up front, then pass the required data into your loop logic.
As with all things there are exceptions: in loops where you have millions of entities to process and the connection is a small overhead, it can be more efficient to process each entity atomically.
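A sketch of that "fetch once, then loop in memory" approach (table and column names are hypothetical):

    using System.Collections.Generic;
    using System.Data.SqlClient;

    public static class PriceLookup
    {
        // Instead of opening a connection inside the loop for every product id,
        // fetch everything that's needed in one query and loop over it in memory.
        public static Dictionary<int, decimal> LoadAllPrices(string connectionString)
        {
            var prices = new Dictionary<int, decimal>();
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT ProductId, Price FROM dbo.Products", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        prices[reader.GetInt32(0)] = reader.GetDecimal(1);
                }
            }
            return prices; // loop over this dictionary instead of hitting the DB per item
        }
    }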

Is there any way to resume a (long) transaction after the underlying mysql connection has been lost?

I have a long running transaction performing a lot of delete queries on a database; the issue is that the mysql connection (to the server on the same machine) will be dropped for no reason every now and then.
Currently, my retry logic will detect the disconnection, reconnect, and restart the whole transaction from the beginning, which may never succeed if the connection's "dropping frequency" is too high.
Is it possible at all to reopen a lost connection to continue the transaction?
I am using MySQL Connector for .NET.
What you are asking is not possible for a transaction. The point of a transaction is to make sure that either every action performed on the database is completed or none of them are.
If your connection-dropping frequency is too high and you have no way to fix it, then you should either run simple queries without a transaction or, better, reduce the number of actions per transaction and send a batch of small transactions instead of a single big one (a sketch follows below).
Also add some data validation checks to make sure everything is right with the entries.
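A rough sketch of the "batch of small transactions" idea using Connector/NET (the table name, retention rule and retry policy are made up):

    using System;
    using MySql.Data.MySqlClient;

    public static class ChunkedDelete
    {
        // Deletes rows in small batches, each in its own short transaction,
        // so a dropped connection only ever costs the current batch.
        public static void DeleteOldRows(string connectionString, int batchSize = 1000)
        {
            int failures = 0;
            while (true)
            {
                try
                {
                    using (var conn = new MySqlConnection(connectionString))
                    {
                        conn.Open();
                        using (var tx = conn.BeginTransaction())
                        using (var cmd = new MySqlCommand(
                            "DELETE FROM audit_log WHERE created_at < NOW() - INTERVAL 90 DAY LIMIT @n",
                            conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@n", batchSize);
                            int deleted = cmd.ExecuteNonQuery();
                            tx.Commit();
                            if (deleted == 0) return;   // nothing left to delete
                            failures = 0;
                        }
                    }
                }
                catch (MySqlException)
                {
                    // Connection dropped (or another transient error): reconnect on the
                    // next pass and retry just this batch, not the whole job.
                    if (++failures > 5) throw;
                }
            }
        }
    }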
Theoretically you could do exactly what you need with XA transactions... but the limitations in MySQL are rather drastic and make XA transactions on MySQL a joke, to be honest: both resume and join on start, and end with suspend, are not working (since 2006 when this was first released). So to answer your question: no! No chance with MySQL, forget it. Try increasing timeouts (both on the client and the server), memory pools, optimizing the queries, etc.; MySQL won't help you here.

Transfer SQL to MySQL C# Monitoring Program

I currently have a working program that monitors a few SQL Server tables and transfers data to MySQL tables. Essentially, I have a loop that checks every 30 seconds. My main concern is that I currently need to close and reopen the connection every time through the loop. The reason is that I was getting errors about multiple transactions. When I close my connections, I also need to dispose of them. I thought that disposing of the transactions would have solved this problem, but I was still getting errors about multiple transactions.
This all seems to be working fine but I was wondering if there was a better way to do this without closing the connection.
I am not sure about your errors, but it seems that you may have to increase the number of connections to the remote computer. Have a look here: http://msdn.microsoft.com/en-us/library/system.net.configuration.connectionmanagementelement.maxconnection.aspx
Another thing you can try is using only one connection to run multiple SQL statements.
If that doesn't help, then please post your code so we can check it...
Were you committing your transactions in your loop? transaction.Commit() ... that could have been the issue. Hard to say with no code. There is no need to worry about opening and closing connections anyway, since ADO.NET uses connection pooling behind the scenes. You only actually 'open' a connection the first time; after that it is kept open in the pool to be used again. As others have said though, post some code!
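If the loop only needs one transaction per pass, a sketch like this keeps a single connection open and commits/disposes the transaction each time around (table name and the work inside are invented, and it only shows the SQL Server side of the transfer):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    class TransferLoop
    {
        static void Run(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                while (true)
                {
                    // One transaction per pass: commit and dispose it before sleeping,
                    // so the connection never has a second transaction pending.
                    using (SqlTransaction tx = conn.BeginTransaction())
                    using (var cmd = new SqlCommand(
                        "UPDATE dbo.TransferQueue SET Processed = 1 WHERE Processed = 0", conn, tx))
                    {
                        cmd.ExecuteNonQuery();
                        tx.Commit();
                    }
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }
    }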

SQL Connection Best Practices

Currently there is a discussion about the pros and cons of a single-SQL-connection architecture.
To elaborate, what we are discussing is: open a SQL connection at application startup, close it only at application shutdown or on an error, and never create another connection, using just that one to talk to the DB.
We are wondering what the community thinks.
Close the connection as soon as you no longer need it for an undefined amount of time. By doing so, the connection returns to the connection pool (if connection pooling is enabled) and can be (re)used by someone else.
(Connections are expensive resources, and are sometimes limited).
If you keep hold of a connection for the entire lifetime of an application, and you have multiple users of that application (thus multiple instances of the app, and multiple connections), and your DB server is limited to only x concurrent connections, then you could have a problem ....
See also best practices for ado.net
Follow this simple rule... Open connection as late as possible and close it as soon as possible.
I think it's a bad idea, for several reasons.
If you have 10,000 users using your application, that's 10,000 connections open constantly
If you have to restart your Sql Server, all those 10,000 connections are invalidated and your application will suddenly - assuming you've included reconnect logic - be making 10000 near-simultaneous re-connect requests.
To expand on point 1, you should close connections as soon as you can because otherwise you're using up a finite resource for, potentially, an infinite period of time. If you had SQL Server configured to allow a maximum of 10,001 simultaneous connections, then you could only have 10,001 users running your application at any one time. If you open/close connections on demand then your application will scale much further, as the likelihood of all the active users hitting the database at exactly the same moment is realistically low.
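To see what hitting that finite limit looks like with pooling, a small sketch (placeholder connection string; Max Pool Size is deliberately tiny):

    using System;
    using System.Data.SqlClient;

    class PoolExhaustionDemo
    {
        static void Main()
        {
            const string cs = "Data Source=.;Initial Catalog=TestDb;Integrated Security=true;" +
                              "Max Pool Size=5;Connect Timeout=5";

            var held = new SqlConnection[5];
            for (int i = 0; i < held.Length; i++)
            {
                held[i] = new SqlConnection(cs);
                held[i].Open();                    // hold all 5 pooled connections open
            }

            try
            {
                using (var extra = new SqlConnection(cs))
                {
                    extra.Open();                  // blocks, then throws once Connect Timeout expires
                }
            }
            catch (InvalidOperationException ex)
            {
                Console.WriteLine(ex.Message);     // "Timeout expired ... max pool size was reached"
            }
            finally
            {
                foreach (var c in held) c.Dispose();
            }
        }
    }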
Under the covers, ADO.NET uses connection pooling to manage the connections to the database. I would suggest leaving it up to the connection pool to take care of your connection needs. Keeping a connection open for the duration of your application is a bad idea.
I use a helpdesk system called Richmond Systems that uses one connection for the life of the application, and as a laptop user, it is a royal pain in the behind. Even when I carry my laptop around open, the jumps between wireless access points are enough to drop the DB connection. The software then complains about the DB connection, gets into an error state and won't close. It has to be killed manually from Task Manager.
In short, DON'T HOLD OPEN A DATABASE CONNECTION FOR LONGER THAN NECESSARY.
But on the flip side, I'd be cautious about opening and closing connections too often. This is a lot cheaper with connection pooling than without, but even with pooling, the pool manager may decide to grow or shrink the pool, turning it back into an expensive operation.
My general rule is to open a connection when the user initiates some action, do the work, then close the connection before waiting for the next user input. For any given "Update" button click or whatever, I'll generally have only one connection. But you definitely do not want to keep connections open while waiting for user input if you can at all help it, for all the reasons others have mentioned. You could literally wait for days before the user presses another key or touches another button -- what if he leaves his computer on and goes on vacation? Tying up a resource for unpredictable amounts of time like that is bad news. In most cases, the elapsed time waiting for user input will far exceed the time doing actual work.
