SQL timeout on last connection - C#

I have a weird issue. I have a C# application that makes multiple connections to SQL Server (7 in total). Everything had been working fine for a while, and then all of a sudden SQL Server started timing out on the last connection. That connection is pretty simple:
public static void APP()
{
    using (SqlConnection conn7 = new SqlConnection(ConfigurationManager.ConnectionStrings["Connect"].ConnectionString))
    {
        conn7.Open();
        SqlCommand cmd7 = new SqlCommand("sp_proc", conn7);
        cmd7.CommandType = System.Data.CommandType.StoredProcedure;
        cmd7.ExecuteNonQuery();
        conn7.Close();
    }
}
My connection string looks like this.
add name="Connect" connectionString="Data Source=Server; Initial Catalog=DB; User ID=User; Password=password" providerName="System.Data.SqlClient"
I am wrapping each one in a using block and closing each connection at the end of each class. Has anyone seen anything like this happen? Do I have too many connections?
I run each class in order from Main.

If it is timing out, there are three likely scenarios:
sp_proc is simply taking too long to run; you'll need to address the code
some kind of locking is making it impossible to complete (perhaps an open transaction on a competing SPID that has touched the same data and taken conflicting locks)
some unrelated server load happening at the same time is making it run too slowly (this is unlikely to be the issue if it happens reliably)

I would recommend adding
cmd7.CommandTimeout = 6000;
The timeout is measured in seconds, so pick a value that is acceptable for the users of the application.
I'd recommend this as a standard for all your SQL commands too; this way you should always have sufficient time to get the data.
One thing you might also want to do is run a trace / SQL Profiler session against the database you're running against, and check for locking of some kind.
If this is timing out, I would suspect that a process is being suspended or there is a lock of some kind somewhere.
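If you'd rather check for blocking from code than through Profiler, a minimal sketch along these lines can surface blocked sessions (it assumes the login has VIEW SERVER STATE permission, and it reuses the "Connect" connection string from the question):
using System;
using System.Configuration;
using System.Data.SqlClient;

// Hypothetical diagnostic helper: lists sessions that are currently
// blocked and which session is blocking them.
public static class BlockingCheck
{
    public static void PrintBlockedSessions()
    {
        const string sql = @"
            SELECT session_id, blocking_session_id, wait_type, wait_time
            FROM sys.dm_exec_requests
            WHERE blocking_session_id <> 0;";

        using (SqlConnection conn = new SqlConnection(
            ConfigurationManager.ConnectionStrings["Connect"].ConnectionString))
        using (SqlCommand cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    Console.WriteLine("Session {0} blocked by {1} ({2}, {3} ms)",
                        rdr.GetInt16(0), rdr.GetInt16(1),
                        rdr.IsDBNull(2) ? "no wait type" : rdr.GetString(2),
                        rdr.GetInt32(3));
                }
            }
        }
    }
}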

Related

How to manage SqlConnection in C# for high-frequency transactions?

I have an application that connects to a SQL Server database with high frequency. Inside this service there are many scheduled tasks that run every second, and each one executes some query.
I don't understand which solution is better in this situation:
Opening a single SqlConnection, keeping it open while the application is running, and executing every query with that connection
Opening a new connection each time I want to execute a query, and closing it after the query executes (is this suitable for so many scheduled tasks running every second?)
I tried the second solution, but is there a better choice?
How do ORMs like EF manage connections?
As you can see, I have many services. I can't change the interval, and the interval is important to me, but the code makes many calls and I'm looking for a better way to manage the database connections. Also, I'm creating connections with a using statement.
Is there any better solution?
You should use the SQL connection pooling feature for that.
It automatically manages in the background whether a connection needs to be opened or can be reused.
Documentation: https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql-server-connection-pooling?source=recommendations
Example copied from that page:
using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=Northwind"))
{
    connection.Open();
    // Pool A is created.
}

using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=pubs"))
{
    connection.Open();
    // Pool B is created because the connection strings differ.
}

using (SqlConnection connection = new SqlConnection(
    "Integrated Security=SSPI;Initial Catalog=Northwind"))
{
    connection.Open();
    // The connection string matches pool A.
}
By using the "using" statement, application checks if a connection in this pool can be reused before opening a new connection. So the overhead of opening and closing the connections disappears.
But after your last edit you seem to have other problems in your current architecture. Like the other poster recommends you can try to use the "with (nolock)" parameter in your sql statements. It creates dirty reads, but maybe that's ok for your application.
Alternatively if all your services use the same select statement maybe a stored procedure or a caching mechanism could help.
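For the caching idea, a minimal sketch with MemoryCache might look like this (the cache key, the dbo.Status query, and the one-second expiry are assumptions, not part of the question):
using System;
using System.Data;
using System.Data.SqlClient;
using System.Runtime.Caching;

public static class CachedQuery
{
    // Hypothetical helper: serves repeated reads from an in-memory cache
    // so that tasks firing every second don't all hit the database.
    public static DataTable GetStatus(string connStr)
    {
        MemoryCache cache = MemoryCache.Default;
        DataTable cached = cache.Get("status") as DataTable;
        if (cached != null)
            return cached; // served from cache, no database round trip

        DataTable table = new DataTable();
        using (SqlConnection conn = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand("SELECT * FROM dbo.Status", conn)) // assumed query
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
                table.Load(rdr);
        }

        // Keep the result for one second; tune to taste.
        cache.Set("status", table, DateTimeOffset.Now.AddSeconds(1));
        return table;
    }
}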
I assume that you are already opening/closing your SQL connections with either a "using" statement or explicitly in your code (try/catch/finally). If so, you are already making use of connection pooling, as it is enabled in ADO.NET by default ("By default, connection pooling is enabled in ADO.NET").
Therefore I don't think your problem is so much a connection/resource problem as a database concurrency issue. I assume it to be one of two issues:
Your code is making so many calls to the SQL Server that it exhausts all the available connections and nobody else can get one
Your code is locking tables in SQL Server, causing other code/applications to time out
If it is case 1, try to redesign your code to be "less chatty" with the database. Instead of making several inserts/updates per second, perhaps buffer the changes and make a single insert/update every 3-5 seconds in batch mode (obviously, if possible) - see the sketch after case 2. Or maybe your SQL statements are taking longer than 1 second to execute while you are calling them every second, causing a backlog scenario?
If it is case 2, try to redesign the SQL tables in such a way that the "reading" applications are not influenced by the "writing" application. Normally this involves a service that periodically writes aggregated data to a read-only table for viewing, or at the very least adding a "WITH (NOLOCK)" hint to the SELECT clauses to allow dirty reads (i.e. it won't lock the table to read, but may return a slightly out-of-date dataset, i.e. eventual consistency).
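Here is a minimal sketch of the buffering idea from case 1. The BufferedWriter name, the dbo.Readings table, and the 5-second flush interval are all illustrative assumptions:
using System;
using System.Collections.Concurrent;
using System.Data;
using System.Data.SqlClient;
using System.Threading;

public class BufferedWriter : IDisposable
{
    private readonly ConcurrentQueue<int> _pending = new ConcurrentQueue<int>();
    private readonly Timer _flushTimer;
    private readonly string _connStr;

    public BufferedWriter(string connStr)
    {
        _connStr = connStr;
        // Flush every 5 seconds instead of one round trip per value.
        _flushTimer = new Timer(_ => Flush(), null, 5000, 5000);
    }

    public void Enqueue(int value)
    {
        _pending.Enqueue(value);
    }

    private void Flush()
    {
        DataTable batch = new DataTable();
        batch.Columns.Add("Value", typeof(int));
        int v;
        while (_pending.TryDequeue(out v))
        {
            batch.Rows.Add(v);
        }
        if (batch.Rows.Count == 0) return;

        using (SqlConnection conn = new SqlConnection(_connStr))
        using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
        {
            conn.Open();
            bulk.DestinationTableName = "dbo.Readings"; // assumed table with a Value column
            bulk.ColumnMappings.Add("Value", "Value");
            bulk.WriteToServer(batch); // one round trip for the whole batch
        }
    }

    public void Dispose()
    {
        _flushTimer.Dispose();
        Flush(); // write anything still buffered
    }
}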
Good luck

Is overutilisation of SQL connections in C# a problem?

Throughout the program I am currently working on, I realized that whenever I need access to SQL Server, I just type a query string and execute it using SqlCommand and SqlConnection in C#. At one point during the program, I even opened a connection and ran a query in a for loop.
Is it unhealthy for the program to constantly open-close connections?
***I am not very familiar with the terminology, so you might have some trouble understanding what I am talking about:
Is doing this very frequently going to cause any problems?
string queryString = "Some SQL Query";

public void RunQuery() // method name added for illustration
{
    SqlConnection con = new SqlConnection(conString);
    SqlCommand cmd = new SqlCommand(queryString, con);
    cmd.Parameters.AddWithValue("@SomeParam", someValue);
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
I use this template in almost every class I create (usually to get, update, or insert data from/into a DataTable).
Well, is it harmful?
The short answer is yes - it is inefficient to constantly open and close connections. Opening a physical connection is a very expensive process, and managing a connection for the lifetime of its need (which is usually the lifetime of the application or process using it) is fraught with errors.
That is why connection pooling was introduced a long time ago. There is a layer beneath your application that manages the physical opening/closing of connections in a more efficient way. It also reduces the chance that an open connection is lost and stays open forever (which causes lots of problems). Pooling is enabled by default, so you don't need to do anything to use it.
With pooling, you write code to open a connection, use it for the duration of a particular section of code, and then close it. If the pool has an open but unused connection, it is reused rather than a new one being opened. When you close the connection, it is simply returned to the pool and made available to the next open attempt. You should also get familiar with the C# using statement.
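A minimal sketch of that pattern (the table, column, and parameter names are placeholders):
using System.Configuration;
using System.Data.SqlClient;

public static void UpdateSomething(int id)
{
    // "Open late, close early": Dispose returns the underlying
    // physical connection to the pool rather than destroying it.
    using (SqlConnection conn = new SqlConnection(
        ConfigurationManager.ConnectionStrings["Connect"].ConnectionString))
    using (SqlCommand cmd = new SqlCommand(
        "UPDATE dbo.Items SET Touched = GETDATE() WHERE Id = @id", conn)) // assumed table
    {
        cmd.Parameters.AddWithValue("@id", id);
        conn.Open();
        cmd.ExecuteNonQuery();
    } // Dispose here returns the connection to the pool
}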

Fill(DataTable) succeeds in testing, hangs in Production

I have a console batch application which includes a process that uses SqlDataAdapter.Fill(DataTable) to perform a simple SELECT on a table.
private DataTable getMyTable(string conStr)
{
    DataTable tb = new DataTable();
    StringBuilder bSql = new StringBuilder();
    bSql.AppendLine("SELECT * FROM MyDB.dbo.MyTable");
    bSql.AppendLine("WHERE LEN(IdString) > 0");
    try
    {
        string connStr = ConfigurationManager.ConnectionStrings[conStr].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlDataAdapter adpt = new SqlDataAdapter(bSql.ToString(), conn))
            {
                adpt.Fill(tb);
            }
        }
        return tb;
    }
    catch (SqlException)
    {
        throw; // rethrow without resetting the stack trace
    }
    catch (Exception)
    {
        throw;
    }
}
This method is executed synchronously, and ran successfully in several test environments over many months of testing -- both when started from the command line and when started under control of an AutoSys job.
When moved into production, however, the process hung -- at the Fill method, as nearly as we can tell. Worse, instead of timing out, it apparently started spawning new request threads, and after a couple of hours had consumed more than 5 GB of memory on the application server. This affected other active applications, making me very unpopular. There was no exception thrown.
The Connection String is about as plain-vanilla as they come.
"data source=SERVER\INSTANCE;initial catalog=MyDB;integrated security=True;"
Apologies if I use the wrong terms regarding what the SQL DBA reported below, but when we had a trace put on the SQL Server, it showed the Application ID (under which the AutoSys job was running) being accepted as a valid login. The server then appeared to process the SELECT query. However, it never returned a response. Instead, it went into an "awaiting command" status. The request thread appeared to remain open for a few minutes, then disappeared.
The DBA said there was no sign of a deadlock, but that he would need to monitor in real time to determine whether there was blocking.
This only occurs in the production environment; in test environments, the SQL Servers always responded in under a second.
The AutoSys Application ID is not a new one -- it's been used for several years with other SQL Servers and had no issues. The DBA even ran the SELECT query manually on the production SQL server logged in as that ID, and it responded normally.
We've been unable to reproduce the problem in any non-production environment, and hesitate to run it in production without a server admin standing by to kill the process. Our security requirements limit my access to view server logs and processes, and I usually have to engage another specialist to look at them for me.
We need to solve this problem sooner or later. The amount of data we're looking at is currently only a few rows, but will increase over the next few months. From what's happening, my best guess is that it involves communication and/or security between the application server and the SQL server.
Any additional ideas or items to investigate are welcome. Thanks everyone.
This may be tied to permissions. SQL Server sometimes does odd things instead of giving a proper error message.
My suggestion, and this might improve performance anyway, is to write a stored procedure on the server side that executes the SELECT, and call that instead. This way the DBA can ensure you have proper access to the stored procedure without allowing direct access to the table (if for some reason that is being blocked), plus you may see a slight performance boost.
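A minimal sketch of calling such a procedure, assuming a hypothetical name of dbo.GetMyTable for the wrapped SELECT:
using System.Data;
using System.Data.SqlClient;

private DataTable getMyTableViaProc(string connStr)
{
    DataTable tb = new DataTable();
    using (SqlConnection conn = new SqlConnection(connStr))
    using (SqlCommand cmd = new SqlCommand("dbo.GetMyTable", conn)) // assumed procedure name
    {
        cmd.CommandType = CommandType.StoredProcedure;
        using (SqlDataAdapter adpt = new SqlDataAdapter(cmd))
        {
            adpt.Fill(tb); // Fill opens and closes the connection itself
        }
    }
    return tb;
}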
Though it may be caused by some strange permissions/ADO.NET issue as mentioned by @user1895086, I'd nonetheless recommend rechecking a few things one more time:
Ensure that the query run manually by the DBA and the one executed in your app are the same - either hardcode it or at least log it just before running. It is better to be safe than sorry.
Try to select only a few rows - it is always a good idea not to select an entire table if you can avoid it, and in our case a SELECT TOP 1 (or 100) query may not exhibit such problems. Perhaps there is just much more data than you think and ADO.NET dutifully tries to load all those rows. Or perhaps not.
Try a SqlDataReader to be sure that SqlDataAdapter is not causing any issues - yes, the adapter uses a data reader internally, but we would at least exclude those additional operations from the list of suspects (see the sketch after this list).
Try to get hold of the dump with those 5 GB of memory - analyzing memory dumps is not a trivial task, but it shouldn't be too difficult to see what is eating those hefty chunks of memory. I somehow doubt that ADO.NET would just spawn a lot of additional objects for no reason.
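A minimal sketch combining the TOP limit and the SqlDataReader suggestions (the 100-row cap and 30-second timeout are illustrative choices, not part of the original code):
using System.Data;
using System.Data.SqlClient;

private DataTable getMyTableCapped(string connStr)
{
    DataTable tb = new DataTable();
    const string sql = "SELECT TOP (100) * FROM MyDB.dbo.MyTable WHERE LEN(IdString) > 0";
    using (SqlConnection conn = new SqlConnection(connStr))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.CommandTimeout = 30; // fail fast instead of hanging indefinitely
        conn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            tb.Load(rdr); // no adapter involved
        }
    }
    return tb;
}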

"open/close" SqlConnection or keep open?

I have my business-logic implemented in simple static classes with static methods. Each of these methods opens/closes SQL connection when called:
public static void DoSomething()
{
    using (SqlConnection connection = new SqlConnection("..."))
    {
        connection.Open();
        // ...
        connection.Close();
    }
}
But I think passing the connection object around and avoiding opening and closing a connection saves performance. I made some tests a long time ago with the OleDbConnection class (not sure about SqlConnection), and it definitely helped to work like this (as far as I remember):
// pass the connection object into the method
public static void DoSomething(SqlConnection connection)
{
    bool openConn = (connection.State == ConnectionState.Open);
    if (!openConn)
    {
        connection.Open();
    }
    // ....
    if (!openConn)
    {
        connection.Close(); // only close what this method opened
    }
}
So the question is - should I choose method (a) or method (b)? I read in another Stack Overflow question that connection pooling handles this performance concern for me, so I don't have to bother at all...
PS. It's an ASP.NET app - connections exist only during a web request. Not a Windows app or service.
Stick to option a.
The connection pooling is your friend.
Use Method (a), every time. When you start scaling your application, the logic that deals with the state will become a real pain if you do not.
Connection pooling does what it says on the tin. Just think of what happens when the application scales, and how hard it would be to manually manage the connection open/close state. The connection pool does a fine job of handling this automatically. If you're worried about performance, think about some sort of memory cache mechanism so that nothing gets blocked.
Always close connections as soon as you are done with them, so the underlying database connection can go back into the pool and be available for other callers. Connection pooling is pretty well optimised, so there's no noticeable penalty for doing so. The advice is basically the same as for transactions - keep them short and close them when you're done.
It gets more complicated if you're running into MSDTC issues by using a single transaction around code that uses multiple connections, in which case you actually do have to share the connection object and only close it once the transaction is done.
However you're doing things by hand here, so you might want to investigate tools that manage connections for you, like DataSets, Linq to SQL, Entity Framework or NHibernate.
Disclaimer: I know this is old, but I found an easy way to demonstrate this fact, so I'm putting in my two cents worth.
If you're having trouble believing that the pooling is really going to be faster, then give this a try:
Add the following somewhere:
using System.Diagnostics;

public static class TestExtensions
{
    public static void TimedOpen(this SqlConnection conn)
    {
        Stopwatch sw = Stopwatch.StartNew();
        conn.Open();
        Console.WriteLine(sw.Elapsed);
    }
}
Now replace all calls to Open() with TimedOpen() and run your program. For each distinct connection string you have, the console (output) window will show a single long-running open and a bunch of very fast opens.
If you want to label them, you can add new StackTrace(true).GetFrame(1) + to the call to WriteLine.
There is a distinction between physical and logical connections. DbConnection is a kind of logical connection, and it uses an underlying physical connection (to Oracle, in my case). Closing/opening the DbConnection doesn't hurt your performance, but it makes your code clean and stable - connection leaks become impossible.
Also remember that some db servers limit the number of parallel connections - taking that into account, it is necessary to keep your connections very short-lived.
The connection pool frees you from connection state checking - just open, use, and immediately close them.
Normally you should keep one connection per transaction (no parallel work).
E.g. when a user executes a charge action, your application needs to find the user's balance first and then update it; both steps should use the same connection (see the sketch below).
Even though ADO.NET has its connection pool and the cost of dispatching a connection is very low, reusing one connection for the transaction is the better choice.
Why not keep only one connection for the whole application?
Because a connection blocks while you execute a query or command, which would mean your application could only do one db operation at a time - very poor performance.
One more issue is that your application would always hold a connection, even when the user has merely opened it and done nothing. If many users open your application, the db server will soon exhaust its connection resources, even while your users have not done anything.
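A minimal sketch of the charge example above, assuming a hypothetical dbo.Accounts table with UserId and Balance columns:
using System.Data.SqlClient;

public static void Charge(string connStr, int userId, decimal amount)
{
    using (SqlConnection conn = new SqlConnection(connStr))
    {
        conn.Open();
        // One connection, one transaction: the read and the update
        // either both happen or neither does.
        using (SqlTransaction tx = conn.BeginTransaction())
        {
            decimal balance;
            using (SqlCommand get = new SqlCommand(
                "SELECT Balance FROM dbo.Accounts WHERE UserId = @id", conn, tx))
            {
                get.Parameters.AddWithValue("@id", userId);
                balance = (decimal)get.ExecuteScalar(); // assumes the row exists
            }

            if (balance < amount)
            {
                tx.Rollback(); // illustrative policy: refuse to overdraw
                return;
            }

            using (SqlCommand set = new SqlCommand(
                "UPDATE dbo.Accounts SET Balance = Balance - @amt WHERE UserId = @id", conn, tx))
            {
                set.Parameters.AddWithValue("@amt", amount);
                set.Parameters.AddWithValue("@id", userId);
                set.ExecuteNonQuery();
            }

            tx.Commit();
        }
    }
}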

OleDbException System Resources Exceeded

The following code executes a simple insert command. If it is called 2,000 times consecutively (to insert 2,000 rows) an OleDbException with message = "System Resources Exceeded" is thrown. Is there something else I should be doing to free up resources?
using (OleDbConnection conn = new OleDbConnection(connectionString))
using (OleDbCommand cmd = new OleDbCommand(commandText, conn))
{
    conn.Open();
    cmd.ExecuteNonQuery();
}
The "System Resources Exceeded" error is not coming from the managed code; it's coming from you overwhelming your database (Jet?).
You are opening way too many connections, way too fast...
Some tips:
Avoid round trips by not opening a new connection for every single command; perform the inserts over a single connection.
Ensure that database connection pooling is working. (Not sure if that works with OLEDB connections.)
Consider using a more optimized way to insert the data.
Have you tried this?
using (OleDbConnection conn = new OleDbConnection(connstr))
{
    conn.Open(); // open once, outside the loop
    while (IHaveData)
    {
        using (OleDbCommand cmd = new OleDbCommand(commandText, conn))
        {
            cmd.ExecuteNonQuery();
        }
    }
}
I tested this code out with an Access 2007 database with no exceptions (I went as high as 13,000 inserts).
However, your original version is terribly slow because it creates a connection every time. Keeping the using (connection) outside the loop, as above, goes much faster.
In addition to the above (connecting to the database only once), I would also make sure you're closing and disposing of your connections. Most objects in C# are memory-managed, but connections and streams hold resources that don't always have that luxury; if such objects aren't disposed of, they are not guaranteed to be cleaned up, which can leave the connection open for the life of your program.
Also, if possible, I'd look into using transactions. I can't tell what you're using this code for, but OleDbTransactions are useful when inserting and updating many rows in a database (see the sketch below).
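A minimal sketch of that transaction idea, reusing the connectionString and commandText variables from the question:
using System.Data.OleDb;

using (OleDbConnection conn = new OleDbConnection(connectionString))
{
    conn.Open();
    // One transaction around the whole batch: committing once is far
    // cheaper than 2,000 implicit autocommits.
    using (OleDbTransaction tx = conn.BeginTransaction())
    using (OleDbCommand cmd = new OleDbCommand(commandText, conn, tx))
    {
        for (int i = 0; i < 2000; i++)
        {
            cmd.ExecuteNonQuery();
        }
        tx.Commit();
    }
}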
I am not sure about the specifics, but I have run across a similar problem. We utilize an Access database with IIS to serve our clients. We do not have very many clients, but a lot of connections are opened and closed during a single session. After about a week of work, we receive the same error and all connection attempts fail. To correct the problem, all we had to do was restart the worker processes.
After some research, I found (of course) that Access does not perform well in this environment. Resources do not get released correctly, and over time the executable runs out of them. To solve this problem, we are going to move to an Oracle database. If this does not fix the problem, I will keep you updated on my findings.
This could be occurring because you are not disposing of the Connection and Command objects you create. Always dispose of the object at the end:
cmd.Dispose();
