Unexplained timeouts when running stored procedures - C#

Background - I have a website & a windows scheduled job which are a part of an MSI and get installed on the same server. The website is used by the end-user to create some rules and the job is scheduled to run on a daily basis to create flat files for the rules created by end-user. The actual scenarios are way more complex than explained above.
Problem (with the website) - The website works fine most of the time, but sometimes it just won't load the rule creation page, and the exception being logged is 'query timeout or SQL server not responding'.
Problem (with the job) - The job behaves just like the website and sometimes fails with the same exception: 'query timeout or SQL server not responding'.
What I've tried -
I've added 'Connection Timeout' to the SQL connection string - it doesn't seem to help with the logging, which would have told me whether it was a SQL connection timeout or a query timeout.
I've also run the stored procedures which are called by the website & job - and ALL the stored procedures complete well within the business defined timeout of 3600 seconds. The stored procedures actually complete in under a minute.
I've also run SQL Profiler - but the traces didn't help me either; I could see a lot of transactions, but nothing I could point to as being wrong with the server.
What I seek - Are there any other reasons which could cause this? Is there something which I could look for?
Technology - SQL Server 2008 R2, ASP.Net, C#.Net
Restrictions - The code details can't be revealed due to client confidentiality, though I'm open to questions - which I'd try to answer keeping client confidentiality in mind.
Note - There is already a query timeout (3600s) & a Connection Timeout
(30s) defined in the application config file.

So, I tried a few things here and there and was able to figure out the root cause -
The SQL stored procedure was joining two tables from two different databases, one of which had a varying number of records; those records were being updated/inserted by a separate (third-party) job. Since that job and mine did not run at the same time, table locks were not an issue, but the sheer volume of records caused my job to time out whenever my timeout was not enough.
But, as I said, I had set the business-standard command timeout of 3600 seconds - yet somehow Enterprise Library was overriding my custom timeout with its own default command timeout of 30s, so the C# code would throw an exception even before the stored procedure had finished executing.
What I did - This may be of help for some of us -
I removed the reference of Enterprise Library from the project
Cleaned up my solution and checked into SVN.
Then cleaned up SVN as well.
I didn't build the application after removing Enterprise Library reference - obviously it wouldn't build due to reference errors.
After that, I took a clean checkout and added Enterprise Library again.
Now it seems to work even with varying number of records.
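As an alternative to removing and re-adding the reference, the timeout can be pinned in code so that Enterprise Library's 30s default can never apply. This is only a sketch, assuming an Enterprise Library 5-style Data Access Application Block; the procedure name usp_CreateRuleFiles is hypothetical:

```csharp
// Sketch: explicitly setting the command timeout on an Enterprise Library
// command so its default 30s CommandTimeout cannot override the intended
// 3600s. "usp_CreateRuleFiles" is a hypothetical stored procedure name.
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

public static class RuleFileRunner
{
    public static void Run()
    {
        Database db = DatabaseFactory.CreateDatabase();
        using (DbCommand cmd = db.GetStoredProcCommand("usp_CreateRuleFiles"))
        {
            cmd.CommandTimeout = 3600; // business-defined timeout, in seconds
            db.ExecuteNonQuery(cmd);
        }
    }
}
```

Setting CommandTimeout on the DbCommand itself takes precedence over whatever default the library applies, which is a more robust fix than relying on a clean checkout.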

Just had the same problem yesterday. I had a huge query taking 18 seconds in SQL Server but timing out in C# even after 200 seconds. I rebooted my computer, disconnected the DB, and even disconnected the server... nothing changed.
After reading some threads, I noticed a common theme about indexes. So I removed some indexes in my database, put some back, and voilà! Back to normal.
Here's what I think may have happened. While I was running some tests, I probably still had some zombie connections left, and a colleague was creating tables in the DB at the same time and linking them to tables used in my stored procedure. Even though the newly created tables had nothing to do with the stored procedure, linking them to the other ones seems to have messed up the indexes. Why did only the C# side misbehave? My guess is that SQL Server keeps some cached state that behaves differently when you connect from somewhere other than SQL Server directly.
N.B. In my case, just altering the stored procedure didn't have any effect at all, even if it was a common "solution" among some threads.
Hope this helps if someone has the same problem. If anyone can find a better solution/explanation, please share!!!
Cheers,

I had a similar problem with MS SQL Server and did not find any particular reason for this unstable behavior. My solution was to update the database's statistics with
sp_updatestats
every hour.

You can use WITH RECOMPILE in your stored procedure definition to avoid the 'query timeout or SQL server not responding' error.
Here's the Microsoft article:
http://technet.microsoft.com/en-us/library/ms190439.aspx
Also see this for reference:
SQL Server: Effects of using 'WITH RECOMPILE' in proc definition?
Sample Code:
CREATE PROCEDURE [dbo].[sp_mystoredproc] (@param1 varchar(20), @param2 int)
WITH RECOMPILE
AS
... proc code ...

DataAdapter.Fill performance anomaly

I have two databases (DB1 & DB2; both are identical - DB2 was created from a backup of DB1). When I run a stored procedure SP1 on both DBs, it takes approximately 2 seconds to give me an output (select statements) on both.
Now the problem is that when I point a service at these DBs and use the DataAdapter.Fill method, it consistently gives different times (54-63 seconds on DB1 and 42-44 seconds on DB2). Note that I'm using the same service for both DBs, so it can't be the service's behavior/performance. Now my question is:
What could be the reason for this? What should I look into? Any suggestions are welcome.
Helping Info:
Both DBs are on different servers (identical configuration), but since executing the SP in SQL Server Management Studio takes the same time on both DBs, I ruled out the possibility of DB server performance.
Network delay could be a factor, but it is highly unlikely, as both servers are on the same network and in fact in the same physical location. This is my last option to check.
Some other services are using SqlDependency on DB1 and consistently fill DataAdapters - could this be the reason for my DataAdapter.Fill method to slow down? (less likely, I'm guessing)
As requested in the comments, below is the code that fills the DataSet (originally shown as a screenshot, which is not reproduced here):
PS: The time mentioned above is the execution time of the DataAdapter.Fill line.
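Since the original screenshot is not available, here is a minimal sketch of the pattern being timed; all identifiers (connection string, procedure name) are illustrative:

```csharp
// Minimal sketch of the DataAdapter.Fill pattern being timed.
// Identifiers are illustrative; the original code was only shown
// as a screenshot.
using System.Data;
using System.Data.SqlClient;

public static class SummaryLoader
{
    public static DataSet RunSp1(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.SP1", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            using (var adapter = new SqlDataAdapter(command))
            {
                var dataSet = new DataSet();
                adapter.Fill(dataSet); // the line whose execution time differs between DBs
                return dataSet;
            }
        }
    }
}
```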
That sounds very much like a query plan issue.
Erland Sommarskog has written an excellent article about this kind of problem,
Slow in the Application, Fast in SSMS?.
My first guess would be "The Default Settings", but it might be one of the other issues, too.
Have you tried not using CommandType.StoredProcedure and just running it as a line of SQL:
"exec dbname.dbo.storedprocname params".
It's a bit more work because you'll have to loop over the parameters to append them to the string, but it's a plain SQL string - it doesn't care what you are doing, and nothing funny happens behind the scenes. The times should be similar; if this still fails, check things like the indexes on the tables the stored procedure uses.
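Concatenating parameter values into the string, as suggested above, invites SQL injection; you can keep CommandType.Text and still pass real parameters. A sketch, reusing the hypothetical "dbname.dbo.storedprocname" from the answer (the parameter name is also illustrative):

```csharp
// Sketch: executing the procedure as an ad-hoc text batch
// (CommandType.Text) while still using parameters, avoiding string
// concatenation. "dbname.dbo.storedprocname" and "@userId" are
// illustrative names.
using System.Data;
using System.Data.SqlClient;

public static class AdHocExec
{
    public static void Run(string connectionString, int userId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "EXEC dbname.dbo.storedprocname @userId", connection))
        {
            command.CommandType = CommandType.Text; // ad-hoc batch, not an RPC call
            command.Parameters.AddWithValue("@userId", userId);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}
```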
Step one - rebuild or reorganize your indexes. This is usually the most common performance issue with SQL Server and is easy to fix. Restarting SQL Server sometimes matters as well.

Rebuild Index Task Breaks Compliation of Stored Procedure

I have a maintenance plan that runs on my SQL Server 2008 server every morning before business hours. It was put in place a few years ago to help with some performance issues. The problem I am seeing is that after the index rebuild finishes, there is a stored procedure in one of the databases that goes from taking nine seconds to run to taking seven minutes.
The solution I have found to fix it is to open SQL Management Studio and run:
EXEC sp_recompile N'stored_proc_name';
EXEC stored_proc_name @userId=579
After I run that, the SP fixes itself and goes back to running under nine seconds.
I've tried a couple of different paths to automate this, but it will only work if I run it from my computer through management studio. I tried to wrap it up in a little C# executable that ran a few minutes after the rebuild index job completes, but that didn't work. I also tried creating a SQL job to run it on the server after the rebuild index job completes, but that didn't work either. It has to be run from management studio.
So, two questions:
How can I stop rebuild index from breaking my SPs, or,
Any ideas on how or why my quick fix will only work in a very specific situation?
Thanks,
Mike
This sounds like standard parameter sniffing / parameter-based query-plan caching. The trick here is usually to use the OPTIMIZE FOR / UNKNOWN hint - either for the specific parameter that is causing the problem, or simply for all parameters. This makes it much less likely that a parameter-value with biased distribution will negatively impact the system for other values. A more extreme option (more useful when using command-text, not so useful when using stored procedures) is to embed the value directly into the TSQL rather than using a parameter. This... has impact, however, and should be used with caution.
In your case, I suspect that adding:
OPTION (OPTIMIZE FOR (@userId UNKNOWN))
to the end of your query will fix it.

Entity Framework stored procedure over remote connection

I'm using EF 4, and I'm stumped on another quirk... Basically, I have a fairly simple stored procedure that's responsible for retrieving data from SQL and returning a complex type. I have the stored procedure added to my model via a function import. It's more or less in the following structure.
using (ModelContainer context = GetNewModelContainer())
{
    return context.GetSummary(id, startDate, endDate, type).ToList();
}
I should mention that the code above executes over a remote SQL connection. It takes nearly 10 minutes to execute. However, using SQL Server Management Studio over the remote connection, the stored procedure executes almost instantaneously.
There are only 100 records or so that are returned, and each record has approximately 30 fields.
When I run the code above locally (no remote connection) against a backup of the customer's database, it executes without any delay.
I'm stumped as to what could be causing this performance hit. Ten minutes is unacceptable. I don't think it's the stored procedure itself. Could it be serialization overhead due to the remote connection? Any thoughts on how I can track down and correct the culprit?
The symptoms you are describing are those usually associated with an incorrectly cached query plan (due to parameter sniffing).
Ensure your statistics are up to date, and rebuild indexes if they are fragmented.
The canonical reference is: Slow in the Application, Fast in SSMS? An absolutely essential read.
Possible useful SO links:
Option Recompile makes query fast - good or bad?
Strange problem with SQL Server procedure execution plan
SQL Server function intermittent performance issues
Why is some sql query much slower when used with SqlCommand?

Transaction commit executes successfully but the work is not getting done

I've encountered a strange problem in Sql Server.
I have a Pocket PC application which connects to a web service, which in turn connects to a database and inserts lots of data. The web service opens a transaction for each Pocket PC that connects to it. Every day at 12 P.M., 15 to 20 people with different Pocket PCs connect to the web service simultaneously and finish the transfer successfully.
But after that, one transaction remains open (visible in Activity Monitor), associated with 4000 exclusive locks. After a few hours they vanish (probably something times out) and some of the transferred data is deleted. Is there a way I can prevent these locks from happening? Or recognize them programmatically and wait for an unlock?
Thanks a lot.
You could run sp_lock and check to see if there are any exclusive locks held on tables you're interested in. That will tell you the SPID of the offending connection, and you can use sp_who or sp_who2 to find more information about that SPID.
Alternatively, the Activity Monitor in Management Studio will give you graphical versions of this information, and will also allow you to kill any offending processes (the kill command will allow you to do the same in a query editor).
You can use SQL Server Profiler to monitor the statements that are occurring, including the begin and end of transactions. There are also some great tools from Microsoft Support that run Profiler and blocking scripts; I'm looking to see if I can find them and will update if I do.
If you have an open transaction you should be able to see this in the activity monitor, so you can check if there are any open transactions before you restart the server.
Edit
It sounds like this problem happens at roughly the same time every day. You will want to turn it on before the problem happens.
I suspect you are doing something wrong in the code: are your command timeouts set to a value large enough for the work to complete, or is an error possibly skipping a COMMIT?
You can inspect what transactions are open by running:
DBCC OPENTRAN
The timeout on your select indicates that the transaction is still open with a lock on at least part of the table.
How are you doing transactions over web services? How / where in your code are you committing the transaction?
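One common way an error ends up skipping a COMMIT is a missing try/catch around the transaction, which leaves it open and holding exclusive locks. A minimal sketch of the safe pattern (table and column names are illustrative):

```csharp
// Sketch: ensuring a transaction is always either committed or rolled
// back, so an exception never leaves an open transaction holding
// exclusive locks. Table/column names are illustrative.
using System.Data.SqlClient;

public static class TransferWriter
{
    public static void Insert(string connectionString, int deviceId)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                try
                {
                    using (var command = new SqlCommand(
                        "INSERT INTO dbo.Transfers (DeviceId) VALUES (@deviceId)",
                        connection, transaction))
                    {
                        command.Parameters.AddWithValue("@deviceId", deviceId);
                        command.ExecuteNonQuery();
                    }
                    transaction.Commit();
                }
                catch
                {
                    transaction.Rollback(); // never leave the transaction open
                    throw;
                }
            }
        }
    }
}
```

The `using` on the transaction also rolls back automatically if it is disposed without a Commit, which covers paths where the catch block itself is bypassed.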
After a lot of testing, I found out that a deadlock was happening. But I couldn't find the reason, as I was just inserting many records into some independent tables.
These links helped a bit, but to no luck:
http://support.microsoft.com/kb/323630
http://support.microsoft.com/kb/162361
I even broke my transactions into smaller ones, but I still got the deadlock. I finally removed the transactions and changed the code to not delete the records from the source database, and didn't get the deadlocks anymore.
As a lesson, I now know that if you have several large transactions executing against the same database at the same time, you are likely to have problems in SQL Server; I don't know about Oracle.

SQL Server Timeout troubleshooting

I have a web service that's been running fine without modification for a couple of years now. Suddenly today it decides that it would not like to function, and throws a SQL timeout:
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
Interestingly, this web service lives on the same server as the database; also, if I pull the query out of a SQL trace and run it in Management Studio, it returns in under a second. But it times out after exactly 30 seconds when called from the web service, without fail. I'm using the Enterprise Library to connect to the database, so I can't imagine that randomly started failing.
I'm not quite sure what could suddenly make it stop working. I've recycled the app pool it's in, and even restarted the SQL process that I saw it was using. Same behavior. Any way I can troubleshoot this?
UPDATE: Mitch nailed it. As soon as I added "WITH RECOMPILE" before the "AS" keyword in the sproc definition, it came back to life. Bravo!
The symptoms you describe are 99.9% certain to be due to an incorrectly cached query plan.
Please see these answers:
Big difference in execution time of stored proc between Managment Studio and TableAdapter
Rule of thumb on when to use WITH RECOMPILE option
which include the advice to rebuild indexes and ensure statistics are up to date as a starting point.
Do you have a regular index maintenance job scheduled and enabled?
The canonical reference is: Slow in the Application, Fast in SSMS?
Rebuild any relevant indexes.
Update statistics, and check the SET options on the query in Profiler; SSMS might be using different connection SET options.
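One concrete SET option that commonly differs: SSMS turns ARITHABORT ON by default, while SqlClient connections leave it OFF, so the two can end up with different cached plans. A sketch of matching the SSMS setting from the application (plain SqlClient assumed; the procedure name is illustrative):

```csharp
// Sketch: issuing SET ARITHABORT ON from the application so it uses
// the same connection setting (and thus the same cached plan) as SSMS.
// "dbo.MyProc" is an illustrative procedure name.
using System.Data;
using System.Data.SqlClient;

public static class ArithAbortDemo
{
    public static void Run(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var setCmd = new SqlCommand("SET ARITHABORT ON;", connection))
            {
                setCmd.ExecuteNonQuery(); // applies for the rest of this session
            }
            using (var command = new SqlCommand("dbo.MyProc", connection))
            {
                command.CommandType = CommandType.StoredProcedure;
                command.ExecuteNonQuery();
            }
        }
    }
}
```

This is mainly a diagnostic: if the call becomes fast with ARITHABORT ON, the real problem is a parameter-sniffed plan, which is better fixed with the OPTIMIZE FOR / RECOMPILE approaches discussed above.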