I'm using EF 4, and I'm stumped on another quirk... Basically, I have a fairly simple stored procedure that's responsible for retrieving data from SQL and returning a complex type. I have the stored procedure added to my model via a function import. It's more or less in the following structure.
using (ModelContainer context = GetNewModelContainer())
{
return context.GetSummary(id, startDate, endDate, type).ToList();
}
I should mention that the code above executes over a remote SQL connection. It takes nearly 10 minutes to execute. However, using SQL Server Management Studio over the remote connection, the stored procedure executes almost instantaneously.
There are only 100 records or so that are returned, and each record has approximately 30 fields.
When I run the code above locally (no remote connection) against a backup of the customer's database, it executes without any delay.
I'm stumped on what could be causing this performance hit. Ten minutes is unacceptable. I don't think it's the stored procedure itself. Could it be serialization due to the remote connection? Any thoughts on how I can track down and correct the culprit?
The symptoms you are describing are those usually associated with an incorrectly cached query plan (due to parameter sniffing).
Ensure your statistics are up to date, and rebuild indexes if they are fragmented.
The canonical reference is: Slow in the Application, Fast in SSMS? An absolutely essential read.
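One known trigger covered in Erland's article: SqlClient connections run with SET ARITHABORT OFF while SSMS defaults to ON, so the application and SSMS can compile and cache different plans for the same procedure. A minimal diagnostic sketch (not a fix) that reproduces the SSMS setting from code; the connection string and parameter values are placeholders, and dbo.GetSummary is assumed from the question:

using System;
using System.Data;
using System.Data.SqlClient;

string connectionString = "<your remote connection string>"; // placeholder
int id = 1, type = 1;                                        // placeholder values
DateTime startDate = DateTime.Today.AddMonths(-1), endDate = DateTime.Today;

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();

    // SSMS runs with SET ARITHABORT ON; SqlClient leaves it OFF, which is
    // enough for the two sessions to get different cached plans.
    using (var set = new SqlCommand("SET ARITHABORT ON;", connection))
    {
        set.ExecuteNonQuery();
    }

    using (var command = new SqlCommand("dbo.GetSummary", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@id", id);
        command.Parameters.AddWithValue("@startDate", startDate);
        command.Parameters.AddWithValue("@endDate", endDate);
        command.Parameters.AddWithValue("@type", type);

        using (var reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume the ~100 rows */ }
        }
    }
}

If the call is suddenly fast with ARITHABORT ON, the culprit is the cached plan (parameter sniffing), not serialization or the network; fix it with fresh statistics, OPTION (RECOMPILE), or OPTIMIZE FOR rather than leaving the SET in place.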
Possible useful SO links:
Option Recompile makes query fast - good or bad?
Strange problem with SQL Server procedure execution plan
SQL Server function intermittent performance issues
Why is some sql query much slower when used with SqlCommand?
I wrote a program modeled after the accepted answer for Throttling asynchronous tasks. See code below. My program is essentially an ad-hoc DB "text search" tool. It runs on a Win 7 client machine. The program composes a large collection of SELECT statements (hundreds to thousands of them) and sends them to a remote DB server (The Win 7 client and the DB server are on the same A.D. domain. The DB server searched is only ever the read-only secondary server in an Always On Availability Group). Each invocation of the Download method creates 1-to-many SELECT queries (one SELECT per table column being searched) for one DB table. The Download method is invoked 1-to-thousands of times (once per table being searched).
TPL Dataflow works well so long as I limit the search to no more than 60-70 tables. More than that and it chokes (I think) the SQL Server database and/or the DB server machine. I've played with various MaxDegreeOfParallelism and BoundedCapacity values in an attempt to throttle the client. With MaxDegreeOfParallelism = 8 (the number of processors on my client machine), I can see the CPU and disk peg on the DB server machine. With MaxDegreeOfParallelism = 1 and BoundedCapacity = 1, the CPU and disk are OK on the DB server machine, but in the act of submitting the queries, my database read code:
SqlDataReader sqlDataReader = await sqlCommand.ExecuteReaderAsync();
dataTable_TableDataFromFieldQuery.Load(sqlDataReader);
eventually throws an exception "server has become unresponsive"
What tools can I use on the DB server to identify the choke point? Even assuming that I need to improve my queries, what do I look for, and where?
Code from the other S.O. question modified for my use
var downloader = new TransformBlock<string, DataTable>(
    tableName => Download(tableName),
    new ExecutionDataflowBlockOptions
    {
        MaxDegreeOfParallelism = 4, // tried various values 1-5
        BoundedCapacity = 4         // tried various values 1-5
    });
var buffer = new BufferBlock<DataTable>();
downloader.LinkTo(buffer);
foreach(var tableName in tableNames)
await downloader.SendAsync(tableName);
downloader.Complete();
await downloader.Completion;
IList<DataTable> responses;
if (buffer.TryReceiveAll(out responses))
{
//process all DataTables
}
This sounds like a bit of a mess. I'm not sure if you know about these tools, but I think you should become very familiar with them.
The first is the SQL Server Profiler
Microsoft SQL Server Profiler is a graphical user interface to SQL Trace for monitoring an instance of the Database Engine or Analysis Services. You can capture and save data about each event to a file or table to analyze later. For example, you can monitor a production environment to see which stored procedures are affecting performance by executing too slowly.
And the Database Engine Tuning Advisor
The Microsoft Database Engine Tuning Advisor (DTA) analyzes databases and makes recommendations that you can use to optimize query performance. You can use the Database Engine Tuning Advisor to select and create an optimal set of indexes, indexed views, or table partitions without having an expert understanding of the database structure or the internals of SQL Server.
The Profiler, if used strategically, should be able to help narrow down the problem. The Tuning Advisor should be able to loosely aid in writing better queries.
Though without knowing exactly what you are doing and why, it's hard to give a better answer. However, my gut feeling is that this is probably locking/blocking, memory pressure, or excessive threads. Truthfully, a well-planned and well-designed database should be able to eat this workload and not skip a beat.
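To test the locking/blocking part of that hunch without firing up Profiler, you can poll the DMVs from a second connection while the workload runs. A rough sketch (the connection string is a placeholder; the embedded query also works as-is in SSMS):

using System;
using System.Data.SqlClient;

// List requests that are currently blocked or have waited more than a second.
const string diagnosticsQuery = @"
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
  AND (r.blocking_session_id <> 0 OR r.wait_time > 1000);";

using (var connection = new SqlConnection("<connection string to the DB server>"))
using (var command = new SqlCommand(diagnosticsQuery, connection))
{
    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine("spid {0} blocked by {1}, wait {2} ({3} ms)",
                reader["session_id"], reader["blocking_session_id"],
                reader["wait_type"], reader["wait_time"]);
        }
    }
}

The wait_type column is the quickest pointer: LCK_* waits mean blocking, PAGEIOLATCH_* means disk, and RESOURCE_SEMAPHORE means memory grants.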
Lastly (and not knowing your level of expertise here), I would do a lot of research into indexing, query performance, table normalization, and locking hints.
I have two databases (DB1 & DB2; both DBs are the same, DB2 having been created from a backup of DB1). When I run a stored procedure SP1 on both DBs, it takes approximately 2 seconds to give me an output (SELECT statements) on either one.
Now the problem: when I point a service at these DBs and use the DataAdapter.Fill method, it consistently gives me different times on the two (54-63 seconds on DB1 and 42-44 seconds on DB2). Note that I'm using the same service against both DBs, so it can't be the service's behaviour/performance. Now my question is:
What could be the reason for this? Any suggestions are welcome as to what I should look into.
Helping info:
Both DBs are on different servers (identical configuration), but since executing the SP in SQL Server Management Studio takes the same time on both DBs, I ruled out the possibility of DB server performance.
Network delay could be a factor, but it's highly unlikely, as both servers are on the same network and in fact in the same physical location. This is my last option to check.
Some other services are using SqlDependency on DB1 and consistently fill DataAdapters; could this be the reason for my DataAdapter Fill method slowing down? (Less likely, I'm guessing.)
As requested in the comments, here is the code that fills the DataSet. (The original post showed it as a screenshot, which is not reproduced here.)
PS: The times mentioned above are the execution time of the DataAdapter.Fill line itself, which was highlighted in that screenshot.
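Since the screenshot is missing, the following is only a minimal sketch of the call being described, with placeholder names (dbo.SP1 is taken from the question; everything else is assumed):

using System.Data;
using System.Data.SqlClient;

var dataSet = new DataSet();

using (var connection = new SqlConnection("<connection string to DB1 or DB2>"))
using (var command = new SqlCommand("dbo.SP1", connection))
{
    command.CommandType = CommandType.StoredProcedure;
    using (var adapter = new SqlDataAdapter(command))
    {
        adapter.Fill(dataSet); // <-- the line whose time differs (42-63 seconds)
    }
}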
That sounds very much like a query plan issue.
Erland Sommarskog has written an excellent article about this kind of problem:
Slow in the Application, Fast in SSMS?.
My first guess would be "The Default Settings", but it might be one of the other issues, too.
Have you tried not using CommandType.StoredProcedure and just running it as a line of SQL:
"exec dbname.dbo.storedprocname params".
It's a bit more work, because you'll have to loop around the parameters to add them to the string at the end, but it's a plain SQL string; it doesn't care what you are doing and isn't doing anything funny behind the scenes. The times should be similar. If this also fails, try checking things like the indexes on the DB tables the stored procedure is using.
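A sketch of that suggestion. Rather than concatenating raw values into the string (which invites SQL injection), you can keep the parameters and still send the call as an ad-hoc batch instead of an RPC; the proc and parameter names are placeholders:

using System.Data;
using System.Data.SqlClient;

using (var connection = new SqlConnection("<connection string>"))
using (var command = new SqlCommand(
    "exec dbname.dbo.storedprocname @param1, @param2", connection))
{
    command.CommandType = CommandType.Text; // the default, shown for emphasis
    command.Parameters.AddWithValue("@param1", "someValue");
    command.Parameters.AddWithValue("@param2", 42);

    connection.Open();
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read()) { /* consume rows */ }
    }
}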
Step one - rebuild or reorganize your indexes. This is usually the most common performance issue with SQL Server and is easy to fix. Restarting SQL Server sometimes also matters.
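If you'd rather script the maintenance than click through SSMS, a hedged sketch of running it from code (the table name and connection string are placeholders; REORGANIZE is the lighter online option, REBUILD the heavier, more thorough one):

using System.Data.SqlClient;

using (var connection = new SqlConnection("<connection string>"))
using (var command = new SqlCommand(
    "ALTER INDEX ALL ON dbo.YourTable REORGANIZE;", connection))
{
    command.CommandTimeout = 0; // maintenance can run long; 0 = no timeout
    connection.Open();
    command.ExecuteNonQuery();
}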
Background - I have a website & a windows scheduled job which are a part of an MSI and get installed on the same server. The website is used by the end-user to create some rules and the job is scheduled to run on a daily basis to create flat files for the rules created by end-user. The actual scenarios are way more complex than explained above.
Problem (with the website) - The website works fine most of the time, but sometimes it just won't load the rule creation page, and the exception being logged is 'query timeout or SQL server not responding'.
Problem (with the job) - The job behaves just like the website and sometimes fails with the same exception: 'query timeout or SQL server not responding'.
What I've tried -
I've added 'Connection Timeout' to the SQL connection string - doesn't seem to help with the logging - which would tell me if it was a SQL connection timeout or a query timeout.
I've also run the stored procedures which are called by the website & job - and ALL the stored procedures complete well within the business defined timeout of 3600 seconds. The stored procedures actually complete in under a minute.
I've also run SQL Profiler - but the traces didn't help me either; though I could see a lot of transactions, I couldn't pinpoint anything wrong with the server.
What I seek - Are there any other reasons which could cause this? Is there something which I could look for?
Technology - SQL Server 2008 R2, ASP.Net, C#.Net
Restrictions - The code details can't be revealed due to client confidentiality, though I'm open to questions - which I'd try to answer keeping client confidentiality in mind.
Note - There is already a query timeout (3600s) & connection timeout (30s) defined in the application config file.
So, I tried a few things here and there and was able to figure out the root cause -
The SQL stored procedure was joining 2 tables from 2 different databases - one of which had a varying number of records, which were being updated/inserted by a different (3rd party) job. Since the 3rd party job and my job did not run at the same time, no issue came up from table locks, but the sheer volume of records caused my job to time out when my timeout was not enough.
But, as I said, I had given the business standard command timeout of 3600 seconds - somehow Enterprise Library was overriding my custom timeout with its own default command timeout of 30s, and hence the C# code would throw an exception even before the stored procedure had finished executing (see the sketch after the list below).
What I did - This may be of help for some of us -
I removed the reference of Enterprise Library from the project
Cleaned up my solution and checked into SVN.
Then cleaned up SVN as well.
I didn't build the application after removing Enterprise Library reference - obviously it wouldn't build due to reference errors.
After that, I took a clean checkout and added Enterprise Library again.
Now it seems to work even with varying number of records.
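For anyone hitting the same thing: another way to guard against a library silently falling back to its 30-second default is to set the timeout explicitly on every command. A sketch using Enterprise Library's data access block (the proc name is a placeholder; adjust to your configuration):

using System.Data;
using System.Data.Common;
using Microsoft.Practices.EnterpriseLibrary.Data;

Database db = DatabaseFactory.CreateDatabase();
DbCommand command = db.GetStoredProcCommand("dbo.CreateRuleFiles"); // placeholder name

// Set it explicitly so no library default (30s) can override the
// business-defined 3600s query timeout.
command.CommandTimeout = 3600;

DataSet result = db.ExecuteDataSet(command);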
Just had the same problem yesterday: a huge query taking 18 seconds in SQL Server but timing out in C# even after 200 seconds. I rebooted my computer, disconnected the DB, and even disconnected the server... nothing changed.
After reading some threads, I noticed a common theme about indexes. So I removed some indexes in my database, put some back, and voilà! Back to normal.
Here's what I think might have happened. While I was running some tests, I probably still had some zombie connections left, and my colleague was creating some tables in the DB at the same time and linking them to tables used in my stored procedure. Even though the newly created tables had nothing to do with the stored procedure, having them linked to the other ones seems to have messed up the indexes. Why did only the C# side misbehave? My guess is there is a cache in SQL Server that is not hit when you connect from somewhere other than SQL Server directly.
N.B. In my case, just altering the stored procedure had no effect at all, even though it was a common "solution" in some threads.
Hope this helps if someone has the same problem. If anyone can find a better solution/explanation, please share!!!
Cheers,
I had a similar problem with MSSQL and did not find any particular reason for this unstable behavior. My solution was to have the DB re-indexed with
sp_updatestats
every hour.
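Strictly speaking, sp_updatestats refreshes statistics rather than rebuilding indexes, but stale statistics produce exactly this kind of unstable plan behavior. It is normally scheduled as a SQL Agent job; if you need to trigger it from application code instead, a sketch (the connection string is a placeholder):

using System.Data.SqlClient;

using (var connection = new SqlConnection("<connection string>"))
using (var command = new SqlCommand("EXEC sp_updatestats;", connection))
{
    command.CommandTimeout = 0; // can take a while on large databases
    connection.Open();
    command.ExecuteNonQuery();
}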
You can use WITH RECOMPILE in your stored procedure definition to avoid the 'query timeout or SQL server not responding' error.
Here's the Microsoft article:
http://technet.microsoft.com/en-us/library/ms190439.aspx
Also see this for reference:
SQL Server: Effects of using 'WITH RECOMPILE' in proc definition?
Sample Code:
CREATE PROCEDURE [dbo].[sp_mystoredproc] (@param1 varchar(20), @param2 int)
WITH RECOMPILE
AS
... proc code ...
There is a stored procedure that does very heavy processing. It regenerates SQL insert scripts while iterating over about 30 tables, and after that process finishes, it inserts those scripts into a table named X. The whole thing takes about 20 minutes, which is unacceptable. The last thing you need to know is that the procedure is called by a web method created in .NET.
PS: Tables have indexes.
Here are my questions
I want to use multi-threading to solve the problem, but I'm not sure it would help. I would slice the SP into 5 pieces and call them from 5 different threads at the same time. Would that help decrease the overall time between the start and the end of the processing?
Potentially, this would work as there will be no blocking or waiting for other sections of the stored procedure to complete. Obviously, you are still constrained by the physical resources of the server. To be honest, the only way you will tell for sure is to do it and measure the performance.
I would ensure that you analyse the dependencies of each part of the stored procedure accurately, then do it again just to make sure.
Good luck.
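If you do try the split, a sketch of fanning the pieces out from C# (the piece names are hypothetical; each task gets its own connection, since a single SqlConnection cannot run concurrent commands):

using System.Data;
using System.Data.SqlClient;
using System.Linq;
using System.Threading.Tasks;

string[] procedures =
{
    "dbo.GenerateScripts_Part1", "dbo.GenerateScripts_Part2",
    "dbo.GenerateScripts_Part3", "dbo.GenerateScripts_Part4",
    "dbo.GenerateScripts_Part5"
};

Task[] tasks = procedures.Select(procName => Task.Run(async () =>
{
    using (var connection = new SqlConnection("<connection string>"))
    using (var command = new SqlCommand(procName, connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.CommandTimeout = 0; // long-running batch work
        await connection.OpenAsync();
        await command.ExecuteNonQueryAsync();
    }
})).ToArray();

await Task.WhenAll(tasks); // completes when all five pieces have finished

Note that if several threads share a single connection instead, the commands serialize on it and the procedures will appear to run one after the other.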
Have you taken into account the performance hit you will get with a large number of inserts into tables with indexes? If you haven't already done so, try the insert part of the process on tables with no indexes and measure any gains. If it proves beneficial, script the index creation and run it after the inserts.
Threading may help, but my experience is that optimizing the SQL process is likely to give more benefit. I'll be interested to hear your findings.
I am also trying to call the same SP from a console application with multiple threads, and I don't see my SP running in parallel. Even though I call it multiple times using threads, it executes one call after the other. I am using SQL Server 2008 (Express edition) and have also configured parallel processing in SQL Server 2008, but it is not running in parallel. Any ideas or suggestions would be greatly appreciated.
Thanks
Venkat
I have a webpage that takes 10 minutes to run one query against a database, but the same query returns in less than a second when run from SQL Server Management Studio.
The webpage is just firing SQL at the database that is executing a stored procedure, which in turn is performing a pretty simple select over four tables. Again the code is basic ADO, setting the CommandText on an SqlCommand and then performing an ExecuteReader to get the data.
The webpage normally works quickly, but when it slows down, the only way to speed it back up is to defragment the indexes on the tables being queried (different ones at different times), which doesn't seem to make sense when the same query executes so quickly manually.
I have had a look at this question but it doesn't apply as the webpage is literally just firing text at the database.
Does anyone have any good ideas why this is going slow one way and not the other?
Thanks
I would suspect parameter sniffing.
The cached execution plan used for your application's connection probably won't be usable by your SSMS connection due to different set options so it will generate a new different plan.
You can retrieve the cached plans for the stored procedure by using the query below. Then compare to see if they are different (e.g. is the slow one doing index seeks and bookmark lookups at a place where the other one does a scan?)
USE YourDatabase;

SELECT *
FROM sys.dm_exec_cached_plans
CROSS APPLY sys.dm_exec_sql_text(plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
CROSS APPLY sys.dm_exec_plan_attributes(plan_handle) AS epa
WHERE st.objectid = OBJECT_ID('YourProcName')
  AND epa.attribute = 'set_options';
Is there any difference between the command text of the query in the app and the query you are executing manually? Since you said that reindexing helps performance (which also updates statistics), it sounds like it may be getting stuck on a bad execution plan.
You might want to run a SQL trace and capture the Showplan XML event to see what the execution plan looks like, and also capture the statement-completed event (though this can slow the server down if a lot of statements are coming through the system, so be careful) to be sure the statement sent to SQL Server is the same one you are running manually.