Update: Looks like the query does not throw any timeout. The connection is timing out.
Here is sample code for executing a query. Sometimes, while executing time-consuming queries, it throws a timeout exception.
I cannot use either of these techniques:
1) Increase the timeout.
2) Run it asynchronously with a callback. This needs to run in a synchronous manner.
Please suggest any other techniques to keep the connection alive while executing a time-consuming query.
private static void CreateCommand(string queryString, string connectionString)
{
    using (SqlConnection connection = new SqlConnection(connectionString))
    {
        SqlCommand command = new SqlCommand(queryString, connection);
        command.Connection.Open();
        command.ExecuteNonQuery();
    }
}
Since you are using ExecuteNonQuery, which does not return any rows, you can try this polling-based approach. It executes the query in an async manner (without a callback), but the application waits (inside a while loop) until the query completes. The snippet is from MSDN. This should solve the timeout problem; please try it out.
But I agree with others that you should think more about optimizing the query to run in under 30 seconds.
IAsyncResult result = command.BeginExecuteNonQuery();
int count = 0;
while (!result.IsCompleted)
{
    Console.WriteLine("Waiting ({0})", count++);
    System.Threading.Thread.Sleep(1000);
}
Console.WriteLine("Command complete. Affected {0} rows.",
    command.EndExecuteNonQuery(result));
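(One caveat, my addition rather than part of the original answer: on .NET Framework versions before 4.5, the Begin/EndExecuteNonQuery pattern requires Asynchronous Processing=true in the connection string, otherwise BeginExecuteNonQuery throws.)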
You should first check your query to see whether it's optimized and isn't somehow running on missing indexes. 30 seconds is a lot for most queries, even on large databases, if they are properly tuned. If you have solid proof from the query plan that the query can't execute any faster than that, then you should increase the timeout. There is no other way to keep the connection alive; that is the purpose of the timeout: to terminate the connection if the query doesn't complete within that time frame.
I have to agree with Terrapin.
You have a few options on how to get your time down. First, if your company employs DBAs, I'd recommend asking them for suggestions.
If that's not an option, or if you want to try some other things first here are your three major options:
1) Break up the query into components that run under the timeout. This is probably the easiest.
2) Change the query to optimize its access path through the database (generally: hitting an index as closely as you can).
3) Change or add indexes to affect your query's access path.
If you are constrained from using the default process of changing the timeout value, you will most likely have to do a lot more work. The following options come to mind:
Validate with your DBAs and another code review that you have truly optimized the query as best as possible.
Work on the underlying DB structure to see if there is any gain you can get on the DB side, by creating or modifying index(es).
Divide it into multiple parts, even if this means running procedures with multiple return parameters that simply call one another. (This option is not elegant, and honestly, if your code REALLY is going to take this much time, I would go to management and re-discuss the 30-second timeout.)
We recently had a similar issue on a SQL Server 2000 database.
During your query, run this query on your master database on the db server and see if there are any locks you should troubleshoot:
select
    spid,
    db_name(sp.dbid) as DBname,
    blocked as BlockedBy,
    waittime as WaitInMs,
    lastwaittype,
    waitresource,
    cpu,
    physical_io,
    memusage,
    loginame,
    login_time,
    last_batch,
    hostname,
    sql_handle
from sysprocesses sp
where (waittype > 0 and spid > 49)
   or spid in (select blocked from sysprocesses where blocked > 0)
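(Side note, my addition: on SQL Server 2005 and later, the sys.dm_exec_requests and sys.dm_os_waiting_tasks DMVs expose the same kind of wait and blocking information.)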
SQL Server Management Studio 2008 also contains a very cool activity monitor which lets you see the health of your database during your query.
In our case it was a NETWORKIO wait that kept the database busy: some legacy VB code didn't disconnect from its result set quickly enough.
If you are prohibited from using the features of the data access API to allow a query to last more than 30 seconds, then we need to see the SQL.
The performance gains to be made by optimizing the use of ADO.NET are slight in comparison to the gains of optimizing the SQL.
And you are already using the most efficient method of executing SQL. Other techniques would be mind-numbingly slower (although, if you did a quick retrieval of your rows and some really slow client-side processing using DataSets, you might be able to get the initial retrieval down to less than 30 seconds, but I doubt it).
If we knew you were doing inserts, then maybe you should be using bulk insert. But we don't know the content of your SQL.
This is an UGLY hack, but it might help solve your problem temporarily until you can fix the real problem:
private static void CreateCommand(string queryString, string connectionString)
{
    int maxRetries = 3;
    int retries = 0;
    while (true)
    {
        try
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            {
                SqlCommand command = new SqlCommand(queryString, connection);
                command.Connection.Open();
                command.ExecuteNonQuery();
            }
            break; // success: stop retrying
        }
        catch (SqlException se)
        {
            if (se.Message.IndexOf("Timeout", StringComparison.InvariantCultureIgnoreCase) == -1)
                throw; // not a timeout: rethrow
            if (retries >= maxRetries)
                throw new Exception(String.Format("Timed out {0} times", retries), se);
            // or break here to swallow the error
            retries++;
        }
    }
}
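(As an aside, and my addition: a sturdier way to detect a timeout is to check se.Number == -2, the error code ADO.NET reports for timeouts, rather than parsing the message text.)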
command.CommandTimeout *= 2;
That will double the default time-out, which is 30 seconds.
Or, put the value for CommandTimeout in a configuration file, so you can adjust it as needed without recompiling.
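A minimal sketch of that config-file approach (the appSettings key name is made up for illustration):

// App.config: <appSettings><add key="CommandTimeoutSeconds" value="120"/></appSettings>
int timeoutSeconds;
string raw = System.Configuration.ConfigurationManager.AppSettings["CommandTimeoutSeconds"];
if (int.TryParse(raw, out timeoutSeconds))
    command.CommandTimeout = timeoutSeconds; // otherwise the 30-second default stays in effect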
You should break your query up into multiple chunks that each execute within the timeout period.
If you absolutely cannot increase the timeout, your only option is to reduce the time of the query to execute within the default 30 second timeout.
I tend to dislike increasing the connection/command timeout, since to my mind that treats the symptom, not the problem.
Have you thought about breaking the query down into several smaller chunks?
Also, have you run your query through the Database Engine Tuning Advisor, in:
Management Studio > Tools > Database Engine Tuning Advisor
Lastly, could we get a look at the query itself?
cheers
Have you tried wrapping your SQL inside a stored procedure? They seem to have better memory management. I have seen timeouts like this before in plain SQL statements with internal queries using classic ADO, i.e. select * from (select ....) t inner join somethingTable, where the internal query was returning a very large number of results.
Other tips:
1. Perform reads with the WITH (NOLOCK) table hint. It's dirty and I don't recommend it, but it will tend to be faster.
2. Look at the execution plan of the SQL you're trying to run; reduce the row scanning and check the order in which you join tables.
3. Look at adding some indexes to your tables for faster reads.
4. I've also found that deleting rows is very expensive; you could try limiting the number of rows per call.
5. Swapping @table variables for #temporary tables has also worked for me in the past.
6. You may also have a bad cached execution plan (I've heard of this, never seen it).
Hope this helps
Update: Looks like the query does not throw any timeout. The connection is timing out.
In other words, even if you don't execute a query, the connection times out? There are two time-outs: connection and query. Everybody seems to focus on the query, but if you get connection timeouts, it's a network problem and has nothing to do with the query: the connection first has to be established before a query can be run, obviously.
It might be worth trying paging the results back.
Just set SqlCommand's CommandTimeout property to 0; this will cause the command to wait until the query finishes...
e.g.:
SqlCommand cmd = new SqlCommand(spName,conn);
cmd.CommandType = CommandType.StoredProcedure;
cmd.CommandTimeout = 0;
Related
Within my C# code I am using the CommandTimeout property to ensure that any query executing longer than 30s is terminated, both on the server and in the database. However, when listing the currently running queries on the database, the query that was set to cancel after 30s runs well beyond 30s.
using (SqlConnection connection = new SqlConnection(connectionString))
{
    connection.Open();
    SqlCommand sqlCommand = new SqlCommand(query, connection);
    // Set timeout to 30s
    sqlCommand.CommandTimeout = 30;
    SqlDataAdapter da = new SqlDataAdapter(sqlCommand);
    da.Fill(response); // response is a DataTable declared earlier
    connection.Close();
    da.Dispose();
}
Why is the query still running in the DB? Is my only option right now to send another query from the server to kill the query (KILL [session_id]) after 30s?
EDIT: 300 MB of data is being returned by this query.
There are a number of posts on StackOverflow indicating that SqlCommand.CommandTimeout won't affect the behavior of SqlDataAdapter.Fill. Instead, you supposedly have to set the SqlDataAdapter's SelectCommand.CommandTimeout property.
However, there are other posts which seem to indicate that even this doesn't work. This one in particular makes me think that the query will only be canceled if the timeout occurs before the query starts yielding results. Once results start coming in, it appears to ignore all timeouts.
My recommendation would be to reconsider using SqlDataAdapter. Depending on your use case, maybe a library like Dapper would work better for you?
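As a hedged illustration only (this uses Dapper's public API, not anything from the question; connectionString and query stand in for your own values):

using Dapper; // NuGet package: Dapper

using (var connection = new SqlConnection(connectionString))
{
    // commandTimeout is in seconds and is applied to the underlying SqlCommand
    var rows = connection.Query(query, commandTimeout: 30).AsList();
}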
You may also want to consider reporting this as a defect to the .NET team. I've had mixed success in the past reporting such errors; it depends on whether the team wants to prioritize fixing the issue.
Update
It looks like this may be the intended, documented behavior, as Marc Gravell points out here.
lol: from the documentation
(https://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout(v=vs.110).aspx)
For example, with a 30 second time out, if Read requires two network
packets, then it has 30 seconds to read both network packets. If you
call Read again, it will have another 30 seconds to read any data that
it requires.
So: this timeout resets itself on every Read. So: the only way it'll trip is if any single Read operation takes longer than the configured timeout (30 seconds here). As long as SQL Server manages to get at least one row onto the pipe in that time, it won't time out via either API.
I'm still a bit of a newbie and I have been assigned the task of maintaining previously written code.
I have a web app that simulates SQL Management Studio, limiting delete options, for example, so basic users don't screw up our servers.
Well, we have a function that expects a query or queries. It works fine, but our server's RAM gets blown up by complex queries. Maybe it's not that much data, but it's casting XML and all that stuff that I still don't even understand in SQL.
This is the actual function:
public DataSet ExecuteMultipleQueries(string queries)
{
    var results = new DataSet();
    using (var myConnection = new SqlConnection(_connectionString))
    {
        myConnection.Open();
        var sqlCommand = myConnection.CreateCommand();
        sqlCommand.Transaction = myConnection.BeginTransaction(IsolationLevel.ReadUncommitted);
        sqlCommand.CommandTimeout = AppSettings.SqlTimeout;
        sqlCommand.CommandText = queries.Trim();
        var dataAdapter = new SqlDataAdapter { SelectCommand = sqlCommand };
        dataAdapter.Fill(results);
        return results;
    }
}
I'm a bit lost; I've read many different answers, but either I don't understand them properly or they don't solve my problem in any way.
I know I could use LINQ to SQL or Entity Framework. I tried them, but I really don't know how to use them with an "unknown" query. I could try to research more anyway, so if you think they will help me approach a solution, by all means, I will try to learn them.
So to the point:
The function seems to stop at dataAdapter.Fill(results) when debugging; at that point the server tries to answer the query and just consumes all its RAM and blocks itself. How can I solve this? I thought maybe I could make SQL return a certain amount of data, store it in a collection, then continue returning data, and keep going until there is no more data to return from SQL, but I really don't know how to detect whether there is any data left to return.
I also thought about reading and storing in two different threads, but I don't know how the data in one thread can be stored by another thread asynchronously (and even less whether that would solve the issue).
So, yes, I don't have anything clear at all, so any guidance or tip would be highly appreciated.
Thanks in advance and sorry for the long post.
You can use pagination to fetch only part of the data.
Your code will be something like this (when filling a DataSet, the paging overload of Fill also requires a source-table name):
dataAdapter.Fill(results, 0, pageSize, "Results");
pageSize can be any size you want (100 or 250, for example), and "Results" is simply the name the filled DataTable will be given.
You can get more information in this msdn article.
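(One caveat I believe applies, though it isn't in the original answer: this kind of paging still executes the whole query on the server; Fill just stops reading after pageSize rows, so it limits client memory rather than server work.)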
In order to investigate, try the following:
Start SQL profiler (it is usually installed along with SSMS and can be started from Management Studio, Tools menu)
Make sure you fill in some filters (either NT username or at least the database you are profiling). This is to catch queries as specific (i.e. only yours) as possible.
Include starting events to see when your query starts (e.g. RPC:Starting).
Start your application
Start the profiler before issuing the query (filling the adapter)
Issue the query -> you should see the query start in the profiler
Stop the profiler so it doesn't catch other queries (it puts overhead on SQL Server)
Stop the application (no reason to mess with the server until the analysis is done)
Take the query into SQL Management Studio. I expect a SELECT that returns a lot of data. Do not run it as-is, but put a TOP on it to limit its results, e.g. SELECT TOP 1000 <some columns> from ....
If the TOPped select runs slowly, you are returning too much data.
This may be due to returning some large fields such as N/VARCHAR(MAX) or VARBINARY(MAX). One possible solution is to exclude these fields from the initial SELECT and lazy-load that data as needed.
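A rough sketch of that lazy-loading idea (the table and column names dbo.Documents, Id, Title, and Content are invented for illustration):

// list query: narrow columns only, so the heavy MAX columns never hit the wire
var listCommand = new SqlCommand("SELECT Id, Title FROM dbo.Documents", connection);
// detail query: fetch the heavy column for a single row, only when it is actually needed
var detailCommand = new SqlCommand("SELECT Content FROM dbo.Documents WHERE Id = @Id", connection);
detailCommand.Parameters.Add("@Id", SqlDbType.Int).Value = requestedId;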
Check these steps and come back with your actual query, if needed.
The question involves SQL Server and C#.
I am trying to execute multiple delete statements in one query. The format has the following structure:
DELETE from dbo.table where column = 'value'
repeated more than 100000 times
I build the command through a StringBuilder and call it this way in my C# code:
cmd.Connection = con;
int rows = cmd.ExecuteNonQuery();
However, it takes a lot of time to execute and ends with this error:
The timeout period elapsed prior to completion of the operation or the server is not responding. Executing it through Management Studio also takes a lot of time, which leads me to think the query isn't performant enough; in that case, however, the query executes successfully.
Obviously, with fewer DELETE statements the query ends properly, since it's a very simple query. What is the best way to execute multiple statements and avoid this error?
Can't you do a single DELETE execution with IN instead of = in your query?
Do something like:
DELETE from dbo.table where column in ('Param1','Param2','(...)')
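If the value list is too long for one statement, here is a hedged sketch of batching the deletes from C# (the variable names, batch size, and the assumption that values is a List<string> are mine; SQL Server also caps a command at 2100 parameters, hence the batching):

// delete in batches using parameterized IN lists; requires using System.Linq
const int batchSize = 1000; // stays well under SQL Server's 2100-parameter limit
for (int i = 0; i < values.Count; i += batchSize)
{
    var batch = values.Skip(i).Take(batchSize).ToList();
    var names = Enumerable.Range(0, batch.Count).Select(j => "@p" + j).ToList();
    using (var cmd = new SqlCommand(
        "DELETE FROM dbo.table WHERE column IN (" + string.Join(",", names) + ")", con))
    {
        for (int j = 0; j < batch.Count; j++)
            cmd.Parameters.AddWithValue(names[j], batch[j]);
        cmd.ExecuteNonQuery();
    }
}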
Also, you could check this article:
http://social.technet.microsoft.com/wiki/contents/articles/20651.sql-server-delete-a-huge-amount-of-data-from-a-table.aspx
Cheers!
I have a query (about 1600 lines stored as a Stored Procedure) that takes about 3 seconds to execute (after optimizing it by adding the correct indexes), when executed inside SQL Server Management Studio.
I wrote a wrapper for this in C# and provided myself with the ability to use a URI to execute this query. However, this takes more than 30 seconds to execute and because of this, when I run this query as part of a loop, the browser stalls due to too many pending requests. I wrote the wrapper like this:
try
{
    string ConString = Constants.connString;
    using (con = new SqlConnection(ConString))
    {
        cmd = new SqlCommand(sql, con);
        con.Open();
        dr = cmd.ExecuteReader();
        while (dr.Read())
        {
            ...
        }
    }
}
My connection string is this:
Data Source={0};Initial Catalog={1};Integrated Security=True;MultipleActiveResultSets=true
I know the query itself is good because I've run it multiple times inside SSMS and it worked fine (under 5 seconds on average). And I'd be happy to provide more debug information, except that I don't know what to provide.
To troubleshoot such problems, where would I start?
EDIT:
I ran SQL Profiler and collected some stats. This is what I am observing. Very strange that it is the exact query being executed. Let me know if there is anything else I can do at this point.
OK; I finally found the answer here and here. The answer is replicated here for convenience. Thanks go to the original poster, Jacques Bosch, who in turn took it from here. Hard to believe this problem was solved in 2004!
The problem seems to be caused by SQL Server's Parameter Sniffing.
To prevent it, just assign your incoming parameter values to other variables declared right at the top of your SP.
See this nice Article about it
Example:
CREATE PROCEDURE dbo.MyProcedure
(
    @Param1 INT
)
AS
declare @MyParam1 INT
set @MyParam1 = @Param1
SELECT * FROM dbo.MyTable WHERE ColumnName = @MyParam1
GO
I copied this information from eggheadcafe.com.
Queries often run faster within SQL Server management studio due to caching of query plans. Try running sp_recompile on your stored procedure before benchmarking. This will clear the query plan.
More details can be found here: http://www.sommarskog.se/query-plan-mysteries.html
Why don't you use XML instead of a result set?
As far as I know, using XML is much faster than reading a result set.
So in this case you could use something like this:
SELECT *
FROM [Table Name]
FOR XML PATH('[Your Path]'), ELEMENTS XSINIL, ROOT('Your Root')
and after that I think you can deserialize it in your project.
I had the same problem once, and it turned out the difference was caused by the ARITHABORT setting, which differs when you connect via SQL Management Studio. The .NET connection sets ARITHABORT to OFF, while SQL Management Studio sets it to ON.
You can check whether you have the same problem by executing SET ARITHABORT OFF in SQL Management Studio and then executing your query.
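(A possible mitigation from the .NET side, my addition rather than the answerer's: issue the same setting explicitly after opening the connection so both environments match.)

// run once right after connection.Open(), so the session matches SSMS's default
using (var setCmd = new SqlCommand("SET ARITHABORT ON;", connection))
{
    setCmd.ExecuteNonQuery();
}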
Here is a thread that explains why this setting can cause such dramatic performance differences.
I have a SQL query with multiple joins that pulls data from a database for processing. It is supposed to run on a scheduled basis. So day 1 it might pull 500 records; day 2, say, 400.
Now, if the service is stopped for some reason and the data is not processed, then on day 3 there could be as many as 1000 records to process. This is causing a timeout on the SQL query.
What is the best way to handle this situation without causing a timeout, while gradually working through the backlog?
TIA
Create a batch process. Allow no more than, say, n records to be processed at a time. Let's say n = 100...
Then make your select queries select only the TOP 100 until there are no more records to process.
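A rough sketch of that loop (dbo.WorkQueue, Id, and Processed are invented names; adjust to your schema):

// process at most 100 rows per pass until nothing is left
int handled;
do
{
    handled = 0;
    using (var cmd = new SqlCommand(
        "SELECT TOP 100 Id FROM dbo.WorkQueue WHERE Processed = 0", con))
    using (var reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // ... remember the Id; after the reader closes, process the row
            // and mark it Processed = 1 with a separate command ...
            handled++;
        }
    }
} while (handled > 0);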
YourCommandObject.CommandTimeout = 0;
This will allow your command to run forever.
Note this could cause database locks and other issues. If you use the batch process I described above and determine the longest-running query, you can set your command timeout to what is necessary.
Look at your query; it may not be optimised. Put indexes where appropriate. Without seeing your table structure and query, I can't help any more than that.
One practical solution would be to increase the command timeout:
var com = yourConnection.CreateCommand();
com.CommandTimeout = 0;
...
The CommandTimeout property is the time (in seconds) to wait for the command to execute. The default is 30 seconds. A value of 0 indicates no limit and should be avoided, because an attempt to execute the command will then wait indefinitely.