What does SQL Server do with a timed out request? - c#

Suppose that I use C# to run a long-running SQL Server stored procedure (let's say 30 minutes). Further suppose that I put a 1-hour timeout on the query in C#, so that if for whatever reason the SP takes longer than expected, I don't end up monopolizing the DB. Lastly, suppose that this stored procedure has a try/catch block in it to catch errors and do some clean-up should any steps inside it fail.
Some code (C#):
using (SqlCommand comm = new SqlCommand("longrunningstoredproc"))
{
    comm.Connection = conn;
    comm.CommandType = CommandType.StoredProcedure;
    comm.CommandTimeout = 3600;
    comm.ExecuteNonQuery();
}
/* Note: no transaction is used here, the transactions are inside the stored proc itself. */
T-SQL (basically amounts to the following):
BEGIN TRY
    -- initialize by inserting some rows into a working table somewhere
    BEGIN TRAN
    -- do long running work
    COMMIT TRAN
    BEGIN TRAN
    -- do long running work
    COMMIT TRAN
    BEGIN TRAN
    -- do long running work
    COMMIT TRAN
    BEGIN TRAN
    -- do long running work
    COMMIT TRAN
    BEGIN TRAN
    -- do long running work
    COMMIT TRAN
    -- etc.
    -- remove the rows from the working table (and set another data point to success)
END TRY
BEGIN CATCH
    -- remove the rows from the working table (but don't set the other data point to success)
END CATCH
My question is, what will SQL Server do with the query when the command times out from the C# side? Will it invoke the catch block of the SP, or will it just cut it off altogether such that I would need to perform the clean-up in C# code?

The timeout is enforced by ADO.NET; SQL Server itself has no concept of a command timeout. When the timeout elapses, the .NET client sends an "attention" TDS packet to cancel the running request. You can observe this behavior with SQL Profiler, which has an "Attention" event.
When SQL Server receives the cancellation it cancels the currently running query (just like SSMS does when you press the stop button) and aborts the batch (again, just like SSMS). This means that no CATCH code in the procedure can run. The connection stays alive.
In my experience the transaction is rolled back immediately, but I don't think this is guaranteed.
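(Note that whether the open transaction is rolled back at that point depends partly on the session's XACT_ABORT setting: with SET XACT_ABORT ON the attention triggers an immediate rollback, while with it OFF the transaction can be left open until the connection is reset or closed.)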
TL;DR: A timeout in ADO.NET behaves the same as if you had pressed stop in SSMS (or called SqlCommand.Cancel).
Here is a reference for this: https://techcommunity.microsoft.com/t5/sql-server-support/how-it-works-attention-attention-or-should-i-say-cancel-the/ba-p/315511

The timeout is enforced on the client side of the connection, not inside the running query.
This means that your BEGIN CATCH will not execute in the event of a timeout; the query knows nothing about it.
Write your cleanup in C#, in a catch (SqlException ex) block (testing for a timeout).
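A minimal sketch of that pattern, reusing the question's snippet (error number -2 is the well-known SqlClient timeout code; "cleanupproc" is a hypothetical procedure that removes the working-table rows):
try
{
    using (SqlCommand comm = new SqlCommand("longrunningstoredproc", conn))
    {
        comm.CommandType = CommandType.StoredProcedure;
        comm.CommandTimeout = 3600;
        comm.ExecuteNonQuery();
    }
}
catch (SqlException ex)
{
    if (ex.Number != -2) throw; // -2 = timeout in System.Data.SqlClient
    // the proc's CATCH block never ran, so do the cleanup from the client
    using (SqlCommand cleanup = new SqlCommand("cleanupproc", conn))
    {
        cleanup.CommandType = CommandType.StoredProcedure;
        cleanup.ExecuteNonQuery();
    }
}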

Related

c# MySql stored procedure timeout

I have a C# application that calls the same MySQL stored procedure multiple times with different parameters. It's called about 250 times and each call takes about 30 seconds to complete.
There are some cases where, for some reason, a call takes much more time and blocks the next ones from running, so I would like to set a timeout so that a stored procedure is stopped when it takes more than, say, 5 minutes. This way the others could still run and only the one that took too much time would be skipped.
I tried to use the command timeout of the MySQL connection, but this does not kill the running stored procedure; it only throws an exception in the code, which is not ideal because the next call will start while the previous one is still running.
Is there a way to set a MySQL timeout for the connection, or just kill a MySQL thread/process (the SP) if it takes too much time? Closing the MySQL command or connection did not do it, and clearing the connection pool did not help either.
To kill a running stored procedure, use MySqlCommand.Cancel (using the same MySqlCommand object that was used to start that stored procedure). Because MySqlCommand.ExecuteNonQuery (or ExecuteReader, etc.) will block the thread that called it, this will have to be done from another thread. One way to accomplish this would be with CancellationTokenSource, then registering a callback that will cancel the command:
// set up command
using (var command = new MySqlCommand("sproc_name", connection))
{
    command.CommandType = CommandType.StoredProcedure;

    // register cancellation to occur in five minutes
    using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
    using (cts.Token.Register(() => command.Cancel()))
    {
        // execute the stored procedure as normal
        using (var reader = command.ExecuteReader())
        {
            // use reader, or just call command.ExecuteNonQuery instead if that's what you need
        }
    }
}
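As far as I know, MySqlCommand.Cancel works by issuing a KILL QUERY for the running statement over a separate internal connection, which is why it succeeds where closing the command or the connection does not.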

Kill query after command timeout

I'm trying to kill a query triggered by an ADO.NET command on a PostgreSQL database after the command times out:
using (var command = new PgSqlCommand("public.timeout_test", connection))
{
    command.CommandTimeout = 10;
    command.CommandType = CommandType.StoredProcedure;
    connection.Open();
    command.ExecuteNonQuery();
}
In the .NET code the timeout exception is thrown correctly, but I wonder why the query triggered by the timeout_test function is still in an active state. If I run the query below, the query executed by timeout_test is listed as active:
SELECT * FROM pg_stat_activity where state = 'active';
I tried to test it with the Devart and Npgsql connectors, but both of them behave in the same way, so I assume it's intended behavior, but I don't understand the reason. I also wanted to ask if there is a way to kill the query after the command timeout.
At least in Npgsql, CommandTimeout is implemented as a client-side socket timeout: after a certain amount of time Npgsql will simply close the network socket on its side. This doesn't cancel the query server-side, which will continue running.
You can set the PostgreSQL statement_timeout parameter to have it kill queries running more than a given amount of time; for best results, set statement_timeout to something lower than CommandTimeout - this will ensure that server timeout occurs before client timeout, preserving the connection and transmitting the server timeout to the client as a regular exception.
Another option is to manually trigger a cancellation from the client by calling NpgsqlCommand.Cancel(). You can do this whenever you want (e.g. when the user clicks a button), but contrary to statement_timeout it will obviously work only if the network is up.
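A minimal sketch of the statement_timeout route (assuming Npgsql; the 9000 ms value is arbitrary, chosen to sit just below the 10-second CommandTimeout):
using (var connection = new NpgsqlConnection(connectionString))
{
    connection.Open();
    // server-side limit, slightly below the client-side CommandTimeout
    using (var set = new NpgsqlCommand("SET statement_timeout = 9000", connection))
        set.ExecuteNonQuery();
    using (var command = new NpgsqlCommand("public.timeout_test", connection))
    {
        command.CommandType = CommandType.StoredProcedure;
        command.CommandTimeout = 10;
        // the server now cancels the query itself and reports it to the client
        // as a regular error (SQLSTATE 57014, query_canceled)
        command.ExecuteNonQuery();
    }
}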

How do I rollback a SQL CLR stored procedure when a user cancels the query?

I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name onto a service which will bulk insert some data into it, display the results of the table, then clean up the table.
What I've tried so far:
1. Use SqlTransaction. Cancelling the transaction works, but it puts my query window into a state where I can't continue working in it:
The transaction active in this session has been committed or aborted by another session
2. Use TransactionScope. Same issue as 1.
3. Manually clean up the table in a finally clause by issuing a DROP TABLE SqlCommand. This doesn't seem to get run, though the SqlContext.Pipe.Send() issued just before it does. It doesn't seem to be related to any time constraints: if I issue a Thread.Sleep(2000) before printing another line, the second line still prints, whereas command.ExecuteNonQuery() stops before the second line is printed.
4. Place the manual cleanup code into a CER or SafeHandle. This doesn't work, as having a CER requires some guarantees, including not allocating additional memory and not calling methods that are not decorated with a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
            // emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));
            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            // drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately SQLCLR does not handle query cancellation very well. However, the error message you got seems to imply that the cancellation does its own ROLLBACK. Have you tried not using a Transaction within the SQLCLR code, but instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
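If you drive this from application code rather than SSMS, the same three-statement workflow can be sent as a single ADO.NET batch (a sketch; connectionString is a placeholder):
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand(
    "BEGIN TRAN; EXEC SQLCLR_Stored_Procedure; IF (@@TRANCOUNT > 0) ROLLBACK TRAN;", conn))
{
    conn.Open();
    // if a cancellation aborts the batch before the ROLLBACK runs, the open
    // transaction is rolled back when the connection is closed/reset
    cmd.ExecuteNonQuery();
}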
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps and only give EXECUTE permission to the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
1. Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
2. Create a User from that Asymmetric Key
3. GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
4. Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And, in the event that the cancellation aborts the execution prior to executing ROLLBACK, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and setting to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName), which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual cleanup.
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
This approach is probably the best overall, since you cannot really guarantee either that the ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled back, but if someone executes this via SSMS and it aborts without the ROLLBACK, they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shut down / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time SQL Server starts (tempdb is created anew, as a copy of model, upon each start of the SQL Server service).
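For illustration, the global-temporary-table variant of the question's code might look like this (a sketch; only the table name changes, and the DROP in the finally block goes away):
// inside GetData: a global temp table cleans itself up when the session fully ends
string tableName = $"##qb_{code}_{guid:N}";
using (var create = new SqlCommand($"CREATE TABLE {tableName} (Id INT)", connection))
{
    create.ExecuteNonQuery();
}
// pass tableName to the bulk-insert service; no manual DROP TABLE is needed afterwards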
Stepping back, there are probably much better ways to pass the data out of your CLR procedure.
1) you can simply use SqlContext.Pipe to return a resultset without creating a table (a sketch follows after this list).
2) you can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a TSQL wrapper procedure to make this convenient.
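A minimal sketch of option 1, streaming rows down the pipe with SqlDataRecord (the single Id column and the idsFromService sequence are invented for illustration):
// inside the SQLCLR procedure: send a result set without materializing a table
var record = new SqlDataRecord(new SqlMetaData("Id", SqlDbType.Int));
SqlContext.Pipe.SendResultsStart(record);
foreach (int id in idsFromService) // stand-in for the data fetched from the service
{
    record.SetInt32(0, id);
    SqlContext.Pipe.SendResultsRow(record);
}
SqlContext.Pipe.SendResultsEnd();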
Anyway, using BEGIN TRAN/COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
                // emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));
                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for use, this would be an ideal match for sp_bindsession. With sp_bindsession, you would call sp_getbindtoken from the context session, pass the token to the service, and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporaries and transactions being transparently propagated.

SQL Timeout Expired When It Shouldn't

I am using the SqlConnection class and running into problems with command timeouts expiring.
First off, I am setting a command timeout via the SqlCommand.CommandTimeout property, like so:
command.CommandTimeout = 300;
Also, I have ensured that the Execution Timeout setting is set to 0 in SQL Server Management Studio, so there should be no timeouts on the management side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;
    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();
        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }
        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify that exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), found inside the foreach in the above using statement, performs another query against the same table in the same database. Here is the query it performs:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) that helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a blocking situation that is causing the query in your InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord(), so I am assuming you are running it on a separate connection. So what happens is:
1. You start a transaction.
2. You truncate the table.
3. InsertCourseLookupRecord starts another connection and tries to insert data into the table, but the table is locked because your transaction isn't committed yet.
4. The connection in InsertCourseLookupRecord() times out at that connection's own timeout, the default of 30 seconds.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this, change your function definition to:
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
Take all the connection code out of the function, because you're going to use the cmd object. Then, when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
You could also just take the code in InsertCourseLookupRecord and put it inside the foreach loop instead, and then reuse the command object in your using block without needing to pass it to a function at all.
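Putting it together, the reworked helper might look something like this (a sketch; the Course property names and the abbreviated column list are assumptions):
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
{
    cmd.Parameters.Clear();
    cmd.CommandText = "INSERT INTO courses(courseid, name) VALUES(@courseid, @name)"; // abbreviated
    cmd.Parameters.AddWithValue("@courseid", course.CourseId); // hypothetical property names
    cmd.Parameters.AddWithValue("@name", course.Name);
    return cmd.ExecuteNonQuery(); // runs on the caller's connection and transaction
}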
Because you are using two separate SqlConnection objects, you are blocking yourself due to the SqlTransaction you started in your outer code. The queries in InsertCourseLookupRecord, and possibly in GetAllCourses, are blocked by the TRUNCATE TABLE courses call, which has not yet been committed. They wait for the truncate to be committed and then time out.
You have a few options.
1. Pass the SqlConnection and SqlTransaction into GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction.
2. Use an "ambient transaction" by getting rid of the SqlTransaction and using a System.Transactions.TransactionScope instead (a sketch follows after this list). This causes all connections opened to the server to share a transaction. It can cause maintenance issues, because depending on what the queries are doing it may need to invoke the Distributed Transaction Coordinator, which may be disabled on some computers (from the looks of what you showed, you would need the DTC, as you have two open connections at the same time).
The best option is to try to change your code to do option 1; if you can't, do option 2.
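A minimal sketch of option 2 (requires a reference to System.Transactions; as noted, two connections open at once may escalate to the DTC):
using (var scope = new TransactionScope())
{
    using (SqlConnection conn = new SqlConnection(connection))
    {
        conn.Open(); // auto-enlists in the ambient transaction
        SqlCommand command = conn.CreateCommand();
        command.CommandTimeout = 300;
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();

        // helpers that open their own connections enlist in the same ambient transaction
        foreach (Course course in CourseHelper.GetAllCourses())
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }
    }
    scope.Complete(); // everything commits together; disposing without Complete rolls back
}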
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string)
Please review your connection string; that's the only possibility I can think of.

SQL Server stored procedure that returns processed records number

I have a WinForms application that executes a stored procedure which works through a large number of rows (around 500k). In order to inform the user about how many rows have already been processed, I would need the stored procedure to return a value every n rows, for example every 1000 rows processed (most of the statements are INSERTs).
Otherwise I would only be able to inform the user when ALL rows have been processed. Any hints on how to solve this?
I thought it could be useful to use a trigger or some scheduled task, but I cannot figure out how to implement it.
So this is a very interesting question. I tried it about 5 years ago with no success, so this is a little challenge for me :) Well, here's what I've got for you.
To send messages from SQL Server while a batch is still running, you need to use the RAISERROR command with the NOWAIT option. So I wrote a stored procedure:
create procedure sp_test
as
begin
    declare @i bigint, @m nvarchar(max)
    select @i = 1
    while @i < 10
    begin
        waitfor delay '00:00:01'
        select @m = cast(@i as nvarchar(max))
        raiserror(@m, 0, 0) with nowait
        select @i = @i + 1
    end
end
If you execute it in SSMS, you'll see the messages appearing in the Messages pane while the procedure is still running. OK, we get messages from the server. Now we need to process them on the client.
To do that, I created a SqlCommand like this:
SqlCommand cmd = new SqlCommand("sp_Test");
cmd.Connection = new SqlConnection("Server=HOME;Database=Test;Trusted_Connection=True;");
Now, to catch the messages, we use the InfoMessage event of the SqlConnection object:
cmd.Connection.InfoMessage += Connection_InfoMessage;

static void Connection_InfoMessage(object sender, SqlInfoMessageEventArgs e)
{
    Console.WriteLine(e.Message);
}
And now we try to display the messages:
cmd.Connection.Open();
try
{
    SqlDataReader r = cmd.ExecuteReader();
}
finally
{
    cmd.Connection.Close();
}
SUCCESS :)
BTW, you cannot use ExecuteNonQuery() here, because it returns the messages concatenated together only at the end of execution.
Also, you may want to run your query in the background so it does not lock up your WinForms client.
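For the background part, a minimal sketch using async/await so the WinForms UI thread stays free (the button handler and progressLabel are invented UI members):
private async void RunButton_Click(object sender, EventArgs e)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("sp_test", conn))
    {
        // marshal each progress message onto the UI thread before touching controls
        conn.InfoMessage += (s, args) =>
            BeginInvoke((Action)(() => progressLabel.Text = args.Message));
        await conn.OpenAsync();
        using (var reader = await cmd.ExecuteReaderAsync())
        {
            // nothing to read here; progress arrives via InfoMessage while the proc runs
        }
    }
}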
