I am using the SqlConnection class and running into problems with command timeouts expiring.
First off, I am setting the CommandTimeout property on my SqlCommand like so:
command.CommandTimeout = 300;
Also, I have ensured that the Execution Timeout setting is set to 0, which should mean no timeouts on the SQL Server Management Studio side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;

    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();

        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }

        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify that exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), called inside the foreach loop of the above using statement, performs another query against the same table in the same database. Here is the query it performs:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) that helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a blocking situation that is causing the query in your InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord(), so I assume you are running it on a separate connection. So what happens is:
You start a transaction.
You truncate the table.
InsertCourseLookupRecord() starts another connection and tries to insert data into that table, but the table is locked because your transaction isn't committed yet.
The connection in the InsertCourseLookupRecord() function times out at that connection's own timeout, which defaults to 30 seconds.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this, change your function definition to:
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
Take all the connection code out of the function, because you're going to use the cmd object. Then, when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
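For illustration, here is a minimal sketch of what the revised helper might look like; the Course property names are assumptions based on your column names:
public static int InsertCourseLookupRecord(Course course, SqlCommand cmd)
{
    // The command (and with it the connection and transaction) comes from
    // the caller, so this INSERT joins the outer transaction.
    cmd.Parameters.Clear(); // remove parameters left over from the previous call
    cmd.CommandText =
        "INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription) " +
        "VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)";
    cmd.Parameters.AddWithValue("@courseid", course.CourseId);
    cmd.Parameters.AddWithValue("@contentid", course.ContentId);
    cmd.Parameters.AddWithValue("@name", course.Name);
    cmd.Parameters.AddWithValue("@code", course.Code);
    cmd.Parameters.AddWithValue("@description", course.Description);
    cmd.Parameters.AddWithValue("@url", course.Url);
    cmd.Parameters.AddWithValue("@metakeywords", course.MetaKeywords);
    cmd.Parameters.AddWithValue("@metadescription", course.MetaDescription);
    return cmd.ExecuteNonQuery();
}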
You could also just take the code in InsertCourseLookupRecord() and put it inside the foreach loop instead, reusing the command object in your using block without needing to pass it to a function at all.
Because you are using two separate SqlConnection objects, you are blocking yourself due to the SqlTransaction you started in your outer code. The query in InsertCourseLookupRecord, and possibly the one in GetAllCourses, is blocked by the TRUNCATE TABLE courses call, which has not yet been committed. They wait for the truncate to be committed and then time out: the inner connections use their own default CommandTimeout of 30 seconds (your 300-second setting applies only to the outer command), which matches the 30 seconds in your log.
You have a few options.
Pass the SqlConnection and SqlTransaction in to GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction.
Use an ambient transaction by getting rid of the SqlTransaction and using a System.Transactions.TransactionScope instead (see the sketch after this list). This causes all connections opened to the server to share one transaction. It can cause maintenance issues, because depending on what the queries are doing it may need to invoke the Distributed Transaction Coordinator, which may be disabled on some computers (from what you have shown, you would need the DTC, since you have two connections open at the same time).
The best option is to try to change your code to do option 1; if you can't, do option 2.
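A minimal sketch of option 2, assuming the helper methods keep opening their own connections (connection is your connection-string variable):
// Requires a reference to System.Transactions.
using System.Transactions;

// Every connection opened inside the scope enlists in the same ambient
// transaction automatically, so the inner inserts are no longer blocked.
using (var scope = new TransactionScope())
{
    using (var conn = new SqlConnection(connection))
    using (var cmd = new SqlCommand("TRUNCATE TABLE courses", conn))
    {
        conn.Open();
        cmd.ExecuteNonQuery();
    }

    // These helpers may keep opening their own connections; with more than
    // one connection enlisted, the transaction escalates to the DTC.
    foreach (Course course in CourseHelper.GetAllCourses())
    {
        CourseHelper.InsertCourseLookupRecord(course);
    }

    scope.Complete(); // nothing commits unless this line is reached
}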
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string)
Please review your connection string; that's the only possibility I can think of.
I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name onto a service which will bulk insert some data into it, display the results of the table, then clean up the table.
What I've tried so far:
Use SqlTransaction. Cancelling the transaction works, but it puts my query window into a state where I can't continue working in it:
The transaction active in this session has been committed or aborted by another session
Use TransactionScope. Same issue as 1.
Manually clean up the table in a finally clause by issuing a DROP TABLE SqlCommand. This doesn't seem to get run, though the SqlContext.Pipe.Send() I issue just before it does. It doesn't seem related to any time constraints: if I issue a Thread.Sleep(2000) before printing another line, the second line still prints, whereas with command.ExecuteNonQuery() execution stops before the second line is printed.
Placing the manual cleanup code into a CER or SafeHandle. This doesn't work, as having a CER requires certain guarantees, including not allocating additional memory and not calling methods that lack a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");

            // emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));

            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            // drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately, SQLCLR does not handle query cancellation very well. However, the error message seems to imply that the cancellation does its own ROLLBACK. Have you tried not using a transaction within the SQLCLR code and instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps, and granting EXECUTE permission only on the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
Create a User from that Asymmetric Key
GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And, in the event that the cancellation aborts the execution prior to executing ROLLBACK, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and setting to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName) which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual clean up.
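For instance, a hypothetical tweak to the table-creation step in the code above:
// Hypothetical variant: create a global temporary table instead of a
// permanent one. It is visible to the external bulk-insert service, and
// SQL Server drops it on its own once the creating session fully closes.
string tableName = $"##qb_{code}_{guid:N}";
using (var createCmd = new SqlCommand($"CREATE TABLE {tableName} (Id INT)", connection))
{
    createCmd.ExecuteNonQuery();
}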
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
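If a separate connection string for just this call is workable, a hypothetical sketch from the calling application (normalConnectionString is an assumed variable, and the usings match the listing further down) could look like:
var builder = new SqlConnectionStringBuilder(normalConnectionString)
{
    Pooling = false // the physical connection (and any ##temp table) goes away on Dispose
};

using (var conn = new SqlConnection(builder.ConnectionString))
using (var cmd = new SqlCommand("dbo.GetData", conn)) // the SQLCLR proc shown above
{
    cmd.CommandType = CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@code", "XYZ"); // illustrative value
    conn.Open();
    cmd.ExecuteNonQuery();
}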
This approach is probably the best overall since you probably cannot guarantee either that ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled-back, but if someone executes this via SSMS and it aborts without the ROLLBACK, then they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shutdown / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time that the SQL Server starts (due to tempdb being created new, as a copy of model, upon each start of the SQL Server service).
Stepping back, there are probably much better ways to pass the data out of your CLR procedure:
1) You can simply use the SqlContext Pipe to return a resultset without creating a table.
2) You can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a T-SQL wrapper procedure to make this convenient.
Anyway using BEGIN TRAN/COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");

                // emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));

                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for use, this would be an ideal match for sp_bindsession: you would call sp_getbindtoken from the context session, pass the token to the service, and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporaries and transactions being transparently propagated.
I have a TcpListener class and I'm using async/await for reading and writing.
For this server I have created a single database instance where I have prepared all database queries.
But with more than one TcpClient I keep getting this exception:
An exception of type MySql.Data.MySqlClient.MySqlException occurred
in MySql.Data.dll but was not handled in user code
Additional information: There is already an open DataReader associated
with this Connection which must be closed first.
If I understand it correctly, there can't be more than one database query at a time, which is a problem with more than one async client.
So I simply added locks to my queries like this, and everything seems fine.
// One MySqlConnection instance for the whole program.
lock (thisLock)
{
    var cmd = connection.CreateCommand();
    cmd.CommandText = "SELECT Count(*) FROM logins WHERE username = @user AND password = @pass";
    cmd.Parameters.AddWithValue("@user", username);
    cmd.Parameters.AddWithValue("@pass", password);
    var count = int.Parse(cmd.ExecuteScalar().ToString());
    return count > 0;
}
I have also tried the method with usings, which creates a new connection for every query, as suggested by members of the SO community, but this method is much slower than locks:
using (MySqlConnection connection = new MySqlConnection(connectionString))
{
    connection.Open(); // This takes about 35 ms and gives worse performance than locks
    using (MySqlCommand cmd = connection.CreateCommand())
    {
        cmd.CommandText = "SELECT Count(*) FROM logins WHERE username = @user AND password = @pass";
        cmd.Parameters.AddWithValue("@user", username);
        cmd.Parameters.AddWithValue("@pass", password);
        int count = int.Parse(cmd.ExecuteScalar().ToString());
        return count > 0;
    }
}
I used a Stopwatch to benchmark these methods: queries on the single locked connection complete in about 20 ms, which is roughly just the network delay, but with usings it is about 55 ms because of the Open() method, which takes about 35 ms.
Why do so many people use the method with usings if the performance is so much worse? Or am I doing something wrong?
You're right, opening a connection is a time-consuming operation. To mitigate this, ADO.NET has connection pooling. Check this article for details.
If you go on with your performance test and check the timings for subsequent connections, you should see the time for connection.Open() improve and get close to 0 ms, because connections are actually taken from the pool.
With your lock implementation, you effectively use a connection pool with just one connection. While this approach can show better performance in a trivial test, it will show very poor results in highly loaded applications.
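As a sketch of the pooled, per-operation pattern (the method name and connectionString field here are assumptions), each call opens its own short-lived connection and lets the pool handle reuse:
// One short-lived, pooled connection per operation. Once the pool has
// warmed up, OpenAsync() is close to free because it reuses an existing
// physical connection.
public static async Task<bool> IsLoginValidAsync(string username, string password)
{
    using (var connection = new MySqlConnection(connectionString))
    {
        await connection.OpenAsync();
        using (var cmd = connection.CreateCommand())
        {
            cmd.CommandText = "SELECT Count(*) FROM logins WHERE username = @user AND password = @pass";
            cmd.Parameters.AddWithValue("@user", username);
            cmd.Parameters.AddWithValue("@pass", password);
            var count = Convert.ToInt64(await cmd.ExecuteScalarAsync());
            return count > 0;
        }
    }
}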
I am developing a .NET web application with C# and SQL Server as the database, being a newbie with both technologies. I have a problem when I attempt to save a lot of information at the same time.
My code is simple:
SqlConnection cn = null;
try
{
    using (TransactionScope scope = new TransactionScope())
    {
        using (cn = Util.getConnection())
        {
            cn.Open();
            BusinessObject.saveObject1(param1, cn);
            BusinessObject.saveObject2(param2, cn);
            BusinessObject.saveObject3(param3, cn);
            BusinessObject.saveObject4(param4, cn);
            ...
            scope.Complete();
        }
    }
}
catch (Exception ex)
{
    // handle/log the error; the transaction rolls back automatically
    // because scope.Complete() was never reached
}
I always use the same connection object because if an error happens I must revert every change to my data, and likewise everything must be committed together when the process succeeds.
I don't mind if the saving process takes a lot of time; in my application that is perfectly normal because I have to save a lot of information.
The weirdness here is:
When I execute this function in the same local area network of the database, it perfectly works.
However, if I execute it from outside, for example connected over a VPN and consequently with higher latency, I get an error with the message: "The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements."
I have tried changing the timeout of the database through the connection string in the web.config, but it didn't resolve anything. Also, if I call cn.Dispose() after each ExecuteNonQuery() statement, I get an error on the next attempt to use the connection object.
Do I have to change the parameters of the TransactionScope object? Is there a better way to do this?
Thanks in advance.
This is due to a timeout error: the TransactionScope might decide to roll back the transaction in case of a timeout.
You need to increase the timeout value in the TransactionScope constructor (see the sketch below). The default maximum timeout is 10 minutes.
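For example, a minimal sketch of passing an explicit timeout (the values are illustrative):
var options = new TransactionOptions
{
    IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted,
    Timeout = TimeSpan.FromMinutes(10) // capped by maxTimeout in machine.config
};

using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... do all the saves ...
    scope.Complete();
}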
Move scope.Complete(); outside the connection block (MSDN article).
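In other words, something along these lines, using the names from your snippet:
using (TransactionScope scope = new TransactionScope())
{
    using (SqlConnection cn = Util.getConnection())
    {
        cn.Open();
        // ... save the objects ...
    } // dispose the connection first

    scope.Complete(); // then complete the scope
}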
Did you try setting Timeout = 0 in your Util.getConnection() function? A value of 0 indicates no limit, which should let every long-running operation complete.
I've a .NET 3.5 WinForms application in which I am running multiple steps.
Each step does some calculation and calls one or more stored procs. Some of these stored procs do multiple updates/inserts in the tables in the Oracle database.
The app UI has "process" and "cancel process" buttons for each step. If the user hits the cancel process button, the application is supposed to roll the database back to its previous state, i.e. make the transaction atomic.
So, my question here is: is this possible? And if yes, what do I need to take care of on the app and DB side to achieve this atomicity?
Do I need to use .NET's transaction API here? Also, is it required to use BEGIN/COMMIT TRANSACTION blocks in those stored procs?
Please share your thoughts.
Thanks.
First, yes; and second, your C# app (specifically the task layer) should manage the transactions. The sprocs should NOT be transactional, unless you can guarantee the ability to do nested transactions that roll back when the parent rolls back (and I can't speak on that point with respect to Oracle).
http://msdn.microsoft.com/en-us/library/system.data.oracleclient.oracleconnection_methods%28v=VS.71%29.aspx
With your OracleConnection object, before you begin all your work, call BeginTransaction(). Then do all your OracleCommand operations with that connection.
Then, whether you call Transaction.Rollback or Transaction.Commit, all the sproc work you do should roll back or commit accordingly.
Example right from the link:
public void RunOracleTransaction(string myConnString)
{
    OracleConnection myConnection = new OracleConnection(myConnString);
    myConnection.Open();

    OracleCommand myCommand = myConnection.CreateCommand();
    OracleTransaction myTrans;

    // Start a local transaction
    myTrans = myConnection.BeginTransaction(IsolationLevel.ReadCommitted);
    // Assign transaction object for a pending local transaction
    myCommand.Transaction = myTrans;

    try
    {
        myCommand.CommandText = "INSERT INTO Dept (DeptNo, Dname, Loc) values (50, 'TECHNOLOGY', 'DENVER')";
        myCommand.ExecuteNonQuery();
        myCommand.CommandText = "INSERT INTO Dept (DeptNo, Dname, Loc) values (60, 'ENGINEERING', 'KANSAS CITY')";
        myCommand.ExecuteNonQuery();
        myTrans.Commit();
        Console.WriteLine("Both records are written to database.");
    }
    catch (Exception e)
    {
        myTrans.Rollback();
        Console.WriteLine(e.ToString());
        Console.WriteLine("Neither record was written to database.");
    }
    finally
    {
        myConnection.Close();
    }
}
I'm getting this error (Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.) when trying to run a stored procedure from C# against a SQL Server 2005 database. I'm not actively or purposefully using transactions or anything, which is what makes this error weird. I can run the stored procedure from Management Studio and it works fine. Other stored procedures also work from C#; it just seems to be this one with issues. The error returns instantly, so it can't be a timeout issue. The code is along the lines of:
SqlCommand cmd = null;
try
{
    // Make sure we are connected to the database
    if (_DBManager.CheckConnection())
    {
        cmd = new SqlCommand();
        lock (_DBManager.SqlConnection)
        {
            cmd.CommandText = "storedproc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Connection = _DBManager.SqlConnection;
            cmd.Parameters.AddWithValue("@param", value);
            int affRows = cmd.ExecuteNonQuery();
            ...
        }
    }
    else
    {
        ...
    }
}
catch (Exception ex)
{
    ...
}
It's really got me stumped. Thanks for any help.
It sounds like there is a TransactionScope somewhere that is unhappy. The _DBManager.CheckConnection and _DBManager.SqlConnection suggest you are keeping a SqlConnection hanging around, which I expect is contributing to this.
To be honest, in most common cases you are better off just using the built-in connection pooling and using your connections locally, i.e.:
using (var conn = new SqlConnection(...)) // or a factory method
{
    // use it here only
}
Here you get a clean SqlConnection, which will be mapped to an unmanaged connection via the pool, i.e. it doesn't create an actual connection each time (but will do a logical reset to clean it up).
This also allows much more flexible use from multiple threads. Using a static connection in a web app, for example, would be horrendous for blocking.
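Applied to your snippet, a minimal sketch (the connection string variable is an assumption) might be:
// A locally scoped, pooled connection per call: no shared state, no lock,
// and nothing left enlisted in a stale transaction.
using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("storedproc", conn))
{
    cmd.CommandType = System.Data.CommandType.StoredProcedure;
    cmd.Parameters.AddWithValue("@param", value);
    conn.Open();
    int affRows = cmd.ExecuteNonQuery();
}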
From the code, it seems you are utilizing an already opened connection. Maybe there's a transaction pending from earlier on the same connection.