I recently started on an existing project that uses the Microsoft.Practices.EnterpriseLibrary.Data objects.
Now I want to execute multiple stored procedures in one transaction (a 1:n insert where all calls have to fail or succeed together), but I don't know how.
Can anyone help me out?
Typical code to execute a stored procedure in this project looks like this:
Database oDatabase = DatabaseFactory.CreateDatabase(CONNECTION_STRING_KEY);
DbCommand oDbCommand = oDatabase.GetStoredProcCommand("upCustomer_Insert");
Int32 iCustomerKey = 0;
oDatabase.AddInParameter(oDbCommand, "Firstname", DbType.String, p_oCustomer.FirstName);
oDatabase.AddInParameter(oDbCommand, "Lastname", DbType.String, p_oCustomer.LastName);
oDatabase.ExecuteNonQuery(oDbCommand);
You need to make use of a DbTransaction:
using (DbConnection connection = db.CreateConnection())
{
    connection.Open();
    DbTransaction transaction = connection.BeginTransaction();
    try
    {
        db.ExecuteNonQuery(transaction, sp1);
        db.ExecuteNonQuery(transaction, sp2);
        transaction.Commit();
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}
Notice how the first parameter to ExecuteNonQuery is the transaction to use.
TransactionScope is not thread safe, though: the ambient transaction does not flow to other threads by default, so from what I've read you can't use it in multi-threaded applications. This is a real PITA overall. MS still seems to not understand how to adequately scale software systems.
You could wrap the calls inside a TransactionScope; see: http://msdn.microsoft.com/en-us/library/system.transactions.transactionscope.aspx
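As a minimal DB-free sketch of the idea (the commented-out `db`/`sp1`/`sp2` names are placeholders for the Enterprise Library calls from the question; any connection opened inside the scope would enlist automatically):

```csharp
using System;
using System.Transactions;

static class TxScopeDemo
{
    // Returns true if an ambient transaction was visible inside the scope.
    public static bool RunInsideScope()
    {
        using (var scope = new TransactionScope())
        {
            bool ambient = Transaction.Current != null;

            // In real code the stored-proc calls would go here, e.g.:
            //   db.ExecuteNonQuery(sp1);
            //   db.ExecuteNonQuery(sp2);

            scope.Complete(); // without this, Dispose() rolls everything back
            return ambient;
        }
    }

    static void Main()
    {
        Console.WriteLine(RunInsideScope());            // True: the scope created an ambient transaction
        Console.WriteLine(Transaction.Current == null); // True: nothing ambient outside the scope
    }
}
```

Note that if `scope.Complete()` is never reached (for example, an exception is thrown), disposing the scope rolls back all enlisted work, so no explicit try/catch with `Rollback()` is needed.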
Related
I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name onto a service which will bulk insert some data into it, display the results of the table, then clean up the table.
What I've tried so far:
1. Use SqlTransaction. Cancelling the transaction works, but it puts my query window into a state where I can't continue working in it:
The transaction active in this session has been committed or aborted by another session
2. Use TransactionScope. Same issue as 1.
3. Manually clean up the table in a finally clause by issuing a DROP TABLE SqlCommand. This doesn't seem to get run, though the SqlContext.Pipe.Send() issued just before it does. It doesn't seem to be a timing issue: if I issue a Thread.Sleep(2000) before printing another line, the second line still prints, whereas with command.ExecuteNonQuery() execution stops before the second line prints.
4. Placing the manual cleanup code into a CER or SafeHandle. This doesn't work, as a CER requires certain guarantees, including not allocating additional memory and not calling methods that lack a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
            // emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));
            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            // drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately SQLCLR does not handle query cancellation very well. However, given the error message, that seems to imply that the cancellation does its own ROLLBACK. Have you tried not using a Transaction within the SQLCLR code but instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps and only give EXECUTE permission to the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
1. Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
2. Create a User from that Asymmetric Key
3. GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
4. Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And, in the event that the cancellation aborts the execution prior to executing ROLLBACK, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and setting to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName) which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual clean up.
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
This approach is probably the best overall since you probably cannot guarantee either that ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled-back, but if someone executes this via SSMS and it aborts without the ROLLBACK, then they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shutdown / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time that the SQL Server starts (due to tempdb being created new, as a copy of model, upon each start of the SQL Server service).
Stepping back, there are probably much better ways to pass the data out of your CLR procedure:
1) you can simply use the SqlContext Pipe to return a resultset without creating a table.
2) you can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a TSQL wrapper procedure to make this convenient.
Anyway, using BEGIN TRAN / COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");
                // emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));
                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for use, this would be an ideal match for sp_bindsession. With sp_bindsession, you would call sp_getbindtoken from the context session, pass the token to the server and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporaries and transactions being transparently propagated.
I am using the SqlConnection class and running into problems with command timeouts expiring.
First off, I am setting a command timeout on the SqlCommand like so:
command.CommandTimeout = 300;
Also, I have ensured that the Execution Timeout setting is set to 0 to ensure that there should be no timeouts on the SQL Management side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;

    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();

        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }
        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify that exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), called inside the foreach in the above using block, performs another query against the same table in the same database. Here is the query it is performing:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) that helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a deadlock situation that is causing your query in the InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord() so I am assuming you are running that in a separate connection. So what happens is:
1. You started a transaction.
2. You truncated the table.
3. InsertCourseLookupRecord starts another connection and tries to insert data into that table, but the table is locked because your transaction isn't committed yet.
4. The connection inside InsertCourseLookupRecord() times out after that connection's default command timeout of 30 seconds.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this change your function definition to:
public static int InsertCourseLookupRecord(string course, SqlCommand cmd)
Take all the connection code out of the function because you're going to use the cmd object. Then when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
You could also just take the code in the InsertCourseLookupRecord and put it inside the for loop instead and then reuse the command object in your using block without needing to pass it to a function at all.
Because you are using two separate SqlConnection objects you are deadlocking yourself due to the SqlTransaction you started in your outer code. The queries in InsertCourseLookupRecord, and maybe in GetAllCourses, get blocked by the TRUNCATE TABLE courses call, which has not yet been committed. They wait for the truncate to be committed and then time out (after the default 30 seconds, since those new connections don't inherit the CommandTimeout of 300 you set on the outer command).
You have a few options.
Pass the SqlConnection and SqlTransaction in to GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction.
Use an "ambient transaction" by getting rid of the SqlTransaction and using a System.Transactions.TransactionScope instead. This causes all connections opened to the server to share one transaction. It can cause maintenance issues, as depending on what the queries are doing it may need to invoke the Distributed Transaction Coordinator, which may be disabled on some computers (from what you showed, you would need the DTC, as you have two open connections at the same time).
The best option is to try to change your code to do option 1; but if you can't, do option 2.
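To illustrate what "ambient" means in option 2, here is a DB-free sketch: the `TraceResource` class below is purely illustrative, standing in for what a real resource manager (such as SqlConnection) does when it enlists in the ambient transaction, and it simply records whether the scope committed or rolled back its work.

```csharp
using System;
using System.Threading;
using System.Transactions;

// Illustrative stand-in for a real resource manager: records the outcome
// that the transaction manager reports to every enlisted resource.
class TraceResource : IEnlistmentNotification
{
    public volatile string Outcome = "pending";

    public void Prepare(PreparingEnlistment e) { e.Prepared(); }
    public void Commit(Enlistment e)           { Outcome = "committed";  e.Done(); }
    public void Rollback(Enlistment e)         { Outcome = "rolledback"; e.Done(); }
    public void InDoubt(Enlistment e)          { Outcome = "indoubt";    e.Done(); }
}

static class AmbientTxDemo
{
    public static TraceResource Run(bool complete)
    {
        var res = new TraceResource();
        using (var scope = new TransactionScope())
        {
            // Everything enlisted in the ambient transaction shares one outcome.
            Transaction.Current.EnlistVolatile(res, EnlistmentOptions.None);
            if (complete) scope.Complete();
        } // Dispose() commits (if Complete was called) or rolls back
        return res;
    }

    // Notifications may arrive on another thread; poll briefly for the outcome.
    public static string Outcome(TraceResource r)
    {
        for (int i = 0; i < 200 && r.Outcome == "pending"; i++) Thread.Sleep(10);
        return r.Outcome;
    }

    static void Main()
    {
        Console.WriteLine(Outcome(Run(true)));  // committed
        Console.WriteLine(Outcome(Run(false))); // rolledback
    }
}
```

With real SqlConnections the same principle applies: every connection opened inside the scope enlists, and disposing the scope without Complete() rolls all of them back together.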
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string)
Please review your connection string; that's the only possibility I can think of.
I have a program that accesses a database and executes different methods that make database calls.
I used one connection for everything, but it caused a timeout while executing a long task: I basically had to go through more than 6000 records and execute a stored procedure for each. I think that caused the timeout, since I used only one database connection for everything.
Then I changed the code so that I open and close the connection for every method call, with the "using" approach.
How should I handle a method that will be called a lot? Should I open/close the connection every time I call that method?
Or is there a different approach?
I do something like this:
foreach (var record in MyCollection) // 6000 records
{
    using (var connection = new SqlConnection(conString))
    {
        singledata = GetSingleData(record);
    }
}
Here is GetSingleData():
private byte[] GetSingleData(MyObject Data)
{
    byte[] singleData = null;
    using (SqlCommand ......)
    {
        try
        {
            .......
            // executing stored proc to get just a single row
            reader = command.ExecuteReader();
            while (reader.Read())
            {
                singleData = (byte[])reader["ColumnName"];
            }
        }
        catch (SqlException ex)
        {
            if (!reader.IsClosed)
                reader.Close();
        }
    }
    return singleData;
}
Is this efficient, or could I set up some kind of counter and, for every 500 records, check whether the connection is closed and reopen it if it is?
Thanks!
Try using a persistent connection. Here's a post that might help if you want to try to tune your system (for MySQL):
http://www.mysqlperformanceblog.com/2011/04/19/mysql-connection-timeouts/
Hope that helps.
There is no such thing as the one right way to do something. It all depends. In cases where agility is a must and you need to create ad-hoc solutions, opening and closing a connection in each method call might not be good in theory, but it is accepted in practice.
I urge you to read about these terms and concepts:
Connection pooling
Bulk operations (bulk update, bulk insert)
They might help you in getting more performance.
I've a .NET 3.5 WinForms application in which I'm running multiple steps.
Each step does some calculation and calls one or more stored procs. Some of these stored procs do multiple updates/inserts in the tables in the Oracle database.
The app UI has "process" and "cancel process" buttons for each step. If the user hits the cancel process button, the application is supposed to roll back the database to its previous state, i.e. make the transaction atomic.
So, my question here is: is this possible? And if yes, what do I need to take care of on the app and DB side to achieve this atomicity?
Do I need to use .NET's transaction API here? Also, is it required to use BEGIN/COMMIT TRANSACTION blocks in those stored procs?
Please share your thoughts.
Thanks.
First, yes; and second, your C# app (specifically the task layer) should manage the transactions. The sprocs should NOT be transactional unless you can guarantee the ability to do nested transactions that roll back when the parent rolls back (and I can't speak on that point WRT Oracle).
http://msdn.microsoft.com/en-us/library/system.data.oracleclient.oracleconnection_methods%28v=VS.71%29.aspx
With your OracleConnection object, before you begin all your work, call BeginTransaction(). Then do all your OracleCommand operations with that connection.
Then if you call Transaction.RollBack or Transaction.Commit all sproc work you do should roll back or commit.
Example right from the link:
public void RunOracleTransaction(string myConnString)
{
    OracleConnection myConnection = new OracleConnection(myConnString);
    myConnection.Open();

    OracleCommand myCommand = myConnection.CreateCommand();
    OracleTransaction myTrans;

    // Start a local transaction
    myTrans = myConnection.BeginTransaction(IsolationLevel.ReadCommitted);
    // Assign transaction object for a pending local transaction
    myCommand.Transaction = myTrans;

    try
    {
        myCommand.CommandText = "INSERT INTO Dept (DeptNo, Dname, Loc) values (50, 'TECHNOLOGY', 'DENVER')";
        myCommand.ExecuteNonQuery();
        myCommand.CommandText = "INSERT INTO Dept (DeptNo, Dname, Loc) values (60, 'ENGINEERING', 'KANSAS CITY')";
        myCommand.ExecuteNonQuery();
        myTrans.Commit();
        Console.WriteLine("Both records are written to database.");
    }
    catch (Exception e)
    {
        myTrans.Rollback();
        Console.WriteLine(e.ToString());
        Console.WriteLine("Neither record was written to database.");
    }
    finally
    {
        myConnection.Close();
    }
}
I'm getting this error (Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.) when trying to run a stored procedure from C# on a SQL Server 2005 database. I'm not actively/purposefully using transactions or anything, which is what makes this error weird. I can run the stored procedure from management studio and it works fine. Other stored procedures also work from C#, it just seems to be this one with issues. The error returns instantly, so it can't be a timeout issue. The code is along the lines of:
SqlCommand cmd = null;
try
{
    // Make sure we are connected to the database
    if (_DBManager.CheckConnection())
    {
        cmd = new SqlCommand();
        lock (_DBManager.SqlConnection)
        {
            cmd.CommandText = "storedproc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Connection = _DBManager.SqlConnection;
            cmd.Parameters.AddWithValue("@param", value);
            int affRows = cmd.ExecuteNonQuery();
            ...
        }
    }
    else
    {
        ...
    }
}
catch (Exception ex)
{
    ...
}
It's really got me stumped. Thanks for any help.
It sounds like there is a TransactionScope somewhere that is unhappy. The _DBManager.CheckConnection and _DBManager.SqlConnection sounds like you are keeping a SqlConnection hanging around, which I expect will contribute to this.
To be honest, in most common cases you are better off just using the inbuilt connection pooling, and using your connections locally - i.e.
using (var conn = new SqlConnection(...)) // or a factory method
{
    // use it here only
}
Here you get a clean SqlConnection, which will be mapped to an unmanaged connection via the pool, i.e. it doesn't create an actual connection each time (but will do a logical reset to clean it up).
This also allows much more flexible use from multiple threads. Using a static connection in a web app, for example, would be horrendous for blocking.
From the code it seems that you are utilizing an already-opened connection. Maybe there's a transaction left pending on that same connection from earlier.