Please check the code sample below. I want the A-type process and the B-type process to either both complete or both be rolled back. Will the code below achieve that?
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, new TimeSpan(0, 30, 0)))
{
    con.Open();
    //do A type process
    con.Close();

    con.Open();
    //do B type process
    con.Close();

    scope.Complete();
}
P.S.: Please don't suggest using a single connection. I use the 3-tier architecture from this link (http://geekswithblogs.net/edison/archive/2009/04/05/a-simple-3-tier-layers-application-in-asp.net.aspx), and the A and B type processes are called through a function (in a generic data class) that opens and closes its own connection automatically. The code above is an interpretation of my actual code.
With TransactionScope (which escalates to the DTC when more than one connection or database takes part), the transaction acts as a layer between your data layer and the database: nothing is committed until you call .Complete() and the scope is disposed. It really doesn't matter which connection you use or how many databases are involved in the transaction.
Just make sure you call .Complete() at the end of the transaction. You can even nest transaction scopes:
Scope1
    Scope2
        Scope3
In the above, the data is only committed to the database when Scope1 calls Complete(), even though the child scopes also call Complete().
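To illustrate the nesting, here is a minimal sketch (DoWorkA and DoWorkB are hypothetical data-access calls that open and close their own connections): the inner scopes simply join the ambient transaction, and nothing is committed until the outermost scope has called Complete() and been disposed.

using System;
using System.Transactions;

static class NestedScopesSketch
{
    static void Run()
    {
        using (var scope1 = new TransactionScope(TransactionScopeOption.Required))
        {
            using (var scope2 = new TransactionScope(TransactionScopeOption.Required))
            {
                DoWorkA();
                scope2.Complete();   // only votes "yes"; nothing is committed yet
            }
            using (var scope3 = new TransactionScope(TransactionScopeOption.Required))
            {
                DoWorkB();
                scope3.Complete();
            }
            scope1.Complete();       // the commit happens when scope1 is disposed after this
        }
    }

    static void DoWorkA() { /* open a connection, run commands, close it */ }
    static void DoWorkB() { /* open a connection, run commands, close it */ }
}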
I want to use an Oracle transaction across two different methods in C#, but I can't figure out how to do it. The code follows a 3-layer architecture.
Example:
In the code-behind file, on the Cancel button click:
On_Cancel_Button_Click()
{
    CancelMethod();   // cancelling the booked ticket
    if (some booked ticket is in waiting)
        ConfirmMethod();   // to confirm the waiting-list ticket
}
Cancel & Confirm both method is defined in Aspx.cs file and both are calling BAL class method then DAL Class Method, In BAL, DAL both are calling different Method (In DAL all ADO.Net Code written).
So how to implement Transaction for this scenario.
The simplest way is to handle the transaction in the outer scope.
For example:
void CancelAndConfirmTicket()
{
    using (var con = new OracleConnection(...))
    {
        con.Open();
        using (var tran = con.BeginTransaction())
        {
            Cancel(con);
            Confirm(con);
            tran.Commit();
        }
    }
}
There are other patterns for sharing a connection between methods (like dependency injection) or sharing a transaction (like TransactionScope), but the idea is the same: you define the scope and logic of the transaction in an outer layer of the application and "enlist" the inner layers in it.
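For the 3-layer setup described in the question, a TransactionScope in the outer layer is one sketch of that idea. The following is only an illustration (the BAL/DAL stand-ins are hypothetical); it assumes the Oracle provider is allowed to enlist in ambient transactions (Enlist=true, the ODP.NET default) and that MSDTC is available if the transaction gets promoted to a distributed one.

using System;
using System.Transactions;

static class TicketService
{
    // Hypothetical BAL/DAL stand-ins: in the real 3-layer code each of these
    // opens and closes its own OracleConnection internally.
    static void Cancel(int ticketId) { /* cancel the booked ticket */ }
    static void ConfirmWaiting(int ticketId) { /* confirm the waiting-list ticket */ }

    public static void CancelAndConfirmTicket(int ticketId)
    {
        using (var scope = new TransactionScope(TransactionScopeOption.Required))
        {
            // Both calls enlist in this ambient transaction even though they
            // manage their own connections.
            Cancel(ticketId);
            ConfirmWaiting(ticketId);

            // If an exception prevents reaching Complete(), everything rolls
            // back when the scope is disposed.
            scope.Complete();
        }
    }
}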
I'm trying to create a SQL CLR stored procedure that will create a table, pass the table name onto a service which will bulk insert some data into it, display the results of the table, then clean up the table.
What I've tried so far:
1. Using SqlTransaction. Cancelling the transaction works, but it leaves my query window in a state where I can't continue working in it:
The transaction active in this session has been committed or aborted by another session
2. Using TransactionScope. Same issue as 1.
3. Manually cleaning up the table in a finally clause by issuing a DROP TABLE SqlCommand. The command doesn't seem to get run, though the SqlContext.Pipe.Send() issued just before it does. It doesn't seem to be related to any time constraint: if I issue a Thread.Sleep(2000) before printing another line, the second line still prints, whereas with the command.ExecuteNonQuery() in place execution stops before the second line is printed.
4. Placing the manual cleanup code into a CER or SafeHandle. This doesn't work, as a CER requires certain guarantees, including not allocating additional memory and not calling methods that aren't decorated with a ReliabilityContract.
Am I missing something obvious here? How do I handle the user cancelling their query?
Edit: There were multiple iterations of the code for each scenario, but the general actions taken are along the lines of the following:
[SqlProcedure]
public static void GetData(SqlString code)
{
    Guid guid = Guid.NewGuid();
    using (var connection = new SqlConnection("context connection=true"))
    {
        connection.Open();
        try
        {
            SqlContext.Pipe?.Send("Constrain");
            SqlCommand command1 = new SqlCommand($"CREATE TABLE qb.{code}_{guid:N} (Id INT)", connection);
            command1.ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");

            //emulate service call
            Thread.Sleep(TimeSpan.FromSeconds(10));

            SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
        }
        finally
        {
            SqlContext.Pipe?.Send("1");
            //drop table here instead of sleep
            Thread.Sleep(2000);
            SqlContext.Pipe?.Send("2");
        }
    }
}
Unfortunately, SQLCLR does not handle query cancellation very well. However, the error message seems to imply that the cancellation does its own ROLLBACK. Have you tried not using a transaction within the SQLCLR code but instead handling it from outside? Such as:
BEGIN TRAN;
EXEC SQLCLR_Stored_Procedure;
IF (@@TRANCOUNT > 0) ROLLBACK TRAN;
The workflow noted above would need to be enforced. This can be done rather easily by creating a wrapper T-SQL Stored Procedure that executes those 3 steps, and granting EXECUTE permission only on the wrapper Stored Procedure. If permissions are then needed for the SQLCLR Stored Procedure, that can be accomplished rather easily using module signing:
Create an Asymmetric Key in the same DB as the SQLCLR Stored Procedure
Create a User from that Asymmetric Key
GRANT that Key-based User EXECUTE permission on the SQLCLR Stored Procedure
Sign the wrapper T-SQL Stored Procedure, using ADD SIGNATURE, with that Asymmetric Key
Those 4 steps allow the T-SQL wrapper proc to execute the SQLCLR proc, while the actual application Login can only execute the T-SQL wrapper proc :-). And in the event that the cancellation aborts the execution prior to the ROLLBACK being executed, the Transaction should be automatically rolled back when the connection closes.
Also, do you have XACT_ABORT set to ON or OFF? UPDATE: O.P. states that it is set to OFF, and setting to ON did not seem to behave any differently.
Have you tried checking the connection state in the finally block? I am pretty sure that the SqlConnection is Closed upon the cancellation. You could try the following approaches, both in the finally block:
Test for the connection state and if Closed, re-open the SqlConnection and then execute the non-query command.
UPDATE: O.P. states that the connection is still open. Ok, how about closing it and re-opening it?
UPDATE 2: O.P. tested and found that the connection could not be re-opened.
Since the context is still available, as proven by your print commands working, use something like SqlContext.Pipe.ExecuteAndSend(new SqlCommand("DROP TABLE..;"));
UPDATE: O.P. states that this did not work.
OR, since you create a guaranteed unique table name in the code, you can try creating the table as a global temporary table (i.e. prefixed with two pound-signs: ##TableName) which will a) be available to the bulk import process, and b) clean itself up when the connection fully closes. In this approach, you technically wouldn't need to perform any manual clean up.
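A sketch of what that variant could look like inside the procedure (untested against the actual bulk-insert service; the single Id column is illustrative only): only the table name changes, and tempdb takes care of dropping it once the session fully goes away.

using System;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void GetDataGlobalTemp(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        string tableName = $"##{code}_{guid:N}";   // global temp table instead of a permanent qb. table

        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            new SqlCommand($"CREATE TABLE {tableName} (Id INT)", connection).ExecuteNonQuery();
            SqlContext.Pipe?.Send($"Created {tableName}");

            // hand tableName to the bulk-insert service here, then read the results;
            // no manual DROP TABLE is needed even if the query is cancelled
        }
    }
}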
Of course, when Connection Pooling is enabled, the automatic cleanup happens only after the connection is re-opened and the first command is executed. In order to force an immediate cleanup, you would have to connect to SQL Server with Connection Pooling disabled. Is it possible to use a different Connection String just when this Stored Procedure is to be executed that includes Pooling=false;? Given how this Stored Procedure is being used, it does not seem like you would suffer any noticeable performance degradation from disabling Connection Pooling on just this one specific call. To better understand how Connection Pooling – enabled or disabled – affects the automatic cleanup of temporary objects, please see the blog post I just published that details this very behavior:
Sessions, Temporary Objects, and the Afterlife
This approach is probably the best overall since you probably cannot guarantee either that ROLLBACK would be executed (first approach mentioned) or that a finally clause would be executed (assuming you ever got that to work). In the end, uncommitted Transactions will be rolled-back, but if someone executes this via SSMS and it aborts without the ROLLBACK, then they are still in an open Transaction and might not be aware of it. Also, think about the connection being forcibly closed, or the Session being killed, or the server being shutdown / restarted. In those cases tempdb is your friend, whether by using a global temporary table, or at the very least creating the permanent Table in tempdb so that it is automatically removed the next time that the SQL Server starts (due to tempdb being created new, as a copy of model, upon each start of the SQL Server service).
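If you go the Pooling=false route mentioned above, the calling code could use a dedicated, non-pooled connection just for this procedure; the connection string and parameter below are placeholders, not the poster's real values.

using System.Data.SqlClient;

static class GetDataCaller
{
    public static void Run()
    {
        // Pooling=false means the connection really closes on Dispose, so the
        // global temp table is cleaned up immediately rather than when the
        // pooled connection is next reused.
        const string connectionString =
            "Data Source=.;Initial Catalog=MyDb;Integrated Security=true;Pooling=false;";

        using (var con = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("EXEC dbo.GetData @code;", con))
        {
            cmd.Parameters.AddWithValue("@code", "ABC");
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }
}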
Stepping back, there are probably much better ways to pass the data out of your CLR procedure:
1) You can simply use the SqlContext Pipe to return a result set without creating a table at all (see the sketch below).
2) You can create a temp table (#) in the calling code and access it from inside the CLR procedure. You might want to introduce a T-SQL wrapper procedure to make this convenient.
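A minimal sketch of option 1 (the single Id column and the loop are placeholders for the real data): rows are streamed straight back to the client through SqlContext.Pipe, so no table is ever created.

using System.Data;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [SqlProcedure]
    public static void GetDataViaPipe(SqlString code)
    {
        var record = new SqlDataRecord(new SqlMetaData("Id", SqlDbType.Int));

        SqlContext.Pipe.SendResultsStart(record);   // begins the result set using the record's schema
        for (int i = 0; i < 3; i++)                 // stand-in for the real data source
        {
            record.SetInt32(0, i);
            SqlContext.Pipe.SendResultsRow(record);
        }
        SqlContext.Pipe.SendResultsEnd();
    }
}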
Anyway, using BEGIN TRAN / COMMIT | ROLLBACK worked for me:
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
using System.Threading;

static class SqlConnectionExtensions
{
    public static DataTable ExecuteDataTable(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        using (var dr = cmd.ExecuteReader())
        {
            var dt = new DataTable();
            dt.Load(dr);
            return dt;
        }
    }

    public static int ExecuteNonQuery(this SqlConnection con, string sql, params SqlParameter[] parameters)
    {
        var cmd = new SqlCommand(sql, con);
        foreach (var p in parameters)
        {
            cmd.Parameters.Add(p);
        }
        return cmd.ExecuteNonQuery();
    }
}

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void GetData(SqlString code)
    {
        Guid guid = Guid.NewGuid();
        using (var connection = new SqlConnection("context connection=true"))
        {
            connection.Open();
            try
            {
                connection.ExecuteNonQuery("begin transaction;");
                SqlContext.Pipe?.Send("Constrain");
                connection.ExecuteNonQuery($"CREATE TABLE qb.{code}_{guid:N} (Id INT)");
                SqlContext.Pipe?.Send($"Create: qb.{code}_{guid:N}");

                //emulate service call
                Thread.Sleep(TimeSpan.FromSeconds(10));

                SqlContext.Pipe?.Send($"Done: qb.{code}_{guid:N}");
                connection.ExecuteNonQuery("commit transaction");
            }
            catch
            {
                connection.ExecuteNonQuery("rollback;");
                throw;
            }
        }
    }
}
While it is not something advised for general use, this would be an ideal match for sp_bindsession. With sp_bindsession, you would call sp_getbindtoken from the context session, pass the token to the service, and call sp_bindsession from the service connection.
Afterwards, the two connections behave "as one", with temporaries and transactions being transparently propagated.
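As a rough sketch of that mechanism (sp_getbindtoken and sp_bindsession are deprecated, and whether the context connection permits sp_getbindtoken is something you would need to verify), the token hand-off could look roughly like this:

using System.Data;
using System.Data.SqlClient;

static class BindSessionSketch
{
    // Inside the SQLCLR procedure: ask the context session for a bind token.
    public static string GetBindToken(SqlConnection contextConnection)
    {
        using (var cmd = new SqlCommand("EXEC sp_getbindtoken @token OUTPUT;", contextConnection))
        {
            var token = cmd.Parameters.Add("@token", SqlDbType.VarChar, 255);
            token.Direction = ParameterDirection.Output;
            cmd.ExecuteNonQuery();
            return (string)token.Value;
        }
    }

    // In the external service: bind its own connection to that session, so the
    // caller's transaction and temporary objects become visible to the service.
    public static void BindSession(SqlConnection serviceConnection, string bindToken)
    {
        using (var cmd = new SqlCommand("EXEC sp_bindsession @token;", serviceConnection))
        {
            cmd.Parameters.AddWithValue("@token", bindToken);
            cmd.ExecuteNonQuery();
        }
    }
}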
I have an old application that uses Entity Framework 4 (ObjectContext); EF 6 has DbContext.
In EF4 I can explicitly open the database connection and do something like the following:
using (var context = new EmployeeContext())
{
    context.Connection.Open();
    // and then here I am accessing some database objects
    // and then calling context.SaveChanges();
}
In some other files I also have code like the one below, which does not call context.Connection.Open():
using (var context = new EmployeeContext())
{
    // here I am accessing some database objects
    // and then calling context.SaveChanges();
}
I know both of the above will work. The application is used by quite a number of users (around 1,200 concurrent users at peak times).
I went to my database server and ran the query below during peak usage of the application:
SELECT
    DB_NAME(dbid) AS DBName,
    COUNT(dbid) AS NumberOfConnections,
    loginame AS LoginName
FROM
    sys.sysprocesses
WHERE
    dbid > 0
GROUP BY
    dbid, loginame
It showed around 5,000 open connections at that time, and that is when I started wondering: is it because context.Connection.Open() is called without an explicit context.Connection.Close(), so connections stay open and increase the load on the database server, even though context.Connection.Open() is enclosed inside a using block?
Thoughts, please?
If you look at this code:
using (var context = new EmployeeContext())
{
    context.Connection.Open();
    // and then here I am accessing some database objects
    // and then calling context.SaveChanges();
}
As soon as context goes out of scope of the using statement, context.Dispose() is called, which in turn closes the connection.
The explicit call to .Open() does not require an explicit call to .Close(), though you do want to make sure you leave the using block as soon as you no longer need the connection to be open.
I'm getting this error (Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.) when trying to run a stored procedure from C# against a SQL Server 2005 database. I'm not actively or purposefully using transactions or anything, which is what makes this error weird. I can run the stored procedure from Management Studio and it works fine. Other stored procedures also work from C#; it just seems to be this one with issues. The error returns instantly, so it can't be a timeout issue. The code is along the lines of:
SqlCommand cmd = null;
try
{
    // Make sure we are connected to the database
    if (_DBManager.CheckConnection())
    {
        cmd = new SqlCommand();
        lock (_DBManager.SqlConnection)
        {
            cmd.CommandText = "storedproc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Connection = _DBManager.SqlConnection;
            cmd.Parameters.AddWithValue("@param", value);
            int affRows = cmd.ExecuteNonQuery();
            ...
        }
    }
    else
    {
        ...
    }
}
catch (Exception ex)
{
    ...
}
It's really got me stumped. Thanks for any help
It sounds like there is a TransactionScope somewhere that is unhappy. The _DBManager.CheckConnection and _DBManager.SqlConnection calls suggest that you are keeping a SqlConnection hanging around, which I expect contributes to this.
To be honest, in most common cases you are better off just using the inbuilt connection pooling, and using your connections locally - i.e.
using (var conn = new SqlConnection(...)) // or a factory method
{
    // use it here only
}
Here you get a clean SqlConnection, which will be mapped to an unmanaged connection via the pool, i.e. it doesn't create an actual connection each time (but will do a logical reset to clean it up).
This also allows much more flexible use from multiple threads. Using a static connection in a web app, for example, would be horrendous for blocking.
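A minimal sketch of that pattern (the connection string, table, and method name are placeholders): every call opens its own pooled connection and disposes it, so nothing is shared across threads and no lock is needed.

using System.Data.SqlClient;

static class OrderRepository
{
    const string ConnectionString = "Data Source=.;Initial Catalog=MyDb;Integrated Security=true;";

    public static int CountOrders()
    {
        // Each call grabs a connection from the pool and returns it on Dispose.
        using (var conn = new SqlConnection(ConnectionString))
        using (var cmd = new SqlCommand("SELECT COUNT(*) FROM dbo.Orders;", conn))
        {
            conn.Open();
            return (int)cmd.ExecuteScalar();
        }
    }
}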
From the code, it seems that you are reusing an already-opened connection. Maybe there is a transaction still pending from earlier on that same connection.
I am getting the following error when I try to call a stored procedure that contains a SELECT Statement:
The operation is not valid for the state of the transaction
Here is the structure of my calls:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            //do my first add update statement
            //do my call to the select statement sp
            bool DoesRecordExist = this.SelectStatementCall(id);
        }
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring)) //breaks on this line
    {
        //create parameters
        //
    }
}
Is the problem with me creating another connection to the same database within the transaction?
After doing some research, it seems I cannot have two connections open to the same database within a TransactionScope block. I needed to modify my code to look like this:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            //do my first add update statement
        }

        //moved the method call out of the first SQLServer using statement
        bool DoesRecordExist = this.SelectStatementCall(id);
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring))
    {
        //create parameters
    }
}
When I encountered this exception, there was an InnerException "Transaction Timeout". Since this was during a debug session, when I halted my code for some time inside the TransactionScope, I chose to ignore this issue.
When this specific exception with a timeout appears in deployed code, I think the following section in your .config file will help you out:
<system.transactions>
<machineSettings maxTimeout="00:05:00" />
</system.transactions>
I also came across the same problem. I changed the transaction timeout to 15 minutes and it works.
I hope this helps.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = new TimeSpan(0, 15, 0);

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    sp1();
    sp2();
    ...
}
For any wanderer who comes across this in the future: if your application and database are on different machines and you are getting the above error, especially when using TransactionScope, enable network DTC access. The steps to do this are:
Add firewall rules to allow your machines to talk to each other.
Ensure the Distributed Transaction Coordinator service is running.
Enable network DTC access: run dcomcnfg, go to Component Services > My Computer > Distributed Transaction Coordinator > Local DTC, right-click it and choose Properties.
On the Security tab, enable network DTC access.
Important: do not edit or change the user account and password in the DTC Logon account field; leave it as is, or you may end up re-installing Windows.
I've encountered this error when my Transaction is nested within another. Is it possible that the stored procedure declares its own transaction or that the calling function declares one?
For me, this error came up when I was trying to roll back a transaction block after encountering an exception, while inside another transaction block.
All I had to do to fix it was to remove my inner transaction block.
Things can get quite messy when using nested transactions, best to avoid this and just restructure your code.
In my case, the solution was neither to increase the timeout of the TransactionScope nor to increase the maxTimeout value of the machineSettings element under system.transactions in the machine.config file.
Something strange was going on here, because the error only appeared when the volume of data was very high.
The problem turned out to be that the code inside the transaction contained many foreach loops performing updates against different tables (I had to fix this in code developed by other people). With only a few records in the tables the error did not appear, but as the number of records grew, the error showed up.
In the end, the solution was to change from a single transaction to several separate ones, one for each of the foreach loops that had been inside the original transaction.
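A minimal sketch of that restructuring (entity and method names are hypothetical): instead of one TransactionScope wrapping all of the foreach loops, each iteration gets its own smaller transaction, so no single transaction grows without bound.

using System.Collections.Generic;
using System.Transactions;

static class BatchUpdater
{
    public static void UpdateAll(IEnumerable<int> orderIds)
    {
        foreach (var id in orderIds)
        {
            // One small transaction per batch of related updates.
            using (var scope = new TransactionScope(TransactionScopeOption.Required))
            {
                UpdateOrder(id);        // hypothetical update against one table
                UpdateOrderItems(id);   // hypothetical update against another table
                scope.Complete();
            }
        }
    }

    static void UpdateOrder(int id) { /* open a connection, run UPDATE, close */ }
    static void UpdateOrderItems(int id) { /* open a connection, run UPDATE, close */ }
}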
You can't have two transactions open at the same time. What I do is call transaction.Complete() before returning results that will be used by the second transaction ;)
I updated some proprietary 3rd party libraries that handled part of the database process, and got this error on all calls to save. I had to change a value in the web.config transaction key section because (it turned out) the referenced transaction scope method had moved from one library to another.
There were other errors previously connected with that key, but I got rid of them by commenting the key out (I know I should have been suspicious at that point, but I assumed it was now redundant since it wasn't in the shiny new library).