I am using Microsoft.Practices.EnterpriseLibrary.Data for database-related activities in my application. I have written some code where I execute a deletion and also update some records using two ExecuteNonQuery calls. I want to put these in a single transaction. How can I implement that using Microsoft.Practices.EnterpriseLibrary.Data?
What modification is required in the following code to use a transaction?
The code is as follows:
int iUpdate = 0;
Database db = DatabaseFactory.CreateDatabase(dbRegion);
try
{
string sSQL = "DELETE FROM table1 WHERE Number = 1 ";
db.ExecuteNonQuery(CommandType.Text, sSQL);
string sqlCommand = "spInsertToTable";
DbCommand dbCommand = db.GetStoredProcCommand(sqlCommand);
iUpdate = db.ExecuteNonQuery(dbCommand);
}
catch (Exception ex)
{
throw;
}
You can use the TransactionScope class for this, as sketched below.
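For example, here is a minimal sketch that wraps the two calls from the question in an ambient transaction (dbRegion, table1 and spInsertToTable are the names from the question); the connections that the Enterprise Library Database opens should enlist in the TransactionScope automatically:

using System.Data;
using System.Data.Common;
using System.Transactions;
using Microsoft.Practices.EnterpriseLibrary.Data;

Database db = DatabaseFactory.CreateDatabase(dbRegion);
using (TransactionScope scope = new TransactionScope())
{
    db.ExecuteNonQuery(CommandType.Text, "DELETE FROM table1 WHERE Number = 1");

    DbCommand dbCommand = db.GetStoredProcCommand("spInsertToTable");
    int iUpdate = db.ExecuteNonQuery(dbCommand);

    // If an exception is thrown before this point, Complete() is never called
    // and both commands are rolled back when the scope is disposed.
    scope.Complete();
}

Be aware that because each ExecuteNonQuery may open its own connection, the transaction can escalate to MSDTC depending on the SQL Server version and configuration.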
I have referred to this to perform a rollback operation in my WPF C# application. The code that I tried is as follows:
using (OdbcConnection connection = new OdbcConnection("connectionString"))
{
OdbcCommand command = new OdbcCommand();
OdbcTransaction transaction = null;
command.Connection = connection;
try
{
connection.Open();
transaction = connection.BeginTransaction();
command.Connection = connection;
command.Transaction = transaction;
command.CommandText = "INSERT INTO TableA (A, B, C) VALUES (10,10,10)";
command.ExecuteNonQuery();
command.CommandText = "NSERT INTO TableB (D,E,F) VALUES (20,20,20)";
command.ExecuteNonQuery();
transaction.Commit();
}
catch(Exception ex)
{
Console.WriteLine(ex.Message);
try
{
transaction.Rollback();
}
catch
{
}
}
}
Intentionally, the second query has been made wrong. My intention is that when I enter the catch block and call transaction.Rollback(), the values added by the first query should not be present in TableA, since Rollback was called. However, this is not the case: the values are not rolled back and are present in TableA. I have searched various resources online with no luck. I cannot use SqlConnection instead of OdbcConnection; my application does not support that. Is there any workaround or alternative method that can achieve what I have in mind? Please help me out.
You basically have the MSDN example. I once had another problem with ODBC, and the issue was with the ODBC vendor drivers. I would strongly recommend checking that possibility.
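As a quick sanity check on which vendor driver is actually being loaded (a sketch only; "connectionString" is the placeholder from the question), OdbcConnection exposes the driver name and server version. If the backend or driver does not actually support transactions, Rollback can silently have no effect:

using System;
using System.Data.Odbc;

using (OdbcConnection connection = new OdbcConnection("connectionString"))
{
    connection.Open();
    // Which ODBC driver DLL and backend version are really handling the connection.
    Console.WriteLine(connection.Driver);
    Console.WriteLine(connection.ServerVersion);
}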
OK, so I have a web form and 5 FileUpload controls. A user can upload any number of files from 1 to 5, but if any one of these files does not get uploaded, then I want to roll back everything.
For example:
If the user has selected 4 files and something unexpected occurs at the 4th, then I want to remove or roll back all of the previous 3 file uploads.
I tried this:
try
{
using (TransactionScope scope = new TransactionScope())
{
dboperation dbinsert=new dboperation();
if (file1.ContentLength > 0)
{
.......
.......
dbinsert.insert(bytes, lastid, file1.FileName);
}
if (file2.ContentLength > 0)
{
.......
.......
dbinsert.insert(bytes, lastid, file2.FileName);
}
if (file3.ContentLength > 0)
{
.......
.......
dbinsert.insert(bytes, lastid, file3.FileName);
}//till ...file5
scope.Complete();
}//end of transactionscope
}
catch { }
'dboperation' is a class in a C# file, and its insert method executes an insert stored procedure. My guess is that I need to use TransactionScope, but I am not sure if I am correct, and even if I am, how am I supposed to achieve this?
You need to implement a transaction. You should start the transaction before the first insert and catch any errors that occur. In case of an error you have to roll back the transaction, and if all goes well you can commit it.
You should also move your connection outside of dboperation, or add a method to dboperation that takes the connection from outside and uses it, as sketched below.
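A minimal sketch of that idea; the stored procedure name (spInsertFile) and the parameter names are hypothetical, since the original dboperation code is not shown:

using System.Data;
using System.Data.SqlClient;

public class dboperation
{
    // Uses the caller's connection and transaction instead of opening its own,
    // so every insert takes part in the same transaction.
    public void insert(SqlConnection con, SqlTransaction tran, byte[] bytes, int lastid, string fileName)
    {
        using (SqlCommand cmd = new SqlCommand("spInsertFile", con, tran))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@Data", bytes);
            cmd.Parameters.AddWithValue("@LastId", lastid);
            cmd.Parameters.AddWithValue("@FileName", fileName);
            cmd.ExecuteNonQuery();
        }
    }
}

The caller opens one SqlConnection, begins one SqlTransaction, passes both into each insert call, and commits only after every selected file has been inserted; any exception triggers a rollback so none of the earlier uploads are kept.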
For this you need to use a transaction, something like this. Here is an example:
class WithTransaction
{
public WithTransaction()
{
string FirstQuery = "INSERT INTO Table1 VALUES('Vineeth',24)";
string SecondQuery = "INSERT INTO Table2 VALUES('HisAddress')";
int ErrorVar = 0;
using (SqlConnection con = new SqlConnection("your connection string"))
{
try
{
SqlCommand ObjCommand = new SqlCommand(FirstQuery, con);
SqlTransaction trans;
con.Open();
trans = con.BeginTransaction();
ObjCommand.Transaction = trans;
//Executing first query
//Whatever operation on your database, do it here
ObjCommand.ExecuteNonQuery(); //Executed first query
ObjCommand.CommandText = SecondQuery;
ObjCommand.ExecuteNonQuery(); //Executed second query
//Everything went fine, so commit
ObjCommand.Transaction.Commit();
}
catch (Exception ex)
{
Console.WriteLine("Error but we are rollbacking");
ObjCommand.Transaction.Rollback();
}
con.Close();
}
}
}
Or you can use TransactionScope; check this link:
TransactionScope
I hope this will help you.
I am coding a SQL Server CE application in C#.
Recently I have been converting my code to use using statements, as they are much cleaner. In my code I have a GetLastInsertedID function which is very simple - it returns the last inserted ID. The working version is as follows:
public static int GetLastInsertedID()
{
int key = 0;
try
{
SqlCeCommand cmd = new SqlCeCommand("SELECT CONVERT(int, @@IDENTITY)", DbConnection.ceConnection);
key = (int)cmd.ExecuteScalar();
}
catch (Exception ex)
{
MessageBox.Show("Could not get last inserted ID. " + ex.Message);
key = 0;
}
return key;
}
Below is the code that does NOT work once I wrap it in using statements:
public static int GetLastInsertedID()
{
int key = 0;
try
{
using (SqlCeConnection conn = new SqlCeConnection(DbConnection.compact))
{
conn.Open();
using (SqlCeCommand cmd = new SqlCeCommand("SELECT CONVERT(int, @@IDENTITY)", conn))
key = (int)cmd.ExecuteScalar();
}
}
catch (Exception ex)
{
MessageBox.Show("Could not get last inserted ID. " + ex.Message);
key = 0;
}
return key;
}
The error that I'm getting is "Specified cast is not valid." Although this error is usually self-explanatory, I cannot see why I would get it in the second block of code but not the first. The error occurs on the line key = (int)cmd.ExecuteScalar();.
What am I doing wrong with the second block of code?
From the @@IDENTITY documentation:
@@IDENTITY and SCOPE_IDENTITY will return the last identity value generated in any table in the current session.
I think your change now starts a new session for each using statement. Therefore @@IDENTITY is null.
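A minimal sketch of keeping the INSERT and the identity query in the same session; the Widgets table is hypothetical, and DbConnection.compact is the connection string from the question:

using System.Data.SqlServerCe;

using (SqlCeConnection conn = new SqlCeConnection(DbConnection.compact))
{
    conn.Open();

    using (SqlCeCommand insert = new SqlCeCommand("INSERT INTO Widgets (Name) VALUES ('test')", conn))
    {
        insert.ExecuteNonQuery();
    }

    // Same open connection, therefore the same session: @@IDENTITY still has a value.
    using (SqlCeCommand identity = new SqlCeCommand("SELECT CONVERT(int, @@IDENTITY)", conn))
    {
        int key = (int)identity.ExecuteScalar();
    }
}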
First of all, @@IDENTITY will return the last generated ID from anywhere in SQL Server. Most probably you need to use SCOPE_IDENTITY() instead.
This shows your actual problem and design issue: you need to keep the connection and the command separate. The connection carries the transaction (and the session), so SCOPE_IDENTITY() will only work while that connection stays open; the command can be created, used and disposed independently.
So you need a method which accepts a connection and uses it to obtain the identity, something like this (I didn't check it, but I think the idea is clear):
public static int GetLastInsertedID(SqlCeConnection connection)
{
try
{
string query = "SELECT CONVERT(int, SCOPE_IDENTITY())";
using (SqlCeCommand cmd = new SqlCeCommand(query, connection)) {
return (int)cmd.ExecuteScalar();
}
}
catch (Exception ex)
{
MessageBox.Show("Could not get last inserted ID. " + ex.Message);
return 0;
}
}
For working with connection you can create helper method like this:
public static SqlCeConnection OpenDefaultConnection()
{
SqlCeConnection conn = new SqlCeConnection(DbConnection.compact);
conn.Open();
return conn;
}
And use it like this:
...
using (SqlCeConnection conn = OpenDefaultConnection()) {
//... do smth
int id = GetLastInsertedID(conn);
//... do smth
}
...
In my opinion, the reason it doesn't work is not related to the using statement.
If you use a static class such as a DBHelper to handle the database connection, the problem is that you close the connection before you execute SELECT @@IDENTITY, and you open it again when you execute SELECT @@IDENTITY. This sequence causes the result of SELECT @@IDENTITY to be NULL. That is, you cannot call DBHelper.xxx() twice to get the auto-generated ID, because every call to DBHelper.xxx() opens and then closes the database connection.
I have a solution, but it may not be the best one. Instead of using SELECT @@IDENTITY, you can use SELECT COUNT(*) FROM xxx to get the same result.
Hope that this helps.
I have the following method for bulk inserting data into tables.
First my code populates the data into DataTables and then inserts it into the corresponding tables using the SqlBulkCopy class of .NET.
I have a requirement that the data should be inserted into all of the tables or none of them.
For this I have used the SqlTransaction class of .NET.
The scenario is that multiple threads execute the following code block at the same time.
public void Import()
{
using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
SqlTransaction sqlTrans =null;
try
{
sqlConnection.Open();
sqlTrans = sqlConnection.BeginTransaction(IsolationLevel.Serializable);
SqlCommand cmd = sqlConnection.CreateCommand();
cmd.CommandText = "select top 1 null from lockTable with(xlock)";
cmd.CommandTimeout = 3600*3;
cmd.Transaction = sqlTrans;
SqlDataReader reader = cmd.ExecuteReader();
foreach (DataTable dt in DataTables)
{
ImportIntoDatabase(sqlConnection, dt, sqlTrans);
}
reader.Close();
sqlTrans.Commit();
}
catch (Exception ex)
{
sqlTrans.Rollback();
throw ex;
}
}
}
private void ImportIntoDatabase(SqlConnection sqlConn, DataTable dt, SqlTransaction sqlTrans)
{
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(sqlConn, SqlBulkCopyOptions.Default, sqlTrans))
{
bulkCopy.BulkCopyTimeout = dt.Rows.Count * 10;
try
{
bulkCopy.DestinationTableName = dt.TableName;
bulkCopy.WriteToServer(dt);
}
catch (Exception ex)
{
throw ex;
}
}
}
To handle this concurrency, I have created a dummy table (named 'lockTable') in the database where the other tables (the bulk insert tables) reside. I take an exclusive lock on this dummy table in the SqlTransaction, with a command timeout as high as 3 hours.
Problem:
I am getting the following exception:
Cannot access destination table 'Tbl1' (Tbl1 is the table being bulk inserted into)
followed by another exception while rolling back the transaction in the catch block:
Error While executing activity The server failed to resume the transaction. Desc:3a00000001.
The transaction active in this session has been committed or aborted by another session.
Can anyone help me with this weird behavior of the code? I have already searched a lot on this issue on the internet, but I have not found anything helpful.
In Import, the foreach (DataTable dt in DataTables) loop is not going to be thread safe.
sqlConnection already has an active reader open from Import, so that connection cannot be used in ImportIntoDatabase.
To echo smp: if you are locking a table, why use multiple threads?
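One possible way around the active-reader problem, sketched under the assumption that the exclusive lock taken inside the transaction is held until Commit/Rollback rather than by the reader, is to acquire the lock with ExecuteScalar so nothing is left open on the connection before the bulk copies run:

SqlCommand cmd = sqlConnection.CreateCommand();
cmd.CommandText = "select top 1 null from lockTable with(xlock)";
cmd.CommandTimeout = 3600 * 3;
cmd.Transaction = sqlTrans;
cmd.ExecuteScalar();   // lock taken; no reader stays open on the connection

foreach (DataTable dt in DataTables)
{
    ImportIntoDatabase(sqlConnection, dt, sqlTrans);
}
sqlTrans.Commit();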
If you want to build up the input while the SQL inserts are taking place, then use an
asynchronous method such as SqlCommand.BeginExecuteReader. You get asynchrony without the overhead of a thread. DataTables are also relatively slow; I insert using TVPs and lightweight objects. A huge factor in insert performance is index fragmentation: if at all possible, insert in the order of the clustered index. The loop is simple: build input, wait for the async call, run the next async call (building the input may also mean reading it from a queue), as sketched below. SQL inserts into the same table(s) are typically not going to go faster in parallel. In my experience, ordered serial inserts with no gap in time between inserts work best.
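A sketch of that build/wait/execute loop; BuildNextBatchSql() is a hypothetical helper that returns the next batch of INSERT statements (or null when done), connectionString is the one from the question, and on older .NET Framework versions the connection string may also need Asynchronous Processing=true:

using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    SqlCommand cmd = null;
    IAsyncResult pending = null;

    string sql;
    while ((sql = BuildNextBatchSql()) != null)   // build the next input batch
    {
        if (pending != null)
            cmd.EndExecuteNonQuery(pending);      // wait for the previous batch to finish

        cmd = new SqlCommand(sql, conn);
        pending = cmd.BeginExecuteNonQuery();     // run this batch while the next one is built
    }

    if (pending != null)
        cmd.EndExecuteNonQuery(pending);          // wait for the last batch
}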
I got my problem solved.
Following are the changes which I have made to my Import method:
public void Import()
{
using (SqlConnection sqlConnection = new SqlConnection(connectionString))
{
sqlConnection.Open();
using (SqlTransaction sqlTrans = sqlConnection.BeginTransaction())
{
try
{
SqlCommand cmd = sqlConnection.CreateCommand();
cmd.CommandText = "select top 1 null from lockTable with(xlock)";
cmd.CommandTimeout = LOCK_TIME_OUT;
cmd.Transaction = sqlTrans;
SqlDataReader reader = cmd.ExecuteReader();
foreach (DataTable dt in DataTables)
{
ImportIntoDatabase(sqlConnection, dt, sqlTrans);
}
reader.Close();
sqlTrans.Commit();
}
catch (Exception ex)
{
sqlTrans.Rollback();
throw ex;
}
}
sqlConnection.Close();
}
}
If multiple threads have access to the Import method, then shouldn't you be locking the contents of this method?
I don't think you need a dummy table; you just need to lock the two methods above, as sketched below.
I would also mention that you should join all of the threads, so that you can tell when they have finished.
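A minimal sketch of that locking suggestion (the lock object name is arbitrary):

private static readonly object importLock = new object();

public void Import()
{
    lock (importLock)   // only one thread runs the import at a time
    {
        // ... existing Import body: open connection, begin transaction,
        //     bulk copy each DataTable, commit ...
    }
}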
For an application we are developing we need to read n rows from a table and then selectively update those rows based on domain specific criteria. During this operation all other users of the database need to be locked out to avoid bad reads.
I begin a transaction, read the rows, and while iterating over the recordset build up a string of update statements. After I'm done reading, I close the recordset and run the updates. At that point I commit the transaction; however, none of the updates are applied to the database.
private static SQLiteConnection OpenNewConnection()
{
try
{
SQLiteConnection conn = new SQLiteConnection();
conn.ConnectionString = ConnectionString;//System.Configuration.ConfigurationManager.AppSettings["ConnectionString"];
conn.Open();
return conn;
}
catch (SQLiteException e)
{
LogEvent("Exception raised when opening connection to [" + ConnectionString + "]. Exception Message " + e.Message);
throw e;
}
}
SQLiteConnection conn = OpenNewConnection();
SQLiteCommand command = new SQLiteCommand(conn);
// Also fails: transaction = conn.BeginTransaction();
SQLiteTransaction transaction = conn.BeginTransaction(IsolationLevel.ReadCommitted);
command.CommandType = CommandType.Text;
command.Transaction = transaction;
command.Connection = conn;
try
{
string sql = "select * From X Where Y;";
command.CommandText = sql;
SQLiteDataReader ranges;
ranges = command.ExecuteReader();
sql = string.Empty;
ArrayList ret = new ArrayList();
while (MemberVariable > 0 && ranges.Read())
{
// Domain stuff
sql += "Update X Set Z = 'foo' Where Y;";
}
ranges.Close();
command.CommandText = sql;
command.ExecuteNonQuery();
// UPDATES NOT BEING APPLIED
transaction.Commit();
return ret;
}
catch (Exception ex)
{
transaction.Rollback();
throw;
}
finally
{
transaction.Dispose();
command.Dispose();
conn.Close();
}
return null;
If I remove the transaction, everything works as expected. The "Domain stuff" is domain specific and, other than reading values from the recordset, doesn't access the database. Did I forget a step?
When you put a breakpoint on your transaction.Commit() line do you see it getting hit?
Final answer:
SQLite's locking does not work the way you're assuming; see http://www.sqlite.org/lockingv3.html. Given that, I think you're having a transaction scoping issue, which can be easily resolved by reorganizing your code as follows:
string selectSql = "select * From X Where Y;";
using(var conn = OpenNewConnection()){
StringBuilder updateBuilder = new StringBuilder();
using(var cmd = new SQLiteCommand(selectSql, conn))
using(var ranges = cmd.ExecuteReader()) {
while(MemberVariable > 0 && ranges.Read()) {
updateBuilder.Append("Update X Set Z = 'foo' Where Y;");
}
}
using(var trans = conn.BeginTransaction())
using(var updateCmd = new SQLiteCommand(updateBuilder.ToString(), conn, trans)) {
updateCmd.ExecuteNonQuery();
trans.Commit();
}
}
Additional notes regarding some comments in this post/answer about transactions in SQLite. These apply to SQLite 3.x using journaling and may or may not apply to different configurations; WAL is slightly different, but I am not familiar with it. See Locking in SQLite for the definitive information.
All transactions in SQLite are SERIALIZABLE (see the read_uncommitted pragma for one small exception). A new read won't block or fail unless the write process has started (an EXCLUSIVE/PENDING lock is held), and a write won't start until all outstanding reads are complete and it can obtain an EXCLUSIVE lock (this is not true for WAL, but the transaction isolation is still the same).
That is, the entire sequence above won't be atomic in code, and the sequence may be read(A) -> read(B) -> write(A) -> read(B), where A and B represent different connections (imagine them on different threads). At both read(B) calls the data is still consistent even though there was a write in between.
To make the sequence of code itself atomic, a lock or similar synchronization mechanism is required. Alternatively, the lock/synchronization can be achieved with SQLite itself by using a locking_mode pragma of "exclusive", as sketched below. However, even if the code above is not atomic, the data will adhere to the SQL serializable contract (excluding a serious bug ;-)
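A minimal sketch of the locking_mode approach, reusing the OpenNewConnection helper from the question; note that the exclusive lock is only actually taken on the first read or write after the pragma and is then held until the mode is set back to NORMAL and the database is accessed again:

using (SQLiteConnection conn = OpenNewConnection())
using (SQLiteCommand pragma = new SQLiteCommand("PRAGMA locking_mode = EXCLUSIVE;", conn))
{
    pragma.ExecuteNonQuery();
    // From here on, the first read takes (and keeps) a SHARED lock and the first
    // write takes (and keeps) an EXCLUSIVE lock, keeping other connections out
    // until this connection releases it.
}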
Happy coding
See Locking in SQLite, SQLite pragmas and Atomic Commit in SQLite