Lock table after BeginTransaction (MySQL transaction in C#.NET)

How do I restrict other users from updating or inserting into a table after a certain transaction has begun?
I tried this:
MySqlConnection con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;");
con.Open();
MySqlTransaction trans = con.BeginTransaction();
try
{
    string sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','483', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',-233.22,1,'asadfsaf','Bank OD A/c','Receipt','4274',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();
    sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','4274', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',100,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();
    sql = "insert INTO transaction_ledger (trans_id,voucher_id,voucher_number,trans_date,ledger_code,company_code,trans_type, trans_amount,primary_ledger,narration,ledger_parent,trans_type_name,ledger_ref_code,r_trans_id,IsSync) VALUES (0, 'EReceipt-4',4,'2013-04-01','427', '870d7d83-05ec-4fbb-8e9d-801150bd3ed1', 'EReceipt',133.22,0,'asadfsaf','Sundry Creditors','Receipt','483',1173,'N')";
    new MySqlCommand(sql, con, trans).ExecuteNonQuery();
    trans.Commit();
}
catch (Exception ex)
{
    trans.Rollback();
}
finally
{
    con.Close();
}
but this still allows other sessions to insert rows after BeginTransaction.

BeginTransaction does not mean "your transaction has started and everything is locked". It just informs the RDBMS of your intent to start a transaction, and that everything you do from now on must be considered atomic.
This means you could call BeginTransaction and I could still delete all the data from all the tables in your database, and the RDBMS would happily let me do it. Hopefully it would not let me drop the DB, because you have an open connection to it; however, you never know these days. There might be some undocumented features I am not aware of.
Atomic means that an action or set of actions must be performed as one: if any one of them fails, they all fail. It is an all-or-nothing concept.
It looks like you are inserting three rows into a table. If your table is empty or has very few rows, the RDBMS might lock the whole table, depending on its lock escalation rules. However, if it is a large, very large, or partitioned table, the lock escalation rules might not guarantee a table lock, so it might still be possible for multiple transactions to insert rows into your table at the same time. It all depends on how the RDBMS handles the situation and how your data model is structured.
Now to answer your question:
HINT - Look for a way to lock the entire table before you start inserting data.
However, this is usually not a good idea, but I am assuming you have a reasonable reason for doing it.
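For MySQL specifically, here is a minimal sketch of that hint (assuming InnoDB and the table from the question; not production code). Note that LOCK TABLES implicitly commits any open transaction, so the pattern documented in the MySQL manual is to disable autocommit rather than call BeginTransaction, take the lock, do the work, COMMIT, and only then UNLOCK TABLES. Other sessions block on both reads and writes of the table until the lock is released.
using (var con = new MySqlConnection("server=localhost;database=data;user=root;pwd=;"))
{
    con.Open();
    // Disable autocommit instead of calling BeginTransaction, because
    // LOCK TABLES would implicitly commit an API-started transaction.
    new MySqlCommand("SET autocommit = 0", con).ExecuteNonQuery();
    new MySqlCommand("LOCK TABLES transaction_ledger WRITE", con).ExecuteNonQuery();
    try
    {
        // ... run the three INSERT statements from the question here ...
        new MySqlCommand("COMMIT", con).ExecuteNonQuery();
    }
    catch
    {
        new MySqlCommand("ROLLBACK", con).ExecuteNonQuery();
        throw;
    }
    finally
    {
        new MySqlCommand("UNLOCK TABLES", con).ExecuteNonQuery();
    }
}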
Hope this helps.

Related

Is a single call to ExecuteNonQuery() atomic

Is a single call to ExecuteNonQuery() atomic, or does it make sense to use transactions if there are multiple SQL statements in a single DbCommand?
See my example for clarification:
using (var ts = new TransactionScope())
{
    using (DbCommand lCmd = pConnection.CreateCommand())
    {
        lCmd.CommandText = @"
            DELETE FROM ...;
            INSERT INTO ...";
        lCmd.ExecuteNonQuery();
    }
    ts.Complete();
}
If you don't ask for a transaction, you (mostly) don't get one. SQL Server wants everything in transactions and so, by default (with no other transaction management), for each separate statement, SQL Server will create a transaction and automatically commit it. So in your sample (if there was no TransactionScope), you'll get two separate transactions, both independently committed or rolled back (on error).
(Unless you've turned IMPLICIT_TRANSACTIONS on for that connection, in which case you'll get one transaction, but you need an explicit COMMIT or ROLLBACK at the end. The only people I've found using this mode are people porting from Oracle who are trying to minimize changes. I wouldn't recommend turning it on for greenfield work, because it'll just confuse people used to SQL Server's defaults.)
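For illustration, a small sketch of that mode (the connection string and table name t1 are placeholders; this is not a recommendation to use it): the first DML statement implicitly opens a transaction, and nothing persists until an explicit COMMIT.
using (var conn = new SqlConnection("<connection string>"))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = "SET IMPLICIT_TRANSACTIONS ON";
    cmd.ExecuteNonQuery();

    cmd.CommandText = "DELETE FROM t1"; // implicitly begins a transaction
    cmd.ExecuteNonQuery();

    // Without an explicit COMMIT, the DELETE would roll back when the
    // connection closes.
    cmd.CommandText = "COMMIT";
    cmd.ExecuteNonQuery();
}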
It's not. The SQL engine will treat this text as two separate statements. TransactionScope is required (or any other form of transaction, e.g. an explicit BEGIN TRAN ... COMMIT in the SQL text if you prefer).
No; as the above answers say, the command (as opposed to the individual statements within the command) will not be run inside a transaction.
This is easy to verify with some sample code:
create table t1
(
    Id int not null,
    Name text
)
using (var conn = new SqlConnection(...))
using (var cmd = conn.CreateCommand())
{
    conn.Open();
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');";
    cmd.ExecuteNonQuery();
}
The second statement will fail. But the first statement will execute and you'll have a row in the table.
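By contrast, here is a sketch of the same two inserts wrapped in a TransactionScope (the connection string is a placeholder; TransactionScope lives in the System.Transactions assembly): the second statement still throws, but because the scope is never completed, the first insert rolls back as well and t1 stays empty.
using (var ts = new TransactionScope())
using (var conn = new SqlConnection("<connection string>"))
using (var cmd = conn.CreateCommand())
{
    conn.Open(); // opened inside the scope, so it enlists in the transaction
    cmd.CommandText = @"
        insert into t1 values (1, 'abc');
        insert into t1 values (null, 'pqr');";
    cmd.ExecuteNonQuery(); // throws on the NOT NULL violation
    ts.Complete();         // never reached, so everything rolls back
}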

ExecuteNonQuery() not working to create temp table SqlServer

I am trying to create a temp table from a SELECT statement so that I can get the schema information from the temp table.
I am able to achieve this in SQL Server with the following code:
-- This creates the temp table
SELECT location.id, location.name INTO #URM_TEMP_TABLE FROM location
-- This retrieves column information from the temp table
SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'
If I run the code in c# like so:
using (CONN = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
{
    var commandText = ReportToDisplay.ReportQuery.ToLower().Replace("from", "into #URM_TEMP_TABLE from");
    using (SqlCommand command = CONN.CreateCommand())
    {
        // Create temp table
        CONN.Open();
        command.CommandText = commandText;
        int retVal = command.ExecuteNonQuery();
        CONN.Close();

        // Get column data from temp table
        command.CommandText = "SELECT * FROM TEMPDB.INFORMATION_SCHEMA.Columns WHERE TABLE_NAME like '#U%'";
        CONN.Open();
        using (var reader = command.ExecuteReader())
        {
            while (reader.Read())
            {
                ColumnsForReport.Add(new ListBoxCheckBoxItemModel
                {
                    Name = reader["COLUMN_NAME"].ToString(),
                    DataType = reader["DATA_TYPE"].ToString(),
                    IsSelected = false,
                    RVMCommandModel = this
                });
            }
        }
        CONN.Close();

        // Drop table
        command.CommandText = "DROP TABLE #URM_TEMP_TABLE";
        CONN.Open();
        command.ExecuteNonQuery();
        CONN.Close();
    }
}
Everything works until it gets to the drop statement: Cannot drop the table '#URM_TEMP_TABLE'.
So ExecuteNonQuery returns 2547, which is the number of rows the temp table is supposed to contain. However, it seems that the table does not actually get created this way. Is ExecuteNonQuery the right method to call?
Temporary tables are only in scope for the current session. In the code you've posted, you're opening a connection, creating a temp table, and closing the connection;
then you're opening another connection (a new session) and attempting to drop a table which is not in scope for that session.
You would need to drop the temp table within the same connection, or possibly make it a global temp table (##), though in this case, with two separate connections, a global temp table would still fall out of scope.
Additionally, as was pointed out in the comments, your temp tables will be cleaned up automatically; but if you really do want to drop them, you must do so from the session that created them.
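A sketch of that first suggestion, reusing the names from the question: keep a single connection open (one session) for the whole create / read / drop sequence, so the local temp table stays in scope for all three commands.
using (var conn = new SqlConnection(Settings.Default.UltrapartnerDBConnectionString))
using (var command = conn.CreateCommand())
{
    conn.Open(); // one session for all three commands

    command.CommandText = commandText; // the SELECT ... INTO #URM_TEMP_TABLE query
    command.ExecuteNonQuery();

    command.CommandText = "SELECT * FROM tempdb.INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME LIKE '#U%'";
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            // read COLUMN_NAME / DATA_TYPE as in the question
        }
    }

    command.CommandText = "DROP TABLE #URM_TEMP_TABLE"; // same session, so this succeeds
    command.ExecuteNonQuery();
} // the connection closes here; the temp table would be cleaned up anyway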
EDIT: taken from another SO thread:
Global temporary tables in SQL Server
Global temporary tables operate much like local temporary tables; they are created in tempdb and cause less locking and logging than permanent tables. However, they are visible to all sessions, until the creating session goes out of scope (and the global ##temp table is no longer being referenced by other sessions). If two different sessions try the above code, if the first is still active, the second will receive the following:
Server: Msg 2714, Level 16, State 6, Line 1
There is already an object named '##people' in the database.
I have yet to see a valid justification for the use of a global ##temp table. If the data needs to persist to multiple users, then it makes much more sense, at least to me, to use a permanent table. You can make a global ##temp table slightly more permanent by creating it in an autostart procedure, but I still fail to see how this is advantageous over a permanent table. With a permanent table, you can deny permissions; you cannot deny users from a global ##temp table.
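To illustrate the visibility rule the quote describes, a small sketch (connStr is a placeholder; ##people is the table name from the quote):
var connStr = "<connection string>";
using (var connA = new SqlConnection(connStr))
using (var connB = new SqlConnection(connStr))
{
    connA.Open();
    connB.Open();

    new SqlCommand("CREATE TABLE ##people (Id int)", connA).ExecuteNonQuery();

    // A different session can see the global temp table while connA is open.
    var count = (int)new SqlCommand("SELECT COUNT(*) FROM ##people", connB).ExecuteScalar();
}
// Once connA closes (and nothing else references ##people), the table is dropped.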
Looks like global temp tables still go out of scope... they're just bad to use in general IMO. Can you just drop the table in the same session or rethink your solution?

Can't immediately see data after committing sql transaction

Updated for a more direct title.
I am having an issue with seeing committed data from my transactions immediately after the commit.
I am accepting data from user input and putting it into the DB (SQL Server 2005) in the form of a parent table with multiple child tables with 1-to-many relationships. I am using a transaction and committing it when complete. The data is committed to the DB, but I immediately run several other queries (SELECT statements only) against the DB after the commit to check whether all the rows for a group have been entered. These rows can be entered at any time by anyone, and each is in its own transaction. So, say a "group" requires 3 total rows in the database: I want to commit my transaction and then read the DB to see if ANY groups are complete, including the one my transaction just completed.
The problem is that reading the data back after the commit apparently doesn't return the transaction I just committed; if this transaction supplies the last row in the group, then the reads miss it and the group isn't returned.
So...
Transaction begins...
Write rows...
Transaction commits...
Read to see if complete groups exist...
The group from the current transaction is not returned; other complete groups are returned.
One thing to note: submitting another transaction will cause this data to be returned. But if this new transaction contains a row that completes another group, I do not get that new row returned either...
Here is my code, abbreviated:
SqlConnection sqlConn = getConn(); // getConn is a function that returns an open connection object
String sqlSTR = "<<<MY INSERT STATEMENT>>>";
SqlTransaction trans = sqlConn.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted, "Trans");
SqlCommand sqlCMD;
try
{
    sqlCMD = new SqlCommand(sqlSTR, sqlConn, trans);
    // ...add all my data to the DB.
    trans.Commit();
}
catch (SqlException sqlErr)
{
    trans.Rollback("Trans");
    throw;
}
finally
{
    trans.Dispose();
}
Next is some code that reads the DB looking for "completed groups", but the group this data belongs to, if completed, is not returned.
The data is stored; the data is retrievable by the same code on the next call, but not here.
It's as if it takes time for SQL Server to commit the transaction and my reads are called BEFORE the commit finishes, even though I have the transaction set to allow reads of uncommitted data.
Is there some way to have the commit wait for completion BEFORE moving on to other code?
I've tried doing a SELECT @@TRANCOUNT against the connection immediately after the commit, but it ALWAYS returns 0.
Thanks!
EDIT:
public static SqlConnection getConn()
{
    try
    {
        string sqlStr = WebConfigurationManager.ConnectionStrings["conn_Main"].ConnectionString;
        SqlConnection sqlCN = new SqlConnection(sqlStr);
        sqlCN.Open();
        return sqlCN;
    }
    catch
    {
        throw;
    }
}
I'm reading the records after the transaction by using a stored procedure.
In the SP I am using a basic SELECT statement that does a couple of joins and not much else: first to pull back a list of "groups" that have submissions, and then to pull each group individually. It's the second part that doesn't seem to see the records inserted within the transaction... until the next transaction is committed.
I'm doing some more testing this morning to see if I can tell exactly when the records show up after committing the transaction.

SqlBulkCopy calculated field

I am working on moving a database from MS Access to SQL Server. To move the data into the new tables I have decided to write a sync routine, since the schema has changed quite significantly; it lets me run testing on the programs that run off it and resync whenever I need new test data. Then eventually I will do one last sync and go live on the new SQL Server version.
Unfortunately I have hit a snag. My method for copying from Access to SQL Server is below:
public static void BulkCopyAccessToSQLServer(
    string sql, CommandType commandType, DBConnection sqlServerConnection,
    string destinationTable, DBConnection accessConnection, int timeout)
{
    using (DataTable dt = new DataTable())
    using (OleDbConnection conn = new OleDbConnection(GetConnection(accessConnection)))
    using (OleDbCommand cmd = new OleDbCommand(sql, conn))
    using (OleDbDataAdapter adapter = new OleDbDataAdapter(cmd))
    {
        cmd.CommandType = commandType;
        cmd.Connection.Open();
        adapter.SelectCommand.CommandTimeout = timeout;
        adapter.Fill(dt);
        using (SqlConnection conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
        using (SqlBulkCopy copy = new SqlBulkCopy(conn2))
        {
            conn2.Open();
            copy.DestinationTableName = destinationTable;
            copy.BatchSize = 1000;
            copy.BulkCopyTimeout = timeout;
            copy.NotifyAfter = 1000; // must be set before WriteToServer to have any effect
            copy.WriteToServer(dt);
        }
    }
}
Basically, this queries Access for the data using the input SQL string, which selects all the correct field names, so I don't need to set ColumnMappings.
This was working until I reached a table with a calculated field. SqlBulkCopy doesn't seem to know to skip the field and tries to update the column, which fails with the error "The column 'columnName' cannot be modified because it is either a computed column or is the result of a UNION operator."
Is there an easy way to make it skip the calculated field?
I am hoping not to have to specify a full column mapping.
There are two ways to dodge this:
use ColumnMappings to formally define the column relationships (you note you don't want this; see the sketch after this list)
push the data into a staging table - a basic table, not part of your core transactional tables, whose entire purpose is to look exactly like this data import; then use a TSQL command to transfer the data from the staging table to the real table
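For the first option, a minimal sketch reusing conn2, destinationTable, and dt from the question (Id and Amount are hypothetical column names; Total stands in for the computed column): once any mappings are added, SqlBulkCopy copies only the mapped columns, so the computed column is simply never mentioned.
using (var copy = new SqlBulkCopy(conn2))
{
    copy.DestinationTableName = destinationTable;
    // Map each real column explicitly; "Total" (the computed column in this
    // hypothetical table) gets no mapping, so SqlBulkCopy never touches it.
    copy.ColumnMappings.Add("Id", "Id");
    copy.ColumnMappings.Add("Amount", "Amount");
    copy.WriteToServer(dt);
}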
I always favor the second option, for various reasons (a sketch follows the list):
I never have to mess with mappings - this is actually important to me ;p
the insert to the real table will be fully logged (SqlBulkCopy is not necessarily logged)
I have the fastest possible insert - no constraint checking, no indexing, etc
I don't tie up a transactional table during the import, and there is no risk of non-repeatable queries running against a partially imported table
I have a safe abort option if the import fails half way through, without having to use transactions (nothing has touched the transactional system at this point)
it allows some level of data-processing when pushing it into the real tables, without the need to either buffer everything in a DataTable at the app tier, or implement a custom IDataReader
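Here is a sketch of that staging-table pattern under the same assumptions (dbo.MyTable and dbo.MyTable_Staging are hypothetical names; the staging table is a plain copy of the import shape with no computed columns):
using (var conn2 = new SqlConnection(GetConnection(sqlServerConnection)))
{
    conn2.Open();
    using (var copy = new SqlBulkCopy(conn2))
    {
        copy.DestinationTableName = "dbo.MyTable_Staging";
        copy.WriteToServer(dt); // no mappings needed: staging matches dt exactly
    }
    // The explicit column list leaves the computed column out entirely.
    const string sql = @"
        INSERT INTO dbo.MyTable (Id, Amount)
        SELECT Id, Amount FROM dbo.MyTable_Staging;
        TRUNCATE TABLE dbo.MyTable_Staging;";
    new SqlCommand(sql, conn2).ExecuteNonQuery();
}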
