Managing a network failure during a transactional SqlBulkCopy - C#

I'm researching SqlClient's SqlBulkCopy in ADO.NET and have the following questions.
What happens if there is a network error during a SqlBulkCopy operation that is running as part of a transaction over a huge number of records?
Will the transaction be left open (neither committed nor rolled back) on the server until we manually kill it?
What is the best approach for sending a large number of records in two DataTables (InvoiceHeader, InvoiceDetails) in a DataSet to the corresponding SQL Server tables (InvoiceHeader, InvoiceDetails)?
Thank you.
EDIT:
A few details I wanted to add, but forgot:
This is for .NET 3.5; I'm using Enterprise Library for all database interactions.

Assuming you are using a TransactionScope, my understanding is that no, the transaction will not be left open: SQL Server detects the ambient transaction and auto-enlists the connection. This means the worst case is that the transaction times out and rolls back. You can change the transaction binding to specify what happens in the event of a timeout (you probably want explicit unbind).
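For the last question, here is a minimal sketch of sending both DataTables inside one transaction. The connectionString parameter and the BulkCopyInvoices wrapper are my own hypothetical names; the table names come from the question. If the network fails during either copy, scope.Complete() is never reached and the server rolls the whole thing back:

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

static void BulkCopyInvoices(DataSet invoices, string connectionString)
{
    using (var scope = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open(); // auto-enlists in the ambient transaction

        // Parent rows first, so detail rows can satisfy the foreign key.
        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "InvoiceHeader";
            bulk.WriteToServer(invoices.Tables["InvoiceHeader"]);
        }

        using (var bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "InvoiceDetails";
            bulk.WriteToServer(invoices.Tables["InvoiceDetails"]);
        }

        scope.Complete(); // skipped on exception -> rollback
    }
}

Using one connection for both copies also keeps the transaction a lightweight local one rather than escalating it to MSDTC.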


Why is DbContext.Database.CurrentTransaction always null?

Is there any way to find out whether a DbContext is enlisted in a transaction while Enlist=false is set in the connection string?
1. I was tracing DbContext.Database.CurrentTransaction, but I noticed it is always null.
2. I know that when Enlist=false, opened connections will not enlist themselves in an ambient transaction. Is that right?
3. If (2) is correct, how do I enlist a DbContext in a transaction where a TransactionScope is used?
4. Finally, I noticed that using clones of a DependentTransaction with multiple DbContexts and multiple threads while Enlist=false does not promote the transaction to a distributed one. But am I still able to commit and roll back, in case an exception happens, using the dependent transaction while Enlist=false?
5. If (4) is incorrect, is there any way to fully avoid a distributed transaction while still being able to open multiple connections within a single TransactionScope?
FYI, we currently use an Oracle database; however, MySQL is going to be used in the future as well.
Thank you
I can't really say anything about 1, 2 and 3, but:
The distribution thing is not absolutely clear to me either. However, MS escalates transactions from the LTM (Lightweight Transaction Manager) to the DTC when certain criteria come into play, for example if you try to access different databases in one transaction.
The escalation from LTM to DTC, or rather the decision whether an escalation is forced, is a system decision. So far I haven't found a way to change this behaviour. That's why you need to think about your transactions in general: if there is a way to avoid accessing multiple databases, you may want to rethink your transactions.
For further information I'd recommend Why is my TransactionScope trying to use MSDTC when used in an EF Code First app?, MSSQL Error 'The underlying provider failed on Open' and How do I use TransactionScope in C#?
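For point (3) specifically, here is a minimal sketch of manual enlistment, shown with SqlClient for brevity (the connection string is hypothetical and is assumed to contain Enlist=false; Oracle's and MySQL's ADO.NET providers expose the same DbConnection.EnlistTransaction method):

using System.Data.SqlClient;
using System.Transactions;

static void RunEnlisted(string connectionString) // Enlist=false in the string
{
    using (var scope = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();                                 // does NOT auto-enlist
        conn.EnlistTransaction(Transaction.Current); // enlist explicitly

        // A DbContext built over the already-enlisted connection, e.g.
        // new MyContext(conn, contextOwnsConnection: false) in EF6,
        // then takes part in the transaction.

        scope.Complete();
    }
}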

ADO.NET SQL Transaction - Only 1 call to database?

I am reviewing some code which inserts data into an MSSQL database (MSSQL 2014) using a stored procedure.
Each row of data results in a call to that database.
This is obviously inefficient for a number of reasons; not least of which is the fact that a call has to be made over the network for each line of data.
I am surmising that if they simply wrap all these individual SqlCommand calls (ExecuteNonQuery) in a transaction, this will ultimately result in a "batch insert" of sorts, with the call across the connection being made only when the transaction is actually committed.
Is this correct? Will this send all the SqlCommands to the server in a single call? I have not been able to find a diagram or documentation outlining the communication between the client and the server when a transaction is used.
--> Begin Transaction
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ExecuteNonQuery
-> ...
--> Commit Transaction (my assumption is that each of the SqlCommands is sent over the connection at this point)
On a side note, I am more inclined to recommend that the developer rewrite the routine to make use of SqlBulkCopy or table-valued parameters. This would, however, necessitate refactoring the database stored procedure.
Thanks in Advance
There will be no batch insert. Neither the client nor the server optimizes across statements. ADO.NET has no machinery to understand the SQL you are sending, so it cannot optimize anything. The server could, but does not.
There will, however, be a performance gain from using a transaction like this, because the inserts do not need to flush the log as long as the transaction is pending.
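To make the round-trip behaviour concrete, here is a sketch of what the question's diagram amounts to (the procedure name dbo.InsertRow and the @Value parameter are hypothetical). The transaction changes logging and visibility, not the number of network calls:

using System.Data;
using System.Data.SqlClient;

static void InsertRows(DataTable rows, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            foreach (DataRow row in rows.Rows)
            {
                using (var cmd = new SqlCommand("dbo.InsertRow", conn, tx))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@Value", row["Value"]);
                    cmd.ExecuteNonQuery(); // still one round trip per row
                }
            }
            tx.Commit(); // one more round trip; nothing is re-sent here
        }
    }
}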
There will be no batch insert. All the transaction will do is give you the ability to commit the inserts so that others see the changes together. There are no performance gains, and if anything a performance hit. Without knowing what you are trying to do here, it is hard to give a proper answer.
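As for the questioner's side note, here is a hedged sketch of the table-valued-parameter route, which really does send all rows in one call. The type and procedure names are hypothetical and would have to be created on the server first:

using System.Data;
using System.Data.SqlClient;

// Assumed to exist on the server (hypothetical):
//   CREATE TYPE dbo.InvoiceRowType AS TABLE (Id int, Amount money);
//   CREATE PROCEDURE dbo.InsertInvoiceRows @Rows dbo.InvoiceRowType READONLY ...

static void InsertViaTvp(DataTable rows, string connectionString)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.InsertInvoiceRows", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        var p = cmd.Parameters.AddWithValue("@Rows", rows);
        p.SqlDbType = SqlDbType.Structured;
        p.TypeName = "dbo.InvoiceRowType";

        conn.Open();
        cmd.ExecuteNonQuery(); // all rows cross the network in this one call
    }
}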

C# - TSQL Parallel transactions on same connection instance

I am developing a C# ORM with a PHP Laravel-like syntax.
When the ORM starts, it connects to the database and performs any query (it also supports two different connections for reading and writing), reconnecting to the db only if the connection is lost or missing.
As this is a web framework ORM, how can I handle concurrent transactions on the same connection? Are they supported?
I saw that I can manually assign the transaction object to the SqlCommand, but can I create parallel SqlTransactions?
Example:
There is a URL for a REST action that causes a transaction to be opened, some actions performed, and the transaction committed (e.g. performing an order on a shopping cart). What if multiple users (so different WebOperationContexts) call that URL? Is it possible to open and work with multiple "parallel" transactions and then commit them?
How do other ORMs handle this case? Do they use multiple connections?
Thanks for any support!
Mattia
SQL Server does not support parallel transactions on the same connection.
Normally, there is no need for that. Just open connections as you need them; this is cheap thanks to pooling.
A common model is to open a connection and a transaction right after one another, then commit and dispose of everything at the end.
That way, concurrent HTTP requests do not interact at all, which is good.
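A minimal sketch of that model (the query and names are hypothetical); each request builds and disposes its own connection/transaction pair, so concurrent requests share nothing:

using System.Data.SqlClient;

static void HandleRequest(string connectionString, int cartId)
{
    using (var conn = new SqlConnection(connectionString)) // cheap: pooled
    {
        conn.Open();
        using (var tx = conn.BeginTransaction())
        {
            using (var cmd = new SqlCommand(
                "UPDATE Carts SET Ordered = 1 WHERE CartId = @id", conn, tx))
            {
                cmd.Parameters.AddWithValue("@id", cartId);
                cmd.ExecuteNonQuery();
            }
            tx.Commit(); // disposing without Commit rolls back
        }
    }
}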

How do I minimize or inform users of database connection lag / failure?

I'm maintaining an ASP/C# program that uses MS SQL Server 2008 R2 for its database requirements.
On normal and perfect days, everything works fine as it is. But we don't live in a perfect world.
An Application (for Leave, Sick Leave, Overtime, Undertime, etc.) approval process requires up to ten separate connections to the database. The program connects to the database, passes around some relevant parameters, and uses stored procedures to do the job. Ten times.
Now, due to the structure of the entire thing, which I cannot change, a dip in the connection - or, heck, a debug breakpoint in VS2005 left hanging long enough - leaves the approval process incomplete. The tables are often just joined together, so a data mismatch - missing data here, a primary key that failed to update there - would make an entire row useless.
Now, I know that there is nothing I can do to prevent this - it is a connection issue, after all.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
Thanks.
But are there ways to minimize connection lag / failure? Or a way to inform the users that something went wrong with the process? A rollback changes feature (either via program, or SQL), so that any incomplete data in the database will be undone?
As we discussed in the comments, transactions will address many of your concerns.
A transaction comprises a unit of work performed within a database management system (or similar system) against a database, and treated in a coherent and reliable way independent of other transactions. Transactions in a database environment have two main purposes:
To provide reliable units of work that allow correct recovery from failures and keep a database consistent even in cases of system failure, when execution stops (completely or partially) and many operations upon a database remain uncompleted, with unclear status.
To provide isolation between programs accessing a database concurrently. If this isolation is not provided, the programs' outcomes are possibly erroneous.
Source
Transactions in .NET
As you might expect, the database is integral to providing transaction support for database-related operations. However, creating transactions from your business tier is quite easy and allows you to use a single transaction across multiple database calls.
Quoting from my answer here:
I see several reasons to control transactions from the business tier:
Communication across data store boundaries. Transactions don't have to be against an RDBMS; they can be against a variety of entities.
The ability to rollback/commit transactions based on business logic that may not be available to the particular stored procedure you are calling.
The ability to invoke an arbitrary set of queries within a single transaction. This also eliminates the need to worry about transaction count.
Personal preference: C# has a more elegant structure for declaring transactions: a using block. By comparison, I've always found transactions inside stored procedures cumbersome when jumping to rollback/commit.
Transactions are most easily declared using the TransactionScope (reference) abstraction which does the hard work for you.
using (var ts = new TransactionScope())
{
    // do some work here that may or may not succeed

    // if this line is reached, the transaction will commit. If an exception is
    // thrown before this line is reached, the transaction will be rolled back.
    ts.Complete();
}
Since you are just starting out with transactions, I'd suggest testing a transaction from your .NET code:
Call a stored procedure that performs an INSERT.
After the INSERT, purposely have the procedure generate an error of any kind.
You can validate your implementation by confirming that the INSERT was rolled back automatically; a sketch follows.
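A sketch of that test, where dbo.TestInsert is a hypothetical procedure that INSERTs a row and then calls RAISERROR:

using System.Data;
using System.Data.SqlClient;
using System.Transactions;

static void TestRollback(string connectionString)
{
    using (var ts = new TransactionScope())
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("dbo.TestInsert", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;
        conn.Open();
        cmd.ExecuteNonQuery(); // RAISERROR surfaces as a SqlException
        ts.Complete();         // never reached -> transaction rolls back
    }
}
// Afterwards, query the target table: the INSERTed row should be gone.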
Transactions in the Database
Of course, you can also declare transactions inside a stored procedure (or any sort of TSQL statement). See here for more information.
If you use the same SqlConnection, or another connection type that implements IDbConnection, you can do something similar to a TransactionScope but without the need to create the security risk that a TransactionScope entails.
In VB:
Using scope As IDbTransaction = mySqlCommand.Connection.BeginTransaction()
    If blnEverythingGoesWell Then
        scope.Commit()
    Else
        scope.Rollback()
    End If
End Using
If you don't call Commit, the default when the transaction is disposed is to roll it back.
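For reference, a C# equivalent of the block above (variable names hypothetical). Note that in ADO.NET the command must also be told about the transaction before it executes:

using System.Data;
using System.Data.SqlClient;

static void RunInTransaction(SqlCommand command, bool everythingGoesWell)
{
    using (IDbTransaction scope = command.Connection.BeginTransaction())
    {
        command.Transaction = (SqlTransaction)scope; // required by SqlClient

        // execute the command here, then:
        if (everythingGoesWell)
            scope.Commit();
        else
            scope.Rollback();
    }
}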

Is there any way to resume a (long) transaction after the underlying mysql connection has been lost?

I have a long-running transaction performing a lot of DELETE queries on a database; the issue is that the MySQL connection (to a server on the same machine) gets dropped for no reason every now and then.
Currently, my retry logic detects the disconnection, reconnects, and restarts the whole transaction from the beginning, which may never succeed if the connection drops too frequently.
Is it possible at all to reopen a lost connection and continue the transaction?
I am using MySQL Connector for .NET.
What you are asking is not possible for a transaction. A transaction is there to make sure that either every action performed on the database completes, or none of them do.
If your connection drops too frequently and you have no way of fixing that, then you should either run simple queries without a transaction or, better, make the number of actions in each transaction smaller and send a batch of small transactions instead of one big one.
Also add some data-validation checks to make sure everything is right with the entries.
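A hedged sketch of the "batch of smaller transactions" idea with Connector/NET (the table, column and batch size are hypothetical); a dropped connection now only costs the current batch:

using MySql.Data.MySqlClient;

static void DeleteInBatches(string connectionString)
{
    const int batchSize = 1000;
    int affected = batchSize;

    while (affected > 0)
    {
        try
        {
            using (var conn = new MySqlConnection(connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                using (var cmd = new MySqlCommand(
                    "DELETE FROM old_rows WHERE expired = 1 LIMIT " + batchSize,
                    conn, tx))
                {
                    affected = cmd.ExecuteNonQuery();
                    tx.Commit(); // each batch commits on its own
                }
            }
        }
        catch (MySqlException)
        {
            // Connection dropped: only this batch is lost; the loop retries.
            // A real implementation would cap the number of retries.
            affected = batchSize;
        }
    }
}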
Theoretically you could do exactly what you need with XA transactions... but the limitations of MySQL are rather drastic and make XA transactions on MySQL a joke, to be honest: both resume and join on start, and end with suspend, have not worked since 2006 when the feature was first released. So to answer your question: no! No chance with MySQL, forget it! Try increasing timeouts (both on client and server), memory pools, optimizing the queries, etc. - MySQL won't help you here.
