Child parent transactions roll back - C#

I have a scenario in which I have to process multiple .sql files; every file contains 3-4 insert or update queries. Currently, when any query in a file fails, I roll back the whole transaction, meaning the whole file is rolled back, while all files executed before it stay committed. I want to give the user two options: roll back the entire run (all queries in the failing file plus all files executed before it), or skip just the failing file (roll back that single file while all other files get committed). I am using SqlTransaction right now, not TransactionScope, but obviously I can switch to TransactionScope() if needed and possible.
Currently the pseudo code for what I want looks like this:
var Files[]
foreach (string query in Files)
{
    Execute(query);
    if (success)
        CommitQuery();
    else
    {
        result = MBOX("Do you want to abort all files or skip this one?");
        if (result == abort)
            RollbackAll();
        else
            RollbackQuery();
    }
}

It seems you are looking for SavePoints, i.e. the option to partially roll back and then resume a larger transaction. AFAIK TransactionScope doesn't support SavePoints so you'll need to deal directly with the native provider (e.g. SqlClient if your RDBMS is Sql Server). (i.e. you cannot leverage the ability of TransactionScope to implement DTC equivalent of SavePoints, e.g. across distributed databases, disparate RDBMS, or parallel transactions)
That said, I would suggest a strategy where the user elects to skip or abort up front, before transactional processing begins, as it will be expensive awaiting UI response while a large number of rows are still locked - this will likely cause contention issues.
Edit
Here's a small sample of using SavePoints. Foo1 and Foo3 are inserted; the insert of Foo2 is rolled back to the preceding SavePoint.
using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Foo"].ConnectionString))
{
    conn.Open();
    using (var txn = conn.BeginTransaction("Outer"))
    {
        txn.Save("BeforeFoo1");
        InsertFoo(txn, "Foo1");
        txn.Save("BeforeFoo2");
        InsertFoo(txn, "Foo2");
        txn.Rollback("BeforeFoo2");
        txn.Save("BeforeFoo3");
        InsertFoo(txn, "Foo3");
        txn.Commit();
    }
}
Where InsertFoo is:
private void InsertFoo(SqlTransaction txn, string fooName)
{
    using (var cmd = txn.Connection.CreateCommand())
    {
        cmd.Transaction = txn;
        cmd.CommandType = CommandType.Text;
        cmd.CommandText = "INSERT INTO FOO(Name) VALUES(@Name)";
        cmd.Parameters.Add(new SqlParameter("@Name", SqlDbType.VarChar)).Value = fooName;
        cmd.ExecuteNonQuery();
    }
}
And the underlying table is:
create table Foo
(
    FooId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name NVARCHAR(50)
)

Keep all insert and update queries in a try { ... } catch ( ... ) { ... } block, and if any exception occurs, roll the transaction back in the catch.
private void InsertFoo(SqlTransaction txn, string fooName)
{
    using (var cmd = txn.Connection.CreateCommand())
    {
        try
        {
            // do your processing with cmd here...
            txn.Commit(); // Commit and Rollback live on the transaction, not the command
        }
        catch (Exception ex)
        {
            txn.Rollback();
        }
    }
}

Related

Deadlock: insert statements into a filetable appear to block one another

We are having trouble inserting into a filetable. This is our current setup; I will try to explain it in as much detail as possible:
Basically we have three tables:
T_Document (main metadata of a document)
T_Version (versioned metadata of a document)
T_Content (the binary content of a document, FileTable)
Our WCF service saves the documents and is used by multiple people. The service opens a transaction and calls the method SaveDocument, which saves the documents:
//This is the point where the transaction starts
using (IDBTransaction tran = db.GetTransaction())
{
    try
    {
        m_commandFassade.SaveDocument(document, m_loginName, db, options, lastVersion);
        tran.Commit();
        return document;
    }
    catch
    {
        tran.Rollback();
        throw;
    }
}
The SaveDocument method looks like this:
public void SaveDocument(E2TDocument document, string login, IDBConnection db, DocumentUploadOptions options, int lastVersion)
{
    document.GuardNotNull();
    options.GuardNotNull();
    if (lastVersion == -1)
    {
        //inserting new T_Document
        SaveDocument(document, db);
    }
    else
    {
        //updating the existing T_Document
        UpdateDocument(document, db); //document already exists, updating it
    }
    Guid contentID = Guid.NewGuid();
    //inserting the content
    SaveDocumentContent(document, contentID, db);
    //inserting the new / initial version
    SaveDocumentVersion(document, contentID, db);
}
Basically all the methods you see either insert into or update those three tables. The insert content query that appears to cause the trouble looks like this:
INSERT INTO T_Content
    (stream_id
    ,file_stream
    ,name)
VALUES
    (@ContentID
    ,@Content
    ,@Title)
And the method (please take this as pseudo code):
private void SaveDocumentContent(E2TDocument e2TDokument, Guid contentID, IDBConnection db)
{
    using (m_log.CreateScope<MethodScope>(GlobalDefinitions.TracePriorityForData))
    {
        Command cmd = CommandFactory.CreateCommand("InsertContents");
        cmd.SetParameter("ContentID", contentID);
        cmd.SetParameter("Content", e2TDokument.Content);
        string title = string.Concat(e2TDokument.Titel.RemoveIllegalPathCharacters(), GlobalDefinitions.UNTERSTRICH,
            contentID).TrimToMaxLength(MaxLength_T_Contents_Col_Name, SuffixLength_T_Contents_Col_Name);
        cmd.SetParameter("Title", title);
        db.Execute(cmd);
    }
}
I have no experience in deadlock analysis, but the deadlock graphs show me that when inserting the content into the filetable, the statement is deadlocked with another process also writing into the same table at the same time (the other side of the graph shows the same statement, and my application log confirms two concurrent attempts to save documents).
The same deadlock appears about 30 times a day. I already shrank the transaction to a minimum, removing all unnecessary selects, but so far I have had no luck solving this issue.
What I'm most curious about is how it's possible to deadlock on an insert into a filetable. Are there internal things being executed that I'm not aware of? I saw some strange statements in the profiler trace on that table that we don't use anywhere in the code, e.g.:
set @pathlocator = convert(hierarchyid, @path_locator__bin)
and things like:
if exists (
    select 1
    from [LGOL_Content01].[dbo].[T_Contents]
    where parent_path_locator = @path_locator
)
If you need any more details, please let me know. Any tips how to proceed would be awesome.
Edit 1:
Following is the execution plan for the T_Content insert: [execution plan screenshot not reproduced here]
So, after hours and hours of research and consulting with Microsoft, the deadlock is actually a filetable / SQL Server related bug, which will be hotfixed by Microsoft.

Use an external System.Data.Common.DbTransaction with Entity Framework 5

I'm stuck on a problem that will happen to anyone mixing ADO.NET and Entity Framework contexts.
I've got a big procedure that handles and saves data into the database in multiple ways, like ADO.NET data adapters and direct CRUD commands against the DB. The whole procedure is wrapped by two using() blocks that create and release a DbConnection/DbTransaction, and a try/catch block to commit or roll back the transaction. Unfortunately, in the middle of this routine, I have to call a saving procedure implemented with Entity Framework. This leads me to a problem:
According to the official documentation, Entity Framework 5 allows me to pass a connection with an associated transaction (it seems to work: in debug mode, when I call SaveChanges() I don't receive any TimeoutException due to deadlocks, whereas if I pass a new connection I do), but unfortunately after SaveChanges() kicks in, the connection is closed and the associated transaction committed! Even if I set the 'contextOwnsConnection' flag!
As far as I know, if I migrate from EF5 to EF6, things should work (am I right?), but unfortunately I can't, because the project I'm working on is very large and involves a lot of dependencies, and migrating would take a large amount of time.
How can I make it work with EF5? Is there any trick or pattern to achieve the desired result? Am I right about the behavior of EF6? Is the EF6 migration worth it?
Here is a simple example of what my code looks like.
For privacy reasons I can't post the original code, but imagine a situation like this with a lot more complexity:
using (DbConnection conn = DBProvider.CreateConnection())
{
    //Open the created connection
    conn.Open();
    //Create a new transaction
    using (DbTransaction tr = DBProvider.CreateTransaction())
    {
        //Begin a new transaction
        tr.Begin();
        bool saveOk;
        try
        {
            //Updates customers by using a data adapter
            dataAdapterCustomers.InsertCommand.Transaction = tr;
            dataAdapterCustomers.UpdateCommand.Transaction = tr;
            dataAdapterCustomers.DeleteCommand.Transaction = tr;
            dataAdapterCustomers.Update();
            //Updates stock items by using a data adapter
            stockAdapterCustomers.InsertCommand.Transaction = tr;
            stockAdapterCustomers.UpdateCommand.Transaction = tr;
            stockAdapterCustomers.DeleteCommand.Transaction = tr;
            stockAdapterCustomers.Update();
            //...Many other DB accesses here...
            //Updates stock quantity by using a simple DbCommand
            quantityUpdateCmd.Transaction = tr;
            quantityUpdateCmd.ExecuteNonQuery();
            //Updates stock statistics by using a simple DbCommand
            updateStockStatsCmd.Transaction = tr;
            updateStockStatsCmd.ExecuteNonQuery();
            //...Many other DB accesses here...
            //HERE:
            //Creates a new activity and saves it using EF.
            //I use a UnitOfWork and pass it my connection and 'false' as the contextOwnsConnection parameter
            //(it'll be used by the DbContext contained in my unit of work)
            using (ActivityUoW uow = new ActivityUoW(conn, false))
            {
                Activity act = new Activity();
                act.Name = "Saving activity";
                act.Description = "Done by user";
                act.Date = DateTime.Now;
                uow.Activities.Add(act);
                uow.SaveChanges();
            }
            //Based on the activity result, launch a stored procedure that does other complex things.
            //UNFORTUNATELY THE CONNECTION HAS BEEN CLOSED AND THE TRANSACTION COMMITTED,
            //SO THE FOLLOWING INSTRUCTION WILL FAIL.
            launchActivityUpdateSpCmd.Transaction = tr;
            launchActivityUpdateSpCmd.ExecuteNonQuery();
            //...Many other DB accesses here...
            //Data saved correctly
            saveOk = true;
        }
        catch (Exception ex)
        {
            //There was an error during save
            saveOk = false;
        }
        //Commit or roll back the transaction according to the save procedure's result
        if (saveOk)
            tr.Commit();
        else
            tr.Rollback();
    }
}
I didn't quite follow your question(s) and wasn't sure if your issue was related to how to handle transactions, or if you had a question about migrating from EF5 to EF6. That being said, you have an interesting mixture of data access code.
Regarding transactions - I would look into using TransactionScope, which is part of the System.Transactions namespace.
For example:
try
{
    using (var scope = new TransactionScope())
    {
        using (var conn = new SqlConnection("your connection string"))
        {
            conn.Open();
            // your EF and ADO.NET code
        }
        scope.Complete();
    }
}
catch (TransactionAbortedException ex)
{
    // the transaction was rolled back
}
catch (ApplicationException ex)
{
    // other failure
}

Properly implement concurrency control in ASP.NET WebAPI + SQL

I'm writing a very simple web application that serves as an endpoint for uploading money transactions from customers and saving them in a SQL Server DB. It accepts requests with just 2 params: userid: 'xxx', balancechange: -19.99. If the user ID exists in the app database, then the balance is changed; if not, a new row is created for this ID.
The difficult part in all this is that the number of requests is enormous and I have to implement the app in such a way that it works as fast as possible and resolves concurrency issues (if 2 requests for the same ID arrive simultaneously).
The app is an ASP.NET MVC WebAPI. I chose to use plain old ADO.NET for speed, and this is what I currently have:
private static readonly object syncLock = new object();

public void UpdateBalance(string userId, decimal balance)
{
    lock (syncLock)
    {
        using (var sqlConnection = new SqlConnection(this.connectionString))
        {
            sqlConnection.Open();
            var command = new SqlCommand($"SELECT COUNT(*) FROM Users WHERE Id = '{userId}'", sqlConnection);
            if ((int)command.ExecuteScalar() == 0)
            {
                command = new SqlCommand($"INSERT INTO Users (Id, Balance) VALUES ('{userId}', 0)", sqlConnection);
                command.ExecuteNonQuery();
            }
            command = new SqlCommand($"UPDATE Users SET Balance = Balance + {balance} WHERE Id = {userId}", sqlConnection);
            command.ExecuteNonQuery();
        }
    }
}
Called from a controller like this:
[HttpPost]
public IHttpActionResult UpdateBalance(string id, decimal balanceChange)
{
    UpdateBalance(id, balanceChange);
    return Ok();
}
The thing I'm concerned with is the concurrency control using lock (syncLock). This would slow the app down under high load and doesn't allow multiple instances of the app to be deployed on different servers. What are ways to properly implement concurrency control here?
Note: I'd like to use a fast and DB-independent way of implementing concurrency control, as the current storage mechanism (SQL Server) can change in the future.
First, DB-independent code:
For this, you will want to look at DbProviderFactory. What this allows is passing the provider name (MySql.Data.MySqlClient, System.Data.SqlClient), then using the abstract classes (DbConnection, DbCommand) you interact with your DB.
Second, using transactions and parameterized queries:
When you are working with a database, you ALWAYS want your queries parameterized. If you use String.Format() or any other type of string concatenation, you open your query up to injection.
Transactions ensure all-or-nothing behavior for your queries, and they can also lock down the table so that only queries within the transaction can access it. Transactions have two commands: Commit, which saves the changes (if any) to the DB, and Rollback, which discards any changes.
The following assumes that you already have an instance of DbProviderFactory in a class variable _factory.
public void UpdateBalance(string userId, decimal balanceChange)
{
    //since we might need to execute two queries, we will create the parameters once
    List<DbParameter> parameters = new List<DbParameter>();
    DbParameter userParam = _factory.CreateParameter();
    userParam.ParameterName = "@userId";
    userParam.DbType = System.Data.DbType.String; //userId is a string in this API
    userParam.Value = userId;
    parameters.Add(userParam);
    DbParameter balanceChangeParam = _factory.CreateParameter();
    balanceChangeParam.ParameterName = "@balanceChange";
    balanceChangeParam.DbType = System.Data.DbType.Decimal;
    balanceChangeParam.Value = balanceChange;
    parameters.Add(balanceChangeParam);
    //Improvement: if you implement a method to clone a DbParameter, you can
    //create the above list at class construction instead of function invocation,
    //then clone the objects for the function.
    using (DbConnection conn = _factory.CreateConnection())
    {
        conn.Open(); //Need to open the connection before you start the transaction
        DbTransaction trans = conn.BeginTransaction(System.Data.IsolationLevel.Serializable);
        //IsolationLevel.Serializable locks the entire table down until the
        //transaction is committed or rolled back.
        try
        {
            int changedRowCount = 0;
            //We can use the fact that ExecuteNonQuery will return the number
            //of affected rows; if there are no affected rows, a
            //record does not exist for the userId.
            using (DbCommand cmd = conn.CreateCommand())
            {
                cmd.Transaction = trans; //Need to set the transaction on the command
                cmd.CommandText = "UPDATE Users SET Balance = Balance + @balanceChange WHERE Id = @userId";
                cmd.Parameters.AddRange(parameters.ToArray());
                changedRowCount = cmd.ExecuteNonQuery();
                cmd.Parameters.Clear(); //detach the parameters so they can be reused below
            }
            if (changedRowCount == 0)
            {
                //If no record was affected in the previous query, insert a record
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.Transaction = trans; //Need to set the transaction on the command
                    cmd.CommandText = "INSERT INTO Users (Id, Balance) VALUES (@userId, @balanceChange)";
                    cmd.Parameters.AddRange(parameters.ToArray());
                    cmd.ExecuteNonQuery();
                }
            }
            trans.Commit(); //This will persist the data to the DB.
        }
        catch (Exception)
        {
            trans.Rollback(); //This will cause the data NOT to be saved to the DB.
            //This is the default action if Commit is not called.
            throw;
        }
        finally
        {
            trans.Dispose(); //Need to call Dispose
        }
        //Improvement: you can remove the try-catch-finally block by wrapping
        //the conn.BeginTransaction() line in a using block. I used try-catch here
        //so that you can more easily see what is happening with the transaction.
    }
}

Unit testing with manual transactions and layered transactions

Due to a few restrictions I can't use Entity Framework and thus need to use SQL connections, commands and transactions manually.
While writing unit tests for the methods calling these data layer operations I stumbled upon a few problems.
For the unit tests I NEED to do them in a transaction, as most of the operations change data by their nature, and doing them outside a transaction is problematic as that would change the whole base data. Thus I need to put a transaction around these (with no commit fired at the end).
Now I have 2 different variants of how these BL methods work.
A few have transactions themselves inside of them, while others have no transactions at all. Both of these variants cause problems.
Layered transaction: Here I get errors that the DTC cancelled the distributed transaction due to timeouts (although the timeout is set to 15 minutes and it runs for only 2 minutes).
Only 1 transaction: Here I get an error about the state of the transaction when I come to the "new SqlCommand" line in the called method.
My question is: what can I do to correct this and get unit testing working with manual normal and layered transactions?
Unit testing method example:
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        MyBLMethod();
    }
}
Example for a transaction-using method (very simplified)
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    using (SqlTransaction transaction = connection.BeginTransaction())
    {
        SqlCommand command = new SqlCommand();
        command.Connection = connection;
        command.Transaction = transaction;
        command.CommandTimeout = 900; // Wait 15 minutes before a timeout
        command.CommandText = "INSERT ......";
        command.ExecuteNonQuery();
        // Following commands
        ....
        transaction.Commit();
    }
}
Example for a non-transaction-using method
using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
{
    connection.Open();
    SqlCommand command = new SqlCommand();
    command.Connection = connection;
    command.CommandTimeout = 900; // Wait 15 minutes before a timeout
    command.CommandText = "INSERT ......";
    command.ExecuteNonQuery();
}
On the face of it, you have a few options, depending upon what you want to test and your ability to spend money / change your code base.
At the moment, you're effectively writing integration tests. If the database isn't available then your tests will fail. This means the tests can be slow, but on the plus side, if they pass you're pretty confident that your code can hit the database correctly.
If you don't mind hitting the database, then the minimum impact in terms of changing your code / spending money would be to allow the transactions to complete and verify them in the database. You can either do this by taking database snapshots and resetting the database each test run, or by having a dedicated test database and writing your tests in such a way that they can safely hit the database over and over again and then be verified. So for example, you can insert a record with an incremented id, update the record, and then verify that it can be read. You may have more unwinding to do if there are errors, but if you're not modifying the data access code or the database structure that often then this shouldn't be too much of an issue.
If you're able to spend some money and you want to actually turn your tests into unit tests, so that they don't hit the database, then you should consider looking into TypeMock. It's a very powerful mocking framework that can do some pretty scary stuff. I believe it uses the profiling API to intercept calls, rather than the approach used by frameworks like Moq. There's an example of using TypeMock to mock a SqlConnection here.
If you don’t have money to spend / you’re able to change your code and don’t mind continuing to rely on the database then you need to look at some way to share your database connection between your test code and your dataaccess methods. Two approaches that spring to mind are to either inject the connection information into the class, or make it available by injecting a factory that gives access to the connection information (in which case you can inject a mock of the factory during testing that returns the connection you want).
If you go with the above approach, rather than directly injecting SqlConnection, consider injecting a wrapper class that is also responsible for the transaction. Something like:
public class MySqlWrapper : IDisposable
{
    public SqlConnection Connection { get; set; }
    public SqlTransaction Transaction { get; set; }
    int _transactionCount = 0;

    public void BeginTransaction()
    {
        _transactionCount++;
        if (_transactionCount == 1)
        {
            Transaction = Connection.BeginTransaction();
        }
    }

    public void CommitTransaction()
    {
        _transactionCount--;
        if (_transactionCount == 0)
        {
            Transaction.Commit();
            Transaction = null;
        }
        if (_transactionCount < 0)
        {
            throw new InvalidOperationException("Commit without Begin");
        }
    }

    public void Rollback()
    {
        _transactionCount = 0;
        Transaction.Rollback();
        Transaction = null;
    }

    public void Dispose()
    {
        if (null != Transaction)
        {
            Transaction.Dispose();
            Transaction = null;
        }
        Connection.Dispose();
    }
}
This will stop nested transactions from being created + committed.
If you're more willing to restructure your code, then you might want to wrap your dataaccess code in a more mockable way. So, for example, you could push your core database access functionality into another class. Depending on what you're doing you'll need to expand on it; however, you might end up with something like this:
public interface IMyQuery
{
    string GetCommand();
}

public class MyInsert : IMyQuery
{
    public string GetCommand()
    {
        return "INSERT ...";
    }
}

class DBNonQueryRunner
{
    public void RunQuery(IMyQuery query)
    {
        using (SqlConnection connection = new SqlConnection(Properties.Settings.Default.ConnectionString))
        {
            connection.Open();
            using (SqlTransaction transaction = connection.BeginTransaction())
            {
                SqlCommand command = new SqlCommand();
                command.Connection = connection;
                command.Transaction = transaction;
                command.CommandTimeout = 900; // Wait 15 minutes before a timeout
                command.CommandText = query.GetCommand();
                command.ExecuteNonQuery();
                transaction.Commit();
            }
        }
    }
}
This allows you to unit test more of your logic, like the command generation code, without having to actually worry about hitting the database and you can test your core dataaccess code (the Runner) against the database once, rather than for every command you want to run against the database. I would still write integration tests for all dataaccess code, but I’d only tend to run them whilst actually working on that section of code (to ensure column names etc have been specified correctly).

Very slow foreach loop

I am working on an existing application. This application reads data from a huge file and then, after doing some calculations, it stores the data in another table.
But the loop doing this (see below) is taking a really long time. Since the file sometimes contains 1,000s of records, the entire process takes days.
Can I replace this foreach loop with something else? I tried using Parallel.ForEach and it did help. I am new to this, so I will appreciate your help.
foreach (record someRecord in someReport.r)
{
    try
    {
        using (var command = new SqlCommand("[procname]", sqlConn))
        {
            command.CommandTimeout = 0;
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.Add(…);
            IAsyncResult result = command.BeginExecuteReader();
            while (!result.IsCompleted)
            {
                System.Threading.Thread.Sleep(10);
            }
            command.EndExecuteReader(result);
        }
    }
    catch (Exception e)
    {
        …
    }
}
After reviewing the answers, I removed the async calls and edited the code as below. But this did not improve performance.
using (var command = new SqlCommand("[sp]", sqlConn))
{
    command.CommandTimeout = 0;
    command.CommandType = CommandType.StoredProcedure;
    foreach (record someRecord in someReport.r)
    {
        command.Parameters.Clear();
        command.Parameters.Add(....);
        command.Prepare();
        using (var dr = command.ExecuteReader())
        {
            while (dr.Read())
            {
                if ()
                {
                }
                else if ()
                {
                }
            }
        }
    }
}
Instead of hitting the SQL connection so many times in a loop, have you considered extracting the whole set of data out of SQL Server and processing it via a dataset?
Edit: I decided to further explain what I meant.
You can do the following; pseudo code as follows:
Use a select * and get all information from the database, and store it in a list of the class or a dictionary (as sketched below).
Do your foreach(record someRecord in someReport) and do the condition matching as usual.
Step 1: Ditch the attempt at async. It isn't implemented properly and you're blocking anyway. So just execute the procedure and see if that helps.
Step 2: Move the SqlCommand outside of the loop and reuse it for each iteration. That way you don't incur the cost of creating and destroying it for every item in your loop.
Warning: Make sure you reset/clear/remove parameters you don't need from the previous iteration. We did something like this with optional parameters and had 'bleed-thru' from the previous iteration because we didn't clean up parameters we didn't need!
Your biggest problem is that you're looping over this:
IAsyncResult result = command.BeginExecuteReader();
while (!result.IsCompleted)
{
    System.Threading.Thread.Sleep(10);
}
command.EndExecuteReader(result);
The entire idea of the asynchronous model is that the calling thread (the one doing this loop) should be spinning up ALL of the asynchronous tasks using the Begin method before starting to work with the results with the End method. If you are using Thread.Sleep() within your main calling thread to wait for an asynchronous operation to complete (as you are here), you're doing it wrong, and what ends up happening is that each command, one at a time, is being called and then waited for before the next one starts.
Instead, try something like this:
public void BeginExecutingCommands(Report someReport)
{
foreach (record someRecord in someReport.r)
{
var command = new SqlCommand("[procname]", sqlConn);
command.CommandTimeout = 0;
command.CommandType = CommandType.StoredProcedure;
command.Parameters.Add(…);
command.BeginExecuteReader(ReaderExecuted,
new object[] { command, someReport, someRecord });
}
}
void ReaderExecuted(IAsyncResult result)
{
var state = (object[])result.AsyncState;
var command = state[0] as SqlCommand;
var someReport = state[1] as Report;
var someRecord = state[2] as Record;
try
{
using (SqlDataReader reader = command.EndExecuteReader(result))
{
// work with reader, command, someReport and someRecord to do what you need.
}
}
catch (Exception ex)
{
// handle exceptions that occurred during the async operation here
}
}
In SQL, at the other end of a write is a single disk; you can rarely write faster in parallel. In fact, parallel writes often slow things down due to index fragmentation. If you can, sort the data by the primary (clustered) key prior to loading. For a big load, consider even disabling the other keys, loading the data, then rebuilding the keys.
Not really sure what you were doing with the async code, but it was certainly not doing what you expected, as it was waiting on itself.
try
{
    using (var command = new SqlCommand("[procname]", sqlConn))
    {
        command.CommandTimeout = 0;
        command.CommandType = CommandType.StoredProcedure;
        foreach (record someRecord in someReport.r)
        {
            command.Parameters.Clear();
            command.Parameters.Add(…);
            using (var rdr = command.ExecuteReader())
            {
                while (rdr.Read())
                {
                    …
                }
            }
        }
    }
}
catch (…)
{
    …
}
As we were talking about in the comments, storing this data in memory and working with it there may be a more efficient approach.
So one easy way to do that is to start with Entity Framework. Entity Framework will automatically generate the classes for you based on your database schema. Then you can import a stored procedure which holds your SELECT statement. The reason I suggest importing a stored proc into EF is that this approach is generally more efficient than doing your queries in LINQ against EF.
Then run the stored proc and store the data in a List like this...
var data = db.MyStoredProc().ToList();
Then you can do anything you want with that data. Or as I mentioned, if you're doing a lot of lookups on primary keys then use ToDictionary() something like this...
var data = db.MyStoredProc().ToDictionary(k => k.MyPrimaryKey);
Either way, you'll be working with your data in memory at this point.
It seems executing your SQL command puts a lock on some required resources, and that's what forced you into using async methods (my guess).
If the database is not in use, try getting exclusive access to it. Even then, if there are internal transactions due to data-model complexity, consider consulting the database designer.
