I need to insert new rows into two of my tables. The first table's auto-generated id field is one of the fields of the second table. Currently I'm using a transaction for inserting the data. My current code is given below:
using (var context = new ApplicationDatabaseEntities())
{
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            foreach (var item in itemList)
            {
                context.MyFirstEntity.Add(item);
                context.SaveChanges();
                mySecondEntity.MyFirstEntityId = item.Id;
                context.MySecondEntity.Add(mySecondEntity);
                context.SaveChanges();
            }
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
            Console.WriteLine(ex);
        }
    }
}
The above code is working fine. My question is: is this the correct way? I mean, if I have 1000 or 2000 items to insert, does my current code affect performance?
The code can be improved with an implicit transaction:
foreach (var item in itemList)
{
    context.MyFirstEntity.Add(item);
    mySecondEntity.MyFirstEntity = item;
    context.MySecondEntity.Add(mySecondEntity);
}
context.SaveChanges();
Note: instead of the id I've used a navigation property.
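For reference, a minimal sketch of the entity shapes this assumes (the names are illustrative, not from the question): EF populates the FK column from the navigation property when SaveChanges runs.

```csharp
// hypothetical entity shapes matching the code above
public class MyFirstEntity
{
    public int Id { get; set; }                               // auto-generated key
}

public class MySecondEntity
{
    public int Id { get; set; }
    public int MyFirstEntityId { get; set; }                  // FK column, filled in by EF on SaveChanges
    public virtual MyFirstEntity MyFirstEntity { get; set; }  // navigation property
}
```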
It depends. If the whole batch has to be transactional, then you have no choice but to do it in one big transaction. If the transaction only has to be guaranteed per tuple, then a long-running transaction may cause locking; in that case you can run a transaction inside the loop for each tuple.
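If per-tuple guarantees are enough, note that each SaveChanges call is already its own implicit transaction, so a sketch of the per-tuple variant (assuming the same context and entities as above) is simply:

```csharp
foreach (var item in itemList)
{
    context.MyFirstEntity.Add(item);
    mySecondEntity.MyFirstEntity = item;
    context.MySecondEntity.Add(mySecondEntity);
    context.SaveChanges(); // commits this pair only; earlier pairs stay committed if a later one fails
}
```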
Also, you can do what you are doing in one go without an explicit transaction. You can call SaveChanges after the loop and it will be transactional:
foreach (var item in itemList)
{
    context.MyFirstEntity.Add(item);
    mySecondEntity.MyFirstEntity = item;
    context.MySecondEntity.Add(mySecondEntity);
}
context.SaveChanges();
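At a few thousand entities, EF's automatic change detection also becomes a cost on every Add. A common mitigation, sketched here against the same context (this uses EF6's DbContext.Configuration API), is to switch it off for the duration of the batch:

```csharp
// sketch: reduce change-tracking overhead for large batches
context.Configuration.AutoDetectChangesEnabled = false;
try
{
    foreach (var item in itemList)
    {
        context.MyFirstEntity.Add(item);
        mySecondEntity.MyFirstEntity = item;
        context.MySecondEntity.Add(mySecondEntity);
    }
    context.SaveChanges(); // still one implicit transaction for the whole batch
}
finally
{
    context.Configuration.AutoDetectChangesEnabled = true; // restore the default
}
```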
You should wrap your transaction in a using statement and roll back when an exception is thrown:
using (DbContextTransaction transaction = context.Database.BeginTransaction())
{
    try
    {
        foreach (var item in itemList)
        {
            context.MyFirstEntity.Add(item);
            mySecondEntity.MyFirstEntity = item;
            context.MySecondEntity.Add(mySecondEntity);
        }
        transaction.Commit();
    }
    catch (Exception e)
    {
        transaction.Rollback();
    }
}
I have a worker thread whose job is to insert objects that are stored in a queue into the database.
We are currently using Entity Framework to do the inserts. Now my question is: do I need to make a new DB instance for every insert, or can I safely re-use the same DB instance over and over?
private static void MainWorker()
{
    while (true)
    {
        try
        {
            if (IncomingDataQueue.Any())
            {
                if (IncomingDataQueue.TryDequeue(out var items))
                {
                    // Insert into db
                    using (var db = GetNewDbInstance())
                    {
                        if (db != null)
                        {
                            db.DataRaw.AddRange(items);
                            db.SaveChanges();
                            // Skip everything and continue to the next loop
                            continue;
                        }
                    }
                }
            }
        }
        catch (Exception ex)
        {
            Debug.WriteException("Failed to insert DB Data", ex);
            // Delay here in case we are hitting the db too hard.
            Thread.Sleep(100);
        }
        // We did not have any items in the queue, so wait before checking again
        Thread.Sleep(20);
    }
}
Here is my function which gets a new DB Instance:
private static DbEntities GetNewDbInstance()
{
    try
    {
        var db = new DbEntities();
        db.Configuration.ProxyCreationEnabled = false;
        db.Configuration.AutoDetectChangesEnabled = false;
        return db;
    }
    catch (Exception ex)
    {
        Debug.WriteLine("Error in getting db instance: " + ex.Message);
    }
    return null;
}
Now, I have not had any issues to date; however, I worry that this solution will not scale well if we are, for example, doing thousands of inserts per minute.
I also worry that with one static DB instance we could get memory leaks, or that the object would keep growing and not manage its DB connections properly.
What is the correct way to use EF with long-term DB connections?
I've been searching everywhere to try and get past this issue, but I just can't figure it out.
I'm trying to make many changes to the DB with one single transaction using LINQ to SQL.
I've created a .dbml that represents the SQL table, and then I use basically this code:
foreach (var _doc in _r.Docs)
{
    try
    {
        foreach (var _E in _Es)
        {
            Entity _newEnt = CreateNewEnt(_EListID, _doc, _fileName, _E);
            _db.Etable.InsertOnSubmit(_newEnt);
            _ECount++;
            if (_ECount % 1000 == 0)
            {
                _db.SubmitChanges();
            }
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
But when I run SQL Profiler, the commands are all executed individually. It won't even start a SQL transaction.
I've tried using TransactionScope (a using statement and Complete()) and DbTransaction (BeginTransaction() and Commit()); neither of them did anything at all. It just keeps executing all the commands individually, inserting everything as if it were looping through all the inserts.
TransactionScope:
using (var _trans = new TransactionScope())
{
    foreach (var _doc in _r.Docs)
    {
        try
        {
            foreach (var _E in _Es)
            {
                Entity _newEnt = CreateNewEnt(_EListID, _doc, _fileName, _E);
                _db.Etable.InsertOnSubmit(_newEnt);
                _ECount++;
                if (_ECount % 1000 == 0)
                {
                    _db.SubmitChanges();
                }
            }
        }
        catch (Exception ex)
        {
            throw;
        }
    }
    _trans.Complete();
}
DbTransaction:
_db.Transaction = _db.Connection.BeginTransaction();
foreach (var _doc in _r.Docs)
{
    try
    {
        foreach (var _E in _Es)
        {
            Entity _newEnt = CreateNewEnt(_EListID, _doc, _fileName, _E);
            _db.Etable.InsertOnSubmit(_newEnt);
            _ECount++;
            if (_ECount % 1000 == 0)
            {
                _db.SubmitChanges();
            }
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
_db.Transaction.Commit();
I also tried committing the transaction every time I submit the changes, but still nothing; it keeps executing everything individually.
Right now I'm at a loss and wasting time :\
GSerg was right and pointed me in the right direction: transactions do not mean multiple commands in one go; they just allow you to "undo" everything that was done inside a given transaction if need be. Bulk statements do what I want to do.
You can download a NuGet package directly from Visual Studio called "Z.LinqToSql.Plus" that helps with this. It extends LINQ's DataContext and allows multiple insertions, updates, or deletes in bulk, that is, in one single statement, like this:
foreach (var _doc in _r.Docs)
{
    try
    {
        foreach (var _E in _Es)
        {
            Entity _newEnt = CreateNewEnt(_EListID, _doc, _fileName, _E);
            _dictionary.Add(_ECount, _newEnt); // or use a list instead
            _ECount++;
            if (_ECount % 20000 == 0)
            {
                _db.BulkInsert(_dictionary.Values); // inserts in bulk; there are also BulkUpdate and BulkDelete
                _dictionary = new Dictionary<long, Entity>(); // restart the dictionary to prepare for the next bulk
            }
        }
    }
    catch (Exception ex)
    {
        throw;
    }
}
As in the code above, I can insert 20k entries in seconds. It's a very useful tool!
Thank you to everyone who tried helping! :)
I have the following transaction code:
using (var context = new StDbContext())
{
    context.Database.Log = Console.Write;
    if (!context.Database.Connection.State.HasFlag(ConnectionState.Open))
    {
        context.Database.Connection.Open();
    }
    using (var transaction = context.Database.BeginTransaction())
    {
        try
        {
            var unit_of_work = new UnitOfWork(context);
            unit_of_work.Sessions.AddModelSession(Model, unit_of_work);
            context.SaveChanges();
            transaction.Commit();
        }
        catch (Exception ex)
        {
            transaction.Rollback();
            TestBaseCore.mNLogLoggerService.Error(ex.Message);
        }
    }
}
Inside unit_of_work.Sessions.AddModelSession() I'm saving data to different tables (UnitOfWork.Project.AddProject(), UnitOfWork.Fixture.AddFixture(), and so on), but sometimes the code crashes in unit_of_work.Sessions.AddModelSession() and I'm not able to catch any exception using try/catch.
Is there a way to log all the database operations to a log file, to be able to figure out what the real issue is?
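On the logging question: EF6's Database.Log accepts any Action&lt;string&gt;, so it can be pointed at a file instead of the console. A sketch, assuming the same StDbContext as above (the log file name is illustrative):

```csharp
using (var context = new StDbContext())
using (var writer = new StreamWriter("ef-sql.log", append: true) { AutoFlush = true })
{
    // every SQL command, parameter list and duration EF emits is appended to the file
    context.Database.Log = writer.Write;
    // ... run the transaction exactly as above ...
}
```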
My requirement is to process multiple cost files, which have millions of records. After processing and validation I have to add those records to the database.
For better performance I am using "yield" in a foreach loop to return one record at a time, process that record, and immediately add it to the database along with the file number. If I come across any data validation error during this file reading process, I throw an InvalidRecordException.
My requirement is to delete all the records related to that file from the table. In short, even if one record is invalid I want to mark the whole file as invalid and not add a single record of that file to the database.
Can anyone help me with how to make use of TransactionScope here?
public class CostFiles
{
    public IEnumerable<string> FinancialRecords
    {
        get
        {
            // logic to get the list of DataRecords
            foreach (var dataRecord in DataRecords)
            {
                // some processing... which can throw an InvalidRecordException
                yield return dataRecord;
            }
        }
    }
}
public void ProcessFileRecords(CostFiles costFile, int ImportFileNumber)
{
    Database db = new Database();
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
    {
        try
        {
            foreach (var record in costFile.FinancialRecords)
            {
                db.Add(record, ImportFileNumber);
            }
        }
        catch (InvalidRecordException ex)
        {
            // here I want to delete all the records from the table where the import file number
            // is the same as the input parameter ImportFileNumber
        }
    }
}
The purpose of a transaction scope is to create an "all or nothing" scenario: either the whole transaction commits, or nothing commits at all. It looks like you already have the right idea, at least in terms of the TransactionScope. The scope won't actually commit the records to the database until you call TransactionScope.Complete(). If Complete() is not called, the records are discarded when you leave the transaction scope. You could easily do something like this:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
    bool errorsEncountered = false;
    try
    {
        foreach (var record in costFile.FinancialRecords)
        {
            db.Add(record, ImportFileNumber);
        }
    }
    catch (InvalidRecordException ex)
    {
        errorsEncountered = true;
    }
    if (!errorsEncountered)
    {
        scope.Complete();
    }
}
Or you can just let Add throw an exception and handle it outside of the transaction scope instead, since the exception will cause Complete() not to be called and therefore no records to be added. This method has the additional advantage of stopping the processing of further records when we already know it will achieve nothing.
try
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        foreach (var record in costFile.FinancialRecords)
        {
            db.Add(record, ImportFileNumber);
        }
        // if an exception is thrown during db.Add(), then Complete is never called
        scope.Complete();
    }
}
catch (Exception ex)
{
    // handle your exception here
}
EDIT: If you don't want your transaction escalated to a distributed transaction (which may have additional security/network requirements), make sure you reuse the same SqlConnection object for every database call within your transaction scope.
using (var conn = new SqlConnection("myConnectionString"))
{
    conn.Open();
    using (var scope = new TransactionScope(...))
    {
        foreach (var foo in foos)
        {
            db.Add(foo, conn);
        }
        scope.Complete();
    }
}
I have the following function, which reads from a Firebird database. The function works but does not handle exceptions (required).
public IEnumerable<DbDataRecord> ExecuteQuery(string Query)
{
    var FBC = new FbCommand(Query, DBConnection);
    using (FbDataReader DBReader = FBC.ExecuteReader())
    {
        foreach (DbDataRecord record in DBReader)
            yield return record;
    }
}
Adding try/catch to this function gives an error regarding yield. I understand why I get the error, but every workaround I've tried has resulted in DBReader either being disposed indirectly via using() too early or Dispose() not being called at all. How do I get this code to use exceptions and clean up properly without having to wrap the method or duplicate DBReader, which might contain several thousand records?
Update:
Here is an example of an attempted fix. In this case DBReader is disposed too early.
public IEnumerable<DbDataRecord> ExecuteQuery(string Query)
{
    var FBC = new FbCommand(Query, DBConnection);
    FbDataReader DBReader = null;
    try
    {
        using (DBReader = FBC.ExecuteReader());
    }
    catch (Exception e)
    {
        Log.ErrorException("Database Execute Reader Exception", e);
        throw;
    }
    foreach (DbDataRecord record in DBReader) // <-- DBReader is closed at this stage
        yield return record;
}
The code you've got looks fine to me, except that I'd use braces round the yield return as well, and change the variable names to fit in with .NET naming conventions :)
The Dispose method will only be called on the reader if:
Accessing MoveNext() or Current in the reader throws an exception
The code using the iterator calls dispose on it
Note that a foreach statement calls Dispose on the iterator automatically, so if you wrote:
foreach (DbDataRecord record in ExecuteQuery())
{
    if (someCondition)
    {
        break;
    }
}
then that will call Dispose on the iterator at the end of the block, which will then call Dispose on the FbDataReader. In other words, it should all be working as intended.
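To make the disposal path concrete, a foreach over the iterator expands to roughly this (a sketch of the compiler's lowering, not the literal generated code; the query string is a placeholder):

```csharp
// what foreach (var record in ExecuteQuery(...)) compiles down to, approximately
IEnumerator<DbDataRecord> e = ExecuteQuery("SELECT ...").GetEnumerator();
try
{
    while (e.MoveNext())
    {
        DbDataRecord record = e.Current;
        // ... loop body; break/return jump straight to the finally below
    }
}
finally
{
    e.Dispose(); // runs the iterator's pending finally blocks, which dispose the FbDataReader
}
```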
If you need to add exception handling within the method, you would need to do something like:
using (FbDataReader DBReader = FBC.ExecuteReader())
{
    // the data reader only exposes a non-generic enumerator, which is not IDisposable,
    // so there is nothing extra to dispose here
    IEnumerator iterator = ((IEnumerable)DBReader).GetEnumerator();
    while (true)
    {
        DbDataRecord record = null;
        try
        {
            if (!iterator.MoveNext())
            {
                break;
            }
            record = (DbDataRecord)iterator.Current;
        }
        catch (FbException e)
        {
            // Handle however you want to handle it
        }
        yield return record;
    }
}
Personally I'd handle the exception at the higher level though...
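Handling it at the higher level would look something like this (Process and the query string are placeholders; Log is the same logger the question uses):

```csharp
try
{
    foreach (DbDataRecord record in ExecuteQuery("SELECT * FROM MY_TABLE"))
    {
        Process(record); // hypothetical per-record work
    }
}
catch (FbException e)
{
    // the iterator's using block still disposes the reader before the exception propagates here
    Log.ErrorException("Database read failed", e);
}
```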
This line won't work; note the ; at the end, which is the entire body of the using():
try
{
    using (DBReader = FBC.ExecuteReader())
        ; // this empty statement is the entire body of the using()
}
The following would be the correct syntax, except that you can't yield from inside a try/catch:
// not working
try
{
    using (DBReader = FBC.ExecuteReader())
    {
        foreach (DbDataRecord record in DBReader)
            yield return record;
    }
}
catch (Exception e)
{
    Log.ErrorException("Database Execute Reader Exception", e);
    throw;
}
But you can stay a little closer to your original code:
// untested, ought to work
FbDataReader DBReader = null;
try
{
    DBReader = FBC.ExecuteReader();
}
catch (Exception e)
{
    Log.ErrorException("Database Execute Reader Exception", e);
    throw;
}
using (DBReader)
{
    foreach (DbDataRecord record in DBReader) // errors here won't be logged
        yield return record;
}
To catch errors from the read loop as well see Jon Skeet's answer.