I'm seeing strange behavior since updating to EF6. I'm not sure whether it's related, but things used to work fine.
I do one set of work, save it to the DB, then do another set and save that.
After a while, when I check SQL Server with sp_who2, I find many dead connections from my computer.
The job is huge, so it climbs to around 700 connections,
and I have to kill them all manually in a loop.
The program looks like:
while (jobDone == false)
{
    var returnData = doOneSetJob();
    myEntity dbconn = new myEntity();
    foreach (var one in returnData)
    {
        dbconn.targetTable.Add(one);
        try
        {
            dbconn.SaveChanges();
            // even if I put a Dispose() here, there are still lots of dead connections
        }
        catch
        {
            Console.WriteLine("DB Insertion Fail.");
            dbconn.Dispose();
            dbconn = new myEntity();
        }
    }
    dbconn.Dispose();
}
You should consider refactoring your code so that your connection is cleaned up after your job is complete. For example:
using (var context = new DbContext())
{
    while (!jobDone)
    {
        // Execute job and get data
        var returnData = doOneSetJob();
        // Process job results
        foreach (var one in returnData)
        {
            try
            {
                context.TargetTable.Add(one);
                context.SaveChanges();
            }
            catch (Exception ex)
            {
                // Log the error
            }
        }
    }
}
The using statement will guarantee that your context is cleaned up properly, even if an error occurs while you are looping through the results.
In this case you should use a using statement. Taken from MSDN:
The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object. You can achieve the same result by putting the object inside a try block and then calling Dispose in a finally block; in fact, this is how the using statement is translated by the compiler.
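The compiler translation that the quote describes looks roughly like this (a sketch; `MyContext` is a hypothetical DbContext-derived type used purely for illustration):

```csharp
// What `using (var dbconn = new MyContext()) { ... }` expands to, roughly:
var dbconn = new MyContext();
try
{
    // ... work with dbconn ...
}
finally
{
    if (dbconn != null)
    {
        // Dispose is always called, whether the block exits
        // normally or via an exception.
        ((IDisposable)dbconn).Dispose();
    }
}
```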
So, your code would look better like this:
using (var dbconn = new DbContext())
{
    while (!jobDone)
    {
        var returnData = doOneSetJob();
        foreach (var one in returnData)
        {
            try
            {
                dbconn.TargetTable.Add(one);
                dbconn.SaveChanges();
            }
            catch (Exception ex)
            {
                Console.WriteLine("DB Insertion Fail.");
            }
        }
    }
}
This way, even if your code fails at some point, the Context, resources and connections will be properly disposed.
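One caveat worth flagging (an editorial note, not part of the original answer): in EF6, an entity for which SaveChanges failed stays tracked by the context, so every subsequent SaveChanges call will retry it and may fail again. A sketch of one way to handle that in the catch block, assuming a DbContext named `dbconn` and a loop variable `one` as in the example above:

```csharp
catch (Exception ex)
{
    Console.WriteLine("DB Insertion Fail.");
    // Detach the failed entity so later SaveChanges calls
    // don't keep retrying (and re-failing on) it.
    dbconn.Entry(one).State = System.Data.Entity.EntityState.Detached;
}
```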
My requirement is to process multiple cost files, each of which has millions of records. After processing and validation, I have to add those records to the database.
For better performance I am using "yield" in a foreach loop to return one record at a time, process that record, and immediately add it to the database along with the file number. If I come across any data validation error during this file-reading process, I throw an InvalidRecordException.
I need to delete all the records from the table related to that file. In short, even if one record is invalid, I want to mark the whole file as invalid and not add even a single record of that file to the database.
Can anyone help me here? How can I make use of TransactionScope?
public class CostFiles
{
    public IEnumerable<string> FinancialRecords
    {
        get
        {
            //logic to get list of DataRecords
            foreach (var dataRecord in DataRecords)
            {
                //some processing... which can throw InvalidRecord exception
                yield return dataRecord;
            }
            yield break;
        }
    }
}
public void ProcessFileRecords(CostFiles costFile, int ImportFileNumber)
{
    Database db = new Database();
    using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
    {
        try
        {
            foreach (var record in costFile.FinancialRecords)
            {
                db.Add(record, ImportFileNumber);
            }
        }
        catch (InvalidRecordException ex)
        {
            // here I want to delete all the records from the table
            // where the import file number is the same as the input parameter ImportFileNumber
        }
    }
}
The purpose of a transaction scope is to create an "all or nothing" scenario, so either the whole transaction commits, or nothing at all commits. It looks like you already have the right idea (at least in terms of the TransactionScope). The scope won't actually commit the records to the database until you call TransactionScope.Complete(). If Complete() is not called, then the records are discarded when you leave the transaction scope. You could easily do something like this:
using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required))
{
    bool errorsEncountered = false;
    try
    {
        foreach (var record in costFile.FinancialRecords)
        {
            db.Add(record, ImportFileNumber);
        }
    }
    catch (InvalidRecordException ex)
    {
        // no need to delete anything here; because Complete() is never
        // called below, the records are rolled back with the scope
        errorsEncountered = true;
    }
    if (!errorsEncountered)
    {
        scope.Complete();
    }
}
Or you can just let the Add throw an exception and handle it outside of the transaction scope instead, as the exception will cause Complete() not to be called, and therefore no records added. This method has the additional advantage of stopping processing of additional records when we already know it will do nothing.
try
{
    using (var scope = new TransactionScope(TransactionScopeOption.Required))
    {
        foreach (var record in costFile.FinancialRecords)
        {
            db.Add(record, ImportFileNumber);
        }
        // if an exception is thrown during db.Add(), then Complete is never called
        scope.Complete();
    }
}
catch (Exception ex)
{
    // handle your exception here
}
EDIT: If you don't want your transaction escalated to a distributed transaction (which may have additional security/network requirements), make sure you reuse the same SqlConnection object for every database call within your transaction scope.
using (var conn = new SqlConnection("myConnectionString"))
{
    conn.Open();
    using (var scope = new TransactionScope(...))
    {
        foreach (var foo in foos)
        {
            db.Add(foo, conn);
        }
        scope.Complete();
    }
}
Using entity framework, I have a function that basically goes something like this:
using (var ctx = new Dal.MyEntities())
{
    Dal.Temp temp = null;
    try
    {
        //...
        // create a temp entity
        temp = new Dal.Temp();
        // populate its children
        // note that temp is set to cascade deletes down to its children
        temp.Children = (from foo in foos
                         select new Dal.Children()
                         {
                             // set some properties...
                             Field1 = foo.field1,
                             Field2 = foo.field2
                         }).ToList();
        //...
        // add temp row to temp table
        ctx.Temp.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        var results = from d in ctx.SomeOtherTable
                      join t in temp.Children
                          on new { d.Field1, d.Field2 } equals new { t.Field1, t.Field2 }
                      select d;
        if (results.Count() == 0)
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
        return results;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.Temp.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
The idea is that as part of the processing of a request I need to build a temporary table with some data that then gets used to join the main query and filter the results. Once the query has been processed, the temp table should be deleted. I put the deletion part in the finally clause so that if there is a problem with the query (an exception thrown), the temporary table will always get cleaned up.
This seems to work fine, except that intermittently the SaveChanges in the finally block throws a deadlock exception with an error message along the lines of:
Transaction (Process ID 89) was deadlocked on lock resources with another process and
has been chosen as the deadlock victim. Rerun the transaction.
I can't reliably reproduce it, but it seems to happen most often if the previous query threw the "no results" exception. Note that, due to an error that was discovered on the front end, two identical requests were being submitted under certain circumstances; nevertheless, the code should be able to handle that.
Does anybody have any clues as to what might be happening here? Is throwing an exception inside the using block a problem? Should I handle that differently?
Update: the exception might be a red herring. I removed it altogether (instead returning an empty result) and I still have the problem. I've tried a bunch of variations on:
using (new TransactionScope(TransactionScopeOption.Required,
    new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
using (var ctx = new Dal.MyEntities())
{
}
But despite what I've read, it doesn't seem to make any difference. I still get intermittent deadlocks on the second SaveChanges to remove the temp table.
How about adding a catch block:
using (var ctx = new Dal.MyEntities())
{
    Dal.TempTable temp = null;
    try
    {
        //...
        temp = new Dal.TempTable();
        //...
        ctx.TempTables.Add(temp);
        ctx.SaveChanges();
        // some query that joins on the temp table...
        if (noResultsReturned) // pseudocode for the "no results" check
        {
            throw new Exception("no results");
        }
        // Normal processing and return result
    }
    catch
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
            temp = null; // prevent the finally block from removing it twice
        }
        throw;
    }
    finally
    {
        if (temp != null && temp.ID != 0)
        {
            ctx.TempTables.Remove(temp);
            ctx.SaveChanges();
        }
    }
}
I have a Web API 2 endpoint set up to test a 2+ second ADO.NET call. When attempting to "burst" this api, it fails horribly when using async methods. I'm getting connection timeouts and reader timeouts. This doesn't happen with the synchronous version of this method when "bursted". The real problem is as follows...
Async ADO.NET behaves strangely when exceptions are thrown. I can see the exceptions thrown in the Visual Studio output, but they are not caught by my code. As you'll see below, I've tried wrapping try/catches around just about everything and have had no results. I did this to be able to set break points. I understand that catching exceptions just to throw them is bad. Initially, I only wrapped the call in the API layer. The worst part is, this locks up the entire server.
Clearly, there's something I'm missing about async ADO.NET. Any ideas?
EDIT:
Just to clarify what I'm trying to do here: this is just some test code on my local computer that's talking to our development database. It was just to prove/disprove that we can handle more traffic with async methods against our longer-running db calls. I think what's happening is that the calls are stacking up, and the awaited connections and readers are timing out because we're not getting back to them quickly enough. This is why it doesn't fail when run synchronously. That is a completely different issue, though. My concern here is that the operations are not throwing exceptions in a way that can be caught. The below is not production code :)
Web API2 Controller:
[Route("api/async/books")]
[HttpGet]
public async Task<IHttpActionResult> GetBookAsync()
{
    // database class instantiation removed
    // search args instantiation removed
    try
    {
        var books = await titleData.GetTitlesAsync(searchArgs);
        return Ok(books);
    }
    catch (Exception ex)
    {
        return InternalServerError(ex);
    }
}
Data Access:
public async Task<IEnumerable<Book>> GetTitlesAsync(SearchArgs args)
{
    var procName = "myProc";
    using (var connection = new SqlConnection(_myConnectionString))
    using (var command = new SqlCommand(procName, connection) { CommandType = System.Data.CommandType.StoredProcedure, CommandTimeout = 90 })
    {
        // populating command parameters removed
        var results = new List<Book>();
        try
        {
            await connection.OpenAsync();
        }
        catch (Exception ex)
        {
            throw;
        }
        try
        {
            using (var reader = await command.ExecuteReaderAsync())
            {
                try
                {
                    while (await reader.ReadAsync())
                    {
                        // FROM MSDN:
                        // http://blogs.msdn.com/b/adonet/archive/2012/07/15/using-sqldatareader-s-new-async-methods-in-net-4-5-beta-part-2-examples.aspx
                        // Since this is non-sequential mode,
                        // all columns should already be read in by ReadAsync.
                        // Therefore we can access individual columns synchronously.
                        var book = new Book
                        {
                            Id = (int)reader["ID"],
                            Title = reader.ValueOrDefault<string>("Title"),
                            Author = reader.ValueOrDefault<string>("Author"),
                            IsActive = (bool)reader["Item_Active"],
                            ImageUrl = GetBookImageUrl(reader.ValueOrDefault<string>("BookImage")),
                            ProductId = (int)reader["ProductID"],
                            IsExpired = (bool)reader["Expired_Item"]
                        };
                        results.Add(book);
                    }
                }
                catch (Exception ex)
                {
                    throw;
                }
            }
        }
        catch (Exception ex)
        {
            throw;
        }
        return results;
    }
}
I'm developing using Asp.net MVC 4, NHibernate and Session-per-request.
I have a service method which updates multiple databases so the work is wrapped in a TransactionScope. I have discovered that the NHibernate Session is not usable outside the TransactionScope due to it not being thread safe.
The code is similar to this:
public void ProcessItems()
{
    var items = itemService.GetAll();
    var mailMessages = new List<MailMessage>();
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
            try
            {
                mailMessages.Add(itemService.GenerateMailMessage(item));
            }
            catch (Exception ex)
            {
                // we don't want exceptions caused by generating email to prevent DB work
                if (ex is InvalidOperationException
                    || ex is NullReferenceException
                    || ex is FormatException
                    || ex is ArgumentException
                    || ex is ItemNotFoundException)
                {
                    LogError(String.Format("Unable to generate email alert for item.Id:{0} - {1}", item.Id, ex.Message), log);
                }
                else
                {
                    // for exception types we don't recognize, rethrow
                    throw;
                }
            }
        }
        scope.Complete();
    }
    mailService.SendMail(mailMessages);
}
The database updates are critical to the success of the method. The email alerts are not. I don't want problems with the generation of the email alerts to prevent the database updates taking place.
My questions are:
Given the constraints does this look like a reasonable approach?
I'm worried that an exception I haven't handled may be thrown when
generating the email message. This will cause the entire TransactionScope to
be rolled back. It feels like I want any exception to be ignored
if it happens in that try block of code. However I appreciate a
catch-all is a no-no so any other suggestions for making this more
robust are welcome.
EDIT
Just to clarify my question:
I know it would be better to generate and send the email after the TransactionScope. However I am unable to do this as GenerateMailMessage() makes use of the NHibernate Session which is not safe to use outside of the TransactionScope block.
I guess what I was really asking is: would it be defensible to change the catch statement above to a genuine catch-all (still with logging taking place) in order to provide as much protection as possible to the critical UpdateOne() and UpdateTwo() calls?
Update
My advice would be to try to prevent the exception from occurring. Failing that, a catch-all is likely the only option you have remaining. Logging all exceptions is going to be critical here.
1st question: Your case isn't really a catch-all; you are catching all exceptions in order to query their type. My only advice is to log details for the exceptions you choose to consume.
2nd question: I would completely remove the generation of email from the scope if it is liable to fail. Once the transaction rolls back, all items will be rolled back too. Create and send all the emails on successful commit.
public void ProcessItems()
{
    var items = itemService.GetAll();
    var mailMessages = new List<MailMessage>();
    bool committed = false;
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
        }
        scope.Complete();
        committed = true;
    }
    if (committed)
    {
        // Embed creation code and exception handling here.
        mailService.SendMail(mailMessages);
    }
}
I'd suggest changing this around. Instead of generating the email there and then... keep a list of the successfully processed items in a local List and then do all the mail sends at the end after you've committed.
public void ProcessItems()
{
    var items = itemService.GetAll();
    var successItems = new List<Item>();
    var mailMessages = new List<MailMessage>();
    using (var scope = new TransactionScope())
    {
        foreach (var item in items)
        {
            itemService.UpdateOne(item);
            itemService.UpdateTwo(item);
            successItems.Add(item);
            // you still need try/catch handling for DB updates that fail... or maybe you want it all to fail.
        }
        scope.Complete();
    }
    mailMessages = successItems.Select(i => itemService.GenerateMailMessage(i)).ToList();
    // Do stuff with mail messages
}
I am working on an app that inserts a lot of new objects (rows) and relationships between them. At a certain point, when an error occurs, I want all the changes to the DataContext to be discarded and "thrown away", so that after an error I have a clean copy of the DataContext that matches the state of the database.
Edit
Alternatively, you could make use of the DataContext.Transaction, and use that to .Commit() or .Rollback() your changes.
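A minimal sketch of that alternative, assuming the same DataClasses1DataContext and connection string used in the example below (the "do stuff" body is a placeholder):

```csharp
using (var dc = new DataClasses1DataContext(connStr))
{
    // LINQ to SQL's DataContext.Transaction must be assigned from
    // a transaction started on the context's own connection.
    dc.Connection.Open();
    dc.Transaction = dc.Connection.BeginTransaction();
    try
    {
        // ... insert objects ...
        dc.SubmitChanges();
        dc.Transaction.Commit();
    }
    catch
    {
        // On failure, roll back everything submitted in this transaction.
        dc.Transaction.Rollback();
        throw;
    }
}
```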
ORIG
Just throw away that DataContext and re-instantiate it.
Something like...
public void MyMethod(string connStr)
{
    try
    {
        DataClasses1DataContext dc = new DataClasses1DataContext(connStr);
        for (int i = 0; i < 100; i++)
        {
            try
            {
                //Do Stuff
                //Insert Objects
                dc.SubmitChanges();
            }
            catch (Exception ex) //So if it bombs in the loop, log your exception
            {
                Log(ex);
            }
            finally //Reinstantiate your DC
            {
                dc = new DataClasses1DataContext(connStr);
            }
        }
    }
    catch (Exception bigEx)
    {
        Log(bigEx);
    }
}
You could also use the TransactionScope in a using statement. If you don't call .Complete() on the TransactionScope, all changes are rolled back when it is disposed (which happens when leaving the using statement).
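A minimal sketch of that pattern, reusing the hypothetical DataClasses1DataContext from the answer above:

```csharp
using (var scope = new TransactionScope())
using (var dc = new DataClasses1DataContext(connStr))
{
    // ... insert objects ...
    dc.SubmitChanges();
    // If an exception is thrown before this line, Complete() is never
    // called and all changes roll back when the scope is disposed.
    scope.Complete();
}
```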