NHibernate rollback does not seem to work - c#

My question is simple: I have the code below, and the user is still inserted. When I check the database just after SaveOrUpdate (and before the rollback), I can see the user has already been inserted. It's as if the flush mode and the transaction are not working. Where am I going wrong?
using (var session = sessionFactory.OpenSession())
{
    session.FlushMode = FlushMode.Never;
    using (var tran = session.BeginTransaction())
    {
        var user = CreateUser();
        session.SaveOrUpdate(user);
        tran.Rollback();
    }
}

If we check "just after SaveOrUpdate", we are still inside the transaction. A lot can happen before the final decision (commit or rollback).
One of those operations is deciding whether the object should be created or updated. So if the ID generator is set to native/identity (e.g. SQL Server), NHibernate must execute the INSERT to obtain the ID. Many operations can be postponed, but there is no way to wait for the ID.
So most likely your ID has to be obtained from the DB, and that's why the INSERT happens. But the other changes won't be written to the DB until Flush() is called, so I wouldn't call the described behaviour anything special.
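If you want SaveOrUpdate to stay in memory until the flush, one option is a generator that doesn't need a database round-trip. A minimal mapping-by-code sketch, assuming NHibernate 3.2+ and a User entity with a Guid Id and a Name property (the names are invented for illustration):

using NHibernate.Mapping.ByCode;
using NHibernate.Mapping.ByCode.Conformist;

public class UserMap : ClassMapping<User>
{
    public UserMap()
    {
        // guid.comb is generated client-side, so SaveOrUpdate no longer
        // has to hit the database just to learn the new ID.
        Id(x => x.Id, m => m.Generator(Generators.GuidComb));
        Property(x => x.Name);
    }
}

With a client-side generator, the INSERT is deferred until Flush()/Commit(), and Rollback() behaves the way the question expects.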

Shared transaction across multiple connections, or ReadUncommitted in PostgreSQL

I want to open several connections within a single transaction scope, so that each connection can see the changes made by the previous ones.
I need this for tests: the real code writes to the database, and the testing code verifies that the data was actually inserted/updated. At the end I roll back the transaction scope so that the real database is not affected.
This approach works fine in SQL Server, but doesn't seem to work in PostgreSQL (I'm using 9.3 with the Npgsql provider); below is a small example.
Here's the helper that runs an arbitrary query within the ambient transaction scope:
private void RunQuery(string query, Action<IDataReader> process)
{
    using (var connection = new NpgsqlConnection(Config.ConnectionString)) {
        connection.Open();
        connection.EnlistTransaction(Transaction.Current);
        using (var command = connection.CreateCommand()) {
            command.CommandText = query;
            using (var reader = command.ExecuteReader()) {
                while (reader.Read()) {
                    process(reader);
                }
            }
        }
    }
}
...and here's the test code; it inserts into the users table and then checks whether the user was actually inserted:
using (var scope = new TransactionScope()) {
    //"tested scenario"
    int id = 0;
    RunQuery("INSERT INTO users (name) VALUES ('foo') RETURNING id;", reader => {
        id = (int)reader.GetValue(0);
    });
    //checking
    int id2 = 0;
    RunQuery("SELECT id, name FROM users WHERE id=" + id, reader => {
        id2 = (int)reader.GetValue(0);
    });
    Assert.That(id2, Is.Not.EqualTo(0));
}
The test above fails on Postgres: id2 is always zero. I tried the TransactionScope constructor that takes a TransactionOptions with IsolationLevel.ReadUncommitted, but it doesn't seem to help. Note that if I run this against SQL Server (changing NpgsqlConnection to SqlConnection and using SCOPE_IDENTITY() to retrieve the id), everything works just fine and id2 is not zero.
As you might expect, SELECTs within the same connection work on Postgres, but that's not what I need: my goal is to use multiple connections under a shared transaction scope. I also don't need multithreading; the connections are used sequentially.
First, a disclaimer: while I know a bit about PostgreSQL, I know very little about .NET.
I suspect you are conflating two related but separate concepts: distributed transactions, and the transaction isolation level.
According to the .NET documentation, EnlistTransaction adds the connection to a distributed transaction, which is described as follows:
A distributed transaction is a transaction that affects several resources. For a distributed transaction to commit, all participants must guarantee that any change to data will be permanent. Changes must persist despite system crashes or other unforeseen events. If even a single participant fails to make this guarantee, the entire transaction fails, and any changes to data within the scope of the transaction are rolled back.
In a database, such transactions are implemented by a two-phase commit process among what are actually separate transactions in the database. All of the participating transactions are progressed to the end of the first phase by executing PREPARE TRANSACTION. Once they are all in this state, they can be fully committed by executing COMMIT PREPARED. If any of them fails during PREPARE TRANSACTION, they can all be rolled back with ROLLBACK PREPARED. This guarantees that either they are all committed or they are all rolled back.
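For the curious, here is a rough sketch of phase one issued by hand through Npgsql. This is illustration only: it assumes max_prepared_transactions > 0 on the server, and the transaction name 'tx_1' is invented:

using (var conn = new NpgsqlConnection(Config.ConnectionString))
{
    conn.Open();
    using (var cmd = conn.CreateCommand())
    {
        cmd.CommandText = "BEGIN";
        cmd.ExecuteNonQuery();
        cmd.CommandText = "INSERT INTO users (name) VALUES ('foo')";
        cmd.ExecuteNonQuery();
        // Phase one: the transaction now survives the session and can be
        // finished later - even from another connection - by
        // COMMIT PREPARED 'tx_1' or ROLLBACK PREPARED 'tx_1'.
        cmd.CommandText = "PREPARE TRANSACTION 'tx_1'";
        cmd.ExecuteNonQuery();
    }
}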
When using middleware such as that provided by .NET, you do not see any of these details: the framework handles the two-phase commit for you.
So, you might be wondering what this has to do with the fact that you are not seeing changes made in one part of the distributed transaction reflected in another. The answer is: probably nothing. The participating transactions are actually completely separate; in fact, it is possible for them to be on completely separate databases.
What you are trying to achieve - to be able to see changes made in one transaction from another prior to commit - is related to the level of transaction isolation.
The bad news for you is that the isolation level you seem to want is Read Uncommitted, which PostgreSQL does not support (it accepts the setting but treats it as Read Committed).
Maybe you should describe what you are trying to achieve at a higher level; it is likely there is another (maybe better) way to achieve it.
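For the test scenario in the question, one common workaround is to skip distributed transactions entirely and share a single connection between the tested code and the verification queries, rolling back at the end. A sketch, assuming the code under test can accept an externally supplied connection (RunTestedScenario and VerifyResults are hypothetical helpers):

using (var connection = new NpgsqlConnection(Config.ConnectionString))
{
    connection.Open();
    using (var tx = connection.BeginTransaction())
    {
        // Both run on the same connection, so the verification step
        // sees the uncommitted INSERTs made by the tested code.
        RunTestedScenario(connection);
        VerifyResults(connection);
        tx.Rollback(); // leave the real database untouched
    }
}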

Entity Framework - force commit without transaction

I have a loop that looks a bit like this:
CountsToReport = rep.Counts_Get().Where(x => x.Status == "Completed");
foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    count.Status = "Reported";
    rep.SaveChanges();
}
Where "rep" is a repository wrapper around an EF context.
When this runs, the unfortunate email recipient gets deluged with spam, because the SaveChanges call doesn't actually commit the changes - so the loop keeps fetching the same counts, emailing them, and marking them as "Reported", without ever saving the change.
If you stop the loop and restart the code, the change saves successfully. You can confirm this scenario by stepping through the code: the EF object in C# changes its Status, but the underlying data in SQL doesn't change.
I presume this is because SaveChanges doesn't actually commit the transaction; it just marks the data as changed, ready for the end of the transaction. But we're not using transactions anywhere else in the DB, and it'd be a pain to change the repository for this one use case.
Is there any other way I can force EF to commit this change and escape my endless loop of doom? Or am I mistaken about the cause?
EDIT: Putting this in the repository and calling it instead of SaveChanges fixes it:
public void SaveWithTransaction()
{
    using (var transaction = new System.Transactions.TransactionScope())
    {
        db.SaveChanges();
        transaction.Complete();
    }
}
But it seems ugly. I'm still interested to know if there's another way around it.
EDIT: This is deceptive. It looks like it's just the old Added/Modified problem again. Marking the object as Modified seems to help.
It is quite possible that the entities in your CountsToReport are detached from the context.
In that case, not only does the ChangeTracker not see any change made to a Count, it doesn't know anything about the entity at all.
To fix the issue:
- iterate through the collection of CountsToReport
- attach each entity back to the context, e.g. db.Counts.Attach(count)
- modify count.Status and call db.SaveChanges()
like this:
like this:
foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    rep.Counts.Attach(count);
    count.Status = "Reported";
    rep.SaveChanges();
}
Have a look at the following link for more information about entity states and how to deal with them:
http://msdn.microsoft.com/en-us/data/jj592676.aspx
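Given the asker's final edit ("marking the object as modified seems to help"), here is an equivalent sketch that sets the state explicitly instead of relying on Attach followed by a property change. It assumes the repository exposes its DbContext as db:

using System.Data.Entity; // EF 6; in EF 4.1-5, EntityState lives in System.Data

foreach (var count in CountsToReport.ToList())
{
    //do some stuff
    //send an email
    count.Status = "Reported";
    // Attaches the detached entity and flags every property as changed.
    db.Entry(count).State = EntityState.Modified;
    db.SaveChanges();
}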

NHibernate session/transaction per request preventing multiple inserts

The MVC 3 project I am working on commits or rolls back all of my insert/update calls at the end of the current session. This works fine in most cases, but not now that I am doing multiple inserts.
When a CSV is uploaded to the site, each record is parsed and inserted into the DB. The problem is that if one record fails to insert, all previous records get rolled back, and all subsequent attempts fail with the error:
don't flush the Session after an exception occurs
How can I disable the session-per-request behaviour (or do I have to "restart" the transaction somehow)?
EDIT: Details
The session-per-request stuff is configured in a class that implements IHttpModule. This class takes an HttpApplication context and a UnitOfWork class; this is where the commits and rollbacks occur.
The UnitOfWork mentioned above is injected into the repositories using StructureMap.
Like I said, this behavior is fine for 99% of the site; I just need this bulk update to ignore the session-per-request transaction, or I need to manually restart a transaction for each new record. I just don't know how.
ISession has a FlushMode property, whose default value is Auto:
- Auto: The ISession is sometimes flushed before query execution in order to ensure that queries never return stale state. This is the default flush mode.
- Commit: The ISession is flushed when Transaction.Commit() is called.
- Never: The ISession is never flushed unless Flush() is explicitly called by the application. This mode is very efficient for read-only transactions.
- Unspecified: Special value for an unspecified flush mode.
- Always: The ISession is flushed before every query. This is almost always unnecessary and inefficient.
Try changing the ISession's FlushMode property to Commit.
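For example, right after the session is opened (a sketch; where you do this depends on how your HttpModule creates the session):

var session = sessionFactory.OpenSession();
session.FlushMode = FlushMode.Commit; // flush only when the transaction commits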
The session per request stuff is configured in a class that implements IHttpModule.
That session-per-request that you start in the HttpModule is something you do, not something NHibernate does. How to disable it depends on your code. I personally don't like abstracting NHibernate behind some UnitOfWork class, because now you realize the abstraction isn't good enough, and a dirty hack is probably the only way forward.
What you actually want to do (though it is normally not recommended) is:
foreach (var row in rows)
{
    using (var session = SessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var whatever = ...
        session.Save(whatever);
        tx.Commit();
    }
}
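To keep a single bad CSV record from poisoning the rest of the import, each row can be wrapped in its own try/catch as well. A sketch (MapRow is a hypothetical CSV-to-entity mapper):

var failures = new List<string>();
foreach (var row in rows)
{
    try
    {
        using (var session = SessionFactory.OpenSession())
        using (var tx = session.BeginTransaction())
        {
            session.Save(MapRow(row));
            tx.Commit();
        }
    }
    catch (Exception ex)
    {
        // The failed session is already disposed, so the next row
        // starts with a clean one; just record the failure and move on.
        failures.Add(ex.Message);
    }
}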

transactions using C# to update a database

I'm building a multi-layer MVVM WPF application connected to an Oracle database. I need some explanation of TransactionScope. Consider the following example:
using (TransactionScope ts = new TransactionScope())
{
    ...
    bank.setID(BankName, Branch);
    check.addCheck(check);
    ...
    ts.Complete();
}
This code is for explanation only: bank.setID() updates a record, while addCheck() actually inserts one. I couldn't figure out how to test this. I wanted to execute the update, shut the database down before the second method's insert, and then check whether the update was rolled back. Is this right? Am I on the right track? Is this the purpose of TransactionScope?
Thanks in advance
Edit: I wasn't sure at first whether you understood DB transactions, so here's a very short description:
TransactionScope is designed to wrap a database transaction in an exception-safe mechanism.
You use it to wrap a set of operations that should be atomic in a transaction, so that if one update fails, all of the DB actions within the transaction are rolled back.
You use it in a using block so that if your code throws an exception, the transaction is rolled back instead of committed to the database.
I wanted to execute the update and shutdown the database before inserting with the second method, and then check to see if the update is rolled back ... Am I on the right track ?
Yep. Your code should already handle it:
- If the DB is shut down before addCheck does its insert, the insert should throw an exception.
- Since an exception is thrown, Complete never gets called.
- When the using statement's (hidden) finally block disposes the scope and Complete hasn't been called, the transaction is automatically rolled back.
I couldn't figure out how to test this
If you would like to write a unit test for your code, I wouldn't suggest the "take the DB offline in the middle of execution" approach.
Instead, I'd suggest making your DB logic flexible enough to point at different DBs. Then remove the table, or some of the columns that addCheck inserts into, so that setID succeeds but addCheck fails.
TransactionScope is well documented; search MSDN for it.
To test how it works, there's no need to take the database offline; use this snippet:
using (TransactionScope ts = new TransactionScope())
{
    try
    {
        ...
        bank.setID(BankName, Branch);
        throw new System.InvalidOperationException("sht happens");
        check.addCheck(check);
        ...
        ts.Complete();
    }
    catch
    {
        // catch the exception;
        // ts.Complete() is not called, so the update/insert rolls back
    }
}

Entity Framework Transaction: What is better performance-wise?

Which has better performance:
using (GADEEntities context = new GADEEntities(_connectionString))
{
    using (TransactionScope transaction = new TransactionScope())
    {
        AddToContext1(context);
        AddToContext2(context);
        AddToContext3(context);
        ...
        context.SaveChanges();
        transaction.Complete();
    }
}
or
using (GADEEntities context = new GADEEntities(_connectionString))
{
    using (TransactionScope transaction = new TransactionScope())
    {
        AddToContext1(context);
        context.SaveChanges();
        AddToContext2(context);
        context.SaveChanges();
        AddToContext3(context);
        context.SaveChanges();
        ...
        transaction.Complete();
    }
}
At any given time, this could translate into 5000+ inserts into a DB on a client's machine. Is either way any different?
It's very likely that your first version will always be faster, depending on what AddToContext actually does. If each AddToContext call adds only one or a few new objects to the context, the first version will definitely be much faster: calling SaveChanges after each insert (and probably also after each update and delete) slows performance down dramatically.
Here are a few measurements from a similar question:
Fastest Way of Inserting in Entity Framework
The way you have it set up, I do not think there is any significant difference. The data will be transmitted either way, and that's the real bottleneck.
There is a very big difference, because the second version is terribly wrong. (Note: this answer assumes the SaveChanges(bool) overload, i.e. saving without accepting changes, rather than the parameterless SaveChanges shown in the question.)
What are you doing with this code?
AddToContext1(context);
context.SaveChanges(false);
You add a record to the context in the Added state and let the context insert it into the database, but at the same time you are saying: "leave the data in the Added state".
What happens if you then call this?
AddToContext2(context);
context.SaveChanges(false);
You add another record to the context in the Added state and let the context insert all records in the Added state into the database - so the first record is inserted again.
It doesn't matter whether AddToContext actually performs an update, because it will simply execute the DB command again. So if you have 5,000 records, you will insert or update the first one 5,000 times!
If you want to use the second version, you still have to accept changes after each save.
Btw. the SaveChanges overload accepting a bool is obsolete in EFv4.
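A sketch of what "accepting changes after each save" could look like on an EF 4 ObjectContext, where SaveOptions replaces the obsolete bool overload (GADEEntities is assumed to derive from ObjectContext):

using System.Data.Objects;

using (GADEEntities context = new GADEEntities(_connectionString))
using (TransactionScope transaction = new TransactionScope())
{
    AddToContext1(context);
    // Write the rows but don't auto-accept changes yet...
    context.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    // ...then mark the saved entities as Unchanged so the next
    // SaveChanges doesn't insert them a second time.
    context.AcceptAllChanges();

    AddToContext2(context);
    context.SaveChanges(SaveOptions.DetectChangesBeforeSave);
    context.AcceptAllChanges();

    transaction.Complete();
}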
