I am kind of new to NHibernate... should I ever call .BeginTransaction() just to get an item? Without it the code throws, but with it the code looks ugly, since no tx.Commit()/.CommitAsync() is called explicitly. I assume the end of the IDisposable using block will take care of it?
public override async Task&lt;TDto&gt; Get(int id)
{
    using (var sessionBuilder = NHibernateConfiguration.Instance.BuildSessionFactory())
    using (var session = sessionBuilder.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        return await session.GetAsync&lt;TDto&gt;(id);
    }
}
It is recommended to wrap each call (whether an individual call or a group of calls) in a transaction. This helps produce reliable results in a multi-threaded/multi-process environment where another thread or process may alter the data you are reading. That includes read calls.
Another reason it is recommended is that a modern RDBMS always wraps each call in an implicit transaction if one is not already open, and that costs something. If the programmer manages the Unit Of Work explicitly, that cost can be saved and the calls can be grouped far better.
That said, it may not be necessary for every (Create, Read, Update, Delete) call; it is a per-case decision. If you are avoiding it because the "code looks ugly", please consider implementing a UoW instead.
By the way, I hope you are not calling BuildSessionFactory for each call. That is costly and, more importantly, not needed. Call it once at the startup of your application instead.
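For what it is worth, a minimal sketch of that pattern, assuming the session factory is kept in a static field built once (the field name here is made up) and the read is committed explicitly instead of relying on Dispose:

// Sketch only: build the session factory once (static field or application startup),
// not per call, and commit the transaction explicitly even for a read.
private static readonly ISessionFactory SessionFactory =
    NHibernateConfiguration.Instance.BuildSessionFactory();

public override async Task&lt;TDto&gt; Get(int id)
{
    using (var session = SessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var dto = await session.GetAsync&lt;TDto&gt;(id);
        await tx.CommitAsync();   // explicit commit, rather than hoping Dispose does the right thing
        return dto;
    }
}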
In response to comment:
I mean the same thing as mentioned in the comment by "Oskar Berggren"; it's just not explicit enough. As the question is more about transactions, and a UoW is more than just a transaction, I preferred to just "mention" the UoW rather than going into details.
With reference to the first paragraph, "multiple calls" could be enclosed in a single Unit Of Work. Please understand that a UoW is more than a transaction. With an ORM like NHibernate, a UoW makes far better use of transactions. Also note that with a full ORM you do not need to do much to implement a UoW: ISession itself is the UoW, you just need to scope, use and manage it properly.
Following answers may be helpful to you:
https://stackoverflow.com/a/51781877/5779732
https://stackoverflow.com/a/50721461/5779732
If I'm doing something with the inserted values during an EntityDataSource's Inserted event, should I wrap e.Entity in a using() statement? I can't tell. Is that "in context"?
Should it be (as I've seen in other examples):
myEntity NewRecord = (myEntity)e.Entity;
myVar = NewRecord.DataValue;
Or is it appropriate practice to do:
using (myEntity NewRecord = (myEntity)e.Entity)
{
    myVar = NewRecord.DataValue;
}
(Don't think that syntax would be totally correct. Don't want to have to look up how that would work just to ask.)
From the MSDN documentation, all I can gather is that e.Entity is an object that is ... the entity. Helpful. So does it open a new connection and the whole rest of the package that I would assume a new entity would require?
In general, I'd say yes, do it. Best practice? Yes, for sure. However, the answer is really "it depends". As a rule, any time you use an object that implements IDisposable, you should wrap it in a using statement unless you plan to keep the object alive between method invocations.
In the stateless world of the web (MVC) where I tend to live, I try to wrap my DbContext in a using statement. In Winforms/WPF I'm sure there are reasons to persist until some event takes place. These should be exceptions to the rule.
A using statement is used when you want to make sure Dispose is called when program execution leaves the scope of the using block.
For example, say your code might throw an exception and you wanted to make sure Dispose() was still called. A using statement would guarantee it.
So if myEntity implemented IDisposable you would want to use the using statement. If it doesn't, you wouldn't need to, but you still could, I think... lol.
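As a rough illustration (assuming myEntity does implement IDisposable), the using statement is effectively shorthand for a try/finally that calls Dispose:

// The using statement...
using (myEntity NewRecord = (myEntity)e.Entity)
{
    myVar = NewRecord.DataValue;
}

// ...is roughly equivalent to:
myEntity NewRecord = (myEntity)e.Entity;
try
{
    myVar = NewRecord.DataValue;
}
finally
{
    if (NewRecord != null)
        ((IDisposable)NewRecord).Dispose();  // runs even if an exception is thrown
}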
I noticed some delay when loading content while using transactions to edit the content.
(Testing this situation is a bit hard for me, as I don't know how best to test it.)
I have some doubts about transaction usage:
There are some minor issues and things I should understand about transactions,
and these parts are related to this question:
When should we use transactions in a home-grown CMS?
My case-specific notes:
Should I use transactions in a CMS at all, while we have sprocs for Insert, Update, Retrieve, ...?
Are transactions only necessary when we are working with more than one table?
The transaction strategy I used:
Add Product method (which uses the add-product sproc):
TransactionOptions txOptions = new TransactionOptions();

using (TransactionScope txScope = new TransactionScope(TransactionScopeOption.Required, txOptions))
{
    try
    {
        connection.Open();
        command.ExecuteNonQuery();
        LastInserted = (int)pInsertedID.Value;
        txScope.Complete();
    }
    catch (Exception ex)
    {
        logErrors.Warn(ex.Message);
    }
    finally
    {
        command.Dispose();
        connection.Close();
    }
}
Transactions may help to ensure the consistency of the database. For example, if a stored procedure used to add a product inserts data into more than one table and something fails along the way, a transaction makes it possible to roll back the whole operation, so the database is left free of half-baked products (e.g. ones lacking some critical info in related tables).
Transaction scopes (TransactionScope) are used to provide an ambient, implicit transaction for whatever code runs inside a code block. These scopes can simplify the code considerably; however, they may also add complexity in multithreaded environments (unfortunately, I don't know a great deal about such cases).
Therefore, the code you provided would probably make sense to ensure the database's consistency, especially if the command uses more than one table. It may add some performance overhead; however, you would be better off relying on gathered profiling data rather than on any sort of feeling before conducting optimizations (i.e. try to gather some quantitative data on how much slower things are under transactions). Modern database engines usually handle transactions quite efficiently; in my own experience, I have never had to remove a transaction because of its performance overhead.
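To make the rollback behaviour concrete, here is a minimal sketch (the connection string and the two sproc names are made up; requires System.Transactions, System.Data and System.Data.SqlClient) of an ambient TransactionScope covering two inserts. If either one throws before Complete() is called, disposing the scope rolls both back:

using (var txScope = new TransactionScope())
using (var connection = new SqlConnection(connectionString))
{
    connection.Open();  // the connection enlists in the ambient transaction

    using (var insertProduct = new SqlCommand("dbo.Insert_Product", connection))
    using (var insertDetails = new SqlCommand("dbo.Insert_ProductDetails", connection))
    {
        insertProduct.CommandType = CommandType.StoredProcedure;
        insertDetails.CommandType = CommandType.StoredProcedure;
        // ... add parameters here ...

        insertProduct.ExecuteNonQuery();
        insertDetails.ExecuteNonQuery();
    }

    // Mark the transaction complete only after everything succeeded;
    // leaving the scope without calling Complete() rolls everything back.
    txScope.Complete();
}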
The MVC 3 project I am working on commits or rolls back whatever insert/update calls I make at the end of the current session. This works fine in most cases, except now, where I am doing multiple inserts.
When a csv is uploaded to the site, each record is parsed and inserted into the DB. The problem is, if one record fails to insert, all previous records get rolled back and all subsequent attempts fail with the error:
don't flush the Session after an exception occurs
How can I disable the session-per-request transaction (or do I have to "restart" the transaction somehow)?
EDIT: Details
The session per request stuff is configured in a class that implements IHttpModule. This class takes an HttpApplication context and UnitOfWork class. This is where the commits and rollbacks occur.
The UnitOfWork, mentioned above, is injected into the repositories using StructureMap.
Like I said, this behavior is fine for 99% of the rest of the site; I just need this bulk update to ignore the session-per-request transaction, or I need to manually restart a transaction for each new record. I just don't know how.
ISession has a FlushMode property, whose default value is Auto:
Auto - The ISession is sometimes flushed before query execution in order to ensure that queries never return stale state. This is the default flush mode.
Commit - The ISession is flushed when Transaction.Commit() is called.
Never - The ISession is never flushed unless Flush() is explicitly called by the application. This mode is very efficient for read-only transactions.
Unspecified - Special value for unspecified flush mode.
Always - The ISession is flushed before every query. This is almost always unnecessary and inefficient.
Try changing the ISession's FlushMode property to the Commit value.
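A minimal sketch of that change, assuming you can get hold of the current ISession where the CSV import runs:

// Flush only when the transaction commits rather than automatically before queries,
// so a failed row does not get flushed mid-import and poison the session.
session.FlushMode = FlushMode.Commit;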
The session per request stuff is configured in a class that implements IHttpModule.
The session-per-request that you start in the HttpModule is something you do, not something NHibernate does. How to disable it depends on your code. I personally don't like abstracting NHibernate behind some UnitOfWork class, because now you realize that the abstraction isn't good enough and a dirty hack is probably the only way out.
What you actually would like to do (and which is normally not recommended) is:
foreach (var row in rows)
{
    using (var session = SessionFactory.OpenSession())
    using (var tx = session.BeginTransaction())
    {
        var whatever = ...
        session.Save(whatever);
        tx.Commit();
    }
}
Lately, in apps I've been developing, I have been checking the number of rows affected by an insert, update, or delete to the database and logging an error if the number is unexpected. For example, on a simple insert, update, or delete of one row, if any number of rows other than one is returned from an ExecuteNonQuery() call, I consider that an error and log it. Also, I realize now as I type this that I do not even try to roll back the transaction if that happens, which is not best practice and should definitely be addressed. Anyway, here's code to illustrate what I mean:
I'll have a data layer function that makes the call to the db:
public static int DLInsert(Person person)
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");

    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Insert_Person"))
    {
        db.AddInParameter(dbCommand, "@FirstName", DbType.String, person.FirstName);
        db.AddInParameter(dbCommand, "@LastName", DbType.String, person.LastName);
        db.AddInParameter(dbCommand, "@Address", DbType.String, person.Address);
        return db.ExecuteNonQuery(dbCommand);
    }
}
Then a business layer call to the data layer function:
public static bool BLInsert(Person person)
{
    if (DLInsert(person) != 1)
    {
        // log exception
        return false;
    }

    return true;
}
And in the code-behind or view (I do both webforms and mvc projects):
if (BLInsert(person))
{
    // carry on as normal with whatever other code after successful insert
}
else
{
    // throw an exception that directs the user to one of my custom error pages
}
The more I use this type of code, the more I feel like it is overkill. Especially in the code-behind/view. Is there any legitimate reason to think a simple insert, update, or delete wouldn't actually modify the correct number of rows in the database? Is it more plausible to only worry about catching an actual SqlException and then handling that, instead of doing the monotonous check for rows affected every time?
Thanks. Hope you all can help me out.
UPDATE
Thanks everyone for taking the time to answer. I still haven't 100% decided what setup I will use going forward, but here's what I have taken away from all of your responses.
Trust the DB and .Net libraries to handle a query and do their job as they were designed to do.
Use transactions in my stored procedures to roll back the query on any errors, and potentially use RAISERROR to throw those errors back to the .NET code as a SqlException, which I could handle with a try/catch. This approach would replace the problematic return-code checking.
Would there be any issue with the second bullet point that I am missing?
I guess the question becomes, "Why are you checking this?" If it's just because you don't trust the database to perform the query, then it's probably overkill. However, there could exist a logical reason to perform this check.
For example, I worked at a company once where this method was employed to check for concurrency errors. When a record was fetched from the database to be edited in the application, it would come with a LastModified timestamp. Then the standard CRUD operations in the data access layer would include a WHERE LastModified = @LastModified clause when doing an UPDATE and check the record-modified count. If no record was updated, it would assume a concurrency error had occurred.
I felt it was kind of sloppy for concurrency checking (especially the part about assuming the nature of the error), but it got the job done for the business.
What concerns me more in your example is the structure of how this is being accomplished. The 1 or 0 being returned from the data access code is a "magic number." That should be avoided. It's leaking an implementation detail from the data access code into the business logic code. If you do want to keep using this check, I'd recommend moving the check into the data access code and throwing an exception if it fails. In general, return codes should be avoided.
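If you do move the check into the data access code, a rough sketch (reusing the DLInsert/Person shape from the question; the exception type is just one reasonable choice) might look like this, with the rows-affected check surfacing as an exception instead of a return code:

public static void DLInsert(Person person)
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");

    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Insert_Person"))
    {
        db.AddInParameter(dbCommand, "@FirstName", DbType.String, person.FirstName);
        db.AddInParameter(dbCommand, "@LastName", DbType.String, person.LastName);
        db.AddInParameter(dbCommand, "@Address", DbType.String, person.Address);

        int rowsAffected = db.ExecuteNonQuery(dbCommand);

        // Handle an unexpected row count here, in the data layer, instead of
        // leaking a "magic number" return code up into the business logic.
        if (rowsAffected != 1)
        {
            throw new DataException(string.Format(
                "dbo.Insert_Person affected {0} rows; expected exactly 1.", rowsAffected));
        }
    }
}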
Edit: I just noticed a potentially harmful bug in your code as well, related to my last point above. What if more than one record is changed? It probably won't happen on an INSERT, but could easily happen on an UPDATE. Other parts of the code might assume that != 1 means no record was changed. That could make debugging very problematic :)
On the one hand, most of the time everything should behave the way you expect, and on those times the additional checks don't add anything to your application. On the other hand, if something does go wrong, not knowing about it means that the problem may become quite large before you notice it. In my opinion, the little bit of additional protection is worth the little bit of extra effort, especially if you implement a rollback on failure. It's kinda like an airbag in your car... it doesn't really serve a purpose if you never crash, but if you do it could save your life.
I've always preferred to RAISERROR in my sproc and handle exceptions rather than counting rows. This way, if you update a sproc to do something else, like logging/auditing, you don't have to worry about keeping the row counts in check.
Though if you like the extra check in your code, or would prefer not to deal with exceptions/RAISERROR, I've seen teams return 0 from every sproc in the db on successful execution, and another number otherwise.
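On the .NET side, the RAISERROR approach can be handled with an ordinary try/catch; a rough sketch (the sproc and logger names follow the question's style, the rest is assumed), given that a RAISERROR with severity 11-16 surfaces as a SqlException:

try
{
    Database db = DatabaseFactory.CreateDatabase("dbConnString");

    using (DbCommand dbCommand = db.GetStoredProcCommand("dbo.Insert_Person"))
    {
        // parameters added as before...
        db.ExecuteNonQuery(dbCommand);
    }
}
catch (SqlException ex)
{
    // The message and error number come straight from RAISERROR in the sproc.
    logErrors.Warn(ex.Message);
    throw;  // or translate into an application-specific exception
}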
It is absolutely overkill. You should trust that your core platform (.NET libraries, SQL Server) works correctly; you shouldn't be worrying about that.
Now, there are some related cases where you might want to test, like whether transactions are correctly rolled back, etc.
If there is a need for that check, why not do it within the database itself? You save yourself a round trip and it's done at a more 'centralized' stage: if you check in the database, you can be assured it's applied consistently by any application that hits that database, whereas if you put the logic in the UI, you need to make sure that every UI application that hits that particular database applies the correct logic and does it consistently.
I've been searching for some time now, here and in other places, and can't find a good answer to why LINQ-to-SQL with NOLOCK is not possible.
Every time I search for how to apply the WITH (NOLOCK) hint to a LINQ-to-SQL context (applied to a single SQL statement), people often answer that you should force a transaction (TransactionScope) with IsolationLevel set to ReadUncommitted. Well, they rarely mention that this causes the connection to open a transaction (which, I've also read somewhere, must then be ensured closed manually).
Using ReadUncommitted in my application as-is is really not that good. Right now I've got nested using-context statements over the same connection, like:
using (var ctx1 = new Context()) {
    ... some code here ...
    using (var ctx2 = new Context()) {
        ... some code here ...
        using (var ctx3 = new Context()) {
            ... some code here ...
        }
        ... some code here ...
    }
    ... some code here ...
}
With a total execution time of about 1 second and many users at the same time, changing the isolation level will cause the contexts to wait for each other to release a connection, because all the connections in the connection pool are being used.
So one (of many) reasons for changing to NOLOCK is to avoid deadlocks (right now we have about one customer deadlock per day). The consequence of the above is just another kind of deadlock, and it really doesn't solve my issue.
So what I know I could do is:
Avoid nested usage of same connection
Increase the connection pool size at the server
But my problem is:
The first is not possible in the near future, because it would mean refactoring many lines of code and it conflicts with the architecture (without even starting to comment on whether that is good or bad).
The second will of course work, but it is what I would call "symptomatic treatment": I don't know how much the application will grow and whether this is a reliable solution for the future (and then I might end up in an even worse situation, with a lot more users being affected).
My thoughts are:
Can it really be true that NOLOCK is not possible (per statement, without starting transactions)?
If 1 is true, can it really be that nobody else has hit this problem and solved it with a generic LINQ-to-SQL modification?
If 2 is true, why is this not an issue for others?
Is there another workaround I haven't looked at, maybe?
Is using the same connection (nested) many times such bad practice that no one else has this issue?
1: LINQ-to-SQL does indeed not allow you to indicate hints like NOLOCK; it is possible, though, to write your own TSQL and use ExecuteQuery&lt;T&gt; etc. (see the sketch after this list)
2: to solve in an elegant way would be pretty complicated, frankly; and there's a strong chance that you would be using it inappropriately. For example, in the "deadlock" scenario, I would wager that actually it is UPDLOCK that you should be using (during the first read), to ensure that the first read takes a write lock; this prevents a second later query getting a read lock, so you generally get blocking instead of deadlock
3: using the same connection isn't necessarily a big problem (although note that new Context() won't generally share a connection; to share a connection you would use new Context(connection)). If you are seeing this issue, there are three likely solutions (if we exclude "use an ORM with hint support"):
using an explicit transaction (which doesn't have to be TransactionScope - it can be a connection level transaction) to specify the isolation level
write your own TSQL with hints
use a connection-level isolation level (noting the caveat I added as a comment)
IIRC there is also a way to subclass the data-context and override some of the transaction-creation code to control the isolation-level for the transactions that it creates internally.
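For point 1, a minimal sketch of dropping down to raw TSQL with a table hint via DataContext.ExecuteQuery&lt;T&gt; (the Customer type, table and column names are made up for the example):

// Assumes a LINQ-to-SQL DataContext; the query text and the Customer mapping are illustrative.
using (var ctx = new Context())
{
    var customers = ctx.ExecuteQuery&lt;Customer&gt;(
        @"SELECT Id, Name
          FROM dbo.Customers WITH (NOLOCK)
          WHERE IsActive = {0}", true).ToList();
}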