I have a Project Table, a Stakeholder Table, and an Association Table (which takes a ProjectID and a StakeholderID as foreign keys).
I want to delete a single Project but must first delete all that Project's rows in the Association Table.
Here is the method. ProjectRow is a strongly typed DataRow created with the DataSet Designer.
public void RemoveProject(ProjectRow project)
{
    try
    {
        var associations = from a in ds.Association.AsEnumerable()
                           where a.Project == project.ProjID
                           select a;
        foreach (DataRow assoc in associations)
        {
            assoc.Delete();
        }
        project.Delete();

        using (TransactionScope scope = new TransactionScope())
        {
            assocTableAdapter.Update(ds.Association);
            System.Threading.Thread.Sleep(40000); // to test the transaction.
            projTableAdapter.Update(ds.Project);
            scope.Complete();
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
}
This method does achieve the required effect (it stops associations being added to the deleted project during the transaction), but it seems to place read and write locks on all the tables, so I cannot even read from the Project Table during the sleep period.
I would like to be able to add other Project/Stakeholder pairs to the Association Table during the transaction. How do I achieve this?
Cheers.
A few links below, but note you can only hint that you'd like row-level locking; the database engine may or may not take the suggestion. However, since you're letting the library handle the deletes, who knows what it's doing (short of turning on Profiler and capturing statements). It could very well be issuing table locks, or you may simply have the misfortune of the row locks escalating to page locks, where the rows you are attempting to access in your query outside the transaction happen to be on the same page.
Is it possible to force row level locking in SQL Server?
Why is SQL Server 2008 blocking SELECT's on long transaction INSERT's?
https://dba.stackexchange.com/questions/6512/difference-between-row-level-and-page-level-locking-and-consequences
What's a body to do? You need to balance your concurrency needs against your risk of bad data. Here's a fun poster about SQL Server Isolation Levels
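For illustration, here's a minimal sketch of issuing the delete yourself so the statement can carry a row-lock hint. The Association table and ProjectID column come from the question; conn and projId are assumptions, and the engine is still free to ignore or escalate the hint:

// Sketch only: bypassing the TableAdapter so the DELETE carries a ROWLOCK hint.
// Requires System.Data.SqlClient. The hint is a suggestion, not a guarantee.
using (var cmd = new SqlCommand(
    "DELETE FROM Association WITH (ROWLOCK) WHERE ProjectID = @id", conn))
{
    cmd.Parameters.AddWithValue("@id", projId);
    cmd.ExecuteNonQuery();
}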
Related
I have an Excel file with a long list of usernames. Column A contains the old user names; Column B has the new names. I want to rename users in a SQL table based on the Excel file. My question is the following:
Is it OK to call SQL with a using statement multiple times within a loop where I iterate through the Excel rows? Or is there a better way, where I open a single connection and make all the SQL update queries in "one" go?
The answer to this is always that "it depends" and "can you justify it".
Right or wrong aside, your business case may include circumstances that mean multiple connections are an acceptable solution.
When iterating a data list, although not the greatest for performance, it is generally acceptable to execute individual statements that each affect only a single record in the database.
You might do this to capture specific error information about each row, and in your business logic you will not re-process the rows that did succeed.
If you need to fail the whole batch when one row fails then you would need to ensure that you use a transaction scope so that you can roll back the entire set.
You would, however, generally NOT create multiple connections. A standard code pattern would be to create a connection outside of the loop and re-use the same connection for each statement.
// The update statement and column names here are illustrative.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tran = conn.BeginTransaction())
    {
        try
        {
            foreach (var record in dataRecords)
            {
                try
                {
                    // Re-use the same connection and transaction for every statement.
                    using (var cmd = new SqlCommand(
                        "UPDATE Users SET UserName = @newName WHERE UserName = @oldName",
                        conn, tran))
                    {
                        cmd.Parameters.AddWithValue("@oldName", record.OldUserName);
                        cmd.Parameters.AddWithValue("@newName", record.NewUserName);
                        cmd.ExecuteNonQuery();
                    }
                }
                catch (SqlException sx)
                {
                    // Process the exception; record relevant information based on
                    // the input parameters for this record.
                    throw; // or throw a new exception with formatted info...
                }
            }
            tran.Commit();
        }
        catch (Exception ex)
        {
            tran.Rollback(); // one row failed, so roll back the entire set
            throw;
        }
    }
}
A set-based approach usually offers greater performance, but that requires a bit of plumbing to set up. From a best-practices point of view, this advice from Gordon Linoff works well too.
Multiple transactions, or multiple executions within the same transaction, are acceptable; multiple connections, however, should be avoided.
You should load the Excel table into a table in the database with two columns (at least) called old_username and new_username.
Then you can run an update directly in the database. You haven't specified the database, but because of the C# tag I'll provide SQL Server syntax for the update -- this syntax varies by database:
update u
    set username = nc.new_username
from users u join        -- users: the table you want to update
     name_changes nc     -- name_changes: the table loaded from Excel
     on u.username = nc.old_username;
That is, it is generally better to get the data into the database and do all the work there.
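Getting the Excel data into that staging table is straightforward with SqlBulkCopy. A minimal sketch, assuming you have already read the sheet into a DataTable named excelTable with old_username and new_username columns:

// Sketch only; requires System.Data and System.Data.SqlClient.
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    // Bulk-load the Excel rows into the staging table.
    using (var bulk = new SqlBulkCopy(conn) { DestinationTableName = "name_changes" })
    {
        bulk.WriteToServer(excelTable);
    }
    // Then run the set-based update above, entirely inside the database.
    using (var cmd = new SqlCommand(
        "update u set username = nc.new_username " +
        "from users u join name_changes nc on u.username = nc.old_username;", conn))
    {
        cmd.ExecuteNonQuery();
    }
}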
I am using WinForms in C# with Entity Framework.
In the database I have a link table, WordUseUser, between "word" and "user"; the Word table has a lot of data (4000+ rows).
I have a window with a DataGridView where each line has a checkbox, and the user marks the words he wants.
By pressing the save button, I want to update all the records he has changed in the table.
listWord = Program.DB.WordUseUser.Where(lw => lw.IdUser == thisIdUser).ToList();
// Clicking on the checkbox I add or remove from listWord accordingly...
foreach (var item in listWord)
{
    Program.DB.WordUseUser.Remove(item);
}
Program.DB.SaveChanges();
foreach (WordUseUser item in listWord)
{
    Program.DB.WordUseUser.Add(item);
}
Program.DB.SaveChanges();
It takes a lot of time (of course...), and I'm looking for a more effective solution.
I tried to use a solution from here: Fastest Way of Inserting in Entity Framework
But it only talks about updating existing data, not updating, adding, and deleting together.
I would love some help!!
Quick reply: you have to do it inside an explicit transaction.
Not only is this more secure, it is also much faster.
So: begin a transaction, do your updates/inserts, and commit the transaction.
Every query creates its own implicit transaction, unless there is already an existing transaction. So think of it this way:
without an explicit transaction the database has to do 12,000 operations (for every query: create transaction, execute query, commit transaction), whereas with an explicit transaction it's just 4,002 operations (one begin, 4,000 executes, one commit).
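A minimal sketch of what that could look like with your context, assuming EF6 (where DbContext.Database.BeginTransaction is available); oldWords and newWords are hypothetical lists built from the grid:

using (var tx = Program.DB.Database.BeginTransaction())
{
    foreach (var item in oldWords)           // the previously saved selections
        Program.DB.WordUseUser.Remove(item);
    foreach (var item in newWords)           // the user's new selections
        Program.DB.WordUseUser.Add(item);
    Program.DB.SaveChanges();
    tx.Commit(); // one begin/commit pair for the whole batch
}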
I am working with a situation where we are dealing with money transactions.
For example, I have a table of users wallets, with their balance in that row.
UserId; Wallet Id; Balance
Now, on our website and in our web services, every time a certain transaction happens, we need to:
check that there are enough funds available to perform that transaction;
deduct the costs of the transaction from the balance.
How and what is the correct way to go about locking that row / entity for the entire duration of my transaction?
From what I have read, there are some solutions where EF marks an entity and then compares that mark when it saves it back to the DB. However, what does it do when another user / program has already edited the amount?
Can I achieve this with EF? If not what other options do I have?
Would calling a stored procedure possibly allow for me to lock the row properly so that no one else can access that row in the SQL Server whilst program A has the lock on it?
EF doesn't have a built-in locking mechanism; you would probably need to use a raw query, something like:
using (var scope = new TransactionScope(...))
{
    using (var context = new YourContext(...))
    {
        var wallet = context.ExecuteStoreQuery<UserWallet>(
            "SELECT UserId, WalletId, Balance FROM UserWallets WITH (UPDLOCK) WHERE ...");
        // your logic
        scope.Complete();
    }
}
You can set the IsolationLevel on the transaction in Entity Framework to ensure no one else can change it:
YourDataContext.Database.BeginTransaction(IsolationLevel.RepeatableRead)
RepeatableRead
Summary:
Locks are placed on all data that is used in a query, preventing other users from updating the data. Prevents non-repeatable reads but phantom rows are still possible.
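A hedged sketch of the full check-then-deduct cycle under that isolation level, assuming EF6; UserWallets, userId and cost are illustrative names:

// Requires System.Data for IsolationLevel.
using (var tx = YourDataContext.Database.BeginTransaction(IsolationLevel.RepeatableRead))
{
    var wallet = YourDataContext.UserWallets.Single(w => w.UserId == userId);
    if (wallet.Balance >= cost)   // 1) check there are enough funds
    {
        wallet.Balance -= cost;   // 2) deduct the cost
        YourDataContext.SaveChanges();
    }
    tx.Commit(); // the lock on the row is held until here
    // Note: two concurrent runs can deadlock on the same row; SQL Server will
    // pick a victim, and that transaction should simply be retried.
}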
The whole point of a transactional database is that the consumer of the data determines how isolated their view of the data should be.
Irrespective of whether your transaction is serialized someone else can perform a dirty read on the same data that you just changed, but did not commit.
You should firstly concern yourself with the integrity of your view, and only then accept a degradation of the quality of that view to improve system performance where you are sure it is required.
Wrap everything in a TransactionScope with Serializable isolation level and you personally cannot really go wrong. Only drop the isolation level when you see it is genuinely required (i.e. when getting things wrong sometimes is OK).
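A minimal sketch of that, using System.Transactions (TransactionScope has historically defaulted to Serializable, but stating it explicitly documents the intent):

var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    // read the wallet row, verify funds, deduct the cost ...
    scope.Complete(); // nothing is committed unless this is reached
}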
Someone asks about this here: SQL Server: preventing dirty reads in a stored procedure
I have a C# program that needs to perform a group of mass updates (20k+) to a SQL Server table. Since other users can update these records one at a time via an intranet website, we need to build the C# program with the capability of locking the table down. Once the table is locked to prevent another user from making any alterations/searches we will then need to preform the requested updates/inserts.
Since we are processing so many records, we cannot use TransactionScope (it seemed the easiest way at first), due to the fact that our transaction winds up being handled by the MSDTC service. We need to use another method.
Based on what I've read on the internet, using a SqlTransaction object seemed to be the best method; however, I cannot get the table to lock. When the program runs and I step through the code below, I'm still able to perform updates and searches via the intranet site.
My question is twofold. Am I using the SqlTransaction properly? If so (or even if not), is there a better method for obtaining a table lock that allows the currently running program to search and perform updates?
I would like for the table to be locked while the program executes the code below.
C#
SqlConnection dbConnection = new SqlConnection(dbConn);
dbConnection.Open();
using (SqlTransaction transaction = dbConnection.BeginTransaction(IsolationLevel.Serializable))
{
    //Instantiate validation object with zip and channel values
    _allRecords = GetRecords();
    validation = new Validation();
    validation.SetLists(_allRecords);
    while (_reader.Read())
    {
        try
        {
            record = new ZipCodeTerritory();
            _errorMsg = string.Empty;
            //Convert row to ZipCodeTerritory type
            record.ChannelCode = _reader[0].ToString();
            record.DrmTerrDesc = _reader[1].ToString();
            record.IndDistrnId = _reader[2].ToString();
            record.StateCode = _reader[3].ToString().Trim();
            record.ZipCode = _reader[4].ToString().Trim();
            record.LastUpdateId = _reader[7].ToString();
            record.ErrorCodes = _reader[8].ToString();
            record.Status = _reader[9].ToString();
            record.LastUpdateDate = DateTime.Now;
            //Handle DateTime types separately
            DateTime value = new DateTime();
            if (DateTime.TryParse(_reader[5].ToString(), out value))
            {
                record.EndDate = Convert.ToDateTime(_reader[5].ToString());
            }
            else
            {
                _errorMsg += "Invalid End Date; ";
            }
            if (DateTime.TryParse(_reader[6].ToString(), out value))
            {
                record.EffectiveDate = Convert.ToDateTime(_reader[6].ToString());
            }
            else
            {
                _errorMsg += "Invalid Effective Date; ";
            }
            //Do not process if we're missing LastUpdateId
            if (string.IsNullOrEmpty(record.LastUpdateId))
            {
                _errorMsg += "Missing last update Id; ";
            }
            //Make sure primary key is valid
            if (_reader[10] != DBNull.Value)
            {
                int id = 0;
                if (int.TryParse(_reader[10].ToString(), out id))
                {
                    record.Id = id;
                }
                else
                {
                    _errorMsg += "Invalid Id; ";
                }
            }
            //Validate business rules if data is properly formatted
            if (string.IsNullOrWhiteSpace(_errorMsg))
            {
                _errorMsg = validation.ValidateZipCode(record);
            }
            //Skip record if any errors found
            if (!string.IsNullOrWhiteSpace(_errorMsg))
            {
                _issues++;
                //Convert to ZipCodeError type in case we have data/formatting errors
                _errors.Add(new ZipCodeError(_reader), _errorMsg);
                continue;
            }
            else if (flag)
            {
                //Separate updates to appropriate list
                SendToUpdates(record);
            }
        }
        catch (Exception ex)
        {
            _errors.Add(new ZipCodeError(_reader), "Job crashed reading this record, please review all columns.");
            _issues++;
        }
    }//End while
    //Updates occur in one of three methods below. If I step through the code,
    //and stop the program here, before I enter any of the methods, and then
    //make updates to the same records via our intranet site the changes
    //made on the site go through. No table locking has occurred at this point.
    if (flag)
    {
        if (_insertList.Count > 0)
        {
            Updates.Insert(_insertList, _errors);
        }
        if (_updateList.Count > 0)
        {
            _updates = Updates.Update(_updateList, _errors);
            _issues += _updateList.Count - _updates;
        }
        if (_autotermList.Count > 0)
        {
            //_autotermed = Updates.Update(_autotermList, _errors);
            _autotermed = Updates.UpdateWithReporting(_autotermList, _errors);
            _issues += _autotermList.Count - _autotermed;
        }
    }
    transaction.Commit();
}
SQL Server doesn't really make it convenient to exclusively lock a table: it's designed to try to maximize concurrent use while keeping ACID guarantees.
You could try using these table hints on your queries:
TABLOCK
Specifies that the acquired lock is applied at the table level. The type of lock that
is acquired depends on the statement being executed. For example, a SELECT statement
may acquire a shared lock. By specifying TABLOCK, the shared lock is applied to the
entire table instead of at the row or page level. If HOLDLOCK is also specified, the
table lock is held until the end of the transaction.
TABLOCKX
Specifies that an exclusive lock is taken on the table.
UPDLOCK
Specifies that update locks are to be taken and held until the transaction completes.
UPDLOCK takes update locks for read operations only at the row-level or page-level. If
UPDLOCK is combined with TABLOCK, or a table-level lock is taken for some other
reason, an exclusive (X) lock will be taken instead.
XLOCK
Specifies that exclusive locks are to be taken and held until the transaction
completes. If specified with ROWLOCK, PAGLOCK, or TABLOCK, the exclusive locks apply
to the appropriate level of granularity.
HOLDLOCK/SERIALIZABLE
Makes shared locks more restrictive by holding them until a transaction is completed,
instead of releasing the shared lock as soon as the required table or data page is no
longer needed, whether the transaction has been completed or not. The scan is
performed with the same semantics as a transaction running at the SERIALIZABLE
isolation level. For more information about isolation levels, see SET TRANSACTION
ISOLATION LEVEL (Transact-SQL).
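For example, a hedged sketch of reading the rows under an exclusive table lock, assuming the table is named like the entity (ZipCodeTerritory) and using the connection and transaction from your code:

// Sketch only: because the command is enlisted in the transaction, the exclusive
// table lock is held until Commit or Rollback, not just while reading.
var cmd = new SqlCommand(
    "SELECT * FROM ZipCodeTerritory WITH (TABLOCKX)", dbConnection, transaction);
_reader = cmd.ExecuteReader(); // process rows as in the loop above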
Alternatively, you could try SET TRANSACTION ISOLATION LEVEL SERIALIZABLE:
Statements cannot read data that has been modified but not yet committed by other
transactions.
No other transactions can modify data that has been read by the current transaction
until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the
range of keys read by any statements in the current transaction until the current
transaction completes.
Range locks are placed in the range of key values that match the search conditions of
each statement executed in a transaction. This blocks other transactions from updating
or inserting any rows that would qualify for any of the statements executed by the
current transaction. This means that if any of the statements in a transaction are
executed a second time, they will read the same set of rows. The range locks are held
until the transaction completes. This is the most restrictive of the isolation levels
because it locks entire ranges of keys and holds the locks until the transaction
completes. Because concurrency is lower, use this option only when necessary. This
option has the same effect as setting HOLDLOCK on all tables in all SELECT statements
in a transaction.
But almost certainly, lock escalation will cause blocking and your users will be pretty much dead in the water (in my experience).
So...
Wait until you have a scheduled maintenance window. Set the database to single-user mode, make your changes, and bring it back online.
Try this: when you get records from your table (in the GetRecords() function?), use the TABLOCKX hint:
SELECT * FROM Table1 (TABLOCKX)
It will queue all other reads and updates outside your transaction until the transaction is either committed or rolled back.
It's all about isolation level here. Change your transaction isolation level to ReadCommitted (I didn't look up the enum value in C#, but that should be close). When you execute the first update/insert to the table, SQL Server will start locking, and no one will be able to read the data you're changing/adding until you commit or roll back the transaction, provided they are not performing dirty reads (using NOLOCK in their SQL, or having the connection isolation level set to ReadUncommitted). Be careful though: depending on how you're inserting/updating data, you may lock the whole table for the duration of your transaction, which would cause timeout errors at the client when they try to read from this table while your transaction is open. Without seeing the SQL behind the updates, though, I can't tell if that will happen here.
As someone has pointed out, the transaction doesn't seem to be used after being taken out.
From the limited information we have on the app/purpose, it's hard to tell, but from the code snippet, it seems to me we don't need any locking. We are getting some data from source X (in this case _reader) and then inserting/updating into destination Y.
All the validation happens against the source data to make sure it's correct, it doesn't seem like we're making any decision or care for what's in the destination.
If the above is true, then a better approach would be to load all this data into a temporary table (it can be a real temp table "#" or a real table that we destroy afterwards, the purpose is the same), and then, in a single SQL statement, do a mass insert/update from the temp table into our destination. Assuming the db schema is in decent shape, 20 (or even 30) thousand records should process almost instantly, without any need to wait for a maintenance window or lock out users for extended periods of time.
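A hedged sketch of that approach, enlisting in the existing transaction from the question; the staging columns are illustrative, and you would list all the real ones:

// Stage the validated records on the same connection/session...
using (var create = new SqlCommand(
    "CREATE TABLE #ZipStaging (Id INT, ChannelCode VARCHAR(10), ZipCode VARCHAR(10))",
    dbConnection, transaction))
{
    create.ExecuteNonQuery();
}
using (var bulk = new SqlBulkCopy(dbConnection, SqlBulkCopyOptions.Default, transaction))
{
    bulk.DestinationTableName = "#ZipStaging";
    bulk.WriteToServer(stagingTable); // stagingTable: a DataTable built from your lists
}
// ...then apply everything in one set-based statement.
using (var merge = new SqlCommand(
    "MERGE ZipCodeTerritory AS t " +
    "USING #ZipStaging AS s ON t.Id = s.Id " +
    "WHEN MATCHED THEN UPDATE SET t.ChannelCode = s.ChannelCode, t.ZipCode = s.ZipCode " +
    "WHEN NOT MATCHED THEN INSERT (ChannelCode, ZipCode) VALUES (s.ChannelCode, s.ZipCode);",
    dbConnection, transaction))
{
    merge.ExecuteNonQuery();
}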
Also, to strictly answer the question about using a transaction, below is a simple sample of how to properly use one; there should be plenty of other samples and info on the web.
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlTransaction tran = conn.BeginTransaction())
    {
        SqlCommand cmd1 = new SqlCommand(sql1, conn); // sql1: your statement
        cmd1.Transaction = tran; // each command must be explicitly enlisted
        cmd1.ExecuteNonQuery();
        // ... further commands, all with .Transaction = tran ...
        tran.Commit();
    }
}
I can't believe it is so hard to get someone to show me a simple working example. It leads me to believe that everyone can only talk like they know how to do it but in reality they don't.
I shortened the post down to only what I want the example to do. Maybe the post was getting too long and scared people away.
To get this bounty I am looking for a WORKING EXAMPLE that I can copy in VS 2010 and run.
What the example needs to do.
Show what datatype should be used in my domain for Version as a timestamp in MSSQL 2008
Show NHibernate automatically throwing the StaleObjectException
Show me working examples of these 3 scenarios
Scenario 1
User A comes to the site and edits Row1. User B comes along (note he can see Row1) and clicks to edit Row1. User B should be denied from editing the row until User A is finished.
Scenario 2
User A comes to the site and edits Row1. User B comes 30 mins later and clicks to edit Row1. User B should be able to edit this row and save. This is because User A took too long to edit the row and lost his right to edit.
Scenario 3
User A comes back from being away. He clicks the update row button and should be greeted with a StaleObjectException.
I am using asp.net mvc and fluent nhibernate. Looking for the example to be done in these.
What I tried
I tried to build my own, but I can't get it to throw the StaleObjectException, nor can I get the version number to increment. I tried opening 2 separate browsers and loading up the index page. Both browsers showed the same version number.
public class Default1Controller : Controller
{
    //
    // GET: /Default1/
    public ActionResult Index()
    {
        var sessionFactory = CreateSessionFactory();
        using (var session = sessionFactory.OpenSession())
        {
            using (var transaction = session.BeginTransaction())
            {
                var firstRecord = session.Query<TableA>().FirstOrDefault();
                transaction.Commit();
                return View(firstRecord);
            }
        }
    }

    public ActionResult Save()
    {
        var sessionFactory = CreateSessionFactory();
        using (var session = sessionFactory.OpenSession())
        {
            using (var transaction = session.BeginTransaction())
            {
                var firstRecord = session.Query<TableA>().FirstOrDefault();
                firstRecord.Name = "test2";
                transaction.Commit();
                return View();
            }
        }
    }

    private static ISessionFactory CreateSessionFactory()
    {
        return Fluently.Configure()
            .Database(MsSqlConfiguration.MsSql2008
                .ConnectionString(c => c.FromConnectionStringWithKey("Test")))
            .Mappings(m => m.FluentMappings.AddFromAssemblyOf<TableA>())
            // .ExposeConfiguration(BuidSchema)
            .BuildSessionFactory();
    }

    private static void BuidSchema(NHibernate.Cfg.Configuration config)
    {
        new NHibernate.Tool.hbm2ddl.SchemaExport(config).Create(false, true);
    }
}

public class TableA
{
    public virtual Guid Id { get; set; }
    public virtual string Name { get; set; }
    // Not sure what data type this should be for timestamp.
    // To avoid changing too much, I started with an int version
    // but want a timestamp in the end.
    public virtual int Version { get; set; }
}

public class TableAMapping : ClassMap<TableA>
{
    public TableAMapping()
    {
        Id(x => x.Id);
        Map(x => x.Name);
        Version(x => x.Version);
    }
}
Will nhibernate stop the row from being retrieved?
No. Locks are only placed for the extent of a transaction, which in a web application ends when the request ends. Also, the default type of transaction isolation mode is Read committed which means that read locks are released as soon as the select statement terminates. If you are reading and making edits in the same request and transaction, you could place a read and write lock on the row at hand which would prevent other transactions from writing to or reading from that row. However, this type of concurrency control doesn't work well in a web application.
Or would the User B be able to still see the row but if he tried to save it would crash?
This would happen if optimistic concurrency was being used. In NHibernate, optimistic concurrency works by adding a version field. Save/update commands are issued with the version upon which the update was based. If that differs from the version in the database table, no rows are updated and NHibernate will throw an exception.
What happens if User A, say, cancels and does not edit. Do I have to release the lock myself, or is there a timeout that can be set to release the lock?
No, the lock is released at the end of the request.
Overall, your best bet is to opt for optimistic concurrency with version fields managed by NHibernate.
How does it look in code? Do I set it up in my Fluent NHibernate mapping to generate a timestamp (not sure if I would use the TimeSpan datatype)?
I would suggest using a version column. If you're using FluentNhibernate with auto mappings, then if you make a column called Version of type int/long it will use that to version by default, alternatively you can use the Version() method in the mapping to do so (it's similar for timestamp).
So now I have somehow generated the timestamp and the user is editing a row (through a GUI). Should I be storing the timestamp in memory or something? Then, when the user submits, recall the timestamp and id of the row from memory and check?
When the user starts editing a row, you retrieve it and store the current version (the value of the version property). I would recommend putting the current version in a hidden field in the form. When the user saves his changes, you can either do a manual check against the version in the database (check that it's the same as the version in the hidden field), or you can set the version property to the value from the hidden field (if you are using databinding, you could do this automatically). If you set the version property, then when you try to save the entity, NHibernate will check that the version you're saving matches the version in the database, and throws an exception if it doesn't.
NHibernate will issue an update query something like:
UPDATE xyz
SET Name = 'new value', -- the columns you changed (illustrative)
    Version = 16
WHERE Id = 1234 AND Version = 15
(assuming your version was 15) - in the process it will also increment the version field
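A hedged sketch of the hidden-field round trip described above, using the manual-check variant (TableAViewModel and its properties are illustrative):

// Sketch only: vm.Version carries the version from the hidden form field.
[HttpPost]
public ActionResult Save(TableAViewModel vm)
{
    using (var session = CreateSessionFactory().OpenSession())
    using (var transaction = session.BeginTransaction())
    {
        var record = session.Get<TableA>(vm.Id);
        if (record.Version != vm.Version)
        {
            // Someone saved after this form was rendered; surface the conflict.
            ModelState.AddModelError("", "This row was changed by someone else.");
            return View(record);
        }
        record.Name = vm.Name;
        transaction.Commit(); // NHibernate bumps Version and guards the UPDATE with it
        return RedirectToAction("Index");
    }
}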
If so that means the business logic is keeping track of the "row
locking" but in theory someone still could just go Where(x => x.Id ==
id) and grab that row and update at will.
If someone else updates the row via NHibernate, it will increment the version automatically, so when your user tries to save it with the wrong version you will get an exception which you need to decide how to handle (ie. try show some merge screen, or tell the user to try again with the new data)
What happens when the row gets updated? Do you set null to the timestamp?
It updates the version or timestamp (timestamp will get updated to the current time) automatically
What happens if the user never actually finishes updating and leaves? How does the row ever become unlocked again?
The row is not locked per se, it is instead using optimistic concurrency, where you assume that no-one will change the same row at the same time, and if someone does, then you need to retry the update.
Is there still a race condition, or is this next to impossible? I am just concerned that 2 people try to edit the same row and both of them see it in their GUI for editing, but one of them is actually going to get denied in the end because they lost the race.
If 2 people try to edit the same row at the same time, one of them will lose if you're using optimistic concurrency. The benefit is that they will KNOW that there was a conflict, as opposed to either losing their changes and thinking that it updated, or overwriting someone else's changes without knowing about it.
So I did something like this:
var test = session.Query<TableA>().Where(x => x.Id == id).FirstOrDefault();
// send to user for editing. Has versioning on it.
// user edits and sends back the data 30 mins later.
Code does:
test.Id = vm.Id;
test.ColumnA = vm.ColumnA;
test.Version = vm.Version;
session.Update(test);
session.Commit();
So the above will work, right?
The above will throw an exception if someone else has gone in and changed the row. That's the point of it, so you know that a concurrency issue has arisen. Typically you'd show the user a message saying "Someone else has changed this row" with the new row there and possibly their changes also so the user has to select which changes win.
but if I do this:
test.Id = vm.Id;
test.ColumnA = vm.ColumnA;
session.Update(test);
session.Commit();
it would not commit, right?
Correct, as long as you haven't reloaded test (i.e. you did test = new Xyz(), not test = session.Load()), because the Timestamp on the row wouldn't match.
If someone else updates the row via NHibernate, it will increment the
version automatically, so when your user tries to save it with the
wrong version you will get an exception which you need to decide how
to handle (ie. try show some merge screen, or tell the user to try
again with the new data)
Can I make it so that this is checked when the record is grabbed? I want to keep it simple at first: only one person can edit at a time. The other guy won't even be able to access the record to edit while someone is editing it.
That's not optimistic concurrency. As a simple answer, you could add a CheckOutDate property which you set when someone starts editing a row, and set it to null when they finish. Then, when they start to edit, or when you show them the rows available for editing, you could exclude all rows where that CheckOutDate is newer than, say, the last 10 minutes (then you wouldn't need a scheduled task to reset it periodically).
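A hypothetical sketch of that (CheckOutDate would be a new nullable DateTime property on the entity, mapped like any other column):

// When listing rows available for editing, exclude recent check-outs:
var cutoff = DateTime.UtcNow.AddMinutes(-10);
var editable = session.Query<TableA>()
    .Where(x => x.CheckOutDate == null || x.CheckOutDate < cutoff)
    .ToList();
// When a user opens a row, stamp it; clear it (set null) on save or cancel.
row.CheckOutDate = DateTime.UtcNow;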
The row is not locked per se, it is instead using optimistic
concurrency, where you assume that no-one will change the same row at
the same time, and if someone does, then you need to retry the update.
I am not sure what you're saying. Does this mean I can do session.Query<TableA>().Where(x => x.Id == id).FirstOrDefault(); all day long and it will keep getting me the record (I thought it would keep incrementing the version)?
The query will NOT increment the version, only an update to it will increment the version.
I don't know that much about nHibernate itself, but if you are prepared to create some stored procs on the database it can >sort of< be done.
You will need one extra data column and two fields in your object model to store information against each row:
A 'hash' of all the field values (using SQL Server CHECKSUM on 2008 and later, or HASHBYTES for earlier editions) other than the hash field itself and the EditTimestamp field. This could be persisted to the table using INSERT/UPDATE triggers if need be.
An 'edit-timestamp' of type datetime.
Change your procedures to do the following:
The 'select' procedure should include a where clause similar to 'edit-timestamp < (Now - 30 minutes)' and should update the 'edit-timestamp' to the current time. Run the select with appropriate locking BEFORE updating the row; I'm thinking a stored procedure with hold locking, such as this one here. Use a persistent date/time rather than something like GETDATE().
Example (using fixed values):
BEGIN TRAN

DECLARE @now DATETIME
SET @now = '2012-09-28 14:00:00'

SELECT *, @now AS NewEditTimestamp, CHECKSUM(ID, [Description]) AS RowChecksum
FROM TestLocks
WITH (HOLDLOCK, ROWLOCK)
WHERE ID = 3 AND EditTimestamp < DATEADD(mi, -30, @now)

/* Do all your stuff here while the record is locked */

UPDATE TestLocks
SET EditTimestamp = @now
WHERE ID = 3 AND EditTimestamp < DATEADD(mi, -30, @now)

COMMIT TRAN
If you get a row back from this procedure then you 'have' the 'lock', otherwise, no rows will be returned and there's nothing to edit.
The 'update' procedure should add a where clause similar to 'hash = previously returned hash'
Example (using fixed values):
BEGIN TRAN

DECLARE @RowChecksum INT
SET @RowChecksum = -845335138

UPDATE TestLocks
SET [Description] = 'New Description'
WHERE ID = 3 AND CHECKSUM(ID, [Description]) = @RowChecksum

SELECT @@ROWCOUNT AS RowsUpdated

COMMIT TRAN
So in your scenarios:
User A edits a row. When you return this record from the database, the 'edit-timestamp' has been updated to the current time and you have a row so you know you can edit. User B would not get a row because the timestamp is still too recent.
User B edits the row after 30 minutes. They get a row back because the timestamp has passed more than 30 minutes ago. The hash of the fields will be the same as for user A 30 minutes ago as no updates have been written.
Now user B updates. The previously retrieved hash still matches the hash of the fields in the row, so the update statement succeeds, and we return the row-count to show that the row was updated. User A however, tries to update next. Because the value of the description field has changed, the hashvalue has changed, and so nothing is updated by the UPDATE statement. We get a result of 'zero rows updated' so we know that either the row has since been changed or the row was deleted.
There are probably some issues regarding scalability with all these locks going on and the above code could be optimised (might get problems with clocks going forward/back for example, use UTC), but I wrote these examples just to explain how it could work.
Outside of that I can't see how you can do this without utilising database level row-locking within the select transaction. It might be that you can request those locks via nHibernate, but that's beyond my knowledge of nHibernate I'm afraid.
Have you looked at the ISaveOrUpdateEventListener interface?
public class SaveListener : NHibernate.Event.ISaveOrUpdateEventListener
{
    public void OnSaveOrUpdate(NHibernate.Event.SaveOrUpdateEvent e)
    {
        NHibernate.Persister.Entity.IEntityPersister p = e.Session.GetEntityPersister(null, e.Entity);
        if (p.IsVersioned)
        {
            //TODO: check types etc...
            MyEntity m = (MyEntity) e.Entity;
            DateTime oldversion = (DateTime) p.GetVersion(m, e.Session.EntityMode);
            DateTime currversion = (DateTime) p.GetCurrentVersion(m.ID, e.Session);
            if (oldversion < currversion.AddMinutes(-30))
                throw new StaleObjectStateException("MyEntity", m.ID);
        }
    }
}
Then in your Configuration, register it.
private static void Configure(NHibernate.Cfg.Configuration cfg)
{
    cfg.EventListeners.SaveOrUpdateEventListeners =
        new NHibernate.Event.ISaveOrUpdateEventListener[] { new SaveListener() };
}

public static ISessionFactory CreateSessionFactory()
{
    return Fluently.Configure()
        .Database(...)
        .Mappings(...)
        .ExposeConfiguration(Configure)
        .BuildSessionFactory();
}
And version the Properties you want to version in your Mapping class.
public class MyEntityMap : ClassMap<MyEntity>
{
    public MyEntityMap()
    {
        Table("MyTable");
        Id(x => x.ID);
        Version(x => x.Timestamp);
        Map(x => x.PropA);
        Map(x => x.PropB);
    }
}
The short answer to your question is that you can't/shouldn't do this in a simple web application with NHibernate's optimistic (version) and pessimistic (row locks) locking. The fact that your transactions are only as long as a request is your limiting factor.
What you CAN do is create another table and entity class, and mappings that manages these "locks". At the lowest level you need an Id of the object being edited and the Id of the user performing the editing, and a datetime of when the lock was acquired. I would make the Id of the object being edited the primary key since you want it to be exclusive...
When a user clicks on a row to edit, you can try to acquire a lock (create a new record in that table with the ids and current datetime). If the lock already exists for another user, then the insert will fail because you are trying to violate a primary key constraint.
If a lock is acquired, when the user clicks save you need to check that they still have a valid "lock" before performing the actual save. Then, perform the actual save and remove the lock record.
I would also recommend a background service/process that sweeps these locks periodically and removes the ones that have expired or are older than your time limit.
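A hypothetical sketch of that lock entity and the acquire step (mapping omitted; names are illustrative):

public class EditLock
{
    public virtual Guid RowId { get; set; }    // Id of the object being edited (primary key)
    public virtual Guid UserId { get; set; }   // the user performing the editing
    public virtual DateTime AcquiredAt { get; set; }
}

// Acquiring the lock: a second insert for the same RowId violates the
// primary key constraint, so the second editor fails fast.
using (var tx = session.BeginTransaction())
{
    session.Save(new EditLock { RowId = rowId, UserId = userId, AcquiredAt = DateTime.UtcNow });
    tx.Commit(); // throws if another user already holds the lock
}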
This is my prescribed way of dealing with "locks" in a web environment. Good luck!
Yes, it is possible to lock a row with NHibernate, but if I understand well, your scenario is in a web context, and then it is not the best practice.
The best practice is to use optimistic locking with automatic versioning, as you mentioned.
Locking a row when a page is opened and releasing it when the page is unloaded will quickly lead to rows being locked forever (JavaScript issues, pages not closed properly...).
Optimistic locking will make NHibernate throw an exception when flushing a transaction which contains objects modified by another session.
If you want true concurrent modification of the same information, you may have to think about a system which merges many users' input into the same document, but that is a system of its own, not something managed by the ORM.
You will have to choose a way to deal with session in a web environment.
http://nhibernate.info/doc/nh/en/index.html#transactions-optimistic
The only approach that is consistent with high concurrency and high
scalability is optimistic concurrency control with versioning.
NHibernate provides for three possible approaches to writing
application code that uses optimistic concurrency.
Hey you can try these sites
http://thesenilecoder.blogspot.ca/2012/02/nhibernate-samples-row-versioning-with.html
http://stackingcode.com/blog/2010/12/09/optimistic-concurrency-and-nhibernate