I have an ASP.NET web application that calls a customer to a station. Five employees run this application simultaneously; when they see a customer walk in, they click a ButtonGetCustomer button to call that customer to their station.
Here is my issue. I am getting the data from SQL and storing it in a DataTable. Sometimes, when two or more clerks click at the same time, they call the same customer.
Any ideas in how to prevent this from happening?
I had a similar problem with thousands of people clicking the same button trying to claim a limited number of spots. Here is a similar solution:
When they click your button, run a stored procedure to mark that user as seen.
Your SPROC will first check whether the user is already marked as seen; if so, it quits (I use RAISERROR to pass a message back and catch the SqlException in code, so you can tell them which user has already been called).
If the user hasn't been seen, the next thing your SPROC does is mark them as seen.
So the person who clicked the button either has success and sees the customer, or he gets a message saying the customer has already been seen.
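For illustration, here is a minimal sketch of the calling side, assuming a hypothetical stored procedure named dbo.ClaimCustomer that marks the customer as seen and calls RAISERROR (severity 16) when someone else already claimed them; that error surfaces as a SqlException in ADO.NET, which is where you get the message to show the clerk.

using System.Data;
using System.Data.SqlClient;

public static class CustomerQueue
{
    // Returns null on success, or the message raised by the procedure if the
    // customer was already claimed by another clerk.
    public static string TryClaimCustomer(string connectionString, int customerId, string clerkName)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand("dbo.ClaimCustomer", connection))
        {
            command.CommandType = CommandType.StoredProcedure;
            command.Parameters.AddWithValue("@CustomerId", customerId);
            command.Parameters.AddWithValue("@ClerkName", clerkName);

            connection.Open();
            try
            {
                command.ExecuteNonQuery();
                return null; // claim succeeded, go see the customer
            }
            catch (SqlException ex)
            {
                // RAISERROR from the SPROC lands here; ex.Message tells the clerk
                // that the customer has already been called.
                return ex.Message;
            }
        }
    }
}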
The problem you are experiencing is a concurrency problem. Try wrapping the read of the DataTable in a lock statement (one of several synchronization options), and flag the records you plan to return to the calling thread so they are not picked up by other threads. Try something like this:
private Object _syncObject = new Object();

private DataTable yourDataReadMethod()
{
    lock (_syncObject)
    {
        // Read the records to return to the calling thread.
        // Flag the records read so they are not given to other threads.
        // You might also need an expiration date in case the records are not completed in a timely manner.
        DataTable result = new DataTable();
        // ... fill result and mark those rows as taken here ...
        return result;
    }
}
Furthermore, if you are updating a record after a call takes place, you should compare the database's last-updated date with the date persisted in the client form; if they differ, raise an exception, because that means someone else has already updated the record. Hopefully that helps.
I am currently working on some kind of ERP like WPF application with SQL Server as the database.
Up to now I have only had to work with small tasks that do not need row locking on the server side, so the basic pattern was: create a SqlConnection -> select the data into a DataTable -> close the connection.
Now I would like to create the functionality to work on orders.
How can I lock the records that have been selected until the user finishes the work, so that no other user can read those rows?
I think I should use transactions, but I am not sure how to keep the transaction alive until the next statement, because I am closing the connection after each command.
Locking data like that is a bad practice. A transaction is intended to ensure that your data is completely saved or not at all. They are not intended to lock the data for the reason specified in your question.
It sounds like a lot of data may be entered, so you don't want a user to spend time entering it only to be met with an error because someone else changed the record. You could have a locked_by column that you set when a user is editing the data, and simply not allow anyone else to edit the data while that column is not NULL. You could still allow reads of the data, or exclude locked data from view in your queries, depending on your needs.
You may also want to include a locked_time column so you know when it was locked. You could then clear the lock if it's stale, or at least query how long it's been locked allowing for an admin user to look for lengthy locks so they can contact that user or clear the lock.
The query could look like this:
UPDATE Table
SET locked_by = @lockedByUser, locked_time = @lockedTime
WHERE Id = @fetchId AND locked_by IS NULL

SELECT * FROM Table WHERE locked_by = @lockedByUser
If no data is returned, either the lock failed or the id doesn't exist; either way, the data isn't available. You could also check the updated-record count to verify whether the lock succeeded.
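A rough ADO.NET sketch of that check (the table name Table is just the placeholder from the query above, and the method name is invented): ExecuteNonQuery returns the number of rows affected, so 1 means you won the lock and 0 means someone else holds it or the id doesn't exist.

using System;
using System.Data.SqlClient;

public static bool TryLockRecord(string connectionString, int id, string user)
{
    const string sql =
        "UPDATE Table SET locked_by = @lockedByUser, locked_time = @lockedTime " +
        "WHERE Id = @fetchId AND locked_by IS NULL";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@lockedByUser", user);
        command.Parameters.AddWithValue("@lockedTime", DateTime.UtcNow);
        command.Parameters.AddWithValue("@fetchId", id);

        connection.Open();
        // Exactly one affected row means the lock was acquired.
        return command.ExecuteNonQuery() == 1;
    }
}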
Don't close the connection:
1) Open a transaction.
2) On the SELECT, use an UPDLOCK hint so the record(s) stay locked.
3) Perform the updates.
4) Commit or roll back the transaction.
Put some kind of timer on it so a lock is not held indefinitely.
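Roughly like this sketch (Orders, OrderId, and Status are invented names): one connection and one transaction stay open, the SELECT takes an UPDLOCK hint so the row stays locked for the life of the transaction, then the update runs and the transaction is committed or rolled back.

using System.Data.SqlClient;

public static void UpdateOrderUnderLock(string connectionString, int orderId, string newStatus)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (SqlTransaction transaction = connection.BeginTransaction())
        {
            try
            {
                // UPDLOCK keeps the selected row locked until the transaction ends.
                var select = new SqlCommand(
                    "SELECT * FROM Orders WITH (UPDLOCK) WHERE OrderId = @id",
                    connection, transaction);
                select.Parameters.AddWithValue("@id", orderId);
                using (var reader = select.ExecuteReader())
                {
                    // ... read the row and do the work ...
                }

                var update = new SqlCommand(
                    "UPDATE Orders SET Status = @status WHERE OrderId = @id",
                    connection, transaction);
                update.Parameters.AddWithValue("@status", newStatus);
                update.Parameters.AddWithValue("@id", orderId);
                update.ExecuteNonQuery();

                transaction.Commit();
            }
            catch
            {
                transaction.Rollback();
                throw;
            }
        }
    }
}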
One way to handle concurrency via application is implement some kind of "LastServerUpdateDateTime" column on the table you are working on.
When User A pulls the data for a row, the ViewModel saves that LastServerUpdateDateTime value. User A makes their updates and then tries to save back to the DB. If the LastServerUpdateDateTime value is still the same, there were no updates while they were working and the save goes through (and LastServerUpdateDateTime is updated as well). If, while User A is working on a set of data on the application side, User B comes in, makes their changes, and saves, then when User A eventually saves, the LastServerUpdateDateTime will differ from what they initially pulled down and the save will be rejected. Yes, User A then has to redo their changes, but that shouldn't happen often (depending on your application, of course) and you don't have to deal with direct DB locking or anything like that.
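A sketch of that check with invented Orders/Status/LastServerUpdateDateTime names: the WHERE clause only matches if the timestamp the client originally read is still the current one, so zero affected rows means someone else saved first and the save is rejected.

using System;
using System.Data.SqlClient;

public static bool TrySaveOrder(string connectionString, int orderId, string newStatus,
                                DateTime originalLastUpdate)
{
    const string sql =
        "UPDATE Orders " +
        "SET Status = @status, LastServerUpdateDateTime = GETUTCDATE() " +
        "WHERE OrderId = @id AND LastServerUpdateDateTime = @originalLastUpdate";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@status", newStatus);
        command.Parameters.AddWithValue("@id", orderId);
        command.Parameters.AddWithValue("@originalLastUpdate", originalLastUpdate);

        connection.Open();
        // 1 row affected: nobody updated since we read; 0 rows: reject and reload.
        return command.ExecuteNonQuery() == 1;
    }
}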
I will describe the mechanism that I have used with success in the past.
1) Create a document ID table. In this table, each record represents a document type and an ID which can be incremented whenever a new document is created. The importance of this table is really as a root lock; the document ID is not strictly needed.
2) Create a lock table. In this table, each record represents a lock which includes a reference to a document record, a reference to the lock owner, and some additional data such as when the lock was created, when it was last acted upon, its status, or anything else you find useful. Each record means "user A holds a lock on document type X, document ID Y".
3) When locking a document (fetch + lock), lock (SELECT/UPDATE) the relevant record in the document ID table. Then, check the lock table for an existing lock record, and INSERT a new one as appropriate. At this point you may choose to over-write an existing lock, or return an error to the user.
4) When updating a document, again lock (SELECT/UPDATE) the relevant record in the document ID table. Then verify the user holds a lock, and if so do the actual update, and then DELETE the lock record. If the user does not hold a lock, you may choose to allow the update if no other user holds a lock, or return an error.
With this mechanism, a user goes through a open/lock operation, and a save/unlock, or discard/unlock operation. Additionally, locks can be removed by a cron job or by an administrator, in case users fail to update or discard (which they will).
This approach avoids holding record locks and transactions open for long periods of time, which can cause concurrency issues. It also allows locks to survive software crashes. It also allows all kinds of flexibility; for example, my implementation allowed a lock to be "demoted" after some period of time, and once a lock was demoted, it could be over-written by an ordinary user, while still allowing the owner to perform an update as long as the lock remained.
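As a rough illustration of step 3 only (DocumentId, DocumentLock, and their columns are invented names): the row in the document ID table is taken with UPDLOCK so the check-then-insert against the lock table is serialized, and the transaction commits immediately, so nothing stays locked once the lock record exists.

using System.Data.SqlClient;

public static bool TryAcquireDocumentLock(string connectionString, string docType, int docId, string userId)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            // Root lock: serialize all lock operations for this document type.
            var rootLock = new SqlCommand(
                "SELECT DocType FROM DocumentId WITH (UPDLOCK) WHERE DocType = @docType",
                connection, transaction);
            rootLock.Parameters.AddWithValue("@docType", docType);
            rootLock.ExecuteScalar();

            // Refuse if another user already holds a lock on this document.
            var existing = new SqlCommand(
                "SELECT COUNT(*) FROM DocumentLock WHERE DocType = @docType AND DocId = @docId",
                connection, transaction);
            existing.Parameters.AddWithValue("@docType", docType);
            existing.Parameters.AddWithValue("@docId", docId);
            if ((int)existing.ExecuteScalar() > 0)
            {
                transaction.Rollback();
                return false;
            }

            var insert = new SqlCommand(
                "INSERT INTO DocumentLock (DocType, DocId, OwnerId, CreatedAt) " +
                "VALUES (@docType, @docId, @owner, GETUTCDATE())",
                connection, transaction);
            insert.Parameters.AddWithValue("@docType", docType);
            insert.Parameters.AddWithValue("@docId", docId);
            insert.Parameters.AddWithValue("@owner", userId);
            insert.ExecuteNonQuery();

            transaction.Commit();
            return true;
        }
    }
}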
I have a WinForms application in C# that imports a CSV file.
The first method imports the file, then calls a second method that opens a data connection and gets the last logNumber. After it returns, the first method assigns logNumber++ to each record and then writes the new last logNumber back to the table.
During this time I don't want another user to be able to access the logNumber table (I also have to do this with batchNumber). From what I have found, pessimistic locking will only lock the rows while the connection is open, and when I go to the second method that connection closes. So how do I keep it locked?
I thought I would define the SqlConnection at the form level (public partial class frmCheckEntry : Form), but I receive an error. I also need to keep a lock on records (rows) loaded into a DataTable and then into a DataGridView while I open a child form and make changes. This is a multi-user system, but I need to prevent other users from having, or even seeing, records that are being worked on by another user.
The solution is to keep the connection and the transaction open. You said you got an error when you did that. You need to interpret the error message and fix it. Then, it will work.
You could also try marking "locked" rows with IsLocked = 1. Other users reading such a row would then be shown a message like "This item is currently in use".
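If you go the keep-it-open route, one way (sketched here with an invented LogNumbers table) is to hold the SqlConnection and SqlTransaction as form-level fields, so the lock taken when the number is read is still alive when the second step writes the new number back.

using System.Data.SqlClient;
using System.Windows.Forms;

public partial class frmCheckEntry : Form
{
    private SqlConnection _connection;
    private SqlTransaction _transaction;

    private int GetLastLogNumber(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();

        // UPDLOCK + HOLDLOCK keeps the row locked until the transaction ends,
        // so another import taking the same hints will wait until we commit.
        var command = new SqlCommand(
            "SELECT LastLogNumber FROM LogNumbers WITH (UPDLOCK, HOLDLOCK)",
            _connection, _transaction);
        return (int)command.ExecuteScalar();
    }

    private void SaveLastLogNumber(int newLastLogNumber)
    {
        var command = new SqlCommand(
            "UPDATE LogNumbers SET LastLogNumber = @n", _connection, _transaction);
        command.Parameters.AddWithValue("@n", newLastLogNumber);
        command.ExecuteNonQuery();

        _transaction.Commit();
        _connection.Dispose();
    }
}

Just remember to roll back and dispose the connection in an error or form-closing handler if the import fails, or the table will stay locked.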
I'm currently using the method below to get the ID of the last inserted row.
Database.ExecuteNonQuery(query, parameters);
//if another client/connection inserts a record at this time,
//could the line below return the incorrect row ID?
int fileid = Convert.ToInt32(Database.Scalar("SELECT last_insert_rowid()"));
return fileid;
This method has been working fine so far, but I'm not sure it's completely reliable.
Supposing there are two client applications, each with their own separate database connections, that invoke the server-side method above at exactly the same time. Keeping in mind that the client threads run in parallel and that SQLite can only run one operation at a time (or so I've heard), would it be possible for one client instance to return the row ID of a record that was inserted by the other instance?
Lastly, is there a better way of getting the last inserted row ID?
If another client/connection inserts a record at this time, could the line below return the incorrect row ID?
No, since the write will either happen after the read, or before the read, but not during the read.
Keeping in mind that the client threads run in parallel and that SQLite can only run one operation at a time, would it be possible for one client to get the row ID of the record that was inserted by the other client?
Yes, of course.
It doesn't matter that the server-side methods are invoked at exactly the same time. The database's locking allows concurrent reads, though not concurrent writes, or reads while writing.
If you haven't already, have a read over SQLite's file locking model.
If both the insert command and the get last inserted row id command are inside the same write lock and no other insert command in that write lock can run between those two commands, then you're safe.
If you start the write lock after the insert command, than there's no way to be sure that another thread didn't get a write lock first. If another thread did get a write lock first, then you won't be able to execute a search for the row id until after that other thread has released their lock. By then, it could be too late if the other thread inserted a new row.
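As a concrete sketch of keeping both statements on one connection inside one transaction (using the System.Data.SQLite provider and an invented Files table; your Database wrapper may expose this differently):

using System;
using System.Data.SQLite;

public static long InsertFileAndGetId(string connectionString, string fileName)
{
    using (var connection = new SQLiteConnection(connectionString))
    {
        connection.Open();
        using (var transaction = connection.BeginTransaction())
        {
            using (var insert = new SQLiteCommand(
                "INSERT INTO Files (Name) VALUES (@name)", connection, transaction))
            {
                insert.Parameters.AddWithValue("@name", fileName);
                insert.ExecuteNonQuery();
            }

            long fileId;
            using (var getId = new SQLiteCommand(
                "SELECT last_insert_rowid()", connection, transaction))
            {
                // last_insert_rowid() is per-connection, so this is the row we
                // just inserted, not one from another client.
                fileId = Convert.ToInt64(getId.ExecuteScalar());
            }

            transaction.Commit();
            return fileId;
        }
    }
}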
I have been facing this problem for a long time.
I have two buttons on my form: btnNEXT and btnSUBMIT.
When the user clicks btnNEXT, the details of the next record are displayed. The user then enters some data and clicks btnSUBMIT, which updates the details of that particular record.
Now, I have around 10 users working on it. When User1 clicks btnNEXT, he gets a record to modify. I want that record to be locked so that no other user can see it. When User1 enters the details and clicks btnSUBMIT, the record is updated and the lock is released.
Another Scenario:
User1 clicks btnNEXT and the record is locked. If the user closes the application without updating any data, the record should be unlocked.
What I have done:
BEGIN TRAN
SELECT TOP 1 * FROM table WITH (UPDLOCK, READPAST) WHERE condition

UPDATE table SET a = 1, b = 2 WHERE id = 123
COMMIT TRAN
The queries above satisfy my conditions for locking and unlocking the rows, but I want to begin the transaction in the btnNEXT_Click event and commit it in the btnSUBMIT_Click event.
How can I achieve this? I am unable to think beyond this point. Please advise me if you have an alternative that can satisfy my whole scenario.
Thanks a lot
This is the problem inherent in a stateless application. If a user abandons the session, by just walking off or simply closing the browser, there is no good way for you to know for sure that the session should be closed. The best solution that I have come up with is to use a timestamp as the locking field then regularly poll for records that have been locked for "too long". Not a perfect solution but it should address 90%+ of your issues.
edit after comment from OP:
@ARB, transactions are used to execute a sequence of SQL statements that may potentially need to be rolled back. They are typically reserved for save actions (inserts, updates, and deletes). You cannot "roll back" a SELECT statement (there is nothing to undo), so wrapping your btnNEXT action (a select) and your btnSUBMIT action in one transaction is not needed. Additionally, when I have used transactions it has been as a single sequence of commands; I can't say for certain that you can't join a transaction in the middle, but I have never seen it done. Depending on the complexity of your save logic, the code behind btnSUBMIT may be a good place to use a transaction, but only if you are saving to multiple tables.
In summary:
Because of the stateless nature of web apps, and the inability to 'force' a user to close their session 'gracefully' you need a mechanism that 'unlocks' a record that has been locked for 'too long'.
Because there is nothing to 'roll back' in your btnNext action (a select command) there is no reason to include this in a transaction. If you wish to isolate your btnSubmit (save action) then that may be useful.
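A small sketch of the stale-lock cleanup that a timer or scheduled job could run (Records, LockedBy, and LockedAt are placeholder names, and 15 minutes is an arbitrary threshold):

using System.Data.SqlClient;

public static int ReleaseStaleLocks(string connectionString, int timeoutMinutes)
{
    const string sql =
        "UPDATE Records SET LockedBy = NULL, LockedAt = NULL " +
        "WHERE LockedBy IS NOT NULL " +
        "AND LockedAt < DATEADD(MINUTE, -@timeout, GETUTCDATE())";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.Parameters.AddWithValue("@timeout", timeoutMinutes);
        connection.Open();
        // Returns how many abandoned locks were cleared.
        return command.ExecuteNonQuery();
    }
}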
Yet another question.
So now my EventReceiver and its logic are working just fine, except for one thing.
Basically it queries the whole list through CAML queries, and then passes the result to a DataTable object, and later on to a DataRow object...
As always, it works perfectly in the test environment, but in production...
What happens is that the column I need updated gets updated but is not shown immediately. The item's column receives the value I want, but it doesn't show on the first refresh; you have to refresh the page again, and then it appears...
The only difference is that in the test environment my list has about 200 records, while in production it has almost 5,000.
Some questions:
Is there a way to define how many records you want? In CAML or in the DataTable object? Something like "SELECT TOP 100 ... "
If not, is there a way to make the refresh process stop and wait for the code execution?
Some Info:
It's WSS 3.0, and the event I'm intercepting is ItemAdded, which explains the refresh not waiting for my code.
Changing to the ItemAdding event would be a bit of a problem, because I need to capture the ID of the record, which is not yet available in ItemAdding since the list item has not been committed to the database yet.
Thanks in advance.
The problem here was the GetDataTable() method. When I ran the CAML query and filled a DataTable with the results, it would lose the OrderBy modifier. But when I get the results with an SPListItemCollection object, the rows come back exactly as I wanted.
As seen in another post... "This is a nasty issue".
A similar question and answer can be found here. You should be able to use the RowLimit property of SPQuery.
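For reference, a small sketch of the WSS 3.0 object-model side (the list name and CAML are placeholders); RowLimit is roughly the CAML-world equivalent of SELECT TOP n.

using Microsoft.SharePoint;

public static SPListItemCollection GetTopItems(SPWeb web, string listName, uint rowLimit)
{
    SPQuery query = new SPQuery();
    query.Query = "<OrderBy><FieldRef Name='ID' Ascending='FALSE' /></OrderBy>";
    query.RowLimit = rowLimit;   // only return the first rowLimit matching items

    return web.Lists[listName].GetItems(query);
}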
After searching a lot, I ended up moving my code to the ItemAdding event, which is synchronous and will finish executing before SharePoint loads its page.
Even after I limited the result rows to 5, it would still load the page without the value I wanted to show.
Also, if you are planning to capture the value of a field that uses a calculated value (a formula), be careful: at least in my case, SharePoint had not resolved the formula by the time the event executed, so the calculated field always returned null.