Select unlocked rows in Oracle - C#

I have an application in C# that uses an Oracle database.
I need a query that fetches only the unlocked rows from a table in the Oracle database.
How can I select all unlocked rows?
Is there any 'translator' out there that can translate this T-SQL (MS SQL Server) query to Oracle dialect?
SELECT TOP 1 * FROM TableXY WITH(UPDLOCK, READPAST);
I'm a little bit disappointed with Oracle lacking such a feature. They want to make me use AQ or what?

Oracle does have this feature, specifically the SKIP LOCKED portion of the SELECT statement. To quote:
SKIP LOCKED is an alternative way to handle a contending transaction
that is locking some rows of interest. Specify SKIP LOCKED to instruct
the database to attempt to lock the rows specified by the WHERE clause
and to skip any rows that are found to be already locked by another
transaction.
The documentation goes on to say it's designed for use in multi-consumer queues, but that does not mean you can only use it in that environment. There is, however, a large caveat: you can't ask for the next N unlocked rows - only for the next N rows, of which the unlocked ones will be returned.
SELECT *
FROM TableXY
WHERE ROWNUM = 1
FOR UPDATE SKIP LOCKED
Note that if the table you're selecting from is locked in exclusive mode, i.e. you've already instructed the database not to let any other session lock the table, you will not get any rows back until the exclusive lock is released.

I faced the same problem recently, and after solving it, I wrote this blog entry:
http://nhisawesome.blogspot.com/2013/01/how-to-lock-first-unlocked-row-in-table.html
Feel free to leave a comment! Your comments are appreciated.
Short summary: instead of selecting the first unlocked row and locking it, I select a bunch of records, then loop through them and try to acquire a lock on each one using the SKIP LOCKED clause. If the current record cannot be locked, I move on to the next one, until a lock is acquired or none remain.
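A minimal sketch of that loop in plain SQL, assuming the table has a primary key column ID (the table and column names here are placeholders, not taken from the blog post):
-- Step 1: grab a batch of candidate keys without locking anything.
SELECT ID FROM TableXY WHERE ROWNUM <= 10;
-- Step 2: for each candidate, try to lock just that one row.
-- A row locked by another session is skipped, so an empty result
-- simply means "move on to the next candidate".
SELECT ID
FROM TableXY
WHERE ID = :candidate_id
FOR UPDATE SKIP LOCKED;
The first candidate for which the second query returns a row is yours; it stays locked until you commit or roll back.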

select for update nowait will error out if you select a row that is locked. Is that what you want? I am curious as to what problem you are trying to solve. Unless you have long-running transactions, the lock on a row would be transient from one moment to the next.
Example:
CREATE TABLE TEST
(
COL1 NUMBER(10) NOT NULL,
COL2 VARCHAR2(20 BYTE) NOT NULL
);
CREATE UNIQUE INDEX TEST_PK ON TEST
(COL1);
ALTER TABLE TEST ADD (
CONSTRAINT TEST_PK
PRIMARY KEY
(COL1)
USING INDEX TEST_PK
);
SQL Session #1:
SQL> insert into test values(1,'1111');
1 row created.
SQL> insert into test values(2,'2222');
1 row created.
SQL> commit;
Commit complete.
SQL> update test set col2='AAAA' where col1=1;
1 row updated.
SQL Session #2: Attempt to read locked row, get error:
SQL> select * from test where col1=1 for update nowait;
select * from test where col1=1 for update nowait
*
ERROR at line 1:
ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
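For contrast, a SKIP LOCKED query in session #2 would simply skip the row session #1 is updating instead of raising an error (a sketch of the expected behaviour, not an actual transcript):
select * from test for update skip locked;
-- returns only the row with COL1 = 2; the row locked by session #1 is skipped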

Related

Select and read first record of table and delete record after that, in one stored procedure

I want to read records one by one and delete each record after it has been read. The table is a temp table, and a multi-threaded program will use its data. I need each record to be read just once, and never by more than one thread.
Is there any stored-procedure solution for making this program thread safe (deleting the record just after it is read by the first thread)?
First, I feel like I have to warn you that it's probably not the best idea to do this in SQL Server - relational databases work best with a set-based approach, not on a row-by-row basis.
Reading and deleting each row individually will perform very poorly.
Having said that, here's one way to delete a row, return it to the client using the OUTPUT clause, and (thanks to the ROWLOCK hint) do it in a thread-safe manner:
WITH firstRow AS (
    SELECT TOP (1) *
    FROM #tempTable WITH (ROWLOCK)
    ORDER BY id
)
DELETE FROM firstRow
OUTPUT deleted.*;
This should be your Stored Procedure code:
Create Procedure DeleteButOne
As
Begin
    Select TOP 1 * From "Your Table Name"
    DELETE FROM "Your Table Name" WHERE Id NOT IN (SELECT TOP 1 ID FROM "Your Table Name")
End
And then you can execute the procedure:
Execute DeleteButOne

SQL Server Isolation Level And Table Locking

So let's say I have a table in SQL server that serves as a queue for items that need processing. Something like this:
Id (bigint)
BatchGuid (guid)
BatchProcessed (bit)
...
...along with some other columns describing the item that needs to be processed, etc. So there are many running consumers that add records to this table as needed to indicate that an item needs to be processed.
So now let's say I have a job that is in charge of getting a batch of items from this table and processing them. Say we want to let it process 10 at a time. Now also assume that this job can have many instances running at once, so it is concurrently accessing the table (along with any other consumers who may be adding new records to the queue).
I was planning to do something like this:
using(var tx = new Transaction(Isolation.Serializable))
{
    var batchGuid = //newGuid
    executeSql("update top(10) [QueueTable] set [BatchGuid] = batchGuid where [BatchGuid] is null");
    var itemsToProcess = executeSql("select * from [QueueTable] where [BatchGuid] = batchGuid");
    tx.Commit();
}
So basically what I'd be doing is starting a transaction as serializable, marking 10 items with a specific GUID, then getting those 10 items, then committing.
Is this a feasible strategy? I believe the isolation level of serializable will basically lock the whole table to prevent read/write until the transaction is complete - is this correct? Basically the transaction will block all other read/write operations on the table? I believe this is what I want in this case as I don't want to read dirty data and I don't want concurrent running jobs to stomp on each other when marking a batch of 10 to process.
Any insights as to whether I'm on the right track with this would be much appreciated. If there are better ways to accomplish this I'd welcome alternatives as well.
Serializable isolation mode does not necessarily lock the whole table. If you have an index on BatchGuid you will probably be OK, but if not, SQL Server will probably escalate to a table lock.
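A sketch of that index, in case it does not exist yet (the index name is an assumption):
CREATE NONCLUSTERED INDEX IX_QueueTable_BatchGuid
    ON dbo.QueueTable (BatchGuid);
-- lets the UPDATE find unclaimed rows without scanning (and locking) the whole table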
A few things you may want to look at:
Using the OUTPUT clause you can combine your UPDATE and SELECT into one query
You may need to use UPDLOCK if you have multiple processes running this query (see the sketch below)
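A minimal sketch combining both points, using the names from the question (the @batchGuid parameter is an assumption):
UPDATE TOP (10) dbo.QueueTable WITH (UPDLOCK)
SET BatchGuid = @batchGuid
OUTPUT inserted.*
WHERE BatchGuid IS NULL;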
You can do this in a single statement if you use the OUTPUT clause:
UPDATE TOP (10) [QueueTable]
SET [BatchGuid] = batchGuid
OUTPUT inserted.*
WHERE [BatchGuid] IS NULL;
Or more specifically:
var itemsToProcess = executeSql("update top(10) [QueueTable] set [BatchGuid] = batchGuid output inserted.* where [BatchGuid] is null");
It is personal preference I suppose, but I have never been a fan of the UPDATE TOP(n) syntax, because you can't specify an ORDER BY, and in most cases when specifying TOP you want to specify an ORDER BY. I much prefer using something like:
UPDATE q
SET [BatchGuid] = batchGuid
OUTPUT inserted.*
FROM ( SELECT TOP (10) *
       FROM dbo.QueueTable
       WHERE BatchGuid IS NULL
       ORDER BY ID
     ) AS q;
ADDENDUM
In response to the comment, I don't believe there is any chance of a race condition, but I was not 100% certain. The reason I don't believe this is that although the query reads as a SELECT and an UPDATE, that is syntactic sugar: it is just an update, and it uses exactly the same plan and locks as the query above. However, since I didn't know for sure, I decided to test:
First I set up a sample table in temp DB, and a logging table to log updated IDs
USE TempDB;
GO
CREATE TABLE dbo.T (ID BIGINT NOT NULL IDENTITY PRIMARY KEY, Col UNIQUEIDENTIFIER NULL);
INSERT dbo.T (Col)
SELECT TOP 1000000 NULL
FROM sys.all_objects a, sys.all_objects b;
CREATE TABLE dbo.T2 (ID BIGINT NOT NULL PRIMARY KEY);
Then in 10 different SSMS windows I ran this:
WHILE 1 = 1
BEGIN
    DECLARE @ID UNIQUEIDENTIFIER = NEWID();
    UPDATE T
    SET Col = @ID
    OUTPUT inserted.ID INTO dbo.T2 (ID)
    FROM ( SELECT TOP 10 *
           FROM dbo.T
           WHERE Col IS NULL
           ORDER BY ID
         ) t;
    IF @@ROWCOUNT = 0
        RETURN;
END
The whole process ran for 20 minutes, updating ~500,000 rows, before I stopped all 10 threads. Since updating the same row twice would throw an error when inserting into T2 (a primary key violation), and all 10 threads ran until I stopped them, this shows that there was no race condition. To confirm, I ran the following:
SELECT Col, COUNT(*)
FROM dbo.T
WHERE Col IS NOT NULL
GROUP BY Col
HAVING COUNT(*) <> 10;
Which, as expected, returned no rows.
I am happy to be proved wrong and to concede I was lucky in that none of these 100,000 iterations clashed, but I don't believe it was luck. I really believe there is a single lock; therefore it doesn't matter whether you have a transaction or not, you just need the correct isolation level.

Database trigger or application code? Which one should I use in this case?

Using C# and MS SQL Server 2008 R2, I have a service that inserts a user into the system. Users have a unique number assigned to them, something like a1, b1 or c1, which gets incremented on user insertion; but before inserting I have to check whether any previously assigned unique number has become available (unique numbers can be freed on user deletion). For example, if there are 5 users in the database, a1 through a5 are reserved, and if you delete, say, the 3rd user, then a3 becomes available for the next user insertion. This can be done easily, but since I have to read the available unique numbers on every insertion, I'm puzzled whether it's better to use an insert trigger or application code before the insertion.
Thanks in advance
I'm not sure about triggers or why you would want to do it that way, but you can definitely achieve this in code or a stored procedure. It will be a two-step process that has to run in one transaction:
select the minimum available ID
insert the new record
One important thing is that you will want to lock the table during the select query to prevent other processes from obtaining the same ID. You can do that using an exclusive lock hint. The code might look like:
select min(T.ID) + 1 from TableName T with(xlock)
where not exists (select * from TableName T1 where T1.ID = T.ID + 1)
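Putting the two steps together, a minimal sketch of the whole transaction might look like this (the Name column and the @name parameter are assumptions, and the empty-table case is not handled):
BEGIN TRANSACTION;

DECLARE @nextId int;

-- Step 1: find the lowest free ID while holding an exclusive lock
SELECT @nextId = MIN(T.ID) + 1
FROM TableName T WITH (XLOCK)
WHERE NOT EXISTS (SELECT * FROM TableName T1 WHERE T1.ID = T.ID + 1);

-- Step 2: insert the new record under that ID
INSERT INTO TableName (ID, Name)
VALUES (@nextId, @name);

COMMIT TRANSACTION;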

How to prevent other sessions from SELECTing a row before an UPDATE takes place

I am writing a call-centre program in C# where multiple agents load customers one by one from a table. To prevent more than one agent from loading the same customer, I have added a new field to the table to show that the row is locked. When I select a row, I update it and set the lock field to the ID of the agent who selected it. The problem is that between the time I select the row and the time I lock it, another agent can select the same row, since it's not locked yet! Is there a way I can handle this situation? The database is MySQL 5 / InnoDB.
Assuming you can only lock 1 profile per agent:
-- Check for no lock
UPDATE T SET LockField = 'abc' WHERE ProfileId = 1 AND LockField IS NULL;
-- Check to see if we updated anything.
-- If not, we can't show this row because someone else has it locked
SELECT ROW_COUNT();
Before I execute the update I have to select the ID...
If you do the UPDATE in 1 statement, you don't. We're getting a little past my knowledge of MySQL syntax - but something like:
-- Check for no lock
UPDATE T SET LockField = 'abc' WHERE ProfileId = (
    SELECT ProfileId FROM T WHERE LockField IS NULL LIMIT 1
);
-- Check to see what we updated
SELECT * FROM T WHERE LockField = 'abc';
works pretty easily.
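One caveat, as a hedged aside: MySQL often rejects a subquery that reads the same table being updated (error 1093). A common workaround is to wrap the subquery in a derived table so it is materialized first:
UPDATE T SET LockField = 'abc' WHERE ProfileId = (
    SELECT ProfileId FROM (
        SELECT ProfileId FROM T WHERE LockField IS NULL LIMIT 1
    ) AS x
);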
If you want to get a little more complicated (or MySQL doesn't support the subquery), you can use an update lock with SELECT...FOR UPDATE:
START TRANSACTION;
-- Put an update lock on the row till the xaction ends;
-- any other SELECT wanting an update lock will block until we're out of the xaction
SELECT ID INTO @id FROM T WHERE LockField IS NULL LIMIT 1 FOR UPDATE;
UPDATE T SET LockField = 'abc' WHERE ID = @id;
COMMIT;
Check out LOCK TABLES and UNLOCK TABLES:
http://dev.mysql.com/doc/refman/5.6/en/lock-tables.html
You could use this in conjunction with Mark's answer.
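A rough sketch of that approach, reusing the illustrative table and column names from above:
LOCK TABLES T WRITE;   -- other sessions can neither read nor write T until we unlock
SELECT ProfileId INTO @id FROM T WHERE LockField IS NULL LIMIT 1;
UPDATE T SET LockField = 'abc' WHERE ProfileId = @id;
UNLOCK TABLES;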
What you're describing is Optimistic Concurrency vs. Pessimistic Concurrency.

Avoiding race-condition when manually implementing IDENTITY-like increment for a SQL Server DB column

I'm building an ASP.NET MVC 2 site that uses LINQ to SQL. In one of the places where my site accesses the DB, I think a race condition is possible.
DB Architecture
Here are some of the columns of the relevant DB table, named Revisions:
RevisionID - bigint, IDENTITY, PK
PostID - bigint, FK to PK of Posts table
EditNumber - int
RevisionText - nvarchar(max)
On my site, users can submit a Post and edit a Post later on. Users other than the original poster are able to edit a Post - so there is scope for multiple edits on a single Post simultaneously.
When submitting a Post, a record in the Posts table is created, as well as a record in the Revisions table with PostID set to the ID of the Posts record, RevisionText set to the Post text, and EditNumber set to 1.
When editing a Post, only a Revisions record is created, with EditNumber being set to 1 higher than the latest edit number.
Thus, the EditNumber column refers to how many times a Post has been edited.
Incrementing EditNumber
The challenge that I see in implementing those functions is incrementing the EditNumber column. As that column can't be an IDENTITY, I have to manipulate its value manually.
Here's my LINQ query for determining what EditNumber a new Revision should have:
using(var db = new DBDataContext())
{
var rev = new Revision();
rev.EditNumber = db.Revisions.Where(r => r.PostID == postID).Max(r => r.EditNumber) + 1;
// ... (fill other properties)
db.Revisions.InsertOnSubmit(rev);
db.SubmitChanges();
}
Calculating a maximum and incrementing it can lead to a race condition.
Is there a better way to implement that function?
Update directly in the database and return the new revision:
update Revisions
set EditNumber += 1
output INSERTED.EditNumber
where PostID = @postId;
Unfortunately, this is not possible in LINQ. In fact, it is not possible from the client at all, no matter the technology used, short of doing pessimistic locking, which has too many drawbacks to be worth considering.
Updated:
Here is how I would insert a new revision (including first revision):
create procedure usp_insertPostRevision
    @postId int,
    @text nvarchar(max),
    @revisionId bigint output
as
begin
    set nocount on;
    declare @nextEditNumber table (EditNumber int not null);
    declare @rc int = 0;

    begin transaction;
    begin try
        update Posts
        set LastRevision += 1
        output INSERTED.LastRevision
            into @nextEditNumber (EditNumber)
        where PostId = @postId;

        set @rc = @@rowcount;
        if (@rc <> 1)
            raiserror (N'Expected exactly one post with Id:%i. Found:%i',
                16, 1, @postId, @rc);

        insert into Revisions
            (PostId, Text, EditNumber)
        select @postId, @text, EditNumber
        from @nextEditNumber;

        set @revisionId = scope_identity();
        commit;
    end try
    begin catch
        -- ... error handling omitted
    end catch
end
I omitted the error handling; see Exception handling and nested transactions for a template procedure that handles errors and nested transactions properly.
You'll notice the Posts table has a LastRevision field that is used as the increment source for the post's revisions. This is much better than computing the MAX each time you add a revision, as it avoids a (range) scan of Revisions. It also acts as concurrency protection: only one transaction at a time will be able to update it, and only that transaction will proceed with inserting a new revision. Concurrent transactions will block and wait until the first one commits; the next transaction unblocked will then correctly update the revision number to +1.
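The procedure assumes Posts already has that counter column; if it does not, a sketch of the change (constraint name assumed) would be:
alter table Posts
    add LastRevision int not null
        constraint DF_Posts_LastRevision default (0);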
Can multiple users edit the same post at the same time? If not, then you do not have a race condition, unless somehow a single user can submit multiple edits simultaneously.
If revisions are only permitted by the user who submitted the comment then you're OK with the above - if multiple users can be revising a single comment then there's scope for problems.
Since there is only one record in the Posts table per Post, use a lock.
Read the record in the Posts table and use a table hint [WITH (ROWLOCK, XLOCK)] to get an exclusive lock. Set the lock timeout to wait a few milliseconds.
If the process gets the lock, then it can add the revision record. If the process cannot get the lock, then have the process try again. After a few retries if the process cannot get a lock, return an error.
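A hedged sketch of that approach (the Posts key column, the parameter names, and the 50 ms timeout are assumptions; error 1222 is the lock-timeout error, and the retry loop around it is left to the application):
set lock_timeout 50;   -- wait at most 50 ms for the row lock

begin transaction;

-- take an exclusive lock on the post's row; error 1222 here means "retry"
select PostID
from Posts with (rowlock, xlock)
where PostID = @postId;

insert into Revisions (PostID, EditNumber, RevisionText)
select @postId,
       isnull(max(EditNumber), 0) + 1,
       @revisionText
from Revisions
where PostID = @postId;

commit transaction;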
Since EditNumber is a property determined by membership in a collection, have the collection provide it.
Make EditNumber a computed column - COUNT of records for same post with lesser RevisionID.
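A computed column cannot contain a subquery directly, so one possible way to sketch this suggestion is through a scalar function (names are assumptions, the existing EditNumber column would have to be dropped first, and such a column cannot be persisted or indexed):
create function dbo.PostEditNumber (@postId bigint, @revisionId bigint)
returns int
as
begin
    -- earlier revisions of the same post, plus one for 1-based numbering
    return (select count(*) + 1
            from dbo.Revisions
            where PostID = @postId and RevisionID < @revisionId);
end
go

alter table dbo.Revisions
    add EditNumber as dbo.PostEditNumber(PostID, RevisionID);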
