We are working on an MVC application. In this application we have a payment module. When a user starts a recurring subscription, the application gets two responses from PayPal for the completed payment, both with the same TransactionId.
One comes through the "Success URL" and the other through the IPN listener.
We are using a "Transaction" table to keep the PayPal transaction details.
When a response arrives from PayPal, the application checks whether the TransactionId already exists in the database, so the net result is that only the first response from PayPal should be inserted into the "Transaction" table.
Recently we have been having issues related to Entity Framework concurrency. If the two responses arrive in parallel, both records are inserted into the transaction table with the same TransactionId, even though we have code that checks for the existence of the TransactionId.
How do we prevent this duplicate insertion?
Both insertions happen from different contexts.
var ipnDetail = unitOfWork.TransactionDetailRepository.GetTransaction(transactionNumber);
if (ipnDetail == null)
{
}
We are using the same code for both insertions; the only difference is that they are called from different EF contexts.
You can also note that the first inserted entry has a later timestamp than the second inserted record; we are setting the date from code.
How do we solve this concurrency issue?
We tried using ObjectContext.Refresh as a solution, but it did not help.
((IObjectContextAdapter)context).ObjectContext.Refresh(System.Data.Objects.RefreshMode.StoreWins, ((IObjectContextAdapter)context).ObjectContext.ObjectStateManager.GetObjectStateEntries(EntityState.Added));
Any help would be appreciated. Please note that the application is in a production environment.
Best Regards,
Ranish
If you have SQL Server 2008 or later, the MERGE statement is exactly what you need. It allows you to put the existence check at the right place in the operation.
The example below inserts your new transactionId only if it does not already exist in the database. Most alternatives involve a query followed by an insert, leaving a window in which another connection can sneak in an insert before yours completes.
You can find resources on the internet about calling a stored procedure from Entity Framework; a minimal sketch is included after the procedure.
CREATE proc [dbo].[usp_InsertNewTransactionId](@transactionDate datetime2, @transactionId varchar(255))
as
begin
;with data as (select @transactionDate as transactionDate, @transactionId as transactionId)
-- HOLDLOCK keeps the existence check and the insert atomic under concurrent calls
merge transactions with (holdlock) t
using data s
on s.transactionId = t.transactionId
when not matched by target
then insert ([date],transactionId) values (s.transactionDate, s.transactionId);
end
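For completeness, here is a minimal sketch of calling that procedure from Entity Framework (EF6). The context name PaymentContext and the variable transactionId are illustrative placeholders, not names from the original code:
using System;
using System.Data.SqlClient;
// Sketch only: "PaymentContext" stands in for your DbContext.
using (var db = new PaymentContext())
{
    db.Database.ExecuteSqlCommand(
        "EXEC dbo.usp_InsertNewTransactionId @transactionDate, @transactionId",
        new SqlParameter("@transactionDate", DateTime.UtcNow),
        new SqlParameter("@transactionId", transactionId));
}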
Wrap the entire checking and inserting logic inside a TransactionScope.
using (var scope = new TransactionScope(TransactionScopeOption.RequiresNew))
{
// read & write logic here
scope.Complete();
}
RequiresNew causes a new transaction to be used, and the work should be blocked at the database level, so the other request will wait until the first one has completed its transaction, whether it added the Id or not.
I am trying to fetch a collection of custom objects from an Oracle database (v21) from a .NET client. Since I can't do any type mapping, I want to fetch it as JSON.
Here is the query:
select json_array("UDTARR") from sys.typetest
This is the result I see in SQL Developer (expected output):
This is what I get when I execute the same query via .NET:
"[]"
The same strategy (json_array()) seems to work fine in .NET for collections of primitive types as well as for non-collection-type fields of the same custom object.
Can someone please tell me I'm missing something obvious?
Here are the type definitions in case someone wants to try to replicate the issue:
The type that is used in the field "UDTARR":
create type udtarray AS VARRAY(5) OF TEST_DATATYPEEX;
Type "TEST_DATATYPEEX":
create type TEST_DATATYPEEX AS OBJECT
(test_id NUMBER,
vc VARCHAR2(20),
vcarray stringarray);
Type "STRINGARRAY":
create type stringarray AS VARRAY(5) OF VARCHAR2(50);
Code for executing the query and reading the value:
string query = "select json_array(\"UDTARR\") from sys.typetest";
using (var command = new OracleCommand(query, con))
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        Console.WriteLine(reader.GetString(0));
    }
}
In the event log both queries are recorded; in both cases the user is connected with DBA privileges:
(from sql developer)
Audit trail: LENGTH: '362' ACTION :[45] 'select json_array("UDTARR")
from sys.typetest' DATABASE USER:[3] 'SYS' PRIVILEGE :[6] 'SYSDBA'
(from .net)
Audit trail: LENGTH: '361' ACTION :[45] 'select json_array("UDTARR")
from sys.typetest' DATABASE USER:[3] 'SYS' PRIVILEGE :[6] 'SYSDBA'
UnCOMMITted data is only visible within the session that created it (and will ROLLBACK at the end of the session if it has not been COMMITted). If you can't see the data from another session (i.e. in C#) then make sure you have issued a COMMIT command in the SQL client where you INSERTed the data (i.e. SQL Developer).
Note: even if you connect as the same user, this will create a separate session and you will not be able to see the uncommitted data in the other session.
From the COMMIT documentation:
Until you commit a transaction:
You can see any changes you have made during the transaction by querying the modified tables, but other users cannot see the changes. After you commit the transaction, the changes are visible to other users' statements that execute after the commit.
You can roll back (undo) any changes made during the transaction with the ROLLBACK statement (see ROLLBACK).
OK, it looks like I was indeed missing something obvious... closing SQL Developer gave me a prompt that there were uncommitted changes; after committing and closing SQL Developer I am now also receiving the expected data in .NET.
I have never seen behaviour like this in any other SQL management tool, but hey, you live and you learn :)
I have a method that needs to "claim" a payment number to ensure it is available at a later time. I cannot just get a new payment number when ready to commit to the database, because the number is added to a signed token, and the payment number is later taken from that signed token when committing to the database, so that the token can be linked to the payment afterwards.
Payment numbers are sequential and the current method used in existing code is:
Create a Payment
Get the last payment number from the database
Increment the payment number
Use this payment number for the Payment
Update the database with the incremented payment number
In my service I am trying to prevent the following race-condition:
My service reads the payment number (eg. 100)
Another service uses and updates the payment number (now 101)
My service increments the number locally (to 101) and updates the database (still 101)
This would produce two payments with a payment number of 100.
Here is my implementation so far, in my Transaction class:
private DbSet<PaymentIdentifier> paymentIdentifier;
//...
private int ClaimNextPaymentNumber()
{
int nextPaymentNumber = -1;
using(var dbTransaction = db.Database.BeginTransaction())
{
int lastPaymentNumber = paymentIdentifier.ElementAt(0).Identifier;
nextPaymentNumber = lastPaymentNumber + 1;
paymentIdentifier.ElementAt(0).Identifier = nextPaymentNumber;
db.SaveChanges();
dbTransaction.Commit();
}
return nextPaymentNumber;
}
The PaymentIdentifier table has a single row and a single column "Identifier" (hence the .ElementAt(0)). I am unable to change the database structure as there is lots of legacy code relying on it that is very brittle.
Will having the code wrapped in a transaction (as I have done) protect against the race condition, or are there some Entity Framework / PostgreSQL idiosyncrasies I need to deal with to protect the identifier from being read while the transaction is in progress?
Thank you!
(As a side point, I believe lots of legacy code in the other software connecting to the database simply ignores the race condition and relies on it being "very fast")
It helps you with the race condition only if all code, including the legacy code, uses this method. If there is still code that continues using client-side incrementing without a transaction, you'll get the same problem. Just swap 'My service' and 'Another service' in your description:
1. Another service reads the payment number (eg. 100) **without** transaction
2. My service uses and updates the payment number (now 101) **with** transaction
3. Another service increments the number locally (to 101) and updates the database (still 101) **without** transaction
Note that you can replace your code with a simpler version by executing this query without an explicit transaction:
update PaymentIdentifier set Identifier = Identifier + 1 returning Identifier;
But again, it will not solve your concurrency problem until you replace all the places where the Identifier is incremented. If you can change that, you would be better off using a SEQUENCE or generators, which will safely provide you with incremental ids.
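For illustration, here is a minimal sketch of running that single-statement increment from Entity Framework, assuming EF6 with the Npgsql provider and that the table and column really are named PaymentIdentifier and Identifier as in the question:
using System.Linq;
private int ClaimNextPaymentNumber()
{
    // The increment and the read happen in one statement, so two concurrent
    // callers can never both receive the same number.
    return db.Database
             .SqlQuery<int>("update \"PaymentIdentifier\" set \"Identifier\" = \"Identifier\" + 1 returning \"Identifier\"")
             .Single();
}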
A transaction does not automatically lock your table. A transaction just ensures that multiple changes to the database are done all together or not at all (see the A (atomic) in ACID). But what you want is for only one session to be able to read, add one, and update the value, and only after that is done should the next session be allowed to do the same thing.
So you now have different possibilities:
Use a sequence: you can get the next value, for example, with SELECT nextval('mysequencename'). If two sessions try to get a value at the same time, they will get two different values.
If you have more complex needs and want to store every "token" with additional data in a table, so that every token is a row with additional columns, you could use table locking. With this you can restrict access to the table so that only one session is allowed to access it at a time. But make sure you hold locks for as short a time as possible, because this will become your performance bottleneck.
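A rough sketch of that table-locking approach, assuming EF6 with the Npgsql provider and the PaymentIdentifier table from the question (names are placeholders; keep the lock as short as possible):
using System.Linq;
using (var tx = db.Database.BeginTransaction())
{
    // Only one session at a time can hold this lock; other sessions wait here.
    db.Database.ExecuteSqlCommand("LOCK TABLE \"PaymentIdentifier\" IN ACCESS EXCLUSIVE MODE");

    var current = db.Database.SqlQuery<int>("select \"Identifier\" from \"PaymentIdentifier\"").Single();
    db.Database.ExecuteSqlCommand("update \"PaymentIdentifier\" set \"Identifier\" = {0}", current + 1);

    tx.Commit(); // the lock is released when the transaction ends
}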
The database prevents the race condition by throwing a concurrency violation error in this case. So I looked at how this is handled in the legacy code (following the suggestion by @sergey-l), and it uses a simple retry mechanism. So I did the same:
private int ClaimNextPaymentNumber()
{
DbContextTransaction dbTransaction;
bool failed;
int paymentNumber = -1;
do
{
failed = false;
using(dbTransaction = db.Database.BeginTransaction())
{
try
{
paymentNumber = TryToClaimNextPaymentNumber();
}
catch(DbUpdateConcurrencyException ex)
{
failed = true;
ResetForClaimPaymentNumberRetry(ex);
}
dbTransaction.Commit();
concurrencyExceptionRetryCount = 0;
}
}
while(failed);
return paymentNumber;
}
I am working with WinForms in C# and Entity Framework.
In the database I have a linking table between "Word" and "User"; the Word table has a lot of data (4,000+ rows).
I have a window with a DataGridView where each row has a checkbox, and the user marks the words he wants.
When the save button is pressed, I want to update all the records he has changed in the table.
listWord = Program.DB.WordUseUser.Where(lw => lw.IdUser == thisIdUser).ToList();
// Clicking a checkbox, I add to or remove from listWord accordingly...
foreach (var item in listWord)
{
Program.DB.WordUseUser.Remove(item);
}
Program.DB.SaveChanges();
foreach (WordUseUser item in listWord)
{
Program.DB.WordUseUser.Add(item);
}
Program.DB.SaveChanges();
It takes a lot of time (of course...), and I'm looking for a more effective solution.
I tried the solution here: Fastest Way of Inserting in Entity Framework
But it only talks about updating existing data, not updating, adding, and deleting together.
I would love some help!
Quick reply: you have to do it inside an explicit transaction.
Not only is this safer, it will also be much faster.
So: begin a transaction, do your updates/inserts, and commit the transaction.
Every query creates its own implicit transaction, unless there is already an existing transaction. So think of it this way:
without an explicit transaction the database has to do 12,000 operations (for every query: create transaction, execute query, commit transaction), whereas with an explicit transaction it's just 4,002 operations.
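As a sketch (assuming Program.DB is an EF6 DbContext, as the question's code suggests), the save-button logic could be wrapped like this:
using (var tx = Program.DB.Database.BeginTransaction())
{
    foreach (var item in listWord)
    {
        Program.DB.WordUseUser.Remove(item);
    }
    Program.DB.SaveChanges();

    foreach (WordUseUser item in listWord)
    {
        Program.DB.WordUseUser.Add(item);
    }
    Program.DB.SaveChanges();

    tx.Commit(); // one explicit commit instead of one implicit commit per statement
}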
I am working with a situation where we are dealing with money transactions.
For example, I have a table of users wallets, with their balance in that row.
UserId; Wallet Id; Balance
Now in our website and web services, every time a certain transaction happens, we need to:
check that there are enough funds available to perform that transaction;
deduct the cost of the transaction from the balance.
How and what is the correct way to go about locking that row / entity for the entire duration of my transaction?
From what I have read, there are some solutions where EF marks an entity and then compares that mark when it saves it back to the DB; however, what does it do when another user / program has already edited the amount?
Can I achieve this with EF? If not what other options do I have?
Would calling a stored procedure possibly allow me to lock the row properly, so that no one else can access that row in SQL Server while program A has the lock on it?
EF doesn't have a built-in locking mechanism; you would probably need to use a raw query like
using (var scope = new TransactionScope(...))
{
using (var context = new YourContext(...))
{
var wallet =
context.ExecuteStoreQuery<UserWallet>("SELECT UserId, WalletId, Balance FROM UserWallets WITH (UPDLOCK) WHERE ...");
// your logic
scope.Complete();
}
}
You can set the isolation level on the transaction in Entity Framework to ensure no one else can change it:
YourDataContext.Database.BeginTransaction(IsolationLevel.RepeatableRead)
RepeatableRead
Summary:
Locks are placed on all data that is used in a query, preventing other users from updating the data. Prevents non-repeatable reads but phantom rows are still possible.
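As a sketch of how that could look around the balance check and deduction (the wallet entity, DbSet, and variable names are placeholders; IsolationLevel here comes from System.Data):
using System.Data;
using System.Linq;
using (var tx = YourDataContext.Database.BeginTransaction(IsolationLevel.RepeatableRead))
{
    var wallet = YourDataContext.UserWallets.Single(w => w.UserId == userId);

    if (wallet.Balance >= cost)
    {
        wallet.Balance -= cost;
        YourDataContext.SaveChanges();
    }

    tx.Commit();
}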
The whole point of a transactional database is that the consumer of the data determines how isolated their view of the data should be.
Irrespective of whether your transaction is serializable, someone else can perform a dirty read on the same data that you just changed but did not commit.
You should first concern yourself with the integrity of your view, and only then accept a degradation of the quality of that view to improve system performance where you are sure it is required.
Wrap everything in a TransactionScope with the Serializable isolation level and you personally cannot really go wrong. Only drop the isolation level when you see it is genuinely required (i.e. when getting things wrong sometimes is OK).
Someone asks about this here: SQL Server: preventing dirty reads in a stored procedure
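For reference, a sketch of a TransactionScope with the isolation level stated explicitly (TransactionScope defaults to Serializable anyway); the context name is a placeholder:
using System.Transactions;
var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
using (var context = new YourContext())
{
    // read the wallet, check the balance, and apply the deduction here
    context.SaveChanges();
    scope.Complete();
}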
I'm trying to implement Microsoft Synchronization Services into a smart device application I am developing, but I seem to have hit a brick wall and I am hoping someone will be able to provide a solution. I have managed to implement synchronization so that it downloads every record in a table; however, I want to filter the records so that only the data relating to the user is downloaded. To achieve this I added a WHERE Operator.kde = @kde clause to the SelectIncrementalInsertsCommand, as shown in the following code.
this.SelectIncrementalInsertsCommand.CommandText = @"IF @sync_initialized = 0 SELECT dbo.Operator.[OperatorID], [kde], [OperatorName], [Pass] FROM dbo.Operator LEFT OUTER JOIN CHANGETABLE(CHANGES dbo.Operator, @sync_last_received_anchor) CT ON CT.[OperatorID] = dbo.Operator.[OperatorID] WHERE dbo.Operator.[kde] = @kde AND (CT.SYS_CHANGE_CONTEXT IS NULL OR CT.SYS_CHANGE_CONTEXT <> @sync_client_id_binary) ELSE BEGIN SELECT dbo.Operator.[OperatorID], [kde], [OperatorName], [Pass] FROM dbo.Operator JOIN CHANGETABLE(CHANGES dbo.Operator, @sync_last_received_anchor) CT ON CT.[OperatorID] = dbo.Operator.[OperatorID] WHERE dbo.Operator.[kde] = @kde AND (CT.SYS_CHANGE_OPERATION = 'I' AND CT.SYS_CHANGE_CREATION_VERSION <= @sync_new_received_anchor AND (CT.SYS_CHANGE_CONTEXT IS NULL OR CT.SYS_CHANGE_CONTEXT <> @sync_client_id_binary)); IF CHANGE_TRACKING_MIN_VALID_VERSION(object_id(N'dbo.Operator')) > @sync_last_received_anchor RAISERROR (N'SQL Server Change Tracking has cleaned up tracking information for table ''%s''. To recover from this error, the client must reinitialize its local database and try again',16,3,N'dbo.Operator') END ";
I then declared the @kde parameter as follows.
this.SelectIncrementalInsertsCommand.Parameters.Add(new System.Data.SqlClient.SqlParameter("@kde", System.Data.SqlDbType.Int));
To pass in the parameter, I added the following line to the code responsible for initiating the synchronization.
syncAgent.Configuration.SyncParameters.Add(new SyncParameter("@kde", kde));
NOTE: the kde value is an integer that is passed into my sync method
Despite these filters being added, the synchronization process seems to ignore them completely and downloads all the data for every operator. I have investigated this issue online and my code seems identical to numerous tutorials I have read; however, it still does not work as desired.
I am fairly new to Sync Services, so if anyone can provide information and guidance on solving this issue I will be hugely grateful.
Thank You in advance
Have you tried this?
ADDING FILTER TO LOCAL DATABASE CACHE GENERATED SYNC
I would suggest you run SQL Profiler as well to see the actual commands being passed to SQL Server.
I managed to fix this problem; it turned out to be some "dirty" records in the database which, for some reason, were affecting synchronization. Once I deleted these records, everything worked as it should.