I have some code (you can check the project on GitHub); the error is in the UploadController method GetExtensionId.
Database diagram: (attached as an image)
Code (in this controller I send files to upload):
[HttpPost]
public ActionResult UploadFiles(HttpPostedFileBase[] files, int? folderid, string description)
{
    foreach (HttpPostedFileBase file in files)
    {
        if (file != null)
        {
            string fileName = Path.GetFileNameWithoutExtension(file.FileName);
            string fileExt = Path.GetExtension(file.FileName)?.Remove(0, 1);
            int? extensionid = GetExtensionId(fileExt);
            if (CheckFileExist(fileName, fileExt, folderid))
            {
                fileName = fileName + $" ({DateTime.Now.ToString("dd-MM-yy HH:mm:ss")})";
            }
            File dbFile = new File();
            dbFile.folderid = folderid;
            dbFile.displayname = fileName;
            dbFile.file_extensionid = extensionid;
            dbFile.file_content = GetFileBytes(file);
            dbFile.description = description;
            db.Files.Add(dbFile);
        }
    }
    db.SaveChanges();
    return RedirectToAction("Partial_UnknownErrorToast", "Toast");
}
I want to create the Extension in the database if it does not exist yet, and I do that with GetExtensionId:
private static object locker = new object();

private int? GetExtensionId(string name)
{
    int? result = null;
    lock (locker)
    {
        var extItem = db.FileExtensions.FirstOrDefault(m => m.displayname == name);
        if (extItem != null) return extItem.file_extensionid;
        var fileExtension = new FileExtension()
        {
            displayname = name
        };
        db.FileExtensions.Add(fileExtension);
        db.SaveChanges();
        result = fileExtension.file_extensionid;
    }
    return result;
}
In the SQL Server database I have a unique constraint on the displayname column of FileExtension.
The problem starts only if I upload several files with the same extension and this extension does not exist in the database yet.
If I remove the lock, GetExtensionId throws an exception about the unique constraint.
Maybe, for some reason, the next iteration of the foreach loop calls GetExtensionId without waiting? I don't know.
But my code works fine only if I use the lock.
If you know why this happens, please explain.
This sounds like a simple concurrency race condition. Imagine two requests come in at once; they both run the FirstOrDefault check, which correctly says "nope" for both. Then they both try to insert; one wins, one fails because the DB has changed. While EF manages transactions around SaveChanges, that transaction doesn't start from when you initially query the data.
The lock appears to work by preventing both requests from entering the lookup code at the same time, but this is not a reliable solution in general, as it only works inside a single process; it does nothing across processes, let alone across nodes.
So, a few options here:
your code could detect the unique constraint violation exception and recheck from the start (FirstOrDefault etc.), which keeps things simple in the success case (which is going to be the majority of the time) and not horribly expensive in the failure case (just an exception and an extra DB hit) - pragmatic enough (see the sketches after this list)
you could move the "select if exists, insert if it doesn't" into a single operation inside the database, inside a transaction (ideally at serializable isolation level, and/or using the UPDLOCK hint) - this requires writing T-SQL yourself rather than relying on EF, but it minimises round trips and avoids writing "detect failure and compensate" code (also sketched below)
you could perform the selects and possible inserts inside a transaction via EF - complicated and messy, frankly: don't do this (it would again need serializable isolation level, but now the serializable transaction spans multiple round trips, which can start to impact locking at scale)
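To make the first option concrete, here is a minimal sketch, assuming EF6 on SQL Server (2601 and 2627 are SQL Server's error numbers for duplicate key / unique constraint violations; the entity and property names are taken from the question):
// using System.Data.Entity;                 // EntityState
// using System.Data.Entity.Infrastructure;  // DbUpdateException
// using System.Data.SqlClient;              // SqlException
private int? GetExtensionId(string name)
{
    var existing = db.FileExtensions.FirstOrDefault(m => m.displayname == name);
    if (existing != null) return existing.file_extensionid;

    var fileExtension = new FileExtension { displayname = name };
    db.FileExtensions.Add(fileExtension);
    try
    {
        db.SaveChanges();
        return fileExtension.file_extensionid;
    }
    catch (DbUpdateException ex) when (
        ex.GetBaseException() is SqlException sqlEx &&
        (sqlEx.Number == 2601 || sqlEx.Number == 2627))
    {
        // Another request inserted the same extension first. Detach the failed
        // entity so a later SaveChanges doesn't retry the insert, then re-read
        // the row the other request created.
        db.Entry(fileExtension).State = EntityState.Detached;
        return db.FileExtensions.First(m => m.displayname == name).file_extensionid;
    }
}
And a sketch of the second option, pushing the "select if exists, insert if it doesn't" into a single T-SQL batch executed through EF's raw SQL support (this assumes the table is named FileExtensions with an identity key; the UPDLOCK/HOLDLOCK hints make an empty read block competing inserts of the same key until commit):
private int GetExtensionIdUpsert(string name)
{
    const string sql = @"
BEGIN TRANSACTION;
DECLARE @id int =
    (SELECT file_extensionid
     FROM FileExtensions WITH (UPDLOCK, HOLDLOCK)
     WHERE displayname = @p0);
IF @id IS NULL
BEGIN
    INSERT INTO FileExtensions (displayname) VALUES (@p0);
    SET @id = SCOPE_IDENTITY();
END
COMMIT;
SELECT @id;";
    return db.Database.SqlQuery<int>(sql, name).Single();
}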
Experiencing an issue updating a MySQL DB through EF. It's not the first time I'm dealing with it, so I had some ideas about why my data isn't getting changed. I tried changing an element in the goods array; tried editing an object received through a LINQ request (seen some examples of this method); made some attempts at marking the element found in the database before editing (like EntityState and Attach()). None of these made any difference, so I tried removing <asp:UpdatePanel> from Site.Master to see what happens (it is responsible for postback blocking to prevent page shaking on update), but nothing changed (while btnRedeemEdit.IsPostBack has its default value).
The code below is the function I use for updates.
protected void btnRedeemEdit_Click(object sender, EventArgs e)
{
    if (!string.IsNullOrEmpty(Request.QueryString["id"]))
    {
        var db = new GoodContext();
        var goods = db.Goods.ToList();
        Good theGood = goods.FirstOrDefault(x => x.Id == int.Parse(Request.QueryString["id"]));
        //db.Goods.Attach(theGood);//No effect
        //db.Entry(theGood).State = System.Data.Entity.EntityState.Modified; //No effect
        if (theGood != default)
        {
            theGood.AmountSold = GetInput().AmountSold;
            theGood.APF = GetInput().APF;
            theGood.Barcode = GetInput().Barcode;
            theGood.Description = GetInput().Description;
            theGood.ImagesUrl = GetInput().ImagesUrl;//"https://i.pinimg.com/564x/2d/b7/d8/2db7d8c53b818ce838ad8bf6a4768c71.jpg";
            theGood.Name = GetInput().Name;
            theGood.OrderPrice = GetInput().OrderPrice;
            theGood.Profit = GetInput().Profit;
            theGood.RecievedOn = GetInput().RecievedOn;//DateTime.Parse(GetInput().RecievedOn).Date.ToString();
            theGood.TotalAmount = GetInput().TotalAmount;
            theGood.WeightKg = GetInput().WeightKg;
            //SetGoodValues(goods[editIndex],GetInput());//Non-working
            db.SaveChanges();
            Response.Redirect("/AdminGoods");
        }
        else Response.Write($"<script>alert('Good on ID does not exist');</script>");
    }
    else Response.Write($"<script>alert('Unable to change: element selected does not exist');</script>");
}
Notice that no alerts appear during execution, so the object in the database can be found.
Are there any other things that can be responsible for blocking database updates?
A few things to update & check:
Firstly, DbContexts should always be disposed, so in your case wrap the DbContext inside a using statement:
using (var db = new GoodContext())
{
    // ...
}
Next, there is no need to load all goods from the DbContext; just use LINQ to retrieve the one you want to update:
using (var db = new GoodContext())
{
    Good theGood = db.Goods.SingleOrDefault(x => x.Id == int.Parse(Request.QueryString["id"]));
    if (theGood is null)
    {
        Response.Write($"<script>alert('Good on ID does not exist');</script>");
        return;
    }
}
The most plausible suspect is what "GetInput()" actually does: have you confirmed that it actually has the changes you want? If GetInput is a method that returns an object containing your changes, then it only needs to be called once rather than each time you set a property:
(Inside the using() {} scope...)
var input = GetInput();
theGood.AmountSold = input.AmountSold;
theGood.APF = input.APF;
theGood.Barcode = input.Barcode;
theGood.Description = input.Description;
// ...
db.SaveChanges();
If input has the updated values but you aren't seeing those values in the database after calling SaveChanges, then there are two things to check.
1) Check that the database connection string at runtime matches the database that you are checking against. The easiest way to do that is to get the connection string from the DbContext instance's Database.
EF 6:
using (var db = new GoodContext())
{
    var connectionString = db.Database.Connection.ConnectionString; // Breakpoint here and inspect.
}
EF Core (5/6):
using (var db = new GoodContext())
{
    var connectionString = db.Database.GetConnectionString();
}
Often at runtime the DbContext will be initialized with a connection string from a web.config / .exe.config file that you don't expect, so you're checking one database expecting changes while the application is actually using a different database / server. (More common than you'd expect.)
2) Check that you aren't disabling change tracking. By default EF enables change tracking, which is how it knows if/when data has changed for SaveChanges to generate SQL statements. Sometimes developers encounter performance issues and start looking for ways to speed up EF, including disabling change tracking on the DbContext. (A fine option for read-only systems, but a pain for read-write.)
EF6 & EF Core: (DbContext initialization)
Configuration.AutoDetectChangesEnabled = false; // If you have this set to false consider removing it.
If you must disable change tracking then you have to explicitly set the EntityState of the entity to Modified before calling SaveChanges():
db.Entry(theGood).State = EntityState.Modified;
db.SaveChanges();
Using change tracking is preferable to using EntityState because with change tracking EF will only generate an UPDATE statement if any values have changed, and only for the values that changed. With EntityState.Modified, EF will always generate an UPDATE statement for all non-key fields regardless of whether any of them actually changed.
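Putting those suggestions together, the click handler might look something like this sketch (GetInput() and the Good properties come from the question; the remaining property assignments are elided):
protected void btnRedeemEdit_Click(object sender, EventArgs e)
{
    if (string.IsNullOrEmpty(Request.QueryString["id"]))
    {
        Response.Write($"<script>alert('Unable to change: element selected does not exist');</script>");
        return;
    }

    using (var db = new GoodContext())
    {
        int id = int.Parse(Request.QueryString["id"]);
        Good theGood = db.Goods.SingleOrDefault(x => x.Id == id);
        if (theGood is null)
        {
            Response.Write($"<script>alert('Good on ID does not exist');</script>");
            return;
        }

        var input = GetInput(); // Call once; copy the values onto the tracked entity.
        theGood.AmountSold = input.AmountSold;
        theGood.APF = input.APF;
        theGood.Barcode = input.Barcode;
        theGood.Description = input.Description;
        // ... remaining properties ...

        db.SaveChanges(); // Change tracking generates an UPDATE for the changed columns only.
    }
    Response.Redirect("/AdminGoods");
}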
We are having trouble inserting into a FileTable. This is our current setup; I will try to explain it in as much detail as possible:
Basically we have three tables:
T_Document (main metadata of a document)
T_Version (versioned metadata of a document)
T_Content (the binary content of a document, FileTable)
Our WCF service saves the documents and is used by multiple people. The service opens a transaction and calls the method SaveDocument, which saves the documents:
//This is the point where the transaction starts
using (IDBTransaction tran = db.GetTransaction())
{
    try
    {
        m_commandFassade.SaveDocument(document, m_loginName, db, options, lastVersion);
        tran.Commit();
        return document;
    }
    catch
    {
        tran.Rollback();
        throw;
    }
}
The SaveDocument method looks like this:
public void SaveDocument(E2TDocument document, string login, IDBConnection db, DocumentUploadOptions options, int lastVersion)
{
    document.GuardNotNull();
    options.GuardNotNull();
    if (lastVersion == -1)
    {
        //inserting a new T_Document
        SaveDocument(document, db);
    }
    else
    {
        //document already exists, updating it
        UpdateDocument(document, db);
    }
    Guid contentID = Guid.NewGuid();
    //inserting the content
    SaveDocumentContent(document, contentID, db);
    //inserting the new / initial version
    SaveDocumentVersion(document, contentID, db);
}
Basically all the methods you see either insert into or update those three tables. The insert-content query, which appears to cause the trouble, looks like this:
INSERT INTO T_Content
    (stream_id
    ,file_stream
    ,name)
VALUES
    (@ContentID
    ,@Content
    ,@Title)
And the method (please take this as pseudo code):
private void SaveDocumentContent(E2TDocument e2TDokument, Guid contentID, IDBConnection db)
{
    using (m_log.CreateScope<MethodScope>(GlobalDefinitions.TracePriorityForData))
    {
        Command cmd = CommandFactory.CreateCommand("InsertContents");
        cmd.SetParameter("ContentID", contentID);
        cmd.SetParameter("Content", e2TDokument.Content);
        string title = string.Concat(e2TDokument.Titel.RemoveIllegalPathCharacters(), GlobalDefinitions.UNTERSTRICH,
            contentID).TrimToMaxLength(MaxLength_T_Contents_Col_Name, SuffixLength_T_Contents_Col_Name);
        cmd.SetParameter("Title", title);
        db.Execute(cmd);
    }
}
I have no experience in deadlock analysis, but the deadlock graphs show me that when inserting the content into the FileTable, the insert is deadlocked with another process writing into the same table at the same time (the other side of the graph shows the same statement, and my application log confirms two concurrent attempts to save documents).
The same deadlock appears 30 times a day. I have already shrunk the transaction to a minimum, removing all unnecessary selects, but so far I have had no luck solving this issue.
What I'm most curious about is how it's even possible to deadlock on an insert into a FileTable. Are there internal things being executed that I'm not aware of? I saw some strange statements in the profiler trace on that table that we don't use anywhere in our code, e.g.:
set @pathlocator = convert(hierarchyid, @path_locator__bin)
and things like:
if exists (
    select 1
    from [LGOL_Content01].[dbo].[T_Contents]
    where parent_path_locator = @path_locator
)
If you need any more details, please let me know. Any tips on how to proceed would be awesome.
Edit 1: (execution plan for the T_Content insert attached as an image)
So, after hours and hours of research and consulting with Microsoft, the deadlock is actually a FileTable / SQL Server bug, which will be hotfixed by Microsoft.
I wrote a library, referenced by numerous applications, that tracks who is online and which application and page they are viewing.
The data is stored, using EF6, in a SQL Server 2008 table which tracks their username (primary key), application, page and timestamp. I only want to store the latest request for each person, so each username should only be stored once.
The library code, which is called from the Global.asax of each application, looks like this:
public static void Add(ApplicationType application, string username, string pageRequested)
{
    using (var db = new CommonDAL()) // EF context
    {
        var exists = db.ActiveUsers.Find(username);
        if (exists != null)
            db.ActiveUsers.Remove(exists);
        var activeUser = new ActiveUser() { ApplicationID = application.Value(), Username = username, PageRequested = pageRequested, TimeRequested = DateTime.Now };
        db.ActiveUsers.Add(activeUser);
        db.SaveChanges();
    }
}
I'm intermittently getting the error Violation of PRIMARY KEY constraint 'PK_tblActiveUser_Username'. Cannot insert duplicate key in object 'dbo.tblActiveUser'. The duplicate key value is (xxxxxxxx)
What I can only guess is happening is: Request A comes in and removes the existing username. Request B (from the same user) then comes in, tries to remove the username, and sees nothing exists. Request A then adds the username. Request B then tries to add the username and fails. The error frequently seems to be triggered when a web server sends a client a 401 status, which again points to multiple requests within a short period of time triggering this.
I'm having trouble reproducing this race condition in unit tests, as I haven't done much async programming before, but I tried to create async tests with delays to simulate multiple simultaneous slow requests. I've tried using (var transaction = new TransactionScope()) and using (var transaction = db.Database.BeginTransaction(System.Data.IsolationLevel.ReadCommitted)) to serialize the requests so request A can complete before request B begins, but I can't verify that either one fixes the issue, as I can't reproduce the situation reliably.
1) What is the right way to prevent the exception (so the most recent request is the one that is ultimately stored)?
2) What is the right way to write a unit test to prove this is working?
Since you only want to store the latest item, you could use a "last update wins" approach and avoid the race condition over who can insert first: the database handles the locks, and the last caller to update (which is the most recent request) is what ends up in the table.
Something like the following should handle any primary key errors in the edge case where a brand-new user has 2 requests at the same time, while capping the retries so an error can't recurse forever:
// using System.Data.Entity.Infrastructure; // DbUpdateException
public static void Add(ApplicationType application,
    string username,
    string pageRequested,
    int recursionCount = 0)
{
    using (var db = new CommonDAL()) // EF context
    {
        var exists = db.ActiveUsers.Find(username);
        if (exists != null)
        {
            // Last update wins: overwrite the existing row in place.
            exists.ApplicationID = application.Value();
            exists.PageRequested = pageRequested;
            exists.TimeRequested = DateTime.Now;
        }
        else
        {
            var activeUser = new ActiveUser
            {
                ApplicationID = application.Value(),
                Username = username,
                PageRequested = pageRequested,
                TimeRequested = DateTime.Now
            };
            db.ActiveUsers.Add(activeUser);
        }
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateException) // e.g. a primary key violation from a concurrent insert
        {
            if (recursionCount < 3) // retry limit; pick what suits you
            {
                Add(application, username, pageRequested, recursionCount + 1);
            }
            else
            {
                throw;
            }
        }
    }
}
As for unit testing this, it will be very hard unless you insert an artificial delay or can force both threads to run at the same time. Sometimes the timing on race conditions is in the millisecond range depending on the issue. Tasks may not work because they are not guaranteed to run at the same time; you throw them onto the background thread pool and they run when they can. Old-school threads may work, but I don't know how to force the interleaving, since the time between the read and the remove & create is most likely in the 5 ms range or less.
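If you still want to try to provoke the race in a test, one option is to line two threads up behind a Barrier so they hit Add at (almost) exactly the same moment. A rough sketch, with placeholder arguments and no test-framework attributes; even this only makes the race likely, not certain:
// using System.Threading;        // Barrier
// using System.Threading.Tasks;  // Task

public static async Task Add_TwoConcurrentRequests_DoesNotThrow()
{
    using (var barrier = new Barrier(participantCount: 2))
    {
        Task CallAdd() => Task.Factory.StartNew(() =>
        {
            barrier.SignalAndWait();                       // Release both threads together.
            Add(ApplicationType.Web, "someuser", "/home"); // Placeholder arguments.
        }, TaskCreationOptions.LongRunning);               // Dedicated threads, not the pool.

        // If the retry logic works, neither call should surface an exception.
        await Task.WhenAll(CallAdd(), CallAdd());
    }
}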
Assume I have an account_profile table with a Score field that is similar to an account's money (the database type is BIGINT(20) and the Entity Framework type is long, because I don't need decimals). Now I have the following function:
public long ChangeScoreAmount(int userID, long amount)
{
    var profile = this.Entities.account_profile.First(q => q.AccountID == userID);
    profile.Score += amount;
    this.Entities.SaveChanges();
    return profile.Score;
}
However, I am afraid that when ChangeScoreAmount is called multiple times concurrently, the final amount won't be correct.
Here are the solutions I am currently considering:
Adding a lock with a static locking variable in the ChangeScoreAmount function, since the class itself may be instantiated multiple times when needed. It looks like this:
public long ChangeScoreAmount(int userID, long amount)
{
    lock (ProfileBusiness.scoreLock)
    {
        var profile = this.Entities.account_profile.First(q => q.AccountID == userID);
        profile.Score += amount;
        this.Entities.SaveChanges();
        return profile.Score;
    }
}
The problem is, I have never tried a lock on a static variable, so I don't know if it is really safe and whether any deadlock could occur. Moreover, it may be bad if, somewhere else outside this function, a change to the Score field is applied midway.
OK, this is no longer an option, because my server application will run on multiple sites, which means the locking variable cannot be used.
Creating a stored procedure in the database and calling that stored procedure from the function. However, I don't know if there is an "atomic" way to create that stored procedure, so that it can only be called once at a time, since I still need to retrieve the value, change it, then update it again.
I am using MySQL Community 5.6.24 and MySQL .NET Connector 6.9.6 in case it matters.
NOTE: My server application may be run on multiple server machines.
You can use SQL transactions with the REPEATABLE READ isolation level instead of locking in the application. For example, you can write:
public long ChangeScoreAmount(int userID, long amount)
{
    using (var ts = new TransactionScope(TransactionScopeOption.RequiresNew,
        new TransactionOptions { IsolationLevel = IsolationLevel.RepeatableRead }))
    {
        var profile = this.Entities.account_profile.First(q => q.AccountID == userID);
        profile.Score += amount;
        this.Entities.SaveChanges();
        ts.Complete();
        return profile.Score;
    }
}
The transaction guarantees that the account_profile record will not be changed in the DB until you commit or roll back.
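As a side note on the stored-procedure idea from the question: the read-modify-write can also be collapsed into one atomic UPDATE statement, which sidesteps the race entirely because the database serializes concurrent updates to the same row. A minimal sketch using EF6's ExecuteSqlCommand (table and column names assumed from the question; @p0/@p1 are the positional-parameter convention ExecuteSqlCommand understands):
public long ChangeScoreAmount(int userID, long amount)
{
    // Single-statement increment: no application-side lock needed.
    this.Entities.Database.ExecuteSqlCommand(
        "UPDATE account_profile SET Score = Score + @p0 WHERE AccountID = @p1",
        amount, userID);

    // Re-read the new value (a separate query, so another increment may land in between).
    return this.Entities.account_profile
        .Where(q => q.AccountID == userID)
        .Select(q => q.Score)
        .First();
}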
I'm using HttpContext.Current.Cache to cache data from the DB (.NET 4 web application).
I want to make sure I don't run into any threading synchronization problems.
Scenario: 3 users pointing to the same Company object:
User A:
Profile.Company.Name = "CompX";
Profile.Company.Desc = "CompXDesc";
Profile.Company.Update(); //Update DB
User B:
String Name = Profile.Company.Name;
User C:
Profile.Company.Name = "CompY";
Profile.Company.Update(); //Update DB
Questions:
Does the Cache provide any type of locking?
Should I add locks like ReaderWriterLockSlim (and if so, how exactly)?
Existing Code:
ProfileBLL:
public CompanyBLL Company {
    get {
        return CompanyBLL.GetById(this.Company_ID);
    }
}
// HttpContext.Current.Cache
public static CompanyBLL GetById(int Company_ID) {
    string key = "GetById_" + Company_ID.ToString();
    CompanyBLL ret = null;
    if (Cache[key] != null) {
        ret = (CompanyBLL)Cache[key];
    }
    else {
        ret = DAL_Company<CompanyBLL>.GetById(Company_ID);
        Cache[key] = ret;
    }
    return ret;
}
Another option is to add a TransactionScope on any DB update:
User A:
using (TransactionScope Scope = new TransactionScope()) {
    Profile.Company.Name = "CompX";
    Profile.Company.Desc = "CompXDesc";
    Profile.Company.Update(); //Update DB
    Scope.Complete(); //COMMIT TRANS
}
User B:
String Name = Profile.Company.Name;
Will it solve any threading problem?
Thanks
You have nothing to worry about: the Cache class is thread safe.
If you're using SQL Server to store the cache, then SQL Server will lock the row as it's being written (under pessimistic concurrency, which is the default), so you won't have to worry about that either. Transactions aren't going to provide thread safety, but you should use them anyway when making changes that need to be consistent.
You can always add a lock around any "write" methods you have.
If you want to make sure that when any user calls a "read" method they get the absolute latest value, then put a lock around those methods as well, as in the sketch below.
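For the "how exactly" part of the ReaderWriterLockSlim question, a minimal sketch layered over the GetById method from the question (the lock instance is new here; Cache and DAL_Company are from the existing code):
// using System.Threading; // ReaderWriterLockSlim
private static readonly ReaderWriterLockSlim cacheLock = new ReaderWriterLockSlim();

public static CompanyBLL GetById(int Company_ID) {
    string key = "GetById_" + Company_ID.ToString();

    cacheLock.EnterReadLock();
    try {
        if (Cache[key] != null) {
            return (CompanyBLL)Cache[key];
        }
    }
    finally {
        cacheLock.ExitReadLock();
    }

    // Cache miss: take the write lock, re-check, then load from the DAL and cache.
    cacheLock.EnterWriteLock();
    try {
        if (Cache[key] == null) {
            Cache[key] = DAL_Company<CompanyBLL>.GetById(Company_ID);
        }
        return (CompanyBLL)Cache[key];
    }
    finally {
        cacheLock.ExitWriteLock();
    }
}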