Let's assume that a DB in MongoDB contains 5 documents, and I want to insert 10 new documents.
The transaction starts, and MongoDB gradually updates the count of documents present in the DB, but the documents aren't actually inserted into the DB until the commit occurs.
Let's assume that the application crashes after inserting the fifth document (the user kills the application from Task Manager, or the power goes out): the transaction isn't aborted, and my DB contains only the 5 initial documents, but the document count has been updated to 10 (I could observe this by inserting Thread.Sleep before each insert).
Right after this, if I try to insert new documents into the DB, a MongoCommandException is returned:
WriteConflict error: this operation conflicted with another operation.
Please retry your operation or multi-document transaction.
I temporarily solved it with a workaround: I create a file that is deleted when the transaction is aborted or committed. If the file is still present at startup, the application terminated unexpectedly and the transaction didn't complete successfully, so I restore the DB from a mongodump backup.
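The retry that the error message asks for can also be sketched generically. This is a minimal, driver-agnostic sketch: WriteConflictException here is a hypothetical stand-in for the driver's MongoCommandException, and note that the real C# driver can also retry transient transaction errors for you via session.WithTransaction:

```csharp
using System;
using System.Threading;

// Hypothetical stand-in for the driver exception; the real one is
// MongoDB.Driver.MongoCommandException with a WriteConflict error code.
public class WriteConflictException : Exception
{
    public WriteConflictException(string message) : base(message) { }
}

public static class RetryHelper
{
    // Retries an operation up to maxAttempts times when a write-conflict
    // style error is thrown, backing off briefly between attempts.
    public static T WithRetry<T>(Func<T> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return operation();
            }
            catch (WriteConflictException) when (attempt < maxAttempts)
            {
                Thread.Sleep(100 * attempt); // simple linear backoff
            }
        }
    }
}
```

A retry like this only papers over the symptom, though; the orphaned transaction still has to time out server-side before the writes stop conflicting.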
I use:
MongoDB 5.0.8 Server (replica-set)
MongoDB Compass for the GUI
MongoDB Tools 100.5.2
C# .NET 6.0
Thank you guys! :)
The following error is intermittently thrown when attempting to add or update a document: "Microsoft.Isam.Esent.Interop.EsentOutOfLongValueIDsException: Long-value ID counter has reached maximum value. (perform offline defrag to reclaim free/unused LongValueIDs)"
I've attempted to perform this offline defrag according to
https://ravendb.net/docs/article-page/3.5/csharp/users-issues/recovering-from-esent-errors. I stopped the RavenDB service, navigated to the Databases folder in an Administrator command prompt, and ran "esentutl /d DatabaseName". I then get the following error:
"Access to source database 'DatabaseName' failed with Jet error -1032.
Operation terminated with error -1032 after 20.31 seconds."
I have also tried to restart the server with RavenDB not set to start on start-up. I still get error -1032 when attempting to defrag.
Is performing the defrag operation the correct action? If so, what process(es) would I need to stop in order for those files to not be in use?
Thanks!
The solution was to run Compact on Raven: Raven Studio > Manage Your Server > Compact. Compacting takes the database down, so I performed it on the replicated servers one at a time.
We recently upgraded Informix from 11.7 to 12.10. Since then, the delete functionality is not working as expected. Here is an example of the query we are using:
SET LOCK MODE TO WAIT 5;
BEGIN WORK;
DELETE FROM DOCUMENT WHERE DOCUMENT_ID = 1000;
COMMIT WORK;
The query above doesn't delete any records, and in some cases the tables get locked. The same statements worked without any issues before the upgrade.
Another interesting observation: the query below works without any issues (SET LOCK MODE is not used here):
BEGIN WORK;
DELETE FROM DOCUMENT WHERE DOCUMENT_ID = 1000;
COMMIT WORK;
We are using IBM Data Server Client 10.5.3 to connect to the Informix server from C#.
I recently started working on a Microsoft Sync Framework 2.1 based project that was already developed. The requirement is simple: sync a DB (server to client and client to server). But I very frequently get an error stating that blah_blah_selectchanges (a stored procedure) already exists. Once I delete all the mentioned procedures it works fine, but when I try from another machine the error comes back, and now I have no idea how to overcome it. I did some research and found additional tables created in the sync DB by the provisioning process: Products_Tracking, schema_info, scope_config, and scope_info. There are also other database objects, such as triggers and stored procedures, created by provisioning. My doubt is: if those additional tables/procedures/triggers already exist in the sync schema, why is provisioning trying to create them again?
Check how you're provisioning. Are you checking whether the scope already exists? If there are existing scopes for the table and you want to add a new scope, you should specify SetCreateProceduresForAdditionalScopeDefault.
I'm working with EF 6.0 and SQL Server 2012 Express.
I save 10,000 records to database using DbContext.DbSet.AddRange(IEnumerable) and SaveChanges().
I noticed that when SaveChanges() is called, SQL Server holds the connection, and other operations against SQL Server have to wait until the 10,000 records are saved.
In this scenario, I don't want SQL Server to block other work on the connection. I want to query data from another table, or read from the same table that is being updated.
What can I do to enable parallel operations in SQL Server? Is that even possible?
To prevent SaveChanges() from blocking the current thread you can use SaveChangesAsync(). This operation will keep that instance of DbContext busy, but it will not block the current thread or the database itself. If you are using a new DbContext() per request, that should be enough. Otherwise, use a new context for your long insert:
// inside an async method:
using (var ctx = new MyDbContext())
{
    // add or update objects here
    ctx.MyDbSet.AddRange(largeList);
    // await, so the context is not disposed before the save completes
    await ctx.SaveChangesAsync();
}
Remember that selecting from the table you are inserting into might yield unexpected results, depending on whether the select happens before or after the async insert finishes.
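One more caveat: if SaveChangesAsync() is not awaited before the using block ends, the context is disposed while the save is still running. That disposal race can be sketched without EF, using a hypothetical stand-in for DbContext (all names here are mine, not EF's):

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical stand-in for DbContext, to illustrate why the async save
// must be awaited before the using block disposes the context.
public class FakeContext : IDisposable
{
    public bool Disposed { get; private set; }

    public async Task<int> SaveChangesAsync()
    {
        await Task.Delay(50); // simulate the long-running insert
        if (Disposed)
            throw new ObjectDisposedException(nameof(FakeContext));
        return 1;
    }

    public void Dispose() => Disposed = true;
}

public static class Demo
{
    // Fire-and-forget: the context is disposed before the save finishes,
    // so the pending task faults.
    public static async Task<bool> FireAndForgetFaults()
    {
        Task<int> pending;
        using (var ctx = new FakeContext())
        {
            pending = ctx.SaveChangesAsync(); // not awaited
        } // Dispose runs here, before the simulated save completes

        try { await pending; return false; }
        catch (ObjectDisposedException) { return true; }
    }

    // Awaiting inside the using block keeps the context alive until the
    // save has completed.
    public static async Task<int> AwaitedSucceeds()
    {
        using (var ctx = new FakeContext())
        {
            return await ctx.SaveChangesAsync();
        }
    }
}
```

The real EF DbContext fails in a similar way (typically an ObjectDisposedException or InvalidOperationException) when it is disposed mid-save.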
I'm having a helluva time wrapping a couple of transactions against two different databases on the same SQL Server. I initially had trouble with network DTC access, and I resolved that. Now the error I continue to get is "Communication with the underlying transaction manager has failed."
We have some customer profiles in a database and when these profiles become outdated we want to move them to an 'archive' database for storage. The move is simply (italics for humor) adding them to the archive database and deleting them from the main/live database. I have a DataContext for each database. The code below performs the Add and then gets the error on the Delete when trying to use the second DataContext. I've only been working with LINQ for a few months and I've scoured articles for the past couple of days. I'd like to know if anything is wrong with my code or if there is still something not configured properly with the DTC or ???
We're running on VMware for my workstation and the server.
- Workstation is Windows 7 SP1
- Server is Windows and SQL Server 2008R2
Routine for the 'Move':
private int MoveProfileToArchiveDB( int iProfileId )
{
    int rc = RC.UnknownError;

    // get new Archive profile object
    ProfileArchive.ProfileInfo piArchive = new ProfileArchive.ProfileInfo();

    // 'Live' DataContext
    using ( ProfileDataContext dbLive = new ProfileDataContext() )
    {
        // get Live profile
        ProfileInfo piLive = ProfileInfo.GetProfile( dbLive, iProfileId );
        // copy Live data to Archive profile object... including the id
        ProfileArchive.ProfileInfo.CopyFromLive( piLive, piArchive, true );
    }

    bool bArchiveProfileExists = ProfileArchive.ProfileInfo.ProfileExists( piArchive.id );

    // make the move a transaction...
    using ( TransactionScope ts = new TransactionScope() )
    {
        // Add/Update to Archive db
        using ( ProfileArchiveDataContext dbArchive = new ProfileArchiveDataContext() )
        {
            // if this profile already exists in the Archive db...
            if ( bArchiveProfileExists )
            {
                // update the personal profile in Archive db
                rc = ProfileArchive.ProfileInfo.UpdateProfile( dbArchive, piArchive );
            }
            else
            {
                // add this personal profile to the archive db
                int iArchiveId = 0;
                piArchive.ArchiveDate = DateTime.Now;
                rc = ProfileArchive.ProfileInfo.AddProfile( dbArchive, piArchive, ref iArchiveId );
            }

            // if Add/Update was successful...
            if ( rc == RC.Success )
            {
                // Delete from the Live db
                using ( ProfileDataContext dbLive = new ProfileDataContext() )
                {
                    // delete the personal profile from the Profile DB
                    rc = ProfileInfo.DeleteProfileExecCmd( dbLive, iProfileId ); // *** ERROR HERE ***
                    if ( rc == RC.Success )
                    {
                        // Transaction End (completed)
                        ts.Complete();
                    }
                }
            }
        }
    }

    return rc;
}
NOTES:
I have a few different methods for the Delete and they all work outside the TransactionScope.
ProfileInfo is the main profile table and is roughly the same for both Live and Archive databases.
Any help is greatly appreciated! Thanks much...
Rather than continue the criss-cross of comments, I decided to post this as an answer instead.
Don't use error codes; that's what exceptions are for. Error-code returns make the code flow more difficult to read and invite being ignored. Exceptions make the code easier to read and far less error prone.
If you use a TransactionScope, remember to always set the isolation level explicitly. See using new TransactionScope() Considered Harmful. The implicit isolation level of SERIALIZABLE is almost never called for and has a tremendous negative impact on scalability.
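For illustration, setting the isolation level explicitly looks like this (plain System.Transactions; the factory method and its name are mine, not from the question):

```csharp
using System;
using System.Transactions;

public static class ScopeFactory
{
    // Creates a TransactionScope with an explicit isolation level
    // instead of relying on the implicit SERIALIZABLE default.
    public static TransactionScope CreateReadCommittedScope()
    {
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TransactionManager.DefaultTimeout
        };
        return new TransactionScope(TransactionScopeOption.Required, options);
    }
}
```

Inside such a scope, Transaction.Current.IsolationLevel reports ReadCommitted rather than Serializable, and every connection you open enlists at that level.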
Transaction escalation: whenever multiple connections are opened inside a transaction scope, they can escalate the transaction to a distributed transaction. The behavior differs from version to version; some have tried to document it, e.g. TransactionScope: transaction escalation behavior:
SQL Server 2008 is much more intelligent than SQL Server 2005 and can automatically detect if all the database connections in a certain transaction point to the same physical database. If this is the case, the transaction remains a local transaction and it is not escalated to a distributed transaction. Unfortunately there are a few caveats:
- If the open database connections are nested, the transaction is still escalated to a distributed transaction.
- If, in the transaction, a connection is made to another durable resource, the transaction is immediately escalated to a distributed transaction.
Since your connections (from the two data contexts used) point to different databases, even on SQL Server 2008 your TransactionScope will escalate to a distributed transaction.
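You can observe whether a scope has escalated: as long as the transaction is still local, Transaction.Current.TransactionInformation.DistributedIdentifier stays Guid.Empty; MSDTC assigns a non-empty identifier on escalation. A small sketch (the helper name is mine; with no durable resources enlisted at all, the transaction trivially remains local):

```csharp
using System;
using System.Transactions;

public static class EscalationCheck
{
    // Returns true while the ambient transaction is still a local
    // (non-distributed) transaction; a transaction escalated to MSDTC
    // gets a non-empty distributed identifier.
    public static bool IsLocal()
    {
        var info = Transaction.Current.TransactionInformation;
        return info.DistributedIdentifier == Guid.Empty;
    }
}
```

Calling EscalationCheck.IsLocal() right after opening your second DataContext's connection is a quick way to confirm exactly where the escalation happens.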
Enlisting your application into DTC is harmful in at least two ways:
- Throughput will sink through the floor. A database can support a few thousand local transactions per second, but only tens (maybe low hundreds) of distributed transactions per second, primarily because of the complexity of two-phase commit.
- DTC requires a coordinator: MSDTC. The security enhancements made to MSDTC make configuration more challenging, and it is certainly unexpected for devs to discover that MSDTC is required by their app. The steps described in the article linked are probably what you're missing right now. For Windows Vista/Windows 7/Windows Server 2008/Windows Server 2008 R2, the steps are described in MSDTC in Windows Vista and Windows Server 2008, in How to configure DTC on Windows 2008, and in other similar articles.
Now, if you fix the MSDTC communication by following the articles mentioned above, your code should work. But I still believe this archiving should not occur in client code running EF. There are far better tools; SSIS is a prime example. A nightly scheduled job running SSIS would transfer those unused profiles far more efficiently.