I get the following error randomly when executing code in debug mode.
Cannot access a disposed object.
Object name: 'SqlDelegatedTransaction'.
The error is thrown after a few commands have executed, and it happens instantly, so it is not a timeout issue.
I have just one transaction, opened with
using (var scope = new TransactionScope(TransactionScopeOption.Required))
Multiple connections are opened with the same statement above in nested code.
I am using SQL Server 2008.
What could be wrong?
When you use TransactionScopeOption.Required, the transaction joins the ambient transaction.
One possible theory is:
If you exit the transaction scope without calling scope.Complete(), disposing the scope rolls back and disposes the ambient transaction. The next code that tries to run against the database will then fail.
Another would be problems with respect to active result sets:
Are you using SQL Server 2000, which does not support Multiple Active Result Sets (MARS)?
Does your connection string specify MultipleActiveResultSets=True?
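To make both checks concrete, here is a minimal sketch (server and database names are placeholders) of a MARS-enabled connection string together with a scope that is explicitly completed:

```csharp
// Sketch only: server/database names are placeholders.
var connStr = "Server=.;Database=MyDb;Integrated Security=true;MultipleActiveResultSets=True";

using (var scope = new TransactionScope(TransactionScopeOption.Required))
using (var conn = new SqlConnection(connStr))
{
    conn.Open();
    // ... execute commands here ...
    scope.Complete(); // without this, disposing the scope rolls back the ambient transaction
}
```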
I'm having trouble finding specific documentation around manually ending the session via .exit or .quit, or whether it's even possible with FromSqlRaw.
Problem Statement:
On connect and execution of a stored procedure a Teradata session is created. After destruction of the db context the session appears to live on for some unknown amount of time. Restarting the service clears it immediately.
Attempts to manually end the session via .exit fail.
Examples:
With dependency injection:
services.AddDbContext<TeradataContext>(options =>
    options.UseTeradata(Configuration.GetConnectionString("EDW"), opts => {
        opts.CommandTimeout(120);
    })
);
It's my understanding that connection pooling would not apply here, since the context is tied to the request lifecycle rather than added as a pool. But since garbage collection could take some time, I decided to try moving it into an explicit using block as a test:
using (var dbContext = new TeradataContext())
{
    int result = dbContext.Database.ExecuteSqlRaw("CALL .... ({0});", bar);
    dbContext.Database.CloseConnection();
}
The session on Teradata still persists and the next call fails as the stored procedure fails to create the already existing volatile table.
I tried adding:
dbContext.Database.ExecuteSqlRaw(".exit");
I get back:
[Teradata Database] [3706] Syntax error: expected something between ';' and '.'.
Does anyone know the correct way to call .exit here? Or any other way to force the close and end of session using Teradata Client 3.1, EF Core 3.1
dbContext.Database.CloseConnection();
But the session still persists so I don't believe their provider is performing a quit or exit.
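As far as I can tell, .exit is a BTEQ/interactive-client command rather than SQL, which would explain the 3706 syntax error. Since the pooled session seems to keep the volatile table alive, one workaround I'm sketching (the table name is a placeholder, and since Teradata has no DROP TABLE IF EXISTS the "table does not exist" error is swallowed):

```csharp
using (var dbContext = new TeradataContext())
{
    try
    {
        // If a pooled session still holds the volatile table,
        // drop it before calling the stored procedure again.
        dbContext.Database.ExecuteSqlRaw("DROP TABLE my_volatile_table;");
    }
    catch (Exception)
    {
        // Table didn't exist in this session; safe to ignore.
    }

    int result = dbContext.Database.ExecuteSqlRaw("CALL .... ({0});", bar);
}
```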
I am not very experienced with Teradata...
I was really hoping to use EF but I guess I could fall back to other options if we can't figure this out.
Thanks for any insights.
I'm working with EF 6.0 and SQL Server 2012 Express.
I save 10,000 records to the database using DbContext.DbSet.AddRange(IEnumerable) and SaveChanges().
I noticed that while SaveChanges() is running, SQL Server holds locks; other operations against SQL Server have to wait until the 10,000 records are saved.
In this scenario, I don't want SQL Server to block other work. I want to query data from another table, or read from the same table that is being updated.
What can I do to enable parallel access to SQL Server? Is that possible?
To prevent SaveChanges() from blocking the current thread, you can use SaveChangesAsync(). This operation occupies that instance of DbContext, but it does not block the current thread or the database itself. If you are using a new DbContext() per request, that should be enough. Otherwise you should use a new context for your long insert:
using (var ctx = new MyDbContext())
{
    // Add or update objects here
    ctx.MyDbSet.AddRange(largeList);
    await ctx.SaveChangesAsync(); // await so the context isn't disposed while the save is in flight
}
Remember that selecting from the table you are inserting into might yield unexpected results, depending on whether the select happens before or after the async insert finishes.
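If you also need readers not to block on the writer, one option is to run the reads under snapshot isolation. This is a sketch only: it assumes the database has snapshot isolation enabled (ALTER DATABASE ... SET ALLOW_SNAPSHOT_ISOLATION ON), and MyDbContext/MyDbSet are the placeholder names from the example above, with a hypothetical IsActive column:

```csharp
var readOptions = new TransactionOptions { IsolationLevel = IsolationLevel.Snapshot };

using (var ts = new TransactionScope(TransactionScopeOption.Required, readOptions))
using (var ctx = new MyDbContext())
{
    // Readers see the last committed row versions instead of
    // waiting on locks held by the bulk insert.
    var rows = ctx.MyDbSet.Where(x => x.IsActive).ToList();
    ts.Complete();
}
```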
I'm having a helluva time wrapping a couple of transactions against 2 different databases on the same SQL Server. I initially had trouble with network DTC access, and I resolved that. Now the error I continue to get is "Communication with the underlying transaction manager has failed."
We have some customer profiles in a database, and when these profiles become outdated we want to move them to an 'archive' database for storage. The move is *simply* (italics for humor) adding them to the archive database and deleting them from the main/live database. I have a DataContext for each database. The code below performs the Add, then gets the error on the Delete when trying to use the second DataContext. I've only been working with LINQ for a few months, and I've scoured articles for the past couple of days. I'd like to know if anything is wrong with my code, or if there is still something not configured properly with the DTC, or ???
We're running on VMware for my workstation and the server.
- Workstation is Windows 7 SP1
- Server is Windows and SQL Server 2008R2
Routine for the 'Move':
private int MoveProfileToArchiveDB( int iProfileId )
{
    int rc = RC.UnknownError;

    // get new Archive profile object
    ProfileArchive.ProfileInfo piArchive = new ProfileArchive.ProfileInfo();

    // 'Live' DataContext
    using ( ProfileDataContext dbLive = new ProfileDataContext() )
    {
        // get Live profile
        ProfileInfo piLive = ProfileInfo.GetProfile( dbLive, iProfileId );

        // copy Live data to Archive profile object... including the id
        ProfileArchive.ProfileInfo.CopyFromLive( piLive, piArchive, true );
    }

    bool bArchiveProfileExists = ProfileArchive.ProfileInfo.ProfileExists( piArchive.id );

    // make the move a transaction...
    using ( TransactionScope ts = new TransactionScope() )
    {
        // Add/Update to Archive db
        using ( ProfileArchiveDataContext dbArchive = new ProfileArchiveDataContext() )
        {
            // if this profile already exists in the Archive db...
            if ( bArchiveProfileExists )
            {
                // update the personal profile in Archive db
                rc = ProfileArchive.ProfileInfo.UpdateProfile( dbArchive, piArchive );
            }
            else
            {
                // add this personal profile to the archive db
                int iArchiveId = 0;
                piArchive.ArchiveDate = DateTime.Now;
                rc = ProfileArchive.ProfileInfo.AddProfile( dbArchive, piArchive, ref iArchiveId );
            }

            // if Add/Update was successful...
            if ( rc == RC.Success )
            {
                // Delete from the Live db
                using ( ProfileDataContext dbLive = new ProfileDataContext() )
                {
                    // delete the personal profile from the Profile DB
                    rc = ProfileInfo.DeleteProfileExecCmd( dbLive, iProfileId ); // *** ERROR HERE ***
                    if ( rc == RC.Success )
                    {
                        // Transaction End (completed)
                        ts.Complete();
                    }
                }
            }
        }
    }
    return rc;
}
NOTES:
I have a few different methods for the Delete and they all work outside the TransactionScope.
ProfileInfo is the main profile table and is roughly the same for both Live and Archive databases.
Any help is greatly appreciated! Thanks much...
Rather than continue the criss-cross of comments, I decided to post this as an answer instead.
Don't use error codes; that's what exceptions are for. Error-code returns make the control flow harder to read and invite being ignored. Exceptions make the code easier to read and far less error prone.
If you use a TransactionScope, remember to always set the isolation level explicitly. See using new TransactionScope() Considered Harmful. The implicit isolation level of SERIALIZABLE is almost never called for and has tremendous negative scale impact.
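For example, a scope with the isolation level set explicitly looks like this (the work inside is elided):

```csharp
var options = new TransactionOptions
{
    IsolationLevel = IsolationLevel.ReadCommitted,
    Timeout = TransactionManager.DefaultTimeout
};

using (var ts = new TransactionScope(TransactionScopeOption.Required, options))
{
    // ... database work here ...
    ts.Complete();
}
```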
Transaction escalation. Whenever multiple connections are opened inside a transaction scope, they can escalate the transaction to a distributed transaction. The behavior differs from version to version; some have tried to document it, e.g. TransactionScope: transaction escalation behavior:
SQL Server 2008 is much more intelligent than SQL Server 2005 and can automatically detect if all the database connections in a certain transaction point to the same physical database. If this is the case, the transaction remains a local transaction and it is not escalated to a distributed transaction. Unfortunately there are a few caveats:
- If the open database connections are nested, the transaction is still escalated to a distributed transaction.
- If, in the transaction, a connection is made to another durable resource, the transaction is immediately escalated to a distributed transaction.
Since your connections (from the two data contexts used) point to different databases, even on SQL Server 2008 your TransactionScope will escalate to a distributed transaction.
Enlisting your application into DTC is harmful in at least two ways:
Throughput will sink through the floor. A database can support a few thousand local transactions per second, but only tens (maybe low hundreds) of distributed transactions per second, primarily because of the complexity of two-phase commit.
DTC requires a coordinator: MSDTC. The security enhancements made to MSDTC make configuration more challenging, and it certainly is unexpected for devs to discover that MSDTC is required by their app. The steps described in the article linked are probably what you're missing right now. For Windows Vista/Windows 7/Windows Server 2008/Windows Server 2008 R2, the steps are described in MSDTC in Windows Vista and Windows Server 2008, in How to configure DTC on Windows 2008, and in other similar articles.
Now if you fix MSDTC communication following the articles mentioned above, your code should be working, but I still believe this archiving should not occur in the client code running EF. There are far better tools, SSIS being a prime example. A nightly scheduled job running SSIS would transfer those unused profiles far more efficiently.
Recently our QA team reported a very interesting bug in one of our applications. Our application is a C# .Net 3.5 SP1 based application interacting with a SQL Server 2005 Express Edition database.
By design, the application detects database-offline scenarios and, if the database is down, waits until it is online again (by retrying the connection periodically), then reconnects and resumes functionality.
What our QA team did was, while the application is retrieving a bulk of data from the database, stop the database server, wait for a while and restart the database. Once the database restarts the application reconnects to the database without any issues but it started to continuously report the exception "Could not find prepared statement with handle x" (x is some number).
Our application is using prepared statements and it is already designed to call the Prepare() method again on all the SqlCommand objects when the application reconnects to the database. For example,
At application startup,
SqlCommand _commandA = connection.CreateCommand();
_commandA.CommandText = @"SELECT COMPANYNAME FROM TBCOMPANY WHERE ID = @ID";
_commandA.CommandType = CommandType.Text;

SqlParameter _paramA = _commandA.CreateParameter();
_paramA.ParameterName = "@ID";
_paramA.SqlDbType = SqlDbType.Int;
_paramA.Direction = ParameterDirection.Input;
_paramA.Size = 0;
_commandA.Parameters.Add(_paramA);

_commandA.Prepare();
After that, we call ExecuteReader() on this _commandA with a different @ID parameter value in each cycle of the application.
Once the application detects the database going offline and coming back online, upon reconnect to the database the application only executes,
_commandA.Prepare();
Two more strange things we noticed.
1. The above situation only happens with CommandType.Text commands. Our application uses the exact same logic to invoke stored procedures, but we never see this issue with stored procedures.
2. Up to now we have been unable to reproduce this issue, no matter how many different ways we try, in Debug mode in Visual Studio.
Thanks in advance..
With almost 3 days since asking the question, close to 20 views, and 1 answer, I have to conclude that this is not a scenario we can handle the way we tried with SQL Server.
The best way to mitigate this issue in your application is to re-create the SqlCommand object instance again once the application detects that the database is online.
We did the change in our application and our QA team is happy about this modification since it provided the best (or maybe the only) fix for the issue they reported.
A final thanks to everyone who viewed and answered the question.
The server caches the query plan when you call command.Prepare(). The error indicates that it cannot find this cached plan when you invoke Prepare() again. Try creating a new SqlCommand instance and invoking the query on it. I've experienced this exception before, and it fixes itself when the server refreshes the cache. I doubt there is anything that can be done programmatically on the client side to fix this.
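A sketch of what that looks like on reconnect, reusing the names from the question (the helper method itself is hypothetical):

```csharp
// Hypothetical helper: called after the application reconnects,
// instead of calling Prepare() on the stale command instance.
private SqlCommand RebuildCommandA(SqlConnection connection)
{
    SqlCommand cmd = connection.CreateCommand();
    cmd.CommandText = @"SELECT COMPANYNAME FROM TBCOMPANY WHERE ID = @ID";
    cmd.CommandType = CommandType.Text;

    SqlParameter p = cmd.CreateParameter();
    p.ParameterName = "@ID";
    p.SqlDbType = SqlDbType.Int;
    p.Direction = ParameterDirection.Input;
    cmd.Parameters.Add(p);

    cmd.Prepare(); // prepares against the fresh connection, creating a new handle
    return cmd;
}
```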
This is not necessarily related exactly to your problem but I'm posting this as I have spent a couple of days trying to fix the same error message in my application. We have a Java application using a C3P0 connection pool, JTDS driver, connecting to a SQL Server database.
We had disabled statement caching in our C3P0 connection pool, but had not done this at the driver level. Adding maxStatements=0 to our connection URL stopped the driver from caching statements and fixed the error.
Either DotNetNuke's UserController.GetUser(PortalId, UserId, false) or UserController.ValidateUser(...) inside a TransactionScope is causing a TransactionAbortedException whose inner exception is a TransactionPromotionException. The symptoms are the same as this.
Could anyone suggest a solution to this issue?
Thanks a lot!
using (System.Transactions.TransactionScope ts = new System.Transactions.TransactionScope())
{
    DotNetNuke.Entities.Users.UserInfo ui = DotNetNuke.Entities.Users.UserController.GetUser(PortalId, UserId, false);
    ts.Complete();
}
By default, DotNetNuke uses the ASP.NET 2.0 membership provider. As you pointed out, Membership.GetUser() opens another database connection, which causes the exception inside the TransactionScope.
If you want to use GetUser() inside TransactionScope, you'll either have to enable MSDTC or use SQL Server 2008. SQL 2008 allows multiple connections within a single TransactionScope, if the connections are to the same DBMS and are not open at the same time.
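To illustrate the "not open at the same time" rule, here is a sketch (connStr is a placeholder) of sequential connections to the same database, which stay a local transaction on SQL Server 2008; if the two connections overlapped, the transaction would escalate to MSDTC:

```csharp
using (var ts = new TransactionScope())
{
    using (var c1 = new SqlConnection(connStr))
    {
        c1.Open();
        // ... work on c1 ...
    } // c1 is closed before c2 opens, so no escalation on SQL Server 2008

    using (var c2 = new SqlConnection(connStr))
    {
        c2.Open();
        // ... work on c2 (same database) ...
    }

    ts.Complete();
}
```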
See Also:
TransactionScope automatically escalating to MSDTC on some machines?