I am developing a .NET web application with C# and SQL Server as the database, and I am a newbie with both technologies. I have a problem when I attempt to save a lot of information at the same time.
My code is simple:
SqlConnection cn = null;
try
{
    using (TransactionScope scope = new TransactionScope())
    {
        using (cn = Util.getConnection())
        {
            cn.Open();
            BusinessObject.saveObject1(param1, cn);
            BusinessObject.saveObject2(param2, cn);
            BusinessObject.saveObject3(param3, cn);
            BusinessObject.saveObject4(param4, cn);
            ...
            scope.Complete();
        }
    }
}
catch (Exception ex)
{
    // handle/log the error; leaving the scope without Complete() rolls everything back
}
I always use the same connection object because if an error happens I must roll back every change to my data, and if the process succeeds everything must be committed together.
I don't mind if the saving process takes a long time; in my application that is perfectly normal because I have to save a lot of information.
The weirdness here is:
When I execute this function on the same local area network as the database, it works perfectly.
However, if I execute it from outside, for example connected through a VPN and consequently with higher latency, I get an error with the message: "The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements."
I have tried to change the timeout of the database through the connection string in the web.config, but it didn't resolve anything. Also, if I call cn.Dispose() after each ExecuteNonQuery() statement, I get an error on the next attempt to use the connection object.
Do I have to change the parameters of TransactionScope object? Is there another better way to do this?
Thanks in advance.
This is due to a timeout: the TransactionScope may decide to roll back the transaction when its timeout elapses.
You need to increase the timeout value in the TransactionScope constructor. The default maximum timeout is 10 minutes.
Move scope.Complete(); outside the connection block (MSDN article).
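Combining both suggestions, a minimal sketch (it reuses Util.getConnection and the saveObject calls from the question; note that whatever timeout you pass is still capped by the machine-wide maximum, which defaults to 10 minutes):
using (var scope = new TransactionScope(TransactionScopeOption.Required,
                                        TimeSpan.FromMinutes(10)))
{
    using (var cn = Util.getConnection())
    {
        cn.Open();
        BusinessObject.saveObject1(param1, cn);
        BusinessObject.saveObject2(param2, cn);
        // ... remaining save calls ...
    }
    // Complete() is called only after the connection block has closed.
    scope.Complete();
}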
Did you try setting Timeout = 0 in your Util.getConnection() function? A value of 0 indicates no limit, so long-running operations are not cut off by the connection timeout.
Related
I'm trying to kill a query triggered by an ADO.NET command on a PostgreSQL database after the command timeout:
using (var command = new PgSqlCommand("public.timeout_test", connection))
{
    command.CommandTimeout = 10;
    command.CommandType = CommandType.StoredProcedure;
    connection.Open();
    command.ExecuteNonQuery();
}
In the .NET code the timeout exception is thrown correctly, but I wonder why the query triggered by the timeout_test function is still in an active state. If I run the query below, the query executed by timeout_test is listed as active:
SELECT * FROM pg_stat_activity where state = 'active';
I tried to test it with both the Devart and Npgsql connectors, and both behave the same way, so I assume it's intended behavior, but I don't understand the reason. I also wanted to ask if there is a way to kill the query after the command timeout.
At least in Npgsql, CommandTimeout is implemented as a client-side socket timeout: after a certain amount of time Npgsql will simply close the network socket on its side. This doesn't cancel the query server-side, which will continue running.
You can set the PostgreSQL statement_timeout parameter to have it kill queries running more than a given amount of time; for best results, set statement_timeout to something lower than CommandTimeout - this will ensure that server timeout occurs before client timeout, preserving the connection and transmitting the server timeout to the client as a regular exception.
Another option is to manually trigger a cancellation from the client by calling NpgsqlCommand.Cancel(). You can do this whenever you want (e.g. when the user clicks a button), but contrary to statement_timeout it will obviously work only if the network is up.
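A minimal sketch of the statement_timeout approach, assuming Npgsql (the connection string, the function name, and the 8-second value are placeholders):
using System;
using Npgsql;

using (var connection = new NpgsqlConnection("Host=myserver;Database=mydb;Username=me;Password=secret"))
{
    connection.Open();
    // Server-side limit, deliberately lower than the client-side CommandTimeout below.
    using (var set = new NpgsqlCommand("SET statement_timeout = 8000", connection)) // 8 s, in milliseconds
    {
        set.ExecuteNonQuery();
    }
    using (var command = new NpgsqlCommand("SELECT public.timeout_test()", connection))
    {
        command.CommandTimeout = 10; // client-side timeout stays higher
        try
        {
            command.ExecuteNonQuery();
        }
        catch (PostgresException ex) when (ex.SqlState == "57014") // query_canceled
        {
            // The server killed the statement; the connection itself remains usable.
            Console.WriteLine("Query cancelled server-side: " + ex.Message);
        }
    }
}
With this arrangement the long-running call fails with a regular server exception instead of a dead socket, which matches the behaviour described above.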
I am using the SqlConnection class and running into problems with command time outs expiring.
First off, I am setting a command timeout via the SqlCommand.CommandTimeout property like so:
command.CommandTimeout = 300;
Also, I have set the Execution Timeout setting to 0 to ensure that there should be no timeouts on the SQL Server Management Studio side of things.
Here is my code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    SqlCommand command = conn.CreateCommand();
    var transaction = conn.BeginTransaction("CourseLookupTransaction");
    command.Connection = conn;
    command.Transaction = transaction;
    command.CommandTimeout = 300;

    try
    {
        command.CommandText = "TRUNCATE TABLE courses";
        command.ExecuteNonQuery();

        List<Course> courses = CourseHelper.GetAllCourses();
        foreach (Course course in courses)
        {
            CourseHelper.InsertCourseLookupRecord(course);
        }

        transaction.Commit();
    }
    catch (Exception ex)
    {
        transaction.Rollback();
        Log.Error(string.Format("Unable to reload course lookup table: {0}", ex.Message));
    }
}
I have set up logging and can verify exactly 30 seconds after firing off this function, I receive the following error message in my stack trace:
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
In the interest of full disclosure: InsertCourseLookupRecord(), called inside the foreach in the using statement above, performs another query against the same table in the same database. Here is the query it runs:
INSERT INTO courses(courseid, contentid, name, code, description, url, metakeywords, metadescription)
VALUES(@courseid, @contentid, @name, @code, @description, @url, @metakeywords, @metadescription)
There are over 1,400 records in this table.
I will certify any individual(s) that helps me solve this as a most supreme grand wizard.
I believe what is happening is that you have a deadlock situation that is causing your query in the InsertCourseLookupRecord() function to fail. You are not passing your connection to InsertCourseLookupRecord() so I am assuming you are running that in a separate connection. So what happens is:
You started a transaction.
You truncate the table.
InsertCourseLookupRecord starts another connection and tries to insert data into that table, but the table is locked because your transaction isn't committed yet.
The connection in the function InsertCourseLookupRecord() times out at the 30-second timeout defined for that connection.
You could change the function to accept the command object as a parameter and use it inside the function instead of creating a new connection. This will then become part of the transaction and will all be committed together.
To do this change your function definition to:
public static int InsertCourseLookupRecord(string course, SqlCommand cmd)
Take all the connection code out of the function because you're going to use the cmd object. Then when you're ready to execute your query:
cmd.Parameters.Clear(); // needed only if you're using command parameters
cmd.CommandText = "INSERT BLAH BLAH BLAH";
cmd.ExecuteNonQuery();
It will run under the same connection and transaction context.
You call it like this in your using block:
CourseHelper.InsertCourseLookupRecord(course, command);
You could also just take the code in InsertCourseLookupRecord, put it inside the foreach loop instead, and reuse the command object in your using block without needing to pass it to a function at all.
Because you are using two separate SqlConnection objects you are deadlocking yourself due to the SqlTransaction you started in your outer code. The queries in InsertCourseLookupRecord, and maybe in GetAllCourses, get blocked by the TRUNCATE TABLE courses call which has not yet been committed. They wait for the truncate to be committed, up to their command timeout, and then time out.
You have a few options.
Pass the SqlConnection and SqlTransaction into GetAllCourses and InsertCourseLookupRecord so they can be part of the same transaction (see the sketch below).
Use a "ambient transaction" by getting rid of the SqlTransaction and using a System.Transaction.TransactionScope instead. This causes all connections that are opened to the server all share a transaction. This can cause maintenance issues as depending on what the queries are doing it as it may need to invoke the Distributed Transaction Coordinator which may be disabled on some computers (from the looks of what you showed you would need the DTC as you have two open connections at the same time).
The best option is to try to change your code to do option 1; if you can't, do option 2.
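A rough sketch of option 1; the overloads of GetAllCourses and InsertCourseLookupRecord that accept a connection and transaction are hypothetical, not part of the original code:
using (SqlConnection conn = new SqlConnection(connection))
{
    conn.Open();
    using (SqlTransaction transaction = conn.BeginTransaction("CourseLookupTransaction"))
    {
        try
        {
            using (SqlCommand command = conn.CreateCommand())
            {
                command.Transaction = transaction;
                command.CommandTimeout = 300;
                command.CommandText = "TRUNCATE TABLE courses";
                command.ExecuteNonQuery();
            }

            // Hypothetical overloads that reuse the same connection and transaction,
            // so every statement enlists in the one local transaction.
            List<Course> courses = CourseHelper.GetAllCourses(conn, transaction);
            foreach (Course course in courses)
            {
                CourseHelper.InsertCourseLookupRecord(course, conn, transaction);
            }

            transaction.Commit();
        }
        catch
        {
            transaction.Rollback();
            throw;
        }
    }
}
Because only one connection is involved, nothing blocks on the uncommitted TRUNCATE and the DTC is never needed.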
Taken from the documentation:
CommandTimeout has no effect when the command is executed against a context connection (a SqlConnection opened with "context connection=true" in the connection string)
Please review your connection string; that's the only possibility I can think of.
I am using the AdomdConnection class to connect to the cube, with the following code.
using (var conn = new AdomdConnection(ConnString))
{
    conn.Open();
    var cube = conn.Cubes[name];
    //Do something
    conn.Close();
}
The AdomdConnection.ConnectionTimeout property does not have a setter.
The default value of the ConnectionTimeout property is 0, which means an infinite wait.
I have two questions:
Is there any way to set the timeout property for AdomdConnection?
When the cube is busy and you try to run the program, the call to conn.Open() never returns and the next line of code is never executed. In such cases the application becomes unresponsive and no exception is thrown. How can I inform the user about such scenarios and make a log entry?
I looked into this similar thread but did not find it useful.
Thank you
The documentation states this for AdomdConnection.ConnectionTimeout
Gets the time to wait for a connection to be established before the
AdomdConnection stops trying to connect and generates an error.
So that timeout only applies to establishing the connection to the server.
If you want a timeout when you're running an actual command, use the AdomdCommand.CommandTimeout property.
Gets or sets the time to wait for a command to run before the
AdomdCommand stops trying to run the command and generates an error.
Both can be set with the connection string.
http://msdn.microsoft.com/en-us/library/microsoft.analysisservices.adomdclient.adomdconnection.connectionstring.aspx
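For illustration, a hedged sketch of both settings; the server name, catalog, and the exact connection string keywords (Connect Timeout for opening, Timeout for command execution) should be checked against the linked documentation:
using Microsoft.AnalysisServices.AdomdClient;

// "Connect Timeout" limits how long Open() may wait; "Timeout" limits command execution.
var connString = "Data Source=myServer;Catalog=MyCube;Connect Timeout=30;Timeout=120";
using (var conn = new AdomdConnection(connString))
{
    conn.Open();
    using (var cmd = new AdomdCommand("SELECT FROM [MyCube]", conn))
    {
        cmd.CommandTimeout = 120; // per-command override, in seconds
        using (var reader = cmd.ExecuteReader())
        {
            // read results
        }
    }
}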
I'm getting this error (Distributed transaction completed. Either enlist this session in a new transaction or the NULL transaction.) when trying to run a stored procedure from C# on a SQL Server 2005 database. I'm not actively or purposefully using transactions or anything, which is what makes this error weird. I can run the stored procedure from Management Studio and it works fine. Other stored procedures also work from C#; it just seems to be this one with issues. The error returns instantly, so it can't be a timeout issue. The code is along the lines of:
SqlCommand cmd = null;
try
{
    // Make sure we are connected to the database
    if (_DBManager.CheckConnection())
    {
        cmd = new SqlCommand();
        lock (_DBManager.SqlConnection)
        {
            cmd.CommandText = "storedproc";
            cmd.CommandType = System.Data.CommandType.StoredProcedure;
            cmd.Connection = _DBManager.SqlConnection;
            cmd.Parameters.AddWithValue("@param", value);
            int affRows = cmd.ExecuteNonQuery();
            ...
        }
    }
    else
    {
        ...
    }
}
catch (Exception ex)
{
    ...
}
It's really got me stumped. Thanks for any help.
It sounds like there is a TransactionScope somewhere that is unhappy. The _DBManager.CheckConnection and _DBManager.SqlConnection calls sound like you are keeping a SqlConnection hanging around, which I expect is contributing to this.
To be honest, in most common cases you are better off just using the inbuilt connection pooling, and using your connections locally - i.e.
using (var conn = new SqlConnection(...)) // or a factory method
{
    // use it here only
}
Here you get a clean SqlConnection, which will be mapped to an unmanaged connection via the pool, i.e. it doesn't create an actual connection each time (but will do a logical reset to clean it up).
This also allows much more flexible use from multiple threads. Using a static connection in a web app, for example, would be horrendous for blocking.
From the code it seems that you are utilizing an already opened connection. Maybe there's a transaction previously left pending on the same connection.
I am getting the following error when I try to call a stored procedure that contains a SELECT Statement:
The operation is not valid for the state of the transaction
Here is the structure of my calls:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            //do my first add update statement

            //do my call to the select statement sp
            bool DoesRecordExist = this.SelectStatementCall(id);
        }
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring)) //breaks on this line
    {
        //create parameters
        //
    }
}
Is the problem with me creating another connection to the same database within the transaction?
After doing some research, it seems I cannot have two connections open to the same database within the TransactionScope block. I needed to modify my code to look like this:
public void MyAddUpdateMethod()
{
    using (TransactionScope Scope = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        using (SQLServer Sql = new SQLServer(this.m_connstring))
        {
            //do my first add update statement
        }

        //removed the method call from the first sql server using statement
        bool DoesRecordExist = this.SelectStatementCall(id);
    }
}

public bool SelectStatementCall(System.Guid id)
{
    using (SQLServer Sql = new SQLServer(this.m_connstring))
    {
        //create parameters
    }
}
When I encountered this exception, there was an InnerException of "Transaction Timeout". Since this happened during a debug session, while I had halted my code for some time inside the TransactionScope, I chose to ignore this issue.
When this specific exception with a timeout appears in deployed code, I think the following section in the machine.config file will help you out:
<system.transactions>
    <machineSettings maxTimeout="00:05:00" />
</system.transactions>
I also came across the same problem; I changed the transaction timeout to 15 minutes and it works.
I hope this helps.
TransactionOptions options = new TransactionOptions();
options.IsolationLevel = System.Transactions.IsolationLevel.ReadCommitted;
options.Timeout = new TimeSpan(0, 15, 0);

using (TransactionScope scope = new TransactionScope(TransactionScopeOption.Required, options))
{
    sp1();
    sp2();
    ...
}
For any wanderer that comes across this in the future. If your application and database are on different machines and you are getting the above error especially when using TransactionScope, enable Network DTC access. Steps to do this are:
Add firewall rules to allow your machines to talk to each other.
Ensure the distributed transaction coordinator service is running
Enable network DTC access. Run dcomcnfg, go to Component Services > My Computer > Distributed Transaction Coordinator > Local DTC, right-click it and choose Properties.
Enable network DTC access there.
Important: do not edit or change the user account and password in the DTC Logon account field; leave it as is, or you may end up re-installing Windows.
I've encountered this error when my Transaction is nested within another. Is it possible that the stored procedure declares its own transaction or that the calling function declares one?
For me, this error came up when I was trying to roll back a transaction block after encountering an exception, inside another transaction block.
All I had to do to fix it was to remove my inner transaction block.
Things can get quite messy when using nested transactions; it's best to avoid them and just restructure your code.
In my case, the solution was neither to increase the timeout of the TransactionScope nor to increase the maxTimeout in the machineSettings element of system.transactions in machine.config.
Something strange was going on, because the error only appeared when the volume of information was very high.
The problem was that the code inside the transaction contained many foreach loops that updated different tables (I had to solve this in code developed by other people). With few records in the tables the error did not appear, but once the number of records increased the error showed up.
In the end the solution was to change from a single transaction to several separate transactions, one per foreach loop that had been inside the original transaction.
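As a rough sketch of that restructuring (the row collections and UpdateTable helpers are placeholders, not the original code), each group of updates commits in its own scope:
using (var scope = new TransactionScope())
{
    foreach (var row in table1Rows)
    {
        UpdateTable1(row); // placeholder for the per-table update logic
    }
    scope.Complete();
}
using (var scope = new TransactionScope())
{
    foreach (var row in table2Rows)
    {
        UpdateTable2(row); // placeholder for the per-table update logic
    }
    scope.Complete();
}
Note that this trades atomicity for shorter transactions: a failure in a later block no longer rolls back the earlier ones.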
You can't have two transactions open at the same time. What I do is call transaction.Complete() before returning results that will be used by the second transaction ;)
I updated some proprietary 3rd party libraries that handled part of the database process, and got this error on all calls to save. I had to change a value in the web.config transaction key section because (it turned out) the referenced transaction scope method had moved from one library to another.
There were other errors previously connected with that key, but I got rid of them by commenting the key out (I know I should have been suspicious at that point, but I assumed it was redundant since it wasn't in the shiny new library).