I have been experiencing database transaction timeouts in an ASP.NET 2.0 & SQL Server 2008 application for several days now.
Using SQL Server Profiler, we traced the issue to a function that calls a web service.
The transaction includes several stored procedures (SPs). Stored procedure #3 saves some binary data to the database, which takes time. When execution goes over 25 seconds, it throws a timeout exception:
Message : System.Web.Services.Protocols.SoapException: Server was unable to process request. ---> System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
My code:
using (System.Data.Common.DbConnection connection = o_DB.CreateConnection())
{
    connection.Open();
    System.Data.Common.DbTransaction o_Transaction = connection.BeginTransaction();
    try
    {
        // exec sp1
        // exec sp2
        foreach (var item in items)
        {
            // exec sp3  -- saves the binary data; this is the slow step
        }
    }
We tried some solutions found on the web, but none of them worked. I hope someone can give me a hand. Many thanks.
We do not set a timeout in the connection string in web.config, and the exception is not thrown at the default 15 seconds, so it does not appear to be the connection timeout.
We set the transaction timeout in web.config:
<system.transactions>
<machineSettings maxTimeout="00:00:30" />
</system.transactions>
Something like this:
using (var ts = CreateTransactionScope(TimeSpan.FromSeconds(mySecondsVar)))
{
    using (System.Data.Common.DbConnection connection = o_DB.CreateConnection())
    {
        connection.Open();
        using (IDbTransaction tran = connection.BeginTransaction())
        {
            try
            {
                // your code
            }
            catch
            {
                tran.Rollback();
                throw;
            }
        }
    }
    ts.Complete();
}
If is "ok", the dispose do the commit automatically.
You can also handle this directly in code. Please see the code below:
public DataSet getData(string command)
{
    DataSet ds = new DataSet();
    string connectionString = ConfigurationManager.ConnectionStrings["TESTDB"].ConnectionString;
    using (var conn = new SqlConnection(connectionString))
    {
        using (var cmd = new SqlCommand(command, conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.CommandTimeout = 0;
            SqlDataAdapter adapt = new SqlDataAdapter(cmd);
            conn.Open();
            adapt.Fill(ds);
            conn.Close();
        }
    }
    return ds;
}
The line below changes the execution timeout to infinite, so the command runs until the query completes.
cmd.CommandTimeout = 0;
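If you would rather keep a safety net, a more cautious variant reads a finite timeout from configuration instead of disabling it entirely (a sketch; "CommandTimeoutSeconds" is a hypothetical appSettings key, not part of the original code):

// Sketch: use a configurable, finite timeout instead of 0 (infinite).
int timeoutSeconds;
if (!int.TryParse(ConfigurationManager.AppSettings["CommandTimeoutSeconds"], out timeoutSeconds))
{
    timeoutSeconds = 120; // fall back to two minutes if the setting is missing
}
cmd.CommandTimeout = timeoutSeconds;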
The connection state stays active in the pool when there is an exception while executing a query or stored procedure through C#.
The Npgsql version that I am using is 4.1.7.
Here is the code that I am trying to execute.
NpgsqlCommand cmd = null;
NpgsqlDataAdapter sda = null;
NpgsqlConnection conn = null;
try {
string sql = "a_test";
conn = new NpgsqlConnection("Server=localhost;Port=5432;Username=admin;Password=test;Database=majordb;SearchPath=dbs;CommandTimeout=300;MaxPoolSize=500;Connection Idle Lifetime=180;Connection Pruning Interval=5;");
cmd = new NpgsqlCommand(sql);
cmd.CommandType = CommandType.StoredProcedure;
sda = new NpgsqlDataAdapter(cmd);
cmd.Connection = conn;
sda.Fill(dataTable);
}
catch (Exception e) {
//log
}
finally {
if(null != sda)
{
try
{
sda.Dispose();
}
catch (Exception)
{
}
}
try
{
cmd.Connection.Close();
cmd.Connection.Dispose();
}
catch (Exception)
{
}
try
{
cmd.Dispose();
}
catch (Exception)
{
}
}
If the above code executes properly without any exception, the connection state in pool goes to idle, which is correct. But if an exception occurs while executing, like below:
"Npgsql.NpgsqlException (0x80004005): Exception while reading from stream --->
System.IO.IOException: Unable to read data from the transport connection: A connection attempt
failed because the connected party did not properly respond after a period of time, or established
connection failed because connected host has failed to respond."
The connection state in the pool shows as active for about 5 minutes or so, even though the Close/Dispose methods are called in the finally block. This means the close/dispose was not properly executed by Npgsql. If the program keeps the connection state active for every connection run within those 5 minutes, we can run into a MaxPoolSize error.
I want the connection state to go to idle even when there is an exception. How do I do this?
Please note: I am not looking for a solution to the exception that I listed above. I am looking for a solution where the connection state is changed to idle from active when there is an exception while executing the above code.
To know if the connection state is active or not I used the following query:
SELECT
pid,
usename,
application_name,
datname,
client_addr,
rank() over (partition by client_addr order by backend_start ASC) as rank,
state,
state_change,
current_timestamp,
query,
query_start,
backend_start
FROM
pg_stat_activity
WHERE
pid <> pg_backend_pid( )
AND
application_name !~ '(?:psql)|(?:pgAdmin.+)'
AND
datname = current_database()
AND
usename = current_user
Any help is really appreciated.
While connecting to a SQL Server database using ADO.NET in C#, I am getting random exceptions when working with thousands of records simultaneously across multiple threads, such as:
A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 0 - The wait operation timed out.)
The client was unable to establish a connection because of an error during connection initialization process before login. Possible causes include the following: the client tried to connect to an unsupported version of SQL Server; the server was too busy to accept new connections; or there was a resource limitation (insufficient memory or maximum allowed connections) on the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
My connection string is:
<add name="ConnDBString" connectionString="datasource;Initial Catalog=dbname;pooling=true;connection lifetime=120;Max Pool Size=1000" providerName="System.Data.SqlClient"/>
Following other questions, I have optimized my connection-handling code as shown below:
public static int ExecuteNonQuery(string commandText, CommandType commandType, ref List<SqlParameter> parameters)
{
    int result = 0;
    if (!string.IsNullOrEmpty(commandText))
    {
        using (var cnn = new SqlConnection(Settings.GetConnectionString()))
        {
            var cmd = cnn.CreateCommand();
            cmd.CommandText = commandText;
            cmd.CommandType = commandType;
            cmd.CommandTimeout = Convert.ToInt32(Settings.GetAppSetting("CommandTimeout") ?? "3600");
            cmd.Parameters.AddRange(parameters.ToArray());
            cnn.Open();
            result = cmd.ExecuteNonQuery();
            cmd.Dispose();
        }
    }
    return result;
}
Please advise.
First, make sure your RAM is not hitting its limit, as @mjwills says.
I have also rewritten the method; you can try this, which might help with the max pool size issue:
public static int ExecuteNonQuery(string commandText, CommandType commandType, ref List<SqlParameter> parameters)
{
    int result = 0;
    if (!string.IsNullOrEmpty(commandText))
    {
        using (var cnn = new SqlConnection(Settings.GetConnectionString()))
        using (var cmd = cnn.CreateCommand())
        {
            cmd.CommandText = commandText;
            cmd.CommandType = commandType;
            cmd.CommandTimeout = Convert.ToInt32(Settings.GetAppSetting("CommandTimeout") ?? "3600");
            cmd.Parameters.AddRange(parameters.ToArray());
            cnn.Open();
            result = cmd.ExecuteNonQuery();
            cnn.Close();
        }
    }
    return result;
}
You could also make this method asynchronous so that threads are not blocked while commands execute.
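A minimal async sketch of the same method (assuming .NET 4.5+ and the same Settings helper; note that the ref modifier on the parameter list has to be dropped for an async method):

public static async Task<int> ExecuteNonQueryAsync(string commandText, CommandType commandType, List<SqlParameter> parameters)
{
    if (string.IsNullOrEmpty(commandText))
        return 0;

    using (var cnn = new SqlConnection(Settings.GetConnectionString()))
    using (var cmd = cnn.CreateCommand())
    {
        cmd.CommandText = commandText;
        cmd.CommandType = commandType;
        cmd.CommandTimeout = Convert.ToInt32(Settings.GetAppSetting("CommandTimeout") ?? "3600");
        cmd.Parameters.AddRange(parameters.ToArray());

        await cnn.OpenAsync();
        return await cmd.ExecuteNonQueryAsync(); // connection goes back to the pool when disposed
    }
}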
I have an .exe project with 4 threads. Each thread makes a call to a WCF service hosted in a Windows Service and inserts a record (loop from 1 to 5,000 records). The test project will try to insert 20,000 records into the WCF service. The service behavior in the WCF service is per session.
I use a stored procedure to insert the records into SQL Server 2008 R2 Express. The problem I'm having is with the SqlCommand. When only one thread is running, no error happens, but when two or more threads are running, the code throws an error, and I am not sure about its type.
If you look at the code below, the error is raised when reading the result from ExecuteReader (it's a cast exception). It does not return the errors that I have defined in the stored procedure (I'm guessing it never gets to the database); it returns XML with all the parameters of the transaction record, but not only the current transaction: it also returns records from transactions running on other threads. If I execute the stored procedure directly in SQL Server Management Studio, it works fine, so I have ruled out any isolation-level issue on the database side.
As you can see the method is not static, the SqlCommand is created and disposed on each call, so I'm really concerned about this. Any ideas?
private InsertInvoiceDataTable SaveTransaction(Transaction Trans, ClientInfo InfoCliente)
{
InsertInvoiceDataTable returnData = new InsertInvoiceDataTable();
try
{
using (SqlConnection con = new SqlConnection(ConnStr1))
{
using (SqlCommand cmd = new SqlCommand("InsertInvoice", con))
{
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.Add("#TIPO", SqlDbType.Int).Value = Trans.InvoiceType;
cmd.Parameters.Add("#CAJERO", SqlDbType.VarChar).Value = Trans.Cashier;
cmd.Parameters.Add("#TERMID", SqlDbType.Int).Value = Trans.Term;
cmd.Parameters.Add("#DOB", SqlDbType.DateTime).Value = Trans.DOB;
cmd.Parameters.Add("#CLIID", SqlDbType.VarChar).Value = InfoCliente.ClientId;
cmd.Parameters.Add("#VENTANETA", SqlDbType.Decimal).Value = Convert.ToDecimal(Trans.SubTotal);
cmd.Parameters.Add("#IMPUESTO", SqlDbType.Decimal).Value = Convert.ToDecimal(Trans.TaxTotal);
cmd.Parameters.Add("#VENTATOTAL", SqlDbType.Decimal).Value = Convert.ToDecimal(Trans.Total);
con.Open();
using (SqlDataReader results = cmd.ExecuteReader())
{
while (results.Read())
{
InsertInvoiceRow row = returnData.NewInsertInvoiceRow();
try
{
row.TIPO_log = results["Type_log"].ToString();
row.VALOR_LOG = results["Value_log"].ToString();
}
catch (Exception ex)
{
returnData.AddInsertInvoiceRow("ERROR", ex.Message);
break;
}
returnData.AddInsertInvoiceRow(row);
}
}
con.Close();
cmd.Dispose();
}
}
}
catch (Exception ex)
{
Log.Error(ex);
returnData.AddInsertInvoiceRow("ERROR", ex.Message);
}
return returnData;
}
You are performing a DML operation, in your case an INSERT (judging from new SqlCommand("InsertInvoice", con) in your posted code), so why ExecuteReader()? It should rather be cmd.ExecuteNonQuery().
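A minimal sketch of that change (illustrative only; if the procedure also returns the Type_log/Value_log status rows shown in your code, you still need ExecuteReader for those):

// Sketch: for an INSERT-only procedure, ExecuteNonQuery is enough.
con.Open();
int rowsAffected = cmd.ExecuteNonQuery(); // number of rows affected by the INSERT
if (rowsAffected == 0)
{
    // nothing was inserted; handle as appropriate
}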
Please read the entire question before responding. And I apologize, I never seem to write short questions...
I am supporting a C# internal web app that hits SQL Server 2008 R2 running on a Windows Small Business Server 2011 SP1 box.
We have been getting a lot of SQL timeouts lately; here is an example exception:
System.Web.HttpUnhandledException: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
at System.Data.ProviderBase.DbConnectionFactory.GetConnection(DbConnection owningConnection)
at System.Data.ProviderBase.DbConnectionClosed.OpenConnection(DbConnection outerConnection, DbConnectionFactory connectionFactory)
at System.Data.SqlClient.SqlConnection.Open()
I have checked a few things, one of them being how the code handles connections and closing of connections. I have read in other threads that using a Using statement with your connection is adequate as it "...wraps the connection create in a try .. finally and places the connection disposal call inside the finally". The connection is closed even in the event of an exception.
So, I agree with and have used that method for years. Others have recommended explicitly closing connections even when using a Using statement with your connection. I think that would be redundant...
My question, however, is regarding the command object. Someone else wrote a large library of db methods for this app and they have (in all of the db methods) declared the SqlCommand object BEFORE the SqlConnection object using statement. They have also assigned the connection object to the command object before the connection using statement.
Is it better practice to declare and use the command object inside the connection using statement, and could doing it the other way cause SQL connection timeouts (barring other causes of SQL connection timeouts)? Take this code for example:
public Musician GetMusician(int recordId)
{
Musician objMusician = null;
SqlConnection con = new SqlConnection(_connectionString);
SqlCommand cmd = new SqlCommand();
cmd.Connection = con;
cmd.CommandText = "selectMusician";
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("#id", recordId);
using (con)
{
con.Open();
SqlDataReader reader = cmd.ExecuteReader();
if (reader.HasRows)
{
reader.Read();
objMusician = new Musician((int)reader["id"]);
objMusician.Name = (string)reader["name"];
}
}
if (objMusician != null)
{
objMusician.Albums = Albums.GetAlbums((int)objMusician.ID);
objMusician.Tours = Tours.GetTours((int)objMusician.ID);
objMusician.Interviews = Interviews.GetInterviews((int)objMusician.ID);
}
return objMusician;
}
Also know that the calling pages have try catches in them and it is the page that logs the error to our logging db. We let the exception bubble up to the calling method on the page, and it gets handled there.
You should explicitly close the connection when you're finished with it. You're never closing any connections, so after you hit the connection pool limit you're going to get errors until you manually recycle the pool or it cycles on its own. Move the property-assignment block inside the using block and call con.Close(); cmd.Dispose(); before returning objMusician:
using (con)
{
con.Open();
SqlDataReader reader = cmd.ExecuteReader();
if (reader.HasRows)
{
reader.Read();
objMusician = new Musician((int)reader["id"]);
objMusician.Name = (string)reader["name"];
}
if (objMusician != null)
{
objMusician.Albums = Albums.GetAlbums((int)objMusician.ID);
objMusician.Tours = Tours.GetTours((int)objMusician.ID);
objMusician.Interviews = Interviews.GetInterviews((int)objMusician.ID);
}
con.Close();
cmd.Dispose();
return objMusician;
}
Don't know if it will help your timeout problem, but I've always structured my code like the following and not had that problem:
using(var cmd = new SqlCommand())
{
using(var con = new SqlConnection(ConnectionString))
{
con.Open();
cmd.Connection = con;
cmd.CommandText = "selectMusician";
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("#id", recordId);
...
}
}
I was just reading on MSDN; it says: "Call Dispose when you are finished using the Component. The Dispose method leaves the Component in an unusable state. After calling Dispose, you must release all references to the Component so the garbage collector can reclaim the memory that the Component was occupying." This means that, for the GC to immediately collect the connection, you must dispose the connection before disposing the command; otherwise the connection hangs around until the GC gets around to calling Finalize on it.
Refactor your method as follows. You are likely running into a situation where a data reader has a reference to a connection, and it has not yet been disposed of.
public Musician GetMusician(int recordId)
{
Musician objMusician = null;
using(SqlConnection con = new SqlConnection(_connectionString))
{
con.Open();
using (SqlCommand cmd = new SqlCommand())
{
cmd.Connection = con;
cmd.CommandText = "selectMusician";
cmd.CommandType = CommandType.StoredProcedure;
cmd.Parameters.AddWithValue("#id", recordId);
using (SqlDataReader reader = cmd.ExecuteReader(CommandBehavior.CloseConnection))
{
if (reader.HasRows)
{
reader.Read();
objMusician = new Musician((int) reader["id"]);
objMusician.Name = (string) reader["name"];
}
if (objMusician != null)
{
objMusician.Albums = Albums.GetAlbums((int)objMusician.ID);
objMusician.Tours = Tours.GetTours((int)objMusician.ID);
objMusician.Interviews = Interviews.GetInterviews((int)objMusician.ID);
}
}
}
return objMusician;
}
}
I'm using SqlBulkCopy to copy a batch of records from MySQL to SQL Server.
After exactly 30 seconds, I get this
System.Data.SqlClient.SqlException (0x80131904): Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
The statement has been terminated.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection)
at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning()
at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlCommand.RunExecuteNonQueryTds(String methodName, Boolean async)
at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe)
at System.Data.SqlClient.SqlCommand.ExecuteNonQuery()
There's one 'Error' object inside the exception, with the following details:
Message: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
LineNumber: 0
Source: .Net SqlClient Data Provider
Procedure:
Index #1
Message: The statement has been terminated.
LineNumber: 1
Source: .Net SqlClient Data Provider
Procedure:
Here's the code
using (MySqlConnection sourceConnection = new MySqlConnection(AccManConnectionString)) {
sourceConnection.Open();
MySqlCommand commandSourceData = new MySqlCommand(string.Format(sql, VersionNum.ToString()), sourceConnection);
for (int i = 0; i < ParamNames.Length; i++)
{
commandSourceData.Parameters.AddWithValue(ParamNames[i], SetIDList[i]);
}
MySqlDataReader reader = commandSourceData.ExecuteReader();
using (SqlConnection destinationConnection = new SqlConnection(TimetableConnectionString))
{
try
{
destinationConnection.Open();
using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
{
bulkCopy.DestinationTableName = "NetworkID";
// Configure the batch sizes and timeouts (config code omitted)
bulkCopy.BatchSize = batchSize;
bulkCopy.BulkCopyTimeout = timeout;
try
{
bulkCopy.WriteToServer(reader);
SqlCommand update = new SqlCommand(string.Format("UPDATE p SET p.Username = n.Username FROM NetworkID n INNER JOIN Person p ON n.PersonID = p.PersonID and n.VersionID = {0} where p.VersionID = {0}", VersionNum), destinationConnection);
update.ExecuteNonQuery();
}
catch (SqlException ex)
{
log.Error("Exception caught", ex);
}
finally
{
reader.Close();
}
}
}
catch (Exception e)
{
log.Error("Exception caught", e);
}
}
}
I know there are plenty of timeout and batch-size parameters I can experiment with (and have). But my question is: from a coding point of view, is there any way of determining which database server is the one giving me problems?
Thanks
The timeout you are experiencing is likely to be influenced by the setting SqlBulkCopy.BulkCopyTimeout, which has a default of 30 seconds.
As for determining where the problem lies, your best bet is to catch the SqlException and see whether it contains any more details, but in your case I believe it will be your code (the client) timing out.
The documentation on SqlException has a good example of how to enumerate the errors contained in the exception.
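A minimal sketch of that enumeration (illustrative, wrapped around your existing WriteToServer call; the Server property and error Number are the most useful fields for telling which side raised the error, and -2 is the number SqlClient uses for its own client-side timeout):

try
{
    bulkCopy.WriteToServer(reader);
}
catch (SqlException ex)
{
    // ex.Server names the SQL Server instance; ex.Number == -2 is the client-side timeout.
    log.Error(string.Format("SqlException from server '{0}', number {1}", ex.Server, ex.Number), ex);
    foreach (SqlError err in ex.Errors)
    {
        log.Error(string.Format("  #{0}: {1} (line {2}, procedure '{3}')",
            err.Number, err.Message, err.LineNumber, err.Procedure));
    }
}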
Update 1
I can see you are using MySqlCommand; I'm guessing this is Devart. If so, you haven't set a timeout on this command; you'll need to set its CommandTimeout property:
MySqlCommand commandSourceData = new MySqlCommand(string.Format(sql, VersionNum.ToString()), sourceConnection);
commandSourceData.CommandTimeout = timeout;
You should also put one on your SqlCommand.
SqlCommand update = new SqlCommand(string.Format("UPDATE p SET p.Username = n.Username FROM NetworkID n INNER JOIN Person p ON n.PersonID = p.PersonID and n.VersionID = {0} where p.VersionID = {0}", VersionNum), destinationConnection);
update.CommandTimeout = timeout;
Update 2
Just reading the documentation on SqlBulkCopy and noticed the following:
If multiple active result sets (MARS) is disabled, WriteToServer makes the connection busy. If MARS is enabled, you can interleave calls to WriteToServer with other commands in the same connection.
I'm not sure whether you are using MARS, but your code calls into SQL to do an UPDATE after the WriteToServer method and before the SqlBulkCopy is closed (via using). Can you try explicitly closing the SqlBulkCopy by calling Close() before the UPDATE, or move the UPDATE outside of the using statement for the bulk copy?
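To make the second suggestion concrete, here is a sketch of moving the UPDATE outside the bulk-copy block (same connection and variables as your code; only the structure changes):

using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
{
    bulkCopy.DestinationTableName = "NetworkID";
    bulkCopy.BatchSize = batchSize;
    bulkCopy.BulkCopyTimeout = timeout;
    bulkCopy.WriteToServer(reader);
}   // the bulk copy is closed here, before any other command uses the connection

reader.Close();

using (SqlCommand update = new SqlCommand(
    string.Format("UPDATE p SET p.Username = n.Username FROM NetworkID n " +
                  "INNER JOIN Person p ON n.PersonID = p.PersonID AND n.VersionID = {0} " +
                  "WHERE p.VersionID = {0}", VersionNum), destinationConnection))
{
    update.CommandTimeout = timeout;
    update.ExecuteNonQuery();
}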