Why does OleDb Connection.Close() take far too long to execute? - C#

While developing a desktop app that connects to a local database, I moved the database to a network location, and now every time I call Connection.Close() the program hangs for 5-15 seconds. I saw this problem very infrequently when the database was stored on the local computer, but now that it's on the network, it hangs almost every time I attempt a Close(). My first call to the database doesn't even run a query; it's just a test connection that I open and close to make sure the user can connect, yet it still hangs far too long.
I've seen this question asked before, but no one has offered a suggestion or fix beyond 'wrap the connection in a using {} block and let C# clean it up.' That does not affect the Close() time in any way.
Is there an option in the connection string to address this issue? Does anyone know why this is happening?
The connection string I use is:
const string CONNECTION_STRING = @"Provider=Microsoft.ACE.OLEDB.12.0; Data Source=\\NEWTORK\Shared\Database\Database.accdb; Persist Security Info=False;";
private void Form_Login_Load(object sender, EventArgs e)
{
    OleDbConnection Connection = new OleDbConnection();
    Connection.ConnectionString = CONNECTION_STRING;
    try
    {
        Console.Write("Connection Opening.....");
        Connection.Open();
        Console.WriteLine("Connection Opened");

        Console.Write("Writing Status Text.....");
        lbl_Status.Text = "Online";
        Console.WriteLine("Written Status Text");

        Console.Write("Connection Closing.....");
        Connection.Close();   // this is the call that hangs for 5-15 seconds
        Console.WriteLine("Connection Closed");
    }
    catch (Exception Ex)
    {
        lbl_Status.Text = "Offline";
        lbl_Status.ForeColor = System.Drawing.Color.FromArgb(255, 0, 0);
        MessageBox.Show("Could not connect to Database");
    }
}
In my output window, I see the opening messages and the status messages right away, but the app hangs at the Connection.Close() call. After 5-15 seconds, the 'Connection Closed' message appears in the window.
There are many connections in the app that do query the database, and it seems to hang just before closing each of those as well. I do seem to notice that repeating the same query without restarting the app sometimes results in a quicker close the second time, but the first close for any given query always hangs.

What ended up working for me:
When I was getting long lag times from the Connection.Close() method, I was using the Microsoft Access Database Engine 2016 (x64). For an unrelated reason, I needed to uninstall the 2016 engine and go with the 2010 (x86) version. Now my Connection.Close() times average ~40 ms, which is perfectly acceptable for my application.

If there are any pending transactions between the database and the program, Close() will roll them back. It also has to release the underlying connection back to the connection pool, which can take longer when the database sits on a remote drive. Could this be your issue?
Here are the docs on the method.
To solve this, you could close the connection on a BackgroundWorker (from System.ComponentModel) so the UI thread isn't blocked, like so:
var b = new BackgroundWorker();
b.DoWork += CloseDB;
b.RunWorkerCompleted += someMethodAfterClose;
b.RunWorkerAsync();
where CloseDB is:
public void CloseDB(object sender, DoWorkEventArgs e)
{
    someConnection.Close();   // runs on a worker thread, off the UI thread
}

Often this comes down to server settings. If the firewall and Windows Defender are active, they scan every file access - the result is very slow opens and, of course, closes.
Try running the computer where the shared folder resides with the firewall and Windows Defender turned off. I have often seen this fix the large delays. Of course, using a socket-based (server-based) database technology would eliminate this issue entirely; but you have what you have, and often it is not your code that is slow - it is the Windows file system, the network, and the anti-virus software causing the slowdown.

This might get flagged as "not an answer", but I have this problem as well, and no one seems to have found the real explanation, so instead of creating a redundant question, here are some of my symptoms:
Started happening much more often when the office updated to Windows 10 (from Windows 7).
Happens frequently but not always. Sometimes the OleDb connection close/dispose is very quick; sometimes it hangs ~10-15 s. The variation is seemingly random from one connection attempt to the next, even with the same Access database, same machine, and same process.
Hanging can happen for databases located both locally and on the network, though I haven't tested whether one is more frequent than the other.
Happens even if connection makes no changes or transactions (i.e. just SELECT queries).
Happens only with OleDb connections, i.e., a user opening and closing database via the Access interface is fine.
Update
In desperation, I tried the suggestion of closing the connection in a background thread, to avoid holding up the main thread. But once I did that, whenever a subsequent connection was initiated later on, I got this error:
Unhandled Exception: System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt
Googling indicates this seems to happen with many people when dealing with Access database connections, but often for unknown reasons.
So I tried a different suggestion: changing the connection string to include OLE DB Services=-1;. This appeared to fix the issue at first. I'm no expert on this, but as far as I can tell it avoids the hang-on-close problem by leaving some of the connection's resources open behind the scenes for reuse, even after the connection object is disposed. Fine with me... except eventually it appears to close those resources anyway (probably after some timeout), and when a connection is made after that, we're back to the inexplicable AccessViolationException above.
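For reference, a sketch of what that modified connection string looks like (the UNC path here is a placeholder; OLE DB Services=-1 asks the provider to enable all OLE DB services, including resource pooling):
// Connection string with all OLE DB services enabled; the share path is illustrative.
const string CONNECTION_STRING =
    "Provider=Microsoft.ACE.OLEDB.12.0;" +
    @"Data Source=\\SERVER\Share\Database.accdb;" +
    "Persist Security Info=False;" +
    "OLE DB Services=-1;";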
Potential Solution
From piecing together various web comments and my own experimentation:
OleDb does not play nice with multi-threading.
Closing the last open Access connection in an app, no matter which database it points to, seems to trigger the unloading of some resources inside the OleDb library itself, which makes it a potentially time-consuming operation.
So here is what I did: I added an empty Access database hidden inside my app, and on startup I open an OleDb connection to it. I do NOT dispose that connection, and I keep a reference to it so it never closes on its own. That way, disposing any subsequent connection never triggers the unloading of whatever resources OleDb is keeping around (a sketch follows below).
It's a hack but so far this appears to fix the original problem.
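A minimal sketch of the idea, assuming an empty database file named Sentinel.accdb is shipped alongside the executable (the file and class names are illustrative):
using System;
using System.Data.OleDb;
using System.IO;

static class OleDbKeepAlive
{
    // Held for the lifetime of the app so no other Close() is ever
    // closing the *last* open Access connection.
    private static OleDbConnection _sentinel;

    public static void Open()
    {
        string path = Path.Combine(
            AppDomain.CurrentDomain.BaseDirectory, "Sentinel.accdb");
        _sentinel = new OleDbConnection(
            "Provider=Microsoft.ACE.OLEDB.12.0;Data Source=" + path + ";");
        _sentinel.Open();
        // Deliberately never disposed: closing it would let OleDb unload
        // its shared resources, which is the slow operation being avoided.
    }
}
Call OleDbKeepAlive.Open() once at startup, before any other connection is made.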
Of course the real answer is avoid Access databases!

Related

Is overutilisation of Sql Connections in C# a problem?

Throughout the program I am currently working on, I realized that whenever I need to access SQL Server, I just type a query string and execute the query using SqlCommand and SqlConnection in C#. At a certain point in the program, I even opened a connection and ran a query inside a for loop.
Is it unhealthy for the program to constantly open-close connections?
I am not very familiar with the terminology, so you might have some trouble understanding what I am talking about. Put simply: can doing this very frequently cause any problem?
string queryString = "Some SQL Query";

public void RunQuery()   // method name is illustrative; the original had none
{
    SqlConnection con = new SqlConnection(conString);
    SqlCommand cmd = new SqlCommand(queryString, con);
    cmd.Parameters.AddWithValue("@SomeParam", someValue);
    con.Open();
    cmd.ExecuteNonQuery();
    con.Close();
}
I use this template in almost every class I create (usually to get, update, or insert data from/into a DataTable).
Well, is it harmful?
The short answer is yes - it is inefficient to constantly open and close connections. Opening a physical connection is a very expensive process, and managing a connection for the lifetime of its need (which is usually the lifetime of the application or process using it) is fraught with errors.
That is why connection pooling was introduced a long time ago. There is a layer beneath your application that manages the physical opening/closing of connections in a more efficient way. Pooling also reduces the chance that an open connection is leaked and stays open (which causes lots of problems). Pooling is enabled by default, so you don't need to do anything to use it.
With pooling, you write code to open a connection, use it for the duration of a particular section of code, and then close it. If the connection pool has an open but unused connection, it is reused rather than a new one being opened. When you close the connection, that simply returns it to the pool and makes it available to the next open attempt. You should also get familiar with the C# using statement (see the sketch below).
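A minimal sketch of that open-use-close pattern with using, assuming a valid connection string; the table, column, and parameter names are illustrative:
using System.Data.SqlClient;

public static void MarkOrderDone(string connStr, int orderId)
{
    // Dispose (and therefore Close) as soon as the work is done;
    // with pooling, Close just returns the connection to the pool.
    using (var con = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(
        "UPDATE Orders SET Status = 'Done' WHERE Id = @Id", con))
    {
        cmd.Parameters.AddWithValue("@Id", orderId);
        con.Open();   // usually cheap: grabs an already-open pooled connection
        cmd.ExecuteNonQuery();
    }   // Dispose closes the logical connection and releases it to the pool
}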

Fill(DataTable) succeeds in testing, hangs in Production

I have a console batch application which includes a process that uses SqlDataAdapter.Fill(DataTable) to perform a simple SELECT on a table.
private DataTable getMyTable(string conStr)
{
    DataTable tb = new DataTable();
    StringBuilder bSql = new StringBuilder();
    bSql.AppendLine("SELECT * FROM MyDB.dbo.MyTable");
    bSql.AppendLine("WHERE LEN(IdString) > 0");
    try
    {
        string connStr = ConfigurationManager.ConnectionStrings[conStr].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();
            using (SqlDataAdapter adpt = new SqlDataAdapter(bSql.ToString(), conn))
            {
                adpt.Fill(tb);
            }
        }
        return tb;
    }
    catch (SqlException)
    {
        throw;   // rethrow as-is; "throw sx;" would reset the stack trace
    }
    catch (Exception)
    {
        throw;
    }
}
This method is executed synchronously, and was run successfully in several test environments over many months of testing -- both when started from the command-line or started under control of an AutoSys job.
When moved into production, however, the process hung up -- at the Fill method as nearly as we can tell. Worse, instead of timing out, it apparently started spawning new request threads, and after a couple hours, had consumed more than 5 GB of memory on the application server. This affected other active applications, making me very unpopular. There was no exception thrown.
The Connection String is about as plain-vanilla as they come.
"data source=SERVER\INSTANCE;initial catalog=MyDB;integrated security=True;"
Apologies if I use the wrong terms regarding what the SQL DBA reported below, but when we had a trace put on the SQL Server, it showed the Application ID (under which the AutoSys job was running) being accepted as a valid login. The server then appeared to process the SELECT query. However, it never returned a response. Instead, it went into an "awaiting command" status. The request thread appeared to remain open for a few minutes, then disappeared.
The DBA said there was no sign of a deadlock, but that he would need to monitor in real time to determine whether there was blocking.
This only occurs in the production environment; in test environments, the SQL Servers always responded in under a second.
The AutoSys Application ID is not a new one -- it's been used for several years with other SQL Servers and had no issues. The DBA even ran the SELECT query manually on the production SQL server logged in as that ID, and it responded normally.
We've been unable to reproduce the problem in any non-production environment, and hesitate to run it in production without a server admin standing by to kill the process. Our security requirements limit my access to view server logs and processes, and I usually have to engage another specialist to look at them for me.
We need to solve this problem sooner or later. The amount of data we're looking at is currently only a few rows, but will increase over the next few months. From what's happening, my best guess is that it involves communication and/or security between the application server and the SQL server.
Any additional ideas or items to investigate are welcome. Thanks everyone.
This may be tied to permissions. SQL Server does some odd things instead of giving a proper error message sometimes.
My suggestion, and this might improve performance anyway, is to write a stored procedure on the server side that executes the SELECT, and call that stored procedure. That way the DBA can ensure you have proper access to the stored procedure without allowing direct access to the table, if for some reason that's being blocked; you should also see a slight performance boost.
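A minimal sketch of calling such a procedure from ADO.NET, assuming the DBA creates one named dbo.GetMyTableRows (the procedure name is hypothetical):
using System.Data;
using System.Data.SqlClient;

static DataTable GetMyTableViaProc(string connStr)
{
    var tb = new DataTable();
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand("dbo.GetMyTableRows", conn))
    {
        cmd.CommandType = CommandType.StoredProcedure;  // call by name, not inline SQL
        using (var adpt = new SqlDataAdapter(cmd))
        {
            adpt.Fill(tb);  // Fill opens and closes the connection itself if needed
        }
    }
    return tb;
}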
Though it may be caused by some strange permissions/ADO.NET issue as mentioned by @user1895086, I'd nonetheless recommend rechecking a few things one more time:
Ensure that the query the DBA ran manually and the one executed by your app are identical - either hardcode it or at least log it just before running. It is better to be safe than sorry.
Try selecting only a few rows - it is always a good idea not to select the entire table if you can avoid it, and in our case a SELECT TOP 1 (or 100) query may not exhibit such problems. Perhaps there is just much more data than you think and ADO.NET dutifully tries to load all those rows. Or perhaps not.
Try SqlDataReader to be sure that SqlDataAdapter does not cause any issues - yes, the adapter uses a data reader internally, but we would at least exclude those additional operations from the list of suspects (see the sketch after this list).
Try to get your hands on a dump of those 5 GB of memory - analyzing memory dumps is not a trivial task, but it won't be too difficult to see what is eating those hefty chunks of memory. I somehow doubt that ADO.NET would just spawn a lot of additional objects for no reason.
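A minimal sketch combining the second and third points - a row-capped SELECT read through SqlDataReader (table and column names are taken from the question; the TOP value is arbitrary):
using System;
using System.Data.SqlClient;

static int CountRows(string connStr)
{
    using (var conn = new SqlConnection(connStr))
    using (var cmd = new SqlCommand(
        "SELECT TOP 100 * FROM MyDB.dbo.MyTable WHERE LEN(IdString) > 0", conn))
    {
        conn.Open();
        using (SqlDataReader rdr = cmd.ExecuteReader())
        {
            int rows = 0;
            while (rdr.Read())
                rows++;   // just count; we only care whether the call returns at all
            Console.WriteLine("Read {0} rows", rows);
            return rows;
        }
    }
}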

Cancel background worker which is calling external process

I have created a Telnet server for a project, and it is working fine. However, when a client connects to the server, the server needs to connect to a database; again, this works fine when the connection information is correct and calls to the database do not take too long.
If a database call takes a long time (usually due to incorrect credentials or a badly optimised stored procedure), the server crashes with a Windows error message (i.e. not debuggable), which I understand is the underlying TCP system kicking in; that's fine. To get around this I am putting all the database calls into BackgroundWorkers so the server (and clients) continue to work, but I need to kill the call off if it is obviously taking too long.
I know about BackgroundWorker.CancellationPending, but as this is a single method call to the database (via an external DLL), the flag will never get checked. The same issue applies to a self-made approach I have seen elsewhere. The other option I have seen is Thread.Abort(), but I also know that is unpredictable and unsafe, so it's probably best not to use it.
Does anyone have any suggestions how to accomplish this?
The problem here is that an external DLL is controlling the waiting. Normally, you could cancel ADO.NET connections or socket connections but this doesn't work here.
Two reliable approaches:
Move the connection into a child process that you can kill. Killing a process is safe (in contrast to Thread.Abort!) because all state of that process is gone at the same time.
Structure the application so that, in case of cancellation, the result of the connection attempt is simply ignored and the app continues running something else. You just let the hanging connection attempt "dangle" in the background and throw away its result if it happens to return later (see the sketch below).
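A minimal sketch of the second approach using Task, where ConnectToDatabase is a hypothetical wrapper around the blocking call into the external DLL and the 10-second deadline is arbitrary:
using System;
using System.Threading.Tasks;

static void TryConnectWithDeadline()
{
    // Run the blocking call on the thread pool and give it a deadline.
    Task<bool> attempt = Task.Run(() => ConnectToDatabase());

    if (attempt.Wait(TimeSpan.FromSeconds(10)))
    {
        Console.WriteLine("Connected: " + attempt.Result);
    }
    else
    {
        // Timed out: carry on without the database. The attempt keeps running
        // ("dangles") in the background; its eventual result is simply ignored.
        Console.WriteLine("Connection attempt is taking too long; continuing.");
    }
}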

Transfer SQL to MySQL C# Monitoring Program

I currently have a working program that monitors a few SQL Server tables and transfers data to MySQL tables. Essentially, I have a loop that checks every 30 seconds. My main concern is that I currently need to close and reopen the connection on every pass through the loop. The reason is that I was getting errors about multiple transactions. When I close the connection, I also need to dispose it. I thought that disposing the transactions would have solved the problem, but I was still getting errors about multiple transactions.
This all seems to be working fine but I was wondering if there was a better way to do this without closing the connection.
I am not sure about your errors, but it seems you may have to increase the number of allowed connections to the remote computer. Have a look here: http://msdn.microsoft.com/en-us/library/system.net.configuration.connectionmanagementelement.maxconnection.aspx
You could also try using only one connection to execute multiple SQL statements.
If that doesn't help, please provide your code so we can check it...
Were you committing your transactions in your loop? transaction.Commit()... that could have been the issue; it's hard to say with no code. There's no need to worry about opening and closing connections anyway, since ADO.NET uses connection pooling behind the scenes: you only actually open a physical connection the first time, and after that it is kept open in the pool to be used again. As others have said, though, post some code! A sketch of the commit-per-iteration idea follows.
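Here is a minimal sketch of committing each transaction inside the loop so transactions never overlap on one open connection. It uses System.Data.SqlClient (the same pattern applies with MySQL Connector/NET); the SQL text and the loop flag are illustrative:
using System;
using System.Threading;
using System.Data.SqlClient;

static void MonitorLoop(string connStr)
{
    bool keepMonitoring = true;   // hypothetical flag, toggled elsewhere
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        while (keepMonitoring)
        {
            // Scope each pass to its own transaction and commit it before
            // the next pass begins.
            using (SqlTransaction tran = conn.BeginTransaction())
            using (var cmd = new SqlCommand(
                "UPDATE Jobs SET Synced = 1 WHERE Synced = 0", conn, tran))
            {
                cmd.ExecuteNonQuery();
                tran.Commit();
            }
            Thread.Sleep(TimeSpan.FromSeconds(30));
        }
    }
}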

"open/close" SqlConnection or keep open?

I have my business logic implemented in simple static classes with static methods. Each of these methods opens/closes a SQL connection when called:
public static void DoSomething()
{
    using (SqlConnection connection = new SqlConnection("..."))
    {
        connection.Open();
        // ...
        connection.Close();
    }
}
But I think that passing the connection object around and avoiding repeatedly opening and closing connections saves performance. I made some tests a long time ago with the OleDbConnection class (not sure about SqlConnection), and it definitely helped to work like this (as far as I remember):
// pass the connection object into the method
public static void DoSomething(SqlConnection connection)
{
    bool wasOpen = (connection.State == ConnectionState.Open);
    if (!wasOpen)
    {
        connection.Open();
    }
    // ....
    if (!wasOpen)
    {
        connection.Close();   // close only if this method opened it
    }
}
So the question is: should I choose method (a) or method (b)? I read in another Stack Overflow question that connection pooling saves the performance cost for me, so I shouldn't have to bother at all...
PS. It's an ASP.NET app - connections exist only during a web-request. Not a win-app or service.
Stick to option a.
The connection pooling is your friend.
Use Method (a), every time. When you start scaling your application, the logic that deals with the state will become a real pain if you do not.
Connection pooling does what it says on the tin. Just think of what happens when the application scales, and how hard would it be to manually manage the connection open/close state. The connection pool does a fine job of automatically handling this. If you're worried about performance think about some sort of memory cache mechanism so that nothing gets blocked.
Always close connections as soon as you are done with them, so they underlying database connection can go back into the pool and be available for other callers. Connection pooling is pretty well optimised, so there's no noticeable penalty for doing so. The advice is basically the same as for transactions - keep them short and close when you're done.
It gets more complicated if you're running into MSDTC issues from using a single transaction around code that uses multiple connections, in which case you actually do have to share the connection object and only close it once the transaction is done.
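A minimal sketch of that single-shared-connection case, assuming a reference to System.Transactions (DoSomething and DoSomethingElse stand in for methods that accept the shared connection, as in the question's method (b)):
using System.Data.SqlClient;
using System.Transactions;

public static void DoBoth(string connStr)
{
    using (var scope = new TransactionScope())
    using (var connection = new SqlConnection(connStr))
    {
        connection.Open();          // one connection for the whole transaction
        DoSomething(connection);
        DoSomethingElse(connection);
        scope.Complete();           // commit; disposing without Complete rolls back
    }
}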
However you're doing things by hand here, so you might want to investigate tools that manage connections for you, like DataSets, Linq to SQL, Entity Framework or NHibernate.
Disclaimer: I know this is old, but I found an easy way to demonstrate this fact, so I'm putting in my two cents worth.
If you're having trouble believing that the pooling is really going to be faster, then give this a try:
Add the following somewhere:
using System.Diagnostics;

public static class TestExtensions
{
    public static void TimedOpen(this SqlConnection conn)
    {
        Stopwatch sw = Stopwatch.StartNew();
        conn.Open();
        Console.WriteLine(sw.Elapsed);   // prints how long this Open() took
    }
}
Now replace all calls to Open() with TimedOpen() and run your program. For each distinct connection string you have, the console (output) window will show a single long-running open and a bunch of very fast opens.
If you want to label them you can add new StackTrace(true).GetFrame(1) + to the call to WriteLine.
There is a distinction between physical and logical connections. DbConnection is a kind of logical connection, and it uses an underlying physical connection to Oracle. Closing/opening the DbConnection doesn't affect your performance, but it makes your code clean and stable - connection leaks are impossible in this case.
Also, you should remember cases where the database server limits the number of parallel connections - with that in mind, it is necessary to keep your connections very short.
The connection pool frees you from connection state checking - just open, use, and immediately close them.
Normally you should keep one connection for each transaction (no parallel computation).
E.g. when a user executes a charge action, your application needs to find the user's balance first and then update it; those two statements should use the same connection.
Even though ADO.NET has its connection pool and the cost of dispatching a connection is very low, reusing the connection within a transaction is the better choice (see the sketch below).
Why not keep only one connection for the whole application?
Because a connection blocks while it executes a query or command, so your application would only be able to perform one database operation at a time - very poor performance.
One more issue: your application would always hold a connection even while the user has the app open but idle. If many users open your application, the database server will soon exhaust its supply of connections even though your users haven't done anything.
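A minimal sketch of that charge example - one connection and one transaction spanning both the read and the update (table, column, and parameter names are illustrative):
using System.Data.SqlClient;

public static void Charge(string connStr, int userId, decimal amount)
{
    using (var conn = new SqlConnection(connStr))
    {
        conn.Open();
        using (SqlTransaction tran = conn.BeginTransaction())
        {
            // Read the balance on the same connection and transaction...
            var readCmd = new SqlCommand(
                "SELECT Balance FROM Accounts WHERE UserId = @Id", conn, tran);
            readCmd.Parameters.AddWithValue("@Id", userId);
            decimal balance = (decimal)readCmd.ExecuteScalar();

            // ...then update it within the same transaction.
            var updateCmd = new SqlCommand(
                "UPDATE Accounts SET Balance = @NewBalance WHERE UserId = @Id",
                conn, tran);
            updateCmd.Parameters.AddWithValue("@NewBalance", balance - amount);
            updateCmd.Parameters.AddWithValue("@Id", userId);
            updateCmd.ExecuteNonQuery();

            tran.Commit();
        }
    }
}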
