I have a C# Winforms app that is large and complex. It makes OleDB connections to an Access database at various times for various reasons. In a certain function we need to MOVE (copy + delete) the mdb file, but it can't be done because it's locked. I've tried lots of different things to unlock/release the mdb file, and sometimes it works.
But in a certain 100% reproducible scenario, it cannot be unlocked. We have 2 global oledb connection variables we reuse everywhere, for efficiency, and to avoid having 1-off connections everywhere. And these 2 connection vars are useful for when we want to CLOSE the connections, so we can delete the mdb.
Here is my function (which normally works - just not in this 1 case) to forcibly close/release the 2 oledb connections from our winforms app:
public static void CloseOleDBConnections(bool forceReleaseAll = false)
{
    if (DCGlobals.Connection1 != null)
        DCGlobals.Connection1.Close();
    if (DCGlobals.Connection2 != null)
        DCGlobals.Connection2.Close();

    if (forceReleaseAll)
    {
        if (DCGlobals.Connection1 != null)
            DCGlobals.Connection1.Dispose();
        if (DCGlobals.Connection2 != null)
            DCGlobals.Connection2.Dispose();

        // Clear the OLE DB pool and force finalizers to run so pooled
        // connections actually release their file handles.
        OleDbConnection.ReleaseObjectPool();
        GC.Collect(GC.MaxGeneration);
        GC.WaitForPendingFinalizers();
    }
}
I am passing true into the above function.
One other thought: Certainly my Winforms app knows about all open oledbconnections. Is there no way to tell c# to find and iterate all open connections? When I close/exit my application - poof - the open connection to the mdb is released and I can delete the file. So something in .net knows about the connection and knows how to release it -- so how can I tap into that same logic without exiting the application?
Post Script
(I am aware that Access is bad, non-scalable, etc. - it's a legacy requirement and we're stuck with it for now).
I have seen numerous stack discussions (and on other forums) on this topic. I have tried numerous recommendations to no avail.
Disposed IDataReaders?
Do you dispose all IDataReader objects properly? An undisposed reader can keep its connection busy and prevent it from closing.
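For instance (a minimal sketch; the query and table name are placeholders), wrapping the command and reader in using blocks guarantees they are closed even if an exception is thrown:

using (OleDbCommand cmd = new OleDbCommand("SELECT * FROM SomeTable", DCGlobals.Connection1))
using (OleDbDataReader reader = cmd.ExecuteReader())
{
    while (reader.Read())
    {
        // ... consume the row ...
    }
} // reader and command are disposed here, releasing their hold on the connection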
Tracking Solution
In any case, you need to at least better track all your connections. It sounds like a very large project. You need to be absolutely sure that all connections are being disposed.
1. New TrackedOleDbConnection object
Create a TrackedOleDbConnection class with a static, thread-safe collection named StillOpen: when a TrackedOleDbConnection is constructed, add it to the collection; when it's disposed, remove it. (Note that OleDbConnection is sealed, so the class below wraps a connection rather than inheriting from it; and because the collection holds a strong reference, any leaked connection stays visible in StillOpen until you find the missing Dispose call.)
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Data.OleDb;

// OleDbConnection is sealed, so the tracker wraps a connection instead of inheriting.
public class TrackedOleDbConnection : IDisposable
{
    // Thread-safe set of connections created but not yet disposed.
    // (.NET has no ConcurrentList; a ConcurrentDictionary used as a set works.)
    private static readonly ConcurrentDictionary<TrackedOleDbConnection, byte> activeConnections =
        new ConcurrentDictionary<TrackedOleDbConnection, byte>();

    // Inspect (and log) this during CloseOleDBConnections: anything still in
    // here is a connection somebody forgot to dispose, so you can find and
    // add the missing Dispose calls in your code.
    public static ICollection<TrackedOleDbConnection> StillOpen
    {
        get { return activeConnections.Keys; }
    }

    public OleDbConnection Connection { get; private set; }

    public TrackedOleDbConnection(string connectionString)
    {
        Connection = new OleDbConnection(connectionString);
        activeConnections.TryAdd(this, 0);
    }

    // You don't need a constructor for every overload of OleDbConnection,
    // only for the overloads your project actually uses.

    public void Dispose()
    {
        byte ignored;
        activeConnections.TryRemove(this, out ignored);
        Connection.Dispose();
    }
}
2. Don't directly reference OleDbConnection anymore
Then do a simple Find and Replace across your solution so that connections are created as TrackedOleDbConnection (with the wrapper, call sites do their actual database work through its Connection property).
Then finally, during your CloseOleDBConnections function, you can check TrackedOleDbConnection.StillOpen to see whether you've got an undisposed connection hanging around somewhere.
Wherever you find such untracked problems, don't use the single central references; instead, use a using statement to ensure the connection is disposed properly.
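A minimal sketch of that pattern (the connection string and the body are placeholders):

using (var tracked = new TrackedOleDbConnection(connectionString))
{
    tracked.Connection.Open();
    // ... one-off work against tracked.Connection ...
} // disposed, and removed from StillOpen, even if an exception is thrown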
If the only thing you need is to copy the file, there is probably no need to mess with the connections at all. Please take a look at this:
https://www.raymond.cc/blog/copy-locked-file-in-use-with-hobocopy/
It's highly likely that ADOX is not releasing the connection to the database. Make sure that you:
explicitly call 'Close' on the ADOX Connection objects
call 'Dispose' on them
call System.Runtime.InteropServices.Marshal.FinalReleaseComObject(db.ActiveConnection);
call System.Runtime.InteropServices.Marshal.FinalReleaseComObject(db);
set them to Nothing/null
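A hedged sketch of that cleanup sequence (it assumes COM references to ADOX and ADODB, and that catalog is the ADOX.Catalog your code created; adjust the names to match yours):

// Close and fully release the ADOX catalog and its underlying COM connection.
var adoConnection = (ADODB.Connection)catalog.ActiveConnection;
adoConnection.Close();
System.Runtime.InteropServices.Marshal.FinalReleaseComObject(adoConnection);
System.Runtime.InteropServices.Marshal.FinalReleaseComObject(catalog);
adoConnection = null;
catalog = null;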
Also, when something calls close on a file handle, the close request is put in a queue to be processed by the kernel. In other words, even closing a simple file doesn't happen instantly. Because of this, you may have to put in a time-boxed loop that checks that the .LDB lock file has been removed, though that ultimately requires the user to wait. Seek any other alternative to this approach first, though it has been necessary with other formats/connections in my experience.
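Something like this (a sketch; ldbPath is assumed to point at the .ldb file that sits next to your .mdb, and it needs System.IO and System.Threading):

// Poll for up to 5 seconds for Access's lock file to disappear.
var deadline = DateTime.UtcNow.AddSeconds(5);
while (File.Exists(ldbPath) && DateTime.UtcNow < deadline)
{
    Thread.Sleep(100); // give the kernel time to process the close
}
if (File.Exists(ldbPath))
    throw new TimeoutException("The database lock file was not released.");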
Related
I'd like to ask a question. I've been trying to find some information regarding transactions with multiple connections, but I haven't been able to find any good source of information.
Now for what I'm trying to do. I have code that looks like this:
using (var Connection1 = m_Db.CreateConnection())
using (var Connection2 = m_Db.CreateConnection())
{
    Connection1.DoRead(..., (IDataReader Reader) =>
    {
        // Do stuff
        Connection2.DoWrite(...);
        Connection2.DoRead(..., (IDataReader Reader) =>
        {
            // Do more stuff
            using (var Connection3 = m_Db.CreateConnection())
            {
                Connection3.DoWrite(...);
                Connection3.Commit(); // Is this even right?
            }
        });
    });

    Connection1.DoRead(..., (IDataReader) =>
    {
        // Do yet more stuff
    });

    Connection1.Commit();
    Connection2.Commit();
}
Each CreateConnection creates a new transaction using MySqlConnection::BeginTransaction. The CreateConnection method creates a Connection object which wraps a MySqlConnection. The DoRead function executes some SQL, and disposes the IDataReader when done.
Every Connection will do a Rollback when disposed.
Now for some notes:
I have ONE server with multiple databases.
I am running MySql server with InnoDB databases.
I am doing both reads and writes to these databases.
For performance reasons and not to mess up the database, I am using transactions.
The code is (at least, for now) entirely serial. There are NO concurrent threads. All inserts and queries are done in serial fashion.
I use multiple connections to the database because a read or write is not allowed while another read is in progress (basically the reader object has not yet been disposed).
I basically want every connection to see all changes. So for example, after Connection 3 does some writes, Connection 1 should see those. But the data should be in the transaction and not written to the database (yet).
Now, as for my questions:
Does this work? Will everything be committed only once the last Commit function is called? Or should I use another approach?
Is this right? Is my approach completely and utterly wrong and silly?
Any drawbacks? Especially regarding performance.
Thanks.
Welp, it seems no one knows. But that's okay.
For now, I just went with the method of using a single connection and reading all the results into a List first, then closing the reader, thereby avoiding the problem of having to use multiple connections.
Might there be performance problems? Maybe, but it's better than having to deal with uncertainty and deadlocks.
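For reference, the shape of that workaround looks roughly like this (a sketch; command stands for whatever command the wrapper executes):

// Materialize the results first, so the reader is closed before any writes.
var rows = new List<object[]>();
using (var reader = command.ExecuteReader())
{
    while (reader.Read())
    {
        var values = new object[reader.FieldCount];
        reader.GetValues(values);
        rows.Add(values);
    }
} // the reader is disposed here; the connection is free for writes again

foreach (var row in rows)
{
    // ... issue writes on the same connection/transaction ...
}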
I just heard about "using", and that it's very efficient in the way it handles disposable objects: the object is only used for that particular block, and then it is removed.
But I don't know where the limit goes, as I can't see that you always want to use it, or that it's always efficient.
So my question here is: is this a good way to use it, or is it unnecessary, or will it even hurt my performance in any way?
void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    using (UdpClient udpclient = new UdpClient())
    {
        if (connect == true && MuteMic.Checked == false)
        {
            udpclient.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
        }
    }
}
It's an event from NAudio, and what it does is: while WaveInEvent has any data, this runs.
WaveInEvent comes from an input device, so if I start recording with it (for example a mic), data will be available (the mic audio), and then I can do what I want with that data. In this case I am sending it over UDP.
But as you can see, I am using a local UdpClient.
I don't know if I should be creating it there, or if I should create one beforehand so it can be reused all the time, instead of making a new one.
Hope I didn't mess up my explanation too badly.
Is this a good way to use it, or is it unnecessary or will it even hurt my performance in any way?
You should always use it when an object implements IDisposable. It doesn't have any negative impact on performance by itself; all it does is ensure that the object is properly disposed.
The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object. You can achieve the same result by putting the object inside a try block and then calling Dispose in a finally block; in fact, this is how the using statement is translated by the compiler. Your code will look more or less like this to the compiler:
{
    UdpClient udpclient = new UdpClient();
    try
    {
        if (connect == true && MuteMic.Checked == false)
        {
            udpclient.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
        }
    }
    finally
    {
        if (udpclient != null)
            ((IDisposable)udpclient).Dispose();
    }
}
You can read the details of using here.
As Microsoft says in "using Statement (C# Reference)" on MSDN: "As a rule, when you use an IDisposable object, you should declare and instantiate it in a using statement." But sometimes it's a better idea to define your object as a non-local variable and use it from your local code whenever you need it, rather than instantiating the object every time you want to use it and then disposing it. In your case, because you want to constantly send data with your UdpClient, repeatedly instantiating and disposing the object (which is what the using statement does) might reduce the performance (in your words, hurt your performance :) ). So I prefer to define a non-local variable in my application, use it whenever I want, and then Dispose it when it is no longer needed.
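A sketch of that alternative (connect, MuteMic and otherPartyIP come from the question; hooking the form's close for the cleanup is an assumption):

private UdpClient udpclient = new UdpClient(); // created once, reused for every event

void Sending(object sender, NAudio.Wave.WaveInEventArgs e)
{
    if (connect == true && MuteMic.Checked == false)
    {
        udpclient.Send(e.Buffer, e.BytesRecorded, otherPartyIP.Address.ToString(), 1500);
    }
}

protected override void OnFormClosed(FormClosedEventArgs e)
{
    udpclient.Close(); // dispose once, when no longer needed
    base.OnFormClosed(e);
}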
My hosting company blocked my website for using more than 15 concurrent database connections. But in my code I closed each and every connection that I opened. Still they say there are too many concurrent connections, and they suggested that I change the source code of my website. So please tell me what the solution is. Also, my website is dynamic; would making it an old-style static HTML site make a difference or not?
Also note that when I couldn't think of any other solution, I added a con.Close() before every con.Open(), so that any other connection that was opened would be closed.
The first thing to do is to check when you open connections, and see if you can minimise that. For example, are you doing "n+1" queries on different connections?
If you have a single server, the technical solution here is a semaphore - for example, something like:
someSemaphore.WaitOne();
try {
    using (var conn = GetConnection()) {
        ...
    }
} finally {
    someSemaphore.Release();
}
which will (assuming someSemaphore is shared, for example static) ensure that you can only get into that block "n" times at once. In your case, you would create the semaphore with 15 spaces:
static readonly Semaphore someSemaphore = new Semaphore(15,15);
However! Caution is recommended: in some cases you could get a deadlock. Imagine two poorly written threads that each need 9 connections: thread A takes 7 and thread B takes 8. They both need more, and neither will ever get them. Thus, using WaitOne with a timeout is important:
static void TakeConnection() {
    if (!someSemaphore.WaitOne(3000)) {
        throw new TimeoutException("Unable to reserve connection");
    }
}
static void ReleaseConnection() {
    someSemaphore.Release();
}
...
TakeConnection();
try {
    using (var conn = GetConnection()) {
        ...
    }
} finally {
    ReleaseConnection();
}
It would also be possible to wrap that up in IDisposable to make usage more convenient.
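For example (a sketch, not a tested implementation; it reuses the static someSemaphore and GetConnection from above):

sealed class ConnectionSlot : IDisposable {
    public ConnectionSlot() {
        if (!someSemaphore.WaitOne(3000)) {
            throw new TimeoutException("Unable to reserve connection");
        }
    }
    public void Dispose() {
        someSemaphore.Release();
    }
}

// Usage: the slot is released even if GetConnection or the body throws.
using (new ConnectionSlot())
using (var conn = GetConnection()) {
    ...
}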
Change hosting company.
Seriously.
Unless you run a pathetic little home blog, you can easily have more than 15 pages/requests being handled at the same time. I am always wary of "runaway connections", but I would not consider 15 connections to even be worth mentioning. This is like a car rental company complaining that you drive more than 15 km: it is simply a REALLY low limit.
On a busy website you can have 50, 100, even 200 open connections, just because you have that many requests at the same time.
This is something not so obvious, but even if you take care to open and close your connections properly, you have to look at one thing in particular.
If you make the smallest change to the text you use to build a connection string, .NET will create a whole new pool and a new connection, instead of reusing one already opened (even if the connection uses MARS). So, just in case, look through your code for places where you create connection strings on the fly instead of using a single one from your web.config.
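For illustration (server and database names are placeholders), these two strings differ only by a single space, but pooling is keyed on the exact connection string, so each one gets its own pool:

var a = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True");
var b = new SqlConnection("Data Source=.; Initial Catalog=MyDb;Integrated Security=True");
// a and b are served from two different pools, doubling the open connections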
I believe SQL connections are pooled: when you close one, you actually just return it to the connection pool.
You can use SqlConnection.ClearPool(connection) or SqlConnection.ClearAllPools() to actually close the connections, but it will affect the performance of your site.
Also, you can disable pooling by using the connection string parameter Pooling=false.
There is also Max Pool Size (default 100); you may want to set it to a lower number.
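Roughly like this (a sketch with placeholder server/database names):

// Cap the pool below the host's 15-connection limit:
var pooled = "Server=myServer;Database=myDb;Integrated Security=True;Max Pool Size=10;";
// Or disable pooling entirely (slower, but connections really close):
var unpooled = "Server=myServer;Database=myDb;Integrated Security=True;Pooling=false;";
// And to actively close pooled connections at runtime:
SqlConnection.ClearAllPools();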
This all might work, but I would also suggest that you switch providers...
If you only fetch data from the database, then it is not very difficult to create some sort of cache (see the sketch below). But if there is full CRUD, then the better solution is to change hosting provider.
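A minimal sketch of such a cache, assuming System.Runtime.Caching is available and that LoadProductsFromDatabase stands in for your real data access:

using System;
using System.Data;
using System.Runtime.Caching;

static class ProductCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static DataTable GetProducts()
    {
        var cached = (DataTable)Cache.Get("products");
        if (cached != null)
            return cached; // served from memory, no database connection used

        DataTable fresh = LoadProductsFromDatabase(); // hypothetical DB call
        Cache.Set("products", fresh, DateTimeOffset.Now.AddMinutes(5));
        return fresh;
    }

    static DataTable LoadProductsFromDatabase()
    {
        // ... open a connection, query, fill and return a DataTable ...
        throw new NotImplementedException();
    }
}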
I'm working with a .XML document in C# to which I'm selecting nodes from, adding nodes to, and deleting nodes many, many times over a span of my code.
All of the XML editing of this document is contained within a class, which other classes call to.
Since the Data Access class has no way of telling if the classes using it are done with editing the document, it has no logic as to if/when to save.
I could save after every modification of the document, but I'm concerned with performance issues.
Alternatively I could just assume/hope that it will be saved by the other classes that use it (I created a one-line public method to save the document, so another class can request a save).
The second option concerns me as I feel like I should have it globally enforced in some manner to avoid it being called upon and modifications not being committed. To this point there will never be a case where a rollback is needed; any change is a change that should be committed.
Does .Net (Or coding design) have a way to balance performance and safety in such a situation?
If you always want to save the changes (just don't know when) then you could add the save command to the class destructor. This way you know the changes will always be saved.
If you need additional help or want an example please leave a comment, otherwise select an answer as correct.
Update: It has been brought to my attention that the class destructor may fire after other objects (like a FileStream) have already been disposed.
I recommend that you test for this condition in your destructor, and also that you implement and use the IDisposable interface. You can then subscribe to the Application.ApplicationExit event and call Dispose there.
Be sure to keep the code in the destructor (but make sure you have it in a try block) in case the program crashes or there is some other, unexpected exit.
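A sketch of that combination (XmlDataAccess and the constructor wiring are illustrative names, not a drop-in):

using System;
using System.Windows.Forms;
using System.Xml;

public class XmlDataAccess : IDisposable
{
    private readonly XmlDocument document = new XmlDocument();
    private readonly string path;
    private bool disposed;

    public XmlDataAccess(string path)
    {
        this.path = path;
        document.Load(path);
        // Ensure a final save on normal application exit.
        Application.ApplicationExit += (s, e) => Dispose();
    }

    public void Save()
    {
        document.Save(path);
    }

    public void Dispose()
    {
        if (disposed) return;
        disposed = true;
        Save(); // commit any outstanding changes
        GC.SuppressFinalize(this);
    }

    ~XmlDataAccess()
    {
        // Last resort for a crash or unexpected exit; other objects may
        // already be finalized here, so guard the save instead of crashing.
        try { Dispose(); } catch { /* log */ }
    }
}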
Basically your question says it all: you need to save, but you don't know when, as the knowledge about the save points is outside your class.
My recommendation is to wrap your calls: assuming you have something like public void MyClass.SomeEditing(int foo), create a wrapper like public void MyClass.SomeEditing(int foo, bool shouldSave) with shouldSave defaulting to true.
This way, a consumer of your class can decide whether he wants an immediate save or not, choosing false if he knows that an immediately following edit will cause the save. Existing code that calls the "old" API is protected by the default of "save immediately".
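In code, the wrapped call might look like this (SomeEditing and Save are the illustrative names from above):

public void SomeEditing(int foo, bool shouldSave = true)
{
    // ... perform the edit on the XML document ...
    if (shouldSave)
        Save(); // immediate save unless the caller is batching edits
}

// Existing callers keep the old behaviour:
//   myClass.SomeEditing(42);                    // saves immediately
// Batching callers opt out, then let the last edit save:
//   myClass.SomeEditing(1, shouldSave: false);
//   myClass.SomeEditing(2);                     // this call saves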
At first I assumed I need a writer lock here, but I'm not sure (I don't have much experience with this) what happens if I don't use it.
On the server side, there is a client class for each connected client. Each class contains a public list which every other class can write to. Client requests are processed via thread-pool work items.
class client
{
    public List<string> A = new List<string>();

    void someEventRaisedMethod(string param)
    {
        client OtherClient = GetClientByID(param); // selects client class by ID sent by msg sender
        OtherClient.A.Add("blah");
    }
}
What if two instances reference the same client and both try OtherClient.A.Add("blah")? Shouldn't there be some writer lock here? It works for me, but I encounter some strange issues that I think are due to this.
Thank you!
(update: as always, Eric Lippert has a timely blog entry)
If you don't use a lock, you risk missing data, state corruption, and probably the odd Exception, but only very occasionally, so it is very hard to debug.
Absolutely you need to synchronize here. I would expose a lock on the client (so we can span multiple operations):
lock (otherClient.LockObject) {
    otherClient.A.Add("blah");
}
You could make a synchronized Add method on otherClient, but it is often useful to span multiple operations, perhaps to check Contains and then Add only if missing, etc.
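That check-then-add case would look like this (same LockObject as above):

lock (otherClient.LockObject) {
    if (!otherClient.A.Contains("blah")) {
        otherClient.A.Add("blah");
    }
}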
Just to clarify 2 points:
all access to the list (even reads) must also take the lock; otherwise it doesn't work
the LockObject should be a readonly reference-type
for the second, perhaps:
private readonly object lockObject = new object();
public object LockObject { get { return lockObject; } }
From my point of view you should do the following:
Isolate the list into a separate class which implements either the IList interface or only the subset that you require
Either add locking on a private object in the methods of your list class, or use the ReaderWriterLockSlim implementation. As the list is isolated, there is only one place that needs changing, in one single class (see the sketch after this list)
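A minimal sketch of that isolated, self-locking list (the class name and member subset are illustrative):

public class SynchronizedStringList
{
    private readonly List<string> items = new List<string>();
    private readonly object gate = new object();

    public void Add(string value)
    {
        lock (gate) { items.Add(value); }
    }

    public bool Contains(string value)
    {
        lock (gate) { return items.Contains(value); } // reads must take the lock too
    }
}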
I don't know the C# internals, but I do remember reading a while back about a Java example that could cause a thread to loop endlessly if it was reading a collection while an insert was being done on it (I think it was a hashtable). So make sure that if you are using multiple threads, you lock on both reads and writes. Marc Gravell is correct that you should just create a global lock to handle this, since it sounds like you have fairly low volume.
ReaderWriterLockSlim is also a good option if you do a lot of reading and only a few write/update actions.