My hosting company blocked my website for using more than 15 concurrent database connections. But in my code I closed each and every connection that I opened, yet they still say there are too many concurrent connections, and they suggested that I change the source code of my website. So please tell me the solution to this. Also, my website is dynamic, so would making it static, simple old-days HTML, make a difference or not?
Also note that, when I could think of no other solution, I tried adding con.Close() before every con.Open(), so that any other connection that had been opened would be closed.
The first thing to do is to check when you open connections - see if you can minimise that. For example, are you doing "n+1" queries on different connections?
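For a hedged illustration of that "n+1" pattern (the names here are invented):
// the "n+1" anti-pattern: one query per item, each on its own connection
foreach (var orderId in orderIds) {
    using (var conn = GetConnection()) { // n extra connections drawn from the pool
        LoadOrderDetails(conn, orderId);
    }
}
// better: load the whole batch on one connection, ideally with one set-based query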
If you have a single server, the technical solution here is a semaphore - for example, something like:
someSemaphore.WaitOne();
try {
    using(var conn = GetConnection()) {
        ...
    }
} finally {
    someSemaphore.Release();
}
which will (assuming someSemaphore is shared, for example static) ensure that you can only get into that block "n" times at once. In your case, you would create the semaphore with 15 spaces:
static readonly Semaphore someSemaphore = new Semaphore(15,15);
However! Caution is recommended: in some cases you could get a deadlock: imagine 2 poorly written threads that each need 9 connections - thread A takes 7 and thread B takes 8. They both need more, and neither will ever get them. Thus, using WaitOne with a timeout is important:
static void TakeConnection() {
    if(!someSemaphore.WaitOne(3000)) {
        throw new TimeoutException("Unable to reserve connection");
    }
}
static void ReleaseConnection() {
    someSemaphore.Release();
}
...
TakeConnection();
try {
    using(var conn = GetConnection()) {
        ...
    }
} finally {
    ReleaseConnection();
}
It would also be possible to wrap that up in IDisposable to make usage more convenient.
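For example, a minimal sketch of such a wrapper (the ConnectionSlot name is invented for illustration):
using System;
using System.Threading;

sealed class ConnectionSlot : IDisposable {
    // shared limiter, sized to the host's 15-connection cap
    static readonly Semaphore someSemaphore = new Semaphore(15, 15);

    public ConnectionSlot() {
        // reserve a slot, failing fast rather than deadlocking
        if (!someSemaphore.WaitOne(3000))
            throw new TimeoutException("Unable to reserve connection");
    }

    public void Dispose() {
        someSemaphore.Release(); // hand the slot back
    }
}
...
using (new ConnectionSlot())
using (var conn = GetConnection()) {
    ...
}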
Change Hosting Company.
Seriously.
Unless you run a pathetic little home blog.
You can easily have more than 15 pages/requests being handled at the same time. I am always wary of "runaway connections", but I would not consider 15 connections to even be something worth mentioning. This is like a car rental company complaining that you drive more than 15 km - this simply is a REALLY low limit.
On a busy website you can have 50, 100, even 200 open connections just because you have that many requests at the same time.
This is something not so obvious, but even if you take care to open and close your connections properly, there is something particular you have to look at.
If you make the smallest change to the text you use to build a connection string, .NET will create a whole new connection pool instead of using one already open (even if the connection uses MARS), so just in case, look through your code for places where you are creating connection strings on the fly instead of using a single one from your web.config.
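For example, a minimal sketch (the "MainDb" name is an assumption - use whatever key your web.config defines):
using System.Configuration;
using System.Data.SqlClient;
...
// one fixed string from config means one connection pool
string connStr = ConfigurationManager.ConnectionStrings["MainDb"].ConnectionString;
using (var conn = new SqlConnection(connStr)) {
    conn.Open();
    // ... run commands ...
}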
I believe SQL connections are pooled. When you close one, you actually just return it to the connection pool.
You can use SqlConnection.ClearPool(connection) or SqlConnection.ClearAllPools to actually close the connection, but it will affect the performance of your site.
Also, you can disable pooling by using connection string parameter Pooling=false.
There is also a Max Pool Size parameter (default 100); you may want to set it to a lower number.
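For example (server, database, and credentials are placeholders):
// a connection string capped below the host's 15-connection limit
var connStr = "Server=myServer;Database=myDb;Integrated Security=true;Max Pool Size=15";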
This all might work, but I would also suggest you switch providers...
If you only fetch data from the database, then it is not very difficult to create some sort of cache. But if there is full CRUD, then the better solution is to change hosting provider.
Related
I have a C# Winforms app that is large and complex. It makes OleDB connections to an Access database at various times for various reasons. In a certain function we need to MOVE (copy + delete) the mdb file, but it can't be done because it's locked. I've tried lots of different things to unlock/release the mdb file, and sometimes it works.
But in a certain 100% reproducible scenario, it cannot be unlocked. We have 2 global oledb connection variables we reuse everywhere, for efficiency, and to avoid having 1-off connections everywhere. And these 2 connection vars are useful for when we want to CLOSE the connections, so we can delete the mdb.
Here is my function (which normally works - just not in this 1 case) to forcibly close/release the 2 oledb connections from our winforms app:
public static void CloseOleDBConnections(bool forceReleaseAll = false) {
    if ( DCGlobals.Connection1 != null )
        DCGlobals.Connection1.Close();
    if ( DCGlobals.Connection2 != null )
        DCGlobals.Connection2.Close();
    if ( forceReleaseAll ) {
        // guard against nulls here too, to avoid a NullReferenceException
        if ( DCGlobals.Connection1 != null )
            DCGlobals.Connection1.Dispose();
        if ( DCGlobals.Connection2 != null )
            DCGlobals.Connection2.Dispose();
        OleDbConnection.ReleaseObjectPool();
        GC.Collect(GC.MaxGeneration);
        GC.WaitForPendingFinalizers();
    }
}
I am passing true into the above function.
One other thought: Certainly my Winforms app knows about all open OleDbConnections. Is there no way to tell C# to find and iterate all open connections? When I close/exit my application - poof - the open connection to the mdb is released and I can delete the file. So something in .NET knows about the connection and knows how to release it - so how can I tap into that same logic without exiting the application?
Post Script
(I am aware that Access is bad, non-scalable, etc. - it's a legacy requirement and we're stuck with it for now).
I have seen numerous Stack Overflow discussions (and discussions on other forums) on this topic. I have tried numerous recommendations, to no avail.
Disposed IDataReaders?
Do you dispose of all IDataReader objects properly? They may prevent the connection from closing properly.
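For example, a minimal sketch (the command text is illustrative) - wrapping the reader in using guarantees it is disposed even if an exception is thrown mid-read:
using (var cmd = new OleDbCommand("SELECT ...", connection))
using (IDataReader reader = cmd.ExecuteReader()) {
    while (reader.Read()) {
        // ... consume rows ...
    }
} // reader disposed here, releasing its hold on the connection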
Tracking Solution
In any case, you need to at least better track all your connections. It sounds like a very large project. You need to be absolutely sure that all connections are being disposed.
1. New TrackedOleDbConnection object
Create a TrackedOleDbConnection object which inherits from OleDbConnection, but adds a static, thread-safe collection named StillOpen (a ConcurrentDictionary will do, since the BCL has no ConcurrentList). When a TrackedOleDbConnection is constructed, add it to the collection; when it is disposed (override that method), remove it.
using System.Collections.Concurrent;
using System.Data.OleDb;

public class TrackedOleDbConnection : OleDbConnection
{
    // Thread-safe registry of connections that have not been disposed yet
    static readonly ConcurrentDictionary<TrackedOleDbConnection, byte> stillOpen =
        new ConcurrentDictionary<TrackedOleDbConnection, byte>();

    public static ICollection<TrackedOleDbConnection> StillOpen
    {
        get { return stillOpen.Keys; }
    }

    public TrackedOleDbConnection() : base()
    {
        stillOpen.TryAdd(this, 0);
    }

    public TrackedOleDbConnection(string connectionString) : base(connectionString)
    {
        stillOpen.TryAdd(this, 0);
    }

    // You don't need a constructor for every overload of the base class,
    // only for the overloads your project uses.

    protected override void Dispose(bool disposing)
    {
        byte ignored;
        stillOpen.TryRemove(this, out ignored);
        base.Dispose(disposing);
    }

    // Finalizer, to ensure the connection is always removed if Dispose wasn't called
    ~TrackedOleDbConnection()
    {
        // TODO: log when this runs, so you know you still have missing
        // Dispose calls in your code, and then find and add them.
        Dispose(false);
    }
}
2. Don't directly reference OleDbConnection anymore
Then do a simple Find and Replace across your solution to use TrackedOleDbConnection.
Then finally, during your CloseOleDBConnections function, you can check TrackedOleDbConnection.StillOpen to see whether you've got an undisposed connection hanging around somewhere.
Wherever you find such undisposed connections, don't use the single central references; instead, wrap each connection in a using block to ensure it is disposed properly.
If the only thing you need is to copy the file, there is probably no need to mess with connections at all. Please take a look at this:
https://www.raymond.cc/blog/copy-locked-file-in-use-with-hobocopy/
It's highly likely that ADOX is not releasing the connection to the database. Make sure that you:
explicitly call 'Close' on the ADOX Connection objects
call 'Dispose' on them
call System.Runtime.InteropServices.Marshal.FinalReleaseComObject(db.ActiveConnection);
call System.Runtime.InteropServices.Marshal.FinalReleaseComObject(db);
set them to Nothing/null
Also, when something calls close on a file handle, the close request is put in a queue to be processed by the kernel. In other words, even closing a simple file doesn't happen instantly. Because of this, you may have to put in a time-boxed loop that checks that the .LDB file has been removed... though that will ultimately require the user to wait. Seek any other alternative to this approach, though it has been necessary with other formats/connections IME in the past.
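A minimal sketch of such a time-boxed wait (the path and timeout are illustrative):
static bool WaitForLockFileRemoval(string ldbPath, int timeoutMs = 5000) {
    var sw = System.Diagnostics.Stopwatch.StartNew();
    while (sw.ElapsedMilliseconds < timeoutMs) {
        if (!System.IO.File.Exists(ldbPath))
            return true;                    // lock file gone, safe to move the .mdb
        System.Threading.Thread.Sleep(100); // poll at a modest interval
    }
    return false;                           // still locked after the timeout
}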
I'd like to ask a question. I've been trying to find some information regarding transactions with multiple connections, but I haven't been able to find any good source of information.
Now for what I'm trying to do. I have code that looks like this:
using (var Connection1 = m_Db.CreateConnection())
using (var Connection2 = m_Db.CreateConnection())
{
    Connection1.DoRead(..., (IDataReader Reader) =>
    {
        // Do stuff
        Connection2.DoWrite(...);
        Connection2.DoRead(..., (IDataReader Reader) =>
        {
            // Do more stuff
            using (var Connection3 = m_Db.CreateConnection())
            {
                Connection3.DoWrite(...);
                Connection3.Commit(); // Is this even right?
            }
        });
    });
    Connection1.DoRead(..., (IDataReader Reader) =>
    {
        // Do yet more stuff
    });
    Connection1.Commit();
    Connection2.Commit();
}
Each CreateConnection creates a new transaction using MySqlConnection::BeginTransaction. The CreateConnection method creates a Connection object which wraps a MySqlConnection. The DoRead function executes some SQL, and disposes the IDataReader when done.
Every Connection will do a Rollback when disposed.
Now for some notes:
I have ONE server with multiple databases.
I am running MySql server with InnoDB databases.
I am doing both reads and writes to these databases.
For performance reasons and not to mess up the database, I am using transactions.
The code is (at least, for now) entirely serial. There are NO concurrent threads. All inserts and queries are done in serial fashion.
I use multiple connections to the database because a read or write is not allowed while another read is in progress (basically the reader object has not yet been disposed).
I basically want every connection to see all changes. So for example, after Connection 3 does some writes, Connection 1 should see those. But the data should be in the transaction and not written to the database (yet).
Now, as for my questions:
Does this work? Will everything only be committed once the last Commit function is called? Should I use another approach?
Is this right? Is my approach completely and utterly wrong and silly?
Any drawbacks? Especially regarding performance.
Thanks.
Welp, it seems no one knows. But that's okay.
For now, I just went with the method of using one connection and reading all the results into a list first, then closing the reader, thereby avoiding the problem of having to use multiple connections.
Might there be performance problems? Maybe, but it's better than having to deal with uncertainty and deadlocks.
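A minimal sketch of that buffer-then-write approach, assuming a plain MySqlConnection (the SQL and names are illustrative):
using System.Collections.Generic;
using MySql.Data.MySqlClient;
...
using (var conn = new MySqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction())
    {
        // buffer the whole result set so the reader can be closed early
        var rows = new List<object[]>();
        using (var cmd = new MySqlCommand("SELECT ...", conn, tx))
        using (var reader = cmd.ExecuteReader())
        {
            while (reader.Read())
            {
                var values = new object[reader.FieldCount];
                reader.GetValues(values);
                rows.Add(values);
            }
        } // reader disposed here, so the connection is free for writes

        foreach (var row in rows)
        {
            // ... issue writes on the same connection/transaction ...
        }
        tx.Commit();
    }
}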
I was looking into the possibility that one of my applications might have a memory leak, so started playing about with some very basic code samples. One I ended up with, when left over time, started to increase greatly in terms of the number of Handles (>3000). It is a very simple Console application with the code as follows:
public static void Main(string[] args)
{
    using (SqlConnection sqlConnection = new SqlConnection())
    {
    }
    Console.ReadLine();
}
Taking out the SqlConnection call removes any Handle increase, so I am assuming it has something to do with the connection pool. But as this only runs once before basically going into a wait for input, why would the Handle count keep increasing?
Thanks.
If you are running it on .NET 4.0, this might be the case
https://connect.microsoft.com/VisualStudio/feedback/details/691725/sqlconnection-handle-leak-net-4-0
You will find that the majority of the object cache is composed of framework objects, such as those created so you can access the config files and resources without having to manually parse the files yourself.
IIRC the default object cache is about 4000 objects.
You have to remember that just because you're only creating and disposing of a single object doesn't mean that's all the framework is doing.
Say you have an Action in ASP.NET MVC in a multi-instance environment that looks something like this*:
public void AddLolCat(int userId)
{
    var user = _Db.Users.ById(userId);
    user.LolCats.Add( new LolCat() );
    user.LolCatCount = user.LolCats.Count();
    _Db.SaveChanges();
}
When a user repeatedly presses a button or refreshes, race conditions can occur, making it possible for LolCatCount to not match the actual number of LolCats.
Question
What is the common way to fix these issues? You could fix it client-side in JavaScript, but that might not always be possible, e.g. when something happens on a page refresh, or because someone is screwing around in Fiddler.
I guess you have to make some kind of a network based lock?
Do you really have to suffer the extra latency per call?
Can you tell an Action that it is only allowed to be executed once per User?
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Do you return early, or do you really lock the process?
When you return early, is there an 'established' response / response code I should return?
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
* just a stupid example shown for brevity. Real world examples are a lot more complicated.
Answer 1: (The general approach)
If the data store supports transactions you could do the following:
using(var trans = new TransactionScope(TransactionScopeOption.Required,
        new TransactionOptions { IsolationLevel = IsolationLevel.Serializable })) {
    var user = _Db.Users.ById(userId);
    user.LolCats.Add( new LolCat() );
    user.LolCatCount = user.LolCats.Count();
    _Db.SaveChanges();
    trans.Complete();
}
this will lock the user record in the database making other requests wait until the transaction has been committed.
Answer 2: (Only possible with single process)
Enabling sessions and using session state will cause implicit locking between requests from the same user (session).
Session["TRIGGER_LOCKING"] = true;
Answer 3: (Example specific)
Deduce the number of LolCats from the collection instead of keeping track of it in a separate field and thus avoid inconsistency issues.
Answers to your specific questions:
I guess you have to make some kind of a network based lock?
yes, database locks are common
Do you really have to suffer the extra latency per call?
say what?
Can you tell an Action that it is only allowed to be executed once per User?
You could implement an attribute that uses the implicit session locking or some custom variant of it but that won't work between processes.
Is there any common pattern already in place that you can use? Like a Filter or attribute?
Common practice is to use locks in the database to solve the multi instance issue. No filter or attribute that I know of.
Do you return early, or do you really lock the process?
Depends on your use case. Commonly you wait ("lock the process"). However if your database store supports the async/await pattern you would do something like
var user = await _Db.Users.ByIdAsync(userId);
this will free the thread to do other work while waiting for the lock.
When you return early, is there an 'established' response / response code I should return?
I don't think so, pick something that fits your use case.
When you use a lock, how do you prevent thread starvation with (semi) long running processes?
I guess you should consider using queues.
By "multi-instance" you're obviously referring to a web farm or maybe a web garden situation where just using a mutex or monitor isn't going to be sufficient to serialize requests.
So... do you have just one database on the back end? Why not just use a database transaction?
It sounds like you probably don't want to force serialized access to this one section of code for all user id's, right? You want to serialize requests per user id?
It seems to me that the right thinking about this is to serialize access to the source data, which is the LolCats records in the database.
I do like the idea of disabling the button or link in the browser for the duration of a request, to prevent the user from hammering away on the button over and over again before previous requests finish processing and return. That seems like an easy enough step with a lot of benefit.
But I doubt that is enough to guarantee the serialized access you want to enforce.
You could also implement shared session state and implement some kind of a lock on a session-based object, but it would probably need to be a collection (of user id's) in order to enforce the serializable-per-user paradigm.
I'd vote for using a database transaction.
I suggest, and personally use, a mutex in this case.
I have written here: Mutex release issues in ASP.NET C# code, a class that handles mutexes, but you can make your own.
So, based on the class from that answer, your code will look like:
public void AddLolCat(int userId)
{
    // I add some text in front of the number, because I see it's an integer,
    // so it's better to make it a little more complex to avoid conflicts
    var gl = new MyNamedLock("SiteName." + userId.ToString());
    // Enter lock
    if (gl.enterLockWithTimeout())
    {
        try
        {
            var user = _Db.Users.ById(userId);
            user.LolCats.Add( new LolCat() );
            user.LolCatCount = user.LolCats.Count();
            _Db.SaveChanges();
        }
        finally
        {
            // Leave lock - only after it was actually acquired
            gl.leaveLock();
        }
    }
    else
    {
        // log the error
        throw new Exception("Failed to enter lock");
    }
}
Here the lock is based on the user, so different users will not block each other.
About Session Lock
If you use the ASP.NET session in your call, then you may win a free lock "ticket" from the session. The session is locked on each call until the page is returned.
Read about that on this q/a:
Web app blocked while processing another web app on sharing same session
Does ASP.NET Web Forms prevent a double click submission?
jQuery Ajax calls to web service seem to be synchronous
Well, MVC is stateless, meaning that you'll have to handle this yourself manually. From a purist perspective I would recommend preventing the multiple presses by using a client-side lock, although my preference is to disable the button and apply an appropriate CSS class to demonstrate its disabled state. I guess my reasoning is that we cannot fully determine the consumer of the action, so while you provide the example of Fiddler, there is no way to truly determine whether multiple clicks are applicable or not.
However, if you wanted to pursue a server-side locking mechanism, this article provides an example storing the requester's information in the server-side cache and returns an appropriate response depending on the timeout / actions you would want to implement.
HTH
One possible solution is to avoid the redundancy which can lead to inconsistent data.
I.e. if LolCatCount can be determined at runtime, then determine it at runtime instead of persisting this redundant information.
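For example, after dropping the persisted column, a hedged sketch (assuming a partial entity class, as is typical for EF/LINQ-to-SQL models):
using System.Linq;

public partial class User
{
    // computed on demand, so it can never drift out of sync with the collection
    public int LolCatCount
    {
        get { return LolCats.Count(); }
    }
}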
I've been searching for some time now, here and in other places, and can't find a good answer to why LINQ-to-SQL with NOLOCK is not possible.
Every time I search for how to apply the WITH(NOLOCK) hint to a LINQ-to-SQL context (applied to one SQL statement), people often answer that you should force a transaction (TransactionScope) with IsolationLevel set to ReadUncommitted. Well - they rarely mention that this causes the connection to open a transaction (which, I've also read somewhere, must be ensured to be closed manually).
Using ReadUncommitted in my application as-is is really not that good. Right now I've got nested using-context statements for the same connection, like:
using( var ctx1 = new Context()) {
    ... some code here ...
    using( var ctx2 = new Context()) {
        ... some code here ...
        using( var ctx3 = new Context()) {
            ... some code here ...
        }
        ... some code here ...
    }
    ... some code here ...
}
With a total execution time of 1 second and many users at the same time, changing the isolation level will cause the contexts to wait for each other to release a connection, because all the connections in the connection pool are being used.
So one reason (of many) for changing to "nolock" is to avoid deadlocks (right now we have 1 customer deadlock per day). The consequence of the above is just another kind of deadlock, and it really doesn't solve my issue.
So what I know I could do is:
Avoid nested usage of same connection
Increase the connection pool size at the server
But my problem is:
This is not possible in the near future, because it means refactoring many lines of code, and it will conflict with the architecture (without even starting to discuss whether that architecture is good or bad)
Even though this of course will work, it is what I would call "symptomatic treatment" - as I don't know how much the application will grow and whether this is a reliable solution for the future (and then I might end up with an even worse situation with a lot more users being affected)
My thoughts are:
Can it really be true that NoLock is not possible (for each statement without starting transactions)?
If 1 is true - can it really be true that no one else has had this problem and solved it with a generic LINQ-to-SQL modification?
If 2 is true - why is this not an issue for others?
Is there another workaround I haven't looked at, maybe?
Is using the same connection (nested) many times such bad practice that no one has this issue?
1: LINQ-to-SQL does indeed not allow you to indicate hints like NOLOCK; it is possible to write your own TSQL, though, and use ExecuteQuery<T> etc
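For example, a minimal sketch via DataContext.ExecuteQuery<T> (the Customer type and the query are invented):
// hand-written TSQL carrying the hint; {0} is a LINQ-to-SQL parameter placeholder
var rows = ctx.ExecuteQuery<Customer>(
    @"SELECT Id, Name FROM Customers WITH (NOLOCK) WHERE Region = {0}", region);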
2: to solve in an elegant way would be pretty complicated, frankly; and there's a strong chance that you would be using it inappropriately. For example, in the "deadlock" scenario, I would wager that actually it is UPDLOCK that you should be using (during the first read), to ensure that the first read takes a write lock; this prevents a second later query getting a read lock, so you generally get blocking instead of deadlock
3: re-using the connection isn't necessarily a big problem (although note that new Context() won't generally share a connection; to share a connection you would use new Context(connection)). If you are seeing this issue, there are three likely solutions (if we exclude "use an ORM with hint support"):
using an explicit transaction (which doesn't have to be TransactionScope - it can be a connection-level transaction) to specify the isolation level (see the sketch after this list)
write your own TSQL with hints
use a connection-level isolation level (noting the caveat I added as a comment)
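A minimal sketch of the first option, assuming a hypothetical MyDataContext constructed over an existing connection:
using (var conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (var tx = conn.BeginTransaction(IsolationLevel.ReadUncommitted))
    using (var ctx = new MyDataContext(conn))
    {
        ctx.Transaction = tx; // the DataContext now runs its queries inside this transaction
        // ... queries here read at READ UNCOMMITTED ...
        tx.Commit();
    }
}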
IIRC there is also a way to subclass the data-context and override some of the transaction-creation code to control the isolation-level for the transactions that it creates internally.