How to get better control over connection pool? - c#

I've been getting this error recently, after several reloads of the same page:
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached
So I'm thinking there must be some queries or calls in the app that I used incorrectly, causing them not to release their connections. Are there any tools out there that would let me peek into the pool to see who is hanging on to what?

There's a timeout property on your connection object that you can change. This controls how long it waits to obtain a connection; there's also a command timeout, which controls how long a command can run before it times out (but the first one sounds like what you need). See here (anything that inherits from DbConnection should have this if you aren't using SQL Server).
Have a look here too, might help :)
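As a rough illustration (a minimal sketch assuming SQL Server and System.Data.SqlClient; the server, database, and table names are placeholders), the connection timeout is set through the connection string, while the command timeout is set per command:

using System.Data.SqlClient;

// Connection timeout (time allowed to obtain a connection) lives in the connection string.
var connectionString = "Data Source=myServer;Initial Catalog=myDb;Integrated Security=True;Connect Timeout=60;";

using (var conn = new SqlConnection(connectionString))
using (var cmd = new SqlCommand("SELECT COUNT(*) FROM SomeTable", conn))
{
    // Command timeout (time allowed for the command itself to run), in seconds.
    cmd.CommandTimeout = 120;

    conn.Open();
    var count = (int)cmd.ExecuteScalar();
}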

Related

Postgresql and .Net - Connection Pooling with Multiple Endpoints

Basic Intro:
Connection string: Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=100
My web application has several dozen endpoints available via WS and HTTP. Every one of these endpoints opens a new Npgsql connection (all using the same connection string as above), processes data, then closes it via the using statement.
Issue:
When the application restarts for an update, there are typically 2,000-3,000 users all reconnecting. This typically leads to errors about the connection pool being full and new connections being rejected because there are too many clients already. However, once it finally comes online, it typically only uses between 5 and 10 connections at any given time.
Question:
Is the logic below the proper way to use connection pooling, with every endpoint creating a new Npgsql connection using the same connection string that specifies a maximum pool size of 100?
It seems that the connection pool often shoots right up to 100, but ~80 of those 100 connections are shown as idle in a DB viewer, while new connection requests are denied due to pool overflow.
Better option?
I could also try to force a more "graceful" startup by slowly allowing new users to reconnect, but I'm not sure if the logic of creating a new connection in every endpoint is correct.
// DB Connection String - Used for all Npgsql connections
const string connectionStr = "Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=100";

// Endpoint 1 available via WebSocket
public async Task someRequest(someClass someArg)
{
    /* Create a new SQL connection for this user's request using the same global connection string */
    using var conn = new NpgsqlConnection(connectionStr);
    conn.Open();

    /* Call functions and pass this SQL connection for any queries to process this user request */
    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    /* Request processing is done */
    /* conn is closed automatically by the "using" statement above */
}

// Endpoint 2 available via WebSocket
public async Task someOtherRequest(someClass someArg)
{
    /* Create a new SQL connection for this user's request using the same global connection string */
    using var conn = new NpgsqlConnection(connectionStr);
    conn.Open();

    /* Call functions and pass this SQL connection for any queries to process this user request */
    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    /* Request processing is done */
    /* conn is closed automatically by the "using" statement above */
}

// endpoint3();
// endpoint4();
// endpoint5();
// endpoint6();
// etc.
EDIT:
I've made the suggested change, closing connections and returning them to the pool during processing. However, the issue still persists on startup:
1. Application startup: 100 connections are claimed for pooling. Almost all of them are idle. The application receives connection pool exhaustion errors, and little to no transactions are processed.
2. Transactions suddenly start churning; I'm not sure why. Is this after some sort of timeout, perhaps? I know there was a 300-second default timeout in the documentation somewhere... this might match up here.
3. Transactions lock up again, and pool exhaustion errors resume.
4. Transactions spike and resume; user requests start coming through again.
5. The application levels out as normal.
EDIT 2:
This startup issue consistently takes 5 minutes from startup to clear a deadlock of idle transactions and start running all of the queries.
I know 5 minutes is the default value for idle_in_transaction_session_timeout. However, I tried running SET SESSION idle_in_transaction_session_timeout = '30s'; (and 10s) during startup this time, and it didn't seem to have any impact.
I'm not sure why those 100 pooled connections would be stuck idle like that on startup, taking 5 minutes to clear and allow queries to run, if that's the case...
I had forgotten to update this post with some of the latest information. There were a few other internal optimizations I had made in the code.
One of the major ones was simply changing conn.Open(); to await conn.OpenAsync(); and conn.Close(); to await conn.CloseAsync();.
Everything else I had was properly async, but there was still I/O blocking on every new connection in Npgsql, causing worse performance with large bursts.
A very obvious change, but I hadn't even thought to look for async methods for opening and closing the connections.
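For reference, a minimal sketch of that change, reusing the placeholder names from the code above and assuming a recent Npgsql version that supports await using:

// Endpoint handler using the async open/dispose methods instead of the blocking ones
public async Task someRequest(someClass someArg)
{
    await using var conn = new NpgsqlConnection(connectionStr);
    await conn.OpenAsync();   // no longer blocks a thread while the connection is established

    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    // "await using" disposes the connection asynchronously, returning it to the pool
}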
A connection is released to the pool once you close it in your code. From what you wrote, you are keeping it open for the entire duration of a request, so basically 1 user = 1 connection, and pooling is just used as a waiting room (the timeout setting, 15 seconds by default). Open/close the connection each time you need to access the DB, so that the connection is returned to the pool and can be used by another user while time is spent in .NET code.
Example, in pseudo code:
Enter function
Do some computations in .net, like input validation
Open connection (grab it from the pool)
Fetch info #1
Close connection (return it to the pool)
Do some computations in .net, like ordering the result, computing an age from a date, etc.
Open connection (grab it from the pool)
Fetch info #2
Close connection (return it to the pool)
Do some computations in .net
Return
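A rough C# sketch of that pattern, reusing the question's Npgsql setup; the Validate, FetchInfo1, FetchInfo2, and OrderResults helpers and the SomeInfo type are placeholders for your own code:

public async Task HandleRequest(someClass someArg)
{
    // .NET-side work first, with no connection held
    Validate(someArg);

    SomeInfo info1;
    await using (var conn = new NpgsqlConnection(connectionStr))
    {
        await conn.OpenAsync();              // grab a connection from the pool
        info1 = await FetchInfo1(conn, someArg);
    }                                        // disposed here, returned to the pool

    // more .NET-side work while no connection is held
    var ordered = OrderResults(info1);

    await using (var conn2 = new NpgsqlConnection(connectionStr))
    {
        await conn2.OpenAsync();             // grab a connection again
        await FetchInfo2(conn2, ordered);
    }                                        // returned to the pool again
}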

Troubleshooting SQL Timeout Expired

My C# application is currently throwing a lot of the exception below:
Timeout expired. The timeout period elapsed prior to completion of
the operation or the server is not responding. This failure occurred
while attempting to connect to the routing destination.
I am using LINQ queries and NHibernate.
I am having difficulty troubleshooting this because the exception does not occur every time the query is run. If I take the query and run it directly in SSMS, it seems to run very quickly.
The timeout exceptions only appear to occur when ran against one table in the database.
I know I am able to increase the query timeout but I would like to resolve the root cause of the issue. I have a limited knowledge in troubleshooting these issues so what are the next steps I need to take to determine what the problem is?
Increase the 'Connect Timeout' in your connection string. 60 is a good number.
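For example (the server, database, and credentials here are placeholders):

Server=myServer;Database=myDb;User Id=myUser;Password=myPassword;Connect Timeout=60;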

Oracle Error: Pooled connection request timed out

I'm using Oracle 12c, with the application written in C# and using Oracle.ManagedDataAccess.dll to handle the DB connection.
A product we have has started to occasionally throw this exception after running fine for years:
Oracle.ManagedDataAccess.Client.OracleException (0xFFFFFC0C): Pooled connection request timed out
at OracleInternal.ConnectionPool.PoolManager`3.Get(ConnectionString csWithDiffOrNewPwd, Boolean bGetForApp, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OraclePoolManager.Get(ConnectionString csWithNewPassword, Boolean bGetForApp, String affinityInstanceName, Boolean bForceMatch)
at OracleInternal.ConnectionPool.OracleConnectionDispenser`3.Get(ConnectionString cs, PM conPM, ConnectionString pmCS, SecureString securedPassword, SecureString securedProxyPassword)
at Oracle.ManagedDataAccess.Client.OracleConnection.Open()
I know the cause of this error. Looking at the code, neither the OracleConnection nor the OracleCommand objects are being disposed, so these connections build up until the exception is eventually thrown.
The fix is straightforward: wrap these in using statements. I don't need help with that.
However, what interests me is why this problem has started now. This software was running for years without issue. They did some database maintenance and updated other software on the same server, and then this problem started. I don't know what DB maintenance they did.
The connection string in the application does not specify any pool attributes.
Is there an Oracle DB setting that lowers the number of simultaneous connections allowed in the database and could have caused this to start occurring?
UPDATE:
I wrote a little test app to check the limit.
It just loops around, opens a connection, performs a basic query, and doesn't dispose the connection.
On my test system it starts throwing this exception after roughly 640 loops. It varies by give or take 10 loops each time I run it.
What is setting this limit?
I just had the same problem.
The reason you get that exception is that the Oracle pool manager doesn't have a free connection anymore (by default you can have up to 100 connections). Often the cause is connections that are never closed (so 'using' was the right way).
Even though you already found a solid solution for that problem, I want to add these:
- Try/catch and using is always a good idea (see the sketch after this list)
- Adjust the Min Pool Size (or Max Pool Size, but the default of 100 should be enough) so there's always a connection ready to use and a new one doesn't have to be established first (see https://docs.oracle.com/en/database/oracle/oracle-database/21/jjucp/optimizing-ucp-behavior.html#GUID-FFCAB66D-45B3-4D7B-991B-40F1480630FD)
- Update to Oracle.ManagedDataAccess >= 21.8.0 (Bug 34322469 - CONNECTION POOL THROWS "CONNECTION REQUEST TIMED OUT" EXCEPTION DUE TO LOOPING WITHIN POOLMANAGER.GET() is fixed there)
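A minimal sketch of the using pattern with the managed driver (the connection string, table name, and pool sizes here are placeholders):

using Oracle.ManagedDataAccess.Client;

// Both the connection and the command are disposed, so the connection
// goes back to the pool as soon as the block ends.
using (var conn = new OracleConnection("User Id=someUser;Password=somePass;Data Source=someTns;Max Pool Size=100;Min Pool Size=5"))
using (var cmd = new OracleCommand("SELECT COUNT(*) FROM some_table", conn))
{
    try
    {
        conn.Open();
        var count = Convert.ToInt32(cmd.ExecuteScalar());
    }
    catch (OracleException ex)
    {
        // log and handle; the using blocks still dispose the objects
        Console.WriteLine(ex.Message);
    }
}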

sql timeout expired

Logging:System.Data.SqlClient.SqlException: Timeout expired. The
timeout period elapsed prior to completion of the operation or the
server is not responding.
I am a beginner. In the application log files, the above is the most frequent error I see, and it is repeated every day. On the database side, the time taken to execute the particular procedure that this function calls is less than 5 seconds.
In the application we set the connection timeout to 200 seconds, and the command timeout is the default 30 seconds. Our manager says we shouldn't increase the command timeout any further, which is fair. But the exception still keeps coming.
Can anyone suggest a solution so I can get rid of this problem? Thanks.
The setting in the web config, if it's the timeout in the connection string setting, is the connection timeout. It only applies to the time it takes to make a connection. From your problem description, it doesn't sound like a connection timeout is what's happening.
Command timeouts are specified in other ways. If you are using DataContext, for example, the timeout is set using the CommandTimeout property.
http://msdn.microsoft.com/en-us/library/system.data.linq.datacontext.commandtimeout.aspx
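For instance, a sketch assuming a LINQ to SQL context named MyDataContext (the context, table, and column names are placeholders):

using (var db = new MyDataContext(connectionString))
{
    // Allow individual commands up to 120 seconds before they time out
    db.CommandTimeout = 120;

    var rows = db.SomeTable.Where(r => r.SomeColumn == someValue).ToList();
}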
If you can give a code snippet of how you are hitting the database so we can see what classes you are using, more specific recommendations can be made.

problem regarding maximum pool size in asp.net

I have been working on a small file manager module in a project where a list of folders is shown in a treeview. I have done the whole thing in JavaScript. Every time I click a node, a list of data is fetched into a DataReader and populated in the front end.
But when I deploy the application to IIS, after about 18 consecutive clicks IIS halts and I have to reset it. When I checked the Event Viewer I got the following error:
Exception type: InvalidOperationException
Exception message: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
So in my connection string in the web.config, I set Pooling to true and Max Pool Size to 200, and the problem was solved.
But I wonder: is it good practice to raise the connection pool size like this? Or how do we prevent so many connections from being opened in the first place?
Thanks!
I think what's happening is that you don't free up unused resources. More specifically, you absolutely must call Dispose() on all database-related objects, like SqlConnection, SqlDataReader, etc. Or, better yet, wrap them in using statements.
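For example, a minimal sketch of that pattern (assuming System.Data.SqlClient and System.Configuration; the "MyDb" connection string name, the Folders query, and parentId are placeholders):

var folders = new List<string>();

using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString))
using (var cmd = new SqlCommand("SELECT Name FROM Folders WHERE ParentId = @id", conn))
{
    cmd.Parameters.AddWithValue("@id", parentId);
    conn.Open();

    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            folders.Add(reader.GetString(0));
        }
    }
}   // connection, command, and reader are all disposed here,
    // so the connection goes straight back to the pool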
A sample connection string for SQL Server:
"Data Source=(local);Initial Catalog=pubs;User ID=sa;Password=;Max Pool Size=75;Min Pool Size=5;"
Doing it like this may help you :)
The default value of Max Pool Size is 100.
You can also set it to a higher number, as long as server performance is not an issue.
