I have a .NET Core application in which I am using C# and MongoDB. Within the application I use the MongoDB driver (version 2.7) for all database-related operations, and I have a MongoDB database (version 4.0.9). I am facing one strange issue and am not able to find the root cause: the very first request to the database takes significantly more time than subsequent requests. For example, the first request takes about 1 second, while immediate follow-up requests take ~200-250 milliseconds.
Does anyone know the solution to the above situation?
This is not an error; it is the default behavior of the C# driver. The driver only establishes the connection to the database server when the very first operation is initiated, and establishing that connection takes a few hundred milliseconds.
Subsequent operations do not need to establish new connections because of the driver's connection pooling. More connections are only opened if they are really needed; if the app is not multi-threaded, the driver will usually open about 2 connections for the lifetime of the app, from what I have seen. If you inspect your MongoDB log file, this will be apparent.
My suggestion is to simply ignore the time it takes to initialize the connection if you're doing any kind of testing/benchmarks.
UPDATE:
If your database is hosted across a network, something like a firewall may be interfering with idle connections. If that's the case, you could try the following so that idle connections get recycled/renewed every minute.
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
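If you prefer not to touch the global MongoDefaults, the same idle-time limit can also be set per client through MongoClientSettings. A minimal sketch (the connection string is a placeholder):

var settings = MongoClientSettings.FromConnectionString("mongodb://localhost:27017");
settings.MaxConnectionIdleTime = TimeSpan.FromMinutes(1); // recycle idle connections after a minute
var client = new MongoClient(settings);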
If all else fails, the only remaining option I can think of is to fire off a keep-alive task like the following:
public void InitKeepAlive()
{
    // Fire-and-forget background task that touches the database once a minute
    // so the pooled connection never sits idle long enough to be dropped.
    Task.Run(async () =>
    {
        while (true)
        {
            // "database" is the IMongoDatabase obtained from your MongoClient;
            // FirstOrDefaultAsync requires "using MongoDB.Driver.Linq;"
            await database.GetCollection<Document>("Documents")
                          .AsQueryable()
                          .Select(d => d.Id)
                          .FirstOrDefaultAsync();

            await Task.Delay(TimeSpan.FromMinutes(1));
        }
    });
}
When running my microservice developed with ASP.NET Core and EF Core (latest versions) I started getting this error:
Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
This is very strange, as it worked fine until yesterday. The SQL database is hosted in Docker.
This exception is thrown by code:
try
{
    using var c = ctx.Database.GetDbConnection();
    var connStr = c.ConnectionString;
    var dbState = c.State;
    c.Open(); // <- exception thrown
    ctx.Database.OpenConnection();
}
catch (Exception ex)
{
}
This is only test code, because the application just hangs when I try to run the query dbContext.SomeTable.ToList() (it does not work with the async version either), and I am unable to see why and when it happens (I suspected some deadlock, maybe).
This is very, very strange, as it happens on the first attempt to connect to the database.
I was inspecting SQL Server Monitor and SQL Server Profiler and also ran the stored procedure sp_who, but I don't see ANYTHING, not a single connection made from my app; it's as if the SQL pool is exhausted immediately.
Also, we used dependency injection everywhere, but I have gone through all context usages and switched them to a local variable with
using var ctx = new DbContext();
to make sure I close every connection. The issue is still exactly the same.
UPDATE
I also ran a test application (console app) and made a connection simply by using a SqlConnection object, without any ORM - it worked.
UPDATE
Also tried to go back in history with git checkout to a point where it was definitely working - to my surprise, that version also hangs and does not work.
UPDATE
Changing from IIS to Kestrel resolved the issue; connections are now made successfully. So is it an IIS-related bug?
Basic Intro:
Connection string: Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=100
My web application has several dozen endpoints available via WS and HTTP. Every one of these endpoints opens a new Npgsql connection (all using the same connection string as above), processes data, then closes the connection via the using statement.
Issue:
When the application restarts for an update, there are typically 2,000-3,000 users all reconnecting. This typically leads to errors about the connection pool being full and new connections being rejected because there are too many clients already. However, once it finally comes online it typically only uses 5-10 connections at any given time.
Question:
Is the logic below the proper way to use connection pooling, with every endpoint creating a new Npgsql connection using the same connection string that specifies a pool size of 100?
It seems that the connection pool often shoots right up to 100, but ~80/100 of those connections are shown as idle in a DB viewer, with new connection requests being denied due to pool overflow.
Better option?
I could also try and force a more "graceful" startup by slowly allowing new users to re-connect, but I'm not sure if the logic for creating a new connection with every endpoint is correct.
// DB Connection String - used for all Npgsql connections
const string connectionStr = "Host=IP;Port=somePort;Username=someUser;Password=somePass;Database=someDb;Maximum Pool Size=100";

// Endpoint 1 available via Websocket
public async Task someRequest(someClass someArg)
{
    /* Create a new SQL connection for this user's request using the same global connection string */
    using var conn = new NpgsqlConnection(connectionStr);
    conn.Open();

    /* Call functions and pass this SQL connection for any queries to process this user request */
    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    /* Request processing is done */
    /* conn is closed automatically by the "using" statement above */
}

// Endpoint 2 available via Websocket
public async Task someOtherRequest(someClass someArg)
{
    /* Create a new SQL connection for this user's request using the same global connection string */
    using var conn = new NpgsqlConnection(connectionStr);
    conn.Open();

    /* Call functions and pass this SQL connection for any queries to process this user request */
    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    /* Request processing is done */
    /* conn is closed automatically by the "using" statement above */
}

// endpoint3();
// endpoint4();
// endpoint5();
// endpoint6();
// etc.
EDIT:
I've made the suggested change, closing connections and returning them to the pool during processing. However, the issue still persists on startup:
1. Application startup - 100 connections are claimed for pooling. Almost all of them are idle. The application receives connection pool exhaustion errors; little to no transactions are processed.
2. Transactions suddenly start churning - not sure why. Is this after some sort of timeout, perhaps? I know there was a 300-second default timeout in the documentation somewhere... this might match up here.
3. Transactions lock up again; pool exhaustion errors resume.
4. Transactions spike and resume; user requests start coming through again.
5. The application levels out as normal.
EDIT 2:
This startup issue seems to consistently take 5 minutes from startup to clear a deadlock of idle transactions and start running all of the queries.
I know 5 minutes is the default value for idle_in_transaction_session_timeout. However, I tried running SET SESSION idle_in_transaction_session_timeout = '30s'; and 10s during the startup this time and it didn't seem to impact it at all.
I'm not sure why those 100 pooled connections would be stuck idle like that on startup, taking 5 minutes to clear before queries are allowed to run, if that's what is happening...
I had forgotten to update this post with some of the latest information. There were a few other internal optimizations I had made in the code.
One of the major ones was simply changing conn.Open(); to await conn.OpenAsync(); and conn.Close(); to await conn.CloseAsync();.
Everything else I had was properly async, but there was still I/O blocking for all of the new connections in Npgsql, causing worse performance with large bursts.
A very obvious change, but I didn't even think to look for an async method for opening and closing the connections.
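For reference, a rough sketch of what one of the endpoints from the question looks like after that change (function and type names are the placeholders from the question, and it assumes an Npgsql version that exposes OpenAsync/CloseAsync):

// Endpoint 1, reworked to open and close the connection asynchronously
public async Task someRequest(someClass someArg)
{
    using var conn = new NpgsqlConnection(connectionStr);
    await conn.OpenAsync();   // no longer blocks a thread-pool thread while connecting

    somefunction(conn, someArg);
    anotherFunction(conn, someArg);

    await conn.CloseAsync();  // returns the connection to the pool without blocking
}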
A connection is released to the pool once you close it in your code. From what you wrote, you are keeping it open for the entire duration of a request, so basically 1 user = 1 connection, and the pool is just used as a waiting room (with a timeout setting of 15 seconds by default). Open/close the connection each time you need to access the DB, so the connection is returned to the pool and can be used by another user while time is spent in .NET code.
Example, in pseudo code:
Enter function
Do some computations in .net, like input validation
Open connection (grab it from the pool)
Fetch info#1
Close connection (return it to the pool)
Do some computations in .net, like ordering the result, computing an age from a date etc
Open connection (grab it from the pool)
Fetch info #2
Close connection (return it to the pool)
Do some computations in .net
Return
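Expressed in C# with Npgsql, that pattern might look roughly like this; the query text, connectionStr and the helper methods are placeholders for whatever the real endpoint does:

public async Task someRequest(someClass someArg)
{
    ValidateInput(someArg);                                  // .NET-only work, no connection held

    object info1;
    using (var conn = new NpgsqlConnection(connectionStr))
    {
        await conn.OpenAsync();                              // grab a connection from the pool
        using var cmd = new NpgsqlCommand("SELECT ...", conn);
        info1 = await cmd.ExecuteScalarAsync();
    }                                                        // disposed here -> returned to the pool

    ComputeSomething(info1);                                 // more .NET-only work, no connection held

    using (var conn = new NpgsqlConnection(connectionStr))
    {
        await conn.OpenAsync();                              // second, separate short-lived borrow
        // ... fetch info #2 ...
    }                                                        // returned to the pool again
}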
I'm using Azure Functions V1 with StackExchange.Redis 1.2.6. The function receives thousands of messages per minute, and for every message and every device, I check Redis. I noticed that when we have more messages, we get the error below.
Exception while executing function: TSFEventRoutingFunction No connection is available to service this operation: HGET GEO_DYNAMIC_hash; It was not possible to connect to the redis server(s); ConnectTimeout; IOCP: (Busy=1,Free=999,Min=24,Max=1000), WORKER: (Busy=47,Free=32720,Min=24,Max=32767), Local-CPU: n/a It was not possible to connect to the redis server(s); ConnectTimeout
CacheService, as recommended by MS:
public class CacheService : ICacheService
{
    private readonly IDatabase cache;
    private static readonly string connectionString = ConfigurationManager.AppSettings["RedisConnection"];

    public CacheService()
    {
        this.cache = Connection.GetDatabase();
    }

    private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
    {
        return ConnectionMultiplexer.Connect(connectionString);
    });

    public static ConnectionMultiplexer Connection
    {
        get
        {
            return lazyConnection.Value;
        }
    }

    public async Task<string> GetAsync(string hashKey, string ruleKey)
    {
        return await this.cache.HashGetAsync(hashKey, ruleKey);
    }
}
I'm injecting ICacheService into the Azure Function and calling the GetAsync method on every request.
We are using an Azure Redis instance (C3 tier).
As you can see, I currently have a single connection. Would creating multiple connections help solve this issue? Or do you have any other suggestions to solve/understand it?
There are many different causes of the error you are getting. Here are some I can think of off the top of my head (not in any particular order):
Your connectTimeout is too small. I often see customers set a small connect timeout because they think it will ensure that the connection is established within that time span. The problem with this approach is that when something goes wrong (high client CPU, high server CPU, etc.), the connection attempt will fail. This often makes a bad situation worse - instead of helping, it aggravates the problem by forcing the system to restart the process of trying to reconnect, often resulting in a connect -> fail -> retry loop. I generally recommend that you leave your connectTimeout at 15 seconds or higher. It is better to let your connection attempt succeed after 15 or 20 seconds than it is to have it fail after 5 seconds repeatedly, resulting in an outage lasting several minutes until the system finally recovers.
A server-side failover occurs. A connection is severed by the server as a result of some type of failover from master to replica. This can happen if the server-side software is updated at the Redis layer, the OS layer or the hosting layer.
A networking infrastructure failure of some type (hardware sitting between the client and the server sees some type of issue).
You change the access password for your Redis instance. Changing the password will reset connections to all clients to force them to re-authenticate.
Thread pool settings need to be adjusted. If your thread pool settings are not adjusted correctly for your workload, then you can run into delays in spinning up new threads, as explained here (see the sketch after this list).
I have written a bunch of best practices for Redis that will help you avoid other problems as well.
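To make points 1 and 5 concrete, here is a sketch of the two client-side knobs involved; the thread counts are illustrative values only, not a recommendation for every workload:

// Point 5: raise the minimum worker/IOCP threads once at startup so bursts
// don't wait for slow thread-pool ramp-up.
System.Threading.ThreadPool.SetMinThreads(200, 200);

// Point 1: keep connectTimeout generous instead of aggressively small.
var options = ConfigurationOptions.Parse(connectionString);
options.ConnectTimeout = 15000;       // 15 seconds, in milliseconds
options.AbortOnConnectFail = false;   // let the multiplexer keep retrying in the background
var multiplexer = ConnectionMultiplexer.Connect(options);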
We solved this issue by upgrading StackExchange.Redis to 2.1.30.
I'm trying to add a member to a Group using SignalR 2.2. Every single time, I hit a 30 second timeout and get a "System.Threading.Tasks.TaskCanceledException: A task was canceled." error.
From a GroupSubscriptionController that I've written, I'm calling:
var hubContext = GlobalHost.ConnectionManager.GetHubContext<ProjectHub>();
await hubContext.Groups.Add(connectionId, groupName);
I've found this issue where people are periodically encountering this, but it happens to me every single time. I'm running the backend (ASP.NET 4.5) on one VS2015 launched localhost port, and the frontend (AngularJS SPA) on another VS 2015 launched localhost port.
I had gotten SignalR working to the point where messages were being broadcast to every connected client. It seemed so easy. Now, adding in the Groups part (so that people only get select messages from the server) has me pulling my hair out...
That task cancellation error could be thrown because the connectionId can't be found in the SignalR registry of connected clients.
How are you getting this connectionId? You have multiple servers/ports going - is it possible that you're getting your wires crossed?
I know there is an accepted answer to this, but I came across this once for a different reason.
First off, do you know what Groups.Add does?
I had expected Groups.Add's task to complete almost immediately every time, but not so. Groups.Add returns a task that only completes when the client (i.e. the JavaScript side) acknowledges that it has been added to the group - this is useful on reconnect, so the client can resubscribe to all its old groups. Note that this acknowledgement is not visible to developer code and is nicely covered up for you.
The problem is that the client may not respond because it has disconnected (i.e. it has navigated to another page). This means the await will have to wait until the connection is considered disconnected (default timeout 30 seconds) before giving up by throwing a TaskCanceledException.
See http://www.asp.net/signalr/overview/guide-to-the-api/working-with-groups for more detail on groups
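If you don't want the calling code to block for the full 30-second disconnect timeout, one possible workaround (a sketch, not something from the linked docs) is to race the add against a shorter delay:

var hubContext = GlobalHost.ConnectionManager.GetHubContext<ProjectHub>();
var addTask = hubContext.Groups.Add(connectionId, groupName);

// Give the client a few seconds to acknowledge the group membership instead of
// waiting for the full disconnect timeout.
var finished = await Task.WhenAny(addTask, Task.Delay(TimeSpan.FromSeconds(5)));
if (finished != addTask)
{
    // The client never acknowledged - it has probably navigated away or disconnected.
}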
I'm looking for a way to check if a server is still available.
We have an offline application that saves data on the server, but if the server connection drops (it happens occasionally), we have to save the data to a local database instead of the online database.
So we need a continuous check to see if the server is still available.
We are using C# for this application.
Checking via SqlConnection.Open is not really an option because it takes about 20 seconds before an error is thrown; we can't wait that long. I'm also using some HTTP services.
Just use the System.Net.NetworkInformation.Ping class. If your server does not respond to ping (because for some reason you decided to block ICMP Echo requests), you'll have to invent your own service for this. Personally, I'm all for not blocking ICMP Echo requests, and I think this is the way to go. The ping command has been used for ages to check the reachability of hosts.
using System.Net.NetworkInformation;
var ping = new Ping();
var reply = ping.Send("google.com", 60 * 1000); // 1 minute time out (in ms)
// or...
reply = ping.Send(new IPAddress(new byte[]{127,0,0,1}), 3000);
If the connection is as unreliable as you say, I would not use a separate check, but make saving the data locally part of the exception handling.
I mean, if the connection fails and throws an exception, you switch strategies and save the data locally.
If you check first and the connection drops afterwards (when you actually save data), then you would still run into an exception you need to handle, so the initial check was unnecessary. The check would only be useful if you could assume that after a successful check the connection is up and stays up.
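A sketch of that idea, assuming SQL Server; DataRecord, SaveToServer and SaveLocally are placeholders for your own types and persistence code:

using System.Data.SqlClient;

public void Save(DataRecord record)
{
    try
    {
        SaveToServer(record);      // normal path: write to the online database
    }
    catch (SqlException)           // connection dropped or server unreachable
    {
        SaveLocally(record);       // fall back to the local database
    }
}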
From your question it appears the purpose of connecting to the server is to use its database, so your priority must be to check whether you can successfully connect to the database. It doesn't matter if you can PING the server or get an HTTP response (as suggested in other answers); your process will fail unless you successfully establish a connection to the database. You mention that checking a database connection takes too long - why don't you just change the Connection Timeout setting in your application's connection string to a more impatient value, such as 5 seconds (Connection Timeout=5)?
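For example (placeholder server and database names), a quick availability probe with a short timeout might look like this:

var connStr = "Server=myServer;Database=myDb;Integrated Security=true;Connection Timeout=5";
using (var conn = new SqlConnection(connStr))
{
    try
    {
        conn.Open();               // fails after ~5 seconds instead of the much longer default
        // server and database are reachable
    }
    catch (SqlException)
    {
        // treat the server as unavailable and fall back to the local database
    }
}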
If this is a SQL Server, then you can just try to open a new connection to it. If the SqlConnection.Open method fails, you can check the error message to determine if the server is unavailable.
What you are doing now is:
use distant server
if distant server fails, resort to local cache
How to determine if the server is available? Use a catch block. That's the simplest to code.
If you actually have a local database (and not, for example, a list of transactions or data waiting to be inserted), I would turn the design around:
use the local database
regularly synchronize the local database and the distant database
I'll let you be the judge on concurrency constraints and other stuff related to your application to pick a solution.
Since you want to see if the database server is there, either catch any errors when you attempt to connect to the database, or use a socket and attempt a raw connection to the server on some service/port. I'd suggest checking the database itself, as that is the resource you actually need.