"Timeout while getting a connection from pool." Hangfire.Postgres - c#

I'm new to Hangfire, so I'm probably messing up somewhere. I have Hangfire configured as described in https://github.com/HangfireIO/Hangfire#installation,
but instead of:
config.UseSqlServerStorage("<connection string or its name>");
I have:
config.UsePostgreSqlStorage("Server=127.0.0.1;Port=5432;User Id=postgres;Password=pwd;Database=Hangfire");
So I created a Hangfire database in my Postgres server.
Then I build and run my project. That part is fine: all the tables are created in the Hangfire DB in Postgres, and it works great.
But then, when I try:
BackgroundJob.Enqueue(() => HubService.SendPushNotificationToUsers(threadParticipants, messageApi.SenderId, messageApi.SenderName, messageApi.ThreadId, messageApi.Content));
I get an exception with the inner message:
"Timeout while getting a connection from pool." postgres
Am I missing something?

Try turning off the connection pool via the connection string or the connection string builder.
Here is how we do it:
var sb = new NpgsqlConnectionStringBuilder(connectionString);
sb.Pooling = false;

app.UseHangfire(config =>
{
    config.UseActivator(new WindsorJobActivator(container.Kernel));
    config.UsePostgreSqlStorage(sb.ToString(), new PostgreSqlStorageOptions() { UseNativeDatabaseTransactions = true });
    config.UseServer();
});

Did you try reducing the number of Hangfire workers instead?
They may be what is consuming your connection pool: by default Hangfire uses 20 workers, and each worker opens several connections (I don't remember exactly how many), which could exhaust the pool and cause the connection timeout.
You can set how many workers you want in the Hangfire initialization.
In this example you would use only one worker:
app.UseHangfire(config =>
{
    config.UseServer(1);
});
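A third option, offered here only as a hedged sketch (it assumes Npgsql's MaxPoolSize property on the connection string builder and the UseServer(workerCount) overload shown above): keep pooling enabled but raise the pool ceiling while lowering the worker count, so the workers can no longer exhaust the pool.
// Sketch only: enlarge the Npgsql pool and run fewer Hangfire workers.
var sb = new NpgsqlConnectionStringBuilder(connectionString)
{
    Pooling = true,
    MaxPoolSize = 100   // assumed to be comfortably above workers * connections-per-worker
};

app.UseHangfire(config =>
{
    config.UsePostgreSqlStorage(sb.ToString());
    config.UseServer(5); // well below the default 20 workers
});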

Related

Site stops working after custom keep alive functionality

I recently deployed a site that runs in a shared hosting environment. The problem is that the site receives sporadic traffic: after about 20 minutes of inactivity I suppose the server shuts down the instance, so the first request after that is often slow. So I decided to add functionality that loads the webpage every few minutes. I call the code in Global.asax with new CodeKeepAlive.KeepAliveManager().SetKeepAlive(); and this is the complete code:
public class KeepAliveManager
{
    Timer KeepAliveTimer;

    public void SetKeepAlive()
    {
        KeepAliveTimer = new Timer(DoKeepAliveRequest, null, new Random().Next(200000, 900000), Timeout.Infinite);
    }

    public void DoKeepAliveRequest(object state)
    {
        string TheUrl = "https://www.the_website_url.com";
        HttpWebRequest TheRequest = (HttpWebRequest)WebRequest.Create(TheUrl);
        HttpWebResponse TheResponse = (HttpWebResponse)TheRequest.GetResponse();
        KeepAliveTimer.Change(new Random().Next(200000, 900000), Timeout.Infinite);
    }
}
For some reason, since I added this functionality, the site locks up once in a while: after 30 seconds of load time, the server says the page can't be loaded. I also have error logging that triggers on Application_Error, but there are no log entries.
Is there anything wrong with my code?
The issue is that in ASP.NET there is no reliable way to run background tasks without the risk of IIS killing your thread. Once a request is complete, IIS has no reason to keep any of the threads you spawned alive.
There are a few ways around this, such as using a small service on the free tier of Amazon or Google Cloud to act as your heartbeat.
But assuming you just want to use your shared hosting:
You can use something like Hangfire, which specializes in this, though it has its limitations; see the docs for getting started:
GlobalConfiguration.Configuration
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseSqlServerStorage("Database=Hangfire.Sample; Integrated Security=True;", new SqlServerStorageOptions
    {
        CommandBatchMaxTimeout = TimeSpan.FromMinutes(5),
        SlidingInvisibilityTimeout = TimeSpan.FromMinutes(5),
        QueuePollInterval = TimeSpan.Zero,
        UseRecommendedIsolationLevel = true,
        UsePageLocksOnDequeue = true,
        DisableGlobalLocks = true
    })
    .UseBatches()
    .UsePerformanceCounters();

// Queue the job
BackgroundJob.Enqueue(() => Console.WriteLine("Hello, world!"));

// Run it
using (var server = new BackgroundJobServer())
{
    Console.ReadLine();
}
NOTE: Hangfire itself has its limitations: IIS will shut down any long-running threads, e.g. things that take longer than 90 or 180 seconds (I forget the limit). So make sure you queue your heartbeat task on each request that comes in. If you want to make sure you don't fire too many, you can add a header to the request and verify it is present.
See this answer on the new .NET Core background tasks, which is applicable because it relates to IIS.
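For reference, here is a minimal sketch of the .NET Core hosted-service approach mentioned above; it assumes an ASP.NET Core 2.1+ project, and the KeepAlivePinger class name and delay are mine, not from the linked answer.
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Sketch only: register with services.AddHostedService<KeepAlivePinger>() in Startup.ConfigureServices.
public class KeepAlivePinger : BackgroundService
{
    private static readonly HttpClient Client = new HttpClient();

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // The URL is the placeholder from the question above.
            await Client.GetAsync("https://www.the_website_url.com", stoppingToken);
            await Task.Delay(TimeSpan.FromMinutes(5), stoppingToken);
        }
    }
}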
Add a test program in PhantomJS to randomly or periodically take screenshots of the URL, and deploy it on the same server.
It will do that work for you.

ASP.NET Core: Idle timeout between the calls with a delay

I have an ASP.NET Core Web API running under Kestrel (Ubuntu) and I'm facing a strange situation:
When I run a series of 20 API calls, the first 3-5 calls are quite slow, then the response time becomes OK.
Then I pause for a short time (a minute or even less) and run the series of API calls again; again the first several calls are quite slow, and only after the first 3-5 calls does the response time become OK.
Initially I thought the issue was in the Kestrel configuration, so I applied the following settings:
var host = new WebHostBuilder()
    .UseKestrel(options =>
    {
        options.Limits.MaxConcurrentConnections = 200;
        options.Limits.MaxConcurrentUpgradedConnections = 200;
        options.Limits.MaxRequestBodySize = 10000;
        options.Limits.MinRequestBodyDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.MinResponseDataRate = new MinDataRate(bytesPerSecond: 100, gracePeriod: TimeSpan.FromSeconds(10));
        options.Limits.KeepAliveTimeout = TimeSpan.FromDays(2);
    })
    .UseConfiguration(config)
    .UseContentRoot(Directory.GetCurrentDirectory())
    .UseIISIntegration()
    .UseStartup<Startup>()
    .Build();

host.Run();
This helped make my service faster, but the issue is still there.
The basic logic of the service is the following:
1) Get the request object.
2) Parse it into an instance of a POCO class.
3) Call the DB with multiple SELECTs to get all the required data (for this I am using Dapper and the method that runs multiple SQL queries in one go).
4) Update the instance with the newly received data and insert the object into the DB.
That's it.
And I can't figure out what causes this delay (idle time).
I guessed that maybe I should make some dummy calls to keep the service running, so I added a singleton containing a timer job that queries the DB for some lookup data every minute. But it did not help.
Then I tried adding another timer job that queries the data required in step 3 for just the first record in the DB, without any specific request parameters; it did not help either, and the service actually started working more slowly.
I also added indexes on the table to make the SELECTs faster, and additionally added a WITH (NOLOCK) hint to all the SELECTs, but it did not help.
Any ideas, guys?
When a query takes longer to execute than the allowed time, a timeout exception is thrown. You can work around this by setting CommandTimeout = 0; with CommandTimeout = 0 the call only returns once execution has completed.
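As a hedged illustration of that suggestion only (the POCO, table name, and connection string below are made up): since the question uses Dapper, the timeout can be passed per query via the commandTimeout parameter.
using System.Collections.Generic;
using System.Data.SqlClient;
using Dapper;

public class MyPoco { public int Id { get; set; } }   // hypothetical POCO

public static class TimeoutExample
{
    public static IEnumerable<MyPoco> LoadWithoutTimeout(string connectionString)
    {
        using (var connection = new SqlConnection(connectionString))
        {
            // commandTimeout is in seconds; 0 means wait until the query finishes.
            return connection.Query<MyPoco>(
                "SELECT Id FROM dbo.MyTable WITH (NOLOCK) WHERE Id = @id",  // hypothetical table
                new { id = 1 },
                commandTimeout: 0);
        }
    }
}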
I was also thinking that maybe it's about the connection string; here it is:
Data Source=mydbserver;Initial Catalog=db1;Persist Security Info=True;User ID=bigboy;Password=bigboy;multipleactiveresultsets=True; Max Pool Size=200; Pooling=true;Connection Timeout=30; Connection Lifetime=0;Min Pool Size=0;

Connection to Elasticsearch 5.x is taking too long. NEST 5.0 rc

I am new to Elasticsearch and I am having problems with the connection to the Elasticsearch server.
I am using Elasticsearch 5.0.1, and I am running my code under .NET 4.5.2.
I am using the NEST 5.0 rc library.
I also installed Kibana and X-Pack on my PC.
My code to connect to Elasticsearch:
var nodes = new Uri[] { new Uri("http://localhost:9200") };
var pool = new StaticConnectionPool(nodes);
var settings = new ConnectionSettings(pool).DefaultIndex("visitor_index");
var client = new ElasticClient(settings);
My Search code:
var result = client.Search<VisitorTest>(s => s.Index("visitor_index")
.Query(q => q.Match(mq => mq.Field(f => f.Name).Query("Visitor 1"))));
Basically the problem I am having is that each time I create a new ElasticClient it takes between 40-80 milliseconds to establish the connection.
I created a unit test for this in which I create a connection and run the search query twice, and then create a second connection in the same test and run the search query twice again.
The result is that the first query after each connection takes between 40-80 milliseconds, and the second query on the same connection takes 2 milliseconds, which is what I expect.
I tried changing the connection string to use a domain (I added the domain to my hosts file). I also tried removing X-Pack security so I do not need to authenticate:
xpack.security.enabled: false
But I always get the same result.
A few observations
A single instance of ConnectionSettings should be reused for the lifetime of the application; it makes heavy use of caching.
ElasticClient is thread-safe, and a single instance can be safely used for the lifetime of an application.
Unless you have a collection of nodes, I would recommend using SingleNodeConnectionPool instead of StaticConnectionPool. The latter has logic to round-robin over nodes which is unneeded for a single node.
The client takes advantage of connection pooling within the .NET framework; you can adjust KeepAlive behaviour on ConnectionSettings with EnableTcpKeepAlive()
If you have a web proxy configured on your machine, you could have a look at disabling automatic proxy detection with .DisableAutomaticProxyDetection() on ConnectionSettings.
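To illustrate the first three observations, here is a minimal sketch (the ElasticClientHolder name is mine, not from the answer) that creates the settings and client once against a single node and reuses them:
using System;
using Elasticsearch.Net;
using Nest;

public static class ElasticClientHolder
{
    // Create ConnectionSettings and ElasticClient once and reuse them for the app's lifetime.
    private static readonly SingleNodeConnectionPool Pool =
        new SingleNodeConnectionPool(new Uri("http://localhost:9200"));

    private static readonly ConnectionSettings Settings =
        new ConnectionSettings(Pool).DefaultIndex("visitor_index");

    public static readonly ElasticClient Client = new ElasticClient(Settings);
}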
I'll add my two cents here.
I had exactly the same issue with 40 ms requests, while from the Kibana dev tools the same query took 1 ms.
Fixed by tweaking two things:
Ninject part:
kernel.Bind<IEsClientProvider>().To<EsClientProvider>().InSingletonScope().WithConstructorArgument("indexName", "items");
And in the client provider:
public ElasticClient GetClient()
{
    if (this.client == null)
    {
        settings = new ConnectionSettings(nodeUri).DefaultIndex(indexName);
        this.client = new ElasticClient(settings);
    }

    return client;
}

Why is my Azure process not connecting to the Azure database?

I have a web app and a batch pool.
In the batch pool, created tasks are using the same database as the web app.
Today I started receiving the following exception in the batch:
A transport-level error has occurred when receiving results from the server. (provider: Session Provider, error: 19 - Physical connection is not usable)
The code base has not changed, older versions do not work either, there were no updates; it just popped up out of the blue. I repeated a couple of tasks in a controlled debug environment in VS and they went through without any exceptions thrown. I added the batch node's IP to the SQL Server firewall rules, also to no avail. Meanwhile, the web application uses the database just fine.
Both the web app and batch pool are located in East US.
Here’s a snippet from Program.cs in my batch task:
MyEntities db; // MyEntities extends DbContext
System.Data.Entity.Core.EntityClient.EntityConnectionStringBuilder connstr = new System.Data.Entity.Core.EntityClient.EntityConnectionStringBuilder();
connstr.ProviderConnectionString = connectionString;
connstr.Provider = "System.Data.SqlClient";
connstr.Metadata = "res://*/MyEntities.csdl|res://*/MyEntities.ssdl|res://*/MyEntities.msl";
try
{
    db = new MyEntities(connstr.ConnectionString);
}
The connection string looks like this:
Persist Security Info=True; Data Source=<host>; Initial Catalog=<database name>; Integrated Security=False; User ID=<login>; Password=<password>; MultipleActiveResultSets=True; Connect Timeout=30; Encrypt=True;
Edit:
This problem has subsided the same way it appeared: out of the blue. I’ll carry out tests whenever it surfaces again.
You can try one of these 2 possibilities:
1. Enabling an Execution Strategy:
public class MyEntitiesConfiguration : DbConfiguration
{
    public MyEntitiesConfiguration()
    {
        SetExecutionStrategy("System.Data.SqlClient", () => new SqlAzureExecutionStrategy());
    }
}
Please see more details here: https://msdn.microsoft.com/en-US/data/dn456835
2. If you have explicitly opened the connection, ensure that you close it. You can use a using statement:
using (var db = new MyEntities(connstr.ConnectionString))
{
    // ..do your work
}
https://blogs.msdn.microsoft.com/appfabriccat/2010/12/10/sql-azure-and-entity-framework-connection-fault-handling/

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
OK, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach (var stock in StockService.GetStocksWithBigNews())
{
    var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
    var connections = /* find connections associated with user ids */;
    foreach (var connection in connections)
    {
        connection.Send(...);
    }
}
In this question on filtering connections, they mention that I could keep current connections in memory, but (1) it's bad for scaling and (2) it's bad for multi-node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find the users connected to each node, at which point my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this as well. I used the 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (servers processing work); when they start up they invoke ComputeUnitRegister:
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is:
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable ?
    RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint") :
    "http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app, or whatever in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();

public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for these calls (let me know if you need to see that code as well). Basically they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever a compute unit is doing some work it will call Running, which will trigger a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how groups and connections can be used. Lastly, it is also scaled out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
serviceBusConnectionString,
2,
3,
GetRoleInstanceNumber(),
topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus on Azure; remember the Provider=SharedSecret part. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out over. I used 3 because I have 2 Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at : http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server it is assigned a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even the SQL Server backplane (more to come), for example. So, long story short, we take care of your scale-out scenarios for you via backplanes/service bus, without you having to worry about it.
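For completeness, a hedged sketch of wiring up the SQL Server backplane mentioned above, using the Microsoft.AspNet.SignalR.SqlServer package and the same SignalR 1.x-era MapHubs call as the earlier answer; the connection string and database name are assumptions, not from the answer.
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Assumed connection string; point it at a database created for the backplane.
        GlobalHost.DependencyResolver.UseSqlServer(
            "Data Source=.;Initial Catalog=SignalRBackplane;Integrated Security=True;");

        // Map hubs after configuring the backplane.
        RouteTable.Routes.MapHubs();
    }
}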
