Avoiding ElasticSearch error 503 Server Unavailable: Use of WaitForStatus - c#

When I start my program, I start the Elasticsearch service and check whether an index exists and whether it contains any documents. Say I have just started the ES service and I have these two functions:
public ElasticClient getElasticSearchClient()
{
    ConnectionSettings connectionSettings = new Nest.ConnectionSettings(new Uri("http://localhost:9200"))
        .DefaultIndex("myindex")
        .DisableDirectStreaming();
    ElasticClient client = new ElasticClient(connectionSettings);
    //var health = client.Cluster.Health("myindex", a => (a.WaitForStatus(WaitForStatus.Yellow)).Timeout(50));
    return client;
}

public void checkElasticsearchIndex()
{
    var client = getElasticSearchClient();
    var health = client.Cluster.Health("myindex", a => a.WaitForStatus(WaitForStatus.Yellow));
    CountResponse count = client.Count<myobject>();
    if (!client.Indices.Exists("myindex").IsValid || count.Count == 0)
    {
        BulkWriteAllToIndexES(client);
    }
}
Inside the checkElasticsearchIndex function, the count operation fails with the following error message:
OriginalException: Elasticsearch.Net.ElasticsearchClientException: The remote server returned an error: (503) Server Unavailable.. Call: Status code 503 from: GET /myindex/_count. ServerError: Type: search_phase_execution_exception Reason: "all shards failed" ---> System.Net.WebException: The remote server returned an error: (503) Server Unavailable.
The health check fails as well:
OriginalException: Elasticsearch.Net.ElasticsearchClientException: Unable to connect to the remote server. Call: Status code unknown from: GET /_cluster/health/myindex?wait_for_status=yellow ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 127.0.0.1:9200
As you can see, I have tried the Cluster WaitForStatus, but it didn't work.
My question: is there any way to wait until client/cluster/nodes are ready and not get any exception?

It sounds like you're starting the Elasticsearch process at the same time as starting your program, but Elasticsearch takes longer than your program to be ready.
If that's the case, you may be interested in using the same abstractions that the .NET client uses for integration tests against Elasticsearch. The abstractions read output from the Elasticsearch process to know when it is ready, and block until this happens. They're available on an AppVeyor CI package feed (with plans to release them to NuGet in the future).
There are some examples of how to spin up a cluster with the abstractions. For a single node, it would be something like
using System;
using Elastic.Managed.Configuration;
using Elastic.Managed.ConsoleWriters;
using Elastic.Managed.FileSystem;

namespace Elastic.Managed.Example
{
    class Program
    {
        static void Main(string[] args)
        {
            var version = "7.5.1";
            var esHome = Environment.ExpandEnvironmentVariables($@"%LOCALAPPDATA%\ElasticManaged\{version}\elasticsearch-{version}");

            using (var node = new ElasticsearchNode(version, esHome))
            {
                node.SubscribeLines(new LineHighlightWriter());
                if (!node.WaitForStarted(TimeSpan.FromMinutes(2))) throw new Exception();

                // do your work here
            }
        }
    }
}
This assumes that the Elasticsearch 7.5.1 zip has already been downloaded and exists at %LOCALAPPDATA%\ElasticManaged\7.5.1\elasticsearch-7.5.1. There are more complex examples of how to integrate this into tests with xUnit; a sketch of that follows the EphemeralCluster example below.
You can use the EphemeralCluster components to download, configure and run Elasticsearch
var plugins = new ElasticsearchPlugins(ElasticsearchPlugin.RepositoryAzure, ElasticsearchPlugin.IngestAttachment);
var config = new EphemeralClusterConfiguration("7.5.1", ClusterFeatures.XPack, plugins, numberOfNodes: 1);

using (var cluster = new EphemeralCluster(config))
{
    cluster.Start();

    var nodes = cluster.NodesUris();
    var connectionPool = new StaticConnectionPool(nodes);
    var settings = new ConnectionSettings(connectionPool).EnableDebugMode();
    var client = new ElasticClient(settings);

    Console.Write(client.CatPlugins().DebugInformation);
}
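For the xUnit integration mentioned above, a minimal sketch of a class fixture built on the same EphemeralCluster components might look like the following. This is an illustration only: the fixture and test names are invented, and the namespaces for the Elastic.Managed.Ephemeral types are assumptions based on the package name rather than something stated in this answer.
using System;
using Elastic.Managed.Ephemeral;          // assumed namespace for EphemeralCluster types
using Elastic.Managed.Ephemeral.Plugins;  // assumed namespace for ElasticsearchPlugins
using Elasticsearch.Net;
using Nest;
using Xunit;

// Starts a single-node ephemeral cluster once per test class and disposes it afterwards.
public class EphemeralClusterFixture : IDisposable
{
    public EphemeralClusterFixture()
    {
        var plugins = new ElasticsearchPlugins(ElasticsearchPlugin.RepositoryAzure, ElasticsearchPlugin.IngestAttachment);
        var config = new EphemeralClusterConfiguration("7.5.1", ClusterFeatures.XPack, plugins, numberOfNodes: 1);

        Cluster = new EphemeralCluster(config);
        Cluster.Start(); // blocks until the node is ready, as in the example above

        var pool = new StaticConnectionPool(Cluster.NodesUris());
        Client = new ElasticClient(new ConnectionSettings(pool));
    }

    public EphemeralCluster Cluster { get; }
    public ElasticClient Client { get; }

    public void Dispose() => Cluster.Dispose();
}

public class MyIndexTests : IClassFixture<EphemeralClusterFixture>
{
    private readonly ElasticClient _client;

    public MyIndexTests(EphemeralClusterFixture fixture) => _client = fixture.Client;

    [Fact]
    public void ClusterIsReachable() => Assert.True(_client.Ping().IsValid);
}
Because xUnit creates the fixture once per test class, the cost of downloading and starting Elasticsearch is paid once rather than per test.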

Related

ServiceStack Redis (AWS ElastiCache implementation) using .Net core causing error No master found in: redis-cluster-xxxxxxxx:637

I have implemented the following version of the ServiceStack .NET Core Redis library:
ServiceStack.Redis.Core 5.9.2
I am using the library to access a Redis cache I have created to persist values for my AWS Serverless Application using .NET Core 3.1. I have paid for a commercial license for ServiceStack Redis.
Periodically and without warning, my application captures the following error when trying to create a Redis client:
Exception: System.Exception: No master found in: redis-cluster-api-prd-lcs.in-xxxxxxxx:6379
at ServiceStack.Redis.RedisResolver.CreateRedisClient(RedisEndpoint config, Boolean master) in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisResolver.cs:line 116
at ServiceStack.Redis.RedisResolver.CreateMasterClient(Int32 desiredIndex) in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisResolver.cs:line 142
at ServiceStack.Redis.RedisManagerPool.GetClient() in C:\BuildAgent\work\b2a0bfe2b1c9a118\src\ServiceStack.Redis\RedisManagerPool.cs:line 174
at LCSApi.UtilityCommand.Cache.IsCacheValueExists(String cacheKey, RedisManagerPool pool) in D:\a\1\s\testapi\Utility.cs:line 167
at LCSApi.Functions.LcsConfigurationSweeper(ILambdaContext context) in D:\a\1\s\testapi\Function.cs:line 2028
Exception: System.Exception: No master found in: redis-cluster-api-prd-lcs.in
Other times, the same code works fine. My implementation is quite simple:
private readonly RedisManagerPool _redisClient;

_redisClient = new RedisManagerPool(Environment.GetEnvironmentVariable("CACHE_URL") + ":" +
                                    Environment.GetEnvironmentVariable("CACHE_PORT"));

public static T GetCacheValue<T>(string cacheKey, RedisManagerPool pool)
{
    T cacheValue;
    try
    {
        //StackExchange.Redis.IDatabase cache = Functions._redisConnect.GetDatabase();
        //string value = cache.StringGet(cacheKey);
        //cacheValue = (T)Convert.ChangeType(value, typeof(T));
        using (var client = pool.GetClient())
        {
            client.RetryCount = Convert.ToInt32(Environment.GetEnvironmentVariable("CACHE_RETRY_COUNT"));
            client.RetryTimeout = Convert.ToInt32(Environment.GetEnvironmentVariable("CACHE_RETRY_TIMEOUT"));
            cacheValue = client.Get<T>(cacheKey);
        }
    }
    catch (Exception ex)
    {
        //Console.WriteLine($"[CACHE_EXCEPTION] {ex.ToString()}");
        cacheValue = GetParameterSSMFallback<T>(cacheKey);
        //Console.WriteLine($"[CACHE_EXCEPTION] Fallback SSM parameter --> {cacheValue}");
    }
    return cacheValue;
}
It happens often enough that I've had to write a 'fallback' routine to fetch the value from the AWS Parameter Store where it originates. Not ideal.
I can find next to nothing about this error anywhere online. I've tried to sign up for the ServiceStack forums without success; it won't let me register for some reason, even though I have a commercial license. Can anyone assist?
The error is due to the client not being able to connect to a master instance, as identified by the Redis ROLE command. This could happen during an ElastiCache failover, where the master instance eventually returns under the original DNS name:
Amazon ElastiCache for Redis will repair the node by acquiring new service resources, and will then redirect the node's existing DNS name to point to the new service resources. Thus, the DNS name for a Redis node remains constant, but the IP address of a Redis node can change over time.
ServiceStack.Redis will try to connect to a master instance using all specified master connections (typically only 1). If it fails to connect to a master instance it has to give up as the client expects to perform operations on the read/write master instance.
If it's expected that the master instance will return under the same DNS name, we can use a custom IRedisResolver to keep retrying the master connection on the same endpoint for a specified period of time, e.g.:
public class ElasticCacheRedisResolver : RedisResolver
{
    public override RedisClient CreateRedisClient(RedisEndpoint config, bool master)
    {
        if (master)
        {
            //ElastiCache Redis will fail over & retain the same DNS for the master
            var firstAttempt = DateTime.UtcNow;
            Exception firstEx = null;
            var retryTimeSpan = TimeSpan.FromMilliseconds(config.RetryTimeout);
            var i = 0;
            while (DateTime.UtcNow - firstAttempt < retryTimeSpan)
            {
                try
                {
                    var client = base.CreateRedisClient(config, master:true);
                    return client;
                }
                catch (Exception ex)
                {
                    firstEx ??= ex;
                    ExecUtils.SleepBackOffMultiplier(++i);
                }
            }
            throw new TimeoutException(
                $"Could not resolve master within {config.RetryTimeout}ms RetryTimeout", firstEx);
        }
        return base.CreateRedisClient(config, master:false);
    }
}
Which you configure with your Redis Client Manager, e.g.:
private static readonly RedisManagerPool redisManager;

redisManager = new RedisManagerPool(...) {
    RedisResolver = new ElasticCacheRedisResolver()
};
Note: you should use only one shared instance of a Redis Client Manager like RedisManagerPool so all clients share the same connection pool. If the class containing the redisManager is not a singleton, it should be assigned to a static field to ensure the same singleton instance is used to retrieve clients.
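A minimal sketch of that shared-instance pattern, assuming the ElasticCacheRedisResolver from above and the same CACHE_URL/CACHE_PORT environment variables used in the question (the RedisPool class name is illustrative, not part of the original answer):
using System;
using ServiceStack.Redis;

// Holds a single RedisManagerPool for the whole process so every caller
// shares the same connection pool.
public static class RedisPool
{
    public static readonly IRedisClientsManager Manager =
        new RedisManagerPool(
            Environment.GetEnvironmentVariable("CACHE_URL") + ":" +
            Environment.GetEnvironmentVariable("CACHE_PORT"))
        {
            RedisResolver = new ElasticCacheRedisResolver()
        };
}

// Usage: resolve clients from the shared manager instead of creating pools per call.
// using (var client = RedisPool.Manager.GetClient())
// {
//     var value = client.Get<string>("some-key");
// }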

How to check database connection in MongoDB [duplicate]

I use MongoDB drivers to connect to the database. When my form loads, I want to set up connection and to check whether it is ok or not. I do it like this:
var connectionString = "mongodb://localhost";
var client = new MongoClient(connectionString);
var server = client.GetServer();
var database = server.GetDatabase("reestr");
But I do not know how to check the connection. I tried wrapping this code in a try-catch, but to no avail. Even if I use an incorrect connectionString, I still do not get any error message.
To ping the server with the new 3.0 driver, it's:
var database = client.GetDatabase("YourDbHere");
database.RunCommandAsync((Command<BsonDocument>)"{ping:1}")
.Wait();
There's a ping method for that:
var connectionString = "mongodb://localhost";
var client = new MongoClient(connectionString);
var server = client.GetServer();
server.Ping();
Full example for driver 2.4.3, where client.GetServer() isn't available, based on Paul Keister's answer.
client = new MongoClient("mongodb://localhost");
database = client.GetDatabase(mongoDbStr);

bool isMongoLive = database.RunCommandAsync((Command<BsonDocument>)"{ping:1}").Wait(1000);
if (isMongoLive)
{
    // connected
}
else
{
    // couldn't connect
}
I've had the same question as the OP, and tried each and every solution I was able to find on the Internet...
Well, none of them worked to my true satisfaction, so I opted to research a reliable and responsive way of checking whether the connection to a MongoDB database server is alive, and to do so without blocking the application's synchronous execution for too long.
So here are my prerequisites:
Synchronous processing of the connection check
Short to very short time slice for the connection check
Reliability of the connection check
If possible, not throwing exceptions and not triggering timeouts
I set up a fresh MongoDB installation (version 3.6) on the default localhost URL: mongodb://localhost:27017. I also noted down another URL where there is no MongoDB database server: mongodb://localhost:27071.
I'm using the C# Driver 2.4.4 and do not use the legacy implementation (the MongoDB.Driver.Legacy assembly).
So my expectation is that checking the connection to the first URL should report a live connection to the existing MongoDB server, and checking the connection to the second URL should report a failure for the non-existent MongoDB server.
Using the IMongoDatabase.RunCommand method queries the server and causes the server response timeout to elapse, so it does not meet the prerequisites. Furthermore, after the timeout it breaks with a TimeoutException, which requires additional exception handling.
This very SO question, and also this other SO question, delivered most of the starting information I needed for my solution... So guys, many thanks for this!
Now my solution:
private static bool ProbeForMongoDbConnection(string connectionString, string dbName)
{
    var probeTask =
        Task.Run(() =>
        {
            var isAlive = false;
            var client = new MongoDB.Driver.MongoClient(connectionString);

            for (var k = 0; k < 6; k++)
            {
                client.GetDatabase(dbName);
                var server = client.Cluster.Description.Servers.FirstOrDefault();
                isAlive = (server != null &&
                           server.HeartbeatException == null &&
                           server.State == MongoDB.Driver.Core.Servers.ServerState.Connected);
                if (isAlive)
                {
                    break;
                }

                System.Threading.Thread.Sleep(300);
            }

            return isAlive;
        });

    probeTask.Wait();
    return probeTask.Result;
}
The idea behind this is that the MongoDB server does not react (and appears not to exist) until a real attempt is made to access some resource on the server (for example, a database). But retrieving a resource once is not enough, as the server's state in the client's cluster description has not yet been updated. That update comes only when the resource is retrieved again; from that point on, the client has a valid cluster description with valid data inside it.
Generally it seems to me that the MongoDB server does not proactively propagate its cluster description to all connected clients. Rather, each client receives the description when a request to the server is made. If any of you have more information on this, please either confirm or correct my understanding.
Now, when we target an invalid MongoDB server URL, the cluster description remains invalid and we can catch this and deliver a usable signal for that case.
So the following statements (for the valid URL)
// The admin database should exist on each MongoDB 3.6 Installation, if not explicitly deleted!
var isAlive = ProbeForMongoDbConnection("mongodb://localhost:27017", "admin");
Console.WriteLine("Connection to mongodb://localhost:27017 was " + (isAlive ? "successful!" : "NOT successful!"));
will print out
Connection to mongodb://localhost:27017 was successful!
and the statements (for the invalid URL)
// The admin database should exist on each MongoDB 3.6 Installation, if not explicitly deleted!
isAlive = ProbeForMongoDbConnection("mongodb://localhost:27071", "admin");
Console.WriteLine("Connection to mongodb://localhost:27071 was " + (isAlive ? "successful!" : "NOT successful!"));
will print out
Connection to mongodb://localhost:27071 was NOT successful!
Here is a simple extension method to ping a MongoDB server:
public static class MongoDbExt
{
    public static bool Ping(this IMongoDatabase db, int secondToWait = 1)
    {
        if (secondToWait <= 0)
            throw new ArgumentOutOfRangeException("secondToWait", secondToWait, "Must be at least 1 second");

        return db.RunCommandAsync((Command<MongoDB.Bson.BsonDocument>)"{ping:1}").Wait(secondToWait * 1000);
    }
}
You can use it like so:
var client = new MongoClient("yourConnectionString");
var database = client.GetDatabase("yourDatabase");
if (!database.Ping())
    throw new Exception("Could not connect to MongoDb");
This is a solution using the try-catch approach:
var database = client.GetDatabase("YourDbHere");
bool isMongoConnected;
try
{
await database.RunCommandAsync((Command<BsonDocument>)"{ping:1}");
isMongoConnected = true;
}
catch(Exception)
{
isMongoConnected = false;
}
So when it fails to connect to the database, it will throw an exception and we can set our bool flag in the catch block.
If you want to handle connection issues in your program, you can use the ICluster.DescriptionChanged event.
When the MongoClient is created, it will continue to attempt connections in the background until it succeeds.
using MongoDB.Driver;
using MongoDB.Driver.Core.Clusters;

var mongoClient = new MongoClient("localhost");
mongoClient.Cluster.DescriptionChanged += Cluster_DescriptionChanged;

public void Cluster_DescriptionChanged(object sender, ClusterDescriptionChangedEventArgs e)
{
    switch (e.NewClusterDescription.State)
    {
        case ClusterState.Disconnected:
            // react to the connection being lost
            break;
        case ClusterState.Connected:
            // react to the connection being (re)established
            break;
    }
}

Getting Akka.NET to connect to a remote addresses

All the demos I have found showing how to get started with remoting in Akka.NET demonstrate the simplest use case where the two actors are running on the same machine using localhost.
I am trying to get an Akka.NET actor to connect to a remote machine and have run into some difficulty.
The code is extremely simple:
Client Code:
var config = ConfigurationFactory.ParseString(#"
akka {
log-config-on-start = on
stdout-loglevel = DEBUG
loglevel = DEBUG
actor {
provider = ""Akka.Remote.RemoteActorRefProvider, Akka.Remote""
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
deployment {
/remoteactor {
router = round-robin-pool
nr-of-instances = 5
remote = ""akka.tcp://system2#xxx.australiasoutheast.cloudapp.azure.com:666""
}
}
}
remote {
dot-netty.tcp {
port = 0
hostname = localhost
}
}
}
");
using (var system = ActorSystem.Create("system1", config))
{
Console.ReadLine();
}
Server Code:
var config = ConfigurationFactory.ParseString(#"
akka {
log-config-on-start = on
stdout-loglevel = DEBUG
loglevel = DEBUG
actor {
provider = ""Akka.Remote.RemoteActorRefProvider, Akka.Remote""
debug {
receive = on
autoreceive = on
lifecycle = on
event-stream = on
unhandled = on
}
}
remote {
dot-netty.tcp {
transport-protocol = tcp
port = 666
hostname = ""10.0.0.4"" //This is the local IP address
}
}
}
");
using (ActorSystem.Create("system2", config))
{
Console.ReadLine();
}
I can successfully connect when I run the actor process on another machine on my local network but when I distribute the same simple example onto a cloud VM I receive the following error:
[ERROR][11/9/2017 3:58:45 PM][Thread 0008][[akka://system2/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fsystem1%40localhost%3A28456-1/endpointWriter#1657012126]] Dropping message [Akka.Remote.DaemonMsgCreate] for non-local recipient [[akka.tcp://system2@xxx.australiasoutheast.cloudapp.azure.com:666/remote]] arriving at [akka.tcp://system2@xxx.australiasoutheast.cloudapp.azure.com:666] inbound addresses [akka.tcp://system2@10.0.0.4:666]
Cause: Unknown
I have also tried using "127.0.0.1" but that doesn't seem to work either locally or over the net.
Could anyone provide any input on what I might be doing wrong?
UPDATE:
I have tried to use the bind-hostname and bind-port options available in Akka.NET, as these are supposed to get around the NAT issues I believe I am suffering from. Unfortunately this doesn't seem to work either. I have tried various configuration options, such as using the hostname and IP address as shown below:
remote {
    dot-netty.tcp {
        port = 666
        hostname = "13.73.xx.xx"
        bind-port = 666
        bind-hostname = "0.0.0.0"
    }
}
The error message I receive when I try the above configuration is:
[ERROR][11/12/2017 5:19:58 AM][Thread 0003][Akka.Remote.Transport.DotNetty.TcpTransport] Failed to bind to 13.73.xx.xx:666; shutting down DotNetty transport.
Cause: System.AggregateException: One or more errors occurred. ---> System.Net.Sockets.SocketException: The requested address is not valid in its context
A few remarks:
In your server config, have it bind to 0.0.0.0 (hostname = 0.0.0.0). This way the socket will bind to all local endpoints, in case your cloud-hosted environment uses multiple network endpoints.
Then set public-hostname = xxx.australiasoutheast.cloudapp.azure.com. This way the hostname for the server instance is the same as the remoting address you are using in your remoting URL.
Do note that the public-hostname (and hostname, if you are not using public-hostname) must be DNS resolvable. A sketch of the resulting config follows these remarks.
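Putting those remarks together, the server-side remote section would look roughly like the sketch below; the Azure hostname is the one from the question, and the rest mirrors the configuration already shown:
remote {
    dot-netty.tcp {
        port = 666
        hostname = "0.0.0.0" //bind to all local endpoints inside the VM
        public-hostname = "xxx.australiasoutheast.cloudapp.azure.com" //DNS-resolvable address advertised in remoting URLs
    }
}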

FTP client, Unexpected error occurred on a receive occurs twice, then times out indefinitely

I have an FTP client, running as part of a Windows service, that gets information from an FTP server on a scheduled basis. My issue is that sometimes the FTP server is down for planned maintenance. When this happens, my FTP client still calls out on its schedule and fails with the following error:
System.Net.WebException. The underlying connection was closed: An unexpected error occurred on a receive
I get the error above twice. After this, I get the following timeout error every time indefinitely:
System.Net.WebException The operation has timed out
Even after the maintenance window is complete, my Windows service keeps timing out when attempting to connect to the FTP server. The only way we can solve the problem is by restarting the Windows service. The following code shows my FTP client code:
var _request = (FtpWebRequest)WebRequest.Create(configuration.Url);
_request.Method = WebRequestMethods.Ftp.DownloadFile;
_request.KeepAlive = false;
_request.Timeout = configuration.RequestTimeoutInMilliseconds;
_request.Proxy = null; // Do NOT use a proxy
_request.Credentials = new NetworkCredential(configuration.UserName, configuration.Password);
_request.ServicePoint.ConnectionLeaseTimeout = configuration.RequestTimeoutInMilliseconds;
_request.ServicePoint.MaxIdleTime = configuration.RequestTimeoutInMilliseconds;

try
{
    using (var _response = (FtpWebResponse)_request.GetResponse())
    using (var _responseStream = _response.GetResponseStream())
    using (var _streamReader = new StreamReader(_responseStream))
    {
        this.c_rateSourceData = _streamReader.ReadToEnd();
    }
}
catch (Exception genericException)
{
    throw genericException;
}
Anyone know what the issue might be?

StreamInsight: Using a local Observer for a RemoteObservable

I've been playing with StreamInsight v2.3 and the newer Rx capabilities it provides. I'm investigating the use of SI for an Event Sourcing implementation. I've tweaked some of the MSDN sample code to get the following:
code for the server process:
using (var server = Server.Create("Default"))
{
var host = new ServiceHost(server.CreateManagementService());
host.AddServiceEndpoint(typeof(IManagementService), new WSHttpBinding(SecurityMode.Message), "http://localhost/SIDemo");
host.Open();
var myApp = server.CreateApplication("SIDemoApp");
var mySource = myApp.DefineObservable(() => Observable.Interval(TimeSpan.FromSeconds(1))).ToPointStreamable(x => PointEvent.CreateInsert(DateTimeOffset.Now, x), AdvanceTimeSettings.StrictlyIncreasingStartTime);
mySource.Deploy("demoSource");
Console.WriteLine("Hit enter to stop.");
Console.ReadLine();
host.Close();
}
code for the client process:
using (var server = Server.Connect(new System.ServiceModel.EndpointAddress(@"http://localhost/SIDemo")))
{
    var myApp = server.Applications["SIDemoApp"];
    var mySource = myApp.GetObservable<long>("demoSource");

    using (var mySink = mySource.Subscribe(x => Console.WriteLine("Output - {0}", x)))
    {
        Console.WriteLine("Hit enter to stop.");
        Console.ReadLine();
    }
}
Trying to run this produces the following error:
Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int64]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer.
The sample code I started with defines an observer and sink and binds it in the StreamInsight server. I'm trying to keep the observer in the client process. Is there a way to set up an observer in the client app for a remote StreamInsight source? Does this have to be done through something like a WCF endpoint in the server that is observed by the client?
Actually, the error is pointing you to the solution: you need to 'bind' to the source.
Please check the snippet below:
//Get SOURCE from server
var serverSource = myApp.GetStreamable<long>("demoSource");

//SINK for 'demoSource'
var sinkToBind = myApp.DefineObserver<long>(() => Observer.Create<long>(value => Console.WriteLine("From client : " + value)));

//BINDING
var processForSink = serverSource.Bind(sinkToBind).Run("processForSink");
Also note that the sink will run on the server, not (as I first guessed) on the client. If you look at the console apps for both server and client, the console output is written in the server app.
If there is a way to run the sink on the client, I don't know it, and I'd like to know that too.
