I need to make a persistent connection to an Aerospike NoSQL DB in a Web service.
In a non-Web application, the connection is straightforward:
using (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000))
{
...
}
But in a Web service application, creating a new client for each request is expensive. The Best Practices say this too: "use only one client instance per cluster in a program and share that instance among multiple threads. AerospikeClient and AsyncClient are thread-safe."
I can make a static object, but what if the client disconnects, either by error or timeout (24-hour maximum connection lifetime)? Can anyone provide a fault-tolerant code snippet? (Maybe something similar to the Redis pattern in How does ConnectionMultiplexer deal with disconnects?)
The client manages a socket pool internally. If a socket error or timeout occurs, that socket is disposed of, so a single shared client instance recovers on its own.
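A minimal sketch of sharing one client across all requests, mirroring the Lazy&lt;ConnectionMultiplexer&gt; pattern used for Redis further down; the host and port are the placeholders from the question, and the AerospikeConnection class name is illustrative.

using System;
using Aerospike.Client;

public static class AerospikeConnection
{
    // One client per cluster, shared by every request and thread (AerospikeClient is thread-safe).
    // The client's internal socket pool disposes of bad sockets and opens new ones as needed,
    // so no manual reconnect logic is required here.
    private static readonly Lazy<AerospikeClient> lazyClient =
        new Lazy<AerospikeClient>(() => new AerospikeClient("127.0.0.1", 3000));

    public static AerospikeClient Client
    {
        get { return lazyClient.Value; }
    }
}

Request handlers then use AerospikeConnection.Client directly and never dispose of it; the single instance lives for the lifetime of the application.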
Related
I'm using an Azure Function (V1) with StackExchange.Redis 1.2.6. The function receives thousands of messages per minute, and for every message, for every device, I check Redis. I noticed that when we have more messages, we get the error below.
Exception while executing function: TSFEventRoutingFunction No connection is available to service this operation: HGET GEO_DYNAMIC_hash; It was not possible to connect to the redis server(s); ConnectTimeout; IOCP: (Busy=1,Free=999,Min=24,Max=1000), WORKER: (Busy=47,Free=32720,Min=24,Max=32767), Local-CPU: n/a It was not possible to connect to the redis server(s); ConnectTimeout
CacheService as recommended by MS
public class CacheService : ICacheService
{
    private readonly IDatabase cache;

    private static readonly string connectionString = ConfigurationManager.AppSettings["RedisConnection"];

    public CacheService()
    {
        this.cache = Connection.GetDatabase();
    }

    private static Lazy<ConnectionMultiplexer> lazyConnection = new Lazy<ConnectionMultiplexer>(() =>
    {
        return ConnectionMultiplexer.Connect(connectionString);
    });

    public static ConnectionMultiplexer Connection
    {
        get
        {
            return lazyConnection.Value;
        }
    }

    public async Task<string> GetAsync(string hashKey, string ruleKey)
    {
        return await this.cache.HashGetAsync(hashKey, ruleKey);
    }
}
I'm injecting ICacheService into the Azure Function and calling the GetAsync method on every request.
We are using an Azure Redis C3 instance.
As you can see, I currently have a single connection. Would creating multiple connections help solve this issue? Is there any other suggestion to solve or understand it?
There are many different causes of the error you are getting. Here are some I can think of off the top of my head (not in any particular order):
Your connectTimeout is too small. I often see customers set a small connect timeout because they think it will ensure that the connection is established within that time span. The problem with this approach is that when something goes wrong (high client CPU, high server CPU, etc.), the connection attempt will fail. This often makes a bad situation worse: instead of helping, it forces the system to restart the process of trying to reconnect, often resulting in a connect -> fail -> retry loop. I generally recommend that you leave your connectTimeout at 15 seconds or higher. It is better to let your connection attempt succeed after 15 or 20 seconds than to have it fail after 5 seconds repeatedly, resulting in an outage that lasts several minutes until the system finally recovers.
A server-side failover occurs. A connection is severed by the server as a result of some type of failover from master to replica. This can happen if the server-side software is updated at the Redis layer, the OS layer or the hosting layer.
A networking infrastructure failure of some type (hardware sitting between the client and the server sees some type of issue).
You change the access password for your Redis instance. Changing the password will reset connections to all clients to force them to re-authenticate.
Thread pool settings need to be adjusted. If your thread pool settings are not adjusted correctly for your workload, then you can run into delays in spinning up new threads, as explained here; a configuration sketch follows at the end of this answer.
I have written a bunch of best practices for Redis that will help you avoid other problems as well.
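A minimal sketch of the connect timeout and thread pool points above, assuming you can run it once at startup; the numbers and the RedisBootstrap name are illustrative, not values tuned for the C3 instance in the question.

using System.Threading;
using StackExchange.Redis;

public static class RedisBootstrap
{
    public static ConnectionMultiplexer Connect(string connectionString)
    {
        // Raise the thread pool floor so bursts do not wait on thread injection
        // (illustrative values; size them for your own workload).
        ThreadPool.SetMinThreads(workerThreads: 200, completionPortThreads: 200);

        ConfigurationOptions options = ConfigurationOptions.Parse(connectionString);
        options.ConnectTimeout = 15000;     // 15 seconds or higher, per the advice above
        options.AbortOnConnectFail = false; // keep retrying in the background instead of failing fast

        return ConnectionMultiplexer.Connect(options);
    }
}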
We solved this issue by upgrading StackExchange.Redis to 2.1.30.
I have a WCF WebSocket client/server communication. What I want to do on the client side is keep trying to reconnect to the server if the server was shut down or something. From what I know, once the server is shut down the channel is in a faulted state, so I cannot use it again. I need to create a new one, but I am afraid that there is a memory leak in my solution:
Creating the WebSocket client:
InstanceContext context = new InstanceContext(this);
ServiceReference.MLogDbServiceClient client = new ServiceReference.MLogDbServiceClient(context);
reconnecting:
client.Abort();
client = new ServiceReference.MLogDbServiceClient(context);
In Windows Task Manager I saw that my app grew from 29 MB to 48 MB in two minutes while I kept creating a new channel every 20 ms (that was just for memory-leak testing purposes). Does anyone have a leak-free solution for me?
My client application NEEDS to keep reconnecting to the server (not that frequently, but still). Greetings
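A minimal sketch of one way to recycle the proxy without leaking, assuming the generated duplex client from the question (ServiceReference.MLogDbServiceClient); the Reconnect helper itself is hypothetical.

// Hypothetical helper: close or abort the old proxy before creating a new one,
// so the old channel's resources are released instead of accumulating.
private ServiceReference.MLogDbServiceClient Reconnect(
    ServiceReference.MLogDbServiceClient oldClient, InstanceContext context)
{
    if (oldClient != null)
    {
        try
        {
            if (oldClient.State == CommunicationState.Faulted)
                oldClient.Abort();   // a faulted channel can only be aborted
            else
                oldClient.Close();   // otherwise release it gracefully
        }
        catch (CommunicationException)
        {
            oldClient.Abort();       // fall back to Abort if Close fails
        }
        catch (TimeoutException)
        {
            oldClient.Abort();
        }
    }

    return new ServiceReference.MLogDbServiceClient(context);
}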
I'm storing a ConnectionMultiplexer static object in an ASP.NET MVC website getting ~500 req/sec which hits a Redis instance on RedisLabs. Once in a while I see errors saying SocketFailure on EVAL and an increased connection count on the RedisLabs dashboard. Should I dispose of the old ConnectionMultiplexer instance and recreate a new one, or try to reconnect manually after those exceptions?
The system should attempt to reconnect automatically. What it does not do is retry your commands, because it has no way of knowing what did and did not complete at the server (because: the socket failed; for all it knows, the "ok" response could have already been sent by redis).
So, you should not need to dispose/reconnect. You can monitor the connection failure/reconnect via events published on the multiplexer instance. You can also use the .IsConnected() method on a database (this takes a key for server targeting reasons, but if you are only talking to one server, you could pass anything as the key).
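A minimal sketch of that monitoring approach, using the multiplexer's connection events and the per-database IsConnected check; the endpoint string is a placeholder.

using System;
using StackExchange.Redis;

ConnectionMultiplexer muxer = ConnectionMultiplexer.Connect("localhost:6379"); // placeholder endpoint

// Log failures and automatic recoveries instead of disposing and recreating the multiplexer.
muxer.ConnectionFailed += (sender, e) =>
    Console.WriteLine("Connection failed: {0} ({1})", e.EndPoint, e.FailureType);
muxer.ConnectionRestored += (sender, e) =>
    Console.WriteLine("Connection restored: {0}", e.EndPoint);

IDatabase db = muxer.GetDatabase();
bool available = db.IsConnected("any-key"); // the key only matters for server targeting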
I have several WCF services in my WPF application. I open them using this method:
private void StartSpecificWCFService(IService service, string url, Type serviceInterfaceType)
{
    ServiceHost serviceHost = new ServiceHost(service);
    serviceHost.AddServiceEndpoint(serviceInterfaceType, new NetNamedPipeBinding(), url);
    serviceHost.Open();
    // subscribe to serviceHost.Faulted ??
    _wcfServicesHolder.Add(serviceHost); // A dictionary containing all my services
}
The service attributes are:
[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single, ConcurrencyMode = ConcurrencyMode.Multiple)]
The services are a logging service and an event service; they receive many calls from other processes. I use named pipes since they are the fastest option and the processes run on the SAME computer.
My question is: how do I keep these services up all the time?
1. A poll timer that iterates _wcfServicesHolder and checks whether each service is opened.
2. Subscribe to the serviceHost.Faulted event.
And after a service is in a faulted state, must the client (in a different process) be re-created, or can it still broadcast messages on the same channel?
The exception I receive is:
There was no endpoint listening at net.pipe://localhost/LoggingService that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details
Why do the services have InstanceContextMode = InstanceContextMode.Single with concurrent thread access? Do the services hold some kind of in-memory, thread-safe state? If not, it may be well worth refactoring the services to use InstanceContextMode.PerCall. This should be your default and preferred choice when configuring WCF services - WCF is primarily a technology for implementing a service-oriented architecture, and using a mode other than PerCall violates the Statelessness principle of SO Design Principles.
In support of this, if you have a server-side fault with InstanceContextMode.Single, it suggests something has gone seriously wrong in the service. Any state that you maintained within the service will be lost; clients cannot expect to just reconnect and resume as normal.
Whatever InstanceContextMode you end up using, your channel will fault if it remains open with no clients connecting to it for a certain length of time. Over TCP (or any protocol that explicitly exposes a reliable session), you can specify the inactivity timeout on the reliable session, but you have no such option using pipes.
With pipes, leaving a channel open longer than the configured timeout will fault the channel, rendering it useless. You can subscribe to the channel's Faulted event and recreate the proxy if you are interested in keeping a channel open to the service for the lifetime of your application. As you suggest, another option is to keep polling along the channel to keep it alive.
In order to keep your service host up, go with your #2 option (subscribe to the Faulted event on the service host). When it faults, you need to abort the service host, new up a fresh instance, rewire the Faulted event handler, and open the service host again; a sketch follows below.
There's not much official documentation on this scenario, but here's an old post from an msdn blog describing what you're looking for.
http://blogs.msdn.com/b/drnick/archive/2007/01/16/restarting-a-failed-service.aspx
As to the client, it also will need to recreate its channel to the server when said channel is faulted.
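A minimal sketch of that restart pattern, reusing StartSpecificWCFService and _wcfServicesHolder from the question; it assumes the holder exposes Remove in addition to Add.

private void StartSpecificWCFService(IService service, string url, Type serviceInterfaceType)
{
    ServiceHost serviceHost = new ServiceHost(service);
    serviceHost.AddServiceEndpoint(serviceInterfaceType, new NetNamedPipeBinding(), url);

    // A faulted host must be aborted; then build a fresh one, rewire the handler and reopen.
    serviceHost.Faulted += (sender, e) =>
    {
        ServiceHost faulted = (ServiceHost)sender;
        faulted.Abort();
        _wcfServicesHolder.Remove(faulted);                      // assumed holder API
        StartSpecificWCFService(service, url, serviceInterfaceType);
    };

    serviceHost.Open();
    _wcfServicesHolder.Add(serviceHost);
}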
I can't reconnect to an MQQueueManager after a while, as an exception (reason code 2059, MQRC_Q_MGR_NOT_AVAILABLE) is thrown when I'm constructing a new MQQueueManager object. My client app is written in .NET/C# and I'm running it on Win2003.
However, I can connect to the QM after I have restarted my client app. This would indicate that some state is incorrect in the QM libraries? How can I reset that state in code so that I could reconnect to the QM? Is there a way to reset/disconnect all active TCP connections to the QM from client app code?
My connection code:
Hashtable properties = new Hashtable();
properties.Add( MQC.HOST_NAME_PROPERTY, Host );
properties.Add( MQC.PORT_PROPERTY, Port );
properties.Add( MQC.USER_ID_PROPERTY, UserId );
properties.Add( MQC.PASSWORD_PROPERTY, Password );
properties.Add( MQC.CHANNEL_PROPERTY, ChannelName );
properties.Add( MQC.TRANSPORT_PROPERTY, TransportType );
// Following line throws an exception randomly
MQQueueManager queueManager = new MQQueueManager( qmName, properties );
Stack trace:
Source: amqmdnet
CompletionCode: 2
ReasonCode: 2059
Reason: 2059
Stack Trace:
at IBM.WMQ.MQBase.throwNewMQException()
at IBM.WMQ.MQQueueManager.Connect(String queueManagerName)
at IBM.WMQ.MQQueueManager..ctor(String qmName, Hashtable properties)
at WebSphereMQOutboundAdapter.WebSphereMQOutbound.ConnectToWebSphereMQ()
Connections are per-thread, so if you are attempting to create a new connection while the previous QMgr object is still instantiated, you would get this. If you close the previous connection and destroy the object before creating a new one, you should be OK. Since queues and other WMQ objects depend on a connection handle, these will also need to be destroyed and then reinstantiated after the new connection is made.
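A minimal sketch of that cleanup before reconnecting, assuming a queue field opened from the previous queue manager; queueName is a placeholder.

// Close dependent objects first, then disconnect, then reconnect.
try
{
    if (queue != null)
        queue.Close();                       // queues depend on the old connection handle
    if (queueManager != null && queueManager.IsConnected)
        queueManager.Disconnect();           // release the old connection
}
catch (MQException)
{
    // The old connection is already broken; ignore and fall through to reconnect.
}

queueManager = new MQQueueManager(qmName, properties);
queue = queueManager.AccessQueue(queueName,  // placeholder queue name
    MQC.MQOO_OUTPUT | MQC.MQOO_FAIL_IF_QUIESCING);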
There are of course a few other explanations for this behavior but these are much less likely. For example, it is possible that a channel exit or (in WMQ v7) configuration could be limiting the number of simultaneous connections from a given IP address. When a connection is severed rather than closed, the channel agent holding the connection on the QMgr side has to time out before the QMgr sees the connection as closed. If connection limiting is in place, these "ghost" connections reduce the available pool. But as I said, this is far less common than programs not cleaning up old objects prior to a reconnect attempt.
There is also the possibility that this is a bug. To reduce that possibility, and for a variety of other reasons such as WMQ v6 going end of life next year, I'd recommend use of WMQ v7.0.1.2 for this project, at both the client and server side. In general, you can use v7.0.1.2 client with a v6.0.x server as long as you stick to v6 functionality. Among other things, .Net code is better integrated in v7 and the Cat-3 SupportPacs are now included in the base install media rather than a separate download.
After some months fighting with this issue and with IBM support, the best solution I found was to change the connect/disconnect logic in the code that uses the IBM MQ driver.
Instead of calling manager.Disconnect() and manager.Close() for each GET/PUT, connect once and then reconnect only if you hit an exception (like losing the connection).
What I've figured out is that some bug exists in the IBM MQ driver that caches some information for each connect/disconnect. When this buffer is full, the application stops reconnecting.
The driver version (client DLLs) where I have this issue is 7.0.1.6.
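A minimal sketch of that connect-once, reconnect-on-failure approach; ConnectToWebSphereMQ is the method from the stack trace above, and _queue is a hypothetical field that it is assumed to (re)open.

private void PutWithReconnect(MQMessage message)
{
    try
    {
        _queue.Put(message, new MQPutMessageOptions());
    }
    catch (MQException ex)
    {
        if (ex.ReasonCode != MQC.MQRC_CONNECTION_BROKEN &&
            ex.ReasonCode != MQC.MQRC_Q_MGR_NOT_AVAILABLE)
            throw;

        // Reconnect once and retry, instead of disconnecting after every PUT.
        ConnectToWebSphereMQ();                      // assumed to rebuild _queue as well
        _queue.Put(message, new MQPutMessageOptions());
    }
}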