RabbitMQ - Connecting to a cluster from C#

We have created a RabbitMQ cluster with two nodes (rabbit and rabbit1). We have 4 queues which are configured to be highly available queues by following http://www.rabbitmq.com/clustering.html and http://www.rabbitmq.com/ha.html
Before clustering, we used to connect to the node using the snippet below.
var factory = new ConnectionFactory() { HostName = _rabbitMQ_Hostname, UserName = _rabbitMQ_Username, Password = _rabbitMQ_Password };

using (var connection = factory.CreateConnection())
using (var channel = connection.CreateModel())
{
    channel.QueueDeclare(queue: _autoCancellationPNS_QueueName,
                         durable: true,
                         exclusive: false,
                         autoDelete: false,
                         arguments: null);

    string message = appointmentId.ToString();
    var body = Encoding.UTF8.GetBytes(message);

    IBasicProperties properties = channel.CreateBasicProperties();
    properties.DeliveryMode = 2;

    channel.BasicPublish(exchange: _rabbitMQ_Exchange,
                         routingKey: _autoCancellationPNS_RoutingKey,
                         basicProperties: properties,
                         body: body);

    returnMessage.ShortMessage = "Added to queue";
    returnMessage.LongMessage = "Added to queue";
    logger.Debug("|Added to queue");
}
How should we deal with the cluster?

The RabbitMQ.Client library has supported connecting to multiple hosts for over a year; this was added in pull request #92. You should be able to do something like the following:
using (var connection = connectionFactory.CreateConnection(hostList))
using (var channel = connection.CreateModel())
{
}
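For reference, hostList is simply the list of cluster node host names; a minimal sketch, assuming the two node names from the question, could be:
// Hypothetical host list built from the two cluster nodes mentioned above.
var hostList = new List<string> { "rabbit", "rabbit1" };

var connectionFactory = new ConnectionFactory
{
    UserName = _rabbitMQ_Username,
    Password = _rabbitMQ_Password
};

// The client tries the hosts in order until one accepts the connection.
using (var connection = connectionFactory.CreateConnection(hostList))
using (var channel = connection.CreateModel())
{
    // declare queues and publish as before
}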
However, with this approach you would need to handle all the recovery etc. yourself. About a year ago we had massive problems with stability in the EasyNetQ client, but since we started using RawRabbit in our clustered environment we have never really had a problem with it.
Disclaimer: I am the creator of RawRabbit.

You can connect to the node you prefer.
Exchanges and queues are visible across the cluster.
Using a load-balancer in front of the nodes is common practice, so the clients have to know only the balancer IP/DNS.
clients ----> balancer -----> RabbitMQ cluster

The .NET client does not (to my knowledge) offer any support for this. You have to build something yourself to select and connect to a node in the cluster.
For example, if you want to implement a round-robin strategy, the pseudocode would be something like this:
get the list of hostname/port combinations that form the cluster
do {
    try {
        connect to next hostname in the list
    } catch (rabbit connection failed) {
        maybe log a warning
    }
} while not connected
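A minimal C# sketch of that loop, assuming the standard RabbitMQ.Client ConnectionFactory and a hypothetical clusterHosts list, might look like this:
// Hypothetical list of the cluster nodes to try in turn.
var clusterHosts = new List<string> { "rabbit", "rabbit1" };
IConnection connection = null;
int next = 0;

while (connection == null)
{
    var factory = new ConnectionFactory
    {
        HostName = clusterHosts[next % clusterHosts.Count],
        UserName = _rabbitMQ_Username,
        Password = _rabbitMQ_Password
    };

    try
    {
        connection = factory.CreateConnection();
    }
    catch (BrokerUnreachableException)
    {
        // maybe log a warning; a real implementation also needs a retry limit and a delay
        next++;
    }
}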
Of course you now need to think about connection strategies, retries, number of connection attempts, exponential backoff, ...
... which is why I would strongly recommend looking for a library that already provides this kind of functionality (and much more). One such library is EasyNetQ (available on NuGet); maybe NServiceBus (with the RabbitMQ transport) or MassTransit could also be interesting.
Another approach could be to set up an intelligent load balancer in front of the individual nodes (so that myrabbitcluster.mycompany.com load balances between the cluster nodes and is then responsible for detecting node failures and taking faulty nodes out of rotation).

Related

IBMMQDotnet client retry mechanism

Hi everyone, I am completely new to queues and especially to the IBMMQDotnet client library. Currently my application tries to send a DTO object to the queue, and sometimes it can fail for various reasons, like an exception occurring or a network issue. Is there a retry mechanism? I would like to implement one; I tried to google it but could not find any example. Below is the current code:
if (!TryConnectToQueueManager())
{
    return;
}

using var destination = GetMqObjectForWrite(message.Destination, message.DestinationType);

var mqMessage = new MQMessage
{
    Format = MQC.MQFMT_STRING,
    CharacterSet = 1208
};

if (message.Headers?.Count > 0)
{
    foreach (var (key, value) in message.Headers)
    {
        mqMessage.SetStringProperty(key, value);
    }
}

mqMessage.WriteString(JsonSerializer.Serialize(message.Data));
destination.Put(mqMessage);
destination.Close();
IBM MQ provides a feature called Client Auto Reconnect. You could refer to the following Knowledge Center page: Client Auto Reconnect.
If there is a connection failure because of a network issue, the IBM MQ client will try to re-establish a connection to the queue manager for a specific time period (which is configurable) before throwing an exception to the application.
You could refer to the samples "SimpleClientAutoReconnectPut" and "SimpleClientAutoReconnectGet", which are available as part of the client installation.
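If you also want an application-level retry on top of that (for example around the Put call), a rough sketch reusing the helpers from your code could look like the following; the retry count and delay are arbitrary assumptions:
// Hypothetical retry wrapper around the existing send logic (IBM.WMQ namespace).
const int maxAttempts = 3;

for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        if (!TryConnectToQueueManager())
        {
            continue; // could not connect this time; try again
        }

        using var destination = GetMqObjectForWrite(message.Destination, message.DestinationType);
        var mqMessage = new MQMessage { Format = MQC.MQFMT_STRING, CharacterSet = 1208 };
        mqMessage.WriteString(JsonSerializer.Serialize(message.Data));

        destination.Put(mqMessage);
        destination.Close();
        break; // success, stop retrying
    }
    catch (MQException) when (attempt < maxAttempts)
    {
        // transient failure: wait briefly, then retry (real code should also log)
        Thread.Sleep(TimeSpan.FromSeconds(2 * attempt));
    }
}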

Connection to Elasticsearch 5.x is taking too long. NEST 5.0 RC

I am new to Elasticsearch and I have problems with the connection to the Elasticsearch server.
I am using Elasticsearch 5.0.1, and I am running my code under .NET 4.5.2.
I am using the NEST 5.0 RC lib.
I also installed Kibana and X-Pack on my PC.
My code to connect to elasticsearch:
var nodes = new Uri[] { new Uri("http://localhost:9200") };
var pool = new StaticConnectionPool(nodes);
var settings = new ConnectionSettings(pool).DefaultIndex("visitor_index");
var client = new ElasticClient(settings);
My Search code:
var result = client.Search<VisitorTest>(s => s.Index("visitor_index")
.Query(q => q.Match(mq => mq.Field(f => f.Name).Query("Visitor 1"))));
Basically the problem I am having is that each time I create a new ElasticClient it takes between 40-80 milliseconds to establish the connection.
I created a unit test for this in which I create a connection and run the search query twice, and then create a second connection in the same test and again run the search query two times.
The result is that the first query after each connection takes between 40-80 milliseconds, and the second query with the same connection takes 2 milliseconds, which is what I expect.
I tried changing the connection string to use a domain (added the domain to my local host file). I also tried removing xpack security so I do not need to authenticate.
xpack.security.enabled: false
But I always get the same result.
A few observations
A single instance of ConnectionSettings should be reused for the lifetime of the application. ConnectionSettings makes heavy use of caching so should be reused.
ElasticClient is thread-safe. A single instance can be safely used for the lifetime of an application.
Unless you have a collection of nodes, I would recommend using SingleNodeConnectionPool instead of StaticConnectionPool. The latter has logic to round-robin over nodes which is unneeded for a single node.
The client takes advantage of connection pooling within the .NET framework; you can adjust KeepAlive behaviour on ConnectionSettings with EnableTcpKeepAlive()
If you have a web proxy configured on your machine, you could have a look at disabling automatic proxy detection with .DisableAutomaticProxyDetection() on ConnectionSettings.
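Putting those observations together, a minimal sketch for NEST 5.x against a single local node (reusing the index and document type from the question) might look like this:
// Create the settings and client once and reuse them for the lifetime of the application.
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(pool)
    .DefaultIndex("visitor_index")
    .DisableAutomaticProxyDetection(); // optional: skip web proxy auto-detection

var client = new ElasticClient(settings); // thread-safe; register as a singleton

var result = client.Search<VisitorTest>(s => s
    .Query(q => q.Match(m => m.Field(f => f.Name).Query("Visitor 1"))));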
I'll add my two cents here.
I had exactly the same issue with 40 ms requests, even though from the Kibana dev tools the same query took 1 ms.
Fixed by tweaking two things:
Ninject part:
kernel.Bind<IEsClientProvider>().To<EsClientProvider>().InSingletonScope().WithConstructorArgument("indexName", "items");
And in client provider:
public ElasticClient GetClient()
{
    if (this.client == null)
    {
        settings = new ConnectionSettings(nodeUri).DefaultIndex(indexName);
        this.client = new ElasticClient(settings);
    }

    return client;
}

Akka.net: Access remote Actors in Cluster

In a clustered environment I have a seed node, node1 and node2.
From node1 I want to send a message to an actor which has been created on node2. The local path to this actor on node2 is akka:MyAkkaSystem/user/AnActor.
Now I want to send a message from an actor on node1 to this specific actor by using an ActorSelection, like this:
var actorSystem = ActorSystem.Create("MyTestSystem");
var c = actorSystem.ActorSelection("/user/ConsoleReceiver");
c.Tell("Hello World");
On node2 the actor has been created like this:
var actorSystem = ActorSystem.Create("MyTestSystem");
var r = actorSystem.ActorOf(Props.Create<MessageReceiver>(), "ConsoleReceiver");
Console.WriteLine(r.Path);
Console.ReadLine();
actorSystem.Terminate().Wait();
Unfortunately this does not work out since the attempt ends in dead letters.
The HOCON configuration on node2 looks like this:
akka {
    actor {
        provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
        deployment {
        }
    }
    remote {
        log-remote-lifecycle-events = DEBUG
        log-received-messages = on
        helios.tcp {
            transport-class = "Akka.Remote.Transport.Helios.HeliosTcpTransport, Akka.Remote"
            applied-adapters = []
            transport-protocol = tcp
            hostname = "127.0.0.1"
            port = 0
        }
    }
    cluster {
        # will inject this node as a self-seed node at run-time
        seed-nodes = ["akka.tcp://webcrawler#127.0.0.1:4053"] # manually populate other seed nodes here, i.e. "akka.tcp://lighthouse#127.0.0.1:4053", "akka.tcp://lighthouse#127.0.0.1:4044"
        roles = [crawler]
    }
}
As the seed node I am using Lighthouse. From a connection point of view everything seems to work out: the seed has been found and each node has received a welcome message.
I thought I had location transparency in a cluster and could reach remote resources as if they were local.
This is not so easy. Consider the following scenario: what if you've created an actor on both nodes under the same path? If you try to use a relative path, without indicating which node you have in mind, which of the actors should receive the message?
Using basic cluster capabilities you can choose a node explicitly, e.g. Context.ActorSelection(_cluster.ReadView.Members.Single(m => /* which node you want to choose */).Address + "/user/ConsoleReceiver"). The Cluster extension gives you a read view with information about all members visible from the current node.
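For a quick test you can also select the remote actor by its full address; a minimal sketch (the host and port below are assumptions and must match what node2 actually binds to):
// Hypothetical full remote path; replace host/port with node2's actual binding.
var selection = actorSystem.ActorSelection(
    "akka.tcp://MyTestSystem@127.0.0.1:50123/user/ConsoleReceiver");
selection.Tell("Hello World");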
There are many ways to send a message to another actor without having to know which node it lives on.
First approach is to use the Akka.Cluster.Tools cluster singleton feature - it allows you to create at most one instance of a given actor in the whole cluster. In case of node failures it will migrate to another node. Be aware that this solution shouldn't be used if you want to have many actors working that way; it's more for distinct, special-case actors.
Second approach is to use the Akka.Cluster.Tools Distributed Pub/Sub feature to broadcast cluster-wide events to actors subscribed to a specific topic, without worrying about their actual location. This is a good choice for message broadcasting scenarios (a minimal sketch follows below).
Last approach is to use the Akka.Cluster.Sharding feature, which manages actor lifecycles automatically - you don't need to create actors explicitly - it can route messages to them from anywhere in the cluster and can rebalance them across many cluster nodes when needed.
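As an illustration of the second approach, here is a minimal sketch using the Akka.Cluster.Tools pub/sub mediator (the topic name "news" is an assumption; the code runs inside an actor):
// using Akka.Cluster.Tools.PublishSubscribe;

// Subscriber side: register the current actor for a topic.
var mediator = DistributedPubSub.Get(Context.System).Mediator;
mediator.Tell(new Subscribe("news", Self));

// Publisher side (can run on any node in the cluster):
mediator.Tell(new Publish("news", "Hello World"));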

.Net NMS.ActiveMQ should I store session and connection between message send calls

I just started with ActiveMQ and have several questions.
I need to send messages using ActiveMQ.
What I have done so far:
public class ActiveMQSender
{
    private readonly Uri connectionUri;
    private readonly IConnectionFactory connectionFactory;
    private readonly string destinationName;

    public ActiveMQSender()
    {
        this.connectionUri = new Uri("activemq:tcp://localhost:61616");
        this.connectionFactory = new NMSConnectionFactory(this.connectionUri);
        this.destinationName = "queue://testQ";
    }

    public void Send(string msg)
    {
        using (var connection = this.connectionFactory.CreateConnection())
        using (var session = connection.CreateSession())
        {
            var destination = SessionUtil.GetDestination(session, this.destinationName);
            using (var producer = session.CreateProducer(destination))
            {
                connection.Start();
                var message = session.CreateTextMessage(msg);
                producer.Send(message);
            }
        }
    }
}
There will be only one instance of this class which will be injected as a constructor parameter.
I am afraid of the overhead of connection, session and producer creation, because messages will be sent frequently (more often than one message per 10 seconds).
Should I reuse the connection, session or producer instances, and how should I react to connection failures? What is the common pattern in such scenarios?
NMS.ActiveMQ, like the Java client, provides a failover transport which will automatically attempt to reconnect to the broker should the connection be lost. You can use that to minimize your failure-handling code. Do some Google searching on the topic of the failover transport in AMQ.
Recreating the connection and associated resources is not a lightweight operation, so your best bet is to cache them and reuse them for as long as you need that connection. In combination with failover you can reliably reuse the same MessageProducer over and over.
The model of NMS is much the same as JMS, so doing some reading on JMS should prove enlightening.
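A rough sketch of that approach, caching the connection, session and producer and letting the failover transport handle reconnects (the class layout is an assumption based on the code in the question):
public class ActiveMQSender : IDisposable
{
    // failover URI: the transport reconnects automatically if the broker connection drops
    private readonly IConnectionFactory connectionFactory =
        new NMSConnectionFactory(new Uri("activemq:failover:(tcp://localhost:61616)"));

    private readonly string destinationName = "queue://testQ";

    // created once and reused for every Send call (add locking if Send is called concurrently)
    private IConnection connection;
    private ISession session;
    private IMessageProducer producer;

    public void Send(string msg)
    {
        if (this.connection == null)
        {
            this.connection = this.connectionFactory.CreateConnection();
            this.session = this.connection.CreateSession();
            var destination = SessionUtil.GetDestination(this.session, this.destinationName);
            this.producer = this.session.CreateProducer(destination);
            this.connection.Start();
        }

        this.producer.Send(this.session.CreateTextMessage(msg));
    }

    public void Dispose()
    {
        this.producer?.Dispose();
        this.session?.Dispose();
        this.connection?.Dispose();
    }
}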
Presumably you're using NMS for this?
As a suggestion, you might want to consider using a blocking queue to throttle the messages and then send a chunk of n messages in a batch. Then you can keep your code and just publish the batch, disposing your connection, session and producer when you've finished, as you are doing now.
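A minimal sketch of that batching idea (the batch size, capacity and the SendBatch helper are assumptions):
// using System.Collections.Concurrent; using System.Threading.Tasks;
var pending = new BlockingCollection<string>(boundedCapacity: 1000);

// Background task drains the queue and publishes in chunks.
Task.Run(() =>
{
    var batch = new List<string>();
    foreach (var msg in pending.GetConsumingEnumerable())
    {
        batch.Add(msg);
        // flush when the batch is full or nothing else is queued right now
        if (batch.Count >= 50 || pending.Count == 0)
        {
            SendBatch(batch); // hypothetical helper: open connection/session/producer, send all, dispose
            batch.Clear();
        }
    }
});

// Callers simply enqueue:
pending.Add(msg);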
As for the connection failures - you should be able to hook up an exception listener to your connection so that you get notified of any problems.

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach(var stock in StockService.GetStocksWithBigNews())
{
var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
var connections = /* find connections associated with user ids */;
foreach(var connection in connections)
{
connection.Send(...);
}
}
In this question on filtering connections, they mention that I could keep current connections in memory but (1) it's bad for scaling and (2) it's bad for multi node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this also. I used the 1.0 alpha and it was straightforward: I created a Hub and from there on it just worked :)
In my project I have N compute units (servers processing work); when they start up they invoke ComputeUnitRegister.
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is :
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable ?
RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint"):
"http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app or whatever in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();
public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have methods for these calls (let me know if you need to see the code for this also). Basically they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever the server compute unit is doing some work it will call Running, which will trigger a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how groups and connections can be used. Lastly, it is also scaled out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
    serviceBusConnectionString,
    2,
    3,
    GetRoleInstanceNumber(),
    topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus on Azure; remember the Provider=SharedSecret. When adding the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out on. I used 3 because I have 2 Azure instances and my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;

    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at: http://taskqueue.cloudapp.net/#/compute-units
When using SignalR, after a client has connected to the server they are given a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even the SQL Server backplane (more to come), for example. So long story short, we take care of your scale-out scenarios for you via backplanes/service buses without you having to worry about it.
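To tie this back to the question, one common pattern (a minimal sketch, assuming SignalR 2.x hubs and authenticated users whose user id is available on connect) is to add each connection to a group named after its user id, so you never have to track connection IDs yourself:
public class StockHub : Hub // hypothetical hub
{
    public override Task OnConnected()
    {
        // group name = user id, so messages can be addressed per user
        Groups.Add(Context.ConnectionId, Context.User.Identity.Name);
        return base.OnConnected();
    }
}

// Elsewhere on the server, when stock news arrives:
var hubContext = GlobalHost.ConnectionManager.GetHubContext<StockHub>();
foreach (var stock in StockService.GetStocksWithBigNews())
{
    foreach (var userId in UserService.GetUserIdsThatCareAboutStock(stock))
    {
        hubContext.Clients.Group(userId).stockUpdated(stock); // client-side handler name is an assumption
    }
}
With a Redis or SQL Server backplane configured, Clients.Group(...) reaches connections on every node, so this approach stays scalable across multiple servers.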
