ASP.NET + RabbitMQ connection and channel lifetime - C#

I see a lot of examples of using RabbitMQ in .NET (in ASP.NET or console applications). Most of them look like this:
using (var connection = MyConnectionFactoryWrapper.CreateConnection())
using (var channel = connection.CreateChannel())
{
    ...
}
Is this efficient? In the documentation I see:
AMQP connections are typically long-lived. AMQP is an application level protocol that uses TCP for reliable delivery.
So I suppose it's better to have one connection per application. Another point, about channels:
AMQP 0-9-1 connections are multiplexed with channels that can be thought of as "lightweight connections that share a single TCP connection".
Here I suppose I can use a channel per request in the case of an ASP.NET application. My question: is it best practice to have connection-per-application and channel-per-request?

Yes, connection-per-application is the suggested approach. Channel-per-request should do as well, but I'd test it against your required throughput. For our project we used EasyNetQ, which takes care of creating connections/channels for you. We just kept a single MessageBus instance for the application.
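For illustration, here is a minimal sketch of connection-per-application with channel-per-request, assuming the pre-7.x RabbitMQ.Client API (ConnectionFactory, IConnection, IModel); the holder class and its wiring are hypothetical and not part of the original question:

using System;
using System.Text;
using RabbitMQ.Client;

// Registered as a singleton so the whole application shares one AMQP connection.
public sealed class RabbitConnectionHolder : IDisposable
{
    private readonly Lazy<IConnection> _connection;

    public RabbitConnectionHolder(string hostName)
    {
        var factory = new ConnectionFactory { HostName = hostName };
        _connection = new Lazy<IConnection>(() => factory.CreateConnection());
    }

    // Channels are cheap: open one per request/operation and dispose it when done.
    public IModel CreateChannel() => _connection.Value.CreateModel();

    public void Dispose()
    {
        if (_connection.IsValueCreated)
            _connection.Value.Dispose();
    }
}

// Per-request usage, e.g. inside a controller action:
// using (var channel = holder.CreateChannel())
// {
//     channel.BasicPublish(exchange: "", routingKey: "my-queue",
//                          basicProperties: null, body: Encoding.UTF8.GetBytes("hello"));
// }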

Related

Simulate 10,000 Azure IoT Hub Device connections from Azure Service Fabric cluster

We are developing a .NET Core service that will be hosted in Azure Service Fabric. This SF service needs to interact with 10,000 devices registered in Azure IoT Hub via its AMQP 1.0 SSL/TLS endpoints. Each IoT Hub device has its own security token and connection string provided by the IoT Hub service.
For our scenario we need to listen to all cloud-to-device messages coming from the 10,000 IoT Hub device instances and "route" these to a central Service Bus topic that the actual "gateways" in the field listen to. So basically we want to forward messages from 10,000 Service Bus queues into one central queue.
What is the best approach to handle these 10,000 AMQP listeners from an SF service? Is there a way we can reuse AMQP connections, sessions or links so we can cache/share resources? And how can we dynamically spread the load of connection maintenance over the 5 nodes in the SF cluster?
We are evaluating these Nuget packages for the implementation:
Microsoft.Azure.ServiceBus
AMQPNetLite
Microsoft.Azure.Devices.Client
We are doing some tests using the Microsoft.Azure.Devices.Client library; see a simplified code sample below:
using System;
using System.Fabric;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;
using Microsoft.ServiceFabric.Services.Runtime;

namespace ID.Monitoring.MonServer.ServiceFabric.ServiceBus
{
    /// <summary>
    /// An instance of this class is created for each service instance by the Service Fabric runtime.
    /// </summary>
    internal sealed class ServiceBus : StatelessService
    {
        private readonly DeviceClient _deviceClient;
        private ConnectionStatus _status;

        public ServiceBus(StatelessServiceContext context)
            : base(context)
        {
            _deviceClient = DeviceClient.CreateFromConnectionString("HostName=id-monitoring-dev.azure-devices.net;DeviceId=100;SharedAccessSignature=SharedAccessSignature sr=id-monitoring-dev.azure-devices.net%2Fdevices%2F100&sig={token}&se=1553265888", TransportType.Amqp_Tcp_Only);
        }

        /// <summary>
        /// This is the main entry point for your service instance.
        /// </summary>
        /// <param name="cancellationToken">Canceled when Service Fabric needs to shut down this service instance.</param>
        protected override async Task RunAsync(CancellationToken cancellationToken)
        {
            _deviceClient.SetConnectionStatusChangesHandler(ConnectionStatusChangeHandler);
            while (!cancellationToken.IsCancellationRequested)
            {
                if (_status != ConnectionStatus.Connected)
                {
                    await _deviceClient.OpenAsync();
                }
                var receivedMessage = await _deviceClient.ReceiveAsync(TimeSpan.FromSeconds(10)).ConfigureAwait(false);
                if (receivedMessage != null)
                {
                    var messageData = Encoding.ASCII.GetString(receivedMessage.GetBytes());
                    // TODO: handle incoming message and publish to common topic
                    await _deviceClient.CompleteAsync(receivedMessage).ConfigureAwait(false);
                }
            }
        }

        private void ConnectionStatusChangeHandler(ConnectionStatus status, ConnectionStatusChangeReason reason)
        {
            _status = status;
        }
    }
}
Question: Does this scale well to 10,000 Service Fabric service instances? Or are there more efficient ways to maintain this many AMQP Service Bus listeners from a Service Fabric service environment? Is there a way we can apply AMQP connection multiplexing, maybe?
Take a look at this.
The second answer provides a sample that allows you to multiplex multiple devices onto one AMQP connection.
The approach you've chosen to monitor your devices won't scale well and will be hard to maintain.
Currently, Service Fabric limits how many instances of a service you can place on a single node. For example, if you create an application with your ServiceBus service and ask for 10,000 instances, you will hit this limitation: the maximum instance count is the number of nodes. That is, if you have a 5-node cluster, you will be able to run only 5 instances of your service using the default scaling approach.
To bypass this issue you have some options:
Partitioning:
To have a single stateless service run more partitions than the node count, you have to partition your service. Assuming you have a 5-node cluster and need 10,000 instances, you would need 2,000 partitions running on each node. If you use a shared process and have enough RAM for this, this approach might help you; please take a look at this thread and this thread before following this approach.
Multiple Named Services:
A named service is the running service definition for one service type; in this case you would create one per device, like:
ServiceBusType
ServiceBus-Device1
ServiceBus-Device2
ServiceBus-Device3
This approach will consume too many resources on your machines, as you will be running one instance for each device, but it is easy to manage: you can spin up a new instance for each new device without affecting other running services.
Parallel Processing per instance:
Here each instance is responsible for processing multiple messages concurrently; in this case you would create 2,000 connections per instance (if running one instance per node on a 5-node cluster). This is the lightest of the approaches in terms of resource consumption, but it is a bit harder to maintain, as you have to handle the balancing yourself and might need an extra service to monitor and delegate tasks to all the services and ensure the messages are being processed evenly (see the sketch after this answer for what multiple connections per instance might look like).
Summary:
One instance handling one connection and one message at a time will require 10,000 instances of your service; partitioning is similar, though you can use a shared process to reduce memory consumption, but memory consumption will still be high in both cases.
Multiple named services could be an option if the number of services were not so high, and you also wouldn't be able to share connections, so I won't recommend this approach for your scenario.
The third option is the most resource-friendly, but you will have to find a way to partition the connections evenly across the cluster nodes.
You can also use a mixed approach: for example, a service handling multiple messages in parallel plus a partitioned service to define the key range of devices.
Please take a look at the links I've mentioned.
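As an illustration of the parallel-processing option, here is a minimal sketch of one service instance (or partition) pumping several DeviceClient receivers concurrently. The slice of device connection strings is a hypothetical placeholder; the SDK calls are the same ones used in the question's sample:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Devices.Client;

internal static class MultiDeviceReceiver
{
    // Each service instance/partition owns a slice of the 10,000 device connection strings.
    public static Task RunAsync(IEnumerable<string> deviceConnectionStrings, CancellationToken token)
    {
        var pumps = deviceConnectionStrings
            .Select(cs => ReceiveLoopAsync(
                DeviceClient.CreateFromConnectionString(cs, TransportType.Amqp_Tcp_Only), token))
            .ToArray();
        return Task.WhenAll(pumps);
    }

    private static async Task ReceiveLoopAsync(DeviceClient client, CancellationToken token)
    {
        await client.OpenAsync().ConfigureAwait(false);
        while (!token.IsCancellationRequested)
        {
            var message = await client.ReceiveAsync(TimeSpan.FromSeconds(10)).ConfigureAwait(false);
            if (message == null) continue;
            var body = Encoding.UTF8.GetString(message.GetBytes());
            // TODO: forward 'body' to the central Service Bus topic here.
            await client.CompleteAsync(message).ConfigureAwait(false);
        }
    }
}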
I found that there is a DeviceClient constructor that allows the AmqpConnectionPoolSettings to be set.
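A rough sketch of what that can look like, assuming the 1.x Microsoft.Azure.Devices.Client SDK (AmqpTransportSettings, AmqpConnectionPoolSettings and the CreateFromConnectionString overload taking ITransportSettings[]; exact names and limits may differ between SDK versions):

using Microsoft.Azure.Devices.Client;

// Enable AMQP connection pooling so many DeviceClients multiplex over a small set of TCP connections.
var poolSettings = new AmqpConnectionPoolSettings
{
    Pooling = true,    // share pooled AMQP connections across device clients
    MaxPoolSize = 10   // number of underlying connections to share (assumption: tune for your load)
};

var transportSettings = new ITransportSettings[]
{
    new AmqpTransportSettings(TransportType.Amqp_Tcp_Only, 200, poolSettings) // 200 = prefetch count
};

var deviceClient = DeviceClient.CreateFromConnectionString(
    "HostName=<hub>;DeviceId=<device>;SharedAccessSignature=<token>",         // placeholder
    transportSettings);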

Fault tolerant Aerospike connection in a Web service

I need to make a persistent connection to the Aerospike NoSQL DB in a web service.
In a non-web application, the connection is straightforward:
using (AerospikeClient client = new AerospikeClient("127.0.0.1", 3000))
{
...
}
But in a web service application, creating a new client for each request is expensive. The Best Practices say this too: "use only one client instance per cluster in a program and share that instance among multiple threads. AerospikeClient and AsyncClient are thread-safe."
I can make a static object, but what if the client disconnects, either through an error or a timeout (24-hour maximum connection lifetime)? Can anyone provide a fault-tolerant code snippet? (Maybe similar to the Redis pattern in How does ConnectionMultiplexer deal with disconnects?)
The client manages a socket pool. If a socket error or timeout occurs, the socket is disposed of.
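A minimal sketch of sharing one client for the application's lifetime, assuming the Aerospike C# client's AerospikeClient and ClientPolicy types (the holder class here is hypothetical); because the client's internal socket pool replaces bad sockets, the shared instance can simply be reused across requests:

using System;
using Aerospike.Client;

public static class AerospikeClientHolder
{
    // One client per cluster for the whole process; AerospikeClient is thread-safe.
    private static readonly Lazy<AerospikeClient> _client = new Lazy<AerospikeClient>(() =>
    {
        var policy = new ClientPolicy();   // defaults; tune timeouts/pool sizes as needed
        return new AerospikeClient(policy, "127.0.0.1", 3000);
    });

    public static AerospikeClient Instance => _client.Value;
}

// Per-request usage (no using/dispose; the client is shared and disposed at shutdown):
// var record = AerospikeClientHolder.Instance.Get(null, new Key("test", "demo", "key1"));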

TxSelect and TransactionScope

Recently, I've been checking out RabbitMQ over C# as a way to implement pub/sub. I'm more used to working with NServiceBus. NServiceBus handles transactions by enlisting MSMQ in a TransactionScope. Other transaction aware operations can also enlist in the same TransactionScope (like MSSQL) so everything is truly atomic. Underneath, NSB brings in MSDTC to coordinate.
I see that in the C# client API for RabbitMQ there are IModel.TxSelect() and IModel.TxCommit(). These work well for not sending messages to the exchange before the commit, which covers the use case where multiple messages sent to the exchange need to be atomic. However, is there a good way to synchronize a database call (say, to MSSQL) with the RabbitMQ transaction?
You can write a RabbitMQ resource manager to be used by MSDTC by implementing the IEnlistmentNotification interface. The implementation provides two-phase commit notification callbacks for the transaction manager when enlisting for participation. Please note that MSDTC comes at a heavy price and will degrade your overall performance drastically.
Example of a RabbitMQ resource manager:
using System.Transactions;
using RabbitMQ.Client;

sealed class RabbitMqResourceManager : IEnlistmentNotification
{
    private readonly IModel _channel;

    public RabbitMqResourceManager(IModel channel, Transaction transaction)
    {
        _channel = channel;
        _channel.TxSelect();
        transaction.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public RabbitMqResourceManager(IModel channel)
    {
        _channel = channel;
        _channel.TxSelect();
        if (Transaction.Current != null)
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Commit(Enlistment enlistment)
    {
        _channel.TxCommit();
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        Rollback(enlistment);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared();
    }

    public void Rollback(Enlistment enlistment)
    {
        _channel.TxRollback();
        enlistment.Done();
    }
}
Example using the resource manager:
using (TransactionScope trx = new TransactionScope())
{
    var basicProperties = _channel.CreateBasicProperties();
    basicProperties.DeliveryMode = 2; // persistent

    // Enlist the channel in the ambient transaction (the constructor takes a Transaction,
    // so pass Transaction.Current rather than the TransactionScope itself).
    new RabbitMqResourceManager(_channel, Transaction.Current);

    _channel.BasicPublish(someExchange, someQueueName, basicProperties, someData);
    trx.Complete();
}
As far as I'm aware there is no way of coordinating the TxSelect/TxCommit with the TransactionScope.
Currently the approach I'm taking is to use durable queues with persistent messages to ensure they survive RabbitMQ restarts. When consuming from the queues I read a message off, do some processing, and then insert a record into the database; once all this is done I ack(nowledge) the message and it is removed from the queue. The potential problem with this approach is that the message could end up being processed twice (if, for example, the record is committed to the DB but the connection to RabbitMQ drops before the message can be ack'd), but for the system we're building we are mainly concerned about throughput. (I believe this is called "at-least-once" delivery.)
The RabbitMQ site does say that there is a significant performance hit when using TxSelect and TxCommit, so I would recommend benchmarking both approaches.
Whichever way you do it, you will need to ensure that your consumer can cope with a message potentially being processed twice.
If you haven't found it yet, take a look at the .NET user guide for RabbitMQ here, specifically section 3.5.
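For illustration, a minimal sketch of the consume-process-ack flow described above, assuming the pre-7.x RabbitMQ.Client API (EventingBasicConsumer, manual acknowledgements; exact parameter names vary slightly between client versions). The queue name and the processing step are placeholders:

using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

// channel is an open IModel; the queue is assumed to be declared durable elsewhere.
void StartConsumer(IModel channel)
{
    channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false); // one unacked message at a time

    var consumer = new EventingBasicConsumer(channel);
    consumer.Received += (sender, ea) =>
    {
        var body = Encoding.UTF8.GetString(ea.Body.ToArray());

        // 1. Process the message and insert the record into the database.
        // 2. Only then acknowledge, so a crash before this point redelivers the message
        //    (at-least-once: the consumer must tolerate duplicates).
        channel.BasicAck(deliveryTag: ea.DeliveryTag, multiple: false);
    };

    channel.BasicConsume(queue: "work-queue", autoAck: false, consumer: consumer);
}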
Let's say you've got a service bus implementation behind your abstraction IServiceBus. We can pretend it's RabbitMQ under the hood, but it certainly doesn't need to be.
When you call servicebus.Publish, you can check System.Transactions.Transaction.Current to see if you're in a transaction. If you are, and it's a transaction for an MSSQL Server connection, then instead of publishing to Rabbit you can publish to a Service Broker queue within SQL Server, which will respect the commit/rollback of whatever database operation you're performing (you want to do some connection magic here to avoid the broker publish escalating your transaction to MSDTC).
Now you need to create a service that reads the broker queue and does the actual publish to Rabbit. This way, for very important things, you can guarantee that your database operation completed and that the message gets published to Rabbit at some point in the future (when the service relays it). Failures are still possible if an exception occurs while committing the broker receive, but the window for problems is drastically reduced, and the worst-case scenario is that you publish more than once; you would never lose a message. This is very unlikely: the SQL server going offline after the receive but before the commit would be an example of when you would end up at minimum double-publishing (when the server comes back online you'd publish again). You can build your relay service to be smart and mitigate some of this, but unless you use MSDTC and all that comes with it (yikes), or build your own MSDTC (yikes yikes), you are going to have potential failures; it's all about making the window small and unlikely to occur.
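For illustration, a rough sketch of the relay service's receive-then-publish loop, assuming a hypothetical Service Broker queue dbo.OutboxQueue and the hypothetical IServiceBus abstraction from above; the key point is that the RECEIVE happens inside a SQL transaction that only commits after the publish, so an unrelayed message stays on the broker queue:

using System.Data.SqlClient;
using System.Text;

// Hypothetical relay: pulls one message off the SQL Server Service Broker queue
// and publishes it to the bus, committing the RECEIVE only after the publish succeeds.
static void RelayOnce(string connectionString, IServiceBus bus)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();
        using (var tx = connection.BeginTransaction())
        {
            var cmd = connection.CreateCommand();
            cmd.Transaction = tx;
            // Wait up to 5 seconds for a message; RECEIVE dequeues it within this transaction.
            cmd.CommandText =
                "WAITFOR (RECEIVE TOP(1) message_body FROM dbo.OutboxQueue), TIMEOUT 5000;";

            var raw = cmd.ExecuteScalar() as byte[];
            if (raw != null)
            {
                bus.Publish(Encoding.Unicode.GetString(raw)); // hypothetical Publish(string) on IServiceBus
            }

            // Commit last: if the publish throws, the rollback puts the message back on the queue,
            // so the worst case is a double publish after a retry, never a lost message.
            tx.Commit();
        }
    }
}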

How can I make a TCP-IP communication between 2 Visual C# apps?

I just need to know how I can send a message from a server to a client; if the communication could be bidirectional it would be perfect, but it is not necessary.
One easy way would be to use Sockets to accomplish this. A good reference for this is:
http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx
It has little overhead and can be configured much more simply than Remoting or other types of communication.
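For illustration, a minimal sketch of one-way messaging with TcpListener/TcpClient from System.Net.Sockets (the host, port, and message are placeholders):

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// "Server" side: accept one connection and send one message.
void RunServer()
{
    var listener = new TcpListener(IPAddress.Loopback, 9000);
    listener.Start();
    using (TcpClient client = listener.AcceptTcpClient())   // blocks until a client connects
    {
        byte[] payload = Encoding.UTF8.GetBytes("hello from server");
        client.GetStream().Write(payload, 0, payload.Length);
    }
    listener.Stop();
}

// "Client" side: connect and read the message.
void RunClient()
{
    using (var client = new TcpClient("127.0.0.1", 9000))
    {
        var buffer = new byte[1024];
        int read = client.GetStream().Read(buffer, 0, buffer.Length);
        Console.WriteLine(Encoding.UTF8.GetString(buffer, 0, read));
    }
}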
Consider using .NET Remoting over TCP/IP. This makes bidirectional communication a breeze. You'll need a TcpServerChannel on one side and a TcpClientChannel on the other. The server side will have an object that extends MarshalByRefObject.
public class MyServerClass : MarshalByRefObject
{
    // Return null so the remoted object never expires.
    public override object InitializeLifetimeService()
    {
        return null;
    }

    public string SendAndReceive(string message)
    {
        return message + message;
    }
}
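A rough sketch of wiring up the channels, assuming classic .NET Remoting from System.Runtime.Remoting (the port and URI are placeholders):

using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// Server: expose MyServerClass over TCP.
void StartServer()
{
    ChannelServices.RegisterChannel(new TcpServerChannel(9000), ensureSecurity: false);
    RemotingConfiguration.RegisterWellKnownServiceType(
        typeof(MyServerClass), "MyServer", WellKnownObjectMode.Singleton);
}

// Client: get a transparent proxy and call the method as if it were local.
void CallServer()
{
    ChannelServices.RegisterChannel(new TcpClientChannel(), ensureSecurity: false);
    var proxy = (MyServerClass)Activator.GetObject(
        typeof(MyServerClass), "tcp://localhost:9000/MyServer");
    Console.WriteLine(proxy.SendAndReceive("ping"));   // prints "pingping"
}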
WCF NetTcpBinding for .NET 3.5; .NET Remoting for .NET 2.0
TcpListener
Sockets
Try googling for examples; there are many for each of these.
Another approach would be to use the System.Messaging namespace and exchange messages over MSMQ. I'm not sure of your requirements, so that may not be the best approach, but it is another way to quickly get messages between two .NET processes.
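For illustration, a minimal send/receive sketch with System.Messaging (the private queue path is a placeholder, created below if it does not exist):

using System;
using System.Messaging;

const string queuePath = @".\Private$\demo-queue";

// Create the queue once if it does not exist.
if (!MessageQueue.Exists(queuePath))
    MessageQueue.Create(queuePath);

// Sender process:
using (var queue = new MessageQueue(queuePath))
{
    queue.Send("hello over MSMQ", "greeting");   // body, label
}

// Receiver process:
using (var queue = new MessageQueue(queuePath))
{
    queue.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
    Message message = queue.Receive();           // blocks until a message arrives
    Console.WriteLine((string)message.Body);
}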
For VS 2005/.NET 2.0 try .NET Remoting
Edit
Unless you enjoy the exercise, I would advise against sockets. .NET Remoting allows for TCP, UDP, and IPC communication, and to your code it just looks like you are calling a method/property on an object. Also, any data structure that is serializable can be passed over the wire, which lets you use rich data representations rather than packaging/parsing byte streams at the socket level.

How to check the availability of a net.tcp WCF service

My WCF server needs to go up and down on a regular basis; the client sometimes uses the server, but if it is down the client just ignores it.
So each time I need to use the server's services I check the connection state, and if it's not open I open it.
The problem is that if I attempt to open while the server is down, there is a delay which hits performance.
My question is: is there a way to do some kind of myClient.CanOpen(), so I'd know whether there is any point in opening the connection to the server?
There is an implementation of WS-Discovery that would allow you to listen for up/down announcements for your service. This is also a very convenient form of service address resolution, because it uses UDP multicast messages to find the service rather than configuring one fixed address on the client.
WS-Discovery for WCF
There's also an implementation done by a Microsoft employee:
WS-Discovery Sample Implementation
.NET 4.0 includes this natively. You can read about .NET 4.0's implementation on Jesus Rodriguez's blog; it has a great chart detailing the ad-hoc communication that goes on in WS-Discovery: Using WS-Discovery in WCF 4.0.
Another thing you might consider, especially if your messages are largely one-way, is a protocol that works natively disconnected, like MSMQ. I don't know what your application's design looks like, but MSMQ would allow a client to send a message regardless of the state of the service, and the service will get it when it comes back up. That way your client doesn't have to block quite so much trying to confirm that a service is up before communicating; it'll just fire and forget.
Hope this helps.
If you are doing a synchronous call expecting a server timeout in an application with a user interface, you should be doing it in another thread. I doubt that the performance hit is due to exception overhead.
Is your performance penalty in CPU load, gui availability or wall clock time?
You could investigate creating a custom binding over TCP with a shorter open timeout (a rough sketch follows).
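For illustration, a minimal sketch of shortening the timeouts on a NetTcpBinding so a failed open returns quickly; the contract, operation, address, and one-second value are hypothetical placeholders:

using System;
using System.ServiceModel;

[ServiceContract]
public interface IMyService              // stand-in for your actual service contract
{
    [OperationContract]
    void Ping();                         // hypothetical operation
}

static bool TryCallService()
{
    // Fail fast when the server is down instead of waiting for the default ~1 minute timeouts.
    var binding = new NetTcpBinding
    {
        OpenTimeout = TimeSpan.FromSeconds(1),  // wait at most 1s to open the channel
        SendTimeout = TimeSpan.FromSeconds(5)   // wait at most 5s for an operation to complete
    };
    var factory = new ChannelFactory<IMyService>(
        binding, new EndpointAddress("net.tcp://localhost:8731/MyService")); // placeholder address

    var client = factory.CreateChannel();
    try
    {
        client.Ping();                   // first call opens the channel; a down server now fails quickly
        return true;
    }
    catch (Exception ex) when (ex is CommunicationException || ex is TimeoutException)
    {
        return false;                    // server unreachable: skip it, as described in the question
    }
}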
I assume you know that IsOneWay=true is faster than request->response in your case because you wouldn't be expecting a response anyway, but then you don't get a confirmation or return values.
You could also implement a two-way communication that is not request->response.
If you are on a local network, it might be possible to broadcast a signal to say that a new server is up. The client would need to listen for the broadcast signal and respond accordingly.
Here's what I'm using, and it works like a charm. By the way, the ServiceController class lives in the System.ServiceProcess namespace.
try
{
    ServiceController sc = new ServiceController("Service Name", "Computer's IP Address");
    Console.WriteLine("The service status is currently set to {0}", sc.Status.ToString());

    if (sc.Status.Equals(ServiceControllerStatus.Stopped) ||
        sc.Status.Equals(ServiceControllerStatus.StopPending))
    {
        Console.WriteLine("Service is Stopped, Ending the application...");
        Console.Read();
        EndApplication();
    }
    else
    {
        Console.WriteLine("Service is Started...");
    }
}
catch (Exception)
{
    Console.WriteLine("Error Occurred trying to access the Server service...");
    Console.Read();
    EndApplication();
}
I don't think it's possible to make a server-side call to your client to inform it that the service has been started... The best method I can see is having a client-side method that figures out whether or not the service is open and in good condition. Unless I am missing some functionality of WCF ...
There is a good blog post, WCF: Availability of the WCF services, if you are interested in a read.
