Using EasyNetQ, multiple handlers for one consumer does not work - C#

We are using RabbitMQ for queuing messages in C# .NET (EasyNetQ client).
I want one consumer app (a C# console app) to listen to one queue and provide a handler for each message type.
I implemented this scenario; my code is below:
using (var advancedBus = RabbitHutch.CreateBus("host=localhost;prefetchcount=100").Advanced)
{
    var queue = advancedBus.QueueDeclare("MyQueue");
    advancedBus.Consume(queue, x => x
        .Add<MessageType1>((message, info) =>
        {
            Console.WriteLine("MessageType1 Body : " + message.Body.Body);
        })
        .Add<MessageType2>((message, info) =>
        {
            Console.WriteLine("MessageType2 Body: " + message.Body.Body);
        }).ThrowOnNoMatchingHandler = false);
}
My problem:
When I execute this consumer, nothing happens at all.
I publish messages to that queue like this:
using (var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced)
{
    var queue = advancedBus.QueueDeclare("MyQueue");
    if (advancedBus.IsConnected)
        advancedBus.Publish(Exchange.GetDefault(), queue.Name, false, false,
            new Message<MessageType1>(change));
    else
        result = false;
}
What is the problem?

OK, after testing this code, these are the problems:
First of all, you're disposing your advancedBus right after you register for consumption. You need to remember that when you invoke IAdvancedBus.Consume, you're only registering a callback for each message. If you dispose the bus immediately after registration, your delegate can't be invoked because the connection has already been closed. So you'll need to remove the using statement around the bus declaration (don't forget to dispose it when you're done):
var advancedBus = RabbitHutch.CreateBus("host=localhost;prefetchcount=100").Advanced;
Second, the immediate flag has been deprecated and shouldn't be used, and the message doesn't seem to be getting to the queue. Change Publish to:
advancedBus.Publish(Exchange.GetDefault(), queue.Name, true, false,
    new Message<MessageType1>(change));
Also, if you're running this from a console application, don't forget to use Console.ReadKey so your main thread doesn't terminate.
Here's a working code sample:
static void Main()
{
    var change = new MessageType1();
    var advancedBus = RabbitHutch.CreateBus("host=localhost").Advanced;

    ConsumeMessage(advancedBus);

    var queue = advancedBus.QueueDeclare("MyQueue");
    if (advancedBus.IsConnected)
    {
        advancedBus.Publish(Exchange.GetDefault(), queue.Name, true, false,
            new Message<MessageType1>(change));
    }
    else
    {
        Console.WriteLine("Can't connect");
    }

    Console.ReadKey();
}
private static void ConsumeMessage(IAdvancedBus advancedBus)
{
    var queue = advancedBus.QueueDeclare("MyQueue");
    advancedBus.Consume(queue, registration =>
    {
        registration.Add<MessageType1>((message, info) =>
        {
            Console.WriteLine("Body: {0}", message.Body);
        });
    });
}
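If you also want the second handler from the question, the registration callback can chain both Add calls and disable ThrowOnNoMatchingHandler, just as the original snippet does; a minimal sketch based on that same API:

private static void ConsumeMessages(IAdvancedBus advancedBus)
{
    var queue = advancedBus.QueueDeclare("MyQueue");
    advancedBus.Consume(queue, registration =>
    {
        // one handler per message type; unmatched messages are ignored instead of throwing
        registration
            .Add<MessageType1>((message, info) =>
                Console.WriteLine("MessageType1 Body: {0}", message.Body))
            .Add<MessageType2>((message, info) =>
                Console.WriteLine("MessageType2 Body: {0}", message.Body))
            .ThrowOnNoMatchingHandler = false;
    });
}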

Related

Only one consumer receives messages from the queue on RabbitMQ

I created a small demo to show the RabbitMQ basics. Unfortunately it doesn't work as expected and has two issues. I am using .NET Core 3.1 and RabbitMQ.Client 6.2.2.
I created the Employee class, which receives messages from the task queue. The first employee works nicely, but if I start more employees they don't work (they don't receive messages), and I can't figure out why that would be.
Also, if I have a lot of messages in the queue (before starting the second employee), I see that all messages in the tasks queue get ACKed when the second one starts, and then after a short time they become UNACKed again. Somehow weird.
But mainly: why do the other employees not work?
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System;
using System.Text;
using System.Threading;

namespace DemoTasks.Employee
{
    class Employee
    {
        static void Main(string[] args)
        {
            string clientName = "Employee-" + Guid.NewGuid().ToString();
            Console.Title = clientName;
            Console.WriteLine("Moin moin");

            IConnectionFactory connectionFactory = new ConnectionFactory
            {
                HostName = "localhost",
                Port = 5672,
                VirtualHost = "/",
                UserName = "user",
                Password = "password",
                ClientProvidedName = clientName
            };

            using (IConnection connection = connectionFactory.CreateConnection(clientName))
            {
                using (IModel model = connection.CreateModel())
                {
                    model.ExchangeDeclare("jobs", "fanout", false, false, null);
                    model.QueueDeclare("tasks", true, false, false);
                    model.QueueBind("tasks", "jobs", "", null);

                    EventingBasicConsumer consumer = new EventingBasicConsumer(model);
                    consumer.Received += OnWorkReceived;
                    model.BasicConsume("tasks", false, clientName + ":OnWorkReceived", consumer);

                    Console.ReadLine();
                    model.Close();
                }
                connection.Close();
            }

            Console.WriteLine("Wochenende ... woooh !!!");
        }

        private static void OnWorkReceived(object sender, BasicDeliverEventArgs e)
        {
            EventingBasicConsumer consumer = (EventingBasicConsumer)sender;
            IModel model = consumer.Model;

            string task = Encoding.UTF8.GetString(e.Body.ToArray());
            Console.Write("working on: " + task + " ... ");
            Thread.Sleep(5000);
            Console.WriteLine("done!");

            model.BasicAck(e.DeliveryTag, false);
        }
    }
}
I think your problem is that PrefetchCount is not set on your channel. The prefetch count controls how many messages a single consumer may fetch from RabbitMQ and cache locally while processing them.
If you don't set it, one consumer can grab all the messages on the queue, leaving no chance for the other consumers to get any. You can set it with model.BasicQos(0, 1, false); with that setting, each consumer receives one message at a time and only gets the next one after acking the previous one.
Note that setting the prefetch count to a low number can affect performance, because your consumer has to ask RabbitMQ more often to get messages.
For detail information see this: https://www.rabbitmq.com/consumer-prefetch.html
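Applied to the Employee code above, one way to do this is a single BasicQos call on the model before BasicConsume; a minimal sketch (prefetchCount = 1 is just an illustrative value):

using (IModel model = connection.CreateModel())
{
    model.ExchangeDeclare("jobs", "fanout", false, false, null);
    model.QueueDeclare("tasks", true, false, false);
    model.QueueBind("tasks", "jobs", "", null);

    // Each consumer now holds at most one unacked message at a time,
    // so messages left in the queue remain available for other employees.
    model.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

    EventingBasicConsumer consumer = new EventingBasicConsumer(model);
    consumer.Received += OnWorkReceived;
    model.BasicConsume("tasks", false, clientName + ":OnWorkReceived", consumer);

    Console.ReadLine();
    model.Close();
}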

How to map message type to a different Azure ServiceBus queue with MassTransit EndpointConvention.Map<T>

I want to use MassTransit to send messages that may have different structures in terms of message.Data to different Azure Service Bus queues. As long as router.Name keeps its initial value, it works well. But whenever the destination Uri of EndpointConvention.Map<ManyToOneTransferMessage> changes, MassTransit throws an exception: "The endpoint convention has already been created and can no longer be modified". Is there any way to remap the message type to another destination so MassTransit can be used with multiple queues?
public class AzureServiceBusManager
{
    string ServiceBusConnectionString = string.Empty;

    public AzureServiceBusManager()
    {
        ServiceBusConnectionString = ConfigurationManager.AppSettings["AppSettings:ServiceBusConnectionString"];
    }

    public async Task SendMessageAsyncN1(TransferMessage transferMessage, Router router)
    {
        var message = new ManyToOneTransferMessage
        {
            BlobFileName = transferMessage.BlobFileName,
            Compressed = transferMessage.Compressed,
            Data = transferMessage.Data,
            MessageId = transferMessage.MessageId,
            TransferId = transferMessage.TransferId,
            TransferType = transferMessage.TransferType
        };

        var queueBusControl = Bus.Factory.CreateUsingAzureServiceBus(
            cfg =>
            {
                cfg.Host(ServiceBusConnectionString);
                EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name));
                cfg.ReceiveEndpoint(router.Name, e =>
                {
                    e.RequiresSession = true;
                    e.MaxConcurrentCalls = 500;
                });
            });

        await queueBusControl.Send(message);
    }
}
So, first of all, do not use EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name));. It isn't useful, and only adds to the confusion.
You can resolve the endpoint from the bus, but you have to realize that creating a bus for each call is a bad idea. It is best to start the bus at startup (you aren't even starting it in the code above), and stop it at application shutdown.
Then, for each call, you can use that bus to resolve the send endpoint and send the message.
var endpoint = await bus.GetSendEndpoint(new Uri("queue:" + router.Name));
await endpoint.Send(message);
Also, you should remove this since it will cause all messages to be moved to the _skipped queue:
cfg.ReceiveEndpoint(router.Name, e =>
{
    e.RequiresSession = true;
    e.MaxConcurrentCalls = 500;
});
You'll likely need to configure the queues separately, in advance, if you require sessions. Also, I don't see you setting a SessionId on the message, so it likely will not work without one anyway.
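Putting that advice together, here is a rough sketch of what the manager could look like (the constructor parameter and method shapes are illustrative, not from the original post): create and start the bus once, then resolve a send endpoint per call.

public class AzureServiceBusManager
{
    private readonly IBusControl _bus;

    public AzureServiceBusManager(string serviceBusConnectionString)
    {
        _bus = Bus.Factory.CreateUsingAzureServiceBus(cfg =>
        {
            cfg.Host(serviceBusConnectionString);
        });
        _bus.Start(); // start once at application startup
    }

    public async Task SendMessageAsyncN1(ManyToOneTransferMessage message, Router router)
    {
        // resolve the destination queue per call instead of using EndpointConvention.Map
        var endpoint = await _bus.GetSendEndpoint(new Uri("queue:" + router.Name));
        await endpoint.Send(message);
    }

    public Task ShutdownAsync() => _bus.StopAsync(); // call at application shutdown
}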

Directing messages to consumers

My client is attempting to send messages to the receiver. However, I noticed that the receiver sometimes does not receive all the messages sent by the client, thus missing a few (not sure where the problem is: the client or the receiver).
Any suggestions on why that might be happening? This is what I am currently doing.
On the receiver side, this is the event processor:
async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
    }
}
This is how the client connects to the event hub
var StrBuilder = new EventHubsConnectionStringBuilder(eventHubConnectionString)
{
    EntityPath = eventHubName,
};
this.eventHubClient = EventHubClient.CreateFromConnectionString(StrBuilder.ToString());
How do I direct my messages to specific consumers?
I'm using this sample code from the Event Hubs official docs for sending and receiving.
I have two consumer groups: $Default and newcg. Suppose you have two clients: client_1 uses the default consumer group ($Default) and client_2 uses the other consumer group (newcg).
First, after creating the send client, in the SendMessagesToEventHub method we need to add a property whose value is the consumer group name. Sample code below:
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
    for (var i = 0; i < numMessagesToSend; i++)
    {
        try
        {
            var message = "444 Message";
            Console.WriteLine($"Sending message: {message}");
            EventData mydata = new EventData(Encoding.UTF8.GetBytes(message));

            // Here we add a property named "cg" whose value is the consumer group.
            // By setting this property, we can read this message via the specified consumer group.
            mydata.Properties.Add("cg", "newcg");

            await eventHubClient.SendAsync(mydata);
        }
        catch (Exception exception)
        {
            Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
        }
        await Task.Delay(10);
    }
    Console.WriteLine($"{numMessagesToSend} messages sent.");
}
Then in client_1, the receiver project that uses the default consumer group ($Default), go to the SimpleEventProcessor class -> ProcessEventsAsync method, where we can filter out the unnecessary event data. Sample code for the ProcessEventsAsync method:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // filter the data here
        if (eventData.Properties["cg"].ToString() == "$Default")
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
            Console.WriteLine(context.ConsumerGroupName);
        }
    }
    return context.CheckpointAsync();
}
And in the other client, client_2, which uses the other consumer group (named newcg), we follow the same steps as in client_1, with just a small change in the ProcessEventsAsync method, like below:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // filter the data here, using the other consumer group name
        if (eventData.Properties["cg"].ToString() == "newcg")
        {
            // other code
        }
    }
    return context.CheckpointAsync();
}
This happens only when there are two or more Event Processor Hosts reading from the same consumer group.
If you have an event hub with 32 partitions and two Event Processor Hosts reading from the same consumer group, then each Event Processor Host will read from 16 partitions, and so on. Similarly, if four Event Processor Hosts read from the same consumer group in parallel, each will read from 8 partitions.
Check whether you have two or more Event Processor Hosts running on the same consumer group.
I have tested your code and slightly modified it (a different overload of the EventProcessorHost constructor, and CheckpointAsync added after consuming the messages), and then did some tests.
Using the default implementation and the default EventProcessorOptions (EventProcessorOptions.DefaultOptions), I did experience some latency when consuming messages, but all messages were processed successfully.
So, sometimes it seems like I am not getting the messages from a certain partition, but after a certain period of time all messages arrive.
Here you can find the modified code that worked for me. It is a simple console app that prints to the console when something arrives.
string processorHostName = Guid.NewGuid().ToString();

var Options = new EventProcessorOptions()
{
    MaxBatchSize = 1, // not required to make it work, just for testing
};
Options.SetExceptionHandler((ex) =>
{
    System.Diagnostics.Debug.WriteLine($"Exception : {ex}");
});

var eventHubCS = "event hub connection string";
var storageCS = "storage connection string";
var containerName = "test";
var eventHubname = "test2";

EventProcessorHost eventProcessorHost = new EventProcessorHost(eventHubname, "$Default", eventHubCS, storageCS, containerName);
eventProcessorHost.RegisterEventProcessorAsync<MyEventProcessor>(Options).Wait();
For sending the messages to the event hub and testing I used this message publisher app.

Registering RabbitMQ Consumer with .NET Core?

I'm trying to handle request authorization in a microservice based architecture using a message queue (RabbitMQ).
I've got a receiver and sender configured fine as a console application in .NET Core per these instructions. However, when using this in a real-world example, my receiving project isn't collecting messages as a consumer.
I'm assuming I have to register the consumer in Startup.cs, but I can't seem to get this working.
My consumer/responder code:
public class RabbitMqHandler
{
    private readonly IJWTFactory _jwtFactory;

    public RabbitMqHandler(IJWTFactory jWTFactory)
    {
        _jwtFactory = jWTFactory;
    }

    public void Register()
    {
        var mqFactory = new ConnectionFactory() { HostName = "localhost" };
        using (var connection = mqFactory.CreateConnection())
        {
            Console.WriteLine("Listening on Rabbit MQ");
            using (var channel = connection.CreateModel())
            {
                channel.QueueDeclare(queue: "Authorize", durable: false, exclusive: false, autoDelete: false, arguments: null);

                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (model, ea) =>
                {
                    var body = ea.Body;
                    var jwtToken = Encoding.UTF8.GetString(body);
                    Console.WriteLine("Received Message");
                    var validatedToken = _jwtFactory.ValidateTokenSignature(jwtToken);
                    SendResponse(validatedToken);
                };

                channel.BasicConsume(queue: "Authorize", autoAck: true, consumer: consumer);
            }
        }
    }

    public void Deregister()
    {
    }
}
In Startup.cs I register the handler with .AddSingleton().
Edit: I've added some additional listening code; this is definitely running on startup, but RabbitMQ is not showing the app as a consumer or a channel:
public static class ApplicationBuilderExtentions
{
    public static RabbitMqHandler Listener { get; set; }

    public static IApplicationBuilder UseRabbitListener(this IApplicationBuilder app)
    {
        Listener = app.ApplicationServices.GetService<RabbitMqHandler>();

        var life = app.ApplicationServices.GetService<IApplicationLifetime>();
        life.ApplicationStarted.Register(OnStarted);
        // press Ctrl+C to reproduce if your app runs in Kestrel as a console app
        life.ApplicationStopping.Register(OnStopping);

        return app;
    }

    private static void OnStarted()
    {
        Listener.Register();
    }

    private static void OnStopping()
    {
        Listener.Deregister();
    }
}
To summarise:
How do I correctly configure a consumer in .NET Core to consume messages?
Is it just wrong to expect a message queue to manage request/response-style communication?
Should I just be using an API call to authenticate and authorize users?
Answer as provided by #Evk (in the comments):
"using is designed to dispose things, it calls Dispose when you reach the end of the using block. BasicConsume is not a blocking call, so it starts consumption and returns immediately.
Right after that the end of the using blocks is reached for both channel and connection, disposing them (and disposing them is the same as closing)."
I'd like to add the following:
You can quickly check whether removing the using brings the desired result. Change the following code line:
using (var connection = mqFactory.CreateConnection())
to:
var connection = mqFactory.CreateConnection();
This will instantly do the trick. But be aware it also removes proper disposal, so you need to add that yourself; here is an article from Microsoft describing how IDisposable should be implemented correctly.
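To keep the consumer alive for the lifetime of the application and still dispose cleanly, one option (a sketch only; the field names and IDisposable wiring are illustrative, not from the original post) is to hold the connection and channel as fields, open them in Register, and close them in Deregister:

public class RabbitMqHandler : IDisposable
{
    private readonly IJWTFactory _jwtFactory;
    private IConnection _connection;
    private IModel _channel;

    public RabbitMqHandler(IJWTFactory jWTFactory)
    {
        _jwtFactory = jWTFactory;
    }

    public void Register()
    {
        var mqFactory = new ConnectionFactory() { HostName = "localhost" };
        _connection = mqFactory.CreateConnection(); // kept open for the application lifetime
        _channel = _connection.CreateModel();
        _channel.QueueDeclare(queue: "Authorize", durable: false, exclusive: false, autoDelete: false, arguments: null);

        var consumer = new EventingBasicConsumer(_channel);
        consumer.Received += (model, ea) =>
        {
            var jwtToken = Encoding.UTF8.GetString(ea.Body); // use ea.Body.ToArray() on RabbitMQ.Client 6.x
            var validatedToken = _jwtFactory.ValidateTokenSignature(jwtToken);
            SendResponse(validatedToken);
        };
        _channel.BasicConsume(queue: "Authorize", autoAck: true, consumer: consumer);
    }

    public void Deregister() => Dispose();

    public void Dispose()
    {
        _channel?.Dispose();
        _connection?.Dispose();
    }
}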

SignalR notification system

This is my first time playing around with SignalR. I am trying to build a notification system where the server checks at regular intervals whether there is something to broadcast (by querying the database) and, if there is, broadcasts it to all the clients.
I came across this post on Stack Overflow and was wondering whether modifying the code to make a DB call at a particular interval was indeed the right way to do it. If not, is there a better way?
I did see a lot of Notification related questions posted here but none with any code in it. Hence this post.
This is the exact code that I am using:
public class NotificationHub : Hub
{
    public void Start()
    {
        Thread thread = new Thread(Notify);
        thread.Start();
    }

    public void Notify()
    {
        List<CDCNotification> notifications = new List<CDCNotification>();
        while (true)
        {
            notifications.Clear();
            notifications.Add(new CDCNotification()
            {
                Server = "Server A",
                Application = "Some App",
                Message = "This is a long ass message and amesaadfasd asdf message",
                ImgURL = "../Content/Images/accept-icon.png"
            });
            Clients.shownotification(notifications);
            Thread.Sleep(20000);
        }
    }
}
I am already seeing some weird behaviour where the notifications come more often than they are supposed to. Even though I am supposed to get one every 20 s, I get one roughly every 4-5 s, and I get multiple messages.
Here is my client:
var notifier = $.connection.notificationHub;
notifier.shownotification = function (data) {
    $.each(data, function (i, sample) {
        var output = Mustache.render("<img class='pull-left' src='{{ImgURL}}'/> <div><strong>{{Application}}</strong></div><em>{{Server}}</em> <p>{{Message}}</p>", sample);
        $.sticky(output);
    });
};
$.connection.hub.start(function () { notifier.start(); });
A couple of notes:
As soon as a second client connects to your server there will be two threads sending the notifications, so if you have more than one client you will see intervals smaller than 20 s.
Handling threads manually within ASP.NET is considered bad practice; you should avoid it if possible.
In general this smells a lot like polling, which is exactly the kind of thing SignalR lets you get rid of, since you don't need to poll the server/client.
In order to solve this you need to do something like the following (again, threads in a web application are generally not a good idea):
public class NotificationHub : Hub
{
    public static bool initialized = false;
    public static object initLock = new object();

    public void Start()
    {
        if (initialized)
            return;

        lock (initLock)
        {
            if (initialized)
                return;

            Thread thread = new Thread(Notify);
            thread.Start();

            initialized = true;
        }
    }

    public void Notify()
    {
        List<CDCNotification> notifications = new List<CDCNotification>();
        while (true)
        {
            notifications.Clear();
            notifications.Add(new CDCNotification()
            {
                Server = "Server A",
                Application = "Some App",
                Message = "This is a long ass message and amesaadfasd asdf message",
                ImgURL = "../Content/Images/accept-icon.png"
            });
            Clients.shownotification(notifications);
            Thread.Sleep(20000);
        }
    }
}
The static initialized flag prevents multiple threads from being created. The locking around it is to ensure that the flag is only set once.
I am working on the same task over here. Instead of continuously checking the database, I created my own events and a listener, where an event is raised when a notification is added :) What do you think about that?
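For reference, here is a minimal sketch of that idea on classic ASP.NET SignalR 2.x (the NotificationSource class and its event are hypothetical names, not from the original posts): raise a .NET event when a notification is stored and push it through the hub context, instead of polling the database on a timer.

// Hypothetical source that raises an event whenever a notification is added.
public static class NotificationSource
{
    public static event Action<CDCNotification> NotificationAdded;

    public static void Add(CDCNotification notification)
    {
        // ... persist the notification to the database here ...
        var handler = NotificationAdded;
        if (handler != null)
            handler(notification);
    }
}

// Wire-up (e.g. in Application_Start): broadcast to all connected clients when the event fires.
public static class NotificationBroadcaster
{
    public static void Start()
    {
        var hubContext = GlobalHost.ConnectionManager.GetHubContext<NotificationHub>();
        NotificationSource.NotificationAdded += n =>
            hubContext.Clients.All.shownotification(new List<CDCNotification> { n });
    }
}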
