I'm looking for a solid way of re-queuing messages that couldn't be handled properly at the time they were received.
I've been looking at http://dotnetcodr.com/2014/06/16/rabbitmq-in-net-c-basic-error-handling-in-receiver/ and it seems that the RabbitMQ API supports requeuing messages:
else // reject the message but push back to queue for later re-try
{
    Console.WriteLine("Rejecting message and putting it back to the queue: {0}", message);
    model.BasicReject(deliveryArguments.DeliveryTag, true);
}
However, I'm using EasyNetQ, so I'm wondering how I would do something similar there:
bus.Subscribe<MyMessage>("my_subscription_id", msg => {
    try
    {
        // do work... could be long running
    }
    catch (Exception)
    {
        // something went wrong - requeue message
    }
});
Is this even a good approach? Not ACKing the message could cause problems if the work takes longer than the RabbitMQ server's ACK timeout.
So I came up with this solution, which replaces EasyNetQ's default consumer error strategy.
public class DeadLetterStrategy : DefaultConsumerErrorStrategy
{
    public DeadLetterStrategy(IConnectionFactory connectionFactory, ISerializer serializer, IEasyNetQLogger logger, IConventions conventions, ITypeNameSerializer typeNameSerializer)
        : base(connectionFactory, serializer, logger, conventions, typeNameSerializer)
    {
    }

    public override AckStrategy HandleConsumerError(ConsumerExecutionContext context, Exception exception)
    {
        object deathHeaderObject;
        if (!context.Properties.Headers.TryGetValue("x-death", out deathHeaderObject))
            return AckStrategies.NackWithoutRequeue;

        var deathHeaders = deathHeaderObject as IList;
        if (deathHeaders == null)
            return AckStrategies.NackWithoutRequeue;

        // Sum the "count" entries of the x-death header to get the total number of dead-letter cycles.
        var retries = 0;
        foreach (IDictionary header in deathHeaders)
        {
            var count = int.Parse(header["count"].ToString());
            retries += count;
        }

        // Nacking without requeue sends the message through the dead-letter exchange,
        // which is bound back to the same queue, so the message is retried.
        if (retries < 3)
            return AckStrategies.NackWithoutRequeue;

        // After 3 attempts, fall back to the default strategy (EasyNetQ's error queue).
        return base.HandleConsumerError(context, exception);
    }
}
You replace it like this:
RabbitHutch.CreateBus("host=localhost", serviceRegister => serviceRegister.Register<IConsumerErrorStrategy, DeadLetterStrategy>())
You have to use the advanced bus, so you have to set everything up manually.
using (var bus = RabbitHutch.CreateBus("host=localhost", serviceRegister => serviceRegister.Register<IConsumerErrorStrategy, DeadLetterStrategy>()))
{
    var deadExchange = bus.Advanced.ExchangeDeclare("exchange.text.dead", ExchangeType.Direct);
    var textExchange = bus.Advanced.ExchangeDeclare("exchange.text", ExchangeType.Direct);
    var queue = bus.Advanced.QueueDeclare("queue.text", deadLetterExchange: deadExchange.Name);
    bus.Advanced.Bind(deadExchange, queue, "");
    bus.Advanced.Bind(textExchange, queue, "");

    bus.Advanced.Consume<TextMessage>(queue, (message, info) => HandleTextMessage(message, info));
}
This will dead-letter a failed message 3 times. After that it goes to the default error queue provided by EasyNetQ for error handling, and you can subscribe to that queue.
A message is dead-lettered when an exception propagates out of your consumer method, so this handler would trigger a dead letter:
static void HandleTextMessage(IMessage<TextMessage> textMessage, MessageReceivedInfo info)
{
    throw new Exception("This is a test!");
}
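If you want to watch that error queue from the same advanced-bus setup shown above, a minimal sketch could look like this. It assumes EasyNetQ's default conventions, where the error queue is named EasyNetQ_Default_Error_Queue and each message body is a JSON error report; adjust the name if your conventions differ.
var errorQueue = bus.Advanced.QueueDeclare("EasyNetQ_Default_Error_Queue");
bus.Advanced.Consume(errorQueue, (body, properties, info) =>
{
    // Inspect, log, or re-publish the failed message here; this just prints the raw JSON report.
    Console.WriteLine("Error message received: {0}", Encoding.UTF8.GetString(body));
    return Task.FromResult(0);
});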
To the best of my knowledge, there is no way to manually ack, nack or reject a message with EasyNetQ.
I see you have opened an issue ticket with the EasyNetQ team regarding this, but there is no answer yet.
FWIW, this is a very appropriate thing to do. All of the libraries that I use support this feature set (in NodeJS) and it is common. I'm surprised EasyNetQ doesn't support this.
Related
My client is attempting to send messages to the receiver. However, I noticed that the receiver sometimes does not receive all the messages sent by the client and so misses a few (I'm not sure whether the problem is in the client or the receiver).
Any suggestions on why that might be happening? This is what I am currently doing on the receiver side.
This is the event processor:
async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
    }
}
This is how the client connects to the event hub
var StrBuilder = new EventHubsConnectionStringBuilder(eventHubConnectionString)
{
EntityPath = eventHubName,
};
this.eventHubClient = EventHubClient.CreateFromConnectionString(StrBuilder.ToString());
How do I direct my messages to specific consumers?
I'm using this sample code from the Event Hubs official docs for sending and receiving.
I have 2 consumer groups: $Default and newcg. Suppose you have 2 clients: client_1 uses the default consumer group ($Default), and client_2 uses the other consumer group (newcg).
First, after creating the send client, in the SendMessagesToEventHub method we need to add a property whose value is the consumer group name. Sample code below:
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
    for (var i = 0; i < numMessagesToSend; i++)
    {
        try
        {
            var message = "444 Message";
            Console.WriteLine($"Sending message: {message}");
            EventData mydata = new EventData(Encoding.UTF8.GetBytes(message));
            // Here we add a property named "cg"; its value is the consumer group name.
            // By setting this property, we can read this message via the specified consumer group.
            mydata.Properties.Add("cg", "newcg");
            await eventHubClient.SendAsync(mydata);
        }
        catch (Exception exception)
        {
            Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
        }

        await Task.Delay(10);
    }

    Console.WriteLine($"{numMessagesToSend} messages sent.");
}
Then in client_1, after creating the receiver project (which uses the default consumer group, $Default), in the SimpleEventProcessor class's ProcessEventsAsync method we can filter out the unnecessary event data. Sample code for the ProcessEventsAsync method:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // Filter the data here.
        if (eventData.Properties["cg"].ToString() == "$Default")
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
            Console.WriteLine(context.ConsumerGroupName);
        }
    }

    return context.CheckpointAsync();
}
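Note that indexing eventData.Properties["cg"] throws a KeyNotFoundException for any event that was sent without that property. A slightly more defensive variant (just a sketch, not part of the original sample):
object cg;
// Skip events that don't carry the "cg" property instead of throwing.
if (eventData.Properties.TryGetValue("cg", out cg) && cg?.ToString() == "$Default")
{
    var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
    Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
}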
And in the other client, client_2, which uses the other consumer group (newcg), we can follow the same steps as client_1, with just a small change in the ProcessEventsAsync method, like below:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // Filter the data here, using the other consumer group name.
        if (eventData.Properties["cg"].ToString() == "newcg")
        {
            // other code
        }
    }

    return context.CheckpointAsync();
}
This happens only when there are 2 or more Event Processor Hosts reading from the same consumer group.
If you have an event hub with 32 partitions and 2 Event Processor Hosts reading from the same consumer group, then each host will read from 16 partitions, and so on. Similarly, if 4 Event Processor Hosts read from the same consumer group in parallel, each will read from 8 partitions.
Check whether you have 2 or more Event Processor Hosts running on the same consumer group.
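If you want to see the split for yourself, a small sketch like the one below (run inside an async method; SimpleEventProcessor, the host names, and the connection strings are placeholders) registers two hosts against the same consumer group, and each host should end up owning roughly half of the partitions.
// Sketch only: two Event Processor Hosts on the same consumer group share the partitions
// between them (roughly 16/16 on a 32-partition hub).
var hostA = new EventProcessorHost("host-A", eventHubName, "$Default", eventHubConnectionString, storageConnectionString, leaseContainerName);
var hostB = new EventProcessorHost("host-B", eventHubName, "$Default", eventHubConnectionString, storageConnectionString, leaseContainerName);

await hostA.RegisterEventProcessorAsync<SimpleEventProcessor>();
await hostB.RegisterEventProcessorAsync<SimpleEventProcessor>();
// Each processor instance reports its partition in OpenAsync, so you can confirm
// that the partitions are divided between the two hosts.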
I have tested your code and slightly modified it (a different overload of the EventProcessorHost constructor, and CheckpointAsync added after consuming the messages), and then did some tests.
Using the default implementation and the default EventProcessorOptions (EventProcessorOptions.DefaultOptions), I can say that I did experience some latency when consuming messages, but all messages were processed successfully.
So, sometimes it seems like I am not getting the messages from a certain partition, but after a certain period of time all messages arrive.
Here is the modified code that worked for me. It is a simple console app that prints to the console when something arrives.
string processorHostName = Guid.NewGuid().ToString();

var options = new EventProcessorOptions()
{
    MaxBatchSize = 1, // not required to make it work, just for testing
};

options.SetExceptionHandler((ex) =>
{
    System.Diagnostics.Debug.WriteLine($"Exception : {ex}");
});

var eventHubCS = "event hub connection string";
var storageCS = "storage connection string";
var containerName = "test";
var eventHubname = "test2";

EventProcessorHost eventProcessorHost = new EventProcessorHost(eventHubname, "$Default", eventHubCS, storageCS, containerName);
eventProcessorHost.RegisterEventProcessorAsync<MyEventProcessor>(options).Wait();
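The MyEventProcessor registered above is not shown in the snippet. A minimal sketch of what it might look like, assuming the Microsoft.Azure.EventHubs.Processor IEventProcessor interface and checkpointing after each batch as described above:
public class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context)
    {
        Console.WriteLine($"Processor opened. Partition: '{context.PartitionId}'");
        return Task.CompletedTask;
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        Console.WriteLine($"Processor closing. Partition: '{context.PartitionId}', Reason: '{reason}'");
        return Task.CompletedTask;
    }

    public Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        Console.WriteLine($"Error on partition '{context.PartitionId}': {error.Message}");
        return Task.CompletedTask;
    }

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }

        // Checkpoint after the batch so this reader's position survives restarts.
        await context.CheckpointAsync();
    }
}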
For sending the messages to the event hub and testing I used this message publisher app.
I am trying to receive all messages for a given subscription to a Service Bus topic, but for the context of this app I do not want them dead-lettered at this time; I just want to view them and leave them on the subscription. Despite instantiating the client as
SubscriptionClient sc = SubscriptionClient.CreateFromConnectionString(connectionString, sub.topicName, sub.subscriptionName, ReceiveMode.PeekLock);
and making sure that I am using message.Abandon() rather than message.Complete(), the message always gets dead-lettered after being accessed. I also have options.AutoComplete set to false.
Full method code below:
public List<ServiceBusMessage> RetrieveSubscriptionMessages(Subscription sub) {
    ServiceBusMessage sbm;
    List<ServiceBusMessage> list = new List<ServiceBusMessage>();
    String connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"].ToString();

    SubscriptionClient sc = SubscriptionClient.CreateFromConnectionString(connectionString, sub.topicName, sub.subscriptionName, ReceiveMode.PeekLock);

    OnMessageOptions options = new OnMessageOptions();
    options.AutoComplete = false;

    sc.OnMessage((message) => {
        try {
            sbm = new ServiceBusMessage() {
                topicName = sub.topicName,
                messageText = message.GetBody<String>()
            };
            list.Add(sbm);
            message.Abandon();
        }
        catch (Exception) {
            message.Abandon();
            throw;
        }
    }, options);

    return list;
}
Am I missing something? Or is there an issue with auto dead-lettering with the OnMessage() method?
Thanks!
When a message is abandoned, Service Bus immediately makes it available again for re-delivery to any receiver of that subscription.
If you are trying to configure a multicast mechanism in which multiple listeners all receive the same message, then understand that all listeners on a given subscription will be competing for the same message. In order for every listener to receive its own copy of the message, then simply create a unique subscription to the topic for each listener.
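As a sketch of that one-subscription-per-listener approach, using the same WindowsAzure.ServiceBus SDK as the question (the listenerId variable and the naming scheme are just example placeholders):
// Sketch: give each listener its own subscription so every listener gets its own copy of each message.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
string subscriptionName = "listener-" + listenerId;

if (!namespaceManager.SubscriptionExists(sub.topicName, subscriptionName))
{
    namespaceManager.CreateSubscription(sub.topicName, subscriptionName);
}

var client = SubscriptionClient.CreateFromConnectionString(
    connectionString, sub.topicName, subscriptionName, ReceiveMode.PeekLock);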
If your intent is to delay re-delivery of the abandoned message, you might look at the SO question: What's the proper way to abandon an Azure SB Message so that it becomes visible again in the future in a way I can control?
I am using the RabbitMQ .NET client in a Windows service. I have millions of messages coming in bulk, which get processed, and the output is then put on another queue. I am creating the connection factory with a heartbeat of 30 and then creating a connection whenever a connection or subscriber is lost. In production my code probably works in most cases; however, in my integration tests I know it is failing most of the time. Here is my code:
public void ReceiveAll(Func<IDictionary<ulong, byte[]>, IOnStreamWatchResult> onReceiveAllCallback, int batchSize, CancellationToken cancellationToken)
{
    IModel channel = null;
    Subscription subscription = null;

    while (!cancellationToken.IsCancellationRequested)
    {
        if (subscription == null || subscription.Model.IsClosed)
        {
            channel = _channelFactory.CreateChannel(ref _connection, _messageQueueConfig, _connectionFactory);
            // This instructs the channel to not prefetch more than batch count into shared queue
            channel.BasicQos(0, Convert.ToUInt16(batchSize), false);
            subscription = new Subscription(channel, _messageQueueConfig.Queue, false);
        }

        try
        {
            BasicDeliverEventArgs message;
            var dequeuedMessages = new Dictionary<ulong, byte[]>();
            do
            {
                if (subscription.Next(_messageQueueConfig.DequeueTimeout.Milliseconds, out message))
                {
                    if (message == null)
                    {
                        // This means the channel is closed and the messages in the shared queue would get moved back to ready state
                        DisposeChannelAndSubcription(ref channel, ref subscription);
                        ReceiveAll(onReceiveAllCallback, batchSize, cancellationToken);
                    }
                    else
                    {
                        dequeuedMessages.Add(message.DeliveryTag, message.Body);
                    }
                }
            } while (message != null && batchSize > dequeuedMessages.Count && !cancellationToken.IsCancellationRequested);

            if (cancellationToken.IsCancellationRequested)
            {
                if (dequeuedMessages.Any())
                {
                    NackUnProcessedMessages(subscription, dequeuedMessages.Keys);
                }

                DisposeChannelAndSubcription(ref channel, ref subscription);
                dequeuedMessages.Clear();
                break;
            }

            try
            {
                var onStreamWatchResult = onReceiveAllCallback(dequeuedMessages);
                AckProcessedMessages(subscription, onStreamWatchResult.Processed);
                NackUnProcessedMessages(subscription, onStreamWatchResult.UnProcessed);
                dequeuedMessages.Clear();
            }
            catch (Exception unhandledException)
            {
                NackUnProcessedMessages(subscription, dequeuedMessages.Keys);
            }
        }
        catch (EndOfStreamException endOfStreamException)
        {
            DisposeChannelAndSubcription(ref channel, ref subscription);
        }
        catch (OperationInterruptedException operationInterruptedException)
        {
            DisposeChannelAndSubcription(ref channel, ref subscription);
        }
    }
}
The batch size is set to 4 because I put 4 messages in my integration test, which is just a Windows service that I ran after running the unit tests.
The issue here is that, almost always, the subscriber pre-fetches 4 messages as expected and returns true for the first two .Next iterations, but after that it returns false. I believe that is happening because my messages are not getting nacked properly. In my integration test, I ack 2 and nack 2 messages and then read the 2 nacked messages again to clear the queue. However, after nacking, the messages are not returned to the ready state, and hence the test hangs. What am I doing wrong here? Am I misunderstanding something in the nacking documentation? Here is my nacking code:
subscription.Model.BasicNack(deliveryTag, false, true);
I use RabbitMQ as my message queue server, with the .NET C# client.
When there is an error processing a message from the queue, the message is not acknowledged and stays stuck in the queue, never to be processed again, as I understand the documentation.
I don't know if I'm missing some configuration or block of code.
My idea now is to manually ack the message on error and manually push it back to the queue again.
I hope there is a better solution.
Thank you so much.
My code:
public void Subscribe(string queueName)
{
    while (!Cancelled)
    {
        try
        {
            if (subscription == null)
            {
                try
                {
                    // try to open the connection
                    connection = connectionFactory.CreateConnection();
                }
                catch (BrokerUnreachableException ex)
                {
                    // You probably want to log the error and cancel after N tries,
                    // otherwise start the loop over to try to connect again after a second or so.
                    log.Error(ex);
                    continue;
                }

                // create the channel
                channel = connection.CreateModel();
                // This instructs the channel not to prefetch more than one message
                channel.BasicQos(0, 1, false);
                // Create a new, durable exchange
                channel.ExchangeDeclare(exchangeName, ExchangeType.Direct, true, false, null);
                // Create a new, durable queue
                channel.QueueDeclare(queueName, true, false, false, null);
                // Bind the queue to the exchange
                channel.QueueBind(queueName, exchangeName, queueName);
                // create the subscription
                subscription = new Subscription(channel, queueName, false);
            }

            BasicDeliverEventArgs eventArgs;
            var gotMessage = subscription.Next(250, out eventArgs); // 250 milliseconds
            if (gotMessage)
            {
                if (eventArgs == null)
                {
                    // This means the connection is closed.
                    DisposeAllConnectionObjects();
                    continue; // move on to the next iteration
                }

                // process message
                channel.BasicAck(eventArgs.DeliveryTag, false);
            }
        }
        catch (OperationInterruptedException ex)
        {
            log.Error(ex);
            DisposeAllConnectionObjects();
        }
    }

    DisposeAllConnectionObjects();
}
private void DisposeAllConnectionObjects()
{
    // dispose the subscription
    if (subscription != null)
    {
        // IDisposable is implemented explicitly for some reason.
        ((IDisposable)subscription).Dispose();
        subscription = null;
    }

    // dispose the channel
    if (channel != null)
    {
        channel.Dispose();
        channel = null;
    }

    // check if the connection is not null and dispose it
    if (connection != null)
    {
        try
        {
            connection.Dispose();
        }
        catch (EndOfStreamException ex)
        {
            log.Error(ex);
        }
        catch (OperationInterruptedException ex) // handle errors thrown while disposing the connection
        {
            log.Error(ex);
        }
        catch (Exception ex)
        {
            log.Error(ex);
        }

        connection = null;
    }
}
I think you may have misunderstood the RabbitMQ documentation. If a message does not get ack'ed by the consumer, the Rabbit broker will requeue the message for consumption once the consumer's channel or connection is closed.
I don't believe your suggested method of ack'ing and then requeuing a message is a good idea; it will just make the problem more complex.
If you want to explicitly "reject" a message because the consumer had a problem processing it, you could use the Nack feature of Rabbit.
For example, within your catch exception blocks, you could use:
subscription.Model.BasicNack(eventArgs.DeliveryTag, false, true);
This will inform the Rabbit broker to requeue the message. Basically, you pass the delivery tag, false to say it is not multiple messages, and true to requeue the message.
If you want to reject the message and NOT requeue, just change true to false.
Additionally, you have created a subscription, so I think you should perform your ack's directly on this, not through the channel.
Change:
channel.BasicAck(eventArgs.DeliveryTag, false);
To:
subscription.Ack();
This method of ack'ing is much cleaner since you are then keeping everything subscription-related on the subscription object, rather than messing around with the channel that you've already subscribed to.
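Putting both suggestions together, the message-handling part of your loop could look roughly like this. This is only a sketch: ProcessMessage stands in for whatever work you actually do with the message body.
if (gotMessage)
{
    if (eventArgs == null)
    {
        // The connection is closed.
        DisposeAllConnectionObjects();
        continue;
    }

    try
    {
        // Placeholder for your actual message handling.
        ProcessMessage(eventArgs.Body);

        // Acknowledge via the subscription rather than the raw channel.
        subscription.Ack();
    }
    catch (Exception ex)
    {
        log.Error(ex);
        // Reject and requeue so the message becomes available again;
        // pass false as the last argument to drop/dead-letter instead of requeuing.
        subscription.Model.BasicNack(eventArgs.DeliveryTag, false, true);
    }
}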
In C# ASP.NET 3.5 web application running on Windows Server 2003, I get the following error once in a while:
"Object reference not set to an instance of an object.: at System.Messaging.Interop.MessagePropertyVariants.Unlock()
at System.Messaging.Message.Unlock()
at System.Messaging.MessageQueue.ReceiveCurrent(TimeSpan timeout, Int32 action, CursorHandle cursor, MessagePropertyFilter filter, MessageQueueTransaction internalTransaction, MessageQueueTransactionType transactionType)
at System.Messaging.MessageEnumerator.get_Current()
at System.Messaging.MessageQueue.GetAllMessages()".
The line of code that throws this error is:
Message[] msgs = Global.getOutputQueue(mode).GetAllMessages();
where Global.getOutputQueue(mode) gives the message queue I want to get messages from.
Update:
Global.getPool(mode).WaitOne();
commonClass.log(-1, "Acquired pool: " + mode, "Report ID: " + unique_report_id);
/* ... some code ... */
lock (getLock(mode))
{
    bool yet_to_get = true;
    int num_retry = 0;
    do
    {
        try
        {
            msgs = Global.getOutputQueue(mode).GetAllMessages();
            yet_to_get = false;
        }
        catch
        {
            Global.setOutputQueue(mode);
            msgs = Global.getOutputQueue(mode).GetAllMessages();
            yet_to_get = false;
        }
        ++num_retry;
    }
    while (yet_to_get && num_retry < 2);
}
/* ... some code ... */
finally
{
    commonClass.log(-1, "Released pool: " + mode, "Report ID: " + unique_report_id);
    Global.getPool(mode).Release();
}
Your description and this thread suggest a timing issue. I would create the MessageQueue object infrequently (maybe only once) and have Global.getOutputQueue(mode) return a cached version; that seems likely to get around this.
EDIT: Further details suggest you have the opposite problem. I suggest encapsulating access to the message queue, catching this exception and recreating the queue if that exception occurs. So, replace the call to Global.getOutputQueue(mode).GetAllMessages() with something like this:
public Message[] getAllOutputQueueMessages()
{
    try
    {
        return queue_.GetAllMessages();
    }
    catch (Exception)
    {
        // Recreate the queue and retry once.
        queue_ = OpenQueue();
        return queue_.GetAllMessages();
    }
}
You'll notice I did not preserve your mode functionality, but you get the idea. Of course, you have to duplicate this pattern for other calls you make to the queue, but only for the ones you make (not the whole queue interface).
This is an old thread, but Google brought me here, so I shall add my findings.
I agree with user tallseth that this is a timing issue.
After the message queue is created it is not instantly available.
try
{
    return _queue.GetAllMessages().Length;
}
catch (Exception)
{
    // The queue may not be available immediately after creation; wait and retry once.
    System.Threading.Thread.Sleep(4000);
    return _queue.GetAllMessages().Length;
}
Try adding a pause if you catch an exception when accessing a queue which you know has been created.
On a related note:
_logQueuePath = logQueuePath.StartsWith(#".\") ? logQueuePath : #".\" + logQueuePath;
_queue = new MessageQueue(_logQueuePath);
MessageQueue.Create(_logQueuePath);
bool exists = MessageQueue.Exists(_logQueuePath);
Running the MessageQueue.Exists(string nameOfQueue) method immediately after creating the queue will return false, so be careful when calling code such as:
public void CreateQueue()
{
    if (!MessageQueue.Exists(_logQueuePath))
    {
        MessageQueue.Create(_logQueuePath);
    }
}
As it is likely to throw an exception stating that the queue you are trying to create already exists.
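One way to guard against that is to catch the specific "queue exists" error instead of relying solely on Exists(). This is only a sketch, but the MessageQueueErrorCode check is the standard System.Messaging way to detect it:
public void CreateQueue()
{
    try
    {
        if (!MessageQueue.Exists(_logQueuePath))
        {
            MessageQueue.Create(_logQueuePath);
        }
    }
    catch (MessageQueueException ex) when (ex.MessageQueueErrorCode == MessageQueueErrorCode.QueueExists)
    {
        // Exists() can report false for a queue that was just created, so Create()
        // may still fail with "queue exists"; that is safe to ignore here.
    }
}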
Edit (sorry, I don't have the relevant link for this new info):
I read that a newly created MessageQueue will return false on MessageQueue.Exists(QueuePath) until it has received at least one message.
Keeping this and the earlier points I mentioned in mind has gotten my code running reliably.