Consuming the _error queue in MassTransit - C#

For each queue MassTransit has consumers on, it automatically creates a [queuename]_error queue and moves messages there that could not be processed (after retries, etc.).
I'm trying to create a consumer that takes errors from that queue and writes them to a database.
In order to consume those messages, I had to create a handler/consumer for the error queue, receiving the original message:
cfg.ReceiveEndpoint(host, "myqueuename", e =>
{
    e.Handler<MyMessage>(ctx =>
    {
        throw new Exception("Not expected");
    });
});
cfg.ReceiveEndpoint(host, "myqueuename_error", e =>
{
    e.BindMessageExchanges = false;
    e.Handler<MyMessage>(ctx =>
    {
        Console.WriteLine("Handled");
        // do whatever
        return ctx.CompleteTask;
    });
});
All that works fine; the problem is retrieving the actual exception that occurred.
I was actually able to do that with a fairly serious hack...
e.Handler<MyMessage>(m =>
{
    var buffer = m.ReceiveContext.TransportHeaders
        .GetAll().Single(s => s.Key == "MT-Fault-Message").Value as byte[];
    var errorText = new StreamReader(new MemoryStream(buffer)).ReadToEnd();
    Console.WriteLine($"Handled, Error={errorText}");
    return m.CompleteTask;
});
That just feels wrong, though.
PS: I know I could subscribe to a Fault event, but in this particular case it is a RequestClient (request-response) pattern, MT directs the FaultAddress back to the client, and I can't guarantee the client is still running.
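For what it's worth, a slightly less brittle version of the same hack is sketched below. It still assumes the MT-Fault-* transport headers (MT-Fault-Message, MT-Fault-ExceptionType, ...) are present on the _error queue message, and that the Headers API exposes TryGetHeader alongside the GetAll used above; the header value may surface as a string or as raw bytes depending on the transport, so both cases are handled:
e.Handler<MyMessage>(m =>
{
    string errorText = null;

    // TryGetHeader availability and the exact header names are assumptions here
    if (m.ReceiveContext.TransportHeaders.TryGetHeader("MT-Fault-Message", out var fault))
    {
        errorText = fault as string
            ?? (fault is byte[] bytes ? Encoding.UTF8.GetString(bytes) : fault?.ToString());
    }

    if (m.ReceiveContext.TransportHeaders.TryGetHeader("MT-Fault-ExceptionType", out var exceptionType))
        Console.WriteLine($"Exception type: {exceptionType}");

    Console.WriteLine($"Handled, Error={errorText}");
    return m.CompleteTask;
});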

Request/reply should only be used for getting data. That means if the requestor goes down, there is no reason to reply with data or with a fault, and you have no interest in consuming faults.
So the request client using a temporary (non-durable) queue instead of the receive endpoint queue is by design. It should make you realise that the scope of your replies is limited to the time the request is waiting.
If you send commands and need to be informed whether a command has been processed, you should publish events reporting the outcome of the command processing. Message metadata (initiator id and conversation id) lets you find out how events correlate with commands.
So, only use request/reply for requesting information (queries) using the decoupled invocation SOA pattern, where the reply only has meaning in correlation with the request; if the requestor goes down, the reply is no longer needed, no matter whether it was a success or a failure.
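To make that suggestion concrete, here is a minimal sketch of publishing outcome events from the command consumer. MyCommand, MyCommandProcessed, MyCommandFailed and their CommandId property are hypothetical contracts, not MassTransit types:
// using MassTransit; using System; using System.Threading.Tasks;

// Hypothetical message contracts
public class MyCommand { public Guid CommandId { get; set; } }
public class MyCommandProcessed { public Guid CommandId { get; set; } }
public class MyCommandFailed { public Guid CommandId { get; set; } public string Reason { get; set; } }

public class MyCommandConsumer : IConsumer<MyCommand>
{
    public async Task Consume(ConsumeContext<MyCommand> context)
    {
        try
        {
            // ... do the actual work ...
            await context.Publish(new MyCommandProcessed { CommandId = context.Message.CommandId });
        }
        catch (Exception exception)
        {
            // Publish the failure as an event so any durable consumer (e.g. one writing
            // to a database) can record it even if the original requestor is gone.
            // The initiator id / conversation id on the consume context can be used to
            // correlate this event with the original command.
            await context.Publish(new MyCommandFailed
            {
                CommandId = context.Message.CommandId,
                Reason = exception.Message
            });

            throw; // still let retry and _error queue handling run
        }
    }
}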

Related

Azure Service Bus send message every other time

I have a C# .NET WebJob and a simple desktop app.
Sending a message appears to work only every other time.
serviceBusClient = new QueueClient(_config["ServiceBusConnectionString"], "queuename", ReceiveMode.ReceiveAndDelete);
await serviceBusClient.SendMigrationMessageAsync("1", label);
await serviceBusClient.SendMigrationMessageAsync("2", label);
await serviceBusClient.SendMigrationMessageAsync("3", label);
await serviceBusClient.SendMigrationMessageAsync("4", label);
SendMigrationMessageAsync is an extension:
public static async Task SendMigrationMessageAsync(this IQueueClient client, string messageText, string label)
{
    Message message = new Message(Encoding.UTF8.GetBytes(messageText));
    message.Label = label;
    await client.SendAsync(message);
}
In the desktop app I registered to receive the messages and also registered a message exception handler (which is not called at all).
In this scenario I only receive messages "2" and "4".
When I stopped execution after the first message had been sent, the message never showed up on the Azure service.
Thanks in advance
EDITED:
I found out that after creating a brand new Azure Service Bus namespace, everything works fine.
I had the Basic pricing tier, and even after upgrading to Standard I was still only able to send every other message.
Creating a new service sorted this out.
Is there any limitation or throttling? I haven't sent many messages at all, around 300 daily.
You most probably had two processes with the same subscription id, so they were "stealing" messages from each other. Say there are two console apps, the first one sending messages and the second one receiving.
With both sharing the same subscription, the receivers compete and each gets only some of the messages (effectively round-robin), which matches the every-other-message behaviour you saw.
With a unique subscription for each process, every process receives every message.
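A hedged sketch of the unique-subscription idea with Microsoft.Azure.ServiceBus follows. It assumes the sender publishes to a topic and that one subscription per process (created beforehand in the portal or via ManagementClient) is acceptable; the topic and subscription names are made up for illustration:
// using Microsoft.Azure.ServiceBus; using System; using System.Text; using System.Threading.Tasks;

// Each process uses its own subscription, so each one receives every message
// instead of competing for messages on a shared queue or subscription.
var subscriptionClient = new SubscriptionClient(
    _config["ServiceBusConnectionString"],
    "migration-topic",                       // hypothetical topic name
    $"desktop-{Environment.MachineName}",    // unique subscription per process
    ReceiveMode.PeekLock);

subscriptionClient.RegisterMessageHandler(
    async (message, token) =>
    {
        Console.WriteLine(Encoding.UTF8.GetString(message.Body));
        await subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
    },
    new MessageHandlerOptions(args => Task.CompletedTask) { AutoComplete = false });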

how to receive N number of messages at time from azure service bus topic subscription

I have an Azure Service Bus topic subscription where messages keep piling up.
The code below basically receives one message at a time, processes it, and stores the result in a database.
I tried setting MaxConcurrentCalls to 10, but it exhausted my database connection pool due to the way the database work is designed.
So I thought I would get 10 messages from the subscription at a time (receive in a batch of N messages) and process them with one database call.
I don't see any batch API options. Is this possible?
I am using the Microsoft.Azure.ServiceBus NuGet package, version 4.1.1.
_subscriptionClient = new SubscriptionClient(connectionString, topicName, subscriptionName);

// Register the callback method that will be invoked when a message of interest is received
_subscriptionClient.RegisterMessageHandler(
    async (message, token) =>
    {
        if (await ProcessMessage(message, token))
        {
            await _subscriptionClient.CompleteAsync(message.SystemProperties.LockToken);
        }
    },
    new MessageHandlerOptions(ExceptionReceivedHandler) { MaxConcurrentCalls = 1, AutoComplete = false });
There is the concept of prefetching: https://learn.microsoft.com/en-us/azure/service-bus-messaging/service-bus-performance-improvements?tabs=net-framework-sdk#prefetching
Prefetching enables the queue or subscription client to load additional messages from the service when it performs a receive operation.
Check out ReceiveBatch here: https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicebus.messaging.subscriptionclient.receivebatch?view=azure-dotnet
Example:
SubscriptionClient client = SubscriptionClient.CreateFromConnectionString(connectionString, topic, subName);
client.PrefetchCount = 10;
IEnumerable<BrokeredMessage> messageList = client.ReceiveBatch(5);
Prefetch should be greater than or equal to the number of messages you are expecting to receive from ReceiveBatch.
Prefetch can be up to n/3 times the number of messages processed per second, where n is the default lock duration.
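Note that ReceiveBatch above belongs to the older WindowsAzure.ServiceBus package (Microsoft.ServiceBus.Messaging). With the Microsoft.Azure.ServiceBus 4.1.1 package mentioned in the question, the closest equivalent is MessageReceiver.ReceiveAsync with a message count. A minimal sketch, to be placed inside an async method; the connection string, topic and subscription names, and the StoreBatchInDatabase call are placeholders:
// using Microsoft.Azure.ServiceBus; using Microsoft.Azure.ServiceBus.Core;
var receiver = new MessageReceiver(
    connectionString,
    EntityNameHelper.FormatSubscriptionPath(topicName, subscriptionName),
    ReceiveMode.PeekLock)
{
    PrefetchCount = 10
};

// Returns up to 10 messages (possibly fewer, or null if nothing arrives
// within the server wait time).
IList<Message> batch = await receiver.ReceiveAsync(10, TimeSpan.FromSeconds(5));
if (batch != null)
{
    await StoreBatchInDatabase(batch);   // one database call for the whole batch (placeholder)

    foreach (var message in batch)
        await receiver.CompleteAsync(message.SystemProperties.LockToken);
}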

MassTransit and Broadcasting

I am trying to get a messaging system up and running between multiple applications. I have an instance of RabbitMQ running and that appears to be fine. I can connect multiple subscribers/publishers to the RabbitMQ instance and they appear to be fine. I can then publish a message from one publisher, but only one subscriber gets the message.
I believe it has to do with the way I am establishing the queues. I've looked at the RabbitMQ tutorial, https://www.rabbitmq.com/tutorials/tutorial-three-dotnet.html, but I don't know how this translates into the MassTransit library.
For the life of me I cannot work out what I am doing wrong.
NuGets:
MassTransit.Extensions.DependencyInjection 5.3.2
MassTransit.RabbitMQ 5.3.2
Can anyone help?
// Register MassTransit
services.AddMassTransit(mtCfg =>
{
    mtCfg.AddConsumer<DomainMessageConsumer>();
    mtCfg.AddBus(provider => Bus.Factory.CreateUsingRabbitMq(rbCfg =>
    {
        var host = rbCfg.Host(settings.RabbitMq.Host, settings.RabbitMq.VirtualHost, h =>
        {
            h.Username(settings.RabbitMq.Username);
            h.Password(settings.RabbitMq.Password);
        });

        rbCfg.ReceiveEndpoint(host, settings.RabbitMq.ConnectionName, ep =>
        {
            ep.PrefetchCount = 16;
            ep.UseMessageRetry(x => x.Interval(2, 100));
            ep.ConfigureConsumer<DomainMessageConsumer>(provider);
        });
    }));
});
The problem you are having is that you are using the same queue name for all consumers. If you want broadcasting to all consumers, you should make the queue names unique. In your code example, it's the settings.RabbitMq.ConnectionName value that you should make unique for each consumer.
When every consumer shares that one queue, only one of them receives each published message (it is actually round-robin balancing between them, but that's going off topic). If you want broadcasting, give each consumer its own queue (its own settings.RabbitMq.ConnectionName in your example), and each will receive a copy of every published message.
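A sketch of that suggestion against the configuration above: derive a unique queue name per consumer instance so each instance gets its own queue (and therefore its own copy of every published message). The auto-delete/non-durable settings are optional and shown only as one way to avoid leaving orphaned per-instance queues behind:
rbCfg.ReceiveEndpoint(host, $"{settings.RabbitMq.ConnectionName}-{Guid.NewGuid():N}", ep =>
{
    ep.AutoDelete = true;    // optional: remove the per-instance queue when the bus disconnects
    ep.Durable = false;      // optional: per-instance queues usually don't need to survive restarts
    ep.PrefetchCount = 16;
    ep.UseMessageRetry(x => x.Interval(2, 100));
    ep.ConfigureConsumer<DomainMessageConsumer>(provider);
});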

MessageReceiver.RegisterMessageHandler throws exceptions continuously if network is down

I have successfully implemented a connection to Service Bus with MessageReceiver using RegisterMessageHandler, which starts a pump (from this example), and all seems to work just fine.
But in case of an exception, e.g. when I turn off the network connection, the pump throws exceptions continuously to the ExceptionHandler, every second or even faster. I am wondering if this is the intended default behaviour and, more importantly, whether it's possible to change it so that, for example, connection retries happen every minute. Or am I supposed to do a Thread.Sleep or something to achieve that?
receiver.RegisterMessageHandler(
    async (message, cancellationToken1) => await HandleMessage(receiver, message),
    new MessageHandlerOptions(HandleException)
    {
        AutoComplete = false,
        MaxConcurrentCalls = 1
    });
P.S. This is how I solved it for now, but I'm not sure it's the proper way:
private Task HandleException(ExceptionReceivedEventArgs args)
{
    _logger.Error(...);
    return Task.Delay(60000);
}
Azure Service Bus has a default retry policy (RetryPolicy.Default), but since the transport keeps trying to receive messages while the broker is unavailable, it will keep raising exceptions.
ExceptionReceivedEventArgs provides a context, ExceptionReceivedContext, which contains the action that failed along with the original exception. You can evaluate the action and decide what needs to be done, and you can also check whether the exception is transient. For transient errors, depending on the action, you could simply wait for the message to be retried later (the Receive action). In other cases you could log an error or take a more specific action.
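A sketch of what evaluating that context could look like in the HandleException method from the question; the "Receive" action string and the use of ServiceBusException.IsTransient are assumptions, and logging is reduced to Console output:
private Task HandleException(ExceptionReceivedEventArgs args)
{
    var context = args.ExceptionReceivedContext;
    Console.Error.WriteLine($"Action={context.Action}, EntityPath={context.EntityPath}: {args.Exception}");

    // Back off only for transient receive failures (e.g. the network being down),
    // instead of delaying every callback unconditionally.
    if (context.Action == "Receive" &&
        args.Exception is ServiceBusException sbException && sbException.IsTransient)
    {
        return Task.Delay(TimeSpan.FromMinutes(1));
    }

    return Task.CompletedTask;
}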
Try configuring a RetryExponential on your SubscriptionClient, like this:
var receiver = new Microsoft.Azure.ServiceBus.SubscriptionClient(
    _serviceBusConnString, _topic, _subscription, this._receiveMode,
    new RetryExponential(TimeSpan.FromSeconds(5), TimeSpan.FromSeconds(10), _retryPolicyMaximumRetryCount));
The parameter descriptions are here:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.servicebus.retryexponential?view=azure-dotnet
And here is another post about what the properties mean:
ServiceBus RetryExponential Property Meanings

Send message to specific channel/routing key with Masstransit/RabbitMQ in C#

I've been working on an application that starts some worker roles based on messaging.
This is the way I want the application to work:
Client sends a request for work (RPC).
One of the worker roles accepts the work, generates a random id, and responds to the RPC with the new id.
The worker will post its debug logs on a log channel with the id.
The client will subscribe to this channel so users can see what's going on.
The RPC is working fine, but I can't seem to figure out how to implement the log-sending.
This is the code that accepts work (simplified):
var bus = Bus.Factory.CreateUsingRabbitMq(sbc =>
{
    var host = sbc.Host(new Uri("rabbitmq://xxxxxx.nl"), h =>
    {
        h.Username("xxx");
        h.Password("xxxx");
    });

    sbc.ReceiveEndpoint(host, "post_work_item", e =>
    {
        e.Consumer<CreateWorkItemCommand>();
    });

    sbc.ReceiveEndpoint(host, "list_work_items", e =>
    {
        e.Consumer<ListWorkItemsCommand>();
    });
});
The CreateWorkItemCommand consumer will create the thread, do the work, etc. Now, how would I implement the log-sending with MassTransit? I was thinking of something like:
bus.Publish(
    obj: new WorkUpdate { Message = "Hello world!" },
    channel: $"work/{work_id}"
);
And the client would do something like this:
bus.ReceiveFromEvented($"work/{rpc.work_id}").OnMessage += { more_pseudo_code(); };
I can't seem to find out how to do this.
Can anyone help me out?
Thanks!
This looks like both a saga and Turnout. The current Turnout implementation monitors the job itself, and I doubt you can really subscribe to that message flow; it is also not really finished yet.
You might solve this with a saga. Some external trigger (a command) starts the saga, which uses request/response to kick off the process that does the work and gets back its correlation id (the job id). The long-running job can publish progress reports using the same correlation id, and the saga will consume them, doing whatever it needs to do.
The "work/{rpc.work_id}" channel is then replaced by the correlation id.
