Does the in-memory outbox only work with an underlying messaging transport configured?
The documentation and some of the posts I have read lead me to believe that it will ONLY work with a specific underlying transport specified. It would be nice if that weren't the case.
I say this because the discussions I have read around the outbox talk about acknowledging messages "from a broker": only once all processing has completed successfully are messages acknowledged and publishing performed.
So, when handling the messaging oneself (e.g. via Amazon SQS) and publishing messages into the state machine (i.e. taking the transport message, creating a new message, and then handing it off to a consumer or saga state machine), how would the outbox know about and work with the underlying transport messages?
To be really clear, will the outbox work when using the following configuration (note the absence of any messaging transport configuration):
services.AddMediator(configurator =>
{
    configurator.AddConsumer<PublishMessageConsumer>();
    configurator.AddSagaStateMachine<YetAnotherStateMachine, YetSomeMoreState>(
        sagaConfigurator =>
        {
            sagaConfigurator.UseInMemoryOutbox();
        }).DynamoDbRepository()
    // Snip
});
If it DOES work: if I wanted a consumer AND the saga state machine to work in concert, such that the saga published to the consumer and the consumer failed for some reason, what would actually happen?
The sole purpose of the in-memory outbox is to defer calls to Send/Publish until after the consumer has completed. In the case of a saga, that means after the saga instance has been persisted to the saga repository, which happens once all state machine behaviors for the event have completed successfully (without throwing an exception).
In the case above, the saga would complete all activities for the triggering event, the instance would be saved to the saga repository, and finally the consumer would be created/called by the Send/Publish call from the saga.
If the consumer throws an exception, it won't affect the already persisted saga instance in any way, as that has already completed.
NOW: if you do NOT use the in-memory outbox in this scenario, since it is using mediator (and not a transport), calling Send/Publish in a state machine activity transfers control immediately to the consumer of the sent/published message. After that consumer completes, control returns to the saga; once the activities have completed, the instance is persisted to the repository, the original message being consumed by the saga completes, and control returns to the original Send/Publish call.
Mediator is immediate, and any messages produced by consumers and/or sagas are consumed immediately as well.
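To make that ordering concrete, here is a minimal sketch of the failing consumer from the configuration above (the PublishMessage contract type is an assumption, not from the original question):

using System;
using System.Threading.Tasks;
using MassTransit;

public class PublishMessage { }

public class PublishMessageConsumer : IConsumer<PublishMessage>
{
    public Task Consume(ConsumeContext<PublishMessage> context)
    {
        // With UseInMemoryOutbox: by the time this runs, the saga's activities
        // have completed and the instance is already persisted, so this
        // exception faults only this consumer's message.
        // Without the outbox (mediator is immediate): this exception unwinds
        // back into the saga activity that called Publish, faulting the saga's
        // handling of the original event before the instance is persisted.
        throw new InvalidOperationException("Simulated consumer failure");
    }
}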
RabbitMq 3.8.5, C# RabbitMqClient v6.1.0, .Net Core 3.1
I feel that I'm misunderstanding something with RabbitMq, so I'm looking for clarification:
If I have a client sending a message to an exchange, and there's no consumer on the other side, what is meant to happen?
I had thought that it should sit in a queue until it's picked up, but the issue I've got is that right now there is no queue on the other end of the exchange (which may well be my issue).
This is my declaration code:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = channel.QueueDeclare().QueueName; // server-named, exclusive, auto-delete queue
channel.ConfirmSelect();
and this is my publisher:
channel.BasicPublish(exchangeName, routingKeyOrTopicName, messageProperties, message);
However, doing that gives me one queue name for the outbound exchange, and another for the inbound consumer.
Would someone help this poor idiot out in understanding how this is meant to work? What is the expected behavior if there's no consumer at the other end? I do have an RPC mechanism that does work, but wasn't sure if that's the right way to handle this, or not.
Everything works fine if I have my consumer running first; however, if I fire up my consumer after the client, then the messages are lost.
Edit
To further clarify, I've set up a simple RPC-type test: I have two Direct exchanges on the client side, one for the outbound exchange and another for the inbound RPC consumer.
Both those have their own queue.
Exchange queue name = amq.gen-fp-J9-TQxOJ7NpePEnIcGQ
Consumer queue name = amq.gen-wDFEJ269QcMsHMbAz-t3uw
When the Consumer app fires up, it declares its own Direct exchange and its own queue.
Consumer queue name = amq.gen-o-1O2uSczjXQDihTbkgeqA
If I do it that way though, the message gets lost.
If I fire up the consumer first then I still get three queues in total, but the messages are handled correctly.
This is the code I use to send my RPC message:
messageProperties.ReplyTo = _rpcResponder._routingKeyOrTopicName;
messageProperties.Type = "rpc";
messageProperties.Priority = priority;
messageProperties.Persistent = persistent;
messageProperties.Headers = headers;
messageProperties.Expiration = "3600000"; // per-message TTL in milliseconds (1 hour)
Looking at the management GUI, I see that all three queues end up being marked as Exclusive, but I'm not declaring them as such. In fact, I'm not creating any queues myself, rather letting the client library handle that for me. For example, this is how I define my consumer:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = channel.QueueDeclare().QueueName;
Console.WriteLine($"Consumer queue name = {queueName}");
channel.QueueBind(queueName, name, routingKeyOrTopicName, new Dictionary<string, object>());
In RabbitMQ, messages stay in queues, but they are published to exchanges. The way to link an exchange to a queue is through bindings (there are some default bindings).
If there are no bound queues, or the exchange's routing doesn't match any bound queue, the message is lost.
Once a message is in a queue, the message is sent to one of that queue's consumers.
Maybe you're using exclusive queues? These queues get deleted when their declaring connection is gone. Note that the parameterless QueueDeclare() overload creates exactly that kind of queue: server-named, non-durable, exclusive, and auto-delete, which matches what you're seeing in the management GUI.
Found the issue: I was allowing the library to generate the queue names rather than using specific ones, which meant RabbitMQ was dealing with a shifting target each time.
If I use well-defined queue names AND the consumer has fired up at least once to declare the queue on RabbitMQ, then I do see the message being dropped into the queue and staying there, even though the consumer isn't running.
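For anyone hitting the same thing, here is a minimal sketch of that fix (the exchange, queue, and routing-key names are made up for illustration): declare a durable, non-exclusive, named queue and bind it, so messages published while no consumer is running wait in the queue:

// Declare a durable exchange and a durable, non-exclusive, named queue.
channel.ExchangeDeclare("orders-exchange", ExchangeType.Direct, durable: true, autoDelete: false);
channel.QueueDeclare("orders", durable: true, exclusive: false, autoDelete: false, arguments: null);
channel.QueueBind("orders", "orders-exchange", "orders-key");

// Publish a persistent message; it waits in "orders" until a consumer
// picks it up, and survives a broker restart.
var props = channel.CreateBasicProperties();
props.Persistent = true;
var body = Encoding.UTF8.GetBytes("hello"); // System.Text
channel.BasicPublish("orders-exchange", "orders-key", props, body);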
I am using NServiceBus to create an endpoint.
The endpoint listens for an event, does some calculation, and then publishes the result (success or failure) to other endpoints.
I know that NServiceBus supports immediate retries and delayed retries, and that they are configurable.
Now, I want to publish a failure result event to other endpoints after all retries have been exhausted (before the message is sent to the error queue).
public async Task Handle(MyEvent message, IMessageHandlerContext context)
{
    Console.WriteLine($"Received MyEvent, ID = {message.Id}");

    // Connect to other services to get data and do some calculation,
    // without blocking the message pump in an async handler
    await Task.Delay(1000);

    Console.WriteLine($"Processed MyEvent, ID = {message.Id}");
    await context.Publish(new MyEventResult { IsSucceed = true });
}
Above is my current code. It publishes a successful result if no exception is thrown. But if a fatal exception occurs, I don't know how to publish a failure result event before the message is sent to the error queue.
Thanks in advance.
Notes: I am using NServiceBus 6.4.3
I'm not sure why you want this, but have you looked at NServiceBus sagas? They are intended to be used when you have to do blocking IO via (external) services. You can take alternative actions based on whether a specific task has been performed within an allocated period, or because the returned result was incorrect.
https://docs.particular.net/nservicebus/sagas/
See the following sample of a saga:
https://docs.particular.net/samples/saga/simple/
The following sample shows the usage of saga timeouts: if a specific task has not been performed within a specific duration, an alternative action can be performed, like publishing an event or performing a ReplyToOriginator:
https://docs.particular.net/nservicebus/sagas/timeouts
https://docs.particular.net/nservicebus/sagas/reply-replytooriginator-differences
https://docs.particular.net/nservicebus/sagas/#notifying-callers-of-status
By using sagas you are making your process explicit. I would avoid hooking into the recovery mechanism for this.
The recovery mechanism is meant to deal with transient errors like network connectivity issues, database deadlocks, etc., not with expected failure results. You should process those properly and continue your modeled process along its unhappy path.
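As a rough illustration of that approach, here is a sketch of a saga that requests a timeout when the event arrives and publishes the failure result if the work has not completed in time. The saga data, the CalculationTimeout message, the five-minute duration, and the assumption that MyEvent.Id is a Guid are all illustrative, not from the original question:

using System;
using System.Threading.Tasks;
using NServiceBus;

public class CalculationSagaData : ContainSagaData
{
    public Guid CalculationId { get; set; }
}

public class CalculationTimeout { }

public class CalculationSaga : Saga<CalculationSagaData>,
    IAmStartedByMessages<MyEvent>,
    IHandleTimeouts<CalculationTimeout>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<CalculationSagaData> mapper)
    {
        mapper.ConfigureMapping<MyEvent>(m => m.Id).ToSaga(s => s.CalculationId);
    }

    public async Task Handle(MyEvent message, IMessageHandlerContext context)
    {
        // Give the calculation a bounded amount of time to succeed.
        await RequestTimeout<CalculationTimeout>(context, TimeSpan.FromMinutes(5));

        // ... perform the calculation; on success, publish
        // new MyEventResult { IsSucceed = true } and MarkAsComplete() ...
    }

    public async Task Timeout(CalculationTimeout state, IMessageHandlerContext context)
    {
        // The work did not complete in time: publish the failure explicitly.
        await context.Publish(new MyEventResult { IsSucceed = false });
        MarkAsComplete();
    }
}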
I have a Service Fabric cluster hosting an 'Orchestrator'-type service which spins up and shuts down other Stateful services to do work, using FabricClient.ServiceManagementClient's CreateServiceAsync and DeleteServiceAsync methods.
The work involves processing messages which are stored for a short time within a ReliableConcurrentQueue.
I'm trying to handle the graceful shutdown of these services via the CancellationToken by ensuring that the queue is completely drained of messages before the service is deleted, but have found that the service's access to the ReliableConcurrentQueue is revoked once the CancellationToken is cancelled.
For example, calling StateManager.GetOrAddAsync<T>() from a callback registered with the CancellationToken results in a FabricNotReadableException containing the message "Primary state manager is currently not readable".
Reading around, it seems this is expected behaviour:
"In Service Fabric, when a Primary is demoted, one of the first things
that happens is that write access to the underlying state is revoked."
https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-reliable-services-lifecycle
Also, the answers to this question suggest that FabricNotReadableException is often a transient issue, and affected calls can be retried. This doesn't seem to be the case in this example; multiple retries at various frequencies/delays all seem to fail the same way.
Is there a way to guarantee that everything in the queue is processed using the combination of Stateful services, Reliable Collections and CancellationTokens? Or should I be looking into storage outside of what Service Fabric can provide?
Consider performing the queue item processing inside RunAsync.
Stopping / changing the role of a service causes the CancellationToken passed to RunAsync to be cancelled.
Once that happens, you need to make sure that you only exit that method when the queue depth is 0.
Also, once this cancellation is requested, you should probably stop allowing new items to be enqueued.
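A minimal sketch of that shape, inside the stateful service (the "items" queue name, the QueueItem type, and the ProcessAsync helper are assumptions for illustration):

protected override async Task RunAsync(CancellationToken cancellationToken)
{
    var queue = await StateManager
        .GetOrAddAsync<IReliableConcurrentQueue<QueueItem>>("items");

    while (true)
    {
        // Keep draining after cancellation is requested; exit only once empty.
        // Caveat: if the replica is demoted (rather than just closing), state
        // access can still be revoked mid-drain, so this remains best-effort.
        if (cancellationToken.IsCancellationRequested && queue.Count == 0)
            return;

        using (var tx = StateManager.CreateTransaction())
        {
            var item = await queue.TryDequeueAsync(tx);
            if (item.HasValue)
            {
                await ProcessAsync(item.Value); // hypothetical processing
                await tx.CommitAsync();
            }
        }

        if (queue.Count == 0 && !cancellationToken.IsCancellationRequested)
            await Task.Delay(100, CancellationToken.None); // idle backoff
    }
}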
How to configure MassTransit to retry context.Publish() before failing, for example when the RabbitMQ server is temporarily unavailable?
The problem with retry in this context is that the only real reason a Publish call would fail is if the broker connection was lost (for any reason: network, etc.).
In that case, the connection which was used to receive the message is also lost, meaning that another node connected to the broker may have already picked up the message. So a retry in this case would be bad, since it would reconnect to the broker and send, but then the message could not be acknowledged (since it was likely picked up on another thread/worker).
The usual course of action here is to let it fail, and when the receive endpoint reconnects, the message will be redelivered to a consumer which will then call Publish and reach the desired outcome.
You should make sure that your consumer can handle this properly (search for "idempotent consumer") to avoid a failure causing a break in your business logic.
Updated Jan 2022: Since v7, MassTransit retries all publish/send calls until the cancellationToken is canceled.
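Given that v7+ behavior, one way to bound how long Publish keeps retrying is to pass a token; a minimal sketch (the OrderSubmitted message, publishEndpoint, and the 30-second budget are all assumed names/values):

// Publish retries internally until this token is canceled, then throws.
using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(30));
await publishEndpoint.Publish(new OrderSubmitted(), cts.Token);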
Recently, I've been checking out RabbitMQ over C# as a way to implement pub/sub. I'm more used to working with NServiceBus. NServiceBus handles transactions by enlisting MSMQ in a TransactionScope. Other transaction aware operations can also enlist in the same TransactionScope (like MSSQL) so everything is truly atomic. Underneath, NSB brings in MSDTC to coordinate.
I see that in the C# client API for RabbitMQ there are IModel.TxSelect() and IModel.TxCommit() methods. They work well to avoid sending messages to the exchange before the commit, which covers the use case where multiple messages sent to the exchange need to be atomic. However, is there a good way to synchronize a database call (say to MSSQL) with the RabbitMQ transaction?
You can write a RabbitMQ resource manager to be used by MSDTC by implementing the IEnlistmentNotification interface. The implementation provides two-phase commit notification callbacks for the transaction manager upon enlisting for participation. Please note that MSDTC comes at a heavy price and will degrade your overall performance drastically.
Example of RabbitMQ resource manager:
using System.Transactions;
using RabbitMQ.Client;

sealed class RabbitMqResourceManager : IEnlistmentNotification
{
    private readonly IModel _channel;

    public RabbitMqResourceManager(IModel channel, Transaction transaction)
    {
        _channel = channel;
        _channel.TxSelect(); // put the channel into transactional mode
        transaction.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public RabbitMqResourceManager(IModel channel)
    {
        _channel = channel;
        _channel.TxSelect();
        if (Transaction.Current != null)
            Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Commit(Enlistment enlistment)
    {
        _channel.TxCommit(); // flush the pending publishes to the broker
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        Rollback(enlistment);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        preparingEnlistment.Prepared(); // nothing to do until Commit/Rollback
    }

    public void Rollback(Enlistment enlistment)
    {
        _channel.TxRollback(); // discard the pending publishes
        enlistment.Done();
    }
}
Example using the resource manager:
using (TransactionScope trx = new TransactionScope())
{
    var basicProperties = _channel.CreateBasicProperties();
    basicProperties.DeliveryMode = 2; // persistent

    // Enlists the channel in the ambient transaction (Transaction.Current),
    // which is set up by the enclosing TransactionScope.
    new RabbitMqResourceManager(_channel);

    _channel.BasicPublish(someExchange, someQueueName, basicProperties, someData);
    trx.Complete();
}
As far as I'm aware there is no way of coordinating the TxSelect/TxCommit with the TransactionScope.
Currently the approach that I'm taking is to use durable queues with persistent messages so that they survive RabbitMQ restarts. When consuming from the queues, I read a message off, do some processing, and then insert a record into the database; once all this is done, I ACK(nowledge) the message and it is removed from the queue.
The potential problem with this approach is that a message could end up being processed twice (if, for example, the record is committed to the DB but the connection to RabbitMQ drops before the message can be ack'd), but for the system that we're building we're more concerned about throughput. (I believe this is called the "at-least-once" approach.)
The RabbitMQ site does say that there is a significant performance hit when using TxSelect and TxCommit, so I would recommend benchmarking both approaches.
Whichever way you do it, you will need to ensure that your consumer can cope with a message potentially being processed twice.
If you haven't found it yet, take a look at the .NET user guide for RabbitMQ, specifically section 3.5.
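A minimal sketch of that consume-process-ack flow (the queue name and SaveToDatabase are illustrative; types come from RabbitMQ.Client and RabbitMQ.Client.Events):

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    // 1. Do the processing and commit the record to the database first.
    SaveToDatabase(ea.Body.ToArray()); // hypothetical DB call

    // 2. Only then acknowledge. If the process dies before this line, the
    //    broker redelivers the message, which is why the processing above
    //    must be idempotent (at-least-once delivery).
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume(queue: "orders", autoAck: false, consumer: consumer);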
Let's say you've got a service bus implementation behind your abstraction IServiceBus. We can pretend it's RabbitMQ under the hood, but it certainly doesn't need to be.
When you call servicebus.Publish, you can check System.Transactions.Transaction.Current to see if you're in a transaction. If you are, and it's a transaction for an MSSQL Server connection, then instead of publishing to Rabbit you can publish to a broker queue within SQL Server, which will respect the commit/rollback of whatever database operation you're performing (you want to do some connection magic here to avoid the broker publish escalating your transaction to MSDTC).
Now you need a service that reads the broker queue and does the actual publish to Rabbit. This way, for very important things, you can guarantee that your database operation completed beforehand and that the message gets published to Rabbit at some point in the future (when the service relays it). It's still possible to have failures here: if an exception occurs while committing the broker receive, you could end up publishing more than once, but you would never lose a message. The window for problems is drastically reduced, and that worst case is very unlikely; the SQL Server going offline after the receive but before the commit would be an example of when you would, at minimum, end up double publishing (when the server comes back online you'd publish again). You can build your service to be smart and mitigate some of this, but unless you use MSDTC and all that comes with it (yikes), or build your own MSDTC (yikes yikes), you are going to have potential failures; it's all about making the window small and unlikely to occur.
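A rough sketch of the relay idea, using a plain outbox table instead of a Service Broker queue to keep the example short (the Outbox table, column names, exchange/routing key, and connection handling are all assumptions):

// using System.Data.SqlClient; using RabbitMQ.Client;

// Inside the business transaction: write the outgoing message to an outbox
// table on the SAME connection/transaction as the business operation, so it
// commits or rolls back atomically with it (no MSDTC escalation).
void EnqueueOutbox(SqlConnection conn, SqlTransaction tx, byte[] payload)
{
    using (var cmd = new SqlCommand(
        "INSERT INTO Outbox (Payload, CreatedUtc) VALUES (@p, SYSUTCDATETIME())", conn, tx))
    {
        cmd.Parameters.AddWithValue("@p", payload);
        cmd.ExecuteNonQuery();
    }
}

// A separate relay service polls the outbox, publishes to Rabbit, and only
// deletes a row after a successful publish. A crash between publish and
// delete means the row is relayed again: at-least-once, never lost.
void RelayOne(SqlConnection conn, IModel channel)
{
    long id;
    byte[] payload;
    using (var select = new SqlCommand(
        "SELECT TOP 1 Id, Payload FROM Outbox ORDER BY Id", conn))
    using (var reader = select.ExecuteReader())
    {
        if (!reader.Read()) return;
        id = reader.GetInt64(0);
        payload = (byte[])reader[1];
    }

    channel.BasicPublish("some-exchange", "some-key", null, payload);

    using (var delete = new SqlCommand("DELETE FROM Outbox WHERE Id = @id", conn))
    {
        delete.Parameters.AddWithValue("@id", id);
        delete.ExecuteNonQuery();
    }
}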