In RabbitMQ, using the C# client, when I close a channel with IModel.Close(), the target queue gets dropped.
After some trial and error, I can't figure out how to prevent this behavior.
The queue is durable and the server isn't restarted; the queue is just dropped...
It was easy to solve: exchanges should be declared as durable and with autoDelete set to false. The same goes for queues.
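For reference, a minimal sketch of those declarations with the C# client; the host, exchange, queue, and routing-key names below are placeholders for illustration only:

using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Durable, non-auto-delete exchange: survives broker restarts and is not
// removed when the last queue is unbound.
channel.ExchangeDeclare(exchange: "my-exchange",
                        type: ExchangeType.Direct,
                        durable: true,
                        autoDelete: false);

// Durable, non-exclusive, non-auto-delete queue: survives broker restarts,
// is not tied to this connection, and is not removed when consumers go away.
channel.QueueDeclare(queue: "my-queue",
                     durable: true,
                     exclusive: false,
                     autoDelete: false);

channel.QueueBind(queue: "my-queue", exchange: "my-exchange", routingKey: "my-key");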
RabbitMq 3.8.5, C# RabbitMqClient v6.1.0, .Net Core 3.1
I feel that I'm misunderstanding something with RabbitMq so I'm looking for clarification:
If I have a client sending a message to an exchange, and there's no consumer on the other side, what is meant to happen?
I had thought that it would sit in a queue until it's picked up, but the issue I've got is that right now there is no queue on the other end of the exchange (which may well be my issue).
This is my declaration code:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = ret._channel.QueueDeclare().QueueName;
channel.ConfirmSelect();
and this is my publisher:
channel.BasicPublish(exchangeName, routingKeyOrTopicName, messageProperties, message);
However doing that gives me one queue name for the outbound exchange, and another for the inbound consumer.
Would someone help this poor idiot out in understanding how this is meant to work? What is the expected behavior if there's no consumer at the other end? I do have an RPC mechanism that does work, but wasn't sure if that's the right way to handle this, or not.
Everything works fine if I have my Consumer running first; however, if I fire up my Consumer after the client, the messages are lost.
Edit
To further clarify, I've set up a simple RPC type test; I've two Direct Exchanges on the client side, one for the outbound Exchange, and another for the inbound RPC consumer.
Both those have their own queue.
Exchange queue name = amq.gen-fp-J9-TQxOJ7NpePEnIcGQ
Consumer queue name = amq.gen-wDFEJ269QcMsHMbAz-t3uw
When the Consumer app fires up, it declares its own Direct exchange and its own queue.
Consumer queue name = amq.gen-o-1O2uSczjXQDihTbkgeqA
If I do it that way though, the message gets lost.
If I fire up the consumer first then I still get three queues in total, but the messages are handled correctly.
This is the code I use to send my RPC message:
messageProperties.ReplyTo = _rpcResponder._routingKeyOrTopicName;
messageProperties.Type = "rpc";
messageProperties.Priority = priority;
messageProperties.Persistent = persistent;
messageProperties.Headers = headers;
messageProperties.Expiration = "3600000";
Looking at the management GUI, I see that all three queues end up being marked as Exclusive, but I'm not declaring them as such. In fact, I'm not creating any queues myself, rather letting the client library handle that for me; for example, this is how I define my Consumer:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = ret._channel.QueueDeclare().QueueName;
Console.WriteLine($"Consumer queue name = {queueName}");
channel.QueueBind(ret.QueueName, name, routingKeyOrTopicName, new Dictionary<string, object>());
In RabbitMQ, messages stay in queues, but they are published to exchanges. The way to link an exchange to a queue is through bindings (there are some default bindings).
If there are no bound queues, or the exchange's routing doesn't match any queue, the message is lost.
Once a message is in a queue, the message is sent to one of that queue's consumers.
Maybe you're using exclusive queues? These queues get deleted when their declaring connection is gone.
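That is what's happening here: calling QueueDeclare() with no arguments asks the broker for a server-named queue that is non-durable, exclusive, and auto-delete, which is why all three queues show up as Exclusive and disappear with their connection. A sketch of the alternative, using an illustrative queue name alongside the exchange and routing-key variables from the question:

// "orders" is a placeholder; a well-known, durable, non-exclusive queue
// keeps its messages even while no consumer is connected.
channel.QueueDeclare(queue: "orders",
                     durable: true,
                     exclusive: false,
                     autoDelete: false);
channel.QueueBind(queue: "orders", exchange: name, routingKey: routingKeyOrTopicName);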
Found the issue: I was letting the library generate the queue names rather than using specific ones, which meant RabbitMQ was dealing with a shifting target each time.
If I use 'well defined' queue names AND the consumer has fired up at least once to define the queue on RabbitMQ, then I do see the message being dropped into the queue and staying there, even though the consumer isn't running.
I'd like to write a parallel execution module based on Solace, and I'm using a request-reply scheme for this.
I have:
Multiple message consumers, which publish messages into the same queue.
Multiple message producers, which read the queue and create reply messages.
Message execution time is between 10 seconds to 10 minutes.
Queue access type is non-exclusive (i.e. it does round-robin between all consumers).
Each producer and consumer is asynchronous, i.e. the Solace API blocks execution only during the connection.
What I'd like to have: while a producer is working on a message, it should not receive any other messages. This is extremely important, because some tasks block an executor for several minutes, while other executors can become free after a couple of seconds.
The scheme below could work, however it contains blocking code, which I'd like to avoid.
while (true)
{
    var inputMessage = flow.ReceiveMsg(/* timeout 1s */ 1_000); // <--- blocking code, I'd like to avoid it
    flow.Ack(inputMessage.ADMessageId);
    var reply = await ProcessMessageAsync(inputMessage); // execute plus handle exceptions
    session.SendReply(inputMessage, reply);
}
Messages are only pushed to the consuming applications.
That being said, your desired behavior can be obtained by setting the "max-delivered-unacked-msgs-per-flow" on your queue to 1.
This means that each consumer bound to the queue is only allowed to have 1 outstanding unacknowledged message.
The next message will only be sent to the consumer after it has acknowledged the previous one.
Details about this feature can be found in the Solace documentation.
Do note that your code snippet does not appear to be valid.
IFlow.ReceiveMsg is only used in transacted sessions, which makes use of ITransactedSession.Commit to acknowledge messages.
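As a sketch of the non-blocking side of this, the Solace .NET API also lets you bind a flow with a message event handler instead of calling ReceiveMsg in a loop; combined with max-delivered-unacked-msgs-per-flow = 1 on the queue and acknowledging only after processing, each executor is handed one message at a time. The queue name and handler wiring below are illustrative assumptions, reusing session, ProcessMessageAsync and SendReply from the question:

using SolaceSystems.Solclient.Messaging;

// Assumes an established ISession ("session") and a queue whose
// max-delivered-unacked-msgs-per-flow has been set to 1 on the broker.
IQueue queue = ContextFactory.Instance.CreateQueue("executor-queue"); // placeholder name

var flowProperties = new FlowProperties
{
    AckMode = MessageAckMode.ClientAck // acknowledge manually, after processing
};

IFlow flow = null;
flow = session.CreateFlow(flowProperties, queue, null,
    async (sender, args) =>
    {
        IMessage inputMessage = args.Message;

        // Do the (potentially minutes-long) work first...
        var reply = await ProcessMessageAsync(inputMessage);
        session.SendReply(inputMessage, reply);

        // ...and only then acknowledge, so the broker is free to deliver
        // the next message to this flow.
        flow.Ack(inputMessage.ADMessageId);
    },
    (sender, args) => { /* flow events (up/down) could be logged here */ });

flow.Start();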
I have a couple of queues where certain information is queued. Let's say I have "success" and "failed" queues to which a server-side component continuously writes data for clients.
Clients read this data and display it on a UI for end users. Now I need to purge any messages in these queues that are older than 30 days, so that clients only ever see 30 days of information at any point in time.
I have searched a lot and could find some command-line options to purge a whole queue, but no relevant suggestion for this.
Any help in the right direction is appreciated. Thanks
I don't think this is possible; it looks like you're trying to use RabbitMQ as data storage instead of a message server.
The only way to know whether a message is older than 30 days is to consume it, and by doing so you remove the message from the queue.
The best thing to do here is to process the messages and store them in long-term storage; then you can implement a deletion policy to remove the older elements.
If you really want to go down this path, RabbitMQ implements TTL at queue level or message level; take a look at this: https://www.rabbitmq.com/ttl.html
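If you do go down the TTL route, a minimal sketch with the C# client might look like this; the queue name is a placeholder, and keep in mind that expired messages are discarded (or dead-lettered), not archived:

using System;
using System.Collections.Generic;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Per-queue message TTL: 30 days, expressed in milliseconds.
var arguments = new Dictionary<string, object>
{
    ["x-message-ttl"] = (long)TimeSpan.FromDays(30).TotalMilliseconds
};

channel.QueueDeclare(queue: "success",
                     durable: true,
                     exclusive: false,
                     autoDelete: false,
                     arguments: arguments);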
[As discussed in comments]
To keep the message in the queue you can try to use a NACK instead of ACK as confirmation; this way RabbitMQ will consider the message undelivered and it will try to deliver it again and again. Remember to create a durable queue (https://www.rabbitmq.com/confirms.html).
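For instance, with the C# client a consumer could requeue a delivery it is not ready to remove yet (ea here stands for the BasicDeliverEventArgs of the received message, named for illustration):

// Negative-acknowledge a single delivery and ask the broker to requeue it.
channel.BasicNack(deliveryTag: ea.DeliveryTag, multiple: false, requeue: true);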
You can also check this answer: Rabbitmq Ack or Nack, leaving messages on the queue
I am attempting to prove that MassTransit delivers messages in the same order (FIFO) in which RabbitMQ receives them. So far, I am not having any luck. MT seems to deliver messages out of the queue in a random order. I have tried setting both of these bus configuration options to 1:
SetConcurrentReceiverLimit()
SetConcurrentConsumerLimit()
...seems to make no difference.
How do I ensure FIFO delivery via MassTransit?
If you set the ConcurrentConsumerLimit to 1 (the receiver limit is 1 by default) and set prefetch=1 on the URI, you should get in-order FIFO delivery, assuming no consumer exceptions are thrown. Honestly, even with a prefetch > 1 (which is important for performance reasons) it should be in order.
Also, if you're doing this with some sample code, post the sample code and make sure that your producer and consumer processes are listening to separate queues.
x.ReceiveFrom(uri) // uri should be unique per bus instance
There's also a prefetch you need to set with RabbitMQ. MassTransit capping message rates at 10 shows an example of using prefetch configuration on the queue URI.
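Putting those two suggestions together, a bus configuration along these lines should give single-message, in-order consumption. This is a hedged sketch against the older ServiceBusFactory-style API used elsewhere in this thread; the queue name, MyMessage type and Handle method are placeholders:

var bus = ServiceBusFactory.New(sbc =>
{
    sbc.UseRabbitMq();

    // prefetch=1: the broker hands this consumer one unacknowledged message at a time.
    sbc.ReceiveFrom("rabbitmq://localhost/my_queue?prefetch=1");

    // A single consumer thread, so messages are handled strictly one after another.
    sbc.SetConcurrentConsumerLimit(1);

    sbc.Subscribe(subs => subs.Handler<MyMessage>(msg => Handle(msg)));
});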
I'm using MassTransit + MSMQ as a message-passing bus, with reasonable success. However, for some tests I want to enqueue messages but never dequeue them. It seems like the right way to do this is to not subscribe to the queue directly. Here is my code:
1) I want to send and receive messages from the same queue in this process [this works]:
var solrMessageBus = ServiceBusFactory.New(sbc =>
{
sbc.UseMsmq();
sbc.VerifyMsmqConfiguration();
sbc.ReceiveFrom("msmq://localhost/my_queue");
sbc.Subscribe(subs =>
{
subs.Handler<MyMessage>(msg => Enqueue(msg));
});
});
2) I want to send messages from this process, but not consume them. MSMQ should build up a large queue of messages [this does not work]
var solrMessageBus = ServiceBusFactory.New(sbc =>
{
sbc.UseMsmq();
sbc.VerifyMsmqConfiguration();
sbc.ReceiveFrom("msmq://localhost/my_queue");
});
I'm not a MassTransit expert, but the above seems like a reasonable way to enqueue without dequeuing messages from that same queue. In 1), I see messages end up in my MSMQ, but in 2) no messages ever get to the queue.
How can I build up the queue without dequeuing the messages?
If you do not register any subscriptions on the bus, the queue will be emptied and all of the messages sent to the queue will end up in the _error queue.
If you need to just send messages to a queue, you can use an EndpointCacheFactory (instead of a service bus factory) to get an IEndpointCache, then call GetEndpoint(uri) and use the Send method to send messages to that queue. This has the added benefit of avoiding any thread pool usage for receiving messages that are never consumed.
Also, a quick reminder, every service bus instance must have its own queue.
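A rough sketch of that send-only approach, assuming the MassTransit 2.x-era EndpointCacheFactory API (the exact factory and method names may differ between versions; MyMessage is a placeholder type):

// Send-only: no bus instance and no receive queue, so nothing ever dequeues.
var endpointCache = EndpointCacheFactory.New(x =>
{
    x.UseMsmq();
});

var endpoint = endpointCache.GetEndpoint(new Uri("msmq://localhost/my_queue"));
endpoint.Send(new MyMessage());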
That sounds reasonable, however I've never tried it.
MassTransit builds the subscription mapping out of your setup and then maps subscriptions to queues (using multicast subscription). Note that messages are never stored in queues assigned to senders; rather, they are multicast to subscribers. No subscribers = nowhere to put your messages.
To queue messages forever, I would add a subscriber but pause its consumer thread until tests are completed.