The template code when you create a worker role with a queue client provides a message pump implementation. The code has a comment in it saying:
```
// Initiates the message pump and callback is invoked for each message that is received, calling close on the client will stop the pump.
sourceClient.OnMessage(received =>
{
    // blah blah implementation
});
```
What actually happens when you call Close() on the sourceClient? Do messages that are currently being processed continue? I.e. is this a graceful shutdown of the message pump? Or will calling Close() affect messages that are currently being processed by the message pump?
The documentation would lead me to believe it is, but there is this outstanding feedback item which would imply that there is no graceful shutdown mechanism for a message pump: https://feedback.azure.com/forums/216926-service-bus/suggestions/4345733-provide-gracefull-shutdown-feature-to-message-pump
So what does sourceClient.Close() actually do?
In the full framework client (WindowsAzure.ServiceBus), QueueClient does not stop the message pump gracefully. In-flight messages that were not completed will have their delivery count increased.
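Since the client offers no built-in graceful stop, one common approximation is to track in-flight handlers yourself and drain them around Close(). A minimal sketch, assuming the WindowsAzure.ServiceBus client; ProcessMessage and the counter are illustrative, and any Complete() that races with Close() can still fail, in which case that message is simply redelivered:

```
int inFlight = 0;

sourceClient.OnMessage(received =>
{
    Interlocked.Increment(ref inFlight);
    try
    {
        ProcessMessage(received);   // hypothetical application handler
    }
    finally
    {
        Interlocked.Decrement(ref inFlight);
    }
});

// On shutdown: Close() stops new deliveries; then wait for running handlers to
// finish before exiting. Handlers whose completion raced with Close() will fail,
// and those messages come back with an increased delivery count.
sourceClient.Close();
while (Volatile.Read(ref inFlight) > 0)
{
    Thread.Sleep(100);
}
```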
So what does sourceClient.Close() actually do?
That client is a closed source project. Best guess would be to raise an issue for it here.
Is it possible to bulk read messages from Solace queue rather than receiving them one by one on callback?
Currently, the MessageEventHandler receives about 20 messages per minute, which is too slow for our application.
Does anyone have a better solution to speed things up in Solace?
This is a C# application.
We used
ISession.CreateFlow(FlowProperties, IEndpoint, ISubscription, EventHandler<MessageEventArgs>, EventHandler<FlowEventArgs>),
passing in a MessageEventHandler which gets the message via MessageEventArgs.Message:
```
queue = CreateQueue();
Flow = Session.CreateFlow(flowProperties, queue, null, OnHandleMessageEvent, OnHandleFlowEvent);
...

void OnHandleMessageEvent(object sender, MessageEventArgs args)
{
    var msgObj = args.Message.BinaryAttachment;
    ...
}
```
No, there is no API call for a user to read messages in bulk.
By default, the API already obtains messages from the message broker in batches, with each message being individually delivered to the application in the message receive callback.
FlowProperties.WindowSize and FlowProperties.MaxUnackedMessages can change this behavior.
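For illustration, a minimal sketch of tuning those knobs when creating the flow, reusing the names from your snippet and assuming the SolaceSystems .NET API; the values are placeholders, not recommendations:

```
var flowProperties = new FlowProperties
{
    AckMode = MessageAckMode.ClientAck, // ack explicitly, after processing
    WindowSize = 255,                   // how many messages the API prefetches
    MaxUnackedMessages = 50             // cap on outstanding unacknowledged messages
};
Flow = Session.CreateFlow(flowProperties, queue, null, OnHandleMessageEvent, OnHandleFlowEvent);
```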
20 messages per minute is extremely slow.
One common reason for slowness is that the application is taking a long time to process messages in the message receive callback ("OnHandleMessageEvent").
Blocking in the message receive callback will prevent the API from delivering another message to the application.
Refer to Do Not Block in Callbacks for details.
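As a sketch of that advice (not from the Solace docs; `Process` and the bounded capacity are assumptions, and the handler is registered via CreateFlow as in your snippet): return from the callback immediately and do the slow work on a separate thread, acking only when done. This assumes the flow was created with MessageAckMode.ClientAck; the bounded collection applies backpressure if processing falls behind:

```
var pending = new BlockingCollection<IMessage>(boundedCapacity: 100);

void OnHandleMessageEvent(object sender, MessageEventArgs args)
{
    pending.Add(args.Message); // return quickly; no processing in the callback
}

Task.Run(() =>
{
    foreach (var msg in pending.GetConsumingEnumerable())
    {
        Process(msg.BinaryAttachment); // the slow work happens here
        Flow.Ack(msg.ADMessageId);     // ack only after processing succeeds
        msg.Dispose();
    }
});
```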
I'm having a situation similar to the one described here, but cannot comment there because I just registered on this site.
A workaround of "pausing" with SetNumberOfWorkers(0) works in most cases. However, if SetNumberOfWorkers(0) is called while a lengthy message handler is running, I receive the following error at the end of the handler:
```
An error occurred when attempting to complete the transaction context
Rebus.Exceptions.RebusApplicationException: Could not complete message with ID <...> and lock token <...> ---> Microsoft.Azure.ServiceBus.MessageLockLostException: The lock supplied is invalid. Either the lock expired, or the message has already been removed from the queue.
```
Note that "Worker Rebus 1 worker 1 stopped" messages are logged for all workers almost immediately after calling SetNumberOfWorkers(0), even though the handler is still running.
After bringing the number of workers back to normal, all further messages throw a similar error at the end of the handler.
Any advice on how to correctly pause Rebus?
(I need to pause because my microservice has to periodically update some resources, and handlers can't run during those updates.)
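For reference, the pause/resume pattern being described is just the following (a sketch assuming Rebus's advanced API; UpdateResourcesAsync is hypothetical). The failure above occurs when the SetNumberOfWorkers(0) call overlaps a handler that is still running:

```
// Pause: stop dispatching messages to handlers before the maintenance window.
bus.Advanced.Workers.SetNumberOfWorkers(0);

await UpdateResourcesAsync(); // hypothetical periodic resource update

// Resume: restore the normal worker count afterwards.
bus.Advanced.Workers.SetNumberOfWorkers(5);
```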
I'd like to write a parallel execution module based on Solace, and I use a request-reply scheme for this.
I have:
Multiple message consumers, which publish messages into the same queue.
Multiple message producers, which read the queue and create reply messages.
Message execution time is between 10 seconds to 10 minutes.
Queue access type is non-exclusive (i.e. it does round-robin between all consumers).
Each producer and consumer is asynchronous, i.e. the Solace API blocks execution only during the connection.
What I'd like to have: while a producer works on a message, it should not receive any other messages. This is extremely important, because some tasks block an executor for several minutes, while other executors can be free after a couple of seconds.
The scheme below could work, but it contains blocking code, which I'd like to avoid:
```
while (true)
{
    var inputMessage = flow.ReceiveMsg(/* timeout 1s */ 1_000); // <--- blocking code, I'd like to avoid it
    flow.Ack(inputMessage.ADMessageId);
    var reply = await ProcessMessageAsync(inputMessage); // execute plus handle exceptions
    session.SendReply(inputMessage, reply);
}
```
Messages are only pushed to the consuming applications.
That being said, your desired behavior can be obtained by setting the "max-delivered-unacked-msgs-per-flow" on your queue to 1.
This means that each consumer bound to the queue is only allowed to have 1 outstanding unacknowledged message.
The next message will only be sent to the consumer after it has acknowledged the previous one.
Details about this feature can be found here.
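Since those details aren't reproduced here, a hedged sketch of one way to apply the setting: via Solace's SEMP v2 management API, where the queue attribute is (to my knowledge) named maxDeliveredUnackedMsgsPerFlow. The host, VPN, queue name, and credentials below are placeholders:

```
using var http = new HttpClient();
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
    "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes("admin:admin")));

// PATCH the queue's config so each bound flow may hold only 1 unacked message.
var body = new StringContent("{\"maxDeliveredUnackedMsgsPerFlow\": 1}",
    Encoding.UTF8, "application/json");
var response = await http.PatchAsync(
    "http://broker:8080/SEMP/v2/config/msgVpns/default/queues/my-queue", body);
response.EnsureSuccessStatusCode();
```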
Do note that your code snippet does not appear to be valid: IFlow.ReceiveMsg is only used in transacted sessions, which use ITransactedSession.Commit to acknowledge messages.
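For completeness, a sketch of the event-driven, non-transacted shape of your loop (names and ProcessMessageAsync carried over from the question; ClientAck is assumed so the ack can be deferred until processing is done). With max-delivered-unacked-msgs-per-flow set to 1, the broker withholds the next message until the ack:

```
var flowProperties = new FlowProperties { AckMode = MessageAckMode.ClientAck };

IFlow flow = null;
flow = session.CreateFlow(flowProperties, queue, null,
    async (sender, args) =>
    {
        var inputMessage = args.Message;
        var reply = await ProcessMessageAsync(inputMessage); // execute plus handle exceptions
        session.SendReply(inputMessage, reply);
        flow.Ack(inputMessage.ADMessageId); // ack last; only now is the next message delivered
    },
    (sender, args) => { /* handle flow events */ });
```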
I'm trying to come up with the best way to schedule a message for an Azure Service Bus queue or topic while leaving the option open of immediately sending a message instead of the scheduled one. I want to make sure I can protect myself against creating a duplicate message if I try to send the replacement right at or after the scheduled time of the first message.
What will happen if I try to cancel a scheduled message with CancelScheduledMessageAsync (for both QueueClient and TopicClient classes) after the message has already been enqueued? Will it throw an exception?
Based on your description, I found a blog post (Canceling Scheduled Messages) discussing a similar issue.
Before version 3.3.1, a scheduled message could not be canceled prior to becoming visible; any attempt would result in an InvalidOperationException. Therefore, any message scheduled in the future and no longer needed would be "stuck" on the broker until its scheduled time.
With Microsoft Azure Service Bus >= 3.3.1, QueueClient or TopicClient can be used to schedule a message and cancel it later.
Also, I have tested it on my side via the following code:
```
BrokeredMessage brokerMsg = new BrokeredMessage("Hello World!!!");
long sequenceNumber = await queueClient.ScheduleMessageAsync(brokerMsg, DateTimeOffset.UtcNow.AddSeconds(30));
await Task.Delay(TimeSpan.FromMinutes(1));

// Cancel scheduled message
await queueClient.CancelScheduledMessageAsync(sequenceNumber);
```
I logged into the Azure portal and checked the ACTIVE MESSAGE COUNT and SCHEDULED MESSAGE COUNT. I could cancel the scheduled message before it became active, but if I canceled it via the sequenceNumber after the message had become active, the call threw an exception.
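Given that behavior, a sketch of the guard the question asks about: attempt the cancel first and send the replacement only if the cancel succeeds. The exact exception type may differ; MessagingException is the old client's base exception type, and the SendAsync/BrokeredMessage usage here is illustrative:

```
try
{
    await queueClient.CancelScheduledMessageAsync(sequenceNumber);
    // Cancel succeeded, so the scheduled message will never fire;
    // it is now safe to send the immediate replacement.
    await queueClient.SendAsync(new BrokeredMessage("Replacement"));
}
catch (MessagingException)
{
    // The scheduled message already became active (or is gone), so skip
    // the replacement to avoid producing a duplicate.
}
```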
I understand that RabbitMQ with ack, by default, will re-queue the message if it detects that the consumer/worker has died.
What about the situation where the consumer/worker is still alive but its process has stalled out for too long and didn't ack?
I would like to set an explicit time that says that if a message has been dispatched to a consumer but that consumer has held the message without ack for too long that the message gets re-queued.
I recognize that this might result in messages getting processed in duplicate but sometimes the consequence of that is not as bad as delayed message delivery.
It can also happen with errant exception handling: if something gets swallowed, the task terminates, and the message is never ACKed and never requeued.
A timeout for a RabbitMQ consumer can be set explicitly on the consumer side. I think this is clear, but just to mention: there must not be any automatic ACKs in this case. The solution would be a multithreaded consumer, with one thread doing the message processing and ACKing the message only after it has been processed, and the other being a timeout thread that, once the timeout expires, would do one of the following:
- terminate the connection to the broker, as a consequence of which the message would be requeued (sketched below);
- ACK the received message and re-publish it (explicitly);
- NACK the received message, though based on the documentation (instructing the broker to either discard them or requeue them) it seems that some config should be set instructing the broker what to do with NACKed messages.
Now, all this implies that at least some part of the process isn't stuck. If the whole process is stuck, the broker's heartbeat towards the consumer presumably stops, and that is how the broker knows the consumer died (honestly, I didn't test this situation, so I'm assuming). But if that is not the case (or simply to be extra safe), you could add some kind of watchdog process that pings the consumer(s) and kills them if there's no reply, which again would lead to the messages not being ACKed and being requeued.
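A sketch of the first option (terminate the connection on timeout), assuming the RabbitMQ.Client .NET library; the queue name, 30-second timeout, and Process are placeholders:

```
var connection = new ConnectionFactory { HostName = "localhost" }.CreateConnection();
var channel = connection.CreateModel();
var consumer = new EventingBasicConsumer(channel);

consumer.Received += (sender, ea) =>
{
    // Process on a worker task; the current thread acts as the timeout watchdog.
    var work = Task.Run(() => Process(ea.Body.ToArray()));
    if (work.Wait(TimeSpan.FromSeconds(30)))
    {
        channel.BasicAck(ea.DeliveryTag, multiple: false); // finished in time
    }
    else
    {
        // Timeout: drop the connection; the broker requeues the unacked message.
        connection.Close();
    }
};

channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);
```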