I have this method that works great for getting all the messages from my subscription in Azure Service Bus. But I want only the messages that have been in there for the last 60 minutes. Is there a way to do that?
public void GetMessages()
{
    var connectionString = ConfigurationManager.ConnectionStrings["ServiceBus.EndPoint"].ConnectionString;
    var topic = ConfigurationManager.AppSettings["ServiceBus.Topic"];
    var subscription = ConfigurationManager.AppSettings["ServiceBus.Subscription"];

    var client = SubscriptionClient.CreateFromConnectionString(connectionString, topic, subscription, ReceiveMode.PeekLock);
    client.RetryPolicy = new RetryExponential(TimeSpan.FromSeconds(0.1), TimeSpan.FromSeconds(5), 5);

    var messages = client.PeekBatch(100);
    foreach (var msg in messages)
    {
        string body = msg.GetBody<string>();
    }
}
"I want the messages that have been in there for the last 60 minutes"
The short answer is "why?". You can peek at the messages, but when you try to receive them there's no guarantee you'll get the same messages, or any messages at all.
It's often been said that a happy queue is an empty queue. If a message has been sitting on a queue for 60 minutes, something feels off, almost as if the queue is being used as storage. Either your processors were all offline, in which case once they come back online they should process messages regardless of how long they've been queued, or you're looking for functionality that should probably be implemented via an additional service or in a different way.
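That said, if a point-in-time snapshot is good enough, each peeked BrokeredMessage exposes an EnqueuedTimeUtc property, so the batch from the question's code can be filtered client-side. A minimal sketch, reusing the client from above (remember a peek is only a snapshot, not a stable set):
using System.Linq;

var cutoff = DateTime.UtcNow.AddMinutes(-60);

// Keep only messages enqueued within the last 60 minutes.
var recent = client.PeekBatch(100)
                   .Where(m => m.EnqueuedTimeUtc >= cutoff)
                   .ToList();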
Related
We have a scenario where we pull a message from an Azure Service Bus queue, and if one of the downstream systems is down we would like to delay the message and put it back on the queue. I understand we can do this in multiple ways (set the ScheduledEnqueueTime property or use the schedule API), but either way we have to create a new message and put it back on the queue, which loses the delivery count. It can also leave us with duplicate messages, because sending the clone and completing the original are not an atomic operation and one of them may fail.
https://www.markheath.net/post/defer-processing-azure-service-bus-message
Based on the above article, the only way seems to be to track this with our own custom property. Is that still the only way, given that the article was written in 2016?
Scheduling a new message back does not increase the delivery count. And as you said, sending a message and completing a message are not atomic on their own, but they can be made atomic with the help of transactions, thereby ensuring that all operations belonging to a given group either succeed or fail jointly.
Here's an example:
using System;
using System.Transactions;
using Azure.Messaging.ServiceBus;

ServiceBusClient client = new ServiceBusClient("<connection-string>");
ServiceBusReceiver serviceBusReceiver = client.CreateReceiver("<queue>");
ServiceBusSender serviceBusSender = client.CreateSender("<queue>");

var message = await serviceBusReceiver.ReceiveMessageAsync();

// Your condition to handle the downstream outage
if (true)
{
    using (var ts = new TransactionScope(TransactionScopeAsyncFlowOption.Enabled))
    {
        // Complete the original and schedule the clone as one atomic group.
        await serviceBusReceiver.CompleteMessageAsync(message);

        var newMessage = new ServiceBusMessage(message);
        newMessage.ScheduledEnqueueTime = new DateTimeOffset(DateTime.UtcNow.AddMinutes(1));
        await serviceBusSender.SendMessageAsync(newMessage);

        ts.Complete();
    }
}
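Note that TransactionScopeAsyncFlowOption.Enabled is what lets the ambient transaction flow across the await calls; without it, the complete and the send would run outside the transaction. Here both operations target the same queue over the same ServiceBusClient connection; if they spanned different entities, you would, as far as I know, additionally need the client's cross-entity transactions option.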
What is the proper method of consuming a message, processing it, and then publishing it? I run into a lot of unacknowledged messages and I believe there is some blocking going on. Trying to understand the best practice for something like this.
I'm working on a set of services that will process around 50k requests a day. I have decided to use RabbitMQ and three Windows services written in .NET Core 3.1.
I have diagrammed the process but essentially it works like this:
an external service publishes the message to Queue #1
Service A is "listening" on Queue #1 and consumes any messages that arrive in the queue. A database call is made and then Service A passes the message to Queue #2
Service B is "listening" on Queue #2 and consumes any messages that arrive in the queue. Some internal processing is done and then Service B passes the message to Queue #3
Service C is "listening" on Queue #3 and consumes any messages that arrive in the queue. Some internal processing is done and then Service C pushes the message to the database
A code example is below.
protected override void OnStart(string[] args)
{
    logger.LogInformation("Starting Service ...");
    base.OnStart(args);

    string queue = "Queue_StageOne";

    // Note: the ConnectionFactory must have DispatchConsumersAsync = true
    // for an AsyncEventingBasicConsumer's handlers to be dispatched.
    this.connection = factory.CreateConnection();
    this.channel = connection.CreateModel();
    this.publishingChannel = connection.CreateModel();
    this.channel.BasicQos(0, 1, false);

    consumer = new AsyncEventingBasicConsumer(channel);
    consumer.Received += Consumer_Received;
    this.channel.BasicConsume(queue: queue, autoAck: false, consumer: consumer);
}
private async Task Consumer_Received(object sender, BasicDeliverEventArgs @event)
{
    var body = @event.Body;
    var message = Encoding.UTF8.GetString(body.ToArray());
    var inboundTransferObject = PatientObject.ConvertFromJson(message);
    //logger.LogInformation("Processed message " + inboundTransferObject.WebhookMessageId);
    //ServicePointManager.SecurityProtocol = SecurityProtocolType.SystemDefault;
    //X509Certificate2 cert = new X509Certificate2(config["CertificationPath"].ToString(), config["PFXPassword"]);
    //JToken access_token = GetAccessToken(cert);
    //JObject payerData = GetPractitionerData(inboundTransferObject, cert, access_token);
    //inboundTransferObject = ProcessPractitioner(inboundTransferObject, payerData);
    var outboundTransferObject = Encoding.ASCII.GetBytes(inboundTransferObject.ConvertToJson());

    channel.BasicAck(deliveryTag: @event.DeliveryTag, multiple: false);
    publishingChannel.BasicPublish(exchange: "ExchangeA", routingKey: "Queue_StageTwo", basicProperties: null, body: outboundTransferObject);

    await Task.Delay(250);
}
It's not clear exactly what you're asking here, but one thing that does stand out is that your services should not acknowledge the inbound message unless and until they've completed all their processing steps, and that includes publishing follow-on outbound messages. In your code sample you appear to acknowledge the inbound message before publishing the outbound message.
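As a sketch of that ordering (same handler shape as the question's code; Process is a hypothetical stand-in for the processing steps that are commented out above):
private Task Consumer_Received(object sender, BasicDeliverEventArgs @event)
{
    var message = Encoding.UTF8.GetString(@event.Body.ToArray());
    var outbound = Encoding.UTF8.GetBytes(Process(message)); // Process: hypothetical domain logic

    // Publish the follow-on message first...
    publishingChannel.BasicPublish(exchange: "ExchangeA", routingKey: "Queue_StageTwo", basicProperties: null, body: outbound);

    // ...and only then acknowledge the inbound one, so a crash in between
    // causes a redelivery rather than a lost message.
    channel.BasicAck(deliveryTag: @event.DeliveryTag, multiple: false);

    return Task.CompletedTask;
}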
The ordering issue, however, does not explain the symptom you described: "I run into a lot of unacknowledged messages". When do you run into these? How many is a lot? Have you set a prefetch limit on your channel? For testing purposes, you could try setting your prefetch count to one to ensure that only one message is in flight at a time:
channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: true);
Please see this section of the RabbitMQ documentation:
"Because messages are sent (pushed) to clients asynchronously, there is usually more than one message "in flight" on a channel at any given moment. In addition, manual acknowledgements from clients are also inherently asynchronous in nature. So there's a sliding window of delivery tags that are unacknowledged. Developers would often prefer to cap the size of this window to avoid the unbounded buffer problem on the consumer end. This is done by setting a "prefetch count" value using the basic.qos method. The value defines the max number of unacknowledged deliveries that are permitted on a channel. Once the number reaches the configured count, RabbitMQ will stop delivering more messages on the channel unless at least one of the outstanding ones is acknowledged."
I am creating an application to connect to multiple ActiveMQ servers and get the total number of messages within their various queues.
I am using a slightly modified version of the code found in this link, ActiveMQ with C# and Apache NMS - Count messages in queue,
to count the messages within a queue.
The problem I am having is that if the queue contains more than 400 messages, this code stops counting at 400.
public static int GetMessageCount(string server, string user, string pw) {
    int messageCount = 0;
    var _server = $"activemq:ssl://{server}:61616?transport.acceptInvalidBrokerCert=true";
    IConnectionFactory factory = new NMSConnectionFactory(_server);
    using (IConnection connection = factory.CreateConnection(user, pw)) {
        connection.Start();
        using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge)) {
            IDestination requestDestination = session.GetQueue(QueueRequestUri);
            IQueueBrowser queueBrowser = session.CreateBrowser((IQueue)requestDestination);
            IEnumerator messages = queueBrowser.GetEnumerator();
            while (messages.MoveNext()) {
                IMessage message = (IMessage)messages.Current;
                messageCount++;
            }
            // The using blocks dispose (and close) the session and the
            // connection, so no explicit Close calls are needed here.
        }
    }
    return messageCount;
}
How do I get the actual number of messages in the queue?
Why does this happen?
Is this an issue with the IEnumerator interface, or with the Apache.NMS.ActiveMQ API?
Normally there is no guarantee that a browser will return all messages from the queue. It provides a snapshot of the messages, but may not return all of them. ActiveMQ imposes a limit to reduce overhead; you can raise it (see maxBrowsePageSize), but there is still no guarantee.
maxBrowsePageSize (default: 400) - the maximum number of messages to page in from the store at one time for a browser.
Those APIs are not designed for counting messages, and you shouldn't use them for that. Just process the messages without counting them. If you want metrics, use some kind of admin library instead; JMX (yes, I know you're using C#) can be helpful here as well.
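As one concrete illustration of the admin route (an assumption on my part, not part of Apache.NMS): ActiveMQ's bundled web console exposes JMX attributes such as QueueSize over the Jolokia REST endpoint, typically on port 8161. A hedged sketch, assuming the default broker name "localhost" and console credentials; depending on the broker version, Jolokia may also enforce an Origin header check:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static async Task<long> GetQueueSizeAsync(string host, string queue, string user, string pw)
{
    using (var http = new HttpClient())
    {
        var token = Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{pw}"));
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", token);

        // MBean name assumes the default broker name "localhost"; adjust to match your broker.
        var url = $"http://{host}:8161/api/jolokia/read/" +
                  $"org.apache.activemq:type=Broker,brokerName=localhost," +
                  $"destinationType=Queue,destinationName={queue}/QueueSize";
        var json = await http.GetStringAsync(url);

        // Crude extraction of the "value" field; use a real JSON parser in production.
        var marker = "\"value\":";
        var start = json.IndexOf(marker) + marker.Length;
        var end = json.IndexOfAny(new[] { ',', '}' }, start);
        return long.Parse(json.Substring(start, end - start));
    }
}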
I have a console application to read all the brokered messages present in the subscription on the Azure Service Bus. I have around 3500 messages in there. This is my code to read the messages:
SubscriptionClient client = messagingFactory.CreateSubscriptionClient(topic, subscription);
long count = namespaceManager.GetSubscription(topic, subscription).MessageCountDetails.ActiveMessageCount;
Console.WriteLine("Total messages to process : {0}", count.ToString()); //Here the number is showing correctly
IEnumerable<BrokeredMessage> dlIE = null;
dlIE = client.ReceiveBatch(Convert.ToInt32(count));
When I execute the code, dlIE contains only 256 messages. I have also tried setting the prefetch count via client.PrefetchCount, but it still returns only 256 messages.
I think there is some limit to the number of messages that can be retrieved at a time. However, no such limit is mentioned on the MSDN page for the ReceiveBatch method. What can I do to retrieve all messages at once?
Note:
I only want to read the messages and then leave them on the service bus, so I do not call the Complete method on them.
I cannot remove and re-create the topic/subscription from the Service Bus.
Edit:
I used PeekBatch instead of ReceiveBatch like this:
IEnumerable<BrokeredMessage> dlIE = null;
List<BrokeredMessage> bmList = new List<BrokeredMessage>();
long i = 0;

dlIE = subscriptionClient.PeekBatch(Convert.ToInt32(count)); // count is the total number of messages in the subscription
bmList.AddRange(dlIE);
i = dlIE.Count();

if (i < count)
{
    while (i < count)
    {
        IEnumerable<BrokeredMessage> dlTemp = null;
        dlTemp = subscriptionClient.PeekBatch(i, Convert.ToInt32(count));
        bmList.AddRange(dlTemp);
        i = i + dlTemp.Count();
    }
}
I have 3255 messages in the subscription. The first time PeekBatch is called it gets 250 messages, so the code goes into the while loop with PeekBatch(250, 3255). Only 250 messages are received on each call. The final output list contains 3500 messages, with duplicates. I am not able to understand how this is happening.
I have figured it out. The subscription client remembers the last batch it retrieved and when called again, retrieves the next batch.
So the code would be:
IEnumerable<BrokeredMessage> dlIE = null;
List<BrokeredMessage> bmList = new List<BrokeredMessage>();
long i = 0;

while (i < count)
{
    dlIE = subscriptionClient.PeekBatch(Convert.ToInt32(count));
    bmList.AddRange(dlIE);
    i = i + dlIE.Count();
}
Thanks to MikeWo for guidance
Note: There seems to be some kind of size limit on the number of messages you can peek at a time. I tried with different subscriptions and the number of messages fetched was different for each.
Is the topic you are writing to partitioned, by chance? When you receive messages from a partitioned entity, it will only fetch from one partition at a time. From MSDN:
"When a client wants to receive a message from a partitioned queue, or from a subscription of a partitioned topic, Service Bus queries all fragments for messages, then returns the first message that is returned from any of the messaging stores to the receiver. Service Bus caches the other messages and returns them when it receives additional receive requests. A receiving client is not aware of the partitioning; the client-facing behavior of a partitioned queue or topic (for example, read, complete, defer, deadletter, prefetching) is identical to the behavior of a regular entity."
It's probably not a good idea to assume that, even with a non-partitioned entity, you'd get all messages in one go with either the Receive or Peek methods. It is much more efficient to loop through the messages in smaller batches, especially if your messages are of any significant size or are indeterminate in size.
Since you don't actually want to remove messages from the queue, I'd suggest using PeekBatch instead of ReceiveBatch. This gives you a copy of the message without locking it. I'd highly suggest a loop using the same SubscriptionClient in conjunction with PeekBatch: under the hood, the client keeps track of the last peeked sequence number, so as you loop it advances through the whole queue. This essentially lets you read through the entire subscription.
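If you'd rather not rely on the client's remembered position, you can drive the paging yourself with the PeekBatch overload that takes a starting sequence number. A minimal sketch against the same old-SDK types used above (the batch size of 100 is an arbitrary choice; requires System.Linq):
using System.Linq;

var all = new List<BrokeredMessage>();
long fromSequence = 0;
while (true)
{
    // Peek the next page, starting at the given sequence number.
    var page = subscriptionClient.PeekBatch(fromSequence, 100).ToList();
    if (page.Count == 0)
        break;

    all.AddRange(page);
    fromSequence = page.Last().SequenceNumber + 1;
}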
I came across a similar issue where client.ReceiveBatchAsync(....) would not retrieve any data from the subscription in the Azure Service Bus.
After some digging around, I found that there is a flag on each subscription that enables batched operations, which I set through PowerShell. Below are the commands I used:
$subObject = Get-AzureRmServiceBusSubscription -ResourceGroup '#resourceName' -NamespaceName '#namespaceName' -Topic '#topicName' -SubscriptionName '#subscriptionName'
$subObject.EnableBatchedOperations = $True
Set-AzureRmServiceBusSubscription -ResourceGroup '#resourceName' -NamespaceName '#namespaceName' -Topic '#topicName' -SubscriptionObj $subObject
While this still didn't load all the messages, at least it started to clear the queue. As far as I'm aware, the batch size parameter is only a suggestion to the Service Bus, not a rule.
Hope it helps!
I have a WCF service on a Web Role, and a Worker Role to process the messages the WCF service adds to an Azure queue.
I am doing the following :
var queue = queueStorage.GetQueueReference("myqueue");
var message = new CloudQueueMessage(string.Format("{0},{1}", pWord,processed));
queue.AddMessage(message);
Then I want to wait until the message has been processed, but I don't know whether my queue object will get updated on its own or whether I have to do something to make that happen.
On my worker role I have the following :
This is my OnStart method:
CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
inputQueue = queueClient.GetQueueReference("myqueue");
And then in my Run method:
while (true)
{
    try
    {
        // Retrieve and process a new message from the queue.
        msg = inputQueue.GetMessage();
        if (msg != null)
        {
            result = processMessage(msg);
In my processMessage method:
var messageParts = msg.AsString.Split(new char[] { ',' });
var word = messageParts[0];
var processed = Convert.ToBoolean(messageParts[1]); // second field of the "word,processed" pair

word = "recibido";
processed = true;

addMessageToQueue2(userId, processed);
return 1;
And addMessageToQueue2 is:
var queue = outputQueue.GetQueueReference("myqueue2"); // hypothetical second queue name; Azure queue names must be all lowercase
var message = new CloudQueueMessage(string.Format("{0},{1}", pWord, pProcessed));
queue.AddMessage(message);
I'm fairly new to queues, but I think this should work, so all I need is to wait until the message has been processed. I just don't know how it works internally.
Not quite sure what you mean by waiting until the message has been processed. With Azure queues, the operation is very simple:
Place messages on queue (with message TTL)
Read message(s) from queue (with a processing timeout). This timeout says "I promise to finish dealing with this message, and then delete it, before this timeout is hit."
The queue message stays in the queue but becomes invisible to all other readers during the timeout period.
The message-reader then deletes the message from the queue.
Assuming the code that read the queue message deletes the message before the promised timeout expires, all is good in QueueLand. However, if the processing goes beyond the timeout period, the message becomes visible again. And if someone else then reads that message, the original reader loses the right to delete it (they'll get an exception when they attempt to).
So: Long story short: You can process a message for as long as you want, within the stated timeout period, and then delete it. If your code crashes during processing, the message will eventually reappear for someone else to read. If you want to deal with poison messages, just look at the DequeueCount property of the message to see how many times it's been read (and if over a certain threshold, do something special with the message, like tuck it away in a blob or table row for future inspection by the development team).
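A minimal sketch of that loop with the classic storage SDK's CloudQueue, reusing the question's inputQueue and processMessage, and assuming a 5-minute processing promise and a poison threshold of 5 reads (StoreForInspection is a hypothetical helper):
var msg = inputQueue.GetMessage(visibilityTimeout: TimeSpan.FromMinutes(5)); // invisible to other readers for 5 minutes
if (msg != null)
{
    if (msg.DequeueCount >= 5)
    {
        // Likely a poison message: park it for inspection instead of retrying forever.
        StoreForInspection(msg); // hypothetical helper (e.g. write to a blob or table row)
        inputQueue.DeleteMessage(msg);
    }
    else
    {
        processMessage(msg);           // must finish within the 5-minute window
        inputQueue.DeleteMessage(msg); // delete before the timeout expires
    }
}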
See this article for all documented queue limits (and a detailed side-by-side comparison with Service Bus Queues).