I am creating an application that connects to multiple ActiveMQ servers and gets the total number of messages within their various queues. To count the messages in a queue, I am using a slightly modified version of the code found in this link: ActiveMQ with C# and Apache NMS - Count messages in queue.
The problem I am having is that if a queue contains more than 400 messages, this code stops counting at 400.
public static int GetMessageCount(string server, string user, string pw)
{
    int messageCount = 0;
    var _server = $"activemq:ssl://{server}:61616?transport.acceptInvalidBrokerCert=true";
    IConnectionFactory factory = new NMSConnectionFactory(_server);
    using (IConnection connection = factory.CreateConnection(user, pw))
    {
        connection.Start();
        using (ISession session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge))
        {
            IDestination requestDestination = session.GetQueue(QueueRequestUri);
            IQueueBrowser queueBrowser = session.CreateBrowser((IQueue)requestDestination);
            IEnumerator messages = queueBrowser.GetEnumerator();
            while (messages.MoveNext())
            {
                IMessage message = (IMessage)messages.Current;
                messageCount++;
            }
            session.Close();
            connection.Close();
        }
    }
    return messageCount;
}
How do I get the actual number of messages in the queue?
Why does this happen?
Is this an issue with the IEnumerator interface, or is it an issue with the Apache.NMS.ActiveMQ API?
Normally there is no guarantee that a browser will return all messages from the queue. It provides a snapshot of the messages, but may not return all of them. ActiveMQ imposes a limit to reduce overhead. You can increase the limit (see maxBrowsePageSize), but there is still no guarantee.
maxBrowsePageSize - 400 - The maximum number of messages to page in from the store at one time for a browser.
Those APIs are not designed for counting messages, and you shouldn't use them that way. Just process the messages without counting them. If you want metrics, use some kind of admin library; JMX (yes, I know you use C#) could be helpful as well.
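If counting really is required, one alternative worth knowing about is ActiveMQ's statistics plugin: when the broker's activemq.xml enables statisticsBrokerPlugin, sending an empty message to the queue's statistics destination gets a MapMessage reply whose "size" entry holds the queue depth. A sketch over plain NMS (the 5-second timeout is an arbitrary choice, and this only works if the plugin is enabled on the broker):

```csharp
// Sketch: query the ActiveMQ statistics plugin for a queue's message count.
// Assumes the broker's activemq.xml contains <statisticsBrokerPlugin/>.
public static long GetQueueSize(ISession session, string queueName)
{
    // The plugin listens on "ActiveMQ.Statistics.Destination.<queue name>".
    IDestination statsQueue =
        session.GetQueue("ActiveMQ.Statistics.Destination." + queueName);
    ITemporaryQueue replyTo = session.CreateTemporaryQueue();

    using (IMessageProducer producer = session.CreateProducer(statsQueue))
    {
        IMessage request = session.CreateMessage();
        request.NMSReplyTo = replyTo;   // broker sends the stats here
        producer.Send(request);
    }

    using (IMessageConsumer consumer = session.CreateConsumer(replyTo))
    {
        // The reply is a MapMessage of broker-side statistics for the queue.
        var reply = (IMapMessage)consumer.Receive(TimeSpan.FromSeconds(5));
        return reply.Body.GetLong("size");
    }
}
```

Unlike browsing, the count comes from the broker's own bookkeeping, so it is not capped by maxBrowsePageSize; it is still only a point-in-time snapshot.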
Related
I'm trying to use ActiveMQ with an NMS (C#) consumer to get messages, do some processing, and then send the contents to a web service via HttpClient.PostAsync(), all running within a Windows service (via Topshelf).
The downstream system I'm communicating with is extremely touchy and I'm using individual acknowledgement so that I can check the response and act accordingly by acknowledging or triggering a custom retry (i.e. not session.recover).
Since the downstream system is unreliable, I've been trying a few different ways to reduce the throughput of my consumer. I thought I'd be able to accomplish this by converting the consumer to be synchronous and using prefetch, but that doesn't appear to have worked.
My understanding is that with an async consumer the prefetch 'limit' will never be hit, but with a synchronous consumer the prefetch queue is only eaten away as messages are acknowledged, meaning I could tune my listener to pass messages at a rate the downstream component can handle.
With a queue loaded with 100 messages, if I kick off my code using a listener (i.e. asynchronously), I can successfully log that all 100 messages have been through.
When I change it to use consumer.Receive() (or ReceiveNoWait), I never get a message.
Here is a snippet of what I'm trying for the synchronous consumer, with the async option included but commented out:
public Worker(LogWriter logger, ServiceConfiguration config, IConnectionFactory connectionFactory, IEndpointClient endpointClient)
{
    log = logger;
    configuration = config;
    this.endpointClient = endpointClient;

    connection = connectionFactory.CreateConnection();
    connection.RedeliveryPolicy = GetRedeliveryPolicy();
    connection.ExceptionListener += new ExceptionListener(OnException);
    session = connection.CreateSession(AcknowledgementMode.IndividualAcknowledge);
    queue = session.GetQueue(configuration.JmsConfig.SourceQueueName);
    consumer = session.CreateConsumer(queue);

    // Asynchronous
    //consumer.Listener += new MessageListener(OnMessage);

    // Synchronous
    var message = consumer.Receive(TimeSpan.FromSeconds(5));
    while (true)
    {
        if (!Equals(message, null))
        {
            OnMessage(message);
        }
    }
}

public void OnMessage(IMessage message)
{
    log.DebugFormat("Message {count} Received. Attempt:{attempt}", message.Properties.GetInt("count"), message.Properties.GetInt("NMSXDeliveryCount"));
    message.Acknowledge();
}
I believe you need to call Start() on your connection, e.g.:
connection.Start();
Calling Start() indicates that you want messages to flow.
It's also worth noting that there's no way to break out of your while (true) loop aside from throwing an exception from OnMessage. And since Receive() is called once, before the loop, the loop will keep re-processing that single result (or spin doing nothing) rather than fetching new messages.
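Putting both points together, a minimal sketch of the synchronous path might look like this (the 5-second timeout and the exit-on-null policy are assumptions for illustration, not part of the original code):

```csharp
connection.Start(); // without this, no messages flow to the consumer

// Receive inside the loop so each iteration pulls the next message;
// a null return after the timeout is treated here as "queue drained".
while (true)
{
    IMessage message = consumer.Receive(TimeSpan.FromSeconds(5));
    if (message == null)
    {
        break; // no message within the timeout: stop consuming
    }
    OnMessage(message); // process and individually acknowledge
}
```

Because each Receive() only returns after the previous message was handled (and, with IndividualAcknowledge, acknowledged), this loop naturally throttles consumption to the downstream system's pace.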
I have this method that works great to get all the messages from my subscription in Azure Service Bus. But I want the messages that have been in there for the last 60 minutes. Is there a way to do that?
public void GetMessages()
{
    var connectionString = ConfigurationManager.ConnectionStrings["ServiceBus.EndPoint"].ConnectionString;
    var topic = ConfigurationManager.AppSettings["ServiceBus.Topic"];
    var subscription = ConfigurationManager.AppSettings["ServiceBus.Subscription"];

    var client = SubscriptionClient.CreateFromConnectionString(connectionString, topic, subscription, ReceiveMode.PeekLock);
    client.RetryPolicy = new RetryExponential(TimeSpan.FromSeconds(0.1), TimeSpan.FromSeconds(5), 5);

    var messages = client.PeekBatch(100);
    foreach (var msg in messages)
    {
        string body = msg.GetBody<String>();
    }
}
I want the messages that have been in there for the last 60 minutes
The short answer is: why? You can peek at the messages, but when you then try to receive them, you're not guaranteed to get the same messages, or any messages at all.
It's been said by many, multiple times, that a happy queue is an empty queue. If a message has been sitting in a queue for 60 minutes, then something feels off, almost as if the queue is being used as storage. Either your processors were all offline, and once they come back online they should process regardless of how long the message was queued, or you're looking for functionality that should probably be implemented via an additional service or in a different way.
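That said, if a read-only view is genuinely all that's needed, one could peek non-destructively and filter on each BrokeredMessage's EnqueuedTimeUtc. A sketch against the question's client, with the 60-minute cutoff being the only addition:

```csharp
// Sketch: peek (non-destructive) and keep only messages enqueued
// within the last 60 minutes. "client" is the SubscriptionClient
// from the question's GetMessages() method.
var cutoff = DateTime.UtcNow.AddMinutes(-60);
var recent = new List<BrokeredMessage>();

foreach (var msg in client.PeekBatch(100))
{
    // EnqueuedTimeUtc records when the broker accepted the message.
    if (msg.EnqueuedTimeUtc >= cutoff)
    {
        recent.Add(msg);
    }
}
```

Note the caveat above still applies: this is a snapshot, and peeked messages may already be gone by the time you try to receive them.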
I have a console application to read all the brokered messages present in the subscription on the Azure Service Bus. I have around 3500 messages in there. This is my code to read the messages:
SubscriptionClient client = messagingFactory.CreateSubscriptionClient(topic, subscription);
long count = namespaceManager.GetSubscription(topic, subscription).MessageCountDetails.ActiveMessageCount;
Console.WriteLine("Total messages to process : {0}", count.ToString()); //Here the number is showing correctly
IEnumerable<BrokeredMessage> dlIE = null;
dlIE = client.ReceiveBatch(Convert.ToInt32(count));
When I execute the code, I can see only 256 messages in dlIE. I have also tried setting the prefetch count via client.PrefetchCount, but it still returns only 256 messages.
I think there is some limit to the number of messages that can be retrieved at a time. However, there is no such thing mentioned on the MSDN page for the ReceiveBatch method. What can I do to retrieve all the messages at once?
Note:
I only want to read the messages and then leave them on the Service Bus, so I do not use the message.Complete() method.
I cannot remove and re-create the topic/subscription from the Service Bus.
Edit:
I used PeekBatch instead of ReceiveBatch like this:
IEnumerable<BrokeredMessage> dlIE = null;
List<BrokeredMessage> bmList = new List<BrokeredMessage>();
long i = 0;

dlIE = subsciptionClient.PeekBatch(Convert.ToInt32(count)); // count is the total number of messages in the subscription
bmList.AddRange(dlIE);
i = dlIE.Count();

if (i < count)
{
    while (i < count)
    {
        IEnumerable<BrokeredMessage> dlTemp = null;
        dlTemp = subsciptionClient.PeekBatch(i, Convert.ToInt32(count));
        bmList.AddRange(dlTemp);
        i = i + dlTemp.Count();
    }
}
I have 3255 messages in the subscription. The first time PeekBatch is called it gets 250 messages, so it goes into the while loop with PeekBatch(250, 3225). Each time, only 250 messages are received. The final output list ends up with 3500 messages, including duplicates, and I am not able to understand how this is happening.
I have figured it out. The subscription client remembers the last batch it retrieved and when called again, retrieves the next batch.
So the code would be :
IEnumerable<BrokeredMessage> dlIE = null;
List<BrokeredMessage> bmList = new List<BrokeredMessage>();
long i = 0;

while (i < count)
{
    dlIE = subsciptionClient.PeekBatch(Convert.ToInt32(count));
    bmList.AddRange(dlIE);
    i = i + dlIE.Count();
}
Thanks to MikeWo for the guidance.
Note: There seems to be some kind of size limit on the number of messages you can peek at a time. I tried with different subscriptions, and the number of messages fetched was different for each.
Is the topic you are writing to partitioned by chance? When you receive messages from a partitioned entity it will only fetch from one of the partitions at a time. From MSDN:
"When a client wants to receive a message from a partitioned queue, or from a subscription of a partitioned topic, Service Bus queries all fragments for messages, then returns the first message that is returned from any of the messaging stores to the receiver. Service Bus caches the other messages and returns them when it receives additional receive requests. A receiving client is not aware of the partitioning; the client-facing behavior of a partitioned queue or topic (for example, read, complete, defer, deadletter, prefetching) is identical to the behavior of a regular entity."
It's probably not a good idea to assume that, even with a non-partitioned entity, you'd get all messages in one go with either the Receive or Peek methods. It is much more efficient to loop through the messages in smaller batches, especially if your messages are of any decent size or are indeterminate in size.
Since you don't actually want to remove messages from the queue, I'd suggest using PeekBatch instead of ReceiveBatch, which gives you a copy of the message without locking it. I'd highly suggest a loop using the same SubscriptionClient in conjunction with PeekBatch. Under the hood, the same SubscriptionClient keeps track of the last pulled sequence number, so as you loop it should work its way through the whole queue. This would essentially let you read through the entire queue.
I came across a similar issue where client.ReceiveBatchAsync(...) would not retrieve any data from the subscription in Azure Service Bus.
After some digging around, I found out that there is a flag on each subscription to enable batched operations. This can be enabled through PowerShell; below is the command I used:
$subObject = Get-AzureRmServiceBusSubscription -ResourceGroup '#resourceName' -NamespaceName '#namespaceName' -Topic '#topicName' -SubscriptionName '#subscriptionName'
$subObject.EnableBatchedOperations = $True
Set-AzureRmServiceBusSubscription -ResourceGroup '#resourceName' -NamespaceName '#namespaceName' -Topic '#topicName' -SubscriptionObj $subObject
More details can be found here. While it still didn't load all the messages, at least it started to clear the queue. As far as I'm aware, the batch size parameter is only a suggestion to the Service Bus, not a rule.
Hope it helps!
As everything fails one day or another, are there any recommendations/best practices on how to handle errors when publishing messages to Amazon SQS?
I am running the Amazon .NET SDK and send a couple of thousand SQS messages a day. It hasn't come to my attention that publishing has failed, but that could just mean no problem has surfaced yet.
However, how should I handle an error in the following basic code (pretty much a straightforward usage example from the SDK documentation)?
public static string sendSqs(string data)
{
    IAmazonSQS sqs = AWSClientFactory.CreateAmazonSQSClient(RegionEndpoint.EUWest1);

    SendMessageRequest sendMessageRequest = new SendMessageRequest();
    CreateQueueRequest sqsRequest = new CreateQueueRequest();
    sqsRequest.QueueName = "mySqsQueue";

    CreateQueueResponse createQueueResponse = sqs.CreateQueue(sqsRequest);
    sendMessageRequest.QueueUrl = createQueueResponse.QueueUrl;
    sendMessageRequest.MessageBody = data;

    SendMessageResponse sendMessageresponse = sqs.SendMessage(sendMessageRequest);
    return sendMessageresponse.MessageId;
}
First (kinda unrelated), I would recommend separating the client from the send method:
public class QueueStuff
{
    private static IAmazonSQS SQS;

    // Get only one of these
    public QueueStuff()
    {
        SQS = AWSClientFactory.CreateAmazonSQSClient(RegionEndpoint.EUWest1);
    }

    // ...use SQS elsewhere...
}
Finally, to answer your question: check the Common Errors and SendMessage (in your case) pages and catch the relevant exceptions. What you do will depend on your app and how it should handle losing messages. An example might be:
public static string sendSqs(string data)
{
    SendMessageRequest sendMessageRequest = new SendMessageRequest();
    CreateQueueRequest sqsRequest = new CreateQueueRequest();
    sqsRequest.QueueName = "mySqsQueue";

    CreateQueueResponse createQueueResponse = SQS.CreateQueue(sqsRequest);
    sendMessageRequest.QueueUrl = createQueueResponse.QueueUrl;
    sendMessageRequest.MessageBody = data;

    SendMessageResponse sendMessageresponse = null;
    try
    {
        sendMessageresponse = SQS.SendMessage(sendMessageRequest);
    }
    catch (InvalidMessageContents ex) // Catch or bubble the exception up.
    {
        // I can't do anything about this so toss the message...
        LOGGER.log("Invalid data in request: " + data, ex);
        return null;
    }
    catch (Throttling ex) // I can do something about this!
    {
        // Exponential backoff...
    }
    return sendMessageresponse.MessageId;
}
Exceptions like Throttling or ServiceUnavailable are commonly overlooked but can be handled properly. It's commonly recommended that for these you implement an exponential backoff: when you're throttled, you back off until the service is available again. An example of implementation and usage in Java: https://gist.github.com/alph486/f123ea139e6ea56e696f .
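The backoff calculation itself is easy to sketch in C# as well (the base delay, doubling factor, and cap below are illustrative assumptions, not SDK values):

```csharp
using System;

public static class Backoff
{
    // Delay before the nth retry: baseDelay * 2^attempt, capped at maxDelay.
    public static TimeSpan Delay(int attempt,
                                 int baseDelayMs = 100,
                                 int maxDelayMs = 30000)
    {
        double delayMs = baseDelayMs * Math.Pow(2, attempt);
        return TimeSpan.FromMilliseconds(Math.Min(delayMs, maxDelayMs));
    }
}
```

A caller would `Thread.Sleep(Backoff.Delay(attempt))` (or `await Task.Delay(...)`) in the Throttling catch block, incrementing attempt on each throttled response and resetting it on success. Adding random jitter to each delay is also common, to avoid many throttled clients retrying in lockstep.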
You shouldn't need to do much of your own error handling at all; the AWS SDK for .NET handles retries for transient failures under the hood.
It will automatically retry any request that fails if:
your access to the AWS service is being throttled
the request times out
the HTTP connection fails
It uses an exponential backoff strategy for multiple retries. On the first failure, it sleeps for 400 ms, then tries again. If that attempt fails, it sleeps for 1600 ms before trying again. If that fails, it sleeps for 6400 ms, and so on, to a maximum of 30 seconds.
When the configured maximum number of retries is reached, the SDK will throw. You can configure the maximum number of retries like this:
var sqsClient = AWSClientFactory.CreateAmazonSQSClient(
    new AmazonSQSConfig
    {
        MaxErrorRetry = 4 // the default is 4
    });
If the API call ends up throwing, it means that something is really wrong, like SQS has gone down in your region, or your request is invalid.
Source: The AWS SDK for .NET Source Code on GitHub.
Within ActiveMQ, I've been told that the most optimal solution for increased throughput is to have multiple connections, each with its own session and consumer.
I've been trying to achieve this with NMS (connecting via C#), but in the "Active Consumers" screen of the MQ web console I'm seeing all my connections and consumers listed as I'd expect, yet in the next column they all have a session ID of "1". I would have expected a separate session ID for each.
Is this right? And if there should be different session IDs for each connection/consumer, how would I go about ensuring these extra sessions are created?
Here's some example code I'm using to start a new connection (this is based on Remark's ActiveMQ transactional messaging introduction code):
public QueueConnection(IConnectionFactory connectionFactory, string queueName, AcknowledgementMode acknowledgementMode)
{
    this.connection = connectionFactory.CreateConnection();
    this.connection.Start();
    this.session = this.connection.CreateSession(acknowledgementMode);
    this.queue = new ActiveMQQueue(queueName);
}
... and this is being done each time for each of the connections I'm opening.
It looks like this piece of code was the root cause of my problem:
public SimpleQueueListener CreateSimpleQueueListener(IMessageProcessor processor)
{
    IMessageConsumer consumer = this.session.CreateConsumer(this.queue, "2 > 1");
    return new SimpleQueueListener(consumer, processor, this.session);
}
It was using a common session (this.session) for all the consumers. By creating a new session each time (and holding it in a collection, or by other means), I achieved the goal of operating each listener on its own session within the same connection. E.g.:
public SimpleQueueListener CreateSimpleQueueListener(IMessageProcessor processor)
{
    var listenerSession = this.connection.CreateSession();
    IMessageConsumer consumer = listenerSession.CreateConsumer(this.queue, "2 > 1");
    return new SimpleQueueListener(consumer, processor, listenerSession);
}