My Azure Service Bus has only one topic and there is only one publisher. The publisher sends messages to the topic with this code:
public void Publish<T>(T messageObject)
{
    var jsonString = JsonSerializer.Serialize(messageObject);
    var message = new ServiceBusMessage(jsonString);
    message.ApplicationProperties["messageType"] = typeof(T).Name;
    serviceBusSender.SendMessageAsync(message);
}
In my application code, I call this method consecutively to send message1, message2, and message3, respectively. However, when I go to Azure and receive the messages in Service Bus Explorer, I see that their order is not necessarily the same.
Is this behavior expected? Or am I missing something here?
If you have created a non-partitioned entity, then you can enable the Support ordering feature as documented here.
The Support ordering feature allows you to specify whether messages
that are sent to a topic will be forwarded to the subscription in the
same order in which they were sent. This feature doesn't support
partitioned topics. For more information, see
TopicProperties.SupportOrdering in .NET or
TopicProperties.setOrderingSupported in Java.
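For a non-partitioned topic, the flag can be set through the administration client. A minimal sketch, assuming Azure.Messaging.ServiceBus.Administration, a connection string with Manage rights, and a placeholder topic name "mytopic":
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient(connectionString);

bool exists = await adminClient.TopicExistsAsync("mytopic");
if (!exists)
{
    // Enable ordering at creation time (topics are non-partitioned by default).
    await adminClient.CreateTopicAsync(new CreateTopicOptions("mytopic") { SupportOrdering = true });
}
else
{
    // Or flip the flag on an existing non-partitioned topic.
    TopicProperties topic = await adminClient.GetTopicAsync("mytopic");
    topic.SupportOrdering = true;
    await adminClient.UpdateTopicAsync(topic);
}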
In the case of a partitioned entity, you can leverage sessions while sending the messages. A session will give you related messages in the exact arrival order if you process them in sequence, meaning using one thread and without prefetching. When the consumer fails to process a message and abandons it, that message will again be the first to be delivered, until it exceeds its delivery count.
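As a rough sketch of that approach with Azure.Messaging.ServiceBus (it assumes the subscription has "Requires session" enabled, and the serviceBusSender/serviceBusClient fields and the topic and subscription names are placeholders), stamp related messages with the same SessionId when publishing and then receive them sequentially from a session receiver; note that the send is awaited so messages leave in order:
// Sender side: messages whose relative order matters share a SessionId.
public async Task PublishAsync<T>(T messageObject, string sessionId)
{
    var message = new ServiceBusMessage(JsonSerializer.Serialize(messageObject))
    {
        SessionId = sessionId // also used as the partition key on partitioned entities
    };
    message.ApplicationProperties["messageType"] = typeof(T).Name;
    await serviceBusSender.SendMessageAsync(message); // awaited, so sends complete in order
}

// Receiver side: process one message at a time, without prefetching.
ServiceBusSessionReceiver receiver =
    await serviceBusClient.AcceptNextSessionAsync("mytopic", "mysubscription");

ServiceBusReceivedMessage received;
while ((received = await receiver.ReceiveMessageAsync(TimeSpan.FromSeconds(5))) != null)
{
    // Handle the message, then complete it before asking for the next one.
    await receiver.CompleteMessageAsync(received);
}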
I am trying to Peek a message from Queue1 and send that message to Queue2. Both the queues are Partition-enabled.
I am able to peek a message successfully, but when I try to send it, I get the below exception:
Batching brokered messages with distinct SessionId or PartitionKey is not supported for an entity with partitioning enabled.LogicalPartitionId:45::PartitionKey:5::SessionId:5::MessageId:c2d57b0a-fff8-40bc-a835-d335eec0eade::ViaPartitionKey:::
The PartitionKey and SessionId are the same, which is 5 in my case, so there's no difference. The queues do not have DuplicateDetectionEnabled. They are just simple partitioned queues.
Also, I am just sending a single message, hence no batching is involved (the exception, however, talks about batching). Where am I going wrong?
Note: I am using Service Bus SDK 7.4.0.
var receiverClient = serviceBusSourceClient.CreateReceiver("queue1");
var message = await receiverClient.PeekMessageAsync();
var senderClient = serviceBusDestinationClient.CreateSender("queue2");
await senderClient.SendMessageAsync(new ServiceBusMessage(message));
As suggested by Josh Love, you can try enabling logging to find the cause of the error "Batching brokered messages with distinct SessionId or PartitionKey is not supported for an entity with partitioning enabled":
// Requires the Azure.Core.Diagnostics namespace.
using Azure.Core.Diagnostics;

// Set up a listener to monitor logged events.
using AzureEventSourceListener listener = AzureEventSourceListener.CreateConsoleLogger();
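If the logs don't surface anything obvious, one thing worth trying (a hedged workaround, not a confirmed fix) is to build the outgoing message from the peeked message's body instead of using the copy constructor, which also copies the PartitionKey and SessionId from the source message:
// Carry over only the payload and user properties; the PartitionKey/SessionId
// stamped on the queue1 message are intentionally not copied to the queue2 message.
var outgoing = new ServiceBusMessage(message.Body)
{
    ContentType = message.ContentType,
    MessageId = message.MessageId
};
foreach (var property in message.ApplicationProperties)
{
    outgoing.ApplicationProperties[property.Key] = property.Value;
}
await senderClient.SendMessageAsync(outgoing);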
RabbitMQ 3.8.5, C# RabbitMQ.Client v6.1.0, .NET Core 3.1
I feel that I'm misunderstanding something with RabbitMq so I'm looking for clarification:
If I have a client sending a message to an exchange, and there's no consumer on the other side, what is meant to happen?
I had thought that it should sit in a queue until it's picked up, but the issue I've got is that right now there is no queue on the other end of the exchange (which may well be my issue).
This is my declaration code:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = ret._channel.QueueDeclare().QueueName;
channel.ConfirmSelect();
and this is my publisher:
channel.BasicPublish(exchangeName, routingKeyOrTopicName, messageProperties, message);
However, doing that gives me one queue name for the outbound exchange and another for the inbound consumer.
Would someone help this poor idiot out in understanding how this is meant to work? What is the expected behavior if there's no consumer at the other end? I do have an RPC mechanism that does work, but wasn't sure if that's the right way to handle this, or not.
Everything works fine if I have my consumer running first; however, if I fire up my consumer after the client, then the messages are lost.
Edit
To further clarify, I've set up a simple RPC type test; I've two Direct Exchanges on the client side, one for the outbound Exchange, and another for the inbound RPC consumer.
Both those have their own queue.
Exchange queue name = amq.gen-fp-J9-TQxOJ7NpePEnIcGQ
Consumer queue name = amq.gen-wDFEJ269QcMsHMbAz-t3uw
When the Consumer app fires up, it declares its own Direct exchange and its own queue.
Consumer queue name = amq.gen-o-1O2uSczjXQDihTbkgeqA
If I do it that way though, the message gets lost.
If I fire up the consumer first then I still get three queues in total, but the messages are handled correctly.
This is the code I use to send my RPC message:
messageProperties.ReplyTo = _rpcResponder._routingKeyOrTopicName;
messageProperties.Type = "rpc";
messageProperties.Priority = priority;
messageProperties.Persistent = persistent;
messageProperties.Headers = headers;
messageProperties.Expiration = "3600000";
Looking at the management GUI, I see that all three queues end up being marked as Exclusive, but I'm not declaring them as such. In fact, I'm not creating any queues myself, rather letting the Client library handle that for me, for example, this is how I define my Consumer:
channel.ExchangeDeclare(name, exchangeType, durable, autoDelete);
var queueName = ret._channel.QueueDeclare().QueueName;
Console.WriteLine($"Consumer queue name = {queueName}");
channel.QueueBind(ret.QueueName, name, routingKeyOrTopicName, new Dictionary<string, object>());
In RabbitMQ, messages stay in queues, but they are published to exchanges. The way to link an exchange to a queue is through bindings (there are some default bindings).
If there are no queues, or the exchange's routing doesn't find any queue for the message, the message is lost.
Once a message is in a queue, the message is sent to one of that queue's consumers.
Maybe you're using exclusive queues? These queues get deleted when their declaring connection is gone.
Found the issue: I was allowing the library to generate the queue names rather than using specific ones. This meant that RabbitMQ was always having to deal with a shifting target each time.
If I use 'well defined' queue names AND the consumer has fired up at least once to define the queue on RabbitMQ, then I do see the message being dropped into the queue and staying there, even though the consumer isn't running.
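For reference, this is roughly what that looks like with RabbitMQ.Client 6.x: declare a durable, non-exclusive queue with a fixed name, bind it to the exchange, and anything published while no consumer is running simply waits in that queue (the exchange, queue, and routing key names below are placeholders):
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Durable exchange plus a durable, non-exclusive, named queue bound to it.
channel.ExchangeDeclare("my-exchange", ExchangeType.Direct, durable: true, autoDelete: false);
channel.QueueDeclare("my-queue", durable: true, exclusive: false, autoDelete: false, arguments: null);
channel.QueueBind("my-queue", "my-exchange", "my-routing-key");

// Messages published now rest in "my-queue" until a consumer reads them.
var body = Encoding.UTF8.GetBytes("hello");
channel.BasicPublish("my-exchange", "my-routing-key", basicProperties: null, body: body);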
I need to create a queue from which multiple unknown subscribers can get messages.
Each subscriber should only receive each message once and will mark the message complete/abandon but only for themselves. The message would remain on the queue for other subscribers.
Reading the documentation suggests that I need to create a topic and then multiple subscriptions. However, for architectural reasons, I can't specify in advance what the subscribers are going to be. I want it to be possible for new subscribers to start consuming the messages without having to change my queue config.
Can Azure Service Bus handle this scenario? Also, some of the subscribers will be using the REST client and not the .NET client.
Thanks
Not necessarily a complete answer to this, but yes it’s possible to make Service Bus work in this way providing you stay within the Azure service quota limits.
A client can create its own subscription
string connectionString = CloudConfigurationManager.GetSetting("ServiceBus.ConnectionString");

// Note issue of how you secure this if necessary
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

if (!namespaceManager.SubscriptionExists(TOPIC_NAME, SUBSCRIPTION_NAME))
{
    //var messagesFilter = new SqlFilter("you can add a client filter for the subscription");

    SubscriptionDescription sd = new SubscriptionDescription(TOPIC_NAME, SUBSCRIPTION_NAME)
    {
        // configure settings or accept the defaults
        DefaultMessageTimeToLive = TimeSpan.FromDays(14),
        EnableDeadLetteringOnMessageExpiration = true,
        MaxDeliveryCount = 1000,
        LockDuration = TimeSpan.FromMinutes(3)
    };

    namespaceManager.CreateSubscription(sd);
    // or namespaceManager.CreateSubscription(sd, messagesFilter);
}

// subscription client for the new topic
_subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, TOPIC_NAME, SUBSCRIPTION_NAME, ReceiveMode.PeekLock);
There is an equivalent in the REST API:
https://msdn.microsoft.com/en-us/library/azure/hh780748.aspx
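As a hedged illustration of that REST call without the SDK: a subscription is created with an HTTP PUT to the subscription URI, authorized with a SAS token and carrying an Atom entry with a SubscriptionDescription. The namespace, topic, subscription, and key names below are placeholders, and the exact payload shape should be checked against the linked reference:
using System;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;

// Standard Service Bus SAS token: HMAC-SHA256 over "<encoded uri>\n<expiry>".
static string CreateSasToken(string resourceUri, string keyName, string key)
{
    long expiry = DateTimeOffset.UtcNow.AddMinutes(20).ToUnixTimeSeconds();
    string stringToSign = Uri.EscapeDataString(resourceUri) + "\n" + expiry;
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
    string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));
    return $"SharedAccessSignature sr={Uri.EscapeDataString(resourceUri)}" +
           $"&sig={Uri.EscapeDataString(signature)}&se={expiry}&skn={keyName}";
}

// PUT https://{namespace}.servicebus.windows.net/{topic}/subscriptions/{subscription}
var subscriptionUri = "https://mynamespace.servicebus.windows.net/mytopic/subscriptions/mysubscription";
var atomBody = @"<entry xmlns=""http://www.w3.org/2005/Atom"">
  <content type=""application/xml"">
    <SubscriptionDescription xmlns=""http://schemas.microsoft.com/netservices/2010/10/servicebus/connect"" />
  </content>
</entry>";

using var http = new HttpClient();
var request = new HttpRequestMessage(HttpMethod.Put, subscriptionUri);
request.Headers.TryAddWithoutValidation(
    "Authorization", CreateSasToken(subscriptionUri, "RootManageSharedAccessKey", "<shared access key>"));
request.Content = new StringContent(atomBody, Encoding.UTF8, "application/atom+xml");
HttpResponseMessage response = await http.SendAsync(request); // expect 201 Created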
Once the subscription is created the client will be able to receive its own copy of any messages sent to the topic from that point on.
You don't say how many clients there will be, but you will need to stay within the Service Bus service limitations:
https://azure.microsoft.com/en-us/documentation/articles/service-bus-quotas/
However, you don't include any information about your application or the nature of the clients, and there could be many reasons that this is not an advisable solution, including:
Clients will need knowledge of the subscription security keys.
Uncoordinated clients will have to create unique subscription names.
The clients can delete subscriptions when they are finished with them but are you able to ensure that occurs?
Depending on configuration a number of inactive clients could cause your topic to reach its quota limit and stop accepting new messages.
… and probably a lot more
If the clients are not under your control I would say this is definitely not the right solution.
I have a JavaScript logging utility that sends requests in bulk to my server, which then relays them to a Queue Client (Microsoft.ServiceBus.Messaging.QueueClient). I want to send them in a batch asynchronously to Service Bus and still have them processed in the order they are placed into the batch I am sending. The documentation for SendBatchAsync shows that the method is for "batch" processing. This makes me think I can send it a batch of requests and have them processed as a single unit (i.e., sequentially). However, it appears that the messages are getting processed out of order. I'm using OnMessage to receive the messages; I'm not sure if this is a limitation or what I'm missing.
I get that async doesn't guarantee order vs. other async requests, but this is a single request. I don't want to have to wait for a response before responding to the JavaScript client, as I'm just trying to send them off, but I still need to ensure they stay in order since they are sequential events.
Here is how I send them to the queue:
MyQueueClient.SendBatchAsync(MyListOfBrokerMessages);
Then I process them:
ServiceBus.TrackerClient.OnMessage((m) =>
{
    try
    {
        ProcessMessage(m);
    }
    catch (Exception ex)
    {
        // log the failure and decide whether to abandon or dead-letter the message
    }
});
I don't get the point of the batch processing if it doesn't process as a batch, other than maybe making a single request. There must be some way to send a batch and have it processed in order?
EDIT:
I've tried using Send instead of SendBatchAsync and I've set MaxConcurrentCalls to 1 and yet the messages are still not in order.
Taken from MSDN:
SessionId: If a message has the Microsoft.ServiceBus.Messaging.BrokeredMessage.SessionId property set, then Service Bus uses the SessionId property as the partition key. This way, all messages that belong to the same session are handled by the same message broker. This enables Service Bus to guarantee message ordering as well as the consistency of session states.
For a coding sample employing SessionId and AcceptSessionReceiver, see the MSDN samples.
What you can do is use sessions here (a rough sketch follows the steps):
1) Set the same SessionId on all the messages in the batch.
2) On the receiving side, AcceptMessageSession() will give you a session.
3) Call receive on the session (ReceiveBatch). This session will give you all the messages in that batch alone.
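A minimal sketch of that flow with the Microsoft.ServiceBus.Messaging SDK the question is using; it assumes the queue was created with "Requires session" enabled, and reuses the question's MyQueueClient and MyListOfBrokerMessages names:
using System;
using Microsoft.ServiceBus.Messaging;

// Sending side: every message in the batch carries the same SessionId.
var batchId = Guid.NewGuid().ToString();
foreach (BrokeredMessage brokeredMessage in MyListOfBrokerMessages)
{
    brokeredMessage.SessionId = batchId;
}
await MyQueueClient.SendBatchAsync(MyListOfBrokerMessages);

// Receiving side: accept that session and drain it sequentially.
MessageSession session = MyQueueClient.AcceptMessageSession(batchId);
foreach (BrokeredMessage message in session.ReceiveBatch(100))
{
    ProcessMessage(message); // messages come back in the order they were sent
    message.Complete();
}
session.Close();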
Feature Description
The NServiceBus gateway, http://docs.particular.net/nservicebus/gateway/, seems to be a way to achieve an internal webhook using the NServiceBus infrastructure.
We need to go further with this concept to open up a few events to any 3rd-party subscriber that has access to register a webhook URL in our system.
Review
We plan to create two initial Windows services:
1) WebHookBatchService, which can be added as a subscriber to specific messages of interest.
<UnicastBusConfig>
  <MessageEndpointMappings>
    .......
    <add Messages="MyMessages.MyImportantMessage, MyMessages" Endpoint="WebHookBatchService.Queue"/>
    .......
  </MessageEndpointMappings>
</UnicastBusConfig>
2) WebHookProcessService - actually processes 1 message sent by the WebHookBatchService.
Once messages are received on WebHookBatchService.Queue, our WebHookBatchService will look up all the subscribers for the specific tenant + message type and, for each one, send an individual message to WebHookProcessService.Queue. The WebHookProcessService (which we can front with an instance of the NServiceBus load balancer to bridge the batching and the actual processing) then processes the real messages, probably using http://restsharp.org/. A rough sketch of that fan-out handler is below.
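A hypothetical sketch of the fan-out handler, written against the NServiceBus 3/4-era handler API the config above implies; the subscriber lookup, the DeliverWebHook command (DeliveryId, Url, Payload), and the TenantId property are made-up names:
using System;
using System.Collections.Generic;
using NServiceBus;

public class MyImportantMessageHandler : IHandleMessages<MyImportantMessage>
{
    public IBus Bus { get; set; } // injected by NServiceBus

    public void Handle(MyImportantMessage message)
    {
        // One outgoing command per registered webhook, so each delivery is retried independently.
        foreach (string url in GetSubscriberUrls(message.TenantId))
        {
            Bus.Send("WebHookProcessService.Queue", new DeliverWebHook
            {
                DeliveryId = Guid.NewGuid(),
                Url = url,
                Payload = Serialize(message) // hypothetical serialization of the event body
            });
        }
    }

    // Placeholder: look up registered webhook URLs for this tenant + message type from your own store.
    IEnumerable<string> GetSubscriberUrls(Guid tenantId) { yield break; }

    // Placeholder: serialize the event body (e.g. JSON or XML).
    string Serialize(MyImportantMessage message) { return string.Empty; }
}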
Questions
Are there any existing open source projects that do this today?
Now, since we have no control over the durability of the subscribers, how should we manage errors?
http://wiki.shopify.com/WebHook
A webhook will be deleted if there are 19 consecutive failures for the exact same webhook.
It doesn't mention any delay between retries. What have people experienced as a standard delay in retry logic?
Here are some other thoughts:
Proposal 0: MaxRetries="1". Purge WebHookProcessService.ErrorQueue nightly. (No retry - guaranteed message loss if it fails the first time.)
Proposal 1:
MaxRetries="1"; on exception, catch it and send an email containing an XML version of the message that would have been delivered over HTTP.
Purge WebHookProcessService.ErrorQueue nightly.
-- I see potential spam issues.
Proposal 2: The NServiceBus MaxRetries retries right away without delay, so I would need to create (1hr - 24hr) bucket queues and use a RetrySchedulerService, although I see this as difficult to maintain and confusing for subscribers when they get 25 messages all at once, not ordered by DateCreated, once their service endpoint begins to work again.
Digging for ideas...
The Gateway is typically used for communication between physical sites over HTTP. Since you are exposing an endpoint to the world to accept callbacks, I'm thinking you could just use the built-in WCF hosting and expose your endpoint through the firewall to 3rd parties. The rest of your setup sounds appropriate to me.
As for errors, you are correct, NSB retries immediately, but if you are using web callbacks this may get you by in the cases where there are only small hiccups. You will need to determine how you want to process the error queues; we just built a new endpoint to process the error queues with logic to determine the retries, delay, etc. A nice way to accomplish this is to use a Saga, which includes a Timeout manager. This enables a workflow where you can retry a specified number of times, try another communication channel, log everything, and ultimately notify someone who can contact the 3rd party to let them know their stuff is busted. A rough saga sketch is below.
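To make the saga idea concrete, here is a rough sketch against the NServiceBus 4/5-era saga API; the message types, property names, retry count, and delay are all hypothetical, and the actual HTTP POST is left as a stub:
using System;
using NServiceBus;
using NServiceBus.Saga; // Saga<T>, ContainSagaData, SagaPropertyMapper live here in NSB 4/5

public class DeliverWebHook : ICommand
{
    public Guid DeliveryId { get; set; }
    public string Url { get; set; }
    public string Payload { get; set; }
}

public class RetryWebHookDelivery { } // timeout message

public class WebHookDeliverySagaData : ContainSagaData
{
    public Guid DeliveryId { get; set; }
    public string Url { get; set; }
    public string Payload { get; set; }
    public int Attempts { get; set; }
}

public class WebHookDeliverySaga : Saga<WebHookDeliverySagaData>,
    IAmStartedByMessages<DeliverWebHook>,
    IHandleTimeouts<RetryWebHookDelivery>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<WebHookDeliverySagaData> mapper)
    {
        mapper.ConfigureMapping<DeliverWebHook>(m => m.DeliveryId).ToSaga(s => s.DeliveryId);
    }

    public void Handle(DeliverWebHook message)
    {
        Data.DeliveryId = message.DeliveryId;
        Data.Url = message.Url;
        Data.Payload = message.Payload;
        TryDeliver();
    }

    public void Timeout(RetryWebHookDelivery state)
    {
        TryDeliver();
    }

    void TryDeliver()
    {
        Data.Attempts++;
        if (PostToSubscriber(Data.Url, Data.Payload)) // stubbed HTTP POST (RestSharp, HttpClient, ...)
        {
            MarkAsComplete();
            return;
        }
        if (Data.Attempts >= 19)
        {
            // Give up after a Shopify-style threshold; notify an operator about the broken endpoint here.
            MarkAsComplete();
            return;
        }
        RequestTimeout<RetryWebHookDelivery>(TimeSpan.FromHours(1)); // back off before the next attempt
    }

    bool PostToSubscriber(string url, string payload) => false; // placeholder for the real HTTP call
}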