I am trying to receive all messages for a given subscription to a Service Bus topic. For the purposes of this app I do not want them dead-lettered at this time; I just want to view them and leave them on the subscription. Despite instantiating the client as
SubscriptionClient sc = SubscriptionClient.CreateFromConnectionString(connectionString, sub.topicName, sub.subscriptionName, ReceiveMode.PeekLock);
and making sure that I am using message.Abandon() rather than message.Complete(), the message always gets dead-lettered after being accessed. I also have options.AutoComplete set to false.
Full method code below:
public List<ServiceBusMessage> RetrieveSubscriptionMessages(Subscription sub) {
    ServiceBusMessage sbm;
    List<ServiceBusMessage> list = new List<ServiceBusMessage>();
    String connectionString = ConfigurationManager.AppSettings["Microsoft.ServiceBus.ConnectionString"].ToString();
    SubscriptionClient sc = SubscriptionClient.CreateFromConnectionString(connectionString, sub.topicName, sub.subscriptionName, ReceiveMode.PeekLock);
    OnMessageOptions options = new OnMessageOptions();
    options.AutoComplete = false;
    sc.OnMessage((message) => {
        try {
            sbm = new ServiceBusMessage() {
                topicName = sub.topicName,
                messageText = message.GetBody<String>()
            };
            list.Add(sbm);
            message.Abandon();
        }
        catch (Exception) {
            message.Abandon();
            throw;
        }
    }, options);
    return list;
}
Am I missing something? Or is there an issue with auto dead-lettering when using the OnMessage() method?
Thanks!
When a message is abandoned, Service Bus immediately makes it available for redelivery to receivers on that subscription. Each abandon also increments the message's DeliveryCount, and once that exceeds the subscription's MaxDeliveryCount (10 by default), the message is automatically dead-lettered, which is most likely what you are seeing here.
If you are trying to configure a multicast mechanism in which multiple listeners all receive the same message, then understand that all listeners on a given subscription will be competing for the same message. For every listener to receive its own copy of the message, simply create a unique subscription to the topic for each listener, as sketched below.
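A minimal sketch of per-listener subscriptions using NamespaceManager from the same SDK (the listenerId-based subscription name is illustrative):

// Give each listener its own subscription so every listener receives
// its own copy of each message ("listener-" + listenerId is illustrative).
NamespaceManager nsManager = NamespaceManager.CreateFromConnectionString(connectionString);
string subName = "listener-" + listenerId;
if (!nsManager.SubscriptionExists(sub.topicName, subName))
{
    nsManager.CreateSubscription(sub.topicName, subName);
}
SubscriptionClient listenerClient = SubscriptionClient.CreateFromConnectionString(
    connectionString, sub.topicName, subName, ReceiveMode.PeekLock);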
If your intent is to delay re-delivery of the abandoned message, you might look at the SO question: What's the proper way to abandon an Azure SB Message so that it becomes visible again in the future in a way I can control?
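Also, if the goal is only to view messages without consuming them, peeking may be a better fit than receive-and-abandon: Peek does not take a lock or increment DeliveryCount, so nothing gets dead-lettered. A minimal sketch, assuming the same WindowsAzure.ServiceBus SDK as the question:

// Peek browses messages in place: no lock, no DeliveryCount increment,
// so the messages stay on the subscription untouched.
SubscriptionClient sc = SubscriptionClient.CreateFromConnectionString(
    connectionString, sub.topicName, sub.subscriptionName);
foreach (BrokeredMessage message in sc.PeekBatch(100))
{
    list.Add(new ServiceBusMessage
    {
        topicName = sub.topicName,
        messageText = message.GetBody<String>()
    });
}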
I want to use MassTransit to send messages that may have different structures (in terms of message.Data) to different Azure Service Bus queues. As long as router.Name keeps its initial value, it works well. But whenever the destination Uri of EndpointConvention.Map<ManyToOneTransferMessage> changes, MassTransit throws an exception: "The endpoint convention has already been created and can no longer be modified". Is there any way to remap the message type to another destination so that MassTransit can be used with multiple queues?
public class AzureServiceBusManager
{
    string ServiceBusConnectionString = string.Empty;

    public AzureServiceBusManager()
    {
        ServiceBusConnectionString = ConfigurationManager.AppSettings["AppSettings:ServiceBusConnectionString"];
    }

    public async Task SendMessageAsyncN1(TransferMessage transferMessage, Router router)
    {
        var message = new ManyToOneTransferMessage
        {
            BlobFileName = transferMessage.BlobFileName,
            Compressed = transferMessage.Compressed,
            Data = transferMessage.Data,
            MessageId = transferMessage.MessageId,
            TransferId = transferMessage.TransferId,
            TransferType = transferMessage.TransferType
        };

        var queueBusControl = Bus.Factory.CreateUsingAzureServiceBus(
            cfg =>
            {
                cfg.Host(ServiceBusConnectionString);
                EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name));
                cfg.ReceiveEndpoint(router.Name, e =>
                {
                    e.RequiresSession = true;
                    e.MaxConcurrentCalls = 500;
                });
            });

        await queueBusControl.Send(message);
    }
}
So, first of all, do not use EndpointConvention.Map<ManyToOneTransferMessage>(new Uri("queue:" + router.Name)). It isn't useful, and only adds to the confusion.
You can resolve the endpoint from the bus, but you have to realize that creating a bus for each call is a bad idea. It is best to start the bus at startup (you aren't even starting it in the code above), and stop it at application shutdown.
Then, for each call, you can use that bus to resolve the send endpoint and send the message.
var endpoint = await bus.GetSendEndpoint(new Uri("queue:" + router.Name));
await endpoint.Send(message);
Also, you should remove this since it will cause all messages to be moved to the _skipped queue:
cfg.ReceiveEndpoint(router.Name, e =>
{
    e.RequiresSession = true;
    e.MaxConcurrentCalls = 500;
});
You'll likely need to configure the queues separately, in advance, if you require sessions. That said, I don't see you setting a SessionId on the message, so it likely will not work without one anyway.
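Putting it together, a minimal sketch (assuming MassTransit v7-style APIs; AzureServiceBusManager, ManyToOneTransferMessage, and Router are the types from the question):

public class AzureServiceBusManager
{
    readonly IBusControl _bus;

    public AzureServiceBusManager(string connectionString)
    {
        // Create and start one bus at application startup, not per call
        _bus = Bus.Factory.CreateUsingAzureServiceBus(cfg => cfg.Host(connectionString));
        _bus.Start();
    }

    public async Task SendMessageAsync(ManyToOneTransferMessage message, Router router)
    {
        // Resolve the destination per call instead of EndpointConvention.Map
        var endpoint = await _bus.GetSendEndpoint(new Uri("queue:" + router.Name));
        await endpoint.Send(message);
    }

    // Stop the bus at application shutdown
    public Task StopAsync() => _bus.StopAsync();
}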
My client is attempting to send messages to the receiver. However, I have noticed that the receiver sometimes does not receive all of the messages sent by the client (I am not sure whether the problem is in the client or the receiver). Any suggestions on why that might be happening? This is what I am currently doing on the receiver side.
This is the Event Processor
async Task IEventProcessor.ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
foreach (var eventData in messages)
{
var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
}
}
This is how the client connects to the event hub
var StrBuilder = new EventHubsConnectionStringBuilder(eventHubConnectionString)
{
EntityPath = eventHubName,
};
this.eventHubClient = EventHubClient.CreateFromConnectionString(StrBuilder.ToString());
How do I direct my messages to specific consumers
I'm using this sample code from the Event Hubs official doc for sending and receiving. I have 2 consumer groups, $Default and newcg, and 2 clients: client_1 uses the default consumer group ($Default) and client_2 uses the other consumer group (newcg).
First, after creating the send client, in the SendMessagesToEventHub method we need to add a property whose value is the target consumer group name. Sample code below:
private static async Task SendMessagesToEventHub(int numMessagesToSend)
{
    for (var i = 0; i < numMessagesToSend; i++)
    {
        try
        {
            var message = "444 Message";
            Console.WriteLine($"Sending message: {message}");
            EventData mydata = new EventData(Encoding.UTF8.GetBytes(message));
            // Add a property named "cg" whose value is a consumer group name.
            // Receivers filter on this property, so the message is only
            // processed by the client reading via that consumer group.
            mydata.Properties.Add("cg", "newcg");
            await eventHubClient.SendAsync(mydata);
        }
        catch (Exception exception)
        {
            Console.WriteLine($"{DateTime.Now} > Exception: {exception.Message}");
        }
        await Task.Delay(10);
    }
    Console.WriteLine($"{numMessagesToSend} messages sent.");
}
Then in client_1, after creating the receiver project (which uses the default consumer group, $Default), we can filter out the unwanted event data in the SimpleEventProcessor class's ProcessEventsAsync method. Sample code for ProcessEventsAsync:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // Filter the data here; TryGetValue avoids a KeyNotFoundException
        // for messages that carry no "cg" property.
        if (eventData.Properties.TryGetValue("cg", out var cg) && cg.ToString() == "$Default")
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
            Console.WriteLine(context.ConsumerGroupName);
        }
    }
    return context.CheckpointAsync();
}
And in another client, client_2, which uses the other consumer group (newcg), we follow the same steps as client_1, with just a small change in the ProcessEventsAsync method:
public Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
{
    foreach (var eventData in messages)
    {
        // Filter the data here, using the other consumer group name
        if (eventData.Properties.TryGetValue("cg", out var cg) && cg.ToString() == "newcg")
        {
            // other code
        }
    }
    return context.CheckpointAsync();
}
This happens only when there are 2 or more Event Processor Hosts reading from the same consumer group.
If you have an event hub with 32 partitions and 2 Event Processor Hosts reading from the same consumer group, then each host will read from 16 partitions. Similarly, if 4 Event Processor Hosts read in parallel from the same consumer group, each will read from 8 partitions.
Check whether you have 2 or more Event Processor Hosts running on the same consumer group.
I have tested your code, slightly modified (a different overload of the EventProcessorHost constructor, and CheckpointAsync added after consuming the messages), and ran some tests.
Using the default implementation and the default EventProcessorOptions (EventProcessorOptions.DefaultOptions), I did experience some latency when consuming messages, but all messages were processed successfully. So it can sometimes look as though messages from a certain partition are missing, but after a certain period of time all messages arrive.
Here is the modified code that worked for me. It is a simple console app that prints to the console when something arrives.
string processorHostName = Guid.NewGuid().ToString();
var options = new EventProcessorOptions()
{
    MaxBatchSize = 1, // not required to make it work, just for testing
};
options.SetExceptionHandler((ex) =>
{
    System.Diagnostics.Debug.WriteLine($"Exception : {ex}");
});
var eventHubCS = "event hub connection string";
var storageCS = "storage connection string";
var containerName = "test";
var eventHubname = "test2";
EventProcessorHost eventProcessorHost = new EventProcessorHost(eventHubname, "$Default", eventHubCS, storageCS, containerName);
eventProcessorHost.RegisterEventProcessorAsync<MyEventProcessor>(options).Wait();
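The MyEventProcessor registered above is not shown in the original; a minimal sketch of what it could look like (an assumed implementation, including the CheckpointAsync call mentioned earlier):

public class MyEventProcessor : IEventProcessor
{
    public Task OpenAsync(PartitionContext context) => Task.CompletedTask;

    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;

    public Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        foreach (var eventData in messages)
        {
            var data = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);
            Console.WriteLine($"Message received. Partition: '{context.PartitionId}', Data: '{data}'");
        }
        // Checkpoint after consuming the batch, as mentioned above
        await context.CheckpointAsync();
    }
}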
For sending the messages to the event hub and testing I used this message publisher app.
I've got an Azure Function that runs on a Service Bus trigger (queue trigger) and outputs a SendGridMessage. The trick is that I need to do some cleanup in my blob storage after the function has successfully sent the SendGrid message, but it seems I have no way of identifying whether the function succeeded until after it goes out of scope.
I'm currently attempting to push the message that needs to be cleaned up onto a cleanup queue and take care of it after the try/catch, but I think I'm still running into the same problem: the function could succeed and then fail on the SendGrid output, in which case the message would be cleaned up but thrown back into the queue to be reprocessed by this function and fail again. Bleh.
Queue trigger and SendGrid output:
[FunctionName("ProcessEmail")]
public static void Run([ServiceBusTrigger("email-queue-jobs", AccessRights.Manage,
Connection = "MicroServicesServiceBus")]OutgoingEmail outgoingEmail, TraceWriter log,
[ServiceBus("email-queue-cleanup", Connection = "MicroServicesServiceBus",
EntityType = Microsoft.Azure.WebJobs.ServiceBus.EntityType.Queue)] IAsyncCollector<OutgoingEmail> cleanupEmailQueue,
[SendGrid] out SendGridMessage message)
{
try
{
log.Info($"Attempting to send the email {outgoingEmail.Id}");
message = SendgridHelper.ConvertToSendgridMessage(outgoingEmail);
log.Info("Successfully sent email:");
log.Info(JsonConvert.SerializeObject(outgoingEmail));
}
catch (Exception ex)
{
message = null;
throw ex;
}
// Add email to the cleanup queue
log.Info("Sending email to the cleanup queue.");
cleanupEmailQueue.AddAsync(outgoingEmail).Wait();
}
You should be able to achieve this by using ICollector or IAsyncCollector
[SendGrid] ICollector<SendGridMessage> messageCollector)
and then
var message = SendgridHelper.ConvertToSendgridMessage(outgoingEmail);
messageCollector.Add(message);
This should call SendGrid synchronously and throw an exception in case of failure.
If you want to use IAsyncCollector (as you already do for another binding), be sure to call the FlushAsync method too:
[SendGrid] IAsyncCollector<SendGridMessage> messageCollector)
and then
var message = SendgridHelper.ConvertToSendgridMessage(outgoingEmail);
await messageCollector.AddAsync(message);
await messageCollector.FlushAsync();
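Putting it together, a minimal sketch of the reworked function (an assumed rewrite reusing the OutgoingEmail and SendgridHelper types from the question; per the above, ICollector.Add should surface a SendGrid failure before the cleanup message is queued):

[FunctionName("ProcessEmail")]
public static async Task Run(
    [ServiceBusTrigger("email-queue-jobs", AccessRights.Manage,
        Connection = "MicroServicesServiceBus")] OutgoingEmail outgoingEmail,
    TraceWriter log,
    [ServiceBus("email-queue-cleanup", Connection = "MicroServicesServiceBus",
        EntityType = Microsoft.Azure.WebJobs.ServiceBus.EntityType.Queue)] IAsyncCollector<OutgoingEmail> cleanupEmailQueue,
    [SendGrid] ICollector<SendGridMessage> messageCollector)
{
    log.Info($"Attempting to send the email {outgoingEmail.Id}");
    var message = SendgridHelper.ConvertToSendgridMessage(outgoingEmail);
    messageCollector.Add(message); // throws on failure, so we never reach the cleanup step
    log.Info("Successfully sent email; queueing cleanup.");
    await cleanupEmailQueue.AddAsync(outgoingEmail);
}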
I am trying to write an integration/acceptance test for some code in Azure; the code in question ATM simply subscribes to one topic and publishes to another.
I have written the test but it doesn't always pass; it seems as though there could be a race condition. I've tried writing it a couple of ways, including using OnMessage and also using Receive (the example I show here).
When using OnMessage the test seemed to always exit prematurely (around 30 seconds), which I guess perhaps means it's inappropriate for this test anyway.
My query concerning my example specifically: I assumed that once I created the subscription to the target topic, I would be able to pick up any message sent to it using Receive(), no matter when that message arrived. That is, if the message arrives at the target topic before I call Receive(), I would still be able to read it afterward by calling Receive(). Could anyone please shed any light on this?
namespace somenamespace {

    [TestClass]
    public class SampleTopicTest
    {
        private static TopicClient topicClient;
        private static SubscriptionClient subClientKoEligible;
        private static SubscriptionClient subClientKoIneligible;
        private static OnMessageOptions options;
        public const string TEST_MESSAGE_SUB = "TestMessageSub";
        private static NamespaceManager namespaceManager;
        private static string topicFleKoEligible;
        private static string topicFleKoIneligible;
        private BrokeredMessage message;

        [ClassInitialize]
        public static void BeforeClass(TestContext testContext)
        {
            // client for publishing messages
            string connectionString = ConfigurationManager.AppSettings["ServiceBusConnectionString"];
            string topicDataReady = ConfigurationManager.AppSettings["DataReadyTopicName"];
            topicClient = TopicClient.CreateFromConnectionString(connectionString, topicDataReady);
            topicFleKoEligible = ConfigurationManager.AppSettings["KnockOutEligibleTopicName"];
            topicFleKoIneligible = ConfigurationManager.AppSettings["KnockOutIneligibleTopicName"];

            // create test subscriptions to receive messages
            namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);
            if (!namespaceManager.SubscriptionExists(topicFleKoEligible, TEST_MESSAGE_SUB))
            {
                namespaceManager.CreateSubscription(topicFleKoEligible, TEST_MESSAGE_SUB);
            }
            if (!namespaceManager.SubscriptionExists(topicFleKoIneligible, TEST_MESSAGE_SUB))
            {
                namespaceManager.CreateSubscription(topicFleKoIneligible, TEST_MESSAGE_SUB);
            }

            // subscriber clients
            subClientKoEligible = SubscriptionClient.CreateFromConnectionString(connectionString, topicFleKoEligible, TEST_MESSAGE_SUB);
            subClientKoIneligible = SubscriptionClient.CreateFromConnectionString(connectionString, topicFleKoIneligible, TEST_MESSAGE_SUB);
            options = new OnMessageOptions()
            {
                AutoComplete = false,
                AutoRenewTimeout = TimeSpan.FromMinutes(1),
            };
        }

        [TestMethod]
        public void E2EPOCTopicTestLT50()
        {
            Random rnd = new Random();
            string customerId = rnd.Next(1, 49).ToString();
            FurtherLendingCustomer sentCustomer = new FurtherLendingCustomer { CustomerId = customerId };
            BrokeredMessage sentMessage = new BrokeredMessage(sentCustomer.ToJson());
            sentMessage.CorrelationId = Guid.NewGuid().ToString();
            string messageId = sentMessage.MessageId;
            topicClient.Send(sentMessage);

            Boolean messageRead = false;
            // wait for a message to arrive on the ko eligible subscription
            while ((message = subClientKoEligible.Receive(TimeSpan.FromMinutes(2))) != null)
            {
                // read the message body
                string messageString = message.GetBody<String>();
                // deserialize
                FurtherLendingCustomer receivedCustomer = JsonConvert.DeserializeObject<FurtherLendingCustomer>(messageString.Substring(messageString.IndexOf("{")));
                // assertion
                Assert.AreEqual(sentCustomer.CustomerId, receivedCustomer.CustomerId, "verify customer id");
                // pop the message
                message.Complete();
                messageRead = true;
                // leave the loop after processing one message
                break;
            }
            if (!messageRead)
                Assert.Fail("Didn't receive any message after 2 mins");
        }
    }
}
As the official document states about SubscriptionClient.Receive(TimeSpan):
Parameters
serverWaitTime (TimeSpan): The time span the server waits for receiving a message before it times out.
A null can be returned by this API if the operation exceeded the specified timeout, or the operation succeeded but there are no more messages to be received.
Per my test, if a message is sent to the topic and then delivered to your subscription within the specified serverWaitTime, you will receive it regardless of whether the message arrived at the target topic before or after you called Receive.
When using OnMessage the test seemed to always exit prematurely (around 30 seconds), which I guess perhaps means it's inappropriate for this test anyway.
[TestMethod]
public void ReceiveMessages()
{
    subClient.OnMessage(msg =>
    {
        System.Diagnostics.Trace.TraceInformation($"{DateTime.Now}:{msg.GetBody<string>()}");
        msg.Complete();
    });
    Task.Delay(TimeSpan.FromMinutes(5)).Wait();
}
As for SubscriptionClient.OnMessage, I assume it is basically a loop invoking Receive; after calling OnMessage you need to keep the method alive for a while (as the delay above does) before letting it exit. There is a blog about event-driven message programming for Windows Azure Service Bus that you could refer to.
Additionally, I found that your topicClient for sending messages and the subClientKoEligible for receiving are not targeting the same topic path: topicClient sends to DataReadyTopicName, while subClientKoEligible listens on KnockOutEligibleTopicName.
I have a very simple client that I want to be available 24/7 to consume messages. It is running in a Windows process.
I have no issues with the server and receiving messages; it is just the client.
The behavior is as follows:
It works if I start with a fresh connection. After some time, perhaps hours, my client ends up in an odd state: its connection 'holds' unacked messages.
In other words, using the web admin interface, I see that I have a total of, say, 2 unacked messages. Looking at my connections, I see the 2 unacked messages spread out across them.
But there is no processing going on.
And eventually my connections get killed, with no exceptions or log messages being triggered. This puts all the messages back into the ready state.
My first attempt to solve the problem was to add a simple external loop that checked the state of the instance variables of IModel, IChannel, and QueueingBasicConsumer. However, IModel/IChannel's IsOpen always reports true, even after the web admin reports that no connections are active, and QueueingBasicConsumer's IsRunning always reports true as well.
Clearly I need another method to check whether a connection is 'active'.
So, to summarize: things work well initially. Eventually I get into an odd state where my diagnostic checks are meaningless, and messages sent to the server go unacked and are spread across the existing connections. Soon my connections are killed with no debug output or exceptions thrown, and my diagnostic checks still report that everything is kosher.
Any help or best practices would be appreciated. I have read up on heartbeats (sketched below) and on the IsOpen 'race condition', where it is suggested to use BasicQos and check for an exception; however, I first want to understand what is happening.
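For reference, this is the heartbeat configuration I've been reading about (a sketch only; it assumes the older RabbitMQ .NET client, where RequestedHeartbeat is a number of seconds):

// Sketch: enable heartbeats and automatic recovery on the ConnectionFactory
// so dead TCP connections are detected and re-established.
ConnectionFactory cf = new ConnectionFactory();
cf.Uri = serverAddress;
cf.RequestedHeartbeat = 30;          // heartbeat interval in seconds
cf.AutomaticRecoveryEnabled = true;  // recreate the connection after a network failure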
Here is where I kick things off:
private void StartMessageLoop(string uri, string queueName) {
    this.serverUri = uri;
    this.queueName = queueName;
    Connect(uri);
    Task.Factory.StartNew(() => MessageLoopTask(queueName));
}
Here is how I connect:
private void Connect(string serverAddress) {
    ConnectionFactory cf = new ConnectionFactory();
    cf.Uri = serverAddress;
    this.connection = cf.CreateConnection();
    this.connection.ConnectionShutdown += new ConnectionShutdownEventHandler(LogConnClose);
    this.channel = this.connection.CreateModel();
}
Here is where the infinite loop starts:
private void MessageLoopTask(string queueName) {
    consumer = new QueueingBasicConsumer(channel);
    String consumerTag = channel.BasicConsume(queueName, false, consumer);
    while (true) {
        try {
            BasicDeliverEventArgs e = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
            IBasicProperties props = e.BasicProperties;
            byte[] body = e.Body;
            string messageContent = Encoding.UTF8.GetString(body);
            bool result = this.messageProcessor.ProcessMessage(messageContent);
            if (result) {
                channel.BasicAck(e.DeliveryTag, false);
            }
            else {
                channel.BasicNack(e.DeliveryTag, false, true);
                // log
            }
        }
        catch (OperationInterruptedException ex) {
            // log
            break;
        }
        catch (Exception e) {
            // log
            break;
        }
    }
    // log
}
Regards,
Dane