Azure Service Bus - How to Add Topic Subscriber Programmatically - c#

I'm trying to implement a basic pub/sub system with dynamic subscribers. I need to register a topic subscription programmatically from my .NET APIs, but it seems like I can only do that manually from the Azure Portal. When my program starts, I want to register a subscription on a topic with a name in the format subscribername-{timestamp}, because I want to be able to deploy as many staging/dev versions as I want without having to manually create these subscriptions each time.
I feel like this is a fundamental feature that I'm just blindly missing. I can do this when working with queues, but if I try to do the same with a topic, I get continuous errors saying the subscription path cannot be found. I have searched the internet to no end, and while I have found SOME solutions, they are very old, often incompatible with .NET 5, or rely on deprecated packages. I feel like I'm going against the grain and missing something with what I'm coming up with, so I'd like some input on what the proper practice is for this.
I'm using Azure.Messaging.ServiceBus for publishing and subscribing currently. Below is some code -
var processor = ServiceBusClient.CreateProcessor(TopicName, $"DynamicSubscriber-{DateTime.Now}");

try
{
    processor.ProcessErrorAsync += ErrorHandler;
    processor.ProcessMessageAsync += MessageHandler;

    await processor.StartProcessingAsync();
}
catch (Exception e)
{
    await processor.DisposeAsync();
    await ServiceBusClient.DisposeAsync();
}
finally
{
    Console.WriteLine("Press a key to exit.");
    Console.ReadLine();
}

Thank you #PeterBons! Yes, ServiceBusAdministrationClient is the client class to use when creating, updating, fetching, or deleting Service Bus entities.
Also, this article gives a few error details for the queue approach with ServiceBusAdministrationClient, as does this SO thread.
The ServiceBusTopicSubscription class is used to set up the Azure Service Bus subscription. The class uses the ServiceBusClient to set up the message handler, while the ServiceBusAdministrationClient is used to implement filters and add or remove these rules. The Azure.Messaging.ServiceBus NuGet package is used to connect to the subscription.
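For reference, a minimal sketch of creating the subscription at startup if it doesn't already exist, then processing from it (the connectionString variable, the Unix-timestamp naming and the AutoDeleteOnIdle value are illustrative assumptions, not taken from the original code):

using Azure.Messaging.ServiceBus;
using Azure.Messaging.ServiceBus.Administration;

var adminClient = new ServiceBusAdministrationClient(connectionString);

// Subscription names disallow some characters (e.g. ':'), so a Unix timestamp is
// safer than an unformatted DateTime.Now.
string subscriptionName = $"DynamicSubscriber-{DateTimeOffset.UtcNow.ToUnixTimeSeconds()}";

bool exists = await adminClient.SubscriptionExistsAsync(TopicName, subscriptionName);
if (!exists)
{
    await adminClient.CreateSubscriptionAsync(new CreateSubscriptionOptions(TopicName, subscriptionName)
    {
        // Optional: let Service Bus clean up stale staging/dev subscriptions automatically.
        AutoDeleteOnIdle = TimeSpan.FromDays(7)
    });
}

var client = new ServiceBusClient(connectionString);
var processor = client.CreateProcessor(TopicName, subscriptionName);

processor.ProcessMessageAsync += MessageHandler;
processor.ProcessErrorAsync += ErrorHandler;

await processor.StartProcessingAsync();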

Related

How can I avoid duplicate background task processing in Service Fabric hosted services?

Sorry about the vague title, it's rather hard to explain. I have the following setup:
I'm running a .NET Core 2.2 Web API hosted in Service Fabric.
Part of this API's responsibilities is to monitor an external FTP storage for new incoming files.
Each file will trigger a Mediator Command to be invoked with processing logic.
I've implemented a hybrid solution based on https://learn.microsoft.com/en-us/dotnet/architecture/microservices/multi-container-microservice-net-applications/background-tasks-with-ihostedservice and https://blog.maartenballiauw.be/post/2017/08/01/building-a-scheduled-cache-updater-in-aspnet-core-2.html. In essence this is an IHostedService implementation that is registered in the Startup.cs of this API. It's basically a background service running in-process.
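For context, that in-process background service is roughly the shape sketched below (a simplified illustration, not the actual code; FtpPollingService, ProcessIncomingFileCommand and the 30-second delay are made-up names/values):

using System;
using System.Threading;
using System.Threading.Tasks;
using MediatR;
using Microsoft.Extensions.Hosting;

public class FtpPollingService : BackgroundService
{
    private readonly IMediator _mediator;

    public FtpPollingService(IMediator mediator) => _mediator = mediator;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // List new files on the FTP storage, then for each new file:
            // await _mediator.Send(new ProcessIncomingFileCommand(fileName), stoppingToken);
            await Task.Delay(TimeSpan.FromSeconds(30), stoppingToken);
        }
    }
}

// Registered in Startup.ConfigureServices: services.AddHostedService<FtpPollingService>();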
As for the problem: the solution above works fine on a 1-node cluster, but causes "duplicates" to be processed when running on a 5-node cluster. The problem lies in the fact that on a 5-node cluster there are of course 5 identical scheduled tasks running, and they will all access the same file on the FTP at the same time.
I've realised this is caused somewhat by improper separation of concerns - aka the API shouldn't be responsible for this, rather a completely separate process should handle this.
This brings me to the different services supported on Service Fabric (Stateful, Stateless, Actors and Hosted Guest Exe's). The Actor seems to be the only one that runs single-threaded, even on a 5-node cluster. Additionally, an Actor doesn't seem to be well suited for this kind of scenario, as it needs to be triggered. In my case, I basically need a daemon that runs all the time on a schedule. If I'm not mistaken, the other stateful/stateless services will run with 5 "clones" as well and just cause the same issue I currently have.
I guess my question is: how can I do efficient background processing with Service Fabric and avoid these multi-threaded/duplicate issues? Thanks in advance for any input.
In Service Fabric you have 2 options with actors:
Reliable actor timers
Reliable actor reminders
You can use actor state to determine whether the actor has already processed your FTP file.
Have a look at this blog post to see how they used a reminder to run every 30 seconds.
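A minimal sketch of the reminder approach with the Reliable Actors API (the actor name, reminder name and the 30-second interval are illustrative only):

using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

internal class FtpPollingActor : Actor, IRemindable
{
    public FtpPollingActor(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    protected override Task OnActivateAsync()
    {
        // Register (or refresh) a reminder that fires every 30 seconds.
        return RegisterReminderAsync("PollFtp", null, TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));
    }

    public Task ReceiveReminderAsync(string reminderName, byte[] state, TimeSpan dueTime, TimeSpan period)
    {
        // Poll the FTP location and hand off any new files here; keep this idempotent,
        // because the reminder can fire again after a failover.
        return Task.CompletedTask;
    }
}

Unlike timers, reminders survive actor deactivation and failover, which is what makes them suitable for a daemon-like schedule.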
It's important that the code in your actor allows reentrancy.
Basically, because the actors are reliable, your code might get executed multiple times and be cancelled in the middle of an execution.
Instead of doing this:
public void Method(int fileId)
{
    _ftpService.Process(fileId);
}
Consider doing this:
public void Method(int fileId)
{
    if (_ftpService.IsNotProcessed(fileId))
    {
        _ftpService.Process(fileId);
        _ftpService.SetProcessed(fileId);
    }
}
If your actor has trouble disposing, you might want to check whether you are handling cancellation tokens in your code. I never had this issue, but we are using Autofac with Autofac.ServiceFabric to register our actors via RegisterActor<T>(), and we have cancellation tokens in most of our logic. The documentation of CancellationTokenSource can also help you.
Example
private readonly CancellationTokenSource _cancellationTokenSource;
private readonly CancellationToken _cancellationToken;

public Ctor() // i.e. your actor's constructor
{
    _cancellationTokenSource = new CancellationTokenSource();
    _cancellationToken = _cancellationTokenSource.Token;
}

public async Task SomeMethod()
{
    while (/*condition*/)
    {
        _cancellationToken.ThrowIfCancellationRequested();
        /*Other code*/
    }
}

protected override Task OnDeactivateAsync()
{
    _cancellationTokenSource.Cancel();
    return base.OnDeactivateAsync();
}

Mass Transit connection management on class library

I am trying to use MassTransit for request/response handling. Most examples for MassTransit are for console or web applications, and I don't know how to start or stop the bus on the producer side when I use it from a class library.
In the web application examples the bus is started on application start, but for a class library there is no such thing as Startup.cs.
My question is: where should I start and stop the bus when I use a class library to connect to it?
My producer code looks like
IBusControl busControl = CreateBus();
TaskUtil.Await(() => busControl.StartAsync());

IRequestClient<IAccountingRequest, IAccountingResponse> client = CreateRequestClient(busControl);

IAccountingResponse response = null;
AccountingRequest accountingRequest = MapToAccountingRequest(accountingIntegration);

Task.Run(async () =>
{
    response = await client.Request(accountingRequest);
}).Wait();

busControl.Stop();
But I think starting and stopping bus for every request is not good.
You should provide methods in your class library to start/stop the bus. You can abstract them however you want, but allow the developer the ability to start up and shut down the bus. Many other libraries do this, typically via a method to Stop, Shutdown, Close, etc.
The fact that you're also hiding the Task in the above example and blocking/waiting makes me think you're stuck inside something super legacy that you can't avoid. In that case, well, I hate to say this, but manage the reference to the bus in some static singleton (yuck), start it the first time it's used (double yuck), then try to find a hook on application exit to stop it cleanly (good luck).
The best solution is to give the developers a call into your library to shut it down so they can free the connection and resources.
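As a rough illustration only (the class name, RabbitMQ host and credentials below are made up, and the bus configuration is the bare minimum): the library owns the IBusControl instance and exposes explicit start/stop methods, so the hosting application controls the connection lifetime instead of starting and stopping the bus per request.

using System;
using System.Threading.Tasks;
using MassTransit;

public class AccountingBusConnection : IDisposable
{
    private readonly IBusControl _busControl;

    public AccountingBusConnection()
    {
        _busControl = Bus.Factory.CreateUsingRabbitMq(cfg =>
        {
            cfg.Host(new Uri("rabbitmq://localhost"), h =>
            {
                h.Username("guest");
                h.Password("guest");
            });
        });
    }

    public Task StartAsync() => _busControl.StartAsync();

    public Task StopAsync() => _busControl.StopAsync();

    public void Dispose() => _busControl.Stop();
}

The consuming application then calls StartAsync() once at startup and StopAsync() on shutdown, rather than per request.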

Azure - Consume Event Hub to DocumentDB (not Storage Account)

I have two services running on Azure. An Event Hub and a CosmosDB/DocumentDB database.
My goal is to wire the two together with a WebApp service so that everything that arrives on the Event Hub is consumed and properly stored in the database.
I went through the Quick Starts and the Tutorials of both Event Hubs and CosmosDB and I cannot figure out a way of wiring the two.
I know how to establish a connection to DocumentDB, and I know how to consume data from an Event Hub, but I can't manage to do both. Here is the deal:
To create an Event Hub Processor Host I've only found the following constructor
public EventProcessorHost(string eventHubPath, string consumerGroupName, string eventHubConnectionString, string storageConnectionString, string leaseContainerName)
with storageConnectionString being a combination of two values called StorageAccountName and StorageAccountKey in the official tutorial.
Well, that's actually my problem: a Storage Account is yet another service available on Azure. I've created one for testing purposes and it works just fine, but I need to store everything in a DocumentDB/CosmosDB database.
I am not excluding the possibility that going through a Storage Account is required, but if that's so, could you tell me why?
Thank you very much.
The Event Hub Processor requires connection information for Azure Storage for lease management and check-pointing purposes. Practically this means that if you have multiple instances of your processor running together all the hard work of figuring out who is reading from which Event Hub partitions is completely managed for you.
The processor model is extremely generic: you implement IEventProcessor and put your handling logic in ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages). Inside this method you're free to write the EventData into Cosmos or do anything else your heart desires with the messages coming off of the Event Hub. For example:
class CosmosEventHubProcessor : IEventProcessor
{
    private DocumentClient _documentClient;

    public Task OpenAsync(PartitionContext context)
    {
        // Initialize the DocumentClient (Cosmos DB endpoint + key) here
        return Task.CompletedTask;
    }

    public Task CloseAsync(PartitionContext context, CloseReason reason) => Task.CompletedTask;

    public Task ProcessErrorAsync(PartitionContext context, Exception error) => Task.CompletedTask;

    public async Task ProcessEventsAsync(PartitionContext context, IEnumerable<EventData> messages)
    {
        // Write the events to Cosmos using the DocumentClient, then checkpoint progress
        await context.CheckpointAsync();
    }
}
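To tie it together, a rough sketch of registering the processor with the host (assuming the Microsoft.Azure.EventHubs.Processor package; the connection strings and names are placeholders):

var host = new EventProcessorHost(
    eventHubPath,
    PartitionReceiver.DefaultConsumerGroupName,
    eventHubConnectionString,
    storageConnectionString,   // only used for leases/checkpoints, not for your data
    leaseContainerName);

await host.RegisterEventProcessorAsync<CosmosEventHubProcessor>();

// ... the host keeps pumping events into CosmosEventHubProcessor until you shut it down:
await host.UnregisterEventProcessorAsync();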

How to package MSMQ (or similar) server into an application?

I'm putting together an application that is intended to install copies of itself on downstream server(s). I've been reading about RabbitMQ, and am very interested in using message queueing when the upstream/downstream applications talk to each other. From what I can tell with both RabbitMQ and MSMQ, the server/broker component needs to be installed first, either from an installer or a script. I'd like to use a NuGet package or similar so the upstream application can remotely install a copy of itself downstream without having someone (or a script) install the prerequisite, as this is meant to be unattended all the time.
My question: is there a message queueing library available that I can embed into my application for both the client and server components of the MQ?
The idea is to do something like this (pseudo-code):
class Program
{
    static void Main(string[] args)
    {
        MyQueueServer.MessageReceived += FancyCallback;
        MyQueueServer.Start();
    }

    static void FancyCallback(object sender, CustomEventArgs e)
    {
        ReadMessageAndDoStuff(e);
    }

    static void SendMessage()
    {
        string message = "hello server";
        var sender = new MyQueueClient(serverConfigDetails);
        sender.Send(message);
    }
}
UPDATE:
Looks like ZeroMQ may be the way to go, but it seems I'd be missing out on the durability bit, i.e. persisting messages to disk. I'm also unclear whether ZeroMQ will keep trying to deliver a message if the remote host is offline. Google says you need to roll your own implementation for these kinds of things. I'm willing to do this, but I'm still hopeful for an existing solution I can take advantage of.

Push Notifications with PushSharp - the basics

I need to push notifications to tens of thousands of iOS devices that have my app installed. I'm trying to do it with PushSharp, but I'm missing some fundamental concepts here. At first I tried to actually run this in a Windows service, but couldn't get it to work - I kept getting null reference errors from the _push.QueueNotification() call. Then I did exactly what the documented sample code did and it worked:
PushService _push = new PushService();

_push.Events.OnNotificationSendFailure += new ChannelEvents.NotificationSendFailureDelegate(Events_OnNotificationSendFailure);
_push.Events.OnNotificationSent += new ChannelEvents.NotificationSentDelegate(Events_OnNotificationSent);

var cert = File.ReadAllBytes(HttpContext.Current.Server.MapPath("..pathtokeyfile.p12"));
_push.StartApplePushService(new ApplePushChannelSettings(false, cert, "certpwd"));

AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

_push.QueueNotification(notification);
_push.StopAllServices(true);
Issue #1:
This works perfectly and I see the notification pop up on the iPhone. However, since it's called a Push Service, I assumed it would behave like a service - meaning I instantiate it and call _push.StartApplePushService() within a Windows service, perhaps. And I thought that to actually queue up my notifications, I could do this on the front-end (an admin app, let's say):
PushService push = new PushService();

AppleNotification notification = NotificationFactory.Apple()
    .ForDeviceToken(deviceToken)
    .WithAlert(message)
    .WithSound("default")
    .WithBadge(badge);

push.QueueNotification(notification);
Obviously (and like I already said), it didn't work - the last line kept throwing a null reference exception.
I'm having trouble finding any other kind of documentation that would show how to set this up in a service/client manner (and not just call everything at once). Is it possible or am I missing the point of how PushSharp should be utilized?
Issue #2:
Also, I can't seem to find a way to target many device tokens at once, without looping through them and queuing up notifications one at a time. Is that the only way or am I missing something here as well?
Thanks in advance.
#baramuse explained it all. If you wish to see a service "processor", you can browse my solution at https://github.com/vmandic/DevUG-PushSharp, where I've implemented the workflow you seek, i.e. a win service, a win processor or even a web api ad hoc processor using the same core processor.
From what I've read and how I'm using it, the 'Service' keyword may have misled you...
It is a service in a way that you configure it once and start it.
From this point, it will wait for you to push new notifications inside its queue system and it will raise events as soon as something happens (delivery report, delivery error...). It is asynchronous and you can push (=queue) 10000 notifications and wait for the results to come back later using the event handlers.
But still, it's a regular object instance you will have to create and access like any other. It doesn't expose any "outside listener" (an http/tcp/ipc connection, for example); you will have to build that yourself.
In my project I created a small self-hosted web service (relying on ServiceStack) that takes care of the configuration and instance lifetime while only exposing a SendNotification function.
And about Issue #2: there indeed isn't any "batch queue", but since the queue function returns straight away (enqueue now, push later), it's just a matter of looping over your device token list...
public void QueueNotification(Notification notification)
{
    if (this.cancelTokenSource.IsCancellationRequested)
    {
        Events.RaiseChannelException(new ObjectDisposedException("Service", "Service has already been signaled to stop"), this.Platform, notification);
        return;
    }

    notification.EnqueuedTimestamp = DateTime.UtcNow;
    queuedNotifications.Enqueue(notification);
}
