I am trying to achieve pub/sub with Azure Service Bus.
I started with this tutorial and it worked so far:
https://azure.microsoft.com/en-gb/documentation/articles/service-bus-dotnet-how-to-use-topics-subscriptions/
But for my specific case I am not sure exactly how I should do it:
I have a Web API running on an Azure Web App that is scaled to three instances.
I have a console application client that sends messages to a dedicated topic.
What I want to achieve is that all three instances of the Web API receive every message that is sent to the topic.
So it is a fire-and-forget action:
The client sends a message to the topic.
Every subscriber that is CURRENTLY subscribing to this topic should get the message.
I am not interested in older messages that were sent when the subscriber was inactive/offline. I am just syncing an in-memory cache across these instances, so it is really short-lived information: I only need to know which keys to invalidate. But it is important that every subscriber gets the information, to avoid stale data.
I am not exactly sure whether I have to create a subscription dynamically in the startup code of the Web API so that every instance has its own subscription, or whether all Web App instances can use the same subscription.
I would prefer not to create subscriptions dynamically, since I don't know when to remove them again (e.g. when scaling down from three instances to two).
But I was unable to find any documentation on how to do this, or on whether it is okay for multiple clients to receive from the same subscription or whether I need to create a subscription per client.
I am not exactly sure whether I have to create a subscription dynamically in the startup code of the Web API so that every instance has its own subscription, or whether all Web App instances can use the same subscription.
Service Bus subscribers follow the Competing Consumers pattern by default. You must create a unique subscription for each Web API instance in order for each instance to receive a copy of the message. It will be easiest to do this when the Web API instance starts up.
I would prefer not to create subscriptions dynamically, since I don't know when to remove them again (e.g. when scaling down from three instances to two).
You can configure the subscription to be auto-deleted after being idle for some period of time. "Idle" in this case means that the Web API instance has spun down and is no longer attempting to receive messages on the subscription. When creating the subscription, set the AutoDeleteOnIdle time span to a brief duration, currently a minimum of 5 minutes. You can then create a new subscription when the Web API instance starts and know that it will be deleted automatically soon after the instance stops.
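As a minimal sketch of that startup logic, using the NamespaceManager from the older WindowsAzure.ServiceBus SDK (the connection string, topic name and the use of the machine name as the subscription name are assumptions for illustration, not part of the question):

```csharp
using System;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Placeholder connection string; in a real app this comes from configuration.
var connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Hypothetical naming: one subscription per instance, keyed by the machine name.
var subscriptionName = Environment.MachineName;

if (!namespaceManager.SubscriptionExists("my_topic", subscriptionName))
{
    namespaceManager.CreateSubscription(new SubscriptionDescription("my_topic", subscriptionName)
    {
        // Deleted automatically once this instance stops receiving; 5 minutes is the current minimum.
        AutoDeleteOnIdle = TimeSpan.FromMinutes(5)
    });
}
```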
I am not interested in older messages that were sent when the subscriber was inactive/offline.
When creating the topic, set the DefaultMessageTimeToLive to a brief duration, e.g. 5 seconds. This ensures that new subscribers don't see old messages.
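A sketch of that, reusing the same NamespaceManager and placeholder topic name as above:

```csharp
// Using the same NamespaceManager as in the previous sketch.
if (!namespaceManager.TopicExists("my_topic"))
{
    namespaceManager.CreateTopic(new TopicDescription("my_topic")
    {
        // Messages expire after 5 seconds, so subscriptions created later never see old ones.
        DefaultMessageTimeToLive = TimeSpan.FromSeconds(5)
    });
}
```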
In brief, what is the best way to create Azure resources (VMs, resource groups, etc.) that are defined programmatically, without blocking the web app's interface during the long time that some of these operations take?
More detailed:
I have a .NET Core web application where customers are added manually. Once a customer is added, the app automatically creates some Azure resources. However, I noticed that my interface is 'locked' during these operations. What is a relatively simple way of detaching these operations from the web application? I had in mind sending a trigger using a Service Bus or Azure Relay and triggering an Azure Function. However, it seems to me that all these resource operations return something, and my web app waits for that. I need a 'send and forget' method: just send out the trigger to create these resources, don't bother with the return values for now, and continue with the app.
If a 'send and return' method also works within my web app, that is also fine.
Any suggestions are welcome!
You need to queue the work to run in the background and then return from the action immediately. The easiest way to do this is to create a hosted service. There are a couple of different ways to do this:
Use a queued background service and actually queue the work to be done in your action.
Just write the required info to a database table, redis store, etc. and use a timed service to perform the work on a schedule.
In either case, you may also consider splitting this off into a worker service (essentially, a separate app composed of just the hosted service, instead of running it in the same instance as your web app). This allows you to scale the service independently and also insulates your web app from problems that service might encounter.
Once you've set up your service and scheduled the work, you just need some way to let the user know when the work is complete. A typical approach is to use SignalR to allow the server to notify the client with progress updates or success notifications. However, you can also just do something simple like email the user when everything is ready.
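As a minimal sketch of the first option above (a queued background service in ASP.NET Core), assuming hypothetical type names and a simple in-memory channel as the queue:

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Hypothetical queue abstraction: the controller action enqueues work and returns immediately.
public class BackgroundTaskQueue
{
    private readonly Channel<Func<CancellationToken, Task>> _queue =
        Channel.CreateUnbounded<Func<CancellationToken, Task>>();

    public ValueTask EnqueueAsync(Func<CancellationToken, Task> workItem) =>
        _queue.Writer.WriteAsync(workItem);

    public ValueTask<Func<CancellationToken, Task>> DequeueAsync(CancellationToken ct) =>
        _queue.Reader.ReadAsync(ct);
}

// Hosted service that drains the queue in the background, outside the request pipeline.
public class QueuedHostedService : BackgroundService
{
    private readonly BackgroundTaskQueue _queue;
    public QueuedHostedService(BackgroundTaskQueue queue) => _queue = queue;

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            var workItem = await _queue.DequeueAsync(stoppingToken);
            await workItem(stoppingToken); // e.g. create the Azure resources here
        }
    }
}

// Registration in Program/Startup:
// services.AddSingleton<BackgroundTaskQueue>();
// services.AddHostedService<QueuedHostedService>();
```

A controller action would then call EnqueueAsync(...) with the resource-creation work and immediately return something like a 202 Accepted, leaving the heavy lifting to the hosted service.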
We want to use Google Pub/Sub to consume messages. In RabbitMQ, whenever a message was published, we received it and processed it. Our processing takes 3-4 hours, and because of that our consumers are Windows services.
We don't want to use Pub/Sub pull because we don't want to poll, but Pub/Sub push delivers to a web endpoint, and because of our long-running process we cannot use a web app or Web API. Is there any way to consume Pub/Sub messages the way we do in RabbitMQ, without constantly requesting, consuming only when there is a message?
Thanks
Google Pub/Sub does not require that one continually and explicitly poll the Pub/Sub environment. If one is using the client libraries, one can configure a callback function within the client application that is invoked whenever a new message is published to the topic against which the subscription has been taken.
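For example, here is a minimal sketch using the Google.Cloud.PubSub.V1 .NET client library; the project and subscription names are placeholders:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.PubSub.V1;

var subscriptionName = SubscriptionName.FromProjectSubscription("my-project", "my-subscription");
var subscriber = await SubscriberClient.CreateAsync(subscriptionName);

// The library invokes this callback whenever a message arrives; no polling loop in your code.
await subscriber.StartAsync((PubsubMessage message, CancellationToken ct) =>
{
    Console.WriteLine($"Received: {message.Data.ToStringUtf8()}");
    // hand the message off to your long-running processing here
    return Task.FromResult(SubscriberClient.Reply.Ack);
});
```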
I have a 5-node cluster; each node runs a microservice (a stateless reliable service) that receives messages from Azure Service Bus.
Since I have created only one my_Subscription(Subscription Name) for my_topic(Topic Name), the microservice instances are receiving messages at random.
I was expecting the messages to be broadcast, since every instance is subscribed to the Service Bus topic.
Now if this is the case and I need to create one new subscription per instance, will I need to change the ARM template and redeploy it every time I want my services to scale?
You can make your cluster nodes create their own subscription on the fly as they start up (which should not be too difficult to do), perhaps using something like the node's unique ID for the subscription name. Then each node would receive its own copy of each message, achieving your goal. However, if your nodes come and go all the time, you'll need to implement some cleanup mechanism to make sure stale subscriptions don't clog the topic's storage.
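A rough sketch of that idea, run from the stateless service's startup; the connection string, topic name and the AutoDeleteOnIdle-based cleanup are assumptions, with the node name taken from the Service Fabric runtime:

```csharp
using System;
using System.Fabric;
using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;

// Placeholder connection string; in practice this comes from service configuration.
var connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
var manager = NamespaceManager.CreateFromConnectionString(connectionString);

// Unique per node, so each node gets its own copy of every message.
var subscriptionName = FabricRuntime.GetNodeContext().NodeName;

if (!manager.SubscriptionExists("my_topic", subscriptionName))
{
    manager.CreateSubscription(new SubscriptionDescription("my_topic", subscriptionName)
    {
        // One simple cleanup option: let idle subscriptions delete themselves.
        AutoDeleteOnIdle = TimeSpan.FromMinutes(5)
    });
}
```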
When a manager creates a task and sets the activation date in the future, it's supposed to be stored in the DB. No message is dispatched to the relevant workers until a day or two before it's due. When the time is approaching, an email is sent out to the subordinates.
Previously I resolved this using a locally run Windows Service that scheduled the messaging. However, as I'm implementing something similar in Azure, I'm not sure how to solve it (other than actually hosting my own Windows Server in the cloud, of course, but that kind of defeats the whole point).
Since my MVC application is strictly event-driven, I've browsed around in the Azure portal for a utility to schedule or postpone a method being invoked, with no luck. So at the moment, all the emails are dispatched immediately and the scheduling is performed by keeping the message in the inbox until it's time (or by manually setting up an appointment).
How should I approach the issue?
Another possible solution is to use a queueing mechanism. You can use Azure Storage Queues or Service Bus Queues.
The way it would work is that when a task is created and saved in the database, you write a message to a queue. This message contains details about the task (maybe a task id). However, the message is invisible by default and only becomes visible after a certain amount of time (you calculate this period based on when you need to send out the email). When the visibility timeout expires, the message becomes available to be consumed from the queue. You then have a WebJob with a queue trigger (i.e. the WebJob comes alive when there's a message in the queue). In your WebJob code, you fetch the task information from the database and send the notification to the person concerned.
If you're using Azure Storage Queue, the property you would be interested in is InitialVisibilityTimeout. Please see this thread for more details: Azure storage queue message (show at specific time).
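A small sketch with the classic Microsoft.WindowsAzure.Storage SDK; the queue name, task id and delay are placeholders:

```csharp
using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

var connectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";
var queue = CloudStorageAccount.Parse(connectionString)
    .CreateCloudQueueClient()
    .GetQueueReference("task-notifications"); // hypothetical queue name
queue.CreateIfNotExists();

var taskId = 42; // hypothetical task id stored in the database

// The message stays invisible for the delay, then surfaces for the WebJob to pick up.
queue.AddMessage(new CloudQueueMessage(taskId.ToString()),
    timeToLive: null,
    initialVisibilityDelay: TimeSpan.FromDays(2));
```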
If you're using Azure Service Bus Queue, the property you would be interested in is BrokeredMessage.ScheduledEnqueueTimeUtc. You can read more about this property here: https://msdn.microsoft.com/en-us/library/microsoft.servicebus.messaging.brokeredmessage.scheduledenqueuetimeutc.aspx.
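And the Service Bus equivalent, sketched with the Microsoft.ServiceBus.Messaging API; the queue name, task id and dates are placeholders:

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

var connectionString = "Endpoint=sb://...;SharedAccessKeyName=...;SharedAccessKey=...";
var queueClient = QueueClient.CreateFromConnectionString(connectionString, "task-notifications");

var taskId = 42;                                                            // hypothetical task id
var activationDate = new DateTime(2023, 6, 15, 0, 0, 0, DateTimeKind.Utc); // hypothetical due date

var message = new BrokeredMessage(taskId.ToString())
{
    // Not delivered until this UTC time, e.g. two days before the task is due.
    ScheduledEnqueueTimeUtc = activationDate.AddDays(-2)
};
queueClient.Send(message);
```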
One solution for running background tasks is to use WebJobs. WebJobs can run on a schedule (say, once per day), manually, or triggered by a message in a queue.
You can use Azure WebJobs. Basically, create a WebJob and schedule it to regularly check the data in your database for upcoming tasks and then notify people.
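As a rough sketch, a queue-triggered WebJob function that could pick up a message like the ones scheduled in the sketches above; the class, method and queue names are illustrative:

```csharp
using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // Runs whenever a message appears on the "task-notifications" queue.
    public static void ProcessTaskNotification(
        [QueueTrigger("task-notifications")] string taskId,
        TextWriter log)
    {
        log.WriteLine($"Sending reminder email for task {taskId}");
        // load the task from the database and email the assigned workers here
    }
}
```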
I have the following message transport scenarios
Client -> Calls SignalR -> Calls NServiceBus -> Process Message internally -> Calls NServiceBus Gateway service with Result -> Calls SignalR Hub -> Updates the client with result.
In choosing between SignalR and long polling, I need to know whether SignalR is scalable. So in doing my homework I came across SignalR on Azure Service Bus. The setup is done in the Global.asax application start.
Ultimately I need to be able to do this, from inside an NServiceBus handler:
var context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();
context.Clients.Group(group).addMessage(message);
The question is whether that context will break, because I'm (potentially) calling it from a machine other than the one the client is connected to.
Also, what is the sharding scheme that the SignalR implementation uses to seed the topics? I know I can configure it to use N topics, but how does it actually determine which message goes to which topic, and is that relevant from an external caller's point of view?
You should be able to use GlobalHost.ConnectionManager.GetHubContext in any application where you have registered ServiceBusMessageBus as your IMessageBus via SignalR's GlobalHost.DependencyResolver. This is done for you if you call GlobalHost.DependencyResolver.UseServiceBus(...) in the application.
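For reference, the Global.asax wiring might look roughly like this; the connection string and topic prefix are placeholders, and the exact UseServiceBus overloads differ between SignalR versions:

```csharp
using System;
using System.Web;
using Microsoft.AspNet.SignalR;

public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Register the Service Bus backplane before registering hub routes.
        GlobalHost.DependencyResolver.UseServiceBus(
            "Endpoint=sb://mynamespace.servicebus.windows.net/;...",
            "myapp");

        // ... register hub routes as usual ...
    }
}
```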
If you do this, a message will be published to Azure Service Bus for each call to addMessage or any other hub method on the IHubContext returned from GetHubContext. If there are subscribed clients connected to other nodes in the web farm, the other nodes will pick up the message from Service Bus and relay it to the subscribed clients.
Which topic a message goes to should not be relevant from the PoV of an external caller. You can use multiple topics to improve throughput, but for most use cases one should be sufficient.
If you choose to use more than one topic, you can think about the topic a message goes to as being essentially random. The only thing that is guaranteed is that messages from the same sender will go to the same topic. This allows SignalR to keep messages from the same sender in order.
Caveat emptor: SignalR has not yet had an official release supporting scale out. The 1.1 version will be the first release to support scale out officially.