I'm trying to understand how an HTTP-triggered function decides when it should be scaled.
I found that for queue triggers, IScaleMonitor implementations are used. Here they are for: RabbitMQ, Blob trigger, Event hub 1, Event hub 2, Kafka, Service bus 1, Service bus 2, Cosmos DB, Storage queue.
But I couldn't find any code that does this for HttpTriggers. Does anyone know where to look for the HTTP scaling algorithm?
An Azure Function that uses an HTTP trigger is scaled based on the maxConcurrentRequests setting in host.json, which caps how many HTTP requests a single instance will execute in parallel:
{
"extensions": {
"http": {
"maxConcurrentRequests": 100,
}
}
}
Documentation is here: Azure Functions HTTP output bindings
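For illustration, here is a minimal HTTP-triggered function (C# class library style; the function name and return value are made up) whose concurrent invocations the maxConcurrentRequests setting above caps per instance:

using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req,
        ILogger log)
    {
        // Each invocation counts against maxConcurrentRequests on its instance;
        // once the limit is reached, further requests are queued and the
        // platform can scale out to additional instances.
        log.LogInformation("Handling an HTTP request.");
        return new OkObjectResult("Hello");
    }
}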
An Azure Function that uses a Service Bus trigger is scaled based on the maxConcurrentCalls setting in its host.json file.
Example host.json file with maximum concurrent calls set to 10:
{
"extensions": {
"serviceBus": {
"messageHandlerOptions": {
"maxConcurrentCalls": 10
}
}
}
}
Documentation is here: Azure Service Bus bindings for Azure Functions
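For comparison, a minimal Service Bus queue-triggered function (the queue name "orders" and the connection setting name are assumptions); with maxConcurrentCalls set to 10, at most 10 invocations of it run in parallel on a single instance:

using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ProcessOrderFunction
{
    [FunctionName("ProcessOrder")]
    public static void Run(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string message,
        ILogger log)
    {
        // Concurrency per instance is capped by maxConcurrentCalls; the
        // platform adds instances based on queue length and message age.
        log.LogInformation($"Processing: {message}");
    }
}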
Related
I'm currently trying to update an application that was originally .NET Core 3.1 using MassTransit 6.3.2. It is now configured to use .NET 6.0 and MassTransit 7.3.0.
Our application uses MassTransit to send messages via Azure Service Bus, publishing messages to topics, which then have other subscribers listening to those topics.
Cut down, it was implemented like so:
// Program.cs
services.AddMassTransit(config =>
{
config.AddConsumer<AppointmentBookedMessageConsumer>();
config.AddBus(BusControlFactory.ConfigureAzureServiceBus);
});
// BusControlFactory.cs
public static class BusControlFactory
{
    public static IBusControl ConfigureAzureServiceBus(IRegistrationContext<IServiceProvider> context)
    {
        var config = context.Container.GetService<AppConfiguration>();
        var azureServiceBus = Bus.Factory.CreateUsingAzureServiceBus(busFactoryConfig =>
        {
            busFactoryConfig.Host("Endpoint=sb://REDACTED-queues.servicebus.windows.net/;SharedAccessKeyName=MyMessageQueuing;SharedAccessKey=MyKeyGoesHere");
            busFactoryConfig.Message<AppointmentBookedMessage>(m => m.SetEntityName("appointment-booked"));
            busFactoryConfig.SubscriptionEndpoint<AppointmentBookedMessage>(
                "my-subscriber-name",
                configurator =>
                {
                    configurator.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(60)));
                    configurator.Consumer<AppointmentBookedMessageConsumer>(context.Container);
                });
        });
        return azureServiceBus;
    }
}
It has now been changed and upgraded to the latest MassTransit, and is implemented like this:
// Program.cs
services.AddMassTransit(config =>
{
    config.AddConsumer<AppointmentBookedMessageConsumer, AppointmentBookedMessageConsumerDefinition>();
    config.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host("Endpoint=sb://REDACTED-queues.servicebus.windows.net/;SharedAccessKeyName=MyMessageQueuing;SharedAccessKey=MyKeyGoesHere");
        cfg.Message<AppointmentBookedMessage>(m => m.SetEntityName("appointment-booked"));
        cfg.ConfigureEndpoints(context);
    });
});
// AppointmentBookedMessageConsumerDefinition.cs
public class AppointmentBookedMessageConsumerDefinition: ConsumerDefinition<AppointmentBookedMessageConsumer>
{
public AppointmentBookedMessageConsumerDefinition()
{
EndpointName = "testharness.subscriber";
}
protected override void ConfigureConsumer(IReceiveEndpointConfigurator endpointConfigurator, IConsumerConfigurator<AppointmentBookedMessageConsumer> consumerConfigurator)
{
endpointConfigurator.UseMessageRetry(r => r.Interval(5, TimeSpan.FromSeconds(60)));
}
}
The issue, if it can be considered one, is that I can't bind to a subscription that already exists.
In the example above, you can see that the EndpointName is set to "testharness.subscriber". A subscription to the topic "appointment-booked" already existed from before the upgrade. However, when the application runs, it does not error, but it receives no messages.
If I change the EndpointName to "testharness.subscriber2", another subscriber appears on the Azure Service Bus topic (via the Azure Portal) and I start receiving messages. I can see no difference in the names (other than the change I made, in this case the "2" suffix).
Am I missing something here? Is there something else I need to do to get these to bind? Is my configuration wrong? Was it wrong before? While I'm sure I can get around this by managing the release more closely and removing unneeded queues once the new ones are in use, it feels like the wrong approach.
With Azure Service Bus, ForwardTo on a subscription can be a bit opaque.
While the subscription may indeed visually indicate that it is forwarding to the correctly named queue, the queue may have been deleted and recreated at some point without the subscription being deleted. This results in a subscription that builds up messages, as it is unable to forward them to a queue that no longer exists.
Why? Internally, a subscription stores ForwardTo as an object id. After the queue is deleted, that id points to an object that no longer exists, so messages build up in the subscription.
If you have messages in the subscription, you may need to go into the portal and update that subscription to point to the new queue (even though it has the same name), at which point the messages should flow through to the queue.
If there aren't any messages in the subscription (or if they aren't important), you can just delete the subscription and it will be recreated by MassTransit when you restart the bus.
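If you'd rather script the check than use the portal, here is a rough sketch using ServiceBusAdministrationClient from the Azure.Messaging.ServiceBus package (the topic, subscription, and queue names are taken from the question and will differ in your setup):

using Azure.Messaging.ServiceBus.Administration;

var admin = new ServiceBusAdministrationClient("<connection-string>");

// Re-setting ForwardTo makes the broker resolve the queue name again,
// replacing the stale internal object id described above.
SubscriptionProperties sub =
    await admin.GetSubscriptionAsync("appointment-booked", "testharness.subscriber");
sub.ForwardTo = "testharness.subscriber";
await admin.UpdateSubscriptionAsync(sub);

// Or, if the backed-up messages are disposable, delete the subscription and
// let MassTransit recreate it the next time the bus starts:
// await admin.DeleteSubscriptionAsync("appointment-booked", "testharness.subscriber");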
With a very simple Azure Function program, I want to test Azure Event Grid. My goal is that if a file is uploaded to a Storage account, my Azure Function should be triggered and log a message like "Hello World". I have this block of code in my Azure Function:
namespace BlobTrigger
{
public static class BlobEventGrid
{
[FunctionName("BlobEventGrid")]
public static void Run([EventGridTrigger]JObject eventGridEvent,
[Blob("{data.url}", Connection = "BlobConnection")] string file,
TraceWriter log)
{
log.Info("Hello World");
}
}
}
I have set up my Event Grid subscription following this article.
If I upload more than 50 files to my container and then check the Live Metrics of my Azure Function, I can only see 4 incoming events.
Checking the metrics on the event subscription, I see:
Delivered Events: 66
Matched Events: 51
Do you have any idea why only 4 events are tracked by my Azure Function?
I have a Windows service written in C#. Earlier we were using Event Hubs with multiple partitions for message queuing; we recently moved to Kafka. When implementing Event Hubs in C#, we had IEventProcessor.ProcessEventsAsync, which keeps listening for Event Hub notifications and is triggered whenever a message is posted to the Event Hub, running asynchronously in the background.
I did not find any equivalent method in Kafka.
My requirement is to subscribe to a Kafka topic and continuously consume messages. When a message is consumed, some other operations are also supposed to be executed for that message, and each message takes around 15 minutes to process. I want the Kafka consumer to consume all messages as they arrive, queue them by writing them to a file, and have another process read the file, pick up each message, and perform the other operations. I want all of this to run simultaneously/in parallel.
PS: I have written a console application which can produce and consume one message. What I'm looking for is queuing and parallelism.
For parallelism, Kafka implements what's known as consumer groups. Kafka stores "offsets" (the position of each record within a topic partition) and also stores the offset that a given consumer group has reached in processing the records. This lets you create new consumer instances on the fly using the same program, and, by changing the group, lets two programs consume the same data in parallel for different tasks.
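As a concrete starting point, here is a minimal consumer-group sketch using the Confluent.Kafka NuGet package (broker address, topic, and group name are placeholders). Running several copies with the same GroupId spreads the partitions across them; a different GroupId gives a second program its own full copy of the stream:

using System;
using System.Threading;
using Confluent.Kafka;

public static class ConsumerProgram
{
    public static void Main()
    {
        var config = new ConsumerConfig
        {
            BootstrapServers = "localhost:9092",
            GroupId = "file-writer-group",
            AutoOffsetReset = AutoOffsetReset.Earliest
        };

        using var consumer = new ConsumerBuilder<Ignore, string>(config).Build();
        consumer.Subscribe("my-topic");

        while (true)
        {
            var result = consumer.Consume(CancellationToken.None);
            // Hand each message off quickly (e.g. append it to the file the
            // other process reads) rather than doing the 15-minute work here,
            // so consumption keeps pace with the topic.
            Console.WriteLine($"Offset {result.Offset}: {result.Message.Value}");
        }
    }
}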
I also found this link helpful when I was creating my first consumer: http://cloudurable.com/blog/kafka-tutorial-kafka-consumer/index.html
Hope this helps!
Have a look at Silverback: https://silverback-messaging.net. It abstracts many of those concerns, and the basic usage is as simple as this:
public class Startup
{
public void ConfigureServices(IServiceCollection services)
{
services
.AddSilverback()
.WithConnectionToMessageBroker(options => options.AddKafka())
.AddKafkaEndpoints(
endpoints => endpoints
.Configure(
config =>
{
config.BootstrapServers = "localhost:9092";
})
.AddInbound(
endpoint => endpoint
.ConsumeFrom("my-topic")
.DeserializeJson(serializer => serializer.UseFixedType<SomeMessage>())
.Configure(
config =>
{
config.GroupId = "test-consumer-group";
config.AutoOffsetReset = AutoOffsetReset.Earliest;
})))
.AddSingletonSubscriber<MySubscriber>();
}
}
public class MySubscriber
{
    public Task OnMessageReceived(SomeMessage message)
    {
        // TODO: process the message
        return Task.CompletedTask;
    }
}
I want to write code similar to the code at the bottom of this link (https://azure.microsoft.com/en-us/blog/automating-azure-analysis-services-processing-with-azure-functions/) in Visual Studio and build a DLL file. However, instead of using the connection string, I would like to use an existing Linked Service from my Azure portal.
The goal is to create a DLL that refreshes my cube while using an existing Linked Service which is already in my Azure Portal.
Is this possible?
Thanks.
#r "Microsoft.AnalysisServices.Tabular.DLL"
#r "Microsoft.AnalysisServices.Core.DLL"
#r "System.Configuration"
using System;
using System.Configuration;
using Microsoft.AnalysisServices.Tabular;
public static void Run(TimerInfo myTimer, TraceWriter log)
{
log.Info($"C# Timer trigger function started at: {DateTime.Now}");
try
{
Microsoft.AnalysisServices.Tabular.Server asSrv = new Microsoft.AnalysisServices.Tabular.Server();
var connStr = ConfigurationManager.ConnectionStrings["AzureASConnString"].ConnectionString; // Change this to a Linked Service connection
asSrv.Connect(connStr);
Database db = asSrv.Databases["AWInternetSales2"];
Model m = db.Model;
db.Model.RequestRefresh(RefreshType.Full); // Mark the model for refresh
//m.RequestRefresh(RefreshType.Full); // Mark the model for refresh
m.Tables["Date"].RequestRefresh(RefreshType.Full); // Mark only one table for refresh
db.Model.SaveChanges(); //commit which will execute the refresh
asSrv.Disconnect();
}
catch (Exception e)
{
log.Info($"C# Timer trigger function exception: {e.ToString()}");
}
log.Info($"C# Timer trigger function finished at: {DateTime.Now}");
}
So I guess you're using Data Factory and want to process your Analysis Services model from your pipeline. I don't see what your question actually has to do with the Data Lake Store.
To trigger Azure Functions from the Data Factory (v2 only), you'll have to use a web activity. It is possible to pass a Linked Service as part of your payload, as shown in the documentation. It looks like this:
{
"body": {
"myMessage": "Sample",
"linkedServices": [{
"name": "MyService1",
"properties": {
...
}
}]
}
However, there is no Analysis Services linked service in Data Factory; at least, I haven't heard of such a thing. Passing in a connection string from the pipeline seems like a good idea, however. You could pass it as a pipeline parameter in the body of your web request.
Create a parameter in your pipeline
Add it to your Web Activity Payload
{
    "body": {
        "AzureASConnString": "@pipeline().parameters.AzureASConnString"
    }
}
You can retrieve this value in your function as described here.
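On the function side, an HTTP-triggered variant of the script above could then read that value from the posted body instead of ConfigurationManager. A rough sketch in the same C# script style (the property name matches the payload above):

#r "Newtonsoft.Json"
using System.Net;
using Newtonsoft.Json;

public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, TraceWriter log)
{
    // The Data Factory web activity POSTs the body shown above.
    dynamic body = JsonConvert.DeserializeObject(await req.Content.ReadAsStringAsync());
    string connStr = body.AzureASConnString; // supplied by the pipeline parameter

    log.Info("Received Analysis Services connection string from the pipeline.");
    // ... connect and refresh with Microsoft.AnalysisServices.Tabular.Server,
    // exactly as in the timer-triggered example above ...
    return req.CreateResponse(HttpStatusCode.OK);
}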
We have a requirement to provide an API endpoint which reports the health of various external dependencies. One of these is an Azure Service Bus. By health, we simply need to know whether the service is available and responding to connections.
Our application already starts a service bus endpoint on startup and uses it to publish messages to its queue. However, it looks like the only way to test this endpoint's health would be to actually publish a message to the queue and check for errors. I'd rather not do this, because having to clean up those messages later feels like overkill.
My other idea was to use a dedicated class to create a new endpoint, start it, and then stop it again if there are no errors, as below, doing this each time I need to check the health:
// Build the service bus configuration - connection string etc.
var configuration = _configurationBuilder.Configure(_settings);
IEndpointInstance serviceBusEndpoint = null;
try
{
serviceBusEndpoint = await Endpoint.Start(configuration);
return true;
}
catch
{
return false;
}
finally
{
if (serviceBusEndpoint != null)
{
await serviceBusEndpoint.Stop();
}
}
However, I suspect this may be an inefficient approach. Is there a better/correct way to achieve this aim?