I'm creating a WebJob in .NET Core 3.1. In this project I have a timer-triggered function that should read the number of messages in queue Q1 and, if the queue is empty, put a message in Q2 and also trigger a REST call to an API.
In order to check how many messages are in the queue, I need to access the AzureWebJobsStorage connection string in my appsettings.json, as well as the URL, which is also in the settings.
Program.cs
class Program
{
    static async Task Main()
    {
        var builder = new HostBuilder();
        builder.ConfigureWebJobs(b =>
        {
            b.AddAzureStorageCoreServices();
            b.AddAzureStorage();
            b.AddTimers();
        });
        builder.ConfigureLogging((context, b) =>
        {
            b.AddConsole();
        });
        builder.ConfigureAppConfiguration((context, b) =>
        {
            b.SetBasePath(Directory.GetCurrentDirectory())
                .AddJsonFile("appsettings.json", optional: false, reloadOnChange: true)
                .AddEnvironmentVariables();
        });
        builder.ConfigureServices((context, services) =>
        {
            var mySettings = new MySettings
            {
                AzureWebJobsStorage = context.Configuration.GetValue<string>("AzureWebJobsStorage"),
                AzureWebJobsDashboard = context.Configuration.GetValue<string>("AzureWebJobsDashboard"),
                url = context.Configuration.GetValue<string>("url"),
            };
            services.AddSingleton(mySettings);
        });
        var host = builder.Build();
        using (host)
        {
            await host.RunAsync();
        }
    }
}
Functions.cs
public class Functions
{
    public static void UpdateChannels([QueueTrigger("Q1")] string message, ILogger logger)
    {
        logger.LogInformation(message);
    }

    public static void WhatIsThereToUpdate([QueueTrigger("Q2")] string message, ILogger logger)
    {
        logger.LogInformation(message);
    }

    public static void CronJob([TimerTrigger("0 * * * * *")] TimerInfo timer, [Queue("Q2")] out string message, ILogger logger, MySettings mySettings)
    {
        message = null;
        // Get the connection string from app settings
        string connectionString = mySettings.AzureWebJobsStorage;
        logger.LogInformation("Connection String: " + connectionString);
        // Instantiate a QueueClient which will be used to create and manipulate the queue
        QueueClient queueClient = new QueueClient(connectionString, "Q1");
        if (queueClient.Exists())
        {
            QueueProperties properties = queueClient.GetProperties();
            // Retrieve the cached approximate message count.
            int cachedMessagesCount = properties.ApproximateMessagesCount;
            // Display number of messages.
            logger.LogInformation($"Number of messages in queue: {cachedMessagesCount}");
            if (cachedMessagesCount == 0)
                message = "Hello world!" + System.DateTime.Now.ToString(); //here I would call the REST API as well
        }
        logger.LogInformation("Cron job fired!");
    }
}
appsettings.json
{
"AzureWebJobsStorage": "constr",
"AzureWebJobsDashboard": "constr",
"url": "url"
}
MySettings.cs
public class MySettings
{
public string AzureWebJobsStorage { get; set; }
public string AzureWebJobsDashboard { get; set; }
public string url { get; set; }
}
However when I run this I get the following error:
Error indexing method 'Functions.CronJob'
Microsoft.Azure.WebJobs.Host.Indexers.FunctionIndexingException: Error indexing method 'Functions.CronJob'
---> System.InvalidOperationException: Cannot bind parameter 'mySettings' to type MySettings. Make sure the parameter Type is supported by the binding. If you're using binding extensions (e.g. Azure Storage, ServiceBus, Timers, etc.) make sure you've called the registration method for the extension(s) in your startup code (e.g. builder.AddAzureStorage(), builder.AddServiceBus(), builder.AddTimers(), etc.).
In addition to what is shown in the code above, I also tried using ConfigurationManager and Environment.GetEnvironmentVariable; both methods gave me null when I tried to read the values. For example, ConfigurationManager.AppSettings.GetValues("AzureWebJobsStorage").
I also tried to register IConfiguration as a service services.AddSingleton(context.Configuration); and inject it in the parameters (instead of MySettings), but it also gave me the same binding error.
I'm really at a loss here. I've scoured the SO archives trying to find a solution, and I think I've tried everything that gave other people positive results, but unfortunately I wasn't as lucky as the other posters.
Any guidance is much appreciated.
Edited to add my packages
In case it helps anyone, I'm using the following
Azure.Storage.Queues (12.4.0)
Microsoft.Azure.WebJobs.Extensions (3.0.6)
Microsoft.Azure.WebJobs.Extensions.Storage (4.0.2)
Microsoft.Extensions.Logging.Console (3.1.7)
When using DI, I suggest you use non-static methods and constructor injection.
Here is the Functions.cs:
public class Functions
{
    private readonly MySettings mySettings;

    public Functions(MySettings _mySettings)
    {
        mySettings = _mySettings;
    }

    public void ProcessQueueMessage([TimerTrigger("0 */1 * * * *")] TimerInfo timer, [Queue("queue")] out string message, ILogger logger)
    {
        message = null;
        string connectionString = mySettings.AzureWebJobsStorage;
        logger.LogInformation("Connection String: " + connectionString);
    }
}
No code changes are needed in the other .cs files.
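As a side note: in WebJobs SDK 3.x the host constructs non-static job classes through the service provider, so constructor injection of MySettings works as long as MySettings is registered, as in the question's Program.cs. If the host has trouble resolving the job class, you can optionally register it yourself as well; a minimal sketch, assuming the ConfigureServices block from the question:

builder.ConfigureServices((context, services) =>
{
    services.AddSingleton(new MySettings
    {
        AzureWebJobsStorage = context.Configuration.GetValue<string>("AzureWebJobsStorage"),
        AzureWebJobsDashboard = context.Configuration.GetValue<string>("AzureWebJobsDashboard"),
        url = context.Configuration.GetValue<string>("url"),
    });
    // Optional: make the job class itself explicitly resolvable from the container
    services.AddScoped<Functions>();
});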
Related
I have an Azure Function that I developed using an HTTP trigger. The function would start up, call an API, initialise a DbContext and write the data from the API to the DB.
It all worked fine, but I needed to make a change and deploy the function using a timer trigger instead. Now, since this change was made, the Startup.cs class is no longer called, so there isn't a context injected, and it obviously falls over when the context is used inside the function.
Here's a breakdown of the Startup class in my project:
// using statements
[assembly: FunctionsStartup(typeof(MyNamespace.FunctionAPI.Startup))]
namespace MyNamespace.FunctionAPI
{
    public class Startup : FunctionsStartup
    {
        private static string development = FunctionAPIHelper.GetEnvironmentVariable("development");

        public override void Configure(IFunctionsHostBuilder builder)
        {
            string SqlConnection = "";
            if (development.Equals("yes"))
            {
                SqlConnection = FunctionAPIHelper.GetSqlAzureConnectionString("TestConnectionString");
            }
            else
            {
                SqlConnection = FunctionAPIHelper.GetSqlAzureConnectionString("ConnectionString");
            }

            var credential = new DefaultAzureCredential();
            CancellationTokenSource source = new CancellationTokenSource();
            CancellationToken ctoken = source.Token;

            builder.Services.AddDbContext<Database.FunctionAPIContext>(options =>
                options.UseSqlServer(new SqlConnection
                {
                    ConnectionString = SqlConnection,
                    AccessToken = credential.GetTokenAsync(
                        new TokenRequestContext(
                            new[] { "https://database.windows.net//.default" }), ctoken)
                        .Result.Token
                }));
        }
    }
}
Then the Azure Function code using the HTTP Trigger is below. This works as expected when posting to the endpoint it opens up.
// using statements
namespace MyNamespace.FunctionAPI
{
public class FunctionAPITrigger
{
private static readonly HttpClient httpClient = new HttpClient();
private readonly FunctionAPIContext _functionAPIContext;
public FunctionAPITrigger(FunctionAPIContext functionAPIContext)
{
_functionAPIContext = functionAPIContext;
}
[FunctionName("FunctionAPITrigger")]
public async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)]
HttpRequest req,
ILogger log)
{
log.LogInformation("HTTP Trigger received.");
// do API calls here
// get stuff from the db
List<FunctionDBItems> functionDBItems = _functionAPIContext.FunctionDBItems.ToList();
// do this to the DB items
_functionAPIContext.SaveChanges();
return new OkObjectResult("DB updated!");
}
}
}
And now the function code when using a timer trigger and then calling the function manually using the Azure Function extension within VS Code.
// using statements
namespace MyNamespace.FunctionAPI
{
public class FunctionAPITrigger
{
private static readonly HttpClient httpClient = new HttpClient();
private readonly FunctionAPIContext _functionAPIContext;
public FunctionAPITrigger(FunctionAPIContext functionAPIContext)
{
_functionAPIContext = functionAPIContext;
}
[FunctionName("PetitionsAPITrigger")]
public async void Run([TimerTrigger("0 10 12 * * 1-5")] TimerInfo myTimer, ILogger log)
{
log.LogInformation("HTTP Trigger received.");
// do API calls here
// get stuff from the db
List<FunctionDBItems> functionDBItems = _functionAPIContext.FunctionDBItems.ToList();
// do this to the DB items
_functionAPIContext.SaveChanges();
log.LogInformation("DB Updated and such");
}
}
}
The above code now crashes when it reaches the DB call to list the items in the table. Again, the function is triggered by going to the Azure Functions Extension in VS Code, right clicking the function and selecting Execute Function.
This is my first project using Azure Functions, so I'm thinking that maybe there is an issue with the way the Azure extension triggers timed functions which bypasses the Startup.cs class.
Our requirement is as follows:
Exchange 1 is a topic exchange and queue 1 is bound to it. It is on VHOST 1.
The application is subscribed to queue 1 and processes its messages. After processing a message from queue 1, we want to publish the next message to a different exchange, which is on VHOST 2 (a different RabbitMQ connection).
I have the following questions:
a) Is it possible to implement this without federation?
b) In the same application, can I maintain 2 different RabbitMQ connections?
We are using EasyNetQ as the client to connect to RabbitMQ.
Can you please share a sample for this?
Thanks in advance.
a) Yes. You can also create a shovel between vhosts, which is simpler than a federation.
b) Yes. I don't see a problem with creating multiple IBus instances, as long as you use a different DI (sub)container per bus instance, so there is some added complexity.
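For illustration only, a minimal sketch of option (b) with two EasyNetQ connections, one per vhost. The connection strings, message types and subscription id are placeholders, and depending on your EasyNetQ version the calls may live on bus.PubSub (e.g. bus.PubSub.Subscribe/PublishAsync) rather than directly on IBus:

using EasyNetQ;

public class Step1Message { public string CorrelationId { get; set; } }
public class Step2Message { public string CorrelationId { get; set; } }

// Two IBus instances, one per vhost (connection strings are placeholders)
var vhost1Bus = RabbitHutch.CreateBus("host=rabbit;virtualHost=vhost1;username=user;password=pass");
var vhost2Bus = RabbitHutch.CreateBus("host=rabbit;virtualHost=vhost2;username=user;password=pass");

// Subscribe to queue 1 on VHOST 1; after processing, publish the next message over the VHOST 2 connection
vhost1Bus.Subscribe<Step1Message>("subscriptionId", msg =>
{
    // ... process the VHOST 1 message here ...
    vhost2Bus.Publish(new Step2Message { CorrelationId = msg.CorrelationId });
});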
Here is how I handle multiple connections. I couldn't find a solution directly in EasyNetQ. I don't use the default DI adapters for MS DI; I only use the advanced API and inject the services I need manually. So far it seems to work, but it certainly needs more testing.
In Startup.cs / ConfigureServices:
services.AddBusStation(busStationBuilder =>
{
    // inject IBusStation and get the bus thru name
    appSettings.RabbitMQSettings.Connections.ForEach(c =>
    {
        var taskQueueBus = RabbitHutch.CreateBus(c.ConnectionString, CustomServiceRegister.ServiceRegisterAction());
        c.Exchanges.ForEach(async e =>
        {
            await taskQueueBus.Advanced.ExchangeDeclareAsync(e.Name, e.Type, e.Durable, e.AutoDelete);
        });
        busStationBuilder.Add(c.Name, taskQueueBus.Advanced);
        busStationBuilder.AddDefaultBus(taskQueueBus);
    });
});
public interface IBusStation
{
IBus DefualtBus { get; }
IAdvancedBus Get(string busName);
void Add(string busName, IAdvancedBus advancedBus);
void Add(IBus bus);
}
public class BusStation : IBusStation
{
private Dictionary<string, IAdvancedBus> BusList { get; set; } = new Dictionary<string, IAdvancedBus>();
public IBus DefualtBus { get; private set; }
public IAdvancedBus Get(string busName)
{
if (BusList.TryGetValue(busName, out IAdvancedBus advancedBus))
{
return advancedBus;
}
return null;
}
public void Add(string busName, IAdvancedBus advancedBus)
{
BusList.Add(busName, advancedBus);
}
public void Add(IBus bus)
{
this.DefualtBus = bus;
}
}
public class BusStationBuilder
{
private readonly IBusStation _BusStation;
public BusStationBuilder(IServiceCollection services, IBusStation busStation)
{
this._BusStation = busStation;
services.AddSingleton(busStation);
}
public BusStationBuilder Add(string busName, IAdvancedBus advancedBus)
{
_BusStation.Add(busName, advancedBus);
return this;
}
public BusStationBuilder AddDefaultBus(IBus bus)
{
_BusStation.Add(bus);
return this;
}
}
public static class DependencyExtensions
{
public static IServiceCollection AddBusStation(this IServiceCollection services, Action<BusStationBuilder> builder)
{
var busStationBuilder = new BusStationBuilder(services, new BusStation());
builder(busStationBuilder);
return services;
}
}
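To round this out, here is a hypothetical consumer of the registered IBusStation (not shown in the original answer). The bus name, exchange and routing key match the appsettings.json below; the exact IAdvancedBus signatures vary a little between EasyNetQ versions:

public class TaskPublisher
{
    private readonly IAdvancedBus _taskQueueBus;

    public TaskPublisher(IBusStation busStation)
    {
        // "Task_Queue" matches the Name of the connection configured in RabbitMQSettings
        _taskQueueBus = busStation.Get("Task_Queue");
    }

    public async Task PublishAsync(byte[] body)
    {
        // Re-declare the exchange (idempotent) to get an IExchange handle, then publish raw bytes
        var exchange = await _taskQueueBus.ExchangeDeclareAsync("Direct_Task_Queue", "direct");
        await _taskQueueBus.PublishAsync(exchange, "task.main", false, new MessageProperties(), body);
    }
}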
appsettings.json
"RabbitMQSettings": {
"DefaultQueue": "task.main",
"Connections": [
{
"Name": "Task_Queue",
"ConnectionString": "host=192.168.123.123;virtualHost=/;username=admin;password=password123;prefetchCount=1;persistentMessages=true;publisherConfirms=true",
"Exchanges": [
{
"Name": "Direct_Task_Queue",
"Type": "direct",
"Passive": false,
"Durable": true,
"AutoDelete": false,
"Internal": false,
"AlternateExchange": null,
"Delayed": false
}
]
}
]
},
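The original answer doesn't show the settings classes that appSettings.RabbitMQSettings binds to; hypothetically they could look like this (property names mirror the JSON above):

public class RabbitMQSettings
{
    public string DefaultQueue { get; set; }
    public List<ConnectionSettings> Connections { get; set; }
}

public class ConnectionSettings
{
    public string Name { get; set; }
    public string ConnectionString { get; set; }
    public List<ExchangeSettings> Exchanges { get; set; }
}

public class ExchangeSettings
{
    public string Name { get; set; }
    public string Type { get; set; }
    public bool Passive { get; set; }
    public bool Durable { get; set; }
    public bool AutoDelete { get; set; }
    public bool Internal { get; set; }
    public string AlternateExchange { get; set; }
    public bool Delayed { get; set; }
}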
I am using rabbitmq in a "Work Queues" scenario.
I need, for example, a pool of 5 consumers (each with its own channel), so that one consumer doing I/O operations won't block the other consumers of the same queue.
E.g., if I have the following on my queue:
Message 1, Message 2, Message 3, Message 4, then each instance of FistConsumerHandler will take 1 message from the queue using round robin (the default RabbitMQ behavior).
The problem I am facing is I need to do this using Dependency Injection.
Here is what I have so far:
On Windows service start (my consumers are hosted in a windows service):
protected override void OnStart(string[] args)
{
BuildConnections();
// Register the consumers. For simplicity only showing FirstConsumerHandler.
AddConsumerHandlers<FistConsumerHandler>(ConstantesProcesos.Exchange, ConstantesProcesos.QueueForFirstHandler);
BuildStartup();
var logger = GetLogger<ServicioProcesos>();
logger.LogInformation("Windows Service Started");
Console.WriteLine("Press [enter] to exit.");
}
protected virtual void BuildConnections(
string notificationHubPath = "notificationhub_path",
string rabbitMQHostname = "rabbitmq_hostname",
string rabbitMQPort = "rabbitmq_port",
string rabbitMQUserName = "rabbitmq_username",
string rabbitMQPassword = "rabbitmq_password")
{
ContextHelpers.Setup(ConfigurationManager.ConnectionStrings[appContextConnectionString].ConnectionString);
if (_connection == null)
{
var factory = new ConnectionFactory
{
HostName = ConfigurationManager.AppSettings[rabbitMQHostname],
Port = int.Parse(ConfigurationManager.AppSettings[rabbitMQPort]),
UserName = ConfigurationManager.AppSettings[rabbitMQUserName],
Password = ConfigurationManager.AppSettings[rabbitMQPassword],
DispatchConsumersAsync = true,
};
// Create a connection
do
{
try
{
_connection = factory.CreateConnection();
}
catch (RabbitMQ.Client.Exceptions.BrokerUnreachableException e)
{
Thread.Sleep(5000);
}
} while (_connection == null);
}
_startupBuilder = new StartupBuilder(_connection);
}
protected void AddConsumerHandlers<THandler>(string exchange, string queue)
{
var consumerHandlerItem = new ConsumerHandlerItem
{
ConsumerType = typeof(THandler),
Exchange = exchange,
Queue = queue
};
_startupBuilder._consumerHandlerItems.Add(consumerHandlerItem);
}
protected void BuildStartup()
{
ServiceProvider = _startupBuilder.Build();
}
Startup Builder:
using Microsoft.Extensions.DependencyInjection;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System;
using System.Collections.Generic;
public class StartupBuilder
{
private static IConnection _connection;
private IModel _channel;
public List<ConsumerHandlerItem> _consumerHandlerItems;
public IServiceCollection Services { get; private set; }
public StartupBuilder(IConnection connection)
{
_connection = connection;
_consumerHandlerItems = new List<ConsumerHandlerItem>();
Services = new ServiceCollection();
}
public IServiceProvider Build()
{
_channel = _connection.CreateModel();
Services.InitSerilog();
// Add channel as singleton (this is not correct as I need 1 channel per ConsumerHandler)
Services.AddSingleton(_channel);
// Register the ConsumerHandler to DI
foreach (var item in _consumerHandlerItems)
{
// Add FirstHandler to DI
Type consumerType = item.ConsumerType;
Services.AddSingleton(consumerType);
}
// Finish DI Setup
var serviceProvider = Services.BuildServiceProvider();
// Bind the consumer handler to the channel and queue
foreach (var item in _consumerHandlerItems)
{
var consumerHandler = (AsyncEventingBasicConsumer)serviceProvider.GetRequiredService(item.ConsumerType);
_channel.AssignNewProcessor(item, consumerHandler);
}
return serviceProvider;
}
}
Helpers:
public static class QueuesHelpers
{
public static void AssignNewProcessor(this IModel channel, ConsumerHandlerItem item, AsyncEventingBasicConsumer consumerHandler)
{
channel.ExchangeDeclare(item.Exchange, ExchangeType.Topic, durable: true);
channel.QueueDeclare(item.Queue, true, false, false, null);
channel.QueueBind(item.Queue, item.Exchange, item.Queue, null);
channel.BasicConsume(item.Queue, false, consumerHandler);
}
}
Consumer handler:
public class FistConsumerHandler : AsyncEventingBasicConsumer
{
private readonly ILogger<FistConsumerHandler> _logger;
private Guid guid = Guid.NewGuid();
public FistConsumerHandler(
IModel channel,
ILogger<FistConsumerHandler> logger) : base(channel)
{
Received += ConsumeMessageAsync;
_logger = logger;
}
private async Task ConsumeMessageAsync(object sender, BasicDeliverEventArgs eventArgs)
{
try
{
// consumer logic to consume the message
}
catch (Exception ex)
{
}
finally
{
Model.Acknowledge(eventArgs);
}
}
}
The problem with this code is:
There is only 1 instance of FistConsumerHandler (as it is registered as a singleton). I need, for instance, 5.
I have only 1 channel; I need 1 channel per instance.
To sum up, the expected behavior using Microsoft.Extensions.DependencyInjection should be:
Create a connection (share this connection with all consumers)
When a message is received to the queue, it should be consumed by 1 consumer using its own channel
If another message is received to the queue, it should be consumed by another consumer
TL;DR: Create your own scope
I've done something similar in an app I'm working on, albeit not as cleanly as I would like (and thus why I came across this post). The key for me was using IServiceScopeFactory to get injected services and use them in a consumer method. In a typical HTTP request the API will automatically create/close scope for you as the request comes in / response goes out, respectively. But since this isn't an HTTP request, we need to create / close the scope for using injected services.
This is a simplified example for getting an injected DB context (but could be anything), assuming I've already set up the RabbitMQ consumer, deserialized the message as an object (FooEntity in this example):
public class RabbitMQConsumer
{
    private readonly IServiceProvider _serviceProvider;

    public RabbitMQConsumer(IServiceProvider serviceProvider)
    {
        this._serviceProvider = serviceProvider;
    }

    public async Task ConsumeMessageAsync()
    {
        // Using statement ensures we close scope when finished, helping avoid memory leaks
        using (var scope = this._serviceProvider.CreateScope())
        {
            // Get your service(s) within the scope
            var context = scope.ServiceProvider.GetRequiredService<MyDBContext>();
            // Do things with dbContext
        }
    }
}
Be sure to register RabbitMQConsumer as a singleton and not a transient in Startup.cs also.
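For completeness, a minimal sketch of the registrations assumed above (MyDBContext and the connection string name are placeholders from this example, not from the original post):

public void ConfigureServices(IServiceCollection services)
{
    // Scoped DbContext, resolved inside the manually created scope in the consumer
    services.AddDbContext<MyDBContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("Default")));

    // The consumer lives for the lifetime of the app and creates its own scopes
    services.AddSingleton<RabbitMQConsumer>();
}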
References:
Similar SO post
MS Docs
Is there a way to programmatically enable/disable an Azure function?
I can enable/disable a function using the portal under the "Manage" section, which causes a request to be sent to https://<myfunctionapp>.scm.azurewebsites.net/api/functions/<myfunction>
The JSON payload looks a bit like:
{
"name":"SystemEventFunction",
"config":{
"disabled":true,
"bindings":[
// the bindings for this function
]
}
// lots of other properties (mostly URIs)
}
I'm creating a management tool outside of the portal that will allow users to enable and disable functions.
Hoping I can avoid creating the JSON payload by hand, so I'm wondering if there is something in an SDK (WebJobs??) that has this functionality.
Further to #James Z.'s answer, I've created the following class in C# that allows you to programmatically disable / enable an Azure function.
The functionsSiteRoot constructor argument is the Kudu root of your Functions application, eg https://your-functions-web-app.scm.azurewebsites.net/api/vfs/site/wwwroot/
The username and password can be obtained from "Get publish profile" in the App Service settings for your Functions.
public class FunctionsHelper : IFunctionsHelper
{
private readonly string _username;
private readonly string _password;
private readonly string _functionsSiteRoot;
private WebClient _webClient;
public FunctionsHelper(string username, string password, string functionsSiteRoot)
{
_username = username;
_password = password;
_functionsSiteRoot = functionsSiteRoot;
_webClient = new WebClient
{
Headers = { ["ContentType"] = "application/json" },
Credentials = new NetworkCredential(username, password),
BaseAddress = functionsSiteRoot
};
}
public void StopFunction(string functionName)
{
SetFunctionState(functionName, isDisabled: true);
}
public void StartFunction(string functionName)
{
SetFunctionState(functionName, isDisabled: false);
}
private void SetFunctionState(string functionName, bool isDisabled)
{
var functionJson =
JsonConvert.DeserializeObject<FunctionSettings>(_webClient.DownloadString(GetFunctionJsonUrl(functionName)));
functionJson.disabled = isDisabled;
_webClient.Headers["If-Match"] = "*";
_webClient.UploadString(GetFunctionJsonUrl(functionName), "PUT", JsonConvert.SerializeObject(functionJson));
}
private static string GetFunctionJsonUrl(string functionName)
{
return $"{functionName}/function.json";
}
}
internal class FunctionSettings
{
public bool disabled { get; set; }
public List<Binding> bindings { get; set; }
}
internal class Binding
{
public string name { get; set; }
public string type { get; set; }
public string direction { get; set; }
public string queueName { get; set; }
public string connection { get; set; }
public string accessRights { get; set; }
}
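Hypothetical usage of the helper above (the site name, credentials and function name are placeholders):

var helper = new FunctionsHelper(
    username: "$your-functions-web-app",      // deployment user name from the publish profile
    password: "publish-profile-password",
    functionsSiteRoot: "https://your-functions-web-app.scm.azurewebsites.net/api/vfs/site/wwwroot/");

helper.StopFunction("SystemEventFunction");   // writes "disabled": true into that function's function.json
helper.StartFunction("SystemEventFunction");  // writes "disabled": false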
No, this is not possible currently. The disabled metadata property in function.json is what determines whether a function is enabled. The portal just updates that value when you enable/disable in the portal.
Not sure if it will meet your needs, but I'll point out that there is also a host.json functions array that can be used to control the set of functions that will be loaded (documented here). So for example, if you only wanted 2 of your 10 functions enabled, you could set this property to an array containing only those 2 function names (e.g. "functions": [ "QueueProcessor", "GitHubWebHook" ]), and only those will be loaded/enabled. However, this is slightly different than enable/disable in that you won't be able to invoke the excluded functions via the portal, whereas you can invoke disabled functions from the portal.
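For illustration, a host.json along those lines might look like this (the function names are the same examples as above; only the listed functions are loaded):

{
  "functions": [ "QueueProcessor", "GitHubWebHook" ]
}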
Further to #DavidGouge's answer above, the code he posted does work; I just tested it and will be using it in my app. However, it needs a couple of tweaks:
Remove the inheritance from IFunctionsHelper. I'm not sure what that interface is but it wasn't required.
Change the class definition for Binding as follows:
internal class Binding
{
public string name { get; set; }
public string type { get; set; }
public string direction { get; set; }
public string queueName { get; set; }
public string connection { get; set; }
public string accessRights { get; set; }
public string schedule { get; set; }
}
After that it would work.
P.S. I would have put this as a comment on the original answer, but I don't have enough reputation on Stack Overflow to post comments!
Using a combination of #Satya V's and #DavidGouge's solutions, I came up with this:
public class FunctionsHelper
{
private readonly ClientSecretCredential _tokenCredential;
private readonly HttpClient _httpClient;
public FunctionsHelper(string tenantId, string clientId, string clientSecret, string subscriptionId, string resourceGroup, string functionAppName)
{
var baseUrl =
$"https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Web/sites/{functionAppName}/";
var httpClient = new HttpClient
{
BaseAddress = new Uri(baseUrl)
};
_httpClient = httpClient;
_tokenCredential = new ClientSecretCredential(tenantId, clientId, clientSecret);
}
private async Task SetAuthHeader()
{
var accessToken = await GetAccessToken();
_httpClient.DefaultRequestHeaders.Authorization = AuthenticationHeaderValue.Parse($"Bearer {accessToken}");
}
private async Task<string> GetAccessToken()
{
return (await _tokenCredential.GetTokenAsync(
new TokenRequestContext(new[] {"https://management.azure.com/.default"}))).Token;
}
public async Task StopFunction(string functionName)
{
await SetFunctionState(functionName, isDisabled: true);
}
public async Task StartFunction(string functionName)
{
await SetFunctionState(functionName, isDisabled: false);
}
private async Task SetFunctionState(string functionName, bool isDisabled)
{
await SetAuthHeader();
var appSettings = await GetAppSettings();
appSettings.properties[$"AzureWebJobs.{functionName}.Disabled"] = isDisabled ? "1" : "0";
var payloadJson = JsonConvert.SerializeObject(new
{
kind = "<class 'str'>", appSettings.properties
});
var stringContent = new StringContent(payloadJson, Encoding.UTF8, "application/json");
await _httpClient.PutAsync("config/appsettings?api-version=2019-08-01", stringContent);
}
private async Task<AppSettings> GetAppSettings()
{
var res = await _httpClient.PostAsync("config/appsettings/list?api-version=2019-08-01", null);
var content = await res.Content.ReadAsStringAsync();
return JsonConvert.DeserializeObject<AppSettings>(content);
}
}
internal class AppSettings
{
public Dictionary<string, string> properties { get; set; }
}
The problem with using the Kudu API to update the function.json file is that it will be overwritten on any subsequent deploy. This uses Azure's REST API to update the Configuration of the application. You will first need an Azure Service Principal to use the API, though.
Using the Azure CLI, you can run az ad sp create-for-rbac to generate the Service Principal and get the client id and client secret. Because the UpdateConfiguration endpoint does not allow you to update a single value, and instead overwrites the entire Configuration object with the new values, you must first get all the current Configuration values, update the one you want, and then call the Update endpoint with the new Configuration keys and values.
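Hypothetical usage of the class above (all the ids are placeholders; the function name must match the function you want to toggle):

var helper = new FunctionsHelper(
    tenantId: "<tenant-id>",
    clientId: "<service-principal-client-id>",
    clientSecret: "<service-principal-secret>",
    subscriptionId: "<subscription-id>",
    resourceGroup: "my-resource-group",
    functionAppName: "my-function-app");

await helper.StopFunction("QueueTrigger");    // sets AzureWebJobs.QueueTrigger.Disabled = "1"
await helper.StartFunction("QueueTrigger");   // sets AzureWebJobs.QueueTrigger.Disabled = "0"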
I would imagine you can use Kudu REST API (specifically VFS) to update the disabled metadata property in function.json. Would that disable the function?
Here is the Kudu REST API. https://github.com/projectkudu/kudu/wiki/REST-API
The CLI command that is used to disable an Azure function through the CLI is documented here:
az functionapp config appsettings set --name <myFunctionApp> \
--resource-group <myResourceGroup> \
--settings AzureWebJobs.QueueTrigger.Disabled=true
I captured a Fiddler trace while running the above command.
The Azure CLI runs as a Python process, and that process was issuing requests to
https://management.azure.com to update the app settings.
I got a reference to the same endpoint in the REST API documentation below:
https://learn.microsoft.com/en-us/rest/api/appservice/webapps/updateapplicationsettings
Request URI:
PUT
https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Web/sites/{name}/config/appsettings?api-version=2019-08-01
Headers:
Authorization: Bearer <>
Content-Type: application/json; charset=utf-8
Request Body:
{"kind": "<class 'str'>", "properties":JSON}
We can hardcode the properties or get them dynamically. To disable the function, we have to update the Properties JSON node so that AzureWebJobs.QueueTrigger.Disabled = true.
To get the current properties you can use the Web Apps - List Application Settings endpoint.
The output looks as below (screenshot omitted).
Hope this helps :)
What about this: https://learn.microsoft.com/en-us/azure/azure-functions/disable-function?tabs=portal#localsettingsjson
This looks like the easiest solution for local development.
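That approach boils down to adding an AzureWebJobs.<FunctionName>.Disabled value; for example, a local.settings.json might contain something like this (the function name and storage setting are placeholders):

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobs.QueueTrigger.Disabled": "true"
  }
}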
An Azure WebJob obtains its connection string from a configuration parameter of the web application that runs the job - AzureWebJobsStorage.
I need to monitor two queues in different storage accounts using one WebJob.
Is it possible somehow to have multiple connection strings for a WebJob?
Related to this post, it is possible:
servicebus webjob different connection string for output or trigger
In your case, you'd like to bind to different storage accounts, so your functions can look like this:
public static void JobQueue1(
[QueueTrigger("queueName1"),
StorageAccount("storageAccount1ConnectionString")] string message)
{
}
public static void JobQueue2(
[QueueTrigger("queueName2"),
StorageAccount("storageAccount2ConnectionString")] string message)
{
}
You can also implement a custom INameResolver if you want to get the connection strings from the config:
public class ConfigNameResolver : INameResolver
{
public string Resolve(string name)
{
string resolvedName = ConfigurationManager.AppSettings[name];
if (string.IsNullOrWhiteSpace(resolvedName))
{
throw new InvalidOperationException("Cannot resolve " + name);
}
return resolvedName;
}
}
to use it:
var config = new JobHostConfiguration();
config.NameResolver = new ConfigNameResolver();
...
new JobHost(config).RunAndBlock();
And your new functions look like this:
public static void JobQueue1(
[QueueTrigger("queueName1"),
StorageAccount("%storageAccount2%")] string filename)
{
}
public static void JobQueue2(
[QueueTrigger("queueName2"),
StorageAccount("%storageAccount1%")] string filename)
{
}
storageAccount1 and storageAccount2 are the connection string keys in the appSettings.
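For illustration, the corresponding appSettings entries (for a full-framework WebJob using App.config, as implied by the ConfigurationManager-based resolver above; the connection strings are placeholders) might look like:

<appSettings>
  <add key="storageAccount1" value="DefaultEndpointsProtocol=https;AccountName=account1;AccountKey=..." />
  <add key="storageAccount2" value="DefaultEndpointsProtocol=https;AccountName=account2;AccountKey=..." />
</appSettings>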