I have a WebJob using SDK 3:
public class NotificationsFunction
{
    public async Task CustomerNotificationFunctionAsync([TimerTrigger("0 * * * * *")] TimerInfo myTimer, ILogger log)
    {
        log.LogInformation("Running job");
        // ...
    }
}
It used to run correctly. Now, if I try to debug it locally, it just hangs: it finds the function but never triggers it.
Now, if I just change the method name to:
public class NotificationsFunction
{
    public async Task CustomerNotificationFunctionAsyncTest([TimerTrigger("0 * * * * *")] TimerInfo myTimer, ILogger log)
    {
        log.LogInformation("Running job");
        // ...
    }
}
It runs perfectly.
I have exactly the same problem when I deploy to Azure.
I have no idea why this happens (and it took me a while to find this problem)...
Has anyone ever had this problem? If so, what can I do?
Thanks
From this document:
TimerTrigger uses the Singleton feature of the WebJobs SDK to ensure that only a single instance of your triggered function is running at any given time. When the JobHost starts up, for each of your TimerTrigger functions a blob lease (the Singleton Lock) is taken. This distributed lock ensures that only a single instance of your scheduled function is running at any time. If the blob for that function is not currently leased, the function will acquire the lease and start running on schedule immediately. If the blob lease cannot be acquired, it generally means that another instance of that function is running, so the function is not started in the current host.
The lock ID is based on the fully qualified function name.
Since your WebJob is on SDK 3, you could add AddAzureStorageCoreServices:
var builder = new HostBuilder()
    .ConfigureWebJobs(b =>
    {
        b.AddTimers();
        b.AddAzureStorageCoreServices();
    })
    .Build();
I have exactly the same problem when I deploy to Azure.
Also note that if you're sharing the same storage account between your local development and production deployment, the Singleton locks (blob leases) will be shared. To get around this, you can either use a separate storage account for local development or perhaps stop the job running in Azure.
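For example, in local development you could point the WebJob at the storage emulator (or a dedicated dev account) via a local appsettings.json, so the singleton blob lease is never shared with the deployed app. A minimal sketch, assuming the default WebJobs SDK 3 configuration sources:
{
  "AzureWebJobsStorage": "UseDevelopmentStorage=true"
}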
Related
I am getting the below error when running my timer function app in Azure. It is just basic code, and I want the log to show up in Application Insights.
public static class Function1
{
[FunctionName("Function1")]
public static void Run([TimerTrigger("0 * * * * *")]TimerInfo myTimer, ILogger log)
{
log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
}
}
Here is the workaround I used to identify the cause of this issue:
1. Created an Azure Function (stack: .NET 3.1) of type Timer Trigger with your given schedule "0 * * * * *" and the connection string of a storage account created in the Azure portal; it ran successfully locally.
2. Deleted the storage account from the Azure portal and tried to run the function locally, which gave me the error: The listener for function 'Function1' was unable to start.
3. Recovered the storage account and then deployed to the Function App in the Azure portal; it ran successfully in the cloud as well.
In the Azure cloud, yes, as @Skin said, it would be a storage account configuration issue.
A few steps to resolve this issue:
Check that the AzureWebJobsStorage value contains the correct storage account connection string.
Check that the storage account has not been deleted.
Check the Networking options on the Function App; the firewall might be blocking or restricting access to the associated storage account.
In my case it was a firewall issue: I added the appropriate virtual network and subnet under Storage Account > Networking > Firewalls and virtual networks.
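As a quick sanity check for the first two points, a small standalone snippet (assuming the Azure.Storage.Blobs package) can confirm that the value in AzureWebJobsStorage parses and that the account is reachable from wherever the code runs:
using System;
using Azure.Storage.Blobs;

// Read the same setting the Functions host uses and probe the account.
// A failure here usually means a bad or missing connection string, a deleted
// account, or a firewall/VNet rule blocking access.
string connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
var client = new BlobServiceClient(connectionString);
await client.GetPropertiesAsync(); // throws if the account cannot be reached
Console.WriteLine($"Storage account '{client.AccountName}' is reachable.");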
Durable Functions keep state in storage; this is what makes them work, but it is very troublesome while debugging and developing. I have a large number of runs which have not completed and which the system tries to run again when I start the process. Some of the runs have erroneous data which causes exceptions, while others were terminated early because something did not work as expected.
I don't want to run all the old cases when starting my application in debug (running against my local storage account). How can I automatically clear all data so only new functions will trigger?
You can use the Azure Functions Core Tools to purge the orchestration instance state.
First you need to make sure that the Core Tools are installed for your particular Azure Functions version. You can do this using the NPM package manager. (Note that the command below is for Azure Functions V3.)
npm install -g azure-functions-core-tools@3
Then open a command prompt in the root directory of your Azure Functions project. The Azure Core Tools requires the host.json file from your project to identify your orchestration instances.
You can use the following to look at all of the available actions:
func durable
You can then purge the instance history using the following:
func durable purge-history
There is now this VS Code extension, which also has a 'Purge Durable Functions History' feature. Type 'Purge Durable Functions History' into your Command Palette, and there you go. If you're not using VS Code, the same tool is available as a standalone service that you can either run locally or deploy into Azure.
You may call the PurgeInstanceHistoryAsync method with one of the following:
An orchestration instance ID
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[ManualTrigger] string instanceId)
{
return client.PurgeInstanceHistoryAsync(instanceId);
}
Time interval
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[TimerTrigger("0 0 12 * * *")]TimerInfo myTimer)
{
return client.PurgeInstanceHistoryAsync(
DateTime.MinValue,
DateTime.UtcNow.AddDays(-30),
new List<OrchestrationStatus>
{
OrchestrationStatus.Completed
});
}
Reference for code snippets above: https://learn.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#purge-instance-history
For everyone else wondering just how on earth to do this:
1. Install the Microsoft Azure Storage Explorer.
2. Add a connection to Azure Storage, but choose "Local storage emulator".
3. Use the defaults and click Next.
4. Click on "Local & Attached" in the Explorer, then "(Emulator - Default Ports) (Key)" > Tables. Delete the task hub history table and relaunch your application.
From this point, it's only a matter of dev time to figure out a way to do it programmatically.
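For example, a rough sketch of that programmatic route, assuming the Azure.Data.Tables package and the local storage emulator (the "<TaskHub>History" / "<TaskHub>Instances" table naming is the Durable Functions default; queues and lease blobs also hold state, but for local debugging clearing the tables is usually enough):
using Azure.Data.Tables;

// Connect to the local storage emulator (Azurite / Storage Emulator).
var service = new TableServiceClient("UseDevelopmentStorage=true");

// Durable Functions keeps orchestration state in "<TaskHub>History" and
// "<TaskHub>Instances" tables; dropping them clears all instance state.
await foreach (var table in service.QueryAsync())
{
    if (table.Name.EndsWith("History") || table.Name.EndsWith("Instances"))
    {
        await service.DeleteTableAsync(table.Name);
    }
}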
I have a working Azure Function which puts a message on a Service Bus queue.
public static void Run(
[TimerTrigger("0 * * * *")]TimerInfo myTimer,
[ServiceBus("queueName", Connection = "ServiceBusConnection")] ICollector<Message> queue,
TraceWriter log)
{
//function logic here
}
The connection string is currently in plain text in the app settings. Is it possible to have this encrypted and still use the built-in integration between Azure Functions and Service Bus?
I have tried creating a ServiceBusAttribute at runtime but it doesn't look like you can pass it a connection string.
Any help is much appreciated
This is currently not possible. There is a feature request to retrieve secrets used in bindings from KeyVault: https://github.com/Azure/azure-webjobs-sdk/issues/746
The GitHub issue also describes a workaround to retrieve the secrets from KeyVault at build time within VSTS.
Azure Functions have a time limit of 10 minutes. Suppose I have a long-running task such as downloading a file that takes an hour.
[FunctionName("PerformDownload")]
[return: Queue("download-path-queue")]
public static async Task<string> RunAsync([QueueTrigger("download-url-queue")] string url, TraceWriter log)
{
string downloadPath = Path.Combine(Path.GetTempPath(), Guid.NewGuid().ToString);
log.Info($"Downloading file at url {url} to {downloadPath} ...");
using (var client = new WebClient())
{
await client.DownloadFileAsync(new Uri(url), myLocalFilePath);
}
log.Info("Finished!");
}
Is there any hacky way to make something like this start and then resume in another function before the time limit expires? Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
(On a slightly related note, are plain Azure WebJobs obsolete? I can't find them under Resources.)
Adding for others who might come across this post: Workflows composed of several Azure Functions can be created in code using the Durable Functions extension, which can be used to create orchestration functions that schedule async tasks, shut down, and are reawakened when said async work is complete.
They're not a direct solution for long-running tasks that require an open TCP connection, such as downloading a file (for that, a function running on an App Service plan has no execution time limit), but they can be used to integrate such tasks into a larger workflow.
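As an illustration, a minimal orchestration of that shape might look like the sketch below (Durable Functions 2.x types; the activity names "DownloadFile" and "ProcessFile" are made up for the example):
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class DownloadWorkflow
{
    [FunctionName("DownloadWorkflow")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        string url = context.GetInput<string>();

        // Each activity runs as its own function invocation with its own timeout,
        // and the orchestrator is unloaded from memory while it awaits them.
        string downloadPath = await context.CallActivityAsync<string>("DownloadFile", url);
        await context.CallActivityAsync("ProcessFile", downloadPath);
    }
}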
Is there any hacky way to make something like this start and then resume in another function before the time limit expires?
If you are on a Consumption Plan you have no control over how long your Function App runs, and so it would not be reliable to use background threads that continue running after your Function entry point completes.
On an App Service plan you're running on VMs you pay for, so you can configure your Function App to run continuously. Also, as far as I know, you don't have to have a function timeout on an App Service plan, so your main function entry point can run for as long as you want.
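For reference, the timeout is the functionTimeout value in host.json; on a Dedicated (App Service) plan it can be raised or, on Functions v2 and later, disabled entirely. A minimal sketch, assuming a v2+ host:
{
  "version": "2.0",
  "functionTimeout": "-1"
}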
Or is there a better way altogether to integrate some long task like this into a workflow that uses Azure Functions?
Yes. Use Azure Data Factory to copy data into Blob Storage, and then process it. The Data Factory pipeline can call Functions both before and after the copy activity.
One additional option, depending on the details of your workload, is to take advantage of Azure Container Instances. You can have your Azure Function spin up a container, process your workload (download your file, do some processing, etc.), and then shut the container down for you. Spin-up time is typically a few seconds and you only pay for what you use (no need for a dedicated App Service plan or VM instance). More details on ACI here.
10 minutes (based on the timeout setting in the host.json file) after the last function of your function app has been triggered, the VM running your function app will stop.
To prevent this behavior, you can have an empty TimerTrigger function that runs every 5 minutes; it costs next to nothing and will keep your app up and running.
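A bare-bones version of such a keep-alive function might look like the sketch below (the function name and the ILogger-based signature are just illustrative):
[FunctionName("KeepAlive")]
public static void Run([TimerTrigger("0 */5 * * * *")] TimerInfo myTimer, ILogger log)
{
    // No real work is needed: the trigger firing every 5 minutes is enough
    // to keep the function app's instance from being unloaded after idling.
    log.LogInformation($"Keep-alive tick at {DateTime.UtcNow}");
}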
I think the issue is related to cold start. You can find more details about it here:
https://markheath.net/post/avoiding-azure-functions-cold-starts
What you can do is create a timer-triggered Azure Function that "pings" your long-running function to keep it "warm":
namespace NewProject
{
    public static class PingTimer
    {
        [FunctionName("PingTimer")]
        public static async Task Run([TimerTrigger("0 */4 * * * *")] TimerInfo myTimer, TraceWriter log)
        {
            // This CRON expression executes every 4 minutes
            log.Info($"PingTimer function executed at: {DateTime.Now}");
            var client = new HttpClient();
            string url = @"<Azure function URL>";
            var result = await client.GetAsync(new Uri(url));
            log.Info($"PingTimer function execution completed at: {DateTime.Now}");
        }
    }
}
I have the following functions in the same WebJob console app, which uses the Azure WebJobs SDK and its extensions. The timer-triggered function queries an API endpoint for a file, does some additional work on it, and then saves the file to the blob container named blahinput. The second method, ProcessBlobMessage, is supposed to identify the new blob file in blahinput and do something with it.
public static void ProcessBlobMessage([BlobTrigger("blahinput/{name}")] TextReader input,
    string name, [Blob("foooutput/{name}")] out string output)
{ output = null; /* do something */ }

public static void QueryAnAPIEndPointToGetFile([TimerTrigger("* */1 * * * *")] TimerInfo timerInfo)
{ /* download a file and save it to the blahinput blob container */ }
The problem here is: when I deploy the above WebJob as continuous, only the timer-triggered function seems to get triggered, while the function that is supposed to identify the new file never fires. Is it not possible to have two such triggers in the same WebJob?
From this article: How to use Azure blob storage with the WebJobs SDK
The WebJobs SDK scans log files to watch for new or changed blobs. This process is not real-time; a function might not get triggered until several minutes or longer after the blob is created. In addition, storage logs are created on a "best efforts" basis; there is no guarantee that all events will be captured. Under some conditions, logs might be missed. If the speed and reliability limitations of blob triggers are not acceptable for your application, the recommended method is to create a queue message when you create the blob, and use the QueueTrigger attribute instead of the BlobTrigger attribute on the function that processes the blob.
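A rough sketch of that queue-based pattern, reusing the container names from the question (the queue name "blahinput-ready" and the message payload are assumptions):
// Timer function: downloads the file, saves it to the blahinput container as
// before, and then drops the new blob's name onto a queue.
public static void QueryAnAPIEndPointToGetFile(
    [TimerTrigger("* */1 * * * *")] TimerInfo timerInfo,
    [Queue("blahinput-ready")] out string queueMessage)
{
    string name = Guid.NewGuid() + ".txt";
    // ... download the file and save it to blahinput/{name} as before ...
    queueMessage = name;
}

// Queue-triggered function: fires promptly for every message instead of
// relying on the blob-scanning mechanism described above.
public static void ProcessBlobMessage(
    [QueueTrigger("blahinput-ready")] string name,
    [Blob("blahinput/{queueTrigger}")] TextReader input,
    [Blob("foooutput/{queueTrigger}")] out string output)
{
    output = input.ReadToEnd();
}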
Until the new blob trigger strategy is released, BlobTriggers are not reliable. The trigger is based on Azure Storage Analytics logs, which store logs on a best-effort basis.
There is an ongoing Github issue about this and there is also a PR regarding a new Blob scanning strategy.
This being said, check that you are using the latest WebJobs SDK version, 1.1.1, because there was an issue in prior versions that could lead to problems with BlobTriggers.