Clearing history while debugging Azure Durable Functions - C#

Durable Functions keep their state in storage; this is what makes them work, but it is very troublesome while debugging and developing. I have a large number of runs that have not completed and that the system tries to run again when I start the process. Some of the runs have erroneous data which causes exceptions, while others were terminated early because something did not work as expected.
I don't want to run all the old cases when starting my application in debug (running against my local storage account). How can I automatically clear all data so that only new functions will trigger?

You can use the Azure Functions Core Tools to purge the orchestration instance state.
First, make sure the Core Tools are installed for your particular Azure Functions version. You can do this using the npm package manager. (Note that this is for Azure Functions V3.)
npm install -g azure-functions-core-tools@3
Then open a command prompt in the root directory of your Azure Functions project. The Core Tools require the host.json file from your project to identify your orchestration instances.
You can use the following to look at all of the available actions:
func durable
You can then purge the instance history using the following:
func durable purge-history

There is now this VS Code extension, which also has a 'Purge Durable Functions History' feature. Type 'Purge Durable Functions History' into the Command Palette and there you go. If you're not using VS Code, the same tool is available as a standalone service that you can either run locally or deploy to Azure.

You may call the PurgeInstanceHistoryAsync method with one of the following:
An orchestration instance ID
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
    [DurableClient] IDurableOrchestrationClient client,
    [ManualTrigger] string instanceId)
{
    return client.PurgeInstanceHistoryAsync(instanceId);
}
Time interval
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
    [DurableClient] IDurableOrchestrationClient client,
    [TimerTrigger("0 0 12 * * *")] TimerInfo myTimer)
{
    return client.PurgeInstanceHistoryAsync(
        DateTime.MinValue,
        DateTime.UtcNow.AddDays(-30),
        new List<OrchestrationStatus>
        {
            OrchestrationStatus.Completed
        });
}
Reference for code snippets above: https://learn.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#purge-instance-history

For everyone else wondering just how on earth to do this:
1. Install the Microsoft Azure Storage Explorer.
2. Add a connection to Azure Storage, but choose Local storage emulator.
3. Use the defaults / click Next.
4. Click on Local & Attached in the Explorer, then (Emulator Default Ports) (Key) -> Tables. Delete the task hub history table (named like DurableFunctionsHubHistory by default) and relaunch your application.
From this point, it's only a matter of dev time to figure out a way to do it programmatically.
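One programmatic option is a throwaway HTTP-triggered function in the same app that calls PurgeInstanceHistoryAsync over the whole task hub. This is only a sketch; the function name, route, and auth level below are illustrative, not from the post:

```csharp
// Hypothetical helper: purge the history of every completed, failed and
// terminated instance in the current task hub. Call it (e.g. from curl)
// before starting a fresh debug session.
[FunctionName("PurgeAllHistory")]
public static async Task<IActionResult> PurgeAllHistory(
    [HttpTrigger(AuthorizationLevel.Function, "delete")] HttpRequest req,
    [DurableClient] IDurableOrchestrationClient client)
{
    PurgeHistoryResult result = await client.PurgeInstanceHistoryAsync(
        DateTime.MinValue,   // createdTimeFrom: everything...
        DateTime.UtcNow,     // ...up to now
        new[]
        {
            OrchestrationStatus.Completed,
            OrchestrationStatus.Failed,
            OrchestrationStatus.Terminated
        });
    return new OkObjectResult($"Purged {result.InstancesDeleted} instance(s)");
}
```

Note that purging only applies to instances in a final state; runs that are still Pending or Running have to be terminated first.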

Related

Azure creates duplicate webjobs for Scheduled and Continuous

I have a scheduled web job created in Azure App Service, developed using C#. I want to change my web job from scheduled to continuous, but upon deploying the web job from Visual Studio it created a new instance of the web job in the Azure portal (1 Continuous and 1 Scheduled).
Duplicate Web Jobs in Azure Portal
webjob-publish-settings.json
Before:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "SampleWebJob",
  "runMode": "OnDemand"
}
After:
{
  "$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
  "webJobName": "SampleWebJob",
  "runMode": "Continuous"
}
I would like to overwrite the existing web job instead of creating a new one. Is there any way I can do it?
Also is there any way I can achieve this using ARM templates?
If the web job is the same one, Azure will overwrite it. In your case it is effectively a new web job, which is why a second one is created.
If you only have one web job and you want to change its type without keeping the old one after deployment, you can set Delete Existing Files to true.
If you have more than one web job, you have to delete the old one in the portal, or go to your web app's Kudu site and delete it from the app_data folder.

Azure Functions V2 With Service Bus Trigger in Development Team

We have Azure Functions (V2) that have been created with the Service Bus Trigger.
[FunctionName("MyFunctionName")]
public static async Task Run(
    [ServiceBusTrigger("%MyQueueName%", Connection = "ServiceBusConnectionString")]
    byte[] messageBytes,
    TraceWriter log)
{
    // code to handle message
}
The queue name is defined in the local.settings.json file:
{
  "Values": {
    ...
    "MyQueueName": "local-name-of-my-queue-in-azure",
    ...
  }
}
This works quite well as when deployed we can set the environment variables to be dev-queue-name, live-queue-name etc for the various deployed environments that we have.
However, when more than one developer is running locally, all the local function app runners connect to the same queue (the local.settings.json file is in source control, and needs to be in order to properly maintain the environment variables), and it is random which developer's application will pick up and process a given message.
What we need is for each developer to have their own queue, but we do not want to have to remove the JSON config file from source control so that we can maintain a different file (as it contains other pieces of information that need updating).
How can we get each developer / computer running our application to have a unique queue name (but known so that we can create the service bus queues in the cloud)?
You can override the setting value via Environment variables. Settings specified as a system environment variable take precedence over values in the local.settings.json file. Just define an Environment variable called MyQueueName.
Having said that, committing local.settings.json to source control is generally not recommended. I suppose you also store your Service Bus connection string there, which means you are storing secrets in source control.
Note that the default .gitignore file has it listed.
If you need it in source control, I would commit a version of local.settings.json with fake values for all variables, have each developer set the proper values locally, and then ignore the changes on commit (git update-index --assume-unchanged local.settings.json).

Azure Functions: CosmosDBTrigger not triggering in Visual Studio

TL;DR: This example is not working for me in VS2017.
I have an Azure Cosmos DB and want to fire some logic when something adds or updates there. For that, CosmosDBTrigger should be great.
The tutorial demonstrates creating the trigger in the Azure portal, and that works for me. However, doing exactly the same thing in Visual Studio (15.5.4, latest at the time of writing) does not.
I use the default Azure Functions template, predefined Cosmos DB trigger and nearly default code:
[FunctionName("TestTrigger")]
public static void Run(
    [CosmosDBTrigger("Database", "Collection", ConnectionStringSetting = "myCosmosDB")]
    IReadOnlyList<Document> input,
    TraceWriter log)
{
    log.Info("Documents modified " + input.Count);
    log.Info("First document Id " + input[0].Id);
}
App runs without errors but nothing happens when I actually do stuff in the database. So I cannot debug things and actually implement some required logic.
The connection string is specified in local.settings.json and is picked up: if I deliberately foul it, the trigger spits runtime errors.
It all looks as if the connection string points to the wrong database, but it is exactly the same string, copy-pasted, that I have in the trigger made via the Azure portal.
Where could I go wrong? What else can I check?
Based on your comment, you were running both the portal and local apps at the same time against the same collection and the same lease collection.
That means both apps were competing with each other for locks (leases) on the collection. The portal app won in your case and took the lease, so the local app was sitting doing nothing.
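If you do need both consumers to receive the same changes, each needs its own lease state. A minimal sketch, assuming the same binding as above; the CosmosDBTrigger attribute exposes a LeaseCollectionPrefix property for this, and the "local_" value here is illustrative:

```csharp
// Giving the locally debugged app its own lease prefix means it no longer
// competes with the portal app for the shared lease collection, so both
// receive the change feed independently.
[FunctionName("TestTrigger")]
public static void Run(
    [CosmosDBTrigger("Database", "Collection",
        ConnectionStringSetting = "myCosmosDB",
        LeaseCollectionPrefix = "local_")]
    IReadOnlyList<Document> input,
    TraceWriter log)
{
    log.Info("Documents modified " + input.Count);
}
```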

.NET class library as an Azure Function App with ServiceBusTopic trigger does not work

I created an Azure Function app locally in Visual Studio 2017 (not the Azure portal) by following the steps at the following URL.
https://blogs.msdn.microsoft.com/appserviceteam/2017/03/16/publishing-a-net-class-library-as-a-function-app
I followed the steps exactly to create a function that has a ServiceBusTopicTrigger. I added the following to my function.json:
{
  "disabled": false,
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "negotiatedaddcharge_test",
      "subscriptionName": "clientdispatches",
      "connection": "servicebusnac",
      "accessRights": "manage"
    }
  ]
}
My appsettings.json has the following:
{
  "IsEncrypted": true,
  "Values": {
    "servicebusnac": "Endpoint=MyCompanyEndPointPlaceHolder"
  }
}
When I run the function in Visual Studio I keep getting the error message "Microsoft.Azure.WebJobs.ServiceBus: Microsoft Azure WebJobs SDK ServiceBus connection string 'AzureWebJobsservicebusnac' is missing or empty."
Just for the heck of it I added another entry to the Values collection with the name "AzureWebJobsservicebusnac", but the same message still shows up. Is there something I am doing wrong?
Also, how do you unit test this function? I cannot access any function in the .csx file from my unit test project.
Thanks.
Edited:
I added information to make it clear that I am creating the function in Visual Studio rather than the Azure portal.
The Function App searches for your Service Bus connection strings in environment variables. You can set those from the Azure portal:
Go to your Function App.
Select Platform features tab above the editor.
Click Application settings.
Under App settings section add an entry with connection name and string.
The appsettings.json file is used to support local development only, and settings defined there are not published to Azure.
The solution is simple; I actually ran into this myself and it had me completely stumped for a while.
In your appsettings.json, change "IsEncrypted" from true to false. This should fix the issue you're seeing.
The error messages are less than ideal for this scenario; the Azure Functions team already has a bugfix in for it.
Hope this helps anyone who runs into this issue. (I swear, it was a week before I figured this out, and not without help.)

Continuous Web Job with timer trigger and Blob trigger

I have the following functions in the same web job console app, which uses the Azure WebJobs SDK and its extensions. The timer trigger queries an API endpoint for a file, does some additional work on it, and then saves the file to the blob container named blahinput. The second method, ProcessBlobMessage, is supposed to identify the new blob file in blahinput and do something with it.
public static void ProcessBlobMessage(
    [BlobTrigger("blahinput/{name}")] TextReader input,
    string name,
    [Blob("foooutput/{name}")] out string output)
{
    // do something
}

public static void QueryAnAPIEndPointToGetFile(
    [TimerTrigger("* */1 * * * *")] TimerInfo timerInfo)
{
    // download a file and save it to the blahinput container
}
The problem here is:
When I deploy the above web job as continuous, only the timer-triggered function gets triggered, while the function that is supposed to identify the new file never fires. Is it not possible to have two such triggers in the same web job?
From this article: How to use Azure blob storage with the WebJobs SDK
The WebJobs SDK scans log files to watch for new or changed blobs. This process is not real-time; a function might not get triggered until several minutes or longer after the blob is created. In addition, storage logs are created on a "best efforts" basis; there is no guarantee that all events will be captured. Under some conditions, logs might be missed. If the speed and reliability limitations of blob triggers are not acceptable for your application, the recommended method is to create a queue message when you create the blob, and use the QueueTrigger attribute instead of the BlobTrigger attribute on the function that processes the blob.
Until the new blob trigger strategy is released, BlobTriggers are not reliable. The trigger is based on Azure Storage Analytics logs, which are written on a best-effort basis.
There is an ongoing GitHub issue about this, and there is also a PR regarding a new blob scanning strategy.
That being said, check that you are using the latest WebJobs SDK (version 1.1.1), because there was an issue in prior versions that could lead to problems with BlobTriggers.
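The queue-based workaround the documentation recommends can be sketched like this, reusing the names from the question; the queue name "blah-queue" and the file-naming scheme are assumptions for illustration:

```csharp
// When the timer job saves the blob, it also enqueues the blob name. The
// QueueTrigger then fires promptly instead of waiting on log scanning, and
// the {queueTrigger} binding expression resolves to the enqueued name.
public static void QueryAnAPIEndPointToGetFile(
    [TimerTrigger("* */1 * * * *")] TimerInfo timerInfo,
    [Queue("blah-queue")] out string queueMessage)
{
    string blobName = "file-" + Guid.NewGuid() + ".txt";
    // ... download the file and save it to blahinput/blobName as before ...
    queueMessage = blobName;   // signals the processor immediately
}

public static void ProcessBlobMessage(
    [QueueTrigger("blah-queue")] string name,
    [Blob("blahinput/{queueTrigger}")] TextReader input,
    [Blob("foooutput/{queueTrigger}")] out string output)
{
    // do something with the blob contents
    output = input.ReadToEnd();
}
```

This keeps the blob as the payload store while using the queue purely as a reliable, low-latency notification channel.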