TL;DR: This example is not working for me in VS2017.
I have an Azure Cosmos DB and want to fire some logic when something is added or updated there. For that, CosmosDBTrigger should be ideal.
The tutorial demonstrates creating the trigger in the Azure Portal, and that works for me. However, doing exactly the same thing in Visual Studio (15.5.4, the latest as of now) does not.
I use the default Azure Functions template, the predefined Cosmos DB trigger, and nearly default code:
[FunctionName("TestTrigger")]
public static void Run(
[CosmosDBTrigger("Database", "Collection", ConnectionStringSetting = "myCosmosDB")]
IReadOnlyList<Document> input,
TraceWriter log)
{
log.Info("Documents modified " + input.Count);
log.Info("First document Id " + input[0].Id);
}
The app runs without errors, but nothing happens when I actually add or change documents in the database. So I cannot debug things and implement the required logic.
The connection string is specified in local.settings.json and is picked up: if I deliberately break it, the trigger throws runtime errors.
It all looks as if the connection string points to the wrong database. But it is exactly the same string, copy-pasted, that I use in the trigger created via the Azure Portal.
Where could I go wrong? What else can I check?
Based on your comment, you were running both the portal and local apps at the same time, against the same collection and the same lease collection.
That means both apps were competing with each other for locks (leases) on collection processing. In your case the portal app won and took the lease, so the local app was sitting idle.
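If you do want both running at the same time, one workaround is to give the local app its own lease collection so the two do not fight over the same leases. A minimal sketch based on the trigger from the question; the lease collection name here is an assumption, not something from the original post:

[FunctionName("TestTrigger")]
public static void Run(
    [CosmosDBTrigger("Database", "Collection",
        ConnectionStringSetting = "myCosmosDB",
        // Hypothetical dedicated lease collection for local debugging,
        // so the local app does not compete with the portal app.
        LeaseCollectionName = "leases-local",
        CreateLeaseCollectionIfNotExists = true)]
    IReadOnlyList<Document> input,
    TraceWriter log)
{
    log.Info("Documents modified " + input.Count);
}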
Related
I have an Azure Function (v3), developed in Visual Studio 2019, that is deployed via a DevOps pipeline when the web app is deployed. It looks like this:
[FunctionName("GetUserCoordinatesFunction")]
public async Task Run([QueueTrigger("getusercoordinatesquery", Connection = "AzureWebJobsStorage")] GetUserCoordinatesQuery request, ILogger log)
{
This function works perfectly locally. If I emulate storage, it works... if I replace my storage string with my production storage connection string, it still works.
In other words, my app writes messages to the storage without any issue.
I've double checked, triple checked, quadruple checked my AzureWebJobsStorage string in Azure Function Configuration... everything is correct!
In production only, my function "GetUserCoordinatesFunction" does not detect messages in queue. Why?
Please and thanks.
I tried this, but I'm on v3 and that didn't fix it
Azure function implemented locally won't work in the cloud
To summarize the comments above: according to the screenshot you provided, it seems the deployment didn't succeed. Just deploy the function from VS to Azure again.
Durable Functions keep state in storage; this is what makes them work, but it is very troublesome while debugging and developing. I have a large number of runs which have not completed and which the system tries to run again when I start the process. Some of the runs have erroneous data which causes exceptions, while others were terminated early because something did not work as expected.
I don't want to run all the old cases when starting my application in debug (running against my local storage account). How can I automatically clear all data so only new functions will trigger?
You can use the Azure Functions Core Tools to purge the orchestration instance state.
First, make sure the Azure Functions Core Tools are installed for your particular Azure Functions version. You can do this using the NPM package manager. (Note that the command below is for Azure Functions V3.)
npm install -g azure-functions-core-tools#3
Then open a command prompt in the root directory of your Azure Functions project. The Core Tools require the host.json file from your project to identify your orchestration instances.
You can use the following to look at all of the available actions:
func durable
You can then purge the instance history using the following:
func durable purge-history
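If you only want to purge a subset, the command also accepts filters; for example (flag names as of recent Core Tools versions, see func durable purge-history --help):
func durable purge-history --created-before 2021-01-01T00:00:00Z --runtime-status completed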
There is now also a VS Code extension which has a 'Purge Durable Functions History' feature. Type 'Purge Durable Functions History' in your Command Palette, and there you go. If you're not using VS Code, the same tool is available as a standalone service that you can either run locally or deploy into Azure.
You may call the PurgeInstanceHistoryAsync method with one of the following:
An orchestration instance ID
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[ManualTrigger] string instanceId)
{
return client.PurgeInstanceHistoryAsync(instanceId);
}
Time interval
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[TimerTrigger("0 0 12 * * *")]TimerInfo myTimer)
{
return client.PurgeInstanceHistoryAsync(
DateTime.MinValue,
DateTime.UtcNow.AddDays(-30),
new List<OrchestrationStatus>
{
OrchestrationStatus.Completed
});
}
Reference for code snippets above: https://learn.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#purge-instance-history
For everyone else wondering just how on earth to do this:
1. Install the Microsoft Azure Storage Explorer.
2. Add a connection to Azure Storage, but choose "Local storage emulator".
3. Use the defaults / click Next.
4. Click on Local & Attached in the Explorer, then (Emulator Default Ports) (Key) -> Tables. Delete the task hub history table and relaunch your application.
From this point, it's only a matter of dev time to figure out a way to do it programmatically.
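If you want to script this rather than click through Storage Explorer, here is a minimal sketch using the Azure.Data.Tables package against the local storage emulator. The table names are assumptions: the default task hub produces tables like TestHubNameHistory and TestHubNameInstances, so adjust them to your task hub name.

using System;
using System.Threading.Tasks;
using Azure.Data.Tables;

class PurgeLocalDurableState
{
    static async Task Main()
    {
        // Connect to the local storage emulator (Azurite / Storage Emulator).
        var service = new TableServiceClient("UseDevelopmentStorage=true");

        // Delete the Durable Functions state tables; the host recreates them
        // empty on the next start. Names assume the default "TestHubName" hub.
        await service.DeleteTableAsync("TestHubNameHistory");
        await service.DeleteTableAsync("TestHubNameInstances");

        Console.WriteLine("Durable state tables deleted.");
    }
}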
We have Azure Functions (V2) that have been created with the Service Bus Trigger.
[FunctionName("MyFunctionName")]
public static async Task Run(
[ServiceBusTrigger("%MyQueueName%", Connection = "ServiceBusConnectionString")]
byte[] messageBytes,
TraceWriter log)
{
// code to handle message
}
The queue name is defined in the local.settings.json file:
{
  "Values": {
    ...
    "MyQueueName": "local-name-of-my-queue-in-azure",
    ...
  }
}
This works quite well: when deployed, we can set the environment variable to dev-queue-name, live-queue-name, etc. for the various deployed environments that we have.
However, the local.settings.json file is in source control (and needs to be, to properly maintain the environment variables). When more than one developer runs the function app locally, they all connect to the same queue, and it is random which developer's application will pick up and process the messages.
What we need is for each developer to have their own queue, but we do not want to have to remove the JSON config file from source control just so each of us can maintain a different file (it contains other pieces of information that need updating).
How can we get each developer / computer running our application to have a unique queue name (but a known one, so that we can create the Service Bus queues in the cloud)?
You can override the setting value via Environment variables. Settings specified as a system environment variable take precedence over values in the local.settings.json file. Just define an Environment variable called MyQueueName.
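For example, each developer could set a user-level environment variable once and then restart Visual Studio so the Functions host picks it up (the queue name here is hypothetical):
setx MyQueueName "dev-alice-queue-name"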
Having said that, I think that committing local.settings.json to source control is generally not recommended. I suppose you also store your Service Bus connection string there, which means you store your secrets in source control.
Note that the default .gitignore file has it listed.
If you need it in source control, I would commit a version of local.settings.json with fake values for all variables, then have each developer set the proper values locally and ignore the changes on commit (set assume-unchanged).
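For reference, the git command for that is:
git update-index --assume-unchanged local.settings.json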
I created an Azure Function app locally in Visual Studio 2017 (not the Azure portal) by following the steps at the following URL.
https://blogs.msdn.microsoft.com/appserviceteam/2017/03/16/publishing-a-net-class-library-as-a-function-app
I followed the steps exactly to create a function that has a "ServiceBusTopicTrigger". I added the following to my function.json:
{
  "disabled": false,
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "negotiatedaddcharge_test",
      "subscriptionName": "clientdispatches",
      "connection": "servicebusnac",
      "accessRights": "manage"
    }
  ]
}
My appsettings.json has the following:
{
  "IsEncrypted": true,
  "Values": {
    "servicebusnac": "Endpoint=MyCompanyEndPointPlaceHolder"
  }
}
When I run the function in Visual Studio I keep getting an error message: "Microsoft.Azure.WebJobs.ServiceBus: Microsoft Azure WebJobs SDK ServiceBus connection string 'AzureWebJobsservicebusnac' is missing or empty."
Just for the heck of it I added another entry to the values collection with the name "AzureWebJobsservicebusnac", but still the same message shows up. Is there something that I am doing wrong?
Also, how do you unit test this function? I cannot access any function in the .csx file from my unit test project.
Thanks.
Edited:
I added information to make it clear that I am creating the function in Visual Studio rather than the Azure portal.
The Function App will search for your Service Bus connection string in environment variables. You can set those from the Azure portal:
Go to your Function App.
Select Platform features tab above the editor.
Click Application settings.
Under App settings section add an entry with connection name and string.
The appsettings.json file is used to support local development only, and settings defined there are not published to Azure.
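If you prefer a command line over the portal, the same app setting can be added with the Azure CLI; the app name, resource group, and connection string below are placeholders:
az functionapp config appsettings set --name <function-app-name> --resource-group <resource-group> --settings "servicebusnac=<connection-string>"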
The solution is simple; I actually ran into this myself and it had me completely stumped for a while.
In your appsettings.json, change "IsEncrypted" from true to false. This should fix the issue you're seeing.
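With the file from the question, that would be:
{
  "IsEncrypted": false,
  "Values": {
    "servicebusnac": "Endpoint=MyCompanyEndPointPlaceHolder"
  }
}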
The error messages are less than ideal for this scenario; the Azure Functions team already has a bugfix in for it.
Hope this helps anyone who runs into this issue. (I swear, it was a week before I figured this out, and not without help.)
I've been working to try and convert Microsoft's EWS Streaming Notification Example to a service (MS source: http://www.microsoft.com/en-us/download/details.aspx?id=27154).
I tested it as a console app. I then used a generic service template and got it to the point where it would compile, install, and start. It stops after about 10 seconds with the ubiquitous "the service on local computer started and then stopped."
So I went back in, upgraded to Visual C# 2013 Express, and used NLog to add a bunch of trace commands so I could see where it was when it exited.
The last place I can find it is in the example code, in the SynchronizeChanges function:
public static void SynchronizeChanges(FolderId folderId)
{
    logger.Trace("Entering SynchronizeChanges");
    bool moreChangesAvailable;
    do
    {
        logger.Trace("Synchronizing changes...");
        //Console.WriteLine("Synchronizing changes...");

        // Get all changes since the last call. The synchronization cookie is stored in the
        // _SynchronizationState field.
        // Only the ids are requested. Additional properties should be fetched via GetItem calls.
        logger.Trace("Getting changes into var changes.");
        var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
                                                       SyncFolderItemsScope.NormalItems,
                                                       _SynchronizationState);

        // Update the synchronization cookie
        logger.Trace("Updating _SynchronizationState");
The log file shows the trace message "Getting changes into var changes." but not the "Updating _SynchronizationState" message.
So it never gets past the var changes = _ExchangeService.SyncFolderItems(...) call.
I cannot for the life of me figure out why it's just exiting. There are many examples of EWS streaming notifications; I have three that compile and run just fine, but as far as I can tell nobody has posted an example of it done as a service.
If you don't see the "Updating..." message, it's likely the sync threw an exception. Wrap it in a try/catch, roughly like the sketch below.
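Around the call from your SynchronizeChanges function, that could look like this:

logger.Trace("Getting changes into var changes.");
try
{
    var changes = _ExchangeService.SyncFolderItems(folderId, PropertySet.IdOnly, null, 512,
                                                   SyncFolderItemsScope.NormalItems,
                                                   _SynchronizationState);
    // ... continue processing changes as before ...
}
catch (Exception ex)
{
    // Surface the failure in the log instead of letting the service die silently.
    logger.Error("SyncFolderItems failed: " + ex);
    throw;
}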
OK, now that I see the error, this looks like your garden-variety permissions problem. When you ran this as a console app, you likely presented your default credentials to Exchange, which were those of your login ID. For a Windows service running under one of the built-in accounts (e.g. Local System), the default credentials will not have access to Exchange.
To rectify, either (1) run the service under the same account you used for the console app, or (2) add explicit credentials to the ExchangeService object.
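For option (2), with the EWS Managed API that looks something like this (the credential values are placeholders):

// Present explicit credentials instead of the service account's defaults.
_ExchangeService.UseDefaultCredentials = false;
_ExchangeService.Credentials = new WebCredentials("user@yourdomain.com", "password");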