Deployed Azure Function (v2) not running - C#

I have an Azure Function v2, which works locally. When deployed to Azure, everything seems to be in working order (bin folder containing binaries and function name folder with function.json).
However, when I check whether it ran (by viewing the logs), there is nothing in the Monitor tab ("No data available"), the Kudu log stream, or the general function view:
2018-01-25T10:27:47 Welcome, you are now connected to log-streaming service.
2018-01-25T10:28:47 No new trace in the past 1 min(s).
2018-01-25T10:29:47 No new trace in the past 2 min(s).
Info:
Using a Timer triggered Azure Function v2 calling a:
Durable Function (but "normal" v2 functions do not work either)
Using TFS to deploy the code to ARM-generated resources
function.json:
{
  "generatedBy": "Microsoft.NET.Sdk.Functions.Generator-1.0.6",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "timerTrigger",
      "schedule": "0 */1 * * * *",
      "useMonitor": true,
      "runOnStartup": false,
      "name": "timerInfo"
    }
  ],
  "disabled": false,
  "scriptFile": "../bin/Namespace.dll",
  "entryPoint": "Namespace.RetrieveRedisMetrics.RunOrchestrator"
}
I have:
Deleted the function in the Azure portal
Renamed the function
Removed the blob storage "azure-webjobs-hosts" and durable-lease (forgot the exact name) containers for the function
The behavior seems similar to https://github.com/Azure/Azure-Functions/issues/618, but there is no resolution or comment on that bug.
I can't share the app name privately (according to https://github.com/Azure/azure-webjobs-sdk-script/wiki/Sharing-Your-Function-App-name-privately), since... the function isn't logging!
Any other ideas/recommendations?

When trying to publish the function from Visual Studio, to bypass the TFS continuous delivery factor, I got the following popup:
Apparently, you need to set the FUNCTIONS_EXTENSION_VERSION to beta.
ARM template:
{
  "name": "FUNCTIONS_EXTENSION_VERSION",
  "value": "beta" // was "~1"
}
Azure portal (Azure Function Application Settings):
It also works with Continuous Delivery. The only problem now is that AF v2 (currently in preview) takes 3 to 4 times longer than AF v1. But that is another story..
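For reference, the same setting can also be applied from the command line instead of the portal or an ARM template. A minimal sketch using the Azure CLI, where my-func-app and my-rg are placeholder names, not from the question:

```shell
# Point the Functions runtime at the v2 (beta) runtime.
# "my-func-app" and "my-rg" are placeholders -- substitute your own app and resource group.
az functionapp config appsettings set \
  --name my-func-app \
  --resource-group my-rg \
  --settings FUNCTIONS_EXTENSION_VERSION=beta
```

This is a configuration change against a live subscription, so run it only after `az login` and against the right app.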

Related

Issues running event hub triggered azure function locally, getting "Out of retries creating lease for partition" error on startup

I have set up the following simple Azure Function, which I tested a few days ago and which worked normally, but for some unknown reason it now returns an error.
The function setup is:
[FunctionName("EventUpdatedHubFunction")]
public async Task Run([EventHubTrigger(
    "%EventConsumer:Name%",
    ConsumerGroup = "%EventConsumer:ConsumerGroup%",
    Connection = "EventConsumer:ConnectionString")]
    EventData[] events)
{
    // logic
}
And the error I am getting when I am running the function is:
[2022-03-04T12:25:32.671Z] The listener for function 'EventUpdatedHubFunction' was unable to start.
[2022-03-04T12:27:09.897Z] The listener for function 'EventUpdatedHubFunction' was unable to start. Microsoft.Azure.EventHubs.Processor: Out of retries creating lease for partition 0. Microsoft.WindowsAzure.Storage: The response ended prematurely, with at least 157 additional bytes expected. System.Net.Http: The response ended prematurely, with at least 157 additional bytes expected.
And this is my config file (which I have no reason to believe is incorrect since it has worked in the past):
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "AzureWebJobsDashboard": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "EventConsumer:Name": "event_consumer_name_test",
    "EventConsumer:ConsumerGroup": "consumer_group_test",
    "EventConsumer:ConnectionString": "Endpoint=.........",
    "Database:ConnectionString": "Server=.;Database=TestDatabase;Trusted_Connection=True;"
  }
}
So far I have attempted to:
delete and reinstall the Azure Storage Emulator on my machine,
delete and recreate the Azure emulator database,
run the Azure Function solution on another machine (where it actually worked).
So at this point I am out of ideas of what might be causing the problem.
The solution was: in Azure Storage Explorer, under Local & Attached, find the (Emulator - Default Ports)/Blob Containers/azure-webjobs-eventhub container and delete everything in it to free up space.
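The same cleanup can be scripted rather than clicked through in Storage Explorer. A sketch, assuming the Azure CLI is installed and the storage emulator is running with its well-known development connection string:

```shell
# Delete every blob in the emulator's azure-webjobs-eventhub container,
# which holds the Event Hubs lease/checkpoint blobs the listener could not create.
az storage blob delete-batch \
  --source azure-webjobs-eventhub \
  --connection-string "UseDevelopmentStorage=true"
```

The container itself is left in place; the Functions host recreates the lease blobs on the next start.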

Sync gateway is not syncing with Couchbase server in Xamarin

I am trying to use Couchbase as an online and offline database in my Xamarin application.
The offline part works well, but it doesn't sync.
I followed this tutorial :
https://docs.couchbase.com/userprofile-couchbase-mobile/sync/userprofile/xamarin/userprofile_sync.html
I have installed Couchbase Server, created a bucket named userprofile, and created a user for which I have enabled the Application Access and Read Only Admin roles.
I also installed Sync Gateway and configured it.
Here is my configuration .json file:
{
  "log": ["*"],
  "databases": {
    "userprofile": {
      "server": "http://127.0.0.1:8091",
      "bucket": "user-profile",
      "username": "Maria",
      "password": "123456",
      "enable_shared_bucket_access": true,
      "import_docs": true,
      "num_index_replicas": 0,
      "delta_sync": { "enabled": true },
      "users": {
        "Maria": { "password": "123456" },
        "GUEST": { "disabled": false, "admin_channels": ["*"] }
      },
      "sync": `function (doc, oldDoc) {
        if (doc.sdk) {
          channel(doc.sdk);
        }
      }`
    }
  }
}
And I used this command to start Sync Gateway with that configuration:
C:\Program Files\Couchbase\Sync Gateway\sync_gateway sync-gateway-config-userprofile-walrus.json
I also changed the sync URL to ws://10.0.2.2:4984 because I am using the Android emulator.
But it does not sync between devices. Can anyone help me, please?
1) How are you verifying whether the sync is working (what steps did you follow for testing)? If you are following the instructions here, that won't work with the config you have above, because the only valid users configured in your config file are the ones with the credentials shown there, while the steps in the tutorial indicate a different username/password. So make sure you log into your app with the credentials from your config file and try again.
2) You indicated that you have a Couchbase Server running that you have configured for Sync Gateway access. But your config file indicates that you are using "walrus" mode (walrus mode is a memory-only mode intended only for dev/test purposes). In other words, in walrus mode you are not pointing to an actual Couchbase Server.
If you want to sync with a backend Couchbase Server (which you should), then replace the "walrus:" value in the "server" property of the config file with one that points to the Couchbase Server.
3) The best resource for troubleshooting is the Sync Gateway logs. They should give sufficient hints as to what is going wrong with the sync. Post any errors you are seeing if the above tips don't work.
4) Check out this tutorial for relevant background, probably from this point on.
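To make point 2 concrete, here is a sketch of the relevant fragment with the "server" property pointing at an actual Couchbase Server instead of walrus. The host, bucket, and credentials below are illustrative only, not taken from a verified working setup:

```
{
  "databases": {
    "userprofile": {
      "server": "couchbase://127.0.0.1",
      "bucket": "user-profile",
      "username": "Maria",
      "password": "123456"
    }
  }
}
```

The credentials here are the RBAC user created on Couchbase Server with the roles mentioned in the question, not the Sync Gateway "users" entries.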

Azure function TimerTrigger only running once

This is my function.json:
{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.28",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "timerTrigger",
      "schedule": "*/5 * * * * *",
      "useMonitor": true,
      "runOnStartup": false,
      "name": "myTimer"
    }
  ],
  "disabled": false,
  "scriptFile": "../bin/PullRequest.dll",
  "entryPoint": "PullRequest.PullRequest.Run"
}
This is my actual function:
[FunctionName("PullRequest")]
public static void Run([TimerTrigger("*/5 * * * * *")]TimerInfo myTimer, ILogger log)
{
    if (myTimer.IsPastDue)
    {
        log.LogInformation("Timer is running late!");
    }
    log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");
}
When I try to run this function in the Azure portal, it only runs once and then stops.
This is the log of the Azure Function:
I reran it and it still only runs once:
A timer trigger executes automatically as per its CRON expression, i.e. in your case this function will execute every five seconds. If you run it from the Azure portal, it will run only once.
If you want to check the timings of the last executions, you can go to the Monitor tab.
I have executed this locally and it's working as expected.
Like DavidG said: the logs you're showing us show that the function PullRequest ran at least 3 times.
Once where the executing log line isn't visible, so the reason is not clear
Once because the timer fired (#11:30:00)
Once because it was programmatically called via the host APIs (AKA manually)
Your CRON expression */5 * * * * * roughly translates into 'run this every time the number of seconds is divisible by 5'. That wouldn't match with the logs you're providing. Are you sure that's the CRON expression you're using?
Azure Functions uses the NCronTab library to interpret CRON expressions. A CRON expression includes six fields:
{second} {minute} {hour} {day} {month} {day-of-week}
Taken from Timer trigger for Azure Functions - CRON expressions.
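For reference, a few six-field expressions and how NCronTab reads them (the first is the one from the question; the others are illustrative):

```
*/5 * * * * *    -> every 5 seconds
0 */5 * * * *    -> every 5 minutes, on the minute
0 30 9 * * 1-5   -> 09:30:00 on weekdays (Mon-Fri)
```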
EDIT:
Functions running on a Timer Trigger are automatically triggered on the specified timer intervals. To have the Functions actually run, you (of course) need to have something running that executes that trigger. Otherwise: how can they be triggered?
In Azure, the Functions Runtime is responsible for triggering the Function at the right time. Locally, the func.exe tool that starts automatically when you debug the application will do this for you. But if that doesn't run, nothing will happen.
Azure Functions Core Tools lets you develop and test your functions on your local computer from the command prompt or terminal. Your local functions can connect to live Azure services, and you can debug your functions on your local computer using the full Functions runtime.
and
To run a Functions project, run the Functions host. The host enables triggers for all functions in the project.
Taken from Work with Azure Functions Core Tools.
In short: "The host enables triggers. It needs to run to have something that triggers any Function".
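Concretely, the local equivalent of "something that enables the triggers" is the Core Tools host, started from the project folder:

```shell
# Start the local Functions host; it enables the triggers for every function
# in the project and logs each timer execution to the console.
func start
```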

Azure function implemented locally won't work in the cloud

I have the following function, which I define locally and am able to debug it normally.
[FunctionName("QueueTrigger")]
public static void DUMMYFUNCTION(
    [QueueTrigger("myqueue", Connection = "AzureWebJobsStorage")]string myQueueItem,
    TraceWriter log)
{
    log.Info($"C# function processed: {myQueueItem}");
}
Locally, "AzureWebJobsStorage" is defined in the local.settings.json file to use the storage account which has "myqueue". In the function settings on Azure "AzureWebJobsStorage" is also set to the correct connection string (same as the one set locally). That means, I do not have the same problem as in Azure Function does not execute in Azure (No Error)
Now, I use Visual Studio Team Services to host my source code in a git repository. I've configured the deployment to use that source code and deploy the functions contained in it.
I don't think the issue is related to VSTS because the deployment is performed successfully and the function is displayed in my functions list:
After the deployment, the file function.json is generated and has the content below:
{
  "generatedBy": "Microsoft.NET.Sdk.Functions.Generator-1.0.8",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "queueTrigger",
      "connection": "AzureWebJobsStorage",
      "queueName": "myqueue",
      "name": "myQueueItem"
    }
  ],
  "disabled": false,
  "scriptFile": "../bin/myAssembly.dll",
  "entryPoint": "myAssembly.MyClass.DUMMYFUNCTION"
}
The problem is that when I add an item to the queue while debugging locally, the function is executed, but when the function is running on Azure it is not.
What do I need to change to make it work on Azure as well? I thought it would work out of the box.
Is your function running at all? If you go into Kudu, do you see any log showing that your function actually ran?
If your function is not running at all: Azure Functions v2 (using the .NET Standard 2.0 framework) is still in preview (beta). So when you deploy your function, make sure to go into the Application Settings of your function app and set the FUNCTIONS_EXTENSION_VERSION value to beta.
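To double-check what the deployed app actually has, the application settings can be listed with the Azure CLI. A sketch with placeholder names (my-func-app, my-rg):

```shell
# Confirm that AzureWebJobsStorage and FUNCTIONS_EXTENSION_VERSION
# hold the values you expect on the deployed function app.
az functionapp config appsettings list \
  --name my-func-app \
  --resource-group my-rg \
  --output table
```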

Custom activity failed in Azure Data Factory after running several hours

I was running a custom .NET activity in Azure Data Factory using an on-demand HDInsight cluster. The activity processes XML files stored in Azure Blob storage and moves them to Azure Data Lake Store. It failed after 28 hours of execution with the following error:
"Error in Activity: The request was aborted: The request was canceled.."
There were no log files available for this activity run, and the above error is not enough to troubleshoot the problem. How can I troubleshoot this?
I suggest you check the system log. Even if you don't have any user log, you should have a system log that will help you investigate your issue.
You should be able to check the system log from the details of the failed output dataset (see below).
When you create a linked service for on-demand HDInsight, you are supposed to specify "linkedServiceName" in typeProperties:
{
  "$schema": "http://datafactories.schema.management.azure.com/schemas/2015-08-01/Microsoft.DataFactory.LinkedService.json",
  "name": "LinkedServiceOnDemand_OnCloud_HDInsight",
  "properties": {
    "type": "HDInsightOnDemand",
    "typeProperties": {
      "clusterSize": 1,
      "timeToLive": "00:10:00",
      "linkedServiceName": "LinkedService_OnCloud_Storage"
    }
  }
}
Logs will be created in that storage account. If they weren't created, the only option left would be to get technical support from Microsoft.
