I have a scheduled WebJob, developed in C#, in an Azure App Service. I want to change my WebJob from scheduled to continuous, but upon deploying the WebJob from Visual Studio it created a new instance of the WebJob in the Azure portal (1 continuous and 1 scheduled).
Duplicate Web Jobs in Azure Portal
webjob-publish-settings.json
Before:
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
"webJobName": "SampleWebJob",
"runMode": "OnDemand"
}
After:
{
"$schema": "http://schemastore.org/schemas/json/webjob-publish-settings.json",
"webJobName": "SampleWebJob",
"runMode": "Continuous"
}
I would like to overwrite the existing web job instead of creating a new one. Is there any way I can do it?
Also, is there any way I can achieve this using ARM templates?
If the WebJob is the same one, Azure will overwrite it. In your case it is effectively a new WebJob (the run mode changed), which is why a second one gets created.
If you only have one WebJob and you don't want to keep it after deployment, you could set Delete Existing Files to true.
If you have more than one WebJob, you have to delete the old one in the portal, or go to your web app's Kudu site and delete it from the App_Data/jobs folder.
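If you prefer to script the cleanup, Kudu also exposes a WebJobs REST API that can delete a job remotely. A rough sketch in C#, assuming your app's deployment (publish profile) credentials; the api/triggeredwebjobs and api/continuouswebjobs paths are Kudu's documented WebJobs endpoints, while the app name, credentials, and job name here are placeholders:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class DeleteOldWebJob
{
    static async Task Main()
    {
        // Deployment (publish profile) credentials -- placeholders.
        var userName = "$my-app";
        var password = "publish-profile-password";
        var basicAuth = Convert.ToBase64String(Encoding.ASCII.GetBytes(userName + ":" + password));

        using (var client = new HttpClient { BaseAddress = new Uri("https://my-app.scm.azurewebsites.net/") })
        {
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", basicAuth);

            // Scheduled/on-demand jobs are "triggered" jobs in Kudu; continuous jobs
            // live under api/continuouswebjobs instead.
            var response = await client.DeleteAsync("api/triggeredwebjobs/SampleWebJob");
            response.EnsureSuccessStatusCode();
        }
    }
}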
I am deploying a C# ASP.NET Core web service to Azure using Pulumi. I can deploy it in 3 ways:
Run it locally from Visual Studio, i.e., not using Azure at all.
Deploy it to Azure from my local developer computer.
Deploy it to Azure from Jenkins (which runs on a different computer).
I have this problem:
When I run it locally, I can call the service fine, e.g. from Postman or from a C# application. The web service returns what I expect.
When I deploy it to Azure from my local machine, I can also call it fine. The web service returns what I expect.
When I deploy it to Azure from Jenkins and then try to call the webservice, it returns "NotFound" to all calls no matter what I do. (This presumably means HTTP 404.)
The deployments in 2 and 3 should be exactly the same. My question is: How can I find out what the difference is between these two deployments in Azure?
The Jenkins-deployed webservice exhibits the following curious behaviour:
It does not log any exceptions (even when I wait several minutes for them to show up).
If I go to my resource group -> Application Insights -> Logs and search for "requests", it does list requests. Curiously, it says that it returned HTTP 200 to all the requests, even though what I get when calling them is 404.
The above is true even for web service calls that should never return 200 (they should return 201).
The above is true even for web service calls to methods that shouldn't even exist (i.e., when I deliberately corrupt the method URI before calling the service).
During deployment I authenticate with Azure using a service principal. My Jenkinsfile looks like this:
withVaultSecrets([
"path/to/secret/in/vault": [
"sp_name", "application_id", "object_id", "sp_secret"
]
]){
script {
env.PULUMI_CONFIG_PASSPHRASE = 'jenkinspassphrase'
env.ARM_CLIENT_ID = "${application_id}"
env.ARM_CLIENT_SECRET = "${sp_secret}"
env.ARM_TENANT_ID = "${azure_dev_tenant_id}"
env.ARM_SUBSCRIPTION_ID = "${azure_dev_subscription_id}"
env.AZURE_CLIENT_ID = "${application_id}"
env.AZURE_CLIENT_SECRET = "${sp_secret}"
env.AZURE_TENANT_ID = "${azure_dev_tenant_id}"
}//script
dir("./src/deploy/KmsStack"){
powershell "pulumi login --local";
powershell "pulumi stack init jenkinsfunctionaltest --secrets-provider=passphrase"
powershell "pulumi up --yes"
}//dir
}//withVaultSecrets
The script which I use to deploy locally looks like this, with the same service principal credentials:
cd $PSScriptRoot
cd webapi
dotnet publish /p:DisableGitVersionTask=true
cd ../deploy/KmsStack
$env:PULUMI_CONFIG_PASSPHRASE = 'jenkinspassphrase'
$env:ARM_CLIENT_ID = ...
$env:ARM_CLIENT_SECRET = ...
$env:ARM_TENANT_ID = ...
$env:ARM_SUBSCRIPTION_ID = ...
$env:AZURE_CLIENT_ID = ...
$env:AZURE_CLIENT_SECRET = ...
$env:AZURE_TENANT_ID = ...
pulumi logout
pulumi login --local
pulumi stack rm jenkinsfunctionaltest -y
pulumi stack init jenkinsfunctionaltest --secrets-provider=passphrase
pulumi stack select jenkinsfunctionaltest
pulumi up --yes
How can I find out why these two deployed services behave differently? The Azure portal GUI is rich and has lots of sections. Can you recommend where to look? Might there be some security settings that differ? How can I find them?
Thanks in advance!
We found out what was wrong. It was not an Azure issue. The problem was that we were deploying a bad ZIP file. The ZIP file was missing web.config, which meant that the web application could not start up.
We were zipping our published web application by having this in the CSPROJ file:
<Target Name="ZipOutputPath" AfterTargets="Publish">
<ZipDirectory SourceDirectory="$(OutputPath)\publish" DestinationFile="$(MSBuildProjectDirectory)\kmswebapp.zip" Overwrite="true" />
</Target>
This turned out not to work because MSBuild runs these steps in a different order than we expected. At the time the ZIP file was created, web.config had not been generated yet, so web.config never got packed into the ZIP file. Hence Azure could not start the application.
When we deployed from our local machines, it worked because we didn't clean the publish directory before each run, so a web.config was left over from the previous run. This old (but unchanged) web.config got packed into the ZIP file and deployed to Azure, so Azure knew how to start the application.
We solved it by removing the above from our CSPROJ file and doing (roughly) this in our Jenkinsfile:
powershell "dotnet publish ./src/webapi/WebAPI.csproj"
powershell "if (!(Test-Path('${publishDirectoryPath}/web.config'))){throw 'We need web.config to exist in the publish directory'}"
powershell "Compress-Archive -Path '${publishDirectoryPath}/*' -DestinationPath './src/webapi/kmswebapp.zip' -Force"
This generates a proper ZIP file including web.config, and Azure can now start our application so it can respond properly to requests.
Durable Functions keep their state in storage; this is what makes them work, but it is very troublesome while debugging and developing. I have a large number of runs that have not completed and that the system tries to run again when I start the process. Some of the runs have erroneous data which causes exceptions, while others were terminated early because something did not work as expected.
I don't want to run all the old cases when starting my application in debug (running against my local storage account). How can I automatically clear all data so only new functions will trigger?
You can use the Azure Functions Core Tools to purge the orchestration instance state.
First you need to make sure that the Azure Functions Core Tools are installed for your particular Azure Functions version. You can do this using the npm package manager. (Note that the command below is for Azure Functions V3.)
npm install -g azure-functions-core-tools@3
Then open a command prompt in the root directory of your Azure Functions project. The Core Tools need the host.json file from your project to identify your orchestration instances.
You can use the following to look at all of the available actions:
func durable
You can then purge the instance history using the following:
func durable purge-history
There is now a VS Code extension which also has a 'Purge Durable Functions History' feature. Type 'Purge Durable Functions History' in your Command Palette and there you go. If you're not using VS Code, the same tool is available as a standalone service that you can either run locally or deploy into Azure.
You may call the PurgeInstanceHistoryAsync method with one of the following:
An orchestration instance ID
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[ManualTrigger] string instanceId)
{
return client.PurgeInstanceHistoryAsync(instanceId);
}
Time interval
[FunctionName("PurgeInstanceHistory")]
public static Task Run(
[DurableClient] IDurableOrchestrationClient client,
[TimerTrigger("0 0 12 * * *")]TimerInfo myTimer)
{
return client.PurgeInstanceHistoryAsync(
DateTime.MinValue,
DateTime.UtcNow.AddDays(-30),
new List<OrchestrationStatus>
{
OrchestrationStatus.Completed
});
}
Reference for code snippets above: https://learn.microsoft.com/en-gb/azure/azure-functions/durable/durable-functions-instance-management#purge-instance-history
For everyone else wondering just how on earth to do this:
Install the Microsoft Azure Storage Explorer.
Add a connection to Azure Storage, but choose the local storage emulator.
Use the defaults and click Next.
At this point, click on Local & Attached in the Explorer, then on (Emulator - Default Ports) (Key) -> Tables. Delete the task hub history table and relaunch your application.
From this point, it's only a matter of dev time to figure out a way to do it programmatically (see the sketch below).
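For reference, a minimal sketch of doing the same thing programmatically against the local emulator, assuming the Azure.Data.Tables package and the default task hub name; both of those are assumptions, so check your host.json for the actual hub name:
using System.Threading.Tasks;
using Azure.Data.Tables;

class PurgeLocalDurableState
{
    static async Task Main()
    {
        // Shortcut connection string for the local storage emulator / Azurite.
        var service = new TableServiceClient("UseDevelopmentStorage=true");

        // Durable Functions keeps its state in {TaskHub}History and {TaskHub}Instances.
        // "TestHubName" is only an assumed default; the Functions host recreates the
        // tables on the next startup.
        var taskHub = "TestHubName";
        await service.DeleteTableAsync(taskHub + "History");
        await service.DeleteTableAsync(taskHub + "Instances");
    }
}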
I created an Azure Function app locally in Visual Studio 2017 (not the Azure portal) by following the steps at the following URL:
https://blogs.msdn.microsoft.com/appserviceteam/2017/03/16/publishing-a-net-class-library-as-a-function-app
I followed the steps exactly to create a function that has a "ServiceBusTopicTrigger". I added the following to my function.json:
{
  "disabled": false,
  "bindings": [
    {
      "name": "mySbMsg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "topicName": "negotiatedaddcharge_test",
      "subscriptionName": "clientdispatches",
      "connection": "servicebusnac",
      "accessRights": "manage"
    }
  ]
}
My appsettings.json has the following:
{
  "IsEncrypted": true,
  "Values": {
    "servicebusnac": "Endpoint=MyCompanyEndPointPlaceHolder"
  }
}
When I run the function in Visual Studio I keep getting an error message: "Microsoft.Azure.WebJobs.ServiceBus: Microsoft Azure WebJobs SDK ServiceBus connection string 'AzureWebJobsservicebusnac' is missing or empty."
Just for the heck of it I added another entry to the Values collection with the name "AzureWebJobsservicebusnac", but still the same message shows up. Is there something that I am doing wrong?
Also, how do you unit test this function? I cannot access any function in the csx file from my unit test project.
Thanks.
Edited:
I added information to make it clear that I am creating the function in Visual Studio rather than the Azure portal.
The Function App will search for your Service Bus connection string in environment variables. You can set those from the Azure portal:
Go to your Function App.
Select the Platform features tab above the editor.
Click Application settings.
Under the App settings section, add an entry with the connection name and connection string.
The appsettings.json file is used to support local development only, and settings defined there are not published to Azure.
The solution is simple; I actually ran into this myself and it had me completely stumped for a while.
In your appsettings.json, change "IsEncrypted" from true to false. This should fix the issue you're seeing.
The error messages are less than ideal for this scenario; the Azure Functions team already has a bugfix in for it.
Hope this helps anyone who runs into this issue. (I swear, it was a week before I figured this out, and not without help.)
I created a new Azure WebJob project in Visual Studio 2015 using .NET Framework 4.6.
In the app.config, I set three connection strings:
AzureWebJobsDashboard
AzureWebJobsStorage
MyDatabaseConnectionString
The AzureWebJobsDashboard and AzureWebJobsStorage connection strings both point to my storage account. I'm including only one of them below, since they're identical except for the name.
<add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=mystorageaccountname;AccountKey=thisIsTheLongPrimaryKeyICopiedFromAzurePortalForMyStorageAccount" />
Everything looks right to me but I'm getting the following error:
The configuration is not properly set for the Microsoft Azure WebJobs
Dashboard. In your Microsoft Azure Website configuration you must set
a connection string named AzureWebJobsDashboard by using the following
format DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY
pointing to the Microsoft Azure Storage account where the Microsoft
Azure WebJobs Runtime logs are stored.
By the way, I know the app.config is being read by the web job because my code is able to connect to my database and update some records.
Any idea what I'm doing wrong?
You need to set the AzureWebJobsDashboard connection string in the portal in your Web App Application Settings blade (steps to do that here). The Dashboard runs as a separate site extension and doesn't have access to app.config. Add the connection string to the connection strings section on the settings blade.
You can add your other connection strings there as well (e.g. AzureWebJobsStorage) rather than storing them in app.config if you wish, for security/consistency; however, the WebJob can read AzureWebJobsStorage from app.config.
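As a side note, if you want to be explicit in code instead of relying on configuration lookups, the WebJobs SDK 2.x host also accepts both connection strings directly. A minimal sketch, assuming the Microsoft.Azure.WebJobs 2.x package; the property names are the SDK's, the connection string values are placeholders:
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        var config = new JobHostConfiguration
        {
            // Both values use the standard format:
            // DefaultEndpointsProtocol=https;AccountName=NAME;AccountKey=KEY
            DashboardConnectionString = "your-dashboard-storage-connection-string",
            StorageConnectionString = "your-storage-connection-string"
        };

        var host = new JobHost(config);
        host.RunAndBlock();
    }
}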
The change needs to be made in the App Service settings in the Azure portal.
For that:
Open the Azure (Management) Portal at https://portal.azure.com
Go to Home > App Services
Select the App Service that is hosting your WebJob
Go to Settings > Application settings
Scroll down to Connection strings
Add a new connection string with Name 'AzureWebJobsDashboard' and your storage account connection string as the Value. Choose Type 'Custom'
Press the Save button (at the top of the page)
All done! Check your WebJobs Dashboard; the warning and error message at the top should be gone now.
I was having this problem too.
My storage account kind is StorageV2 (general purpose v2).
I had both AzureWebJobsDashboard and AzureWebJobsStorage correctly set in the App Service Configuration.
But, the storage account had a Minimum TLS version set to 1.2
I found that changing this to 1.0 was needed for the WebJobs Dashboard to display correctly and for the WebJobs to run ok.
Right now (with Web Jobs Tools version 15.0.31201.0) it is not necessary to configure any connection strings from the Azure portal; it is enough to set them in the WebJob's app.config file.
I have a Worker Role that executes code (fetching data and storing it to Azure SQL) every X hours. The timing is implemented using a Thread.Sleep in the while(true) loop in the Run method.
In the Web Role I want to have the ability to manually start the code in the Worker Role (manually fetch and store data, in my case). I found out that the whole Worker Role can be restarted using the Azure Management API, but that seems like overkill, especially looking at all the work needed around certificates.
Is there a better way to restart the Worker Role from the Web Role, or to have the code in the Worker Role run on demand from the Web Role?
Anything like posting an event to an Azure Queue, posting a blob to Azure Blobs, changing a record in Azure Tables or even making some change in SQL Azure will work - the web role will do the change and the worker role will wait for that change. Perhaps Azure Queues would be the cleanest way, although I'm not sure.
One very important thing you should watch for is that if you decide to use polling - like query a blob until it appears - you should insert a delay between the queries, otherwise this code:
while( true ) {
    if( storage.BlobExists( blobName ) ) {
        break;
    }
}
will hammer the storage service and you'll incur outrageous transaction fees. In the case of SQL Azure you won't see any fees, but you'll waste service capacity for no good reason and slow down other operations you send to SQL Azure.
This is how it should be done:
while( true ) {
    if( storage.BlobExists( blobName ) ) {
        break;
    }
    // the delay should not be less than several hundred milliseconds
    System.Threading.Thread.Sleep( 15 * 1000 );
}
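If you go the queue route instead of blobs, here is a minimal sketch of both sides, assuming the classic Microsoft.WindowsAzure.Storage SDK; the queue name and message text are hypothetical, and error handling is omitted:
using System.Threading;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class OnDemandFetchSignal
{
    // Hypothetical queue name; both roles must use the same one.
    const string QueueName = "fetch-requests";

    static CloudQueue GetQueue(string connectionString)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference(QueueName);
        queue.CreateIfNotExists();
        return queue;
    }

    // Web role side: call this (e.g. from a controller action) to request a run.
    public static void RequestFetch(string connectionString)
    {
        GetQueue(connectionString).AddMessage(new CloudQueueMessage("fetch-now"));
    }

    // Worker role side: the body of Run(), polling with a delay between queries.
    public static void RunLoop(string connectionString)
    {
        var queue = GetQueue(connectionString);
        while (true)
        {
            var message = queue.GetMessage();
            if (message != null)
            {
                // fetch data and store it to SQL Azure here, then remove the message
                queue.DeleteMessage(message);
            }
            Thread.Sleep(15 * 1000); // keep a delay to avoid excessive transactions
        }
    }
}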
Well I suggest you use Azure Fluent Management (which uses the Service Management API internally). Take a look at the "Deploying to Windows Azure" page.
What you will want to do is the following:
Cloud Service: mywebapp.cloudapp.net
    Production slot
        Role: MyMvcApplication
Cloud Service: mybackgroundworker.cloudapp.net
    Production slot
        No deployment
So you would typically have a Cloud Service running with a Web Role and that's it. What you do next is create the Worker Role, add your code, package it to a cspkg file and upload it to blob storage.
Finally you would have some code in your Web Role that can deploy (or remove) the Worker Role to that other Cloud Service by downloading the package locally and then running code similar to this:
var subscriptionManager = new SubscriptionManager(TestConstants.SubscriptionId);
var deploymentManager = subscriptionManager.GetDeploymentManager();
deploymentManager
    .AddCertificateFromStore(Constants.Thumbprint)
    .ForNewDeployment(TestConstants.HostedServiceName)
    .SetCspkgEndpoint(@"C:\mypackage")
    .WithNewHostedService("myelastatestservice")
    .WithStorageAccount("account")
    .AddDescription("my new service")
    .AddLocation(LocationConstants.NorthEurope)
    .GoHostedServiceDeployment();