I have a Docker container running a .NET Core 2 app.
Logging is configured with this code in Program.cs:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            logging.AddConsole();
            logging.AddDebug();
        })
        .UseStartup<Startup>();
and this is the appsettings.json file:
{
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
Logging seems to be OK: when running Kestrel directly, I can see the logs in the terminal. Same thing when containerized: the docker logs command shows what I want.
Trouble arises in production, when the app runs as a container in an Azure Web App: I cannot find any consistent Docker logs.
I tried to reach the log files via FTP and via the URL https://[mysite].scm.azurewebsites.net/api/logs/docker, but the log files are almost empty. For example, in
https://[mysite].scm.azurewebsites.net/api/vfs/LogFiles/2018_09_09_RD0003FF74F63E_docker.log,
only the container startup lines are present.
I also see the same lines in the usual portal interface.
The question is: is the docker logs output automatically written to the docker.log files in an Azure Web App? Is there something I am missing?
First, you need to enable container logs:
[App Service] -> Monitoring -> App Service logs
Then you can see container logs in [App Service] -> Monitoring -> Log stream
Update
You can also find the logs in Kudu, under "Current Docker Logs".
I've been cursing this for a good while, and eventually found something that works for me.
First I enabled filesystem logging as per @dima_horror's answer.
Next I ran the command az webapp log tail --name myApp --resource-group myRg.
That now seems to give me useful output (it gave me nothing prior to enabling filesystem logging).
Have you checked the logs under Container settings? I followed this guide to deploy a container to an Azure web app.
To add to the answers: if you're having trouble deploying the image to your App Service and need logs for that specifically, you can get the Docker logs for the provisioning phase of your container under Deployment Center.
I have the exact same issue; the App Service container logs are generic and vague. These are not the same logs that Docker shows us whenever we run a container.
17/02/2020 08:59:25.186 INFO - Site: tutorial-api - Start container succeeded. Container: f8bfa7e27680c0e9551c6157f9d1c8a73c9a3e739b4f15de8586ce52809798d3
17/02/2020 08:59:30.675 INFO - Site: tutorial-api - Application Logging (Filesystem): On
17/02/2020 08:59:44.106 INFO - Site: tutorial-api - Waiting for container to be ready
17/02/2020 08:59:49.116 INFO - Site: tutorial-api - Container has exited
17/02/2020 08:59:49.117 ERROR - Site: tutorial-api - Container could not be started
17/02/2020 08:59:49.120 INFO - Site: tutorial-api - Purging after container failed to start
17/02/2020 08:59:49.120 ERROR - Site: tutorial-api - Unable to start container. Error message: Container could not be started: tutorial-api_20
"Unable to start container, container couldn't be started"
Wow! Azure just told me every 60 seconds a minute has passed.
I understand that this is a production environment, but you've got to give us something!
Out of frustration, I decided to run the same image in an Azure Container Instance resource, and there it shows you the same detailed logs that Docker provides (see screenshot below).
Now that's what I'm talking about!
Using the error logs in the Azure Container Instance, I found out that my App Service couldn't access the SQL Server resource (even though they are within the same resource group). I simply allowed the SQL Server resource to be accessed from within the same resource group.
Related
I currently have several .NET Windows services running on my server. Is there a way to attach a console app to the service to get all of the ILogger data? I have had issues where the service runs perfectly as a console app/worker service, but as soon as I run it as a Windows service, it just sits there and does nothing.
I did find an article about attaching the VS debugger to the process, but this will not work with our network security.
I am open to any other suggestions as well.
The technical answer is no, but as @Fildor mentioned, you would set up a log sink of some sort. The file logger is just an example; you can also have the logs sent by email, posted to a cloud logging service such as Splunk or CloudWatch, and so on.
One issue you may run into is that you need to capture errors that occur before ILogger is available and properly configured for you. Here is a guide I followed for capturing startup errors using NLog: https://alistairevans.co.uk/2019/10/04/asp-net-core-3-0-logging-in-the-startup-class-with-nlog/
Startup classes are no longer necessary in the latest .NET version, so I modified their example to be code you would have in Program.cs:
// NLog: set up the NLog config first, so errors thrown before the host is
// built can still be logged.
var logFactory = NLogBuilder.ConfigureNLog("nlog.config");

try
{
    var host = Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        // Use NLog to provide ILogger instances.
        .UseNLog()
        .Build();

    host.Run();
}
catch (Exception ex)
{
    // Capture startup errors with NLog directly, since ILogger may not be
    // available or configured yet.
    var logger = logFactory.GetCurrentClassLogger();
    logger.Error(ex, "Host terminated unexpectedly");
    throw;
}
finally
{
    // Flush any buffered log events before the process exits.
    NLog.LogManager.Shutdown();
}
Here's the list of available log sinks (targets) you can configure in that NLog configuration file: https://nlog-project.org/config/
The same thing can be accomplished with other log providers you may already be using, such as Serilog, log4net, etc.
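For example, here is a minimal sketch of the same idea with Serilog (assuming the Serilog.Extensions.Hosting and Serilog.Sinks.File packages; the file path and host setup are illustrative, not taken from your project):

using Microsoft.Extensions.Hosting;
using Serilog;

var host = Host.CreateDefaultBuilder(args)
    .UseSerilog((context, loggerConfiguration) =>
        loggerConfiguration
            .MinimumLevel.Debug()
            // Roll to a new file each day so the Windows service's ILogger
            // output can be inspected without attaching a console or debugger.
            .WriteTo.File("logs/service-.log", rollingInterval: RollingInterval.Day))
    .Build();

host.Run();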
Some general setup of my system:
Windows x64
.NET Core 3.1
ASP.NET Core
Serilog + Application Insights writer
Recently, we started to observe many trace logs like the one below. I think something goes wrong when we try to write a log entry to AppInsights, yet we can still see our logs, and everything we want does seem to be logged to AppInsights correctly. But we also get this unwanted trace message, and there are a lot of them.
This trace message appears both on my local machine and in our Azure AKS environment, but the stack trace is too short to help locate the origin of the error.
AI (Internal): [Microsoft-ApplicationInsights-Core] [msg=Log Error];[msg=Exception while initializing Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.ClientIpHeaderTelemetryInitializer, exception message - System.ObjectDisposedException: Request has finished and HttpContext disposed.
Object name: 'HttpContext'.
at Microsoft.AspNetCore.Http.DefaultHttpContext.ThrowContextDisposed()
at Microsoft.AspNetCore.Http.DefaultHttpContext.get_Features()
at Microsoft.ApplicationInsights.AspNetCore.TelemetryInitializers.TelemetryInitializerBase.Initialize(ITelemetry telemetry)
at Microsoft.ApplicationInsights.TelemetryClient.Initialize(ITelemetry telemetry)]
I tried to create an environment with an ASP.NET Core application on .NET 3.1 to track down the logs (requests, exceptions, users, failures, traces, etc.).
Here are the steps I followed: add Application Insights telemetry to the application and install the NuGet package Microsoft.ApplicationInsights.AspNetCore.
Then, in Startup.cs, under services.AddControllersWithViews();, add:
services.AddApplicationInsightsTelemetry(Configuration["APPINSIGHTS_CONNECTIONSTRING"]);
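For context, a sketch of where that call sits in ConfigureServices (the configuration key name is the one used above and may differ in your app):

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Register Application Insights telemetry using the connection string
    // read from configuration, as in the step above.
    services.AddApplicationInsightsTelemetry(Configuration["APPINSIGHTS_CONNECTIONSTRING"]);
}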
After deploying the app to Azure, to trace the logs go to the created Application Insights resource > Transaction search > View data, where we can see all of our logs as below and identify whether any errors occur.
We can also download all of our logs and investigate the application using the Kudu console:
Go to App Service > Advanced Tools > Go.
For more information, please refer to the links below:
MS Doc | Explore .NET/.NET Core and Python trace logs in Application Insights
SO Thread | Application Insights not showing stack trace for errors
Blog | Using Azure Application Insights For Exception Logging In C#
I am adding Application Insights (AI) to my web API service by following this page: Application Insights Instructions. I managed to get my service to connect to AI, and I am able to see when my service performs a POST, GET, etc. I also placed log calls throughout my service, but none of them are being written to my AI's Traces log.
I made sure to set up my Startup.cs and appsettings.json files with the new code needed to run AI throughout my service, and configured the logging so that AI captures logs of Debug level and up.
Startup.cs
public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();
}
appsettings.json
Logging Example
public async Task ProcessQueueAsync(dBData dbContext)
{
    // _logger is of type ILogger<[INSERT CLASS NAME]>
    _logger.LogDebug("This is a test log by Lotzi11.");
    await ProcessQueueAsyncSingle(dbContext, CancellationToken.None);
}
Can someone help me figure out why my logs are not being sent to AI?
Your code configuration and appsettings.json are correct. With your settings, I can see these logs in AI on my side.
One thing you should know is that it may take a few minutes for the data to arrive at the AI server. Please wait a few minutes, say 5 minutes or more, then query the data again from the Azure portal -> Application Insights.
Here is a simple way to check whether the data is being sent to AI: when running the project in Visual Studio, you can search for the logs in the Visual Studio output window. If you find the logs there, they should be being sent to AI.
Search in the Visual Studio output window:
If you still cannot see these logs in AI, you should also check whether you have set up something like a filter or sampling in your code.
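If filtering does turn out to be the issue, a per-provider filter can be adjusted roughly like this (a sketch only, not taken from the question's code; by default the Application Insights provider may only capture Warning and above, so lower levels need an explicit filter):

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    services.AddLogging(builder =>
    {
        // Let the Application Insights provider capture Debug and above for
        // every category (the empty string matches all categories).
        builder.AddFilter<Microsoft.Extensions.Logging.ApplicationInsights.ApplicationInsightsLoggerProvider>(
            "", LogLevel.Debug);
    });
}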
While I was working on solving my problem, I found out that my company uses Serilog to handle logging, so I had to alter my project so Serilog would also send logs to AI. I modified my code using the following page serilog-sinks-applicationinsights.
This led me to realize that even though I followed Microsoft's instructions on setting up AI, my ILogger was not properly set up to send logs to AI. To fix that, I altered my Startup.cs constructor:
public Startup(IHostEnvironment environment, IConfiguration configuration)
{
    var env = new Environment(environment.EnvironmentName);
    _systemConfiguration = new SystemConfiguration(env, configuration);
    _systemConfiguration.Validate();

    Log.Logger = new LoggerConfiguration()
        .Enrich.FromLogContext()
        .WriteTo.ApplicationInsights(
            _systemConfiguration.BaseConfiguration["APPINSIGHTS_INSTRUMENTATIONKEY"],
            TelemetryConverter.Traces)
        .CreateLogger();

    using var provider = new SerilogLoggerProvider(Log.Logger);
    _logger = provider.CreateLogger(nameof(Startup));
}
After adding AI to Log.Logger, my logs began showing up in my AI's page.
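As a side note (not part of the fix above), Serilog can also be registered on the host builder so that every ILogger<T> in the app flows through the same pipeline, roughly like this (assuming the Serilog.AspNetCore package):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        // Uses the static Log.Logger configured in the Startup constructor.
        .UseSerilog()
        .ConfigureWebHostDefaults(webBuilder => webBuilder.UseStartup<Startup>());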
I am trying to use VSTS (now Azure DevOps) to build a CI/CD pipeline. For my build pipeline, I have a very basic setup involving restore, build, test, and publish steps.
For my test step, I have it set up to run two test projects - one unit test project and one integration test project. My Key Vault access policy is set up to provide access to both myself and Azure DevOps. When I run my tests locally in Visual Studio, since I am logged into the same account which has access to Azure Key Vault, the tests run without any errors.
My application is configured to access key vault using below setup:
public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((ctx, builder) =>
        {
            var keyVaultEndpoint = GetKeyVaultEndpoint();
            if (!string.IsNullOrEmpty(keyVaultEndpoint))
            {
                var azureServiceTokenProvider = new AzureServiceTokenProvider();
                var keyVaultClient = new KeyVaultClient(
                    new KeyVaultClient.AuthenticationCallback(azureServiceTokenProvider.KeyVaultTokenCallback));
                builder.AddAzureKeyVault(keyVaultEndpoint, keyVaultClient, new DefaultKeyVaultSecretManager());
            }
        })
        .UseStartup<Startup>();
When I run the build pipeline, I am using a Hosted VS2017 agent to build my project. Everything works except the integration tests, which try to access the key vault and fail. I am using the following packages:
Microsoft.Azure.Services.AppAuthentication - makes it easy to fetch access tokens for Service-to-Azure-Service authentication scenarios.
Microsoft.Azure.KeyVault - contains methods for interacting with Key Vault.
Microsoft.Extensions.Configuration.AzureKeyVault - contains IConfiguration extensions for Azure Key Vault.
I followed this tutorial https://learn.microsoft.com/en-us/azure/key-vault/tutorial-web-application-keyvault to setup the key vault and integrate it into my app.
I am merely trying to get my build to work by making sure both the unit and integration tests pass. I am not deploying it to an app service yet. The unit tests run without any issues, as I am mocking the various services. My integration tests fail with the error messages below. How do I give my tests access to the key vault? Do I need to add any special access policies to my key vault for the hosted VS2017 build? I am not sure what to do, as I don't see anything that stands out.
Below is the stack trace for the error:
2018-10-16T00:37:04.6202055Z Test run for D:\a\1\s\SGIntegrationTests\bin\Release\netcoreapp2.1\SGIntegrationTests.dll(.NETCoreApp,Version=v2.1)
2018-10-16T00:37:05.3640674Z Microsoft (R) Test Execution Command Line Tool Version 15.8.0
2018-10-16T00:37:05.3641588Z Copyright (c) Microsoft Corporation. All rights reserved.
2018-10-16T00:37:05.3641723Z
2018-10-16T00:37:06.8873531Z Starting test execution, please wait...
2018-10-16T00:37:51.9955035Z [xUnit.net 00:00:40.80] SGIntegrationTests.HomeControllerShould.IndexContentTypeIsTextHtml [FAIL]
2018-10-16T00:37:52.0883568Z Failed SGIntegrationTests.HomeControllerShould.IndexContentTypeIsTextHtml
2018-10-16T00:37:52.0884088Z Error Message:
2018-10-16T00:37:52.0884378Z Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProviderException : Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/63cd8468-5bc3-4c0a-a6f8-1e314d696937. Exception Message: Tried the following 3 methods to get an access token, but none of them worked.
2018-10-16T00:37:52.0884737Z Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/63cd8468-5bc3-4c0a-a6f8-1e314d696937. Exception Message: Tried to get token using Managed Service Identity. Access token could not be acquired. MSI ResponseCode: BadRequest, Response: {"error":"invalid_request","error_description":"Identity not found"}
2018-10-16T00:37:52.0884899Z Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/63cd8468-5bc3-4c0a-a6f8-1e314d696937. Exception Message: Tried to get token using Visual Studio. Access token could not be acquired. Visual Studio Token provider file not found at "C:\Users\VssAdministrator\AppData\Local\.IdentityService\AzureServiceAuth\tokenprovider.json"
2018-10-16T00:37:52.0885142Z Parameters: Connection String: [No connection string specified], Resource: https://vault.azure.net, Authority: https://login.windows.net/63cd8468-5bc3-4c0a-a6f8-1e314d696937. Exception Message: Tried to get token using Azure CLI. Access token could not be acquired. Process took too long to return the token.
2018-10-16T00:37:52.0885221Z
2018-10-16T00:37:52.0885284Z Stack Trace:
2018-10-16T00:37:52.0885349Z at Microsoft.Azure.Services.AppAuthentication.AzureServiceTokenProvider.GetAccessTokenAsyncImpl(String authority, String resource, String scope)
2018-10-16T00:37:52.0885428Z at Microsoft.Azure.KeyVault.KeyVaultCredential.PostAuthenticate(HttpResponseMessage response)
2018-10-16T00:37:52.0885502Z at Microsoft.Azure.KeyVault.KeyVaultCredential.ProcessHttpRequestAsync(HttpRequestMessage request, CancellationToken cancellationToken)
2018-10-16T00:37:52.0886831Z at Microsoft.Azure.KeyVault.KeyVaultClient.GetSecretsWithHttpMessagesAsync(String vaultBaseUrl, Nullable`1 maxresults, Dictionary`2 customHeaders, CancellationToken cancellationToken)
2018-10-16T00:37:52.0886887Z at Microsoft.Azure.KeyVault.KeyVaultClientExtensions.GetSecretsAsync(IKeyVaultClient operations, String vaultBaseUrl, Nullable`1 maxresults, CancellationToken cancellationToken)
2018-10-16T00:37:52.0886935Z at Microsoft.Extensions.Configuration.AzureKeyVault.AzureKeyVaultConfigurationProvider.LoadAsync()
2018-10-16T00:37:52.0887000Z at Microsoft.Extensions.Configuration.AzureKeyVault.AzureKeyVaultConfigurationProvider.Load()
2018-10-16T00:37:52.0887045Z at Microsoft.Extensions.Configuration.ConfigurationRoot..ctor(IList`1 providers)
2018-10-16T00:37:52.0887090Z at Microsoft.Extensions.Configuration.ConfigurationBuilder.Build()
2018-10-16T00:37:52.0887269Z at Microsoft.AspNetCore.Hosting.WebHostBuilder.BuildCommonServices(AggregateException& hostingStartupErrors)
2018-10-16T00:37:52.0887324Z at Microsoft.AspNetCore.Hosting.WebHostBuilder.Build()
2018-10-16T00:37:52.0887371Z at Microsoft.AspNetCore.TestHost.TestServer..ctor(IWebHostBuilder builder, IFeatureCollection featureCollection)
2018-10-16T00:37:52.0887433Z at Microsoft.AspNetCore.Mvc.Testing.WebApplicationFactory`1.CreateServer(IWebHostBuilder builder)
2018-10-16T00:37:52.0887477Z at Microsoft.AspNetCore.Mvc.Testing.WebApplicationFactory`1.EnsureServer()
2018-10-16T00:37:52.0887525Z at Microsoft.AspNetCore.Mvc.Testing.WebApplicationFactory`1.CreateDefaultClient(DelegatingHandler[] handlers)
Update
I have found only one post related to this issue: https://social.msdn.microsoft.com/Forums/en-US/0bac778a-283a-4be1-bc75-605e776adac0/managed-service-identity-issue?forum=windowsazurewebsitespreview. But that post is about deploying an application into an Azure slot; I am merely trying to build my application in a build pipeline.
I am still trying to solve this issue and am not sure what the best way to provide the required access is.
Update 2
I have still not found a solution for this. I am lost on how to get my pipeline to run my tests without issues. I saw that the release pipeline has an option for running tests too, but those seem to take .dll files, and my build pipeline's drop artifact only contains the web app (I don't see any of the test projects in the published drop). Not sure if that is even a possibility.
Update 3
I managed to get it to work by using the last option provided here: https://learn.microsoft.com/en-us/azure/key-vault/service-to-service-authentication#connection-string-support
I tried the other ways of using a certificate but anytime {CurrentUser} is provided in a connection string, the build pipeline fails. It works on my local machine but not in the build pipeline.
To get it to work, I had to do the following:
Log in to Azure and set up a new app registration in Azure AD.
In your new AD app registration, create a new client secret.
Give your new AD app access to your key vault: go into your key vault's access policies and add the app you created in AD, with read access to your secrets.
Modify the call to AzureServiceTokenProvider() in Program.cs as follows:
var azureServiceTokenProvider = new AzureServiceTokenProvider("connectionString={your key vault endpoint};RunAs=App;AppId={your app id that you setup in Azure AD};TenantId={your azure subscription};AppKey={your client secret key}")
Note that your client secret has to be formatted correctly. The App registrations (preview) blade generates a random secret key, and sometimes this key does not work in the connection string (it throws an error as incorrectly formatted). Either generate your own key in the non-preview version of App registrations, or generate a new key and try again.
After that, I was able to run my integration tests in my build pipeline successfully and create a release to my web app in Azure. I'm not satisfied with this approach because, although it works, it exposes a secret value in the code itself, and managed service identity does not need to be turned on at all with it. I feel that this is extremely bad in that regard.
There has to be a better way than this. One option is to not run the integration tests in the build pipeline, but I am not sure that is the right call. I'm still hoping someone can provide a better approach or explain whether mine is okay to use.
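One thing worth noting: AzureServiceTokenProvider can also read the connection string from the AzureServicesAuthConnectionString environment variable, so a pipeline could supply it as a secret variable instead of the string living in source. A rough sketch (the environment variable name is the one the library documents; the rest is illustrative):

// No connection string in source: locally the provider falls back to
// Visual Studio / Azure CLI credentials, while the build pipeline supplies
// AzureServicesAuthConnectionString (e.g. "RunAs=App;AppId=...;TenantId=...;AppKey=...")
// as a secret environment variable.
var azureServiceTokenProvider = new AzureServiceTokenProvider();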
Use the Azure CLI pipeline task to run integration tests that need KeyVault secrets successfully, without exposing any secrets in source control:
Create a Service Principal service connection in your Azure DevOps project.
Give the principal Get and List permissions to the Vault in Azure.
Run your integration tests inside an Azure CLI task:
- task: AzureCLI@1
  inputs:
    azureSubscription: 'Your Service Connection Name'
    scriptLocation: 'inlineScript'
    inlineScript: 'dotnet test --configuration $(buildConfiguration) --logger trx'
This works because the tests run in the context of the Azure CLI, which is one of the places AzureServiceTokenProvider tries to fetch a token from before it fails. The Azure CLI task handles the authentication and cleans up when it is done.
You should not run integration tests that authenticate to Azure Key Vault within an Azure DevOps Pipelines build, because you are using the Azure DevOps default hosted agents.
By default, Azure DevOps Pipelines use basic hosted agents, and these agents are not accessible from your Azure subscription. This is not surprising: they are generic agents for common build needs such as compiling, running unit tests, and collecting test coverage, and they have no additional features such as Active Directory, databases, or authenticated requests to third parties such as Azure Key Vault. Therefore, these agents are not registered in your Azure subscription by default.
If you want these integration tests to succeed, you have to create your own agents for your Azure DevOps Pipelines build and release. There is no way to force the Azure DevOps default agents to run your Key Vault authentication tests other than creating your own agents and configuring Azure DevOps to use them.
To create your own agents, consult this documentation from Microsoft:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=vsts#install
UPDATE 29th October, 2018:
For more clarity, I will also reply to your "Update 3" workaround: there is no guarantee that it will keep working when Microsoft updates the Azure DevOps default hosted agents.
I therefore need to add another point: it is not good practice to have integration tests that rely on a third party beyond the realm of your Azure DevOps Pipelines build, such as connecting to a database server or using external authentication (even against Azure Key Vault), within your CI, especially if you are using Microsoft's default hosted agents.
Not only is it error-prone due to invalid authentication configuration, but there is also no guarantee that future updates to the default hosted agents will keep your third-party tests working.
I'm running into the exact same issue myself. I got a little further by modifying the code to pass a connection string to the AzureServiceTokenProvider (the default parameter is null). I still didn't get it to fully work, though, maybe because the Azure DevOps user may or may not have the required access to the Key Vault, but I did not get a chance to dig in further.
Hoping there is a better solution posted here.
Update
We added the build user to Azure AD and then added that user to the Key Vault's access policies, granting it only Get access (our test only checks whether it can retrieve the secret). The tests pass successfully now.
An easier solution would be to use Azure DevOps Variable Groups.
Someone with read permissions and contributor on the DevOps project can create the variable group, link it to the key vault and select the desired secrets.
The variable group can now be linked to any of your pipelines.
However, to make the secrets available to any code running in the pipeline, you must first export them using the method below.
You need to do this via a task (Azure PowerShell or Bash), and it must be done with an inline script; you cannot export the key vault variables from a script file referenced by the task. So export all your variables in the first task, and all subsequent tasks and referenced scripts can consume them.
PowerShell:
Write-Host "##vso[task.setvariable variable=mysecretexported]$(mysecret1)"
Bash:
echo "##vso[task.setvariable variable=mysecretexported]$(mysecret1)"
You can then refer to the secret using the exported variable.
PowerShell:
Write-Host "No problem reading $env:MYSECRETEXPORTED"
Batch:
echo No problem reading %MYSECRETEXPORTED%
Bash works similarly:
#!/bin/bash
echo "No problem reading $MYSECRETEXPORTED"
This is also supported in YAML
The nice part is that these variables will be masked in your logs so your secrets stay secret.
Problem Description
I have a Windows service which is hosting an NServiceBus endpoint in NServiceBus.Host.exe.
The binaries are deployed to c:\inetpub\bus\services\myService folder on the server.
A DSC script makes sure the Windows service is created/exists on the server, with the "path to executable" property of the service set to "c:\inetpub\bus\services\myService\NServiceBus.Host.exe" -service NServicebus.Production.
Note! The -service switch is added when installing the service by using the built-in NServiceBus.Host.exe /install parameter, which is why I added it to the Windows service executable path in the DSC script.
Now, when I try to start the service manually on the server, it yields the following error message
Windows could not start the <service name> service on the Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion.
Debugging Steps
I have looked through the event log and the following two error messages sticks out:
NServiceBus.Host.exe error:
Application: NServiceBus.Host.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info:
Topshelf.Exceptions.ConfigurationException
at Topshelf.Internal.Actions.RunAsServiceAction.Do(Topshelf.Configuration.IRunConfiguration)
at Topshelf.Runner.Host(Topshelf.Configuration.IRunConfiguration, System.String[])
at NServiceBus.Host.Program.Main(System.String[])
Local Activation permission error:
The application-specific permission settings do not grant Local
Activation permission for the COM Server application with CLSID
{D63B10C5-BB46-4990-A94F-E40B9D520160}
and APPID
{9CA88EE3-ACB7-47C8-AFC4-AB702511C276}
to the user <my_service_account_user> SID (<service_account_SID>) from
address LocalHost (Using LRPC) running in the application container
Unavailable SID (Unavailable). This security permission can be modified
using the Component Services administrative tool.
Note! The error above only occurs once, i.e. the first time I try to start the service. It does not appear again in the event log for any subsequent attempts of starting the service.
What I have done so far:
Tried the suggestions in a closely related post here on SO, none of which were working.
Tried installing the service by using the NServiceBus.Host.exe /install parameter. In this case, the service is created with a name in the following format: MyService.EndpointConfig_v1.0.0.0. Using this approach, the service starts successfully without any error message.
Stopping that service and then trying to start the service created by the DSC script (with a different name) => success
Removing the service created by NServiceBus and then trying to start the DSC-created service again => failure
Tried granting the service account used for logon when running the service various privileges (none of which yielded any success), among others:
Membership in the Administrators group
Membership in the Performance Log Users group
Full DCOM permissions via "Launch and Activation Permissions" in dcomcnfg
Tried running c:\inetpub\bus\services\myService\NServiceBus.Host.exe NServicebus.Production from the CLI => success
Code
My Init() method for the service looks like this:
namespace MyService
{
    public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomLogging, IWantCustomInitialization
    {
        public void Init()
        {
            Directory.SetCurrentDirectory(System.AppDomain.CurrentDomain.BaseDirectory);

            SetLoggingLibrary.Log4Net(() => XmlConfigurator.Configure(File.OpenRead(@"log4net.config")));
            GlobalContext.Properties["Hostname"] = Dns.GetHostName();
            GlobalContext.Properties["Service"] = typeof(EndpointConfig).Namespace;

            var container = new WindsorContainer(new XmlInterpreter());

            Configure.With()
                .CastleWindsorBuilder(container)
                .XmlSerializer()
                .MsmqTransport()
                .IsTransactional(true)
                .PurgeOnStartup(false)
                .IsolationLevel(System.Transactions.IsolationLevel.RepeatableRead);

            var connectionString = ConfigurationManager.ConnectionStrings["<some_conn_string>"].ConnectionString;

            container.Register(
                Component.For<IDatabaseProvider>()
                    .ImplementedBy<DatabaseProvider>()
                    .DependsOn(Property.ForKey("connectionString").Eq(connectionString)));
        }
    }
}
Theory
When installing the service using /install I assume that NServiceBus.Host.exe does some black magic under the hood - e.g. grants some necessary permissions - to make the service able to start.
Note! On the server, the latest version of NServiceBus is installed (v6.x). However, in my solution/service project version 2.x is used (please, do not ask if I can upgrade - unfortunately, that is not an option).
Appreciate any help I can get, as I am running out of ideas.
EDIT 1
I was asked why I can't just use the /install parameter of NServiceBus and be happy with that. The answer to that is that I could (and, actually, I currently am).
The reason I have still posted this question is twofold:
I wish to understand why one of two seemingly equivalent approaches fails.
I am not completely happy with using the /install parameter. The reason? It boils down to a "chicken or the egg" problem. I use PowerShell DSC to provision servers in Azure, and I believe that ensuring that Windows services exist on the server is the responsibility of DSC. However, the first time a server is provisioned, the services cannot exist unless I script their creation with DSC and point the executable path to where the service binaries will be deployed whenever that happens. The other alternative is to skip service creation in DSC and run NServiceBus.Host.exe /install as part of the service/application deployment script instead. Obviously, deployment cannot happen until after a server has been provisioned, so the Windows service part of the DSC script would have to be stripped down to merely verifying that the service exists - a verification that will fail until a first deployment of the application has been performed.