IDataProtector unable to decrypt when application runs as a service - c#

I'm using ASP.NET Core data protection with the default DI behavior. That works fine when my ASP.NET application is hosted on IIS. Now I have an application that needs to run as a service, so I'm using Microsoft.Extensions.Hosting.WindowsServices to handle the Windows service part, with our standard
Host.CreateDefaultBuilder(args)
    .UseWindowsService()
The BackgroundService then hosts ASP.NET Core with your standard
var builder = Host.CreateDefaultBuilder()
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddJsonFile("secrets.json", optional: true, reloadOnChange: true);
    })
    .ConfigureWebHostDefaults(....)
Inside the background service, I can then resolve an instance of IDataProtectionProvider, create a protector, and use it to unprotect my secrets:
var dataProtectionProvider = Container.Resolve<Microsoft.AspNetCore.DataProtection.IDataProtectionProvider>();
var protector = dataProtectionProvider.CreateProtector(appName);
var decryptedSecret = protector.Unprotect(someSecret);
Now that all works fine as long as I run my application from the CLI. But running it as a service (same file, same location, and of course under the same account), I get an 'invalid payload' exception when I call Unprotect.
I know same path and same account is important, so that's taken care of. I also know that the application can find secrets.json as I wrote some probing code that checks if the file is present and can be read before I even try to unprotect. I'm even checking if the string I'm trying to unprotect is null/empty (which it isn't).
I finally installed a debug build as a service and attached the debugger. When I look at the IDataProtectionProvider, it has a purpose, and when running as a service, that purpose is c:\windows\system32. When my app runs from the CLI, it's the path to the exe. So, is there a way to specify the purpose on my own so things behave the same regardless of CLI/service? In short: how can I control the purpose?

Having noted the difference in the purpose of the IDataProtectionProvider, I was well on my way to solving this. The solution was to set a static purpose as explained here.
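For reference, a minimal sketch of what that looks like with the default DI setup (SetApplicationName is the actual data protection API for this; the application name itself is a placeholder):
services.AddDataProtection()
    // Pin the application discriminator so it no longer defaults to the
    // content root path, which differs between CLI and service startup.
    .SetApplicationName("MyStableAppName");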

Related

Attach console to .Net Windows Service

I currently have several .NET Windows services running on my server. Is there a way to attach a console app to a service to get all of the ILogger data? I have had issues where the service runs perfectly as a console app/worker service, but as soon as I run it as a Windows service, it just sits there and does nothing.
I did find an article about attaching the VS debugger to the process, but this will not work with our network security.
I am open to any other suggestions as well.
The technical answer is no, but as @Fildor mentioned, you would set up a log sink of some sort. The file logger is just an example; you can also have the logs send emails, post to a cloud logging service such as Splunk or CloudWatch, etc.
One issue you may run into is that you need to capture an error prior to ILogger being available and properly configured for you. Here is a guide I followed for capturing startup errors using NLog: https://alistairevans.co.uk/2019/10/04/asp-net-core-3-0-logging-in-the-startup-class-with-nlog/
Startup classes are no longer necessary in the latest .NET version, so I modified their example to be code you would have in Program.cs:
// NLog: set up the NLog config first, so logging works even before the host is built.
var logger = NLogBuilder.ConfigureNLog("nlog.config").GetCurrentClassLogger();
try
{
    var host = Host.CreateDefaultBuilder(args)
        .ConfigureLogging(logging =>
        {
            logging.ClearProviders();
            logging.SetMinimumLevel(LogLevel.Trace);
        })
        // Use NLog to provide ILogger instances.
        .UseNLog()
        .Build();
    host.Run();
}
catch (Exception ex)
{
    // Catches exceptions thrown while building or running the host.
    logger.Error(ex, "Host terminated unexpectedly");
    throw;
}
finally
{
    NLog.LogManager.Shutdown();
}
Here's the list of available log targets (NLog's name for sinks) you can configure in that NLog configuration file: https://nlog-project.org/config/
The same thing can be accomplished with other log providers you may already be using, such as Serilog, log4net, etc.
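As a hedged illustration, a minimal nlog.config with a single file target might look like this (the file name is a placeholder):
<nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <targets>
    <!-- Write everything to a log file next to the service binary; useful when no console is attached. -->
    <target xsi:type="File" name="file" fileName="logs/service.log" />
  </targets>
  <rules>
    <logger name="*" minlevel="Trace" writeTo="file" />
  </rules>
</nlog>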

How to authenticate with a private repo on DockerHub through the Docker.Dotnet client?

I am currently writing a microservice in .NET Standard that writes a Dockerfile, builds the Docker image associated with that Dockerfile (which implicitly triggers an image pull), and pushes the image to a private Docker registry. Those Docker operations are all performed using the Docker.Dotnet library that Microsoft maintains. I believe this is mostly a wrapper around calls to the Docker Remote API. The execution context of this microservice is a K8s cluster hosted on AWS, or internally on bare metal, depending on the deployment.
Previously our Docker registry was just a private registry hosted internally on Artifactory, but we are migrating to a private DockerHub registry/repository. This migration has come with some authentication problems.
We authenticate all of the pull and push operations with an AuthConfig that consists of the username and password for the account associated with the registry. The AuthConfig is either added to a Parameters object and then passed to the call:
imageBuildParameters.AuthConfigs = new Dictionary<string, AuthConfig>()
{
    { DockerEnvVariables.DockerRegistry, authConfig }
};
…
using (var responseStream = _dockerClient.Images.BuildImageFromDockerfileAsync(tarball, imageBuildParameters).GetAwaiter().GetResult())
Or it's (strangely, to me) both set on the parameters object and passed separately to the call:
ImagePushParameters imagePushParameters = new ImagePushParameters() { ImageID = image.Id, RegistryAuth = authConfig, Tag = "latest" };
_dockerClient.Images.PushImageAsync(RepoImage(image.Id), imagePushParameters, authConfig, this).GetAwaiter().GetResult();
We are currently getting auth errors for any of the various Docker registry URLs I've tried for DockerHub, as such (where I've redacted the organization/namespace and image):
{"message":"pull access denied for registry-1.docker.io//, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"},"error":"pull access denied for registry-1.docker.io//, repository does not exist or may require 'docker login': denied: requested access to the resource is denied"
The list of DockerHub URLs that I've tried follows, all failing with either the error above or a different "Invalid reference format" error:
Hub.docker.io
Hub.docker.io/v1/
Docker.io
Docker.io/v1/
Index.docker.io
Index.docker.io/v1/
Registry.hub.docker.com
registry-1.docker.io
Strangely enough, if I run it locally on my Windows system, the bolded URLs actually work. However, they all fail when deployed to the cluster (a different method of interacting with the Docker socket/npipe?).
Does anyone know the correct URL I need to set to properly authenticate and interact with DockerHub? Or if my current implementation and usage of the Docker.Dotnet library is incorrect in some way? Again, it works just fine with a private Docker registry on Artifactory, and also with DockerHub when run on my local Windows system. Any help would be greatly appreciated. I can provide any more information that is necessary, be it code or configuration.
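(For reference, a hedged sketch of the server address that the Docker CLI itself records for Docker Hub in ~/.docker/config.json, which is the usual candidate for AuthConfig.ServerAddress; credentials and image names below are placeholders:)
var authConfig = new AuthConfig
{
    Username = "myUser",                            // placeholder
    Password = "myPassword",                        // placeholder
    ServerAddress = "https://index.docker.io/v1/"   // Docker Hub's legacy index endpoint
};
// Hub images are addressed as <namespace>/<repository>, e.g. "myorg/myimage".
var pushParameters = new ImagePushParameters { Tag = "latest", RegistryAuth = authConfig };
await _dockerClient.Images.PushImageAsync("myorg/myimage", pushParameters, authConfig, new Progress<JSONMessage>());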

Who invokes IHost.Run when using Generic Host Builder in Xamarin.Forms?

I am trying to digest a Xamarin.Forms app (developed by James Montemagno) in which the Generic Host Builder is applied.
The following is the Init method extracted from this line:
public static App Init(Action<HostBuilderContext, IServiceCollection> nativeConfigureServices)
{
    // others are removed for simplicity
    var host = new HostBuilder()
        .ConfigureServices((c, x) =>
        {
            nativeConfigureServices(c, x);
            ConfigureServices(c, x);
        })
        .ConfigureLogging(l => l.AddConsole(o =>
        {
            o.DisableColors = true;
        }))
        .Build();
    App.ServiceProvider = host.Services;
    return App.ServiceProvider.GetService<App>();
}
Note: Init does not run the host; it is invoked from the Android project as follows:
protected override void OnCreate(Bundle savedInstanceState)
{
    // others are removed for simplicity.
    LoadApplication(Startup.Init(ConfigureServices));
}
Question
Now if I compare this with ASP.NET Core, we know that IHost.Run() is invoked in Program.Main. The question is:
Who invokes IHost.Run() in the Xamarin.Forms app above?
From the way I understand it, this setup does not actually run the host. Instead, it just relies on building it to make sure that DI and all the other related services are available and can be used by Xamarin.
Starting a host actually doesn't do that much with the generic host. It mostly just starts the host lifetime and runs hosted services (which is what an ASP.NET Core application would be).
But in Xamarin, starting the XAML application is something that already works on its own. So it doesn’t need to be “hosted” (although I rather think that we simply cannot have it owned by the host yet).
So this setup just makes use of the host environment to enable DI, configuration and logging, instead of using the host capabilities to actually run things. This also means that with this setup you probably won’t be able to run other hosted services within the application (unless you manage a way to properly start and stop the host within the App).
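If you did need hosted services in such an app, a hedged sketch (untested in Xamarin; where exactly you start and stop the host is an assumption) would be to keep the built host around and drive it manually:
// Sketch: drive the host manually instead of calling Run().
var host = new HostBuilder()
    .ConfigureServices((c, x) => { /* registrations as above */ })
    .Build();
await host.StartAsync();            // starts any registered IHostedService instances
App.ServiceProvider = host.Services;
// ... later, when the platform tears the application down:
await host.StopAsync();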

Publishing a shared appsettings file with .net core

I have a .NET Core solution that contains a database project (class library) and a console application. The database project contains EF migrations, and to run Add-Migration, most approaches use a hard-coded connection string in one place or another.
To avoid hard-coding (and/or duplication), I have created a shared appsettings.json file in the solution root, and I use it in my Main method and the class library.
In the console application
static void Main(string[] args)
{
    var settingPath = Path.GetFullPath(Path.Combine(@"../appsettings.json"));
    var builder = new ConfigurationBuilder()
        .AddJsonFile(settingPath, false);
    var configuration = builder.Build();
    var services = new ServiceCollection()
        .AddDbContext<MyContext>(options => options.UseSqlServer(configuration["ConnectionStrings:MyDatabase"]))
        .BuildServiceProvider();
}
And in the class library, to use migrations:
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<MyContext>
{
    public MyContext CreateDbContext(string[] args)
    {
        var settingPath = Path.GetFullPath(Path.Combine(@"../appsettings.json"));
        var builder = new ConfigurationBuilder()
            .AddJsonFile(settingPath, false);
        var configuration = builder.Build();
        var optionsBuilder = new DbContextOptionsBuilder<MyContext>()
            .UseSqlServer(configuration["ConnectionStrings:MyDatabase"]);
        return new MyContext(optionsBuilder.Options);
    }
}
This works well for development purposes when I use dotnet run, but when I publish the console application, it doesn't include the appsettings file. Other than running a PowerShell script as part of dotnet publish, is there any cleaner way of including this file when the project is published?
IDesignTimeDbContextFactory is exactly for the purpose its name describes. You shouldn't be running migrations against your production database in the first place, and if you do, you should be generating specific migrations for production into your app (instead of the class library) and using your app for the migrations. See the docs on using a separate project for migrations. That, then, negates the need to share your appsettings.json. Just leave the connection string hard-coded in your factory, since it's only for development anyways.
Now, you might have an issue, I suppose, in a team environment. However, even if you're using something like SQLite, you can use project-relative paths that won't be developer-specific, and with LocalDB, you can use a normal SQL Server connection string to the MSSQLLocalDB instance, which will be the same for every developer using Visual Studio. Regardless, even if you do need to specify the connection per developer, at that point it would make more sense to use user secrets anyway, since you wouldn't want that info to be committed to source control. Otherwise, each developer would end up clobbering the others' copy of appsettings.json, and you'd have a mess on your hands.
Long and short, just hard-code the connection string in your factory, or if you can't or won't, use user secrets for the connection string. In either case, you do not need to share appsettings.json.
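If you go the user-secrets route, a hedged sketch of the factory (assuming the Microsoft.Extensions.Configuration.UserSecrets package and a UserSecretsId configured on the project):
public class DesignTimeDbContextFactory : IDesignTimeDbContextFactory<MyContext>
{
    public MyContext CreateDbContext(string[] args)
    {
        // User secrets keep the developer-specific connection string out of source control.
        var configuration = new ConfigurationBuilder()
            .AddUserSecrets<DesignTimeDbContextFactory>()
            .Build();
        var optionsBuilder = new DbContextOptionsBuilder<MyContext>()
            .UseSqlServer(configuration["ConnectionStrings:MyDatabase"]);
        return new MyContext(optionsBuilder.Options);
    }
}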
The way I've done this before is to specify the startup project when you run dotnet ef (with the -s switch; the options are documented at https://learn.microsoft.com/en-us/ef/core/miscellaneous/cli/dotnet#common-options), as in the example below.
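For example (project and migration names here are hypothetical):
dotnet ef migrations add AddCustomerTable -p MyApp.Database -s MyApp.Console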
It gets messy quickly, and it's probably easiest to write some wrapper scripts for the project that deal with this kind of thing.

relying on a stateful service for configuration values?

We have approximately 100 microservices running. Each microservice has an entire set of configuration files such as applicationmanifest.xml, settings.xml, node1.xml, etc.
This is getting to be a configuration nightmare.
After exploring this, someone has suggested:
You can keep configs inside a stateful service, then change parameters through your API.
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Is there a centralized solution to maintaining so much configuration data for every microservice?
While a central configuration service seems like the way to go, building one introduces a few problems that you must get right every time. When you have a central configuration service, it MUST be updated with the correct configuration before you start your code upgrade, and you must of course keep previous configurations around in case your deployment rolls back. Here's the configuration slide that I presented when I was on the Service Fabric team.
Service Fabric ships with the ability to version configuration; you should use that, but not in the manner that Service Fabric recommends. For my projects, I use Microsoft.Extensions.Configuration for configuration. Capture the configuration events:
context.CodePackageActivationContext.ConfigurationPackageAddedEvent += CodePackageActivationContext_ConfigurationPackageAddedEvent;
context.CodePackageActivationContext.ConfigurationPackageModifiedEvent += CodePackageActivationContext_ConfigurationPackageModifiedEvent;
context.CodePackageActivationContext.ConfigurationPackageRemovedEvent += Context_ConfigurationPackageRemovedEvent;
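A hedged sketch of one such handler (the Configuration property and the rebuild strategy are assumptions for illustration, not the exact original code):
private void CodePackageActivationContext_ConfigurationPackageModifiedEvent(
    object sender, PackageModifiedEventArgs<ConfigurationPackage> e)
{
    // Service Fabric swapped in a new config package; rebuild the configuration root.
    Configuration = LoadConfiguration();
}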
Each of these event handlers can then call LoadConfiguration, which looks like this:
protected IConfigurationRoot LoadConfiguration()
{
    ConfigurationBuilder builder = new ConfigurationBuilder();

    // Get the name of the environment this service is running within.
    EnvironmentName = Environment.GetEnvironmentVariable(EnvironmentVariableName);
    if (string.IsNullOrWhiteSpace(EnvironmentName))
    {
        var err = $"Environment is not defined using '{EnvironmentVariableName}'.";
        _logger.Fatal(err);
        throw new ArgumentException(err);
    }

    // Enumerate the configuration packages. Look for the service type name, service name or settings.
    IList<string> names = Context?.CodePackageActivationContext?.GetConfigurationPackageNames();
    if (null != names)
    {
        foreach (string name in names)
        {
            if (name.Equals(GenericStatelessService.ConfigPackageName, StringComparison.InvariantCultureIgnoreCase))
            {
                var newPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject(name);

                // Set the base path to be the configuration directory, then add the JSON file for the service name and the service type name.
                builder.SetBasePath(newPackage.Path)
                    .AddJsonFile($"{ServiceInstanceName}-{EnvironmentName}.json", true, true)
                    .AddJsonFile($"{Context.ServiceTypeName}-{EnvironmentName}.json", true, true);

                // Load the settings into memory.
                builder.AddInMemoryCollection(LoadSettings(newPackage));
            }
        }
    }

    // Swap in a new configuration.
    return builder.Build();
}
You can now interact with configuration using the standard .NET configuration APIs. The last thing to cover is the format of the configuration files. In the PackageRoot\Config directory, you simply include your configuration files. I happen to use the name of the service plus the data center.
Internally the files look like this, with a JSON property for each Service Fabric class:
{
  "Logging": {
    "SeqUri": "http://localhost:5341",
    "MaxFileSizeMB": "100",
    "DaysToKeep": "1",
    "FlushInterval": "00:01:00",
    "SeqDefaultLogLevel": "Verbose",
    "FileDefaultLogLevel": "Verbose"
  },
  "ApplicationOperations": {
    "your values here": "<values>"
  }
}
If you've stuck with this so far: the big advantage is that the configuration gets deployed at the same time as the code, and if the code rolls back, so does the configuration, leaving you in a known state.
NOTE: It seems your question blurs two things: whether a single configuration service is reliable, and whether to use static vs. dynamic configuration.
For the debate on static vs dynamic configuration, see my answer to the OP's other question.
A config service sounds reasonable, particularly when you consider that Service Fabric is designed to be reliable, even for stateful services.
MSDN:
Service Fabric enables you to build and manage scalable and reliable applications composed of microservices that run at high density on a shared pool of machines, which is referred to as a cluster
Develop highly reliable stateless and stateful microservices. Tell me more...
Stateful services store state in a reliable distributed dictionary, enclosed in a transaction which guarantees the data is stored if the transaction was successful.
OP:
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Not necessarily. It's not really the service that is the single point of failure but a "fault domain" as defined by Service Fabric and your chosen Azure data centre deployment options.
MSDN:
A Fault Domain is any area of coordinated failure. A single machine is a Fault Domain (since it can fail on its own for various reasons, from power supply failures to drive failures to bad NIC firmware). Machines connected to the same Ethernet switch are in the same Fault Domain, as are machines sharing a single source of power or in a single location. Since it's natural for hardware faults to overlap, Fault Domains are inherently hierarchal and are represented as URIs in Service Fabric.
It is important that Fault Domains are set up correctly since Service Fabric uses this information to safely place services. Service Fabric doesn't want to place services such that the loss of a Fault Domain (caused by the failure of some component) causes a service to go down. In the Azure environment Service Fabric uses the Fault Domain information provided by the environment to correctly configure the nodes in the cluster on your behalf. For Service Fabric Standalone, Fault Domains are defined at the time that the cluster is set up
So you would probably want to have at least two configuration services running on two separate fault domains.
More
Describing a service fabric cluster
