Relying on a stateful service for configuration values? - C#

We have approximately 100 microservices running. Each microservice has an entire set of configuration files such as ApplicationManifest.xml, Settings.xml, node1.xml, etc.
This is getting to be a configuration nightmare.
After exploring this, someone suggested:
You can keep configs inside stateful service, then change parameters
through your API.
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Is there a centralized solution to maintaining so much configuration data for every microservice?

While a central configuration service seems like the way to go, doing it introduces a few problems that you must get right every time. A central configuration service MUST be updated with the correct configuration before you start your code upgrade, and you must of course keep previous configurations around in case your deployment rolls back. Here's the configuration slide that I presented when I was on the Service Fabric team.
Service Fabric ships with the ability to version configuration; you should use that, but not in the manner that Service Fabric recommends. For my projects, I use Microsoft.Extensions.Configuration. Capture the configuration package events:
context.CodePackageActivationContext.ConfigurationPackageAddedEvent += CodePackageActivationContext_ConfigurationPackageAddedEvent;
context.CodePackageActivationContext.ConfigurationPackageModifiedEvent += CodePackageActivationContext_ConfigurationPackageModifiedEvent;
context.CodePackageActivationContext.ConfigurationPackageRemovedEvent += Context_ConfigurationPackageRemovedEvent;
Each of these event handlers can call a method that loads the configuration, like this:
protected IConfigurationRoot LoadConfiguration()
{
    ConfigurationBuilder builder = new ConfigurationBuilder();

    // Get the name of the environment this service is running within.
    EnvironmentName = Environment.GetEnvironmentVariable(EnvironmentVariableName);
    if (string.IsNullOrWhiteSpace(EnvironmentName))
    {
        var err = $"Environment is not defined using '{EnvironmentVariableName}'.";
        _logger.Fatal(err);
        throw new ArgumentException(err);
    }

    // Enumerate the configuration packages. Look for the service type name, service name or settings.
    IList<string> names = Context?.CodePackageActivationContext?.GetConfigurationPackageNames();
    if (null != names)
    {
        foreach (string name in names)
        {
            if (name.Equals(GenericStatelessService.ConfigPackageName, StringComparison.InvariantCultureIgnoreCase))
            {
                var newPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject(name);

                // Set the base path to be the configuration directory, then add the JSON file for the service name and the service type name.
                builder.SetBasePath(newPackage.Path)
                       .AddJsonFile($"{ServiceInstanceName}-{EnvironmentName}.json", optional: true, reloadOnChange: true)
                       .AddJsonFile($"{Context.ServiceTypeName}-{EnvironmentName}.json", optional: true, reloadOnChange: true);

                // Load the settings into memory.
                builder.AddInMemoryCollection(LoadSettings(newPackage));
            }
        }
    }

    // Swap in a new configuration.
    return builder.Build();
}
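For completeness, a modified-package handler can then simply rebuild and swap the configuration root. This is a minimal sketch; the Configuration property it assigns to is my assumption, not part of the Service Fabric API:
// Rebuild the configuration when Service Fabric reports a modified package.
// Assigning the new root is an atomic reference swap, so readers always see
// either the complete old or the complete new configuration.
private void CodePackageActivationContext_ConfigurationPackageModifiedEvent(
    object sender, PackageModifiedEventArgs<ConfigurationPackage> e)
{
    Configuration = LoadConfiguration(); // Configuration: assumed IConfigurationRoot property
}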
You can now interact with the configuration using the standard .NET configuration APIs. The last thing to cover is the format of the configuration files. In the PackageRoot\Config directory, you simply include your configuration files. I happen to name them after the service plus the environment/datacenter (for example, MyService-PROD.json, matching the file names built in LoadConfiguration above).
Internally, the files look like this, with a JSON property for each Service Fabric class:
{
  "Logging": {
    "SeqUri": "http://localhost:5341",
    "MaxFileSizeMB": "100",
    "DaysToKeep": "1",
    "FlushInterval": "00:01:00",
    "SeqDefaultLogLevel": "Verbose",
    "FileDefaultLogLevel": "Verbose"
  },
  "ApplicationOperations": {
    "your values here": "<values>"
  }
}
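Once built, a section can be bound to a plain options class with the standard .NET configuration binder. A minimal sketch, assuming a hypothetical LoggingOptions class, the IConfigurationRoot returned by LoadConfiguration above, and the Microsoft.Extensions.Configuration.Binder package:
// Hypothetical options class mirroring the "Logging" section above.
public class LoggingOptions
{
    public string SeqUri { get; set; }
    public int MaxFileSizeMB { get; set; }
    public int DaysToKeep { get; set; }
    public TimeSpan FlushInterval { get; set; }
    public string SeqDefaultLogLevel { get; set; }
    public string FileDefaultLogLevel { get; set; }
}

// Bind the section; the binder converts "100" and "00:01:00" to int and TimeSpan.
LoggingOptions logging = configuration.GetSection("Logging").Get<LoggingOptions>();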
If you've stuck with me this long: the big advantage of this approach is that the configuration gets deployed at the same time as the code, and if the code rolls back, so does the configuration, leaving you in a known state.

NOTE: It seems your question blurs two issues: whether a single configuration service is reliable, and whether to use static vs. dynamic configuration.
For the debate on static vs dynamic configuration, see my answer to the OP's other question.
A config service sounds reasonable, particularly when you consider that Service Fabric is designed to be reliable, even for stateful services.
MSDN:
Service Fabric enables you to build and manage scalable and reliable applications composed of microservices that run at high density on a shared pool of machines, which is referred to as a cluster
Develop highly reliable stateless and stateful microservices.
Stateful services store state in a reliable distributed dictionary enclosed in a transaction, which guarantees the data is stored if the transaction was successful.
OP:
The problem I see with this, is that there is now a single point of a failure: the service that provides the configuration values.
Not necessarily. It's not really the service that is the single point of failure but a "fault domain" as defined by Service Fabric and your chosen Azure data centre deployment options.
MSDN:
A Fault Domain is any area of coordinated failure. A single machine is a Fault Domain (since it can fail on its own for various reasons, from power supply failures to drive failures to bad NIC firmware). Machines connected to the same Ethernet switch are in the same Fault Domain, as are machines sharing a single source of power or in a single location. Since it's natural for hardware faults to overlap, Fault Domains are inherently hierarchical and are represented as URIs in Service Fabric.
It is important that Fault Domains are set up correctly since Service Fabric uses this information to safely place services. Service Fabric doesn't want to place services such that the loss of a Fault Domain (caused by the failure of some component) causes a service to go down. In the Azure environment, Service Fabric uses the Fault Domain information provided by the environment to correctly configure the nodes in the cluster on your behalf. For Service Fabric Standalone, Fault Domains are defined at the time that the cluster is set up.
So you would probably want to have at least two configuration services running on two separate fault domains.
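In Service Fabric terms, that usually means giving the (stateful) configuration service several replicas, which the cluster then spreads across fault domains for you. A sketch using the System.Fabric management API; every name below is a placeholder:
// Create the configuration service with multiple replicas so that the loss
// of a single fault domain cannot take it down. All names are placeholders.
var description = new StatefulServiceDescription
{
    ApplicationName = new Uri("fabric:/ConfigApp"),
    ServiceName = new Uri("fabric:/ConfigApp/ConfigService"),
    ServiceTypeName = "ConfigServiceType",
    HasPersistedState = true,
    TargetReplicaSetSize = 3, // replicas are placed in separate fault domains
    MinReplicaSetSize = 2,
    PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
};
using (var client = new FabricClient())
{
    await client.ServiceManager.CreateServiceAsync(description);
}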
More: Describing a Service Fabric cluster

Related

IDataProtector unable to decrypt when application runs as a service

I'm using ASP.NET Core data protection with the default DI behavior. That works fine when my ASP.NET application is hosted on IIS. Now I have an application that needs to run as a service, so I'm using Microsoft.Extensions.Hosting.WindowsServices to do the Windows service part with our standard
Host.CreateDefaultBuilder(args)
    .UseWindowsService()
The BackgroundService then hosts ASP.NET Core with your standard
var builder = Host.CreateDefaultBuilder()
    .ConfigureAppConfiguration((hostingContext, config) =>
    {
        config.AddJsonFile("secrets.json", optional: true, reloadOnChange: true);
    })
    .ConfigureWebHostDefaults(....)
Inside the background service, I can then resolve an instance of IDataProtectionProvider, create a protector, and use it to unprotect my secrets:
var dataProtectionProvider = Container.Resolve<Microsoft.AspNetCore.DataProtection.IDataProtectionProvider>();
var protector = dataProtectionProvider.CreateProtector(appName);
var decryptedSecret = protector.Unprotect(someSecret);
Now, all of that works fine as long as I run my application from the CLI. But running it as a service (same file, same location, and of course under the same account), I get an 'invalid payload' exception when I call Unprotect.
I know same path and same account is important, so that's taken care of. I also know that the application can find secrets.json as I wrote some probing code that checks if the file is present and can be read before I even try to unprotect. I'm even checking if the string I'm trying to unprotect is null/empty (which it isn't).
I finally installed a debug build as a service and attached the debugger. When I look at the IDataProtectionProvider, it has a Purpose; when running as a service, that's c:\windows\system32, while when my app runs from the CLI, it's the path to the exe. So, is there a way to specify the purpose on my own so things behave the same regardless of CLI/service?
So how can I control the purpose?
So, having noted the difference in purpose of the IDataProtectionProvider, I was well on my way to solving this. The solution was to set a static purpose, as explained here.
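For reference, a minimal sketch of pinning a static discriminator in ConfigureServices ("MyApp" is a placeholder; SetApplicationName is the Data Protection API for exactly this):
// Pin the application name so the default purpose no longer derives from
// the content root path (which differs between CLI and service runs).
services.AddDataProtection()
        .SetApplicationName("MyApp"); // placeholder; use any stable, unique value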

Azure Functions V2 With Service Bus Trigger in Development Team

We have Azure Functions (V2) that have been created with the Service Bus Trigger.
[FunctionName("MyFunctionName")]
public static async Task Run(
    [ServiceBusTrigger("%MyQueueName%", Connection = "ServiceBusConnectionString")]
    byte[] messageBytes,
    TraceWriter log)
{
    // code to handle message
}
The queue name is defined in the local.settings.json file:
{
  "Values": {
    ...
    "MyQueueName": "local-name-of-my-queue-in-azure",
    ...
  }
}
This works quite well: when deployed, we can set the environment variables to dev-queue-name, live-queue-name, etc. for the various deployed environments that we have.
However, when more than one developer is running locally, the local function app runners will all connect to the same queue, since local.settings.json is in source control (and needs to be, to properly maintain the environment variables). It is then random which developer's application picks up and processes each message.
What we need is for each developer to have their own queue, but we do not want to have to remove the JSON config file from source control so that we can maintain a different file (as it contains other pieces of information that need updating).
How can we get each developer / computer running our application to have a unique queue name (but known so that we can create the service bus queues in the cloud)?
You can override the setting value via environment variables. Settings specified as system environment variables take precedence over values in the local.settings.json file. Just define an environment variable called MyQueueName.
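As an illustration (the naming scheme here is my assumption, not part of the original answer), each developer could create a user-level override once, derived from the machine name so queue names are unique but predictable:
// Run once per developer machine, e.g. from a throwaway console app.
// User/system environment variables take precedence over local.settings.json.
Environment.SetEnvironmentVariable(
    "MyQueueName",
    $"dev-{Environment.MachineName.ToLowerInvariant()}-queue",
    EnvironmentVariableTarget.User);
Since the machine names are known, the matching queues can be created in Azure ahead of time.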
Having said that, I think committing local.settings.json to source control is generally not recommended. I suppose you also store your Service Bus connection string there, which means you are storing secrets in source control.
Note that the default .gitignore for Azure Functions projects excludes this file.
If you need it in source control, I would commit a version of local.settings.json with fake values for all variables, have each developer set up the proper values locally, and then ignore the changes on commit (e.g. with git update-index --assume-unchanged).

Unable to start an NServiceBus Windows Service

Problem Description
I have a Windows service which is hosting an NServiceBus endpoint in NServiceBus.Host.exe.
The binaries are deployed to c:\inetpub\bus\services\myService folder on the server.
A DSC script makes sure the Windows service is created/exists on the server, with the "path to executable" property of the service set to "c:\inetpub\bus\services\myService\NServiceBus.Host.exe" -service NServiceBus.Production.
Note! The -service switch is added when installing the service by using the built-in NServiceBus.Host.exe /install parameter, which is why I added it to the Windows service executable path in the DSC script.
Now, when I try to start the service manually on the server, it yields the following error message
Windows could not start the <service name> service on the Local Computer.
Error 1053: The service did not respond to the start or control request in a timely fashion.
Debugging Steps
I have looked through the event log, and the following two error messages stick out:
NServiceBus.Host.exe error:
Application: NServiceBus.Host.exe
Framework Version: v4.0.30319
Description: The process was terminated due to an unhandled exception.
Exception Info:
Topshelf.Exceptions.ConfigurationException
at Topshelf.Internal.Actions.RunAsServiceAction.Do(Topshelf.Configuration.IRunConfiguration)
at Topshelf.Runner.Host(Topshelf.Configuration.IRunConfiguration, System.String[])
at NServiceBus.Host.Program.Main(System.String[])
Local Activation permission error:
The application-specific permission settings do not grant Local
Activation permission for the COM Server application with CLSID
{D63B10C5-BB46-4990-A94F-E40B9D520160}
and APPID
{9CA88EE3-ACB7-47C8-AFC4-AB702511C276}
to the user <my_service_account_user> SID (<service_account_SID>) from
address LocalHost (Using LRPC) running in the application container
Unavailable SID (Unavailable). This security permission can be modified
using the Component Services administrative tool.
Note! The error above only occurs once, i.e. the first time I try to start the service. It does not appear again in the event log for any subsequent attempts to start the service.
What I have done so far:
Tried the suggestions in a closely related post here on SO, none of which worked.
Tried installing the service using the NServiceBus.Host.exe /install parameter. In this case, the service is created with a name in the following format: MyService.EndpointConfig_v1.0.0.0. Using this approach, the service starts successfully without any error message.
Stopped that service, then tried to start the service created by the DSC script (with a different name) => success
Removed the service created by NServiceBus, then tried to start the DSC-created service again => failure
Tried granting the service account used for logon when running the service various privileges (none of which yielded any success), among others:
Membership in the Administrators group
Membership in the Performance Log Users group
Full DCOM permissions via "Launch and Activation Permissions" in dcomcnfg
Tried running c:\inetpub\bus\services\myService\NServiceBus.Host.exe NServiceBus.Production from the CLI => success
Code
My Init() method for the service looks like this:
namespace MyService
{
    public class EndpointConfig : IConfigureThisEndpoint, AsA_Server, IWantCustomLogging, IWantCustomInitialization
    {
        public void Init()
        {
            Directory.SetCurrentDirectory(System.AppDomain.CurrentDomain.BaseDirectory);
            SetLoggingLibrary.Log4Net(() => XmlConfigurator.Configure(File.OpenRead(@"log4net.config")));
            GlobalContext.Properties["Hostname"] = Dns.GetHostName();
            GlobalContext.Properties["Service"] = typeof(EndpointConfig).Namespace;

            var container = new WindsorContainer(new XmlInterpreter());
            Configure.With()
                .CastleWindsorBuilder(container)
                .XmlSerializer()
                .MsmqTransport()
                .IsTransactional(true)
                .PurgeOnStartup(false)
                .IsolationLevel(System.Transactions.IsolationLevel.RepeatableRead);

            var connectionString = ConfigurationManager.ConnectionStrings["<some_conn_string>"].ConnectionString;
            container.Register(
                Component.For<IDatabaseProvider>()
                    .ImplementedBy<DatabaseProvider>()
                    .DependsOn(Property.ForKey("connectionString").Eq(connectionString)));
        }
    }
}
Theory
When installing the service using /install I assume that NServiceBus.Host.exe does some black magic under the hood - e.g. grants some necessary permissions - to make the service able to start.
Note! On the server, the latest version of NServiceBus is installed (v6.x). However, in my solution/service project version 2.x is used (please, do not ask if I can upgrade - unfortunately, that is not an option).
Appreciate any help I can get, as I am running out of ideas.
EDIT 1
I was asked why I can't just use the /install parameter of NServiceBus and be happy with that. The answer to that is that I could (and, actually, I currently am).
The reason I have still posted this question is twofold:
I wish to understand why one of two seemingly equivalent approaches fails
I am not completely happy with using the /install parameter. The reason? It boils down to a "chicken or the egg" problem. I use PowerShell DSC to provision servers in Azure, and I believe that ensuring Windows services exist on the server is the responsibility of DSC. However, the first time a server is provisioned, the services cannot exist unless I script their creation with DSC and point the executable path to where the service binaries will be deployed whenever that happens. The other alternative is to skip service creation in DSC and run NServiceBus.Host.exe /install as part of the service/application deployment script instead. Obviously, deployment cannot happen until after a server has been provisioned. Thus, it requires the Windows service part of the DSC script to be stripped down to merely ensuring that the service exists - a verification that will fail until a first-time deployment of the application has been performed.

Initializing Service Fabric Actors using DataPackages

I am building a proof of concept application using Azure Service Fabric and would like to initialize a few 'demo' user actors in my cluster when it starts up. I've found a few brief articles that talk about loading data from a DataPackage, which show how to load the data itself, but nothing about how to create actors from this data.
Can this be done with DataPackages or is there a better way to accomplish this?
Data packages are just opaque directories containing whatever files you want for each deployment. Service Fabric doesn't load or process the data itself; you have to do all the heavy lifting, as only your code knows what the data means. For example, if you had a data package named "SvcData", Service Fabric would deploy the files in that package during deployment. If you had a file StaticDataMaster.json in that directory, you'd be able to access it when your service runs (either in your actor, or somewhere else). For example:
// Get the data package.
var dataPkg = ServiceInitializationParameters.CodePackageActivationContext
    .GetDataPackageObject("SvcData");

// Fabric doesn't load the data; it just manages it for you. The content is opaque to Fabric.
var customDataFilePath = dataPkg.Path + @"\StaticDataMaster.json";

// TODO: read customDataFilePath, etc.
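As for actually creating actors from that data: one approach (a sketch, not the only way) is to deserialize the file and create an actor per record. DemoUser, IUserActor, and InitializeAsync are hypothetical types and methods, and Newtonsoft.Json plus Microsoft.ServiceFabric.Actors are assumed dependencies:
// Deserialize the demo users and create/initialize one actor per record.
// DemoUser, IUserActor, and InitializeAsync are illustrative assumptions.
var users = JsonConvert.DeserializeObject<List<DemoUser>>(File.ReadAllText(customDataFilePath));
foreach (var user in users)
{
    var actor = ActorProxy.Create<IUserActor>(new ActorId(user.Id));
    await actor.InitializeAsync(user); // hypothetical method that persists the state
}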

How to remotely deploy actor with dynamic name in Akka.NET

When I look at the samples for Akka.NET remote actor deployment, it is done via actor system configuration like this:
actor {
  provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  deployment {
    /MyActor {
      remote = "akka.tcp://MyActors@127.0.0.1:8091"
    }
  }
}
The string "MyActor" above is the actual actor name. It is not clear to me how would I deploy an actor that would have dynamic name (e.g. "MyActor:{UID}") or deploy unknown number of actors remotely? Is there a way to dynamically configure deployment options via code? It seems a bit tedious and very limited to specify all the remotely deployable actor names in the configuration.
You can set deployment options using the actor's Props, like: Props.Create(() => new MyActor()).WithDeploy(new Deploy(new RemoteScope(Address.Parse("akka.tcp://remote-system-name@ip:port/")))).
A workaround that comes to my mind is to have one fixed-name supervisor actor deployed remotely and use that one as a 'proxy' to create, and possibly communicate with, all the other dynamically named child actors.
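Putting the Props approach together, a sketch of deploying dynamically named actors onto the remote node from the question (workerCount and the name format are illustrative):
// Deploy dynamically named actors onto the remote node; the address comes
// from the question, workerCount and the naming scheme are illustrative.
var remoteAddress = Address.Parse("akka.tcp://MyActors@127.0.0.1:8091");
for (int i = 0; i < workerCount; i++)
{
    system.ActorOf(
        Props.Create(() => new MyActor())
             .WithDeploy(new Deploy(new RemoteScope(remoteAddress))),
        $"MyActor-{i}");
}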
