When I look at the samples of the Akka.NET actor remote deployment, it is done via actor system configuration like this:
actor {
  provider = "Akka.Remote.RemoteActorRefProvider, Akka.Remote"
  deployment {
    /MyActor {
      remote = "akka.tcp://MyActors@127.0.0.1:8091"
    }
  }
}
The string "MyActor" above is the actual actor name. It is not clear to me how I would deploy an actor with a dynamic name (e.g. "MyActor:{UID}"), or how I would deploy an unknown number of actors remotely. Is there a way to configure deployment options dynamically via code? It seems tedious and very limited to have to specify every remotely deployable actor name in the configuration.
You can set deployment options through the actor's Props, like: Props.Create(() => new MyActor()).WithDeploy(new Deploy(new RemoteScope(Address.Parse("akka.tcp://remote-system-name@ip:port")))).
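For example, here is a minimal sketch of deploying an arbitrary number of dynamically named actors remotely (assuming system is your local ActorSystem configured with Akka.Remote, and MyActor is the actor class from the question):

// Requires: using System; using Akka.Actor; using Akka.Remote;
var remoteAddress = Address.Parse("akka.tcp://MyActors@127.0.0.1:8091");

// Deploy any number of dynamically named actors on the remote system.
for (int i = 0; i < 5; i++)
{
    var props = Props.Create(() => new MyActor())
        .WithDeploy(new Deploy(new RemoteScope(remoteAddress)));
    system.ActorOf(props, $"MyActor-{Guid.NewGuid()}");
}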
A workaround that comes to mind is to deploy one supervisor actor with a fixed name remotely and use it as a 'proxy' to create, and possibly communicate with, all the other dynamically named child actors.
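A sketch of that proxy idea (CreateChild is a hypothetical message type, and the supervisor itself would be deployed remotely using the same RemoteScope technique as above):

using Akka.Actor;

// Hypothetical message asking the supervisor to create a named child.
public class CreateChild
{
    public CreateChild(string uid) { Uid = uid; }
    public string Uid { get; }
}

// A fixed-name supervisor that creates dynamically named children on
// demand and hands their IActorRef back to the sender.
public class SupervisorActor : ReceiveActor
{
    public SupervisorActor()
    {
        Receive<CreateChild>(msg =>
        {
            var child = Context.ActorOf(
                Props.Create(() => new MyActor()), $"MyActor-{msg.Uid}");
            Sender.Tell(child);
        });
    }
}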
I followed the "public access" documentation to set up the configuration. I have two goals. First, I want to create a topic from my local terminal using this command line: "/bin/kafka-topics.sh --create --bootstrap-server ZookeeperConnectString --replication-factor 3 --partitions 1 --topic ExampleTopicName", but it always returns "the broker is not available". Second, I want to connect to MSK from a local .NET application, but it seems I cannot connect to MSK successfully.
Here is some of the configuration attached to my MSK cluster:
Created public subnets 172.31.0.0/20 and 172.31.16.0/20 and attached an Internet Gateway.
Turned unauthenticated access control off and turned on the SASL/SCRAM access-control method. I also attached a secret for this authentication and set allow.everyone.if.no.acl.found to false in the cluster's configuration.
Turned on public access.
(Screenshots: cluster configuration, producer configuration, security group.)
Can anyone give me some advice or hints? From my research, I am not sure whether I have to add listeners to my cluster configuration. Thanks for your time and consideration.
I was struggling with MSK, too. I finally got it working and can maybe give some hints here:
according to the AWS docs, only SCRAM-SHA-512 is supported, not SCRAM-SHA-256
in the security group, I added a rule for inbound traffic to accept connections from anywhere (0.0.0.0/0)
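For the second goal (connecting from a local .NET application), here is a minimal producer sketch using the Confluent.Kafka client; the broker address, user name, and password are placeholders, and 9196 is the port MSK uses for public SASL/SCRAM access:

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class Program
{
    static async Task Main()
    {
        var config = new ProducerConfig
        {
            // Placeholder public broker address from the MSK console.
            BootstrapServers = "b-1.example.kafka.us-east-1.amazonaws.com:9196",
            SecurityProtocol = SecurityProtocol.SaslSsl,
            SaslMechanism = SaslMechanism.ScramSha512,
            SaslUsername = "my-user",     // from the Secrets Manager secret
            SaslPassword = "my-password"
        };

        using (var producer = new ProducerBuilder<Null, string>(config).Build())
        {
            var result = await producer.ProduceAsync(
                "ExampleTopicName", new Message<Null, string> { Value = "hello" });
            Console.WriteLine($"Delivered to {result.TopicPartitionOffset}");
        }
    }
}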
Hope that helps,
donbachi
We have approximately 100 microservices running. Each microservice has an entire set of configuration files, such as ApplicationManifest.xml, Settings.xml, node1.xml, etc.
This is getting to be a configuration nightmare.
After exploring this, someone has suggested:
You can keep configs inside a stateful service, then change parameters through your API.
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Is there a centralized solution to maintaining so much configuration data for every microservice?
While a central configuration service seems like the way to go, if you adopt one you introduce a few problems that you must get right every time. A central configuration service MUST be updated with the correct configuration before you start your code upgrade, and you must of course keep previous configurations around in case your deployment rolls back. Here's the configuration slide that I presented when I was on the Service Fabric team.
Service Fabric ships with the ability to version configuration; you should use that, but not in the manner that Service Fabric recommends. For my projects, I use Microsoft.Extensions.Configuration. Capture the configuration events:
context.CodePackageActivationContext.ConfigurationPackageAddedEvent += CodePackageActivationContext_ConfigurationPackageAddedEvent;
context.CodePackageActivationContext.ConfigurationPackageModifiedEvent += CodePackageActivationContext_ConfigurationPackageModifiedEvent;
context.CodePackageActivationContext.ConfigurationPackageRemovedEvent += Context_ConfigurationPackageRemovedEvent;
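A sketch of one such handler (Configuration is assumed to be a property on the service holding the current IConfigurationRoot):

private void CodePackageActivationContext_ConfigurationPackageModifiedEvent(
    object sender, PackageModifiedEventArgs<ConfigurationPackage> e)
{
    // Rebuild the configuration and swap it in when the package changes.
    Configuration = LoadConfiguration();
}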
Each of these event handlers can then call a method that loads the configuration, like this:
protected IConfigurationRoot LoadConfiguration()
{
    ConfigurationBuilder builder = new ConfigurationBuilder();

    // Get the name of the environment this service is running within.
    EnvironmentName = Environment.GetEnvironmentVariable(EnvironmentVariableName);
    if (string.IsNullOrWhiteSpace(EnvironmentName))
    {
        var err = $"Environment is not defined using '{EnvironmentVariableName}'.";
        _logger.Fatal(err);
        throw new ArgumentException(err);
    }

    // Enumerate the configuration packages. Look for the service type name, service name or settings.
    IList<string> names = Context?.CodePackageActivationContext?.GetConfigurationPackageNames();
    if (null != names)
    {
        foreach (string name in names)
        {
            if (name.Equals(GenericStatelessService.ConfigPackageName, StringComparison.InvariantCultureIgnoreCase))
            {
                var newPackage = Context.CodePackageActivationContext.GetConfigurationPackageObject(name);

                // Set the base path to be the configuration directory, then add the JSON
                // files for the service name and the service type name.
                builder.SetBasePath(newPackage.Path)
                    .AddJsonFile($"{ServiceInstanceName}-{EnvironmentName}.json", true, true)
                    .AddJsonFile($"{Context.ServiceTypeName}-{EnvironmentName}.json", true, true);

                // Load the settings into memory.
                builder.AddInMemoryCollection(LoadSettings(newPackage));
            }
        }
    }

    // Swap in a new configuration.
    return builder.Build();
}
You can now interact with the configuration using the standard .NET configuration APIs. The last thing to cover is the format of the configuration files. In the PackageRoot\Config directory, you simply include your configuration files. I happen to use the name of the service plus the data center.
The files internally look like this, where there is a JSON property for each Service Fabric class:
{
  "Logging": {
    "SeqUri": "http://localhost:5341",
    "MaxFileSizeMB": "100",
    "DaysToKeep": "1",
    "FlushInterval": "00:01:00",
    "SeqDefaultLogLevel": "Verbose",
    "FileDefaultLogLevel": "Verbose"
  },
  "ApplicationOperations": {
    "your values here": "<values>"
  }
}
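As a sketch of what consuming this looks like (Configuration holds the IConfigurationRoot returned by LoadConfiguration; binding a section to a hypothetical LoggingOptions class requires the Microsoft.Extensions.Configuration.Binder package):

// Read individual values with the standard colon-delimited keys.
var seqUri = Configuration["Logging:SeqUri"];
var flushInterval = TimeSpan.Parse(Configuration["Logging:FlushInterval"]);

// Or bind the whole section to a settings class in one go.
var loggingOptions = new LoggingOptions();
Configuration.GetSection("Logging").Bind(loggingOptions);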
If you've stuck with this this long: the big advantage of this approach is that the configuration gets deployed at the same time as the code, and if the code rolls back, so does the configuration, leaving you in a known state.
NOTE: It seems your question is blurred between whether a single configuration service is reliable and whether to use static vs. dynamic configuration.
For the debate on static vs dynamic configuration, see my answer to the OP's other question.
A config service sounds reasonable, particularly when you consider that Service Fabric is designed to be reliable, even for stateful services.
MSDN:
Service Fabric enables you to build and manage scalable and reliable applications composed of microservices that run at high density on a shared pool of machines, which is referred to as a cluster
Develop highly reliable stateless and stateful microservices. Tell me more...
Stateful services store state in a reliable distributed dictionary enclosed in a transaction, which guarantees the data is stored if the transaction was successful.
OP:
The problem I see with this is that there is now a single point of failure: the service that provides the configuration values.
Not necessarily. It's not really the service that is the single point of failure but a "fault domain" as defined by Service Fabric and your chosen Azure data centre deployment options.
MSDN:
A Fault Domain is any area of coordinated failure. A single machine is a Fault Domain (since it can fail on its own for various reasons, from power supply failures to drive failures to bad NIC firmware). Machines connected to the same Ethernet switch are in the same Fault Domain, as are machines sharing a single source of power or in a single location. Since it's natural for hardware faults to overlap, Fault Domains are inherently hierarchal and are represented as URIs in Service Fabric.
It is important that Fault Domains are set up correctly since Service Fabric uses this information to safely place services. Service Fabric doesn't want to place services such that the loss of a Fault Domain (caused by the failure of some component) causes a service to go down. In the Azure environment Service Fabric uses the Fault Domain information provided by the environment to correctly configure the nodes in the cluster on your behalf. For Service Fabric Standalone, Fault Domains are defined at the time that the cluster is set up.
So you would probably want to have at least two configuration services running on two separate fault domains.
More
Describing a service fabric cluster
Creating a Nancy self-hosted console application requires the local address, including the PORT, as a parameter:
using (var host = new NancyHost(new Uri("http://localhost:1234")))
{
host.Start();
Console.ReadLine();
}
While customizing the PORT is a valid use case, is it possible to use a HOST other than "http://localhost"? If yes, which ones, and for what reason?
Background:
I am creating a custom settings file for the server, and I wonder if it is enough to provide a 'Port' setting, or whether it is better to provide a 'Host' (or 'URL') setting that includes the HOST as well as the PORT.
Edit
To avoid hardcoding, the HOST part may be configurable via application settings (App.config), which is different from the custom settings file used by the server's administrator. However, I want to keep the custom settings file as simple as possible. Therefore, the question: is there any conceivable reason that the 'http://localhost' part should be modified?
The NancyHost constructor needs a valid Uri object, and to create one you can't get around specifying a HOST. Depending on your application, make the HOST editable either inside your program, through some form of communication, or via a settings file. Do not hardcode the HOST as localhost; even if you think it's going to stay that way, it's good practice to keep things modifiable. If you want your settings file to be as simple as possible, split it into two files:
basicSettings
advancedSettings
where advancedSettings only contains things you rarely, if ever, change and basicSettings contains the things you expect to be changed more frequently.
There might be a case at some point where you want to connect to another host because NancyHost has moved, either to the cloud or to another system in the same network (the latter is more probable). Just in case this happens, you should make it modifiable.
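As a minimal sketch, the host and port could come from App.config appSettings (the key names 'Host' and 'Port' here are an assumption, not a Nancy convention):

using System;
using System.Configuration; // reference System.Configuration
using Nancy.Hosting.Self;

class Program
{
    static void Main()
    {
        // Fall back to the usual defaults if the settings are absent.
        var host = ConfigurationManager.AppSettings["Host"] ?? "localhost";
        var port = ConfigurationManager.AppSettings["Port"] ?? "1234";

        using (var nancyHost = new NancyHost(new Uri($"http://{host}:{port}")))
        {
            nancyHost.Start();
            Console.WriteLine($"Listening on http://{host}:{port}");
            Console.ReadLine();
        }
    }
}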
I am building a proof of concept application using Azure Service Fabric and would like to initialize a few 'demo' user actors in my cluster when it starts up. I've found a few brief articles that talk about loading data from a DataPackage, which shows how to load the data itself, but nothing about how to create actors from this data.
Can this be done with DataPackages or is there a better way to accomplish this?
Data packages are just opaque directories containing whatever files you want for each deployment. Service Fabric doesn't load or process the data itself; you have to do all the heavy lifting, since only your code knows what the data means. For example, if you had a data package named "SvcData", Service Fabric would deploy the files in that package during deployment. If you had a file StaticDataMaster.json in that directory, you'd be able to access it when your service runs (either in your actor, or somewhere else). For example:
// Get the data package.
var dataPkg = ServiceInitializationParameters.CodePackageActivationContext
    .GetDataPackageObject("SvcData");

// Fabric doesn't load the data; it just manages it for you. The data is opaque to Fabric.
var customDataFilePath = dataPkg.Path + @"\StaticDataMaster.json";

// TODO: read customDataFilePath, etc.
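To get from the file to actors, you could deserialize it and activate one actor per record; a sketch, assuming a hypothetical IUserActor interface with an InitializeAsync method, Newtonsoft.Json for parsing, and a placeholder actor service URI:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Client;
using Newtonsoft.Json;

// Hypothetical shape of one record in StaticDataMaster.json.
public class DemoUser
{
    public string Id { get; set; }
    public string Name { get; set; }
}

public static class DemoSeeder
{
    public static async Task SeedDemoUsersAsync(string customDataFilePath)
    {
        var users = JsonConvert.DeserializeObject<List<DemoUser>>(
            File.ReadAllText(customDataFilePath));

        foreach (var user in users)
        {
            // Creating a proxy and calling a method activates the actor.
            var actor = ActorProxy.Create<IUserActor>(
                new ActorId(user.Id),
                new Uri("fabric:/MyApp/UserActorService")); // placeholder URI
            await actor.InitializeAsync(user.Name);
        }
    }
}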
I have a Windows service which is supposed to run in a (Windows Server 2012 R2) failover cluster as a generic service in a dedicated role, that is, there is a hostname and IP address configured for this service in the failover cluster manager. (I think 'role' used to be called 'group' in earlier Windows server releases).
One requirement is that the service has to know/provide the hostname of the role it is running in. System.Net.Dns.GetHostName() returns the name of the physical server on which the service is currently active, but what is needed is the configured hostname of the role.
I've searched both in the DNS API direction and in the MS documentation for the System.ServiceProcess namespace, but was not able to figure this out from these resources.
Is there a .Net API which is able to retrieve this, or is that the wrong approach altogether? (I.e. should this information be written into a configuration database during installation and retrieved from there).
There is a .NET API for Failover Clustering. Please refer to it here -
https://msdn.microsoft.com/en-us/library/aa372876(v=vs.85).aspx
As for your question, I believe every Role has an OwnerNode property, and this WMI class should help you:
MSCluster_Node class
[Dynamic, Provider("MS_CLUSTER_PROVIDER"), UUID("{C306EBED-0654-4360-AA70-DE912C5FC364}")]
class MSCluster_Node : CIM_UnitaryComputerSystem
{
  string Roles[];
};
https://msdn.microsoft.com/en-us/library/aa371446(v=vs.85).aspx
If you drill down to the methods, there is also an ExecuteNodeControl method, which even has a CLUSCTL_NODE_GET_ID control code:
https://msdn.microsoft.com/en-us/library/cc512216(v=vs.85).aspx
If the above doesn't help you, you can also try the reference below.
The MSCluster_ResourceToPossibleOwner class is a dynamic association WMI class that represents a list of the resources and their possible owner nodes.
https://msdn.microsoft.com/en-us/library/aa371478(v=vs.85).aspx
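For example, here is a sketch using System.Management to list each role's "Network Name" resource via the cluster WMI provider (MSCluster_Resource is a sibling of the classes above; this assumes the code runs on a cluster node where the provider is available):

using System;
using System.Management; // add a reference to System.Management

class Program
{
    static void Main()
    {
        // Connect to the cluster WMI namespace on the local node.
        var scope = new ManagementScope(@"\\.\root\MSCluster");
        scope.Connect();

        // Each clustered role with a client access point owns a
        // "Network Name" resource; its name usually matches the
        // configured hostname of the role.
        var query = new ObjectQuery(
            "SELECT * FROM MSCluster_Resource WHERE Type = 'Network Name'");
        using (var searcher = new ManagementObjectSearcher(scope, query))
        {
            foreach (ManagementObject resource in searcher.Get())
            {
                Console.WriteLine("Resource: {0}, Role: {1}",
                    resource["Name"], resource["OwnerGroup"]);
            }
        }
    }
}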
Hope this helps, I'm pretty new to doing stuff with Failover Clustering and C#. I hope I can learn from this post as well.