I'm helping create an MVC 5 web application. Its models are being provided via a Web API. Currently, the endpoints for the Web API are hard-coded into our controllers:
public HomeController()
{
    string baseUrl = "http://webApi.example/api/";
    string endpoint1 = "endpoint1/{0}";
}
Obviously this is not the best approach. However, I'm not entirely sure where exactly we should put them. One of the challenges is the different environments the application will have to 'pass' through on its way to Production. For example, it must work 'as is' (with minimal configuration) in a development environment, QA, and ultimately production.
We've considered a few approaches, such as using the Web.config file. But that means we'd have to edit it in each and every environment ... and what if there are 30 development environments? Or 300? We've also considered reading things from the Windows Registry, under a special key. This could easily be ported to all environments and supply a 'generic' solution.
However, reading from the registry seems like a bad idea (though I have no proof that it is or isn't).
So, what architecture, strategy, or method could be used to create a valid configuration solution?
I've been using a Web.config for development, beta and production for years.
You certainly could store the setting in the registry, although a Web.config would be easier (editing the registry seems like more of a hassle). The one benefit of using the registry is for sensitive information (e.g. a service account password). This would allow the infrastructure team or server administrators to keep the password (registry key value) secret while letting developers deploy Web.config changes freely. In your case it's just a simple URL, so who cares?
Either way you're going to be storing the same number of values (either you have 30 different Web.config files or you have 30 different registry entries). However, if you get to the point of having 30 different environments, you're probably not going to be hand-rolling these configuration entries. You're probably at the scale of having continuous integration and deployments powered by software, where the values would be dynamic based upon the deployment settings.
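For the base-URL case in the question, the conventional starting point is an appSettings entry in Web.config read via ConfigurationManager, with per-environment values supplied by Web.config transforms or by your deployment tooling. A minimal sketch, assuming a hypothetical WebApiBaseUrl key name:
using System.Configuration; // requires a reference to System.Configuration
using System.Web.Mvc;

public class HomeController : Controller
{
    private readonly string _baseUrl;
    private readonly string _endpoint1;

    public HomeController()
    {
        // Web.config: <appSettings><add key="WebApiBaseUrl" value="http://webApi.example/api/" /></appSettings>
        // "WebApiBaseUrl" is a hypothetical key; per-environment values come from
        // Web.Debug.config / Web.Release.config transforms or CI-driven token replacement.
        _baseUrl = ConfigurationManager.AppSettings["WebApiBaseUrl"];
        _endpoint1 = _baseUrl + "endpoint1/{0}";
    }
}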
Related
I have to keep keys out of the project and repository. For this, I want to use User Secrets. But it is written there,
Call AddUserSecrets only when the app runs in the Development environment, as shown in the following example
And I can't understand or find a cause. Why can't I use it in the Production environment?
You can find it in the link you provided to the User Secrets documentation:
The Secret Manager tool doesn't encrypt the stored secrets and shouldn't be treated as a trusted store. It's for development purposes only. The keys and values are stored in a JSON configuration file in the user profile directory.
I think the short answer is that you probably could if you wanted to but that it is not what it is intended for.
My understanding is that the primary purpose of User Secrets is to keep credentials out of source control. In the days before GitHub and the cloud, most developers just stuck any and all credentials in the web.config and it was mostly ok. Then lots of people started using public repositories and AWS, and all of a sudden leaked keys became a serious problem (see, for example, TruffleHog: https://www.zdnet.com/article/trufflehog-high-entropy-key-hunter-released-to-the-masses/).
There are now a great many different tools out there for managing secrets; which one best suits your needs is a much harder question, but you could consider:
Are you using access controlled source control?
Are you cloud or on-prem for build and deploy?
Who has read access to the live servers?
How sensitive is the data you are storing?
What other applications are running on the server?
I was just poking around in the CreateDefaultBuilder method and found this, which is perhaps relevant:
if (hostingEnvironment.IsDevelopment())
{
    Assembly assembly = Assembly.Load(new AssemblyName(hostingEnvironment.ApplicationName));
    if (assembly != null)
    {
        config.AddUserSecrets(assembly, true);
    }
}
Obviously you don't have to use the default version and you could add secrets for all the environments, but there it is.
This is a development time only tool. Storing any kind of secret in a file is risky, because you may accidentally check it in. In production, you can for example use environment variables to hold secrets (or any other more secure mechanism.)
While environment variables are one of the most used options in web development, there are some reasons why this may not be the best approach:
1. The environment is implicitly available to the process and it's hard to track access. As a result, you may, for example, end up with an error report that contains your secrets.
2. The whole environment is passed down to child processes (if not explicitly filtered), so your secret keys are implicitly made available to any third-party tools that may be used.
These are some of the reasons why products like Vault have become popular.
So, you may use environment variables, but be aware.
User secrets are basically a JSON file somewhere in your user directory. That works well on your dev PC. But on a production system, the values should usually be injected through more production-ready configuration system(s), like environment variables, appsettings.json, or an Azure Key Vault. Environment variables and appsettings are already activated by default.
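As a rough illustration of that default layering (a hand-rolled sketch rather than the exact CreateDefaultBuilder code), providers registered later win, so an environment variable can override the same key that lives in appsettings.json:
using System;
using Microsoft.Extensions.Configuration;

public static class ConfigSketch
{
    public static IConfiguration Build()
    {
        string environmentName =
            Environment.GetEnvironmentVariable("ASPNETCORE_ENVIRONMENT") ?? "Production";

        return new ConfigurationBuilder()
            // Baseline, non-secret settings that live in the repo.
            .AddJsonFile("appsettings.json", optional: true)
            // Environment-specific overrides, e.g. appsettings.Production.json.
            .AddJsonFile($"appsettings.{environmentName}.json", optional: true)
            // Registered last, so environment variables win over the JSON files;
            // e.g. ConnectionStrings__Default injected by the host or the CI/CD pipeline.
            .AddEnvironmentVariables()
            .Build();
    }
}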
I am developing a .NET Core Web API 2.2 project and trying to protect it as best I can. This application will be connected to a SQL database and it will be sending emails from the server, and therefore I would like to figure out a good way of protecting my sensitive data (such as the connection string, database password, or even the email password for the SMTP account).
I have read that it is bad practice to store your passwords in plain text in a file somewhere, and that one of the best practices is to use some Microsoft Azure functionality (where you provide some key and it returns the actual password) that I have not yet used. Furthermore, I do not have any subscription with Azure, and for the time being I would prefer not to go that direction.
Another method proposed by some of you guys was to store all the passwords in environment variables and simply reference them in the application. I am currently exploring this option, as my app will be hosted on a 'virtual Windows server' that I do not have direct access to, and thus it's difficult (without direct access) to get there and set up environment variables (I'm not even sure whether that would be possible).
Finally, so far the best option (in case it is not possible to use the variables mentioned above) was to store connection strings and passwords directly in the appsettings.json file, but to encrypt them and decrypt them at run time. This option is surely feasible for me; however, I wanted to ask you guys (even though this might be quite subjective) whether this is a correct approach or whether there is something I have missed that could help me better protect my application from external threats.
Any suggestions or advice would be more than appreciated, as I do not really know how to proceed.
P.S. I am using a VSTS repository to store all the application code, which is probably (I am guessing) the reason why people suggest at least encrypting your passwords when storing them in appsettings.json.
The appsettings.json file should never be used for secrets, simply because it's committed to source control. That alone makes it a bad choice. However, there is also no capability to encrypt anything in appsettings.json. You could, I suppose, encrypt your secrets via some other means and merely place the ciphertext in appsettings.json manually after the fact, but then you would need some facility to decrypt the secret later, which then means exposing your means of encryption (i.e. your private key), which kind of defeats the entire point. Long and short, don't use appsettings.json.
Environment variables are a compromise solution. Since you manually set them on the server (not in your source control) and they can be made accessible only to certain users (restricted access), you get a modicum of security. However, they are also stored in plaintext, which means that if someone is able to access the server to view them, all security is out the window. Environment variables can also be set as part of your CI/CD pipeline in DevOps (formerly VSTS), so direct access to the server is not necessarily a prerequisite as long as the service account doing the deploy has the necessary access.
Azure Key Vault is the recommended approach because it's the only built-in config provider that supports encryption, meaning your secrets are encrypted at rest and pretty much secure end-to-end. However, there's nothing uniquely special about Azure Key Vault other than its ready availability. You can conceivably use any type of service that lets you store secrets securely; you may just have to write your own config provider to target it.
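For reference, wiring Key Vault into the configuration pipeline on ASP.NET Core 2.x looks roughly like this. This is a sketch assuming the Microsoft.Extensions.Configuration.AzureKeyVault package and an existing AAD app registration; the vault URL and the KeyVault:* setting names are placeholders, and Startup is your app's existing startup class:
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;

public class Program
{
    public static void Main(string[] args) =>
        CreateWebHostBuilder(args).Build().Run();

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration((context, config) =>
            {
                if (context.HostingEnvironment.IsProduction())
                {
                    // Settings already loaded by CreateDefaultBuilder (appsettings, env vars).
                    var builtConfig = config.Build();

                    // Placeholder vault URL and credential settings; in practice the
                    // client secret itself should come from an environment variable
                    // or a managed identity, not from a file in the repo.
                    config.AddAzureKeyVault(
                        builtConfig["KeyVault:VaultUrl"],
                        builtConfig["KeyVault:ClientId"],
                        builtConfig["KeyVault:ClientSecret"]);
                }
            })
            .UseStartup<Startup>();
}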
I have some kind of job scheduling implemented which calls a function ProcessJob. Now inside this method I need to generate a URL to one of my pages, i.e. DownloadPage.aspx?some_params. That URL is sent to the user via email, and when the user clicks that link, it will take them to the page.
The problem here is that I am not generating the URL in a web request method, and I don't have access to the Request object. The URL needs to be generated in a custom class that runs on its own thread, i.e. not in a web request.
So I can't go with these solutions:
HostingEnvironment.MapPath("test.aspx");
VirtualPathUtility.ToAbsolute("123.aspx");
HttpContext.Current.Request.Url.Authority;
None of these work because I think they all rely on the current request or session somehow. So how do I generate URLs for my app inside my code so I can use them any way I want?
If your method cannot use HttpContext.Current.Request.Url, for example in case it's a background scheduled task, then you can use either of the following options:
In case your code is hosted in the same ASP.NET application, you can pass the site's domain name to your class on the first request. To do so, you need to handle the Application_BeginRequest event, get the domain from HttpContext.Current.Request.Url, and then pass it to your class or store it in an application-scope storage. You can find an implementation in this post or the original article.
Note: The code is available in SO, so I don't repeat the code here.
If your code is not hosted in the same ASP.NET application, or if for any reason you don't want to rely on Application_BeginRequest, as another option you can store the site domain name in a setting (like appSettings in app.config, or Web.config if it's a web app) and use it in your code.
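For completeness, a rough sketch of what the first option could look like, assuming a hypothetical SiteUrlProvider holder class; the appSettings fallback covers the case where no request has been seen yet:
using System;
using System.Configuration;
using System.Web;

// Hypothetical holder the background job can read from.
public static class SiteUrlProvider
{
    private static string _baseUrl;

    // Falls back to a configured value until the first request has been captured.
    public static string BaseUrl =>
        _baseUrl ?? ConfigurationManager.AppSettings["SiteBaseUrl"];

    public static void Set(Uri requestUrl) =>
        _baseUrl = requestUrl.GetLeftPart(UriPartial.Authority);
}

// Global.asax.cs
public class Global : HttpApplication
{
    protected void Application_BeginRequest(object sender, EventArgs e)
    {
        SiteUrlProvider.Set(HttpContext.Current.Request.Url);
    }
}

// In the scheduled job:
// string link = SiteUrlProvider.BaseUrl + "/DownloadPage.aspx?some_params";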
You can do something like this. Dns.GetHostName will return the name of the computer that is hosting the site. You can use that to check if the site is on a development server.
string domain = "www.productionurl/123.aspx";
if (Dns.GetHostName() == "Development")
{
domain = "www.developmenturl/123.aspx";
}
The Dns.GetHostName() is not the only way to check. You could also use the HostingEnvironment.ApplicationPhysicalPath. You can check that also and see if the path is that of the development server.
My answer is: don't do this. You're building a distributed system, albeit a simple one, and generally speaking it is problematic to introduce coupling between services in a distributed system. So even though it is possible to seed your domain using Application_BeginRequest, you are then tying the behavior of your batch job to your web site. With this arrangement you risk propagating errors and you make deployment of your system more complicated.
A better way to look at this problem is to realize that the core desire is to synchronize the binding of your production site with the URL that is used in your batch job. In many cases an entry in the app.config of your batch job would be the best solution; there really isn't any need to introduce code unless you know that your URL will be changing frequently or you will need to scale to many different arbitrary URLs. If you have a need to support changing the URL programmatically, I recommend you look at setting up a distributed configuration system like Consul and reading the current URLs from your deployment system for both the IIS binding and the app.config file for your batch job. So even in this advanced scenario, there's no direct interaction between your batch job and your web site.
Let's suppose that we have two APIs, one for UserManagement and one for Auth.
UserManagement API is responsible for the initial invitation email (where I need a ResetPasswordToken, because this is my current app flow) and Auth API is responsible for password recovery (where I need a ResetPasswordToken).
Of course, I need to specify the same machine key for both applications.
Let's also suppose that those two applications will be deployed behind a load balancer: 2 apps x 3 instances.
Is it sufficient to have the same shared location for persisting keys (Redis or similar) in both APIs?
services.AddDataProtection().PersistKeysToRedis(/* */);
I'm thinking that if it works for a one-app, multiple-instances scenario, it will work for a multiple-apps, multiple-instances scenario too.
P.S.: I wasn't able to find anything about a locking mechanism (it seems there is one, judging by how it behaves).
Another thing I'm concerned about: race conditions?!
Duc_Thuan_Nguy Jun 9, 2017
Out of curiosity, how does key rolling handle concurrency? For example, let's say we have a web farm with 2 machines and a shared network directory. There may be a race condition in which both machines want to roll a new key at the same time. How is this situation handled? Or can the two machines roll their own new keys and, as long as they have access to both new keys, unprotect data smoothly?
Comment reference: https://learn.microsoft.com/en-us/aspnet/core/security/data-protection/implementation/key-management
Later edit: It looks like, if you have multiple apps, it isn't sufficient to specify that you want to persist keys in the same location. There is a concept of an application discriminator (apps are isolated from each other by default).
You will need something like the following:
services.AddDataProtection(configure =>
{
    configure.ApplicationDiscriminator = "App.X";
}).PersistKeysToRedis(/* */);
Locking and race condition questions are still valid.
No, it's not sufficient. ASP.NET Core's data protection isolates applications by default based on file paths or IIS hosting information, so multiple apps can share a single key ring but still not be able to read each other's data.
As the docs state:
By default, the Data Protection system isolates apps from one another, even if they're sharing the same physical key repository. This prevents the apps from understanding each other's protected payloads. To share protected payloads between two apps, use SetApplicationName with the same value for each app:
public void ConfigureServices(IServiceCollection services)
{
    services.AddDataProtection()
        .SetApplicationName("shared app name");
}
A quick update on this one: it seems like it's possible to eliminate race conditions by using the DisableAutomaticKeyGeneration method on all of your apps except the "main" one.
i.e. it will be
services.AddDataProtection()
    .SetApplicationName("shared app name");
for the main one and
services.AddDataProtection()
    .SetApplicationName("shared app name")
    .DisableAutomaticKeyGeneration();
for all other apps.
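Putting the pieces together, each app in the farm might be configured roughly like this. This is a sketch assuming the Microsoft.AspNetCore.DataProtection.Redis package and StackExchange.Redis; the connection string and the "DataProtection-Keys" key name are placeholders:
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;
using StackExchange.Redis;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Shared Redis instance used as the common key ring for every app/instance.
        var redis = ConnectionMultiplexer.Connect("redis.example:6379");

        var builder = services.AddDataProtection()
            // Same application name in every app so they can read each other's payloads.
            .SetApplicationName("shared app name")
            .PersistKeysToRedis(redis, "DataProtection-Keys");

        // In every app except the designated "main" one, to avoid concurrent key rolling:
        // builder.DisableAutomaticKeyGeneration();
    }
}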
I am developing a windows application for my client, in .NET Framework 3.5, using C#.
There is no need of any database in my application.
I want to secure my application with a registration process at the time of installation, where the user will be asked to enter a registration key. The key should be machine-dependent; otherwise the user could copy the installation folder and distribute it to others, which I don't want to happen.
Please suggest how to achieve this.
Thanks,
Bibhu
I believe you will need a registration service.
When the user registers (they'll need to be online), their registration 'code' is sent to your registration service along with their machine details / other identification (username?).
Your service verifies this & returns a key which can be decrypted by your app using their machine details / identification. Your service also marks that registration code as 'used' so that no one else can get a valid key by using it.
The application stores the valid key in registry, or even config. It won't work on another machine because it is specific to the machine details.
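If you go the registration-service route, one common shape is for the service to sign the machine fingerprint with an asymmetric private key and for the app to verify that signature with an embedded public key, so the client never holds the signing secret. A rough sketch assuming .NET 3.5; the class, the fingerprint string, and the key XML are all hypothetical placeholders:
using System.Security.Cryptography;
using System.Text;

public static class LicenseCheck
{
    // Public half of the service's key pair, embedded in the app (placeholder XML).
    private const string PublicKeyXml = "<RSAKeyValue>...</RSAKeyValue>";

    public static bool IsRegistrationKeyValid(string machineFingerprint, byte[] registrationKey)
    {
        using (var rsa = new RSACryptoServiceProvider())
        {
            rsa.FromXmlString(PublicKeyXml);
            byte[] data = Encoding.UTF8.GetBytes(machineFingerprint);
            // True only if the service really signed this machine's fingerprint.
            return rsa.VerifyData(data, new SHA1CryptoServiceProvider(), registrationKey);
        }
    }
}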
My suggestions are these three ways:
1) You can create a registry key after registration and check for that registry key at startup of your app.
2) You can create a web service (over the local network or the internet) and check at startup whether this version is registered or not.
3) Create a custom file, store a hashed value based on the machine in it, and check this file at startup of your app.
In all three ways, do not forget OBFUSCATION.
There is no way to guarantee software is secure. Even registering over a network can be faked with the use of packet analyzers. In securing software, all you can do is make it slightly inconvenient for professionals, difficult for dabblers, and impossible for people with no knowledge. Generally, it's accepted that obfuscation is not a good protection, because someone will eventually figure it out and publish it anyway.
Also keep in mind that the more secure you make your program, the less usable legitimate users are likely to find it. It's a hard balance to strike between usability, security, and the value of what you lose if security is broken. There is no hard and fast 'right' way to secure something.
For machine-dependent information, you can gather information about the hardware on that system, hash it somehow, store the value somewhere, and then check it at the launch of the program each time. It's not fool-proof, but it adds some security fairly easily.
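A minimal sketch of that idea, assuming .NET 3.5 and a handful of readily available machine/OS values; which values to combine is a judgment call, and more robust fingerprints usually pull disk or CPU serial numbers via WMI:
using System;
using System.Security.Cryptography;
using System.Text;

public static class MachineFingerprint
{
    public static string Compute()
    {
        // Combine a few machine-specific values into one string.
        string raw = string.Join("|", new[]
        {
            Environment.MachineName,
            Environment.OSVersion.VersionString,
            Environment.ProcessorCount.ToString(),
            Environment.GetEnvironmentVariable("PROCESSOR_IDENTIFIER") ?? string.Empty
        });

        // Hash the combined string so the stored value doesn't expose the raw details.
        using (var sha = new SHA256Managed())
        {
            byte[] hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return Convert.ToBase64String(hash);
        }
    }
}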