I am searching for a function that allows me to put a dialog window (with a password query) in front of a folder before it is accessed. Is there such a function? Also, it would be great if this protection took effect before any program, even Windows Explorer or cmd.exe, is allowed to access those files. Is that possible?
I don't want to use something like an IO container, password-protected ZIPs, or anything else that is too slow, because I guess 20GB in one file is a bit of an overkill and it would take ages to decrypt that file. Is there maybe a VFS solution for C# which supports password protection and can be used as a normal filesystem or folder on the disk?
Thanks!
There are two options. The simpler one is to have a virtual file system mapped from the file. Our product, SolFS (OS edition), does exactly what you are asking in the second part of your question - it provides a container with optional encryption, which is exposed as a virtual drive so that access to the contents is transparent. Decryption in such systems is done in pages, so a 20GB file won't be decrypted as a whole, as you worry.
Another option is to employ a filesystem filter driver, which will intercept requests for directory opening and ask the user for a password. This approach is possible (we even have a product for this, called CallbackFilter), but there are two drawbacks to it: first, it's possible to remove the driver, leaving the data unprotected. Second, if you ask the user for a password in a callback while the OS is waiting for access to the directory, you can end up in a deadlock or a timeout while the user is thinking.
With these two limitations in mind, something like SolFS is the preferred and recommended approach.
PS: and we have free non-commercial licenses as well.
Is it possible to use either File.Delete or File.Encrypt to shred files? Or do both functions not overwrite the actual content on disk?
And if they do, does this also work with the wear leveling of SSDs and similar techniques of other storage devices? Or is there another function that I should use instead?
I'm trying to improve an open source project which currently stores credentials in plaintext within a file. For reasons I don't fully understand, they are always written to that file (I don't know why Ansible does this, and for now I don't want to touch that part of the code; there may be some valid reason it is that way), so I can just delete that file afterwards. So is using File.Delete or File.Encrypt the right approach to purge that information from the disk?
Edit: If it is only possible using a native API and P/Invoke, I'm fine with that too. I'm not limited to .NET only, but I am limited to C#.
Edit 2: To provide some context: the plaintext credentials are saved by the Ansible internals as they are passed as a variable to the modules that get executed on the target Windows host. This file is responsible for retrieving the variables again: https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1#L287
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/csharp/Ansible.Basic.cs#L373
There's a possibility that File.Encrypt would do more to help shred data than File.Delete (which definitely does nothing in that regard), but it won't be a reliable approach.
There's a lot going on at both the Operating System and Hardware level that's a couple of abstraction layers separated from the .NET code. For example, your file system may randomly decide to move the location where it's storing your file physically on the disk, so overwriting the place where you currently think the file is might not actually remove traces from where the file was stored previously. Even if you succeed in overwriting the right parts of the file, there's often residual signal on the disk itself that could be picked up by someone with the right equipment. Some file systems don't truly overwrite anything: they just add information every time a change happens, so you can always find out what the disk's contents were at any given point in time.
So if you legitimately cannot prevent a file getting saved, any attempt to truly erase it is going to be imperfect. If you're willing to accept imperfection and only want to mitigate the potential for problems somewhat, you can use a strategy like the ones you've found to try to overwrite the file with garbage data several times and hope for the best.
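For illustration, here is a minimal C# sketch of that overwrite-and-hope strategy. The method name is mine, and as explained above, the OS and hardware may still keep copies elsewhere, so treat it as best-effort only:

using System;
using System.IO;
using System.Security.Cryptography;

// Best-effort shredding: overwrite the file's bytes with random data a few
// times, flush, then delete. None of this is guaranteed to reach the physical
// medium (journaling, wear leveling, etc.), so it only mitigates the risk.
static void BestEffortShred(string path, int passes = 3)
{
    long length = new FileInfo(path).Length;
    byte[] buffer = new byte[4096];
    using (var rng = RandomNumberGenerator.Create())
    using (var stream = new FileStream(path, FileMode.Open, FileAccess.Write, FileShare.None))
    {
        for (int pass = 0; pass < passes; pass++)
        {
            stream.Position = 0;
            long remaining = length;
            while (remaining > 0)
            {
                rng.GetBytes(buffer);
                int chunk = (int)Math.Min(buffer.Length, remaining);
                stream.Write(buffer, 0, chunk);
                remaining -= chunk;
            }
            stream.Flush(true); // flush to disk; the hardware may still buffer
        }
    }
    File.Delete(path);
}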
But I wouldn't be too quick to give up on solving the problem at its source. For example, Ansible's docs mention:
A great alternative to the password lookup plugin, if you don’t need to generate random passwords on a per-host basis, would be to use Vault in playbooks. Read the documentation there and consider using it first, it will be more desirable for most applications.
I'm working on a project that is going to replace legacy software on our manufacturing floor. One of my concerns is that currently, config files, script caches, etc. are all plain text, stored on the system that the user is using. A lot of this stuff is going to get pushed off to limited-access network locations, but things like config files stay local. It has already been an issue with users thinking they know what they're doing with the system and modifying the config files. I don't want this happening anymore in the new software. How should I prevent this? Encryption? Some sort of signing/checksum with a database lookup? What kind of features does C#/.NET offer to help me out with this?
UPDATE: Just to address some things that were brought up in comments, every user on the manufacturing floor has admin access to the system they're working on. This isn't likely to change soon as most of the security comes from limiting access to folders on the network, web services, and databases. Permissions would be ideal, I agree, but I have to work in the environment that I'm provided. I plan to bring it up in a meeting that I have with IS to see if this is a possibility, but assume for now that this will be on a system where the user has full access.
This isn't a C# coding issue, it's a system configuration issue. Set up the machine such that the users have normal (non-admin) accounts. Set the file permissions on the config files you're worried about so that anyone (including your app running as current user) can read the config files, but only an admin can write the config files. Finally, don't give the users the admin password. ;>
If your app needs to be able to write to the config files as well, you'll have to add code to transition into admin mode within your app, preferably only around the write operation.
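A process can't elevate itself mid-run on Windows, so the usual pattern is to relaunch itself (or a small helper) elevated just for the write. A rough sketch, where the "--write-config" argument is a hypothetical flag your app would handle:

using System.Diagnostics;

// Relaunch the current executable elevated; "runas" triggers the UAC prompt.
var psi = new ProcessStartInfo
{
    FileName = Process.GetCurrentProcess().MainModule.FileName,
    Arguments = "--write-config",   // hypothetical flag handled by your app
    UseShellExecute = true,         // required for the "runas" verb
    Verb = "runas"
};
Process.Start(psi)?.WaitForExit();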
To prevent the average end-user from modifying config files by hand, you could simply sign the config file using the SHA of its contents concatenated with some secret factor known only by the program. This is obviously not a true or perfect secret, but it's enough to prevent simple tampering by end-users.
Basically (pseudo-code):
function isValidConfig(configPath, signaturePath) {
    return readFile(signaturePath) == SHA(readFile(configPath) + secret)
}

function writeConfig(contents, configPath, signaturePath) {
    writeFile(configPath, contents)
    writeFile(signaturePath, SHA(contents + secret))
}
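In C#, a minimal rendering of that pseudo-code might look like this (SHA-256 chosen arbitrarily; the embedded secret is, as noted below, recoverable by anyone who decompiles the program):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

const string Secret = "some-embedded-secret"; // hypothetical value

static string Sha256Hex(string text)
{
    using (var sha = SHA256.Create())
        return BitConverter.ToString(sha.ComputeHash(Encoding.UTF8.GetBytes(text)));
}

static bool IsValidConfig(string configPath, string signaturePath)
{
    return File.ReadAllText(signaturePath) == Sha256Hex(File.ReadAllText(configPath) + Secret);
}

static void WriteConfig(string contents, string configPath, string signaturePath)
{
    File.WriteAllText(configPath, contents);
    File.WriteAllText(signaturePath, Sha256Hex(contents + Secret));
}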
Short of decompiling the program, they won't be able to tamper with the config. I assume you don't have l33t crax0rs on your manufacturing floor...
This seems like a good job for a digital signature. A digital signature provides integrity and authentication for your data: it will detect whether the data (the config file) has been changed and confirm that the data originated from a trusted source. A digital signature is created by hashing the data and then encrypting the hash with the private key of a public/private pair. The application decrypts the encrypted hash, calculates the hash of the data, and compares the decrypted hash to the calculated hash. If the hashes match, the data is valid. If they do not match, the data has been altered.
.NET contains these functions in DSACryptoServiceProvider.VerifyHash.
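As a rough sketch of how that looks in practice (SignData/VerifyData wrap the hash-then-sign steps described above):

using System.IO;
using System.Security.Cryptography;

// The instance holding the private key signs; an instance initialized with
// only the public key can verify but not sign.
static byte[] SignConfig(string path, DSACryptoServiceProvider dsaWithPrivateKey)
{
    return dsaWithPrivateKey.SignData(File.ReadAllBytes(path));
}

static bool VerifyConfig(string path, byte[] signature, DSACryptoServiceProvider dsaPublicOnly)
{
    return dsaPublicOnly.VerifyData(File.ReadAllBytes(path), signature);
}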
Of course if you do not want to go through the hassle of creating a public/private key pair you could just go with a simple hash of the config file to make sure it hasn't been altered.
The really important question is: what are you going to do when the application detects an altered config file?
Are you going to have the application quit, lock out certain functions, send an e-mail to you, try to obtain a good copy of a config file? These actions are referred to as the penalty for failing the integrity check. Right now your application is not performing integrity checks on the config file, but when you add the check you will need to decide the best course of action for a failure.
An option is for you to move as much as you can from the config file to either IsolatedStorage or, even better, to the database. It would be highly unlikely that a typical user would know how to access them.
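For example, a sketch of writing a setting into isolated storage; the file name and setting are hypothetical:

using System.IO;
using System.IO.IsolatedStorage;

// The store lives in an obscure per-user location on disk, which deters
// casual editing but does not actually prevent a determined user.
using (var store = IsolatedStorageFile.GetUserStoreForAssembly())
using (var stream = new IsolatedStorageFileStream("settings.cfg", FileMode.Create, store))
using (var writer = new StreamWriter(stream))
{
    writer.WriteLine("SomeSetting=Value");
}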
I'd keep the files in some kind of structured storage, be it isolated storage, a slightly encrypted ZIP file, or something like our SolFS virtual file system (also encrypted). The secondary benefit of having one file is that it can be copied for backup easily.
I'm not an expert in local security, but maybe you could use file system permissions to prevent user access to a given folder or file.
Then, if your application needs to access this file, you will have to launch your application under a different Windows account that has the right to modify the file.
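On .NET Framework this can be done in code as well; a sketch (the path is hypothetical, and this would normally run once, as an administrator, at install time):

using System.IO;
using System.Security.AccessControl;

// Deny write access to the built-in Users group while leaving reads intact.
var configPath = @"C:\MyApp\app.config"; // hypothetical path
var security = File.GetAccessControl(configPath);
security.AddAccessRule(new FileSystemAccessRule(
    @"BUILTIN\Users",
    FileSystemRights.Write,
    AccessControlType.Deny));
File.SetAccessControl(configPath, security);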
I manage a software download site, and we've been trying to find a good way to present the downloads to students. Due to licensing restrictions, there are a large number of downloads that should only be accessible to certain students or staff, and many of the files are DVD ISOs or other large files. We started out by pushing all the downloads through code, but we found that files over 500 megs would just time out and die halfway through. (I think part of this problem is related to using AFS for a storage system instead of CIFS, but I won't go into that...)
What I was looking at doing was giving users a temporary url to the file that is only good for x number of minutes. I've seen this used on other sites before, but I wasn't sure what was involved with setting it up.
So first off, is this a workable solution for my scenario? Or will we still run into problems? And what is the best method for going about doing this? Thanks!
Something you could do is randomly generate a string in a database that corresponds to a file and do some sort of stealthed redirect to the actual file. This parameter would be passed as part of the query string and would allow you to invalidate URLs however you like by performing any kind of checking before sending the file.
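A sketch of the idea (in-memory here for brevity; in practice the token-to-file mapping would live in your database):

using System;
using System.Collections.Generic;

class DownloadTokens
{
    private readonly Dictionary<string, Tuple<string, DateTime>> tokens =
        new Dictionary<string, Tuple<string, DateTime>>();

    // Issue an opaque token for a file, valid for the given lifetime.
    public string Issue(string filePath, TimeSpan lifetime)
    {
        string token = Guid.NewGuid().ToString("N"); // hard to guess; use a CSPRNG for stronger guarantees
        tokens[token] = Tuple.Create(filePath, DateTime.UtcNow + lifetime);
        return token; // embed in the URL, e.g. /download?t=<token>
    }

    // Returns the file path if the token is known and unexpired, else null.
    public string Redeem(string token)
    {
        Tuple<string, DateTime> entry;
        if (tokens.TryGetValue(token, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        return null;
    }
}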
Well, as you haven't mentioned the IIS version, you may take a look at
http://learn.iis.net/page.aspx/389/configuring-ftp-with-net-membership-authentication/
This article explains how to configure an FTP server for ASP.NET Membership authentication. If you set this up, you can restrict the files based on roles.
Also, I don't see how you would implement an anonymous URL solution without pushing the downloads through code.
I'm currently conceiving a system that works like an anti-virus, but also uses white listing, i.e.
preventing viruses from running by keeping a database of known legitimate programs.
Yes, there is Windows UAC, but many viruses still "work around" it. I'm planning a more reliable system.
My system also has a database of known threats (cryptographic hashes).
Is this approach viable?
What are the possible loopholes in this approach?
I understand that there have been a lot of attempts at this, but I still want to try it out.
I'm planning to use C# and .NET for a prototype; maybe I'll move on to C++ for performance later.
Update:
Thank you all for your time and thoughts.
I decided to do some more research in this area before actually designing something, especially given the zero-day threat problem pointed out below.
What about DLLs used by executables? Do you hash them too? A virus can replace a DLL.
This has been brought up before, and there are products out there which do that. (Faronics Anti-Executable works like this)
There are two main problems with this approach:
A virus can embed itself into any file, not just EXEs. Programs can load DLLs and other bits of code (macros, scripts, etc.), and programs can contain bugs (such as buffer overflows) which can be exploited by malicious documents and other files.
Every time you patch a system or otherwise legitimately modify the software, you also need to update the white list.
There are products like AppSense Application Manager that do this already. It was temporarily pitched as a security product, but they changed tack and focused it on licensing. I think that's because it didn't work too well as a security product.
If you are planning to work with a limited set of applications and you can work with the application developers, you can use a code signing model. You can find a similar approach in most mobile operating systems. You have to sign all the executable modules, including libraries, and verify that they have a valid signature and have not been modified, using a root certificate.
If you are only planning to white list applications based on their hash values, you need to make sure your white-listed applications verify any modules they use before loading them. Even if the applications/installation files are digitally signed, that does not guarantee that a library won't be modified later in a malicious way.
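For reference, the hash check itself is the easy part; a sketch using SHA-256 (how you populate the white list is up to you):

using System;
using System.Collections.Generic;
using System.IO;
using System.Security.Cryptography;

// Compute the file's SHA-256 and look it up in the white list. This says
// nothing about the DLLs, scripts, or data files the program loads afterwards.
static string HashFile(string path)
{
    using (var sha = SHA256.Create())
    using (var stream = File.OpenRead(path))
        return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
}

static bool IsWhiteListed(string exePath, HashSet<string> knownGoodHashes)
{
    return knownGoodHashes.Contains(HashFile(exePath));
}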
In reality, it is not even enough to verify only executables and libraries. For example, the Xbox Linux hack utilizes a malicious save file: a specially prepared save file that causes a legitimate, signed application to behave in unexpected ways. And, of course, it is not possible to white list a save file based on its hash value.
Another problem with keeping a database is zero-day attacks. You need to be ahead of the curve in creating hash values for new attacks and propagating these updates to your users; otherwise they will be vulnerable to all new attacks - unless you allow only white-listed applications to be executed, and that would be really restrictive.
IMHO, it is really difficult to build such a system on an open platform. Good luck with it.
I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file.
Essentially if someone modifies this file outside of the program the program should fail to load it again.
EDIT: The program processes credit card information, so being able to change the configuration in any way could be a potential security risk. This software will be distributed to a large number of clients. Ideally, each client should have a configuration that is tied directly to the executable. This will hopefully keep a hacker from being able to get a fake configuration into place.
The configuration still needs to be editable, though, so compiling an individual copy for each customer is not an option.
It's important that this be dynamic, so that I can re-tie the hash to the configuration file as the configuration changes.
A better solution is to store the MD5 in the configuration file itself. But instead of taking the MD5 of just the configuration file, also include some secret "key" value, like a fixed GUID, in the MD5.
write(MD5(SecretKey + ConfigFileText));
To verify, you simply remove that MD5 and rehash the file (including your secret key). If the MD5s are the same, then no one modified it. This prevents someone from modifying the file and re-applying the MD5, since they don't know your secret key.
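A sketch of that scheme in C#, with the stored hash kept on the last line of the config file (the secret value is a hypothetical placeholder):

using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

const string SecretKey = "5f8e2a1c-example-fixed-guid"; // hypothetical secret

static string Md5Hex(string text)
{
    using (var md5 = MD5.Create())
        return BitConverter.ToString(md5.ComputeHash(Encoding.UTF8.GetBytes(text))).Replace("-", "");
}

// The last line of the file holds MD5(SecretKey + body); strip it off,
// rehash the body with the secret, and compare.
static bool VerifyConfigFile(string path)
{
    string[] lines = File.ReadAllLines(path);
    if (lines.Length == 0) return false;
    string storedHash = lines[lines.Length - 1];
    string body = string.Join("\n", lines, 0, lines.Length - 1);
    return storedHash == Md5Hex(SecretKey + body);
}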
Keep in mind this is a fairly weak solution (as is the one you are suggesting), as they could easily trace into your program to find the key or where the MD5 is stored.
A better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort.
If you REALLY want to store the information in your executable (which I would discourage), then you can just try appending it at the end of the EXE. That is usually safe. Modifying executable programs is virus-like behavior, and most operating system security will try to stop you, too. If your program is in the Program Files directory, your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE.
Update: I don't care if you are using asymmetric encryption, RSA, or quantum cryptography: if you are storing your keys on the user's computer (which you must do unless you route it all through a web service), then the user can find your keys, even if it means inspecting the registers on the CPU at run time! You are only buying yourself a moderate level of security, so stick with something simple. To prevent modification, the solution I suggested is the best. To prevent reading, encrypt it, and if you are storing your key locally, use AES (Rijndael).
Update: The fixed GUID / secret key could alternatively be generated at install time and stored somewhere "secret" in the registry. Or you could generate it every time from the hardware configuration, though then things get more complicated. To allow for moderate levels of hardware change, you would take 6 different hardware signatures and hash your configuration file 6 times - once with each, combining each one with a second secret value like the GUID mentioned above (either global or generated at install). When you check, you verify each hash separately, and as long as 3 out of 6 match (or whatever your tolerance is), you accept it. The next time you write the file, you hash it with the new hardware configuration. This allows the user to slowly swap out hardware over time and end up with a whole new system... maybe that is a weakness. It all comes down to your tolerance; there are variations based on tighter tolerances.
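A sketch of that tolerance check, assuming you already have a way to read the six hardware signatures (how you obtain them is out of scope here):

using System;
using System.Security.Cryptography;
using System.Text;

// Verify each stored hash against MD5(secret + hardware signature + config);
// accept if at least `required` of them still match the current hardware.
static bool VerifyWithTolerance(string configText, string[] hardwareSignatures,
                                string[] storedHashes, string secret, int required)
{
    int matches = 0;
    using (var md5 = MD5.Create())
    {
        for (int i = 0; i < hardwareSignatures.Length; i++)
        {
            byte[] hash = md5.ComputeHash(
                Encoding.UTF8.GetBytes(secret + hardwareSignatures[i] + configText));
            if (storedHashes[i] == BitConverter.ToString(hash)) // assumes hashes were stored in this format
                matches++;
        }
    }
    return matches >= required; // e.g. 3 out of 6
}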
UPDATE: For a Credit Card system you might want to consider some real security. You should retain the services of a security and cryptography consultant. More information needs to be exchanged. They need to analyze your specific needs and risks.
Also, if you want security with .NET, you need to start with a really good .NET obfuscator (just Google it). A .NET assembly is way too easy to disassemble, which makes it possible to get at the source code and read all your secrets. Not to sound like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning.
Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed?
Why not just keep all of the configuration information compiled in the executable? Why bother with an external file at all?
Edit
I just read your edit about this being a credit card info program. That poses a very interesting challenge.
I would think, for that level of security, some sort of pretty major encryption would be necessary, but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable.
Is authenticating against some sort of online source a possibility?
I'd suggest you use asymmetric key encryption for your configuration file, wherever it is stored, inside the executable or not.
If I remember correctly, RSA is one of the variants.
For an explanation, see Public-key cryptography on Wikipedia.
Store the "reading" key in your executable and keep the "writing" key to yourself. So no one but you can modify the configuration.
This has the advantages of:
No one can modify the configuration unless they have the "writing" key, because any modification will corrupt it entirely; even if they know the "reading" key, it would take ages to compute the other key.
A modification guarantee: any tampering is detectable.
It's not hard - there are plenty of libraries available these days. There are also a lot of key-generation programs that can generate really, really long keys.
Do some research on how to properly implement them, though.
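For what it's worth, the "reading"/"writing" key split maps naturally onto signing rather than encrypting: sign the config with the private key you keep, and ship only the public key for verification. A sketch:

using System.IO;
using System.Security.Cryptography;

// Sign on your machine with the private key; the shipped executable holds
// only the public key, so it can verify but never produce a valid signature.
static byte[] SignConfig(string configPath, string privateKeyXml)
{
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(privateKeyXml); // includes private parameters
        return rsa.SignData(File.ReadAllBytes(configPath), "SHA256");
    }
}

static bool VerifyConfig(string configPath, byte[] signature, string publicKeyXml)
{
    using (var rsa = new RSACryptoServiceProvider())
    {
        rsa.FromXmlString(publicKeyXml); // public parameters only
        return rsa.VerifyData(File.ReadAllBytes(configPath), "SHA256", signature);
    }
}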
Just make a const string that holds the MD5 hash and compile it into your app ... your app can then just refer to this const string when validating the configuration file.