Preamble
I was digging deep into the .NET X509Certificate2 and X509Certificate classes and noticed that both serialize the private key to disk by default. This happens because their constructors verifiably use persist-key behavior: the default CspProviderFlags option set does not include CspProviderFlags.CreateEphemeralKey. You can verify this on any of the sites that host the .NET reference source, or with a copy of ReSharper if you want to check it yourself.
Looking further, these .NET objects in fact seem to be wrappers around an old-style crypt32.dll cryptographic context. What worries me is that, digging deeper into the code, the getters and setters for the PrivateKey property explicitly look in an on-disk file store for public and private keys, despite the fact that I would expect them to exist only in memory. Based on this, it seems to store my private key in that file store even if I do not want it to, because the constructor overloads listed in the documentation give no way to turn this behavior off; if you get or set the PrivateKey property, it is really just fetching it from this file store behind the scenes.
Reason for my question
I think that being forced to leak my private key to the file system, even temporarily, is a serious security issue, so I do not want to do it. I also do not trust the certificate store. Even if you disagree with me, I personally consider forced serialization of my private key to disk, when I do not intend it, a serious enough problem that it rules these APIs out for me.
Goal
My current goal is simply to be able to generate RSA public and private X.509 keys in C# that can be used on Windows, without being forced to write them to the hard disk as part of the creation process. I am not finding many valid options: all of the ones I can find seem to have either hidden gaps or publicly known issues.
The goal is not to never write them to disk; it is to not do so except with code I have written and know to be secure, so that the keys are fully encrypted at rest. That is, nothing should touch the disk unless I explicitly ask for it, after the keys themselves have been fully encrypted.
Goal tldr;
In short: I want to write secure software on Windows in C# that generates an X.509 certificate without forcing me to export, import, or otherwise leak my private key to the file system as part of certificate creation. I want to be able to create my own X509Certificates on Windows using as little third-party code as possible.
Options I have considered
The standard .NET types, as stated, have been looked at in depth and found lacking due to the issues above. I simply do not have enough access to the internal CSP settings, since the constructors do not expose them, and the defaults used are what I would personally consider insecure.
The Bouncy Castle library looked like a good option at first, but it exposes my public and private keys to the file system just like the normal .NET types do when it converts to them, since those objects store things behind the scenes without asking Bouncy Castle first.
I attempted to find a crypt32.dll P/Invoke route that would let me avoid serializing my keys, only to find that everything I tried did exactly that even when I did not want it to. Documentation was very hard to come by, so I mostly read the source, which did not seem to support not storing keys.
I am unable to find any other options despite spending several days researching this topic. Even Stack Overflow says to use Bouncy Castle or the native types in most cases, yet that does not solve my problem.
Question
How would I best create my own X.509 certificates without doing any of the following, as I feel they are all security issues:
The use of makecert.exe requires that I grant my process execute rights on the server so it can start a process; that is a security issue waiting to happen. Not to mention I cannot audit the code, and I am willing to bet it still stores the keys in the certificate store even if I do not want it to.
The use of the C#/.NET X509Certificate2 and X509Certificate objects serializes a generated private key to disk by default via their calls to the certificate store, so that is not an option. The goal is to do things securely, and the idea of every private key I generate being stored on disk for just anybody to access does not leave me with good feelings. It is also very hard to find information on the certificate store, so I cannot validate its security and have to rule it out.
The use of third-party libraries such as Bouncy Castle creates update scenarios and adds a third party that has to be trusted, enlarging the attack surface. Sadly, they all use the same conversions affected by what I am personally calling the "hidden private key store" issue: in trying to play nicely with the standard System.Security.Cryptography API, they use the same objects that have this key-leaking behavior. If we want to keep the system as secure as possible, that sort of leak is not an option.
Without going super-low-level (or disabling paging altogether) you can't guarantee that your private key material is never written to disk, because it can always be paged out. Even then, on a system that supports hibernation your key will be present in the hibernation file.
The crypt32 key storage mechanism uses DPAPI (user- or machine-scoped, depending on your import flags and/or flags in the PFX) to encrypt the key material at rest. So while it does involve the filesystem, the keys are not "stored on disk for just anybody to access": an attacker would need to be running as you, or have permission to access the system keys.
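For what it's worth, newer .NET versions (4.7.2 on the Framework side, and .NET Core) also added X509KeyStorageFlags.EphemeralKeySet, which asks for the imported private key to be kept in memory only rather than persisted to the key store. A sketch, with placeholder file name and password:

```csharp
using System;
using System.Security.Cryptography.X509Certificates;

class EphemeralImport
{
    static void Main()
    {
        // Placeholder input: any PFX/PKCS#12 blob you already hold.
        byte[] pfxBytes = System.IO.File.ReadAllBytes("cert.pfx");

        // EphemeralKeySet loads the private key into memory without
        // persisting it to the user or machine key store. Note that some
        // Windows APIs (e.g. certain SChannel scenarios) cannot use
        // ephemeral keys, so test against your actual consumers.
        using (var cert = new X509Certificate2(
            pfxBytes, "pfx-password", X509KeyStorageFlags.EphemeralKeySet))
        {
            Console.WriteLine(cert.HasPrivateKey); // key usable, not persisted
        }
    }
}
```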
Like others have said, if you are this concerned and not willing to trust any outside APIs, then you are going to have to use a lower-level language than .NET. .NET is a great RAD-type platform, but if you don't trust the machine you are running on, you will want the lowest-level environment you can achieve.
C would be a good option.
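That said, if upgrading the runtime is an option: newer .NET versions (4.7.2+ / .NET Core 2.0+) added a managed CertificateRequest API that can generate a self-signed certificate without makecert and, on .NET Core at least, without touching the persisted key store. A sketch with a placeholder subject name; verify the key-persistence behavior on your target runtime:

```csharp
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;

class InMemoryCert
{
    static void Main()
    {
        // The RSA key pair is generated in-process; on .NET Core nothing is
        // written to the key store unless you export/import with persisting
        // flags. On .NET Framework, check what RSA.Create returns.
        using (RSA rsa = RSA.Create(2048))
        {
            var req = new CertificateRequest(
                "CN=example-inmemory",   // placeholder subject
                rsa,
                HashAlgorithmName.SHA256,
                RSASignaturePadding.Pkcs1);

            using (X509Certificate2 cert = req.CreateSelfSigned(
                DateTimeOffset.UtcNow,
                DateTimeOffset.UtcNow.AddYears(1)))
            {
                Console.WriteLine(cert.HasPrivateKey);
            }
        }
    }
}
```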
Related
I'm using some third-party DLLs in my code which require a license key to work.
The license key needs to be passed into the methods when called, or they don't work.
However, I have written my code in C#, which means this license key can be retrieved by decompiling my code.
Is there any way to prevent this license string from being recovered by decompilation?
(adding as an answer because the message is too long for a comment)
There is no way to completely block access to this string, you can only make it harder. Even if you encrypt it, your program will need the ability to decrypt it, which means that anyone skilled enough can access and reuse the decryption mechanism to get the key in clear format. Or they could even simply read the decrypted key from memory.
You could store the License key in app.config and then encrypt it: http://www.codeproject.com/Articles/18209/Encrypting-the-app-config-File-for-Windows-Forms-A
That probably gets you the most bang for the least buck.
Don't worry too much about it. If your application has access to the key then anyone with access to the binaries and enough time can get that key.
Keep in mind that it is not your licensing scheme that is at risk, it is the licensing scheme of the third party, so you have to rely on the applicable laws. If someone decompiles your application and gets access to the key, they are breaking the law, and that is something you cannot prevent.
Your app has the legal right to use that third-party library; abusing the key has to be handled by law, not by protection, since it is a violation of the third party's licensing terms. You can go to certain lengths to protect the key, but in the end the responsibility is not yours, as there is no way the data can be completely protected: your app needs to know the key, so at some point it will be available in plain text for anyone with access to the code to pick up. That is true even if the app were written entirely in assembly language.
Add strong verbiage to your license agreement against de-compiling and/or using any licenses that you have acquired in their own software. Require users to electronically accept these terms. Record the timestamp and IP address of each user that accepts these terms on your own server.
If you want to add some basic obfuscation in addition to this, that is OK too. But the main point is that, as a licensed redistributor of this component, you are making a concerted effort to prevent others from using your key.
Here's something simple without encryption, to stop a plain-text view of the license:
Store the string in your code and do some replacing before using it, for example:
string LicenseCode = "croftycot";
// Use license code
CallUsingLicense(LicenseCode.Replace("o","a")); // Change to real code: craftycat
You can get more complicated if you need to, with regular expressions, mixing a few strings together, etc.
I love C# for programming applications (I consider myself intermediate with C#, and a bit less with C/C++, though I am only learning, nothing real yet in that arena), and I liked it until I discovered that anyone who understands MSIL (not an easy skill to learn either) can decompile my code. I don't really care about someone decompiling my code; my real concern is security for my eventual users. I know obfuscators exist, and I hear one or two are really good (even if they only delay decompilation).
For example, if I want to decrypt something using C#, the key has to be somewhere in the code, which is a danger for anyone who uses my program: someone could decrypt a file encrypted by my program by digging through my MSIL and finding my key. For this reason, I think developing serious applications that encrypt and decrypt things (or use OpenSSL) in C# is insane.
I mean, most users won't know what language was used to build an exe, but plenty of people can program in C#, an elite of those can read MSIL, and a minority of that elite likes to hack whatever can be hacked. Of those, some will do it with perverse intentions (in the value-less world we live in, that shouldn't surprise anyone).
So if I want to make a program that downloads a file from the internet, someone could interfere with the transmission and do some evil, even if I use OpenSSL with C#, because the key is somewhere in the C# binary. I know preventing hacking entirely is probably impossible, but it makes C# look like a very insecure choice.
Does the same happen with Java? (Java has the same "interpreted" and "decompilable" structure as C#.) I mean, is the key similarly visible, to a suitably educated eye, somewhere in the build output? Or does Java use some C/C++-based API that makes it much harder to decompile the file where the key lives, and so much harder to get the key?
Is my only option to write my program in C/C++? If so, my only realistic option is C++Builder, since it is hell to even read (let alone learn) MFC/OWL code; I can hardly think of anyone who could like MFC/OWL programming. In fact, I suppose assembly could be of more interest in today's programming world.
So here I am, hoping someone can explain a better way to store crypto keys securely for encryption and decryption, or to use OpenSSL with C#, or even with Java. I would also like to confirm whether C/C++ (or another natively compiled language, e.g. Delphi) is the only way to really use these features with some security against decompilation.
If anyone knows a site where I can find precise information about this reasoning (especially one that shows where my analysis is wrong), please tell me. If anyone can confirm my analysis, please confirm. If anyone finds a hole in it, again, please tell me, and point me to more information that would lead me to a better understanding of all this.
I am sorry for making this philosophical programming question so long.
Thank you,
McNaddy
Could I hide the encryption key of a C# exe securely (in a way that cannot be recovered by any known decompilation technique), as in C/C++?
No. You can't do that in any language.
The .NET security system is designed to protect benign users from hostile code. You are trying to protect benign code from hostile users. You simply cannot do that, so don't even try. If you have a secret, do not share it with anyone.
The purpose of crypto is to leverage the secrecy of some private key into the secrecy of a text. If that is not the security problem you face, crypto is the wrong tool. Explain the security problem you actually have and someone here can help you solve it.
So, if I want to make a program that download a file from the internet, someone could interfere the transmission and do some evil, even if I use OpenSSL with c#, because somewhere in the c# file is the key.
You don't need to store a secret key in the program just to download a file safely.
If you want to ensure that the file you downloaded is authentic and hasn't been modified in transit, you use a digital signature. The private key used to make the signature doesn't have to be (and shouldn't be) distributed with the program; all the program needs is the corresponding public key, which you don't have to hide.
If you want to prevent eavesdroppers from reading the file as it's downloaded, then you need to encrypt it, but that can be done with a temporary session key generated randomly for each download; it doesn't have to be stored anywhere. If you use HTTPS for your download, it'll do this for you.
The choice you've mentioned (embedding the key into the executable) is bad irrespective of the language you choose: it is not too hard to extract data from C/C++, and slightly easier still for C#/Java.
As Jordão said, you need to figure out your story for distributing the key outside the binaries. You also need to figure out what you are actually trying to protect and understand the possible exploits. Just using encryption of some sort in an application does not make it more secure.
You should not store cryptographic keys inside assemblies; they should normally be provided from outside, e.g. from a key-store, or derived from a secret known to a user.
You can also generate a key from a password (though this means the key is no stronger than the password). Each time the user runs the program, they are prompted for the password, which is then used to derive the key. Depending on your requirements you could employ this in a variety of ways.
When the user needs to access the encrypted data, the password is provided again and the key is derived for use during that session. Once the program is closed the key is discarded (there are techniques and APIs in C# to help ensure that sensitive data stays in memory for as short a time as possible).
For example, this is essentially what many password-storing programs like KeePass or RoboForm do. The user can upload and download the encrypted data to and from servers. No keys are ever stored; they are generated on demand when the user supplies their password for that session.
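A sketch of the password-to-key derivation using the built-in Rfc2898DeriveBytes (PBKDF2); the salt size and iteration count here are illustrative choices:

```csharp
using System;
using System.Security.Cryptography;

class PasswordKey
{
    // Derive a 256-bit key from a user-supplied password. The salt should be
    // random per user and stored alongside the ciphertext; the iteration
    // count (100000 here) is an illustrative choice.
    public static byte[] DeriveKey(string password, byte[] salt)
    {
        using (var kdf = new Rfc2898DeriveBytes(password, salt, 100000))
        {
            return kdf.GetBytes(32);
        }
    }

    static void Main()
    {
        byte[] salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        byte[] key = DeriveKey("correct horse battery staple", salt);
        // Same password + same salt always yields the same key, so nothing
        // needs to be persisted except the (non-secret) salt.
        Console.WriteLine(key.Length);
    }
}
```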
With a service like Dropbox, when you register with their site, they generate the private key on their server and keep a copy there. So the user's machine and client software never store the key, but the server has a copy stored. Dropbox does this so that they can decrypt user data for many purposes, such as compression, de-duplication, compliance with law enforcement, etc.
We have around 60 MB of device configuration implemented in at least 1000 xml files.
Now we are releasing the software to customers, but our requirement is to not allow the user to view or edit the XML configuration files. The XML configuration files contain a lot of secret device information which could easily be extracted if they are readable.
Now we need to encrypt the xml files. Are there any recommended method to encrypt the xml file and it can be decrypted at run time?
This is a problem known from DRM applications: you want to make the data available to the user agent of your choice, but not to the user operating that user agent. Since the user agent is usually on the user's side, as Jon and Oded point out, a determined hacker will find a way to break the encryption. It's a cat-and-mouse game; you are trying to solve exactly the problem that DRM implementers want to solve.

Software-only user agents are easier to hack than hardware-assisted ones, but in either case time works for the hackers. The latest development is the latter approach: embedding all the cryptography in hardware, like HDMI's HDCP (High-bandwidth Digital Content Protection), where the decrypted digital signal is made inaccessible to the user by passing it through black-box hardware from the point of decryption to the intended destination, the TV screen. The key to HDCP's success, however, was implementing it in hardware. Most hackers have learned to deal with software, but since I would say there is one good hardware hacker per hundred good software hackers these days, the mouse hopes no cat will be around to catch it.

Sorry for so much theory, but I believe it is essential to your problem. If you are still willing to play the game: encrypt your XML files and make sure the decryption key is not handed to potential hackers on a silver platter, i.e. obfuscate it; you can't do much else.
How determined are you expecting the "hackers" to be? If all the information required to decrypt the data has to be present on the system anyway, then a determined attacker is going to be able to get at it.
You can use the classes in the System.Security.Cryptography namespace.
Most of the encryption classes will allow you to encrypt and decrypt streams, so are good for your purpose.
However, you will still need to hold the encryption keys somewhere, even if it is in the assembly.
As Jon points out, a determined hacker will find a way to break any encryption.
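A minimal sketch of that stream-based approach with the built-in Aes class (the all-zero demo key is a placeholder; key management remains the hard part, as noted):

```csharp
using System;
using System.IO;
using System.Security.Cryptography;
using System.Text;

class XmlEncryption
{
    // Encrypt a buffer with AES; the random IV is prepended to the output so
    // decryption is self-contained. Where the key lives is the caller's problem.
    public static byte[] Encrypt(byte[] plain, byte[] key)
    {
        using (Aes aes = Aes.Create())
        using (var ms = new MemoryStream())
        {
            aes.Key = key;
            ms.Write(aes.IV, 0, aes.IV.Length);
            using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plain, 0, plain.Length);
            }
            return ms.ToArray();
        }
    }

    public static byte[] Decrypt(byte[] blob, byte[] key)
    {
        using (Aes aes = Aes.Create())
        {
            byte[] iv = new byte[aes.BlockSize / 8];
            Array.Copy(blob, iv, iv.Length);
            aes.Key = key;
            aes.IV = iv;
            using (var ms = new MemoryStream(blob, iv.Length, blob.Length - iv.Length))
            using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Read))
            using (var outMs = new MemoryStream())
            {
                cs.CopyTo(outMs);
                return outMs.ToArray();
            }
        }
    }

    static void Main()
    {
        byte[] key = new byte[32]; // demo only: never use a fixed all-zero key
        byte[] xml = Encoding.UTF8.GetBytes("<config secret=\"42\"/>");
        byte[] enc = Encrypt(xml, key);
        Console.WriteLine(Encoding.UTF8.GetString(Decrypt(enc, key)));
    }
}
```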
As others explained, you won't get it absolutely secure without a trusted device which stores the key and does the decryption without granting access to the key under any circumstances. Computers aren't "trusted devices"...
My employer sells such technology and if your data is really money worth, you should possibly take such a solution into account.
If an additional USB dongle is not acceptable (or too expensive), at least use public-key (asymmetric) cryptography (see System.Security.Cryptography).
Asymmetric cryptography has the advantage that the key used to decrypt your data can't be used to encrypt the data.
Your application has to store the decryption key, and a hacker can determine it with more or less effort. He can then decrypt all your data, but he cannot encrypt the changed data again, so he cannot use your application with modified data.
If you want to prevent him from doing this, you have to obfuscate your application and use anti-debugging techniques (static and runtime). If you go this way buying an existing solution is probably cheaper.
Watch out: hackers can see all functions in .NET-generated executables and DLLs!
If you put a decryption routine like DecryptXML(string path) in your .NET project, it is very easy for a hacker to call it directly. So be sure to obfuscate your project (e.g. with Dotfuscator).
I'm developing an intranet application (C#) that uses some data (local to the web server) that we'd like to keep private. This data is encrypted (AES) using a legacy data repository. We can't totally prevent physical access to the machine.
Clearly, we're never going to have perfect security here. However, we want to make it as hard as possible for anyone to gain unauthorized access to the data.
The question is how best to store the key. Encrypting it based on some machine-specific ID is an option, but that information would be readily available to anyone running a diagnostic tool on the machine.
Encoding it in the application is an option (it's a one-off application). However, .NET assemblies are pretty easy to decompile. So would it be best to obfuscate it, use an encrypting launcher, or compile it natively?
Or is there an option I'm missing?
Just so we're clear, I know it's pretty much a lost cause if someone is determined, but we're looking to make it as hard as possible within the constraints.
Encryption is built into the .NET configuration system. You can encrypt chunks of your app/web.config file, including where you store your private key.
http://www.dotnetprofessional.com/blog/post/2008/03/03/Encrypt-sections-of-WebConfig-or-AppConfig.aspx
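The same thing can be done programmatically; a sketch using the built-in DPAPI protected-configuration provider (the section name and the timing are up to you; this would typically run once, e.g. at install time):

```csharp
using System;
using System.Configuration;  // reference System.Configuration.dll

class ProtectConfig
{
    static void Main()
    {
        // Open this executable's app.config and encrypt its appSettings
        // section in place using Windows DPAPI. After this runs, the section
        // is stored encrypted on disk but still reads back transparently
        // through ConfigurationManager.AppSettings.
        Configuration config =
            ConfigurationManager.OpenExeConfiguration(ConfigurationUserLevel.None);

        ConfigurationSection section = config.GetSection("appSettings");
        if (section != null && !section.SectionInformation.IsProtected)
        {
            section.SectionInformation.ProtectSection(
                "DataProtectionConfigurationProvider");
            config.Save(ConfigurationSaveMode.Modified);
        }
        Console.WriteLine(section.SectionInformation.IsProtected);
    }
}
```

Note that DPAPI machine-scoped protection means any process on the same machine with the right account can still decrypt it; it protects against offline copying of the file, not against code running as the same user.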
Speaking in obfuscation terminology, what you are after is called constant hiding, i.e. a means by which you transform a constant into, say, a number of functions and calculations that are executed at runtime to re-materialize said constant.
This still falls within the domain of obfuscation, however, and is susceptible to either code extraction, where the attacker simply maps out the code relevant to this constant, and runs it in a separate application to retrieve the value; or dumping the application's memory at the right point in order to scan it for the desired value.
There is another, slightly more advanced method of hiding crypto keys in particular, called White-box cryptography, which employs key-less ciphers through essentially generating a cipher function from a given key, baking them together. As the name suggests, this method has been devised to be resilient even in a white-box attack scenario (the attacker has access to the bytecode and is able to inspect and manipulate the executable at runtime).
These are both quite advanced methods of achieving security through obscurity, and it might be worth considering alternative models which do not force you to do this in the first place.
If somebody can just attach a debugger to your program, there is absolutely nothing you can do. They won't have to figure out your config, disassemble your app, etc. All they have to do is run the app - watch it use the key - bingo.
Obfuscation is of no help under those conditions.
The best defense is to use hardware to protect the key - which will do the crypto but not give out the key itself (and is sometimes hardened against attacks such as probing the wires, exposing the memory to low temperatures/radiation/other novel stuff). IBM do some appropriate stuff (google IBM-4764) but it's not cheap.
I'd like to bind a configuration file to my executable. I'd like to do this by storing an MD5 hash of the file inside the executable. This should keep anyone but the executable from modifying the file.
Essentially if someone modifies this file outside of the program the program should fail to load it again.
EDIT: The program processes credit card information, so being able to change the configuration in any way is a potential security risk. This software will be distributed to a large number of clients. Ideally each client should have a configuration that is tied directly to the executable. This will hopefully keep an attacker from being able to put a fake configuration in place.
The configuration still needs to be editable though so compiling an individual copy for each customer is not an option.
It's important that this be dynamic, so that I can re-tie the hash to the configuration file as the configuration changes.
A better solution is to store the MD5 in the configuration file itself. But instead of taking the MD5 of just the configuration file, also include a secret "key" value, like a fixed GUID, in the hash:
write(MD5(SecretKey + ConfigFileText));
Then, to verify, you simply strip out that MD5 and rehash the file (including your secret key). If the MD5s match, no one modified it. This prevents someone from modifying the file and re-applying the MD5, since they don't know your secret key.
Keep in mind this is a fairly weak solution (as is the one you are suggesting), as they could easily trace into your program to find the key or where the MD5 is stored.
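A minimal sketch of that keyed-MD5 check (the secret value is a placeholder; note that HMAC, e.g. HMACMD5 or HMACSHA256, is the standard construction for a keyed hash and avoids the length-extension weaknesses of plain concatenation):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class KeyedConfigHash
{
    // Placeholder secret baked into the executable, as the answer describes.
    const string SecretKey = "3f1c0f6e-placeholder-guid";

    public static string Hash(string configText)
    {
        using (MD5 md5 = MD5.Create())
        {
            byte[] digest = md5.ComputeHash(
                Encoding.UTF8.GetBytes(SecretKey + configText));
            return BitConverter.ToString(digest).Replace("-", "");
        }
    }

    static void Main()
    {
        string config = "<config timeout=\"30\"/>";
        string stored = Hash(config);  // written alongside the file

        // On load: recompute over the file contents and compare.
        Console.WriteLine(Hash(config) == stored);                       // True
        Console.WriteLine(Hash("<config timeout=\"9999\"/>") == stored); // False
    }
}
```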
A better solution would be to use a public key system and sign the configuration file. Again that is weak since that would require the private key to be stored on their local machine. Pretty much anything that is contained on their local PC can be bypassed with enough effort.
If you REALLY want to store the information in your executable (which I would discourage), you can try appending it at the end of the EXE. That is usually safe. Modifying executable programs is virus-like behavior, and most operating system security will try to stop you too: if your program is in the Program Files directory, your configuration file is in the Application Data directory, and the user is logged in as a non-administrator (in XP or Vista), then you will be unable to update the EXE.
Update: I don't care if you are using asymmetric encryption, RSA, or quantum cryptography; if you are storing your keys on the user's computer (which you must do unless you route everything through a web service), then the user can find your keys, even if it means inspecting the registers on the CPU at run time. You are only buying yourself a moderate level of security, so stick with something simple. To prevent modification, the solution I suggested is the best. To prevent reading, encrypt the file, and if you are storing your key locally, use AES (Rijndael).
Update: The FixedGUID / SecretKey could alternatively be generated at install time and stored somewhere "secret" in the registry. Or you could generate it every time from the hardware configuration, though then things get more complicated. To allow for moderate levels of hardware change, you could take 6 different hardware signatures and hash your configuration file 6 times, once with each, combining each with a second secret value like the GUID mentioned above (either global or generated at install). When you check, verify each hash separately; as long as 3 out of 6 (or whatever your tolerance is) match, accept it. The next time you write the file, hash it with the current hardware configuration. This lets users slowly swap out hardware over time and end up with a whole new system, which is perhaps a weakness. It all comes down to your tolerance; there are variations based on tighter tolerances.
UPDATE: For a Credit Card system you might want to consider some real security. You should retain the services of a security and cryptography consultant. More information needs to be exchanged. They need to analyze your specific needs and risks.
Also, if you want security with .NET, you need to first start with a really good .NET obfuscator (just Google it). A .NET assembly is way too easy to disassemble to get at the source code and read all your secrets. Not to sound like a broken record, but anything that depends on the security of your user's system is fundamentally flawed from the beginning.
Out of pure curiosity, what's your reasoning for never wanting to load the file if it's been changed?
Why not just keep all of the configuration information compiled in the executable? Why bother with an external file at all?
Edit
I just read your edit about this being a credit card info program. That poses a very interesting challenge.
I would think, for that level of security, some sort of pretty major encryption would be necessary but I don't know anything about handling that sort of thing in such a way that the cryptographic secrets can't just be extracted from the executable.
Is authenticating against some sort of online source a possibility?
I'd suggest you use asymmetric-key encryption for your configuration file, wherever it is stored, inside the executable or not.
If I remember correctly, RSA is one of the variants.
For the explanation of it, see Public-key cryptography on Wikipedia
Store the "reading" key in your executable and keep the "writing" key to yourself, so no one but you can modify the configuration (effectively, you are signing the file with a key only you hold).
This has the advantages of:
No one can modify the configuration unless they have the "writing" key, because any modification will corrupt it entirely; even if they know the "reading" key, it would take ages to compute the other key.
A guarantee against modification.
It's not hard - there are plenty of libraries available these days. There're also a lot of key-generation programs that can generate really, really long keys.
Do some research on how to properly implement them, though.
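A sketch of that sign-and-verify flow with the built-in RSA classes in newer .NET; here the key pair is generated on the fly for illustration, whereas in the scheme above you would keep the private key to yourself and ship only the public parameters with the application:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class SignedConfig
{
    static void Main()
    {
        byte[] config = Encoding.UTF8.GetBytes("<config level=\"high\"/>");

        using (RSA rsa = RSA.Create(2048))
        {
            // You, the publisher, sign the configuration with the private key...
            byte[] signature = rsa.SignData(
                config, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // ...and the shipped application verifies with the public key only.
            RSAParameters publicOnly = rsa.ExportParameters(false);
            using (RSA verifier = RSA.Create())
            {
                verifier.ImportParameters(publicOnly);
                Console.WriteLine(verifier.VerifyData(
                    config, signature, HashAlgorithmName.SHA256,
                    RSASignaturePadding.Pkcs1)); // True

                config[0] ^= 1; // any tampering breaks the signature
                Console.WriteLine(verifier.VerifyData(
                    config, signature, HashAlgorithmName.SHA256,
                    RSASignaturePadding.Pkcs1)); // False
            }
        }
    }
}
```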
Just make a const string that holds the MD5 hash and compile it into your app; your app can then refer to this const string when validating the configuration file.