I'm trying to use a public key of a user to encrypt a string in a sort of pgp fashion, but I keep getting the error:
bignum routines:BN_mod_inverse:no inverse
I've looked around and cannot find anything specific about what I'm doing wrong, and searching for .NET Core information hasn't turned up anything relevant either.
I'm using the following code:
byte[] publicKey = Encoding.UTF8.GetBytes(key);
RSA rsa = RSA.Create();
RSAParameters RSAKeyInfo = new RSAParameters();
RSAKeyInfo.Modulus = publicKey;
RSAKeyInfo.Exponent = new byte[]{1,0,1};
rsa.ImportParameters(RSAKeyInfo);
var encrypted = rsa.Encrypt(Encoding.UTF8.GetBytes(user.challenge.text), RSAEncryptionPadding.Pkcs1);
It's entirely possible I'm going about this entirely wrong, so any thoughts or suggestions would be great!
Your overall structure (build RSAParameters, call ImportParameters, call Encrypt) is valid, which suggests that your error is in Modulus recovery.
If your Modulus arrives as a string, it's most likely encoded as one of (from most to least likely):
Base64 (Convert.FromBase64String)
Hex (May need a manual parser)
UTF-8
UTF-8 is really unlikely, since the Modulus value can contain bytes whose value is 0 (and other invalid or unexpected UTF-8 sequences). While an even-length hex string can also be validly decoded as Base64, it's extraordinarily unlikely that you'd misinterpret one given two or three different inputs.
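If the string does turn out to be hex, a small manual parser is enough. This is just a sketch (newer runtimes also offer Convert.FromHexString):

// Minimal hex-to-bytes helper; assumes an even-length string of hex digits.
static byte[] HexToBytes(string hex)
{
    byte[] result = new byte[hex.Length / 2];
    for (int i = 0; i < result.Length; i++)
    {
        result[i] = Convert.ToByte(hex.Substring(i * 2, 2), 16);
    }
    return result;
}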
Other noteworthy things:
RSA is IDisposable, so put it in a using statement to ensure its resources are released as soon as you're done with it.
The Exponent is usually 0x010001, but that isn't required. Unless you have a guaranteed constraint on it, you should be serializing it, too.
And if it is guaranteed to always be 0x010001, why make a new one each time? Save it as a static field and make the GC's job easier.
As Maarten said, RSA-ENC-PKCS1 is susceptible to a padding oracle attack, so (especially if your data is over the wire) you should use OAEP.
In the context of .NET, OaepSHA1 has the best support (all inbox providers). OAEP with a SHA-2 algorithm is only supported by RSACng (or the opaque RSA.Create() on Windows).
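Putting that together, a corrected version of the original snippet could look like this. It's a sketch that assumes the key string is Base64-encoded; substitute the hex parser above if it isn't:

byte[] modulus = Convert.FromBase64String(key);   // decode the modulus, don't UTF-8-encode it

RSAParameters rsaKeyInfo = new RSAParameters
{
    Modulus = modulus,
    Exponent = new byte[] { 1, 0, 1 }
};

using (RSA rsa = RSA.Create())
{
    rsa.ImportParameters(rsaKeyInfo);
    byte[] encrypted = rsa.Encrypt(
        Encoding.UTF8.GetBytes(user.challenge.text),
        RSAEncryptionPadding.OaepSHA1);
}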
Related
I'm creating an IAP plugin for a local Android market that I think has a similar API to Google Play.
I built the Android side, and it returns every response from the market to the Unity C# side.
Everything works fine except that I can't figure out how to verify the signature of the response.
I'm new to cryptography and have been searching about this for days.
Please correct me if I'm wrong.
They use a hash algorithm to sign the data and encrypt that hash with the private key.
I must decrypt the signature with the public key and compare the hashes.
This is my public key (Base64):
MIHNMA0GCSqGSIb3DQEBAQUAA4G7ADCBtwKBrwDltnT/AaF3oMt+F3lza5JEvce0GLS1L9v9Z53lQ3LNluGk0eI2ukgWm7tAiIOLQgn11Sb9mW2VWkYTWGnZ1AZtY0GwdUQJUr7u3CWNznE6XH4UCVOVhGDCLnFrG8BcfDelhcfReGZQ3izOefhc4Oq6vZf5PfLwximK+FH27fR6XL8vg3yyK4LSwT764Dfd6H3IGes6EdTx/C3C690jdyMvhi2Q3qBiqfepHzW/jV8CAwEAAQ==
This key is in ASN.1 DER format.
I break it out and find this data:
SEQUENCE
  SEQUENCE
    OBJECT IDENTIFIER 1.2.840.113549.1.1.1 rsaEncryption(PKCS #1)
    NULL
  BIT STRING
    SEQUENCE
      INTEGER 969837669069837851043825067021609343597253227631794160042862620526559…
      INTEGER 65537
As I read on net first INTEGER is Modules and second INTEGER is Exponent
So in C# I write code like this:
var parameter = new RSAParameters
{
Modulus = HexToByteArray(/* "first_INTEGER" */),
Exponent = BitConverter.GetBytes(/* "second_INTEGER" */)
};
The market sends me JSON like this:
{"orderId": "0j8oJgE0Bett-neB", "purchaseToken": "0j8oJgE0Bett-neB", "developerPayload": "payload", "packageName": "com.some.market", "purchaseState": 0, "purchaseTime": 1520676644872, "productId": "card-1"}
The signature is like this:
hTFeQd25PZJ2DhGmXd0eO+C+oBeWsg983I4e5ztXtKAUrOIaNBaqAxHU3vW8acBs1I9fE5cxx/DI/sQGY4QSvpDnSm9aYz3do3joHPOXIVvXjSJfejxwzp9DKMUPd6LrgtxkaGevG+94NuKHFxpCdZlovEPXRJZyEznbASuYLqeW0KjP3jnvvw2O5iNlQRdh98h4Q18bSsaxq9zaRKExFLHkhNf/yO5m84kRB1G8
I've searched for a method to do this, but I don't know which one is right for my case.
My verify code is this:
using (var rsa = new RSACryptoServiceProvider())
{
rsa.ImportParameters(parameter);
var hash = new SHA1Managed();
bool dataOK =
rsa.VerifyData(hash.ComputeHash(Encoding.UTF8.GetBytes(json)), CryptoConfig.MapNameToOID("SHA1"), Encoding.UTF8.GetBytes(signature));
}
How do I correctly convert the signature to byte[] to verify? (With which encoding, or what?)
I've searched a lot, but the more I search the more confused I get.
Am I going about this the wrong way, or using the wrong method, or something else?
Why does the workflow have to be this complicated?
Can anyone help me, please?
Thanks.
OK, I'll answer in order:
They use a hash algorithm to sign the data and encrypt that hash with the private key. I must decrypt the signature with the public key and compare the hashes.
No, that's not correct. You should use a signature verification method, as you're currently doing. Viewing a signature as the encryption of a hash is incorrect; even the latest RSA standards go out of their way to explain this. For RSA the internal padding method is different, and for ECDSA there is no direct encryption/decryption possible using the same scheme.
As I read on net first INTEGER is Modules and second INTEGER is Exponent
Yes, although it is spelled modulus, not modules. And it is the public exponent; there is also a private exponent for the private key. Neither is normally written with capitals.
How do I correctly convert the signature to byte[] to verify? (With which encoding, or what?)
Standard Base64 decoding is already mentioned in the comment section. Note that the key and signature sizes are not common ones (but that's fine in itself).
Why does the workflow have to be this complicated?
Well, somebody has to code it in the end and crypto is hard. But to make it easier for you: the entire ASN.1 structure is called a SubjectPublicKeyInfo structure; if you look on the internet you will find pre-made code to import from such a structure.
To finally verify the signature: make sure that you use the correct signature format (RSA PKCS#1 v1.5 or RSA-PSS) and that you know exactly which binary data is fed to the signature generation function. For instance, the signature could be over the JSON encoded as ASCII or UTF-8, or as UTF-16 LE or BE.
Better ask the creator of the signature.
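For reference, a minimal verification sketch under those assumptions: Base64 for both key and signature, UTF-8 bytes of the JSON as the signed data, RSA PKCS#1 v1.5 with SHA-1, and ImportSubjectPublicKeyInfo (which needs .NET Core 3.0 or later). Confirm every one of these with the market. Here publicKeyBase64 stands for the Base64 key string quoted in the question, and signature / json for the values already used above:

byte[] spki = Convert.FromBase64String(publicKeyBase64); // the whole SubjectPublicKeyInfo blob
byte[] sig = Convert.FromBase64String(signature);        // the signature is Base64, not UTF-8
byte[] data = Encoding.UTF8.GetBytes(json);              // the exact bytes that were signed

using (RSA rsa = RSA.Create())
{
    rsa.ImportSubjectPublicKeyInfo(spki, out _);
    bool ok = rsa.VerifyData(data, sig, HashAlgorithmName.SHA1, RSASignaturePadding.Pkcs1);
}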
I'm quite new to .net and had a question regarding DataProtector.
When using DataProtector.Protect without any configuration, the resulting ciphertext becomes too long for the API I need to pass it to. I was wondering if using the configuration methods (as seen here) would help. I tried the following in the class where I needed to protect the data:
var serviceCollection = new ServiceCollection();
serviceCollection.AddDataProtection()
.UseCustomCryptographicAlgorithms(new ManagedAuthenticatedEncryptionSettings()
{
// a type that subclasses SymmetricAlgorithm
EncryptionAlgorithmType = typeof(Aes),
// specified in bits
EncryptionAlgorithmKeySize = 128,
// a type that subclasses KeyedHashAlgorithm
ValidationAlgorithmType = typeof(HMACSHA256)
});
var services = serviceCollection.BuildServiceProvider();
_protector = services.GetDataProtector("MyClass.v1");
var protect = _protector.Protect(JsonConvert.SerializeObject(myData));
However, even after changing the EncryptionAlgorithmKeySize from the default 256 to the minimum 128, 'protect' still came out the same length, which makes me think that either the configuration isn't being applied or the configuration doesn't affect the output length.
Does anyone know if this is being done the right way or if there is a better way to reduce encryption length?
For example, a simple 9-character string gets encrypted to 134 characters.
Any help is much appreciated, thanks!
DPAPI is meant to secure data-at-rest, not data for transmission.
Ryan Dobbs is correct, above (or below? I can't figure out how Stack Overflow sorts unaccepted answers...): weakening your encryption to attain a smaller payload is a very bad idea. The right way to address this is to secure the connection (TLS/SSL) so you can send the data in plaintext, or (as Ryan suggests) drop a properly encrypted payload somewhere that both sender and receiver can access it.
But to answer your question more directly, the payload size is controlled by the hashing function. The encryption key size only tells you the cryptographic complexity of the encryption algorithm -- how hard the encryption is to break. The part that says HMACSHA256 is an HMAC built on SHA-256, which means it produces a 256-bit output.
MD5 is 128-bit but it's generally insecure (only good for checksums).
The documentation says the key size and hash size must be equivalent, so you can't go down to 128 bits with SHA. The shortest SHA available is the old SHA-1 algorithm (HMACSHA1), which is 160 bits, but the expectation is that anything less than 256 bits will be insecure relatively soon. The SHA-2 family yields HMACSHA256 and HMACSHA512.
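To see why the key size setting doesn't change the output length, note that an HMAC's length is fixed by its hash function, not by the key. A standalone sketch, separate from Data Protection itself:

// HMAC-SHA256 always produces a 32-byte (256-bit) tag, whatever the key size.
byte[] payload = Encoding.UTF8.GetBytes("123456789");  // a 9-character example input

using (var hmac = new HMACSHA256(new byte[16]))         // 128-bit key
{
    byte[] mac = hmac.ComputeHash(payload);
    Console.WriteLine(mac.Length);                      // prints 32
}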
I have spent hundreds of hours researching this subject, and another senior programmer who coded the original project also could not make it work. I have an XML file that contains a SignatureValue, a Certificate (X509Certificate2) and a DigestValue. The SignatureValue given in that XML was created by hashing (SHA1) the concatenated fields (which equal the DigestValue) and then encrypting that hash with the private key. The private key has been taken out of the certificate for privacy, so I only have the public key. Now, no matter how I code around it, I always get a false value back (as in VerifyHash/verifyHashResult is false). Here is the code I am using:
// Need your help please.
static void VerifyHash(string sigVal, string digestVal, System.Security.Cryptography.X509Certificates.X509Certificate2 cert)
{
    byte[] sigValInBytes = Convert.FromBase64String(sigVal);
    try
    {
        using (RSACryptoServiceProvider rsaProviderDecrypt = (RSACryptoServiceProvider)cert.PublicKey.Key)
        {
            // The line below always leads to a result of FALSE no matter how I code it. Here I want to verify
            // the freshly calculated digest value against the hash carried inside the signature value.
            rsaProviderDecrypt.Decrypt(sigValInBytes, false);
            rsaProviderDecrypt.Dispose();
        }
    }
    catch
    {
    }
}
// At the main program I get the certificate from the xml given and call the method above:
main
{
// Code below gets the certificate details from a given xml, details of each variable confirmed to be accurate.
char[] Base64_x509ByteArray;
Base64_x509ByteArray = t.DigitalSignatures.First().X509Data.ToCharArray();
byte[] x509ByteArray;
x509ByteArray = Convert.FromBase64CharArray(Base64_x509ByteArray, 0, Base64_x509ByteArray.Length);
// Here am creating the certificate from the gathered data/certificate:
System.Security.Cryptography.X509Certificates.X509Certificate2 cert = new System.Security.Cryptography.X509Certificates.X509Certificate2(x509ByteArray);
VerifyHash(t.DigitalSignatures.FirstOrDefault().SignatureValue.Trim(), concatenatedFieldValues, cert);
}
Some shots in the dark:
Find the piece that is broken: try doing the entire "encrypt / hash check" process in code without transferring anything over XML. If you can hash a string locally and the hashes match, then the problem is in the XML. Otherwise, the problem is in the cert or the decryptor.
If the problem is on the cert / encryptor side, try hash matching with a local .NET cryptography class (see the sketch after this list). If that fails, the problem is an encryption setting. Otherwise, it is the cert.
BIG shot in the dark: The call to Dispose right after the hash check. It shouldn't matter, but that caused an issue while I was decrypting using the Rijndael algorithm. Best guess was the optimizer was closing the stream early or something weird like that. Moving the constructor out of the using statement and manually calling Dispose after accessing the result fixed that "optimization".
Might try a reversible encryption algorithm. Rijndael is native to .NET, and is reversible. Good for debug and proof-of-concept work. (Note: it uses time as part of the salt, so Rijndael doesn't match hashes, it decrypts. So it's not good for passwords in production environments.)
If the XML is the cause, check the encodings. Encryption is very sensitive to encodings, and XML serializers are finicky beasts to begin with. The strings may look the same but be represented differently, or have extra control characters added. SQL Server nvarchars are UCS-2, varchars are ISO-8859-1, C# strings are UTF-16 internally, etc. It's easy for encodings to mismatch, and an encoding change would easily cause this. Try converting the original value to UTF-16 before inserting it into the XML, and set the XML declaration encoding to utf-16. Just to be safe.
Note about NotePad: if you have opened the Xml in Notepad to take a quick look or edit, and saved it, there are probably extra "end of line" characters on your strings now. If you did the same in Word... oh my... Might want to try an original copy.
Failing that, try generating new encrypted values and see if they match.
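For the first two shots, a local round trip (no XML involved) might look something like this. It's only a sketch; the key, data and hash algorithm are stand-ins for whatever the real certificate and XML actually use, and concatenatedFieldValues is the variable from the question:

// Sign and verify locally to rule the certificate/decryptor side in or out.
byte[] data = Encoding.UTF8.GetBytes(concatenatedFieldValues);

using (var rsa = new RSACryptoServiceProvider(2048))
{
    byte[] sig = rsa.SignData(data, "SHA1");
    bool ok = rsa.VerifyData(data, "SHA1", sig);
    Console.WriteLine(ok);  // if this is True but the XML path still fails, suspect the XML/encoding side
}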
X9ECParameters curve = NistNamedCurves.GetByName("P-521");
ECDomainParameters ecparam = new ECDomainParameters(curve.Curve, curve.G, curve.N, curve.H, curve.GetSeed());
ECKeyPairGenerator generator = new ECKeyPairGenerator();
generator.Init(new ECKeyGenerationParameters(ecparam, new SecureRandom()));
AsymmetricCipherKeyPair ackp1 = generator.GenerateKeyPair();
AsymmetricCipherKeyPair ackp2 = generator.GenerateKeyPair();
then,
ECDHWithKdfBasicAgreement agreement = new ECDHWithKdfBasicAgreement("2.16.840.1.101.3.4.42", new ECDHKekGenerator(DigestUtilities.GetDigest("SHA256")));
agreement.Init(ackp1.Private);
BigInteger agInt = agreement.CalculateAgreement(ackp2.Public);
byte[] aeskey = agInt.ToByteArrayUnsigned();
This goes through without generating any errors and I verified that the "aeskey" is the same when I swap in the other pair of public/private keys.
I found zero examples of this kind of usage on Google.
The code seems correct to me, but having to provide the DER OID for AES256 (instead of the string "AES256", which bombs in CalculateAgreement) makes me suspicious that I am doing something wrong.
This was reposted from this question on crypto.stackexchange.
You seem to be heading in the right direction, though I'm not sure your OID is correct. It looks very suspicious to me; a quick internet search did not turn up any expected results.
According to RFC 2631:
algorithm is the ASN.1 algorithm OID of the CEK wrapping algorithm with which this KEK will be used. Note that this is NOT an AlgorithmIdentifier, but simply the OBJECT IDENTIFIER. No parameters are used.
So the use of an OID is correct, but the OID itself may not be. I would expect an OID that indicates e.g. AES in CBC or GCM mode. This won't show up if you use an invalid OID on both sides, of course; it is only used to generate your key, not when you actually use it.
Note that the Bouncy Castle code seems to be vulnerable to a bug that was also in the Java DH code: a BigInteger is always encoded in the minimal number of bytes. However, the key generated by any normal key agreement has a specific number of bytes, including initial 00-valued bytes. This means that just calling BigInteger.ToByteArray will generate the wrong number of bytes (or an illegal value) roughly once every 256 times, as leading zeros are lost during the conversion. Again, this won't make any difference while testing against identical code on the same system, but DH performed against other systems will fail now and then. (I reported this to Bouncy Castle for Java; it was confirmed and then fixed in 1.50.)
ECDHWithKdfBasicAgreement is a little awkward since it's a port of something that only exists in the JCE parts of the Java build. As @owlstead points out, you need to deal with the BigInteger/byte[] conversion. In this case, with the latest code, you can use:
int keyLen = GeneratorUtilities.GetDefaultKeySize(algorithm) / 8;
byte[] key = BigIntegers.AsUnsignedByteArray(keyLen, agInt);
or of course, just pad it out to the size you know you need. I think the AES256 thing is fixed in the latest code too. The code is now on GitHub (https://github.com/bcgit/bc-csharp), but a new beta build of the C# version is (finally) a mere day or two away as well.
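Putting the two answers together, the agreement code from the question combined with the fixed-length conversion might look like this. It's only a sketch: wrapAlgorithmOid stands for whatever CEK-wrap OID both sides have agreed on, and the key pairs ackp1/ackp2 are generated as in the question:

ECDHWithKdfBasicAgreement agreement = new ECDHWithKdfBasicAgreement(
    wrapAlgorithmOid, new ECDHKekGenerator(DigestUtilities.GetDigest("SHA256")));
agreement.Init(ackp1.Private);
BigInteger agInt = agreement.CalculateAgreement(ackp2.Public);

// Pad to the full 32-byte AES-256 key length rather than relying on BigInteger's
// minimal encoding, so leading zero bytes are not silently dropped.
byte[] aesKey = BigIntegers.AsUnsignedByteArray(32, agInt);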
Is there any way to perform private key encryption in C#?
I know about the standard RSACryptoServiceProvider in System.Security.Cryptography, but these classes provide only public key encryption and private key decryption. They also provide digital signature functionality, which internally uses private key encryption, but there aren't any publicly accessible functions to perform private key encryption and public key decryption.
I've found this article on CodeProject, which is a very good starting point for performing this kind of encryption; however, I was looking for some ready-to-use code, as the code in the article can hardly encrypt arbitrarily long byte arrays containing random values (that means any values, including zeroes).
Do you know some good components (preferably free) to perform private key encryption?
I use .NET 3.5.
Note: I know this is generally considered a bad way of using asymmetric encryption (encrypting with the private key and decrypting with the public key), but I just need to use it that way.
Additional Explanation
Consider you have
var bytes = new byte[30] { /* ... */ };
and you want to use 2048-bit RSA to ensure no one has changed anything in this array.
Normally, you would use a digital signature (e.g. RIPEMD-160 based), which you then attach to the original bytes and send over to the receiver.
So you have 30 bytes of original data plus an additional 256 bytes of digital signature (because it is 2048-bit RSA), which is 286 bytes overall. However, only 160 bits of those 256 bytes are actually the hash, so exactly 1888 bits (236 bytes) are unused.
So, my idea was this:
Take the 30 bytes of original data, attach the hash to it (20 bytes), and then encrypt these 50 bytes. You get a 256-byte message, which is much shorter than 286 bytes, because "you were able to push the actual data inside the digital signature".
ECDSA Resources
MSDN
Eggheadcafe.com
c-plusplus.de
MSDN Blog
Wiki
DSA Resources
CodeProject
MSDN 1
MSDN 2
MSDN 3
Final Solution
If anyone is interested in how I've solved this problem: I'm going to use 1024-bit DSA and SHA-1, which is widely supported on many different versions of Windows (Windows 2000 and newer), the security is good enough (I'm not signing orders, I just need to ensure that some kid can't crack the signature on his iPhone :-D), and the signature is only 40 bytes long.
What you are trying to design is known as a "Signature scheme with message recovery".
Designing a new signature scheme is hard. Designing a new signature scheme with message recovery is harder. I don't know all the details about your design, but there is a good chance that it is susceptible to a chosen message attack.
One proposal for signature schemes with message recovery is RSA PSS-R. Unfortunately, this proposal is covered by a patent.
The IEEE P1363 standardization group once discussed adding signature schemes with message recovery. I'm not sure about the current state of that effort, but it might be worth checking out.
Your public key is a subset of your private key. You can use your private key as a public key, since only the components of the full key that are required will be used.
In .NET, both your private and public keys are stored in the RSAParameters struct. The struct contains fields for:
D
DP
DQ
Exponent
InverseQ
Modulus
P
Q
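A quick way to see the split is to export the parameters both ways (sketch only):

using (var rsa = new RSACryptoServiceProvider(2048))
{
    // Public half: only Modulus and Exponent are filled in, the rest stay null.
    RSAParameters publicOnly = rsa.ExportParameters(false);

    // Full private key: D, P, Q, DP, DQ and InverseQ are populated as well.
    RSAParameters full = rsa.ExportParameters(true);
}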
If you're at the point where the data is so small that the digital signature is huge in comparison, then the signature is mostly overhead. The solution isn't to roll your own algorithm, but to cut down what's there. You definitely don't want to try to combine a key with the hash in an amateurish way: that has been broken already, which is why we have HMACs.
So here's the basic idea (a rough sketch follows the list):
Create a session key using a cryptographically strong RNG.
Transmit it via PKE.
Use the session key to generate an HMAC-SHA1 (or HMAC-RIPEMD160, or whatever).
If the size of the hash is absurdly large for the given data, cut it in half by XORing the top with the bottom. Repeat as needed.
Send the data and the (possibly cut-down) hash.
The receiver uses the data and the session key to regenerate the hash and then compares it with the one transmitted (possibly after first cutting it down.)
Change session keys often.
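A minimal sketch of steps 3-5, assuming the session key from step 2 is already in hand as sessionKey and the payload as data:

// Step 3: MAC the data with the shared session key.
byte[] mac;
using (var hmac = new HMACSHA1(sessionKey))
{
    mac = hmac.ComputeHash(data);
}

// Step 4 (optional): fold the 20-byte tag in half by XORing the top half with the bottom half.
byte[] folded = new byte[mac.Length / 2];
for (int i = 0; i < folded.Length; i++)
{
    folded[i] = (byte)(mac[i] ^ mac[i + folded.Length]);
}

// Step 5: send the data plus 'folded'; the receiver repeats the same computation
// with its copy of the session key and compares the results.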
This is a compromise between the insanity of rolling your own system and using an ill-fitting one.
I'm wide open to constructive criticism...
I get it now, after reading the comments.
The answer is: don't do it.
Cryptographic signature algorithms are not algorithms from which you can pick and choose - or modify - steps. In particular, supposing a signature sig looks something like encrypt(hash), orig + sig is not the same as encrypt(orig + hash). Further, even outdated signature schemes like RSA PKCS#1 v1.5 are not as simple as encrypt(hash) in the first place.
A technique like the one you describe sacrifices security for the sake of cleverness. If you don't have the bandwidth for a 256 byte signature, then you need one of:
a different algorithm,
more bandwidth, or
a smaller key.
And if you go with (1), please be sure it's not an algorithm you made up! The simple fact is that crypto is hard.