I am using AES to encrypt and decrypt passwords on a website. The encrypting works just fine, but I have some problems with the decrypting. On the line:
byte[] decrypted = DecryptStringFromBytes_Aes(encrypted, key, iv);
I receive this error: Cannot implicitly convert type 'string' to 'byte[]'. I have tried lots of things, but nothing seems to work.
You can see the rest of the code below.
string original = txtEncrypt.Text;
byte[] key = new byte[] { 3,122,23,189,15,2,55,82,97,17,255,45,1,65,41,200 };
byte[] iv = new byte[16];
Aes myAes = Aes.Create();
byte[] encrypted = EncryptStringToBytes_Aes(original, key, iv);
byte[] decrypted = DecryptStringFromBytes_Aes(encrypted, key, iv);
Sincerely,
Adrian
The sample code that you used returns the ciphertext as a byte array. Modern ciphers, like AES in the CBC mode you're using, operate on bytes, not strings.
So if you need a string, you have to convert the bytes to a string and back again. For this you can use an encoding such as base64: encode to base64 after encryption and decode from base64 before decryption.
If you instead directly interpret the bytes as a string (e.g. UTF-8), you will experience data loss, as not every byte sequence is valid / printable UTF-8.
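A minimal sketch, assuming the MSDN sample's helper signatures (EncryptStringToBytes_Aes returning byte[] and DecryptStringFromBytes_Aes returning a string, which is also why the compiler complains about the byte[] assignment):
    // Encrypt, then store or transmit the ciphertext as a base64 string.
    byte[] encrypted = EncryptStringToBytes_Aes(original, key, iv);
    string stored = Convert.ToBase64String(encrypted);

    // Later: decode the base64 back to the exact same bytes, then decrypt.
    byte[] cipherBytes = Convert.FromBase64String(stored);
    string decrypted = DecryptStringFromBytes_Aes(cipherBytes, key, iv);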
Don't forget to include all information that needs to be shared, such as the IV; the example code conveniently forgets about that.
Note that CBC is not secure for transport security; only use it for data at rest.
We have a C# library that encrypts and decrypts using Rijndael
_algorithm = new RijndaelManaged() { Mode = CipherMode.CBC, Padding = PaddingMode.ISO10126 };
public override byte[] Encrypt(byte[] bytes)
{
    // a new iv must be generated every time
    _algorithm.GenerateIV();
    var iv = _algorithm.IV;

    var memoryStream = new MemoryStream();
    using (var encryptor = _algorithm.CreateEncryptor(_key, iv))
    using (var cryptoStream = new CryptoStream(memoryStream, encryptor, CryptoStreamMode.Write))
    {
        memoryStream.Write(iv, 0, iv.Length);
        cryptoStream.Write(bytes, 0, bytes.Length);
        cryptoStream.FlushFinalBlock();
        return memoryStream.ToArray();
    }
}
There is a corresponding decrypt method in C# that decrypts what is encrypted by the code above. Now a Node application needs to send encrypted data using exactly the same algorithm.
However, I believe that because of the IV, the C# code is not able to decrypt it.
Any ideas?
CryptoJS.AES.encrypt(
    value,
    key,
    {
        mode: CryptoJS.mode.CBC,
        padding: CryptoJS.pad.Iso10126,
    }
);
const decryptedString = CryptoJS.enc.Base64.stringify(result.ciphertext);
The C# library generates a random IV. As that IV is 128 bits in size, you cannot hope to generate an identical one in CryptoJS, and you don't need to: for decryption you simply send the IV together with the ciphertext, and set it explicitly on the decrypting side.
You can do the same in the other direction: generate a random IV on the CryptoJS side, send it together with the ciphertext, and hand it to the C# implementation.
Generally the IV is simply prefixed to the ciphertext. For CBC mode the IV is always exactly one block in size: 128 bits / 16 bytes for AES. Since the size is known, it is easy to split it back off the start of the ciphertext.
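For example, a minimal C# decrypt sketch under that convention (a hypothetical helper; it assumes the first 16 bytes are the IV, `_key` is the shared AES key, and the ISO10126 padding from the question):
    public byte[] Decrypt(byte[] ivAndCiphertext)
    {
        // First 16 bytes are the IV for AES/CBC; the rest is the actual ciphertext.
        byte[] iv = new byte[16];
        Buffer.BlockCopy(ivAndCiphertext, 0, iv, 0, iv.Length);

        using (var aes = Aes.Create()) // CBC is the default mode
        {
            aes.Padding = PaddingMode.ISO10126; // match the encrypting side
            using (var decryptor = aes.CreateDecryptor(_key, iv))
            {
                return decryptor.TransformFinalBlock(
                    ivAndCiphertext, iv.Length, ivAndCiphertext.Length - iv.Length);
            }
        }
    }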
Note that:
using CBC without an HMAC is entirely insecure for transport security: not only can an adversary make you accept invalid plaintext, CBC is also vulnerable to plaintext / padding oracle attacks;
CBC requires an unpredictable IV that is different for each plaintext encrypted under the same key, which generally means generating a random IV;
ISO/IEC 10126 compatible padding is largely deprecated; everybody uses PKCS#7 compatible padding by now, at least for CBC: most other modes don't require padding at all (but .NET has rather poor support for other modes of operation).
I have a client/server setup that consists of a server written in C++ using OpenSSL and a client written in C# using Aes/RSACryptoServiceProvider. I generate an RSA key pair on both sides and send each side the other's public key. When I'm ready to send a message, I generate an AES key/IV and encrypt the message with them, encrypt the AES key (and the IV too? I've tried both encrypting and not encrypting it; both give me the same error, which I will mention in a bit) with the public key of the recipient, and then send the encrypted AES key, the IV and the encrypted message. However, when I try to send from the client to the server, I get an OpenSSL error from EVP_OpenInit that reads "data greater than mod len".
Here is how I generate the data in C#:
var keyPair = new RSACryptoServiceProvider(2048); //server uses 2048 too (added on Edit)
var aes = new AesCryptoServiceProvider();
//OpenSSL uses the EVP_CIPHER* EVP_aes_256_cbc()
aes.Mode = CipherMode.CBC;
aes.Padding = PaddingMode.PKCS7;
aes.KeySize = 256; //bits
aes.GenerateKey();
aes.GenerateIV();
var message = Encoding.Default.GetBytes("test data");
var eMessage = new byte[4096];
using (var stream = new MemoryStream(eMessage))
{
    var encryptor = aes.CreateEncryptor();
    using (var cryptoStream = new CryptoStream(stream, encryptor, CryptoStreamMode.Write))
    {
        await cryptoStream.WriteAsync(message, 0, message.Length);
    }
}
string eMessageString = null;
for (int i = 0; i < eMessage.Length; i++)
{
    if (eMessage[i] == '\0')
    {
        eMessageString = Convert.ToBase64String(eMessage, 0, i - 1);
    }
}
var eKey = Convert.ToBase64String(keyPair.Encrypt(aes.Key, false));
var eIV = Convert.ToBase64String(aes.IV); //may not need to encrypt
I know my C++ implementation works as OpenSSL correctly reads in the client public key and I can encrypt/decrypt data using the EVP_Seal/EVP_Open functions when using a different key generated through OpenSSL on the server. So, I'm not sure what's causing this error, but I think I have an idea. Could it be the way that the key/iv/encrypted message is encoded when I'm sending the data to the server? Or could it be the differences in implementation between OpenSSL and C#? Or maybe something I'm not catching altogether?
EDIT: Here is the requested code for how I use EVP_OpenInit.
BlueSOD::Encryption::DecryptionData BlueSOD::Encryption::EncryptionFactory::Decrypt2(DecryptionWork2&& work)
{
    DecryptionData data;
    EVP_PKEY* privateKey = work.privateKey.get();
    auto encryptionKey = (unsigned char*)work.info.key.c_str();
    auto encryptionIV = (unsigned char*)work.info.iv.c_str();
    EVP_CIPHER_CTX_ptr cipherCtxPtr{ AcquireCipherCtx() };
    EVP_CIPHER_CTX* cipher = cipherCtxPtr.get();
    int status;

    //ERROR HAPPENS HERE
    status = EVP_OpenInit(cipher, m_Cipher, encryptionKey, work.info.key.size(), encryptionIV, privateKey);
    cout << ERR_error_string(ERR_get_error(), nullptr) << endl;
    CheckForError(status, "EVP_OpenInit failed.");

    int bufferLength = work.cipherText.size() + EVP_MAX_BLOCK_LENGTH;
    auto buffer = make_unique<unsigned char[]>(bufferLength);
    auto cipherTemp = (unsigned char*)work.cipherText.c_str();

    status = EVP_OpenUpdate(cipher, buffer.get(), &bufferLength, cipherTemp, work.cipherText.size());
    CheckForError(status, "EVP_OpenUpdate failed.");

    status = EVP_OpenFinal(cipher, buffer.get(), &bufferLength);
    CheckForError(status, "EVP_OpenFinal failed.");

    data.plainText = CreateSecureString(buffer.get(), bufferLength);
    return move(data);
}
Encoding.Default.GetString won't work. The IV, wrapped key and ciphertext are all binary, and (as good as) indistinguishable from random. The encoding may therefore go wrong, as not all bytes map to characters, which means information is lost. Use base64 encoding instead.
Your IV, wrapped key and ciphertext should also be distinguishable from each other. That is not hard: the IV has the block size of the underlying cipher (16 bytes for AES/CBC), the wrapped key has the same size in bytes as the modulus (i.e. the RSA key size), and the ciphertext is simply the rest. In other words, you might as well concatenate them all.
So your hunch was right.
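As an illustration, splitting such a concatenation back apart (hypothetical `received` value; this assumes a 16-byte AES/CBC IV and a 2048-bit RSA key, i.e. a 256-byte wrapped key):
    byte[] blob = Convert.FromBase64String(received);

    const int IvLength = 16;          // AES block size in bytes
    const int WrappedKeyLength = 256; // RSA-2048 modulus size in bytes

    byte[] iv = new byte[IvLength];
    byte[] wrappedKey = new byte[WrappedKeyLength];
    byte[] ciphertext = new byte[blob.Length - IvLength - WrappedKeyLength];

    Buffer.BlockCopy(blob, 0, iv, 0, IvLength);
    Buffer.BlockCopy(blob, IvLength, wrappedKey, 0, WrappedKeyLength);
    Buffer.BlockCopy(blob, IvLength + WrappedKeyLength, ciphertext, 0, ciphertext.Length);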
RSA can only encrypt a limited amount of data: with OAEP the limit is the modulus size minus 2·hLen − 2 bytes (see 7.1 of RSA RFC 2437), e.g. 446 bytes for RSA-4096 with SHA-256, and with the PKCS#1 v1.5 padding that Encrypt(key, false) selects, RSA-2048 tops out at 245 bytes. Either way there is plenty of room for the 16 + 32 bytes of IV and symmetric key. I don't see anywhere that you set the key length for the RSA provider, so it may be failing for some reason to encrypt the AES key.
Can you provide at least the line at which the server code reports the error? What are you passing for the ekl parameter (the length of the encrypted symmetric key) in EVP_OpenInit? Are you Base64-decoding the symmetric key before attempting to decrypt it using RSA on the server?
And for the record, you do not need to encrypt the IV before transmitting it, but doing so has no negative impact (other than computation cost).
Update:
It is always helpful when debugging crypto issues to reduce the number of steps in each statement so you can find exactly where the error occurs. I'd recommend breaking the last few statements of your client code out into individual steps and walking through them (i.e. RSA encryption and Base64 encoding on separate lines).
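For instance, the tail end of the client code might be broken out like this (a sketch using the question's own `aes` and `keyPair` variables, so each intermediate value can be inspected):
    byte[] rawKey = aes.Key;                            // expect 32 bytes for AES-256
    byte[] wrappedKey = keyPair.Encrypt(rawKey, false); // expect 256 bytes for RSA-2048
    string eKey = Convert.ToBase64String(wrappedKey);

    Console.WriteLine(rawKey.Length);     // 32?
    Console.WriteLine(wrappedKey.Length); // 256?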
So you can now compare the following values on client and server and they are byte-for-byte equal (no extra 0x00, etc.)?
Reference  | Client                          | Server
-----------|---------------------------------|--------
A          | keyPair.Encrypt(aes.Key, false) | ek
Base64E(A) | eKey                            | ??
len(A)     | len(A)                          | len(ek)
You mentioned in another comment that you compared the hex-encoded value of the Base64-decoded, encrypted key on both client and server and it was identical? Can you try just using the client and server to encrypt and decrypt an arbitrary plaintext message (well under 245 bytes, to ensure that OAEP or PKCS#1 v1.5 padding does not push it over the limit) with that key pair, to ensure everything is correct?
I'm not particularly familiar with the C# implementation: is there something additional you need to do to replicate EVP_SealInit on the client?
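A sketch of that sanity check, C#-side only (round-tripping through the same RSACryptoServiceProvider first; the cross-platform test would then decrypt on the OpenSSL side instead):
    byte[] probe = Encoding.UTF8.GetBytes("rsa round-trip test"); // well under the 245-byte limit
    byte[] wrapped = keyPair.Encrypt(probe, false);   // PKCS#1 v1.5 padding
    byte[] unwrapped = keyPair.Decrypt(wrapped, false);
    Console.WriteLine(Encoding.UTF8.GetString(unwrapped)); // should print the probe text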
My issue lay in how the message, key, and IV were encoded on the client and decoded on the server. The client encoded them in base64, so they had to be base64-decoded on the server for OpenSSL to understand them (OpenSSL works on raw bytes rather than an encoded representation).
EDIT: There also seems to be a conflict between the C# implementation and the OpenSSL implementation; the C# Aes implementation does not seem to match OpenSSL exactly. I get the correct plain text when decrypting in OpenSSL, but it is followed by a bunch of garbage data, and EVP_OpenFinal fails with a "bad decrypt" error. If anyone knows a way around this, please let me know! I have tried the openssl.net API, but it throws an error if I try to use BIO.
I am currently using the AesManaged class in C# to encrypt plain text. It works fine.
However, it produces the same cipher text each time it encrypts the same piece of data. Is there any way I can tweak this behavior and produce different cipher text for the same piece of data?
I have implemented encryption in SQL Server using the AES_256 algorithm and a certificate. The process closely resembles the one in this post: http://www.codeproject.com/Articles/662187/FIPS-Encryption-Algorithms-and-Implementation-of-A. In that process, each time a plain text is encrypted, a different cipher text is produced.
I want the same effect in the C# code. How can that be achieved?
EDIT:
Here is how I implemented the approach suggested by Yolanda Ruiz:
Encrypt
public static string Encrypt(string plainText)
{
    //Check for valid arguments.
    if (String.IsNullOrEmpty(plainText)) throw new ArgumentNullException("plainText");

    List<byte> encryptedList;

    //Create Aes object
    using (AesManaged aes = new AesManaged())
    {
        aes.Key = Key;
        aes.GenerateIV();
        encryptedList = aes.IV.ToList();
        aes.BlockSize = BlockSize;
        /*Here goes the standard code to encrypt the plain text - refer msdn for that*/
        /*Append the encrypted stream to encryptedList*/
    }
    return encryptedList.ToArray().ToBase64();
}
Decrypt
public static string Decrypt(string cipherText)
{
    //Check for valid arguments.
    if (string.IsNullOrEmpty(cipherText)) throw new ArgumentNullException("cipherText");

    string plainText;
    byte[] cipherTextArray = cipherText.FromBase64();

    //Create Aes object
    using (AesManaged aes = new AesManaged())
    {
        aes.Key = Key;
        aes.BlockSize = BlockSize;
        aes.IV = cipherTextArray.Take(NoOfBytes).ToArray(); //Extract the IV
        cipherTextArray = cipherTextArray.Skip(NoOfBytes).ToArray(); //Extract the actual ciphertext
        /*Here goes the standard code to Decrypt the cipher text - refer msdn for that*/
        /*Assign the decrypted stream output to plainText*/
    }
    return plainText;
}
Unit Test
//Arrange
string plainText = "Sayan";

//Act
string cipherText1 = Crypto.Encrypt(plainText);
string cipherText2 = Crypto.Encrypt(plainText);
string plainText1 = Crypto.Decrypt(cipherText1);
string plainText2 = Crypto.Decrypt(cipherText2);

//Assert
//Check that the cipher text is different every time
Assert.AreNotEqual(cipherText1, cipherText2);
//Check that every plaintext output matches the original
Assert.AreEqual(plainText, plainText1);
Assert.AreEqual(plainText, plainText2);
The way to do that is to use a different initialization vector (IV) for each encryption.
The default mode of operation in AesManaged is CBC. In this mode, when a block of plaintext is encrypted, it is first mixed with the result of encrypting the previous block. As long as the previous ciphertext block is always different, this prevents two identical blocks of plaintext from producing the same ciphertext. But what do we use for the very first block? The initialization vector.
The IV is basically a randomized block that acts as if it were the result of encrypting a hypothetical plaintext block coming before the actual first block of plaintext.
The IV has to be kept around so we can feed it to the decryption method. As it is semantically a ciphertext block, it is usual to prepend it to the actual ciphertext. When decrypting, you first extract that first block of ciphertext (as is, without decrypting it) and use it as the IV to decrypt the subsequent blocks.
The IV is not a secret: an attacker cannot derive the key or the first plaintext block from it. You must never reuse the same IV twice with the same key, though, or you lose the randomization property.
The methods you will want to look at are AesManaged.GenerateIV() and AesManaged.BlockSize (which is in bits; keep that in mind if you use the property to extract the IV bytes from the ciphertext).
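For example (a minimal sketch; note the bits-to-bytes division):
    using (var aes = new AesManaged())
    {
        aes.GenerateIV();                        // fresh random IV for this encryption
        int ivLengthInBytes = aes.BlockSize / 8; // BlockSize is in bits: 128 / 8 = 16
        // Prepend aes.IV to the ciphertext; when decrypting, read the first
        // ivLengthInBytes bytes back out and assign them to aes.IV.
    }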
Encryption algorithms have to be deterministic (otherwise there would be no way of reversing them).
If you want to get different cipher text, you'll have to change the key, the data to be encrypted, or the actual algorithm.
I have a set of documents encrypted with TripleDES coming from a remote system. I need to decode the data in C#, and I have no control over the key or the encryption algorithm. All I have is the key, the mode (CBC), and the data located in a file.
The TripleDESCryptoServiceProvider is easy enough to use, but I can't figure out how to use the Decryptor without an Initialization Vector.
We have a 24-byte (192-bit) key to decrypt with, but nothing else.
string key = "1468697320656E6372797174696F6E206973737265206933";
byte[] keyData = ParseHex(key); // key is OK at 24 bytes
TripleDESCryptoServiceProvider des = new TripleDESCryptoServiceProvider();
des.Mode = CipherMode.CBC;
des.GenerateIV();
var decryptor = des.CreateDecryptor(keyData,null); // des.IV
var encoded = File.ReadAllBytes(@"..\..\..\..\test.tdes");
byte[] output = decryptor.TransformFinalBlock(encoded, 0, encoded.Length);
This fails outright with "Bad Data". If I switch to TransformBlock, the code at least runs but produces gibberish:
byte[] output = new byte[10000];
var count = decryptor.TransformBlock(encoded, 0, encoded.Length, output, 0);
So the questions are:
If I only have a key, is the initialization vector required?
If not, is null the right thing to pass?
What else would I possibly need to set beyond the key and mode?
Why does TransformBlock at least run while TransformFinalBlock fails outright?
Update - found the problem
It turns out the decoding problem was caused, not by the missing Initialization Vector, but by incorrect information from the provider of the encrypted data. The updated working code looks like this:
// Read the test data
byte[] encoded = File.ReadAllBytes(@"..\..\..\..\test.tdes");
// Get the key into a byte array
string key = "1468697320656E6372797174696F6E206973737265206933";
byte[] keyData = ParseHex(key);
TripleDESCryptoServiceProvider des = new TripleDESCryptoServiceProvider();
des.Mode = CipherMode.ECB; // Make sure this is correct!!!
des.Padding = PaddingMode.Zeros; // Make sure this is correct!!!
des.Key = keyData;
var decryptor = des.CreateDecryptor();
byte[] output = decryptor.TransformFinalBlock(encoded, 0, encoded.Length);
string dataString = Encoding.Default.GetString(encoded);
Console.WriteLine(dataString);
Console.WriteLine("\r\n\r\nDecoded:");
string result = Encoding.Default.GetString(output);
Console.WriteLine(result);
Console.Read();
The key in our case was using the proper CipherMode and Padding: fixing the padding made TransformFinalBlock() work without "Bad Data" errors, and fixing the CipherMode made the data decrypt properly.
Moral of the story: in CipherMode.ECB mode, at least, you don't need to provide an initialization vector. If no IV is provided the provider auto-generates one, but decryption still works (at least with ECB, where the IV plays no role).
In the end it's CRUCIAL to make sure you have all the information from the provider that encrypted the data.
Trying to answer each point:
The initialization vector is required in CBC mode. It is not required to be a secret (unlike the key), so it should simply be sent along by the remote system.
Since you need the IV, null is not the right thing to pass.
Padding mode: you need to know which padding mode was used.
TransformFinalBlock probably fails because the padding mode is wrong.
Edit
The difference between ECB (Electronic Code Book) and CBC (Cipher Block Chaining) is illustrated by the standard block diagrams (image omitted here): in ECB each block is encrypted independently with the key, whereas in CBC each plaintext block is first XORed with the previous ciphertext block (or with the IV, for the first block).
As you can see, no IV is used in ECB mode, so even if you provide one it will be ignored.
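To make the difference concrete, a small sketch (hypothetical `keyData` and `ivFromSender` values):
    var des = new TripleDESCryptoServiceProvider();
    des.Padding = PaddingMode.Zeros; // must match whatever the encrypting side used

    // CBC: the exact IV used at encryption time is required for correct decryption.
    des.Mode = CipherMode.CBC;
    var cbcDecryptor = des.CreateDecryptor(keyData, ivFromSender);

    // ECB: there is no chaining, so the IV argument is simply ignored.
    des.Mode = CipherMode.ECB;
    var ecbDecryptor = des.CreateDecryptor(keyData, null);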
I am using AES 256 encryption in GCM mode using a class called AuthenticatedAesCng from this site: CLR security
After writing the plaintext through the crypto stream, I manually concatenate the IV, TAG, and encrypted data, then return that value.
cs is the CryptoStream and ms is the MemoryStream:
// Write through and retrieve encrypted data.
cs.Write(message, 0, message.Length);
cs.FlushFinalBlock();
byte[] cipherText = ms.ToArray();
// Retrieve tag and create array to hold encrypted data.
byte[] authenticationTag = encryptor.GetTag();
byte[] encrypted = new byte[cipherText.Length + aes.IV.Length + authenticationTag.Length];
// Set needed data in byte array.
aes.IV.CopyTo(encrypted, 0);
authenticationTag.CopyTo(encrypted, IV_LENGTH);
cipherText.CopyTo(encrypted, IV_LENGTH + TAG_LENGTH);
// Store encrypted value in base 64.
return Convert.ToBase64String(encrypted);
Is this the correct manner of using the AES cipher in GCM mode? Am I supposed to manually put all these values together, or is it done automatically and I just missed it?
Ciphertext is just the data, but you cannot have GCM ciphertext without the tag: that would defeat the entire purpose of GCM. The tag is normally appended to the ciphertext.
The AAD is optional; its entire purpose is to be sent in the clear while still being authenticated.
The IV is actually a nonce, so it may be computed on both sides. If you use a random nonce or cannot pre-compute it, then it is normal to prefix it to the ciphertext (but you will have to code this explicitly on both sides).
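For comparison, newer .NET (Core 3.0 and later) ships a built-in AesGcm class that makes the tag explicit; here is a sketch reproducing the question's IV-then-tag-then-ciphertext layout (this uses the framework API, not the CLR Security library from the question):
    using System;
    using System.Security.Cryptography;

    static string EncryptGcm(byte[] key, byte[] plaintext)
    {
        byte[] nonce = new byte[12]; // 96-bit nonce, the GCM default
        RandomNumberGenerator.Fill(nonce);
        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[16];   // 128-bit authentication tag

        using (var aes = new AesGcm(key))
            aes.Encrypt(nonce, plaintext, ciphertext, tag);

        // Same wire layout as the question: IV (nonce), then tag, then ciphertext.
        byte[] output = new byte[nonce.Length + tag.Length + ciphertext.Length];
        nonce.CopyTo(output, 0);
        tag.CopyTo(output, nonce.Length);
        ciphertext.CopyTo(output, nonce.Length + tag.Length);
        return Convert.ToBase64String(output);
    }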