Decrypting AES cipher text - c#

Consider the following test:
[Test]
public void TestAes256EcbPkcs7Stream()
{
// 504 bytes of plain text
string inputString = new string('z', 504);
var inputBytes = Encoding.UTF8.GetBytes(inputString);
byte[] key = {
0, 0, 0, 0, 0, 0, 0, 0,
1, 0, 0, 0, 0, 0, 0, 0,
2, 0, 0, 0, 0, 0, 0, 0,
3, 0, 0, 0, 0, 0, 0, 0
};
var rij = new RijndaelManaged
{
BlockSize = 256, // 256 bits == 32 bytes
Key = key,
IV = key, // just for test
Mode = CipherMode.ECB,
Padding = PaddingMode.PKCS7
};
var enc = rij.CreateEncryptor();
var encBytes = enc.TransformFinalBlock(inputBytes, 0, inputBytes.Length);
Assert.AreEqual(512, encBytes.Length);
var dec = rij.CreateDecryptor();
byte[] decBytes = new byte[inputBytes.Length];
int decPos = 0;
using (var cipherMs = new MemoryStream(encBytes))
{
var buf = new byte[32];
// process all blocks except the last one
while (cipherMs.Read(buf, 0, buf.Length)==buf.Length &&
cipherMs.Length!=cipherMs.Position)
{
for (int w = 0; w!=buf.Length;)
{
w += dec.TransformBlock(buf, 0, buf.Length, decBytes, decPos);
decPos += w;
}
}
// ensure that we read all blocks
Assert.IsTrue(cipherMs.Length==cipherMs.Position);
// process the last block
var tailBytes = dec.TransformFinalBlock(buf, 0, buf.Length);
// here decPos==480, that means 480 bytes were written to decBytes
// and 504-480 = 24 bytes come from TransformFinalBlock
Assert.AreEqual(24, tailBytes.Length); // <- fail, because the actual length is 56
Buffer.BlockCopy(tailBytes, 0, decBytes, decPos, tailBytes.Length);
}
Assert.AreEqual(inputBytes, decBytes);
}
For some reason I get a 56-byte final block instead of a 24-byte one.
I suppose TransformBlock/TransformFinalBlock should be used in some other way, but unfortunately the MSDN docs don't explain much about these methods.
Any thoughts?

Ok, here's the thing:
When TransformBlock is called the first time, it copies the last block of the input buffer to the so-called depad buffer, then transforms and writes the remaining blocks to the output buffer. During subsequent calls, it first transforms the data from the depad buffer and writes it to the output buffer, then copies the last block of the input buffer to the depad buffer again and writes the transformed remaining blocks to the output, just as the first time.
TL;DR: TransformBlock caches the last block of input data so when TransformFinalBlock is called, it can grab the last block and remove padding. Without caching, the last block may be handled by TransformBlock. In that case, padding wouldn't be removed.
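In practice you rarely need to drive TransformBlock yourself: a CryptoStream wrapped around the decryptor does the same depad-buffer bookkeeping internally. A minimal sketch (my own, not from the original test), assuming the same `rij` setup and `encBytes` ciphertext as above:

```csharp
using System.IO;
using System.Security.Cryptography;

static byte[] Decrypt(SymmetricAlgorithm rij, byte[] encBytes)
{
    using (var dec = rij.CreateDecryptor())
    using (var cipherMs = new MemoryStream(encBytes))
    using (var cryptoStream = new CryptoStream(cipherMs, dec, CryptoStreamMode.Read))
    using (var plainMs = new MemoryStream())
    {
        // CryptoStream reads block by block, caches the trailing block itself,
        // and strips the PKCS7 padding when the source stream ends.
        cryptoStream.CopyTo(plainMs);
        return plainMs.ToArray(); // 504 bytes for the 512-byte ciphertext above
    }
}
```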

Converting HEX to (extended) ASCII 0-255 with NO UTF for DOS C#

Firstly, I don't want to be 'That Guy' when I ask this long question; I know it's been asked plenty of times in different ways, but I'm having significant problems getting a date format to store in a string correctly.
Some minor background.
I am using the DOS FileTime Date format that needs to be stored in an 8 character HEX format - as seen here: https://doubleblak.com/blogPosts.php?id=7
In short, the time and date are captured, then arranged in binary bits, and then converted to HEX.
What I need to be able to do now is store those HEX values as a string and pass them to TagLib# to write a custom APE tag in an MP3 file. Easier said than done...
Writing the custom tags is easy, as it's basically just a matter of this:
TagLib.File file = TagLib.File.Create(filename);
TagLib.Ape.Tag ape_tag = (TagLib.Ape.Tag)file.GetTag(TagLib.TagTypes.Ape, true);
// Write - for my example
/* declarations:
public void SetValue(string key, string value);
public void SetValue(string key, uint number, uint count);
public void SetValue(string key, string[] value);
*/
ape_tag.SetValue("XLastPlayed", history );
So, on to the actual problem:
After the conversion of the date to the correct HEX Values, I get the following result:
928C9D51
However, to make this work and store it correctly, I need to convert it to ASCII values, so that it can then be stored by TagLibSharp.
If I convert this to ASCII, I get the following (which is wrong), as it should be only 4 ASCII characters long, even if they are unprintable or sit in the > 127 character range.
"\u0092\u008c\u009dQ"
You can see in this image the extra HEX values that have been stored, which is incorrect.
This is a sample of the code I've been trying to use (in various forms) to get this to work.
string FirstHistory = "7D8C9D51";
String test1 = "";
for (int i = 0; i < FirstHistory.Length; i += 2)
{
string hs = FirstHistory.Substring(i, 2);
var enc = Encoding.GetEncoding("iso-8859-1"); //.ASCII;// .GetEncoding(437);
var bytes1 = enc.GetBytes(string.Format("{0:x1}", Convert.ToChar(Convert.ToUInt16(hs, 16))));
string unicodeString = enc.GetString(bytes1);
Console.WriteLine(unicodeString);
test1 = test1 + unicodeString;
}
// needs to be "00 00 00 21" for the standard date array for this file format.
byte[] bytesArray = { 0, 0, 0, 33 }; // A byte array containing non-printable characters
string s1 = "";
string history = "";
// Basically what the history will look like
// "???!???!???!???!???!???!???!???!???!???!???!???!???!???!???!???!???!"
for (int i =0; i < 18; i++)
{
if(i==0) {
history = test1; // Write the first value.
}
s1 = Encoding.UTF8.GetString(bytesArray); // encoding on this string won't affect the array date values
history = history + s1;
}
ape_tag.SetValue("XLastPlayed", history );
I am aware there are multiple encodings, and I've basically tried all that I can, and have read things, but I'm not getting anywhere.
Sometimes I think I've got it, but then when I look at the file I'm saving, it slips in a "C2" HEX value when it shouldn't, and this is the UTF-8 encoding breaking everything. I've included an image of what it should be without these C2 HEX values, and you can actually see the DOS time and date appear correctly in the HxD hex viewer.
I've tried various encodings such as 437, iso-8859-1, and ASCII, and different methods such as using StringBuilder, char, bytes, etc. Sometimes I get a date and time stamp where the values are correct and the HEX values don't spill into the extended ASCII ranges, but then I run it again and I'm back to square one. It ALWAYS inserts those extended values as UTF-8 entries and breaks, regardless of what I do.
I'm sure there's not a bug in VS, but I'm running Microsoft Visual Studio Community 2019, Version 16.8.2 if that adds to the case.
I can not seem to find a way around this. Does anyone have any thoughts on this?
Thanks in advance.
*** UPDATE ***
This is the update, thanks to @xanatos:
public static byte[] ConvertHexStringToByteArray(string str)
{
Dictionary<string, byte> hexindex = new Dictionary<string, byte>();
for (int i = 0; i <= 255; i++)
hexindex.Add(i.ToString("X2"), (byte)i);
List<byte> hexres = new List<byte>();
for (int i = 0; i < str.Length; i += 2)
hexres.Add(hexindex[str.Substring(i, 2)]);
return hexres.ToArray();
}
string FirstHistory = "7D8C9D51";
string s1 = "";
string history = "";
byte[] bytes = { 0, 0, 33, 0 }; // A byte array containing non-ASCII (non-printable) bytes
for (int i =0; i < 18; i++)
{
s1 = Encoding.UTF8.GetString(bytes); // ???
history = history + s1;
}
var theArray_SO = ConvertHexStringToByteArray(FirstHistory);
ape_tag.SetItem(new TagLib.Ape.Item("XLastPlayed", (new TagLib.ByteVector(theArray_SO)) + history));
*** UPDATE 2 - 30th Jan 2021 ***
After editing other values and resaving them, I ran into some trouble. It seems there can be data corruption with TagLib and custom APE tags, specifically for this ByteVector data. If you just use the save method when editing other custom values, it's not a problem, but if you have custom tags containing ByteVector values, you will most probably run into trouble. This is what I still used for saving the files:
TagLib.File file = TagLib.File.Create(filename);
// changes
file.Save();
However, to overcome this data corruption, I first read (searched) the file as a FileStream to locate the value I needed, then copied the 72 bytes following the found value into a new byte array, and saved that back to the file.
I discovered that reading ByteVector data in through a string failed spectacularly, with results all over the place.
TagLib.Ape.Item item_Duration = ape_tag.GetItem("XLastScheduled");
While this can probably be rewritten a thousand ways, here's my code that works.
int foundlocation = 0;
int loop1 = 0;
byte[] sevenItems = new byte[80]; // 80 bytes, zero-initialized
string match = "XLastScheduled";
byte[] matchBytes = Encoding.ASCII.GetBytes(match);
{
using (var fs = new FileStream(filename, FileMode.Open))
{
int i = 0;
int readByte;
while ((readByte = fs.ReadByte()) != -1)
{
if (foundlocation == 0)
{
if (matchBytes[i] == readByte)
{
i++;
}
else
{
i = 0;
}
}
if (i == matchBytes.Length)
{
//Console.WriteLine("It found between {0} and {1}.", fs.Position - matchBytes.Length, fs.Position);
// set to true.
foundlocation = 1;
}
if (foundlocation==1)
{
//if (loop1 > 1)
{
// Start adding it at 2 bytes after it's found.
sevenItems[loop1] = (byte)readByte;
}
loop1++;
if(loop1 > 79)
{
fs.Close();
Console.WriteLine("Found the XLastScheduled data");
// 72/4 = 18 date/times
break;
}
}
// Then, I can save those values back as a vector byte array, instead of a string - hopefully...
}
fs.Close();
}
}
byte[] dst = new byte[sevenItems.Length - 8];
Array.Copy(sevenItems, 2, dst, 0, dst.Length);
TagLib.File file = TagLib.File.Create(filename);
// Get the APEv2 tag if it exists.
TagLib.Ape.Tag ape_tag = (TagLib.Ape.Tag)file.GetTag(TagLib.TagTypes.Ape, true);
// Save the new byteVector.
ape_tag.SetItem(new TagLib.Ape.Item("XLastScheduled", (new TagLib.ByteVector(dst))));
Console.WriteLine("XLastScheduled: set" );
There is another method for binary data:
var bytes = new byte[4] { 0xFF, 0, 0, 0xFF };
ape_tag.SetItem(new TagLib.Ape.Item("XLastPlayed", new ByteVector(bytes)));
It's unclear whether you need methods to convert to/from DOS FileTime:
public static uint ToDosFileTimeDate(DateTime dt)
{
ushort date = (ushort)(((dt.Year - 1980) << 9) | (dt.Month << 5) | (dt.Day));
ushort time = (ushort)((dt.Hour << 11) | (dt.Minute << 5) | (dt.Second >> 1));
uint dateTime = ((uint)date << 16) | time;
return dateTime;
}
public static DateTime FromDosFileTimeDate(uint ui)
{
ushort date = (ushort)(ui >> 16);
ushort time = (ushort)(ui & 0xFFFF);
var year = (date >> 9) + 1980;
var month = (date >> 5) & 0xF;
var day = date & 0x1F;
var hour = time >> 11;
var minute = (time >> 5) & 0x3F;
var second = (time & 0x1F) << 1;
return new DateTime(year, month, day, hour, minute, second, DateTimeKind.Local);
}
and to convert between the uint and a 4-byte array there are:
uint ui = BitConverter.ToUInt32(bytes);
and
byte[] bytes = BitConverter.GetBytes(ui);
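As a sanity check (my own sketch, not from the original post), the two helpers round-trip cleanly as long as the seconds value is even, since DOS time has 2-second resolution:

```csharp
using System;

// Round-trip a DateTime through the DOS FileTime helpers above
// and pack the result into the 4-byte value stored in the tag.
var dt = new DateTime(2021, 1, 30, 12, 34, 56, DateTimeKind.Local);
uint dos = ToDosFileTimeDate(dt);            // 0x523E645C for this date/time
byte[] bytes = BitConverter.GetBytes(dos);   // 4 bytes, native (little-endian) order
DateTime back = FromDosFileTimeDate(BitConverter.ToUInt32(bytes, 0));
// back == dt, because 56 seconds is even and survives the >> 1 / << 1 round trip
```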

How to convert from UInt32 array to byte array?

This question, only vice versa. For now I've got this:
UInt32[] target;
byte[] decoded = new byte[target.Length * 2];
Buffer.BlockCopy(target, 0, decoded, 0, target.Length);
This doesn't work; I get an array filled with 0x00.
I would recommend something like the following:
UInt32[] target;
//Assignments
byte[] decoded = new byte[target.Length * sizeof(uint)];
Buffer.BlockCopy(target, 0, decoded, 0, decoded.Length);
See code:
uint[] target = new uint[] { 1, 2, 3 };
//Assignments
byte[] decoded = new byte[target.Length * sizeof(uint)];
Buffer.BlockCopy(target, 0, decoded, 0, decoded.Length);
for (int i = 0; i < decoded.Length; i++)
{
Console.WriteLine(decoded[i]);
}
Console.ReadKey();
Also see:
Size of values
Int Array to Byte Array
BlockCopy MSDN
You can use the BitConverter.GetBytes method for converting a uint to bytes.
Try this code. It works for me.
UInt32[] target = new UInt32[]{1,2,3};
byte[] decoded = new byte[target.Length * sizeof(UInt32)];
Buffer.BlockCopy(target, 0, decoded, 0, target.Length*sizeof(UInt32));
foreach(byte b in decoded)
{
Console.WriteLine( b);
}
You need to multiply by 4 to create your byte array, since a UInt32 is 4 bytes (32 bits). Alternatively, use BitConverter to fill a list of bytes; later you can create an array out of it if you need one.
UInt32[] target = new UInt32[] { 1, 2, 3 };
byte[] decoded = new byte[target.Length * 4]; //not required now
List<byte> listOfBytes = new List<byte>();
foreach (var item in target)
{
listOfBytes.AddRange(BitConverter.GetBytes(item));
}
If you need array then:
byte[] decoded = listOfBytes.ToArray();
Your code has a few errors:
UInt32[] target = new uint[] { 1, 2, 3, 4 };
// Error 1:
// You had 2 instead of 4. Each UInt32 is actually 4 bytes.
byte[] decoded = new byte[target.Length * 4];
// Error 2:
Buffer.BlockCopy(
src: target,
srcOffset: 0,
dst: decoded,
dstOffset: 0,
count: decoded.Length // You had target.Length. You want the length in bytes.
);
This should yield what you're expecting.
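One caveat worth adding: both Buffer.BlockCopy and BitConverter use the machine's native byte order (little-endian on x86/x64). If you need a defined byte order, a sketch using BinaryPrimitives (available since .NET Core 2.1 / the System.Memory package):

```csharp
using System;
using System.Buffers.Binary;

uint[] target = { 1, 2, 3 };
byte[] decoded = new byte[target.Length * sizeof(uint)];
for (int i = 0; i < target.Length; i++)
{
    // Write each value big-endian; use WriteUInt32LittleEndian for the reverse.
    BinaryPrimitives.WriteUInt32BigEndian(
        decoded.AsSpan(i * sizeof(uint), sizeof(uint)), target[i]);
}
// decoded: 00 00 00 01 00 00 00 02 00 00 00 03
```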

CNG AES -> C# AES

I want to make a program in C# that can open KeePass 1.x kdb files. I downloaded the sources and am trying to port the password-database reading functionality. The database contents are encrypted. The encryption key is obtained the following way:
1. The user enters a password;
2. The SHA256 hash of the password is calculated and split into two 128-bit halves;
3. Several rounds of AES are applied to each half of the hash, using a key from the database header;
4. The halves are concatenated back together;
5. The result is salted with a salt from the database header;
6. The SHA256 hash of the step-5 result is calculated. That is the encryption key.
I'm stuck on step 3. KeePass uses CNG for AES. Simplified source (for one half of the hash; the other half has the same applied to it):
BCRYPT_ALG_HANDLE hAes = NULL;
BCRYPT_KEY_HANDLE hKey = NULL;
BYTE pbKey32[32] = <encryption key>;
BYTE pbData16[16] = <half of hash from step 2>;
BCryptOpenAlgorithmProvider(&hAes, BCRYPT_AES_ALGORITHM, NULL, 0);
DWORD dwKeyObjLen = 0;
ULONG uResult = 0;
BCryptGetProperty(hAes, BCRYPT_OBJECT_LENGTH, (PUCHAR)&dwKeyObjLen, sizeof(DWORD), &uResult, 0);
BCryptSetProperty(hAes, BCRYPT_CHAINING_MODE, (PUCHAR)BCRYPT_CHAIN_MODE_ECB, static_cast<ULONG>((wcslen(BCRYPT_CHAIN_MODE_ECB) + 1) * sizeof(wchar_t)), 0);
BCRYPT_KEY_DATA_BLOB_32 keyBlob;
ZeroMemory(&keyBlob, sizeof(BCRYPT_KEY_DATA_BLOB_32));
keyBlob.dwMagic = BCRYPT_KEY_DATA_BLOB_MAGIC;
keyBlob.dwVersion = BCRYPT_KEY_DATA_BLOB_VERSION1;
keyBlob.cbKeyData = 32;
memcpy(keyBlob.pbData, pbKey32, 32);
pKeyObj = new UCHAR[dwKeyObjLen];
BCryptImportKey(hAes, NULL, BCRYPT_KEY_DATA_BLOB, &hKey, pKeyObj, dwKeyObjLen, (PUCHAR)&keyBlob, sizeof(BCRYPT_KEY_DATA_BLOB_32), 0);
for (int i = 0; i < rounds; ++i)
{
BCryptEncrypt(hKey, pbData16, 16, NULL, NULL, 0, pbData16, 16, &uResult, 0);
}
So, as far as I understand, it uses the AES algorithm in ECB chaining mode, and it passes NULL and 0 as the 5th and 6th arguments of BCryptEncrypt, meaning it does not use an initialization vector.
Now, how do I do the same in C#? I wrote the following function to do one round of the transformation (based on an MSDN sample):
public static byte[] KeyTransform(byte[] buffer, byte[] key)
{
Aes aes = Aes.Create();
aes.Key = key;
aes.BlockSize = 128;
aes.KeySize = key.Length * 8;
aes.Mode = CipherMode.ECB;
//aes.IV = new byte[16] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0};
ICryptoTransform ct = aes.CreateEncryptor();
using (MemoryStream ms = new MemoryStream())
{
using (CryptoStream cs = new CryptoStream(ms, ct, CryptoStreamMode.Write))
{
using (BinaryWriter bw = new BinaryWriter(cs))
{
bw.Write(buffer);
}
cs.Flush();
}
return ms.ToArray();
}
}
Then I compare the buffers after one round of AES applied in the original and in my code. My code produces different results from the original. How do I fix it?
By the way, whether or not I specify an IV, my code produces a different result every time (so I believe an IV is always generated and used). If I try to set aes.IV to null, it throws an exception saying I can't set it to null.
It seems that initializing ICryptoTransform like this:
ICryptoTransform ct = aes.CreateEncryptor(key, new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 });
does the trick.
The only thing that worries me is that the resulting memory stream has 32 bytes instead of 16. But if I drop the last 16 of them, it produces what I need.
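The extra 16 bytes come from padding: the .NET Aes classes default to PaddingMode.PKCS7, which appends a full padding block when the input is already a multiple of the block size. Since BCryptEncrypt is called without padding here, disabling it on the C# side avoids having to drop bytes. A sketch of the simplified transform (one round, assuming a 16-byte buffer):

```csharp
using System.Security.Cryptography;

public static byte[] KeyTransform(byte[] buffer, byte[] key)
{
    using (Aes aes = Aes.Create())
    {
        aes.Key = key;
        aes.Mode = CipherMode.ECB;
        aes.Padding = PaddingMode.None; // no extra 16-byte PKCS7 padding block
        using (ICryptoTransform ct = aes.CreateEncryptor())
        {
            // Exactly one block in, exactly one block out, as with BCryptEncrypt.
            return ct.TransformFinalBlock(buffer, 0, buffer.Length);
        }
    }
}
```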

C# split byte array from file

Hello, I'm writing an encryption program that reads bytes from a file (of any type) and writes them to an output file. The problem is my encryption program takes only blocks of 16 bytes, so if the file is bigger it has to be split into blocks of 16; alternatively, if there's a way to read 16 bytes from the file at a time, that's fine too.
The algorithm works fine with a hard-coded input of 16 bytes. The ciphered result has to be saved in a list or array because it has to be deciphered the same way later. I can't post my whole program, but here's what I do in Main so far without getting results:
static void Main(String[] args)
{
byte[] bytes = File.ReadAllBytes("path to file");
var stream = new StreamReader(new MemoryStream(bytes));
byte[] cipherText = new byte[16];
byte[] decipheredText = new byte[16];
Console.WriteLine("\nThe message is: ");
Console.WriteLine(stream.ReadToEnd());
AES a = new AES(keyInput);
var list1 = new List<byte[]>();
for (int i = 0; i < bytes.Length; i+=16)
{
a.Cipher(bytes, cipherText);
list1.Add(cipherText);
}
Console.WriteLine("\nThe resulting ciphertext is: ");
foreach (byte[] b in list1)
{
ToBytes(b);
}
}
I know that my loop always adds the first 16 bytes from the byte array, but I've tried many ways and nothing works. It won't let me index the bytes array or copy an item to a temp variable like temp = bytes[i]. The ToBytes method is irrelevant; it just prints the elements as bytes.
I'd recommend changing the interface of your Cipher() method: instead of passing the entire array, pass the source and destination arrays along with offsets, for block-by-block encryption.
Pseudo-code is below.
void Cipher(byte[] source, int srcOffset, byte[] dest, int destOffset)
{
// Cipher these bytes from (source + offset) to (source + offset + 16),
// write the cipher to (dest + offset) to (dest + offset + 16)
// Also I'd recommend checking that source.Length and dest.Length are at least (offset + 16)!
}
Usage:
For small files (one memory allocation for destination buffer, block by block encryption):
// You can allocate the entire destination buffer before encryption!
byte[] sourceBuffer = File.ReadAllBytes("path to file");
byte[] destBuffer = new byte[sourceBuffer.Length];
// Encrypt each block.
for (int offset = 0; offset < sourceBuffer.Length; offset += 16)
{
Cipher(sourceBuffer, offset, destBuffer, offset);
}
So, the main advantage of this approach is that it eliminates additional memory allocations: the destination array is allocated all at once. There are also no memory-copy operations.
For files of any size (streams, block by block encryption):
byte[] inputBlock = new byte[16];
byte[] outputBlock = new byte[16];
using (var inputStream = File.OpenRead("input path"))
using (var outputStream = File.Create("output path"))
{
int bytesRead;
while ((bytesRead = inputStream.Read(inputBlock, 0, inputBlock.Length)) > 0)
{
if (bytesRead < 16)
{
// Last (partial) block: either throw, or fill the remaining bytes
// of the input block with padding bytes before ciphering.
// See http://en.wikipedia.org/wiki/Block_cipher_modes_of_operation#Padding
throw new InvalidOperationException("Read block size is not equal to 16 bytes");
}
Cipher(inputBlock, 0, outputBlock, 0);
outputStream.Write(outputBlock, 0, outputBlock.Length);
}
}
No need to read the whole mess into memory if you can only process it a bit at a time...
var filename = @"c:\temp\foo.bin";
using(var fileStream = new FileStream(filename, FileMode.Open))
{
var buffer = new byte[16];
var bytesRead = 0;
while((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
{
// do whatever you need to with the next 16-byte block
Console.WriteLine("Read {0} bytes: {1}",
bytesRead,
string.Join(",", buffer));
}
}
You can use Array.Copy:
byte[] temp = new byte[16];
Array.Copy(bytes, i, temp, 0, 16);
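One more bug worth pointing out in the original Main: `list1.Add(cipherText)` stores a reference to the same array on every iteration, and `Cipher` is always fed the start of `bytes`, so the list ends up holding N views of the first block's ciphertext. A sketch of a corrected loop (a fragment for the Main above, assuming the asker's `a.Cipher(input, output)` processes exactly one 16-byte block):

```csharp
var list1 = new List<byte[]>();
for (int i = 0; i < bytes.Length; i += 16)
{
    // Copy the next 16 bytes into a fresh input block
    // (the tail block is zero-filled if the file length isn't a multiple of 16).
    byte[] block = new byte[16];
    Array.Copy(bytes, i, block, 0, Math.Min(16, bytes.Length - i));

    byte[] cipherText = new byte[16]; // fresh output array per block
    a.Cipher(block, cipherText);
    list1.Add(cipherText);
}
```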

how to encrypt AES/ECB/128 messages in .Net?

Took the vectors from this site http://www.inconteam.com/software-development/41-encryption/55-aes-test-vectors#aes-ecb-128
In JavaScript (sjcl) I get the expected result:
var key = [0x2b7e1516,0x28aed2a6,0xabf71588,0x09cf4f3c];
var test = [0x6bc1bee2,0x2e409f96,0xe93d7e11,0x7393172a];
aes = new sjcl.cipher.aes(key);
r = aes.encrypt(test);
console.log(r);
But I cannot reproduce it in C#:
[TestMethod]
public void EncryptIntsToInts()
{
Int32[] key = { unchecked((Int32)0x2b7e1516), 0x28aed2a6, unchecked((Int32)0xabf71588), 0x09cf4f3c };
Int32[] test = { 0x6bc1bee2,0x2e409f96,unchecked((Int32)0xe93d7e11),0x7393172a };
Int32[] answer = { 0x3ad77bb4, 0x0d7a3660, unchecked((Int32)0xa89ecaf3), 0x2466ef97 };
var r = AES.EncryptIntsToInts(test, key.ToByteArray());
Assert.IsTrue(r.SequenceEqual(answer));
}
static byte[] zeroIV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
public static Int32[] EncryptIntsToInts(Int32[] input, byte[] key)
{
// Check arguments.
if (input == null || input.Length <= 0)
throw new ArgumentNullException("input");
if (key == null || key.Length <= 0)
throw new ArgumentNullException("key");
// Declare the RijndaelManaged object
// used to encrypt the data.
RijndaelManaged aesAlg = null;
byte[] bResult;
try
{
aesAlg = new RijndaelManaged
{
Key = key,
Mode = CipherMode.ECB,
Padding = PaddingMode.None,
KeySize = 128,
BlockSize = 128,
IV = zeroIV
};
ICryptoTransform encryptor = aesAlg.CreateEncryptor();
byte[] bInput = new byte[input.Length * sizeof(int)];
Buffer.BlockCopy(input, 0, bInput, 0, bInput.Length);
bResult = encryptor.TransformFinalBlock(bInput, 0, bInput.Length);
}
finally
{
if (aesAlg != null)
aesAlg.Clear();
}
int[] iResult = new int[bResult.Length / sizeof(int)];
Buffer.BlockCopy(bResult, 0, iResult, 0, bResult.Length);
return iResult;
}
What is my error?
========================================================
Start edit
New code with the right byte order, but it still does not work:
[TestMethod]
public void EncryptIntsToInts()
{
byte[] key = "2b7e151628aed2a6abf7158809cf4f3c".HEX2Bytes();
byte[] test = "6bc1bee22e409f96e93d7e117393172a".HEX2Bytes();
byte[] answer = "3ad77bb40d7a3660a89ecaf32466ef97".HEX2Bytes();
RijndaelManaged aesAlg = new RijndaelManaged
{
Key = key,
Mode = CipherMode.ECB,
Padding = PaddingMode.PKCS7,
KeySize = 128,
BlockSize = 128,
IV = zeroIV
};
ICryptoTransform encryptor = aesAlg.CreateEncryptor();
var r = encryptor.TransformFinalBlock(test, 0, test.Length);
Assert.IsTrue(r.SequenceEqual(answer));
}
public static byte[] HEX2Bytes(this string hex)
{
if (hex.Length%2 != 0)
{
throw new ArgumentException(String.Format(CultureInfo.InvariantCulture,
"The binary key cannot have an odd number of digits: {0}", hex));
}
byte[] HexAsBytes = new byte[hex.Length/2];
for (int index = 0; index < HexAsBytes.Length; index++)
{
string byteValue = hex.Substring(index*2, 2);
HexAsBytes[index] = byte.Parse(byteValue, NumberStyles.HexNumber, CultureInfo.InvariantCulture);
}
return HexAsBytes;
}
static byte[] zeroIV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
Right code (just add a try / using):
[TestMethod]
public void EncryptIntsToInts()
{
byte[] key = "2b7e151628aed2a6abf7158809cf4f3c".HEX2Bytes();
byte[] test = "6bc1bee22e409f96e93d7e117393172a".HEX2Bytes();
byte[] answer = "3ad77bb40d7a3660a89ecaf32466ef97".HEX2Bytes();
var r = AES.Encrypt(test, key);
Assert.IsTrue(answer.SequenceEqual(r));
}
public static byte[] Encrypt(byte[] input, byte[] key)
{
var aesAlg = new AesManaged
{
KeySize = 128,
Key = key,
BlockSize = 128,
Mode = CipherMode.ECB,
Padding = PaddingMode.Zeros,
IV = new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }
};
ICryptoTransform encryptor = aesAlg.CreateEncryptor(aesAlg.Key, aesAlg.IV);
return encryptor.TransformFinalBlock(input, 0, input.Length);
}
You use 32-bit integers to define the key. When you transform them to bytes, you use native endianness, which is typically little-endian. So your key is 16157e2b a6... and not 2b7e1516 28....
I wouldn't use ints to represent a key in the first place. But if you really want to, write a big-endian conversion function.
I also strongly recommend against ECB mode. You could use CBC together with HMAC (in an encrypt-then-MAC construction), or use a third-party lib to implement GCM.
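For completeness, here's a sketch of such a big-endian conversion (my own, using BinaryPrimitives from System.Buffers.Binary), which yields the byte order the published test vectors assume:

```csharp
using System;
using System.Buffers.Binary;

static byte[] WordsToBytesBigEndian(uint[] words)
{
    byte[] result = new byte[words.Length * sizeof(uint)];
    for (int i = 0; i < words.Length; i++)
    {
        // Most significant byte first, matching the test-vector notation.
        BinaryPrimitives.WriteUInt32BigEndian(
            result.AsSpan(i * sizeof(uint), sizeof(uint)), words[i]);
    }
    return result;
}

// WordsToBytesBigEndian(new uint[] { 0x2b7e1516, 0x28aed2a6, 0xabf71588, 0x09cf4f3c })
// yields 2b 7e 15 16 28 ae d2 a6 ab f7 15 88 09 cf 4f 3c, the vector's key bytes.
```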
