Convert C++ function to C#

I am trying to port the following C++ function to C#:
QString Engine::FDigest(const QString & input)
{
    if (input.size() != 32) return "";

    int idx[] = {0xe, 0x3, 0x6, 0x8, 0x2},
        mul[] = {2, 2, 5, 4, 3},
        add[] = {0x0, 0xd, 0x10, 0xb, 0x5},
        a, m, i, t, v;
    QString b;
    char tmp[2] = { 0, 0 };

    for (int j = 0; j <= 4; j++)
    {
        a = add[j];
        m = mul[j];
        i = idx[j];
        tmp[0] = input[i].toAscii();
        t = a + (int)(strtol(tmp, NULL, 16));
        v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
        snprintf(tmp, 2, "%x", (v * m) % 0x10);
        b += tmp;
    }
    return b;
}
Some of this code is easy to port, but I'm having problems with this part:

tmp[0] = input[i].toAscii();
t = a + (int)(strtol(tmp, NULL, 16));
v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
snprintf(tmp, 2, "%x", (v * m) % 0x10);

I have found that (int)strtol(tmp, NULL, 16) corresponds to int.Parse(tmp, NumberStyles.HexNumber) in C# and that snprintf corresponds to String.Format, but I'm not sure about the rest of it.
How can I port this fragment to C#?

Edit: I have a suspicion that your code actually computes an MD5 digest of the input data. See below for a snippet based on that assumption.
Translation steps

A few hints that should work well [1]:

Q: tmp[0] = input[i].toAscii();

byte[] ascii = Encoding.ASCII.GetBytes(input);
tmp[0] = (char)ascii[i];

Q: t = a + (int)(strtol(tmp, NULL, 16));

// tmp[1] is always the terminating nul, so only tmp[0] carries a hex digit
t = a + int.Parse(tmp[0].ToString(),
                  System.Globalization.NumberStyles.HexNumber);

Q: v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));

No clue about toLocal8Bit; you would need to read the Qt documentation...

Q: snprintf(tmp, 2, "%x", (v * m) % 0x10);

{
    // "x" gives the single lowercase digit that the C++ "%x" produces
    string tmptext = ((v * m) % 16).ToString("x");
    tmp[0] = tmptext[0];
    tmp[1] = '\0';
}
What if ... it's just MD5?

You could try this directly to see whether it achieves what you need:

using System.Security.Cryptography;
using System.Text;

public string FDigest(string input)
{
    MD5 md5 = MD5.Create();
    byte[] ascii = Encoding.ASCII.GetBytes(input);
    byte[] hash = md5.ComputeHash(ascii);

    // Convert the byte array to a hexadecimal string
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < hash.Length; i++)
        sb.Append(hash[i].ToString("x2")); // "x2" for lowercase, "X2" for uppercase
    return sb.ToString();
}

[1] Explicitly not optimized; intended as quick hints. Optimize as necessary.

A few more hints:
tmp is a two-byte buffer and you only ever write to the first byte, leaving a trailing nul. So tmp is always a string of exactly one character, and you're processing a hex number one character at a time. So I think
tmp[0] = input[i].toAscii();
t = a + (int)(strtol(tmp, NULL, 16));
this is roughly int t = a + Convert.ToInt32(input.Substring(i, 1), 16); - take one digit from input and add its hex value to a, which you've looked up from a table. (I'm assuming that the toAscii is simply there to map the QString character, which is already a hex digit, into ASCII for strtol, so if you already have a string of hex digits this is fine.)
Next
v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
this means: look up two characters in input at offset t, i.e. input.Substring(t, 2), then convert these to a hex integer again: v = Convert.ToInt32(input.Substring(t, 2), 16);. As it happens, I think you'll only actually use the second digit here anyway, since the calculation is (v * m) % 0x10, but hey. If again we're working with a QString of hex digits, then toLocal8Bit ought to be the same conversion as toAscii - I'm not clear why your code has two different functions here.
Finally convert these values to a single digit in tmp, then append that to b
snprintf(tmp, 2, "%x", (v * m) % 0x10);
b += tmp;
(2 is the length of the buffer, and since we need a trailing nul, only 1 character is ever written) i.e.

int digit = (v * m) % 0x10;
b += digit.ToString("x");

should do. I'd personally write the mod 16 as a logical AND, & 0xf, since it's intended to strip the value down to a single digit.
Note also that in your code i is never set - I guess that's a loop or something you omitted for brevity?
So, in summary:

int t = a + Convert.ToInt32(input.Substring(i, 1), 16);
int v = Convert.ToInt32(input.Substring(t, 2), 16);
int nextDigit = (v * m) & 0xf;
b += nextDigit.ToString("x");
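Putting those pieces together, a complete port might look like the sketch below (untested; it assumes input is a 32-character hex string, as the original's length guard implies):

using System;
using System.Text;

public static string FDigest(string input)
{
    if (input.Length != 32) return "";

    int[] idx = { 0xe, 0x3, 0x6, 0x8, 0x2 };
    int[] mul = { 2, 2, 5, 4, 3 };
    int[] add = { 0x0, 0xd, 0x10, 0xb, 0x5 };

    var b = new StringBuilder();
    for (int j = 0; j <= 4; j++)
    {
        // One hex digit taken from position idx[j], plus the offset from the table.
        // NB: unlike QString::mid, Substring throws if t + 2 runs past the end.
        int t = add[j] + Convert.ToInt32(input.Substring(idx[j], 1), 16);
        // Two hex digits read from offset t.
        int v = Convert.ToInt32(input.Substring(t, 2), 16);
        // Single lowercase hex digit, matching the C++ "%x".
        b.Append(((v * mul[j]) & 0xf).ToString("x"));
    }
    return b.ToString();
}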

Related

AES GCM porting from python to C#

I am trying to port AES GCM implementation in python OpenTLS project, to C# (.Net). Below is the code in OpenTLS code:
#######################
### Galois Counter Mode
#######################
class AES_GCM:
    def __init__(self, keys, key_size, hash):
        key_size //= 8
        hash_size = hash.digest_size
        self.client_AES_key = keys[0 : key_size]
        self.server_AES_key = keys[key_size : 2*key_size]
        self.client_IV = keys[2*key_size : 2*key_size+4]
        self.server_IV = keys[2*key_size+4 : 2*key_size+8]
        self.H_client = bytes_to_int(AES.new(self.client_AES_key, AES.MODE_ECB).encrypt('\x00'*16))
        self.H_server = bytes_to_int(AES.new(self.server_AES_key, AES.MODE_ECB).encrypt('\x00'*16))

    def GF_mult(self, x, y):
        product = 0
        for i in range(127, -1, -1):
            product ^= x * ((y >> i) & 1)
            x = (x >> 1) ^ ((x & 1) * 0xE1000000000000000000000000000000)
        return product

    def H_mult(self, H, val):
        product = 0
        for i in range(16):
            product ^= self.GF_mult(H, (val & 0xFF) << (8 * i))
            val >>= 8
        return product

    def GHASH(self, H, A, C):
        C_len = len(C)
        A_padded = bytes_to_int(A + b'\x00' * (16 - len(A) % 16))
        if C_len % 16 != 0:
            C += b'\x00' * (16 - C_len % 16)
        tag = self.H_mult(H, A_padded)
        for i in range(0, len(C) // 16):
            tag ^= bytes_to_int(C[i*16:i*16+16])
            tag = self.H_mult(H, tag)
        tag ^= bytes_to_int(nb_to_n_bytes(8*len(A), 8) + nb_to_n_bytes(8*C_len, 8))
        tag = self.H_mult(H, tag)
        return tag

    def decrypt(self, ciphertext, seq_num, content_type, debug=False):
        iv = self.server_IV + ciphertext[0:8]
        counter = Counter.new(nbits=32, prefix=iv, initial_value=2, allow_wraparound=False)
        cipher = AES.new(self.server_AES_key, AES.MODE_CTR, counter=counter)
        plaintext = cipher.decrypt(ciphertext[8:-16])
        # Computing the tag is actually pretty time consuming
        if debug:
            auth_data = nb_to_n_bytes(seq_num, 8) + nb_to_n_bytes(content_type, 1) + TLS_VERSION + nb_to_n_bytes(len(ciphertext)-8-16, 2)
            auth_tag = self.GHASH(self.H_server, auth_data, ciphertext[8:-16])
            auth_tag ^= bytes_to_int(AES.new(self.server_AES_key, AES.MODE_ECB).encrypt(iv + '\x00'*3 + '\x01'))
            auth_tag = nb_to_bytes(auth_tag)
            print('Auth tag (from server): ' + bytes_to_hex(ciphertext[-16:]))
            print('Auth tag (from client): ' + bytes_to_hex(auth_tag))
        return plaintext

    def encrypt(self, plaintext, seq_num, content_type):
        iv = self.client_IV + os.urandom(8)
        # Encrypts the plaintext
        plaintext_size = len(plaintext)
        counter = Counter.new(nbits=32, prefix=iv, initial_value=2, allow_wraparound=False)
        cipher = AES.new(self.client_AES_key, AES.MODE_CTR, counter=counter)
        ciphertext = cipher.encrypt(plaintext)
        # Compute the Authentication Tag
        auth_data = nb_to_n_bytes(seq_num, 8) + nb_to_n_bytes(content_type, 1) + TLS_VERSION + nb_to_n_bytes(plaintext_size, 2)
        auth_tag = self.GHASH(self.H_client, auth_data, ciphertext)
        auth_tag ^= bytes_to_int(AES.new(self.client_AES_key, AES.MODE_ECB).encrypt(iv + b'\x00'*3 + b'\x01'))
        auth_tag = nb_to_bytes(auth_tag)
        # print('Auth key: ' + bytes_to_hex(nb_to_bytes(self.H)))
        # print('IV: ' + bytes_to_hex(iv))
        # print('Key: ' + bytes_to_hex(self.client_AES_key))
        # print('Plaintext: ' + bytes_to_hex(plaintext))
        # print('Ciphertext: ' + bytes_to_hex(ciphertext))
        # print('Auth tag: ' + bytes_to_hex(auth_tag))
        return iv[4:] + ciphertext + auth_tag
An attempt to translate this to C# code is below (sorry for the amateurish code, I am a newbie):
EDIT:
Created an array which got values from GetBytes, and printed the result:
byte[] incr = BitConverter.GetBytes((int) 2);
cf.printBuf(incr, (String) "Array:");
return;
Noticed that the result was "02 00 00 00"; hence I guess my machine is little-endian.
Made some changes to the code as rodrigogq mentioned. Below is the latest code; it is still not working.
Verified that GHASH, GF_mult and H_mult give the same results. Below is the verification code:
Python:
key = "\xab\xcd\xab\xcd"
key = key * 10
h = "\x00\x00"
a = AES_GCM(key, 128, h)
H = 200
A = "\x02" * 95
C = "\x02" * 95
D = a.GHASH(H, A, C)
print(D)
C#:
BigInteger H = new BigInteger(200);
byte[] A = new byte[95];
byte[] C = new byte[95];
for (int i = 0; i < 95; i++)
{
    A[i] = 2;
    C[i] = 2;
}
BigInteger a = e.GHASH(H, A, C);
Console.WriteLine(a);
Results:
For both: 129209628709014910494696220101529767594
EDIT: Now the outputs agree between Python and C#, so the porting itself is essentially done :) However, these outputs still don't agree with Wireshark, so the handshake is still failing; maybe something is wrong with the procedure or the contents.
EDIT: Finally managed to get the code working. Below is the code that resulted in a successful handshake.
Working code:
/*
 * Receiving seqNum as UInt64 and content_type as byte
 */
public byte[] AES_Encrypt_GCM(byte[] client_write_key, byte[] client_write_iv, byte[] plaintext, UInt64 seqNum, byte content_type)
{
    int plaintext_size = plaintext.Length;
    List<byte> temp = new List<byte>();

    byte[] init_bytes = new byte[16];
    Array.Clear(init_bytes, 0, 16);

    byte[] encrypted = AES_Encrypt_ECB(init_bytes, client_write_key, 128);
    Array.Reverse(encrypted);
    BigInteger H_client = new BigInteger(encrypted);
    if (H_client < 0)
    {
        temp.Clear();
        temp.TrimExcess();
        temp.AddRange(H_client.ToByteArray());
        temp.Add(0);
        H_client = new BigInteger(temp.ToArray());
    }

    Random rnd = new Random();
    byte[] random = new byte[8];
    rnd.NextBytes(random);

    /*
     * incr is little endian, but it needs to be in big endian format
     */
    byte[] incr = BitConverter.GetBytes((int) 2);
    Array.Reverse(incr);

    /*
     * Counter = First 4 bytes of IV + 8 random bytes + 4 bytes of sequential value (starting at 2)
     */
    temp.Clear();
    temp.TrimExcess();
    temp.AddRange(client_write_iv);
    temp.AddRange(random);
    byte[] iv = temp.ToArray();

    temp.AddRange(incr);
    byte[] counter = temp.ToArray();

    AES_CTR aesctr = new AES_CTR(counter);
    ICryptoTransform ctrenc = aesctr.CreateEncryptor(client_write_key, null);
    byte[] ctext = ctrenc.TransformFinalBlock(plaintext, 0, plaintext_size);

    byte[] seq_num = BitConverter.GetBytes(seqNum);

    /*
     * Using UInt16 instead of short
     */
    byte[] tls_version = BitConverter.GetBytes((UInt16) 771);
    Console.WriteLine("Plain Text size = {0}", plaintext_size);
    byte[] plaintext_size_array = BitConverter.GetBytes((UInt16) plaintext_size);

    /*
     * Size was returned as 10 00 instead of 00 10
     */
    Array.Reverse(plaintext_size_array);

    temp.Clear();
    temp.TrimExcess();
    temp.AddRange(seq_num);
    temp.Add(content_type);
    temp.AddRange(tls_version);
    temp.AddRange(plaintext_size_array);
    byte[] auth_data = temp.ToArray();

    BigInteger auth_tag = GHASH(H_client, auth_data, ctext);
    Console.WriteLine("H = {0}", H_client);
    this.printBuf(plaintext, "plaintext = ");
    this.printBuf(auth_data, "A = ");
    this.printBuf(ctext, "C = ");
    this.printBuf(client_write_key, "client_AES_key = ");
    this.printBuf(iv.ToArray(), "iv = ");
    Console.WriteLine("Auth Tag just after GHASH: {0}", auth_tag);

    AesCryptoServiceProvider aes2 = new AesCryptoServiceProvider();
    aes2.Key = client_write_key;
    aes2.Mode = CipherMode.ECB;
    aes2.Padding = PaddingMode.None;
    aes2.KeySize = 128;
    ICryptoTransform transform1 = aes2.CreateEncryptor();

    byte[] cval = {0, 0, 0, 1};
    temp.Clear();
    temp.TrimExcess();
    temp.AddRange(iv);
    temp.AddRange(cval);
    byte[] encrypted1 = AES_Encrypt_ECB(temp.ToArray(), client_write_key, 128);
    Array.Reverse(encrypted1);
    BigInteger nenc = new BigInteger(encrypted1);
    if (nenc < 0)
    {
        temp.Clear();
        temp.TrimExcess();
        temp.AddRange(nenc.ToByteArray());
        temp.Add(0);
        nenc = new BigInteger(temp.ToArray());
    }
    this.printBuf(nenc.ToByteArray(), "NENC = ");
    Console.WriteLine("NENC: {0}", nenc);

    auth_tag ^= nenc;
    byte[] auth_tag_array = auth_tag.ToByteArray();
    Array.Reverse(auth_tag_array);
    this.printBuf(auth_tag_array, "Final Auth Tag Byte Array: ");
    Console.WriteLine("Final Auth Tag: {0}", auth_tag);
    this.printBuf(random, "Random sent = ");

    temp.Clear();
    temp.TrimExcess();
    temp.AddRange(random);
    temp.AddRange(ctext);
    temp.AddRange(auth_tag_array);
    return temp.ToArray();
}
public void printBuf(byte[] data, String heading)
{
    int numBytes = 0;
    Console.Write(heading + "\"");
    if (data == null)
    {
        return;
    }
    foreach (byte element in data)
    {
        Console.Write("\\x{0}", element.ToString("X2"));
        numBytes = numBytes + 1;
        if (numBytes == 32)
        {
            Console.Write("\r\n");
            numBytes = 0;
        }
    }
    Console.Write("\"\r\n");
}
public BigInteger GF_mult(BigInteger x, BigInteger y)
{
    BigInteger product = new BigInteger(0);
    BigInteger e10 = BigInteger.Parse("00E1000000000000000000000000000000", NumberStyles.AllowHexSpecifier);

    /*
     * The operation y >> i fails if i is UInt32, so leaving it as int
     */
    int i = 127;
    while (i != -1)
    {
        product = product ^ (x * ((y >> i) & 1));
        x = (x >> 1) ^ ((x & 1) * e10);
        i = i - 1;
    }
    return product;
}
public BigInteger H_mult(BigInteger H, BigInteger val)
{
    BigInteger product = new BigInteger(0);

    /*
     * The operation (val & 0xFF) << (8 * i) fails if i is UInt32, so leaving it as int
     */
    int i = 0;
    while (i < 16)
    {
        product = product ^ GF_mult(H, (val & 0xFF) << (8 * i));
        val = val >> 8;
        i = i + 1;
    }
    return product;
}
public BigInteger GHASH(BigInteger H, byte[] A, byte[] C)
{
    int C_len = C.Length;
    List<byte> temp = new List<byte>();

    int plen = 16 - (A.Length % 16);
    byte[] zeroes = new byte[plen]; // new arrays are already zero-filled in C#
    temp.AddRange(A);
    temp.AddRange(zeroes);
    temp.Reverse();
    BigInteger A_padded = new BigInteger(temp.ToArray());
    temp.Clear();
    temp.TrimExcess();

    byte[] C1;
    if ((C_len % 16) != 0)
    {
        plen = 16 - (C_len % 16);
        byte[] zeroes1 = new byte[plen];
        temp.AddRange(C);
        temp.AddRange(zeroes1);
        C1 = temp.ToArray();
    }
    else
    {
        C1 = new byte[C.Length];
        Array.Copy(C, 0, C1, 0, C.Length);
    }
    temp.Clear();
    temp.TrimExcess();

    BigInteger tag = H_mult(H, A_padded);
    this.printBuf(H.ToByteArray(), "H Byte Array:");

    for (int i = 0; i < (int) (C1.Length / 16); i++)
    {
        byte[] toTake = C1.Skip(i * 16).Take(16).ToArray();
        Array.Reverse(toTake);
        BigInteger tempNum = new BigInteger(toTake);
        tag ^= tempNum;
        tag = H_mult(H, tag);
    }

    /*
     * Want each length to be in "00 00 00 00 00 00 00 xy" format
     */
    byte[] A_arr = BitConverter.GetBytes((long) (8 * A.Length));
    Array.Reverse(A_arr);
    byte[] C_arr = BitConverter.GetBytes((long) (8 * C_len));
    Array.Reverse(C_arr);
    temp.AddRange(A_arr);
    temp.AddRange(C_arr);
    temp.Reverse();
    BigInteger array_int = new BigInteger(temp.ToArray());
    tag = tag ^ array_int;
    tag = H_mult(H, tag);
    return tag;
}
Using SSL decryption in Wireshark (with the private key), I found that:

The nonce calculated by the C# code is the same as the one in Wireshark (the fixed part is client_write_IV, the variable part is 8 random bytes).
The value of the AAD (auth_data above: seqNum + ctype + tls_version + plaintext_size) matches the Wireshark value.
The cipher text (ctext above, the C in GHASH(H, A, C)) also matches the Wireshark-calculated value.

However, the auth_tag calculation (GHASH(H_client, auth_data, ctext)) is failing. It would be great if someone could guide me as to what could be wrong in the GHASH function. I did a basic comparison of the results of the GF_mult function in Python and C#, and those don't match either.
This is not a final solution, just some advice. I see you are using the function BitConverter.GetBytes a lot, with int instead of the explicit Int32 or Int16.
The remarks in the official documentation say:

The order of bytes in the array returned by the GetBytes method depends on whether the computer architecture is little-endian or big-endian.

The BigInteger structure, on the other hand, always expects little-endian input:

value
Type: System.Byte[]
An array of byte values in little-endian order.

Prefer the explicit Int32 and Int16 overloads and pay attention to the order of the bytes before using them in these calculations.
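For instance, here is a minimal sketch of a helper that hides this (an assumption on my part: the buffers involved are big-endian, as TLS wire data is; the name BigEndian.ToBigInteger is illustrative):

using System.Linq;
using System.Numerics;

static class BigEndian
{
    // BigInteger(byte[]) expects little-endian two's-complement input, so
    // reverse the big-endian buffer and append a 0x00 sign byte to keep the
    // value non-negative.
    public static BigInteger ToBigInteger(byte[] bigEndian)
    {
        byte[] little = bigEndian.Reverse().Concat(new byte[] { 0 }).ToArray();
        return new BigInteger(little);
    }
}

With such a helper, the Array.Reverse-plus-sign-byte dance after each AES_Encrypt_ECB call collapses to a single line, e.g. BigInteger H_client = BigEndian.ToBigInteger(encrypted);.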
Use log4net to log all the operations. It would be nice to put the same logs in the Python program so that you can compare them side by side and check exactly where the calculations diverge.
Hope this gives you some tips on where to start.

Long to String...Not your usual convert

Just out of curiosity I was rummaging through some code that does diffs on files. I've got it all working, but at one point it writes the following:

long test = 0x3034464649445342L;

I understand that this is just another way to write

long test = 3473478480300364610;

but when it is written to a file it prints out as 'BSDIFF40'. Can anyone shed some light on how this is converted? I've tried different encodings (ANSI, ASCII etc.) but can't figure it out. The line that writes it to the file is below, if that helps:
private static void WriteInt64(long value, byte[] buf, int offset)
{
    var valueToWrite = value < 0 ? -value : value;
    for (var byteIndex = 0; byteIndex < 8; byteIndex++)
    {
        buf[offset + byteIndex] = (byte)(valueToWrite % 256);
        valueToWrite -= buf[offset + byteIndex];
        valueToWrite /= 256;
    }
    if (value < 0)
        buf[offset + 7] |= 0x80;
}
Thanks :)
Whatever you are using to print the values in the file is interpreting these bytes as ASCII-encoded text. Reading the constant from its most significant byte down:

0x30 = '0'
0x34 = '4'
0x46 = 'F'
0x46 = 'F'
0x49 = 'I'
0x44 = 'D'
0x53 = 'S'
0x42 = 'B'

WriteInt64 stores the least significant byte first, so the bytes land in the file in the reverse order and spell "BSDIFF40".
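A quick way to see this (a sketch; on little-endian hardware BitConverter emits the low byte first, matching the file layout):

using System;
using System.Text;

class Program
{
    static void Main()
    {
        long test = 0x3034464649445342L;
        // Low byte 0x42 ('B') comes out first on a little-endian machine.
        byte[] bytes = BitConverter.GetBytes(test);
        Console.WriteLine(Encoding.ASCII.GetString(bytes)); // prints BSDIFF40
    }
}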

Integer.parseInt(String s, int radix) in C#

I have this function in a Java program.
private static byte[] converToByte(String s)
{
    byte[] output = new byte[s.length() / 2];
    for (int i = 0, j = 0; i < s.length(); i += 2, j++)
    {
        output[j] = (byte)(Integer.parseInt(s.substring(i, i + 2), 16));
    }
    return output;
}
I am trying to create the same thing in C#, but I'm having trouble. I tried this:

output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));

But after a couple of iterations I got a System.OverflowException. What would be the equivalent instruction in C#?
Thanks.
private static sbyte[] converToByte(string s)
{
    sbyte[] output = new sbyte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        output[j] = (sbyte)(Convert.ToInt32(s.Substring(i, 2), 16));
    }
    return output;
}
You are using the wrong data type in the line:

output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));

Short name   .NET class   Type               Width (bits)   Range
byte         Byte         Unsigned integer   8              0 to 255
short        Int16        Signed integer     16             -32,768 to 32,767

You are getting an overflow exception because an Int16 (short) is far too big to fit into a byte.
After struggling with this problem myself, I realised the real problem is that Java's substring method is

substring(int beginIndex, int endIndex)

while C#'s implementation takes

Substring(int startIndex, int length)

This means that in C# the same code grabs larger and larger chunks of characters, causing the overflow. @Dave Doknjas was on the right track, but with the corrected, smaller chunk size you can still convert straight to a byte:

output[j] = Convert.ToByte(str.Substring(i, 2), 16);
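With the length-based Substring, the whole Java method then ports over almost unchanged (a sketch; converToByte keeps the original's name):

private static byte[] converToByte(string s)
{
    byte[] output = new byte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        // Substring(i, 2): two hex chars starting at i (length, not end index)
        output[j] = Convert.ToByte(s.Substring(i, 2), 16);
    }
    return output;
}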

Convert an integer to a byte[] of specific length

I'm trying to create a function (C#) that will take 2 integers (a value to become a byte[], a value to set the length of the array to) and return a byte[] representing the value. Right now, I have a function which only returns byte[]s of a length of 4 (I'm presuming 32-bit).
For instance, something like InttoByteArray(0x01, 2) should return a byte[] of {0x00, 0x01}.
Does anyone have a solution to this?
You could use the following
public static byte[] ToByteArray(object anyValue, int length)
{
    if (length > 0)
    {
        int rawsize = Marshal.SizeOf(anyValue);
        IntPtr buffer = Marshal.AllocHGlobal(rawsize);
        Marshal.StructureToPtr(anyValue, buffer, false);
        byte[] rawdatas = new byte[rawsize * length];
        Marshal.Copy(buffer, rawdatas, rawsize * (length - 1), rawsize);
        Marshal.FreeHGlobal(buffer);
        return rawdatas;
    }
    return new byte[0];
}
Some test cases are:
byte x = 45;
byte[] x_bytes = ToByteArray(x, 1);
int y = 234;
byte[] y_bytes = ToByteArray(y, 5);
int z = 234;
byte[] z_bytes = ToByteArray(z, 0);
This will create an array of whatever size the type you pass in is. If you want to return only byte arrays, it should be pretty easy to change; right now it's in a more generic form.
To get what you want in your example you could do this:
int a = 0x01;
byte[] a_bytes = ToByteArray(Convert.ToByte(a), 2);
You can use the BitConverter utility class for this. Though I don't think it allows you to specify the length of the array when you're converting an int. But you can always truncate the result.
http://msdn.microsoft.com/en-us/library/de8fssa4.aspx
Take your current algorithm and chop off bytes from the array if the length specified is less than 4, or pad it with zeroes if it's more than 4. Sounds like you already have it solved to me.
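For example, a minimal sketch of that truncate-or-pad approach on top of BitConverter (assuming big-endian output like the {0x00, 0x01} example; the helper name is illustrative):

using System;
using System.Linq;

static byte[] IntToBytes(int value, int length)
{
    byte[] raw = BitConverter.GetBytes(value);            // 4 bytes, machine order
    if (BitConverter.IsLittleEndian) Array.Reverse(raw);  // normalize to big-endian
    return raw.Length >= length
        ? raw.Skip(raw.Length - length).ToArray()                 // chop high bytes
        : new byte[length - raw.Length].Concat(raw).ToArray();    // left-pad with zeroes
}

IntToBytes(0x01, 2) then returns { 0x00, 0x01 } as in the question.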
You'd want some loop like:
for (int i = arrayLen - 1; i >= 0; i--) {
    resultArray[i] = (byte)((theInt >> (i * 8)) & 0xff);
}
byte[] IntToByteArray(int number, int bytes)
{
    if (bytes > 4 || bytes < 0)
    {
        throw new ArgumentOutOfRangeException("bytes");
    }
    byte[] result = new byte[bytes];
    for (int i = bytes - 1; i >= 0; i--)
    {
        result[i] = (byte)((number >> (8 * i)) & 0xFF);
    }
    return result;
}
It fills the result array so that index 0 holds the least significant byte and the last index holds the most significant (little-endian layout).
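A quick usage check (note: the question's InttoByteArray(0x01, 2) example expects the big-endian {0x00, 0x01}, so reverse the result if that's the layout you need):

byte[] r = IntToByteArray(0x0102, 2);
// r[0] == 0x02, r[1] == 0x01: least significant byte first
Array.Reverse(r);
// r is now { 0x01, 0x02 }, the big-endian layout from the question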
byte byte1 = (byte)((mut & 0xFF) ^ (mut3 & 0xFF));
byte byte2 = (byte)((mut1 & 0xFF) ^ (mut2 & 0xFF));
quoted from
C#: Cannot convert from ulong to byte

BitConverter.ToString() in reverse? [duplicate]

This question already has answers here:
How do you convert a byte array to a hexadecimal string, and vice versa?
(53 answers)
Closed 8 years ago.
I have an array of bytes that I would like to store as a string. I can do this as follows:
byte[] array = new byte[] { 0x01, 0x02, 0x03, 0x04 };
string s = System.BitConverter.ToString(array);
// Result: s = "01-02-03-04"
So far so good. Does anyone know how I get this back to an array? There is no overload of BitConverter.GetBytes() that takes a string, and it seems like a nasty workaround to break the string into an array of strings and then convert each of them.
The array in question may be of variable length, probably about 20 bytes.
Not a built-in method, but an implementation. (It could be done without the split, though.)

String[] arr = str.Split('-');
byte[] array = new byte[arr.Length];
for (int i = 0; i < arr.Length; i++)
    array[i] = Convert.ToByte(arr[i], 16);
Method without Split (makes many assumptions about the string format):

int length = (s.Length + 1) / 3;
byte[] arr1 = new byte[length];
for (int i = 0; i < length; i++)
    arr1[i] = Convert.ToByte(s.Substring(3 * i, 2), 16);
And one more method, without either split or substrings. You may get shot if you commit this to source control, though; I take no responsibility for such health problems.

int length = (s.Length + 1) / 3;
byte[] arr1 = new byte[length];
for (int i = 0; i < length; i++)
{
    char sixteen = s[3 * i];
    if (sixteen > '9') sixteen = (char)(sixteen - 'A' + 10);
    else sixteen -= '0';

    char ones = s[3 * i + 1];
    if (ones > '9') ones = (char)(ones - 'A' + 10);
    else ones -= '0';

    arr1[i] = (byte)(16 * sixteen + ones);
}
(basically implementing base16 conversion on two chars)
You can parse the string yourself:

byte[] data = new byte[(s.Length + 1) / 3];
for (int i = 0; i < data.Length; i++) {
    data[i] = (byte)(
        "0123456789ABCDEF".IndexOf(s[i * 3]) * 16 +
        "0123456789ABCDEF".IndexOf(s[i * 3 + 1])
    );
}
The neatest solution though, I believe, is using extensions:
byte[] data = s.Split('-').Select(b => Convert.ToByte(b, 16)).ToArray();
If you don't need that specific format, try using Base64, like this:
var bytes = new byte[] { 0x12, 0x34, 0x56 };
var base64 = Convert.ToBase64String(bytes);
bytes = Convert.FromBase64String(base64);
Base64 will also be substantially shorter.
If you need to use that format, this obviously won't help.
byte[] data = Array.ConvertAll<string, byte>(str.Split('-'), s => Convert.ToByte(s, 16));
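As a quick round-trip check of these approaches (a sketch using the Split-based one-liner):

using System;
using System.Linq;

byte[] array = new byte[] { 0x01, 0x02, 0x03, 0x04 };
string s = BitConverter.ToString(array);                                  // "01-02-03-04"
byte[] back = s.Split('-').Select(b => Convert.ToByte(b, 16)).ToArray();
Console.WriteLine(back.SequenceEqual(array));                             // True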
I believe the following will solve this robustly.
public static byte[] HexStringToBytes(string s)
{
    const string HEX_CHARS = "0123456789ABCDEF";

    if (s.Length == 0)
        return new byte[0];
    if ((s.Length + 1) % 3 != 0)
        throw new FormatException();

    byte[] bytes = new byte[(s.Length + 1) / 3];

    int state = 0; // 0 = expect first digit, 1 = expect second digit, 2 = expect hyphen
    int currentByte = 0;
    int x;
    int value = 0;

    foreach (char c in s)
    {
        switch (state)
        {
            case 0:
                x = HEX_CHARS.IndexOf(Char.ToUpperInvariant(c));
                if (x == -1)
                    throw new FormatException();
                value = x << 4;
                state = 1;
                break;
            case 1:
                x = HEX_CHARS.IndexOf(Char.ToUpperInvariant(c));
                if (x == -1)
                    throw new FormatException();
                bytes[currentByte++] = (byte)(value + x);
                state = 2;
                break;
            case 2:
                if (c != '-')
                    throw new FormatException();
                state = 0;
                break;
        }
    }
    return bytes;
}
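A quick exercise of the state machine above (illustrative):

byte[] b = HexStringToBytes("01-a2-03");
// b == { 0x01, 0xA2, 0x03 }; anything not matching the "XX-XX-..." shape throws FormatException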
it seems like a nasty workaround to break the string into an array of strings and then convert each of them.

I don't think there's another way: the format produced by BitConverter.ToString is quite specific, so if there is no existing method to parse it back to a byte[], you have to do it yourself. The ToString method is not really intended as a conversion, but rather to provide a human-readable format for debugging, easy printouts, etc. I'd rethink the byte[] - String - byte[] requirement and would probably prefer SLaks' Base64 solution.
