I am trying to edit the hex code of a file. The C# code needs to find either the decoded text, i.e. 1004, or the corresponding hex 31 30 30 34, and replace it with the string 2113 or the corresponding hex 32 31 31 33.
I have tried to edit the string directly using:
string test = File.ReadAllText("tram.png");
Regex id = new Regex("1004");
string idre = id.Replace(test, "2113", 1);
File.WriteAllText("tram.png", idre);
I also tried a byte method:
Byte[] test = File.ReadAllBytes("tram.png");
Byte[] id = new Regex("1004");
Byte[] idre = id.Replace(test, "2113", 1);
File.WriteAllBytes("tram.png", idre);
It says 'byte[]' does not contain a definition for 'Replace', and Cannot implicitly convert type 'System.Text.RegularExpressions.Regex' to 'byte[]'.
Can you tell me what I am doing wrong, please?
You can't use regex on bytes. You can look for the pattern manually and just replace those elements in the array (note that the values from your question are hex, so they need the 0x prefix in C#):
byte[] bytes = File.ReadAllBytes("tram.png");
for (int i = 0; i <= bytes.Length - 4; ++i)
{
    // 0x31 0x30 0x30 0x34 are the ASCII bytes of "1004"
    if (bytes[i] == 0x31 && bytes[i + 1] == 0x30 && bytes[i + 2] == 0x30 && bytes[i + 3] == 0x34)
    {
        // overwrite with the ASCII bytes of "2113"
        bytes[i] = 0x32;
        bytes[i + 1] = 0x31;
        bytes[i + 2] = 0x31;
        bytes[i + 3] = 0x33;
        i += 3; // Skip the new bytes for efficiency.
    }
}
File.WriteAllBytes("tram.png", bytes);
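If you prefer not to hard-code the byte values, a variant of the same loop (a sketch, assuming the matched text inside the file is plain ASCII) can derive both patterns with Encoding.ASCII.GetBytes:

byte[] find = System.Text.Encoding.ASCII.GetBytes("1004"); // 31 30 30 34
byte[] repl = System.Text.Encoding.ASCII.GetBytes("2113"); // 32 31 31 33
byte[] data = System.IO.File.ReadAllBytes("tram.png");
for (int i = 0; i <= data.Length - find.Length; i++)
{
    bool match = true;
    for (int j = 0; j < find.Length; j++)
    {
        if (data[i + j] != find[j]) { match = false; break; }
    }
    if (match)
    {
        for (int j = 0; j < repl.Length; j++)
            data[i + j] = repl[j];
        i += find.Length - 1; // skip past the replaced bytes
    }
}
System.IO.File.WriteAllBytes("tram.png", data);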
I have a byte array as follows:
byte[] arrByt = new byte[] { 0xF, 0xF, 0x11, 0x4 };
so in binary
arrByt = 00001111 00001111 00010001 00000100
Now I want to create a new byte array by removing leading 0s for each byte from arrByt
arrNewByt = 11111111 10001100 = { 0xFF, 0x8C };
I know that this can be done by converting the byte values into binary string values, removing the leading 0s, appending the values and converting back to byte values into the new array.
However, this is a slow process for a large array.
Is there a faster way to achieve this (like logical operations, bit operations, or other efficient ways)?
Thanks.
This should do the job quite fast; it uses only standard loops and operators. Give it a try; it will also work for longer source arrays.
// source array of bytes
var arrByt = new byte[] {0xF, 0xF, 0x11, 0x4 };
// target array - first with the size of the source array
var targetArray = new byte[arrByt.Length];
// bit index in target array
// from left = byte 0, bit 7 = index 31; to the right = byte 3, bit 0 = index 0
var targetIdx = targetArray.Length * 8 - 1;
// go through all bytes of the source array from left to right
for (var i = 0; i < arrByt.Length; i++)
{
var startFound = false;
// go through all bits of the current byte from the highest to the lowest
for (var x = 7; x >= 0; x--)
{
// copy the bit if it is 1 or if there was already a 1 before in this byte
if (startFound || ((arrByt[i] >> x) & 1) == 1)
{
startFound = true;
// copy the bit from its position in the source array to its new position in the target array
targetArray[targetArray.Length - ((targetIdx / 8) + 1)] |= (byte) (((arrByt[i] >> x) & 1) << (targetIdx % 8));
// advance the bit + byte position in the target array one to the right
targetIdx--;
}
}
}
// resize the target array to only the bytes that were used above
Array.Resize(ref targetArray, (int)Math.Ceiling((targetArray.Length * 8 - (targetIdx + 1)) / 8d));
// write target array content to console
for (var i = 0; i < targetArray.Length; i++)
{
Console.Write($"{targetArray[i]:X} ");
}
// OUTPUT: FF 8C
If you are trying to find the location of the most-significant bit, you can do a log2() of the byte (and if you don't have log2, you can use log(x)/log(2) which is the same as log2(x))
For instance, the numbers 7, 6, 5, and 4 all have their highest '1' in the 3rd bit position (0111, 0110, 0101, 0100). The log2() of each of them is between 2 and 2.8. The same happens for anything whose highest '1' is in the 4th bit: it will be a number between 3 and 3.9. So you can find the position of the most significant bit by taking the log2() of the number, rounding down, and adding 1:
floor(log2(00001111)) + 1 == floor(3.9) + 1 == 3 + 1 == 4
You know how many bits are in a byte, so you can easily know the number of bits to shift left:
int numToShift = 8 - (floor(log2(bytearray[0])) + 1);
shiftedValue = bytearray[0] << numToShift;
From there, it's just a matter of keeping track of how many outstanding bits (not pushed into a bytearray yet) you have, and then pushing some/all of them on.
The above code only works for the first byte. If you put this in a loop, numToShift would need to track the next empty bit slot to shift into (you might have to shift right to fit what is left of the current byte, and then push the leftover bits into the start of the next byte). So instead of the "8 -" in the code above, you would put the current starting position. For instance, if only 3 bits were left to fill in the current byte, you would do:
int numToShift = 3 - (floor(log2(bytearray[0])) + 1);
So that number should be a variable:
int numToShift = bitsAvailableInCurrentByte - (floor(log2(bytearray[0])) + 1);
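In C#, a minimal sketch of that bit-count calculation (a hypothetical helper, not from the original answer; note that log2(0) is undefined, and floating-point rounding near exact powers of two deserves care):

static int SignificantBits(byte b)
{
    if (b == 0) return 0; // no significant bits; log2(0) is undefined
    return (int)Math.Floor(Math.Log(b, 2)) + 1;
}
// e.g. SignificantBits(0x0F) == 4, SignificantBits(0x11) == 5, SignificantBits(0x04) == 3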
Please check this code snippet. This might help you.
byte[] arrByt = new byte[] { 0xF, 0xF, 0x11, 0x4 };
byte[] result = new byte[arrByt.Length / 2]; // assumes an even number of source bytes
var en = arrByt.GetEnumerator();
int count = 0;
byte result1 = 0;
int index = 0;
while (en.MoveNext())
{
count++;
byte item = (byte)en.Current;
if (count == 1)
{
while (item != 0 && item < 128) // guard against 0, which would otherwise loop forever
{
item = (byte)(item << 1);
}
result1 ^= item;
}
if (count == 2)
{
count = 0;
result1 ^= item;
result[index] = result1;
index++;
result1 = 0;
}
}
foreach (var s in result)
{
Console.WriteLine(s.ToString("X"));
}
I am trying to port the following C++ function to C#:
QString Engine::FDigest(const QString & input)
{
if(input.size() != 32) return "";
int idx[] = {0xe, 0x3, 0x6, 0x8, 0x2},
mul[] = {2, 2, 5, 4, 3},
add[] = {0x0, 0xd, 0x10, 0xb, 0x5},
a, m, i, t, v;
QString b;
char tmp[2] = { 0, 0 };
for(int j = 0; j <= 4; j++)
{
a = add[j];
m = mul[j];
i = idx[j];
tmp[0] = input[i].toAscii();
t = a + (int)(strtol(tmp, NULL, 16));
v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
snprintf(tmp, 2, "%x", (v * m) % 0x10);
b += tmp;
}
return b;
}
Some of this code is easy to port however I'm having problems with this part:
tmp[0] = input[i].toAscii();
t = a + (int)(strtol(tmp, NULL, 16));
v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
snprintf(tmp, 2, "%x", (v * m) % 0x10);
I have found that (int)strtol(tmp, NULL, 16) corresponds to int.Parse(tmp, NumberStyles.HexNumber) in C# and snprintf to String.Format, however I'm not sure about the rest of it.
How can I port this fragment to C#?
Edit: I have a suspicion that your code actually computes an MD5 digest of the input data.
See below for a snippet based on that assumption.
Translation steps
A few hints that should work well¹
Q: tmp[0] = input[i].toAscii();
byte[] ascii = Encoding.ASCII.GetBytes(input);
tmp[0] = ascii[i];
Q: t = a + (int)(strtol(tmp, NULL, 16));
t = a + int.Parse(tmp[0].ToString(),
        System.Globalization.NumberStyles.HexNumber); // tmp[1] is always the trailing nul
Q: v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
No clue about the toLocal8bit, would need to read Qt documentation...
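That said, if the input is a plain hex-digit string (as the answer further down assumes), it most likely reduces to an ordinary substring parse:

// Assuming 'input' holds only ASCII hex digits, toLocal8Bit is just
// the raw bytes of those digits, so this should be equivalent:
v = Convert.ToInt32(input.Substring(t, 2), 16);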
Q: snprintf(tmp, 2, "%x", (v * m) % 0x10);
{
string tmptext = (v * m % 16).ToString("x"); // one lowercase hex digit, as "%x" produces
tmp[0] = tmptext[0];
tmp[1] = '\0';
}
What if ... it's just MD5?
You could try this directly to see whether it achieves what you need:
using System;
using System.Text;
using System.Security.Cryptography;
public string FDigest(string input)
{
MD5 md5 = System.Security.Cryptography.MD5.Create();
byte[] ascii = System.Text.Encoding.ASCII.GetBytes (input);
byte[] hash = md5.ComputeHash (ascii);
// Convert the byte array to hexadecimal string
StringBuilder sb = new StringBuilder();
for (int i = 0; i < hash.Length; i++)
sb.Append (hash[i].ToString ("X2")); // "x2" for lowercase
return sb.ToString();
}
¹ Explicitly not optimized; intended as quick hints. Optimize as necessary.
A few more hints:
tmp is a two-byte buffer and you only ever write to the first byte, leaving a trailing nul. So tmp is always a string of exactly one character, and you're processing a hex number one character at a time. So I think
tmp[0] = input[i].toAscii();
t = a + (int)(strtol(tmp, NULL, 16));
this is roughly int t = a + Convert.ToInt32(input.Substring(i, 1), 16); - take one digit from input and add its hex value to a, which you've looked up from a table. (I'm assuming that the toAscii is simply to map the QString character, which is already a hex digit, into ASCII for strtol, so if you have a string of hex digits already this is OK.)
Next
v = (int)(strtol(input.mid(t, 2).toLocal8Bit(), NULL, 16));
this means: look up two characters from input at offset t, i.e. input.Substring(t, 2), then convert these to a hex integer again: v = Convert.ToInt32(input.Substring(t, 2), 16);. Now, as it happens, I think you'll only actually use the second digit here anyway, since the calculation is (v * m) % 0x10, but hey. If again we're working with a QString of hex digits, then toLocal8Bit ought to be the same conversion as toAscii - I'm not clear why your code has two different functions here.
Finally convert these values to a single digit in tmp, then append that to b
snprintf(tmp, 2, "%x", (v * m) % 0x10);
b += tmp;
(2 is the length of the buffer, and since we need a trailing nul only 1 is ever written) i.e.
int digit = (v * m) % 0x10;
b += digit.ToString("x");
should do. I'd personally write the mod 16 as a logical and, & 0xf, since it's intended to strip the value down to a single digit.
Note also that in your code i is never set - I guess that's a loop or something you omitted for brevity?
So, in summary
int t = a + Convert.ToInt32(input.Substring(i, 1), 16);
int v = Convert.ToInt32(input.Substring(t, 2), 16);
int nextDigit = (v * m) & 0xf;
b += nextDigit.ToString("x");
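Putting those pieces together, the whole port might look like the following sketch (untested; it assumes the input is a 32-character hex string, as the original size() != 32 guard suggests):

using System;
using System.Text;

public static string FDigest(string input)
{
    if (input.Length != 32) return "";
    int[] idx = { 0xe, 0x3, 0x6, 0x8, 0x2 };
    int[] mul = { 2, 2, 5, 4, 3 };
    int[] add = { 0x0, 0xd, 0x10, 0xb, 0x5 };
    var b = new StringBuilder();
    for (int j = 0; j <= 4; j++)
    {
        // one hex digit at position idx[j], plus the additive constant
        int t = add[j] + Convert.ToInt32(input.Substring(idx[j], 1), 16);
        // two hex digits starting at offset t
        int v = Convert.ToInt32(input.Substring(t, 2), 16);
        // multiply, keep the low nibble, append one lowercase hex digit
        b.Append(((v * mul[j]) & 0xf).ToString("x"));
    }
    return b.ToString();
}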
I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good: that gives me a hexadecimal string.
Now I would like to use something other than hexadecimal characters; I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about this?
Edit: the reason I would like to do this is because I would like to have a smaller string than a hexadecimal string.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
var length = number.Length;
var result = string.Empty;
var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
int newlen;
do {
var value = 0;
newlen = 0;
for (var i = 0; i < length; ++i) {
value = value * fromBase + nibbles[i];
if (value >= toBase) {
if (newlen == nibbles.Count) {
nibbles.Add(0);
}
nibbles[newlen++] = value / toBase;
value %= toBase;
}
else if (newlen > 0) {
if (newlen == nibbles.Count) {
nibbles.Add(0);
}
nibbles[newlen++] = 0;
}
}
length = newlen;
result = digits[value] + result;
}
while (newlen != 0);
return result;
}
As it's coming from PHP it might not be too idiomatic C#, there are also no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
Earlier tonight I came across a codereview question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from a string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount= 8; // number of bits in a byte
// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits= "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor= Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36= new BigInteger(36);
// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
var bi= new BigInteger();
for (int x= 0; x < chars.Length; x++)
{
int i= kBase36Digits.IndexOf(chars[x]);
if (i < 0) return null; // invalid character
bi *= kBigInt36;
bi += i;
}
return bi.ToByteArray();
}
// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
// Estimate the result's length so we don't waste time realloc'ing
int result_length= (int)
Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);
// We use a List so we don't have to CopyTo a StringBuilder's characters
// to a char[], only to then Array.Reverse it later
var result= new System.Collections.Generic.List<char>(result_length);
var dividend= new BigInteger(bytes);
// IsZero's computation is less complex than evaluating "dividend > 0"
// which invokes BigInteger.CompareTo(BigInteger)
while (!dividend.IsZero)
{
BigInteger remainder;
dividend= BigInteger.DivRem(dividend, kBigInt36, out remainder);
int digit_index= Math.Abs((int)remainder);
result.Add(kBase36Digits[digit_index]);
}
// orientate the characters in big-endian ordering
result.Reverse();
// ToArray will also trim the excess chars used in length prediction
return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base64 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on codereview)
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provides the least overhead for you.
If you want a shorter string and can accept [a-zA-Z0-9] and + and / then look at Convert.ToBase64String
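For example (a quick sketch of the round trip):

byte[] hash = { 0x01, 0xAB, 0xCD, 0xEF };
string s = Convert.ToBase64String(hash);    // "AavN7w=="
byte[] back = Convert.FromBase64String(s);  // round-trips exactly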
Using BigInteger (needs the System.Numerics reference):
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";
// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
if (bytes.Length == 0 || len == 0)
{
return String.Empty;
}
// BigInteger saves in the last byte the sign. > 7F negative,
// <= 7F positive.
// If we have a "negative" number, we will prepend a 0 byte.
byte[] bytes2;
if (littleEndian)
{
if (bytes[bytes.Length - 1] <= 0x7F)
{
bytes2 = bytes;
}
else
{
// Note that Array.Resize doesn't modify the original array,
// but creates a copy and sets the passed reference to the
// new array
bytes2 = bytes;
Array.Resize(ref bytes2, bytes.Length + 1);
}
}
else
{
bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];
// We copy and reverse the array
for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
{
bytes2[j] = bytes[i];
}
}
BigInteger bi = new BigInteger(bytes2);
// A little optimization. We will do many divisions based on
// chars.Length.
BigInteger length = chars.Length;
// We pre-calc the length of the string. We know the bits of
// "information" of a byte are 8. Using Log2 we calc the bits of
// information of our new base.
if (len == -1)
{
len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
}
// We will build our string on a char[]
var chs = new char[len];
int chsIndex = 0;
while (bi > 0)
{
BigInteger remainder;
bi = BigInteger.DivRem(bi, length, out remainder);
chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
chsIndex++;
if (chsIndex >= len) // out of room in the output buffer
{
if (bi > 0)
{
throw new OverflowException();
}
}
}
// We append the zeros that we skipped at the beginning
if (littleEndian)
{
while (chsIndex < len)
{
chs[chsIndex] = chars[0];
chsIndex++;
}
}
else
{
while (chsIndex < len)
{
chs[len - chsIndex - 1] = chars[0];
chsIndex++;
}
}
return new string(chs);
}
public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
if (str.Length == 0 || len == 0)
{
return new byte[0];
}
// This should be the maximum length of the byte[] array. It's
// the opposite of the one used in ToBaseN.
// Note that it can be passed as a parameter
if (len == -1)
{
len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
}
BigInteger bi = BigInteger.Zero;
BigInteger length2 = chars.Length;
BigInteger mult = BigInteger.One;
for (int j = 0; j < str.Length; j++)
{
int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);
// We didn't find the character
if (ix == -1)
{
throw new ArgumentOutOfRangeException();
}
bi += ix * mult;
mult *= length2;
}
var bytes = bi.ToByteArray();
int len2 = bytes.Length;
// BigInteger adds a 0 byte for positive numbers that have the
// last byte > 0x7F
if (len2 >= 2 && bytes[len2 - 1] == 0)
{
len2--;
}
int len3 = Math.Min(len, len2);
byte[] bytes2;
if (littleEndian)
{
if (len == bytes.Length)
{
bytes2 = bytes;
}
else
{
bytes2 = new byte[len];
Array.Copy(bytes, bytes2, len3);
}
}
else
{
bytes2 = new byte[len];
for (int i = 0; i < len3; i++)
{
bytes2[len - i - 1] = bytes[i];
}
}
for (int i = len3; i < len2; i++)
{
if (bytes[i] != 0)
{
throw new OverflowException();
}
}
return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow (2 minutes for 100k)! To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (because the whole array needs to be divided by 36). Unless you want to work with blocks of 5 bytes and lose some bits: each Base36 symbol carries around 5.1699 bits, so 8 of these symbols carry around 41.36 bits, very near 40 bits (5 bytes); see the sketch below.
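To illustrate the block idea, here is a hypothetical variant (a sketch, not the code above): it encodes each 5-byte block (40 bits) into exactly 8 Base36 symbols, which works because 36^8 ≈ 2.8e12 exceeds 2^40 ≈ 1.1e12; the ~1.36 spare bits per block are simply wasted. It assumes the input length is a multiple of 5 and runs in O(n):

static string ToBase36Blocks(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var sb = new System.Text.StringBuilder(data.Length / 5 * 8);
    for (int i = 0; i < data.Length; i += 5)
    {
        // pack 5 bytes (40 bits) into a ulong, big-endian
        ulong block = 0;
        for (int j = 0; j < 5; j++)
            block = (block << 8) | data[i + j];
        // emit exactly 8 base-36 digits for this block
        var chunk = new char[8];
        for (int k = 7; k >= 0; k--)
        {
            chunk[k] = digits[(int)(block % 36)];
            block /= 36;
        }
        sb.Append(chunk);
    }
    return sb.ToString();
}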
Note that these methods can work both in little-endian and in big-endian mode; the endianness of the input and of the output is the same. Both methods accept a len parameter, which you can use to trim excess zeroes. Note that if you try to make the output too small to contain the input, an OverflowException will be thrown.
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a character separator. To convert back you will need to split the string and convert the string[] to byte[] using the same approach with .Select(), as sketched below.
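For example, the round trip might look like this (assuming a using System.Linq directive):

string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
byte[] back = result.Split('a').Select(byte.Parse).ToArray();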
Usually a power of 2 is used as the base - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits per character. The only challenge in that case is how to deserialize variable-length strings.
For base 36 you could treat the data as one large number, and then:
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps.
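For what it's worth, a minimal sketch of that divide-by-36 loop using BigInteger (the other answers here flesh this out; the extra zero byte keeps the value non-negative, since BigInteger's byte[] constructor is little-endian and signed):

using System.Linq;
using System.Numerics;
using System.Text;

static string ToBase36(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var value = new BigInteger(data.Concat(new byte[] { 0 }).ToArray());
    if (value.IsZero) return "0";
    var sb = new StringBuilder();
    while (!value.IsZero)
    {
        BigInteger rem;
        value = BigInteger.DivRem(value, 36, out rem);
        sb.Insert(0, digits[(int)rem]); // remainder becomes the next digit
    }
    return sb.ToString();
}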
You can use modulo.
This example encodes your byte array to a string of [0-9][a-z].
Change it if you want.
public string byteToString(byte[] byteArr)
{
int i;
char[] charArr = new char[byteArr.Length];
for (i = 0; i < byteArr.Length; i++)
{
int byt = byteArr[i] % 36; // 36 = number of available characters
if (byt < 10)
{
charArr[i] = (char)(byt + 48); //if % result is a digit
}
else
{
charArr[i] = (char)(byt + 87); //if % result is a letter
}
}
return new String(charArr);
}
If you don't want to lose data for de-encoding you can use this example:
public string byteToString(byte[] byteArr)
{
int i;
char[] charArr = new char[byteArr.Length*2];
for (i = 0; i < byteArr.Length; i++)
{
charArr[2 * i] = (char)((int)byteArr[i] / 36+48);
int byt = byteArr[i] % 36; // 36 = number of available characters
if (byt < 10)
{
charArr[2*i+1] = (char)(byt + 48); //if % result is a digit
}
else
{
charArr[2*i+1] = (char)(byt + 87); //if % result is a letter
}
}
return new String(charArr);
}
And now you have a string of double length, where the first character of each pair is the multiple of 36 and the second is the remainder. For example: 200 = 36*5 + 20 => "5k".
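A matching decoder for that lossless scheme might look like this (a sketch; it assumes the input was produced by the second byteToString above, lowercase letters only):

public byte[] stringToByte(string str)
{
    byte[] byteArr = new byte[str.Length / 2];
    for (int i = 0; i < byteArr.Length; i++)
    {
        int quotient = str[2 * i] - 48;             // multiple of 36
        char c = str[2 * i + 1];
        int remainder = c <= '9' ? c - 48 : c - 87; // digit or letter
        byteArr[i] = (byte)(quotient * 36 + remainder);
    }
    return byteArr;
}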
I have an array of bytes that I would like to store as a string. I can do this as follows:
byte[] array = new byte[] { 0x01, 0x02, 0x03, 0x04 };
string s = System.BitConverter.ToString(array);
// Result: s = "01-02-03-04"
So far so good. Does anyone know how I get this back to an array? There is no overload of BitConverter.GetBytes() that takes a string, and it seems like a nasty workaround to break the string into an array of strings and then convert each of them.
The array in question may be of variable length, probably about 20 bytes.
There is no built-in method, but here is an implementation (it could be done without the split, though):
String[] arr=str.Split('-');
byte[] array=new byte[arr.Length];
for(int i=0; i<arr.Length; i++) array[i]=Convert.ToByte(arr[i],16);
A method without Split (it makes many assumptions about the string format):
int length=(s.Length+1)/3;
byte[] arr1=new byte[length];
for (int i = 0; i < length; i++)
arr1[i] = Convert.ToByte(s.Substring(3 * i, 2), 16);
And one more method, without either split or substrings. You may get shot if you commit this to source control though. I take no responsibility for such health problems.
int length=(s.Length+1)/3;
byte[] arr1=new byte[length];
for (int i = 0; i < length; i++)
{
char sixteen = s[3 * i];
if (sixteen > '9') sixteen = (char)(sixteen - 'A' + 10);
else sixteen -= '0';
char ones = s[3 * i + 1];
if (ones > '9') ones = (char)(ones - 'A' + 10);
else ones -= '0';
arr1[i] = (byte)(16*sixteen+ones);
}
(basically implementing base16 conversion on two chars)
You can parse the string yourself:
byte[] data = new byte[(s.Length + 1) / 3];
for (int i = 0; i < data.Length; i++) {
data[i] = (byte)(
"0123456789ABCDEF".IndexOf(s[i * 3]) * 16 +
"0123456789ABCDEF".IndexOf(s[i * 3 + 1])
);
}
The neatest solution though, I believe, is using extensions:
byte[] data = s.Split('-').Select(b => Convert.ToByte(b, 16)).ToArray();
If you don't need that specific format, try using Base64, like this:
var bytes = new byte[] { 0x12, 0x34, 0x56 };
var base64 = Convert.ToBase64String(bytes);
bytes = Convert.FromBase64String(base64);
Base64 will also be substantially shorter.
If you need to use that format, this obviously won't help.
byte[] data = Array.ConvertAll<string, byte>(str.Split('-'), s => Convert.ToByte(s, 16));
I believe the following will solve this robustly.
public static byte[] HexStringToBytes(string s)
{
const string HEX_CHARS = "0123456789ABCDEF";
if (s.Length == 0)
return new byte[0];
if ((s.Length + 1) % 3 != 0)
throw new FormatException();
byte[] bytes = new byte[(s.Length + 1) / 3];
int state = 0; // 0 = expect first digit, 1 = expect second digit, 2 = expect hyphen
int currentByte = 0;
int x;
int value = 0;
foreach (char c in s)
{
switch (state)
{
case 0:
x = HEX_CHARS.IndexOf(Char.ToUpperInvariant(c));
if (x == -1)
throw new FormatException();
value = x << 4;
state = 1;
break;
case 1:
x = HEX_CHARS.IndexOf(Char.ToUpperInvariant(c));
if (x == -1)
throw new FormatException();
bytes[currentByte++] = (byte)(value + x);
state = 2;
break;
case 2:
if (c != '-')
throw new FormatException();
state = 0;
break;
}
}
return bytes;
}
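For example, a quick round trip through the BitConverter format from the question:

byte[] original = { 0x01, 0x02, 0x03, 0x04 };
string s = BitConverter.ToString(original); // "01-02-03-04"
byte[] parsed = HexStringToBytes(s);        // { 0x01, 0x02, 0x03, 0x04 }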
it seems like a nasty workaround to break the string into an array of strings and then convert each of them.
I don't think there's another way... the format produced by BitConverter.ToString is quite specific, so if there is no existing method to parse it back to a byte[], I guess you have to do it yourself
The ToString method is not really intended as a conversion, but rather to provide a human-readable format for debugging, easy printout, etc.
I'd rethink the byte[] -> string -> byte[] requirement and probably prefer SLaks' Base64 solution.