Convert from BitArray to Byte - c#

I have a BitArray with a length of 8, and I need a function to convert it to a byte. How do I do it?
Specifically, I need a correct implementation of ConvertToByte:
BitArray bit = new BitArray(new bool[]
{
    false, false, false, false,
    false, false, false, true
});

// How to write ConvertToByte?
byte myByte = ConvertToByte(bit);
var recoveredBit = new BitArray(new[] { myByte });
Assert.AreEqual(bit, recoveredBit);

This should work:
byte ConvertToByte(BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("bits");
    }
    byte[] bytes = new byte[1];
    bits.CopyTo(bytes, 0);
    return bytes[0];
}
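
As a minimal usage sketch (the comment values are my own): CopyTo packs bit index 0 into the least significant bit of the byte, so the question's input, with its single true at index 7, comes out as 128:

byte myByte = ConvertToByte(new BitArray(new bool[]
{
    false, false, false, false,
    false, false, false, true
}));
// myByte == 128 (0x80): index 0 maps to the LSB, index 7 to the MSB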

A bit late post, but this works for me:
public static byte[] BitArrayToByteArray(BitArray bits)
{
    byte[] ret = new byte[(bits.Length - 1) / 8 + 1];
    bits.CopyTo(ret, 0);
    return ret;
}
Works with:

string text = "Test";
byte[] bytes = System.Text.Encoding.ASCII.GetBytes(text);

BitArray bits = new BitArray(bytes);
byte[] bytesBack = BitArrayToByteArray(bits);

string textBack = System.Text.Encoding.ASCII.GetString(bytesBack);
// bytes == bytesBack
// text == textBack

A poor man's solution:
protected byte ConvertToByte(BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("illegal number of bits");
    }

    byte b = 0;
    if (bits.Get(7)) b++;
    if (bits.Get(6)) b += 2;
    if (bits.Get(5)) b += 4;
    if (bits.Get(4)) b += 8;
    if (bits.Get(3)) b += 16;
    if (bits.Get(2)) b += 32;
    if (bits.Get(1)) b += 64;
    if (bits.Get(0)) b += 128;
    return b;
}

Unfortunately, the BitArray class is only partially implemented in .NET Core (UWP). For example, the BitArray there does not expose the CopyTo() and Count() members. I wrote this extension to fill the gap:
public static IEnumerable<byte> ToBytes(this BitArray bits, bool MSB = false)
{
    int bitCount = 7;
    int outByte = 0;

    foreach (bool bitValue in bits)
    {
        if (bitValue)
            outByte |= MSB ? 1 << bitCount : 1 << (7 - bitCount);
        if (bitCount == 0)
        {
            yield return (byte)outByte;
            bitCount = 8;
            outByte = 0;
        }
        bitCount--;
    }

    // Last partially decoded byte
    if (bitCount < 7)
        yield return (byte)outByte;
}
The method decodes the BitArray to a byte sequence using LSB (least significant bit first) logic. This is the same logic used by the BitArray class. Calling the method with the MSB parameter set to true will produce an MSB-first decoded byte sequence. In that case, remember that you may also need to reverse the final output byte collection.
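As a rough usage sketch (example values are mine, and ToArray() assumes a System.Linq import):

var bits = new BitArray(new byte[] { 0x2A });        // 00101010, stored LSB-first
byte[] lsbFirst = bits.ToBytes().ToArray();          // { 0x2A } - same layout BitArray uses
byte[] msbFirst = bits.ToBytes(MSB: true).ToArray(); // { 0x54 } - bit order reversed per byte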

This should do the trick. However, the previous answer is quite likely the better option.
public byte ConvertToByte(BitArray bits)
{
    if (bits.Count > 8)
        throw new ArgumentException("ConvertToByte can only work with a BitArray containing a maximum of 8 values");

    byte result = 0;
    for (byte i = 0; i < bits.Count; i++)
    {
        if (bits[i])
            result |= (byte)(1 << i);
    }
    return result;
}
In the example you posted the resulting byte will be 0x80. In other words, the first value in the BitArray corresponds to the first (least significant) bit in the returned byte.

This should be the ultimate one. It works with any length of array.
private List<byte> BoolList2ByteList(List<bool> values)
{
    List<byte> ret = new List<byte>();

    int count = 0;
    byte currentByte = 0;

    foreach (bool b in values)
    {
        if (b) currentByte |= (byte)(1 << count);
        count++;
        if (count == 8) { ret.Add(currentByte); currentByte = 0; count = 0; }
    }

    if (count > 0) ret.Add(currentByte);

    return ret;
}

In addition to @JonSkeet's answer, you can use an extension method as below:
public static byte ToByte(this BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("bits");
    }
    byte[] bytes = new byte[1];
    bits.CopyTo(bytes, 0);
    return bytes[0];
}
And use it like:
BitArray foo = new BitArray(new bool[]
{
    false, false, false, false, false, false, false, true
});
foo.ToByte();

byte GetByte(BitArray input)
{
    int len = input.Length;
    if (len > 8)
        len = 8;
    int output = 0;
    for (int i = 0; i < len; i++)
        if (input.Get(i))
            output += (1 << (len - 1 - i)); // this part depends on your system (Big/Little)
            //output += (1 << i);           // depends on system
    return (byte)output;
}
Cheers!

Little-endian byte array converter: the first bit (indexed with "0") in the BitArray is assumed to represent the least significant bit (the rightmost bit in the bit octet), which is interpreted as binary "zero" or "one".
public static class BitArrayExtender
{
    public static byte[] ToByteArray(this BitArray bits)
    {
        const int BYTE = 8;
        int length = (bits.Count / BYTE) + ((bits.Count % BYTE == 0) ? 0 : 1);
        var bytes = new byte[length];

        for (int i = 0; i < bits.Length; i++)
        {
            int bitIndex  = i % BYTE;
            int byteIndex = i / BYTE;

            int mask = (bits[i] ? 1 : 0) << bitIndex;
            bytes[byteIndex] |= (byte)mask;
        } //for

        return bytes;
    } //ToByteArray
} //class
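
A quick round trip (my own example, assuming the class above is in scope) confirms it uses the same LSB-first layout as BitArray itself:

var bits = new BitArray(new byte[] { 0xC8, 0x01 });
byte[] back = bits.ToByteArray(); // { 0xC8, 0x01 } - identical to the input bytes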

Related

convert from BitArray to 16-bit unsigned integer in c#

BitArray bits = new BitArray(16); // size: 16 bits

There is a BitArray and I want to convert 16 bits from this array to an unsigned integer in C#. I cannot use CopyTo for the conversion; is there another method to convert 16 bits to a UInt16?
You can do it like this:
UInt16 res = 0;
for (int i = 0; i < 16; i++)
{
    if (bits[i])
    {
        res |= (UInt16)(1 << i);
    }
}
This algorithm checks the 16 least significant bits one by one, and uses the bitwise OR operation to set the corresponding bit of the result.
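For instance (a minimal sketch with made-up bit values):

var bits = new BitArray(16);
bits[0] = true;  // least significant bit
bits[15] = true; // most significant bit
// after the loop above: res == 0x8001 (32769)
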
You can loop through it and compose the value yourself.

var bits = new BitArray(16);
bits[1] = true;

var value = 0;

for (int i = 0; i < bits.Length; i++)
{
    if (bits[i])
    {
        value |= (1 << i);
    }
}
This should do the job:

private uint BitArrayToUnSignedInt(BitArray bitArray)
{
    ushort res = 0;
    for (int i = bitArray.Length - 1; i >= 0; i--)
    {
        if (bitArray[i])
        {
            res = (ushort)(res + (ushort)Math.Pow(2, bitArray.Length - i - 1));
        }
    }
    return res;
}
You can also check this other answer to that question already on Stack Overflow:
Convert bit array to uint or similar packed value

How can I calculate Longitudinal Redundancy Check (LRC)?

I've tried the example from Wikipedia: http://en.wikipedia.org/wiki/Longitudinal_redundancy_check

This is the code for LRC (C#):

/// <summary>
/// Longitudinal Redundancy Check (LRC) calculator for a byte array.
/// ex) DATA (hex 6 bytes): 02 30 30 31 23 03
///     LRC  (hex 1 byte ): EC
/// </summary>
public static byte calculateLRC(byte[] bytes)
{
    byte LRC = 0x00;
    for (int i = 0; i < bytes.Length; i++)
    {
        LRC = (LRC + bytes[i]) & 0xFF;
    }
    return ((LRC ^ 0xFF) + 1) & 0xFF;
}
It said the result is "EC" but I get "71"; what am I doing wrong?
Thanks.
Here's a cleaned-up version that doesn't do all those useless operations (instead of discarding the high bits every time, they're discarded all at once at the end), and it gives the result you observed. This is the version that uses addition, but it has a negation at the end - might as well subtract and skip the negation. That's a valid transformation even in the case of overflow.
public static byte calculateLRC(byte[] bytes)
{
    int LRC = 0;
    for (int i = 0; i < bytes.Length; i++)
    {
        LRC -= bytes[i];
    }
    return (byte)LRC;
}
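
To see why 71 comes out instead of EC, here is a quick worked check with the question's sample frame (the test harness around the function above is my own):

byte[] frame = { 0x02, 0x30, 0x30, 0x31, 0x23, 0x03 };
// Sum = 0xB9; two's complement = 0x100 - 0xB9 = 0x47, i.e. 71 in decimal.
Console.WriteLine(calculateLRC(frame)); // prints 71 - not Wikipedia's claimed 0xEC
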
Here's the alternative LRC (a simple xor of bytes)
public static byte calculateLRC(byte[] bytes)
{
    byte LRC = 0;
    for (int i = 0; i < bytes.Length; i++)
    {
        LRC ^= bytes[i];
    }
    return LRC;
}
And Wikipedia is simply wrong in this case, both in the code (doesn't compile) and in the expected result.
Guess this one looks cooler ;)
public static byte calculateLRC(byte[] bytes)
{
    return bytes.Aggregate<byte, byte>(0, (x, y) => (byte)(x ^ y));
}
If someone wants to get the LRC char from a string:
public static char CalculateLRC(string toEncode)
{
    byte[] bytes = Encoding.ASCII.GetBytes(toEncode);
    byte LRC = 0;
    for (int i = 0; i < bytes.Length; i++)
    {
        LRC ^= bytes[i];
    }
    return Convert.ToChar(LRC);
}
The corrected (compiling) Wikipedia version is as follows:

private byte calculateLRC(byte[] b)
{
    byte lrc = 0x00;
    for (int i = 0; i < b.Length; i++)
    {
        lrc = (byte)((lrc + b[i]) & 0xFF);
    }
    lrc = (byte)(((lrc ^ 0xFF) + 1) & 0xFF);
    return lrc;
}
I created this for Arduino to understand the algorithm (of course it's not written in the most efficient way):

String calculateModbusAsciiLRC(String input)
{
    // Refer to this document: http://www.simplymodbus.ca/ASCII.htm
    // Make sure to omit the leading colon, so the input string has an even number of characters.
    if ((input.length() % 2) != 0) { return "ERROR: COMMAND SHOULD HAVE AN EVEN NUMBER OF CHARACTERS"; }

    byte byteArray[input.length() + 1];
    input.getBytes(byteArray, sizeof(byteArray));

    byte LRC = 0;
    for (int i = 0; i < sizeof(byteArray) / 2; i++)
    {
        // Getting the sum of all registers: convert each pair of hex digits to a byte
        uint x = 0;
        if (47 < byteArray[i * 2] && byteArray[i * 2] < 58) { x = byteArray[i * 2] - 48; } // '0'-'9'
        else { x = byteArray[i * 2] - 55; }                                                // 'A'-'F'

        uint y = 0;
        if (47 < byteArray[i * 2 + 1] && byteArray[i * 2 + 1] < 58) { y = byteArray[i * 2 + 1] - 48; }
        else { y = byteArray[i * 2 + 1] - 55; }

        LRC += x * 16 + y;
    }

    LRC = ~LRC + 1; // Getting the two's complement

    String checkSum = String(LRC, HEX);
    checkSum.toUpperCase(); // Converting to upper case, e.g. bc to BC - optional, some devices are case-insensitive
    return checkSum;
}
I realize that this question is pretty old, but I had trouble figuring out how to do this. It's working now, so I figured I should paste the code. In my case, the checksum needs to be returned as an ASCII string (this one is PHP):
public function getLrc($string)
{
    $LRC = 0;
    // Get hex checksum.
    foreach (str_split($string, 1) as $char) {
        $LRC ^= ord($char);
    }
    $hex = dechex($LRC);

    // Convert hex to string.
    $str = '';
    for ($i = 0; $i < strlen($hex); $i += 2) {
        $str .= chr(hexdec(substr($hex, $i, 2)));
    }

    return $str;
}

Difference in outputs between C# and java

I am trying to write a Java equivalent for a function in C#. The code follows.
In C#:
byte[] a = new byte[sizeof(Int32)];
readBytes(fStream, a, 0, sizeof(Int32)); // fStream is a System.IO.FileStream
int answer = BitConverter.ToInt32(a, 0);
In Java:
InputStream fstream = new FileInputStream(fileName);
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
byte[] a = new byte[4];
readBytes(in, a, 0, 4);
int answer = byteArrayToInt(a);
Both Java and C#:
int readBytes(Stream stream, byte[] storageBuffer, int offset, int requiredCount)
{
    int totalBytesRead = 0;
    while (totalBytesRead < requiredCount)
    {
        int bytesRead = stream.Read(
            storageBuffer,
            offset + totalBytesRead,
            requiredCount - totalBytesRead);
        if (bytesRead == 0)
        {
            break; // while
        }
        totalBytesRead += bytesRead;
    }
    return totalBytesRead;
}
Output:
In C#:   answer = 192 (correct)
In Java: answer = -1073741824

There is a difference between the two. I am reading from an encoded file input stream and parsing the first four bytes. The C# code produces 192, which is the correct answer, while Java produces -1073741824, which is wrong. Why, and how do I fix it?
EDIT
Here is my byteArrayToInt
public static int byteArrayToInt(byte[] b, int offset) {
    int value = 0;
    for (int i = 0; i < 4; i++) {
        int shift = (4 - 1 - i) * 8;
        value += (b[i + offset] & 0x000000FF) << shift;
    }
    return value;
}
SOLUTION
The right solution for byteArrayToInt
public static int byteArrayToInt(byte[] b)
{
    long value = 0;
    for (int i = 0; i < b.length; i++)
    {
        value += (b[i] & 0xff) << (8 * i);
    }
    return (int) value;
}
This gives the right output
In Java bytes are signed, so your -64 in a Java byte is the binary equivalent of 192 in a C# byte (192 == 256 - 64).
The problem is probably in byteArrayToInt(), where you assume the byte is unsigned during the conversion.
A simple
`b & 0x000000FF`
might help in that case.
Java's byte type is signed, as soulcheck wrote. The binary value for 192 as an unsigned 8-bit integer is 11000000.
If you read that value in a signed (two's complement) format, the leading 1 indicates a negative number: 11000000 is -128 + 64, which is -64.
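
To make the sign extension visible, here is a small C# sketch (C#'s sbyte behaves like Java's byte here; the example values are mine):

sbyte signed = -64;         // bit pattern 11000000, same as (byte)192
int widened = signed;       // sign-extends to -64 (0xFFFFFFC0)
int masked = signed & 0xFF; // masks back to 192 (0x000000C0)
// In the Java byteArrayToInt, masking each byte with & 0xFF before shifting applies the same fix.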

How to convert a byte array (MD5 hash) into a string (36 chars)?

I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good - it gives me a hexadecimal string.
Now I would like to use something different than hexadecimal characters: I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about it?
Edit: the reason I would like to do this is because I would like to have a smaller string than a hexadecimal one.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var length = number.Length;
    var result = string.Empty;

    var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
    int newlen;
    do {
        var value = 0;
        newlen = 0;

        for (var i = 0; i < length; ++i) {
            value = value * fromBase + nibbles[i];
            if (value >= toBase) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = value / toBase;
                value %= toBase;
            }
            else if (newlen > 0) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = 0;
            }
        }
        length = newlen;
        result = digits[value] + result;
    }
    while (newlen != 0);

    return result;
}
As it's coming from PHP it might not be too idiomatic C#, and there are also no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
See it in action.
Earlier tonight I came across a Code Review question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from a string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount = 8; // number of bits in a byte

// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits = "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor = Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36 = new BigInteger(36);

// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
    var bi = new BigInteger();
    for (int x = 0; x < chars.Length; x++)
    {
        int i = kBase36Digits.IndexOf(chars[x]);
        if (i < 0) return null; // invalid character
        bi *= kBigInt36;
        bi += i;
    }
    return bi.ToByteArray();
}

// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
    // Estimate the result's length so we don't waste time realloc'ing
    int result_length = (int)
        Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);

    // We use a List so we don't have to CopyTo a StringBuilder's characters
    // to a char[], only to then Array.Reverse it later
    var result = new System.Collections.Generic.List<char>(result_length);

    var dividend = new BigInteger(bytes);
    // IsZero's computation is less complex than evaluating "dividend > 0"
    // which invokes BigInteger.CompareTo(BigInteger)
    while (!dividend.IsZero)
    {
        BigInteger remainder;
        dividend = BigInteger.DivRem(dividend, kBigInt36, out remainder);
        int digit_index = Math.Abs((int)remainder);
        result.Add(kBase36Digits[digit_index]);
    }

    // orientate the characters in big-endian ordering
    result.Reverse();
    // ToArray will also trim the excess chars used in length prediction
    return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base64 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on Code Review).
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provides the least overhead for you.
If you want a shorter string and can accept [a-zA-Z0-9] and + and / then look at Convert.ToBase64String
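For example (a minimal sketch of my own; assumes using System.Security.Cryptography and System.Text):

byte[] hash = MD5.Create().ComputeHash(Encoding.UTF8.GetBytes("hello"));
string hex = BitConverter.ToString(hash).Replace("-", ""); // 32 chars
string b64 = Convert.ToBase64String(hash);                 // 24 chars (with == padding)
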
Using BigInteger (needs the System.Numerics reference):
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";

// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have the same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
    if (bytes.Length == 0 || len == 0)
    {
        return String.Empty;
    }

    // BigInteger saves in the last byte the sign. > 7F negative,
    // <= 7F positive.
    // If we have a "negative" number, we will prepend a 0 byte.
    byte[] bytes2;

    if (littleEndian)
    {
        if (bytes[bytes.Length - 1] <= 0x7F)
        {
            bytes2 = bytes;
        }
        else
        {
            // Note that Array.Resize doesn't modify the original array,
            // but creates a copy and sets the passed reference to the
            // new array
            bytes2 = bytes;
            Array.Resize(ref bytes2, bytes.Length + 1);
        }
    }
    else
    {
        bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];

        // We copy and reverse the array
        for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
        {
            bytes2[j] = bytes[i];
        }
    }

    BigInteger bi = new BigInteger(bytes2);

    // A little optimization. We will do many divisions based on
    // chars.Length.
    BigInteger length = chars.Length;

    // We pre-calc the length of the string. We know the bits of
    // "information" of a byte are 8. Using Log2 we calc the bits of
    // information of our new base.
    if (len == -1)
    {
        len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
    }

    // We will build our string on a char[]
    var chs = new char[len];

    int chsIndex = 0;

    while (bi > 0)
    {
        BigInteger remainder;
        bi = BigInteger.DivRem(bi, length, out remainder);

        chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
        chsIndex++;

        if (chsIndex >= len)
        {
            if (bi > 0)
            {
                throw new OverflowException();
            }
        }
    }

    // We append the zeros that we skipped at the beginning
    if (littleEndian)
    {
        while (chsIndex < len)
        {
            chs[chsIndex] = chars[0];
            chsIndex++;
        }
    }
    else
    {
        while (chsIndex < len)
        {
            chs[len - chsIndex - 1] = chars[0];
            chsIndex++;
        }
    }

    return new string(chs);
}
public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
    if (str.Length == 0 || len == 0)
    {
        return new byte[0];
    }

    // This should be the maximum length of the byte[] array. It's
    // the opposite of the one used in ToBaseN.
    // Note that it can be passed as a parameter
    if (len == -1)
    {
        len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
    }

    BigInteger bi = BigInteger.Zero;
    BigInteger length2 = chars.Length;
    BigInteger mult = BigInteger.One;

    for (int j = 0; j < str.Length; j++)
    {
        int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);

        // We didn't find the character
        if (ix == -1)
        {
            throw new ArgumentOutOfRangeException();
        }

        bi += ix * mult;
        mult *= length2;
    }

    var bytes = bi.ToByteArray();
    int len2 = bytes.Length;

    // BigInteger adds a 0 byte for positive numbers that have the
    // last byte > 0x7F
    if (len2 >= 2 && bytes[len2 - 1] == 0)
    {
        len2--;
    }

    int len3 = Math.Min(len, len2);

    byte[] bytes2;

    if (littleEndian)
    {
        if (len == bytes.Length)
        {
            bytes2 = bytes;
        }
        else
        {
            bytes2 = new byte[len];
            Array.Copy(bytes, bytes2, len3);
        }
    }
    else
    {
        bytes2 = new byte[len];

        for (int i = 0; i < len3; i++)
        {
            bytes2[len - i - 1] = bytes[i];
        }
    }

    for (int i = len3; i < len2; i++)
    {
        if (bytes[i] != 0)
        {
            throw new OverflowException();
        }
    }

    return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow! (2 minutes for 100k.) To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (this is because the whole array needs to be divided by 36). Unless you want to work with blocks of 5 bytes and lose some bits: each symbol of Base36 carries around 5.169925001 bits, so 8 of these symbols would carry 41.35940001 bits - very near 40 bits (5 bytes).
Note that these methods can work both in little-endian mode and in big-endian mode. The endianness of the input and of the output is the same. Both methods accept a len parameter; you can use it to trim excess zeros. Note that if you try to make an output too small to contain the input, an OverflowException will be thrown.
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a char separator. To convert back you will need to split the string, and convert the string[] to byte[] using the same approach with .Select().
Usually a power of 2 is used - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits per character. The only challenge in that case is how to deserialize variable-length strings.
For base 36 you could treat the data as a large number, and then (see the sketch after this list):
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps.
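A minimal sketch of that divide-by-36 loop (my own code, assuming a System.Numerics reference, System.Linq, and a big-endian reading of the hash bytes):

static string ToBase36(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    // Reverse to little-endian and append 0x00 so BigInteger reads the value as non-negative.
    var n = new BigInteger(data.Reverse().Concat(new byte[] { 0 }).ToArray());
    var result = new StringBuilder();
    do
    {
        result.Insert(0, digits[(int)(n % 36)]); // the remainder becomes the next character
        n /= 36;                                 // repeat until the division results in 0
    } while (!n.IsZero);
    return result.ToString();
}
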
You can use modulo.
This example encodes your byte array to a string of [0-9][a-z].
Change it if you want.
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length];
    for (i = 0; i < byteArr.Length; i++)
    {
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[i] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[i] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
If you don't want to lose data for decoding, you can use this example:
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length * 2];
    for (i = 0; i < byteArr.Length; i++)
    {
        charArr[2 * i] = (char)((int)byteArr[i] / 36 + 48);
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[2 * i + 1] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[2 * i + 1] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
Now you have a string of double length, where the chars in odd positions hold the quotient (the multiples of 36) and those in even positions the remainder. For example: 200 = 36*5 + 20 => "5k".
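For completeness, a sketch of the matching decoder (my own code with a hypothetical name, inverting the +48/+87 mapping above):

public byte[] stringToByte(string str)
{
    byte[] byteArr = new byte[str.Length / 2];
    for (int i = 0; i < byteArr.Length; i++)
    {
        int high = str[2 * i] - 48;            // quotient digit is always '0'-'7'
        char c = str[2 * i + 1];
        int low = (c < 'a') ? c - 48 : c - 87; // digit or letter
        byteArr[i] = (byte)(high * 36 + low);
    }
    return byteArr;
}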

Convert bool[] to byte[]

I have a List<bool> which I want to convert to a byte[]. How do I do this?
list.ToArray() creates a bool[].
Here are two approaches, depending on whether you want to pack the bits into bytes, or have as many bytes as original bits:
bool[] bools = { true, false, true, false, false, true, false, true, true };

// basic - same count
byte[] arr1 = Array.ConvertAll(bools, b => b ? (byte)1 : (byte)0);

// pack (in this case, using the first bool as the lsb - if you want
// the first bool as the msb, reverse things ;-p)
int bytes = bools.Length / 8;
if ((bools.Length % 8) != 0) bytes++;
byte[] arr2 = new byte[bytes];
int bitIndex = 0, byteIndex = 0;
for (int i = 0; i < bools.Length; i++)
{
    if (bools[i])
    {
        arr2[byteIndex] |= (byte)(((byte)1) << bitIndex);
    }
    bitIndex++;
    if (bitIndex == 8)
    {
        bitIndex = 0;
        byteIndex++;
    }
}
Marc's answer is good already, but...
Assuming you are the kind of person that is comfortable doing bit-twiddling, or just want to write less code and squeeze out some more performance, then this here code is for you good sir / madame:
byte[] PackBoolsInByteArray(bool[] bools)
{
    int len = bools.Length;
    int bytes = len >> 3;
    if ((len & 0x07) != 0) ++bytes;
    byte[] arr2 = new byte[bytes];
    for (int i = 0; i < bools.Length; i++)
    {
        if (bools[i])
            arr2[i >> 3] |= (byte)(1 << (i & 0x07));
    }
    return arr2;
}
It does the exact same thing as Marc's code, it's just more succinct.
Of course if we really want to go all out we could unroll it too...
...and while we are at it, let's throw in a curve ball on the return type!
IEnumerable<byte> PackBoolsInByteEnumerable(bool[] bools)
{
    int len = bools.Length;
    int rem = len & 0x07; // hint: rem = len % 8.

    /*
    byte[] byteArr = rem == 0 // length is a multiple of 8? (no remainder?)
        ? new byte[len >> 3]      // -yes-
        : new byte[(len >> 3) + 1]; // -no-
    */

    const byte BZ = 0,
        B0 = 1 << 0, B1 = 1 << 1, B2 = 1 << 2, B3 = 1 << 3,
        B4 = 1 << 4, B5 = 1 << 5, B6 = 1 << 6, B7 = 1 << 7;

    byte b;
    int i = 0;
    for (int mul = len & ~0x07; i < mul; i += 8) // hint: len = mul + rem.
    {
        b = bools[i] ? B0 : BZ;
        if (bools[i + 1]) b |= B1;
        if (bools[i + 2]) b |= B2;
        if (bools[i + 3]) b |= B3;
        if (bools[i + 4]) b |= B4;
        if (bools[i + 5]) b |= B5;
        if (bools[i + 6]) b |= B6;
        if (bools[i + 7]) b |= B7;

        //byteArr[i >> 3] = b;
        yield return b;
    }

    if (rem != 0) // take care of the remainder...
    {
        b = bools[i] ? B0 : BZ; // (there is at least one more bool.)
        switch (rem) // rem is [1:7] (fall-through switch!)
        {
            case 7:
                if (bools[i + 6]) b |= B6;
                goto case 6;
            case 6:
                if (bools[i + 5]) b |= B5;
                goto case 5;
            case 5:
                if (bools[i + 4]) b |= B4;
                goto case 4;
            case 4:
                if (bools[i + 3]) b |= B3;
                goto case 3;
            case 3:
                if (bools[i + 2]) b |= B2;
                goto case 2;
            case 2:
                if (bools[i + 1]) b |= B1;
                break;
            // case 1 is the statement above the switch!
        }

        //byteArr[i >> 3] = b; // write the last byte to the array.
        yield return b; // yield the last byte.
    }

    //return byteArr;
}
Tip: As you can see I included the code for returning a byte[] as comments. Simply comment out the two yield statements instead if that is what you want/need.
Twiddling Hints:
Shifting x >> 3 is a cheaper x / 8.
Masking x & 0x07 is a cheaper x % 8.
Masking x & ~0x07 is a cheaper x - x % 8.
Edit:
Here is some example documentation:
/// <summary>
/// Bit-packs an array of booleans into bytes, one bit per boolean.
/// </summary><remarks>
/// Booleans are bit-packed into bytes, in order, from least significant
/// bit to most significant bit of each byte.<br/>
/// If the length of the input array isn't a multiple of eight, then one
/// or more of the most significant bits in the last byte returned will
/// be unused. Unused bits are zero / unset.
/// </remarks>
/// <param name="bools">An array of booleans to pack into bytes.</param>
/// <returns>
/// An IEnumerable<byte> of bytes each containing (up to) eight
/// bit-packed booleans.
/// </returns>
You can use LINQ. This won't be efficient, but will be simple. I'm assuming that you want one byte per bool.
bool[] a = new bool[] { true, false, true, true, false, true };
byte[] b = (from x in a select x ? (byte)0x1 : (byte)0x0).ToArray();
Or the IEnumerable approach to AnorZaken's answer:
static IEnumerable<byte> PackBools(IEnumerable<bool> bools)
{
    int bitIndex = 0;
    byte currentByte = 0;

    foreach (bool val in bools) {
        if (val)
            currentByte |= (byte)(1 << bitIndex);
        if (++bitIndex == 8) {
            yield return currentByte;
            bitIndex = 0;
            currentByte = 0;
        }
    }

    if (bitIndex != 0) {
        yield return currentByte;
    }
}
And the according unpacking where paddingEnd means the amount of bits to discard from the last byte to unpack:
static IEnumerable<bool> UnpackBools(IEnumerable<byte> bytes, int paddingEnd = 0)
{
    using (var enumerator = bytes.GetEnumerator()) {
        bool last = !enumerator.MoveNext();
        while (!last) {
            byte current = enumerator.Current;
            last = !enumerator.MoveNext();
            for (int i = 0; i < 8 - (last ? paddingEnd : 0); i++) {
                yield return (current & (1 << i)) != 0;
            }
        }
    }
}
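
A round trip tying the two together (example values are mine; paddingEnd is the number of unused high bits in the last byte):

bool[] input = { true, false, true, true, false, true };     // 6 bools
byte[] packed = PackBools(input).ToArray();                  // { 0x2D } = 00101101
int paddingEnd = (8 - input.Length % 8) % 8;                 // 2 unused bits
bool[] unpacked = UnpackBools(packed, paddingEnd).ToArray(); // equals input again
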
If you have any control over the type of list, try to make it a List<byte>, which will then produce the byte[] on ToArray(). If you have an ArrayList, you can use:

(byte[])list.ToArray(typeof(byte));

To get the List<byte>, you could create one with your unspecified list iterator as an input to the constructor, and then produce the ToArray()? Or copy each item, casting to a new byte from bool?
Some info on what type of list it is might help.
Have a look at the BitConverter class. Depending on the exact nature of your requirement, it may solve your problem quite neatly.
Another LINQ approach, less efficient than @hfcs101's, but it would easily work for other value types as well:
var a = new [] { true, false, true, true, false, true };
byte[] b = a.Select(BitConverter.GetBytes).SelectMany(x => x).ToArray();
