I am trying to write a Java equivalent for a function in C#. The code follows.
In C#:
byte[] a = new byte[sizeof(Int32)];
readBytes(fStream, a, 0, sizeof(Int32)); // fStream is a System.IO.FileStream
int answer = BitConverter.ToInt32(a, 0);
In Java:
InputStream fstream = new FileInputStream(fileName);
DataInputStream in = new DataInputStream(fstream);
BufferedReader br = new BufferedReader(new InputStreamReader(in));
byte[] a = new byte[4];
readBytes(in, a, 0, 4);
int answer = byteArrayToInt(a);
Both Java and C#:
int readBytes(Stream stream, byte[] storageBuffer, int offset, int requiredCount)
{
    int totalBytesRead = 0;
    while (totalBytesRead < requiredCount)
    {
        int bytesRead = stream.Read(
            storageBuffer,
            offset + totalBytesRead,
            requiredCount - totalBytesRead);
        if (bytesRead == 0)
        {
            break; // while
        }
        totalBytesRead += bytesRead;
    }
    return totalBytesRead;
}
Output:
In C#: answer = 192 (Correct)
In Java: answer = -1073741824
There is a difference between the two. I am reading from an encoded file input stream and parsing the first four bytes. The C# code produces 192, which is the correct answer, while the Java code produces -1073741824, which is wrong. Why, and how do I fix it?
EDIT
Here is my byteArrayToInt
public static int byteArrayToInt(byte[] b, int offset) {
    int value = 0;
    for (int i = 0; i < 4; i++) {
        int shift = (4 - 1 - i) * 8;
        value += (b[i + offset] & 0x000000FF) << shift;
    }
    return value;
}
SOLUTION
The right solution for byteArrayToInt
public static int byteArrayToInt(byte[] b)
{
    long value = 0;
    for (int i = 0; i < b.length; i++)
    {
        value += (b[i] & 0xff) << (8 * i);
    }
    return (int) value;
}
This gives the right output
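For reference, a minimal C# check (my addition, not from the original post) showing why the corrected loop shifts b[i] left by 8 * i: BitConverter.ToInt32 reads the array in platform byte order, which is little-endian on typical hardware, so the first byte is the least significant:

byte[] a = { 0xC0, 0x00, 0x00, 0x00 }; // the four bytes read from the file
Console.WriteLine(BitConverter.IsLittleEndian); // True on most machines
Console.WriteLine(BitConverter.ToInt32(a, 0));  // 192: a[0] is the low byte

The original big-endian loop put 0xC0 into the top byte instead, which is exactly 0xC0000000 = -1073741824.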
In Java bytes are signed, so your -64 in a Java byte is the binary equivalent of 192 in a C# byte (192 == 256 - 64).
The problem is probably in byteArrayToInt(), where you assume the byte is unsigned during the conversion.
A simple
`b & 0x000000FF`
might help in that case.
Java's byte type is signed, as soulcheck wrote. The binary value for 192 in an unsigned 8-bit integer is 11000000.
If you read that value in a signed (two's complement) format, the leading 1 marks a negative number, so 11000000 is interpreted as -64.
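To see the reinterpretation concretely, a two-line C# sketch (my addition, using sbyte to mimic Java's signed byte):

sbyte s = unchecked((sbyte)0xC0); // bit pattern 11000000
Console.WriteLine(s);             // prints -64, i.e. 192 - 256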
Related
I have this function in a Java program.
private static byte[] converToByte(String s)
{
    byte[] output = new byte[s.length() / 2];
    for (int i = 0, j = 0; i < s.length(); i += 2, j++)
    {
        output[j] = (byte)(Integer.parseInt(s.substring(i, i + 2), 16));
    }
    return output;
}
I am trying to create the same thing in C#, but I'm having trouble. I tried this:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
But after a couple of iterations I got a System.OverflowException, what would be the instruction in C#?
Thanks.
private static sbyte[] converToByte(string s)
{
    sbyte[] output = new sbyte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        output[j] = (sbyte)(Convert.ToInt32(s.Substring(i, 2), 16));
    }
    return output;
}
You are using the wrong data type in your line:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
Short name   .NET class   Type               Width (bits)   Range
byte         Byte         Unsigned integer   8              0 to 255
short        Int16        Signed integer     16             -32,768 to 32,767

You are getting an overflow exception because an Int16 (short) is far too big to fit into a byte.
After struggling with this problem myself, I realised the real problem is that Java's substring method is:
substring(int beginIndex, int endIndex)
while C#'s implementation takes:
Substring(int startIndex, int length)
This means that in C# the same code grabs ever larger chunks of characters, causing an overflow.
@Dave Doknjas was on the right track, but with the corrected (smaller) chunk size you can still convert straight to a byte:
output[j] = Convert.ToByte(str.Substring(i, 2), 16);
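To see the difference concretely, a small sketch (mine, with a made-up input string) of the two substring semantics:

// Java: "0a1b2c".substring(2, 4) returns "1b"   (second argument is an end index, exclusive)
// C#:   "0a1b2c".Substring(2, 4) returns "1b2c" (second argument is a length)
string chunk = "0a1b2c".Substring(2, 2); // "1b": exactly the two hex digits wanted
byte b = Convert.ToByte(chunk, 16);      // 0x1B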
I've got a byte array that was created using a hash function. I would like to convert this array into a string. So far so good: it gives me a hexadecimal string.
Now I would like to use something different than hexadecimal characters; I would like to encode the byte array with these 36 characters: [a-z][0-9].
How would I go about it?
Edit: the reason I want to do this is to get a smaller string than the hexadecimal one.
I adapted my arbitrary-length base conversion function from this answer to C#:
static string BaseConvert(string number, int fromBase, int toBase)
{
    var digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var length = number.Length;
    var result = string.Empty;
    var nibbles = number.Select(c => digits.IndexOf(c)).ToList();
    int newlen;
    do {
        var value = 0;
        newlen = 0;
        for (var i = 0; i < length; ++i) {
            value = value * fromBase + nibbles[i];
            if (value >= toBase) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = value / toBase;
                value %= toBase;
            }
            else if (newlen > 0) {
                if (newlen == nibbles.Count) {
                    nibbles.Add(0);
                }
                nibbles[newlen++] = 0;
            }
        }
        length = newlen;
        result = digits[value] + result;
    }
    while (newlen != 0);
    return result;
}
As it's coming from PHP it might not be too idiomatic C#, and there are no parameter validity checks. However, you can feed it a hex-encoded string and it will work just fine with
var result = BaseConvert(hexEncoded, 16, 36);
It's not exactly what you asked for, but encoding the byte[] into hex is trivial.
Earlier tonight I came across a codereview question revolving around the same algorithm being discussed here. See: https://codereview.stackexchange.com/questions/14084/base-36-encoding-of-a-byte-array/
I provided an improved implementation of one of its earlier answers (both use BigInteger). See: https://codereview.stackexchange.com/a/20014/20654. The solution takes a byte[] and returns a Base36 string. Both the original and mine include simple benchmark information.
For completeness, the following is the method to decode a byte[] from a string. I'll include the encode function from the link above as well. See the text after this code block for some simple benchmark info for decoding.
const int kByteBitCount= 8; // number of bits in a byte
// constants that we use in FromBase36String and ToBase36String
const string kBase36Digits= "0123456789abcdefghijklmnopqrstuvwxyz";
static readonly double kBase36CharsLengthDivisor= Math.Log(kBase36Digits.Length, 2);
static readonly BigInteger kBigInt36= new BigInteger(36);

// assumes the input 'chars' is in big-endian ordering, MSB->LSB
static byte[] FromBase36String(string chars)
{
    var bi= new BigInteger();
    for (int x= 0; x < chars.Length; x++)
    {
        int i= kBase36Digits.IndexOf(chars[x]);
        if (i < 0) return null; // invalid character
        bi *= kBigInt36;
        bi += i;
    }
    return bi.ToByteArray();
}

// characters returned are in big-endian ordering, MSB->LSB
static string ToBase36String(byte[] bytes)
{
    // Estimate the result's length so we don't waste time realloc'ing
    int result_length= (int)
        Math.Ceiling(bytes.Length * kByteBitCount / kBase36CharsLengthDivisor);
    // We use a List so we don't have to CopyTo a StringBuilder's characters
    // to a char[], only to then Array.Reverse it later
    var result= new System.Collections.Generic.List<char>(result_length);
    var dividend= new BigInteger(bytes);
    // IsZero's computation is less complex than evaluating "dividend > 0"
    // which invokes BigInteger.CompareTo(BigInteger)
    while (!dividend.IsZero)
    {
        BigInteger remainder;
        dividend= BigInteger.DivRem(dividend, kBigInt36, out remainder);
        int digit_index= Math.Abs((int)remainder);
        result.Add(kBase36Digits[digit_index]);
    }
    // orientate the characters in big-endian ordering
    result.Reverse();
    // ToArray will also trim the excess chars used in length prediction
    return new string(result.ToArray());
}
"A test 1234. Made slightly larger!" encodes to Base64 as "165kkoorqxin775ct82ist5ysteekll7kaqlcnnu6mfe7ag7e63b5"
To decode that Base36 string 1,000,000 times takes 12.6558909 seconds on my machine (I used the same build and machine conditions as provided in my answer on codereview)
You mentioned that you were dealing with a byte[] for the MD5 hash, rather than a hexadecimal string representation of it, so I think this solution provides the least overhead for you.
If you want a shorter string and can accept [a-zA-Z0-9] and + and / then look at Convert.ToBase64String
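For instance (Convert.ToBase64String is the standard BCL method; the input bytes are just an example):

byte[] hash = new byte[] { 200, 180, 34 };
string s = Convert.ToBase64String(hash); // "yLQi", shorter than the hex "c8b422"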
Using BigInteger (needs the System.Numerics reference)
const string chars = "0123456789abcdefghijklmnopqrstuvwxyz";

// The result is padded with chars[0] to make the string length
// (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2))
// (so that for any value [0...0]-[255...255] of bytes the resulting
// string will have same length)
public static string ToBaseN(byte[] bytes, string chars, bool littleEndian = true, int len = -1)
{
    if (bytes.Length == 0 || len == 0)
    {
        return String.Empty;
    }
    // BigInteger saves in the last byte the sign. > 7F negative,
    // <= 7F positive.
    // If we have a "negative" number, we will prepend a 0 byte.
    byte[] bytes2;
    if (littleEndian)
    {
        if (bytes[bytes.Length - 1] <= 0x7F)
        {
            bytes2 = bytes;
        }
        else
        {
            // Note that Array.Resize doesn't modify the original array,
            // but creates a copy and sets the passed reference to the
            // new array
            bytes2 = bytes;
            Array.Resize(ref bytes2, bytes.Length + 1);
        }
    }
    else
    {
        bytes2 = new byte[bytes[0] > 0x7F ? bytes.Length + 1 : bytes.Length];
        // We copy and reverse the array
        for (int i = bytes.Length - 1, j = 0; i >= 0; i--, j++)
        {
            bytes2[j] = bytes[i];
        }
    }
    BigInteger bi = new BigInteger(bytes2);
    // A little optimization. We will do many divisions based on
    // chars.Length .
    BigInteger length = chars.Length;
    // We pre-calc the length of the string. We know the bits of
    // "information" of a byte are 8. Using Log2 we calc the bits of
    // information of our new base.
    if (len == -1)
    {
        len = (int)Math.Ceiling(bytes.Length * 8 / Math.Log(chars.Length, 2));
    }
    // We will build our string on a char[]
    var chs = new char[len];
    int chsIndex = 0;
    while (bi > 0)
    {
        BigInteger remainder;
        bi = BigInteger.DivRem(bi, length, out remainder);
        chs[littleEndian ? chsIndex : len - chsIndex - 1] = chars[(int)remainder];
        chsIndex++;
        // If the buffer is full but there are still digits left,
        // the requested length is too small
        if (chsIndex >= len && bi > 0)
        {
            throw new OverflowException();
        }
    }
    // We append the zeros that we skipped at the beginning
    if (littleEndian)
    {
        while (chsIndex < len)
        {
            chs[chsIndex] = chars[0];
            chsIndex++;
        }
    }
    else
    {
        while (chsIndex < len)
        {
            chs[len - chsIndex - 1] = chars[0];
            chsIndex++;
        }
    }
    return new string(chs);
}

public static byte[] FromBaseN(string str, string chars, bool littleEndian = true, int len = -1)
{
    if (str.Length == 0 || len == 0)
    {
        return new byte[0];
    }
    // This should be the maximum length of the byte[] array. It's
    // the opposite of the one used in ToBaseN.
    // Note that it can be passed as a parameter
    if (len == -1)
    {
        len = (int)Math.Ceiling(str.Length * Math.Log(chars.Length, 2) / 8);
    }
    BigInteger bi = BigInteger.Zero;
    BigInteger length2 = chars.Length;
    BigInteger mult = BigInteger.One;
    for (int j = 0; j < str.Length; j++)
    {
        int ix = chars.IndexOf(littleEndian ? str[j] : str[str.Length - j - 1]);
        // We didn't find the character
        if (ix == -1)
        {
            throw new ArgumentOutOfRangeException();
        }
        bi += ix * mult;
        mult *= length2;
    }
    var bytes = bi.ToByteArray();
    int len2 = bytes.Length;
    // BigInteger adds a 0 byte for positive numbers that have the
    // last byte > 0x7F
    if (len2 >= 2 && bytes[len2 - 1] == 0)
    {
        len2--;
    }
    int len3 = Math.Min(len, len2);
    byte[] bytes2;
    if (littleEndian)
    {
        if (len == bytes.Length)
        {
            bytes2 = bytes;
        }
        else
        {
            bytes2 = new byte[len];
            Array.Copy(bytes, bytes2, len3);
        }
    }
    else
    {
        bytes2 = new byte[len];
        for (int i = 0; i < len3; i++)
        {
            bytes2[len - i - 1] = bytes[i];
        }
    }
    for (int i = len3; i < len2; i++)
    {
        if (bytes[i] != 0)
        {
            throw new OverflowException();
        }
    }
    return bytes2;
}
Be aware that they are REALLY slow! REALLY REALLY slow! (2 minutes for 100k). To speed them up you would probably need to rewrite the division/mod operations so that they work directly on a buffer, instead of recreating the scratch pads each time as BigInteger does. And it would still be SLOW. The problem is that the time needed to encode the first byte is O(n), where n is the length of the byte array (because the whole array has to be divided by 36). Unless you want to work with blocks of 5 bytes and lose some bits: each symbol of Base36 carries around 5.169925001 bits, so 8 of these symbols carry 41.35940001 bits, very near 40 bits (5 bytes).
Note that these methods can work both in little-endian mode and in big-endian mode. The endianness of the input and of the output is the same. Both methods accept a len parameter. You can use it to trim excess 0 (zeroes). Note that if you try to make the output too small to contain the input, an OverflowException will be thrown.
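To illustrate the 5-byte-block idea mentioned above, here is a rough sketch of my own (not part of the original answer; it assumes the input length is a multiple of 5). Since 36^8 > 2^40, any 40-bit block fits in 8 Base36 symbols, and each block is encoded independently instead of dividing the whole array:

static string Encode5ByteBlocks(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    var sb = new System.Text.StringBuilder();
    for (int i = 0; i < data.Length; i += 5)
    {
        ulong block = 0;
        for (int j = 0; j < 5; j++)
            block = (block << 8) | data[i + j]; // pack 5 bytes into 40 bits
        char[] chunk = new char[8];
        for (int k = 7; k >= 0; k--) // emit 8 base-36 digits, most significant first
        {
            chunk[k] = digits[(int)(block % 36)];
            block /= 36;
        }
        sb.Append(chunk);
    }
    return sb.ToString();
}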
System.Text.Encoding enc = System.Text.Encoding.ASCII;
string myString = enc.GetString(myByteArray);
You can play with what encoding you need:
System.Text.ASCIIEncoding,
System.Text.UnicodeEncoding,
System.Text.UTF7Encoding,
System.Text.UTF8Encoding
To match the requirements [a-z][0-9] you can use this:
Byte[] bytes = new Byte[] { 200, 180, 34 };
string result = String.Join("a", bytes.Select(x => x.ToString()).ToArray());
You will have a string representation of the bytes with a char separator. To convert back you will need to split the string and convert the string[] to byte[] using the same .Select() approach.
Usually a power of 2 is used - that way one character maps to a fixed number of bits. An alphabet of 32 characters, for instance, would map to 5 bits. The only challenge in that case is how to deserialize variable-length strings.
For base 36 you could treat the data as one large number, and then:
divide by 36
add the remainder as character to your result
repeat until the division results in 0
Easier said than done perhaps.
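A minimal sketch of that divide-by-36 loop (my own code, using BigInteger from System.Numerics; the extra zero byte keeps BigInteger from treating the value as negative):

static string ToBase36(byte[] data)
{
    const string digits = "0123456789abcdefghijklmnopqrstuvwxyz";
    byte[] positive = new byte[data.Length + 1]; // trailing 0 => non-negative
    Array.Copy(data, positive, data.Length);
    var n = new System.Numerics.BigInteger(positive);
    if (n.IsZero) return "0";
    var sb = new System.Text.StringBuilder();
    while (!n.IsZero)
    {
        n = System.Numerics.BigInteger.DivRem(n, 36, out var rem);
        sb.Insert(0, digits[(int)rem]); // prepend the remainder as the next character
    }
    return sb.ToString();
}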
You can use modulo.
This example encodes your byte array to a string of [0-9][a-z]; change it if you want.
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length];
    for (i = 0; i < byteArr.Length; i++)
    {
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[i] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[i] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
If you don't want to lose data for decoding, you can use this example:
public string byteToString(byte[] byteArr)
{
    int i;
    char[] charArr = new char[byteArr.Length * 2];
    for (i = 0; i < byteArr.Length; i++)
    {
        charArr[2 * i] = (char)(byteArr[i] / 36 + 48); // quotient character
        int byt = byteArr[i] % 36; // 36 = number of available characters
        if (byt < 10)
        {
            charArr[2 * i + 1] = (char)(byt + 48); // if % result is a digit
        }
        else
        {
            charArr[2 * i + 1] = (char)(byt + 87); // if % result is a letter
        }
    }
    return new String(charArr);
}
Now you have a string of double the length, where the first character of each pair holds the quotient (value / 36) and the second holds the remainder (value % 36). For example: 200 = 36*5 + 20 => "5k".
I'm trying to create a function (C#) that will take 2 integers (a value to become a byte[], a value to set the length of the array to) and return a byte[] representing the value. Right now, I have a function which only returns byte[]s of a length of 4 (I'm presuming 32-bit).
For instance, something like InttoByteArray(0x01, 2) should return a byte[] of {0x00, 0x01}.
Does anyone have a solution to this?
You could use the following
static public byte[] ToByteArray(object anyValue, int length)
{
    if (length > 0)
    {
        int rawsize = Marshal.SizeOf(anyValue);
        IntPtr buffer = Marshal.AllocHGlobal(rawsize);
        Marshal.StructureToPtr(anyValue, buffer, false);
        byte[] rawdatas = new byte[rawsize * length];
        Marshal.Copy(buffer, rawdatas, rawsize * (length - 1), rawsize);
        Marshal.FreeHGlobal(buffer);
        return rawdatas;
    }
    return new byte[0];
}
Some test cases are:
byte x = 45;
byte[] x_bytes = ToByteArray(x, 1);
int y = 234;
byte[] y_bytes = ToByteArray(y, 5);
int z = 234;
byte[] z_bytes = ToByteArray(z, 0);
This will create an array of whatever size the type is that you pass in. If you want to only return byte arrays, it should be pretty easy to change. Right now it's in a more generic form.
To get what you want in your example you could do this:
int a = 0x01;
byte[] a_bytes = ToByteArray(Convert.ToByte(a), 2);
You can use the BitConverter utility class for this. Though I don't think it allows you to specify the length of the array when you're converting an int. But you can always truncate the result.
http://msdn.microsoft.com/en-us/library/de8fssa4.aspx
Take your current algorithm and chop off bytes from the array if the length specified is less than 4, or pad it with zeroes if it's more than 4. Sounds like you already have it solved to me.
You'd want some loop like:
for (int i = arrayLen - 1; i >= 0; i--) {
    resultArray[i] = (byte)((theInt >> (i * 8)) & 0xff);
}
byte[] IntToByteArray(int number, int bytes)
{
    if (bytes > 4 || bytes < 0)
    {
        throw new ArgumentOutOfRangeException("bytes");
    }
    byte[] result = new byte[bytes];
    for (int i = bytes - 1; i >= 0; i--)
    {
        result[i] = (byte)((number >> (8 * i)) & 0xFF);
    }
    return result;
}
It fills the result array from right to left, so the least significant byte ends up at index 0 (little-endian order).
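For example (my own illustration of the resulting byte order):

byte[] r = IntToByteArray(0x0102, 2); // r[0] == 0x02, r[1] == 0x01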
byte byte1 = (byte)((mut & 0xFF) ^ (mut3 & 0xFF));
byte byte2 = (byte)((mut1 & 0xFF) ^ (mut2 & 0xFF));
quoted from
C#: Cannot convert from ulong to byte
I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
    byte[] nByte = BitConverter.GetBytes((byte)i);
    foreach (byte b in nByte)
    {
        byteList.Add(b);
    }
}
for (int g = 256; g <= 1000; g++)
{
    UInt16 st = Convert.ToUInt16(g);
    byte[] xByte = BitConverter.GetBytes(st);
    foreach (byte c in xByte)
    {
        byteList.Add(c);
    }
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
    uint num = (uint) value;
    while (num >= 0x80)
    {
        this.Write((byte) (num | 0x80));
        num = num >> 7;
    }
    this.Write((byte) num);
}
(taken from System.IO.BinaryWriter.Write(String)).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
    byte num3;
    int num = 0;
    int num2 = 0;
    do
    {
        if (num2 == 0x23)
        {
            throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
        }
        num3 = this.ReadByte();
        num |= (num3 & 0x7f) << num2;
        num2 += 7;
    }
    while ((num3 & 0x80) != 0);
    return num;
}
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;

namespace EncodedNumbers
{
    class Program
    {
        protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
        {
            uint num = (uint)value;
            while (num >= 0x80)
            {
                bin.Write((byte)(num | 0x80));
                num = num >> 7;
            }
            bin.Write((byte)num);
        }

        static void Main(string[] args)
        {
            MemoryStream ms = new MemoryStream();
            BinaryWriter bin = new BinaryWriter(ms);
            for (int i = 1; i < 1000; i++)
            {
                Write7BitEncodedInt(bin, i);
            }
            byte[] data = ms.ToArray();
            int size = data.Length;
            Console.WriteLine("Total # of Bytes = " + size);
            Console.ReadLine();
        }
    }
}
The total size I get is 1871 bytes for the numbers 1-999 (note that the loop above stops before 1000).
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
    byte[] bytes = BitConverter.GetBytes(value);
    int skip = bytes.Length - 1;
    // Guard against value == 0, where every byte is zero
    while (skip > 0 && bytes[skip] == 0)
    {
        skip--;
    }
    for (int i = 0; i <= skip; i++)
    {
        bin.Write(bytes[i]);
    }
}
This ignores any bytes that are zero (from MSB to LSB). So for 0-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back since the stream is now ambiguous. As a side note, this approach crams it down to 1743 bytes (as opposed to 1871 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store the numbers above 255 in one byte. The easiest way would be to use short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values).
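A rough sketch of that 10-bit packing (my own addition; values must be in 0..1023, and BitArray.CopyTo does the final packing into bytes):

static byte[] Pack10Bit(int[] values)
{
    var bits = new System.Collections.BitArray(values.Length * 10);
    for (int i = 0; i < values.Length; i++)
        for (int b = 0; b < 10; b++)
            bits[i * 10 + b] = ((values[i] >> b) & 1) == 1; // least significant bit first
    byte[] result = new byte[(bits.Length + 7) / 8];
    bits.CopyTo(result, 0);
    return result;
}

For 1000 numbers that is 10,000 bits, i.e. 1250 bytes instead of the 2000 that shorts would take.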
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
    byte[] nByte = BitConverter.GetBytes(i);
    foreach (byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
This will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class, or gin up a Huffman encoding implementation using an agreed-upon set of frequencies.
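If you go the compression route, a minimal sketch (my own) using the built-in System.IO.Compression.GZipStream:

static byte[] Compress(byte[] data)
{
    using (var output = new System.IO.MemoryStream())
    {
        using (var gzip = new System.IO.Compression.GZipStream(
                   output, System.IO.Compression.CompressionMode.Compress))
        {
            gzip.Write(data, 0, data.Length); // compress the 4-bytes-per-int stream
        }
        return output.ToArray(); // MemoryStream contents remain readable after close
    }
}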
I have a BitArray with a length of 8, and I need a function to convert it to a byte. How do I do it?
Specifically, I need a correct ConvertToByte function:
BitArray bit = new BitArray(new bool[]
{
    false, false, false, false,
    false, false, false, true
});
//How to write ConvertToByte
byte myByte = ConvertToByte(bit);
var recoveredBit = new BitArray(new[] { myByte });
Assert.AreEqual(bit, recoveredBit);
This should work:
byte ConvertToByte(BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("bits");
    }
    byte[] bytes = new byte[1];
    bits.CopyTo(bytes, 0);
    return bytes[0];
}
A bit of a late post, but this works for me:
public static byte[] BitArrayToByteArray(BitArray bits)
{
    byte[] ret = new byte[(bits.Length - 1) / 8 + 1];
    bits.CopyTo(ret, 0);
    return ret;
}
Works with:
string text = "Test";
byte[] bytes = System.Text.Encoding.ASCII.GetBytes(text);
BitArray bits = new BitArray(bytes);
byte[] bytesBack = BitArrayToByteArray(bits);
string textBack = System.Text.Encoding.ASCII.GetString(bytesBack);
// bytes == bytesBack
// text == textBack
A poor man's solution:
protected byte ConvertToByte(BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("illegal number of bits");
    }
    byte b = 0;
    if (bits.Get(7)) b++;
    if (bits.Get(6)) b += 2;
    if (bits.Get(5)) b += 4;
    if (bits.Get(4)) b += 8;
    if (bits.Get(3)) b += 16;
    if (bits.Get(2)) b += 32;
    if (bits.Get(1)) b += 64;
    if (bits.Get(0)) b += 128;
    return b;
}
Unfortunately, the BitArray class is only partially implemented in .NET Core (UWP). For example, the CopyTo() method and the Count property are unavailable. I wrote this extension to fill the gap:
public static IEnumerable<byte> ToBytes(this BitArray bits, bool MSB = false)
{
    int bitCount = 7;
    int outByte = 0;

    foreach (bool bitValue in bits)
    {
        if (bitValue)
            outByte |= MSB ? 1 << bitCount : 1 << (7 - bitCount);
        if (bitCount == 0)
        {
            yield return (byte) outByte;
            bitCount = 8;
            outByte = 0;
        }
        bitCount--;
    }

    // Last partially decoded byte
    if (bitCount < 7)
        yield return (byte) outByte;
}
The method decodes the BitArray to a byte sequence using LSB (least significant bit first) logic, the same logic used by the BitArray class. Calling the method with the MSB parameter set to true will produce an MSB-first byte sequence. In that case, remember that you may also need to reverse the final output byte collection.
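A quick usage sketch (my own) showing the two orderings (ToArray needs System.Linq):

var bits = new BitArray(new[] { true, false, false, false, false, false, false, false });
byte[] lsbFirst = bits.ToBytes().ToArray();          // { 0x01 }: index 0 is the low bit
byte[] msbFirst = bits.ToBytes(MSB: true).ToArray(); // { 0x80 }: index 0 is the high bit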
This should do the trick. However the previous answer is quite likely the better option.
public byte ConvertToByte(BitArray bits)
{
    if (bits.Count > 8)
        throw new ArgumentException("ConvertToByte can only work with a BitArray containing a maximum of 8 values");

    byte result = 0;
    for (byte i = 0; i < bits.Count; i++)
    {
        if (bits[i])
            result |= (byte)(1 << i);
    }
    return result;
}
In the example you posted, the resulting byte will be 0x80. In other words, the first value in the BitArray corresponds to the first bit in the returned byte.
This should be the ultimate one. It works with any length of array.
private List<byte> BoolList2ByteList(List<bool> values)
{
    List<byte> ret = new List<byte>();
    int count = 0;
    byte currentByte = 0;
    foreach (bool b in values)
    {
        if (b) currentByte |= (byte)(1 << count);
        count++;
        // A byte holds 8 bits, so flush after the 8th bit, not the 7th
        if (count == 8) { ret.Add(currentByte); currentByte = 0; count = 0; }
    }
    // Flush the final partial byte, if any
    if (count > 0) ret.Add(currentByte);
    return ret;
}
In addition to @JonSkeet's answer, you can use an extension method as below:
public static byte ToByte(this BitArray bits)
{
    if (bits.Count != 8)
    {
        throw new ArgumentException("bits");
    }
    byte[] bytes = new byte[1];
    bits.CopyTo(bytes, 0);
    return bytes[0];
}
And use it like:
BitArray foo = new BitArray(new bool[]
{
    false, false, false, false, false, false, false, true
});
foo.ToByte();
byte GetByte(BitArray input)
{
    int len = input.Length;
    if (len > 8)
        len = 8;
    int output = 0;
    for (int i = 0; i < len; i++)
        if (input.Get(i))
            output += (1 << (len - 1 - i)); // this part depends on your system (big/little endian)
            //output += (1 << i);           // depends on system
    return (byte)output;
}
Cheers!
Little-endian byte array converter: the first bit (indexed with "0") in the BitArray is assumed to represent the least significant bit (the rightmost bit in the bit octet), which is interpreted as binary "zero" or "one".
public static class BitArrayExtender
{
    public static byte[] ToByteArray(this BitArray bits)
    {
        const int BYTE = 8;
        int length = (bits.Count / BYTE) + ((bits.Count % BYTE == 0) ? 0 : 1);
        var bytes = new byte[length];

        for (int i = 0; i < bits.Length; i++)
        {
            int bitIndex = i % BYTE;
            int byteIndex = i / BYTE;
            int mask = (bits[i] ? 1 : 0) << bitIndex;
            bytes[byteIndex] |= (byte)mask;
        } //for

        return bytes;
    } //ToByteArray
} //class