Setting bits in a byte array with C#

I have a requirement to encode a byte array from a short integer value.
The encoding rules are:
The bits representing the integer are bits 0-13.
Bit 14 is set if the number is negative.
Bit 15 is always 1.
I know I can get the integer into a byte array using BitConverter:
byte[] roll = BitConverter.GetBytes(x);
but I can't find how to meet the bit requirements.
Does anyone know how to do this?

You should use bitwise operators.
The solution is something like this:
Int16 x = 7;
if (x < 0)
{
    Int16 mask14 = 16384; // 0b0100000000000000
    x = (Int16)(x | mask14);
}
Int16 mask15 = -32768; // 0b1000000000000000
x = (Int16)(x | mask15);
byte[] roll = BitConverter.GetBytes(x);
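For x = 7, for example, the bit-15 mask turns the value into 0x8007, so on a little-endian machine the resulting array is { 0x07, 0x80 } (a quick check, not part of the original answer):
Console.WriteLine(BitConverter.ToString(roll)); // prints "07-80" on little-endian systems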

You cannot rely on GetBytes alone for negative numbers, since it emits the two's complement bit pattern, which is not what this format needs.
Instead, do a bounds check to make sure the number is representable, then call GetBytes on the absolute value of the given number.
The method's parameter is short, so GetBytes returns a byte array of size 2 (you don't need more than 16 bits).
The rest is in the comments below:
static readonly int MAX_UNSIGNED_14_BIT = 16383; // 2^14 - 1

public static byte[] EncodeSigned14Bit(short x)
{
    var absoluteX = Math.Abs(x);
    if (absoluteX > MAX_UNSIGNED_14_BIT) throw new ArgumentException($"{nameof(x)} is too large and cannot be represented");
    byte[] roll = BitConverter.GetBytes(absoluteX);
    if (x < 0)
    {
        roll[1] |= 0b01000000; // x is negative, set bit 14
    }
    roll[1] |= 0b10000000; // bit 15 is always set
    return roll;
}

static void Main(string[] args)
{
    // testing some values
    var r1 = EncodeSigned14Bit(16383);  // r1[0] = 255, r1[1] = 191
    var r2 = EncodeSigned14Bit(-16383); // r2[0] = 255, r2[1] = 255
}
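For completeness, a possible decoder for this format (my own sketch, not part of the original answer, assuming the same layout: bit 15 always set, bit 14 the sign, bits 0-13 the magnitude):
public static short DecodeSigned14Bit(byte[] roll)
{
    bool negative = (roll[1] & 0b01000000) != 0;             // bit 14 carries the sign
    int magnitude = ((roll[1] & 0b00111111) << 8) | roll[0]; // bits 0-13
    return (short)(negative ? -magnitude : magnitude);
}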

Related

Can we randomly access a part of a byte?

Can we randomly access a part of a byte? I mean, can I access three bits of a byte randomly, without using BitArray and without going through the whole byte?
Is there any way of accessing a single bit within a byte, and if not, why is it not possible, and what does it depend on?
You can use the bitwise AND (&) operator to read a specific bit from a byte. I'll give some examples using the 0b prefix, which in C# allows you to write binary literals in your code.
So suppose you have the following byte value:
byte val = 0b10010100;      // val = 148 (in decimal)
byte testBits = 0b00000100; // set ONLY the bits you want to test here...
if ((val & testBits) != 0)  // bitwise AND returns 0 if the bit is NOT set
    Console.WriteLine("The bit is set!");
else
    Console.WriteLine("The bit is not set....");
(Note the parentheses around val & testBits: in C#, != binds more tightly than &.)
Here's a method to test any bit in a given byte. It left-shifts the number 1 to build a mask for any arbitrary bit position:
public static int ReadBit(byte val, int bitPos)
{
    if ((val & (1 << bitPos)) != 0)
        return 1;
    return 0;
}
You can use this method to print which bits are set in a given byte:
byte val = 0b10010100;
for (int i = 0; i < 8; i++)
{
    int bitValue = ReadBit(val, i);
    Console.WriteLine($"Bit {i} = {bitValue}");
}
The output from the code above should be:
Bit 0 = 0
Bit 1 = 0
Bit 2 = 1
Bit 3 = 0
Bit 4 = 1
Bit 5 = 0
Bit 6 = 0
Bit 7 = 1
You can use bit shifting:
var bitNumber = 0;
var firstBit = (b & (1 << bitNumber)) != 0;
We can turn this into an extension method:
public static class ByteExtensions
{
    public static bool GetBit(this byte b, int bitNumber) =>
        (b & (1 << bitNumber)) != 0;
}
then:
byte b = 7;
var bit3 = b.GetBit(3); // false: 7 is 0b00000111, so bit 3 is not set
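A hypothetical SetBit companion (my own addition, not from the answer above) could follow the same pattern; since byte is a value type, it returns a new byte rather than mutating in place:
public static byte SetBit(this byte b, int bitNumber, bool value) =>
    value ? (byte)(b | (1 << bitNumber)) : (byte)(b & ~(1 << bitNumber));
For example, ((byte)7).SetBit(3, true) returns 15.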

Convert BitArray to a small byte array

I've read the other posts on BitArray conversions and tried several myself, but none seem to deliver the results I want.
My situation is as such: I have some C# code that controls an LED strip. To issue a single command to the strip I need at most 28 bits:
1 bit for selecting between 2 LED strips
6 for position (max 48 addressable LEDs)
7 for color x3 (0-127 value per color channel)
Suppose I create a BitArray for that structure, and as an example we populate it semi-randomly.
BitArray ba = new BitArray(28);
for (int i = 0; i < 28; i++)
{
    if (i % 3 == 0)
        ba.Set(i, true);
    else
        ba.Set(i, false);
}
Now I want to pack those 28 bits into 4 bytes (the last 4 bits can be a stop signal), and finally turn that into a string so I can send it via USB to the LED strip.
All the methods I've tried convert each 1 and 0 to a literal char, which is not the goal.
Is there a straightforward way to do this bit packing in C#?
Well you could use BitArray.CopyTo:
byte[] bytes = new byte[4];
ba.CopyTo(bytes, 0);
Or:
int[] ints = new int[1];
ba.CopyTo(ints, 0);
It's not clear what you'd want the string representation to be though - you're dealing with naturally binary data rather than text data...
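If the device ultimately needs a string, one option (an assumption on my part, since the protocol isn't specified) is a hex or Base64 rendering of those bytes:
string hex = BitConverter.ToString(bytes);     // "49-92-24-09" for the sample array above
string base64 = Convert.ToBase64String(bytes); // more compact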
I wouldn't use a BitArray for this. Instead, I'd use a struct, and then pack that into an int when I need to:
struct Led
{
    public readonly bool Strip;
    public readonly byte Position;
    public readonly byte Red;
    public readonly byte Green;
    public readonly byte Blue;

    public Led(bool strip, byte pos, byte r, byte g, byte b)
    {
        Strip = strip;
        Position = pos;
        Red = r;
        Green = g;
        Blue = b;
    }

    public int ToInt()
    {
        const int StripBit = 0x08000000;  // bit 27
        const int PositionMask = 0x3F;    // 6 bits
        const int PositionShift = 21;     // bits 21 through 26
        const int ColorMask = 0x7F;       // 7 bits per color
        const int RedShift = 14;          // bits 14 through 20
        const int GreenShift = 7;         // bits 7 through 13

        int val = Strip ? 0 : StripBit;
        val = val | ((Position & PositionMask) << PositionShift);
        val = val | ((Red & ColorMask) << RedShift);
        val = val | ((Green & ColorMask) << GreenShift);
        val = val | (Blue & ColorMask);   // bits 0 through 6
        return val;
    }
}
That way you can create your structures easily without having to fiddle with bit arrays:
var blue17 = new Led(true, 17, 0, 0, 127);
var blah22 = new Led(false, 22, 15, 97, 42);
and to get the values:
int blue17_value = blue17.ToInt();
You can turn the int into a byte array easily enough with BitConverter:
var blue17_bytes = BitConverter.GetBytes(blue17_value);
It's unclear to me why you want to send that as a string.
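If you ever need to unpack again, here is a sketch of the inverse (FromInt is my own name, mirroring the bit layout in ToInt above; it could live inside the struct):
public static Led FromInt(int val)
{
    bool strip = (val & 0x08000000) == 0; // ToInt sets the bit when Strip is false
    byte pos = (byte)((val >> 21) & 0x3F);
    byte r = (byte)((val >> 14) & 0x7F);
    byte g = (byte)((val >> 7) & 0x7F);
    byte b = (byte)(val & 0x7F);
    return new Led(strip, pos, r, g, b);
}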

Convert datatype 'long' to byte array

I have to convert values (double/float in C#) to bytes and need some help.
// Datatype long, 4 bytes: -99999999,99 to 99999999,99
// Datatype long, 4 bytes: -99999999,9 to 99999999,9
// Datatype short, 2 bytes: -999,99 to 999,99
// Datatype short, 2 bytes: -999,9 to 999,9
In my "world at home" I would just turn it into a string and use ASCII.GetBytes().
But here we have to use as little space as possible, and indeed '-99999999,99' takes 12 bytes as a string instead of the 4 it should take as a 'long' datatype.
[EDIT]
Thanks to some help and answers, I attach some results here:
long lng = -9999999999L;
byte[] test = Encoding.ASCII.GetBytes(lng.ToString()); // 11 bytes
byte[] test2 = BitConverter.GetBytes(lng);             // 8 bytes
byte[] mybyt = BitConverter.GetBytes(lng);             // 8 bytes
byte[] bA = BitConverter.GetBytes(lng);                // 8 bytes
There is still one detail left to find out: the lng variable takes 8 bytes even when it holds a smaller value, e.g. 99951 (I won't include the ToString() sample).
If the value is even "shorter", i.e. in the range -999,99 to 999,99, it should take only 2 bytes of space.
[END EDIT]
Have you checked BitConverter?
long lng = -9999999999L;
byte[] mybyt = BitConverter.GetBytes(lng);
Hope this is what you are looking for.
Try doing it this way:
long l = 4554334;
byte[] bA = BitConverter.GetBytes(l);
Be aware that in 2 bytes you can only have 4 full digits + sign, and in 4 bytes you can only have 9 digits + sign, so I had to scale your prereqs accordingly.
public static byte[] SerializeLong2Dec(double value)
{
    value *= 100;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -999999999.0 || value > 999999999.0)
    {
        throw new ArgumentOutOfRangeException();
    }
    int value2 = (int)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeLong2Dec(byte[] value)
{
    int value2 = BitConverter.ToInt32(value, 0);
    return (double)value2 / 100.0;
}

public static byte[] SerializeLong1Dec(double value)
{
    value *= 10;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -999999999.0 || value > 999999999.0)
    {
        throw new ArgumentOutOfRangeException();
    }
    int value2 = (int)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeLong1Dec(byte[] value)
{
    int value2 = BitConverter.ToInt32(value, 0);
    return (double)value2 / 10.0;
}

public static byte[] SerializeShort2Dec(double value)
{
    value *= 100;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -9999.0 || value > 9999.0)
    {
        throw new ArgumentOutOfRangeException();
    }
    short value2 = (short)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeShort2Dec(byte[] value)
{
    short value2 = BitConverter.ToInt16(value, 0);
    return (double)value2 / 100.0;
}

public static byte[] SerializeShort1Dec(double value)
{
    value *= 10;
    value = Math.Round(value, MidpointRounding.AwayFromZero);
    if (value < -9999.0 || value > 9999.0)
    {
        throw new ArgumentOutOfRangeException();
    }
    short value2 = (short)value;
    return BitConverter.GetBytes(value2);
}

public static double DeserializeShort1Dec(byte[] value)
{
    short value2 = BitConverter.ToInt16(value, 0);
    return (double)value2 / 10.0;
}
So that it's clear: the range of a (signed) short (16 bits) is -32,768 to 32,767, so you only have 4 full digits plus a little piece (the 0-3); the range of a (signed) int (32 bits) is -2,147,483,648 to 2,147,483,647, so you only have 9 full digits plus a little piece (the 0-2). Going to a (signed) long (64 bits) you have -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807, so 18 digits plus a (big) piece. With floating point you lose accuracy: a float (32 bits) is accurate to around 7 digits, while a double (64 bits) is accurate to around 15-16 digits.
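A quick way to see those limits for yourself:
Console.WriteLine(short.MinValue + " .. " + short.MaxValue); // -32768 .. 32767
Console.WriteLine(int.MinValue + " .. " + int.MaxValue);     // -2147483648 .. 2147483647
Console.WriteLine(long.MinValue + " .. " + long.MaxValue);   // -9223372036854775808 .. 9223372036854775807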
To everyone reading this question and the answers, please note that the byte order produced by BitConverter depends on the endianness of the machine it runs on:
//convert to bytes
long number = 123;
byte[] bytes = BitConverter.GetBytes(number);
//convert back to int64
long newNumber = BitConverter.ToInt64(bytes, 0);
//if the bytes came from a machine with the opposite endianness,
//number != newNumber
The workaround is to check which system you run on:
newNumber = BitConverter.IsLittleEndian
? BitConverter.ToInt64(bytes, 0)
: BitConverter.ToInt64(bytes.Reverse().ToArray(), 0);
long longValue = 9999999999L;
Console.WriteLine("Long value: " + longValue.ToString());
byte[] bytes = BitConverter.GetBytes(longValue);
Console.WriteLine("Byte array value:");
Console.WriteLine(BitConverter.ToString(bytes));
As the other answers have pointed out, you can use BitConverter to get the byte representation of primitive types.
You said that in the current world you inhabit, there is an onus on representing these values as concisely as possible, in which case you should consider variable length encoding (though that document may be a bit abstract).
If you decide this approach is applicable to your case I would suggest looking at how the Protocol Buffers project represents scalar types as some of these types are encoded using variable length encoding which produces shorter output if the data set favours smaller absolute values. (The project is open source under a New BSD license, so you will be able to learn the technique employed from the source repository or even use the source in your own project.)
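As a rough illustration of the idea, here is a minimal sketch of base-128 ("varint") encoding, similar in spirit to what Protocol Buffers does for unsigned values (this is my own sketch, not their implementation):
public static byte[] EncodeVarint(ulong value)
{
    var bytes = new System.Collections.Generic.List<byte>();
    do
    {
        byte b = (byte)(value & 0x7F); // take the low 7 bits
        value >>= 7;
        if (value != 0) b |= 0x80;     // continuation bit: more bytes follow
        bytes.Add(b);
    } while (value != 0);
    return bytes.ToArray();
}
Small values cost a single byte: EncodeVarint(123) is { 0x7B }, while EncodeVarint(300) is { 0xAC, 0x02 }.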

Convert int to a bit array in .NET

How can I convert an int to a bit array?
If I e.g. have an int with the value 3, I want an array of length 8 that looks like this:
0 0 0 0 0 0 1 1
Each of these numbers is in a separate slot of the size-8 array.
Use the BitArray class.
int value = 3;
BitArray b = new BitArray(new int[] { value });
If you want to get an array for the bits, you can use the BitArray.CopyTo method with a bool[] array.
bool[] bits = new bool[b.Count];
b.CopyTo(bits, 0);
Note that the bits will be stored from least significant to most significant, so you may wish to use Array.Reverse.
And finally, if you want to get 0s and 1s for each bit instead of booleans (I'm using a byte to store each bit; less wasteful than an int):
byte[] bitValues = bits.Select(bit => (byte)(bit ? 1 : 0)).ToArray();
To convert the int x:
int x = 3;
One way, by string manipulation:
string s = Convert.ToString(x, 2);        // convert to binary in a string
int[] bits = s.PadLeft(8, '0')            // add 0's from the left
    .Select(c => int.Parse(c.ToString())) // convert each char to int
    .ToArray();                           // convert the IEnumerable from Select to an array
Alternatively, using the BitArray class:
BitArray b = new BitArray(new byte[] { (byte)x });
int[] bits = b.Cast<bool>().Select(bit => bit ? 1 : 0).ToArray();
Use Convert.ToString(value, 2).
So in your case:
string binValue = Convert.ToString(3, 2);
I would achieve it in a one-liner as shown below:
using System;
using System.Collections;

namespace stackoverflowQuestions
{
    class Program
    {
        static void Main(string[] args)
        {
            //get bit array for number 20
            var myBitArray = new BitArray(BitConverter.GetBytes(20));
        }
    }
}
Please note that every element of a BitArray is stored as a bool, so the following code works:
if (myBitArray[0] == false)
{
    //this code block will execute
}
but the following code doesn't compile at all:
if (myBitArray[0] == 0)
{
    //some code
}
I just ran into an instance where...
int val = 2097152;
var arr = Convert.ToString(val, 2).ToArray();
var myVal = arr[21];
...did not produce the results I was looking for: the value stored at position 21 of the array was '0' when I expected '1'. This baffled me until I found another way in C# to convert an int to a bit array:
int val = 2097152;
var arr = new BitArray(BitConverter.GetBytes(val));
var myVal = arr[21];
This produced the result 'true' as a boolean value for 'myVal'. I realize this may not be the most efficient way to obtain this value, but it was very straightforward, simple, and readable.
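The discrepancy comes from bit order: Convert.ToString(val, 2) writes the most significant bit first, while BitArray indexes from the least significant bit. A quick demonstration:
int val = 2097152; // 2^21
string s = Convert.ToString(val, 2); // "1" followed by 21 zeros, so s[21] is the last '0'
var ba = new BitArray(BitConverter.GetBytes(val));
Console.WriteLine(s[21]);  // 0
Console.WriteLine(ba[21]); // True, bit 21 counted from the LSB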
To convert your integer input to an array of bool of any size, just use LINQ.
bool[] ToBits(int input, int numberOfBits)
{
    return Enumerable.Range(0, numberOfBits)
        .Select(bitIndex => 1 << bitIndex)
        .Select(bitMask => (input & bitMask) == bitMask)
        .ToArray();
}
So to convert an integer to a bool array of up to 32 bits, simply use it like so:
bool[] bits = ToBits(65, 8); // true, false, false, false, false, false, true, false
You may wish to reverse the array depending on your needs.
Array.Reverse(bits);
int value = 3;
var array = Convert.ToString(value, 2).PadLeft(8, '0').ToArray();
public static bool[] Convert(int[] input, int length)
{
    var ret = new bool[length];
    var siz = sizeof(int) * 8; // bits per int
    var pow = 0;
    var cur = 0;
    for (var a = 0; a < input.Length && cur < length; ++a)
    {
        var inp = input[a];
        pow = 1;
        if (inp > 0)
        {
            for (var i = 0; i < siz && cur < length; ++i)
            {
                ret[cur++] = (inp & pow) == pow;
                pow *= 2;
            }
        }
        else
        {
            for (var i = 0; i < siz && cur < length; ++i)
            {
                ret[cur++] = (inp & pow) != pow;
                pow *= 2;
            }
        }
    }
    return ret;
}
I recently discovered the C# Vector<T> class, which uses hardware acceleration (i.e. SIMD: Single-Instruction Multiple Data) to perform operations across the vector components as single instructions. In other words, it parallelizes array operations, to an extent.
Since you are trying to expand an integer bitmask to an array, perhaps you are trying to do something similar.
If you're at the point of unrolling your code, this would be an optimization to strongly consider. But also weigh this against the costs if you are only sparsely using them. And also consider the memory overhead, since Vectors really want to operate on contiguous memory (in CLR known as a Span<T>), so the kernel may be having to twiddle bits under the hood when you instantiate your own vectors from arrays.
Here is an example of how to do masking:
//given two vectors
Vector<int> data1 = new Vector<int>(new int[] { 1, 0, 1, 0, 1, 0, 1, 0 });
Vector<int> data2 = new Vector<int>(new int[] { 0, 1, 1, 0, 1, 0, 0, 1 });
//get the pairwise-matching elements
Vector<int> mask = Vector.Equals(data1, data2);
//and return values from another new vector for matches
Vector<int> whenMatched = new Vector<int>(new int[] { 1, 2, 3, 4, 5, 6, 7, 8 });
//and zero otherwise
Vector<int> whenUnmatched = Vector<int>.Zero;
//perform the filtering
Vector<int> result = Vector.ConditionalSelect(mask, whenMatched, whenUnmatched);
//note that only the first half of vector components render in the Debugger (this is a known bug)
string resultStr = string.Join("", result);
//resultStr is <0, 0, 3, 4, 5, 6, 0, 0>
Note that the VS Debugger is bugged, showing only the first half of the components of a vector.
So with an integer as your mask, you might try:
int maskInt = 0x0F;//00001111 in binary
//convert int mask to a vector (anybody know a better way??)
Vector<int> maskVector = new Vector<int>(Enumerable.Range(0, Vector<int>.Count).Select(i => (maskInt & 1<<i) > 0 ? -1 : 0).ToArray());
Note that the (signed integer) -1 is used to signal true, which has binary representation of all ones.
Positive 1 does not work, and you can cast (int)-1 to uint to get every bit of the binary enabled, if needed (but not by using Enumerable.Cast<>()).
However this only works for int32 masks up to 2^8 because of the 8-element capacity on my system (which supports 4x64-bit chunks). The element count depends on the execution environment and hardware capabilities, so always use Vector<T>.Count rather than hard-coding it.
You can therefore get double the capacity with shorts compared to ints, just as with ints compared to longs (the new Half type isn't yet supported, nor is Decimal; those are the floating-point counterparts of int16 and int128):
ushort maskInt = 0b1111010101010101;
Vector<ushort> maskVector = new Vector<ushort>(Enumerable.Range(0, Vector<ushort>.Count).Select(i => (maskInt & 1<<i) > 0 ? -1 : 0).Select(x => (ushort)x).ToArray());
//string maskString = string.Join("", maskVector);//<65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 65535, 65535, 65535>
Vector<ushort> whenMatched = new Vector<ushort>(Enumerable.Range(1, Vector<ushort>.Count).Select(i => (ushort)i).ToArray()); //{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
Vector<ushort> whenUnmatched = Vector<ushort>.Zero;
Vector<ushort> result = Vector.ConditionalSelect(maskVector, whenMatched, whenUnmatched);
string resultStr = string.Join("", result);//<1, 0, 3, 0, 5, 0, 7, 0, 9, 0, 11, 0, 13, 14, 15, 16>
Due to how integers work, being signed or unsigned (using the most significant bit to indicate +/- values), you might need to consider that too, for values like converting 0b1111111111111111 to a short. The compiler will usually stop you if you try to do something that appears to be stupid, at least.
short maskInt = unchecked((short)(0b1111111111111111));
Just make sure you don't confuse the most-significant bit of an int32 as being 2^31.
Using & (AND)
Bitwise Operators
The answers above are all correct and effective. If you want to do it old-school, without using BitArray or Convert.ToString(), you would use bitwise operators.
The bitwise AND operator & takes two operands and returns a value where every bit is either 1, if both operands have a 1 in that place, or a 0 in every other case. It's just like the logical AND (&&), but it takes integral operands instead of boolean operands.
Ex. 0101 & 1001 = 0001
Using this principle, any integer ANDed with the maximum value of its type is itself.
byte b = 0b_0100_1011; // In base 10, 75.
Console.WriteLine(b & byte.MaxValue); // byte.MaxValue = 255
Result: 75
Bitwise AND in a loop
We can use this to our advantage to only take specific bits from an integer by using a loop that goes through every bit in a positive 32-bit integer (i.e., uint) and puts the result of the AND operation into an array of strings which will all be "1" or "0".
A number that has a 1 at only one specific digit n is equal to 2 to the nth power (I typically use the Math.Pow() method).
public static string[] GetBits(uint x)
{
    string[] bits = new string[32];
    for (int i = 0; i < 32; i++)
    {
        uint mask = (uint)Math.Pow(2, i); // a 1 at digit i only
        if ((x & mask) != 0)
            bits[i] = "1";
        else
            bits[i] = "0";
    }
    return bits;
}
If you were to input, say, 1000 (binary 1111101000), you would get an array of 32 strings holding those bits, least significant first; read from the most significant end, it spells 0000 0000 0000 0000 0000 0011 1110 1000 (the spaces are just for readability).
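Since GetBits fills index i with bit i (least significant first), you can reverse before joining to print it in the usual most-significant-first order:
string[] bits = GetBits(1000);
Array.Reverse(bits); // most significant bit first
Console.WriteLine(string.Concat(bits)); // 00000000000000000000001111101000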

Two's complement conversion

I need to convert bytes in two's complement format to positive integer bytes.
The range -128 to 127 should map to 0 to 255.
Examples: -128 (10000000) -> 0, 127 (01111111) -> 255, etc.
EDIT To clear up the confusion: the input byte is (of course) an unsigned integer in the range 0 to 255, but it represents a signed integer in the range -128 to 127 using two's complement format. For example, the input byte value of 128 (binary 10000000) actually represents -128.
EXTRA EDIT Alright, let's say we have the following byte stream: 0, 255, 254, 1, 127. In two's complement format this represents 0, -1, -2, 1, 127. I need this mapped to the 0 to 255 range. For more info check out this hard-to-find article: Two's complement
From your sample input you simply want:
sbyte something = -128;
byte foo = (byte)(something + 128);
new = old + 128;
Bingo :-)
Try
sbyte signed = (sbyte)input;
or, to sign-extend manually (this only makes sense when the high bit of input is set):
int signed = unchecked((int)(input | 0xFFFFFF00));
public static int MakeHexSigned(byte value)
{
    if (value > 255 / 2)
    {
        return -1 * (255 + 1) + value; // e.g. 255 -> -1, 128 -> -128
    }
    return value;
}
I believe two's complement bytes are best handled with the following. Maybe not elegant or short, but clear and obvious. I would put it as a static method in one of my util classes.
public static sbyte ConvertTo2Complement(byte b)
{
    if (b < 128)
    {
        return Convert.ToSByte(b);
    }
    else
    {
        int x = Convert.ToInt32(b);
        return Convert.ToSByte(x - 256);
    }
}
If I understood correctly, your problem is that the input is really a signed byte (sbyte) stored in an unsigned integer, and you also want to avoid negative values by converting them to zero.
To be clear: when you use a signed type (like sbyte) the framework is using two's complement behind the scenes, so just by casting to the right type you will be using two's complement.
Then, once you have that conversion done, you can clamp the negative values with a simple if or a conditional ternary operator (?:).
The functions presented below will return 0 for values from 128 to 255 (i.e. from -128 to -1), and the same value for values from 0 to 127.
So, if you must use unsigned integers as input and output you could use something like this:
private static uint ConvertSByteToByte(uint input)
{
    sbyte properDataType = (sbyte)input; // 128..255 will be taken as -128..-1
    if (properDataType < 0) { return 0; } // when negative just return 0
    if (input > 255) { return 0; }        // just in case, as uint can be greater than 255
    return input;
}
Or, IMHO, you could change your input and outputs to the data types best suited to your input and output (sbyte and byte):
private static byte ConvertSByteToByte(sbyte input)
{
    return input < 0 ? (byte)0 : (byte)input;
}
int8_t indata; /* -128, -127, ... -1, 0, 1, ... 127 */
uint8_t byte = indata ^ 0x80;
XOR the MSB, that's all.
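The same single XOR works in C# (a direct translation of the C snippet above, my own addition):
sbyte indata = -128;
byte outdata = (byte)(indata ^ 0x80); // -128 -> 0, -1 -> 127, 0 -> 128, 127 -> 255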
Here is my solution to this problem for numbers bigger than 8 bits. My example is for a 16-bit value. Note: you will have to check the first bit to see whether the number is negative or not.
Steps:
Convert the number to its complement by placing '~' before the variable (i.e. y = ~y).
Convert the numbers to binary strings.
Break the binary strings into character arrays.
Starting with the rightmost value, add 1, keeping track of carries. Store the result in a character array.
Convert the character array back to a string.
private string TwosComplementMath(string value1, string value2)
{
    char[] binary1 = value1.ToCharArray();
    char[] binary2 = value2.ToCharArray();
    bool carry = false;
    char[] calcResult = new char[16];
    for (int i = 15; i >= 0; i--)
    {
        if (binary1[i] == binary2[i])
        {
            if (binary1[i] == '1')
            {
                // 1 + 1 (+ carry): result is the carry-in, carry out is always 1
                if (carry)
                {
                    calcResult[i] = '1';
                    carry = true;
                }
                else
                {
                    calcResult[i] = '0';
                    carry = true;
                }
            }
            else
            {
                // 0 + 0 (+ carry): result is the carry-in, no carry out
                if (carry)
                {
                    calcResult[i] = '1';
                    carry = false;
                }
                else
                {
                    calcResult[i] = '0';
                    carry = false;
                }
            }
        }
        else
        {
            // 0 + 1 (+ carry): with a carry-in the result is 0 and carries out
            if (carry)
            {
                calcResult[i] = '0';
                carry = true;
            }
            else
            {
                calcResult[i] = '1';
                carry = false;
            }
        }
    }
    string result = new string(calcResult);
    return result;
}
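For example, to negate 5 in 16 bits following the steps above: invert "0000000000000101" to get "1111111111111010", then add one with the method:
string result = TwosComplementMath("1111111111111010", "0000000000000001");
// result == "1111111111111011", i.e. -5 in 16-bit two's complement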
So the problem is that the OP's problem isn't really two's complement conversion. He's adding a bias to a set of values, to adjust the range from -128..127 to 0..255.
To actually do a two's complement conversion, you just typecast the signed value to the unsigned value, like this:
sbyte test1 = -1;
byte test2 = (byte)test1;
-1 becomes 255. -128 becomes 128. This doesn't sound like what the OP wants, though. He just wants to slide an array up so that the lowest signed value (-128) becomes the lowest unsigned value (0).
To add a bias, you just do integer addition:
newValue = signedValue + 128;
You could be describing something as simple as adding a bias to your number ( in this case, adding 128 to the signed number ).
