I have a 2d array of UInt16s which I've converted to raw bytes - I would like to take those bytes and convert them back into the original 2D array. I've managed to do this with a 2d array of doubles, but I can't figure out how to do it with UInt16.
Here's my code:
UInt16[,] dataArray;
//This array is populated with this data:
[4 6 2]
[0 2 0]
[1 3 4]
long byteCountUInt16Array = dataArray.GetLength(0) * dataArray.GetLength(1) * sizeof(UInt16);
var bufferUInt16 = new byte[byteCountUInt16Array];
Buffer.BlockCopy(dataArray, 0, bufferUInt16, 0, bufferUInt16.Length);
//Here is where I try to convert the values and print them out to see if the values are still the same:
UInt16[] originalUInt16Values = new UInt16[bufferUInt16.Length / 8];
for (int i = 0; i < 5; i++)
{
originalUInt16Values[i] = BitConverter.ToUInt16(bufferUInt16, i * 8);
Console.WriteLine("Values: " + originalUInt16Values[i]);
}
The print statement does not show the same values as the original 2d array. I'm pretty new to coding with bytes and UInt16 so most of this I'm learning in the process.
Also, I know the last chunk of my code isn't putting values into a 2d array like the original array - right now I'm just trying to print out the values to see if they even match the original data.
If all you want is to convert UInt16[,] to bytes and then the bytes back to UInt16, you can do another BlockCopy, which is very fast at run time. The code should look like this:
UInt16[,] dataArray = new UInt16[,] {
{4, 6, 2},
{0, 2, 0},
{1, 3, 4}
};
for (int j = 0; j < 3; j++)
{
for (int i = 0; i < 3; i++)
{
Console.WriteLine("Value[" + i + ", " + j + "] = " + dataArray[j,i]);
}
}
long byteCountUInt16Array = dataArray.GetLength(0) * dataArray.GetLength(1) * sizeof(UInt16);
var bufferUInt16 = new byte[byteCountUInt16Array];
Buffer.BlockCopy(dataArray, 0, bufferUInt16, 0, bufferUInt16.Length);
//Here is where I try to convert the values and print them out to see if the values are still the same:
UInt16[] originalUInt16Values = new UInt16[bufferUInt16.Length / 2];
Buffer.BlockCopy(bufferUInt16, 0, originalUInt16Values, 0, bufferUInt16.Length);
for (int i = 0; i < originalUInt16Values.Length; i++)
{
//originalUInt16Values[i] = BitConverter.ToUInt16(bufferUInt16, i * 8);
Console.WriteLine("Values---: " + originalUInt16Values[i]);
}
By the way, each UInt16 splits into only two bytes, so you should calculate the new size by dividing by two, not eight.
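If you also need the values back in their original 3x3 shape (and you know the dimensions up front), note that Buffer.BlockCopy accepts a multidimensional array as the destination as well, so a quick sketch like this should work:
UInt16[,] restored = new UInt16[3, 3];
// copy the raw bytes straight back into a 2D array of the same overall size
Buffer.BlockCopy(bufferUInt16, 0, restored, 0, bufferUInt16.Length);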
The program
public static void Main(string[] args)
{
UInt16[,] dataArray = new ushort[,]{ {4,6,2}, {0,2,0}, {1,3,4}};
//This array is populated with this data:
long byteCountUInt16Array = dataArray.GetLength(0) * dataArray.GetLength(1) * sizeof(UInt16);
var byteBuffer = new byte[byteCountUInt16Array];
Buffer.BlockCopy(dataArray, 0, byteBuffer, 0, byteBuffer.Length);
for(int i=0; i < byteBuffer.Length; i++) {
Console.WriteLine("byteBuf[{0}]= {1}", i, byteBuffer[i]);
}
Console.WriteLine("Byte buffer len: {0} data array len: {1}", byteBuffer.Length, dataArray.GetLength(0)* dataArray.GetLength(1));
UInt16[] originalUInt16Values = new UInt16[byteBuffer.Length / 2];
for (int i = 0; i < byteBuffer.Length; i+=2)
{
ushort _a = (ushort)( (byteBuffer[i]) | (byteBuffer[i+1]) << 8);
originalUInt16Values[i/2] = _a;
Console.WriteLine("Values: " + originalUInt16Values[i/2]);
}
}
Outputs
byteBuf[0]= 4
byteBuf[1]= 0
byteBuf[2]= 6
byteBuf[3]= 0
byteBuf[4]= 2
byteBuf[5]= 0
byteBuf[6]= 0
byteBuf[7]= 0
byteBuf[8]= 2
byteBuf[9]= 0
byteBuf[10]= 0
byteBuf[11]= 0
byteBuf[12]= 1
byteBuf[13]= 0
byteBuf[14]= 3
byteBuf[15]= 0
byteBuf[16]= 4
byteBuf[17]= 0
Byte buffer len: 18 data array len: 9
Values: 4
Values: 6
Values: 2
Values: 0
Values: 2
Values: 0
Values: 1
Values: 3
Values: 4
You can see that a ushort, aka UInt16, is stored in a byte order in which 4 = 0x04 0x00 (low byte first, i.e. little-endian), which is why I chose the conversion formula
ushort _a = (ushort)( (byteBuffer[i]) | (byteBuffer[i+1]) << 8);
Which will grab the byte at index i, take the next byte at i+1, and left-shift it by the size of a byte (8 bits) to make up the 16 bits of a ushort. In other words, ushort _a = 0x[second byte] 0x[first byte], which is then repeated. This conversion code is specific to the endianness of the machine you are on and thus non-portable.
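If portability matters, one option is to let the framework apply an explicit byte order instead of shifting by hand. On .NET Core 2.1+ / modern .NET, BinaryPrimitives can read each UInt16 as little-endian regardless of the machine doing the reading (a sketch, assuming the buffer was produced in little-endian order, as in the output above):
using System.Buffers.Binary;

for (int i = 0; i < byteBuffer.Length; i += 2)
{
    // interpret each pair of bytes as a little-endian 16-bit value
    ushort value = BinaryPrimitives.ReadUInt16LittleEndian(byteBuffer.AsSpan(i, 2));
    Console.WriteLine("Values: " + value);
}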
Also, I fixed the error where the byteBuffer array was too big because its length was multiplied by a factor of 8. A ushort is double the size of a byte, so we only need a factor of 2 in the array length.
Addressing the title of your question (Convert byte[] to UInt16):
UInt16 result = (UInt16)BitConverter.ToInt16(yourByteArray, startIndex: 0);
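For reference, BitConverter also has a ToUInt16 overload, which avoids the cast entirely:
UInt16 result = BitConverter.ToUInt16(yourByteArray, 0);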
You're casting up, so you should be able to do things implicitly:
var list = new List<byte> { 1, 2 };
var uintList = new List<UInt16>();
//Cast in your select
uintList = list.Select(x => (UInt16)x).ToList();
Related
I have a byte array as follows -
byte[] arrByt = new byte[] { 0xF, 0xF, 0x11, 0x4 };
so in binary
arrByt = 00001111 00001111 00010001 00000100
Now I want to create a new byte array by removing leading 0s for each byte from arrByt
arrNewByt = 11111111 10001100 = { 0xFF, 0x8C };
I know that this can be done by converting the byte values into binary string values, removing the leading 0s, appending the values and converting back to byte values into the new array.
However this is a slow process for a large array.
Is there a faster way to achieve this (like logical operations, bit operations, or other efficient ways)?
Thanks.
This should do the job quite fast, using only standard loops and operators. Give it a try; it also works for longer source arrays.
// source array of bytes
var arrByt = new byte[] {0xF, 0xF, 0x11, 0x4 };
// target array - first with the size of the source array
var targetArray = new byte[arrByt.Length];
// bit index in target array
// from left = byte 0, bit 7 = index 31; to the right = byte 3, bit 0 = index 0
var targetIdx = targetArray.Length * 8 - 1;
// go through all bytes of the source array from left to right
for (var i = 0; i < arrByt.Length; i++)
{
var startFound = false;
// go through all bits of the current byte from the highest to the lowest
for (var x = 7; x >= 0; x--)
{
// copy the bit if it is 1 or if there was already a 1 before in this byte
if (startFound || ((arrByt[i] >> x) & 1) == 1)
{
startFound = true;
// copy the bit from its position in the source array to its new position in the target array
targetArray[targetArray.Length - ((targetIdx / 8) + 1)] |= (byte) (((arrByt[i] >> x) & 1) << (targetIdx % 8));
// advance the bit + byte position in the target array one to the right
targetIdx--;
}
}
}
// resize the target array to only the bytes that were used above
Array.Resize(ref targetArray, (int)Math.Ceiling((targetArray.Length * 8 - (targetIdx + 1)) / 8d));
// write target array content to console
for (var i = 0; i < targetArray.Length; i++)
{
Console.Write($"{targetArray[i]:X} ");
}
// OUTPUT: FF 8C
If you are trying to find the location of the most-significant bit, you can do a log2() of the byte (and if you don't have log2, you can use log(x)/log(2) which is the same as log2(x))
For instance, the numbers 7, 6, 5, and 4 all have a '1' in the 3rd bit position (0111, 0110, 0101, 0100). The log2() of each of them is between 2 and 2.8. The same happens for anything whose highest set bit is the 4th bit: the log2() will be between 3 and 3.9. So you can find the position of the most significant bit by adding 1 to the log2() of the number (rounded down).
floor(log2(00001111)) + 1 == floor(3.9) + 1 == 3 + 1 == 4
You know how many bits are in a byte, so you can easily know the number of bits to shift left:
int numToShift = 8 - (floor(log2(bytearray[0])) + 1);
shiftedValue = bytearray[0] << numToShift;
From there, it's just a matter of keeping track of how many outstanding bits (not pushed into a bytearray yet) you have, and then pushing some/all of them on.
The above code would only work for the first byte of the array. If you put this in a loop, numToShift would need to keep track of the latest empty slot to shift things into (you might have to shift right to fit into the current output byte, and then carry the leftover bits into the start of the next output byte). So instead of using "8 -" in the above code, you would put in the number of bits still available. For instance, if only 3 bits were left to fill in the current output byte, you would do:
int numToShift = 3 - (floor(log2(bytearray[0])) + 1);
So that number should be a variable:
int numToShift = bitsAvailableInCurrentByte - (floor(log2(bytearray[0])) + 1);
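As a rough C# sketch of the idea (an outline rather than a full solution; it assumes the current byte is non-zero, since log2(0) is undefined, and reuses the bitsAvailableInCurrentByte bookkeeping described above):
byte b = bytearray[0];                              // current source byte
int msb = (int)Math.Floor(Math.Log(b, 2)) + 1;      // position of the most significant set bit, counted from 1
int numToShift = bitsAvailableInCurrentByte - msb;  // leading zeros to shift out of the available space
int shifted = b << numToShift;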
Please check this code snippet. This might help you.
byte[] arrByt = new byte[] { 0xF, 0xF, 0x11, 0x4 };
byte[] result = new byte[arrByt.Length / 2];
var en = arrByt.GetEnumerator();
int count = 0;
byte result1 = 0;
int index = 0;
while (en.MoveNext())
{
count++;
byte item = (byte)en.Current;
if (count == 1)
{
while (item != 0 && item < 128) // guard against a zero byte, which would otherwise loop forever
{
item = (byte)(item << 1);
}
result1 ^= item;
}
if (count == 2)
{
count = 0;
result1 ^= item;
result[index] = result1;
index++;
result1 = 0;
}
}
foreach (var s in result)
{
Console.WriteLine(s.ToString("X"));
}
I've taken a 2D array of UInt16 values, and converted it to raw bytes. I would like to take those bytes and convert them back into the original 2D array, but I'm unsure of how to do this when I only have the bytes, i.e., is there a way to determine the dimensions of an original array when all you have is that array converted to bytes?
Here's my code:
UInt16[,] dataArray = new UInt16[,] {
{4, 6, 2},
{0, 2, 0},
{1, 3, 4}
};
long byteCountUInt16Array = dataArray.GetLength(0) * dataArray.GetLength(1) * sizeof(UInt16);
var bufferUInt16 = new byte[byteCountUInt16Array];
Buffer.BlockCopy(dataArray, 0, bufferUInt16, 0, bufferUInt16.Length);
//Here is where I try to convert the values and print them out to see if the values are still the same:
UInt16[] originalUInt16Values = new UInt16[bufferUInt16.Length / 2];
Buffer.BlockCopy(bufferUInt16, 0, originalUInt16Values, 0, bufferUInt16.Length);
for (int i = 0; i < 5; i++)
{
Console.WriteLine("Values---: " + originalUInt16Values[i]);
}
This code will put the bytes into a 1-dimensional array, but I would like to put them into the original 2d array. Is this possible when if all I have are the raw bytes? I'll eventually be sending these bytes via a REST call and the receiving side will only have the bytes to convert back into the original 2D array.
So... not certain exactly what your specifications are, but you could send the dimensions (x, y) of the array as the first four bytes of your buffer. Below is my crack at it. I commented it heavily, so hopefully it makes sense; ask if any of the code isn't clear.
/**** SENDER *****/
// ushort and UInt16 are the same (16-bit, 2 bytes)
ushort[,] dataArray = new ushort[,] {
{4, 6, 2},
{0, 2, 0},
{1, 3, 4}
};
// get the X and Y dimensions
ushort xDim = (ushort)dataArray.GetLength(0);
ushort yDim = (ushort)dataArray.GetLength(1);
// Make an array for the entire 2D array and the dimension sizes
ushort[] toSend = new ushort[xDim * yDim + 2];
// load the dimensions into first two spots in the array
toSend[0] = xDim;
toSend[1] = yDim;
// load everything else into the array
int pos = 2;
for (int i = 0; i < xDim; i++)
{
for (int j = 0; j < yDim; j++)
{
toSend[pos] = dataArray[i, j];
pos += 1;
}
}
// size of the array in bytes
long byteCountUInt16Array = sizeof(ushort) * (xDim * yDim + 2);
// create the byte buffer
var bufferUInt16 = new byte[byteCountUInt16Array];
// copy everything (including dimensions) into the byte buffer
Buffer.BlockCopy(toSend, 0, bufferUInt16, 0, bufferUInt16.Length);
/***********RECEIVER************/
// get the dimensions from the received bytes
ushort[] xyDim = new ushort[2];
Buffer.BlockCopy(bufferUInt16, 0, xyDim, 0, sizeof(ushort) * 2);
// create buffer to read the bytes as ushorts into, size it based off of
// dimensions received.
ushort[] readIn = new ushort[xyDim[0] * xyDim[1]];
Buffer.BlockCopy(bufferUInt16, sizeof(ushort) * 2, readIn, 0, sizeof(ushort) * readIn.Length);
// create 2D array to load everything into, size based off of received sizes
ushort[,] originalUInt16Values = new ushort[xyDim[0], xyDim[1]];
// load everything in
int cur = 0;
for (int i = 0; i < xyDim[0]; i++)
{
for (int j = 0; j < xyDim[1]; j++)
{
originalUInt16Values[i, j] = readIn[cur];
cur += 1;
}
}
// print everything out to prove it works
for (int i = 0; i < xyDim[0]; i++)
{
for (int j = 0; j < xyDim[1]; j++)
{
Console.WriteLine("Values at {0},{1}: {2}", i, j, originalUInt16Values[i, j]);
}
}
// uhh... keep the console open
Console.ReadKey();
You can't get the original dimensions. Example:
8 bytes = [0, 1, 0, 2, 0, 1, 0, 2]
into an array of 16-bit values (2 bytes each):
= [1, 2, 1, 2]
into an array of 32-bit values (4 bytes each):
= [65538, 65538]
and all of these ways (1 byte, 2 bytes, 4 bytes) are valid for parsing, so you must indicate your original sizes, or at least one of them. Luckily, you can send the size (or sizes) in the headers of the request. That may do the trick for what you want.
Another way of doing this is what serial systems do: simply concat your size (or sizes) and your buffer.
size [4 bytes = Int32] + buffer [n bytes]
Finally, parse the first bytes to read the size, then block-copy starting from the first byte of the actual payload (don't forget the offset: in the example above you would start block-copying from byte number 5).
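A minimal sketch of that size-prefix idea, assuming a single Int32 length followed by the payload (the variable names are only illustrative):
// sender: prepend the payload length as 4 bytes
byte[] payload = bufferUInt16;                 // the bytes you already produced
byte[] message = new byte[4 + payload.Length];
Buffer.BlockCopy(BitConverter.GetBytes(payload.Length), 0, message, 0, 4);
Buffer.BlockCopy(payload, 0, message, 4, payload.Length);

// receiver: read the length, then block-copy starting at byte number 5 (offset 4)
int length = BitConverter.ToInt32(message, 0);
ushort[] values = new ushort[length / 2];
Buffer.BlockCopy(message, 4, values, 0, length);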
I have this function in a Java program.
private static byte[] converToByte(String s)
{
byte[] output = new byte[s.length() / 2];
for (int i = 0, j = 0; i < s.length(); i += 2, j++)
{
output[j] = (byte)(Integer.parseInt(s.substring(i, i + 2), 16));
}
return output;
}
I am trying to create the same thing in C#, but I'm having trouble. I tried this:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
But after a couple of iterations I got a System.OverflowException. What would be the equivalent instruction in C#?
Thanks.
private static sbyte[] converToByte(string s)
{
sbyte[] output = new sbyte[s.Length / 2];
for (int i = 0, j = 0; i < s.Length; i += 2, j++)
{
output[j] = (sbyte)(Convert.ToInt32(s.Substring(i, 2), 16));
}
return output;
}
You are using the wrong data type in your line:
output[j] = (byte)(Int16.Parse(str.Substring(i, i + 2)));
Short name   .NET class   Type               Width (bits)   Range
byte         Byte         Unsigned integer   8              0 to 255
short        Int16        Signed integer     16             -32,768 to 32,767
You are getting an overflow exception because an Int16 (short) is far too big to fit into a byte.
After struggling with this problem myself, I realised the real problem is that Java's substring method is:
substring(int beginIndex, int endIndex)
While C#'s implementation takes:
substring(int beginIndex, int length)
This means that in C# the same code grabs larger chunks of the string, causing an overflow.
@Dave Doknjas was on the right track, but you can still convert to a byte once you use the smaller chunk size:
output[j] = Convert.ToByte(str.Substring(i, 2), 16);
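Putting that together, a C# equivalent of the original Java method might look like this (a sketch, assuming the input string contains valid hex pairs):
private static byte[] ConvertToBytes(string s)
{
    byte[] output = new byte[s.Length / 2];
    for (int i = 0, j = 0; i < s.Length; i += 2, j++)
    {
        // take two characters at a time and parse them as one base-16 byte
        output[j] = Convert.ToByte(s.Substring(i, 2), 16);
    }
    return output;
}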
I have a mono .wav file (16-bit, 44.1 kHz) and I'm using the code below. If I'm not wrong, this should give me output values between -1 and 1, which I can then apply an FFT to (to be converted to a spectrogram later on). However, my output is nowhere near -1 and 1.
This is a portion of my output
7.01214599609375
17750.2552337646
8308.42733764648
0.000274658203125
1.00001525878906
0.67291259765625
1.3458251953125
16.0000305175781
24932
758.380676269531
0.0001068115234375
This is the code, which I got from another post.
Edit 1:
public static Double[] prepare(String wavePath, out int SampleRate)
{
Double[] data;
byte[] wave;
byte[] sR = new byte[4];
System.IO.FileStream WaveFile = System.IO.File.OpenRead(wavePath);
wave = new byte[WaveFile.Length];
data = new Double[(wave.Length - 44) / 4];//shifting the headers out of the PCM data;
WaveFile.Read(wave, 0, Convert.ToInt32(WaveFile.Length));//read the wave file into the wave variable
/***********Converting and PCM accounting***************/
for (int i = 0; i < data.Length; i += 2)
{
data[i] = BitConverter.ToInt16(wave, i) / 32768.0;
}
/**************assigning sample rate**********************/
for (int i = 24; i < 28; i++)
{
sR[i - 24] = wave[i];
}
SampleRate = BitConverter.ToInt16(sR, 0);
return data;
}
Edit 2: I'm getting output with 0s every 2nd number
0.009002685546875
0
0.009613037109375
0
0.0101318359375
0
0.01080322265625
0
0.01190185546875
0
0.01312255859375
0
0.014068603515625
If your samples are 16 bits (which appears to be the case), then you want to work with Int16. Each 2 bytes of the sample data is a signed 16-bit integer in the range -32768 .. 32767, inclusive.
If you want to convert a signed Int16 to a floating point value from -1 to 1, then you have to divide by Int16.MaxValue + 1 (which is equal to 32768). So, your code becomes:
for (int i = 0; i < data.Length; i += 2)
{
data[i] = BitConverter.ToInt16(wave, i) / 32768.0;
}
We use 32768 here because the values are signed.
So -32768/32768 will give -1.0, and 32767/32768 gives 0.999969482421875.
If you used 65536.0, then your values would only be in the range -0.5 .. 0.5.
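Regarding the zeros in Edit 2: the loop steps the sample index by 2 (it writes data[i] with i += 2), so every other slot in data is never written. A sketch of a loop that fills consecutive samples and also skips the 44-byte header mentioned in the comment might look like this:
int sampleCount = (wave.Length - 44) / 2;   // 2 bytes per 16-bit mono sample
double[] data = new double[sampleCount];
for (int i = 0; i < sampleCount; i++)
{
    // each sample starts right after the 44-byte header, 2 bytes apart
    data[i] = BitConverter.ToInt16(wave, 44 + i * 2) / 32768.0;
}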
How can I convert an int to a bit array?
If I e.g. have an int with the value 3 I want an array, that has the length 8 and that looks like this:
0 0 0 0 0 0 1 1
Each of these numbers are in a separate slot in the array that have the size 8.
Use the BitArray class.
int value = 3;
BitArray b = new BitArray(new int[] { value });
If you want to get an array for the bits, you can use the BitArray.CopyTo method with a bool[] array.
bool[] bits = new bool[b.Count];
b.CopyTo(bits, 0);
Note that the bits will be stored from least significant to most significant, so you may wish to use Array.Reverse.
And finally, if you want to get 0s and 1s for each bit instead of booleans (I'm using a byte to store each bit; less wasteful than an int):
byte[] bitValues = bits.Select(bit => (byte)(bit ? 1 : 0)).ToArray();
To convert the int 'x'
int x = 3;
One way, by manipulating the int:
string s = Convert.ToString(x, 2); //Convert to binary in a string
int[] bits= s.PadLeft(8, '0') // Add 0's from left
.Select(c => int.Parse(c.ToString())) // convert each char to int
.ToArray(); // Convert IEnumerable from select to Array
Alternatively, by using the BitArray class-
BitArray b = new BitArray(new byte[] { (byte)x });
int[] bits = b.Cast<bool>().Select(bit => bit ? 1 : 0).ToArray();
Use Convert.ToString (value, 2)
so in your case
string binValue = Convert.ToString (3, 2);
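For the value 3 this yields the string "11"; if you need all eight positions you can pad it, e.g. Convert.ToString(3, 2).PadLeft(8, '0') gives "00000011".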
I would achieve it in a one-liner as shown below:
using System;
using System.Collections;
namespace stackoverflowQuestions
{
class Program
{
static void Main(string[] args)
{
//get bit Array for number 20
var myBitArray = new BitArray(BitConverter.GetBytes(20));
}
}
}
Please note that every element of a BitArray is stored as a bool, so the code below works:
if (myBitArray[0] == false)
{
//this code block will execute
}
but the code below doesn't compile at all:
if (myBitArray[0] == 0)
{
//some code
}
I just ran into an instance where...
int val = 2097152;
var arr = Convert.ToString(val, 2).ToArray();
var myVal = arr[21];
...did not produce the results I was looking for. In 'myVal' above, the value stored in the array at position 21 was '0'. It should have been a '1'. (The string returned by Convert.ToString is ordered most significant bit first, so index 21 of the 22-character string is its last, least significant digit.) This baffled me until I found another way in C# to convert an int to a bit array:
int val = 2097152;
var arr = new BitArray(BitConverter.GetBytes(val));
var myVal = arr[21];
This produced the result 'true' as a boolean value for 'myVal'.
I realize this may not be the most efficient way to obtain this value, but it was very straightforward, simple, and readable.
To convert your integer input to an array of bool of any size, just use LINQ.
bool[] ToBits(int input, int numberOfBits) {
return Enumerable.Range(0, numberOfBits)
.Select(bitIndex => 1 << bitIndex)
.Select(bitMask => (input & bitMask) == bitMask)
.ToArray();
}
So to convert an integer to a bool array of up to 32 bits, simply use it like so:
bool[] bits = ToBits(65, 8); // true, false, false, false, false, false, true, false
You may wish to reverse the array depending on your needs.
Array.Reverse(bits);
int value = 3;
var array = Convert.ToString(value, 2).PadLeft(8, '0').ToArray();
// Expands the ints in 'input' into 'length' bools, least significant bit first.
// Note: for zero or negative ints, the else-branch below records the bits inverted.
public static bool[] Convert(int[] input, int length)
{
var ret = new bool[length];
var siz = sizeof(int) * 8;
var pow = 0;
var cur = 0;
for (var a = 0; a < input.Length && cur < length; ++a)
{
var inp = input[a];
pow = 1;
if (inp > 0)
{
for (var i = 0; i < siz && cur < length; ++i)
{
ret[cur++] = (inp & pow) == pow;
pow *= 2;
}
}
else
{
for (var i = 0; i < siz && cur < length; ++i)
{
ret[cur++] = (inp & pow) != pow;
pow *= 2;
}
}
}
return ret;
}
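For example, assuming the method above is in scope (note that the bits come out least significant first, per the loop):
bool[] bits = Convert(new[] { 3 }, 8);
// bits: true, true, false, false, false, false, false, false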
I recently discovered the C# Vector<T> class, which uses hardware acceleration (i.e. SIMD: Single-Instruction Multiple Data) to perform operations across the vector components as single instructions. In other words, it parallelizes array operations, to an extent.
Since you are trying to expand an integer bitmask to an array, perhaps you are trying to do something similar.
If you're at the point of unrolling your code, this would be an optimization to consider strongly. But weigh it against the costs if you only use vectors sparsely, and also consider the memory overhead, since Vectors really want to operate on contiguous memory (known in the CLR as a Span<T>), so the runtime may have to shuffle data under the hood when you instantiate your own vectors from arrays.
Here is an example of how to do masking:
//given two vectors
Vector<int> data1 = new Vector<int>(new int[] { 1, 0, 1, 0, 1, 0, 1, 0 });
Vector<int> data2 = new Vector<int>(new int[] { 0, 1, 1, 0, 1, 0, 0, 1 });
//get the pairwise-matching elements
Vector<int> mask = Vector.Equals(data1, data2);
//and return values from another new vector for matches
Vector<int> whenMatched = new Vector<int>(new int[] { 1, 2, 3, 4, 5, 6, 7, 8 });
//and zero otherwise
Vector<int> whenUnmatched = Vector<int>.Zero;
//perform the filtering
Vector<int> result = Vector.ConditionalSelect(mask, whenMatched, whenUnmatched);
//note that only the first half of vector components render in the Debugger (this is a known bug)
string resultStr = string.Join("", result);
//resultStr is <0, 0, 3, 4, 5, 6, 0, 0>
Note that the VS Debugger is bugged, showing only the first half of the components of a vector.
So with an integer as your mask, you might try:
int maskInt = 0x0F;//00001111 in binary
//convert int mask to a vector (anybody know a better way??)
Vector<int> maskVector = new Vector<int>(Enumerable.Range(0, Vector<int>.Count).Select(i => (maskInt & 1<<i) > 0 ? -1 : 0).ToArray());
Note that the (signed integer) -1 is used to signal true, which has binary representation of all ones.
Positive 1 does not work, and you can cast (int)-1 to uint to get every bit of the binary enabled, if needed (but not by using Enumerable.Cast<>()).
However, this only works for int32 masks up to 2^8 because of the 8-element capacity on my system (which supports 4x64-bit chunks). The capacity depends on the execution environment and its hardware capabilities, so always use Vector<T>.Count.
You can therefore get double the capacity using shorts instead of ints (the new Half type isn't yet supported, nor is Decimal; those are the 16-bit and 128-bit floating-point counterparts):
ushort maskInt = 0b1111010101010101;
Vector<ushort> maskVector = new Vector<ushort>(Enumerable.Range(0, Vector<ushort>.Count).Select(i => (maskInt & 1<<i) > 0 ? -1 : 0).Select(x => (ushort)x).ToArray());
//string maskString = string.Join("", maskVector);//<65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 0, 65535, 65535, 65535, 65535>
Vector<ushort> whenMatched = new Vector<ushort>(Enumerable.Range(1, Vector<ushort>.Count).Select(i => (ushort)i).ToArray());//{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}
Vector<ushort> whenUnmatched = Vector<ushort>.Zero;
Vector<ushort> result = Vector.ConditionalSelect(maskVector, whenMatched, whenUnmatched);
string resultStr = string.Join("", result);//<1, 0, 3, 0, 5, 0, 7, 0, 9, 0, 11, 0, 13, 14, 15, 16>
Due to how integers work, being signed or unsigned (using the most significant bit to indicate +/- values), you might need to consider that too, for values like converting 0b1111111111111111 to a short. The compiler will usually stop you if you try to do something that appears to be stupid, at least.
short maskInt = unchecked((short)(0b1111111111111111));
Just make sure you don't treat the most significant bit of an Int32 as a value bit worth 2^31; it's the sign bit.
Using & (AND)
Bitwise Operators
The answers above are all correct and effective. If you wanted to do it old-school, without using BitArray or Convert.ToString(), you would use bitwise operators.
The bitwise AND operator & takes two operands and returns a value where every bit is either 1, if both operands have a 1 in that place, or a 0 in every other case. It's just like the logical AND (&&), but it takes integral operands instead of boolean operands.
Ex. 0101 & 1001 = 0001
Using this principle, any integer ANDed with the maximum value of its type is itself.
byte b = 0b_0100_1011; // In base 10, 75.
Console.WriteLine(b & byte.MaxValue); // byte.MaxValue = 255
Result: 75
Bitwise AND in a loop
We can use this to our advantage to take only specific bits from an integer, by using a loop that goes through every bit in an unsigned 32-bit integer (i.e., uint) and puts the result of the AND operation into an array of strings, which will all be "1" or "0".
A number that has a 1 at only one specific digit n is equal to 2 to the nth power (I typically use the Math.Pow() method).
public static string[] GetBits(uint x) {
string[] bits = new string[32];
for (int i = 0; i < 32; i++) {
uint bit = x & (uint)Math.Pow(2, i); // Math.Pow returns a double, so cast it back to isolate bit i
if (bit != 0)
bits[i] = "1";
else
bits[i] = "0";
}
return bits;
}
If you were to input, say, 1000 (which is equivalent to binary 1111101000), you would get an array of 32 strings that spell out 0000 0000 0000 0000 0000 0011 1110 1000 when read from the most significant bit down (the spaces are just for readability; the array itself is indexed least significant bit first).
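A small usage sketch (since the array is least significant bit first, reverse it before joining to get the conventional reading order shown above; Reverse() needs using System.Linq):
string[] bits = GetBits(1000);
Console.WriteLine(string.Join("", bits.Reverse())); // 00000000000000000000001111101000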