Split byte array into bit arrays - C#

I have a byte array of length 64, and a sequential list of what the data in this byte array corresponds to. This list tells me the size in bits of each value. Most values are 8 or 16 bits, which is easy enough to parse out. However, about midway through the list I start getting lengths of 12 bits or 5 bits. What is the best way to loop through these and pull out the bits I need?

The following code should extract n bits from a data buffer stored as a uint[]. I haven't tested it, so caveat lector.
uint GetBits(uint[] data, ref int pos, int n)
{
    int word = pos / 32;    // which uint the value starts in
    int off = pos % 32;     // bit offset within that uint
    uint mask = n == 32 ? 0xFFFFFFFFu : (1u << n) - 1;
    uint value = data[word] >> off;     // low bits of the value
    if (off + n > 32)                   // the value spills over into the next uint
        value |= data[word + 1] << (32 - off);
    pos += n;
    return value & mask;
}
Note that it assumes little-endian bit packing (bit position 0 is the least significant bit of data[0]), so an unaligned value flows from the high bits of one word into the low bits of the next.
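To answer the looping part of the question: once the 64 bytes are packed into a uint[], you can walk your list of field widths and extract each value in turn. A minimal sketch (messageBytes and fieldBits are hypothetical names):
int[] fieldBits = { 8, 16, 8, 12, 5, 16 };        // hypothetical field widths from your spec
uint[] buffer = new uint[16];                     // 64 bytes = 16 uints
Buffer.BlockCopy(messageBytes, 0, buffer, 0, 64); // raw copy of the 64-byte message

int pos = 0;
var values = new uint[fieldBits.Length];
for (int i = 0; i < fieldBits.Length; i++)
    values[i] = GetBits(buffer, ref pos, fieldBits[i]);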

Related

Algorithm to get a ushort number from a GUID in C#

How can I get a unique (most of the time) ushort number from a GUID? I have tried the code below, but since I am converting to a ushort, it just ignores the least significant hex digits of the GUID.
static ushort GetId()
{
    Guid guid = Guid.NewGuid();
    byte[] buffer = guid.ToByteArray();
    return BitConverter.ToUInt16(buffer, 0);
}
FYI: somewhere in my code I have a Guid, and I want to keep the corresponding ushort number.
I have tried the code below, but since I am converting to a ushort, it just ignores the least significant hex digits of the GUID.
Yes, that is correct, and for good reason: you cannot store 128 bits of data in 16 bits.
Name                                Length (bytes)  Contents
---------------------------------------------------------------------------------------
time_low                            4               integer giving the low 32 bits of the time
time_mid                            2               integer giving the middle 16 bits of the time
time_hi_and_version                 2               4-bit "version" in the most significant bits, followed by the high 12 bits of the time
clock_seq_hi_and_res clock_seq_low  2               1-3 bit "variant" in the most significant bits, followed by the 13-15 bit clock sequence
node                                6               the 48-bit node id
If you want the last 16 bits (2 bytes, 4 hex digits), just reverse the array:
Array.Reverse(buffer, 0, buffer.Length);
return BitConverter.ToUInt16(buffer, 0);
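Equivalently, you can index the last two bytes directly and skip the reversal; a one-line sketch (note the byte order within the ushort comes out swapped relative to the Reverse version):
return BitConverter.ToUInt16(buffer, buffer.Length - 2);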
Note that what you are doing is very suspect; I truly think you need to rethink your design.

Convert a variable-size hex string to a signed number (variable-size bytes) in C#

C# provides the methods Convert.ToUInt16("FFFF", 16) / Convert.ToInt16("FFFF", 16) to convert hex strings into unsigned and signed 16-bit integers. These methods work fine for 16/32-bit values, but not for 12-bit values.
I would like to convert a 3-character hex string to a signed integer. How can I do it? I would prefer a solution that takes the size in bits as a parameter, so it can decide where the sign bit is:
Convert(string hexString, int fromBase, int size)
Convert("FFF", 16, 12) returns -1.
Convert("FFFF", 16, 16) returns -1.
Convert("FFF", 16, 16) returns 4095.
The easiest way I can think of to convert 12-bit signed hex to a signed integer is as follows:
string value = "FFF";
int convertedValue = (Convert.ToInt32(value, 16) << 20) >> 20; // -1
The idea is to shift the value as far left as possible, so that its sign bit lands in the int's sign bit, then shift right again to the original position. This works because a signed (arithmetic) right shift replicates the sign bit as it shifts.
You can generalize this into a method as follows:
int Convert(string value, int fromBase, int bits)
{
    int bitsToShift = 32 - bits;
    // System.Convert is spelled out because this method is itself named Convert
    return (System.Convert.ToInt32(value, fromBase) << bitsToShift) >> bitsToShift;
}
You can cast the result to a short if you want a 16-bit value when working with 12-bit hex strings. Performance will be the same as a 16-bit version, because the bit-shift operators on short cast the values to int anyway, and this gives you the flexibility to specify more than 16 bits if needed without writing another method.
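For example, a hypothetical call site:
short twelveBit = (short)Convert("FFF", 16, 12); // -1, now held in a 16-bit variable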
Ah, you'd like to calculate the two's complement for a certain number of bits (12 in your case, but it should really work with anything).
Here's the code in C#, blatantly stolen from the Python example in the Wikipedia article:
int Convert(string hexString, int fromBase, int num_bits)
{
    var i = System.Convert.ToUInt16(hexString, fromBase);
    var mask = 1 << (num_bits - 1);     // weight of the sign bit
    return (-(i & mask) + (i & ~mask)); // subtract it if set, keep the remaining bits
}
Convert("FFF", 16, 12) returns -1
Convert("4095", 10, 12) is also -1 as expected

C# int[] array to byte[] array, only LSB

What is the fastest way to write an int[] array to a byte[] array, keeping only the LSB (the low 8 bits) of each element? I used a for loop and a bit mask, bArray[i] = (byte)(iArray[i] & 0xFF), but the array is very long (900k+ elements) and this takes about 50 ms. Do you know of a faster option?
You can try parallelizing the workload:
Parallel.For(0,iArray.Length, i => bArray[i] = (byte)(iArray[i] & 0xFF));
This will spawn multiple threads to do the conversion. It tends to be faster on my machine, but it will sometimes take longer due to the overhead of spawning the threads.
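If the per-element delegate overhead dominates, a range-partitioned loop often does better; a sketch, assuming the usual using directives:
using System.Collections.Concurrent;
using System.Threading.Tasks;

// The delegate runs once per range, not once per element, so each
// partition executes a tight sequential loop.
Parallel.ForEach(Partitioner.Create(0, iArray.Length), range =>
{
    for (int i = range.Item1; i < range.Item2; i++)
        bArray[i] = (byte)(iArray[i] & 0xFF);
});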
What are you doing that 50ms is too slow?
Part of the slowness comes from the need to read more data than you need: when you do
array[i] & 0xFF
you read all four bytes of an int, only to drop the three most significant ones.
You could avoid this overhead with unsafe code. Keep in mind that the approach below assumes a little-endian architecture:
static unsafe void CopyLsbUnsafe(int[] from, byte[] to)
{
    fixed (int* s = from)
    fixed (byte* d = to)
    {
        byte* sb = (byte*)s;  // view the int[] as raw bytes
        int* db = (int*)d;    // view the byte[] as ints, written 4 at a time
        int* end = db + to.Length / 4;
        while (db != end)
        {
            // On little-endian, the LSB of each int is every 4th byte.
            *db++ = (*(sb + 0) << 0)
                  | (*(sb + 4) << 8)
                  | (*(sb + 8) << 16)
                  | (*(sb + 12) << 24);
            sb += 16;
        }
        // Copy any remaining 1-3 elements one byte at a time.
        for (int i = to.Length & ~3; i < to.Length; i++)
            d[i] = (byte)(from[i] & 0xFF);
    }
}
The above code reinterprets the int array as an array of bytes, and the array of bytes as an array of integers. It then reads every fourth byte of the source through a pointer, writing to the destination in groups of four bytes with a single integer assignment.
My testing shows a respectable 60% improvement over the simple loop.
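A rough harness to time it yourself (a sketch; the project must be compiled with /unsafe):
var iArray = new int[900000];
var bArray = new byte[iArray.Length];

var sw = System.Diagnostics.Stopwatch.StartNew();
CopyLsbUnsafe(iArray, bArray);
sw.Stop();
Console.WriteLine(sw.ElapsedMilliseconds + " ms");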

Convert 13 bits in BitArray to signed integer then convert 4 bits in BitArray to integer

There is an array of bits:
BitArray bits = new BitArray(17);
I want to take the first 13 bits and convert them into a 13-bit signed integer, and convert the remaining 4 bits in the array into a 4-bit integer.
How can I do this in C#?
Assuming your bits are stored LSB first (i.e., index 0 of the BitArray holds the least significant bit), you can do something like this (borrowing from this post: How can I convert BitArray to single int?):
int[] arr = new int[1];
bits.CopyTo(arr, 0);                    // bit i of the BitArray lands in bit i of arr[0]
int first13Bits = (arr[0] << 15) >> 19; // top 13 of the 17 bits, sign-extended
int last4Bits = arr[0] & 0x0000000F;    // mask off the top 28 bits to leave the bottom 4
Note that first13Bits is sign-extended (the left shift places the field's sign bit at bit 31 and the arithmetic right shift drags it back down), but last4Bits is not, since its top bits are masked off. If your bits are stored MSB first, you will need to reverse the bits in the BitArray before you convert them, as CopyTo assumes they are stored LSB first.
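Putting it together, an end-to-end sketch (the sample bit pattern is made up):
using System.Collections;

var bits = new BitArray(17);
bits[16] = true; // set the highest bit so the 13-bit field comes out negative

int[] arr = new int[1];
bits.CopyTo(arr, 0);

int first13Bits = (arr[0] << 15) >> 19; // -4096 here
int last4Bits = arr[0] & 0x0000000F;    // 0 here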

How does BitConverter.ToInt32 work?

Here is a program:
using System;
class Program
{
    static void Main(string[] args)
    {
        //
        // Create an array of four bytes.
        // ... Then convert it into an integer and unsigned integer.
        //
        byte[] array = new byte[4];
        array[0] = 1; // Lowest
        array[1] = 64;
        array[2] = 0;
        array[3] = 0; // Sign bit
        //
        // Use BitConverter to convert the bytes to an int and a uint.
        // ... The int and uint can have different values if the sign bit differs.
        //
        int result1 = BitConverter.ToInt32(array, 0);   // Start at first index
        uint result2 = BitConverter.ToUInt32(array, 0); // First index
        Console.WriteLine(result1);
        Console.WriteLine(result2);
        Console.ReadLine();
    }
}
Output
16385
16385
I just want to know how this happens.
The docs for BitConverter.ToInt32 actually have some pretty good examples. Assuming BitConverter.IsLittleEndian returns true, array[0] is the least significant byte, as you've shown... although array[3] isn't just the sign bit: it's the most significant byte, which includes the sign bit (as bit 7), but whose remaining bits are magnitude.
So in your case, the least significant byte is 1, and the next byte is 64 - so the result is:
( 1 * (1 << 0) ) + // Bottom 8 bits
(64 * (1 << 8) ) + // Next 8 bits, i.e. multiply by 256
( 0 * (1 << 16)) + // Next 8 bits, i.e. multiply by 65,536
( 0 * (1 << 24)) // Top 7 bits and sign bit, multiply by 16,777,216
which is 16385. If the sign bit were set, you'd need to consider the two cases differently, but in this case it's simple.
It converts as if it were a number in base 256. So in your case: 1 + 64 * 256 = 16385.
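In code, that base-256 reconstruction is just a loop (a sketch using the array from the question):
int result = 0;
for (int i = 0; i < 4; i++)
    result |= array[i] << (8 * i); // each byte is one base-256 digit
// result == 16385 for { 1, 64, 0, 0 }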
Looking at the .NET 4.0 Framework reference source, BitConverter does work the way Jon's answer describes, though it uses pointers (unsafe code) to work with the array.
However, if the second argument (i.e., startIndex) is divisible by 4 (as in your example), the framework takes a shortcut. It takes a byte pointer to value[startIndex], casts it to an int pointer, then dereferences it. This trick works regardless of whether IsLittleEndian is true.
From a high level, this basically just means the code points at 4 bytes in the byte array and categorically declares, "the chunk of memory over there is an int!" (and then returns a copy of it). This makes perfect sense when you take into account that, under the hood, an int is just a chunk of memory.
Below is the source code of the framework's ToUInt32 method:
return (uint)ToInt32(value, startIndex);
array[0] = 1;  // 0x01 (lowest)
array[1] = 64; // 0x40
array[2] = 0;  // 0x00
array[3] = 0;  // 0x00 (sign bit)
If you combine the hex values, you get 0x00004001, which is 16385.
The MSDN documentation explains everything.
You can look for yourself - https://referencesource.microsoft.com/#mscorlib/system/bitconverter.cs,e8230d40857425ba
If the data is word-aligned, it will simply cast the memory pointer to an Int32 and dereference it:
return *((int *) pbyte);
Otherwise, it uses bitwise logic to assemble the value from the bytes at the memory pointer.
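That fallback looks roughly like this for the little-endian case (paraphrased from the reference source):
return *pbyte | (*(pbyte + 1) << 8) | (*(pbyte + 2) << 16) | (*(pbyte + 3) << 24);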
For those of you who are having trouble with little-endian and big-endian byte order, I use the following wrapper functions to take care of it. (They read the input bytes as big-endian data regardless of the machine's architecture, and they need using System.Linq for Skip/Take/Reverse.)
public static Int16 ToInt16(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        // Reverse the slice so the big-endian input reads correctly.
        return BitConverter.ToInt16(data.Skip(offset).Take(2).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt16(data, offset);
}
public static Int32 ToInt32(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt32(data.Skip(offset).Take(4).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt32(data, offset);
}
public static Int64 ToInt64(byte[] data, int offset)
{
    if (BitConverter.IsLittleEndian)
    {
        return BitConverter.ToInt64(data.Skip(offset).Take(8).Reverse().ToArray(), 0);
    }
    return BitConverter.ToInt64(data, offset);
}
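For example, reading a big-endian 32-bit value from a buffer (hypothetical data):
byte[] data = { 0x00, 0x00, 0x40, 0x01 }; // 16385, big-endian
int value = ToInt32(data, 0);             // 16385 on any architecture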
