I have some code that manages data received from an array of sensors. The PIC that controls the sensors uses 8 SAR-ADCs in parallel to read 4096 data bytes: it reads the most significant bit of the first 8 bytes, then their second-most-significant bit, and so on down to the eighth (least significant) bit.
Basically, for every 8 bytes it reads, it creates (and sends on to the computer) 8 bytes laid out as follows:
// rxData[0] = MSB[7] MSB[6] MSB[5] MSB[4] MSB[3] MSB[2] MSB[1] MSB[0]
// rxData[1] = B6[7] B6[6] B6[5] B6[4] B6[3] B6[2] B6[1] B6[0]
// rxData[2] = B5[7] B5[6] B5[5] B5[4] B5[3] B5[2] B5[1] B5[0]
// rxData[3] = B4[7] B4[6] B4[5] B4[4] B4[3] B4[2] B4[1] B4[0]
// rxData[4] = B3[7] B3[6] B3[5] B3[4] B3[3] B3[2] B3[1] B3[0]
// rxData[5] = B2[7] B2[6] B2[5] B2[4] B2[3] B2[2] B2[1] B2[0]
// rxData[6] = B1[7] B1[6] B1[5] B1[4] B1[3] B1[2] B1[1] B1[0]
// rxData[7] = LSB[7] LSB[6] LSB[5] LSB[4] LSB[3] LSB[2] LSB[1] LSB[0]
This pattern is repeated for all 4096 bytes that the system reads and that I process.
If each group of 8 bytes read is taken separately, it can be seen as an 8-by-8 array of bits. I need to mirror this array around the diagonal running from its bottom-left corner (LSB[7]) to its top-right corner (MSB[0]). Once this is done, the rows of the resulting 8-by-8 array of bits contain the correct data bytes read from the sensors. I used to perform this operation on the PIC controller, using left shifts and so on, but it slowed the system down quite a lot. This operation is therefore now performed on the computer where we process the data, using the following code:
BitArray ba = new BitArray(rxData);
BitArray ba2 = new BitArray(ba.Count);
for (int i = 0; i < ba.Count; i++)
{
// Within each 8-byte group: output byte r gets its bit c (value 2^c) from input byte (7 - c), bit (7 - r).
ba2[i] = ba[(((int)(i / 64)) + 1) * 64 - 1 - (i % 8) * 8 - (int)(i / 8) + ((int)(i / 64)) * 8];
}
byte[] data = new byte[rxData.Length];
ba2.CopyTo(data, 0);
Note that THIS CODE WORKS.
rxData is the received byte array.
The formula I use for the index of ba[] in the loop implements the mirroring of the arrays I described above. The size of the array is checked elsewhere to make sure it always contains the correct number (4096) of bytes.
So far this was the background for my problem.
In each processing loop of my system I need to perform that mirroring twice, because my data processing works on the difference between two consecutively acquired arrays. Speed is important for my system (it is possibly the main constraint on the processing), and the mirroring accounts for between 10% and 30% of the execution time of my processing.
I would like to know if there are alternative solutions I might compare against my mirroring code and that might allow me to improve performance. Using BitArrays is the only way I found to address the individual bits in the received bytes.
Operating on individual bits is very slow, and creating two BitArrays and copying data back and forth adds further overhead.
The simplest obvious solution is just extracting the bits and combining them again. You could do it with a loop, but since it uses both left and right shifts at the same time you would need a function that handles negative shift amounts, so I've unrolled it here for easier understanding and more speed:
out[0] = (byte)(((rxData[0] & 0x80) >> 0) | ((rxData[1] & 0x80) >> 1) | ((rxData[2] & 0x80) >> 2) | ((rxData[3] & 0x80) >> 3) |
((rxData[4] & 0x80) >> 4) | ((rxData[5] & 0x80) >> 5) | ((rxData[6] & 0x80) >> 6) | ((rxData[7] & 0x80) >> 7));
out[1] = (byte)(((rxData[0] & 0x40) << 1) | ((rxData[1] & 0x40) >> 0) | ((rxData[2] & 0x40) >> 1) | ((rxData[3] & 0x40) >> 2) |
((rxData[4] & 0x40) >> 3) | ((rxData[5] & 0x40) >> 4) | ((rxData[6] & 0x40) >> 5) | ((rxData[7] & 0x40) >> 6));
out[2] = (byte)(((rxData[0] & 0x20) << 2) | ((rxData[1] & 0x20) << 1) | ((rxData[2] & 0x20) >> 0) | ((rxData[3] & 0x20) >> 1) |
((rxData[4] & 0x20) >> 2) | ((rxData[5] & 0x20) >> 3) | ((rxData[6] & 0x20) >> 4) | ((rxData[7] & 0x20) >> 5));
out[3] = (byte)(((rxData[0] & 0x10) << 3) | ((rxData[1] & 0x10) << 2) | ((rxData[2] & 0x10) << 1) | ((rxData[3] & 0x10) >> 0) |
((rxData[4] & 0x10) >> 1) | ((rxData[5] & 0x10) >> 2) | ((rxData[6] & 0x10) >> 3) | ((rxData[7] & 0x10) >> 4));
out[4] = (byte)(((rxData[0] & 0x08) << 4) | ((rxData[1] & 0x08) << 3) | ((rxData[2] & 0x08) << 2) | ((rxData[3] & 0x08) << 1) |
((rxData[4] & 0x08) >> 0) | ((rxData[5] & 0x08) >> 1) | ((rxData[6] & 0x08) >> 2) | ((rxData[7] & 0x08) >> 3));
out[5] = (byte)(((rxData[0] & 0x04) << 5) | ((rxData[1] & 0x04) << 4) | ((rxData[2] & 0x04) << 3) | ((rxData[3] & 0x04) << 2) |
((rxData[4] & 0x04) << 1) | ((rxData[5] & 0x04) >> 0) | ((rxData[6] & 0x04) >> 1) | ((rxData[7] & 0x04) >> 2));
out[6] = (byte)(((rxData[0] & 0x02) << 6) | ((rxData[1] & 0x02) << 5) | ((rxData[2] & 0x02) << 4) | ((rxData[3] & 0x02) << 3) |
((rxData[4] & 0x02) << 2) | ((rxData[5] & 0x02) << 1) | ((rxData[6] & 0x02) >> 0) | ((rxData[7] & 0x02) >> 1));
out[7] = (byte)(((rxData[0] & 0x01) << 7) | ((rxData[1] & 0x01) << 6) | ((rxData[2] & 0x01) << 5) | ((rxData[3] & 0x01) << 4) |
((rxData[4] & 0x01) << 3) | ((rxData[5] & 0x01) << 2) | ((rxData[6] & 0x01) << 1) | ((rxData[7] & 0x01) >> 0));
Obviously this is still quite slow, because it operates byte by byte. An optimal solution would operate on multiple bytes at once, for example with SIMD and/or multithreading. Especially since you're doing this for lots of data, the .NET SIMD intrinsics would be extremely useful.
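For example, one 8-byte group can be transposed with the classic SSE2 MoveMask trick: MoveMask gathers the most significant bit of every byte lane into one output byte, and adding the vector to itself shifts every byte left by one bit without crossing lane boundaries. This is only a minimal sketch, assuming .NET Core 3.0+ with Sse2.IsSupported, with rxData as the received 8 bytes and data as the 8-byte output array:

using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

// Load the 8 received bytes into the low half of a 128-bit vector.
Vector128<byte> v = Vector128.Create(BitConverter.ToUInt64(rxData, 0), 0UL).AsByte();
for (int i = 0; i < 8; i++)
{
    // Collect the MSB of each byte lane: byte j's bit lands at bit j of the result.
    data[i] = (byte)Sse2.MoveMask(v);
    // v + v doubles every byte, i.e. shifts each byte left by one bit within its lane.
    v = Sse2.Add(v, v);
}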
You will probably find that BitVector performs a good deal better than BitArray.
BitVector32 is more efficient than BitArray for Boolean values and small integers that are used internally. A BitArray can grow indefinitely as needed, but it has the memory and performance overhead that a class instance requires. In contrast, a BitVector32 uses only 32 bits.
http://msdn.microsoft.com/en-us/library/system.collections.specialized.bitvector32.aspx
If you initialize an array of BitVector32 and operate on those, it should be faster than operating on BitArray as you do now.
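A minimal sketch of what that might look like, assuming the received bytes are first packed into 32-bit integers (you would still need the same index arithmetic to pick the mirrored bit positions):

using System.Collections.Specialized;

var vectors = new BitVector32[rxData.Length / 4];
for (int i = 0; i < vectors.Length; i++)
    vectors[i] = new BitVector32(BitConverter.ToInt32(rxData, i * 4));

// BitVector32 addresses individual bits through masks rather than indices.
int bit0 = BitVector32.CreateMask();      // mask for bit 0
int bit1 = BitVector32.CreateMask(bit0);  // mask for bit 1
bool firstBit = vectors[0][bit0];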
You may also get a performance boost if you use one thread to perform the mirroring and a second thread to perform the analysis of consecutive reads. The Task Parallel Library's Dataflow components provide a nice framework for that type of solution: you could have one source block that acquires the data buffers, one transform block that performs the mirroring, and one target block that performs the data processing.
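A minimal sketch of such a pipeline; Mirror(), Process() and AcquireBuffers() are placeholders for your own mirroring, analysis and acquisition code:

using System.Threading.Tasks.Dataflow;

// Transform block mirrors each received buffer; action block processes the mirrored data.
var mirrorBlock  = new TransformBlock<byte[], byte[]>(buffer => Mirror(buffer));
var processBlock = new ActionBlock<byte[]>(mirrored => Process(mirrored));

mirrorBlock.LinkTo(processBlock, new DataflowLinkOptions { PropagateCompletion = true });

// The acquisition loop acts as the source block: post each received buffer into the pipeline.
foreach (byte[] buffer in AcquireBuffers())
    mirrorBlock.Post(buffer);

mirrorBlock.Complete();
processBlock.Completion.Wait();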
This is actually the same as the "get column" problem on a bitboard, so it can be solved even more efficiently by treating the 8-byte group as a 64-bit integer:
byte get_byte(ulong matrix, int col)
{
const ulong column_mask = 0x8080808080808080UL;
const ulong magic = 0x2040810204081UL;
ulong column = ((matrix << col) & column_mask) * magic;
return (byte)(column >> 56);
}
// Actually the step below is not needed: you can read rxData directly into the `ulong`
// variable instead of a bit array. Remember to CHANGE THE ENDIANNESS if necessary.
ulong matrix = ((ulong)rxData[7] << 56) | ((ulong)rxData[6] << 48) | ((ulong)rxData[5] << 40) | ((ulong)rxData[4] << 32)
| ((ulong)rxData[3] << 24) | ((ulong)rxData[2] << 16) | ((ulong)rxData[1] << 8) | rxData[0];
for (int i = 0; i < 8; i++)
data[i] = get_byte(matrix, i);
On newer x86 CPUs you can use the PEXT instruction from the BMI2 instruction set. I'm not sure if there's any corresponding intrinsic in C#; if the intrinsic doesn't exist then you have to use native code like this:
data[i] = _pext_u64(matrix, column_mask >> i);
Update:
The instruction has been added to .NET Core 3.0 as the ParallelBitExtract() intrinsic, so it's now much easier and faster to use from C#:
// Requires: using System.Runtime.Intrinsics.X86; and a check of Bmi2.X64.IsSupported.
ulong matrix = BitConverter.ToUInt64(rxData, 0);
for (int i = 0; i < 8; i++)
data[i] = (byte)Bmi2.X64.ParallelBitExtract(matrix, 0x8080808080808080UL >> i);
There's also ParallelBitDeposit() for the inverse PDEP instruction.
Related
I need to add GetInt64 and SetInt64 extension methods for the ISession interface in ASP.NET Core so that we're able to store some long values.
The existing code for GetInt32 and SetInt32 is available on GitHub in SessionExtensions.cs.
I am trying to understand the pattern that is in use:
public static void SetInt32(this ISession session, string key, int value)
{
var bytes = new byte[]
{
(byte)(value >> 24),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static int? GetInt32(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 4)
{
return null;
}
return data[0] << 24 | data[1] << 16 | data[2] << 8 | data[3];
}
I had expected to see BitConverter.GetBytes, but for whatever reason the setter extracts each octet with right shifts, and the getter reassembles the octets with left shifts. I'm guessing this is about keeping the byte order independent of the machine, since the BitConverter methods return different byte orders depending on the CPU architecture in use.
Is there an obvious reason the code is written like this?
Would the following be a correct implementation for SetInt64/GetInt64?
public static void SetInt64(this ISession session, string key, long value)
{
var bytes = new byte[]
{
(byte)(value >> 56),
(byte)(0xFF & (value >> 48)),
(byte)(0xFF & (value >> 40)),
(byte)(0xFF & (value >> 32)),
(byte)(0xFF & (value >> 24)),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static long? GetInt64(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 8)
{
return null;
}
return data[0] << 56 | data[1] << 48 | data[2] << 40 | data[3] << 32 | data[4] << 24 | data[5] << 16 | data[6] << 8 | data[7];
}
You are almost right. The only problem is that in GetInt64 you need to cast each byte to long before shifting it; otherwise the bytes are promoted to int, the shifts are done in 32-bit arithmetic (where a shift count such as 56 is taken modulo 32), and the whole expression is evaluated as a 32-bit int that is only widened to long at the end.
public static void SetInt64(this ISession session, string key, long value)
{
var bytes = new byte[]
{
(byte)(value >> 56),
(byte)(0xFF & (value >> 48)),
(byte)(0xFF & (value >> 40)),
(byte)(0xFF & (value >> 32)),
(byte)(0xFF & (value >> 24)),
(byte)(0xFF & (value >> 16)),
(byte)(0xFF & (value >> 8)),
(byte)(0xFF & value)
};
session.Set(key, bytes);
}
public static long? GetInt64(this ISession session, string key)
{
var data = session.Get(key);
if (data == null || data.Length < 8)
{
return null;
}
return (long)data[0] << 56 | (long)data[1] << 48 | (long)data[2] << 40 | (long)data[3] << 32 | (long)data[4] << 24 | (long)data[5] << 16 | (long)data[6] << 8 | (long)data[7];
}
Ultimately this code and BitConverter produce the same result. BitConverter uses Unsafe internally to do this; maybe that's the reason it isn't used here.
https://source.dot.net/#System.Private.CoreLib/BitConverter.cs,107
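For comparison, a BitConverter-based version might look like the sketch below. Note that the stored byte order then depends on BitConverter.IsLittleEndian, whereas the shift-based version above always stores big-endian:

public static void SetInt64(this ISession session, string key, long value)
{
    // Byte order follows the machine (little-endian on x86/x64), unlike the shift version.
    session.Set(key, BitConverter.GetBytes(value));
}
public static long? GetInt64(this ISession session, string key)
{
    var data = session.Get(key);
    if (data == null || data.Length < 8)
    {
        return null;
    }
    return BitConverter.ToInt64(data, 0);
}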
Why the shifting?
Let's say you want to store 472 (a three-digit number) in one-digit placeholders. This is what you do:
Take the first digit from the right and put it in the first placeholder.
Shift the number one digit to the right and repeat the first step.
So it goes
472 ---> 2
047 ---> 7
004 ---> 4
This is exactly what's happening here, except that the placeholder is a byte, which holds 8 bits, and that is why the shifting happens 8 bits at a time.
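The same idea written out in code, first with the base-10 analogue and then with actual bytes (a small illustration; the variable names are made up):

// Base-10 analogue: peel off one digit at a time by "shifting" (dividing by 10).
int number = 472;
int ones     = number % 10;         // 2
int tens     = (number / 10) % 10;  // 7
int hundreds = (number / 100) % 10; // 4

// With bytes the "digit" is 8 bits wide, so the shifts go 8 bits at a time.
int value = 0x01020304;
byte b3 = (byte)(value >> 24);          // 0x01
byte b2 = (byte)(0xFF & (value >> 16)); // 0x02
byte b1 = (byte)(0xFF & (value >> 8));  // 0x03
byte b0 = (byte)(0xFF & value);         // 0x04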
I suspect this is an easy one.
I need to get a number from the first 4 bits and another number from the last 12 bits
of 2 bytes.
So here is what I have but it doesn't seem to be right:
byte[] data = new byte[2];
//assume byte array contains data
var _4bit = data[0] >> 4;
var _12bit = data[0] >> 8 | data[1] & 0xff;
data[0] >> 8 is 0. Remember that your data is defined as byte[], so each item holds 8 bits, which means you are effectively shifting ALL of the bits out of data[0].
Instead, you want to take the lowest 4 bits of that byte with a bitwise AND (00001111 = 0x0F) and then shift them leftwards as needed.
So try this:
var _4bit = data[0] >> 4;
var _12bit = ((data[0] & 0x0F) << 8) | (data[1] & 0xff);
It's also worth noting that the last & 0xFF is not needed, as data[1] is already a byte.
On bits, step by step:
byte[2] data = { aaaabbbb, cccccccc }
var _4bit = data[0] >> 4;
= aaaabbbb >> 4
= 0000aaaa
var _12bit = ( (data[0] & 0x0F) << 8) | ( data[1] & 0xff);
= ((aaaabbbb & 0x0F) << 8) | (cccccccc & 0xff);
= ( 0000bbbb << 8) | ( cccccccc );
= ( 0000bbbb00000000 ) | ( cccccccc );
= 0000bbbbcccccccc;
BTW, also note that the results of the & and | operators are typed as int, so 32 bits; I've omitted the leading zeroes and written everything as 8 bits only to keep it brief!
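A quick check with concrete values (just an illustration):

byte[] data = { 0xAB, 0xCD };
var _4bit  = data[0] >> 4;                      // 0xA   (10)
var _12bit = ((data[0] & 0x0F) << 8) | data[1]; // 0xBCD (3021)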
Ok, so I have two methods
public static long ReadLong(this byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long length = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24
| data[4] << 32 | data[5] << 40 | data[6] << 48 | data[7] << 56;
return length;
}
public static void WriteLong(this byte[] data, long i)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
data[0] = (byte)((i >> (8*0)) & 0xFF);
data[1] = (byte)((i >> (8*1)) & 0xFF);
data[2] = (byte)((i >> (8*2)) & 0xFF);
data[3] = (byte)((i >> (8*3)) & 0xFF);
data[4] = (byte)((i >> (8*4)) & 0xFF);
data[5] = (byte)((i >> (8*5)) & 0xFF);
data[6] = (byte)((i >> (8*6)) & 0xFF);
data[7] = (byte)((i >> (8*7)) & 0xFF);
}
So WriteLong works correctly (verified against BitConverter.GetBytes()). The problem is ReadLong. I have a fairly good understanding of this stuff, but I'm guessing what's happening is that the OR operations are done as 32-bit ints, so it rolls over at Int32.MaxValue. I'm not sure how to avoid that. My first instinct was to build an int from the lower half and an int from the upper half and combine them, but I'm not quite knowledgeable enough to know even where to start with that, so this is what I tried:
public static long ReadLong(byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long l1 = data[0] | data[1] << 8 | data[2] << 16 | data[3] << 24;
long l2 = data[4] | data[5] << 8 | data[6] << 16 | data[7] << 24;
return l1 | l2 << 32;
}
This didn't work though, at least not for larger numbers, it seems to work for everything below zero.
Here's how I run it
void Main()
{
var larr = new long[5]{
long.MinValue,
0,
long.MaxValue,
1,
-2000000000
};
foreach(var l in larr)
{
var arr = new byte[8];
WriteLong(arr, l);
Console.WriteLine(ByteString(arr));
var end = ReadLong(arr);
var end2 = BitConverter.ToInt64(arr,0);
Console.WriteLine(l + " == " + end + " == " + end2);
}
}
and here's what I get(using the modified ReadLong method)
0:0:0:0:0:0:0:128
-9223372036854775808 == -9223372036854775808 == -9223372036854775808
0:0:0:0:0:0:0:0
0 == 0 == 0
255:255:255:255:255:255:255:127
9223372036854775807 == -1 == 9223372036854775807
1:0:0:0:0:0:0:0
1 == 1 == 1
0:108:202:136:255:255:255:255
-2000000000 == -2000000000 == -2000000000
The problem is not the OR, it is the bit shift: it has to be done with longs. Currently the data[i] are implicitly converted to int, so the shifts are 32-bit shifts (and the split-into-two-ints version fails for a related reason: each int half is sign-extended when it is widened to long, which fills the upper 32 bits with ones whenever bit 31 happens to be set). Just change the operands to long and that's it, i.e.
public static long ReadLong(byte[] data)
{
if (data.Length < 8) throw new ArgumentOutOfRangeException("Not enough data");
long length = (long)data[0] | (long)data[1] << 8 | (long)data[2] << 16 | (long)data[3] << 24
| (long)data[4] << 32 | (long)data[5] << 40 | (long)data[6] << 48 | (long)data[7] << 56;
return length;
}
You are doing int arithmetic and then assigning to long, try:
long length = data[0] | data[1] << 8L | data[2] << 16L | data[3] << 24L
| data[4] << 32L | data[5] << 40L | data[6] << 48L | data[7] << 56L;
This should define your constants as longs forcing it to use long arithmetic.
EDIT: It turns out this may not work, according to the comments below: while the left operand of a shift can be of several types, the right operand (the shift count) must be an int, so the L suffix doesn't force long arithmetic here. Georg's should be the accepted answer.
How do I take the hex 0A 25 10 A2 and get the end result of 851.00625? This must be multiplied by 0.000005. I have tried the following code without success:
byte oct6 = 0x0A;
byte oct7 = 0x25;
byte oct8 = 0x10;
byte oct9 = 0xA2;
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 32))) * 0.000005M;
I am not getting 851.00625 as the BaseFrequency.
oct6 is being shifted 8 bits too far (32 instead of 24)
decimal BaseFrequency = Convert.ToDecimal((oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24))) * 0.000005M;
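A quick check of the corrected shifts, using the values from the question:

byte oct6 = 0x0A, oct7 = 0x25, oct8 = 0x10, oct9 = 0xA2;
uint raw = (uint)(oct9 | (oct8 << 8) | (oct7 << 16) | (oct6 << 24)); // 0x0A2510A2 = 170,201,250
decimal BaseFrequency = raw * 0.000005M;                             // 851.00625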
I'm trying to convert 4 bytes into a 32 bit unsigned integer.
I thought maybe something like:
UInt32 combined = (UInt32)((map[i] << 32) | (map[i+1] << 24) | (map[i+2] << 16) | (map[i+3] << 8));
But this doesn't seem to be working. What am I missing?
Your shifts are all off by 8. Shift by 24, 16, 8, and 0.
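Assuming map is the byte[] from the question, the corrected expression would look like this:

UInt32 combined = ((uint)map[i] << 24) | ((uint)map[i + 1] << 16)
                | ((uint)map[i + 2] << 8) | map[i + 3];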
Use the BitConverter class, specifically the BitConverter.ToInt32(byte[] value, int startIndex) overload (or BitConverter.ToUInt32() since you want an unsigned result).
You can always do something like this:
public static unsafe int ToInt32(byte[] value, int startIndex)
{
fixed (byte* numRef = &(value[startIndex]))
{
if ((startIndex % 4) == 0)
{
return *(((int*)numRef));
}
if (BitConverter.IsLittleEndian)
{
return (((numRef[0] | (numRef[1] << 8)) | (numRef[2] << 0x10)) | (numRef[3] << 0x18));
}
return ((((numRef[0] << 0x18) | (numRef[1] << 0x10)) | (numRef[2] << 8)) | numRef[3]);
}
}
But this would be reinventing the wheel, as this is actually how BitConverter.ToInt32() is implemented.