I am iterating through an array of bytes and adding the values of another byte array to it in a for loop.
var random = new Random();
byte[] bytes = new byte[20_000_000];
byte[] bytes2 = new byte[20_000_000];
for (int i = 0; i < bytes.Length; i++)
{
bytes[i] = (byte)random.Next(255);
}
for (int i = 0; i < bytes.Length; i++)
{
bytes2[i] = (byte)random.Next(255);
}
//how to optimize the part below
for (int i = 0; i < bytes.Length; i++)
{
bytes[i] += bytes2[i];
}
Is there any way to speed up the process so that it is faster than linear?
You could use Vector:
static void Add(Span<byte> dst, ReadOnlySpan<byte> src)
{
Span<Vector<byte>> dstVec = MemoryMarshal.Cast<byte, Vector<byte>>(dst);
ReadOnlySpan<Vector<byte>> srcVec = MemoryMarshal.Cast<byte, Vector<byte>>(src);
for (int i = 0; i < dstVec.Length; ++i)
{
dstVec[i] += srcVec[i];
}
for (int i = dstVec.Length * Vector<byte>.Count; i < dst.Length; ++i)
{
dst[i] += src[i];
}
}
It will go even faster if you use a pointer here to align one of your arrays.
Pad the array length to the next highest multiple of 8. (It already is in your example.)
Use an unsafe context to create two ulong pointers to the start of the existing byte arrays, then use a for loop that iterates bytes.Length / 8 times, adding 8 bytes at a time.
On my system this runs in less than 13 milliseconds, compared to 105 milliseconds for the original code.
You must add the /unsafe option to use this code. Open the project properties and select "allow unsafe code".
var random = new Random();
byte[] bytes = new byte[20_000_000];
byte[] bytes2 = new byte[20_000_000];
int Len = bytes.Length >> 3; // >>3 is the same as / 8
ulong MASK = 0x8080808080808080;    // the high bit of each byte
ulong MASKINV = 0x7f7f7f7f7f7f7f7f; // the low 7 bits of each byte
//Sanity check
if((bytes.Length & 7) != 0) throw new Exception("bytes.Length is not a multiple of 8");
if((bytes2.Length & 7) != 0) throw new Exception("bytes2.Length is not a multiple of 8");
unsafe
{
//Add 8 bytes at a time, taking into account overflow between bytes
fixed (byte* pbBytes = &bytes[0])
fixed (byte* pbBytes2 = &bytes2[0])
{
ulong* pBytes = (ulong*)pbBytes;
ulong* pBytes2 = (ulong*)pbBytes2;
for (int i = 0; i < Len; i++)
{
pBytes[i] = ((pBytes2[i] & MASKINV) + (pBytes[i] & MASKINV)) ^ ((pBytes[i] ^ pBytes2[i]) & MASK);
}
}
}
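Wrapped as a reusable helper, the same idea might look like the sketch below (based on the code above; the method name AddBytesSwar is mine):
static unsafe void AddBytesSwar(byte[] dst, byte[] src)
{
const ulong MASK = 0x8080808080808080;    // high bit of each byte
const ulong MASKINV = 0x7f7f7f7f7f7f7f7f; // low 7 bits of each byte
if (dst.Length != src.Length || (dst.Length & 7) != 0)
throw new ArgumentException("arrays must have equal lengths that are a multiple of 8");
fixed (byte* pd = &dst[0])
fixed (byte* ps = &src[0])
{
ulong* d = (ulong*)pd;
ulong* s = (ulong*)ps;
int len = dst.Length >> 3;
for (int i = 0; i < len; i++)
{
// add the low 7 bits of each byte, then fix up the high bit of each byte with XOR
d[i] = ((d[i] & MASKINV) + (s[i] & MASKINV)) ^ ((d[i] ^ s[i]) & MASK);
}
}
}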
You can utilize all your processors/cores, assuming that your machine has more than one.
Parallel.ForEach(Partitioner.Create(0, bytes.Length), range =>
{
for (int i = range.Item1; i < range.Item2; i++)
{
bytes[i] += bytes2[i];
}
});
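The two ideas can also be combined. The sketch below is only illustrative; it assumes the vectorized Add(Span<byte>, ReadOnlySpan<byte>) helper from the earlier answer is in scope, plus using System, System.Collections.Concurrent and System.Threading.Tasks:
Parallel.ForEach(Partitioner.Create(0, bytes.Length), range =>
{
// each partition runs the SIMD helper on its own slice; the helper handles any tail bytes
int start = range.Item1;
int length = range.Item2 - range.Item1;
Add(bytes.AsSpan(start, length), bytes2.AsSpan(start, length));
});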
Update: The Vector<T> class can also be used in .NET Framework. It requires the package System.Numerics.Vectors. It offers the advantage of parallelization in a single core, by issuing a Single Instruction to Multiple Data (SIMD). Most current processors are SIMD-enabled. It is only enabled for 64-bit processes, so the flag [Prefer 32-bit] must be unchecked. On 32-bit processes the property Vector.IsHardwareAccelerated returns false, and the performance is bad.
using System.Numerics;
/// <summary>Adds each pair of elements in two arrays, and replaces the
/// left array element with the result.</summary>
public static void Add_UsingVector(byte[] left, byte[] right, int start, int length)
{
int i = start;
int step = Vector<byte>.Count; // 16 on this machine (the count depends on the hardware)
int end = start + length - step + 1;
for (; i < end; i += step)
{
// Vectorize 16 bytes from each array
var vector1 = new Vector<byte>(left, i);
var vector2 = new Vector<byte>(right, i);
vector1 += vector2; // Vector arithmetic is always unchecked (overflow wraps around)
vector1.CopyTo(left, i);
}
for (; i < start + length; i++) // Process the last few elements
{
unchecked { left[i] += right[i]; }
}
}
This runs 4-5 times faster than a simple loop, without utilizing more than one thread (25% CPU consumption in a 4-core PC).
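A usage sketch (the array names here are only for illustration):
byte[] left = new byte[20_000_000];
byte[] right = new byte[20_000_000];
var random = new Random();
random.NextBytes(left);
random.NextBytes(right);
Add_UsingVector(left, right, 0, left.Length); // left[i] += right[i] (wrapping) for every i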
Below is the code to visualize what needs to be done. I am looking for a solution that can do it faster. One option is to sum two arrays using bit manipulation (https://stackoverflow.com/a/55945544/4791668). I wonder if there is any way to do it the way described in the link and compute the average at the same time.
var random = new Random();
byte[] bytes = new byte[20_000_000];
byte[] bytes2 = new byte[20_000_000];
for (int i = 0; i < bytes.Length; i++)
{
bytes[i] = (byte)random.Next(255);
}
for (int i = 0; i < bytes.Length; i++)
{
bytes2[i] = (byte)random.Next(255);
}
//how to optimize the part below
for (int i = 0; i < bytes.Length; i++)
{
bytes[i] = (byte)((bytes[i] + bytes2[i]) / 2);
}
/////////// Solution that needs to be improved. It doesn't do the average part.
var random = new Random();
byte[] bytes = new byte[20_000_000];
byte[] bytes2 = new byte[20_000_000];
int Len = bytes.Length >> 3; // >>3 is the same as / 8
ulong MASK = 0x8080808080808080;
ulong MASKINV = 0x7f7f7f7f7f7f7f7f;
//Sanity check
if((bytes.Length & 7) != 0) throw new Exception("bytes.Length is not a multiple of 8");
if((bytes2.Length & 7) != 0) throw new Exception("bytes2.Length is not a multiple of 8");
unsafe
{
//Add 8 bytes at a time, taking into account overflow between bytes
fixed (byte* pbBytes = &bytes[0])
fixed (byte* pbBytes2 = &bytes2[0])
{
ulong* pBytes = (ulong*)pbBytes;
ulong* pBytes2 = (ulong*)pbBytes2;
for (int i = 0; i < Len; i++)
{
pBytes[i] = ((pBytes2[i] & MASKINV) + (pBytes[i] & MASKINV)) ^ ((pBytes[i] ^ pBytes2[i]) & MASK);
}
}
}
Using bit manipulation, you can compute the average of the bytes in parallel:
ulong NOLOW = 0xfefefefefefefefe; // every bit except the lowest bit of each byte
unsafe {
//Add 8 bytes at a time, taking into account overflow between bytes
fixed (byte* pbBytes = &bytes[0])
fixed (byte* pbBytes2 = &bytes2[0])
fixed (byte* pbAns2 = &ans2[0]) {
ulong* pBytes = (ulong*)pbBytes;
ulong* pBytes2 = (ulong*)pbBytes2;
ulong* pAns2 = (ulong*)pbAns2;
for (int i = 0; i < Len; i++) {
pAns2[i] = (pBytes2[i] & pBytes[i]) + (((pBytes[i] ^ pBytes2[i]) & NOLOW) >> 1);
}
}
}
I modified the code to store the result in a separate ans2 byte array since I needed the source arrays to compare the two methods. Obviously you could store back to the original bytes[] if desired.
This is based on this formula: x+y == (x&y)+(x|y) == (x&y)*2 + (x^y) == (x&y)<<1 + (x^y), which means you can compute (x+y)/2 == (x&y)+((x^y) >> 1). Since we know we are computing 8 bytes at a time, we can mask the low order bit out of every byte so we shift in a 0 bit for the high order bit of every byte when we shift all 8 bytes.
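A quick, purely illustrative check of that identity over all byte pairs (not part of the original answer):
for (int x = 0; x < 256; x++)
{
for (int y = 0; y < 256; y++)
{
int truncatedAverage = (x + y) / 2;     // integer division, as in the original loop
int viaBits = (x & y) + ((x ^ y) >> 1); // carry-free formulation
if (truncatedAverage != viaBits) throw new Exception("the identity failed, which should never happen");
}
}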
On my PC this runs 2x to 3x faster (trending to 2x for longer arrays) than the (byte) sum.
My goal is to outperform List<T>, so I am testing arrays and found a few starting points to begin testing with.
I have tested this before while trying to capture bitmaps off screen, and the tests proved the approach is sufficient.
My question is: which data types, other than byte[], could use this Copy() code?
Say I want a data storage unit that takes advantage of unmanaged / unsafe code:
public unsafe struct NusT
{
public unsafe int vi;
public unsafe bool vb;
}
Instead of populating a list, I initialise the struct as follows: 1)
NusT n;
n.vi = 90;
n.vb = true;
I tested this after testing the following: 2)
NusT n = new NusT() { vi = 90, vb = true };
which in turn I tested after: 3)
NusT n = new NusT(90, true);
I think the last two had the same results, but the first one is blazing fast, as I do not create an object, so:
NusT n      -> instructions: 1
n.vi = 90   -> instructions: 1
n.vb = true -> instructions: 1
Now I have minimized what I could. This all started with a class, which was even worse than 2) and 3) above, as it also uses properties:
class bigAndSlow
{
public int a { get; private set;}
public bool b { get; private set;}
public string c { get; private set;}
public bigAndSlow(int .., bool .., string ..)
{
initialise ...
}
}
So now that the final decision is:
public unsafe struct NusT
{
public unsafe int vi;
public unsafe bool vb;
}
how can I implement this blazingly fast data unit so it can use Copy() on:
NusT[] NustyArr;
static unsafe void Copy(byte[] src, int srcIndex,
byte[] dst, int dstIndex, int count)
{
if (src == null || srcIndex < 0 ||
dst == null || dstIndex < 0 || count < 0)
{
throw new ArgumentException();
}
int srcLen = src.Length;
int dstLen = dst.Length;
if (srcLen - srcIndex < count ||
dstLen - dstIndex < count)
{
throw new ArgumentException();
}
// The following fixed statement pins the location of
// the src and dst objects in memory so that they will
// not be moved by garbage collection.
fixed (byte* pSrc = src, pDst = dst)
{
byte* ps = pSrc;
byte* pd = pDst;
// Loop over the count in blocks of 4 bytes, copying an
// integer (4 bytes) at a time:
for (int n = 0; n < count / 4; n++)
{
*((int*)pd) = *((int*)ps);
pd += 4;
ps += 4;
}
// Complete the copy by moving any bytes that weren't
// moved in blocks of 4:
for (int n = 0; n < count % 4; n++)
{
*pd = *ps;
pd++;
ps++;
}
}
}
static void Main(string[] args)
{
byte[] a = new byte[100];
byte[] b = new byte[100];
for (int i = 0; i < 100; ++i)
a[i] = (byte)i;
Copy(a, 0, b, 0, 100);
Console.WriteLine("The first 10 elements are:");
for (int i = 0; i < 10; ++i)
Console.Write(b[i] + " ");
Console.WriteLine("\n");
}
Yes, you can do this with any blittable type. The blittable types are primitive types (integer and float types, but not bool), one-dimensional arrays of blittable types and structures containing fields of blittable types only.
The structure NusT is not blittable because it contains a bool field. Just change it to byte and you will get a blittable structure for which you can obtain a pointer.
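For example (a sketch; representing the bool as a byte with 0/1 is my assumption about the encoding):
public struct NusT
{
public int vi;
public byte vb; // 1 = true, 0 = false; using byte instead of bool keeps the struct blittable
}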
Here is the code that works for any type:
static unsafe void UnsafeCopy<T>(T[] src, int srcIndex, T[] dst, int dstIndex, int count) where T : struct
{
if (src == null || srcIndex < 0 || dst == null || dstIndex < 0 || count < 0 || srcIndex + count > src.Length || dstIndex + count > dst.Length)
{
throw new ArgumentException();
}
int elem_size = Marshal.SizeOf(typeof(T));
GCHandle gch1 = GCHandle.Alloc(src, GCHandleType.Pinned);
GCHandle gch2 = GCHandle.Alloc(dst, GCHandleType.Pinned);
byte* ps = (byte*)gch1.AddrOfPinnedObject().ToPointer() + srcIndex * elem_size;
byte* pd = (byte*)gch2.AddrOfPinnedObject().ToPointer() + dstIndex * elem_size;
int len = count * elem_size;
try
{
// Loop over the count in blocks of 4 bytes, copying an
// integer (4 bytes) at a time:
for (int n = 0; n < len / 4; n++)
{
*((int*)pd) = *((int*)ps);
pd += 4;
ps += 4;
}
// Complete the copy by moving any bytes that weren't
// moved in blocks of 4:
for (int n = 0; n < len % 4; n++)
{
*pd = *ps;
pd++;
ps++;
}
}
finally
{
gch1.Free();
gch2.Free();
}
}
But I strongly advise you to use Array.Copy. It is already the most efficient way to copy arrays. See the benchmarks below for copying an array of 1M elements:
byte[] Array.Copy: 57,491 us
byte[] FastCopy: 138,198 us
byte[] JustCopy: 792,399 us
byte[] UnsafeCopy: 138,575 us
byte[] MemCpy: 57,667 us
NusT[] Array.Copy: 1,197 ms
NusT[] JustCopy: 1,843 ms
NusT[] UnsafeCopy: 1,550 ms
NusT[] MemCpy: 1,208 ms
FastCopy is your copy function, UnsafeCopy is my templated function, and JustCopy is a simple implementation: for (int i = 0; i < src.Length; i++) dst[i] = src[i];. MemCpy is a P/Invoke call to the msvcrt memcpy function.
The verdict is: using pointers in C# for performance improvement is a bad practice. JIT does not optimize the unsafe code. The best solution is to move performance critical code to native DLLs.
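For reference, a typical msvcrt memcpy P/Invoke declaration looks like the sketch below; this is illustrative, not necessarily the exact declaration used in the benchmark, and it needs using System and using System.Runtime.InteropServices:
[DllImport("msvcrt.dll", EntryPoint = "memcpy", CallingConvention = CallingConvention.Cdecl)]
static extern IntPtr memcpy(IntPtr dest, IntPtr src, UIntPtr count);
static unsafe void MemCpy(byte[] dst, byte[] src, int count)
{
// pin both arrays and hand the raw pointers to the C runtime's memcpy
fixed (byte* pd = dst, ps = src)
{
memcpy((IntPtr)pd, (IntPtr)ps, (UIntPtr)(uint)count);
}
}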
I am using a segmented integer counter (byte array) in a parallel CTR implementation (encryption). The counter needs to be incremented in sizes corresponding to the blocks of data being processed by each processor. So, the number needs to be incremented by more than a single bit value. In other words.. the byte array is acting as a Big Integer, and I need to increase the sum value of the byte array by an integer factor.
I am currently using the methods shown below with a while loop, but I am wondering if there is a bitwise method (&, |, ^, etc.), as using a loop seems very wasteful. Any ideas?
private void Increment(byte[] Counter)
{
int j = Counter.Length;
while (--j >= 0 && ++Counter[j] == 0) { }
}
/// <summary>
/// Increase a byte array by a numerical value
/// </summary>
/// <param name="Counter">Original byte array</param>
/// <param name="Count">Number to increase by</param>
/// <returns>Array with increased value [byte[]]</returns>
private byte[] Increase(byte[] Counter, Int32 Count)
{
byte[] buffer = new byte[Counter.Length];
Buffer.BlockCopy(Counter, 0, buffer, 0, Counter.Length);
for (int i = 0; i < Count; i++)
Increment(buffer);
return buffer;
}
The standard O(n) multi-precision add goes like this (assuming [0] is LSB):
static void Add(byte[] dst, byte[] src)
{
int carry = 0;
for (int i = 0; i < dst.Length; ++i)
{
byte odst = dst[i];
byte osrc = i < src.Length ? src[i] : (byte)0;
byte ndst = (byte)(odst + osrc + carry);
dst[i] = ndst;
carry = (ndst < odst || (osrc == 255 && carry == 1)) ? 1 : 0; // second test catches the case where the sum wraps exactly back to odst
}
}
It can help to think of this in terms of grade-school arithmetic, which is really all it is:
129
+ 123
-----
Remember how you'd perform an add and carry for each digit, starting from the least-significant digit? In this case, each digit is a byte in your array.
Rather than roll your own, have you considered using the arbitrary-precision BigInteger? It was actually created specifically for .NET's own crypto stuff.
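For illustration only, here is a sketch of what an Increase built on BigInteger might look like. It assumes the counter is an unsigned big-endian number (matching the Increment method above, which carries from the end of the array), drops any overflow beyond the counter length, and needs using System.Linq and using System.Numerics; the method name is mine:
private static byte[] IncreaseWithBigInteger(byte[] counter, int count)
{
// BigInteger expects little-endian bytes; the extra 0x00 keeps the value unsigned
var value = new BigInteger(counter.Reverse().Concat(new byte[] { 0 }).ToArray());
value += count;
byte[] littleEndian = value.ToByteArray(); // little-endian, may be shorter or longer than counter
byte[] result = new byte[counter.Length];
for (int i = 0; i < result.Length && i < littleEndian.Length; i++)
{
result[result.Length - 1 - i] = littleEndian[i]; // write back in big-endian order
}
return result;
}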
I used a variation of Cory Nelson's answer that creates the array with the correct endian order and returns a new array. This is quite a bit faster than the method first posted. Thanks for the help.
private byte[] Increase(byte[] Counter, int Count)
{
int carry = 0;
byte[] buffer = new byte[Counter.Length];
int offset = buffer.Length - 1;
byte[] cnt = BitConverter.GetBytes(Count);
byte osrc, odst, ndst;
Buffer.BlockCopy(Counter, 0, buffer, 0, Counter.Length);
for (int i = offset; i >= 0; i--) // >= 0 so a carry can propagate into the first byte
{
odst = buffer[i];
osrc = offset - i < cnt.Length ? cnt[offset - i] : (byte)0;
ndst = (byte)(odst + osrc + carry);
carry = ndst < odst ? 1 : 0;
buffer[i] = ndst;
}
return buffer;
}
I am writing a live-video imaging application and need to speed up this method. It's currently taking about 10ms to execute and I'd like to get it down to 2-3ms.
I've tried both Array.Copy and Buffer.BlockCopy and they both take ~30ms which is 3x longer than the manual copy.
One thought was to somehow copy 4 bytes as an integer and then paste them as an integer, thereby reducing 4 lines of code to one line of code. However, I'm not sure how to do that.
Another thought was to somehow use pointers and unsafe code to do this, but I'm not sure how to do that either.
All help is much appreciated. Thank you!
EDIT: Array sizes are: inputBuffer[327680], lookupTable[16384], outputBuffer[1310720]
public byte[] ApplyLookupTableToBuffer(byte[] lookupTable, ushort[] inputBuffer)
{
System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();
// Precalculate and initialize the variables
int lookupTableLength = lookupTable.Length;
int bufferLength = inputBuffer.Length;
byte[] outputBuffer = new byte[bufferLength * 4];
int outIndex = 0;
int curPixelValue = 0;
// For each pixel in the input buffer...
for (int curPixel = 0; curPixel < bufferLength; curPixel++)
{
outIndex = curPixel * 4; // Calculate the corresponding index in the output buffer
curPixelValue = inputBuffer[curPixel] * 4; // Retrieve the pixel value and multiply by 4 since the lookup table has 4 values (blue/green/red/alpha) for each pixel value
// If the multiplied pixel value falls within the lookup table...
if ((curPixelValue + 3) < lookupTableLength)
{
// Copy the lookup table value associated with the value of the current input buffer location to the output buffer
outputBuffer[outIndex + 0] = lookupTable[curPixelValue + 0];
outputBuffer[outIndex + 1] = lookupTable[curPixelValue + 1];
outputBuffer[outIndex + 2] = lookupTable[curPixelValue + 2];
outputBuffer[outIndex + 3] = lookupTable[curPixelValue + 3];
//System.Buffer.BlockCopy(lookupTable, curPixelValue, outputBuffer, outIndex, 4); // Takes 2-10x longer than just copying the values manually
//Array.Copy(lookupTable, curPixelValue, outputBuffer, outIndex, 4); // Takes 2-10x longer than just copying the values manually
}
}
Debug.WriteLine("ApplyLookupTableToBuffer(ms): " + sw.Elapsed.TotalMilliseconds.ToString("N2"));
return outputBuffer;
}
EDIT: I've updated the method keeping the same variable names so others can see how the code would translate based on HABJAN's solution below.
public byte[] ApplyLookupTableToBufferV2(byte[] lookupTable, ushort[] inputBuffer)
{
System.Diagnostics.Stopwatch sw = new System.Diagnostics.Stopwatch();
sw.Start();
// Precalculate and initialize the variables
int lookupTableLength = lookupTable.Length;
int bufferLength = inputBuffer.Length;
byte[] outputBuffer = new byte[bufferLength * 4];
//int outIndex = 0;
int curPixelValue = 0;
unsafe
{
fixed (byte* pointerToOutputBuffer = &outputBuffer[0])
fixed (byte* pointerToLookupTable = &lookupTable[0])
{
// Cast to integer pointers since groups of 4 bytes get copied at once
uint* lookupTablePointer = (uint*)pointerToLookupTable;
uint* outputBufferPointer = (uint*)pointerToOutputBuffer;
// For each pixel in the input buffer...
for (int curPixel = 0; curPixel < bufferLength; curPixel++)
{
// No need to multiply by 4 on the following 2 lines since the pointers are for integers, not bytes
// outIndex = curPixel; // This line is commented since we can use curPixel instead of outIndex
curPixelValue = inputBuffer[curPixel]; // Retrieve the pixel value
// lookupTableLength is in bytes, but the uint pointer indexes 4-byte entries
if (curPixelValue < (lookupTableLength >> 2))
{
outputBufferPointer[curPixel] = lookupTablePointer[curPixelValue];
}
}
}
}
Debug.WriteLine("2 ApplyLookupTableToBuffer(ms): " + sw.Elapsed.TotalMilliseconds.ToString("N2"));
return outputBuffer;
}
I did some tests, and I managed to achieve maximum speed by turning my code into unsafe code and using the RtlMoveMemory API. I found that Buffer.BlockCopy and Array.Copy were much slower than direct RtlMoveMemory usage.
So, at the end you will end up with something like this:
fixed (byte* ptrOutput = &outputBuffer[0])
{
// ptrInput is assumed to be a pinned byte* to the source data
MoveMemory(ptrOutput, ptrInput, 4);
}
[DllImport("Kernel32.dll", EntryPoint = "RtlMoveMemory", SetLastError = false)]
private static unsafe extern void MoveMemory(void* dest, void* src, int size);
EDIT:
Ok, now that I've figured out your logic and done some tests, I've managed to speed up your method by almost 50%. Since you need to copy small data blocks (always 4 bytes), yes, you were right: RtlMoveMemory won't help here, and it's better to copy the data as an integer. Here is the final solution I came up with:
public static byte[] ApplyLookupTableToBufferV2(byte[] lookupTable, ushort[] inputBuffer)
{
int lookupTableLength = lookupTable.Length;
int bufferLength = inputBuffer.Length;
byte[] outputBuffer = new byte[bufferLength * 4];
int outIndex = 0, curPixelValue = 0;
unsafe
{
fixed (byte* ptrOutput = &outputBuffer[0])
fixed (byte* ptrLookup = &lookupTable[0])
{
uint* lkp = (uint*)ptrLookup;
uint* opt = (uint*)ptrOutput;
for (int index = 0; index < bufferLength; index++)
{
outIndex = index;
curPixelValue = inputBuffer[index];
// lookupTableLength is in bytes, but the uint pointer indexes 4-byte entries
if (curPixelValue < (lookupTableLength >> 2))
{
opt[outIndex] = lkp[curPixelValue];
}
}
}
}
return outputBuffer;
}
I renamed your method to ApplyLookupTableToBufferV1.
And here are my test result:
int tc1 = Environment.TickCount;
for (int i = 0; i < 200; i++)
{
byte[] a = ApplyLookupTableToBufferV1(lt, ib);
}
tc1 = Environment.TickCount - tc1;
Console.WriteLine("V1: " + tc1.ToString() + "ms");
Result - V1: 998 ms
int tc2 = Environment.TickCount;
for (int i = 0; i < 200; i++)
{
byte[] a = ApplyLookupTableToBufferV2(lt, ib);
}
tc2 = Environment.TickCount - tc2;
Console.WriteLine("V2: " + tc2.ToString() + "ms");
Result - V2: 473 ms
I have two byte arrays with the same length. I need to perform an XOR operation between each pair of bytes and then calculate the sum of the set bits.
For example:
11110000^01010101 = 10100101 -> so 1+1+1+1 = 4
I need do the same operation for each element in byte array.
Use a lookup table. There are only 256 possible values after XORing, so it's not exactly going to take a long time. Unlike izb's solution, though, I wouldn't suggest manually putting all the values in; compute the lookup table once at startup using one of the looping answers.
For example:
using System.Linq;
public static class ByteArrayHelpers
{
private static readonly int[] LookupTable =
Enumerable.Range(0, 256).Select(CountBits).ToArray();
private static int CountBits(int value)
{
int count = 0;
for (int i=0; i < 8; i++)
{
count += (value >> i) & 1;
}
return count;
}
public static int CountBitsAfterXor(byte[] array)
{
int xor = 0;
foreach (byte b in array)
{
xor ^= b;
}
return LookupTable[xor];
}
}
(You could make it an extension method if you really wanted...)
Note the use of byte[] in the CountBitsAfterXor method - you could make it an IEnumerable<byte> for more generality, but iterating over an array (which is known to be an array at compile-time) will be faster. Probably only microscopically faster, but hey, you asked for the fastest way :)
I would almost certainly actually express it as
public static int CountBitsAfterXor(IEnumerable<byte> data)
in real life, but see which works better for you.
Also note the type of the xor variable as an int. In fact, there's no XOR operator defined for byte values, and if you made xor a byte it would still compile due to the nature of compound assignment operators, but it would be performing a cast on each iteration - at least in the IL. It's quite possible that the JIT would take care of this, but there's no need to even ask it to :)
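A minimal sketch of that IEnumerable<byte> form, assuming the same LookupTable field as above (and using System.Collections.Generic):
public static int CountBitsAfterXor(IEnumerable<byte> data)
{
int xor = 0;
foreach (byte b in data)
{
xor ^= b;
}
return LookupTable[xor];
}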
Fastest way would probably be a 256-element lookup table...
int[] lut =
{
/*0x00*/ 0,
/*0x01*/ 1,
/*0x02*/ 1,
/*0x03*/ 2
...
/*0xFE*/ 7,
/*0xFF*/ 8
};
e.g.
11110000^01010101 = 10100101 -> lut[165] == 4
This is more commonly referred to as bit counting. There are literally dozens of different algorithms for doing this. Here is one site which lists a few of the more well-known methods. There are even CPU-specific instructions for doing this.
Theoretically, Microsoft could add a BitArray.CountSetBits function that gets JITed with the best algorithm for that CPU architecture. I, for one, would welcome such an addition.
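As an aside (an addition, not part of the original answer): on .NET Core 3.0 and later, System.Numerics.BitOperations.PopCount exposes that hardware instruction when it is available. A sketch of how it would apply to this question:
using System.Numerics;
// Counts the set bits of the element-wise XOR of two equal-length byte arrays.
static int XorPopCount(byte[] left, byte[] right)
{
int total = 0;
for (int i = 0; i < left.Length; i++)
{
total += BitOperations.PopCount((uint)(left[i] ^ right[i]));
}
return total;
}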
As I understood it you want to sum the bits of each XOR between the left and right bytes.
for (int b = 0; b < left.Length; b++) {
int num = left[b] ^ right[b];
int sum = 0;
for (int i = 0; i < 8; i++) {
sum += (num >> i) & 1;
}
// do something with sum maybe?
}
I'm not sure if you mean sum the bytes or the bits.
To sum the bits within a byte, this should work:
int nSum = 0;
for (int i=0; i<=7; i++)
{
nSum += (byte_val>>i) & 1;
}
You would then need the xoring, and array looping around this, of course.
The following should do it:
int BitXorAndSum(byte[] left, byte[] right) {
int sum = 0;
for ( var i = 0; i < left.Length; i++) {
sum += SumBits((byte)(left[i] ^ right[i]));
}
return sum;
}
int SumBits(byte b) {
var sum = 0;
for (var i = 0; i < 8; i++) {
sum += (0x1) & (b >> i);
}
return sum;
}
It can be rewritten to use ulong and unsafe pointers, but the byte version is easier to understand:
static int BitCount(byte num)
{
// 0x5 = 0101 (bit) 0x55 = 01010101
// 0x3 = 0011 (bit) 0x33 = 00110011
// 0xF = 1111 (bit) 0x0F = 00001111
uint count = num;
count = ((count >> 1) & 0x55) + (count & 0x55);
count = ((count >> 2) & 0x33) + (count & 0x33);
count = ((count >> 4) & 0x0F) + (count & 0x0F);
return (int)count;
}
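For reference, a ulong version of the same idea might look like this (a sketch; the masks follow the standard 64-bit SWAR population count):
// counts the set bits in 8 bytes at once
static int BitCount64(ulong x)
{
x = x - ((x >> 1) & 0x5555555555555555UL);                           // 2-bit sums
x = (x & 0x3333333333333333UL) + ((x >> 2) & 0x3333333333333333UL);  // 4-bit sums
x = (x + (x >> 4)) & 0x0F0F0F0F0F0F0F0FUL;                           // per-byte sums
return (int)((x * 0x0101010101010101UL) >> 56);                      // add up all byte sums
}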
A general function to count bits could look like:
int Count1(byte[] a)
{
int count = 0;
for (int i = 0; i < a.Length; i++)
{
byte b = a[i];
while (b != 0)
{
count++;
b = (byte)((int)b & (int)(b - 1));
}
}
return count;
}
The fewer 1-bits there are, the faster this works. It simply loops over each byte and toggles the lowest 1-bit of that byte until the byte becomes 0. The casts are necessary so that the compiler stops complaining about the type widening and narrowing.
Your problem could then be solved by using this:
int Count1Xor(byte[] a1, byte[] a2)
{
int count = 0;
for (int i = 0; i < Math.Min(a1.Length, a2.Length); i++)
{
byte b = (byte)((int)a1[i] ^ (int)a2[i]);
while (b != 0)
{
count++;
b = (byte)((int)b & (int)(b - 1));
}
}
return count;
}
A lookup table should be the fastest, but if you want to do it without a lookup table, this will work for bytes in just 10 operations.
public static int BitCount(byte value) {
int v = value - ((value >> 1) & 0x55);
v = (v & 0x33) + ((v >> 2) & 0x33);
return ((v + (v >> 4) & 0x0F));
}
This is a byte version of the general bit counting function described at Sean Eron Anderson's bit fiddling site.