Fastest method to get an integer from part of the bits (C#)

I have a byte[] byteArray, usually with byteArray.Length = 1-3.
I need to decompose the array into bits, take some of those bits (for example, bits 5-17), and convert them to an Int32.
I tried this:
private static IEnumerable<bool> GetBitsStartingFromLSB(byte b)
{
    for (int i = 0; i < 8; i++)
    {
        yield return (b % 2 != 0);
        b = (byte)(b >> 1);
    }
}

public static Int32 Bits2Int(ref byte[] source, int offset, int length)
{
    List<bool> bools = source.SelectMany(GetBitsStartingFromLSB).ToList();
    bools = bools.GetRange(offset, length);
    bools.AddRange(Enumerable.Repeat(false, 32 - length).ToList());
    int[] array = new int[1];
    (new BitArray(bools.ToArray())).CopyTo(array, 0);
    return array[0];
}
But this method is too slow, and I have to call it very often.
How can I do this more efficiently?
Thanks a lot! Now I do this:
public static byte[] GetPartOfByteArray(byte[] source, int offset, int length)
{
    byte[] retBytes = new byte[length];
    Buffer.BlockCopy(source, offset, retBytes, 0, length);
    return retBytes;
}

public static Int32 Bits2Int(byte[] source, int offset, int length)
{
    if (source.Length > 4)
    {
        // Slice out at most 4 bytes, starting at the byte containing 'offset'.
        source = GetPartOfByteArray(source, offset / 8,
            (source.Length - offset / 8 > 3 ? 4 : source.Length - offset / 8));
        offset -= 8 * (offset / 8);
    }
    byte[] intBytes = new byte[4];
    source.CopyTo(intBytes, 0);
    Int32 full = BitConverter.ToInt32(intBytes, 0);
    Int32 mask = (1 << length) - 1;
    return (full >> offset) & mask;
}
And it works very fast!
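(For reference, a hypothetical call site, just to make the units clear: offset and length are both counted in bits.)

    // Illustrative only: take 13 bits starting at bit offset 5.
    byte[] data = { 0xAB, 0xCD, 0xEF };
    int value = Bits2Int(data, 5, 13);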

If you're after "fast", then ultimately you need to do this with bit logic, not LINQ etc. I'm not going to write actual code, but in outline (see the sketch after this list) you'd need to:
use your offset with / 8 and % 8 to find the starting byte and the bit-offset inside that byte
compose however many bytes you need - quite possibly up to 5 if you are after a 32-bit number (because of the possibility of an offset) - for example into a long, in whichever endianness (presumably big-endian?) you expect
use right-shift (>>) on the composed value to drop however-many bits you need to apply the bit-offset (i.e. value >>= offset % 8;)
mask out any bits you don't want; for example value &= ~(-1L << length); (the -1 gives you all-ones; the << length creates length zeros at the right hand edge, and the ~ swaps all zeros for ones and ones for zeros, so you now have length ones at the right hand edge)
if the value is signed, you'll need to think about how you want negatives to be handled, especially if you aren't always reading 32 bits
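A minimal sketch of those steps, not the answerer's actual code: it assumes little-endian byte order (matching the OP's accepted version above) and offset + length <= 32; Bits2IntFast is an illustrative name.

    public static int Bits2IntFast(byte[] source, int offset, int length)
    {
        int byteIndex = offset / 8;  // starting byte
        int bitOffset = offset % 8;  // bit offset inside that byte

        // Compose up to 5 bytes into a long, lowest byte first (little-endian).
        long value = 0;
        int bytesNeeded = (bitOffset + length + 7) / 8;
        for (int i = 0; i < bytesNeeded && byteIndex + i < source.Length; i++)
            value |= (long)source[byteIndex + i] << (8 * i);

        value >>= bitOffset;        // drop the bit-offset
        value &= ~(-1L << length);  // keep only 'length' bits at the right edge
        return (int)value;
    }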

First of all, you're asking for optimization. But the only things you've said are:
too slow
need to call it often
There's no information on:
how slow is too slow? have you measured the current code? have you estimated how fast you need it to be?
how often is "often"?
how large are the source byte arrays?
etc.
Optimization can be done in a multitude of ways. When asking for optimization, everything is important. For example, if source byte[] is 1 or 2 bytes long (yeah, may be ridiculous, but you didn't tell us), and if it rarely changes, then you could get very nice results by caching results. And so on.
So, no solutions from me, just a list of possible performance problems:
private static IEnumerable<bool> GetBitsStartingFromLSB(byte b) // A
{
    for (int i = 0; i < 8; i++)
    {
        yield return (b % 2 != 0); // A
        b = (byte)(b >> 1);
    }
}

public static Int32 Bits2Int(ref byte[] source, int offset, int length)
{
    List<bool> bools = source.SelectMany(GetBitsStartingFromLSB).ToList(); // A, B
    bools = bools.GetRange(offset, length); // B
    bools.AddRange(Enumerable.Repeat(false, 32 - length).ToList()); // C
    int[] array = new int[1]; // D
    (new BitArray(bools.ToArray())).CopyTo(array, 0); // D
    return array[0]; // D
}
A: LINQ is fun, but not fast unless done carefully. For each input byte, it takes 1 byte, splits it into 8 bools, and passes them around wrapped in a compiler-generated IEnumerable object *). Note that it all needs to be cleaned up later, too. You'd probably get better performance simply returning a new bool[8] or even a BitArray(size=8).
*) conceptually. In fact yield-return is lazy, so it's not 8 value objects + 1 reference object, but just 1 enumerable that generates items. But then, you're doing .ToList() in (B), so describing it that way isn't far from the truth.
A2: the 8 is hardcoded. Once you drop that pretty IEnumerable and change it to a constant-sized array-like thing, you can preallocate that array and pass it via parameter to GetBitsStartingFromLSB to further reduce the amount of temporary objects created and later thrown away. And since SelectMany visits items one-by-one without ever going back, that preallocated array can be reused.
B: Converts the whole source array to a stream of bools, then converts that to a List. Then it discards that whole list except for a small offset-length range. Why convert to a list at all? It's just another pack of objects wasted, and the internal data is copied as well, since bool is a value type. You could have taken the range directly from the IEnumerable with .Skip(X).Take(Y).
C: padding a list of bools to 32 items. AddRange/Repeat is fun, but Repeat has to return an IEnumerable - again another object that is created and thrown away. You're padding the list with false. Drop the list idea and make it a bool[32], or a BitArray(32). They start out all-false automatically - that's the default value of a bool. Iterate over the bits from range A+B and write them into that array by index. Those written will have their value; those unwritten will stay false. Job done, no objects wasted.
C2: connect the preallocated 32-item array with A+A2. GetBitsStartingFromLSB doesn't need to return anything; it can get a buffer to fill via a parameter. And that buffer doesn't need to be an 8-item buffer: you may pass the whole 32-item final array, along with an offset so the function knows exactly where to write. Even fewer objects wasted.
D: finally, all that work just to return the selected bits as an integer. A new temporary array is created and wasted, and a new BitArray is effectively created and wasted too. Note that you were already doing a manual bit-shift conversion int->bits in GetBitsStartingFromLSB, so why not write a similar method that does some shifts and converts bits->int instead? You already know the order of the bits. No need for the array and BitArray; with some code wiggling, you save on those allocations and data copies again.
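(A sketch of what such a bits->int helper could look like - an illustration of point D, not code from the question:)

    // Compose the selected bits back into an int with shifts,
    // instead of allocating a BitArray and a temporary int[].
    private static int BitsToInt(bool[] bits, int offset, int length)
    {
        int result = 0;
        for (int i = 0; i < length; i++)
            if (bits[offset + i])
                result |= 1 << i; // LSB-first, matching GetBitsStartingFromLSB
        return result;
    }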
I have no idea how much time/space/etc. that will save you, but these are just a few points that stand out at first glance, without modifying your original idea for the code too much, and without doing it all via math and bit-shifts in one go. I see Marc Gravell has already written some hints for you too. If you have time to spare, I suggest you try mine first, one by one, and see how (and if at all!) each change affects performance. Just to see. Then you'll probably scrap it all and try a new "do-it-all via math and bit-shifts in one go" version with Marc's hints.

Related

Binary search a byte array

I have a file that contains a sorted list of key/value pairs. The key is a fixed-length string, and the value is the binary representation of an integer. So if the key size were 6, each record is exactly 10 bytes (6+sizeof(int)). The key size can be any (reasonable) value but is the same for every record in the file.
For example, the data file might look like this (where 'xxxx' would be binary representations of int values):
ABLE  xxxxBAKER xxxxCARL  xxxxDELTA xxxxEDGAR xxxxGEORGExxxx
I would like to load this table into memory (a byte[]) and binary search it. In C++ this is a snap: the bsearch() function takes an 'element size' parameter that tells it the size of the records in the buffer, but I cannot find a similar function in C#.
I've also looked at Array.BinarySearch, but I can't figure out what I'd use as a T.
You should probably use a BinaryReader to convert your binary data to proper .NET types. Since C# is a managed language, and the memory representation of types may depend on the runtime, reading and writing data is generally more involved than just dumping some part of memory to disk.
You mention "fixed-length string"; I assume you mean it is ASCII-encoded. C# strings are UTF-16, so you will probably need to convert it to allow it to be used in any meaningful way. See also BitConverter for an alternative way to read integer data.
Once you have converted the data to proper C# types it should be fairly trivial to do a binary search: just put the keys and values in separate arrays and use Array.BinarySearch. Or use a custom type that implements IComparable<T>.
Or put the data into some other data structure, like a dictionary, for fast searching.
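A minimal sketch of that approach, assuming ASCII keys and little-endian Int32 values as in the example data above (RecordTable, Find, etc. are illustrative names, not an existing API):

    using System;
    using System.IO;
    using System.Text;

    class RecordTable
    {
        private readonly string[] _keys;
        private readonly int[] _values;

        public RecordTable(byte[] data, int keySize)
        {
            int recordSize = keySize + sizeof(int);
            int count = data.Length / recordSize;
            _keys = new string[count];
            _values = new int[count];
            using var reader = new BinaryReader(new MemoryStream(data));
            for (int i = 0; i < count; i++)
            {
                _keys[i] = Encoding.ASCII.GetString(reader.ReadBytes(keySize));
                _values[i] = reader.ReadInt32(); // little-endian, like the file
            }
        }

        // Returns the value for the key, or null if the key is not present.
        public int? Find(string key)
        {
            int index = Array.BinarySearch(_keys, key, StringComparer.Ordinal);
            return index >= 0 ? _values[index] : (int?)null;
        }
    }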
We can modify standard binary-search algorithms to do this.
We need to modify it to refer to the array in Spans of the width you want. So we multiply the index by that width.
delegate int BytesComparer(ReadOnlySpan<byte> a, ReadOnlySpan<byte> b);

int BinarySearchRecord(byte[] array, int recordWidth, ReadOnlySpan<byte> value, BytesComparer comparer)
{
    // Note the argument order of the overload below: recordWidth before value.
    return BinarySearchRecord(array, 0, array.Length / recordWidth, recordWidth, value, comparer);
}

int BinarySearchRecord(byte[] array, int startRecord, int count, int recordWidth, ReadOnlySpan<byte> value, BytesComparer comparer)
{
    int lo = startRecord;
    int hi = startRecord + count - 1;
    while (lo <= hi)
    {
        int i = lo + ((hi - lo) >> 1);
        int order = comparer(new ReadOnlySpan<byte>(array, i * recordWidth, recordWidth), value);
        if (order == 0) return i;
        if (order < 0)
            lo = i + 1;
        else
            hi = i - 1;
    }
    return ~lo;
}
You can use it like this (it requires some bit-twiddling on little-endian systems, in order to sort strings correctly):
// needs: using System.Buffers.Binary; using System.Linq; using System.Text;
var yourArray = Encoding.ASCII.GetBytes("ABLE  xxxxBAKER xxxxCARL  xxxxDELTA xxxxEDGAR xxxxGEORGExxxx");
var valToFind = Encoding.ASCII.GetBytes("DELTA ").Concat(new byte[] { 0x1, 0x2, 0x3, 0x4 }).ToArray();
var foundIndex = BinarySearchRecord(yourArray, valToFind.Length, new ReadOnlySpan<byte>(valToFind), (a, b) =>
{
    return BinaryPrimitives.ReverseEndianness(BitConverter.ToInt64(a) & 0xffffffffffff)
        .CompareTo(
            BinaryPrimitives.ReverseEndianness(BitConverter.ToInt64(b) & 0xffffffffffff));
});
dotnetfiddle

C# Explanation of stream string example

I am using a Microsoft example for interprocess communication. In the example there are two methods for reading/writing a string to/from a stream. The code sends the length of the string being streamed in the data. I need similar code, but need to make some modifications. An explanation of the highlighted lines would be helpful.
In WriteString(), they take the length of the byte array being written and divide it by 256. The opposite is done in ReadString(), but an explanation as to why 256 is used would be great. Then it writes another byte by taking the length and AND-ing it with 255. I don't understand the reasoning for this either; I'm thinking it shifts the value, but I don't really understand why that is needed. And then in ReadString() it does a += on the length by reading a byte. An explanation of this would be really helpful. I'm new to streaming and just want to understand exactly what is happening and why.
public string ReadString()
{
    int len = 0;
    // the next two lines
    len = ioStream.ReadByte() * 256;
    len += ioStream.ReadByte();
    byte[] inBuffer = new byte[len];
    ioStream.Read(inBuffer, 0, len);
    return streamEncoding.GetString(inBuffer);
}
public int WriteString(string outString)
{
    byte[] outBuffer = streamEncoding.GetBytes(outString);
    int len = outBuffer.Length;
    if (len > UInt16.MaxValue)
    {
        len = (int)UInt16.MaxValue;
    }
    // the next two lines
    ioStream.WriteByte((byte)(len / 256));
    ioStream.WriteByte((byte)(len & 255));
    ioStream.Write(outBuffer, 0, len);
    ioStream.Flush();
    return outBuffer.Length + 2;
}
This code is bad, find a new tutorial.
The 256 stuff is there to convert the low 2 bytes of the length integer to bytes in order to serialize/deserialize them. This is not how it's normally done: use BinaryReader/BinaryWriter, or code based not on multiplication but on binary AND and shift.
Dividing by 256 is equivalent to x >> 8, and it only works with positive integers. x & 255 is used to take the lowest byte; this could simply be (byte)x. Sometimes people write x % 256 for this, which is not idiomatic and has problems with signedness.
Good code would be new byte[] { (byte)(x >> 8), (byte)(x >> 0) } and x = bytes[0] << 8 | bytes[1]. Much simpler, faster and idiomatic. I like writing >> 0, which does nothing, for the sake of symmetry; it's optimized away. This might seem ridiculous for 16-bit ints, but with longer ints there are 4 or 8 components, and having one of them slightly off seems like needless inconsistency.
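For comparison, a sketch of the BinaryReader/BinaryWriter route. One caveat: BinaryWriter writes its length prefix little-endian, so this is not wire-compatible with the big-endian framing of the Microsoft sample; both ends must use the same scheme.

    // Sketch only: length-prefixed strings via BinaryWriter/BinaryReader.
    static void WriteString(Stream stream, string s, Encoding encoding)
    {
        byte[] payload = encoding.GetBytes(s);
        ushort len = (ushort)Math.Min(payload.Length, ushort.MaxValue);
        using var writer = new BinaryWriter(stream, encoding, leaveOpen: true);
        writer.Write(len);              // 2-byte little-endian length prefix
        writer.Write(payload, 0, len);
        writer.Flush();
    }

    static string ReadString(Stream stream, Encoding encoding)
    {
        using var reader = new BinaryReader(stream, encoding, leaveOpen: true);
        ushort len = reader.ReadUInt16();        // matches the Write above
        byte[] payload = reader.ReadBytes(len);  // loops internally until len bytes or EOF
        return encoding.GetString(payload);
    }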
ioStream.Read(inBuffer, 0, len);
is a bug because it assumes the read completes in one chunk. You need a loop - or, again, BinaryReader. A minimal read-until-full loop looks like the sketch below.
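(Illustrative helper; recent .NET versions ship an equivalent built-in Stream.ReadExactly.)

    // Read exactly 'count' bytes, or throw if the stream ends early.
    static void ReadExactly(Stream stream, byte[] buffer, int count)
    {
        int total = 0;
        while (total < count)
        {
            int read = stream.Read(buffer, total, count - total);
            if (read == 0)
                throw new EndOfStreamException();
            total += read;
        }
    }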
if (len > UInt16.MaxValue)
{
    len = (int)UInt16.MaxValue;
}
I'm going to use this opportunity to warn anyone who reads this: Microsoft .NET sample code often is of extremely poor quality. Read it with great skepticism.

Defining a bit[] array in C#

Currently I'm working on a solution for a prime-number calculator/checker. The algorithm is already working and very efficient (0.359 seconds for the first 9,012,330 primes). Here is a part of the upper region where everything is declared:
const uint anz = 50000000;
uint a = 3, b = 4, c = 3, d = 13, e = 12, f = 13, g = 28, h = 32;
bool[,] prim = new bool[8, anz / 10];
uint max = 3 * (uint)(anz / (Math.Log(anz) - 1.08366));
uint[] p = new uint[max];
Now I want to go to the next level and use ulongs instead of uints to cover a larger range (you can see that already), which is where I ran into my problem: the bool array.
As everybody should know, a bool takes a whole byte, which costs a lot of memory when creating the array... So I'm searching for a more resource-friendly way to do that.
My first idea was a bit array - not a byte array! - to store the bools, but I haven't figured out how to do that yet. So if someone has ever done something like this, I would appreciate any kind of tips and solutions. Thanks in advance :)
You can use BitArray collection:
http://msdn.microsoft.com/en-us/library/system.collections.bitarray(v=vs.110).aspx
MSDN Description:
Manages a compact array of bit values, which are represented as Booleans, where true indicates that the bit is on (1) and false indicates the bit is off (0).
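As a quick illustration (a sketch, not the asker's algorithm): a plain Sieve of Eratosthenes over a BitArray spends one bit per candidate number instead of one byte per bool:

    using System;
    using System.Collections;

    const int limit = 50_000_000;
    var composite = new BitArray(limit + 1); // all bits start out false
    for (int p = 2; (long)p * p <= limit; p++)
    {
        if (composite[p]) continue;
        for (long m = (long)p * p; m <= limit; m += p)
            composite[(int)m] = true;
    }
    Console.WriteLine(composite[9973] ? "9973 is composite" : "9973 is prime");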
You can (and should) use well tested and well known libraries.
But if you're looking to learn something (as it seems to be the case) you can do it yourself.
Another reason you may want to use a custom bit array is to use the hard drive to store the array, which comes in handy when calculating primes. To do this you'd need to further split addr, for example lowest 3 bits for the mask, next 28 bits for 256MB of in-memory storage, and from there on - a file name for a buffer file.
Yet another reason for a custom bit array is to compress memory use when specifically searching for primes. After all, more than half of your bits will be false because the numbers corresponding to them are even, so you can both speed up your calculation and reduce memory requirements if you don't even store the even bits. You can do that by changing the way addr is interpreted (see the sketch below). Furthermore you can also exclude numbers divisible by 3 (only 2 out of every 6 numbers have a chance of being prime), reducing memory requirements by about two-thirds compared to a plain bit array.
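A sketch of the skip-the-evens mapping (illustrative names; bit i stands for the odd number 2*i + 3, so the buffer only needs half the bits):

    // Index math for a bit array that stores only odd numbers >= 3.
    static long OddIndex(long n) => (n - 3) / 2;

    static bool GetOddBit(byte[] buffer, long n)
    {
        long i = OddIndex(n);                             // bit index of odd n
        return (buffer[i >> 3] & (1 << (int)(i & 7))) != 0;
    }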
Notice the use of shift and logical operators to make the code a bit more efficient.
byte mask = (byte)(1 << (int)(addr & 7)); for example can be written as
byte mask = (byte)(1 << (int)(addr % 8));
and addr >> 3 can be written as addr / 8
Testing shift/logical operators vs division shows 2.6s vs 4.8s in favor of shift/logical for 200000000 operations.
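For what it's worth, a sketch of how such a micro-benchmark might be run (the 2.6 s / 4.8 s figures above are the answerer's own measurements, not reproduced here):

    using System;
    using System.Diagnostics;

    long sink = 0;
    var sw = Stopwatch.StartNew();
    for (long addr = 0; addr < 200_000_000; addr++)
        sink += addr & 7; // swap in 'addr % 8' for the comparison run
    sw.Stop();
    Console.WriteLine($"{sw.ElapsedMilliseconds} ms (checksum {sink})");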
Here's the code:
void Main()
{
    var barr = new BitArray(10);
    barr[4] = true;
    Console.WriteLine("Is it " + barr[4]);
    Console.WriteLine("Is it Not " + barr[5]);
}

// Note: this custom class shadows System.Collections.BitArray.
public class BitArray
{
    private readonly byte[] _buffer;

    public bool this[long addr]
    {
        get
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            byte val = _buffer[(int)(addr >> 3)];
            bool bit = (val & mask) == mask;
            return bit;
        }
        set
        {
            byte mask = (byte)(1 << (int)(addr & 7));
            int offs = (int)(addr >> 3);
            if (value)
                _buffer[offs] = (byte)(_buffer[offs] | mask);   // set the bit
            else
                _buffer[offs] = (byte)(_buffer[offs] & ~mask);  // clear the bit
        }
    }

    public BitArray(long size)
    {
        // A byte buffer sized to hold 8 bools per byte. The spare +1 is to
        // avoid dealing with rounding.
        _buffer = new byte[size / 8 + 1];
    }
}

Cryptography .NET, Avoiding Timing Attack

I was browsing crackstation.net website and came across this code which was commented as following:
Compares two byte arrays in length-constant time. This comparison method is used so that password hashes cannot be extracted from on-line systems using a timing attack and then attacked off-line.
private static bool SlowEquals(byte[] a, byte[] b)
{
    uint diff = (uint)a.Length ^ (uint)b.Length;
    for (int i = 0; i < a.Length && i < b.Length; i++)
        diff |= (uint)(a[i] ^ b[i]);
    return diff == 0;
}
Can anyone please explain how does this function actual works, why do we need to convert the length to an unsigned integer and how this method avoids a timing attack? What does the line diff |= (uint)(a[i] ^ b[i]); do?
This sets diff based on whether there's a difference between a and b. (The casts to uint are just so the int lengths can be XOR-ed into the uint accumulator diff; they don't change the comparison logic.)
It avoids a timing attack by always walking through the entirety of the shorter of the two of a and b, regardless of whether there's a mismatch sooner than that or not.
The diff |= (uint)(a[i] ^ b[i]) takes the exclusive-or of a byte of a with the corresponding byte of b. That will be 0 if the two bytes are the same, or non-zero if they're different. It then ORs that with diff.
Therefore, diff will be set to non-zero in an iteration if a difference was found between the inputs in that iteration. Once diff is given a non-zero value at any iteration of the loop, it will retain the non-zero value through further iterations.
Therefore, the final result in diff will be non-zero if any difference is found between corresponding bytes of a and b, and 0 only if all bytes (and the lengths) of a and b are equal.
Unlike a typical comparison, however, this will always execute the loop until all the bytes in the shorter of the two inputs have been compared to bytes in the other. A typical comparison would have an early-out where the loop would be broken as soon as a mismatch was found:
static bool Equal(byte[] a, byte[] b)
{
    if (a.Length != b.Length)
        return false;
    for (int i = 0; i < a.Length; i++)
        if (a[i] != b[i])
            return false;
    return true;
}
With this, based on the amount of time consumed to return false, we can learn (at least an approximation of) the number of bytes that matched between a and b. Let's say the initial test of length takes 10 ns, and each iteration of the loop takes another 10 ns. Based on that, if it returns false in 50 ns, we can quickly guess that we have the right length, and the first four bytes of a and b match.
Even without knowing the exact amounts of time, we can still use the timing differences to determine the correct string. We start with a string of length 1, and increase that one byte at a time until we see an increase in the time taken to return false. Then we run through all the possible values in the first byte until we see another increase, indicating that it has executed another iteration of the loop. Continue with the same for successive bytes until all bytes match and we get a return of true.
The original is still open to a little bit of a timing attack -- although we can't easily determine the contents of the correct string based on timing, we can at least find the string length based on timing. Since it only compares up to the shorter of the two strings, we can start with a string of length 1, then 2, then 3, and so on until the time becomes stable. As long as the time is increasing our proposed string is shorter than the correct string. When we give it longer strings, but the time remains constant, we know our string is longer than the correct string. The correct length of string will be the shortest one that takes that maximum duration to test.
Whether this is useful or not depends on the situation, but it's clearly leaking some information, regardless. For truly maximum security, we'd probably want to append random garbage to the end of the real string to make it the length of the user's input, so the time stays proportional to the length of the input, regardless of whether it's shorter, equal to, or longer than the correct string.
This version runs for the length of the input a:
private static bool SlowEquals(byte[] a, byte[] b)
{
    uint diff = (uint)a.Length ^ (uint)b.Length;
    byte[] c = new byte[] { 0 };
    for (int i = 0; i < a.Length; i++)
        diff |= (uint)(GetElem(a, i, c, 0) ^ GetElem(b, i, c, 0));
    return diff == 0;
}

// Reads x[i] when i is in range, otherwise the dummy c[i0].
private static byte GetElem(byte[] x, int i, byte[] c, int i0)
{
    bool ok = (i < x.Length);
    return (ok ? x : c)[ok ? i : i0];
}

Efficiently convert audio bytes - byte[] to short[]

I'm trying to use the XNA microphone to capture audio and pass it to an API I have that analyses the data for display purposes. However, the API requires the audio data in an array of 16 bit integers. So my question is fairly straight forward; what's the most efficient way to convert the byte array into a short array?
private void _microphone_BufferReady(object sender, System.EventArgs e)
{
    _microphone.GetData(_buffer);
    short[] shorts;
    // Convert and pass the 16 bit samples
    ProcessData(shorts);
}
Cheers,
Dave
EDIT: This is what I have come up with and seems to work, but could it be done faster?
private short[] ConvertBytesToShorts(byte[] bytesBuffer)
{
    // Shorts array should be half the size of the bytes buffer, as each
    // short represents 2 bytes (16 bits).
    short[] shorts = new short[bytesBuffer.Length / 2];
    int currentStartIndex = 0;
    for (int i = 0; i < shorts.Length; i++)
    {
        // Convert the 2 bytes at currentStartIndex to a short.
        shorts[i] = BitConverter.ToInt16(bytesBuffer, currentStartIndex);
        // Increment by 2, ready to combine the next 2 bytes in the buffer.
        currentStartIndex += 2;
    }
    return shorts;
}
After reading your update, I can see you need to actually copy a byte array directly into a buffer of shorts, merging bytes. Here's the relevant section from the documentation:
The byte[] buffer format used as a parameter for the SoundEffect constructor, Microphone.GetData method, and DynamicSoundEffectInstance.SubmitBuffer method is PCM wave data. Additionally, the PCM format is interleaved and in little-endian.
Now, if for some weird reason your system has BitConverter.IsLittleEndian == false, then you will need to loop through your buffer, swapping bytes as you go, to convert from little-endian to big-endian. I'll leave the code as an exercise - I am reasonably sure all the XNA systems are little-endian.
For your purposes, you can just copy the buffer directly using Marshal.Copy or Buffer.BlockCopy. Both will give you the performance of the platform's native memory copy operation, which will be extremely fast:
// Create this buffer once and reuse it! Don't recreate it each time!
short[] shorts = new short[_buffer.Length / 2];

// Option one:
unsafe
{
    fixed (short* pShorts = shorts)
        Marshal.Copy(_buffer, 0, (IntPtr)pShorts, _buffer.Length);
}

// Option two:
Buffer.BlockCopy(_buffer, 0, shorts, 0, _buffer.Length);
This is a performance question, so: measure it!
It is worth pointing out that for measuring performance in .NET you want to do a release build and run without the debugger attached (this allows the JIT to optimise).
Jodrell's answer is worth commenting on: Using AsParallel is interesting, but it is worth checking if the cost of spinning it up is worth it. (Speculation - measure it to confirm: converting byte to short should be extremely fast, so if your buffer data is coming from shared memory and not a per-core cache, most of your cost will probably be in data transfer not processing.)
Also I am not sure that ToArray is appropriate. First of all, it may not be able to create the correct-sized array directly, having to resize the array as it builds it will make it very slow. Additionally it will always allocate the array - which is not slow itself, but adds a GC cost that you almost certainly don't want.
Edit: Based on your updated question, the code in the rest of this answer is not directly usable, as the format of the data is different. And the technique itself (a loop, safe or unsafe) is not as fast as what you can use. See my other answer for details.
So you want to pre-allocate your array. Somewhere out in your code you want a buffer like this:
short[] shorts = new short[_buffer.Length];
And then simply copy from one buffer to the other:
for (int i = 0; i < _buffer.Length; ++i)
    shorts[i] = (short)_buffer[i];
This should be very fast, and the JIT should be clever enough to skip one if not both of the array bounds checks.
And here's how you can do it with unsafe code: (I haven't tested this code, but it should be about right)
unsafe
{
int length = _buffer.Length;
fixed(byte* pSrc = _buffer) fixed(short* pDst = shorts)
{
byte* ps = pSrc;
short* pd = pDst;
while(pd < pd + length)
*(pd++) = (short)(*(ps++));
}
}
Now the unsafe version has the disadvantage of requiring /unsafe, and also it may actually be slower because it prevents the JIT from doing various optimisations. Once again: measure it.
(Also you can probably squeeze more performance if you try some permutations on the above examples. Measure it.)
Finally: Are you sure you want the conversion to be (short)sample? Shouldn't it be something like ((short)sample-128)*256 to take it from unsigned to signed and extend it to the correct bit-width? Update: seems I was wrong on the format here, see my other answer
The best PLINQ I could come up with is here:
private short[] ConvertBytesToShorts(byte[] bytesBuffer)
{
    // Shorts array should be half the size of the bytes buffer, as each
    // short represents 2 bytes (16 bits). AsOrdered keeps the samples in
    // their original order, which an unordered parallel query would not.
    var odd = bytesBuffer.AsParallel().AsOrdered().Where((b, i) => i % 2 != 0);
    var even = bytesBuffer.AsParallel().AsOrdered().Where((b, i) => i % 2 == 0);
    return odd.Zip(even, (o, e) => (short)((o << 8) | e)).ToArray();
}
I'm dubious about the performance, but with enough data and processors, who knows.
If the conversion operation is wrong ((short)((o << 8) | e)), please change it to suit.
