How to extract multiple parameters from a binary chromosome - C#

I am trying to use AForge.Net Genetics library to create a simple application for optimization purposes. I have a scenario where I have four input parameters, therefore I tried to modify the "OptimizationFunction2D.cs" class located in the AForge.Genetic project to handle four parameters.
While converting the binary chromosomes into 4 parameters (type = double), I am not sure if my approach is correct, as I don't know how to verify the extracted values. Below is the code segment where my code differs from the original AForge code:
public double[] Translate( IChromosome chromosome )
{
// get chromosome's value
ulong val = ((BinaryChromosome) chromosome).Value;
// chromosome's length
int length = ((BinaryChromosome) chromosome).Length;
// length of W component
int wLength = length/4;
// length of X component
int xLength = length / 4;
// length of Y component
int yLength = length / 4;
// length of Z component
int zLength = length / 4;
// W maximum value - equal to X mask
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
// X maximum value
ulong xMax = 0xFFFFFFFFFFFFFFFF >> (64 - xLength);
// Y maximum value - equal to X mask
ulong yMax = 0xFFFFFFFFFFFFFFFF >> (64 - yLength);
// Z maximum value
ulong zMax = 0xFFFFFFFFFFFFFFFF >> (64 - zLength);
// W component
double wPart = val & wMax;
// X component;
double xPart = (val >> wLength) & xMax;
// Y component;
double yPart = (val >> (wLength + xLength) & yMax);
// Z component;
double zPart = val >> (wLength + xLength + yLength);
// translate to optimization's function space
double[] ret = new double[4];
ret[0] = wPart * _rangeW.Length / wMax + _rangeW.Min;
ret[1] = xPart * _rangeX.Length / xMax + _rangeX.Min;
ret[2] = yPart * _rangeY.Length / yMax + _rangeY.Min;
ret[3] = zPart * _rangeZ.Length / zMax + _rangeZ.Min;
return ret;
}
I am not sure if I am correctly separating the chromosome value into four parts (wPart/xPart/yPart/zPart). The original function in the AForge.Genetic library looks like this:
public double[] Translate( IChromosome chromosome )
{
// get chromosome's value
ulong val = ( (BinaryChromosome) chromosome ).Value;
// chromosome's length
int length = ( (BinaryChromosome) chromosome ).Length;
// length of X component
int xLength = length / 2;
// length of Y component
int yLength = length - xLength;
// X maximum value - equal to X mask
ulong xMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - xLength );
// Y maximum value
ulong yMax = 0xFFFFFFFFFFFFFFFF >> ( 64 - yLength );
// X component
double xPart = val & xMax;
// Y component;
double yPart = val >> xLength;
// translate to optimization's function space
double[] ret = new double[2];
ret[0] = xPart * rangeX.Length / xMax + rangeX.Min;
ret[1] = yPart * rangeY.Length / yMax + rangeY.Min;
return ret;
}
Can someone please confirm whether my conversion process is correct, or suggest a better way of doing it?

Your conversion works, but you don't need it to be so complicated.
ulong wMax = 0xFFFFFFFFFFFFFFFF >> (64 - wLength);
This expression returns the same value for all four results (wMax, xMax, yMax and zMax), so just compute it once and call it componentMask:
part = (val >> (wLength * pos) & componentMask);
where pos is the 0-based position of the component: 0 for w, 1 for x, and so on.
The rest is fine.
EDIT:
If the length is not divisible by 4, you can make the last part simply val >> (wLength * pos) so that it takes all the remaining bits.
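Putting that suggestion together, a minimal sketch of the simplified Translate might look like this (assuming the chromosome length is a multiple of 4 and keeping the _rangeW.._rangeZ fields from the question):
public double[] Translate( IChromosome chromosome )
{
    ulong val = ((BinaryChromosome) chromosome).Value;
    int length = ((BinaryChromosome) chromosome).Length;

    // all four components get the same number of bits, so one length and one mask suffice
    int componentLength = length / 4;
    ulong componentMask = 0xFFFFFFFFFFFFFFFF >> (64 - componentLength);

    // pos = 0 for W, 1 for X, 2 for Y, 3 for Z
    double wPart = (val >> (componentLength * 0)) & componentMask;
    double xPart = (val >> (componentLength * 1)) & componentMask;
    double yPart = (val >> (componentLength * 2)) & componentMask;
    double zPart = (val >> (componentLength * 3)) & componentMask;

    // translate to the optimization function's space
    return new double[]
    {
        wPart * _rangeW.Length / componentMask + _rangeW.Min,
        xPart * _rangeX.Length / componentMask + _rangeX.Min,
        yPart * _rangeY.Length / componentMask + _rangeY.Min,
        zPart * _rangeZ.Length / componentMask + _rangeZ.Min
    };
}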

Related

How to convert a float/double/half to a minifloat the optimal way (improve my already working code)?

I've written an IEEE 754 "quarter" 8-bit minifloat in a 1.3.4.−3 format in C#.
It was mostly a fun little side-project, testing whether or not I understand floats.
Actually, though, I find myself using it more than I'd like to admit :) (bandwidth > clock ticks)
Here's my code for converting the minifloat to a 32-bit float:
public static implicit operator float(quarter q)
{
int sign = (q.value & 0b1000_0000) << 24;
int fusedExponentMantissa = (q.value & 0b0111_1111) << (23 - MANTISSA_BITS);
if ((q.value & 0b0111_0000) == 0b0111_0000) // NaN/Infinity
{
return asfloat(sign | (255 << 23) | fusedExponentMantissa);
}
else // normal and subnormal
{
float magic = asfloat((255 - 1 + EXPONENT_BIAS) << 23);
return magic * asfloat(sign | fusedExponentMantissa);
}
}
where quarter.value is the stored byte and "asfloat" is simply *(float*)&myUInt. The "magic" number makes use of mantissa overflow in the subnormal case, which affects the f_32 exponent (integer multiplication and mask + add is slower than an FPU switch and a float multiplication). I guess one could optimize away the branch, too.
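For reference, assuming the standard 1-3-4 layout with MANTISSA_BITS = 4 and EXPONENT_BIAS = -3 (the values used by the struct further down in this thread), the magic multiply works out as follows: magic = asfloat((255 - 1 - 3) << 23) = 2^(251 - 127) = 2^124. A normal quarter with biased exponent E and mantissa m is reassembled as asfloat(sign | (E << 23) | (m << 19)), i.e. ±2^(E - 127) * (1 + m/16), and multiplying by 2^124 yields ±2^(E - 3) * (1 + m/16), exactly the quarter's value. A subnormal quarter (E = 0) becomes the float subnormal ±m * 2^(-130), and 2^124 * m * 2^(-130) = ±m * 2^(-6), again the correct value.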
But here comes the problematic code - float_32 to float_8:
public static explicit operator quarter(float f)
{
byte f8_sign = (byte)((asuint(f) & 0x8000_0000u) >> 24);
uint f32_exponent = asuint(f) & 0x7F80_0000u;
uint f32_mantissa = asuint(f) & 0x007F_FFFFu;
if (f32_exponent < (120 << 23)) // underflow => preserve +/- 0
{
return new quarter { value = f8_sign };
}
else if (f32_exponent > (130 << 23)) // overflow => +/- infinity or preserve NaN
{
return new quarter { value = (byte)(f8_sign | PositiveInfinity.value | touint8(isnan(f))) };
}
else
{
switch (f32_exponent)
{
case 120 << 23: // 2^(-7) * 1.(mantissa > 0) means the value is closer to quarter.epsilon than 0
{
return new quarter { value = (byte)(f8_sign | touint8(f32_mantissa != 0)) };
}
case 121 << 23: // 2^(-6) * (1 + mantissa): return +/- quarter.epsilon = 2^(-2) * (0 + 2^(-4)); if the mantissa is > 0.5 i.e. 2^(-6) * max(mantissa, 1.75), return 2^(-2) * 2^(-3)
{
return new quarter { value = (byte)(f8_sign | (Epsilon.value + touint8(f32_mantissa > 0x0040_0000))) };
}
case 122 << 23:
{
return new quarter { value = (byte)(f8_sign | 0b0000_0010u | (f32_mantissa >> 22)) };
}
case 123 << 23:
{
return new quarter { value = (byte)(f8_sign | 0b0000_0100u | (f32_mantissa >> 21)) };
}
case 124 << 23:
{
return new quarter { value = (byte)(f8_sign | 0b0000_1000u | (f32_mantissa >> 20)) };
}
default:
{
const uint exponentDelta = (127 + EXPONENT_BIAS) << 23;
return new quarter { value = (byte)(f8_sign | (((f32_exponent - exponentDelta) | f32_mantissa) >> 19)) };
}
}
}
}
... where the function
"asuint" is simply *(uint*)&myFloat and
"touint8" is simply *(byte*)&myBoolean i.e. myBoolean ? 1 : 0.
The first five cases deal with numbers that can only be represented as subnormals in a "quarter".
I want to get rid of the switch at the very least. There's obviously a pattern (same as with float8_to_float32), but I haven't been able to figure out for days how I could unify the entire switch... I tried to Google how hardware converts doubles to floats, but that yielded no results either.
My requirements are to hold on to the IEEE-754 standard, meaning:
NaN and infinity preservation and clamping to infinity/zero in case of over-/underflow, as well as rounding to epsilon when the larger type's value is closer to epsilon than 0 (the first switch case as well as the underflow limit in the first if statement).
Can anyone at least push me in the right direction please?
This may not be optimal, but it uses strictly conforming C code except as noted in the first comment, so no pointer aliasing or other manipulation of the bits of a floating-point object. A thorough test program is included.
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
/* Notes on portability:
uint8_t is an optional type. Its use here is easily replaced by
unsigned char.
Round-to-nearest is required in FloatToMini.
Floating-point must be base two, and the constant in the
Dekker-Veltkamp split is hardcoded for IEEE-754 binary64 but could be
adopted to other formats. (Change the exponent in 0x1p48 to the number
of bits in the significand minus five.)
*/
/* Convert a double to a 1-3-4 floating-point format. Round-to-nearest is
required.
*/
static uint8_t FloatToMini(double x)
{
// Extract the sign bit of x, moved into its position in a mini-float.
uint8_t s = !!signbit(x) << 7;
x = fabs(x);
/* If x is a NaN, return a quiet NaN with the copied sign. Significand
bits are not preserved.
*/
if (x != x)
return s | 0x78;
/* If |x| is greater than or equal to the rounding point between the
maximum finite value and infinity, return infinity with the copied sign.
(0x1.fp0 is the largest representable significand, 0x1.f8 is that plus
half an ULP, and the largest exponent is 3, so 0x1.f8p3 is that
rounding point.)
*/
if (0x1.f8p3 <= x)
return s | 0x70;
// If x is subnormal, encode with zero exponent.
if (x < 0x1p-2 - 0x1p-7)
return s | (uint8_t) nearbyint(x * 0x1p6);
/* Round to five significand bits using the Dekker-Veltkamp Split. (The
cast eliminates the excess precision that the C standard allows.)
*/
double d = x * (0x1p48 + 1);
x = d - (double) (d-x);
/* Separate the significand and exponent. C's frexp scales the exponent
so the significand is in [.5, 1), hence the e-1 below.
*/
int e;
x = frexp(x, &e) - .5;
return s | (e-1+3) << 4 | (uint8_t) (x*0x1p5);
}
static void Show(double x)
{
printf("%g -> 0x%02" PRIx8 ".\n", x, FloatToMini(x));
}
static void Test(double x, uint8_t expected)
{
uint8_t observed = FloatToMini(x);
if (expected != observed)
{
printf("Error, %.9g (%a) produced 0x%02" PRIx8
" but expected 0x%02" PRIx8 ".\n",
x, x, observed, expected);
exit(EXIT_FAILURE);
}
}
int main(void)
{
// Set the value of an ULP in [1, 2).
static const double ULP = 0x1p-4;
// Test all even significands with normal exponents.
for (double s = 1; s < 2; s += 2*ULP)
// Test with trailing bits less than or equal to 1/2 ULP in magnitude.
for (double t = -ULP / (s == 1 ? 4 : 2); t <= +ULP/2; t += ULP/16)
// Test with all normal exponents.
for (int e = 1-3; e < 7-3; ++e)
// Test with both signs.
for (int sign = -1; sign <= +1; sign += 2)
{
// Prepare the expected encoding.
uint8_t expected =
(0 < sign ? 0 : 1) << 7
| (e+3) << 4
| (uint8_t) ((s-1) * 0x1p4);
Test(sign * ldexp(s+t, e), expected);
}
// Test all odd significands with normal exponents.
for (double s = 1 + 1*ULP; s < 2; s += 2*ULP)
// Test with trailing bits less than or equal to 1/2 ULP in magnitude.
for (double t = -ULP/2+ULP/16; t < +ULP/2; t += ULP/16)
// Test with all normal exponents.
for (int e = 1-3; e < 7-3; ++e)
// Test with both signs.
for (int sign = -1; sign <= +1; sign += 2)
{
// Prepare the expected encoding.
uint8_t expected =
(0 < sign ? 0 : 1) << 7
| (e+3) << 4
| (uint8_t) ((s-1) * 0x1p4);
Test(sign * ldexp(s+t, e), expected);
}
// Set the value of an ULP in the subnormal range.
static const double subULP = ULP * 0x1p-2;
// Test all even significands with the subnormal exponent.
for (double s = 0; s < 0x1p-2; s += 2*subULP)
// Test with trailing bits less than or equal to 1/2 ULP in magnitude.
for (double t = s == 0 ? 0 : -subULP/2; t <= +subULP/2; t += subULP/16)
{
// Test with both signs.
for (int sign = -1; sign <= +1; sign += 2)
{
// Prepare the expected encoding.
uint8_t expected =
(0 < sign ? 0 : 1) << 7
| (uint8_t) (s/subULP);
Test(sign * (s+t), expected);
}
}
// Test all odd significands with the subnormal exponent.
for (double s = 0 + 1*subULP; s < 0x1p-2; s += 2*subULP)
// Test with trailing bits less than or equal to 1/2 ULP in magnitude.
for (double t = -subULP/2 + subULP/16; t < +subULP/2; t += subULP/16)
{
// Test with both signs.
for (int sign = -1; sign <= +1; sign += 2)
{
// Prepare the expected encoding.
uint8_t expected =
(0 < sign ? 0 : 1) << 7
| (uint8_t) (s/subULP);
Test(sign * (s+t), expected);
}
}
// Test at and slightly under the point of rounding to infinity.
Test(+15.75, 0x70);
Test(-15.75, 0xf0);
Test(nexttoward(+15.75, 0), 0x6f);
Test(nexttoward(-15.75, 0), 0xef);
// Test infinities and NaNs.
Test(+INFINITY, 0x70);
Test(-INFINITY, 0xf0);
Test(+NAN, 0x78);
Test(-NAN, 0xf8);
Show(0);
Show(0x1p-6);
Show(0x1p-2);
Show(0x1.1p-2);
Show(0x1.2p-2);
Show(0x1.4p-2);
Show(0x1.8p-2);
Show(0x1p-1);
Show(15.5);
Show(15.75);
Show(16);
Show(NAN);
Show(1./6);
Show(1./3);
Show(2./3);
}
I hate to answer my own question... But this may still not be the optimal solution.
Although @Eric Postpischil's solution uses an established algorithm, it is not very well suited to minifloats, since there are so few denormals in 4 mantissa bits. There is also the overhead of multiple floating-point arithmetic operations, and - because of the actual code behind frexp in particular - it only has one branch fewer (or two when inlined and optimized) than my original solution and is not that great in regards to instruction-level parallelism either.
So here's my current solution:
public static explicit operator quarter(float f)
{
byte f8_sign = (byte)((asuint(f) >> 31) << 7);
uint f32_exponent = (asuint(f) >> 23) & 0x00FFu;
uint f32_mantissa = asuint(f) & 0x007F_FFFFu;
if (f32_exponent < 120) // underflow => preserve +/- 0
{
return new quarter { value = f8_sign };
}
else if (f32_exponent > 130) // overflow => +/- infinity or preserve NaN
{
return new quarter { value = (byte)(f8_sign | PositiveInfinity.value | touint8(isnan(f))) };
}
else
{
int cmp = 125 - (int)f32_exponent;
int cmpIsZeroOrNegativeMask = (cmp - 1) >> 31;
int denormalExponent = andnot(0b0001_0000 >> cmp, cmpIsZeroOrNegativeMask); // special case 121: sets it to quarter.Epsilon
denormalExponent += touint8((f32_exponent == 121) & (f32_mantissa >= 0x0040_0000)); // case 121: 2^(-6) * (1 + mantissa): return +/- quarter.Epsilon = 2^(-2) * 2^(-4); if the mantissa is >= 0.5 return 2^(-2) * 2^(-3)
denormalExponent |= touint8((f32_exponent == 120) & (f32_mantissa != 0)); // case 120: 2^(-7) * 1.(mantissa > 0) means the value is closer to quarter.epsilon than 0
int normalExponent = (cmpIsZeroOrNegativeMask & ((int)f32_exponent - (127 + EXPONENT_BIAS))) << 4;
int mantissaShift = 19 + andnot(cmp, cmpIsZeroOrNegativeMask);
return new quarter { value = (byte)((f8_sign | normalExponent) | (denormalExponent | (f32_mantissa >> mantissaShift))) };
}
}
But note that the particular andnot(int a, int b) function I use returns a & ~b and...not ~a & b.
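In C# that helper is just a one-liner (sketch):
static int andnot(int a, int b) => a & ~b; // a AND (NOT b), not (NOT a) AND b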
Thanks for your help :) I'm keeping this open since, as mentioned, this may very well not be the best solution - but at least it's my own...
PS: This is probably a good example of why PREMATURE optimization is bad; your code becomes much less readable. Make sure you have the functionality backed up by unit tests, and make sure you even need the optimization in the first place.
...And after some time, in the spirit of transparent progression, I want to show the final version, since I believe I have found the optimal implementation; more on that later.
First off, here it is (the code should speak for itself, which is why it is this "much"):
unsafe struct quarter
{
const bool IEEE_754_STANDARD = true; //standard: true
const bool SIGN_BIT = IEEE_754_STANDARD || true; //standard: true
const int BITS = 8 * sizeof(byte); //standard: 8
const int EXPONENT_BITS = 3 + (SIGN_BIT ? 0 : 1); //standard: 3
const int MANTISSA_BITS = BITS - EXPONENT_BITS - (SIGN_BIT ? 1 : 0); //standard: 4
const int EXPONENT_BIAS = -(((1 << BITS) - 1) >> (BITS - (EXPONENT_BITS - 1))); //standard: -3
const int MAX_EXPONENT = EXPONENT_BIAS + ((1 << EXPONENT_BITS) - 1) - (IEEE_754_STANDARD ? 1 : 0); //standard: 3
const int SIGNALING_EXPONENT = (MAX_EXPONENT - EXPONENT_BIAS + (IEEE_754_STANDARD ? 1 : 0)) << MANTISSA_BITS; //standard: 0b0111_0000
const int F32_BITS = 8 * sizeof(float);
const int F32_EXPONENT_BITS = 8;
const int F32_MANTISSA_BITS = 23;
const int F32_EXPONENT_BIAS = -(int)(((1L << F32_BITS) - 1) >> (F32_BITS - (F32_EXPONENT_BITS - 1)));
const int F32_MAX_EXPONENT = F32_EXPONENT_BIAS + ((1 << F32_EXPONENT_BITS) - 1 - 1);
const int F32_SIGNALING_EXPONENT = (F32_MAX_EXPONENT - F32_EXPONENT_BIAS + 1) << F32_MANTISSA_BITS;
const int F32_SHL_LOSE_SIGN = (F32_BITS - (MANTISSA_BITS + EXPONENT_BITS));
const int F32_SHR_PLACE_MANTISSA = MANTISSA_BITS + ((1 + F32_EXPONENT_BITS) - (MANTISSA_BITS + EXPONENT_BITS));
const int F32_MAGIC = (((1 << F32_EXPONENT_BITS) - 1) - (1 + EXPONENT_BITS)) << F32_MANTISSA_BITS;
byte _value;
static quarter Epsilon => new quarter { _value = 1 };
static quarter MaxValue => new quarter { _value = (byte)(SIGNALING_EXPONENT - 1) };
static quarter NaN => new quarter { _value = (byte)(SIGNALING_EXPONENT | 1) };
static quarter PositiveInfinity => new quarter { _value = (byte)SIGNALING_EXPONENT };
static uint asuint(float f) => *(uint*)&f;
static float asfloat(uint u) => *(float*)&u;
static byte tobyte(bool b) => *(byte*)&b;
static float ToFloat(quarter q, bool promiseInRange)
{
uint fusedExponentMantissa = ((uint)q._value << F32_SHL_LOSE_SIGN) >> F32_SHR_PLACE_MANTISSA;
uint sign = ((uint)q._value >> (BITS - 1)) << (F32_BITS - 1);
if (!promiseInRange)
{
bool nanInf = (q._value & SIGNALING_EXPONENT) == SIGNALING_EXPONENT;
uint ifNanInf = asuint(float.PositiveInfinity) & (uint)(-tobyte(nanInf));
return (nanInf ? 1f : asfloat(F32_MAGIC)) * asfloat(sign | fusedExponentMantissa | ifNanInf);
}
else
{
return asfloat(F32_MAGIC) * asfloat(sign | fusedExponentMantissa);
}
}
static quarter ToQuarter(float f, bool promiseInRange)
{
float inRange = f * (1f / asfloat(F32_MAGIC));
uint q = asuint(inRange) >> (F32_MANTISSA_BITS - (1 + EXPONENT_BITS));
uint f8_sign = asuint(f) >> (F32_BITS - 1);
if (!promiseInRange)
{
uint f32_exponent = asuint(f) & F32_SIGNALING_EXPONENT;
bool overflow = f32_exponent > (uint)(-F32_EXPONENT_BIAS + MAX_EXPONENT << F32_MANTISSA_BITS);
bool notNaNInf = f32_exponent != F32_SIGNALING_EXPONENT;
f8_sign ^= tobyte(!notNaNInf);
if (overflow & notNaNInf)
{
q = PositiveInfinity._value;
}
}
f8_sign <<= (BITS - 1);
return new quarter{ _value = (byte)(q ^ f8_sign) };
}
}
It turns out that the reverse operation of converting the minifloat to a 32-bit float by multiplying with a magic constant is, in fact, just the reverse of that multiplication (wow...): a floating-point division by that constant.
Luckily it is "by that constant" and not the other way around; we can calculate the reciprocal at compile time and multiply by it instead. This only fails, as with the reverse operation, when converting to and from 'INF' and 'NaN'. Absolute overflow with any biased float32 exponent where exponent % (MAX_EXPONENT + 1) != 0 is not translated into 'INF', and positive 'INF' is translated into negative 'INF'.
Although this enables some optimizations through the bool parameter, this mostly just reduces code size and, more importantly (especially for SIMD versions, where small data types really shine), reduces the need for constants. Speaking of SIMD: this scalar version can be optimized a little by using SSE/SSE2 intrinsics.
The (disabled) optimizations (would) run completely in parallel to the floating-point multiplication followed by a shift, taking a total of 5 to 6+ clock cycles (very CPU dependent), which is astonishingly close to native hardware instructions (~4 to 5 clock cycles).

fast way to convert integer array to byte array (11 bit)

I have an integer array and I need to convert it to a byte array,
but I need to take (only and just only) the first 11 bits of each element of the integer array
and then convert it to a byte array.
I tried this code
// ***********convert integer values to byte values
//***********to avoid the left zero padding on the byte array
// *********** first step : convert to binary string
// ***********second step : convert binary string to byte array
// *********** first step
string ByteString = Convert.ToString(IntArray[0], 2).PadLeft(11,'0');
for (int i = 1; i < IntArray.Length; i++)
ByteString = ByteString + Convert.ToString(IntArray[i], 2).PadLeft(11, '0');
// ***********second step
int numOfBytes = ByteString.Length / 8;
byte[] bytes = new byte[numOfBytes];
for (int i = 0; i < numOfBytes; ++i)
{
bytes[i] = Convert.ToByte(ByteString.Substring(8 * i, 8), 2);
}
But it takes too long (if the file is large, the code takes more than a minute).
I need very fast code (only a few milliseconds).
Can anyone help me?
Basically, you're going to be doing a lot of shifting and masking. The exact nature of that depends on the layout you want. If we assume that we pack little-endian from each int, appending on the left, so two 11-bit integers with positions:
abcdefghijk lmnopqrstuv
become the 8-bit chunks:
defghijk rstuvabc 00lmnopq
(i.e. take the lowest 8 bits of the first integer, which leaves 3 left over, so pack those into the low 3 bits of the next byte, then take the lowest 5 bits of the second integer, then finally the remaining 6 bits, padding with zero), then something like this should work:
using System;
using System.Linq;
static class Program
{
static string AsBinary(int val) => Convert.ToString(val, 2).PadLeft(11, '0');
static string AsBinary(byte val) => Convert.ToString(val, 2).PadLeft(8, '0');
static void Main()
{
int[] source = new int[1432];
var rand = new Random(123456);
for (int i = 0; i < source.Length; i++)
source[i] = rand.Next(0, 2047); // 11 bits
// Console.WriteLine(string.Join(" ", source.Take(5).Select(AsBinary)));
var raw = Encode(source);
// Console.WriteLine(string.Join(" ", raw.Take(6).Select(AsBinary)));
var clone = Decode(raw);
// now prove that it worked OK
if (source.Length != clone.Length)
{
Console.WriteLine($"Length: {source.Length} vs {clone.Length}");
}
else
{
int failCount = 0;
for (int i = 0; i < source.Length; i++)
{
if (source[i] != clone[i] && failCount++ == 0)
{
Console.WriteLine($"{i}: {source[i]} vs {clone[i]}");
}
}
Console.WriteLine($"Errors: {failCount}");
}
}
static byte[] Encode(int[] source)
{
long bits = source.Length * 11;
int len = (int)(bits / 8);
if ((bits % 8) != 0) len++;
byte[] arr = new byte[len];
int bitOffset = 0, index = 0;
for (int i = 0; i < source.Length; i++)
{
// note: this encodes little-endian
int val = source[i] & 2047;
int bitsLeft = 11;
if(bitOffset != 0)
{
val = val << bitOffset;
arr[index++] |= (byte)val;
bitsLeft -= (8 - bitOffset);
val >>= 8;
}
if(bitsLeft >= 8)
{
arr[index++] = (byte)val;
bitsLeft -= 8;
val >>= 8;
}
if(bitsLeft != 0)
{
arr[index] = (byte)val;
}
bitOffset = bitsLeft;
}
return arr;
}
private static int[] Decode(byte[] source)
{
int bits = source.Length * 8;
int len = (int)(bits / 11);
// note no need to worry about remaining chunks - no ambiguity since 11 > 8
int[] arr = new int[len];
int bitOffset = 0, index = 0;
for(int i = 0; i < source.Length; i++)
{
int val = source[i] << bitOffset;
int bitsLeftInVal = 11 - bitOffset;
if(bitsLeftInVal > 8)
{
arr[index] |= val;
bitOffset += 8;
}
else if(bitsLeftInVal == 8)
{
arr[index++] |= val;
bitOffset = 0;
}
else
{
arr[index++] |= (val & 2047);
if(index != arr.Length) arr[index] = val >> 11;
bitOffset = 8 - bitsLeftInVal;
}
}
return arr;
}
}
If you need a different layout you'll need to tweak it.
This encodes 512 MiB in just over a second on my machine.
Overview to the Encode method:
The first thing it does is pre-calculate the amount of space that is going to be required and allocate the output buffer; since each input contributes 11 bits to the output, this is just some modulo math:
long bits = source.Length * 11;
int len = (int)(bits / 8);
if ((bits % 8) != 0) len++;
byte[] arr = new byte[len];
We know the output position won't match the input, and we know we're going to be starting each 11-bit chunk at different positions in bytes each time, so allocate variables for those, and loop over the input:
int bitOffset = 0, index = 0;
for (int i = 0; i < source.Length; i++)
{
...
}
return arr;
So: taking each input in turn (where the input is the value at position i), take the low 11 bits of the value - and observe that we have 11 bits (of this value) still to write:
int val = source[i] & 2047;
int bitsLeft = 11;
Now, if the current output value is partially written (i.e. bitOffset != 0), we should deal with that first. The amount of space left in the current output is 8 - bitOffset. Since we always have 11 input bits we don't need to worry about having more space than values to fill, so: left-shift our value by bitOffset (pads on the right with bitOffset zeros, as a binary operation), and "or" the lowest 8 bits of this with the output byte. Essentially this says "if bitOffset is 3, write the 5 low bits of val into the 5 high bits of the output buffer"; finally, fixup the values: increment our write position, record that we have fewer bits of the current value still to write, and use right-shift to discard the 8 low bits of val (which is made of bitOffset zeros and 8 - bitOffset "real" bits):
if(bitOffset != 0)
{
val = val << bitOffset;
arr[index++] |= (byte)val;
bitsLeft -= (8 - bitOffset);
val >>= 8;
}
The next question is: do we have (at least) an entire byte of data left? We might not, if bitOffset was 1 for example (so we'll have written 7 bits already, leaving just 4). If we do, we can just stamp that down and increment the write position - then once again track how many are left and throw away the low 8 bits:
if(bitsLeft >= 8)
{
arr[index++] = (byte)val;
bitsLeft -= 8;
val >>= 8;
}
And it is possible that we've still got some left-over; for example, if bitOffset was 7 we'll have written 1 bit in the first chunk, 8 bits in the second, leaving 2 more to write - or if bitOffset was 0 we won't have written anything in the first chunk, 8 in the second, leaving 3 left to write. So, stamp down whatever is left, but do not increment the write position - we've written to the low bits, but the next value might need to write to the high bits. Finally, update bitOffset to be however many low bits we wrote in the last step (which could be zero):
if(bitsLeft != 0)
{
arr[index] = (byte)val;
}
bitOffset = bitsLeft;
The Decode operation is the reverse of this logic - again, calculate the sizes and prepare the state:
int bits = source.Length * 8;
int len = (int)(bits / 11);
int[] arr = new int[len];
int bitOffset = 0, index = 0;
Now loop over the input:
for(int i = 0; i < source.Length; i++)
{
...
}
return arr;
Now, bitOffset is the start position that we want to write to in the current 11-bit value, so if we start at the start, it will be 0 on the first byte, then 8; 3 bits of the second byte join with the first 11-bit integer, so the 5 bits become part of the second - so bitOffset is 5 on the 3rd byte, etc. We can calculate the number of bits left in the current integer by subtracting from 11:
int val = source[i] << bitOffset;
int bitsLeftInVal = 11 - bitOffset;
Now we have 3 possible scenarios:
1) if we have more than 8 bits left in the current value, we can stamp down our input (as a bitwise "or") but do not increment the write position (as we have more to write for this value), and note that we're 8-bits further along:
if(bitsLeftInVal > 8)
{
arr[index] |= val;
bitOffset += 8;
}
2) if we have exactly 8 bits left in the current value, we can stamp down our input (as a bitwise "or") and increment the write position; the next loop can start at zero:
else if(bitsLeftInVal == 8)
{
arr[index++] |= val;
bitOffset = 0;
}
3) otherwise, we have less than 8 bits left in the current value; so we need to write the first bitsLeftInVal bits to the current output position (incrementing the output position), and whatever is left to the next output position. Since we already left-shifted by bitOffset, what this really means is simply: stamp down (as a bitwise "or") the low 11 bits (val & 2047) to the current position, and whatever is left (val >> 11) to the next if that wouldn't exceed our output buffer (padding zeros). Then calculate our new bitOffset:
else
{
arr[index++] |= (val & 2047);
if(index != arr.Length) arr[index] = val >> 11;
bitOffset = 8 - bitsLeftInVal;
}
And that's basically it. Lots of bitwise operations - shifts (<< / >>), masks (&) and combinations (|).
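As a quick usage sketch of the two methods above:
int[] values = { 1, 2, 2046, 1023 };   // each value fits in 11 bits
byte[] packed = Encode(values);        // 4 * 11 = 44 bits -> 6 bytes
int[] roundTripped = Decode(packed);   // recovers the original four values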
If you wanted to store the least significant 11 bits of an int into two bytes such that the least significant byte has bits 1-8 inclusive and the most significant byte has 9-11:
int toStore = 123456789;
byte msb = (byte) ((toStore >> 8) & 7); //or 0b111
byte lsb = (byte) (toStore & 255); //or 0b11111111
To check this, 123456789 in binary is:
0b111010110111100110100010101
                  MMMLLLLLLLL
The bits above the Ls are the lsb and have a value of 21; the bits above the Ms are the msb and have a value of 5.
Doing the work is the shift operator >> where all the binary digits are slid to the right 8 places (8 of them disappear from the right hand side - they're gone, into oblivion):
0b111010110111100110100010101 >> 8 =
0b1110101101111001101
And the mask operator & (the mask operator works by only keeping bits where, in each position, they're 1 in the value and also 1 in the mask) :
0b111010110111100110100010101 &
0b000000000000000000011111111 (255) =
0b000000000000000000000010101
If you're processing an int array, just do this in a loop:
byte[] bs = new byte[ intarray.Length*2 ];
for(int x = 0, b=0; x < intarray.Length; x++){
int toStore = intarray[x];
bs[b++] = (byte) ((toStore >> 8) & 7);
bs[b++] = (byte) (toStore & 255);
}
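Reading the pairs back is the mirror image of that loop (a sketch, assuming the msb/lsb layout above):
int[] ints = new int[bs.Length / 2];
for (int x = 0, b = 0; x < ints.Length; x++)
{
    int msb = bs[b++];          // bits 9-11
    int lsb = bs[b++];          // bits 1-8
    ints[x] = (msb << 8) | lsb;
}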

Saving a vector as a single number?

I was wondering if it would be possible to get a vector with an X and a Y value as a single number, knowing that both X and Y can range from -65000 to +65000.
Is this possible in any way?
Code examples on how to convert from this kind of number and to it would be nice.
Store it in a ulong:
ulong rslt = (uint)x;
rslt = rslt << 32;
rslt |= ((uint)y);
To get it out:
int x = (int)(rslt >> 32);
int y = (int)(rslt & 0xFFFFFFFF);
Assuming X and Y are both integer values and there is no overflow (a 32-bit value is not enough to hold the result), you can use e.g. (pseudocode):
V = fromXY(X, Y) = (y+65000)*130001+(x+65000)
(X,Y) = toXY(V) = (V%130001-65000,V/130001-65000) // <= / is integer division
(130001 is the number of distinct values for X or Y)
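A C# sketch of that pseudocode (FromXY/ToXY are hypothetical helper names; the result needs a 64-bit type, since 130001 * 130001 does not fit in 32 bits):
const long Offset = 65000;
const long Range = 130001; // number of distinct values per axis

static long FromXY(int x, int y) => (y + Offset) * Range + (x + Offset);

static (int X, int Y) ToXY(long v) =>
    ((int)(v % Range - Offset), (int)(v / Range - Offset));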
To combine:
var limit = 65000;
var x = 1;
var y = 2;
var single = x * (limit + 1) + y;
And then:
y = single % (limit + 1);
x = (single - y) / (limit + 1);
Of course, you have to assume that the maximum value for single fits within the size of the data type that stores it (which in this case it does).
The union does what you want very easily.
See also: http://www.cplusplus.com/doc/tutorial/other_data_types/
#include <iostream>
using namespace std;
typedef long long int64; // must be a 64-bit type; plain long is only 32 bits on some platforms
typedef int int32;
union {
struct { int32 a, b; }; // anonymous struct: a common compiler extension
int64 a_and_b;
} stacker;
int main ()
{
stacker.a = -1000;
stacker.b = 2000;
cout << stacker.a << ", " << stacker.b << endl;
cout << stacker.a_and_b << endl;
}
This will output:
-1000, 2000 <-- a and b read as two int32
8594229558296 <-- a and b interpreted as a single int64
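Since the question is about C#, roughly the same overlay can be sketched with an explicit struct layout (hypothetical type name IntPair; C# has no union keyword):
using System.Runtime.InteropServices;

[StructLayout(LayoutKind.Explicit)]
struct IntPair
{
    [FieldOffset(0)] public int A;      // low 32 bits
    [FieldOffset(4)] public int B;      // high 32 bits
    [FieldOffset(0)] public long AAndB; // both halves viewed as one 64-bit value
}

// var p = new IntPair { A = -1000, B = 2000 };
// Console.WriteLine(p.AAndB);   // prints 8594229558296 on a little-endian system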

Software Perlin noise implementation

I have written a 2D Perlin noise implementation based on information from here, here, here, and here. However, the output looks like this.
public static double Perlin(double X, double XScale, double Y, double YScale, double Persistance, double Octaves) {
double total=0.0;
for(int i=0;i<Octaves;i++){
int frq = (int) Math.Pow(2,i);
int amp = (int) Math.Pow(Persistance,i);
total += InterpolatedSmoothNoise((X / XScale) * frq, (Y / YScale) * frq) * amp;
}
return total;
}
private static double InterpolatedSmoothNoise (double X, double Y) {
int ix = (int) Math.Floor(X);
double fx = X-ix;
int iy = (int) Math.Floor(Y);
double fy = Y-iy;
double v1 = SmoothPerlin(ix,iy); // --
double v2 = SmoothPerlin(ix+1,iy); // +-
double v3 = SmoothPerlin(ix,iy+1);// -+
double v4 = SmoothPerlin(ix+1,iy+1);// ++
double i1 = Interpolate(v1,v2,fx);
double i2 = Interpolate(v3,v4,fx);
return Interpolate(i1,i2,fy);
}
private static double SmoothPerlin (int X, int Y) {
double sides=(Noise(X-1,Y,Z)+Noise(X+1,Y,Z)+Noise(X,Y-1,Z)+Noise(X,Y+1,Z)+Noise(X,Y,Z-1)+Noise(X,Y,Z+1))/12.0;
double center=Noise(X,Y,Z)/2.0;
return sides + center;
}
private static double Noise (int X, int Y) {
uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
uint ou = (m_z << 16) + m_w;
return ((ou + 1.0) * 2.328306435454494e-10);
}
Any input on what is wrong is appreciated.
EDIT: I found a way to solve this: I used an array of doubles generated at load time. Any way to implement a good random number generator is still appreciated, though.
I suppose this effect is due to your noise function (all other code looks ok).
The function
private static double Noise (int X, int Y) {
uint m_z = (uint) (36969 * (X & 65535) + (X >> 16));
uint m_w = (uint) (18000 * (Y & 65535) + (Y >> 16));
uint ou = (m_z << 16) + m_w;
return ((ou + 1.0) * 2.328306435454494e-10);
}
isn't very noisy but is strongly correlated with your input X and Y variables. Try using any other pseudo-random function that you seed with your input.
I reconstructed your code in C, following the suggestion from @Howard, and this code is working well for me. I am not sure which Interpolate function you used; I used linear interpolation in my code. I used the following noise function:
static double Noise2(int x, int y) {
int n = x + y * 57;
n = (n<<13) ^ n;
return ( 1.0 - ( (n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0);
}
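In C# that noise function would look something like this (a sketch; the arithmetic is meant to wrap, which is C#'s default unchecked behaviour):
static double Noise2(int x, int y)
{
    int n = x + y * 57;
    n = (n << 13) ^ n;
    return 1.0 - ((n * (n * n * 15731 + 789221) + 1376312589) & 0x7fffffff) / 1073741824.0;
}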

How to reverse that function

I've asked before about the opposite of bitwise AND (&) and you told me it's impossible to reverse.
Well, this is the situation: the server sends an image, which is encoded with the function I want to reverse and then compressed with zlib.
This is how I get the image from the server:
UInt32[] image = new UInt32[200 * 64];
int imgIndex = 0;
byte[] imgdata = new byte[compressed];
byte[] imgdataout = new byte[uncompressed];
Array.Copy(data, 17, imgdata, 0, compressed);
imgdataout = zlib.Decompress(imgdata);
for (int h = 0; h < height; h++)
{
for (int w = 0; w < width; w++)
{
imgIndex = (int)((height - 1 - h) * width + w);
image[imgIndex] = 0xFF000000;
if (((1 << (Int32)(0xFF & (w & 0x80000007))) & imgdataout[((h * width + w) >> 3)]) > 0)
{
image[imgIndex] = 0xFFFFFFFF;
}
}
}
The width, height, decompressed image length and compressed image length are always the same.
When this function is done I put image (the UInt32[] array) into a Bitmap and I've got it.
Now I want to be the server and send that image. I have to do two things:
reverse that function and then compress the result with zlib.
How do I reverse that function so I can encode the picture?
for (int h = 0; h < height; h++)
{
for (int w = 0; w < width; w++)
{
imgIndex = (int)((height - 1 - h) * width + w);
image[imgIndex] = 0xFF000000;
if (((1 << (Int32)(0xFF & (w & 0x80000007))) & imgdataout[((h * width + w) >> 3)]) > 0)
{
image[imgIndex] = 0xFFFFFFFF;
}
}
}
EDIT: The format is 32bppRGB.
The assumption that the & operator is always irreversible is incorrect.
Yes, in general if you have
c = a & b
and all you know is the value of c, then you cannot know what values a or b had before hand.
However it's very common for & to be used to extract certain bits from a longer value, where those bits were previously combined together with the | operator and where each 'bit field' is independent of every other. The fundamental difference with the generic & or | operators that makes this reversible is that the original bits were all zero beforehand, and the other bits in the word are left unchanged. i.e:
0xc0 | 0x03 = 0xc3 // combine two nybbles
0xc3 & 0xf0 = 0xc0 // extract the top nybble
0xc3 & 0x0f = 0x03 // extract the bottom nybble
In this case your current function appears to be taking a 1-bit-per-pixel (monochrome) image and converting it to 32-bit RGBA.
You'll need something like:
int source_image[];
byte dest_image[];
for (int h = 0; h < height; ++h) {
for (int w = 0; w < width; ++w) {
int offset = (h * width) + w;
if (source_image[offset] == 0xffffffff) {
int mask = w % 8; // these two lines convert from one int-per-pixel
offset /= 8; // offset to one-bit-per-pixel
dest_image[offset] |= (1 << mask); // only changes _one_ bit
}
}
}
NB: this assumes the image is a multiple of 8 pixels wide and that the dest_image array was previously all zeroes. I've used % and / in that inner test because they're easier to understand and the compiler should convert them to a mask/shift itself. Normally I'd do the masking and shifting myself.
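In C#, and taking into account that the decoding loop from the question writes its result to row (height - 1 - h), a sketch of the matching encoder might look like this (width assumed to be a multiple of 8, output buffer zero-initialized):
byte[] dest = new byte[(width * height) / 8];
for (int h = 0; h < height; h++)
{
    for (int w = 0; w < width; w++)
    {
        // look up the pixel that the decoder wrote for bit (h * width + w)
        int pixelIndex = (height - 1 - h) * width + w;
        if (image[pixelIndex] == 0xFFFFFFFF)
        {
            dest[(h * width + w) >> 3] |= (byte)(1 << (w & 7)); // set one bit per white pixel
        }
    }
}
// dest is then compressed with zlib before sending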
