Convert 24-bit two's complement to int? - C#

I have a C#/Mono app that reads data from an ADC. My read function returns the sample as a ulong. The data is in two's complement and I need to translate it into an int. E.g.:
0x7FFFFF = + 8,388,607
0x7FFFFE = + 8,388,606
0x000000 = 0
0xFFFFFF = -1
0x800001 = - 8,388,607
0x800000 = - 8,388,608
How can I do this?

const int MODULO = 1 << 24;
const int MAX_VALUE = (1 << 23) - 1;

int transform(int value)
{
    // Anything above the largest positive 24-bit value is the unsigned
    // image of a negative number: subtract 2^24 to recover it.
    if (value > MAX_VALUE)
    {
        value -= MODULO;
    }
    return value;
}
Explanation: MAX_VALUE is the largest positive value that can be stored in a 24-bit signed int, so if the value is less than or equal to that, it should be returned as is. Otherwise the value is the unsigned representation of a two's-complement negative number. The way two's complement works is that a negative number is stored modulo MODULO, which effectively means MODULO has been added to it. So to convert it back to a signed value, we subtract MODULO.
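If you'd rather avoid the constants, the same sign extension can be done with shifts: move the 24-bit value into the top of the int, then arithmetic-shift it back down. A minimal sketch, assuming the ulong from the ADC only uses its low 24 bits:

// Sign-extend a 24-bit two's-complement reading (sketch; assumes the
// raw value fits in the low 24 bits of the ulong).
int SignExtend24(ulong raw)
{
    return ((int)raw << 8) >> 8;  // signed right shift propagates the sign bit
}

With this, 0xFFFFFF comes back as -1 and 0x800000 as -8,388,608, matching the table above.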

Related

Shift bits of an integer only if the number of bits of its binary representation is greater than a given value

I need to shift the bits of an integer to the right, but only if the number of bits in its binary representation is greater than a certain number. For the example, let's take 10.
If the integer is 818, its binary representation is 1100110010; in that case I do nothing.
If the integer is 1842, its binary representation is 11100110010, which is longer than 10 bits by one, so I need to shift one bit to the right (or set the bit at index 10 to 0, which gives the same result as far as I know; maybe I'm wrong).
What I did until now is make an integer array of ones and zeros representing the int, but I'm sure there is a more elegant way of doing this:
int y = 818;
string s = Convert.ToString(y, 2);
int[] bits = s.PadLeft(8, '0')
              .Select(c => int.Parse(c.ToString()))
              .ToArray();
if (bits.Length > 10)
{
    for (int i = 10; i < bits.Length; i++)
    {
        bits[i] = 0;
    }
}
I also tried to do this:
if (bits.Length > 10) { y = y >> (bits.Length - 10); }
but for some reason I got 945 (1110110001) when the input was 1891 (11101100011).
There's no need to do this with strings. 2 to the power of 10 has 11 binary digits, so
if (y >= Math.Pow(2, 10))
{
    y = y >> 1;
}
seems to do what you want.
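If the width can exceed 11 bits, a string-free generalization might compute the bit length with shifts and drop the excess. A sketch (ClampToBits is a hypothetical name, untested):

// Shift y right just far enough that it fits in maxBits bits.
static int ClampToBits(int y, int maxBits)
{
    int bitLength = 0;
    for (int v = y; v > 0; v >>= 1)
        bitLength++;                  // count significant bits
    return bitLength > maxBits ? y >> (bitLength - maxBits) : y;
}

ClampToBits(818, 10) leaves 818 alone, while ClampToBits(1842, 10) gives 921.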

Convert a variable size hex string to signed number (variable size bytes) in C#

C# provides the methods Convert.ToUInt16("FFFF", 16) / Convert.ToInt16("FFFF", 16) to convert hex strings into unsigned and signed 16 bit integers. These methods work fine for 16/32 bit values, but not for 12 bit values.
I would like to convert a 3 char long hex string to a signed integer. How could I do it? I would prefer a solution that takes the size in bits as a parameter to decide the sign:
Convert(string hexString, int fromBase, int size)
Convert("FFF", 16, 12) returns -1.
Convert("FFFF", 16, 16) returns -1.
Convert("FFF", 16, 16) returns 4095.
The easiest way I can think of to convert 12 bit signed hex to a signed integer is as follows:
string value = "FFF";
int convertedValue = (Convert.ToInt32(value, 16) << 20) >> 20; // -1
The idea is to shift the result as far left as possible, so that the value's sign bit lands in the int's sign bit position, then shift right again to the original position. This works because a "signed shift right" operation preserves the sign bit.
You can generalize this into a method as follows:
int Convert(string value, int fromBase, int bits)
{
    int bitsToShift = 32 - bits;
    // System.Convert spelled out because this method is itself named Convert.
    return (System.Convert.ToInt32(value, fromBase) << bitsToShift) >> bitsToShift;
}
You can cast the result to a short if you want a 16 bit value when working with 12 bit hex strings. Performance will be the same as a dedicated 16 bit version, because bit shift operators on short cast the values to int anyway, and this approach gives you the flexibility to specify more than 16 bits if needed without writing another method.
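As a quick sanity check against the examples in the question (untested sketch):

Console.WriteLine(Convert("FFF", 16, 12));   // -1
Console.WriteLine(Convert("FFFF", 16, 16));  // -1
Console.WriteLine(Convert("FFF", 16, 16));   // 4095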
Ah, you'd like to calculate the Two's Complement for a certain number of bits (12 in your case, but really it should work with anything).
Here's the code in C#, blatantly stolen from the Python example in the wiki article:
int Convert(string hexString, int fromBase, int num_bits)
{
    // System.Convert.ToUInt16 limits this version to at most 16 bits.
    var i = System.Convert.ToUInt16(hexString, fromBase);
    var mask = 1 << (num_bits - 1);     // the sign bit for num_bits
    return -(i & mask) + (i & ~mask);   // subtract the sign bit's weight
}
Convert("FFF", 16, 12) returns -1
Convert("4095", 10, 12) is also -1 as expected

How can I convert a ulong to a positive int?

I have a piece of code that is
// Bernstein hash
// http://www.eternallyconfuzzled.com/tuts/algorithms/jsw_tut_hashing.aspx
ulong result = (ulong)s[0];
for (int i = 1; i < s.Length; ++i)
{
    result = 33 * result + (ulong)s[i];
}
return (int)result % Buckets.Count;
and the problem is that it sometimes returns negative values. I know the reason: (int)result can be negative. But I want to coerce it to be non-negative, since it's used as an index. Now I realize I could do
int k = (int)result % Buckets.Count;
k = k < 0 ? k * -1 : k;
return k;
but is there a better way?
On a deeper level, why is int used for the index of containers in C#? I come from a C++ background, and we have size_t, which is an unsigned integral type. That makes more sense to me.
Use
return (int)(result % (ulong)Buckets.Count);
As you sum up values, you eventually reach a value too large to be expressed as a positive number in a signed 32 bit integer. The cast to int then returns a negative number, and the modulo operation also returns a negative number (C#'s % keeps the sign of the dividend). If you do the modulo operation first, you get a small non-negative number, and the cast to int does no harm.
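A hypothetical illustration of why the order matters (7 stands in for Buckets.Count):

ulong result = 3000000001;  // larger than int.MaxValue
int count = 7;              // stand-in for Buckets.Count
Console.WriteLine((int)result % count);           // -6: the cast overflows first
Console.WriteLine((int)(result % (ulong)count));  //  5: modulo first, then a harmless cast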
While you can find a way to cast this to an int properly, I'm wondering why you don't just calculate it as an int from the beginning.
int result = (int)s[0]; // or, if s[0] is already an int, omit the cast
for (int i = 1; i < s.Length; ++i)
{
    result = 33 * result + (int)s[i];
}
// Note: Math.Abs throws for int.MinValue; the modulo-first approach above avoids that edge case.
return Math.Abs(result) % Buckets.Count;
As to why C# uses a signed int for indexes, it has to do with cross-language compatibility: unsigned types are not CLS-compliant, so public APIs avoid them.

Create random ints with minimum and maximum from Random.NextBytes()

Title pretty much says it all. I know I could use Random.Next(), of course, but I want to know if there's a way to turn unbounded random data into bounded data without statistical bias. (This means no Random.Next() % (maximum - minimum) + minimum.) Surely there is a method that doesn't introduce bias into the data it outputs?
If you assume that the bits are randomly distributed, I would suggest:
1. Generate enough bytes to get a number within the range (e.g. 1 byte to get a number in the range 0-100, 2 bytes to get a number in the range 0-30000 etc).
2. Use only enough bits from those bytes to cover the range you need. For example, if you're generating numbers in the range 0-100, take the bottom 7 bits of the byte you've generated.
3. Interpret the bits you've got as a number in the range [0, 2^n), where n is the number of bits.
4. Check whether the number is in your desired range. It should be at least half the time (on average).
5. If so, use it. If not, repeat the above steps until a number is in the right range.
The use of just the required number of bits is key to making this efficient - you'll throw away up to half the numbers you generate, but no more than that, assuming a good distribution. (And if you are generating numbers in a nicely binary range, you won't need to throw anything away.)
Implementation left as an exercise to the reader :)
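For what it's worth, a minimal sketch of that rejection loop, assuming maxValue is exclusive and strictly greater than minValue (NextUnbiased is a hypothetical name):

static int NextUnbiased(Random rnd, int minValue, int maxValue)
{
    long range = (long)maxValue - minValue;  // size of the target range
    int bits = 0;
    while ((1L << bits) < range)
        bits++;                              // smallest n with 2^n >= range
    var buffer = new byte[8];
    while (true)
    {
        rnd.NextBytes(buffer);
        long candidate = BitConverter.ToInt64(buffer, 0) & ((1L << bits) - 1);
        if (candidate < range)               // reject out-of-range candidates
            return (int)(candidate + minValue);
    }
}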
You could try with something like:
public static int MyNextInt(Random rnd, int minValue, int maxValue)
{
    var buffer = new byte[4];
    rnd.NextBytes(buffer);
    uint num = BitConverter.ToUInt32(buffer, 0);
    // The +1 is to exclude the maxValue in the case that
    // minValue == int.MinValue, maxValue == int.MaxValue
    double dbl = num * 1.0 / ((long)uint.MaxValue + 1);
    long range = (long)maxValue - minValue;
    int result = (int)(dbl * range) + minValue;
    return result;
}
Totally untested... I can't guarantee that the results are truly pseudo-random... But the idea of creating a double (dbl) is the same one used by the Random class, except that I use uint.MaxValue as the base instead of int.MaxValue. This way I don't have to check for negative values from the buffer.
I propose a generator of random integers based on NextBytes.
This method discards only 9.62% of bits on average over the word size range for positive Int32's, thanks to the use of Int64 as the representation for bit manipulation.
Maximum bit loss occurs at a word size of 22 bits, where 20 of the 64 bits used in the byte range conversion are lost; in this case the bit efficiency is 68.75%.
Also, 25% of values are lost because of clipping the unbounded range to the maximum value.
Be careful to use Take(N) on the returned IEnumerable, because it's an infinite generator otherwise.
I'm using a buffer of 512 long values, so it generates 4096 random bytes at once. If you just need a sequence of a few integers, change the buffer size from 512 to a more optimal value, down to 1.
public static class RandomExtensions
{
    public static IEnumerable<int> GetRandomIntegers(this Random r, int max)
    {
        if (max < 1)
            throw new ArgumentOutOfRangeException("max", max, "Must be a positive value.");
        const int longWordsTotal = 512;
        const int bufferSize = longWordsTotal * 8;
        var buffer = new byte[bufferSize];
        var wordSize = (int)Math.Log(max, 2) + 1;  // bits needed to represent max
        while (true)
        {
            r.NextBytes(buffer);
            for (var longWordIndex = 0; longWordIndex < longWordsTotal; longWordIndex++)
            {
                // Step 8 bytes per 64-bit word (the offset is in bytes, not words).
                ulong longWord = BitConverter.ToUInt64(buffer, longWordIndex * 8);
                var lastStartBit = 64 - wordSize;
                for (var startBit = 0; startBit <= lastStartBit; startBit += wordSize)
                {
                    var mask = ((1UL << wordSize) - 1) << startBit;
                    var unboundValue = (int)((mask & longWord) >> startBit);
                    if (unboundValue <= max)  // clip the unbounded range to [0, max]
                        yield return unboundValue;
                }
            }
        }
    }
}
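Usage might look like this (a sketch; Take needs using System.Linq):

var rnd = new Random();
foreach (var value in rnd.GetRandomIntegers(100).Take(10))
    Console.WriteLine(value);  // ten values in the range [0, 100]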

Converting from bitstring to integer

I need a function like
int GetIntegerFromBinaryString(string binary, int bitCount)
if binary = "01111111" and bitCount = 8, it should return 127
if binary = "10000000" and bitCount = 8, it should return -128
The numbers are stored in 2's complement form. How can I do it? Are there any built-in functions that would help, so that I needn't calculate it manually?
Prepend the string with copies of the sign bit (0's or 1's) to pad it to the width of the target type, then do
int number = Convert.ToInt16("11111111" + "10000000", 2);
here you go.
static int GetIntegerFromBinaryString(string binary, int bitCount)
{
    if (binary.Length == bitCount && binary[0] == '1')
        // Negative: sign-extend with 1's to 32 bits, then parse.
        return Convert.ToInt32(binary.PadLeft(32, '1'), 2);
    else
        return Convert.ToInt32(binary, 2);
}
Convert it to the 2's complement version of a 32 bit number, then simply let the Convert.ToInt32 method do its magic.
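Checked against the examples from the question (sketch):

Console.WriteLine(GetIntegerFromBinaryString("01111111", 8));  //  127
Console.WriteLine(GetIntegerFromBinaryString("10000000", 8));  // -128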
