Unset All Bits Except Most Significant Bit in C#

Is there a quick and easy way to unset all the bits in a number except the most significant set bit? In other words, I would like to take an integer x and apply the & operator to it, where the other operand is 1 left-shifted by the bit length of x minus one.
Example:
return UnsetAllBitsExceptMSB(400);
should return 256

Yes, there is a trick:
private int UnsetAllBitsExceptMSB(int x)
{
    x |= x >> 16;
    x |= x >> 8;
    x |= x >> 4;
    x |= x >> 2;
    x |= x >> 1;
    x ^= x >> 1;
    return x;
}
This works by first turning on all the bits to the right of the most significant set bit (00110000 becomes 00111111). It then XORs the result with itself shifted right by one to turn all but the top bit off (00111111 XOR 00011111 = 00100000).
There are other ways of doing this that will perform better in some circumstances, but this one has predictable performance no matter the input (5 ORs, 6 right shifts, and an XOR).
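To make the mechanics concrete, here is a hand-worked trace of the example input 400 (the annotations are mine, not from the original answer):

int x = 400;          // 0b1_1001_0000
x |= x >> 16;         // 400 = 0b1_1001_0000 (nothing new shifted in yet)
x |= x >> 8;          // 401 = 0b1_1001_0001
x |= x >> 4;          // 409 = 0b1_1001_1001
x |= x >> 2;          // 511 = 0b1_1111_1111 (everything below the MSB is now set)
x |= x >> 1;          // 511 = 0b1_1111_1111
x ^= x >> 1;          // 256 = 0b1_0000_0000 (511 ^ 255)
Console.WriteLine(x); // 256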

I'm not sure about "quick and easy", but you don't need any bitwise operations for this... your question could be reworded as "how can I find the largest power of 2 that's no greater than my input?" So a simple way to do that:
private int UnsetAllBitsExceptMSB(int x)
{
    int y = 1;
    while (y <= x)
    {
        y *= 2;
    }
    return y / 2;
}
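One caveat worth noting (my addition, not part of the original answer): for inputs above 2^30 the doubling overflows a signed int. A sketch that sidesteps this by accumulating in a long:

private static int UnsetAllBitsExceptMSB(int x)
{
    long y = 1; // long so that y can exceed int.MaxValue without overflowing
    while (y <= x)
    {
        y *= 2;
    }
    return (int)(y / 2);
}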

Given that int represents a 32-bit signed integer, I guess the sign bit shouldn't be taken into consideration. So this should get what you want:
int result = 1 << 30;
while ((result & myInt) != result) // note: never terminates if myInt == 0
    result >>= 1;

Hi, here is another option to consider:
public static int GetTopBitValue(int number)
{
    if (number < 0)
    {
        throw new ArgumentOutOfRangeException(nameof(number), "Non-negative numbers are expected");
    }

    int i = 1;
    while (i <= number)
        i = i << 1;

    return i >> 1;
}
Edited to cover corner cases.
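As a quick usage check (mine, not from the original answer), this matches the question's example and handles zero:

Console.WriteLine(GetTopBitValue(400)); // 256
Console.WriteLine(GetTopBitValue(0));   // 0 (no bits set)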

Related

How to encode a decimal number to binary in 16 bits in C#?

The problem is asking:
The user gives me an integer n,
I convert it to binary in 16 bits,
reverse the binary,
then decode the reversed binary into a new integer.
example:
14769 is 0011100110110001 (the 2 zeros in the front are the problem for me)
reverse the binary:
1000110110011100
Decode:
36252
I wrote the code, but when I convert to binary it only gives me 11100110110001, without the 00 in front, so the whole reversed binary will change and the new integer will be different.
This is my code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.RegularExpressions;

namespace HelloWorld
{
    public class Program
    {
        public static void Main(string[] args)
        {
            long n, n1, p, i, r, sum, inv, inv1, newint;
            Console.WriteLine("give n:");
            n = long.Parse(Console.ReadLine());
            n1 = n;
            p = 1;
            sum = 0;
            i = n;
            // the for below is for the binary representation of n
            for (i = n; i != 0; i = i / 2)
            {
                r = i % 2;
                sum = sum + r * p;
                p = p * 10;
            }
            inv = 0;
            // the for below is to reverse the above binary representation
            for (i = sum; i != 0; i = i / 10)
            {
                r = i % 10;
                inv = 10 * inv + r;
            }
            inv1 = inv;
            newint = 0;
            p = 0;
            // the for below is to decode the reversed binary to its decimal representation
            for (i = inv; i != 0; i = i / 10)
            {
                r = i % 10;
                newint = newint + r * (long)Math.Pow(2, p);
                p = p + 1;
            }
            Console.WriteLine("The number that you gave = {0} \nIts binary representation = {1} \n\nThe inverse binary representation = {2} \nThe integer corresponding to the inverse binary number = {3}", n1, sum, inv1, newint);
        }
    }
}
So how can I encode it in 16 bits?
Edit:
1) We didn't learn built-in functions
2) We didn't learn padding or Convert.Int...
3) We only know the for loop (+ the while loop, but better not to use it)
4) We can't use strings either
You could reverse the bits using some simple bitwise operators.
ushort num = 14769;
ushort result = 0;

// ushort is 16 bits, therefore exactly 16 iterations is required
for (var i = 0; i < 16; i++, num >>= 1)
{
    // shift result bits left by 1 position
    result <<= 1;
    // add the i'th bit of num in the lowest position
    result |= (ushort)(num & 1);
}
Console.WriteLine(result); // 36252
You can try using Convert to obtain binary representation and Aggregate (Linq) to get back decimal:
using System.Linq;
...
int value = 14769;
int result = Convert
    .ToString(value, 2)                       // Binary representation
    .PadLeft(16, '0')                         // Ensure it is 16 characters long
    .Reverse()                                // Reverse
    .Aggregate(0, (s, a) => s * 2 + a - '0'); // Back to decimal

Console.Write($"{value} => {result}");
Output:
14769 => 36252
Edit: Loop solution (if you are not allowed to use the classes above...)
int value = 14769;
int result = 0;

for (int i = 0, v = value; i < 16; ++i, v /= 2)
    result = result * 2 + v % 2;

Console.Write($"{value} => {result}");
Explanation (how for above works):
First of all how can we get all 16 bits of the number? We can use standard algorithm based on remainder:
14769 / 1 % 2 == 1,
14769 / 2 % 2 == 0,
14769 / 4 % 2 == 0,
14769 / 8 % 2 == 0,
14769 / 16 % 2 == 1,
...
these are the bits from right to left: 11100110110001. Typical code can be
int v = value; // we don't want to change value, let's work with its copy - v

for (int i = 0; i < 16; ++i)
{
    // rightmost bit
    int bit = v % 2;
    // we divide v by two to get rid of the rightmost bit
    v = v / 2;
}
Note that we compute bits from right to left - in reverse order - the very order we are looking for! How can we build result from these bits?
result = bit0 + 2 * (bit1 + 2 * (bit2 + 2 * (bit3 + ...)))
So we can easily modify our loop into
int result = 0;
int v = value; // we don't want to change value, let's work with its copy - v

for (int i = 0; i < 16; ++i)
{
    // rightmost bit
    int bit = v % 2;
    result = result * 2 + bit;
    // we divide v by two to get rid of the rightmost bit
    v = v / 2;
}
Finally, if we get rid of bit and declare v within the loop header, we arrive at my loop solution above.

C# Elegant way for retaining sign of original int/double variable after operations

Is there a clever way to retain the sign of an integer/double variable after performing a bunch of operations on it? By elegant, I'm probably looking for a bitwise operation or some sort of function to retain the sign.
Here's what I'd call the not-so-elegant way:
int myNum = -4;
bool isNegative = myNum < 0 ? true : false;

myNum += 8 / 2 % 4; //some operation

if ((isNegative && myNum > 0) || (!isNegative && myNum < 0))
    myNum *= -1;
Edit:
The operation in my particular scenario simply wants to change the magnitude of the number to match another number's. So say myNum is -2 and matchNum is 8; I want myNum to become -8.
"The particular scenario I'm having is I have a 2D coordinate system. If abs(x) > abs(y) change the magnitude of y to match x and vice versa for abs(x) < abs(y)"
Based on what you are actually trying to do an approach like this might be simpler:
int max = Math.Max(Math.Abs(x), Math.Abs(y)); // the larger of the two magnitudes
return (max * Math.Sign(x), max * Math.Sign(y));
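Wrapped up as a method (the name MatchMagnitudes and the tuple return are my framing, not from the original answer):

static (int X, int Y) MatchMagnitudes(int x, int y)
{
    // the larger of the two magnitudes wins
    int max = Math.Max(Math.Abs(x), Math.Abs(y));
    return (max * Math.Sign(x), max * Math.Sign(y));
}

For example, MatchMagnitudes(-2, 8) returns (-8, 8), matching the asker's example.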
A more general way (as people have said, you'll need to compensate for Math.Sign() returning 0):
int myNum = -4;
int sign = Math.Sign(myNum);
myNum += 8 / 2 % 4; //some operation
myNum = Math.Abs(myNum) * sign;
A fun, fast, but unreadable way for integers which is immune to the Math.Sign() issue:
int origNum = -4;
int newNum = origNum + (8 / 2 % 4); //some operation
int signMask = (origNum ^ newNum) >> 31; // flip the sign of newNum if origNum
newNum = (newNum ^ signMask) - signMask; // and newNum have different signs.
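To see why this works (my annotation, with hypothetical values): the arithmetic shift by 31 turns the sign-difference bit into an all-ones or all-zeros mask, and XOR-then-subtract is two's-complement negation:

int origNum = -4;
int newNum = 3;                          // pretend some operation flipped the sign
int signMask = (origNum ^ newNum) >> 31; // signs differ -> mask == -1 (all ones)
newNum = (newNum ^ signMask) - signMask; // (3 ^ -1) - (-1) == ~3 + 1 == -3
Console.WriteLine(newNum);               // -3: magnitude kept, original sign restored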
Or perhaps for the floating-point types you can mask the sign bit, since they conform to IEEE 754. If the JIT is intelligent about this, it'll result in some very efficient SSE:
double myNum = -4.0;
long sign = GetSign(myNum);

myNum += 8.0 / 2.0 % 4.0; //some operation

myNum = SetSign(myNum, sign);

static long GetSign(double x)
{
    return BitConverter.DoubleToInt64Bits(x) & signMask;
}

static double SetSign(double x, long sign)
{
    return BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits(x) & ~signMask | sign);
}

const long signMask = unchecked((long)(1UL << 63));
Your code can be shortened to something like
int myNum = -4;
bool isNegative = myNum < 0;
myNum += 8 / 2 % 4; //some operation
myNum *= (isNegative == myNum < 0) ? 1 : -1;
If you want to try it bitwise, you could save the highest bit and use it as new sign, but in my opinion it would not be more elegant, but less readable.
This is reasonably short and easy to understand, plus the implementation for floating-point types would be identical:
int myValue = -4;
int newValue = some_operation(myValue);

if ((myValue < 0) ^ (newValue < 0))
{
    newValue = -newValue;
}
Don't know if it is the most elegant way, but the following code could address the coordinate manipulation. (y / absY) gives us the sign; absX supplies the magnitude.
int absX = Math.Abs(x);
int absY = Math.Abs(y);

if (absX > absY)
    y = (y / absY) * absX; // note: assumes y != 0 here
else if (absX < absY)
    x = (x / absX) * absY; // note: assumes x != 0 here

GF(256) finite field multiplication function in C#

I'm implementing AES in C# and at some point (MixColumns function) I have to multiply two Bytes over the GF(2^8) finite field.
So, I have three options:
Use a default function that dotNet has (does it have something like that?)
Write a custom function which does that
Use lookup tables
For the custom function I found a piece of C code which I tried to rewrite for C#, but it doesn't work (I get wrong results). (*)
Here is the original C piece of code (source):
/* Multiply two numbers in the GF(2^8) finite field defined
 * by the polynomial x^8 + x^4 + x^3 + x + 1 */
uint8_t gmul(uint8_t a, uint8_t b) {
    uint8_t p = 0;
    uint8_t counter;
    uint8_t hi_bit_set;
    for (counter = 0; counter < 8; counter++) {
        if (b & 1)
            p ^= a;
        hi_bit_set = (a & 0x80);
        a <<= 1;
        if (hi_bit_set)
            a ^= 0x1b; /* x^8 + x^4 + x^3 + x + 1 */
        b >>= 1;
    }
    return p;
}
And this is what I rewrote:
public Byte GMul(Byte a, Byte b) { // Galois Field (256) Multiplication
    Byte p = 0;
    Byte counter;
    Byte hi_bit_set;
    for (counter = 0; counter < 8; counter++) {
        if ((b & 1) != 0) {
            p ^= a;
        }
        hi_bit_set = (Byte)(a & 0x80);
        a <<= 1;
        if (hi_bit_set != 0) {
            a ^= 0x1b; /* x^8 + x^4 + x^3 + x + 1 */
        }
        b >>= 1;
    }
    return p;
}
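As a quick sanity check (my addition), the AES specification (FIPS-197, section 4.2) works through the product {57} • {83} = {c1}, which this C# version reproduces:

Console.WriteLine(GMul(0x57, 0x83) == 0xc1); // True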
I also found some lookup tables here, and it seemed a simple and fine approach, but I don't really know how to use them, though I got a hunch. (**)
Bottom line: which option should I choose, and how can I make it work? What I wrote above is all I've got so far, and I don't really want to go very deep into the math.
UPDATE:
*) Meanwhile I realised my rewritten C# code was producing correct answers; it was just my fault because I messed up when I verified them.
**) The tables can be used as a Byte[256] array, and the answer for, let's say, x*3 is table_3[x], with x converted from hex to decimal when used as an index into the table array.
In order to multiply x * 3 in GF(2^8), one just accesses x = table_3[x];
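A sketch of how such a table could be generated with the GMul function from the question (the array name table_3 is illustrative, and GMul is assumed to be accessible as a static method):

// Build a 256-entry lookup table for "multiply by 3" in GF(2^8),
// so that table_3[x] == GMul((byte)x, 3) for every byte value x.
byte[] table_3 = new byte[256];
for (int x = 0; x < 256; x++)
    table_3[x] = GMul((byte)x, 3);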
There's also a three-lookup-table method available that uses a logarithm approach.
Just as with regular numbers, where a*b = 2^(log2(a) + log2(b)), the same holds in GF(2^8), but without floating point or rounding errors.
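A minimal sketch of that log/antilog approach for the AES field, assuming generator 3 (the table names and layout are my choices, not from the linked source):

static readonly byte[] Exp = new byte[510]; // antilog table, doubled to avoid a mod 255
static readonly byte[] Log = new byte[256]; // log table; Log[0] is unused

static void BuildTables()
{
    byte x = 1;
    for (int i = 0; i < 255; i++)
    {
        Exp[i] = x;
        Log[x] = (byte)i;
        // multiply x by the generator 3: x*3 == (x*2) XOR x in GF(2^8)
        byte x2 = (byte)(x << 1);
        if ((x & 0x80) != 0)
            x2 ^= 0x1b; // reduce by the AES polynomial
        x = (byte)(x2 ^ x);
    }
    // duplicate the table so Log[a] + Log[b] (at most 508) indexes directly
    for (int i = 255; i < 510; i++)
        Exp[i] = Exp[i - 255];
}

static byte GMulByTable(byte a, byte b)
{
    if (a == 0 || b == 0)
        return 0; // the log of zero is undefined; zero times anything is zero
    return Exp[Log[a] + Log[b]];
}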

Find most significant bit of a BigInteger

I have read many fine algorithms for identifying the most significant bit for 32- and 64-bit integers (including other posts here on SO). But I am using BigIntegers, and will be dealing with numbers up to 4000 bits long. (The BigInteger will hold the Hilbert index into the Hilbert space-filling curve that meanders through a 1000-dimensional hypercube at a fractal depth of 4.) But the bulk of the cases will involve numbers that could fit inside a 64-bit integer, so I want a solution that is optimal for the common cases but can handle the extreme cases.
The naive way is:
BigInteger n = BigInteger.Parse("234762348763498247634"); // too large for a ulong literal
int count = 0;
while (n > 0) {
    n >>= 1;
    count++;
}
I was thinking of converting common cases to Longs and using a 64-bit algorithm on those, otherwise using a different algorithm for the really big numbers. But I am not sure how expensive the conversion to a Long is, and whether that will swamp the efficiencies of doing the remainder of the computation on a 64-bit quantity. Any thoughts?
One intended use for this function is to help optimize inverse gray code calculations.
Update. I coded two approaches and ran a benchmark.
If the number was under Ulong.MaxValue, then converting to a Ulong and doing the binary search approach was twice as fast as using BigInteger.Log.
If the number was very large (I went as high as 10000 bits), then Log was 3.5 times faster.
96 msec elapsed for one million calls to MostSignificantBitUsingLog (convertible to Long).
42 msec elapsed for one million calls to MostSignificantBitUsingBinarySearch (convertible to Long).
74 msec elapsed for ten thousand calls to MostSignificantBitUsingLog (too big to convert).
267 msec elapsed for ten thousand calls to MostSignificantBitUsingBinarySearch (too big to convert).
Here is the code for using Log:
public static int MostSignificantBitUsingLog(BigInteger i)
{
    int bit;
    if (i == 0)
        bit = -1;
    else
        bit = (int)BigInteger.Log(i, 2.0);
    return bit;
}
Here is my approach to binary search. It could be improved to extend the binary division up into the BigInteger range. I will try that next.
public static int MostSignificantBitUsingBinarySearch(BigInteger i)
{
    int bit;
    if (i.IsZero)
        bit = -1;
    else if (i < ulong.MaxValue)
    {
        ulong y = (ulong)i;
        ulong s;
        bit = 0;
        s = y >> 32;
        if (s != 0)
        {
            bit = 32;
            y = s;
        }
        s = y >> 16;
        if (s != 0)
        {
            bit += 16;
            y = s;
        }
        s = y >> 8;
        if (s != 0)
        {
            bit += 8;
            y = s;
        }
        s = y >> 4;
        if (s != 0)
        {
            bit += 4;
            y = s;
        }
        s = y >> 2;
        if (s != 0)
        {
            bit += 2;
            y = s;
        }
        s = y >> 1;
        if (s != 0)
            bit++;
    }
    else
        return 64 + MostSignificantBitUsingBinarySearch(i >> 64);
    return bit;
}
Update 2: I changed my binary search algorithm to work against BigIntegers up to one million binary digits and not call itself recursively in 64 bit chunks. Much better. Now it takes 18 msec to run my test, and is four times faster than calling Log! (In the code below, MSB is my ulong function that does the same sort of thing, with the loop unrolled.)
public static int MostSignificantBitUsingBinarySearch(BigInteger i)
{
    int bit;
    if (i.IsZero)
        bit = -1;
    else if (i < ulong.MaxValue)
        bit = MSB((ulong)i);
    else
    {
        bit = 0;
        int shift = 1 << 20; // Accommodate up to one million bits.
        BigInteger remainder;
        while (shift > 0)
        {
            remainder = i >> shift;
            if (remainder != 0)
            {
                bit += shift;
                i = remainder;
            }
            shift >>= 1;
        }
    }
    return bit;
}
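The MSB(ulong) helper referenced above isn't shown in the post; here is a plausible reconstruction (mine) using the same unrolled binary-search idea as the earlier listing:

static int MSB(ulong y)
{
    int bit = 0;
    if ((y >> 32) != 0) { bit += 32; y >>= 32; }
    if ((y >> 16) != 0) { bit += 16; y >>= 16; }
    if ((y >> 8) != 0) { bit += 8; y >>= 8; }
    if ((y >> 4) != 0) { bit += 4; y >>= 4; }
    if ((y >> 2) != 0) { bit += 2; y >>= 2; }
    if ((y >> 1) != 0) { bit += 1; }
    return bit;
}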
You can calculate the log2, which represents the number of bits needed:
var numBits = (int)Math.Ceiling(BigInteger.Log(bigInt, 2));
(Math.Ceiling and the static BigInteger.Log are the correct APIs here; note this undercounts by one bit when the value is an exact power of two.)
You can treat it like a binary-search problem.
You have an upper limit of 4000 (add some room maybe):
int m = (lo + hi) / 2;
BigInteger x = BigInteger.One << m;
if (x > n) ...
else ...
In .NET 5 this is now built-in...
long bitLength = myBigInt.GetBitLength(); // for a positive value, the MSB position is bitLength - 1
If you can use Java rather than C#, there is a library for arbitrary precision Hilbert curve indexing that you can find at http://uzaygezen.googlecode.com. For the implementation of the gray code inverse, you may want to have a closer look at LongArrayBitVector.grayCodeInverse or perhaps BitSetBackedVector.grayCodeInverse in the mentioned project.
8 years late, to find the MSB top bit (aka Log2) I came up with this quick method...
static int GetTopBit(BigInteger value)
{
    if (value < 0)
        value = BigInteger.Negate(value); // assign the result; Negate does not mutate its argument

    int lowerBytes = value.GetByteCount(true) - 1;
    int t = value.ToByteArray(true)[lowerBytes]; // most significant byte (the array is little-endian)
    int top = t > 127 ? 8 : t > 63 ? 7 : t > 31 ? 6 : t > 15 ? 5 : t > 7 ? 4 : t > 3 ? 3 : t > 1 ? 2 : 1;
    int topbit = top + lowerBytes * 8;
    return topbit;
}

int to binary function without recursion?

I want to display my decimal number as bits.
int g = 2323;

for (int i = 31; i >= 0; i--) // int has 32 bits
{
    Console.Write(((g >> 1) & 1) == 1 ? "1" : "0"); // shift right 1 and display it
    g = g / 2; // shift right = divide by 2
}
However, this displays the number mirrored (12345 -> 54321). I could start from the left and shift left, but then I might get an exception (too big a number).
What do I need to change in my code to display it correctly, with:
no Convert(...) method
no insertion into a middleman array
no recursion.
Is there anything?
Just off the top of my head:
int g = 2323;
for (uint mask = 0x80000000; mask != 0; mask >>= 1)
    Console.Write(((uint)g & mask) != 0 ? "1" : "0");
You can use LINQ to simplify the code.
string.Join("", Enumerable.Range(0, 32).Select(i => (num >> (31 - i) & 1).ToString()))
Instead of shifting the number, shift a mask. Start at 0x80000000 and & it with the number. Non-zero result = '1'. Shift the mask right 31 times to examine all the bit positions.
This solution is similar to yours, but it checks the most significant bit (masked by 0x80000000, corresponding to 10000000000000000000000000000000 in binary), rather than the least significant bit (masked by 1).
uint g = 2323;

for (int i = 0; i < 32; ++i)
{
    Console.Write((g & 0x80000000) == 0 ? "0" : "1");
    g <<= 1;
}
Use the following variation to eliminate leading zeros:
uint g = 2323;
bool isSignificant = false;

for (int i = 0; i < 32; ++i)
{
    bool isZero = (g & 0x80000000) == 0;
    if (!isZero)
        isSignificant = true;
    if (isSignificant)
        Console.Write(isZero ? "0" : "1");
    g <<= 1;
}
You are extracting the bits starting from the least significant end, so they're printed in that mirrored order :)
You can create a mask with 1 in MSB and right-shift it 1-bit every iteration (don't forget to make the mask unsigned).
The mask can be created by shifting a 1 into the most significant bit position, like
uint mask = 1u << 31;
Now you can shift it 1 bit to the right each time:
for (int i = 0; i < 32; i++)
{
    Console.Write(((g & mask) == 0) ? "0" : "1");
    mask >>= 1;
}
Note: The mask must be unsigned, otherwise whenever you apply right-shift on it, the MSB/sign bit (which is 1) will be successively copied to the bits to the left.
However, you won't have this requirement if you create the mask every time:
for (int i = 31; i >= 0; i--)
{
    Console.Write(((g & (1 << i)) == 0) ? "0" : "1");
}
This loop is similar to the loop in your code.
