Converting a string into BigInteger - c#

I have the following code that creates a very big number (a BigInteger), which is then converted into a string.
// It's a console application.
BigInteger bi = 2;
for (int i = 0; i < 1234; i++)
{
    bi *= 2;
}
string myBigIntegerNumber = bi.ToString();
Console.WriteLine(myBigIntegerNumber);
I know that to convert to int we can use Convert.ToInt32, and to convert to long we use Convert.ToInt64, but what about converting to BigInteger?
How can I convert a string (that represents a very very long number) to BigInteger?

Use the BigInteger.Parse() method.
Converts the string representation of a number in a specified style to
its BigInteger equivalent.
BigInteger bi = 2;
for (int i = 0; i < 1234; i++)
{
    bi *= 2;
}
var myBigIntegerNumber = bi.ToString();
Console.WriteLine(BigInteger.Parse(myBigIntegerNumber));
You can also use the BigInteger.TryParse() method to check whether the conversion succeeded.
Tries to convert the string representation of a number to its
BigInteger equivalent, and returns a value that indicates whether the
conversion succeeded.
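For example, a minimal sketch using the string produced above:
BigInteger parsed;
if (BigInteger.TryParse(myBigIntegerNumber, out parsed))
{
    Console.WriteLine(parsed); // conversion succeeded
}
else
{
    Console.WriteLine("Not a valid BigInteger string.");
}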

Here is another approach, which is faster than BigInteger.Parse():
public static BigInteger ToBigInteger(string value)
{
    // Assumes value contains only decimal digits (no sign, whitespace or separators).
    BigInteger result = 0;
    for (int i = 0; i < value.Length; i++)
    {
        // Multiply the running total by 10 and add the next digit.
        result = result * 10 + (value[i] - '0');
    }
    return result;
}
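Usage, for example (the digit string below is arbitrary; the method performs no validation, so the caller must pass digits only):
BigInteger big = ToBigInteger("91389681247993671255432112000000");
Console.WriteLine(big); // prints the same digits back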

Related

C# floating point to binary string and vice versa

I am converting a floating-point value to its binary string representation:
float resulta = 31.0f / 15.0f; // 2.0666666
var rawbitsa = ToBinaryString(resulta); // returns 01000000000001000100010001000100
where ToBinaryString is coded as:
static string ToBinaryString(float value)
{
    int bitCount = sizeof(float) * 8; // never rely on your knowledge of the size
    // better not use string, to avoid ineffective string concatenation repeated in a loop
    char[] result = new char[bitCount];
    // now, most important thing: (int)value would be a "semantic" cast of the same
    // mathematical value (with possible rounding), something we don't want; so:
    int intValue = System.BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    for (int bit = 0; bit < bitCount; ++bit)
    {
        int maskedValue = intValue & (1 << bit); // this is how shift and mask is done
        if (maskedValue > 0)
            maskedValue = 1;
        // at this point, masked value is either int 0 or 1
        result[bitCount - bit - 1] = maskedValue.ToString()[0];
    }
    return new string(result); // string from character array
}
Now I want to convert this binary string back to a float value.
I tried the following, but it returns the value 2.8293250329111622E-315:
string bstra = "01000000000001000100010001000100";
long w = 0;
for (int i = bstra.Length - 1; i >= 0; i--) w = (w << 1) + (bstra[i] - '0');
double da = BitConverter.ToDouble(BitConverter.GetBytes(w), 0); //returns 2.8293250329111622E-315
I want to get the value 2.0666666 back by passing in the string "01000000000001000100010001000100".
Why am I getting a wrong value? Am I missing something?
You're making this a lot harder than it needs to be. There are actually two errors in your parsing code: the loop walks the string from the end while shifting left, so the bit order comes out reversed, and BitConverter.ToDouble then reinterprets the bits as a 64-bit double instead of a 32-bit float. But you don't need to do any of that by hand.
You could try it like this instead:
static string ToBinaryString(float value)
{
    const int bitCount = sizeof(float) * 8;
    int intValue = System.BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    return Convert.ToString(intValue, 2).PadLeft(bitCount, '0');
}
static float FromBinaryString(string bstra)
{
    int intValue = Convert.ToInt32(bstra, 2);
    return BitConverter.ToSingle(BitConverter.GetBytes(intValue), 0);
}
Example:
float resulta = 31.0F / 15.0F; //2.0666666
var rawbitsa = ToBinaryString(resulta);
Console.WriteLine(rawbitsa); //01000000000001000100010001000100
var back = FromBinaryString(rawbitsa);
Console.WriteLine(back); //2.0666666
Note that the usage of GetBytes is kinda inefficient; if you're OK with unsafe code, you can remove all of that.
Also note that this code is CPU-specific - it depends on the endianness.
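On .NET Core 2.0 or later you can also avoid the byte arrays without resorting to unsafe code. A sketch (the method names here are illustrative; BitConverter.SingleToInt32Bits and Int32BitsToSingle are not available on older .NET Framework versions):
static string ToBinaryStringFast(float value)
{
    // Reinterprets the float's bits as an int directly, with no byte[] allocation.
    return Convert.ToString(BitConverter.SingleToInt32Bits(value), 2).PadLeft(32, '0');
}
static float FromBinaryStringFast(string bstra)
{
    return BitConverter.Int32BitsToSingle(Convert.ToInt32(bstra, 2));
}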

How to better cast a char to int keeping the number value and not the ASCII value? [duplicate]

This question already has answers here:
Convert char to int in C# (20 answers)
Closed 3 years ago.
I just wrote some code (C#) for a sample exam in my C# basics course.
Even though I was able to write it correctly and receive all points, I am not quite satisfied with the way I cast the char ASCII value to the desired int value.
I am asking for a better way to express the following code:
using System;

namespace MultiplyTable
{
    class Program
    {
        static void Main(string[] args)
        {
            //Input:
            string inputNumber = Console.ReadLine();
            //Logic:
            int firstNumber = 0;
            int secondNumber = 0;
            int thirdNumber = 0;
            for (int i = 0; i < inputNumber.Length; i++)
            {
                firstNumber = inputNumber[0] - 48;
                secondNumber = inputNumber[1] - 48;
                thirdNumber = inputNumber[2] - 48;
            }
            for (int p = 1; p <= thirdNumber; p++)
            {
                for (int j = 1; j <= secondNumber; j++)
                {
                    for (int k = 1; k <= firstNumber; k++)
                    {
                        Console.WriteLine($"{p} * {j} * {k} = {p * j * k};");
                    }
                }
            }
        }
    }
}
The input is an integer three-digit number in the range [111… 999].
I used a string instead of an int to read and store all the char values more quickly.
The issue here is that when I have the char, let's say '3', I need to use the int value 3 and not the ASCII value of 51.
As I had limited time to write this code, I managed to resolve it by subtracting 48, as you can see in the code provided.
What is the correct/more advanced way to do this exercise?
Thank you in advance!
Subtracting the ASCII value of '0' from foo's ASCII value gives you the number:
char foo = '2';
int bar = foo - '0';
Or you can simply convert the char to a string and then to an int:
int bar = int.Parse(foo.ToString());
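Applied to the question's code, the digit extraction becomes (a sketch; the surrounding loop in the original is unnecessary, since the three digits sit at fixed positions):
int firstNumber = inputNumber[0] - '0';
int secondNumber = inputNumber[1] - '0';
int thirdNumber = inputNumber[2] - '0';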

Convert string binary to base 10

I am creating an application that will implement the formula shown in this video - The Everything Formula
I suggest you watch it to understand this question. I am trying to replicate the part of the video where he takes the graph and gets what 'k' (the y coordinate) would be. I took every pixel of the image and put it into a string containing the binary version. The binary number's length is so large that I cannot store it as an int or long.
Now, here is the part I cannot solve.
How would I convert a string containing a binary number into a base 10 number also in string format?
I cannot use a long or int type; they are not large enough. Any conversion using the int type will also not work.
Example code:
public void GraphUpdate()
{
    string binaryVersion = string.Empty;
    for (int i = 0; i < 106; i++)
    {
        for (int m = 0; m < 17; m++)
        {
            PixelState p = Map[i, m]; // Map is a 2D array of PixelState, representing the grid / graph.
            if (p == PixelState.Filled)
            {
                binaryVersion += "1";
            }
            else
            {
                binaryVersion += "0";
            }
        }
    }
    // Convert binaryVersion to base 10 without using int or long
}
public enum PixelState
{
    Zero,
    Filled
}
You can use the BigInteger class (in System.Numerics), which is part of .NET 4.0.
See the MSDN documentation for the BigInteger constructor that takes a byte[] as input; this byte[] is your binary number.
The result string can then be retrieved by calling BigInteger.ToString().
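Alternatively, since BigInteger supports ordinary arithmetic, you can skip the byte[] constructor and fold the '0'/'1' characters in directly; ToString() then gives the base-10 string. A minimal sketch (the method name is illustrative; it assumes the input contains only '0' and '1'):
using System.Numerics;

public static string BinaryToDecimalString(string binary)
{
    BigInteger result = BigInteger.Zero;
    foreach (char c in binary)
    {
        // Shift left one binary place and add the next bit.
        result = result * 2 + (c - '0');
    }
    return result.ToString();
}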
Try using Int64. That works up to 9,223,372,036,854,775,807:
using System;

namespace StackOverflow_LargeBinStrToDeciStr
{
    class Program
    {
        static void Main(string[] args)
        {
            Int64 n = Int64.MaxValue;
            Console.WriteLine($"n = {n}"); // 9223372036854775807
            string binStr = Convert.ToString(n, 2);
            Console.WriteLine($"n as binary string = {binStr}"); // 111111111111111111111111111111111111111111111111111111111111111
            Int64 x = Convert.ToInt64(binStr, 2);
            Console.WriteLine($"x = {x}"); // 9223372036854775807
            Console.ReadKey();
        }
    }
}

Evaluating Knuth's arrow notation in a function

I am having trouble calculating Knuth's arrow notation (↑, described here) within a function. What I've made so far is:
int arrowCount = (int)arrowNum.Value; // Part of
BigInteger a = (int)aNum.Value;       // the input I
BigInteger b = (int)bNum.Value;       // already have
BigInteger result = a;
BigInteger temp = a;
for (int i = 0; i < arrowCount; i++)
{
    result = Power(temp, b);
    temp = result;
    b = a;
}
with Power being:
BigInteger Power(BigInteger Base, BigInteger Pow)
{
    BigInteger x = Base;
    for (int i = 0; i < (Pow - 1); i++)
    {
        x *= Base;
    }
    return x;
}
but its values are incorrect and I can't figure out how to fix it. It can handle one-arrow problems like 3↑3 (which is 3^3 = 27), but it can't handle any more arrows than that.
I need a way to handle more arrows, such as 3↑↑3, which should be 7625597484987 (3^27), but I get 19683 (27^3). If you could help me figure out how to get the proper output and explain what I'm doing wrong, I would greatly appreciate it.
I wrote it in Java, using double for the input parameters:
private static double knuthArrowMath(double a, double b, int arrowNum)
{
    if (arrowNum == 1)
        return Math.pow(a, b);
    double result = a;
    for (int i = 0; i < b - 1; i++)
    {
        result = knuthArrowMath(a, result, arrowNum - 1);
    }
    return result;
}
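A direct C# port of the same recursion looks like this (a sketch; as with the Java version, double overflows to infinity very quickly for anything beyond tiny inputs):
private static double KnuthArrowMath(double a, double b, int arrowNum)
{
    if (arrowNum == 1)
        return Math.Pow(a, b);
    // a ↑ⁿ b unrolls to a ↑ⁿ⁻¹ applied to itself b - 1 times, right-associated.
    double result = a;
    for (int i = 0; i < b - 1; i++)
    {
        result = KnuthArrowMath(a, result, arrowNum - 1);
    }
    return result;
}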
If you expect 7625597484987 (3^27) but get 19683 (27^3), isn't it then a simple matter of swapping the arguments when calling your power function?
Looking at your Power function, your code snippet calls Power with temp as the base and b as the power:
int arrowCount = (int)arrowNum.Value; // Part of
BigInteger a = (int)aNum.Value;       // the input I
BigInteger b = (int)bNum.Value;       // already have
BigInteger result = a;
BigInteger temp = a;
for (int i = 0; i < arrowCount; i++)
{
    result = Power(temp, b);
    temp = result;
    b = a;
}
Shouldn't temp and b be swapped, so you get result = Power(b, temp), to get the desired result?
So pass 1 calls Power(3, 3), resulting in temp = 27, and pass 2 calls Power(3, 27). The reason it only works for a single arrow now is that swapping the arguments for the first Power(base, power) call doesn't matter.
As you point out in your answer, this doesn't cover all situations. Given the examples you provided, I created this little console application:
class Program
{
    // Requires: using System.Numerics;
    static void Main(string[] args)
    {
        Console.WriteLine(Arrow(3, 3));
        Console.WriteLine(Arrow(4, 4, 1));
        Console.WriteLine(Arrow(3, 4, 1));
        Console.ReadKey();
    }

    private static BigInteger Arrow(BigInteger baseNumber, BigInteger arrows)
    {
        return Arrow(baseNumber, baseNumber, arrows - 1);
    }

    private static BigInteger Arrow(BigInteger baseNumber, BigInteger currentPower, BigInteger arrows)
    {
        Console.WriteLine("{0}^{1}", baseNumber, currentPower);
        var result = Power(baseNumber, currentPower);
        if (arrows == 1)
        {
            return result;
        }
        else
        {
            return Arrow(baseNumber, result, arrows - 1);
        }
    }

    private static BigInteger Power(BigInteger number, BigInteger power)
    {
        BigInteger x = number; // BigInteger, not int, so large powers don't overflow
        for (int i = 0; i < (power - 1); i++)
        {
            x *= number;
        }
        return x;
    }
}
I came up with a way to use the BigInteger.Pow() function.
It might look a little odd, but that is because C#'s BigInteger.Pow(x, y) only accepts an int for y, and tetrations have HUGE exponents. I had to "flip the script" and convert x^y = y^x for this specific case.
I didn't add in any error checking, and it expects all numbers to be positive ints.
I know this works for x^^2 and x^^3. I also know it works for 2^^4 and 2^^5. I don't have the computing power/memory/math knowledge to know if it works for any other numbers. 2^^4 and 2^^5 were the only ones I could check and test. It may work for other numbers but I was not able to confirm that.
int baseNum = 4;
int exp = 3;
// this example is 4^^3
BigInteger bigAnswer = tetration(baseNum, exp);

// Here is what the method that "does the work" looks like.
// This looks a little odd, but that is because I am using BigInteger.Pow(x,y).
// Unfortunately, y can only be an int. Tetrations have huge exponents, so I had to figure out a
// way to have x^y work as y^x for this specific application.
// No error checking in here, and it expects positive ints only.
// I *know* this works for x^^2, x^^3, but I don't know if it works for
// any other number than 2 at ^^4 or higher.
public static BigInteger tetration(int baseNum, int exp)
{
    if (exp > 2)
    {
        exp = (int)Math.Pow(baseNum, (exp - 3));
    }
    else
    {
        exp = exp - 2;
    }
    Func<BigInteger, int, BigInteger> bigPowHelper = (x, y) => BigInteger.Pow(x, y);
    BigInteger bigAnswer = baseNum;
    for (int i = 0; i < Math.Pow(baseNum, exp); i++)
    {
        bigAnswer = bigPowHelper(bigAnswer, baseNum);
    }
    return bigAnswer;
}
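As a quick check of the 2^^4 case the author mentions (assuming the method above is in scope):
BigInteger check = tetration(2, 4);
Console.WriteLine(check); // 65536, i.e. 2^2^2^2 = 2^16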

How to convert integer to binary string in C#?

I'm writing a number converter. How can I convert an integer to a binary string in C# WITHOUT using built-in functions (Convert.ToString does different things based on the value given)?
Binary -> Sign magnitude
Binary -> One's complement
Binary -> Two's complement
Simple solution:
IntToBinValue = Convert.ToString(6, 2);
Almost all computers today use two's complement representation internally, so if you do a straightforward conversion like this, you'll get the two's complement string:
public string Convert(int x) {
    char[] bits = new char[32];
    int i = 0;
    uint u = (uint)x; // work on the unsigned bits so the shift fills with zeros and the loop terminates for negative x
    while (u != 0) {
        bits[i++] = (u & 1) == 1 ? '1' : '0';
        u >>= 1;
    }
    Array.Reverse(bits, 0, i);
    return new string(bits, 0, i); // take only the digits written, avoiding trailing '\0' characters
}
That's your basis for the remaining two conversions. For sign-magnitude, simply extract the sign beforehand and convert the absolute value:
char sign;
if (x < 0) {
    sign = '1';
    x = -x;
} else {
    sign = '0';
}
string magnitude = Convert(x);
For one's complement, subtract one if the number is negative:
if (x < 0)
    x--;
string onec = Convert(x);
At least part of the answer is to use decimal.GetBits(someValue) to convert the decimal to its binary representation.
BitConverter.GetBytes can be used, in turn, on the elements returned from decimal.GetBits() to convert integers into bytes.
You may find the decimal.GetBits() documentation useful.
I'm not sure how to go from bytes to decimal, though.
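For instance (a sketch; decimal.GetBits returns four ints: the low, middle and high 32 bits of the 96-bit coefficient, plus a flags word carrying the sign and scale):
int[] parts = decimal.GetBits(123.45m);
// parts[0] == 12345 (low), parts[1] == 0 (mid), parts[2] == 0 (high),
// parts[3] == 0x00020000 (scale of 2, positive sign)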
Update: Based on the author's update:
BitConverter contains methods for converting numbers to bytes, which is convenient for getting the binary representation. The GetBytes() and ToInt32() methods are convenient for conversions in each direction. The ToString() overloads are convenient for creating a hexadecimal string representation if you would find that easier to interpret as 1's and 0's.
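For example, a minimal round trip (the hex string shown assumes a little-endian machine):
byte[] bytes = BitConverter.GetBytes(12345);  // int -> byte[]
int back = BitConverter.ToInt32(bytes, 0);    // byte[] -> int
string hex = BitConverter.ToString(bytes);    // "39-30-00-00" on little-endian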
var a = Convert.ToString(4, 2).PadLeft(8, '0');
Here's mine:
(The upper part converts a binary string of up to 32 chars to a 32-bit integer; the lower part converts a 32-bit integer back to a 32-char binary string.)
Hope this helps.
string binaryString = "011100100111001001110011";
int G = 0;
for (int i = 0; i < binaryString.Length; i++)
    G += (int)((binaryString[binaryString.Length - (i + 1)] & 1) << (i % 32));
Console.WriteLine(G); // 7500403

binaryString = string.Empty;
for (int i = 31; i >= 0; i--)
{
    binaryString += (char)(((G & (1 << (i % 32))) >> (i % 32)) | 48);
}
Console.WriteLine(binaryString); // 00000000011100100111001001110011
You can construct the representations digit by digit from first principles.
Not sure what built-in functions you don't want to use, but presumably you can construct a string character by character?
1. Start with the highest power of two not greater than the number.
2. Push a "1" into your string.
3. Subtract that power of two from your number.
4. Take the next-lowest power of two. If you've reached one-half, stop; you're done.
5. If the number that's left is greater than or equal to this power of two, go back to step 2. If not, push a "0" into the string and go back to step 4.
For one's complement and two's complement, calculate those with an additional step.
Or is this way too basic for what you need?
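A minimal sketch of those steps for a positive number (the method name is illustrative, not from the original answer):
static string ToBinaryFromFirstPrinciples(int n)
{
    // Assumes n > 0. Find the highest power of two not greater than n (step 1).
    int power = 1;
    while (power <= n / 2)
        power *= 2;

    var sb = new System.Text.StringBuilder();
    while (power >= 1)
    {
        if (n >= power)
        {
            sb.Append('1'); // step 2: push a "1"
            n -= power;     // step 3: subtract the power of two
        }
        else
        {
            sb.Append('0'); // step 5: push a "0"
        }
        power /= 2;         // step 4: move to the next-lowest power of two
    }
    return sb.ToString();
}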
This is an unsafe implementation:
private static unsafe byte[] GetDecimalBytes(decimal d)
{
    byte* dp = (byte*)&d;
    byte[] result = new byte[sizeof(decimal)];
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        result[i] = *dp;
    }
    return result;
}
And here is reverting back:
private static unsafe decimal GetDecimal(byte[] bytes)
{
    if (bytes == null)
        throw new ArgumentNullException("bytes");
    if (bytes.Length != sizeof(decimal))
        throw new ArgumentOutOfRangeException("bytes", "length must be 16");
    decimal d = 0;
    byte* dp = (byte*)&d;
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        *dp = bytes[i];
    }
    return d;
}
Here is an elegant solution:
// Convert an integer to binary and return it as a string
private static string GetBinaryString(Int32 n)
{
    char[] b = new char[sizeof(Int32) * 8];
    for (int i = 0; i < b.Length; i++)
        b[b.Length - 1 - i] = ((n & (1 << i)) != 0) ? '1' : '0';
    string s = new string(b).TrimStart('0');
    return s.Length == 0 ? "0" : s; // n == 0 would otherwise trim to an empty string
}
