C# floating point to binary string and vice versa

I am converting a floating point value to binary string representation:
float resulta = 31.0f / 15.0f; //2.0666666
var rawbitsa = ToBinaryString(resulta); //returns 01000000000001000100010001000100
where ToBinaryString is coded as:
static string ToBinaryString(float value)
{
    int bitCount = sizeof(float) * 8; // never rely on your knowledge of the size
    // better not use string, to avoid ineffective string concatenation repeated in a loop
    char[] result = new char[bitCount];
    // now, most important thing: (int)value would be a "semantic" cast of the same
    // mathematical value (with possible rounding), something we don't want; so:
    int intValue = System.BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    for (int bit = 0; bit < bitCount; ++bit)
    {
        int maskedValue = intValue & (1 << bit); // this is how shift and mask is done
        if (maskedValue > 0)
            maskedValue = 1;
        // at this point, maskedValue is either int 0 or 1
        result[bitCount - bit - 1] = maskedValue.ToString()[0];
    }
    return new string(result); // string from character array
}
Now I want to convert this binary string back to a float value.
I tried the following, but it returns the value 2.8293250329111622E-315:
string bstra = "01000000000001000100010001000100";
long w = 0;
for (int i = bstra.Length - 1; i >= 0; i--) w = (w << 1) + (bstra[i] - '0');
double da = BitConverter.ToDouble(BitConverter.GetBytes(w), 0); //returns 2.8293250329111622E-315
I want to get the value 2.0666666 back by passing in the string "01000000000001000100010001000100".
Why am I getting a wrong value? Am I missing something?

You're making this a lot harder than it needs to be. The error is in the character parsing code: the loop walks the string from the last character to the first while shifting left, so the bits come out reversed, and you then reinterpret those 64 bits as a double instead of 32 bits as a float. But you don't need to do any of that.
You could try like this instead:
static string ToBinaryString(float value)
{
    const int bitCount = sizeof(float) * 8;
    int intValue = System.BitConverter.ToInt32(BitConverter.GetBytes(value), 0);
    return Convert.ToString(intValue, 2).PadLeft(bitCount, '0');
}

static float FromBinaryString(string bstra)
{
    int intValue = Convert.ToInt32(bstra, 2);
    return BitConverter.ToSingle(BitConverter.GetBytes(intValue), 0);
}
Example:
float resulta = 31.0F / 15.0F; //2.0666666
var rawbitsa = ToBinaryString(resulta);
Console.WriteLine(rawbitsa); //01000000000001000100010001000100
var back = FromBinaryString(rawbitsa);
Console.WriteLine(back); //2.0666666
Note that the usage of GetBytes is kinda inefficient; if you're OK with unsafe code, you can remove all of that.
Also note that this code is CPU-specific - it depends on the endianness.
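On newer frameworks (an assumption: BitConverter.SingleToInt32Bits and BitConverter.Int32BitsToSingle are available from .NET Core 2.0 onward), you can skip the GetBytes allocation without resorting to unsafe code. A minimal sketch:
static string ToBinaryString(float value) =>
    Convert.ToString(BitConverter.SingleToInt32Bits(value), 2).PadLeft(32, '0');

static float FromBinaryString(string bstra) =>
    BitConverter.Int32BitsToSingle(Convert.ToInt32(bstra, 2));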

Related

C# exponential format: force the first digit to be zero

I have no problem converting a double to such a string: 7.8746137240E-008
I don't know how to force the first digit to always be zero: 0.7874613724E-007
How can I achieve that using a custom string format in C#?
Maybe do it yourself ;)
double foo = 7.8746137240E-008;
var numOfDigits = foo == 0 ? 0 : (int)Math.Ceiling(Math.Log10(Math.Abs(foo)));
string formatString = string.Format("{0:0.000000}E{1:+000;-000;+000}", foo / Math.Pow(10, numOfDigits), numOfDigits);
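For foo = 7.8746137240E-008 this should produce 0.787461E-007 (six decimal places as written; add more zeros to the format to keep more digits).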
I found a simple solution:
value.ToString("\\0.0000000000E+000;-\\0.0000000000E+000")
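Applied to the question's example, this is how I'd expect it to behave (the leading \0 is an escaped literal zero, and with no digit placeholder before the decimal point the mantissa is scaled into the 0.x range, shifting the exponent by one):
double value = 7.8746137240E-008;
Console.WriteLine(value.ToString("\\0.0000000000E+000;-\\0.0000000000E+000"));
// 0.7874613724E-007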
You can do this by formatting with standard exponential notation followed by some post-processing:
public static string FormatNumberExpZero(double value, IFormatProvider format = null)
{
    if (!double.IsFinite(value)) // Infinity and NaN
        return value.ToString(format);
    // Format the number to a temporary buffer.
    // "E10" means exponential notation with 10 decimal places.
    Span<char> buffer = stackalloc char[24];
    value.TryFormat(buffer, out int charCount, "E10", format);
    // Don't touch any negative sign.
    Span<char> bufferNoSign = (buffer[0] == '-') ? buffer.Slice(1) : buffer;
    // Move everything after '.' one character forward to make space for the additional zero.
    bufferNoSign.Slice(2, charCount - 2).CopyTo(bufferNoSign.Slice(3));
    charCount++;
    // Change 'X.' to '0.X'
    bufferNoSign[2] = bufferNoSign[0];
    bufferNoSign[1] = '.';
    bufferNoSign[0] = '0';
    // Read the exponent from the buffer.
    Span<char> expChars = buffer.Slice(charCount - 4, 4);
    int exponent = (expChars[1] - '0') * 100 + (expChars[2] - '0') * 10 + expChars[3] - '0';
    if (expChars[0] == '-')
        exponent = -exponent;
    // Add 1 to the exponent to compensate.
    exponent++;
    // Write the new exponent back.
    expChars[0] = (exponent < 0) ? '-' : '+';
    int expAbs = (exponent < 0) ? -exponent : exponent;
    int expDigit1 = expAbs / 100;
    int expDigit2 = (expAbs - expDigit1 * 100) / 10;
    int expDigit3 = expAbs - expDigit1 * 100 - expDigit2 * 10;
    expChars[1] = (char)(expDigit1 + '0');
    expChars[2] = (char)(expDigit2 + '0');
    expChars[3] = (char)(expDigit3 + '0');
    // Create the string.
    return new string(buffer.Slice(0, charCount));
}
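A quick usage check (output as I compute it by hand; the "E10" inside means eleven significant digits survive):
Console.WriteLine(FormatNumberExpZero(7.8746137240E-008));
// 0.78746137240E-007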
This solution is better than the one by @MarkSouls because it does not suffer from the floating-point inaccuracy and possible overflow to infinity of computing value * 10. It requires .NET Standard 2.1 and so doesn't work with .NET Framework, though it can be modified to work with it (at the cost of allocating an additional string and char array).
I know of no fancy way to achieve what you want, but you can do it by writing your own function.
public static class Extender
{
    public static string MyToString(this double value)
    {
        string s = (value * 10).ToString("E");
        s = s.Replace(".", "");
        return "0." + s;
    }
}
It just bumps the exponent by one (via the multiplication by 10), removes the decimal point, and prepends "0.".
public static void Main(string[] args)
{
    Console.WriteLine(1d.MyToString());
    Console.WriteLine(3.14159.MyToString());
    Console.WriteLine(0.0033.MyToString());
    Console.WriteLine(999414128.0.MyToString());
}
/* Output
0.1000000E+001
0.3141590E+001
0.3300000E-002
0.9994141E+009
*/
Not super cool code, but it works, though I didn't check edge cases.
I wonder if there's a more formal way to do it.

Evaluating Knuth's arrow notation in a function

I am having trouble calculating Knuth's arrow notation, which is ↑ and can be found here, within a function. What I've made so far is:
int arrowCount = (int)arrowNum.Value; // Part of
BigInteger a = (int)aNum.Value;       // the input I
BigInteger b = (int)bNum.Value;       // already have
BigInteger result = a;
BigInteger temp = a;
for (int i = 0; i < arrowCount; i++)
{
    result = Power(temp, b);
    temp = result;
    b = a;
}
with Power being
BigInteger Power(BigInteger Base, BigInteger Pow)
{
    BigInteger x = Base;
    for (int i = 0; i < (Pow - 1); i++)
    {
        x *= Base;
    }
    return x;
}
but it's incorrect with its values and I can't figure out a way to fix it. It can handle one-arrow problems like 3↑3 (which is 3^3 = 27), but it can't handle any more arrows than that.
I need a way to figure out more arrows, such as 3↑↑3,
which should be 7625597484987 (3^27), but I get 19683 (27^3). If you could help me figure out how to get the proper output and explain what I'm doing wrong, I would greatly appreciate it.
I wrote it in Java, using double for the input parameters:
private static double knuthArrowMath(double a, double b, int arrowNum)
{
    if (arrowNum == 1)
        return Math.pow(a, b);
    double result = a;
    for (int i = 0; i < b - 1; i++)
    {
        result = knuthArrowMath(a, result, arrowNum - 1);
    }
    return result;
}
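For completeness, here is my C# port of that recursion (not the poster's code), using BigInteger for the result. Note that BigInteger.Pow only takes an int exponent, which caps how far this can go anyway:
using System.Numerics;

static BigInteger KnuthArrow(int a, BigInteger b, int arrowNum)
{
    if (arrowNum == 1)
        return BigInteger.Pow(a, (int)b); // cast is safe only while b fits in an int
    BigInteger result = a;
    for (BigInteger i = 0; i < b - 1; i++)
        result = KnuthArrow(a, result, arrowNum - 1);
    return result;
}

// KnuthArrow(3, 3, 2) == 7625597484987, i.e. 3^^3 = 3^27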
If you expect 7625597484987 (3^27) but get 19683 (27^3), isn't it then a simple matter of swapping the arguments when calling your power function?
Looking at your Power function, your code snippet seems to call Power with temp as the base and b as the power:
int arrowCount = (int)arrowNum.Value; // Part of
BigInteger a = (int)aNum.Value;       // the input I
BigInteger b = (int)bNum.Value;       // already have
BigInteger result = a;
BigInteger temp = a;
for (int i = 0; i < arrowCount; i++)
{
    result = Power(temp, b);
    temp = result;
    b = a;
}
Shouldn't temp and b be swapped, so you get result = Power(b, temp), to produce the desired result?
Pass 1 then calls Power(3, 3), resulting in temp = 27, and pass 2 calls Power(3, 27). The reason it only works for a single arrow now is that swapping the arguments for the first Power(base, power) call doesn't matter.
As you point out in your answer this doesn't cover all situations. Given the examples you provided I created this little console application:
using System;
using System.Numerics;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(Arrow(3, 3));
        Console.WriteLine(Arrow(4, 4, 1));
        Console.WriteLine(Arrow(3, 4, 1));
        Console.ReadKey();
    }

    private static BigInteger Arrow(BigInteger baseNumber, BigInteger arrows)
    {
        return Arrow(baseNumber, baseNumber, arrows - 1);
    }

    private static BigInteger Arrow(BigInteger baseNumber, BigInteger currentPower, BigInteger arrows)
    {
        Console.WriteLine("{0}^{1}", baseNumber, currentPower);
        var result = Power(baseNumber, currentPower);
        if (arrows == 1)
        {
            return result;
        }
        else
        {
            return Arrow(baseNumber, result, arrows - 1);
        }
    }

    private static BigInteger Power(BigInteger number, BigInteger power)
    {
        BigInteger x = number;
        for (int i = 0; i < (power - 1); i++)
        {
            x *= number;
        }
        return x;
    }
}
I came up with a way to use the BigInteger.Pow() function.
It might look a little odd, but that is because the C# BigInteger.Pow(x, y) only accepts an int for y, and tetrations have HUGE exponents. I had to "flip the script" and convert x^y = y^x for this specific case.
I didn't add in any error checking, and it expects all numbers to be positive ints.
I know this works for x^^2 and x^^3. I also know it works for 2^^4 and 2^^5. I don't have the computing power/memory/math knowledge to know if it works for any other numbers. 2^^4 and 2^^5 were the only ones I could check and test. It may work for other numbers but I was not able to confirm that.
int baseNum = 4;
int exp = 3;
// this example is 4^^3
BigInteger bigAnswer = tetration(baseNum, exp);
// Here is what the method that "does the work" looks like.
// This looks a little odd but that is because I am using BigInteger.Pow(x, y).
// Unfortunately, y can only be an int. Tetrations have huge exponents, so I had to figure out a
// way to have x^y work as y^x for this specific application.
// No error checking in here, and it expects positive ints only.
// I *know* this works for x^^2, x^^3, but I don't know if it works for
// any other number than 2 at ^^4 or higher.
public static BigInteger tetration(int baseNum, int exp)
{
    if (exp > 2)
    {
        exp = (int)Math.Pow(baseNum, (exp - 3));
    }
    else
    {
        exp = exp - 2;
    }
    Func<BigInteger, int, BigInteger> bigPowHelper = (x, y) => BigInteger.Pow(x, y);
    BigInteger bigAnswer = baseNum;
    for (int i = 0; i < Math.Pow(baseNum, exp); i++)
    {
        bigAnswer = bigPowHelper(bigAnswer, baseNum);
    }
    return bigAnswer;
}
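A quick check of the 2^^4 case the answer says it was tested against (2^^4 = 2^2^2^2 = 2^16):
BigInteger check = tetration(2, 4);
Console.WriteLine(check); // 65536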

Match a sequence of bits in a number and then convert the match into zeroes?

My assignment is to search through the binary representation of a number and replace a matched pattern of another binary representation of a number. If I get a match, I convert the matching bits from the first integer into zeroes and move on.
For example the number 469 would be 111010101 and I have to match it with 5 (101). Here's the program I've written so far. Doesn't work as expected.
using System;

namespace Conductors
{
    class Program
    {
        static void Main(string[] args)
        {
            //this is the number I'm searching for a match in
            int binaryTicket = 469;
            //This is the pattern I'm trying to match (101)
            int binaryPerforator = 5;
            string binaryTicket01 = Convert.ToString(binaryTicket, 2);
            bool match = true;
            //in a 32 bit integer, position 29 is the last one I would
            //search in, since I'm searching for the next 3
            for (int pos = 0; pos < 29; pos++)
            {
                for (int j = 0; j <= 3; j++)
                {
                    var posInBinaryTicket = pos + j;
                    var posInPerforator = j;
                    int bitInBinaryTicket = (binaryTicket & (1 << posInBinaryTicket)) >> posInBinaryTicket;
                    int bitInPerforator = (binaryPerforator & (1 << posInPerforator)) >> posInPerforator;
                    if (bitInBinaryTicket != bitInPerforator)
                    {
                        match = false;
                        break;
                    }
                    else
                    {
                        //what would be the proper bitwise operator here?
                        bitInBinaryTicket = 0;
                    }
                }
                Console.WriteLine(binaryTicket01);
            }
        }
    }
}
A few things:
Use uint for this. Makes things a hell of a lot easier when dealing with binary numbers.
You aren't really setting anything - you're simply storing information, which is why you're printing out the same number so often.
You should loop x times, where x is the length of the binary string (not a fixed 29). There's no need for inner loops:
static void Main(string[] args)
{
    //this is the number I'm searching for a match in
    uint binaryTicket = 469;
    //This is the pattern I'm trying to match (101)
    uint binaryPerforator = 5;
    var numBinaryDigits = Math.Ceiling(Math.Log(binaryTicket, 2));
    for (var i = 0; i < numBinaryDigits; i++)
    {
        var perforatorShifted = binaryPerforator << i;
        //We need to mask off the result (otherwise we fail for checking 101 -> 111)
        //The mask will put 1s in each place the perforator is checking.
        var perforDigits = (int)Math.Ceiling(Math.Log(perforatorShifted, 2));
        uint mask = (uint)Math.Pow(2, perforDigits) - 1;
        Console.WriteLine("Ticket:\t" + GetBinary(binaryTicket));
        Console.WriteLine("Perfor:\t" + GetBinary(perforatorShifted));
        Console.WriteLine("Mask :\t" + GetBinary(mask));
        if ((binaryTicket & mask) == perforatorShifted)
        {
            Console.WriteLine("Match.");
            //Imagine we have the case:
            //Ticket:
            //111010101
            //Perforator:
            //000000101
            //Is a match. What binary operation can we do to 0-out the final 101?
            //We need to AND it with
            //111111010
            //To get that value, we need to invert the perforatorShifted
            //000000101
            //XOR
            //111111111
            //EQUALS
            //111111010
            //Which would yield:
            //111010101
            //AND
            //111111010
            //Equals
            //111010000
            var flipped = perforatorShifted ^ ((uint)0xFFFFFFFF);
            binaryTicket = binaryTicket & flipped;
        }
    }
    string binaryTicket01 = Convert.ToString(binaryTicket, 2);
    Console.WriteLine(binaryTicket01);
}

static string GetBinary(uint v)
{
    return Convert.ToString(v, 2).PadLeft(32, '0');
}
Please read over the above code - if there's anything you don't understand, leave me a comment and I can run through it with you.
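For reference, my hand trace of the question's inputs (469 = 111010101, perforator 101): matches occur at shifts 0 and 4, and the final WriteLine should print 110000000, i.e. both non-overlapping 101 groups zeroed out. Worth confirming when you run it.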

itoa conversion in C#

It was an interview question asked to me - write itoa conversion without using any builtin functions.
The following is the algorithm I am using. But ('0' + n % 10); is throwing an error:
cannot convert string to int
private static string itoa(int n)
{
    string result = string.Empty;
    char c;
    bool sign = n > 0 ? true : false;
    while (true)
    {
        result = result + ('0' + n % 10); //'0'
        n = n / 10;
        if (n <= 0)
        {
            break;
        }
    }
    if (sign)
    {
        result = result + '-';
    }
    return strReverse(result);
}
I'm unclear why you'd want to do this; just call ToString on your integer. You can specify whatever formatting you need with the various overloads.
As @minitech commented, we usually just use ToString() to do that in C#. If you really want to write the algorithm on your own, the following is an implementation:
public static partial class TestClass {
    public static String itoa(int n, int radix) {
        if(0==n)
            return "0";
        var index = 32; // worst case: 31 binary digits plus a sign (Math.Abs rules out int.MinValue anyway)
        var buffer = new char[index];
        var xlat = "0123456789abcdefghijklmnopqrstuvwxyz";
        for(int r = Math.Abs(n), q; r > 0; r = q) {
            q = Math.DivRem(r, radix, out r);
            buffer[index -= 1] = xlat[r];
        }
        if(n < 0) {
            buffer[index -= 1] = '-';
        }
        return new String(buffer, index, buffer.Length - index);
    }

    public static void TestMethod() {
        Console.WriteLine("{0}", itoa(-0x12345678, 16));
    }
}
It works only for int. The range of int is -2147483648 to 2147483647; in base 10 the string representation is at most 11 characters, and in base 2 it can be up to 32, which is what the buffer above is sized for.
The signature of itoa in C is char * itoa(int n, char * buffer, int radix);, but we don't need to pass the buffer in C#; we can allocate it locally.
The approach that adds '0' to the remainder may not work when the radix is greater than 10; if I recall correctly, itoa in C supports up to base 36, as does this implementation.
('0' + n % 10) results in an int value, so you need to cast it back to char. There are also several other issues with your code, like adding the - sign on the wrong side, mishandling negative values, etc.
My version:
static string itoa(int n)
{
    char[] result = new char[11]; // 11 = "-2147483648".Length
    int index = result.Length;
    bool sign = n < 0;
    do
    {
        int digit = n % 10;
        if (sign)
        {
            digit = -digit;
        }
        result[--index] = (char)('0' + digit);
        n /= 10;
    }
    while (n != 0);
    if (sign)
    {
        result[--index] = '-';
    }
    return new string(result, index, result.Length - index);
}
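A nice property of keeping the digits negative instead of calling Math.Abs: this handles int.MinValue, which has no positive counterpart in int. For example:
Console.WriteLine(itoa(int.MinValue)); // -2147483648
Console.WriteLine(itoa(0));            // 0
Console.WriteLine(itoa(-42));          // -42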

How to convert integer to binary string in C#?

I'm writing a number converter. How can I convert an integer to a binary string in C# WITHOUT using built-in functions (Convert.ToString does different things based on the value given)?
Binary -> Sign magnitude
Binary -> One's complement
Binary -> Two's complement
Simple solution:
IntToBinValue = Convert.ToString(6, 2);
Almost all computers today use two's complement representation internally, so if you do a straightforward conversion like this, you'll get the two's complement string:
public string Convert(int x) {
    if (x == 0) return "0";
    char[] bits = new char[32];
    int i = 0;
    uint u = (uint)x; // work on the raw bits so >> doesn't sign-extend negative values forever
    while (u != 0) {
        bits[i++] = (u & 1) == 1 ? '1' : '0';
        u >>= 1;
    }
    Array.Reverse(bits, 0, i);
    return new string(bits, 0, i);
}
That's your basis for the remaining two conversions. For sign-magnitude, simply extract the sign beforehand and convert the absolute value:
char sign;
if (x < 0) {
sign = '1';
x = -x;
} else {
sign = '0';
}
string magnitude = Convert(x);
For one's complement, subtract one if the number is negative:
if (x < 0)
    x--;
string onec = Convert(x);
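Putting the three pieces together (my wiring of the snippets above, with Convert being the two's-complement function):
int x = -5;
Console.WriteLine(Convert(x));            // 11111111111111111111111111111011 (two's complement)

char signChar = x < 0 ? '1' : '0';
string mag = Convert(Math.Abs(x));
Console.WriteLine(signChar + mag);        // 1101 (sign bit followed by |x| = 101)

int onesAdjusted = x < 0 ? x - 1 : x;
Console.WriteLine(Convert(onesAdjusted)); // 11111111111111111111111111111010 (one's complement)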
At least part of the answer is to use decimal.GetBits(someValue) to convert the decimal to its binary representation.
BitConverter.GetBytes can be used, in turn, on the elements returned from decimal.GetBits() to convert integers into bytes.
You may find the decimal.GetBits() documentation useful.
I'm not sure how to go from bytes to decimal, though.
Update, based on the author's update:
BitConverter contains methods for converting numbers to bytes, which is convenient for getting the binary representation. The GetBytes() and ToInt32() methods are convenient for conversions in each direction. The ToString() overloads are convenient for creating a hexadecimal string representation if you would find that easier to interpret as 1's and 0's.
var a = Convert.ToString(4, 2).PadLeft(8, '0');
Here's mine:
(The upper part converts a 32-char binary string to a 32-bit integer; the lower part converts the 32-bit integer back to a 32-char binary string.)
Hope this helps.
string binaryString = "011100100111001001110011";
int G = 0;
for (int i = 0; i < binaryString.Length; i++)
    G += (int)((binaryString[binaryString.Length - (i + 1)] & 1) << (i % 32));
Console.WriteLine(G); //7500403

binaryString = string.Empty;
for (int i = 31; i >= 0; i--)
{
    binaryString += (char)(((G & (1 << (i % 32))) >> (i % 32)) | 48);
}
Console.WriteLine(binaryString); //00000000011100100111001001110011
You can construct the representations digit by digit from first principles.
Not sure what built-in functions you don't want to use, but presumably you can construct a string character by character?
1. Start with the highest power of two that does not exceed the number.
2. Push a "1" into your string.
3. Subtract that power of two from your number.
4. Take the next-lowest power of two. If you've reached one-half, stop. You're done.
5. If the number that's left is greater than or equal to this power of two, go back to step 2. If not, push a "0" into the string and go back to step 4.
For one's complement and two's complement, calculate those with an additional step.
Or is this way too basic for what you need?
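Here's a minimal sketch of those steps for non-negative inputs (the method name is mine):
static string ToBinaryFirstPrinciples(int number)
{
    if (number == 0) return "0";
    int power = 1;
    while (power <= number / 2) power *= 2; // highest power of two not exceeding number
    var sb = new System.Text.StringBuilder();
    while (power >= 1)
    {
        if (number >= power) { sb.Append('1'); number -= power; }
        else sb.Append('0');
        power /= 2;
    }
    return sb.ToString();
}

// ToBinaryFirstPrinciples(469) == "111010101"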
This is an unsafe implementation:
private static unsafe byte[] GetDecimalBytes(decimal d)
{
    byte* dp = (byte*)&d;
    byte[] result = new byte[sizeof(decimal)];
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        result[i] = *dp;
    }
    return result;
}
And here is reverting back:
private static unsafe decimal GetDecimal(byte[] bytes)
{
    if (bytes == null)
        throw new ArgumentNullException("bytes");
    if (bytes.Length != sizeof(decimal))
        throw new ArgumentOutOfRangeException("bytes", "length must be 16");
    decimal d = 0;
    byte* dp = (byte*)&d;
    for (int i = 0; i < sizeof(decimal); i++, dp++)
    {
        *dp = bytes[i];
    }
    return d;
}
Here is an elegant solution:
// Convert an integer to binary and return it as a string
private static string GetBinaryString(Int32 n)
{
    char[] b = new char[sizeof(Int32) * 8];
    for (int i = 0; i < b.Length; i++)
        b[b.Length - 1 - i] = ((n & (1 << i)) != 0) ? '1' : '0';
    string s = new string(b).TrimStart('0');
    return s.Length == 0 ? "0" : s; // without this, n == 0 would yield an empty string
}
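For example:
Console.WriteLine(GetBinaryString(469)); // 111010101
Console.WriteLine(GetBinaryString(0));   // 0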
