I'm trying to convert a float to its string representation without the result appearing in scientific notation.
I first tried:
ToString("0." + new string('#', 7))
but this doesn't seem to work for large values. For example:
float largeNumber = 12345678f;
string str = largeNumber.ToString("0." + new string('#', 7));
results in "12345680"
I then tried ToString("R").
This works for the large number above, but once the numbers get too large, it displays them in scientific notation: for example, 5000000000f results in "5E+09". And small numbers like 0.0005 come out as 0.0004999999966.
I've also tried mixing the two, but I still get scientific notation in some cases.
My test program is pasted below. I appreciate that there will be precision issues, but I'm wondering if I can do any better than what I have?
class Program
{
    static void Main(string[] args)
    {
        Write(0.123456789f);
        Write(0.12345678f);
        Write(0.1234567f);
        Write(0.123456f);
        Write(0.12345f);
        Write(0.1234f);
        Write(0.123f);
        Write(0.12f);
        Write(0.1f);
        Write(1);
        Write(12);
        Write(123);
        Write(1234);
        Write(12345);
        Write(123456);
        Write(1234567);
        Write(12345678);
        Write(123456789);
        Console.WriteLine();

        float f = 5000000000f;
        for (int i = 0; i < 17; ++i)
        {
            Write(f);
            f /= 10;
        }
        Console.WriteLine();

        f = 5000000000f;
        for (int i = 0; i < 17; ++i)
        {
            Write(f < 1 ? f + 1 : f);
            f /= 10;
        }
        Console.Read();
    }

    static void Write(float f)
    {
        //string str = f.ToString("0." + new string('#', 7));
        //string str = f.ToString("R");
        string str = Math.Abs(f) < 1 ? f.ToString("0." + new string('#', 7)) : f.ToString("R");
        Console.WriteLine(str);
    }
}
The problem is that float only carries about 7 decimal digits of precision, and the standard formatting rounds a float to at most 7 significant digits. 12345678 has 8 significant digits, so it is displayed as the nearest 7-digit value, 12345680 (which is why the round-trip "R" format handles that value correctly). It's not the size of the number, but the number of digits of precision, that is the key issue.
Also, 0.0005 cannot be represented exactly in a binary floating-point number; what gets stored is the nearest representable 32-bit float, and repeatedly dividing by 10 (as your test program does) compounds the error, which is how you end up with 0.0004999999966.
decimal supports much greater precision and can represent base-10 numbers exactly; the standard N format specifier then displays them without scientific notation.
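For example, a minimal sketch of the decimal approach (the output comments assume the invariant culture):

decimal largeNumber = 12345678m;
Console.WriteLine(largeNumber.ToString("N0"));  // 12,345,678
Console.WriteLine(5000000000m.ToString("N0"));  // 5,000,000,000
Console.WriteLine(0.0005m.ToString());          // 0.0005 - stored exactly, never scientific notation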
try this:
largeNumber.ToString("r")
You can find the list of available formats here:
http://msdn.microsoft.com/en-us/library/dwhawy9k%28v=vs.110%29.aspx
The 0 in the format string is not a literal zero: it's a zero placeholder ("digit if present, otherwise zero"), while '#' is a digit placeholder ("digit if present, otherwise nothing").
You can use the format f.ToString("0." + new string('#', 7)) for numbers greater than zero.
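For example, a quick illustration of the two placeholders (output comments assume the invariant culture):

Console.WriteLine(0.5.ToString("0.0######"));  // 0.5
Console.WriteLine(0.5.ToString("#.#######"));  // .5  ('#' drops the leading zero)
Console.WriteLine(0.0.ToString("0.##"));       // 0   ('#' also drops trailing zeroes)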
I just tested it in PowerShell and it works fine for me, although it's probably using doubles or decimals:
PS C:\> $test = "{0:0.0######}"
PS C:\> $test -f 0.5
0,5
PS C:\> $test -f 0.4443423
0,4443423
PS C:\> $test -f 123.4443423
123,4443423
PS C:\> $test -f 45425123.4443423
45425123,4443423
Definitely, the problem seems to be float precision:
PS C:\> $test -f [float]45425123.4443423
45425120,0
Related
I'm trying to implement SQL Server Vardecimal decompression. Values are stored as 3 decimal digits per every 10 bits. But during implementation I found strange behavior in the math. Here is a simple test I made:
private SqlDecimal Test() {
    SqlDecimal mantissa = 0;
    SqlDecimal sign = -1;
    byte exponent = 0x20;
    int numDigits = 0;
    // -999999999999999999999999999999999.99999
    for (int i = 0; i < 13; i++) {
        int temp = 999;
        // equal to mantissa = mantissa * 1000 + temp;
        numDigits += 3;
        int pwr = exponent - (numDigits - 1);
        mantissa += temp * (SqlDecimal)Math.Pow(10, pwr);
    }
    return sign * mantissa;
}
The first 2 passes are fine; I have
999000000000000000000000000000000
999999000000000000000000000000000
but the third has
999999998999999999999980020000000
Is it some bug in C# SqlDecimal math or am I doing something wrong?
This is an issue with how you're constructing the value to add here:
mantissa += temp * (SqlDecimal)Math.Pow(10, pwr);
The problem starts when pwr is 24. You can see this very clearly here:
Console.WriteLine((SqlDecimal) Math.Pow(10, 24));
The output on my box is:
999999999999999980000000
That comes from doing the arithmetic in floating point: Math.Pow returns a double, and 10^24 cannot be represented exactly as a 64-bit double, so the cast to SqlDecimal sees an already-inexact value. It's simplest to remove the floating point arithmetic entirely. While it may not be efficient, this is a simple way of avoiding the problem:
static SqlDecimal PowerOfTen(int power)
{
    // Note: only works for non-negative power values at the moment!
    // (To handle negative input, divide by 10 on each iteration instead.)
    SqlDecimal result = 1;
    for (int i = 0; i < power; i++)
    {
        result = result * 10;
    }
    return result;
}
If you then change the line to:
mantissa += temp * PowerOfTen(pwr);
then you'll get the results you expect - at least while pwr is greater than zero. It should be easy to fix PowerOfTen to handle negative values as well though.
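For instance, here's a sketch of one way to extend it (untested; for negative powers it divides instead of multiplies):

static SqlDecimal PowerOfTen(int power)
{
    SqlDecimal result = 1;
    for (int i = 0; i < power; i++)
    {
        result = result * 10;  // positive power: multiply up
    }
    for (int i = 0; i > power; i--)
    {
        result = result / 10;  // negative power: divide down
    }
    return result;
}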
Update
Amending the below method to just work with Parse and ToString should improve performance for larger numbers (which would be the general use case for these types):
public static SqlDecimal ToSqlDecimal(this BigInteger bigint)
{
    return SqlDecimal.Parse(bigint.ToString());
}
This trick also works for the double returned by the original Math.Pow call; so you could just do:
SqlDecimal.Parse(string.Format("{0:0}",Math.Pow(10,24)))
Original Answer
Obviously @JonSkeet's answer is best, as it only involves 24 iterations, vs potentially thousands in my attempt. However, here's an alternate solution, which may help out in other scenarios where you need to convert large integers (i.e. System.Numerics.BigInteger) to SqlDecimal, or where performance is less of a concern.
Fiddle Example
//using System.Data.SqlTypes;
//using System.Numerics; //also needs an assembly reference to System.Numerics.dll
public static class BigIntegerExtensions
{
    public static SqlDecimal ToSqlDecimal(this BigInteger bigint)
    {
        SqlDecimal result = 0;
        var longMax = (SqlDecimal)long.MaxValue; //cache the converted value to minimise conversions
        var longMin = (SqlDecimal)long.MinValue;
        while (bigint > long.MaxValue)
        {
            result += longMax;
            bigint -= long.MaxValue;
        }
        while (bigint < long.MinValue)
        {
            result += longMin;
            bigint -= long.MinValue;
        }
        return result + (SqlDecimal)(long)bigint;
    }
}
For your above use case, you could use this like so (uses the BigInteger.Pow method):
mantissa += temp * BigInteger.Pow(10, pwr).ToSqlDecimal();
I am using BigInteger.Parse(some string) but it takes forever and I'm not even sure if it finishes.
However, I can convert the huge string to a byte array and jam that byte array into a BigInteger constructor in very little time, but it munges the original number stored in the string because of the endianness issue between BigInteger and byte arrays.
Is there a way to convert the string to a byte array and put the byte array into the BigInteger object while preserving the original number stored in ASCII in the string?
String s = "12345"; // Some huge string, millions of digits.
BigInteger bi = new BigInteger(Encoding.ASCII.GetBytes(s); // very fast but the 12345 is lost
// OR...
BigInteger bi = BigInteger.Parse(s); // Takes forever therefore unuseable.
The byte[] representation of a BigInteger has little to do with the ASCII characters, much like the byte representation of an int has little to do with its ASCII representation.
To parse the number, each character must be converted to the digit value, and added to the previously parsed value multiplied by 10. That is probably why it's taking so long, and any version you write will probably not perform better. It has to do something like:
var nr = 0;
foreach (var c in "123") nr = nr * 10 + (c - '0');
Edit
While it is not possible to perform the conversion by just converting to a byte array, the library implementation is slower than it has to be (at least for simple scenarios that don't need internationalization, for example). Using the trick suggested by Rudy Velthuis in the comments, and not taking hex formats or internationalization into account, I was able to produce a version that runs ~5 times faster for 303104 characters (from 18.2 s down to 3.75 s). For 1 million digits the fast method takes 47 s - long, but it is a huge number:
//using System.Linq;
//using System.Numerics;
public static class Helper
{
    // Cache 10^0 .. 10^18, the largest powers needed for 18-digit groups.
    static BigInteger[] factors = Enumerable.Range(0, 19).Select(i => BigInteger.Pow(10, i)).ToArray();

    public static BigInteger ParseFast(string str)
    {
        var result = new BigInteger(0);
        var n = str.Length;
        var hasSgn = str[0] == '-';
        int j;
        for (var i = hasSgn ? 1 : 0; i < n; i += j - i)
        {
            // Accumulate up to 18 digits in a long (no overflow possible),
            // then fold the whole group into the BigInteger in one multiply-add.
            long gr = 0;
            for (j = i; j < i + 18 && j < n; j++)
            {
                gr = gr * 10 + (str[j] - '0');
            }
            result = result * factors[j - i] + gr;
        }
        if (hasSgn)
        {
            result = BigInteger.MinusOne * result;
        }
        return result;
    }
}
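For example, a quick (hypothetical) check that it agrees with the library parser:

//using System;
//using System.Linq;
//using System.Numerics;
var digits = string.Concat(Enumerable.Repeat("1234567890", 30)); // a 300-digit test string
Console.WriteLine(Helper.ParseFast(digits) == BigInteger.Parse(digits)); // True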
I'm a C# newbie learning how to work with arrays. I wrote a small console app that converts binary numbers to their decimal equivalents; however, the syntax I've used seems to be causing the app to - at some point - use the Unicode code of each digit character instead of the character's true numeric value, so 1 becomes 49, and 0 becomes 48.
How can I write the app differently to avoid this? Thanks
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace Sandbox
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Key in binary number and press Enter to calculate decimal equivalent");
            string inputString = Console.ReadLine();

            //This is supposed to change the user input into character array - possible issue here
            char[] digitalState = inputString.ToArray();

            int exponent = 0;
            int numberBase = 2;
            int digitIndex = inputString.Length - 1;
            int decimalValue = 0;
            int intermediateValue = 0;

            //Calculates the decimal value of each binary digit by raising two to the power of the
            //position of the digit. The result is then multiplied by the binary digit (i.e. 1 or 0,
            //the "digitalState") to determine whether the result should be accumulated into the
            //final result for the binary number as a whole ("decimalValue").
            while (digitIndex > 0 || digitIndex == 0)
            {
                intermediateValue = (int)Math.Pow(numberBase, exponent) * digitalState[digitIndex]; //The calculation here gives the wrong result, possibly because of the unicode designation vs. true value issue
                decimalValue = decimalValue + intermediateValue;
                digitIndex--;
                exponent++;
            }
            Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, intermediateValue);
            Console.ReadLine();
        }
    }
}
Simply use the following code
for (int i = 0; i < digitalState.Length; i++)
{
    digitalState[i] = (char)(digitalState[i] - 48);
}
After
char[] digitalState = inputString.ToArray();
Note that the value of a character, for example '1', is different from what it represents. As you already noticed, '1' is equal to ASCII code 49. When you subtract 48 from its value (49), it becomes 1.
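A quick illustration of the difference:

char c = '1';
Console.WriteLine((int)c);   // 49 (the character's code)
Console.WriteLine(c - '0');  // 1  (the digit's numeric value)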
There were two errors: you missed the "-48", and you wrote the intermediate value instead of the result in the last line. Not sure how to underline some parts in the code block ;)
intermediateValue = (int)Math.Pow(numberBase, exponent) * (digitalState[digitIndex] - 48);
//The calculation here gives the wrong result,
//possibly because of the unicode designation vs. true value issue
decimalValue += intermediateValue;
(.....)
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, decimalValue);
@CharlesMager says it all. However, I assume this is a homework assignment. So, as you said, multiplying by the ASCII value is wrong; just subtract '0' (decimal value 48) from the ASCII value.
intermediateValue = (int)Math.Pow(numberBase, exponent)
* ((int)digitalState[digitIndex] - 48);
Your code is ugly; there is no need to go backwards through the string. Also, using Math.Pow is inefficient: shifting (<<) is equivalent for binary powers.
long v = 0;
foreach (var c in inputString)
{
    v = (v << 1) + ((int)c - 48);
}
Console.WriteLine("The decimal equivalent of {0} is {1}", inputString, v);
Console.ReadLine();
I have this program that gets all the numbers from a double variable, removes decimal marks and minuses, and adds every digit separately. Here it is:
static void Main(string[] args)
{
    double input = double.Parse(Console.ReadLine());
    char[] chrArr = input.ToString().ToCharArray();
    input = 0;
    foreach (var ch in chrArr)
    {
        string somestring = Convert.ToString(ch);
        int someint = 0;
        bool z = int.TryParse(somestring, out someint);
        if (z == true)
        {
            input += (ch - '0');
        }
    }
}
The problem is that when I enter, for example, "9999999999999999999999999999999..." and so on, it gets represented as 1.0E+254, so my program just adds 1+0+2+5+4 and finishes. Is there an efficient way to make this work properly? I tried using string instead of double, but it works too slowly.
You can't store "9999999999999999999999999999999..." as a double - a double only has 15 or 16 digits of precision. The compiler is giving you the closest double it can represent to what you're asking, which is 1E254.
I'd look into why using string was slow, or use BigInteger
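For instance, a minimal BigInteger sketch (assuming whole-number input with no decimal mark; the minus sign is handled before summing):

//using System;
//using System.Linq;
//using System.Numerics;
BigInteger input = BigInteger.Parse(Console.ReadLine());
if (input < 0) input = -input;                 // drop the minus sign
int sum = input.ToString().Sum(c => c - '0');  // every digit is preserved
Console.WriteLine(sum);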
As other answers indicate, what's stored will not be exactly the digits entered, but will be the closest double value that can be represented.
If you want to inspect all of its digits, though, use F0 as the format string.
char[] chrArr = input.ToString("F0").ToCharArray();
You can store a larger number in a Decimal, as it is a 128-bit number compared to the 64 bits of a Double.
But there is obviously still a limit.
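For reference:

Console.WriteLine(decimal.MaxValue); // 79228162514264337593543950335 (29 digits)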
What I am looking for is something like PHP's decbin function in C#. That function converts a number to its binary representation as a string.
For example, decbin(21) returns 10101 as the result.
I found this function which basically does what I want, but maybe there is a better / faster way?
var result = Convert.ToString(number, 2);
– Almost the only use for the (otherwise useless) Convert class.
Most ways will be better and faster than the function that you found. It's not a very good example of how to do the conversion.
The built in method Convert.ToString(num, base) is an obvious choice, but you can easily write a replacement if you need it to work differently.
This is a simple method where you can specify the length of the binary number:
public static string ToBin(int value, int len) {
    return (len > 1 ? ToBin(value >> 1, len - 1) : null) + "01"[value & 1];
}
It uses recursion, the first part (before the +) calls itself to create the binary representation of the number except for the last digit, and the second part takes care of the last digit.
Example:
Console.WriteLine(ToBin(42, 8));
Output:
00101010
int toBase = 2;
string binary = Convert.ToString(21, toBase); // "10101"
To have the binary value in (at least) a specified number of digits, padded with zeroes:
string bin = Convert.ToString(1234, 2).PadLeft(16, '0');
The Convert.ToString does the conversion to a binary string.
The PadLeft adds zeroes to fill it up to 16 digits.
This is my answer:
static bool[] Dec2Bin(int value)
{
    if (value == 0) return new[] { false };
    // Position of the highest set bit determines the array length.
    var n = (int)(Math.Log(value) / Math.Log(2));
    var a = new bool[n + 1];
    for (var i = n; i >= 0; i--)
    {
        n = (int)Math.Pow(2, i);
        if (n > value) continue; // bit i is not set
        a[i] = true;             // bit i is set
        value -= n;              // remove its contribution
    }
    Array.Reverse(a); // most significant bit first
    return a;
}
It uses Pow instead of modulo and divide, so I think it's a faster way.
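A usage sketch (assuming using System.Linq for the Select):

bool[] bits = Dec2Bin(21);
Console.WriteLine(string.Concat(bits.Select(b => b ? '1' : '0'))); // 10101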