Double to Decimal without rounding after 15 digits - C#

When converting a "high" precision Double to a Decimal, I lose precision with Convert.ToDecimal or a cast to (Decimal), due to rounding.
Example:
double d = -0.99999999999999956d;
decimal viaConvert = Convert.ToDecimal(d); // Result = -1
decimal viaCast = (Decimal)d;              // Result = -1
The Decimal value returned by Convert.ToDecimal(double) contains a maximum of 15 significant digits. If the value parameter contains more than 15 significant digits, it is rounded using rounding to nearest.
So, in order to keep my precision, I have to convert my double to a string and then call Convert.ToDecimal(String):
decimal result = System.Convert.ToDecimal(d.ToString("G20")); // Result = -0.99999999999999956
This method works, but I would like to avoid the intermediate string. Is there a way to convert a Double to a Decimal without rounding after 15 digits?

One possible solution is to decompose d as the exact sum of n doubles, the last of which is small and contains all the trailing significant digits that you desire when converted to decimal, and the first (n-1) of which convert exactly to decimal.
For the source double d between -1.0 and 1.0:
decimal t = 0M;
bool b = d < 0;
if (b) d = -d;
if (d >= 0.5) { d -= 0.5; t = 0.5M; }
if (d >= 0.25) { d -= 0.25; t += 0.25M; }
if (d >= 0.125) { d -= 0.125; t += 0.125M; }
if (d >= 0.0625) { d -= 0.0625; t += 0.0625M; }
t += Convert.ToDecimal(d);
if (b) t = -t;
Test it on ideone.com.
Note that the operations d -= are exact, even if C# computes the binary floating-point operations at a higher precision than double (which it allows itself to do).
This is cheaper than a conversion from double to string, and provides a few additional digits of accuracy in the result (four bits of accuracy for the above four if-then-elses).
Remark: if C# did not allow itself to do floating-point computations at a higher precision, a good trick would have been to use Dekker splitting to split d into two values d1 and d2 that would convert each exactly to decimal. Alas, Dekker splitting only works with a strict interpretation of IEEE 754 multiplication and addition.
Another idea is to use C#'s version of frexp to obtain the significand s and exponent e of d, and to compute (Decimal)((long) (s * 4503599627370496.0d)) * <however one computes 2^e in Decimal>.
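A minimal sketch of that idea (the method name is mine; C# has no frexp, so the significand and exponent are pulled from the raw IEEE 754 bits instead; NaN and infinity are not handled, each halving beyond decimal's 28-29 significant digits rounds once, and exponents outside decimal's range overflow or flush to zero):
static decimal ToDecimalViaBits(double d)
{
    long bits = BitConverter.DoubleToInt64Bits(d);
    bool negative = bits < 0;
    int biasedExp = (int)((bits >> 52) & 0x7FF);
    long mantissa = bits & 0x000FFFFFFFFFFFFFL;
    if (biasedExp == 0) biasedExp = 1;       // subnormal: no implicit leading bit
    else mantissa |= 1L << 52;               // normal: restore the implicit bit
    int exp = biasedExp - 1075;              // |d| == mantissa * 2^exp
    decimal result = mantissa;
    while (exp < 0) { result /= 2; exp++; }  // exact until 28-29 digits are exceeded
    while (exp > 0) { result *= 2; exp--; }  // overflows decimal for very large d
    return negative ? -result : result;
}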

There are two approaches: one which will work for values below 2^63, and another which will work for values larger than 2^53.
Split smaller values into whole-number and fractional parts. The whole-number part may be precisely cast to long and then Decimal [note that a direct cast to Decimal may not be precise!] The fractional part may be precisely multiplied by 9007199254740992.0 (2^53), converted to long and then Decimal, and then divided by 9007199254740992.0m. Adding the result of that division to the whole-number part should yield a Decimal value which is within one least-significant-digit of being correct [it may not be precisely rounded, but will still be far better than the built-in conversions!]
For larger values, multiply by (1.0/281474976710656.0) (2^-48), take the whole-number part of that result, multiply it back by 281474976710656.0, and subtract it from the original result. Convert the whole-number results from the division and the subtraction to Decimal (they should convert precisely), multiply the former by 281474976710656m, and add the latter.
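A hedged sketch of both approaches (the method names and range assumptions are mine, not the poster's exact code; both assume a non-negative d within decimal's range):
// For 0 <= d < 2^63: split into whole and fractional parts.
static decimal SmallToDecimal(double d)
{
    double whole = Math.Truncate(d);
    double frac = d - whole;                        // exact subtraction
    decimal result = (decimal)(long)whole;          // exact cast via long
    // Scaling by 2^53 turns the fraction's leading bits into an integer;
    // bits below 2^-53 are lost, leaving the result within ~1 ulp.
    result += (decimal)(long)(frac * 9007199254740992.0) / 9007199254740992.0m;
    return result;
}

// For d >= 2^53 (d is then necessarily an integer): split around 2^48.
static decimal LargeToDecimal(double d)
{
    double upper = Math.Truncate(d * (1.0 / 281474976710656.0)); // d / 2^48
    double lower = d - upper * 281474976710656.0;                // exact remainder
    return (decimal)(long)upper * 281474976710656.0m + (decimal)(long)lower;
}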

Related

Dividing two numbers always returns 0 [duplicate]

How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also int!). Decimal, by contrast, has got a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to Decimals.
You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence they are not truncated and you get the correct result.
The following code:
int a = 1, b = 2;
object result = a / b;
...will be performed using integer arithmetic. Decimal.Divide, on the other hand, takes two parameters of type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent to this:
int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;
To examine this, you can add the following code lines after each of the above examples:
Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());
The output in the first case will be
0
System.Int32
...and in the second case:
0,5
System.Decimal
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4/5 is treated as integer division and returns 0.
You want to cast the numbers:
double c = (double)a/(double)b;
Note: If any of the arguments in C# is a double, a double divide is used which results in a double. So, the following would work too:
double c = (double)a/b;
Here is a small program:
static void Main(string[] args)
{
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt16(Console.ReadLine());
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse);
    // Count the positive, negative and zero entries.
    foreach (int i in arr)
    {
        if (i > 0) a++;
        else if (i < 0) b++;
        else c++;
    }
    // Cast to double so the fractions are not truncated to 0.
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
None of the above worked in my case. What I want to do is divide 278 by 575 and multiply by 100 to find a percentage.
double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);
%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0
If I multiply PeopleCount by 1.0, it becomes a double and the division yields 48.34....
Also multiply by 100.0, not 100.
If you are looking for an answer with 0 < a < 1, int / int will not suffice, since it performs integer division. Try casting one of the ints to a double inside the operation.
The answer marked as such is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.
I would not do a better job explaining the concepts than Wikipedia, so I will just provide the pointers:
floating-point arithmetic
decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2 (because the number of decimal places required for the decimal expansion of a number depends on the base; one third takes infinitely many decimal places to express in base-10 as 0.333333..., but it takes only one decimal in base-3: 0.1).
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and preferred whenever you want to minimize rounding error (as in scientific applications).
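A small illustration of the base-2 versus base-10 point:
double dsum = 0.1 + 0.2;           // 0.30000000000000004: 0.1 has no finite base-2 expansion
decimal msum = 0.1m + 0.2m;        // 0.3 exactly: decimal arithmetic is base-10
Console.WriteLine(dsum == 0.3);    // False
Console.WriteLine(msum == 0.3m);   // True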

Parsing json string number losing minimum precision

I'm working on a C# JSON library (Unity3D-compatible = .NET 2.0) and I'm having precision problems. First, I have this logic for parsing number strings:
...
string jsonPart = "-1.7555215491128452E-19";
long longValue = 0;
if (long.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out longValue))
{
    if (longValue > int.MaxValue || longValue < int.MinValue)
    {
        jsonPartValue = new JsonBasic(longValue);
    }
    else
    {
        jsonPartValue = new JsonBasic((int)longValue);
    }
}
else
{
    decimal decimalValue = 0;
    if (decimal.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out decimalValue))
    {
        jsonPartValue = new JsonBasic(decimalValue);
    }
}
...
The problem arises because the decimal type is not always the best type for such numbers. Here is an output log showing the problem (using .ToString()):
String = "-1.7555215491128452E-19"
Float Parsed : -1.755522E-19
Double parsed : -1.75552154911285E-19
Decimal Parsed : -0.0000000000000000001755521549
But in other cases, decimal gives the right result:
String = "0.1666666666666666666"
Float Parsed : 0.1666667
Double parsed : 0.166666666666667
Decimal Parsed : 0.1666666666666666666
String = "-1.30142114406914976E17"
Float Parsed : -1.301421E+17
Double parsed : -1.30142114406915E+17
Decimal Parsed : -130142114406914976
I suppose there are many other cases that tip the balance toward one type or the other.
Is there any smart way to parse these while losing the minimum precision?
The difference you are seeing is because, although decimal can hold up to 28 or 29 digits of precision compared to double's 15 or 16 digits, its range is much lower than double.
A decimal has a range of (-7.9 x 10^28 to 7.9 x 10^28) / (10^(0 to 28))
A decimal stores ALL the digits, including the leading zeros after the decimal point in a number like 0.00000001 - i.e. it doesn't store numbers using exponential format.
A double has a range of ±5.0 × 10^−324 to ±1.7 × 10^308
A double can store a number using exponential format which means it doesn't have to store the leading zeroes in a number like 0.0000001.
The consequence of this is that for numbers that are at the edges of the decimal range, it actually has less precision than a double.
For example, consider the number -1.7555215491128452E-19:
Converting that to non-exponential notation you get:
-0.00000000000000000017555215491128452
That number has 35 decimal digits, which exceeds the decimal type's 28-29 digits of precision.
As you have observed, when you print that number out after storing it in a decimal, you get:
-0.0000000000000000001755521549
which is giving you only 29 digits, as per Microsoft's specification.
A double, however, stores its numbers using exponential notation which means that it doesn't store all the leading zeroes, which allows it to store that particular number with greater precision.
For example, a double stores -0.00000000000000000017555215491128452 as an exponential number with 15 or 16 digits of precision.
If you take 15 digits of precision from the above number you get:
-0.000000000000000000175552154911285
which is indeed what is printed out if you do this:
double d = -1.7555215491128452E-19;
Console.WriteLine(d.ToString("F35"));
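One heuristic that follows from all this, for the else branch of the question's code (a sketch only; ParseNumber is my name, and it assumes decimal-to-double conversion rounds to nearest): parse the string both ways, and keep the decimal only when it round-trips to the same double, falling back to double otherwise:
static object ParseNumber(string s)
{
    double d = double.Parse(s, NumberStyles.Float, CultureInfo.InvariantCulture);
    decimal m;
    if (decimal.TryParse(s, NumberStyles.Float, CultureInfo.InvariantCulture, out m)
        && (double)m == d)
    {
        return m;  // decimal kept at least as much precision as double
    }
    return d;      // e.g. "-1.7555215491128452E-19", where decimal rounds harder
}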

Getting a precise percent from two Big Integers

This obviously doesn't work.
BigInteger Total = 1000000000000000000000000000000000000000000000000000022234235423534543;
BigInteger Actual = 83450348250384508349058934085;
string Percent = ((Decimal)100.0/Total*Actual).ToString()+"%";
The question is, how to I get my precise percent?
Currently I use..
string sTotal = (task.End - task.Start).ToString();
BigInteger current = task.End;
string sCurrent = (task.End-current).ToString().PadLeft(sTotal.Length, '0');
Int32 maxLength = sCurrent.Length;
if (maxLength > Int64.MaxValue.ToString().Length - 1)
maxLength = Int64.MaxValue.ToString().Length - 1;
UInt64 currentI = Convert.ToUInt64(sCurrent.Substring(0, maxLength));
UInt64 totalI = Convert.ToUInt64(sTotal.Substring(0, maxLength));
Percent = (Decimal)100.0 / totalI * currentI;
Can you suggest better?
You're computing a rational, not an integer, so you should install the Solver Foundation:
http://msdn.microsoft.com/en-us/library/ff524509(v=VS.93).aspx
and use Rational rather than BigInteger:
http://msdn.microsoft.com/en-us/library/ff526610(v=vs.93).aspx
You can then call ToDouble if you want to get the rational as the nearest double.
I need it accurate to 56 decimal places
OK, that is a ridiculous amount of precision, but I'll take you at your word.
Since a double has only 15 decimal places of precision and a decimal only 29, you can't use double or decimal. You're going to have to write the code yourself to do the division.
Here are two ways to do it:
First, write an algorithm that emulates doing long division. You can do it by hand, so you can write a computer program to do it. Keep going until you generate the required number of digits of precision.
Second: WOLOG assume that the rational in question is positive and is of the form x / y where x and y are big integers. Let b be 10^p for a desired precision p. You wish to find the big integer a with the property that:
a * y <= b * x
and
b * x < (a + 1) * y
Either a/b or (a+1)/b is the decimal fraction with p digits closest to x/y.
Make sense?
You can find the value of a by doing a binary search over the set of non-negative BigIntegers.
To do the binary search, first you have to find upper and lower bounds. Lower is easy enough; you know that 0 is a lower bound because by assumption the fraction x/y is positive. To find the upper bound, try 1/b, 10/b, 100/b ... and so on until you find a value that is larger than x/y. Now you have an upper and lower bound, and you can binary search the resulting space to find the exact value of a that makes the inequalities true.
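In fact, since BigInteger division truncates toward zero, for positive x and y you can compute a = floor(b * x / y) directly and skip the binary search. A sketch (PercentString is my name; it truncates rather than rounds to nearest, so the last digit can be off by one):
using System.Numerics;

static string PercentString(BigInteger current, BigInteger total, int p)
{
    BigInteger b = BigInteger.Pow(10, p);        // 10^p
    BigInteger a = 100 * current * b / total;    // floor(100 * current * 10^p / total)
    return (a / b) + "." + (a % b).ToString().PadLeft(p, '0') + "%";
}
Calling PercentString(Actual, Total, 56) on the values from the question would yield the percentage to 56 decimal places.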

Round doubles operations

Suppose I have three doubles, a, b and c.
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
c = 9.740827669535655129
I want to work with numbers with only 5 decimal places. So if I round a and b using Math.Round(a, 5) and Math.Round(b, 5) I get:
double a_r = Math.Round(a, 5);
double b_r = Math.Round(b, 5);
a_r = 1.23456
b_r = 7.89012
double c_r = a_r * b_r;
c_r = 9.7408265472
But when I calculate c, I still get a number with more than 5 decimal places (this happens with every multiplication, division, exponentiation and similar operation). I could round every result in my code, but that's tedious work I want to avoid.
Since I use c in other operations, and the results of those in still others, I don't want to have to round every intermediate result by hand to keep the error from the unwanted decimal places from propagating.
Is there a way to define doubles with a fixed number of decimal places, independently of the operation?
Typically, it's best to leave the doubles in place and use custom formatting to display the values to 5 decimal places:
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
Console.WriteLine("Result = {0:N5}", c);
Nearly all routines that convert numeric values into strings allow the use of the Standard Numeric Format Strings as well as Custom Numeric Format Strings.
You can't define a double with a limited number of decimal places. You should rely on formatting the number when you display it. See this question.
I found a way to solve my problem using operator overloading.
Note that C# does not allow overloading operators for built-in types such as double, so I wrapped my values in a small struct and re-defined the operations I use (multiplication, division, complex multiplication and matrix operations) so that each result is rounded to the number of decimal places I want.
An example:
public struct Rounded
{
    public double Value;

    public Rounded(double v) { Value = Math.Round(v, 5); }

    // The constructor rounds, so every product carries
    // at most 5 decimal places.
    public static Rounded operator *(Rounded a, Rounded b)
    {
        return new Rounded(a.Value * b.Value);
    }
}
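Hypothetical usage, with the values from the question:
Rounded a = new Rounded(1.234560123);
Rounded b = new Rounded(7.890120123);
Rounded c = a * b; // c.Value == 9.74083, already rounded to 5 places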

C#: divide an int by 100

How do I divide an int by 100?
eg:
int x = 32894;
int y = x / 100;
Why does this result in y being 328 and not 328.94?
When one integer is divided by another, the arithmetic is performed as integer arithmetic.
If you want it to be performed as float, double or decimal arithmetic, you need to cast one of the values appropriately. For example:
decimal y = ((decimal) x) / 100;
Note that I've changed the type of y as well - it doesn't make sense to perform decimal arithmetic but then store the result in an int. The int can't possibly store 328.94.
You only need to force one of the values to the right type, as then the other will be promoted to the same type - there's no operator defined for dividing a decimal by an integer, for example. If you're performing arithmetic using several values, you might want to force all of them to the desired type just for clarity - it would be unfortunate for one operation to be performed using integer arithmetic, and another using double arithmetic, when you'd expected both to be in double.
If you're using literals, you can just use a suffix to indicate the type instead:
decimal a = x / 100m; // Use decimal arithmetic due to the "m"
double b = x / 100.0; // Use double arithmetic due to the ".0"
double c = x / 100d; // Use double arithmetic due to the "d"
double d = x / 100f; // Use float arithmetic due to the "f"
As for whether you should be using decimal, double or float, that depends on what you're trying to do. Read my articles on decimal floating point and binary floating point. Usually double is appropriate if you're dealing with "natural" quantities such as height and weight, where any value will really be an approximation; decimal is appropriate with artificial quantities such as money, which are typically represented exactly as decimal values to start with.
328.94 is not an integer. Integer division rounds toward zero; that is how it works.
I suggest you cast to decimal:
decimal y = 32894M / 100;
or with variables:
decimal y = (decimal)x / 100;
Because an int is only a whole number. Try this instead.
int x = 32894;
double y = x / 100.0;
Because you're doing integer division. Add a period behind the 100 and you'll get a double instead.
When you divide two integers, the result is an integer. Integers don't have decimal places, so the fractional part is simply truncated.
It's a programming fundamental that integer division is different from floating-point division.
If you want the .94, use float or double:
var num = 32894F / 100F;
