Get decimals of a double? - C#

I have a function that adds one double to another, but I need to add only to the digits after the decimal point, and the number of those digits varies with the size of the number.
public double Calculate(double x, double add)
{
    string xstr;
    if (x >= 100)
        xstr = x.ToString("000.000", NumberFormatInfo.InvariantInfo);
    else if (x >= 10)
        xstr = x.ToString("00.0000", NumberFormatInfo.InvariantInfo);
    else
        xstr = x.ToString("0.00000", NumberFormatInfo.InvariantInfo);
    string decimals = xstr.Remove(0, xstr.IndexOf(".") + 1);
    decimals = (Convert.ToDouble(decimals) + add).ToString();
    xstr = xstr.Substring(0, xstr.IndexOf(".") + 1) + decimals;
    x = Convert.ToDouble(xstr, NumberFormatInfo.InvariantInfo);
    return x;
}
I'm wondering whether there is a simpler way to do this without having to convert the number to a string first and then add to the decimal part.
As you can see, the number being added to should always have six significant digits, wherever the decimal separator falls.

If you take the remainder of the value divided by 1, you'll get the fractional portion of that number:
double remainder = someDouble % 1;
To write the whole method out, it's as simple as:
public double Calculate(double x, double add)
{
    return Math.Floor(x) + (x + add) % 1;
}
(This is one of those times where you're glad that % computes the remainder, rather than the modulus. This will work as is for negative numbers as well.)
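As a quick illustration (my own addition, not part of the original answer), the % operator in C# returns a remainder whose sign follows the dividend, which is the behaviour referred to above:
double posFrac = 2.5 % 1;    // 0.5
double negFrac = -2.5 % 1;   // -0.5 (remainder; a true modulus would give 0.5)
Console.WriteLine("{0} {1}", posFrac, negFrac);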

A little more elegant and much faster:
public static double Calculate(double x, double add)
{
    // Scale so the fractional part becomes an integer, assuming six significant digits overall.
    var pow = 5 - Math.Truncate(Math.Log10(x));
    var multiplier = Math.Pow(10, pow);
    var decimals = Math.Truncate((x % 1) * multiplier) + add;
    x = Math.Truncate(x) + Math.Truncate(decimals) / multiplier;
    return x;
}
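As a quick sanity check (my own example, assuming the six-significant-digit convention from the question), with x = 12.5 the scale works out to 10^4, so add lands on the fourth decimal place:
// pow = 5 - Truncate(Log10(12.5)) = 4, so multiplier = 10000
// decimals = Truncate(0.5 * 10000) + 1 = 5001, giving 12 + 5001 / 10000
Console.WriteLine(Calculate(12.5, 1));   // 12.5001
Note that Math.Log10 returns negative infinity for 0 and NaN for negative input, so those cases would need separate handling.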

So all you want to do is add the fractional part of each value? Why don't you just do this?
public double Calculate(double x, double y)
{
    double fractional_x = x - Math.Floor(x);
    double fractional_y = y - Math.Floor(y);
    return fractional_x + fractional_y;
}

Here is yet another version:
public double Calculate(double x, double add)
{
    return (x - (int)x) + (add - (int)add);
}


C# Decimal Precision Improvement in Powers and Fibonacci

I am trying to compute Fibonacci numbers for both negative and large indices and came up with the following code and algorithm. I am certain the algorithm works, but the issue I am having is that for very large numbers the precision of the result is incorrect. Here is the code:
public class Fibonacci
{
    public static BigInteger fib(int n)
    {
        decimal p = (decimal) (1 + Math.Sqrt(5)) / 2;
        decimal q = (decimal) (1 - Math.Sqrt(5)) / 2;
        decimal r = (decimal) Math.Sqrt(5);
        Console.WriteLine("n: {0} p: {1}, q: {2}, t: {3}",
            n,
            p,
            q,
            (Pow(p, n) - Pow(q, n)) / r);
        return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
    }

    public static decimal Pow(decimal x, int y)
    {
        if (y < 0)
            return 1 / Pow(x, -1 * y);
        else if (y == 0)
            return 1;
        else if (y % 2 == 0)
        {
            decimal z = Pow(x, y / 2);
            return z * z;
        }
        else if (y % 2 == 1)
            return Pow(x, y - 1) * x;
        else
            return 1;
    }
}
Small values of n work fine, but if we take a large number like -96, I get a result of -51680708573203484173 while the real number is -51680708854858323072. I checked that the rounding was OK, but it appears somewhere along the way my result is losing precision and not keeping its value correctly. I thought using decimal would solve this precision issue (I previously used double), but that did not work.
Where in my code am I incorrectly missing precision or is there another issue with my code I am misdiagnosing?
Try this.
public static BigInteger Fibonacci(int n)
{
    // Fast doubling: a = F(k), b = F(k + 1).
    BigInteger a = 0;
    BigInteger b = 1;
    for (int i = 31; i >= 0; i--)
    {
        // F(2k) = F(k) * (2 * F(k + 1) - F(k)),  F(2k + 1) = F(k)^2 + F(k + 1)^2
        BigInteger d = a * (b * 2 - a);
        BigInteger e = a * a + b * b;
        a = d;
        b = e;
        // If the current bit of n is set, advance one index: (a, b) -> (b, a + b).
        if ((((uint)n >> i) & 1) != 0)
        {
            BigInteger c = a + b;
            a = b;
            b = c;
        }
    }
    return a;
}
Good Luck!
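A usage note of my own: the routine above expects n >= 0, so for the negative indices in the question you would apply the identity F(-n) = (-1)^(n+1) * F(n) first:
BigInteger f96 = Fibonacci(96);   // 51680708854858323072
BigInteger fMinus96 = -f96;       // F(-96) = -F(96), because 96 is even
Console.WriteLine(fMinus96);      // -51680708854858323072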
As you wrote, decimal has approximately 28 decimal digits of precision. However, Math.Sqrt(5), being a double, does not.
Using a more accurate square root of 5 enables this algorithm to stay exact for longer, though of course it is still limited by precision eventually, just later.
public static BigInteger fib(int n)
{
    decimal sqrt5 = 2.236067977499789696409173668731276235440618359611525724270m;
    decimal p = (1 + sqrt5) / 2;
    decimal q = (1 - sqrt5) / 2;
    decimal r = sqrt5;
    return (BigInteger) (Decimal.Round((Pow(p, n) - Pow(q, n)) / r));
}
This way fib(96) = 51680708854858323072, which is correct. However, it becomes wrong again at n = 128.

Adding % sign without affecting my calculation

I'm doing the following calculation in C# to get a percentage: 0.697 * 100.
In that same formula I'd like to add a % sign as a plain text character, without it affecting the calculation.
There must be a way to include that sign so the value stays 100 with a literal % appended, rather than being interpreted as 100%.
Adding the % character to a percentage is something you do when presenting it to the user. So, you add it when displaying the info:
double perc = 0.697 * 100;
Console.WriteLine(string.Format("{0} %", perc));
Why not just return a string then?
public string Calculate(double x, double y)
{
    double result = x * y;
    string returnResult = $"{result}%";
    return returnResult;
}
or, with String.Format:
public string Calculate(double x, double y)
{
    double result = x * y;
    string returnResult = string.Format("{0}%", result);
    return returnResult;
}
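For completeness (not part of the original answers), .NET also has the standard "P" format specifier, which multiplies by 100 and appends the culture's percent symbol in one step, so the manual * 100 is not needed; the exact symbol and spacing depend on the culture:
double ratio = 0.697;
// "P1" = percent format, one decimal digit; CultureInfo lives in System.Globalization
Console.WriteLine(ratio.ToString("P1", CultureInfo.InvariantCulture)); // e.g. "69.7 %"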

Why does this sin(x) function in C# return NaN instead of a number?

I have this function written in C# to calculate sin(x). But when I try x = 3.14, the printed result of sin(x) is NaN (not a number), even though when debugging the value is very near 0.001592653.
The value is neither too big nor too small, so how can NaN appear here?
static double pow(double x, int mu)
{
    if (mu == 0)
        return 1;
    if (mu == 1)
        return x;
    return x * pow(x, mu - 1);
}

static double fact(int n)
{
    if (n == 1 || n == 0)
        return 1;
    return n * fact(n - 1);
}

static double sin(double x)
{
    var s = x;
    for (int i = 1; i < 1000; i++)
    {
        s += pow(-1, i) * pow(x, 2 * i + 1) / fact(2 * i + 1);
    }
    return s;
}

public static void Main(String[] param)
{
    try
    {
        while (true)
        {
            Console.WriteLine("Enter x value: ");
            double x = double.Parse(Console.ReadLine());
            var sinX = sin(x);
            Console.WriteLine("Sin of {0} is {1}: ", x, sinX);
            Console.ReadLine();
        }
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message);
    }
}
It fails because both pow(x, 2 * i + 1) and fact(2 * i + 1) eventually return Infinity.
In my case, it's when x = 4, i = 256.
Note that pow(x, 2 * i + 1) = 4^513, roughly 7.2 × 10^308 - a stupidly large number which is just over the max value of a double, approximately 1.79769313486232 × 10^308.
You might be interested in just using Math.Sin(x).
Also note that fact(2 * i + 1) = 513! = an even more ridiculously large number, more than 10^1000 times larger than the estimated number of atoms in the observable universe.
When x == 3.14 and i == 314, you get Infinity:
?pow(-1, 314)
1.0
?pow(x, 2 * 314 + 1)
Infinity
? fact(2 * 314 + 1)
Infinity
The problem here is an understanding of the floating point representation of 'real' numbers.
Double values, while allowing a large range, only have a precision of 15 to 17 decimal digits.
In this example we are calculating a value between -1 and 1.
We calculate the value of the sin function using its series expansion, which is basically a sum of terms. In that expansion the terms become smaller and smaller as we go along.
When the terms reach a value less than 1e-17, adding them to what is already there makes no difference. This is because we only have 52 bits of mantissa precision, which are used up by the time we get to a term of less than 1e-17.
So instead of always running 1000 iterations, you should do something like this:
static double sin(double x)
{
    var s = x;
    for (int i = 1; i < 1000; i++)
    {
        var term = pow(x, 2 * i + 1) / fact(2 * i + 1);
        if (term < 1e-17)
            break;
        s += pow(-1, i) * term;
    }
    return s;
}
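The early break fixes x = 3.14, but for larger x the pow and fact calls can still overflow to Infinity individually before the term gets small, giving Infinity / Infinity = NaN again. A common workaround (my own sketch, not from the original answer) is to build each term from the previous one, so neither the power nor the factorial is ever computed on its own:
static double sinIncremental(double x)
{
    double term = x;   // first series term is x itself
    double sum = x;
    for (int i = 1; i < 1000; i++)
    {
        // next term = previous term * (-x^2) / ((2i) * (2i + 1))
        term *= -x * x / ((2 * i) * (2 * i + 1));
        if (Math.Abs(term) < 1e-17)
            break;
        sum += term;
    }
    return sum;
}
For inputs far from zero it also helps to reduce x modulo 2*PI before summing, but that goes beyond the original question.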

Dividing BigIntegers to return a double

I want to calculate the slope of a line.
public sealed class Point
{
    public System.Numerics.BigInteger X = 0;
    public System.Numerics.BigInteger Y = 0;

    public double CalculateSlope(Point point)
    {
        // BigInteger division truncates, so this loses the fractional part of the slope.
        return (double)((point.Y - this.Y) / (point.X - this.X));
    }
}
I know that BigInteger has a DivRem function that returns the division result plus the remainder, but I'm not sure how to apply it to get a double. The numbers I'm dealing with are far, far beyond the range of Int64.MaxValue, so the remainder itself could be out of range to handle with conventional division.
EDIT:
Not sure if it helps but I'm dealing with only positive integers (>=1).
IMPORTANT: I only need a few decimal places of precision (5 should be good enough for my purpose).
Get BigRational from CodePlex. It's part of Microsoft's Base Class Library, so it's a work-in-progress for .NET. Once you have that, do something like:
System.Numerics.BigInteger x = GetDividend();
System.Numerics.BigInteger y = GetDivisor();
BigRational r = new BigRational(x, y);
double value = (double) r;
Dealing with the inevitable overflow/underflow/loss of precision is, of course, another problem.
Since you can't drop the BigRational library into your code, evidently, the other approach would be to get out the right algorithms book and roll your own...
The easy way, of course, of "rolling your own" here, since a rational number is represented as the ratio (division) of two integers, is to grab the explicit conversion-to-double operator from the BigRational class and tweak it to suit. It took me about 15 minutes.
About the only significant modification I made is in how the sign of the result is set when the result is a positive or negative zero/infinity. While I was at it, I converted it to a BigInteger extension method for you:
public static class BigIntExtensions
{
    public static double DivideAndReturnDouble(this BigInteger x, BigInteger y)
    {
        // The Double value type represents a double-precision 64-bit number with
        // values ranging from -1.79769313486232e308 to +1.79769313486232e308.
        // Values that do not fit into this range are returned as +/-Infinity.
        if (SafeCastToDouble(x) && SafeCastToDouble(y))
        {
            return (Double)x / (Double)y;
        }

        // Kick it old-school and figure out the sign of the result.
        bool isNegativeResult = ((x.Sign < 0 && y.Sign > 0) || (x.Sign > 0 && y.Sign < 0));

        // Scale the numerator to preserve the fraction part through the integer division.
        BigInteger denormalized = (x * s_bnDoublePrecision) / y;
        if (denormalized.IsZero)
        {
            return isNegativeResult ? BitConverter.Int64BitsToDouble(unchecked((long)0x8000000000000000)) : 0d; // underflow to -/+0
        }

        Double result = 0;
        bool isDouble = false;
        int scale = DoubleMaxScale;

        while (scale > 0)
        {
            if (!isDouble)
            {
                if (SafeCastToDouble(denormalized))
                {
                    result = (Double)denormalized;
                    isDouble = true;
                }
                else
                {
                    denormalized = denormalized / 10;
                }
            }
            result = result / 10;
            scale--;
        }

        if (!isDouble)
        {
            return isNegativeResult ? Double.NegativeInfinity : Double.PositiveInfinity;
        }
        else
        {
            return result;
        }
    }

    private const int DoubleMaxScale = 308;
    private static readonly BigInteger s_bnDoublePrecision = BigInteger.Pow(10, DoubleMaxScale);
    private static readonly BigInteger s_bnDoubleMaxValue = (BigInteger)Double.MaxValue;
    private static readonly BigInteger s_bnDoubleMinValue = (BigInteger)Double.MinValue;

    private static bool SafeCastToDouble(BigInteger value)
    {
        return s_bnDoubleMinValue <= value && value <= s_bnDoubleMaxValue;
    }
}
The BigRational library has a conversion operator to double.
Also, remember to return infinity as a special case for a vertical line; otherwise you'll get a divide-by-zero exception with your current code. It's probably best to calculate X1 - X2 first and return infinity if it is zero, then do the division, to avoid redundant operations; see the sketch below.
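Putting that together, here is a minimal sketch (mine, assuming the DivideAndReturnDouble extension above is in scope and using the X and Y members from the question's Point class):
public double CalculateSlope(Point point)
{
    BigInteger run = point.X - this.X;
    if (run.IsZero)
        return double.PositiveInfinity;   // vertical line: no finite slope

    BigInteger rise = point.Y - this.Y;
    return rise.DivideAndReturnDouble(run);
}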
This does not deal with negative numbers, but hopefully it gives you a start.
double doubleMax = double.MaxValue;
BigInteger numerator = 120;
BigInteger denominator = 50;
if (denominator != 0)
{
    Debug.WriteLine(numerator / denominator);
    Debug.WriteLine(numerator % denominator);
    BigInteger ansI = numerator / denominator;
    if (ansI < (BigInteger)doubleMax)
    {
        double slope = (double)ansI + ((double)(numerator % denominator) / (double)denominator);
        Debug.WriteLine(slope);
    }
}
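Since the question only needs about five decimal places, another option (a sketch of mine, not from the answers above) is to scale the numerator before the integer division and divide the scale back out afterwards; this works as long as the scaled quotient itself still fits in a double:
public static double DivideToDouble(BigInteger numerator, BigInteger denominator)
{
    // Multiply by 10^5 so the integer quotient carries five fractional digits.
    BigInteger scaled = (numerator * BigInteger.Pow(10, 5)) / denominator;
    return (double)scaled / 1e5;
}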

How do I round doubles in human-friendly manner in C#?

In my C# program I have a double obtained from some computation, and its value is something like 0,13999 or 0,0079996, but this value has to be presented to a human, so it's better displayed as 0,14 or 0,008 respectively.
So I need to round the value, but have no idea to which precision - I just need to "throw away those noise digits".
How could I do that in my code?
To clarify - I need to round the double values to a precision that is unknown at compile time - this needs to be determined at runtime. What would be a good heuristic to achieve this?
You seem to want to output a value which is not very different from the input value, so try increasing numbers of digits until a given relative error is achieved:
static double Round(double input, double errorDesired)
{
    if (input == 0.0)
        return 0.0;
    for (int decimals = 0; decimals < 17; ++decimals)
    {
        var output = Math.Round(input, decimals);
        var errorAchieved = Math.Abs((output - input) / input);
        if (errorAchieved <= errorDesired)
            return output;
    }
    return input;
}
static void Main(string[] args)
{
    foreach (var input in new[] { 0.13999, 0.0079996, 0.12345 })
    {
        Console.WriteLine("{0} -> {1} (.1%)", input, Round(input, 0.001));
        Console.WriteLine("{0} -> {1} (1%)", input, Round(input, 0.01));
        Console.WriteLine("{0} -> {1} (10%)", input, Round(input, 0.1));
    }
}
private double PrettyRound(double inp)
{
    string d = inp.ToString();
    d = d.Remove(0, d.IndexOf(',') + 1);
    int decRound = 1;
    bool onStartZeroes = true;
    for (int c = 1; c < d.Length; c++)
    {
        if (!onStartZeroes && d[c] == d[c - 1])
            break;
        else
            decRound++;
        if (d[c] != '0')
            onStartZeroes = false;
    }
    inp = Math.Round(inp, decRound);
    return inp;
}
Test:
double d1 = 0.13999;     // no zeroes
double d2 = 0.0079996;   // zeroes
double d3 = 0.00700956;  // zeroes within decimal
Response.Write(d1 + "<br/>" + d2 + "<br/>" + d3 + "<br/><br/>");
d1 = PrettyRound(d1);
d2 = PrettyRound(d2);
d3 = PrettyRound(d3);
Response.Write(d1 + "<br/>" + d2 + "<br/>" + d3 + "<br/><br/>");
Prints:
0,13999
0,0079996
0,00700956
0,14
0,008
0,007
This rounds your numbers as you wrote in your example.
I can think of a solution, though it isn't very efficient...
My assumption is that a number is in its "best" human-readable format when extra digits make no difference to how it is rounded.
E.g. in the example of 0,13999, rounding it to increasing numbers of decimal places gives:
0
0.1
0.14
0.14
0.14
0.13999
I'd suggest that you could loop through and detect that stable patch and cut off there.
This method seems to do this:
public double CustomRound(double d)
{
    double currentRound = 0;
    int stability = 0;
    int roundLevel = 0;
    while (stability < 3)
    {
        roundLevel++;
        double current = Math.Round(d, roundLevel);
        if (current == currentRound)
        {
            stability++;
        }
        else
        {
            stability = 1;
            currentRound = current;
        }
    }
    return Math.Round(d, roundLevel);
}
This code might be cleanable but it does the job and is a sufficient proof of concept. :)
I should emphasise that the initial assumption (no change when rounding further) is the criterion we are looking at, which means that something like 0.3333333333 will not get rounded at all. With the examples given I'm unable to say whether this is correct or not, but I assume that if this is a double-precision issue, the problem is the very slight variation between the "right" value and the value as stored in a double.
Here's what I tried:
public decimal myRounding(decimal number)
{
    double log10 = Math.Log10((double)number);
    int precision = (int)(log10 >= 0 ? 0 : Math.Abs(log10)) + (number < 0.01m ? 1 : 2);
    return Math.Round(number, precision);
}
test:
Console.WriteLine(myRounding(0.0000019999m)); //0.000002
Console.WriteLine(myRounding(0.0003019999m)); //0.0003
Console.WriteLine(myRounding(2.56777777m)); //2.57
Console.WriteLine(myRounding(0.13999m)); //0.14
Console.WriteLine(myRounding(0.0079996m)); //0.008
You can do it without converting to a string. This is what I came up with quickly:
private static double RoundDecimal(double number)
{
    double temp2 = number;
    int temp, counter = 0;
    do
    {
        temp2 = 10 * temp2;
        temp = (int)temp2;
        counter++;
    } while (temp < 1);
    return Math.Round(number, counter < 2 ? 2 : counter);
}
or
private static double RoundDecimal(double number)
{
    int counter = 0;
    if (number > 0)
    {
        counter = Math.Abs((int)Math.Log10(number)) + 1;
        return Math.Round(number, counter < 2 ? 2 : counter);
    }
    return number;
}
After giving it more thought, I did the following and it looks like it does what I want so far.
I iterate over the number of digits and compare Round(value, number) and Round(value, number + 1). Once they are equal (not with ==, of course; I compare the difference against a small number), then number is the number of digits I'm looking for; see the sketch below.
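A sketch of that idea (my own code, since the original poster did not include theirs), with a small guard so values like 0,0079996 do not collapse to 0 at zero decimal places:
static double RoundToStableDigits(double value)
{
    const double epsilon = 1e-12;
    for (int digits = 0; digits < 15; digits++)
    {
        double here = Math.Round(value, digits);
        double more = Math.Round(value, digits + 1);
        // Accept the first precision where one more digit changes nothing,
        // skipping the trivial plateau where everything still rounds to zero.
        if (Math.Abs(here - more) < epsilon && here != 0.0)
            return here;
    }
    return value;
}
With this, 0,13999 comes out as 0,14 and 0,0079996 as 0,008.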
Double.ToString() can take a format string as an argument. This will display as many digits as you require, rounding at the decimal place. E.g.:
double Value = 1054.32179;
MessageBox.Show(Value.ToString("0.000"));
Will display "1054.322".
See the documentation on standard and custom numeric format strings for the available formats.
You can specify the number of digits with the Math.Round function:
double doubleValue = 4.052102;
double rounded = Math.Round(doubleValue, 2);
This will return 4.05 as your required answer.
This is tested code; can you explain how I am wrong, so I know what to change?
