Compare two doubles with different numbers of decimal places in C#

I want to compare two doubles a and b in C# (where, for example, b has more decimal places), such that if I round b to the number of decimal places of a, the two should compare as equal. Example:
double a = 0.123;
double b = 0.1234567890;
should be the same.
double a = 0.123457;
double b = 0.123456789;
should be the same.
I cannot write
if (Math.Abs(a - b) < eps)
because I don't know how to calculate the tolerance eps.

If I have understood what you want, you could just shift the digits left past the decimal point until the "smaller" number (i.e. the one with fewer significant figures) is an integer, then compare:
i.e. in some class...
static bool comp(double a, double b)
{
    // Shift both numbers left one decimal digit at a time
    // until at least one of them has no fractional part left.
    while ((a - (int)a) > 0 && (b - (int)b) > 0)
    {
        a *= 10;
        b *= 10;
    }
    // Truncate any remaining fractional digits and compare.
    a = (int)a;
    b = (int)b;
    return a == b;
}
Edit
Clearly, calling (int)x on a double is asking for trouble, since a double can store far bigger numbers than an int can hold. This is better:
while ((a - Math.Floor(a)) > 0 && (b - Math.Floor(b)) > 0)
//...
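For completeness, here is a sketch of the corrected routine built on Math.Floor, with a guard against the loop running forever when neither value ever becomes exactly integral (which can happen in binary floating point); the cap of 15 shifts is an assumption based on double's roughly 15 significant decimal digits:
static bool Comp(double a, double b)
{
    // Stop after 15 shifts: beyond double's ~15 significant decimal
    // digits the fractional part may never become exactly zero.
    for (int i = 0; i < 15 && (a - Math.Floor(a)) > 0 && (b - Math.Floor(b)) > 0; i++)
    {
        a *= 10;
        b *= 10;
    }
    return Math.Floor(a) == Math.Floor(b);
}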

double has a built-in constant, double.Epsilon, or you can set a preferred error tolerance by hardcoding it, e.g. 0.00001.
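A minimal sketch of that idea, assuming a hardcoded tolerance is acceptable (note that double.Epsilon is the smallest positive double, so it is usually far too small to serve as a practical tolerance):
static bool NearlyEqual(double a, double b, double eps = 0.00001)
{
    // Treat the values as equal when they differ by less than eps.
    return Math.Abs(a - b) < eps;
}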

Related

Dividing two numbers always returns 0 [duplicate]

How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also an int). Decimal, by contrast, has a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to Decimals.
You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence they are not truncated and you get the correct result.
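A quick sketch of the truncation, which is worth knowing about: C# integer division truncates toward zero, not toward negative infinity:
Console.WriteLine(7 / 2);  // 3
Console.WriteLine(-7 / 2); // -3 (truncated toward zero, not floored to -4)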
The following line:
int a = 1, b = 2;
object result = a / b;
...will be performed using integer arithmetic. Decimal.Divide on the other hand takes two parameters of the type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent of this:
int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;
To examine this, you can add the following code lines after each of the above examples:
Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());
The output in the first case will be
0
System.Int32
...and in the second case:
0,5
System.Decimal
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4/5 is treated as integer division and returns 0.
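A quick sketch of the difference:
Console.WriteLine(4 / 5);                // 0   (integer division)
Console.WriteLine(Decimal.Divide(4, 5)); // 0.8 (ints implicitly converted to decimal)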
You want to cast the numbers:
double c = (double)a/(double)b;
Note: if either of the operands in C# is a double, double division is used, which results in a double. So the following would work too:
double c = (double)a/b;
Here is a small program:
static void Main(string[] args)
{
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt16(Console.ReadLine());
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse);
    // Count the positive, negative and zero entries.
    foreach (int i in arr)
    {
        if (i > 0) a++;
        else if (i < 0) b++;
        else c++;
    }
    // Cast to double to force floating-point division.
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
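For example, given the input 5 followed by -4 3 -9 0 4, the program prints 0.4, 0.4 and 0.2: the fractions of positive, negative and zero entries, each computed with a floating-point division.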
In my case, none of the above worked.
What I want to do is divide 278 by 575 and multiply by 100 to find a percentage.
double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);
%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0
If I multiply PeopleCount by 1.0 it becomes a double, and the division gives 48.34...
Also multiply by 100.0, not 100.
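A minimal sketch of that approach; the names PeopleCount and AllPeopleCount follow the snippet above:
int PeopleCount = 278;
int AllPeopleCount = 575;
// Multiplying by 1.0 promotes the ints to double before dividing.
double p = PeopleCount * 1.0 / AllPeopleCount * 100.0;
Console.WriteLine(p); // 48.3478260869565...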
If you are looking for an answer in the range 0 < a < 1, int / int will not suffice, since it performs integer division. Try casting one of the ints to a double inside the operation.
The accepted answer is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.
I would not do a better job than Wikipedia at explaining the concepts, so I will just provide the pointers:
floating-point arithmetic
decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places of accuracy. This is generally impossible if the input/source data is in base 10 but we perform the arithmetic in base 2, because the number of digits required for the expansion of a number depends on the base: one third takes infinitely many digits to express in base 10 (0.333333...), but only one digit in base 3 (0.1).
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and preferred whenever you want to minimize rounding error (as in scientific applications).
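A small sketch of the difference in practice:
double d = 0.1 + 0.2;
decimal m = 0.1m + 0.2m;
Console.WriteLine(d == 0.3);  // False: 0.1 and 0.2 have no exact base-2 representation
Console.WriteLine(m == 0.3m); // True:  decimal stores base-10 digits exactly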

How to round the precision digits for double?

I'm working with double in C#. A double has a precision of ~15-17 digits.
In my program, the user enters 0.011 - 0.001, and the result shows 0.0099999999999999985 --> that's ugly in the user's eyes; they will question my math ability.
I'm okay with the result being 0.0099999999999999985 internally, but when displaying it I want to find a way to fix the precision digits. For example:
double a = 0.011;
double b = 0.001;
double result, display;
result = a - b; // result = 0.0099999999999999985
display = ConvertForDisplay(result); // I want display = 0.01
I see that the built-in Android app "Calculator" on a Samsung smartphone (e.g. Galaxy Tab SM-T295) also uses double (because when I enter 10^309 it gives an error), but when I enter 0.011 - 0.001 it properly shows me 0.01.
I thought about Math.Round(double value, int digits), but digits must be in the range [0,15], while I also need to display numbers like 1E-200.
I've searched around the internet, but I only found questions about how to deal with floating-point accuracy problems (e.g. don't use ==, but use Math.Abs(a-b) < double.Epsilon, Math.Round, ...). I can't find how to round the precision digits (e.g. the value 0.0099999999999999985 has the precision digits 99999999999999985; when those digits contain more than 15 "9"s, they should be rounded for display).
So what is the proper way to fix this problem (e.g convert 0.0099999999999999985 into 0.01)?
Extra: when I enter 9999999999999999d, C# gives me 1E+16. I know this happens because C# sees that all the precision digits it can hold are 9s, so it rounds up to 10. Is there any way I can make it keep all the 9s? The Android "Calculator" app mentioned above shows the same behavior, so I count this as extra - not a must-fix, but I'd like to know whether it's fixable, just for fun.
Update:
I see the behavior: (1.021 - 1.01) = 0.010999999999999899
But (1.021 - 1.01) + 1 = 1.011
So when I add 1, double triggers its "internal rounding" feature (I guess) and rounds to the number I want. Maybe this could lead to my solution?
Here's another interesting discovery: casting the value to float also gives the number I want, e.g. (float)(1.021 - 1.01) = 0.011
I re-thought the issue: how about using decimal as a tool to fix this problem? The idea is to use decimal to do the math (when the operands fit decimal's range), then cast the decimal back to double. Here is the code:
const int LEFT = 0;
const int RIGHT = 1;

private static bool IsConvertableToDecimal(double value)
{
    const double MAX = (double)decimal.MaxValue;
    const double MIN = (double)decimal.MinValue;
    const double EXP_MIN = -1E-28;
    const double EXP_MAX = 1E-28;
    return
        (value >= EXP_MAX && value < MAX) ||
        value == 0 ||
        (value > MIN && value <= EXP_MIN);
}

private static double BasicFunc(double[] arr,
    Func<double, double, double> funcDouble,
    Func<decimal, decimal, decimal> funcDecimal)
{
    var result = funcDouble(arr[LEFT], arr[RIGHT]);
    // Check whether we can convert to decimal for better accuracy.
    if (IsConvertableToDecimal(result) &&
        IsConvertableToDecimal(arr[LEFT]) &&
        IsConvertableToDecimal(arr[RIGHT]))
        result = (double)funcDecimal((decimal)arr[LEFT], (decimal)arr[RIGHT]);
    return result;
}

public static double ADD(double[] arr)
    => BasicFunc(arr, (x, y) => x + y, decimal.Add);
public static double SUBTRACT(double[] arr)
    => BasicFunc(arr, (x, y) => x - y, decimal.Subtract);
public static double MULT(double[] arr)
    => BasicFunc(arr, (x, y) => x * y, decimal.Multiply);
public static double DIV(double[] arr)
    => BasicFunc(arr, (x, y) => x / y, decimal.Divide);

//Usage ================
void Test()
{
    var arr = new double[2];
    arr[0] = 0.011;
    arr[1] = 0.001;
    var result = SUBTRACT(arr);
    Console.WriteLine(result); // result = 0.01 ==> Success!
}
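As a simpler display-only alternative, here is a sketch assuming 15 significant digits is the precision you want to show: the "G15" format specifier rounds away the noise digits while still handling values like 1E-200, because it controls significant digits rather than decimal places:
double result = 0.011 - 0.001;               // 0.0099999999999999985 internally
Console.WriteLine(result.ToString("G15"));   // 0.01
Console.WriteLine((1E-200).ToString("G15")); // 1E-200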

Why result of % operator for double differ from decimal? [duplicate]

Consider this:
double x, y;
x = 120.0;
y = 0.05;
double z = x % y;
I tried this and expected the result to be 0, but it came out 0.04933333.
However,
x = 120.0;
y = 0.5;
double z = x % y;
did indeed give the correct result of 0.
What is happening here?
I tried Math.IEEERemainder(double, double) but it's not returning 0 either. What is going on here?
Also, as an aside, what is the most appropriate way to find remainder in C#?
Because of its storage format, a double cannot store every value exactly as it is entered or displayed. Humans usually represent numbers in decimal format, while doubles are based on the binary system.
In a double, 120 is stored precisely because it's an integer value. But 0.05 is not: the double is approximated by the closest number to 0.05 it can represent. 0.5 is a power of 2 (1/2), so it can be stored precisely and you don't get a rounding error.
To have all numbers behave exactly the way you enter / display them in the decimal system, use decimal instead.
decimal x, y;
x = 120.0M;
y = 0.05M;
decimal z = x % y; // z is 0
You could do something like:
double a, b, r;
a = 120;
b = .05;
r = a - Math.Floor(a / b) * b;
This should help ;)
I believe if you tried the same with decimal it would work properly.
http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems can help you understand why you get these "strange" results. Floating-point numbers have only a limited precision. Just try these queries and have a look at the results:
0.5 in base 2
0.05 in base 2
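A small sketch to see this from C# itself; the "G17" round-trip format exposes the digits a double actually stores:
Console.WriteLine(0.5.ToString("G17"));  // 0.5 (exact: 1/2 is a power of two)
Console.WriteLine(0.05.ToString("G17")); // 0.050000000000000003 (inexact in base 2)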
Modulus should only be used with integers. The remainder comes from a Euclidean division. With double, you can get unexpected results.
See this article
This is what we use.. :)
public double ModuloOf(double v1, double v2)
{
    var mult = 0;
    // Find the number of decimals in the divisor.
    while (v2 % 1 > 0)
    {
        mult++;
        v2 = v2 * 10;
    }
    // Scale both operands up to integers, take the remainder, then scale back down.
    v1 = v1 * Math.Pow(10, mult);
    var rem = v1 % v2;
    return rem / Math.Pow(10, mult);
}
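A usage sketch with the values from the question:
var r = ModuloOf(120.0, 0.05); // 0: the operands are scaled up to 12000 and 5 first
Console.WriteLine(r);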

Math.Floor on the whole number

I am using the Math.Floor method to find out how many times number b fits into number a. In this concrete example, the variables have these values:
double a = 1.2;
double b = 0.1;
double c = Math.Floor(a / b) * b;
This returns c = 11 instead of c = 12 as I thought it would. I guess it has something to do with rounding, but how can I get it to work properly? When I raise number a to 1.21, it returns 12.
double and float are internally represented in a way that lacks precision for many numbers, which means that a may not be exactly 1.2 (but, say, 1.199999999...) and b not exactly 0.1.
If you want exact precision for quantities that do not have a margin of error, like money, use decimal.
Have a look here for how Math.Floor works... basically, it rounds down.
My guess would be that with doubles you don't really get 12 from the division (even though this seems to work: double d = a / b; // d = 12).
Solal pointed to the same, and gave good advice about decimal :)
Here's what happens with decimals:
decimal a = 1.2m;
decimal b = 0.1m;
decimal c = Math.Floor(a / b); // c = 12
The result of your code is 1.1.
I am assuming you want to get 1.2.
You need to use
double c = Math.Ceiling(a / b) * b;

Round doubles operations

Suppose I have three doubles, a, b and c.
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
c = 9.740827669535655129
I want to work with numbers with only 5 decimal places. So if I round a and b using Math.Round(a, 5) and Math.Round(b, 5) I get:
double a_r = Math.Round(a, 5);
double b_r = Math.Round(b, 5);
a_r = 1.23456
b_r = 7.89012
double c_r = a_r * b_r;
c_r = 9.7408265472
But when I calculate c, I still get a number with more than 5 decimal places (this will happen with every multiplication, division, exponentiation and similar operation). I could round all the results in my code, but that's tedious work I want to avoid.
As I use c in other operations, and the results of those operations in yet others, I don't want to have to round every intermediate result each time just to stop the error from the undesired decimal places from propagating.
Is there a way to define doubles with a fixed number of decimal places, independently of the operation?
Typically, it's best to leave the doubles in place and use custom formatting to display the values to 5 decimal places:
double a = 1.234560123;
double b = 7.890120123;
double c = a * b;
Console.WriteLine("Result = {0:N5}", c);
Nearly all routines that convert numeric values into strings allow the use of the Standard Numeric Format Strings as well as Custom Numeric Format Strings.
You can't define a double with a limited number of decimal places. You should rely on formatting the number when you display it. See this question.
I found a way to solve my problem using operator overloading.
So I re-defined all my operations, such as multiplication, division, complex multiplication and matrix operations, to round the result to the number of decimal places I wanted.
An example, using a hypothetical wrapper struct (operators can only be overloaded on user-defined types, so the overload cannot be declared on double itself):
public readonly struct RoundedDouble
{
    public readonly double Value;
    public RoundedDouble(double value) => Value = Math.Round(value, 5);

    // Multiplication rounds its result to 5 decimal places.
    public static RoundedDouble operator *(RoundedDouble d1, RoundedDouble d2)
        => new RoundedDouble(d1.Value * d2.Value);
}
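A usage sketch with the numbers from the question (RoundedDouble is the hypothetical wrapper above):
var a = new RoundedDouble(1.234560123); // stored as 1.23456
var b = new RoundedDouble(7.890120123); // stored as 7.89012
var c = a * b;                          // 9.74083 (product rounded to 5 places)
Console.WriteLine(c.Value);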
