Math.Ceiling() on an expression when dividing and multiplying by the same number - C#

I have a little trouble with Math.Ceiling() in C#.
I call it on the result of dividing by a number and then multiplying by the same number, e.g. 20 000 / 184 * 184. I would expect the result to be 20 000, but it is 20 001. Is there any way to avoid this behavior when rounding up a value?
Thank you in advance

When running the code you supplied, we have the following:
twentyThousand/oneEightyFour * oneEightyFour
The answer is 20000.000000000000000000000001
Hence, when you take the ceiling, you get 20001.
Based on the article below, I think the result is due to the inaccuracy introduced when performing the division, which yields 108.69565217391304347826086957. As Jon states:
As a very broad rule of thumb, if you end up seeing a very long string representation (ie most of the 28/29 digits are non-zero) then chances are you've got some inaccuracy along the way.
http://csharpindepth.com/Articles/General/Decimal.aspx
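A minimal sketch of what is presumably going on, assuming the question's values are decimal (the variable names are mine):
using System;

class CeilingDemo
{
    static void Main()
    {
        decimal twentyThousand = 20000m;
        decimal oneEightyFour = 184m;

        // The quotient cannot be represented exactly in decimal's 28-29 significant
        // digits, so it is rounded up ever so slightly.
        decimal quotient = twentyThousand / oneEightyFour;   // 108.69565217391304347826086957

        // Multiplying back by 184 therefore lands just above 20000...
        decimal roundTrip = quotient * oneEightyFour;        // 20000.000000000000000000000001

        // ...and Math.Ceiling pushes that up to 20001.
        Console.WriteLine(Math.Ceiling(roundTrip));          // 20001
    }
}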

As light pointed out in the comments, you shouldn't be getting 20001 at all.
With integer division, 20000 / 184 yields 108, which then gives you 19872 when multiplied by 184.
Somewhere you are doing something other than what you posted. Where is Math.Ceiling() even called?
I will say that if the numbers are hard-coded, you can add a decimal point in the code and the compiler will treat them as floating-point literals. If you are using variables, be sure they are declared as some floating-point type (decimal, double, or float), depending on the accuracy needed.
Console.WriteLine(20000 / 184 * 184); // 19872
Console.WriteLine((20000.0 / 184.0 * 184.0)); // 20000

Is there any way to avoid this behavior when rounding up a value?
In this particular case you can avoid the problem by multiplying first, then dividing:
result = (20000m * 184m) / 184m;
Since the precision is lost in the division, multiplying first prevents that imprecision from being magnified when you multiply back.
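To illustrate the difference between the two orderings (a short sketch; the variable names are mine):
using System;

decimal divideFirst   = 20000m / 184m * 184m;    // 20000.000000000000000000000001
decimal multiplyFirst = (20000m * 184m) / 184m;  // 20000 (3 680 000 divides evenly by 184)

Console.WriteLine(Math.Ceiling(divideFirst));    // 20001
Console.WriteLine(Math.Ceiling(multiplyFirst));  // 20000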

Related

Computations in C# as accurately as with Windows Calculator

When I do the following double multiplication in C#, 100.0 * 1.005, I get 100.49999999999999 as a result. I believe this is because the exact number (or some intermediate result when evaluating the expression) cannot be represented. When I do the same computation in calc.exe I get 100.5 as expected.
Another example is incrementing by 0.001 nine times (the first time a deviation occurs), so basically 9d * 0.001d = 0.0090000000000000011. When I do the same computation in calc.exe I get 0.009 as expected.
Now one could argue that I should choose decimal instead. But with decimal I get problems with other computations, for example ((1M / 3M) * 3M) = 0.9999999999999999999999999999, while calc.exe says 1.
With calc.exe I can divide 1 by 3 several times down to some very small number, then multiply by 3 the same number of times, and I get exactly 1 back. I therefore suspect that calc.exe computes internally with fractions, but obviously with very big ones, because it computes
(677605234775492641 / 116759166847407000) + (932737194383944703 / 2451942503795547000)
where the common denominator is -3422539506717149376 (an overflow occurred) when computed as a long, so it must be at least ulong. Does anybody know how computation in calc.exe is implemented? Is this implementation made public somewhere for reuse?
As described here, calc uses an arbitrary-precision engine for its calculations, while double is standard IEEE-754 arithmetic, and decimal is also floating-point arithmetic, just in decimal, which, as you point out, has the same problems, just in another base.
You can try finding such an arbitrary-precision arithmetic library for C# and use it, e.g. this one (no idea whether it's good; was the first result). The one inside calc is not available as an API, so you cannot use it.
Another point is that when you round the result to a certain number of places (less than 15), you'd also get the intuitively "correct" result in a lot of cases. C# already does some rounding to hide the exact value of a double from you (where 0.3 is definitely not exactly 0.3, but probably something like 0.30000000000000004). By reducing the number of digits you display you lessen the occurrence of such very small differences from the correct value.
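As a quick illustration of the rounding point (a sketch; the exact console output for a raw double can differ between .NET runtimes, which use different default formatting):
using System;

double product = 100.0 * 1.005;
Console.WriteLine(product.ToString("R"));    // 100.49999999999999 (the value actually stored)
Console.WriteLine(Math.Round(product, 2));   // 100.5 (rounding to 2 places hides the tiny error)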

Calculating the total amount of change given

Hello, I'm currently following the Computing with C# and the .NET Framework book and I'm having difficulty with one of the exercises, which is:
Write a C# program to make change. Enter the cost of an item that is less than one dollar. Output
the coins given as change, using quarters, dimes, nickels, and pennies. Use the fewest coins
possible. For example, if the item cost 17 cents, the change would be three quarters, one nickel,
and three pennies
Since I'm still trying to grasp C# programming, the best method I came up with uses a while loop.
while (costOfItem >= 0.50)
{
    costOfItem -= 0.50;
    fiftyPence++;
}
I have one of these loops for each of the coin values: 20, 10, 5, etc.
I check whether the amount is greater than or equal to 50 pence; if so, I subtract 50 pence from the amount given by the user and add 1 to the fiftyPence variable.
Then it moves on to the next while loop, one for each coin value. The problem is that somewhere along the line one of the loops subtracts, let's say, 20 pence, and costOfItem becomes something like "0.1999999999999"; then it never drops down to 0, which it should in order to give the correct amount of change.
Any help is appreciated, please don't suggest over complex procedures that I have yet covered.
Never use double or float for money operations. Use Decimal.
For all other calculation-accuracy problems you have to use epsilon-based comparison (see Double.Epsilon) for equality, greater than, less than, less than or equal to, and greater than or equal to.
If you do the calculation in cents, you can use integers, and then you don't run into floating-point rounding problems.
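For example, here is a minimal sketch of the change-making exercise done entirely in integer cents (the names and structure are mine, not from the book):
using System;

// Work in whole cents so every subtraction is exact.
int change = 83;                        // e.g. change from a dollar for a 17-cent item
int[] coinValues = { 25, 10, 5, 1 };    // quarters, dimes, nickels, pennies
int[] coinCounts = new int[coinValues.Length];

for (int i = 0; i < coinValues.Length; i++)
{
    while (change >= coinValues[i])
    {
        change -= coinValues[i];
        coinCounts[i]++;
    }
}

Console.WriteLine(string.Join(", ", coinCounts));   // 3, 0, 1, 3 (three quarters, one nickel, three pennies)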
Sounds to me like you are using float or double as the datatype for costOfItem.
float and double store their values in binary. But not all decimal values have an exact representation in binary. Therefore, a very close approximation is stored. This is why you get values like "0.1999999999999".
For money calculations, you should always use decimal, since they avoid those small inaccuracies.
You can read more about the difference in Jon Skeet's excellent answer to Difference between Decimal, Float and Double in .NET?
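To see the difference concretely (a small sketch, not from the original answer):
using System;

double d = 1.0;
d -= 0.8;
Console.WriteLine(d.ToString("R"));   // something like 0.19999999999999996 (the binary approximation shows through)

decimal m = 1.0m;
m -= 0.8m;
Console.WriteLine(m);                 // 0.2 (decimal keeps the value exact)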
Thank you all for the fast replies. The issue was that I used double instead of decimal; the links provided have people explaining why it's wrong to do so in some cases. Double should be avoided for this kind of arithmetic since some fractional numbers do not have an exact binary representation, so instead of 0.23 it gives 0.2299999999999. Changing the variable to decimal fixed my issue.

Fast lookup suffering from floating point inaccuracies

Suppose I have equally spaced doubles (64 bit floating point numbers) x0,x1,...,xn. Equally spaced means that for all i, x(i+1) - xi is constant; call it w for width.
Given a number y in the range [x0,xn] I want to find the largest i such that xi <= y.
A naive approach would visit each i in turn (O(n)). Marginally better is to use a binary search (O(log n)).
A constant time lookup would be to calculate (y-x0)/w and cast it to an integer. However, this will occasionally give the wrong result due to floating point inaccuracy. E.g. Suppose there are 100 intervals of width 0.01 starting at 0.
(int)(0.29/0.01) = 28 //want 29 here
Can I retain the constant-time lookup but ensure that the results are always identical to the binary search? Performing the calculation with decimals rather than doubles for 'w' and 'x0' seems to work here, but will it always work? I could always follow the direct lookup with a comparison against the xs on either side, but this seems ugly and inefficient.
To clarify - I am given the xi and the value y as doubles - I cannot change this. But any intermediate calculation performed before returning the integer index can use any datatypes I like. Additionally, I can perform one-off "preparation" work in order to make the runtime calculation faster.
Edit: Apologies - turns out that I didn't check "equally spaced" properly - these numbers are often not "equally spaced" when their difference is calculated using floating point arithmetic.
Do the following:
Calculate (int)(0.29 / 0.01), which gives 28 here (you want 29).
Next, calculate i * 0.01 back for i from 28 - 1 to 28 + 1 and pick the one that is correct.
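A minimal sketch of that idea, assuming the points really are xi = x0 + i * w and the initial guess is off by at most one (the method name and parameters are mine):
using System;

class Lookup
{
    // Constant-time guess, then nudge by at most one index to correct
    // any floating-point error in the division.
    static int IndexOf(double y, double x0, double w, int n)
    {
        int i = (int)((y - x0) / w);            // initial guess, may be off by one

        // Clamp to the valid index range [0, n].
        if (i < 0) i = 0;
        if (i > n) i = n;

        // If the guess overshot, step down; if the next point still fits, step up.
        if (i > 0 && x0 + i * w > y) i--;
        if (i < n && x0 + (i + 1) * w <= y) i++;

        return i;   // largest i with x0 + i * w <= y, under the stated assumptions
    }

    static void Main()
    {
        Console.WriteLine(IndexOf(0.29, 0.0, 0.01, 100));   // 29
    }
}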
What do you mean by equally spaced? If you can make some assumptions about the numbers, for example that they increase over an interval, you can actually use median selection, which is O(1) in the best case and O(log2(N)) in the worst case.

C# loss of precision when dividing doubles

I know this has been discussed time and time again, but I can't seem to get even the simplest example of a one-step division of doubles to produce the expected, unrounded outcome in C# - so I'm wondering if perhaps there's some compiler flag or something else strange I'm not thinking of. Consider this example:
double v1 = 0.7;
double v2 = 0.025;
double result = v1 / v2;
When I break after the last line and examine it in the VS debugger, the value of "result" is 27.999999999999996. I'm aware that I can resolve it by changing to "decimal," but that's not possible in the case of the surrounding program. Is it not strange that two low-precision doubles like this can't divide to the correct value of 28? Is the only solution really to Math.Round the result?
Is it not strange that two low-precision doubles like this can't divide to the correct value of 28?
No, not really. Neither 0.7 nor 0.025 can be exactly represented in the double type. The exact values involved are:
0.6999999999999999555910790149937383830547332763671875
0.025000000000000001387778780781445675529539585113525390625
Now are you surprised that the division doesn't give exactly 28? Garbage in, garbage out...
As you say, the right way to represent decimal numbers exactly is to use decimal. If the rest of your program is using the wrong type, that just means you need to work out which is higher: the cost of getting the wrong answer, or the cost of changing the whole program.
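To see roughly what the double type is actually storing (a sketch; the "G17" format shows the shortest round-trippable digits, not the full exact expansions quoted above):
using System;

double v1 = 0.7;
double v2 = 0.025;
Console.WriteLine(v1.ToString("G17"));          // 0.69999999999999996
Console.WriteLine(v2.ToString("G17"));          // 0.025000000000000001
Console.WriteLine((v1 / v2).ToString("G17"));   // 27.999999999999996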
Precision is always a problem, in case you are dealing with float or double.
It's a known issue in computer science, and every programming language is affected by it. An entire field, numerical analysis, is dedicated to minimizing these sorts of errors, which are mostly related to rounding.
For instance, take the following code. What would you expect? You would expect the sum to be 1, but that is not the case; you will get 0.9999907.
float v = .001f;
float sum = 0;
for (int i = 0; i < 1000; i++)
{
    sum += v;
}
It has nothing to do with how 'simple' or 'small' the double numbers are. Strictly speaking, neither 0.7 nor 0.025 can be stored as exactly those numbers in computer memory, so performing calculations on them may produce surprising results if you're after high precision.
So yes, use decimal or round.
To explain this by analogy:
Imagine that you are working in base 3. In base 3, 0.1 is (in decimal) 1/3, or 0.333333333...
So you can EXACTLY represent 1/3 (decimal) in base 3, but you get rounding errors when trying to express it in decimal.
Well, you can get exactly the same thing with some decimal numbers: They can be exactly expressed in decimal, but they CAN'T be exactly expressed in binary; hence, you get rounding errors with them.
Short answer to your first question: No, it's not strange. Floating-point numbers are discrete approximations of the real numbers, which means that rounding errors will propagate and scale when you do arithmetic operations.
There's a whole field of mathematics called numerical analysis that basically deals with how to minimize the errors when working with such approximations.
It's the usual floating point imprecision. Not every number can be represented as a double, and those minor representation inaccuracies add up. It's also a reason why you should not compare doubles to exact numbers. I just tested it, and result.ToString() showed 28 (maybe some kind of rounding happens in double.ToString()?). result == 28 returned false though. And (int)result returned 27. So you'll just need to expect imprecisions like that.
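Since exact equality is the underlying trap here, a common workaround is to compare within a small tolerance, or to round before comparing (a sketch; the tolerance value is a judgment call for your data, not a fixed rule):
using System;

double result = 0.7 / 0.025;   // 27.999999999999996
const double tolerance = 1e-9;

Console.WriteLine(result == 28);                        // False
Console.WriteLine(Math.Abs(result - 28) < tolerance);   // True
Console.WriteLine(Math.Round(result));                  // 28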

Is a double really unsuitable for money?

I always say that in C# a variable of type double is not suitable for money; all sorts of weird things can happen. But I can't seem to create an example that demonstrates some of these issues. Can anyone provide such an example?
(edit; this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal).
(edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic.)
Very, very unsuitable. Use decimal.
double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false
(example from Jon's page here - recommended reading ;-p)
You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.
Here's a concrete example:
using System;

class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
Yes it's unsuitable.
If I remember correctly, double has about 17 significant digits, so normally rounding errors take place far behind the decimal point. Most financial software uses 4 digits behind the decimal point; that leaves 13 digits to work with, so the maximum number you can handle in single operations is still much higher than the US national debt. But rounding errors will add up over time. If your software runs for a long time, you'll eventually start losing cents. Certain operations make this worse; for example, adding large amounts to small amounts causes a significant loss of precision.
You need fixed-point datatypes for money operations; most people don't mind if you lose a cent here and there, but accountants aren't like most people.
edit
According to this page, http://msdn.microsoft.com/en-us/library/678hzkk9.aspx, doubles actually have 15 to 16 significant digits, not 17.
@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant digits. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or hundredths of a cent, as I've seen used), like Boojum mentions, are actually better suited.
Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10^1, whereas a double would represent it as something like 104857 / 2^20 (in reality it is 3602879701896397 / 2^55).
A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
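To contrast with the double example further up (a small sketch):
using System;

decimal x = 0.1m;
decimal y = x + x + x;
Console.WriteLine(y == 0.3m);   // True (0.1 is exact in decimal)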
My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.
IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
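A quick way to see where that exact-integer range ends (a sketch):
using System;

double limit = 9007199254740992;          // 2^53
Console.WriteLine(limit + 1 == limit);    // True (2^53 + 1 is not representable; it rounds back down)
Console.WriteLine(limit - 1 == limit);    // False (integers below 2^53 are still exact)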
Using double without knowing what you are doing is what's unsuitable.
"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.
But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
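For instance, .NET's Math.Round lets you apply either of the two common midpoint conventions explicitly (a sketch; I've used 98.145 rather than the 98.135 above so that the two rules actually give different answers, and which rule is correct is dictated by the business rules, not by the language):
using System;

Console.WriteLine(Math.Round(98.145m, 2));                                  // 98.14 (banker's rounding to even, the default)
Console.WriteLine(Math.Round(98.145m, 2, MidpointRounding.AwayFromZero));   // 98.15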
Depending on where you live, using 64-bit integers to represent cents, pennies, kopeks, or whatever the smallest unit in your country is will usually work just fine. For example, 64-bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32-bit integers are usually unsuitable.
No, a double will always have rounding errors; use decimal if you're on .NET.
Actually floating-point double is perfectly well suited to representing amounts of money as long as you pick a suitable unit.
See http://www.idinews.com/moneyRep.html
So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.
Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.
