Crafting a class representing a fraction - C#

What are some of the things I need to consider when designing this object? This is all I can think of:
int numerator
int denominator
int sign
This object can be used in a math operation.

Unless you use unsigned ints and you are sure you don't want the numerator and denominator to carry signs, you should probably get rid of the third member (sign), as it is redundant.
Then it depends on the language you are using: you might want to overload some operators for this class (C++), or implement methods to compute the behaviour, as Rohith said.

Consider the behavior of the class. What operations do you want to be able to perform on this fraction class?
Create it - this means a constructor. So what arguments should the constructor take? 2 ints? 2 ints and a boolean for sign? The ints could carry signs too.
Add two fractions, subtract, multiply, divide - do you want these to be static methods or object methods [ aFraction.Add(anotherFraction) or Fraction.Add(aFraction, anotherFraction) ]. What do these methods return - a Fraction object? a float?
How do you compare two fractions for equality? If you want to do this without breaking encapsulation, make sure you provide an equals method - Java and C# each have a particular signature for the equals method.
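To make that concrete, here is a minimal sketch of one possible shape in C# (the struct choice, the instance-style Add, and all the names are just one option among those above, not a full implementation):

using System;

public struct Fraction : IEquatable<Fraction>
{
    // Signs live on the ints themselves; no separate sign member.
    public int Numerator { get; }
    public int Denominator { get; }

    public Fraction(int numerator, int denominator)
    {
        Numerator = numerator;
        Denominator = denominator;
    }

    // Instance style: aFraction.Add(anotherFraction), returning a Fraction.
    public Fraction Add(Fraction other) =>
        new Fraction(
            Numerator * other.Denominator + other.Numerator * Denominator,
            Denominator * other.Denominator);

    // Cross-multiplication equality treats 6/12 and 1/2 as equal without
    // normalizing first; the cast to long avoids int overflow.
    public bool Equals(Fraction other) =>
        (long)Numerator * other.Denominator == (long)other.Numerator * Denominator;

    public override bool Equals(object obj) => obj is Fraction f && Equals(f);

    // Equal rationals divide to the same double, so this hash is consistent
    // with Equals (a production version would hash the reduced pair instead).
    public override int GetHashCode() => ((double)Numerator / Denominator).GetHashCode();
}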

Consider the following multiplication problem:
2/3 * 3/4. The answer is 6/12, naively. But 1/2, intelligently. You'll need to take this into account to deal with equality.
Now, what about 2000000000/3000000000 * 3/4? If you use 32-bit ints to represent your numerator and denominator, you'll overflow if you do the naive computation first. Of course, if your language supports bignums, this is not so big a problem.
When you're reducing to lowest terms, don't forget to consider the sign of the result -- in general, pick one of the numerator or the denominator to carry the sign when representing negative rationals, and stick with it.
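A sketch of that reduction step (the helper names are my own; the convention here keeps the denominator positive, so the numerator carries the sign):

// Reduce to lowest terms. The denominator stays positive by convention,
// so negative rationals carry their sign on the numerator.
private static (int Num, int Den) Normalize(int numerator, int denominator)
{
    if (denominator < 0)
    {
        numerator = -numerator;
        denominator = -denominator;
    }
    int gcd = Gcd(Math.Abs(numerator), denominator); // int.MinValue not handled here
    return gcd <= 1 ? (numerator, denominator) : (numerator / gcd, denominator / gcd);
}

// Euclid's algorithm.
private static int Gcd(int a, int b)
{
    while (b != 0)
        (a, b) = (b, a % b);
    return a;
}

For the 2000000000/3000000000 * 3/4 case, reduce each operand (and cross-cancel) before multiplying, or do the intermediate products in long, so the naive 32-bit multiplication never happens.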

You may also want a member function that returns the decimal value, and a ToString override so you can print numerator/denominator without extra effort.
There is a "boundary" case of sorts here, too - the denominator can't be zero or your value is undefined. Your constructor and any setters will need to respond to this possibility.
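For instance, tightening the constructor from the sketch above and adding the output members (names are illustrative):

public Fraction(int numerator, int denominator)
{
    // The undefined case: reject a zero denominator up front.
    if (denominator == 0)
        throw new ArgumentException("Denominator must not be zero.", nameof(denominator));
    Numerator = numerator;
    Denominator = denominator;
}

// Decimal value plus an effortless printable form.
public double ToDouble() => (double)Numerator / Denominator;
public override string ToString() => $"{Numerator}/{Denominator}";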

Chapter 2 in Timothy Budd's "Classic Data Structures In C++" has a very nice Rational class in C++. It includes all the points made here, including an implementation of GCD to normalize 6/12 into 1/2. Well worth reading.

Related

.NET - What data type to use for a very big decimal number?

I'm looking for a data type to use in C# .NET for storing a very big decimal number in the range [0, 1), e.g. to 1000 decimal places (as many as possible; the more the better). I will need to use this number for basic mathematical operations (+, -, *, /, <, >). The decimal data type is too small for me. I know about BigInteger, but it is for integers, not decimal numbers and their operations.
Thank you for any help.
There's nothing in the BCL for it.
However, I've found a custom built type that looks quite interesting called BigFloat. I've had a scan through the code and it looks quite good. It takes a BigInteger as the denominator so that should give you much greater accuracy.
It also covers your add, subtract, multiplication and so on. It even goes into square roots, logarithms, etc.
Here it is: https://github.com/Osinko/BigFloat
I'd take a look and see if it fits your purpose, there's an example in the repository.
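If the library doesn't fit, a do-it-yourself alternative that stays within the BCL is fixed-point arithmetic on top of BigInteger: store a value x in [0, 1) as round(x * 10^1000). This is only a sketch under that fixed-scale assumption, and all the names are mine:

using System.Numerics;

static class Fixed1000
{
    // A value x in [0, 1) is stored as round(x * 10^Scale),
    // giving Scale decimal places of precision.
    public const int Scale = 1000;
    public static readonly BigInteger One = BigInteger.Pow(10, Scale);

    // + and - work on the stored values directly, and <, > are plain
    // BigInteger comparisons; only * and / need rescaling.
    public static BigInteger Mul(BigInteger a, BigInteger b) => a * b / One; // drop the extra 10^Scale
    public static BigInteger Div(BigInteger a, BigInteger b) => a * One / b; // pre-scale to keep precision
}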

Convert.ToDecimal(double x) - System.OverflowException

What is the maximum double value that can be represented/converted to a decimal?
How can this value be derived - example please.
Update
Given a maximum value for a double that can be converted to a decimal, I would expect to be able to round-trip the double to a decimal, and then back again. However, given a figure such as (2^52)-1, as in @Jirka's answer, this does not work. For example:
[Test]
public void round_trip_double_to_decimal()
{
    double maxDecimalAsDouble = Math.Pow(2, 52) - 1;
    decimal toDecimal = Convert.ToDecimal(maxDecimalAsDouble);
    double toDouble = Convert.ToDouble(toDecimal);
    // Fails.
    Assert.That(toDouble, Is.EqualTo(maxDecimalAsDouble));
}
All integers between -9,007,199,254,740,992 and 9,007,199,254,740,991 can be exactly represented in a double. (Keep reading, though.)
The upper bound is derived as 2^53 - 1. The internal representation of it is something like (0x1.fffffffffffff * 2^52) if you pardon my hexadecimal syntax.
Outside of this range, many integers can be still exactly represented if they are a multiple of a power of two.
The highest integer whatsoever that can be accurately represented is therefore 9,007,199,254,740,991 * (2^971), i.e. Double.MaxValue, which is far higher than Decimal.MaxValue; but this is a pretty meaningless fact, given that the value does not bother to change when, for example, you subtract 1 in double arithmetic.
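That last point is easy to check: subtracting 1 from a huge double is simply absorbed by rounding.

double huge = double.MaxValue;
Console.WriteLine(huge - 1 == huge); // True: 1 is far below the spacing between adjacent doubles at this magnitude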
Based on the comments and further research, I am adding some info on the .NET and Mono implementations of C# that puts into perspective most of the conclusions you and I might want to draw.
Math.Pow does not seem to guarantee any particular accuracy and it seems to deliver a bit or two fewer than what a double can represent. This is not too surprising with a floating point function. The Intel floating point hardware does not have an instruction for exponentiation and I expect that the computation involves logarithm and multiplication instructions, where intermediate results lose some precision. One would use BigInteger.Pow if integral accuracy was desired.
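If you want to check your own runtime, comparing the two side by side makes any difference visible; BigInteger.Pow is exact by construction:

using System;
using System.Numerics;

double viaDouble = Math.Pow(2, 52) - 1;
BigInteger exact = BigInteger.Pow(2, 52) - 1;
Console.WriteLine(viaDouble.ToString("R")); // whatever your runtime's Math.Pow delivers
Console.WriteLine(exact);                   // 4503599627370495, exact by construction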
However, even (decimal)(double)9007199254740991M results in a round trip violation. This time it is, however, a known bug, a direct violation of Section 6.2.1 of the C# spec. Interestingly I see the same bug even in Mono 2.8. (The referenced source shows that this conversion bug can hit even with much lower values.)
Double literals are rounded less, but still a little: 9007199254740991D prints out as 9007199254740990D. This is an artifact of internal multiplication by 10 when parsing the string literal (before the upper and lower bound converge to the same double value based on the "first zero after the decimal point"). This again violates the C# spec, this time Section 9.4.4.3.
Unlike C, C# has no hexadecimal floating-point literals, so we cannot avoid that multiplication by 10 by any other syntax, except perhaps by going through Decimal or BigInteger, if only these provided accurate conversion operators. I have not tested BigInteger.
The above could almost make you wonder whether C# does not invent its own unique floating point format with reduced precision. No, Section 11.1.6 references 64bit IEC 60559 representation. So the above are indeed bugs.
So, to conclude, you should be able to fit even 9007199254740991M in a double precisely, but it's quite a challenge to get the value in place!
The moral of the story is that the traditional belief that "Arithmetic should be barely more precise than the data and the desired result" is wrong, as this famous article demonstrates (page 36), albeit in the context of a different programming language.
Don't store integers in floating point variables unless you have to.
MSDN Double data type
Decimal vs double
The value of Decimal.MaxValue is positive 79,228,162,514,264,337,593,543,950,335.

Is it safe to compare two floored doubles using the equality operator?

I know you should never compare floating point values using the == equality operator in .NET, but is it safe to do so if the two numbers were floored using Math.Floor?
I am working with a mapping program, and chunks of the map are stored in different "region" files. I can determine what region to retrieve by dividing the world coordinates by 16 and flooring the result, which gets me region coordinates.
I'm essentially asking whether or not two values that have the same whole number portion (e.g. 4.3 and 4.8), once floored, will compare as equal using the == operator.
The general issue with floating-point comparisons is that they can easily accrue rounding error. Take a value like 1.2 (which cannot be represented exactly in binary floating point), multiply it by 100, and compare the result for equality with 120: it may be off by a tiny amount. The recommendation is therefore to compare the difference against a tolerance, like so:
var a = 1.2;
a *= 100;
if (Math.Abs(a - 120) < 0.0001) // compare the difference, not the values
{
    // close enough to treat as equal
}
The Math.Floor operation, however, always results in a whole-number value: the fractional part is discarded by rounding toward negative infinity, leaving an exact integer value (still stored as a double).
So, if your semantics really are to use a floor, you are safe.
However, if you are really trying to round, then use Math.Round() instead.
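Applied to the mapping scenario, it can be worth casting the floored value to an int immediately, so that every later comparison is an exact integer comparison (a sketch; ToRegion is a name I made up):

// World coordinates to region coordinates: divide by 16 and floor.
static int ToRegion(double worldCoordinate) =>
    (int)Math.Floor(worldCoordinate / 16.0);

// 68.8 / 16 = 4.3 and 76.9 / 16 = 4.80625 both floor to region 4:
// ToRegion(68.8) == ToRegion(76.9) // true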
Well, it depends on what you're trying to do.
That will tell you whether the floored values are equal - but if one input was just a smidge under 2, and one input was just a smidge over 2, then they'll be seen as different, despite the difference between them being potentially tiny.
Is that okay for your scenario? In some cases it will be, in some it won't.
I think your question is predicated on a faulty assumption. It's perfectly safe to compare floating-point values using == in .NET. The only odd behavior associated with == and floating-point values is that Double.NaN and Single.NaN, when compared to themselves with ==, will return false (as dictated by the floating point specification).
Using Math.Floor doesn't make this situation any better. If any of the special floating point values (NaN, NegativeInfinity, PositiveInfinity) are passed to Math.Floor they are returned unaltered. So the comparison via == will still have the odd behavior (Reference)
The main effect of using Math.Floor is that more floating-point values will compare equal to each other. For example, 7.1 and 7.5 will be equal after a Math.Floor. That's not inherently any better, but it could be in the context of your application; it's hard to say without more information. Could you provide some more detail here on why you think == is unsafe?
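For completeness, the NaN behaviour described above looks like this:

double nan = Math.Floor(double.NaN);  // NaN passes through Math.Floor unchanged
Console.WriteLine(nan == nan);        // False: NaN compares unequal even to itself
Console.WriteLine(double.IsNaN(nan)); // True: the reliable way to test for NaN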

Why can't C# calculate exact values of mathematical functions

Why can't C# do exact operations?
Math.Pow(Math.Sqrt(2.0), 2) == 2.0000000000000004
I know how doubles work, I know where the rounding error comes from, I know that it's almost the correct value, and I know that you can't store infinitely many digits in a finite double. But why isn't there a way for C# to calculate this exactly, when my calculator can do it?
Edit
It's not about my calculator, I was just giving an example:
http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2
Cheers
Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.
Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.
If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
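You can run a version of that experiment with decimal, which carries 28-29 significant digits; the square of a truncated root lands near 2 but not on it:

// The first 28 significant digits of sqrt(2), truncated.
decimal root = 1.414213562373095048801688724m;
decimal squared = root * root;
Console.WriteLine(squared);       // just under 2, but not exactly 2
Console.WriteLine(squared == 2m); // False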
Your calculator is not calculating it exactly; it's just that the rounding error is so small that it's not displayed.
I believe most calculators use binary-coded decimals, which is the equivalent of C#'s decimal type (and thus is entirely accurate). That is, each byte contains two digits of the number and maths is done via logarithms.
What makes you think your calculator can do it? It's almost certainly displaying fewer digits than it calculates with, and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).
I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.
The square root of 2 is one of those annoying irrational numbers like PI and therefore can't be represented with normal IEEE754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic math where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.
The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify equations (among other things), but most calculators will display only a portion of the real value calculated, after applying a rounding function to the result. For example, a calculator might compute the square root of 2 to (let's say) 54 decimals internally but display only 12 rounded decimals. Taking the square root of 2 and then raising the result to the power of 2 would thus return the same value, since the result is rounded. In any case, unless the calculator can keep an infinite number of decimals, you'll always get a best-approximation result from complex operations.
By the way, try to represent 0.1 in binary and you'll realize that you can't represent it exactly; you'll end up with (something like) 0.000110011001100...
Your calculator has methods that recognize and manipulate irrational input values.
For example, 2^(1/2) is likely not evaluated to a number in the calculator unless you explicitly tell it to do so (as on the TI-89/92).
Additionally, the calculator has logic it can use to manipulate such values, for example x^(1/2) * y^(1/2) = (x*y)^(1/2), where it can then wash, rinse, and repeat that method for working with irrational values.
If you were to give C# some method to do this, I suppose it could as well. After all, algebraic solvers such as Mathematica are not magical.
It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values for mathematical calculations, something the CPU's floating-point hardware does not offer.
The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations as a program you will have simplified them already, or you will only need a numerical result.
2.0000000000000004 and 2.0 are both represented as binary 10.0 (that is, exactly 2) in single precision. In your case, casting to single precision in C# should give the exact answer.
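In C# terms (taking the double result quoted in the question as given):

double d = Math.Pow(Math.Sqrt(2.0), 2); // 2.0000000000000004 on the asker's machine
float f = (float)d;                     // the nearest float is exactly 2
Console.WriteLine(f == 2.0f);           // True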
For your other example, Wolfram Alpha may use higher than machine precision for its calculations. This adds a big performance penalty. For instance, in Mathematica, going to higher precision makes calculations about 300 times slower:
k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]
It's 0.01 seconds vs 3 seconds on my machine.
You can see the difference in results between single precision and double precision by doing something like the following in Java:
public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}
You can see that the single-precision result is exact, whereas the double-precision result is off by one bit.

How do you deal with numbers larger than UInt64 (C#)

In C#, how can one store and calculate with numbers that significantly exceed UInt64's max value (18,446,744,073,709,551,615)?
Can you use the .NET 4.0 beta? If so, you can use BigInteger.
Otherwise, if you're sticking within 28 digits, you can use decimal - but be aware that obviously that's going to perform decimal arithmetic, so you may need to round at various places to compensate.
By using a BigInteger class; there's one in the J# libraries (definitely accessible from C#), another in F# (need to test that one), and there are freestanding implementations, such as this one in pure C#.
What is it that you wish to use these numbers for? If you are doing calculations with really big numbers, do you still need the accuracy down to the last digit?
If not, you should consider using floating-point values instead. They can be huge; the max value for the double type is 1.79769313486231570E+308 (in case you are not used to scientific notation, this means 1.79769313486231570 multiplied by 1 followed by 308 zeros).
That should be large enough for most applications.
BigInteger represents an arbitrarily large signed integer.
using System.Numerics;
var a = BigInteger.Parse("91389681247993671255432112000000");
var b = new BigInteger(1790322312);
var c = a * b;
Decimal has greater range.
There is support for BigInteger in .NET 4.0, but that is still not out of beta.
There are several libraries for computing with big integers, most of the cryptography libraries also offer a class for that. See this for a free library.
Also, do check that you truly need a variable with greater capacity than Int64 and aren't falling foul of C#'s integer arithmetic.
For example, this code will yield an overflow error:
Int64 myAnswer = 20000*1024*1024;
At first glance that might seem to be because the result is too large for an Int64 variable, but actually it's because each of the numbers on the right side of the formula is implicitly typed as Int32, so the temporary memory space reserved for the result of the calculation will be Int32-sized, and that's what causes the overflow.
The result will actually easily fit into an Int64, it just needs one of the values on the right to be cast to Int64:
Int64 myAnswer = (Int64)20000*1024*1024;
This is discussed in more detail in this answer.
(I appreciate this doesn't apply in the OP's case, but it was just this sort of issue that brought me here!)
You can use decimal. Its range is greater than Int64's.
It has 28-29 significant digits.
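For example, a value several times UInt64.MaxValue fits with room to spare:

decimal big = 18446744073709551615m * 4; // 4 x UInt64.MaxValue
Console.WriteLine(big);                  // 73786976294838206460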
