Working with numbers larger than max decimal value - c#

I'm working with the product of the first 26 prime numbers. This requires more than 52 bits of precision, which I believe is the max a double can handle, and more than the 28-29 significant digits a decimal can provide. So what would be some strategies for performing multiplication and division on numbers this large?
Also, what would the performance impacts be of whatever hoops I'd have to jump through to make this happen?
The product of the first 22 prime numbers (the most I can multiply together on my calculator without dropping into scientific mode) is:
10,642,978,845,819,148,849,204,664,294,430
The product of the last four is
72,370,439
When multiplied together, I get:
7.7023705133964511682328635583552e+38
The performance impacts are especially important here, because we're essentially trying to resolve the question of whether a prime-number string comparison solution is faster in practice than a straight comparison of characters. The post which prompted this investigation is here. Processors are optimized for floating-point calculations; ideally I'd want to leverage as much of that optimization in whatever solution I end up with.
TIA!
James
PS: The code I do have is for a competing solution; I don't think the prime number solution can possibly be faster, but I'm trying to give it the fairest chance I can.

You can use BigInteger (System.Numerics.BigInteger) from .NET 4.0 onwards. For older versions, I think you need an open source library such as this one
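For illustration, a minimal sketch of the BigInteger approach (assuming .NET 4.0+ and a reference to System.Numerics); the product never overflows because BigInteger grows as needed:

using System;
using System.Linq;
using System.Numerics;

class PrimeProduct
{
    static void Main()
    {
        // The first 26 primes, 2 through 101.
        int[] primes = { 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41,
                         43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101 };

        // Multiplication and division are exact, with no fixed precision limit.
        BigInteger product = primes.Aggregate(BigInteger.One, (acc, p) => acc * p);
        Console.WriteLine(product);
        Console.WriteLine(product / 101); // product of the first 25 primes
    }
}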

I read the post you linked to, about the interview question. Since you're only multiplying and dividing these large integers, a huge optimization is to keep them in their prime-factorized form. Each large integer is an array [0..25] of ints, each element representing the exponent of the nth prime in the factorization. To multiply two large integers in this form, simply add the exponents element-by-element; to divide, subtract exponents.
But you will see this is equivalent to tabulating character counts on the two strings.
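A sketch of that representation (hypothetical helper names; it assumes the first 26 primes and that any division performed is exact):

// Each "big number" is an exponent vector: value = 2^a[0] * 3^a[1] * 5^a[2] * ... * 101^a[25]
static int[] Multiply(int[] a, int[] b)
{
    var result = new int[26];
    for (int i = 0; i < 26; i++)
        result[i] = a[i] + b[i];   // multiplying the values adds the exponents
    return result;
}

static int[] Divide(int[] a, int[] b)
{
    var result = new int[26];
    for (int i = 0; i < 26; i++)
        result[i] = a[i] - b[i];   // exact division subtracts the exponents
    return result;
}

Comparing two such numbers for equality is then just comparing two length-26 int arrays, which is the character-count tabulation described above.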

Related

Cutting corners with float-to-string representation?

I have a float I need to turn into a string with 5 decimals of precision (X.XXXXX), which means I need at least 6 decimals for rounding up/down. The issue is that the operation to get an integer representation produces a very big number which I can't store (I'd need something like BigInteger, but I can't rely on any built-in type for compatibility reasons, and I won't pretend I understand how to reinvent one in a reasonably simple way). I can pre-emptively limit it:
result = (m * Pow(5, +exp) / Pow(10,8));
but this will only give correct results for a handful of normalized floats like 0.3f; something like 1E-5 or 113.754f (which now has 3 more "leading" digits for the "ceil" part) will be wrong.
Taking into account I need 5 (6) decimals precision max - is there a shortcut I can take?
is there a shortcut I can take?
No and yes.
No in getting the best conversion result. Shortcuts run into the table-maker's dilemma. In short, there will be corner cases that oblige a fair amount of code for float to string conversion. Typically this means doing most of the conversion using integer math. Example.
Yes if code is willing to tolerate some error. This error results from the accumulated rounding of floating point operations. As a typical float has 24 bits of binary precision (good for at least 6 significant decimal digits), the requested "5 decimals precision (X.XXXXX)" (which is really 6 significant decimal digits) will be hard to obtain without error.
Using wider math greatly reduces the errors (perhaps by a factor of hundreds of millions), yet does not eliminate them.
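As a rough illustration of the "wider math" idea (a C# sketch; the value and scale factor are only illustrative, and this does not dodge the corner cases described above):

float f = 113.754f;

// Scale, round, unscale entirely in float: the extra roundings can occasionally disturb the last digit.
float narrow = (float)Math.Round(f * 100000f) / 100000f;

// Promote to double first, so the intermediate steps carry ~53 bits of precision instead of 24.
double wide = Math.Round((double)f * 100000.0) / 100000.0;

Console.WriteLine(narrow.ToString("F5"));
Console.WriteLine(wide.ToString("F5"));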

Bitwise representation of division of floats - how division of floats works

A number can have multiple representations if we use a float, so the results of a division of floats may produce bitwise different floats. But what if the denominator is a power of 2?
AFAIK, dividing by a power of 2 would only shift the exponent, leaving the same mantissa, always producing bitwise identical floats. Is that right?
float a = xxx;
float result = a / 1024f; // always the same result?
--- UPDATE ----------------------
Sorry for my lack of knowledge in the IEEE black magic for floating points :) , but I'm talking about those numbers Guvante mentioned: no representation for certain decimal numbers, 'inaccurate' floats. For the rest of this post I'll use 'accurate' and 'inaccurate' considering Guvante's definition of these words.
To simplify, let's say the numerator is always an 'accurate' number. Also, let's divide not by just any power of 2, but always by 1024. Additionally, I'm doing the operation the same way every time (same method), so I'm talking about getting the same results across different executions (for the same inputs, of course).
I'm asking all this because I see different numbers coming from the same inputs, so I thought: well if I only use 'accurate' floats as numerators and divide by 1024 I will only shift the exponent, still having an 'accurate' float.
You asked for an example. The real problem is this: I have a simulator producing sometimes 0.02999994 and sometimes 0.03000000 for the same inputs. I thought I could multiply these numbers by 1024, round to get an 'integer' ('accurate' float) that would be the same for those two numbers, and then divide by 1024 to get an 'accurate' rounded float.
I was told (in my other question) that I could convert to decimal, round and cast to float, but I want to know if this way works.
A number can have multiple representations if we use a float
The question appears to be predicated on an incorrect premise; the only number that has multiple representations as a float is zero, which can be represented as either "positive zero" or "negative zero". Other than zero a given number only has one representation as a float, assuming that you are talking about the "double" or "float" types.
Or perhaps I misunderstand. Is the issue that you are referring to the fact that the compiler is permitted to do floating point operations in higher precision than the 32 or 64 bits available for storage? That can cause divisions and multiplications to produce different results in some cases.
Since people often don't fully grasp floating point numbers I will go over some of your points real quick. Each particular combination of bits in a floating point number represent a unique number. However because that number has a base 2 fractional component, there is no representation for certain decimal numbers. For instance 1.1. In those cases you take the closest number. IEEE 754-2008 specifies round to nearest, ties to even in these cases.
The real difficulty is when you combine two of these 'inaccurate' numbers. This can introduce problems as each intermediate step will involve rounding. If you calculate the same value using two different methods, you could come up with subtly different values. Typically this is handled with an epsilon when you want equality.
Now onto your real question: can you divide by a power of two and avoid introducing any additional 'inaccuracies'? Normally you can; however, as with all floating point numbers, denormals and other odd cases have their own logic, and obviously if your mantissa overflows you will have difficulty. And again, note that no mathematical errors are introduced during any of this; it is simply math being done with limited precision, which involves intermittent rounding of results.
EDIT: In response to new question
What you are saying could work, but it is pretty much equivalent to rounding. Additionally, if you are just looking for equality, you should use an epsilon as I mentioned earlier: (a - b) < e for some small value e (0.0001 would work in your example). If you are looking to print out a pretty number, and the framework you are using isn't doing it to your liking, some rounding would be the most direct way of describing your solution, which is always a plus.
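A small sketch of both suggestions (hypothetical helper names, using the 1024 divisor and the 0.0001 epsilon from this thread):

// Equality within a tolerance, rather than bitwise equality.
static bool NearlyEqual(float a, float b, float epsilon)
{
    return Math.Abs(a - b) < epsilon;
}

// The questioner's idea: snap a value to the nearest multiple of 1/1024.
static float SnapTo1024ths(float x)
{
    return (float)Math.Round(x * 1024f) / 1024f;
}

// SnapTo1024ths(0.02999994f) and SnapTo1024ths(0.03000000f) both land on the same float,
// and NearlyEqual(0.02999994f, 0.03000000f, 0.0001f) is true without any snapping.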

Why can't c# calculate exact values of mathematical functions

Why can't c# do any exact operations?
Math.Pow(Math.Sqrt(2.0),2) == 2.0000000000000004
I know how doubles work, I know where the rounding error is from, I know that it's almost the correct value, and I know that you can't store infinite numbers in a finite double. But why isn't there a way that c# can calculate it exactly, while my calculator can do it.
Edit
It's not about my calculator, I was just giving an example:
http://www.wolframalpha.com/input/?i=Sqrt%282.000000000000000000000000000000000000000000000000000000000000000000000000000000001%29%5E2
Cheers
Chances are your calculator can't do it exactly - but it's probably storing more information than it's displaying, so the error after squaring ends up outside the bounds of what's displayed. Either that, or its errors happen to cancel out in this case - but that's not the same as getting it exactly right in a deliberate way.
Another option is that the calculator is remembering the operations that resulted in the previous results, and applying algebra to cancel out the operations... that seems pretty unlikely though. .NET certainly won't try to do that - it will calculate the intermediate value (the root of two) and then square it.
If you think you can do any better, I suggest you try writing out the square root of two to (say) 50 decimal places, and then square it exactly. See whether you come out with exactly 2...
Your calculator is not calculating it exactly, it just that the rounding error is so small that it's not displayed.
I believe most calculators use binary-coded decimals, which is the equivalent of C#'s decimal type (and thus is entirely accurate). That is, each byte contains two digits of the number and maths is done via logarithms.
What makes you think your calculator can do it? It's almost certainly displaying less digits than it calculates with and you'd get the 'correct' result if you printed out your 2.0000000000000004 with only five fractional digits (for example).
I think you'll probably find that it can't. When I do the square root of 2 and then multiply that by itself, I get 1.999999998.
The square root of 2 is one of those annoying irrational numbers like PI and therefore can't be represented with normal IEEE754 doubles or even decimal types. To represent it exactly, you need a system capable of symbolic math where the value is stored as "the square root of two" so that subsequent calculations can deliver correct results.
The way calculators round numbers varies from model to model. My TI Voyage 200 does algebra to simplify expressions (among other things), but most calculators display only a portion of the real value calculated, after applying a rounding function to the result. For example, you may take the square root of 2 and the calculator would store (let's say) 54 decimals, but only display 12 rounded decimals. Thus taking the square root of 2 and then squaring the result returns the same value, because the displayed result is rounded. In any case, unless the calculator can keep an infinite number of decimals, you'll always get a best-approximate result for complex operations.
By the way, try to represent 0.1 in binary and you'll realize that you can't represent it exactly; what actually gets stored is something like 0.100000000000...01
Your calculator has methods which recognize and manipulate irrational input values.
For example: 2^(1/2) is likely not evaluated to a number in the calculator if you do not explicitly tell it to do so (as in the ti89/92).
Additionally, the calculator has logic it can use to manipulate them such as x^(1/2) * y^(1/2) = (x*y)^1/2 where it can then wash, rinse, repeat the method for working with irrational values.
If you were to give c# some method to do this, I suppose it could as well. After all, algebraic solvers such as mathematica are not magical.
It has been mentioned before, but I think what you are looking for is a computer algebra system. Examples of these are Maxima and Mathematica, and they are designed solely to provide exact values to mathematical calculations, something not covered by the CPU.
The mathematical routines in languages like C# are designed for numerical calculations: it is expected that if you are doing calculations as a program you will have simplified it already, or you will only need a numerical result.
2.0000000000000004 and 2.0 both round to the same single-precision value (binary 10.0, i.e. exactly 2). In your case, using single precision in C# would give the exact answer.
For your other example, Wolfram Alpha may use higher precision than machine precision for calculation. This adds a big performance penalty. For instance, in Mathematica, going to higher precision makes calculations about 300 times slower
k = 1000000;
vec1 = RandomReal[1, k];
vec2 = SetPrecision[vec1, 20];
AbsoluteTiming[vec1^2;]
AbsoluteTiming[vec2^2;]
It's 0.01 second vs 3 seconds on my machine
You can see the difference in results between single precision and double precision by doing something like the following in Java:
public class Bits {
    public static void main(String[] args) {
        double a1 = 2.0;
        float a2 = (float) 2.0;
        double b1 = Math.pow(Math.sqrt(a1), 2);
        float b2 = (float) Math.pow(Math.sqrt(a2), 2);
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(a1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(a2)));
        System.out.println(Long.toBinaryString(Double.doubleToRawLongBits(b1)));
        System.out.println(Integer.toBinaryString(Float.floatToRawIntBits(b2)));
    }
}
You can see that the single-precision result is exact, whereas the double-precision one is off by one bit.
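For reference, a rough C# translation of the same experiment (a sketch; BitConverter stands in for Java's raw-bits helpers):

using System;

class Bits
{
    static void Main()
    {
        double a1 = 2.0;
        float a2 = 2.0f;
        double b1 = Math.Pow(Math.Sqrt(a1), 2);
        float b2 = (float)Math.Pow(Math.Sqrt(a2), 2);

        // Print the raw bit patterns of the inputs and the results.
        Console.WriteLine(Convert.ToString(BitConverter.DoubleToInt64Bits(a1), 2));
        Console.WriteLine(Convert.ToString(BitConverter.ToInt32(BitConverter.GetBytes(a2), 0), 2));
        Console.WriteLine(Convert.ToString(BitConverter.DoubleToInt64Bits(b1), 2));
        Console.WriteLine(Convert.ToString(BitConverter.ToInt32(BitConverter.GetBytes(b2), 0), 2));
    }
}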

Calculate factorials in C#

How can you calculate large factorials using C#? Windows calculator in Win 7 overflows at Factorial (3500). As a programming and mathematical question I am interested in knowing how you can calculate factorial of a larger number (20000, may be) in C#. Any pointers?
[Edit] I just checked with a calc on Win 2k3, since I could recall doing a bigger factorial on Win 2k3. I was surprised by the way things worked out.
Calc on Win2k3 worked even with big numbers. I tried !50000 and I got an answer: 3.3473205095971448369154760940715e+213236
It was very fast while I did all this.
The main question here is not only to find out the appropriate data type, but also a bit mathematical. If I try to write a simple factorial code in C# [recursive or loop], the performance is really bad. It takes multiple seconds to get an answer. How is the calc in Windows 2k3 (or XP) able to perform such a huge factorial in less than 10 seconds? Is there any other way of calculating factorial programmatically in C#?
Have a look at the BigInteger structure:
http://msdn.microsoft.com/en-us/library/system.numerics.biginteger.aspx
Maybe this can help you implement this functionality.
CodeProject has an implementation for older versions of the framework at http://www.codeproject.com/KB/cs/biginteger.aspx.
If I try to write a simple factorial code in C# [recursive or loop], the performance is really bad. It takes multiple seconds to get an answer.
Let's do a quick order-of-magnitude calculation here for a naive implementation of factorial that performs n multiplications. Suppose we are on the last step. 19999! is about 2^18 bits. 20000 is about 2^5 bits; we'll assume that it is a 32 bit integer. The final multiplication therefore involves the addition of up to 2^5 partial results, each roughly 2^18 bits long. The number of bit operations will therefore be on the order of 2^23.
That's for the last stage; there will be 20000 (call it 2^16) such stages, so that is a total of about 2^39 operations. Some of them will of course be cheaper, but we're going for an order of magnitude here.
A modern processor does about 2^32 operations per second. Therefore it will take about 2^7 seconds to get the result.
Of course, the big integer library writers were not naive; they take advantage of the ability of the chip to do many bit operations in parallel. They're probably doing the math in 32 bit chunks, giving a speedup of a factor of 2^5. So our total order-of-magnitude calculation is that it should take about 2^2 seconds to get a result.
2^2 is 4. So your observation that it takes a few seconds to get a result is expected.
How is the calc in Windows 2k3 (or XP) able to perform such a huge factorial in less than 10 seconds?
I don't know. Extreme cleverness in exploiting the math operations on the chip probably. Or, using a non-naive algorithm for calculating factorial. Or, possibly they are using Stirling's Approximation and getting an inexact result.
Is there any other way of calculating factorial programmatically in C#?
Sure. If all you care about is the order of magnitude then you can use Stirling's Approximation. If you care about the exact value then you're going to have to compute it.
There exist sophisticated computational algorithms for efficiently computing the factorials of large, arbitrary precision numbers. The Schönhage–Strassen algorithm, for instance, allows you to perform asymptotically fast multiplication for arbitrarily large integers.
Case in point, Mathematica computes 22000! on my machine in less than 1 second. The Implementation Notes page at reference.wolfram.com states:
(Mathematica's) n! uses an O(log(n) M(n)) algorithm of Schönhage based on dynamic decomposition to prime powers.
Unfortunately, the implementation of such algorithms is both complicated and error prone. Rather than trying to roll your own implementation, it may be wiser for you to license a copy of Mathematica (or a similar product that meets your functional and performance needs) and either use it, or a .NET programming interface to it, to perform your computation.
Have you looked at System.Numerics.BigInteger?
Using System.Numerics BigInteger
var bi = new BigInteger(1);
var factorial = 171;
for (var i = 1; i <= factorial; i++)
{
    bi *= i;
}
will be calculated to
1241018070217667823424840524103103992616605577501693185388951803611996075221691752992751978120487585576464959501670387052809889858690710767331242032218484364310473577889968548278290754541561964852153468318044293239598173696899657235903947616152278558180061176365108428800000000000000000000000000000000000000000
For 50000! it takes a couple seconds to calculate but it seems to work and the result is a 213237 digit number and that's also what Wolfram says.
You will probably have to implement your own arbitrary precision numeric type.
There are various approaches. Probably not the most efficient, but perhaps the simplest, is to use a variable-length array of byte, where each element represents a digit. Ideally this would be wrapped in a class, to which you can then add a method that lets you multiply the number by another arbitrary-precision number. A multiply by a standard C# integer would probably also be a good idea, but is a little trickier to implement.
Since they don't give you the result down to the last digit, they may be "cheating" using some approximation.
Check out http://mathworld.wolfram.com/StirlingsApproximation.html
Using Stirling's formula you can calculate (an approximation of) the factorial of n in log n time. Of course, they might as well have a dictionary with pre-calculated values of factorial(n) for every n up to one million, making the calculator show the result extremely fast.
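For what it's worth, a Stirling-style sketch in C# (hypothetical helper name) that produces a calc-like mantissa-and-exponent answer almost instantly:

// Stirling's approximation: ln(n!) ≈ n·ln(n) − n + ½·ln(2πn)
static string ApproximateFactorial(int n)
{
    double log10 = (n * Math.Log(n) - n + 0.5 * Math.Log(2 * Math.PI * n)) / Math.Log(10);
    double exponent = Math.Floor(log10);
    double mantissa = Math.Pow(10, log10 - exponent);
    return mantissa.ToString("F4") + "e+" + exponent;
}

// ApproximateFactorial(50000) gives roughly 3.3473e+213236, matching the calc result
// quoted in the question, but it is an approximation, not the exact value.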
This answer covers limits for basic .Net types to compute and represent n!
Basic code to calculate factorial for "SomeType" that supports multiplication:
SomeType factorial = 1;
int n = 35;
for (int i = 1; i <= n; i++)
{
    factorial *= i;
}
Limits for built in number types:
short - correct results up to 7!, incorrect results afterwards, code returns 0 starting at 18 (similar to int)
int - correct results up to 12!, incorrect results afterwards, code returns 0 starting at 34 (Why computing factorial of realtively small numbers (34+) returns 0)
float - precise results up to 14!, correct but not precise afterwards, returns infinity starting at 35
long - correct results up to 20!, incorrect results afterwards, code returns 0 starting at 66 (similar to int)
double - precise results up to 22!, correct but not precise afterwards, returns infinity starting at 171
BigInteger - precise and upper limit is set by memory usage only.
Note: integer types overflow pretty quickly and start producing incorrect results. Realistically, if you need factorials for any practical usage, long is the type to go with (up to 20!); if you can't count on limited inputs, BigInteger is the only type provided in .Net Framework that gives precise results (albeit slowly for large numbers, as there is no built-in optimized n! method).
You need a special big-number library for this. This link introduces the System.Numeric.BigInteger class, and incidentally has an example program that calculates factorials. But don't use the example! If you recurse like that, your stack will grow horribly. Just write a for-loop to do the multiplication.
I don't know how you could do this in a language without arbitrary precision arithmetic. I guess a start could be to count factors of 5 and 2, removing them from the product, and add on these zeroes at the end.
As you can see there are many.
>>> factorial(20000)
<<non-zeroes removed>>000...000L   (the printed value of 20000! ends in 4,999 zeros)
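The zero count itself is cheap to get without any big-number math; a C# sketch of the factor-counting idea (hypothetical helper name):

// Trailing zeros of n! = number of factors of 5 in n! (factors of 2 are always more plentiful).
static long TrailingZerosOfFactorial(long n)
{
    long zeros = 0;
    for (long p = 5; p <= n; p *= 5)
        zeros += n / p;
    return zeros;
}

// TrailingZerosOfFactorial(20000) returns 4999, which is why the tail above is all zeros.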

Is a double really unsuitable for money?

I always say that in C# a variable of type double is not suitable for money. All sorts of weird things could happen. But I can't seem to create an example that demonstrates some of these issues. Can anyone provide such an example?
(edit; this post was originally tagged C#; some replies refer to specific details of decimal, which therefore means System.Decimal).
(edit 2: I was specifically asking for some C# code, so I don't think this is language-agnostic only)
Very, very unsuitable. Use decimal.
double x = 3.65, y = 0.05, z = 3.7;
Console.WriteLine((x + y) == z); // false
(example from Jon's page here - recommended reading ;-p)
You will get odd errors effectively caused by rounding. In addition, comparisons with exact values are extremely tricky - you usually need to apply some sort of epsilon to check for the actual value being "near" a particular one.
Here's a concrete example:
using System;
class Test
{
    static void Main()
    {
        double x = 0.1;
        double y = x + x + x;
        Console.WriteLine(y == 0.3); // Prints False
    }
}
Yes it's unsuitable.
If I remember correctly, a double has about 17 significant digits, so normally rounding errors will take place far behind the decimal point. Most financial software uses 4 decimals behind the decimal point, which leaves 13 decimals to work with, so the maximum number you can work with for single operations is still very much higher than the USA national debt. But rounding errors will add up over time. If your software runs for a long time you'll eventually start losing cents. Certain operations will make this worse. For example, adding large amounts to small amounts will cause a significant loss of precision.
You need fixed-point datatypes for money operations; most people don't mind if you lose a cent here and there, but accountants aren't like most people.
edit
According to this site http://msdn.microsoft.com/en-us/library/678hzkk9.aspx Doubles actually have 15 to 16 significant digits instead of 17.
@Jon Skeet: decimal is more suitable than double because of its higher precision, 28 or 29 significant decimals. That means less chance of accumulated rounding errors becoming significant. Fixed-point datatypes (i.e. integers that represent cents or 100ths of a cent, like Boojum mentions) are actually better suited.
Since decimal uses a scaling factor of multiples of 10, numbers like 0.1 can be represented exactly. In essence, the decimal type represents this as 1 / 10 ^ 1, whereas a double would represent this as 104857 / 2 ^ 20 (in reality it would be more like really-big-number / 2 ^ 1023).
A decimal can exactly represent any base 10 value with up to 28/29 significant digits (like 0.1). A double can't.
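A small sketch of the difference in practice:

double d = 0.0;
decimal m = 0.0m;
for (int i = 0; i < 1000; i++)
{
    d += 0.1;    // 0.1 is not exactly representable in binary
    m += 0.1m;   // 0.1m is stored exactly
}
Console.WriteLine(d == 100.0);   // False: the double sum drifts slightly away from 100
Console.WriteLine(m == 100.0m);  // True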
My understanding is that most financial systems express currency using integers -- i.e., counting everything in cents.
IEEE double precision actually can represent all integers exactly in the range -2^53 through +2^53. (Hacker's Delight, pg. 262) If you use only addition, subtraction and multiplication, and keep everything to integers within this range then you should see no loss of precision. I'd be very wary of division or more complex operations, however.
Using double when you don't know what you are doing is unsuitable.
"double" can represent an amount of a trillion dollars with an error of 1/90th of a cent. So you will get highly precise results. Want to calculate how much it costs to put a man on Mars and get him back alive? double will do just fine.
But with money there are often very specific rules saying that a certain calculation must give a certain result and no other. If you calculate an amount that is very very very close to $98.135 then there will often be a rule that determines whether the result should be $98.14 or $98.13 and you must follow that rule and get the result that is required.
Depending on where you live, using 64 bit integers to represent cents or pennies or kopeks or whatever is the smallest unit in your country will usually work just fine. For example, 64 bit signed integers representing cents can represent values up to about 92,233 trillion dollars. 32 bit integers are usually unsuitable.
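A sketch of the cents-as-integer approach (illustrative values):

long itemCents = 1999;              // $19.99 held as 1999 cents
long balanceCents = 0;

checked                             // surface overflow as an exception instead of silently wrapping
{
    for (int i = 0; i < 1000000; i++)
        balanceCents += itemCents;  // exact integer addition, no drift
}

Console.WriteLine("{0}.{1:D2}", balanceCents / 100, balanceCents % 100);  // 19990000.00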
No, a double will always have rounding errors; use "decimal" if you're on .Net...
Actually floating-point double is perfectly well suited to representing amounts of money as long as you pick a suitable unit.
See http://www.idinews.com/moneyRep.html
So is fixed-point long. Either consumes 8 bytes, surely preferable to the 16 consumed by a decimal item.
Whether or not something works (i.e. yields the expected and correct result) is not a matter of either voting or individual preference. A technique either works or it doesn't.
