Is there a 128 or 256 bit double class in .net? - c#

I have an application in which I want to be able to use large and very precise numbers. For this I need an arbitrary-precision implementation, and IntX only works for integers.
Is there a class in the .NET Framework, or a third-party one (preferably free), that would do this?
Is there another way to do this?

Maybe the Decimal type would work for you?

You can use the freely available, arbitrary-precision BigDecimal from java.math, which is part of the J# redistributable package from Microsoft and is a managed .NET library.
Place a reference to vjslib in your project and you can write something like this:
using java.math;
using System.Diagnostics;

public static void Main()
{
    BigDecimal big = new BigDecimal("1234567890123456789011223344556677889900.0000009876543210000987654321");
    big = big.add(new BigDecimal(1.0)); // add() returns a new BigDecimal; it does not mutate
    Debug.Print(big.toString());
}
This will print the following to the debug console:
1234567890123456789011223344556677889901.0000009876543210000987654321
Note that, as already mentioned, .NET 4 (which shipped with Visual Studio 2010) contains a BigInteger class which, as a matter of fact, was already present in earlier versions, but only as an internal class (i.e., you'd need some reflection to get at it).

The F# library has some really big number types as well if you're okay with using that...

I've been searching for a solution for this for a long time, and today came across this library:
Quadruple Precision Double in C#
Signed 128-bit floating point data type library, with 64 effective bits of precision (vs. 53 for Doubles) and a 64 bit exponent (vs. 11 for Doubles). Quads have greater precision and far greater range than Doubles and are especially useful when dealing with very large or very small values, such as those in probabilistic models. As of version 2.0, all Quad arithmetic is checked (underflowing to 0, overflowing to +/- infinity), has special PositiveInfinity, NegativeInfinity, and NaN values, and follows the same rules as .Net Double arithmetic and comparison operators (e.g. 1/0 == PositiveInfinity, 0 * PositiveInfinity == NaN, NaN != NaN), making it a convenient drop-in replacement for Doubles in existing code.

If Decimal doesn't work for you, try implementing (or grabbing code from somewhere) Rational arithmetic using large integers. That will provide the precision you need.
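For illustration, a minimal sketch of that idea using System.Numerics.BigInteger for the numerator and denominator (the Rational type and its members are invented here, not an existing .NET API, and there is no guard against a zero denominator):

using System.Numerics;

public readonly struct Rational
{
    public BigInteger Numerator { get; }
    public BigInteger Denominator { get; }

    public Rational(BigInteger numerator, BigInteger denominator)
    {
        // Normalize: keep the sign on the numerator and reduce by the GCD.
        if (denominator.Sign < 0) { numerator = -numerator; denominator = -denominator; }
        BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
        Numerator = numerator / gcd;
        Denominator = denominator / gcd;
    }

    public static Rational operator +(Rational a, Rational b) =>
        new Rational(a.Numerator * b.Denominator + b.Numerator * a.Denominator,
                     a.Denominator * b.Denominator);

    public static Rational operator *(Rational a, Rational b) =>
        new Rational(a.Numerator * b.Numerator, a.Denominator * b.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}

With this, one ninth times nine reduces back to exactly 1/1, a guarantee no fixed-width binary or decimal float can make for arbitrary fractions.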

Shameless plug: QPFloat emulates the IEEE standard to full precision.

Use decimal for this if possible.

Decimal is a 128-bit (16 byte) value type that is used for highly precise calculations. It is a floating point type that is represented internally as base 10 instead of base 2 (i.e. binary). If you need to be highly precise, you should use Decimal - but the drawback is that Decimal is about 20 times slower than using floats.
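A two-line illustration of why the base matters (plain C#, nothing library-specific): 0.1 has no exact base-2 representation, so the double comparison fails while the decimal one holds.

Console.WriteLine(0.1 + 0.2 == 0.3);    // False: doubles store base-2 approximations
Console.WriteLine(0.1m + 0.2m == 0.3m); // True: decimal stores 0.1 exactly in base 10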

Well, I'm about 12 years late to the party. 🙃
I just thought you'd like to use my arbitrary-precision floating-point class called BigDecimal (I should have named it BigFloat, but it's kind of late for that now).
Well, more correctly, it will provide precision up to the number of digits you specify (by setting the static BigDecimal.Precision member); that way it doesn't use up all your RAM trying to represent irrational numbers.
I put a lot of effort into ensuring its correctness and working out all the bugs. It comes with a test project that tests every method in multiple ways, and each bug I fixed started by adding a test case.
And unlike QPFloat and the J# redistributable's BigDecimal, the code is not an incomprehensible mess and follows C# coding style and naming conventions (to be fair, part of the unreadability of J#'s version comes from the fact that you have to decompile the assembly first, so it'll be missing the names of all the private members).
Link drop:
BigDecimal on GitHub
ExtendedNumerics.BigDecimal on NuGet
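For flavor, roughly what usage looks like based on the description above; I'm assuming Parse and the arithmetic operators here, so check the project's README for the exact API:

using ExtendedNumerics;

// Cap the digits kept for non-terminating results (e.g. 1/3), via the
// static BigDecimal.Precision member mentioned above.
BigDecimal.Precision = 50;
BigDecimal a = BigDecimal.Parse("0.00000000000000000001");
BigDecimal b = BigDecimal.Parse("12345678901234567890");
Console.WriteLine(a * b); // 0.1234567890123456789, exactly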

Decimal is 128 bits if that would work.

Related

Is Java's BigDecimal the closest data type corresponding to C#'s Decimal?

According to the chart here, the equivalent data type in Java to C#'s Decimal is BigDecimal.
Is this really so? What's up with the "Big" preamble? There doesn't seem to be a "SmallDecimal" or "LittleDecimal" (let alone "MediumSizedDecimal") in Java.
I must say, though, that chart was the clearest thing I found on the subject; the other links here and here and here were about as clear to me as the Mississippi River after a torrential tempest.
Is this really so?
They are similar but not identical. To be more specific: the Java version can represent every value that the C# version can, but the opposite is not true.
What's up with the "Big" preamble?
A Java BigDecimal can have arbitrarily much precision and therefore can be arbitrarily large. If you want to make a BigDecimal with a thousand places of precision, you go right ahead.
By contrast, a C# decimal has a fixed size; it takes up 128 bits and gives you 28 decimal places of precision.
To be more precise: both types give you numbers of the form
+/- someInteger / 10 ^ someExponent
In C#, someInteger is a 96-bit unsigned integer and someExponent is an integer between 0 and 28.
In Java, someInteger is of arbitrary size and someExponent is a signed 32-bit integer.
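You can inspect that representation in C# with decimal.GetBits, which returns the 96-bit integer in three 32-bit chunks plus a flags word holding the sign and exponent:

// { lo, mid, hi, flags }: someInteger in the first three ints,
// someExponent (the scale) in bits 16-23 of flags.
int[] bits = decimal.GetBits(1.5m);
int exponent = (bits[3] >> 16) & 0xFF;
Console.WriteLine($"{bits[0]} / 10^{exponent}"); // 15 / 10^1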
Yep - that's the corresponding type.
Since you are using Java after C#, don't be too surprised to find little nuances like this, or be too upset when there is no easy way to do something that's "easy" to do in C#. The first thing that comes to my mind is int & int? - in Java you just use int and Integer.
C# had the luxury of coming after Java, so lots of (what I subjectively see as) bad decisions have been fixed/streamlined. Also, it helps that C# was designed by Anders Hejlsberg (who is arguably one of the best programming language designers alive) and is regularly "updated", unlike Java (you probably witnessed all the things added to C# since 2000 - complete list)
The C# Decimal and Java BigDecimal types are not equivalents. BigDecimal is arbitrary precision: it can represent any decimal number to any precision (until you run out of RAM).
C# Decimal is floating point but "fixed length" (128 bits). Most of the time, that's enough! Decimal is much faster than BigDecimal, so unless you really need a lot of precision it is a superior option.
What you probably want for Java is https://github.com/tools4j/decimal4j or a similar library.

Why doesn't 0.9 recurring always equal 1

Mathematically, 0.9 recurring can be shown to be equal to 1. This question however, is not about infinity, convergence, or the maths behind this.
The above assumption can be represented using doubles in C# with the following.
var oneOverNine = 1d / 9d;
var resultTimesNine = oneOverNine * 9d;
Using the code above, (resultTimesNine == 1d) evaluates to true.
When using decimals instead, the evaluation yields false, yet, my question is not about the disparate precision of double and decimal.
Since no type has infinite precision, how and why does double maintain such an equality where decimal does not? What is happening literally 'between the lines' of code above, with regards to the manner in which the oneOverNine variable is stored in memory?
It depends on the rounding used to get the closest representable value to 1/9. It could go either way. You can investigate the issue of representability at Rob Kennedy's useful page: http://pages.cs.wisc.edu/~rkennedy/exact-float
But don't think that somehow double is able to achieve exactness. It isn't. If you try with 2/9, 3/9 etc. you will find cases where the rounding goes the other way. The bottom line is that 1/9 is not exactly representable in binary floating point. And so rounding happens and your calculations are subject to rounding errors.
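A quick way to see both outcomes for yourself (this just prints the comparisons; which values of n survive the round trip depends on how n/9 was rounded):

for (int n = 1; n <= 8; n++)
{
    double fraction = n / 9d;  // n/9 rounded to the nearest representable double
    Console.WriteLine($"{n}/9 * 9 == {n}: {fraction * 9d == n}");
}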
What is happening literally 'between the lines' of code above, with regards to the manner in which the oneOverNine variable is stored in memory?
What you're asking about is called IEEE 754. This is the spec that C#, its underlying .NET runtime, and most other programming platforms use to store and manipulate floating-point values. This is because support for IEEE 754 is typically implemented directly at the CPU/chipset level, making it both far more performant than an alternative implemented solely in software and far easier when building compilers, because the operations map almost directly to specific CPU instructions.

Are there any "type" on Borland C++ Builder like "decimal" from C#?

In C#, there is a type called decimal (the System.Decimal structure). I have found information that shows how it is better than float and double types for some cases:
StackOverflow - double-precision-problems-on-net
StackOverflow - how-to-make-doubles-work-properly-c-sharp
Is there any similar type for Borland C++ Builder programs?
The decimal type in C#, .NET's System.Decimal type, is just a floating-point number stored with base-10 instead of base-2 encoding. float and double are more typical base-2 floating-point numbers. That is, a double is stored as +/- x * 2^y while a decimal is stored as +/- x * 10^y. That's why it does better for, as one example, financial data, which is typically expressed in terms of x * 10^-2. The IEEE 754 standard (the floating-point math standard) calls this "decimal floating point" math, and defines 32-, 64-, and 128-bit decimal formats.
In C++, these types are implemented in the std::decimal namespace, and are called std::decimal::decimal32 and std::decimal::decimal64, in the <decimal> header. If Borland C++ Builder has such a type, you will find it there. GNU's C++ library includes this header but, AFAIK, it's not actually part of the standard yet, so BCB may not have it. If that's the case, you'll need to use a third-party library. #dash's example of Intel's Decimal Floating Point library is probably the best-known such library, though a Google search for IEEE 754 Decimal should turn up others if, for some reason, you need them.
These are the float types that you can use in Delphi:
single: 4 bytes (32 bits)
real: 6 bytes (48 bits)
double: 8 bytes (64 bits)
currency: 8 bytes (64 bits) (this is probably what you're looking for)
extended: 10 bytes (80 bits) (maps to double when you compile to x64!)
In C++ Builder there seems to be a System::Currency class that mimics Delphi's built-in currency type. Maybe it helps to look into that.
I found this link: Borland C++ Primitive Data Types. View it in HTML.
There is a long double type with a capacity of 10 bytes.
The document is informative; you may want to read it anyway.

How do you deal with numbers larger than UInt64 (C#)

In C#, how can one store and calculate with numbers that significantly exceed UInt64's max value (18,446,744,073,709,551,615)?
Can you use the .NET 4.0 beta? If so, you can use BigInteger.
Otherwise, if you're sticking within 28 digits, you can use decimal - but be aware that obviously that's going to perform decimal arithmetic, so you may need to round at various places to compensate.
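As a sketch of that rounding point: decimal holds integers up to 28-29 digits, but division produces fractions, so you truncate to stay integral (decimal.Truncate is the standard call for dropping the fractional part):

decimal big = 18446744073709551615m * 1000m;   // UInt64.MaxValue with room to spare
decimal quotient = decimal.Truncate(big / 7m); // drop the fraction after dividing
Console.WriteLine(quotient);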
By using a BigInteger class; there's one in the J# libraries (definitely accessible from C#), another in F# (need to test this one), and there are freestanding implementations such as this one in pure C#.
What is it that you wish to use these numbers for? If you are doing calculations with really big numbers, do you still need the accuracy down to the last digit?
If not, you should consider using floating-point values instead. They can be huge; the max value for the double type is 1.79769313486231570E+308 (in case you are not used to scientific notation, it means 1.79769313486231570 multiplied by 10000000...0000 with 308 zeros).
That should be large enough for most applications.
BigInteger represents an arbitrarily large signed integer.
using System.Numerics;
var a = BigInteger.Parse("91389681247993671255432112000000");
var b = new BigInteger(1790322312);
var c = a * b;
Decimal has greater range.
There is support for BigInteger in .NET 4.0, but that is still not out of beta.
There are several libraries for computing with big integers, most of the cryptography libraries also offer a class for that. See this for a free library.
Also, do check that you truly need a variable with greater capacity than Int64 and aren't falling foul of C#'s integer arithmetic.
For example, this code will yield an overflow error:
Int64 myAnswer = 20000*1024*1024;
At first glance that might seem to be because the result is too large for an Int64 variable, but actually it's because each of the numbers on the right side of the formula is implicitly typed as Int32, so the temporary memory space reserved for the result of the calculation will be Int32-sized, and that's what causes the overflow.
The result will actually easily fit into an Int64, it just needs one of the values on the right to be cast to Int64:
Int64 myAnswer = (Int64)20000*1024*1024;
This is discussed in more detail in this answer.
(I appreciate this doesn't apply in the OP's case, but it was just this sort of issue that brought me here!)
You can use decimal. Its range is greater than that of Int64.
It has 28-29 significant digits.

Is casting narrow types to wider types to save memory and keep high-precision calculations a terrible idea?

I'm dealing with financial data, so there's a lot of it and it needs to be relatively high-precision (64bit floating point or wider).
The standard practice around my workplace seems to be to represent all of it as the C# decimal type, which is a 128-bit-wide floating-point type specifically created to support round-off-free base-10 operations.
Since 64 bits is wide enough to maintain the representative precision, is it ridiculous to cast the data to the wider type for all calculations (mult, div, add, etc.) and then back to 64 bits for sitting in memory (which is where it spends most of its time)?
For reference: memory is definitely the limiting resource here.
The point of using decimal (128 bits) over double (64 bits) and float (32 bits) isn't usually to do with the size. It's to do with the base. While double and float are floating binary point types, decimal is a floating decimal point type - and it's that feature that lets it represent numbers like 0.1 exactly where float/double can't.
There's no conceptual reason why we couldn't have a 64-bit decimal type, and in many cases that would indeed be enough - but until such a type comes along or you write it yourself, please don't use the "shorter" (and binary floating point) types of float/double for financial calculations. If you do, you're asking for trouble.
If you're suggesting writing a storage type which can convert to/from decimal and is still a floating decimal type, that sounds like a potentially good idea even without it being able to do any calculations. You'll need to be very careful when you think about what to do if you're ever asked to convert a decimal value which you can't represent exactly though. I'd be interested in seeing such a type, to be honest. Hmm...
(As other answers have indicated, I'd really make sure that it's the numbers which are taking up the memory before doing this, however. If you don't need to do it, there's little point in introducing the extra complexity speculatively.)
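For what it's worth, a rough sketch of what such a storage type could look like. The Decimal64 name and layout are invented here, and it ducks the hard rounding question entirely by throwing on any value whose significand doesn't fit in 64 bits:

using System;

public readonly struct Decimal64
{
    // A real implementation would pack the scale into spare mantissa bits
    // to actually reach 8 bytes; this sketch keeps the fields separate.
    private readonly long _mantissa;  // signed base-10 significand
    private readonly byte _scale;     // number of fractional digits (0..28)

    private Decimal64(long mantissa, byte scale)
    {
        _mantissa = mantissa;
        _scale = scale;
    }

    public static Decimal64 FromDecimal(decimal value)
    {
        // decimal.GetBits exposes the 96-bit significand and the scale.
        int[] bits = decimal.GetBits(value);
        byte scale = (byte)((bits[3] >> 16) & 0xFF);
        bool negative = (bits[3] & unchecked((int)0x80000000)) != 0;

        // Refuse significands wider than 63 bits rather than rounding silently.
        if (bits[2] != 0 || (uint)bits[1] > int.MaxValue)
            throw new OverflowException("Value cannot be stored in 64 bits exactly.");

        long mantissa = ((long)(uint)bits[1] << 32) | (uint)bits[0];
        return new Decimal64(negative ? -mantissa : mantissa, scale);
    }

    public decimal ToDecimal()
    {
        bool negative = _mantissa < 0;
        ulong m = (ulong)(negative ? -_mantissa : _mantissa);
        return new decimal((int)(uint)m, (int)(uint)(m >> 32), 0, negative, _scale);
    }
}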
64-bit binary floating point cannot maintain the precision of financial data. It is not a matter of space, it is a matter of which number system the data types use; double uses base-2, decimal is base-10, and base-2 cannot represent many exact base-10 fractions (such as 0.1) even if it had 1000 bits of precision.
Don't believe me? Run this:
double d = 0.0;
for (int i = 0; i < 100; i++)
    d += 0.1;
Console.WriteLine(d);
> 9.99999999999998
If you need base-10 calculations you need the decimal type.
(Edit: damn, beaten by Jon Skeet again...)
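For contrast, the same loop with decimal accumulates the value exactly:

decimal d = 0.0m;
for (int i = 0; i < 100; i++)
    d += 0.1m;
Console.WriteLine(d); // 10.0 exactly, because 0.1m is stored exactly in base 10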
If the decimal type really is the bottleneck, you can use a long number of pennies (or 1/8 cent or whatever your unit is) instead of decimal dollars.
You should use a profiler to see what objects are taking up a lot of memory. If your decimal objects are the culprit, then I would say yes, go after them. Otherwise you are just making guesses; a profiler will tell you for sure.
It is perfectly reasonable to store your numbers at 64 bit, cast them to the decimal type for calculations, and cast the result back to 64 bit, if you don't mind the performance hit.
We require this level of precision where I work, so this is exactly what we do here. We take a two orders of magnitude hit in speed by doing the cast, but we never have to worry about large errors in the floating point arithmetic. Without the cast, the calculation can be wildly inaccurate, depending on the range of the numbers and the type of calculation being performed.
For more on floating point arithmetic, and why errors can creep into your calculations, see "What Every Computer Scientist Should Know About Floating-Point Arithmetic" at http://docs.sun.com/source/806-3568/ncg_goldberg.html
This seems perfectly sane, if 64 bit floating point is truly enough to represent the precision you want. The extra precision decimal is, as you say, often used purely to minimize cumulative errors over multiple operations.
As most of the other posts have already pointed out, converting between 128-bit decimal and 64-bit floating point representations is not a conversion that will always maintain accuracy.
However, if you are dealing with the prices of financial shares, you could consider representing them as ints (the number of pennies) rather than as a decimal value (the number of fractional dollars). Perform all financial calculations in pennies and then only expose them to the outside world as decimals when requested.
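A small sketch of the pennies approach (the variable names are just for illustration):

long priceInPennies = 1999;      // $19.99 held as an exact integer
long total = priceInPennies * 3; // all arithmetic stays in integer pennies
decimal display = total / 100m;  // 59.97m, converted only for presentation
Console.WriteLine(display);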
Another approach may be to improve the algorithmic efficiency of your system rather than "compressing" the storage type. Do you really need all that data in memory at once? Can you virtualize it somehow?
If not, given the volume of data you are managing, you may want to look into organizing the data in a way that reduces the redundancy. For example, not every share has a historic price at every point in time (some companies don't exist far enough back in time). So organize your data as a dictionary of stock prices by day (or year) rather than as a tabular structure for each stock. There may be other alternatives depending on how your data is available and how you intend to perform calculations with it.
You need to do the numerical analysis to see if the practice (of keeping 128 bits) is ridiculous, or just lazy, or really necessary.
Is "just add more memory" an acceptable answer?
How much cost is involved in properly coding and testing the proposed approach of moving the values between these representations. Compare that cost with shovelling more memory into a machine with the app running as a 64 bit process.
From MSDN decimal: There is no implicit conversion between floating-point types and the decimal type; therefore, a cast must be used to convert between these two types.
It looks like it is REQUIRED to do the cast in the case you are using.
That being said, it is very important that you understand what most other people here are posting about in regards to the problems of representing currency in floating point.
You may want to consider creating or finding a 64-bit BCD (binary-coded decimal) implementation that you can use for your system.
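A tiny sketch of the digit packing such a BCD type would be built on: four bits per decimal digit, so a ulong holds 16 digits (sign, scale, arithmetic, and inputs longer than 16 digits are all left out here):

static ulong PackBcd(ulong value)
{
    // Pack decimal digits into 4-bit nibbles, least significant digit first.
    ulong bcd = 0;
    for (int shift = 0; value != 0; shift += 4, value /= 10)
        bcd |= (value % 10) << shift;
    return bcd;
}

static ulong UnpackBcd(ulong bcd)
{
    // Read the nibbles back from the most significant end.
    ulong value = 0;
    for (int shift = 60; shift >= 0; shift -= 4)
        value = value * 10 + ((bcd >> shift) & 0xF);
    return value;
}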
The same doubles, converted to decimals, then to byte[], and then compressed, take about half the space (I have just tested this with several compression libraries: Blosc with defaults, LZ4, and zlib, with or without shuffle; with shuffle, decimals compress best).
One option is to store compressed decimals in memory or on disk, since CPUs are starved for data nowadays. See a number of presentations here: http://blosc.org/docs/
