We are storing financial data in a SQL Server database using the decimal data type and we need 6-8 digits of precision in the decimal. When we get this value back through our data access layer into our C# server, it is coming back as the decimal data type.
Due to some design constraints that are beyond my control, this needs to be converted. Converting to a string isn't a problem, but converting to a double is, as the MS documentation says: "[converting from decimal to double] can produce round-off errors because a double-precision floating-point number has fewer significant digits than a decimal."
Once we have the double (or string), we can round to 2 decimal places after any calculations are done, so what is the "right" way to do the decimal conversion to ensure that we don't lose any precision before the rounding?
The conversion won't produce errors within the first 8 digits. double has 15-16 digits of precision - less than the 28-29 of decimal, but enough for your purposes by the sounds of it.
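A minimal sketch of that round trip, using a made-up amount in place of whatever the data access layer actually returns:
// A decimal with 8 significant digits survives the trip through double,
// because double carries 15-16 significant digits.
decimal fromDb = 123456.78m;              // hypothetical value from the data layer
double asDouble = (double)fromDb;         // explicit conversion is required
double rounded = Math.Round(asDouble, 2); // still 123456.78
Console.WriteLine(rounded);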
You should definitely put in place some sort of plan to avoid using double in the future, however - it's an unsuitable datatype for financial calculations.
If you round to 2dp, IMO the "right" way would be to store an integer that is the multiple - i.e. for 12.34 you store the integer 1234. No more double rounding woe.
If you must use double, this still works; integers up to 2^53 are guaranteed to be stored exactly in a double - so the same trick still applies.
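A small sketch of that idea (the amounts here are made up):
// Store money as whole cents: 12.34 becomes the integer 1234.
long cents = 1234;

// Arithmetic on whole cents is exact - even if it ends up in a double,
// since integers of this size are represented exactly by double too.
long total = cents * 3;                // 3702, i.e. 37.02

// Convert back to a decimal amount only at the edges of the system.
decimal amount = total / 100m;
Console.WriteLine(amount);             // 37.02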
From what I understand decimal is used for precision and is recommended for monetary calculations. Double gives better range, but less precision and is a lot faster than decimal.
What if I have time and rate? I feel like double is suited for time and decimal for rate. I can't mix the two and run calculations without casting, which is yet another performance bottleneck. What's the best approach here? Just use decimal for time and rate?
Use double for both. decimal is for currency or other situations where the base-10 representation of the number is important. If you don't care about the base-10 representation of a number, don't use decimal. For things like time or rates of change of physical quantities, the base-10 representation generally doesn't matter, so decimal is not the most appropriate choice.
The important thing to realize is that decimal is still a floating-point type. It still suffers from rounding error and cannot represent certain "simple" numbers (such as 1/3). Its one advantage (and one purpose) is that it can represent decimal numbers with fewer than 29 significant digits exactly. That means numbers like 0.1 or 12345.6789. Basically any decimal you can write down on paper with fewer than 29 digits. If you have a repeating decimal or an irrational number, decimal offers no major benefits.
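To illustrate the difference (a small, hypothetical snippet):
// Terminating decimal fractions are stored exactly by decimal.
Console.WriteLine(0.1m + 0.2m);     // 0.3

// decimal is still floating point, though: 1/3 has to be rounded.
Console.WriteLine(1m / 3m);         // 0.3333333333333333333333333333
Console.WriteLine(1m / 3m * 3m);    // 0.9999999999999999999999999999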
The rule of thumb is to use the type that is more suitable to the values you will handle. This means that you should use DateTime or TimeSpan for time, unless you only care about a specific unit, like seconds, days, etc., in which case you can use any integer type. Usually for time you need precision and don't want any error due to rounding, so I wouldn't use any floating point type like float or double.
For anything related to money, of course you don't want any rounding error either, so you should really use decimal here.
Finally, only if some very specific requirement demands absolute speed in a calculation that is done millions of times, and decimal happens not to be fast enough for it, would I think of using another, faster type. I would first try integer values (multiplying your value by a power of 10 if you have decimals) and only divide by that power of 10 at the end. If that can't be done, only then would I consider a double. Don't do a premature optimization if you are not sure it's needed.
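A rough sketch of that scaling idea, with a made-up rate and a made-up scale factor:
// Work in integer "ticks" of 1/10000 and divide by the scale only once at the end.
const long Scale = 10_000;
long rateTicks = 123_456;                  // hypothetical rate of 12.3456

long totalTicks = 0;
for (int i = 0; i < 1_000_000; i++)
{
    totalTicks += rateTicks;               // pure integer arithmetic: exact and fast
}

decimal total = totalTicks / (decimal)Scale;
Console.WriteLine(total);                  // 12345600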
We are using these double values to represent bill amounts. I read that it is better to use the decimal datatype rather than double to minimize rounding errors, but it's a very big project and changing all the datatypes to decimal is a herculean task.
So we tried Math.Round with both kinds of midpoint rounding, but nothing works. There is some kind of error.
Is there any way to do the rounding to 2 decimal places accurately?
EDIT:
Sorry for not providing examples. The problem is that once the values (there are 24 double values in total) get added before rounding (they originally had up to 15 decimal places), the summed value comes to 18167.04, which is desired. But when they are rounded to 2 decimal places first (using Math.Round or Math.Round with MidpointRounding), the summed value is 18167.07 (it differs by .03).
Using the decimal datatype would be apt for monetary calculations, but since it is a huge project, implementing the change in datatype is too big a task for now.
No way of rounding works.
Is the problem really with the datatype here, or with the rounding?
Will the same rounding method work if the decimal datatype is used?
Floating-point accuracy is not defined by decimal places; it is defined by significant digits. 123456789123456789.0 is no more or less accurate than 0.123456789123456789.
There is a problem that frequently occurs when dealing with financial values in programming:
Float values do not translate well to decimal fractions.
Many developers tend to think of float values as decimal fractions, because that is how they are mostly represented when converting them to strings and vice versa. This is not the case. Float values have their fractional part stored as a binary fraction, as described here.
This makes float values (and calculations with them) slightly askew from their decimal representations.
One way out of this problem is (as you stated) to use the decimal data type, which was constructed to do calculations that translate directly to decimal fractions (as financial calculations do).
Another would be to round all results of floating point calculations, before displaying or storing them. For financial calculations one should use:
Math.Round([your value here], 2, MidpointRounding.AwayFromZero);
I would advise opting for the former whenever possible. It spares many a headache once the calculation is done. With the rounding approach, one has to take rounding errors into account at many points. It may be a big task to convert an existing project to decimal, but it will most probably pay off in the long run (if it is possible in the first place)...
Regarding your edit:
This problem arises because the rounding errors accumulate if you round before you sum. You could opt to round the sum after summing up the un-rounded values. Whether you use double or decimal is irrelevant in this regard, as this applies to both.
Whether this is allowed, depends on the type of financial calculation you do.
If the summands appear in printed form on e.g. an invoice, then it is most probably not allowed to do so.
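An illustrative snippet (the line items are made up) showing how per-line rounding can drift away from rounding the final sum:
double[] amounts = { 1.114, 2.114, 3.114 };

double sum = 0, roundedSum = 0;
foreach (double a in amounts)
{
    sum += a;                                                      // keep full precision
    roundedSum += Math.Round(a, 2, MidpointRounding.AwayFromZero); // round each line first
}

Console.WriteLine(Math.Round(sum, 2, MidpointRounding.AwayFromZero)); // 6.34
Console.WriteLine(roundedSum.ToString("0.00"));                       // 6.33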
double in C# doesn't hold enough precision for my needs. I am writing a fractal program, and after zooming in a few times I run out of precision.
Is there a data type that can hold more precise floating-point information (i.e. more decimal places) than a double?
Yes, decimal is designed for just that.
However, do be aware that the range of the decimal type is smaller than a double's. That is, double can hold a larger value, but it does so by losing precision. Or, as stated on MSDN:
The decimal keyword denotes a 128-bit data type. Compared to floating-point types, the decimal type has a greater precision and a smaller range, which makes it suitable for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
The primary difference between decimal and double is that decimal uses a base-10 representation while double uses binary floating point. That means that decimal can store typical decimal values, such as money amounts, exactly, while double can often only approximate them with a binary fraction and so is less precise for those values. A decimal is 128 bits, so it takes double the space of a double to store. Calculations on decimal are also slower (measure!).
If you need even more precision, then BigInteger can be used from .NET 4 onwards (you will need to handle the decimal point yourself). Be aware that BigInteger is immutable, so every arithmetic operation on it creates a new instance - if the numbers are large, this might be crippling for performance.
I suggest you look into exactly how precise you need to be. Perhaps your algorithm can work with normalized values that can be smaller? If performance is an issue, one of the built-in floating-point types is likely to be faster.
The .NET Framework 4 introduces the System.Numerics.BigInteger struct, which can hold integers of arbitrary size.
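A rough, hypothetical sketch of handling the decimal point yourself on top of BigInteger (the 50-digit scale is an arbitrary choice for illustration):
using System.Numerics;

// Keep every value scaled by 10^Digits and track the decimal point manually.
const int Digits = 50;
BigInteger scale = BigInteger.Pow(10, Digits);

BigInteger a = 2 * scale;                       // represents 2.000...
BigInteger b = scale / 3;                       // represents 0.333... to 50 digits

// A multiplication doubles the scale, so divide it back out afterwards.
BigInteger product = a * b / scale;             // represents 0.666...

Console.WriteLine((double)product / (double)scale);  // roughly 0.6666...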
Check out BigInteger (.NET 4) if you need even more precision than Decimal gives you.
I looked at decimal in C# but I wasn't 100% sure what it did.
Is it lossy? In C#, writing 1.0000000000001f + 1.0000000000001f results in 2 when using float (double gets you 2.0000000000002, which is correct). Is it possible to add two things with decimal and not get the correct answer?
How many decimal places can I use? I see that MaxValue is 79228162514264337593543950335, but if I subtract 1, how many decimal places can I use?
Are there quirks I should know of? In C# it's 128 bits; in other languages, how many bits is it, and will it work the same way as C#'s decimal does (when adding, dividing, multiplying)?
What you're showing isn't decimal - it's float. They're very different types. f is the suffix for float, aka System.Single. m is the suffix for decimal, aka System.Decimal. It's not clear from your question whether you thought this was actually using decimal, or whether you were just using float to demonstrate your fears.
If you use 1.0000000000001m + 1.0000000000001m you'll get exactly the right value. Note that the double version wasn't able to express either of the individual values exactly, by the way.
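A quick illustration of how the suffix (and therefore the type) changes the result:
Console.WriteLine(1.0000000000001f + 1.0000000000001f); // 2 (float: ~7 significant digits)
Console.WriteLine(1.0000000000001d + 1.0000000000001d); // 2.0000000000002 (double)
Console.WriteLine(1.0000000000001m + 1.0000000000001m); // 2.0000000000002 (decimal, exact)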
I have articles on both kinds of floating point in .NET, and you should read them thoroughly, along with other resources:
Binary floating point (float/double)
Decimal floating point (decimal)
All floating point types have their limits of course, but in particular you should not expect binary floating point to accurately represent decimal values such as 0.1. decimal still can't represent anything that isn't exactly representable in 28/29 decimal digits, though - so if you divide 1 by 3, you won't get the exact answer, of course.
You should also note that the range of decimal is considerably smaller than that of double. So while it can have 28-29 decimal digits of precision, you can't represent truly huge numbers (e.g. 10^200) or minuscule numbers (e.g. 10^-200).
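For example (illustrative values only):
double big = 1e200;                      // fine for a double (max ~1.8e308)
Console.WriteLine(decimal.MaxValue);     // 79228162514264337593543950335, roughly 7.9e28

try
{
    decimal d = (decimal)big;            // far outside decimal's range
    Console.WriteLine(d);
}
catch (OverflowException)
{
    Console.WriteLine("1e200 does not fit in a decimal");
}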
Decimal values in programming are (almost) never 100% accurate. Sometimes it's even better to multiply the decimal value by a very high number and then calculate, but only if you're sure, for example, that the value is always between 0 and 100 (so it won't go out of range of MaxValue).
Floating point is inherently imprecise; some numbers can't be represented faithfully. Decimal is a large floating-point type with high precision. If you look at the page on MSDN you can see there are "28-29 significant digits." The .NET Framework classes are language agnostic; they will work the same in every language that uses .NET.
edit (in response to Jon Skeet): If you initialize a decimal with the numbers above, which have fewer than 28 significant digits each, the values will be stored faithfully. Numbers such as 0.1, which are a repeating sequence in binary, can never be represented exactly by the binary floating-point types, but decimal works in base 10 and stores them exactly.
Possible Duplicate:
Why is floating point arithmetic in C# imprecise?
I have been dealing with some numbers and C#, and the following line of code results in a different number than one would expect:
double num = (3600.2 - 3600.0);
I expected num to be 0.2; however, it turned out to be 0.1999999999998181. Is there any reason why it produces a close, but still different, value?
This is because double is a floating point datatype.
If you want greater accuracy you could switch to using decimal instead.
The literal suffix for decimal is m, so to use decimal arithmetic (and produce a decimal result) you could write your code as
var num = (3600.2m - 3600.0m);
Note that there are disadvantages to using a decimal. It is a 128-bit datatype, as opposed to the 64 bits of a double. This makes it more expensive both in terms of memory and processing. It also has a much smaller range than double.
There is a reason.
The reason is that the way the number is stored in memory, in the case of the double data type, doesn't allow for an exact representation of the number 3600.2. It also doesn't allow for an exact representation of the number 0.2.
0.2 has an infinite representation in binary. If you want to store it in memory or processor registers to perform some calculations, some number close to 0.2 with a finite representation is stored instead. That may not be apparent if you run code like this:
double num = (0.2 - 0.0);
This is because in this case all binary digits available for representing numbers in the double data type are used to represent the fractional part of the number (there is only a fractional part), so the precision is higher. If you store the number 3600.2 in an object of type double, some digits are used to represent the integer part - 3600 - and fewer digits are left for the fractional part. The precision is lower, and the fractional part that is actually stored in memory differs from 0.2 enough that it becomes apparent after conversion from double to string.
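You can see the effect by comparing the two subtractions directly (exact output digits may vary slightly between runtimes):
Console.WriteLine(0.2 - 0.0);        // 0.2
Console.WriteLine(3600.2 - 3600.0);  // roughly 0.1999999999998181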
Change your type to decimal:
decimal num = (3600.2m - 3600.0m);
See the Wikipedia article on floating-point arithmetic; I can't explain it better. I can also suggest reading What Every Computer Scientist Should Know About Floating-Point Arithmetic, or see the related questions on Stack Overflow.