I have the following test code:
decimal test1 = 0.0500000000000000045656554454M;
double test2 = (double)test1;
This results in test2 showing as 0.05 when debugging. Why is it being rounded to 2 decimal places?
Thanks
The value from that conversion is actually 0.050000000000000009714451465470119728706777095794677734375, as shown by DoubleConverter. That's the exact value of the nearest double to the decimal you converted.
When you use the debugger or normal string formatting, you aren't usually shown the exact result.
The reason is that double can contain no more than 15-16 significant digits.
See double (C# Reference).
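If it helps to see the same thing outside the debugger, here's a minimal sketch (the comments assume the .NET Framework default formatting, which rounds to 15 significant digits; newer runtimes print round-trippable strings by default):

using System;

class Program
{
    static void Main()
    {
        decimal test1 = 0.0500000000000000045656554454M;
        double test2 = (double)test1;

        Console.WriteLine(test2);                 // 0.05 - rounded for display only
        Console.WriteLine(test2.ToString("R"));   // round-trip format reveals the extra digits
        Console.WriteLine(test2.ToString("G17")); // 17 significant digits
    }
}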
You should take a look at this article about floating-point arithmetic and .NET. The rounding occurs due to a combination of how the number gets converted to a double-precision floating-point value and how it is formatted when printed, since .NET defaults to 15 significant digits for doubles, and your original number contains decimal digits past the 15th.
You could try test2.ToString("0.000000000000000000000000") to see if you might squeeze out any more information from the number, but I doubt it will.
There are two reasons I can think of:
Due to the different representations of decimal and double. See this article for more information about floating-point representation. It is possible that the double simply doesn't have enough bits to represent the whole number exactly.
Due to the way numbers are printed. It is possible that your printing options specify fewer decimal places than your number actually has - in which case you'll get a rounded result.
I would tweak the printing options first, to make sure the problem isn't there.
But know that the only solution for the first problem is to stop using double :-)
I have a class that does some length calculations based on a height on a ticket. It's been in place for years and working quite well... Until we got a unique ticket size.
They are entered by sales people in inches, are normally nice numbers like 3, 4 or 3.5, and are stored in a database. This one, however, is 3.66666 recurring (or 11/3), but it is being entered as 3.666 and causing the calculation to fail due to lost precision.
I have thought of a bit of a hack to restore precision for certain numbers, but thought maybe someone knows of a better way of getting a 3.666 or a 93.1333 back to its number + two thirds status?
Thanks,
Mick.
As you explained in the comments, I see your point now. I've checked the numbers:
168000 / 3.666 = 45826.5139
168000 / 3.666666 = 45818.1901488
168000 * 3 / 11 = 45818.1818182
It makes a difference of 8 tickets. I have a feeling that your issue can be solved in many ways. On the side of user input for example. Or on the side of database. But back to your question:
How do I convert 3.666 or a 93.1333 back to its number + two thirds status?
You are looking to convert a decimal (or double) into a fraction.
There is already a question on SO: Algorithm for simplifying decimal to fractions, which has many answers. I've tested some of them, and none of them were satisfying. Some of them don't even handle recurrence. Perhaps I've missed the correct one; you can look for yourself.
Anyway, I believe you don't need to fully implement a conversion from 1.666 to 5/3, since it's not easy and you are dealing with real-world sizes. You've said that most of the time the numbers are around 3, 3.5, 4 etc. So I suggest you take a look at the question I've linked above and search for an algorithm that detects the recurring digits. It was also discussed here: How to know the repeating decimal in a fraction?
After that, just convert 1.666 to 1.666666, since a millionth of an inch won't mess up your calculations, as the numbers above show.
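If you do go down the fraction route, even a brute-force search over small denominators may be enough for real-world ticket sizes. This is only a sketch under that assumption - the class name, the tuple return type and the maximum denominator of 64 are my own choices, not anything from your code:

using System;

static class FractionSnapper
{
    // Finds the fraction with denominator <= maxDenominator that is closest
    // to the given value, e.g. Snap(3.666m) returns (11, 3).
    public static (long Numerator, long Denominator) Snap(decimal value, int maxDenominator = 64)
    {
        long bestNum = 0, bestDen = 1;
        decimal bestError = decimal.MaxValue;

        for (long den = 1; den <= maxDenominator; den++)
        {
            long num = (long)Math.Round(value * den);
            decimal error = Math.Abs(value - (decimal)num / den);
            if (error < bestError)
            {
                bestError = error;
                bestNum = num;
                bestDen = den;
            }
        }

        return (bestNum, bestDen);
    }
}

With the snapped fraction, 168000 / (11m / 3m) gives the 45818 figure rather than the 45826 you get from 3.666.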
It is difficult to get an exact value out of a double, because double is a floating-point type.
The MSDN says:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used, because the floating-point number might not exactly approximate the decimal number.
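As a concrete illustration of the first consequence (the numbers here are just a generic example, not taken from the question):

using System;

class Program
{
    static void Main()
    {
        double a = 0.1 * 3;   // accumulates binary rounding error
        double b = 0.3;

        Console.WriteLine(a);        // usually displayed as 0.3
        Console.WriteLine(b);        // 0.3
        Console.WriteLine(a == b);   // False - the least significant bits differ
    }
}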
I looked at decimal in C# but I wasn't 100% sure what it did.
Is it lossy? In C#, writing 1.0000000000001f + 1.0000000000001f results in 2 when using float (double gets you 2.0000000000002, which is correct). Is it possible to add two things with decimal and not get the correct answer?
How many decimal places can I use? I see the MaxValue is 79228162514264337593543950335, but if I subtract 1, how many decimal places can I use?
Are there quirks I should know of? In C# it's 128 bits; in other languages, how many bits is it, and will it work the same way as C#'s decimal does (when adding, dividing, multiplying)?
What you're showing isn't decimal - it's float. They're very different types. f is the suffix for float, aka System.Single. m is the suffix for decimal, aka System.Decimal. It's not clear from your question whether you thought this was actually using decimal, or whether you were just using float to demonstrate your fears.
If you use 1.0000000000001m + 1.0000000000001m you'll get exactly the right value. Note that the double version wasn't able to express either of the individual values exactly, by the way.
I have articles on both kinds of floating point in .NET, and you should read them thoroughly, along with other resources:
Binary floating point (float/double)
Decimal floating point (decimal)
All floating-point types have their limits of course, but in particular you should not expect binary floating point to accurately represent decimal values such as 0.1. Decimal still can't represent anything that isn't exactly representable in 28/29 decimal digits, though - so if you divide 1 by 3, you won't get the exact answer, of course.
You should also note that the range of decimal is considerably smaller than that of double. So while it can have 28-29 decimal digits of precision, you can't represent truly huge numbers (e.g. 10^200) or minuscule numbers (e.g. 10^-200).
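To see the three types side by side, here's a small sketch (the output comments reflect default formatting and may vary slightly between runtimes):

using System;

class Program
{
    static void Main()
    {
        // float (~7 significant digits): the trailing 1s are lost entirely.
        float f = 1.0000000000001f + 1.0000000000001f;
        Console.WriteLine(f);   // 2

        // double (15-16 significant digits): close, but each operand was already
        // only the nearest representable binary value.
        double d = 1.0000000000001 + 1.0000000000001;
        Console.WriteLine(d);   // 2.0000000000002 (newer runtimes may show more digits)

        // decimal (28-29 significant digits): both operands and the sum are exact.
        decimal m = 1.0000000000001m + 1.0000000000001m;
        Console.WriteLine(m);   // 2.0000000000002
    }
}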
Decimals in programming are (almost) never 100% accurate. Sometimes it's even better to multiply the decimal value by a very large number and then calculate with that, but only if you're sure, for example, that the value is always between 0 and 100 (so it won't go out of the range of the maximum value).
Floating point is inherently imprecise. Some numbers can't be represented faithfully. Decimal is a large floating-point type with high precision. If you look at the page on MSDN you can see it has "28-29 significant digits." The .NET Framework classes are language agnostic; they will work the same in every language that uses .NET.
Edit (in response to Jon Skeet): if you initialize a Decimal with the numbers above, which have fewer than 28 digits each after the decimal point, the values will be stored faithfully - the 128-bit format has room for them. It is binary floating point where some numbers, such as 0.1, can never be represented exactly, because they are a repeating sequence in binary.
So, WPF calls ToString() on objects when generating TextColumns in a DataGrid, and then I found out a strange thing about the ToString() method:
Check this out :
object a = 0.3780000001;//Something like this
Console.WriteLine(a.ToString());//Gets truncated in some cases
First, I thought it was just rounding, but a few times I was able to reproduce this behavior on doubles with fewer than 15 digits after the decimal point. Am I missing something?
To the computer, 0.378 and 0.378000...0001 are the same number. See this question: Why is floating point arithmetic in C# imprecise?
As defined on the MSDN page for System.Double, the double type only carries a maximum of fifteen digits of precision (seventeen internally). Your figure contains 18 significant digits; this is beyond the precision of System.Double.
Use decimal instead of float for a more precise type.
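A quick way to see the difference, assuming the real figure was something like 0.378000000000000001 (18 significant digits, matching the count above):

using System;

class Program
{
    static void Main()
    {
        // 18 significant digits - more than a double can hold, so the
        // trailing 1 is lost as soon as the literal is parsed.
        double d = 0.378000000000000001;
        Console.WriteLine(d.ToString("G17"));   // 0.378 - the trailing 1 is gone

        // decimal holds 28-29 significant digits, so nothing is lost.
        decimal m = 0.378000000000000001m;
        Console.WriteLine(m);                   // 0.378000000000000001
    }
}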
By default, Double.ToString() rounds to 15 significant digits, but if you really want to use the double data type and you need those 2 extra digits, you can use the "G17" format string:
double x = 3.1415926535897932;
string pi = x.ToString("G17");
This will give you a string with the full 17 digits.
I wouldn't be so quick to assume that you've found a bug in something as crucial as C#'s ToString implementation.
The behaviour you're experiencing is caused by the fact that a floating-point value is stored imprecisely in computer memory (also see this question).
Maybe the number format's accuracy range doesn't contain that number? (i.e., a floating-point type only has accuracy to a limited number of significant figures.)
If you're data-binding the value, you can supply a ValueConverter which formats the number any way you want.
http://msdn.microsoft.com/en-us/library/system.windows.data.ivalueconverter.aspx
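A minimal sketch of such a converter (the class name and the "G17" format are just example choices; you would register it as a resource and set it as the binding's Converter):

using System;
using System.Globalization;
using System.Windows.Data;

public class DoubleFormatConverter : IValueConverter
{
    // Format the bound double with round-trip precision (or any format you prefer).
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return value is double d ? d.ToString("G17", culture) : value;
    }

    // Parse the text back into a double when the cell is edited.
    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        return double.TryParse(value as string, NumberStyles.Float, culture, out var d)
            ? d
            : Binding.DoNothing;
    }
}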
Set a to be a decimal and it will print correctly:
decimal a = 0.378000000000000001m;
Console.WriteLine(a.ToString());
You could have a common decimal format setting to use all the time, e.g.:
double a = 0.378000000000000001;
Console.WriteLine(a.ToString(Settings.DecimalFormat)); // Settings.DecimalFormat holds your shared format string
(Note that the variable needs to be typed as double rather than object, since object.ToString() doesn't take a format argument.)
I'm using the following piece of code and under some mysterious circumstances the result of the addition is not as it's supposed to be:
double _west = 9.482935905456543;
double _off = 0.00000093248155508263153;
double _lon = _west + _off;
// check for the expected result
Debug.Assert(_lon == 9.4829368379380981);
// sometimes i get 9.48293685913086 for _lon (which is wrong)
I'm using some native DLLs within my application, and I suspect that one of those DLLs is responsible for this 'miscalculation', but I need to figure out which one.
Can anyone give me a hint how to figure out the root of my problem?
double is not completely accurate; try using decimal instead.
The advantage of using double and float over decimal is performance.
At first I thought this was a rounding error, but actually it is your assertion that is wrong. Try comparing against the entire result of your calculation, without any arbitrary rounding on your part.
Try this:
using System;

class Program
{
    static void Main()
    {
        double _west = 9.482935905456543;
        double _off = 0.00000093248155508263153;
        double _lon = _west + _off;

        // check for the expected result
        Console.WriteLine(_lon == 9.48293683793809808263153);
    }
}
In the future though it is best to use System.Decimal in cases where you need to avoid rounding errors that are usually associated with the System.Single and System.Double types.
That being said, however, this is not the case here. By arbitrarily rounding the number at a given point, you are assuming that the type will also round at that same point, which is not how it works. Floating-point numbers are stored to their maximum representational capacity, and only once that threshold has been reached does rounding take place.
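One way to see what the runtime actually computed, rather than guessing where the rounding falls (the exact digits printed depend on the value the runtime produced):

using System;

class Program
{
    static void Main()
    {
        double _west = 9.482935905456543;
        double _off = 0.00000093248155508263153;
        double _lon = _west + _off;

        // "R" and "G17" print enough digits to round-trip the stored value,
        // so you see the number the assertion is really comparing against.
        Console.WriteLine(_lon.ToString("R"));
        Console.WriteLine(_lon.ToString("G17"));
    }
}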
The problem is that Double only has a precision of 15-16 digits (and you seem to need more precision in your example), whereas Decimal has a precision of 28-29. How are you converting between Double and Decimal?
You are being bitten by rounding and precision problems. See this. Decimal might help. Go here for details on conversions and rounding.
From MSDN:
When you convert float or double to decimal, the source value is converted to decimal representation and rounded to the nearest number after the 28th decimal place if required. Depending on the value of the source value, one of the following results may occur:
If the source value is too small to be represented as a decimal, the result becomes zero.
If the source value is NaN (not a number), infinity, or too large to be represented as a decimal, an OverflowException is thrown.
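Both cases from that quote are easy to reproduce; the magnitudes below are just examples (decimal's range is roughly ±7.9 × 10^28, and its smallest non-zero value is 10^-28):

using System;

class Program
{
    static void Main()
    {
        // Too small to be represented as a decimal: the result becomes zero.
        double tooSmall = 1e-30;
        Console.WriteLine((decimal)tooSmall == 0m);   // True

        // Too large to be represented as a decimal: OverflowException.
        double tooLarge = 1e30;
        try
        {
            Console.WriteLine((decimal)tooLarge);
        }
        catch (OverflowException)
        {
            Console.WriteLine("1e30 does not fit in a decimal");
        }
    }
}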
You cannot accurately represent every decimal floating-point number with a binary floating-point number. This isn't even directly related to how "small" the decimal number is; some numbers just don't "fit" in base 2 nicely.
Using a longer bit width can help in most cases, but not always.
To specify your constants with decimal (128-bit floating-point) precision, use these declarations:
decimal _west = 9.482935905456543m;
decimal _off = 0.00000093248155508263153m;
decimal _lon = _west + _off;
Does anyone know of an elegant way to get the decimal part of a number only? In particular, I am looking to get the exact number of places after the decimal point so that the number can be formatted appropriately. I was wondering if there is a way to do this without any kind of string extraction using the culture-specific decimal separator...
For example
98.0 would be formatted as 98
98.20 would be formatted as 98.2
98.2765 would be formatted as 98.2765 etc.
If it's only for formatting purposes, just calling ToString will do the trick, I guess?
double d = (double)5 / 4;
Console.WriteLine(d.ToString()); // prints 1.25
d = (double)7 / 2;
Console.WriteLine(d.ToString()); // prints 3.5
d = 7;
Console.WriteLine(d.ToString()); // prints 7
That will, of course, format the number according to the current culture (meaning that the decimal sign, thousand separators and such will vary).
Update
As Clement H points out in the comments, if we are dealing with very large numbers, at some point d.ToString() will return a string with scientific formatting instead (such as "1E+16" instead of "10000000000000000"). One way to overcome this problem, and force the full number to be printed, is to use d.ToString("0.#"), which will also result in the same output for lower numbers as the code sample above produces.
You can get all of the relevant information from the Decimal.GetBits method assuming you really mean System.Decimal. (If you're talking about decimal formatting of a float/double, please clarify the question.)
Basically GetBits will return you 4 integers in an array.
You can use the scaling factor (the fourth integer, after masking out the sign) to indicate the number of decimal places, but you should be aware that it's not necessarily the number of significant decimal places. In particular, the decimal representations of 1 and 1.0 are different (the former is 1/1, the latter is 10/10).
Unfortunately, manipulating the 96 bit integer is going to require some fiddly arithmetic unless you can use .NET 4.0 and BigInteger.
To be honest, you'll get a simpler solution by using the built in formatting with CultureInfo.InvariantCulture and then finding everything to the right of "."
Just to expand on the point about GetBits, this expression gets the scaling factor from a decimal called foo:
(decimal.GetBits(foo)[3] & 16711680)>>16
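Wrapped into a helper and checked against the formatting examples from the question (a small sketch; note the caveat above that trailing zeros are part of the scale):

using System;

class Program
{
    // The scale (number of digits after the decimal point) stored in a
    // System.Decimal; 0x00FF0000 is the same mask as 16711680 above.
    static int GetScale(decimal foo)
    {
        return (decimal.GetBits(foo)[3] & 0x00FF0000) >> 16;
    }

    static void Main()
    {
        Console.WriteLine(GetScale(98m));       // 0
        Console.WriteLine(GetScale(98.20m));    // 2 - the trailing zero counts
        Console.WriteLine(GetScale(98.2765m));  // 4
    }
}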
You could use Math.Truncate (the C# equivalent of VB's Int() function) to get the whole number component, then subtract it from the original.
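For example, a minimal sketch of that approach:

using System;

class Program
{
    static void Main()
    {
        decimal value = 98.2765m;
        decimal whole = Math.Truncate(value);   // 98
        decimal fraction = value - whole;       // 0.2765

        Console.WriteLine(whole);
        Console.WriteLine(fraction);
    }
}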