Explicit conversion from Single to Decimal results in different bit representation - c#

If I convert single s into decimal d, I've noticed that its bit representation differs from that of a decimal created directly.
For example:
Single s = 0.01f;
Decimal d = 0.01m;
int[] bitsSingle = Decimal.GetBits((decimal)s);
int[] bitsDecimal = Decimal.GetBits(d);
Returns (middle elements removed for brevity):
bitsSingle:
[0] = 10
[3] = 196608
bitsDecimal:
[0] = 1
[3] = 131072
Both of these are decimal values, and both appear to accurately represent 0.01:
Looking at the spec sheds no light except perhaps:
§4.1.7 Contrary to the float and double data types, decimal fractional
numbers such as 0.1 can be represented exactly in the decimal
representation.
This suggests that the difference is somehow caused by single not being able to accurately represent 0.01 before the conversion, therefore:
Why is this not accurate by the time the conversion is done?
Why do we seem to have two ways to represent 0.01 in the same datatype?

TL;DR
Both decimals precisely represent 0.01. It's just that the decimal format allows multiple bitwise-different values that represent the exact same number.
Explanation
It isn't about single not being able to represent 0.01 precisely. As per the documentation of GetBits:
The binary representation of a Decimal number consists of a 1-bit
sign, a 96-bit integer number, and a scaling factor used to divide the
integer number and specify what portion of it is a decimal fraction.
The scaling factor is implicitly the number 10, raised to an exponent
ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain
the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and
sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which
indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign: 0 means positive, and 1 means negative.
Note that the bit representation differentiates between negative and
positive zero. These values are treated as being equal in all
operations.
The fourth integer of each decimal in your example is 0x00030000 for bitsSingle and 0x00020000 for bitsDecimal. In binary this maps to:
bitsSingle  00000000 00000011 00000000 00000000
            |\-----/ \------/ \---------------/
            |   |        |            |
   sign <---+ unused  exponent      unused
            |   |        |            |
            |/-----\ /------\ /---------------\
bitsDecimal 00000000 00000010 00000000 00000000
NOTE: exponent represents multiplication by negative power of 10
Therefore, in the first case the 96-bit integer is divided by an additional factor of 10 compared to the second -- bits 16 to 23 give the value 3 instead of 2. But that is offset by the 96-bit integer itself, which in the first case is also 10 times greater than in the second (obvious from the values of the first elements).
The difference in observed values can therefore be attributed simply to the fact that the conversion from single uses subtly different logic to derive the internal representation compared to the "straight" constructor.
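You can confirm both points with a short C# sketch (the expected bit values in the comments are the ones reported in the question; the Single-to-Decimal conversion is documented to keep at most 7 significant digits):

using System;

float s = 0.01f;
decimal fromSingle = (decimal)s;   // rounds to at most 7 significant digits -> 0.010
decimal direct = 0.01m;

// Equal as numbers...
Console.WriteLine(fromSingle == direct);                            // True

// ...but encoded differently.
Console.WriteLine(string.Join(", ", decimal.GetBits(fromSingle)));  // 10, 0, 0, 196608
Console.WriteLine(string.Join(", ", decimal.GetBits(direct)));      // 1, 0, 0, 131072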

Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 * 2^-49
Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
Seeing the Data
First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:    # single precision
        int_pack = 'I'
        float_pack = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:  # double precision. all Python floats are this
        int_pack = 'Q'
        float_pack = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits - 1) - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 x 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
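If you would rather follow along in C# than Python, here is a rough equivalent sketch (variable names are mine) that uses BitConverter to pull the same three components out of a 64-bit double and reassemble the value:

using System;

long bits = BitConverter.DoubleToInt64Bits(9.2);

int sign      = (int)((bits >> 63) & 1);        // 1 bit
int exponent  = (int)((bits >> 52) & 0x7FF);    // 11 bits, biased by 1023
long mantissa = bits & 0xFFFFFFFFFFFFFL;        // low 52 bits (the stored fraction)

Console.WriteLine(sign);                        // 0
Console.WriteLine(exponent - 1023);             // 3   (the true exponent)
Console.WriteLine(mantissa);                    // 675539944105574

// Reassemble: (1 + mantissa / 2^52) * 2^(true exponent)
double value = (1.0 + mantissa / Math.Pow(2, 52)) * Math.Pow(2, exponent - 1023);
Console.WriteLine(value.ToString("G17"));       // 9.1999999999999993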
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Which we can then convert from binary to decimal:
1.1499999999999999 x 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Shift mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 x 10^(11-110100)
Convert to decimal:
5179139571476070 x 2^(3-52)
Subtract the exponent:
5179139571476070 x 2^-49
Turn negative exponent into division:
5179139571476070 / 2^49
Multiply exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
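You can verify that last step in C# (a small sketch; 562949953421312 is 2^49, and "G17" prints enough digits to round-trip a double):

using System;

double value = 5179139571476070.0 / 562949953421312.0;  // 5179139571476070 / 2^49
Console.WriteLine(value.ToString("G17"));                // 9.1999999999999993
Console.WriteLine(value == 9.2);                         // True: this IS the double stored for 9.2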
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 x 10^11
Shift the decimal point:
10011 x 10^(11-100)
Subtract the exponent:
10011 x 10^-1
Binary to decimal:
19 x 2^-1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
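A quick C# check of both results (a small sketch): 9.5 survives the round trip through a double unchanged, while 9.2 does not.

using System;

Console.WriteLine(9.5.ToString("G17"));   // 9.5                - exactly representable
Console.WriteLine(9.2.ToString("G17"));   // 9.1999999999999993 - only the nearest representable double
Console.WriteLine(9.5 == 19.0 / 2.0);     // True: 19/2 needs no rounding at all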
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.1111...₃
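If you want to experiment, here is a small, purely illustrative C# helper (my own, not part of the original answer) that expands a fraction digit by digit in a given base, so you can watch 2/3 terminate in base 3 but repeat in base 10:

using System;

// Expand numerator/denominator as "0.xyz..." in the given base (radix <= 10),
// stopping when the expansion terminates or after maxDigits digits.
string Expand(int numerator, int denominator, int radix, int maxDigits)
{
    string digits = "0.";
    int remainder = numerator % denominator;
    for (int i = 0; i < maxDigits && remainder != 0; i++)
    {
        remainder *= radix;
        digits += remainder / denominator;   // next digit in this base
        remainder %= denominator;
    }
    return remainder == 0 ? digits : digits + "...";
}

Console.WriteLine(Expand(2, 3, 10, 12));  // 0.666666666666...
Console.WriteLine(Expand(2, 3, 3, 12));   // 0.2
Console.WriteLine(Expand(1, 2, 10, 12));  // 0.5
Console.WriteLine(Expand(1, 2, 3, 12));   // 0.111111111111...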
Why are floating point numbers inaccurate?
Because often-times, they are approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and in general they are approximating real (possibly irrational) numbers which may not be representable in finitely many digits in any base.
While all of the other answers are good, there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
If you limit your math needs to rational numbers only, though, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions, just like in high school math (e.g. a/b * c/d = ac/bd).
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
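As a rough sketch of what such a fraction type looks like (my own illustration, not from any particular library; it assumes non-zero, positive denominators and uses System.Numerics.BigInteger so the integers can grow):

using System;
using System.Numerics;

var a = new Fraction(1, 10);       // 0.1, held exactly
var b = new Fraction(2, 10);       // 0.2, reduced to 1/5
Console.WriteLine(a + b);          // 3/10 - no rounding anywhere
Console.WriteLine(a * b);          // 1/50

// A minimal exact-fraction type: value = Numerator / Denominator, kept in lowest terms.
readonly struct Fraction
{
    public BigInteger Numerator { get; }
    public BigInteger Denominator { get; }

    public Fraction(BigInteger n, BigInteger d)
    {
        BigInteger g = BigInteger.GreatestCommonDivisor(n, d);  // reduce after every operation
        Numerator = n / g;
        Denominator = d / g;
    }

    public static Fraction operator *(Fraction x, Fraction y) =>
        new Fraction(x.Numerator * y.Numerator, x.Denominator * y.Denominator);

    public static Fraction operator +(Fraction x, Fraction y) =>
        new Fraction(x.Numerator * y.Denominator + y.Numerator * x.Denominator,
                     x.Denominator * y.Denominator);

    public override string ToString() => $"{Numerator}/{Denominator}";
}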
TL;DR
For hardware-accelerated arithmetic, only a limited set of rational numbers can be represented. Every non-representable number is approximated. Some numbers (e.g. the irrationals) can never be represented exactly, no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is a finite one (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits only allow you to distinguish among 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. The values that can be represented are of the form m * 2^e for some integers m and e.
You might come up with a different numeration system, base 10 for instance, where 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-point numbers are extremely accurate. They can represent any number in a very wide range with as many as 15 exact digits. For daily life computations, 4 or 5 digits are more than enough. You will never really need those 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b * 5^c).
On the other hand the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However the big problem is that such a format is a pain to do calculations on. For two reasons.
If you want to have exactly one representation of each number then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest common divisor calculation.
If after your calculation you end up with an unrepresentable result because the numerator or denominator has grown too large, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually they do it in combination with arbitrary precision. This avoids the need to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
Some languages also offer decimal floating-point types. These are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). They are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.

How does unchecked int overflow work in C#?

We know that an int value has a maximum value of 2^31 - 1 and a minimum value of -2^31. If we were to set int to the maximum value:
int x = int.MaxValue;
And we take that x and add one to it in an unchecked block:
unchecked
{
x++;
}
Then we get x = the minimum value of int. My question is: why does this happen, and how does it happen in terms of binary?
In C#, the built-in integers are represented by a sequence of bit values of a predefined length. For the basic int datatype that length is 32 bits, which can represent only 4,294,967,296 different possible values (since that is 2^32).
Since int can hold both positive and negative numbers, the sign of the number must be encoded somehow. This is done with the first bit: if the first bit is 1, then the number is negative.
Here are the int values laid out on a number-line in hexadecimal and decimal:
Hexadecimal Decimal
----------- -----------
0x80000000 -2147483648
0x80000001 -2147483647
0x80000002 -2147483646
... ...
0xFFFFFFFE -2
0xFFFFFFFF -1
0x00000000 0
0x00000001 1
0x00000002 2
... ...
0x7FFFFFFE 2147483646
0x7FFFFFFF 2147483647
As you can see from this chart, the bits that represent the smallest possible value are what you would get by adding one to the largest possible value, while ignoring the interpretation of the sign bit. When a signed number is added in this way, it is called "integer overflow". Whether or not an integer overflow is allowed or treated as an error is configurable with the checked and unchecked statements in C#.
This representation is called 2's complement.
You can check this link if you want to go deeper.
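For completeness, here is a short sketch showing both behaviours side by side: in an unchecked context the addition wraps around to int.MinValue, while a checked context throws an OverflowException instead.

using System;

int x = int.MaxValue;               // 0x7FFFFFFF

unchecked
{
    Console.WriteLine(x + 1);       // -2147483648: the bits wrap to 0x80000000
}

try
{
    checked
    {
        Console.WriteLine(x + 1);   // never reached
    }
}
catch (OverflowException)
{
    Console.WriteLine("checked arithmetic throws instead of wrapping");
}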
int has a maximum value of 2^31 - 1 because int is an alias for Int32, which uses 4 bytes (32 bits) of memory to represent the value.
Now let's come to the concept of int.MaxValue + 1. Here you need to know about the signed number representations that are used to store negative values. In a plain binary representation there is no such thing as a negative number; negative values are instead encoded using schemes such as one's complement and two's complement.
Let's say my int1 has a storage of 1 byte, i.e. 8 bits. The maximum unsigned value you can store in int1 is 2^8 - 1 = 255. Now let's add 1 to this value:
  11111111
+ 00000001
----------
 100000000
The output is 100000000 = 256 in decimal, which is beyond the storage capacity of int1: the carry spills into a ninth bit that does not exist, so only the low 8 bits are kept. In a signed type the same kind of carry flows into the sign bit instead, turning the largest positive value into the most negative one (the first bit indicating a negative value).
This is the reason adding 1 to int.MaxValue becomes int.MinValue, i.e.
int.MaxValue + 1 == int.MinValue
Okay, I'm going to assume for the sake of convenience that there's a 4-bit integer type in C#. The most significant bit is used to store the sign, the remaining three are used to store the magnitude.
The max number that can be stored in such a representation is +7 (positive 7 in the base 10). In binary, this is:
0111
Now let's add positive one:
0111
+0001
_____
1000
Whoops, that carried over from the magnitudes and flipped the sign bit! The sum you see above is actually the number -8, the smallest possible number that can be stored in this representation. If you're not familiar with two's complement, which is how signed numbers are commonly represented in binary, you might think that number is negative zero (which would be the case in a sign and magnitude representation). You can read up on two's complement to understand this better.
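C# has no 4-bit integer type, so here is a small illustrative sketch (the helper and its name are mine) that simulates one by masking to the low 4 bits:

using System;

// Interpret the low 4 bits of a value as a 4-bit two's-complement number (-8..7).
int ToFourBit(int value)
{
    int low = value & 0b1111;             // keep only the 4 bits
    return low >= 8 ? low - 16 : low;     // if bit 3 is set, the value is negative
}

Console.WriteLine(ToFourBit(0b0111));     //  7  (the maximum)
Console.WriteLine(ToFourBit(0b0111 + 1)); // -8  (overflow wraps to the minimum)
Console.WriteLine(ToFourBit(0b1111));     // -1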

Why does a float variable stop incrementing at 16777216 in C#?

float a = 0;
while (true)
{
    a++;
    if (a > 16777216)
        break; // Will never break... a stops at 16777216
}
Can anyone explain this to me why a float value stops incrementing at 16777216 in this code?
Edit:
Or even simpler:
float a = 16777217; // a becomes 16777216
Short roundup of IEEE-754 floating point numbers (32-bit) off the top of my head:
1 bit sign (0 means positive number, 1 means negative number)
8 bit exponent (with -127 bias, not important here)
23 bits "mantissa"
With exceptions for the exponent values 0 and 255, you can calculate the value as: (sign ? -1 : +1) * 2^exponent * (1.0 + mantissa)
The mantissa bits represent binary digits after the decimal separator. For example, 1001 0000 0000 0000 0000 000 = 2^-1 + 2^-4 = .5 + .0625 = .5625. The digit in front of the separator is not stored but implicitly assumed to be 1 (for the special exponent value 0 a leading 0 is assumed instead, but that's not important here), so this example mantissa represents the significand 1.5625.
Now to your example:
16777216 is exactly 2^24, and would be represented as a 32-bit float like so:
sign = 0 (positive number)
exponent = 24 (stored as 24+127=151=10010111)
mantissa = .0
As 32 bits floating-point representation: 0 10010111 00000000000000000000000
Therefore: Value = (+1) * 2^24 * (1.0 + .0) = 2^24 = 16777216
Now let's look at the number 16777217, or exactly 2^24 + 1:
sign and exponent are the same
mantissa would have to be exactly 2^-24 so that (+1) * 2^24 * (1.0 + 2^-24) = 2^24 + 1 = 16777217
And here's the problem. The mantissa cannot have the value 2^-24 because it only has 23 bits, so the number 16777217 just cannot be represented with the accuracy of 32-bit floating-point numbers!
16777217 cannot be represented exactly with a float. The next highest number that a float can represent exactly is 16777218.
So, you try to increment the float value 16777216 to 16777217, which cannot be represented in a float.
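A short sketch that shows this in code (casting to int just to avoid formatting differences between runtimes):

using System;

float a = 16777216f;                          // 2^24
Console.WriteLine((int)(a + 1f));             // 16777216 - the +1 is rounded away
Console.WriteLine(16777216f == 16777217f);    // True     - both literals map to the same float
Console.WriteLine((int)16777218f);            // 16777218 - representable again (the gap is now 2)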
When you look at that value in its binary representation, you'll see that it's a one and many zeroes, namely 1 0000 0000 0000 0000 0000 0000, or exactly 2^24. That means, at 16777216, the number has just grown by one digit.
As it's a floating point number, this means that the position of the last digit that is still stored (i.e. the limit of its precision) shifts to the left as well.
What you're seeing is that the gap between representable values (the last digit of precision) has just grown to something bigger than one, so adding one doesn't make any difference any more.
Imagine this in decimal form. Suppose you had the number:
1.000000 * 10^6
or 1,000,000. If all you had were six digits of accuracy, adding 0.5 to this number would yield
1.0000005 * 10^6
However, the default IEEE 754 rounding mode is round-to-nearest, ties-to-even. In this instance, every time you increment this value, the rounding step in the floating point unit brings it back down to 16,777,216, or 2^24. Singles in IEEE 754 are represented as:
+/- exponent (1.) fraction
where the "1." is implied and the fraction is another 23 bits, all zeros, in this case. The extra binary 1 will spill into the guard digit, carry down to the rounding step, and be deleted each time, no matter how many times you increment it. The ulp or unit in the last place will always be zero. The last successful increment is from:
+2^23 * (+1.) 11111111111111111111111 -> +2^24 * (1.) 00000000000000000000000

Why do float and int have such different maximum values even though they're the same number of bits? [duplicate]

As you probably know, both of these types are 32 bits wide. int can hold only integer numbers, whereas float also supports floating-point numbers (as the type names suggest).
How is it possible then that the max value of int is 2^31, and the max value of float is 3.4 * 10^38, while both of them are 32 bits?
I would think that int's maximum value should be higher than float's, because it doesn't spend any bits on a fractional part and accepts only integer numbers. I'd be glad for an explanation.
Your intuition quite rightly tells you that there can be no more information content in one than the other, because they both have 32 bits. But that doesn't mean we can't use those bits to represent different values.
Suppose I invent two new datatypes, uint4 and foo4. uint4 uses 4 bits to represent an integer, in the standard binary representation, so we have
bits value
0000 0
0001 1
0010 2
...
1111 15
But foo4 uses 4 bits to represent these values:
bits value
0000 0
0001 42
0010 -97
0011 1
...
1110 pi
1111 e
Now foo4 has a much wider range of values than uint4, despite having the same number of bits! How? Because there are some uint4 values that can't be represented by foo4, so those 'slots' in the bit mapping are available for other values.
It is the same for int and float - they can both store values from a set of 2^32 values, just different sets of 2^32 values.
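You can make this concrete by reinterpreting the same 32 bits both ways; a small sketch using BitConverter:

using System;

// Take the 32 bits the compiler produces for 9.2f and read them back as an int.
float f = 9.2f;
int sameBits = BitConverter.ToInt32(BitConverter.GetBytes(f), 0);

Console.WriteLine(sameBits);                                                // 1091777331 - as an int, nothing like 9.2
Console.WriteLine(sameBits.ToString("X8"));                                 // 41133333
Console.WriteLine(BitConverter.ToSingle(BitConverter.GetBytes(sameBits), 0)); // back to ~9.2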
A float can store a larger value, but it will not be precise even in the digits before the decimal point.
Consider the following example:
float a = 123456789012345678901234567890f; //30 digits
Console.WriteLine(a); // 1.234568E+29
Notice that barely any precision is kept.
An integer on the other hand will always precisely store any number within its range of values.
For the sake of comparison, let's look at a double precision floating point number:
double a = 123456789012345678901234567890d; //30 digits
Console.WriteLine(a); // 1.23456789012346E+29
Notice that roughly twice as many significant digits are preserved.
These types are based on the IEEE 754 floating-point specification; that is why this is possible. Please read that documentation. It is not just about how many bits there are.
The hint is in the "floating" part of "floating point". What you say basically assumes fixed point. A floating point number does not "reserve space" for the digits after the decimal point - it has a limited number of digits (23 binary) and remembers what power of two to multiply it by.

Is there any practical difference between the .net decimal values 1m and 1.0000m?

Is there any practical difference between the .net decimal values 1m and 1.0000m?
The internal storage is different:
1m : 0x00000001 0x00000000 0x00000000 0x00000000
1.0000m : 0x000186a0 0x00000000 0x00000000 0x00050000
But, is there a situation where the knowledge of "significant digits" would be used by a method in the BCL?
I ask because I'm working on a means of compressing the space required for decimal values for disk storage or network transport, and am toying with the idea of "normalizing" the value before I store it to improve its compressibility. But I'd like to know if it is likely to cause issues down the line. I'm guessing that it should be fine, but only because I don't see any methods or properties that expose the precision of the value. Does anyone know otherwise?
The reason for the difference in encoding is that the Decimal data type stores the number as a whole number (a 96-bit integer), together with a scale that is used to form the divisor and obtain the fractional value. The value is essentially
integer / 10^scale
Internally the Decimal type is represented as 4 Int32s; see the documentation of Decimal.GetBits for more detail. In summary, GetBits returns an array of 4 Int32s, where each element represents the following portion of the Decimal encoding:
Element 0,1,2 - Represent the low, middle and high 32 bits on the 96 bit integer
Element 3 - Bits 0-15 Unused
Bits 16-23 exponent which is the power of 10 to divide the integer by
Bits 24-30 Unused
Bit 31 the sign where 0 is positive and 1 is negative
So in your example, very simply put, when 1.0000m is encoded as a decimal the actual representation is 10000 / 10^4, while 1m is represented as 1 / 10^0: mathematically the same value, just encoded differently.
If you use the native .NET operators for the decimal type and do not manipulate/compare the bit/bytes yourself you should be safe.
You will notice that string conversion also takes this binary representation into consideration and produces different strings, so you need to be careful if you ever rely on the string representation.
The decimal type tracks scale because it's important in arithmetic. If you do long multiplication, by hand, of two numbers — for instance, 3.14 * 5.00 — the result has 6 digits of precision and a scale of 4.
To do the multiplication, ignore the decimal points (for now) and treat the two numbers as integers.
    3.14
  * 5.00
  ------
    0000   -- 0 * 314 (0 in the one's place)
   00000   -- 0 * 314 (0 in the 10's place)
  157000   -- 5 * 314 (5 in the 100's place)
  ------
  157000
That gives you the unscaled results. Now, count the total number of digits to the right of the decimal point in the expression (that would be 4) and insert the decimal point 4 places to the left:
15.7000
That result, while equivalent in value to 15.7, is more precise than the value 15.7. The value 15.7000 has 6 digits of precision and a scale of 4; 15.7 has 3 digits of precision and a scale of 1.
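Decimal arithmetic in .NET behaves just like that long multiplication; a quick check:

using System;

decimal product = 3.14m * 5.00m;
Console.WriteLine(product);                                      // 15.7000 - the scale of 4 is preserved
Console.WriteLine(product == 15.7m);                             // True    - same value, different scale
Console.WriteLine((decimal.GetBits(product)[3] >> 16) & 0xFF);   // 4       - the scale stored in element 3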
If one is trying to do precision arithmetic, it is important to track the precision and scale of your values and results, as it tells you something about the precision of your results. (Note that precision isn't the same as accuracy: measure something with a ruler graduated in 1/10ths of an inch, and the best you can say about the resulting measurement, no matter how many trailing zeros you put to the right of the decimal point, is that it is accurate to, at best, 1/10th of an inch. Another way of putting it would be to say that your measurement is accurate, at best, to within +/- 5/100ths of the stated value.)
The only reason I can think of is so that invoking ToString returns the exact textual representation from the source code.
Console.WriteLine(1m); // 1
Console.WriteLine(1.000m); // 1.000
