I am implementing RFC 4506 (XDR) in C#.
Does the binary format for floats in C# (BitConverter.GetBytes) use the IEEE standard?
How would I convert a C# float into the XDR binary format for an IEEE single-precision floating-point number? I could do the legwork manually, but I would like to know if there is an existing method to do this.
The standard defines the floating-point data type "float" (32 bits or
4 bytes). The encoding used is the IEEE standard for normalized
single-precision floating-point numbers [IEEE]. The following three
fields describe the single-precision floating-point number:
S: The sign of the number. Values 0 and 1 represent positive and
negative, respectively. One bit.
E: The exponent of the number, base 2. 8 bits are devoted to this
field. The exponent is biased by 127.
F: The fractional part of the number's mantissa, base 2. 23 bits
are devoted to this field.
Therefore, the floating-point number is described by:
(-1)**S * 2**(E-Bias) * 1.F
the exact layout is as follows.
+-------+-------+-------+-------+
|byte 0 |byte 1 |byte 2 |byte 3 |              SINGLE-PRECISION
S|   E   |           F          |             FLOATING-POINT NUMBER
+-------+-------+-------+-------+
1|<- 8 ->|<-------23 bits------>|
<------------32 bits------------>
Just as the most and least significant bytes of a number are 0 and 3,
the most and least significant bits of a single-precision floating-
point number are 0 and 31. The beginning bit (and most significant
bit) offsets of S, E, and F are 0, 1, and 9, respectively. Note that
these numbers refer to the mathematical positions of the bits, and
NOT to their actual physical locations (which vary from medium to
medium).
.NET uses the IEEE 754 representation as well, so BitConverter.GetBytes gets you the bytes you need.
It is, however, only half the story: IEEE 754 specifies how the bits are interpreted but does not demand a physical byte layout. Like most networking standards, the RFC follows the traditional Unix preference for big-endian byte order.
On practically any machine that can run .NET you therefore need to reverse the bytes. Thus:
public static byte[] XdrFloat(float value) {
    byte[] bytes = BitConverter.GetBytes(value);
    if (BitConverter.IsLittleEndian) Array.Reverse(bytes);
    return bytes;
}
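Should you also need the reverse conversion (XDR bytes back into a float), a symmetric sketch might look like this; the method name is mine:

public static float XdrToFloat(byte[] bytes) {
    // Work on a copy so the caller's buffer is not reversed in place.
    byte[] copy = (byte[])bytes.Clone();
    if (BitConverter.IsLittleEndian) Array.Reverse(copy);
    return BitConverter.ToSingle(copy, 0);
}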
Related
I'm working with a binary file (3d model file for an old video game) in C#. The file format isn't officially documented, but some of it has been reverse-engineered by the game's community.
I'm having trouble understanding how to read/write the 4-byte floating point values. I was provided this explanation by a member of the community:
For example, the bytes EE 62 ED FF represent the value -18.614.
The bytes are little endian ordered. The first 2 bytes represent the decimal part of the value, and the last 2 bytes represent the whole part of the value.
For the decimal part, 62 EE converted to decimal is 25326. This represents the numerator of a fraction with denominator 65536 (i.e. 25326/65536). Thus, divide 25326 by 65536 and you'll get 0.386.
For the whole part, FF ED converted to decimal is 65517. 65517 represents the whole number -19 (which is 65517 - 65536).
This makes the value -19 + .386 = -18.614.
This explanation mostly makes sense, but I'm confused by 2 things:
Does the magic number 65536 have any significance?
BinaryWriter.Write(-18.613f) writes the bytes as 79 E9 94 C1, so my assumption is that the binary file I'm working with uses its own proprietary method of storing 4-byte floating-point values (i.e. I can't use C#'s float interchangeably and will need to encode/decode the values first)?
Firstly, this isn't a floating-point number, it's a fixed-point number.
Note: a fixed-point number reserves a specific number of bits (or digits) for the integer part (the part to the left of the decimal point) and a specific number for the fractional part.
Does the magic number 65536 have any significance?
It's the number of distinct values an unsigned 16-bit number can hold, i.e. 2^16. It is significant because the value you are working with is encoded as two 16-bit values, one for the integral component and one for the fractional component.
so my assumption is the binary file I'm working with uses its own proprietary method of storing 4-byte floating point values
No: floating-point values in .NET adhere to the IEEE Standard for Floating-Point Arithmetic (IEEE 754); what the file stores is not a float at all but the fixed-point format described above.
When you use BinaryWriter.Write(float), it basically just shifts the bits into bytes and writes them to the stream:
uint TmpValue = *(uint *)&value;       // reinterpret the float's bits as a uint
_buffer[0] = (byte) TmpValue;          // least significant byte first (little-endian)
_buffer[1] = (byte) (TmpValue >> 8);
_buffer[2] = (byte) (TmpValue >> 16);
_buffer[3] = (byte) (TmpValue >> 24);
OutStream.Write(_buffer, 0, 4);
If you want to read and write this special value, you will need to do the same kind of thing yourself: read and write the raw bytes and convert them.
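As a rough sketch (the class and method names are mine, and the layout is only as described in the question: low word = fraction, high word = signed whole part), reading and writing that 16.16 fixed-point value could look like this:

using System;
using System.IO;

static class Fixed16x16
{
    // e.g. bytes EE 62 ED FF -> frac 0x62EE (25326), whole 0xFFED (-19) -> about -18.6135
    public static float Read(BinaryReader reader)
    {
        ushort frac = reader.ReadUInt16();   // fractional part, numerator over 65536
        short whole = reader.ReadInt16();    // signed whole part
        return whole + frac / 65536f;
    }

    public static void Write(BinaryWriter writer, float value)
    {
        // Equivalent view: the whole 32-bit word is the value scaled by 65536.
        int raw = (int)Math.Round(value * 65536.0);
        writer.Write((ushort)(raw & 0xFFFF));  // low word: fraction
        writer.Write((short)(raw >> 16));      // high word: whole part
    }
}

BinaryReader and BinaryWriter are little-endian, which matches the byte order described for the file.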
This is likely a built-in format specific to the game.
It is essentially a fraction value:
62 EE represents the fractional part of the value and FF ED represents the whole-number part.
The whole-number part is easy to understand, so I won't explain it.
The explanation of the fractional part is:
for every 2 bytes, there are 65536 possibilities (0 to 65535),
since 256 x 256 = 65536,
hence the magic number 65536.
And the game itself must have a built-in algorithm that divides the first 2 bytes by 65536.
Choosing any other divisor would either waste memory or reduce the accuracy of the values that can be represented.
Of course, it all depends on what kind of accuracy the game wishes to present.
Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 * 2^-49
Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
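For instance, here is a minimal C# sketch (my own illustration; the variable names are arbitrary) that extracts exactly those two integers from the bits of the double 9.2:

using System;

long bits = BitConverter.DoubleToInt64Bits(9.2);
long mantissa = (bits & 0xFFFFFFFFFFFFFL) | (1L << 52);  // 52 stored bits plus the implicit leading 1
int exponent = (int)((bits >> 52) & 0x7FF) - 1023 - 52;  // unbias, then shift the point 52 places
Console.WriteLine(mantissa);  // 5179139571476070
Console.WriteLine(exponent);  // -49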
Seeing the Data
First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:          # single precision
        int_pack      = 'I'
        float_pack    = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:        # double precision. all python floats are this
        int_pack      = 'Q'
        float_pack    = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits - 1) - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 x 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Which we can then convert from binary to decimal:
1.1499999999999999 x 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 x 10^11
Shift mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 x 10^(11-110100)
Convert to decimal:
5179139571476070 x 2^(3-52)
Subtract the exponent:
5179139571476070 x 2^-49
Turn negative exponent into division:
5179139571476070 / 2^49
Multiply exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 x 10^11
Shift the decimal point:
10011 x 10^(11-100)
Subtract the exponent:
10011 x 10^-1
Binary to decimal:
19 x 2^-1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.1111...₃
Why are floating point numbers inaccurate?
Because often-times, they are approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and in general they are approximating real (possibly irrational) numbers which may not be representable in finitely many digits in any base.
While all of the other answers are good, there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
If you limit your math needs to rational numbers only, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions, just like in high-school math (e.g. a/b * c/d = ac/bd).
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
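To make that concrete, here is a minimal sketch of such a type in C# (the struct and its members are illustrative only), using System.Numerics.BigInteger for the arbitrarily large numerator and denominator:

using System.Numerics;

readonly struct Fraction
{
    public readonly BigInteger Num, Den;

    public Fraction(BigInteger num, BigInteger den)
    {
        // Keep a canonical form: positive denominator, lowest terms.
        if (den.Sign < 0) { num = -num; den = -den; }
        BigInteger g = BigInteger.GreatestCommonDivisor(num, den);
        Num = num / g;
        Den = den / g;
    }

    public static Fraction operator *(Fraction a, Fraction b)
        => new Fraction(a.Num * b.Num, a.Den * b.Den);   // a/b * c/d = ac/bd

    public static Fraction operator +(Fraction a, Fraction b)
        => new Fraction(a.Num * b.Den + b.Num * a.Den, a.Den * b.Den);

    public override string ToString() => $"{Num}/{Den}";
}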
TL;DR
For hardware-accelerated arithmetic, only a limited set of rational numbers can be represented. Every non-representable number is approximated. Some numbers (i.e. the irrational ones) can never be represented, no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is a finite one (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits only allow you to distinguish among 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. Those that can be represented are of the form m × 2^e for some integers m and e.
You might come up with a different numeration system, 10 based for instance, where 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-point numbers are extremely accurate. They can represent any number in a very wide range with as many as 15 exact digits. For daily-life computations, 4 or 5 digits are more than enough. You will never really need all 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b * 5^c).
On the other hand, the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However the big problem is that such a format is a pain to do calculations on. For two reasons.
If you want to have exactly one representation of each number, then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest-common-divisor calculation.
If after your calculation you end up with an unrepresentable result, because the numerator or denominator is too large, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually they do it in combination with arbitrary precision. This avoids needing to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
Some languages also offer decimal floating-point types. These are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). They are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.
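As a tiny illustration of that last point (my own example), C#'s decimal, which is a base-10 floating-point type, matches human decimal expectations where binary double does not:

using System;

double d = 0.1 + 0.2;     // binary floating point
decimal m = 0.1m + 0.2m;  // decimal floating point
Console.WriteLine(d == 0.3);   // False (d is 0.30000000000000004)
Console.WriteLine(m == 0.3m);  // True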
If I convert single s into decimal d, I've noticed its bit representation differs from that of the decimal created directly.
For example:
Single s = 0.01f;
Decimal d = 0.01m;
int[] bitsSingle = Decimal.GetBits((decimal)s);
int[] bitsDecimal = Decimal.GetBits(d);
Returns (middle elements removed for brevity):
bitsSingle:
[0] = 10
[3] = 196608
bitsDecimal:
[0] = 1
[3] = 131072
Both of these are decimals which (appear to) accurately represent 0.01.
Looking at the spec sheds no light except perhaps:
§4.1.7 Contrary to the float and double data types, decimal fractional
numbers such as 0.1 can be represented exactly in the decimal
representation.
This suggests that the difference is somehow caused by single not being able to accurately represent 0.01 before the conversion, therefore:
Why is this not accurate by the time the conversion is done?
Why do we seem to have two ways to represent 0.01 in the same datatype?
TL;DR
Both decimals precisely represent 0.01. It's just that the decimal format allows multiple bitwise-different values that represent the exact same number.
Explanation
It isn't about single not being able to represent 0.01 precisely. As per the documentation of GetBits:
The binary representation of a Decimal number consists of a 1-bit
sign, a 96-bit integer number, and a scaling factor used to divide the
integer number and specify what portion of it is a decimal fraction.
The scaling factor is implicitly the number 10, raised to an exponent
ranging from 0 to 28.
The return value is a four-element array of 32-bit signed integers.
The first, second, and third elements of the returned array contain
the low, middle, and high 32 bits of the 96-bit integer number.
The fourth element of the returned array contains the scale factor and
sign. It consists of the following parts:
Bits 0 to 15, the lower word, are unused and must be zero.
Bits 16 to 23 must contain an exponent between 0 and 28, which
indicates the power of 10 to divide the integer number.
Bits 24 to 30 are unused and must be zero.
Bit 31 contains the sign: 0 mean positive, and 1 means negative.
Note that the bit representation differentiates between negative and
positive zero. These values are treated as being equal in all
operations.
The fourth integer of each decimal in your example is 0x00030000 for bitsSingle and 0x00020000 for bitsDecimal. In binary this maps to:
bitsSingle    00000000 00000011 00000000 00000000
              |\-----/ \------/ \---------------/
              |   |       |             |
    sign <----+ unused exponent       unused
              |   |       |             |
              |/-----\ /------\ /---------------\
bitsDecimal   00000000 00000010 00000000 00000000
NOTE: exponent represents multiplication by negative power of 10
Therefore, in the first case the 96-bit integer is divided by an additional factor of 10 compared to the second -- bits 16 to 23 give the value 3 instead of 2. But that is offset by the 96-bit integer itself, which in the first case is also 10 times greater than in the second (obvious from the values of the first elements).
The difference in observed values can therefore be attributed simply to the fact that the conversion from single uses subtly different logic to derive the internal representation compared to the "straight" constructor.
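A quick way to see this in practice (my own check, reusing the values from the question): the two bit patterns differ, but the decimal values they denote compare as equal.

using System;

decimal fromSingle = (decimal)0.01f;  // internally 10 scaled by 10^-3
decimal direct = 0.01m;               // internally 1 scaled by 10^-2
Console.WriteLine(fromSingle == direct);                            // True
Console.WriteLine(string.Join(", ", Decimal.GetBits(fromSingle)));  // 10, 0, 0, 196608
Console.WriteLine(string.Join(", ", Decimal.GetBits(direct)));      // 1, 0, 0, 131072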
As you probably know, both of these types are 32 bits. int can hold only integer numbers, whereas float also supports floating-point numbers (as the type names suggest).
How is it possible then that the max value of int is 2^31, while the max value of float is 3.4 * 10^38, when both of them are 32 bits?
I think that int's max value should be higher than float's, because it doesn't reserve bits for a fractional part and accepts only integer numbers. I'd be glad for an explanation.
Your intuition quite rightly tells you that there can be no more information content in one than the other, because they both have 32 bits. But that doesn't mean we can't use those bits to represent different values.
Suppose I invent two new datatypes, uint4 and foo4. uint4 uses 4 bits to represent an integer, in the standard binary representation, so we have
bits value
0000 0
0001 1
0010 2
...
1111 15
But foo4 uses 4 bits to represent these values:
bits value
0000 0
0001 42
0010 -97
0011 1
...
1110 pi
1111 e
Now foo4 has a much wider range of values than uint4, despite having the same number of bits! How? Because there are some uint4 values that can't be represented by foo4, so those 'slots' in the bit mapping are available for other values.
It is the same for int and float - they can both store values from a set of 2^32 values, just different sets of 2^32 values.
A float might store a larger value, but it will not be precise, even in the digits before the decimal point.
Consider the following example:
float a = 123456789012345678901234567890f; //30 digits
Console.WriteLine(a); // 1.234568E+29
Notice that barely any precision is kept.
An integer on the other hand will always precisely store any number within its range of values.
For the sake of comparison, let's look at a double precision floating point number:
double a = 123456789012345678901234567890d; //30 digits
Console.WriteLine(a); // 1.23456789012346E+29
Notice that roughly twice as many significant digits are preserved.
These types are based on the IEEE 754 floating-point specification; that is why this is possible. See the IEEE 754 documentation: it is not just a matter of how many bits there are.
The hint is in the "floating" part of "floating point". What you say basically assumes fixed point. A floating point number does not "reserve space" for the digits after the decimal point - it has a limited number of digits (23 binary) and remembers what power of two to multiply it by.
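A small illustration of that trade-off (my own example): beyond 2^24 a 32-bit float can no longer tell consecutive integers apart, while a 32-bit int is always exact within its range.

using System;

float f = 16777216f;             // 2^24
Console.WriteLine(f + 1f == f);  // True: 16777217 is not representable as a float
int i = 16777216;
Console.WriteLine(i + 1 == i);   // False: every int in range is exact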
Would a higher range of floats be more accurate to multiply / divide / add / subtract than a lower range?
For example, would 567.56 / 345.54 be more accurate than .00097854 / .00021297 ?
The answer to your question is "no." Floating point numbers are (usually*) represented with a normalized mantissa and an exponent. Multiplication and division operate first on the normalized mantissa, then on the exponents.
Addition and subtraction are, of course, another story. Operations like your examples:
567.56 + 345.54 or .00097854 - .00021297
work fine. But operations with disparate orders of magnitude like
567.56 + .00097854 or 345.54 - .00021297
may lose some low-order precision.
The IEEE Floating point standards includes denormalized numbers. If you are an astrophysicist or runtime-library developer, you may need to understand them. See http://en.wikipedia.org/wiki/Denormal_number
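If you are curious what those look like in C# (a small aside of mine), double.Epsilon is the smallest positive denormal double:

using System;

Console.WriteLine(double.Epsilon);           // 5E-324, the smallest positive (denormal) double
Console.WriteLine(double.Epsilon / 2 == 0);  // True: halving it underflows to zero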
For IEEE 754 binary floating-point numbers (the most common), floating-point values have the same number of bits in the significand throughout most of the exponent range. However, there is a portion of the range where the significand has effectively fewer bits. And the relative error caused by rounding does vary depending on where the significand lies within its range.
IEEE 754 floating-point numbers are represented by a sign (+1 or -1, encoded as 0 or 1), an exponent (for double-precision, -1022 to 1023, encoded as the exponent plus 1023, so 1 to 2046), and a significand (for double-precision, a fraction usually from 1 to just under 2, represented with 53 bits but encoded with 52 bits because the first bit is implicitly 1).
E.g., the number 6.5 is encoded with the bits 0 (sign +1), 10000000001 (exponent 2), and 1010000000000000000000000000000000000000000000000000 (binary fraction 1.1010, hex 1.a, decimal 1.625). We can write this in hexadecimal floating-point as 0x1.ap2 (hex fraction 1.a multiplied by 2 to the power of decimal 2). Writing in hexadecimal floating-point enables humans to see the floating-point representation fairly easily.
For the exponent, the encoding values of 0 and 2047 are special. When the encoding is 0, the exponent is the same as when the encoding is 1 (-1022), but the implicit bit of the fraction is 0 instead of 1. When the encoding is 2047, the floating-point object represents infinity (if the significand bits are all zero) or a NaN (otherwise).
When the encoded exponent is 0 and the significand bits are all zero, the number represents zero (with +0 and -0 distinguished by the sign). If the significand bits are not all zero, the number is said to be denormalized. This is because most numbers are “normalized” by adjusting the exponent so that the fraction is between 1 (inclusive) and 2 (exclusive). For denormalized numbers, the fraction is less than 1; it starts with “0.” instead of “1.”.
When the result of a floating-point operation is a denormalized number, it effectively has fewer bits in the significand. Thus, as numbers drop below 0x1p-1022 (2^-1022), the effective precision decreases.
When numbers are in the normal range (not underflowing to denormals and not overflowing to infinity), then there are no differences in the significands of numbers with different exponents, so:
(2a+2b)/2 has exactly the same result as a+b.
(2a-2b)/2 has exactly the same result as a-b.
(2ab)/2 has exactly the same result as ab.
Note, however, that the relative error can change. When a floating-point operation is performed, the exact mathematical result must be rounded to a representable value. This rounding can happen only in units representable by the significand. For a given exponent, the bits in the significand have a fixed value. So the last bit in the significand represents a certain value. That value is a greater portion of a significand near 1 than it is of a significand near 2.
For a double-precision result, the unit of least precision (ULP) is 1 part in 2^52 of the value of the greatest bit in the significand. When using round-to-nearest mode (the most common default), the greatest error is at most half of that, because, if the representable number in one direction is more than half an ULP away, the number in the other direction is less than half an ULP away. And the closer number is returned by a proper floating-point operation.
Thus, the maximum relative error in a result with a significand near 1 is slightly over 2^-53, but the maximum relative error in a result with a significand near 2 is slightly under 2^-54.
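One way to see this absolute-versus-relative step size in C# (my own illustration, using Math.BitIncrement from .NET Core 3.0+): the gap to the next representable double is constant throughout a binade, so it is a smaller fraction of values near 2 than of values near 1.

using System;

Console.WriteLine(Math.BitIncrement(1.0) - 1.0);  // 2.220446049250313E-16  (one ULP at 1.0, i.e. 2^-52)
Console.WriteLine(Math.BitIncrement(1.9) - 1.9);  // 2.220446049250313E-16  (same ULP just under 2)
Console.WriteLine(Math.BitIncrement(2.0) - 2.0);  // 4.440892098500626E-16  (ULP doubles at the next binade)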
For the sake of completeness, I have to disagree a bit and say Yes, it may matter somehow...
Indeed, if you perform 56756.0 / 34554.0, then you'll get the nearest representable Float to the exact mathematical result, with a single floating point rounding "error".
This is because 56756.0 and 34554.0 are representable exactly in floating point (single or double precision IEEE 754), and because according to IEEE 754 standard, operations perform an exact rounding operation (in default mode to the nearest).
If you write 567.56 / 345.54, then both numbers are not represented exactly in floating point in radix 2, so the result of this operation is cumulating 3 floating point rounding "errors".
Let's compare the result in Squeak Smalltalk in double precision (Float), converted to exact arithmetic (Fraction with arbitrary integer length at numerator and denominator):
((56756.0 / 34554.0) asFraction - (56756 / 34554)) asFloat.
-> -7.932275867322412e-17
So far, so good, the magnitude of error is less than or equal to half an ulp, as promised by IEEE 754:
(56756 / 34554) asFloat ulp / 2
-> 1.1102230246251565e-16
With cumulated rounding errors, you may get a larger error (but never a smaller):
((567.56 / 345.54) asFraction - (56756 / 34554)) asFloat
-> -3.0136736359825544e-16
((0.00056756 / 0.00034554) asFraction - (56756 / 34554)) asFloat
-> 3.647664511768385e-16
The above example is hard to generalize, and I perfectly agree with the other answers: generally, NO, you should only care about relative precision.
... Unless maybe if you want to implement some function with very strict tolerance about round off errors...
No. In the sense that there's the same number of significant digits available no matter what the order of magnitude (exponent part) of your number is.