The double value that I assigned to the variable is changing [duplicate] - c#

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 months ago.
I have a problem with the conversion of double values.
In the picture, the project's framework is .NET 6; I also tried .NET 5 and got the same value again on x64.
Can you explain this situation to me?
And how can I get the value unchanged on x64?
Thank you.

This is expected behavior: floating point numbers have a limited number of significant digits, and a number that has a finite number of digits in its decimal representation may require infinitely many digits in its binary representation.
E.g., the decimal 0.1 is, in binary, 0.00011001100110011... repeating. When this is stored in a float or double, the digits are truncated.
Floating point numbers behave similarly, but not identically, to "real" numbers. This is something you should be aware of.
For financial mathematics, in C#, use the decimal type.
Here you'll find a good "what every developer should know" overview: https://floating-point-gui.de/.
The standard governing the most common (almost ubiquitous) implementation of floating point types is IEEE 754.
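
A minimal sketch of the point above (the class name is just for illustration; the exact printed digits can vary by .NET version and culture): adding two doubles that "should" equal 0.3 doesn't, while decimal, which stores a base-10 significand, stays exact.

using System;

class FloatingPointDemo
{
    static void Main()
    {
        // 0.1 has no finite binary representation, so the error shows up
        // once enough digits are printed or the values are compared.
        double d = 0.1 + 0.2;
        Console.WriteLine(d == 0.3);          // False
        Console.WriteLine(d.ToString("G17")); // 0.30000000000000004

        // decimal stores a base-10 significand, so 0.1m is exact.
        decimal m = 0.1m + 0.2m;
        Console.WriteLine(m == 0.3m);         // True
        Console.WriteLine(m);                 // 0.3
    }
}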

Related

Why does float.Parse() return the wrong result for a value without decimal places? [duplicate]

This question already has answers here:
c# string to float conversion invalid?
(3 answers)
Closed 5 years ago.
float.Parse("534818068")
returns: 534818080
I understand that there are many complications with float and decimal values. But maybe someone could explain this behaviour to me.
Thanks!
Floating point numbers have a relative precision, i.e. something like 7 or 8 digits. So only the first 7 or 8 digits are correct, independent of the actual total size of the number.
Floating point numbers are stored internally using the IEEE 754 standard (a sign, a biased exponent and a fraction).
float numbers are stored in a 32-bit representation, which gives them a precision of about 7 digits.
On the other hand, doubles are stored in a 64-bit representation, giving them 15-16 digits of precision.
This is also why you usually shouldn't compare floating point numbers for equality.
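
To illustrate (a rough sketch; the class name is made up and the output comments assume invariant-culture formatting), parsing the same string into a float, a double and a long shows where the 7-digit limit bites:

using System;
using System.Globalization;

class ParsePrecisionDemo
{
    static void Main()
    {
        // float has roughly 7 significant decimal digits, so the 9-digit
        // input cannot be stored exactly and snaps to the nearest float.
        float f = float.Parse("534818068", CultureInfo.InvariantCulture);
        Console.WriteLine((double)f);  // 534818080

        // double has 15-16 significant digits, which is enough here.
        double d = double.Parse("534818068", CultureInfo.InvariantCulture);
        Console.WriteLine(d);          // 534818068

        // For whole numbers of this size, an integer type parses losslessly.
        long l = long.Parse("534818068", CultureInfo.InvariantCulture);
        Console.WriteLine(l);          // 534818068
    }
}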

How to divide numbers using double or decimal in C# [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 5 years ago.
When I divide in C#, the answer is different from what my calculator gives. Kindly explain.
In C#:
double div = (double)100000 / 30 / 9;
Answer: 370.37037037037038
370.37037037037038 * 9 * 30 // 100000.0000000000026
On the calculator:
Answer: 370.3703703703704
370.3703703703704 * 9 * 30 // 100000
I need an exact answer, like the calculator gives.
The difference is simply one of rounding. The result 100000/270 is a rational number, but its decimal expansion repeats forever, so it cannot be written with a finite number of digits in decimal, and it cannot be represented exactly in binary either. The only difference here is that your calculator displays fewer digits than the C# double.
The final '04' is simply your calculator rounding '038' upward.
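
A short sketch of the same calculation (the class name is made up; printed digits vary by .NET version): double carries the extra digits the calculator hides, and decimal gets closer to the calculator display, but neither can store the repeating value 370.370370... exactly.

using System;

class DivisionDemo
{
    static void Main()
    {
        double d = (double)100000 / 30 / 9;
        Console.WriteLine(d);                 // 370.3703703703704 on .NET Core / .NET 5+
        Console.WriteLine(d.ToString("G17")); // 370.37037037037038

        // decimal keeps 28-29 significant base-10 digits, so it looks more
        // like the calculator, but the repeating expansion is still rounded.
        decimal m = 100000m / 30 / 9;
        Console.WriteLine(m);                 // 370.370370... to 28-29 digits
        Console.WriteLine(m * 9 * 30);        // very close to, but not exactly, 100000
    }
}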

Math.Round(1.225,2) gives 1.23, shouldn't it give 1.22? [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
AFAIK .NET's default rounding mode is to-even (banker's rounding), so Math.Round(1.225,2) should give 1.22, but it gives 1.23.
Math.Round(2.225,2) = 2.22
Math.Round(100.225,2) = 100.22
All the values I tried round to the nearest even digit; only 1.225 and -1.225 round to 1.23 and -1.23.
The main problem is that in float and double, the number of decimal places is not part of the value, and the precision isn't decimal but binary. And there is no finite binary fraction that can represent 1.225 exactly.
So when you do Math.Round(1.225f, 2), you're actually doing something more like Math.Round(1.22500002f, 2) - there's no midpoint rounding involved.
The same problem appears with Math.Round(2.225f, 2) - it's just that the "real" value is slightly smaller than 2.225f, so the result rounds down. But there's still no midpoint rounding involved.
If you need decimal precision, use decimal. Neither float nor double are designed for decimal precision - they're fine for e.g. physics calculations, but not for e.g. accounting.
1.225 can't be represented exactly in floating point; as a double it is stored as roughly 1.2250000000000001, so you're really rounding a value slightly above 1.225.
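
A small sketch that makes this concrete (the class name is illustrative; output comments assume invariant-culture formatting): printing the stored double shows it is slightly above 1.225, and switching to decimal gives the banker's rounding the asker expected.

using System;

class MidpointRoundingDemo
{
    static void Main()
    {
        // The double literal 1.225 is stored slightly above 1.225, so there
        // is no true midpoint and to-even rounding never comes into play.
        Console.WriteLine(1.225.ToString("G17")); // 1.2250000000000001
        Console.WriteLine(Math.Round(1.225, 2));  // 1.23

        // decimal stores 1.225 exactly, so the default to-even rounding
        // behaves as expected.
        Console.WriteLine(Math.Round(1.225m, 2)); // 1.22
        Console.WriteLine(Math.Round(2.225m, 2)); // 2.22
    }
}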

Why should I use Integers when I could just use floats or doubles in C#? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
Just learning C# at the moment.
I don't get why anyone would use integers when they could use floats or doubles...
Floats will add/subtract whole numbers AND decimal numbers, so why would anyone ever bother using a plain old integer?
It seems like floats or doubles will take care of anything an integer can do, with the bonus of being able to handle decimal numbers too.
Thanks!
The main reason is the same reason we often prefer to use integer fractions instead of fixed-precision decimals. With rational fractions, (1/3) times 3 is always 1. (1/3) plus (2/3) is always 1. (1/3) times 2 is (2/3).
Why? Because integer fractions are exact, just like integers are exact.
But with fixed-precision real numbers -- it's not so pretty. If (1/3) is .33333, then 3 times (1/3) will not be 1. And if (2/3) is .66666, then (1/3)+(2/3) will not be one. But if (2/3) is .66667, then (1/3) times 2 will not be (2/3) and 1 minus (1/3) will not be (2/3).
And, of course, you can't fix this by using more places. No number of decimal digits will allow you to represent (1/3) exactly.
Floating point is a fixed-precision real format, much like my fixed-precision decimals above. It doesn't always follow the naive rules you might expect. See the classic paper What Every Computer Scientist Should Know About Floating-Point Arithmetic.
To answer your question, to a first approximation, you should use integers whenever you possibly can and use floating point numbers only when you have to. And you should always remember that floating point numbers have limited precision and comparing two floating point numbers to see if they are equal can give results you might not expect.
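
A minimal sketch of that exactness argument (the class name is illustrative; printed digits may vary by .NET version): integer sums are exact, while repeatedly adding 0.1 as a double drifts away from 1.0.

using System;

class ExactnessDemo
{
    static void Main()
    {
        // Integer arithmetic is exact: adding 1 ten times gives exactly 10.
        int intSum = 0;
        for (int i = 0; i < 10; i++) intSum += 1;
        Console.WriteLine(intSum == 10);              // True

        // Adding 0.1 ten times does not give exactly 1.0, because 0.1 has
        // no finite binary representation and each addition is rounded.
        double doubleSum = 0.0;
        for (int i = 0; i < 10; i++) doubleSum += 0.1;
        Console.WriteLine(doubleSum == 1.0);          // False
        Console.WriteLine(doubleSum.ToString("G17")); // 0.99999999999999989
    }
}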
As you can see here, the different numeric types each have their own size.
For example, when working with big databases, picking a double where an int or float would do can double the required storage.
You have different datatypes because they use a different amount of bits to store data.
Typically an integer will use less memory than a double, that is why one doesn't just use the largest possible datatype.
http://en.wikipedia.org/wiki/Data_type
Most of the answers given here deal directly with math concepts. There are also purely computational reasons for using integers. Several jump to mind:
loop counters — yes, I know you can use a decimal/double in a for loop, but do you really need that degree of complexity?
internal enumeration values
array indices
There are several reasons. Performance, memory, even the desire to not see decimals (without having to play with format strings).
Floating point:
In computing, floating point describes a method of representing an approximation to real numbers in a way that can support a wide range of values. Numbers are, in general, represented approximately to a fixed number of significant digits and scaled using an exponent.
Integer:
In computer science, an integer is a datum of integral data type, a data type which represents some finite subset of the mathematical integers.
So, precision is one argument.
There are several reasons. First off, like people have already said, double stores 64-bit values, while int only requires 32 bits.
Float is a different case. Both int and float store 32-bit numbers, but float is less precise. A float value is precise up to 7 digits, but beyond that it is just an approximation. If you have larger numbers, or if there is some case where you purposefully want to force only integer values with no fractional numbers, int is the way to go. If you don't care about loss of precision and want to allow a wider range of values, you can use float instead.
The primary reasons for using integers are memory consumption and performance:
doubles are in most cases stored in 64-bit memory blocks (compared to 32 bits for an int) and use a somewhat complicated standard of representation (in some cases an approximation of the real value);
that representation requires handling a mantissa and an exponent;
floating point arithmetic is often handled by a dedicated floating point unit;
for integers, complements and shifts can be used to speed up arithmetic operations;
there are a number of use cases where it is more appropriate (natural) to use integers, such as indexing arrays, loop counters, getting the remainder of a division, counting, etc.
Also, if you had to do all of this with types meant for representing real numbers, your code would be much more error-prone.
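
A rough sketch of those computational reasons (the class name is illustrative; exact output may vary): size, indexing and remainders are all places where an integer is the natural fit.

using System;

class IntegerUseCasesDemo
{
    static void Main()
    {
        // Size: an int is half the size of a double, which adds up in
        // large arrays and database columns.
        Console.WriteLine(sizeof(int));    // 4
        Console.WriteLine(sizeof(double)); // 8

        // Array indices and loop counters have to be integers anyway.
        int[] values = { 10, 20, 30 };
        for (int i = 0; i < values.Length; i++)
            Console.WriteLine(values[i]);

        // Remainders are a natural integer operation...
        Console.WriteLine(17 % 5);         // 2
        // ...while with doubles the representation error leaks through.
        Console.WriteLine((0.3 % 0.1).ToString("G17")); // close to 0.1, not the 0 you might expect
    }
}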

How does Double.ToString() display more chars than Double's precision? [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 10 years ago.
?(1.0-0.9-0.1)
-0.000000000000000027755575615628914
?((double)1.0-(double)0.9-(double)0.1)
-0.000000000000000027755575615628914
?((double)1.0-(double)0.9-(double)0.1).GetType()
{Name = "Double" FullName = "System.Double"}
?((double)1.0-(double)0.9-(double)0.1).ToString()
"-2,77555756156289E-17"
How does Double.ToString() display more chars (32) than a double's precision (15-16)?
I expect that MyObject.ToString() represents just MyObject and not MyObject+SomeTrashFromComputer
Why
?0.1
0.1
?0.2-0.1
0.1
?0.1-0.1
0.0
BUT
?1.0-0.9-0.1
-0.000000000000000027755575615628914
WHY
?1.0-0.1-0.9
0.0
BUT
?1.0-0.9-0.1
-0.000000000000000027755575615628914
How does Double.ToString() display more chars (32) than a double's precision (15-16)?
It isn't displaying 32, it is displaying 17 significant digits; leading zeros don't count. Floating point stores the magnitude (the exponent) separately from the significant digits.
I expect that MyObject.ToString() represents just MyObject
It does, there may be a slight difference due to the mechanics of floating point numbers, but the true number is represented by the string precisely.
not MyObject+SomeTrashFromComputer
There is no trash, there is floating point inaccuracy. It exists in decimal too: try to write down 1/3 exactly as a decimal number. You can't; it requires a repeating decimal. Doubles are stored in base 2, so even 0.1 becomes a repeating "decimal".
Also note that you are getting two different representations because you are calling two different display methods. ToString has specific semantics, while your debugging window probably has different ones. Also look up scientific notation if you want to know what the E means.
Check the documentation for System.Double:
Remember that a floating-point number can only approximate a decimal number, and that the precision of a floating-point number determines how accurately that number approximates a decimal number. By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. The precision of a floating-point number has several consequences:
Two floating-point numbers that appear equal for a particular precision might not compare equal because their least significant digits are different.
A mathematical or comparison operation that uses a floating-point number might not yield the same result if a decimal number is used because the floating-point number might not exactly approximate the decimal number.
A value might not roundtrip if a floating-point number is involved. A value is said to roundtrip if an operation converts an original floating-point number to another form, an inverse operation transforms the converted form back to a floating-point number, and the final floating-point number is equal to the original floating-point number. The roundtrip might fail because one or more least significant digits are lost or changed in a conversion.
I think you're unclear on two aspects of a floating point number; the precision and the range.
The precision of a floating point representation is how closely it can approximate a given decimal. The precision of a double is 15-16 digits.
The range of a floating point representation is related to how large or small of a number can be approximated by that representation. The range of a double is +/-5.0e-324 to +/-1.7e308.
So in your case, the calculation is precise to about 16 significant digits, and beyond that it is not, as would be expected.
Some numbers that would seem simple are just not representable in standard floating point representation. If you require absolutely no deviation, you should use a different data type like decimal.
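
A short sketch tying the two points together (the class name is illustrative; the default ToString output differs between .NET Framework and .NET Core / .NET 5+): the stored value really is a tiny non-zero number, and the order of operations decides whether the rounding errors happen to cancel.

using System;

class ToStringPrecisionDemo
{
    static void Main()
    {
        double d = 1.0 - 0.9 - 0.1;

        // The stored result is a tiny negative number, not zero.
        Console.WriteLine(d);                 // -2.77555756156289E-17 on .NET Framework,
                                              // -2.7755575615628914E-17 on .NET Core / .NET 5+
        Console.WriteLine(d.ToString("G17")); // up to 17 significant digits, round-trippable
        Console.WriteLine(d == 0.0);          // False

        // A different order lets the rounding errors cancel exactly.
        Console.WriteLine(1.0 - 0.1 - 0.9);        // 0
        Console.WriteLine(1.0 - 0.1 - 0.9 == 0.0); // True
    }
}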
