This question already has answers here:
Is floating point math broken?
(31 answers)
Is double Multiplication Broken in .NET? [duplicate]
(6 answers)
Why is floating point arithmetic in C# imprecise?
(3 answers)
Closed 2 years ago.
I'm trying to create a system where numbers like 1,000 get converted over to 1k, 1,000,000 to 1m, etc. The code works great up until I input a number like 8,850,000. Instead of spitting out 8.85B, it spits out 8.849999B. Is this a quirk with Unity and/or C#, or did I do something wrong in the code?
Edit: It wasn't the Mathf.Round function, it was the limitations of the float data type. I managed to fix the problem by using the decimal data type instead.
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
using UnityEngine;

public class MoneyManager : MonoBehaviour
{
    public Text txt;
    private static float totalMoney;
    public float perClick;
    private float decimalMoney;
    private float roundedMoney;
    private string[] numberletter = new string[] { "K", "B", "T" };
    private string correctletter;

    void Start()
    {
        CookieClick.clickAmount = perClick;
    }

    void Update()
    {
        totalMoney = CookieClick.money;
        roundedMoney = Mathf.Floor(Mathf.Log10(totalMoney));

        if (roundedMoney >= 3 && roundedMoney < 6)
        {
            correctletter = numberletter[0];
            decimalMoney = Mathf.Round(totalMoney / 1000f * 100.0f) * 0.01f;
            Debug.Log(decimalMoney);
            txt.text = decimalMoney.ToString() + correctletter;
        }
        else if (roundedMoney >= 6 && roundedMoney < 9)
        {
            correctletter = numberletter[1];
            Debug.Log(totalMoney);
            decimalMoney = Mathf.Round(totalMoney / 1000000f * 100.0f) * 0.01f;
            Debug.Log(decimalMoney);
            txt.text = decimalMoney.ToString() + correctletter;
        }
        else
        {
            txt.text = totalMoney.ToString();
        }
    }
}
It's nothing to do with the rounding, it's just that a number like 8.85 can't be exactly represented as a float.
Floating point numbers are stored in "scientific notation" in base 2. If you write 8.85 in base 2 it's
1000.11011001100110011001100..._2 (repeating forever)
so in the computer this will be stored as
1.00011011001100110011001_2 x 2^3
where the "mantissa" on the left has been chopped off at 23 bits.
Because of this truncation of the binary expansion, the number that's actually stored is equal to
8.84999942779541015625
in this case, and so when the computer converts it back to decimal to print it out in a human-readable form that's what you see.
This isn't anything special to the programming language; it's just how numbers work when you can only store so many places.
As Johan Donne mentions in the comments, you can instead instruct the computer to store this number as
885 x 10^(-2)
by using the decimal type instead of float; alternatively, you could have taken that number 8.84999... above and rounded it to few enough places that it would have rounded up to 8.85.
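You can see both behaviours side by side in a two-line sketch (mine, not from the original post; the exact float digits can differ by an ulp depending on how the intermediate operations round):

Console.WriteLine((885f * 0.01f).ToString("G9")); // something like 8.84999943, the float artifact
Console.WriteLine(885m * 0.01m);                  // 8.85: decimal is exact here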
This is not caused by Mathf.Round but by the data type you are using: float stores numbers with binary fractions, so a float cannot always represent a decimal value exactly. Sometimes, when you amplify the error by multiplying by a large value, or when doing many chained calculations, these errors become visible.
Specifically to avoid those small errors, there is the datatype decimal, which stores values with decimal fractions. Whenever you want exact results for decimal numbers (e.g. in financial calculations) you should use decimal, not float or double.
So, use decimal instead of float (for the literals as well, e.g. replace 100.0f with 100.0m).
Note: Not only does decimal store decimal values without rounding errors (even after multiple chained calculations), it also has much higher precision: 28 digits versus 15 digits in the case of double.
On the other hand, the range of decimal is smaller than that of double: see https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types for an overview.
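For the question's numbers, the decimal version of the "millions" branch might look like this (a sketch using the question's constants; Mathf only works on float, so use System.Math for decimal):

decimal totalMoney = 8850000m;
decimal decimalMoney = Math.Round(totalMoney / 1000000m * 100m) * 0.01m;
Console.WriteLine(decimalMoney + "B"); // 8.85B, exactly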
This question already has answers here:
C# Maths gives wrong results!
(4 answers)
Closed 9 months ago.
Hi, I have the function:
public static string MapDiePitchY(string DiePitchY)
{
    float value = float.Parse(DiePitchY) * 1000;
    int valInt = (int)value;
    string temp = valInt.ToString();
    return temp;
}
When I run it with these two numbers, I get two different behaviours:
str = MapDiePitchY("4150.8");
Returns 4150799
While
str = MapDiePitchY("2767.3");
Returns 2767300, which is what I'd expect and want every time I pass in a float.
I want the function to always return the value multiplied by 1000, but with no rounding. I've tried replacing the 1000 with 1000f and I still get the behaviour where it adds extra digits to 4150.8, values that shouldn't exist there. I have no idea why this is happening, and googling hasn't given me any answers.
Is there a way to ensure I get the exact value I pass in as string but multiplied by 1000 with no rounding?
I am on C# 7.3
So, as you understand, floating-point numbers are not exact. When you type float x = 4150.8f, the internal bit representation of the number is
x = (1 + 112230 * 2^(-23)) * 2^(139-127) = 4150.7998046875
The integers m=112230 and e=139 represent the mantissa and exponent of the floating-point number.
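You can check those two integers yourself by pulling the raw bits out of the float. A small sketch of mine (BitConverter.SingleToInt32Bits needs .NET Core / .NET 5+; on .NET Framework you can go through BitConverter.GetBytes instead):

int raw = BitConverter.SingleToInt32Bits(4150.8f);
int m = raw & 0x7FFFFF;       // low 23 bits: 112230
int e = (raw >> 23) & 0xFF;   // biased exponent: 139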
That x is the value the parse function returns. Then you multiply by 1000, which results in
value = x*1000 = 4150799.8046875
What you want to do at this point is round before converting to an integer:
int valInt = (int)Math.Round(value);
which rounds the .80... at the end up to the next whole number, 4150800.
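If you prefer to avoid binary floating point entirely, here is a sketch of the same function built on decimal instead (my variant, not the asker's code; it assumes the input always uses '.' as the decimal separator):

using System.Globalization;

public static string MapDiePitchY(string DiePitchY)
{
    // decimal holds "4150.8" exactly, so multiplying by 1000 gives exactly 4150800
    // and the cast to int no longer truncates a .999... tail.
    decimal value = decimal.Parse(DiePitchY, CultureInfo.InvariantCulture) * 1000m;
    return ((int)value).ToString();
}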
This question already has an answer here:
Why does my float of 999999999 become 10000000000? [duplicate]
(1 answer)
Closed 4 years ago.
In C#, to convert an int to a float, we just need to do something like float floatNumber = intNumber or Convert.ToSingle(intNumber). However, when it comes to a large number such as 999999999, the system cannot convert the number exactly; it converts it into the unwanted value 1E+09. Now the question is: is it possible to convert that large integer into the wanted float number?
A 32-bit float can't exactly represent an integer that large: it only has 24 bits with which to do it (one is implicit in the format). In 24 bits you can represent up to 16777215; 16777216 also fits because it is a power of two. 999999999 can't be represented exactly as a number of at most 24 bits multiplied by a power of 2.
See this answer on SO: https://stackoverflow.com/a/3793950/751579
For more information look up details on IEEE floating point 32-bit and 64-bit formats.
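To convince yourself of the 24-bit limit, a quick sketch (mine, not part of the original answer):

int a = 16777216;  // 2^24: exactly representable as a float
int b = 16777217;  // 2^24 + 1: needs 25 significant bits

Console.WriteLine((float)a == (float)b);  // True: both round to the same float
Console.WriteLine((float)999999999);      // 1E+09, the nearest representable value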
Can you use the decimal type?
Console.WriteLine(999999999.0f.ToString("N"));
Console.WriteLine(999999999.0m.ToString("N"));
prints
1,000,000,000.00
999,999,999.00
The reference even has an example for a very large number:
In this example, the output is formatted by using the currency format string. Notice that x is rounded because the decimal places exceed $0.99. The variable y, which represents the maximum exact digits, is displayed exactly in the correct format.
public class TestDecimalFormat
{
    static void Main()
    {
        decimal x = 0.999m;
        decimal y = 9999999999999999999999999999m;

        Console.WriteLine("My amount = {0:C}", x);
        Console.WriteLine("Your amount = {0:C}", y);
    }
}

/* Output:
    My amount = $1.00
    Your amount = $9,999,999,999,999,999,999,999,999,999.00
*/
I have been looking for a way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have a correct answer: they have misleading titles (they are really about SQL Server or some other database, not C#), or no answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale(), and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    // arrange
    var number = 12345.67890M;

    // act
    var scale = number.Scale();
    var precision = number.Precision();

    // assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said to convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
// The scale lives in bits 16-23 of the flags element, bits[3].
byte scale = (byte)((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is by removing the fraction point (i.e. using the Decimal constructor to reconstruct the decimal number without the scale mentioned above) and then using the logarithm:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions{
public static int GetScale(this decimal value){
if(value == 0)
return 0;
int[] bits = decimal.GetBits(value);
return (int) ((bits[3] >> 16) & 0x7F);
}
public static int GetPrecision(this decimal value){
if(value == 0)
return 0;
int[] bits = decimal.GetBits(value);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
return (int)Math.Floor(Math.Log10((double)d)) + 1;
}
}
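For example, with the value from the question:

decimal x = 12345.67890M;
Console.WriteLine(x.GetScale());     // 5
Console.WriteLine(x.GetPrecision()); // 10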
First of all, solve the "physical" problem: how you're going to decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are two fundamental ways to determine each digit (and thus their number):
get + interpret the meaningful parts
calculate mathematically
The second way can't detect trailing zeros in the fractional part (which may or may not be significant, depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in Decimal's interface, I see two basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatInfo) is actually a reliable way, since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it is the minimal number of digits, so set it to 0 (the docs are unclear on this), and trailing zeros are printed all right if there are any.
The semantics of the GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
flags:
bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
(thus (flags>>16)&0xFF is the raw value of this field)
bit 31 - sign (doesn't concern us)
As you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
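As an illustration of that first approach, here is a sketch (my addition, not part of the original answer) that reassembles |high|mid|low| into the 96-bit integer with BigInteger and counts the digits of its decimal string:

using System.Numerics;

static int TotalDigits(decimal d)
{
    int[] bits = decimal.GetBits(d);
    // Reassemble the 96-bit integer from the three 32-bit parts (treated as unsigned).
    BigInteger mantissa = ((BigInteger)(uint)bits[2] << 64)
                        | ((BigInteger)(uint)bits[1] << 32)
                        | (uint)bits[0];
    // Convention: treat 0 as having one digit.
    return mantissa.IsZero ? 1 : mantissa.ToString().Length;
}

For 12345.67890M the mantissa is 1234567890, so this returns 10; the scale (5) comes from the flags field as shown above.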
Racil's answer gives you the value of the internal scale value of the decimal which is correct, although if the internal representation ever changes it'll be interesting.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        // Format with the invariant culture so the decimal separator is always '.'
        // (requires using System.Globalization).
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
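Usage, with the value from the question:

decimal number = 12345.67890M;
Console.WriteLine(number.GetInfo()); // Scale=5, Length=10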
This question already has answers here:
C# is rounding down divisions by itself
(10 answers)
C# double not working as expected [duplicate]
(1 answer)
Closed 7 years ago.
I have a C# program that calculates a percentage and returns an int value, but it always returns 0.
I have been writing code for 16 consecutive hours, so I'd appreciate it if you can find the mistakes in it.
I debugged my code and I found that the values are being passed correctly.
private int returnFlag(int carCapacity, int subscribers)
{
    int percentage = (subscribers / carCapacity) * 100;
    return percentage;
}
What you're seeing is the result of operating on two integers, and losing the fractional portion.
This piece of code, when using the values 5 and 14, will truncate to 0:
(subscribers / carCapacity)
You need to cast one of the operands to a double or decimal:
private int returnFlag(int carCapacity, int subscribers)
{
    decimal percentage = ((decimal)subscribers / carCapacity) * 100;
    return (int)percentage; // e.g. subscribers = 5, carCapacity = 14 now gives 35 instead of 0
}
The issue is that since you're performing math on int (read: integer) values, any fractions or remainders get thrown out. This can be seen by changing your code to
int percentage = (subscribers / carCapacity);
percentage *= 100;
Since (subscribers / carCapacity) comes out to less than one, the only value the int result can hold is 0, and 0 * 100 is 0.
You can fix this by converting to a more precise number, such as double, before performing operations:
private int returnFlag(int carCapacity, int subscribers)
{
    double percentage = ((double)subscribers / (double)carCapacity) * 100.0;
    return (int)percentage;
}
Integer types (int) don't work with fractions. Change the types in your division to decimal, double, or float (note that in C# float is just an alias for System.Single).
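A quick sketch showing the three divisions side by side:

Console.WriteLine(5 / 14);   // 0: integer division truncates toward zero
Console.WriteLine(5 / 14.0); // ≈ 0.357142857142857: double division
Console.WriteLine(5m / 14);  // ≈ 0.3571428571428571428571428571: decimal division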
This question already has answers here:
Comparing double values in C#
(18 answers)
Closed 8 years ago.
I have a float value that I parsed into a double and then rounded off to 2 decimal places. I also have another float value to which I did exactly the same as the first one.
Here is the sample code:
string pulse = arrvaluedline[2].ToString();
pCost = float.Parse(arrvaluedline[3]);
double d = System.Convert.ToDouble(spCost);
double dd = Math.Round(d,2);
string[] arrpulse = pulse.Split(':');
vodanoofPulse = float.Parse(arrpulse[0]);
calculatedCost = CallCost * Pulse;
double dcalcost = Math.Round(calculatedCost, 2);
Now here I am trying to compare:
if (dcalcost.Equals(spCost)){
}
Although both my values, dcalcost and spCost, are 0.4, the flow is not going inside the if. Why? Please help me.
The Equals method should be used with caution, because two apparently equivalent values can be unequal due to the differing precision of the two values. The following example reports that the Double value .333333 and the Double value returned by dividing 1 by 3 are unequal.
// Initialize two doubles with apparently identical values
double double1 = .333333;
double double2 = (double)1 / 3; // cast first; plain 1/3 is integer division and yields 0

// Compare them for equality
Console.WriteLine(double1.Equals(double2)); // displays false
Comparing doubles is not as easy as one might think. Here is an example from MSDN of how you can do it in a better way.
// Initialize two doubles with apparently identical values
double double1 = .333333;
double double2 = (double)1 / 3;

// Define the tolerance for variation in their values
double difference = Math.Abs(double1 * .00001);

if (Math.Abs(double1 - double2) <= difference)
    Console.WriteLine("double1 and double2 are equal.");
else
    Console.WriteLine("double1 and double2 are unequal.");