When inserting a float into a SQL Server database, I'm getting:
5.03000020980835
which is not the exact value that was entered.
How am I gathering this float? Through a TextBox control whose text is converted to float.
How I'm working with the data currently:
private void PayRateTextBox_TextChanged(object sender, EventArgs e)
{
    PayRateBox = float.Parse(PayRateTextBox.Text);
}
The above is setting an internal float to the current updated textbox which is set:
private float PayRateBox = 0;
Then inserted as:
string Query = "INSERT INTO shifts (ShiftDate,WeekNumber,Hours,PayType,Rate) " +
               "VALUES(@Date, @WeekNo, @Hours, @PayType, @Rate)";
and bound to the query via bound parameters:
CMD.Parameters.Add("@Rate", SqlDbType.Float);
CMD.Parameters["@Rate"].Value = PayRateBox;
The default text in the TextBox is 5.03, so somewhere along the line extra digits are being appended to the value. I have tried to trace where this happens, but cannot find out how or why. Perhaps I'm overlooking something?
The precision of float is limited to about 7 significant digits. In your case that means 5.030000; the values at the remaining decimal places are undefined.
To improve precision, use either double, with 15-16 significant digits, or decimal, with 28-29 significant digits.
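To see where the extra digits come from, widen the float to double and print it. The sketch below is standalone (the parse mirrors the question's TextBox conversion; SQL Server's float column is a 64-bit double, so the same widening happens on insert):

```csharp
using System;
using System.Globalization;

class FloatWidening
{
    static void Main()
    {
        // Same conversion the TextChanged handler performs.
        float rate = float.Parse("5.03", CultureInfo.InvariantCulture);

        // Widening the 32-bit float to a 64-bit double exposes the
        // digits beyond float's ~7 significant digits of precision.
        double widened = rate;
        Console.WriteLine(widened.ToString("G15", CultureInfo.InvariantCulture));
        // prints 5.03000020980835 -- the value the database reported

        // decimal stores 5.03 exactly, so nothing extra appears.
        decimal exact = decimal.Parse("5.03", CultureInfo.InvariantCulture);
        Console.WriteLine(exact.ToString(CultureInfo.InvariantCulture));  // prints 5.03
    }
}
```

If the column can be changed, declaring it as a SQL decimal (e.g. decimal(10,2)) and binding with SqlDbType.Decimal avoids the artifact entirely.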
I'm trying to create a system where numbers like 1,000 get converted to 1k, 1,000,000 to 1m, etc. The code works great until I input a number like 8,850,000: instead of spitting out 8.85B, it spits out 8.849999B. Is this a quirk of Unity and/or C#, or did I do something wrong in the code?
Edit: It wasn't the Mathf.Round function, it was the limitations of the float data type. I managed to fix the problem by using the decimal data type instead.
using System.Collections;
using System.Collections.Generic;
using UnityEngine.UI;
using UnityEngine;

public class MoneyManager : MonoBehaviour
{
    public Text txt;
    private static float totalMoney;
    public float perClick;
    private float decimalMoney;
    private float roundedMoney;
    private string[] numberletter = new string[] { "K", "B", "T" };
    private string correctletter;

    void Start()
    {
        CookieClick.clickAmount = perClick;
    }

    void Update()
    {
        totalMoney = CookieClick.money;
        roundedMoney = Mathf.Floor(Mathf.Log10(totalMoney));
        if (roundedMoney >= 3 && roundedMoney < 6)
        {
            correctletter = numberletter[0];
            decimalMoney = Mathf.Round(totalMoney / 1000f * 100.0f) * 0.01f;
            Debug.Log(decimalMoney);
            txt.text = decimalMoney.ToString() + correctletter;
        }
        else if (roundedMoney >= 6 && roundedMoney < 9)
        {
            correctletter = numberletter[1];
            Debug.Log(totalMoney);
            decimalMoney = Mathf.Round(totalMoney / 1000000f * 100.0f) * 0.01f;
            Debug.Log(decimalMoney);
            txt.text = decimalMoney.ToString() + correctletter;
        }
        else
        {
            txt.text = totalMoney.ToString();
        }
    }
}
It's nothing to do with the rounding, it's just that a number like 8.85 can't be exactly represented as a float.
Floating point numbers are stored in "scientific notation" in base 2. If you write 8.85 in base 2 it's
1000.11011001100110011001100..._2 (repeating forever)
so in the computer this will be stored as
1.00011011001100110011010_2 x 2^3
where the "mantissa" on the left has been rounded to the 23 bits a float can hold.
Because of this rounding of the binary expansion, the number that's actually stored is equal to
8.85000038146972656
in this case. That tiny error is then amplified by the later float arithmetic (dividing by 1000000f, rounding, multiplying by 0.01f, each of which rounds again), and when the computer converts the result back to decimal to print it in a human-readable form you see 8.849999.
This isn't anything special to the programming language; it's just how numbers work when you can only store so many places.
As Johan Donne mentions in the comments, you can instead instruct the computer to store this number as
885 x 10^(-2)
by using the decimal type instead of float; alternatively, you could round the final result to few enough decimal places that it displays as 8.85.
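You can print the value a float actually stores from plain C#. This sketch is independent of the Unity code (Math.Round stands in for Mathf.Round):

```csharp
using System;
using System.Globalization;

class StoredFloat
{
    static void Main()
    {
        // The exact-ish value stored for the literal 8.85f,
        // shown by widening to double and printing 17 digits:
        float f = 8.85f;
        Console.WriteLine(((double)f).ToString("G17", CultureInfo.InvariantCulture));
        // prints 8.8500003814697266

        // Reproducing the chain from the question's code: every float
        // operation rounds again, and the error surfaces in the output.
        float display = (float)Math.Round(8850000f / 1000000f * 100.0f) * 0.01f;
        Console.WriteLine(display.ToString(CultureInfo.InvariantCulture));  // prints 8.849999
    }
}
```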
This is not caused by Mathf.Round but by the data type you are using: float stores numbers with binary fractions, so it cannot always represent a decimal value exactly. Sometimes, when you amplify the error by multiplying by a large value, or when doing many chained calculations, these errors become visible.
Specifically to avoid those small errors, there is the datatype decimal, which stores values with decimal fractions. Whenever you want exact results with decimal numbers (e.g. in financial calculations), you should use decimal, not float or double.
So, use decimal instead of float (for the literals as well, e.g. replace 100.0f with 100.0m).
Note: not only does decimal store decimal values without rounding errors (even after multiple chained calculations), it also has a much higher precision: 28 digits versus 15 digits for double.
On the other hand, the range of decimal is smaller than that of double: see https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/builtin-types/floating-point-numeric-types for an overview.
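Here is the rounding step from the question rewritten with decimal, as a minimal plain-C# sketch (Math.Round in place of Mathf.Round; the division is exact, so no stray digits appear):

```csharp
using System;

class DecimalMoney
{
    static void Main()
    {
        decimal totalMoney = 8850000m;

        // Divide and round to two places entirely in decimal:
        // 8850000 / 1000000 is exactly 8.85, with no binary fraction error.
        decimal scaled = Math.Round(totalMoney / 1000000m, 2);
        Console.WriteLine(scaled + "B");  // prints 8.85B
    }
}
```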
What is the safe range of double that can be cast to float without losing any data (both with and without a fractional part)?
Example:
double value = 1423210126.00f;
float floatVal = (float)value;
//floatVal = 1423210112 if we print. There is a data loss of +14.
My Observation:
If the double has 7 digits (or 8 including the "."), the value is cast to float without loss. Is this always true?
First, you shouldn't add f to:
double value = 1423210126.00f;
because f marks a float literal, so you are actually saying that your number is a float when it should be a double. Try my example:
private void Form1_Load(object sender, EventArgs e)
{
    double y = 123456.123456789123456789;
    float x = 123456.123456789123456789f;
    MessageBox.Show(x.ToString());
    MessageBox.Show(y.ToString());
}
you can see that the printed numbers are:
123456.1
123456.123456789
so float can hold about 6-7 significant digits and double about 14-15. In this example, if you convert the double to float it will keep only those 6-7 digits (counting digits after the "." as well); while your number is within float's precision it will be accurate, otherwise you will lose data (anything beyond those digits). Also see the min and max values:
and see the below post for more info.
The 7th digit of a float and the 15th digit of a double may or may not be accurate, depending on the particular value.
float is 32 bits and double is 64 bits, so a float has half the storage of a double; in particular, its mantissa is 23 bits versus double's 52, which is why it holds far fewer significant digits.
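For whole numbers the safe range can be stated exactly: a float has a 24-bit significand (23 stored bits plus the implicit leading 1), so every integer with magnitude up to 2^24 = 16,777,216 survives the round-trip, and beyond that gaps appear. A small sketch using the question's value:

```csharp
using System;

class FloatIntRange
{
    static void Main()
    {
        const int maxExact = 1 << 24;  // 16777216

        // Comparing through double avoids C#'s implicit int-to-float
        // promotion, which would otherwise hide the loss.
        Console.WriteLine((double)(float)maxExact == maxExact);           // True
        Console.WriteLine((double)(float)(maxExact + 1) == maxExact + 1); // False: 16777217 rounds to 16777216

        // The question's value is around 2^30, where adjacent floats are 128 apart:
        double value = 1423210126.0;
        float floatVal = (float)value;
        Console.WriteLine(floatVal.ToString("F0"));  // prints 1423210112
    }
}
```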
I have a decimal database column decimal (26,6).
As far as I can gather this means a precision of 26 and a scale of 6.
I think this means that the number can be a total of 26 digits in length and 6 of these digits can be after the decimal place.
In my WPF / C# frontend I need to validate an incoming decimal so that I can be sure that it can be stored in SQL Server without truncation etc.
So my question is there a way to check that decimal has a particular precision and scale.
Also as an aside I have heard that SQL Server stores decimal in a completely different way to the CLR, is this true and if so is it something I need to worry about?
A straightforward way to determine whether a decimal number's precision and scale exceed 26,6 is to check the length of its string equivalent:
public static bool WillItTruncate(double dNumber, int precision, int scale) {
    // "0." + enough '#' placeholders keeps all fractional digits
    // ("#.#" would keep only one); the sign is stripped before counting.
    string[] dString = dNumber.ToString("0." + new string('#', 20), CultureInfo.InvariantCulture)
                              .TrimStart('-')
                              .Split('.');
    return dString[0].Length > (precision - scale) ||
           (dString.Length > 1 && dString[1].Length > scale);
}
The maximum precision for C# decimal datatype seems to be 29 digits whereas SQL decimal can have 38 digits. So you may not be hitting the maximum value of SQL decimal from C#.
If you already know destination scale and precision of decimal type at compile time, do simple comparison. For decimal(13,5):
public static bool IsValidDecimal13_5(decimal value)
{
return -99999999.99999M <= value && value <= 99999999.99999M;
}
In C#, is it possible to perform ToString on a float and get the value without using exponentials?
For example, consider the following:
float dummy;
dummy = 0.000006F;
Console.WriteLine(dummy.ToString());
This gives the output
6E-06
However, what I want is
0.000006
The closest I could find was the "F" format specifier; however, I then need to specify the number of decimal places, otherwise the value gets rounded.
Is there a way of doing this automatically, or do I need a load of funky logic to either trim zeroes or figure out the number of required decimals?
Thanks;
Richard Moss
Try this:
Console.WriteLine(dummy.ToString("F"));
You can also specify the number of decimal places, for example F5, F3, etc. (plain "F" defaults to two decimal places, which would print 0.00 here, so you will usually want an explicit precision).
Also, you can check the custom format specifiers:
Console.WriteLine(dummy.ToString("0.#########"));
string dum = string.Format(CultureInfo.InvariantCulture, "{0:f99}", dummy).TrimEnd('0');
if (dum.EndsWith(".")) dum = dum.Remove(dum.Length - 1);
Without some further background info, it's hard to tell - but it sounds like you want decimal semantics. So why not use the decimal type instead?
decimal dummy;
dummy = 0.000006M;
The decimal type is more accurate at representing decimal numbers than float or double, but it is not as performant. See here for more info.
Console.WriteLine(dummy.ToString("N5"));
where 5 is the number of decimal places (for the example value 0.000006 you would need at least N6, and note that "N" also inserts group separators).
float dummy = 0.000006F;
Console.WriteLine(dummy.ToString("0." + new string('#', 60)));
If you'll be doing this a lot then it makes sense to store the format string in a static field/property somewhere and re-use it, rather than constructing a new string every time:
private static readonly string _allFloatDigits = "0." + new string('#', 60);
// ...
float dummy = 0.000006F;
Console.WriteLine(dummy.ToString(_allFloatDigits));
I'm writing a routine that validates data before inserting it into a database, and one of the steps is to see if numeric values fit the precision and scale of a Numeric(x,y) SQL-Server type.
I have the precision and scale from SQL-Server already, but what's the most efficient way in C# to get the precision and scale of a CLR value, or at least to test if it fits a given constraint?
At the moment, I'm converting the CLR value to a string, then looking for the location of the decimal point with .IndexOf(). Is there a faster way?
System.Data.SqlTypes.SqlDecimal.ConvertToPrecScale(new SqlDecimal(1234.56789), 8, 2)
gives 1234.57. It will truncate extra digits after the decimal place, and will throw an error rather than try to truncate digits before the decimal place (i.e. ConvertToPrecScale(12344234, 5, 2) throws).
Without triggering an exception, you could use the following method to determine if the value fits the precision and scale constraints.
private static bool IsValid(decimal value, byte precision, byte scale)
{
    var sqlDecimal = new SqlDecimal(value);
    var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
    var allowedDigitsToLeftOfDecimal = precision - scale;
    return
        actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal &&
        sqlDecimal.Scale <= scale;
}
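For example, checking a few values against decimal(26,6) with that method (a standalone sketch; the method body is copied from the answer above, and the outputs assume SqlDecimal's reported precision and scale):

```csharp
using System;
using System.Data.SqlTypes;

class PrecisionScaleDemo
{
    private static bool IsValid(decimal value, byte precision, byte scale)
    {
        var sqlDecimal = new SqlDecimal(value);
        var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
        var allowedDigitsToLeftOfDecimal = precision - scale;
        return
            actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal &&
            sqlDecimal.Scale <= scale;
    }

    static void Main()
    {
        // decimal(26,6): up to 20 digits before the point, 6 after.
        Console.WriteLine(IsValid(1234.56789m, 26, 6));  // True: 4 digits left, 5 right
        Console.WriteLine(IsValid(0.1234567m, 26, 6));   // False: 7 fractional digits exceed scale 6
    }
}
```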
Here's a maths based approach.
private static bool IsValidSqlDecimal(decimal value, int precision, int scale)
{
var minOverflowValue = (decimal)Math.Pow(10, precision - scale) - (decimal)Math.Pow(10, -scale) / 2;
return Math.Abs(value) < minOverflowValue;
}
This takes into account how sql server will do rounding and prevent overflow errors, even if we exceed the precision. For example:
DECLARE @value decimal(10,2)
SET @value = 99999999.99499 -- Works
SET @value = 99999999.995 -- Error
You can use decimal.Truncate(value) to get the integral part of the value and decimal.Remainder(value, 1) to get the part after the decimal point, and then check that each part meets your constraints (this can be a simple > or < check).
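A sketch of that idea; the helper name `Fits` and the digit checks are mine, not from the answer (it treats any value needing more than `scale` fractional digits or more than `precision - scale` integral digits as a misfit):

```csharp
using System;

class TruncateRemainderCheck
{
    // Hypothetical helper: true if value fits decimal(precision, scale).
    static bool Fits(decimal value, int precision, int scale)
    {
        // Compare the integral part against the largest allowed magnitude.
        decimal integral = Math.Abs(decimal.Truncate(value));
        decimal maxIntegral = Pow10(precision - scale) - 1;
        if (integral > maxIntegral) return false;

        // Scale the fractional part up; if anything remains past `scale`
        // digits, the value would be truncated/rounded by SQL Server.
        decimal fractional = Math.Abs(decimal.Remainder(value, 1m));
        return decimal.Remainder(fractional * Pow10(scale), 1m) == 0m;
    }

    static decimal Pow10(int n)
    {
        decimal result = 1m;
        for (int i = 0; i < n; i++) result *= 10m;
        return result;
    }

    static void Main()
    {
        Console.WriteLine(Fits(1234.567890m, 26, 6));  // True
        Console.WriteLine(Fits(0.1234567m, 26, 6));    // False: 7 fractional digits
        Console.WriteLine(Fits(123456789.12m, 10, 2)); // False: 9 digits before the point
    }
}
```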