I have a database column of type decimal(26,6).
As far as I can gather this means a precision of 26 and a scale of 6.
I think this means the number can be a total of 26 digits long, of which 6 can be after the decimal point (leaving up to 20 before it).
In my WPF / C# frontend I need to validate an incoming decimal so that I can be sure that it can be stored in SQL Server without truncation etc.
So my question is: is there a way to check that a decimal has a particular precision and scale?
Also, as an aside, I have heard that SQL Server stores decimal in a completely different way to the CLR. Is this true, and if so, is it something I need to worry about?
A straightforward way to determine whether the precision and scale of a decimal number exceed 26,6 is to check the length of its string equivalent.
public static bool WillItTruncate(decimal dNumber, int precision, int scale) {
    // decimal.ToString never uses scientific notation, so the digits split cleanly
    // at the invariant "." separator; Math.Abs keeps a sign from skewing the count.
    string[] dString = Math.Abs(dNumber).ToString(CultureInfo.InvariantCulture).Split('.');
    return dString[0].Length > (precision - scale)
        || (dString.Length > 1 && dString[1].Length > scale);
}
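For example, a quick check against the 26,6 column in the question (sample values chosen for illustration):

bool tooWide = WillItTruncate(12345.1234567m, 26, 6);   // true: 7 digits after the decimal point exceed the scale of 6
bool fits = WillItTruncate(12345.123456m, 26, 6);       // false: 5 integer digits and 6 decimal digits fit within 26,6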
The maximum precision of the C# decimal datatype is 28 or 29 significant digits, whereas a SQL Server decimal can have up to 38 digits. So you may not be able to hit the maximum value of a SQL decimal from C#.
If you already know the destination precision and scale of the decimal type at compile time, a simple range comparison will do. For decimal(13,5), which allows 13 - 5 = 8 digits before the decimal point:
public static bool IsValidDecimal13_5(decimal value)
{
    return -99999999.99999M <= value && value <= 99999999.99999M;
}
In C#, to convert an int to a float, we just do something like float floatNumber = intNumber or Convert.ToSingle(intNumber). However, for a large number such as 999999999, the conversion does not preserve the value: it comes out as the unwanted number 1E+09. So the question is, is it possible to convert that large integer into the float number I want?
A 32-bit float can't exactly represent an integer that large: it only has 24 bits of significand with which to do it (one of them is implicit in the format). In 24 bits you can represent up to 16777215; 16777216 also fits because it is a power of two. 999999999, however, can't be written as a number with at most 24 significant bits multiplied by a power of 2, so it gets rounded to the nearest representable float, which is 1000000000.
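A short demonstration of that rounding (the double line is shown for contrast, since a double's 53-bit significand can hold this value exactly):

int intNumber = 999999999;
float floatNumber = intNumber;    // rounded to the nearest representable float: 1,000,000,000
double doubleNumber = intNumber;  // exact: a double has 53 significand bits
Console.WriteLine(floatNumber);   // 1E+09
Console.WriteLine(doubleNumber);  // 999999999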
See this answer on SO: https://stackoverflow.com/a/3793950/751579
For more information look up details on IEEE floating point 32-bit and 64-bit formats.
Can you use decimal type?
Console.WriteLine(999999999.0f.ToString("N"));
Console.WriteLine(999999999.0m.ToString("N"));
prints
1,000,000,000.00
999,999,999.00
The reference documentation even has an example for a very large number:
In this example, the output is formatted by using the currency format string. Notice that x is rounded because the decimal places exceed $0.99. The variable y, which represents the maximum exact digits, is displayed exactly in the correct format.
public class TestDecimalFormat
{
    static void Main()
    {
        decimal x = 0.999m;
        decimal y = 9999999999999999999999999999m;

        Console.WriteLine("My amount = {0:C}", x);
        Console.WriteLine("Your amount = {0:C}", y);
    }
}
/* Output:
    My amount = $1.00
    Your amount = $9,999,999,999,999,999,999,999,999,999.00
*/
I have been looking for some way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or they have misleading titles (they are really about SQL Server or some other database, not C#), or no answers at all. The following post is, I think, the closest to what I'm after, but even it seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale(), and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    // arrange
    var number = 12345.67890M;

    // act
    var scale = number.Scale();
    var precision = number.Precision();

    // assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said, convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is by removing the decimal point (i.e. using the Decimal constructor to reconstruct the number without the scale obtained above) and then using the logarithm:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;

        int[] bits = decimal.GetBits(value);
        // We will use false for the sign (false = positive), because we don't care about it.
        // We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
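A quick check with the number from the question:

decimal number = 12345.67890M;
Console.WriteLine(number.GetScale());      // 5
Console.WriteLine(number.GetPrecision());  // 10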
And here is a fiddle.
First of all, solve the "physical" problem: how are you going to decide which digits are significant? The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are 2 fundamental ways to determine each digit (and thus, their number):
get+interpret the meaningful parts
calculate mathematically
The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatInfo) is actually a reliable way since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it acts as the minimum number of digits, so set it to 0 (the docs are unclear on this); trailing zeros are still printed if there are any.
The semantics of the GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it is really just a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
flags:
bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
(thus (flags>>16)&0xFF is the raw value of this field)
bit 31 - sign (doesn't concern us)
as you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
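A minimal sketch of that counting approach, assuming System.Numerics is available (the helper name and the tuple return are mine):

using System;
using System.Numerics;

public static class DecimalDigits
{
    // Rebuild the 96-bit integer from GetBits() and count its decimal digits.
    public static (int Precision, int Scale) Count(decimal d)
    {
        int[] bits = decimal.GetBits(d);
        int scale = (bits[3] >> 16) & 0xFF;            // number of fractional digits
        BigInteger mantissa = (uint)bits[2];           // high
        mantissa = (mantissa << 32) | (uint)bits[1];   // mid
        mantissa = (mantissa << 32) | (uint)bits[0];   // low
        int precision = mantissa.IsZero ? 1 : mantissa.ToString().Length;
        return (precision, scale);
    }
}

// DecimalDigits.Count(12345.67890M) returns (10, 5).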
Racil's answer gives you the decimal's internal scale value, which is correct, although if the internal representation ever changes things will get interesting.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        // Use the invariant culture so the separator is always "." regardless of locale.
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
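Using the value from the question:

decimal number = 12345.67890M;
Console.WriteLine(number.GetInfo());   // Scale=5, Length=10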
I'm working on a C# (Unity3D-compatible = .NET 2.0) JSON library and I'm having precision problems. Firstly, I have this logic to parse number strings:
...
string jsonPart = "-1.7555215491128452E-19";
long longValue = 0;
if (long.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out longValue))
{
    if (longValue > int.MaxValue || longValue < int.MinValue)
    {
        jsonPartValue = new JsonBasic(longValue);
    }
    else
    {
        jsonPartValue = new JsonBasic((int)longValue);
    }
}
else
{
    decimal decimalValue = 0;
    if (decimal.TryParse(jsonPart, NumberStyles.Any, CultureInfo.InvariantCulture, out decimalValue))
    {
        jsonPartValue = new JsonBasic(decimalValue);
    }
}
...
The problem comes because the decimal type is not always the best type for numbers with many digits. Here is an output log showing the problem (using .ToString()):
String = "-1.7555215491128452E-19"
Float Parsed : -1.755522E-19
Double parsed : -1.75552154911285E-19
Decimal Parsed : -0.0000000000000000001755521549
but on the other hand, in these examples the decimal value is the right one:
String = "0.1666666666666666666"
Float Parsed : 0.1666667
Double parsed : 0.166666666666667
Decimal Parsed : 0.1666666666666666666
String = "-1.30142114406914976E17"
Float Parsed : -1.301421E+17
Double parsed : -1.30142114406915E+17
Decimal Parsed : -130142114406914976
I suppose there are many other cases that can tip the balance toward one type or the other.
Is there any smart way to parse these strings while losing the minimum amount of precision?
The difference you are seeing is because, although decimal can hold up to 28 or 29 digits of precision compared to double's 15 or 16 digits, its range is much smaller than double's.
A decimal has a range of (-7.9 x 10^28 to 7.9 x 10^28) / (10^(0 to 28))
A decimal stores ALL the digits, including zeros after a decimal point that is preceded by a zero (e.g. 0.00000001) - i.e. it doesn't store numbers using an exponential format.
A double has a range of ±5.0 × 10^−324 to ±1.7 × 10^308
A double can store a number using exponential format which means it doesn't have to store the leading zeroes in a number like 0.0000001.
The consequence of this is that for numbers that are at the edges of the decimal range, it actually has less precision than a double.
For example, consider the number -1.7555215491128452E-19:
Converting that to non-exponential notation you get:
-0.00000000000000000017555215491128452
Counting them, there are 35 digits after the decimal point, which exceeds what a decimal can hold.
As you have observed, when you print that number out after storing it in a decimal, you get:
-0.0000000000000000001755521549
which gives you only 28 digits after the decimal point, the maximum scale a decimal supports per Microsoft's specification.
A double, however, stores its numbers using exponential notation which means that it doesn't store all the leading zeroes, which allows it to store that particular number with greater precision.
For example, a double stores -0.00000000000000000017555215491128452 as an exponential number with 15 or 16 digits of precision.
If you take 15 digits of precision from the above number you get:
-0.000000000000000000175552154911285
which is indeed what is printed out if you do this:
double d = -1.7555215491128452E-19;
Console.WriteLine(d.ToString("F35"));
I need to compare two values.
One value is price, which represents the current price of something; it is a decimal because you can actually buy or sell something at this price.
The other value is estimatedPrice, which is the result of mathematical calculations; it is a double because it is just an estimate and you cannot actually perform any "operations" at this estimatedPrice.
Now I want to buy if price < estimatedPrice.
So should I cast price to double, or estimatedPrice to decimal?
I understand that after casting I will get "slightly different" numbers, but it seems I don't have other options.
It depends on the data. Decimal has greater precision; double has greater range. If the double could be outside the decimal range, you should cast the decimal to double (or indeed you could write a method that returns a result without casting, if the double value is outside the decimal range; see example below).
In this case, it seems unlikely that the double value would be outside the decimal range, so (especially since you're working with price data) you should cast the double to decimal.
Example (could use some logic to handle NaN values):
private static int Compare(double d, decimal m)
{
    const double decimalMin = (double)decimal.MinValue;
    const double decimalMax = (double)decimal.MaxValue;

    if (d < decimalMin) return -1;
    if (d > decimalMax) return 1;
    return ((decimal)d).CompareTo(m);
}
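For the original question ("buy if price < estimatedPrice"), usage would look like this (sample values):

decimal price = 10.50m;            // the actual price (sample value)
double estimatedPrice = 10.75;     // the calculated estimate (sample value)
bool buy = Compare(estimatedPrice, price) > 0;   // true here: the estimate is above the price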
decimal vs double! - Which one should I use and when?
If you're more concerned with precision, convert to decimal. If you're not, go with doubles.
I've never worked with prices before in software, but from what I've heard, many deal with integers. For example, for $1.23, store it as the integer 123. The conversion to a decimal is done at the very end when you output results to the user.
Similarly, for your estimated price, can you deal with numbers that are (say) 100 times larger?
I recommend converting to decimal, because it seems you are manipulating money values. The short answer: for accuracy (especially when manipulating money values in financial applications), use decimal; if you prefer lower resource usage and more speed, use double.
I'm writing a routine that validates data before inserting it into a database, and one of the steps is to see if numeric values fit the precision and scale of a Numeric(x,y) SQL-Server type.
I have the precision and scale from SQL-Server already, but what's the most efficient way in C# to get the precision and scale of a CLR value, or at least to test if it fits a given constraint?
At the moment, I'm converting the CLR value to a string, then looking for the location of the decimal point with .IndexOf(). Is there a faster way?
System.Data.SqlTypes.SqlDecimal.ConvertToPrecScale(new SqlDecimal(1234.56789), 8, 2)
gives 1234.57. It rounds away extra digits after the decimal place, but it will throw an error rather than try to drop digits before the decimal place (e.g. ConvertToPrecScale(12344234, 5, 2)).
Without triggering an exception, you could use the following method to determine if the value fits the precision and scale constraints.
private static bool IsValid(decimal value, byte precision, byte scale)
{
    var sqlDecimal = new SqlDecimal(value);

    var actualDigitsToLeftOfDecimal = sqlDecimal.Precision - sqlDecimal.Scale;
    var allowedDigitsToLeftOfDecimal = precision - scale;

    return
        actualDigitsToLeftOfDecimal <= allowedDigitsToLeftOfDecimal &&
        sqlDecimal.Scale <= scale;
}
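For a numeric(26,6) column, for example (sample values):

bool ok = IsValid(12345678901234567890.123456m, 26, 6);   // true: 20 integer digits and 6 decimal digits fit
bool tooManyDecimals = IsValid(1.2345678m, 26, 6);        // false: 7 digits after the decimal point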
Here's a maths-based approach.
private static bool IsValidSqlDecimal(decimal value, int precision, int scale)
{
    var minOverflowValue = (decimal)Math.Pow(10, precision - scale) - (decimal)Math.Pow(10, -scale) / 2;
    return Math.Abs(value) < minOverflowValue;
}
This takes into account how SQL Server does rounding, and it avoids overflow errors even when the supplied value has more digits than the declared precision. For example:
DECLARE @value decimal(10,2)
SET @value = 99999999.99499 -- Works
SET @value = 99999999.995   -- Error
You can use decimal.Truncate(val) to get the integral part of the value and decimal.Remainder(val, 1) to get the part after the decimal point, and then check that each part meets your constraints (I'm guessing this can be a simple > or < check).
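A rough sketch of that idea (the helper name and the way the bounds are derived from precision and scale are my own assumptions):

static bool FitsPrecisionScale(decimal value, int precision, int scale)
{
    // Integral part must stay below 10^(precision - scale).
    decimal integral = Math.Abs(decimal.Truncate(value));
    decimal maxIntegral = (decimal)Math.Pow(10, precision - scale);

    // Fractional part, shifted left by `scale` digits, must leave no remainder.
    decimal fraction = Math.Abs(decimal.Remainder(value, 1m));
    bool scaleOk = decimal.Remainder(fraction * (decimal)Math.Pow(10, scale), 1m) == 0m;

    return integral < maxIntegral && scaleOk;
}

For example, FitsPrecisionScale(1.2345678m, 26, 6) returns false, while FitsPrecisionScale(12345.123456m, 26, 6) returns true.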