Based on this thread (decimal vs double!), decimal should always be used for money. What is the proper way to define a percentage, like TaxPercent? If it's a double, then calculating amount * 8% would require a cast.
What's the proper way to define a percent value (i.e. tax), and what would the calculation look like?
Use the 'm' suffix to specify a literal as a decimal. So it must be 0.08m to ensure a double doesn't creep into the calculation.
decimal tax = amount * 0.08m;
You'll find a list of valid suffix characters in this post.
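A minimal sketch putting this together (the amount and rate are made-up values for illustration):

```csharp
using System;

// Store both money and percentages as decimal; the m suffix keeps every
// literal in the expression decimal, so no double ever creeps in.
decimal amount = 129.99m;
decimal taxPercent = 0.08m;           // 8% stored as a fraction

decimal tax = amount * taxPercent;    // 10.3992m, computed exactly
decimal total = amount + tax;

// Round to cents only at the edge, when displaying or storing.
Console.WriteLine(Math.Round(tax, 2));    // 10.40
Console.WriteLine(Math.Round(total, 2));  // 140.39
```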
Related
I want to print floats with greater precision than is the default.
For example, when I print the value of PI from a float I get 6 decimals. But if I copy the same value from the float into a double and print it, I get 14 decimals.
Why do I get more precision when printing the same value but as a double?
How can I get Console.WriteLine() to output more decimals when printing floats without needing to copy it into a double first?
I also tried the 0.0000000000 format, but it did not print with more precision; it just added more zeroes. :-/
My test code:
float x = (float)Math.PI;
double xx = x;
Console.WriteLine(x);
Console.WriteLine(xx);
Console.WriteLine($"{x,18:0.0000000000}"); // Try to force more precision
Console.WriteLine($"{xx,18:0.0000000000}");
Output:
3,141593
3,14159274101257
3,1415930000 <-- The program just added more zeroes :-(
3,1415927410
I also tried to enter PI at https://www.h-schmidt.net/FloatConverter/IEEE754.html
The binary float representation of PI is 0b01000000010010010000111111011011
So the value is: 2 * 0b1.10010010000111111011011 = 2 * 1.57079637050628662109375 = 3.1415927410125732421875
So there are more decimals to output. How would I get C# to output this whole value?
There is no more precision in a float, and converting it to a double doesn't add any. The conversion itself is exact, but the extra digits you see in the double print are just the longer decimal expansion of the very same float value - they don't bring you any closer to π. This has everything to do with how binary floating point numbers work.
Let's look at the binary representation of the float:
01000000010010010000111111011011
The first bit is the sign, the next eight are the exponent, and the rest is the mantissa. What do we get when we cast it to a double? The exponent stays the same, and the mantissa is padded with zeroes - which preserves the value exactly. Printed in full, that double is 3.1415927410125732 (rounded, as always with binary floats), whereas the correctly rounded double value of π is 3.141592653589793. You get the illusion of greater precision, but it's only an illusion - the number is no more accurate an approximation of π than the float was.
There's no 1:1 mapping here in the other direction, either: many different double values round to the same float, and the short decimal string that identifies the float ("3.141593") stands for that whole family of doubles. If it's the decimal representation you care about, every cast of float to double should be followed by a rounding operation. Consider this code instead:
double.Parse(((float)Math.PI).ToString())
Instead of casting the float to a double, I first changed it to a decimal representation (in a string), and created the double from that. Now instead of having a "truncated" double, you have a proper double that doesn't lie about extra precision; when you print it out, you get 3.1415930000. Still rounded, of course, since it's still a binary->decimal conversion, but no longer pretending to have more precision than is actually there - the rounding happens at a later digit than in the float version, the zeroes are really zeroes, except for the last one (which is only approximately zero).
If you want real decimal precision, use a decimal-based representation (e.g. a scaled int, or decimal). float and double are both binary numbers, and only have binary precision. A finite decimal number isn't necessarily finite in binary, just like a finite ternary number isn't necessarily finite in decimal (e.g. 1/3 doesn't have a finite decimal representation).
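As a side note to the answer above: if the goal is simply to see every digit the float actually carries, the round-trip format strings do that without any casting. A small sketch:

```csharp
using System;

float x = (float)Math.PI;

// "G9" prints up to nine significant digits, which is always enough
// to uniquely identify (round-trip) a float value.
Console.WriteLine(x.ToString("G9"));   // 3.14159274 (separator depends on culture)

// On modern .NET, plain ToString() already produces the shortest
// string that round-trips the float value.
Console.WriteLine(x);
```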
A float in C# has a precision of roughly 7 significant digits in total, wherever the decimal point falls - for π that happens to mean 1 digit before the point and 6 after.
If you do have any more digits in your output, they may be entirely wrong.
If you do need more digits, use either double, which has 15-16 significant digits, or decimal, which has 28-29.
See MSDN for reference.
You can easily verify that as the digits of PI are a little different from your output. The correct first 40 digits are: 3.14159 26535 89793 23846 26433 83279 50288 4197
To print the value with 6 places after the decimal point:
Console.WriteLine("{0:N6}", (double)2/3);
Output :
0.666667
An SO user told me that I should not use float/double for things like student grades; see the last comments on: SequenceEqual() is not equal with custom class and float values
"Because it is often easier and safer to do all arithmetic in integers, and then convert to a suitable display format at the last minute, than it is to attempt to do arithmetic in floating point formats. "
I tried what he said but the result is not satisfying.
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100);
result is "3,0"
double grade3 = 5.80d;
double grade4 = 2.10d;
double average1 = (grade3 + grade4) / 2;
double averageFinal = Math.Round(average1);
string result1 = string.Format("{0:0.0}", averageFinal);
result1 is "4,0"
I would expect 4,0, because 3,95 should round up to 4,0. That works in the second version because I use Math.Round, which is only available for double and decimal - it would not work on an integer.
So what am I doing wrong here?
First of all, the specific problem you cite is one that vexes me greatly. You almost never want to do a problem in integer arithmetic and then convert it to a floating point type, because the computation will be done entirely in integers. I wish the C# compiler warned about this one; I see this all the time.
Second, the reason to prefer integer or decimal arithmetic to double arithmetic is that a double can only represent with perfect accuracy a fraction whose denominator is a power of two. When you say 0.1 in a double, you don't get 1/10, because 1/10 is not a fraction whose denominator is any power of two. You get the fraction that is closest to 1/10 that does have a power of two in the denominator.
This usually is "close enough", right up until it isn't. It is particularly nasty when you have tiny errors close to hard cutoffs. You want to say, for instance, that a student must have a 2.4 GPA in order to meet some condition, and the computations you do, involving fractions with powers of two in the denominator, just happen to work out to 2.39999999999999999956...
Now, you do not necessarily get away from these problems with decimal arithmetic; decimal arithmetic has the same restriction: it can only represent numbers that are fractions with powers of ten in the denominator. You try to represent 1/3, and you're going to get a small, but non-zero error on every computation.
Thus the standard advice is: if you are doing any computation where you expect exact arithmetic on fractions whose denominators are powers of ten, such as financial computations, use decimal, or do the computation entirely in integers, scaled appropriately. If you're doing computations that involve physical quantities, where there is no inherent "base" to the computations, then use double.
So why use integer over decimal or vice versa? Integer arithmetic can be smaller and faster; decimals take more time and space. But ultimately you should not worry about these small performance differences: pick the data type that most accurately reflects the mathematical domain you are working in, and use it.
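The power-of-the-denominator point can be seen directly in code (a small sketch; "G17" forces enough digits to show what a double actually stores):

```csharp
using System;

// 1/10 has no power-of-two denominator, so the double is only close:
double tenth = 0.1;
Console.WriteLine(tenth.ToString("G17"));   // 0.10000000000000001

// 1/4 does, so it is represented exactly:
double quarter = 0.25;
Console.WriteLine(quarter.ToString("G17")); // 0.25

// decimal represents base-ten fractions like 0.1 exactly:
Console.WriteLine(0.1m + 0.1m + 0.1m == 0.3m);  // True
```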
You need to "convert to a suitable display format at the last minute":
string result = string.Format("{0:0.0}", average / 100.0);
average is a int, 100 is a int.
When you do average / 100, you have an integer divided by an integer, which yields an integer - the true quotient 3.95 is truncated to 3.
If you want a float or double result, a double or float has to be involved in the arithmetic.
In this case, you can cast your result to a double ((double)average / 100) or divide by a double (average / 100.0).
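The variants side by side, using the question's numbers (a quick sketch):

```csharp
using System;

int grade1 = 580, grade2 = 210;
int average = (grade1 + grade2) / 2;        // 395 (int)

Console.WriteLine(average / 100);           // 3    - int / int truncates
Console.WriteLine(average / 100.0);         // 3.95 - int / double is double
Console.WriteLine((double)average / 100);   // 3.95 - cast first, then divide
```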
The only reason you want to avoid doing too many arithmetic operations with floats/doubles until the last second is the same reason you don't just plug in values for variables at the start of long physics equations: you lose precision. This is a very important concept in numerical methods, where you have to deal with the floating point representation of numbers - see, e.g., machine epsilon.
I think you chose the wrong option of the two given. Knowing Eric, if you had been clear that you were doing floating-point operations like rounding, averaging, etc., then he would not have suggested using integers. The reason not to use double is that it cannot always represent a decimal value precisely (you cannot represent 1.1 exactly in a double).
If you want to use floating-point math but still maintain decimal accuracy up to 28 significant digits, then use decimal:
decimal grade1 = 580;
decimal grade2 = 210;
var average = (grade1 + grade2) / 2 / 100;          // 3.95 - divide by 100 to get back to the grade scale
average = Math.Round(average);                      // 4
string result = string.Format("{0:0.0}", average);  // "4.0"
Not that easy.
If the grades are in integer format, then storing them as integers is OK.
But I would convert to decimal PRIOR to performing any math, to keep rounding errors out of the math, unless you specifically want integer math.
For financial calculations decimal is recommended - in fact, the suffix m is often said to stand for money.
decimal (C# Reference)
The decimal keyword indicates a 128-bit data type. Compared to floating-point types, the decimal type has more precision and a smaller range, which makes it appropriate for financial and monetary calculations. The approximate range and precision for the decimal type are shown in the following table.
float and double are floating-point types. You get a bigger range than decimal, but less precision. decimal has a large enough range for grades.
decimal grade3 = 5.80m;
decimal grade4 = 2.10m;
decimal average1 = (grade3 + grade4) / 2;
int grade1 = 580;
int grade2 = 210;
var average = (grade1 + grade2) / 2;
string result = string.Format("{0:0.0}", average / 100); // 3.0
Debug.WriteLine(result);
decimal avg = ((decimal)grade1 + (decimal)grade2) / 200m; // 3.95
Debug.WriteLine(avg);
Debug.WriteLine(string.Format("{0:0.0}", avg)); // 4.0
I was reading an article about the difference between float and double, and it gave the following example:
Say you have a $100 item, and you give a 10% discount. Your prices are all in full dollars, so you use int variables to store prices. Here is what you get:
int fullPrice = 100;
float discount = 0.1F;
Int32 finalPrice = (int)(fullPrice * (1-discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
Guess what: the final price is $89, not the expected $90. Your customers will be happy, but you won't. You've given them an extra 1% discount.
In the above example, they calculated the final price as fullPrice * (1 - discount). Why did they use (1 - discount)? Shouldn't it be fullPrice * discount?
So my confusion is about the logic for calculating the final price: why (1 - discount) instead of discount?
the final price is $89, not the expected $90
That's because when 0.1 is float, it is not exactly 0.1, it's a little more. When you subtract it from 1 to do the math, you get $89.9999998509884. Casting to int truncates the result to 89 (demo).
You can make this work by using decimal data type for your discount. This type can represent 0.1 without a precision loss (demo).
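A self-contained sketch of that fix, using the question's numbers:

```csharp
using System;

int fullPrice = 100;
decimal discount = 0.1m;   // exactly one tenth, unlike 0.1F

// (1 - 0.1m) is exactly 0.9m, so the product is exactly 90.
int finalPrice = (int)(fullPrice * (1 - discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);  // $90
```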
why they used (1-discount)
1 represents 100%. The price after the discount is (100% - 10%) = 90% of the full price.
This question is actually about math.
Let's say you have an item which costs $100.
The seller provides a discount to you - 10%.
Now you need to calculate what is the final price of an item.
I.e., how much money should you give to get it?
The answer: you need to pay the full price of the item minus the discount.
100% of a cost - 10% discount = 90% final price
That's why it is fullPrice * (1 - discount).
If you calculated it using your formula fullPrice * discount, it would mean that an item costing $100 is sold for $10 after a 10% discount - which is incorrect. The formula fullPrice * discount actually gives the discount amount, not the final price.
There is nothing wrong with the overall logic of the above example, but it has a very unfortunate choice of data types. The values are implicitly converted to double, introducing a slight rounding error, and the cast back to int then truncates the result, which greatly amplifies that rounding error.
This is a good example of a problem that frequently occurs when dealing with financial values in programming:
Float values do not translate well to decimal fractions.
Most new developers tend to think of float values as decimal fractions, because that is how they are usually represented when converting to and from strings. This is not the case: float values store their fractional part as a binary fraction, as described here.
This makes float values (and calculations with them) slightly askew from their decimal representations. This is why the following results in $89,9999998509884:
double fullPrice = 100;
double discount = 0.1F;
double finalPrice = (fullPrice * (1 - discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
(Not so) fun fact: the above will actually work fine when using float as the data type, because the aforementioned error lies below the resolution of single-precision values in this example, while the generated code uses double precision behind the scenes - when the result gets converted back to single precision, the error gets lost.
One way out of this problem is to use the decimal data type, which was constructed to do calculations that translate directly to decimal fractions (as financial calculations do):
decimal fullPrice = 100;
decimal discount = 0.1m;
decimal finalPrice = (fullPrice * (1 - discount));
Console.WriteLine("The discounted price is ${0}.", finalPrice);
Another would be to round all results of floating-point calculations before displaying or storing them. For financial calculations one should use:
Math.Round([your value here], 2, MidpointRounding.AwayFromZero);
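Applied to the double result from earlier in this answer, that looks like (a sketch):

```csharp
using System;

double fullPrice = 100;
double discount = 0.1F;    // the float literal drags in the error
double finalPrice = fullPrice * (1 - discount);  // 89.9999998509884

// Rounding to cents before display or storage absorbs the tiny error.
double rounded = Math.Round(finalPrice, 2, MidpointRounding.AwayFromZero);
Console.WriteLine(rounded);   // 90
```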
I want to reduce the float type precision from 7 digits to 6 after the decimal point. I tried multiplying the number by 10, but this didn't work. Any ideas?
If you're only trying to format the number on output (ie. in conversion to a string), you just need to use a proper format string:
13.651234f.ToString("f6"); // Always six decimal places
If you need to do that for your application logic, you probably want to use decimal rather than float - float is a binary number, so the notion of "decimal" decimal places is a bit off.
This is what I am trying to achieve:
If a double has more than 3 decimal places, I want to truncate any decimal places beyond the third. (do not round.)
Eg.: 12.878999 -> 12.878
If a double has less than 3 decimals, leave unchanged
Eg.: 125 -> 125
89.24 -> 89.24
I came across this command:
double example = 12.34567;
double output = Math.Round(example, 3);
But I do not want to round. According to the command posted above,
12.34567 -> 12.346
I want to truncate the value so that it becomes: 12.345
Doubles don't have decimal places - they're not based on decimal digits to start with. You could get "the closest double to the current value when truncated to three decimal digits", but it still wouldn't be exactly the same. You'd be better off using decimal.
Having said that, if it's only the way that rounding happens that's a problem, you can use Math.Truncate(value * 1000) / 1000; which may do what you want. (You don't want rounding at all, by the sounds of it.) It's still potentially "dodgy" though, as the result still won't really just have three decimal places. If you did the same thing with a decimal value, however, it would work:
decimal m = 12.878999m;
m = Math.Truncate(m * 1000m) / 1000m;
Console.WriteLine(m); // 12.878
EDIT: As LBushkin pointed out, you should be clear between truncating for display purposes (which can usually be done in a format specifier) and truncating for further calculations (in which case the above should work).
I can't think of a reason to explicitly lose precision outside of display purposes. In that case, simply use string formatting.
double example = 12.34567;
Console.Out.WriteLine(example.ToString("#.000"));
double example = 3.1416789645;
double output = Convert.ToDouble(example.ToString("N3"));
Multiply by 1000 then use Truncate then divide by 1000.
If your purpose in truncating the digits is for display reasons, then you can just use an appropriate format when converting the double to a string.
Methods like String.Format() and Console.WriteLine() (and others) allow you to limit the number of digits of precision a value is formatted with.
Attempting to "truncate" floating point numbers is ill advised - floating point numbers don't have a precise decimal representation in many cases. Applying an approach like scaling the number up, truncating it, and then scaling it down could easily change the value to something quite different from what you'd expected for the "truncated" value.
If you need precise decimal representations of a number you should be using decimal rather than double or float.
You can use:
double example = 12.34567;
double output = ((double)(int)(example * 1000.0)) / 1000.0;
Good answers above - if you're looking for something reusable, here is the code. Note that you might want to validate the decimalPlaces value, and that the multiplication may overflow.
public static decimal TruncateToDecimalPlace(this decimal numberToTruncate, int decimalPlaces)
{
decimal power = (decimal)(Math.Pow(10.0, (double)decimalPlaces));
return Math.Truncate((power * numberToTruncate)) / power;
}
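A self-contained version of the method above with a quick usage check (note that the decimal cast of Math.Pow goes through double, which is fine for small decimalPlaces values):

```csharp
using System;

Console.WriteLine(12.878999m.TruncateToDecimalPlace(3)); // 12.878
Console.WriteLine(89.24m.TruncateToDecimalPlace(3));     // 89.24 - fewer places stay unchanged

static class DecimalExtensions
{
    public static decimal TruncateToDecimalPlace(this decimal numberToTruncate, int decimalPlaces)
    {
        // 10^decimalPlaces; Math.Pow works in double, so cast back to decimal.
        decimal power = (decimal)Math.Pow(10.0, decimalPlaces);
        return Math.Truncate(power * numberToTruncate) / power;
    }
}
```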
The same approach in C (it needs math.h for pow and trunc):
#include <math.h>
double truncKeepDecimalPlaces(double value, int numDecimals)
{
    double x = pow(10, numDecimals);
    return trunc(value * x) / x;
}