C# Calculations With Decimal Places

I am having trouble with basic multiplication and division in C#.
It returns 0 for ((150 / 336) * 460) but the answer should be 205.357142857.
I presume this is because (150/336) is a fractional number, and C# rounds this down to 0.
How do I correctly calculate this taking into consideration all decimal places?

No, it is because 150 / 336 is an integer division, which always truncates the fractional part since the result is also an int.
So at least one of the operands must be a floating-point number:
double d = 150d / 336;
See: 7.7.2 Division operator
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than or equal
to the absolute value of the quotient of the two operands. The result
is zero or positive when the two operands have the same sign and zero
or negative when the two operands have opposite signs.
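For instance, applied to the question's numbers (a minimal sketch):
// 150 / 336 is integer division and truncates to 0
double wrong = (150 / 336) * 460;  // 0
// the d suffix forces the division to be done in double
double right = (150d / 336) * 460; // 205.357142857...
Console.WriteLine(right);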

((150 / 336) * 460)
Those numbers are integers; they have no decimal places. Since 150 / 336 evaluates to 0 in integer math, multiplying it by anything will also result in 0.
You need to explicitly make each number a double. Something like this:
((150d / 336d) * 460d)

You are doing integer arithmetic, not floating-point. To specify a floating-point double constant, use the 'd' suffix.
double d = (150d / 336d) * 460d;

150/336 gives you an int as the result, thus 0. You need to cast one operand of the division so that you'll get a double as the result:
(((double)150 / 336) * 460)

If you're using variables, then you should write it like this:
double d = ((double)firstNumber / secondNumber) * thirdNumber;
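A complete runnable sketch with the question's numbers (the variable names firstNumber, secondNumber and thirdNumber are illustrative):
int firstNumber = 150;
int secondNumber = 336;
int thirdNumber = 460;
// casting one operand is enough; the other is promoted to double
double d = ((double)firstNumber / secondNumber) * thirdNumber;
Console.WriteLine(d); // 205.357142857...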
For more information: https://www.dotnetperls.com/divide

How can I convert fraction to double or decimal?

Case:
double x = Math.Pow(Convert.ToDouble(0.07003 + 1), (1/12));
Console.WriteLine(x);
Output:
1
The above output is incorrect, because the result of x is 1 instead of 1.005657
How do I convert 1/12 to a format that gives a fractional value and is accepted by Math.Pow()?
The real problem is the division of 1 by 12 (1/12), which gives the value 0 (instead of 0.083333...).
Try this:
double x = Math.Pow(Convert.ToDouble(0.07003 + 1 ),(1/12d));
This makes 12 a double, which makes the result of 1/12 a double. So instead of getting 0 from that division, the result will be 0.0833333333333333.
The literals 1 and 12 are both integers, so 1/12 is an integer division, giving an integer result (0). Change at least one literal to double or decimal to perform a floating point division.
To make a number literal a double, add decimal places (e.g. 1.0) or the 'D' suffix (1D). To make it a decimal, add the 'M' suffix (1M).
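A quick check of both forms (the expected value is the one reported in the question):
Console.WriteLine(Math.Pow(1.07003, 1 / 12));   // 1, because 1/12 is integer division and the exponent becomes 0
Console.WriteLine(Math.Pow(1.07003, 1.0 / 12)); // 1.005657...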

Simple calculator always return 0

In my C# application I want to implement a simple calculation. I got this code:
private void button1_Click(object sender, EventArgs e)
{
    int percentField;
    int priceField;
    int result;
    percentField = int.Parse(txtPercentNew.Text);
    priceField = int.Parse(txtPriceNew.Text);
    result = priceField / 100 * percentField;
    MessageBox.Show(result.ToString());
}
But the problem is the MessageBox displays me 0. I can't figure out why.
Can someone please give me a hint what I am doing wrong?
Your variables are integers, which means that / performs integer division. Unless priceField is at least 100, you will always get 0 as the result.
You can correct the problem by casting priceField to a floating point type before dividing:
(double)priceField / 100 * percentField;
However, this will not compile while result is of type int, because the compiler wants to protect you from inadvertent rounding errors. So you either have to cast back to an integer (losing the fractional part to truncation):
result = (int)((double)priceField / 100 * percentField);
or else make result a double as well.
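Putting it together, a corrected handler might look like this (a sketch reusing the control names from the question):
private void button1_Click(object sender, EventArgs e)
{
    int percentField = int.Parse(txtPercentNew.Text);
    int priceField = int.Parse(txtPriceNew.Text);
    // cast before dividing so the whole expression is evaluated in double
    double result = (double)priceField / 100 * percentField;
    MessageBox.Show(result.ToString());
}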
You are using integers instead of floating-point numbers.
As a consequence, the fractional part is discarded during the calculation.
Use float or double instead of int.
Probably your priceField is less than 100, and since you are doing integer division, the result is 0.
From / Operator (C# Reference)
When you divide two integers, the result is always an integer. For
example, the result of 7 / 3 is 2. To determine the remainder of 7 /
3, use the remainder operator (%). To obtain a quotient as a rational
number or fraction, give the dividend or divisor type float or type
double. You can assign the type implicitly if you express the dividend
or divisor as a decimal by putting a digit to the right side of the
decimal point, as the following example shows.
Just cast one of your variables to a floating-point type (and declare result as double as well), like:
result = priceField / 100d * percentField;
or
result = (double)priceField / 100 * percentField;
You are working with only integers, try
double result;
result = priceField / (double)100 * percentField;
In your code you are dividing by 100, which means any int smaller than 100 divided by 100 would mathematically give a value between 0 and 1. In integer division the fractional part is discarded, so 0.1 becomes 0, 0.9 becomes 0, and so on.
Try casting to double; because you're working with integers, the division results in 0.
The problem is you're using int for each value.
Change result to a double and try this:
result = (double)priceField / 100 * percentField;
It should work; however, if you want to do this properly I recommend you read about MidpointRounding.
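For reference, a small sketch of what MidpointRounding changes (Math.Round defaults to banker's rounding, MidpointRounding.ToEven):
Console.WriteLine(Math.Round(2.5)); // 2, midpoints round to the nearest even number by default
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3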

Dividing by a higher number returning 0

When I try to do this
double test = ((2 / 7) * 100);
it returns 0.
Does anybody know why this is and how to work around it?
Thanks
2 / 7 is integer division, and will return 0. Try this instead:
2.0 / 7
(double) 2 / 7
You're dividing integers.
If you want a non-integer result, at least one operand must be a float or double (or decimal).
You can do that by adding .0 to either of the literals to create a double literal.
You are dividing integers, so 2 / 7 becomes already 0. Just try 2.0 / 7.0 and you'll get the correct result.
It's doing integer division because all the operands are integers.
To fix it, change at least one of the operands to a double, like this:
double test = ((2.0 / 7.0) * 100.0);
You are doing integer math, and only converting to double when you have the final result.
2 / 7 = 0
while
2.0 / 7.0 = 0.285714285714285
Do the math with double values:
double test = ((2.0 / 7.0) * 100.0);
It's because of the division. A division of two int numbers returns an int, truncating any decimal part. Hence the result of the operation 2/7 is 0.
It should be something like this:
double test = ((2.0 / 7.0) * 100.0);

Why does integer division in C# return an integer and not a float?

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overridden here to use epsilon compare
if (x == 3.0)
print 'Hello world';
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, and integers are only a small subset of the rationals).
While it is common for new programmers to make the mistake of performing integer division when they actually meant to use floating-point division, in actual practice integer division is a very common operation. If you are assuming that people rarely use it, and that every time you do division you'll always need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division (see the sketch below).
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.
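To illustrate the base-change example from above, a small sketch (the helper ToBase is hypothetical, not from the answer):
// Builds the digit string of a non-negative value in a given base (2..16)
// using integer division (/) together with the remainder operator (%).
static string ToBase(int value, int toBase)
{
    if (value == 0) return "0";
    var digits = new System.Text.StringBuilder();
    while (value > 0)
    {
        digits.Insert(0, "0123456789ABCDEF"[value % toBase]);
        value /= toBase; // integer division drops the digit just emitted
    }
    return digits.ToString();
}
// ToBase(13, 2) == "1101", ToBase(255, 16) == "FF"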
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than or equal
to the absolute value of the quotient of the two operands. The result
is zero or positive when the two operands have the same sign and zero
or negative when the two operands have opposite signs.
I think the reason why C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation, and it will return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int y = 13;
int z = 4;
float x = (float)y / (float)z;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point values are not exact. If you care about precision, use something like the decimal type instead.
Since you don't use any suffix, the literals 13 and 4 are interpreted as integers:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as an integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so rounding down occurs:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than or equal to the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator applied to 13f) results in a float, which cannot be converted to int implicitly.
If you want the division to be a floating-point division, you'll have to make the result a float:
float x = 13 / 4;
Notice that you'll still divide integers: the integer result (3) is then implicitly converted to float, so the result will be 3.0. To explicitly declare the operands as float, use the f suffix (13f, 4f).
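Side by side (a minimal sketch):
float a = 13 / 4;   // 3.0: integer division happens first, then the result is converted to float
float b = 13f / 4f; // 3.25: floating-point division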
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide. In the beginning we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator in combination with the % operator is used to retrieve those values.
The result will always be of the type with the greater range of the numerator and the denominator. The exceptions are byte and short, whose division produces int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between floating-point types and the decimal type, so division between them is not allowed. You have to explicitly cast and decide which one you want (Decimal has more precision and a smaller range compared to floating-point types).
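An explicit cast on either operand makes the mixed case compile; you decide which type (and precision) wins. A sketch:
var k1 = 5M / (decimal)2F; // 2.5 (Decimal)
var k2 = (float)5M / 2F;   // 2.5 (Single/float)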
As a little trick to know what you are getting, you can use var, and the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.
