Why is my result not displaying decimals? C# [duplicate] - c#

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overridden here to use epsilon compare
if (x == 3.0)
    Console.WriteLine("Hello world");
The result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, of which the integers are only a small subset).

While it is common for new programmers to make the mistake of performing integer division when they actually meant floating-point division, integer division is a very common operation in practice. If you assume that people rarely use it, and that every time you divide you would need to remember to cast to a floating-point type, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient operation.
Secondly, there are a number of algorithms that use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division.
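For illustration, here is a minimal sketch of that base-conversion idea (my own example, not from the answer above; the method name and digit alphabet are arbitrary):

using System;
using System.Text;

class BaseConversion
{
    // Converts a non-negative value to its digit string in the given base
    // using repeated integer division and remainder.
    static string ToBase(int value, int targetBase)
    {
        if (value == 0) return "0";

        var digits = new StringBuilder();
        while (value > 0)
        {
            int digit = value % targetBase;   // remainder: the next (lowest) digit
            value /= targetBase;              // integer division: drop that digit
            digits.Insert(0, "0123456789ABCDEF"[digit]);
        }
        return digits.ToString();
    }

    static void Main()
    {
        Console.WriteLine(ToBase(13, 2));   // 1101
        Console.WriteLine(ToBase(255, 16)); // FF
    }
}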
Because of these (and other related) reasons, integer division results in an integer. If you want to get the floating point division of two integers you'll just need to remember to cast one to a double/float/decimal.

See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than the
absolute value of the quotient of the two operands. The result is zero
or positive when the two operands have the same sign and zero or
negative when the two operands have opposite signs.
I think the reason C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.

Each numeric type provides its own overload of each operator. If both the numerator and the denominator are integers, the integer overload performs the division and returns an integer. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float z = (float)x / (float)y;
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point values are not exact. If you care about exactness, use something like the decimal type instead.
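A quick sketch of that difference, using the classic 0.1 + 0.2 example (mine, not the original answerer's):

using System;

class Program
{
    static void Main()
    {
        // Binary floating point cannot represent 0.1 or 0.2 exactly,
        // so the sum is not exactly 0.3.
        Console.WriteLine(0.1 + 0.2 == 0.3);    // False

        // decimal stores base-10 fractions exactly (within its 28-29 digit range).
        Console.WriteLine(0.1m + 0.2m == 0.3m); // True
    }
}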

Since you don't use any suffix, the literals 13 and 4 are interpreted as integers:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since 13 is an integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so truncation towards zero occurs:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since a floating-point division (the / operator applied to 13f) results in a float, which cannot be converted to int implicitly.
If you only change the type of the result to float:
float x = 13 / 4;
notice that you'll still be dividing integers: the integer result 3 is implicitly converted to float, so x will be 3.0. To make the division itself a floating-point division, declare the operands as float using the f suffix (13f, 4f).
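A small sketch of that difference between converting the result and converting the operands (my own snippet, runnable as C# top-level statements):

using System;

float a = 13 / 4;   // integer division happens first (3), then conversion: 3.0
float b = 13f / 4;  // one float operand is enough for floating-point division: 3.25
float c = 13f / 4f; // both operands float: 3.25

Console.WriteLine(a); // 3
Console.WriteLine(b); // 3.25
Console.WriteLine(c); // 3.25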

Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5

It's just a basic operation.
Remember when you learned to divide. In the beginning we solved 9/6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The / operator in combination with the % operator is used to retrieve those values.
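For example (a small sketch of my own; Math.DivRem is the BCL helper that returns both values at once):

using System;

int dividend = 9, divisor = 6;
int quotient  = dividend / divisor;   // 1
int remainder = dividend % divisor;   // 3

// For C#'s / and %, quotient * divisor + remainder always equals the dividend.
Console.WriteLine($"{dividend} = {quotient} * {divisor} + {remainder}");  // 9 = 1 * 6 + 3

// Math.DivRem computes both in one call.
int q = Math.DivRem(9, 6, out int r);
Console.WriteLine($"{q} remainder {r}");  // 1 remainder 3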

The result will always be of whichever operand type has the greater range. The exceptions are byte and short, whose division produces int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between the floating-point types and the decimal type, so division between them is not allowed. You have to cast explicitly and decide which one you want (decimal has more precision but a smaller range than the floating-point types).
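For example, either of these explicit casts makes the last line compile (my own sketch):

var asDecimal = 5M / (decimal)2F;  // 2.5 (Decimal) - the float operand is converted to decimal
var asFloat   = (float)5M / 2F;    // 2.5 (Single)  - the decimal operand is converted to float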

As a little trick to find out what you are getting, you can use var, and the compiler (or your IDE) will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.

Related

Dividing two numbers always returns 0 [duplicate]

How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also int!). Decimal, by contrast, has a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to decimals.
You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence they are not truncated and you get the correct result.
The following line:
int a = 1, b = 2;
object result = a / b;
...will be performed using integer arithmetic. Decimal.Divide, on the other hand, takes two parameters of type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent to this:
int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;
To examine this, you can add the following code lines after each of the above examples:
Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());
The output in the first case will be
0
System.Int32
..and in the second case:
0,5
System.Decimal
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4/5 is treated as integer division and returns 0.
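A quick check of that, using the 4 and 5 from the comment above (my own snippet):

using System;

Console.WriteLine(4 / 5);                // 0   - integer division
Console.WriteLine(Decimal.Divide(4, 5)); // 0.8 - the int arguments are converted to decimal
Console.WriteLine((decimal)4 / 5);       // 0.8 - equivalent explicit cast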
You want to cast the numbers:
double c = (double)a/(double)b;
Note: if either of the operands is a double, a double division is used, which results in a double. So, the following would work too:
double c = (double)a/b;
Here is a small program:
static void Main(string[] args)
{
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt16(Console.ReadLine());
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse);
    foreach (int i in arr)
    {
        if (i > 0) a++;
        else if (i < 0) b++;
        else c++;
    }
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
In my case nothing above worked for me.
What I wanted to do was divide 278 by 575 and multiply by 100 to find a percentage.
double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);
%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0
If I multiply PeopleCount by 1.0 it becomes a double, and the division gives 48.34...
Also multiply by 100.0, not 100.
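A cast on one operand does the same job as the * 1.0 trick; here is a sketch reusing the variable names from the snippet above (PeopleCount, AllPeopleCount), runnable as C# top-level statements:

using System;

int PeopleCount = 278;
int AllPeopleCount = 575;

// Casting one operand forces floating-point division for the whole expression.
double p = (double)PeopleCount / AllPeopleCount * 100;  // 48.3478260869565...
Console.WriteLine("%: {0}", p);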
If you are looking for an answer between 0 and 1, int / int will not suffice; int / int performs integer division. Try casting one of the ints to a double inside the operation.
The accepted answer is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.
I would not do a better job explaining the concepts than Wikipedia, so I will just provide the pointers:
floating-point arithmetic
decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2 (because the number of decimal places required for the decimal expansion of a number depends on the base; one third takes infinitely many decimal places to express in base-10 as 0.333333..., but it takes only one decimal in base-3: 0.1).
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and preferred whenever you want to minimize rounding error (as in scientific applications).
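A small sketch of the base-10 point above (my own example, not from the answer):

using System;

// Summing ten payments of 0.1 currency units.
double binarySum = 0;
decimal decimalSum = 0;
for (int i = 0; i < 10; i++)
{
    binarySum += 0.1;     // 0.1 has no exact binary representation
    decimalSum += 0.1m;   // 0.1m is exact in base 10
}

Console.WriteLine(binarySum == 1.0);    // False
Console.WriteLine(decimalSum == 1.0m);  // True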

Why is C# casting double to int?

I have a function in which I need to pass a double. To call that function, I am using the following code:
static int Main()
{
    double d = 1/7;
    Console.WriteLine("The value of d is {0}", d);
    calc(d);
    return 0;
}
The output of the above program is
The value of d is 0
Why is this so? Why is C# truncating the fractional part, despite storing 1/7 in a double?
An int divided by an int uses integer truncation.
Use:
static int Main()
{
    double d = 1.0 / 7;
    //          ^^ or d = 1.0 / 7.0
    Console.WriteLine("The value of d is {0}", d);
    calc(d);
    return 0;
}
Promoting either numerator or denominator (or both) to a floating point type promotes the result of the division to a floating point type.
Refs:
Division operator
/ Operator
Because what you are doing here is called integer division. It always discards the fractional part. That's why 1 / 7 always gives you 0 as a result, regardless of which type you assign it to.
.NET has three types of division. From 7.7.2 Division operator:
Integer division
Floating-point division
Decimal division
Also from / Operator (C# Reference)
When you divide two integers, the result is always an integer. For
example, the result of 7 / 3 is 2. To obtain a quotient as a rational
number or fraction, give the dividend or divisor type float or type
double.
So, as a result, you can use one of these if you want the fractional part:
double d = 1.0 / 7 ;
double d = 1 / 7.0 ;
double d = 1.0 / 7.0 ;
According to the C# reference:
For an operation of the form x / y, binary operator overload
resolution (Section 7.2.4) is applied to select a specific operator
implementation. The operands are converted to the parameter types of
the selected operator, and the type of the result is the return type
of the operator.
This means that the / operator selects the correct overload by looking at its operands. In your case the operands are integers, so the operator selects the integer division, which returns an integer (truncating the remainder).
To avoid this and select the floating-point division, you should give a hint by forcing one of your constants to be a double/float:
double d = 1.0 / 7 ;
Because both operands in 1/7 are integers, C# does an integer division.
You'll get the correct result if you type:
double d = (double)1/7;
What you have here is order of evaluation: the division is performed first, on ints, and only then is the result converted to double.
In effect you have written
int temp = 1 / 7;
double d = temp;
Which actually gets compiled to
int temp = 0;
double d = temp;
or
double d = 0;
The reason is that you are using the int division operator
int operator /(int x, int y)
when you meant to use the double one:
double operator /(double x, double y)
You can force that by writing
double d = 1.0 / 7;
OR
double d = 1d / 7d;
and so on.
C# is statically typed: the types in an expression are fixed at compile time. Your code (double d = 1/7;) is effectively evaluated like this at run time:
var temp = 1/7;
double d = temp;
Here, 1 and 7 are integers, so the division returns an integer, which is stored in the temporary location. After that, the variable d is created and the temporary value is stored in it; the implicit conversion to double happens only at that point, after the integer division has already discarded the fraction.
So you have to force the conversion at the time of the division: 1.0/7, 1/7.0, (double)1/7, and 1/(double)7 all return a double value, so integer division does not apply and you get your desired result.
If you specify your number as 1 without a decimal point, an int type is assumed. Replace the line
double d = 1/7 ;
with
double d = 1.0/7 ;
Alternatively you can specify the type as double using suffix:
double d = 1d/7 ;
In C# and Java (and most programming languages) the type of the result is determined by the types of the numerator and denominator. You have to cast the numerator and/or the denominator to double if you want the result to be a double.
Try double d = 1d/7d or double d = (double)1/7. Note that the cast must apply to an operand, not the whole expression: (double)(1/7) would perform the integer division first and still give 0.
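The placement of the cast matters; a quick check (my own snippet):

using System;

Console.WriteLine((double)1 / 7);   // ~0.142857 - the cast applies to 1 before the division
Console.WriteLine((double)(1 / 7)); // 0         - integer division happens first, then the cast
Console.WriteLine(1d / 7d);         // ~0.142857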

C# Calculations With Decimal Places

I am having trouble with basic multiplication and division in C#.
It returns 0 for ((150 / 336) * 460) but the answer should be 205.357142857.
I presume this is because (150/336) is a fractional number, and C# rounds this down to 0.
How do I correctly calculate this taking into consideration all decimal places?
No, it is because 150/336 is an integer division, which always truncates the fractional part since the result will also be an int.
So at least one of the two operands must be a floating-point number:
double d = 150d / 336;
See: 7.7.2 Division operator
The division rounds the result towards zero, and the absolute value of
the result is the largest possible integer that is less than the
absolute value of the quotient of the two operands. The result is zero
or positive when the two operands have the same sign and zero or
negative when the two operands have opposite signs.
((150 / 336) * 460)
Those numbers are integers, they have no decimal places. Since 150 / 336 evaluates to 0 in integer math, multiplying it by anything will also result in 0.
You need to explicitly make each number a double. Something like this:
((150d / 336d) * 460d)
You are doing integer arithmetic, not floating-point/double arithmetic. To specify a floating-point double constant, use the 'd' suffix:
double d = (150d / 336d) * 460d;
150/336 gives you an int as a result, thus 0. You need to cast one operand so the division gives you a double as a result:
(((double)150 / 336) * 460)
If you're using variables, then you should write it like this:
double d = ((double)firstNumber/ secondNumber) * thirdNumber;
For more information: https://www.dotnetperls.com/divide

Why does double a = 8/3 return 2?

I have the following code :
double a = 8/ 3;
Response.Write(a);
It returns the value 2. Why? I need at least one decimal digit. Something like 2.6, or 2.66. How can I get such results?
Try
double a = 8/3.0d;
or
double a = 8.0d/3;
to get a precise answer.
Since in the expression a = 8/3 both operands are int, the result is int, irrespective of the fact that it is being stored in a double. The result is always of the wider of the two operand types.
EDIT
To answer
8 and 3 come from variables. Can I do a sort of cast?
In case the values are coming from variables, you can cast one of the operands to double, like this:
int b = 8;
int c = 3;
double a = ((double)b) / c;
Because the calculations are being done with the integer type, not double. To make it double, use:
double a = 8d/ 3d;
Response.Write(a);
Or
double a = 8.0/ 3.0;
Response.Write(a);
One of your operands should be explicitly marked as double, either by using the d suffix or by specifying a decimal point (8.0).
Or, if you need to, you can cast them to double before the calculation. You can cast either one or both operands to double.
double a = ((double) 8)/((double)3)
Because 8 and 3 are integer numbers, the compiler performs integer division and truncates the result to 2.
You can simply tell the compiler that your numbers are floating-point numbers:
double a = (double)8 / 3;
Because integer division truncates the result towards zero; that's the way it's defined in the framework. However, if you make one operand a floating-point value, as in:
double a = 8/3.0d;
then no truncation is performed.
Or, in simple terms: you assigned an integer expression to a double, which is why the truncation happened in the first place. The compiler saw an operation between two integers.
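To see that the truncation goes towards zero rather than towards negative infinity (my own example):

using System;

Console.WriteLine(8 / 3);    // 2
Console.WriteLine(-8 / 3);   // -2, not -3: integer division truncates towards zero
Console.WriteLine(8 / 3.0);  // 2.666... - no truncation once one operand is a double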
Because 8 and 3 are both ints, and the division operator applied to two ints returns an int as well. (Press F12 with the cursor on the slash sign to jump to the selected overload.)

Why does integer division in C# return an integer and not a float?
