Why does (float.MaxValue / float.Epsilon) evaluate to Infinity? - c#

Consider the following code and results:
double min = (double) float.MinValue;
double max = (double) float.MaxValue;
double epsilon = (double) float.Epsilon;
double range = max - min;
double delta = range / epsilon;
Console.WriteLine ($#"Min: [{min}].");
Console.WriteLine ($#"Max: [{max}].");
Console.WriteLine ($#"Epsilon: [{epsilon}].");
Console.WriteLine ($#"Range: [{range}].");
Console.WriteLine ($#"Delta: [{delta}].");
// Results:
// Min: [-3.4028234663852886E+38].
// Max: [3.4028234663852886E+38].
// Epsilon: [1.401298464324817E-45].
// Range: [6.805646932770577E+38].
// Delta: [4.8566719410840996E+83].
I was trying out some calculus, trying to get as close to zero (0) as possible, and realized I had never thought about representing a numeric type's range before.
How would one represent a numeric type's range? In the case above, we're using Double to represent Single ranges. For Int32, we could use Int64, etc.
How would we represent ranges for Int64, Double, and Decimal, etc.?
Why does (float.MaxValue / float.Epsilon) evaluate to Infinity? Should it not evaluate to a number very close to, but less than float.MaxValue?

The numeric types in any programming language are approximations of mathematical concepts. Since these concepts include infinities, they cannot be represented accurately in real computers.
The range (defined as the difference between the maximum and minimum value of a type) can only be represented by a type having a larger range. E.g., you could use decimal or System.Numerics.BigInteger to represent the range of Int64. BigInteger could also be used to represent the range of float and double, or at least the integer part of it.
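For example, a minimal sketch of holding a narrower type's range in a wider one (this assumes a reference to System.Numerics):
using System.Numerics;
// The range of Int64 (max - min) overflows long itself, but fits easily in BigInteger.
BigInteger longRange = (BigInteger)long.MaxValue - long.MinValue;
Console.WriteLine (longRange); // 18446744073709551615 (2^64 - 1)
// BigInteger can also hold the integer part of double's range.
BigInteger doubleRange = (BigInteger)double.MaxValue - (BigInteger)double.MinValue;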
float.MaxValue / float.Epsilon: float.Epsilon is a positive number smaller than one (public const float Epsilon = 1.401298E-45;). If you divide a positive number by a positive number smaller than one, the result is larger than the original number. E.g., 10 / 0.5 = 20. But since you cannot store a float bigger than float.MaxValue in a float, Microsoft decided to assign it Single.PositiveInfinity instead. They could also have decided the result should be Single.NaN (Not a Number), Single.MaxValue, or even to throw an exception. But that's how it was implemented. The Single type (float in C#) complies with the IEC 60559:1989 (IEEE 754) standard for binary floating-point arithmetic.
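A small sketch of the overflow behavior described above: performed in float, the quotient exceeds float.MaxValue and becomes Infinity, while widening to double first keeps it representable:
float f = float.MaxValue / float.Epsilon;
Console.WriteLine (f); // Infinity
Console.WriteLine (float.IsPositiveInfinity(f)); // True
double d = (double)float.MaxValue / float.Epsilon;
Console.WriteLine (d); // approximately 2.43E+83, finite in double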

C# Calculations With Decimal Places

I am having trouble with basic multiplication and division in C#.
It returns 0 for ((150 / 336) * 460) but the answer should be 205.357142857.
I presume this is because (150/336) is a fractional number, and C# rounds this down to 0.
How do I correctly calculate this taking into consideration all decimal places?
No, it is because 150/336 is an integer division, which always truncates the decimal part, since the result will also be an int.
So one of the operands must be a floating-point number:
double d = 150d / 336;
See: 7.7.2 Division operator
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
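Putting it together for the original expression (a sketch):
double result = 150d / 336 * 460; // approximately 205.357142857
Console.WriteLine (result);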
((150 / 336) * 460)
Those numbers are integers; they have no decimal places. Since 150 / 336 evaluates to 0 in integer math, multiplying it by anything will also result in 0.
You need to explicitly make each number a double. Something like this:
((150d / 336d) * 460d)
You are doing integer arithmetic, not float/double arithmetic. To specify a double floating-point constant, use the 'd' suffix.
double d = (150d / 336d) * 460d;
150/336 gives you an int as a result, thus 0. You need to cast one of the operands so that you'll get a double as the result:
(((double)150 / 336) * 460)
If you're using variables, then you should write it like this:
double d = ((double)firstNumber / secondNumber) * thirdNumber;
For more information: https://www.dotnetperls.com/divide

Getting a precise percent from two Big Integers

This obviously doesn't work.
BigInteger Total = 1000000000000000000000000000000000000000000000000000022234235423534543;
BigInteger Actual = 83450348250384508349058934085;
string Percent = ((Decimal)100.0/Total*Actual).ToString()+"%";
The question is, how to I get my precise percent?
Currently I use:
string sTotal = (task.End - task.Start).ToString();
BigInteger current = task.End;
string sCurrent = (task.End-current).ToString().PadLeft(sTotal.Length, '0');
Int32 maxLength = sCurrent.Length;
if (maxLength > Int64.MaxValue.ToString().Length - 1)
maxLength = Int64.MaxValue.ToString().Length - 1;
UInt64 currentI = Convert.ToUInt64(sCurrent.Substring(0, maxLength));
UInt64 totalI = Convert.ToUInt64(sTotal.Substring(0, maxLength));
Percent = (Decimal)100.0 / totalI * currentI;
Can you suggest better?
You're computing a rational, not an integer, so you should install the Solver Foundation:
http://msdn.microsoft.com/en-us/library/ff524509(v=VS.93).aspx
and use Rational rather than BigInteger:
http://msdn.microsoft.com/en-us/library/ff526610(v=vs.93).aspx
You can then call ToDouble if you want to get the rational as the nearest double.
I need it accurate to 56 decimal places
OK, that is a ridiculous amount of precision, but I'll take you at your word.
Since a double has only 15 decimal places of precision and a decimal only 29, you can't use double or decimal. You're going to have to write the code yourself to do the division.
Here are two ways to do it:
First, write an algorithm that emulates doing long division. You can do it by hand, so you can write a computer program to do it. Keep going until you generate the required number of bits of precision.
Second: WOLOG (without loss of generality), assume that the rational in question is positive and of the form x / y, where x and y are big integers. Let b be 10^p for a desired precision p. You wish to find the big integer a with the property that:
a * y <= b * x
and
b * x < (a + 1) * y
Either a/b or (a+1)/b is the decimal fraction with p digits closest to x/y.
Make sense?
You can find the value of a by doing a binary search over the set of non-negative BigIntegers.
To do the binary search, first you have to find upper and lower bounds. Lower is easy enough; you know that 0 is a lower bound because by assumption the fraction x/y is positive. To find the upper bound, try 1/b, 10/b, 100/b ... and so on until you find a value that is larger than x/y. Now you have an upper and lower bound, and you can binary search the resulting space to find the exact value of a that makes the inequalities true.
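As a practical footnote: for non-negative operands, BigInteger's / operator already computes the floor, so a can be obtained directly without the search. A minimal sketch (the method name and the non-negativity assumption are mine; it assumes a reference to System.Numerics):
using System.Numerics;
// Percentage of actual/total with p decimal digits, using only integer math.
// scaled = floor(actual * 100 * 10^p / total) -- the 'a' the search above would find.
static string PercentString(BigInteger actual, BigInteger total, int p)
{
    BigInteger scaled = actual * 100 * BigInteger.Pow(10, p) / total;
    string s = scaled.ToString().PadLeft(p + 1, '0');
    return s.Insert(s.Length - p, ".") + "%";
}
// PercentString(1, 3, 4) returns "33.3333%"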

Why does integer division in C# return an integer and not a float?

Does anyone know why integer division in C# returns an integer and not a float?
What is the idea behind it? (Is it only a legacy of C/C++?)
In C#:
float x = 13 / 4;
//== operator is overloaded here to use epsilon compare
if (x == 3.0)
    Console.WriteLine("Hello world");
Result of this code would be:
'Hello world'
Strictly speaking, there is no such thing as integer division (division, by definition, is an operation that produces a rational number, of which the integers are only a small subset).
While it is common for new programmers to make the mistake of performing integer division when they actually meant floating-point division, in actual practice integer division is a very common operation. If you assume that people rarely use it, and that every time you divide you'll need to remember to cast to floating point, you are mistaken.
First off, integer division is quite a bit faster, so if you only need a whole-number result, you would want to use the more efficient algorithm.
Secondly, a number of algorithms use integer division, and if the result of division were always a floating-point number you would be forced to round the result every time. One example off the top of my head is changing the base of a number: calculating each digit involves the integer division of a number along with the remainder, rather than floating-point division.
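A short sketch of that base-change example, where each digit falls out of a / and % pair (the helper name is mine):
// Convert a non-negative value to the given base using integer division and remainder.
static string ToBase(int value, int radix)
{
    const string digits = "0123456789ABCDEF";
    if (value == 0) return "0";
    var sb = new System.Text.StringBuilder();
    while (value > 0)
    {
        sb.Insert(0, digits[value % radix]); // remainder: the next (least significant) digit
        value /= radix;                      // integer division: shift to the next digit
    }
    return sb.ToString();
}
// ToBase(13, 2) == "1101", ToBase(255, 16) == "FF"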
Because of these (and other related) reasons, integer division results in an integer. If you want the floating-point division of two integers, you'll just need to remember to cast one to a double/float/decimal.
See the C# specification. There are three types of division operators:
Integer division
Floating-point division
Decimal division
In your case we have integer division, with the following rules applied:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
I think the reason C# uses this type of division for integers (some languages return a floating-point result) is hardware: integer division is faster and simpler.
Each data type is capable of overloading each operator. If both the numerator and the denominator are integers, the integer type will perform the division operation and return an integer type. If you want floating-point division, you must cast one or more of the numbers to a floating-point type before dividing them. For instance:
int x = 13;
int y = 4;
float z = (float)x / (float)y; // z == 3.25f
or, if you are using literals:
float x = 13f / 4f;
Keep in mind that floating-point values are not exact. If you care about exactness, use something like the decimal type instead.
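A small illustration of that imprecision (a sketch; the comparison results are the point):
float fsum = 0f;
for (int i = 0; i < 10; i++) fsum += 0.1f;   // 0.1 has no exact binary representation
Console.WriteLine (fsum == 1f);              // False: rounding error accumulates
decimal dsum = 0m;
for (int i = 0; i < 10; i++) dsum += 0.1m;   // 0.1m is exact in decimal
Console.WriteLine (dsum == 1m);              // True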
Since you don't use any suffix, the literals 13 and 4 are interpreted as integers:
Manual:
If the literal has no suffix, it has the first of these types in which its value can be represented: int, uint, long, ulong.
Thus, since you declare 13 as an integer, integer division will be performed:
Manual:
For an operation of the form x / y, binary operator overload resolution is applied to select a specific operator implementation. The operands are converted to the parameter types of the selected operator, and the type of the result is the return type of the operator.
The predefined division operators are listed below. The operators all compute the quotient of x and y.
Integer division:
int operator /(int x, int y);
uint operator /(uint x, uint y);
long operator /(long x, long y);
ulong operator /(ulong x, ulong y);
And so the result is rounded towards zero:
The division rounds the result towards zero, and the absolute value of the result is the largest possible integer that is less than the absolute value of the quotient of the two operands. The result is zero or positive when the two operands have the same sign and zero or negative when the two operands have opposite signs.
If you do the following:
int x = 13f / 4f;
You'll receive a compiler error, since the floating-point division (the / operator applied to 13f and 4f) results in a float, which cannot be implicitly converted to int.
If you want the division to be a floating-point division, you'll have to make the result a float:
float x = 13 / 4;
Notice that you'll still divide integers: 13 / 4 is evaluated as integer division first, and the resulting 3 is then implicitly converted to float, so the value will be 3.0. To explicitly make the operands float, use the f suffix (13f, 4f).
Might be useful:
double a = 5.0/2.0;
Console.WriteLine (a); // 2.5
double b = 5/2;
Console.WriteLine (b); // 2
int c = 5/2;
Console.WriteLine (c); // 2
double d = 5f/2f;
Console.WriteLine (d); // 2.5
It's just a basic operation.
Remember when you learned to divide: in the beginning we solved 9 / 6 = 1 with remainder 3.
9 / 6 == 1 //true
9 % 6 == 3 // true
The /-operator, in combination with the %-operator, is used to retrieve those values.
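If you need both values at once, the framework also exposes Math.DivRem:
int quotient = Math.DivRem(9, 6, out int remainder);
Console.WriteLine ($"{quotient} remainder {remainder}"); // 1 remainder 3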
The result will always be of the type that has the greater range of the numerator and the denominator. The exceptions are byte and short, which produce an int (Int32).
var a = (byte)5 / (byte)2; // 2 (Int32)
var b = (short)5 / (byte)2; // 2 (Int32)
var c = 5 / 2; // 2 (Int32)
var d = 5 / 2U; // 2 (UInt32)
var e = 5L / 2U; // 2 (Int64)
var f = 5L / 2UL; // 2 (UInt64)
var g = 5F / 2UL; // 2.5 (Single/float)
var h = 5F / 2D; // 2.5 (Double)
var i = 5.0 / 2F; // 2.5 (Double)
var j = 5M / 2; // 2.5 (Decimal)
var k = 5M / 2F; // Not allowed
There is no implicit conversion between the floating-point types and the decimal type, so division between them is not allowed. You have to cast explicitly and decide which one you want (decimal has more precision but a smaller range than the floating-point types).
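For the disallowed case above, an explicit cast on either side picks which arithmetic is used (a sketch):
var k1 = 5M / (decimal)2F; // 2.5 (Decimal) -- decimal division after casting the float
var k2 = (float)5M / 2F;   // 2.5 (Single)  -- float division after casting the decimal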
As a little trick to find out what you are getting, you can use var, and the compiler will tell you the type to expect:
int a = 1;
int b = 2;
var result = a/b;
Your compiler will tell you that result is of type int here.
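If you prefer to check at run time instead of hovering in the IDE, GetType shows the same thing:
Console.WriteLine (result.GetType()); // System.Int32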

C#: divide an int by 100

How do I divide an int by 100?
eg:
int x = 32894;
int y = 32894 / 100;
Why does this result in y being 328 and not 328.94?
When one integer is divided by another, the arithmetic is performed as integer arithmetic.
If you want it to be performed as float, double or decimal arithmetic, you need to cast one of the values appropriately. For example:
decimal y = ((decimal) x) / 100;
Note that I've changed the type of y as well - it doesn't make sense to perform decimal arithmetic but then store the result in an int. The int can't possibly store 328.94.
You only need to force one of the values to the right type, as then the other will be promoted to the same type - there's no operator defined for dividing a decimal by an integer, for example. If you're performing arithmetic using several values, you might want to force all of them to the desired type just for clarity - it would be unfortunate for one operation to be performed using integer arithmetic, and another using double arithmetic, when you'd expected both to be in double.
If you're using literals, you can just use a suffix to indicate the type instead:
decimal a = x / 100m; // Use decimal arithmetic due to the "m"
double b = x / 100.0; // Use double arithmetic due to the ".0"
double c = x / 100d; // Use double arithmetic due to the "d"
double d = x / 100f; // Use float arithmetic due to the "f"
As for whether you should be using decimal, double or float, that depends on what you're trying to do. Read my articles on decimal floating point and binary floating point. Usually double is appropriate if you're dealing with "natural" quantities such as height and weight, where any value will really be an approximation; decimal is appropriate with artificial quantities such as money, which are typically represented exactly as decimal values to start with.
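For example, treating an integer count of cents as money (a sketch; the variable names are mine):
int cents = 32894;
decimal dollars = cents / 100m; // 328.94 exactly -- suitable for money
double approx = cents / 100.0;  // 328.94 as a close binary approximation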
328.94 is not an integer. Integer division truncates towards zero; that is how it works.
I suggest you cast to decimal:
decimal y = 32894M / 100;
or with variables:
decimal y = (decimal)x / 100;
Because an int is only a whole number. Try this instead.
int x = 32894;
double y = x / 100.0;
Because you're doing integer division. Turn the 100 into a double literal (100.0) and you'll get a double instead.
When you divide two integers, the result is an integer. Integers don't have decimal places, so the fractional part is simply truncated.
It's a programming fundamental that integer division behaves differently from floating-point division.
If you want the .94, use float or double:
var num = 32894F / 100F;
