I'm reading a C# book, and there is this example. The question is: why the heck does the float lose the trailing "1" from the int value?
Doesn't a float have a bigger magnitude?
int i1 = 100000001;
float f = i1; // Magnitude preserved, precision lost (WHY? #_#)
int i2 = (int)f; // 100000000
A float is a 32-bit number made up of a sign bit, an 8-bit exponent and a 23-bit stored mantissa (24 significant bits counting the implicit leading bit). What happens in
float f = i1;
is an attempt to squeeze a 32-bit integer into a 24-bit mantissa. The mantissa can only hold 24 bits (around 6-7 significant decimal digits), so anything past the 6th or 7th digit is lost: 100000001 needs 27 significant bits, so it is rounded to the nearest value 24 bits can represent, which is 100000000.
If the assignment is made with a double, which has more significant digits, the value will be preserved.
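A quick check of both claims, using the same i1 as the question:

int i1 = 100000001;
// 100000001 needs 27 significant bits, more than a float's 24-bit mantissa,
// so the nearest representable float is 100000000; a double (53-bit mantissa)
// holds every 32-bit int exactly.
Console.WriteLine((float)i1 == 100000000f);   // True
Console.WriteLine((double)i1 == 100000001.0); // True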
float was not designed for big integer values. If you want to use big numbers that are not always integers, use double.
int i1 = 100000001;
double f = Convert.ToDouble(i1);
int i2 = Convert.ToInt32(f); // 100000001
If they are all integers and you want to be able to do calculations with them, use Int64 (long) instead of int.
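For instance, just a small sketch of that suggestion:

long big = 100000001L;
long product = big * 1000;    // 100000001000, still exact
Console.WriteLine(product);   // long holds integers up to roughly 9.2e18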
Related
How come dividing two 32-bit int numbers as (int / int) returns 0, but if I use Decimal.Divide() I get the correct answer? I'm by no means a C# guy.
int is an integer type; dividing two ints performs an integer division, i.e. the fractional part is truncated since it can't be stored in the result type (also int!). Decimal, by contrast, has a fractional part. By invoking Decimal.Divide, your int arguments get implicitly converted to Decimals.
You can enforce non-integer division on int arguments by explicitly casting at least one of the arguments to a floating-point type, e.g.:
int a = 42;
int b = 23;
double result = (double)a / b;
In the first case, you're doing integer division, so the result is truncated (the decimal part is chopped off) and an integer is returned.
In the second case, the ints are converted to decimals first, and the result is a decimal. Hence it is not truncated and you get the correct result.
The following code:
int a = 1, b = 2;
object result = a / b;
...will be performed using integer arithmetic. Decimal.Divide, on the other hand, takes two parameters of type Decimal, so the division will be performed on decimal values rather than integer values. That is equivalent to this:
int a = 1, b = 2;
object result = (Decimal)a / (Decimal)b;
To examine this, you can add the following code lines after each of the above examples:
Console.WriteLine(result.ToString());
Console.WriteLine(result.GetType().ToString());
The output in the first case will be
0
System.Int32
..and in the second case:
0,5
System.Decimal
I reckon Decimal.Divide(decimal, decimal) implicitly converts its two int arguments to decimals before returning a decimal value (precise), whereas 4/5 is treated as integer division and returns 0.
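A quick way to see this, using nothing beyond the framework method already mentioned:

Console.WriteLine(4 / 5);                  // 0   (integer division)
Console.WriteLine(Decimal.Divide(4, 5));   // 0.8 (int arguments implicitly converted to decimal)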
You want to cast the numbers:
double c = (double)a/(double)b;
Note: If either of the operands in C# is a double, floating-point division is used, which results in a double. So, the following would work too:
double c = (double)a/b;
Here is a small program:
static void Main(string[] args)
{
    int a = 0, b = 0, c = 0;
    int n = Convert.ToInt16(Console.ReadLine());
    string[] arr_temp = Console.ReadLine().Split(' ');
    int[] arr = Array.ConvertAll(arr_temp, Int32.Parse);
    foreach (int i in arr)
    {
        if (i > 0) a++;
        else if (i < 0) b++;
        else c++;
    }
    // Cast to double so the division is not truncated to an integer.
    Console.WriteLine("{0}", (double)a / n);
    Console.WriteLine("{0}", (double)b / n);
    Console.WriteLine("{0}", (double)c / n);
    Console.ReadKey();
}
In my case, none of the above worked.
What I want to do is divide 278 by 575 and multiply by 100 to find the percentage.
double p = (double)((PeopleCount * 1.0 / AllPeopleCount * 1.0) * 100.0);
%: 48,3478260869565 --> 278 / 575 ---> 0
%: 51,6521739130435 --> 297 / 575 ---> 0
If I multiply PeopleCount by 1.0 it becomes a double (so it can hold a fractional part), and the division gives 48.34...
Also multiply by 100.0, not 100.
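For what it's worth, a simpler equivalent (using the same PeopleCount/AllPeopleCount variables from the post above) is to make the first operand a double, which already forces floating-point division all the way through:

double p = 100.0 * PeopleCount / AllPeopleCount;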
If you are looking for an answer between 0 and 1, int / int will not suffice, because int / int does integer division. Try casting one of the ints to a double inside the operation.
The answer marked as such is very nearly there, but I think it is worth adding that there is a difference between using double and decimal.
I would not do a better job explaining the concepts than Wikipedia, so I will just provide the pointers:
floating-point arithmetic
decimal data type
In financial systems, it is often a requirement that we can guarantee a certain number of (base-10) decimal places accuracy. This is generally impossible if the input/source data is in base-10 but we perform the arithmetic in base-2 (because the number of decimal places required for the decimal expansion of a number depends on the base; one third takes infinitely many decimal places to express in base-10 as 0.333333..., but it takes only one decimal in base-3: 0.1).
Floating-point numbers are faster to work with (in terms of CPU time; programming-wise they are equally simple) and preferred whenever you want to minimize rounding error (as in scientific applications).
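As a small illustration of the base-2 versus base-10 point (a sketch, not tied to any particular financial system): adding 0.1 ten times drifts in double but stays exact in decimal.

double dSum = 0.0;
decimal mSum = 0.0m;
for (int i = 0; i < 10; i++)
{
    dSum += 0.1;    // 0.1 has no exact base-2 representation
    mSum += 0.1m;   // decimal stores base-10 digits exactly
}
Console.WriteLine(dSum == 1.0);   // False (dSum is 0.9999999999999999)
Console.WriteLine(mSum == 1.0m);  // True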
I've encountered a float calculation precision problem in C#; here is a minimal working example:
int num = 160;
float test = 1.3f;
float result = num * test;
int result_1 = (int)result;
int result_2 = (int)(num * test);
int result_3 = (int)(float)(num * test);
Console.WriteLine("{0} {1} {2} {3}", result, result_1, result_2, result_3);
The code above outputs "208 208 207 208". Could someone explain the odd value of result_2, which I expected to be 208?
(I know binary cannot represent 1.3 exactly, which causes the float precision problem, but I'm curious about the details.)
num * test will probably give you a result like 207.9999998..., and when you cast that value to int you get 207, because casting to int truncates toward zero (for positive values this behaves like Math.Floor()).
If you assign num * test to a float first, as in float result = num * test;, the value 207.9999998... is rounded to the nearest representable float, which is 208.
Let's summarize:
float result = num * test; gives you 208 because you are assigning num * test to a float type.
int result_1 = (int)result; gives you 208 because you are casting the value of result to int -> (int)208 .
int result_2 = (int)(num * test); gives you 207 because you are casting something like 207.9999998... to int -> (int)207.9999998....
int result_3 = (int)(float)(num * test); gives you 208 because you are first casting 207.9999998... to float which gives you 208 and then you are casting 208 to int.
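If the intent is "nearest whole number" rather than truncation, it may be safer to round explicitly; a minimal sketch using the same numbers as the question (result_4 is just a hypothetical extra variable):

int result_4 = (int)Math.Round(160 * 1.3f);  // 208, regardless of intermediate precision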
You can also take a look at the C# language specification:
Floating-point operations may be performed with higher precision than the result type of the operation. For example, some hardware architectures support an “extended” or “long double” floating-point type with greater range and precision than the double type, and implicitly perform all floating-point operations using this higher precision type. Only at excessive cost in performance can such hardware architectures be made to perform floating-point operations with less precision, and rather than require an implementation to forfeit both performance and precision, C# allows a higher precision type to be used for all floating-point operations. Other than delivering more precise results, this rarely has any measurable effects. However, in expressions of the form x * y / z, where the multiplication produces a result that is outside the double range, but the subsequent division brings the temporary result back into the double range, the fact that the expression is evaluated in a higher range format may cause a finite result to be produced instead of an infinity.
So basically, to answer your question: no, it shouldn't be 208. You can use different types, e.g. decimal or binary floating point, etc. And if you are more interested in floating-point concepts and formats, you can read Jeffrey Sax's Floating Point in .NET part 1: Concepts and Formats.
I have been looking at some way to determine the scale and precision of a decimal in C#, which led me to several SO questions, yet none of them seem to have correct answers, or have misleading titles (they really are about SQL server or some other databases, not C#), or any answers at all. The following post, I think, is the closest to what I'm after, but even this seems wrong:
Determine the decimal precision of an input number
First, there seems to be some confusion about the difference between scale and precision. Per Google (per MSDN):
Precision is the number of digits in a number. Scale is the number of digits to the right of the decimal point in a number.
With that being said, the number 12345.67890M would have a scale of 5 and a precision of 10. I have not discovered a single code example that would accurately calculate this in C#.
I want to make two helper methods, decimal.Scale(), and decimal.Precision(), such that the following unit test passes:
[TestMethod]
public void ScaleAndPrecisionTest()
{
    // arrange
    var number = 12345.67890M;

    // act
    var scale = number.Scale();
    var precision = number.Precision();

    // assert
    Assert.IsTrue(precision == 10);
    Assert.IsTrue(scale == 5);
}
but I have yet to find a snippet that will do this, though several people have suggested using decimal.GetBits(), and others have said, convert it to a string and parse it.
Converting it to a string and parsing it is, in my mind, an awful idea, even disregarding the localization issue with the decimal point. The math behind the GetBits() method, however, is like Greek to me.
Can anyone describe what the calculations would look like for determining scale and precision in a decimal value for C#?
This is how you get the scale using the GetBits() function:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
byte scale = (byte) ((bits[3] >> 16) & 0x7F);
And the best way I can think of to get the precision is to remove the scale (i.e. use the Decimal constructor to reconstruct the decimal number without the scale obtained above) and then use the logarithm:
decimal x = 12345.67890M;
int[] bits = decimal.GetBits(x);
//We will use false for the sign (false = positive), because we don't care about it.
//We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
decimal xx = new Decimal(bits[0], bits[1], bits[2], false, 0);
int precision = (int)Math.Floor(Math.Log10((double)xx)) + 1;
Now we can put them into extensions:
public static class Extensions
{
    public static int GetScale(this decimal value)
    {
        if (value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        return (int)((bits[3] >> 16) & 0x7F);
    }

    public static int GetPrecision(this decimal value)
    {
        if (value == 0)
            return 0;
        int[] bits = decimal.GetBits(value);
        // We will use false for the sign (false = positive), because we don't care about it.
        // We will use 0 for the last argument instead of bits[3] to eliminate the fraction point.
        decimal d = new Decimal(bits[0], bits[1], bits[2], false, 0);
        return (int)Math.Floor(Math.Log10((double)d)) + 1;
    }
}
First of all, solve the "physical" problem: how you're gonna decide which digits are significant. The fact is, "precision" has no physical meaning unless you know or guess the absolute error.
Now, there are 2 fundamental ways to determine each digit (and thus, their number):
get+interpret the meaningful parts
calculate mathematically
The 2nd way can't detect trailing zeros in the fractional part (which may or may not be significant depending on your answer to the "physical" problem), so I won't cover it unless requested.
For the first one, in the Decimal's interface, I see 2 basic methods to get the parts: ToString() (a few overloads) and GetBits().
ToString(String, IFormatInfo) is actually a reliable way since you can define the format exactly.
E.g. use the F specifier and pass a culture-neutral NumberFormatInfo in which you have manually set all the fields that affect this particular format.
Regarding the NumberDecimalDigits field: a test shows that it acts as the minimum number of digits - so set it to 0 (the docs are unclear on this) - and trailing zeros are still printed if there are any.
The semantics of GetBits() result are documented clearly in its MSDN article (so laments like "it's Greek to me" won't do ;) ). Decompiling with ILSpy shows that it's actually a tuple of the object's raw data fields:
public static int[] GetBits(decimal d)
{
    return new int[]
    {
        d.lo,
        d.mid,
        d.hi,
        d.flags
    };
}
And their semantics are:
|high|mid|low| - binary digits (96 bits), interpreted as an integer (=aligned to the right)
flags:
bits 16 to 23 - "the power of 10 to divide the integer number" (=number of fractional decimal digits)
(thus (flags>>16)&0xFF is the raw value of this field)
bit 31 - sign (doesn't concern us)
as you can see, this is very similar to IEEE 754 floats.
So, the number of fractional digits is the exponent value. The number of total digits is the number of digits in the decimal representation of the 96-bit integer.
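If you want to avoid Math.Log10 entirely, one hedged sketch of that "count the digits of the 96-bit integer" idea is to rebuild the coefficient as a BigInteger and count its decimal digits (the class and method names here are mine, not from the thread):

using System.Numerics;

static class DecimalDigits
{
    // Precision = number of decimal digits of the 96-bit coefficient;
    // scale = the exponent byte from the flags word.
    public static (int Precision, int Scale) Get(decimal value)
    {
        int[] bits = decimal.GetBits(value);
        int scale = (bits[3] >> 16) & 0x7F;

        // Reassemble |high|mid|low| as an unsigned 96-bit integer.
        BigInteger coefficient = (new BigInteger((uint)bits[2]) << 64)
                               | (new BigInteger((uint)bits[1]) << 32)
                               | new BigInteger((uint)bits[0]);

        int precision = coefficient.IsZero ? 1 : coefficient.ToString().Length;
        return (precision, scale);
    }
}

// DecimalDigits.Get(12345.67890M) returns (Precision: 10, Scale: 5).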
Racil's answer gives you the value of the decimal's internal scale field, which is correct, although if the internal representation ever changes, that will get interesting.
In the current format the precision portion of decimal is fixed at 96 bits, which is between 28 and 29 decimal digits depending on the number. All .NET decimal values share this precision. Since this is constant there's no internal value you can use to determine it.
What you're apparently after though is the number of digits, which we can easily determine from the string representation. We can also get the scale at the same time or at least using the same method.
public struct DecimalInfo
{
    public int Scale;
    public int Length;

    public override string ToString()
    {
        return string.Format("Scale={0}, Length={1}", Scale, Length);
    }
}

public static class Extensions
{
    public static DecimalInfo GetInfo(this decimal value)
    {
        // Use the invariant culture so the decimal separator is always "."
        // (requires: using System.Globalization;).
        string decStr = value.ToString(CultureInfo.InvariantCulture).Replace("-", "");
        int decpos = decStr.IndexOf(".");
        int length = decStr.Length - (decpos < 0 ? 0 : 1);
        int scale = decpos < 0 ? 0 : length - decpos;
        return new DecimalInfo { Scale = scale, Length = length };
    }
}
This obviously doesn't work.
BigInteger Total = 1000000000000000000000000000000000000000000000000000022234235423534543;
BigInteger Actual = 83450348250384508349058934085;
string Percent = ((Decimal)100.0/Total*Actual).ToString()+"%";
The question is, how to I get my precise percent?
Currently I use..
string sTotal = (task.End - task.Start).ToString();
BigInteger current = task.End;
string sCurrent = (task.End - current).ToString().PadLeft(sTotal.Length, '0');
Int32 maxLength = sCurrent.Length;
if (maxLength > Int64.MaxValue.ToString().Length - 1)
    maxLength = Int64.MaxValue.ToString().Length - 1;
UInt64 currentI = Convert.ToUInt64(sCurrent.Substring(0, maxLength));
UInt64 totalI = Convert.ToUInt64(sTotal.Substring(0, maxLength));
Percent = (Decimal)100.0 / totalI * currentI;
Can you suggest better?
You're computing a rational, not an integer, so you should install the Solver Foundation:
http://msdn.microsoft.com/en-us/library/ff524509(v=VS.93).aspx
and use Rational rather than BigInteger:
http://msdn.microsoft.com/en-us/library/ff526610(v=vs.93).aspx
You can then call ToDouble if you want to get the rational as the nearest double.
I need it accurate to 56 decimal places
OK, that is a ridiculous amount of precision, but I'll take you at your word.
Since a double has only 15 decimal places of precision and a decimal only 29, you can't use double or decimal. You're going to have to write the code yourself to do the division.
Here are two ways to do it:
First, write an algorithm that emulates doing long division. You can do it by hand, so you can write a computer program to do it. Keep going until you generate the required number of bits of precision.
Second: WOLOG (without loss of generality) assume that the rational in question is positive and is of the form x / y where x and y are big integers. Let b be 10^p for a desired precision p. You wish to find the big integer a with the property that:
a * y <= b * x
and
b * x < (a + 1) * y
Either a/b or (a+1)/b is the decimal fraction with p digits closest to x/y.
Make sense?
You can find the value of a by doing a binary search over the set of non-negative BigIntegers.
To do the binary search, first you have to find upper and lower bounds. Lower is easy enough; you know that 0 is a lower bound because by assumption the fraction x/y is positive. To find the upper bound, try 1/b, 10/b, 100/b ... and so on until you find a value that is larger than x/y. Now you have an upper and lower bound, and you can binary search the resulting space to find the exact value of a that makes the inequalities true.
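Here is one hedged sketch of that binary search using System.Numerics.BigInteger (the helper names are mine; it assumes x and y are positive and truncates toward zero):

using System.Numerics;

static class RationalPrinter
{
    // Find the largest a with a/b <= x/y (b = 10^p), i.e. a*y <= x*b,
    // by binary search, then format a as a fraction with p decimal digits.
    public static string Divide(BigInteger x, BigInteger y, int p)
    {
        BigInteger b = BigInteger.Pow(10, p);
        BigInteger target = x * b;

        // Upper bound: grow by powers of ten until hi/b > x/y.
        BigInteger hi = 1;
        while (hi * y <= target)
            hi *= 10;

        BigInteger lo = 0;
        while (lo < hi)
        {
            BigInteger mid = (lo + hi + 1) / 2;
            if (mid * y <= target)
                lo = mid;       // mid/b <= x/y, so a is at least mid
            else
                hi = mid - 1;
        }

        // lo now holds a; insert the decimal point p digits from the right.
        string digits = lo.ToString().PadLeft(p + 1, '0');
        return digits.Insert(digits.Length - p, ".");
    }
}

// Example: RationalPrinter.Divide(1, 3, 5) returns "0.33333".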
I receive an integer that represents a dollar amount in fractional denominations. I would like an algorithm that can add those numbers without parsing and converting them into doubles or decimals.
For example, I receive the integer 50155, which means 50 and 15.5/32 dollars. I then receive 10210 which is 10 and 21/32 dollars. So 50 15.5/32 + 10 21/32 = 61 4.5/32, thus:
50155 + 10210 = 61045
Again, I want to avoid this:
int a = 50155;
int b = a / 1000;
float c = a % 1000;
float d = b;
d += c / 320f;
// d = 50.484375
I would much prefer this:
int a = 50155;
int b = 10210;
int c = MyClass.Add(a, b); // c = 61045
...
public int Add(int a, int b)
{
// ?????
}
Thanks in advance for the help!
Well I don't think you need to use floating point...
public static int Add(int a, int b)
{
    int firstWhole = a / 1000;
    int secondWhole = b / 1000;
    int firstFraction = a % 1000;
    int secondFraction = b % 1000;
    int totalFraction = firstFraction + secondFraction;
    int totalWhole = firstWhole + secondWhole + (totalFraction / 320);
    return totalWhole * 1000 + (totalFraction % 320);
}
Alternatively, you might want to create a custom struct that can convert to and from your integer format, and overloads the + operator. That would allow you to write more readable code which didn't accidentally lead to other integers being treated as this slightly odd format.
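A minimal sketch of that struct idea (names and layout are mine, using the Nxxx encoding from the question):

// Whole dollars plus a fraction counted in 320ths, with + overloaded and
// conversions to and from the encoded Nxxx int format described above.
public struct FractionalDollars
{
    public int Whole;     // whole dollars
    public int Fraction;  // 0..319, in 320ths of a dollar

    public FractionalDollars(int whole, int fraction)
    {
        Whole = whole + fraction / 320;   // carry any overflow into whole dollars
        Fraction = fraction % 320;
    }

    // xxx in Nxxx is tenths of 1/32, i.e. already 320ths of a dollar.
    public static FractionalDollars FromEncoded(int encoded) =>
        new FractionalDollars(encoded / 1000, encoded % 1000);

    public int ToEncoded() => Whole * 1000 + Fraction;

    public static FractionalDollars operator +(FractionalDollars a, FractionalDollars b) =>
        new FractionalDollars(a.Whole + b.Whole, a.Fraction + b.Fraction);
}

// var c = FractionalDollars.FromEncoded(50155) + FractionalDollars.FromEncoded(10210);
// c.ToEncoded() == 61045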
EDIT: If you're forced to stick with a "single integer" format but get to adjust it somewhat you may want to consider using 512 instead of 1000. That way you can use simple mask and shift:
public static int Add(int a, int b)
{
    int firstWhole = a >> 9;
    int secondWhole = b >> 9;
    int firstFraction = a & 0x1ff;
    int secondFraction = b & 0x1ff;
    int totalFraction = firstFraction + secondFraction;
    int totalWhole = firstWhole + secondWhole + (totalFraction / 320);
    return (totalWhole << 9) + (totalFraction % 320);
}
There's still the messing around with 320, but it's at least somewhat better.
Break the string up into the part that represents whole dollars and the part that represents fractions of a dollar. For the latter, instead of treating it as 10.5 thirty-seconds of a dollar, it's probably easier to treat it as 105 three-hundred-and-twentieths of a dollar (i.e. multiply both by ten so the numerator is always an integer).
From there, doing math is fairly simple (if somewhat tedious to write): add the fractions. If that exceeds a whole dollar, carry a dollar (and subtract 320 from the fraction part). Then add the whole dollars. Subtraction likewise -- though in this case you need to take borrowing into account instead of carrying.
Edit:
This answer suggests that one "stays away" from float arithmetic. Surprisingly, the OP indicated that his float-based logic (not shown for proprietary reasons) was twice as fast as the integer-modulo solution below! Goes to show that FPUs are not that bad after all...
Definitively, stay away from floats (for this particular problem). Integer arithmetic is both more efficient and doesn't introduce rounding error issues.
Something like the following should do the trick
Note: As written, assumes A and B are positive.
int AddMyOddlyEncodedDollars(int A, int B)
{
    int sum = A + B;
    if (sum % 1000 < 320)
        return sum;
    else
        return sum + 1000 - 320;
}
Edit: On the efficiency of the modulo operator in C
It depends very much on the compiler... Since the modulo value is known at compile time, I'd expect most modern compilers to take the "multiply [by the reciprocal] and shift" approach, and that is fast.
This concern about performance (with this rather contrived format) smacks of premature optimization, but then again, I've seen software in the financial industry mightily optimized (to put it politely), and justifiably so.
As a point for learning, this representation is called "fixed point". There are a number of implementations you can look at. I would strongly suggest that you do NOT use int as your top-level data type, but instead create a type called Fixed that encapsulates the operations. It will keep your bug count down by catching mistakes such as adding a plain int to a fixed-point number without scaling it first, or scaling a number and forgetting to unscale it.
Looks like a strange encoding to me.
Anyway, if the format is base-10 Nxxx, where N is an integer denoting whole dollars and xxx is interpreted as
(xxx / 320)
dollars, and you want to add them together, the only thing you need to handle is the carry when xxx reaches 320:
int a = ..., b = ...; // dollar amounts
int c = (a + b); // add together
// Calculate carry
int carry = (c % 1000) / 320; // integer division
c += carry * 1000;
c -= carry * 320;
// done
Note: this works because if a and b are encoded correctly, the fractional parts add together to 638 at most and thus there is no "overflow" to the whole dollars part.
BEWARE: This post is wrong, wrong, wrong. I will remove it as soon as I stop feeling a fool for trying it.
Here is my go: You can trade space for time.
Construct a mapping for the first 10 bits to a tuple: count of dollars, count of piecesof32.
Then use bit manipulation on your integer:
ignore bits 11 and above, apply map.
shift the whole number 10 times, add small change dollars from mapping above
you now have the dollar amount and the piecesof32 amount
add both
move overflow to dollar amount
Next, to convert back to "canonical" notation, you need a reverse lookup map for your piecesof32 and "borrow" dollars to fill up the bits. Unshift the dollars 10 times and add the piecesof32.
EDIT: I should remove this, but I am too ashamed. Of course, it cannot work. I'm so stupid :(
The reason being: shifting by 10 to the right is the same as dividing by 1024 - it's not as if some of the lower bits hold a dollar amount and others a piecesof32 amount. Decimal and binary notation just don't split up nicely. That's why we use hexadecimal notation (grouping of 4 bits). Bummer.
If you insist on working in ints you can't solve your problem without parsing -- after all, your data is not really an integer. I call into evidence the (so far) three answers which all parse your ints into their components before performing arithmetic.
An alternative would be to use rational numbers with 2 (integer) components, one for the whole part, and one for the number of 320ths in the fractional part. Then implement the appropriate rational arithmetic. As ever, choose your representations of data carefully and your algorithms become much easier to implement.
I can't say that I think this alternative is particularly better on any axis of comparison but it might satisfy your urge not to parse.