So I've got a nice convoluted piece of C# code that deals with substituting values into mathematical equations. It's working almost perfectly. However, when given the equation (x - y + 1) / z and the values x=2, y=0, z=5, it fails miserably and inexplicably.
The problem is not that the values are passed to the function incorrectly; that part is fine. The problem is that no matter what type I use, C# seems to think that 3/5 = 0.
Here's the piece of code in question:
public static void TrapRule(string[] args)
{
// ...
string equation = args[0];
int ordinates = Convert.ToInt32(args[1]);
int startX = Convert.ToInt32(args[2]);
int endX = Convert.ToInt32(args[3]);
double difference = (endX - startX + 1) / ordinates;
// ...
}
It gets passed args as:
args[0] = Pow(6,[x])
args[1] = 5
args[2] = 0
args[3] = 2
(Using NCalc, by the way, so the Pow() function gets evaluated by that - which works fine.)
The result? difference = 0.
The same thing happens when using float, and when trying simple math:
Console.Write((3 / 5));
produces the same result.
What's going on?
The / operator looks at its operands, and when it discovers that they are two integers, it returns an integer. If you want to get back a double value, you need to cast one of the two integers to a double:
double difference = (endX - startX + 1) / (double)ordinates;
You can find a more formal explanation in the C# reference.
They're called integers. Integers don't store any fractional part of a number. Moreover, when you divide an integer by another integer, the result is still an integer.
So when you take 3 / 5 in integer land, you can't store the .6 result. All you have left is 0. The fractional part is always truncated, never rounded. Most programming languages work this way.
For something like this, I'd recommend working in the decimal type instead.
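For example, here's a quick sketch using the same variable names as the question; with decimal (or double), the fractional part of 3/5 survives the division:

int endX = 2, startX = 0, ordinates = 5;
Console.WriteLine((endX - startX + 1) / ordinates);          // 0 (integer division)
Console.WriteLine((endX - startX + 1) / (decimal)ordinates); // 0.6
Console.WriteLine((endX - startX + 1) / (double)ordinates);  // 0.6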
I wonder why the statement below always returns 1, and how I can fix it. I accounted for integer division by casting the first element in the division to float, but apart from that I'm not getting much further.
int value = ...; // any int
float test = (float)value / int.MaxValue / 2 + 1;
By the way, my intention is to make this convert ANY integer to a float in the 0-1 range.
To rescale a number in the range s..e to 0..1, you do (value-s)/(e-s).
So in this case:
double d = ((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue);
float test = (float)d;
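A quick sanity check of the rescale formula's endpoints (a sketch; the prints are just for illustration):

int value = int.MinValue;
Console.WriteLine(((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue)); // 0
value = int.MaxValue;
Console.WriteLine(((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue)); // 1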
It doesn't always return 1. For example, this code:
int value = 12345678;
float test = (float)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.002874
The problem is that floats are not very precise, so for small values of value, the result will be 1 to the number of digits of precision that floats can handle.
For example, value == 2300 will print 1, but value == 2400 will print 1.000001.
If you use double you get better results:
int value = 1;
double test = (double)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.00000000023283
Avoid implicit type conversions. Make all elements in your expression of type double, if that is the type you want. Convert int.MaxValue and the 2 to double before using them in the division, so that no implicit type conversions are involved.
Also, you might want to parenthesize your expression to make it more readable; as it is, it is error-prone.
Finally, if your expression is too complex and you don't know what's going on, split it into simpler expressions, all of them of type double.
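For instance, here's a sketch of the question's expression rewritten with explicit conversions and named intermediate steps (the variable names are just for illustration):

int value = 12345678;
double numerator = value;                        // explicit widening conversion
double denominator = (double)int.MaxValue * 2.0; // all double, no integer arithmetic
double test = numerator / denominator + 1.0;
Console.WriteLine(test);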
P.S.: By the way, trying to get precise results while using float instead of double is not a very wise thing to do. Use double, not float, for precise floating-point calculations.
In my C# application I want to implement a simple calculation. I got this code:
private void button1_Click(object sender, EventArgs e)
{
int percentField;
int priceField;
int result;
percentField = int.Parse(txtPercentNew.Text);
priceField = int.Parse(txtPriceNew.Text);
result = priceField / 100 * percentField;
MessageBox.Show(result.ToString());
}
But the problem is that the MessageBox shows 0, and I can't figure out why.
Can someone please give me a hint what I am doing wrong?
Your variables are integers, which means that / performs integer division. Unless priceField is at least 100, you will always get 0 as the result.
You can correct the problem by casting priceField to a floating point type before dividing:
(double)priceField / 100 * percentField;
However, this will not work while result is of type int because the compiler wants to protect you from inadvertent rounding errors. So you either have to cast back to an integer (losing precision due to rounding):
result = (int)((double)priceField / 100 * percentField);
or else make result be a double as well.
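Putting that together, a minimal sketch of the corrected handler (assuming the same control names as in the question):

private void button1_Click(object sender, EventArgs e)
{
    int percentField = int.Parse(txtPercentNew.Text);
    int priceField = int.Parse(txtPriceNew.Text);
    // Cast before dividing so the whole expression is evaluated in floating point
    double result = (double)priceField / 100 * percentField;
    MessageBox.Show(result.ToString());
}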
You are using integers instead of floating-point numbers.
As a consequence, the fractional part is discarded during the calculation.
Use float or double instead of int.
Probably your priceField is less than 100, and since you're doing integer division, the result is 0.
From / Operator (C# Reference):
When you divide two integers, the result is always an integer. For example, the result of 7 / 3 is 2. To determine the remainder of 7 / 3, use the remainder operator (%). To obtain a quotient as a rational number or fraction, give the dividend or divisor type float or type double. You can assign the type implicitly if you express the dividend or divisor as a decimal by putting a digit to the right side of the decimal point, as the following example shows.
Just cast one of your variables to a floating-point type, like:
result = priceField / 100d * percentField;
or
result = (double)priceField / 100 * percentField;
You are working with only integers. Try:
double result;
result = priceField / (double)100 * percentField;
In your code, you are dividing by 100, which means every int less than 100 will produce a value between 0 and 1. Converting that to an int truncates the fractional part, so 0.1 becomes 0, 0.9 becomes 0, and so on.
Try casting to double; because you're working with integers, the division results in 0.
The problem is you're using int for each value.
Change result to a double and try this:
result = (double)priceField / 100 * percentField;
It should work; however, if you want to do this properly I recommend you read about MidpointRounding.
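For example (the values here are chosen only to show the difference between the rounding modes):

Console.WriteLine(Math.Round(2.5));                                // 2 (default: round half to even)
Console.WriteLine(Math.Round(2.5, MidpointRounding.AwayFromZero)); // 3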
My scenario is that if
47 / 15 = 3.13333
I want to convert it into 4; if the result has a decimal part, I want to increase the result by 1. Right now I am doing this like:
float res = ((float)(62 - 15) / 15);
if (res.ToString().Contains("."))
{
    string digit = res.ToString().Substring(0, res.ToString().IndexOf('.'));
    int incrementDigit = Convert.ToInt16(digit) + 1;
}
I want to know whether there is any shortcut or built-in function in C# so I can do this without resorting to string functions.
Thanks a lot.
Do you mean you want to perform integer division, but always rounding up? I suspect you want:
public static int DivideByFifteenRoundingUp(int value) {
return (value + 14) / 15;
}
This avoids using floating point arithmetic at all - it just allows any value which isn't an exact multiple of 15 to be rounded up, due to the way that integer arithmetic truncates towards zero.
Note that this does not work for negative input - for example, if you passed in -15 this would return 0. You could fix this with:
public static int DivideByFifteenRoundingUp(int value) {
return value < 0 ? value / 15 : (value + 14) / 15;
}
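For example, with the question's numbers (a quick check, not part of the original answer):

Console.WriteLine(DivideByFifteenRoundingUp(47));  // 4  (47 / 15 = 3.133..., rounded up)
Console.WriteLine(DivideByFifteenRoundingUp(45));  // 3  (exact multiple, nothing to round)
Console.WriteLine(DivideByFifteenRoundingUp(-15)); // -1 (with the negative-input fix above)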
Use Math.Ceiling. Quoting MSDN:
Returns the smallest integral value that is greater than or equal to the specified decimal number.
You are looking for Math.Ceiling().
Convert the value you have to a Decimal or Double and the result of that method is what you need. Like:
double number = ((double)(62-15) / (double)15);
double result = Math.Ceiling(number);
Note that I cast 15 to a double, so I avoid integer division; integer division is most likely not what you want here.
Another way of doing what you ask is to add 0.5 to every number, then floor it (truncate the decimal places). I'm afraid I don't have access to a C# compiler right now to confirm the exact function calls!
NB: But as others have confirmed, I would think the Math.Ceiling function best communicates to others what you intend.
Something like:
float res = ((float)(62-15) / 15);
int incrementDigit = (int)Math.Ceiling(res);
or
int incrementDigit = (int)(res + 0.5f);
My boss reported a bug to me today: my configuration value was decreasing whenever he tried to specify a value below 5%. I know that I can just round my number before casting it to int to fix the problem, but I don't understand why it occurs.
I have an app.config file with the value "0.04" and a configuration section with a float property. When the section is read, the float value retrieved is 0.04, which is fine. I want to put this value into a Windows Forms TrackBar, which accepts an integer value, so I multiply the value by 100 and cast it to int. For some reason, the result is not 4 but 3. You can test it like this:
Console.WriteLine((int)(float.Parse("0.04", System.Globalization.CultureInfo.InvariantCulture) * 100)); // 3
What happened?
It's because 0.04 can't be exactly represented as a float - and neither can the result of multiplying it by 100. The result is very slightly less than 4, so the cast to int truncates it.
Basically, if you want to use numbers represented accurately in decimal, you should use the decimal type instead of float or double. See my articles on decimal floating point and binary floating point for more information.
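For example, a sketch of the same parse-and-scale done with decimal, which can represent 0.04 exactly:

decimal d = decimal.Parse("0.04", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine((int)(d * 100)); // 4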
EDIT: There's something more interesting going on here, actually... in particular, if you assign the result to a local variable first, that changes the result:
using System;
using System.Globalization;
class Test
{
static void Main()
{
// Assign first, then multiply and assign back, then print
float f = Foo();
f *= 100;
Console.WriteLine((int) f); // Prints 4
// Assign once, then multiply within the expression...
f = Foo();
Console.WriteLine((int) (f * 100)); // Prints 4
Console.WriteLine((int) (Foo() * 100)); // Prints 3
}
// No need to do parsing here. We just need to get the results from a method
static float Foo()
{
return 0.04f;
}
}
I'm not sure exactly what's going on here, but the exact value of 0.04f is:
0.039999999105930328369140625
... so it does make sense for it not to print 4, potentially.
I can force the result of 3 if the multiplication by 100 is performed with double arithmetic instead of float:
f = Foo();
Console.WriteLine((int) ((double)f * 100)); // Prints 3
... but it's not clear to me why that's happening in the original version, given that float.Parse returns float, not double. At a guess, the result remains in registers and the subsequent multiplication is performed using double arithmetic (which is valid according to the spec) but it's certainly a surprising difference.
This happens because the float value is really more like 0.039999999999; you are therefore converting a value like 3.99999999999 to int, which yields 3.
You can solve the problem by rounding:
Console.WriteLine((int)Math.Round(float.Parse("0.04", System.Globalization.CultureInfo.InvariantCulture) * 100));
As a float, 0.04 * 100 might well be represented as 3.9999999999, and casting to an int just truncates it, so that is why you are seeing 3.
It's actually not 4 but 3.99999 and lots of other digits. Do something like this:
(int)(float.Parse("0.04") * 100.0 + 0.5)
Casting to int acts like a floor operation here, and as the value is not exactly 4, it is truncated to 3.
I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers below one, but as the numbers grow, the tolerance really needs to change so we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with this tolerance:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From msdn:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to be 16 - magnitude caused the test to fail; setting it to 14 - magnitude obviously caused it to pass, as the tolerance was greater.
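Wrapped up as a reusable helper (hypothetical name; same logic as above, with Math.Abs added so a negative expected value doesn't make Log10 return NaN):

static double ToleranceFor(double expected, int totalDigits = 15)
{
    int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(Math.Abs(expected)))));
    return 1.0 / Math.Pow(10, totalDigits - magnitude);
}

// Usage:
Assert.That(actual, Is.EqualTo(expected).Within(ToleranceFor(expected)));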
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
final float absA = Math.abs(a);
final float absB = Math.abs(b);
final float diff = Math.abs(a - b);
if (a * b == 0) { // a or b or both are zero
// relative error is not meaningful here
return diff < (epsilon * epsilon);
} else { // use relative error
return diff / (absA + absB) < epsilon;
}
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
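A rough C# translation of the Java above (a sketch; if you use it, do run it against the Guide's test suite):

public static bool NearlyEqual(float a, float b, float epsilon)
{
    float absA = Math.Abs(a);
    float absB = Math.Abs(b);
    float diff = Math.Abs(a - b);
    if (a * b == 0) // a or b or both are zero; relative error is not meaningful here
    {
        return diff < epsilon * epsilon;
    }
    return diff / (absA + absB) < epsilon; // use relative error
}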
How about converting each item to a string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each float by 10 raised to the precision you're seeking, storing the results as longs, and comparing the two longs to each other.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;
//for a precision of 4
long lActual = (long)(10000 * actual);
long lExpected = (long)(10000 * expected);
if (lActual == lExpected)
{
    // Values are equal to 4 decimal places; perform desired actions
}
This is a quick idea, but how about shifting them down until they are below one? It should be something like num / (10^ceil(log10(num)))... not too sure how well it would work, but it's an idea.
1632.4587642911599 / (10^ceil(log10(1632.4587642911599))) = 0.16324587642911599
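In C#, that idea might look like this (a sketch; it assumes both values are positive and nonzero, and note that two values straddling a power of ten would be scaled differently):

static double Normalize(double num) => num / Math.Pow(10, Math.Ceiling(Math.Log10(num)));

double expected = Normalize(1632.4587642911599); // 0.16324587642911599
double actual = Normalize(1632.4587642911633);
Assert.That(actual, Is.EqualTo(expected).Within(0.000000000000001));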
How about:
const double significantFigures = 10;
Assert.AreEqual(Actual / Expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by the precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
In F#, with FsUnit:
open FsUnit
actual |> should (equalWithin errorMargin) expected