I've got a very simple little program to solve quadratic equations. In the main it works, but for some reason it won't calculate square roots; I just get NaN, and I can't see how it's not a number.
int a = Convert.ToInt16(txta.Text);
int b = Convert.ToInt16(txtb.Text);
int c = Convert.ToInt16(txtc.Text);
listBox1.Items.Add(Convert.ToString(Math.Sqrt(((b * b) - (4 * a * c)))));
The conversions aren't the cause: if they didn't convert properly, or if there was an overflow, you'd get a FormatException or OverflowException respectively. Nonetheless, since you're doing math you might want to convert to double instead.
double a = Convert.ToDouble(txta.Text);
double b = Convert.ToDouble(txtb.Text);
double c = Convert.ToDouble(txtc.Text);
I believe your expression (b * b) - (4 * a * c) is the problem: if it evaluates to a negative number, Math.Sqrt will return NaN.
See Math.Sqrt Method on MSDN for more information.
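Putting this together, a minimal sketch might look like the following (it reuses the txta/txtb/txtc and listBox1 controls from the question, assumes a is nonzero, and the message text is my own):
double a = Convert.ToDouble(txta.Text);
double b = Convert.ToDouble(txtb.Text);
double c = Convert.ToDouble(txtc.Text);

double discriminant = (b * b) - (4 * a * c);
if (discriminant < 0)
{
    // Math.Sqrt of a negative number is NaN, so report it instead
    listBox1.Items.Add("No real roots: the discriminant is negative");
}
else
{
    // assumes a != 0
    double root1 = (-b + Math.Sqrt(discriminant)) / (2 * a);
    double root2 = (-b - Math.Sqrt(discriminant)) / (2 * a);
    listBox1.Items.Add(Convert.ToString(root1));
    listBox1.Items.Add(Convert.ToString(root2));
}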
It's likely getting a negative number. It might also help to work with double instead of Int16, since Int16 can't hold any fractional part of the input.
So I've got a nice convoluted piece of C# code that deals with substitution into mathematical equations. It's working almost perfectly. However, when given the equation (x - y + 1) / z and values x=2 y=0 z=5, it fails miserably and inexplicably.
The problem is not that the values are passed to the function wrong. That's fine. The problem is that no matter what type I use, C# seems to think that 3/5=0.
Here's the piece of code in question:
public static void TrapRule(string[] args)
{
// ...
string equation = args[0];
int ordinates = Convert.ToInt32(args[1]);
int startX = Convert.ToInt32(args[2]);
int endX = Convert.ToInt32(args[3]);
double difference = (endX - startX + 1) / ordinates;
// ...
}
It gets passed args as:
args[0] = Pow(6,[x])
args[1] = 5
args[2] = 0
args[3] = 2
(Using NCalc, by the way, so the Pow() function gets evaluated by that - which works fine.)
The result? difference = 0.
The same thing happens when using float, and when trying simple math:
Console.Write((3 / 5));
produces the same result.
What's going on?
The / operator looks at its operands and, when it discovers that they are two integers, it returns an integer. If you want to get back a double value, you need to cast one of the two integers to a double:
double difference = (endX - startX + 1) / (double)ordinates;
You can find a more formal explanation in the C# reference.
They're called integers. Integers don't store any fractional part of a number. Moreover, when you divide an integer by another integer... the result is still an integer.
So when you take 3 / 5 in integer land, you can't store the 0.6 result. All you have left is 0. The fractional part is always truncated, never rounded. Most programming languages work this way.
For something like this, I'd recommend working with the decimal type instead.
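As a quick side-by-side sketch (the variable names mirror the question, but the snippet itself is just illustrative):
int endX = 2, startX = 0, ordinates = 5;

double truncated = (endX - startX + 1) / ordinates;           // 0: integer division happens first
double asDouble  = (endX - startX + 1) / (double)ordinates;   // 0.6
decimal asDecimal = (endX - startX + 1) / (decimal)ordinates; // 0.6

Console.WriteLine("{0}, {1}, {2}", truncated, asDouble, asDecimal);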
I am using the Math.Floor method to find out how many times number b fits into number a. In this concrete example, the variables have these values:
double a = 1.2;
double b = 0.1;
double c = Math.Floor(a / b) * b;
This returns c = 11 instead of c = 12 as I thought it would. I guess it has something to do with rounding, but how can I get it to work properly? When I raise a to 1.21, it returns 12.
double and float are internally represented in a way that lacks precision for many numbers, which means that a may not be exactly 1.2 (but, say, 1.199999999...) and b not exactly 0.1.
If you want exact precision, for quantities that do not have a margin of error like money, use decimal.
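You can see the imprecision for yourself by printing the values at full precision; a quick sketch (the exact digits may vary slightly by runtime):
double a = 1.2;
double b = 0.1;

Console.WriteLine(a.ToString("G17"));       // typically prints 1.1999999999999999
Console.WriteLine((a / b).ToString("G17")); // typically prints 11.999999999999998, not 12
Console.WriteLine(Math.Floor(a / b));       // so Floor gives 11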
Have a look here for how Math.Floor works... basically, it rounds down.
My guess would be that with doubles you don't really get 12 from the division (even though this will seem to work: double d = a/b; // d=12).
Solal pointed to the same thing, and gave good advice about decimal :)
Here's what happens with decimals:
decimal a = 1.2m;
decimal b = 0.1m;
decimal c = Math.Floor(a / b) ; // c =12
The result of your code is 1.1.
I am assuming you want to get 1.2.
You need to use
double c = Math.Ceiling(a / b) * b;
I have the following code :
double a = 8/ 3;
Response.Write(a);
It returns the value 2. Why? I need at least one decimal digit. Something like 2.6, or 2.66. How can I get such results?
Try
double a = 8/3.0d;
or
double a = 8.0d/3;
to get a precise answer.
Since in the expression a = 8/3 both operands are int, the result is int, irrespective of the fact that it is being stored in a double. The result always takes the wider of the two operand types.
EDIT
To answer
8 and 3 come from variables. Can I do a sort of cast?
In case the values are coming from variables, you can cast one of the operands to double, like:
int b = 8;
int c = 3;
double a = ((double) b) /c;
Because the calculation is being done in integer arithmetic, not double. To make it double, use:
double a = 8d/ 3d;
Response.Write(a);
Or
double a = 8.0/ 3.0;
Response.Write(a);
One of your operands should be explicitly marked as double, either by using the d suffix or by writing a decimal point (e.g. 8.0).
Or, if you need to, you can cast them to double before the calculation. You can cast either one or both operands to double.
double a = ((double) 8)/((double)3);
Because 8 and 3 are integers, the compiler performs integer division and truncates the result to 2.
You can simply tell the compiler that your numbers are floating-point numbers:
double a = (double)8 / 3;
Because the result of the integer division is truncated; that's the way it's implemented in the framework. However, if you specify the precision as in the example above:
double a = 8/3.0d;
then no truncation is performed.
Or, in simple terms, you assigned an integer result to a double; that's why the truncation happened in the first place. The compiler saw an operation between two integers.
Because 8 and 3 are both ints, and int's division operator applied to two ints returns an int as well (press F12 with the cursor on the slash sign to see the operator definition).
How would you refactor this code?
double p = (Convert.ToDouble(inta) / Convert.ToDouble(intb)) * 100;
double v = (p / 100) * Convert.ToDouble(intc);
return (int)v;
It seems very messy to me. I know I could squeeze it onto one line, but I'd be interested to know what others would do.
Thanks
Assuming that inta, intb and intc are typed as int/Int32 then Convert.ToDouble is basically the same as a simple cast to double.
return (int)((inta / (double)intb) * intc);
Whether this is actually a worthwhile refactoring is another matter. It often makes more sense to keep intermediate calculations as separate statements to improve readability, even if you don't need those intermediate results. And, of course, having meaningful variable names makes a big difference.
Seriously, don't. What's wrong with the code as it is - ignoring possible mathematical problems and just looking at the code structure itself?
I wouldn't refactor. Squeezing it all on to one line would make it a lot harder to read. If I absolutely had to do something, I'd create new variables for the double versions of a, b, and c like this:
//set up variables
double doubleA = Convert.ToDouble(inta);
double doubleB = Convert.ToDouble(intb);
double doubleC = Convert.ToDouble(intc);
//do calculations
double p = (doubleA / doubleB) * 100;
double v = (p / 100) * doubleC; //why did we divide by 100 when we multiplied by it on the line above?
return (int)v; //why are we casting back to int after all the fuss and bother with doubles?
but really I'd rather just leave it alone!
Well, for a start I'd use more meaningful names. At a guess, this is taking a ratio of integers, converting it to a percentage, applying that percentage to another original value, and returning the result truncated to an integer.
double percent = (Convert.ToDouble( numer ) / Convert.ToDouble( denom )) * 100;
double value = (percent / 100) * Convert.ToDouble( originalValue );
return (int)value;
One difference between using Convert and a cast is that Convert will throw an exception for out-of-range values, but casting won't: casting an out-of-range double to int results in Int32.MinValue. So if value is too big or too small for an int, or is Infinity or NaN, you will get Int32.MinValue rather than an exception at the end. The other conversions can't fail, as any int can be represented as a double.
So you could write it using casts with no change in meaning, and exploit the fact that in an expression involving ints and doubles the ints are cast to doubles automatically:
double percent = (((double) numer) / denom) * 100;
double value = (percent / 100) * originalValue;
return (int)value;
Now, C# truncates double results to 64-bit precision (roughly 15-16 significant digits) on assignment, but it's implementation-defined whether intermediate results are computed at higher precision. I don't think that will change the output within the range that can be cast to an int, but I don't know, and the value space is too large for an exhaustive test. So without a specification for exactly what the function is intended to do, there is very little else you can change and be sure that you will not change the output.
If you compare these refactorings, each of which is naively mathematically equivalent, and run a range of values through them:
static int test0(int numer, int denom, int initialValue)
{
double percent = (Convert.ToDouble(numer) / Convert.ToDouble(denom)) * 100;
double value = (percent / 100) * Convert.ToDouble(initialValue);
return (int)value;
}
static int test1(int numer, int denom, int initialValue)
{
return (int)((((((double)numer) / denom) * 100 ) / 100 ) * initialValue);
}
static int test2(int numer, int denom, int initialValue)
{
return (int)((((double)numer) / denom) * initialValue);
}
static int test3(int numer, int denom, int initialValue)
{
return (int)((((double)numer) * initialValue) / denom);
}
static int test4(int numer, int denom, int initialValue)
{
if (denom == 0) return int.MinValue;
return (numer * initialValue / denom);
}
Then you get the following result from counting the number of times testN does not equal test0, letting it run for a few hours:
numer in [-10000,10000]
denom in [-10000,0) (0,10000]
initialValue in [-10000,-8709] # will get to +10000 eventually
test1 fails = 0 of 515428330128 tests, 100% accuracy.
test2 fails = 110365664 of 515428330128 tests, 99.9785875828803% accuracy.
test3 fails = 150082166 of 515428330128 tests, 99.9708820495057% accuracy.
test4 fails = 150082166 of 515428330128 tests, 99.9708820495057% accuracy.
So if you want an exactly equivalent function, then it seems that you can get to test1. Although the 100s should cancel out in test2, in reality they do affect the result in a few edge cases: the rounding of the intermediate values pushes value to one side or the other of an integer boundary. For this test, the input values were in the range -10000 to +10000, so the integer multiplication in test4 doesn't overflow, and test3 and test4 are the same. For wider input ranges, test4 will deviate more often.
Always verify your refactoring against automated tests. And don't assume that the values worked on by computers behave like the numbers in high-school mathematics.
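For reference, the comparison harness above is nothing more sophisticated than a brute-force loop along these lines (a sketch; the counting details are reconstructed from the figures, not the original code):
Func<int, int, int, int>[] candidates = { test0, test1, test2, test3, test4 };
long tests = 0;
long[] fails = new long[candidates.Length];

for (int numer = -10000; numer <= 10000; numer++)
    for (int denom = -10000; denom <= 10000; denom++)
    {
        if (denom == 0) continue; // skip the undefined case
        for (int initialValue = -10000; initialValue <= 10000; initialValue++)
        {
            tests++;
            int expected = candidates[0](numer, denom, initialValue);
            for (int i = 1; i < candidates.Length; i++)
                if (candidates[i](numer, denom, initialValue) != expected)
                    fails[i]++;
        }
    }

for (int i = 1; i < candidates.Length; i++)
    Console.WriteLine("test{0} fails = {1} of {2} tests", i, fails[i], tests);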
First, I would give p, v, inta, intb, etc. meaningful names.
Then the first two lines can be combined:
double pv = ((double)inta/intb)*intc;
return (int)pv;
return (int)(Convert.ToDouble(inta * intc) / Convert.ToDouble(intb));
return (int)(((double)inta / intb) * intc);
(Fixed)
A variation on #FrustratedWithFormsDes's answer:
double doubleA = (double) (inta * intc);
double doubleB = (double) intb;
return (int) (doubleA / doubleB);
There are a few interesting points that nobody else seems to have covered, so I'll add to the mix...
The first refactoring I'd do is to use good naming. Three lines of code is fine, but "p", "v", and "inta" are terrible names.
Convert.ToXXX could throw exceptions if "inta" et al are not convertible to double. In that case, you'd use double.TryParse() or a try...catch to make this code robust for any type. (Of course, as mentioned by many, if the values are just ints, then a (double) cast will suffice).
If intb has the value 0, the double division won't actually throw; it will give you Infinity (or NaN), and the final cast to int will hand you a garbage value. So you might wish to check that intb is nonzero before using it.
So, to the maths... The *100 and /100 cancel out, so they are pointless. Assuming the inputs are integers (and not huge), if you premultiply inta by intc before doing the divide, you can eliminate one (double) conversion, as (inta * intc) can be safely done at integer precision.
So (assuming non-huge int values, and accepting that a zero intb will produce a garbage result) the end result (without renaming for clarity) could be:
return((int) ((inta * intc) / (double) intb));
It's not a lot different from the accepted answer, but on some platforms could perform slightly better (by using an integer multiply instead of a double one).
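Putting those points together, a defensive version might look something like this (just a sketch: the method and parameter names, and the choice to return 0 on bad input, are my own assumptions):
static int ApplyPercentage(string numerText, string denomText, string valueText)
{
    // TryParse instead of Convert, so bad input doesn't throw
    if (!double.TryParse(numerText, out double numer) ||
        !double.TryParse(denomText, out double denom) ||
        !double.TryParse(valueText, out double original))
    {
        return 0; // assumption: treat unparseable input as zero
    }

    if (denom == 0)
    {
        return 0; // assumption: avoid the Infinity/NaN result and the garbage cast
    }

    return (int)((numer * original) / denom);
}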
Surely this is just
inta * intc / intb
If you don't need the value of p explicitly, you don't need to do any casting at all.
To return a double, do I have to cast to double even if types are double?
e.g.
double a = 32.34;
double b = 234.24;
double result = a - b + 1/12 + 3/12;
Do I have to cast (double) ?
No, you don't. However, your expression almost certainly doesn't do what you want it to.
The expressions 1/12 and 3/12 will be performed using integer arithmetic.
You probably want:
double result = a - b + 1/12d + 3/12d;
or
double result = a - b + 1/(double) 12 + 3/(double) 12;
Both of these will force the division to be performed using floating point arithmetic.
The problem here is that if both operands of an arithmetic operator are integers, then the operation is performed using integer arithmetic, even if it's part of a bigger expression which is of type double. Here, your expression is effectively:
double result = a - b + (1 / 12) + (3 / 12);
The addition and subtraction are okay, because the types of a and b force them to be performed using floating-point arithmetic; but because division "binds more tightly" than addition and subtraction (i.e. it's like using the brackets above), only the immediate operands are considered.
Does that make sense?
EDIT: As it's so popular, it makes sense to include devinb's comment here too:
double result = a - b + 1.0/12.0 + 3.0/12.0;
It's all the same thing as far as the compiler is concerned - you just need to decide which is clearer for you:
(double) 1
1.0
1d
Nope, no casting is needed.
Of course, there's a clean way to do this without casting:
double result = a - b + 1.0/12 + 3.0/12;
That really depends on what you're asking about casting. Here are two different cases:
double a = 32.34;
double b = 234.24;
double result1 = a - b + 1/12 + 3/12;
double result2 = a - b + (double)1/12 + (double)3/12;
Console.WriteLine(result1); // Result: -201.9
Console.WriteLine(result2); // Result: -201.566666666667
Either way you're not going to get any complaints about assigning the value to result, but in the first case the divisions at the end are done using integers, which each resolve to 0.
Hmm - The way that I've done this for the integer division issue is to do something like:
double result = a - b + 1/12.0 + 3/12.0;
Aside from those though, no casting would be needed.
You should not need to, although it's hard to tell from your question where the cast would be.
When doing math, C# will give the result the type of whichever operand has the "larger" type.
As I recall, the order goes something like this (from largest to smallest):
decimal
double
float
long / int64
int / int32
short / int16
byte
Of course, this is skipping the unsigned versions of each of these.
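A quick illustration of that promotion (a small sketch with arbitrary values):
int i = 1;
long l = 2L;
double d = 8.0;

Console.WriteLine((i / l).GetType()); // System.Int64: the int is promoted to long, value 0
Console.WriteLine((i / d).GetType()); // System.Double: the int is promoted to double, value 0.125
// decimal is the odd one out: mixing decimal with double or float
// requires an explicit cast, so e.g. d / 1m does not compile.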
In general no casting is needed, but in your example the 1/12 and 3/12 are integer divisions, each resulting in 0. You would need to cast either the numerator or the denominator to double.
Casting is used whenever you want to change one data type into another. It matters because changing the type usually involves a possible loss of data.
e.g.
double a = 8.49374;
//casting is needed because the datatypes are different.
// a_int will end up with the value 8, because the fractional part is lost.
int a_int = (int) a;
double b = (double) a_int;
In that example 'b' will end up with the value "8.00000...", because a_int does not contain ANY decimal information, so b only has the integer-related information at its disposal.