How would you refactor this code?
double p = (Convert.ToDouble(inta) / Convert.ToDouble(intb)) * 100;
double v = (p / 100) * Convert.ToDouble(intc);
return (int)v;
It seems very messy to me. I know I could squeeze it onto one line, but I'd be interested to know what others would do.
Thanks
Assuming that inta, intb and intc are typed as int/Int32, then Convert.ToDouble is basically the same as a simple cast to double.
return (int)((inta / (double)intb) * intc);
Whether this is actually a worthwhile refactoring is another matter. It often makes more sense to keep intermediate calculations as separate statements to improve readability, even if you don't need those intermediate results. And, of course, having meaningful variable names makes a big difference.
Seriously, don't. What's wrong with the code as it is - ignoring possible mathematical problems and just looking at the code structure itself?
I wouldn't refactor. Squeezing it all on to one line would make it a lot harder to read. If I absolutely had to do something, I'd create new variables for the double versions of a, b, and c like this:
//set up variables
double doubleA = Convert.ToDouble(inta);
double doubleB = Convert.ToDouble(intb);
double doubleC = Convert.ToDouble(intc);
//do calculations
double p = (doubleA / doubleB) * 100;
double v = (p / 100) * doubleC; //why did we divide by 100 when we multiplied by it on the line above?
return (int)v; //why are we casting back to int after all the fuss and bother with doubles?
but really I'd rather just leave it alone!
Well, for a start I'd use more meaningful names, and at a guess this is taking a ratio of integers, converting it to a percentage, applying that percentage to another original value, and returning a new value, which is the result truncated to an integer.
double percent = (Convert.ToDouble( numer ) / Convert.ToDouble( denom )) * 100;
double value = (percent / 100) * Convert.ToDouble( originalValue );
return (int)value;
One difference between using Convert and a cast is that Convert will throw an exception for out-of-bounds values, but casting won't; casting an out-of-range double to int results in Int32.MinValue. So if value is too big or too small for an int, or Infinity or NaN, you will get Int32.MinValue rather than an exception at the end. The other conversions can't fail, as any int can be represented exactly as a double.
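A short demo of that difference (the exact out-of-range cast result is runtime and platform dependent; historically Int32.MinValue on x86/x64):
double huge = 1e20;                        // far outside the int range
int viaCast = (int)huge;                   // unchecked cast: no exception thrown
// int viaConvert = Convert.ToInt32(huge); // would throw OverflowException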
So you could write it using casts with no change in meaning, and exploit the fact that in an expression involving ints and doubles the ints are cast to doubles automatically:
double percent = (((double) numer) / denom) * 100;
double value = (percent / 100) * originalValue;
return (int)value;
Now, C# truncates double results on assignment back to 64-bit (15-16 significant digit) precision, but it's implementation defined whether intermediates are operated on at higher precision. I don't think that will change the output within the range that can be cast to an int, but I don't know, and the value space is too large for an exhaustive test. So without having a specification for exactly what the function is intended to do, there is very little else you can change and be sure that you will not change the output.
If you compare these refactorings, each of which is naively mathematically equivalent, and run a range of values through them:
static int test0(int numer, int denom, int initialValue)
{
    double percent = (Convert.ToDouble(numer) / Convert.ToDouble(denom)) * 100;
    double value = (percent / 100) * Convert.ToDouble(initialValue);
    return (int)value;
}
static int test1(int numer, int denom, int initialValue)
{
    return (int)((((((double)numer) / denom) * 100) / 100) * initialValue);
}
static int test2(int numer, int denom, int initialValue)
{
    return (int)((((double)numer) / denom) * initialValue);
}
static int test3(int numer, int denom, int initialValue)
{
    return (int)((((double)numer) * initialValue) / denom);
}
static int test4(int numer, int denom, int initialValue)
{
    if (denom == 0) return int.MinValue;
    return (numer * initialValue / denom);
}
Then you get the following result of counting the number of times testN does not equal test0, letting it run for a few hours:
numer in [-10000,10000]
denom in [-10000,0) (0,10000]
initialValue in [-10000,-8709] # will get to +10000 eventually
test1 fails = 0 of 515428330128 tests, 100% accuracy.
test2 fails = 110365664 of 515428330128 tests, 99.9785875828803% accuracy.
test3 fails = 150082166 of 515428330128 tests, 99.9708820495057% accuracy.
test4 fails = 150082166 of 515428330128 tests, 99.9708820495057% accuracy.
So if you want an exactly equivalent function, then it seems that you can get to test1. Although the 100s should cancel out in test2, in reality they do affect the result in a few edge cases - the rounding of the intermediate values pushes value to one side or the other of an integer. For this test, the input values were in the range -10000 to +10000, so the integer multiplication in test4 doesn't overflow, and test3 and test4 are the same. For wider input ranges, test4 will deviate more often.
Always verify your refactoring against automated tests. And don't assume that the values worked on by computers behave like the numbers in high-school mathematics.
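A minimal sketch of the kind of harness that produced the counts above (the testN functions are the ones defined earlier; the ranges are shrunk here so it finishes quickly):
static void CompareRefactorings()
{
    long fails1 = 0, fails2 = 0, total = 0;
    for (int numer = -1000; numer <= 1000; numer++)
        for (int denom = -1000; denom <= 1000; denom++)
        {
            if (denom == 0) continue;   // skip division by zero
            for (int initialValue = -1000; initialValue <= 1000; initialValue += 97)
            {
                total++;
                int baseline = test0(numer, denom, initialValue);
                if (test1(numer, denom, initialValue) != baseline) fails1++;
                if (test2(numer, denom, initialValue) != baseline) fails2++;
            }
        }
    Console.WriteLine($"test1 fails = {fails1} of {total}");
    Console.WriteLine($"test2 fails = {fails2} of {total}");
}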
First, I would give p, v, inta, intb, etc. meaningful names. Then the first two lines can be combined:
double pv = ((double)inta/intb)*intc;
return (int)pv;
return (int)(Convert.ToDouble(inta * intc) / Convert.ToDouble(intb));
return (int)(((double)inta / intb) * intc);
A variation on #FrustratedWithFormsDes's answer:
double doubleA = (double) (inta * intc);
double doubleB = (double) intb;
return (int) (doubleA / doubleB);
There are a few interesting points that nobody else seems to have covered, so I'll add to the mix...
The first refactoring I'd do is to use good naming. Three lines of code is fine, but "p", "v", and "inta" are terrible names.
Convert.ToXXX could throw exceptions if "inta" et al are not convertible to double. In that case, you'd use double.TryParse() or a try...catch to make this code robust for any type. (Of course, as mentioned by many, if the values are just ints, then a (double) cast will suffice).
If intb has the value 0, then you will get a divide-by-zero exception, so you might wish to check that intb is nonzero before using it.
So to the maths... The *100 and /100 cancel out, so they are pointless. Assuming the inputs are integers (and not huge), then if you premultiply inta by intc before doing the divide, you can eliminate one (double) conversion, as (inta * intc) can be safely done at integer precision.
So (assuming non-huge int values, and accepting that we might throw a div-by-zero exception) the end result (without renaming for clarity) could be:
return((int) ((inta * intc) / (double) intb));
It's not a lot different from the accepted answer, but on some platforms could perform slightly better (by using an integer multiply instead of a double one).
Surely this is just
inta * intc / intb
If you don't need the value of p explicitly, you don't need to do any casting at all.
The title is not really well phrased, I'm aware - can't think of a better way of writing it though.
Here's the scenario - I have two input boxes, both representing integer quantities. One is represented in our units, the other in the vendor's units. There is a multiplier defining how to convert from ours to theirs. In the below example, I'm saying that two of our units is equal to five of theirs. So, for example,
decimal multiplier = 0.4m; // Two of our units equals five of theirs
int requestedQuantity = 11; // Our units
int suppliedQuantity = 37; // Their units
// Should return 12, since that is the next highest whole number that results in both of us having whole numbers (12 of ours = 30 of theirs)
int correctedFromRequestedQuantity = GetCorrectedRequestedQuantity(requestedQuantity, null, multiplier);
// Should return 16, since that is the next highest whole number that results in both of us having whole numbers (16 of ours = 40 of theirs);
int correctedFromSuppliedQuantity = GetCorrectedRequestedQuantity(suppliedQuantity, multiplier, null);
Here's the function I've written to handle this. I'm not doing a divide by zero check on the multiplier / rounder since I've already checked for that elsewhere. It seems crazy to do all that converting, but is there a better way of doing it?
public int GetCorrectedRequestedQuantity(int? input, decimal? multiplier, decimal? rounder)
{
    if (multiplier == null)
    {
        if (rounder == null)
            return input.GetValueOrDefault();
        else
            return (int)Math.Ceiling((decimal)((decimal)Math.Ceiling(input.GetValueOrDefault() / rounder.Value) * rounder.Value));
    }
    else if (input.HasValue)
    {
        // This is insane...
        return (int)Math.Ceiling((decimal)((decimal)Math.Ceiling((int)Math.Ceiling((decimal)input * multiplier.Value) / multiplier.Value) * multiplier.Value));
    }
    else
        return 0;
}
Represent the multiplier as a fraction in lowest terms. I don't know if .NET has a fractions class but if not you can probably find a C# implementation, or just write your own. So assume the multiplier is given by two integers a / b in lowest terms, with a ≠ 0 and b ≠ 0. That also means that conversion in the other direction is given by multiplying by b / a. In your example, a = 2 and b = 5, and a / b = 0.4.
Now suppose you want to convert an integer X. If you think about it a bit you'll see what you really want is to nudge X up until b divides X. The number you need to add to X is simply (b - (X%b)) % b. So to convert on one direction is just
return (a * (X + ((b - (X % b)) % b))) / b;
and to convert Y going in the other direction is just
return (b * (Y + ((a - (Y % a)) % a))) / a;
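Putting that together in C# (a sketch, hard-coding the example's fraction a/b = 2/5, i.e. multiplier 0.4, and assuming non-negative quantities):
// Round x up to the next multiple of m (no-op if x is already a multiple).
static int RoundUpToMultiple(int x, int m) => x + (m - x % m) % m;

// 11 of ours -> 12: the next count of ours for which theirs is whole too.
static int CorrectOurs(int ours) => RoundUpToMultiple(ours, 2);

// 37 of theirs -> 40 of theirs -> 16 of ours (40 * 2 / 5), as in the question.
static int CorrectTheirs(int theirs) => RoundUpToMultiple(theirs, 5) * 2 / 5;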
My best idea off the top of my head is to semi-brute-force it. It does sound like it is basically fraction mathematics, so there might be a far easier solution for this.
First we need to find in what sort of "Batch" the multiplier becomes whole. That way, we can stop working with floats/doubles altogether. Ideally this should be supplied with the multiplier (as float math is messy).
double currentMultiple = multiplier;
int currentCount = 0;
// This is the best check for "is a whole number" I could think of.
while (currentMultiple % 1 != 0)
{
    // The Framework can detect arithmetic overflow. Let us turn that on.
    // If we ever get there, the math is likely non-solvable.
    checked
    {
        currentMultiple += multiplier;
        currentCount += 1;
    }
}
// You get here either via exception or because you got a multiple that solves it.
// Store the value of currentCount in a variable "OurBatchSize".
// Also store the value of currentMultiple in "TheirBatchSize".
Getting the closest Multiple of OurBatchSize:
int requestedQuantity = 11; // Our units
int result = OurBatchSize;
int batchCount = 1;
while (result < requestedQuantity)
{
    result += OurBatchSize;
    batchCount++;
}
// result contains the answer here. Return it.
// batchCount * TheirBatchSize will also tell you how much they get.
Edit: Credit for this goes mostly to James Reinstate Monica Polk. He had the math idea to use Modulo for this. Here is what I got with explanation:
int result;
int rest = requestedAmount % BatchSize;
if (rest != 0)
{
    // Correct upwards to the next multiple
    int distanceToNextMultiple = BatchSize - rest;
    result = requestedAmount + distanceToNextMultiple;
}
else
{
    // It already is right
    result = requestedAmount;
}
For the BatchSize of 4, you will get:
13; 13%4=1; 4-1=3; 13+3=16;
14; 14%4=2; 4-2=2; 14+2=16;
15; 15%4=3; 4-3=1; 15+1=16;
16; 16%4=0; Else is used. 16 is already right.
I am attempting to implement the BesselK method from Boost (a C++ library).
The Boost method accepts two doubles and returns a double. (I have it implemented below as cyl_bessel_k.)
The equation I modeled this on comes from Boost's documentation:
http://www.boost.org/doc/libs/1_45_0/libs/math/doc/sf_and_dist/html/math_toolkit/special/bessel/mbessel.html
I have also been checking values against Wolfram:
http://www.wolframalpha.com/input/?i=BesselK%283%2C1%29
I am able to match the output from the Boost method when passing a positive non-integer value for "v". However, when an integer is passed, my output is severely off. So, there is an obvious discontinuity issue. From reading up on this, it seems that the issue arises from passing a negative integer to the gamma function. Somehow reflection comes into play here with the Bessel_I method, but I'm nearing the end of my math skillset.
1.) What needs to happen to the bessel_i method with reflection to make this work?
2.) I'm currently doing a partial sum approach. Boost uses a continuous fraction approach. How can I modify this to account for convergence?
Any input is appreciated! Thank you!
static double cyl_bessel_k(double v, double x)
{
    if (v > 0)
    {
        double iNegativeV = cyl_bessel_i(-v, x);
        double iPositiveV = cyl_bessel_i(v, x);
        double besselSecondKind = (Math.PI / 2) * ((iNegativeV - iPositiveV) / Math.Sin(Math.PI * v));
        return besselSecondKind;
    }
    else
    {
        // error handling: the reflection formula above needs v > 0
        throw new ArgumentOutOfRangeException(nameof(v), "v must be positive");
    }
}
static double cyl_bessel_i(double v, double x)
{
    if (x == 0)
    {
        return 0;
    }
    double summed = 0;
    double a = Math.Pow(0.5d * x, v);
    for (int k = 0; k < 10; k++) // how to account for convergence? 10 is arbitrary
    {
        double b = Math.Pow(0.25d * Math.Pow(x, 2), k);
        double kFactorial = SpecialFunctions.Factorial(k); // comes from MathNet.Numerics (NuGet)
        double gamma = SpecialFunctions.Gamma(v + k + 1);  // comes from MathNet.Numerics
        summed += b / (kFactorial * gamma);
    }
    return a * summed;
}
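On question 2 (convergence), one common alternative to a fixed iteration count is to stop once the next term no longer meaningfully changes the running sum. A sketch of that change to the loop above (same MathNet.Numerics calls; the cap of 500 and the 1e-16 tolerance are arbitrary choices of mine, not Boost's):
static double cyl_bessel_i_adaptive(double v, double x)
{
    if (x == 0) return (v == 0) ? 1 : 0;   // note: I_0(0) = 1, I_v(0) = 0 for v > 0
    double summed = 0;
    double a = Math.Pow(0.5d * x, v);
    for (int k = 0; k < 500; k++)           // hard cap as a safety net
    {
        double b = Math.Pow(0.25d * x * x, k);
        double term = b / (SpecialFunctions.Factorial(k) * SpecialFunctions.Gamma(v + k + 1));
        summed += term;
        if (Math.Abs(term) <= 1e-16 * Math.Abs(summed)) break;  // converged
    }
    return a * summed;
}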
After lots of refactoring and trying things that didn't work, this is what I came up with. It's mostly Boost logic that has been adapted and translated into C#.
It's not perfect though (likely due to rounding, precision, etc.). Any improvements are welcome! The max error is 0.0000001926% between the true Bessel_K value from Wolfram and my adapted method. This occurs when parameter 'v' is an integer. For my purposes, this was close enough.
Link to fiddle:
https://dotnetfiddle.net/QIYzK6
Hopefully it saves someone some headache.
I wonder why the following statement always returns 1, and how I can fix it. I accounted for integer division by casting the first element of the division to float, but apart from that I'm not getting much further.
int value = any int;
float test = (float)value / int.MaxValue / 2 + 1;
By the way, my intention is to make this convert ANY integer to a 0-1 float.
To rescale a number in the range s..e to 0..1, you do (value-s)/(e-s).
So in this case:
double d = ((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue);
float test = (float)d;
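Wrapped up as a helper (a sketch), the endpoints land where you'd expect:
static float ToUnitRange(int value) =>
    (float)(((double)value - int.MinValue) / ((double)int.MaxValue - int.MinValue));

// ToUnitRange(int.MinValue) == 0f, ToUnitRange(int.MaxValue) == 1f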
It doesn't always return 1. For example, this code:
int value = 12345678;
float test = (float)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.002874
The problem is that floats are not very precise, so for small values of value, the fraction is rounded away and the result is exactly 1, to the number of digits of precision that floats can handle.
For example, value == 2300 will print 1, but value == 2400 will print 1.000001.
If you use double you get better results:
int value = 1;
double test = (double)value / int.MaxValue / 2 + 1;
Console.WriteLine(test);
Prints 1.00000000023283
Avoid implicit type conversions. Make all the elements in your expression of type double, if that is the type you want. Convert the int.MaxValue and the 2 to double before using them in the division, so that no implicit type conversions are involved.
Also, you might want to parenthesize your expression to make it more readable. As it is, it is error prone.
Finally, if your expression is too complex and you don't know what's going on, split it into simpler expressions, all of them of type double.
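For example, a minimal rewrite along those lines:
int value = 12345678;
double scaled = (double)value / (double)int.MaxValue;  // explicit conversions
double test = (scaled / 2.0) + 1.0;                    // parenthesized, all doubles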
P.S.: By the way, trying to get precise results while using float instead of double is not a very wise thing to do. Use double for precise floating point calculations.
I am doing a calculation which frequently involves values like 3.47493E+17298. This is way beyond what a double can handle, and I don't need extra precision, just extra range of exponents, so I created my own little struct in C#.
My struct uses a long for significand and sign, and an int for exponent, so I effectively have:
1 sign bit
32 exponent bits (regular 2's complement exponent)
63 significand bits
I am curious what steps could be made to make my multiplication routine more efficient. I am running an enormous number of multiplications of these extended range values, and it is pretty fast, but I was looking for hints as to making it faster.
My multiplication routine:
public static BigFloat Multiply(BigFloat left, BigFloat right)
{
    long shsign1;
    long shsign2;
    if (left.significand == 0)
    {
        return bigZero;
    }
    if (right.significand == 0)
    {
        return bigZero;
    }
    shsign1 = left.significand;
    shsign2 = right.significand;
    // Scaling down the significands to prevent overflow in the multiply.
    // s1 and s2 indicate how much the left and right
    // significands need shifting.
    // multLimit is a long constant indicating the
    // max value I want either significand to be.
    int s1 = qshift(shsign1, multLimit);
    int s2 = qshift(shsign2, multLimit);
    shsign1 >>= s1;
    shsign2 >>= s2;
    BigFloat r;
    r.significand = shsign1 * shsign2;
    r.exponent = left.exponent + right.exponent + s1 + s2;
    return r;
}
And the qshift:
It just finds out how much to shift val so that its absolute value is smaller than the limit.
public static int qshift(long val, long limit)
{
    long q = val;
    long c = limit;
    long nc = -limit;
    int counter = 0;
    while (q > c || q < nc)
    {
        q >>= 1;
        counter++;
    }
    return counter;
}
Here is a completely different idea...
Use the hardware's floating-point machinery, but augment it with your own integer exponents. Put another way, make BigFloat.significand be a floating-point number instead of an integer.
Then you can use ldexp and frexp to keep the actual exponent on the float equal to zero. These should be single machine instructions.
So BigFloat multiply becomes:
r.significand = left.significand * right.significand
r.exponent = left.exponent + right.exponent
tmp = (actual exponent of r.significand from frexp)
r.exponent += tmp
(use ldexp to subtract tmp from actual exponent of r.significand)
Unfortunately, the last two steps require frexp and ldexp, which searches suggest are not available in C#. So you might have to write this bit in C.
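(As an aside, newer .NET runtimes do expose equivalents: Math.ILogB and Math.ScaleB, available since .NET Core 3.0. A sketch of the renormalization step using them:)
static void Normalize(ref double significand, ref int exponent)
{
    if (significand == 0.0) return;              // leave zero alone
    int e = Math.ILogB(significand);             // frexp-like: binary exponent of the magnitude
    significand = Math.ScaleB(significand, -e);  // ldexp-like: magnitude now in [1, 2)
    exponent += e;
}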
...
Or, actually...
Use floating-point numbers for the significands, but just keep them normalized between 1 and 2. So again, use floats for the significands, and multiply like this:
r.significand = left.significand * right.significand;
r.exponent = left.exponent + right.exponent;
if (r.significand >= 2) {
    r.significand /= 2;
    r.exponent += 1;
}
assert(r.significand >= 1 && r.significand < 2); // for debugging...
This should work as long as you maintain the invariant mentioned in the assert(). (Because if x is between 1 and 2 and y is between 1 and 2, then x*y is between 1 and 4, so the normalization step just has to check for when the significand product is between 2 and 4.)
You will also need to normalize the results of additions etc., but I suspect you are already doing that.
Although you will need to special-case zero after all :-).
[edit, to flesh out the frexp version]
BigFloat BigFloat::normalize(BigFloat b)
{
    double temp = b.significand;
    int tempexp = b.exponent;
    int tempexp2;
    double temp2 = frexp(temp, &tempexp2);
    // Need to test temp2 for infinity and NaN here
    tempexp += tempexp2;
    if (tempexp < MIN_EXP) {
        // underflow!
    }
    if (tempexp > MAX_EXP) {
        // overflow!
    }
    BigFloat r;
    r.exponent = tempexp;
    r.significand = temp2;
    return r;
}
In other words, I would suggest factoring this out as a "normalize" routine, since presumably you want to use it following additions, subtractions, multiplications, and divisions.
And then there are all the corner cases to worry about...
You probably want to handle underflow by returning zero. Overflow depends on your tastes; it should either be an error or ±infinity. Finally, if the result of frexp() is infinity or NaN, the value of tempexp2 is undefined, so you might want to check those cases, too.
I am not much of a C# programmer, but here are some general ideas.
First, are there any profiling tools for C#? If so, start with those...
The time is very likely being spent in your qshift() function; in particular, the loop. Mispredicted branches are nasty.
I would rewrite it as:
long q = abs(val);
long x = q / limit;
(find next power of 2 bigger than x)
For that last step, see this question and answer.
Then instead of shifting by qshift, just divide by this power of 2. (Does C# have "find first set" (aka. ffs)? If so, you can use it to get the shift count from the power of 2; it should be one instruction.)
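(C# does have this on newer runtimes via System.Numerics.BitOperations. A sketch of a branch-light qshift built on leading-zero counts; note that it works on the magnitude, so for negative val it can differ from the arithmetic-shift loop by one position:)
using System.Numerics;

static int QShiftFast(long val, long limit)  // assumes limit > 0, val != long.MinValue
{
    ulong q = (ulong)Math.Abs(val);
    int shift = BitOperations.LeadingZeroCount((ulong)limit)
              - BitOperations.LeadingZeroCount(q);  // bit-length difference
    if (shift < 0) return 0;
    if ((long)(q >> shift) > limit) shift++;        // one correction step
    return shift;
}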
Definitely inline this sequence if the compiler will not do it for you.
Also, I would ditch the special cases for zero, unless you are multiplying by zero a lot. Linear code good; conditionals bad.
If you're sure there won't be an overflow, you can use an unchecked block.
That will remove the overflow checks, and give you a bit more performance.
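For example (a sketch; this only matters if the code compiles with overflow checking enabled, since C# arithmetic is unchecked by default):
static long MultiplyUnchecked(long a, long b)
{
    unchecked
    {
        return a * b;  // wraps on overflow instead of throwing under /checked
    }
}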
I'm running NUnit tests to evaluate some known test data and calculated results. The numbers are floating point doubles so I don't expect them to be exactly equal, but I'm not sure how to treat them as equal for a given precision.
In NUnit we can compare with a fixed tolerance:
double expected = 0.389842845321551d;
double actual = 0.38984284532155145d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and that works fine for numbers below 1, but as the numbers grow, the tolerance really needs to change so we always care about the same number of digits of precision.
Specifically, this test fails:
double expected = 1.95346834136148d;
double actual = 1.9534683413614817d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
and of course larger numbers fail with this tolerance:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
Expect(actual, EqualTo(expected).Within(0.000000000000001));
What's the correct way to evaluate two floating point numbers are equal with a given precision? Is there a built-in way to do this in NUnit?
From MSDN:
By default, a Double value contains 15 decimal digits of precision, although a maximum of 17 digits is maintained internally.
Let's assume 15, then.
So, we could say that we want the tolerance to be to the same degree.
How many precise figures do we have after the decimal point? We need to know the distance of the most significant digit from the decimal point, right? The magnitude. We can get this with a Log10.
Then we need to divide 1 by 10 ^ precision to get a value around the precision we want.
Now, you'll need to do more test cases than I have, but this seems to work:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d; // really comes from a data import
// Log10(100) = 2, so to get the magnitude we add 1.
int magnitude = 1 + (expected == 0.0 ? -1 : Convert.ToInt32(Math.Floor(Math.Log10(expected))));
int precision = 15 - magnitude;
double tolerance = 1.0 / Math.Pow(10, precision);
Assert.That(actual, Is.EqualTo(expected).Within(tolerance));
It's late - there could be a gotcha in here. I tested it against your three sets of test data and each passed. Changing precision to be 16 - magnitude caused the test to fail; setting it to 14 - magnitude obviously caused it to pass, as the tolerance was greater.
This is what I came up with for The Floating-Point Guide (Java code, but should translate easily, and comes with a test suite, which you really really need):
public static boolean nearlyEqual(float a, float b, float epsilon)
{
    final float absA = Math.abs(a);
    final float absB = Math.abs(b);
    final float diff = Math.abs(a - b);

    if (a * b == 0) { // a or b or both are zero
        // relative error is not meaningful here
        return diff < (epsilon * epsilon);
    } else { // use relative error
        return diff / (absA + absB) < epsilon;
    }
}
The really tricky question is what to do when one of the numbers to compare is zero. The best answer may be that such a comparison should always consider the domain meaning of the numbers being compared rather than trying to be universal.
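A direct C# translation of the Java above (a sketch; same semantics):
public static bool NearlyEqual(float a, float b, float epsilon)
{
    float absA = Math.Abs(a);
    float absB = Math.Abs(b);
    float diff = Math.Abs(a - b);

    if (a * b == 0)                          // a or b or both are zero
        return diff < epsilon * epsilon;     // relative error is meaningless here
    return diff / (absA + absB) < epsilon;   // relative error
}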
How about converting the items each to string and comparing the strings?
string test1 = String.Format("{0:0.0##}", expected);
string test2 = String.Format("{0:0.0##}", actual);
Assert.AreEqual(test1, test2);
Assert.That(x, Is.EqualTo(y).Within(10).Percent);
is a decent option (changes it to a relative comparison, where x is required to be within 10% of y). You may want to add extra handling for 0, as otherwise you'll get an exact comparison in that case.
Update:
Another good option is
Assert.That(x, Is.EqualTo(y).Within(1).Ulps);
where Ulps means units in the last place. See https://docs.nunit.org/articles/nunit/writing-tests/constraints/EqualConstraint.html#comparing-floating-point-values.
I don't know if there's a built-in way to do it with NUnit, but I would suggest multiplying each double by 10 raised to the precision you're seeking, storing the results as longs, and comparing the two longs to each other.
For example:
double expected = 1632.4587642911599d;
double actual = 1632.4587642911633d;
// for a precision of 4
long lActual = (long)(10000 * actual);
long lExpected = (long)(10000 * expected);
if (lActual == lExpected)
{
    // Perform desired actions
}
This is a quick idea, but how about shifting them down until they are below one? It should be something like num/(10^ceil(log10(num))) . . . not too sure how well it would work, but it's an idea.
1632.4587642911599 / (10^ceil(log10(1632.4587642911599))) = 0.16324587642911599
How about:
const double significantFigures = 10;
Assert.AreEqual(Actual / Expected, 1.0, 1.0 / Math.Pow(10, significantFigures));
The difference between the two values should be less than either value divided by the precision.
Assert.Less(Math.Abs(firstValue - secondValue), firstValue / Math.Pow(10, precision));
open FsUnit
actual |> should (equalWithin errorMargin) expected