I was wondering if anyone had any suggestions for minimizing a function, f(x,y), where x and y are integers. I have researched lots of minimization and optimization techniques, like BFGS and others out of GSL, and things out of Numerical Recipes. So far, I have tried implementing a couple of different schemes. The first works by picking the direction of largest descent among f(x+1,y), f(x-1,y), f(x,y+1), f(x,y-1), and following that direction with a line minimization. I have also tried using a downhill simplex (Nelder-Mead) method. Both methods get stuck far away from a minimum. They both appear to work on simpler functions, like finding the minimum of a paraboloid, but I think that both, and especially the former, are designed for functions where x and y are real-valued (doubles). One more problem is that I need to call f(x,y) as few times as possible. It talks to external hardware, and takes a couple of seconds for each call. Any ideas for this would be greatly appreciated.
Here's an example of the error function. Sorry I didn't post this before. This function takes a couple of seconds to evaluate. Also, the information we query from the device does not add to the error if it is below our desired value, only if it is above.
double Error(int x, int y)
{
    SetDeviceParams(x, y);

    // Each query talks to the hardware and takes a couple of seconds.
    double a = QueryParamA();
    double b = QueryParamB();
    double c = QueryParamC();

    double _fReturnable = 0;

    // A parameter only contributes to the error when it exceeds its desired value.
    if (a >= A_desired)
    {
        _fReturnable += (A_desired - a) * (A_desired - a);
    }
    if (b >= B_desired)
    {
        _fReturnable += (B_desired - b) * (B_desired - b);
    }
    if (c >= C_desired)
    {
        _fReturnable += (C_desired - c) * (C_desired - c);
    }

    return Math.Sqrt(_fReturnable);
}
There are many, many solutions here. In fact, there are entire books and academic disciplines based on the subject. I am reading an excellent one right now: How to Solve It: Modern Heuristics.
There is no one solution that is correct - different solutions have different advantages, based on specific knowledge of your function. It has even been proven that no single heuristic performs best across all optimization tasks (the "no free lunch" theorems).
If you know that your function is quadratic, you can use Gauss-Newton to find the minimum in one step. A genetic algorithm can be a great general-purpose tool, or you can try simulated annealing, which is less complicated.
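To give a concrete flavour of the simulated-annealing suggestion, here is a minimal sketch over an integer grid, assuming an expensive black-box Error(x, y) like the one in the question. The starting point, temperature schedule, and evaluation budget are placeholder assumptions you would have to tune, and note that annealing can still burn a lot of expensive evaluations:
var rng = new Random();
int x = xStart, y = yStart;                  // hypothetical starting point (placeholder)
double current = Error(x, y);
double temperature = 10.0;                   // initial temperature (placeholder)

for (int i = 0; i < maxEvaluations && temperature > 0.01; i++)
{
    // Propose a random neighbouring integer point.
    int nx = x + rng.Next(-1, 2);            // -1, 0 or +1
    int ny = y + rng.Next(-1, 2);
    if (nx == x && ny == y) continue;        // skip null moves to save expensive calls

    double candidate = Error(nx, ny);        // the expensive hardware call

    // Always accept improvements; sometimes accept uphill moves while it is still "hot".
    if (candidate < current || rng.NextDouble() < Math.Exp((current - candidate) / temperature))
    {
        x = nx; y = ny; current = candidate;
    }

    temperature *= 0.95;                     // geometric cooling schedule (placeholder)
}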
Have you looked at genetic algorithms? They are very, very good at finding minima and maxima while avoiding getting trapped in local minima/maxima.
How do you define f(x,y)? Minimisation is a hard problem in general; how hard depends on the complexity of your function.
Genetic Algorithms could be a good candidate.
Resources:
Genetic Algorithms in Search, Optimization, and Machine Learning
Implementing a Genetic Algorithm in C#
Simple C# GA
If it's an arbitrary function, there's no neat way of doing this.
Suppose we have a function defined as:
f(x, y) = 0    if x == 100 and y == 100
        = 100  otherwise
How could any algorithm realistically find (100, 100) as the minimum? It could be any possible combination of values.
Do you know anything about the function you're testing?
What you are generally looking for is called an optimisation technique in mathematics. In general, they apply to real-valued functions, but many can be adapted for integral-valued functions.
In particular, I would recommend looking into non-linear programming and gradient descent. Both would seem quite suitable for your application.
If you could perhaps provide any more details, I might be able to suggest something a little more specific.
Jon Skeet's answer is correct. You really do need information about f and its derivatives, even if f is everywhere continuous.
The easiest way to appreciate the difficulties of what you ask (minimization of f at integer values only) is just to think about an f: R -> R (a real-valued function of one real variable) that makes large excursions between individual integers. You can easily construct such a function so that there is NO correlation between the local minima on the real line and the minima at the integers, as well as no relationship to the first derivative.
For an arbitrary function I see no way except brute force.
So let's look at your problem in math-speak. This is all assuming I understand your problem fully. Feel free to correct me if I am mistaken.
We want to minimize the following:
sqrt( max(a - a_desired, 0)^2 + max(b - b_desired, 0)^2 + max(c - c_desired, 0)^2 )
or, in other notation,
|| Pos(x - x_desired) ||_2
where x = (a, b, c) and Pos(y) = max(y, 0) denotes the "positive part" (this accounts for your if statements). Finally, we wish to restrict ourselves to solutions where x is integer valued.
Unlike the above posters, I don't think genetic algorithms are what you want at all.
In fact, I think the solution is much easier (assuming I am understanding your problem).
1) Run any optimization routine on the function above. This will give you the solution x^* = (a^*, b^*, c^*). As this function is increasing with respect to the variables, the best integer solution you can hope for is (ceil(a^*), ceil(b^*), ceil(c^*)).
Now, you say that your function is possibly hard to evaluate. There exist tools for this which are not based on heuristics. They go under the name Derivative-Free Optimization. People use these tools to optimize objectives based on simulations (I have even heard of a case where the objective function was based on crop growing yields!). Each of these methods has different properties, but in general they attempt to minimize not only the objective, but also the number of objective function evaluations.
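Just to give the flavour of a simple derivative-free method on an integer lattice (my own sketch, not one of the packaged tools mentioned above): a compass/pattern search that probes the four axis neighbours at the current step size and halves the step when nothing improves, which keeps the number of expensive evaluations fairly low:
// Compass (pattern) search on the integer lattice for a black-box objective f(x, y).
static Tuple<int, int> CompassSearch(Func<int, int, double> f, int x, int y, int initialStep)
{
    double best = f(x, y);
    int step = initialStep;
    while (step >= 1)
    {
        bool improved = false;
        int[,] moves = { { step, 0 }, { -step, 0 }, { 0, step }, { 0, -step } };
        for (int m = 0; m < 4; m++)
        {
            double val = f(x + moves[m, 0], y + moves[m, 1]);   // the expensive call
            if (val < best)
            {
                best = val;
                x += moves[m, 0];
                y += moves[m, 1];
                improved = true;
                break;                                          // restart the pattern from the new point
            }
        }
        if (!improved) step /= 2;                               // refine the step once the pattern stalls
    }
    return Tuple.Create(x, y);
}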
Is there a generally accepted best approach to coding complex math? For example:
double someNumber = .123 + .456 * Math.Pow(Math.E, .789 * Math.Pow((homeIndex + .22), .012));
Is this a point where hard-coding the numbers is okay? Or should each number have a constant associated with it? Or is there even another way, like storing the calculations in config and invoking them somehow?
There will be a lot of code like this, and I'm trying to keep it maintainable.
Note: The example shown above is just one line. There would be tens or hundreds of these lines of code. And not only could the numbers change, but the formula could as well.
Generally, there are two kinds of constants: ones with meaning to the implementation, and ones with meaning to the business logic.
It is OK to hard-code constants of the first kind: they are internal to your algorithm. For example, if you are using a ternary search and need to divide the interval in three parts, dividing by a hard-coded 3 is the right approach.
Constants with meaning outside the code of your program, on the other hand, should not be hard-coded: giving them explicit names gives whoever maintains your code after you leave the company a non-zero chance of making correct modifications without having to rewrite things from scratch or e-mail you for help.
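As a rough illustration of that split (the ternary-search helper and the tax-rate constant below are hypothetical examples of mine, not from the question):
// The 3 here is an implementation detail of ternary search; giving it a name would only add noise.
static double TernarySearchMin(Func<double, double> f, double lo, double hi, double eps)
{
    while (hi - lo > eps)
    {
        double m1 = lo + (hi - lo) / 3;
        double m2 = hi - (hi - lo) / 3;
        if (f(m1) < f(m2)) hi = m2; else lo = m1;
    }
    return (lo + hi) / 2;
}

// This rate has business meaning, so it gets a name (and probably a config entry).
const double StandardTaxRate = 0.20;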
"Is it okay"? Sure. As far as I know, there's no paramilitary police force rounding up those who sin against the one true faith of programming. (Yet.).
Is it wise?
Well, there are all sorts of ways of deciding that - performance, scalability, extensibility, maintainability etc.
On the maintainability scale, this is pure evil. It makes extensibility very hard; performance and scalability are probably not a huge concern.
If you left behind a single method with loads of lines similar to the above, your successor would have no chance maintaining the code. He'd be right to recommend a rewrite.
If you broke it down like
public float CalculateTax(Person person)
{
    float taxFreeAmount = CalcTaxFreeAmount(person);
    float taxableAmount = CalcTaxableAmount(person, taxFreeAmount);
    float taxAmount = CalcTaxAmount(person, taxableAmount);
    return taxAmount;
}
and each of the inner methods is a few lines long, but you left some hardcoded values in there - well, not brilliant, but not terrible.
However, if some of those hardcoded values are likely to change over time (like the tax rate), leaving them as hardcoded values is not okay. It's awful.
The best advice I can give is:
Spend an afternoon with Resharper, and use its automatic refactoring tools.
Assume the guy picking this up from you is an axe-wielding maniac who knows where you live.
I usually ask myself whether I can maintain and fix the code at 3 AM being sleep deprived six months after writing the code. It has served me well. Looking at your formula, I'm not sure I can.
Ages ago I worked in the insurance industry. Some of my colleagues were tasked with converting the actuarial formulas into code, first FORTRAN and later C. Mathematical and programming skills varied from colleague to colleague. What I learned from reviewing their code was the following:
document the actual formula in code; without it, years later you'll have trouble remembering it. External documentation goes missing, becomes dated, or simply may not be accessible.
break the formula into discrete components that can be documented, reused and tested.
use constants to document equations; magic numbers have very little context and often require existing knowledge for other developers to understand.
rely on the compiler to optimize code where possible. A good compiler will inline methods, reduce duplication and optimize the code for the particular architecture. In some cases it may duplicate portions of the formula for better performance.
That said, there are times where hard coding just simplifies things, especially if those values are well understood within a particular context. For example, dividing (or multiplying) something by 100 or 1000 because you're converting a value to dollars. Another one is multiplying something by 3600 when you'd like to convert hours to seconds. Their meaning is often implied by the greater context. The following doesn't say much about the magic number 100:
public static double a(double b, double c)
{
return (b - c) * 100;
}
but the following may give you a better hint:
public static double calculateAmountInCents(double amountDue, double amountPaid)
{
return (amountDue - amountPaid) * 100;
}
As the above comment states, this is far from complex.
You can however store the magic numbers as constants or app.config values, so as to make it easier for the next developer to maintain your code.
When storing such constants, make sure to explain to the next developer (read: yourself in a month) what your thoughts were and what they need to keep in mind.
Also explain what the actual calculation is for and what it is doing.
Do not leave it in-line like this.
Use a constant so you can reuse it, easily find it, and easily change it; it also makes maintenance easier when someone comes looking at your code for the first time.
You can use a config value if it can/should be customized. But what is the impact of a customer altering the value(s)? Sometimes it is best to not give them that option. They could change it on their own and then blame you when things don't work. Then again, maybe they have it in flux more often than your release schedules.
It's worth noting that the JIT compiler in the CLR will usually inline small one-line methods, so if you can extract certain formulas into one-liners you can extract them as methods without any real performance loss.
EDIT:
Constants and such more or less depend on the team and the quantity of use. Obviously, if you're using the same hard-coded number more than once, make it a constant. However, if you're writing a formula that it's likely only you will ever edit (small team), then hard-coding the values is fine. It all depends on your team's views on documentation and maintenance.
If the calculation in your line explains something to the next developer, then you can leave it; otherwise it's better to have a named constant in your code or configuration files.
I found one line in production code which was like:
int interval = 1 * 60 * 60 * 1000;
Even without any comment, it wasn't hard to tell that the original developer meant 1 hour in milliseconds, which would not have been obvious from a bare value of 3600000.
IMO, maybe leaving the calculation written out like that is better for scenarios like this.
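If you do want a name on top of that, here are two common alternatives (a sketch of mine, not from the answer above):
// Option 1: a named constant that documents both the unit and the intent.
const int MillisecondsPerHour = 60 * 60 * 1000;
int interval = 1 * MillisecondsPerHour;

// Option 2: let the framework express the unit for you.
int interval2 = (int)TimeSpan.FromHours(1).TotalMilliseconds;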
Names can be added for documentation purposes. The amount of documentation needed depends largely on the purpose.
Consider the following code:
float e = m * 8.98755179e16f;
And contrast it with the following one:
const float c = 299792458;
float e = m * c * c;
Even though the variable names are not very 'descriptive' in the latter, you'll have a much better idea of what the code is doing than in the first one - arguably there is no need to rename c to speedOfLight, m to mass and e to energy, as the names are self-explanatory in their domain.
const float speedOfLight = 299792458;
float energy = mass * speedOfLight * speedOfLight;
I would argue that the second version (the one using c, m and e) is the clearest - especially if the programmer can expect to find special relativity in the code (an LHC simulator or something similar). To sum up: you need to find an optimal point. The more verbose the code, the more context you provide - which might both help in understanding the meaning (what are e and c? vs. we do something with mass and the speed of light) and obscure the big picture (we square c and multiply by m vs. needing to scan the whole line to get the equation).
Most constants have some deeper meaning and/or an established notation, so I would consider at least naming them by convention (c for the speed of light, R for the gas constant, sPerH for seconds in an hour). If the notation is not clear, longer names should be used (sPerH in a class named Date or Time is probably fine, while it is not in Paginator). The really obvious constants can be hardcoded (say, the division by 2 when calculating the new array length in merge sort).
In my project I face a scenario where I have a function with numerous inputs. At a certain point I am provided with a result and I need to find one combination of inputs that generates that result.
Here is some pseudocode that illustrates the problem:
Double y = f(x_0,..., x_n)
I am provided with y and I need to find any combination of inputs that fits it.
I tried several things on paper that could generate something, but each of my parameters has a range of 6.5 x 10^9 possible values - so I would like to get an optimal execution time.
Can someone name an algorithm or a topic that will be useful for me, so I can read up on how other people have solved similar problems?
I was thinking along the lines of creating a vector from the inputs and judging how well that vector fits the problem. This sounds an awful lot like a neural network, but there is no training phase available.
Edit:
Thank you all for the feedback. The comments sum up the problems I have, and I will try something along the lines of hill climbing.
The general case for your problem might be impossible to solve, but for some cases there are numerical methods that can help you solve your problem.
For example, in 1D, if you can find an input for which the function is smaller than y and one for which it is higher than y, you can use the numerical method regula falsi to numerically find the "root" (which is y in your case), by simply invoking the method on f(x) - y.
Another numerical method for finding roots is Newton-Raphson.
I admit I am not familiar with how to apply these methods in a multi-dimensional space - but it could be a starting point. I'd search the literature for these if I were you.
Note: using such a method almost always requires some knowledge of the function.
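For illustration only, here is a rough regula falsi sketch for the 1D case, assuming you already have a bracket [lo, hi] where f(lo) - y and f(hi) - y have opposite signs (the names and tolerances are placeholders of mine):
// Regula falsi on g(x) = f(x) - y, given a bracket [lo, hi] with g(lo) and g(hi) of opposite sign.
static double RegulaFalsi(Func<double, double> g, double lo, double hi, double tol, int maxIter)
{
    double gLo = g(lo), gHi = g(hi);
    for (int i = 0; i < maxIter; i++)
    {
        // Secant-style estimate of the root inside the bracket.
        double x = hi - gHi * (hi - lo) / (gHi - gLo);
        double gx = g(x);
        if (Math.Abs(gx) < tol) return x;

        // Keep the sub-interval that still brackets the root.
        if (Math.Sign(gx) == Math.Sign(gLo)) { lo = x; gLo = gx; }
        else { hi = x; gHi = gx; }
    }
    return hi - gHi * (hi - lo) / (gHi - gLo); // best estimate after maxIter iterations
}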
Another possible solution is to take g(X) = |f(X) - y| and use some heuristic algorithm to find a minimal value of g. The problem with heuristic methods is that they will get you "close enough" - but they will seldom get you exactly to the target (unless the function is convex).
Some optimization algorithms are: Genetic Algorithms, Hill Climbing, and Gradient Descent (where you can estimate the gradient numerically).
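And since the edit to the question mentions hill climbing, here is a minimal descent-style sketch on g (my own sketch; the step size, starting point, and iteration budget are placeholders, and random restarts would help escape local minima):
// Greedy coordinate descent on g(X) = |f(X) - y|: try +/- step on each parameter and keep
// any move that reduces g; halve the step when stuck. Random restarts (not shown) help
// escape local minima.
static double[] HillClimb(Func<double[], double> g, double[] start, double step, int maxIter)
{
    double[] x = (double[])start.Clone();
    double best = g(x);
    for (int iter = 0; iter < maxIter && best > 0; iter++)
    {
        bool improved = false;
        for (int i = 0; i < x.Length; i++)
        {
            foreach (double delta in new[] { step, -step })
            {
                x[i] += delta;
                double val = g(x);
                if (val < best) { best = val; improved = true; }
                else { x[i] -= delta; }      // undo the move if it didn't help
            }
        }
        if (!improved) step /= 2;            // refine the step size when no move helps
    }
    return x;
}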
I've got two geometrical data sets to match, both containing tens of thousands of PathGeometries. To be exact, I need to find areas which overlap from one set to the other, so I have a loop like:
foreach (var p1 in firstGeometries)
{
    foreach (var p2 in secondGeometries)
    {
        PathGeometry sharedArea = PathGeometry.Combine(p1, p2, GeometryCombineMode.Intersect, null);
        if (sharedArea.GetArea() > 0) // only true 0.01% of the time
        {
            [...]
        }
    }
}
Now, due to the nature of my data, 99.99% of the time the combinations do not intersect at all. Profiling told me this is the most 'expensive' part of this calculation.
Is there any way to speed up or get a faster collision detection between two PathGeometries?
Adding a new answer since I'm more familiar with the Geometry class now. First, I'd test for intersection using their bounding boxes. Though honestly, PathGeometry.Combine probably already does this. So what's the real problem? Testing the boundary of each object against the boundary of every other object is quadratic time. If you instead found intersections (or collisions, in some areas of CS) using a quadtree, you could see significant performance gains. Hard to say without testing and fine-tuning, though. http://gamedev.tutsplus.com/tutorials/implementation/quick-tip-use-quadtrees-to-detect-likely-collisions-in-2d-space/
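As a rough illustration of the bounding-box pre-check inside the inner loop (my addition, using Geometry.Bounds and Rect.IntersectsWith):
// Cheap axis-aligned bounding-box rejection before the expensive Combine call.
if (!p1.Bounds.IntersectsWith(p2.Bounds))
    continue; // the geometries cannot possibly overlap

PathGeometry sharedArea = PathGeometry.Combine(p1, p2, GeometryCombineMode.Intersect, null);
It may also be worth caching p1.Bounds outside the inner loop so it isn't recomputed tens of thousands of times.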
Maybe you can use the Parallel.ForEach method, if you have more than one CPU core available.
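A rough sketch of what that could look like for the loop in the question (my addition; note that WPF geometries are Freezables with thread affinity, so they would need to be frozen via Freeze() before worker threads touch them):
// Parallelize the outer loop; collect overlapping pairs in a thread-safe collection.
var overlaps = new ConcurrentBag<Tuple<PathGeometry, PathGeometry>>();

Parallel.ForEach(firstGeometries, p1 =>
{
    foreach (var p2 in secondGeometries)
    {
        PathGeometry sharedArea = PathGeometry.Combine(p1, p2, GeometryCombineMode.Intersect, null);
        if (sharedArea.GetArea() > 0)
            overlaps.Add(Tuple.Create(p1, p2));
    }
});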
Though I am not sure about the exact nature of each path geometry, assuming that they are polygons:
You can sort the objects in each set by their bounds (for example, by the left edge of the bounding box). That way, once an inner geometry's bounds start past the right edge of the outer one's bounds, none of the remaining elements in the inner loop can intersect it, so you can break out of the loop early.
This should significantly improve the running time, since the vast majority of pairs will never reach the expensive Combine call.
I haven't tested it, but it may be helpful to use GetFlattenedPathGeometry and combine the results of that instead. Depending on the type of geometry you're combining, it's likely getting converted to a polygonal approximation each time. Using GetFlattenedPathGeometry ahead of time will hopefully eliminate the redundant computation.
You definitely need a "broad and narrow phase" to do this.
Bounding-Box checks are a must for something like this.
A much simpler alternative to a quadtree would be to use "spatial hashing" (sometimes also called "spatial indexing"). This technique should reduce the needed time a thousandfold. For a reference, see: http://www.playchilla.com/as3-spatial-hash - it's in AS3, but it's trivial to convert it to C#.
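Here is a rough, untested C# sketch of the idea (my own, not from the linked reference); the cell size is a placeholder you would tune to the typical geometry size:
// Bucket the second set's geometries into a coarse uniform grid keyed by cell coordinates,
// then test each geometry from the first set only against the buckets its bounds touch.
double cellSize = 100.0; // placeholder: roughly the size of a typical geometry

var grid = new Dictionary<Tuple<int, int>, List<PathGeometry>>();

foreach (var p2 in secondGeometries)
{
    Rect b = p2.Bounds;
    for (int ix = (int)Math.Floor(b.Left / cellSize); ix <= (int)Math.Floor(b.Right / cellSize); ix++)
        for (int iy = (int)Math.Floor(b.Top / cellSize); iy <= (int)Math.Floor(b.Bottom / cellSize); iy++)
        {
            var key = Tuple.Create(ix, iy);
            List<PathGeometry> bucket;
            if (!grid.TryGetValue(key, out bucket))
                grid[key] = bucket = new List<PathGeometry>();
            bucket.Add(p2);
        }
}

foreach (var p1 in firstGeometries)
{
    Rect b = p1.Bounds;
    var candidates = new HashSet<PathGeometry>();
    for (int ix = (int)Math.Floor(b.Left / cellSize); ix <= (int)Math.Floor(b.Right / cellSize); ix++)
        for (int iy = (int)Math.Floor(b.Top / cellSize); iy <= (int)Math.Floor(b.Bottom / cellSize); iy++)
        {
            List<PathGeometry> bucket;
            if (grid.TryGetValue(Tuple.Create(ix, iy), out bucket))
                foreach (var p2 in bucket)
                    candidates.Add(p2);
        }

    foreach (var p2 in candidates)
    {
        if (!b.IntersectsWith(p2.Bounds))
            continue; // cheap bounding-box reject before the expensive call
        PathGeometry sharedArea = PathGeometry.Combine(p1, p2, GeometryCombineMode.Intersect, null);
        if (sharedArea.GetArea() > 0)
        {
            // overlapping pair found
        }
    }
}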
I need a formula for determining a debt payoff plan where the following are known: number of payments, amount per payment, and principal; I need to figure out what the interest rate would be from that. I am refactoring existing code, and the current method uses the following (compounded = 12; the interest rate starts at 0.1):
while (counter < 100)
{
    intermediatePayment = (interestRate * (principal / compounded)) / (1 - (1 / Math.Pow(interestRate / compounded + 1, (compounded * numberOfYears))));
    interestIncrement = Math.Abs(interestRate - previousRate) / 2;
    previousRate = interestRate;
    if (intermediatePayment == payment)
        break;
    if (intermediatePayment > payment)
        interestRate -= interestIncrement;
    else
        interestRate += interestIncrement;
    counter++;
}
Now, I understand what this formula does, but I would never be able to arrive at it myself. What's here is actually an equation that is meant to determine the monthly payment when the interest rate, principal, and number of payments are known. It uses brute force, looping (at most 100 times) until the calculated payment equals the desired payment. It usually arrives at an answer after about 40-50 loops, and that could be optimized by reducing significant digits.
Seems to me if we just solved for interestRate there would be no looping. Try as I might, I can't get the equation to solve for I, so that's my main question.
Now, if you understand the problem well enough and know financial formulas and compound interest, you might provide me with an even better solution altogether, which would be awesome. I have done significant research myself and found tools, but not the raw equation; more often I find different formulas for determining interest-related values, but I am not knowledgeable enough to retool them for my needs.
Basically I've spent too much time on this and my boss thinks since the loop works I need to leave it be or ask for help. Fair enough, so I am. :)
Here's a more traditional layout of the formula if that helps any: http://i.imgur.com/BCdsV.png (in the notation of the code: m = (I * (P / c)) / (1 - (1 + I/c)^(-c*y))).
And for test data: if
P=45500
c=12
y=3
m=1400
then
I = .0676
Thanks for the help
If you attempt to solve the formula you linked to for I, the interest rate, you'll find that you get a polynomial of degree cy+1, that is, the total number of payments plus one. It is difficult/impossible to find closed form solutions to high degree polynomials, so an approximation is the best you can do.
The algorithm you've given has some nice properties: it is pretty clear what it is doing, and it gives the right answer in a reasonable amount of time. My attitude would therefore be "if it ain't broke don't try to fix it".
If it turned out that this algorithm was too slow for some reason then there are algorithms which converge to the right answer faster; you could work out what the polynomial you need to find roots of is, work out its derivative using simple calculus, and then use Newton's Method to converge to the roots faster. But for this simple example where the answer only has to be accurate to four decimal places anyway, that seems like overkill.
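For what it's worth, here is a minimal sketch of that Newton approach (my addition, not from the answer), applied to g(I) = payment(I) - m with a numerical derivative instead of differentiating the polynomial by hand; the iteration cap and tolerances are arbitrary:
// Newton's method on g(I) = computed payment at rate I minus the desired payment.
// PaymentFor is the formula from the question; the derivative is approximated numerically.
static double PaymentFor(double i, double principal, int compounded, int years)
{
    return (i * (principal / compounded)) /
           (1 - (1 / Math.Pow(i / compounded + 1, compounded * years)));
}

static double SolveRate(double principal, int compounded, int years, double payment)
{
    double i = 0.1;                       // initial guess, as in the original code
    for (int iter = 0; iter < 20; iter++)
    {
        double g = PaymentFor(i, principal, compounded, years) - payment;
        if (Math.Abs(g) < 0.005) break;   // within half a cent of the desired payment

        const double h = 1e-6;            // step for the numerical derivative
        double dg = (PaymentFor(i + h, principal, compounded, years) -
                     PaymentFor(i - h, principal, compounded, years)) / (2 * h);

        i -= g / dg;                      // Newton step
    }
    return i;
}
With the test data from the question (P = 45500, c = 12, y = 3, m = 1400), this should converge to roughly I = 0.0676 in only a handful of iterations.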
This formula cannot be explicitly solved for I, so you can stop trying. On the other hand, the loop goes way beyond common sense in precision. You can surely stop when you are within half a cent of the payment amount, or when the increment in the estimate of I gets below 0.0001, since there was some rounding during the original calculations anyway.
So far it looks like Fabrice Bellard's base-2 formula is the way to go.
Ironically, this will require a BigReal type; do we have this for .NET? .NET 4.0 has BigInteger.
Anyone have a Haskell version?
Since you're asking for a Haskell version, here is a paper by Jerzy Karczmarczuk, called "The Most Unreliable Technique in the World to compute π":
This paper is an atypical exercice in lazy functional coding, written for fun and instruction. It can be read and understood by anybody who understands the programming language Haskell. We show how to implement the Bailey-Borwein-Plouffe formula for π in a co-recursive, incremental way which produces the digits 3, 1, 4, 1, 5, 9... until the memory exhaustion. This is not a way to proceed if somebody needs many digits! Our coding strategy is perverse and dangerous, and it provably breaks down. It is based on the arithmetics over the domain of infinite sequences of digits representing proper fractions expanded in an integer base. We show how to manipulate: add, multiply by an integer, etc. such sequences from the left to the right ad infinitum, which obviously cannot work in all cases because of ambiguities. Some deep philosophical consequences are discussed in the conclusions.
It doesn't really solve the problem in an efficient or very practical way, but is entertaining and shows some of the problems with lazy infinite precision arithmetic.
Then there's also this paper by Jeremy Gibbons.
By far my favorite Haskell spigot for pi comes from Jeremy Gibbons:
pi = g (1, 0, 1, 1, 3, 3)
  where
    g (q, r, t, k, n, l) =
      if 4*q + r - t < n*t
        then n : g (10*q, 10*(r - n*t), t, k, div (10*(3*q + r)) t - 10*n, l)
        else g (q*k, (2*q + r)*l, t*l, k+1, div (q*(7*k + 2) + r*l) (t*l), l+2)
The mathematical background that justifies that implementation can be found in:
A Spigot Algorithm for the Digits of Pi
Wikipedia details a lot of ways to get numerical approximations of pi here. They also give some sample pseudo-code.
Edit: If you're interested in this kind of mathematical problem without having any related real-world problem to solve (which is definitely a good attitude to have, IMHO), you could visit the Project Euler page.
There is the possibility of processing big rational numbers in DLR-based dynamic languages (e.g. IronPython). Or you can use any portable C/C++ implementation of big real numbers through P/Invoke.