Numerically solve convex Lagrange multiplier problems with conjugate gradient descent.
Convexity is key; otherwise the descent algorithm can return the wrong answer.
Convexity can be tested by checking that the Hessian of the Lagrangian is positive definite over the whole region the function is defined on.
I have provided a test that the Hessian is positive definite at a point, which is something, but not enough to ensure that the whole function is convex.
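A pointwise check like that can be sketched with Sylvester's criterion: a symmetric matrix is positive definite exactly when all of its leading principal minors are positive. The names below are illustrative, not this module's actual API:

```haskell
-- Sketch of a pointwise positive-definiteness test using Sylvester's
-- criterion: a symmetric matrix is positive definite iff every leading
-- principal minor is positive. Illustrative only, not the module's API.

-- Naive Laplace-expansion determinant; fine for small Hessians.
det :: [[Double]] -> Double
det [[x]] = x
det m     = sum [ (-1) ^ j * (head m !! j) * det (minor j) | j <- [0 .. n - 1] ]
  where
    n       = length m
    minor j = [ take j r ++ drop (j + 1) r | r <- tail m ]

-- Leading principal minor of order k.
leadingMinor :: Int -> [[Double]] -> Double
leadingMinor k m = det [ take k r | r <- take k m ]

-- Positive definite at this point iff all leading minors are positive.
positiveDefinite :: [[Double]] -> Bool
positiveDefinite m = all (> 0) [ leadingMinor k m | k <- [1 .. length m] ]
```

For instance, `positiveDefinite [[2,-1],[-1,2]]` is `True`, while `[[1,2],[2,1]]` fails at the order-2 minor.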
Be that as it may, if you know that your Lagrangian is convex, you can use solve to
find the minimum.
For example, find the maximum-entropy distribution under the constraint that the probabilities add up to one.
solve 0.00001 (negate . sum . map (\x -> x * log x), [(sum, 1)]) 3
Gives the answer ([0.33, 0.33, 0.33], [-0.09])
The first element of the result pair holds the arguments of the objective function at the minimum; the second holds the Lagrange multipliers.
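That multiplier can be sanity-checked by hand. Stationarity of the Lagrangian `negate (sum (map (\x -> x * log x) xs)) + l * (sum xs - 1)` in each coordinate gives `-log x_i - 1 + l = 0`, so at the uniform point `x_i = 1/3` the multiplier is `l = 1 + log (1/3)`:

```haskell
-- Sanity check of the example's multiplier. Stationarity of
--   negate (sum (map (\x -> x * log x) xs)) + lambda * (sum xs - 1)
-- in coordinate i gives  -log x_i - 1 + lambda = 0,  so at the
-- uniform point x_i = 1/n the multiplier is  lambda = 1 + log (1/n).
lambdaAtUniform :: Int -> Double
lambdaAtUniform n = 1 + log (1 / fromIntegral n)
```

`lambdaAtUniform 3` is about `-0.0986`, matching the `-0.09` in the answer above.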
|-> (forall a. Floating a => ([a] -> a, [Constraint a]))|
A pair of the function to minimize and the constraints
The arity of the objective function and the constraints.
|-> Either (Result, Statistics) ([Double], [Double])|
Either an explanation of why the gradient descent failed, or a pair of the arguments at the minimum and the Lagrange multipliers
This is the Lagrange multiplier solver. It is assumed that the objective function and all of the constraints take the same number of arguments.
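Assuming a constraint is a pair of a constraint function and its target value, as in the example above, the Lagrangian being descended on could be assembled like this (a sketch; the module's actual `Constraint` type may differ):

```haskell
-- Sketch of assembling the Lagrangian
--   L xs ls = f xs + sum_i l_i * (g_i xs - c_i)
-- The Constraint synonym here is an assumption based on the example
-- above, not necessarily the module's definition.
type Constraint a = ([a] -> a, a)

lagrangian :: Num a => ([a] -> a) -> [Constraint a] -> [a] -> [a] -> a
lagrangian f cs xs ls = f xs + sum [ l * (g xs - c) | (l, (g, c)) <- zip ls cs ]
```

With the entropy example, `lagrangian (negate . sum . map (\x -> x * log x)) [(sum, 1)] xs [l]` would be the function whose stationary point is sought.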
This is not a true feasibility test for the function; I am not sure exactly how to
implement that. This just checks feasibility at a point. If this ever returns
False, solve can fail.