| Safe Haskell | None |
|---|---|
Numeric.AD.Lagrangian
Description
Numerically solve convex Lagrange multiplier problems with conjugate gradient descent.
Here is an example from the Wikipedia page on Lagrange multipliers: maximize f(x, y) = x + y subject to the constraint x^2 + y^2 = 1.
    >>> maximize 0.00001 (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] 2
    Right ([0.707,0.707], [-0.707])
The first component of the result pair contains the arguments of the objective function at the maximum; the second contains the Lagrange multipliers.
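The same example can also be run as a compiled program. The following is a minimal sketch, assuming the package providing this module and its dependencies (presumably `ad`, `nonlinear-optimization`, and `vector`) are installed; the exact `Show` output of the result vectors may differ from the rendering above.

```haskell
module Main where

import Numeric.AD.Lagrangian

main :: IO ()
main =
  -- Maximize f(x, y) = x + y subject to x^2 + y^2 = 1,
  -- with the same tolerance (0.00001) and arity (2) as the example above.
  case maximize 0.00001 (\[x, y] -> x + y) [(\[x, y] -> x ^ 2 + y ^ 2) <=> 1] 2 of
    Right (args, multipliers) -> do
      putStrLn "arguments at the maximum:"
      print args
      putStrLn "Lagrange multipliers:"
      print multipliers
    Left _ ->
      putStrLn "conjugate gradient descent did not converge"
```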
- type AD2 s r a = AD s (AD r a)
- newtype FU a = FU {}
- (<=>) :: (forall s r. (Mode s, Mode r) => [AD2 s r a] -> AD2 s r a) -> a -> Constraint a
- type Constraint a = (FU a, a)
- maximize :: Double -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> Int -> Either (Result, Statistics) (Vector Double, Vector Double)
- minimize :: Double -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> Int -> Either (Result, Statistics) (Vector Double, Vector Double)
- feasible :: (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> [Double] -> Bool
Helper types
newtype FU a

A newtype wrapper for working with the rank-2 typed constraint functions.
(<=>) :: (forall s r. (Mode s, Mode r) => [AD2 s r a] -> AD2 s r a) -> a -> Constraint a

Build a Constraint from a function and a constant.
type Constraint a = (FU a, a)
A constraint of the form g(x, y, ...) = c. See <=> for constructing a Constraint.
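For instance, constraints might be built like the following sketch; the names `unitCircle` and `onCircleDiagonal` are purely illustrative.

```haskell
import Numeric.AD.Lagrangian (Constraint, (<=>))

-- x^2 + y^2 = 1, packaged as a Constraint with (<=>).
unitCircle :: Constraint Double
unitCircle = (\[x, y] -> x ^ 2 + y ^ 2) <=> 1

-- Multiple constraints are just a list, ready to pass to minimize or maximize.
onCircleDiagonal :: [Constraint Double]
onCircleDiagonal =
  [ unitCircle
  , (\[x, y] -> x - y) <=> 0  -- additionally require x = y
  ]
```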
Solver
maximize

Arguments

| :: Double | |
| -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) | The function to maximize |
| -> [Constraint Double] | The constraints as pairs |
| -> Int | The arity of the objective function, which should equal the arity of the constraints. |
| -> Either (Result, Statistics) (Vector Double, Vector Double) | Either an explanation of why the gradient descent failed, or a pair containing the arguments at the maximum and the Lagrange multipliers |
Finding the maximum is the same as finding the minimum of the negated objective function.
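A quick way to see this relationship, as a sketch rather than anything from the library itself: maximizing an objective and minimizing its negation under the same constraints should land on the same point, up to numerical error.

```haskell
import Numeric.AD.Lagrangian

checkDuality :: IO ()
checkDuality = do
  let circle :: [Constraint Double]
      circle = [(\[x, y] -> x ^ 2 + y ^ 2) <=> 1]
  -- Maximizing x + y on the unit circle ...
  either (const (putStrLn "failed")) (print . fst)
    (maximize 0.00001 (\[x, y] -> x + y) circle 2)
  -- ... should agree with minimizing -(x + y) under the same constraint.
  either (const (putStrLn "failed")) (print . fst)
    (minimize 0.00001 (\[x, y] -> negate (x + y)) circle 2)
```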
minimize

Arguments

| :: Double | |
| -> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) | The function to minimize |
| -> [Constraint Double] | The constraints as pairs |
| -> Int | The arity of the objective function, which should equal the arity of the constraints. |
| -> Either (Result, Statistics) (Vector Double, Vector Double) | Either an explanation of why the gradient descent failed, or a pair containing the arguments at the minimum and the Lagrange multipliers |
This is the Lagrange multiplier solver. It is assumed that the objective function and all of the constraints take the same number of arguments.
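A hedged usage sketch for minimize: minimizing x^2 + y^2 subject to x + y = 2, whose optimum is at x = y = 1. The tolerance and the function name `closestPoint` are illustrative choices, not part of the library.

```haskell
import Numeric.AD.Lagrangian

closestPoint :: IO ()
closestPoint =
  -- Minimize x^2 + y^2 subject to x + y = 2; the optimum is x = y = 1.
  case minimize 0.00001 (\[x, y] -> x ^ 2 + y ^ 2) [(\[x, y] -> x + y) <=> 2] 2 of
    Right (args, multipliers) -> do
      print args         -- should be close to 1.0, 1.0
      print multipliers  -- a single multiplier for the single constraint
    Left _ ->
      putStrLn "conjugate gradient descent did not converge"
```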
Experimental features
feasible :: (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> [Double] -> Bool
WARNING. Experimental.
This is not a true feasibility test for the function; I am not sure exactly how to implement that. This just checks feasibility at a single point. If this ever returns False, the solver can fail.
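A minimal sketch of calling feasible with the unit-circle constraint from the example above. The candidate point is an arbitrary choice, and the exact criterion behind the returned Bool is internal to the library.

```haskell
import Numeric.AD.Lagrangian

feasibleAtStart :: Bool
feasibleAtStart =
  feasible
    (\[x, y] -> x + y)                  -- objective
    [(\[x, y] -> x ^ 2 + y ^ 2) <=> 1]  -- constraints
    [1 / sqrt 2, 1 / sqrt 2]            -- candidate point on the circle
```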