lagrangian-0.5.0.0: Solve Lagrange multiplier problems

Safe Haskell: None

Numeric.AD.Lagrangian

Description

Numerically solve convex Lagrange multiplier problems with conjugate gradient descent.

Here is an example from the Wikipedia page on Lagrange multipliers: maximize f(x, y) = x + y subject to the constraint x^2 + y^2 = 1.

>>> maximize 0.00001 (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] 2
Right ([0.707,0.707], [-0.707])

The first element of the result pair holds the arguments of the objective function at the maximum. The second element holds the Lagrange multipliers.
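For comparison, here is the corresponding minimization: minimize f(x, y) = x^2 + y^2 subject to x + y = 2. This is a sketch; the analytic solution is (1, 1) with multiplier -2 under the sign convention of the example above, and the exact printed output may differ.

>>> minimize 0.00001 (\[x, y] -> x^2 + y^2) [(\[x, y] -> x + y) <=> 2] 2
Right ([1.0,1.0], [-2.0])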

Helper types

type AD2 s r a = AD s (AD r a)

newtype FU a

A newtype wrapper for working with the rank-2 constraint functions.

Constructors

FU 

Fields

unFU :: forall s r. (Mode s, Mode r) => [AD2 s r a] -> AD2 s r a

(<=>) :: (forall s r. (Mode s, Mode r) => [AD2 s r a] -> AD2 s r a) -> a -> Constraint a

Build a Constraint from a function and a constant.

type Constraint a = (FU a, a)

A constraint of the form g(x, y, ...) = c. See <=> for constructing a Constraint.
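For example, the unit-circle constraint from the example at the top of this page can be given a name and reused. This is an illustrative sketch; circle is not part of the library.

-- Illustrative binding (not part of the library): the unit-circle
-- constraint g(x, y) = x^2 + y^2 = 1, built with <=>.
circle :: Constraint Double
circle = (\[x, y] -> x^2 + y^2) <=> 1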

Solver

maximize

Arguments

:: Double

The numerical tolerance used by the underlying conjugate gradient solver to decide convergence (0.00001 in the example above)
-> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double)

The function to maximize

-> [Constraint Double]

The constraints, as pairs g <=> c, each representing an equation of the form g(x, y, ...) = c

-> Int

The arity of the objective function, which must equal the arity of each constraint.

-> Either (Result, Statistics) (Vector Double, Vector Double)

Either an explanation of why the gradient descent failed, or a pair containing the arguments at the maximum and the Lagrange multipliers

Finding the maximum is the same as finding the minimum of the negated objective function.
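Concretely, the maximize example at the top of this page can be reproduced by minimizing the negated objective. This is a sketch; note that the sign of the multiplier flips along with the objective, and the exact printed output may differ.

>>> minimize 0.00001 (\[x, y] -> negate (x + y)) [(\[x, y] -> x^2 + y^2) <=> 1] 2
Right ([0.707,0.707], [0.707])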

minimize

Arguments

:: Double

The numerical tolerance used by the underlying conjugate gradient solver to decide convergence
-> (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double)

The function to minimize

-> [Constraint Double]

The constraints, as pairs g <=> c, each representing an equation of the form g(x, y, ...) = c

-> Int

The arity of the objective function, which must equal the arity of each constraint.

-> Either (Result, Statistics) (Vector Double, Vector Double)

Either an explanation of why the gradient descent failed, or a pair containing the arguments at the minimum and the Lagrange multipliers

This is the Lagrange multiplier solver. It assumes that the objective function and all of the constraints take the same number of arguments.
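A minimal end-to-end sketch that handles both sides of the Either. It assumes the Result type from the underlying solver has a Show instance; the printed vector format may differ from the simplified output shown in the examples.

import Numeric.AD.Lagrangian

main :: IO ()
main =
  -- Sketch only: minimize x^2 + y^2 subject to x + y = 2;
  -- the analytic answer is (1, 1).
  case minimize 0.00001 (\[x, y] -> x^2 + y^2) [(\[x, y] -> x + y) <=> 2] 2 of
    Left (result, _statistics) ->
      -- Assumes Result has a Show instance.
      putStrLn ("Gradient descent failed: " ++ show result)
    Right (arguments, multipliers) ->
      print (arguments, multipliers)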

Experimental features

feasible :: (forall s r. (Mode s, Mode r) => [AD2 s r Double] -> AD2 s r Double) -> [Constraint Double] -> [Double] -> Bool

WARNING: Experimental. This is not a true feasibility test for the function (it is unclear how to implement one); it only checks feasibility at a single point. If this ever returns False, the solver can fail.
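A sketch of how the point check might be used as a standalone sanity check. checkStart is an illustrative name, not part of the library; the objective, constraint, and candidate point are taken from the example at the top of this page.

-- checkStart is an illustrative helper, not part of the library.
checkStart :: IO ()
checkStart =
  -- Sanity-check a candidate point against the constraints.
  if feasible (\[x, y] -> x + y) [(\[x, y] -> x^2 + y^2) <=> 1] [0.7, 0.7]
    then putStrLn "point passes the pointwise feasibility check"
    else putStrLn "point fails the check; the solver can fail in this situation"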