Copyright  (c) Matthew Peddie 2017
License  BSD3
Maintainer  Matthew Peddie <mpeddie@gmail.com>
Stability  provisional
Portability  GHC
Safe Haskell  Safe-Inferred
Language  Haskell2010
This module provides a high-level, hmatrix-compatible interface to the NLOPT library by Steven G. Johnson.
Documentation
Most non-numerical details are documented, but for specific information on what the optimization methods do, how constraints are handled, etc., you should consult the NLOPT manual.
Example program
The following interactive session example uses the Nelder-Mead simplex algorithm, a derivative-free local optimizer, to minimize a trivial function with a minimum of 22.0 at (0, 0).
>>> import Numeric.LinearAlgebra ( dot, fromList )
>>> let objf x = x `dot` x + 22 -- define objective
>>> let stop = ObjectiveRelativeTolerance 1e-6 :| [] -- define stopping criterion
>>> let algorithm = NELDERMEAD objf [] Nothing -- specify algorithm
>>> let problem = LocalProblem 2 stop algorithm -- specify problem
>>> let x0 = fromList [5, 10] -- specify initial guess
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 22.0, solutionParams = [0.0,0.0], solutionResult = FTOL_REACHED})
Synopsis
type Objective = Vector Double -> Double
type ObjectiveD = Vector Double -> (Double, Vector Double)
type Preconditioner = Vector Double -> Vector Double -> Vector Double
data Bounds
  = LowerBounds (Vector Double)
  | UpperBounds (Vector Double)
type ScalarConstraint = Vector Double -> Double
type ScalarConstraintD = Vector Double -> (Double, Vector Double)
type VectorConstraint = Vector Double -> Word -> Vector Double
type VectorConstraintD = Vector Double -> Word -> (Vector Double, Matrix Double)
data Constraint s v
  = Scalar s
  | Vector Word v
  | Preconditioned Preconditioner s
data EqualityConstraint s v = EqualityConstraint {}
data InequalityConstraint s v = InequalityConstraint {}
type EqualityConstraints = [EqualityConstraint ScalarConstraint VectorConstraint]
type EqualityConstraintsD = [EqualityConstraint ScalarConstraintD VectorConstraintD]
type InequalityConstraints = [InequalityConstraint ScalarConstraint VectorConstraint]
type InequalityConstraintsD = [InequalityConstraint ScalarConstraintD VectorConstraintD]
data StoppingCondition
data NonEmpty a = a :| [a]
data RandomSeed
newtype Population = Population Word
newtype VectorStorage = VectorStorage Word
newtype InitialStep = InitialStep (Vector Double)
data LocalAlgorithm
  = LBFGS_NOCEDAL ObjectiveD (Maybe VectorStorage)
  | LBFGS ObjectiveD (Maybe VectorStorage)
  | VAR2 ObjectiveD (Maybe VectorStorage)
  | VAR1 ObjectiveD (Maybe VectorStorage)
  | TNEWTON ObjectiveD (Maybe VectorStorage)
  | TNEWTON_RESTART ObjectiveD (Maybe VectorStorage)
  | TNEWTON_PRECOND ObjectiveD (Maybe VectorStorage)
  | TNEWTON_PRECOND_RESTART ObjectiveD (Maybe VectorStorage)
  | MMA ObjectiveD InequalityConstraintsD
  | SLSQP ObjectiveD [Bounds] InequalityConstraintsD EqualityConstraintsD
  | CCSAQ ObjectiveD Preconditioner
  | PRAXIS Objective [Bounds] (Maybe InitialStep)
  | COBYLA Objective [Bounds] InequalityConstraints EqualityConstraints (Maybe InitialStep)
  | NEWUOA Objective (Maybe InitialStep)
  | NEWUOA_BOUND Objective [Bounds] (Maybe InitialStep)
  | NELDERMEAD Objective [Bounds] (Maybe InitialStep)
  | SBPLX Objective [Bounds] (Maybe InitialStep)
  | BOBYQA Objective [Bounds] (Maybe InitialStep)
data LocalProblem = LocalProblem {}
minimizeLocal :: LocalProblem -> Vector Double -> Either Result Solution
data GlobalAlgorithm
  = DIRECT Objective
  | DIRECT_L Objective
  | DIRECT_L_RAND Objective RandomSeed
  | DIRECT_NOSCAL Objective
  | DIRECT_L_NOSCAL Objective
  | DIRECT_L_RAND_NOSCAL Objective RandomSeed
  | ORIG_DIRECT Objective InequalityConstraints
  | ORIG_DIRECT_L Objective InequalityConstraints
  | STOGO ObjectiveD
  | STOGO_RAND ObjectiveD RandomSeed
  | CRS2_LM Objective RandomSeed (Maybe Population)
  | ISRES Objective InequalityConstraints EqualityConstraints RandomSeed (Maybe Population)
  | ESCH Objective
  | MLSL Objective LocalProblem (Maybe Population)
  | MLSL_LDS Objective LocalProblem (Maybe Population)
data GlobalProblem = GlobalProblem {}
minimizeGlobal :: GlobalProblem -> Vector Double -> Either Result Solution
data AugLagAlgorithm
data AugLagProblem = AugLagProblem {}
minimizeAugLag :: AugLagProblem -> Vector Double -> Either Result Solution
data Solution = Solution {}
data Result
Specifying the objective function
type Objective Source #
An objective function that calculates the objective value at the given parameter vector.
type ObjectiveD Source #
An objective function that calculates both the objective value and the gradient of the objective with respect to the input parameter vector, at the given parameter vector.
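As a concrete sketch, the quadratic objective used in the examples in this module can be paired with its analytic gradient 2x (using `dot` and `scale` from hmatrix's Numeric.LinearAlgebra):

```haskell
import Numeric.LinearAlgebra ( Vector, dot, scale )

-- f(x) = x . x + 22, with gradient f'(x) = 2 x.
objfD :: Vector Double -> (Double, Vector Double)
objfD x = (x `dot` x + 22, 2 `scale` x)
```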
type Preconditioner Source #
= Vector Double  Parameter vector
-> Vector Double  Vector
-> Vector Double  Preconditioned vector
A preconditioner function, which computes vpre = H(x) v, where H is the Hessian matrix: the positive semi-definite second derivative at the given parameter vector x, or an approximation thereof.
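For the quadratic objective f(x) = x . x + 22 used in this module's examples, the Hessian is the constant matrix 2 I, so an exact preconditioner simply doubles the input vector. This is an illustrative sketch, not a function provided by the library:

```haskell
import Numeric.LinearAlgebra ( Vector, scale )

-- H(x) = 2 I for f(x) = x . x + 22, so H(x) v = 2 v
-- regardless of the parameter vector x.
precondf :: Vector Double -> Vector Double -> Vector Double
precondf _x v = 2 `scale` v
```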
Specifying the constraints
Bound constraints
Bound constraints are specified by vectors of the same dimension as the parameter space.
Example program
The following interactive session example enforces lower bounds on the example from the beginning of the module. This prevents the optimizer from locating the true minimum at (0, 0); a slightly higher constrained minimum at (1, 1) is found. Note that the optimizer returns XTOL_REACHED rather than FTOL_REACHED, because the bound constraint is active at the final minimum.
>>> import Numeric.LinearAlgebra ( dot, fromList )
>>> let objf x = x `dot` x + 22 -- define objective
>>> let stop = ObjectiveRelativeTolerance 1e-6 :| [] -- define stopping criterion
>>> let lowerbound = LowerBounds $ fromList [1, 1] -- specify bounds
>>> let algorithm = NELDERMEAD objf [lowerbound] Nothing -- specify algorithm
>>> let problem = LocalProblem 2 stop algorithm -- specify problem
>>> let x0 = fromList [5, 10] -- specify initial guess
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 24.0, solutionParams = [1.0,1.0], solutionResult = XTOL_REACHED})
LowerBounds (Vector Double)  Lower bound vector 
UpperBounds (Vector Double)  Upper bound vector 
Nonlinear constraints
Note that most NLOPT algorithms do not support nonlinear constraints natively; if you need to enforce nonlinear constraints, you may want to use the AugLagAlgorithm family of solvers, which can add nonlinear constraints to some algorithm that does not support them by a principled modification of the objective function.
Example program
The following interactive session example enforces a scalar constraint on the problem given in the beginning of the module: the parameters must always sum to 1. The minimizer finds a constrained minimum of 22.5 at (0.5, 0.5).
>>> import Numeric.LinearAlgebra ( dot, fromList, toList )
>>> let objf x = x `dot` x + 22
>>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
>>> -- define constraint function:
>>> let constraintf x = sum (toList x) - 1.0
>>> -- define constraint object to pass to the algorithm:
>>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
>>> let algorithm = COBYLA objf [] [] [constraint] Nothing
>>> let problem = LocalProblem 2 stop algorithm
>>> let x0 = fromList [5, 10]
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 22.500000000013028, solutionParams = [0.5000025521533521,0.49999744784664796], solutionResult = FTOL_REACHED})
Constraint functions
type ScalarConstraint Source #
A constraint function which returns c(x) given the parameter vector x. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).
type ScalarConstraintD Source #
= Vector Double  Parameter vector
-> (Double, Vector Double)  (Constraint violation, constraint gradient)
A constraint function which returns c(x) given the parameter vector x along with the gradient of c(x) with respect to x at that point. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).
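For instance, the sum-to-one constraint c(x) = x1 + x2 - 1 used in the examples in this module has the constant gradient [1, 1]:

```haskell
import Numeric.LinearAlgebra ( Vector, fromList, toList )

-- c(x) = sum x - 1, with gradient [1, 1] everywhere.
constraintfD :: Vector Double -> (Double, Vector Double)
constraintfD x = (sum (toList x) - 1.0, fromList [1, 1])
```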
type VectorConstraint Source #
= Vector Double  Parameter vector
-> Word  Constraint vector size
-> Vector Double  Constraint violation vector
A constraint function which returns a vector c(x) given the parameter vector x. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).
type VectorConstraintD Source #
= Vector Double  Parameter vector
-> Word  Constraint vector size
-> (Vector Double, Matrix Double)  (Constraint violation vector, constraint Jacobian)
A constraint function which returns c(x) given the parameter vector x along with the Jacobian (first derivative) matrix of c(x) with respect to x at that point. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).
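A vector constraint packs several scalar constraints into one function. As a hypothetical sketch for a two-dimensional parameter vector, the pair of constraints c1(x) = x1 + x2 - 1 and c2(x) = x1 - x2 can be returned together with their 2x2 Jacobian:

```haskell
import Numeric.LinearAlgebra ( Vector, Matrix, fromList, toList, fromLists )

-- c(x) = [x1 + x2 - 1, x1 - x2]; each row of the Jacobian is the
-- gradient of the corresponding constraint component.
vconstraintfD :: Vector Double -> Word -> (Vector Double, Matrix Double)
vconstraintfD x _size =
  let [x1, x2] = toList x
  in ( fromList [x1 + x2 - 1, x1 - x2]
     , fromLists [ [1,  1]
                 , [1, -1] ] )
```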
Constraint types
data Constraint s v Source #
Scalar s  A scalar constraint. 
Vector Word v  A vector constraint. 
Preconditioned Preconditioner s  A scalar constraint with an attached preconditioning function. 
data EqualityConstraint s v Source #
An equality constraint, comprising the constraint function (or functions, if a preconditioner is used) together with the desired tolerance.
data InequalityConstraint s v Source #
An inequality constraint, comprising the constraint function (or functions, if a preconditioner is used) together with the desired tolerance.
Collections of constraints
type EqualityConstraints = [EqualityConstraint ScalarConstraint VectorConstraint] Source #
A collection of equality constraints that do not supply constraint derivatives.
type EqualityConstraintsD = [EqualityConstraint ScalarConstraintD VectorConstraintD] Source #
A collection of equality constraints that supply constraint derivatives.
type InequalityConstraints = [InequalityConstraint ScalarConstraint VectorConstraint] Source #
A collection of inequality constraints that do not supply constraint derivatives.
type InequalityConstraintsD = [InequalityConstraint ScalarConstraintD VectorConstraintD] Source #
A collection of inequality constraints that supply constraint derivatives.
Stopping conditions
The NonEmpty data type from Data.List.NonEmpty is re-exported here, because it is used to ensure that you always specify at least one stopping condition.
data StoppingCondition Source #
A StoppingCondition tells NLOPT when to stop working on a minimization problem. When multiple StoppingConditions are provided, the problem will stop when any one condition is met.
MinimumValue Double  Stop minimizing when an objective value less than or equal to the provided value is found.
ObjectiveRelativeTolerance Double  Stop minimizing when an optimization step changes the objective value by less than the provided tolerance multiplied by the objective value.
ObjectiveAbsoluteTolerance Double  Stop minimizing when an optimization step changes the objective value by less than the provided tolerance.
ParameterRelativeTolerance Double  Stop when an optimization step changes every element of the parameter vector by less than the provided tolerance multiplied by that element's value.
ParameterAbsoluteTolerance (Vector Double)  Stop when an optimization step changes every element of the parameter vector by less than the corresponding element of the provided tolerance vector.
MaximumEvaluations Word  Stop when the number of evaluations of the objective function exceeds the provided count.
MaximumTime Double  Stop when the optimization time exceeds the provided time (in seconds). This is not a precise limit.
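Because the conditions live in a NonEmpty list, at least one must always be supplied; combining several is just a matter of listing them with `:|` (the values below are illustrative):

```haskell
import Numeric.NLOPT  -- this module; NonEmpty is re-exported here

-- Stop on a 1e-6 relative change in the objective, after 10000
-- objective evaluations, or after 5 seconds, whichever comes first.
stop :: NonEmpty StoppingCondition
stop = ObjectiveRelativeTolerance 1e-6
         :| [MaximumEvaluations 10000, MaximumTime 5]
```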
Instances
Read StoppingCondition Source #  
Defined in Numeric.NLOPT  
Show StoppingCondition Source #  
Defined in Numeric.NLOPT showsPrec :: Int -> StoppingCondition -> ShowS # show :: StoppingCondition -> String # showList :: [StoppingCondition] -> ShowS #
Eq StoppingCondition Source #  
Defined in Numeric.NLOPT (==) :: StoppingCondition -> StoppingCondition -> Bool # (/=) :: StoppingCondition -> StoppingCondition -> Bool #
Non-empty (and non-strict) list type.
Since: base-4.9.0.0
a :| [a] infixr 5
Instances
Foldable NonEmpty  Since: base-4.9.0.0
Defined in Data.Foldable fold :: Monoid m => NonEmpty m -> m # foldMap :: Monoid m => (a -> m) -> NonEmpty a -> m # foldMap' :: Monoid m => (a -> m) -> NonEmpty a -> m # foldr :: (a -> b -> b) -> b -> NonEmpty a -> b # foldr' :: (a -> b -> b) -> b -> NonEmpty a -> b # foldl :: (b -> a -> b) -> b -> NonEmpty a -> b # foldl' :: (b -> a -> b) -> b -> NonEmpty a -> b # foldr1 :: (a -> a -> a) -> NonEmpty a -> a # foldl1 :: (a -> a -> a) -> NonEmpty a -> a # elem :: Eq a => a -> NonEmpty a -> Bool # maximum :: Ord a => NonEmpty a -> a # minimum :: Ord a => NonEmpty a -> a #
Traversable NonEmpty  Since: base-4.9.0.0
Applicative NonEmpty  Since: base-4.9.0.0
Functor NonEmpty  Since: base-4.9.0.0
Monad NonEmpty  Since: base-4.9.0.0
Semigroup (NonEmpty a)  Since: base-4.9.0.0
Generic (NonEmpty a)
Read a => Read (NonEmpty a)  Since: base-4.11.0.0
Show a => Show (NonEmpty a)  Since: base-4.11.0.0
Eq a => Eq (NonEmpty a)  Since: base-4.9.0.0
Ord a => Ord (NonEmpty a)  Since: base-4.9.0.0
Generic1 NonEmpty  
type Rep (NonEmpty a)  Since: base-4.6.0.0
Defined in GHC.Generics type Rep (NonEmpty a) = D1 ('MetaData "NonEmpty" "GHC.Base" "base" 'False) (C1 ('MetaCons ":|" ('InfixI 'LeftAssociative 9) 'False) (S1 ('MetaSel ('Nothing :: Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 a) :*: S1 ('MetaSel ('Nothing :: Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec0 [a])))
type Rep1 NonEmpty  Since: base-4.6.0.0
Defined in GHC.Generics type Rep1 NonEmpty = D1 ('MetaData "NonEmpty" "GHC.Base" "base" 'False) (C1 ('MetaCons ":|" ('InfixI 'LeftAssociative 9) 'False) (S1 ('MetaSel ('Nothing :: Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) Par1 :*: S1 ('MetaSel ('Nothing :: Maybe Symbol) 'NoSourceUnpackedness 'NoSourceStrictness 'DecidedLazy) (Rec1 [])))
Additional configuration
data RandomSeed Source #
This specifies how to initialize the random number generator for stochastic algorithms.
SeedValue Word  Seed the RNG with the provided value. 
SeedFromTime  Seed the RNG using the system clock. 
Don'tSeed  Don't perform any explicit initialization of the RNG. 
Instances
Read RandomSeed Source #
Defined in Numeric.NLOPT readsPrec :: Int -> ReadS RandomSeed # readList :: ReadS [RandomSeed] # readPrec :: ReadPrec RandomSeed # readListPrec :: ReadPrec [RandomSeed] #
Show RandomSeed Source #
Defined in Numeric.NLOPT showsPrec :: Int -> RandomSeed -> ShowS # show :: RandomSeed -> String # showList :: [RandomSeed] -> ShowS #
Eq RandomSeed Source #
Defined in Numeric.NLOPT (==) :: RandomSeed -> RandomSeed -> Bool # (/=) :: RandomSeed -> RandomSeed -> Bool #
newtype Population Source #
This specifies the population size for algorithms that use a pool of solutions.
Instances
Read Population Source #
Defined in Numeric.NLOPT readsPrec :: Int -> ReadS Population # readList :: ReadS [Population] # readPrec :: ReadPrec Population # readListPrec :: ReadPrec [Population] #
Show Population Source #
Defined in Numeric.NLOPT showsPrec :: Int -> Population -> ShowS # show :: Population -> String # showList :: [Population] -> ShowS #
Eq Population Source #
Defined in Numeric.NLOPT (==) :: Population -> Population -> Bool # (/=) :: Population -> Population -> Bool #
newtype VectorStorage Source #
This specifies the memory size to be used by algorithms like LBFGS which store approximate Hessian or Jacobian matrices.
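For example, a sketch of asking LBFGS to build its approximation from the ten most recent steps (an illustrative value; `Nothing` would select NLOPT's default heuristic):

```haskell
import Numeric.LinearAlgebra ( Vector, dot, scale )
import Numeric.NLOPT  -- this module

-- Keep a 10-step history for the Hessian approximation.
algorithm :: LocalAlgorithm
algorithm = LBFGS objfD (Just (VectorStorage 10))
  where
    -- quadratic objective with gradient, as in the module examples
    objfD :: Vector Double -> (Double, Vector Double)
    objfD x = (x `dot` x + 22, 2 `scale` x)
```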
Instances
Read VectorStorage Source #
Defined in Numeric.NLOPT readsPrec :: Int -> ReadS VectorStorage # readList :: ReadS [VectorStorage] # readPrec :: ReadPrec VectorStorage # readListPrec :: ReadPrec [VectorStorage] #
Show VectorStorage Source #
Defined in Numeric.NLOPT showsPrec :: Int -> VectorStorage -> ShowS # show :: VectorStorage -> String # showList :: [VectorStorage] -> ShowS #
Eq VectorStorage Source #
Defined in Numeric.NLOPT (==) :: VectorStorage -> VectorStorage -> Bool # (/=) :: VectorStorage -> VectorStorage -> Bool #
newtype InitialStep Source #
This vector with the same dimension as the parameter vector x specifies the initial step for the optimizer to take. (This applies to local gradient-free algorithms, which cannot use gradients to estimate how big a step to take.)
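For instance, a sketch of starting the derivative-free Nelder-Mead example from this module with a first step of 0.1 in each coordinate (an illustrative step size):

```haskell
import Numeric.LinearAlgebra ( Vector, dot, fromList )
import Numeric.NLOPT  -- this module

-- Ask Nelder-Mead to take a first step of 0.1 in each coordinate.
algorithm :: LocalAlgorithm
algorithm = NELDERMEAD objf [] (Just (InitialStep (fromList [0.1, 0.1])))
  where
    objf :: Vector Double -> Double
    objf x = x `dot` x + 22  -- quadratic example objective
```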
Instances
Read InitialStep Source #
Defined in Numeric.NLOPT readsPrec :: Int -> ReadS InitialStep # readList :: ReadS [InitialStep] # readPrec :: ReadPrec InitialStep # readListPrec :: ReadPrec [InitialStep] #
Show InitialStep Source #
Defined in Numeric.NLOPT showsPrec :: Int -> InitialStep -> ShowS # show :: InitialStep -> String # showList :: [InitialStep] -> ShowS #
Eq InitialStep Source #
Defined in Numeric.NLOPT (==) :: InitialStep -> InitialStep -> Bool # (/=) :: InitialStep -> InitialStep -> Bool #
Minimization problems
Local minimization
data LocalAlgorithm Source #
These are the local minimization algorithms provided by NLOPT. Please see the NLOPT algorithm manual for more details on how the methods work and how they relate to one another. Note that some local methods require you to provide derivatives (gradients or Jacobians) for your objective function and constraint functions.
Optional parameters are wrapped in a Maybe; for example, if you see Maybe VectorStorage, you can simply specify Nothing to use the default behavior.
LBFGS_NOCEDAL ObjectiveD (Maybe VectorStorage)  Limited-memory BFGS
LBFGS ObjectiveD (Maybe VectorStorage)  Limited-memory BFGS
VAR2 ObjectiveD (Maybe VectorStorage)  Shifted limited-memory variable-metric, rank-2
VAR1 ObjectiveD (Maybe VectorStorage)  Shifted limited-memory variable-metric, rank-1
TNEWTON ObjectiveD (Maybe VectorStorage)  Truncated Newton's method
TNEWTON_RESTART ObjectiveD (Maybe VectorStorage)  Truncated Newton's method with automatic restarting
TNEWTON_PRECOND ObjectiveD (Maybe VectorStorage)  Preconditioned truncated Newton's method
TNEWTON_PRECOND_RESTART ObjectiveD (Maybe VectorStorage)  Preconditioned truncated Newton's method with automatic restarting
MMA ObjectiveD InequalityConstraintsD  Method of Moving Asymptotes
SLSQP ObjectiveD [Bounds] InequalityConstraintsD EqualityConstraintsD  Sequential Least-Squares Quadratic Programming
CCSAQ ObjectiveD Preconditioner  Conservative Convex Separable Approximation
PRAXIS Objective [Bounds] (Maybe InitialStep)  PRincipal AXIS gradient-free local optimization
COBYLA Objective [Bounds] InequalityConstraints EqualityConstraints (Maybe InitialStep)  Constrained Optimization BY Linear Approximations
NEWUOA Objective (Maybe InitialStep)  Powell's NEWUOA algorithm
NEWUOA_BOUND Objective [Bounds] (Maybe InitialStep)  Powell's NEWUOA algorithm with bounds by SGJ
NELDERMEAD Objective [Bounds] (Maybe InitialStep)  Nelder-Mead simplex gradient-free method
SBPLX Objective [Bounds] (Maybe InitialStep)  NLOPT implementation of Rowan's Subplex algorithm
BOBYQA Objective [Bounds] (Maybe InitialStep)  Bounded Optimization BY Quadratic Approximations
data LocalProblem Source #
LocalProblem  

minimizeLocal :: LocalProblem -> Vector Double -> Either Result Solution Source #
Solve the specified local optimization problem.
Example program
The following interactive session example enforces the same scalar constraint as the nonlinear constraint example, but this time it uses the SLSQP solver to find the minimum.
>>> import Numeric.LinearAlgebra ( dot, fromList, toList, scale )
>>> let objf x = (x `dot` x + 22, 2 `scale` x)
>>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
>>> let constraintf x = (sum (toList x) - 1.0, fromList [1, 1])
>>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
>>> let algorithm = SLSQP objf [] [] [constraint]
>>> let problem = LocalProblem 2 stop algorithm
>>> let x0 = fromList [5, 10]
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 22.5, solutionParams = [0.4999999999999998,0.5000000000000002], solutionResult = FTOL_REACHED})
Global minimization
data GlobalAlgorithm Source #
These are the global minimization algorithms provided by NLOPT. Please see the NLOPT algorithm manual for more details on how the methods work and how they relate to one another.
Optional parameters are wrapped in a Maybe; for example, if you see Maybe Population, you can simply specify Nothing to use the default behavior.
DIRECT Objective  DIviding RECTangles
DIRECT_L Objective  DIviding RECTangles, locally-biased variant
DIRECT_L_RAND Objective RandomSeed  DIviding RECTangles, "slightly randomized"
DIRECT_NOSCAL Objective  DIviding RECTangles, unscaled version
DIRECT_L_NOSCAL Objective  DIviding RECTangles, locally-biased and unscaled
DIRECT_L_RAND_NOSCAL Objective RandomSeed  DIviding RECTangles, locally-biased, unscaled and "slightly randomized"
ORIG_DIRECT Objective InequalityConstraints  DIviding RECTangles, original FORTRAN implementation
ORIG_DIRECT_L Objective InequalityConstraints  DIviding RECTangles, locally-biased, original FORTRAN implementation
STOGO ObjectiveD  Stochastic Global Optimization. This algorithm is only available if you have linked with a build of NLOPT that includes the C++ algorithms.
STOGO_RAND ObjectiveD RandomSeed  Stochastic Global Optimization, randomized variant. This algorithm is only available if you have linked with a build of NLOPT that includes the C++ algorithms.
CRS2_LM Objective RandomSeed (Maybe Population)  Controlled Random Search with Local Mutation 
ISRES Objective InequalityConstraints EqualityConstraints RandomSeed (Maybe Population)  Improved Stochastic Ranking Evolution Strategy 
ESCH Objective  Evolutionary Algorithm 
MLSL Objective LocalProblem (Maybe Population)  Original Multi-Level Single-Linkage
MLSL_LDS Objective LocalProblem (Maybe Population)  Multi-Level Single-Linkage with a Sobol Low-Discrepancy Sequence for starting points
data GlobalProblem Source #
GlobalProblem  

minimizeGlobal Source #
:: GlobalProblem  Problem specification
-> Vector Double  Initial parameter guess
-> Either Result Solution  Optimization results
Solve the specified global optimization problem.
Example program
The following interactive session example uses the ISRES algorithm, a stochastic, derivative-free global optimizer, to minimize a trivial function with a minimum of 22.0 at (0, 0). The search is conducted within a box from -10 to 10 in each dimension.
>>> import Numeric.LinearAlgebra ( dot, fromList )
>>> let objf x = x `dot` x + 22 -- define objective
>>> let stop = ObjectiveRelativeTolerance 1e-12 :| [] -- define stopping criterion
>>> let algorithm = ISRES objf [] [] (SeedValue 22) Nothing -- specify algorithm
>>> let lowerbounds = fromList [-10, -10] -- specify bounds
>>> let upperbounds = fromList [10, 10] -- specify bounds
>>> let problem = GlobalProblem lowerbounds upperbounds stop algorithm
>>> let x0 = fromList [5, 8] -- specify initial guess
>>> minimizeGlobal problem x0
Right (Solution {solutionCost = 22.000000000002807, solutionParams = [1.660591102367038e-6,2.2407062393213684e-7], solutionResult = FTOL_REACHED})
Minimization by augmented Lagrangian
data AugLagAlgorithm Source #
The augmented Lagrangian solvers allow you to enforce nonlinear constraints while using local or global algorithms that don't natively support them. The subsidiary problem is used to do the minimization, but the AUGLAG methods modify the objective to enforce the constraints. Please see the NLOPT algorithm manual for more details on how the methods work and how they relate to one another.
See the documentation for AugLagProblem for an important note about the constraint functions.
AUGLAG_LOCAL LocalProblem InequalityConstraints InequalityConstraintsD  AUGmented LAGrangian with a local subsidiary method 
AUGLAG_EQ_LOCAL LocalProblem  AUGmented LAGrangian with a local subsidiary method and with penalty functions only for equality constraints 
AUGLAG_GLOBAL GlobalProblem InequalityConstraints InequalityConstraintsD  AUGmented LAGrangian with a global subsidiary method 
AUGLAG_EQ_GLOBAL GlobalProblem  AUGmented LAGrangian with a global subsidiary method and with penalty functions only for equality constraints. 
data AugLagProblem Source #
IMPORTANT NOTE
For augmented Lagrangian problems, you, the user, are responsible for providing the appropriate type of constraint. If the subsidiary problem requires an ObjectiveD, then you should provide constraint functions with derivatives. If the subsidiary problem requires an Objective, you should provide constraint functions without derivatives. If you don't do this, you may get a runtime error.
AugLagProblem  

minimizeAugLag :: AugLagProblem -> Vector Double -> Either Result Solution Source #
Example program
The following interactive session example enforces the same scalar constraint as the nonlinear constraint example, but this time it uses the augmented Lagrangian method to enforce the constraint and the SBPLX algorithm, which does not support nonlinear constraints itself, to perform the minimization. As before, the parameters must always sum to 1, and the minimizer finds the same constrained minimum of 22.5 at (0.5, 0.5).
>>> import Numeric.LinearAlgebra ( dot, fromList, toList )
>>> let objf x = x `dot` x + 22
>>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
>>> let algorithm = SBPLX objf [] Nothing
>>> let subproblem = LocalProblem 2 stop algorithm
>>> let x0 = fromList [5, 10]
>>> minimizeLocal subproblem x0
Right (Solution {solutionCost = 22.0, solutionParams = [0.0,0.0], solutionResult = FTOL_REACHED})
>>> -- define constraint function:
>>> let constraintf x = sum (toList x) - 1.0
>>> -- define constraint object to pass to the algorithm:
>>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
>>> let problem = AugLagProblem [constraint] [] (AUGLAG_EQ_LOCAL subproblem)
>>> minimizeAugLag problem x0
Right (Solution {solutionCost = 22.500000015505844, solutionParams = [0.5000880506776678,0.4999119493223323], solutionResult = FTOL_REACHED})
Results
data Solution Source #
This structure is returned in the event of a successful optimization.
Solution  

data Result Source #
Mostly self-explanatory.
FAILURE  Generic failure code 
INVALID_ARGS  
OUT_OF_MEMORY  
ROUNDOFF_LIMITED  
FORCED_STOP  
SUCCESS  Generic success code 
STOPVAL_REACHED  
FTOL_REACHED  
XTOL_REACHED  
MAXEVAL_REACHED  
MAXTIME_REACHED 