nlopt-haskell-0.1.1.0: Low-level bindings to the NLOPT optimization library

Copyright	(c) Matthew Peddie 2017
License	BSD3
Maintainer	Matthew Peddie <mpeddie@gmail.com>
Stability	provisional
Portability	GHC
Safe Haskell	None
Language	Haskell2010

Numeric.Optimization.NLOPT.Bindings

Contents

Description

Low-level interface to the NLOPT library. Please see the NLOPT reference manual for detailed information; the Haskell functions in this module closely follow the interface to the C library in nlopt.h.

Differences between this module and the C interface are documented here; functions whose interfaces match the C library exactly are not individually documented. In general:

Opt
corresponds to an nlopt_opt object
Result
corresponds to nlopt_result
Vector Double
corresponds to a const double * input or a double * output
ScalarFunction
corresponds to nlopt_func
VectorFunction
corresponds to nlopt_mfunc
PreconditionerFunction
corresponds to nlopt_precond

User data that is handled by void * in the C bindings can be any Haskell value.

Synopsis

C enums

data Algorithm Source #

Apart from the name of the underlying optimization method, each NLOPT algorithm name encodes the method's properties with the following prefixes and suffixes:

G
means a global method
L
means a local method
D
means a method that requires the derivative
N
means a method that does not require the derivative
*_RAND
means the algorithm involves some randomization.
*_NOSCAL
means the algorithm is *not* scaled to a unit hypercube (i.e. it is sensitive to the units of x)

Constructors

GN_DIRECT

DIviding RECTangles

GN_DIRECT_L

DIviding RECTangles, locally-biased variant

GN_DIRECT_L_RAND

DIviding RECTangles, "slightly randomized"

GN_DIRECT_NOSCAL

DIviding RECTangles, unscaled version

GN_DIRECT_L_NOSCAL

DIviding RECTangles, locally-biased and unscaled

GN_DIRECT_L_RAND_NOSCAL

DIviding RECTangles, locally-biased, unscaled and "slightly randomized"

GN_ORIG_DIRECT

DIviding RECTangles, original FORTRAN implementation

GN_ORIG_DIRECT_L

DIviding RECTangles, locally-biased, original FORTRAN implementation

GD_STOGO

Stochastic Global Optimization

GD_STOGO_RAND

Stochastic Global Optimization, randomized variant

LD_LBFGS_NOCEDAL

Limited-memory BFGS, original implementation by Nocedal et al.

LD_LBFGS

Limited-memory BFGS

LN_PRAXIS

PRincipal AXIS gradient-free local optimization

LD_VAR2

Shifted limited-memory variable-metric, rank-2

LD_VAR1

Shifted limited-memory variable-metric, rank-1

LD_TNEWTON

Truncated Newton's method

LD_TNEWTON_RESTART

Truncated Newton's method with automatic restarting

LD_TNEWTON_PRECOND

Preconditioned truncated Newton's method

LD_TNEWTON_PRECOND_RESTART

Preconditioned truncated Newton's method with automatic restarting

GN_CRS2_LM

Controlled Random Search with Local Mutation

GN_MLSL

Original Multi-Level Single-Linkage

GD_MLSL

Original Multi-Level Single-Linkage, user-provided derivative

GN_MLSL_LDS

Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points

GD_MLSL_LDS

Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points, user-provided derivative

LD_MMA

Method of Moving Asymptotes

LN_COBYLA

Constrained Optimization BY Linear Approximations

LN_NEWUOA

Powell's NEWUOA algorithm

LN_NEWUOA_BOUND

Powell's NEWUOA algorithm with bounds by SGJ

LN_NELDERMEAD

Nelder-Mead Simplex gradient-free method

LN_SBPLX

NLOPT implementation of Rowan's Subplex algorithm

LN_AUGLAG

AUGmented LAGrangian

LD_AUGLAG

AUGmented LAGrangian, user-provided derivative

LN_AUGLAG_EQ

AUGmented LAGrangian with penalty functions only for equality constraints

LD_AUGLAG_EQ

AUGmented LAGrangian with penalty functions only for equality constraints, user-provided derivative

LN_BOBYQA

Bounded Optimization BY Quadratic Approximations

GN_ISRES

Improved Stochastic Ranking Evolution Strategy

AUGLAG

AUGmented LAGrangian, requires local_optimizer to be set

AUGLAG_EQ

AUGmented LAGrangian with penalty functions only for equality constraints, requires local_optimizer to be set

G_MLSL

Original Multi-Level Single-Linkage, requires local_optimizer to be set

G_MLSL_LDS

Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points, requires local_optimizer to be set

LD_SLSQP

Sequential Least-SQuares Programming

LD_CCSAQ

Conservative Convex Separable Approximation with quadratic approximations

GN_ESCH

Evolutionary Algorithm

Optimizer object

data Opt Source #

An optimizer object, which must be created, configured, and then passed to optimize to solve a problem.

create Source #

Arguments

:: Algorithm

Choice of algorithm

-> Word

Parameter vector dimension

-> IO (Maybe Opt)

Optimizer object

Create a new Opt object; returns Nothing if the underlying C call fails to allocate an optimizer.

destroy :: Opt -> IO () Source #
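Since create may fail (returning Nothing) and every successfully created Opt should eventually be released with destroy, a bracket-style wrapper is a natural pattern. The helper below is not part of this module; it is a sketch built only from create and destroy:

```haskell
import Control.Exception (bracket)

import Numeric.Optimization.NLOPT.Bindings

-- A hypothetical helper (not part of this module): create an optimizer,
-- run an action with it, and guarantee that 'destroy' runs afterwards,
-- even if the action throws an exception.  Returns Nothing when
-- 'create' itself fails.
withOpt :: Algorithm -> Word -> (Opt -> IO a) -> IO (Maybe a)
withOpt alg dim body =
  bracket (create alg dim) (mapM_ destroy) (traverse body)
```

A call site would then read withOpt LD_LBFGS 2 $ \opt -> ... with the cleanup handled automatically.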

Random number generator seeding

srand :: Integral a => a -> IO () Source #

Metadata

Callbacks

type ScalarFunction a Source #

Arguments

 = Vector Double

Parameter vector

-> Maybe (IOVector Double)

Gradient vector to be filled in

-> a

User data

-> IO Double

Scalar result

This function type corresponds to nlopt_func in C and is used for scalar functions of the parameter vector. You may pass data of any type a to the functions in this module that take a ScalarFunction as an argument; this data will be supplied to your function when it is called.
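As a concrete sketch (assuming the Vector and IOVector types here are the storable variants from the vector package), a ScalarFunction for the sphere objective f(x) = sum of squares, which fills in the gradient 2 * x_i whenever a derivative-based algorithm supplies a gradient vector, might look like:

```haskell
import qualified Data.Vector.Storable as V
import qualified Data.Vector.Storable.Mutable as VM

import Numeric.Optimization.NLOPT.Bindings

-- f(x) = sum of x_i^2, with gradient df/dx_i = 2 * x_i.
-- The gradient vector is only present when the chosen algorithm
-- (an *D_* method) actually requests derivatives.
sphere :: ScalarFunction ()
sphere x mgrad _userData = do
  case mgrad of
    Nothing   -> pure ()
    Just grad -> V.imapM_ (\i xi -> VM.write grad i (2 * xi)) x
  pure (V.sum (V.map (\xi -> xi * xi) x))
```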

type VectorFunction a Source #

Arguments

 = Vector Double

Parameter vector

-> IOVector Double

Output vector to be filled in

-> Maybe (IOVector Double)

Gradient vector to be filled in

-> a

User data

-> IO () 

This function type corresponds to nlopt_mfunc in C and is used for vector functions of the parameter vector. You may pass data of any type a to the functions in this module that take a VectorFunction as an argument; this data will be supplied to your function when it is called.
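A sketch of a two-component VectorFunction, here for the vector constraint c(x) = (x_0 - 1, x_1 - 1). In the C API the gradient of an nlopt_mfunc, when present, is stored row-major as grad[k*n + i] = dc_k/dx_i; this sketch assumes the same layout applies to the flat IOVector here:

```haskell
import qualified Data.Vector.Storable as V
import qualified Data.Vector.Storable.Mutable as VM

import Numeric.Optimization.NLOPT.Bindings

-- c_k(x) = x_k - 1 for k = 0, 1, over an n = 2 parameter vector.
twoConstraints :: VectorFunction ()
twoConstraints x out mgrad _userData = do
  VM.write out 0 (x V.! 0 - 1)
  VM.write out 1 (x V.! 1 - 1)
  case mgrad of
    Nothing   -> pure ()
    Just grad -> do
      let n = V.length x
      VM.set grad 0                 -- zero the whole Jacobian first
      VM.write grad (0 * n + 0) 1   -- dc_0/dx_0
      VM.write grad (1 * n + 1) 1   -- dc_1/dx_1
```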

type PreconditionerFunction a Source #

Arguments

 = Vector Double

Parameter vector

-> Vector Double

Vector v to precondition

-> IOVector Double

Output vector vpre to be filled in

-> a

User data

-> IO () 

This function type corresponds to nlopt_precond in C and is used for functions that precondition a vector at a given point in the parameter space. You may pass data of any type a to the functions in this module that take a PreconditionerFunction as an argument; this data will be supplied to your function when it is called.
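In the C API, nlopt_precond computes vpre, an approximation of H(x) v for the objective's Hessian H at the point x. A sketch for the sphere objective f(x) = sum of squares, whose Hessian is the constant matrix 2*I:

```haskell
import qualified Data.Vector.Storable as V
import qualified Data.Vector.Storable.Mutable as VM

import Numeric.Optimization.NLOPT.Bindings

-- For f(x) = sum of x_i^2 the Hessian is 2*I, so the preconditioned
-- vector is simply vpre = 2*v, independent of the point x.
spherePrecond :: PreconditionerFunction ()
spherePrecond _x v vpre _userData =
  V.imapM_ (\i vi -> VM.write vpre i (2 * vi)) v
```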

Running the optimizer

data Output Source #

The output of an NLOPT optimizer run.

Constructors

Output 

Fields

optimize Source #

Arguments

:: Opt

Optimizer object set up to solve the problem

-> Vector Double

Initial-guess parameter vector

-> IO Output

Results of the optimization run

This function is very similar to the C function nlopt_optimize, but it does not use mutable vectors and returns an Output structure.
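A minimal end-to-end run might look like the sketch below. The objective- and stopping-criterion configuration functions are elided from this excerpt; set_min_objective and set_xtol_rel are assumed here to mirror the C functions nlopt_set_min_objective and nlopt_set_xtol_rel, and the sphere argument is assumed to be a ScalarFunction such as the sum-of-squares example above:

```haskell
import qualified Data.Vector.Storable as V

import Numeric.Optimization.NLOPT.Bindings

minimizeSphere :: ScalarFunction () -> IO ()
minimizeSphere sphere = do
  mopt <- create LD_LBFGS 2                  -- derivative-based, 2 parameters
  case mopt of
    Nothing  -> putStrLn "could not create optimizer"
    Just opt -> do
      _ <- set_min_objective opt sphere ()   -- assumed API, see lead-in
      _ <- set_xtol_rel opt 1e-8             -- assumed API, see lead-in
      _out <- optimize opt (V.fromList [3, 4])
      -- _out :: Output carries the final parameter vector, objective
      -- value and Result code of the run.
      destroy opt
```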

Objective function configuration

Bound configuration

Constraint configuration

Stopping criterion configuration

Algorithm-specific configuration

set_local_optimizer Source #

Arguments

:: Opt

Primary optimizer

-> Opt

Subsidiary (local) optimizer

-> IO Result
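For the wrapper algorithms (AUGLAG, AUGLAG_EQ, G_MLSL, G_MLSL_LDS and the MLSL variants), a subsidiary optimizer must be attached before calling optimize. A sketch pairing an AUGLAG wrapper with a gradient-free BOBYQA subsidiary optimizer:

```haskell
import Numeric.Optimization.NLOPT.Bindings

-- Attach a local BOBYQA optimizer to an AUGLAG wrapper.  Both
-- optimizers must be created with the same problem dimension.
auglagWithBobyqa :: Word -> IO (Maybe Opt)
auglagWithBobyqa dim = do
  mouter <- create AUGLAG dim
  minner <- create LN_BOBYQA dim
  case (mouter, minner) of
    (Just outer, Just inner) -> do
      _ <- set_local_optimizer outer inner
      -- nlopt_set_local_optimizer copies the subsidiary object, so
      -- 'inner' may be destroyed once it has been attached.
      destroy inner
      pure (Just outer)
    _ -> do
      -- Clean up whichever optimizer (if any) was created.
      mapM_ destroy mouter
      mapM_ destroy minner
      pure Nothing
```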