ad-4.3.2.1: Automatic Differentiation

Copyright     (c) Edward Kmett 2010-2015
License       BSD3
Maintainer    ekmett@gmail.com
Stability     experimental
Portability   GHC only
Safe Haskell  None
Language      Haskell2010

Numeric.AD.Newton

Newton's Method (Forward AD)

findZero :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #

The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Examples:

>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0

findZeroNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #

The findZeroNoEq function behaves the same as findZero except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
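
A hypothetical session (not one of the library's documented examples), reusing the function from the findZero example above; note that the converged value simply repeats instead of the stream ending:

>>> take 8 $ findZeroNoEq (\x -> x^2 - 4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0,2.0]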

inverse :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a] Source #

The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Example:

>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0

inverseNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a] Source #

The inverseNoEq function behaves the same as inverse except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
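
A hypothetical session mirroring the inverse example above; the stream is not truncated, so a finite prefix must be taken explicitly:

>>> last $ take 10 $ inverseNoEq sqrt 1 (sqrt 10)
10.0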

fixedPoint :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #

The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607

fixedPointNoEq :: Fractional a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a] Source #

The fixedPointNoEq function behaves the same as fixedPoint except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
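
A hypothetical session mirroring the fixedPoint example above (the untruncated stream is infinite, so take is required):

>>> last $ take 10 $ fixedPointNoEq cos 1
0.7390851332151607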

extremum :: (Fractional a, Eq a) => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a] Source #

The extremum function finds an extremum of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ extremum cos 1
0.0

extremumNoEq :: Fractional a => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a] Source #

The extremumNoEq function behaves the same as extremum except that it doesn't truncate the list once the results become constant. This means it can be used with types without an Eq instance.
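
A hypothetical session mirroring the extremum example above:

>>> last $ take 10 $ extremumNoEq cos 1
0.0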

Gradient Ascent/Descent (Reverse AD)

gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source #

The gradientDescent function performs multivariate minimization, based on the naive gradient descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

It uses reverse mode automatic differentiation to compute the gradient.
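
A hypothetical session (not one of the library's documented examples) minimizing a simple convex quadratic; the helper sumSqs, the iteration count, and the 0.1 tolerance are illustrative choices:

>>> let sumSqs [x, y] = x * x + y * y
>>> sumSqs (last (take 100 (gradientDescent sumSqs [1, 2]))) < 0.1
True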

constrainedDescent :: forall f a. (Traversable f, RealFloat a, Floating a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> [CC f a] -> f a -> [(a, f a)] Source #

constrainedDescent obj fs env optimizes the convex function obj subject to the convex constraints f <= 0 for each f in fs. This is done using a log barrier to model the constraints (see Boyd and Vandenberghe, Convex Optimization, Section 11.3). The returned optimal point satisfies the constraints fs, but the initial environment, env, needn't be feasible.

data CC f a where Source #

Convex constraint, CC, is a GADT wrapper that hides the existential (s) which is so prevalent in the rest of the API. This is an engineering convenience for managing the skolems.

Constructors

CC :: forall f a. (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> CC f a 
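
A hypothetical sketch (not from the library's documentation) showing how CC and constrainedDescent fit together: minimizing x + y subject to x >= 1 and y >= 2, each constraint rewritten into the required f <= 0 form. The binding names and the starting environment are illustrative choices.

import Numeric.AD.Newton (CC(..), constrainedDescent)

steps :: [(Double, [Double])]
steps = constrainedDescent
          (\[x, y] -> x + y)        -- convex objective to minimize
          [ CC (\[x, _] -> 1 - x)   -- x >= 1, encoded as 1 - x <= 0
          , CC (\[_, y] -> 2 - y)   -- y >= 2, encoded as 2 - y <= 0
          ]
          [5, 5]                    -- starting environment

Successive points (the second component of each returned pair) should approach the constrained optimum at [1, 2].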

eval :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> a Source #

gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a] Source #

Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.

conjugateGradientDescent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source #

Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient, and using forward-on-forward mode for computing extrema.

>>> let sq x = x * x
>>> let rosenbrock [x,y] = sq (1 - x) + 100 * sq (y - sq x)
>>> rosenbrock [0,0]
1
>>> rosenbrock (conjugateGradientDescent rosenbrock [0, 0] !! 5) < 0.1
True

conjugateGradientAscent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a] Source #

Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.

stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a] Source #

The stochasticGradientDescent function approximates the true gradient of the cost function by the gradient at a single training example. As the algorithm sweeps through the training set, it performs the update for each training example.

It uses reverse mode automatic differentiation to compute the gradient. The learning rate is constant throughout and is set to 0.001.
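
A hypothetical sketch (not from the library's documentation) fitting a line y = 2*x + 1 by stochastic gradient descent. Each training example must share the container shape of the parameter vector; here both are two-element lists, [x, y] and [m, b]. The names samples and fit, the synthetic data, and the use of cycle to sweep the training set repeatedly are illustrative choices; auto (from Numeric.AD) lifts plain sample values into the AD carrier.

import Numeric.AD (auto)
import Numeric.AD.Newton (stochasticGradientDescent)

-- Noise-free samples of the line y = 2*x + 1, stored as [x, y] pairs.
samples :: [[Double]]
samples = [[x, 2 * x + 1] | x <- [0, 0.1 .. 2]]

-- One squared-error update is performed per training example; cycling
-- the samples sweeps the training set over and over.
fit :: [[Double]]
fit = stochasticGradientDescent
        (\[x, y] [m, b] -> (m * auto x + b - auto y) ^ 2)
        (cycle samples)
        [0, 0]

-- With the fixed 0.001 learning rate, successive elements of fit drift
-- from [0, 0] toward the least-squares parameters [2, 1].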