ad-4.2.1.1: Automatic Differentiation

Copyright: (c) Edward Kmett 2010-2014
License: BSD3
Maintainer: ekmett@gmail.com
Stability: experimental
Portability: GHC only
Safe Haskell: None
Language: Haskell2010

Numeric.AD.Newton


Newton's Method (Forward AD)

findZero :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]

The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Examples:

>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0

inverse :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> a -> [a]

The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

Example:

>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0

fixedPoint :: (Fractional a, Eq a) => (forall s. AD s (Forward a) -> AD s (Forward a)) -> a -> [a]

The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607

extremum :: (Fractional a, Eq a) => (forall s. AD s (On (Forward (Forward a))) -> AD s (On (Forward (Forward a)))) -> a -> [a]

The extremum function finds an extremum of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant ("it converges"), no further elements are returned.

>>> last $ take 10 $ extremum cos 1
0.0

Gradient Ascent/Descent (Reverse AD)

gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a]

The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

It uses reverse mode automatic differentiation to compute the gradient.
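
An illustrative sketch (the quadratic and starting point below are invented for this example, not taken from the library's tests): minimizing (x - 3)^2 + (y + 2)^2 from [0, 0], the iterates move toward the minimum at [3, -2], so after a few steps the objective should have dropped well below its starting value of 13.

>>> let sq x = x * x
>>> let quadratic [x, y] = sq (x - 3) + sq (y + 2)
>>> quadratic (gradientDescent quadratic [0, 0] !! 5) < quadratic [0, 0]
True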

gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Reverse s a) -> Reverse s a) -> f a -> [f a]

Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.
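
A similarly invented sketch for the ascending direction: climbing the concave hill -((x - 1)^2 + (y + 1)^2) from [0, 0] toward its maximum at [1, -1], the objective should rise above its starting value of -2.

>>> let sq x = x * x
>>> let hill [x, y] = negate (sq (x - 1) + sq (y + 1))
>>> hill (gradientAscent hill [0, 0] !! 5) > hill [0, 0]
True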

conjugateGradientDescent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a]

Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient, and using forward-on-forward mode for computing extrema.

>>> let sq x = x * x
>>> let rosenbrock [x,y] = sq (1 - x) + 100 * sq (y - sq x)
>>> rosenbrock [0,0]
1
>>> rosenbrock (conjugateGradientDescent rosenbrock [0, 0] !! 5) < 0.1
True

conjugateGradientAscent :: (Traversable f, Ord a, Fractional a) => (forall s. Chosen s => f (Or s (On (Forward (Forward a))) (Kahn a)) -> Or s (On (Forward (Forward a))) (Kahn a)) -> f a -> [f a]

Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.
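
For symmetry with the conjugateGradientDescent example above, a hedged sketch: maximizing the negated Rosenbrock function should visit the same iterates, so by the sixth element the objective should sit within 0.1 of its maximum value of 0.

>>> let sq x = x * x
>>> let negRosenbrock [x, y] = negate (sq (1 - x) + 100 * sq (y - sq x))
>>> negRosenbrock (conjugateGradientAscent negRosenbrock [0, 0] !! 5) > -0.1
True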

stochasticGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Reifies s Tape => f (Scalar a) -> f (Reverse s a) -> Reverse s a) -> [f (Scalar a)] -> f a -> [f a]

The stochasticGradientDescent function approximates the true gradient of the cost function by the gradient at a single training example. As the algorithm sweeps through the training set, it performs the update for each training example.

It uses reverse mode automatic differentiation to compute the gradient. The learning rate is constant throughout and is set to 0.001.
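
As a hedged sketch of the calling convention suggested by the type (the linear model and sample data are invented for illustration, and auto from Numeric.AD.Mode is assumed to be in scope to lift the raw sample values into the AD type): fitting y = m*x + b to points drawn from y = 2x + 1, each element of the result stream is the parameter vector after one more training example has been visited, drifting toward [2, 1] at the fixed 0.001 learning rate.

>>> import Numeric.AD.Mode (auto)
>>> let sq t = t * t
>>> let residual [x, y] [m, b] = sq (auto y - (m * auto x + b))
>>> let samples = [[1, 3], [2, 5], [3, 7], [4, 9]]
>>> let steps = take 2 $ stochasticGradientDescent residual samples [0, 0]

Here steps would hold the parameter estimates after visiting the first and second training examples.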