ad-4.2.0.1: Automatic Differentiation

Portability: GHC only
Stability: experimental
Maintainer: ekmett@gmail.com
Safe Haskell: None

Numeric.AD.Rank1.Newton


Newton's Method (Forward)

findZero :: (Fractional a, Eq a) => (Forward a -> Forward a) -> a -> [a]

The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.
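Each step applies the Newton update x' = x - f x / f' x, with the derivative f' obtained by forward-mode automatic differentiation (hence the Forward type in the signature).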

Examples:

>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0
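
(The second example works over complex numbers; the (:+) constructor comes from Data.Complex, which must be in scope.)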

inverse :: (Fractional a, Eq a) => (Forward a -> Forward a) -> a -> a -> [a]

The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.
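Given inverse f guess y, it searches for an x with f x == y; in essence this amounts to running findZero on the difference between f and the lifted target value.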

Example:

>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0

fixedPoint :: (Fractional a, Eq a) => (Forward a -> Forward a) -> a -> [a]

The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

If the stream becomes constant (it converges), no further elements are returned.
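A fixed point is a value x with f x == x; in essence the search runs findZero on the function \x -> f x - x.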

>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607

extremum :: (Fractional a, Eq a) => (On (Forward (Forward a)) -> On (Forward (Forward a))) -> a -> [a]

The extremum function finds an extremum of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.
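It does so by driving the derivative to zero, which is why the signature involves the doubly nested Forward mode: the Newton step on the derivative needs second derivatives of the original function.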

>>> last $ take 10 $ extremum cos 1
0.0

Gradient Ascent/Descent (Kahn)

gradientDescent :: (Traversable f, Fractional a, Ord a) => (f (Kahn a) -> Kahn a) -> f a -> [f a]

The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent example in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the Stalingrad sources of the VLAD compiler. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

It uses reverse mode automatic differentiation to compute the gradient.
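
As an illustration (not taken from the package documentation), the following minimal sketch minimizes a two-variable quadratic with gradientDescent, using a plain list as the Traversable container; the objective and starting point are invented for the example.

import Numeric.AD.Rank1.Newton (gradientDescent)

-- Illustrative objective: a convex quadratic whose minimum is at (3, -1).
quadratic :: Fractional a => [a] -> a
quadratic [x, y] = (x - 3) ^ 2 + (y + 1) ^ 2
quadratic _      = error "quadratic expects exactly two coordinates"

-- Take a bounded prefix of the result stream and keep the last iterate;
-- it should approach [3.0, -1.0].
approxMinimum :: [Double]
approxMinimum = last (take 1000 (gradientDescent quadratic [0, 0]))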

gradientAscent :: (Traversable f, Fractional a, Ord a) => (f (Kahn a) -> Kahn a) -> f a -> [f a]

Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.
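
A corresponding sketch for gradientAscent (again an invented objective, not from the package documentation): maximizing a concave function whose peak is at (1, 2).

import Numeric.AD.Rank1.Newton (gradientAscent)

-- Illustrative objective: a concave bump with its maximum at (1, 2).
bump :: Fractional a => [a] -> a
bump [x, y] = negate ((x - 1) ^ 2 + (y - 2) ^ 2)
bump _      = error "bump expects exactly two coordinates"

-- The last iterate of a bounded prefix should approach [1.0, 2.0].
approxMaximum :: [Double]
approxMaximum = last (take 1000 (gradientAscent bump [0, 0]))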