ad-3.3.1: Automatic Differentiation

Portability: GHC only
Stability: experimental
Maintainer: ekmett@gmail.com
Safe Haskell: None

Numeric.AD.Newton


# Newton's Method (Forward AD)

findZero :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]

The `findZero` function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

Examples:

```haskell
>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]
```

```haskell
>>> import Data.Complex
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0
```
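For intuition about what forward-mode AD supplies here, below is a minimal, self-contained sketch (not the library's `AD`/`Mode` machinery): a dual number carries a value together with its derivative, so evaluating `f (Dual x 1)` produces `f x` and `f' x` in a single pass, which is exactly what each Newton update `x - f x / f' x` needs.

```haskell
-- A minimal dual number: a value paired with its derivative.
data Dual = Dual { val :: Double, deriv :: Double }

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a * b' + a' * b)  -- product rule
  fromInteger n         = Dual (fromInteger n) 0
  abs (Dual a a')       = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0

-- One Newton step: evaluating f at (Dual x 1) yields f x and f' x at once.
newtonStep :: (Dual -> Dual) -> Double -> Double
newtonStep f x = let Dual fx fx' = f (Dual x 1) in x - fx / fx'

main :: IO ()
main = print (take 7 (iterate (newtonStep (\x -> x^2 - 4)) 1))
```

Unlike `findZero`, this naive `iterate` does not stop when the sequence converges; the library additionally cuts the stream off once it becomes constant.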

inverse :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> a -> [a]

The `inverse` function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

Example:

```haskell
>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0
```

fixedPoint :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]

The `fixedPoint` function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

If the stream becomes constant (it converges), no further elements are returned.

```haskell
>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607
```

extremum :: (Fractional a, Eq a) => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]

The `extremum` function finds an extremum of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

```haskell
>>> last $ take 10 $ extremum cos 1
0.0
```

# Gradient Ascent/Descent (Reverse AD)

gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]

The `gradientDescent` function performs a multivariate optimization, based on the naive gradient descent in the file `stalingrad/examples/flow-tests/pre-saddle-1a.vlad` from the sources of Stalingrad, a compiler for the VLAD language. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.)

It uses reverse mode automatic differentiation to compute the gradient.
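As a self-contained illustration of what the descent loop does, here is a naive sketch with a fixed step size and a hand-written gradient for one quadratic; `gradientDescent` itself derives the gradient generically by reverse-mode AD and manages the step size internally:

```haskell
-- Minimize f [x, y] = (x - 1)^2 + (y - 2)^2, whose minimum is at [1, 2].
-- The gradient is spelled out by hand here only for illustration.
gradF :: [Double] -> [Double]
gradF [x, y] = [2 * (x - 1), 2 * (y - 2)]
gradF _      = error "expects two coordinates"

-- One descent step with fixed learning rate eta.
descend :: Double -> [Double] -> [Double]
descend eta xs = zipWith (\x g -> x - eta * g) xs (gradF xs)

main :: IO ()
main = print (iterate (descend 0.1) [0, 0] !! 100)  -- approaches [1, 2]
```

With the library itself one would instead write `take 100 $ gradientDescent (\[x, y] -> (x - 1)^2 + (y - 2)^2) [0, 0]` and never spell out the gradient.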

gradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]

Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.

conjugateGradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]

Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient.

conjugateGradientAscent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]

Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.
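A usage sketch for the conjugate variants, assuming the `ad` package is installed; the objective (the Rosenbrock function) and the iteration count are illustrative choices, not taken from this documentation:

```haskell
import Numeric.AD.Newton (conjugateGradientDescent)

-- The Rosenbrock function, a standard optimization test problem with
-- its minimum at [1, 1]. Written over any Num so the library can
-- instantiate it at AD types for reverse-mode differentiation.
rosenbrock :: Num a => [a] -> a
rosenbrock [x, y] = (1 - x)^2 + 100 * (y - x^2)^2
rosenbrock _      = error "expects two coordinates"

main :: IO ()
main = print (last (take 50 (conjugateGradientDescent rosenbrock [-1, 1])))
```

As with `gradientDescent`, the result is a stream of iterates, so `take` chooses how much work to do.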