| Portability | GHC only |
|---|---|
| Stability | experimental |
| Maintainer | ekmett@gmail.com |
Numeric.AD.Newton
Description
- findZero :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
- inverse :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> a -> [a]
- fixedPoint :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
- extremum :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
- gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]
- newtype AD f a = AD { runAD :: f a }
- class Lifted t => Mode t where
Newton's Method (Forward AD)
findZero :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
The findZero function finds a zero of a scalar function using
Newton's method; its output is a stream of increasingly accurate
results. (Modulo the usual caveats.)
Examples:
take 10 $ findZero (\x->x^2-4) 1 -- converge to 2.0
import Data.Complex
take 10 $ findZero ((+1).(^2)) (1 :+ 1) -- converge to (0 :+ 1)
inverse :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> a -> [a]
The inverse function inverts a scalar function using
Newton's method; its output is a stream of increasingly accurate
results. (Modulo the usual caveats.)
Example:
take 10 $ inverse sqrt 1 (sqrt 10) -- converge to 10
fixedPoint :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
The fixedPoint function finds a fixed point of a scalar
function using Newton's method; its output is a stream of
increasingly accurate results. (Modulo the usual caveats.)
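Example (an illustrative sketch, not taken from the original documentation; the function and starting point are arbitrary choices):
take 10 $ fixedPoint cos 1 -- converge to the fixed point of cos, roughly 0.7391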
extremum :: Fractional a => (forall s. Mode s => AD s a -> AD s a) -> a -> [a]
The extremum function finds an extremum of a scalar
function using Newton's method; it produces a stream of increasingly
accurate results. (Modulo the usual caveats.)
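Example (an illustrative sketch, not taken from the original documentation):
take 10 $ extremum cos 1 -- converge to 0.0, a local maximum of cos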
Gradient Descent (Reverse AD)
gradientDescent :: (Traversable f, Fractional a, Ord a) => (forall s. Mode s => f (AD s a) -> AD s a) -> f a -> [f a]
The gradientDescent function performs a multivariate
optimization, based on the naive-gradient-descent in the file
stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the
VLAD compiler Stalingrad sources. Its output is a stream of
increasingly accurate results. (Modulo the usual caveats.)
It uses reverse mode automatic differentiation to compute the gradient.
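Example (an illustrative sketch, not taken from the original documentation; the objective and starting point are arbitrary):
take 50 $ gradientDescent (\[x, y] -> (x - 1)^2 + 2 * (y - 3)^2) [0, 0] -- converge toward [1.0, 3.0]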
Exposed Types
AD serves as a common wrapper for different Mode instances, exposing a traditional
numerical tower. Universal quantification is used to limit the actions in user code to
machinery that will return the same answers under all AD modes, allowing us to use modes
interchangeably as both the type-level "brand" and dictionary, providing a common API.
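For instance (an illustrative sketch, not taken from the module's documentation; the function g and the particular drivers chosen are arbitrary), a definition written once against the ordinary numeric classes is polymorphic in the mode, so the same code can be handed to both the forward-mode and reverse-mode drivers above:

g :: Floating x => x -> x
g x = cos x - x

take 10 $ findZero g 1 -- forward mode, Newton's method
take 10 $ gradientDescent (\[x] -> g x ^ 2) [1] -- reverse mode, descending on g squared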
Instances
- Primal f => Primal (AD f)
- Mode f => Mode (AD f)
- Lifted f => Lifted (AD f)
- Var (AD Reverse)
- Iso (f a) (AD f a)
- (Num a, Lifted f, Bounded a) => Bounded (AD f a)
- (Num a, Lifted f, Enum a) => Enum (AD f a)
- (Num a, Lifted f, Eq a) => Eq (AD f a)
- (Lifted f, Floating a) => Floating (AD f a)
- (Lifted f, Fractional a) => Fractional (AD f a)
- (Lifted f, Num a) => Num (AD f a)
- (Num a, Lifted f, Ord a) => Ord (AD f a)
- (Lifted f, Real a) => Real (AD f a)
- (Lifted f, RealFloat a) => RealFloat (AD f a)
- (Lifted f, RealFrac a) => RealFrac (AD f a)
- (Lifted f, Show a) => Show (AD f a)
class Lifted t => Mode t where
Methods
lift :: Num a => a -> t a
Embed a constant
(<+>) :: Num a => t a -> t a -> t a
Vector sum
(*^) :: Num a => a -> t a -> t a
Scalar-vector multiplication
(^*) :: Num a => t a -> a -> t a
Vector-scalar multiplication
(^/) :: Fractional a => t a -> a -> t a
Scalar division
zero = lift 0
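As a small illustration (an assumption about typical usage, not taken from the module's documentation; the helper avg is hypothetical), the Mode operations can be combined in code that is generic over the mode:

avg :: (Mode t, Fractional a) => t a -> t a -> t a
avg x y = (x <+> y) ^/ 2 -- vector-style sum followed by scalar division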