backprop-0.0.3.0: Heterogeneous, type-safe automatic backpropagation in Haskell

Numeric.Backprop.Implicit

Description

Offers full functionality for implicit-graph back-propagation. The intended usage is to write a BPOp, which is a normal Haskell function from BVars to a result BVar. These BVars can be manipulated using their Num / Fractional / Floating instances.

The library can then perform back-propagation on the function (using backprop or grad) by using an implicitly built graph.

This should actually be powerful enough for most use cases, but falls short in a couple of situations:

1. If the result of a function on BVars is used twice (like z in let z = x * y in z + z), a new redundant graph node is allocated for every usage site of z. You can explicitly force z, but only by using an explicit graph description with Numeric.Backprop.
2. It can't handle sum types the way Numeric.Backprop can. You can never pattern match on the constructors of a value inside a BVar. I'm not sure if this is a fundamental limitation (I suspect it might be) or if I just can't figure out how to implement it. Suggestions welcome!
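The first limitation can be sketched as follows (badDouble is a hypothetical name, not part of the library):

```haskell
-- Hypothetical example: z is a deferred BVar built with the Num
-- instance, so each of its two usage sites below allocates its own
-- copy of the multiplication node when the graph is built implicitly.
badDouble :: BPOp '[Double, Double] Double
badDouble (x :< y :< Ø) =
  let z = x * y   -- deferred, via the Num instance
  in  z + z       -- z's node is duplicated at each use
```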

As a comparison, this module offers functionality and an API very similar to Numeric.AD.Mode.Reverse from the ad library, except that it can also handle heterogeneous values.

Note that every type involved has to be an instance of Num. This is because gradients all need to be "summable" (which is implemented using sum and +), and we also need to be able to generate gradients of '1' and '0'.

Synopsis

# Types

## Backprop types

type BPOp rs a = forall s. Prod (BVar s rs) rs -> BVar s rs a Source #

An operation on BVars that can be backpropagated. A value of type:

BPOp rs a


takes a bunch of BVars containing rs and uses them to (purely) produce a BVar containing an a.

foo :: BPOp '[ Double, Double ] Double
foo (x :< y :< Ø) = x + sqrt y


BPOp here is related to BPOpI from the normal explicit-graph backprop module Numeric.Backprop.

data BVar :: Type -> [Type] -> Type -> Type Source #

The basic unit of manipulation inside BP (or inside an implicit-graph backprop function). Instead of directly working with values, you work with BVars containing those values. When you work with a BVar, the backprop library can keep track of which values refer to which other values, and so can perform back-propagation to compute gradients.

A BVar s rs a refers to a value of type a, with an environment of values of the types rs. The phantom parameter s is used to ensure that stray BVars don't leak outside of the backprop process.

(That is, if you're using implicit backprop, it ensures that you interact with BVars in a polymorphic way. And, if you're using explicit backprop, it ensures that a BVar s rs a never leaves the BP s rs that it was created in.)

BVars have Num, Fractional, Floating, etc. instances, so they can be manipulated using polymorphic functions and numeric functions in Haskell. You can add them, subtract them, etc., in "implicit" backprop style.

(However, note that if you directly manipulate BVars using those instances or using liftB, it delays evaluation, so every usage site has to re-compute the result/create a new node. If you want to re-use a BVar you created using + or - or liftB, use bindVar to force it first. See documentation for bindVar for more details.)

Instances

Num a => Num (BVar s rs a) Source #

Note that if you use the Num instance to create BVars, the resulting BVar is deferred/delayed. At every location you use it, it will be recomputed, and a separate graph node will be created. If you are using a BVar you made with the Num instance in multiple locations, use bindVar first to force it and prevent recomputation.

Fractional a => Fractional (BVar s rs a) Source #

See note for Num instance.

Floating a => Floating (BVar s rs a) Source #

See note for Num instance.

type Op as a = forall m. Monad m => OpM m as a Source #

An Op as a describes a differentiable function from as to a.

For example, a value of type

Op '[Int, Bool] Double


is a function from an Int and a Bool, returning a Double. It can be differentiated to give a gradient of an Int and a Bool if given a total derivative for the Double. If we call Bool $$2$$, then, mathematically, it is akin to a:

$f : \mathbb{Z} \times 2 \rightarrow \mathbb{R}$

See runOp, gradOp, and gradOpWith for examples on how to run it, and Op for instructions on creating it.

This type is abstracted over using the pattern synonym with constructor Op, so you can create one from scratch with it. However, it's simplest to create it using the op1', op2', and op3' helper smart constructors. And, if your function is a numeric function, Ops can even be created automatically using op1, op2, op3, and opN, with a little help from Numeric.AD from the ad library.

Note that this type is a subset or subtype of OpM (and also of OpB). So, whenever a function in this library expects an OpM m as a (or an OpB s as a), you can always provide an Op as a instead.
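As a sketch of this subtyping in practice (hypot and useHypot are hypothetical names), an Op made with op2 can be handed directly to liftB, which expects an OpB:

```haskell
-- An Op built with op2 from a polymorphic numeric function...
hypot :: Floating a => Op '[a, a] a
hypot = op2 $ \x y -> sqrt (x*x + y*y)

-- ...can be used anywhere an OpB is expected, such as with liftB:
useHypot :: Floating a => BPOp '[a, a] a
useHypot (x :< y :< Ø) = liftB hypot (x :< y :< Ø)
```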

type OpB s as a = OpM (ST s) as a Source #

A subclass of OpM (and superclass of Op), representing Ops that the backprop library uses to perform backpropagation.

An

OpB s rs a


represents a differentiable function that takes a tuple of rs and produces an a, and can be run on BVars and also inside BP s actions. For example, an OpB s '[ Int, Double ] Bool takes an Int and a Double and produces a Bool, and does it in a differentiable way.

# Back-propagation

backprop :: Every Num rs => BPOp rs a -> Tuple rs -> (a, Tuple rs) Source #

Run back-propagation on a BPOp function, getting both the result and the gradient of the result with respect to the inputs.

foo :: BPOp '[Double, Double] Double
foo (x :< y :< Ø) =
let z = x * sqrt y
in  z + x ** y

>>> backprop foo (2 ::< 3 ::< Ø)
(11.46, 13.73 ::< 6.12 ::< Ø)


grad :: Every Num rs => BPOp rs a -> Tuple rs -> Tuple rs Source #

Run the BPOp on an input tuple and return the gradient of the result with respect to the input tuple.

foo :: BPOp '[Double, Double] Double
foo (x :< y :< Ø) =
let z = x * sqrt y
in  z + x ** y

>>> grad foo (2 ::< 3 ::< Ø)
13.73 ::< 6.12 ::< Ø


eval :: (Known Length rs, Num a) => BPOp rs a -> Tuple rs -> a Source #

Simply run the BPOp on an input tuple, getting the result without bothering with the gradient or with back-propagation.

foo :: BPOp '[Double, Double] Double
foo (x :< y :< Ø) =
let z = x * sqrt y
in  z + x ** y

>>> eval foo (2 ::< 3 ::< Ø)
11.46
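The three runners relate in the obvious way; as a sketch (gradViaBackprop is a hypothetical name, not necessarily the library's actual definition), grad is essentially the second component of backprop:

```haskell
-- Run full back-propagation, then discard the result and keep
-- only the gradient; this should behave like grad.
gradViaBackprop :: Every Num rs => BPOp rs a -> Tuple rs -> Tuple rs
gradViaBackprop o = snd . backprop o
```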


# Var manipulation

constVar :: a -> BVar s rs a Source #

Create a BVar that represents just a specific value, that doesn't depend on any other BVars.
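For example (a minimal sketch; shiftBy10 is a hypothetical name), constVar lets a fixed value participate in a BPOp without taking up an input slot or receiving a gradient:

```haskell
-- The constant 10 is wrapped with constVar; the gradient is
-- taken only with respect to the single Double input.
shiftBy10 :: BPOp '[Double] Double
shiftBy10 (x :< Ø) = x + constVar 10
```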

liftB :: OpB s as a -> Prod (BVar s rs) as -> BVar s rs a Source #

Apply an OpB over a Prod of BVars as inputs. Provides "implicit-graph" back-propagation, with deferred evaluation.

If you had an OpB s '[a, b, c] d, this function will expect a 3-Prod of a BVar s rs a, a BVar s rs b, and a BVar s rs c, and the result will be a BVar s rs d:

myOp :: OpB s '[a, b, c] d
x    :: BVar s rs a
y    :: BVar s rs b
z    :: BVar s rs c

x :< y :< z :< Ø              :: Prod (BVar s rs) '[a, b, c]
liftB myOp (x :< y :< z :< Ø) :: BVar s rs d


Note that OpB is a superclass of Op, so you can provide any Op here, as well (like those created by op1, op2, constOp, op0 etc.)

liftB has an infix alias, .$, so the above example can also be written as:

myOp .$ (x :< y :< z :< Ø) :: BVar s rs d


to let you pretend that you're applying the myOp function to three inputs.

The result is a new deferred BVar. This should be fine in most cases, but if you use the result in more than one location, evaluation will be duplicated and multiple redundant graph nodes will be created. If you need to use it in two locations, you should use opVar instead of liftB, or use bindVar:

opVar o xs = bindVar (liftB o xs)


liftB can be thought of as a "deferred evaluation" version of opVar.

(.$) :: OpB s as a -> Prod (BVar s rs) as -> BVar s rs a infixr 5 Source #

Infix synonym for liftB, which lets you pretend that you're applying OpBs as if they were functions:

myOp :: OpB s '[a, b, c] d
x    :: BVar s rs a
y    :: BVar s rs b
z    :: BVar s rs c

x :< y :< z :< Ø              :: Prod (BVar s rs) '[a, b, c]
myOp .$ (x :< y :< z :< Ø)    :: BVar s rs d


Note that OpB is a superclass of Op, so you can pass in any Op here, as well (like those created by op1, op2, constOp, op0 etc.)

See the documentation for liftB for all the caveats of this usage.

.$ can also be thought of as a "deferred evaluation" version of ~$:

o ~$ xs = bindVar (o .$ xs)


liftB1 :: OpB s '[a] b -> BVar s rs a -> BVar s rs b Source #

Convenient wrapper over liftB that takes an OpB with one argument and a single BVar argument. Lets you not have to type out the entire Prod.

liftB1 o x = liftB o (x :< Ø)

myOp :: Op '[a] b
x    :: BVar s rs a

liftB1 myOp x :: BVar s rs b


Note that OpB is a superclass of Op, so you can pass in an Op here (like one made with op1) as well.

See the documentation for liftB for caveats and potential problematic situations with this.

liftB2 :: OpB s '[a, b] c -> BVar s rs a -> BVar s rs b -> BVar s rs c Source #

Convenient wrapper over liftB that takes an OpB with two arguments and two BVar arguments. Lets you not have to type out the entire Prod.

liftB2 o x y = liftB o (x :< y :< Ø)

myOp :: Op '[a, b] c
x    :: BVar s rs a
y    :: BVar s rs b

liftB2 myOp x y :: BVar s rs c


Note that OpB is a superclass of Op, so you can pass in an Op here (like one made with op2) as well.

See the documentation for liftB for caveats and potential problematic situations with this.

liftB3 :: OpB s '[a, b, c] d -> BVar s rs a -> BVar s rs b -> BVar s rs c -> BVar s rs d Source #

Convenient wrapper over liftB that takes an OpB with three arguments and three BVar arguments. Lets you not have to type out the entire Prod.

liftB3 o x y z = liftB o (x :< y :< z :< Ø)

myOp :: Op '[a, b, c] d
x    :: BVar s rs a
y    :: BVar s rs b
z    :: BVar s rs c

liftB3 myOp x y z :: BVar s rs d


Note that OpB is a superclass of Op, so you can pass in an Op here (like one made with op3) as well.

See the documentation for liftB for caveats and potential problematic situations with this.

## As Parts

partsVar :: forall s rs bs a. (Every Num bs, Known Length bs) => Iso' a (Tuple bs) -> BVar s rs a -> Prod (BVar s rs) bs Source #

Use an Iso (or compatible Iso from the lens library) to "pull out" the parts of a data type and work with each part as a BVar.

If there is an isomorphism between a and a Tuple bs (that is, if an a is just a container for a bunch of bs), then it lets you break out the bs inside and work with those.

data Foo = F Int Bool

fooIso :: Iso' Foo (Tuple '[Int, Bool])
fooIso = iso (\(F i b) -> i ::< b ::< Ø)
             (\(i ::< b ::< Ø) -> F i b)

partsVar fooIso :: BVar s rs Foo -> Prod (BVar s rs) '[Int, Bool]

stuff :: BPOp '[Foo] a
stuff (foo :< Ø) =
    case partsVar fooIso foo of
      i :< b :< Ø ->
        -- now, i is a BVar pointing to the Int inside foo,
        -- and b is a BVar pointing to the Bool inside foo.
        -- you can do stuff with the i and b here


You can use this to pass in product types as the environment to a BP, and then break out the type into its constituent products.

Note that for a type like Foo, fooIso can be generated automatically with Generic from GHC.Generics and Generic from Generics.SOP and generics-sop, using the gTuple iso. See gSplit for more information.

Also, if you are literally passing a tuple (like BP s '[ Tuple '[Int, Bool] ]) then you can give in the identity isomorphism (id) or use splitVars.

At the moment, this implicit partsVar is less efficient than the explicit partsVar, but this might change in the future.

withParts :: forall s rs bs a r. (Every Num bs, Known Length bs) => Iso' a (Tuple bs) -> BVar s rs a -> (Prod (BVar s rs) bs -> r) -> r Source #

A continuation-based version of partsVar. Instead of binding the parts and using them in the rest of the block, provide a continuation to do stuff with the parts inside.

Building on the example from partsVar:

data Foo = F Int Bool

fooIso :: Iso' Foo (Tuple '[Int, Bool])
fooIso = iso (\(F i b) -> i ::< b ::< Ø)
             (\(i ::< b ::< Ø) -> F i b)

stuff :: BPOp '[Foo] a
stuff (foo :< Ø) = withParts fooIso foo $ \case
    i :< b :< Ø ->
      -- now, i is a BVar pointing to the Int inside foo,
      -- and b is a BVar pointing to the Bool inside foo.
      -- you can do stuff with the i and b here


Mostly just a stylistic alternative to partsVar.

splitVars :: forall s rs as. (Every Num as, Known Length as) => BVar s rs (Tuple as) -> Prod (BVar s rs) as Source #

Split out a BVar of a tuple into a tuple (Prod) of BVars.
-- the environment is a single Int-Bool tuple, tup
stuff :: BPOp '[ Tuple '[Int, Bool] ] a
stuff (tup :< Ø) =
    case splitVars tup of
      i :< b :< Ø ->
        -- now, i is a BVar pointing to the Int inside tup,
        -- and b is a BVar pointing to the Bool inside tup.
        -- you can do stuff with the i and b here


Note that

splitVars = partsVar id


gSplit :: forall s rs as a. (Generic a, Code a ~ '[as], Every Num as, Known Length as) => BVar s rs a -> Prod (BVar s rs) as Source #

Using Generic from GHC.Generics and Generic from Generics.SOP, split a BVar containing a product type into a tuple (Prod) of BVars pointing to each value inside.

Building on the example from partsVar:

import qualified Generics.SOP as SOP

data Foo = F Int Bool
  deriving Generic

instance SOP.Generic Foo

gSplit :: BVar s rs Foo -> Prod (BVar s rs) '[Int, Bool]

stuff :: BPOp '[Foo] a
stuff (foo :< Ø) =
    case gSplit foo of
      i :< b :< Ø ->
        -- now, i is a BVar pointing to the Int inside foo,
        -- and b is a BVar pointing to the Bool inside foo.
        -- you can do stuff with the i and b here


Because Foo is a straight-up product type, gSplit can use GHC.Generics and take out the items inside.

Note that

gSplit = partsVar gTuple


gTuple :: (Generic a, Code a ~ '[as]) => Iso' a (Tuple as) Source #

An Iso between a type that is a product type, and a tuple that contains all of its components. Uses Generics.SOP and the Generic typeclass.

>>> import qualified Generics.SOP as SOP
>>> data Foo = A Int Bool deriving Generic
>>> instance SOP.Generic Foo
>>> view gTuple (A 10 True)
10 ::< True ::< Ø
>>> review gTuple (15 ::< False ::< Ø)
A 15 False


partsVar' :: forall s rs bs a. Every Num bs => Length bs -> Iso' a (Tuple bs) -> BVar s rs a -> Prod (BVar s rs) bs Source #

A version of partsVar taking an explicit Length, indicating the number of items in the input tuple and their types.
Requiring an explicit Length is mostly useful for rare "extremely polymorphic" situations, where GHC can't infer the type and length of the internal tuple. If you ever actually explicitly write down bs as a list of types, you should be able to just use partsVar.

withParts' :: forall s rs bs a r. Every Num bs => Length bs -> Iso' a (Tuple bs) -> BVar s rs a -> (Prod (BVar s rs) bs -> r) -> r Source #

A version of withParts taking an explicit Length, indicating the number of internal items and their types.

Requiring an explicit Length is mostly useful for rare "extremely polymorphic" situations, where GHC can't infer the type and length of the internal tuple. If you ever actually explicitly write down bs as a list of types, you should be able to just use withParts.

splitVars' :: forall s rs as. Every Num as => Length as -> BVar s rs (Tuple as) -> Prod (BVar s rs) as Source #

A version of splitVars taking an explicit Length, indicating the number of internal items and their types.

Requiring an explicit Length is mostly useful for rare "extremely polymorphic" situations, where GHC can't infer the type and length of the internal tuple. If you ever actually explicitly write down as as a list of types, you should be able to just use splitVars.

gSplit' :: forall s rs as a. (Generic a, Code a ~ '[as], Every Num as) => Length as -> BVar s rs a -> Prod (BVar s rs) as Source #

A version of gSplit taking an explicit Length, indicating the number of internal items and their types.

Requiring an explicit Length is mostly useful for rare "extremely polymorphic" situations, where GHC can't infer the type and length of the internal tuple. If you ever actually explicitly write down as as a list of types, you should be able to just use gSplit.

# Op

op1 :: Num a => (forall s. AD s (Forward a) -> AD s (Forward a)) -> Op '[a] a Source #

Automatically create an Op of a numerical function taking one argument.
Uses diff, and so can take any numerical function polymorphic over the standard numeric types.

>>> gradOp' (op1 (recip . negate)) (5 ::< Ø)
(-0.2, 0.04 ::< Ø)


op2 :: Num a => (forall s. Reifies s Tape => Reverse s a -> Reverse s a -> Reverse s a) -> Op '[a, a] a Source #

Automatically create an Op of a numerical function taking two arguments.

Uses grad, and so can take any numerical function polymorphic over the standard numeric types.

>>> gradOp' (op2 (\x y -> x * sqrt y)) (3 ::< 4 ::< Ø)
(6.0, 2.0 ::< 0.75 ::< Ø)


op3 :: Num a => (forall s. Reifies s Tape => Reverse s a -> Reverse s a -> Reverse s a -> Reverse s a) -> Op '[a, a, a] a Source #

Automatically create an Op of a numerical function taking three arguments.

Uses grad, and so can take any numerical function polymorphic over the standard numeric types.

>>> gradOp' (op3 (\x y z -> (x * sqrt y)**z)) (3 ::< 4 ::< 2 ::< Ø)
(36.0, 24.0 ::< 9.0 ::< 64.503 ::< Ø)


opN :: (Num a, Known Nat n) => (forall s. Reifies s Tape => Vec n (Reverse s a) -> Reverse s a) -> Op (Replicate n a) a Source #

Automatically create an Op of a numerical function taking multiple arguments.

Uses grad, and so can take any numerical function polymorphic over the standard numeric types.

>>> gradOp' (opN (\(x :+ y :+ Ø) -> x * sqrt y)) (3 ::< 4 ::< Ø)
(6.0, 2.0 ::< 0.75 ::< Ø)


op1' :: (a -> (b, Maybe b -> a)) -> Op '[a] b Source #

Create an Op of a function taking one input, by giving its explicit derivative. The function should return a tuple containing the result of the function, and also a function taking the derivative of the result and returning the derivative of the input.

If we have

$\eqalign{ f &: \mathbb{R} \rightarrow \mathbb{R}\cr y &= f(x)\cr z &= g(y) }$

then the derivative $$\frac{dz}{dx}$$ would be:

$\frac{dz}{dx} = \frac{dz}{dy} \frac{dy}{dx}$

If our Op represents $$f$$, then the second item in the resulting tuple should be a function that takes $$\frac{dz}{dy}$$ and returns $$\frac{dz}{dx}$$.

If the input is Nothing, then $$\frac{dz}{dy}$$ should be taken to be $$1$$.

As an example, here is an Op that squares its input:

square :: Num a => Op '[a] a
square = op1' $ \x ->
    (x*x, \case Nothing -> 2 * x
                Just d  -> 2 * d * x
    )


Remember that, generally, end users shouldn't directly construct Ops; they should be provided by libraries or generated automatically. For numeric functions, single-input Ops can be generated automatically using op1.

op2' :: (a -> b -> (c, Maybe c -> (a, b))) -> Op '[a, b] c Source #

Create an Op of a function taking two inputs, by giving its explicit gradient. The function should return a tuple containing the result of the function, and also a function taking the derivative of the result and returning the gradient of the inputs.

If we have

$\eqalign{ f &: \mathbb{R}^2 \rightarrow \mathbb{R}\cr z &= f(x, y)\cr k &= g(z) }$

then the gradient $$\left< \frac{\partial k}{\partial x}, \frac{\partial k}{\partial y} \right>$$ would be:

$\left< \frac{\partial k}{\partial x}, \frac{\partial k}{\partial y} \right> = \left< \frac{dk}{dz} \frac{\partial z}{\partial x}, \frac{dk}{dz} \frac{\partial z}{\partial y} \right>$

If our Op represents $$f$$, then the second item in the resulting tuple should be a function that takes $$\frac{dk}{dz}$$ and returns $$\left< \frac{\partial k}{\partial x}, \frac{\partial k}{\partial y} \right>$$.

If the input is Nothing, then $$\frac{dk}{dz}$$ should be taken to be $$1$$.

As an example, here is an Op that multiplies its inputs:

mul :: Num a => Op '[a, a] a
mul = op2' $ \x y ->
    (x*y, \case Nothing -> (y  , x  )
                Just d  -> (d*y, x*d)
    )


Remember that, generally, end users shouldn't directly construct Ops; they should be provided by libraries or generated automatically.

For numeric functions, two-input Ops can be generated automatically using op2.

op3' :: (a -> b -> c -> (d, Maybe d -> (a, b, c))) -> Op '[a, b, c] d Source #

Create an Op of a function taking three inputs, by giving its explicit gradient. See documentation for op2' for more details.
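Following the pattern of the square and mul examples, here is a hedged sketch of an op3' for a fused multiply-add (fma is a hypothetical name): the gradient of x*y + z with respect to (x, y, z) is (d*y, d*x, d), where d is the downstream derivative, taken to be 1 when the input is Nothing.

```haskell
fma :: Num a => Op '[a, a, a] a
fma = op3' $ \x y z ->
    ( x*y + z
    , \case Nothing -> (y  , x  , 1)   -- d taken to be 1
            Just d  -> (d*y, d*x, d)
    )
```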

# Utility

pattern (:>) :: f a -> f b -> Prod f '[a, b] infix 6 #

Construct a two element Prod. Since the precedence of (:>) is higher than (:<), we can conveniently write lists like:

>>> a :< b :> c


Which is identical to:

>>> a :< b :< c :< Ø


only :: f a -> Prod f '[a] #

Build a singleton Prod.

head' :: Prod f (a ': as) -> f a #

Get the first element of a non-empty Prod.

pattern (::<) :: a -> Tuple as -> Tuple (a ': as) infixr 5 #

Cons onto a Tuple.

only_ :: a -> Tuple '[a] #

Singleton Tuple.

## Numeric Ops

(+.) :: Num a => Op '[a, a] a Source #

Optimized version of op2 (+).

(-.) :: Num a => Op '[a, a] a Source #

Optimized version of op2 (-).

(*.) :: Num a => Op '[a, a] a Source #

Optimized version of op2 (*).
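These optimized Ops run like any other Op; as a sketch, since the partial derivative of x*y with respect to x is y (and with respect to y is x), running (*.) with gradOp' should give the product together with the gradient (y, x):

```haskell
>>> gradOp' (*.) (2 ::< 3 ::< Ø)
(6, 3 ::< 2 ::< Ø)
```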

negateOp :: Num a => Op '[a] a Source #

Optimized version of op1 negate.

absOp :: Num a => Op '[a] a Source #

Optimized version of op1 abs.

signumOp :: Num a => Op '[a] a Source #

Optimized version of op1 signum.

(/.) :: Fractional a => Op '[a, a] a Source #

Optimized version of op2 (/).

recipOp :: Fractional a => Op '[a] a Source #

Optimized version of op1 recip.

expOp :: Floating a => Op '[a] a Source #

Optimized version of op1 exp.

logOp :: Floating a => Op '[a] a Source #

Optimized version of op1 log.

sqrtOp :: Floating a => Op '[a] a Source #

Optimized version of op1 sqrt.

(**.) :: Floating a => Op '[a, a] a Source #

Optimized version of op2 (**).

logBaseOp :: Floating a => Op '[a, a] a Source #

Optimized version of op2 logBase.

sinOp :: Floating a => Op '[a] a Source #

Optimized version of op1 sin.

cosOp :: Floating a => Op '[a] a Source #

Optimized version of op1 cos.

tanOp :: Floating a => Op '[a] a Source #

Optimized version of op1 tan.

asinOp :: Floating a => Op '[a] a Source #

Optimized version of op1 asin.

acosOp :: Floating a => Op '[a] a Source #

Optimized version of op1 acos.

atanOp :: Floating a => Op '[a] a Source #

Optimized version of op1 atan.

sinhOp :: Floating a => Op '[a] a Source #

Optimized version of op1 sinh.

coshOp :: Floating a => Op '[a] a Source #

Optimized version of op1 cosh.

tanhOp :: Floating a => Op '[a] a Source #

Optimized version of op1 tanh.

asinhOp :: Floating a => Op '[a] a Source #

Optimized version of op1 asinh.

acoshOp :: Floating a => Op '[a] a Source #

Optimized version of op1 acosh.

atanhOp :: Floating a => Op '[a] a Source #

Optimized version of op1 atanh.