ad-3.4: Automatic Differentiation
Portability: GHC only. Stability: experimental. Maintainer: ekmett@gmail.com. Safety: Safe-Inferred.

Numeric.AD.Internal.Combinators

  zipWithT: Zip a Traversable f with a Foldable g, assuming f has at least as many entries as g.
  zipWithDefaultT: Zip a Traversable f with a Foldable g, using a default value after g is exhausted.

Numeric.AD.Internal.Classes

  The members of Jacobian (unary, lift1, binary, lift2, ...) are used by deriveMode but are not exposed via the Jacobian class, to prevent their abuse by end users via the AD data type.
  primal is used by deriveMode but is not exposed via the Primal class, to prevent its abuse by end users via the AD data type. It provides direct access to the result, stripped of its derivative information, but this is unsafe in general, as (auto . primal) would discard derivative information. The end user is protected from accidentally using this function by the universal quantification on the various combinators we expose.
  isKnownConstant: allowed to return False for items with a zero derivative, but we'll give more NaNs than strictly necessary.
  isKnownZero: allowed to return False for zero, but we give more NaNs than strictly necessary.
  auto: Embed a constant.
  <+>: Vector sum.
  *^: Scalar-vector multiplication.
  ^*: Vector-scalar multiplication.
  ^/: Scalar division.
  <**>: Exponentiation; this should be overloaded if you can figure out anything about what is constant!
  zero: 'zero' = 'lift' 0.
  deriveLifted: provides an instance Lifted $t, given supplied instances for

    instance Lifted $t => Primal $t where ...
    instance Lifted $t => Jacobian $t where ...

  The seemingly redundant Lifted $t constraints are caused by Template Haskell staging restrictions.
  liftedMembers: Find all the members defined in the Lifted data type.
  deriveNumeric: deriveNumeric f g provides the following instances:

    instance ('Lifted' $f, 'Num' a, 'Enum' a) => 'Enum' ($g a)
    instance ('Lifted' $f, 'Num' a, 'Eq' a) => 'Eq' ($g a)
    instance ('Lifted' $f, 'Num' a, 'Ord' a) => 'Ord' ($g a)
    instance ('Lifted' $f, 'Num' a, 'Bounded' a) => 'Bounded' ($g a)
    instance ('Lifted' $f, 'Show' a) => 'Show' ($g a)
    instance ('Lifted' $f, 'Num' a) => 'Num' ($g a)
    instance ('Lifted' $f, 'Fractional' a) => 'Fractional' ($g a)
    instance ('Lifted' $f, 'Floating' a) => 'Floating' ($g a)
    instance ('Lifted' $f, 'RealFloat' a) => 'RealFloat' ($g a)
    instance ('Lifted' $f, 'RealFrac' a) => 'RealFrac' ($g a)
    instance ('Lifted' $f, 'Real' a) => 'Real' ($g a)

Numeric.AD.Internal.Jet

  Showable: Used to sidestep the need for UndecidableInstances.
  Jet: A Jet is a tower of all (higher order) partial derivatives of a function. At each step, a Jet f is wrapped in another layer worth of f:

    a :- f a :- f (f a) :- f (f (f a)) :- ...

  tailJet: Take the tail of a Jet.
  headJet: Take the head of a Jet.
  jet: Construct a Jet by unzipping the layers of a Cofree Comonad.

Numeric.AD.Internal.Types

  AD serves as a common wrapper for different Mode instances, exposing a traditional numerical tower. Universal quantification is used to limit the actions in user code to machinery that will return the same answers under all AD modes, allowing us to use modes interchangeably as both the type level "brand" and dictionary, providing a common API.
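The idea behind these derivative-carrying wrappers is easiest to see in miniature. Below is a hedged sketch of the dual-number representation underlying forward-mode AD; the names Dual, diffD and the concrete representation are illustrative assumptions, not the library's actual types.

```haskell
-- A minimal forward-mode AD sketch: a value paired with its tangent.
-- Hypothetical names; the ad library's real representation differs.
data Dual = Dual { primalPart :: Double, tangentPart :: Double }
  deriving (Eq, Show)

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0          -- constants carry zero tangent

-- Differentiate a scalar-to-scalar function: seed the tangent with 1.
diffD :: (Dual -> Dual) -> Double -> Double
diffD f x = tangentPart (f (Dual x 1))

main :: IO ()
main = print (diffD (\x -> x * x + 3 * x) 2)  -- d/dx (x^2 + 3x) at 2 = 7.0
```

Because the function is only required to be polymorphic over a numeric tower, the same user code runs unchanged under any such carrier type, which is exactly what the universal quantification in AD enforces.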
Numeric.AD.Internal.Var

  Var: Used to mark variables for inspection during the reverse pass.

Numeric.AD.Internal.Tower

  Tower is an AD Mode that calculates a tangent tower by forward AD, and provides fast diffsUU and diffsUF.

Numeric.AD.Internal.Sparse

  We only store partials in sorted order, so the map contained in a partial will only contain partials with equal or greater keys to that of the map in which it was found. This should be key for efficiently computing sparse hessians: there are only (n + k - 1) choose k distinct nth partial derivatives of a function with k inputs.
  dropMap: drop keys below a given value.

Numeric.AD.Internal.Forward

  Forward: Forward mode AD.
  tangent: Calculate the tangent using forward mode AD.

Numeric.AD.Internal.Kahn

  Kahn is a Mode using reverse-mode automatic differentiation that provides fast diffFU, diff2FU, grad, grad2 and a fast jacobian when you have a significantly smaller number of outputs than inputs.
  Tape: A Tape records the information needed to back propagate from the output to each input during reverse AD.
  backPropagate: Back propagate sensitivities along a tape.
  partials: Returns a list of contributions to the partials. The variable ids returned in the list are likely not unique!
  partialArray: Return an Array of partials given bounds for the variable IDs.
  partialMap: Return an IntMap of sparse partials.

Numeric.AD.Internal.Reverse

  unarily: This is used to create a new entry on the chain given a unary function, its derivative with respect to its input, the variable ID of its input, and the value of its input. Used internally.
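The record-then-backpropagate scheme can be illustrated with a toy reverse-mode sketch: build an explicit expression graph, then push a sensitivity from the output back down to each variable. The names Expr, backprop and gradR are hypothetical; the library uses a real tape with variable IDs rather than a re-evaluated tree.

```haskell
-- Toy reverse-mode AD: an explicit expression graph standing in for a tape.
data Expr = Var Int | Con Double | Add Expr Expr | Mul Expr Expr

instance Num Expr where
  (+)         = Add
  (*)         = Mul
  a - b       = Add a (Mul (Con (-1)) b)
  negate      = Mul (Con (-1))
  abs         = error "abs: not needed for this sketch"
  signum      = error "signum: not needed for this sketch"
  fromInteger = Con . fromInteger

eval :: [Double] -> Expr -> Double
eval xs (Var i)   = xs !! i
eval _  (Con c)   = c
eval xs (Add a b) = eval xs a + eval xs b
eval xs (Mul a b) = eval xs a * eval xs b

-- Push a sensitivity s from the root down to each variable; as with the
-- library's partials, the variable IDs in the result need not be unique.
backprop :: [Double] -> Double -> Expr -> [(Int, Double)]
backprop _  s (Var i)   = [(i, s)]
backprop _  _ (Con _)   = []
backprop xs s (Add a b) = backprop xs s a ++ backprop xs s b
backprop xs s (Mul a b) = backprop xs (s * eval xs b) a
                       ++ backprop xs (s * eval xs a) b

-- Gradient in one forward evaluation plus one backward sweep.
gradR :: ([Expr] -> Expr) -> [Double] -> [Double]
gradR f xs = [ sum [ d | (j, d) <- ps, j == i ] | i <- [0 .. n - 1] ]
  where n  = length xs
        ps = backprop xs 1 (f (map Var [0 .. n - 1]))

main :: IO ()
main = print (gradR (\[x, y, z] -> x * y + z) [1, 2, 3])  -- [2.0,1.0,1.0]
```

The summation over duplicate variable IDs plays the role of partialArray/partialMap: accumulating the scattered contributions into one partial per input.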
  binarily: This is used to create a new entry on the chain given a binary function, its derivatives with respect to its inputs, their variable IDs and values. Used internally.
  derivativeOf: Helper that extracts the derivative of a chain when the chain was constructed with one variable.
  derivativeOf': Helper that extracts both the primal and derivative of a chain when the chain was constructed with one variable.
  backPropagate: Used internally to push sensitivities down the chain.
  partials: Extract the partials from the current chain for a given AD variable.
  partialArrayOf: Return an Array of partials given bounds for the variable IDs.
  partialMapOf: Return an IntMap of sparse partials.
  reifyTape: Construct a tape that starts with n variables.

Numeric.AD.Internal.Composition

  ComposeMode: The composition of two AD modes is an AD mode in its own right.
  ComposeFunctor: Functor composition, used to nest the use of jacobian and grad.

Numeric.AD.Internal.Identity

  lowerUU: Evaluate a scalar-to-scalar function in the trivial identity AD mode.
  lowerUF: Evaluate a scalar-to-nonscalar function in the trivial identity AD mode.
  lowerFU: Evaluate a nonscalar-to-scalar function in the trivial identity AD mode.
  lowerFF: Evaluate a nonscalar-to-nonscalar function in the trivial identity AD mode.

Numeric.AD.Mode.Forward

  du: Compute the directional derivative of a function given a zipped up Functor of the input values and their derivatives.
  du': Compute the answer and directional derivative of a function given a zipped up Functor of the input values and their derivatives.
  duF: Compute a vector of directional derivatives for a function given a zipped up Functor of the input values and their derivatives.
  duF': Compute a vector of answers and directional derivatives for a function given a zipped up Functor of the input values and their derivatives.
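The zipped-input convention of du can be sketched with a dual-number carrier: each input value is paired with its tangent component, and a single forward pass yields the directional derivative. This is a hedged sketch; Dual and duD are hypothetical names, not the library's API.

```haskell
-- Dual numbers: primal value paired with a tangent component.
data Dual = Dual Double Double

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Directional derivative: seed every input with its paired tangent,
-- run the function once, and read the tangent of the output.
duD :: ([Dual] -> Dual) -> [(Double, Double)] -> Double
duD f xs = let Dual _ d = f (map (uncurry Dual) xs) in d

main :: IO ()
main = print (duD (\[x, y] -> x * y) [(1, 2), (3, 4)])
-- grad at (1,3) is [3,1]; along direction [2,4]: 3*2 + 1*4 = 10.0
```

Seeding the tangents with a basis vector e_i instead of an arbitrary direction recovers grad one coordinate at a time, which is why the forward-mode grad below costs one pass per input.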
  diff: The diff function calculates the first derivative of a scalar-to-scalar function by forward-mode AD.

    >>> diff sin 0
    1.0

  diff': The diff' function calculates the result and first derivative of a scalar-to-scalar function by forward-mode AD.

    diff' f == f &&& diff f

    >>> diff' sin 0
    (0.0,1.0)
    >>> diff' exp 0
    (1.0,1.0)

  diffF: The diffF function calculates the first derivatives of a scalar-to-nonscalar function by forward-mode AD.

    >>> diffF (\a -> [sin a, cos a]) 0
    [1.0,-0.0]

  diffF': The diffF' function calculates the result and first derivatives of a scalar-to-non-scalar function by forward-mode AD.

    >>> diffF' (\a -> [sin a, cos a]) 0
    [(0.0,1.0),(1.0,-0.0)]

  jacobianT: A fast, simple, transposed Jacobian computed with forward-mode AD.
  jacobianWithT: A fast, simple, transposed Jacobian computed with forward-mode AD that combines the output with the input.
  jacobian: Compute the Jacobian using forward-mode AD. This must transpose the result, so jacobianT is faster and allows more result types.

    >>> jacobian (\[x,y] -> [y,x,x+y,x*y,exp x * sin y]) [pi,1]
    [[0.0,1.0],[1.0,0.0],[1.0,1.0],[1.0,3.141592653589793],[19.472221418841606,12.502969588876512]]

  jacobianWith: Compute the Jacobian using forward-mode AD and combine the output with the input. This must transpose the result, so jacobianWithT is faster, and allows more result types.
  jacobian': Compute the Jacobian using forward-mode AD along with the actual answer.
  jacobianWith': Compute the Jacobian using forward-mode AD combined with the input using a user-specified function, along with the actual answer.
  grad: Compute the gradient of a function using forward-mode AD. Note: this performs O(n) worse than reverse-mode grad for n inputs, in exchange for better space utilization.
  grad': Compute the gradient and answer to a function using forward-mode AD. Note: this performs O(n) worse than reverse-mode grad' for n inputs, in exchange for better space utilization.
  gradWith: Compute the gradient of a function using forward-mode AD and combine the result with the input using a user-specified function. Note: this performs O(n) worse than reverse-mode gradWith for n inputs, in exchange for better space utilization.
  gradWith': Compute the gradient of a function using forward-mode AD and the answer, and combine the result with the input using a user-specified function. Note: this performs O(n) worse than reverse-mode gradWith' for n inputs, in exchange for better space utilization.

    >>> gradWith' (,) sum [0..4]
    (10,[(0,1),(1,1),(2,1),(3,1),(4,1)])

  hessianProduct: Compute the product of a vector with the Hessian using forward-on-forward-mode AD.
  hessianProduct': Compute the gradient and hessian product using forward-on-forward-mode AD.

Numeric.AD.Mode.Kahn

  grad: The grad function calculates the gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

    >>> grad (\[x,y,z] -> x*y+z) [1,2,3]
    [2,1,1]

  grad': The grad' function calculates the result and gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

    >>> grad' (\[x,y,z] -> x*y+z) [1,2,3]
    (5,[2,1,1])

  gradWith: gradWith g f calculates the gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

    grad == gradWith (\_ dx -> dx)

  gradWith': gradWith' g f calculates the result and gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

    grad' == gradWith' (\_ dx -> dx)

  jacobian: The jacobian function calculates the jacobian of a non-scalar-to-non-scalar function with reverse AD lazily in m passes for m outputs.

    >>> jacobian (\[x,y] -> [y,x,x*y]) [2,1]
    [[0,1],[1,0],[1,2]]

  jacobian': The jacobian' function calculates both the result and the Jacobian of a nonscalar-to-nonscalar function, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobian. An alias for gradF'.

    >>> jacobian' (\[x,y] -> [y,x,x*y]) [2,1]
    [(1,[0,1]),(2,[1,0]),(2,[1,2])]

  jacobianWith: 'jacobianWith g f' calculates the Jacobian of a non-scalar-to-non-scalar function f with reverse AD lazily in m passes for m outputs.
  Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using g.

    jacobian == jacobianWith (\_ dx -> dx)

  jacobianWith': 'jacobianWith' g f' calculates both the result and the Jacobian of a nonscalar-to-nonscalar function f, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobianWith. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using g.

    jacobian' == jacobianWith' (\_ dx -> dx)

  diff: Compute the derivative of a function.

    >>> diff sin 0
    1.0

  diff': The diff' function calculates the result and derivative, as a pair, of a scalar-to-scalar function.

    >>> diff' sin 0
    (0.0,1.0)
    >>> diff' exp 0
    (1.0,1.0)

  diffF: Compute the derivatives of each result of a scalar-to-vector function with regards to its input.

    >>> diffF (\a -> [sin a, cos a]) 0
    [1.0,0.0]

  diffF': Compute the derivatives of each result of a scalar-to-vector function with regards to its input, along with the answer.

    >>> diffF' (\a -> [sin a, cos a]) 0
    [(0.0,1.0),(1.0,0.0)]

  hessian: Compute the hessian via the jacobian of the gradient. The gradient is computed in reverse mode and then the jacobian is computed in reverse mode. However, since grad f :: f a -> f a is square, this is not as fast as using the forward-mode Jacobian of a reverse-mode gradient provided by Numeric.AD.hessian.

    >>> hessian (\[x,y] -> x*y) [1,2]
    [[0,1],[1,0]]

  hessianF: Compute the order 3 Hessian tensor on a non-scalar-to-non-scalar function via the reverse-mode Jacobian of the reverse-mode Jacobian of the function. Less efficient than Numeric.AD.hessianF.

    >>> hessianF (\[x,y] -> [x*y,x+y,exp x*cos y]) [1,2]
    [[[0.0,1.0],[1.0,0.0]],[[0.0,0.0],[0.0,0.0]],[[-1.1312043837568135,-2.4717266720048188],[-2.4717266720048188,1.1312043837568135]]]

Numeric.AD.Newton

  findZero: The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.)
  If the stream becomes constant (it converges), no further elements are returned.

  Examples:

    >>> take 10 $ findZero (\x->x^2-4) 1
    [1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]

    >>> import Data.Complex
    >>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
    0.0 :+ 1.0

  inverse: The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

  Example:

    >>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
    10.0

  fixedPoint: The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

    >>> last $ take 10 $ fixedPoint cos 1
    0.7390851332151607

  extremum: The extremum function finds an extremum of a scalar function using Newton's method; produces a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

    >>> last $ take 10 $ extremum cos 1
    0.0

  gradientDescent: The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.) It uses reverse mode automatic differentiation to compute the gradient.
  gradientAscent: Perform a gradient ascent using reverse mode automatic differentiation to compute the gradient.
  conjugateGradientDescent: Perform a conjugate gradient descent using reverse mode automatic differentiation to compute the gradient.
  conjugateGradientAscent: Perform a conjugate gradient ascent using reverse mode automatic differentiation to compute the gradient.
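The Newton iteration behind findZero can be sketched directly: each step evaluates f and f' together in a single forward-AD pass and applies the update x - f x / f' x. A hedged sketch with hypothetical Dual/findZeroD names; unlike the library's stream, this one does not truncate when it becomes constant.

```haskell
-- Dual numbers give f and f' from one evaluation.
data Dual = Dual Double Double

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Newton's method: x_{n+1} = x_n - f x_n / f' x_n, as an infinite stream.
findZeroD :: (Dual -> Dual) -> Double -> [Double]
findZeroD f = iterate step
  where step x = let Dual y y' = f (Dual x 1) in x - y / y'

main :: IO ()
main = print (take 4 (findZeroD (\x -> x * x - 4) 1))
-- the iterates approach 2, e.g. [1.0,2.5,2.05,...]
```

Halley's method refines the same scheme using f'' as well, which is why it needs the second-derivative preconditions noted below; the second derivative can be obtained by nesting one dual layer inside another.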
Numeric.AD.Halley

  findZero: The findZero function finds a zero of a scalar function using Halley's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

  Examples:

    >>> take 10 $ findZero (\x->x^2-4) 1
    [1.0,1.8571428571428572,1.9997967892704736,1.9999999999994755,2.0]

    >>> import Data.Complex
    >>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
    0.0 :+ 1.0

  inverse: The inverse function inverts a scalar function using Halley's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned. Note: the take 10 $ inverse sqrt 1 (sqrt 10) example that works for Newton's method fails with Halley's method because the preconditions do not hold!

  fixedPoint: The fixedPoint function finds a fixed point of a scalar function using Halley's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

    >>> last $ take 10 $ fixedPoint cos 1
    0.7390851332151607

  extremum: The extremum function finds an extremum of a scalar function using Halley's method; produces a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

    >>> take 10 $ extremum cos 1
    [1.0,0.29616942658570555,4.59979519460002e-3,1.6220740159042513e-8,0.0]

Numeric.AD.Mode.Reverse

  grad: The grad function calculates the gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

    >>> grad (\[x,y,z] -> x*y+z) [1,2,3]
    [2,1,1]

  grad': The grad' function calculates the result and gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.
    >>> grad' (\[x,y,z] -> 4*x*exp y+cos z) [1,2,3]
    (28.566231899122155,[29.5562243957226,29.5562243957226,-0.1411200080598672])

  gradWith: gradWith g f calculates the gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

    grad == gradWith (\_ dx -> dx)
    id == gradWith const

  gradWith': gradWith' g f calculates the result and gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

    grad' == gradWith' (\_ dx -> dx)

  jacobian: The jacobian function calculates the jacobian of a non-scalar-to-non-scalar function with reverse AD lazily in m passes for m outputs.

    >>> jacobian (\[x,y] -> [y,x,x*y]) [2,1]
    [[0,1],[1,0],[1,2]]

    >>> jacobian (\[x,y] -> [exp y,cos x,x+y]) [1,2]
    [[0.0,7.38905609893065],[-0.8414709848078965,0.0],[1.0,1.0]]

  jacobian': The jacobian' function calculates both the result and the Jacobian of a nonscalar-to-nonscalar function, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobian. An alias for gradF'.

    >>> jacobian' (\[x,y] -> [y,x,x*y]) [2,1]
    [(1,[0,1]),(2,[1,0]),(2,[1,2])]

  jacobianWith: 'jacobianWith g f' calculates the Jacobian of a non-scalar-to-non-scalar function f with reverse AD lazily in m passes for m outputs. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using g.

    jacobian == jacobianWith (\_ dx -> dx)

  jacobianWith': 'jacobianWith' g f' calculates both the result and the Jacobian of a nonscalar-to-nonscalar function f, using m invocations of reverse AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobianWith. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using g.

    jacobian' == jacobianWith' (\_ dx -> dx)

  diff: Compute the derivative of a function.
    >>> diff sin 0
    1.0
    >>> cos 0
    1.0

  diff': The diff' function calculates the value and derivative, as a pair, of a scalar-to-scalar function.

    >>> diff' sin 0
    (0.0,1.0)

  diffF: Compute the derivatives of a function that returns a vector with regards to its single input.

    >>> diffF (\a -> [sin a, cos a]) 0
    [1.0,0.0]

  diffF': Compute the derivatives of a function that returns a vector with regards to its single input, as well as the primal answer.

    >>> diffF' (\a -> [sin a, cos a]) 0
    [(0.0,1.0),(1.0,0.0)]

  hessian: Compute the hessian via the jacobian of the gradient. The gradient is computed in reverse mode and then the jacobian is computed in reverse mode. However, since grad f :: f a -> f a is square, this is not as fast as using the forward-mode jacobian of a reverse-mode gradient provided by Numeric.AD.hessian.

    >>> hessian (\[x,y] -> x*y) [1,2]
    [[0,1],[1,0]]

  hessianF: Compute the order 3 Hessian tensor on a non-scalar-to-non-scalar function via the reverse-mode Jacobian of the reverse-mode Jacobian of the function. Less efficient than Numeric.AD.hessianF.

    >>> hessianF (\[x,y] -> [x*y,x+y,exp x*cos y]) [1,2]
    [[[0.0,1.0],[1.0,0.0]],[[0.0,0.0],[0.0,0.0]],[[-1.1312043837568135,-2.4717266720048188],[-2.4717266720048188,1.1312043837568135]]]

Numeric.AD

  jacobian: Calculate the Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward and reverse mode AD based on the number of inputs and outputs. If you know the relative number of inputs and outputs, consider the mode-specific jacobian versions.
  jacobian': Calculate both the answer and Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward- and reverse-mode AD based on the number of inputs and outputs. If you know the relative number of inputs and outputs, consider the mode-specific jacobian' versions.
  jacobianWith: jacobianWith g f calculates the Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward and reverse mode AD based on the number of inputs and outputs. The resulting Jacobian matrix is then recombined element-wise with the input using g.
  If you know the relative number of inputs and outputs, consider the mode-specific jacobianWith versions.
  jacobianWith': jacobianWith' g f calculates the answer and Jacobian of a non-scalar-to-non-scalar function, automatically choosing between sparse and reverse mode AD based on the number of inputs and outputs. The resulting Jacobian matrix is then recombined element-wise with the input using g. If you know the relative number of inputs and outputs, consider the mode-specific jacobianWith' versions.
  hessianProduct: hessianProduct f wv computes the product of the hessian H of a non-scalar-to-scalar function f at w = fst <$> wv with a vector v = snd <$> wv using "Pearlmutter's method" from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.6143, which states:

    H v = (d/dr) grad_w (w + r v) | r = 0

  Or in other words, we take the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.
  hessianProduct': hessianProduct' f wv computes both the gradient of a non-scalar-to-scalar f at w = fst <$> wv and the product of the hessian H at w with a vector v = snd <$> wv using "Pearlmutter's method". The outputs are returned wrapped in the same functor.

    H v = (d/dr) grad_w (w + r v) | r = 0

  Or in other words, we return the gradient and the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.
  hessian: Compute the Hessian via the Jacobian of the gradient. The gradient is computed in reverse mode and then the Jacobian is computed in sparse (forward) mode.
  hessianF: Compute the order 3 Hessian tensor on a non-scalar-to-non-scalar function using Sparse or Sparse-on-Reverse.
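Pearlmutter's formula H v = (d/dr) grad_w f (w + r v) | r=0 can be realized with nested AD. The library computes the gradient in reverse mode and the directional derivative in forward mode; for brevity this hedged sketch nests forward mode over forward mode instead, which implements the same formula (D and hvp are hypothetical names).

```haskell
-- Nested dual numbers: the outer level differentiates along the direction v
-- (the "r" in H v = d/dr grad_w f (w + r v) | r=0); the inner level along e_i.
data D a = D a a

instance Num a => Num (D a) where
  D x x' + D y y' = D (x + y) (x' + y')
  D x x' - D y y' = D (x - y) (x' - y')
  D x x' * D y y' = D (x * y) (x' * y + x * y')
  negate (D x x') = D (negate x) (negate x')
  abs    (D x x') = D (abs x) (x' * signum x)
  signum (D x _)  = D (signum x) 0
  fromInteger n   = D (fromInteger n) 0

-- i-th component of H v: seed the inner tangent with e_i and the outer
-- tangent with v, then read the mixed second-order coefficient.
hvp :: ([D (D Double)] -> D (D Double)) -> [Double] -> [Double] -> [Double]
hvp f w v =
  [ let D _ (D _ h) = f [ D (D wj (if i == j then 1 else 0)) (D vj 0)
                        | (j, (wj, vj)) <- zip [0 ..] (zip w v) ]
    in h
  | i <- [0 .. length w - 1] ]

main :: IO ()
main = print (hvp (\[x, y] -> x * y) [1, 2] [3, 4])
-- H = [[0,1],[1,0]], so H v = [4.0,3.0]
```

Note the cost profile: this forward-over-forward sketch takes n nested passes, whereas reverse-then-forward (as in hessianProduct) obtains the whole product in a constant number of sweeps, which is why the library prefers it.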
Package: ad-3.4

Modules:

  Numeric.AD
  Numeric.AD.Types
  Numeric.AD.Variadic, Numeric.AD.Variadic.Kahn, Numeric.AD.Variadic.Sparse
  Numeric.AD.Newton, Numeric.AD.Halley
  Numeric.AD.Mode.Forward, Numeric.AD.Mode.Tower, Numeric.AD.Mode.Reverse, Numeric.AD.Mode.Kahn, Numeric.AD.Mode.Sparse, Numeric.AD.Mode.Directed
  Numeric.AD.Internal.Classes, Numeric.AD.Internal.Combinators, Numeric.AD.Internal.Composition, Numeric.AD.Internal.Dense, Numeric.AD.Internal.Forward, Numeric.AD.Internal.Identity, Numeric.AD.Internal.Jet, Numeric.AD.Internal.Kahn, Numeric.AD.Internal.Reverse, Numeric.AD.Internal.Sparse, Numeric.AD.Internal.Tower, Numeric.AD.Internal.Types, Numeric.AD.Internal.Var