Numeric.AD.Internal.Combinators (GHC only, experimental, ekmett@gmail.com, Safe-Inferred)

zipWithT: Zip a Foldable f with a Traversable g, assuming f has at least as many entries as g.

zipWithDefaultT: Zip a Foldable f with a Traversable g, using a default value after f is exhausted.

Numeric.AD.Internal.Classes (GHC only, experimental, ekmett@gmail.com)

The unary and binary lifting combinators are used by deriveMode, but are not exposed via the corresponding class to prevent their abuse by end users via the AD data type.

primal: Provides direct access to the result, stripped of its derivative information. This is unsafe in general, as (auto . primal) would discard derivative information. The end user is protected from accidentally using this function by the universal quantification on the various combinators we expose.

isKnownConstant: Allowed to return False for items with a zero derivative, but then we'll give more NaNs than strictly necessary.

isKnownZero: Allowed to return False for zero, but then we give more NaNs than strictly necessary.

auto: Embed a constant.

(<+>): Vector sum.

(*^): Scalar-vector multiplication.

(^*): Vector-scalar multiplication.

(^/): Scalar division.

(<**>): Exponentiation. This should be overloaded if you can figure out anything about what is constant!

zero: 'zero' = 'lift' 0

deriveLifted: deriveLifted t provides instance Lifted $t, given supplied instances for

  instance Lifted $t => Primal $t where ...
  instance Lifted $t => Jacobian $t where ...

The seemingly redundant $t constraints are caused by Template Haskell staging restrictions.
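A minimal sketch of what these zipping combinators can look like (assumptions: the first argument is specialized to a list, whereas the library generalizes over Foldable, and the names are reused here for illustration only):

```haskell
import Data.Traversable (mapAccumL)

-- Zip a list of values onto a Traversable shape, keeping the shape of g.
-- Errors out if the list has fewer entries than g, mirroring the
-- "f has at least as many entries as g" precondition.
zipWithT :: Traversable g => (a -> b -> c) -> [a] -> g b -> g c
zipWithT f as0 gb = snd (mapAccumL step as0 gb)
  where
    step (a:as) b = (as, f a b)
    step []     _ = error "zipWithT: ran out of entries"

-- The "default" variant never runs out: once the list is exhausted,
-- it falls back to the supplied default value.
zipWithDefaultT :: Traversable g => a -> (a -> b -> c) -> [a] -> g b -> g c
zipWithDefaultT dflt f as0 gb = snd (mapAccumL step as0 gb)
  where
    step (a:as) b = (as, f a b)
    step []     b = ([], f dflt b)
```

The state threaded through mapAccumL is simply the remaining entries of the first container, which is what lets the shape of g drive the traversal.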
liftedMembers: Find all the members defined in the Lifted data type.

deriveNumeric: deriveNumeric f g provides the following instances:

  instance ('Lifted' $f, 'Num' a, 'Enum' a) => 'Enum' ($g a)
  instance ('Lifted' $f, 'Num' a, 'Eq' a) => 'Eq' ($g a)
  instance ('Lifted' $f, 'Num' a, 'Ord' a) => 'Ord' ($g a)
  instance ('Lifted' $f, 'Num' a, 'Bounded' a) => 'Bounded' ($g a)
  instance ('Lifted' $f, 'Show' a) => 'Show' ($g a)
  instance ('Lifted' $f, 'Num' a) => 'Num' ($g a)
  instance ('Lifted' $f, 'Fractional' a) => 'Fractional' ($g a)
  instance ('Lifted' $f, 'Floating' a) => 'Floating' ($g a)
  instance ('Lifted' $f, 'RealFloat' a) => 'RealFloat' ($g a)
  instance ('Lifted' $f, 'RealFrac' a) => 'RealFrac' ($g a)
  instance ('Lifted' $f, 'Real' a) => 'Real' ($g a)

Numeric.AD.Internal.Jet (GHC only, experimental, ekmett@gmail.com)

Showable: Used to sidestep the need for UndecidableInstances.

Jet: A Jet is a tower of all (higher order) partial derivatives of a function. At each step, a Jet f is wrapped in another layer's worth of f:

  a :- f a :- f (f a) :- f (f (f a)) :- ...

tailJet: Take the tail of a Jet.

headJet: Take the head of a Jet.

jet: Construct a Jet by unzipping the layers of a Cofree Comonad.

Numeric.AD.Internal.Types (GHC only, experimental, ekmett@gmail.com)

AD: AD serves as a common wrapper for different Mode instances, exposing a traditional numerical tower. Universal quantification is used to limit the actions in user code to machinery that will return the same answers under all AD modes, allowing us to use modes interchangeably as both the type-level "brand" and the dictionary, providing a common API.
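The tower described above can be written down directly; a small self-contained sketch (the :- constructor matches the docs, but the example values here are hand-built rather than produced by AD):

```haskell
-- A Jet is an infinite tower: a value, then an f of values, then an
-- f (f of values), and so on, adding one extra layer of f per level.
data Jet f a = a :- Jet f (f a)

infixr 5 :-

headJet :: Jet f a -> a
headJet (a :- _) = a

tailJet :: Jet f a -> Jet f (f a)
tailJet (_ :- as) = as

-- Example: for f x = x^2 at x = 3, the first layers of the jet would be
-- the value 9, first derivative 6, and second derivative 2; we build
-- them by hand here and leave the rest of the tower undefined.
example :: Jet [] Double
example = 9 :- ([6] :- ([[2]] :- undefined))
```

Laziness is what makes the undefined tail harmless: headJet and tailJet only force the layers actually inspected.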
Numeric.AD.Internal.Var (GHC only, experimental, ekmett@gmail.com)

Variable: Used to mark variables for inspection during the reverse pass.

Numeric.AD.Internal.Tower (GHC only, experimental, ekmett@gmail.com)

Tower: Tower is an AD Mode that calculates a tangent tower by forward AD, and provides fast diffsUU and diffsUF.

Numeric.AD.Internal.Sparse (GHC only, experimental, ekmett@gmail.com)

Sparse: We only store partials in sorted order, so the map contained in a partial will only contain partials with keys equal to or greater than that of the map in which it was found. This should be key for efficiently computing sparse hessians: there are only ((n + k - 1) choose k) distinct nth partial derivatives of a function with k inputs.

dropMap: Drop keys below a given value.

Numeric.AD.Internal.Wengert (GHC only, experimental, ekmett@gmail.com)

unarily: This is used to create a new entry on the chain given a unary function, its derivative with respect to its input, the variable ID of its input, and the value of its input. Used internally.

binarily: This is used to create a new entry on the chain given a binary function, its derivatives with respect to its inputs, and their variable IDs and values. Used internally.

derivativeOf: Helper that extracts the derivative of a chain when the chain was constructed with one variable.

derivativeOf': Helper that extracts both the primal and derivative of a chain when the chain was constructed with one variable.

backPropagate: Used internally to push sensitivities down the chain.

partials: Extract the partials from the current chain for a given AD variable.

partialArrayOf: Return an Array of partials given bounds for the variable IDs.

partialMapOf: Return an IntMap of sparse partials.

reifyTape: Construct a tape that starts with n variables.

Numeric.AD.Internal.Forward (GHC only, experimental, ekmett@gmail.com)

Forward: Forward mode AD. Calculates the derivative using forward-mode AD.
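A toy version of the tape and the back-propagation pass described above (assumptions: a simplified Cell type and an IntMap of sensitivities; the library's actual Tape, cells, and backPropagate differ in representation and performance):

```haskell
import qualified Data.IntMap.Strict as IM

-- Each tape cell records the variable ids of its inputs together with
-- the partial derivative of the cell's value with respect to each input.
data Cell
  = Nil                          -- an input variable
  | Unary Int Double             -- one parent id and d(out)/d(in)
  | Binary Int Int Double Double -- two parent ids and both partials

-- Walk the tape from the output cell backwards, pushing each cell's
-- sensitivity onto its parents, scaled by the recorded partials.
backProp :: IM.IntMap Cell -> Int -> IM.IntMap Double
backProp tape out = foldl step (IM.singleton out 1) (reverse (IM.toAscList tape))
  where
    step sens (i, cell) = case (cell, IM.lookup i sens) of
      (Unary j dj, Just s)       -> IM.insertWith (+) j (s * dj) sens
      (Binary j k dj dk, Just s) ->
        IM.insertWith (+) k (s * dk) (IM.insertWith (+) j (s * dj) sens)
      _                          -> sens

-- f (x, y) = x * y at x = 2 (id 1), y = 3 (id 2); the product is cell 3
-- with partials d/dx = y = 3 and d/dy = x = 2.
demoTape :: IM.IntMap Cell
demoTape = IM.fromList [(1, Nil), (2, Nil), (3, Binary 1 2 3 2)]
```

Running backProp demoTape 3 accumulates sensitivity 3 onto variable 1 and 2 onto variable 2, which is exactly the gradient of x * y at (2, 3).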
Numeric.AD.Internal.Reverse (GHC only, experimental, ekmett@gmail.com)

Reverse: Reverse is a Mode using reverse-mode automatic differentiation that provides fast diffFU, diff2FU, grad, and grad2, and a fast jacobian when you have a significantly smaller number of outputs than inputs.

Tape: A Tape records the information needed to back propagate from the output to each input during reverse-mode AD.

backPropagate: Back propagate sensitivities along a tape.

partials: This returns a list of contributions to the partials. The variable ids returned in the list are likely not unique!

partialArrayOf: Return an Array of partials given bounds for the variable IDs.

partialMapOf: Return an IntMap of sparse partials.

Numeric.AD.Internal.Composition (GHC only, experimental, ekmett@gmail.com)

ComposeMode: The composition of two AD modes is an AD mode in its own right.

ComposeFunctor: Functor composition, used to nest the use of jacobian and grad.

Numeric.AD.Internal.Identity (GHC only, experimental, ekmett@gmail.com)

lowerUU: Evaluate a scalar-to-scalar function in the trivial identity AD mode.

lowerUF: Evaluate a scalar-to-nonscalar function in the trivial identity AD mode.

lowerFU: Evaluate a nonscalar-to-scalar function in the trivial identity AD mode.

lowerFF: Evaluate a nonscalar-to-nonscalar function in the trivial identity AD mode.

Numeric.AD.Mode.Forward (GHC only, experimental, ekmett@gmail.com)

du: Compute the directional derivative of a function given a zipped-up container of the input values and their derivatives.

du': Compute the answer and directional derivative of a function given a zipped-up container of the input values and their derivatives.

duF: Compute a vector of directional derivatives for a function given a zipped-up container of the input values and their derivatives.

duF': Compute a vector of answers and directional derivatives for a function given a zipped-up container of the input values and their derivatives.
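The directional derivative that du computes can be illustrated with a tiny dual-number type (an illustration only; the library's Forward machinery is more general and covers the whole numeric tower):

```haskell
-- A dual number carries a primal value and a tangent: its derivative
-- along a chosen direction.
data Dual = Dual { primalD :: Double, tangentD :: Double }

instance Num Dual where
  Dual a a' + Dual b b' = Dual (a + b) (a' + b')
  Dual a a' - Dual b b' = Dual (a - b) (a' - b')
  Dual a a' * Dual b b' = Dual (a * b) (a' * b + a * b')  -- product rule
  negate (Dual a a')    = Dual (negate a) (negate a')
  abs    (Dual a a')    = Dual (abs a) (a' * signum a)
  signum (Dual a _)     = Dual (signum a) 0
  fromInteger n         = Dual (fromInteger n) 0

-- Directional derivative: seed each input with its component of the
-- direction vector, run the function once, and read off the tangent.
duSketch :: ([Dual] -> Dual) -> [(Double, Double)] -> Double
duSketch f = tangentD . f . map (uncurry Dual)
```

For f [x, y] = x * y at (1, 2) with direction (1, 0), one pass yields the partial derivative with respect to x, namely y = 2.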
diff: The diff function calculates the first derivative of a scalar-to-scalar function by forward-mode AD.

>>> diff sin 0
1.0

diff': The diff' function calculates the result and first derivative of a scalar-to-scalar function by forward-mode AD.

  diff' f = f &&& diff f

>>> diff' sin 0
(0.0,1.0)
>>> diff' exp 0
(1.0,1.0)

diffF: The diffF function calculates the first derivatives of a scalar-to-nonscalar function by forward-mode AD.

>>> diffF (\a -> [sin a, cos a]) 0
[1.0,-0.0]

diffF': The diffF' function calculates the result and first derivatives of a scalar-to-nonscalar function by forward-mode AD.

>>> diffF' (\a -> [sin a, cos a]) 0
[(0.0,1.0),(1.0,-0.0)]

jacobianT: A fast, simple, transposed Jacobian computed with forward-mode AD.

jacobianWithT: A fast, simple, transposed Jacobian computed with forward-mode AD that combines the output with the input.

jacobian: Compute the Jacobian using forward-mode AD. This must transpose the result, so jacobianT is faster and allows more result types.

>>> jacobian (\[x,y] -> [y,x,x+y,x*y,exp x * sin y]) [pi,1]
[[0.0,1.0],[1.0,0.0],[1.0,1.0],[1.0,3.141592653589793],[19.472221418841606,12.502969588876512]]

jacobianWith: Compute the Jacobian using forward-mode AD and combine the output with the input. This must transpose the result, so jacobianWithT is faster and allows more result types.

jacobian': Compute the Jacobian using forward-mode AD, along with the actual answer.

jacobianWith': Compute the Jacobian using forward-mode AD, combined with the input using a user-specified function, along with the actual answer.

grad: Compute the gradient of a function using forward-mode AD. Note: this performs O(n) worse than reverse-mode grad for n inputs, in exchange for better space utilization.

grad': Compute the gradient and answer of a function using forward-mode AD. Note: this performs O(n) worse than reverse-mode grad' for n inputs, in exchange for better space utilization.

gradWith: Compute the gradient of a function using forward-mode AD and combine the result with the input using a user-specified function. Note: this performs O(n) worse than reverse-mode gradWith for n inputs, in exchange for better space utilization.

hessianProduct: Compute the product of a vector with the Hessian using forward-on-forward-mode AD.
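The O(n) note on forward-mode grad comes from needing one tangent pass per input. A self-contained sketch of that idea (the D type and gradForward name are illustrative assumptions, not the library's internals):

```haskell
-- A minimal (value, tangent) pair with just enough of a Num instance
-- for the example below.
data D = D Double Double

instance Num D where
  D a a' + D b b' = D (a + b) (a' + b')
  D a a' - D b b' = D (a - b) (a' - b')
  D a a' * D b b' = D (a * b) (a' * b + a * b')  -- product rule
  negate (D a a') = D (negate a) (negate a')
  abs _    = error "abs: not needed for this sketch"
  signum _ = error "signum: not needed for this sketch"
  fromInteger n = D (fromInteger n) 0

-- One forward pass per coordinate: input i is seeded with tangent 1,
-- all others with tangent 0, and the i-th gradient entry falls out.
gradForward :: ([D] -> D) -> [Double] -> [Double]
gradForward f xs =
  [ t
  | i <- [0 .. length xs - 1]
  , let seeded = [ D x (if i == j then 1 else 0) | (j, x) <- zip [0 ..] xs ]
  , let D _ t = f seeded
  ]
```

Reverse mode gets the whole gradient from a single pass, which is why forward-mode grad is the slower choice when inputs outnumber outputs.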
hessianProduct': Compute the gradient and Hessian product using forward-on-forward-mode AD.

Numeric.AD.Mode.Reverse (GHC only, experimental, ekmett@gmail.com)

grad: The grad function calculates the gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

>>> grad (\[x,y,z] -> x*y+z) [1,2,3]
[2,1,1]

grad': The grad' function calculates the result and gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.

>>> grad' (\[x,y,z] -> 4*x*exp y+cos z) [1,2,3]
(28.566231899122155,[29.5562243957226,29.5562243957226,-0.1411200080598672])

gradWith: gradWith g f calculates the gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

  grad == gradWith (\_ dx -> dx)
  id == gradWith const

gradWith': gradWith' g f calculates the result and gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

  grad' == gradWith' (\_ dx -> dx)

jacobian: The jacobian function calculates the Jacobian of a non-scalar-to-non-scalar function with reverse-mode AD lazily in m passes for m outputs.

>>> jacobian (\[x,y] -> [y,x,x*y]) [2,1]
[[0,1],[1,0],[1,2]]
>>> jacobian (\[x,y] -> [exp y,cos x,x+y]) [1,2]
[[0.0,7.38905609893065],[-0.8414709848078965,0.0],[1.0,1.0]]

jacobian': The jacobian' function calculates both the result and the Jacobian of a non-scalar-to-non-scalar function, using m invocations of reverse-mode AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobian. An alias for gradF'.

>>> jacobian' (\[x,y] -> [y,x,x*y]) [2,1]
[(1,[0,1]),(2,[1,0]),(2,[1,2])]

jacobianWith: jacobianWith g f calculates the Jacobian of a non-scalar-to-non-scalar function f with reverse-mode AD lazily in m passes for m outputs. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.
  jacobian == jacobianWith (\_ dx -> dx)
  jacobianWith const == (\f x -> const x <$> f x)

jacobianWith': jacobianWith' g f calculates both the result and the Jacobian of a non-scalar-to-non-scalar function f, using m invocations of reverse-mode AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobianWith. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.

  jacobian' == jacobianWith' (\_ dx -> dx)

diff: Compute the derivative of a function.

>>> diff sin 0
1.0
>>> cos 0
1.0

diff': The diff' function calculates the value and derivative, as a pair, of a scalar-to-scalar function.

>>> diff' sin 0
(0.0,1.0)

diffF: Compute the derivatives of a function that returns a vector with regard to its single input.

>>> diffF (\a -> [sin a, cos a]) 0
[1.0,0.0]

diffF': Compute the derivatives of a function that returns a vector with regard to its single input, as well as the primal answer.

>>> diffF' (\a -> [sin a, cos a]) 0
[(0.0,1.0),(1.0,0.0)]

hessian: Compute the Hessian via the Jacobian of the gradient. The gradient is computed in reverse mode and then the Jacobian is computed in reverse mode. However, since grad f :: f a -> f a is square, this is not as fast as using the forward-mode Jacobian of a reverse-mode gradient.

>>> hessian (\[x,y] -> x*y) [1,2]
[[0,1],[1,0]]

hessianF: Compute the order-3 Hessian tensor on a non-scalar-to-non-scalar function via the reverse-mode Jacobian of the reverse-mode Jacobian of the function. Less efficient than the mixed-mode version.

>>> hessianF (\[x,y] -> [x*y,x+y,exp x*cos y]) [1,2]
[[[0.0,1.0],[1.0,0.0]],[[0.0,0.0],[0.0,0.0]],[[-1.1312043837568135,-2.4717266720048188],[-2.4717266720048188,1.1312043837568135]]]
Numeric.AD.Newton (GHC only, experimental, ekmett@gmail.com)

findZero: The findZero function finds a zero of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

Examples:

>>> take 10 $ findZero (\x->x^2-4) 1
[1.0,2.5,2.05,2.000609756097561,2.0000000929222947,2.000000000000002,2.0]

>>> import Data.Complex
>>> last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1)
0.0 :+ 1.0

inverse: The inverse function inverts a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

Example:

>>> last $ take 10 $ inverse sqrt 1 (sqrt 10)
10.0

fixedPoint: The fixedPoint function finds a fixed point of a scalar function using Newton's method; its output is a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

>>> last $ take 10 $ fixedPoint cos 1
0.7390851332151607

extremum: The extremum function finds an extremum of a scalar function using Newton's method; it produces a stream of increasingly accurate results. (Modulo the usual caveats.) If the stream becomes constant (it converges), no further elements are returned.

>>> last $ take 10 $ extremum cos 1
0.0

gradientDescent: The gradientDescent function performs a multivariate optimization, based on the naive-gradient-descent in the file stalingrad/examples/flow-tests/pre-saddle-1a.vlad from the VLAD compiler Stalingrad sources. Its output is a stream of increasingly accurate results. (Modulo the usual caveats.) It uses reverse-mode automatic differentiation to compute the gradient.

gradientAscent: Perform a gradient ascent using reverse-mode automatic differentiation to compute the gradient.

conjugateGradientDescent: Perform a conjugate gradient descent using reverse-mode automatic differentiation to compute the gradient.

conjugateGradientAscent: Perform a conjugate gradient ascent using reverse-mode automatic differentiation to compute the gradient.
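The iterations behind these root finders can be sketched directly, with hand-written derivatives standing in for the AD machinery the library uses (an illustration, not the library's implementation):

```haskell
-- Newton's method: x_{n+1} = x_n - f x / f' x
newtonSketch :: (Double -> Double) -> (Double -> Double) -> Double -> [Double]
newtonSketch f f' = iterate (\x -> x - f x / f' x)

-- Halley's method additionally uses the second derivative:
-- x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
halleySketch :: (Double -> Double) -> (Double -> Double) -> (Double -> Double)
             -> Double -> [Double]
halleySketch f f' f'' = iterate step
  where
    step x =
      let fx = f x; fx' = f' x; fx'' = f'' x
      in x - 2 * fx * fx' / (2 * fx' * fx' - fx * fx'')
```

For f x = x^2 - 4 starting from 1, these reproduce the leading terms of the streams shown in the docs: [1.0, 2.5, 2.05, ...] for Newton and [1.0, 1.857..., ...] for Halley, whose cubic convergence is visible in how few terms it needs.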
If the stream becomes constant  ( it converges%), no further elements are returned.  Examples:  take 10 $ findZero (\x->x^2-4) 1B[1.0,1.8571428571428572,1.9997967892704736,1.9999999999994755,2.0]import Data.Complex.last $ take 10 $ findZero ((+1).(^2)) (1 :+ 1) 0.0 :+ 1.0'The '* function inverts a scalar function using  Halley':s method; its output is a stream of increasingly accurate F results. (Modulo the usual caveats.) If the stream becomes constant  ( it converges%), no further elements are returned.  Note: the "take 10 $ inverse sqrt 1 (sqrt 10) example that works for Newton' s method  fails with Halley'0s method because the preconditions do not hold! (The (( function find a fixedpoint of a scalar  function using Halley'$s method; its output is a stream of = increasingly accurate results. (Modulo the usual caveats.)  If the stream becomes constant ( it converges), no further  elements are returned. !last $ take 10 $ fixedPoint cos 10.7390851332151607)The )( function finds an extremum of a scalar  function using Halley',s method; produces a stream of increasingly F accurate results. (Modulo the usual caveats.) If the stream becomes  constant ( it converges%), no further elements are returned. take 10 $ extremum cos 1G[1.0,0.29616942658570555,4.59979519460002e-3,1.6220740159042513e-8,0.0]&'()&'()&'()&'()GHC only experimentalekmett@gmail.comNone*The *l function calculates the gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass.  grad (\[x,y,z] -> x*y+z) [1,2,3][2,1,1]+The +w function calculates the result and gradient of a non-scalar-to-scalar function with reverse-mode AD in a single pass. !grad' (\[x,y,z] -> x*y+z) [1,2,3] (5,[2,1,1]),* g fE function calculates the gradient of a non-scalar-to-scalar function f( with reverse-mode AD in a single pass. L The gradient is combined element-wise with the argument using the function g.   
gradWith: gradWith g f calculates the gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

  grad == gradWith (\_ dx -> dx)
  id == gradWith const

gradWith': gradWith' g f calculates the result and gradient of a non-scalar-to-scalar function f with reverse-mode AD in a single pass. The gradient is combined element-wise with the argument using the function g.

  grad' == gradWith' (\_ dx -> dx)

jacobian: The jacobian function calculates the Jacobian of a non-scalar-to-non-scalar function with reverse-mode AD lazily in m passes for m outputs.

>>> jacobian (\[x,y] -> [y,x,x*y]) [2,1]
[[0,1],[1,0],[1,2]]

jacobian': The jacobian' function calculates both the result and the Jacobian of a non-scalar-to-non-scalar function, using m invocations of reverse-mode AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobian. An alias for gradF'.

>>> jacobian' (\[x,y] -> [y,x,x*y]) [2,1]
[(1,[0,1]),(2,[1,0]),(2,[1,2])]

jacobianWith: jacobianWith g f calculates the Jacobian of a non-scalar-to-non-scalar function f with reverse-mode AD lazily in m passes for m outputs. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.

  jacobian == jacobianWith (\_ dx -> dx)
  jacobianWith const == (\f x -> const x <$> f x)

jacobianWith': jacobianWith' g f calculates both the result and the Jacobian of a non-scalar-to-non-scalar function f, using m invocations of reverse-mode AD, where m is the output dimensionality. Applying fmap snd to the result will recover the result of jacobianWith. Instead of returning the Jacobian matrix, the elements of the matrix are combined with the input using the function g.

  jacobian' == jacobianWith' (\_ dx -> dx)

diff: Compute the derivative of a function.

>>> diff sin 0
1.0

diff': The diff' function calculates the result and derivative, as a pair, of a scalar-to-scalar function.

>>> diff' sin 0
(0.0,1.0)
>>> diff' exp 0
(1.0,1.0)

diffF: Compute the derivatives of each result of a scalar-to-vector function with regard to its input.

>>> diffF (\a -> [sin a, cos a]) 0
[1.0,0.0]

diffF': Compute the derivatives of each result of a scalar-to-vector function with regard to its input, along with the answer.

>>> diffF' (\a -> [sin a, cos a]) 0
[(0.0,1.0),(1.0,0.0)]

hessian: Compute the Hessian via the Jacobian of the gradient.
The gradient is computed in reverse mode and then the Jacobian is computed in reverse mode. However, since grad f :: f a -> f a is square, this is not as fast as using the forward-mode Jacobian of a reverse-mode gradient.

>>> hessian (\[x,y] -> x*y) [1,2]
[[0,1],[1,0]]

hessianF: Compute the order-3 Hessian tensor on a non-scalar-to-non-scalar function via the reverse-mode Jacobian of the reverse-mode Jacobian of the function. Less efficient than the mixed-mode version.

>>> hessianF (\[x,y] -> [x*y,x+y,exp x*cos y]) [1,2]
[[[0.0,1.0],[1.0,0.0]],[[0.0,0.0],[0.0,0.0]],[[-1.1312043837568135,-2.4717266720048188],[-2.4717266720048188,1.1312043837568135]]]

Numeric.AD (GHC only, experimental, ekmett@gmail.com)

jacobian: Calculate the Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward- and reverse-mode AD based on the number of inputs and outputs. If you know the relative number of inputs and outputs, consider using the forward- or reverse-mode jacobian directly.

jacobian': Calculate both the answer and the Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward- and reverse-mode AD based on the number of inputs and outputs. If you know the relative number of inputs and outputs, consider using the forward- or reverse-mode jacobian' directly.

jacobianWith: jacobianWith g f calculates the Jacobian of a non-scalar-to-non-scalar function, automatically choosing between forward- and reverse-mode AD based on the number of inputs and outputs. The resulting Jacobian matrix is then recombined element-wise with the input using g. If you know the relative number of inputs and outputs, consider using the forward- or reverse-mode jacobianWith directly.

jacobianWith': jacobianWith' g f calculates the answer and Jacobian of a non-scalar-to-non-scalar function, automatically choosing between sparse and reverse-mode AD based on the number of inputs and outputs. The resulting Jacobian matrix is then recombined element-wise with the input using g. If you know the relative number of inputs and outputs, consider using the forward- or reverse-mode jacobianWith' directly.
hessianProduct: hessianProduct f wv computes the product of the Hessian H of a non-scalar-to-scalar function f at w = fst <$> wv with a vector v = snd <$> wv using "Pearlmutter's method" from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.29.6143, which states:

  H v = (d/dr) grad_w (w + r v) | r = 0

Or in other words, we take the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.

hessianProduct': hessianProduct' f wv computes both the gradient of a non-scalar-to-scalar function f at w = fst <$> wv and the product of the Hessian H at w with a vector v = snd <$> wv using "Pearlmutter's method". The outputs are returned wrapped in the same functor.

  H v = (d/dr) grad_w (w + r v) | r = 0

Or in other words, we return the gradient and the directional derivative of the gradient. The gradient is calculated in reverse mode, then the directional derivative is calculated in forward mode.

hessian: Compute the Hessian via the Jacobian of the gradient. The gradient is computed in reverse mode and then the Jacobian is computed in sparse (forward) mode.
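Pearlmutter's identity can be sanity-checked numerically: compare a central difference of the gradient along v against the known Hessian (the concrete f, the gradF helper, and the step size r are assumptions of this illustration, not part of the library):

```haskell
-- f (x, y) = x * y has gradient (y, x) and Hessian [[0,1],[1,0]].
gradF :: [Double] -> [Double]
gradF [x, y] = [y, x]
gradF _      = error "gradF: expects exactly two inputs"

-- Central difference of the gradient along v approximates H v:
--   H v ~ (grad (w + r v) - grad (w - r v)) / (2 r)
hvNumeric :: [Double] -> [Double] -> [Double]
hvNumeric w v =
  zipWith (\a b -> (a - b) / (2 * r)) (gradF (shift r)) (gradF (shift (-r)))
  where
    r = 1e-4
    shift s = zipWith (\wi vi -> wi + s * vi) w v
```

At w = (1, 2) with v = (1, 0) this yields approximately (0, 1), matching H v for the Hessian [[0,1],[1,0]]; hessianProduct computes the same quantity exactly by running forward mode over a reverse-mode gradient instead of differencing.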
hessianF: Compute the order-3 Hessian tensor on a non-scalar-to-non-scalar function using Sparse or 'Sparse'-on-'Reverse' mode AD.

Package: ad-3.2.1

Modules: Numeric.AD, Numeric.AD.Types, Numeric.AD.Newton, Numeric.AD.Halley, Numeric.AD.Mode.Forward, Numeric.AD.Mode.Tower, Numeric.AD.Mode.Reverse, Numeric.AD.Mode.Sparse, Numeric.AD.Mode.Wengert, Numeric.AD.Mode.Directed, Numeric.AD.Mode.Mixed, Numeric.AD.Variadic, Numeric.AD.Variadic.Sparse, Numeric.AD.Variadic.Reverse, Numeric.AD.Internal.Combinators, Numeric.AD.Internal.Classes, Numeric.AD.Internal.Var, Numeric.AD.Internal.Tower, Numeric.AD.Internal.Sparse, Numeric.AD.Internal.Wengert, Numeric.AD.Internal.Forward, Numeric.AD.Internal.Reverse, Numeric.AD.Internal.Dense, Numeric.AD.Internal.Composition, Numeric.AD.Internal.Jet, Numeric.AD.Internal.Types, Numeric.AD.Internal.Identity