TypedFlow.Types

  nextVar: The next free variable.
  genParams: The optimizable parameters.
  genTrainingPlaceholder: A flag which is true when training.
  HTV: Heterogeneous tensor vector with the same kind of elements.
  Flip at the type level.
  showShapeMinus: Show a shape, but with None replaced by "-1".
  named: Name an expression so that it is made available for session.run.

TypedFlow.TF: Binding to TensorFlow functions
(c) Jean-Philippe Bernardy, 2017; LGPL-3; jean-philippe.bernardy@gu.se; experimental

  travTensor: Traverse all the tensors over tuples of tensors.
  repeatT: Repeat a flexible-shape constant vector to form a heterogeneous tensor vector.
  zeros, ones, constant: Zeros; ones; a constant tensor.
  persistent: Declare a variable which persists between calls to session.run.
  parameter': Declare a parameter to optimize. The shape of the parameter should not depend on dimensions which can change between runs, such as the batch size.
  peekAt: Name a tensor so that it is made available for session.run.
  modifyPersistent: Modify a mutable tensor. Attention: for the assignment to happen, the resulting tensor must be evaluated!
  getParameters: Return a list of parameters.
  grad: Gradient of an expression with respect to the given parameters.
  clipByGlobalNorm: Clip a gradient.
  clipByValue: Clip a tensor.
  placeholder: Placeholder (to fill).
  reduceMeanAll, reduceSumAll: Mean (resp. sum) of all the elements of the input tensor.
  reduceSum, reduceMean: Sum (resp. mean) along a given dimension.
  reduceSum0: Sum along the first dimension.
  reduceAll, reduce: Internal; use the reductions above instead.
  add, (+): Add two tensors, broadcasting along shape s.
  equal: Indexwise equality test.
  (⊝), (⊙), (⊘), (⊕): Indexwise (element-wise) operators.
  matmul: Matrix multiplication (note that shape s is preserved).
  split0: Split a tensor on the first dimension.
  concatT: Concatenate tensors on dimension n.
  concat0, concat1: Concatenate tensors on the first (resp. second) dimension.
  expandDim: Add an extra dimension of size 1 at axis n.
  expandDim0, expandDim1: Add an extra dimension of size 1 at axis 0 (resp. 1).
  squeeze: Remove a dimension if its size is 1.
  squeeze0, squeeze1: Remove the first (resp. second) dimension if its size is 1.
  flatten2: Reshape a tensor so that the first two dimensions are collapsed.
  flattenN2: Reshape a tensor so that the last two dimensions are collapsed.
  flatten3: Reshape a tensor so that the first three dimensions are collapsed.
  inflate2, inflate3: Reshape a tensor so that the first dimension is expanded into two (resp. three).
  last0: Access the last element in a tensor (in the 0th dimension).
  nth0: Access the nth element in a tensor (in the 0th dimension).
  nth0': Access the nth element in a tensor (in the 0th dimension), with a static index.
  slice: Take a slice at dimension n from i to j.
  unstack0: Split a tensor into n tensors along the first dimension.
  stack0, stack1: Concatenate n tensors along the first (resp. second) dimension.
  stackN: Concatenate n tensors along the last dimension.
  transpose, transposeN, transposeN', transpose01, transposeN01: Transpositions; see each type for the permutation of dimensions performed.
  reverseSequences: Reverse sequences. See https://www.tensorflow.org/api_docs/python/tf/reverse_sequence
  sequenceMask: Generate a mask of given length for each sequence.
  gather: (gather x ix)[k] = x[ix[k]]. See https://www.tensorflow.org/api_docs/python/tf/gather
  convolution: Size-preserving convolution operation. Arguments: the input tensor (batched) and the filters.
  softmax0, softmax1: Softmax along the first (resp. second) dimension.
  argmax: Argmax along dimension n.
  argmax0, argmax1: Argmax along the first (resp. second) dimension.
  cast: Cast the element type.
  softmaxCrossEntropyWithLogits: (Dense) softmax cross entropy with logits. Arguments: labels, logits.
  sigmoidCrossEntropyWithLogits: Computes sigmoid cross entropy given logits (arguments: labels, logits). Measures the probability error in discrete classification tasks in which each class is independent and not mutually exclusive. For instance, one could perform multilabel classification where a picture can contain both an elephant and a dog at the same time. See https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
  sparseSoftmaxCrossEntropyWithLogits: Sparse softmax cross entropy with logits. Arguments: desired labels, predictions.
  oneHot: One-hot vector along axis n.
  oneHot0, oneHot1: One-hot vector along axis 0 (resp. 1).
  truncatedNormal: Generate a random tensor where each individual element is picked in a normal distribution with the given standard deviation.
  randomUniform: Generate a random tensor where each individual element is picked in a uniform distribution with the given bounds.
  randomOrthogonal: Generate an orthogonal matrix. If the output has more than 2 dimensions, the matrix is reshaped.
  varianceScaling: Random tensor with variance scaling according to deep-learning lore.
  (∙): Product of a weight matrix with a (batched) vector.
  (·): Dot product between two batched vectors.
  mapT: Map a function along the first dimension of a tensor.
  mapTN: Map a function along the last dimension of a tensor.
  if_: Selection of a tensor (note: this is a strict operation).
  where_: (where_ c x y)[i] = if c[i] then x[i] else y[i].
  parameterDefault: Create a parameter and initialize it with a suitable default for its type. Control the exact initializer using parameter.
  parameter: Create a parameter.
  flattenAll: Flatten all the dimensions of the tensor.
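
The operations above are all indexed by type-level shapes, so mismatched dimensions are rejected at compile time. The following is a minimal, self-contained sketch of that idea on a toy list-backed tensor; it is not TypedFlow's actual representation or API (Tensor, zerosV and concat0V here are invented for illustration):

{-# LANGUAGE DataKinds, KindSignatures, TypeOperators, ScopedTypeVariables, TypeApplications #-}
module ShapeSketch where

import GHC.TypeLits
import Data.Proxy (Proxy (..))

-- A tensor of statically known shape, stored flat in row-major order.
newtype Tensor (shape :: [Nat]) a = Tensor [a]
  deriving Show

-- A vector of zeros whose length is taken from the type.
zerosV :: forall n a. (KnownNat n, Num a) => Tensor '[n] a
zerosV = Tensor (replicate (fromIntegral (natVal (Proxy @n))) 0)

-- Concatenate along the (only) dimension; the result length is n + m,
-- checked by the compiler.
concat0V :: Tensor '[n] a -> Tensor '[m] a -> Tensor '[n + m] a
concat0V (Tensor xs) (Tensor ys) = Tensor (xs ++ ys)

main :: IO ()
main = print (concat0V (zerosV @3) (zerosV @2) :: Tensor '[5] Double)

Annotating the result as Tensor '[4] Double instead would be a type error, which is the kind of guarantee the shape-indexed operations above provide.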

TypedFlow.Learn: Loss functions and optimization strategies
(c) Jean-Philippe Bernardy, 2017; LGPL-3; jean-philippe.bernardy@gu.se; experimental

  Options: Model compiler options.
  maxGradientNorm: Apply gradient clipping.
  Model: A standard modelling function: (input value, gold value) -> (prediction, accuracy, loss).
  ModelOutput: Triple of values that are always output in a model: prediction, loss and accuracy (fields modelY, modelLoss, modelAccuracy).
  categorical: The first type argument is the number of classes. categorical logits gold returns (prediction, accuracy, loss); accuracy and prediction are averaged over the batch.
  categoricalDistribution: The first type argument is the number of classes. categoricalDistribution logits gold returns (prediction, accuracy, loss); accuracy and prediction are averaged over the batch.
  timedCategorical: timedCategorical targetWeights logits y. targetWeights is a zero-one matrix of the same size as decoder_outputs; it is intended to mask padding positions outside of the target sequence lengths with zeros.
  binary: Model with several binary outputs.
  defaultOptions: Default model compiler options.
  compile: Compile a standard model.
  compileGen: Compile a generic model with non-standard parameters ("x" and "y" must be provided as placeholders manually).
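
To make the (prediction, accuracy, loss) triple concrete, here is a minimal sketch of the same arithmetic on plain Haskell lists. This is not TypedFlow code; categoricalToy and its helpers are invented for the example and stand in for what categorical computes over tensors:

module CategoricalSketch where

import Data.List (maximumBy)
import Data.Ord (comparing)

-- Numerically naive softmax over one logit vector.
softmax :: [Double] -> [Double]
softmax xs = let es = map exp xs in map (/ sum es) es

-- Index of the largest logit.
argmax :: [Double] -> Int
argmax = fst . maximumBy (comparing snd) . zip [0 ..]

-- Cross-entropy of the gold class under the softmax distribution.
crossEntropy :: Int -> [Double] -> Double
crossEntropy gold logits = negate (log (softmax logits !! gold))

-- (predictions, accuracy, loss), with accuracy and loss averaged over
-- the batch, mirroring the shape of a ModelOutput triple.
categoricalToy :: [[Double]] -> [Int] -> ([Int], Double, Double)
categoricalToy logits gold = (preds, accuracy, loss)
  where
    preds    = map argmax logits
    n        = fromIntegral (length logits)
    accuracy = sum [1 | (p, g) <- zip preds gold, p == g] / n
    loss     = sum (zipWith crossEntropy gold logits) / n

main :: IO ()
main = print (categoricalToy [[2, 0.5, 0.1], [0.2, 0.1, 3]] [0, 2])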
TypedFlow.Layers.Core: Core layers and combinators
(c) Jean-Philippe Bernardy, 2017; LGPL-3; jean-philippe.bernardy@gu.se; experimental

  DropProb: A drop probability. (This type is used to make sure one does not confuse keep probability and drop probability.)
  EmbeddingP: Parameters for the embedding layers.
  DenseP: A dense layer is a linear function from a to b: a transformation matrix and a bias.
  embedding: Embedding layer.
  dense, (#): Dense layer (apply a linear function).
  mkDropout: Generate a dropout function. The mask applied by the returned function will be constant for any given call to mkDropout. This behaviour makes it possible to use the same mask across the several steps of an RNN.
  mkDropouts: Generate a dropout function for a heterogeneous tensor vector.
  conv: Size-preserving convolution layer.
  maxPool2D: 2-by-2 maxpool layer.
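
A minimal list-based sketch of what a dense layer plus a fixed-mask dropout amount to. This is toy code, not TypedFlow's DenseP or mkDropout; DenseToy, denseToy and mkDropoutToy are invented for the example:

module DenseSketch where

-- Parameters of a dense (fully connected) layer: one weight row per
-- output unit, plus one bias per output unit.
data DenseToy = DenseToy { weights :: [[Double]], biases :: [Double] }

-- Apply the linear function: y = W x + b.
denseToy :: DenseToy -> [Double] -> [Double]
denseToy (DenseToy w b) x = zipWith (+) b (map (dot x) w)
  where dot u v = sum (zipWith (*) u v)

-- Build a dropout function from a keep/drop mask chosen up front.
-- The returned function applies the same mask on every call, which is
-- the behaviour documented for mkDropout (one mask per RNN run).
mkDropoutToy :: Double -> [Bool] -> ([Double] -> [Double])
mkDropoutToy dropProb mask = zipWith keep mask
  where
    keep True  v = v / (1 - dropProb)  -- rescale kept activations
    keep False _ = 0                   -- dropped position

main :: IO ()
main = do
  let layer = DenseToy [[1, 0], [0, 1], [1, 1]] [0.1, 0.2, 0.3]
      drop' = mkDropoutToy 0.5 [True, False, True]
  print (drop' (denseToy layer [2, 3]))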
TypedFlow.Layers.RNN: RNN cells, layers and combinators
(c) Jean-Philippe Bernardy, 2017; LGPL-3; jean-philippe.bernardy@gu.se; experimental

  AttentionFunction: A function which attends to an external input. Typically a function of this type is a closure which has the attended input in its environment.
  AttentionScoring: An attention scoring function. It should produce a score (between 0 and 1) for each of the nValues entries of size valueSize.
  GRUP: Parameters for a GRU.
  LSTMP: Parameters for an LSTM.
  RnnLayer: A layer in an RNN. n is the length of the time sequence; state is the state propagated through time.
  RnnCell: A cell in an RNN. state is the state propagated through time.
  stackRnnLayers, (.--.): Compose two RNN layers. This is useful, for example, to combine forward and backward layers.
  bothRnnLayers, (.++.): Compose two RNN layers in parallel.
  onStates: Apply a function on the cell state(s) before running the cell itself.
  stackRnnCells, (.-.): Stack two RNN cells (the LHS is run first).
  withBypass: Run the cell, and forward the input to the output by concatenation with the output of the cell.
  timeDistribute: Convert a pure function (feed-forward layer) to an RNN cell by ignoring the RNN state.
  timeDistribute': Convert a stateless generator into an RNN cell by ignoring the RNN state.
  cellInitializerBit: Standard RNN gate initializer. (The recurrent kernel is orthogonal to avoid divergence; the input kernel is glorot.)
  lstm: Standard LSTM.
  gru: Standard GRU cell.
  uniformAttn: Combines each element of the vector h with s and applies a dense layer with the given parameters; the "winning" element of h (chosen by softmax) is returned. Arguments: scoring function, length of the input, inputs, weights.
  attentiveWithFeedback: Add some attention to an RnnCell, and feed the attention vector to the next iteration in the RNN. (This follows the diagram at https://github.com/tensorflow/nmt#background-on-the-attention-mechanism, commit 75aa22dfb159f10a1a5b4557777d9ff547c1975a.)
  luongAttention: Luong attention function (following https://github.com/tensorflow/nmt#background-on-the-attention-mechanism, commit 75aa22dfb159f10a1a5b4557777d9ff547c1975a). Essentially a dense layer with tanh activation, on top of uniform attention. Arguments: scoring function, lengths of the inputs, inputs, weights for the dense layer.
  multiplicativeScoring: Multiplicative scoring function.
  additiveScoring: An additive scoring function. See https://arxiv.org/pdf/1412.7449.pdf
  rnn: Build an RNN by repeating a cell n times.
  rnnBackward: Build an RNN by repeating a cell n times; however, the state is propagated in the right-to-left direction (decreasing indices in the time dimension of the input and output tensors).
  chainForward, chainBackward, chainForwardWithState, transposeV: Internal RNN helpers.
  gatherFinalStates: (gatherFinalStates dynLen states)[i] = states[dynLen[i]].
  rnnWithCull: rnnWithCull dynLen constructs an RNN as normal, but returns the state after step dynLen only.
  rnnBackwardsWithCull: Like rnnWithCull, but the states are threaded backwards.
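
To make the cell and layer vocabulary concrete, here is a self-contained toy in plain Haskell. It is not TypedFlow's RnnCell type; CellToy, stackCellsToy, rnnToy and sumCell are invented for the example. A cell is a state-and-input transition, stacking runs the left cell first and feeds its output to the right cell, and unrolling threads the state through time:

module RnnSketch where

import Data.List (mapAccumL)

-- A cell maps (state, input) to (new state, output).
type CellToy s a b = (s, a) -> (s, b)

-- Stack two cells: the first cell's output becomes the second cell's
-- input, and the combined state is the pair of both states.
stackCellsToy :: CellToy s a b -> CellToy t b c -> CellToy (s, t) a c
stackCellsToy c1 c2 ((s, t), a) =
  let (s', b) = c1 (s, a)
      (t', c) = c2 (t, b)
  in ((s', t'), c)

-- Unroll a cell over a sequence, threading the state through time.
rnnToy :: CellToy s a b -> s -> [a] -> (s, [b])
rnnToy cell s0 = mapAccumL (curry cell) s0

-- A trivial accumulator cell: the state is a running sum.
sumCell :: CellToy Double Double Double
sumCell (s, x) = (s + x, s + x)

main :: IO ()
main = print (rnnToy (stackCellsToy sumCell sumCell) (0, 0) [1, 2, 3])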
TypedFlow: Higher-Order Typed Binding to TensorFlow and Deep Learning Library
(c) Jean-Philippe Bernardy, 2017; LGPL-3; jean-philippe.bernardy@gu.se; experimental

Package typedflow-0.9. Modules: TypedFlow.Types, TypedFlow.TF, TypedFlow.Learn, TypedFlow.Layers.Core, TypedFlow.Layers.RNN, TypedFlow.Layers, TypedFlow.

Index of exported names, by module:

TypedFlow.Types: Gen, fromGen, GState, nextVar, genText, genParams, genTrainingPlaceholder, genPeeks, ParamInfo (paramName, paramShape, paramDType, paramVar), KnownLen, listLen, shapePeano, shapeSList, PeanoLength, SList', SList, KnownKind, kindVal, KnownBits, bitsVal, KnownTyp, typVal, KnownShape, SNat, T, fromTensor, UntypedExpression, Shape, Scalar, TFBool, Int64, Int32, Float32, Flt, Typ, NBits (B32, B64, B1), Kind (Float, Int, Bool), At, Drop, Take, Vec (VNil, VCons), SPeano, KnownPeano, peanoInt, Axis0, Axis1, Axis2, Axis3, Dim0, Dim1, Dim2, Dim3, Peano, Uncurry, Fst, Snd, Pair, HTV, FMap, Fun, All, HList, NP, V, Reverse, Length, Init, Last, Tail, (++), Sum, Product, DOC, Sat, VecTriple, VecPair, VecSing, plusAssoc, prodAssoc, prodHomo, knownProduct, initLast, knownLast, splitApp, knownAppend, hhead, htail, htmap, hmap, hendo, happ, hzip, hzipWith, hfor_, htoList, hsplit, hsnoc, vecToList, showTyp, withKnownNat, shapeSListProxy, shapeToList, showShape, showShapeMinus, showShapeLen, rememberNat, showDim, newParameter, peekAtAny, newVar, gen, setGen, withDOC, tuple, dict, funcall, binOp, unOp, assign, genFun, lambda, generate, generateFile, named.

TypedFlow.TF: ParamWithDefault, defaultInitializer, KnownTensors, travTensor, LastEqual, repeatT, zeros, ones, constant, persistent, parameter', peekAt, peekAtMany, modifyPersistent, getParameters, grad, clipByGlobalNorm, clipByValue, placeholder, reduceMeanAll, reduceSumAll, reduceSum, reduceMean, add, (+), equal, (⊝), (⊙), (⊘), (⊕), matmul, sigmoid, tanh, log, relu, round, floor, negate, split0, concatT, concat0, concat1, expandDim, expandDim0, expandDim1, squeeze0, squeeze1, reshape, flatten2, flattenN2, flatten3, inflate2, inflate3, last0, nth0, nth0', slice, slice1, unstack0, stack0, stack1, stackN, transpose, transposeN, transposeN', transpose01, transposeN01, reverseSequences, sequenceMask, gather, convolution, softmax0, softmax1, argmax, argmax0, argmax1, cast, softmaxCrossEntropyWithLogits, sigmoidCrossEntropyWithLogits, sparseSoftmaxCrossEntropyWithLogits, oneHot, oneHot0, oneHot1, truncatedNormal, randomUniform, randomOrthogonal, varianceScaling, glorotUniform, (∙), (·), mapT, mapTN, zipWithT, zipWithTN, if_, where_, parameterDefault, parameter, flattenAll, flattenHTV, inflateAll, inflateHTV.

TypedFlow.Learn: Options, maxGradientNorm, Model, ModelOutput (modelY, modelLoss, modelAccuracy), categorical, categoricalDistribution, timedCategorical, binary, defaultOptions, compile, compileGen.

TypedFlow.Layers.Core: ConvP, DropProb, EmbeddingP, DenseP (denseWeights, denseBiases), embedding, (#), dense, mkDropout, mkDropouts, conv, maxPool2D.

TypedFlow.Layers.RNN: AdditiveScoringP, AttentionFunction, AttentionScoring, GRUP, LSTMP, RnnLayer, RnnCell, stackRnnLayers, (.--.), bothRnnLayers, (.++.), onStates, stackRnnCells, (.-.), withBypass, timeDistribute, timeDistribute', cellInitializerBit, lstm, gru, uniformAttn, attentiveWithFeedback, luongAttention, multiplicativeScoring, additiveScoring, rnn, rnnBackward, rnnWithCull, rnnBackwardsWithCull.

Other (internal) entities: reduceAll, reduce, reduceSum0, squeeze, CProduct, Distrib (NormalDistr, UniformDistr), VarianceScaleMode (VSFanIn, VSFanOut, VSAvg), unsafeReshape, lambda2, EndoTensor, chainForward, chainBackward, chainForwardWithState, transposeV, gatherFinalStates, gathers.

Re-exported from base (GHC.TypeLits, GHC.Types): KnownNat, KnownSymbol, Nat, Symbol, (*), (^), (-), (<=?), (<=), CmpSymbol, CmpNat, TypeError, ErrorMessage (Text, (:<>:), (:$$:), ShowType), sameSymbol, sameNat, someSymbolVal, someNatVal, symbolVal', natVal', symbolVal, natVal, SomeNat, SomeSymbol.
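
These GHC.TypeLits re-exports are the mechanism that statically known dimensions are built on: type-level naturals and symbols can be reflected back to run-time values. A tiny self-contained sketch (shapeOf and the "batch" symbol are illustrative only; this is not TypedFlow code):

{-# LANGUAGE DataKinds, ScopedTypeVariables, TypeApplications #-}
module TypeLitsSketch where

import GHC.TypeLits (KnownNat, KnownSymbol, natVal, symbolVal)
import Data.Proxy (Proxy (..))

-- The run-time extents of a statically known two-dimensional shape.
shapeOf :: forall m n. (KnownNat m, KnownNat n) => Proxy '[m, n] -> [Integer]
shapeOf _ = [natVal (Proxy @m), natVal (Proxy @n)]

main :: IO ()
main = do
  print (shapeOf (Proxy @'[3, 4]))       -- prints [3,4]
  putStrLn (symbolVal (Proxy @"batch"))  -- a type-level name made available at run time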