sgd-0.3.7: stochastic gradient descent. API reference for the modules
Numeric.SGD.LogSigned, Numeric.SGD.Dataset, Numeric.SGD.Grad and
Numeric.SGD. Usage sketches for each module follow the reference.

Numeric.SGD.LogSigned

  LogSigned
    Signed real value in the logarithmic domain, represented by two
    log-domain components (LogFloat values from the logfloat package):
    pos, the positive component, and neg, the negative component.
    Instances: Eq, Ord, Num, NFData.

  logSigned
    Smart LogSigned constructor.

  fromPos
    Make a LogSigned from a positive, log-domain number.

  fromNeg
    Make a LogSigned from a negative, log-domain number.

  toNorm
    Shift a LogSigned to the normal domain.

  toLogFloat
    Change the LogSigned to either a negative (Left LogFloat) or a
    positive (Right LogFloat) value.

Numeric.SGD.Dataset

  Dataset
    A dataset with elements of type a. Fields: size, the size of the
    dataset, and elemAt, which gets the dataset element with a given
    index; the set of valid indices is of the {0, 1, .., size - 1} form.

  loadData
    Lazily load a dataset from disk.

  sample
    A dataset sample of the given size.

  withVect
    Construct a dataset from a vector of elements and run the given
    handler.

  withDisk
    Construct a dataset from a list of elements, store it on disk and
    run the given handler.

  withData
    Use the disk or vector dataset representation depending on the
    first argument: when True, use withDisk, otherwise use withVect.

Numeric.SGD.Grad

  Grad
    Gradient with nonzero values stored in a logarithmic domain. Since
    values equal to zero have no impact on the update phase of the SGD
    method, it is more efficient not to store those components in the
    gradient.

  add
    Add a normal-domain double to the gradient at the given position.

  addL
    Add a log-domain, signed number to the gradient at the given
    position.

  fromList
    Construct a gradient from a list of (index, value) pairs. All
    values from the list are added at the respective gradient positions.

  fromLogList
    Construct a gradient from a list of (index, signed log-domain
    number) pairs. All values from the list are added at the respective
    gradient positions.

  toList
    Collect gradient components with values in the normal domain.

  empty
    Empty gradient, i.e. with all elements set to 0.

  parUnions
    Perform the parallel unions operation on a list of gradients.
    Experimental version. Internally, parUnionsP performs the parallel
    unions in the Par monad.

Numeric.SGD

  Para
    Vector of parameters. (Internally, MVect is a type synonym for a
    mutable vector with Double values.)

  SgdArgs
    SGD parameters controlling the learning process. Fields:
      batchSize: size of the batch
      regVar:    regularization variance
      iterNum:   number of iterations
      gain0:     initial gain parameter
      tau:       after how many iterations over the entire dataset the
                 gain parameter is halved

  sgdArgsDefault
    Default SGD parameter values.

  sgd
    A stochastic gradient descent method. A notification function can
    be used to provide the user with information about the progress of
    the learning. Arguments, in order: the SGD parameter values, the
    notification run every update, the gradient for a dataset element,
    the dataset and the starting point; the result is the parameter
    vector found by SGD. Internal helpers: addUp adds up all gradients
    and stores the results in the normal domain; scale scales the
    vector by the given value; apply applies a gradient to the
    parameters vector, that is, adds the first vector to the second one.
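
Example (Numeric.SGD.LogSigned). A minimal usage sketch of the
log-domain arithmetic described above. It relies only on the documented
operations (logSigned, toNorm, toLogFloat and the Num instance); the
numeric values are invented for illustration:

    import Numeric.SGD.LogSigned

    main :: IO ()
    main = do
      let x = logSigned 0.5       -- a positive value, stored in the log domain
          y = logSigned (-0.25)   -- a negative value
      -- The Num instance combines values without leaving the log domain;
      -- toNorm shifts the result back to the normal domain.
      print (toNorm (x + y))      -- approximately 0.25
      -- toLogFloat exposes the sign: Left for negative, Right for positive.
      case toLogFloat y of
        Left  m -> putStrLn ("negative, magnitude (log domain): " ++ show m)
        Right m -> putStrLn ("positive, magnitude (log domain): " ++ show m)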
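
Example (Numeric.SGD.Dataset). A sketch of the handler style used by
withVect and withDisk. Two assumptions are made here: that withVect
accepts a plain list (building the vector representation internally),
and that elemAt runs in IO, consistent with the disk-backed
representation. withDisk is used the same way but requires serializable
elements:

    import Numeric.SGD.Dataset

    main :: IO ()
    main = withVect [10, 20, 30 :: Double] $ \dataset -> do
      print (size dataset)      -- 3
      x <- elemAt dataset 1     -- valid indices are {0, 1, .., size - 1}
      print x                   -- 20.0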
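
Example (Numeric.SGD.Grad). A sketch of building and merging sparse
gradients. The argument order of add (gradient, position, value) is an
assumption; fromList, toList and parUnions are used as documented:

    import qualified Numeric.SGD.Grad as Grad

    main :: IO ()
    main = do
      let g  = Grad.fromList [(0, 1.5), (3, -0.5)]  -- (index, value) pairs
          g' = Grad.add g 3 0.25      -- assumed order: gradient, position, value
      print (Grad.toList g')          -- approximately [(0,1.5),(3,-0.25)]
      -- parUnions merges gradients, adding components position-wise:
      print (Grad.toList (Grad.parUnions [g, g']))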
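
Example (Numeric.SGD). A hypothetical end-to-end sketch: fitting the
one-parameter model y = w * x by following, for each (input, output)
element, the gradient of -(y - w * x)^2 / 2. Several details are
assumptions rather than confirmed API facts: that Para is an unboxed
vector of Doubles, that the notification has type Para -> Int -> IO (),
and that sgd moves the parameters in the direction of the supplied
gradient. The dataset and model are invented for illustration:

    import qualified Data.Vector.Unboxed as U
    import Numeric.SGD
    import Numeric.SGD.Dataset (withVect)
    import qualified Numeric.SGD.Grad as Grad

    -- Gradient of the objective for a single (input, output) element:
    -- d/dw [-(y - w * x)^2 / 2] = (y - w * x) * x.
    grad :: Para -> (Double, Double) -> Grad.Grad
    grad para (x, y) =
      let w = para U.! 0
      in  Grad.fromList [(0, (y - w * x) * x)]

    -- Notification run every update (assumed type).
    notify :: Para -> Int -> IO ()
    notify para k =
      putStrLn ("update " ++ show k ++ ": w = " ++ show (para U.! 0))

    main :: IO ()
    main = do
      let points = [(x, 2 * x) | x <- [1 .. 100 :: Double]]
      para <- withVect points $ \dataset ->
        sgd sgdArgsDefault notify grad dataset (U.fromList [0])
      -- w should end up near 2 (regularization permitting).
      putStrLn ("final w = " ++ show (para U.! 0))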