mltool-0.1.0.1 API documentation. All modules: (c) Alexander Ignatyev, 2016-2017; license BSD-3; stability experimental; portability POSIX; Safe Haskell: None.

Module MachineLearning.Types (Common Types)

  Matrix: matrix type (of Doubles).
  Vector: vector type (of Doubles).
  R: scalar type (Double).

Module MachineLearning.Regularization (Regularization)

  Regularization: regularization kind, either RegNone (no regularization) or L2 (L2 regularization).
  costReg: calculates regularization for the Model.cost function. Takes the regularization parameter and theta.
  gradientReg: calculates regularization for the Model.gradient function. Takes the regularization parameter and theta.

Module MachineLearning.Model (Regression Model)

  The Model type class has three methods:
  hypothesis: hypothesis function, a.k.a. score function (for classification problems). Takes X (m x n) and theta (n x 1), returns y (m x 1).
  cost: cost function J(theta), a.k.a. loss function. Takes the regularization parameter, matrix X (m x n), vector y (m x 1) and vector theta (n x 1).
  gradient: gradient function. Takes the regularization parameter, X (m x n), y (m x 1) and theta (n x 1). Returns the vector of gradients (n x 1).

Module MachineLearning.LeastSquaresModel (Least Squares Model)

  Exports the LeastSquaresModel type (constructor LeastSquares) together with its Model instance.

Module MachineLearning.LogisticModel (Logistic Regression Model)

  Exports the LogisticModel type (constructor Logistic) together with its Model instance.
  sigmoid: calculates the sigmoid function.
  sigmoidGradient: calculates the derivative of the sigmoid function.

Module MachineLearning.Optimization.GradientDescent (Gradient Descent)

  gradientDescent: Gradient Descent method implementation. See MachineLearning.Regression for usage details.

Module MachineLearning.Optimization.MinibatchGradientDescent (Minibatch Gradient Descent)

  minibatchGradientDescent: Minibatch Gradient Descent method implementation. See MachineLearning.Regression for usage details. Takes the seed, batch size, learning rate (alpha), model to learn, epsilon, maximum number of iterations, regularization parameter (lambda), matrix of features X, output vector y and the vector of initial weights (theta, or w). Returns the vector of weights and the learning path. The internal variant minibatchGradientDescentM takes the same parameters except the seed.

Module MachineLearning.Optimization (Optimization)

  MinimizeMethod has five constructors:
    GradientDescent: gradient descent; takes alpha. Requires feature normalization.
    MinibatchGradientDescent: minibatch gradient descent; takes the seed, batch size and alpha.
    ConjugateGradientFR: Fletcher-Reeves conjugate gradient algorithm; takes the size of the first trial step (0.1 is fine) and tol (0.1 is fine).
    ConjugateGradientPR: Polak-Ribiere conjugate gradient algorithm; takes the size of the first trial step (0.1 is fine) and tol (0.1 is fine).
    BFGS2: Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm; takes the size of the first trial step (0.1 is fine) and tol (0.1 is fine).
  minimize: returns the solution vector (theta) and the optimization path. Each row of the optimization path has the format [iteration number, cost function value, theta values...]. Takes the model (Least Squares, Logistic Regression etc.), epsilon (the desired precision of the solution), the maximum number of iterations allowed, the regularization parameter, X, y and the initial solution theta.
  checkGradient: gradient checking function. Approximates the derivatives of the Model's cost function, calculates the derivatives using the Model's gradient function, and returns the 2-norm of the difference between the two. Takes the model, regularization, X, y, theta and epsilon (used to approximate the derivatives; 1e-4 is a good value).
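To make the gradient descent update concrete, here is a minimal, self-contained sketch of the rule theta := theta - alpha * grad J(theta), written over plain lists rather than the package's Matrix and Vector types; gradientDescentSketch and the toy cost function are illustrative names, not part of mltool.

    -- One gradient descent loop: stop after maxIter steps or once the
    -- cost improvement falls below eps (mirroring the epsilon parameter
    -- of minimize above).
    gradientDescentSketch
      :: Double                   -- alpha, learning rate
      -> Double                   -- eps, desired precision of the solution
      -> Int                      -- maximum number of iterations allowed
      -> ([Double] -> Double)     -- cost function J(theta)
      -> ([Double] -> [Double])   -- gradient of J
      -> [Double]                 -- initial theta
      -> [Double]                 -- resulting theta
    gradientDescentSketch alpha eps maxIter cost grad = go maxIter
      where
        go n theta
          | n <= 0                               = theta
          | abs (cost theta' - cost theta) < eps = theta'
          | otherwise                            = go (n - 1) theta'
          where theta' = zipWith (\t g -> t - alpha * g) theta (grad theta)

    -- Example: J(theta) = (theta0 - 3)^2 + (theta1 + 1)^2, minimum at [3, -1].
    main :: IO ()
    main = print (gradientDescentSketch 0.1 1e-12 10000 cost grad [0, 0])
      where
        cost [a, b] = (a - 3) ^ 2 + (b + 1) ^ 2
        cost _      = error "two parameters expected"
        grad [a, b] = [2 * (a - 3), 2 * (b + 1)]
        grad _      = error "two parameters expected"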
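checkGradient itself can be pictured with a central-difference approximation; the sketch below is an illustrative stand-in for the package's implementation, again over plain lists and using the suggested eps of 1e-4.

    -- Compares an analytic gradient against the numeric approximation
    -- (J(theta + eps*e_i) - J(theta - eps*e_i)) / (2*eps) for each i,
    -- returning the 2-norm of the difference.
    checkGradientSketch
      :: ([Double] -> Double)    -- cost function J
      -> ([Double] -> [Double])  -- analytic gradient of J
      -> [Double]                -- theta at which to check
      -> Double                  -- eps (1e-4 is a good value)
      -> Double                  -- 2-norm of (numeric - analytic)
    checkGradientSketch cost grad theta eps =
      sqrt (sum (map (^ 2) (zipWith (-) numeric (grad theta))))
      where
        numeric  = [ (cost (bump i eps) - cost (bump i (-eps))) / (2 * eps)
                   | i <- [0 .. length theta - 1] ]
        bump i d = [ if j == i then t + d else t | (j, t) <- zip [0 ..] theta ]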
Module MachineLearning.Regression (Regression)

  normalEquation: normal equation using the inverse; does not require feature normalization. Takes X and y, returns theta.
  normalEquation_p: normal equation using the pseudo-inverse; requires feature normalization. Takes X and y, returns theta.

Module MachineLearning.Utils (Utils)

  listOfTuplesToList: converts a list of tuples into a list.
  Also exports the reduction helpers reduceByRowsV, reduceByColumnsV, reduceByRows, reduceByColumns, sumByColumns and sumByRows.

Module MachineLearning.Classification.Internal (Classification Internal module)

  calcAccuracy: calculates the accuracy of classification predictions. Takes the vector of expected y and the vector of predicted y. Returns a number between 0 and 1; the closer to 1, the better the accuracy. Suitable for both classification types: binary and multiclass.
  processOutputOneVsAll: processes outputs for One-vs-All classification. Takes the number of labels and the output vector y. Returns a list of vectors of binary outputs (One-vs-All classification). Labels are assumed to be integers starting at 0.

Module MachineLearning.Classification.Binary (Binary Classification)

  predict: binary classification prediction function. Takes a matrix of features X and a vector theta. Returns the predicted class.
  learn: learns binary classification. Takes the minimization method (e.g. BFGS2 0.1 0.1); epsilon, the desired precision of the solution; the maximum number of iterations allowed; the regularization parameter lambda; the matrix X; the binary vector y; and the initial theta. Returns the solution vector and the optimization path.

Module MachineLearning.Classification.OneVsAll (One-vs-All Classification)

  predict: One-vs-All classification prediction function. Takes a matrix of features X and a list of vectors theta; returns the predicted class number, assuming that class numbers start at 0.
  learn: learns One-vs-All classification. Takes the minimization method (e.g. BFGS2 0.1 0.1); epsilon, the desired precision of the solution; the maximum number of iterations allowed; the regularization parameter lambda; the number of labels; the matrix X; the vector y; and the initial theta list. Returns the solution vectors and optimization paths.
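The One-vs-All output processing described above amounts to deriving one binary output vector per label; a minimal sketch over plain lists, assuming labels are integers starting at 0 (the sketch name is illustrative):

    -- Element i of the k-th list is 1 when sample i carries label k.
    processOutputOneVsAllSketch :: Int -> [Int] -> [[Double]]
    processOutputOneVsAllSketch numLabels ys =
      [ [ if y == k then 1 else 0 | y <- ys ] | k <- [0 .. numLabels - 1] ]

    -- ghci> processOutputOneVsAllSketch 3 [0, 2, 1, 0]
    -- [[1.0,0.0,0.0,1.0],[0.0,0.0,1.0,0.0],[0.0,1.0,0.0,0.0]]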
Module MachineLearning.NeuralNetwork.Layer (Neural Network's Layer)

  Exports the Layer type (lUnits, lForward, lBackward, lActivation, lActivationGradient, lInitializeThetaM), the Cache type (cacheZ, cacheX, cacheW) and the affineForward and affineBackward functions.

Module MachineLearning.NeuralNetwork.Regularization (Regularization)

  forwardReg: calculates regularization for forward propagation. Takes the regularization parameter and the theta list.
  backwardReg: calculates regularization for a step of backward propagation. Takes the regularization parameter and theta.

Module MachineLearning.NeuralNetwork.Topology (Neural Network's Topology)

  Topology: a neural network topology has at least 2 elements, the number of inputs and the number of outputs, with the sizes of the hidden layers between those 2 elements.
  LossFunc: the loss function's type. Takes x, weights and y.
  makeTopology: makes a neural network's topology. Takes the number of inputs, the list of hidden layers, the output layer and the loss function.
  loss: calculates the loss for the given topology. Takes the topology, regularization, x, weights and y.
  propagateForward: implementation of the forward propagation algorithm (one forward step per layer).
  propagateBackward: implementation of the backward propagation algorithm (one backward step per layer).
  numberOutputs: returns the number of outputs of the given topology.
  getThetaSizes: returns the dimensions of the weight matrices for the given neural network topology.
  initializeTheta: creates and initializes the weights vector with random values for the given neural network topology. Takes a seed to initialize the random number generator as its first parameter.
  initializeThetaIO: creates and initializes the weights vector with random values for the given neural network topology, inside the IO monad.
  initializeThetaM: creates and initializes the weights vector with random values for the given neural network topology, inside RandomMonad.
  initializeThetaListM: creates and initializes the list of weight matrices with random values for the given neural network topology.
  flatten: flattens a list of matrices into a vector.
  unflatten: unflattens a vector into a list of matrices for the given neural network topology.

Module MachineLearning.NeuralNetwork (Neural Network)

  NeuralNetworkModel: neural network model. Takes a neural network topology as a constructor argument.
  scores: score function. Takes a topology, X and the theta list.

Module MachineLearning.NeuralNetwork.ReluActivation (ReLU Activation): ReLU activation (relu).
Module MachineLearning.NeuralNetwork.TanhActivation (Tanh Activation): tanh activation.
Module MachineLearning.NeuralNetwork.SigmoidActivation (Sigmoid): sigmoid activation.
Module MachineLearning.NeuralNetwork.MultiSvmLoss (Multi SVM Loss): multiclass SVM loss; svmD is the SVM delta.
Module MachineLearning.NeuralNetwork.SoftmaxLoss (Softmax Loss): softmax loss.
Module MachineLearning.NeuralNetwork.LogisticLoss (Logistic Loss): logistic loss.

Module MachineLearning.Random (Random generation utility functions)

  sample: samples n (given as the second parameter) values from a list (given as the third parameter).
  sampleM: samples n (given as the second parameter) values from a list (given as the third parameter), inside RandomMonad.
  getRandomRListM: returns a list of random values distributed in a closed interval range. Takes the list's length and the range; returns the list inside RandomMonad.
  getRandomRVectorM: returns a vector of random values distributed in a closed interval range. Takes the vector's length and the range.
  getRandomRMatrixM: returns a matrix of random values distributed in a closed interval range. Takes the number of rows, the number of columns and the range.
  randomsInRangesM (internal): takes a list of ranges (lo, hi); returns a list of random values uniformly distributed in the corresponding closed intervals [lo, hi].

Module MachineLearning.NeuralNetwork.WeightInitialization (Weight Initialization)

  nguyen: the weight initialization algorithm discussed in Nguyen et al. (https://web.stanford.edu/class/ee373b/nninitialization.pdf): Nguyen, D., Widrow, B. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. In Proc. IJCNN, 1990; 3: 21-26.
  he: the weight initialization algorithm discussed in He et al. (https://arxiv.org/abs/1502.01852): Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.

Module MachineLearning.NeuralNetwork.TopologyMaker (Topology Maker)

  Exports the Loss type (LLogistic, LSoftmax, LMultiSvm) and the Activation type (ASigmoid, ARelu, ATanh).
  makeTopology: creates a topology. Takes the number of inputs, the number of outputs and the list of hidden layers.
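The per-layer affine step behind propagateForward and affineForward above is z = W x + b followed by the layer's activation; here is a minimal, self-contained sketch over plain lists (the names and the list representation are illustrative, not mltool's types):

    -- One forward step of a fully connected layer:
    -- z = W x + b, then a = activation z.
    affineForwardSketch
      :: ([Double] -> [Double])  -- activation function
      -> [[Double]]              -- weight matrix W, one row per output unit
      -> [Double]                -- bias vector b
      -> [Double]                -- input activations x
      -> [Double]                -- output activations
    affineForwardSketch activation w b x = activation z
      where
        z       = zipWith (+) b (map (dot x) w)
        dot u v = sum (zipWith (*) u v)

    -- With a ReLU activation (cf. ReluActivation above):
    -- ghci> affineForwardSketch (map (max 0)) [[1, -1], [2, 0]] [0, 1] [3, 5]
    -- [0.0,7.0]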
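For the He initialization above, weights are drawn from a zero-mean normal distribution with variance 2/fanIn. A self-contained sketch using System.Random and a Box-Muller transform; the function names are illustrative, and the package's RandomMonad plumbing is omitted:

    import System.Random (mkStdGen, randomRs)

    -- One standard normal sample from two uniforms via Box-Muller.
    boxMuller :: Double -> Double -> Double
    boxMuller u1 u2 = sqrt (-2 * log u1) * cos (2 * pi * u2)

    -- n weights for a layer with fanIn inputs, drawn from N(0, 2/fanIn).
    heWeightsSketch :: Int -> Int -> Int -> [Double]
    heWeightsSketch seed fanIn n = map (* scale) normals
      where
        scale   = sqrt (2 / fromIntegral fanIn)
        us      = randomRs (1.0e-12, 1.0) (mkStdGen seed)  -- avoid log 0
        normals = take n [ boxMuller u1 u2 | (u1, u2) <- pairs us ]
        pairs (a : b : rest) = (a, b) : pairs rest
        pairs _              = []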
Module MachineLearning.Clustering (Clustering)

  Cluster: cluster type (a list of samples associated with the cluster).
  kmeansIterM: runs the K-Means algorithm once inside the Random monad. Takes the list of samples, the number of clusters (K) and the iteration number; returns (list of clusters, cost value) inside the Random monad.
  kmeans: clusters data using the K-Means algorithm inside the Random monad. Runs the K-Means algorithm N times and returns the clustering with the smallest cost. Takes the number of K-Means algorithm runs (N), the data to cluster and the desired number of clusters (K); returns the list of clusters inside the Random monad.

  Internal helpers:
  nearestCentroidIndex: gets the index of the nearest centroid to a sample. Takes the list of cluster centroids and the sample; returns the index of the nearest centroid.
  calcClusterCost: calculates the cost associated with one cluster. Takes the cluster and the cluster centroid; returns the cost value.
  calcCost: calculates the cost value for all clusters. Takes the cluster list and the list of cluster centroids; returns the cost value.
  getNewCentroid: calculates the centroid (middle point) of the given cluster.
  moveCentroids: calculates new cluster centroids for each cluster. Takes the list of clusters; returns the list of cluster centroids.
  buildClusterList: builds the cluster list from a list of cluster indices. Takes the list of samples and the list of cluster indices (the associated cluster index for each sample); returns the list of clusters.
  kmeansIter: runs one K-Means pass. Takes the list of samples, the number of clusters (K) and the list of initial centroids; returns (list of clusters, cost value).

Module MachineLearning.TerminalProgress (Learn function with progress bar for terminal)

  learnWithProgressBar: learns the given function while displaying a progress bar in the terminal. Takes the function, the initial theta and the number of iterations to call the function. Returns theta and the optimization path (see MachineLearning.Optimization for details).
  learnOneVsAllWithProgressBar: learns the given function while displaying a progress bar in the terminal. Takes the function, the list of outputs, the list of initial thetas and the number of iterations to call the function. Returns the list of thetas and the list of optimization paths (see MachineLearning.Optimization for details).
  buildOptPathMatrix (internal): builds a single optimization path matrix from a list of optimization path matrices.

Module MachineLearning (Machine Learning)

  addBiasDimension: adds the bias dimension to the feature matrix.
  removeBiasDimension: removes the bias dimension.
  meanStddev: calculates the mean and stddev values of every feature. Takes the feature matrix X; returns a pair of vectors of means and stddevs. The module also exports featureNormalization.
  mapFeatures: maps the features into all polynomial terms of X up to the degree-th power.
  splitToXY: splits the data matrix into the features matrix X and the vector of outputs y.

Module MachineLearning.Classification.MultiClass (MultiClass Classification)

  MultiClassModel: a Model wrapper around a Classifier.
  Classifier: a type class representing multi-class classification models. Its methods:
    cscore: score function.
    chypothesis: hypothesis function. Takes X (m x n) and theta (n x k), returns y (m x k).
    ccost: cost function J(theta), a.k.a. loss function. Takes the regularization parameter lambda, matrix X (m x n), vector y (m x 1) and vector theta (n x 1).
    cgradient: gradient function. Takes the regularization parameter lambda, X (m x n), y (m x 1) and theta (n x 1). Returns the vector of gradients (n x 1).
    cnumClasses: returns the number of classes.
  processOutput: processes outputs for multiclass classification. Takes a Classifier and the output vector y; returns a matrix of binary outputs. Labels are assumed to be integers starting at 0.
  ccostReg: calculates regularization for Classifier.ccost. Takes the regularization parameter and theta.
  cgradientReg: calculates regularization for Classifier.cgradient. Takes the regularization parameter and theta.

Module MachineLearning.MultiSvmClassifier (Multiclass Support Vector Machines Classifier)

  MultiSvm: multiclass SVM classifier; takes delta and the number of features. Delta = 1.0 is good for all cases.

Module MachineLearning.SoftmaxClassifier (Softmax Classifier)

  Softmax: softmax classifier; takes the number of classes.

Module MachineLearning.PCA (Principal Component Analysis)

  getDimReducer: gets the dimensionality reduction function, the retained variance (0..1) and the reduced X for the given matrix X and the number of dimensions to retain.
  getDimReducer_rv: gets the dimensionality reduction function, the retained number of dimensions and the reduced X for the given matrix X and the variance to retain (0..1].
  covarianceMatrix (internal): computes the "covariance matrix".
  pca (internal): computes the eigenvectors (matrix U) and singular values (matrix S) of the given covariance matrix.
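The assignment and update steps behind the K-Means helpers in MachineLearning.Clustering above (nearestCentroidIndex and getNewCentroid) fit in a few lines; a minimal sketch over plain lists of Doubles, with illustrative names and the Random-monad plumbing omitted:

    import Data.List (minimumBy)
    import Data.Ord (comparing)

    type Point = [Double]

    -- Index of the centroid nearest to the sample,
    -- by squared Euclidean distance.
    nearestCentroidIndexSketch :: [Point] -> Point -> Int
    nearestCentroidIndexSketch centroids x =
      fst (minimumBy (comparing snd) (zip [0 ..] (map (dist2 x) centroids)))
      where dist2 u v = sum (map (^ 2) (zipWith (-) u v))

    -- Centroid (middle point) of a non-empty cluster.
    getNewCentroidSketch :: [Point] -> Point
    getNewCentroidSketch cluster = map (/ n) (foldr1 (zipWith (+)) cluster)
      where n = fromIntegral (length cluster)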
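meanStddev and featureNormalization in the MachineLearning module above pair naturally: each feature is rescaled to (x - mean) / stddev. A plain-list sketch for a single feature column; whether the package uses the population or sample stddev is not stated here, so the population form below is an assumption:

    -- Mean and population standard deviation of one feature column.
    meanStddevSketch :: [Double] -> (Double, Double)
    meanStddevSketch xs = (mu, sigma)
      where
        n     = fromIntegral (length xs)
        mu    = sum xs / n
        sigma = sqrt (sum [ (x - mu) ^ 2 | x <- xs ] / n)

    -- Rescales a feature column to zero mean and unit variance.
    featureNormalizationSketch :: [Double] -> [Double]
    featureNormalizationSketch xs = [ (x - mu) / sigma | x <- xs ]
      where (mu, sigma) = meanStddevSketch xs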
Package mltool-0.1.0.1 modules:

  MachineLearning
  MachineLearning.Types
  MachineLearning.Utils
  MachineLearning.Random
  MachineLearning.Regularization
  MachineLearning.Model
  MachineLearning.LeastSquaresModel
  MachineLearning.LogisticModel
  MachineLearning.Regression
  MachineLearning.Optimization
  MachineLearning.Optimization.GradientDescent
  MachineLearning.Optimization.MinibatchGradientDescent
  MachineLearning.Classification.Internal
  MachineLearning.Classification.Binary
  MachineLearning.Classification.OneVsAll
  MachineLearning.Classification.MultiClass
  MachineLearning.MultiSvmClassifier
  MachineLearning.SoftmaxClassifier
  MachineLearning.NeuralNetwork
  MachineLearning.NeuralNetwork.Layer
  MachineLearning.NeuralNetwork.Regularization
  MachineLearning.NeuralNetwork.Topology
  MachineLearning.NeuralNetwork.TopologyMaker
  MachineLearning.NeuralNetwork.ReluActivation
  MachineLearning.NeuralNetwork.SigmoidActivation
  MachineLearning.NeuralNetwork.TanhActivation
  MachineLearning.NeuralNetwork.MultiSvmLoss
  MachineLearning.NeuralNetwork.SoftmaxLoss
  MachineLearning.NeuralNetwork.LogisticLoss
  MachineLearning.NeuralNetwork.WeightInitialization
  MachineLearning.Clustering
  MachineLearning.TerminalProgress
  MachineLearning.PCA