Data.Datamining.Pattern
  Portability:  portable
  Stability:    experimental
  Maintainer:   amy@nualeargais.ie
  Safe Haskell: Safe-Inferred

ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one. To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for
  each element in a vector, you can scale individual vectors using
  scale.

NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the
  vector = 1.

Pattern
  A pattern to be learned or classified.

difference
  Compares two patterns and returns a non-negative number representing
  how different the patterns are. A result of 0 indicates that the
  patterns are identical.

makeSimilar target amount pattern
  Returns a modified copy of pattern that is more similar to target
  than pattern is. The magnitude of the adjustment is controlled by the
  amount parameter, which should be a number between 0 and 1. Larger
  values for amount permit greater adjustments. If amount = 1, the
  result should be identical to the target. If amount = 0, the result
  should be the unmodified pattern.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector target r vector
  Adjusts vector to move it closer to target. The amount of adjustment
  is controlled by the learning rate r, which is a number between 0
  and 1. Larger values of r permit more adjustment. If r = 1, the
  result will be identical to the target. If r = 0, the result will be
  the unmodified vector.

normalise
  Normalises a vector.

scale qs xs
  Given a vector qs of pairs of numbers, where each pair represents the
  maximum and minimum value to be expected at each index in xs, scales
  the vector xs element by element, mapping the maximum value expected
  at that index to one, and the minimum value to zero.

scaleAll
  Scales a set of vectors by determining the maximum and minimum values
  at each index in the vector, and mapping the maximum value to one,
  and the minimum value to zero.
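For concreteness, here is a minimal sketch of a custom Pattern instance
built from the helpers above. It assumes Metric is an associated type
of the Pattern class (as the export index at the end of this document
suggests); the Point type is hypothetical, introduced only for
illustration.

    {-# LANGUAGE TypeFamilies #-}
    import Data.Datamining.Pattern
      (Pattern(..), euclideanDistanceSquared, adjustVector)

    -- A hypothetical pattern type: a point in n-dimensional space.
    newtype Point = Point [Double] deriving (Show, Eq)

    instance Pattern Point where
      type Metric Point = Double
      -- difference: 0 means identical; larger means more different.
      difference (Point xs) (Point ys) = euclideanDistanceSquared xs ys
      -- makeSimilar target amount pattern: nudge pattern towards target.
      makeSimilar (Point target) amount (Point xs) =
        Point (adjustVector target amount xs)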
Data.Datamining.Clustering.Classifier
  Portability:  portable
  Stability:    experimental
  Maintainer:   amy@nualeargais.ie
  Safe Haskell: Safe-Inferred

Classifier
  A machine which learns to classify input patterns. Minimal complete
  definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences c target
  Returns the indices of all nodes in c, paired with the difference
  between target and the node's model.

classify c target
  Returns the index of the node in c whose model best matches the
  target.

train c target
  Returns a modified copy of the classifier c that has partially
  learned the target.

trainBatch c targets
  Returns a modified copy of the classifier c that has partially
  learned the targets.

classifyAndTrain c target
  Returns a tuple containing the index of the node in c whose model
  best matches the input target, and a modified copy of the classifier
  c that has partially learned the target. Invoking
  classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain c target
  Returns a tuple containing:
  1. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  2. A modified copy of the classifier c that has partially learned the
     target.
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical results.

reportAndTrain c target
  Returns a tuple containing:
  1. The index of the node in c whose model best matches the input
     target.
  2. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  3. A modified copy of the classifier c that has partially learned the
     target.
  Invoking reportAndTrain c p may be faster than invoking the
  corresponding functions separately, but they should give identical
  results.

Data.Datamining.Clustering.SOM
  Portability:  portable
  Stability:    experimental
  Maintainer:   amy@nualeargais.ie
  Safe Haskell: Safe-Inferred

SOM
  A Self-Organising Map (SOM). Although SOM implements GridMap, most
  users will only need the interface provided by Classifier. If you
  choose to use the GridMap functions, please note:
  1. The functions adjust and adjustWithKey do not increment the
     counter. You can do so manually with incrementCounter.
  2. The functions map and mapWithKey are not implemented (they just
     return an error). It would be problematic to implement them
     because the input SOM and the output SOM would have to have the
     same Metric type.

toGridMap
  Extracts the grid and current models from the SOM.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

defaultSOM gm r w t
  Creates a classifier with a default (bell-shaped) learning function.
  gm  The geometry and initial models for this classifier. A reasonable
      choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps
      is a set of random patterns.
  r   The learning rate to be applied to the BMU (Best Matching Unit)
      at time zero. The BMU is the model which best matches the current
      target pattern.
  w   The width of the bell curve at time zero.
  t   Controls how rapidly the learning rate decays. After this time,
      any learning done by the classifier will be negligible. We
      recommend setting this parameter to the number of patterns (or
      pattern batches) that will be presented to the classifier. An
      estimate is fine.

customSOM gm f
  Creates a classifier with a custom learning function.
  gm  The geometry and initial models for this classifier. A reasonable
      choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps
      is a set of random patterns.
  f   A function used to adjust the models in the classifier. This
      function will be invoked with two parameters. The first parameter
      indicates how many patterns (or pattern batches) have previously
      been presented to this classifier; typically this is used to make
      the learning rate decay over time. The second parameter is the
      grid distance from the node being updated to the BMU (Best
      Matching Unit). The output is the learning rate for that node
      (the amount by which the node's model should be updated to match
      the target). The learning rate should be between zero and one.
      A sketch of such a function appears after this module's entries.

gaussian r w d
  Calculates r * exp(-d^2 / (2 * w^2)). This form of the Gaussian
  function is useful as a learning rate function. The argument r
  specifies the highest learning rate, which will be applied to the SOM
  node that best matches the input pattern. The learning rate applied
  to other nodes depends on their distance d from the best matching
  node. The value w controls the 'width' of the Gaussian: higher values
  of w cause the learning rate to fall off more slowly with distance d.

decayingGaussian r w0 tMax
  Configures a typical learning function for classifiers. Returns a
  bell curve-shaped function. At time zero, the maximum learning rate
  (applied to the BMU) is r, and the neighbourhood width is w0. Over
  time the bell curve shrinks and the learning rate tapers off, until
  at time tMax, the learning rate is negligible.
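As an illustration of the learning-function contract described under
customSOM, the sketch below builds a decaying bell-shaped function from
gaussian. The parameter types and the linear decay schedule are
assumptions for illustration, not the library's actual decayingGaussian
implementation.

    import Data.Datamining.Clustering.SOM (gaussian)

    -- Learning rate for a node at grid distance d from the BMU, after
    -- t patterns have been presented. Both the height and the width of
    -- the bell shrink linearly, becoming negligible at t = 100.
    myLearningFunction :: Int -> Int -> Double
    myLearningFunction t d = gaussian r w (fromIntegral d)
      where
        decay = max 0 (1 - fromIntegral t / 100)
        r = 0.5 * decay           -- BMU learning rate, starting at 0.5
        w = max 1e-3 (2 * decay)  -- bell width, starting at 2; kept positive

A function like this could then be supplied as the f argument to
customSOM; decayingGaussian presumably constructs something similar,
with the decay schedule chosen by the library.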
Index (som-4.1)

  Data.Datamining.Pattern:
    ScaledVector, NormalisedVector, Pattern, Metric, difference,
    makeSimilar, magnitudeSquared, euclideanDistanceSquared,
    adjustVector, normalise, scale, scaleAll

  Data.Datamining.Clustering.Classifier:
    Classifier, toList, numModels, models, differences, classify,
    train, trainBatch, classifyAndTrain, diffAndTrain, reportAndTrain

  Data.Datamining.Clustering.SOM and
  Data.Datamining.Clustering.SOMInternal
  (sections: SOMs, GridMaps, Learning functions, Counter):
    SOM, toGridMap, trainNeighbourhood, incrementCounter, defaultSOM,
    customSOM, gaussian, decayingGaussian

  Internal to Data.Datamining.Clustering.SOMInternal:
    norm, scaleValue, quantify, quantify', currentLearningFunction,
    adjustNode, justTrain

  Instances:
    Pattern ScaledVector, Pattern NormalisedVector, Classifier SOM,
    GridMap SOM, Grid SOM, Foldable SOM
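Putting the pieces together, here is a minimal end-to-end sketch of
building and training a SOM with the functions documented above. The
grid-package module names (Math.Geometry.Grid, Math.Geometry.GridMap)
and the numeric parameters are assumptions; check them against your
installed versions.

    import Data.Datamining.Pattern (normalise)
    import Data.Datamining.Clustering.Classifier (classify, trainBatch)
    import Data.Datamining.Clustering.SOM (defaultSOM)
    import Math.Geometry.Grid (hexHexGrid)
    import Math.Geometry.GridMap (lazyGridMap)

    main :: IO ()
    main = do
      -- A 7-tile hexagonal grid, with one initial model per tile
      -- (fixed here; random patterns would be the usual choice).
      let g  = hexHexGrid 2
          ps = map normalise
                 [[0,1],[1,0],[1,1],[0.2,0.8],[0.8,0.2],[0.3,0.3],[0.9,0.5]]
          -- r = 0.5 (BMU rate at time zero), w = 2 (bell width at time
          -- zero), t = 2 (we will present two patterns)
          s  = defaultSOM (lazyGridMap g ps) 0.5 2 2
          s' = trainBatch s (map normalise [[0.1,0.9],[0.9,0.1]])
      -- Index of the node whose model best matches a new input.
      print (classify s' (normalise [0.1,0.9]))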