som-6.4 API documentation

Data.Datamining.Pattern
(portability: portable; stability: experimental; maintainer: amy@nualeargais.ie; Safe Haskell: Safe-Inferred)

ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one.  To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for
  each element in a vector, you can scale individual vectors using
  scale.

NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the
  vector = 1.

Pattern
  A pattern to be learned or classified.

difference
  Compares two patterns and returns a non-negative number representing
  how different the patterns are.  A result of 0 indicates that the
  patterns are identical.

makeSimilar target amount pattern
  Returns a modified copy of pattern that is more similar to target
  than pattern is.  The magnitude of the adjustment is controlled by
  the amount parameter, which should be a number between 0 and 1.
  Larger values for amount permit greater adjustments.  If amount=1,
  the result should be identical to the target.  If amount=0, the
  result should be the unmodified pattern.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector target r vector
  Adjusts vector to move it closer to target.  The amount of
  adjustment is controlled by the learning rate r, which is a number
  between 0 and 1.  Larger values of r permit more adjustment.  If
  r=1, the result will be identical to the target.  If r=0, the result
  will be the unmodified vector.

normalise
  Normalises a vector.

scale qs xs
  Given a vector qs of pairs of numbers, where each pair represents
  the maximum and minimum value to be expected at each index in xs,
  scale qs xs scales the vector xs element by element, mapping the
  maximum value expected at that index to one, and the minimum value
  to zero.

scaleAll
  Scales a set of vectors by determining the maximum and minimum
  values at each index in the vector, and mapping the maximum value to
  one, and the minimum value to zero.
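The Pattern class is what connects a user-defined data type to the
classifiers described below.  As a minimal sketch, here is how an
instance might look for a hypothetical one-dimensional pattern type.
The Weight type is invented for illustration, and the exact class
shape (an associated Metric type, and the target/amount/pattern
argument order of makeSimilar) is assumed from the descriptions above
rather than taken from the package source.

    {-# LANGUAGE TypeFamilies #-}

    import Data.Datamining.Pattern (Pattern (..))

    -- A hypothetical one-dimensional pattern type; "Weight" is not part
    -- of the library, it exists only to illustrate the instance.
    newtype Weight = Weight Double deriving (Show, Eq)

    instance Pattern Weight where
      type Metric Weight = Double
      -- difference: non-negative, and 0 only when the patterns are identical.
      difference (Weight a) (Weight b) = abs (a - b)
      -- makeSimilar target amount pattern: nudge pattern towards target.
      -- amount=1 reproduces the target; amount=0 leaves the pattern unchanged.
      makeSimilar (Weight t) r (Weight x) = Weight (x + r * (t - x))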
Data.Datamining.Clustering.Classifier
(portability: portable; stability: experimental; maintainer: amy@nualeargais.ie; Safe Haskell: Safe-Inferred)

Classifier
  A machine which learns to classify input patterns.  Minimal complete
  definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences c target
  Returns the indices of all nodes in c, paired with the difference
  between target and the node's model.

classify c target
  Returns the index of the node in c whose model best matches the
  target.

train c target
  Returns a modified copy of the classifier c that has partially
  learned the target.

trainBatch c targets
  Returns a modified copy of the classifier c that has partially
  learned the targets.

classifyAndTrain c target
  Returns a tuple containing the index of the node in c whose model
  best matches the input target, and a modified copy of the classifier
  c that has partially learned the target.  Invoking
  classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain c target
  Returns a tuple containing:
  1. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  2. A modified copy of the classifier c that has partially learned
     the target.
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical
  results.

reportAndTrain c target
  Returns a tuple containing:
  1. The index of the node in c whose model best matches the input
     target.
  2. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  3. A modified copy of the classifier c that has partially learned
     the target.
  Invoking reportAndTrain c p may be faster than invoking the
  individual functions separately, but they should give identical
  results.

Data.Datamining.Clustering.DSOM
(portability: portable; stability: experimental; maintainer: amy@nualeargais.ie; Safe Haskell: Safe-Inferred)

DSOM
  A Dynamic Self-Organising Map (DSOM).  Although DSOM implements
  GridMap, most users will only need the interface provided by
  Data.Datamining.Clustering.Classifier.  If you choose to use the
  GridMap functions, please note:
  - The functions adjust and adjustWithKey do not increment the
    counter.  You can do so manually with incrementCounter.
  - The functions map and mapWithKey are not implemented (they just
    return an error).  It would be problematic to implement them
    because the input DSOM and the output DSOM would have to have the
    same Metric type.

toGridMap
  Extracts the grid and current models from the DSOM.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target.  Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

defaultDSOM
  Creates a classifier with a default (bell-shaped) learning function.
  Usage is defaultDSOM gm r p, where:
  gm    The geometry and initial models for this classifier.  A
        reasonable choice here is lazyGridMap g ps, where g is a
        HexHexGrid, and ps is a set of random patterns.
  r, p  The first two parameters to rougierLearningFunction (the
        maximal learning rate and the elasticity).

customDSOM
  Creates a classifier with a custom learning function.  Usage is
  customDSOM gm f, where:
  gm  The geometry and initial models for this classifier.  A
      reasonable choice here is lazyGridMap g ps, where g is a
      HexHexGrid, and ps is a set of random patterns.
  f   A function used to determine the learning rate (for adjusting
      the models in the classifier).  This function will be invoked
      with three parameters.  The first parameter indicates how
      different the BMU is from the input pattern.  The second
      parameter indicates how different the model of the node
      currently being trained is from the input pattern.  The third
      parameter is the grid distance from the BMU to the node
      currently being trained, as a fraction of the maximum grid
      distance.  The output is the learning rate for that node (the
      amount by which the node's model should be updated to match the
      target).  The learning rate should be between zero and one.

rougierLearningFunction
  Configures a learning function that depends not on the time, but on
  how good a model we already have for the target.  If the BMU is an
  exact match for the target, no learning occurs.  Usage is
  rougierLearningFunction r p, where r is the maximal learning rate
  (0 <= r <= 1), and p is the elasticity.
  NOTE: When using this learning function, ensure that
  abs . difference is always between 0 and 1, inclusive.  Otherwise
  you may get invalid learning rates.
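To make the customDSOM contract concrete, here is a sketch of the kind
of learning function it expects, based purely on the three-parameter
description above.  The function name, the choice of Double, the
elasticity constant and the formula itself are all illustrative; this
is not the formula used by rougierLearningFunction.

    -- A sketch of a custom DSOM learning function: it receives the
    -- difference between the BMU's model and the input, the difference
    -- between the current node's model and the input, and the grid
    -- distance to the BMU as a fraction of the maximum grid distance,
    -- and returns a learning rate between 0 and 1.
    myDSOMLearningFunction
      :: Double  -- ^ difference between the BMU's model and the input pattern
      -> Double  -- ^ difference between this node's model and the input pattern
      -> Double  -- ^ grid distance from the BMU, as a fraction of the maximum
      -> Double  -- ^ learning rate for this node, between 0 and 1
    myDSOMLearningFunction bmuDiff _nodeDiff gridDist = clamp rate
      where
        -- Learn more when the best-matching unit is still a poor match
        -- for the input, and less for nodes far from the BMU on the grid.
        rate = abs bmuDiff * exp (negate (gridDist * gridDist) / (elasticity * elasticity))
        elasticity = 0.5          -- illustrative constant, not a library default
        clamp x = max 0 (min 1 x) -- keep the result in [0, 1]

Note that when the BMU is an exact match (the first argument is zero),
the rate is zero and no learning occurs, mirroring the behaviour
described for rougierLearningFunction.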
Data.Datamining.Clustering.SOM
(portability: portable; stability: experimental; maintainer: amy@nualeargais.ie; Safe Haskell: Safe-Inferred)

SOM
  A Self-Organising Map (SOM).  Although SOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier.  If you choose to use the
  GridMap functions, please note:
  - The functions adjust and adjustWithKey do not increment the
    counter.  You can do so manually with incrementCounter.
  - The functions map and mapWithKey are not implemented (they just
    return an error).  It would be problematic to implement them
    because the input SOM and the output SOM would have to have the
    same Metric type.

toGridMap
  Extracts the grid and current models from the SOM.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target.  Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

defaultSOM
  Creates a classifier with a default (bell-shaped) learning function.
  Usage is defaultSOM gm r0 rf w0 wf tf, where:
  gm  The geometry and initial models for this classifier.  A
      reasonable choice here is lazyGridMap g ps, where g is a
      HexHexGrid, and ps is a set of random patterns.
  r0, rf, w0, wf, tf  See the description of decayingGaussian2.

customSOM
  Creates a classifier with a custom learning function.  Usage is
  customSOM gm f, where:
  gm  The geometry and initial models for this classifier.  A
      reasonable choice here is lazyGridMap g ps, where g is a
      HexHexGrid, and ps is a set of random patterns.
  f   A function used to adjust the models in the classifier.  This
      function will be invoked with two parameters.  The first
      parameter indicates how many patterns (or pattern batches) have
      previously been presented to this classifier.  Typically this is
      used to make the learning rate decay over time.  The second
      parameter is the grid distance from the node being updated to
      the BMU (Best Matching Unit).  The output is the learning rate
      for that node (the amount by which the node's model should be
      updated to match the target).  The learning rate should be
      between zero and one.

decayingGaussian
  Configures one possible learning function for classifiers.
  decayingGaussian r0 w0 tMax returns a bell curve-shaped function.
  At time zero, the maximum learning rate (applied to the BMU) is r0,
  and the neighbourhood width is w0.  Over time the neighbourhood
  width shrinks and the learning rate tapers off.

decayingGaussian2
  Configures a typical learning function for classifiers.
  decayingGaussian2 r0 rf w0 wf tf returns a bell curve-shaped
  function.  At time zero, the maximum learning rate (applied to the
  BMU) is r0, and the neighbourhood width is w0.  Over time the bell
  curve shrinks and the learning rate tapers off, until at time tf the
  maximum learning rate (applied to the BMU) is rf, and the
  neighbourhood width is wf.  Normally the parameters should be chosen
  such that:
    0 < rf << r0 < 1
    0 < wf << w0
  where << means "is much smaller than" (not the Haskell << operator!).
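To show how the pieces fit together, here is a minimal sketch of
building and training a SOM.  It assumes the NormalisedVector Pattern
instance provided by this package (and that the type is parameterised
as NormalisedVector Double), and that hexHexGrid and lazyGridMap come
from the companion grid package; the module names for those two
imports vary between grid versions, so they are an assumption here.
The numeric parameters are arbitrary example values.

    import Data.Datamining.Pattern (NormalisedVector, normalise)
    import Data.Datamining.Clustering.Classifier (classify, train)
    import Data.Datamining.Clustering.SOM (defaultSOM)
    -- The next two imports are assumptions: in some versions of the grid
    -- package these live in Math.Geometry.Grid.Hexagonal and
    -- Math.Geometry.GridMap.Lazy instead.
    import Math.Geometry.Grid (hexHexGrid)
    import Math.Geometry.GridMap (lazyGridMap)

    main :: IO ()
    main = do
      -- A hexagonal grid with 19 tiles, so we supply 19 initial models.
      let g  = hexHexGrid 3
          ps = [ normalise [x, 20 - x] | x <- [1 .. 19] ]
                 :: [NormalisedVector Double]
          -- r0=0.5, rf=0.1, w0=2, wf=0.5, tf=1000; see decayingGaussian2 above.
          s0 = defaultSOM (lazyGridMap g ps) 0.5 0.1 2 0.5 1000
          -- Train on one input pattern, then ask which node best matches
          -- a second, similar pattern.
          s1 = train s0 (normalise [3, 1])
      print (classify s1 (normalise [4, 1]))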