som package: self-organising maps
(c) Amy de Buitléir 2012-2015, BSD-style licence, amy@nualeargais.ie
(experimental, portable, Safe)

Data.Datamining.Pattern

ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one. To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for each
  element in a vector, you can scale individual vectors using scale.

NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the vector = 1.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector
  adjustVector target amount vector adjusts vector to move it closer to
  target. The amount of adjustment is controlled by the learning rate
  amount, which is a number between 0 and 1. Larger values of amount
  permit more adjustment. If amount=1, the result will be identical to
  the target. If amount=0, the result will be the unmodified pattern.

normalise
  Normalises a vector.

scale
  Given a vector qs of pairs of numbers, where each pair represents the
  maximum and minimum value to be expected at each index in xs, scale qs xs
  scales the vector xs element by element, mapping the maximum value
  expected at that index to one, and the minimum value to zero.

scaleAll
  Scales a set of vectors by determining the maximum and minimum values at
  each index in the vector, and mapping the maximum value to one, and the
  minimum value to zero.

Data.Datamining.Clustering.Classifier

Classifier
  A machine which learns to classify input patterns.
  Minimal complete definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences
  differences c target returns the indices of all nodes in c, paired with
  the difference between target and the node's model.

classify
  classify c target returns the index of the node in c whose model best
  matches the target.

train
  train c target returns a modified copy of the classifier c that has
  partially learned the target.
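The vector operations described under Data.Datamining.Pattern can be sketched as follows. These are minimal, illustrative reimplementations of the documented behaviour, not the package's own definitions (hence the primed names):

```haskell
-- Illustrative sketches of the Data.Datamining.Pattern semantics above.
-- These stand-ins are NOT the package's definitions.

-- Squared Euclidean distance between two vectors.
euclideanDistanceSquared' :: Num a => [a] -> [a] -> a
euclideanDistanceSquared' xs ys =
  sum (zipWith (\x y -> (x - y) ^ (2 :: Int)) xs ys)

-- Move each element of the pattern a fraction `amount` of the way
-- towards the corresponding element of the target.
adjustVector' :: Num a => [a] -> a -> [a] -> [a]
adjustVector' target amount = zipWith (\t x -> x + amount * (t - x)) target

main :: IO ()
main = do
  print (euclideanDistanceSquared' [0, 0] [3, 4])  -- 25
  print (adjustVector' [1, 1] 1 [0, 0])            -- amount=1: equals target
  print (adjustVector' [1, 1] 0 [0, 0])            -- amount=0: unchanged
```

Note how the two boundary cases match the documentation: an amount of 1 replaces the pattern with the target, and an amount of 0 leaves it untouched.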
trainBatch
  trainBatch c targets returns a modified copy of the classifier c that
  has partially learned the targets.

classifyAndTrain
  classifyAndTrain c target returns a tuple containing the index of the
  node in c whose model best matches the input target, and a modified
  copy of the classifier c that has partially learned the target.
  Invoking classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain
  diffAndTrain c target returns a tuple containing:
    1. the indices of all nodes in c, paired with the difference between
       target and the node's model;
    2. a modified copy of the classifier c that has partially learned the
       target.
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical results.

reportAndTrain
  reportAndTrain c f target returns a tuple containing:
    1. the index of the node in c whose model best matches the input
       target;
    2. the indices of all nodes in c, paired with the difference between
       target and the node's model;
    3. a modified copy of the classifier c that has partially learned the
       target.
  Invoking reportAndTrain may be faster than invoking the component
  functions separately, but it should give identical results.

Data.Datamining.Clustering.DSOM

DSOM
  A Self-Organising Map (DSOM). Although DSOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the GridMap
  functions, please note:
    - The functions adjust and adjustWithKey do not increment the counter.
      You can do so manually with incrementCounter.
    - The functions map and mapWithKey are not implemented (they just
      return an error). It would be problematic to implement them because
      the input DSOM and the output DSOM would have to have the same
      Metric type.
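The contract of the Classifier class above — classifyAndTrain must agree with performing classify and train separately — can be illustrated with a toy classifier. This is a hypothetical stand-in built on an association list, not an instance of the package's Classifier class:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A toy classifier: an association list of node index and model.
-- Hypothetical, for illustration only; real SOMs weight each node's
-- update by its grid distance from the BMU.
type Toy = [(Int, Double)]

-- classify: index of the node whose model best matches the target.
classifyT :: Toy -> Double -> Int
classifyT c target =
  fst (minimumBy (comparing (abs . subtract target . snd)) c)

-- train: move every model a fraction r of the way towards the target.
trainT :: Double -> Toy -> Double -> Toy
trainT r c target = [ (k, m + r * (target - m)) | (k, m) <- c ]

-- classifyAndTrain: must give the same result as the two steps above,
-- though a real implementation may compute it in a single pass.
classifyAndTrainT :: Double -> Toy -> Double -> (Int, Toy)
classifyAndTrainT r c target = (classifyT c target, trainT r c target)

main :: IO ()
main = do
  let c = [(0, 1.0), (1, 5.0)]
  print (classifyT c 4.0)  -- node 1's model (5.0) is closest to 4.0
  print (classifyAndTrainT 0.1 c 4.0 == (classifyT c 4.0, trainT 0.1 c 4.0))
```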
gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM, the
  tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is more
  similar to target than pattern is. The magnitude of the adjustment is
  controlled by the amount parameter, which should be a number between 0
  and 1. Larger values for amount permit greater adjustments. If
  amount=1, the result should be identical to the target. If amount=0,
  the result should be the unmodified pattern.

toGridMap
  Extracts the grid and current models from the DSOM.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

rougierLearningFunction
  Configures a learning function that depends not on the time, but on how
  good a model we already have for the target. If the BMU is an exact
  match for the target, no learning occurs. Usage is
  rougierLearningFunction r p, where r is the maximal learning rate
  (0 <= r <= 1), and p is the elasticity. NOTE: when using this learning
  function, ensure that abs . difference is always between 0 and 1,
  inclusive; otherwise you may get invalid learning rates.

Data.Datamining.Clustering.SSOM

SSOM
  A Simplified Self-Organising Map (SSOM).
    - x is the type of the learning rate and the difference metric.
    - t is the type of the counter.
    - k is the type of the model indices.
    - p is the type of the input patterns and models.

sMap
  Maps patterns to nodes.
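For simple numeric patterns, a difference/makeSimilar pair satisfying the contracts described above might look like the sketch below. (The package exports absDifference and adjustNum in Data.Datamining.Pattern for this purpose; the definitions here are assumed, minimal versions.)

```haskell
-- Illustrative difference/makeSimilar pair for Double patterns,
-- satisfying the documented contracts. Assumed definitions, not
-- necessarily identical to the package's absDifference/adjustNum.

differenceD :: Double -> Double -> Double
differenceD x y = abs (x - y)  -- non-negative; 0 means identical

makeSimilarD :: Double -> Double -> Double -> Double
makeSimilarD target amount x = x + amount * (target - x)

main :: IO ()
main = do
  print (differenceD 3 3)      -- identical patterns: 0.0
  print (makeSimilarD 10 1 2)  -- amount=1: result is the target
  print (makeSimilarD 10 0 2)  -- amount=0: pattern unchanged
```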
learningRate
  A function which determines the learning rate for a node. The input
  parameter indicates how many patterns (or pattern batches) have
  previously been presented to the classifier. Typically this is used to
  make the learning rate decay over time. The output is the learning rate
  for that node (the amount by which the node's model should be updated
  to match the target). The learning rate should be between zero and one.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. For example, if this function is f,
  then f target amount pattern returns a modified copy of pattern that is
  more similar to target than pattern is. The magnitude of the adjustment
  is controlled by the amount parameter, which should be a number between
  0 and 1. Larger values for amount permit greater adjustments. If
  amount=1, the result should be identical to the target. If amount=0,
  the result should be the unmodified pattern.

counter
  A counter used as a "time" parameter. If you create the SSOM with a
  counter value 0, and don't directly modify it, then the counter will
  represent the number of patterns that this SSOM has classified.

exponential
  A typical learning function for classifiers. exponential r0 d t returns
  the learning rate at time t. When t = 0, the learning rate is r0. Over
  time the learning rate decays exponentially; the decay rate is d.
  Normally the parameters are chosen such that:
    0 < r0 < 1
    0 < d

toMap
  Extracts the current models from the SSOM. A synonym for sMap.

trainNode
  Trains the specified node to better match a target. Most users should
  use train, which automatically determines the BMU and trains it.
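A learning function with the properties described for exponential can be sketched as follows. The exact form r0 * exp (-d * t) is an assumption consistent with "decays exponentially with decay rate d", not necessarily the package's definition:

```haskell
-- Sketch of an exponentially decaying learning rate: r0 at time 0,
-- decaying with rate d. Assumed form, for illustration only.
exponentialSketch :: Floating a => a -> a -> a -> a
exponentialSketch r0 d t = r0 * exp (-d * t)

main :: IO ()
main = do
  print (exponentialSketch 0.5 0.1 0)   -- at t = 0 the rate is r0 (0.5)
  -- the rate strictly decreases as t grows:
  print (exponentialSketch 0.5 0.1 10 < exponentialSketch 0.5 0.1 0)
```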
Data.Datamining.Clustering.SOM

SOM
  A Self-Organising Map (SOM). Although SOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the GridMap
  functions, please note:
    - The functions adjust and adjustWithKey do not increment the counter.
      You can do so manually with incrementCounter.
    - The functions map and mapWithKey are not implemented (they just
      return an error). It would be problematic to implement them because
      the input SOM and the output SOM would have to have the same Metric
      type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM, the
  tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns. For example, if
  the function is f, then f t d returns the learning rate for a node. The
  parameter t indicates how many patterns (or pattern batches) have
  previously been presented to the classifier. Typically this is used to
  make the learning rate decay over time. The parameter d is the grid
  distance from the node being updated to the BMU (Best Matching Unit).
  The output is the learning rate for that node (the amount by which the
  node's model should be updated to match the target). The learning rate
  should be between zero and one.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is more
  similar to target than pattern is. The magnitude of the adjustment is
  controlled by the amount parameter, which should be a number between 0
  and 1. Larger values for amount permit greater adjustments.
  If amount=1, the result should be identical to the target. If amount=0,
  the result should be the unmodified pattern.

counter
  A counter used as a "time" parameter. If you create the SOM with a
  counter value 0, and don't directly modify it, then the counter will
  represent the number of patterns that this SOM has classified.

decayingGaussian
  A typical learning function for classifiers.
  decayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function.
  At time zero, the maximum learning rate (applied to the BMU) is r0, and
  the neighbourhood width is w0. Over time the bell curve shrinks and the
  learning rate tapers off, until at time tf, the maximum learning rate
  (applied to the BMU) is rf, and the neighbourhood width is wf. Normally
  the parameters should be chosen such that:
    0 < rf << r0 < 1
    0 < wf << w0
    0 < tf
  where << means "is much smaller than" (not the Haskell << operator!).

stepFunction
  A learning function that only updates the BMU and has a constant
  learning rate.

constantFunction
  A learning function that updates all nodes with the same, constant
  learning rate. This can be useful for testing.

toGridMap
  Extracts the grid and current models from the SOM. A synonym for
  gridMap.
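A function with the shape described for decayingGaussian can be sketched as below. The Gaussian form and the exponential interpolation of the rate and width between (r0, w0) at t = 0 and (rf, wf) at t = tf are assumptions made for illustration; the package's own formula may differ in detail:

```haskell
-- Sketch of a bell curve-shaped learning function: given elapsed time t
-- and grid distance d from the BMU, return a learning rate that decays
-- over time and falls off with distance. Assumed form, for illustration.
decayingGaussianSketch
  :: Floating a => a -> a -> a -> a -> a -> a -> a -> a
decayingGaussianSketch r0 rf w0 wf tf t d =
  r * exp (-(d * d) / (2 * w * w))
  where
    a = t / tf
    r = r0 * ((rf / r0) ** a)  -- rate decays from r0 (t=0) to rf (t=tf)
    w = w0 * ((wf / w0) ** a)  -- width shrinks from w0 (t=0) to wf (t=tf)

main :: IO ()
main = do
  let f = decayingGaussianSketch 0.9 0.1 3.0 0.5 100
  print (f 0 0)            -- at t = 0 the BMU (d = 0) gets the full rate r0
  print (f 0 5 < f 0 0)    -- nodes further from the BMU learn less
  print (f 100 0 < f 0 0)  -- the BMU's rate decays towards rf over time
```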
trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

Exported names, by module:

Data.Datamining.Pattern:
  ScaledVector, NormalisedVector, absDifference, adjustNum,
  magnitudeSquared, euclideanDistanceSquared, adjustVector, normalise,
  scale, scaleAll

Data.Datamining.Clustering.Classifier:
  Classifier, toList, numModels, models, differences, classify, train,
  trainBatch, classifyAndTrain, diffAndTrain, reportAndTrain

Data.Datamining.Clustering.DSOM (and DSOMInternal):
  DSOM, gridMap, learningRate, difference, makeSimilar, withGridMap,
  toGridMap, adjustNode, scaleDistance, trainNeighbourhood, justTrain,
  rougierLearningFunction

Data.Datamining.Clustering.SSOM (and SSOMInternal):
  SSOM, sMap, learningRate, difference, makeSimilar, counter,
  exponential, toMap, trainNode, incrementCounter

Data.Datamining.Clustering.SOM (and SOMInternal):
  SOM, gridMap, learningRate, difference, makeSimilar, counter,
  decayingGaussian, stepFunction, constantFunction, toGridMap,
  trainNeighbourhood, currentLearningFunction