Data.Datamining.Pattern
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

NormalisedVector
    A vector that has been normalised, i.e., the magnitude of the vector is 1.

ScaledVector
    A vector that has been scaled so that all elements in the vector are between zero and one. To scale a set of vectors, use scaleAll. Alternatively, if you can identify a maximum and minimum value for each element in a vector, you can scale individual vectors using scale.

euclideanDistanceSquared
    Calculates the square of the Euclidean distance between two vectors.

adjustVector
    adjustVector target amount vector adjusts each element of vector to move it closer to the corresponding element of target. The amount of adjustment is controlled by the learning rate amount, which is a number between 0 and 1. Larger values of amount permit more adjustment. If amount=1, the result will be identical to the target. If amount=0, the result will be the unmodified pattern. If target is shorter than vector, the result will be the same length as target. If target is longer than vector, the result will be the same length as vector.

adjustVectorPreserveLength
    Same as adjustVector, except that the result will always be the same length as vector. This means that if target is shorter than vector, the "leftover" elements of vector will be copied to the result, unmodified.

normalise
    Normalises a vector.

scale
    Given a vector qs of pairs of numbers, where each pair represents the maximum and minimum value to be expected at each index in xs, scale qs xs scales the vector xs element by element, mapping the maximum value expected at that index to one, and the minimum value to zero.

scaleAll
    Scales a set of vectors by determining the maximum and minimum values at each index in the vector, and mapping the maximum value to one, and the minimum value to zero.

Data.Datamining.Clustering.SOSInternal
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

SOS
    A Simplified Self-Organising Map (SOS). t is the type of the counter.
    x is the type of the learning rate and the difference metric. k is the type of the model indices. p is the type of the input patterns and models.

sMap
    Maps patterns and match counts to nodes.

learningRate
    A function which determines the learning rate for a node. The input parameter indicates how many patterns (or pattern batches) have previously been presented to the classifier. Typically this is used to make the learning rate decay over time. The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.

maxSize
    The maximum number of models this SOS can hold.

diffThreshold
    The threshold that triggers creation of a new model.

allowDeletion
    Delete existing models to make room for new ones? The least useful (least frequently matched) models will be deleted first.

difference
    A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.

makeSimilar
    A function which updates models. For example, if this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.

nextIndex
    Index for the next node to add to the SOS.

exponential
    A typical learning function for classifiers. exponential r0 d t returns the learning rate at time t. When t = 0, the learning rate is r0. Over time the learning rate decays exponentially; the decay rate is d.
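The exponential decay just described can be written down concretely. A minimal sketch, assuming the rate has the form r0 * exp (-d * t); the library's exact definition may differ:

```haskell
-- Illustrative sketch of an exponentially decaying learning rate:
-- at t = 0 the rate is r0; it then decays exponentially at rate d.
-- Assumption: the formula r0 * exp (-d * t); not necessarily the
-- library's own definition of `exponential`.
exponentialSketch :: Double -> Double -> Int -> Double
exponentialSketch r0 d t = r0 * exp (negate d * fromIntegral t)
```

For example, exponentialSketch 0.8 0.1 0 gives 0.8, and the rate shrinks towards zero as t grows.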
    Normally the parameters are chosen such that 0 < r0 < 1 and 0 < d.

isEmpty
    Returns true if the SOS has no models, false otherwise.

numModels
    Returns the number of models the SOS currently contains.

modelMap
    Returns a map from node ID to model.

counterMap
    Returns a map from node ID to counter (number of times the node's model has been the closest match to an input pattern).

models
    Returns the current models.

counters
    Returns the current counters (number of times the node's model has been the closest match to an input pattern).

time
    The current "time" (number of times the SOS has been trained).

addNode
    Adds a new node to the SOS.

deleteNode
    Removes a node from the SOS. Deleted nodes are never re-used.

trainNode
    Trains the specified node to better match a target. Most users should use train, which automatically determines the BMU and trains it.

classify
    classify s p identifies the model in s that most closely matches the pattern p. If necessary, it will create a new node and model. Returns the ID of the node with the best matching model, the difference between the best matching model and the pattern, the differences between the input and each model in the SOS, and the (possibly updated) SOS.

train
    train s p identifies the model in s that most closely matches p, and updates it to be a somewhat better match.

trainBatch
    For each pattern p in ps, trainBatch s ps identifies the model in s that most closely matches p, and updates it to be a somewhat better match.

Data.Datamining.Clustering.SOS
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

Data.Datamining.Clustering.Classifier
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

Classifier
    A machine which learns to classify input patterns.
    Minimal complete definition: trainBatch, reportAndTrain.

toList
    Returns a list of index/model pairs.

numModels
    Returns the number of models this classifier can learn.

models
    Returns the current models of the classifier.

differences
    differences c target returns the indices of all nodes in c, paired with the difference between target and the node's model.

classify
    classify c target returns the index of the node in c whose model best matches the target.

train
    train c target returns a modified copy of the classifier c that has partially learned the target.

trainBatch
    trainBatch c targets returns a modified copy of the classifier c that has partially learned the targets.

classifyAndTrain
    classifyAndTrain c target returns a tuple containing the index of the node in c whose model best matches the input target, and a modified copy of the classifier c that has partially learned the target. Invoking classifyAndTrain c p may be faster than invoking (classify c p, train c p), but they should give identical results.

diffAndTrain
    diffAndTrain c target returns a tuple containing:
    1. The indices of all nodes in c, paired with the difference between target and the node's model.
    2. A modified copy of the classifier c that has partially learned the target.
    Invoking diffAndTrain c p may be faster than invoking (differences c p, train c p), but they should give identical results.

reportAndTrain
    reportAndTrain c target returns a tuple containing:
    1. The index of the node in c whose model best matches the input target.
    2. The indices of all nodes in c, paired with the difference between target and the node's model.
    3. A modified copy of the classifier c that has partially learned the target.
    Invoking reportAndTrain c p may be faster than invoking the separate functions, but they should give identical results.

Data.Datamining.Clustering.DSOMInternal
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

DSOM
    A Self-Organising Map (DSOM). Although DSOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier.
    If you choose to use the GridMap functions, please note:
    * The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter.
    * The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input DSOM and the output DSOM would have to have the same Metric type.

gridMap
    Maps patterns to tiles in a regular grid. In the context of a SOM, the tiles are called "nodes".

learningRate
    A function which determines how quickly the SOM learns.

difference
    A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.

makeSimilar
    A function which updates models. If this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.

toGridMap
    Extracts the grid and current models from the DSOM.

trainNeighbourhood
    Trains the specified node and the neighbourhood around it to better match a target. Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.

rougierLearningFunction
    Configures a learning function that depends not on the time, but on how good a model we already have for the target. If the BMU is an exact match for the target, no learning occurs. Usage is rougierLearningFunction r p, where r is the maximal learning rate (0 <= r <= 1), and p is the elasticity.

    NOTE: When using this learning function, ensure that abs . difference is always between 0 and 1, inclusive. Otherwise you may get invalid learning rates.
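The makeSimilar contract described above can be illustrated for numeric vector models. A minimal sketch (a hypothetical helper for illustration, not the library's own update function), assuming elementwise linear interpolation towards the target:

```haskell
-- Move each element of the pattern a fraction `amount` of the way
-- towards the corresponding element of the target.  With amount = 1
-- the result equals the target; with amount = 0 the pattern is
-- returned unchanged.  Hypothetical helper, for illustration only.
makeSimilarSketch :: Double -> [Double] -> [Double] -> [Double]
makeSimilarSketch amount target pattern =
  zipWith (\t p -> p + amount * (t - p)) target pattern
```

Note that zipWith truncates to the shorter argument, so if target is shorter than the pattern, so is the result.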
Data.Datamining.Clustering.DSOM
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

Data.Datamining.Clustering.SSOMInternal
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

SSOM
    A Simplified Self-Organising Map (SSOM). x is the type of the learning rate and the difference metric. t is the type of the counter. k is the type of the model indices. p is the type of the input patterns and models.

sMap
    Maps patterns to nodes.

learningRate
    A function which determines the learning rate for a node. The input parameter indicates how many patterns (or pattern batches) have previously been presented to the classifier. Typically this is used to make the learning rate decay over time. The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.

difference
    A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.

makeSimilar
    A function which updates models. For example, if this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.

counter
    A counter used as a "time" parameter. If you create the SSOM with a counter value 0, and don't directly modify it, then the counter will represent the number of patterns that this SSOM has classified.

exponential
    A typical learning function for classifiers. exponential r0 d t returns the learning rate at time t. When t = 0, the learning rate is r0. Over time the learning rate decays exponentially; the decay rate is d.
    Normally the parameters are chosen such that 0 < r0 < 1 and 0 < d.

toMap
    Extracts the current models from the SSOM. A synonym for sMap.

trainNode
    Trains the specified node to better match a target. Most users should use train, which automatically determines the BMU and trains it.

Data.Datamining.Clustering.SSOM
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

Data.Datamining.Clustering.SOMInternal
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

SOM
    A Self-Organising Map (SOM). Although SOM implements GridMap, most users will only need the interface provided by Data.Datamining.Clustering.Classifier.

    If you choose to use the GridMap functions, please note:
    * The functions adjust and adjustWithKey do not increment the counter. You can do so manually with incrementCounter.
    * The functions map and mapWithKey are not implemented (they just return an error). It would be problematic to implement them because the input SOM and the output SOM would have to have the same Metric type.

gridMap
    Maps patterns to tiles in a regular grid. In the context of a SOM, the tiles are called "nodes".

learningRate
    A function which determines how quickly the SOM learns. For example, if the function is f, then f t d returns the learning rate for a node. The parameter t indicates how many patterns (or pattern batches) have previously been presented to the classifier. Typically this is used to make the learning rate decay over time. The parameter d is the grid distance from the node being updated to the BMU (Best Matching Unit). The output is the learning rate for that node (the amount by which the node's model should be updated to match the target). The learning rate should be between zero and one.

difference
    A function which compares two patterns and returns a non-negative number representing how different the patterns are. A result of 0 indicates that the patterns are identical.

makeSimilar
    A function which updates models.
    If this function is f, then f target amount pattern returns a modified copy of pattern that is more similar to target than pattern is. The magnitude of the adjustment is controlled by the amount parameter, which should be a number between 0 and 1. Larger values for amount permit greater adjustments. If amount=1, the result should be identical to the target. If amount=0, the result should be the unmodified pattern.

counter
    A counter used as a "time" parameter. If you create the SOM with a counter value 0, and don't directly modify it, then the counter will represent the number of patterns that this SOM has classified.

decayingGaussian
    A typical learning function for classifiers. decayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function. At time zero, the maximum learning rate (applied to the BMU) is r0, and the neighbourhood width is w0. Over time the bell curve shrinks and the learning rate tapers off, until at time tf, the maximum learning rate (applied to the BMU) is rf, and the neighbourhood width is wf. Normally the parameters should be chosen such that:

        0 < rf << r0 < 1
        0 < wf << w0
        0 < tf

    where << means "is much smaller than" (not the Haskell << operator!).

stepFunction
    A learning function that only updates the BMU and has a constant learning rate.

constantFunction
    A learning function that updates all nodes with the same, constant learning rate. This can be useful for testing.

toGridMap
    Extracts the grid and current models from the SOM. A synonym for gridMap.

trainNeighbourhood
    Trains the specified node and the neighbourhood around it to better match a target.
    Most users should use train, which automatically determines the BMU and trains it and its neighbourhood.

Data.Datamining.Clustering.SOM
(c) Amy de Buitlir 2012-2015; licence: BSD-style; maintainer: amy@nualeargais.ie; stability: experimental; portability: portable; Safe Haskell: Safe

Modules in this package: Data.Datamining.Pattern, Data.Datamining.Clustering.SOS, Data.Datamining.Clustering.SOSInternal, Data.Datamining.Clustering.Classifier, Data.Datamining.Clustering.DSOM, Data.Datamining.Clustering.DSOMInternal, Data.Datamining.Clustering.SSOM, Data.Datamining.Clustering.SSOMInternal, Data.Datamining.Clustering.SOM, Data.Datamining.Clustering.SOMInternal.
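The decayingGaussian behaviour described above can be sketched concretely. One plausible realization, under two stated assumptions (the peak rate and width interpolate geometrically from r0 to rf and from w0 to wf over [0, tf], and the distance falloff is Gaussian); the library's exact formula may differ:

```haskell
-- Sketch of a bell-curve learning function.  t is the time and d the
-- grid distance from the BMU.  At t = 0, d = 0 the rate is r0; at
-- t = tf, d = 0 it is rf.  Geometric interpolation of the peak rate
-- and width is an assumption made for illustration only.
decayingGaussianSketch
  :: Double -> Double -> Double -> Double -> Double  -- r0 rf w0 wf tf
  -> Double -> Double                                -- t d
  -> Double
decayingGaussianSketch r0 rf w0 wf tf t d =
  r * exp (negate (d * d) / (2 * w * w))
  where
    a = t / tf                -- fraction of training time elapsed
    r = r0 * (rf / r0) ** a   -- current peak rate
    w = w0 * (wf / w0) ** a   -- current neighbourhood width
```

As the text recommends, sensible parameters satisfy 0 < rf << r0 < 1, 0 < wf << w0 and 0 < tf, so the rate for every node shrinks smoothly towards rf as training proceeds.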