som-7.2.3: Data.Datamining.Pattern
(c) Amy de Buitléir 2012-2013, BSD-style, amy@nualeargais.ie
Stability: experimental. Portability: portable. Safe Haskell: Safe-Inferred.

ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one. To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for each
  element in a vector, you can scale individual vectors using scale.

NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the vector = 1.

Pattern
  A pattern to be learned or classified.

difference
  Compares two patterns and returns a non-negative number representing how
  different the patterns are. A result of 0 indicates that the patterns are
  identical.

makeSimilar
  makeSimilar target amount pattern returns a modified copy of pattern that
  is more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the target.
  If amount=0, the result should be the unmodified pattern.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector
  adjustVector target r vector adjusts vector to move it closer to target.
  The amount of adjustment is controlled by the learning rate r, which is a
  number between 0 and 1. Larger values of r permit more adjustment. If
  r=1, the result will be identical to the target. If r=0, the result will
  be the unmodified vector.

normalise
  Normalises a vector.

scale
  Given a vector qs of pairs of numbers, where each pair represents the
  maximum and minimum value to be expected at each index in xs, scale qs xs
  scales the vector xs element by element, mapping the maximum value
  expected at that index to one, and the minimum value to zero.
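The behaviour documented above for the simple vector helpers can be sketched in plain Haskell. This is an illustrative reimplementation specialised to lists, not the package's own code, and the function bodies are assumptions based only on the descriptions above:

```haskell
-- Illustrative sketches of the vector operations documented above,
-- specialised to lists; the package's actual definitions may differ.

-- Sum of squared element-wise differences between two vectors.
euclideanDistanceSquared :: Num a => [a] -> [a] -> a
euclideanDistanceSquared xs ys = sum (map (^ 2) (zipWith (-) xs ys))

-- Move each element a fraction r of the way towards the target:
-- r=1 yields the target, r=0 leaves the vector unchanged.
adjustVector :: Num a => [a] -> a -> [a] -> [a]
adjustVector target r = zipWith (\t x -> x + r * (t - x)) target

-- Scale a vector so that its magnitude is 1.
normalise :: Floating a => [a] -> [a]
normalise xs = map (/ m) xs
  where m = sqrt (sum (map (^ 2) xs))
```

For example, adjustVector [10] 1 [0] yields [10] (full adjustment) and adjustVector [10] 0 [4] yields [4] (no adjustment), matching the r=1 and r=0 cases described above.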
scaleAll
  Scales a set of vectors by determining the maximum and minimum values at
  each index in the vector, and mapping the maximum value to one, and the
  minimum value to zero.

som-7.2.3: Data.Datamining.Clustering.Classifier

Classifier
  A machine which learns to classify input patterns. Minimal complete
  definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences
  differences c target returns the indices of all nodes in c, paired with
  the difference between target and the node's model.

classify
  classify c target returns the index of the node in c whose model best
  matches the target.

train
  train c target returns a modified copy of the classifier c that has
  partially learned the target.

trainBatch
  trainBatch c targets returns a modified copy of the classifier c that
  has partially learned the targets.

classifyAndTrain
  classifyAndTrain c target returns a tuple containing the index of the
  node in c whose model best matches the input target, and a modified copy
  of the classifier c that has partially learned the target. Invoking
  classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain
  diffAndTrain c target returns a tuple containing:
  1. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  2. A modified copy of the classifier c that has partially learned the
     target.
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical results.

reportAndTrain
  reportAndTrain c target returns a tuple containing:
  1. The index of the node in c whose model best matches the input target.
  2. The indices of all nodes in c, paired with the difference between
     target and the node's model.
  3. A modified copy of the classifier c that has partially learned the
     target.
  Invoking reportAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical results.
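To make the Classifier contract concrete, here is a toy, list-based classifier with one-dimensional models. The names (classifyToy, trainToy, classifyAndTrainToy) are invented for illustration and are not the package's SOM or DSOM types; the point is that classifyAndTrain should behave exactly like classify paired with train:

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Toy classifier: models are just Doubles, indexed by list position.
-- classifyToy returns the index of the model closest to the input.
classifyToy :: [Double] -> Double -> Int
classifyToy ms p =
  fst (minimumBy (comparing snd) (zip [0 ..] (map (\m -> abs (m - p)) ms)))

-- trainToy moves only the best-matching model a fraction r towards the
-- input, leaving the other models unchanged.
trainToy :: Double -> [Double] -> Double -> [Double]
trainToy r ms p =
  [ if i == k then m + r * (p - m) else m | (i, m) <- zip [0 ..] ms ]
  where k = classifyToy ms p

-- classifyAndTrainToy gives the same answers as the two calls made
-- separately, mirroring the contract described above.
classifyAndTrainToy :: Double -> [Double] -> Double -> (Int, [Double])
classifyAndTrainToy r ms p = (classifyToy ms p, trainToy r ms p)
```

For example, with models [0, 5, 10] and input 6, the best match is index 1, and training with r=1 replaces that model with the input.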
som-7.2.3: Data.Datamining.Clustering.DSOM

DSOM
  A Self-Organising Map (DSOM). Although DSOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the GridMap
  functions, please note:
  * The functions adjust and adjustWithKey do not increment the counter.
    You can do so manually with incrementCounter.
  * The functions map and mapWithKey are not implemented (they just return
    an error). It would be problematic to implement them because the input
    DSOM and the output DSOM would have to have the same Metric type.

toGridMap
  Extracts the grid and current models from the DSOM.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

defaultDSOM
  Creates a classifier with a default (bell-shaped) learning function.
  Usage is defaultDSOM gm r p, where:
  gm    The geometry and initial models for this classifier. A reasonable
        choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps
        is a set of random patterns.
  r, p  The first two parameters to rougierLearningFunction.

customDSOM
  Creates a classifier with a custom learning function. Usage is
  customDSOM gm f, where:
  gm  The geometry and initial models for this classifier. A reasonable
      choice here is lazyGridMap g ps, where g is a HexHexGrid, and ps is
      a set of random patterns.
  f   A function used to determine the learning rate (for adjusting the
      models in the classifier). This function will be invoked with three
      parameters. The first parameter will indicate how different the BMU
      is from the input pattern. The second parameter indicates how
      different the pattern of the node currently being trained is from
      the input pattern. The third parameter is the grid distance from the
      BMU to the node currently being trained, as a fraction of the
      maximum grid distance.
      The output is the learning rate for that node (the amount by which
      the node's model should be updated to match the target). The
      learning rate should be between zero and one.

rougierLearningFunction
  Configures a learning function that depends not on the time, but on how
  good a model we already have for the target. If the BMU is an exact
  match for the target, no learning occurs. Usage is
  rougierLearningFunction r p, where r is the maximal learning rate
  (0 <= r <= 1), and p is the elasticity.
  NOTE: When using this learning function, ensure that abs . difference is
  always between 0 and 1, inclusive. Otherwise you may get invalid
  learning rates.

som-7.2.3: Data.Datamining.Clustering.SOM

SOM
  A Self-Organising Map (SOM). Although SOM implements GridMap, most users
  will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the GridMap
  functions, please note:
  * The functions adjust and adjustWithKey do not increment the counter.
    You can do so manually with incrementCounter.
  * The functions map and mapWithKey are not implemented (they just return
    an error). It would be problematic to implement them because the input
    SOM and the output SOM would have to have the same Metric type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM, the
  tiles are called "nodes".

learningFunction
  The function used to update the nodes.

counter
  A counter used as a "time" parameter. If you create the SOM with a
  counter value of 0, and don't directly modify it, then the counter will
  represent the number of patterns that this SOM has classified.

ConstantFunction
  A learning function that updates all nodes with the same, constant
  learning rate. This can be useful for testing.

StepFunction
  A learning function that only updates the BMU and has a constant
  learning rate.

DecayingGaussian
  A typical learning function for classifiers.
  DecayingGaussian r0 rf w0 wf tf returns a bell curve-shaped function. At
  time zero, the maximum learning rate (applied to the BMU) is r0, and the
  neighbourhood width is w0. Over time the bell curve shrinks and the
  learning rate tapers off, until at time tf, the maximum learning rate
  (applied to the BMU) is rf, and the neighbourhood width is wf. Normally
  the parameters should be chosen such that:
    0 < rf << r0 < 1
    0 < wf << w0
    0 < tf
  where << means "is much smaller than" (not the Haskell << operator!).

LearningFunction
  A function used to adjust the models in a classifier.

rate
  rate f t d returns the learning rate for a node. The parameter f is the
  learning function. The parameter t indicates how many patterns (or
  pattern batches) have previously been presented to the classifier.
  Typically this is used to make the learning rate decay over time. The
  parameter d is the grid distance from the node being updated to the BMU
  (Best Matching Unit). The output is the learning rate for that node (the
  amount by which the node's model should be updated to match the target).
  The learning rate should be between zero and one.

toGridMap
  Extracts the grid and current models from the SOM. A synonym for
  gridMap.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target.
  Most users should use train, which automatically determines the BMU and
  trains it and its neighbourhood.
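As an illustration of the shape of such a learning function, here is a sketch with the behaviour described for DecayingGaussian r0 rf w0 wf tf. The name decayingGaussianRate and the exponential interpolation between the initial and final rates and widths are assumptions for illustration; the package's exact formula may differ:

```haskell
-- A sketch of a decaying-Gaussian learning rate, assuming exponential
-- interpolation between the initial and final rates and widths; the
-- package's exact formula may differ.
decayingGaussianRate
  :: Double -> Double -> Double  -- r0, rf: initial and final max rates
  -> Double -> Double            -- w0, wf: initial and final widths
  -> Double                      -- tf: final time
  -> Double -> Double            -- t: current time; d: grid distance to BMU
  -> Double
decayingGaussianRate r0 rf w0 wf tf t d =
  r * exp (negate (d * d) / (2 * w * w))
  where
    a = t / tf               -- fraction of the training time elapsed
    r = r0 * (rf / r0) ** a  -- max learning rate decays from r0 to rf
    w = w0 * (wf / w0) ** a  -- neighbourhood width shrinks from w0 to wf
```

At t = 0 the BMU (d = 0) is trained at rate r0; by t = tf the BMU rate has decayed to rf and the neighbourhood width has shrunk from w0 to wf, matching the description of DecayingGaussian above.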