Package: som
Copyright: (c) 2012-2021 Amy de Buitléir
License: BSD-style
Maintainer: amy@nualeargais.ie
Stability: experimental
Portability: portable

Data.Datamining.Clustering.Classifier

A machine which learns to classify input patterns.
Minimal complete definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences
  differences c target returns the indices of all nodes in c, paired
  with the difference between target and the node's model.

classify
  classify c target returns the index of the node in c whose model best
  matches the target.

train
  train c target returns a modified copy of the classifier c that has
  partially learned the target.

trainBatch
  trainBatch c targets returns a modified copy of the classifier c that
  has partially learned the targets.

classifyAndTrain
  classifyAndTrain c target returns a tuple containing the index of the
  node in c whose model best matches the input target, and a modified
  copy of the classifier c that has partially learned the target.
  Invoking classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain
  diffAndTrain c target returns a tuple containing:
  1. the indices of all nodes in c, paired with the difference between
     target and the node's model;
  2. a modified copy of the classifier c that has partially learned the
     target.
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical results.

reportAndTrain
  reportAndTrain c target returns a tuple containing:
  1. the index of the node in c whose model best matches the input
     target;
  2. the indices of all nodes in c, paired with the difference between
     target and the node's model;
  3. a modified copy of the classifier c that has partially learned the
     target.
  Invoking reportAndTrain may be faster than invoking classify,
  differences, and train separately, but the results should be
  identical.

Data.Datamining.Clustering.DSOMInternal

DSOM
  A Self-Organising Map (DSOM). Although DSOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the
  GridMap functions, please note:
  * The functions adjust and adjustWithKey do not increment the
    counter. You can do so manually with incrementCounter.
  * The functions map and mapWithKey are not implemented (they just
    return an error). It would be problematic to implement them because
    the input DSOM and the output DSOM would have to have the same
    Metric type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM,
  the tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is
  more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.
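As an illustration of the makeSimilar contract (amount=0 leaves the
pattern unchanged, amount=1 reproduces the target), here is a small
sketch for numeric patterns. The name adjustTowards is hypothetical and
is not exported by the package.

> -- Illustrative only (hypothetical helper, not part of som): move a
> -- numeric pattern a fraction 'amount' of the way towards 'target'.
> adjustTowards :: Double -> Double -> Double -> Double
> adjustTowards target amount pattern = pattern + amount * (target - pattern)
>
> -- adjustTowards t 1 p == t   (full adjustment)
> -- adjustTowards t 0 p == p   (no adjustment)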
withGridMap
  Internal method.

toGridMap
  Extracts the grid and current models from the DSOM.

adjustNode
  Internal method.

scaleDistance
  Internal method.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

justTrain
  Internal method.

rougierLearningFunction
  Configures a learning function that depends not on the time, but on
  how good a model we already have for the target. If the BMU is an
  exact match for the target, no learning occurs. Usage is
  rougierLearningFunction r p, where r is the maximal learning rate
  (0 <= r <= 1), and p is the elasticity.
  NOTE: When using this learning function, ensure that abs . difference
  is always between 0 and 1, inclusive. Otherwise you may get invalid
  learning rates.

Data.Datamining.Clustering.SGM4Internal

SGM
  A Simplified Self-Organising Map (SGM).
  t is the type of the counter.
  x is the type of the learning rate and the difference metric.
  k is the type of the model indices.
  p is the type of the input patterns and models.

toMap
  Maps patterns and match counts to nodes.

learningRate
  A function which determines the learning rate for a node. The input
  parameter indicates how many patterns (or pattern batches) have
  previously been presented to the classifier. Typically this is used
  to make the learning rate decay over time. The output is the learning
  rate for that node (the amount by which the node's model should be
  updated to match the target). The learning rate should be between
  zero and one.

capacity
  The maximum number of models this SGM can hold.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. For example, if this function is f,
  then f target amount pattern returns a modified copy of pattern that
  is more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.

nextIndex
  Index for the next node to add to the SGM.

exponential
  A typical learning function for classifiers. exponential r0 d t
  returns the learning rate at time t. When t = 0, the learning rate is
  r0. Over time the learning rate decays exponentially; the decay rate
  is d. Normally the parameters are chosen such that:
    0 < r0 < 1
    0 < d
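A sketch of a learning rate with this behaviour, assuming decay of the
form r0 * exp(-d*t); expRate is an illustrative stand-in, not the
package's own definition of exponential.

> -- Illustrative only: starts at r0 when t = 0 and decays
> -- exponentially with decay rate d.
> expRate :: Double -> Double -> Double -> Double
> expRate r0 d t = r0 * exp (negate d * t)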
makeSGM
  makeSGM lr n diff ms creates a new SGM that does not (yet) contain
  any models. It will learn at the rate determined by the learning
  function lr, and will be able to hold up to n models. It will create
  a new model based on a pattern presented to it when the SGM is not at
  capacity, or a less useful model can be replaced. It will use the
  function diff to measure the similarity between an input pattern and
  a model. It will use the function ms to adjust models as needed to
  make them more similar to input patterns.

isEmpty
  Returns true if the SGM has no models, false otherwise.

size
  Returns the number of models the SGM currently contains.

modelMap
  Returns a map from node ID to model.

counterMap
  Returns a map from node ID to counter (number of times the node's
  model has been the closest match to an input pattern).

modelAt
  Returns the model at a specified node.

counterAt
  Returns the match counter for a specified node.

labels
  Returns the current labels.

time
  The current "time" (number of times the SGM has been trained).

addNode
  Adds a new node to the SGM.

incrementCounter
  Increments the match counter.

trainNode
  Trains the specified node to better match a target. Most users should
  use train, which automatically determines the BMU and trains it.

modelDiffs
  Calculates the difference between all pairs of non-identical labels
  in the SGM.

labelPairs
  Generates all pairs of non-identical labels in the SGM.

labelPairs'
  Pairs a node label with all labels except itself.

twoMostSimilar
  Returns the labels of the two most similar models, and the difference
  between them.

mergeModels
  Deletes the least used (least matched) model in a pair, and returns
  its label (now available) and the updated SGM.
  TODO: Modify the other model to make it slightly more similar to the
  one that was deleted?

atCapacity
  Returns True if the SGM is full; returns False if it can add one or
  more models.

consolidate
  consolidate s finds the two most similar models, and combines them.
  This can be used to free up more space for learning. It returns the
  index of the newly free node, and the updated SGM.

setModel
  Sets the model for a node. Useful when merging two models and
  replacing one.

classify
  classify s p identifies the model in s that most closely matches the
  pattern p. It will not make any changes to the classifier (i.e., it
  will not change the models or match counts). Returns the ID of the
  node with the best matching model, the difference between the best
  matching model and the pattern, and the SGM labels paired with the
  model and the difference between the input and the corresponding
  model. The final paired list is sorted in decreasing order of
  similarity.

matchOrder
  Orders models by ascending difference from the input pattern, then by
  creation order (label number); see the sketch following this module's
  entries.

trainAndClassify
  trainAndClassify s p identifies the model in s that most closely
  matches p, and updates it to be a somewhat better match. If
  necessary, it will create a new node and model. Returns the ID of the
  node with the best matching model, the difference between the pattern
  and the best matching model in the original SGM (before training or
  adding a new model), the differences between the pattern and each
  model in the updated SGM, and the updated SGM.

trainAndClassify'
  Internal method.
  NOTE: This function will adjust the model and update the match
  counter for the BMU.

train
  train s p identifies the model in s that most closely matches p, and
  updates it to be a somewhat better match. If necessary, it will
  create a new node and model.

trainBatch
  For each pattern p in ps, trainBatch s ps identifies the model in s
  that most closely matches p, and updates it to be a somewhat better
  match.

numModels
  Same as size.

maxSize
  Same as capacity.

filter
  Returns a copy of the SGM containing only models that satisfy the
  predicate.
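The ordering described for matchOrder (ascending difference, ties
broken by label, i.e. creation order) could be written as follows;
orderMatches is an illustrative stand-alone sketch, not the package's
internal definition.

> import Data.List (sortBy)
> import Data.Ord (comparing)
>
> -- Illustrative only: sort (label, difference) pairs by ascending
> -- difference, breaking ties by ascending label (creation order).
> orderMatches :: (Ord k, Ord x) => [(k, x)] -> [(k, x)]
> orderMatches = sortBy (comparing snd <> comparing fst)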
Data.Datamining.Clustering.SGMInternal

SGM
  A Simplified Self-Organising Map (SGM).
  t is the type of the counter.
  x is the type of the learning rate and the difference metric.
  k is the type of the model indices.
  p is the type of the input patterns and models.

toMap
  Maps patterns and match counts to nodes.

learningRate
  A function which determines the learning rate for a node. The input
  parameter indicates how many patterns (or pattern batches) have
  previously been presented to the classifier. Typically this is used
  to make the learning rate decay over time. The output is the learning
  rate for that node (the amount by which the node's model should be
  updated to match the target). The learning rate should be between
  zero and one.

maxSize
  The maximum number of models this SGM can hold.

diffThreshold
  The threshold that triggers creation of a new model.

allowDeletion
  Delete existing models to make room for new ones? The least useful
  (least frequently matched) models will be deleted first.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. For example, if this function is f,
  then f target amount pattern returns a modified copy of pattern that
  is more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.

nextIndex
  Index for the next node to add to the SGM.

exponential
  A typical learning function for classifiers. exponential r0 d t
  returns the learning rate at time t. When t = 0, the learning rate is
  r0. Over time the learning rate decays exponentially; the decay rate
  is d. Normally the parameters are chosen such that:
    0 < r0 < 1
    0 < d

makeSGM
  makeSGM lr n dt diff ms creates a new SGM that does not (yet) contain
  any models. It will learn at the rate determined by the learning
  function lr, and will be able to hold up to n models. It will create
  a new model based on a pattern presented to it when (1) the SGM
  contains no models, or (2) the difference between the pattern and the
  closest matching model exceeds the threshold dt (see the sketch
  below). It will use the function diff to measure the similarity
  between an input pattern and a model. It will use the function ms to
  adjust models as needed to make them more similar to input patterns.

isEmpty
  Returns true if the SGM has no models, false otherwise.

size
  Returns the number of models the SGM currently contains.

modelMap
  Returns a map from node ID to model.

counterMap
  Returns a map from node ID to counter (number of times the node's
  model has been the closest match to an input pattern).

modelAt
  Returns the model at a specified node.

labels
  Returns the current labels.

models
  Returns the current models.

counters
  Returns the current counters (number of times the node's model has
  been the closest match to an input pattern).

time
  The current "time" (number of times the SGM has been trained).

addNode
  Adds a new node to the SGM.

deleteNode
  Removes a node from the SGM. Deleted nodes are never re-used.

incrementCounter
  Increments the match counter.

trainNode
  Trains the specified node to better match a target. Most users should
  use train, which automatically determines the BMU and trains it.

leastUsefulNode
  Returns the node that has been the BMU least often.

deleteLeastUsefulNode
  Deletes the node that has been the BMU least often.

addModel
  Adds a new node to the SGM, deleting the least useful node/model if
  necessary to make room.
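Two of the rules described above, sketched in isolation. The names
shouldCreateModel and leastUsed are illustrative, not the package's
internals; leastUsed assumes a non-empty map of counters.

> import qualified Data.Map.Strict as M
> import Data.List (minimumBy)
> import Data.Ord (comparing)
>
> -- Create a new model when there are no models yet, or when the best
> -- match differs from the pattern by more than the threshold dt.
> shouldCreateModel :: Ord x => x -> Maybe x -> Bool
> shouldCreateModel _  Nothing         = True
> shouldCreateModel dt (Just bestDiff) = bestDiff > dt
>
> -- The "least useful" node is the one with the smallest match
> -- counter.  M.toList pairs each label with its counter.
> leastUsed :: Ord t => M.Map k t -> k
> leastUsed = fst . minimumBy (comparing snd) . M.toList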
classify
  classify s p identifies the model in s that most closely matches the
  pattern p. It will not make any changes to the classifier. Returns
  the ID of the node with the best matching model, the difference
  between the best matching model and the pattern, and the SGM labels
  paired with the model and the difference between the input and the
  corresponding model. The final paired list is sorted in decreasing
  order of similarity.

classify'
  Internal method.
  NOTE: This function may create a new model, but it does not modify
  existing models.

matchOrder
  Orders models by ascending difference from the input pattern, then by
  creation order (label number).

trainAndClassify
  trainAndClassify s p identifies the model in s that most closely
  matches p, and updates it to be a somewhat better match. If
  necessary, it will create a new node and model. Returns the ID of the
  node with the best matching model, the difference between the best
  matching model and the pattern, the differences between the input and
  each model in the SGM, and the updated SGM.

train
  train s p identifies the model in s that most closely matches p, and
  updates it to be a somewhat better match. If necessary, it will
  create a new node and model.

trainBatch
  For each pattern p in ps, trainBatch s ps identifies the model in s
  that most closely matches p, and updates it to be a somewhat better
  match.

Data.Datamining.Clustering.SOMInternal

SOM
  A Self-Organising Map (SOM). Although SOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the
  GridMap functions, please note:
  * The functions adjust and adjustWithKey do not increment the
    counter. You can do so manually with incrementCounter.
  * The functions map and mapWithKey are not implemented (they just
    return an error). It would be problematic to implement them because
    the input SOM and the output SOM would have to have the same Metric
    type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM,
  the tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns. For example,
  if the function is f, then f t d returns the learning rate for a
  node (see the sketch below). The parameter t indicates how many
  patterns (or pattern batches) have previously been presented to the
  classifier. Typically this is used to make the learning rate decay
  over time. The parameter d is the grid distance from the node being
  updated to the BMU (Best Matching Unit). The output is the learning
  rate for that node (the amount by which the node's model should be
  updated to match the target). The learning rate should be between
  zero and one.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is
  more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.

counter
  A counter used as a "time" parameter. If you create the SOM with a
  counter value of 0, and don't directly modify it, then the counter
  will represent the number of patterns that this SOM has classified.
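To make the two-parameter shape of learningRate concrete, here is a toy
function of that shape. It is illustrative only, not one of the
package's built-ins (such as decayingGaussian, described below): it
ignores the time t and applies a Gaussian fall-off over grid distance.

> -- Illustrative only: a learning function of the documented form
> -- f t d, where t is the training time and d is the grid distance
> -- from the BMU.
> toyLearningFunction :: Double -> Double -> Double
> toyLearningFunction _t d = 0.5 * exp (negate (d * d) / 2)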
decayingGaussian
  A typical learning function for classifiers. decayingGaussian r0 rf
  w0 wf tf returns a bell curve-shaped function. At time zero, the
  maximum learning rate (applied to the BMU) is r0, and the
  neighbourhood width is w0. Over time the bell curve shrinks and the
  learning rate tapers off, until at time tf the maximum learning rate
  (applied to the BMU) is rf, and the neighbourhood width is wf.
  Normally the parameters should be chosen such that:
    0 < rf << r0 < 1
    0 < wf << w0
    0 < tf
  where << means "is much smaller than" (not the Haskell << operator!).

stepFunction
  A learning function that only updates the BMU and has a constant
  learning rate.

constantFunction
  A learning function that updates all nodes with the same, constant
  learning rate. This can be useful for testing.

withGridMap
  Internal method.

currentLearningFunction
  Returns the learning function currently being used by the SOM.

toGridMap
  Extracts the grid and current models from the SOM. A synonym for
  gridMap.

adjustNode
  Internal method.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

incrementCounter
  Increments the match counter.

justTrain
  Internal method.

Data.Datamining.Pattern

ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one. To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for
  each element in a vector, you can scale individual vectors using
  scale.

NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the vector
  is 1.

absDifference
  Returns the absolute difference between two numbers.

adjustNum
  Adjusts a number to make it more similar to the target.

magnitudeSquared
  Returns the sum of the squares of the elements of a vector.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector
  adjustVector target amount vector adjusts each element of vector to
  move it closer to the corresponding element of target. The amount of
  adjustment is controlled by the learning rate amount, which is a
  number between 0 and 1. Larger values of amount permit more
  adjustment. If amount=1, the result will be identical to the target.
  If amount=0, the result will be the unmodified pattern. If target is
  shorter than vector, the result will be the same length as target. If
  target is longer than vector, the result will be the same length as
  vector.

adjustVectorPreserveLength
  Same as adjustVector, except that the result will always be the same
  length as vector. This means that if target is shorter than vector,
  the "leftover" elements of vector will be copied to the result,
  unmodified.

normalise
  Normalises a vector.

scale
  Given a vector qs of pairs of numbers, where each pair represents the
  maximum and minimum value to be expected at each index in xs,
  scale qs xs scales the vector xs element by element, mapping the
  maximum value expected at that index to one, and the minimum value to
  zero.

scaleAll
  Scales a set of vectors by determining the maximum and minimum values
  at each index in the vector, and mapping the maximum value to one,
  and the minimum value to zero.
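Hedged sketches of two of the operations above, re-implemented over
plain lists for illustration; euclideanDistanceSq and adjustVec are
stand-ins, not the package's own definitions.

> -- Square of the Euclidean distance between two vectors.
> euclideanDistanceSq :: Num a => [a] -> [a] -> a
> euclideanDistanceSq xs ys = sum (map (^ (2 :: Int)) (zipWith (-) xs ys))
>
> -- Move each element of the vector towards the corresponding element
> -- of the target: amount = 0 leaves it unchanged, amount = 1 yields
> -- the target.  zipWith truncates to the shorter input, matching the
> -- documented length behaviour of adjustVector.
> adjustVec :: Num a => [a] -> a -> [a] -> [a]
> adjustVec target amount = zipWith (\t v -> v + amount * (t - v)) target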
Package: som-10.1.11

Modules:
  Data.Datamining.Clustering.Classifier
  Data.Datamining.Clustering.DSOMInternal
  Data.Datamining.Clustering.SGM4Internal
  Data.Datamining.Clustering.SGMInternal
  Data.Datamining.Clustering.SOMInternal
  Data.Datamining.Pattern
  Data.Datamining.Clustering.DSOM
  Data.Datamining.Clustering.SGM4
  Data.Datamining.Clustering.SGM
  Data.Datamining.Clustering.SOM
  Paths_som