som: Self-Organising Maps
(c) Amy de Buitléir 2012-2018, BSD-style licence
Maintainer: amy@nualeargais.ie
Stability: experimental; Portability: portable

Data.Datamining.Clustering.Classifier

class Classifier c
  A machine which learns to classify input patterns.
  Minimal complete definition: trainBatch, reportAndTrain.

toList
  Returns a list of index/model pairs.

numModels
  Returns the number of models this classifier can learn.

models
  Returns the current models of the classifier.

differences
  differences c target returns the indices of all nodes in c, paired
  with the difference between target and the node's model.

classify
  classify c target returns the index of the node in c whose model
  best matches the target.

train
  train c target returns a modified copy of the classifier c that has
  partially learned the target.

trainBatch
  trainBatch c targets returns a modified copy of the classifier c
  that has partially learned the targets.

classifyAndTrain
  classifyAndTrain c target returns a tuple containing the index of
  the node in c whose model best matches the input target, and a
  modified copy of the classifier c that has partially learned the
  target. Invoking classifyAndTrain c p may be faster than invoking
  (classify c p, train c p), but they should give identical results.

diffAndTrain
  diffAndTrain c target returns a tuple containing:
  1. The indices of all nodes in c, paired with the difference between
     target and the node's model
  2. A modified copy of the classifier c that has partially learned
     the target
  Invoking diffAndTrain c p may be faster than invoking
  (differences c p, train c p), but they should give identical
  results.

reportAndTrain
  reportAndTrain c target returns a tuple containing:
  1. The index of the node in c whose model best matches the input
     target
  2. The indices of all nodes in c, paired with the difference between
     target and the node's model
  3. A modified copy of the classifier c that has partially learned
     the target
  Invoking reportAndTrain c p may be faster than invoking the
  corresponding separate calls, but they should give identical
  results.
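To illustrate the Classifier contract, here is a minimal sketch (not the library's actual code) using a plain list of Double models and absolute difference, showing that classifyAndTrain must agree with separate classify and train calls. The names mirror the class methods but the implementations are purely illustrative.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

type Model = Double

-- Index of the best matching unit (BMU) for a pattern.
classify :: [Model] -> Double -> Int
classify ms p = fst $ minimumBy (comparing snd)
                    $ zip [0 ..] (map (\m -> abs (m - p)) ms)

-- Nudge the BMU towards the pattern by a fixed learning rate.
train :: Double -> [Model] -> Double -> [Model]
train rate ms p = [ if i == bmu then m + rate * (p - m) else m
                  | (i, m) <- zip [0 ..] ms ]
  where bmu = classify ms p

-- May be implemented more efficiently than the two separate calls,
-- but must give identical results.
classifyAndTrain :: Double -> [Model] -> Double -> (Int, [Model])
classifyAndTrain rate ms p = (classify ms p, train rate ms p)

main :: IO ()
main = print (classifyAndTrain 0.1 [0.0, 0.5, 1.0] 0.6)
```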
Data.Datamining.Clustering.DSOMInternal

data DSOM
  A Self-Organising Map (DSOM). Although DSOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the
  GridMap functions, please note:
  1. The functions adjust and adjustWithKey do not increment the
     counter. You can do so manually with incrementCounter.
  2. The functions map and mapWithKey are not implemented (they just
     return an error). It would be problematic to implement them
     because the input DSOM and the output DSOM would have to have the
     same Metric type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM,
  the tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is
  more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.

withGridMap
  Internal method.

toGridMap
  Extracts the grid and current models from the DSOM.

adjustNode
  Internal method.

scaleDistance
  Internal method.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

justTrain
  Internal method.

rougierLearningFunction
  Configures a learning function that depends not on the time, but on
  how good a model we already have for the target. If the BMU is an
  exact match for the target, no learning occurs. Usage is
  rougierLearningFunction r p, where r is the maximal learning rate
  (0 <= r <= 1), and p is the elasticity.
  NOTE: When using this learning function, ensure that
  abs . difference is always between 0 and 1, inclusive. Otherwise you
  may get invalid learning rates.
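A sketch of an elasticity-based learning rule in the style of Rougier and Boniface's DSOM follows. This is an illustration of the idea only; the library's actual rougierLearningFunction may differ in detail. Here bmuDiff is the difference between the target and the BMU's model, diff the difference between the target and this node's model, and dist the grid distance from this node to the BMU.

```haskell
-- r is the maximal learning rate, p the elasticity. When the BMU is
-- an exact match (bmuDiff == 0), no learning occurs, as documented.
rougierLearningFunction
  :: Double -> Double -> (Double -> Double -> Double -> Double)
rougierLearningFunction r p = f
  where
    f bmuDiff diff dist
      | bmuDiff == 0 = 0                  -- exact match: no learning
      | otherwise    = r * abs diff * exp (-k * k)
      where k = dist / (p * abs bmuDiff)  -- distance scaled by fit

main :: IO ()
main = print (rougierLearningFunction 1 1 0.5 0.5 0)  -- 0.5
```

Note how a poor BMU match (large bmuDiff) widens the effective neighbourhood, which is what makes the rule depend on model quality rather than on time.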
Data.Datamining.Clustering.SGMInternal

data SGM t x k p
  A Simplified Self-Organising Map (SGM).
    t is the type of the counter.
    x is the type of the learning rate and the difference metric.
    k is the type of the model indices.
    p is the type of the input patterns and models.

toMap
  Maps patterns and match counts to nodes.

learningRate
  A function which determines the learning rate for a node. The input
  parameter indicates how many patterns (or pattern batches) have
  previously been presented to the classifier. Typically this is used
  to make the learning rate decay over time. The output is the
  learning rate for that node (the amount by which the node's model
  should be updated to match the target). The learning rate should be
  between zero and one.

maxSize
  The maximum number of models this SGM can hold.

diffThreshold
  The threshold that triggers creation of a new model.

allowDeletion
  Delete existing models to make room for new ones? The least useful
  (least frequently matched) models will be deleted first.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. For example, if this function is f,
  then f target amount pattern returns a modified copy of pattern that
  is more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.
nextIndex
  Index for the next node to add to the SGM.

exponential
  A typical learning function for classifiers. exponential r0 d t
  returns the learning rate at time t. When t = 0, the learning rate
  is r0. Over time the learning rate decays exponentially; the decay
  rate is d. Normally the parameters are chosen such that:
    0 < r0 < 1
    0 < d

makeSGM
  makeSGM lr n dt diff ms creates a new SGM that does not (yet)
  contain any models. It will learn at the rate determined by the
  learning function lr, and will be able to hold up to n models. It
  will create a new model based on a pattern presented to it when
  (1) the SGM contains no models, or (2) the difference between the
  pattern and the closest matching model exceeds the threshold dt. It
  will use the function diff to measure the similarity between an
  input pattern and a model. It will use the function ms to adjust
  models as needed to make them more similar to input patterns.

isEmpty
  Returns true if the SGM has no models, false otherwise.

numModels
  Returns the number of models the SGM currently contains.

modelMap
  Returns a map from node ID to model.

counterMap
  Returns a map from node ID to counter (number of times the node's
  model has been the closest match to an input pattern).

modelAt
  Returns the model at a specified node.

labels
  Returns the current labels.

models
  Returns the current models.

counters
  Returns the current counters (number of times the node's model has
  been the closest match to an input pattern).

time
  The current "time" (number of times the SGM has been trained).

addNode
  Adds a new node to the SGM.

deleteNode
  Removes a node from the SGM. Deleted nodes are never re-used.

incrementCounter
  Increments the match counter.

trainNode
  Trains the specified node to better match a target. Most users
  should use train, which automatically determines the BMU and trains
  it.
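The exponential learning function described above can be sketched directly from its specification: the rate starts at r0 at time 0 and decays at rate d. The library's exponential should behave the same way on these points.

```haskell
-- Learning rate at time t: r0 at t = 0, decaying exponentially
-- with decay rate d.
exponential :: Double -> Double -> Double -> Double
exponential r0 d t = r0 * exp (-d * t)

main :: IO ()
main = mapM_ (print . exponential 0.5 0.1) [0, 10, 100]
```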
leastUsefulNode
  Returns the node that has been the BMU least often.

deleteLeastUsefulNode
  Deletes the node that has been the BMU least often.

addModel
  Adds a new node to the SGM, deleting the least useful node/model if
  necessary to make room.

classify'
  classify' s p identifies the model in s that most closely matches
  the pattern p. It will not make any changes to the classifier.
  Returns the ID of the node with the best matching model, the
  difference between the best matching model and the pattern, and the
  SGM labels paired with the model and the difference between the
  input and the corresponding model. The final paired list is sorted
  in decreasing order of similarity.
  Internal method. NOTE: This function may create a new model, but it
  does not modify existing models.

matchOrder
  Orders models by ascending difference from the input pattern, then
  by creation order (label number).

trainAndClassify
  trainAndClassify s p identifies the model in s that most closely
  matches p, and updates it to be a somewhat better match. If
  necessary, it will create a new node and model. Returns the ID of
  the node with the best matching model, the difference between the
  best matching model and the pattern, the differences between the
  input and each model in the SGM, and the updated SGM.

train
  train s p identifies the model in s that most closely matches p, and
  updates it to be a somewhat better match. If necessary, it will
  create a new node and model.

trainBatch
  For each pattern p in ps, trainBatch s ps identifies the model in s
  that most closely matches p, and updates it to be a somewhat better
  match.
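The SGM training behaviour described above can be sketched with a toy growing classifier (not the library's implementation): match a pattern against stored models; if there are no models, or the best difference exceeds the threshold dt, create a new model; otherwise nudge the winner towards the pattern. The `trainStep` name and Double models are illustrative assumptions.

```haskell
import qualified Data.Map.Strict as M
import Data.List (minimumBy)
import Data.Ord (comparing)

type SGM = M.Map Int Double

trainStep :: Double -> Double -> SGM -> Double -> SGM
trainStep rate dt s p
  | M.null s || bestDiff > dt = M.insert (M.size s) p s  -- new model
  | otherwise = M.adjust (\m -> m + rate * (p - m)) bestK s
  where
    (bestK, bestDiff) = minimumBy (comparing snd)
                          [ (k, abs (m - p)) | (k, m) <- M.toList s ]

main :: IO ()
main = print (foldl (trainStep 0.1 0.3) M.empty [0.0, 1.0, 0.05])
```

The fold presents three patterns: the first two create models (nothing matches within the threshold), while the third is close enough to the first model to train it instead.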
Data.Datamining.Clustering.SOMInternal

data SOM
  A Self-Organising Map (SOM). Although SOM implements GridMap, most
  users will only need the interface provided by
  Data.Datamining.Clustering.Classifier. If you choose to use the
  GridMap functions, please note:
  1. The functions adjust and adjustWithKey do not increment the
     counter. You can do so manually with incrementCounter.
  2. The functions map and mapWithKey are not implemented (they just
     return an error). It would be problematic to implement them
     because the input SOM and the output SOM would have to have the
     same Metric type.

gridMap
  Maps patterns to tiles in a regular grid. In the context of a SOM,
  the tiles are called "nodes".

learningRate
  A function which determines how quickly the SOM learns. For example,
  if the function is f, then f t d returns the learning rate for a
  node. The parameter t indicates how many patterns (or pattern
  batches) have previously been presented to the classifier. Typically
  this is used to make the learning rate decay over time. The
  parameter d is the grid distance from the node being updated to the
  BMU (Best Matching Unit). The output is the learning rate for that
  node (the amount by which the node's model should be updated to
  match the target). The learning rate should be between zero and one.

difference
  A function which compares two patterns and returns a non-negative
  number representing how different the patterns are. A result of 0
  indicates that the patterns are identical.

makeSimilar
  A function which updates models. If this function is f, then
  f target amount pattern returns a modified copy of pattern that is
  more similar to target than pattern is. The magnitude of the
  adjustment is controlled by the amount parameter, which should be a
  number between 0 and 1. Larger values for amount permit greater
  adjustments. If amount=1, the result should be identical to the
  target. If amount=0, the result should be the unmodified pattern.

counter
  A counter used as a "time" parameter. If you create the SOM with a
  counter value of 0, and don't directly modify it, then the counter
  will represent the number of patterns that this SOM has classified.
decayingGaussian
  A typical learning function for classifiers.
  decayingGaussian r0 rf w0 wf tf returns a bell curve-shaped
  function. At time zero, the maximum learning rate (applied to the
  BMU) is r0, and the neighbourhood width is w0. Over time the bell
  curve shrinks and the learning rate tapers off, until at time tf,
  the maximum learning rate (applied to the BMU) is rf, and the
  neighbourhood width is wf. Normally the parameters should be chosen
  such that:
    0 < rf << r0 < 1
    0 < wf << w0
    0 < tf
  where << means "is much smaller than" (not the Haskell << operator!)

stepFunction
  A learning function that only updates the BMU and has a constant
  learning rate.

constantFunction
  A learning function that updates all nodes with the same, constant
  learning rate. This can be useful for testing.

currentLearningFunction
  Returns the learning function currently being used by the SOM.

toGridMap
  Extracts the grid and current models from the SOM. A synonym for
  gridMap.

withGridMap
  Internal method.

trainNeighbourhood
  Trains the specified node and the neighbourhood around it to better
  match a target. Most users should use train, which automatically
  determines the BMU and trains it and its neighbourhood.

incrementCounter
  Increments the match counter.

justTrain
  Internal method.
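A bell-curve learning function with the documented endpoints can be sketched as follows. The geometric interpolation between (r0, w0) and (rf, wf) is an assumption; the library's decayingGaussian should agree at t = 0 and t = tf but may interpolate differently in between.

```haskell
-- Returns a function of time t and grid distance d from the BMU.
decayingGaussian
  :: Double -> Double -> Double -> Double -> Double
  -> (Double -> Double -> Double)
decayingGaussian r0 rf w0 wf tf = \t d ->
  let a = t / tf                  -- fraction of training elapsed
      r = r0 * ((rf / r0) ** a)   -- current max rate (r0 .. rf)
      w = w0 * ((wf / w0) ** a)   -- current width (w0 .. wf)
  in r * exp (-(d * d) / (2 * w * w))

main :: IO ()
main = print (decayingGaussian 0.9 0.1 3 1 100 0 0)
```

At t = 0 and d = 0 this yields r0 (the BMU's initial rate); the rate falls off with grid distance as a Gaussian of the current width.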
Data.Datamining.Pattern

data ScaledVector
  A vector that has been scaled so that all elements in the vector are
  between zero and one. To scale a set of vectors, use scaleAll.
  Alternatively, if you can identify a maximum and minimum value for
  each element in a vector, you can scale individual vectors using
  scale.

data NormalisedVector
  A vector that has been normalised, i.e., the magnitude of the
  vector = 1.

absDifference
  Returns the absolute difference between two numbers.

adjustNum
  Adjusts a number to make it more similar to the target.

magnitudeSquared
  Returns the sum of the squares of the elements of a vector.

euclideanDistanceSquared
  Calculates the square of the Euclidean distance between two vectors.

adjustVector
  adjustVector target amount vector adjusts each element of vector to
  move it closer to the corresponding element of target. The amount of
  adjustment is controlled by the learning rate amount, which is a
  number between 0 and 1. Larger values of amount permit more
  adjustment. If amount=1, the result will be identical to the target.
  If amount=0, the result will be the unmodified pattern. If target is
  shorter than vector, the result will be the same length as target.
  If target is longer than vector, the result will be the same length
  as vector.

adjustVectorPreserveLength
  Same as adjustVector, except that the result will always be the same
  length as vector. This means that if target is shorter than vector,
  the "leftover" elements of vector will be copied to the result,
  unmodified.

normalise
  Normalises a vector.

scale
  Given a vector qs of pairs of numbers, where each pair represents
  the maximum and minimum value to be expected at each index in xs,
  scale qs xs scales the vector xs element by element, mapping the
  maximum value expected at that index to one, and the minimum value
  to zero.

scaleAll
  Scales a set of vectors by determining the maximum and minimum
  values at each index in the vector, and mapping the maximum value to
  one, and the minimum value to zero.
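The numeric helpers above are simple enough to sketch for plain lists of Doubles. These are illustrative re-implementations, not the library's code, but they should agree with it on the basic cases, including the documented truncate-to-shorter-input behaviour of adjustVector (which falls out of zipWith).

```haskell
absDifference :: Double -> Double -> Double
absDifference x y = abs (x - y)

-- Move x towards target by fraction amount (0 = unchanged, 1 = target).
adjustNum :: Double -> Double -> Double -> Double
adjustNum target amount x = x + amount * (target - x)

magnitudeSquared :: [Double] -> Double
magnitudeSquared = sum . map (^ 2)

euclideanDistanceSquared :: [Double] -> [Double] -> Double
euclideanDistanceSquared xs ys = magnitudeSquared (zipWith (-) xs ys)

-- zipWith truncates to the shorter input, giving the documented
-- length behaviour of adjustVector.
adjustVector :: [Double] -> Double -> [Double] -> [Double]
adjustVector target amount = zipWith (\t x -> adjustNum t amount x) target

main :: IO ()
main = do
  print (euclideanDistanceSquared [0, 0] [3, 4])  -- 25.0
  print (adjustVector [1, 1] 0.5 [0, 0.5])        -- [0.5,0.75]
```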
Package: som-10.1.8
Modules: Data.Datamining.Clustering.Classifier,
Data.Datamining.Clustering.DSOM,
Data.Datamining.Clustering.DSOMInternal,
Data.Datamining.Clustering.SGM,
Data.Datamining.Clustering.SGMInternal,
Data.Datamining.Clustering.SOM,
Data.Datamining.Clustering.SOMInternal,
Data.Datamining.Pattern, Paths_som