hmm-hmatrix-0.0.0.1: Hidden Markov Models using HMatrix primitives

Safe Haskell: None
Language: Haskell2010

Math.HiddenMarkovModel

Synopsis

Documentation

data T distr prob Source

A Hidden Markov model consists of a number of (hidden) states and a set of emissions. There is a vector for the initial probability of each state and a matrix containing the probabilities of switching from one state to another. The distribution field points to probability distributions that associate every state with emissions of different probability. Common distribution instances are discrete and Gaussian distributions. See Math.HiddenMarkovModel.Distribution for details.

The transition matrix is transposed with respect to popular HMM descriptions. But I think this is the natural orientation, because this way you can write "transition matrix times probability column vector".

The type has two type parameters, although the one for the distribution would be enough. However, replacing prob by Distr.Probability distr would prohibit the derived Show and Read instances.

Constructors

Cons 

Fields

initial :: Vector prob
 
transition :: Matrix prob
 
distribution :: distr
 

Instances

(Read distr, Read prob, Element prob) => Read (T distr prob) 
(Show distr, Show prob, Element prob) => Show (T distr prob) 
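
The fields can be filled in directly. Below is a minimal sketch (not part of the package documentation) of a two-state model; the emission distribution is left abstract because its construction belongs to Math.HiddenMarkovModel.Distribution, and the vector and matrix are built with the ordinary hmatrix constructors.

import qualified Math.HiddenMarkovModel as HMM
import Numeric.LinearAlgebra (fromList, fromLists)

-- A two-state model with the record fields filled in by hand.
-- 'distr' is a placeholder for an emission distribution built with
-- Math.HiddenMarkovModel.Distribution; its construction is not shown.
twoStates :: distr -> HMM.T distr Double
twoStates distr =
   HMM.Cons {
      HMM.initial = fromList [0.5, 0.5],
      -- transposed layout: column j holds the probabilities of
      -- switching away from state j, so every column sums to one
      HMM.transition = fromLists
         [[0.9, 0.2],
          [0.1, 0.8]],
      HMM.distribution = distr
   }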

type Discrete prob symbol = T (Discrete prob symbol) prob Source

type DiscreteTrained prob symbol = Trained (DiscreteTrained prob symbol) prob Source

type Gaussian a = T (Gaussian a) a Source

uniform :: (Info distr, Probability distr ~ prob) => distr -> T distr prob Source

Create a model with uniform probabilities for the initial vector and the transition matrix, given a distribution for the emissions. You can use this as a starting point for trainUnsupervised.
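
A hedged sketch of that workflow (all names below are placeholders and the type signature is left to inference): build the uniform model and run a single unsupervised refinement pass.

import qualified Math.HiddenMarkovModel as HMM

-- one Baum-Welch pass starting from the uniform model;
-- 'distr0' is an emission distribution, 'xs' an observed emission sequence
baumWelchPass distr0 xs =
   HMM.finishTraining (HMM.trainUnsupervised (HMM.uniform distr0) xs)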

generate :: (RandomGen g, Ord prob, Random prob, Generate distr, Probability distr ~ prob, Emission distr ~ emission) => T distr prob -> g -> [emission] Source

logLikelihood :: (EmissionProb distr, Floating prob, Probability distr ~ prob, Emission distr ~ emission, Traversable f) => T distr prob -> T f emission -> prob Source

Logarithm of the likelihood of observing the given sequence. We return the logarithm because the likelihood can be so small that it may be rounded to zero in the chosen number type.
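
For example, a hedged sketch of comparing two candidate models on the same observation sequence (names are placeholders, the signature is left to inference):

import qualified Math.HiddenMarkovModel as HMM

-- choose the model under which the observed sequence is more likely;
-- log-likelihoods order the models the same way the likelihoods would
pickModel modelA modelB xs =
   if HMM.logLikelihood modelA xs >= HMM.logLikelihood modelB xs
     then modelA
     else modelB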

reveal :: (EmissionProb distr, Probability distr ~ prob, Emission distr ~ emission, Traversable f, Reverse f) => T distr prob -> T f emission -> T f State Source

Reveal the state sequence that most likely led to the observed sequence of emissions. It is found using the Viterbi algorithm.
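
A hedged sketch of Viterbi decoding for a short observation sequence. It assumes, which this page does not spell out, that the sequence argument is a non-empty sequence (T from the non-empty package), built here with (!:).

import qualified Math.HiddenMarkovModel as HMM
import Data.NonEmpty ((!:))

-- most likely hidden state path for three observed emissions e0, e1, e2;
-- 'model' is a placeholder for a trained model
viterbiPath model e0 e1 e2 =
   HMM.reveal model (e0 !: [e1, e2])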

data Trained distr prob Source

A trained model is a temporary form of a Hidden Markov model that we need during training on multiple training sequences. It lets you collect knowledge from many sequences with mergeTrained, even with mixed supervised and unsupervised training. You finish the training by converting the trained model back to a plain model using finishTraining.

You can create a trained model in three ways:

  • supervised training using an emission sequence with associated states,
  • unsupervised training using an emission sequence and an existing Hidden Markov Model,
  • derive it from state sequence patterns, cf. Math.HiddenMarkovModel.Pattern.

Constructors

Trained 

Instances

(Read distr, Read prob, Element prob) => Read (Trained distr prob) 
(Show distr, Show prob, Element prob) => Show (Trained distr prob) 
(Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob) => Semigroup (Trained tdistr prob) 
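
A hedged sketch of the mixed workflow described above (all names are placeholders and the signature is left to inference): one manually labeled sequence and one unlabeled sequence contribute to the same training, and the merged result is turned back into a plain model.

import qualified Math.HiddenMarkovModel as HMM

-- combine one supervised and one unsupervised contribution;
-- 'numStates' is the Int argument of trainSupervised (presumably the
-- number of hidden states), 'labelled' pairs states with emissions,
-- 'model0' is an existing model and 'unlabelled' a plain emission sequence
trainMixed numStates labelled model0 unlabelled =
   HMM.finishTraining $
      HMM.mergeTrained
         (HMM.trainSupervised numStates labelled)
         (HMM.trainUnsupervised model0 unlabelled)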

trainSupervised :: (Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob, Emission distr ~ emission) => Int -> T [] (State, emission) -> Trained tdistr prob Source

Contribute a manually labeled emission sequence to an HMM training.

trainUnsupervised :: (Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob, Emission distr ~ emission) => T distr prob -> T [] emission -> Trained tdistr prob Source

Consider a superposition of all possible state sequences, weighted by the likelihood of producing the observed emission sequence. Now train the model on all of these sequences according to their weights. This is done by the Baum-Welch algorithm.

mergeTrained :: (Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob, Emission distr ~ emission) => Trained tdistr prob -> Trained tdistr prob -> Trained tdistr prob Source

finishTraining :: (Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob, Emission distr ~ emission) => Trained tdistr prob -> T distr prob Source

trainMany :: (Estimate tdistr, Distribution tdistr ~ distr, Probability distr ~ prob, Foldable f) => (trainingData -> Trained tdistr prob) -> T f trainingData -> T distr prob Source

deviation :: (Field prob, Ord prob) => T distr prob -> T distr prob -> prob Source

Compute the maximum deviation between the initial and transition probabilities of two models. You can use this as an abort criterion for unsupervised training. We omit computation of differences between the emission probabilities. This simplifies matters a lot and should suffice for defining an abort criterion.
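
A hedged sketch of such an abort criterion (names are placeholders, signature left to inference): repeat unsupervised training passes until successive models agree up to a threshold eps.

import qualified Math.HiddenMarkovModel as HMM

-- iterate Baum-Welch passes until the initial and transition
-- probabilities of successive models differ by less than 'eps'
trainUntilConverged eps xs model0 =
   let model1 = HMM.finishTraining (HMM.trainUnsupervised model0 xs)
   in  if HMM.deviation model0 model1 < eps
         then model1
         else trainUntilConverged eps xs model1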

toCSV :: (CSV distr, Field prob, Show prob) => T distr prob -> String Source

fromCSV :: (CSV distr, Field prob, Read prob) => String -> Exceptional String (T distr prob) Source