Safe Haskell: None

Language: Haskell2010
Synopsis
 data T typ sh prob = Cons {
   initial :: Vector sh prob,
   transition :: Square sh prob,
   distribution :: T typ sh prob
 }
 type Discrete symbol sh prob = T (Discrete symbol) sh prob
 type DiscreteTrained symbol sh prob = Trained (Discrete symbol) sh prob
 type Gaussian emiSh stateSh a = T (Gaussian emiSh) stateSh a
 type GaussianTrained emiSh stateSh a = Trained (Gaussian emiSh) stateSh a
 uniform :: (Info typ, C sh, Real prob) => T typ sh prob -> T typ sh prob
 generate :: (Generate typ, Indexed sh, Real prob, RandomGen g, Random prob, Emission typ prob ~ emission) => T typ sh prob -> g -> [emission]
 generateLabeled :: (Generate typ, Indexed sh, Index sh ~ state, RandomGen g, Random prob, Real prob, Emission typ prob ~ emission) => T typ sh prob -> g -> [(state, emission)]
 probabilitySequence :: (EmissionProb typ, Indexed sh, Index sh ~ state, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> f (state, emission) -> f prob
 logLikelihood :: (EmissionProb typ, C sh, Eq sh, Floating prob, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> prob
 reveal :: (EmissionProb typ, InvIndexed sh, Eq sh, Index sh ~ state, Emission typ prob ~ emission, Real prob, Traversable f) => T typ sh prob -> T f emission -> T f state
 data Trained typ sh prob = Trained {
   trainedInitial :: Vector sh prob,
   trainedTransition :: Square sh prob,
   trainedDistribution :: Trained typ sh prob
 }
 trainSupervised :: (Estimate typ, Indexed sh, Index sh ~ state, Real prob, Emission typ prob ~ emission) => sh -> T [] (state, emission) -> Trained typ sh prob
 trainUnsupervised :: (Estimate typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission) => T typ sh prob -> T [] emission -> Trained typ sh prob
 mergeTrained :: (Estimate typ, C sh, Eq sh, Real prob) => Trained typ sh prob -> Trained typ sh prob -> Trained typ sh prob
 finishTraining :: (Estimate typ, C sh, Eq sh, Real prob) => Trained typ sh prob -> T typ sh prob
 trainMany :: (Estimate typ, C sh, Eq sh, Real prob, Foldable f) => (trainingData -> Trained typ sh prob) -> T f trainingData -> T typ sh prob
 deviation :: (C sh, Eq sh, Real prob) => T typ sh prob -> T typ sh prob -> prob
 toCSV :: (ToCSV typ, Indexed sh, Real prob, Show prob) => T typ sh prob -> String
 fromCSV :: (FromCSV typ, Indexed sh, Eq sh, Real prob, Read prob) => (Int -> sh) -> String -> Exceptional String (T typ sh prob)
Documentation
A Hidden Markov model consists of a number of (hidden) states
and a set of emissions.
There is a vector for the initial probability of each state
and a matrix containing the probability for switching
from one state to another one.
The distribution field points to probability distributions
that associate every state with emissions of different probabilities.
Common distribution instances are discrete and Gaussian distributions.
See Math.HiddenMarkovModel.Distribution for details.
The transition matrix is transposed with respect to popular HMM descriptions, but I think this is the natural orientation, because this way you can write "transition matrix times probability column vector".
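The convention can be illustrated with plain lists (an illustrative sketch only; `step`, `trans` and the list-based types below are not part of this library):

```haskell
-- Illustrative only: a plain-list sketch of the column-vector convention,
-- not the library's actual matrix types.
type Matrix = [[Double]]   -- row-major; entry (i,j) = P(next state = i | current = j)
type Vector = [Double]

-- Multiply the (transposed) transition matrix by a probability column vector:
-- dot each matrix row with the current state distribution.
step :: Matrix -> Vector -> Vector
step m v = map (sum . zipWith (*) v) m

-- Example: two states; from state 0 we stay with probability 0.9,
-- from state 1 we switch back with probability 0.5.
-- Note that each *column* sums to one in this orientation.
trans :: Matrix
trans = [ [0.9, 0.5]
        , [0.1, 0.5] ]
```

Starting surely in state 0, i.e. with distribution `[1, 0]`, one step yields the distribution `[0.9, 0.1]`.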
Constructors

Cons

  initial :: Vector sh prob
  transition :: Square sh prob
  distribution :: T typ sh prob
Instances
(C sh, Storable prob, Show sh, Show prob, Show typ) => Show (T typ sh prob) Source #  
(NFData typ, NFData sh, C sh, NFData prob, Storable prob) => NFData (T typ sh prob) Source #  
Defined in Math.HiddenMarkovModel.Private  
(Format typ, FormatArray sh, Real prob) => Format (T typ sh prob) Source #  
type DiscreteTrained symbol sh prob = Trained (Discrete symbol) sh prob Source #
type GaussianTrained emiSh stateSh a = Trained (Gaussian emiSh) stateSh a Source #
uniform :: (Info typ, C sh, Real prob) => T typ sh prob -> T typ sh prob Source #
Create a model with uniform probabilities
for the initial vector and the transition matrix,
given a distribution for the emissions.
You can use this as a starting point for trainUnsupervised.
generate :: (Generate typ, Indexed sh, Real prob, RandomGen g, Random prob, Emission typ prob ~ emission) => T typ sh prob -> g -> [emission] Source #
generateLabeled :: (Generate typ, Indexed sh, Index sh ~ state, RandomGen g, Random prob, Real prob, Emission typ prob ~ emission) => T typ sh prob -> g -> [(state, emission)] Source #
probabilitySequence :: (EmissionProb typ, Indexed sh, Index sh ~ state, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> f (state, emission) -> f prob Source #
logLikelihood :: (EmissionProb typ, C sh, Eq sh, Floating prob, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> prob Source #
Logarithm of the likelihood of observing the given sequence. We return the logarithm because the likelihood can be so small that it would be rounded to zero in the chosen number type.
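For intuition, the likelihood itself can be computed by the forward algorithm. The following is a hypothetical list-based sketch, not the library's implementation; `likelihood` and its list types are illustrative names only, and it returns the raw probability, which is exactly what underflows for long sequences:

```haskell
import Data.List (foldl')

-- Likelihood of an observation sequence under an HMM, via the forward algorithm.
-- trans uses the column convention of this module: entry (i,j) = P(i | j).
likelihood
  :: [Double]            -- initial state probabilities
  -> [[Double]]          -- transition matrix, row-major
  -> (sym -> [Double])   -- emission probability of a symbol in each state
  -> [sym]               -- observed sequence
  -> Double
likelihood initial _ _ [] = sum initial   -- empty sequence: probability 1
likelihood initial trans emit (o:os) = sum (foldl' advance start os)
  where
    -- alpha vector: probability of the prefix, ending in each state
    start = zipWith (*) (emit o) initial
    advance alpha sym =
      zipWith (*) (emit sym) (map (sum . zipWith (*) alpha) trans)
```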
reveal :: (EmissionProb typ, InvIndexed sh, Eq sh, Index sh ~ state, Emission typ prob ~ emission, Real prob, Traversable f) => T typ sh prob -> T f emission -> T f state Source #
Reveal the state sequence that most likely led to the observed sequence of emissions. It is found using the Viterbi algorithm.
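The dynamic programming behind it can be sketched with plain lists (a hypothetical standalone implementation; `viterbi`, integer state indices and the list types are not the library's API, which works on its array shapes instead):

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

-- Most likely state sequence for an observation sequence (Viterbi algorithm).
-- trans uses the column convention of this module: entry (i,j) = P(i | j).
viterbi
  :: [Double]            -- initial state probabilities
  -> [[Double]]          -- transition matrix, row-major
  -> (sym -> [Double])   -- emission probability of a symbol in each state
  -> [sym]               -- observations
  -> [Int]               -- most likely state sequence, as state indices
viterbi _ _ _ [] = []
viterbi initial trans emit (o:os) =
  reverse . snd . maximumBy (comparing fst) $ foldl advance start os
  where
    -- one (probability, reversed best path) cell per state
    start = [ (p0 * b, [i]) | (i, p0, b) <- zip3 [0..] initial (emit o) ]
    advance cells sym = zipWith cell [0..] (emit sym)
      where
        cell i b =
          let (p, path) = maximumBy (comparing fst)
                [ (q * (trans !! i !! j), qs) | (j, (q, qs)) <- zip [0..] cells ]
          in (p * b, i : path)
```

For example, with two states that prefer to persist (`trans = [[0.8, 0.2], [0.2, 0.8]]`) and emissions that favor state 0 for symbol `'a'` and state 1 for `'b'`, the observation `"ab"` is decoded as the state path `[0, 1]`.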
data Trained typ sh prob Source #
A trained model is a temporary form of a Hidden Markov model
that we need during training on multiple training sequences.
It allows collecting knowledge over many sequences with mergeTrained,
even with mixed supervised and unsupervised training.
You finish the training by converting the trained model
back to a plain model using finishTraining.
You can create a trained model in three ways:
 supervised training using an emission sequence with associated states,
 unsupervised training using an emission sequence and an existing Hidden Markov Model,
 derive it from state sequence patterns, cf. Math.HiddenMarkovModel.Pattern.
Constructors

Trained

  trainedInitial :: Vector sh prob
  trainedTransition :: Square sh prob
  trainedDistribution :: Trained typ sh prob
Instances
(C sh, Storable prob, Show sh, Show prob, Show typ) => Show (Trained typ sh prob) Source #  
(Estimate typ, C sh, Eq sh, Real prob) => Semigroup (Trained typ sh prob) Source #  
(NFData typ, NFData sh, C sh, NFData prob, Storable prob) => NFData (Trained typ sh prob) Source #  
Defined in Math.HiddenMarkovModel.Private 
trainSupervised :: (Estimate typ, Indexed sh, Index sh ~ state, Real prob, Emission typ prob ~ emission) => sh -> T [] (state, emission) -> Trained typ sh prob Source #
Contribute a manually labeled emission sequence to an HMM training.
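In essence, supervised estimation reduces to counting and normalizing. A standalone sketch for the transition part (the helper names are hypothetical; the library additionally estimates the initial vector and the emission distribution):

```haskell
import qualified Data.Map as Map

-- Count consecutive (from, to) state pairs in a labeled state path.
countTransitions :: Ord s => [s] -> Map.Map (s, s) Int
countTransitions path =
  Map.fromListWith (+) [ ((a, b), 1) | (a, b) <- zip path (drop 1 path) ]

-- Relative frequency of the transition (from, to)
-- among all transitions leaving 'from'.
transitionProb :: Ord s => [s] -> (s, s) -> Double
transitionProb path (a, b) =
  fromIntegral (Map.findWithDefault 0 (a, b) counts) / fromIntegral total
  where
    counts = countTransitions path
    total = sum [ n | ((a', _), n) <- Map.toList counts, a' == a ]
```

For the labeled path `"aab"` the two transitions leaving state `'a'` split evenly, so `transitionProb "aab" ('a', 'a')` is `0.5`.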
trainUnsupervised :: (Estimate typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission) => T typ sh prob -> T [] emission -> Trained typ sh prob Source #
Consider a superposition of all possible state sequences, weighted by the likelihood of producing the observed emission sequence, and train the model on all of these sequences according to those weights. This is done by the Baum-Welch algorithm.
mergeTrained :: (Estimate typ, C sh, Eq sh, Real prob) => Trained typ sh prob -> Trained typ sh prob -> Trained typ sh prob Source #
finishTraining :: (Estimate typ, C sh, Eq sh, Real prob) => Trained typ sh prob -> T typ sh prob Source #
trainMany :: (Estimate typ, C sh, Eq sh, Real prob, Foldable f) => (trainingData -> Trained typ sh prob) -> T f trainingData -> T typ sh prob Source #
deviation :: (C sh, Eq sh, Real prob) => T typ sh prob -> T typ sh prob -> prob Source #
Compute the maximum deviation between the initial and transition probabilities of two models. You can use this as an abort criterion for unsupervised training. We omit computing differences between the emission probabilities; this simplifies matters a lot and should suffice for defining an abort criterion.
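A training loop around this could use a generic convergence helper like the following sketch (`untilConverged` is not part of the library; in practice `measure` would be deviation applied to the models of successive training rounds and `step` one trainUnsupervised/finishTraining iteration):

```haskell
-- Iterate a refinement step until two successive values
-- differ by at most eps according to the given measure.
untilConverged :: (a -> a -> Double) -> Double -> (a -> a) -> a -> a
untilConverged measure eps step = go
  where
    go x
      | measure x x' <= eps = x'
      | otherwise           = go x'
      where x' = step x
```

As a self-contained illustration, iterating Newton's method for the square root of 2 with absolute difference as the measure converges to `sqrt 2`.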