| Safe Haskell | None |
|---|---|
| Language | Haskell2010 |
Math.HiddenMarkovModel.Normalized
Description
Counterparts to the functions in Math.HiddenMarkovModel.Private that normalize interim results. We need this in order to prevent very small probabilities from being rounded to zero.
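In the signatures below, the `prob` paired with each `Vector sh prob` can be read as the per-step normalization factor; this matches the classic scaling scheme for the forward algorithm (cf. Rabiner's HMM tutorial). A sketch of that scheme, with transition matrix $A = (a_{ij})$, emission probabilities $b_j(y)$ and initial distribution $\pi$:

$$\bar\alpha_1(j) = \pi_j\, b_j(y_1), \qquad c_t = \sum_j \bar\alpha_t(j), \qquad \hat\alpha_t = \bar\alpha_t / c_t,$$
$$\bar\alpha_{t+1}(j) = b_j(y_{t+1}) \sum_i \hat\alpha_t(i)\, a_{ij}, \qquad \log P(y_1,\dots,y_T) = \sum_{t=1}^{T} \log c_t.$$

Every $\hat\alpha_t$ sums to one, so no interim value underflows, and the log-likelihood is recovered from the scaling factors alone.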
Synopsis
- logLikelihood :: (EmissionProb typ, C sh, Eq sh, Floating prob, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> prob
- alpha :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> T f (prob, Vector sh prob)
- beta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Reverse f) => T typ sh prob -> f (prob, emission) -> T f (Vector sh prob)
- alphaBeta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Zip f, Reverse f) => T typ sh prob -> T f emission -> (T f (prob, Vector sh prob), T f (Vector sh prob))
- xiFromAlphaBeta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Zip f) => T typ sh prob -> T f emission -> T f (prob, Vector sh prob) -> T f (Vector sh prob) -> f (Square sh prob)
- zetaFromAlphaBeta :: (C sh, Eq sh, Real prob, Zip f) => T f (prob, Vector sh prob) -> T f (Vector sh prob) -> T f (Vector sh prob)
- reveal :: (EmissionProb typ, InvIndexed sh, Eq sh, Index sh ~ state, Emission typ prob ~ emission, Real prob, Traversable f) => T typ sh prob -> T f emission -> T f state
- nonEmptyScanr :: (Traversable f, Reverse f) => (a -> b -> b) -> b -> f a -> T f b
- trainUnsupervised :: (Estimate typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission) => T typ sh prob -> T [] emission -> Trained typ sh prob
Documentation
>>> import qualified Data.NonEmpty as NonEmpty
logLikelihood :: (EmissionProb typ, C sh, Eq sh, Floating prob, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> prob
Logarithm of the likelihood of observing the given sequence. We return the logarithm because the likelihood can be so small that it may be rounded to zero in the chosen number type.
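A minimal GHCi sketch, assuming a hypothetical model `model` whose emission type is Char, with the observation sequence built via Data.NonEmpty:

>>> let observations = NonEmpty.cons 'a' "ab"
>>> logLikelihood model observations

Exponentiating the result would reproduce the raw likelihood, but for long sequences that value can underflow to zero, which is precisely why only the logarithm is returned.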
alpha :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f) => T typ sh prob -> T f emission -> T f (prob, Vector sh prob)
beta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Reverse f) => T typ sh prob -> f (prob, emission) -> T f (Vector sh prob)
alphaBeta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Zip f, Reverse f) => T typ sh prob -> T f emission -> (T f (prob, Vector sh prob), T f (Vector sh prob))
xiFromAlphaBeta :: (EmissionProb typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission, Traversable f, Zip f) => T typ sh prob -> T f emission -> T f (prob, Vector sh prob) -> T f (Vector sh prob) -> f (Square sh prob)
zetaFromAlphaBeta :: (C sh, Eq sh, Real prob, Zip f) => T f (prob, Vector sh prob) -> T f (Vector sh prob) -> T f (Vector sh prob)
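For orientation: read through Rabiner's notation, these appear to be the standard Baum-Welch posteriors, with $x_t$ the hidden state at time $t$ and $y_{1:T}$ the complete emission sequence:

$$\xi_t(i,j) = P(x_t = i,\ x_{t+1} = j \mid y_{1:T}), \qquad \zeta_t(i) = P(x_t = i \mid y_{1:T}),$$

where $\zeta$ plays the role usually written $\gamma$. Consistent with that reading, xiFromAlphaBeta yields one square matrix per transition (one element fewer than the sequence has emissions), while zetaFromAlphaBeta yields one state distribution per time step; both feed the re-estimation step behind trainUnsupervised.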
reveal :: (EmissionProb typ, InvIndexed sh, Eq sh, Index sh ~ state, Emission typ prob ~ emission, Real prob, Traversable f) => T typ sh prob -> T f emission -> T f state
Reveal the state sequence that most likely led to the observed sequence of emissions. It is found using the Viterbi algorithm.
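A sketch with the same hypothetical `model` and observation sequence as above:

>>> reveal model (NonEmpty.cons 'a' "ab")

The result has the same shape as the input and contains state indices (the `Index sh` type), namely the maximum-probability path of the Viterbi dynamic program.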
nonEmptyScanr :: (Traversable f, Reverse f) => (a -> b -> b) -> b -> f a -> T f b
Variant of NonEmpty.scanr with less stack consumption.
\x xs -> nonEmptyScanr (-) x xs == NonEmpty.scanr (-) x (xs::[Int])
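A concrete instance; `NonEmpty.flatten` turns the non-empty result back into a plain list:

>>> NonEmpty.flatten (nonEmptyScanr (-) 1 [10, 5 :: Int])
[6,4,1]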
trainUnsupervised :: (Estimate typ, C sh, Eq sh, Real prob, Emission typ prob ~ emission) => T typ sh prob -> T [] emission -> Trained typ sh prob
Consider a superposition of all possible state sequences, weighted by their likelihood of producing the observed emission sequence. The model is then trained on all of these sequences according to their weights. This is done by the Baum-Welch algorithm.
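A hypothetical training loop, assuming that `finishTraining` from Math.HiddenMarkovModel converts the Trained accumulator back into a model, and reusing `model` and `observations` from above:

>>> import qualified Math.HiddenMarkovModel as HMM
>>> let step m = HMM.finishTraining (trainUnsupervised m observations)
>>> let model' = iterate step model !! 10

Each Baum-Welch step cannot decrease the likelihood, so one typically iterates until logLikelihood stops improving noticeably.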