-- Hoogle documentation, generated by Haddock -- See Hoogle, http://www.haskell.org/hoogle/ -- | Compute Maximum Entropy Distributions -- -- The maximum entropy method, or MAXENT, is a variational approach for -- computing probability distributions given a list of moment, or -- expected value, constraints. -- -- Here are some links for background info. -- -- A good overview of applications: -- http://cmm.cit.nih.gov/maxent/letsgo.html -- -- On the idea of maximum entropy in general: -- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy -- -- Use this package to compute discrete maximum entropy distributions -- over a list of values and a list of constraints. -- -- Here is the example from Probability: The Logic of Science -- --
-- > maxent 0.00001 [1,2,3] [average 1.5] -- Right [0.61, 0.26, 0.11] ---- -- The classic dice example -- --
-- > maxent 0.00001 [1,2,3,4,5,6] [average 4.5] -- Right [.05, .07, 0.11, 0.16, 0.23, 0.34] ---- -- One can use different constraints besides the average value there. @package maxent @version 0.6.0.3 -- | The maximum entropy method, or MAXENT, is a variational approach for -- computing probability distributions given a list of moment, or -- expected value, constraints. -- -- Here are some links for background info. A good overview of -- applications: http://cmm.cit.nih.gov/maxent/letsgo.html On the -- idea of maximum entropy in general: -- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy -- -- Use this package to compute discrete maximum entropy distributions -- over a list of values and a list of constraints. -- -- Here is the example from Probability: The Logic of Science -- --
-- >>> maxent 0.00001 [1,2,3] [average 1.5] -- Right [0.61, 0.26, 0.11] ---- -- The classic dice example -- --
-- >>> maxent 0.00001 [1,2,3,4,5,6] [average 4.5] -- Right [.05, .07, 0.11, 0.16, 0.23, 0.34] ---- -- One can use different constraints besides the average value there. -- -- As for why you want to maximize the entropy to find the probability -- distribution, I will say this for now. In the case of the average -- constraint it is akin to choosing an integer partition with the most -- integer compositions. I doubt that makes any sense, but I will try to -- explain more with a blog post soon. module Numeric.MaxEnt -- | A constraint of the form g(x, y, ...) = c. See -- <=> for constructing a Constraint. type Constraint a = (FU a, a) (.=.) :: (forall s. Mode s => AD s a -> AD s a) -> a -> ExpectationConstraint a newtype UU a UU :: (forall s. Mode s => ExpectationFunction (AD s a)) -> UU a unUU :: UU a -> forall s. Mode s => ExpectationFunction (AD s a) -- | Constraint type. A function and the constant it equals. -- -- Think of it as the pair (f, c) in the constraint -- --
-- Σ pₐ f(xₐ) = c ---- -- such that we are summing over all values xₐ. -- -- For example, for a variance constraint the f would be (\x -- -> x*x) and c would be the variance. type ExpectationConstraint a = (UU a, a) -- | A function that takes an index and value and returns a value. See -- average and variance for examples. type ExpectationFunction a = a -> a average :: Num a => a -> ExpectationConstraint a variance :: Num a => a -> ExpectationConstraint a -- | Discrete maximum entropy solver where the constraints are all moment -- constraints. maxent :: Double -> [Double] -> [ExpectationConstraint Double] -> Either (Result, Statistics) (Vector Double) -- | A more general solver. This directly solves the Lagrangian of the -- constraints and the additional constraint that the probabilities -- must sum to one. general :: Double -> Int -> [Constraint Double] -> Either (Result, Statistics) (Vector Double) data LinearConstraints a LC :: [[a]] -> [a] -> LinearConstraints a matrix :: LinearConstraints a -> [[a]] output :: LinearConstraints a -> [a] -- | This is for the linear case Ax = b. x in this situation is the -- vector of probabilities. -- -- Consider the 1-dimensional circular convolution using hmatrix. -- --
-- >>> import Numeric.LinearAlgebra -- -- >>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]] -- (3><1) [0.276, 0.426, 0.298] ---- -- Now if we were given just the convolution and the output, we can use -- linear to infer the input. -- --
-- >>> linear 3.0e-17 $ LC [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] [0.276, 0.426, 0.298] -- Right [0.20000000000000004,0.4999999999999999,0.3] ---- -- I feel compelled to point out that we could also just invert the -- original convolution matrix. Supposedly using maxent can reduce errors -- from noise if the convolution matrix is not properly estimated. linear :: Double -> LinearConstraints Double -> Either (Result, Statistics) (Vector Double) linear' :: LinearConstraints Double -> [[Double]] linear'' :: LinearConstraints Double -> [[Double]]
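For the single average constraint used in both `maxent` examples above, the maximum entropy distribution has the closed Gibbs form pᵢ ∝ exp(-λ xᵢ), with λ chosen so the mean matches the constraint. The sketch below (independent of this package; the names `gibbsMean`, `solveLambda`, and `maxentAvg` are illustrative, not part of this API) reproduces the two examples by bisection on λ. The package itself solves the general Lagrangian numerically, so treat this only as a check on the special case.

```haskell
module Main where

-- Mean of the Gibbs distribution p_i ∝ exp(-l * x_i) over the values xs.
gibbsMean :: [Double] -> Double -> Double
gibbsMean xs l = sum (zipWith (*) xs ps) / sum ps
  where ps = map (\x -> exp (negate l * x)) xs

-- gibbsMean is monotonically decreasing in l (its derivative is the
-- negated variance), so the multiplier can be found by bisection.
solveLambda :: [Double] -> Double -> Double
solveLambda xs mu = go (-50) 50 (0 :: Int)
  where
    go lo hi n
      | n >= 200              = mid
      | gibbsMean xs mid > mu = go mid hi (n + 1)
      | otherwise             = go lo mid (n + 1)
      where mid = (lo + hi) / 2

-- Maximum entropy distribution over xs subject to an average-value
-- constraint, via the closed Gibbs form.
maxentAvg :: [Double] -> Double -> [Double]
maxentAvg xs mu = map (/ z) ws
  where
    l  = solveLambda xs mu
    ws = map (\x -> exp (negate l * x)) xs
    z  = sum ws

main :: IO ()
main = do
  print (maxentAvg [1, 2, 3] 1.5)              -- ≈ [0.616, 0.268, 0.116]
  print (maxentAvg [1, 2, 3, 4, 5, 6] 4.5)
```

The results agree with the `Right [...]` values quoted above (the doc examples round the probabilities down).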