-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/

-- | Compute Maximum Entropy Distributions
--
-- @package maxent
-- @version 0.7

-- | The maximum entropy method, or MAXENT, is a variational approach for
--   computing probability distributions given a list of moment, or
--   expected value, constraints.
--
--   Here are some links for background info. A good overview of
--   applications: http://cmm.cit.nih.gov/maxent/letsgo.html. On the
--   idea of maximum entropy in general:
--   http://en.wikipedia.org/wiki/Principle_of_maximum_entropy
--
--   Use this package to compute discrete maximum entropy distributions
--   over a list of values and a list of constraints.
--
--   Here is the example from Jaynes' Probability Theory: The Logic of
--   Science.
--
--   >>> maxent 0.00001 [1,2,3] [average 1.5]
--   Right [0.61, 0.26, 0.11]
--
--   The classic dice example:
--
--   >>> maxent 0.00001 [1,2,3,4,5,6] [average 4.5]
--   Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
--
--   One can use constraints other than the average value.
--
--   As for why you would maximize the entropy to find the probability
--   distribution, I will say this for now: in the case of the average
--   constraint, it is akin to choosing the integer partition with the
--   most integer compositions. I doubt that makes any sense, but I will
--   try to explain more in a blog post soon.
module Numeric.MaxEnt

-- | An equality constraint of the form g(x, y, ...) = c. Use
--   <=> to construct a Constraint.
data Constraint :: *
(.=.) :: (forall a. Floating a => a -> a) -> (forall b. Floating b => b) -> ExpectationConstraint

-- | Constraint type: a function and the constant it equals.
--
--   Think of it as the pair (f, c) in the constraint
--
--   Σₐ pₐ f(xₐ) = c
--
--   where the sum ranges over all values xₐ.
--
--   For example, for a variance constraint the f would be
--   (\x -> x*x) and c would be the variance.
data ExpectationConstraint
average :: (forall a. Floating a => a) -> ExpectationConstraint
variance :: (forall a. Floating a => a) -> ExpectationConstraint

-- | Discrete maximum entropy solver where the constraints are all moment
--   constraints.
maxent :: Double -> (forall a. Floating a => [a]) -> [ExpectationConstraint] -> Either (Result, Statistics) (Vector Double)

-- | A more general solver. This directly solves the Lagrangian of the
--   constraints and the additional constraint that the probabilities
--   must sum to one.
general :: Double -> Int -> [Constraint] -> Either (Result, Statistics) (Vector Double)
data LinearConstraints
LC :: (forall a. Floating a => ([[a]], [a])) -> LinearConstraints
unLC :: LinearConstraints -> forall a. Floating a => ([[a]], [a])

-- | This is for the linear case Ax = b, where x in this situation is
--   the vector of probabilities.
--
--   Consider the 1-dimensional circular convolution using hmatrix.
--
--   >>> import Numeric.LinearAlgebra
--   >>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]
--   (3><1) [0.276, 0.426, 0.298]
--
--   Now if we were given just the convolution matrix and the output, we
--   can use linear to infer the input.
--
--   >>> linear 3.0e-17 $ LC ([[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]], [0.276, 0.426, 0.298])
--   Right (fromList [0.2000000000000001,0.49999999999999983,0.30000000000000004])
--
--   I feel compelled to point out that we could also just invert the
--   original convolution matrix. Supposedly, using maxent can reduce
--   errors from noise if the convolution matrix is not properly
--   estimated.
linear :: Double -> LinearConstraints -> Either (Result, Statistics) (Vector Double)
linear' :: (Floating a, Ord a) => LinearConstraints -> [[a]]
linear'' :: (Floating a, Ord a) => LinearConstraints -> [[a]]
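A minimal sketch of how the pieces above fit together in one program, assuming the `maxent` package (version 0.7) is installed and that `Result`, `Statistics`, and `Vector Double` all have `Show` instances (they come from the solver's underlying libraries, which is an assumption here). Only the first and third calls repeat the documented examples; the combined average-plus-variance constraint is purely illustrative, and whether that particular pair of constraints is feasible has not been verified, so no expected output is shown.

```haskell
import Numeric.MaxEnt

main :: IO ()
main = do
  -- The dice example from above: six outcomes constrained
  -- to have an average of 4.5.
  print (maxent 0.00001 [1,2,3,4,5,6] [average 4.5])

  -- ExpectationConstraints can be combined in one list, e.g.
  -- an average together with a variance (illustrative values).
  print (maxent 0.00001 [1,2,3,4,5,6] [average 3.5, variance 2.5])

  -- The linear case from above: recover the input of a circular
  -- convolution given the matrix A and the observed output b.
  print (linear 3.0e-17 (LC ( [ [0.68, 0.22, 0.10]
                              , [0.10, 0.68, 0.22]
                              , [0.22, 0.10, 0.68] ]
                            , [0.276, 0.426, 0.298] )))
```

Note that `maxent` takes its tolerance as the first argument, so tightening or loosening it trades solver iterations against accuracy of the returned probabilities.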