-- Hoogle documentation, generated by Haddock -- See Hoogle, http://www.haskell.org/hoogle/ -- | Compute Maximum Entropy Distributions -- -- The maximum entropy method, or MAXENT, is a variational approach for -- computing probability distributions given a list of moment, or -- expected value, constraints. -- -- Here are some links for background info. -- -- A good overview of applications: -- http://cmm.cit.nih.gov/maxent/letsgo.html -- -- On the idea of maximum entropy in general: -- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy -- -- Use this package to compute discrete maximum entropy distributions -- over a list of values and a list of constraints. -- -- Here is the example from Probability Theory: The Logic of Science -- --
--   >>> maxent ([1,2,3], [average 1.5])
--   Right [0.61, 0.26, 0.11]
--   
-- -- The classic dice example -- --
--   >>> maxent ([1,2,3,4,5,6], [average 4.5])
--   Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
--   
-- -- One can use different constraints besides the average value there. -- -- As for why you want to maximize the entropy to find the probability -- distribution, I will say this for now. In the case of the average -- constraint it is akin to choosing an integer partition with the most -- integer compositions. I doubt that makes any sense, but I will try to -- explain more with a blog post soon. @package maxent @version 0.4.0.0 -- | The maximum entropy method, or MAXENT, is a variational approach for -- computing probability distributions given a list of moment, or -- expected value, constraints. -- -- Here are some links for background info. A good overview of -- applications: http://cmm.cit.nih.gov/maxent/letsgo.html On the -- idea of maximum entropy in general: -- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy -- -- Use this package to compute discrete maximum entropy distributions -- over a list of values and a list of constraints. -- -- Here is the example from Probability Theory: The Logic of Science -- --
--   >>> maxent ([1,2,3], [average 1.5])
--   Right [0.61, 0.26, 0.11]
--   
-- -- The classic dice example -- --
--   >>> maxent ([1,2,3,4,5,6], [average 4.5])
--   Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
--   
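Under a single average constraint the maximum entropy distribution has the closed Gibbs form pₐ ∝ exp(-λ xₐ), so the numbers in the examples above can be reproduced with a few lines of plain Haskell. This is only an illustrative sketch of that special case, assuming the Gibbs form and bisecting on the Lagrange multiplier λ; it is not this package's solver (which handles general moment constraints), and `maxentAverage` is a hypothetical helper, not part of the package API:

```haskell
-- Maximum entropy under one average constraint: p_i = exp (-lam * x_i) / Z.
-- The mean is monotone decreasing in lam, so we can find lam by bisection.
-- NOTE: illustrative sketch only; not the maxent package's actual algorithm.
maxentAverage :: [Double] -> Double -> [Double]
maxentAverage xs target = dist (bisect (-50) 50 (100 :: Int))
  where
    -- Gibbs distribution for a given Lagrange multiplier lam
    dist lam = let ws = [exp (-lam * x) | x <- xs]
                   z  = sum ws
               in map (/ z) ws
    -- expected value of xs under dist lam
    mean lam = sum (zipWith (*) xs (dist lam))
    bisect lo hi 0 = (lo + hi) / 2
    bisect lo hi n
      | mean mid > target = bisect mid hi (n - 1)  -- mean too high: raise lam
      | otherwise         = bisect lo mid (n - 1)
      where mid = (lo + hi) / 2

main :: IO ()
main = do
  print (maxentAverage [1,2,3] 1.5)          -- close to [0.61, 0.26, 0.11]
  print (maxentAverage [1,2,3,4,5,6] 4.5)    -- close to the dice example above
</imports>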
-- -- One can use different constraints besides the average value there. -- -- As for why you want to maximize the entropy to find the probability -- distribution, I will say this for now. In the case of the average -- constraint it is akin to choosing an integer partition with the most -- integer compositions. I doubt that makes any sense, but I will try to -- explain more with a blog post soon. module Numeric.MaxEnt -- | Constraint type. A function and the constant it equals. -- -- Think of it as the pair (f, c) in the constraint -- --
--   Σ pₐ f(xₐ) = c
--   
-- -- such that we are summing over all values xₐ. -- -- For example, for a variance constraint the f would be (\x -- -> x*x) and c would be the variance. type Constraint a = (ExpectationFunction a, a) -- | A function that takes a value and returns a value. See -- average and variance for examples. type ExpectationFunction a = a -> a constraint :: RealFloat a => ExpectationFunction a -> a -> Constraint a average :: RealFloat a => a -> Constraint a variance :: RealFloat a => a -> Constraint a -- | Discrete maximum entropy solver where the constraints are all moment -- constraints. maxent :: (forall a. RealFloat a => ([a], [Constraint a])) -> Either (Result, Statistics) [Double] type GeneralConstraint a = [a] -> [a] -> a -- | The most general solver. This is the slowest but most flexible method, -- although I haven't tried using it much... generalMaxent :: (forall a. RealFloat a => ([a], [(GeneralConstraint a, a)])) -> Either (Result, Statistics) [Double] data LinearConstraints a LC :: [[a]] -> [a] -> LinearConstraints a matrix :: LinearConstraints a -> [[a]] output :: LinearConstraints a -> [a] -- | This is for the linear case Ax = b. x in this situation is the -- vector of probabilities. -- -- Consider the 1-dimensional circular convolution using hmatrix. -- --
--   >>> import Numeric.LinearAlgebra
--   
--   >>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]
--   (3><1) [0.276, 0.426, 0.298]   
--   
-- -- Now if we were given just the convolution and the output, we can use -- linear to infer the input. -- --
--   >>> linear 3.0e-17 $ LC [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] [0.276, 0.426, 0.298]
--   Right [0.20000000000000004,0.4999999999999999,0.3]
--   
linear :: Double -> (forall a. RealFloat a => LinearConstraints a) -> Either (Result, Statistics) [Double]
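As a sanity check on the convolution example, the product Ax = b that `linear` inverts can be reproduced with plain lists, without hmatrix. A minimal sketch; `matVec` is an illustrative helper, not part of this package:

```haskell
-- Matrix-vector product: one dot product per row of the matrix.
matVec :: [[Double]] -> [Double] -> [Double]
matVec a x = [sum (zipWith (*) row x) | row <- a]

main :: IO ()
main =
  -- the circular-convolution matrix from the example, applied to the
  -- input [0.2, 0.5, 0.3]; yields [0.276, 0.426, 0.298] up to rounding,
  -- which is exactly the b that `linear` recovers the input from
  print (matVec [ [0.68, 0.22, 0.1]
                , [0.1,  0.68, 0.22]
                , [0.22, 0.1,  0.68] ]
                [0.2, 0.5, 0.3])
```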