-- Hoogle documentation, generated by Haddock
-- See Hoogle, http://www.haskell.org/hoogle/

-- | Compute Maximum Entropy Distributions
--
-- The maximum entropy method, or MAXENT, is a variational approach for
-- computing probability distributions given a list of moment, or
-- expected value, constraints.
--
-- Here are some links for background info.
--
-- A good overview of applications:
-- http://cmm.cit.nih.gov/maxent/letsgo.html
--
-- On the idea of maximum entropy in general:
-- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy
--
-- Use this package to compute discrete maximum entropy distributions
-- over a list of values and a list of constraints.
--
-- Here is the example from Probability Theory: The Logic of Science
--
--   maxent ([1,2,3], [average 1.5])
--   
--
-- Right [0.61, 0.26, 0.11]
--
-- The classic dice example
--
--   maxent ([1,2,3,4,5,6], [average 4.5])
--   
--
-- Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
--
-- One can use constraints other than the average value there.
--
-- As for why you want to maximize the entropy to find the probability
-- distribution, I will say this for now. In the case of the average
-- constraint it is akin to choosing an integer partition with the most
-- integer compositions. I doubt that makes any sense, but I will try to
-- explain more in a blog post soon.

@package maxent
@version 0.3.0.1


-- | The maximum entropy method, or MAXENT, is a variational approach for
-- computing probability distributions given a list of moment, or
-- expected value, constraints.
--
-- Here are some links for background info.
--
-- A good overview of applications:
-- http://cmm.cit.nih.gov/maxent/letsgo.html
--
-- On the idea of maximum entropy in general:
-- http://en.wikipedia.org/wiki/Principle_of_maximum_entropy
--
-- Use this package to compute discrete maximum entropy distributions
-- over a list of values and a list of constraints.
--
-- Here is the example from Probability Theory: The Logic of Science
--
--   maxent ([1,2,3], [average 1.5])
--   
--
-- Right [0.61, 0.26, 0.11]
--
-- The classic dice example
--
--   maxent ([1,2,3,4,5,6], [average 4.5])
--   
--
-- Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
--
-- One can use constraints other than the average value there.
--
-- As for why you want to maximize the entropy to find the probability
-- distribution, I will say this for now. In the case of the average
-- constraint it is akin to choosing an integer partition with the most
-- integer compositions. I doubt that makes any sense, but I will try to
-- explain more in a blog post soon.
module MaxEnt

-- | Constraint type. A function and the constant it equals.
--
-- Think of it as the pair (f, c) in the constraint
--
--   Σ pₐ f(xₐ) = c
--   
--
-- such that we are summing over all values xₐ.
--
-- For example, for a variance constraint the f would be (\x -> x*x)
-- and c would be the variance.
type Constraint a = (ExpectationFunction a, a)

-- | A function that takes a value and returns a value. See
-- average and variance for examples.
type ExpectationFunction a = a -> a
constraint :: Floating a => ExpectationFunction a -> a -> Constraint a
average :: Floating a => a -> Constraint a
variance :: Floating a => a -> Constraint a

-- | The main entry point for computing discrete maximum entropy
-- distributions, where the constraints are all moment constraints.
maxent :: (forall a. Floating a => ([a], [Constraint a])) -> Either (Result, Statistics) [Double]

-- | This is for the linear case Ax = b, where x is the vector of
-- probabilities.
--
-- For example:
--
--   maxentLinear ([1,1,1], ([[0.85, 0.1, 0.05], [0.25, 0.5, 0.25], [0.05, 0.1, 0.85]], [0.29, 0.42, 0.29]))
--   
--
-- Right [0.1, 0.8, 0.1]
--
-- To be honest I am not sure why I can't use the maxent version
-- to solve this type of problem, but it doesn't work. I'm still learning.
maxentLinear :: (forall a. Floating a => ([a], ([[a]], [a]))) -> Either (Result, Statistics) [Double]
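
-- A minimal end-to-end sketch of how the exports above fit together,
-- using only names listed in this file. The dice and linear calls repeat
-- the examples quoted above; the 2.5 passed to constraint is a made-up
-- illustration value.
--
--   import MaxEnt (maxent, maxentLinear, average, constraint)
--   
--   main :: IO ()
--   main = do
--     -- the dice example: mean of a six-sided die constrained to 4.5
--     case maxent ([1,2,3,4,5,6], [average 4.5]) of
--       Right ps -> print ps
--       Left _   -> putStrLn "maxent did not converge"
--     -- a custom moment constraint built with constraint: pin E[x*x] to 2.5
--     case maxent ([1,2,3], [constraint (\x -> x*x) 2.5]) of
--       Right ps -> print ps
--       Left _   -> putStrLn "maxent did not converge"
--     -- the linear case Ax = b from the maxentLinear example above
--     case maxentLinear ([1,1,1], ([[0.85, 0.1, 0.05], [0.25, 0.5, 0.25], [0.05, 0.1, 0.85]], [0.29, 0.42, 0.29])) of
--       Right ps -> print ps
--       Left _   -> putStrLn "maxentLinear did not converge"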