Safe Haskell: None
The maximum entropy method, or MAXENT, is a variational approach for computing probability distributions given a list of moment (expected value) constraints.
Here are some links for background information.
A good overview of applications: http://cmm.cit.nih.gov/maxent/letsgo.html
On the idea of maximum entropy in general: http://en.wikipedia.org/wiki/Principle_of_maximum_entropy
Use this package to compute discrete maximum entropy distributions over a list of values, given a list of constraints.
Here is an example from Probability: The Logic of Science:
maxent ([1,2,3], [average 1.5])
Right [0.61, 0.26, 0.11]
The classic dice example:
maxent ([1,2,3,4,5,6], [average 4.5])
Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
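The dice numbers above can be reproduced without the package. For a single average constraint, the maximum entropy solution has the Gibbs form pᵢ ∝ exp(λxᵢ), with λ chosen so the mean matches. The following is a minimal self-contained sketch of that special case (not the library's implementation, which handles general constraint lists); λ is found by bisection, since the mean is monotone in λ:

```haskell
module Main where

-- Gibbs distribution over xs for a given multiplier lambda.
gibbs :: Double -> [Double] -> [Double]
gibbs lam xs = map (/ z) ws
  where
    ws = map (exp . (lam *)) xs
    z  = sum ws

-- Mean of the Gibbs distribution at a given lambda.
meanAt :: Double -> [Double] -> Double
meanAt lam xs = sum (zipWith (*) (gibbs lam xs) xs)

-- Solve the average constraint by bisection on lambda.
-- Assumes the target mean lies strictly between minimum xs and maximum xs.
solveAverage :: [Double] -> Double -> [Double]
solveAverage xs target = gibbs (go (-50) 50 (100 :: Int)) xs
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi n
      | meanAt mid xs < target = go mid hi (n - 1)
      | otherwise              = go lo mid (n - 1)
      where mid = (lo + hi) / 2

main :: IO ()
main = print (solveAverage [1, 2, 3, 4, 5, 6] 4.5)
```

Running this prints a distribution whose rounded values agree with the dice result above.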
Constraints other than the average value can be used as well.
As for why you want to maximize the entropy to find the probability distribution, I will say this for now: in the case of the average constraint, it is akin to choosing the integer partition with the most integer compositions. I doubt that makes any sense, but I will try to explain more with a blog post soon.
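For reference, the standard form of the variational problem, in the same notation as the constraint documentation below, is:

```latex
\max_{p} \; -\sum_a p_a \log p_a
\quad \text{subject to} \quad
\sum_a p_a = 1, \qquad
\sum_a p_a \, f_j(x_a) = c_j \quad (j = 1, \dots, m)
```

Introducing a Lagrange multiplier $\lambda_j$ per constraint gives the exponential-family solution $p_a = \frac{1}{Z} \exp\!\big(-\sum_j \lambda_j f_j(x_a)\big)$, where $Z$ normalizes the distribution; the solver's job is to find the $\lambda_j$ that satisfy the constraints.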
- type Constraint a = (ExpectationFunction a, a)
- type ExpectationFunction a = a -> a
- constraint :: Floating a => ExpectationFunction a -> a -> Constraint a
- average :: Floating a => a -> Constraint a
- variance :: Floating a => a -> Constraint a
- maxent :: (forall a. Floating a => ([a], [Constraint a])) -> Either (Result, Statistics) [Double]
- maxentLinear :: (forall a. Floating a => ([a], ([[a]], [a]))) -> Either (Result, Statistics) [Double]
Documentation
type Constraint a = (ExpectationFunction a, a)
Constraint type. A function and the constant it equals. Think of it as the pair (f, c) in the constraint

Σ pₐ f(xₐ) = c

where the sum runs over all values xₐ. For example, for a variance constraint f would be (\x -> x*x) and c would be the variance.
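To make the types concrete, here is one plausible self-contained reading of these combinators. These definitions are assumptions inferred from the type signatures and the description above; the library's actual code may differ (in particular its variance handling):

```haskell
-- Assumed definitions for illustration only; not the library's source.

type ExpectationFunction a = a -> a

-- A function paired with the constant its expectation should equal.
type Constraint a = (ExpectationFunction a, a)

-- Pair a function with its target expected value.
constraint :: Floating a => ExpectationFunction a -> a -> Constraint a
constraint f c = (f, c)

-- Average constraint: the identity function paired with the desired mean.
average :: Floating a => a -> Constraint a
average m = constraint id m

-- Variance constraint: per the doc above, (\x -> x*x) paired with the variance.
variance :: Floating a => a -> Constraint a
variance v = constraint (\x -> x * x) v

main :: IO ()
main = print (fst (variance 2) (3 :: Double))  -- applies (\x -> x*x) to 3
```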
type ExpectationFunction a = a -> a

constraint :: Floating a => ExpectationFunction a -> a -> Constraint a

average :: Floating a => a -> Constraint a

variance :: Floating a => a -> Constraint a
maxent
  :: (forall a. Floating a => ([a], [Constraint a]))
     -- A pair of the values that the distribution is over and the constraints
  -> Either (Result, Statistics) [Double]
     -- Either a description of what went wrong or the probability distribution

The main entry point for computing discrete maximum entropy distributions, where the constraints are all moment constraints.
maxentLinear
  :: (forall a. Floating a => ([a], ([[a]], [a])))
     -- The values, a matrix A, and a column vector b
  -> Either (Result, Statistics) [Double]
     -- Either a description of what went wrong or the probability distribution

This is for the linear case Ax = b, where x in this situation is the vector of probabilities.
For example:
maxentLinear ([1,1,1], ([[0.85, 0.1, 0.05], [0.25, 0.5, 0.25], [0.05, 0.1, 0.85]], [0.29, 0.42, 0.29]))
Right [0.1, 0.8, 0.1]
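As a sanity check, we can multiply the stated distribution back through the example's matrix. Note that the arithmetic only reproduces the b above when p is applied to the columns of A (that is, Aᵀp rather than Ap), which suggests how the example's matrix is oriented; this check is plain arithmetic and does not use the library:

```haskell
module Main where

import Data.List (transpose)

dot :: [Double] -> [Double] -> Double
dot u v = sum (zipWith (*) u v)

-- Multiply a matrix (list of rows) by a column vector.
matVec :: [[Double]] -> [Double] -> [Double]
matVec m v = map (`dot` v) m

main :: IO ()
main = do
  let a = [[0.85, 0.1, 0.05], [0.25, 0.5, 0.25], [0.05, 0.1, 0.85]]
      p = [0.1, 0.8, 0.1]
  -- Applying p to the columns of A recovers b ≈ [0.29, 0.42, 0.29].
  print (matVec (transpose a) p)
```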
To be honest, I am not sure why I can't use the maxent version to solve this type of problem, but it doesn't work. I'm still learning.