maxent-0.7: Compute Maximum Entropy Distributions

Safe Haskell: None
Language: Haskell2010

Numeric.MaxEnt

Contents

Description

The maximum entropy method, or MAXENT, is a variational approach for computing probability distributions given a list of moment, or expected value, constraints.

Here are some links for background information:

* A good overview of applications: http://cmm.cit.nih.gov/maxent/letsgo.html
* On the idea of maximum entropy in general: http://en.wikipedia.org/wiki/Principle_of_maximum_entropy

Use this package to compute discrete maximum entropy distributions over a list of values and list of constraints.

Here is the example from Probability Theory: The Logic of Science.

>>> maxent 0.00001 [1,2,3] [average 1.5]
Right [0.61, 0.26, 0.11]

The classic dice example

>>> maxent 0.00001 [1,2,3,4,5,6] [average 4.5]
Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]

Constraints other than the average value can also be used.

As for why you would maximize the entropy to find the probability distribution, I will say this for now: in the case of the average constraint it is akin to choosing the integer partition with the most integer compositions. I doubt that makes any sense, but I will try to explain more in a blog post soon.

Synopsis

Documentation

data Constraint :: *

An equality constraint of the form g(x, y, ...) = c. Use <=> to construct a Constraint.

(.=.) :: (forall a. Floating a => a -> a) -> (forall b. Floating b => b) -> ExpectationConstraint infixr 1 Source

data ExpectationConstraint Source

Expectation constraint type: a function paired with the constant its expected value must equal.

Think of it as the pair (f, c) in the constraint

    Σₐ pₐ f(xₐ) = c

where the sum runs over all of the values xₐ.

For example, for a second-moment constraint (the variance when the mean is zero), f would be (\x -> x*x) and c would be the desired value.
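As a concrete (untested) sketch, such a constraint could be built with the (.=.) operator documented above; the constant 2.25 is an arbitrary illustrative value:

    -- An expectation constraint E[x*x] = 2.25, built with (.=.).
    -- The constant 2.25 is chosen only for illustration.
    secondMoment :: ExpectationConstraint
    secondMoment = (\x -> x*x) .=. 2.25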

average :: (forall a. Floating a => a) -> ExpectationConstraint Source

The classic first-moment constraint: fixes the average (expected value) of the distribution.
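In terms of (.=.), average c is presumably just the identity-function special case; this is an assumption about the implementation, which is not shown on this page:

    -- Hypothetical equivalent of 'average 4.5', assuming 'average'
    -- is the first-moment special case of (.=.).
    firstMoment :: ExpectationConstraint
    firstMoment = (\x -> x) .=. 4.5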

maxent Source

Arguments

:: Double

Tolerance for the numerical solver

-> (forall a. Floating a => [a])

The values that the distribution is over

-> [ExpectationConstraint]

The constraints

-> Either (Result, Statistics) (Vector Double)

Either a description of what went wrong or the probability distribution

Discrete maximum entropy solver where the constraints are all moment constraints.
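As a sketch (not a verified doctest, so no output is shown), the built-in average constraint can be combined with a custom second-moment constraint built the same way as the sketch above, here over the dice values:

    -- Mean 4.5 and second moment 22.5 (i.e. variance 2.25); the second
    -- constraint's value is arbitrary and only for illustration.
    dice = maxent 0.00001 [1,2,3,4,5,6] [average 4.5, (\x -> x*x) .=. 22.5]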

General

general Source

Arguments

:: Double

Tolerance for the numerical solver

-> Int

The number of probabilities in the distribution

-> [Constraint]

The constraints

-> Either (Result, Statistics) (Vector Double)

Either a description of what went wrong or the probability distribution

A more general solver. This directly solves the Lagrangian of the given constraints plus the additional constraint that the probabilities sum to one.
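Here is a sketch of using general, assuming (<=>) pairs a function of the full probability list with the constant it must equal; that signature is an assumption, since it is not shown on this page. It reproduces the first maxent example above: three outcomes whose mean must be 1.5.

    -- Hypothetical: constrain the mean of a 3-point distribution to 1.5.
    meanIs :: Constraint
    meanIs = (\ps -> sum (zipWith (*) [1, 2, 3] ps)) <=> 1.5

    generalExample = general 0.00001 3 [meanIs]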

Linear

data LinearConstraints Source

Constructors

LC 

Fields

unLC :: forall a. Floating a => ([[a]], [a])
 

linear Source

Arguments

:: Double

Tolerance for the numerical solver

-> LinearConstraints

The matrix A and column vector b

-> Either (Result, Statistics) (Vector Double)

Either a description of what went wrong or the probability distribution

This is for the linear case Ax = b, where x is the vector of probabilities.

Consider a 1-dimensional circular convolution, computed using hmatrix.

>>> import Numeric.LinearAlgebra
>>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]
(3><1) [0.276, 0.426, 0.298]   

Now, given just the convolution matrix and its output, we can use linear to infer the input.

>>> linear 3.0e-17 $ LC ([[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]], [0.276, 0.426, 0.298])
Right (fromList [0.2000000000000001,0.49999999999999983,0.30000000000000004])

I feel compelled to point out that we could also just invert the original convolution matrix. Supposedly, using maxent can reduce errors from noise if the convolution matrix is not properly estimated.
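For comparison, here is a sketch (not a verified doctest, so no output is shown) of that direct approach, using hmatrix's inv:

    -- Recover the input by inverting the convolution matrix directly,
    -- with no maximum entropy prior involved.
    direct = inv (fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]])
          <> fromLists [[0.276], [0.426], [0.298]]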

linear' Source

Arguments

:: (Floating a, Ord a) 
=> LinearConstraints

The matrix A and column vector b

-> [[a]]

Either a description of what went wrong or the probability distribution

linear'' Source

Arguments

:: (Floating a, Ord a) 
=> LinearConstraints

The matrix A and column vector b

-> [[a]]

Either a description of what went wrong or the probability distribution