maxent-0.6.0.4: Compute Maximum Entropy Distributions

Numeric.MaxEnt

Contents

Description

The maximum entropy method, or MAXENT, is a variational approach for computing probability distributions given a list of moment (expected value) constraints.

Here are some links for background info.

A good overview of applications: http://cmm.cit.nih.gov/maxent/letsgo.html

On the idea of maximum entropy in general: http://en.wikipedia.org/wiki/Principle_of_maximum_entropy

Use this package to compute discrete maximum entropy distributions over a list of values, given a list of constraints.

Here is an example from Jaynes' Probability Theory: The Logic of Science

>>> maxent 0.00001 [1,2,3] [average 1.5]
Right [0.61, 0.26, 0.11]

The classic dice example

>>> maxent 0.00001 [1,2,3,4,5,6] [average 4.5]
Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]
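To see what the solver is computing, here is a hand-rolled sketch of the dice case using only base. The maxent solution has the exponential-family form p_i = exp(λ·x_i)/Z, so it suffices to bisect on the single λ whose mean matches the constraint. All names here (probs, meanFor, solveLambda) are illustrative, not part of this package.

```haskell
-- maxent over [1..6] with a mean constraint, via its
-- exponential-family form p_i = exp(lam * x_i) / Z
probs :: Double -> [Double]
probs lam = map (/ z) ws
  where ws = [exp (lam * x) | x <- [1 .. 6]]
        z  = sum ws

-- mean of the distribution as a function of lam
meanFor :: Double -> [Double] -> Double
meanFor lam xs = sum (zipWith (*) (probs lam) xs)

-- meanFor is monotone increasing in lam, so bisection converges
solveLambda :: Double -> Double
solveLambda target = go (-10) 10 (0 :: Int)
  where
    go lo hi n
      | n > (200 :: Int)             = mid
      | meanFor mid [1 .. 6] < target = go mid hi (n + 1)
      | otherwise                    = go lo mid (n + 1)
      where mid = (lo + hi) / 2

main :: IO ()
main = print (probs (solveLambda 4.5))
-- closely matches the solver output above
```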

One can use other constraints besides the average value, for example the variance.

As for why maximizing the entropy is the right way to find the probability distribution, I will say this for now. In the case of the average constraint, it is akin to choosing the integer partition with the most integer compositions. I doubt that makes any sense, but I will try to explain more in a blog post soon.
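The partition/composition remark can be made concrete with a small counting experiment in base Haskell: take 10 draws from [1,2,3] whose total is 15 (average 1.5), and count how many ordered sequences (compositions) realize each histogram of draw counts. The names here are illustrative.

```haskell
import Data.List (maximumBy)
import Data.Ord (comparing)

factorial :: Integer -> Integer
factorial n = product [1 .. n]

-- multinomial coefficient: number of sequences realizing a histogram
sequencesFor :: [Integer] -> Integer
sequencesFor ns = factorial (sum ns) `div` product (map factorial ns)

-- histograms [n1, n2, n3] counting draws of 1, 2 and 3 respectively
histograms :: [[Integer]]
histograms =
  [ [n1, n2, n3]
  | n1 <- [0 .. 10], n2 <- [0 .. 10], n3 <- [0 .. 10]
  , n1 + n2 + n3 == 10            -- ten draws
  , n1 + 2*n2 + 3*n3 == 15        -- total 15, i.e. average 1.5
  ]

main :: IO ()
main = print (maximumBy (comparing sequencesFor) histograms)
-- prints [6,3,1]: the histogram with the most compositions matches
-- the maxent answer 0.61/0.26/0.11 after scaling by 10 and rounding
```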

Synopsis

# Documentation

type Constraint a = (FU a, a)

A constraint of the form g(x, y, ...) = c. See <=> for constructing a Constraint.

(.=.) :: (forall s. Mode s => AD s a -> AD s a) -> a -> ExpectationConstraint a

newtype UU a

Constructors

 UU { unUU :: forall s. Mode s => ExpectationFunction (AD s a) }

type ExpectationConstraint a = (UU a, a)

The expectation constraint type: a function paired with the constant its expectation must equal.

Think of it as the pair (f, c) in the constraint

Σ pₐ f(xₐ) = c

such that we are summing over all values xₐ.

For example, for a variance constraint the f would be (\x -> x*x) and c would be the variance.
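Evaluating such a pair (f, c) against a finished distribution is just a weighted sum; this base-only snippet (the helper name `expectation` is illustrative, not part of the package) checks the first example's output against its average constraint:

```haskell
-- sum of p_a * f(x_a): the left-hand side of an expectation constraint
expectation :: [Double] -> [Double] -> (Double -> Double) -> Double
expectation ps xs f = sum (zipWith (\p x -> p * f x) ps xs)

main :: IO ()
main = do
  let ps = [0.61, 0.26, 0.11]  -- rounded solver output from the first example
      xs = [1, 2, 3]
  print (expectation ps xs id)            -- the average constraint,
                                          -- near 1.5 up to the rounding in ps
  print (expectation ps xs (\x -> x * x)) -- a second-moment constraint
```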

type ExpectationFunction a = a -> a

A function that takes a value and returns a value. See average and variance for examples.

## Classic moment based

maxent

Arguments

 :: Double                                       -- tolerance for the numerical solver
 -> [Double]                                     -- values that the distribution is over
 -> [ExpectationConstraint Double]               -- the constraints
 -> Either (Result, Statistics) (Vector Double)  -- either a description of what went wrong, or the probability distribution

Discrete maximum entropy solver where the constraints are all moment constraints.

## General

general

Arguments

 :: Double                                       -- tolerance for the numerical solver
 -> Int                                          -- the number of probabilities
 -> [Constraint Double]                          -- the constraints
 -> Either (Result, Statistics) (Vector Double)  -- either a description of what went wrong, or the probability distribution

A more general solver. It directly solves the Lagrangian formed from the given constraints plus the additional constraint that the probabilities must sum to one.
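Written in the notation of the ExpectationConstraint section above, the Lagrangian in question is (a sketch; the package may parameterize it slightly differently):

L(p, λ) = -Σₐ pₐ log pₐ + λ₀ (Σₐ pₐ - 1) + Σⱼ λⱼ (Σₐ pₐ fⱼ(xₐ) - cⱼ)

Setting ∂L/∂pₐ = 0 gives pₐ ∝ exp(Σⱼ λⱼ fⱼ(xₐ)), so the solution always has exponential-family form and the solver only has to find the multipliers λⱼ.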

## Linear

data LinearConstraints a

Constructors

 LC
   matrix :: [[a]]
   output :: [a]

Instances

 Eq a => Eq (LinearConstraints a)
 Show a => Show (LinearConstraints a)

linear

Arguments

 :: Double                                       -- tolerance for the numerical solver
 -> LinearConstraints Double                     -- the matrix A and column vector b
 -> Either (Result, Statistics) (Vector Double)  -- either a description of what went wrong, or the probability distribution

This is for the linear case Ax = b, where x is the vector of probabilities.

Consider a 1-dimensional circular convolution using hmatrix.

>>> import Numeric.LinearAlgebra
>>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]
(3><1) [0.276, 0.426, 0.298]
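The same product can be checked without hmatrix using plain lists; each output entry is the dot product of a matrix row with the input vector (a minimal sketch, `matVec` is an illustrative helper):

```haskell
-- naive matrix-vector product over lists
matVec :: [[Double]] -> [Double] -> [Double]
matVec rows v = [sum (zipWith (*) row v) | row <- rows]

main :: IO ()
main = print (matVec [ [0.68, 0.22, 0.10]
                     , [0.10, 0.68, 0.22]
                     , [0.22, 0.10, 0.68]
                     ]
                     [0.2, 0.5, 0.3])
-- [0.276, 0.426, 0.298] up to floating-point rounding
```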

Now if we were given just the convolution and the output, we can use linear to infer the input.

>>> linear 3.0e-17 $ LC [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] [0.276, 0.426, 0.298]
Right [0.20000000000000004,0.4999999999999999,0.3]

I feel compelled to point out that we could also just invert the original convolution matrix. Supposedly, using maxent can reduce the error from noise when the convolution matrix is not estimated accurately.