Safe Haskell: None

The maximum entropy method, or MAXENT, is a variational approach for computing probability distributions given a list of moment (expected value) constraints.

Some background links:

- A good overview of applications: http://cmm.cit.nih.gov/maxent/letsgo.html
- On the idea of maximum entropy in general: http://en.wikipedia.org/wiki/Principle_of_maximum_entropy

Use this package to compute discrete maximum entropy distributions over a list of values, subject to a list of constraints.

Here is the example from Probability: The Logic of Science:

`>>> maxent 0.00001 [1,2,3] [average 1.5]`
`Right [0.61, 0.26, 0.11]`

The classic dice example:

`>>> maxent 0.00001 [1,2,3,4,5,6] [average 4.5]`
`Right [0.05, 0.07, 0.11, 0.16, 0.23, 0.34]`

Constraints other than the average value can also be used.

As for why maximizing the entropy is the right way to find the probability distribution, I will say this for now: in the case of the average constraint, it is akin to choosing the integer partition with the most integer compositions. I doubt that makes any sense, but I will try to explain more in a blog post soon.
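To give the entropy-maximization idea a concrete face: among all distributions on the dice values with a fixed mean, maxent selects the one with the largest Shannon entropy Σ −pᵢ log pᵢ. Here is a plain-Haskell comparison of two hand-built distributions that both have mean 4.5; this is only an illustration, not part of the package:

```haskell
-- Illustration only (no package dependencies): among distributions with the
-- same mean, the "more spread out" one has higher Shannon entropy, and
-- maxent picks the global entropy maximizer subject to the constraints.
entropy :: [Double] -> Double
entropy ps = negate (sum [p * log p | p <- ps, p > 0])

mean :: [Double] -> Double
mean ps = sum (zipWith (*) [1 .. 6] ps)

main :: IO ()
main = do
  let spiky  = [0, 0, 0, 0.5, 0.5, 0]          -- mean 4.5, concentrated
      spread = [0, 0, 0.25, 0.25, 0.25, 0.25]  -- mean 4.5, more spread out
  print (mean spiky, mean spread)              -- (4.5, 4.5)
  print (entropy spiky)                        -- log 2, about 0.693
  print (entropy spread)                       -- log 4, about 1.386
```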

- type Constraint a = (FU a, a)
- (.=.) :: (forall s. Mode s => AD s a -> AD s a) -> a -> ExpectationConstraint a
- newtype UU a = UU { unUU :: forall s. Mode s => ExpectationFunction (AD s a) }

- type ExpectationConstraint a = (UU a, a)
- type ExpectationFunction a = a -> a
- average :: Num a => a -> ExpectationConstraint a
- variance :: Num a => a -> ExpectationConstraint a
- maxent :: Double -> [Double] -> [ExpectationConstraint Double] -> Either (Result, Statistics) (Vector Double)
- general :: Double -> Int -> [Constraint Double] -> Either (Result, Statistics) (Vector Double)
- data LinearConstraints a = LC {}
- linear :: Double -> LinearConstraints Double -> Either (Result, Statistics) (Vector Double)

# Documentation

type Constraint a = (FU a, a)

A constraint of the form `g(x, y, ...) = c`. See `<=>` for constructing a `Constraint`.

type ExpectationConstraint a = (UU a, a)

Constraint type: a function and the constant it equals. Think of it as the pair `(f, c)` in the constraint

Σₐ pₐ f(xₐ) = c

where the sum runs over all values xₐ. For example, for a variance constraint the `f` would be `(\x -> x*x)` and `c` would be the variance.
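This reading can be sanity-checked against the first example above: with `f = id` and `c = 1.5`, the returned probabilities should satisfy the sum constraint, up to the rounding of the quoted solution. A plain-Haskell check, independent of the package:

```haskell
-- Illustration only: the expectation sum of p_i * f(x_i) for the rounded
-- solution quoted above, with f = id and target c = 1.5.
expectation :: (Double -> Double) -> [Double] -> [Double] -> Double
expectation f xs ps = sum (zipWith (\x p -> p * f x) xs ps)

main :: IO ()
main =
  -- close to 1.5; off only because the quoted probabilities are rounded
  print (expectation id [1, 2, 3] [0.61, 0.26, 0.11])
```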

type ExpectationFunction a = a -> a

average :: Num a => a -> ExpectationConstraint a

variance :: Num a => a -> ExpectationConstraint a

## Classic moment-based

maxent

:: Double | tolerance for the numerical solver |

-> [Double] | the values that the distribution is over |

-> [ExpectationConstraint Double] | the constraints |

-> Either (Result, Statistics) (Vector Double) | either a description of what went wrong or the probability distribution |

Discrete maximum entropy solver where the constraints are all moment constraints.

## General

general

:: Double | tolerance for the numerical solver |

-> Int | the number of probabilities |

-> [Constraint Double] | the constraints |

-> Either (Result, Statistics) (Vector Double) | either a description of what went wrong or the probability distribution |

A more general solver. This directly solves the Lagrangian of the constraints, along with the additional constraint that the probabilities must sum to one.
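To see what solving the Lagrangian amounts to in the simplest case, consider a single average constraint: stationarity forces the exponential-family form pᵢ ∝ exp(λ xᵢ), and the multiplier λ is tuned until the mean hits the target. A dependency-free sketch, not the package's actual solver (which uses a numerical optimizer and handles many constraints at once):

```haskell
-- For one average constraint, the Lagrangian stationarity condition gives
-- p_i proportional to exp(lambda * x_i); bisect on lambda so the mean
-- matches the target.  The mean is monotone increasing in lambda.
probs :: [Double] -> Double -> [Double]
probs xs lam =
  let ws = map (\x -> exp (lam * x)) xs
      z  = sum ws
  in map (/ z) ws

meanAt :: [Double] -> Double -> Double
meanAt xs lam = sum (zipWith (*) xs (probs xs lam))

solveLambda :: [Double] -> Double -> Double
solveLambda xs target = go (-20) 20 (100 :: Int)
  where
    go lo hi 0 = (lo + hi) / 2
    go lo hi n
      | meanAt xs mid < target = go mid hi (n - 1)
      | otherwise              = go lo mid (n - 1)
      where mid = (lo + hi) / 2

main :: IO ()
main = do
  let xs  = [1, 2, 3]
      lam = solveLambda xs 1.5
  print (probs xs lam)
```

Run on the first example above, this reproduces roughly `Right [0.61, 0.26, 0.11]`; the exact maximizer is about `[0.616, 0.268, 0.116]`.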

## Linear

data LinearConstraints a

Instances:

Eq a => Eq (LinearConstraints a)
Show a => Show (LinearConstraints a)

linear

:: Double | tolerance for the numerical solver |

-> LinearConstraints Double | the matrix A and column vector b |

-> Either (Result, Statistics) (Vector Double) | either a description of what went wrong or the probability distribution |

This is for the linear case Ax = b, where `x` is the vector of probabilities.

Consider the 1-dimensional circular convolution using hmatrix:

`>>> import Numeric.LinearAlgebra`
`>>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]`
`(3><1) [0.276, 0.426, 0.298]`

Now, if we were given just the convolution matrix and the output, we could use `linear` to infer the input:

`>>> linear 3.0e-17 $ LC [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] [0.276, 0.426, 0.298]`
`Right [0.20000000000000004,0.4999999999999999,0.3]`

I feel compelled to point out that we could also just invert the original convolution matrix. Supposedly, using maxent can reduce errors from noise if the convolution matrix is not properly estimated.
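The hmatrix session above can be double-checked without any dependencies; a plain matrix-vector product over lists reproduces the right-hand side b from the "true" input x = [0.2, 0.5, 0.3]:

```haskell
-- Check A x = b for the circular-convolution example with plain lists,
-- no hmatrix required.
matVec :: [[Double]] -> [Double] -> [Double]
matVec a x = map (\row -> sum (zipWith (*) row x)) a

main :: IO ()
main = do
  let a = [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]]
      x = [0.2, 0.5, 0.3]          -- the input we hope linear recovers
  print (matVec a x)               -- approximately [0.276, 0.426, 0.298]
```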