This is for the linear case Ax = b, where x is the vector of probabilities.

Consider the one-dimensional circular convolution using hmatrix:

>>> import Numeric.LinearAlgebra
>>> fromLists [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] <> fromLists [[0.2], [0.5], [0.3]]
(3><1) [0.276, 0.426, 0.298]

Now, if we were given just the convolution matrix and the output, we can use 'linear' to infer the input:

>>> linear 3.0e-17 $ LC [[0.68, 0.22, 0.1], [0.1, 0.68, 0.22], [0.22, 0.1, 0.68]] [0.276, 0.426, 0.298]
Right [0.20000000000000004,0.4999999999999999,0.3]

'linear' takes the tolerance for the numerical solver and the matrix A and column vector b (a LinearConstraints value), and returns either a description of what went wrong or the probability distribution.

ExpectationFunction: a function that takes an index and a value and returns a value. See 'average' and 'variance' for examples.

Constraint: a constraint type, consisting of a function and the constant it equals. Think of it as the pair (f, c) in the constraint

    sum_x p_x * f(x) = c,

where the sum runs over all values x. For example, for a variance constraint, f would be (\x -> x*x) and c would be the variance.

general: the most general solver. This is the slowest but most flexible method, although I haven't tried using it much. It takes a pair of the values that the distribution is over and the constraints, and returns either a description of what went wrong or the probability distribution.

maxent: a discrete maximum entropy solver where the constraints are all moment constraints. It takes a pair of the values that the distribution is over and the constraints, and returns either a description of what went wrong or the probability distribution.

Usage sketches for 'linear' and 'maxent' follow the package index below.

Package index (maxent-0.4.0.0)

Modules: Numeric.MaxEnt, Numeric.MaxEnt.ConjugateGradient, Numeric.MaxEnt.Linear, Numeric.MaxEnt.Internal

Exported names: LinearConstraints, LC, matrix, output, linear, ExpectationFunction, Constraint, GeneralConstraint, constraint, average, variance, general, Maxent, maxent, dot, sumMap, sumWith, toFunction, toGradient, toDoubleF, squaredGrad, solve, probs, pOfK, partitionFunc, objectiveFunc, linear1, entropy, lagrangian, generalObjectiveFunc
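To show how the 'linear' call from the REPL example fits into a program, here is a minimal sketch. It assumes that 'linear' and the LC constructor are exported from Numeric.MaxEnt.Linear and that both sides of the returned Either have Show instances; treat the import and the exact types as assumptions rather than the package's confirmed API.

    import Numeric.MaxEnt.Linear (LinearConstraints (..), linear)

    -- Recover the convolution input from the convolution matrix and its
    -- output, mirroring the REPL example above; the result should be
    -- roughly [0.2, 0.5, 0.3].
    main :: IO ()
    main =
      case linear 3.0e-17 (LC [ [0.68, 0.22, 0.10]
                              , [0.10, 0.68, 0.22]
                              , [0.22, 0.10, 0.68] ]
                              [0.276, 0.426, 0.298]) of
        Right dist -> print dist  -- the inferred probability distribution
        Left err   -> print err   -- a description of what went wrong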
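The moment-constrained solver can be sketched the same way. This is only an illustration of the intended usage, not the package's confirmed API: it assumes that 'maxent', like 'linear', takes the solver tolerance first, followed by the pair of values and moment constraints described above, and that 'average' builds a mean constraint. Check the generated Haddocks for the exact signatures.

    import Numeric.MaxEnt (average, maxent)

    -- Hypothetical sketch (assumed signature): find the maximum entropy
    -- distribution over the values 1, 2 and 3 whose mean is constrained
    -- to be 1.5.
    main :: IO ()
    main =
      case maxent 1.0e-5 ([1, 2, 3], [average 1.5]) of
        Right dist -> print dist  -- the inferred probabilities
        Left err   -> print err   -- a description of what went wrong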