ClassificationLaplace is a module in the HasGP Gaussian Process library. It implements basic Gaussian Process Classification for two classes using the Laplace approximation. For details see www.gaussianprocesses.org.

Copyright (C) 2011 Sean Holden. sbh11@cl.cam.ac.uk.

- data LaplaceValue = LaplaceValue {}
- type LaplaceConvergenceTest = LaplaceValue -> LaplaceValue -> Bool
- gpCLaplaceLearn :: LogLikelihood l => CovarianceMatrix -> Targets -> l -> LaplaceConvergenceTest -> LaplaceValue
- convertToP_CG :: (Double, Double) -> Double
- gpCLaplacePredict :: (CovarianceFunction cF, LogLikelihood l) => DVector -> Inputs -> Targets -> CovarianceMatrix -> cF -> l -> Input -> (Double, Double)
- gpCLaplacePredict' :: (CovarianceFunction cF, LogLikelihood l) => DVector -> Inputs -> Targets -> CovarianceMatrix -> cF -> l -> Inputs -> [(Double, Double)]
- gpCLaplaceLogEvidence :: (CovarianceFunction cF, LogLikelihood l) => Inputs -> Targets -> cF -> l -> LaplaceConvergenceTest -> (Double, DVector)
- gpCLaplaceLogEvidenceList :: (CovarianceFunction cF, LogLikelihood l) => Inputs -> Targets -> cF -> l -> LaplaceConvergenceTest -> [Double] -> (Double, DVector)
- gpCLaplaceLogEvidenceVec :: (CovarianceFunction cF, LogLikelihood l) => Inputs -> Targets -> cF -> l -> LaplaceConvergenceTest -> DVector -> (Double, DVector)

# Documentation

data LaplaceValue

Computing the Laplace approximation requires us to deal with quite a lot of information. To keep things straightforward we wrap this up in a type.

The value associated with a state includes f, evidence, objective, derivative of the objective, the vector a needed to compute the derivative of the evidence, and the number of iterations.

type LaplaceConvergenceTest = LaplaceValue -> LaplaceValue -> Bool

A convergence test is a function that takes two consecutive values during iteration and works out whether you've converged or not.
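For illustration, here is a sketch of such a test. The real LaplaceValue's field accessors are not exported in this documentation, so the stand-in record below (lvObjective, lvIterations) is an assumption used only to show the shape of a convergence test:

```haskell
-- Stand-in for the library's LaplaceValue; field names are assumptions.
data LaplaceValue = LaplaceValue
  { lvObjective  :: Double  -- current value of the objective
  , lvIterations :: Int     -- iterations performed so far
  }

type LaplaceConvergenceTest = LaplaceValue -> LaplaceValue -> Bool

-- Converged when the objective has stopped moving, or when an
-- iteration cap is reached.
stopTest :: Double -> Int -> LaplaceConvergenceTest
stopTest tol maxIts old new =
  abs (lvObjective new - lvObjective old) < tol
    || lvIterations new >= maxIts

main :: IO ()
main = print (stopTest 1e-6 100 (LaplaceValue 1.0 3) (LaplaceValue 1.0000001 4))
```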

gpCLaplaceLearn
  :: LogLikelihood l
  => CovarianceMatrix
  -> Targets
  -> l                       -- log likelihood
  -> LaplaceConvergenceTest
  -> LaplaceValue

Iteration to convergence is much nicer if the state is hidden using the State monad.

This uses a general function from HasGP.Support.Iterate to implement the learning algorithm. Convergence testing is done using a user supplied function.
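The shape of such an iteration function can be sketched generically; the name and signature below are assumptions in the spirit of HasGP.Support.Iterate, not the library's actual API. The test compares consecutive states, just as LaplaceConvergenceTest does:

```haskell
-- Generic iterate-until-converged: repeatedly apply a step function,
-- stopping when the user-supplied test accepts two consecutive states.
iterateUntil :: (a -> a -> Bool) -> (a -> a) -> a -> a
iterateUntil converged step x0 = go x0 (step x0)
  where
    go old new
      | converged old new = new
      | otherwise         = go new (step new)

main :: IO ()
main = do
  -- Example: Newton's iteration for sqrt 2, stopping when consecutive
  -- iterates agree to 1e-12.
  let r = iterateUntil (\a b -> abs (a - b) < 1e-12)
                       (\x -> (x + 2 / x) / 2)
                       1.0
  print (abs (r * r - 2) < 1e-9)
```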

convertToP_CG :: (Double, Double) -> Double

Converts pairs of fStar and V produced by the prediction functions to actual probabilities, assuming the cumulative Gaussian likelihood was used.
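For the cumulative-Gaussian (probit) likelihood, the standard conversion is p = Phi (fStar / sqrt (1 + V)), where Phi is the standard normal CDF. The sketch below illustrates this; Haskell's Prelude has no erf, so a rational approximation (Abramowitz and Stegun 7.1.26) stands in for it, and the library's convertToP_CG should be preferred in practice:

```haskell
-- erf via the Abramowitz-Stegun 7.1.26 rational approximation.
erfA :: Double -> Double
erfA x =
  let s = signum x
      a = abs x
      t = 1 / (1 + 0.3275911 * a)
      poly = t * (0.254829592
           + t * (-0.284496736
           + t * (1.421413741
           + t * (-1.453152023
           + t * 1.061405429))))
  in s * (1 - poly * exp (negate (a * a)))

-- Standard normal CDF.
normCdf :: Double -> Double
normCdf x = 0.5 * (1 + erfA (x / sqrt 2))

-- Convert (fStar, v) to a class probability under the probit likelihood.
convertP :: (Double, Double) -> Double
convertP (fStar, v) = normCdf (fStar / sqrt (1 + v))

main :: IO ()
main = print (abs (convertP (0, 2.5) - 0.5) < 1e-6)  -- fStar = 0 gives p = 0.5
```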

gpCLaplacePredict
  :: (CovarianceFunction cF, LogLikelihood l)
  => DVector             -- f
  -> Inputs
  -> Targets
  -> CovarianceMatrix    -- covariance matrix
  -> cF                  -- covariance function
  -> l                   -- log likelihood
  -> Input               -- input to classify
  -> (Double, Double)

Predict using a GP classifier based on the Laplace approximation.

Produces fStar and V rather than the actual probability, as further approximation is required to compute it.

gpCLaplacePredict'
  :: (CovarianceFunction cF, LogLikelihood l)
  => DVector             -- f
  -> Inputs
  -> Targets
  -> CovarianceMatrix
  -> cF                  -- covariance function
  -> l                   -- log likelihood
  -> Inputs              -- inputs to classify
  -> [(Double, Double)]

Predict using a GP classifier based on the Laplace approximation.

The same as gpCLaplacePredict but applies to a collection of new inputs supplied as the rows of a matrix.

Produces a list of pairs of fStar and V rather than the actual probabilities, as further approximation is required to compute them.

gpCLaplaceLogEvidence
  :: (CovarianceFunction cF, LogLikelihood l)
  => Inputs
  -> Targets
  -> cF                  -- covariance function
  -> l                   -- log likelihood
  -> LaplaceConvergenceTest
  -> (Double, DVector)

Compute the log marginal likelihood and its first derivative for the Laplace approximation for GP classification.

The convergence test input tests for convergence when using gpCLaplaceLearn. Note that a covariance function contains its own parameters and can compute its own derivative, so theta does not need to be passed separately.

Outputs the NEGATIVE log marginal likelihood and a vector of its derivatives. The derivatives are with respect to the actual, NOT log parameters.
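For reference, the quantity being approximated is the standard Laplace log evidence (Rasmussen and Williams, Gaussian Processes for Machine Learning, eq. 3.32), evaluated at the mode f-hat with W the negative Hessian of the log likelihood; assuming the same formulation here, this module returns its negative:

```latex
\log q(\mathbf{y} \mid X, \theta)
  = -\tfrac{1}{2}\,\hat{\mathbf{f}}^{\top} K^{-1} \hat{\mathbf{f}}
  + \log p(\mathbf{y} \mid \hat{\mathbf{f}})
  - \tfrac{1}{2}\,\log \bigl\lvert I + W^{1/2} K\, W^{1/2} \bigr\rvert
```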

gpCLaplaceLogEvidenceList

  :: (CovarianceFunction cF, LogLikelihood l)
  => Inputs
  -> Targets
  -> cF
  -> l
  -> LaplaceConvergenceTest
  -> [Double]            -- log hyperparameters
  -> (Double, DVector)

A version of gpCLaplaceLogEvidence that is usable by the conjugate gradient function included in the hmatrix library. Computes the log evidence and its first derivative for the Laplace approximation for GP classification. The issue is that, while it makes sense to implement covariance functions as a class so that any can easily be used, the optimiser needs evidence and its derivatives supplied directly as functions of the hyperparameters, and these have to be given as vectors of Doubles. The solution is to include a function in the CovarianceFunction class that takes a list and returns a new covariance function of the required type with the specified hyperparameters.

Parameters: the same as gpCLaplaceLogEvidence, plus the list of hyperparameters. Outputs: the negative log marginal likelihood and a vector of its first derivatives.

In addition to the above, this assumes that we want derivatives with respect to log parameters and so converts using df/d log p = p df/dp.
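That conversion is a one-line chain rule, sketched here on plain lists (the library works with its own vector types):

```haskell
-- Convert derivatives with respect to parameters into derivatives with
-- respect to log parameters: d f / d (log p) = p * (d f / d p).
toLogDerivatives :: [Double] -> [Double] -> [Double]
toLogDerivatives params derivs = zipWith (*) params derivs

main :: IO ()
main =
  -- For f p = p ^ 2 at p = 3: d f / d p = 6, so d f / d (log p) = 18.
  print (toLogDerivatives [3.0] [6.0])
```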

gpCLaplaceLogEvidenceVec :: (CovarianceFunction cF, LogLikelihood l) => Inputs -> Targets -> cF -> l -> LaplaceConvergenceTest -> DVector -> (Double, DVector)

This is the same as gpCLaplaceLogEvidenceList but takes a vector instead of a list.