optimization-0.1.7: Numerical optimization

Safe Haskell: None
Language: Haskell2010

Optimization.LineSearch.MirrorDescent

Documentation

mirrorDescent
  :: (Num a, Additive f)
  => LineSearch f a    -- line search method
  -> (f a -> f a)      -- strongly convex function, psi
  -> (f a -> f a)      -- dual of psi
  -> (f a -> f a)      -- gradient of function
  -> f a               -- starting point
  -> [f a]             -- iterates

Mirror descent method.

Originally described by Nemirovsky and Yudin and later elucidated by Beck and Teboulle, the mirror descent method is a generalization of the projected subgradient method for convex optimization. Mirror descent requires the gradient of a strongly convex function psi (and of its dual), as well as a way to compute a subgradient of the objective function f at each point.
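
The following is a minimal usage sketch, not taken from the package documentation. It assumes V2 from the linear package, that LineSearch is re-exported here as a plain function type (gradient, point, direction to step size), and it uses a hypothetical fixed step size in place of the step size methods listed below. Choosing psi(x) = ||x||^2 / 2 makes both the mirror map and its dual the identity, so the iterates reduce to ordinary (sub)gradient steps.

import Linear (V2(..))
import Optimization.LineSearch.MirrorDescent (mirrorDescent, LineSearch)

-- Gradient of the objective f(x, y) = x^2 + 2*y^2, minimised at (0, 0).
df :: V2 Double -> V2 Double
df (V2 x y) = V2 (2 * x) (4 * y)

-- With psi(x) = ||x||^2 / 2 the mirror map and its dual are both the
-- identity, so mirror descent coincides with plain gradient descent.
dPsi, dPsiStar :: V2 Double -> V2 Double
dPsi     = id
dPsiStar = id

-- Hypothetical constant step size; assumes LineSearch f a is a
-- three-argument function returning the step length.
fixedStep :: LineSearch V2 Double
fixedStep _df _x _p = 0.1

main :: IO ()
main = do
  let iterates = mirrorDescent fixedStep dPsi dPsiStar df (V2 1 1)
  -- The result is an infinite list of iterates; print a few to watch
  -- them approach the minimiser at (0, 0).
  mapM_ print (take 5 iterates)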

Step size methods