Math.Optimization.Algorithms.HagerZhang05
(package nonlinear-optimization-0.1)

Maintainer  : felipe.lessa@gmail.com
Stability   : experimental
Portability : portable

data EstimateError
  How to calculate the estimated error in the function value.
    RelativeEpsilon eps -- Estimates the error as eps * C_k.
    AbsoluteEpsilon eps -- Estimates the error as eps.

data StopRules
  Stop rules used to decide when to stop iterating.
    AlternativeStopRule      -- Stops when |g_k|_infty <= grad_tol * (1 + |f_k|).
    DefaultStopRule stop_fac -- Stops when |g_k|_infty <= max(grad_tol, |g_0|_infty * stop_fac).
  Here |g_i|_infty is the maximum absolute component of the gradient at the i-th step.

data LineSearch
  Line search methods that may be used.
    AutoSwitch AWolfeFac -- Use ordinary Wolfe line search, switching to approximate Wolfe
                            when |f_{k+1} - f_k| < AWolfeFac * C_k, where C_k is the average
                            size of the cost and AWolfeFac is the parameter to this
                            constructor.
    ApproximateWolfe     -- Use approximate Wolfe line search.

data Verbose
  How verbose we should be.
    Quiet       -- Do not output anything to stdout, which most of the time is good.
    Verbose     -- Print what work is being done on each iteration.
    VeryVerbose -- Print information about every step; may be useful for troubleshooting.

data TechParameters
  Technical parameters which you probably should not touch. You should read the CG_DESCENT
  papers to understand how you can tune these parameters.
    techDelta -- Wolfe line search parameter. Defaults to 0.1.
    techSigma -- Wolfe line search parameter. Defaults to 0.9.
    techGamma -- Decay factor for the bracket interval width. Defaults to 0.66.
    techRho   -- Growth factor when searching for the initial bracketing interval.
                 Defaults to 5.
    techEta   -- Lower bound for the conjugate gradient update parameter beta_k is
                 techEta * ||d||_2. Defaults to 0.01.
    techPsi0  -- Factor used in the starting guess for iteration 1. Defaults to 0.01.
    techPsi1  -- In performing a QuadStep, we evaluate the function at
                 psi1 * previous step. Defaults to 0.1.
    techPsi2  -- When starting a new CG iteration, our initial guess for the line search
                 step size is psi2 * previous step. Defaults to 2.

data Parameters
  Parameters given to the optimizer.
    printFinal  -- Print final statistics to stdout. Defaults to True.
    printParams -- Print parameters to stdout before starting. Defaults to False.
    verbose        -- How verbose we should be while computing; everything is printed to
                      stdout. Defaults to Quiet.
    lineSearch     -- What kind of line search should be used. Defaults to AutoSwitch 1e-3.
    qdecay         -- Factor in [0, 1] used to compute the average cost magnitude C_k as
                      follows:
                        Q_k = 1 + qdecay * Q_{k-1},  Q_0 = 0
                        C_k = C_{k-1} + (|f_k| - C_{k-1}) / Q_k
                      Defaults to 0.7.
    stopRules      -- Stop rules that define when the iterations should end. Defaults to
                      DefaultStopRule 0.
    estimateError  -- How to calculate the estimated error in the function value. Defaults
                      to RelativeEpsilon 1e-6.
    quadraticStep  -- When to attempt quadratic interpolation in the line search. If
                      Nothing, then never try a quadratic interpolation step. If
                      Just cutoff, then attempt quadratic interpolation in the line search
                      when |f_{k+1} - f_k| / f_k <= cutoff. Defaults to Just 1e-12.
    debugTol       -- If Just tol, then always check that f_{k+1} - f_k <= tol * C_k.
                      Otherwise, if Nothing, then no checking of function values is done.
                      Defaults to Nothing.
    initialStep    -- If Just step, then use step as the initial step of the line search.
                      Otherwise, if Nothing, then the initial step is programmatically
                      calculated. Defaults to Nothing.
    maxItersFac    -- Defines the maximum number of iterations. The process is aborted when
                      maxItersFac * n iterations are done, where n is the number of
                      dimensions. Defaults to infinity.
    nexpand        -- Maximum number of times the bracketing interval grows or shrinks in
                      the line search. Defaults to 50.
    nsecant        -- Maximum number of secant iterations in the line search. Defaults to 50.
    restartFac     -- Restart the conjugate gradient method after restartFac * n iterations.
                      Defaults to 1.
    funcEpsilon    -- Stop when -alpha * dphi0, the estimated change in function value, is
                      less than funcEpsilon * |f|. Defaults to 0.
    nanRho         -- Growth factor when searching for a bracketing interval after
                      encountering NaN while calculating the step length. Defaults to 1.3.
    techParameters -- Technical parameters which you probably should not touch.

data Statistics
  Statistics given after the process finishes.
    finalValue -- Value of the function at the solution.
    gradNorm   -- Maximum absolute component of the gradient at the solution.
    totalIters -- Total number of iterations.
    funcEvals  -- Total number of function evaluations.
    gradEvals  -- Total number of gradient evaluations.

data Result
    StartFunctionValueNaN    -- Initial function value was NaN.
    FunctionValueNaN         -- Function value became NaN.
    OutOfMemory              -- Couldn't allocate enough temporary memory.
    DebugTol                 -- Debug tolerance was on and the test failed (see debugTol).
    LineSearchFailsUpdate    -- Line search fails during interval update.
    LineSearchFailsBisection -- Line search fails during bisection.
    LineSearchFailsInitial   -- Line search fails in the initial interval.
    NotDescent               -- Search direction was not a descent direction.
    MaxSecantIter            -- Number of secant iterations exceeded nsecant.
    NegativeSlope            -- Slope was always negative in line search.
    MaxTotalIter             -- Total iterations exceeded maxItersFac * n.
    FunctionChange           -- Change in function value was less than funcEpsilon * |f|.
    ToleranceStatisfied      -- Convergence tolerance was satisfied.

data Combined t
  Function calculating both the value of the objective function f and its gradient at a
  point x.

data Gradient t
  Function calculating the value of the gradient of the objective function f at a point x.
  The MGradient constructor uses a function receiving as parameters the point x being
  evaluated (which should not be modified) and the vector where the gradient should be
  written.

data Function t
  Function calculating the value of the objective function f at a point x.

data Mutable
  Phantom type for functions using mutable data.

data Simple
  Phantom type for simple pure functions.

optimize
  Run the CG_DESCENT optimizer and try to minimize the function. Arguments, in order:
    - How should we optimize (the Parameters).
    - grad_tol, see StopRules.
    - Initial guess.
    - Function to be minimized.
    - Gradient of the function.
    - (Optional) Combined function computing both the function and its gradient.

combine
  Combine two separated functions into a single, combined one. This is always a win for
  us, since we save one jump from C to Haskell land.

defaultParameters
  Default parameters. See the documentation for Parameters and TechParameters to see what
  the defaults are.
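As a plain-Haskell illustration of the two stop rules documented above, the criteria can be sketched as below. This is only a sketch of the documented formulas; the helper names normInf, defaultStop, and alternativeStop are mine, and the library itself evaluates these tests inside the CG_DESCENT C code rather than exporting anything like them.

```haskell
import Data.List (foldl')

-- |g|_infty: maximum absolute component of a gradient.
normInf :: [Double] -> Double
normInf = foldl' (\acc x -> max acc (abs x)) 0

-- DefaultStopRule stop_fac:
--   |g_k|_infty <= max(grad_tol, |g_0|_infty * stop_fac)
defaultStop :: Double -> Double -> [Double] -> [Double] -> Bool
defaultStop gradTol stopFac g0 gk =
  normInf gk <= max gradTol (normInf g0 * stopFac)

-- AlternativeStopRule:
--   |g_k|_infty <= grad_tol * (1 + |f_k|)
alternativeStop :: Double -> Double -> [Double] -> Bool
alternativeStop gradTol fk gk =
  normInf gk <= gradTol * (1 + abs fk)

main :: IO ()
main = do
  print (defaultStop 1e-8 0 [4, -2] [1e-9, -5e-10])  -- True
  print (alternativeStop 1e-8 100 [1e-5, 0])         -- False
```

Note that DefaultStopRule 0 (the default) reduces to the plain test |g_k|_infty <= grad_tol, since the |g_0|_infty * stop_fac term vanishes.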
Exported by nonlinear-optimization-0.1, module Math.Optimization.Algorithms.HagerZhang05:

  EstimateError(RelativeEpsilon, AbsoluteEpsilon)
  StopRules(AlternativeStopRule, DefaultStopRule)
  LineSearch(AutoSwitch, ApproximateWolfe)
  Verbose(Quiet, Verbose, VeryVerbose)
  TechParameters(techDelta, techSigma, techGamma, techRho, techEta, techPsi0, techPsi1,
                 techPsi2)
  Parameters(printFinal, printParams, verbose, lineSearch, qdecay, stopRules,
             estimateError, quadraticStep, debugTol, initialStep, maxItersFac, nexpand,
             nsecant, restartFac, funcEpsilon, nanRho, techParameters)
  Statistics(finalValue, gradNorm, totalIters, funcEvals, gradEvals)
  Result(StartFunctionValueNaN, FunctionValueNaN, OutOfMemory, DebugTol,
         LineSearchFailsUpdate, LineSearchFailsBisection, LineSearchFailsInitial,
         NotDescent, MaxSecantIter, NegativeSlope, MaxTotalIter, FunctionChange,
         ToleranceStatisfied)
  Combined(MCombined, VCombined)
  Gradient(MGradient, VGradient)
  Function(MFunction, VFunction)
  Mutable, Simple
  optimize, defaultParameters, combine
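The qdecay recurrence documented above (Q_k = 1 + qdecay * Q_{k-1} with Q_0 = 0, and C_k = C_{k-1} + (|f_k| - C_{k-1}) / Q_k) can likewise be sketched in a few lines of Haskell. avgCost is a hypothetical helper of mine, not part of the module; the library maintains this average internally.

```haskell
-- Fold the documented recurrence over a list of cost values f_1, f_2, ...
-- returning the final average cost magnitude C_k (my sketch, not library code).
avgCost :: Double -> [Double] -> Double
avgCost qdecay = go 0 0
  where
    go _ c []     = c
    go q c (f:fs) =
      let q' = 1 + qdecay * q            -- Q_k = 1 + qdecay * Q_{k-1}, Q_0 = 0
          c' = c + (abs f - c) / q'      -- C_k = C_{k-1} + (|f_k| - C_{k-1}) / Q_k
      in go q' c' fs

main :: IO ()
main = do
  print (avgCost 0.7 [3])     -- 3.0 (after one step, C_1 = |f_1|)
  print (avgCost 1.0 [1, 3])  -- 2.0 (with qdecay = 1 this is the plain running mean)
```

With qdecay = 0 only the latest |f_k| is kept, while qdecay = 1 gives an unweighted mean; the default 0.7 sits in between, discounting old cost values geometrically.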