Numeric.Optimization.Algorithms.HagerZhang05 (package nonlinear-optimization)

Copyright   : (c) 2009-2011 Felipe Lessa
License     : GPL
Maintainer  : felipe.lessa@gmail.com
Stability   : experimental
Portability : portable

data EstimateError

  How to calculate the estimated error in the function value.

  RelativeEpsilon eps
    Estimates the error as eps * C_k.

  AbsoluteEpsilon eps
    Estimates the error as eps.

data StopRules

  Stop rules used to decide when to stop iterating.

  AlternativeStopRule
    Stops when |g_k|_infty <= grad_tol * (1 + |f_k|).

  DefaultStopRule stop_fac
    Stops when |g_k|_infty <= max(grad_tol, |g_0|_infty * stop_fac),
    where |g_i|_infty is the maximum absolute component of the
    gradient at the i-th step.

data LineSearch

  Line search methods that may be used.

  AutoSwitch AWolfeFac
    Use ordinary Wolfe line search, switching to approximate Wolfe
    when

      |f_{k+1} - f_k| < AWolfeFac * C_k,

    where C_k is the average size of the cost and AWolfeFac is the
    parameter given to this constructor.

  ApproximateWolfe
    Use approximate Wolfe line search.

data Verbose

  How verbose we should be.

  VeryVerbose
    Print information about every step; may be useful for
    troubleshooting.

  Verbose
    Print what work is being done on each iteration.

  Quiet
    Do not output anything to stdout, which most of the time is good.

data TechParameters

  Technical parameters which you probably should not touch. You
  should read the CG_DESCENT papers to understand how these
  parameters can be tuned.

  techDelta
    Wolfe line search parameter. Defaults to 0.1.

  techSigma
    Wolfe line search parameter. Defaults to 0.9.

  techGamma
    Decay factor for bracket interval width. Defaults to 0.66.

  techRho
    Growth factor when searching for the initial bracketing interval.
    Defaults to 5.

  techEta
    Lower bound for the conjugate gradient update parameter beta_k is
    techEta * ||d||_2. Defaults to 0.01.

  techPsi0
    Factor used in the starting guess for iteration 1. Defaults
    to 0.01.

  techPsi1
    In performing a QuadStep, we evaluate the function at
    psi1 * previous step. Defaults to 0.1.

  techPsi2
    When starting a new CG iteration, our initial guess for the line
    search stepsize is psi2 * previous step. Defaults to 2.

data Parameters

  Parameters given to the optimizer. (A record-update sketch follows
  this list.)

  printFinal
    Print final statistics to stdout. Defaults to True.

  printParams
    Print parameters to stdout before starting. Defaults to False.

  verbose
    How verbose we should be while computing. Everything is printed
    to stdout. Defaults to Quiet.

  lineSearch
    What kind of line search should be used. Defaults to
    AutoSwitch 1e-3.

  qdecay
    Factor in [0, 1] used to compute the average cost magnitude C_k
    as follows:

      Q_k = 1 + qdecay * Q_{k-1},  Q_0 = 0
      C_k = C_{k-1} + (|f_k| - C_{k-1}) / Q_k

    Defaults to 0.7.

  stopRules
    Stop rules that define when the iterations should end. Defaults
    to DefaultStopRule 0.

  estimateError
    How to calculate the estimated error in the function value.
    Defaults to RelativeEpsilon 1e-6.

  quadraticStep
    When to attempt quadratic interpolation in the line search. If
    Nothing, then never try a quadratic interpolation step. If
    Just cutoff, then attempt quadratic interpolation in the line
    search when |f_{k+1} - f_k| / f_k <= cutoff. Defaults to
    Just 1e-12.

  debugTol
    If Just tol, then always check that f_{k+1} - f_k <= tol * C_k.
    Otherwise, if Nothing, then no checking of function values is
    done. Defaults to Nothing.

  initialStep
    If Just step, then use step as the initial step of the line
    search. Otherwise, if Nothing, then the initial step is
    programmatically calculated. Defaults to Nothing.

  maxItersFac
    Defines the maximum number of iterations. The process is aborted
    when maxItersFac * n iterations are done, where n is the number
    of dimensions. Defaults to infinity.

  nexpand
    Maximum number of times the bracketing interval grows or shrinks
    in the line search. Defaults to 50.

  nsecant
    Maximum number of secant iterations in the line search. Defaults
    to 50.

  restartFac
    Restart the conjugate gradient method after restartFac * n
    iterations. Defaults to 1.

  funcEpsilon
    Stop when -alpha * dphi0, the estimated change in function value,
    is less than funcEpsilon * |f|. Defaults to 0.

  nanRho
    After encountering NaN while calculating the step length, growth
    factor used when searching for a bracketing interval. Defaults
    to 1.3.

  techParameters
    Technical parameters which you probably should not touch.
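Since Parameters is a plain record, a configuration is normally built
by record update on defaultParameters (documented at the end of this
document). A minimal sketch; the particular overrides below are
arbitrary illustrations:

> -- A sketch assuming the field names and defaults documented above.
> myParams :: Parameters
> myParams = defaultParameters
>   { printFinal  = False               -- skip the final statistics printout
>   , lineSearch  = ApproximateWolfe    -- always use approximate Wolfe
>   , stopRules   = AlternativeStopRule -- |g_k|_infty <= grad_tol * (1 + |f_k|)
>   , maxItersFac = 100                 -- abort after 100 * n iterations
>   }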
data Statistics

  Statistics given after the process finishes.

  finalValue
    Value of the function at the solution.

  gradNorm
    Maximum absolute component of the gradient at the solution.

  totalIters
    Total number of iterations.

  funcEvals
    Total number of function evaluations.

  gradEvals
    Total number of gradient evaluations.

data Result

  ToleranceStatisfied
    Convergence tolerance was satisfied.

  FunctionChange
    Change in function value was less than funcEpsilon * |f|.

  MaxTotalIter
    Total iterations exceeded maxItersFac * n.

  NegativeSlope
    Slope was always negative in the line search.

  MaxSecantIter
    Number of secant iterations exceeded nsecant.

  NotDescent
    Search direction was not a descent direction.

  LineSearchFailsInitial
    Line search failed in the initial interval.

  LineSearchFailsBisection
    Line search failed during bisection.

  LineSearchFailsUpdate
    Line search failed during the interval update.

  DebugTol
    Debug tolerance was on and the test failed (see debugTol).

  FunctionValueNaN
    Function value became NaN.

  StartFunctionValueNaN
    Initial function value was NaN.

data Combined t

  Function calculating both the value of the objective function f and
  its gradient at a point x. Constructors: MCombined (mutable) and
  VCombined (pure).

data Gradient t

  Function calculating the value of the gradient of the objective
  function f at a point x. The MGradient constructor uses a function
  receiving as parameters the point x being evaluated (which should
  not be modified) and the vector where the gradient should be
  written; VGradient is its pure counterpart. (See the sketch after
  these type descriptions.)

data Function t

  Function calculating the value of the objective function f at a
  point x. Constructors: MFunction (mutable) and VFunction (pure).

type GradientMVector m

  Mutable vector representing where the gradient should be written.

type PointMVector m

  Mutable vector representing the point where the function/gradient
  is being evaluated. This vector should not be modified.

data Mutable

  Phantom type for functions using mutable data.

data Simple

  Phantom type for simple pure functions.
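As an illustration of the Simple and Mutable representations just
described, here is the gradient of f(x) = sum of x_i^2 written both
ways. This is a sketch: it assumes the MGradient callback receives the
read-only point and the output vector as mutable storable vectors and
returns unit in some PrimMonad, as the constructor descriptions above
indicate.

> import Numeric.Optimization.Algorithms.HagerZhang05
> import qualified Data.Vector.Unboxed as U
> import qualified Data.Vector.Storable.Mutable as SM
>
> -- Pure representation: allocate and return a fresh gradient vector.
> pureGrad :: Gradient Simple
> pureGrad = VGradient (U.map (2 *) :: U.Vector Double -> U.Vector Double)
>
> -- Mutable representation: write the gradient in place into the
> -- vector supplied by the optimizer, reading the point but never
> -- modifying it.
> mutGrad :: Gradient Mutable
> mutGrad = MGradient $ \point g -> do
>   let n = SM.length point
>       go i | i >= n    = return ()
>            | otherwise = do xi <- SM.read point i
>                             SM.write g i (2 * xi)
>                             go (i + 1)
>   go 0

The mutable form avoids allocating an intermediate vector on every
gradient evaluation.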
optimize

  Run the CG_DESCENT optimizer and try to minimize the function. Its
  arguments are, in order:

    * how we should optimize (Parameters);
    * grad_tol (see StopRules);
    * the initial guess;
    * the function to be minimized;
    * the gradient of the function;
    * (optional) a combined function computing both the function and
      its gradient.

  A full usage sketch appears at the end of this document.

Internal functions:

allocaSet
  Allocates with alloca and sets the memory area.

allocateWorkSpace
  Allocates enough work space for CG_DESCENT. If the number of
  dimensions is "small enough", then we allocate on the stack;
  otherwise, we allocate via malloc.

copyInput
  Copies the input array from a mutable storable vector to any pure
  vector. Used to convert pure functions into mutable ones.

copyOutput
  Copies the output array from any pure vector to a mutable storable
  array. Used to convert pure functions that return the gradient into
  mutable ones.

combine
  Combines two separate functions into a single combined one. This is
  always a win for us, since we save one jump from C to Haskell land.

defaultParameters
  Default parameters. See the documentation for Parameters and
  TechParameters to see what the defaults are.
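Putting the pieces together, here is a minimal end-to-end sketch. The
objective function, starting point, and tolerance are arbitrary
illustrations, and the sketch assumes optimize returns a triple of the
solution vector, the termination Result, and the Statistics:

> import qualified Data.Vector.Unboxed as U
> import Numeric.Optimization.Algorithms.HagerZhang05
>
> -- f(x, y) = (x - 1)^2 + 4*(y + 3)^2, minimized at (1, -3).
> fun :: U.Vector Double -> Double
> fun v = let x = v U.! 0
>             y = v U.! 1
>         in (x - 1) ^ 2 + 4 * (y + 3) ^ 2
>
> -- Gradient of fun: (2*(x - 1), 8*(y + 3)).
> gradF :: U.Vector Double -> U.Vector Double
> gradF v = let x = v U.! 0
>               y = v U.! 1
>           in U.fromList [2 * (x - 1), 8 * (y + 3)]
>
> main :: IO ()
> main = do
>   (_solution, result, stats) <-
>     optimize defaultParameters   -- the defaults documented above
>              1e-8                -- grad_tol, see StopRules
>              (U.fromList [0, 0]) -- initial guess
>              (VFunction fun)     -- pure objective function
>              (VGradient gradF)   -- pure gradient
>              Nothing             -- no combined function given
>   print result
>   print (finalValue stats)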