nonlinear-optimization-0.3.12
Numeric.Optimization.Algorithms.HagerZhang05

Copyright    : (c) 2009-2011 Felipe Lessa
License      : GPL
Maintainer   : felipe.lessa@gmail.com
Stability    : experimental
Portability  : portable
Safe Haskell : None

data EstimateError

  How to calculate the estimated error in the function value.

  AbsoluteEpsilon eps
    Estimates the error as eps.

  RelativeEpsilon eps
    Estimates the error as eps * C_k.

data StopRules

  Stop rules used to decide when to stop iterating.

  DefaultStopRule stop_fac
    Stops when

      |g_k|_infty <= max(grad_tol, |g_0|_infty * stop_fac)

    where |g_i|_infty is the maximum absolute component of the
    gradient at the i-th step.

  AlternativeStopRule
    Stops when

      |g_k|_infty <= grad_tol * (1 + |f_k|)

data LineSearch

  Line search methods that may be used.

  ApproximateWolfe
    Use approximate Wolfe line search.

  AutoSwitch AWolfeFac
    Use ordinary Wolfe line search, and switch to approximate Wolfe
    when

      |f_{k+1} - f_k| < AWolfeFac * C_k

    where C_k is the average size of the cost and AWolfeFac is the
    parameter given to this constructor.

data Verbose

  How verbose we should be.

  Quiet
    Do not output anything to stdout, which most of the time is good.

  Verbose
    Print what work is being done on each iteration.

  VeryVerbose
    Print information about every step; may be useful for
    troubleshooting.

data TechParameters

  Technical parameters which you probably should not touch. You
  should read the CG_DESCENT papers to understand how you can tune
  these parameters.

  techDelta  Wolfe line search parameter.  Defaults to 0.1.
  techSigma  Wolfe line search parameter.  Defaults to 0.9.
  techGamma  Decay factor for bracket interval width.  Defaults
             to 0.66.
  techRho    Growth factor when searching for the initial bracketing
             interval.  Defaults to 5.
  techEta    Lower bound for the conjugate gradient update parameter
             beta_k is techEta * ||d||_2.  Defaults to 0.01.
  techPsi0   Factor used in the starting guess for iteration 1.
             Defaults to 0.01.
  techPsi1   In performing a QuadStep, we evaluate the function at
             psi1 * previous step.  Defaults to 0.1.
  techPsi2   When starting a new CG iteration, our initial guess for
             the line search stepsize is psi2 * previous step.
             Defaults to 2.

data Parameters

  Parameters given to the optimizer.

  printFinal     Print final statistics to stdout.  Defaults to True.

  printParams    Print parameters to stdout before starting.
                 Defaults to False.

  verbose        How verbose we should be while computing.
                 Everything is printed to stdout.  Defaults to Quiet.

  lineSearch     What kind of line search should be used.  Defaults
                 to AutoSwitch 1e-3.

  qdecay         Factor in [0, 1] used to compute the average cost
                 magnitude C_k as follows:

                   Q_k = 1 + qdecay * Q_{k-1},  Q_0 = 0
                   C_k = C_{k-1} + (|f_k| - C_{k-1}) / Q_k

                 Defaults to 0.7.

  stopRules      Stop rules that define when the iterations should
                 end.  Defaults to DefaultStopRule 0.

  estimateError  How to calculate the estimated error in the
                 function value.  Defaults to RelativeEpsilon 1e-6.

  quadraticStep  When to attempt quadratic interpolation in the line
                 search.  If Nothing, then never try a quadratic
                 interpolation step.  If Just cutoff, then attempt
                 quadratic interpolation in the line search when

                   |f_{k+1} - f_k| / f_k <= cutoff

                 Defaults to Just 1e-12.

  debugTol       If Just tol, then always check that

                   f_{k+1} - f_k <= tol * C_k

                 Otherwise, if Nothing, then no checking of function
                 values is done.  Defaults to Nothing.

  initialStep    If Just step, then use step as the initial step of
                 the line search.  Otherwise, if Nothing, then the
                 initial step is calculated programmatically.
                 Defaults to Nothing.

  maxItersFac    Defines the maximum number of iterations.  The
                 process is aborted when maxItersFac * n iterations
                 are done, where n is the number of dimensions.
                 Defaults to infinity.

  nexpand        Maximum number of times the bracketing interval
                 grows or shrinks in the line search.  Defaults
                 to 50.

  nsecant        Maximum number of secant iterations in the line
                 search.  Defaults to 50.

  restartFac     Restart the conjugate gradient method after
                 restartFac * n iterations.  Defaults to 1.

  funcEpsilon    Stop when -alpha * dphi0, the estimated change in
                 function value, is less than funcEpsilon * |f|.
                 Defaults to 0.

  nanRho         After encountering NaN while calculating the step
                 length, growth factor when searching for a
                 bracketing interval.  Defaults to 1.3.

  techParameters Technical parameters which you probably should not
                 touch.
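  Since Parameters is a plain record, a typical way to configure the
  optimizer is to start from defaultParameters (documented below) and
  override individual fields with record-update syntax.  A minimal
  sketch; the field values here are chosen only for illustration:

> import Numeric.Optimization.Algorithms.HagerZhang05
>
> -- A quieter configuration with a bounded iteration count.
> myParams :: Parameters
> myParams = defaultParameters
>   { printFinal  = False           -- do not print final statistics
>   , verbose     = Quiet           -- no per-iteration output
>   , lineSearch  = AutoSwitch 1e-3 -- the documented default
>   , maxItersFac = 100             -- abort after 100 * n iterations
>   }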
data Statistics

  Statistics given after the process finishes.

  finalValue  Value of the function at the solution.
  gradNorm    Maximum absolute component of the gradient at the
              solution.
  totalIters  Total number of iterations.
  funcEvals   Total number of function evaluations.
  gradEvals   Total number of gradient evaluations.

data Result

  ToleranceStatisfied       Convergence tolerance was satisfied.
  FunctionChange            Change in function value was less than
                            funcEpsilon * |f|.
  MaxTotalIter              Total iterations exceeded
                            maxItersFac * n.
  NegativeSlope             Slope was always negative in line search.
  MaxSecantIter             Number of secant iterations exceeded
                            nsecant.
  NotDescent                Search direction was not a descent
                            direction.
  LineSearchFailsInitial    Line search failed in the initial
                            interval.
  LineSearchFailsBisection  Line search failed during bisection.
  LineSearchFailsUpdate     Line search failed during the interval
                            update.
  DebugTol                  Debug tolerance was on and the test
                            failed (see debugTol).
  FunctionValueNaN          Function value became NaN.
  StartFunctionValueNaN     Initial function value was NaN.

data Function t

  Function calculating the value of the objective function f at a
  point x.  Constructors: VFunction (pure) and MFunction (mutable).

data Gradient t

  Function calculating the value of the gradient of the objective
  function f at a point x.  Constructors: VGradient (pure) and
  MGradient (mutable).

  The MGradient constructor uses a function receiving as parameters
  the point x being evaluated (which should not be modified) and the
  vector where the gradient should be written.

data Combined t

  Function calculating both the value of the objective function f
  and its gradient at a point x.  Constructors: VCombined (pure) and
  MCombined (mutable).

type PointMVector m

  Mutable vector representing the point where the function/gradient
  is being evaluated.  This vector should not be modified.

type GradientMVector m

  Mutable vector representing where the gradient should be written.

data Mutable

  Phantom type for functions using mutable data.

data Simple

  Phantom type for simple pure functions.

optimize

  Run the CG_DESCENT optimizer and try to minimize the function.
  Its arguments are:

    * Parameters: how we should optimize.
    * grad_tol, see StopRules.
    * The initial guess.
    * The function to be minimized.
    * The gradient of the function.
    * (Optional) A combined function computing both the function
      and its gradient.
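  As a usage sketch (not part of the original documentation):
  minimizing f(x, y) = (x - 3)^2 + (y - 5)^2 with a pure function
  and gradient, assuming the argument order listed above and that
  optimize returns the solution vector, the Result, and the
  Statistics:

> import qualified Data.Vector.Unboxed as U
> import Numeric.Optimization.Algorithms.HagerZhang05
>
> main :: IO ()
> main = do
>   let f :: U.Vector Double -> Double
>       f v = let x = v U.! 0
>                 y = v U.! 1
>             in (x - 3)^2 + (y - 5)^2
>       g :: U.Vector Double -> U.Vector Double
>       g v = let x = v U.! 0
>                 y = v U.! 1
>             in U.fromList [2 * (x - 3), 2 * (y - 5)]
>   (xs, result, stats) <- optimize defaultParameters 1e-8
>                                   (U.fromList [0, 0])
>                                   (VFunction f) (VGradient g)
>                                   Nothing
>   print xs      -- should be close to [3, 5]
>   print result  -- e.g. ToleranceStatisfied
>   print stats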
defaultParameters

  Default parameters.  See the documentation for Parameters and
  TechParameters to see what the defaults are.

Internal functions (not exported):

  allocaSet          Allocates (as alloca does) and sets the memory
                     area.

  allocateWorkSpace  Allocates enough work space for CG_DESCENT.  If
                     the number of dimensions is "small enough",
                     then we allocate on the stack; otherwise, we
                     allocate via malloc.

  copyInput          Copies the input array from a mutable storable
                     vector to any pure vector.  Used to convert
                     pure functions into mutable ones.

  copyOutput         Copies the output array from any pure vector to
                     a mutable storable array.  Used to convert pure
                     functions that return the gradient into mutable
                     ones.

  combine            Combines two separate functions into a single,
                     combined one.  This is always a win for us,
                     since we save one jump from C to Haskell land.
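  To illustrate why supplying a combined function helps: instead of
  letting the library stitch the function and gradient together via
  combine, the caller can compute both in one pass and pay for a
  single callback from C into Haskell per evaluation point.  A
  hypothetical sketch, assuming VCombined wraps a pure function
  returning the (value, gradient) pair, reusing f and g from the
  example above:

> import qualified Data.Vector.Unboxed as U
> import Numeric.Optimization.Algorithms.HagerZhang05
>
> -- Value and gradient share the intermediate work (x - 3) and
> -- (y - 5), and the optimizer calls back into Haskell only once.
> fg :: U.Vector Double -> (Double, U.Vector Double)
> fg v = let x = v U.! 0
>            y = v U.! 1
>        in ( (x - 3)^2 + (y - 5)^2
>           , U.fromList [2 * (x - 3), 2 * (y - 5)] )
>
> -- Passed as the optional last argument of optimize:
> --   optimize ps tol x0 (VFunction f) (VGradient g)
> --            (Just (VCombined fg))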