srtree-2.0.0.4
Recovered API documentation. Unless noted otherwise, all modules are
(c) Fabricio Olivetti 2021 - 2024, BSD3, maintainer fabricio.olivetti@gmail.com,
stability experimental.

Module Data.SRTree.Recursion
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables. Safe-Inferred.
  Generic recursion schemes (cata, ana, hylo, para, mutu, apo, accu, histo, futu,
  chrono) over the fixed point Fix.

Module Data.SRTree
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables. Safe-Inferred.

  SRTree: tree structure to be used with Symbolic Regression algorithms.
  This structure is the fixed point of an n-ary tree functor. It also defines
  the supported functions and supported operators.

  Constructors:
    Var   - index of the variables
    Param - index of the parameters
    Const - constant value; can be converted to a parameter
            (TODO: IConst Int for integer constants; RConst Ratio for rational constants)
    Uni   - univariate function
    Bin   - binary operator

  var: creates a tree with a single node representing a variable
  param: creates a tree with a single node representing a parameter
  const: creates a tree with a single node representing a constant value
  arity: arity of the current node
  getChildren: gets the children of a node; returns an empty list for a leaf node.
    >>> map showExpr . getChildren $ "x0" + 2
    ["x0", 2]
  childrenOf: gets the children of an unfixed node
  replaceChildren: replaces the children with elements from a list
  getOperator: returns a node containing the operator and () as children
  countNodes: counts the number of nodes in a tree.
    >>> countNodes $ "x0" + 2
    3
  countVarNodes: counts the number of Var nodes.
    >>> countVarNodes $ "x0" + 2 * ("x0" - sin "x1")
    3
  countParams: counts the number of Param nodes.
    >>> countParams $ "x0" + "t0" * sin ("t1" + "x1") - "t0"
    3
  countConsts: counts the number of Const nodes.
    >>> countConsts $ "x0" * 2 + 3 * sin "x0"
    2
  countOccurrences: counts the occurrences of the variable indexed as ix.
    >>> countOccurrences 0 $ "x0" * 2 + 3 * sin "x0" + "x1"
    2
  countUniqueTokens: counts the number of unique tokens.
    >>> countUniqueTokens $ "x0" + ("x1" * "x0" - sin ("x0" ** 2))
    8
  numberOfVars: returns the number of unique variables.
    >>> numberOfVars $ "x0" + 2 * ("x0" - sin "x1")
    2
  getIntConsts: returns the integer constants, i.e., those values x with
  floor x == ceiling x.
    >>> getIntConsts $ "x0" + 2 * "x1" ** 3 - 3.14
    [2.0,3.0]
  relabelParams: relabels the parameter indices incrementally starting from 0.
    >>> showExpr . relabelParams $ "x0" + "t0" * sin ("t1" + "x1") - "t0"
    "x0" + "t0" * sin ("t1" + "x1") - "t2"
  constsToParam: changes constant values to parameters, returning the changed
  tree and a list of parameter values.
    >>> snd . constsToParam $ "x0" * 2 + 3.14 * sin (5 * "x1")
    [2.0,3.14,5.0]
  floatConstsToParam: same as constsToParam, but does not change constant values
  that can be converted to integers without loss of precision.
    >>> snd . floatConstsToParam $ "x0" * 2 + 3.14 * sin (5 * "x1")
    [3.14]
  paramsToConst: converts the parameters into constants in the tree.
    >>> showExpr . paramsToConst [1.1, 2.2, 3.3] $ "x0" + "t0" * sin ("t1" * "x0" - "t2")
    x0 + 1.1 * sin(2.2 * x0 - 3.3)

  The IsString instance allows us to create a tree using a more practical notation:
    >>> :t "x0" + "t0" * sin ("x1" * "t1")
    Fix SRTree

Module Data.SRTree.Random
  Extensions: ConstraintKinds. Safe-Inferred.

  randomConst: returns a random constant; the parameter p must have the HasConst property
  randomFunction: returns a random function; the parameter p must have the HasFuns property
  randomNode: returns a random node; the parameter p must have every property
  randomNonTerminal: returns a random non-terminal node; the parameter p must have every property
  randomPow: returns a random integer power node; the parameter p must have the
  corresponding property
  randomTree: returns a random tree with a limited budget; the parameter p must
  have every property.
    >>> let treeGen = runReaderT (randomTree 12) (P [0,1] (-10, 10) (2, 3) [Log, Exp])
    >>> tree <- evalStateT treeGen (mkStdGen 52)
    >>> showExpr tree
    "(-2.7631152121655838 / Exp((x0 / ((x0 * -7.681722660704317) - Log(3.378309080134594)))))"
  randomTreeBalanced: returns a random tree with approximately n nodes; the
  parameter p must have every property.
    >>> let treeGen = runReaderT (randomTreeBalanced 10) (P [0,1] (-10, 10) (2, 3) [Log, Exp])
    >>> tree <- evalStateT treeGen (mkStdGen 42)
    >>> showExpr tree
    "Exp(Log((((7.784360517385774 * x0) - (3.6412224491658223 ^ x1)) ^ ((x0 ^ -4.09764995657091) + Log(-7.710216839988497)))))"
  randomVar: returns a random variable; the parameter p must have the HasVars property
  FullParams: a structure with every property
  HasEverything: constraint synonym for all properties
  RndTree: a monad transformer to generate random trees of type SRTree ix val,
  given the parameters p ix val, using a random number generator.
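The counting and relabeling helpers above compose naturally with the IsString and Num instances. A minimal sketch of that workflow, assuming the srtree package is installed and that Data.SRTree re-exports these names (the individual calls follow the documented examples above):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch against the srtree API described above; requires the srtree package.
import Data.SRTree

main :: IO ()
main = do
  -- "x0"/"x1" are variables, "t0"/"t1" are parameters, literals are constants
  let tree = "x0" + "t0" * sin ("t1" + "x1") - "t0"
  print (countNodes tree)            -- total number of nodes
  print (countParams tree)           -- 3 for this tree, per the docs above
  -- renumber the duplicated "t0" occurrences to fresh indices t0, t1, t2
  putStrLn (showExpr (relabelParams tree))
```

This is a sketch, not part of the package's own documentation; the exact import layout may differ between srtree versions.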
Module Data.SRTree.Print
  Safe-Inferred.

  printExpr / printExprWithVars: prints the expression (optionally with named variables)
  printLatex: prints the expression in LaTeX notation
  printPython: prints the expression in numpy notation
  printTikz: prints the tree in TikZ format
  showExpr: converts a tree into a string in math notation.
    >>> showExpr $ "x0" + sin ( tanh ("t0" + 2) )
    "(x0 + Sin(Tanh((t0 + 2.0))))"
  showExprWithVars: converts a tree into a string in math notation given named vars.
    >>> showExprWithVars ["mu", "eps"] $ "x0" + sin ( "x1" * tanh ("t0" + 2) )
    "(mu + Sin(Tanh(eps * (t0 + 2.0))))"
  showLatex: displays a tree as a LaTeX-compatible expression.
    >>> showLatex $ "x0" + sin ( tanh ("t0" + 2) )
    "\\left(x_{, 0} + \\operatorname{sin}(\\operatorname{tanh}(\\left(\\theta_{, 0} + 2.0\\right)))\\right)"
  showPython: displays a tree as a numpy-compatible expression.
    >>> showPython $ "x0" + sin ( tanh ("t0" + 2) )
    "(x[:, 0] + np.sin(np.tanh((t[:, 0] + 2.0))))"
  showTikz: displays a tree in TikZ format
  convertProtectedOps: converts a tree with protected operators to a conventional math tree

Module Data.SRTree.Eval
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables.

  SRMatrix: matrix of feature values
  PVector: vector of parameter values; needs to be strict to be readily accessible
  SRVector: vector of target values
  evalTree: evaluates the tree given a matrix of variable values, a vector of
  parameter values, and a function that converts a Double to the type of the
  variables. This is useful when working with datasets of many values per variable.
  inverseFunc: returns the inverse of a function. This is a partial function.
  evalInverse: evaluates the inverse of a function
  invright: evaluates the right inverse of an operator
  invleft: evaluates the left inverse of an operator
  invertibles: list of invertible functions

Module Data.SRTree.Datasets
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables, ConstraintKinds.

  loadDataset: loads a dataset from a filename in the format

      filename.ext:start_row:end_row:target:features:y_err

  and returns (X_train, y_train, X_test, y_test, varnames, target name), where
  varnames is a comma-separated list of the variable names and target name is
  the name of the target, where:
    * start_row:end_row is the range of the training rows (default 0:nrows-1);
      every row not included in this range is used as validation
    * target is either the name of the target column (if the data file has
      headers) or the index of the target variable
    * features is a comma-separated list of column names or indices to be used
      as input variables of the regression model

  Internal helpers:
    loads a list of lists of ByteString into a matrix of Double;
    returns True if the extension is .gz;
    detectSep: detects the separator automatically, by checking whether a
    candidate separator produces the same number of columns in every row, with
    at least two columns;
    reads a file and returns a list of lists of ByteString corresponding to
    each element of the matrix (the first row may be a header);
    splits the parameters from the filename: the expected format is
    filename.ext:p1:p2:p3:p4, where p1 and p2 are the start and end rows of the
    training data (defaults: p1 = 0 and p2 = number of rows - 1), p3 is the
    target column (a header name or an index), and p4 is a comma-separated list
    of columns (names or indices) to be used as input variables, renamed
    internally to x0, x1, ... in the order of this list;
    tries to parse a string into an Int;
    given a map between column names and indices, the target column, and the
    variable columns, returns the indices of the variable columns and of the target;
    given the start and end rows, returns the hmatrix extractors for the
    training and validation data.

Module Algorithm.Massiv.Utils
  cubicSplineCoefficients: given a list of (x,y) coordinates, produces a list of
  coefficients of cubic equations, with knots at each of the initially provided
  x coordinates. Natural cubic spline interpolation is used. See:
  http://en.wikipedia.org/wiki/Spline_interpolation#Interpolation_using_natural_cubic_spline

Module Algorithm.EqSat.Egraph
  canonical: gets the canonical id of an e-class
  canonize: canonizes the e-node children
  createEClass: creates a new e-class from an e-class id, a new e-node, and the
  info of this e-class
  emptyDB: returns an empty e-graph database
  emptyGraph: returns an empty e-graph (assumes up to 999 variables and params)
  getEClass: gets the e-class with id c
  isConst: checks whether an e-class is a constant value
  creates a singleton trie from an e-class id
Module Algorithm.EqSat.Queries
  findRootClasses: returns all the root e-classes (e-classes without parents)
  returns the e-class id with the best fitness among those satisfying a predicate

Module Algorithm.EqSat.Info
  checks whether an e-node evaluates to a constant
  calculates the cost of a node
  calculateHeights: updates the heights of each e-class; won't work if there is no root
  joinData: joins the data from two e-classes (TODO: instead of folding, just do
  not apply rules; keep a list of values instead of a single value)
  makeAnalysis: calculates the e-node data (constant values and cost)

Module Data.SRTree.Derivative
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables. Safe-Inferred.

  derivative: derivative of each supported function; for a function h(f) it
  returns the derivative dh/df.
    >>> derivative Log 2.0
    0.5
  deriveByParam: symbolic derivative by a parameter
  deriveByVar: symbolic derivative by a variable
  doubleDerivative: second-order derivative of the supported functions.
    >>> doubleDerivative Log 2.0
    -0.25
  deriveBy: creates the symbolic partial derivative of a tree by variable dx
  (if p is False) or by parameter dx (if p is True). This uses mutual recursion,
  where the first recursion (alg1) holds the derivative w.r.t. the current node
  and the second (alg2) holds the original tree.
    >>> showExpr . deriveBy False 0 $ 2 * "x0" * "x1"
    "(2.0 * x1)"
    >>> showExpr . deriveBy True 1 $ 2 * "x0" * "t0" - sqrt ("t1" * "x0")
    "(-1.0 * ((1.0 / (2.0 * Sqrt((t1 * x0)))) * x0))"

Module Algorithm.SRTree.AD
  Extensions: FlexibleInstances, DeriveFunctor, ScopedTypeVariables.

  forwardModeUnique: calculates the numerical gradient of the tree and evaluates
  the tree at the same time. It assumes that each parameter occurs only once in
  the expression. This should be significantly faster than forwardMode.
  reverseModeArr: same as above, but using reverse mode with the tree encoded as
  an array, which is even faster.

Module Algorithm.SRTree.Likelihoods
  Extensions: ConstraintKinds.

  Distribution: supported distributions for the negative log-likelihood. MSE
  refers to mean squared error (not a proper distribution); HGaussian is a
  Gaussian with heteroscedasticity, where the error must be provided.
  fisherNLL: Fisher information of the negative log-likelihood
  getSErr: gets the standard error from a Maybe Double; if it is Nothing,
  estimates it from the SSR, otherwise uses the given value. For distributions
  other than Gaussian it defaults to the constant 1.
  gradNLL / gradNLLArr / gradNLLGraph: gradient of the negative log-likelihood
  hessianNLL: Hessian of the negative log-likelihood. Although the Fisher
  information is just the diagonal of this function's result, it is better to
  keep them as separate functions for efficiency.
  mse: mean squared error
  nll: negative log-likelihood
  predict: prediction for the different distributions
  r2: coefficient of determination
  rmse: root of the mean squared error
  sse: sum of squared errors (sum of squared residues)
  (plus the total sum of squares and the logistic function)

Module Algorithm.SRTree.ModelSelection
  Extensions: ConstraintKinds.

  aic: Akaike information criterion
  bic: Bayesian information criterion
  evidence: evidence
  mdl: MDL as described in Bartlett, Deaglan J., Harry Desmond, and Pedro G.
  Ferreira, "Exhaustive symbolic regression", IEEE Transactions on Evolutionary
  Computation (2023)
  mdlFreq: same as mdl, but weighting the functional structure by frequency,
  calculated from a wiki of physics and engineering functions
  mdlLatt: MDL Lattice as described in Bartlett, Deaglan, Harry Desmond, and
  Pedro Ferreira, "Priors for symbolic regression", Proceedings of the Companion
  Conference on Genetic and Evolutionary Computation (2023)
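The symbolic derivatives above pair naturally with the printing module: iterating deriveBy over the parameter indices yields the full symbolic gradient. A hedged sketch under the same assumptions as before (srtree installed; the deriveBy entry point and its True-means-by-parameter convention are taken from the documented examples above, and may be exposed as deriveByParam/deriveByVar in some versions):

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Sketch: print the symbolic gradient w.r.t. every parameter of a tree.
-- Assumes deriveBy True ix differentiates by parameter ix, as documented.
import Data.SRTree

main :: IO ()
main = do
  let tree = "t0" * "x0" + sin ("t1" * "x0")
  mapM_ (putStrLn . showExpr . (\ix -> deriveBy True ix tree))
        [0 .. countParams tree - 1]
```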
Module Algorithm.EqSat.DB
  compileToQuery: returns a Query (a list of atoms) for a pattern
  returns the e-class id for a given variable that matches the pattern described
  by the atoms
  elemOfAtom: checks whether v is an element of an atom
  creates the substitution map of the pattern variables for each matched subgraph
  getElems: returns the list of children values
  returns all e-class ids that can match a given sequence of atoms
  genericJoin: searches for the intersection of the e-class ids that match each
  part of the query; returns Nothing if the intersection is empty. Here var is
  the variable currently under investigation, xs is the map from investigated
  ids to their corresponding e-class ids, trie is the current trie of the
  pattern, and (i:ids) is the sequence root : children of the atom to
  investigate. NOTE: the result must be Maybe Set to distinguish an empty set
  from no answer.
  isDiffFrom: checks whether two ClassOrVar are different; only checks pattern
  variables, otherwise returns True
  match: returns the substitution rules for every match of the pattern inside
  the e-graph
  orderedVars: sorts the variables in a query by the most frequently occurring
  updateVarAtom: updates every occurrence of var with the new id x

Module Algorithm.EqSat.Build
  add: adds a new or existing e-node (merging if necessary)
  addToDB: adds an e-node and e-class id to the database
  gets the e-node of the target of a rule (TODO: add consts and modify)
  creates a database of patterns from an e-graph; it simply calls addToDB for
  every pair (e-node, e-class id) of the e-graph
  creates an e-graph from an expression tree
  builds an e-graph from multiple independent trees
  returns all expressions rooted at e-class eId (TODO: check for infinite list)
  gets the best expression given the default cost function
  returns one expression rooted at e-class eId (TODO: avoid looping)
  returns a random expression rooted at e-class eId
  returns True if the condition of a rule holds for a given match
  merges two equivalent e-classes
  modifies an e-class, e.g., adds a constant e-node and prunes non-leaves
  populates an IntTrie with a sequence of e-class ids
  rebuilds the e-graph after inserting or merging e-classes
  repairs an e-node by canonizing its children; if the canonized e-node already
  exists in the e-graph, merges the e-classes
  repairs the analysis of an e-class considering the newly added e-node
  adds the target of a rule into the e-graph

Module Algorithm.EqSat
  applies a single step of merge-only equality saturation
  runs equality saturation from an expression tree, a given set of rules, and a
  cost function; returns the tree with the smallest cost
  matches the rules given a scheduler
  recalculates the costs with a new cost function
  runs equality saturation for a number of iterations
  The scheduler stores a map with the banned iterations of each rule
  (TODO: make it more customizable)

Module Algorithm.EqSat.Simplify
  applies a single step of merge-only equality saturation using the default rules
  simplifies using the default parameters
  default cost function for simplification (TODO): number of parameters; length;
  terminal < non-terminal; symbol comparison (constants, parameters, variables
  x0, x10, x2); operator priorities (+, -, *, inv_div, pow, abs, exp, log,
  log10, sqrt); univariates
  simplifies with custom parameters

Module Numeric.Optimization.NLOPT.Bindings
  (c) Matthew Peddie 2017, BSD3, provisional, GHC.

  Creates a new Opt object. The optimize function is very similar to the C
  function nlopt_optimize, but it does not use mutable vectors and returns an
  Output structure.

  The NLOPT algorithm names, apart from the names of the actual optimization
  methods, follow this scheme:
    G        means a global method
    L        means a local method
    D        means a method that requires the derivative
    N        means a method that does not require the derivative
    *_RAND   means the algorithm involves some randomization
    *_NOSCAL means the algorithm is *not* scaled to a unit hypercube
             (i.e., it is sensitive to the units of x)

  Global methods include the AUGmented LAGrangian variants (plain, and with
  penalty functions only for equality constraints; some require local_optimizer
  to be set), Multi-Level Single-Linkage (original, and with a Sobol
  low-discrepancy sequence for starting points; with or without user-provided
  derivatives), Stochastic Global Optimization (plus a randomized variant),
  Controlled Random Search with local mutation, the DIviding RECTangles family
  (locally-biased, unscaled, "slightly randomized", and original FORTRAN
  variants), an Evolutionary Algorithm, and the Improved Stochastic Ranking
  Evolution Strategy.

  Local methods include Conservative Convex Separable Approximation,
  limited-memory BFGS, the Method of Moving Asymptotes, Sequential Least-SQuares
  Programming, truncated Newton's method (plain, preconditioned, and with
  automatic restarting), shifted limited-memory variable-metric (rank-1 and
  rank-2), AUGmented LAGrangian variants, Bounded Optimization BY Quadratic
  Approximations, Constrained Optimization BY Linear Approximations, the
  Nelder-Mead simplex gradient-free method, Powell's NEWUOA algorithm (with a
  bounds variant by SGJ), PRincipal AXIS gradient-free local optimization, and
  the NLOPT implementation of Rowan's Subplex algorithm.

  Opt: an optimizer object which must be created, configured, and then passed to
  the optimization routine to solve a problem.
  Output: the output of an NLOPT optimizer run, containing the number of
  evaluations, the return code, the minimum of the objective function (if
  optimization succeeded), and the parameters corresponding to the minimum (if
  optimization succeeded).
  Preconditioner: corresponds to nlopt_precond in C; used for functions that
  precondition a vector at a given point in the parameter space. You may pass
  user data of any type a to the functions in this module that take a
  Preconditioner as an argument; this data will be supplied to your function
  when it is called.
  Result: return codes, mostly self-explanatory, including a generic failure
  code and a generic success code.
  ScalarFunction: corresponds to nlopt_func in C; used for scalar functions of
  the parameter vector. User data of any type a is supplied to your function
  when it is called.
  VectorFunction: corresponds to nlopt_mfunc in C; used for vector functions of
  the parameter vector. User data of any type a is supplied to your function
  when it is called.
  version: the NLOPT library version, e.g., 2.4.2.

Module Algorithm.SRTree.NonlinearOpt
  (c) Matthew Peddie 2017, BSD3, provisional, GHC. High-level interface.

  minimizeGlobal: solves the specified global optimization problem.

  Example: the following interactive session uses the ISRES algorithm, a
  stochastic, derivative-free global optimizer, to minimize a trivial function
  with a minimum of 22.0 at (0, 0). The search is conducted within a box from
  -10 to 10 in each dimension.

    >>> import Numeric.LinearAlgebra ( dot, fromList )
    >>> let objf x = x `dot` x + 22                       -- define objective
    >>> let stop = ObjectiveRelativeTolerance 1e-12 :| [] -- define stopping criterion
    >>> let algorithm = ISRES objf [] [] (SeedValue 22) Nothing -- specify algorithm
    >>> let lowerbounds = fromList [-10, -10]             -- specify bounds
    >>> let upperbounds = fromList [10, 10]               -- specify bounds
    >>> let problem = GlobalProblem lowerbounds upperbounds stop algorithm
    >>> let x0 = fromList [5, 8]                          -- specify initial guess
    >>> minimizeGlobal problem x0
    Right (Solution {solutionCost = 22.000000000002807, solutionParams = [-1.660591102367038e-6,2.2407062393213684e-7], solutionResult = FTOL_REACHED})

  minimizeLocal: solves the specified local optimization problem.

  Example: the following interactive session enforces a scalar constraint
  (the parameters must sum to 1) using the SLSQP solver.

    >>> import Numeric.LinearAlgebra ( dot, fromList, toList, scale )
    >>> let objf x = (x `dot` x + 22, 2 `scale` x)
    >>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
    >>> let constraintf x = (sum (toList x) - 1.0, fromList [1, 1])
    >>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
    >>> let algorithm = SLSQP objf [] [] [constraint]
    >>> let problem = LocalProblem 2 stop algorithm
    >>> let x0 = fromList [5, 10]
    >>> minimizeLocal problem x0
    Right (Solution {solutionCost = 22.5, solutionParams = [0.4999999999999998,0.5000000000000002], solutionResult = FTOL_REACHED})

  minimizeAugLag: the following session enforces the same scalar constraint, but
  this time it uses the augmented Lagrangian method to enforce the constraint
  and the SBPLX algorithm, which does not support nonlinear constraints itself,
  to perform the minimization. As before, the parameters must always sum to 1,
  and the minimizer finds the same constrained minimum of 22.5 at (0.5, 0.5).

    >>> import Numeric.LinearAlgebra ( dot, fromList, toList )
    >>> let objf x = x `dot` x + 22
    >>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
    >>> let algorithm = SBPLX objf [] Nothing
    >>> let subproblem = LocalProblem 2 stop algorithm
    >>> let x0 = fromList [5, 10]
    >>> minimizeLocal subproblem x0
    Right (Solution {solutionCost = 22.0, solutionParams = [0.0,0.0], solutionResult = FTOL_REACHED})
    >>> -- define constraint function:
    >>> let constraintf x = sum (toList x) - 1.0
    >>> -- define constraint object to pass to the algorithm:
    >>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
    >>> let problem = AugLagProblem [constraint] [] (AUGLAG_EQ_LOCAL subproblem)
    >>> minimizeAugLag problem x0
    Right (Solution {solutionCost = 22.500000015505844, solutionParams = [0.5000880506776678,0.4999119493223323], solutionResult = FTOL_REACHED})

  The Augmented Lagrangian solvers allow you to enforce nonlinear constraints
  while using local or global algorithms that don't natively support them. The
  subsidiary problem is used to do the minimization, but the AUGLAG methods
  modify the objective to enforce the constraints. Please see the NLOPT
  algorithm manual (http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms)
  for more details on how the methods work and how they relate to one another.
  Variants: AUGmented LAGrangian with a global or local subsidiary method,
  optionally with penalty functions only for equality constraints.

  IMPORTANT NOTE: for augmented Lagrangian problems, you, the user, are
  responsible for providing the appropriate type of constraint. If the
  subsidiary problem requires derivatives, you should provide constraint
  functions with derivatives. If it does not, you should provide constraint
  functions without derivatives. If you don't do this, you may get a runtime error.

  An equality constraint comprises the constraint function (or functions, if a
  preconditioner is used) along with the desired tolerance; collections of
  equality constraints may be empty and may or may not supply derivatives.
  Scalar constraints, vector constraints, and scalar constraints with an
  attached preconditioning function are supported.

  Bound constraints are specified by vectors of the same dimension as the
  parameter space. A lower bound vector v means we want x >= v; an upper bound
  vector u means we want x <= u.

  Example: the following session enforces lower bounds on the example from the
  beginning of the module. This prevents the optimizer from locating the true
  minimum at (0, 0); a slightly higher constrained minimum at (1, 1) is found.
  Note that the optimizer returns XTOL_REACHED rather than FTOL_REACHED,
  because the bound constraint is active at the final minimum.

    >>> import Numeric.LinearAlgebra ( dot, fromList )
    >>> let objf x = x `dot` x + 22                      -- define objective
    >>> let stop = ObjectiveRelativeTolerance 1e-6 :| [] -- define stopping criterion
    >>> let lowerbound = LowerBounds $ fromList [1, 1]   -- specify bounds
    >>> let algorithm = NELDERMEAD objf [lowerbound] Nothing -- specify algorithm
    >>> let problem = LocalProblem 2 stop algorithm      -- specify problem
    >>> let x0 = fromList [5, 10]                        -- specify initial guess
    >>> minimizeLocal problem x0
    Right (Solution {solutionCost = 24.0, solutionParams = [1.0,1.0], solutionResult = XTOL_REACHED})

  The global minimization algorithms mirror the NLOPT algorithm manual:
  Controlled Random Search with local mutation, the DIviding RECTangles family,
  an Evolutionary Algorithm, the Improved Stochastic Ranking Evolution Strategy,
  Multi-Level Single-Linkage (original, and with a Sobol low-discrepancy
  sequence for starting points), and Stochastic Global Optimization (plain and
  randomized; the latter two are only available if you have linked with
  libnlopt_cxx). Optional parameters are wrapped in Maybe; specify Nothing to
  use the default behavior.

  A global problem consists of lower and upper bounds for x, at least one
  stopping condition, and the algorithm specification; a local problem consists
  of the problem specification and an initial parameter guess. Inequality
  constraints follow the same structure as equality constraints (with or
  without derivatives, with a tolerance). A vector-storage parameter specifies
  the memory size used by algorithms, like limited-memory BFGS, that store
  approximate Hessian or Jacobian matrices.

Module Algorithm.SRTree.Opt
  Extensions: ConstraintKinds.

  minimizes the negative log-likelihood of the expression using the Binomial,
  Gaussian, or Poisson likelihood; also minimizes the function while keeping
  parameter ix fixed (used to calculate the profile likelihood).

Module Algorithm.SRTree.ConfidenceIntervals
  Extensions: ConstraintKinds.

  Calculates the confidence interval of the parameters using the Laplace
  approximation or profile likelihood, and the prediction confidence interval
  using the Laplace approximation or profile likelihood.
  Basic stats of the data: covariance of the parameters, correlation, and
  standard errors.
  A confidence interval is composed of the point estimate, a lower bound, and
  an upper bound.
  Profile likelihood algorithms: Bates (classical), ODE (faster), and
  Constrained (fastest); the Constrained approach returns only the endpoints.

Module Text.ParseSR
  Extensions: ConstraintKinds.

  parseSR: calls the corresponding parser for a given algorithm.
    >>> fmap (showOutput MATH) $ parseSR OPERON "lambda,theta" False "lambda ^ 2 - sin(theta*3*lambda)"
    Right "((x0 ^ 2.0) - Sin(((x1 * 3.0) * x0)))"
  showOutput: returns the corresponding function from Data.SRTree.Print for a
  given output format.
  Output: supported output formats. SRAlgs: supported algorithms.
  The parser produces a symbolic regression tree with Int variable indices and
  numerical values represented as Double; the numerical type can be changed.
  Helpers: a parser generator for binary operators; a parser generator for
  unary functions; a combinator that wraps a parser in parens; a user-defined
  expression parser built from lists containing the names of the functions and
  binary operators of an SR algorithm, a parser for variables, a boolean
  indicating whether to convert floating-point values into free parameters, and
  a list of variable names with their corresponding indices; a numeric parser
  that tries Int first and falls back to Double; the analytic quotient; and
  concrete parsers for Transformation-Interaction-Rational (TIR), Operon,
  HeuristicLab, Bingo, GOMEA, and PySR.

Module Text.ParseSR.IO
  Extensions: ConstraintKinds.

  Given a filename, the symbolic regression algorithm, a string of variable
  names, and two booleans indicating whether to convert float values to
  parameters and whether to simplify the expression, reads the file and parses
  everything, returning a list of either an error message or a tree. An empty
  filename defaults to stdin.
  Outputs a list of either errors or trees to a file using the chosen Output
  format; an empty filename defaults to stdout.
  A debug version of the output function to check for invalid parsers.