srtree-2.0.0.2
(c) Fabricio Olivetti 2021 - 2024, BSD3
Maintainer: fabricio.olivetti@gmail.com
Stability: experimental. Portability: FlexibleInstances, DeriveFunctor, ScopedTypeVariables. Safe-Inferred.
(Except where noted, all modules below share this header.)

Module Data.SRTree

Function: Supported functions.
Op: Supported operators.

SRTree: Tree structure to be used with Symbolic Regression algorithms. This structure is a fixed point of an n-ary tree. Its constructors are:

  Var: index of the variables
  Param: index of the parameters
  Const: constant value, can be converted to a parameter
    (TODO: IConst Int for integer constants; RConst Ratio for rational constants)
  Uni: univariate function
  Bin: binary operator

var: create a tree with a single node representing a variable.
param: create a tree with a single node representing a parameter.
const: create a tree with a single node representing a constant value.

arity: Arity of the current node.

getChildren: Get the children of a node. Returns an empty list in case of a leaf node.

>>> map showExpr . getChildren $ "x0" + 2
["x0","2.0"]

childrenOf: Get the children of an unfixed node.
replaceChildren: replaces the children with elements from a list.
getOperator: returns a node containing the operator and () as children.

countNodes: Count the number of nodes in a tree.

>>> countNodes $ "x0" + 2
3

countVarNodes: Count the number of Var nodes.

>>> countVarNodes $ "x0" + 2 * ("x0" - sin "x1")
3

countParams: Count the number of Param nodes.

>>> countParams $ "x0" + "t0" * sin ("t1" + "x1") - "t0"
3

countConsts: Count the number of Const nodes.

>>> countConsts $ "x0" * 2 + 3 * sin "x0"
2

countOccurrences: Count the occurrences of the variable indexed as ix.

>>> countOccurrences 0 $ "x0" * 2 + 3 * sin "x0" + "x1"
2

countUniqueTokens: counts the number of unique tokens.

>>> countUniqueTokens $ "x0" + ("x1" * "x0" - sin ("x0" ** 2))
8

numberOfVars: returns the number of unique variables.

>>> numberOfVars $ "x0" + 2 * ("x0" - sin "x1")
2

getIntConsts: returns the integer constants. We assume an integer constant is a value x for which floor x == ceiling x.

>>> getIntConsts $ "x0" + 2 * "x1" ** 3 - 3.14
[2.0,3.0]

relabelParams: Relabel the parameter indices incrementally starting from 0.

>>> showExpr . relabelParams $ "x0" + "t0" * sin ("t1" + "x1") - "t0"
"x0" + "t0" * sin ("t1" + "x1") - "t2"

relabelVars: Relabel the variable indices incrementally starting from 0.

constsToParam: Change constant values to a parameter, returning the changed tree and a list of parameter values.

>>> snd . constsToParam $ "x0" * 2 + 3.14 * sin (5 * "x1")
[2.0,3.14,5.0]

floatConstsToParam: Same as constsToParam but does not change constant values that can be converted to integers without loss of precision.

>>> snd . floatConstsToParam $ "x0" * 2 + 3.14 * sin (5 * "x1")
[3.14]

paramsToConst: Convert the parameters into constants in the tree.

>>> showExpr . paramsToConst [1.1, 2.2, 3.3] $ "x0" + "t0" * sin ("t1" * "x0" - "t2")
x0 + 1.1 * sin(2.2 * x0 - 3.3)

The IsString instance allows us to create a tree using a more practical notation:

>>> :t "x0" + "t0" * sin ("x1" * "t1")
Fix SRTree
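A minimal compilable sketch of this notation, assuming Data.SRTree re-exports the helpers documented above and Data.SRTree.Print provides showExpr (as the doctests suggest):

{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Data.SRTree
import Data.SRTree.Print (showExpr)

main :: IO ()
main = do
  -- string literals become variables/parameters via the IsString instance
  let tree = "x0" + "t0" * sin ("t1" + "x1") - "t0"
  putStrLn (showExpr tree)
  print (countNodes tree :: Int)    -- total node count
  print (countParams tree :: Int)   -- 3, per the doctest above
  putStrLn (showExpr (relabelParams tree))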
Module Data.SRTree.Random

RndTree: a monad transformer to generate random trees of type `SRTree ix val` given the parameters `p ix val` using the random number generator StdGen.

FullParams: A structure with every property.
HasEverything: Constraint synonym for all properties.

randomVar: Returns a random variable; the parameter p must have the HasVars property.
randomConst: Returns a random constant; the parameter p must have the HasConst property.
randomPow: Returns a random integer power node; the parameter p must have the HasExps property.
randomFunction: Returns a random function; the parameter p must have the HasFuns property.
randomNode: Returns a random node; the parameter p must have every property.
randomNonTerminal: Returns a random non-terminal node; the parameter p must have every property.

randomTree: Returns a random tree with a limited budget; the parameter p must have every property.

>>> let treeGen = runReaderT (randomTree 12) (P [0,1] (-10, 10) (2, 3) [Log, Exp])
>>> tree <- evalStateT treeGen (mkStdGen 52)
>>> showExpr tree
"(-2.7631152121655838 / Exp((x0 / ((x0 * -7.681722660704317) - Log(3.378309080134594)))))"

randomTreeBalanced: Returns a random tree with approximately n nodes; the parameter p must have every property.

>>> let treeGen = runReaderT (randomTreeBalanced 10) (P [0,1] (-10, 10) (2, 3) [Log, Exp])
>>> tree <- evalStateT treeGen (mkStdGen 42)
>>> showExpr tree
"Exp(Log((((7.784360517385774 * x0) - (3.6412224491658223 ^ x1)) ^ ((x0 ^ -4.09764995657091) + Log(-7.710216839988497)))))"

Module Data.SRTree.Print

removeProtection: converts a tree with protected operators to a conventional math tree.

showExpr: convert a tree into a string in math notation.

>>> showExpr $ "x0" + sin (tanh ("t0" + 2))
"(x0 + Sin(Tanh((t0 + 2.0))))"

showExprWithVars: convert a tree into a string in math notation given named vars.

>>> showExprWithVars ["mu", "eps"] $ "x0" + sin ("x1" * tanh ("t0" + 2))
"(mu + Sin((eps * Tanh((t0 + 2.0)))))"

printExpr, printExprWithVars: print the expression.

showPython: Displays a tree as a numpy compatible expression.

>>> showPython $ "x0" + sin (tanh ("t0" + 2))
"(x[:, 0] + np.sin(np.tanh((t[:, 0] + 2.0))))"

printPython: prints the expression in numpy notation.

showLatex: Displays a tree as a LaTeX compatible expression.

>>> showLatex $ "x0" + sin (tanh ("t0" + 2))
"\\left(x_{0} + \\operatorname{sin}(\\operatorname{tanh}(\\left(\\theta_{0} + 2.0\\right)))\\right)"

printLatex: prints the expression in LaTeX notation.

showTikz: Displays a tree in TikZ format.
printTikz: prints the tree in TikZ format.
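A small sketch combining the printers above on one tree (the same expression as the doctests):

{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Data.SRTree
import Data.SRTree.Print

main :: IO ()
main = do
  let tree = "x0" + sin (tanh ("t0" + 2))
  putStrLn (showExpr tree)    -- math notation
  putStrLn (showPython tree)  -- numpy-compatible expression
  putStrLn (showLatex tree)   -- LaTeX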
Module Data.SRTree.Eval

SRMatrix: Matrix of feature values.
PVector: Vector of parameter values. Needs to be strict to be readily accessible.
SRVector: Vector of target values.

evalTree: Evaluates the tree given a matrix of variable values, a vector of parameter values, and a function that takes a Double and converts it to whatever type the variables have. This is useful when working with datasets of many values per variable.

inverseFunc: Returns the inverse of a function. This is a partial function.
evalInverse: evaluates the inverse of a function.
invright: evaluates the right inverse of an operator.
invleft: evaluates the left inverse of an operator.
invertibles: List of invertible functions.

Module Data.SRTree.Datasets

loadMtx: Loads a list of lists of bytestrings to a matrix of doubles.

isGZip: Returns true if the extension is .gz.

detectSep: Detects the separator automatically by checking whether each candidate separator generates the same number of columns in every row and at least two columns.

>>> detectSep ["x1,x2,x3,x4"]
','

readFileToLines: reads a file and returns a list of lists of ByteString corresponding to each element of the matrix. The first row can be a header.

splitFileNameParams: Splits the parameters from the filename. The expected format of the filename is *filename.ext:p1:p2:p3:p4*, where p1 and p2 are the starting and end rows for the training data (by default p1 = 0 and p2 = number of rows - 1), p3 is the target column (either a string corresponding to the header or an index), and p4 is a comma separated list of columns (either index or name) to be used as input variables. These will be renamed internally as x0, x1, ... in the order of this list.

parseVal: Tries to parse a string into an int.

getColumns: Given a map between column names and indices, the target column and the variable columns, returns the indices of the variable columns and the target.

(internal helper): Given the start and end rows, it returns the hmatrix extractors for the training and validation data.

loadDataset: loads a dataset with a filename in the format filename.ext:start_row:end_row:target:features. It returns X_train, y_train, X_test, y_test, varnames, and the target name, where varnames is a comma separated list of the variable names, and:

  *start_row:end_row* is the range of the training rows (default 0:nrows-1); every other row not included in this range will be used as validation.
  *target* is either the name of the target column (if the datafile has headers) or the index of the target variable.
  *features* is a comma separated list of column names or indices to be used as input variables of the regression model.

Module Algorithm.Massiv.Utils

cubicSplineCoefficients: Given a list of (x,y) coordinates, produces a list of coefficients to cubic equations, with knots at each of the initially provided x coordinates. Natural cubic spline interpolation is used. See: http://en.wikipedia.org/wiki/Spline_interpolation#Interpolation_using_natural_cubic_spline.
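To make the specifier format concrete, here are a few illustrative dataset strings in the filename.ext:start_row:end_row:target:features format accepted by loadDataset; the file names are made up:

-- Hypothetical examples of the specifier parsed by splitFileNameParams/loadDataset.
specs :: [String]
specs =
  [ "data.csv:0:99:y:x,z"     -- train on rows 0..99, target column "y", features "x" and "z"
  , "data.csv:0:99:5:0,1,2"   -- the same idea, with column indices instead of header names
  , "data.csv.gz:0:99:y:x,z"  -- gzipped files are detected by the .gz extension (isGZip)
  ]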
Module Algorithm.EqSat.Egraph

encodeEnode/decodeEnode: encode/decode an e-node as an integer; this assumes up to 999 variables and params.
emptyGraph: returns an empty e-graph.
emptyDB: returns an empty e-graph DB.
createEClass: Creates a new e-class from an e-class id, a new e-node, and the info of this e-class.
canonical: gets the canonical id of an e-class.
canonize: canonizes the e-node children.
getEClass: gets an e-class with id c.
trie: Creates a singleton trie from an e-class id.
isConst: Checks whether an e-class is a constant value.

Module Algorithm.EqSat.Queries

findRootClasses: returns all the root e-classes (e-classes without parents).
getTopECLassThat: returns the e-class id with the best fitness that satisfies a predicate.

Module Algorithm.EqSat.Info

joinData: joins data from two e-classes. (TODO: instead of folding, just do not apply rules; keep a list of values instead of a single value.)
makeAnalysis: Calculates e-node data (constant values and cost).
calculateHeights: updates the heights of each e-class; won't work if there's no root.
calculateCost: calculates the cost of a node.
calculateConsts: checks whether an e-node evaluates to a const.

Module Algorithm.EqSat.DB

match: Returns the substitution rules for every match of the pattern inside the e-graph.
compileToQuery: Returns a Query (list of atoms) of a pattern.
getElems: returns the list of the children values.
genericJoin: Creates the substitution map for the pattern variables for each one of the matched subgraphs.
domainX: returns the e-class id for a certain variable that matches the pattern described by the atoms.
intersectAtoms: returns all e-class ids that can match this sequence of atoms.
intersectTries: searches for the intersection of e-class ids that matches each part of the query. Returns Nothing if the intersection is empty. Here var is the current variable being investigated, xs is the map of ids being investigated and their corresponding e-class id, trie is the current trie of the pattern, and (i:ids) is the sequence of root : children of the atom to investigate. NOTE: it must be Maybe Set to differentiate between an empty set and no answer.
updateVar: updates all occurrences of var with the new id x.
isDiffFrom: checks whether two ClassOrVar are different; it only checks if it is a pattern variable, else it returns True.
elemOfAtom: checks if v is an element of an atom.
orderedVars: sorts the variables in a query by the most frequently occurring.

Module Data.SRTree.Derivative

deriveBy: Creates the symbolic partial derivative of a tree by variable dx (if p is False) or parameter dx (if p is True). This uses mutual recursion where the first recursion (alg1) holds the derivative w.r.t. the current node and the second (alg2) holds the original tree.

>>> showExpr . deriveBy False 0 $ 2 * "x0" * "x1"
"(2.0 * x1)"
>>> showExpr . deriveBy True 1 $ 2 * "x0" * "t0" - sqrt ("t1" * "x0")
"(-1.0 * ((1.0 / (2.0 * Sqrt((t1 * x0)))) * x0))"

derivative: Derivative of each supported function. For a function h(f) it returns the derivative dh/df.

>>> derivative Log 2.0
0.5

doubleDerivative: Second-order derivative of the supported functions.

>>> doubleDerivative Log 2.0
-0.25

deriveByVar: Symbolic derivative by a variable.
deriveByParam: Symbolic derivative by a parameter.
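As a quick sanity check on the doctest values above (standard calculus, not taken from the source): for h = Log,

\[
\frac{d}{dx}\log x = \frac{1}{x}\Big|_{x=2} = 0.5,
\qquad
\frac{d^2}{dx^2}\log x = -\frac{1}{x^2}\Big|_{x=2} = -0.25 .
\]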
Module Algorithm.SRTree.AD

(internal helper): gets the value at a certain index if the argument is an array (Left), or returns the value itself if it is a scalar.

forwardMode: Calculates the result of the error vector multiplied by the Jacobian of an expression using forward mode, provided a matrix of variable values xss, a vector of parameter values theta, and a function that changes a Double value to the type of the variable values. It uses unsafe operations to work on a mutable array instead of a tape.

forwardModeUnique: calculates the numerical gradient of the tree and evaluates the tree at the same time. It assumes that each parameter has a unique occurrence in the expression. This should be significantly faster than forwardMode.

reverseModeUnique: Same as above, but using reverse mode, which is even faster.

reverseModeUniqueArr: Same as above, but using reverse mode with the tree encoded as an array, which is even faster.

reverseModeUniqueArr :: SRMatrix -> PVector -> SRVector
                     -> (SRVector -> SRVector)
                     -> Array S Ix1 (Int, Int, Int, Double) -- arity, opcode, ix, const val
                     -> (Array D Ix1 Double, Array S Ix1 Double)

forwardModeUniqueJac: calculates the numerical Jacobian of the tree and evaluates the tree at the same time. It assumes that each parameter has a unique occurrence in the expression. This should be significantly faster than forwardMode.

Module Algorithm.SRTree.Likelihoods

Distribution: Supported distributions for the negative log-likelihood (Gaussian, Bernoulli, Poisson).

sse: Sum-of-square errors or sum-of-square residues.
tot: Total sum-of-squares.
mse: Mean squared error.
rmse: Root of the mean squared error.
r2: Coefficient of determination.
logistic: logistic function.
getSErr: gets the standard error from a Maybe Double; if it is Nothing, estimates it from the SSR, otherwise uses the given value. For distributions other than Gaussian, it defaults to the constant 1.
nll: Negative log-likelihood.
predict: Prediction for the different distributions.
gradNLL: Gradient of the negative log-likelihood.
gradNLLArr: Gradient of the negative log-likelihood, with the tree encoded as an Array B Ix1 (Int, Int, Int, Double).
gradNLLNonUnique: Gradient of the negative log-likelihood without the unique-parameters assumption.
fisherNLL: Fisher information of the negative log-likelihood.
hessianNLL: Hessian of the negative log-likelihood. Note: though the Fisher information is just the diagonal of the return of this function, it is better to keep them as different functions for efficiency.
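For reference (standard statistics, not stated in these docs), the Gaussian case of nll has the familiar least-squares form

\[
-\log \mathcal{L}(\theta, \sigma)
  = \frac{n}{2}\log\!\left(2\pi\sigma^2\right)
  + \frac{1}{2\sigma^2}\sum_{i=1}^{n}\bigl(y_i - \hat{y}_i(\theta)\bigr)^2 ,
\]

so for a fixed sigma, minimizing it coincides with minimizing the sse defined above; this is also why getSErr can estimate the standard error from the SSR when none is supplied.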
Module Algorithm.SRTree.ModelSelection

bic: Bayesian information criterion.
aic: Akaike information criterion.
evidence: Evidence.
mdl: MDL as described in Bartlett, Deaglan J., Harry Desmond, and Pedro G. Ferreira. "Exhaustive symbolic regression." IEEE Transactions on Evolutionary Computation (2023).
mdlLatt: MDL Lattice as described in Bartlett, Deaglan, Harry Desmond, and Pedro Ferreira. "Priors for symbolic regression." Proceedings of the Companion Conference on Genetic and Evolutionary Computation, 2023.
mdlFreq: same as mdl but weighting the functional structure by frequency, calculated using a wiki of physics and engineering functions.

Module Algorithm.EqSat.Build

add: adds a new or existing e-node (merging if necessary).
rebuild: rebuilds the e-graph after inserting or merging e-classes.
repair: repairs an e-node by canonizing its children; if the canonized e-node already exists in the e-graph, merges the e-classes.
repairAnalysis: repairs the analysis of the e-class considering the newly added e-node.
merge: merges two equivalent e-classes.
modifyEClass: modifies an e-class, e.g., adds a constant e-node and prunes non-leaves.
createDB: creates a database of patterns from an e-graph; it simply calls addToDB for every pair (e-node, e-class id) from the e-graph.
addToDB: adds an e-node and e-class id to the database.
populate: Populates an IntTrie with a sequence of e-class ids.
(rule helpers): get the e-node of the target of a rule (TODO: add consts and modify) and add the target of the rule into the e-graph.
isValidConditions: returns True if the condition of a rule is valid for that match.
fromTree: Creates an e-graph from an expression tree.
fromTrees: Builds an e-graph from multiple independent trees.
getBest: gets the best expression given the default cost function.
getExpressionFrom: returns one expression rooted at e-class eId. (TODO: avoid loopings.)
getAllExpressionsFrom: returns all expressions rooted at e-class eId. (TODO: check for infinite lists.)
getRndExpressionFrom: returns a random expression rooted at e-class eId.

Module Algorithm.EqSat

Scheduler: The Scheduler stores a map with the banned iterations of a certain rule. (TODO: make it more customizable.)
eqSat: runs equality saturation from an expression tree, a given set of rules, and a cost function. Returns the tree with the smallest cost.
recalculateBest: recalculates the costs with a new cost function.
runEqSat: runs equality saturation for a number of iterations.
applySingleMergeOnlyEqSat: applies a single step of merge-only equality saturation.
matchWithScheduler: matches the rules given a scheduler.

Module Algorithm.EqSat.Simplify

myCost: default cost function for simplification. It compares length (terminal < nonterminal), then symbols (constants, parameters, variables x0, x10, x2), then op priorities (+, -, *, inv_div, pow, abs, exp, log, log10, sqrt), then univariates. (TODO: num_params.)
simplifyEqSatDefault: simplifies using the default parameters.
simplifyEqSat: simplifies with custom parameters.
applyMergeOnlyDftl: applies a single step of merge-only equality saturation using the default rules.
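A hedged sketch of the simplification entry point; it assumes simplifyEqSatDefault :: Fix SRTree -> Fix SRTree, which these docs imply but do not spell out:

{-# LANGUAGE OverloadedStrings #-}
module Main (main) where

import Data.SRTree
import Data.SRTree.Print (showExpr)
import Algorithm.EqSat.Simplify (simplifyEqSatDefault)

main :: IO ()
main = do
  let tree = ("x0" * 1 + 0) + "x0"  -- a deliberately redundant expression
  -- equality saturation with the default rules should fold the redundancy
  putStrLn (showExpr (simplifyEqSatDefault tree))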
Module Numeric.Optimization.NLOPT.Bindings
(c) Matthew Peddie 2017, BSD3. Stability: provisional. Portability: GHC.

Output: The output of an NLOPT optimizer run: the return code, the minimum of the objective function if optimization succeeded, and the parameters corresponding to the minimum if optimization succeeded.

Preconditioner: This function type corresponds to nlopt_precond in C and is used for functions that precondition a vector at a given point in the parameter space. Its arguments are the parameter vector, the vector v to precondition, the output vector vpre to be filled in, and the user data. You may pass data of any type a to the functions in this module that take a Preconditioner as an argument; this data will be supplied to your function when it is called.

VectorFunction: This function type corresponds to nlopt_mfunc in C and is used for vector functions of the parameter vector. Its arguments are the parameter vector, the output vector to be filled in, the gradient vector to be filled in, and the user data. You may pass data of any type a to the functions in this module that take a VectorFunction as an argument; this data will be supplied to your function when it is called.

ScalarFunction: This function type corresponds to nlopt_func in C and is used for scalar functions of the parameter vector; it returns a scalar result. Its arguments are the parameter vector, the gradient vector to be filled in, and the user data. You may pass data of any type a to the functions in this module that take a ScalarFunction as an argument; this data will be supplied to your function when it is called.

Version: NLOPT library version, e.g. 2.4.2.

Opt: An optimizer object which must be created, configured and then passed to optimize to solve a problem.

Result: Mostly self-explanatory return codes; FAILURE is the generic failure code and SUCCESS the generic success code.

Algorithm: The NLOPT algorithm names, apart from the names of the actual optimization methods, follow this scheme:

  G means a global method
  L means a local method
  D means a method that requires the derivative
  N means a method that does not require the derivative
  *_RAND means the algorithm involves some randomization
  *_NOSCAL means the algorithm is *not* scaled to a unit hypercube (i.e. it is sensitive to the units of x)

  GN_DIRECT: DIviding RECTangles
  GN_DIRECT_L: DIviding RECTangles, locally-biased variant
  GN_DIRECT_L_RAND: DIviding RECTangles, "slightly randomized"
  GN_DIRECT_NOSCAL: DIviding RECTangles, unscaled version
  GN_DIRECT_L_NOSCAL: DIviding RECTangles, locally-biased and unscaled
  GN_DIRECT_L_RAND_NOSCAL: DIviding RECTangles, locally-biased, unscaled and "slightly randomized"
  GN_ORIG_DIRECT: DIviding RECTangles, original FORTRAN implementation
  GN_ORIG_DIRECT_L: DIviding RECTangles, locally-biased, original FORTRAN implementation
  GD_STOGO: Stochastic Global Optimization
  GD_STOGO_RAND: Stochastic Global Optimization, randomized variant
  LD_LBFGS_NOCEDAL: Limited-memory BFGS
  LD_LBFGS: Limited-memory BFGS
  LN_PRAXIS: PRincipal AXIS gradient-free local optimization
  LD_VAR2: Shifted limited-memory variable-metric, rank-2
  LD_VAR1: Shifted limited-memory variable-metric, rank-1
  LD_TNEWTON: Truncated Newton's method
  LD_TNEWTON_RESTART: Truncated Newton's method with automatic restarting
  LD_TNEWTON_PRECOND: Preconditioned truncated Newton's method
  LD_TNEWTON_PRECOND_RESTART: Preconditioned truncated Newton's method with automatic restarting
  GN_CRS2_LM: Controlled Random Search with Local Mutation
  GN_MLSL: Original Multi-Level Single-Linkage
  GD_MLSL: Original Multi-Level Single-Linkage, user-provided derivative
  GN_MLSL_LDS: Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points
  GD_MLSL_LDS: Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points, user-provided derivative
  LD_MMA: Method of Moving Asymptotes
  LN_COBYLA: Constrained Optimization BY Linear Approximations
  LN_NEWUOA: Powell's NEWUOA algorithm
  LN_NEWUOA_BOUND: Powell's NEWUOA algorithm with bounds by SGJ
  LN_NELDERMEAD: Nelder-Mead Simplex gradient-free method
  LN_SBPLX: NLOPT implementation of Rowan's Subplex algorithm
  LN_AUGLAG: AUGmented LAGrangian
  LD_AUGLAG: AUGmented LAGrangian, user-provided derivative
  LN_AUGLAG_EQ: AUGmented LAGrangian with penalty functions only for equality constraints
  LD_AUGLAG_EQ: AUGmented LAGrangian with penalty functions only for equality constraints, user-provided derivative
  LN_BOBYQA: Bounded Optimization BY Quadratic Approximations
  GN_ISRES: Improved Stochastic Ranking Evolution Strategy
  AUGLAG: AUGmented LAGrangian, requires local_optimizer to be set
  AUGLAG_EQ: AUGmented LAGrangian with penalty functions only for equality constraints, requires local_optimizer to be set
  G_MLSL: Original Multi-Level Single-Linkage, user-provided derivative, requires local_optimizer to be set
  G_MLSL_LDS: Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points, requires local_optimizer to be set
  LD_SLSQP: Sequential Least-SQuares Programming
  LD_CCSAQ: Conservative Convex Separable Approximation
  GN_ESCH: Evolutionary Algorithm

create: Create a new Opt object, given the choice of algorithm and the parameter vector dimension.

optimize: This function is very similar to the C function nlopt_optimize, but it does not use mutable vectors and returns an Output structure. It takes the optimizer object, set up to solve the problem, and an initial-guess parameter vector, and returns the results of the optimization run.

set_local_optimizer: takes the primary optimizer and the subsidiary (local) optimizer.
High-level interface
(c) Matthew Peddie 2017, BSD3. Stability: provisional. Portability: GHC.

Solution: This structure is returned in the event of a successful optimization. Its fields are solutionResult (why the optimizer stopped), solutionParams (the parameter vector which minimizes the objective) and solutionCost (the objective function value at the minimum).

AugLagAlgorithm: The Augmented Lagrangian solvers allow you to enforce nonlinear constraints while using local or global algorithms that don't natively support them. The subsidiary problem is used to do the minimization, but the AUGLAG methods modify the objective to enforce the constraints. Please see the NLOPT algorithm manual (http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms) for more details on how the methods work and how they relate to one another. See the IMPORTANT NOTE below about the constraint functions.

  AUGLAG_LOCAL: AUGmented LAGrangian with a local subsidiary method
  AUGLAG_EQ_LOCAL: AUGmented LAGrangian with a local subsidiary method and with penalty functions only for equality constraints
  AUGLAG_GLOBAL: AUGmented LAGrangian with a global subsidiary method
  AUGLAG_EQ_GLOBAL: AUGmented LAGrangian with a global subsidiary method and with penalty functions only for equality constraints

IMPORTANT NOTE: For augmented Lagrangian problems, you, the user, are responsible for providing the appropriate type of constraint. If the subsidiary problem requires an ObjectiveD, then you should provide constraint functions with derivatives. If the subsidiary problem requires an Objective, you should provide constraint functions without derivatives. If you don't do this, you may get a runtime error.

AugLagProblem: the algorithm specification, a possibly empty set of equality constraints with derivatives, and a possibly empty set of equality constraints.

LocalAlgorithm: These are the local minimization algorithms provided by NLOPT. Please see the NLOPT algorithm manual (http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms) for more details on how the methods work and how they relate to one another. Note that some local methods require you to provide derivatives (gradients or Jacobians) for your objective function and constraint functions. Optional parameters are wrapped in a Maybe; for example, if you see Maybe VectorStorage, you can simply specify Nothing to use the default behavior.

  LBFGS_NOCEDAL: Limited-memory BFGS
  LBFGS: Limited-memory BFGS
  VAR2: Shifted limited-memory variable-metric, rank-2
  VAR1: Shifted limited-memory variable-metric, rank-1
  TNEWTON: Truncated Newton's method
  TNEWTON_RESTART: Truncated Newton's method with automatic restarting
  TNEWTON_PRECOND: Preconditioned truncated Newton's method
  TNEWTON_PRECOND_RESTART: Preconditioned truncated Newton's method with automatic restarting
  MMA: Method of Moving Asymptotes
  SLSQP: Sequential Least-Squares Quadratic Programming
  CCSAQ: Conservative Convex Separable Approximation
  PRAXIS: PRincipal AXIS gradient-free local optimization
  COBYLA: Constrained Optimization BY Linear Approximations
  NEWUOA: Powell's NEWUOA algorithm
  NEWUOA_BOUND: Powell's NEWUOA algorithm with bounds by SGJ
  NELDERMEAD: Nelder-Mead Simplex gradient-free method
  SBPLX: NLOPT implementation of Rowan's Subplex algorithm
  BOBYQA: Bounded Optimization BY Quadratic Approximations

LocalProblem: the dimension of the parameter vector, at least one stopping condition, and the algorithm specification.

GlobalAlgorithm: These are the global minimization algorithms provided by NLOPT. Please see the NLOPT algorithm manual (http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms) for more details on how the methods work and how they relate to one another. Optional parameters are wrapped in a Maybe; for example, if you see Maybe Population, you can simply specify Nothing to use the default behavior.
  DIRECT: DIviding RECTangles
  DIRECT_L: DIviding RECTangles, locally-biased variant
  DIRECT_L_RAND: DIviding RECTangles, "slightly randomized"
  DIRECT_NOSCAL: DIviding RECTangles, unscaled version
  DIRECT_L_NOSCAL: DIviding RECTangles, locally-biased and unscaled
  DIRECT_L_RAND_NOSCAL: DIviding RECTangles, locally-biased, unscaled and "slightly randomized"
  ORIG_DIRECT: DIviding RECTangles, original FORTRAN implementation
  ORIG_DIRECT_L: DIviding RECTangles, locally-biased, original FORTRAN implementation
  STOGO: Stochastic Global Optimization. This algorithm is only available if you have linked with libnlopt_cxx.
  STOGO_RAND: Stochastic Global Optimization, randomized variant. This algorithm is only available if you have linked with libnlopt_cxx.
  CRS2_LM: Controlled Random Search with Local Mutation
  ISRES: Improved Stochastic Ranking Evolution Strategy
  ESCH: Evolutionary Algorithm
  MLSL: Original Multi-Level Single-Linkage
  MLSL_LDS: Multi-Level Single-Linkage with Sobol Low-Discrepancy Sequence for starting points

GlobalProblem: the lower bounds for x, the upper bounds for x, at least one stopping condition, and the algorithm specification.

VectorStorage: This specifies the memory size to be used by algorithms like LBFGS which store approximate Hessian or Jacobian matrices.

Population: This specifies the population size for algorithms that use a pool of solutions.

RandomSeed: This specifies how to initialize the random number generator for stochastic algorithms:

  SeedValue: Seed the RNG with the provided value.
  SeedFromTime: Seed the RNG using the system clock.
  Don'tSeed: Don't perform any explicit initialization of the RNG.

StoppingCondition: A StoppingCondition tells NLOPT when to stop working on a minimization problem. When multiple StoppingConditions are provided, the problem will stop when any one condition is met.

  MinimumValue: Stop minimizing when an objective value J less than or equal to the provided value is found.
  ObjectiveRelativeTolerance: Stop minimizing when an optimization step changes the objective value J by less than the provided tolerance multiplied by |J|.
  ObjectiveAbsoluteTolerance: Stop minimizing when an optimization step changes the objective value by less than the provided tolerance.
  ParameterRelativeTolerance: Stop when an optimization step changes every element of the parameter vector x by less than x scaled by the provided tolerance.
  ParameterAbsoluteTolerance: Stop when an optimization step changes every element of the parameter vector x by less than the corresponding element in the provided vector of tolerance values.
  MaximumEvaluations: Stop when the number of evaluations of the objective function exceeds the provided count.
  MaximumTime: Stop when the optimization time exceeds the provided time (in seconds). This is not a precise limit.

Bounds: Bound constraints are specified by vectors of the same dimension as the parameter space.
Example program: The following interactive session example enforces lower bounds on the example from the beginning of the module. This prevents the optimizer from locating the true minimum at (0, 0); a slightly higher constrained minimum at (1, 1) is found. Note that the optimizer returns XTOL_REACHED rather than FTOL_REACHED, because the bound constraint is active at the final minimum.

>>> import Numeric.LinearAlgebra ( dot, fromList )
>>> let objf x = x `dot` x + 22                          -- define objective
>>> let stop = ObjectiveRelativeTolerance 1e-6 :| []     -- define stopping criterion
>>> let lowerbound = LowerBounds $ fromList [1, 1]       -- specify bounds
>>> let algorithm = NELDERMEAD objf [lowerbound] Nothing -- specify algorithm
>>> let problem = LocalProblem 2 stop algorithm          -- specify problem
>>> let x0 = fromList [5, 10]                            -- specify initial guess
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 24.0, solutionParams = [1.0,1.0], solutionResult = XTOL_REACHED})

LowerBounds: a lower bound vector v means we want x >= v.
UpperBounds: an upper bound vector u means we want x <= u.

InequalityConstraintsD: A collection of inequality constraints that supply constraint derivatives.
EqualityConstraintsD: A collection of equality constraints that supply constraint derivatives.
InequalityConstraints: A collection of inequality constraints that do not supply constraint derivatives.
EqualityConstraints: A collection of equality constraints that do not supply constraint derivatives.

InequalityConstraint: An inequality constraint, comprised of both the constraint function (or functions, if a preconditioner is used) along with the desired tolerance.
EqualityConstraint: An equality constraint, comprised of both the constraint function (or functions, if a preconditioner is used) along with the desired tolerance.

Constraint: Scalar is a scalar constraint, Vector a vector constraint, and Preconditioned a scalar constraint with an attached preconditioning function.

VectorConstraintD: A constraint function which returns a vector c(x) given the parameter vector x, along with the Jacobian (first derivative) matrix of c(x) with respect to x at that point. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).

VectorConstraint: A constraint function which returns a vector c(x) given the parameter vector x. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).

ScalarConstraintD: A constraint function which returns c(x) given the parameter vector x, along with the gradient of c(x) with respect to x at that point. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).

ScalarConstraint: A constraint function which returns c(x) given the parameter vector x. The constraint will enforce that c(x) == 0 (equality constraint) or c(x) <= 0 (inequality constraint).

Preconditioner: A preconditioner function, which computes vpre = H(x) v, where H is the Hessian matrix: the positive semi-definite second derivative at the given parameter vector x, or an approximation thereof.

ObjectiveD: An objective function that calculates both the objective value and the gradient of the objective with respect to the input parameter vector, at the given parameter vector.

Objective: An objective function that calculates the objective value at the given parameter vector.

minimizeGlobal: Solve the specified global optimization problem.

Example program: The following interactive session example uses the ISRES algorithm, a stochastic, derivative-free global optimizer, to minimize a trivial function with a minimum of 22.0 at (0, 0). The search is conducted within a box from -10 to 10 in each dimension.
>>> import Numeric.LinearAlgebra ( dot, fromList )
>>> let objf x = x `dot` x + 22                             -- define objective
>>> let stop = ObjectiveRelativeTolerance 1e-12 :| []       -- define stopping criterion
>>> let algorithm = ISRES objf [] [] (SeedValue 22) Nothing -- specify algorithm
>>> let lowerbounds = fromList [-10, -10]                   -- specify bounds
>>> let upperbounds = fromList [10, 10]                     -- specify bounds
>>> let problem = GlobalProblem lowerbounds upperbounds stop algorithm
>>> let x0 = fromList [5, 8]                                -- specify initial guess
>>> minimizeGlobal problem x0
Right (Solution {solutionCost = 22.000000000002807, solutionParams = [-1.660591102367038e-6,2.2407062393213684e-7], solutionResult = FTOL_REACHED})

minimizeLocal: Solve the specified local optimization problem.

Example program: The following interactive session example enforces the same scalar constraint as the nonlinear constraint example, but this time it uses the SLSQP solver to find the minimum.

>>> import Numeric.LinearAlgebra ( dot, fromList, toList, scale )
>>> let objf x = (x `dot` x + 22, 2 `scale` x)
>>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
>>> let constraintf x = (sum (toList x) - 1.0, fromList [1, 1])
>>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
>>> let algorithm = SLSQP objf [] [] [constraint]
>>> let problem = LocalProblem 2 stop algorithm
>>> let x0 = fromList [5, 10]
>>> minimizeLocal problem x0
Right (Solution {solutionCost = 22.5, solutionParams = [0.4999999999999998,0.5000000000000002], solutionResult = FTOL_REACHED})

minimizeAugLag: Solve the specified augmented Lagrangian problem.

Example program: The following interactive session example enforces the same scalar constraint as the nonlinear constraint example, but this time it uses the augmented Lagrangian method to enforce the constraint and the SBPLX algorithm, which does not support nonlinear constraints itself, to perform the minimization. As before, the parameters must always sum to 1, and the minimizer finds the same constrained minimum of 22.5 at (0.5, 0.5).
>>> import Numeric.LinearAlgebra ( dot, fromList, toList )
>>> let objf x = x `dot` x + 22
>>> let stop = ObjectiveRelativeTolerance 1e-9 :| []
>>> let algorithm = SBPLX objf [] Nothing
>>> let subproblem = LocalProblem 2 stop algorithm
>>> let x0 = fromList [5, 10]
>>> minimizeLocal subproblem x0
Right (Solution {solutionCost = 22.0, solutionParams = [0.0,0.0], solutionResult = FTOL_REACHED})
>>> -- define constraint function:
>>> let constraintf x = sum (toList x) - 1.0
>>> -- define constraint object to pass to the algorithm:
>>> let constraint = EqualityConstraint (Scalar constraintf) 1e-6
>>> let problem = AugLagProblem [constraint] [] (AUGLAG_EQ_LOCAL subproblem)
>>> minimizeAugLag problem x0
Right (Solution {solutionCost = 22.500000015505844, solutionParams = [0.5000880506776678,0.4999119493223323], solutionResult = FTOL_REACHED})

Argument conventions for the callbacks: a VectorConstraintD maps the parameter vector to the pair (constraint violation vector, constraint Jacobian); a VectorConstraint maps it to the constraint violation vector; a ScalarConstraintD maps it to the pair (constraint violation, constraint gradient); a ScalarConstraint maps the parameter vector x to the constraint violation (deviation from 0); a Preconditioner maps the parameter vector x and a vector v to precondition at x to the preconditioned vector vpre; an ObjectiveD maps the parameter vector to the pair (objective function value, gradient); and an Objective maps the parameter vector to the objective function value. Each minimizer takes a problem specification and an initial parameter guess and returns the optimization results.

Module Algorithm.SRTree.Opt

minimizeNLL: minimizes the negative log-likelihood of the expression.
minimizeNLLNonUnique: minimizes the likelihood assuming repeated parameters in the expression.
minimizeNLLWithFixedParam: minimizes the function while keeping the parameter ix fixed (used to calculate the profile).
minimizeGaussian: minimizes using the Gaussian likelihood.
minimizeBinomial: minimizes using the Binomial likelihood.
minimizePoisson: minimizes using the Poisson likelihood.

Module Algorithm.SRTree.ConfidenceIntervals

CI: a confidence interval is composed of the point estimate (_est_), lower bound (_lower_) and upper bound (_upper_).
BasicStats: Basic stats of the data: covariance of the parameters, correlation, and standard errors.
CIType: Confidence interval using either the Laplace approximation or the profile likelihood.
PType: profile likelihood algorithms: Bates (classical), ODE (faster), Constrained (fastest). The Constrained approach returns only the endpoints.
paramCI: Calculates the confidence interval of the parameters using the Laplace approximation or the profile likelihood.
predictionCI: calculates the prediction confidence interval using the Laplace approximation or the profile likelihood.
Module Text.ParseSR

Output: Supported output formats.
SRAlgs: Supported algorithms.

ParseTree: Parser of a symbolic regression tree with Int variable indices and numerical values represented as Double. The numerical values type can be changed with fmap.

showOutput: Returns the corresponding function from Data.SRTree.Print for a given Output.

parseSR: Calls the corresponding parser for a given SRAlgs.

>>> fmap (showOutput MATH) $ parseSR OPERON "lambda,theta" False "lambda ^ 2 - sin(theta*3*lambda)"
Right "((x0 ^ 2.0) - Sin(((x1 * 3.0) * x0)))"

binary: Creates a parser for a binary operator.
prefix: Creates a parser for a unary function.
parens: Envelopes the parser in parens.

parseExpr: Parses an expression using a user-defined parser given by the lists containing the names of the functions and operators of that SR algorithm, a list of parsers binFuns for binary functions, a parser for variables, a boolean indicating whether to change floating point values to free parameters, and a list of variable names with their corresponding indices.

intOrDouble: Tries to parse as an Int; if it fails, parses as a Double.
aq: analytic quotient.
binFun: Parser for binary functions.

parseTIR: parser for Transformation-Interaction-Rational.
parseOperon: parser for Operon.
parseHL: parser for HeuristicLab.
parseBingo: parser for Bingo.
parseGOMEA: parser for GOMEA.
parsePySR: parser for PySR.

Module Text.ParseSR.IO

withInput: given a filename, the symbolic regression algorithm, a string of variable names, and two booleans indicating whether to convert float values to parameters and whether to simplify the expression or not, it reads the file and parses everything, returning a list of either an error message or a tree. An empty filename defaults to stdin.

withOutput: outputs a list of either errors or trees to a file using the Output format. An empty filename defaults to stdout.

withOutputDebug: debug version of the output function to check the invalid parsers.
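A hypothetical end-to-end pipeline built from the two functions above; the argument order follows the prose descriptions and should be checked against the real signatures:

module Main (main) where

import Text.ParseSR    (Output (..), SRAlgs (..))
import Text.ParseSR.IO (withInput, withOutput)

main :: IO ()
main = do
  -- parse Operon expressions with variables "lambda" and "theta";
  -- don't convert floats to parameters, don't simplify
  exprs <- withInput "exprs.operon" OPERON "lambda,theta" False False
  -- write the parsed trees to stdout (empty filename) in numpy notation
  withOutput "" PYTHON exprs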
!"srtree-2.0.0.2-inplaceAlgorithm.SRTree.NonlinearOptData.SRTree.RecursionData.SRTree.InternalData.SRTree.RandomData.SRTree.PrintData.SRTree.EvalData.SRTree.DatasetsAlgorithm.Massiv.UtilsAlgorithm.EqSat.EgraphAlgorithm.EqSat.QueriesAlgorithm.EqSat.InfoAlgorithm.EqSat.DBData.SRTree.DerivativeAlgorithm.SRTree.ADAlgorithm.SRTree.LikelihoodsAlgorithm.SRTree.ModelSelectionAlgorithm.EqSat.BuildAlgorithm.EqSatAlgorithm.EqSat.Simplify#Numeric.Optimization.NLOPT.BindingsAlgorithm.SRTree.Opt$Algorithm.SRTree.ConfidenceIntervals Text.ParseSRText.ParseSR.IOsrtree Data.SRTree32N XTOL_REACHED FTOL_REACHED Paths_srtree ghc-internalGHC.Internal.BaseNonEmpty:|FreeRetOpCofree:< CoAlgebraAlgebraFixunfixTreeFLeafFNodeFStreamFNatFZeroFSuccFListFNilFConsFextractunOpcatacataManahyloparamutuapoaccuhistofutuchronofromListtoList stream2listtoNatfromNat$fFunctorTreeF$fFunctorStreamF $fFunctorNatF$fFunctorListFFunctionIdAbsSinCosTanSinhCoshTanhASinACosATanASinhACoshATanhSqrtSqrtAbsCbrtSquareLogLogAbsExpRecipCubeAddSubMulDivPowerPowerAbsAQSRTreeVarParamConstUniBinvarparamconstvarity getChildren childrenOfreplaceChildren getOperator countNodes countVarNodes countParams countConstscountOccurrencescountUniqueTokens numberOfVars getIntConsts relabelParams relabelVars constsToParamfloatConstsToParam paramsToConst$fTraversableSRTree$fFoldableSRTree $fFloatingFix$fFractionalFix$fNumFix $fIsStringFix $fShowSRTree $fEqSRTree $fOrdSRTree$fFunctorSRTree$fShowFunction$fReadFunction $fEqFunction $fOrdFunction$fEnumFunction$fShowOp$fReadOp$fEqOp$fOrdOp$fEnumOpRndTree FullParamsP HasEverythingHasFunsHasValsHasVars randomVar randomConst randomPowrandomFunction randomNoderandomNonTerminal randomTreerandomTreeBalanced$fHasFunsFullParams$fHasExpsFullParams$fHasValsFullParams$fHasVarsFullParamsshowExprshowExprWithVars printExprprintExprWithVars showPython printPython showLatex printLatexshowTikz printTikzSRMatrixPVectorSRVectorcompMode replicateAsevalTreeevalOpevalFuncbrt inverseFunc evalInverseinvrightinvleft invertibles$fFractionalArray$fFloatingArray $fNumArray loadDatasetPolyCosNegDef MMassArraygetRowsgetCols appendRow appendColupdateSlinSpaceouterdetdetCholrangedLinearDotProdcholeskyinvChollu forwardSub backwardSubluSolvecubicSplineCoefficientschunkBy genSplineFun$fExceptionNegDef $fShowNegDefIntTrie_trie_keysDB EClassDataEData_size_theta_fitness_consts_best_costPropertyPositiveNegativeNonZeroRealConstsNotConstParamIxConstValEClass_info_height_parents_eNodes _eClassIdEGraphDBEDB_nextId _unevaluated _sizeFitDB_sizeDB _fitRangeDB_patDB _analysis _worklistEGraph_eDB_eClass_eNodeToEClass _canonicalMap RangeTreeCostFunCostEGraphSTENodeEncENode ClassIdMapEClassId encodeEnode decodeEnode insertRange removeRangegetWithinRange getSmallest getGreatest$fHashableSRTree$fEqEClassData $fShowIntTrie $fShowEGraph$fShowEGraphDB $fShowEClass $fEqEClass$fShowEClassData$fShowProperty $fEqProperty $fShowConsts $fEqConsts canonicalMapeClasseDB eNodeToEClasseClassIdeNodesheightinfoparentsbestconstscostfitnesssizethetaanalysis fitRangeDBnextIdpatDBsizeDB sizeFitDB unevaluatedworklist emptyGraphemptyDB createEClass canonicalcanonize getEClasstrieisConstgetEClassesThat updateFitnessfindRootClassesgetTopECLassThatgetTopECLassWithSizejoinData makeAnalysisgetChildrenMinHeightcalculateHeights calculateCostcalculateConsts combineConsts insertFitnessAtom ClassOrVar ConditionQueryRule:=>:==:PatternFixedVarPatunFixPattargetsource getConditionscleanDBmatchcompileToQuerygetIntgetElems genericJoindomainXintersectAtomsintersectTries updateVar isDiffFrom elemOfAtom 
orderedVars$fFloatingPattern$fFractionalPattern $fNumPattern$fIsStringPattern $fShowRule $fShowAtom $fShowPattern derivativedoubleDerivative deriveByVar deriveByParam forwardModeforwardModeUniquereverseModeUniquereverseModeUniqueArrforwardModeUniqueJac$fFunctorTupleF DistributionGaussian BernoulliPoissonssemsermser2getSErrnllpredictgradNLL gradNLLArrgradNLLNonUnique fisherNLL hessianNLL$fShowDistribution$fReadDistribution$fEnumDistribution$fBoundedDistributionbicaicevidencemdlmdlLattmdlFreq logFunctionallogFunctionalFreq logParameterslogParametersLattnll' treeToNataddrebuildrepairrepairAnalysismerge modifyEClasscreateDBaddToDBpopulate canonizeMap applyMatchapplyMergeOnlyMatch classOfENodereprPrat isValidHeightisValidConditionsfromTree fromTreesgetBestgetExpressionFromgetAllExpressionsFromgetRndExpressionFrom cleanMaps forceStateCostMap SchedulerfromJusteqSatrecalculateBestrunEqSatapplySingleMergeOnlyEqSatmatchWithScheduler rewriteBasic rewritesFunrewritessimplifyEqSatDefaultapplyMergeOnlyDftlOutputresultParameters resultCost resultCodePreconditionerFunctionVectorFunctionScalarFunctionVersionbugfixminormajorOptResultFAILURE INVALID_ARGS OUT_OF_MEMORYROUNDOFF_LIMITED FORCED_STOPSUCCESSSTOPVAL_REACHEDMAXEVAL_REACHEDMAXTIME_REACHED Algorithm GN_DIRECT GN_DIRECT_LGN_DIRECT_L_RANDGN_DIRECT_NOSCALGN_DIRECT_L_NOSCALGN_DIRECT_L_RAND_NOSCALGN_ORIG_DIRECTGN_ORIG_DIRECT_LGD_STOGO GD_STOGO_RANDLD_LBFGS_NOCEDALLD_LBFGS LN_PRAXISLD_VAR2LD_VAR1 LD_TNEWTONLD_TNEWTON_RESTARTLD_TNEWTON_PRECONDLD_TNEWTON_PRECOND_RESTART GN_CRS2_LMGN_MLSLGD_MLSL GN_MLSL_LDS GD_MLSL_LDSLD_MMA LN_COBYLA LN_NEWUOALN_NEWUOA_BOUND LN_NELDERMEADLN_SBPLX LN_AUGLAG LD_AUGLAG LN_AUGLAG_EQ LD_AUGLAG_EQ LN_BOBYQAGN_ISRESAUGLAG AUGLAG_EQG_MLSL G_MLSL_LDSLD_SLSQPLD_CCSAQGN_ESCHalgorithm_name isSuccesscreatedestroycopysrand srand_timeversion get_algorithm get_dimensionoptimizeset_min_objectiveset_max_objectiveset_precond_min_objectiveset_precond_max_objectiveset_lower_boundsset_lower_bounds1get_lower_boundsset_upper_boundsset_upper_bounds1get_upper_boundsremove_inequality_constraintsadd_inequality_constraint!add_precond_inequality_constraintadd_inequality_mconstraintremove_equality_constraintsadd_equality_constraintadd_precond_equality_constraintadd_equality_mconstraint set_stopval get_stopval set_ftol_rel get_ftol_rel set_ftol_abs get_ftol_abs set_xtol_rel get_xtol_rel set_xtol_abs1 set_xtol_abs get_xtol_abs set_maxeval get_maxeval set_maxtime get_maxtime force_stopset_force_stopget_force_stopset_local_optimizerset_populationget_populationset_vector_storageget_vector_storageset_default_initial_stepset_initial_stepset_initial_step1get_initial_step$fEnumAlgorithm $fEnumResult $fEqVersion $fOrdVersion $fReadVersion $fShowVersion $fEqResult $fReadResult $fShowResult$fBoundedResult $fEqAlgorithm$fShowAlgorithm$fReadAlgorithm$fBoundedAlgorithmSolutionsolutionResultsolutionParams solutionCostAugLagAlgorithm AUGLAG_LOCALAUGLAG_EQ_LOCAL AUGLAG_GLOBALAUGLAG_EQ_GLOBAL AugLagProblem alalgorithm alEqualityD alEqualityLocalAlgorithm LBFGS_NOCEDALLBFGSVAR2VAR1TNEWTONTNEWTON_RESTARTTNEWTON_PRECONDTNEWTON_PRECOND_RESTARTMMASLSQPCCSAQPRAXISCOBYLANEWUOA NEWUOA_BOUND NELDERMEADSBPLXBOBYQA LocalProblem lalgorithmlstoplsizeGlobalAlgorithmDIRECTDIRECT_L DIRECT_L_RAND DIRECT_NOSCALDIRECT_L_NOSCALDIRECT_L_RAND_NOSCAL ORIG_DIRECT ORIG_DIRECT_LSTOGO STOGO_RANDCRS2_LMISRESESCHMLSLMLSL_LDS GlobalProblem galgorithmgstop upperBounds lowerBounds InitialStep VectorStorage Population RandomSeed SeedValue SeedFromTime Don'tSeedStoppingCondition 
MinimumValueObjectiveRelativeToleranceObjectiveAbsoluteToleranceParameterRelativeToleranceParameterAbsoluteToleranceMaximumEvaluations MaximumTimeBounds LowerBounds UpperBoundsInequalityConstraintsDEqualityConstraintsDInequalityConstraintsEqualityConstraintsInequalityConstraintineqConstraintToleranceineqConstraintFunctionsEqualityConstrainteqConstraintToleranceeqConstraintFunctions ConstraintScalarVectorPreconditionedVectorConstraintDVectorConstraintScalarConstraintDScalarConstraintPreconditioner ObjectiveD ObjectiveminimizeGlobal minimizeLocalminimizeAugLag%$fApplyConstraintInequalityConstraint#$fApplyConstraintEqualityConstraint&$fApplyConstraintInequalityConstraint0$$fApplyConstraintEqualityConstraint0$fExceptionNloptException$fProblemSizeGlobalProblem$fProblemSizeLocalProblem$fProblemSizeAugLagProblem $fEqSolution$fShowSolution$fReadSolution$fShowNloptException$fEqInitialStep$fShowInitialStep$fReadInitialStep$fEqVectorStorage$fShowVectorStorage$fReadVectorStorage$fEqPopulation$fShowPopulation$fReadPopulation$fEqRandomSeed$fShowRandomSeed$fReadRandomSeed$fEqStoppingCondition$fShowStoppingCondition$fReadStoppingCondition $fEqBounds $fShowBounds $fReadBoundstree2arr minimizeNLLminimizeNLLNonUniqueminimizeNLLWithFixedParamminimizeGaussianminimizeBinomialminimizePoisson estimateSErrProfileT _theta2tau _tau2theta_opt_thetas_tausCIupper_lower_est_ BasicStatsMkStats_stdErr_corr_covCITypeLaplaceProfilePTypeBatesODE ConstrainedshowCIprintCIparamCI predictionCI inverseDist replaceParam0evalVar calcTheta0getAllProfiles getProfilegetProfileCnstr getEndPoint getProfileODErkgetStatsFromModel createSplinesgetCol sortOnFirstsplinesSketchesapproximateContour$fEqCI$fShowCI$fReadCI$fEqBasicStats$fShowBasicStats $fShowPType $fReadPTypePYTHONMATHTIKZLATEXSRAlgsTIRHLOPERONBINGOGOMEAPYSRSBPEPLEX showOutputparseSR $fShowOutput $fReadOutput $fEnumOutput$fBoundedOutput $fShowSRAlgs $fReadSRAlgs $fEnumSRAlgs$fBoundedSRAlgs withInput withOutputwithOutputDebugGHC.Internal.Data.StringIsStringrandom-1.2.1.2-0abcd3077dd67d4c5bebbef4d850bb4b6512f7573e1d6c8f76bc7d0a4d3f6f97System.Random.InternalStdGenHasExpsremoveProtectionloadMtxisGZip detectSepreadFileToLinessplitFileNameParamsparseVal getColumnsderiveByghc-prim GHC.TypesFalseTrue!??sseTotlogisticmyCost simplifyEqSatGHC.Internal.MaybeMaybeNothing getBinDir getDataDirgetDataFileName getDynLibDir getLibDir getLibexecDir getSysconfDir ParseTreeIntDoublefmapbinaryprefixparens parseExprattoparsec-expr-0.1.1.2-f00fe7331ba08c38745989acd2b7dc3d82f3d3444513c7165d0c4e203573bff4Data.Attoparsec.ExprOperator intOrDoubleaqbinFunparseTIR parseOperonparseHL parseBingo parseGOMEA parsePySR