Data.Eigen.Internal

CComplex: Complex number for FFI with the same memory layout as std::complex<T>.

Data.Eigen.Matrix.Mutable

MMatrixXcd: Alias for double precision mutable matrix of complex numbers.
MMatrixXcf: Alias for single precision mutable matrix of complex numbers.
MMatrixXd: Alias for double precision mutable matrix.
MMatrixXf: Alias for single precision mutable matrix.
MMatrix: Mutable matrix. You can modify elements.
valid: Verify matrix dimensions and memory layout.
new: Create a mutable matrix of the given size and fill it with 0 as an initial value.
replicate: Create a mutable matrix of the given size and fill it with the given value as an initial value.
set: Set all elements of the matrix to the given value.
copy: Copy a matrix. The two matrices must have the same size and may not overlap.
read: Yield the element at the given position.
write: Replace the element at the given position.
unsafeCopy: Copy a matrix. The two matrices must have the same size and may not overlap; however, no bounds check is performed, so it may SEGFAULT on incorrect input.
unsafeRead: Yield the element at the given position. No bounds checks are performed.
unsafeWrite: Replace the element at the given position. No bounds checks are performed.
unsafeWith: Pass a pointer to the matrix's data to the IO action. Modifying data through the pointer is unsafe if the matrix could have been frozen before the modification.

Data.Eigen.SparseMatrix.Mutable

IOSparseMatrixXcd: Alias for double precision mutable sparse matrix of complex numbers.
IOSparseMatrixXcf: Alias for single precision mutable sparse matrix of complex numbers.
IOSparseMatrixXd: Alias for double precision mutable sparse matrix.
IOSparseMatrixXf: Alias for single precision mutable sparse matrix.
IOSparseMatrix: Mutable version of sparse matrix. See Data.Eigen.SparseMatrix for details about matrix layout.
new: Creates a new matrix with the given size rows x cols.
rows: Returns the number of rows of the matrix.
cols: Returns the number of columns of the matrix.
innerSize: Returns the number of rows (resp. columns) of the matrix if the storage order is column major (resp. row major).
outerSize: Returns the number of columns (resp. rows) of the matrix if the storage order is column major (resp. row major).
compressed: Returns whether this matrix is in compressed form.
compress: Turns the matrix into the compressed format.
uncompress: Turns the matrix into the uncompressed mode.
read: Reads the value of the matrix at position i, j. This function returns Scalar(0) if the element is an explicit zero.
write: Writes the value of the matrix at position i, j. This function turns the matrix into uncompressed mode if that was not already the case. This is an O(log(nnz_j)) operation (binary search) plus the cost of element insertion if the element does not already exist. The cost of element insertion is O(1) (sorted insertion) if the elements of each inner vector are inserted in increasing inner index order, and O(nnz_j) for a random insertion.
setIdentity: Sets the matrix to the identity matrix.
setZero: Removes all non-zeros but keeps the allocated memory.
nonZeros: The number of non-zero coefficients.
reserve: Preallocates space for non-zeros. The matrix must be in compressed mode.
resize: Resizes the matrix to a rows x cols matrix and initializes it to zero.
conservativeResize: Resizes the matrix to a rows x cols matrix leaving old values untouched.
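The mutable-matrix API above (new, read, write) can be sketched in plain Haskell with an IOArray; this is a hypothetical stand-in for illustration, not the eigen bindings (read' is renamed to avoid the Prelude clash):

```haskell
import Data.Array.IO

type MMatrix = IOArray (Int, Int) Double

-- Create a rows x cols matrix filled with 0, as 'new' does.
new :: Int -> Int -> IO MMatrix
new rows cols = newArray ((0, 0), (rows - 1, cols - 1)) 0

-- Replace the element at the given position, as 'write' does.
write :: MMatrix -> Int -> Int -> Double -> IO ()
write m r c = writeArray m (r, c)

-- Yield the element at the given position, as 'read' does.
read' :: MMatrix -> Int -> Int -> IO Double
read' m r c = readArray m (r, c)

main :: IO ()
main = do
  m <- new 2 2
  write m 0 1 3.5
  v <- read' m 0 1
  print v
```

Unlike unsafeRead/unsafeWrite in the real module, IOArray access here is bounds-checked.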
Data.Eigen.Parallel

setNbThreads: Sets the max number of threads reserved for Eigen.
getNbThreads: Gets the max number of threads reserved for Eigen.

Data.Eigen.Matrix

TriangularMode:
Lower: View matrix as a lower triangular matrix.
Upper: View matrix as an upper triangular matrix.
StrictlyLower: View matrix as a lower triangular matrix with zeros on the diagonal.
StrictlyUpper: View matrix as an upper triangular matrix with zeros on the diagonal.
UnitLower: View matrix as a lower triangular matrix with ones on the diagonal.
UnitUpper: View matrix as an upper triangular matrix with ones on the diagonal.

MatrixXcd: Alias for double precision matrix of complex numbers.
MatrixXcf: Alias for single precision matrix of complex numbers.
MatrixXd: Alias for double precision matrix.
MatrixXf: Alias for single precision matrix.
Matrix: Matrix to be used in pure computations. Uses column major memory layout and features copy-free FFI with the C++ Eigen library (http://eigen.tuxfamily.org).
encode: Encode the matrix as a lazy byte string.
decode: Decode matrix from the lazy byte string.
empty: Empty 0x0 matrix.
null: Is matrix empty?
square: Is matrix square?
constant: Matrix where all coeffs are filled with the given value.
zero: Matrix where all coeffs are 0.
ones: Matrix where all coeffs are 1.
identity: The identity matrix (not necessarily square).
random: The random matrix of a given size.
rows: Number of rows for the matrix.
cols: Number of columns for the matrix.
dims: Matrix size as a (rows, cols) pair.
coeff: Matrix coefficient at specific row and col.
(!): Matrix coefficient at specific row and col.
unsafeCoeff: Unsafe version of the coeff function. No bounds check is performed, so a SEGFAULT is possible.
col: List of coefficients for the given col.
row: List of coefficients for the given row.
block: Extract a rectangular block from the matrix, defined by startRow startCol blockRows blockCols.
valid: Verify matrix dimensions and memory layout.
maxCoeff: The maximum coefficient of the matrix.
minCoeff: The minimum coefficient of the matrix.
topRows: Top N rows of matrix.
bottomRows: Bottom N rows of matrix.
leftCols: Left N columns of matrix.
rightCols: Right N columns of matrix.
fromList: Construct matrix from a list of rows; the column count is detected as the maximum row length. Missing values are filled with 0.
toList: Convert matrix to a list of rows.
generate: generate rows cols (\row col -> val) creates a matrix using the generator function \row col -> val.
sum: The sum of all coefficients of the matrix.
prod: The product of all coefficients of the matrix.
mean: The mean of all coefficients of the matrix.
trace: The trace of a matrix is the sum of the diagonal coefficients, and can also be computed as sum (diagonal m).
all: Applied to a predicate and a matrix, all determines if all elements of the matrix satisfy the predicate.
any: Applied to a predicate and a matrix, any determines if any element of the matrix satisfies the predicate.
count: Returns the number of coefficients in a given matrix that evaluate to true.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases it is the square root of the sum of the squares of all the matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases it is the sum of the squares of all the matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
hypotNorm: The l2 norm of the matrix avoiding underflow and overflow. This version uses a concatenation of hypot calls, and it is very slow.
determinant: The determinant of the matrix.
add: Adding two matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtracting two matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication.
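The reduction semantics above (squaredNorm, the Frobenius norm, trace) can be sketched in plain Haskell over the row-list representation that toList produces; these are hypothetical helpers for illustration, not the eigen implementations:

```haskell
-- Sum of squares of all entries (squared Frobenius norm for matrices).
squaredNorm :: [[Double]] -> Double
squaredNorm = sum . map (^ 2) . concat

-- Frobenius norm: square root of the sum of squares.
norm :: [[Double]] -> Double
norm = sqrt . squaredNorm

-- Trace: sum of the diagonal coefficients.
trace :: [[Double]] -> Double
trace m = sum [row !! i | (row, i) <- zip m [0 ..], i < length row]

main :: IO ()
main = do
  let a = [[1, 2], [3, 4]]
  print (squaredNorm a)  -- 30.0
  print (trace a)        -- 5.0
```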
You can use the (*) function as well.
map: Apply a given function to each element of the matrix. Here is an example of how to implement scalar matrix multiplication:

>>> let a = fromList [[1,2],[3,4]] :: MatrixXf
>>> a
Matrix 2x2
1.0 2.0
3.0 4.0
>>> map (*10) a
Matrix 2x2
10.0 20.0
30.0 40.0

imap: Apply a given function to each element of the matrix. Here is an example of how an upper triangular matrix can be implemented:

>>> let a = fromList [[1,2,3],[4,5,6],[7,8,9]] :: MatrixXf
>>> a
Matrix 3x3
1.0 2.0 3.0
4.0 5.0 6.0
7.0 8.0 9.0
>>> imap (\row col val -> if row <= col then val else 0) a
Matrix 3x3
1.0 2.0 3.0
0.0 5.0 6.0
0.0 0.0 9.0

triangularView: Triangular view extracted from the current matrix.
lowerTriangle: Lower triangle of the matrix. Shortcut for triangularView Lower.
upperTriangle: Upper triangle of the matrix. Shortcut for triangularView Upper.
filter: Filter elements in the matrix. Filtered elements will be replaced by 0.
ifilter: Filter elements in the matrix by index and value. Filtered elements will be replaced by 0.
fold: Reduce the matrix using a user provided function applied to each element.
fold': Reduce the matrix using a user provided function applied to each element. This is the strict version of fold.
ifold: Reduce the matrix using a user provided function applied to each element and its index.
ifold': Reduce the matrix using a user provided function applied to each element and its index. This is the strict version of ifold.
fold1: Reduce the matrix using a user provided function applied to each element.
fold1': Reduce the matrix using a user provided function applied to each element. This is the strict version of fold1.
diagonal: Diagonal of the matrix.
inverse: Inverse of the matrix. For small fixed sizes up to 4x4, this method uses cofactors. In the general case, this method uses PartialPivLU decomposition.
adjoint: Adjoint of the matrix.
transpose: Transpose of the matrix.
conjugate: Conjugate of the matrix.
normalize: Normalize the matrix by dividing it by its norm.
modify: Apply a destructive operation to a matrix.
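The imap example above can be mirrored in plain Haskell over nested lists; the imap and upperTriangular helpers below are hypothetical sketches of the same idea, not the eigen Matrix functions:

```haskell
-- Map a function over elements together with their (row, col) index.
imap :: (Int -> Int -> a -> a) -> [[a]] -> [[a]]
imap f m =
  [ [ f row col val | (col, val) <- zip [0 ..] xs ]
  | (row, xs) <- zip [0 ..] m ]

-- Keep elements on or above the diagonal, zero the rest.
upperTriangular :: Num a => [[a]] -> [[a]]
upperTriangular = imap (\row col val -> if row <= col then val else 0)

main :: IO ()
main = print (upperTriangular [[1,2,3],[4,5,6],[7,8,9]])
-- [[1,2,3],[0,5,6],[0,0,9]]
```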
The operation will be performed in place if it is safe to do so, and will modify a copy of the matrix otherwise.
convert: Convert the matrix to a different type using a user provided element converter.
freeze: Yield an immutable copy of the mutable matrix.
thaw: Yield a mutable copy of the immutable matrix.
unsafeFreeze: Unsafely convert a mutable matrix to an immutable one without copying. The mutable matrix may not be used after this operation.
unsafeThaw: Unsafely convert an immutable matrix to a mutable one without copying. The immutable matrix may not be used after this operation.
unsafeWith: Pass a pointer to the matrix's data to the IO action. The data may not be modified through the pointer.
Binary instance: Matrix binary serialization.
Num instance: Basic matrix math exposed through the Num instance: (*), (+), (-), fromInteger, signum, abs, negate.
Show instance: Pretty prints the matrix.

Data.Eigen.SparseMatrix

SparseMatrixXcd: Alias for double precision sparse matrix of complex numbers.
SparseMatrixXcf: Alias for single precision sparse matrix of complex numbers.
SparseMatrixXd: Alias for double precision sparse matrix.
SparseMatrixXf: Alias for single precision sparse matrix.
SparseMatrix: A versatile sparse matrix representation. SparseMatrix is the main sparse matrix representation of Eigen's sparse module. It offers high performance and low memory usage. It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme. It consists of four compact arrays:

values: stores the coefficient values of the non-zeros.
innerIndices: stores the row (resp. column) indices of the non-zeros.
outerStarts: stores, for each column (resp. row), the index of the first non-zero in the previous two arrays.
innerNNZs: stores the number of non-zeros of each column (resp. row).

The word inner refers to an inner vector, which is a column for a column-major matrix, or a row for a row-major matrix.
The word outer refers to the other direction.

This storage scheme is better explained with an example. The following matrix

0  3  0  0  0
22 0  0  0  17
7  5  0  1  0
0  0  0  0  0
0  0  14 0  8

and one of its possible sparse, column major representations:

values:       22 7 _ 3 5 14 _ _ 1 _ 17 8
innerIndices:  1 2 _ 0 2  4 _ _ 2 _  1 4
outerStarts:   0 3 5 8 10 12
innerNNZs:     2 2 1 1 2

Currently the elements of a given inner vector are guaranteed to always be sorted by increasing inner indices. The "_" indicates available free space to quickly insert new elements. Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j), where nnz_j is the number of nonzeros of the respective inner vector. On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient, since it only requires increasing the respective innerNNZs entry, which is an O(1) operation.

The case where no empty space is available is a special case, and is referred to as the compressed mode. It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS). Any SparseMatrix can be turned into this form by calling the compress function. In this case, one can remark that the innerNNZs array is redundant with outerStarts because of the equality InnerNNZs[j] = OuterStarts[j+1] - OuterStarts[j]. Therefore, in practice a call to compress frees this buffer.

The results of Eigen's operations always produce compressed sparse matrices. On the other hand, the insertion of a new element into a SparseMatrix converts it to the uncompressed mode.

For more information please see the Eigen documentation page: http://eigen.tuxfamily.org/dox/classEigen_1_1SparseMatrix.html

encode: Encode the sparse matrix as a lazy byte string.
decode: Decode sparse matrix from the lazy byte string.
values: Stores the coefficient values of the non-zeros.
innerIndices: Stores the row (resp. column) indices of the non-zeros.
outerStarts: Stores, for each column (resp. row), the index of the first non-zero in the previous two arrays.
innerNNZs: Stores the number of non-zeros of each column (resp. row). The word inner refers to an inner vector, which is a column for a column-major matrix, or a row for a row-major matrix. The word outer refers to the other direction.
rows: Number of rows for the sparse matrix.
cols: Number of columns for the sparse matrix.
coeff: Matrix coefficient at given row and col.
(!): Matrix coefficient at given row and col.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases it is the square root of the sum of the squares of all the matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases it is the sum of the squares of all the matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
block: Extract a rectangular block from the sparse matrix, defined by startRow startCol blockRows blockCols.
nonZeros: Number of non-zero elements in the sparse matrix.
compress: The matrix in the compressed format.
uncompress: The matrix in the uncompressed mode.
compressed: Is this matrix in compressed form?
innerSize: Minor dimension with respect to the storage order.
outerSize: Major dimension with respect to the storage order.
pruned: Suppresses all nonzeros which are much smaller than the reference under the tolerance epsilon.
scale: Multiply the matrix by a given scalar.
transpose: Transpose of the sparse matrix.
adjoint: Adjoint of the sparse matrix.
add: Adding two sparse matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtracting two sparse matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication. You can use the (*) function as well.
fromList: Construct a sparse matrix of the given size from the list of triplets (row, col, val).
fromVector: Construct a sparse matrix of the given size from the storable vector of triplets (row, col, val).
toList: Convert the sparse matrix to a list of triplets (row, col, val).
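The compressed-mode invariant described above, InnerNNZs[j] = OuterStarts[j+1] - OuterStarts[j], can be checked in plain Haskell over the example's per-column counts (a sketch for illustration, not the eigen API):

```haskell
-- Non-zeros per column in the 5x5 example matrix above.
innerNNZs :: [Int]
innerNNZs = [2, 2, 1, 1, 2]

-- In compressed form, outerStarts is the running sum of innerNNZs,
-- since there is no free space between inner vectors.
compressedOuterStarts :: [Int]
compressedOuterStarts = scanl (+) 0 innerNNZs

-- The invariant: adjacent differences of outerStarts recover innerNNZs,
-- which is why compress can free the innerNNZs buffer.
invariantHolds :: Bool
invariantHolds =
  zipWith (-) (tail compressedOuterStarts) compressedOuterStarts == innerNNZs

main :: IO ()
main = print (compressedOuterStarts, invariantHolds)
-- ([0,2,4,5,6,8],True)
```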
Compressed elements will not be included.
toVector: Convert the sparse matrix to a storable vector of triplets (row, col, val). Compressed elements will not be included.
fromDenseList: Construct a sparse matrix from a two-dimensional list of values. Matrix dimensions will be detected automatically. Zero values will be compressed.
toDenseList: Convert the sparse matrix to a (rows x cols) dense list of values.
fromMatrix: Construct a sparse matrix from a dense matrix. Zero elements will be compressed.
toMatrix: Construct a dense matrix from a sparse matrix.
freeze: Yield an immutable copy of the mutable matrix.
thaw: Yield a mutable copy of the immutable matrix.
unsafeFreeze: Unsafely convert a mutable matrix to an immutable one without copying. The mutable matrix may not be used after this operation.
unsafeThaw: Unsafely convert an immutable matrix to a mutable one without copying. The immutable matrix may not be used after this operation.
Num instance: Basic sparse matrix math exposed through the Num instance: (*), (+), (-).
Show instance: Pretty prints the sparse matrix.

Data.Eigen.SparseLA

ComputationInfo:
Success: Computation was successful.
NumericalIssue: The provided data did not satisfy the prerequisites.
NoConvergence: Iterative procedure did not converge.
InvalidInput: The inputs are invalid, or the algorithm has been improperly called. When assertions are enabled, such errors trigger an assert.

SparseQR: Sparse left-looking rank-revealing QR factorization. This class implements a left-looking rank-revealing QR decomposition of sparse matrices. When a column has a norm less than a given tolerance it is implicitly permuted to the end. The QR factorization thus obtained is given by A*P = Q*R, where R is upper triangular or trapezoidal. P is the column permutation, which is the product of the fill-reducing and the rank-revealing permutations. Q is the orthogonal matrix represented as products of Householder reflectors. R is the sparse triangular or trapezoidal matrix; the latter occurs when A is rank-deficient.

SparseLU: Sparse supernodal LU factorization for general matrices. This class implements the supernodal LU factorization for general matrices.
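The triplet round-trip described above (fromList / toDenseList) can be sketched in plain Haskell; the toDenseList helper below is a hypothetical illustration of the semantics, not the eigen function:

```haskell
-- Build a dense row list from (row, col, val) triplets; positions not
-- mentioned by any triplet are 0, mirroring sparse-to-dense conversion.
toDenseList :: Int -> Int -> [(Int, Int, Double)] -> [[Double]]
toDenseList rows cols triplets =
  [ [ sum [ v | (r, c, v) <- triplets, r == i, c == j ]
    | j <- [0 .. cols - 1] ]
  | i <- [0 .. rows - 1] ]

main :: IO ()
main = print (toDenseList 2 2 [(0, 0, 1), (1, 1, 2)])
-- [[1.0,0.0],[0.0,2.0]]
```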
It uses the main techniques from the sequential SuperLU package (http://crd-legacy.lbl.gov/~xiaoye/SuperLU/). It transparently handles real and complex arithmetic with single and double precision, depending on the scalar type of your input matrix. The code has been optimized to provide BLAS-3 operations during supernode-panel updates. It benefits directly from the built-in high-performance Eigen BLAS routines. Moreover, when the size of a supernode is very small, the BLAS calls are avoided to enable better optimization by the compiler. For best performance, you should compile it with the NDEBUG flag to avoid the numerous bounds checks on vectors.

An important parameter of this class is the ordering method. It is used to reorder the columns (and eventually the rows) of the matrix to reduce the number of new elements that are created during numerical factorization. The cheapest method available is COLAMD. See the OrderingMethods module (http://eigen.tuxfamily.org/dox/group__OrderingMethods__Module.html) for the list of built-in and external ordering methods.

BiCGSTAB: A biconjugate gradient stabilized solver for sparse square problems. This class allows solving A.x = b sparse linear problems using a biconjugate gradient stabilized algorithm. The vectors x and b can be either dense or sparse. The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations, and epsilon for the tolerance.

ConjugateGradient: A conjugate gradient solver for sparse self-adjoint problems. This class allows solving A.x = b sparse linear problems using a conjugate gradient algorithm. The sparse matrix A must be self-adjoint. The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations, and epsilon for the tolerance.

Sometimes, the solution need not be too accurate.
In this case, the iterative methods are more suitable, and the desired accuracy can be set before the solve step using setTolerance. For direct methods, the solution is computed at machine precision.

DiagonalPreconditioner: A preconditioner based on the diagonal entries. It allows approximately solving A.x = b problems, assuming A is a diagonal matrix. In other words, this preconditioner neglects all off-diagonal entries and, in Eigen's language, solves for:

A.diagonal().asDiagonal() . x = b

This preconditioner is suitable for both self-adjoint and general problems. The diagonal entries are pre-inverted and stored in a dense vector. A variant that has yet to be implemented would attempt to preserve the norm of each column.

IdentityPreconditioner: A naive preconditioner which approximates any matrix as the identity matrix.

OrderingMethod: Ordering methods for sparse matrices. They are typically used to reduce the number of elements during a sparse matrix decomposition (LLT, LU, QR). Precisely, in a preprocessing step, a permutation matrix P is computed using those ordering methods and applied to the columns of the matrix.
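The conjugate-gradient idea behind the iterative solvers above can be sketched in plain Haskell for a small dense self-adjoint system; this is a minimal hypothetical sketch, not the eigen ConjugateGradient binding (which works on sparse matrices with preconditioning):

```haskell
type Vec = [Double]

dot :: Vec -> Vec -> Double
dot a b = sum (zipWith (*) a b)

-- Dense matrix-vector product.
mulMV :: [[Double]] -> Vec -> Vec
mulMV m v = [dot row v | row <- m]

-- a*x + y, elementwise.
axpy :: Double -> Vec -> Vec -> Vec
axpy a = zipWith (\xi yi -> a * xi + yi)

-- Conjugate gradient for self-adjoint positive definite a,
-- starting from x0, with at most n iterations.
cg :: [[Double]] -> Vec -> Vec -> Int -> Vec
cg a b x0 n = go x0 r0 r0 n
  where
    r0 = axpy (-1) (mulMV a x0) b   -- initial residual b - A.x0
    go x _ _ 0 = x
    go x r p k
      | dot r r < 1e-20 = x          -- stopping criterion on the residual
      | otherwise =
          let ap    = mulMV a p
              alpha = dot r r / dot p ap
              x'    = axpy alpha p x
              r'    = axpy (-alpha) ap r
              beta  = dot r' r' / dot r r
              p'    = axpy beta p r'
          in go x' r' p' (k - 1)

main :: IO ()
main = print (cg [[4, 1], [1, 3]] [1, 2] [0, 0] 10)
-- converges to [1/11, 7/11]
```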
Using for instance the sparse Cholesky decomposition, it is expected that the nonzero elements in LLT(A*P) will be much smaller than those in LLT(A).

COLAMDOrdering: The column approximate minimum degree ordering. The matrix should be in column-major and compressed format.
NaturalOrdering: The natural ordering (identity).
analyzePattern: Initializes the iterative solver for the sparsity pattern of the matrix A for further solving Ax=b problems.
factorize: Initializes the iterative solver with the numerical values of the matrix A for further solving Ax=b problems.
compute: Initializes the iterative solver with the matrix A for further solving Ax=b problems. The compute method is equivalent to calling both analyzePattern and factorize.
solve: An expression of the solution x of Ax=b using the current decomposition of A.
info: Success if the iterations converged or the computation was successful; NumericalIssue if the factorization reports a numerical problem; NoConvergence if the iterations did not converge; InvalidInput if the input matrix is invalid.
tolerance: The tolerance threshold used by the stopping criteria.
setTolerance: Sets the tolerance threshold used by the stopping criteria. This value is used as an upper bound to the relative residual error |Ax-b|/|b|. The default value is the machine precision given by epsilon.
maxIterations: The max number of iterations. It is either the value set by setMaxIterations or, by default, twice the number of columns of the matrix.
setMaxIterations: Sets the max number of iterations. Default is twice the number of columns of the matrix.
error: The tolerance error reached during the last solve. It is a close approximation of the true relative residual error |Ax-b|/|b|.
iterations: The number of iterations performed during the last solve.
matrixR: Returns the sparse upper triangular matrix R of the QR factorization.
matrixQ: Returns the matrix Q
as products of sparse Householder reflectors.
setPivotThreshold: Sets the threshold used to determine linearly dependent columns during the factorization. In practice, if during the factorization the norm of the column to be eliminated is below this threshold, then the entire column is treated as zero and is moved to the end.
rank: Returns the number of non linearly dependent columns as determined by the pivoting threshold.
setSymmetric: Indicates that the pattern of the input matrix is symmetric.
matrixL: Returns the matrix L.
matrixU: Returns the matrix U.
determinant: The determinant of the matrix.
logAbsDeterminant: The natural log of the absolute value of the determinant of the matrix of which this is the QR decomposition. This method is useful to work around the risk of overflow/underflow that is inherent to the determinant computation.
absDeterminant: The absolute value of the determinant of the matrix of which this is the QR decomposition. A determinant can be very big or small, so for matrices of large enough dimension there is a risk of overflow/underflow. One way to work around that is to use logAbsDeterminant instead.
signDeterminant: A number representing the sign of the determinant.

Data.Eigen.LA

Decomposition          Requirements on the matrix          Speed  Accuracy  Rank  Kernel  Image
PartialPivLU           Invertible                          ++     +         -     -       -
FullPivLU              None                                -      +++       +     +       +
HouseholderQR          None                                ++     +         -     -       -
ColPivHouseholderQR    None                                +      ++        +     -       -
FullPivHouseholderQR   None                                -      +++       +     -       -
LLT                    Positive definite                   +++    +         -     -       -
LDLT                   Positive or negative semidefinite   +++    ++        -     -       -
JacobiSVD              None                                -      +++       +     -       -

The best way to do least squares solving for square matrices is with an SVD decomposition (JacobiSVD).

PartialPivLU: LU decomposition of a matrix with partial pivoting.
FullPivLU: LU decomposition of a matrix with complete pivoting.
HouseholderQR: Householder QR decomposition of a matrix.
ColPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with column-pivoting.
FullPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with full pivoting.
LLT: Standard Cholesky decomposition (LL^T) of a matrix.
LDLT: Robust Cholesky decomposition of a matrix with pivoting.
JacobiSVD: Two-sided Jacobi SVD decomposition of a rectangular matrix.
solve: x = solve d a b finds a solution x of the equation ax = b using decomposition d.
relativeError: e = relativeError x a b computes norm (ax - b) / norm b, where norm is the L2 norm.
rank: The rank of the matrix.
kernel: Return a matrix whose columns form a basis of the null-space of A.
image: Return a matrix whose columns form a basis of the column-space of A.
linearRegression: (coeffs, error) = linearRegression points computes a multiple linear regression y = a1 x1 + a2 x2 + ... + an xn + b. Point format is [y, x1..xn]; coeffs format is [b, a1..an]; the error is calculated using relativeError.

import Data.Eigen.LA
main = print $ linearRegression [
    [-4.32, 3.02, 6.89],
    [-3.79, 2.01, 5.39],
    [-4.01, 2.41, 6.01],
    [-3.86, 2.09, 5.55],
    [-4.10, 2.58, 6.32]]

produces the following output:

([-2.3466569233817127,-0.2534897541434826,-0.1749653335680988],1.8905965120153139e-3)

Package: eigen-2.1.6. Modules: Data.Eigen.Matrix, Data.Eigen.Matrix.Mutable, Data.Eigen.SparseMatrix, Data.Eigen.SparseMatrix.Mutable, Data.Eigen.Parallel, Data.Eigen.SparseLA, Data.Eigen.LA, Data.Eigen.Internal.
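For the single-predictor case, the least-squares fit that linearRegression generalizes has a closed form via the normal equations; the simpleRegression helper below is a plain-Haskell sketch of that idea (a hypothetical illustration, not the eigen function, which handles multiple predictors through a matrix decomposition):

```haskell
-- Fit y = a*x + b by least squares; returns (b, a) to mirror
-- linearRegression's [b, a1..an] coefficient order.
simpleRegression :: [(Double, Double)] -> (Double, Double)
simpleRegression pts = (ybar - a * xbar, a)
  where
    n    = fromIntegral (length pts)
    xbar = sum (map fst pts) / n
    ybar = sum (map snd pts) / n
    -- slope = covariance(x, y) / variance(x)
    a    = sum [(x - xbar) * (y - ybar) | (x, y) <- pts]
         / sum [(x - xbar) ^ 2 | (x, _) <- pts]

main :: IO ()
main = print (simpleRegression [(x, 2 * x + 1) | x <- [0 .. 4]])
-- recovers slope 2 and intercept 1 on exactly linear data
```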