eigen-2.1.3: API documentation

Data.Eigen.Internal

CComplex: Complex number for FFI with the same memory layout as std::complex<T>.

Data.Eigen.Matrix.Mutable

MMatrixXcd: Alias for double precision mutable matrix of complex numbers.
MMatrixXcf: Alias for single precision mutable matrix of complex numbers.
MMatrixXd: Alias for double precision mutable matrix.
MMatrixXf: Alias for single precision mutable matrix.
MMatrix: Mutable matrix. You can modify its elements.
valid: Verify matrix dimensions and memory layout.
new: Create a mutable matrix of the given size and fill it with 0 as an initial value.
replicate: Create a mutable matrix of the given size and fill it with the given value as an initial value.
set: Set all elements of the matrix to the given value.
copy: Copy a matrix. The two matrices must have the same size and may not overlap.
read: Yield the element at the given position.
write: Replace the element at the given position.
unsafeCopy: Copy a matrix. The two matrices must have the same size and may not overlap; however, no bounds checks are performed, so incorrect input may SEGFAULT.
unsafeRead: Yield the element at the given position. No bounds checks are performed.
unsafeWrite: Replace the element at the given position. No bounds checks are performed.
unsafeWith: Pass a pointer to the matrix's data to the IO action. Modifying data through the pointer is unsafe if the matrix could have been frozen before the modification.
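A minimal sketch of how the mutable API composes with the immutable one: thaw an immutable matrix, update one element in place, and freeze it back. The exact monads of thaw/write/freeze are not shown in these docs; IO is assumed here.

```haskell
import qualified Data.Eigen.Matrix as M
import qualified Data.Eigen.Matrix.Mutable as MM

main :: IO ()
main = do
    let a = M.fromList [[1, 2], [3, 4]] :: M.MatrixXf
    m <- M.thaw a         -- mutable copy of the immutable matrix
    MM.write m 0 0 10     -- replace the element at row 0, col 0
    b <- M.freeze m       -- immutable copy of the mutable matrix
    print b
```

Because thaw copies, the original matrix a is left untouched; unsafeThaw would avoid the copy at the cost of invalidating a.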
Data.Eigen.Parallel

setNbThreads: Sets the max number of threads reserved for Eigen.
getNbThreads: Gets the max number of threads reserved for Eigen.

Data.Eigen.Matrix

TriangularMode:
Lower: View matrix as a lower triangular matrix.
Upper: View matrix as an upper triangular matrix.
StrictlyLower: View matrix as a lower triangular matrix with zeros on the diagonal.
StrictlyUpper: View matrix as an upper triangular matrix with zeros on the diagonal.
UnitLower: View matrix as a lower triangular matrix with ones on the diagonal.
UnitUpper: View matrix as an upper triangular matrix with ones on the diagonal.

MatrixXcd: Alias for double precision matrix of complex numbers.
MatrixXcf: Alias for single precision matrix of complex numbers.
MatrixXd: Alias for double precision matrix.
MatrixXf: Alias for single precision matrix.
Matrix: Matrix to be used in pure computations. Uses column-major memory layout and features copy-free FFI with the C++ Eigen library (http://eigen.tuxfamily.org).
empty: Empty 0x0 matrix.
null: Is matrix empty?
square: Is matrix square?
constant: Matrix where all coeffs are filled with the given value.
zero: Matrix where all coeffs are 0.
ones: Matrix where all coeffs are 1.
identity: The identity matrix (not necessarily square).
random: The random matrix of a given size.
rows: Number of rows for the matrix.
cols: Number of columns for the matrix.
dims: Matrix size as a (rows, cols) pair.
coeff: Matrix coefficient at specific row and col.
(!): Matrix coefficient at specific row and col.
unsafeCoeff: Unsafe version of coeff. No bounds check is performed, so a SEGFAULT is possible.
col: List of coefficients for the given col.
row: List of coefficients for the given row.
block: Extract rectangular block from matrix defined by startRow startCol blockRows blockCols.
valid: Verify matrix dimensions and memory layout.
maxCoeff: The maximum coefficient of the matrix.
minCoeff: The minimum coefficient of the matrix.
topRows: Top N rows of matrix.
bottomRows: Bottom N rows of matrix.
leftCols: Left N columns of matrix.
rightCols: Right N columns of matrix.
fromList: Construct matrix from a list of rows; the column count is detected as the maximum row length. Missing values are filled with 0.
toList: Convert matrix to a list of rows.
generate: `generate rows cols (\row col -> val)` creates a matrix using the generator function `\row col -> val`.
sum: The sum of all coefficients of the matrix.
prod: The product of all coefficients of the matrix.
mean: The mean of all coefficients of the matrix.
trace: The trace of a matrix is the sum of the diagonal coefficients, and can also be computed as `sum (diagonal m)`.
all: Applied to a predicate and a matrix, all determines if all elements of the matrix satisfy the predicate.
any: Applied to a predicate and a matrix, any determines if any element of the matrix satisfies the predicate.
count: Returns the number of coefficients in a given matrix that evaluate to true.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases, it consists of the square root of the sum of the squares of all the matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases, it consists of the sum of the squares of all the matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
hypotNorm: The l2 norm of the matrix avoiding underflow and overflow. This version uses a concatenation of hypot calls, and it is very slow.
determinant: The determinant of the matrix.
add: Add two matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtract two matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication. You can use the (*) function as well.
map: Apply a given function to each element of the matrix. Here is an example of how to implement scalar matrix multiplication:

    let a = fromList [[1,2],[3,4]] :: MatrixXf

    a
    Matrix 2x2
    1.0 2.0
    3.0 4.0

    map (*10) a
    Matrix 2x2
    10.0 20.0
    30.0 40.0

imap: Apply a given function to each element of the matrix and its index. Here is an example of how an upper triangular matrix can be implemented:

    let a = fromList [[1,2,3],[4,5,6],[7,8,9]] :: MatrixXf

    a
    Matrix 3x3
    1.0 2.0 3.0
    4.0 5.0 6.0
    7.0 8.0 9.0

    imap (\row col val -> if row <= col then val else 0) a
    Matrix 3x3
    1.0 2.0 3.0
    0.0 5.0 6.0
    0.0 0.0 9.0

triangularView: Triangular view extracted from the current matrix.
lowerTriangle: Lower triangle of the matrix. Shortcut for `triangularView Lower`.
upperTriangle: Upper triangle of the matrix. Shortcut for `triangularView Upper`.
filter: Filter elements in the matrix. Filtered elements will be replaced by 0.
ifilter: Filter elements in the matrix by value and index. Filtered elements will be replaced by 0.
fold: Reduce matrix using a user provided function applied to each element.
fold': Reduce matrix using a user provided function applied to each element. This is the strict version of fold.
ifold: Reduce matrix using a user provided function applied to each element and its index.
ifold': Reduce matrix using a user provided function applied to each element and its index. This is the strict version of ifold.
fold1: Reduce matrix using a user provided function applied to each element.
fold1': Reduce matrix using a user provided function applied to each element. This is the strict version of fold1.
diagonal: Diagonal of the matrix.
inverse: Inverse of the matrix. For small fixed sizes up to 4x4, this method uses cofactors. In the general case, this method uses PartialPivLU decomposition.
adjoint: Adjoint of the matrix.
transpose: Transpose of the matrix.
conjugate: Conjugate of the matrix.
normalize: Normalize the matrix by dividing it by its norm.
modify: Apply a destructive operation to a matrix. The operation will be performed in place if it is safe to do so, and will modify a copy of the matrix otherwise.
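The Num instance and the reductions above can be combined directly. A small sketch, using only functions documented here (fromList, trace, sum, transpose, toList) with a qualified import to avoid clashing with Prelude names:

```haskell
import qualified Data.Eigen.Matrix as M

main :: IO ()
main = do
    let a = M.fromList [[1, 2], [3, 4]] :: M.MatrixXd
    print (M.trace a)                -- sum of the diagonal: 1 + 4
    print (M.sum (a + a))            -- Num (+) then sum of all coefficients
    print (M.toList (M.transpose a)) -- rows of the transposed matrix
```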
convert: Convert matrix to a different type using a user provided element converter.
freeze: Yield an immutable copy of the mutable matrix.
thaw: Yield a mutable copy of the immutable matrix.
unsafeFreeze: Unsafely convert a mutable matrix to an immutable one without copying. The mutable matrix may not be used after this operation.
unsafeThaw: Unsafely convert an immutable matrix to a mutable one without copying. The immutable matrix may not be used after this operation.
unsafeWith: Pass a pointer to the matrix's data to the IO action. The data may not be modified through the pointer.
encode: Encode the matrix as a lazy byte string.
decode: Decode matrix from the lazy byte string.

Basic matrix math is exposed through the Num instance: (*), (+), (-) (see mul, add, sub). The Show instance pretty prints the matrix.

Data.Eigen.SparseMatrix

SparseMatrixXcd: Alias for double precision sparse matrix of complex numbers.
SparseMatrixXcf: Alias for single precision sparse matrix of complex numbers.
SparseMatrixXd: Alias for double precision sparse matrix.
SparseMatrixXf: Alias for single precision sparse matrix.
SparseMatrix: A versatile sparse matrix representation.

SparseMatrix is the main sparse matrix representation of Eigen's sparse module. It offers high performance and low memory usage. It implements a more versatile variant of the widely-used Compressed Column (or Row) Storage scheme.

It consists of four compact arrays:

values: stores the coefficient values of the non-zeros.
innerIndices: stores the row (resp. column) indices of the non-zeros.
outerStarts: stores for each column (resp. row) the index of the first non-zero in the previous two arrays.
innerNNZs: stores the number of non-zeros of each column (resp. row).

The word inner refers to an inner vector, that is, a column for a column-major matrix, or a row for a row-major matrix. The word outer refers to the other direction.

This storage scheme is better explained with an example. The following matrix

    0   3   0   0   0
    22  0   0   0   17
    7   5   0   1   0
    0   0   0   0   0
    0   0   14  0   8

and one of its possible sparse, column major representations:

    values:        22  7  _  3  5  14  _  _  1  _  17  8
    innerIndices:   1  2  _  0  2   4  _  _  2  _   1  4
    outerStarts:    0  3  5  8  10  12
    innerNNZs:      2  2  1  1  2

Currently the elements of a given inner vector are guaranteed to always be sorted by increasing inner indices. The "_" indicates available free space to quickly insert new elements. Assuming no reallocation is needed, the insertion of a random element is therefore in O(nnz_j), where nnz_j is the number of nonzeros of the respective inner vector. On the other hand, inserting elements with increasing inner indices in a given inner vector is much more efficient, since this only requires increasing the respective innerNNZs entry, which is an O(1) operation.

The case where no empty space is available is a special case, and is referred to as the compressed mode. It corresponds to the widely used Compressed Column (or Row) Storage schemes (CCS or CRS). Any SparseMatrix can be turned into this form by calling the compress function. In this case, one can remark that the innerNNZs array is redundant with outerStarts because of the equality: InnerNNZs[j] = OuterStarts[j+1] - OuterStarts[j]. Therefore, in practice, a call to compress frees this buffer.

The results of Eigen's operations always produce compressed sparse matrices. On the other hand, the insertion of a new element into a SparseMatrix converts it to the uncompressed mode.

For more information, please see the Eigen documentation page: http://eigen.tuxfamily.org/dox/classEigen_1_1SparseMatrix.html

Not exposed, for internal use only:
values: Stores the coefficient values of the non-zeros.
innerIndices: Stores the row (resp. column) indices of the non-zeros.
outerStarts: Stores for each column (resp. row) the index of the first non-zero in the previous two arrays.
innerNNZs: Stores the number of non-zeros of each column (resp. row). The word inner refers to an inner vector that is a column for a column-major matrix, or a row for a row-major matrix. The word outer refers to the other direction.

rows: Number of rows for the sparse matrix.
cols: Number of columns for the sparse matrix.
coeff: Matrix coefficient at given row and col.
(!): Matrix coefficient at given row and col.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases, it consists of the square root of the sum of the squares of all the matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases, it consists of the sum of the squares of all the matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
block: Extract rectangular block from sparse matrix defined by startRow startCol blockRows blockCols.
nonZeros: Number of non-zero elements in the sparse matrix.
compress: The matrix in the compressed format.
uncompress: The matrix in the uncompressed mode.
compressed: Is this in compressed form?
innerSize: Minor dimension with respect to the storage order.
outerSize: Major dimension with respect to the storage order.
pruned: Suppresses all nonzeros which are much smaller than the reference under the tolerance epsilon.
scale: Multiply the matrix by a given scalar.
transpose: Transpose of the sparse matrix.
adjoint: Adjoint of the sparse matrix.
add: Add two sparse matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtract two sparse matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication. You can use the (*) function as well.
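A short sketch of building and multiplying sparse matrices. The triplet-based fromList is documented in this module; the (rows, cols, triplets) argument order is assumed from the "given size" wording:

```haskell
import qualified Data.Eigen.SparseMatrix as SM

main :: IO ()
main = do
    -- 2x2 diagonal sparse matrix from (row, col, value) triplets
    let m = SM.fromList 2 2 [(0, 0, 2), (1, 1, 3)] :: SM.SparseMatrixXd
    -- multiplying a diagonal matrix by itself squares the diagonal
    print (SM.toList (SM.mul m m))
```

Since the result of mul is produced in compressed mode, toList yields only the stored non-zero triplets.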
fromList: Construct sparse matrix of a given size from the list of triplets (row, col, val).
toList: Convert sparse matrix to the list of triplets (row, col, val). Compressed elements will not be included.
fromDenseList: Construct sparse matrix from a two-dimensional list of values. Matrix dimensions will be detected automatically. Zero values will be compressed.
toDenseList: Convert sparse matrix to a (rows x cols) dense list of values.
fromMatrix: Construct sparse matrix from a dense matrix. Zero elements will be compressed.
toMatrix: Construct dense matrix from a sparse matrix.
encode: Encode the sparse matrix as a lazy byte string.
decode: Decode sparse matrix from the lazy byte string.

Basic sparse matrix math is exposed through the Num instance: (*), (+), (-) (see mul, add, sub). The Show instance pretty prints the sparse matrix.

Data.Eigen.SparseLA

ComputationInfo:
Success: Computation was successful.
NumericalIssue: The provided data did not satisfy the prerequisites.
NoConvergence: Iterative procedure did not converge.
InvalidInput: The inputs are invalid, or the algorithm has been improperly called. When assertions are enabled, such errors trigger an error.

SparseQR: Sparse left-looking rank-revealing QR factorization.

This class implements a left-looking rank-revealing QR decomposition of sparse matrices. When a column has a norm less than a given tolerance, it is implicitly permuted to the end. The QR factorization thus obtained is given by A*P = Q*R, where R is upper triangular or trapezoidal.

P is the column permutation, which is the product of the fill-reducing and the rank-revealing permutations.
Q is the orthogonal matrix represented as products of Householder reflectors.
R is the sparse triangular or trapezoidal matrix. The latter occurs when A is rank-deficient.

SparseLU: Sparse supernodal LU factorization for general matrices.

This class implements the supernodal LU factorization for general matrices. It uses the main techniques from the sequential SuperLU package (http://crd-legacy.lbl.gov/~xiaoye/SuperLU/). It handles real and complex arithmetic transparently, with single and double precision, depending on the scalar type of your input matrix. The code has been optimized to provide BLAS-3 operations during supernode-panel updates. It benefits directly from the built-in high-performance Eigen BLAS routines. Moreover, when the size of a supernode is very small, the BLAS calls are avoided to enable better optimization by the compiler. For best performance, you should compile it with the NDEBUG flag to avoid the numerous bounds checks on vectors.

An important parameter of this class is the ordering method. It is used to reorder the columns (and eventually the rows) of the matrix to reduce the number of new elements that are created during numerical factorization. The cheapest method available is COLAMD. See the OrderingMethods module (http://eigen.tuxfamily.org/dox/group__OrderingMethods__Module.html) for the list of built-in and external ordering methods.

BiCGSTAB: A bi conjugate gradient stabilized solver for sparse square problems.

This class makes it possible to solve A.x = b sparse linear problems using a bi conjugate gradient stabilized algorithm. The vectors x and b can be either dense or sparse.

The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations, and epsilon for the tolerance.

ConjugateGradient: A conjugate gradient solver for sparse self-adjoint problems.

This class makes it possible to solve A.x = b sparse linear problems using a conjugate gradient algorithm. The sparse matrix A must be selfadjoint.

The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations, and epsilon for the tolerance.

Sometimes, the solution need not be too accurate. In this case, the iterative methods are more suitable, and the desired accuracy can be set before the solve step using setTolerance. For direct methods, the solution is computed at the machine precision.
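A hypothetical sketch of the solver workflow. runSolverT, compute, and solve are exported by this module, but their exact signatures are not shown in these docs, so the types below (IO-based, nullary BiCGSTAB) are assumptions:

```haskell
import Data.Eigen.SparseLA
import qualified Data.Eigen.SparseMatrix as SM

-- Solve A x = b with BiCGSTAB (signatures assumed, see lead-in).
solveSparse :: SM.SparseMatrixXd -> SM.SparseMatrixXd -> IO SM.SparseMatrixXd
solveSparse a b = runSolverT BiCGSTAB $ do
    compute a    -- equivalent to analyzePattern followed by factorize
    solve b      -- solution x of A x = b using the current decomposition
```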
DiagonalPreconditioner: A preconditioner based on the diagonal entries.

It makes it possible to approximately solve A.x = b problems, assuming A is a diagonal matrix. In other words, this preconditioner neglects all off-diagonal entries and, in Eigen's language, solves for: A.diagonal().asDiagonal() . x = b. This preconditioner is suitable for both selfadjoint and general problems. The diagonal entries are pre-inverted and stored in a dense vector.

A variant that has yet to be implemented would attempt to preserve the norm of each column.

IdentityPreconditioner: A naive preconditioner which approximates any matrix as the identity matrix.

OrderingMethod: Ordering methods for sparse matrices. They are typically used to reduce the number of elements during the sparse matrix decomposition (LLT, LU, QR). Precisely, in a preprocessing step, a permutation matrix P is computed using those ordering methods and applied to the columns of the matrix. Using, for instance, the sparse Cholesky decomposition, it is expected that the nonzero elements in LLT(A*P) will be much smaller than those in LLT(A).

COLAMDOrdering: The column approximate minimum degree ordering. The matrix should be in column-major and compressed format.
NaturalOrdering: The natural ordering (identity).

analyzePattern: Initializes the iterative solver for the sparsity pattern of the matrix A for further solving Ax=b problems.
factorize: Initializes the iterative solver with the numerical values of the matrix A for further solving Ax=b problems.
compute: Initializes the iterative solver with the matrix A for further solving Ax=b problems. The compute method is equivalent to calling both analyzePattern and factorize.
solve: An expression of the solution x of Ax=b using the current decomposition of A.
info: Success if the iterations converged or the computation was successful; NumericalIssue if the factorization reports a numerical problem; NoConvergence if the iterations did not converge; InvalidInput if the input matrix is invalid.
tolerance: The tolerance threshold used by the stopping criteria.
setTolerance: Sets the tolerance threshold used by the stopping criteria. This value is used as an upper bound to the relative residual error: |Ax-b|/|b|. The default value is the machine precision given by epsilon.
maxIterations: The max number of iterations. It is either the value set by setMaxIterations or, by default, twice the number of columns of the matrix.
setMaxIterations: Sets the max number of iterations. The default is twice the number of columns of the matrix.
error: The tolerance error reached during the last solve. It is a close approximation of the true relative residual error |Ax-b|/|b|.
iterations: The number of iterations performed during the last solve.
matrixR: Returns the sparse upper triangular matrix R of the QR factorization.
matrixQ: Returns the matrix Q as products of sparse Householder reflectors.
setPivotThreshold: Sets the threshold that is used to determine linearly dependent columns during the factorization. In practice, if during the factorization the norm of the column that has to be eliminated is below this threshold, then the entire column is treated as zero, and it is moved to the end.
rank: Returns the number of non linearly dependent columns, as determined by the pivoting threshold.
setSymmetric: Indicate that the pattern of the input matrix is symmetric.
matrixL: Returns the matrix L.
matrixU: Returns the matrix U.
determinant: The determinant of the matrix.
logAbsDeterminant: The natural log of the absolute value of the determinant of the matrix of which this is the QR decomposition. This method is useful to work around the risk of overflow/underflow that is inherent to the determinant computation.
absDeterminant: The absolute value of the determinant of the matrix of which *this is the QR decomposition. A determinant can be very big or small, so for matrices of large enough dimension there is a risk of overflow/underflow. One way to work around that is to use logAbsDeterminant instead.
signDeterminant: A number representing the sign of the determinant.

Data.Eigen.LA

    Decomposition         | Requirements on the matrix        | Speed | Accuracy | Rank | Kernel | Image
    ----------------------+-----------------------------------+-------+----------+------+--------+------
    PartialPivLU          | Invertible                        | ++    | +        | -    | -      | -
    FullPivLU             | None                              | -     | +++      | +    | +      | +
    HouseholderQR         | None                              | ++    | +        | -    | -      | -
    ColPivHouseholderQR   | None                              | +     | ++       | +    | -      | -
    FullPivHouseholderQR  | None                              | -     | +++      | +    | -      | -
    LLT                   | Positive definite                 | +++   | +        | -    | -      | -
    LDLT                  | Positive or negative semidefinite | +++   | ++       | -    | -      | -
    JacobiSVD             | None                              | -     | +++      | +    | -      | -

The best way to do least squares solving for square matrices is with a SVD decomposition (JacobiSVD).

PartialPivLU: LU decomposition of a matrix with partial pivoting.
FullPivLU: LU decomposition of a matrix with complete pivoting.
HouseholderQR: Householder QR decomposition of a matrix.
ColPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with column-pivoting.
FullPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with full pivoting.
LLT: Standard Cholesky decomposition (LL^T) of a matrix.
LDLT: Robust Cholesky decomposition of a matrix with pivoting.
JacobiSVD: Two-sided Jacobi SVD decomposition of a rectangular matrix.
solve: `x = solve d a b` finds a solution x of the equation `a x = b` using decomposition d.
relativeError: `e = relativeError x a b` computes `norm (a x - b) / norm b`, where norm is the L2 norm.
rank: The rank of the matrix.
kernel: Return a matrix whose columns form a basis of the null-space of A.
image: Return a matrix whose columns form a basis of the column-space of A.
linearRegression: `(coeffs, error) = linearRegression points` computes a multiple linear regression `y = a1 x1 + a2 x2 + ... + an xn + b`. The point format is `[y, x1..xn]`, the coeffs format is `[b, a1..an]`, and error is calculated using relativeError.

    import Data.Eigen.LA
    main = print $ linearRegression [
        [-4.32, 3.02, 6.89],
        [-3.79, 2.01, 5.39],
        [-4.01, 2.41, 6.01],
        [-3.86, 2.09, 5.55],
        [-4.10, 2.58, 6.32]]

produces the following output:

    ([-2.3466569233817127,-0.2534897541434826,-0.1749653335680988],1.8905965120153139e-3)

Package: eigen-2.1.3. Modules: Data.Eigen.Matrix, Data.Eigen.Matrix.Mutable, Data.Eigen.Parallel, Data.Eigen.SparseMatrix, Data.Eigen.SparseLA, Data.Eigen.LA, Data.Eigen.Internal.
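A sketch of solve and relativeError together, using one of the decompositions listed above on a small dense system:

```haskell
import qualified Data.Eigen.Matrix as M
import Data.Eigen.LA

main :: IO ()
main = do
    let a = M.fromList [[1, 2], [3, 4]] :: M.MatrixXd
        b = M.fromList [[5], [6]]
        -- rank-revealing QR: a good general-purpose choice per the table above
        x = solve ColPivHouseholderQR a b
    print x
    print (relativeError x a b)  -- near zero for an exactly solvable system
```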