eigen-2.1.2: Haskell bindings for the Eigen C++ library (http://eigen.tuxfamily.org)

Data.Eigen.Internal

CComplex: Complex number for FFI with the same memory layout as std::complex<T>.

Data.Eigen.Matrix.Mutable

MMatrixXcd: Alias for a double-precision mutable matrix of complex numbers.
MMatrixXcf: Alias for a single-precision mutable matrix of complex numbers.
MMatrixXd: Alias for a double-precision mutable matrix.
MMatrixXf: Alias for a single-precision mutable matrix.
MMatrix: Mutable matrix. You can modify its elements.
valid: Verify matrix dimensions and memory layout.
new: Create a mutable matrix of the given size and fill it with 0 as the initial value.
replicate: Create a mutable matrix of the given size and fill it with the given value as the initial value.
set: Set all elements of the matrix to the given value.
copy: Copy a matrix. The two matrices must have the same size and may not overlap.
read: Yield the element at the given position.
write: Replace the element at the given position.
unsafeCopy: Copy a matrix. The two matrices must have the same size and may not overlap; no bounds checks are performed, so it may SEGFAULT for incorrect input.
unsafeRead: Yield the element at the given position. No bounds checks are performed.
unsafeWrite: Replace the element at the given position. No bounds checks are performed.
unsafeWith: Pass a pointer to the matrix's data to the IO action. Modifying the data through the pointer is unsafe if the matrix could have been frozen before the modification.

Data.Eigen.Parallel

setNbThreads: Sets the max number of threads reserved for Eigen.
getNbThreads: Gets the max number of threads reserved for Eigen.

Data.Eigen.Matrix

MatrixXcd: Alias for a double-precision matrix of complex numbers.
MatrixXcf: Alias for a single-precision matrix of complex numbers.
MatrixXd: Alias for a double-precision matrix.
MatrixXf: Alias for a single-precision matrix.
Matrix: Matrix to be used in pure computations. Uses column-major memory layout and features copy-free FFI with the C++ Eigen library (http://eigen.tuxfamily.org).
empty: Empty 0x0 matrix.
null: Is the matrix empty?
square: Is the matrix square?
constant: Matrix where all coefficients are filled with the given value.
zero: Matrix where all coefficients are 0.
ones: Matrix where all coefficients are 1.
identity: The identity matrix (not necessarily square).
random: A random matrix of the given size.
rows: Number of rows for the matrix.
cols: Number of columns for the matrix.
dims: Matrix size as a (rows, cols) pair.
(!): Matrix coefficient at a specific row and col.
coeff: Matrix coefficient at a specific row and col.
unsafeCoeff: Unsafe version of coeff. No bounds check is performed, so a SEGFAULT is possible.
col: List of coefficients for the given col.
row: List of coefficients for the given row.
block: Extract a rectangular block from the matrix, defined by startRow startCol blockRows blockCols.
valid: Verify matrix dimensions and memory layout.
maxCoeff: The maximum coefficient of the matrix.
minCoeff: The minimum coefficient of the matrix.
topRows: Top N rows of the matrix.
bottomRows: Bottom N rows of the matrix.
leftCols: Left N columns of the matrix.
rightCols: Right N columns of the matrix.
fromList: Construct a matrix from a list of rows; the column count is detected as the maximum row length. Missing values are filled with 0.
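The constructors and accessors above compose as in the following sketch. It assumes the argument orders implied by the documentation (coeff takes the row and col before the matrix, block takes startRow startCol blockRows blockCols); the exact signatures in eigen-2.1.2 may differ slightly.

    import qualified Data.Eigen.Matrix as M

    main :: IO ()
    main = do
        -- fromList pads short rows with 0, so this is a 2x3 matrix
        let a = M.fromList [[1,2,3],[4,5]] :: M.MatrixXf
        print (M.dims a)           -- (2,3)
        print (M.coeff 1 2 a)      -- 0.0, the padded element
        print (M.block 0 1 2 2 a)  -- 2x2 block starting at row 0, col 1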
toList: Convert the matrix to a list of rows.
generate: generate rows cols (\row col -> val) creates a matrix using the generator function f row col = val.
sum: The sum of all coefficients of the matrix.
prod: The product of all coefficients of the matrix.
mean: The mean of all coefficients of the matrix.
trace: The trace of a matrix is the sum of the diagonal coefficients; it can also be computed as sum (diagonal m).
all: Applied to a predicate and a matrix, determines whether all elements of the matrix satisfy the predicate.
any: Applied to a predicate and a matrix, determines whether any element of the matrix satisfies the predicate.
count: Returns the number of coefficients in a given matrix for which the predicate evaluates to true.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases it is the square root of the sum of the squares of all matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases it is the sum of the squares of all matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
hypotNorm: The l2 norm of the matrix avoiding underflow and overflow. This version uses a concatenation of hypot calls and is very slow.
determinant: The determinant of the matrix.
add: Add two matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtract two matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication. You can use the (*) function as well.
map: Apply a given function to each element of the matrix. Here is an example of how to implement scalar matrix multiplication:

    >>> let a = fromList [[1,2],[3,4]] :: MatrixXf
    >>> a
    Matrix 2x2
    1.0 2.0
    3.0 4.0
    >>> map (*10) a
    Matrix 2x2
    10.0 20.0
    30.0 40.0

imap: Apply a given function, which also receives the row and column indices, to each element of the matrix. Here is an example of how an upper triangular matrix can be obtained:

    >>> let a = fromList [[1,2,3],[4,5,6],[7,8,9]] :: MatrixXf
    >>> a
    Matrix 3x3
    1.0 2.0 3.0
    4.0 5.0 6.0
    7.0 8.0 9.0
    >>> imap (\row col val -> if row <= col then val else 0) a
    Matrix 3x3
    1.0 2.0 3.0
    0.0 5.0 6.0
    0.0 0.0 9.0

upperTriangle: Upper triangle of the matrix.
lowerTriangle: Lower triangle of the matrix.
filter: Filter elements in the matrix. Filtered elements are replaced by 0.
ifilter: Filter elements in the matrix using a predicate that also receives the row and column indices. Filtered elements are replaced by 0.
fold: Reduce the matrix using a user-provided function applied to each element.
fold': Reduce the matrix using a user-provided function applied to each element. This is the strict version of fold.
ifold: Reduce the matrix using a user-provided function applied to each element and its index.
ifold': Reduce the matrix using a user-provided function applied to each element and its index. This is the strict version of ifold.
fold1: Reduce the matrix using a user-provided function applied to each element.
fold1': Reduce the matrix using a user-provided function applied to each element. This is the strict version of fold1.
diagonal: Diagonal of the matrix.
inverse: Inverse of the matrix. For small fixed sizes up to 4x4, this method uses cofactors; in the general case it uses PartialPivLU decomposition.
adjoint: Adjoint of the matrix.
transpose: Transpose of the matrix.
conjugate: Conjugate of the matrix.
normalize: Normalize the matrix by dividing it by its norm.
modify: Apply a destructive operation to a matrix. The operation will be performed in place if it is safe to do so, and will modify a copy of the matrix otherwise.
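As a small illustration of the reduction functions above, the sketch below computes the sum of squares of all coefficients with the strict fold. It assumes fold' takes the accumulator function, the initial value, and then the matrix, in that order.

    import qualified Data.Eigen.Matrix as M

    -- Sum of squares of all coefficients, written with the strict fold.
    sumOfSquares :: M.MatrixXd -> Double
    sumOfSquares = M.fold' (\acc x -> acc + x * x) 0

    main :: IO ()
    main = print $ sumOfSquares (M.fromList [[1,2],[3,4]])  -- prints 30.0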
convert: Convert the matrix to a different type using a user-provided element converter.
freeze: Yield an immutable copy of the mutable matrix.
thaw: Yield a mutable copy of the immutable matrix.
unsafeFreeze: Unsafely convert a mutable matrix to an immutable one without copying. The mutable matrix may not be used after this operation.
unsafeThaw: Unsafely convert an immutable matrix to a mutable one without copying. The immutable matrix may not be used after this operation.
unsafeWith: Pass a pointer to the matrix's data to the IO action. The data may not be modified through the pointer.
encode: Encode the matrix as a lazy byte string.
decode: Decode a matrix from the lazy byte string.
Num instance: Shortcuts for basic matrix math.
Show instance: Pretty prints the matrix.
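A sketch of the serialization round trip described above, assuming encode and decode work on lazy ByteStrings as documented. The matrices are compared via toList, since no Eq instance is documented above.

    import qualified Data.Eigen.Matrix as M

    main :: IO ()
    main = do
        let a     = M.fromList [[1,2],[3,4]] :: M.MatrixXd
            bytes = M.encode a                 -- lazy ByteString
            b     = M.decode bytes :: M.MatrixXd
        print (M.toList b == M.toList a)       -- True if the round trip is lossless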
Data.Eigen.SparseMatrix

SparseMatrixXcd: Alias for a double-precision sparse matrix of complex numbers.
SparseMatrixXcf: Alias for a single-precision sparse matrix of complex numbers.
SparseMatrixXd: Alias for a double-precision sparse matrix.
SparseMatrixXf: Alias for a single-precision sparse matrix.
SparseMatrix: A versatile sparse matrix representation.

This type implements a more versatile variant of the common compressed row/column storage format. Each column's (resp. row's) non-zeros are stored as pairs of value and associated row (resp. column) index. All the non-zeros are stored in a single large buffer. Unlike the compressed format, there may be extra space between the non-zeros of two successive columns (resp. rows), so that a new non-zero can be inserted with limited memory reallocation and copying.

The results of Eigen's operations always produce compressed sparse matrices. On the other hand, inserting a new element into a SparseMatrix converts it to the uncompressed mode.

A call to compress turns the matrix into the standard compressed format, which is compatible with many libraries.

Implementation details of SparseMatrix are intentionally hidden behind a ForeignPtr, because Eigen does not provide a mapping over plain data for sparse matrices.

For more information please see the Eigen documentation page: http://eigen.tuxfamily.org/dox/classEigen_1_1SparseMatrix.html

rows: Number of rows for the sparse matrix.
cols: Number of columns for the sparse matrix.
(!): Matrix coefficient at the given row and col.
coeff: Matrix coefficient at the given row and col.
norm: For vectors, the l2 norm, and for matrices the Frobenius norm. In both cases it is the square root of the sum of the squares of all matrix entries. For vectors, this also equals the square root of the dot product of the vector with itself.
squaredNorm: For vectors, the squared l2 norm, and for matrices the squared Frobenius norm. In both cases it is the sum of the squares of all matrix entries. For vectors, this also equals the dot product of the vector with itself.
blueNorm: The l2 norm of the matrix using Blue's algorithm: A Portable Fortran Program to Find the Euclidean Norm of a Vector, ACM TOMS, Vol 4, Issue 1, 1978.
block: Extract a rectangular block from the sparse matrix, defined by startRow startCol blockRows blockCols.
nonZeros: Number of non-zero elements in the sparse matrix.
compress: Turns the matrix into the compressed format.
uncompress: Not exposed currently.
compressed: Not exposed currently.
innerSize: Minor dimension with respect to the storage order.
outerSize: Major dimension with respect to the storage order.
pruned: Suppresses all non-zeros which are much smaller than the reference under the tolerance epsilon.
scale: Multiply the matrix by a given scalar.
transpose: Transpose of the sparse matrix.
adjoint: Adjoint of the sparse matrix.
add: Add two sparse matrices by adding the corresponding entries together. You can use the (+) function as well.
sub: Subtract two sparse matrices by subtracting the corresponding entries. You can use the (-) function as well.
mul: Matrix multiplication. You can use the (*) function as well.
fromList: Construct a sparse matrix of the given size from a list of triplets (row, col, val).
toList: Convert the sparse matrix to a list of triplets (row, col, val). Compressed elements will not be included.
fromDenseList: Construct a sparse matrix from a two-dimensional list of values. Matrix dimensions are detected automatically. Zero values are compressed.
toDenseList: Convert the sparse matrix to a (rows x cols) dense list of values.
fromMatrix: Construct a sparse matrix from a dense matrix. Zero elements are compressed.
toMatrix: Construct a dense matrix from the sparse matrix.
encode: Encode the sparse matrix as a lazy byte string.
decode: Decode a sparse matrix from the lazy byte string.
Num instance: Shortcuts for basic matrix math.
Show instance: Pretty prints the sparse matrix.
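A short sketch of building a sparse matrix from triplets and moving to the dense representations. It assumes fromList takes the number of rows, the number of columns, and the triplet list, in that order; the exact signature in eigen-2.1.2 may differ.

    import qualified Data.Eigen.SparseMatrix as S

    main :: IO ()
    main = do
        -- 3x3 sparse matrix with three non-zero (row, col, val) entries
        let m = S.fromList 3 3 [(0,0,1), (1,2,5), (2,1,7)] :: S.SparseMatrixXd
        print (S.nonZeros m)     -- 3
        print (S.toDenseList m)  -- [[1.0,0.0,0.0],[0.0,0.0,5.0],[0.0,7.0,0.0]]
        print (S.toMatrix m)     -- the same contents as a dense MatrixXd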
Data.Eigen.SparseLA

ComputationInfo:
  Success: Computation was successful.
  NumericalIssue: The provided data did not satisfy the prerequisites.
  NoConvergence: Iterative procedure did not converge.
  InvalidInput: The inputs are invalid, or the algorithm has been improperly called. When assertions are enabled, such errors trigger an assert.

ConjugateGradient: A conjugate gradient solver for sparse self-adjoint problems. It solves A.x = b sparse linear problems using a conjugate gradient algorithm; the sparse matrix A must be self-adjoint. The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations and epsilon for the tolerance.

BiCGSTAB: A bi-conjugate gradient stabilized solver for sparse square problems. It solves A.x = b sparse linear problems using a bi-conjugate gradient stabilized algorithm; the vectors x and b can be either dense or sparse. The maximal number of iterations and the tolerance value can be controlled via the setMaxIterations and setTolerance methods. The defaults are the size of the problem for the maximal number of iterations and epsilon for the tolerance.

SparseLU: Sparse supernodal LU factorization for general matrices. This class implements the supernodal LU factorization for general matrices. It uses the main techniques from the sequential SuperLU package (http://crd-legacy.lbl.gov/~xiaoye/SuperLU/). It transparently handles real and complex arithmetic with single and double precision, depending on the scalar type of your input matrix. The code has been optimized to provide BLAS-3 operations during supernode-panel updates. It benefits directly from the built-in high-performance Eigen BLAS routines. Moreover, when the size of a supernode is very small, the BLAS calls are avoided to enable better optimization by the compiler. For best performance, you should compile it with the NDEBUG flag to avoid the numerous bounds checks on vectors.

SparseQR: Sparse left-looking rank-revealing QR factorization. This class implements a left-looking rank-revealing QR decomposition of sparse matrices. When a column has a norm less than a given tolerance it is implicitly permuted to the end. The QR factorization thus obtained is given by A*P = Q*R, where R is upper triangular or trapezoidal, P is the column permutation which is the product of the fill-reducing and the rank-revealing permutations, and Q is the orthogonal matrix represented as products of Householder reflectors. R is the sparse triangular or trapezoidal matrix; the latter occurs when A is rank-deficient.

analyzePattern: Initializes the iterative solver for the sparsity pattern of the matrix A for further solving Ax=b problems.
factorize: Initializes the iterative solver with the numerical values of the matrix A for further solving Ax=b problems.
compute: Initializes the iterative solver with the matrix A for further solving Ax=b problems. The compute method is equivalent to calling both analyzePattern and factorize.
tolerance: The tolerance threshold used by the stopping criteria.
setTolerance: Sets the tolerance threshold used by the stopping criteria. This value is used as an upper bound to the relative residual error |Ax-b|/|b|. The default value is the machine precision given by epsilon.
maxIterations: The max number of iterations. It is either the value set by setMaxIterations or, by default, twice the number of columns of the matrix.
setMaxIterations: Sets the max number of iterations. The default is twice the number of columns of the matrix.
solve: An expression of the solution x of A x = b using the current decomposition of A.
info: Success if the iterations converged, and NoConvergence otherwise.
error: The tolerance error reached during the last solve. It is a close approximation of the true relative residual error |Ax-b|/|b|.
iterations: The number of iterations performed during the last solve.

Data.Eigen.LA

Decomposition:

    Decomposition          Requirements on the matrix           Speed  Accuracy  Rank  Kernel  Image
    PartialPivLU           Invertible                           ++     +         -     -       -
    FullPivLU              None                                 -      +++       +     +       +
    HouseholderQR          None                                 ++     +         -     -       -
    ColPivHouseholderQR    None                                 +      ++        +     -       -
    FullPivHouseholderQR   None                                 -      +++       +     -       -
    LLT                    Positive definite                    +++    +         -     -       -
    LDLT                   Positive or negative semidefinite    +++    ++        -     -       -
    JacobiSVD              None                                 -      +++       +     -       -

The best way to do least-squares solving for square matrices is with an SVD decomposition (JacobiSVD).

PartialPivLU: LU decomposition of a matrix with partial pivoting.
FullPivLU: LU decomposition of a matrix with complete pivoting.
HouseholderQR: Householder QR decomposition of a matrix.
ColPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with column-pivoting.
FullPivHouseholderQR: Householder rank-revealing QR decomposition of a matrix with full pivoting.
LLT: Standard Cholesky decomposition (LL^T) of a matrix.
LDLT: Robust Cholesky decomposition of a matrix with pivoting.
JacobiSVD: Two-sided Jacobi SVD decomposition of a rectangular matrix.

solve: x = solve d a b finds a solution x of the equation a x = b using the decomposition d.
relativeError: e = relativeError x a b computes norm (a x - b) / norm b, where norm is the L2 norm.
rank: The rank of the matrix.
kernel: Returns a matrix whose columns form a basis of the null space of A.
image: Returns a matrix whose columns form a basis of the column space of A.
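Putting solve and relativeError together, a minimal sketch (using the documented JacobiSVD decomposition and the MatrixXd alias) that solves a 2x2 system and checks the residual:

    import qualified Data.Eigen.LA as LA
    import qualified Data.Eigen.Matrix as M

    main :: IO ()
    main = do
        let a = M.fromList [[2,1],[1,3]] :: M.MatrixXd
            b = M.fromList [[5],[10]]   :: M.MatrixXd
            -- x = solve d a b, as documented above; the exact solution here is [1, 3]
            x = LA.solve LA.JacobiSVD a b
        print x
        print (LA.relativeError x a b)  -- close to 0 for a good solution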
linearRegression: (coeffs, error) = linearRegression points computes a multiple linear regression y = a1 x1 + a2 x2 + ... + an xn + b.

    point format is [y, x1..xn]
    coeffs format is [b, a1..an]
    error is calculated using relativeError

For example,

    import Data.Eigen.LA
    main = print $ linearRegression [
        [-4.32, 3.02, 6.89],
        [-3.79, 2.01, 5.39],
        [-4.01, 2.41, 6.01],
        [-3.86, 2.09, 5.55],
        [-4.10, 2.58, 6.32]]

produces the following output:

    ([-2.3466569233817127,-0.2534897541434826,-0.1749653335680988],1.8905965120153139e-3)