eigen-1.1.2 API documentation

Haskell bindings to the Eigen C++ linear algebra library.
Modules: Data.Eigen.Internal (raw FFI bindings), Data.Eigen.Matrix.Mutable,
Data.Eigen.Parallel, Data.Eigen.Matrix, Data.Eigen.LA.

Data.Eigen.Parallel

  initParallel  Must be called first when calling Eigen from multiple threads.
  setNbThreads  Sets the maximum number of threads reserved for Eigen.

Data.Eigen.Matrix.Mutable

  MMatrix: mutable matrix whose elements can be modified in place.
  IOMatrix and STMatrix specialize it to the IO and ST monads.

  new    Creates a mutable matrix of the given dimensions. Elements are
         initialized with 0.
  set    Sets all elements of the matrix to the given value.
  read   Yields the element at the given position.
  write  Replaces the element at the given position.

Data.Eigen.Matrix

  Matrix: constant (immutable) matrix type to be used in pure computations.
  Uses the same column-major memory layout as Eigen's MatrixXd.

  Construction
    empty     Empty 0x0 matrix.
    constant  Matrix whose coefficients are all filled with the given value.
    zero      Matrix whose coefficients are all 0.
    ones      Matrix whose coefficients are all 1.
    identity  Square matrix with 1 on the main diagonal and 0 elsewhere.
    fromList  Constructs a matrix from a list of rows; the column count is
              detected as the maximum row length.
    generate  Creates a matrix using a generator function f :: row -> col -> val.

  Access
    rows, cols           Number of rows / columns of the matrix.
    coeff                Matrix coefficient at the given row and column.
    row, col             List of coefficients of the given row / column.
    block                Extracts the rectangular block defined by startRow,
                         startCol, blockRows and blockCols.
    topRows, bottomRows  Top / bottom n rows of the matrix.
    leftCols, rightCols  Left / right n columns of the matrix.
    maxCoeff, minCoeff   Maximum / minimum of all coefficients of the matrix.
    toList               Converts the matrix to a list of its rows.

  Operations
    norm         For vectors, the L2 norm; for matrices, the Frobenius norm.
                 In both cases it is the square root of the sum of the squares
                 of all matrix entries. For vectors, this also equals the
                 square root of the dot product of the vector with itself.
    squaredNorm  For vectors, the squared L2 norm; for matrices, the squared
                 Frobenius norm: the sum of the squares of all matrix entries.
                 For vectors, this also equals the dot product of the vector
                 with itself.
    determinant  The determinant of the matrix.
    add          Returns a + b.
    sub          Returns a - b.
    mul          Returns a * b.
    inverse      Inverse of the matrix. For small fixed sizes up to 4x4 this
                 method uses cofactors; in the general case it uses
                 PartialPivLU.
    adjoint      Adjoint of the matrix.
    transpose    Transpose of the matrix.
    conjugate    Conjugate of the matrix.
    normalize    Normalizes the matrix by dividing it by its norm.

  Mutable interoperation
    modify        Applies a destructive operation to a matrix. The operation
                  is performed in place if it is safe to do so, and modifies
                  a copy of the matrix otherwise.
    freeze        Yields an immutable copy of a mutable matrix.
    thaw          Yields a mutable copy of an immutable matrix.
    unsafeFreeze  Unsafely converts a mutable matrix to an immutable one
                  without copying. The mutable matrix may not be used after
                  this operation.
    unsafeThaw    Unsafely converts an immutable matrix to a mutable one
                  without copying. The immutable matrix may not be used after
                  this operation.

  Instances
    Num   Only the following functions are defined: (*), (+), (-).
    Show  Pretty-prints the matrix.

Data.Eigen.LA

  Decomposition          Requirements on the matrix          Speed  Accuracy
  PartialPivLU           Invertible                          ++     +
  FullPivLU              None                                -      +++
  HouseholderQR          None                                ++     +
  ColPivHouseholderQR    None                                +      ++
  FullPivHouseholderQR   None                                -      +++
  LLT                    Positive definite                   +++    +
  LDLT                   Positive or negative semidefinite   +++    ++
  JacobiSVD              None                                -      +++

  The best way to do least-squares solving for square matrices is with an SVD
  decomposition (JacobiSVD).

  JacobiSVD             Two-sided Jacobi SVD decomposition of a rectangular
                        matrix.
  LDLT                  Robust Cholesky decomposition of a matrix with
                        pivoting.
  LLT                   Standard Cholesky decomposition (LL^T) of a matrix.
  FullPivHouseholderQR  Householder rank-revealing QR decomposition of a
                        matrix with full pivoting.
  ColPivHouseholderQR   Householder rank-revealing QR decomposition of a
                        matrix with column pivoting.
  HouseholderQR         Householder QR decomposition of a matrix.
  FullPivLU             LU decomposition of a matrix with complete pivoting.
  PartialPivLU          LU decomposition of a matrix with partial pivoting.

  solve             x = solve d a b finds a solution x of the equation
                    a x = b using decomposition d.
  relativeError     e = relativeError x a b computes norm (a x - b) / norm b,
                    where norm is the L2 norm.
  linearRegression  (coeffs, error) = linearRegression points computes a
                    multiple linear regression
                    y = a1 x1 + a2 x2 + ... + an xn + b
                    using the ColPivHouseholderQR decomposition. Each point
                    has the format [y, x1..xn], coeffs has the format
                    [b, a1..an], and error is calculated using relativeError.

                        import Data.Eigen.LA
                        main = print $ linearRegression [
                            [-4.32, 3.02, 6.89],
                            [-3.79, 2.01, 5.39],
                            [-4.01, 2.41, 6.01],
                            [-3.86, 2.09, 5.55],
                            [-4.10, 2.58, 6.32]]

                    produces the following output:

                        ([-2.3466569233817127,-0.2534897541434826,-0.1749653335680988],1.8905965120153139e-3)
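The solve / relativeError pair described above can be sketched as follows. This is a minimal sketch, not package-verified code: it assumes the eigen-1.1.2 signatures listed in this documentation (fromList taking a list of rows, solve :: Decomposition -> Matrix -> Matrix -> Matrix, and relativeError x a b), and the 3x3 system used here is an arbitrary illustrative example.

```haskell
import qualified Data.Eigen.Matrix as M
import Data.Eigen.LA

main :: IO ()
main = do
    -- An invertible 3x3 system a x = b, chosen for illustration only.
    let a = M.fromList [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
        b = M.fromList [[3], [3], [4]]
        -- PartialPivLU is the fast choice from the table above; it
        -- requires the matrix to be invertible.
        x = solve PartialPivLU a b
    print x
    -- norm (a x - b) / norm b; expected to be near 0 for this system.
    print $ relativeError x a b
```

Swapping PartialPivLU for JacobiSVD trades speed for accuracy per the decomposition table, without changing the call shape.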
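The thaw / write / freeze round trip between immutable and mutable matrices can be sketched as below. This is a hedged sketch assuming the API surface listed in this documentation; in particular it assumes write takes the matrix, a row index, a column index and a value, in that order, which is not confirmed by the docstrings above.

```haskell
import qualified Data.Eigen.Matrix as M
import qualified Data.Eigen.Matrix.Mutable as MM

main :: IO ()
main = do
    let m = M.zero 2 2       -- immutable 2x2 matrix of zeros
    mm <- M.thaw m           -- mutable copy of the immutable matrix
    MM.write mm 0 0 3.14     -- replace the element at row 0, column 0
    m' <- M.freeze mm        -- immutable copy of the mutable matrix
    print m'                 -- Show instance pretty-prints the matrix
```

When the original matrix is no longer needed, unsafeThaw / unsafeFreeze avoid the copies, at the cost of the aliasing restrictions described above.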