sparse-linear-algebra-0.2.9.4: Numerical computation in native Haskell

Safe Haskell: None
Language: Haskell2010

Numeric.LinearAlgebra.Sparse


Description

This module exposes the high-level functionality of the library.


Linear solvers

Iterative methods

(<\>) :: (LinearSystem v, MonadIO m, MonadThrow m) => MatrixType v -> v -> m v Source #

Solve a linear system; GMRES is used internally as the default method.
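For example, a minimal sketch (assuming the SpVector Double / SpMatrix Double instances provided by the library and running in IO; the matrix and vector entries are illustrative):

import Numeric.LinearAlgebra.Sparse

-- Solve A x = b for a small sparse system with the default (GMRES) method.
main :: IO ()
main = do
  let aa = fromListSM (3, 3)
             [ (0,0,4), (0,1,1)
             , (1,0,1), (1,1,3), (1,2,1)
             , (2,1,1), (2,2,2) ] :: SpMatrix Double
      b  = fromListSV 3 [(0, 1), (1, 2), (2, 3)] :: SpVector Double
  x <- aa <\> b   -- IO satisfies both MonadIO and MonadThrow
  prd x           -- pretty-print the solution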

Moore-Penrose pseudoinverse

pinv :: (MatrixType v ~ SpMatrix a, LinearSystem v, Epsilon a, MonadThrow m, MonadIO m) => SpMatrix a -> v -> m v Source #

Least-squares approximation of a rectangular system of equations.
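A sketch of a least-squares use on a hypothetical overdetermined 3-by-2 system (entries are illustrative; Double instances assumed):

import Numeric.LinearAlgebra.Sparse

-- Least-squares solution of an overdetermined system via the pseudoinverse.
leastSquares :: IO ()
leastSquares = do
  let aa = fromListSM (3, 2) [(0,0,1), (1,0,1), (1,1,1), (2,1,1)] :: SpMatrix Double
      b  = fromListSV 3 [(0, 1), (1, 2), (2, 3)] :: SpVector Double
  x <- pinv aa b
  prd x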

Preconditioners

jacobiPre :: Fractional a => SpMatrix a -> SpMatrix a Source #

The Jacobi preconditioner is just the reciprocal of the diagonal

ilu0 Source #

Arguments

:: (Scalar (SpVector t) ~ t, Elt t, VectorSpace (SpVector t), Epsilon t, MonadThrow m) 
=> SpMatrix t 
-> m (SpMatrix t, SpMatrix t)

L, U (with holes)

Incomplete LU factorization: entries of the output factors that correspond to zero entries of the input matrix are removed (this is known as ILU(0) in the preconditioner literature).

mSsor Source #

Arguments

:: (MatrixRing (SpMatrix b), Fractional b) 
=> SpMatrix b 
-> b

relaxation factor

-> (SpMatrix b, SpMatrix b)

Left, right factors

Symmetric Successive Over-Relaxation. `mSsor aa omega`: with `omega = 1` this returns the symmetric Gauss-Seidel preconditioner; `omega > 1` corresponds to over-relaxation and `omega < 1` to under-relaxation.
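A sketch of building the three preconditioners for the same matrix (assuming Double instances; ilu0 runs in a MonadThrow such as IO, while jacobiPre and mSsor are pure):

import Numeric.LinearAlgebra.Sparse

-- Build a few preconditioners for one system matrix and print them.
preconditioners :: SpMatrix Double -> IO ()
preconditioners aa = do
  (l, u) <- ilu0 aa               -- incomplete LU factors, ILU(0)
  let dInv     = jacobiPre aa     -- reciprocal of the diagonal
      (pl, pr) = mSsor aa 1.0     -- omega = 1 : symmetric Gauss-Seidel
  prd l
  prd u
  prd dInv
  prd pl
  prd pr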

Direct methods

luSolve Source #

Arguments

:: (Scalar (SpVector t) ~ t, MonadThrow m, Elt t, InnerSpace (SpVector t), Epsilon t) 
=> SpMatrix t

Lower triangular

-> SpMatrix t

Upper triangular

-> SpVector t

r.h.s.

-> m (SpVector t) 

Direct solver based on a triangular factorization of the system matrix.
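A sketch of the factor-then-solve pipeline (assuming a matrix with a nonzero diagonal and Double instances):

import Numeric.LinearAlgebra.Sparse

-- Factor once with `lu`, then reuse the triangular factors with `luSolve`.
solveViaLU :: SpMatrix Double -> SpVector Double -> IO (SpVector Double)
solveViaLU aa b = do
  (l, u) <- lu aa   -- Doolittle LU; throws if a diagonal entry is zero
  luSolve l u b     -- forward + backward substitution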

Forward substitution

triLowerSolve Source #

Arguments

:: (Scalar (SpVector t) ~ t, Elt t, InnerSpace (SpVector t), Epsilon t, MonadThrow m) 
=> SpMatrix t

Lower triangular

-> SpVector t

r.h.s.

-> m (SpVector t) 

Forward substitution solver

Backward substitution

triUpperSolve Source #

Arguments

:: (Scalar (SpVector t) ~ t, Elt t, InnerSpace (SpVector t), Epsilon t, MonadThrow m) 
=> SpMatrix t

Upper triangular

-> SpVector t

r.h.s.

-> m (SpVector t) 

Backward substitution solver
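The two substitution steps can also be invoked separately; a sketch, assuming l and u are triangular factors of the system matrix:

import Numeric.LinearAlgebra.Sparse

-- Solve L (U x) = b in two explicit substitution steps.
twoStepSolve :: SpMatrix Double -> SpMatrix Double -> SpVector Double
             -> IO (SpVector Double)
twoStepSolve l u b = do
  y <- triLowerSolve l b   -- forward substitution: L y = b
  triUpperSolve u y        -- backward substitution: U x = y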

Eigensolvers

eigsQR Source #

Arguments

:: (MonadThrow m, MonadIO m, Num' a, Normed (SpVector a), MatrixRing (SpMatrix a), Typeable (Magnitude (SpVector a))) 
=> Int 
-> Bool

Print debug information

-> SpMatrix a

Operand matrix

-> m (SpVector a)

Eigenvalues {λ_i}

`eigsQR n debug mm` performs n iterations of the QR algorithm on the matrix mm (printing debug information if debug is True), and returns an SpVector containing all eigenvalues.
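For example (a sketch; the iteration count is illustrative and may need to be larger for convergence):

import Numeric.LinearAlgebra.Sparse

-- 50 QR iterations, without debug output.
spectrumOf :: SpMatrix Double -> IO (SpVector Double)
spectrumOf aa = eigsQR 50 False aa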

eigsArnoldi Source #

Arguments

:: (Scalar (SpVector t) ~ t, MatrixType (SpVector t) ~ SpMatrix t, Elt t, V (SpVector t), MatrixRing (SpMatrix t), Epsilon t, MonadThrow m) 
=> Int 
-> SpMatrix t 
-> SpVector t 
-> m (SpMatrix t, SpMatrix t, SpVector t)

Q, O, {λ_i}

`eigsArnoldi n aa b` computes at most n iterations of the Arnoldi algorithm to find an order-n Krylov subspace of (A, b), whose basis is collected in Q, along with a Hessenberg matrix of coefficients H. It then computes the QR decomposition of H, denoted (O, R); the eigenvalues {λ_i} of A appear on the diagonal of the R factor.

Matrix factorization algorithms

QR

qr Source #

Arguments

:: (Elt a, MatrixRing (SpMatrix a), Epsilon a, MonadThrow m) 
=> SpMatrix a 
-> m (SpMatrix a, SpMatrix a)

Q, R

Given a matrix A, returns a pair of matrices (Q, R) such that Q R = A, where Q is orthogonal and R is upper triangular. Givens rotations are applied iteratively to zero out the sub-diagonal elements.
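A sketch of factorizing and inspecting the result (assuming Double instances):

import Numeric.LinearAlgebra.Sparse

-- QR-factorize and reconstruct the input for a visual check.
qrDemo :: SpMatrix Double -> IO ()
qrDemo aa = do
  (q, r) <- qr aa
  prd q
  prd r
  prd (q ## r)   -- should reproduce aa up to rounding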

LU

lu Source #

Arguments

:: (Scalar (SpVector t) ~ t, Elt t, VectorSpace (SpVector t), Epsilon t, MonadThrow m) 
=> SpMatrix t 
-> m (SpMatrix t, SpMatrix t)

L, U

Given a matrix A, returns a pair of matrices (L, U), where L is lower triangular and U is upper triangular, such that L U = A. Implements the Doolittle algorithm, which sets the diagonal of L to ones and requires all diagonal entries of A to be nonzero. Apply pivoting (row or column permutation) beforehand to enforce a nonzero diagonal of A; the algorithm throws an appropriate exception otherwise.

Cholesky

chol Source #

Arguments

:: (Elt a, Epsilon a, MonadThrow m) 
=> SpMatrix a 
-> m (SpMatrix a)

L

Given a positive semidefinite matrix A, returns a lower-triangular matrix L such that L L^T = A. This is an implementation of the Cholesky–Banachiewicz algorithm, i.e. it proceeds row by row from the upper-left corner. NB: the algorithm throws an exception if some diagonal element of A is zero.
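A sketch with a small symmetric positive definite matrix (entries are illustrative; Double instances assumed):

import Numeric.LinearAlgebra.Sparse

-- Cholesky factor of an SPD matrix; `l ##^ l` computes L L^T.
cholDemo :: IO ()
cholDemo = do
  let aa = fromListSM (3, 3)
             [ (0,0,4), (0,1,1)
             , (1,0,1), (1,1,3), (1,2,1)
             , (2,1,1), (2,2,2) ] :: SpMatrix Double
  l <- chol aa
  prd l
  prd (l ##^ l)   -- should reproduce aa up to rounding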

Arnoldi iteration

arnoldi Source #

Arguments

:: (MatrixType (SpVector a) ~ SpMatrix a, V (SpVector a), Scalar (SpVector a) ~ a, Epsilon a, MonadThrow m) 
=> SpMatrix a

System matrix

-> SpVector a

r.h.s.

-> Int

Max. # of iterations

-> m (SpMatrix a, SpMatrix a)

Q, H

Given a matrix A, a vector b and a positive integer n, this procedure finds the basis of an order-n Krylov subspace (as the columns of the matrix Q), along with an upper Hessenberg matrix H, such that Q^T A Q = H. At the i-th iteration it finds (i + 1) coefficients (the i-th column of the Hessenberg matrix H) and the (i + 1)-th Krylov vector.
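A sketch (the subspace order is illustrative):

import Numeric.LinearAlgebra.Sparse

-- A 5-dimensional Krylov basis of (A, b): Q holds the basis vectors as
-- columns, H is the upper Hessenberg matrix of coefficients.
krylovBasis :: SpMatrix Double -> SpVector Double
            -> IO (SpMatrix Double, SpMatrix Double)
krylovBasis aa b = arnoldi aa b 5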

Matrix partitioning

diagPartitions Source #

Arguments

:: SpMatrix a 
-> (SpMatrix a, SpMatrix a, SpMatrix a)

Subdiagonal, diagonal, superdiagonal partitions

Partition a matrix into strictly subdiagonal, diagonal and strictly superdiagonal parts
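For example:

import Numeric.LinearAlgebra.Sparse

-- Split a matrix into its three diagonal partitions and print them.
partitionsDemo :: SpMatrix Double -> IO ()
partitionsDemo aa = do
  let (lo, d, up) = diagPartitions aa
  prd lo
  prd d
  prd up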

Utilities

Givens' rotation

givens :: (Elt a, MonadThrow m) => SpMatrix a -> Int -> Int -> m (SpMatrix a) Source #

Givens method, row version: choose the other row index i' such that:

* i' is below the diagonal
* the corresponding element A(i', j) is nonzero

(QR.C1) To zero out entry A(i, j) we must find a row k such that A(k, j) is nonzero but A has zeros in row k for all columns less than j.

NB: the current version is quite inefficient, in that:

1. the Givens matrix G_i differs from the identity in only 4 entries;
2. at each iteration i we multiply G_i by the previous partial result M. Since this corresponds to a rotation, and the givensCoef function already computes the value of the resulting nonzero component (the output r), `G_i ## M` could be simplified by changing just two entries of M (i.e. zeroing one out and setting the other to r).

Condition number

conditionNumberSM :: (MonadThrow m, MatrixRing (SpMatrix a), Num' a, Typeable a) => SpMatrix a -> m a Source #

Matrix condition number: computes the QR factorization and extracts the extremal eigenvalues from the R factor

Householder reflection

hhRefl :: Num a => SpVector a -> SpMatrix a Source #

Householder reflection: a vector x uniquely defines the hyperplane orthogonal to it; the Householder operator reflects any point v through this hyperplane: v' = (I - 2 x >< x) v
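A sketch (the reflector is built from a normalized vector here, since the formula above assumes x has unit norm; entries are illustrative):

import Numeric.LinearAlgebra.Sparse

-- Reflect v through the hyperplane orthogonal to x; the reflection
-- preserves the Euclidean norm of v.
reflectDemo :: IO ()
reflectDemo = do
  let x  = normalize2 (fromListSV 3 [(0, 1), (1, 1)]) :: SpVector Double
      v  = fromListSV 3 [(0, 1), (1, 2), (2, 3)]      :: SpVector Double
      v' = hhRefl x #> v
  prd v'
  print (norm2 v, norm2 v')   -- equal up to rounding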

Creation and conversion of sparse data

From/to SpVector

fromListSV :: Foldable t => Int -> t (Int, a) -> SpVector a Source #

toListSV :: SpVector a -> [(Key, a)] Source #

From/to SpMatrix

fromListSM :: Foldable t => (Int, Int) -> t (IxRow, IxCol, a) -> SpMatrix a Source #

Create new SpMatrix using data from list (row, col, value)

toListSM :: SpMatrix t -> [(IxRow, IxCol, t)] Source #

Populate list with SpMatrix contents
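For example (indices are 0-based; only nonzero entries are listed, and the order of the returned association lists may vary):

import Numeric.LinearAlgebra.Sparse

-- Round-trip sparse data through list form.
roundTrip :: IO ()
roundTrip = do
  let v = fromListSV 4 [(0, 1.0), (3, 2.0)]    :: SpVector Double
      m = fromListSM (2, 3) [(0,0,1), (1,2,5)] :: SpMatrix Double
  print (toListSV v)   -- [(0,1.0),(3,2.0)]
  print (toListSM m)   -- [(0,0,1.0),(1,2,5.0)]
  prd m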

Packing and unpacking, rows or columns of a sparse matrix

", using lists as container

fromRowsL :: [SpVector a] -> SpMatrix a Source #

Pack a list of SpVectors as rows of an SpMatrix

toRowsL :: SpMatrix a -> [SpVector a] Source #

Unpack the rows of an SpMatrix into a list of SpVectors

fromColsL :: [SpVector a] -> SpMatrix a Source #

Pack a list of SpVectors as columns of an SpMatrix

toColsL :: SpMatrix a -> [SpVector a] Source #

Unpack the columns of an SpMatrix into a list of SpVectors

", using Vector as container

fromRowsV :: Vector (SpVector a) -> SpMatrix a Source #

Pack a V.Vector of SpVectors as rows of an SpMatrix

fromColsV :: Vector (SpVector a) -> SpMatrix a Source #

Pack a V.Vector of SpVectors as columns of an SpMatrix

Operators

(.*) :: VectorSpace v => Scalar v -> v -> v Source #

Scale a vector

(./) :: (VectorSpace v, Fractional (Scalar v)) => v -> Scalar v -> v Source #

Scale a vector by the reciprocal of a number (e.g. for normalization)

Inner product

(<.>) :: InnerSpace v => v -> v -> Scalar v #

Inner/dot product

Matrix-vector product

(#>) :: LinearVectorSpace v => MatrixType v -> v -> v Source #

Matrix-vector action

(<#) :: LinearVectorSpace v => v -> MatrixType v -> v Source #

Dual matrix-vector action

Matrix-matrix product

(##) :: MatrixRing m => m -> m -> m Source #

Matrix-matrix product

(#^#) :: MatrixRing m => m -> m -> m Source #

A^T B

(##^) :: MatrixRing m => m -> m -> m Source #

A B^T

Sparsifying matrix-matrix product

(#~#) :: (MatrixRing (SpMatrix a), Epsilon a) => SpMatrix a -> SpMatrix a -> SpMatrix a Source #

Removes from the result all elements x for which `|x| <= eps`

Vector outer product

(><) :: Num a => SpVector a -> SpVector a -> SpMatrix a Source #

Outer product (all-with-all matrix)
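A few of the operators at a glance (a sketch, assuming Double instances; values are illustrative):

import Numeric.LinearAlgebra.Sparse

opsDemo :: IO ()
opsDemo = do
  let v = fromListSV 2 [(0, 1), (1, 2)]        :: SpVector Double
      w = fromListSV 2 [(0, 3), (1, 4)]        :: SpVector Double
      m = fromListSM (2, 2) [(0,0,1), (1,1,1)] :: SpMatrix Double  -- identity
  prd (2 .* v)      -- scaling
  print (v <.> w)   -- inner product: 1*3 + 2*4 = 11
  prd (m #> v)      -- matrix-vector action
  prd (v >< w)      -- outer product (a 2 x 2 matrix)
  prd (m ## m)      -- matrix-matrix product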

Common operations

dim :: FiniteDim f => f a -> FDSize f Source #

Dimension (i.e. Int for SpVector, (Int, Int) for SpMatrix)

nnz :: HasData f a => f a -> Int Source #

Number of nonzeros

spy :: (Sparse f a, Fractional b) => f a -> b Source #

Sparsity (fraction of nonzero elements)

Vector-related

cvx :: (VectorSpace e, Num (Scalar e)) => Scalar e -> e -> e -> e Source #

Convex combination of two vectors (NB: the scalar coefficient a must satisfy 0 <= a <= 1).

Norms and normalization

norm :: (Normed v, Floating (Magnitude v)) => RealScalar v -> v -> Magnitude v Source #

Lp norm (p > 0)

norm2 :: (Normed v, Floating (Magnitude v)) => v -> Magnitude v Source #

Euclidean (L2) norm

norm2' :: (Normed v, Floating (Scalar v)) => v -> Scalar v Source #

Euclidean (L2) norm; returns a Complex (norm :+ 0) for Complex-valued vectors

normalize :: Normed v => RealScalar v -> v -> v Source #

Normalize w.r.t. Lp norm

normalize2 :: Normed v => v -> v Source #

Normalize w.r.t. L2 norm

normalize2' :: (Normed v, Floating (Scalar v)) => v -> v Source #

Normalize w.r.t. norm2' instead of norm2

norm1 :: Normed v => v -> Magnitude v Source #

L1 norm

hilbertDistSq :: InnerSpace v => v -> v -> Scalar v Source #

`hilbertDistSq x y = || x - y ||^2` computes the squared L2 distance between two vectors
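For example (a sketch; values are illustrative):

import Numeric.LinearAlgebra.Sparse

normsDemo :: IO ()
normsDemo = do
  let x = fromListSV 3 [(0, 3), (1, 4)] :: SpVector Double
      y = fromListSV 3 [(2, 1)]         :: SpVector Double
  print (norm1 x)             -- 7.0
  print (norm2 x)             -- 5.0
  print (norm 2 x)            -- Lp norm with p = 2, also 5.0
  print (hilbertDistSq x y)   -- || x - y ||^2 = 9 + 16 + 1 = 26
  prd (normalize2 x)          -- unit vector in the direction of x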

Matrix-related

transpose :: MatrixRing m => m -> m Source #

Matrix transpose

normFrobenius :: MatrixRing m => m -> MatrixNorm m Source #

Frobenius norm

Pretty-printing

prd :: PrintDense a => a -> IO () Source #

Pretty-print with a descriptive header

prd0 :: PrintDense a => a -> IO () Source #

Pretty-print with no header

Iteration combinators

untilConvergedG0 Source #

Arguments

:: (Normed v, MonadThrow m, MonadIO m, Typeable (Magnitude v), Typeable s, Show s) 
=> String 
-> IterationConfig s v 
-> v

Known value

-> (s -> s) 
-> s 
-> m s 

untilConvergedG0 is a special case of untilConvergedG that assesses convergence based on the L2 distance to a known solution xKnown

untilConvergedG :: (Normed v, MonadThrow m, MonadIO m, Typeable (Magnitude v), Typeable s, Show s) => String -> IterationConfig s v -> (v -> Bool) -> (s -> s) -> s -> m s Source #

This function makes some default choices on the modifyInspectGuarded machinery: convergence is assessed using the squared L2 distance between consecutive states, and divergence is detected when this function is increasing between pairs of measurements.

untilConvergedGM :: (Normed v, MonadThrow m, MonadIO m, Typeable (Magnitude v), Typeable s, Show s) => String -> IterationConfig s v -> (v -> Bool) -> (s -> m s) -> s -> m s Source #

", monadic version

modifyInspectGuarded Source #

Arguments

:: (MonadThrow m, MonadIO m, Typeable s, Typeable a, Show s, Show a) 
=> String

Calling function name

-> IterationConfig s v

Configuration

-> ([v] -> a)

State summary array projection

-> (a -> Bool)

Convergence criterion

-> (a -> a -> Bool)

Divergence criterion

-> (v -> Bool)

Final state acceptance criterion

-> (s -> s)

State evolution

-> s

Initial state

-> m s

Final state

modifyInspectGuarded is a higher-order abstraction of a numerical iterative process. It accumulates a rolling window of 3 states and compares a summary q of the latest 2 with that of the previous 2 in order to assess divergence (e.g. if `q latest2 > q prev2` the process is deemed to be diverging). The process ends either when the iteration budget is exhausted or when relative convergence is verified. The function then assesses the final state with a predicate qfinal (e.g. against a known solution; if this is not known, the user can just supply `const True`).

modifyInspectGuardedM :: (MonadThrow m, MonadIO m, Typeable s, Show s, Typeable a, Show a) => String -> IterationConfig s v -> ([v] -> a) -> (a -> Bool) -> (a -> a -> Bool) -> (v -> Bool) -> (s -> m s) -> s -> m s Source #

", monadic version

data IterationConfig a b Source #

Constructors

IterConf 

Fields

modifyUntil :: MonadState s m => (s -> Bool) -> (s -> s) -> m s Source #

modifyUntilM :: MonadState s m => (s -> Bool) -> (s -> m s) -> m s Source #
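A sketch of modifyUntil in a State monad (assuming the iteration stops once the predicate holds on the state; the mtl import is not part of this library):

import Control.Monad.State (evalState)
import Numeric.LinearAlgebra.Sparse (modifyUntil)

-- Double a counter until it exceeds 100; expected result: 128.
firstDoublingAbove100 :: Int
firstDoublingAbove100 = evalState (modifyUntil (> 100) (* 2)) 1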

Internal

linSolve0 :: (Scalar (SpVector t) ~ t, MatrixType (SpVector t) ~ SpMatrix t, Show t, Epsilon t, MonadThrow m, MonadIO m, LinearVectorSpace (SpVector t), Typeable (Magnitude (SpVector t)), Typeable t, Normed (SpVector t), Elt t) => LinSolveMethod -> SpMatrix t -> SpVector t -> SpVector t -> m (SpVector t) Source #

Interface method to the individual linear solvers; do not use directly.

Orphan instances