Stability | stable |
---|---|
Maintainer | erkokl@gmail.com |
Safe Haskell | Safe-Inferred |
Math.ConjugateGradient
Description
(The linear equation solver library is hosted at http://github.com/LeventErkok/conjugateGradient. Comments, bug reports, and patches are always welcome.)
Sparse matrix linear-equation solver, using the conjugate gradient algorithm. Note that the technique only applies to matrices that are:
- Symmetric
- Positive-definite
See http://en.wikipedia.org/wiki/Conjugate_gradient_method for details.
The conjugate gradient method can handle very large sparse matrices, where direct methods (such as LU decomposition) are way too expensive to be useful in practice.
- type SV a = IntMap a
- type SM a = IntMap (SV a)
- sMulSV :: RealFloat a => a -> SV a -> SV a
- sMulSM :: RealFloat a => a -> SM a -> SM a
- addSV :: RealFloat a => SV a -> SV a -> SV a
- subSV :: RealFloat a => SV a -> SV a -> SV a
- dotSV :: RealFloat a => SV a -> SV a -> a
- mulSMV :: RealFloat a => SM a -> SV a -> SV a
- normSV :: RealFloat a => SV a -> a
- solveCG :: (RandomGen g, RealFloat a, Random a) => g -> Int -> SM a -> SV a -> (a, SV a)
- showSolution :: RealFloat a => Int -> Int -> Int -> SM a -> SV a -> SV a -> String
Types
We represent sparse matrices and vectors using IntMaps. In a sparse vector, we only populate those elements that are non-0. In a sparse matrix, we only populate those rows that contain a non-0 element. This leads to an efficient representation for sparse matrices and vectors, where the space usage is proportional to the number of non-0 elements. Strictly speaking, putting in elements that are 0 would not break the algorithms we use, but it would clearly be less efficient.
Indexing starts at 0, and indices are assumed to be non-negative, corresponding to the row numbers.
type SV a = IntMap a
A sparse vector is an int-map from indices to the non-0 values at those indices.
type SM a = IntMap (SV a)
A sparse matrix is an int-map containing sparse row-vectors. Again, only put in rows that have a non-0 element in them for efficiency.
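For concreteness, here is a small sketch of how such values could be built, assuming the IntMap in question is Data.IntMap from the containers package; the names exA and exB are purely illustrative:

```haskell
import qualified Data.IntMap as IM

import Math.ConjugateGradient (SM, SV)

-- The 3x3 symmetric matrix
--     ( 4  1  0 )
--     ( 1  3  0 )
--     ( 0  0  2 )
-- Only rows with a non-0 entry are present, and within each row only the non-0 columns.
exA :: SM Double
exA = IM.fromList [ (0, IM.fromList [(0, 4), (1, 1)])
                  , (1, IM.fromList [(0, 1), (1, 3)])
                  , (2, IM.fromList [(2, 2)])
                  ]

-- The sparse vector (1, 0, 2): index 1 is left out since its value is 0.
exB :: SV Double
exB = IM.fromList [(0, 1), (2, 2)]
```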
Sparse operations
mulSMV :: RealFloat a => SM a -> SV a -> SV a
Multiply a sparse matrix (nxn) with a sparse vector (nx1), obtaining a sparse vector (nx1).
normSV :: RealFloat a => SV a -> a
Norm of a sparse vector. (Square-root of its dot-product with itself.)
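As an illustration of how these operations compose, the following hypothetical helper (not part of the library) measures how well a candidate solution x satisfies Ax = b by taking the norm of the residual:

```haskell
import Math.ConjugateGradient (SM, SV, mulSMV, subSV, normSV)

-- ||A*x - b||: close to 0 exactly when x (approximately) solves the system.
residualNorm :: RealFloat a => SM a -> SV a -> SV a -> a
residualNorm a b x = normSV (mulSMV a x `subSV` b)
```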
Conjugate-Gradient solver
Arguments
:: (RandomGen g, RealFloat a, Random a) | |
=> g | The seed for the random-number generator. |
-> Int | Number of variables. |
-> SM a | The A sparse matrix (nxn). |
-> SV a | The b sparse vector (nx1). |
-> (a, SV a) | The final error factor, and the x sparse vector (nx1) such that Ax = b. |
Conjugate Gradient Solver for the system Ax=b. See: http://en.wikipedia.org/wiki/Conjugate_gradient_method.
NB. Assumptions on the input:
- The A matrix is symmetric and positive definite.
- The indices start from 0 and go consecutively up to n-1. (Only non-0 value/row indices have to be present, of course.)
For efficiency reasons, we do not check for either property. (If these assumptions are violated, the algorithm will still produce a result, but not the one you expected!)
We run at most 10^6 iterations of the Conjugate-Gradient algorithm, stopping early once the error factor drops below 1e-10. The error factor is defined as the difference between the norm of the current solution and that of the previous one, as we go through the iteration. See http://en.wikipedia.org/wiki/Conjugate_gradient_method#Convergence_properties_of_the_conjugate_gradient_method for a discussion of the convergence properties of this algorithm.
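A minimal usage sketch, using the small symmetric positive-definite system from the Wikipedia page (4x + y = 1, x + 3y = 2); the printed values in the comments are approximate and shown only for orientation:

```haskell
import qualified Data.IntMap as IM
import System.Random (newStdGen)

import Math.ConjugateGradient

main :: IO ()
main = do
  g <- newStdGen
  let a = IM.fromList [ (0, IM.fromList [(0, 4), (1, 1)])
                      , (1, IM.fromList [(0, 1), (1, 3)])
                      ]                                   -- [[4, 1], [1, 3]]
      b = IM.fromList [(0, 1), (1, 2)]                    -- (1, 2)
      (err, x) = solveCG g 2 a b :: (Double, SV Double)
  print err             -- final error factor; expected to be tiny for this small system
  print (IM.toList x)   -- roughly [(0, 0.0909), (1, 0.6364)], i.e. (1/11, 7/11)
```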
Displaying solutions
Arguments
:: RealFloat a | |
=> Int | Cell-width. Each value will occupy this many characters at least. |
-> Int | Precision: Use this many digits after the decimal point. |
-> Int | Number of variables, n. |
-> SM a | The A sparse matrix. |
-> SV a | The b sparse vector. |
-> SV a | The x sparse vector (the solution, as returned by solveCG, for instance). |
-> String |
Display a solution in a human-readable form. Needless to say, only use this method when the system is small enough to fit nicely on the screen.
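Continuing the solveCG sketch above (with a, b, and x bound as there), a call along these lines renders the system; the cell width of 8 and precision of 4 are arbitrary choices:

```haskell
putStrLn (showSolution 8 4 2 a b x)
```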