The BLAS module provides a fully general yet type-safe BLAS API.
When in doubt about the semantics of an operation, consult your system's BLAS API documentation, or read the documentation for the Intel MKL BLAS distribution.
A few basic notes about how to invoke BLAS routines.
Many BLAS operations take one or more arguments of type `Transpose`.

`Transpose` has the following constructors, which tell BLAS routines what transformation to implicitly apply to an input matrix `mat` with dimensions `n x m`:

- `NoTranspose` leaves the matrix `mat` as is.

- `Transpose` treats the matrix `mat` as being implicitly transposed, with dimensions `m x n`, the entry `mat(i,j)` being treated as actually being the entry `mat(j,i)`. For real matrices this is also the matrix adjoint operation, i.e. `Transpose` and `ConjTranspose` coincide on real matrices.

- `ConjNoTranspose` will implicitly conjugate `mat`, which is a no-op for real (`Double`) matrices, but for `Complex Float` and `Complex Double` matrices a given matrix entry `mat(i,j) == x` will be treated as actually being the complex conjugate of `x`.

- `ConjTranspose` will implicitly transpose and conjugate the input matrix. `ConjTranspose` acts as the matrix adjoint for both real and complex matrices.
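Concretely, a flag type with these four constructors can be sketched as a plain sum type. This is only an illustration of the description above; the actual type in a given binding may carry different instances or live in a different module.

```haskell
-- Sketch of the Transpose flag type described above; constructor
-- names follow the prose, but a real binding's type may differ.
data Transpose
  = NoTranspose     -- use the matrix as is
  | Transpose       -- treat mat(i,j) as mat(j,i)
  | ConjNoTranspose -- conjugate each entry without transposing
  | ConjTranspose   -- conjugate transpose, i.e. the matrix adjoint
  deriving (Eq, Show)
```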
The `*gemm` operations work as follows (using `sgemm` as an example): `sgemm trLeft trRight alpha beta left right result`, where `trLeft` and `trRight` are values of type `Transpose` that respectively act on the matrices `left` and `right`. The generalized matrix computation thus formed can be viewed as being

    result = alpha * trLeft(left) * trRight(right) + beta * result
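As a reference for this formula, here is a small pure sketch of the gemm update rule on row-major lists. It illustrates the semantics only — it is not the actual BLAS call, applies no transposition flags, and the names (`gemmRef`, `matMul`) are hypothetical.

```haskell
type Mat = [[Double]]

-- Transpose a row-major matrix given as a list of rows.
transposeM :: Mat -> Mat
transposeM m
  | null m || any null m = []
  | otherwise            = map head m : transposeM (map tail m)

-- Plain matrix product.
matMul :: Mat -> Mat -> Mat
matMul a b = [ [ sum (zipWith (*) row col) | col <- transposeM b ] | row <- a ]

-- The gemm update rule: result := alpha * a * b + beta * result.
gemmRef :: Double -> Double -> Mat -> Mat -> Mat -> Mat
gemmRef alpha beta a b c =
  zipWith (zipWith (\ab c0 -> alpha * ab + beta * c0)) (matMul a b) c
```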
The `*gemv` operations are akin to the `*gemm` operations, but with `right` and `result` being vectors rather than matrices.
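The corresponding gemv update rule can be sketched the same way — again a pure, illustrative reference with no transposition applied, and `gemvRef` is a hypothetical name, not the binding's API:

```haskell
-- The gemv update rule: y := alpha * a * x + beta * y,
-- for a row-major matrix a and vectors x, y.
gemvRef :: Double -> Double -> [[Double]] -> [Double] -> [Double] -> [Double]
gemvRef alpha beta a x y =
  zipWith (\ax y0 -> alpha * ax + beta * y0)
          (map (\row -> sum (zipWith (*) row x)) a)
          y
```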
The `*trsv` operations solve for `x` in the equation `A x = y`, given a triangular matrix `A` and a vector `y`. The `MatUpLo` argument determines whether the matrix should be treated as upper or lower triangular, and the `MatDiag` argument determines whether the triangular solver should treat the diagonal of the matrix as being all 1's or not. A general pattern of invocation is

    strsv matuplo transposeMatA matdiag matrixA xVector

A key detail to note is that the input vector is ALSO the result vector: `strsv` and friends update the vector in place.
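For intuition, what such a solve computes for a lower-triangular, non-unit-diagonal matrix is ordinary forward substitution. A pure sketch follows (`forwardSolve` is a hypothetical helper; the real routine overwrites the input vector in place rather than returning a new one):

```haskell
-- Solve a x = y for lower-triangular a by forward substitution:
-- x(i) = (y(i) - sum_{j<i} a(i,j) * x(j)) / a(i,i).
-- Illustrative only; strsv itself updates the vector in place.
forwardSolve :: [[Double]] -> [Double] -> [Double]
forwardSolve a y = foldl step [] (zip a y)
  where
    step xs (row, yi) =
      let i = length xs  -- index of the entry being solved for
      in  xs ++ [(yi - sum (zipWith (*) row xs)) / (row !! i)]
```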