Torch.Indef.Dynamic.Tensor.Math.Lapack

Description

Functions in this section are implemented with an interface to LAPACK libraries. If the LAPACK libraries are not found during the compilation step, these functions will not be available. Hasktorch has not been tested without LAPACK functionality, so that behaviour is currently undefined for an end user. (FIXME: someone needs to test LAPACK-less compilation.)

Synopsis

# Documentation

Docs taken from MAGMA documentation at: http://icl.cs.utk.edu/projectsfiles/magma/doxygen/group__magma__getri.html

getri computes the inverse of a matrix using the LU factorization computed by getrf.

This method inverts U and then computes inv(A) by solving the system inv(A)*L = inv(U) for inv(A).

Note that it is generally both faster and more accurate to use gesv, or getrf and getrs, to solve the system AX = B, rather than inverting the matrix and multiplying to form X = inv(A)*B. Only in special instances should an explicit inverse be computed with this routine.
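The trade-off can be sketched with numpy, which wraps the same LAPACK routines (this is an illustration of the underlying semantics, not the hasktorch API; `np.linalg.inv` goes through getrf/getri, `np.linalg.solve` through gesv):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
B = np.array([[1.0], [0.0]])

# Explicit inverse (the getri route): factorize, invert, then multiply.
A_inv = np.linalg.inv(A)
X_via_inv = A_inv @ B

# Direct solve of AX = B (the gesv route): generally faster and more
# accurate, since no explicit inverse is formed.
X_via_solve = np.linalg.solve(A, B)

assert np.allclose(X_via_inv, X_via_solve)
```

Both paths agree here, but for ill-conditioned matrices the direct solve loses less precision.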

getri_ :: Dynamic -> IO () Source #

inplace version of getri

potrf Source #

Arguments

 :: Dynamic    matrix to decompose
 -> Triangle   which triangle should be used
 -> Dynamic

Cholesky Decomposition of 2D tensor A. The matrix, A, has to be a positive-definite and either symmetric or complex Hermitian.

The factorization has the form

  A = U**T * U,   if UPLO = U, or
  A = L  * L**T,  if UPLO = L,


where U is an upper triangular matrix and L is lower triangular.
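Both conventions can be checked with numpy, which wraps LAPACK's potrf and returns the lower factor (illustration only; the hasktorch `Dynamic`/`Triangle` types are not used here):

```python
import numpy as np

# A must be symmetric positive-definite.
A = np.array([[4.0, 2.0],
              [2.0, 3.0]])

# numpy returns the lower factor L, i.e. the UPLO = L convention.
L = np.linalg.cholesky(A)
assert np.allclose(L @ L.T, A)

# Its transpose is the UPLO = U factor: A = U**T * U.
U = L.T
assert np.allclose(U.T @ U, A)
```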

potrf_ Source #

Arguments

 :: Dynamic    matrix to decompose
 -> Triangle   which triangle should be used
 -> IO ()

Inplace version of potrf.

potri :: Dynamic -> Triangle -> Dynamic Source #

Returns the inverse of 2D tensor A given its Cholesky decomposition chol.

Square matrix chol should be triangular.

Triangle specifies matrix chol as either upper or lower triangular.
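The computation behind potri can be sketched in numpy: given the triangular factor, invert it and multiply, rather than inverting A directly (an illustration of the LAPACK semantics under the UPLO = L convention; not the hasktorch API):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
L = np.linalg.cholesky(A)            # A = L @ L.T

# inv(A) = inv(L.T) @ inv(L); a triangular solve against the identity
# gives inv(L) cheaply.
n = A.shape[0]
inv_L = np.linalg.solve(L, np.eye(n))
A_inv = inv_L.T @ inv_L

assert np.allclose(A @ A_inv, np.eye(n))
```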

potri_ :: Dynamic -> Triangle -> IO () Source #

inplace version of potri.

potrs Source #

Arguments

 :: Dynamic    Tensor B
 -> Dynamic    Cholesky decomposition chol
 -> Triangle   which triangle to use (upper or lower)
 -> Dynamic

Returns the solution to linear system AX = B using the Cholesky decomposition chol of 2D tensor A.

Square matrix chol should be triangular; and, righthand side matrix B should be of full rank.

Triangle specifies matrix chol as either upper or lower triangular.
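A potrs-style solve reduces to two triangular solves against the Cholesky factor. A numpy sketch of the lower-triangular case (illustrating the LAPACK semantics; the hasktorch types are not used here):

```python
import numpy as np

A = np.array([[4.0, 2.0],
              [2.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

L = np.linalg.cholesky(A)      # A = L @ L.T
# Forward solve L y = B, then back solve L.T x = y.
Y = np.linalg.solve(L, B)
X = np.linalg.solve(L.T, Y)

assert np.allclose(A @ X, B)
```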

potrs_ Source #

Arguments

 :: Dynamic    Tensor B
 -> Dynamic    Cholesky decomposition chol
 -> Triangle   which triangle to use (upper or lower)
 -> IO ()

Inplace version of potrs, mutating tensor B in place.

qr :: Dynamic -> (Dynamic, Dynamic) Source #

Compute a QR decomposition of the matrix x: matrices q and r such that x = q * r, with q orthogonal and r upper triangular. This returns the thin (reduced) QR factorization.

Note that precision may be lost if the magnitudes of the elements of x are large.

Note also that, while it should always give you a valid decomposition, it may not give you the same one across platforms - it will depend on your LAPACK implementation.

Note: Irrespective of the original strides, the returned matrix q will be transposed, i.e. with strides 1, m instead of m, 1.
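The thin (reduced) factorization and its properties can be checked with numpy's QR, which goes through the same LAPACK geqrf/orgqr machinery (illustration only, not the hasktorch API):

```python
import numpy as np

x = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# 'reduced' is the thin factorization: q is m×n, r is n×n.
q, r = np.linalg.qr(x, mode='reduced')

assert q.shape == (3, 2) and r.shape == (2, 2)
assert np.allclose(q @ r, x)               # x = q * r
assert np.allclose(q.T @ q, np.eye(2))     # q has orthonormal columns
assert np.allclose(r, np.triu(r))          # r is upper triangular
```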

qr_ :: (Dynamic, Dynamic) -> Dynamic -> IO () Source #

Inplace version of qr

_geqrf :: Dynamic -> Dynamic -> Dynamic -> IO () Source #

This is a low-level function for calling LAPACK directly. You'll generally want to use qr instead.

Computes a QR decomposition of a, but without constructing Q and R as explicit separate matrices. Rather, this directly calls the underlying LAPACK function ?geqrf, which produces a sequence of "elementary reflectors". See the LAPACK documentation from MKL and the MAGMA documentation at http://icl.cs.utk.edu/projectsfiles/magma/doxygen/group__magma__geqrf.html for further details.

Note that, because this is low-level code, hasktorch just calls Torch directly.

geev Source #

Arguments

 :: Dynamic                    square matrix to get eigen{values/vectors} of
 -> EigenReturn                whether or not to return eigenvectors
 -> (Dynamic, Maybe Dynamic)   (e, V) standing for eigenvalues and eigenvectors

(e, V) <- geev A returns eigenvalues and eigenvectors of a general real square matrix A.

A and V are m × m matrices and e is an m-dimensional vector.

This function calculates all right eigenvalues (and vectors) of A such that A = V diag(e) V'.

The EigenReturn argument determines whether only eigenvalues are computed or both eigenvalues and eigenvectors.

The eigenvalues returned follow LAPACK convention and are returned as complex (real/imaginary) pairs of numbers (2 * m dimensional tensor).

Also called the "eig" function in torch.
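The defining relation A V = V diag(e) can be checked with numpy's eig, which wraps the same LAPACK dgeev routine (illustration of the semantics; numpy returns eigenvalues as one array rather than the (re, im) pair layout described above):

```python
import numpy as np

A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])

# Right eigenpairs of a general real square matrix.
e, V = np.linalg.eig(A)

# Each column of V is a right eigenvector: A @ V = V @ diag(e).
assert np.allclose(A @ V, V @ np.diag(e))
```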

geev_ Source #

Arguments

 :: (Dynamic, Dynamic)   (e, V) standing for eigenvalues and eigenvectors
 -> Dynamic              square matrix to get eigen{values/vectors} of
 -> EigenReturn          whether or not to return eigenvectors
 -> IO ()

In-place version of geev.

Note: Irrespective of the original strides, the returned matrix V will be transposed, i.e. with strides 1, m instead of m, 1.

eig :: Dynamic -> EigenReturn -> (Dynamic, Maybe Dynamic) Source #

Alias to geev to match Torch naming conventions.

eig_ :: (Dynamic, Dynamic) -> Dynamic -> EigenReturn -> IO () Source #

Alias to geev_ to match Torch naming conventions.

syev Source #

Arguments

 :: Dynamic                    square matrix to get eigen{values/vectors} of
 -> EigenReturn                whether or not to return eigenvectors
 -> Triangle                   whether the upper or lower triangle should be used
 -> (Dynamic, Maybe Dynamic)   (e, V) standing for eigenvalues and eigenvectors

(e, V) <- syev A returns eigenvalues and eigenvectors of a symmetric real matrix A.

A and V are m × m matrices and e is an m-dimensional vector.

This function calculates all eigenvalues (and vectors) of A such that A = V diag(e) V'.

The EigenReturn argument defines computation of eigenvectors or eigenvalues only.

Since the input matrix A is supposed to be symmetric, only one triangular portion is used. The Triangle argument indicates if this should be the upper or lower triangle.
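numpy's eigh wraps the same LAPACK syev routine and likewise reads only one triangle (its UPLO parameter plays the role of the Triangle argument). A sketch of the symmetric decomposition (illustration only, not the hasktorch API):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh assumes symmetry; UPLO='L' means only the lower triangle is read.
e, V = np.linalg.eigh(A, UPLO='L')

# For a symmetric matrix: A = V diag(e) V', with V orthogonal.
assert np.allclose(V @ np.diag(e) @ V.T, A)
assert np.allclose(V.T @ V, np.eye(2))
```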

syev_ Source #

Arguments

 :: (Dynamic, Dynamic)   (e, V) standing for eigenvalues and eigenvectors
 -> Dynamic              square matrix to get eigen{values/vectors} of
 -> EigenReturn          whether or not to return eigenvectors
 -> Triangle             whether the upper or lower triangle should be used
 -> IO ()

Inplace version of syev

Note: Irrespective of the original strides, the returned matrix V will be transposed, i.e. with strides 1, m instead of m, 1.

symeig :: Dynamic -> EigenReturn -> Triangle -> (Dynamic, Maybe Dynamic) Source #

Alias to syev to match Torch naming conventions.

symeig_ :: (Dynamic, Dynamic) -> Dynamic -> EigenReturn -> Triangle -> IO () Source #

Alias to syev_ to match Torch naming conventions.

gesv Source #

Arguments

 :: Dynamic              B
 -> Dynamic              A
 -> (Dynamic, Dynamic)   (X, LU)

 (X, LU) <- gesv B A returns the solution of AX = B and LU contains L and U factors for LU factorization of A.

A has to be a square and non-singular matrix (a 2D tensor). A and LU are m × m, X is m × k and B is m × k.
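numpy's solve goes through the same LAPACK gesv routine: LU-factorize A, then back-substitute for each column of B (illustration of the semantics; numpy does not expose the LU factors from this call):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # m × m, non-singular
B = np.array([[9.0, 0.0],
              [8.0, 1.0]])        # m × k

X = np.linalg.solve(A, B)         # LAPACK gesv under the hood

assert X.shape == (2, 2)          # X is m × k, like B
assert np.allclose(A @ X, B)
```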

gesv_ Source #

Arguments

 :: (Dynamic, Dynamic)   (X, LU)
 -> Dynamic              B
 -> Dynamic              A
 -> IO ()

Inplace version of gesv.

In this case x and lu will be used for temporary storage and returning the result.

• x will contain the solution X.
• lu will contain L and U factors for LU factorization of A.

Note: Irrespective of the original strides, the returned matrices x and lu will be transposed, i.e. with strides 1, m instead of m, 1.

gels :: Dynamic -> Dynamic -> (Dynamic, Dynamic) Source #

Solution of least squares and least norm problems for a full rank m × n matrix A.

• If n ≤ m, then solve min ||AX − B||_F.
• If n > m, then solve min ||X||_F such that AX = B.

On return, first n rows of x matrix contains the solution and the rest contains residual information. Square root of sum squares of elements of each column of x starting at row n + 1 is the residual for corresponding column.
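The least-squares case (n ≤ m) and its residual can be checked with numpy's lstsq, which wraps the related LAPACK gelsd driver; it reports the residual as the squared Frobenius norm per column rather than packing it into extra rows of x (illustration only, not the hasktorch API):

```python
import numpy as np

# Overdetermined system: m = 3 equations, n = 2 unknowns.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
B = np.array([[6.0], [0.0], [0.0]])

X, residual, rank, _ = np.linalg.lstsq(A, B, rcond=None)

# residual is the squared norm ||A X - B||^2 for each column of B.
assert rank == 2
assert np.allclose(residual, np.sum((A @ X - B) ** 2))
```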

gels_ Source #

Arguments

 :: (Dynamic, Dynamic)   (resb, resa)
 -> Dynamic              matrix b
 -> Dynamic              matrix a
 -> IO ()

Inplace version of gels.

Note: Irrespective of the original strides, the returned matrices resb and resa will be transposed, i.e. with strides 1, m instead of m, 1.

gesvd :: Dynamic -> ComputeSingularValues -> (Dynamic, Dynamic, Dynamic) Source #

(U, S, V) <- svd A returns the singular value decomposition of a real matrix A of size n × m such that A = U S V'.

U is n × n, S is n × m and V is m × m.

The ComputeSingularValues argument represents the number of singular values to be computed. SomeSVs stands for "some" (in LAPACK gesvd terms, likely the reduced set of min(n, m) singular vectors, i.e. job = 'S') and AllSVs stands for all.
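The some/all distinction corresponds to numpy's `full_matrices` flag on svd, which wraps the same LAPACK driver (illustration of the semantics; the mapping to SomeSVs/AllSVs is an inference, not confirmed by the hasktorch source):

```python
import numpy as np

A = np.arange(6, dtype=float).reshape(3, 2)   # n × m with n = 3, m = 2

# full_matrices=True ~ AllSVs: U is n × n, V is m × m.
U, S, Vt = np.linalg.svd(A, full_matrices=True)
assert U.shape == (3, 3) and Vt.shape == (2, 2)

# full_matrices=False ~ SomeSVs: only min(n, m) singular vectors.
Uk, Sk, Vtk = np.linalg.svd(A, full_matrices=False)
assert Uk.shape == (3, 2)

# The reduced form still reconstructs A exactly.
assert np.allclose(Uk @ np.diag(Sk) @ Vtk, A)
```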

gesvd_ Source #

Arguments

 :: (Dynamic, Dynamic, Dynamic)   (u, s, v)
 -> Dynamic                       m
 -> ComputeSingularValues         whether to compute all or some of the singular values
 -> IO ()

Inplace version of gesvd.

Note: Irrespective of the original strides, the returned matrix U will be transposed, i.e. with strides 1, n instead of n, 1.

gesvd2 Source #

Arguments

 :: Dynamic                                 m
 -> ComputeSingularValues                   whether to compute all or some of the singular values
 -> (Dynamic, Dynamic, Dynamic, Dynamic)    (u, s, v, a)

gesvd, computing A = U*Σ*transpose(V).

NOTE: "gesvd, computing A = U*Σ*transpose(V)" is only inferred documentation, written by stites based on the description of gesvd in the Intel MKL documentation at https://software.intel.com/en-us/mkl-developer-reference-c-gesvd.

gesvd2_ Source #

Arguments

 :: (Dynamic, Dynamic, Dynamic, Dynamic)   (u, s, v, a)
 -> Dynamic                                m
 -> ComputeSingularValues                  whether to compute all or some of the singular values
 -> IO ()

Inplace version of gesvd2.

data Triangle Source #

Argument to specify whether the upper or lower triangular decomposition should be used in potrf and potrf_.

Constructors

 Upper   use upper triangular matrix
 Lower   use lower triangular matrix

Instances

 Eq Triangle, Show Triangle   (defined in Torch.Indef.Dynamic.Tensor.Math.Lapack)

data EigenReturn Source #

Argument to be passed to geev, syev, and their inplace variants. Determines whether a function should compute only eigenvalues or both eigenvalues and eigenvectors.

Constructors

 ReturnEigenValues
 ReturnEigenValuesAndVector

Instances

 Eq EigenReturn, Show EigenReturn   (defined in Torch.Indef.Dynamic.Tensor.Math.Lapack)

data ComputeSingularValues Source #

Represents the number of singular values to be computed in gesvd and gesvd2. While Torch is fairly opaque about how many values are computed, it says we either compute "some" or all of the values.

Constructors

 SomeSVs
 AllSVs