Safe Haskell | Safe-Inferred |
---|---|
Language | Haskell98 |
Numeric.LAPACK.Orthogonal
Synopsis
- leastSquares :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => Full meas horiz Small height width a -> Full meas vert horiz height nrhs a -> Full meas vert horiz width nrhs a
- minimumNorm :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => Full meas Small vert height width a -> Full meas vert horiz height nrhs a -> Full meas vert horiz width nrhs a
- leastSquaresMinimumNormRCond :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => RealOf a -> Full meas horiz vert height width a -> Full meas vert horiz height nrhs a -> (Int, Full meas vert horiz width nrhs a)
- pseudoInverseRCond :: (Measure meas, C vert, C horiz, C height, C width, Floating a) => RealOf a -> Full meas vert horiz height width a -> (Int, Full meas horiz vert width height a)
- project :: (C height, Eq height, C width, Eq width, Floating a) => Wide height width a -> Vector height a -> Vector width a -> Vector width a
- leastSquaresConstraint :: (C height, Eq height, C width, Eq width, C constraints, Eq constraints, Floating a) => General height width a -> Vector height a -> Wide constraints width a -> Vector constraints a -> Vector width a
- gaussMarkovLinearModel :: (C height, Eq height, C width, Eq width, C opt, Eq opt, Floating a) => Tall height width a -> General height opt a -> Vector height a -> (Vector width a, Vector opt a)
- determinant :: (C sh, Floating a) => Square sh a -> a
- determinantAbsolute :: (Measure meas, C vert, C horiz, C height, C width, Floating a) => Full meas vert horiz height width a -> RealOf a
- complement :: (C height, C width, Floating a) => Tall height width a -> Tall height ShapeInt a
- affineFrameFromFiber :: (C width, Eq width, C height, Eq height, Floating a) => Wide height width a -> Vector height a -> (Tall width ShapeInt a, Vector width a)
- affineFiberFromFrame :: (C width, Eq width, C height, Eq height, Floating a) => Tall height width a -> Vector height a -> (Wide ShapeInt height a, Vector ShapeInt a)
- householder :: (Measure meas, C vert, C horiz, Permutable height, C width, Floating a) => Full meas vert horiz height width a -> (Square height a, UpperTrapezoid meas vert horiz height width a)
- householderTall :: (Measure meas, C vert, C height, Permutable width, Floating a) => Full meas vert Small height width a -> (Full meas vert Small height width a, Upper width a)
Documentation
leastSquares :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => Full meas horiz Small height width a -> Full meas vert horiz height nrhs a -> Full meas vert horiz width nrhs a
If x = leastSquares a b, then x minimizes Vector.norm2 (multiply a x `sub` b).

Precondition: a must have full rank and height a >= width a.
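To make the criterion concrete, here is a minimal self-contained sketch with plain lists rather than this library's types; mXv, mXm and leastSquares2 are ad-hoc helpers, not part of this module. It fits a line through three points via the normal equations (A^T A) x = A^T b, a simple but less numerically stable route than the LAPACK solver behind this function.

```haskell
import Data.List (transpose)

-- Ad-hoc helpers on row-major lists; not this library's API.
mXv :: [[Double]] -> [Double] -> [Double]
mXv m v = [sum (zipWith (*) row v) | row <- m]

mXm :: [[Double]] -> [[Double]] -> [[Double]]
mXm m n = [[sum (zipWith (*) row col) | col <- transpose n] | row <- m]

-- Solve the 2x2 normal equations (A^T A) x = A^T b by Cramer's rule.
leastSquares2 :: [[Double]] -> [Double] -> [Double]
leastSquares2 a b =
  let at = transpose a
      [[p, q], [_, s]] = mXm at a   -- A^T A is symmetric
      [u, v] = mXv at b
      det = p * s - q * q
  in  [(s * u - q * v) / det, (p * v - q * u) / det]

-- >>> leastSquares2 [[1,0],[1,1],[1,2]] [1,2,4]
-- [0.8333333333333334,1.5]
```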
minimumNorm :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => Full meas Small vert height width a -> Full meas vert horiz height nrhs a -> Full meas vert horiz width nrhs a
The vector x = minimumNorm a b is the vector with minimal Vector.norm2 x that satisfies multiply a x == b.

Precondition: a must have full rank and height a <= width a.
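In the smallest underdetermined case, a single row, the minimum-norm solution has the closed form x = A^T (A A^T)^{-1} b, i.e. the right multiple of the row itself. A self-contained sketch with plain lists (minNormRow is an ad-hoc helper, not this module's API):

```haskell
-- Minimum-norm solution of a single equation aRow . x = b:
-- x = A^T (A A^T)^{-1} b reduces to scaling the row itself.
minNormRow :: [Double] -> Double -> [Double]
minNormRow aRow b = map (* (b / gram)) aRow
  where gram = sum (map (^ 2) aRow)   -- the 1x1 Gram matrix A A^T

-- >>> minNormRow [1, 1] 2
-- [1.0,1.0]
```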
leastSquaresMinimumNormRCond :: (Measure meas, C vert, C horiz, C height, Eq height, C width, C nrhs, Floating a) => RealOf a -> Full meas horiz vert height width a -> Full meas vert horiz height nrhs a -> (Int, Full meas vert horiz width nrhs a)
If (rank,x) = leastSquaresMinimumNormRCond rcond a b, then x is the vector with minimum Vector.norm2 x that minimizes Vector.norm2 (a #*| x `sub` b).

Matrix a can have any rank, but you must specify the reciprocal condition number of the rank-truncated matrix.
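The rcond threshold works the way it does in LAPACK's rank-revealing least-squares drivers: singular values at or below rcond times the largest singular value are treated as zero, and the count of survivors is the returned effective rank. A sketch of just that rank decision, with made-up singular values (effectiveRank is an ad-hoc helper):

```haskell
-- Effective rank under a reciprocal-condition cutoff: keep the
-- singular values that exceed rcond * (largest singular value).
effectiveRank :: Double -> [Double] -> Int
effectiveRank rcond sigmas =
  length (filter (> rcond * maximum sigmas) sigmas)

-- >>> effectiveRank 1e-3 [5.0, 0.7, 1e-5]
-- 2
```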
pseudoInverseRCond :: (Measure meas, C vert, C horiz, C height, C width, Floating a) => RealOf a -> Full meas vert horiz height width a -> (Int, Full meas horiz vert width height a)
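By analogy with leastSquaresMinimumNormRCond, the returned Int is presumably the effective rank under the rcond cutoff, paired with the rank-truncated pseudo-inverse. For a diagonal matrix that pseudo-inverse just reciprocates the entries that survive the cutoff; a sketch of this special case with plain lists (pinvDiag is an ad-hoc helper, not this module's API):

```haskell
-- Pseudo-inverse of a diagonal matrix under an rcond cutoff:
-- entries treated as zero stay zero instead of blowing up.
pinvDiag :: Double -> [Double] -> [Double]
pinvDiag rcond ds = map invert ds
  where
    cutoff = rcond * maximum (map abs ds)
    invert d = if abs d > cutoff then recip d else 0

-- >>> pinvDiag 1e-3 [5, 0.5, 1e-6]
-- [0.2,2.0,0.0]
```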
project :: (C height, Eq height, C width, Eq width, Floating a) => Wide height width a -> Vector height a -> Vector width a -> Vector width a
project b d x projects x onto the plane described by B*x = d.

b must have full rank.
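For a single constraint row, the projection has the closed form x' = x - b^T (b b^T)^{-1} (b.x - d). A self-contained sketch with plain lists (projectRow is an ad-hoc helper, not this module's API):

```haskell
-- Orthogonal projection of x onto the hyperplane {x : b . x = d}.
projectRow :: [Double] -> Double -> [Double] -> [Double]
projectRow b d x = zipWith (\bi xi -> xi - s * bi) b x
  where
    r = sum (zipWith (*) b x) - d   -- constraint residual b.x - d
    s = r / sum (map (^ 2) b)       -- (b b^T)^{-1} applied to r

-- >>> projectRow [1, 1] 1 [1, 1]
-- [0.5,0.5]
```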
leastSquaresConstraint :: (C height, Eq height, C width, Eq width, C constraints, Eq constraints, Floating a) => General height width a -> Vector height a -> Wide constraints width a -> Vector constraints a -> Vector width a
leastSquaresConstraint a c b d computes x with minimal || c - A*x ||_2 subject to the constraint B*x = d.

b must be wide, a===b must be tall, and both matrices must have full rank.
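A worked sketch for the particular constraint x0 + x1 = 1: substituting x = [t, 1-t] eliminates the constraint and leaves a one-dimensional least-squares problem with a closed-form solution. This illustrates the problem being solved, not the library's algorithm, which handles general b and d (lseSketch is an ad-hoc helper on plain lists):

```haskell
-- Minimize || c - A*x ||_2 subject to x0 + x1 = 1,
-- by substituting x = [t, 1-t] and solving for the scalar t.
lseSketch :: [[Double]] -> [Double] -> [Double]
lseSketch a c = [t, 1 - t]
  where
    col i = map (!! i) a
    h = zipWith (-) (col 0) (col 1)   -- how A*x changes with t
    g = zipWith (-) c (col 1)         -- residual at t = 0
    t = sum (zipWith (*) g h) / sum (map (^ 2) h)

-- >>> lseSketch [[1,0],[0,1],[1,1]] [1,0,2]
-- [1.0,0.0]
```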
gaussMarkovLinearModel :: (C height, Eq height, C width, Eq width, C opt, Eq opt, Floating a) => Tall height width a -> General height opt a -> Vector height a -> (Vector width a, Vector opt a)
gaussMarkovLinearModel a b d computes (x,y) with minimal || y ||_2 subject to the constraint d = A*x + B*y.

a must be tall, a|||b must be wide, and both matrices must have full rank.
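To see the connection to least squares, consider the special case where B is the identity matrix: then y = d - A*x is just the residual, and minimizing || y ||_2 makes x the ordinary least-squares solution of A*x ~ d. A sketch for a single-column A with plain lists (gaussMarkovIdB is an ad-hoc helper, not this module's API):

```haskell
-- Gauss-Markov model with B = identity and a single column in A:
-- x is the 1-D least-squares coefficient, y the residual vector.
gaussMarkovIdB :: [Double] -> [Double] -> (Double, [Double])
gaussMarkovIdB aCol d = (x, y)
  where
    x = sum (zipWith (*) aCol d) / sum (map (^ 2) aCol)
    y = zipWith (\ai di -> di - ai * x) aCol d

-- >>> fst (gaussMarkovIdB [1, 1, 1] [1, 2, 4])
-- 2.3333333333333335
```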
determinantAbsolute :: (Measure meas, C vert, C horiz, C height, C width, Floating a) => Full meas vert horiz height width a -> RealOf a
Gramian determinant. It also works for non-square matrices, but is sensitive to transposition.
determinantAbsolute a = sqrt (Herm.determinant (Herm.gramian a))
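For a tall matrix, the square root of the Gramian determinant is the volume spanned by the columns; with two columns it is the area of a parallelogram. This also shows the sensitivity to transposition: for the wide transpose of the example below, the Gramian in the formula above is a singular 3-by-3 matrix, so the result would be 0. A sketch with plain lists (dot and detAbsTwoCols are ad-hoc helpers, not this module's API):

```haskell
import Data.List (transpose)

dot :: [Double] -> [Double] -> Double
dot u v = sum (zipWith (*) u v)

-- sqrt (det (A^T A)) for a matrix with exactly two columns:
-- the area of the parallelogram spanned by the columns.
detAbsTwoCols :: [[Double]] -> Double
detAbsTwoCols a = sqrt (dot c0 c0 * dot c1 c1 - dot c0 c1 ^ 2)
  where [c0, c1] = transpose a

-- >>> detAbsTwoCols [[1,0],[0,1],[0,0]]
-- 1.0
```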
complement :: (C height, C width, Floating a) => Tall height width a -> Tall height ShapeInt a
For an m-by-n matrix a with m >= n, this function computes an m-by-(m-n) matrix b such that Matrix.multiply (adjoint b) a is a zero matrix. The function does not try to compensate for a rank deficiency of a. That is, a|||b has full rank if and only if a has full rank.

For full-rank matrices you might also call this the kernel or null space.
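In the smallest case, m = 2 and n = 1, a complement of a single column is one perpendicular column, and a 90-degree rotation provides it. A sketch with plain pairs (complement21 is an ad-hoc helper, not this module's API):

```haskell
-- A perpendicular column for a single column in R^2 (m = 2, n = 1):
-- rotating by 90 degrees guarantees that adjoint b times a is zero.
complement21 :: (Double, Double) -> (Double, Double)
complement21 (x, y) = (-y, x)

-- >>> let (bx, by) = complement21 (3, 4) in bx * 3 + by * 4
-- 0.0
```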
affineFrameFromFiber :: (C width, Eq width, C height, Eq height, Floating a) => Wide height width a -> Vector height a -> (Tall width ShapeInt a, Vector width a)
affineFrameFromFiber a b == (c,d) means: an affine subspace is given implicitly by {x : a#*|x == b}; convert this into an explicit representation {c #*| y |+| d : y}.

Matrix a must have full rank; otherwise the explicit representation will miss dimensions and we cannot easily determine the origin d as a minimum-norm solution.

The computation is like

c = complement $ adjoint a
d = minimumNorm a b

but the QR decomposition of a is computed only once.
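A worked sketch for one equation in two unknowns: the fiber {x : a.x == b} of a single row a becomes the explicit line {c*t + d : t}, where d is the minimum-norm point of the line and c is a unit vector spanning the kernel of a, mirroring the complement/minimumNorm recipe above (frameFromFiber is an ad-hoc helper on plain pairs, not this module's API):

```haskell
-- Implicit line {x : a . x = b}  ==>  explicit line {c*t + d : t}.
frameFromFiber :: (Double, Double) -> Double -> ((Double, Double), (Double, Double))
frameFromFiber (ax, ay) b = (c, d)
  where
    n2 = ax * ax + ay * ay
    d = (ax * b / n2, ay * b / n2)   -- the minimumNorm a b point
    len = sqrt n2
    c = (-ay / len, ax / len)        -- complement (adjoint a), unit length

-- The line x0 + x1 = 1 becomes direction c = (-1,1)/sqrt 2
-- through the origin point d = (0.5,0.5):
-- >>> snd (frameFromFiber (1, 1) 1)
-- (0.5,0.5)
```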
affineFiberFromFrame :: (C width, Eq width, C height, Eq height, Floating a) => Tall height width a -> Vector height a -> (Wide ShapeInt height a, Vector ShapeInt a)
This conversion is, in a sense, inverse to affineFrameFromFiber. However, it is not precisely inverse in either direction, because both affineFrameFromFiber and affineFiberFromFrame accept non-orthogonal matrices but always return orthogonal ones.

In affineFiberFromFrame c d, the matrix c should have full rank; otherwise the implicit representation will miss dimensions.
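The reverse of the sketch under affineFrameFromFiber: given the explicit line {c*t + d : t} in R^2, an implicit description a.x == b is recovered by taking a unit normal a of c and b = a.d (fiberFromFrame is an ad-hoc helper on plain pairs, not this module's API):

```haskell
-- Explicit line {c*t + d : t}  ==>  implicit description a . x = b.
fiberFromFrame :: (Double, Double) -> (Double, Double) -> ((Double, Double), Double)
fiberFromFrame (cx, cy) (dx, dy) = ((ax, ay), ax * dx + ay * dy)
  where
    len = sqrt (cx * cx + cy * cy)
    (ax, ay) = (-cy / len, cx / len)   -- unit normal to the direction c

-- Feeding in the frame from the previous sketch recovers the
-- constraint x0 + x1 = 1, rescaled to a unit normal:
-- fiberFromFrame (-1, 1) (0.5, 0.5)  ~  ((-0.71,-0.71), -0.71)
```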