linearmap-category
(c) Justus Sagemüller 2016
GPL v3
(@) jsag $ hvl.no
experimental, portable

Bilinear: A bilinear function is a linear function mapping to a linear function, or equivalently a 2-argument function that's linear in each argument independently. Note that this cannot be uncurried to a linear function with a tuple argument (that would not be linear but quadratic).

-+>: Infix synonym of LinearFunction, without explicit mention of the scalar type.

LinearFunction: A linear map, represented simply as a Haskell function tagged with the type of scalar with respect to which it is linear. Many (sparse) linear mappings can actually be calculated much more efficiently if you don't represent them with any kind of matrix, but just as a function (which is, after all, mathematically speaking, what a linear map foremostly is). However, if you sum up many LinearFunctions – which you can simply do with the VectorSpace instance – they will become ever slower to calculate, because the summand-functions are actually computed individually and only the results summed. That's where LinearMap is generally preferable. You can always convert between these equivalent categories using arr.

LSpace: The workhorse of this package: most functions here work on vector spaces that fulfill the LSpace v constraint. In summary, this is a LinearSpace with an implementation of the tensor product with any other suitable space w, and whose scalar and dual space are again such spaces. This fulfills DualVector (DualVector v) ~ v (this constraint is encapsulated in DualSpaceWitness). To make a new space of yours an LSpace, you must define instances of TensorSpace and LinearSpace. In fact, LSpace is equivalent to LinearSpace, but makes the condition explicit that the scalar and dual vectors also form a linear space. LinearSpace only stores that constraint in dualSpaceWitness (to avoid UndecidableSuperclasses).

+>: Infix synonym for LinearMap, without explicit mention of the scalar type.

⊗: Infix synonym for Tensor, without explicit mention of the scalar type.

Tensor: Tensor products are most interesting because they can be used to implement linear mappings, but they also form a useful vector space in their own right.

LinearMap: The tensor product between one space's dual space and another space is the space spanned by vector–dual-vector pairs, in bra-ket notation (https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation) written as

  m = ∑ᵢ |wᵢ⟩⟨vᵢ|

Any linear mapping can be written as such a (possibly infinite) sum. The LinearMap data structure only stores the linearly independent parts though; for simple finite-dimensional spaces this means the representation effectively boils down to an ordinary matrix type, namely an array of column-vectors |wᵢ⟩. (The ⟨vᵢ| dual-vectors are then simply assumed to come from the canonical basis.) For bigger spaces, the tensor product may be implemented in a more efficient sparse structure; this can be defined in the TensorSpace instance.

LinearSpace: The class of vector spaces v for which LinearMap s v w is well-implemented.

DualVector: Suitable representation of a linear map from the space v to its field. For the usual euclidean spaces, you can just define DualVector v = v. (In this case, a dual vector will be just a "row vector" if you consider v-vectors as "column vectors"; a linear map will then effectively have a matrix layout.)
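For intuition on the "array of column-vectors" layout, here is a minimal sketch in plain Haskell using only the vector-space package (on which this library builds); applyColumns is an illustrative name, not part of the library:

> import Data.VectorSpace ((^+^), (*^))
>
> -- A linear map ℝ² → ℝ³, stored as its two columns, i.e. the images of
> -- the two canonical basis vectors.  Applying it to (x,y) combines the
> -- columns linearly – exactly what a matrix–vector product does.
> applyColumns :: (Double, Double, Double)      -- first column
>              -> (Double, Double, Double)      -- second column
>              -> (Double, Double) -> (Double, Double, Double)
> applyColumns c1 c2 (x, y) = x *^ c1 ^+^ y *^ c2

A dual vector of ℝ² is then just a pair used as a "row", and a full matrix is one such column per basis vector of the domain.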
TensorProduct: The internal representation of a Tensor product. For euclidean spaces, this is generally constructed by replacing each s scalar field in the v vector with an entire w vector. I.e., you then have a "nested vector" or, if v is a DualVector / "row vector", a matrix.

wellDefinedVector: Sanity-check a vector. This typically amounts to detecting any NaN components, which should trigger a Nothing result. Otherwise, the result should be Just the input, but may also be optimised / memoised if applicable (i.e. for function spaces).

Infix version of linear-function application.

⊕: The dual operation to the tuple constructor, or rather to the fanout operation: evaluate two (linear) functions in parallel and sum up the results. The typical use is to concatenate "row vectors" in a matrix definition.

>+<: ASCII version of ⊕.

Conversions between the various (un)curried forms of linear maps and tensors:

  ((v⊗w)+>x) -> ((v+>w)+>x)
  ((v+>w)+>x) -> ((v⊗w)+>x)
  (u+>(v⊗w)) -> (u+>v)⊗w
  (u+>v)⊗w -> u+>(v⊗w)
  ((u+>v)+>w) -> u⊗(v+>w)
  (u⊗(v+>w)) -> (u+>v)+>w
  ((u⊗v)+>w) -> (u+>(v+>w))
  (u+>(v+>w)) -> ((u⊗v)+>w)

Use a function as a linear map. This is only well-defined if the function is linear (this condition is not checked).

SubBasis: Whereas Basis-values refer to a single basis vector, a single SubBasis value represents a collection of such basis vectors, which can be used to associate a vector with a list of coefficients. For spaces with a canonical finite basis, SubBasis does not actually need to contain any information; it can simply have the full finite basis as its only value. Even for large sparse spaces, it should only have a very coarse structure that can be shared by many vectors.

decomposeLinMap: Split up a linear map in "column vectors", WRT some suitable basis.

decomposeLinMapWithin: Expand in the given basis, if possible. Else yield a superbasis of the given one, in which this is possible, and the decomposition therein.

recomposeSB: Assemble a vector from coefficients in some basis. Return any excess coefficients.

recomposeContraLinMap: Given a function that interprets a coefficient-container as a vector representation, build a linear function mapping to that space.

uncanonicallyToDual / uncanonicallyFromDual: The existence of a finite basis gives us an isomorphism between a space and its dual space. Note that this isomorphism is not natural (i.e. it depends on the actual choice of basis, unlike everything else in this library).

SemiInner: SemiInner is the class of vector spaces with finite subspaces in which you can define a basis that can be used to project from the whole space into the subspace. The usual application is for using a kind of Galerkin method (https://en.wikipedia.org/wiki/Galerkin_method) to give an approximate solution (see \$) to a linear equation in a possibly infinite-dimensional space. Of course, this also works for spaces which are already finite-dimensional themselves.

dualBasisCandidates: Lazily enumerate choices of a basis of functionals that can be made dual to the given vectors, in order of preference (which roughly means, large in the normal direction). I.e., if the vector cⱼ is assigned early to the dual vector cⱼ', then (cⱼ' $ cⱼ) should be large and all the other products comparably small. The purpose is that we should be able to make this basis orthonormal with a ~Gaussian-elimination approach, in a way that stays numerically stable. This is otherwise known as the choice of a pivot element. For simple finite-dimensional array-vectors, you can easily define this method using cartesianDualBasisCandidates; see the sketch below for the pivot idea.
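The pivot-choice heuristic can be pictured, for plain array vectors, as preferring the coordinate functional on which the vector has its largest component. This is only an illustrative sketch in plain Haskell; pivotIndex is not part of the library:

> import Data.List (maximumBy)
> import Data.Ord (comparing)
>
> -- For a vector given by its coordinates, the preferred dual-basis
> -- candidate is the coordinate functional where the vector is largest
> -- in absolute value – the classic pivot choice of Gaussian elimination.
> pivotIndex :: [Double] -> Int
> pivotIndex = fst . maximumBy (comparing (abs . snd)) . zip [0..]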
\$: Inverse function application, aka solving of a linear system:

  f \$ f $ v  ≈  v

  f $ f \$ u  ≈  u

If f does not have full rank, the behaviour is undefined. However, it does not need to be a proper isomorphism: the first of the above equations is still fulfilled if f is merely injective (overdetermined system) and the second if it is merely surjective.

If you want to solve for multiple RHS vectors, be sure to partially apply this operator to the linear map, like

  map (f \$) [v₁, v₂, ...]

Since most of the work is actually done in triangularising the operator, this may be much faster than

  [f \$ v₁, f \$ v₂, ...]

unsafeLeftInverse: If f is injective, then unsafeLeftInverse f . f ≈ id.

unsafeRightInverse: If f is surjective, then f . unsafeRightInverse f ≈ id.

unsafeInverse: Invert an isomorphism. For other linear maps, the result is undefined.

riesz: The Riesz representation theorem (https://en.wikipedia.org/wiki/Riesz_representation_theorem) provides an isomorphism between a Hilbert space and its (continuous) dual space.

showsPrecAsRiesz: Functions are generally a pain to display, but since linear functionals in a Hilbert space can be represented by vectors in that space, this can be used for implementing a Show instance.

.<: Outer product of a general v-vector and a basis element from w. Note that this operation is in general pretty inefficient; it is provided mostly to lay out matrix definitions neatly.

adjoint: For real matrices, this boils down to transpose. For free complex spaces it also incurs complex conjugation. The signature can also be understood as

  adjoint :: (v +> w) -> (DualVector w +> DualVector v)

or

  adjoint :: (DualVector v +> DualVector w) -> (w +> v)

but not (v+>w) -> (w+>v) in general (though in a Hilbert space, this too is equivalent, via the riesz isomorphism).

cartesianDualBasisCandidates takes a set of canonical basis functionals, and a function that decomposes a vector into absolute-value components (the list indices should correspond to those in the functional list).

·: Generalised multiplication operation. This subsumes <.>^ and *^. For scalars therefore also *, and for InnerSpace, <.>.

linearFit_χν²: How well the data uncertainties match the deviations from the model's synthetic data,

  χν² = (1/ν) · ∑ᵢ (δyᵢ / σyᵢ)²

where ν is the number of degrees of freedom (data values minus model parameters), δyᵢ = m xᵢ − yᵢ is the deviation from the given data to the data the model would predict (for each sample point), and σyᵢ is the a-priori measurement uncertainty of the data points.

Values χν² > 1 indicate that the data could not be described satisfyingly; χν² ≪ 1 suggests overfitting or that the data uncertainties have been postulated too high. See http://adsabs.harvard.edu/abs/1997ieas.book.....T

If the model is exactly determined or even underdetermined (i.e. ν ≤ 0) then χν² is undefined.
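As a concrete reading of the χν² formula, here is a minimal plain-Haskell sketch; chiSquaredNu is illustrative and not part of the library:

> -- Reduced χ²: sum of squared, uncertainty-weighted deviations between
> -- measured and model-predicted values, divided by the degrees of freedom.
> chiSquaredNu :: Int                             -- number of model parameters
>              -> [(Double, Double, Double)]      -- (measured y, model-predicted y, σy)
>              -> Double
> chiSquaredNu nParams pts
>     = sum [ ((yModel - yMeasured)/sigma)^2 | (yMeasured, yModel, sigma) <- pts ]
>         / fromIntegral nu
>  where nu = length pts - nParams                -- degrees of freedom ν

A value near 1 means the residuals are about as large as the stated uncertainties; as noted above, the statistic is meaningless when nu ≤ 0.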
linearFit_bestModel: The model that best corresponds to the data, in a least-squares sense WRT the supplied norm on the data points. In other words, this is the model that minimises ∑ᵢ (δyᵢ / σyᵢ)².

Eigenvector fields:
  ev_Eigenvalue: The estimated eigenvalue λ.
  ev_Eigenvector: Normalised vector v that gets mapped to a multiple, namely:
  ev_FunctionApplied: f $ v ≈ λ *^ v.
  ev_Deviation: Deviation of v from (f $ v) ^/ λ. Ideally, these would of course be equal.
  ev_Badness: Squared norm of the deviation.

RealSpace: A space in which you can use · both for scaling with a real number, and as a dot product for obtaining such a number.

Variance: A multidimensional variance of points v with some distribution can be considered a norm on the dual space, quantifying for a dual vector dv the expectation value of (dv <.>^ v)².

Seminorm: A "norm" that may explicitly be degenerate, with m|$|v ≡ 0 for some v ≠ zeroV.

Norm: A positive (semi)definite symmetric bilinear form. This gives rise to a norm (https://en.wikipedia.org/wiki/Norm_(mathematics)) thus:

  ‖v‖ₙ = √(n v <.>^ v)

Strictly speaking, this type is neither strong enough nor general enough to deserve the name Norm: it includes proper Seminorms (i.e. m|$|v ≡ 0 does not guarantee v == zeroV), but not actual norms such as the ℓ₁-norm on ℝⁿ (taxicab norm) or the supremum norm. However, ℓ₂-like norms are the only ones that can really be formulated without any basis reference; and guaranteeing positive definiteness through the type system is scarcely practical.

-+|>: A linear map that simply projects from a dual vector in u to a vector in v:

  (du -+|> v) $ u  ≈  v ^* (du <.>^ u)

spanNorm: A seminorm defined by

  ‖v‖ = √(∑ᵦ ⟨dᵦ|v⟩²)

for some dual vectors dᵦ. If given a complete basis of the dual space, this generates a proper Norm. If the dᵦ are a complete orthonormal system, you get the euclideanNorm (in an inefficient form).

relaxNorm: Modify a norm in such a way that the given vectors lie within its unit ball. (Not optimally – the unit ball may be bigger than necessary.)

scaleNorm: Scale the result of a norm with the absolute of the given number:

  scaleNorm μ n |$| v = abs μ * (n|$|v)

Equivalently, this scales the norm's unit ball by the reciprocal of that factor.

euclideanNorm: The canonical standard norm (2-norm) on inner-product / Hilbert spaces.

adhocNorm: The norm induced from the (arbitrary) choice of basis in a finite space. Only use this in contexts where you merely need some norm, but don't care if it might be biased in some unnatural way.

dualNorm: A proper norm induces a norm on the dual space – the "reciprocal norm". (The orthonormal systems of the norm and its dual are mutually conjugate.) The dual norm of a seminorm is undefined.

dualNorm': dualNorm in the opposite direction. This is actually self-inverse; you can replace each direction with the other.

findNormalLength: The unique positive number whose norm is 1 (if the norm is not constant zero).

normalLength: Unsafe version of findNormalLength; only works reliably if the norm is actually positive definite.

<$|: Partially apply a norm, yielding a dual vector (i.e. a linear form that accepts the second argument of the scalar product):

  (n <$| v) <.>^ w  ≈  ⟨v, w⟩ₙ

See also |&>.

normSq: The squared norm. More efficient than |$| because that needs to take the square root.
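To make the spanNorm / |$| relationship concrete, here is a plain-Haskell sketch using only the vector-space package; it does not use the library's Norm type, and spannedSeminorm is purely illustrative:

> import Data.VectorSpace ((<.>))
>
> -- The seminorm generated by a list of dual ("row") vectors db:
> -- ‖v‖ = √( Σ_b ⟨db|v⟩² ).  Here both ℝ³ vectors and their duals are
> -- represented as plain triples of Doubles.
> spannedSeminorm :: [(Double, Double, Double)]   -- the spanning dual vectors
>                 -> (Double, Double, Double) -> Double
> spannedSeminorm dbs v = sqrt . sum $ [ (db <.> v)^2 | db <- dbs ]

With a complete orthonormal set of duals this reduces to the ordinary euclidean norm, mirroring the remark in the spanNorm documentation.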
|$|: Use a Norm to measure the length / norm of a vector:

  n |$| v  ≈  √⟨v, v⟩ₙ

|&>: Flipped, "ket" version of <$|:

  v <.>^ (w |&> n)  ≈  ⟨v, w⟩ₙ

densifyNorm: spanNorm / spanVariance are inefficient if the number of vectors is similar to the dimension of the space, or even larger than it. Use this function to optimise the underlying operator to a dense matrix representation.

wellDefinedNorm: Like densifyNorm, but also perform a "sanity check" to eliminate NaN etc. problems.

constructEigenSystem: Lazily compute the eigenbasis of a linear map. The algorithm is essentially a hybrid of Lanczos/Arnoldi-style Krylov-spanning and QR-diagonalisation, which we don't do separately but "interleave" at each step. The size of the eigen-subbasis increases with each step until the space's dimension is reached. (But the algorithm can also be used for infinite-dimensional spaces.)

roughEigenSystem: Find a system of vectors that approximate the eigensystem, in the sense that each true eigenvalue is represented by an approximate one that is closer to the true value than all the other approximate EVs. This function does not make any guarantees as to how well a single eigenvalue is approximated, though.

eigen: Simple automatic finding of the eigenvalues and -vectors of a Hermitian operator, in reasonable approximation. This works by spanning a QR-stabilised Krylov basis until it is complete, and then properly decoupling the system (based on two iterations of shifted Givens rotations). This function is a tradeoff in performance vs. accuracy: use constructEigenSystem and finishEigenSystem directly for more quickly computing a (perhaps incomplete) approximation, or for more precise results.

roughDet: Approximation of the determinant.

normSpanningSystem: Inverse of spanNorm / spanVariance. Equivalent to the corresponding function on the dual space.

sharedNormSpanningSystem: For any two norms, one can find a system of co-vectors that, with suitable coefficients, spans either of them: if

  shSys = sharedNormSpanningSystem n₀ n₁

then

  n₀ = spanNorm $ fst <$> shSys

and

  n₁ = spanNorm [dv ^* η | (dv, η) <- shSys]

A rather crude approximation is used in this function, so do not expect the above equations to hold with great accuracy.

sharedSeminormSpanningSystem: Like 'sharedNormSpanningSystem n₀ n₁', but allows either of the norms to be singular.

  n₀ = spanNorm [dv | (dv, Just _) <- shSys]

and

  n₁ = spanNorm $ [dv ^* η | (dv, Just η) <- shSys] ++ [dv | (dv, Nothing) <- shSys]

You may also interpret a Nothing here as an "infinite eigenvalue", i.e. it is so small as a spanning vector of n₀ that you would need to scale it by ∞ to use it for spanning n₁.

sharedSeminormSpanningSystem': A system of vectors which are orthogonal with respect to both of the given seminorms. (In general they are not orthonormal to either of them.)

dependence: Interpret a variance as a covariance between two subspaces, and normalise it by the variance on u. The result is effectively the linear regression coefficient of a simple regression of the vectors spanning the variance.

linearRegression: Simple wrapper of linearRegressionW.

Norm instances:
  Semigroup: (m<>n |$| v)² ≈ (m|$|v)² + (n|$|v)²
  Monoid: mempty |$| v ≡ 0

Parameters of the eigen-system functions:
  The notion of orthonormality.
  Error bound for deviations from eigen-ness.
  Operator to calculate the eigensystem of. Must be Hermitian WRT the scalar product defined by the given metric.
  Starting vector(s) for the power method.
  Result: infinite sequence of ever more accurate approximations to the eigensystem of the operator.
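As a plain illustration of what the ev_Deviation / ev_Badness fields measure for an approximate eigenpair, here is a small sketch using only the vector-space package; eigenPairDeviation is a hypothetical helper, not a library function:

> import Data.VectorSpace
>
> -- For an approximate eigenpair (λ, v) of a matrix-free operator f,
> -- compute the deviation of v from (f v) scaled back by 1/λ, together
> -- with its squared norm.  For a true eigenvector both would be zero.
> eigenPairDeviation :: (InnerSpace v, Scalar v ~ Double)
>                    => (v -> v) -> Double -> v -> (v, Double)
> eigenPairDeviation f lambda v = (d, magnitudeSq d)
>   where d = (f v ^/ lambda) ^-^ v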
$!"#%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwx|~}NLMwSKPQRTUOVpq|}~-./0123456789:;<=>? $!"#%&'()*ijklmno[\]^_`abcdefgh vx rstuZWDEF YXIJGHBC+,@A76001    !"#$%&'()*+,-./01233456789:;<=>?@ABCDEFGGHHIJKLLMMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxy z{|}~       1linearmap-category-0.4.0.1-ITdXYhY7mUOJZ550UK5VxB Math.VectorSpace.ZeroDimensionalMath.LinearMap.Category#Math.LinearMap.Category.DerivativesMath.LinearMap.Asserted LinearMapMath.LinearMap.Category.Class!Math.LinearMap.Category.InstancesMath.VectorSpace.Docile"Math.LinearMap.Category.TensorQuot-manifolds-core-0.5.1.0-7erFyQcGCyw3Wj1JfXnH10)Math.Manifold.VectorSpace.ZeroDimensionalOriginZeroDimBilinear-+>LinearFunctiongetLinearFunctionaddVbilinearFunction flipBilinscaleinner Fractional' DualSpaceLSpace⊗+>TensorgetTensorProduct getLinearMap LinearSpace DualVectordualSpaceWitnesslinearIdidTensorsampleLinearFunction toLinearFormfromLinearFormcoerceDoubleDualtracecontractTensorMapcontractMapTensorcontractTensorFncontractLinearMapAgainstapplyDualVector applyLinear composeLineartensorIdapplyTensorFunctionalapplyTensorLinMapDualSpaceWitness TensorSpace TensorProductscalarSpaceWitnesslinearManifoldWitness zeroTensor toFlatTensorfromFlatTensor addTensorssubtractTensors scaleTensor negateTensor tensorProducttensorProductstransposeTensor fmapTensorfzipTensorWithcoerceFmapTensorProductwellDefinedVectorwellDefinedTensorLinearManifoldWitnessScalarSpaceWitnessNum'closedScalarWitnesstrivialTensorWitnessTrivialTensorWitnessClosedScalarWitness⊕>+<lfun⊗〃+>SymmetricTensor SymTensorgetSymmetricTensor<.>^squareVsquareVs currySymBilin SimpleSpace RealFloat' RealFrac' HilbertSpaceFiniteDimensionalSubBasis entireBasisenumerateSubBasissubbasisDimensiondecomposeLinMapdecomposeLinMapWithin recomposeSBrecomposeSBTensorrecomposeLinMaprecomposeContraLinMaprecomposeContraLinMapTensoruncanonicallyFromDualuncanonicallyToDual SemiInnerdualBasisCandidatestensorDualBasisCandidatessymTensorDualBasisCandidates"symTensorTensorDualBasisCandidatescartesianDualBasisCandidatesembedFreeSubspace\$ pseudoInverserieszcoRieszshowsPrecAsRiesz.<.⊗adjoint·/∂*∂.∂LinearRegressionResultlinearFit_χν²linearFit_bestModellinearFit_modelUncertaintyLinearShowable Eigenvector ev_Eigenvalueev_Eigenvectorev_FunctionApplied ev_Deviation ev_Badness RealSpaceVarianceSeminormNorm applyNorm-+|>spanNorm spanVariance relaxNorm scaleNorm euclideanNormdualNorm dualNorm' transformNormtransformVariancefindNormalLength normalLength<$|normSq|$||&> densifyNormwellDefinedNormconstructEigenSystemfinishEigenSystemroughEigenSystemeigenroughDetnormSpanningSystemnormSpanningSystem'varianceSpanningSystemsharedNormSpanningSystemsharedSeminormSpanningSystemsharedSeminormSpanningSystem' dependencesummandSpaceNormssumSubspaceNormsconvexPolytopeHullsymmetricPolytopeOuterVerticeslinearRegressionWlinearRegression $fShowNorm$fSemigroupNorm $fMonoidNorm$fShowEigenvector(vector-space-0.16-AfBGClPmXjvBom1MmjmC9wData.VectorSpace VectorSpace5constrained-categories-0.4.0.0-6M3GlFaQ9f9BdWcYMS4DQbControl.Arrow.ConstrainedarrelacslinearFunction scaleWithscaleVconst0lNegateV fmapScalelCoFstlCoSndbiConst0lApply-+$>&&& argFromTensor argAsTensordeferLinearMaphasteLinearMapcoCurryLinearMapcoUncurryLinearMapcurryLinearMapuncurryLinearMapGenericNeedle'getGenericNeedle'GenericTupleDual fmapLinearMapasTensor fromTensor asLinearMap fromLinearMappseudoFmapTensorLHSpseudoPrecomposeLinmapenvTensorLHSCoercionenvLinmapPrecomposeCoercion<⊕ lfstBlock lsndBlock lassocTensor 
rassocTensoruncurryLinearFnsampleLinearFunctionFn fromLinearFn asLinearFnexposeLinearFngenericTensorspaceErrorLinearApplicativeSpacegetLinearApplicativeSpaceℝ Data.BasisBasis$unsafeLeftInverseunsafeRightInverse unsafeInversebaseGHC.ShowShow Data.OldList transposeTensorDecomposabletensorDecompositionshowsPrecBasisRieszDecomposablerieszDecomposition ZeroBasis LinMapBasis SymTensBasis TensorBasis TupleBasisV4BasisV3BasisV2BasisV1Basis RealsBasisV0BasisDListorthonormaliseDuals dualBasis dualBasis' zipTravWith⊗<$>^/^recomposeMultiple tensorLinmapDecompositionhelperssRieszrieszDecomposeShowsPrectensorDecomposeShowsPrec^ multiSplit*^GHC.Num* InnerSpace<.> TensorQuot⨸^* adhocNorm^%