hmatrix-0.15.2.0 -- recovered Haddock documentation
Maintainer: Alberto Ruiz <aruiz@um.es>. Stability: provisional. Portability: portable (uses FFI).

Internal common utilities:
  Description of GSL error codes; error codes for the auxiliary functions required by the wrappers; check the error code; error capture and conversion to Maybe; clear the FPU.
  splitEvery 3 [1..9] == [[1,2,3],[4,5,6],[7,8,9]]
  Obtains the common value of a property of a list; common value with "adaptable" 1.
  Formatting tool; postfix function application (flip ($)); specialized fromIntegral.

Data.Packed.Vector:
  dim: number of elements.
  fromList: creates a Vector from a list:
    > fromList [2,3,5,7]
    4 |> [2.0,3.0,5.0,7.0]
  toList: extracts the Vector elements to a list:
    > toList (linspace 5 (1,10))
    [1.0,3.25,5.5,7.75,10.0]
  (|>): an alternative to fromList with explicit dimension. The input list is explicitly truncated if it is too long, so it may safely be used, for instance, with infinite lists. This is the format used in the instances for Show (Vector a).
  at': access to Vector elements without range checking.
  Access to Vector elements with range checking.
  (@>): reads a vector position:
    > fromList [0..9] @> 7
    7.0
  subVector: takes a number of consecutive elements from a Vector. Arguments: index of the starting element, number of elements to extract, source vector; returns the result.
    > subVector 2 3 (fromList [1..10])
    3 |> [3.0,4.0,5.0]
  join: creates a new Vector by joining a list of Vectors:
    > join [fromList [1..5], constant 1 3]
    8 |> [1.0,2.0,3.0,4.0,5.0,1.0,1.0,1.0]
  takesV: extracts consecutive subvectors of the given sizes:
    > takesV [3,4] (linspace 10 (1,10))
    [3 |> [1.0,2.0,3.0],4 |> [4.0,5.0,6.0,7.0]]
  asReal: transforms a complex vector into a real vector with alternating real and imaginary parts.
  asComplex: transforms a real vector into a complex vector with alternating real and imaginary parts.
  mapVector: map on Vectors.  zipVectorWith: zipWith for Vectors.  unzipVectorWith: unzipWith for Vectors.
  mapVectorM, mapVectorM_: monadic map over Vectors (the monad m must be strict).
  mapVectorWithIndexM, mapVectorWithIndexM_, mapVectorWithIndex: monadic map over Vectors with the zero-indexed index passed to the mapping function (the monad m must be strict).
  fscanfVector: loads a vector from an ASCII file (the number of elements must be known in advance).
  fprintfVector: saves the elements of a vector, with a given format (%f, %e, %g), to an ASCII file.
  freadVector: loads a vector from a binary file (the number of elements must be known in advance).
  fwriteVector: saves the elements of a vector to a binary file.
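A minimal usage sketch of the vector functions listed above (a hypothetical example; it assumes the hmatrix 0.15 module layout, with linspace provided by Numeric.Container):

  import Data.Packed.Vector
  import Numeric.Container (linspace)

  vectorDemo :: IO ()
  vectorDemo = do
      let v = fromList [2,3,5,7] :: Vector Double
      print (subVector 1 2 v)                               -- 2 |> [3.0,5.0]
      print (mapVectorWithIndex (\i x -> fromIntegral i * x) v)
      print (takesV [2,3] (linspace 5 (1,10) :: Vector Double))   -- splits into two pieces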
Numeric.GSL.Vector:
  randomVector: obtains a vector of pseudorandom elements from the mt19937 generator in GSL, with a given seed. Use randomIO to get a random seed. Arguments: seed, distribution, vector size.
  RandDist: Gaussian is the normal distribution with mean zero and standard deviation one; Uniform is the uniform distribution in [0,1).
  Sum of elements, product of elements, and dot product (one low-level variant per supported element type).
  Obtains different functions of a vector: norm1, norm2, max, min, posmax, posmin, etc. (a reduced variant computes only norm1 and norm2).
  Map of real/complex vectors with a given function; elementwise operations on real/complex vectors.

Data.Packed.Matrix:
  Element: supported matrix elements. This class provides optimized internal operations for selected element types. It provides unoptimised defaults for any other type, so you can create instances simply as: instance Element Foo.
  Matrix: matrix representation suitable for GSL and LAPACK computations. The elements are stored in a continuous memory array.
  trans: matrix transpose.
  flatten: creates a vector by concatenation of rows. If the matrix is ColumnMajor, this operation requires a transpose.
    > flatten (ident 3)
    9 |> [1.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,1.0]
  toLists: the inverse of fromLists.
  fromRows: creates a matrix from a list of vectors. All vectors must have the same dimension, or dimension 1, which are automatically expanded.
  toRows: extracts the rows of a matrix as a list of vectors.
  fromColumns: creates a matrix from a list of vectors, as columns.
  toColumns: creates a list of vectors from the columns of a matrix.
  (@@>): reads a matrix position.
  reshape: creates a matrix from a vector by grouping the elements in rows with the desired number of columns. (GNU-Octave groups by columns. To do it you can define reshapeF r = trans . reshape r, where r is the desired number of rows.)
    > reshape 4 (fromList [1..12])
    (3><4)
     [ 1.0,  2.0,  3.0,  4.0
     , 5.0,  6.0,  7.0,  8.0
     , 9.0, 10.0, 11.0, 12.0 ]
  liftMatrix: application of a vector function on the flattened matrix elements.
  liftMatrix2: application of a vector function on the flattened elements of two matrices.
  subMatrix: extracts a submatrix from a matrix. Arguments: (r0,c0) starting position, (rt,ct) dimensions of submatrix, input matrix; returns the result.
  saveMatrix: saves a matrix as a 2D ASCII table. Takes the output format (%f, %g, %e).

Numeric.Conversion:
  Complexable: structures that may contain complex numbers.
  RealElement: supported real types.
  Precision: supported single-double precision type pairs.
  toComplex: creates a complex vector from vectors with real and imaginary parts.
  fromComplex: the inverse of toComplex.
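A small sketch of the matrix construction functions above (a hypothetical example; it assumes Data.Packed re-exports the vector and matrix modules, as in hmatrix 0.15):

  import Data.Packed

  matrixDemo :: Matrix Double
  matrixDemo =
      let m = reshape 4 (fromList [1..12])   -- (3><4), rows of 4 consecutive elements
      in  subMatrix (1,1) (2,2) m            -- the 2x2 block starting at row 1, column 1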
Data.Packed.Foreign:
  If we use unsafePerformIO, it may not get inlined, so in a function that returns IO (which are all safe uses of app* in this module), there would be unnecessary calls to unsafePerformIO or its internals.
  app: only useful since it is left associated with a precedence of 1, unlike ($), which is right associative. E.g.
    someFunction `appMatrixLen` m `appVectorLen` v `app` other `app` arguments `app` go here
  One could also write:
    (someFunction `appMatrixLen` m `appVectorLen` v) other arguments (go here)
  appMatrixRaw: this will disregard the order of the matrix, and simply return it as-is. If the order of the matrix is RowMajor, this function is identical to appMatrix.

Numeric.GSL.Differentiation (uses FFI; Alberto Ruiz, aruiz at um dot es):
  Conversion of Haskell functions into function pointers that can be used in the C side.
  derivCentral: adaptive central difference algorithm, gsl_deriv_central. For example:
    > let deriv = derivCentral 0.01
    > deriv sin (pi/4)
    (0.7071067812000676,1.0600063101654055e-10)
    > cos (pi/4)
    0.7071067811865476
  derivForward: adaptive forward difference algorithm, gsl_deriv_forward. The function is evaluated only at points greater than x, and never at x itself. The derivative is returned in result and an estimate of its absolute error is returned in abserr. This function should be used if f(x) has a discontinuity at x, or is undefined for values less than x. A backward derivative can be obtained using a negative step.
  derivBackward: adaptive backward difference algorithm, gsl_deriv_backward.
  Arguments of the derivative functions: initial step size, function, point where the derivative is taken; the result is the derivative and its absolute error.

Numeric.GSL.Integration:
  Conversion of Haskell functions into function pointers that can be used in the C side.
  integrateQAGS: numerical integration using gsl_integration_qags (adaptive integration with singularities). For example:
    > let quad = integrateQAGS 1E-9 1000
    > let f a x = x**(-0.5) * log (a*x)
    > quad (f 1) 0 1
    (-3.999999999999974,4.871658632055187e-13)
  integrateQNG: numerical integration using gsl_integration_qng (useful for fast integration of smooth functions). For example:
    > let quad = integrateQNG 1E-6
    > quad (\x -> 4/(1+x*x)) 0 1
    (3.141592653589793,3.487868498008632e-14)
  integrateQAGI: numerical integration using gsl_integration_qagi (integration over the infinite interval -Inf..Inf using QAGS). For example:
    > let quad = integrateQAGI 1E-9 1000
    > let f a x = exp(-a * x^2)
    > quad (f 0.5)
    (2.5066282746310002,6.229215880648858e-11)
  integrateQAGIU: numerical integration using gsl_integration_qagiu (integration over the semi-infinite interval a..Inf). For example:
    > let quad = integrateQAGIU 1E-9 1000
    > let f a x = exp(-a * x^2)
    > quad (f 0.5) 0
    (1.2533141373155001,3.114607940324429e-11)
  integrateQAGIL: numerical integration using gsl_integration_qagil (integration over the semi-infinite interval -Inf..b). For example:
    > let quad = integrateQAGIL 1E-9 1000
    > let f a x = exp(-a * x^2)
    > quad (f 0.5) 0
    (1.2533141373155001,3.114607940324429e-11)
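A runnable sketch combining the differentiation and integration routines above (argument order taken from the examples; the values in comments are approximate expectations, not verified output):

  import Numeric.GSL.Differentiation (derivCentral)
  import Numeric.GSL.Integration (integrateQAGS)

  main :: IO ()
  main = do
      -- derivative of exp at 1: expect (about 2.71828, error estimate)
      print (derivCentral 0.01 exp 1)
      -- integral of sin over [0,pi]: expect (about 2.0, error estimate)
      print (integrateQAGS 1E-9 1000 sin 0 pi)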
  integrateCQUAD: numerical integration using gsl_integration_cquad (quadrature for general integrands). From the GSL manual: "CQUAD is a new doubly-adaptive general-purpose quadrature routine which can handle most types of singularities, non-numerical function values such as Inf or NaN, as well as some divergent integrals. It generally requires more function evaluations than the integration routines in QUADPACK, yet fails less often for difficult integrands." For example:
    > let quad = integrateCQUAD 1E-12 1000
    > let f a x = exp(-a * x^2)
    > quad (f 0.5) 2 5
    (5.7025405463957006e-2,9.678874441303705e-16,95)
  Unlike other quadrature methods, integrateCQUAD also returns the number of function evaluations required.
  Common arguments of the integration functions: precision (e.g. 1E-9), size of auxiliary workspace (e.g. 1000), function to be integrated on the interval (a,b) / (-Inf,Inf) / (a,Inf) / (-Inf,b), the interval limits, and the result of the integration with its error (integrateCQUAD additionally returns the number of function evaluations performed).

Numeric.GSL.Fourier:
  fft: fast 1D Fourier transform of a Vector (Complex Double) using gsl_fft_complex_forward. It uses the same scaling conventions as GNU Octave.
    > fft (fromList [1,2,3,4])
    vector (4) [10.0 :+ 0.0,(-2.0) :+ 2.0,(-2.0) :+ 0.0,(-2.0) :+ (-2.0)]
  ifft: the inverse of fft, using gsl_fft_complex_inverse.

Numeric.GSL.Polynomials:
  polySolve: solution of general polynomial equations, using gsl_poly_complex_solve. For example, the three solutions of x^3 + 8 = 0:
    > polySolve [8,0,0,1]
    [(-1.9999999999999998) :+ 0.0,
     1.0 :+ 1.732050807568877,
     1.0 :+ (-1.732050807568877)]
  The example in the GSL manual: to find the roots of x^5 - 1 = 0:
    > polySolve [-1, 0, 0, 0, 0, 1]
    [(-0.8090169943749475) :+ 0.5877852522924731,
     (-0.8090169943749475) :+ (-0.5877852522924731),
     0.30901699437494734 :+ 0.9510565162951536,
     0.30901699437494734 :+ (-0.9510565162951536),
     1.0 :+ 0.0]

Numeric.GSL.Internal:
  Conversion of Haskell functions into function pointers that can be used in the C side.

Numeric.GSL.ODE:
  ODEMethod: stepping functions.
  MSBDF: a variable-coefficient linear multistep backward differentiation formula (BDF) method in Nordsieck form. This stepper uses the explicit BDF formula as predictor and implicit BDF formula as corrector. A modified Newton iteration method is used to solve the system of non-linear equations. Method order varies dynamically between 1 and 5. The method is generally suitable for stiff problems.
  MSAdams: a variable-coefficient linear multistep Adams method in Nordsieck form. This stepper uses explicit Adams-Bashforth (predictor) and implicit Adams-Moulton (corrector) methods in P(EC)^m functional iteration mode. Method order varies dynamically between 1 and 12.
  RK1imp: implicit Gaussian first order Runge-Kutta. Also known as implicit Euler or backward Euler method. Error estimation is carried out by the step doubling method.
  BSimp: implicit Bulirsch-Stoer method of Bader and Deuflhard. The method is generally suitable for stiff problems.
  RK4imp: implicit 4th order Runge-Kutta at Gaussian points.
  RK2imp: implicit 2nd order Runge-Kutta at Gaussian points.
  RK8pd: embedded Runge-Kutta Prince-Dormand (8,9) method.
  RKck: embedded Runge-Kutta Cash-Karp (4,5) method.
  RKf45: embedded Runge-Kutta-Fehlberg (4,5) method. This method is a good general-purpose integrator.
  RK4: 4th order (classical) Runge-Kutta. The error estimate is obtained by halving the step-size. For a more efficient estimate of the error, use the embedded methods.
  RK2: embedded Runge-Kutta (2,3) method.
  odeSolve: a version of odeSolveV with reasonable default parameters and the system of equations defined using lists. Arguments: xdot(t,x), initial conditions, desired solution times; returns the solution.
  odeSolveV: evolution of the system with adaptive step-size control. Arguments: stepping method, initial step size, absolute tolerance for the state vector, relative tolerance for the state vector, xdot(t,x), optional Jacobian, initial conditions, desired solution times; returns the solution.
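A hypothetical sketch of odeSolve for a harmonic oscillator, assuming the list-based interface described above (xdot, initial conditions, solution times):

  import Numeric.GSL.ODE (odeSolve)
  import Numeric.LinearAlgebra (Matrix, linspace)

  -- x'' = -x written as a first-order system with state [x, v]
  xdot :: Double -> [Double] -> [Double]
  xdot _t [x, v] = [v, -x]
  xdot _  _      = error "expected two state variables"

  oscillator :: Matrix Double
  oscillator = odeSolve xdot [1, 0] (linspace 100 (0, 10))
  -- each row of the result is the state [x, v] at the corresponding requested time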
Data.Packed.Matrix (block and structural operations):
  joinVert: creates a matrix from a vertical list of matrices.
  joinHoriz: creates a matrix from a horizontal list of matrices.
  fromBlocks: creates a matrix from blocks given as a list of lists of matrices. Single row/column components are automatically expanded to match the corresponding common row and column:
    > let disp = putStr . dispf 2
    > let vector xs = fromList xs :: Vector Double
    > let diagl = diag . vector
    > let rowm = asRow . vector
    > disp $ fromBlocks [[ident 5, 7, rowm[10,20]], [3, diagl[1,2,3], 0]]
    8x10
    1  0  0  0  0  7  7  7  10  20
    0  1  0  0  0  7  7  7  10  20
    0  0  1  0  0  7  7  7  10  20
    0  0  0  1  0  7  7  7  10  20
    0  0  0  0  1  7  7  7  10  20
    3  3  3  3  3  1  0  0   0   0
    3  3  3  3  3  0  2  0   0   0
    3  3  3  3  3  0  0  3   0   0
  diagBlock: create a block diagonal matrix.
  flipud: reverse rows.  fliprl: reverse columns.
  diagRect: creates a rectangular diagonal matrix:
    > diagRect 7 (fromList [10,20,30]) 4 5 :: Matrix Double
    (4><5)
     [ 10.0,  7.0,  7.0, 7.0, 7.0
     ,  7.0, 20.0,  7.0, 7.0, 7.0
     ,  7.0,  7.0, 30.0, 7.0, 7.0
     ,  7.0,  7.0,  7.0, 7.0, 7.0 ]
  takeDiag: extracts the diagonal from a rectangular matrix.
  (><): an easy way to create a matrix:
    > (2><3)[1..6]
    (2><3)
     [ 1.0, 2.0, 3.0
     , 4.0, 5.0, 6.0 ]
  This is the format produced by the instances of Show (Matrix a), which can also be used for input. The input list is explicitly truncated, so that it can safely be used with lists that are too long (like infinite lists). Example:
    > (2><3)[1..]
    (2><3)
     [ 1.0, 2.0, 3.0
     , 4.0, 5.0, 6.0 ]
  takeRows: creates a matrix with the first n rows of another matrix.  dropRows: creates a copy of a matrix without the first n rows.
  takeColumns: creates a matrix with the first n columns of another matrix.  dropColumns: creates a copy of a matrix without the first n columns.
  fromLists: creates a Matrix from a list of lists (considered as rows):
    > fromLists [[1,2],[3,4],[5,6]]
    (3><2)
     [ 1.0, 2.0
     , 3.0, 4.0
     , 5.0, 6.0 ]
  asRow: creates a 1-row matrix from a vector.  asColumn: creates a 1-column matrix from a vector.
  buildMatrix: creates a Matrix of the specified size using the supplied function to map the row/column position to the value at that row/column position:
    > buildMatrix 3 4 (\(r,c) -> fromIntegral r * fromIntegral c)
    (3><4)
     [ 0.0, 0.0, 0.0, 0.0
     , 0.0, 1.0, 2.0, 3.0
     , 0.0, 2.0, 4.0, 6.0 ]
  Hilbert matrix of order N: hilb n = buildMatrix n n (\(i,j) -> 1/(fromIntegral i + fromIntegral j + 1))
  extractRows: rearranges the rows of a matrix according to the order given in a list of integers.
  repmat: creates a matrix by repetition of a matrix a given number of rows and columns:
    > repmat (ident 2) 2 3 :: Matrix Double
    (4><6)
     [ 1.0, 0.0, 1.0, 0.0, 1.0, 0.0
     , 0.0, 1.0, 0.0, 1.0, 0.0, 1.0
     , 1.0, 0.0, 1.0, 0.0, 1.0, 0.0
     , 0.0, 1.0, 0.0, 1.0, 0.0, 1.0 ]
  liftMatrix2Auto: a version of liftMatrix2 which automatically adapts matrices with a single row or column to match the dimensions of the other matrix.
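A sketch using the block constructors above (assuming these functions are re-exported by Numeric.LinearAlgebra, as in hmatrix 0.15; scalar blocks rely on the Num instance of Matrix):

  import Numeric.LinearAlgebra

  blockDemo :: Matrix Double
  blockDemo = fromBlocks [[ident 3, 0], [0, diag (fromList [1,2,3])]]
  -- the 0 blocks are singletons, automatically expanded to the common block size

  hilb :: Int -> Matrix Double
  hilb n = buildMatrix n n (\(i,j) -> 1 / (fromIntegral i + fromIntegral j + 1))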
  toBlocks: partition a matrix into blocks with the given numbers of rows and columns. The remaining rows and columns are discarded.
  toBlocksEvery: fully partition a matrix into blocks of the same size. If the dimensions are not a multiple of the given size the last blocks will be smaller.
  mapMatrixWithIndexM_, mapMatrixWithIndexM, mapMatrixWithIndex: map over a matrix with the (row,column) index passed to the mapping function:
    ghci> mapMatrixWithIndexM_ (\(i,j) v -> printf "m[%.0f,%.0f] = %.f\n" i j v :: IO()) ((2><3)[1 :: Double ..])
    m[0,0] = 1
    m[0,1] = 2
    m[0,2] = 3
    m[1,0] = 4
    m[1,1] = 5
    m[1,2] = 6
    ghci> mapMatrixWithIndexM (\(i,j) v -> Just $ 100*v + 10*i + j) (ident 3 :: Matrix Double)
    Just (3><3)
     [ 100.0,   1.0,   2.0
     ,  10.0, 111.0,  12.0
     ,  20.0,  21.0, 122.0 ]
    ghci> mapMatrixWithIndex (\(i,j) v -> 100*v + 10*i + j) (ident 3 :: Matrix Double)
    (3><3)
     [ 100.0,   1.0,   2.0
     ,  10.0, 111.0,  12.0
     ,  20.0,  21.0, 122.0 ]

Numeric.GSL.Minimization:
  uniMinimize: one-dimensional minimization. Arguments: method, desired precision of the solution, maximum number of iterations allowed, function to minimize, guess for the location of the minimum, lower bound of the search interval, upper bound of the search interval; returns the solution and the optimization path.
  minimize: minimization without derivatives.  minimizeV: minimization without derivatives (vector version). Arguments: method, desired precision of the solution (size test), maximum number of iterations allowed, sizes of the initial search box, function to minimize, starting point; returns the solution vector and the optimization path.
  minimizeD: minimization with derivatives.  minimizeVD: minimization with derivatives (vector version). Arguments: method, desired precision of the solution (gradient test), maximum number of iterations allowed, size of the first trial step, tol (precise meaning depends on method), function to minimize, gradient, starting point; returns the solution vector and the optimization path.
  MinimizeMethod, MinimizeMethodD, UniMinimizeMethod: the method used.

Numeric.GSL.Root:
  root: nonlinear multidimensional root finding using algorithms that do not require any derivative information to be supplied by the user. Any derivatives needed are approximated by finite differences. Arguments: method, maximum residual, maximum number of iterations allowed, function to minimize, starting point; returns the solution vector and the optimization path.
  rootJ: nonlinear multidimensional root finding using both the function and its derivatives. Takes additionally the Jacobian.
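A hypothetical sketch of derivative-free minimization with the list interface above (NMSimplex2 is one of the MinimizeMethod constructors; argument order follows the parameter list given above):

  import Numeric.GSL.Minimization

  -- minimize f(x,y) = (x-1)^2 + (y-2)^2, starting from (0,0)
  f :: [Double] -> Double
  f [x, y] = (x - 1)^2 + (y - 2)^2
  f _      = error "expected two variables"

  minDemo :: IO ()
  minDemo = do
      let (sol, _path) = minimize NMSimplex2 1e-4 200 [1, 1] f [0, 0]
      print sol   -- expected to be close to [1.0, 2.0]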
Numeric.LinearAlgebra.LAPACK:
  multiplyR, multiplyC, multiplyF, multiplyQ: matrix product based on BLAS's dgemm, zgemm, sgemm and cgemm, respectively.
  svdR, svdRd: full SVD of a real matrix, using LAPACK's dgesvd and dgesdd.
  svdC, svdCd: full SVD of a complex matrix, using LAPACK's zgesvd and zgesdd.
  thinSVDR, thinSVDC: thin SVD of a real/complex matrix, using LAPACK's dgesvd/zgesvd with jobu == jobvt == 'S'.
  thinSVDRd, thinSVDCd: thin SVD of a real/complex matrix, using LAPACK's dgesdd/zgesdd with jobz == 'S'.
  svR, svC: singular values of a real/complex matrix, using LAPACK's dgesvd/zgesvd with jobu == jobvt == 'N'.
  svRd, svCd: singular values of a real/complex matrix, using LAPACK's dgesdd/zgesdd with jobz == 'N'.
  rightSVR, rightSVC: singular values and all right singular vectors of a real/complex matrix, using LAPACK's dgesvd/zgesvd with jobu == 'N' and jobvt == 'A'.
  leftSVR, leftSVC: singular values and all left singular vectors of a real/complex matrix, using LAPACK's dgesvd/zgesvd with jobu == 'A' and jobvt == 'N'.
  eigC: eigenvalues and right eigenvectors of a general complex matrix, using LAPACK's zgeev. The eigenvectors are the columns of v. The eigenvalues are not sorted.
  eigOnlyC: eigenvalues of a general complex matrix, using LAPACK's zgeev with jobz == 'N'. The eigenvalues are not sorted.
  eigR: eigenvalues and right eigenvectors of a general real matrix, using LAPACK's dgeev. The eigenvectors are the columns of v. The eigenvalues are not sorted.
  eigOnlyR: eigenvalues of a general real matrix, using LAPACK's dgeev with jobz == 'N'. The eigenvalues are not sorted.
  eigS: eigenvalues and right eigenvectors of a symmetric real matrix, using LAPACK's dsyev. The eigenvectors are the columns of v. The eigenvalues are sorted in descending order (use eigS' for ascending order).
  eigH: eigenvalues and right eigenvectors of a hermitian complex matrix, using LAPACK's zheev. The eigenvectors are the columns of v. The eigenvalues are sorted in descending order (use eigH' for ascending order).
  eigOnlyS: eigenvalues of a symmetric real matrix, using LAPACK's dsyev with jobz == 'N'. The eigenvalues are sorted in descending order.
  eigOnlyH: eigenvalues of a hermitian complex matrix, using LAPACK's zheev with jobz == 'N'. The eigenvalues are sorted in descending order.
  linearSolveR: solve a real linear system (for square coefficient matrix and several right-hand sides) using the LU decomposition, based on LAPACK's dgesv. For underconstrained or overconstrained systems use the least-squares or SVD-based solvers below. See also lusR.
  linearSolveC: solve a complex linear system (for square coefficient matrix and several right-hand sides) using the LU decomposition, based on LAPACK's zgesv. For underconstrained or overconstrained systems use the least-squares or SVD-based solvers below. See also lusC.
  cholSolveR: solves a symmetric positive definite system of linear equations using a precomputed Cholesky factorization obtained by cholS.
  cholSolveC: solves a Hermitian positive definite system of linear equations using a precomputed Cholesky factorization obtained by cholH.
  linearSolveLSR: least squared error solution of an overconstrained real linear system, or the minimum norm solution of an underconstrained system, using LAPACK's dgels. For rank-deficient systems use linearSolveSVDR.
  linearSolveLSC: least squared error solution of an overconstrained complex linear system, or the minimum norm solution of an underconstrained system, using LAPACK's zgels. For rank-deficient systems use linearSolveSVDC.
  linearSolveSVDR: minimum norm solution of a general real linear least squares problem Ax=B using the SVD, based on LAPACK's dgelss. Admits rank-deficient systems but is slower than linearSolveLSR. The effective rank of A is determined by treating as zero those singular values which are less than rcond times the largest singular value. If rcond == Nothing machine precision is used.
  linearSolveSVDC: minimum norm solution of a general complex linear least squares problem Ax=B using the SVD, based on LAPACK's zgelss. Admits rank-deficient systems but is slower than linearSolveLSC. The effective rank of A is determined by treating as zero those singular values which are less than rcond times the largest singular value. If rcond == Nothing machine precision is used.
  cholH: Cholesky factorization of a complex Hermitian positive definite matrix, using LAPACK's zpotrf.
  cholS: Cholesky factorization of a real symmetric positive definite matrix, using LAPACK's dpotrf.
  mbCholH, mbCholS: the same factorizations, returning Maybe.
  qrR: QR factorization of a real matrix, using LAPACK's dgeqr2.  qrC: QR factorization of a complex matrix, using LAPACK's zgeqr2.
  hessR: Hessenberg factorization of a square real matrix, using LAPACK's dgehrd.  hessC: Hessenberg factorization of a square complex matrix, using LAPACK's zgehrd.
  schurR: Schur factorization of a square real matrix, using LAPACK's dgees.  schurC: Schur factorization of a square complex matrix, using LAPACK's zgees.
  luR: LU factorization of a general real matrix, using LAPACK's dgetrf.  luC: LU factorization of a general complex matrix, using LAPACK's zgetrf.
  lusR: solve a real linear system from a precomputed LU decomposition (luR), using LAPACK's dgetrs.  lusC: solve a complex linear system from a precomputed LU decomposition (luC), using LAPACK's zgetrs.
  Arguments of the least-squares solvers: rcond, coefficient matrix, right hand sides (as columns); the result contains the solution vectors (as columns).

Data.Packed:
  buildVector: creates a Vector of the specified length using the supplied function to map the index to the value at that index:
    > buildVector 4 fromIntegral
    4 |> [0.0,1.0,2.0,3.0]
  zipVector: zip for Vectors.  unzipVector: unzip for Vectors.

Numeric.Container:
  Product: matrix product and related functions.
  multiply: matrix product.  dot: dot (inner) product.
  absSum: sum of absolute values of elements (differs in the complex case from norm1).
  norm1: sum of absolute values of elements.  norm2: euclidean norm.  normInf: element of maximum magnitude.
  Container: basic element-by-element functions for numeric containers.
  scalar: create a structure with a single element.
  conj: complex conjugate.
  scaleRecip: scale the element-by-element reciprocal of the object: scaleRecip 2 (fromList [5,i]) == 2 |> [0.4 :+ 0.0, 0.0 :+ (-2.0)]
  mul: element-by-element multiplication.  divide: element-by-element division.
  cmap: map over a container (we cannot implement instance Functor because of the Element class constraint).
  konst: constant structure of a given size.
  build: create a structure using a function. Hilbert matrix of order N: hilb n = build (n,n) (\i j -> 1/(i+j+1))
  atIndex: indexing function.
  minIndex: index of the minimum element.  maxIndex: index of the maximum element.
  minElement: value of the minimum element.  maxElement: value of the maximum element.
  sumElements: the sum of elements (faster than using fold).
  prodElements: the product of elements (faster than using fold).
  step: a more efficient implementation of cmap (\x -> if x>0 then 1 else 0):
    > step $ linspace 5 (-1,1::Double)
    5 |> [0.0,0.0,0.0,1.0,1.0]
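A small sketch of the Container functions above (it assumes Numeric.Container re-exports the Vector and Matrix types, as in hmatrix 0.15):

  import Numeric.Container

  containerDemo :: IO ()
  containerDemo = do
      let v = linspace 5 (-1, 1) :: Vector Double
      print (step v)                    -- 5 |> [0.0,0.0,0.0,1.0,1.0]
      print (cmap (^2) v)               -- element-by-element square
      print (sumElements v)             -- about 0.0 for this symmetric range
      print (build (3,3) (\i j -> 1/(i+j+1)) :: Matrix Double)   -- Hilbert matrix of order 3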
  cond: element-by-element version of case compare a b of {LT -> l; EQ -> e; GT -> g}. Arguments with any dimension = 1 are automatically expanded:
    > cond ((1><4)[1..]) ((3><1)[1..]) 0 100 ((3><4)[1..]) :: Matrix Double
    (3><4)
     [ 100.0,   2.0,   3.0,   4.0
     ,   0.0, 100.0,   7.0,   8.0
     ,   0.0,   0.0, 100.0,  12.0 ]
  find: find index of elements which satisfy a predicate:
    > find (>0) (ident 3 :: Matrix Double)
    [(0,0),(1,1),(2,2)]
  assoc: create a structure from an association list. Arguments: size, default value, association list:
    > assoc 5 0 [(2,7),(1,3)] :: Vector Double
    5 |> [0.0,3.0,7.0,0.0,0.0]
  accum: modify a structure using an update function. Arguments: initial structure, update function, association list:
    > accum (ident 5) (+) [((1,1),5),((0,3),3)] :: Matrix Double
    (5><5)
     [ 1.0, 0.0, 0.0, 3.0, 0.0
     , 0.0, 6.0, 0.0, 0.0, 0.0
     , 0.0, 0.0, 1.0, 0.0, 0.0
     , 0.0, 0.0, 0.0, 1.0, 0.0
     , 0.0, 0.0, 0.0, 0.0, 1.0 ]
  outer: outer product of two vectors:
    > fromList [1,2,3] `outer` fromList [5,2,3]
    (3><3)
     [  5.0, 2.0, 3.0
     , 10.0, 4.0, 6.0
     , 15.0, 6.0, 9.0 ]
  kronecker: Kronecker product of two matrices:
    m1=(2><3)
     [ 1.0,  2.0, 0.0
     , 0.0, -1.0, 3.0 ]
    m2=(4><3)
     [  1.0,  2.0,  3.0
     ,  4.0,  5.0,  6.0
     ,  7.0,  8.0,  9.0
     , 10.0, 11.0, 12.0 ]
    > kronecker m1 m2
    (8><9)
     [  1.0,   2.0,   3.0,   2.0,   4.0,   6.0,  0.0,  0.0,  0.0
     ,  4.0,   5.0,   6.0,   8.0,  10.0,  12.0,  0.0,  0.0,  0.0
     ,  7.0,   8.0,   9.0,  14.0,  16.0,  18.0,  0.0,  0.0,  0.0
     , 10.0,  11.0,  12.0,  20.0,  22.0,  24.0,  0.0,  0.0,  0.0
     ,  0.0,   0.0,   0.0,  -1.0,  -2.0,  -3.0,  3.0,  6.0,  9.0
     ,  0.0,   0.0,   0.0,  -4.0,  -5.0,  -6.0, 12.0, 15.0, 18.0
     ,  0.0,   0.0,   0.0,  -7.0,  -8.0,  -9.0, 21.0, 24.0, 27.0
     ,  0.0,   0.0,   0.0, -10.0, -11.0, -12.0, 30.0, 33.0, 36.0 ]
  ctrans: conjugate transpose.
  diag: creates a square matrix with a given diagonal.
  ident: creates the identity matrix of the given dimension.

Numeric.Chain (Vivian McPhail <haskell.vivian.mcphail at gmail.com>):
  optimiseMult: provide optimal association order for a chain of matrix multiplications and apply the multiplications. The algorithm is the well-known O(n^3) dynamic programming algorithm that builds a pyramid of optimal associations.
    m1, m2, m3, m4 :: Matrix Double
    m1 = (10><15) [1..]
    m2 = (15><20) [1..]
    m3 = (20><5)  [1..]
    m4 = (5><10)  [1..]
    >>> optimiseMult [m1,m2,m3,m4]
  will perform ((m1 (m2 m3)) m4). The naive left-to-right multiplication would take 4500 scalar multiplications whereas the optimised version performs 2750 scalar multiplications. The complexity in this case is 32 (= 4^3/2) * (2 comparisons, 3 scalar multiplications, 3 scalar additions, 5 lookups, 2 updates) + a constant (= three table allocations).

Numeric.LinearAlgebra.Algorithms:
  Field: class used to define generic linear algebra computations for both real and complex matrices. Only double precision is supported in this version (we can transform single precision objects using single and double).
  svd: full singular value decomposition.
  thinSVD: a version of svd which returns only the min (rows m) (cols m) singular vectors of m. If (u,s,v) = thinSVD m then m == u <> diag s <> trans v.
  singularValues: singular values only.
  fullSVD: a version of svd which returns an appropriate diagonal matrix with the singular values. If (u,d,v) = fullSVD m then m == u <> d <> trans v.
  compactSVD: similar to thinSVD, returning only the nonzero singular values and the corresponding singular vectors.
  rightSV: singular values and all right singular vectors.  leftSV: singular values and all left singular vectors.
  luPacked: obtains the LU decomposition of a matrix in a compact data structure suitable for luSolve.
  luSolve: solution of a linear system (for several right hand sides) from the precomputed LU factorization obtained by luPacked.
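A sketch checking the thinSVD reconstruction property quoted above (it assumes pnorm and the Frobenius norm type are available from Numeric.LinearAlgebra):

  import Numeric.LinearAlgebra

  svdResidual :: Matrix Double -> Double
  svdResidual m =
      let (u, s, v) = thinSVD m
          m'        = u <> diag s <> trans v     -- reconstruction from the factors
      in  pnorm Frobenius (m - m')               -- should be on the order of eps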
  linearSolve: solve a linear system (for square coefficient matrix and several right-hand sides) using the LU decomposition. For underconstrained or overconstrained systems use linearSolveLS or linearSolveSVD. It is similar to luSolve . luPacked, but it raises an error if called on a singular system.
  cholSolve: solve a symmetric or Hermitian positive definite linear system using a precomputed Cholesky decomposition obtained by chol.
  linearSolveSVD: minimum norm solution of a general linear least squares problem Ax=B using the SVD. Admits rank-deficient systems but is slower than linearSolveLS. The effective rank of A is determined by treating as zero those singular values which are less than eps times the largest singular value.
  linearSolveLS: least squared error solution of an overconstrained linear system, or the minimum norm solution of an underconstrained system. For rank-deficient systems use linearSolveSVD.
  eig: eigenvalues and eigenvectors of a general square matrix. If (s,v) = eig m then m <> v == v <> diag s
  eigenvalues: eigenvalues of a general square matrix.
  eigSH': similar to eigSH without checking that the input matrix is hermitian or symmetric. It works with the upper triangular part.
  eigenvaluesSH': similar to eigenvaluesSH without checking that the input matrix is hermitian or symmetric. It works with the upper triangular part.
  eigSH: eigenvalues and eigenvectors of a complex hermitian or real symmetric matrix. If (s,v) = eigSH m then m == v <> diag s <> ctrans v
  eigenvaluesSH: eigenvalues of a complex hermitian or real symmetric matrix.
  qr: QR factorization. If (q,r) = qr m then m == q <> r, where q is unitary and r is upper triangular.
  rq: RQ factorization. If (r,q) = rq m then m == r <> q, where q is unitary and r is upper triangular.
  hess: Hessenberg factorization. If (p,h) = hess m then m == p <> h <> ctrans p, where p is unitary and h is in upper Hessenberg form (it has zero entries below the first subdiagonal).
  schur: Schur factorization. If (u,s) = schur m then m == u <> s <> ctrans u, where u is unitary and s is a Schur matrix. A complex Schur matrix is upper triangular. A real Schur matrix is upper triangular in 2x2 blocks. "Anything that the Jordan decomposition can do, the Schur decomposition can do better!" (Van Loan)
  mbCholSH: similar to cholSH, but instead of an error (e.g., caused by a matrix not positive definite) it returns Nothing.
  cholSH: similar to chol, without checking that the input matrix is hermitian or symmetric. It works with the upper triangular part.
  chol: Cholesky factorization of a positive definite hermitian or symmetric matrix. If c = chol m then c is upper triangular and m == ctrans c <> c.
  invlndet: joint computation of the inverse and the logarithm of the determinant of a square matrix. Returns (inverse, (log abs det, sign or phase of det)).
  det: determinant of a square matrix. To avoid possible overflow or underflow use invlndet.
  lu: explicit LU factorization of a general matrix. If (l,u,p,s) = lu m then m == p <> l <> u, where l is lower triangular, u is upper triangular, p is a permutation matrix and s is the signature of the permutation.
  inv: inverse of a square matrix. See also invlndet.
  pinv: pseudoinverse of a general matrix with default tolerance (pinvTol 1, similar to GNU-Octave).
  pinvTol: pinvTol r computes the pseudoinverse of a matrix with tolerance tol = r*g*eps*(max rows cols), where g is the greatest singular value:
    > let m = fromLists [[1,0,0],[0,1,0],[0,0,1e-10]]
    > pinv m
    1. 0. 0.
    0. 1. 0.
    0. 0. 10000000000.
    > pinvTol 1E8 m
    1. 0. 0.
    0. 1. 0.
    0. 0. 1.
  rankSVD: numeric rank of a matrix from the SVD decomposition. Arguments: numeric zero (e.g. 1*eps), input matrix m, sv of m; returns the rank of m.
  ranksv: numeric rank of a matrix from its singular values. Arguments: numeric zero (e.g. 1*eps), maximum dimension of the matrix, singular values; returns the rank of m.
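A quick sketch of the square-system solvers above (linearSolve takes the right-hand sides as columns; the values in comments are what the definitions imply):

  import Numeric.LinearAlgebra

  solveDemo :: IO ()
  solveDemo = do
      let a = (3><3) [2,0,0, 0,3,0, 0,0,4] :: Matrix Double
          b = asColumn (fromList [2,6,20])
      print (linearSolve a b)     -- column [1.0, 2.0, 5.0]
      print (det a)               -- 24.0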
  eps: the machine precision of a Double: eps = 2.22044604925031e-16 (the value used by GNU-Octave).
  peps: 1 + 0.5*peps == 1,  1 + 0.6*peps /= 1
  i: the imaginary unit: i = 0.0 :+ 1.0
  nullspaceSVD: the nullspace of a matrix from its SVD decomposition. Arguments: Left "numeric" zero (e.g. 1*eps) or Right "theoretical" matrix rank, the input matrix m, and its SVD; returns a list of unitary vectors spanning the nullspace.
  nullspacePrec: the nullspace of a matrix. See also nullspaceSVD. Arguments: relative tolerance in eps units (e.g., use 3 to get 3*eps), input matrix; returns a list of unitary vectors spanning the nullspace.
  nullVector: the nullspace of a matrix, assumed to be one-dimensional, with machine precision.
  orth: return an orthonormal basis of the range space of a matrix.
  rcond: reciprocal of the 2-norm condition number of a matrix, computed from the singular values.
  rank: number of linearly independent rows or columns.
  matFunc: generic matrix functions for diagonalizable matrices. For instance: logm = matFunc log
  expm: matrix exponential. It uses a direct translation of Algorithm 11.3.1 in Golub & Van Loan, based on a scaled Pade approximation.
  sqrtm: matrix square root. Currently it uses a simple iterative algorithm described in Wikipedia. It only works with invertible matrices that have a real solution. For diagonalizable matrices you can try matFunc sqrt.
    m = (2><2) [4,9
               ,0,4] :: Matrix Double
    > sqrtm m
    (2><2)
     [ 2.0, 2.25
     , 0.0, 2.0 ]
  relativeError: approximate number of common digits in the maximum element.
  geigSH': generalized symmetric positive definite eigensystem Av = lBv, for A and B symmetric, B positive definite (conditions not checked).

Numeric.IO (matrix display and text file I/O):
  format: creates a string from a matrix given a separator and a function to show each entry. Using this function the user can easily define any desired display function:
    import Text.Printf(printf)
    disp = putStr . format " " (printf "%.2f")
  disps: show a matrix with "autoscaling" and a given number of decimal places.
    disp = putStr . disps 2
    > disp $ 120 * (3><4) [1..]
    3x4  E3
    0.12  0.24  0.36  0.48
    0.60  0.72  0.84  0.96
    1.08  1.20  1.32  1.44
  dispf: show a matrix with a given number of decimal places.
    disp = putStr . dispf 3
    > disp (1/3 + ident 4)
    4x4
    1.333  0.333  0.333  0.333
    0.333  1.333  0.333  0.333
    0.333  0.333  1.333  0.333
    0.333  0.333  0.333  1.333
  vecdisp: show a vector using a function for showing matrices.
    disp = putStr . vecdisp (dispf 2)
    > disp (linspace 10 (0,1))
    10 |> 0.00  0.11  0.22  0.33  0.44  0.56  0.67  0.78  0.89  1.00
  latexFormat: tool to display matrices with LaTeX syntax. Arguments: type of braces ("matrix", "bmatrix", "pmatrix", etc.) and a formatted matrix, with elements separated by spaces and newlines.
  Pretty printing of a complex number or a complex matrix with at most n decimal digits.
  readMatrix: reads a matrix from a string containing a table of numbers.
  fileDimensions: obtains the number of rows and columns in an ASCII data file (provisionally using unix's wc).
  loadMatrix: loads a matrix from an ASCII file formatted as a 2D table.
  fromFile: loads a matrix from an ASCII file (the number of rows and columns must be known in advance).
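A small sketch of the display helpers above (this assumes they are re-exported by Numeric.LinearAlgebra; otherwise import them from Numeric.Container or their defining module):

  import Numeric.LinearAlgebra
  import Text.Printf (printf)

  disp :: Matrix Double -> IO ()
  disp = putStr . dispf 2                         -- two decimal places

  dispDemo :: IO ()
  dispDemo = do
      disp (ident 3)
      putStr (format " | " (printf "%.1f") (ident 2 :: Matrix Double))   -- custom separator and entry format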
Data.Packed.Random:
  gaussianSample: obtains a matrix whose rows are pseudorandom samples from a multivariate Gaussian distribution. Arguments: number of rows, mean vector, covariance matrix; returns the result.
  uniformSample: obtains a matrix whose rows are pseudorandom samples from a multivariate uniform distribution. Arguments: number of rows, ranges for each column; returns the result.

High-level operators and constructors:
  (<\>): least squares solution of a linear system, similar to the \ operator of Matlab/Octave (based on linearSolveSVD).
  (<>): matrix-matrix, matrix-vector, and vector-matrix products.
  constant: creates a vector with a given number of equal components:
    > constant 2 7
    7 |> [2.0,2.0,2.0,2.0,2.0,2.0,2.0]
  linspace: creates a real vector containing a range of values:
    > linspace 5 (-3,7)
    5 |> [-3.0,-0.5,2.0,4.5,7.0]
  Logarithmic spacing can be defined as follows: logspace n (a,b) = 10 ** linspace n (a,b)
  (<.>): dot product: u <.> v = dot u v
  meanCov: compute the mean vector and covariance matrix of the rows of a matrix.

Numeric.GSL.Fitting:
  LevenbergMarquardt: this is an unscaled version of the lmder algorithm. The elements of the diagonal scaling matrix D are set to 1. This algorithm may be useful in circumstances where the scaled version of lmder converges too slowly, or the function is already scaled appropriately.
  LevenbergMarquardtScaled: interface to gsl_multifit_fdfsolver_lmsder. This is a robust and efficient version of the Levenberg-Marquardt algorithm as implemented in the scaled lmder routine in minpack. Minpack was written by Jorge J. More, Burton S. Garbow and Kenneth E. Hillstrom.
  nlFitting: nonlinear multidimensional least-squares fitting. Arguments: method, absolute tolerance, relative tolerance, maximum number of iterations allowed, function to be minimized, Jacobian, starting point; returns the solution vector and the optimization path.
  fitModelScaled: higher level interface to nlFitting (LevenbergMarquardtScaled). The optimization function and Jacobian are automatically built from a model f vs x = y and its derivatives, and a list of instances (x, (y, sigma)) to be fitted. Arguments: absolute tolerance, relative tolerance, maximum number of iterations allowed, (model, derivatives), instances, starting point; returns (solution, error) and the optimization path.
  fitModel: higher level interface to nlFitting (LevenbergMarquardt). The optimization function and Jacobian are automatically built from a model f vs x = y and its derivatives, and a list of instances (x, y) to be fitted. Arguments: absolute tolerance, relative tolerance, maximum number of iterations allowed, (model, derivatives), instances, starting point; returns the solution and the optimization path.
  Model-to-residual converters for association pairs (with and without sigma, the latter equivalent to the former with all sigmas = 1), with their associated derivatives, to be used with nlFitting.

Numeric.GSL (uses -fffi and -fglasgow-exts):
  setErrorHandlerOff: this action removes the GSL default error handler (which aborts the program), so that GSL errors can be handled by Haskell (using Control.Exception) and ghci doesn't abort.
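A sketch of a least-squares line fit using the (<\>) operator and linspace described above (a hypothetical example; fitLine and its design matrix are my own names, not part of the library):

  import Numeric.LinearAlgebra

  -- fit y = a + b*x in the least-squares sense
  fitLine :: Vector Double -> Vector Double -> Vector Double
  fitLine x y =
      let designMatrix = fromColumns [constant 1 (dim x), x]   -- columns [1, x]
      in  designMatrix <\> y                                   -- coefficients [a, b]

  coeffs :: Vector Double
  coeffs = fitLine (linspace 5 (0,4)) (fromList [1.0, 3.1, 4.9, 7.2, 9.0])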
Numeric.LinearAlgebra.Util.Convolution:
  corr: correlation. Arguments: kernel, source.
    > corr (fromList[1,2,3]) (fromList [1..10])
    fromList [14.0,20.0,26.0,32.0,38.0,44.0,50.0,56.0]
  conv: convolution (corr with reversed kernel and padded input, equivalent to polynomial product):
    > conv (fromList[1,1]) (fromList [-1,1])
    fromList [-1.0,0.0,1.0]
  corrMin: similar to corr, using min instead of (*).
  corr2: 2D correlation.  conv2: 2D convolution.
  separable: matrix computation implemented as separated vector operations by rows and columns.

Numeric.LinearAlgebra.Util:
  disp: show a matrix with a given number of digits after the decimal point.
  rand: pseudorandom matrix with uniform elements between 0 and 1.
  randn: pseudorandom matrix with normal elements.
  diagl: create a real diagonal matrix from a list.
  zeros: a real matrix of zeros.  ones: a real matrix of ones.  (Arguments: rows, columns.)
  (&): concatenation of real vectors.
  (!), (¦) (U+00A6): horizontal concatenation of real matrices.
  (#): vertical concatenation of real matrices.
  row: create a single-row real matrix from a list.  col: create a single-column real matrix from a list.
  (?): extract selected rows.  (¿) (U+00BF): extract selected columns.
  cross: cross product (for three-element real vectors).
  norm: 2-norm of a real vector.
  unitary: obtains a vector in the same direction with 2-norm = 1.
  size: (rows &&& cols).
  mt: trans . inv
  pairwiseD2: matrix of pairwise squared distances of row vectors (using the matrix product trick in blog.smola.org).
  rowOuters: outer products of rows.
  null1: solution of an overconstrained homogeneous linear system.
  null1sym: solution of an overconstrained homogeneous symmetric linear system.
  vec: stacking of columns.
  vech: half-vectorization (of the lower triangular part).
  dup: duplication matrix (dup k <> vech m == vec m, for symmetric m of dim k).
  vtrans: generalized "vector" transposition: vtrans 1 == trans, and vtrans (rows m) m == asColumn (vec m).
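A sketch of the 1D convolution utilities above (assuming they must be imported from Numeric.LinearAlgebra.Util.Convolution, which is not re-exported by the main module):

  import Numeric.LinearAlgebra (Vector, fromList)
  import Numeric.LinearAlgebra.Util.Convolution (conv, corr)

  -- polynomial product via conv: (1 + x) * (-1 + x) = -1 + 0*x + x^2
  polyProd :: Vector Double
  polyProd = conv (fromList [1,1]) (fromList [-1,1])     -- fromList [-1.0,0.0,1.0]

  -- sliding correlation of a kernel over a signal
  sliding :: Vector Double
  sliding = corr (fromList [1,2,3]) (fromList [1..10])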
Graphics.Plot (uses gnuplot and ImageMagick):
  meshdom: from vectors x and y, it generates a pair of matrices to be used as x and y arguments for matrix functions.
  mesh: draws a 3D surface representation of a real matrix.
    > mesh $ build (10,10) (\i j -> i + (j-5)^2)
  In certain versions you can interactively rotate the graphic using the mouse.
  splot: draws the surface represented by the function f in the desired ranges and number of points, internally using mesh.
    > let f x y = cos (x + y)
    > splot f (0,pi) (0,2*pi) 50
  mplot: plots several vectors against the first one.
    > let t = linspace 100 (-3,3) in mplot [t, sin t, exp (-t^2)]
  plot: draws a list of functions over a desired range and with a desired number of points.
    > plot [sin, cos, sin.(3*)] (0,2*pi) 1000
  parametricPlot: draws a parametric curve. For instance, to draw a spiral we can do something like:
    > parametricPlot (\t->(t * sin t, t * cos t)) (0,10*pi) 1000
  matrixToPGM: writes a matrix to a PGM image file.
  imshow: shows a representation of a matrix as a gray level image using ImageMagick's display.

Modules in hmatrix-0.15.2.0:
  Data.Packed, Data.Packed.Vector, Data.Packed.Matrix, Data.Packed.ST, Data.Packed.Foreign, Data.Packed.Development, Data.Packed.Random,
  Numeric.Container, Numeric.LinearAlgebra, Numeric.LinearAlgebra.Algorithms, Numeric.LinearAlgebra.LAPACK, Numeric.LinearAlgebra.Util, Numeric.LinearAlgebra.Util.Convolution,
  Numeric.GSL, Numeric.GSL.Differentiation, Numeric.GSL.Integration, Numeric.GSL.Fourier, Numeric.GSL.Polynomials, Numeric.GSL.ODE, Numeric.GSL.Minimization, Numeric.GSL.Root, Numeric.GSL.Fitting,
  Graphics.Plot (plus internal modules: Data.Packed.Internal.*, Numeric.GSL.Vector, Numeric.GSL.Internal, Numeric.Conversion, Numeric.Chain, Numeric.IO, Numeric.Matrix, Numeric.Vector, Numeric.ContainerBoot).