-- Hoogle documentation, generated by Haddock -- See Hoogle, http://www.haskell.org/hoogle/ -- | Math using lists, including FFT and Wavelet -- -- This package contains standard one-dimensional mathematical transforms -- (FFT, Wavelet, etc.) applied to Haskell lists. Documentation including -- mathematical details and examples is included to facilitate use with -- small or moderate sized problems, and for educational purposes. The -- algorithms have a very concise representation in Haskell that is a -- direct translation of the mathematical formulations. -- -- Some of the examples use the HaskellR package and the hybrid Haskell/R -- environment that it provides. This permits use of R statistical and -- graphics tools in a Haskell "playground" that requires minimal -- configuration. HaskellR's Jupyter kernel for Haskell is an added -- bonus. The functions of this package do not depend on R or HaskellR. @package mathlist @version 0.1.0.4 -- | This package contains standard mathematical operators like FFT and -- Wavelets applied to Haskell lists. module Math.List -- | Includes an implementation of the fast Fourier transform and its -- inverse using lists. Support for shifting and scaling for easier -- interpretation is included, as is computation of the analytic signal -- (where spectral power is shifted from negative frequencies to positive -- frequencies). The imaginary part of the latter is the Hilbert -- transform. -- -- The FFT might be called the "Feasible Fourier Transform" because in -- many problems using an input vector size that is not a power of 2 can -- result in a huge performance penalty (hours instead of seconds), so it -- is important to keep this in mind when working with large input -- vectors. -- -- The discrete Fourier transform can be viewed as an approximation to -- the Fourier integral of a non-periodic function that is very small -- outside a finite interval. 
Alternatively, and more commonly, it can be -- viewed as a partial scan of a function that may behave arbitrarily -- outside of the scanned (or sampled) points (like an MRI scan of part -- of the human body). When we observe below that the transform is -- periodic, we are referring to properties of the mathematical model, -- not of the signal under study (a periodic MRI scan does not mean the -- human body is periodic!). -- -- A periodic function is not integrable over all real numbers, so it -- doesn't have a Fourier integral (ignoring weak forms of convergence), -- and the Fourier series applies only to periodic functions, so Fourier -- series and the Fourier integral are distinct probes that depend on a -- particular choice of basis. The wavelet transform uses a different -- choice of basis. -- -- Shannon's sampling theorem tells us that if we sample a band-limited -- function fast enough, the function is fully determined by the sampled -- values. In other words, the discrete Shannon wavelet basis is -- sufficient to represent the function exactly in this case. module Math.List.FFT -- | Pure and simple fast Fourier transform following the recursive -- algorithm that appears in Mathematical Foundations of Data -- Sciences by Gabriel Peyre' (2021). The Haskell implementation is a -- direct translation of the mathematical specifications. The input -- vector size <math> must be a power of 2 (the functions -- ft1d and ift1d work with vectors of any size, but they -- may be extremely slow for large input vectors). -- -- Recall that the discrete Fourier transform applied to a vector -- <math> is defined by <math> -- -- As written this has time complexity <math>, but the fast Fourier -- transform algorithm reduces this to <math>. The algorithm can be -- written as follows <math> Here <math> is the Fourier -- transform defined on vectors of size <math>, <math>, and -- <math>, for <math>. Here <math>, and <math> is -- the interleaving operator defined by <math>. 
The operator -- <math> is defined as follows <math> -- -- The algorithm follows easily by considering the even and odd terms in -- the discrete Fourier transform. Let <math>, and let's consider -- the even terms first: -- -- <math> The last term is just the Fourier transform of the vector -- of length <math> given by <math> -- -- For the odd terms we have -- -- <math> The last term is just the Fourier transform of the vector -- of length <math> given by <math> -- -- The recursive algorithm now follows by interleaving the even and the -- odd terms. -- --
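The recursion described above can be written down directly in Haskell. The following is a minimal, unoptimized sketch, not the package's fft itself; the helper names fftSketch and deinterleave are illustrative only. It splits the input into even- and odd-indexed samples, transforms each half recursively, applies the twiddle factors, and combines.

```haskell
import Data.Complex

-- Minimal recursive radix-2 FFT sketch (hypothetical names, not part of
-- this package). Input length must be a power of 2.
fftSketch :: [Complex Double] -> [Complex Double]
fftSketch [x] = [x]
fftSketch xs  = zipWith (+) evens twiddled ++ zipWith (-) evens twiddled
  where
    n        = length xs
    (es, os) = deinterleave xs
    evens    = fftSketch es
    twiddled = zipWith tw [0 ..] (fftSketch os)
    -- Multiply the k-th odd-half coefficient by the twiddle factor e^(-2*pi*i*k/n).
    tw k o   = cis (-2 * pi * fromIntegral (k :: Int) / fromIntegral n) * o

-- Separate even- and odd-indexed entries (undoes the interleaving operator).
deinterleave :: [a] -> ([a], [a])
deinterleave (x : y : zs) = let (es, os) = deinterleave zs in (x : es, y : os)
deinterleave xs           = (xs, [])
```

Applied to the complexified vector [1,2,3,4] this sketch reproduces the [10, -2+2i, -2, -2-2i] result shown in the examples below.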

Examples:

-- --
--   >>> fft [1,2,3,4]
--   
-- --
--   [10.0 :+ 0.0, (-2.0) :+ 2.0, (-2.0) :+ 0.0, (-1.99999) :+ (-2.0)]
--   
-- -- Check reproduction property with N=4 -- --
--   >>> z = fft [1,2,3,4]
--   
--   >>> map (realPart . (/4)) $ ifft z
--   
-- --
--   [1.0,2.0,3.0,4.0]
--   
-- -- Test on a mixture of two sine waves... Evaluate a mixture of two sine -- waves on n equally-spaced points -- --
--   >>> n = 1024 -- sample points (a power of 2)
--   
--   >>> dt = 10.24/n -- time interval for Fourier integral approximation.
--   
--   >>> df = 1/dt/n -- frequency increment (df x dt = 1/n)
--   
--   >>> fs = 1/dt -- sampling rate (sampling theorem requires fs >= 2*BW)
--   
--   >>> f1 = 20 -- signal is a mixture of frequencies 20 and 30
--   
--   >>> f2 = 30
--   
--   >>> signal t = sin (2*pi*f1*t) + 0.5* sin(2*pi*f2*t)
--   
--   >>> t = take n $ iterate (+ dt) 0
--   
--   >>> f = take n $ iterate (+ df) 0
--   
--   >>> y = map ((:+ 0.0) . signal) t -- apply signal and complexify
--   
--   >>> z = fft y -- Fourier transform
--   
--   >>> mags = map magnitude z -- modulus of the complex numbers
--   
--   >>> [rgraph| plot(f_hs, mags_hs,type='l') |] -- show plot using HaskellR
--   
-- -- -- Notice that by default the zero frequency point is on the left edge of -- the chart. To bring the zero frequency point to the center of the -- chart see fftshift and its examples. fft :: [Complex Double] -> [Complex Double] -- | Pure and simple inverse fast Fourier transform. This is the same as -- fft with one change: replace <math> with <math>, -- its complex conjugate. As is well-known, -- --
--   ifft (fft x) = x*N,
--   
-- -- so we must divide by <math> to recover the original input vector -- from its discrete Fourier transform. -- --

Examples:

-- -- Check ability to recover original input vector. -- --
--   >>> z = fft [1,2,3,4]
--   
--   >>> map (realPart . (/4)) $ ifft z
--   
-- --
--   [1.0,2.0,3.0,4.0]
--   
ifft :: [Complex Double] -> [Complex Double] -- | Slow 1D Fourier transform. Accepts input vectors of any size. ft1d :: [Complex Double] -> [Complex Double] -- | Slow 1D inverse Fourier transform. Accepts input vectors of any size. ift1d :: [Complex Double] -> [Complex Double] -- | fftshift rotates the result of fft so that the zero -- frequency point is in the center of the range. To understand why this -- is needed, it is helpful to recall the following approximation for the -- continuous Fourier transform, for a function that is very small -- outside the interval <math>. As noted in the introductory -- comments above, this is one of two possible interpretations of the -- discrete Fourier transform. We have -- -- <math> where <math>. Discretizing in the frequency domain -- and setting <math> yields <math> where <math>. Now -- set <math>, and the standard discrete Fourier transform appears: -- <math> where <math>. -- -- Let's gather together the parameters involved in this approximation: -- <math> where <math> is the sampling rate. Corresponding to -- the <math> samples in the time domain, for convenience we -- normally consider exactly <math> samples in the frequency -- domain, but what samples do we choose? It is easy to check that the -- discrete Fourier transform <math> is periodic with period -- <math> in <math>, and the same is true of our -- approximation <math> provided we choose <math> for some -- integer <math> (which we can do without loss of generality), so -- the exponential factor in the equation for <math> is periodic -- with period <math>. -- -- In the standard discrete Fourier transform we define the time and -- frequency grid as follows: <math>, for <math>, and -- <math>, for <math>. In terms of our approximation above, -- this implicitly assumes that <math>, so the exponential term in -- the expression for <math> does not appear. It also assumes that -- the zero frequency point is on the left edge where <math>. 
It -- follows from periodicity that <math>, for <math>, so -- negative frequencies wrap around in a circular fashion and appear to -- the right of the midpoint. In many applications it is more natural to -- work with <math>, the circular rotation of <math> to the left by -- <math>. This brings the zero frequency to the center of the -- range, and places the negative frequencies where we expect them to be. -- This is what fftshift does. -- -- After applying this transformation the frequency grid becomes -- <math>, for <math>, and we have <math>, which -- happens to be the same as the restriction imposed by Shannon's -- sampling theorem. -- --
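Since the shift is just a circular rotation by half the vector length, it admits a one-line sketch. This is not the package's fftshift itself; fftshiftSketch is a hypothetical name, and an even-length input is assumed.

```haskell
-- Sketch of the rotation fftshift performs: swap the two halves so the
-- zero-frequency bin moves to the middle (assumes an even-length list).
fftshiftSketch :: [a] -> [a]
fftshiftSketch xs = back ++ front
  where
    (front, back) = splitAt (length xs `div` 2) xs
```

For a 4-point spectrum [x0, x1, x2, x3] this yields [x2, x3, x0, x1], and applying it twice recovers the original order.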

Examples:

-- -- It is well-known that the Fourier transform of a Gaussian function -- <math> is another Gaussian. Let's check this by first generating -- a time/frequency grid. -- --
--   >>> n = 1024 -- sample points.
--   
--   >>> dt = 10.24/n -- time increment in Fourier integral approximation.
--   
--   >>> df = 1/dt/n  -- frequency increment (df x dt = 1/n).
--   
--   >>> fs = 1/dt    -- sample rate
--   
--   >>> t = take n $ iterate (+ dt) 0
--   
--   >>> f = take n $ iterate (+ df) 0
--   
--   >>> signal t = exp(-64.0*t^2) -- Gaussian
--   
--   >>> gauss = map ((:+ 0.0) . signal) t -- apply signal and complexify
--   
--   >>> ft = fft gauss
--   
--   >>> mags = map magnitude ft
--   
--   >>> [rgraph| plot(f_hs, mags_hs,type='l',main='Uncentered Fourier Transform') |]
--   
-- -- -- Notice that the resulting Gaussian is not properly centered, because -- zero frequency is on the left, and negative frequencies wrap around -- and appear on the right. Let's use fftshift to fix this. -- --
--   >>> ftshift = fftshift ft
--   
--   >>> magshift = map magnitude ftshift
--   
--   >>> fshift = take n $ iterate (+ df) (-fs/2)
--   
--   >>> [rgraph| plot(fshift_hs,magshift_hs,type='l',main='Centered Fourier Transform') |]
--   
-- fftshift :: [Complex Double] -> [Complex Double] -- | fftscale shows the result of fft on a logarithmic (dB) -- scale -- --
--   Usage: fftscale x nfirst nsamples fs fc  
--   
-- -- -- -- The subsample starts at nfirst and consists of nsamples -- points. nsamples must be a power of 2. The corresponding -- frequencies and normalized power are returned (measured in dB down -- from the max). -- --
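The "dB down from the max" normalization described here can be sketched as follows. toDbDown is a hypothetical helper, not the package function, and it assumes all magnitudes are strictly positive.

```haskell
-- Convert magnitudes to decibels relative to the peak, so the maximum
-- maps to 0 dB and everything else is negative ("dB down from the max").
-- Assumes strictly positive magnitudes.
toDbDown :: [Double] -> [Double]
toDbDown mags = map (\m -> 20 * logBase 10 (m / peak)) mags
  where
    peak = maximum mags
```

For example, a magnitude one tenth of the peak maps to -20 dB, and one hundredth of the peak to -40 dB.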

Examples:

-- -- Here we use HaskellR tools to import a data file created by R that -- contains I/Q data corresponding to an FM radio broadcast station (the -- inexpensive RTL-SDR tuner and ADC is used). The file contains the -- modulus of I + jQ for 1.2M samples (file size about 10 MB since each -- double occupies 8 bytes). This is the result of 12M values downsampled -- by a factor of 10. -- -- We show here how to use fftscale to create a spectral chart -- that shows power concentrated at the pilot tone (19k), mono L+R audio -- (low freqs up to 15k), stereo L-R channel (38k), and the station ID -- RDS channel (57k). These spectral lines are seen clearly in the -- waterfall diagram that appears in the documentation for -- analytic. (The station is decoded as Mono even though the -- content is Stereo, probably because the signal was weak.) -- -- Note that the sample size (a power of 2) is less than the size of the -- input vector, and this introduces some noise. See Wikipedia entry on -- FM broadcasting for more information. (Note that RDS in the R -- context means R Data Serialization, unrelated to RDS used in FM -- broadcasting!) -- --
--    getData :: R s [Double]
--    getData = do  
--      R.dynSEXP <$> [r| setwd("path to HaskellR")  
--                          readRDS("FMIQ.rds") |]
--    
--   fmiq <- R.runRegion getData -- 1.2M samples (modulus I + jQ)  
--    
--   toDoubleComplex :: [Double] -> [Complex Double]  
--   toDoubleComplex = map (:+ 0.0)  
--    
--   fs = 1200000  
--   fftdata = fftscale (toDoubleComplex fmiq) 0 (2^15) (fs/10) 0  
--   freq = fst fftdata  
--   mag  = snd fftdata  
--   [rgraph| plot(freq_hs, mag_hs, type='l', xlab='Freq',  
--            ylab='Rel Amp [dB down from max]', main='FM Spectrum')  
--            abline(v=19000, col='red')   # Pilot frequency
--            abline(v=38000, col='pink')  # L-R channel
--            abline(v=57000, col='green') |]  -- RDS station signal
--   
fftscale :: [Complex Double] -> Int -> Int -> Double -> Double -> ([Double], [Double]) -- | analytic uses fft and ifft to compute the -- analytic signal. Thus the input vector must have size a power of 2. -- Use the slower primed variant to support input vectors of any size. -- The imaginary part is the Hilbert transform of the input signal. Normally -- the input should have zero imaginary part. -- -- Today digital signals (as well as analog signals like FM radio) are -- transmitted as one-dimensional functions of the form -- <math> where <math> is a carrier frequency (modern -- technologies like WiFi/OFDM employ more than one carrier), and the -- functions <math> and <math> encode information about -- amplitude and phase. In this way two-dimensional information is -- encoded in a one-dimensional signal (the extra dimension comes from -- phase differences or timing). Digital information is sent by -- subdividing the complex plane of <math> into regions -- corresponding to different symbols of the alphabet to be used (such -- regions are shown in a constellation diagram, like the one shown -- below). -- -- The image below shows part of the GUI for a GNU Radio FM receiver and -- decoder (available at GR-RDS) where the complex <math> -- plane is divided into two halves corresponding to two values of a -- binary signal that carries information like station name, content -- type, etc. The digital information has central frequency 57 kHz, and -- its power profile is clearly seen in the spectral waterfall. A -- detailed discussion of the decoding process can be found in the online -- book PySDR. See the examples for fftscale for more -- information. -- -- -- The Hilbert transform is related to this representation for the signal -- thanks to Bedrosian's Theorem (see Wikipedia on Hilbert transform). 
It -- implies that in many cases we can write the Hilbert transform in terms -- of the same <math> and <math> as follows: <math> In -- other words, the Hilbert transform introduces a phase-shift in the -- carrier basis functions by -90 degrees (this is synthesized in -- quadrature demodulation). -- -- The Hilbert transform for continuous-time functions is defined below, -- and most of its properties carry over to the discrete-time case. It is -- used to define the analytic signal for a real-valued signal -- <math> as follows: <math> where <math> is the -- Hilbert transform of <math> The most important property of the -- analytic signal is that its Fourier transform is zero for negative -- frequencies (see below). Let's begin by defining the analytic signal -- using this property. -- -- Constructing the analytic signal for a given signal <math> is -- straightforward: compute its Fourier transform <math>, and -- replace it with <math> where <math> for <math>, -- <math> for <math>, and <math>. Then apply the -- inverse Fourier transform to obtain the analytic signal <math>, -- where <math> is the Hilbert transform of <math>. What we -- have done here is shift power from negative frequencies to positive -- frequencies in such a way that the total power is preserved. It is -- well-known that this definition of the Hilbert transform agrees with -- the one to be introduced below. -- -- Some care is required when this is implemented in the discrete -- sampling domain, and this is the focus of Computing the -- Discrete-Time "Analytic" Signal via FFT by S. Lawrence Marple, Jr, -- IEEE Trans. on Signal Processing, 47(9), 1999. The goal of this paper -- is to define the discrete analytic signal in such a way that -- properties of the continuous case are preserved, in particular, the -- real part of the analytic signal should be the original signal, and -- the real and imaginary parts should be orthogonal. 
It is shown that -- this will be the case if we modify the discrete Fourier transform as -- follows. <math> for <math> (as discussed in -- fftshift, these are negative frequencies), <math> for -- <math> and <math>, and <math> for <math> -- (these are positive frequencies, so we double the transform). -- -- The Hilbert transform was originally studied in the continuous-time -- domain, where it is defined as <math> a convolution with a -- singular kernel. The Hilbert transform is closely related to the -- Cauchy integral formula, potential theory, and many other areas of -- mathematical analysis. See The Cauchy Transform, Potential Theory, -- and Conformal Mapping by Steven R. Bell (2015), or Wavelets and -- operators by Yves Meyer (1992), or Functional Analysis by -- Walter Rudin (1991). -- -- The analytic signal is defined in terms of the Hilbert transform: -- <math> To understand why this is useful let's recall some -- properties of the Hilbert transform. It can be shown (see Wikipedia) -- that <math>, and <math>, when <math> <math> -- can be expanded in a Fourier series of complex exponentials with both -- positive and negative frequencies. Indeed, just consider one (real) -- component <math> It contributes both positive and negative -- frequencies. But its contribution to the analytic signal is: -- <math> so negative frequency components do not appear. In other -- words, forming the analytic signal effectively filters out all -- negative frequencies. Another way to see this is to recall the -- relationship between the Hilbert transform and the Fourier transform -- (see Wikipedia) <math> It follows readily from this that the -- Fourier transform of the analytic signal <math> is zero for -- negative frequencies. 
This relationship also shows (under some -- technical conditions) that the Hilbert transform <math> is -- orthogonal to the original signal <math> To see this, use the -- Plancherel Theorem, and observe that it leads to the integral of an -- odd function in the frequency domain, which is zero by symmetry. -- --
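The spectral weighting in the Marple construction described above can be written down directly. The following sketch (analyticWeights is a hypothetical name, not part of this package) produces the multipliers applied to the DFT bins of an even-length signal: keep the DC and Nyquist bins, double the positive-frequency bins, and zero the negative-frequency ones.

```haskell
-- Multipliers for the discrete analytic signal (even length n assumed):
-- bin 0 and bin n/2 are kept, bins 1 .. n/2-1 are doubled, and the
-- remaining (negative-frequency) bins are zeroed.
analyticWeights :: Int -> [Double]
analyticWeights n = [ w k | k <- [0 .. n - 1] ]
  where
    half = n `div` 2
    w k | k == 0 || k == half = 1
        | k < half            = 2
        | otherwise           = 0
```

Multiplying the DFT bin-by-bin with these weights and then applying the inverse transform (with the usual 1/N normalization) yields the analytic signal; note the weights sum to n, so total power is shifted, not lost.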

Examples:

-- -- The analytic signal is obtained by shifting power from negative -- frequencies to positive frequencies in the Fourier transform, then -- applying the inverse Fourier transform. The imaginary part of the -- result is the Hilbert transform of the input signal. It is phase -- shifted by 90 degrees. -- -- The real and imaginary parts can be plotted using your favorite -- graphics software. Below we use the R interface provided by HaskellR -- in a Jupyter notebook. -- --
--   >>> n=1024
--   
--   >>> dt = 2*pi/(n-1)
--   
--   >>> x = take n $ iterate (+ dt) 0
--   
--   >>> y = [z | k <- [0..(n-1)], let z = sin (x!!k)]
--   
--   >>> z = analytic y
--   
--   >>> zr = map realPart z
--   
--   >>> zi = map imagPart z
--   
--   >>> [rgraph| plot(x_hs,zr_hs,xlab='t',ylab='signal', type='l',col='blue',
--   
--   >>> main='Orig. signal blue, Hilbert transform red')
--   
--   >>> lines(x_hs,zi_hs,type='l',col='red') |]
--   
-- -- -- The real and imaginary parts of the analytic signal are orthogonal to -- each other. Let's check this... -- --
--   >>> n=1024
--   
--   >>> dt = 2*pi/(n-1)
--   
--   >>> x = take n $ iterate (+ dt) 0
--   
--   >>> y = [z | k <- [0..(n-1)], let z = sin (x!!k)]
--   
--   >>> z = analytic y
--   
--   >>> zr = map realPart z
--   
--   >>> zi = map imagPart z
--   
--   >>> sum $ zipWith (*) zr zi
--   
-- --
--   1.5e-13        
--   
analytic :: [Complex Double] -> [Complex Double] -- | Same as analytic but uses the slow transforms ft1d and -- ift1d. There are no restrictions on the input vector size. analytic' :: [Complex Double] -> [Complex Double] -- | Wavelets can be viewed as an alternative to the usual Fourier basis -- (used in the Fourier transform) that supports analyzing functions at -- different scales. Frequency information is also captured, though not -- as readily as this is done with the Fourier transform. A powerful -- feature of the wavelet transform is that in many problems it leads to -- a sparse representation. This can be exploited to reduce computational -- complexity (when solving differential equations, say) or to implement -- improved data compression (in image analysis, for example). -- -- Only the 1D case is implemented here. Extension to higher dimensions -- is straightforward, but the list implementation used here may not be -- appropriate for this purpose. This implementation is suitable for -- small or moderate sized 1D problems, and for understanding the -- underlying theory which does not change as the dimension increases. -- -- References on wavelets with abbreviations: -- -- module Math.List.Wavelet -- | wt1d is the 1-dimensional discrete wavelet transform. -- --
--   Usage: wt1d x nd jmin jmax
--   
-- -- -- -- The Daubechies wavelets are defined in TenLectures. The input -- vector is the initial detail vector. The first level of the -- transform splits this vector into two parts, coarse (coarse -- coefficients---see below) and detail (detail coefficients). -- Then the coarse vector is split into two parts, one -- coarse and the other detail (the previous -- detail vector is unchanged). This continues until the -- coarse vector is reduced to size <math>, where -- <math>. The output vector returned by this function has the same -- size as the input vector, and it has the form <math>, where -- coarse is the last coarse vector, detail is -- its companion, and d, d', d'', etc. are detail vectors from previous -- steps, increasing in size by factors of 2. The inverse wavelet -- transform iwt1d starts with this vector and works backwards to -- recover the original input vector. Of course, it needs jmin to know -- where to start. See Examples below. -- -- Here are the fundamental relations derived in Compact1988 that -- define the discrete compactly supported wavelet transform and its -- inverse <math> where <math> and <math> are the -- approximation and detail coefficients, respectively. See -- Compact1988, pp.935-938. The reverse or inverse DWT is defined -- by <math> It is clear from this that decimation by two and -- convolution are involved at each step. More precisely, these equations -- and the corresponding Haskell implementation make it clear that in -- the forward transform the filters are reversed, convolved with the -- input (prior level coefficients), and subsampled, while in the reverse -- transform, the prior level coefficients are upsampled, convolved with -- the filters (not reversed), and summed. Implementations in Python, R, -- Julia and Matlab can be found in Data2021 and -- NumericalTours. -- -- Note that much of the analysis in Compact1988 is done where n -- varies over all integers. 
In the implementation below we work modulo -- the size of the input vector, which turns ordinary convolutions into -- circular convolutions in the usual way. -- -- The <math>-th Daubechies wavelet filter h has support -- <math> (even number of points). See Compact1988, Table 1, -- p.980, and TenLectures, Table 6.1, p.195 (higher precision). -- The novelty of this work was the discovery of invertible filters with -- compact support (the Shannon wavelet does not have compact support). -- This requires detailed Fourier analysis, from which it is deduced that -- the conjugate filter g can be chosen to satisfy <math> for a -- convenient choice for p (see TenLectures, p.136). To define g -- with the same support as h, for the N-th wavelet, use p = N-1, so -- <math>. -- -- Following NumericalTours, we define the circular convolution -- cconv1d in such a way that the filter h is centered, and for -- this purpose a zero is prepended to both h and g (so these vectors -- have an odd number of points, with the understanding that their -- support does not change). -- --
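One forward level of the scheme above can be sketched with the Haar filter pair, the simplest example of the construction; dwtStep is a hypothetical helper, not wt1d itself. It circularly convolves the input with each filter and keeps every other sample, returning the coarse and detail halves.

```haskell
-- One forward DWT level with the Haar filter pair (a sketch, not wt1d):
-- circularly convolve with each filter and keep every other sample.
-- Assumes an even-length input.
dwtStep :: [Double] -> ([Double], [Double])
dwtStep xs = (level hFilt, level gFilt)
  where
    n = length xs
    s = 1 / sqrt 2
    hFilt = [s,  s]   -- averaging (coarse) filter
    gFilt = [s, -s]   -- differencing (detail) filter
    -- Circular convolution followed by decimation by two.
    level f = [ sum [ c * xs !! ((2 * k + m) `mod` n) | (m, c) <- zip [0 ..] f ]
              | k <- [0 .. n `div` 2 - 1] ]
```

On a constant input the detail half is identically zero, reflecting the vanishing-moment property of the differencing filter.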

Examples:

-- --
--   >>> dist :: [Double] -> [Double] -> Double
--   
--   >>> dist x y = sqrt (sum (zipWith (\x y -> (x-y)^2) x y))
--   
--   >>> sig = deltaFunc 5 1024
--   
--   >>> forward = wt1d sig 10 5 10 -- wavelet 10, jmin=5, jmax=10
--   
--   >>> backward = iwt1d forward 10 5 10
--   
--   >>> dist sig backward
--   
-- --
--   1.2345e-12
--   
-- -- Let's take a look at the simplest Daubechies wavelet (N=2) by taking -- the inverse transform of a delta function at position 5... -- --
--   >>> sig = deltaFunc 5 1024
--   
--   >>> wavelet2 = iwt1d sig 2 0 10
--   
--   >>> [rgraph| plot(wavelet2_hs,type='l') |]
--   
-- -- -- Let's place masses at positions 100 and 400, with the first twice as -- large as the second. -- --
--   >>> x = deltaFunc 100 1024
--   
--   >>> y = deltaFunc 400 1024
--   
--   >>> z = zipWith (\x y -> 2*x + y) x y
--   
--   >>> wt = wt1d z 2 0 10
--   
--   >>> [rgraph| plot(wt_hs,type='l') |]
--   
-- -- -- The spikes show up in the scalogram at about 1/2 the distance in scale -- units, and there are "harmonics" at the coarser scales (on the left). wt1d :: [Double] -> Int -> Int -> Int -> [Double] -- | iwt1d is the 1-dimensional inverse discrete wavelet transform. -- --
--   Usage: iwt1d x nd jmin jmax
--   
-- -- -- -- Reverses the steps taken in wt1d above. iwt1d :: [Double] -> Int -> Int -> Int -> [Double] -- | conv1d is the 1-dimensional conventional convolution (no FFT). -- --
--   Usage: conv1d h x        
--   
-- -- -- --
--   Python equivalent: np.convolve([1,2,3,4],[1,2,3,4])        
--   Octave equivalent: conv(1:4,1:4)        
--   R equivalent: convolve(h,rev(x),type='open')
--   
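The full ("open") convolution described here admits a direct Haskell sketch; convSketch is a hypothetical name, not the package's conv1d, and it has quadratic complexity, fine for small inputs.

```haskell
-- Full 1D convolution matching np.convolve / Octave conv semantics:
-- output length is length h + length x - 1.
convSketch :: [Double] -> [Double] -> [Double]
convSketch h x =
  [ sum [ h !! j * x !! (k - j) | j <- [lo .. hi] ]
  | k <- [0 .. lh + lx - 2]
  , let lo = max 0 (k - lx + 1)   -- clamp index range so both lists
  , let hi = min k (lh - 1) ]     -- are accessed in bounds
  where
    lh = length h
    lx = length x
```

For example, convSketch [1,2,3,4] [1,2,3,4] matches the np.convolve example above, giving [1,4,10,20,25,24,16].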
conv1d :: [Double] -> [Double] -> [Double] -- | cconv1d is the 1-dimensional circular convolution (with -- centering). -- --
--   Usage: cconv1d h x        
--   
-- -- cconv1d :: [Double] -> [Double] -> [Double] -- | cconv1dnc is the 1-dimensional non-centered circular -- convolution. -- --
--   Usage: cconv1dnc h x        
--   
-- -- -- --
--   R equivalent: convolve(h,x,type='circular',conj=FALSE)        
--   
cconv1dnc :: [Double] -> [Double] -> [Double] -- | deltaFunc creates a list of zeros except for one element that -- equals 1 -- --
--   Usage: deltaFunc k n
--   
-- -- deltaFunc :: Num a => Int -> Int -> [a]