-- Hoogle documentation, generated by Haddock -- See Hoogle, http://www.haskell.org/hoogle/ -- | Math using lists, including FFT and Wavelet -- -- This package contains standard one-dimensional mathematical transforms -- (FFT, Wavelet, etc.) applied to Haskell lists. Documentation including -- mathematical details and examples is included to facilitate use with -- small or moderate sized problems, and for educational purposes. The -- algorithms have a very concise representation in Haskell that is a -- direct translation of the mathematical formulations. -- -- Some of the examples use the HaskellR package and the hybrid Haskell/R -- environment that it provides. This permits use of R statistical and -- graphics tools in a Haskell "playground" that requires minimal -- configuration. HaskellR's Jupyter kernel for Haskell is an added -- bonus. The functions of this package do not depend on R or HaskellR. @package mathlist @version 0.1.0.0 -- | This package contains standard mathematical operators like FFT and -- Wavelets applied to Haskell lists. module Math.List -- | Includes an implementation of the fast Fourier transform and its -- inverse using lists. Support for shifting and scaling for easier -- interpretation is included, as is computation of the analytic signal -- (where spectral power is shifted from negative frequencies to positive -- frequencies). The imaginary part of the latter is the Hilbert -- transform. -- -- The FFT might be called the "Feasible Fourier Transform" because in -- many problems, using an input vector size that is not a power of 2 can -- result in a huge performance penalty (hours instead of seconds), so it -- is important to keep this in mind when working with large input -- vectors. module Math.List.FFT -- | Pure and simple fast Fourier transform following the recursive -- algorithm that appears in Mathematical Foundations of Data -- Sciences by Gabriel Peyré (2021). The Haskell implementation -- closely follows the mathematical specifications. The input vector size -- <math> must be a power of 2 (the functions ft1d and -- ift1d work with vectors of any size, but they may be extremely -- slow for large input vectors). -- -- Recall that the discrete Fourier transform applied to a vector -- <math> is defined by <math> -- -- As written this has time complexity <math>, but the fast Fourier -- transform algorithm reduces this to <math>. The algorithm can be -- written as follows <math> Here <math> is the Fourier -- transform defined on vectors of size <math>, <math>, and -- <math>, for <math>. Here <math>, and <math> is -- the interleaving operator defined by <math>. The operator -- <math> is defined as follows <math> -- -- The algorithm follows easily by considering the even and odd terms in -- the discrete Fourier transform. Let <math>, and let's consider -- the even terms first: -- -- <math> The last term is just the Fourier transform of the vector -- of length <math> given by <math> -- -- For the odd terms we have -- -- <math> The last term is just the Fourier transform of the vector -- of length <math> given by <math> -- -- The recursive algorithm now follows by interleaving the even and the -- odd terms. -- --
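--
-- The following is a minimal sketch (not this package's implementation)
-- of the recursive radix-2 algorithm described above. It assumes the
-- input length is a power of 2; the names fftSketch and deinterleave
-- are illustrative only:
--
-- import Data.Complex
--
-- fftSketch :: [Complex Double] -> [Complex Double]
-- fftSketch [x] = [x]
-- fftSketch xs  = zipWith (+) fe ft ++ zipWith (-) fe ft
--   where
--     n        = length xs
--     (es, os) = deinterleave xs      -- even- and odd-indexed terms
--     fe       = fftSketch es         -- transform of the even part
--     fo       = fftSketch os         -- transform of the odd part
--     ft       = zipWith (\k y -> cis (-2*pi*fromIntegral k / fromIntegral n) * y) [0..] fo
--     deinterleave (a:b:rest) = let (as, bs) = deinterleave rest in (a:as, b:bs)
--     deinterleave a          = (a, [])
--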
-- >>> fft [1,2,3,4] ---- --
-- [10.0 :+ 0.0, (-2.0) :+ 2.0, (-2.0) :+ 0.0, (-1.99999) :+ (-2.0)] ---- -- Check reproduction property with N=4 -- --
-- >>> z = fft [1,2,3,4] -- -- >>> map (realPart . (/4)) $ ifft z ---- --
-- [1.0,2.0,3.0,4.0] ---- -- Test on a mixture of two sine waves... Evaluate a mixture of two sine -- waves on n equally-spaced points -- --
-- >>> n = 1024 -- sample points (a power of 2) -- -- >>> dt = 10.24/n -- time interval for Fourier integral approximation. -- -- >>> df = 1/dt/n -- frequency increment (df x dt = 1/n) -- -- >>> fs = 1/dt -- sampling rate (sampling theorem requires fs >= 2*BW) -- -- >>> f1 = 20 -- signal is a mixture of frequencies 20 and 30 -- -- >>> f2 = 30 -- -- >>> signal t = sin (2*pi*f1*t) + 0.5* sin(2*pi*f2*t) -- -- >>> t = take n $ iterate (+ dt) 0 -- -- >>> f = take n $ iterate (+ df) 0 -- -- >>> y = map ((:+ 0.0) . signal) t -- apply signal and complexify -- -- >>> z = fft y -- Fourier transform -- -- >>> mags = map magnitude z -- modulus of the complex numbers -- -- >>> [rgraph| plot(f_hs, mags_hs,type='l') |] -- show plot using HaskellR ---- -- -- Notice that by default the zero frequency point is on the left edge of -- the chart. To bring the zero frequency point to the center of the -- chart, see fftshift and its examples. fft :: [Complex Double] -> [Complex Double] -- | Pure and simple inverse fast Fourier transform. This is the same as -- fft with one change: replace <math> with <math>, -- its complex conjugate. As is well-known, -- --
-- ifft (fft x) = x*N, ---- -- so we must divide by <math> to recover the original input vector -- from its discrete Fourier transform. -- --
-- >>> z = fft [1,2,3,4] -- -- >>> map (realPart . (/4)) $ ifft z ---- --
-- [1.0,2.0,3.0,4.0] --ifft :: [Complex Double] -> [Complex Double] -- | Slow 1D Fourier transform. Accepts input vectors of any size. ft1d :: [Complex Double] -> [Complex Double] -- | Slow 1D inverse Fourier transform. Accepts input vectors of any size. ift1d :: [Complex Double] -> [Complex Double] -- | fftshift rotates the result of fft so that the zero -- frequency point is in the center of the range. To understand why this -- is needed, it is helpful to recall the following approximation for the -- continuous Fourier transform, for a function that is very small -- outside the interval <math>. -- -- <math> where <math>. Discretizing in the frequency domain -- and setting <math> yields <math> where <math>. Now -- set <math>, and the standard discrete Fourier transform appears: -- <math> where <math>. -- -- Let's gather together the parameters involved in this approximation: -- <math> where <math> is the sampling rate. Corresponding to -- the <math> samples in the time domain, for convenience we -- normally consider exactly <math> samples in the frequency -- domain, but what samples do we choose? It is easy to check that the -- discrete Fourier transform <math> is periodic with period -- <math> in <math>, and the same is true of our -- approximation <math> provided we choose <math> for some -- integer <math> (which we can do without loss of generality), so -- the exponential factor in the equation for <math> is periodic -- with period <math>. -- -- In the standard discrete Fourier transform we assume the time grid is -- <math>, for <math>, and the N-point window for the -- discrete Fourier transform is <math>, for <math>. In -- particular, zero frequency corresponds to <math>, and because of -- periodicity, <math>, for <math>. This shows that negative -- frequencies wrap around in a circular fashion and appear to the right -- of the midpoint. In many applications it is more natural to work with -- <math>, the circular rotation of X to the left by -- <math>. This brings the zero frequency to the center of the -- range, and places the negative frequencies where we expect them to be. -- This is what fftshift does. -- -- After applying this transformation the frequency grid becomes -- <math>, for <math>, and we have <math>, which -- happens to be the same as the restriction imposed by Shannon's -- sampling theorem. -- --
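--
-- In code, this left rotation by half the length can be sketched as
-- follows (a minimal illustration assuming an even-length input; the
-- package's fftshift may treat edge cases differently):
--
-- fftshiftSketch :: [Complex Double] -> [Complex Double]
-- fftshiftSketch xs = back ++ front     -- rotate left by N/2
--   where (front, back) = splitAt (length xs `div` 2) xs
--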
-- >>> n = 1024 -- sample points. -- -- >>> dt = 10.24/n -- time increment in Fourier integral approximation. -- -- >>> df = 1/dt/n -- frequency increment (df x dt = 1/n). -- -- >>> fs = 1/dt -- sample rate -- -- >>> t = take n $ iterate (+ dt) 0 -- -- >>> f = take n $ iterate (+ df) 0 -- -- >>> signal t = exp(-64.0*t^2) --- Gaussian -- -- >>> gauss = map ((:+ 0.0) . signal) t -- apply signal and complexify -- -- >>> ft = fft gauss -- -- >>> mags = map magnitude ft -- -- >>> [rgraph| plot(f_hs, mags_hs,type='l',main='Uncentered Fourier Transform') |] ---- -- -- Notice that the resulting Gaussian is not properly centered, because -- zero frequency is on the left, and negative frequencies wrap around -- and appear on the right. Let's use fftshift to fix this. -- --
-- >>> ftshift = fftshift ft -- -- >>> magshift = map magnitude ftshift -- -- >>> fshift = take n $ iterate (+ df) (-fs/2) -- -- >>> [rgraph| plot(fshift_hs,magshift_hs,type='l',main='Centered Fourier Transform') |] ---- fftshift :: [Complex Double] -> [Complex Double] -- | fftscale shows the result of fft on a logarithmic (dB) -- scale -- --
-- Usage: fftscale x nfirst nsamples fs fc ---- --
-- getData :: R s [Double]
-- getData = do
-- R.dynSEXP <$> [r| setwd("path to HaskellR")
-- readRDS("FMIQ.rds") |]
--
-- fmiq <- R.runRegion getData -- 1.2M samples (modulus I + jQ)
--
-- toDoubleComplex :: [Double] -> [Complex Double]
-- toDoubleComplex = map (:+ 0.0)
--
-- fs = 1200000
-- fftdata = fftscale (toDoubleComplex fmiq) 0 (2^15) (fs/10) 0
-- freq = fst fftdata
-- mag = snd fftdata
-- [rgraph| plot(freq_hs, mag_hs, type='l', xlab='Freq',
-- ylab="Rel Amp [dB down from max]", main="FM Spectrum")
-- abline(v=19000, col='red')   # Pilot frequency
-- abline(v=38000, col='pink')  # L-R channel
-- abline(v=57000, col='green') |] -- RDS station signal
--
--
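-- One common way to express magnitudes in dB below the maximum (matching
-- the "Rel Amp [dB down from max]" label above, assuming amplitude rather
-- than power dB) is sketched here; this is illustrative only, not the
-- package's code:
--
-- dbDownFromMax :: [Double] -> [Double]
-- dbDownFromMax mags = map (\m -> 20 * logBase 10 (m / top)) mags
--   where top = maximum mags   -- 20*log10 for amplitude (not power) ratios
--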
fftscale :: [Complex Double] -> Int -> Int -> Double -> Double -> ([Double], [Double])
-- | analytic uses fft and ifft to compute the
-- analytic signal. Thus the input vector size must be a power of 2.
-- Use the slower primed variant to support input vectors of any size.
-- The imaginary part is the Hilbert transform of the input signal. Normally
-- the input should have zero imaginary part.
--
-- Today digital signals (as well as analog signals like FM radio) are
-- transmitted in the form of one-dimensional functions of the form
-- <math> where <math> is a carrier frequency (modern
-- technologies like WiFi/OFDM employ more than one carrier), and the
-- functions <math> and <math> encode information about
-- amplitude and phase. In this way two-dimensional information is
-- encoded in a one-dimensional signal (the extra dimension comes from
-- phase differences or timing). Digital information is sent by
-- subdividing the complex plane of <math> into regions
-- corresponding to different symbols of the alphabet to be used (such
-- regions are shown in a constellation diagram).
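--
-- For reference, a standard way to write such a signal (using the usual
-- in-phase/quadrature naming, an assumption here) is
--
-- s(t) = I(t)*cos(2*pi*fc*t) - Q(t)*sin(2*pi*fc*t) = Re[(I(t) + j*Q(t)) * exp(j*2*pi*fc*t)]
--
-- where fc is the carrier frequency.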
--
-- The image below shows part of the GUI for a GNURadio FM receiver and
-- decoder (available at GR-RDS) where the complex <math>
-- plane is divided into two halves corresponding to two values of a
-- binary signal that carries information like station name, content
-- type, etc. The digital information has central frequency 57 kHz, and
-- its power profile is clearly seen in the spectral waterfall. A
-- detailed discussion of the decoding process can be found in the online
-- book PySDR. See the examples for fftscale for more
-- information.
--
--
-- The Hilbert transform is related to this representation for the signal
-- thanks to Bedrosian's Theorem (see Wikipedia on Hilbert transform). It
-- implies that in many cases we can write the Hilbert transform in terms
-- of the same <math> and <math> as follows: <math> In
-- other words, the Hilbert transform introduces a phase-shift in the
-- carrier basis functions by -90 degrees (this is synthesized in
-- quadrature demodulation).
--
-- The Hilbert transform for continuous-time functions is defined below,
-- and most of its properties carry over to the discrete-time case. It is
-- used to define the analytic signal for a real-valued signal
-- <math> as follows: <math> where <math> is the
-- Hilbert transform of <math> The most important property of the
-- analytic signal is that its Fourier transform is zero for negative
-- frequencies (see below). Let's begin by defining the analytic signal
-- using this property.
--
-- Constructing the analytic signal for a given signal <math> is
-- straightforward: compute its Fourier transform <math>, and
-- replace it with <math> where <math> for <math>,
-- <math> for <math>, and <math>. Then apply the
-- inverse Fourier transform to obtain the analytic signal <math>,
-- where <math> is the Hilbert transform of <math>. What we
-- have done here is shift power from negative frequencies to positive
-- frequencies in such a way that the total power is preserved. It is
-- well-known that this definition of the Hilbert transform agrees with
-- the one to be introduced below.
--
-- Some care is required when this is implemented in the discrete
-- sampling domain, and this is the focus of Computing the
-- Discrete-Time "Analytic" Signal via FFT by S. Lawrence Marple, Jr,
-- IEEE Trans. on Signal Processing, 47(9), 1999. The goal of this paper
-- is to define the discrete analytic signal in such a way that
-- properties of the continuous case are preserved; in particular, the
-- real part of the analytic signal should be the original signal, and
-- the real and imaginary parts should be orthogonal. It is shown that
-- this will be the case if we modify the discrete Fourier transform as
-- follows. <math> for <math> (as discussed in
-- fftshift, these are negative frequencies), <math> for
-- <math> and <math>, and <math> for <math>
-- (these are positive frequencies, so we double the transform).
--
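-- As a minimal sketch (not necessarily the package's implementation), the
-- recipe above can be written with this module's fft and ifft, assuming an
-- even input length n and recalling that ifft (fft x) = x*n:
--
-- analyticSketch :: [Complex Double] -> [Complex Double]
-- analyticSketch x = map (/ fromIntegral n) (ifft (zipWith (*) h z))
--   where
--     n = length x
--     z = fft x
--     -- keep Z[0] and Z[n/2], double entries 1 .. n/2-1, zero the rest
--     h = map (:+ 0) ([1] ++ replicate (n `div` 2 - 1) 2 ++ [1] ++ replicate (n `div` 2 - 1) 0)
--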
-- The Hilbert transform was originally studied in the continuous-time
-- domain, where it is defined as <math> a convolution with a
-- singular kernel. (The Hilbert transform is closely related to the
-- Cauchy integral formula, potential theory, and many other areas of
-- mathematical analysis. See The Cauchy Transform, Potential Theory,
-- and Conformal Mapping by Steven R. Bell (2015), or Wavelets and
-- Operators by Yves Meyer (1992), or Functional Analysis by
-- Walter Rudin (1991).)
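--
-- For reference, the standard textbook definition is the principal-value
-- convolution
--
-- \( (\mathcal{H}x)(t) = \frac{1}{\pi}\,\mathrm{p.v.}\int_{-\infty}^{\infty}\frac{x(\tau)}{t-\tau}\,d\tau \)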
--
-- The analytic signal is defined in terms of the Hilbert transform:
-- <math> To understand why this is useful let's recall some
-- properties of the Hilbert transform. It can be shown (see Wikipedia)
-- that <math>, and <math>, when <math> <math>
-- can be expanded in a Fourier series of complex exponentials with both
-- positive and negative frequencies. Indeed, just consider one (real)
-- component <math> It contributes both positive and negative
-- frequencies. But its contribution to the analytic signal is:
-- <math> so negative frequency components do not appear. In other
-- words, forming the analytic signal effectively filters out all
-- negative frequencies. Another way to see this is to recall the
-- relationship between the Hilbert transform and the Fourier transform
-- (see Wikipedia) <math> It follows readily from this that the
-- Fourier transform of the analytic signal <math> is zero for
-- negative frequencies. This relationship also shows (under some
-- technical conditions) that the Hilbert transform <math> is
-- orthogonal to the original signal <math> To see this, use the
-- Plancherel Theorem, and observe that it leads to the integral of an
-- odd function in the frequency domain, which is zero by symmetry.
--
-- -- >>> n=1024 -- -- >>> dt = 2*pi/(n-1) -- -- >>> x = take n $ iterate (+ dt) 0 -- -- >>> y = [z | k <- [0..(n-1)], let z = sin (x!!k)] -- -- >>> z = analytic y -- -- >>> zr = map realPart z -- -- >>> zi = map imagPart z -- -- >>> [rgraph| plot(x_hs,zr_hs,xlab='t',ylab='signal', type='l',col='blue', -- -- >>> main='Orig. signal blue, Hilbert transform red') -- -- >>> lines(x_hs,zi_hs,type='l',col='red') |] ---- -- -- The real and imaginary parts of the analytic signal are orthogonal to -- each other. Let's check this... -- --
-- >>> n=1024 -- -- >>> dt = 2*pi/(n-1) -- -- >>> x = take n $ iterate (+ dt) 0 -- -- >>> y = [z | k <- [0..(n-1)], let z = sin (x!!k)] -- -- >>> z = analytic y -- -- >>> zr = map realPart z -- -- >>> zi = map imagPart z -- -- >>> sum $ zipWith (*) zr zi ---- --
-- 1.5e-13 --analytic :: [Complex Double] -> [Complex Double] -- | Same as analytic but uses the slow transforms ft1d and -- ift1d. There are no restrictions on the input vector size. analytic' :: [Complex Double] -> [Complex Double] -- | Wavelets can be viewed as an alternative to the usual Fourier basis -- (used in the Fourier transform) that supports analyzing functions at -- different scales. Frequency information is also captured, though not -- as readily as with the Fourier transform. A powerful -- feature of the wavelet transform is that in many problems it leads to -- a sparse representation. This can be exploited to reduce computational -- complexity (when solving differential equations, say) or to implement -- improved data compression (in image analysis, for example). -- -- Only the 1D case is implemented here. Extension to higher dimensions -- is straightforward, but the list implementation used here may not be -- appropriate for this purpose. This implementation is suitable for small -- or moderate sized 1D problems, and for understanding the underlying -- theory, which does not change as the dimension increases. -- -- References on wavelets with abbreviations: -- --
-- Usage: wt1d x nd jmin jmax ---- --
-- >>> dist :: [Double] -> [Double] -> Double -- -- >>> dist x y = sqrt (sum (zipWith (\x y -> (x-y)^2) x y)) -- -- >>> sig = deltaFunc 5 1024 -- -- >>> forward = wt1d sig 10 5 10 -- wavelet 10, jmin=5, jmax=10 -- -- >>> backward = iwt1d forward 10 5 10 -- -- >>> dist sig backward ---- --
-- 1.2345e-12 ---- -- Let's take a look at the simplest Daubechies wavelet (N=2) by taking -- the inverse transform of a delta function at position 5... -- --
-- >>> x = deltaFunc 5 1024 -- delta at position 5 -- -- >>> wavelet2 = iwt1d x 2 0 10 -- -- >>> [rgraph| plot(wavelet2_hs,type='l') |] ---- -- -- Let's place masses at positions 100 and 400, with the first twice as -- large as the second. -- --
-- >>> x = deltaFunc 100 1024 -- -- >>> y = deltaFunc 400 1024 -- -- >>> z = zipWith (\x y -> 2*x + y) x y -- -- >>> wt = wt1d z 2 0 10 -- -- >>> [rgraph| plot(wt_hs,type='l') |] ---- -- -- The spikes show up in the scalogram at about 1/2 the distance in scale -- units, and there are "harmonics" at the coarser scales (on the left). wt1d :: [Double] -> Int -> Int -> Int -> [Double] -- | iwt1d is the 1-dimensional inverse discrete wavelet transform. -- --
-- Usage: iwt1d x nd jmin jmax ---- --
iwt1d :: [Double] -> Int -> Int -> Int -> [Double]
-- | conv1d is the 1-dimensional convolution of two lists.
--
-- Usage: conv1d h x ---- --
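-- For example, the full linear convolution of [1,2,3,4] with itself, worked
-- out by hand (and what the Python and Octave equivalents listed below
-- return), is
--
-- [1, 4, 10, 20, 25, 24, 16]
--
-- so conv1d [1,2,3,4] [1,2,3,4] should yield these values, assuming conv1d
-- follows the same convention.
--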
-- Python equivalent: np.convolve([1,2,3,4],[1,2,3,4]) -- Octave equivalent: conv(1:4,1:4) -- R equivalent: convolve(h,rev(x),type='open') --conv1d :: [Double] -> [Double] -> [Double] -- | cconv1d is the 1-dimensional circular convolution (with -- centering). -- --
-- Usage: cconv1d h x ---- --
cconv1d :: [Double] -> [Double] -> [Double]
-- | cconv1dnc is the 1-dimensional circular convolution without centering.
--
-- Usage: cconv1dnc h x ---- --
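-- For example, the length-4 circular convolution of [1,2,3,4] with itself,
-- worked out by hand (and what the R equivalent listed below returns), is
--
-- [26, 28, 26, 20]
--
-- so cconv1dnc [1,2,3,4] [1,2,3,4] should yield these values, assuming it
-- matches that equivalent.
--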
-- R equivalent: convolve(h,x,type='circular',conj=FALSE) --cconv1dnc :: [Double] -> [Double] -> [Double] -- | deltaFunc creates a list of zeros except for one element that equals 1. -- --
-- Usage: deltaFunc k n ---- --
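-- A one-line sketch of what this describes (illustrative only; whether the
-- position k is 0-based or 1-based is not stated, so 0-based is assumed):
--
-- deltaFuncSketch :: Int -> Int -> [Double]
-- deltaFuncSketch k n = [if i == k then 1 else 0 | i <- [0 .. n - 1]]
--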