mathlist: Math.List.FFT
(C) Dominick Samperi 2023, BSD3, djsamperi@gmail.com, experimental, Safe-Inferred

fft

Pure and simple fast Fourier transform following the recursive algorithm that appears in "Mathematical Foundations of Data Sciences" by Gabriel Peyré (2021). The Haskell implementation is a direct translation of the mathematical specification. The input vector size N must be a power of 2 (the functions ft1d and ift1d work with vectors of any size, but they may be extremely slow for large input vectors).

Recall that the discrete Fourier transform applied to a vector x \in R^N is defined by

\mathbb{F}_N(x)(k) = \sum_{j=0}^{N-1} x_j e^{-2\pi i j k/N}, \qquad k=0,1,2,...,N-1.

As written this has time complexity O(N^2), but the fast Fourier transform algorithm reduces this to O(N \log N). The algorithm can be written as follows:

\mathbb{F}_N(x) = \mathbb{I}_N(\mathbb{F}_{N/2}(x_e),\mathbb{F}_{N/2}(x_o \odot \alpha_N)).

Here \mathbb{F}_{N/2} is the Fourier transform defined on vectors of size N/2, x_e(n) = x_n + x_{n+N/2}, and x_o(n) = x_n - x_{n+N/2}, for n=0,1,...,N/2-1. Here \alpha_N = \exp(-2\pi i/N), and \mathbb{I}_N is the interleaving operator defined by

\mathbb{I}_N(a,b) = (a_1,b_1,a_2,b_2,...,a_{N/2},b_{N/2}).

The operator \odot is defined as follows:

(x \odot \alpha_N)(k) = x(k)\,(\alpha_N)^k, \qquad k=0,1,...,N/2-1, \qquad x \in R^{N/2}.

The algorithm follows easily by considering the even and odd terms in the discrete Fourier transform. Let N' = N/2, and let's consider the even terms first:

\begin{eqnarray*} X_{2k} &=& \sum_{n=0}^{N-1} x_n e^{-2\pi i n k/N'} \\ &=& \sum_{n=0}^{N'-1} x_n e^{-2\pi i n k/N'} + \sum_{n=0}^{N'-1} x_{n+N'} e^{-2\pi i (n+N')k/N'} \\ &=& \sum_{n=0}^{N'-1} (x_n + x_{n+N'}) e^{-2\pi i n k/N'} \end{eqnarray*}

where the last step uses e^{-2\pi i N' k/N'} = 1. The last term is just the Fourier transform of the vector of length N/2 given by x_e(n) = x_n + x_{n+N/2}, n=0,1,...,N/2-1.
For the odd terms we have

\begin{eqnarray*} X_{2k+1} &=& \sum_{n=0}^{N'-1} x_n e^{-2\pi i n (2k+1)/N} + \sum_{n=0}^{N'-1} x_{n+N'} e^{-2\pi i (2k+1)(n+N')/N} \\ &=& \sum_{n=0}^{N'-1} x_n e^{-2\pi i n k/N'} e^{-2\pi i n/N} + \sum_{n=0}^{N'-1} x_{n+N'} e^{-2\pi i (2kn + n + kN + N/2)/N} \\ &=& \sum_{n=0}^{N'-1} (x_n - x_{n+N'}) e^{-2\pi i n/N} e^{-2\pi i n k/N'} \end{eqnarray*}

where the last step uses e^{-2\pi i k} = 1 and e^{-\pi i} = -1. The last term is just the Fourier transform of the vector of length N/2 given by \tilde{x}_n = (x_n - x_{n+N/2}) e^{-2\pi i n/N}, n=0,1,...,N/2-1.

The recursive algorithm now follows by interleaving the even and the odd terms.

Examples:

fft [1,2,3,4]
[10.0 :+ 0.0, (-2.0) :+ 2.0, (-2.0) :+ 0.0, (-1.99999) :+ (-2.0)]

Check the reproduction property with N=4:

z = fft [1,2,3,4]
map (realPart . (/4)) $ ifft z
[1.0,2.0,3.0,4.0]

Test on a mixture of two sine waves. Evaluate a mixture of two sine waves on n equally-spaced points:

n = 1024 -- sample points (a power of 2)
dt = 10.24/n -- time interval for Fourier integral approximation
df = 1/dt/n -- frequency increment (df x dt = 1/n)
fs = 1/dt -- sampling rate (sampling theorem requires fs >= 2*BW)
f1 = 20 -- signal is a mixture of frequencies 20 and 300
f2 = 300
signal t = sin (2*pi*f1*t) + 0.5 * sin (2*pi*f2*t)
t = take n $ iterate (+ dt) 0
f = take n $ iterate (+ df) 0

ft1d

Slow 1D Fourier transform. Accepts input vectors of any size.

ift1d

Slow 1D inverse Fourier transform. Accepts input vectors of any size.

fftshift

fftshift rotates the result of fft so that the zero frequency point is in the center of the range. To understand why this is needed, it is helpful to recall the following approximation for the continuous Fourier transform, for a function that is very small outside the interval [a,b]. As noted in the introductory comments above, this is one of two possible interpretations of the discrete Fourier transform.
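The recursive algorithm derived above translates almost line for line into Haskell. The following is only an illustrative sketch of the mathematics (fftSketch and its helper names are not part of the library, whose fft may differ in details); it assumes the input length is a power of 2.

```haskell
import Data.Complex

-- Sketch of the recursion F_N(x) = I_N(F_{N/2}(x_e), F_{N/2}(x_o ⊙ α_N)).
-- Input length must be a power of 2.
fftSketch :: [Complex Double] -> [Complex Double]
fftSketch [x] = [x]
fftSketch xs  = interleave (fftSketch xe) (fftSketch xo')
  where
    n2       = length xs `div` 2
    (lo, hi) = splitAt n2 xs
    xe       = zipWith (+) lo hi                 -- x_e(n) = x_n + x_{n+N/2}
    xo       = zipWith (-) lo hi                 -- x_o(n) = x_n - x_{n+N/2}
    alphaN   = cis (-2 * pi / fromIntegral (length xs))  -- α_N = exp(-2πi/N)
    xo'      = zipWith (*) xo (iterate (* alphaN) 1)     -- x_o ⊙ α_N
    -- I_N(a,b) = (a_1, b_1, a_2, b_2, ...)
    interleave as bs = concat (zipWith (\a b -> [a, b]) as bs)
```

For example, fftSketch (map (:+ 0) [1,2,3,4]) reproduces the [10, -2+2i, -2, -2-2i] result shown in the examples above, up to rounding.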
We have

X(f) \approx \int_a^b x(t) e^{-2\pi i f t} dt \approx \sum_{j=0}^{N-1} x(a + j \Delta t) e^{-2\pi i f (a + j \Delta t)} \Delta t,

where \Delta t = \frac{b-a}{N}. Discretizing in the frequency domain and setting f = k\Delta f yields

X(k\Delta f) = \sum_{j=0}^{N-1} x(a + j\Delta t) e^{-2\pi i (a + j\Delta t)k \Delta f} = e^{-2\pi i a k \Delta f} \sum_{j=0}^{N-1} x_j e^{-2\pi i j k \Delta f\Delta t}\Delta t,

where x_j = x(a + j\Delta t). Now set \Delta f\Delta t = 1/N, and the standard discrete Fourier transform appears:

X(k\Delta f) = e^{-2\pi i a k\Delta f} \sum_{j=0}^{N-1} x_j e^{-2\pi i j k/N} \Delta t = e^{-2\pi i a k \Delta f} \Delta t\, X_d(x)(k),

where

X_d(x)(k) = \sum_{j=0}^{N-1} x_j e^{-2\pi i j k/N}.

Let's gather together the parameters involved in this approximation:

\Delta f\Delta t = \frac{1}{N},\qquad \Delta t = \frac{b-a}{N},\qquad \Delta f = \frac{f_s}{N},

where f_s = 1/\Delta t is the sampling rate. Corresponding to the N samples in the time domain, for convenience we normally consider exactly N samples in the frequency domain, but which samples do we choose? It is easy to check that the discrete Fourier transform X_d(x)(k) is periodic with period N in k, and the same is true of our approximation X(k\Delta f) provided we choose a = -p\Delta t for some integer p (which we can do without loss of generality), so that the exponential factor in the equation for X(k\Delta f) is periodic with period N.

In the standard discrete Fourier transform we define the time and frequency grid as follows: t = j\Delta t, for j = 0,1,2,...,N-1, and f = k\Delta f, for k = 0,1,2,...,N-1. In terms of our approximation above, this implicitly assumes that a = 0, so the exponential term in the expression for X(k\Delta f) does not appear. It also assumes that the zero frequency point is on the left edge where k = 0.
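The sum X_d(x)(k) above can be evaluated directly for input vectors of any size, which is essentially what a slow transform such as ft1d computes. A minimal sketch (the name dftSketch is illustrative; the library's implementation may differ):

```haskell
import Data.Complex

-- Direct O(N^2) evaluation of X_d(x)(k) = Σ_j x_j e^{-2πijk/N},
-- for input vectors of any size.
dftSketch :: [Complex Double] -> [Complex Double]
dftSketch xs =
  [ sum [ x * cis (-2 * pi * fromIntegral (j * k) / nD)
        | (j, x) <- zip [0 :: Int ..] xs ]
  | k <- [0 .. length xs - 1] ]
  where nD = fromIntegral (length xs)
```

On a power-of-2 input it agrees (up to rounding) with the fast transform, e.g. dftSketch (map (:+ 0) [1,2,3,4]) gives [10, -2+2i, -2, -2-2i].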
It follows from periodicity that X(-k\Delta f) = X((N-k)\Delta f), for k = 1,2,...,N/2-1, so negative frequencies wrap around in a circular fashion and appear to the right of the midpoint. In many applications it is more natural to work with

\tilde{X}(k) = X(((k+N/2) \bmod N)\,\Delta f),

the circular rotation of X to the left by N/2. This brings the zero frequency to the center of the range, and places the negative frequencies where we expect them to be. This is what fftshift does.

After applying this transformation the frequency grid becomes f = -f_s/2 + k\Delta f, for k=0,1,...,N-1, and we have -f_s/2 \leq f < f_s/2, which happens to be the same as the restriction imposed by Shannon's sampling theorem.

Examples:

It is well known that the Fourier transform of a Gaussian function \exp(-a t^2) is another Gaussian. Let's check this by first generating a time/frequency grid.

n = 1024 -- sample points
dt = 10.24/n -- time increment in Fourier integral approximation
df = 1/dt/n -- frequency increment (df x dt = 1/n)
fs = 1/dt -- sample rate
t = take n $ iterate (+ dt) 0
f = take n $ iterate (+ df) 0
signal t = exp(-64.0*t^2) -- Gaussian
gauss = map ((:+ 0.0) . signal) t -- apply signal and complexify
ft = fft gauss
mags = map magnitude ft
[rgraph| plot(f_hs, mags_hs, type='l', main='Uncentered Fourier Transform') |]

https://humangarden.net/images/GaussUncentered.png (GaussUncentered)

Notice that the resulting Gaussian is not properly centered, because zero frequency is on the left, and negative frequencies wrap around and appear on the right. Let's use fftshift to fix this.
ftshift = fftshift ft
magshift = map magnitude ftshift
fshift = take n $ iterate (+ df) (-fs/2)
[rgraph| plot(fshift_hs, magshift_hs, type='l', main='Centered Fourier Transform') |]

https://humangarden.net/images/GaussCentered.png (GaussCentered)

fftscale

fftscale shows the result of fft on a logarithmic (dB) scale.

Usage: fftscale x nfirst nsamples fs fc

x - complex-valued input signal
nfirst - first index value (zero-based)
nsamples - number of samples (must be a power of 2)
fs - original sample rate
fc - original central frequency

The subsample starts at nfirst and consists of nsamples points. nsamples must be a power of 2. The corresponding frequencies and normalized power are returned (measured in dB down from the max).

Examples:

Here we use HaskellR tools to import a data file created by R that contains I/Q data corresponding to an FM radio broadcast station (the inexpensive RTL-SDR tuner and ADC is used). The file contains the modulus of I + jQ for 1.2M samples (file size about 10 MB since each double occupies 8 bytes). This is the result of 12M values downsampled by a factor of 10. We show here how to use fftscale to create a spectral chart that shows power concentrated at the pilot tone (19 kHz), mono L+R audio (low frequencies up to 15 kHz), the stereo L-R channel (38 kHz), and the station ID RDS channel (57 kHz). These spectral lines are seen clearly in the waterfall diagram that appears in the documentation for analytic. (The station is decoded as Mono even though the content is Stereo, probably because the signal was weak.)

Note that the sample size (a power of 2) is less than the size of the input vector, and this introduces some noise. See the Wikipedia entry on FM broadcasting for more information. (Note that RDS in the R context means R Data Serialization, unrelated to the RDS used in FM broadcasting!)
getData :: R s [Double]
getData = do
  R.dynSEXP <$> [r| setwd("path to HaskellR")
                    readRDS("FMIQ.rds") |]
fmiq <- R.runRegion getData -- 1.2M samples (modulus I + jQ)
toDoubleComplex :: [Double] -> [Complex Double]
toDoubleComplex = map (:+ 0.0)
fs = 1200000
fftdata = fftscale (toDoubleComplex fmiq) 0 (2^15) (fs/10) 0
freq = fst fftdata
mag = snd fftdata
[rgraph| plot(freq_hs, mag_hs, type='l', xlab='Freq',
  ylab='Rel Amp [dB down from max]', main='FM Spectrum')
  abline(v=19000,col='red2')  -- Pilot frequency
  abline(v=38000,col='pink')  -- L-R channel
  abline(v=57000,col='green') |] -- RDS station signal

https://humangarden.net/images/FMIQ.png (FMIQ)

analytic

analytic uses fft and ifft to compute the analytic signal. Thus the input vector must have size a power of 2. Use the slower primed variant analytic' to support input vectors of any size. The imaginary part is the Hilbert transform of the input signal. Normally the input should have zero imaginary part.

Today digital signals (as well as analog signals like FM radio) are transmitted in the form of one-dimensional functions of the form

x(t) = I(t) \cos(2\pi f_c t) + Q(t) \sin(2\pi f_c t),

where f_c is a carrier frequency (modern technologies like WiFi/OFDM employ more than one carrier), and the functions I(t) and Q(t) encode information about amplitude and phase. In this way two-dimensional information is encoded in a one-dimensional signal (the extra dimension comes from phase differences or timing). Digital information is sent by subdividing the complex plane of I + jQ into regions corresponding to different symbols of the alphabet to be used (such regions are shown in a constellation diagram, like the one shown below).

The image below shows part of the GUI for a GNU Radio FM receiver and decoder (GR-RDS, available at https://github.com/bastibl/gr-rds.git) where the complex I + jQ plane is divided into two halves corresponding to two values of a binary signal that carries information like station name, content type, etc.
The digital information has central frequency 57 kHz, and its power profile is clearly seen in the spectral waterfall. A detailed discussion of the decoding process can be found in the online book PySDR (https://pysdr.org/content/rds.html). See the examples for fftscale for more information.

https://humangarden.net/images/RDSConstellation.png (RDSConstellation)

The Hilbert transform is related to this representation for the signal thanks to Bedrosian's theorem (see the Wikipedia entry on the Hilbert transform). It implies that in many cases we can write the Hilbert transform in terms of the same I(t) and Q(t) as follows:

\mathbb{H}(x)(t) = I(t) \cos(2\pi f_c t - \pi/2) + Q(t) \sin(2\pi f_c t - \pi/2).

In other words, the Hilbert transform introduces a phase shift of -90 degrees in the carrier basis functions (this is synthesized in quadrature demodulation).

The Hilbert transform for continuous-time functions is defined below, and most of its properties carry over to the discrete-time case. It is used to define the analytic signal for a real-valued signal x(t) as follows:

\mathbb{A}(t) = x(t) + j\mathbb{H}(x)(t),

where \mathbb{H}(x)(t) is the Hilbert transform of x(t). The most important property of the analytic signal is that its Fourier transform is zero for negative frequencies (see below). Let's begin by defining the analytic signal using this property.

Constructing the analytic signal for a given signal x(t) is straightforward: compute its Fourier transform X(f), and replace it with Z(f), where Z(f) = 2X(f) for f > 0, Z(f) = 0 for f < 0, and Z(0) = X(0). Then apply the inverse Fourier transform to obtain the analytic signal z(t) = x(t) + j\mathbb{H}(x)(t), where \mathbb{H}(x)(t) is the Hilbert transform of x(t). What we have done here is shift power from negative frequencies to positive frequencies in such a way that the total power is preserved.
It is well known that this definition of the Hilbert transform agrees with the one to be introduced below.

Some care is required when this is implemented in the discrete sampling domain, and this is the focus of "Computing the Discrete-Time 'Analytic' Signal via FFT" by S. Lawrence Marple, Jr., IEEE Trans. on Signal Processing, 47(9), 1999. The goal of this paper is to define the discrete analytic signal in such a way that properties of the continuous case are preserved; in particular, the real part of the analytic signal should be the original signal, and the real and imaginary parts should be orthogonal. It is shown that this will be the case if we modify the discrete Fourier transform as follows: Z[m] = 0 for N/2+1 \leq m \leq N-1 (as discussed in the documentation for fftshift, these are negative frequencies), Z[m] = X[m] for m = 0 and m = N/2, and Z[m] = 2X[m] for 1 \leq m \leq N/2-1 (these are positive frequencies, so we double the transform).

The Hilbert transform was originally studied in the continuous-time domain, where it is defined as

\mathbb{H}(x)(t) = \frac{1}{\pi} \int_{-\infty}^{\infty} \frac{x(s)}{t-s} ds,

a convolution with a singular kernel. The Hilbert transform is closely related to the Cauchy integral formula, potential theory, and many other areas of mathematical analysis. See "The Cauchy Transform, Potential Theory, and Conformal Mapping" by Steven R. Bell (2015), "Wavelets and Operators" by Yves Meyer (1992), or "Functional Analysis" by Walter Rudin (1991).

The analytic signal is defined in terms of the Hilbert transform:

\mathbb{A}(x)(t) = x(t) + j\mathbb{H}(x)(t).

To understand why this is useful let's recall some properties of the Hilbert transform. It can be shown (see Wikipedia) that

\mathbb{H}(\cos(\omega t)) = \cos(\omega t - \frac{\pi}{2}) = \sin(\omega t),

and

\mathbb{H}(\sin(\omega t)) = \sin(\omega t - \frac{\pi}{2}) = -\cos(\omega t),

when \omega > 0.
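Marple's modification of the discrete Fourier transform described above can be sketched as follows. To keep the sketch self-contained a slow O(N^2) DFT pair is inlined; all names here are illustrative, and the library's analytic uses its fast fft/ifft instead.

```haskell
import Data.Complex

-- Slow DFT pair, inlined so the sketch is standalone (any even N).
dftS, idftS :: [Complex Double] -> [Complex Double]
dftS xs =
  [ sum [ x * cis (-2 * pi * fromIntegral (j * k) / nD)
        | (j, x) <- zip [0 :: Int ..] xs ]
  | k <- [0 .. length xs - 1] ]
  where nD = fromIntegral (length xs)
idftS xs =
  [ sum [ x * cis (2 * pi * fromIntegral (j * k) / nD)
        | (j, x) <- zip [0 :: Int ..] xs ] / (nD :+ 0)
  | k <- [0 .. length xs - 1] ]
  where nD = fromIntegral (length xs)

-- Discrete analytic signal following Marple's Z[m] recipe quoted above.
analyticSketch :: [Double] -> [Complex Double]
analyticSketch xs = idftS (zipWith scale [0 :: Int ..] (dftS (map (:+ 0) xs)))
  where
    n = length xs
    scale m z
      | m == 0 || m == n `div` 2 = z      -- Z[m] = X[m] at m = 0 and m = N/2
      | m < n `div` 2            = 2 * z  -- Z[m] = 2 X[m], positive frequencies
      | otherwise                = 0      -- Z[m] = 0, negative frequencies
```

The real part of the result recovers the input, and the real and imaginary parts are orthogonal, as in the continuous case.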
x(t) can be expanded in a Fourier series of complex exponentials with both positive and negative frequencies. Indeed, just consider one (real) component:

\cos(\omega t) = (e^{j\omega t} + e^{-j\omega t})/2.

It contributes both positive and negative frequencies. But its contribution to the analytic signal is

\cos(\omega t) + j\sin(\omega t) = \frac{e^{j\omega t} + e^{-j\omega t}}{2} + j\frac{e^{j\omega t} - e^{-j\omega t}}{2j} = e^{j\omega t},

so negative frequency components do not appear. In other words, forming the analytic signal effectively filters out all negative frequencies. Another way to see this is to recall the relationship between the Hilbert transform and the Fourier transform (see Wikipedia):

\mathbb{F}(\mathbb{H}(x))(\omega) = -j\,\text{sgn}(\omega)\,\mathbb{F}(x)(\omega).

It follows readily from this that the Fourier transform of the analytic signal \mathbb{A}(t) = x(t) + j\mathbb{H}(x)(t) is zero for negative frequencies. This relationship also shows (under some technical conditions) that the Hilbert transform \mathbb{H}(x)(t) is orthogonal to the original signal x(t). To see this, use the Plancherel theorem, and observe that it leads to the integral of an odd function in the frequency domain, which is zero by symmetry.

Examples:

The analytic signal is obtained by shifting power from negative frequencies to positive frequencies in the Fourier transform, then applying the inverse Fourier transform. The imaginary part of the result is the Hilbert transform of the input signal. It is phase shifted by 90 degrees.

The real and imaginary parts can be plotted using your favorite graphics software. Below we use the R interface provided by HaskellR in a Jupyter notebook.

n = 1024
dt = 2*pi/(n-1)
x = take n $ iterate (+ dt) 0
y = [z | k <- [0..(n-1)], let z = sin (x!!k)]
z = analytic y
zr = map realPart z
zi = map imagPart z
[rgraph| plot(x_hs, zr_hs, xlab='t', ylab='signal', type='l', col='blue',
  main='Orig. signal blue, Hilbert transform red')
  lines(x_hs, zi_hs, type='l', col='red') |]

https://humangarden.net/images/analytic.png (Analytic)

The real and imaginary parts of the analytic signal are orthogonal to each other. Let's check this...

n = 1024
dt = 2*pi/(n-1)
x = take n $ iterate (+ dt) 0
y = [z | k <- [0..(n-1)], let z = sin (x!!k)]
z = analytic y
zr = map realPart z
zi = map imagPart z
sum $ zipWith (*) zr zi
1.5e-13

analytic'

Same as analytic but uses the slow transforms ft1d and ift1d. There are no restrictions on the input vector size.

rotate

Rotate a list to the left, wrapping to the end.

perm

Permute a list using a specified list of indices.

sublist

Return a sublist given a list of consecutive indices.

mathlist: Math.List.Wavelet
(C) Dominick Samperi 2023, BSD3, djsamperi@gmail.com, experimental, Safe-Inferred

wt1d

wt1d is the 1-dimensional discrete wavelet transform.

Usage: wt1d x nd jmin jmax

x - input vector of size M = 2^{jmax}
nd - Daubechies wavelet identifier N, where 2 \leq N \leq 10
jmin - last coarse vector has size 2^{jmin}
jmax - input vector size is 2^{jmax}

The Daubechies wavelets are defined in TenLectures. The input vector is the initial detail vector. The first level of the transform splits this vector into two parts, coarse (coarse coefficients---see below) and detail (detail coefficients). Then the coarse vector is split into two parts, one coarse and the other detail (the previous detail vector is unchanged). This continues until the coarse vector is reduced to size 2^{jmin}, where jmax > jmin \geq 0. The output vector returned by this function has the same size as the input vector, and it has the form (coarse, detail, d, d', d'', ...), where coarse is the last coarse vector, detail is its companion, and d, d', d'', etc. are detail vectors from previous steps, increasing in size by factors of 2. The inverse wavelet transform iwt1d starts with this vector and works backwards to recover the original input vector. Of course, it needs jmin to know where to start.
See the Examples below.

Here are the fundamental relations derived in Compact1988 that define the discrete compactly supported wavelet transform and its inverse:

a_k^{j+1} = \sum_n h(n-2k)\, a_n^j, \qquad d_k^{j+1} = \sum_n g(n-2k)\, a_n^j,

where a_k^j and d_k^j are the approximation and detail coefficients, respectively. See Compact1988, pp. 935-938. The reverse or inverse DWT is defined by

a_n^{j-1} = \sum_k h(n-2k)\, a_k^j + \sum_k g(n-2k)\, d_k^j.

It is clear from this that decimation by two and convolution are involved at each step. More precisely, these equations and the corresponding Haskell implementation make it clear that in the forward transform the filters are reversed, convolved with the input (prior level coefficients), and subsampled, while in the reverse transform the prior level coefficients are upsampled, convolved with the filters (not reversed), and summed. Implementations in Python, R, Julia and Matlab can be found in Data2021 and NumericalTours.

Note that much of the analysis in Compact1988 is done where n varies over all integers. In the implementation below we work modulo the size of the input vector, which turns ordinary convolutions into circular convolutions in the usual way.

The N-th Daubechies wavelet filter h has support [0,2N-1] (an even number of points). See Compact1988, Table 1, p. 980, and TenLectures, Table 6.1, p. 195 (higher precision). The novelty of this work was the discovery of invertible filters with compact support (the Shannon wavelet does not have compact support). This requires detailed Fourier analysis, from which it is deduced that the conjugate filter g can be chosen to satisfy g(k) = (-1)^k h(2p+1-k) for a convenient choice of p (see TenLectures, p. 136). To define g with the same support as h, for the N-th wavelet, use p = N-1, so

g(k) = (-1)^k h(2N-1-k), \qquad k=0,1,...,2N-1.
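Since h has support [0,2N-1], the relation g(k) = (-1)^k h(2N-1-k) amounts to reversing h and alternating signs. A minimal sketch (conjFilter is an illustrative name, not a library function):

```haskell
-- Build the conjugate (wavelet) filter g from the scaling filter h
-- via g(k) = (-1)^k h(2N-1-k), k = 0,...,2N-1: reverse h, alternate signs.
conjFilter :: [Double] -> [Double]
conjFilter h = zipWith (*) (cycle [1, -1]) (reverse h)
```

For example, conjFilter [1,2,3,4] yields [4,-3,2,-1], i.e. g(0)=h(3), g(1)=-h(2), g(2)=h(1), g(3)=-h(0).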
Following NumericalTours, we define the circular convolution cconv1d in such a way that the filter h is centered, and for this purpose a zero is prepended to both h and g (so these vectors have an odd number of points, with the understanding that their support does not change).

Examples:

dist :: [Double] -> [Double] -> Double
dist x y = sqrt (sum (zipWith (\x y -> (x-y)^2) x y))
sig = deltaFunc 5 1024
forward = wt1d sig 10 5 10 -- wavelet 10, jmin=5, jmax=10
backward = iwt1d forward 10 5 10
dist sig backward
1.2345e-12

Let's take a look at the simplest Daubechies wavelet (N=2) by taking the inverse transform of a delta function at position 5...

sig = deltaFunc 5 1024
wavelet2 = iwt1d sig 2 0 10
[rgraph| plot(wavelet2_hs,type='l') |]

https://humangarden.net/images/Wavelet2.png (Wavelet2)

Let's place masses at positions 100 and 400, with the first twice as large as the second.

x = deltaFunc 100 1024
y = deltaFunc 400 1024
z = zipWith (\x y -> 2*x + y) x y
wt = wt1d z 2 0 10
[rgraph| plot(wt_hs,type='l') |]

https://humangarden.net/images/TwoSpikes.png (TwoSpikes)

The spikes show up in the scalogram at about 1/2 the distance in scale units, and there are "harmonics" at the coarser scales (on the left).

iwt1d

iwt1d is the 1-dimensional inverse discrete wavelet transform.

Usage: iwt1d x nd jmin jmax

x - the output from wt1d when the last coarse vector has size 2^{jmin}, where 0 \leq jmin < jmax
nd - Daubechies wavelet identifier N, where 2 \leq N \leq 10
jmin - last coarse vector has size 2^{jmin}
jmax - output vector has size 2^{jmax}

Reverses the steps taken in wt1d above.

deltaFunc

deltaFunc creates a list of zeros except for one element that equals 1.

Usage: deltaFunc k n

n - length of the list
k - position that equals 1 (zero-based, k < n)

zeropad

zeropad pads a vector with zeros as needed.

cconv1d

cconv1d is the 1-dimensional circular convolution (with centering).

Usage: cconv1d h x

x - input signal
h - filter to convolve with
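The circular convolution underlying these routines, working modulo the length of the input as noted above, can be sketched as follows. This version is non-centered (the centering variant prepends a zero to the filter, as described earlier); cconvSketch is an illustrative name, and the library's functions may order arguments or pad differently.

```haskell
-- Non-centered circular convolution:
-- y(n) = Σ_k h(k) x((n-k) mod length x).
cconvSketch :: [Double] -> [Double] -> [Double]
cconvSketch h x =
  [ sum [ hk * x !! ((n - k) `mod` nx) | (k, hk) <- zip [0 :: Int ..] h ]
  | n <- [0 .. nx - 1] ]
  where nx = length x
```

For example, cconvSketch [1,1] [1,2,3,4] gives [5,3,5,7]: each output is the sum of a sample and its circular predecessor.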
cconv1dnc

cconv1dnc is the 1-dimensional non-centered circular convolution.

Usage: cconv1dnc h x

x - input signal
h - filter to convolve with

R equivalent: convolve(h, x, type='circular', conj=FALSE)

conv1d

conv1d is the 1-dimensional conventional convolution (no FFT).

Usage: conv1d h x

x - input signal
h - filter to convolve with

Python equivalent: np.convolve([1,2,3,4],[1,2,3,4])
Octave equivalent: conv(1:4,1:4)
R equivalent: convolve(h, rev(x), type='open')

Index (mathlist-0.1.0.4): Math.List.FFT, Math.List.Wavelet, Math.List -- fft, ifft, ft1d, ift1d, fftshift, fftscale, analytic, analytic', wt1d, iwt1d, deltaFunc, cconv1d, cconv1dnc, conv1d, rotate, perm, sublist, zeropad
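For reference, conventional (open) convolution with the same semantics as the np.convolve / conv equivalents quoted for conv1d can be sketched as follows (conv1dSketch is an illustrative name, not the library's implementation):

```haskell
-- Open (linear) convolution: y(n) = Σ_k h(k) x(n-k),
-- output length = length h + length x - 1.
conv1dSketch :: [Double] -> [Double] -> [Double]
conv1dSketch h x =
  [ sum [ h !! k * x !! (n - k)
        | k <- [0 .. length h - 1]
        , n - k >= 0
        , n - k < length x ]
  | n <- [0 .. length h + length x - 2] ]
```

For example, conv1dSketch [1,2,3,4] [1,2,3,4] gives [1,4,10,20,25,24,16], matching np.convolve([1,2,3,4],[1,2,3,4]).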