\(\text{Integral of the month: } \iint_S dt' dt e^{-\frac{|t'-t|}{\alpha}}\)
Integral, Wiener-Khinchin theorem
\(\require{cancel}\) \(\def\oot{\frac{1}{2}}\)
Introduction
I looked at this integral in its general form in an earlier post. Here I want to work out the specific case where the function is \(e^{-\frac{|t'-t|}{\alpha}}\). This kernel is very common in signal processing and statistics, where \(\alpha\) is a positive constant that controls the decay rate. We will also compute the discrete version of the integral. Let’s first review the general form of the integral.
General form of the integral
We want to compute the integral \(I=\int_{\frac{-T}{2}}^{\frac{T}{2}}\int_{\frac{-T}{2}}^{\frac{T}{2}} dt' dt f(t'-t)\).
The argument of the function begs for a change of coordinates:
\[\begin{eqnarray} u=t'-t, \quad \text{and} \quad v=t+t' \label{eq:trans}, \end{eqnarray}\] and the associated inverse transform reads: \[\begin{eqnarray} t'=\frac{u+v}{2}, \quad \text{and} \quad t=\frac{v-u}{2}. \label{eq:transi} \end{eqnarray}\]
This transformation will rotate and scale the integration domain as shown in Figure 1.
The equation of the top boundary on the right can be written as \(v=T-u\), and on the left as \(v=T+u\). We can combine them as \(v=T-|u|\). The same analysis for the lower boundaries shows that the height of the slice at a given \(u\) is \(2(T-|u|)\). This lets us easily integrate \(v\) out as follows: \[\begin{eqnarray} I&=&\int_{\frac{-T}{2}}^{\frac{T}{2}}\int_{\frac{-T}{2}}^{\frac{T}{2}} dt' dt f(t'-t) =\iint_{S_{u,v}}\left|\frac{\partial(t,t')}{\partial(u,v)}\right| dv\, du\, f(u)\nonumber\\ &=&\int_{-T}^T 2(T-|u|) \times\frac{1}{2}\, du\, f(u)=\int_{-T}^T du\, f(u)(T-|u|) \label{eq:transeq}, \end{eqnarray}\] where \(\left|\frac{\partial(t,t')}{\partial(u,v)}\right|=\frac{1}{2}\) is the determinant of the Jacobian matrix associated with the transformation in Eq. \(\ref{eq:transi}\).
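Before specializing, we can sanity-check Eq. \(\ref{eq:transeq}\) numerically. The sketch below (NumPy; the test function \(f(u)=\cos u\) and the values of \(T\) and the grid size are arbitrary choices) compares a midpoint-rule evaluation of the double integral with the reduced one-dimensional integral:

```python
import numpy as np

# Midpoint-rule check that the double integral of f(t'-t) over the square
# [-T/2, T/2]^2 equals the reduced integral of f(u)(T - |u|) over [-T, T].
# f(u) = cos(u) is an arbitrary test function.
T, n = 3.0, 1000

h = T / n
t = -T / 2 + (np.arange(n) + 0.5) * h          # midpoints in [-T/2, T/2]
tp, tt = np.meshgrid(t, t)
double = h * h * np.sum(np.cos(tp - tt))       # 2-d midpoint rule

hu = T / n                                     # same spacing on [-T, T]
u = -T + (np.arange(2 * n) + 0.5) * hu
single = hu * np.sum(np.cos(u) * (T - np.abs(u)))

print(double, single)  # both approximate 2*(1 - cos(T))
```

For this choice of \(f\), both estimates also agree with the exact value \(2(1-\cos T)\).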
Computing the integral
Now let’s apply the general formula to compute the specific case where \(f(u) = e^{-\frac{|u|}{\alpha}}\). Using Eq. \(\ref{eq:transeq}\), we have:
\[\begin{eqnarray} I&=&\int_{-T}^T du \, e^{-\frac{|u|}{\alpha}}(T-|u|) \label{eq:expintegral} \end{eqnarray}\]
Since the integrand is even, we can simplify this to:
\[\begin{eqnarray} I&=&2\int_{0}^T du \, e^{-\frac{u}{\alpha}}(T-u) \label{eq:expeven} \end{eqnarray}\]
We can evaluate this integral by parts. Let \(w = T-u\) and \(dv = e^{-\frac{u}{\alpha}} du\), so \(dw = -du\) and \(v = -\alpha e^{-\frac{u}{\alpha}}\):
\[\begin{eqnarray} \int_0^T (T-u)e^{-\frac{u}{\alpha}} du &=& (T-u)\left(-\alpha e^{-\frac{u}{\alpha}}\right)\Big|_0^T - \int_0^T \left(-\alpha e^{-\frac{u}{\alpha}}\right)(-du)\nonumber\\ &=& (T-u)\left(-\alpha e^{-\frac{u}{\alpha}}\right)\Big|_0^T - \alpha\int_0^T e^{-\frac{u}{\alpha}} du\nonumber\\ &=& \left[0 - T\left(-\alpha\right)\right] - \alpha\left[-\alpha e^{-\frac{u}{\alpha}}\Big|_0^T\right]\nonumber\\ &=& \alpha T - \alpha\left[-\alpha e^{-\frac{T}{\alpha}} + \alpha\right]\nonumber\\ &=& \alpha T + \alpha^2(e^{-\frac{T}{\alpha}} - 1) \end{eqnarray}\]
Therefore:
\[\begin{eqnarray} I&=&2\left[\alpha T + \alpha^2(e^{-\frac{T}{\alpha}} - 1)\right]\nonumber\\ &=&2\alpha T + 2\alpha^2(e^{-\frac{T}{\alpha}} - 1) \label{eq:finalresult} \end{eqnarray}\]
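We can corroborate this closed form numerically; here is a minimal NumPy sketch (the values of \(\alpha\), \(T\), and the grid resolution are arbitrary):

```python
import numpy as np

# Midpoint-rule check of I = 2*alpha*T + 2*alpha**2*(exp(-T/alpha) - 1)
# against the reduced integral over u in [-T, T].
alpha, T, n = 0.7, 5.0, 200_000

h = 2 * T / n
u = -T + (np.arange(n) + 0.5) * h
numeric = h * np.sum(np.exp(-np.abs(u) / alpha) * (T - np.abs(u)))

closed = 2 * alpha * T + 2 * alpha**2 * (np.exp(-T / alpha) - 1)
print(numeric, closed)  # agree to quadrature accuracy
```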
Linear algebra / precision-operator approach
There is a beautiful connection to operator theory that mirrors the discrete AR(1) case. The kernel \(e^{-\frac{|t-t'|}{\alpha}}\) has a precision operator (inverse) that is a local differential operator, just as the discrete precision matrix is tridiagonal.
For convenience, let’s work on the interval \([0,T]\) and define:
\[\begin{eqnarray} y(t) := (K\mathbf{1})(t) = \int_0^T e^{-\frac{|t-s|}{\alpha}} ds. \label{eq:ydef} \end{eqnarray}\]
Then the integral we want to compute is:
\[\begin{eqnarray} I(T) = \frac{1}{T^2} \int_0^T y(t) dt = \frac{1}{T^2} \int_0^T \int_0^T e^{-\frac{|t-s|}{\alpha}} dt ds. \label{eq:operatorintegral} \end{eqnarray}\]
Note: apart from the \(\frac{1}{T^2}\) normalization, shifting the interval from \([-T/2, T/2]\) to \([0,T]\) changes nothing, since the kernel depends only on the difference \(t-s\). The key insight is the “fancy” step: on the real line, the precision operator acts as:
\[\begin{eqnarray} \left(1 - \alpha^2 \frac{d^2}{dt^2}\right) e^{-\frac{|t-s|}{\alpha}} = 2\alpha \delta(t-s), \label{eq:precisionaction} \end{eqnarray}\]
in the distributional sense. This is the continuum analogue of “\(K^{-1}\) is tridiagonal”: the inverse is local (a differential operator rather than an integral operator).
Now apply the operator \(L := 1 - \alpha^2 \frac{d^2}{dt^2}\) to \(y(t) = \int_0^T K(t,s) ds\):
\[\begin{eqnarray} Ly(t) = \int_0^T L K(t,s) ds = \int_0^T 2\alpha \delta(t-s) ds = 2\alpha, \quad t \in (0,T). \label{eq:ly} \end{eqnarray}\]
So \(y(t)\) satisfies the ODE:
\[\begin{eqnarray} y(t) - \alpha^2 y''(t) = 2\alpha, \quad 0 < t < T. \label{eq:ode} \end{eqnarray}\]
Because we’re on a finite interval, we also get Robin boundary conditions. These follow from the integral definition of \(y\) by differentiating carefully at the boundaries:
\[\begin{eqnarray} y'(0) = \frac{1}{\alpha} y(0), \quad y'(T) = -\frac{1}{\alpha} y(T). \label{eq:boundary} \end{eqnarray}\]
The general solution to the ODE is:
\[\begin{eqnarray} y(t) = 2\alpha + A e^{t/\alpha} + B e^{-t/\alpha}, \label{eq:gensol} \end{eqnarray}\]
and the boundary conditions force:
\[\begin{eqnarray} A = -\alpha e^{-T/\alpha}, \quad B = -\alpha, \label{eq:coeffs} \end{eqnarray}\]
so:
\[\begin{eqnarray} y(t) = \alpha\left(2 - e^{-\frac{t}{\alpha}} - e^{-\frac{T-t}{\alpha}}\right). \label{eq:ysol} \end{eqnarray}\]
Now integrate to get:
\[\begin{eqnarray} \int_0^T y(t) dt = \alpha\left(2T - 2\alpha(1 - e^{-\frac{T}{\alpha}})\right). \label{eq:integratey} \end{eqnarray}\]
Therefore:
\[\begin{eqnarray} I(T) = \frac{1}{T^2} \int_0^T \int_0^T e^{-\frac{|t-s|}{\alpha}} dt ds = \frac{2\alpha}{T} - 2\left(\frac{\alpha}{T}\right)^2\left(1 - e^{-\frac{T}{\alpha}}\right). \label{eq:operatorresult} \end{eqnarray}\]
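Both the solution Eq. \(\ref{eq:ysol}\) and the normalized integral Eq. \(\ref{eq:operatorresult}\) can be checked against direct quadrature (a NumPy sketch; the parameter values are arbitrary):

```python
import numpy as np

# Quadrature check of y(t) = alpha*(2 - exp(-t/alpha) - exp(-(T-t)/alpha))
# and of the normalized integral I(T).
alpha, T, n = 0.7, 5.0, 100_000

h = T / n
s = (np.arange(n) + 0.5) * h                   # midpoints in [0, T]

def y_closed(t):
    return alpha * (2 - np.exp(-t / alpha) - np.exp(-(T - t) / alpha))

for t in (0.0, 1.3, T / 2, T):
    y_quad = h * np.sum(np.exp(-np.abs(t - s) / alpha))
    assert abs(y_quad - y_closed(t)) < 1e-6

I_closed = 2 * alpha / T - 2 * (alpha / T) ** 2 * (1 - np.exp(-T / alpha))
I_quad = h * np.sum(y_closed(s)) / T**2        # integrate y(t) over [0, T]
print(I_quad, I_closed)
```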
Undoing the normalization, \(T^2 I(T) = 2\alpha T + 2\alpha^2(e^{-\frac{T}{\alpha}} - 1)\), which is exactly the result in Eq. \(\ref{eq:finalresult}\); by translation invariance the symmetric interval \([-T/2, T/2]\) gives the same value. This operator-theoretic approach elegantly mirrors the discrete case: just as the discrete precision matrix is tridiagonal, the continuous precision operator is a local second-order differential operator.
Kernel approach
We can derive the same result using Fourier analysis, analogous to the discrete case. Write the weight function
\[\begin{eqnarray} k(u) = e^{-\frac{|u|}{\alpha}}, \quad u \in \mathbb{R}. \label{eq:weightfunc} \end{eqnarray}\]
Then the integral can be written as:
\[\begin{eqnarray} I = \int_{-T}^{T} k(u) (T-|u|) du. \label{eq:convintegral} \end{eqnarray}\]
The Fourier transform of the weight function \(k(u)\) is the continuous Poisson kernel (also known as the Cauchy kernel):
\[\begin{eqnarray} \hat{k}(\omega) = \int_{-\infty}^{\infty} e^{-\frac{|u|}{\alpha}} e^{-i\omega u} du = \frac{2\alpha}{1+\alpha^2\omega^2}. \label{eq:poissonkernelcont} \end{eqnarray}\]
The Fourier transform of the triangular window \(w(u) = T-|u|\) for \(|u| \leq T\) (and zero otherwise) is:
\[\begin{eqnarray} \hat{w}(\omega) = \int_{-T}^{T} (T-|u|) e^{-i\omega u} du = T^2 \left(\frac{\sin(\omega T/2)}{\omega T/2}\right)^2 = T^2 \operatorname{sinc}^2(\omega T/2), \label{eq:fejerkernelcont} \end{eqnarray}\]
where \(\operatorname{sinc}(x) = \frac{\sin x}{x}\). This is the continuous analog of the Fejér kernel.
By Parseval’s theorem (or the convolution theorem), we have:
\[\begin{eqnarray} I = \frac{1}{2\pi} \int_{-\infty}^{\infty} \hat{k}(\omega) \hat{w}(\omega) d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{2\alpha}{1+\alpha^2\omega^2} T^2 \left(\frac{\sin(\omega T/2)}{\omega T/2}\right)^2 d\omega. \label{eq:fourierintegralcont} \end{eqnarray}\]
This integral can be evaluated using contour integration in the complex plane. Let’s work through the calculation step by step.
We have:
\[\begin{eqnarray} I = \frac{T^2}{2\pi} \int_{-\infty}^{\infty} \frac{2\alpha}{1+\alpha^2\omega^2} \left(\frac{\sin(\omega T/2)}{\omega T/2}\right)^2 d\omega. \label{eq:contourstart} \end{eqnarray}\]
Using the identity \(\sin^2(\omega T/2) = \frac{1-\cos(\omega T)}{2}\), so that \(\left(\frac{\sin(\omega T/2)}{\omega T/2}\right)^2 = \frac{2(1-\cos(\omega T))}{\omega^2 T^2}\), we rewrite the integrand; note that the factors of \(T^2\) cancel:
\[\begin{eqnarray} I = \frac{2\alpha}{\pi} \int_{-\infty}^{\infty} \frac{1-\cos(\omega T)}{(1+\alpha^2\omega^2)\omega^2} d\omega. \label{eq:rewritten} \end{eqnarray}\]
Now write \(\cos(\omega T) = \frac{e^{i\omega T} + e^{-i\omega T}}{2}\) to split the integral:
\[\begin{eqnarray} I = \frac{2\alpha}{\pi} \int_{-\infty}^{\infty} \frac{1}{(1+\alpha^2\omega^2)\omega^2} d\omega - \frac{\alpha}{\pi} \int_{-\infty}^{\infty} \frac{e^{i\omega T}}{(1+\alpha^2\omega^2)\omega^2} d\omega - \frac{\alpha}{\pi} \int_{-\infty}^{\infty} \frac{e^{-i\omega T}}{(1+\alpha^2\omega^2)\omega^2} d\omega. \label{eq:split} \end{eqnarray}\] Each piece on its own is singular at \(\omega = 0\); the split is understood with the contour indented around the origin, and the singular contributions cancel in the combination.
For the terms with \(e^{\pm i\omega T}\), we use contour integration. For \(e^{i\omega T}\), close the contour in the upper half-plane (since \(T > 0\)). The integrand has poles at:

- \(\omega = \pm i/\alpha\) from \(1+\alpha^2\omega^2 = 0\) (only \(\omega = i/\alpha\) lies in the upper half-plane)
- \(\omega = 0\) from the \(\omega^2\) in the denominator
The pole at \(\omega = 0\) is actually removable when we consider the full expression, but we need to handle it carefully. Let’s compute the residue at \(\omega = i/\alpha\):
\[\begin{eqnarray} \operatorname{Res}\left(\frac{e^{i\omega T}}{(1+\alpha^2\omega^2)\omega^2}, \omega = i/\alpha\right) &=& \lim_{\omega \to i/\alpha} \frac{e^{i\omega T}}{\alpha^2(\omega + i/\alpha)\omega^2} \nonumber\\ &=& \frac{e^{-T/\alpha}}{\alpha^2(2i/\alpha)(-1/\alpha^2)} = \frac{i\alpha e^{-T/\alpha}}{2}. \label{eq:residueupper} \end{eqnarray}\]
For the term with \(e^{-i\omega T}\), we close in the lower half-plane and pick up the pole at \(\omega = -i/\alpha\):
\[\begin{eqnarray} \operatorname{Res}\left(\frac{e^{-i\omega T}}{(1+\alpha^2\omega^2)\omega^2}, \omega = -i/\alpha\right) = -\frac{i\alpha e^{-T/\alpha}}{2}, \label{eq:residuelower} \end{eqnarray}\]
where the minus sign comes from the clockwise orientation of the lower half-plane contour.
The first integral (without the exponential) diverges at \(\omega = 0\) on its own; the cleanest route is to keep the \(1-\cos(\omega T)\) factor together and use partial fractions, \(\frac{1}{\omega^2(1+\alpha^2\omega^2)} = \frac{1}{\omega^2} - \frac{\alpha^2}{1+\alpha^2\omega^2}\), along with the standard results \(\int_{-\infty}^{\infty}\frac{1-\cos(\omega T)}{\omega^2}\,d\omega = \pi T\) and \(\int_{-\infty}^{\infty}\frac{\cos(\omega T)}{1+\alpha^2\omega^2}\,d\omega = \frac{\pi}{\alpha}e^{-T/\alpha}\), the latter following from the residues above. Collecting all contributions, we obtain:
\[\begin{eqnarray} I = 2\alpha T + 2\alpha^2(e^{-T/\alpha} - 1), \label{eq:contourresult} \end{eqnarray}\]
which matches Eq. \(\ref{eq:finalresult}\). This is no accident: by Parseval’s theorem, the frequency-domain integral of the Lorentzian (Poisson kernel) against the squared sinc (Fejér-type kernel) is just a restatement of the time-domain overlap integral in Eq. \(\ref{eq:convintegral}\).
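The frequency-domain representation is easy to verify numerically; the sketch below (NumPy; the parameter values and the truncation window are arbitrary choices) evaluates Eq. \(\ref{eq:fourierintegralcont}\) on a finite grid:

```python
import numpy as np

# Midpoint evaluation of the frequency-domain integral
#   I = (1/2pi) * \int k_hat(w) * w_hat(w) dw,
# truncated to [-W, W] (the integrand decays like w**-4), compared with
# the closed form I = 2*alpha*T + 2*alpha**2*(exp(-T/alpha) - 1).
alpha, T = 0.7, 5.0
W, n = 200.0, 1_000_000

h = 2 * W / n
w = -W + (np.arange(n) + 0.5) * h              # midpoints, never exactly 0

k_hat = 2 * alpha / (1 + alpha**2 * w**2)      # Lorentzian / Poisson kernel
x = w * T / 2
w_hat = T**2 * (np.sin(x) / x) ** 2            # squared-sinc (triangular window)

numeric = h * np.sum(k_hat * w_hat) / (2 * np.pi)
closed = 2 * alpha * T + 2 * alpha**2 * (np.exp(-T / alpha) - 1)
print(numeric, closed)
```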
Computing the discrete sum
Now let’s compute the discrete analog of this integral. We first define a dimensionless step size \(\Delta\); its relation to the original time period \(T\) is: \[\begin{eqnarray} \Delta = \frac{T}{\alpha N} \label{eq:deltaalpha} \end{eqnarray}\] so that \(e^{-\Delta|i-j|} = e^{-\frac{|t_i-t_j|}{\alpha}}\) on a grid of \(N\) points with spacing \(\frac{T}{N}\).
The measure of the integral is \(dt\), which converts to \(\frac{T}{N}\) in the discrete case, that is: \[\begin{eqnarray} dt\, dt'= \frac{T^2}{N^2} \label{eq:measures} \end{eqnarray}\]
Therefore the discrete version of the integral we want to evaluate is:
\[\begin{eqnarray} S = \frac{T^2}{N^2} \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} e^{-\Delta|i-j|} \label{eq:discretesum} \end{eqnarray}\]
Similar to the continuous case, the sum depends only on the absolute difference \(k = |i-j|\). For a given \(k\):
- When \(k = 0\): \(i = j\), so there are \(N\) pairs
- When \(k > 0\): For each \(i\) from \(0\) to \(N-1\), we have \(j = i+k\) (when \(i+k < N\)) and \(j = i-k\) (when \(i-k \geq 0\))
- For \(j = i+k\): valid when \(i < N-k\), giving \(N-k\) pairs
- For \(j = i-k\): valid when \(i \geq k\), giving \(N-k\) pairs
- Total: \(2(N-k)\) pairs
Therefore, we can rewrite the sum as:
\[\begin{eqnarray} \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} e^{-\Delta|i-j|} &=& N + \sum_{k=1}^{N-1} 2(N-k) e^{-\Delta k}\nonumber\\ &=& N + 2\sum_{k=1}^{N-1} (N-k) e^{-\Delta k}. \label{eq:sumrewrite} \end{eqnarray}\] Let’s do this in two pieces. \[\begin{eqnarray} S_1&=& \sum_{k=1}^{N-1} e^{-\Delta k}=\sum_{k=0}^{N-1} e^{-\Delta k}-1=\frac{1-e^{-\Delta N}}{1-e^{-\Delta }}-1 = e^{-\Delta }\frac{1-e^{-\Delta (N-1)}}{1-e^{-\Delta }} \label{eq:sumrewrites1} \end{eqnarray}\] And the second piece of this form:
\[\begin{eqnarray} S_2&=& \sum_{k=1}^{N-1} k \,e^{-\Delta k}=\sum_{k=0}^{N-1} k \,e^{-\Delta k}=-\frac{\partial}{\partial \Delta}\sum_{k=0}^{N-1} e^{-\Delta k}=-\frac{\partial}{\partial \Delta}\frac{1-e^{-\Delta N}}{1-e^{-\Delta }}\nonumber\\ &=&e^{-\Delta } \frac{1-Ne^{-(N-1) \Delta }+(N-1)e^{-N \Delta }}{(1-e^{-\Delta })^2} \label{eq:sumrewrites2} \end{eqnarray}\]
Reorganizing the terms, we get the final result: \[\begin{eqnarray} S &=& \frac{T^2}{N^2} \left[N + 2N e^{-\Delta } \frac{1-e^{-(N-1) \Delta }}{1-e^{-\Delta }} - 2e^{-\Delta } \frac{1-Ne^{-(N-1) \Delta }+(N-1)e^{-N \Delta }}{(1-e^{-\Delta })^2}\right]\nonumber\\ &=&\frac{T^2}{N^2} \left[ \left(N + 2N e^{-\Delta } \frac{1}{1-e^{-\Delta }}\right) -2N e^{-\Delta } \frac{e^{-(N-1) \Delta }}{1-e^{-\Delta }}- 2e^{-\Delta } \frac{1-Ne^{-(N-1) \Delta }+(N-1)e^{-N \Delta }}{(1-e^{-\Delta })^2}\right]\nonumber\\ &=&\frac{T^2}{N^2} \left[ N\frac{1+e^{-\Delta }}{1-e^{-\Delta }} -2 e^{-\Delta } \frac{1- e^{-N\Delta }}{\left(1-e^{-\Delta }\right)^2} \right] \label{eq:discreteresult} \end{eqnarray}\]
We can replace \(\Delta\) with \(\frac{T}{N \alpha}\) from Eq. \(\ref{eq:deltaalpha}\). In the small \(\Delta\) limit, we can approximate \(e^{-\Delta }\) as \(1-\Delta\). Therefore:
\[\begin{eqnarray} S &\approx& \frac{T^2}{N^2} \left[ \frac{2N}{\Delta } -2 \frac{1- e^{-\frac{T}{\alpha}}}{\Delta^2} \right]\nonumber\\ &=& 2\alpha T - 2 \alpha^2 \left(1- e^{-\frac{T}{\alpha}} \right) \label{eq:discretelargeN}, \end{eqnarray}\]
which is the same as the continuous result in Eq. \(\ref{eq:finalresult}\).
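Both claims are easy to verify numerically; the sketch below (NumPy; the parameter values are arbitrary) checks the closed form against a brute-force double sum, and its small-\(\Delta\) convergence to the continuous result:

```python
import numpy as np

# Brute-force check of the closed form for the discrete double sum, plus
# its small-Delta convergence to the continuous integral.
alpha, T = 0.7, 5.0

def S_closed(N):
    d = T / (alpha * N)
    e = np.exp(-d)
    return (T**2 / N**2) * (N * (1 + e) / (1 - e)
                            - 2 * e * (1 - np.exp(-d * N)) / (1 - e) ** 2)

def S_brute(N):
    d = T / (alpha * N)
    i = np.arange(N)
    return (T**2 / N**2) * np.sum(np.exp(-d * np.abs(i[:, None] - i[None, :])))

assert abs(S_closed(200) - S_brute(200)) < 1e-9

I_cont = 2 * alpha * T + 2 * alpha**2 * (np.exp(-T / alpha) - 1)
print(S_closed(100_000), I_cont)  # the sum approaches the integral as N grows
```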
A faster way
Let’s try a different way by using the symmetry of the problem. We can sum over \(j\) from \(0\) to \(i\), which resolves the absolute value since \(i \geq j\). When \(j > i\), the indices are dummies, so that triangle contributes the exact same amount. So can we simply double the lower-triangular sum? Almost! That double counts the diagonal \(i=j\) terms, whose total is \(N\), so we subtract it once.
\[\begin{eqnarray} S &=& \frac{T^2}{N^2} \left(2\sum_{i=0}^{N-1}\sum_{j=0}^{i} e^{-\Delta(i-j)}-N\right) =\frac{T^2}{N^2} \left(2 \sum_{i=0}^{N-1} e^{-\Delta i}\sum_{j=0}^{i} e^{\Delta j}-N\right)\nonumber\\ &=&\frac{T^2}{N^2} \left(2\sum_{i=0}^{N-1} e^{-\Delta i} \frac{1-e^{\Delta (i+1)}}{1-e^{\Delta }}-N\right) =\frac{T^2}{N^2} \left(\frac{2}{1-e^{\Delta }}\sum_{i=0}^{N-1} \left(e^{-\Delta i}-e^{\Delta }\right)-N\right)\nonumber\\ &=&\frac{T^2}{N^2} \left(\frac{2}{1-e^{\Delta }}\left[\frac{1-e^{-\Delta N}}{1-e^{-\Delta }} -N e^{\Delta }\right]-N \right) \nonumber\\ &=&\frac{T^2}{N^2} \left(-2e^{-\Delta }\frac{1-e^{-\Delta N}}{\left(1-e^{-\Delta }\right)^2} + N\frac{1+e^{-\Delta }}{1-e^{-\Delta }} \right) \label{eq:discretesumnew}, \end{eqnarray}\] which is the same as the result in Eq. \(\ref{eq:discreteresult}\).
Linear algebra / AR(1) trick
There is a beautiful connection to linear algebra and autoregressive processes. The kernel \(e^{-\Delta|i-j|}\) is exactly the covariance matrix of an AR(1) (autoregressive order 1) process.
Consider the \(N \times N\) covariance matrix \(\Sigma\) with entries:
\[\begin{eqnarray} \Sigma_{ij} = e^{-\Delta|i-j|}, \quad i,j = 0, 1, \ldots, N-1. \label{eq:covmatrix} \end{eqnarray}\]
This is a symmetric Toeplitz matrix. The unnormalized sum \(S_N = \sum_{i,j} e^{-\Delta|i-j|}\) can be written as:
\[\begin{eqnarray} S_N = \mathbf{1}^T \Sigma \mathbf{1} = \sum_{i,j} \Sigma_{ij}, \label{eq:matrixsum} \end{eqnarray}\]
where \(\mathbf{1}\) is the \(N\)-dimensional vector of all ones.
The key insight is that for an AR(1) process \(x_n = r x_{n-1} + \epsilon_n\) with \(r = e^{-\Delta}\) and white noise \(\epsilon_n\), the covariance matrix has a tridiagonal inverse. Specifically, the precision matrix \(\Sigma^{-1}\) is:
\[\begin{eqnarray} \Sigma^{-1} = \frac{1}{1-r^2} \begin{pmatrix} 1 & -r & 0 & \cdots & 0 \\ -r & 1+r^2 & -r & \cdots & 0 \\ 0 & -r & 1+r^2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & -r & 1 \end{pmatrix}. \label{eq:precisionmatrix} \end{eqnarray}\]
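The tridiagonal form of the precision matrix is easy to confirm numerically (a NumPy sketch; \(N\) and \(r\) are arbitrary choices):

```python
import numpy as np

# Verify that the claimed tridiagonal matrix is indeed the inverse of the
# AR(1) covariance matrix Sigma_ij = r**|i-j|.
N, r = 8, np.exp(-0.3)

i = np.arange(N)
Sigma = r ** np.abs(i[:, None] - i[None, :])

P = np.zeros((N, N))
P[i, i] = 1 + r**2                 # interior diagonal entries
P[0, 0] = P[-1, -1] = 1.0          # boundary entries
P[i[:-1], i[:-1] + 1] = -r         # super-diagonal
P[i[:-1] + 1, i[:-1]] = -r         # sub-diagonal
P /= 1 - r**2

err = np.max(np.abs(P @ Sigma - np.eye(N)))
print(err)  # ~ machine precision
```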
Because \(\Sigma^{-1}\) is tridiagonal, we can exploit the structure directly: solving the tridiagonal system \(\Sigma^{-1} y = \mathbf{1}\) gives \(y = \Sigma \mathbf{1}\) in \(O(N)\) operations, and then \(S_N = \mathbf{1}^T y\). Alternatively, for a Toeplitz matrix with entries \(r^{|i-j|}\), the sum of all entries can be computed by recognizing it as a two-dimensional geometric series.
A particularly elegant approach uses the Cholesky decomposition or the fact that \(\Sigma\) can be written as \(\Sigma = LL^T\) where \(L\) is lower triangular. Then:
\[\begin{eqnarray} S_N = \mathbf{1}^T LL^T \mathbf{1} = \|L^T \mathbf{1}\|^2. \label{eq:cholesky} \end{eqnarray}\]
Computing \(L^T \mathbf{1}\) explicitly and taking its squared norm yields the same result as Eq. \(\ref{eq:discreteresult}\). This linear algebra perspective connects the discrete sum to covariance matrix theory and provides computational advantages for large \(N\).
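Both linear-algebra routes can be sketched and checked against the closed form derived earlier (with \(r = e^{-\Delta}\)). In the sketch below (NumPy; \(N\) and \(r\) are arbitrary, and a dense solve stands in for a true \(O(N)\) tridiagonal solver):

```python
import numpy as np

# Two linear-algebra routes to S_N = 1^T Sigma 1, compared with the
# closed form N*(1+r)/(1-r) - 2*r*(1-r**N)/(1-r)**2:
# (i) the Cholesky norm ||L^T 1||^2, and (ii) solving the tridiagonal
# precision system Sigma^{-1} y = 1, so that S_N = 1^T y.
N, r = 50, np.exp(-0.1)

i = np.arange(N)
Sigma = r ** np.abs(i[:, None] - i[None, :])
ones = np.ones(N)

L = np.linalg.cholesky(Sigma)               # Sigma = L @ L.T
S_chol = np.sum((L.T @ ones) ** 2)

P = (np.diag(np.full(N, 1 + r**2))
     + np.diag(np.full(N - 1, -r), 1)
     + np.diag(np.full(N - 1, -r), -1))
P[0, 0] = P[-1, -1] = 1.0
P /= 1 - r**2                               # precision matrix Sigma^{-1}
S_solve = ones @ np.linalg.solve(P, ones)   # dense stand-in for an O(N) tridiagonal solve

S_exact = N * (1 + r) / (1 - r) - 2 * r * (1 - r**N) / (1 - r) ** 2
print(S_chol, S_solve, S_exact)
```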
Dirichlet kernel
There is an elegant approach using Fourier analysis. Write the weight sequence
\[\begin{eqnarray} k_n = r^{|n|}, \quad n \in \mathbb{Z}, \label{eq:weightseq} \end{eqnarray}\]
where \(r = e^{-\Delta}\). Then the unnormalized sum (without the \(\frac{T^2}{N^2}\) factor) is:
\[\begin{eqnarray} S_N = \sum_{i,j} k_{i-j} = \sum_{n=-(N-1)}^{N-1} (N-|n|) k_n. \label{eq:convsum} \end{eqnarray}\]
Now use Fourier series: the discrete-time Fourier transform of \(k_n\) is the Poisson kernel
\[\begin{eqnarray} \hat{k}(\theta) = \sum_{n=-\infty}^{\infty} r^{|n|} e^{-i n \theta} = \frac{1-r^2}{1-2r\cos\theta + r^2}. \label{eq:poissonkernel} \end{eqnarray}\]
Also, the Fourier transform of the triangular window \((N-|n|)\) is:
\[\begin{eqnarray} \sum_{n=-(N-1)}^{N-1} (N-|n|) e^{-i n \theta} = \left|\sum_{m=0}^{N-1} e^{-i m \theta}\right|^2 = \left(\frac{\sin(N\theta/2)}{\sin(\theta/2)}\right)^2, \label{eq:fejerkernel} \end{eqnarray}\]
which is the squared modulus of the Dirichlet-type sum \(\sum_{m=0}^{N-1} e^{-im\theta}\); up to a normalization of \(\frac{1}{N}\), this is the Fejér kernel.
Therefore, by Parseval’s theorem, we get the harmonic analysis identity:
\[\begin{eqnarray} S_N = \frac{1}{2\pi} \int_{-\pi}^{\pi} \frac{1-r^2}{1-2r\cos\theta + r^2} \left(\frac{\sin(N\theta/2)}{\sin(\theta/2)}\right)^2 d\theta. \label{eq:fourierintegral} \end{eqnarray}\]
This is already a fancy closed form: it’s the Poisson kernel convolved with the Fejér kernel. Evaluating the integral (e.g., by residues, below) gives the explicit formula.
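Before doing the residues, we can confirm the identity Eq. \(\ref{eq:fourierintegral}\) numerically (a NumPy sketch; \(N\), \(r\), and the grid size are arbitrary choices):

```python
import numpy as np

# Midpoint evaluation of the Poisson-Fejer integral for S_N, compared
# with the brute-force double sum over the r**|i-j| kernel.
N, r = 20, np.exp(-0.25)
M = 200_000

h = 2 * np.pi / M
theta = -np.pi + (np.arange(M) + 0.5) * h    # midpoints, never exactly 0

poisson = (1 - r**2) / (1 - 2 * r * np.cos(theta) + r**2)
fejer = (np.sin(N * theta / 2) / np.sin(theta / 2)) ** 2
S_fourier = h * np.sum(poisson * fejer) / (2 * np.pi)

i = np.arange(N)
S_brute = np.sum(r ** np.abs(i[:, None] - i[None, :]))
print(S_fourier, S_brute)
```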
Take the integral in Eq. \(\ref{eq:fourierintegral}\) and set \(z = e^{i\theta}\), so \(d\theta = \frac{dz}{iz}\) and \(\cos\theta = \frac{1}{2}(z + z^{-1})\). The Poisson kernel becomes rational:
\[\begin{eqnarray} \frac{1-r^2}{1-2r\cos\theta + r^2} = \frac{1-r^2}{(1-rz)(1-r/z)}. \label{eq:poissonrational} \end{eqnarray}\]
Also,
\[\begin{eqnarray} \left|\sum_{m=0}^{N-1} z^{-m}\right|^2 = \left(\frac{1-z^{-N}}{1-z^{-1}}\right)\left(\frac{1-z^{N}}{1-z}\right) = \frac{(1-z^{N})(1-z^{-N})}{(1-z)(1-z^{-1})}. \label{eq:fejerrational} \end{eqnarray}\]
Putting these together (the factor of \(z\) freed by writing \(1-r/z = \frac{z-r}{z}\) cancels the \(\frac{1}{z}\) from \(d\theta = \frac{dz}{iz}\)):
\[\begin{eqnarray} S_N = \frac{1}{2\pi i} \oint_{|z|=1} \frac{1-r^2}{(1-rz)(z-r)} \frac{(1-z^{N})(1-z^{-N})}{(1-z)(1-z^{-1})} \, dz. \label{eq:contourintegral} \end{eqnarray}\]
Everything is rational; the poles inside \(|z|=1\) are at \(z=r\) (since \(r<1\)) and at \(z=0\) plus the removable/simplified behavior near \(z=1\) coming from the Fejér-kernel factor. You can do the residue calculation cleanly and it collapses to the same closed form.
The result (after simplifying) is:
\[\begin{eqnarray} S_N = \frac{N(1+r)}{1-r} - \frac{2r(1-r^{N})}{(1-r)^2}, \quad r = e^{-\Delta}. \label{eq:residueresult} \end{eqnarray}\]
Multiplying by the measure factor \(\frac{T^2}{N^2}\) from Eq. \(\ref{eq:measures}\) and using \(r = e^{-\Delta}\) with \(\Delta = \frac{T}{\alpha N}\) from Eq. \(\ref{eq:deltaalpha}\), this matches the result in Eq. \(\ref{eq:discreteresult}\).