\(S_n=\sum_{i=1}^n T_i\) is a sum of \(n\) random numbers. It is illustrative to consider the \(n=2\) case and figure out the distribution of the sum of two independent random numbers \(T_1\) and \(T_2\); independence lets us write their joint density as the product \(f_{T_1}(t_1)f_{T_2}(t_2)\). The cumulative distribution of \(S_{2}\equiv T_1+T_2\) is given by: \[\begin{eqnarray} F_{S_{2}}(t)&=&P(T_1+T_2< t)=\int_{t_1+t_2<t}f_{T_1}(t_1)f_{T_2}(t_2) dt_1 dt_2=\int_{-\infty}^\infty\left[\int_{-\infty}^{t-t_1}f_{T_2}(t_2)dt_2\right]f_{T_1}(t_1) dt_1\nonumber\\ &=&\int_{-\infty}^\infty F_{T_2}(t-t_1)f_{T_1}(t_1) dt_1. \tag{1} \end{eqnarray}\]
The probability density function is the derivative of Eq. (1):
\[\begin{eqnarray} f_{S_{2}}(t)&=&\frac{d}{dt} F_{S_{2}}(t)=\int_{-\infty}^\infty f_{T_2}(t-t_1)f_{T_1}(t_1) dt_1=\int_{0}^t f_{T_2}(t-t_1)f_{T_1}(t_1) dt_1,\tag{2} \end{eqnarray}\] where the limits of the integral are truncated to the range where the integrand is nonzero, since the waiting times are nonnegative and \(f_{T_i}(t)=0\) for \(t<0\). The integral in Eq.(2) is known as the convolution integral: \[\begin{eqnarray} f_{T_1}\circledast f_{T_2}\equiv \int_{-\infty}^\infty f_{T_2}(t-t_1)f_{T_1}(t_1) dt_1\tag{3}. \end{eqnarray}\] In the special case of the exponential distribution, \(f\) is parameterized by a single parameter \(\lambda\), which represents the failure rate, and it is given by \[\begin{eqnarray} f_{T}(t)=\lambda e^{-\lambda t},\,\, t > 0. \tag{4} \end{eqnarray}\]
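Before grinding through the integral, it is worth convincing ourselves numerically that Eq. (2) does what we claim. The sketch below is a minimal check, assuming numpy is available; the rate `lam`, the grid, and the sample size are arbitrary choices. It convolves the exponential density of Eq. (4) with itself on a discrete grid and compares the result against a histogram of simulated \(T_1+T_2\) sums:

```python
# Numerical check of Eq. (2): the discrete convolution of the exponential density
# with itself should match a histogram of simulated sums T_1 + T_2.
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5                      # arbitrary failure rate for illustration
dt = 0.01
t = np.arange(0, 30, dt)

f = lam * np.exp(-lam * t)               # exponential density, Eq. (4)
f_s2 = np.convolve(f, f)[: len(t)] * dt  # Riemann-sum version of the convolution in Eq. (2)

samples = rng.exponential(1 / lam, size=(100_000, 2)).sum(axis=1)
hist, edges = np.histogram(samples, bins=100, range=(0, 30), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# The two densities should agree up to Monte Carlo and discretization noise
print(np.max(np.abs(np.interp(centers, t, f_s2) - hist)))
```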
From Eq. (2) we get: \[\begin{eqnarray} f_{S_{2}}(t)&=&\int_{0}^t f_{T_2}(t-t_1)f_{T_1}(t_1) dt_1=\lambda^2 \int_{0}^t e^{-\lambda (t-t_1)}e^{-\lambda t_1} dt_1=\lambda^2e^{-\lambda t} \int_{0}^t dt_1=\lambda^2\, t \,e^{-\lambda t},\tag{5} \end{eqnarray}\] which is a \(\Gamma\) (Erlang) distribution with shape parameter \(2\) and rate \(\lambda\). The corresponding cumulative failure function is: \[\begin{eqnarray} F_{S_{2}}(t)&=&\int_0^t d\tau f_{S_{2}}(\tau)=\lambda^2\int_{0}^td\tau\, \tau \,e^{-\lambda \tau}=-\lambda^2\frac{d}{d\lambda}\left[\int_{0}^td\tau\,e^{-\lambda \tau}\right]=\lambda^2\frac{d}{d\lambda}\left[\frac{e^{-\lambda t}-1}{\lambda}\right]\nonumber\\ &=&1-e^{-\lambda t}(1+\lambda t). \tag{6} \end{eqnarray}\] This is pretty neat.
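Since Eqs. (5) and (6) are exactly the pdf and cdf of a \(\Gamma\) distribution with shape \(2\) and rate \(\lambda\), a library implementation gives an independent cross-check. A minimal sketch, assuming scipy is available; `lam` and the evaluation grid are arbitrary choices:

```python
# Cross-check of Eq. (5) and Eq. (6) against the Gamma distribution with shape 2, rate lam
import numpy as np
from scipy.stats import gamma

lam = 0.5
t = np.linspace(0.01, 20, 200)

pdf_eq5 = lam**2 * t * np.exp(-lam * t)          # Eq. (5)
cdf_eq6 = 1 - np.exp(-lam * t) * (1 + lam * t)   # Eq. (6)

print(np.allclose(pdf_eq5, gamma.pdf(t, a=2, scale=1 / lam)))  # True
print(np.allclose(cdf_eq6, gamma.cdf(t, a=2, scale=1 / lam)))  # True
```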
Can we move to the next level and add another \(T_i\), i.e., \(S_{3}=T_1+T_2+T_3=S_{2}+T_3\)? We just reiterate Eq. (2) with the probability density for \(S_{2}\) from Eq. (5): \[\begin{eqnarray} f_{S_{3}}(t)&=&\int_{0}^t f_{T_3}(t-t_1)f_{S_{2}}(t_1) dt_1 =\lambda^3 \int_{0}^t e^{-\lambda (t-t_1)} t_1 \,e^{-\lambda t_1} dt_1=\lambda^3\frac{t^2}{2}e^{-\lambda t}, \tag{7} \end{eqnarray}\] which was very easy! In fact, we can keep adding more terms. The exponentials kindly drop out of the \(t_1\) integral, so we are simply integrating powers of \(t_1\); iterating for \(S_{n}\equiv T_1+T_2+\cdots +T_n\), we get: \[\begin{eqnarray} f_{S_{n}}(t)&=&\lambda^n\frac{t^{n-1}}{(n-1)!}e^{-\lambda t}. \tag{8} \end{eqnarray}\]
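For a larger \(n\), Eq. (8) can be checked directly against a simulation. A minimal sketch, assuming numpy is available; \(n=5\), `lam = 0.5`, and the sample size are arbitrary choices:

```python
# Sanity check of Eq. (8): histogram of simulated sums of n exponential waiting
# times vs. the closed-form density lam^n t^(n-1) exp(-lam t) / (n-1)!
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
lam, n = 0.5, 5

samples = rng.exponential(1 / lam, size=(200_000, n)).sum(axis=1)
hist, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

pdf_eq8 = lam**n * centers**(n - 1) * np.exp(-lam * centers) / factorial(n - 1)
print(np.max(np.abs(hist - pdf_eq8)))  # small, limited by Monte Carlo noise
```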
It will be fun to redo this with some more advanced mathematical tools, such as the Laplace transform, which is defined as: \[\begin{eqnarray} \tilde f(s)\equiv\mathcal{L}\big[f(t)\big]=\int_0^\infty dt \, e^{-s \,t} f(t) \tag{9}. \end{eqnarray}\] There are a couple of nice features of the Laplace transform we can make use of. The first one is the mapping of convolution integrals in \(t\) space to multiplication in \(s\) space, which we derive below.
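Before deriving that property, we can get a feel for the definition in Eq. (9) by evaluating it numerically for the exponential density of Eq. (4) and comparing against the closed form \(\lambda/(s+\lambda)\) that we will obtain in Eq. (16). A minimal sketch, assuming scipy is available; `lam` and the chosen \(s\) values are arbitrary:

```python
# Numerical evaluation of the Laplace transform definition, Eq. (9), for lam*exp(-lam*t)
import numpy as np
from scipy.integrate import quad

lam = 0.5
f = lambda t: lam * np.exp(-lam * t)   # Eq. (4)

for s in (0.1, 1.0, 5.0):
    numeric, _ = quad(lambda t: np.exp(-s * t) * f(t), 0, np.inf)
    print(s, numeric, lam / (s + lam))  # the two columns agree
```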
To show this property, let's take the Laplace transform of Eq. (3): \[\begin{eqnarray} \mathcal{L}\big[f_{T_1}\circledast f_{T_2}\big] = \int_0^\infty dt \, e^{-s \,t} \int_{-\infty}^\infty f_{T_2}(t-t_1)f_{T_1}(t_1) dt_1 = \int_{-\infty}^\infty dt_1 \int_0^\infty dt \, e^{-s \,(t-t_1) } f_{T_2}(t-t_1)e^{-s \,t_1 } f_{T_1}(t_1). \tag{10} \end{eqnarray}\] Let's take a closer look at the middle integral: \[\begin{eqnarray} \int_0^\infty dt \, e^{-s \,(t-t_1) } f_{T_2}(t-t_1) = \int_{-t_1}^\infty d\tau \, e^{-s \tau} f_{T_2}(\tau) = \int_{0}^\infty d\tau \, e^{-s \tau} f_{T_2}(\tau)= \tilde f_{T_2}(s), \tag{11} \end{eqnarray}\] where we first substituted \(\tau=t-t_1\), and then raised the lower limit of the integral to \(0\) since \(f_{T_2}(\tau)=0\) for \(\tau<0\). Putting this back in, we have the nice property: \[\begin{eqnarray} \mathcal{L}\big[f_{T_1}\circledast f_{T_2}\big] = \tilde f_{T_1}(s) \tilde f_{T_2}(s). \tag{12} \end{eqnarray}\] How do we make use of this? The probability distribution of a sum of random numbers is the convolution of the individual distributions: \[\begin{eqnarray} f_{S_{n}}=\underbrace{f_{T_1}\circledast f_{T_2}\circledast \cdots \circledast f_{T_n}}_{n \text{ factors}}. \tag{13} \end{eqnarray}\] We can map this convolution to multiplications in \(s\) space: \[\begin{eqnarray} \tilde f_{S_{n}}(s)\equiv\mathcal{L}\big[f_{S_{n}}\big] =\underbrace{\tilde f_{T_1} \tilde f_{T_2} \cdots \tilde f_{T_n}}_{n \text{ factors}}=\displaystyle \prod_{j=1}^{n} \tilde f_{T_j}. \tag{14} \end{eqnarray}\] When the individual random numbers are independent and share the same distribution, we get: \[\begin{eqnarray} \tilde f_{S_{n}}(s) =\left(\tilde f_{T}\right)^n. \tag{15} \end{eqnarray}\] If the random numbers are exponentially distributed, as in Eq. (4), their Laplace transform is easy to compute: \[\begin{eqnarray} \tilde f(s)=\int_0^\infty dt \, e^{-s \,t}\lambda e^{-\lambda t} =\frac{\lambda}{s+\lambda} \tag{16}, \end{eqnarray}\] which means the Laplace transform of the sum is: \[\begin{eqnarray} \tilde f_{S_{n}}(s) =\left(\frac{\lambda}{s+\lambda} \right)^n. \tag{17} \end{eqnarray}\]
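Eq. (17) is easy to verify symbolically for a small \(n\), say \(n=2\), by transforming the density of Eq. (5) directly. A minimal sketch, assuming sympy is available:

```python
# Symbolic check of Eqs. (12) and (17) for n = 2: the Laplace transform of
# f_S2(t) = lam^2 t exp(-lam t) from Eq. (5) should equal (lam/(s + lam))^2.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam = sp.symbols('lambda', positive=True)

f_s2 = lam**2 * t * sp.exp(-lam * t)                  # Eq. (5)
lhs = sp.laplace_transform(f_s2, t, s, noconds=True)  # transform of the convolution
rhs = (lam / (s + lam))**2                            # Eq. (17) with n = 2

print(sp.simplify(lhs - rhs))  # 0
```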
We will have to inverse transform Eq. (17), which will require a trick. This brings us to the second nifty property of the Laplace transform. Consider transforming \(t f(t)\): \[\begin{eqnarray} \mathcal{L}\big[t f(t)\big]=\int_0^\infty dt \, t e^{-s \,t} f(t)=-\frac{d}{ds}\left[\int_0^\infty dt\, e^{-s \,t} f(t)\right] =-\frac{d}{ds}\left[\tilde f(s)\right] \tag{18}. \end{eqnarray}\] Therefore, we see that the Laplace transform maps the operation of multiplying by \(t\) to taking negative derivatives in \(s\) space: \[\begin{eqnarray} t \iff -\frac{d}{ds} \tag{19}. \end{eqnarray}\] We re-write Eq. (17) as: \[\begin{eqnarray} \tilde f_{S_{n}}(s) =\left(\frac{\lambda}{s+\lambda} \right)^n= \frac{\lambda^{n-1}}{(n-1)!}\left(-\frac{d}{ds} \right)^{n-1} \left(\frac{\lambda}{s+\lambda} \right). \tag{20} \end{eqnarray}\] Using the property in Eq. (19), each factor of \(-d/ds\) pulls down a factor of \(t\), and the inverse transform of \(\lambda/(s+\lambda)\) is just the exponential density of Eq. (4), so we can invert the transform: \[\begin{eqnarray} f_{S_{n}}(t)=\mathcal{L}^{-1}\big[\tilde f_{S_{n}}\big]=\frac{\lambda^{n-1}t^{n-1}}{(n-1)!}\,\lambda e^{-\lambda t}=\lambda^n\frac{t^{n-1}}{(n-1)!}e^{-\lambda t} \tag{21}, \end{eqnarray}\] which is what we got earlier in Eq. (8).
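The same symbolic machinery can invert Eq. (17) directly for a concrete \(n\) and confirm Eq. (21). A minimal sketch, assuming sympy is available; \(n=4\) is an arbitrary choice:

```python
# Inverting Eq. (17) symbolically for n = 4 reproduces the density of Eq. (8)/(21)
import sympy as sp

t, s = sp.symbols('t s', positive=True)
lam = sp.symbols('lambda', positive=True)
n = 4

f_sn = sp.inverse_laplace_transform((lam / (s + lam))**n, s, t)
expected = lam**n * t**(n - 1) * sp.exp(-lam * t) / sp.factorial(n - 1)

# sympy may attach a Heaviside(t) factor, which is 1 for t > 0
print(sp.simplify(f_sn.subs(sp.Heaviside(t), 1) - expected))  # 0
```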