As the author mentions, convolution for signal processing and convolution for probability are essentially the same thing in two different contexts. It is interesting that the same concept is also behind the multiplication of polynomials. In the discrete case we have a parallel between the vectors representing ...
the (discretized) probability density <-> the (discrete) signal intensity <-> the coefficients of the polynomial
... in the respective context.
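A minimal sketch of that parallel (my own illustration with numpy's np.convolve, not something from the article): the very same call multiplies polynomial coefficients and adds independent discrete random variables.

    import numpy as np

    # Coefficients of p(x) = 1 + 2x + 3x^2 and q(x) = 4 + 5x, lowest degree first.
    p = np.array([1, 2, 3])
    q = np.array([4, 5])

    # Polynomial product: (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3.
    print(np.convolve(p, q))          # [ 4 13 22 15]

    # The same operation on probability vectors: the PMF of one fair die,
    # convolved with itself, is the PMF of the sum of two dice.
    die = np.full(6, 1 / 6)           # P(X = 1), ..., P(X = 6)
    two_dice = np.convolve(die, die)  # P(X1 + X2 = 2), ..., P(X1 + X2 = 12)
    print(two_dice)                   # peaks at the middle entry (sum = 7)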
If we look at the discrete case, since the operation of convolution is computationally expensive (O(n^2) for a straightforward implementation), people often rely on the Fast Fourier Transform (FFT), which brings it down to O(n log(n)). An excellent and very enjoyable video on the FFT applied to polynomial multiplication is https://youtu.be/h7apO7q16V0?t=1.
That's the whole magic of the FFT: it can be done in O(n log(n)). And it is a practical algorithm, unlike many of the fascinating algorithms for multiplying matrices. GMP switches to an FFT-based method for multiplying very large numbers, for example, since numbers laid out in digits (whether binary or decimal) are polynomials if you look at them from the right angle. For processing audio and video it very quickly becomes economical to do the convolution in the frequency domain.
EDIT: never mind, I see now that you were pointing out a typo in the parent comment; the straightforward implementation is indeed O(n^2).
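To make the frequency-domain route concrete, here is a small sketch (using numpy's fft, my choice of tool, purely as an illustration): zero-pad both signals to the full output length, multiply their spectra pointwise, and transform back.

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.random(4096)
    b = rng.random(4096)

    # Direct O(n^2) convolution.
    direct = np.convolve(a, b)

    # FFT route, O(n log n): zero-pad to the full output length so that the
    # circular convolution implied by the DFT equals the linear convolution.
    n = len(a) + len(b) - 1
    via_fft = np.fft.irfft(np.fft.rfft(a, n) * np.fft.rfft(b, n), n)

    print(np.allclose(direct, via_fft))  # True, up to floating-point error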
A minor thought I found inspiring many years ago: the Central Limit Theorem may be considered a statement about dynamic processes in the functional space of distributions.
If you consider the operator of convolution (followed by rescaling to unit variance), the normal distribution is the only attractor (under many "natural" choices of metric).
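A rough numerical illustration of that attractor (a sketch I am adding, using numpy and the local form of the CLT): convolve a skewed PMF with itself many times and the standardized result hugs the Gaussian curve.

    import numpy as np

    # Start from a deliberately skewed PMF on {0, 1, 2, 3}.
    pmf = np.array([0.5, 0.3, 0.15, 0.05])
    support = np.arange(4)
    mean = np.dot(support, pmf)
    var = np.dot(support ** 2, pmf) - mean ** 2

    # n-fold convolution = distribution of the sum of n independent copies.
    n = 200
    summed = np.array([1.0])
    for _ in range(n):
        summed = np.convolve(summed, pmf)

    # Standardize the support and compare with the Gaussian density.
    k = np.arange(len(summed))
    z = (k - n * mean) / np.sqrt(n * var)
    gauss = np.exp(-z ** 2 / 2) / np.sqrt(2 * np.pi * n * var)
    print(np.abs(summed - gauss).max())  # small compared to the peak of ~0.03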
It's actually not the only attractor -- it's just the only attractor once you restrict to finite variance. If you allow infinite variance, there can be others: https://en.wikipedia.org/wiki/Stable_distribution
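For a quick feel of the infinite-variance case (my own sketch, not from the linked article): Cauchy variables are stable, so their normalized sums never drift toward a Gaussian; the sample mean of a thousand of them is distributed exactly like a single draw.

    import numpy as np

    rng = np.random.default_rng(0)

    # 10,000 experiments, each averaging 1,000 standard Cauchy variables.
    x = rng.standard_cauchy((10_000, 1_000))
    means = x.mean(axis=1)

    # Under a Gaussian limit the spread of the mean would shrink like 1/sqrt(n);
    # for Cauchy it does not shrink at all: the mean is again standard Cauchy.
    print(np.median(np.abs(means)))    # ~ 1.0
    print(np.median(np.abs(x[:, 0])))  # ~ 1.0 as well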
Makes sense: if you have a random process with IID variables, each variable has the same distribution d, so you can view the dynamics of the partial-sum process (∑Xi) as advancing by convolving with d.
In the generating-function picture, the distribution is carried as the coefficients of a polynomial or of a formal power series, and convolution is polynomial multiplication. One example of your step is multiplying by (x+1)/2 (a Bernoulli trial), and repeating it approaches the Gaussian through rows of normalized binomial coefficients.
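Concretely (a small sketch I am adding, with numpy and math.comb): multiplying by (x+1)/2 n times is convolving with [1/2, 1/2] n times, and the coefficients are exactly the binomial row divided by 2^n.

    import numpy as np
    from math import comb

    # Multiplying by (x + 1)/2 repeatedly <=> convolving with [1/2, 1/2] repeatedly.
    n = 10
    coeffs = np.array([1.0])
    for _ in range(n):
        coeffs = np.convolve(coeffs, [0.5, 0.5])

    # The result is the normalized binomial row C(n, k) / 2^n, i.e. the PMF of a
    # Binomial(n, 1/2), which de Moivre-Laplace says approaches a Gaussian.
    print(np.allclose(coeffs, [comb(n, k) / 2 ** n for k in range(n + 1)]))  # True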
Another related limit, described as a functional central limit theorem, is given by Donsker's theorem [1], which gives the passage from the discrete situation (a random walk) to the continuous one (Brownian motion).
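A quick simulation of that scaling limit (my own sketch with numpy, not anything from [1]): shrink a long ±1 random walk by sqrt(n) in space and n in time, and its marginals match those of Brownian motion on [0, 1].

    import numpy as np

    rng = np.random.default_rng(0)
    n, paths = 10_000, 1_000
    steps = rng.choice([-1.0, 1.0], size=(paths, n))

    # Rescaled walks: row i is the path t -> S_{floor(nt)} / sqrt(n) on [0, 1].
    walks = np.cumsum(steps, axis=1) / np.sqrt(n)

    # Brownian-motion sanity checks: Var(W_1) = 1, Var(W_{1/2}) = 1/2.
    print(walks[:, -1].var())          # ~ 1.0
    print(walks[:, n // 2 - 1].var())  # ~ 0.5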
"Central Limit Theorem" says that "adding (bounded variance) stuff together makes it converge to a Gaussian" (roughly). On the other hand, when you are adding random variable together, you are actually convolving their densities. Thus, what the CLT says is that a gaussian is some sort of attractor/fixed point of the "dynamic process" of convolving (finite energy/bounded variance) distributions.
There are (several) central limit theorems for dependent variables, but you often have to assume other things as well (e.g., stationarity, bounded third moment, and/or limited-range correlations) for it to work.
For me, the way to understand convolutions was to sample rand(0, 1) a bunch of times, bucket the samples, then plot the number of samples in each bucket. You get a more-or-less flat graph. Now if you add or multiply two such variables, you get different shapes. Once you've spent enough effort trying to understand why, you derive convolutions.
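Something like this, as a sketch (numpy's histogram standing in for the bucketing):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(1_000_000)  # rand(0, 1)
    y = rng.random(1_000_000)

    # Bucket and count: one uniform is flat, the sum is triangular on [0, 2],
    # and the product piles up near 0 (density -ln(z) on (0, 1)).
    for label, samples in [("x", x), ("x + y", x + y), ("x * y", x * y)]:
        counts, _ = np.histogram(samples, bins=20)
        print(label, counts)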
Small correction: this is the correlation operator. Deep-learning-wise, correlation and convolution are equivalent, but in probability mixing them up will lead to incorrect results.
To elaborate, though the page does mention the distinction:
(Cross-)correlation is convolution with a reversed kernel (second signal), or vice-versa, of course. (For discrete signals, reversing the kernel is just swapping the indices around, which makes absolutely no difference for deep learning.) Convolution is more "natural" because it's commutative (abelian), whereas swapping signal and kernel in cross-correlation time-reverses the result.
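Spelled out with numpy's convolve/correlate (my illustration; 1-D real signals, so conjugation doesn't enter):

    import numpy as np

    signal = np.array([1.0, 2.0, 3.0, 4.0])
    kernel = np.array([1.0, 0.0, -1.0])

    conv = np.convolve(signal, kernel)           # flips the kernel
    corr = np.correlate(signal, kernel, "full")  # does not

    # Correlation is convolution with the reversed kernel (and vice versa).
    print(np.allclose(conv, np.correlate(signal, kernel[::-1], "full")))  # True

    # Convolution commutes; swapping the inputs of correlation time-reverses it.
    print(np.allclose(np.convolve(kernel, signal), conv))                 # True
    print(np.allclose(np.correlate(kernel, signal, "full"), corr[::-1]))  # True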