Proving the convolution theorem for discrete signals

Sources: Here for the proof and this MathWorks page for the distinction between linear and circular convolutions.

Assume that the signals $x$ and $h$ have lengths $P$ and $Q$, respectively, and are non-periodic. Then you typically define a linear convolution as

$$ y_\mathrm{lin}(n) = \sum_{k=0}^{P+Q-2} h(k) \, x(n-k) \quad . $$

Some of the indices run out of bounds in this sum, so we'll tacitly assume that anything out of bounds is zero. Note that $n$ can run from $0$ to $P+Q-2$! In other words, a (non-trivial) linear convolution has a longer output than the input. Our definition nicely describes a tapped delay line. The extra samples are the input and output transients.
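To make the bookkeeping concrete, here is a minimal numpy sketch (with made-up example signals) that evaluates this sum with out-of-bounds terms treated as zero and compares it against `np.convolve`:

```python
import numpy as np

# Example signals (arbitrary values, just for illustration)
x = np.array([1.0, 2.0, 3.0])         # length P = 3
h = np.array([0.5, -1.0, 0.25, 2.0])  # length Q = 4
P, Q = len(x), len(h)

# Evaluate the sum directly, treating out-of-bounds samples as zero
y_lin = np.zeros(P + Q - 1)
for n in range(P + Q - 1):
    for k in range(P + Q - 1):        # same summation limits as in the text
        if k < Q and 0 <= n - k < P:  # anything out of bounds is zero
            y_lin[n] += h[k] * x[n - k]

assert np.allclose(y_lin, np.convolve(x, h))  # 'full' mode: P + Q - 1 samples
```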

For periodic signals, you typically switch to a separate definition of the convolution. Assume that $x$ and $h$ are periodic with the same period $N$. Then in the following definition of the circular convolution nothing can run out of bounds (everything is periodic!), and the output will have the same period as the input.

$$ y(n) = \sum_{k=0}^{N-1} h(k) \, x(n-k) $$
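As a minimal sketch with made-up example signals, the circular convolution just wraps the index $n-k$ around modulo $N$:

```python
import numpy as np

N = 8
rng = np.random.default_rng(0)
x = rng.standard_normal(N)  # one period of x
h = rng.standard_normal(N)  # one period of h

y_circ = np.zeros(N)
for n in range(N):
    for k in range(N):
        # Periodicity: the index n - k simply wraps around modulo N
        y_circ[n] += h[k] * x[(n - k) % N]
```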

Whether we are working with periodic or non-periodic signals, the discrete Fourier transform and its inverse are always defined as

$$ X(m) = \sum^{N-1}_{n = 0} x(n) \, e^{-j \, 2 \pi \, n \, m/N} $$

$$ x(n) = \frac{1}{N} \sum^{N-1}_{m = 0} X(m) \, e^{+j \, 2 \pi \, m \, n/N} $$
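As a sanity check, here is a direct (and deliberately slow) implementation of these two definitions with a random test signal; numpy's FFT uses the same sign and scaling convention, so the results should agree:

```python
import numpy as np

def dft(x):
    """X(m) = sum_n x(n) exp(-j 2 pi n m / N)"""
    N = len(x)
    n = np.arange(N)
    m = n.reshape(-1, 1)
    return np.sum(x * np.exp(-2j * np.pi * n * m / N), axis=1)

def idft(X):
    """x(n) = (1/N) sum_m X(m) exp(+j 2 pi m n / N)"""
    N = len(X)
    m = np.arange(N)
    n = m.reshape(-1, 1)
    return np.sum(X * np.exp(+2j * np.pi * m * n / N), axis=1) / N

x = np.random.default_rng(1).standard_normal(8)
assert np.allclose(dft(x), np.fft.fft(x))   # same convention as numpy's FFT
assert np.allclose(idft(dft(x)), x)         # the two transforms invert each other
```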

So, let's get to work and prove the convolution theorem for periodic signals. We'll start with the definition of the circular convolution

$$ y(n) = \sum_{k=0}^{N-1} h(k) \, x(n-k) \quad , $$

and replace $x$ and $h$ with the inverse Fourier transforms of $X$ and $H$. Note that we need the periodicity here - otherwise we'd have to ensure that out-of-bounds indices lead to 0! (We'll get to that case in a bit.)

$$ = \sum_{k=0}^{N-1} \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, e^{+j \, 2 \pi \, m \, k/N} \; \frac{1}{N} \sum^{N-1}_{l = 0} X(l) \, e^{+j \, 2 \pi \, l \, (n-k)/N} $$

Next, we rearrange the sums such that the sum over $k$ is the innermost sum.

$$ = \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, \frac{1}{N} \sum^{N-1}_{l = 0} X(l) \sum_{k=0}^{N-1} e^{+j \, 2 \pi \, m \, k/N} \, e^{+j \, 2 \pi \, l \, (n-k)/N} $$

We can pull out the part of the exponential that does not depend on $k$.

$$ = \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, \frac{1}{N} \sum^{N-1}_{l = 0} X(l) \sum_{k=0}^{N-1} e^{+j \, 2 \pi \, (l n - l k + m k)/N} $$

$$ = \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, \frac{1}{N} \sum^{N-1}_{l = 0} X(l) \, e^{+j \, 2 \pi \, l n /N} \sum_{k=0}^{N-1} e^{+j \, 2 \pi \, k (m-l)/N} $$

If $m = l$, then the inner sum evaluates to $N$. For all other cases, apply the formula for a finite geometric series, $\sum_{k=0}^{N-1} r^k = \frac{1 - r^N}{1 - r}$ with $r = e^{+j \, 2 \pi (m-l)/N}$: since $m - l$ is an integer, $r^N = e^{+j \, 2 \pi (m-l)} = 1$, so the numerator is always 0 and the sum vanishes. The inner sum is therefore $N \, \delta(m-l)$. Thus

$$ = \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, \frac{1}{N} \sum^{N-1}_{l = 0} X(l) \, e^{+j \, 2 \pi \, l n /N} \, N \, \delta(m-l) $$

$$ = \frac{1}{N} \sum^{N-1}_{m = 0} H(m) \, X(m) \, e^{+j \, 2 \pi \, m n /N} $$

And this is just the inverse Fourier transform of the function $Y(m) = H(m) X(m)$. In other words, we just proved that the circular convolution can be expressed as the inverse Fourier transform of the product of the individual spectra.
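A quick numerical check of this result, again with made-up example signals: the circular convolution computed by the defining sum should match `ifft(fft(h) * fft(x))` up to rounding error.

```python
import numpy as np

N = 16
rng = np.random.default_rng(2)
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Circular convolution by the definition from the beginning
y_circ = np.array([sum(h[k] * x[(n - k) % N] for k in range(N)) for n in range(N)])

# Convolution theorem: inverse DFT of the product of the spectra
y_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(x))

assert np.allclose(y_circ, y_fft.real)  # the imaginary part is only rounding noise
```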

With care, we can also apply this to the linear convolution. By appropriately zero-padding both signals such that they both have a length of $P + Q - 1$, we can treat them as periodic! The periodicity just doesn't matter, because the ends never meet (there are just enough zeros between them). The circular convolution of the padded signals is then identical to the linear convolution, so the convolution theorem carries over.
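A minimal sketch of this recipe (example signals assumed): zero-pad both signals to length $P + Q - 1$, multiply the spectra, and transform back; the result matches `np.convolve` up to rounding.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])         # length P
h = np.array([0.5, -1.0, 0.25, 2.0])  # length Q
L = len(x) + len(h) - 1               # pad both signals to P + Q - 1

# np.fft.fft(x, L) zero-pads x to length L before transforming
y_fft = np.fft.ifft(np.fft.fft(x, L) * np.fft.fft(h, L)).real

assert np.allclose(y_fft, np.convolve(x, h))  # identical to the linear convolution
```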

But there is an important side effect: we cannot apply a pure sine frequency at the input anymore. For any non-trivial filter, the signal has to be zero-padded at the end! So you never get a pure single-bin spectrum out - the input and output transients are usually not sinusoidal.
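To see the effect, here is a small illustrative sketch (example lengths assumed): a sine with an integer number of cycles in $N$ samples occupies exactly two DFT bins, but zero-padding it to length $N + Q - 1$ smears its spectrum over many bins.

```python
import numpy as np

N, Q = 64, 9                          # example signal and filter lengths
n = np.arange(N)
x = np.sin(2 * np.pi * 5 * n / N)     # exactly 5 cycles in N samples

pure   = np.abs(np.fft.fft(x))            # only bins 5 and N-5 are non-zero
padded = np.abs(np.fft.fft(x, N + Q - 1)) # zero-padded: energy leaks into many bins

print(np.count_nonzero(pure > 1e-9), np.count_nonzero(padded > 1e-9))
# -> 2 vs. (nearly) all bins
```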