Understanding Digital Signal Processing Chapter 5

Filters change the spectrum of signals. There are two broad classes: finite impulse response (FIR) and infinite impulse response (IIR) filters.

FIR in a nutshell: an input with a finite number of non-zero values leads to an output with a finite number of non-zero values. Example: calculating a running average of the last 5 values is a low-pass filter. In DSP, this is called a 5-tap tapped-delay-line FIR filter... The mental model is a line of 5 unit delays, each with a T-junction (tap) in front of it that feeds a summing input (via a 0.2 multiplier). A more general example:

$$ y(n) = \sum_{k=0}^{P+Q-2} h(k)\, x(n-k) \quad . $$

In the formula above, $P$ is the number of $h$ coefficients, $Q$ is the number of $x$ coefficients, and $n$ runs from 0 to $P+Q-2$, i.e. the output has $L = P+Q-1$ samples. Usually, the first $P$ outputs are discarded, as this is the transient response while the filter buckets are being filled up. If any index runs out of bounds, the corresponding coefficient is assumed to equal 0. The equation is often rewritten in shorthand, using $*$ as the convolution operator:

$$ y(n) = h(k) * x(n) $$

You can characterize a filter with its impulse response. This is the output (time series!) for an input comprising a single 1 preceded and followed by zeros. For FIR filters this directly gives you the filter coefficients h(k)h(k).
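To make this concrete, here is a minimal NumPy sketch of the 5-tap running average and its impulse response (the input signal and the lengths are arbitrary choices of mine, not from the book):

```python
import numpy as np

h = np.full(5, 0.2)                    # 5 taps, each weighted 1/5

# Filtering = convolving the input with the coefficients.
x = np.random.randn(20)                # arbitrary input signal
y = np.convolve(x, h)                  # P + Q - 1 = 24 output samples

# Impulse response: a single 1 preceded and followed by zeros.
impulse = np.zeros(16)
impulse[0] = 1.0
print(np.convolve(impulse, h)[:5])     # -> [0.2 0.2 0.2 0.2 0.2] = h(k)
```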

Convolution in the time domain becomes multiplication in the frequency domain (proof omitted in the book - see errata post for my idea of the proof). Don't underestimate the power of this statement: if you know the spectra of both the signal and your filter (at least one of them appropriately 0-padded in the time domain), then the output spectrum is just the element-wise multiplication of the input spectra:

$$ y(n) = h(k) * x(n) \longleftrightarrow Y(m) = H(m) \cdot X(m) \quad . $$
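A quick numerical check of this (my own sketch with arbitrary lengths; note the zero-padding to $L = P+Q-1$ so the FFT product reproduces the linear convolution):

```python
import numpy as np

x = np.random.randn(32)                # signal, Q = 32 samples
h = np.full(5, 0.2)                    # filter, P = 5 taps
L = len(x) + len(h) - 1                # full output length, 36

# np.fft.fft(a, L) zero-pads a to length L before transforming.
Y = np.fft.fft(h, L) * np.fft.fft(x, L)      # element-wise product
y_freq = np.fft.ifft(Y).real
y_time = np.convolve(x, h)                   # direct time-domain convolution

print(np.allclose(y_freq, y_time))           # True
```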

In general, filter effects include a transient response (at the beginning and end, where there are not enough values for the filter coefficients), a phase shift, and an amplitude change. During the transient, the output may not be given by the Fourier transform (I think; see the question below).

Question: Are the output/input transients described by the convolution theorem? The book does not give an answer to this, but if you look at the proof of the convolution theorem, you can see an important side effect: for the convolution theorem to work, you have to define your signal and your filter coefficients to be periodic. This is, of course, not what a tapped-delay-line filter does - it cannot know about the last few samples as the first one comes in. Therefore, in order to describe this type of filter using convolutions and the convolution theorem, you have to appropriately zero-pad both the signal and the filter coefficients, such that in a periodic world they can never meet. This means you cannot have pure sine waves as inputs (so no "pure" input spectrum), the product in frequency space won't give you a nice peak, and you can see how the input and output transients might be properly described by the convolution theorem.
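A sketch of the caveat: without the zero-padding, the FFT product gives a circular convolution, and the transient wraps around from the end of the block into the first $P-1$ output samples (again my own toy numbers):

```python
import numpy as np

x = np.random.randn(32)
h = np.full(5, 0.2)

# No padding: the product of 32-point FFTs is a *circular* convolution.
circ = np.fft.ifft(np.fft.fft(h, 32) * np.fft.fft(x, 32)).real
lin = np.convolve(x, h)                   # linear convolution, 36 samples

print(np.allclose(circ[4:], lin[4:32]))   # True: steady state agrees
print(np.allclose(circ[:4], lin[:4]))     # False: end transient wrapped in
```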

For symmetrical filter coefficients, the phase shift can be expressed as a constant delay in time.

The author keeps saying that convolution works with time-reversed samples, but I don't think this is generally true. If standard order is $x(0)$ = oldest sample = first measured sample, then in his pictures he indeed reverses the order. In his formulas, though, the order is unchanged. Apparently, some authors do this differently (?).

You can design FIR filters with many different methods, usually starting out with a desired response in the frequency domain, then translating that to filter coefficients.

The Window Design Method can mean one of two things: 1) You choose your $H(m)$ coefficients to form a "frequency window". You can calculate the filter coefficients $h(k)$ algebraically, which the author describes by using negative frequencies (and a rectangular window centered around 0). Or you can calculate them using an IFFT "on the fly", but then you have to use standard (all positive) frequencies and a "half-window" at both ends. (For whatever reason, the author proceeds to throw away one output coefficient and to shift the others around such that the filter coefficients are highest in the center and symmetrical.) 2) You can start out with the sinc-like coefficients $h(k)$ of an ideal window and multiply them, in the time domain (!), with window coefficients $w(k)$. The latter usually have a smooth Fourier transform, which, due to the convolution in the frequency domain, makes the total frequency response smoother.

You can leave out some of the filter coefficients to make the filter simpler, but then you get a non-ideal frequency response. In any case, you have to deal with ripple. Even when using all $N-1$ filter coefficients, you still have ripple for non-integer frequencies ("Gibbs' dilemma"). The ripple is a direct consequence of the step-like frequency window.

The window coefficients of popular functions are often given in the time domain (!) as $w(k)$. You then, in the time domain, multiply the window coefficients with the sinc-like coefficients $h(k)$ of the "ideal" rectangle filter to get your final filter coefficients. In the frequency domain, this corresponds to a convolution of the sinc's rectangle with the window's own smooth transform.

You can use smoother windows to reduce the ripple - a popular example being the Blackman window function (two cosines cleverly added to get smooth transitions). The cost is a less sharp filter.
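A sketch of method 2) with a Blackman window (cutoff and tap count are arbitrary choices of mine):

```python
import numpy as np

N = 31                                  # number of taps
fc = 0.1                                # cutoff as a fraction of fs
k = np.arange(N) - (N - 1) / 2          # indices symmetric around the center

h_rect = 2 * fc * np.sinc(2 * fc * k)   # ideal sinc, rectangular window
h_blk = h_rect * np.blackman(N)         # Blackman-smoothed coefficients

# Stop-band leakage, sampled on a zero-padded DFT grid:
H_rect = np.abs(np.fft.rfft(h_rect, 512))
H_blk = np.abs(np.fft.rfft(h_blk, 512))
print(H_rect[150:].max(), H_blk[150:].max())  # Blackman is far lower
```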

The Chebyshev and the Kaiser window functions are other important alternatives. They have the added benefit of an adjustable parameter that directly gives you the minimum attenuation of the stop band. The higher the attenuation, the broader the filter.

A bandpass filter is just a low-pass filter shifted up in frequency. You can shift it up by multiplying it (in the time domain) with a sinusoid of appropriate frequency. (Think about the convolution of the filter's spectrum with the sinusoid's delta peak at some frequency - this just shifts the filter spectrum.) A popular choice is a bandpass filter around $f_\mathrm{s}/4$, because then the sine wave has 4 points per full cycle, and these points are $(0, 1, 0, -1)$. This makes the multiplication trivial. Note that the output is generally a factor of two lower (you're multiplying with lots of zeros!), but appears to be wider (?). I am not sure how you see the reduction of two in the convolution, though - presumably because the sinusoid's spectrum is really a pair of delta peaks of height 1/2 at $\pm f_\mathrm{s}/4$, each of which receives its own half-amplitude copy of the filter spectrum.
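A sketch of the shift, reusing the windowed-sinc low-pass from above (cutoff again my arbitrary choice):

```python
import numpy as np

N = 31
fc = 0.1
k = np.arange(N) - (N - 1) / 2
h_lp = 2 * fc * np.sinc(2 * fc * k) * np.blackman(N)

# Multiply by sin(2π (fs/4) n ts) = sin(πn/2) = (0, 1, 0, -1, ...).
h_bp = h_lp * np.sin(np.pi * np.arange(N) / 2)

H_lp = np.abs(np.fft.rfft(h_lp, 512))
H_bp = np.abs(np.fft.rfft(h_bp, 512))
print(H_lp.argmax(), H_bp.argmax())     # peak moves from bin 0 to ~128 = fs/4
print(H_lp.max(), H_bp.max())           # ~1.0 vs ~0.5: the factor of two
```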

The same technique can be used to design a high-pass filter. You just multiply by a sinusoid at $f_\mathrm{s}/2$, which is the sequence $(1, -1, 1, -1, \dots)$.
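The same sketch for the high-pass case (setup lines repeated so the snippet runs on its own):

```python
import numpy as np

N = 31
fc = 0.1
k = np.arange(N) - (N - 1) / 2
h_lp = 2 * fc * np.sinc(2 * fc * k) * np.blackman(N)

h_hp = h_lp * (-1.0) ** np.arange(N)    # (1, -1, 1, -1, ...) = cos(πn)
print(np.abs(np.fft.rfft(h_hp, 512)).argmax())   # bin 256 = fs/2
```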

One automated way of calculating a filter is the Parks-McClellan exchange method. You define a few points in the pass band, and it tries to give you a result with minimal ripple. The result - claimed to be optimal - is a Chebyshev-type filter that has constant ripple throughout the stop band (all the spurious maxima have the same height, which means that the highest maximum is as low as it gets). The math is complicated, but many filter design tools exist for it.
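SciPy ships a Parks-McClellan implementation as scipy.signal.remez; a sketch with band edges I picked arbitrarily:

```python
import numpy as np
from scipy.signal import remez, freqz

# Low-pass: pass band up to 0.10·fs, stop band from 0.15·fs.
h = remez(numtaps=45,
          bands=[0.0, 0.10, 0.15, 0.5],   # band edges as fractions of fs
          desired=[1.0, 0.0])             # desired gain per band

w, H = freqz(h, worN=512)
# All stop-band maxima sit at (roughly) this one level - equiripple:
print(20 * np.log10(np.abs(H[200:]).max()))
```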

A supposedly very useful, special filter is the half-band FIR filter, which is a (low-pass) filter that has a pass-band gain of 1 and is point-symmetric (?) around $(f_\mathrm{s}/4, 0.5)$. (It has to pass through that point.) In this special case, almost half of the filter coefficients are 0, which makes it very fast to calculate. These filters are supposedly used all over the place.
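A sketch of the coefficient pattern, using a windowed sinc with cutoff $f_\mathrm{s}/4$ (tap count and window are my arbitrary choices):

```python
import numpy as np

N = 23                                       # odd tap count
k = np.arange(N) - (N - 1) // 2
h = 0.5 * np.sinc(0.5 * k) * np.hamming(N)   # windowed sinc, cutoff fs/4

print(np.round(h, 4))
# Every even offset from the center (except 0 itself) lands on a zero of
# the sinc, so nearly half the multiplications can be skipped.
```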

FIR filters always (?) have a linear phase response (he later says only if the coefficients are symmetric...). Intuitively, this makes sense if you think about the tapped-delay-line filter, or about the convolution sum "holding inputs back" by a fixed number of samples $\rightarrow$ phase varying linearly with frequency. The result is that if you look at the $H(m)$ on a phasor diagram, they always rotate by a fixed amount when you go from $m$ to $m+1$. But beware: the amplitude can also go negative, which gives you 180° phase jumps. When you calculate the angles, you also get 360° phase jumps because the arctan routines typically don't know how to unwrap the phase (of course, you could teach them...).
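A sketch of the phase behaviour, with np.unwrap taking care of the 360° jumps:

```python
import numpy as np

h = np.full(5, 0.2)                      # symmetric 5-tap averager
H = np.fft.fft(h, 64)
phase = np.unwrap(np.angle(H))           # removes the 360° arctan jumps

print(np.diff(phase)[:8])                # constant step per bin (main lobe)
```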

The linear phase response means that the phase change per frequency bin is constant. In the general case (even when not constant?) this is called the group delay, and is given by $G = -\,d\phi/df$. Assuming it's constant, you can quickly show that a constant group delay leads to a fixed time delay of every frequency component, which means there is no (relative) phase distortion between different frequencies. A useful property for any modulated signals! It's also called "envelope delay". For symmetric coefficients and an $S$-tap FIR filter with $D = S-1$ as the number of unit-delay elements, you can handwavingly find that the group delay (in seconds) is given by

$$ G = \frac{D\, t_\mathrm{s}}{2} \quad . $$
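A quick check of this against scipy.signal.group_delay, which works in units of samples, i.e. $G/t_\mathrm{s} = D/2$ (the filter is again my arbitrary windowed sinc):

```python
import numpy as np
from scipy.signal import group_delay

S = 31                                   # taps, so D = S - 1 = 30 unit delays
k = np.arange(S) - (S - 1) / 2
h = 2 * 0.1 * np.sinc(2 * 0.1 * k) * np.blackman(S)

w, gd = group_delay((h, [1.0]))          # FIR: denominator is just 1
print(gd[:5])                            # -> 15.0 samples = D/2 everywhere
```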

An $n$-th order FIR filter is just an $(n+1)$-tap tapped-delay-line filter.

Passband gain is defined as the mean value inside the passband around which the ripples oscillate. For small ripples, the passband gain is equal to the DC gain, which is equal to the sum of all the FIR coefficients.
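A one-liner check (the 5-tap averager again):

```python
import numpy as np

h = np.full(5, 0.2)
print(np.fft.fft(h)[0].real, h.sum())    # DC bin and coefficient sum: both 1.0
```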

Estimating the number of FIR filter taps is somewhat guesswork, but you can guide yourself by the following formula, which gives you a passband ripple of 0.1 dB for any attenuation and (fractional, relative to $f_\mathrm{s}$) corner frequencies $f_\mathrm{stop}$, $f_\mathrm{pass}$:

$$ N_\mathrm{FIR} = \frac{\mathrm{attenuation}}{22\,(f_\mathrm{stop} - f_\mathrm{pass})} $$
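A worked example with numbers I made up (60 dB attenuation, transition band from $0.10\,f_\mathrm{s}$ to $0.15\,f_\mathrm{s}$):

```python
attenuation_db = 60.0                    # desired stop-band attenuation
f_pass, f_stop = 0.10, 0.15              # corner frequencies, fractions of fs

n_fir = attenuation_db / (22 * (f_stop - f_pass))
print(round(n_fir))                      # 60 / 1.1 -> about 55 taps
```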

Analyzing FIR filters can be done either by stating the DTFT as a "polynomial" (you just expand the Fourier sum with a continuous $m/N \rightarrow \omega$ inside, which looks like a polynomial of exp's), or by calculating the DFT (which gives you a sampled version of the DTFT). In the latter case, you probably want to 0-pad the original data to get a higher resolution.
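A sketch of the 0-padding trick - padding to a multiple of the original length so the coarse DFT bins land exactly on the fine grid:

```python
import numpy as np

h = np.full(5, 0.2)
H_coarse = np.fft.rfft(h)                # 3 samples of the DTFT (N = 5)
H_fine = np.fft.rfft(h, 500)             # 251 samples of the same DTFT

print(np.allclose(H_coarse, H_fine[::100]))   # True: same curve, denser grid
```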

Question: I'm curious what happens if you ignore the symmetry and leave out the far-end coefficients in the IFFT calculation of the windowed low-pass filter.

Question: How exactly does he go to negative frequencies and back? Again, I think there is a phase factor involved, and I think he sneakily redefines the DFT as he goes along!