Changes in technology leave words like fossils washed up on the shore. How many young people now know why you press buttons on a keypad to “dial” a telephone number?
When Aiden Bruen was here this week, we were wondering why the capacity of a channel for digital information is called “bandwidth”. I have a theory about this, based on an observation I made as a student, that may be somewhere near the truth.
Consider AM radio. Each station broadcasts on a frequency, say a; that is (neglecting a factor 2π), it has a carrier wave of the form sin at. This is then modulated by the signal to be sent. For a single pure tone of frequency b (much smaller than a), this would have the form sin bt. (By Fourier analysis, we can think of any signal as a linear combination of terms of this form; the process is linear, so we may consider just one.) Because
sin at·sin bt = (cos(a−b)t − cos(a+b)t)/2,
the result is the average of two signals with frequencies a±b.
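The identity, and the appearance of the two shifted frequencies, is easy to check numerically. A small Python sketch (the carrier and tone values here are arbitrary illustrative choices, not anything from the post):

```python
import math

# Check the product-to-sum identity
#   sin(at) * sin(bt) = (cos((a-b)t) - cos((a+b)t)) / 2
# at a handful of sample times, for an illustrative carrier frequency a
# and tone frequency b with b much smaller than a.
a, b = 1000.0, 50.0

for t in [0.0, 0.001, 0.01, 0.1, 1.0]:
    lhs = math.sin(a * t) * math.sin(b * t)
    rhs = (math.cos((a - b) * t) - math.cos((a + b) * t)) / 2
    assert abs(lhs - rhs) < 1e-12, (t, lhs, rhs)
```

So the modulated signal really is built from the two frequencies a−b and a+b, which is why the whole band between them must be kept clear.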
So, to transmit a signal including frequencies up to b without interference, a band from a−b to a+b in the frequency spectrum (i.e. of width 2b) must be reserved for the station’s exclusive use.
Thus the term “bandwidth” applies naturally in this context.
Now suppose that we wish to transmit the same information digitally, by sampling the signal and transmitting the result (rather than, say, calculating the Fourier transform and sending that). The sampling rate should be at least twice the largest frequency to be captured. (This is the content of the Nyquist–Shannon sampling theorem; it is why the sampling rate for compact discs was originally chosen to be twice the upper limit of audibility for the human ear.) So, for given accuracy of sampling, the rate at which data must be transmitted is proportional to the largest frequency to be captured, just like the AM bandwidth.
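What goes wrong below that rate is aliasing: a tone above half the sampling rate produces exactly the same samples as a lower-frequency tone, so the two cannot be told apart. A short Python sketch (the frequencies chosen are arbitrary illustrations):

```python
import math

# A tone at frequency f, sampled at rate fs < 2f, gives exactly the same
# samples (up to a sign flip, i.e. a phase inversion) as a tone at the
# aliased frequency fs - f.  After sampling, the two are indistinguishable,
# which is why the rate must be at least twice the largest frequency present.
fs = 1000.0          # sampling rate (illustrative value)
f = 600.0            # tone above fs/2, so it aliases
f_alias = fs - f     # the lower frequency it masquerades as: 400.0

for n in range(20):
    t = n / fs
    s_high = math.sin(2 * math.pi * f * t)
    s_low = math.sin(2 * math.pi * f_alias * t)
    # sin(2*pi*f*n/fs) = sin(2*pi*n - 2*pi*f_alias*n/fs) = -sin(2*pi*f_alias*n/fs)
    assert abs(s_high + s_low) < 1e-9
```

The sign flip falls out of the identity in the comment; the essential point is that the sampled values carry no trace of which of the two frequencies was actually present.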
I suppose it was then natural to use the same word.