The Hamming code and its generalisations are so versatile that there is much more that could be said. With some difficulty, I’ve restricted myself to just a few topics.
The trick for catching a liar in the preceding post is very easy to program (I will say a bit more about implementation below). Back in the good old days when my Psion held my diary and much more, I wrote a little program which posed the questions to the player. It was coupled with a Shakespearian insult generator which I got from Lewis Nowitz. (This consisted of three lists of about 50 words each: the first two lists were of adjectives, and the third of nouns. If the player told a lie, then before revealing his or her choices, the computer denounced him or her as “You X Y Z”.) Sadly, the Psion no longer works, and I have lost the insult generator. But this little program was very attractive to children; sad to say, it seemed to have the effect of encouraging lying!
This particular Hamming code (or, strictly, the mnemonic I gave for using it) has a connection with the game of Nim. In this game, there are a number of heaps of matches; players take it in turn to pick up any positive number of matches from one heap. The player who takes the last match loses. The winning positions with at most three heaps, each containing at most seven matches, are of two types:
- two heaps of the same size; and
- three heaps whose sizes are the numbers on a line of the Fano plane given in the preceding post.
Thus, memorising them saves some mental effort if you are playing Nim. (One caveat: under the misère rule, positions in which every heap has size 1 behave differently — for example, two heaps of one match each is a position you should avoid leaving, while three such heaps is a winning position.)
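These positions are easy to check by brute force: for three unequal heaps of at most seven matches, the winning positions to leave are exactly the triples of heap sizes with zero bitwise XOR, and these are the seven Fano lines. A small sketch (my own, not from the post):

```python
from itertools import combinations

# Triples {a, b, c} with 1 <= a < b < c <= 7 whose bitwise XOR is zero.
# These are exactly the seven lines of the Fano plane, labelled 1..7.
lines = [t for t in combinations(range(1, 8), 3)
         if t[0] ^ t[1] ^ t[2] == 0]

print(lines)
# [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]
# Two equal heaps satisfy the same condition, since a ^ a == 0.
```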
Now I turn to Hamming codes in greater generality.
We start with the binary case, where F is the two-element field. Then a code C is a k-dimensional subspace of the n-dimensional vector space F^n, hopefully with the property that any two codewords are quite far apart.
A code can be specified in either of two ways:
- C is the image of an injective map E: F^k → F^n, the encoding map. Now E takes the k bits of the message which is to be sent, and encodes them as the n bits of a word of C. So the rate at which information is transmitted is only k/n of what it would be without encoding; this is the price we pay for error correction.
- C is the kernel of a surjective map D: F^n → F^(n−k). This is not quite the decoding map, but it tells us whether the word has been transmitted without error: this is the case if the received word is mapped to zero by D.
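As a concrete illustration (my own sketch, using the [7,4] binary Hamming code with bit positions numbered 1 to 7 — the positions and names here are my choices, not notation from the post): E places the four message bits at positions 3, 5, 6 and 7 and fills in three parity bits, while D computes the syndrome; the image of E is exactly the kernel of D.

```python
from itertools import product

def syndrome(word):
    """D: the XOR of the (1-based) positions of the 1s in the word."""
    s = 0
    for i, b in enumerate(word, start=1):
        if b:
            s ^= i
    return s

def encode(msg):
    """E: 4 message bits -> 7-bit codeword (parity at positions 1, 2, 4)."""
    word = [0] * 8                      # 1-indexed; word[0] unused
    for pos, bit in zip((3, 5, 6, 7), msg):
        word[pos] = bit
    s = syndrome(word[1:])              # cancel the data's contribution
    word[1], word[2], word[4] = s & 1, (s >> 1) & 1, (s >> 2) & 1
    return word[1:]

codewords = {tuple(encode(m)) for m in product((0, 1), repeat=4)}
print(len(codewords))                            # 16 = 2^4 codewords
print(all(syndrome(w) == 0 for w in codewords))  # True: image of E = kernel of D
```

The rate is 4/7: four message bits are carried by every seven transmitted bits.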
Now encoding is simple arithmetic over the binary field, and can be done by lightweight and low-powered hardware. This is a great advantage in interplanetary exploration, where the space probe has weight and power limitations.
Decoding is more complicated; the easiest method to describe is syndrome decoding. The received word can be written as t + e, where t is the transmitted word (which we want to find) and e the unknown error that occurred during transmission. For each coset of C in F^n, choose a word of smallest weight in the coset, called the “coset leader”. Our assumption that the number of errors is not too great says precisely that the error pattern e will be a coset leader. Now D maps the coset leaders bijectively to F^(n−k), so by a look-up table or other means we can calculate e from D(t + e) = D(e) [remember that D(t) = 0, since t is a codeword], and hence find t.
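Here is a brute-force sketch of syndrome decoding (my own toy example: a [6,3] binary code with an invented parity-check matrix H, not a code from the post):

```python
from itertools import product

# An invented parity-check matrix for a toy [6,3] binary code;
# its columns are distinct and non-zero, so it corrects one error.
H = [[1, 1, 0, 1, 0, 0],
     [1, 0, 1, 0, 1, 0],
     [0, 1, 1, 0, 0, 1]]

def synd(w):
    """The map D: a word's syndrome, computed from H over GF(2)."""
    return tuple(sum(h * b for h, b in zip(row, w)) % 2 for row in H)

# Look-up table: one minimum-weight coset leader for each syndrome.
leader = {}
for w in product((0, 1), repeat=6):
    s = synd(w)
    if s not in leader or sum(w) < sum(leader[s]):
        leader[s] = w

def decode(received):
    e = leader[synd(received)]          # the presumed error pattern
    return tuple(r ^ b for r, b in zip(received, e))

t = (1, 0, 1, 1, 0, 1)                  # a codeword: synd(t) == (0, 0, 0)
r = (1, 0, 1, 1, 1, 1)                  # the same word with position 5 flipped
print(synd(t), decode(r) == t)          # (0, 0, 0) True
```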
The Hamming code of length 2^d − 1 and dimension 2^d − d − 1 has the property that the matrix D (whose size is (2^d − 1) × d) has as its rows all the non-zero binary strings of length d. We can regard these as being the base 2 representations of the integers from 1 to 2^d − 1. If we write the rows so that the i-th row is the base 2 representation of i, then the syndrome of a received word (the result of multiplying it by this matrix) is 0 if no error occurred, and the base 2 representation of i if one error occurred in the i-th position. This is exactly the principle we used in the trick. This shows, in a very constructive way, that the Hamming code will correct one error. It is a perfect code, for each value of d.
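A minimal sketch of this principle (my own, with positions numbered 1 to 7): with this ordering of the rows, the syndrome of a word is simply the XOR of the positions holding a 1, so flipping one bit of a codeword makes the syndrome read off the error position in binary.

```python
def syndrome(word):
    """XOR of the (1-based) positions of the 1s; 0 exactly for codewords."""
    s = 0
    for i, b in enumerate(word, start=1):
        if b:
            s ^= i
    return s

codeword = [1, 1, 1, 0, 0, 0, 0]   # positions 1, 2, 3 set; 1 ^ 2 ^ 3 == 0
received = codeword[:]
received[4] ^= 1                   # flip the bit in position 5

print(syndrome(codeword), syndrome(received))  # 0 5: the error is in position 5
```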
Hamming codes exist over other finite fields. We take the matrix D to have as its rows, not all the non-zero strings of length d over the field, but a selection of them such that no row is a scalar multiple of any other. This is most conveniently done by arranging that the first non-zero entry in any row is 1. The number of rows is (q^d − 1)/(q − 1).
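A sketch of this selection (mine, assuming q is prime, so that the field is the integers mod q):

```python
from itertools import product

def hamming_check_rows(q, d):
    """All length-d vectors over the integers mod q (q prime) whose
    first non-zero entry is 1: one representative per scalar class."""
    rows = []
    for v in product(range(q), repeat=d):
        first_nonzero = next((x for x in v if x != 0), None)
        if first_nonzero == 1:
            rows.append(v)
    return rows

rows = hamming_check_rows(3, 3)
print(len(rows))   # 13 = (3^3 - 1) // (3 - 1)
```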
Here is an example with d = 3 and q = 3. I have transposed the matrix, to save space, and written the entries of the field as 0, +, −:

    0 0 0 0 + + + + + + + + +
    0 + + + 0 0 0 + + + − − −
    + 0 + − 0 + − 0 + − 0 + −
If one error occurs in transmission, the syndrome will be the i-th row of the matrix (the i-th column as displayed) or its negative, according as 1 was added to or subtracted from the i-th entry.
There is another use we can make of this: we can derive a solution of the “weighing pennies” problem. In this problem, we are given a number of pennies, one of which is known to be either lighter or heavier than all the others (but we don’t know which), and have to identify the odd penny with a given number of weighings on a balance. The traditional puzzle has twelve pennies and allows three weighings.
The Hamming code above would solve the problem if only the weights of the pennies were elements of the integers mod 3 rather than real numbers. Let v be the vector whose i-th coordinate is +1 if the i-th penny is heavy, −1 if it is light, and 0 if it is normal. Multiplying v by the 13 × 3 matrix (the transpose of the one displayed above) gives plus or minus its i-th row, according as the i-th penny is heavy or light respectively.
To get round this, we have to arrange that every row of the displayed matrix has equally many + and − entries. This is not possible as it stands, but becomes possible if we delete a column all of whose entries are non-zero, since then the weight of each row becomes even. In the existing matrix, the numbers of +/− entries in the three rows are 9/0, 6/3 and 5/4 respectively. A little thought shows that if we change the signs of the columns numbered 8, 9, 10 and 11, these become 5/4, 4/5 and 5/4; so we have to delete the column (+,−,+)^T (the 12th) to obtain balanced rows. The resulting matrix gives the weighing scheme, its columns corresponding to pennies 1 to 12 (the old 13th column now belongs to penny 12). For the i-th weighing, look at the i-th row, and place the pennies corresponding to entries − in the left-hand pan and those corresponding to + in the right. So the weighings are:
- 8,9,10,11 v 5,6,7,12
- 8,9,10,12 v 2,3,4,11
- 4,7,9,12 v 1,3,6,10
The results of the weighings tell us the odd penny and whether it is light or heavy.
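The scheme can be checked mechanically: each penny, whether light or heavy, must tilt the balance in a different pattern. A quick verification sketch (mine):

```python
# The three weighings from the scheme above (left pan v right pan).
weighings = [
    ({8, 9, 10, 11}, {5, 6, 7, 12}),
    ({8, 9, 10, 12}, {2, 3, 4, 11}),
    ({4, 7, 9, 12},  {1, 3, 6, 10}),
]

def outcome(penny, heavy):
    """Tilt of each weighing: +1 right pan down, -1 left pan down, 0 balance."""
    res = []
    for left, right in weighings:
        t = 1 if penny in right else -1 if penny in left else 0
        res.append(t if heavy else -t)
    return tuple(res)

results = {outcome(p, h) for p in range(1, 13) for h in (True, False)}
print(len(results))   # 24 distinct outcomes: the odd penny and its bias
                      # are always determined
```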
Hi, I am an old buddy of Lewis Nowitz. Is there a way to find him, if he is still around and alive? Thanks, Dennis Pitter
I haven’t seen Lewis for many years. But we have a joint paper from 2002, which came about in a curious way: three separate teams were working on this problem, and one person was working alone. Lewis, with Dragan Marusic in Slovenia, made up one of the three teams; I believe Lewis was visiting Dragan at the time, so he was obviously still around then.
He was a remarkable character who would email all his friends jokes from his enormous collection. But I haven’t had any of these emails for a long time, so I don’t know what has happened.
Lewis, are you out there?