## Catching a liar

In logical puzzles about truth-tellers and liars, it is always assumed that the questions have just two possible answers; truth-tellers always give the correct answer, and liars the incorrect one. One might expect that, even in this artificial situation, a liar would be more versatile, and would set out to deceive.

The French essayist Michel de Montaigne said,

If, like the truth, falsehood had only one face, we should know better where we are, for we should then take the opposite of what a liar said to be the truth. But the opposite of the truth has a hundred thousand shapes and a limitless field.

In similar vein, the philosopher Anthony Kenny said in his doctoral thesis,

… all worthwhile philosophical statements express an insight; and the opposite of an insight is not a contradictory sentence, but a muddle …

Nevertheless, I will discuss a very artificial situation, in which there is a series of yes-no questions and the respondent is permitted to lie at most once in answering them. The purpose of this is to introduce coding theory, as I will discuss later. It was prompted by a recent post on SymOmega by John Bamberg about the Hamming code of length 7. I am discussing the same code, and the same use of it. Unable to shed my teacher's persona, I will attempt to pull back the screen and show what lies behind.

Unlike John’s version of the trick, mine does without the cards; you have to remember the questions, and you have to do a small amount of brainwork to find the lie. When I do this in public, I usually get it right, but sometimes fumble it.

The game works like this. Instruct your volunteer to think of a whole number between 0 and 15 (inclusive), and then to answer a few questions about it. He or she is permitted to lie in answer to at most one question, but this is not compulsory; truth-telling is allowed. Here are the questions.

1. Is the number 8 or greater?
2. Is it in the set {4,5,6,7,12,13,14,15}?
3. Is it in the set {2,3,6,7,10,11,14,15}?
4. Is it odd?
5. Is it in the set {1,2,4,7,9,10,12,15}?
6. Is it in the set {1,2,5,6,8,11,12,15}?
7. Is it in the set {1,3,4,6,8,10,13,15}?

At the end, you announce both the number thought of, and the question lied to (if any).
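For readers who want to experiment, the questions can be checked mechanically. Here is a small Python sketch (the names are my own choosing, not part of the trick) that computes the truthful answer string for any number from 0 to 15:

```python
# Membership sets for the seven yes/no questions above
# (question 1 is "8 or greater?", question 4 is "is it odd?").
QUESTIONS = [
    set(range(8, 16)),                 # Q1: is the number 8 or greater?
    {4, 5, 6, 7, 12, 13, 14, 15},      # Q2
    {2, 3, 6, 7, 10, 11, 14, 15},      # Q3
    set(range(1, 16, 2)),              # Q4: is it odd?
    {1, 2, 4, 7, 9, 10, 12, 15},       # Q5
    {1, 2, 5, 6, 8, 11, 12, 15},       # Q6
    {1, 3, 4, 6, 8, 10, 13, 15},       # Q7
]

def truthful_answers(n):
    """Truthful answers for 0 <= n <= 15, as a list of 0s and 1s (1 = yes)."""
    return [1 if n in q else 0 for q in QUESTIONS]
```

For instance, `truthful_answers(7)` gives `[0, 1, 1, 1, 1, 0, 0]`: the first four entries are the binary digits of 7.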

In order to do the trick, you have to remember the following simple diagram, which is the Fano plane with a certain natural labelling. (Put the first three powers of 2, namely 1, 2, 4, at the vertices of the big triangle, and then label the third point of each line with the sum of the two points already labelled. The seven lines are then {1,2,3}, {1,4,5}, {1,6,7}, {2,4,6}, {2,5,7}, {3,4,7}, {3,5,6}.)

Now here are the decoding rules. First we identify the lie. Record the answers to the questions in order as 1 for “yes” and 0 for “no”, obtaining a binary string of length 7. The weight of this string is the number of ones it contains.

• If the weight is 0, no lie was told.
• If the weight is 1, the lie is in the position of the 1 in the string.
• If the weight is 2, then the positions of the two 1s lie on a unique line; the third point on the line is the lie.
• If the weight is 3, look at the positions of the three 1s. If they form a line, then no lie was told. If not, then the complementary set of four positions (those of the 0s) contains exactly one line; the point of this set not on the line gives the lie.
• If the weight is 4 or more, then apply the same rules as above to the positions of the zeros.

Having found the lie, you can now correct it; the first four digits of the corrected string give the base 2 representation of the number thought of.
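These rules can be written down directly in terms of the seven lines of the labelled Fano plane. A sketch in Python (the function names are mine; positions are numbered 1 to 7):

```python
# Lines of the Fano plane under the natural labelling:
# each triple {a, b, c} satisfies a XOR b = c.
LINES = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]

def find_lie(answers):
    """Position (1-7) of the lie, or None, following the weight rules above."""
    support = {i for i, a in enumerate(answers, start=1) if a}
    if len(support) >= 4:                 # weight 4 or more: use the zeros
        support = set(range(1, 8)) - support
    if len(support) == 0:
        return None
    if len(support) == 1:
        return support.pop()
    if len(support) == 2:
        line = next(l for l in LINES if support < l)
        return (line - support).pop()     # third point of the unique line
    if support in LINES:                  # weight 3, forming a line
        return None
    comp = set(range(1, 8)) - support     # the four remaining positions
    line = next(l for l in LINES if l <= comp)
    return (comp - line).pop()            # the point of comp not on the line

def decode(answers):
    """Correct the lie (if any); the first four digits give the number."""
    corrected = list(answers)
    lie = find_lie(answers)
    if lie is not None:
        corrected[lie - 1] ^= 1
    return int("".join(map(str, corrected[:4])), 2), lie
```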

For example, if the answers given yielded the string 0111000, you conclude that the answer to the fifth question was a lie, and the number thought of was 7.

Why does it work?

Let us get the easy part out of the way first. If we know the correct answers to the questions, we can recover the number thought of. This is simply a matter of looking at the first four questions and noting that they ask for the four digits in the base 2 representation of the number. In coding theory terms, these are the “information digits”, encoding the information we are trying to transmit. The remaining questions yield “check digits”, enabling errors to be spotted and fixed.

In fact, if you examine the set H of 16 strings produced by correct answers to the questions with each possible input, you will find two remarkable things:

• The set is closed under addition (mod 2). If you look further, you find that, if we represent each number from 0 to 15 in base 2, and regard the resulting string of length 4 as a binary vector in F⁴, where F is the two-element field, then each question is a linear functional on this 4-dimensional space (simply observe that the set of numbers with answer “no” forms a subspace), and so the entire procedure gives a linear map from F⁴ to F⁷; its image is therefore a subspace.
• The set contains the all-0 and all-1 vectors, the seven vectors whose supports are the lines of the Fano plane, and the seven vectors whose supports are the complements of lines.
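Both properties can be confirmed by brute force. A quick check in Python (building H from the question sets in my own encoding):

```python
# Build the 16 codewords from the membership sets of the seven questions.
Q = [set(range(8, 16)), {4, 5, 6, 7, 12, 13, 14, 15},
     {2, 3, 6, 7, 10, 11, 14, 15}, set(range(1, 16, 2)),
     {1, 2, 4, 7, 9, 10, 12, 15}, {1, 2, 5, 6, 8, 11, 12, 15},
     {1, 3, 4, 6, 8, 10, 13, 15}]
H = {tuple(int(n in q) for q in Q) for n in range(16)}

# Closure under coordinatewise addition mod 2:
for v in H:
    for w in H:
        assert tuple((a + b) % 2 for a, b in zip(v, w)) in H

# The supports (positions of 1s, numbered 1-7) include every line
# of the Fano plane and every complement of a line:
LINES = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6},
         {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
supports = {frozenset(i for i, a in enumerate(v, 1) if a) for v in H}
for line in LINES:
    assert frozenset(line) in supports
    assert frozenset(set(range(1, 8)) - line) in supports
```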

It follows from these properties that any two elements of H differ in at least three positions. (The number of positions in which v and w differ is equal to the number of ones in v+w; if v and w belong to H, then so does v+w, so if non-zero it has at least three 1s.) So if we take an element of H and change a single coordinate (corresponding to telling a lie to one question), the result is still closer to the starting element than to any other. (If any two villages are at least 3km apart, and I walk 1km from one village, I am still closer to that village than to any other.)
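The minimum distance can also be checked directly over all pairs — a brute-force sketch, rebuilding H from the question sets as before:

```python
# Rebuild the 16 codewords from the membership sets of the questions.
Q = [set(range(8, 16)), {4, 5, 6, 7, 12, 13, 14, 15},
     {2, 3, 6, 7, 10, 11, 14, 15}, set(range(1, 16, 2)),
     {1, 2, 4, 7, 9, 10, 12, 15}, {1, 2, 5, 6, 8, 11, 12, 15},
     {1, 3, 4, 6, 8, 10, 13, 15}]
H = [tuple(int(n in q) for q in Q) for n in range(16)]

def distance(v, w):
    """Hamming distance: the number of positions where v and w differ."""
    return sum(a != b for a, b in zip(v, w))

# Any two distinct codewords differ in at least three positions.
assert min(distance(v, w) for v in H for w in H if v != w) == 3
```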

So in principle, the decoding is possible; all I have to do is to run through the 16 elements of H and find which one differs in at most one position from the sequence produced by the answers to the questions. The procedure I gave above is a relatively simple method of doing this.
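That "in principle" decoder — scan the 16 elements of H for the one within distance 1 — is tiny to write down (a sketch, rebuilding H as before; uniqueness of the match is guaranteed by the minimum distance):

```python
# Rebuild the 16 codewords; H[n] encodes the number n.
Q = [set(range(8, 16)), {4, 5, 6, 7, 12, 13, 14, 15},
     {2, 3, 6, 7, 10, 11, 14, 15}, set(range(1, 16, 2)),
     {1, 2, 4, 7, 9, 10, 12, 15}, {1, 2, 5, 6, 8, 11, 12, 15},
     {1, 3, 4, 6, 8, 10, 13, 15}]
H = [tuple(int(n in q) for q in Q) for n in range(16)]

def nearest_decode(answers):
    """Return (number, lie position or None) by scanning all 16 codewords."""
    for n, v in enumerate(H):
        diffs = [i for i, (a, b) in enumerate(zip(answers, v), 1) if a != b]
        if len(diffs) <= 1:
            return n, (diffs[0] if diffs else None)
    raise ValueError("more than one lie was told")
```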

We say that H is a 1-error-correcting code. It is the famous Hamming code of length 7 (which, arguably, was discovered in statistics by R. A. Fisher eight years before Hamming found it, but that is another story).

In fact, H has the additional property that any vector in F⁷ differs in at most one coordinate from an element of H. This is because H contains 16 vectors, and the 16×7 = 112 vectors obtained by changing one coordinate in a vector of H are all distinct from one another and from the elements of H; and 16 + 112 = 128 = 2⁷, so every vector is accounted for. We say that H is a perfect 1-error-correcting code.
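The counting argument can itself be checked exhaustively: the 16 balls of radius 1 around the codewords tile all 128 vectors. A sketch, rebuilding H as before:

```python
from itertools import product

# Rebuild the 16 codewords from the membership sets of the questions.
Q = [set(range(8, 16)), {4, 5, 6, 7, 12, 13, 14, 15},
     {2, 3, 6, 7, 10, 11, 14, 15}, set(range(1, 16, 2)),
     {1, 2, 4, 7, 9, 10, 12, 15}, {1, 2, 5, 6, 8, 11, 12, 15},
     {1, 3, 4, 6, 8, 10, 13, 15}]
H = [tuple(int(n in q) for q in Q) for n in range(16)]

balls = set()
for v in H:
    balls.add(v)
    for i in range(7):        # flip one coordinate at a time
        u = list(v)
        u[i] ^= 1
        balls.add(tuple(u))

assert len(balls) == 128                        # 16 + 16*7, with no overlaps
assert balls == set(product((0, 1), repeat=7))  # every vector of F^7
```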

The decoding method (the way to identify the lie) I gave above is known as syndrome decoding.
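With this particular labelling the syndrome is especially convenient to compute, since each codeword's support XORs to zero under the labels 1 to 7: XOR together the positions of the 1s in the answer string, and the result is 0 if the answers are consistent, and otherwise is the position of the lie. A minimal sketch:

```python
def syndrome(answers):
    """XOR of the positions (numbered 1-7) holding a 1.
    Zero means no lie is detected; otherwise the value
    is the position of the single lie."""
    s = 0
    for pos, a in enumerate(answers, start=1):
        if a:
            s ^= pos
    return s

# The worked example above: answers 0111000, lie at question 5.
assert syndrome([0, 1, 1, 1, 0, 0, 0]) == 5
```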

The underlying practical situation is that we are trying to send information through a noisy channel where some distortion will occur; during transmission of a binary string, it will occasionally happen that a 0 is changed into a 1 or vice versa. If we can assume that it is very unlikely that more than one of every seven digits transmitted will be received incorrectly, then the Hamming code allows us to recover from the errors in almost all cases.

### 5 Responses to Catching a liar

1. Nigel White says:

Thanks for introducing me to this trick – originally from watching a video of your LMS Popular Lecture a few years ago. In my experience, it always goes down very well at children’s parties. In response to ‘how did you do that’ – I just reply ‘maths’. Hopefully, it generates a few early converts to maths being fun.

2. Ralph says:

The Hamming questions remind me of techniques used by some trial lawyers when cross-examining a deceitful witness. Good cross-examiners effectively ask the same question many different times, with variations and in hidden combinations.

The witness can tell unlimited lies, but has to remember all prior answers and carry out the necessary logical calculations accurately.

“Did you return the fur coat before or after you stopped for a drink?”… “How did you pay for your coffee?”… “Where was the coat at that time?”

Lying consistently is not an easy job. Sometimes more than 4 data bits are involved and more than seven questions have to be answered.

3. Bill Fahle says:

This famous hat problem also seems to involve Hamming codes: http://www.msri.org/people/members/sara/articles/hat.html

4. Jon Awbrey says: