I am a great admirer of Ben Goldacre. An advocate of evidence-based medicine, he is the person behind Bad Science, the Guardian column, website, and book; he is the scourge of dodgy nutritionists, alternative therapists who cherry-pick trials to bolster their wares, evil pharmaceutical companies who have subtle ways of burying unfavourable trials, and journalists who manufacture scare stories by misunderstanding the scientific evidence. What little scientific expertise I have is in the hard sciences, and it is refreshing for me to read about a different kind of science where statistical analysis is necessary to supplement direct observation.
There is only one small point on which I am critical of Goldacre. His jokey style includes several shots along the lines of “mathematics is hard, but don’t worry, I’ve simplified it for you.”
For example, on page 265 of the book, he says,
As statisticians would say, you must ‘correct for clustering’. This is done with clever maths which makes everyone’s head hurt. All you need to know is that the reasons why you must ‘correct for clustering’ are transparent, obvious and easy, as we have just seen (in fact, as with many implements, knowing when to use a statistical tool is a different and equally important skill to knowing how it is built).
I completely agree with his main point. But surely the wording just panders to readers’ fear of mathematics. One of the jobs of mathematical statistics is to take these “transparent, obvious and easy” reasons and turn them into something which can be used to extract reliable information from data.
Eighteen pages earlier, we had the following.
Imagine a table with four cards on it, marked ‘A’, ‘B’, ‘2’, and ‘3’. Each card has a letter on one side, and a number on the other. Your task is to determine whether all cards with a vowel on one side have an even number on the other. Which two cards would you turn over?
This doesn’t make anyone’s head hurt, since apparently it is not mathematics, just a “modest brain-teaser”. It is there to demonstrate that we have a bias towards looking for confirming instances when testing a theory; but the essence of a scientific theory is that it can be falsified, and the essence of an experiment is that it must have the possibility of falsifying the theory it sets out to test (as Karl Popper insisted).
In fact, I sincerely hope that people who have studied mathematics will be able to solve this brain-teaser. It would be interesting to test mathematics students against others; I would like to think the maths students would do better, though I am cynical enough to admit that it may not turn out that way. (One of the greatest difficulties in teaching mathematics is breaking down our students’ tendency to compartmentalise their knowledge; to persuade them that something they learn in their introductory pure mathematics course might be useful, not just in other courses, but in the world outside the lecture room.)
For there is no doubt that this is mathematics. We teach our students that the implication ‘p implies q’ is false only in the case when p is true and q is false. (My standard example: I promise you that if it is fine tomorrow we will go to the Zoo. If it rains all day, then I haven’t broken my promise, no matter what we do. I would only have done so if it were fine and we didn’t go to the Zoo.)
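The truth table we teach can be written out in a couple of lines of code; this is my own illustration, not anything from Goldacre’s book. The only case in which the implication fails is the first line of the loop’s output, p true and q false:

```python
def implies(p: bool, q: bool) -> bool:
    # "p implies q" is logically equivalent to "(not p) or q":
    # it can only fail when p holds and q does not.
    return (not p) or q

# The full truth table: False appears exactly once.
for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:5} q={q!s:5} p implies q: {implies(p, q)}")
```

In the Zoo example, p is “it is fine tomorrow” and q is “we go to the Zoo”; the two lines with p false are the rainy days on which the promise stands whatever we do.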
So to test the hypothesis ‘vowel on one side implies even number on the other’, we have to look only for instances that could falsify the implication by having a vowel on one side and an odd number on the other, that is, by turning over the cards showing ‘A’ and ‘3’.
(Another way of thinking about it is that the logically-equivalent contrapositive of the hypothesis we are testing asserts that all cards with an odd number on one side have a consonant on the other.)
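The selection rule can be spelled out mechanically: a card need be turned over exactly when its visible face could witness a falsifying instance. Here is a small sketch of my own (the function name and card encoding are mine, not from the puzzle as stated):

```python
VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    """Decide whether a card, given only its visible face, could
    falsify 'vowel on one side implies even number on the other'."""
    if face.isalpha():
        # A visible vowel might hide an odd number; a consonant
        # can never falsify the hypothesis, whatever is on the back.
        return face in VOWELS
    # A visible odd number might hide a vowel; an even number
    # is consistent with anything on the back.
    return int(face) % 2 == 1

cards = ["A", "B", "2", "3"]
print([c for c in cards if must_turn(c)])  # → ['A', '3']
```

Note that the odd-number branch is exactly the contrapositive at work: turning over ‘3’ checks that cards with an odd number have a consonant on the other side.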
The Sudoku instructions in The Independent say,
There’s no mathematics involved. Use reasoning and logic to solve the puzzle.
But what is mathematics if it isn’t reasoning and logic?