Littlewood published an article entitled “Large numbers” in the Mathematical Gazette in 1948. One of his examples depends on his own theorem about the relation between π(x) (the number of primes up to x) and li(x) (the integral of 1/log t from 0 to x). Although π(x)–li(x) is negative for all values of x for which it had been computed in Littlewood’s day, he proved in 1914 that this difference is sometimes positive; but the proof is non-constructive, and gives no upper bound for the smallest value of x for which it is positive.
Assuming the Riemann hypothesis, Skewes showed that the smallest value of x does not exceed 10^(10^(10^34)). Subsequently, he found a still larger number which bounds the value unconditionally. At the time, this was probably the largest number which had been used for a specific purpose in mathematics (though it is fair to state that subsequent work has considerably reduced these upper bounds).
Skewes’ number is much larger than the odds against a snowball surviving for a week in hell (which Littlewood estimates at 10^(10^46)) or the number of possible games of chess (about 10^(10^70) – the finiteness depends on one or other player exercising the option to claim a draw when a position occurs for the third time). Even these numbers dwarf the numbers we meet in everyday life, such as the UK national debt.
Skewes’ number shares with even larger numbers that have come up subsequently in mathematics the property of being an upper bound for something whose true (but unknown) value is probably much smaller. The undisputed champion, Graham’s number (which I will not even attempt to describe), is an upper bound for a number whose true value is conjectured to be 6.
Last week the two one-day London combinatorics colloquia (one at LSE, one at Queen Mary) took place. Two of the speakers (Oleg Pikhurko and Daniela Kühn) gave talks proving theorems along the lines “Conjecture X is true if n is sufficiently large”; in each case Conjecture X asserts that something is true for all values of n. In each case the proof shows that there is a number beyond which the conjecture holds; but neither speaker would be drawn on the question of how large this number is.
Curiously, both conjectures involved embeddings of trees. The one discussed by Pikhurko asserts that any tree on n vertices can be embedded in the graph whose vertices are the numbers 1,2,…,n, two vertices being adjacent if they are coprime; in other words, any tree can have its vertices labelled with the numbers 1,2,…,n in such a way that adjacent vertices have coprime labels. The one discussed by Kühn asserts that any tree on n vertices with directions on the edges can be embedded in any tournament (complete graph with directions on the edges) on 2n–2 vertices (this bound is easily seen to be best possible). My colleagues and I had heard this result before in a seminar talk by Richard Mycroft, now a postdoc in our department.
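For small n, the first of these conjectures can at least be checked by brute force. Here is a minimal sketch (the code and function names are my own, purely for illustration): generate every labelled tree on n vertices from its Prüfer sequence, and search for a labelling with 1,2,…,n in which adjacent labels are coprime.

```python
from itertools import permutations, product
from math import gcd

def trees(n):
    """All labelled trees on n vertices, decoded from Prüfer sequences."""
    for seq in product(range(n), repeat=n - 2):
        degree = [1] * n
        for v in seq:
            degree[v] += 1
        edges = []
        for v in seq:
            # attach the smallest current leaf to v
            leaf = next(u for u in range(n) if degree[u] == 1)
            edges.append((leaf, v))
            degree[leaf] -= 1
            degree[v] -= 1
        u, w = (v for v in range(n) if degree[v] == 1)
        edges.append((u, w))
        yield edges

def has_coprime_labelling(edges, n):
    """Can the vertices 0..n-1 be labelled 1..n so that adjacent labels are coprime?"""
    return any(
        all(gcd(lab[u], lab[v]) == 1 for u, v in edges)
        for lab in permutations(range(1, n + 1))
    )
```

Every tree on up to six vertices passes, consistent with the conjecture; of course exhaustive search of this kind says nothing about large n, which is exactly where the theorem bites.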
Though both speakers declined to put a value to their bound, neither of them made Littlewood’s claim that the proof is non-constructive and no bound can be extracted. Probably a proof theorist would contend that any theorem asserting the existence of a bound can be tortured to reveal an explicit value for the bound.
Perhaps some time in the future we will write our proofs in such a way that this extraction is always possible…
In any case, a striking new proof of an old theorem sometimes reveals its superiority by reducing substantially the bound associated with the result. Shelah’s new proof of van der Waerden’s theorem about arithmetic progressions reduced the bound by one level in the hierarchy of rapidly growing functions.
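To make "the bound" concrete here: the van der Waerden number W(k, r) is the least N such that every r-colouring of {1,…,N} contains a monochromatic arithmetic progression of length k. A toy brute-force sketch (my own code, feasible only for the very smallest cases):

```python
from itertools import product

def has_mono_ap(colouring, k):
    """Does this colouring contain a monochromatic k-term arithmetic progression?"""
    n = len(colouring)
    for start in range(n):
        for step in range(1, (n - 1 - start) // (k - 1) + 1):
            terms = [start + i * step for i in range(k)]
            if len({colouring[t] for t in terms}) == 1:
                return True
    return False

def vdw(k, r):
    """Smallest N such that every r-colouring of 1..N has a monochromatic k-AP."""
    N = k  # no shorter interval can contain a k-term progression
    while not all(has_mono_ap(c, k) for c in product(range(r), repeat=N)):
        N += 1
    return N
```

This finds W(3, 2) = 9 almost instantly, but already W(3, 3) = 27 is beyond naive enumeration; the bounds emerging from the general proofs, Shelah's included, live unimaginably far beyond any such computation.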
It is entirely possible that we might prove some famous conjecture to be true beyond a certain bound which is far too large for computation of the intermediate values to be possible, while the proof is so recalcitrant that mathematicians are unable to reduce the bound. Then there would be no alternative but to look for a new proof!
A sequence in the Horizon programme on infinity showed Ron Graham performing a long calculation to establish that the last digit of Graham’s number is 7. If the bound were improved, or the exact value found, in the future, then presumably this calculation would be of historical interest only.
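That calculation is modular arithmetic: since 3 is coprime to any power of 10, the exponent only matters modulo the order of 3, so the trailing digits of a sufficiently tall power tower 3^3^…^3 stabilise, and Graham’s number (built from such towers) shares them. A sketch (my own code):

```python
from math import gcd

def phi(m):
    """Euler's totient, by brute force (fine for the tiny moduli arising here)."""
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

def tower_mod(height, m):
    """3^3^...^3 (height threes) modulo m; assumes gcd(3, m) == 1 at every
    level of the recursion, which holds when m is a power of 10."""
    if m == 1:
        return 0
    if height == 1:
        return 3 % m
    # since gcd(3, m) == 1, the exponent can be reduced modulo phi(m)
    return pow(3, tower_mod(height - 1, phi(m)), m)
```

Here tower_mod(64, 10) returns 7, in agreement with Graham’s calculation, and tower_mod(64, 1000) returns 387, so the number ends …387.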
Once I was involved with a problem in which the issue arose of whether the existence of a bound could be translated into an explicit bound. The theorem in question asserts:
There are only finitely many finite distance-transitive graphs of given finite valency greater than 2.
The original proof used a compactness argument based on three observations:
- a reduction due to Derek Smith, showing that it suffices to prove the theorem for graphs whose automorphism groups are primitive;
- the proof (using the Classification of Finite Simple Groups) of the Sims conjecture, asserting that in a primitive permutation group in which the point stabiliser has an orbit of size d (where d>2), the order of a point stabiliser is bounded by a function of d;
- a theorem of Dugald Macpherson, determining all the infinite distance-transitive graphs of finite valency.
I will not define all these terms, or give the argument in too much detail, since the problem was later circumvented. Closer analysis of distance-transitive graphs showed that an explicit bound (not too far from the truth) on the number of vertices of a distance-transitive graph of valency d (for d>2) can be derived from a bound for the function in Sims’ conjecture; and such a bound can be extracted directly from the proof. Indeed, even the use of CFSG was later avoided by Richard Weiss. Finally, a theorem of Praeger, Saxl and Yokoyama made it feasible to aim instead for a complete classification of finite distance-transitive graphs.
So this episode really is of historical interest only!