Gareth Tracey talked about crowns in finite group theory. If we are studying minimal generating sets in finite groups, it is useful to understand groups G which require more generators than any proper quotient; these are crowns. Given an arbitrary group G, for any normal subgroup N, we know that the number of generators of G/N does not exceed the number for G; so, if we choose N maximal such that equality holds, then G/N is a crown. Gareth has used his results on crowns for various things; in particular, he gave us a preview of his result with Colva Roney-Dougal on the number of subgroups of the symmetric group S_{n}.
Tom Coleman told us about his work (much of it in his recent PhD thesis) on generation, cofinality and strong cofinality, and the Bergman property for various transformation semigroups on an infinite set, such as all maps, injective maps, surjective maps, bijective maps, partial maps, and bijective partial maps on a countable set, and analogous things for homomorphisms etc of the random graph. The Bergman property for a structure asserts that, if S is a generating set, then any element of the structure is a word of bounded length in elements of S.
Justine Falque talked about her work with Nicolas Thiéry on the orbit algebra of a permutation group in the case where the number of orbits on n-sets is bounded by a polynomial in n. They show that the algebra is finitely generated and Cohen–Macaulay, so that the generating function for the number of orbits on n-sets is a rational function of the form P(x)/Π(1-x^{di}), where P is a polynomial. I have already discussed this here; I have read the short version on the arXiv and eagerly await the full version.
Paul Seymour explained that there are many different “ordering” relations on graphs; the two he considers most important are the minor and induced subgraph orders. Roughly speaking, he worked on the minor ordering in the second millennium and the induced subgraph ordering in the third.
An induced subgraph of a graph is obtained by throwing away some vertices and the edges incident with them; you are not allowed to throw away an edge within the set of vertices you are keeping. Paul began with the general problem: given a graph H, can you determine the structure of graphs G containing no induced copy of H?
The first issue is what you mean by “structure” here. Paul thinks of it as a construction for such graphs. But even this needs explaining. You can construct all triangle-free graphs by taking a set of vertices and adding edges, taking care never to create a triangle. But clearly this is a hopeless construction and tells you nothing about triangle-free graphs!
The answer is known in embarrassingly few cases:
And that’s it! Not even for a 4-cycle is the answer known. There is a structure theory for bull-free graphs modulo the structure of triangle-free graphs and their complements, which again is not easy. (The bull is a triangle with horns, that is, pendant edges attached at two of its three vertices.)
If we are content with less than a complete description, there are various conjectures you can try: does a forbidden subgraph force the existence of a “large” complete or null subgraph? For arbitrary graphs on n vertices we can’t do better than log n (Ramsey’s theorem together with the probabilistic method); can we achieve n^{c} (“polynomial”), or even cn (“linear”)? Can we find two large sets with all or no possible edges between them?
Another question asks for χ-boundedness, the property that the chromatic number χ(G) is bounded by a function of the clique number ω(G). The strong perfect graph theorem, proved by Paul and collaborators, settled the conjecture of Berge by showing that if G contains no odd hole (induced odd cycle) or odd antihole (induced complement of an odd cycle) then G is perfect, so has equal clique and chromatic numbers. He went on to discuss some new and powerful results he and others have proved, where we forbid just some holes.
There was a lot more but that will do for now.
David Conlon’s advertised title was “How to build a hypergraph expander”, and he did indeed show us this in the second part of his talk. But his real subject was “The unreasonable effectiveness of mixing algebra and probability”. The first half of his talk mainly concerned the function ex(n,H), the maximum number of edges in a graph G on n vertices not containing H as a (not necessarily induced) subgraph. The Erdős–Stone theorem gives as an upper bound a fraction (1−1/(χ(H)−1)+ε) of all edges of K_{n}, and Turán graphs show this is tight. The problem occurs when χ(H) = 2, that is, H is bipartite, when the theorem only gives a fraction o(1), that is, o(n^{2}) edges. There is some evidence that the correct asymptotics should be around cn^{2−1/s}, where s is the size of the smaller bipartite block of H. This is known to hold in a few cases, including K_{2,2} and K_{3,3}, where algebraic constructions over finite fields realise the lower bound. The probabilistic method straightforwardly gives a weaker lower bound of about n^{2−(s+t−2)/(st−1)}, where t is the size of the larger block. David and coauthors have improved this by a method involving choosing “random algebraic varieties”.
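For K_{2,2} the algebraic construction can be made completely explicit: the Erdős–Rényi polarity graph of a projective plane has no 4-cycle (equivalently, no K_{2,2}) yet has about (1/2)n^{3/2} edges. Here is a small sketch (the function name is mine, not from the talk):

```python
from itertools import product, combinations

def polarity_graph(q):
    """Erdős–Rényi polarity graph over F_q (q prime): vertices are the points
    of the projective plane PG(2, q), with x ~ y when the dot product x . y
    is 0 (mod q). Any two points have at most one common neighbour, so the
    graph is C_4-free, yet it has roughly (1/2) n^{3/2} edges on
    n = q^2 + q + 1 vertices."""
    # one normalised representative per projective point:
    # first nonzero coordinate equal to 1
    points = []
    for v in product(range(q), repeat=3):
        if any(v):
            first = next(x for x in v if x)
            if first == 1:
                points.append(v)
    edges = set()
    for u, v in combinations(points, 2):
        if sum(a * b for a, b in zip(u, v)) % q == 0:
            edges.add((u, v))
    return points, edges

points, edges = polarity_graph(7)
print(len(points), len(edges))  # 57 points and 224 edges, with 57^{3/2}/2 ≈ 215
```

For q = 7 this gives 57 vertices and 224 edges, already beating the (1/2)n^{3/2} benchmark, and one can check by brute force that no two vertices have two common neighbours.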
For hypergraph expanders, even the best definition is not known: spectral, combinatorial, rapid mixing of some random walk, topological (Gromov), coboundary, …? David has constructions involving rapid mixing of a random walk on pairs on the neighbourhoods of vertices in a Cayley graph in an elementary abelian 2-group. As he remarked, received wisdom is that abelian groups are bad for rapid mixing, but the commutative property was essential in his argument.
Last before lunch was a beautiful talk by Simon Smith on infinite permutation groups. He assumed some topological knowledge (he liked his groups to be totally disconnected and locally compact (tdlc): the study of locally compact groups reduces to two cases, Lie groups and tdlc groups). His permutation groups were transitive and subdegree-finite (this means that a point stabiliser has all its orbits finite – automorphism groups of locally finite connected graphs are examples). There is a close connection: a closed subdegree-finite group is tdlc, and a tdlc group has a natural action as a subdegree-finite permutation group.
An important construction for finite permutation groups is the wreath product in its product action. Simon has another construction which he calls the box product. If X and Y are the domains of G and H, take a collection of copies of X, with complete graphs on each; and glue them together, any two meeting in one point and the copies through any point indexed by Y. Then there is a way to make a group act on this graph, so that the stabiliser of a lobe (copy of X) has G acting, while the stabiliser of a point has H acting on the lobes through that point. This is the box product. He has a theorem for when the box product is primitive, mirroring what happens for wreath product, and an O’Nan–Scott type theorem for primitive subdegree-finite groups.
After lunch and the (brief) business meeting of the Colloquium, I returned to the algebra workshop to hear Brent Everitt talking about three different ways to attach (co)homology to a hyperplane arrangement (a finite set of hyperplanes in a vector space). Note that the hyperplanes and their intersections have the structure of a lattice. The first method is simply to remove the hyperplanes and consider what’s left. Over the real numbers, this falls into many pieces, but over the complex numbers it is connected, and the Orlik–Solomon theorem gives its cohomology over the integers: there is no torsion, and the Betti numbers are expressible in terms of the Möbius function of the lattice.
The second method is to form the chain complex of the lattice (with top and bottom elements removed), the simplicial complex whose simplices are the chains. This is homotopy-equivalent to a bunch of spheres; the Goresky–MacPherson theorem connects its reduced homology to the previous case.
The third solution places a sheaf of R-modules on the lattice, and takes its simplicial cohomology with local coefficients. There is a canonical sheaf: each element of the lattice is a subspace; just use that subspace.
I ducked out and went to the history stream to hear Rosemary Bailey talk about Latin squares. Much of it was familiar to me, but there were some eye-openers, including the use in an experiment on diets and slaughtering times for sheep, reported in France in 1770, a possible proof (now lost) of the non-existence of orthogonal Latin squares of order 6 by Clausen in 1842, and a row between Fisher and Neyman about whether using a Latin square in an experiment gives biased results (with a shocking recent tailpiece).
I missed the London Mathematical Society meeting because I had a visitor; then there was a wine reception followed by a very nice dinner. I wish I had felt well enough to enjoy it more; but at least I could eat it, which is a sign of progress!
The last morning had just two morning talks and one plenary.
The first morning talk I went to was by Vicky Gould, who gave us a crash course in semigroup theory followed by the biorder on the idempotents of a semigroup and the “free idempotent-generated semigroup” generated by a biordered set. This is a fairly recent topic where the arguments are not straightforward. It is a free object whose generators correspond to the idempotents, and relations ef = g if the product of e and f is g in the original semigroup (or, abstractly, if e and f are comparable in one of the orders). Nice results but I won’t attempt to confuse you with a proper account.
Second, Daniel Král’ talked about graphons and their analogues for permutations, permutons. A sequence (G_{n}) of, say, graphs is said to converge if the probability that a random set of size |H| in G_{n} induces a copy of H converges for all graphs H. So what does the sequence converge to? This is more difficult, and was the problem solved by Borgs, Chayes, and Lovász by inventing graphons. It appeared that graphons often correspond to solutions to extremal graph problems which are either unique, or can be made so by adding extra relations. But Dan and his coauthors have shown that this is very far from being the case; he showed us pictures of some enormously complicated graphons refuting the obvious conjectures.
After coffee we had the final talk of the conference, a plenary on “Products of random matrices” by Marcelo Viana. Furstenberg and Kesten showed that certain limits always exist; these turned out to be the extremal Lyapunov exponents. Marcelo and collaborators showed that these numbers depend continuously on the input. This can be widely generalised, to distributions on the group GL(d), and to things called linear cocycles. In the first case it is not enough that the distributions are close in the weak* topology, but also that their (compact) supports are close in the Hausdorff topology; he showed us examples to demonstrate this. Most of the results have generalisations in some form. It was very clear, but again my brain was full.
The first plenary talk was by Irit Dinur on the unique games conjecture. I think I understood roughly what this conjecture says, but what it has to do with games, unique or otherwise, still escapes me completely.
Some NP-hard problems are constraint satisfaction problems; examples include 3-COL (is a given graph 3-colourable?) and 3-SAT (is a conjunctive normal form formula, in which each clause has 3 literals, satisfiable?). In each case, if the answer is no, we could ask what is the largest fraction of constraints which can be satisfied. For example, with 3-SAT, there is one constraint for each clause. By assigning Boolean values randomly to the variables, we see that seven of the eight truth assignments to the literals of each clause will make it true, and so the expected number of satisfied clauses is at least a fraction 7/8 of the total. In the same way, an edge gives a constraint on a colouring (its ends must have different colours), and so a random colouring will have at least 2/3 of the edges properly coloured.
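The 7/8 calculation is easy to check empirically. The sketch below (all names are my own, not from the talk) generates a random 3-SAT instance and averages the satisfied fraction over random assignments:

```python
import random

def random_3sat(num_vars, num_clauses, rng):
    """Random 3-SAT instance: each clause has 3 distinct variables, each
    negated with probability 1/2. Literal v means 'variable v is true',
    -v means 'variable v is false'."""
    clauses = []
    for _ in range(num_clauses):
        vars_ = rng.sample(range(1, num_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def satisfied_fraction(clauses, assignment):
    """Fraction of clauses satisfied by a truth assignment (dict var -> bool)."""
    def sat(clause):
        return any(assignment[abs(lit)] == (lit > 0) for lit in clause)
    return sum(sat(c) for c in clauses) / len(clauses)

rng = random.Random(42)
clauses = random_3sat(50, 2000, rng)

# Each clause is falsified only by the one assignment making all three of
# its literals false, so the expected satisfied fraction is exactly 7/8.
total = 0.0
trials = 200
for _ in range(trials):
    assignment = {v: rng.random() < 0.5 for v in range(1, 51)}
    total += satisfied_fraction(clauses, assignment)
print(total / trials)  # close to 0.875
```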
The PCP theorem says that there exists a positive δ such that it is NP-hard to distinguish between “all constraints satisfied” and “a fraction smaller than 1−δ of constraints satisfied”.
The unique games conjecture says something about a particular kind of CSP called “label cover with 1:1 constraints”, asserting that it is hard to distinguish between fractions ε and 1−ε of constraints satisfied. The breakthrough result is this with the upper value replaced by (1/2)−ε. This has a lot of consequences in related areas of complexity, but for fear of getting things wrong I will say no more.
After that, there was a wonderful talk by Clifford Cocks on the history of public-key cryptography. James Ellis at GCHQ had discovered the principles of public-key cryptography in 1969, and Cocks, arriving shortly afterwards, had constructed a practical scheme for doing it, almost identical with what we now call RSA (although technology wasn’t able to do the business until the 1980s); a later arrival, Malcolm Williamson, discovered another method, almost precisely the same as Diffie–Hellman key exchange.
On the other side of the Atlantic, Whitfield Diffie gave up his job in 1972 to learn about cryptography, and began working with Martin Hellman in 1974; by 1976 they had laid down the principles of PKC and had discovered Diffie–Hellman key exchange. Rivest, Shamir and Adleman read their paper and got to work; Rivest and Shamir tried many ideas, Adleman knocked them all down, until they came up with the RSA method. In one respect the Americans had advanced beyond the British; Diffie and Hellman realised the importance of authentication, and had discovered a method for providing it.
Having had a morning of computer-related things, I missed Mike Fellows’ talk on parameterized complexity, and listened instead to Marta Mazzocco on the geometry of the q-Askey scheme. This is a huge mountain, whose lower slopes on one side I have wandered among; Marta was approaching from the other side, and I am afraid that the landscape was almost completely unfamiliar to me. The q-Askey scheme is a poset with 29 boxes, each containing a type of orthogonal polynomials, with Askey–Wilson and q-Racah at the top. She talked mostly about the left-hand side, and I realised that the polynomials I am more familiar with, such as Krawtchouk and Hahn, were all on the other side. Her talk was full of discrete versions of Painlevé equations (six important families of non-linear differential equations), which she described by “chewing gum moves” on a certain Riemann surface; she has a duality which produces new information.
At lunchtime the rare books collection had put on a very nice exhibition, with works of (among many others) Pacioli, Fermat, Gauss, and Mary Somerville (who grew up in Burntisland).
After lunch, Nalini Joshi talked about building a bridge (motivated by the famous Harbour Bridge in her hometown Sydney) between apparently unrelated works by Japanese and European mathematicians. The talk began slowly with root systems and reflection groups, but picked up speed; I don’t feel competent to explain much of it, I’m afraid.
Then, not entirely by design, I found myself in the Algebra workshop. Tim Burness began with a talk on the length and depth of finite groups. The length of a finite group is the length of the longest chain of subgroups of the group. This is something I am interested in; in the early 1980s I found a beautiful formula for the length of the symmetric group S_{n}: take 3n/2, round up, subtract the number of 1s in the base 2 expansion of n, and subtract 1 from that.
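The formula just described is a one-liner to implement (the function name below is mine):

```python
def length_symmetric_group(n):
    """Length of the longest chain of subgroups of S_n (Cameron's formula):
    take 3n/2, round up, subtract the number of 1s in the binary expansion
    of n, and subtract 1 from that."""
    return -(-3 * n // 2) - bin(n).count("1") - 1

# For example, S_4 has the chain S_4 > D_8 > C_4 > C_2 > 1 of length 4.
print([length_symmetric_group(n) for n in range(2, 7)])  # → [1, 2, 4, 5, 6]
```

The expression `-(-3 * n // 2)` is the usual integer-arithmetic trick for rounding 3n/2 up.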
The length has many nice properties; it is monotonic, and additive over composition factors. Tim has defined another parameter which he calls the depth, which is the minimum length of an unrefinable chain of subgroups (that is, each one maximal in the next). Depth fails to have these nice properties, and behaves very differently from length. For example, the depths of finite symmetric groups are bounded above by 24; the nice proof of this uses Helfgott’s proof of the Ternary Goldbach conjecture! But the depth of finite simple groups is not bounded above. They have some results for algebraic groups as well (where the subgroups in the chain are restricted to be closed and connected).
Some of my colleagues are interested in something called the partition monoid. Maud de Visscher talked about the partition algebra, a slightly twisted version of the monoid algebra of the partition monoid. This arose in statistical mechanics; they have used it in representation theory, to get a better understanding of certain numbers called Kronecker coefficients arising from diagonal actions of symmetric groups on tensor products of Specht modules.
Finally, Marianne Johnson showed that the 2-variable identities of the bicyclic monoid (with two generators p and q satisfying the relation pq = 1) are identical to those of the 2×2 upper triangular matrices over the tropical semiring. A lovely and unexpected result!
The last event of the day was speed talks by PhD students, but your correspondent’s brain was full, so I cut this session.
As well as standard BMC fare of plenary lectures and “morning lectures”, there are five workshops, in algebra, analysis & probability, combinatorics, dynamics, and (a St Andrews speciality) history of mathematics.
The organisers had worried that attendance might be down: it is at an unusual time of year, convenient for us but less so for some universities south of the border; and St Andrews is fairly remote. But in the event, the main Physics Lecture Theatre was almost full for the opening lecture. We were welcomed by the leading organiser, our Regius Professor, who (after doing an impression of an airline cabin-staff member in pointing out the fire exits) gave way to a recent holder of one of Britain’s other two Regius chairs in mathematics, Martin Hairer.
Martin began by pointing out two guiding principles in probability theory: symmetry (“equivalent” outcomes, such as heads and tails in a coin toss, should have the same probability), and universality (harder to describe, but roughly, if a random outcome depends on many different sources of randomness, its detailed description should not matter too much). His first example, apart from the Central Limit Theorem for something like an infinite sequence of coin tosses, was Brownian motion. The apparently random motion of pollen grains in water was explained in around 1905 by Einstein and Smoluchowski (independently) as caused by many small nudges caused by random impacts from water molecules; they showed that the distribution should satisfy the heat equation, and this was verified experimentally by Perrin ten years later, the first experimental “proof” of the atomic hypothesis. But five years earlier, Bachelier had investigated the movement of share prices on the stock exchange, and had come to exactly the same conclusion (a precursor of the Black–Scholes equation). The rigorous mathematical description (a measure on the space of continuous functions) was given by Wiener, and the universality proved by Donsker.
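Universality of this kind is easy to see in a toy simulation (the code is mine, not from the talk): by Donsker’s theorem, a simple random walk rescaled by 1/√n looks like Brownian motion, so its endpoint at time 1 should have mean about 0 and variance about 1, whatever the step distribution.

```python
import random
import statistics

rng = random.Random(0)

def rescaled_endpoint(steps, rng):
    """Endpoint of a fair ±1 random walk, rescaled by 1/sqrt(steps);
    by Donsker's theorem this approximates Brownian motion at time 1."""
    walk = sum(1 if rng.random() < 0.5 else -1 for _ in range(steps))
    return walk / steps ** 0.5

samples = [rescaled_endpoint(500, rng) for _ in range(2000)]
print(statistics.mean(samples), statistics.variance(samples))  # near 0 and 1
```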
But there are other universality classes, such as the Ising model close to the critical temperature, whose limiting behaviour is far from “Gaussian”. Its behaviour is conjectured to be universal for many phase transition models, but this has not been proved. Yet another class consists of surface growth models, which describe many physical situations but also the shape formed by random Tetris pieces falling from the sky.
Martin’s title was “Bridging Scales”, and his interest was in processes where the large-scale and small-scale scaling limits are known distributions (such as Gaussian and KPZ); what happens on intermediate scales? He and coauthors have a quite general theorem involving solutions of a certain type of stochastic differential equation, and there is not one behaviour, but a family of “canonical” behaviours described by a nilpotent Lie group. But I was floundering at this point.
I spent the next hour and a half in the History workshop, where I will describe only the first talk. Ursula Martin had a research grant to investigate how mathematical impact occurs. She started off by tracing the opposition from earliest times of two views: one expounded by G. H. Hardy (in A Mathematician’s Apology) and by Vannevar Bush (whose report, containing very little data, was perhaps the inspiration for setting up the NSF, and included the words “scientific progress … results from the free play of free intellects …”); the other can be traced back to the founding of the Royal Society in the 17th century (“to extend the boundaries of Empire, and of arts and sciences”), and can be heard in almost every pronouncement of funding bodies today: we should deliver highly skilled people to the labour market, create spin-off companies, and so on.
The conclusions that Ursula and her colleague Laura Meagher came to were interesting. In a previous paper by Meagher and Nutley, impact was classified into five types (and reading this you see how impoverished the REF definition of impact is): conceptual, instrumental, capacity building, attitude or cultural change, and enhancing connectivity. Apart from the fact that essentially only the second and third count for REF, Meagher and Martin found that the REF protocols reinforce the myth that impact only happens in a linear order, whereas in fact it is a tangled web whose components cannot be separated.
The take-home messages were that impact is about people, not processes, and requires “knowledge intermediaries” rather than technology transfer offices; and, most important, we need more good stories.
The final talk of the day was the public lecture by Julia Wolf, on finding structure in randomness. It had attracted a fair number of people who were not BMC delegates (she asked for a show of hands, and was relieved at the result).
Her message was: if our object is random, or even just “looks like random” (technically quasi-random), then we can learn a lot about it, in particular counts of subconfigurations; if it is not random, then it is structured, and this gives us a lot of information; but even these two principles are not universally applicable. Her examples were taken from graphs and sets of integers. (Despite my last-but-one post, there is of course no problem in choosing a random set of integers!) For graphs, there is the theorem of Fan Chung, Ron Graham and Richard Wilson: if a graph has edge density p and has the number of 4-cycles which a random graph with edge probability p would have, then it shares many properties with random graphs. She said a few words, which actually made this clearer to me than I have ever understood before: the number of 4-cycles in a graph with given edge density is minimised by the random graph, and therefore interesting stuff clusters at that point.
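The Chung–Graham–Wilson count is easy to see in an experiment: in a random graph G(n,p) the number of 4-cycles concentrates near its expectation 3·C(n,4)·p^4. A brute-force sketch (all names mine):

```python
import random
from itertools import combinations
from math import comb

def count_4cycles(adj, n):
    """Count 4-cycles: each 4-subset of vertices supports at most 3 distinct
    cycles, one for each way of splitting it into two 'diagonal' pairs."""
    count = 0
    for a, b, c, d in combinations(range(n), 4):
        for w, x, y, z in ((a, b, c, d), (a, b, d, c), (a, c, b, d)):
            if adj[w][x] and adj[x][y] and adj[y][z] and adj[z][w]:
                count += 1
    return count

n, p = 60, 0.5
rng = random.Random(1)
adj = [[False] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        adj[i][j] = adj[j][i] = rng.random() < p

expected = 3 * comb(n, 4) * p ** 4
observed = count_4cycles(adj, n)
print(observed / expected)  # close to 1
```

A graph with edge density p but markedly more 4-cycles than this benchmark is, by the theorem, detectably non-random.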
A couple of further snippets: She described the Green–Tao theorem on primes in arithmetic progression, and mentioned that the longest known such progression has length 26, but finding one of length 27 is completely out of computational reach; the Szemerédi regularity lemma says that, given ε, any large graph can be dissected into a number of pieces bounded by a function of ε such that the edges between almost all pairs of pieces form quasi-random bipartite graphs, but the function of ε is a tower of 2s of height ε^{−2}; and there is a possible game-changer on the horizon, the polynomial method recently used so spectacularly by Croot, Lev and Pach and by Ellenberg and Gijswijt (which I described here).
Then there was a welcoming wine reception, but I had had enough for the day and slipped away.
Now I see that the company which has fought a long and bitter war against open source software is to buy GitHub, according to the BBC news. And April is long past.
When Microsoft bought Skype, I found it would no longer run on my Linux laptop. So should we be worried that they have bought GitHub? I don’t use it myself, but a lot of the excellent free software that I do use, such as GAP, is now developed there. Are they buying it to destroy the competition, as the railways did to the canal network in Britain in the nineteenth century?
I have great admiration for the boss of Microsoft, but nothing but fear and loathing for his company.
Littlewood, in his Miscellany, discusses this, and comes firmly to the conclusion that probability theory can say nothing about the real world, and that a pure mathematician should “wash his hands of applications”, or if talking to someone who wants them, should say something along the lines of “try this; it is not my business to justify [it]”. However, later in the book, in the section on large numbers, he is happy to estimate the odds against a celluloid mouse surviving in Hell for a week. (Probably he would have argued that this is not the “real world”.)
That said, almost all mathematicians regard Kolmogorov’s axioms as the basis for probability theory. Littlewood says that “the aspiring reader finds he is expected to know the theory of the Lebesgue integral. He is sometimes shocked at this, but it is entirely natural.”
I had a project student this year who wrote about developing probability theory for non-classical logics such as intuitionistic or paraconsistent logic. I certainly learned a great deal about non-classical logic from supervising the project, for which I am very grateful to her; I might say more about this later. What I have to say below has nothing to do with her project, but the idea was sparked by a section in one of the papers she unearthed in her literature search. This is, perhaps, my small contribution to the foundations of probability theory.
Kolmogorov’s axioms say, in brief, that probability theory is measure theory on a space with total measure 1. In more detail, the ingredients are (Ω,F,p), where Ω is a set (the sample space), F is a σ-algebra of subsets of Ω (the events), and p is a countably additive function from F to the unit interval with p(Ω) = 1.
The paper in question is “Kolmogorov’s axiomatization and its discontents”, by Aidan Lyon, in the Oxford Handbook of Probability and Philosophy. Section 4 of the paper takes Kolmogorov’s axioms to task because they cannot model a lottery with a countably infinite number of tickets, which is fair in the sense that each ticket has the same chance of winning.
Right from the start, this struck me as odd. One of the themes of mathematical philosophy is that mathematicians are not careful enough in their work, and accept arguments without sufficient scrutiny. The proportion of people who support some form of constructivism is probably much higher among philosophers of mathematics than among mathematicians. (Typically, they reject proof by contradiction: a proof of a theorem must be a construction which demonstrates the theorem, it is not enough to show that assuming that the theorem is false leads to a contradiction.)
But here, it seems to be the other way around. I cannot imagine any construction which would actually implement a fair lottery with a countably infinite number of tickets. To point the contradiction, Kolmogorov’s axioms have no difficulty in handling the result of tossing a fair coin countably many times. This generates the set of all countable 0-1 sequences, interpreting 1 as “heads” and 0 as “tails”. Now the measure on this set is the standard product measure induced from the measure on {0,1} where each of the sets {0} and {1} has probability 1/2.
Here is the argument that a fair countable lottery is not possible. The sample space is the countable set of tickets. (Said otherwise, there is a possible world in which any ticket is the winning ticket.) We want all these outcomes to have the same probability p. Now the sample space Ω is the union of the singleton sets {T_{i}} for i∈N, which are pairwise disjoint; so, by countable additivity, if we add up p a countable number of times, the result should be 1. But this is not possible, since if p = 0, then the sum is 0, whereas if p > 0, then the result is infinite.
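The contradiction can be displayed in one line. Writing T_{i} for the tickets and using countable additivity:

```latex
1 \;=\; p(\Omega) \;=\; p\Bigl(\,\bigcup_{i\in\mathbb{N}} \{T_i\}\Bigr)
  \;=\; \sum_{i\in\mathbb{N}} p(\{T_i\}) \;=\; \sum_{i\in\mathbb{N}} p,
```

and the right-hand side is 0 if p = 0 and infinite if p > 0, so it can never equal 1.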
(The first part of this argument generalises to the fact that a countable union of null sets (sets with probability 0) is a null set.)
The difference between the two situations, it seems to me, is that I can imagine a mechanism that allows a coin to be tossed infinitely often. If we are going to have infinite processes in mathematics, this seems a fairly harmless one. But I am unable to visualise a fair method of picking one lottery ticket from a countable set. One could try the following: For each ticket, toss a fair coin infinitely often to generate the base 2 expansion of a real number in the unit interval; then the winner is the ticket whose number is the largest. Trouble is, there may not be a largest; the set of chosen numbers has a supremum but possibly no maximum.
Another way of describing the same process is like this. Take a fair coin, and toss it once for each ticket. Any ticket which gets tails is eliminated, those which get heads remain. Repeat this process until a single ticket remains, continuing into the transfinite if necessary (but there is no guarantee that this will happen!).
Littlewood gives an argument which shows that, if we really want a countable fair lottery, we have to give up either finite additivity or the property that the probability of the whole space Ω is 1. Take the following countable pack of cards: each card has consecutive natural numbers written on its two sides, and the number of cards bearing the numbers n and n+1 is 10^{n}. There are two players, Alice and Bob. A card is chosen randomly from the pack and held up between them, so that each can see one side, and they are invited to bet at evens that the number on their side of the card is smaller than the number on the other side. If Alice sees the positive integer n, it is ten times as likely that the number on the other side is n+1 as that it is n−1. So she should bet. Bob reasons in the same way. So each has the expectation of winning.
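With a finite truncation of the pack the conditional calculation is exact, and shows where the paradox lives: for every value except the extremes, a player’s conditional probability of winning is 10/11. A sketch (helper name mine):

```python
from fractions import Fraction

def win_probability(seen, max_card):
    """Alice sees `seen` on her side of the card; return the probability that
    the hidden side is larger. Card type n (0 <= n < max_card) bears n and
    n+1 and appears 10^n times in the pack; each card is equally likely to
    be held either way round, so the orientation factor 1/2 cancels."""
    # weight of cards that could be showing `seen` to Alice
    lower = Fraction(10) ** (seen - 1) if seen >= 1 else Fraction(0)  # card (seen-1, seen)
    upper = Fraction(10) ** seen if seen < max_card else Fraction(0)  # card (seen, seen+1)
    return upper / (lower + upper)

# For every interior value the hidden side is larger with probability 10/11,
# so betting at evens looks favourable to both players simultaneously.
print([win_probability(k, 20) for k in (1, 5, 10)])
```

Only the player who sees the largest number in a finite pack is sure to lose; in the infinite pack that escape hatch disappears, which is exactly Littlewood’s point.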
Littlewood describes this as a “paradox”; I think it is rather more than that.
Lyon claims that it is not necessary to be able to imagine a mechanism in order for the objection to Kolmogorov’s axioms to be valid. I don’t think I agree.
Anyway, Lyon describes a possible solution of using non-standard real numbers as probabilities, so that the probability of picking a ticket in the lottery is a suitable infinitesimal with the property that adding it countably many times gives 1. But he rightly remarks that there are problems with this solution: lack of uniqueness of non-standard models and inability to name the elements.
When I read this, it struck me that there was a potentially nicer answer to the question: use John Conway’s surreal numbers as probabilities. These numbers are explicitly defined and named; moreover there is a unique surreal number 1/ω with the property that adding it to itself ω times will give 1.
Conway’s number system comes out of his general framework for game theory, described in his book On Numbers and Games. The first account of it was the mathematical novel Surreal Numbers by Don Knuth; he describes two backpackers who unearth an ancient tablet bearing the signature “J.H.W.H. Conway” and giving the rules for the construction of Conway’s number system, and their elation and frustration as they decipher the tablet and try to prove the assertions on it.
Has anybody seen a treatment of probability using Conway’s surreal numbers? It seems to me that in order to have such a treatment, one would presumably need to modify the notion of “null set”, to address the fact that the union of countably many “null sets”, such as the lottery tickets with probability 1/ω, may not be null; there may be difficulties lurking here. The problem of inventing a mechanism for the lottery still remains, but maybe closer inspection of the way the surreal numbers are generated would suggest a method.
Actually, the sting in the tail of this story is that there are some absolutely classical mathematical results which assign “probabilities” to sets of natural numbers. For two famous examples, which incidentally lead to the same value: the probability that a random natural number is squarefree is 6/π^{2}, and the probability that two random natural numbers are coprime is also 6/π^{2}.
This uses a completely different notion of probability. The statements in question really assert that the proportion of natural numbers up to n which are squarefree (or the proportion of pairs which are coprime) tends to the limit 6/π^{2} as n→∞.
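These limiting proportions are easy to check empirically. Here is a minimal brute-force sketch (my own illustration, not part of the classical results) which computes the proportion of squarefree numbers up to N, and the proportion of coprime pairs up to M, and compares both with 6/π^{2}:

```python
from math import gcd, isqrt, pi

def squarefree(n):
    # n is squarefree if no square of an integer > 1 divides it
    return all(n % (d * d) != 0 for d in range(2, isqrt(n) + 1))

N = 10_000
sf_prop = sum(squarefree(k) for k in range(1, N + 1)) / N

M = 300
cp_prop = sum(gcd(a, b) == 1
              for a in range(1, M + 1)
              for b in range(1, M + 1)) / (M * M)

print(sf_prop, cp_prop, 6 / pi ** 2)
# both proportions come out close to 6/pi^2 ≈ 0.6079
```

The bounds N and M are kept small because the checks cost roughly O(N√N) and O(M²) respectively; the convergence is fast enough that even these small values land within about one percent of the limit.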
I thought the aim was to tell the world that mathematics is a great career by recounting the stories of people who have some interesting success to tell about. Well, that is part of it, but it seems to have another aim, which is to provide role models for groups under-represented in the profession. Of course you cannot speak against this aim, but looking at the web page you might get the impression that, while most mathematicians in the UK are male, most of the successful ones are female.
Enough negativity: it is a site well worth browsing, and the people represented there have interesting stories to tell. We are all people, and mathematicians are as varied as any group of people; there is something to celebrate, and this website does it. Take a look!
And to declare an interest: you will find my success story there also.
In the mid-1970s, when I began being interested in infinite permutation groups (I had been a strictly finite person before that point), I soon came to realise that I would need to know some model theory. As I have recounted elsewhere on this blog (for example, here, here and here), I learned it by volunteering to give the second-year lectures on logic in Oxford. I was also helped by Graham Higman’s advanced class on the Ryll-Nardzewski theorem (also proved at the same time by Engeler and by Svenonius), which says (in one formulation) that a countable first-order structure is determined up to isomorphism by the first-order sentences it satisfies together with the condition of countability if and only if its automorphism group is what is now called oligomorphic (that is, it has only finitely many orbits on n-tuples for every n). In other words, in this situation at least, (first-order) axiomatisability is equivalent to a very high degree of symmetry. This is one of my very favourite theorems in mathematics.
Soon afterwards, Dugald Macpherson (whose undergraduate degree in Oxford was in Mathematics and Philosophy – he had been my student at Merton College) began his DPhil under my supervision. As with many of my students, he taught me much more than he learned from me; he contributed more than anyone else to the study of oligomorphic permutation groups, and has subsequently done much more in model theory and permutation groups, including work on o-minimality and on simplicity of automorphism groups.
Now Dugald is 60, and the occasion is being celebrated by a conference at the International Centre for Mathematical Sciences in Edinburgh, from 17 to 21 September this year. I will certainly be there. It looks like a super conference; many of the things I talk about here will be discussed at the meeting. The invited speakers are Peter Cameron (St Andrews), Zoé Chatzidakis (ENS, Paris), Gregory Cherlin (Rutgers), Richard Elwes (Leeds), David Evans (Imperial College), Ehud Hrushovski (Oxford), Martin Liebeck (Imperial College), Dugald Macpherson (Leeds), Angus Macintyre (QMUL), Peter Neumann (Oxford), Jaroslav Nešetřil (Charles Uni., Prague), Anand Pillay (Notre Dame), Cheryl Praeger (Uni. West. Australia), Charlie Steinhorn (Vassar), Pierre Simon (Berkeley), Slawomir Solecki (Cornell), Katrin Tent (Münster), Simon Thomas (Rutgers), John Truss (Leeds).
The web page for the conference is at http://www.icms.org.uk/permutationgroups.php. Space is limited, and registration closes on 8 June (or maybe 9 June), so apply now! A registration form is available from that page.
See you there!
The project involves an interesting mix of algebra, analysis and combinatorics: the description mentions “infinite groups of homeomorphisms […] automorphism groups of shift spaces, automata groups, the extended family of R. Thompson groups, enumerative combinatorics, and/or GAP coding”.
If you think this might be for you, details are at https://www.vacancies.st-andrews.ac.uk/.
I have been to the Royal Society of Edinburgh twice in the last two weeks, for different reasons.
Yesterday it was for my induction as a Fellow, along with forty-odd others. One of the things that we were told was that the RSE is trying to diversify its membership, and it is no longer true that all the new Fellows are 70-year-old male professors; but there are still a few of us, doing what we do best.
We were encouraged to spread the word about the RSE and the good things it does, so here is my small contribution.
The RSE was founded by royal charter from George III in 1783. It differs from many national academies in the diversity of interests of its Fellows: it is not just a scientific society like the Royal Society of London, although it is true that academics in the science subjects predominate. Academics in the humanities and social sciences, public figures, business people, authors and artists all figure. Indeed, probably the best-known President ever was Sir Walter Scott, whose towering monument you pass on the way to the RSE premises from Waverley Station in Edinburgh.
Among its other functions, the RSE gives entirely non-partisan advice on matters of national interest to the Scottish government.
Before the induction ceremony we were given a tour of the building, whose chief treasures are large portraits of famous people, mostly former presidents. Here are a few of them: Michael Atiyah, Thomas Brisbane, David Brewster, P. G. Tait, and D’Arcy Thompson. (Which of these was not RSE President?)
Brisbane gave his name to the city where I was an undergraduate for four years (educated in a system based on the Scottish system). According to our guide, he took the job as Governor of New South Wales because he was an astronomer and was interested in viewing the Southern stars. But he was not the worst governor of New South Wales, although there are probably fewer things named after him than either his predecessor Lachlan Macquarie or his successor Ralph Darling. (Incidentally, in Australia he is Thomas Brisbane but the RSE calls him Makdougall Brisbane.)
My previous visit two weeks ago was for a lecture on the new Queensferry Crossing, given by Naeem Hussain, the global leader of bridge design at Arup, who had designed it. It was a good lecture, inspiring and humorous by turns. He told us that he first came to the UK in 1964, went for a holiday in Scotland, saw the then-new Forth Road Bridge, and thought to himself, “I wish I could have designed that bridge”. So, fifty years later, he got his chance. Designing a third bridge to stand beside the two iconic bridges already there was an aesthetic as well as an engineering challenge, as he acknowledged; the new bridge, like the one on the other side (the nineteenth-century Forth Bridge), has three support points, while the one in the middle is suspended from two points. Moreover, although politicians boast that the bridge towers are the highest such in the UK, Naeem would rather stress how low they are. (He made them as low as the engineering would allow, and used slim pillars rather than the more common A, H or diamond shapes, so as not to overwhelm visually the other two bridges.)
This was preceded by a trip to the Visitor Centre where we had a view of the three bridges, some models of the new one, and talks by the person who commissioned it, the engineer who built it, and a university professor who gave a very entertaining talk putting it into context. (Did you know that, when the Forth Road Bridge was designed, the only motorway in the UK was the Preston By-pass?)
Here are the bridges, taken last year from the Fife side, near Aberdour.
On that occasion, I was able to take a good look at the grant of arms to the RSE. The heraldic description includes the phrases
Per fess wavy Sable and Argent the scientific sign “D.N.A. Helix” fessways enhanced Gules and Azure
and
A celestial crown Or showing five mullets Argent
Do you know what “mullets” are in this context? (You might be able to cheat by looking at Wikipedia.)