Most of these uses seem completely unconnected. But the use of the same term for the last two is not coincidence, but comes from Galois theory. If L is a Galois extension of a base field E, and K an intermediate field, then L/K is a normal extension if and only if the Galois group of L over K is a normal subgroup of its Galois group over E. (If this happens, the Galois group of K over E is the quotient group.)
But this also hides some mystery. The most important property of normal subgroups is that they are kernels of homomorphisms (and conversely). But step outside group theory, to semigroup theory or universal algebra, and you learn that the kernel of a homomorphism is a partition, not a subalgebra: two elements are in the same part if they have the same image under the homomorphism. It just happens that, in groups, the kernel partition of a homomorphism is precisely the partition into cosets (left or right, it doesn’t matter) of the kernel subgroup.
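This happy accident is easy to check by machine in a small case. Here is a toy illustration (my own, not from the post): for the homomorphism x → x mod 3 from Z_6 to Z_3, the kernel partition coincides with the coset partition of the kernel subgroup {0, 3}.

```python
# Toy check: kernel partition of a homomorphism = coset partition of
# its kernel subgroup, for phi: Z_6 -> Z_3, x -> x mod 3.
G = range(6)
phi = lambda x: x % 3

# kernel partition: two elements in the same part iff same image
kernel_partition = sorted({tuple(g for g in G if phi(g) == v)
                           for v in {phi(g) for g in G}})

# coset partition of the kernel subgroup K = {0, 3}
K = [g for g in G if phi(g) == 0]
coset_partition = sorted({tuple(sorted((g + k) % 6 for k in K)) for g in G})

print(kernel_partition)                    # [(0, 3), (1, 4), (2, 5)]
print(kernel_partition == coset_partition) # True
```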
Indeed, in German, one talks of a “normal divisor” rather than “normal subgroup”, which presumably arises from this interpretation as partition (but I am guessing, I don’t know the etymology).
You can see kernels of homomorphisms in the Galois connection. If L/K is a normal extension, with L Galois over the subfield E, then any E-automorphism of L fixes K setwise, and so induces an E-automorphism of K. So we have a homomorphism from Gal(L/E) (the group of E-automorphisms of L) to Gal(K/E). The kernel of this homomorphism consists of the automorphisms which act trivially on K; these are the K-automorphisms of L, the elements of Gal(L/K). [An E-automorphism of L is a field automorphism of L fixing E elementwise.]
In group theory, the term “normal” could be, and sometimes is, replaced by “invariant”. An invariant subgroup is one mapped to itself by all conjugations; this fits in with the notion of fully invariant subgroup, mapped to itself by all endomorphisms. Indeed, the notion that most people call a “subnormal subgroup” (a term in a series of subgroups, each normal in the next, with top element the whole group) was called by Marshall Hall a “subinvariant subgroup”; he remarked in a footnote that he found the term subnormal “unnecessarily distracting”. [Footnote on p.124 of his book The Theory of Groups, published in 1959. He says “The more colorful term subnormal series has been urged on the writer by Irving Kaplansky”, suggesting that it wasn’t yet in common use in 1959.]
All well and good, if a little confusing so far; the first three uses mentioned above are so well separated that probably mathematical papers using each of them form disjoint open neighbourhoods.
But when we come to Cayley graphs, there is real confusion.
Let G be a group, and S an inverse-closed subset of G not containing the identity. The Cayley graph Cay(G,S) is the graph with vertex set G, in which two elements g and h are joined if and only if hg^{−1}∈S. The fact that S is inverse-closed makes the graph undirected, and the fact that it doesn’t contain the identity makes the graph loopless. The group G acts on itself by right multiplication; this action embeds G into the automorphism group of the Cayley graph.
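The definition, and the fact that right multiplication gives automorphisms, can be verified directly for a small group. A minimal sketch (my own illustration, with S_3 represented as permutation tuples and the helper names `mult`, `inv` my own):

```python
from itertools import permutations

def mult(p, q):                       # composition: apply q, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = list(permutations(range(3)))      # S_3 as permutation tuples
S = [(1, 0, 2), (0, 2, 1)]            # inverse-closed, identity-free

# edge rule: g ~ h iff h g^{-1} lies in S
E = {frozenset((g, h)) for g in G for h in G
     if g != h and mult(h, inv(g)) in S}

# right multiplication by any x maps edges to edges, since
# (hx)(gx)^{-1} = h g^{-1}, so membership in S is unchanged
for x in G:
    assert {frozenset((mult(g, x), mult(h, x))) for g, h in E} == E

print(len(G), len(E))                 # 6 vertices, 6 edges
```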
Now each of the following two definitions occurs in the literature: the Cayley graph Cay(G,S) is normal if
- the connection set S is closed under conjugation in G; or
- the right action of G is a normal subgroup of the full automorphism group of the graph.
These two definitions are quite different. Indeed, the second one restricts the symmetry of the graph (its automorphisms are all contained in the normaliser of G in the symmetric group), while the first expands it (the left, as well as the right, action of G consists of automorphisms).
The complete graph on G is a Cayley graph for any group G; it is normal in the first sense but not the second (if the order of G is greater than 4). On the other hand, the Cayley graph of S_{3} with respect to two of its transpositions is a 6-cycle, and its automorphism group contains S_{3} as a (normal) subgroup of index 2; so it is normal in the second sense but not the first (since the three transpositions are conjugate).
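Both claims about the S_3 example can be checked by brute force. The following sketch (mine, not from the post) confirms that the connection set of two transpositions is not conjugation-closed, while the resulting graph is a 6-cycle whose automorphism group has order 12:

```python
from itertools import permutations

def mult(p, q):                 # composition: apply q, then p
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, v in enumerate(p):
        r[v] = i
    return tuple(r)

G = list(permutations(range(3)))            # S_3
S = [(1, 0, 2), (0, 2, 1)]                  # two transpositions

# not normal in the first sense: conjugation enlarges S
closure = {mult(mult(g, s), inv(g)) for g in G for s in S}
print(len(closure))                         # 3: all three transpositions

# normal in the second sense: the graph is a 6-cycle, whose
# automorphism group (dihedral of order 12) contains the
# regular copy of S_3 as a subgroup of index 2
E = {frozenset((g, h)) for g in G for h in G
     if g != h and mult(h, inv(g)) in S}
autos = [f for f in permutations(G)
         if all(frozenset((f[G.index(a)], f[G.index(b)])) in E
                for a, b in E)]
print(len(E), len(autos))                   # 6 12
```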
Both terms, as I said, are well-established, and it is probably too late to change the terminology now.
This was on my mind because of recent events. The argument about synchronization for groups with regular subgroups mentioned in the last-but-one post depends on a relevant graph being a normal Cayley graph (in the first sense); but I learned about the result of Cai and Zhang at the conference in Shenzhen, which also had a talk about normal Cayley graphs (in the second sense).
Philosophers of mathematics argue about whether mathematics is discovered or invented. In the book of Genesis we read that God created the animals but Adam gave them their names. I think what the examples above show is that, whether mathematics is discovered or invented, the names we give to the concepts are our own invention.
Sometimes the weather at this time of year can be beautiful, the Reading campus looking lovely with autumn leaves under blue skies and a variety of waterfowl on the lake. But this time it was wet; Rosemary and I started out round the lake when the rain seemed to have stopped, but were thoroughly drenched by the time we got back.
The highlight of the meeting was a talk by Imre Leader on new work he and Paul Russell have done on partition regularity for inhomogeneous equations, going right back to Rado’s work at the beginning of the subject. A system Ax = b over the integers is partition regular if, in any finite colouring of the natural numbers, you can find a monochromatic solution to the equations. We are interested here in the inhomogeneous case, where the vector b is non-zero. In this case, if the system has a constant solution (with the values of all variables equal), it is clearly partition regular. If s denotes the column vector which is the sum of all the columns of A, then this holds if and only if b is a multiple of s (say b = as, in which case putting all variables equal to a gives a solution).
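The criterion is easy to mechanise: a constant solution exists exactly when b is a scalar multiple of the column-sum vector s. A small sketch (my illustration; the helper `constant_solution` is not from the talk):

```python
from fractions import Fraction

def constant_solution(A, b):
    """Return a with A*(a,...,a)^T = b if one exists, else None."""
    s = [sum(row) for row in A]          # s = sum of the columns of A
    a = None
    for si, bi in zip(s, b):
        if si == 0:
            if bi != 0:
                return None              # this row can never be constant-solved
        else:
            r = Fraction(bi, si)
            if a is not None and r != a:
                return None              # b is not a multiple of s
            a = r
    return a if a is not None else Fraction(0)

print(constant_solution([[1, 1]], [2]))    # 1: x = y = 1 solves x + y = 2
print(constant_solution([[1, -1]], [1]))   # None: x - y = 1 has no constant solution
```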
Rado proved the converse, that is, if the system is partition regular then there is a constant solution. Imre began by taking us through the proof. First he did the case where there is just one equation; then rather complicated arguments handle the general case. Various authors since have used algebraic techniques of increasing sophistication to extend the result from the integers to other rings (it holds for integral domains, and for Noetherian reduced rings).
Remarkably, what Imre and Paul have done is to handle all commutative rings with identity, not by elaborating the existing arguments, but by going back to Rado’s original proof for one equation and extending it to any number. The trick is that, although we work over a ring R, we are going to think of it in the proof as an abelian group and essentially ignore the multiplication after the very first step. So here is the proof.
We suppose that there is no constant solution. Let K be the subgroup of R^{n} consisting of all multiples of s. By assumption, b is not in K. Now let H be a subgroup of R^{n} which is maximal with respect to containing K but not b. Then R^{n}/H is an abelian group containing a non-zero element (the image of b) which lies in every non-zero subgroup (if not, we could replace H by a larger subgroup). Now such abelian groups are not too hard to characterise: they consist of finite cyclic p-groups and Prüfer p-groups, for some prime p. The latter consists of all the p-power roots of unity in the complex numbers; so, in either case, the group can be represented on the unit circle. Now dividing the circle into sufficiently small intervals allows the construction of a colouring with no monochromatic solution of the original system.
After this lecture, Anthony Hilton fetched in the bust of Richard Rado from the common room next door, where it sits unnoticed in a corner (despite Anthony’s attempts to have it more prominently displayed).
After that, a briefer account of the rest.
I was the first speaker. I talked about equitable partitions of Latin square graphs, a topic I have described here. Partly inspired by some work I am currently doing with Sanming Zhou from Melbourne, I introduced the topic with a few words about perfect 1-codes in graphs. I sometimes think that, had we mentioned this in the introduction of the paper, it might not have been rejected without refereeing by a journal with the words “designs” and “codes” in the title.
Dudley Stark talked about a theorem he and Nick Wormald have spent the past twenty years proving: estimates for the probability that a random graph G contains a fixed graph H. They handle both the G_{n,m} and G_{n,p} models (the answers are slightly different), and they are able to extend the range of parameters previously handled (even in the case where H is a triangle). The idea is quite simple, but the details extremely complicated. Dudley said they never doubted that the result was true but were not sure whether they would be able to produce a proof.
Richard Mycroft talked about the function t(T), where T is an edge-oriented tree on n vertices, defined to be the smallest N such that T is embeddable in every N-vertex tournament. It was conjectured by Sumner that 2n−2 vertices suffice; this is known for large n (“regularity lemma large”) and small n (up to 8, maybe), and thought to be true for intermediate values. A tree T is called unavoidable if t(T) = n; Richard and his colleague have shown that almost all trees are unavoidable.
Peter Larcombe talked about some rather different mathematics, involving Catalan numbers and Catalan polynomials. This touched on Newton–Raphson and other approximation methods including Padé approximants, and he also mentioned some new results about powers of 2×2 matrices (and more generally, tridiagonal matrices) which he has observed and proved, challenging us to say if we had seen such things before (no one had). It all went by rather fast; I hope to say more on some later occasion.
The final talk was by Matthew Johnson, who talked about almost bipartite graphs, those whose vertex set can be partitioned into an independent set and a set inducing a forest. It is known that a graph of maximum degree 3 is almost bipartite; Matthew and his colleagues have found a different proof of this which gives a linear-time algorithm for finding the partition. The application is to a recolouring problem. Given two proper colourings of a graph, with the same set of colours, is it possible to move from one to the other by changing the colour of one vertex at a time (retaining the properness)? In some cases they have a simple algorithmic answer to this question.
Here is a brief recapitulation.
Let G be a transitive permutation group on Ω, where |Ω| = n.
The “official” definition of synchronization harks back to the origin of this concept in automata theory; but what is said above will do for now. Note that if G is non-synchronizing, then A and any part of P witness the fact that G is non-separating; hence separating implies synchronizing.
For the record, I also mention the graph-theoretic interpretation of these concepts:
- G is non-synchronizing if and only if there is a G-invariant graph, neither complete nor null, whose clique number and chromatic number are equal;
- G is non-separating if and only if there is a G-invariant graph, neither complete nor null, whose clique number times independence number is equal to n.
Now a synchronizing group is primitive (i.e. preserves no non-trivial equivalence relation), as we can see from the graph-theoretic interpretation by observing that an imprimitive group preserves a complete multipartite graph. Also, a synchronizing group is basic (in the O’Nan–Scott classification), since a non-basic group preserves a Hamming graph. So O’Nan–Scott tells us that such a group is affine, diagonal or almost simple.
While I was in Shenzhen earlier this month, I was given a preprint by Qi Cai and Hua Zhang showing that, for groups of affine type, synchronization and separation are equivalent. Just before I left, Mohammed Aljohani had told me that the same is true for groups of diagonal type with two socle factors.
There is in fact a common approach to these two results. Suppose that the group G has a regular subgroup H. This means that there is a unique element of H mapping any given point of Ω to any other. Choosing a fixed element α of Ω to correspond to the identity, we can let the element αh correspond to h, for any h in H. Thus we identify Ω with H. Now the following two propositions are straightforward to prove:
Proposition 1 If A and B witness non-separation, then A^{−1} and B give an exact factorisation of H (this means that any element of H is uniquely expressible as xy with x in A^{−1} and y in B).
Proposition 2 If A and B give an exact factorisation of H, then the sets Ab for b in B form a partition of H, and this partition and the subset B witness non-synchronization.
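Here is a toy verification of the two propositions (my own, in the abelian group Z_6, where inversion causes no trouble since the left and right actions agree): A = {0, 1, 2} and B = {0, 3} give an exact factorisation, and the translates A + b then partition the group, as Proposition 2 predicts.

```python
# Toy check of an exact factorisation in Z_6 (additive notation).
def exact_factorisation(n, A, B):
    """Every element of Z_n uniquely x + y with x in A, y in B?"""
    return sorted((x + y) % n for x in A for y in B) == list(range(n))

print(exact_factorisation(6, [0, 1, 2], [0, 3]))   # True
print(exact_factorisation(6, [0, 1, 2], [0, 2]))   # False: 2 = 2+0 = 0+2

# Proposition 2: the translates A + b for b in B partition the group
parts = [sorted((a + b) % 6 for a in [0, 1, 2]) for b in [0, 3]]
print(parts)   # [[0, 1, 2], [3, 4, 5]] -- a partition of Z_6
```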
These results come close to showing that synchronization and separation are equivalent for groups with regular subgroups; but there is that pesky business about the inverses which I can’t get around in general. But there is a special case where everything works fine. Note that a G-invariant graph must be a Cayley graph for the subgroup H (admitting the right action of H as automorphisms). If any such graph also admits the left action of H, then A being a clique implies that A^{−1} is a clique, and so the gap is closed. This is the case if G contains both the left and right actions of H, which is true in the 2-factor diagonal case. It is also true in the abelian case, since these actions are then the same. But in a group of affine type, the translation group is an abelian regular subgroup; so we recover the above special cases.
It is also the case that diagonal groups with k factors in the socle, for k > 2, are necessarily non-synchronizing. This, it turned out, had already been proved by Pablo Spiga, but the manuscript had been lost, so I had to re-do his arguments myself. I do not want to go through it here, since defining the diagonal groups is a moderately complicated business; but I want to mention a remarkable feature of the proof.
There is a natural graph associated with a diagonal group, which is the union of k overlapping copies of a Hamming graph, or can be regarded as a Hamming graph with some “diagonal edges” added. This is closely related to some work Cheryl Praeger and Csaba Schneider discussed at the Shenzhen conference. It is possible to show that, for a diagonal group with k simple factors T in the socle, where k > 2, this graph has clique number and chromatic number equal to |T|.
For the clique number, this is easy to see; the cliques are visible in the Hamming graphs (which have dimension k−1 over the alphabet T). The trick is to show that the chromatic number is also equal to |T|, by colouring the graph using T as the set of colours. For k even, there is a simple direct rule for the colouring; but for k odd, this requires the truth of the Hall–Paige conjecture, fairly recently proved.
A complete mapping on a group G is a bijection φ from G to itself with the property that the map x→xφ(x) is also a bijection. The latter map, which I will call ψ, is also called an orthomorphism of G.
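For small cyclic groups one can search for complete mappings by brute force (my illustration; for Z_n, the Hall–Paige criterion discussed below says one exists exactly when n is odd, since for even n the Sylow 2-subgroup is cyclic and non-trivial):

```python
from itertools import permutations

def has_complete_mapping(n):
    """Brute force: is there a bijection phi of Z_n such that
    x -> x + phi(x) (mod n) is also a bijection?"""
    for phi in permutations(range(n)):
        if len({(x + phi[x]) % n for x in range(n)}) == n:
            return True
    return False

print([n for n in range(1, 8) if has_complete_mapping(n)])   # [1, 3, 5, 7]
```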
The existence of a complete mapping has the effect of producing a Latin square orthogonal to the Cayley table of G. And now we see the relevance to non-synchronization. For the case of three factors, the graph referred to above is just the Latin square graph associated with the Cayley table of G (the vertices are the cells, and are joined if they are in the same row or column or have the same entry), and a Latin square orthogonal to the Cayley table gives a proper colouring of this graph with |G| colours.
Hall and Paige conjectured that a group has a complete mapping if and only if its Sylow 2-subgroups are not cyclic. They proved the necessity of the condition, and its sufficiency for alternating groups. Wilcox reduced the proof of the conjecture to the case of simple groups, and dealt with groups of Lie type except for the Tits group. Evans did this group and also all the sporadic simple groups except for J_{4}, which was handled by Bray. Thus the conjecture is proved, the only small problem being that John Bray has not yet published his proof.
Groups which are synchronizing but not separating are very rare. We have one infinite family of examples, and one sporadic example, and that is all.
But these recent developments show that such a group must be almost simple. So at least we know where they are hiding!
Given four real polynomials, all of which vanish at the origin, suppose that p_{1}(x) < p_{2}(x) < p_{3}(x) < p_{4}(x) for small negative values of x. What are the possible orderings of the four polynomials for small positive values of x? Perhaps one should think of four marathon runners going into a tunnel; when they emerge, how will their positions have changed?
You might think that this problem should have been solved centuries ago. But apparently it wasn’t; the solution was given by Maxim Kontsevich in 2009.
To save writing, let me give the initial order as 1234. As a warmup, let me note that, in the case of three polynomials, all permutations of the initial order are possible. But for four polynomials, this is not the case; Kontsevich showed that the orders 2413 and 3142 are impossible, while all other orders are possible.
This is not very difficult: Kontsevich proved it on a Paris Métro ticket. You might like to try it yourself.
At this stage, my friends who work on permutation patterns are probably pricking up their ears, noticing that this means that the permutations realised by n polynomials avoid these two 4-element patterns, and possibly even conjecturing that, for general n, any permutation which avoids these two patterns can be realised.
This is indeed the case. It is a very interesting class of permutations: it consists of all those obtained from the permutation 1 by direct sums and skew sums; equivalently, those whose permutation graph is a cograph (I prefer to call these graphs N-free, since they do not contain an induced path of length 3, and they are closely related to the N-free posets, which arise in experimental design as the structures obtained by nesting and crossing). Their occurrence in the polynomial interchange problem is due to Étienne Ghys.
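For small n one can check the counts directly: the permutations avoiding 2413 and 3142 (often called separable permutations) are counted by the large Schröder numbers 1, 2, 6, 22, 90, 394, … A brute-force sketch (mine, not from the book; the helper names are my own):

```python
from itertools import permutations, combinations

def contains(perm, pattern):
    """True if perm contains pattern as a classical pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        order = sorted(range(k), key=lambda i: vals[i])
        norm = [0] * k
        for rank, i in enumerate(order, start=1):
            norm[i] = rank               # relative order of the chosen entries
        if tuple(norm) == pattern:
            return True
    return False

def count_separable(n):
    """Count permutations of 1..n avoiding both 2413 and 3142."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p, (2, 4, 1, 3))
               and not contains(p, (3, 1, 4, 2)))

print([count_separable(n) for n in range(1, 7)])   # [1, 2, 6, 22, 90, 394]
```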
I have just described to you the first 20 pages or so of a remarkable book by Ghys, entitled A singular mathematical promenade. Two truly remarkable things about this book: the author is not afraid of either algebraic geometry or combinatorics, and is happy to mix them together; and he has put the book on the arXiv (number 1612.06373), from where it can be freely downloaded.
There is, of course, much more in the book’s 300 pages than just the above. The main concern is the structure of an algebraic curve in the neighbourhood of a singular point. In such a neighbourhood, the curve consists of a number of smooth branches, so already we have something resembling four curves y = P_{i}(x) passing through the origin. Ghys’ aim is to give a complete description of how the branches of a real algebraic curve can intersect a small circle around a singular point, where the first forbidden pattern occurs for five branches. By analogy with N-free graphs, you may now suspect that certain ternary structures like pentagon-free two-graphs will arise.
Indeed this is so, but I don’t want to give too much away. You should read the book yourself!
But just to continue a little more. One of the great puzzles for historians of Greek mathematics is the mysterious statement by Hipparchus, that the number of propositions which can be formed from 10 simple propositions is 103,049 “on the affirmative side” and 310,952 “on the negative side”. It has been claimed that no sense can be extracted from these figures; but 103,049 is the tenth “small Schröder number”, and these numbers doubled count certain kinds of trees which represent compound propositions formed from ten simple propositions using only the connectives “and” and “or”. (There is an obvious duality given by swapping the two connectives, and the number given counts the dual pairs.)
As Ghys points out,
Most mathematicians, including myself, have a naive idea about Greek mathematics. We believe that it only consists of Geometry, in the spirit of Euclid. The example of the computation by Hipparchus of the tenth Schroeder number may be a hint that the Ancient Greeks had developed a fairly elaborate understanding of combinatorics.
This example is intended to give you some of the flavour of the book. As the title describes, it is a promenade, around a singular point of a plane curve or around some related and attractive areas of mathematics with surprising interconnections.
So go for a little stroll with Étienne Ghys; he is an excellent guide!
To conclude, I am very grateful to my friend Yaokun Wu for drawing my attention to this remarkable book, during my recent stay in Shanghai.
Still catching up on the backlog: last week I was in Edinburgh for a meeting at ICMS for Dugald Macpherson’s birthday. The ICMS has just moved to new premises at the top of the University of Edinburgh’s new Bayes Centre, with fine views of Arthur’s Seat, Salisbury Crags, and beyond them the Firth of Forth widening to the sea. The move is so recent that some technological problems had not been properly ironed out, so there was a certain amount of standing on a stool and pressing switches with a long ruler done by the organisers. The lecture room was a bit on the small side for such a meeting, and not well ventilated. (The picture shows how much useless space was included in the building by the architects, some of which could have been used to make a decent-sized lecture room.) But the space outside, with lots of blackboards as well as tables and chairs, was ideal for discussion, and the ICMS staff were unfailingly helpful and friendly.
Dugald was one of my first doctoral students (the fifth, according to my list), and has been enormously successful in his career since then. The meeting was organised by his students, coauthors, and colleagues, and two things were always abundantly clear: first, the range of exciting mathematics that the speakers had chosen to put before him; and second, the great respect and affection in which he is held by all who have come into contact with him. This was especially evident in the brief vignettes presented at the conference dinner.
One of the particular pleasures of the conference, for me, was meeting many of Dugald’s students, my mathematical grandchildren, not previously known to me. (According to Mathematical Genealogy, Dugald has given me 14 grandchildren; only Tim Penttila, with 17, has more.)
I had also taught Dugald as an undergraduate in Oxford, where his degree was in Mathematics and Philosophy. (This put him at a slight disadvantage in the competition for research studentships, since some straight mathematicians discounted this degree to some extent; but Dugald’s performance as an undergraduate was so strong that the case could not be gainsaid.) The summer after his undergraduate degree, he went to the Northern Isles of Scotland to do chess workshops for schoolchildren there. I suggested to him that he might think about whether the number of orbits on n-sets of a primitive but not highly homogeneous permutation group must grow faster than polynomially. (I thought this would be a good thesis topic.) He came back with a proof that it grows at least fractionally exponentially (roughly like exp(√n)). In the course of his DPhil, he improved this to straight exponential, which examples show is best possible. I will say more about this later.
The title of the conference was “From permutation groups to model theory”, though Dugald’s trajectory wasn’t simply this; he continues to work in both fields, and in particular in the very fertile ground where they meet. For background, here are the three basic facts:
One of the nicest talks in the meeting was by David Evans. There is a very important nexus between the second point above, Ramsey theory, and topological dynamics. David gave a very clear exposition of this, and of recent work in this area.
A Ramsey class is a class of finite structures with the property that, if A and B are structures in the class, then there is a structure C in the class with the property that, if all the copies of A in C are coloured red and blue, then there is a copy of B such that all copies of A within it have the same colour. Jarik Nešetřil, one of the masters of structural Ramsey theory, observed that a Ramsey class has the amalgamation property; hence, by Fraïssé’s theorem, it is the age of a countable homogeneous structure. He proposed a programme to determine the Ramsey classes; first, determine the countable homogeneous relational structures (equivalently, the amalgamation classes); then look at the classification and determine which of these are Ramsey classes. Then came the very striking result of Kechris, Pestov and Todorcevic, asserting that the age of the homogeneous structure M is a Ramsey class if and only if its automorphism group (with the subspace topology from the symmetric group) is extremely amenable: this means that any continuous action of this group on a compact space has a fixed point.
David explained all this and went on to describe recently discovered conditions for the universal minimal flow for Aut(M) to be metrisable, with a proof that these conditions fail for some of the homogeneous structures discovered by Udi Hrushovski some thirty years ago.
Incidentally, I remember learning that in a Ramsey class, the objects had to have no non-trivial automorphisms. The usual way to enforce this is to ensure that a total order is part of the structure, since finite total orders are automorphism-free. I observed that there are other ways to make the objects in such a class rigid. (For example, tournaments can only have automorphism groups of odd order, and certain treelike objects can only have automorphisms of 2-power order.) Could such things be used here? The reason that we must have a linear order becomes very clear from the KPT theorem. For suppose that the age of M is a Ramsey class. Then Aut(M) is extremely amenable. But this group acts on the compact space of all dense linear orderings of the domain of M; so it must fix one of these.
My talk was about the growth rates of the orbit-counting sequences in oligomorphic groups; I was able to get to the recent result of Justine Falque and Nicolas Thiéry that, in the case of polynomial growth, the orbit algebra is Cohen–Macaulay (described here). Pierre Simon had a beautiful update to my talk. As background, I note that, in Dugald’s result about exponential growth, he gave a constant about 1.13, improved later to 1.32 by Francesca Merola. Pierre had improved this to 1.57, not by laboriously improving Francesca’s arguments, but by showing with model-theoretic tools that the case of tournaments (the one for which Francesca’s estimate had slowest growth) could not actually occur in this situation, so we jump up to the next slowest in her list. (The correct value is conjectured to be 2.)
His talk was about a classification theorem, essentially saying that any such structure has growth constant 2, or is a reduct of the dense linear order, or is strictly stable (this means stable but not ω-stable). It was a very nice argument but I can’t really do justice to it.
There were so many other nice things that it is very tempting to let my pen run away with me; but I will restrict myself to just a couple. We learned something, in Dugald’s own talk and in the following talk by Angus Macintyre (two blackboard talks which made up the last morning, after the conference dinner) about one aspect of Dugald’s career progression. One subject to which Dugald made a big contribution was the theory of o-minimal structures, a model-theoretic take on the ordering of the real numbers which is so crucial in analysis. Now dense linear orders provide operands for one class of Jordan groups, on which Dugald worked with Samson Adeleke and Peter Neumann. Another such class consists of certain treelike objects called C-relations. It turns out that these provide a model-theoretic take on the structure of the valuation subrings in a complete valued field. This took Dugald into that area, another where he made very important contributions.
Finally, I will mention the public lecture by Richard Elwes, a student (and now colleague) of Dugald’s. This was held in the old ICMS premises in South College Street. The title was “The unreasonable effectiveness of logic”, and Richard attempted to describe some recent developments in model theory to an audience which contained a fair number of non-logicians (even, perhaps, non-mathematicians). This led to some simplification, where concepts in logic were compressed into memorable slogans. Thus “completeness” becomes “anything which can happen does happen somewhere”, while “compactness” is “anything can happen (unless there is a good reason why not)”, and “saturation” is “everything that can happen happens in the same place”. I will not go on to stability, since I don’t think I understand it well enough. But it was a valiant and entertaining lecture, ending with a list of topics to which model theorists have contributed (not explained to the layman), including the André–Oort conjecture, additive combinatorics, Schanuel’s conjecture, and algebraic dynamics, with more things expected to appear on this list in the future.
In more mathematical terms, each person has a measure on the cake; without loss of generality, the total measure is 1. The procedure satisfies two fairness conditions: each person is guaranteed a piece of measure at least 1/2 in her own measure (proportionality), and neither person values the other’s piece more than her own (envy-freeness).
It is fairly well known that there is an extension which allows n people to share a cake fairly (satisfying the above criteria).
Why am I discussing this in the account of a graph theory conference? Because my choice of the most interesting talk at the conference was Zbigniew Lonc’s talk on this topic.
There are many real-life situations which don’t look like cake-cutting. For example, in a divorce case, maybe a house and a car have to be shared, and half a house or a third of a car are not much use. So it is reasonable to consider a discrete version of the problem, where there are m indivisible goods to be shared among n agents, each of whom has a valuation on each of the goods. This turns the problem into discrete mathematics.
In this case, easy examples show that we cannot satisfy the fairness criteria above. How can they be relaxed? One way is the maximin condition. Each agent considers all partitions of the set of goods into n parts and notes the value of the least valuable part. The maximum of this over all partitions is the best that this agent could expect to get in a “fair division”. An allocation of goods to agents is a maximin share allocation if each agent gets at least this maximin value. Such an allocation may not exist (though failures seem to be fairly rare), and it is not even known whether the problem of deciding its existence lies in NP.
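The maximin value for a single agent can be computed by brute force over all partitions. A minimal sketch (mine; the function name and the instance are illustrative, not from the talk):

```python
from itertools import product

def maximin_share_value(values, n):
    """One agent's maximin value: over all ways of splitting the goods
    into n parts, the best possible value of the worst part."""
    best = 0
    for assignment in product(range(n), repeat=len(values)):
        parts = [sum(v for v, a in zip(values, assignment) if a == i)
                 for i in range(n)]
        best = max(best, min(parts))
    return best

# goods worth 4, 3, 2, 2, 1 to this agent, split into 2 parts:
print(maximin_share_value([4, 3, 2, 2, 1], 2))   # 6, e.g. {4,2} vs {3,2,1}
```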
But now let us add a further complication, which brings the problem into the realm of graph theory. Suppose that the set of goods is the vertex set of a graph, and we require that the share of each agent should be connected. (Suppose that the agents are farmers and the goods are plots of land; each farmer should get a contiguous set of plots.) We can easily extend the maximin share principle to this case (consider only partitions into connected parts). But can it be done? This question has only been answered for rather special graphs and small numbers of agents.
I wanted to ask a question, but couldn’t formulate it quickly enough. Suppose that, instead of insisting on connected parts, we imagine that each agent puts a value on connectedness (perhaps a decreasing function of the number of parts, or an increasing function on the degree of connectedness). How does the problem change? In a divorce case, the parties would not be happy with a division which gave the television to one and the remote to the other, but they could probably live with it.
The conference was in Rytro, a village more than two hours from Krakow in the mountains, with swimming pool, etc., and a ski lift next door. Brief comments on some of the other talks follow.
There were several talks on the distinguishing index of a graph, the least number of colours required to colour the edges so as to kill all automorphisms. I hope it wasn’t the case that they invited me on the strength of my survey paper with Robert Bailey on similar things (I didn’t realise that I should worry about this until too late, since my talk was the first of the conference.) This number is very small, even if you allow the automorphisms the possibility of permuting the colour classes rather than fixing them.
There were also a number of talks on “facial total colourings” of graphs embedded in surfaces, like the more usual colourings except that edges are only required to have different colours if they are consecutive in a face. There are variants where edges at most k steps apart must get different colours, or where you colour incidences rather than vertices and edges.
Sandi Klavžar talked about the “strong geodetic number” of a graph, a new topic where you ask for a set of vertices and a choice of shortest paths between all pairs in the set so that every vertex of the graph is covered. All published papers on this are 2018 or later. I asked at the end what happens if you want to bound the load on any vertex (the number of paths passing through it), which seems a reasonable condition for a network. He said that is the next thing they are going to look at.
Nikolay Abrosimov gave a talk which was not graph theory, but which I enjoyed. He was talking about the volume of a polyhedron in hyperbolic space, in terms of its side lengths. Once a popular topic (both Lobachevsky and Thurston considered it), but perhaps a bit unfashionable now. Anyway, he had completely solved the problem for symmetric antiprisms.
Akira Saito gave a most enjoyable talk. He was considering two famous results about existence of Hamiltonian cycles, by Ore and by Moon and Moser for bipartite graphs. Can you deduce Ore from Moon and Moser? No, there is a gap of one in the required inequalities. But perhaps you can find all examples attaining the Moon–Moser bound, and find a workaround for these. This can be done, and a lot of interesting arguments have to be deployed to do it.
Marthe Bonamy, the last plenary speaker, gave a thought-provoking talk. It was a pity that many people had already left. It was a graph colouring problem in which the traditionally easiest graphs are the hardest, while a lot of traditionally hard graphs are very easy.
She imagines that each node has unlimited computing power but can only communicate with its neighbours, so has a very local view of the graph. Clearly a path of length n is going to take at least n steps to agree a colouring. But given any graph, if you add a vertex joined to every other vertex, this vertex will see the whole graph in a small finite number of time steps, and can then use its unlimited computing power to find a proper colouring, and then order the others to use it. One of the difficulties (clear even in the path case) is conflict resolution; she talked about this but I am not sure I followed it.
The conference excursion was to the top of Przehyba, an 1175-metre mountain which involved a hike of more than 20 kilometres. With a party of 27 walkers and a guide, of course we didn’t go very fast, and there was plenty of time to view the many different kinds of fungi that grew near the park. By contrast, apart from flies and wasps, there was very little wildlife to be seen: two butterflies, a small passerine bird I couldn’t identify, and a pigeon, and on the way down a pair of crows and a dead mole. From the lunch stop at the restaurant on top of the mountain (a feature we don’t have in Scotland!), we could see the much higher Tatra Mountains in the distance.
I stayed in Balliol College’s new Jowett Walk Building, just around the corner from Holywell Manor, the Balliol – St Anne’s graduate centre where I stayed as a graduate student nearly 50 years ago. So I was able to walk through the University Parks to the Mathematical Institute as I did back then, although the Institute itself has moved to a new site where the Radcliffe Infirmary once stood.
As usual, I can only highlight a few talks. On the first morning, Bob Guralnick told us about the discovery of some huge first cohomology groups of finite groups which led to the demise of Tim Wall’s conjecture (that the number of maximal subgroups of a finite group did not exceed the order of the group). Actually my reading of the write-up of this suggests that Wall’s conjecture is very nearly true, and the number of maximal subgroups of a group of order n may be bounded by n^{1+ε} for some small ε.
Martin Bridson started his talk with a little gem. Consider the three groups with presentations as follows:
Then he said, one of these groups is infinite with infinitely many finite quotients; one is infinite with no non-trivial finite quotients, and one is trivial. He took a vote on which is which, which was indecisive; this reinforced his point that a presentation is a very bad way to convey information about a group.
I happened to know the answer. G_{2} clearly has many finite quotients, since all we need is a group with an element conjugate to its square. (In fact this is the Baumslag–Solitar group.) I recognised G_{4} as a group used by Graham Higman precisely because it is infinite with no non-trivial finite quotients (a pleasant exercise), and I had once used it for a similar purpose. Therefore G_{3} must be trivial. (Is this a proof of that fact??)
He went on to speak about what information the set of finite quotients of a group (in other words, its profinite completion) tells about the group (assuming it to be residually finite, unlike Higman’s group, which is not distinguished from the trivial group by its finite quotients).
On the second day, John Thompson talked about projective planes. He had some difficulty in writing on the board, but he spoke completely clearly. He presented us with an outrageous conjecture, designed to challenge us to solve the extremely difficult problem of finding the set of orders of finite projective planes. His conjecture was that this set is equal to the set of orders of direct powers of finite simple groups, both abelian (which gives the familiar prime powers) and non-abelian (so the first test case would be 60). In the title of the talk he mentioned the Hall–Paige conjecture; he ran out of time to tell us about this conjecture and its proof, but did so in answer to a question.
On Wednesday, Alex Lubotzky started his talk by saying, “I should thank the organisers for inviting me. But I found out later from Martin Liebeck that I was an organiser.” Indeed, organisation was very low-key. There was no paper version of the programme or abstracts, or even paper to write on; and registration consisted of participants sorting through a random heap of name badges on a table to find their own. (I am sufficiently old-fashioned that I like having at least a printed programme. Indeed, when Martin asked me to chair a session, I said that my fee was a hard copy of the programme.)
Alex proceeded to give a beautiful talk, one of the highlights of the conference. He used the phrase “Finite to infinite”. This reminded me of an evening in a restaurant in Budapest, where Laci Babai and I planned a conference with the title “Group Theory: Finite to Infinite”; it was held in a hotel near Pisa, where there was much discussion of profinite groups, and where we shared the hotel with a football team (I don’t now recall which one) whose manager was constantly on the phone trading players and doing other deals.
There was much in Alex’s talk, but here is just one thing, which I had heard about but not internalised before. This is the notion of a sofic group, one which can be approximated by finite symmetric groups with normalised Hamming distance. (This means that there are near-homomorphisms from G to these groups, where the images of any non-identity element don’t get arbitrarily close to the identity.) Of course, every finite group can be embedded into a finite symmetric group; but infinite groups are much wilder. Nevertheless, it is currently open whether every group is sofic. If so, this would be an extraordinary statement of the power of the innocent-looking group axioms; so, if I had to bet, I would put money on the existence of non-sofic groups. But it appears to be a very hard question.
I can’t fail to mention Goulnara Arzhantseva’s construction of expander graphs where the diameter to girth ratio is bounded, from high-dimensional linear groups.
Nor can I fail to mention Persi Diaconis, who was doing random walks on the irreducible characters of finite or compact groups, in a way which recalled the McKay correspondence. You pick a fixed irreducible χ; if you are currently at a character φ, you tensor φ with χ, decompose the result into irreducibles, and pick an irreducible with probability proportional to the product of its degree and its multiplicity. Many familiar and unfamiliar random walks arise. For example, for SU(2), you get the usual random walk on the integers conditioned on never going negative. He also developed a modular version of the same thing.
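For SU(2) the walk is easy to sketch: by the Clebsch–Gordan rule, tensoring the d-dimensional irreducible with the standard 2-dimensional one decomposes into the (d−1)- and (d+1)-dimensional irreducibles, each with multiplicity 1, so the walk on dimensions steps to d±1 with probability proportional to the dimension. A toy simulation (my own illustration, not from the talk):

```python
import random

def su2_step(d, rng):
    """One step of the walk on irreducibles of SU(2), indexed by dimension d.
    Tensoring with the standard 2-dimensional irreducible gives the
    (d-1)- and (d+1)-dimensional ones; pick one with probability
    proportional to its dimension (all multiplicities here are 1)."""
    if d == 1:
        return 2        # the trivial representation tensors to the standard one
    return rng.choices([d - 1, d + 1], weights=[d - 1, d + 1])[0]

rng = random.Random(0)
trajectory = [1]
for _ in range(500):
    trajectory.append(su2_step(trajectory[-1], rng))
# the walk moves by ±1 at each step and never reaches 0:
# a walk on the positive integers conditioned to stay positive
```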
Cheryl Praeger talked about finding combinatorial structures whose automorphism groups are maximal subgroups of finite symmetric groups. For intransitive groups, you take a subset; for imprimitive groups, a partition (a set of subsets, pairwise disjoint and covering all points); for primitive non-basic groups, a Cartesian structure (which, as Laci Kovács pointed out, can be regarded as a set of partitions satisfying appropriate conditions). For affine groups, you take the affine space. What about diagonal groups with more than two factors? Cheryl described these as collections of Cartesian structures. It seems to me, at least for three factors, that they can be described as Latin squares, essentially the Cayley tables of the factors. (Most naturally, the diagonal group preserves the set of triples whose product is the identity.) This story is not yet finished.
Katrin Tent showed us the construction of infinite “non-split” sharply 2-transitive groups, at least for characteristic 0 or 2. For finite odd characteristic, the problem is more difficult. But she claims that the methods they have used to handle this case may actually lead to substantial advances on the Burnside problem, at least for odd exponent. Again, the story is not finished.
Colva Roney-Dougal (the latest of my former students to be made a Professor) opened the proceedings on Friday with a talk entitled “Random subgroups of finite symmetric groups”. The main thrust is the new results she and Gareth Tracey have almost finished on this problem where you choose from the uniform distribution on subgroups, but she began with a beautiful summary on other methods such as random generation. Yet another not-quite-finished story; the precise result is not available yet and is eagerly awaited.
Colva also revealed that Friday happened to be Cheryl’s birthday.
The conference was concluded by a lovely talk from Ben Green. He had proved this theorem as a result of a question from Yufei Zhao, but he didn’t tell us about that. His result says that symmetric subsets of the unit sphere (that is, orbits of a finite group) in high dimensions are almost flat, in the sense that they lie on a thin slice through the centre of the sphere (precisely, there is a unit vector whose inner products with vectors in the symmetric set are in absolute value bounded by c/√(log d) in dimension d). He started by “cooking” his own theorem by pointing out that this is not as surprising as it seems. The vertices of the cube, for example, normalised to lie on the unit sphere, have all coordinates ±1/√d, and so are almost orthogonal to a unit basis vector.
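The cube example can be checked numerically in a few lines (the dimension 400 is an arbitrary choice of mine):

```python
import math
import random

d = 400
rng = random.Random(0)
# a random vertex of the cube {-1, 1}^d, normalised onto the unit sphere
v = [rng.choice([-1.0, 1.0]) / math.sqrt(d) for _ in range(d)]

norm = math.sqrt(sum(x * x for x in v))
inner_with_e1 = v[0]        # inner product with the basis vector e1

assert abs(norm - 1.0) < 1e-9
# the vertex is nearly orthogonal to e1: the inner product is ±1/√d = ±0.05
assert abs(inner_with_e1) == 1 / math.sqrt(d)
```

So every cube vertex has inner product only 1/√d with a basis vector, comfortably below the c/√(log d) slice that the theorem guarantees in general.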
The theorem itself used a quantified version of the theorem of Jordan, according to which a finite subgroup of the unitary group U(d) has an abelian subgroup bounded by a function of d. The quantification was due to Michael Collins (I believe Boris Weisfeiler also worked on this problem but his results were unpublished). Anyway, Collins’ result was not quite strong enough to use as a black box by Green, so he had to open the box and re-do some of the arguments. Not a straightforward task, and a number of ideas from probability theory also crept in.
This is just to report a discovery. There is now an official “Oxford Mathematical Alphabet”. I have lost count of the number of lectures I heard in Oxford which began “Let G be a group.” But not any more, it seems. In the Oxford mathematical alphabet, G is for … Growth Tensor.
It stands for “quantum”, and it is fashionable now to produce quantum versions of everything from chromatic number of a graph to the symmetric group.
Related to this, q occurs as a “deformation parameter” in quantised versions of standard algebraic structures. That is, there is an algebra depending on a parameter q, and as q tends to 1, the algebra becomes a “classical algebra”. There can be real advantages to doing this. It may be that the representation theory of the classical algebra is difficult, but that of the deformed algebra is easier, and we can get some information about the classical case by taking a limit.
It is also quite common to use q as a formal parameter in certain power series counting objects of interest. For example, consider the problem of counting lattice paths from the origin to the point (m,n), where m and n are non-negative integers. Each step in the path must be a unit step in either the easterly or the northerly direction. We have to take altogether m+n steps, of which m must be easterly and n northerly; these steps can be taken in any order, so the number of paths is the binomial coefficient, which I shall write as Bin(m+n,m). But now let us refine the count. Each such path, together with the X-axis and the line x = m, encloses a certain area A (between 0 and mn inclusive). Now if, instead of counting 1 for each path, we count q^{A} for a path enclosing area A, the sum we get is the Gaussian coefficient Gauss(m+n,m)_{q}. This is calculated by replacing each integer k in the expression for the binomial coefficient by the “q-integer” q^{k}−1. Since the number of factors in numerator and denominator is the same, an application of l’Hôpital’s rule shows that the Gaussian coefficient tends to the binomial as q→1.
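Both descriptions of the Gaussian coefficient are easy to check by machine for small parameters. Here is a sketch; the area bookkeeping uses the convention that a north step taken at abscissa x contributes m−x to the enclosed area:

```python
from itertools import permutations
from math import prod

def gauss_by_area(m, n, q):
    """Sum q^A over all lattice paths from (0,0) to (m,n) built from unit
    east/north steps, where A is the area enclosed by the path, the
    x-axis and the line x = m."""
    total = 0
    for path in set(permutations('E' * m + 'N' * n)):
        x, area = 0, 0
        for step in path:
            if step == 'E':
                x += 1
            else:           # a north step at abscissa x adds m - x to the area
                area += m - x
        total += q ** area
    return total

def gauss_by_formula(m, n, q):
    """Gaussian coefficient Gauss(m+n, m)_q: replace each integer k in the
    binomial coefficient (m+n)! / (m! n!) by q^k - 1."""
    f = lambda ks: prod(q ** k - 1 for k in ks)
    return f(range(m + 1, m + n + 1)) // f(range(1, n + 1))

# the two computations agree; e.g. Gauss(5, 2)_2 = 155
```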
Finally, the number of elements in a finite field (which is a prime power) is universally referred to as q. This is so ingrained that the finite fields community call their annual conferences F_{q}, a standard notation for a field of q elements.
Now something unexpected and wonderful holds. The binomial coefficient Bin(n,k) counts the number of k-element subsets of a set of size n; and the Gaussian coefficient Gauss(n,k)_{q} counts the number of k-dimensional subspaces of an n-dimensional vector space over the field F_{q}.
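This too can be verified by brute force over F_2 (my own sketch, encoding vectors as bitmasks so that XOR is vector addition):

```python
from itertools import combinations

def span_f2(vectors):
    """Span over F_2 of vectors encoded as bitmasks (XOR = vector addition)."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}
    return frozenset(s)

def count_subspaces_f2(n, k):
    """Brute-force count of the k-dimensional subspaces of F_2^n."""
    subs = set()
    for vs in combinations(range(1, 2 ** n), k):
        sp = span_f2(vs)
        if len(sp) == 2 ** k:       # the chosen vectors were independent
            subs.add(sp)
    return len(subs)

# Gauss(4, 2)_2 = (2^4 - 1)(2^3 - 1) / ((2^2 - 1)(2 - 1)) = 35,
# and indeed F_2^4 has exactly 35 two-dimensional subspaces
```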
There are many analogies and parallels between the combinatorics of subsets of a set and the combinatorics (or projective geometry) of subspaces of a vector space, especially over a finite field. This leads us to sometimes describe the former as “geometry over the field with one element”, as I have discussed on another occasion.
This long introduction finally brings me to my topic, a paper entitled “Defining the q-analogue of a matroid” by Relinde Jurrius and Ruud Pellikaan, which appeared in the Electronic Journal of Combinatorics last month.
Matroids are combinatorial structures which describe many kinds of “independence”: linear independence in a vector space; algebraic independence over a ground field in an algebraically closed field; forests in a graph; partial transversals of a set system; affine independence in an affine space; and so on. They were introduced by Whitney, and their theory was developed by Tutte, Rota, Welsh, and many other mathematicians. Take a look at the Matroid Union blog to learn more.
Jurrius and Pellikaan are not producing a quantum version of matroid theory. Rather, as in the last case above, they are defining a structure over a field with q elements to be the analogue of a matroid in the usual sense over the “field with one element”. More precisely, they define a q-matroid to be a finite-dimensional vector space E over F_{q}, together with a rank function r from the set of subspaces of E to the non-negative integers, satisfying exactly the same conditions as for a matroid, namely, for all subspaces A and B of E: r(A) ≤ dim(A); if A ⊆ B then r(A) ≤ r(B); and r(A+B) + r(A∩B) ≤ r(A) + r(B).
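The rank axioms (boundedness, monotonicity, and submodularity with sum and intersection of subspaces in place of union and intersection of sets) can be checked mechanically on a toy example. Here is a sketch verifying them for the “uniform” q-matroid of rank 2 on F_2^3 — my own illustration, not an example from the paper:

```python
from itertools import combinations

def span_f2(vectors):
    """Span over F_2, vectors as bitmasks (XOR is vector addition)."""
    s = {0}
    for v in vectors:
        s |= {x ^ v for x in s}
    return frozenset(s)

def all_subspaces(n):
    """All subspaces of F_2^n, each as a frozenset of bitmasks."""
    subs = {frozenset({0})}
    for size in range(1, n + 1):
        for vs in combinations(range(1, 2 ** n), size):
            subs.add(span_f2(vs))
    return subs

def dim(s):
    return len(s).bit_length() - 1      # a subspace has 2^dim elements

# toy q-matroid on E = F_2^3: the "uniform" rank function min(dim A, 2)
r = lambda A: min(dim(A), 2)

subs = all_subspaces(3)
for A in subs:
    assert 0 <= r(A) <= dim(A)                          # boundedness
    for B in subs:
        if A <= B:
            assert r(A) <= r(B)                         # monotonicity
        sum_AB = span_f2(A | B)                         # A + B
        meet_AB = A & B                                 # A ∩ B
        assert r(sum_AB) + r(meet_AB) <= r(A) + r(B)    # submodularity
```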
Of course, matroids can be defined in many other ways: in terms of their independent sets, bases, circuits, and so on. Something a little unexpected emerges. For the analogue of independent sets, for example, we have a collection of subspaces called independent spaces, satisfying exact analogues of the independent-sets axioms for matroids, but with an extra axiom, which cannot be dispensed with. It states that if A,B are subspaces, and I,J are maximal independent subspaces of A,B respectively, then there is a maximal independent subspace of A+B which is contained in I+J.
About halfway through the paper comes the motivation for studying q-matroids. Just as ordinary (representable) matroids arise from (linear) codes, it turns out that q-matroids arise from “rank metric codes”, structures using the rank of the difference between matrices as a measure of how far apart they are.
Right at the end of the paper, they mention that the notion of a quantum matroid has been defined by Paul Terwilliger; it is closely related to that of q-matroid. There is also related work by Henry Crapo.
The paper contains many research topics. One of these concerns the Tutte polynomial. How should it be defined in such a way that Greene’s link between Tutte polynomial of a matroid and weight enumerator of the corresponding linear code has an analogue for q-matroids and rank metric codes?
The conference was in Będlewo Castle, once a private residence for a noble family which had more than its share of scandal, now a mathematics conference centre something along the lines of Oberwolfach, with nearby woods for walking, and in a village with a stork’s nest (now inhabited by house martins) on top of the church tower.
I have been to several statistics conferences before, but this is the first time I have ever spoken at one. And more: I organised a session on Combinatorics with applications to experimental designs. I will say a little bit about this later; but here is my team.
It was a very enjoyable conference (or would have been, had I not had so many urgent jobs hanging over my head, including two MSc dissertations to read which arrived while I was at the conference). On Tuesday there was a special session for Tadeusz Caliński’s 90th birthday. His career has been truly remarkable. He is responsible for the close links that have existed for more than 50 years between Polish statisticians and those in many other countries including the UK, France, Italy and the Netherlands. He managed to travel to France even after the declaration of martial law in Poland, because the collaboration had been agreed at high level between the French and Polish governments. (But a couple of years later he was refused permission to travel to Canada, ostensibly for some stupid reason which I do not recall, but actually because he was not a party member but, worse, a supporter of Solidarity.) Many friends he met in the 1960s on his early travels to the west are now dead, but many of those who are still alive had come to the conference to honour him, including Gavin Ross and Rob Verdooren. (But, on the other hand, there were many talented young people there.)
On Tuesday evening there was a concert of music by a string quartet, slightly marred by the fact that people were unsure about when each piece was finished. But the musicians did a good job, and we were regaled with stories about the composers.
On Wednesday there was a choice of excursions, to Kórnik Castle, or to walk in Wielkopolska National Park – I chose the latter. The park was very dry because of the unusual weather, and we were told that the lakes were somewhat polluted (indeed, we saw a machine whose purpose was to help oxygenate the lake water). After the walk they had put out an impressive spread of food for us, but we didn’t fall on it like locusts, since we knew that there would be a barbecue when we got back to Będlewo. The barbecue, as well as huge amounts of food, featured entertainment by a group of musicians in some kind of traditional dress playing violins and bagpipes.
Thursday was the Conference dinner, and presentation of prizes to the young researchers. A pianist entertained us during the dinner.
I will mention just two talks, which for me were the highlights of the meeting.
Friedrich Pukelsheim, from Augsburg, was a statistician who wrote a book on optimal design. He switched fields, and is now a sociologist, or political scientist, who looks at electoral systems (with a mathematician’s eye).
Many legislatures have, in some form, a principle known as degressive proportionality: the higher the population of a region, the larger the ratio of electors to representatives. But the European parliament is unusual in having this as a formally stated law. Unfortunately, the last election failed to meet this requirement precisely. Friedrich was on a small committee of mathematicians charged with ensuring that the principle is complied with in future. The European parliament is also unusual in trusting mathematicians to do this job, though they don’t trust them enough to accept a formula for the allocation of seats to countries, so the process will have to be repeated for the following election. Friedrich gave us an analysis of the case of Catalonia, where the legal requirements are somewhat contradictory, and in any case are based on population figures from the 1970s. He has written a book, which he recommended as a present for our loved ones.
I have worked a bit on optimality criteria for block designs; the most commonly used of these can be expressed in terms of the Laplacian eigenvalues of the concurrence graph of the design. Thus, A- and D-optimality maximise the harmonic and geometric means of the nontrivial eigenvalues, while E-optimality maximises the smallest eigenvalue.
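These criteria are easy to compute once the Laplacian spectrum is known; here is a quick sketch (the complete-graph example is my own illustration):

```python
from math import prod

def criteria(laplacian_eigenvalues):
    """A-, D- and E-optimality criteria from the nontrivial Laplacian
    eigenvalues of a concurrence graph: the single zero eigenvalue of a
    connected graph is discarded first."""
    eig = sorted(laplacian_eigenvalues)[1:]
    k = len(eig)
    return {
        "A": k / sum(1 / x for x in eig),       # harmonic mean
        "D": prod(eig) ** (1 / k),              # geometric mean
        "E": eig[0],                            # smallest eigenvalue
    }

# complete graph K_4: Laplacian spectrum 0, 4, 4, 4 — all three criteria
# coincide, since every nontrivial eigenvalue is the same
c = criteria([0, 4, 4, 4])
```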
In a nice but fast-paced talk, young researcher Samuel Rosa used graphs in a different way. We are comparing n different treatments, with no blocking factors, and we have a collection of pairs of treatments whose comparisons we are most interested in. A classical case involves a comparison of just two treatments; we should allocate experimental units as equally as possible to the two treatments.
In the general case, the interesting comparisons form the edge set of a graph; our task is to allocate weights to the treatments, giving the proportion of experimental units which should be allocated to them, in an optimal way. (Here, unlike the former case, the graph is given in advance.) It turns out that, again, the optimal allocations are determined by the Laplacian matrix of the graph. If I got the details right, assuming that the Laplacian has rank n−1, then the D-optimal assignment is uniform; the A-optimal weights are proportional to the square roots of the vertex degrees; and the E-optimal weights are proportional to the vertex degrees.
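Here is a sketch of that rule as I noted it down (so treat the details with caution); the comparison graph is made up for illustration:

```python
from math import sqrt

def optimal_weights(edges, n):
    """Candidate optimal treatment weights from the graph of interesting
    comparisons, following the rule as I recorded it: D-optimal weights
    uniform, A-optimal proportional to sqrt(degree), E-optimal
    proportional to degree."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    normalise = lambda w: [x / sum(w) for x in w]
    return {
        "D": [1 / n] * n,
        "A": normalise([sqrt(d) for d in deg]),
        "E": normalise([float(d) for d in deg]),
    }

# made-up comparison graph on 4 treatments: a star on vertex 0 plus edge {1,2}
w = optimal_weights([(0, 1), (0, 2), (0, 3), (1, 2)], 4)
# under the E rule, treatment 0 (degree 3) gets weight 3/8, treatment 3 gets 1/8
```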
Quite a few statisticians use Jordan algebras in their work. So I put my neck on the block and talked about the notion of Jordan schemes (a generalisation of association schemes). However, from some of the other speakers, I realised that there was a mismatch between the statisticians’ notion of Jordan algebra and mine. After a robust discussion with Roman Zmyślony, things became a bit clearer to me, and I offered to write up some notes on this. I will report when I have done the job.
I had a bit of trouble getting the Polish ogonek on the first e in Będlewo in my slides. The solution I finally used was a package optimised for Computer Modern and Times Roman, but even the package authors agree it is not entirely satisfactory for other fonts.