I have spoken about the result at the Combinatorics Study Group in Queen Mary and at the St Andrews Research Day. A paper containing the complete proof of the theorem is now on the arXiv.
Let Γ be a finite graph; I will always assume it to be connected and regular with valency k. Let Ω be its vertex set.
A partition Π = {S_{1},…,S_{r}} of Ω is said to be equitable if there is an r×r matrix M (the quotient matrix of the equitable partition) such that every vertex in S_{i} has exactly m_{ij} neighbours in S_{j}.
Equitable partitions of graphs include a large number of structures of interest
in finite geometry and combinatorics. Here are a few examples.
We call a subset S of Ω perfect if the partition into S and its complement is equitable.
Theorem: The spectrum of the quotient matrix of an equitable partition is contained in the spectrum of the adjacency matrix A(Γ) of Γ.
To see this, let V be the real vector space of real functions on Ω, and let v_{i} be the characteristic function of S_{i}. Then from the definition we get
v_{i}A(Γ) = ∑_{j} m_{ji}v_{j},
so the subspace spanned by the vectors v_{i} is invariant under A(Γ), and the matrix of the restriction relative to the given basis is the quotient matrix of the equitable partition.
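The invariance argument is easy to check by machine. Here is a minimal pure-Python sketch; the choice of example (the 6-cycle with the distance partition from a vertex) is mine, not anything from the theorem itself.

```python
# Check the key identity v_i A = sum_j m_{ji} v_j on a small example:
# the 6-cycle C6 with the distance partition from vertex 0.

n = 6
adj = [[1 if abs(i - j) in (1, n - 1) else 0 for j in range(n)]
       for i in range(n)]
parts = [[0], [1, 5], [2, 4], [3]]          # distance partition from 0
r = len(parts)

# quotient matrix: m[i][j] = neighbours in part j of a vertex in part i
m = [[sum(adj[parts[i][0]][y] for y in parts[j]) for j in range(r)]
     for i in range(r)]

# equitability: the count must not depend on the chosen vertex of part i
for i in range(r):
    for x in parts[i]:
        for j in range(r):
            assert sum(adj[x][y] for y in parts[j]) == m[i][j]

def char_vec(part):
    """Characteristic vector of a part, as a 0/1 list."""
    return [1 if x in part else 0 for x in range(n)]

# verify v_i A = sum_j m[j][i] v_j componentwise, in exact arithmetic
for i in range(r):
    vi = char_vec(parts[i])
    viA = [sum(vi[y] * adj[y][x] for y in range(n)) for x in range(n)]
    rhs = [sum(m[j][i] * char_vec(parts[j])[x] for j in range(r))
           for x in range(n)]
    assert viA == rhs

print("quotient matrix:", m)
```

The spectrum of this 4×4 quotient matrix is {2, 1, −1, −2}, one copy of each eigenvalue of C_{6}, as the distance-regular case described below predicts.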
In particular, in the case of a distance-regular graph, the quotient matrix M of the distance partition has each eigenvalue of A(Γ) with multiplicity 1.
Since Γ is regular and connected, its valency k is a simple eigenvalue of A(Γ). Also, the matrix M of the equitable partition has all row sums equal to k; so k is an eigenvalue of M, called the principal eigenvalue.
If S is a perfect set such that the matrix of the partition {S,Ω\S} has eigenvalues k and μ, then we call S a μ-perfect set.
Here I am concerned with equitable partitions with just one non-principal eigenvalue. These of course include all such partitions with just two parts. In general, the partition Π has all non-principal eigenvalues equal to μ if and only if the characteristic vectors of its parts all lie in the sum of the eigenspaces with eigenvalues k and μ of A(Γ). From this fact, we get the following results, which reduce the problem of their classification to that of the minimal μ-perfect sets:
Proposition: Let Π = {S_{1},…,S_{r}} be a partition of the vertex set of the connected regular graph Γ.
We call such partitions Π μ-equitable.
A Latin square L is an n×n array with entries from the set {1,…,n}, such that each letter occurs once in each row and once in each column.
Latin squares are important in combinatorics, since they are highly regular structures (giving rise to strongly regular graphs, as we will see) but exist in great profusion (the number of Latin squares of order n grows faster than the exponential of n^{2}).
Given a Latin square L, the corresponding Latin square graph Γ has as vertices the n^{2} cells of L, two cells being adjacent if they lie in the same row or the same column or contain the same letter.
It is well known that this graph is a strongly regular graph (a distance-regular graph with diameter 2), which has valency 3(n−1), and whose other eigenvalues are n−3 (with multiplicity 3(n−1)) and −3 (with multiplicity (n−1)(n−2)).
Proposition: In a Latin square graph, the set of cells in any row (or in any column, or containing any letter) is (n−3)-perfect.
For a cell in a row is joined to n−1 other cells in that row and 2(n−1) cells outside; while a cell outside a row is joined to two cells in the row (one in the same column, and one with the same letter) and 3n−5 cells outside. So the partition is equitable, with matrix having eigenvalues 3(n−1) and n−3. It follows that any partition, each of whose parts is a union of rows (or columns, or letters), is equitable, with all non-trivial eigenvalues equal to n−3.
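This computation can be checked directly. The sketch below is my own test harness (using the cyclic square of order 5, with my own helper names); it verifies the neighbour counts and the eigenvalues of the 2×2 quotient matrix.

```python
# Verify the row-partition counts for a Latin square graph, and the
# eigenvalues 3(n-1) and n-3 of the 2x2 quotient matrix.

def latin_square_graph(L):
    """Cells of L, and the adjacency test of the Latin square graph."""
    n = len(L)
    cells = [(i, j) for i in range(n) for j in range(n)]
    def adjacent(c, d):
        (i, j), (k, l) = c, d
        return c != d and (i == k or j == l or L[i][j] == L[k][l])
    return cells, adjacent

n = 5
L = [[(i + j) % n for j in range(n)] for i in range(n)]   # cyclic square
cells, adjacent = latin_square_graph(L)

row0 = [c for c in cells if c[0] == 0]
inside = {c: sum(adjacent(c, d) for d in row0) for c in cells}

# every cell of the row has n-1 neighbours in it; every other cell has 2
assert all(inside[c] == n - 1 for c in row0)
assert all(inside[c] == 2 for c in cells if c not in row0)

# quotient matrix of {row, complement}, and its two eigenvalues
M = [[n - 1, 2 * (n - 1)], [2, 3 * n - 5]]
tr = M[0][0] + M[1][1]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
disc = (tr * tr - 4 * det) ** 0.5
eigs = sorted([(tr - disc) / 2, (tr + disc) / 2])
assert eigs == [n - 3, 3 * (n - 1)]
print("eigenvalues:", eigs)
```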
Are there any others?
At the International Workshop on Bannai–Ito Theory in Hangzhou, China, in November 2017, Sergey Goryainov talked about the following result that he and Alexander Gavrilyuk had proved:
Theorem: Let L be the Cayley table of an elementary abelian 2-group G of order n. Then a minimal (n−3)-perfect set in the Latin square graph is a row, a column, a letter, or the set of n^{2}/4 cells corresponding to a subgroup of G of index 2.
Rosemary and I decided to have a go at generalising this result … Eventually the gang of four came up with a complete solution.
We found a couple more constructions.
Corner sets: Let L be the Cayley table of the cyclic group of order n, which we regard as being the set of integers modulo n. A typical corner set looks like this:
A fairly simple calculation shows that a corner set is (n−3)-perfect.
We will also call a set obtained from a corner set as above by permuting rows,
columns or letters a corner set.
Inflation: Let L_{0} be a Latin square of order s. Construct an n×n array, where n = st, by replacing each entry of L_{0} by a Latin square of order t, where the squares which replace a given letter all use the same alphabet of size t, and the alphabets associated with different letters of L_{0} are pairwise disjoint. The resulting array is a Latin square L of order n, called an inflation of L_{0}.
A short calculation shows that if S_{0} is a set of cells of L_{0} which is (s−3)-perfect, then the set of |S_{0}|t^{2} cells of L in the corresponding subsquares is (n−3)-perfect. We call this set an inflation of S_{0}.
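Here is a quick sketch of the inflation construction, under the simplifying assumption (permitted by the definition) that the same order-t square is used in every position; the function names are mine.

```python
# A sketch of inflation: entry L0[i][j] of the order-s square is
# replaced by an order-t square on the alphabet {L0[i][j]*t, ...,
# L0[i][j]*t + t - 1}, so distinct letters get disjoint alphabets.

def cyclic(n):
    """Cayley table of the cyclic group of order n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def inflate(L0, t):
    s = len(L0)
    M = cyclic(t)            # the same order-t square throughout
    n = s * t
    return [[L0[i // t][j // t] * t + M[i % t][j % t]
             for j in range(n)] for i in range(n)]

def is_latin(L):
    n = len(L)
    sym = set(range(n))
    return (all(set(row) == sym for row in L) and
            all({L[i][j] for i in range(n)} == sym for j in range(n)))

L = inflate(cyclic(3), 2)    # a Latin square of order 6
assert is_latin(L)
```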
Again, we use the same term for any set obtained by permuting rows, columns and letters.
In particular, in the unique Latin square of order 2, any single cell is a corner set; inflating this we see that, in a Latin square of even order n, any subsquare of order n/2 is an inflation of a corner set, and so is (n−3)-perfect.
Now we can state the main theorem:
Theorem: A minimal (n−3)-perfect subset of the Latin square graph of a square of order n is a row, a column, a letter, or an inflation of a corner set in a cyclic square.
The proof of this is quite complicated and I will not attempt even a sketch here. Read it on the arXiv if you are interested!
The other non-principal eigenvalue of a Latin square graph is −3. Can we say anything about −3-perfect sets? Such a set S has the property that it meets any row, column or letter in a constant number s of cells, and its cardinality is sn.
In particular, with s = 1, such a set is a transversal, a set containing one cell from each row, column or letter. One of the oldest conjectures about Latin squares is Ryser’s conjecture, asserting that any Latin square of odd order has a transversal. Many squares of even order have one too, but some do not (for example, the Cayley table of the cyclic group). The conjecture is still open despite a lot of work, so characterising such sets is unlikely to be achieved soon!
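The s = 1 case is easy to illustrate for the cyclic square of odd order, where the main diagonal is already a transversal; this is a sketch with my own naming, not anything from the classification.

```python
# For the cyclic square of odd order n, the diagonal is a transversal:
# cell (i, i) carries letter 2i mod n, and 2 is invertible mod odd n.

n = 7
L = [[(i + j) % n for j in range(n)] for i in range(n)]
diag = [(i, i) for i in range(n)]

rows = {i for i, j in diag}
cols = {j for i, j in diag}
letters = {L[i][j] for i, j in diag}
assert rows == cols == letters == set(range(n))   # one cell of each

# for even n this fails: the letters 2i mod n hit only the even residues,
# and in fact no transversal at all exists in the cyclic square then
```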
We see also that, if an equitable partition Π has all non-principal eigenvalues −3, then all its parts satisfy the above condition, so that rows, columns, letters, and parts of Π form an orthogonal array of strength 2. In particular, if our square has an orthogonal mate, then the letters of this mate form such a partition.
There is a partition with parts of sizes 7, 14 and 28 of a Latin square of order 7 derived from the Fano plane.
If we consider partitions where both non-principal eigenvalues occur, there is much more freedom, and probably no hope of a classification. To mention just two examples:
Before reading on, do you know the mysterious property of 4913?
As regulars here know, I am a Linux user. My favourite editor is one found on every unix machine, namely vi. Well, not the actual original vi, whose movement commands are somewhat cryptic, but the replacement vim (“vi improved”), which does at least use the arrow keys in the way you would expect. (Did I ever tell the story of how I persuaded John Conway to produce a paper for a conference proceedings published in time for the conference?)
I usually use vi from the command line in a terminal. But, possibly because of advancing age causing advancing holes in my memory, I have once or twice recently used the file selector to find the file I am looking for, having forgotten the pathname, then opened a terminal in this directory to use vi. On doing so, I noticed that at the end of the session there was a locked file called 4913 in the directory.
More experimentation showed several curious things. This is not a real file, since it does not show up in a directory listing (which is why I hadn’t spotted it before), and since it doesn’t exist, it can’t be deleted. However, it can be opened with another text editor such as gedit. If you do this, type in a character, and save, then the file takes on a real existence; it is no longer locked, and can be deleted in the usual way. Moreover, if you don’t do this but use vi again, the mysterious 4913 disappears; it seems to be toggled by the use of the editor.
A Google search for the number 4913 led me to a question someone had asked on StackExchange ten years ago. So this mystery has been known for ten years, and nobody, it seems, considered it worth doing anything about it in the intervening time. Well, I suppose that writing a non-existent locked file isn’t really a bug in the program. But why 4913? I don’t know that!
However, lovers of numbers will no doubt have noticed that 4913 is a perfect cube.
The LMS newsletter told me,
2018 will be the Year of Engineering. The Government has announced that it will work with industry partners … to offer a “million direct and inspiring experiences of engineering to young people throughout the year”
while the EMS reported
Year of Mathematical Biology 2018. The Year … is a joint venture of the European Mathematical Society and the European Society for Mathematical and Theoretical Biology.
Can I respectfully suggest that we might have a Year of Communication?
Bees are under threat from various problems including colony collapse disorder and the Varroa mite. Recently a new threat has emerged, from a fairly new class of pesticide, the neonicotinoids. These compounds were introduced about thirty years ago, and have many advantages; they are far less harmful to humans, fish, and other vertebrates than organophosphorus compounds. However, their effect on bees is a current concern.
In 2017, the results of an observational study were reported by the BBC. I have been unable to find the report of the study, so what follows is from memory. Over 100 sites worldwide were included in the study, which found that in two-thirds of the sites, there were measurable levels of neonic derivatives in honey, and in one-quarter of the sites, these were high enough to be harmful to the bees.
The BBC asked an industry spokesperson to comment. Two comments were reported: first, the levels were too low to pose a danger to human health (no such danger had been suggested in the report); and second, one hundred sites were too few to allow reliable inference about the scale of the problem.
The German chemical company Bayer (which, according to a report I read, is talking about a merger with Monsanto, the company environmentalists love to hate) conducted a large experiment in Germany to test the effects of their poison on bee colonies. They chose two large sites; on one they used neonicotinoids, on the other they didn’t. Then they measured the effect on bee colonies on these sites. An entire issue of the journal Ecotoxicology was devoted to papers resulting from this experiment.
You can probably see the flaw in this already, even if you are not a biologist or a statistician. If a difference is found, you have no way of telling whether it is due to the different treatment (pesticide) or to the different conditions at the two sites.
Indeed, one of the most basic principles of experimental design, one of R. A. Fisher’s “three Rs”, is replication. Statistics provides means of separating these differences, but only if the experiment is replicated, that is, each treatment used on more than one site.
St Andrews biologist Jeremy Greenwood was so annoyed by this that, after some communication with the authors of the study, he, and statistician Rosemary Bailey, wrote a paper pointing out the issue, and submitted it to Ecotoxicology. This is how science works: scientific results must be published, and openly discussed; any flaws will emerge in the discussion.
What happened next throws an interesting light on academic publishing. The editor of the journal was prepared to consider the paper, but downgraded it from a paper to a “Letter to the Editor”. This had two effects. First, open access publication is not available for letters to the editor. Second, the editor offered authors of the original study (one of whom is now a Bayer employee) the right to reply; had a telling answer been available, this could have nipped the criticism in the bud.
Along the way, an environmentalist learned about the paper, and alerted a journalist, who wrote about it. In a fit of pique, the editor accused the authors of breaching confidentiality and threatened to regard the paper as withdrawn. So much for scientific debate. (There was not even the excuse of commercial confidentiality, since the paper simply discussed work already published.) Just imagine if I was barred from telling anyone about my latest theorem until the paper had been accepted by a journal!
In the end the two letters were published; the DOIs are 10.1007/s10646-017-1877-1 and 10.1007/s10646-017-1878-0. If your university has a subscription (Ecotoxicology is a Springer journal) you will be able to take a look, if you want to. Otherwise you can read a preprint of the Bailey and Greenwood letter here. My view is that the response doesn’t address the criticism; to me, it reads like “We have a newer and bigger computer, so we can do this sum which involves dividing by zero, which we couldn’t do before”. But I am not an expert.
I see some slight parallel with Alan Sokal’s famous hoax “Transgressing the Boundaries“, published in the journal Social Text twenty years ago. The day the paper was published, he announced that it was a hoax, and garnered a lot of publicity. Sokal described this as an “experiment” with a serious purpose. He was attacking the influence of postmodern notions of relativism applied to science. He had several examples, from Europe, the United States, and India, of actual harm caused by this idea that science is just another discourse no more or less valid than any other. He believed that the progressive Left, rather than supporting this nonsense, should oppose it.
But imagine that an Alan Sokal in a parallel universe had decided that, rather than revealing the hoax, he would proceed to the second phase of the experiment, and leave the paper to lie there until somebody noticed what he had done. This might have taken a while: presumably not many physicists or mathematicians apart from Sokal read Social Text. When eventually some high-school student noticed some of the more obvious errors and wrote to the journal to point them out, would the editor have offered Sokal the right to reply? They may have got more than they had bargained for!
Kenneth works on geometric analysis, specifically fractal sets and their geometry. He has written several good books on this, and has built up a powerful group in St Andrews.
Kenneth is also a long-time member of the Long Distance Walkers’ Association, and regularly takes part in their hundred-mile walks. In the latest issue of their journal Strider, he features several times: as the setter of the cryptic crossword, as the setter of the “Puzzlers’ Plod”, and profiled in a question-and-answer session.
Benoit Mandelbrot said that the coastline of Britain is a fractal; its measured length depends on the scale of measurement used. No doubt Kenneth could tell us the length of the coastline measured at the scale of his impressive stride!
Anyway, warmest congratulations to Kenneth!
So to recall: a coherent configuration is, in essence, the combinatorial structure arising from the partition of the set of ordered pairs from X into orbits of a permutation group on X. In order of increasing specialisation, we have the following kinds of structure:
I will use the term association scheme for “symmetric coherent configuration”. If you prefer a different convention, you may think the problem I pose is less interesting; but the interest of a problem should not depend on the language used to describe it!
One of the features of coherent configurations which is crucial for their use in the graph isomorphism problem is that the coherent configurations on a given finite set form a lattice. The join operation in the lattice is just the join in the lattice of set partitions. The meet is defined because there is at least one coherent configuration finer than two given configurations (namely the trivial configuration where all relations are singletons), and the meet of the two is then just the join of all the configurations finer than both.
A permutation group G on X is
These are all standard in permutation group theory, but we need to add a few more, defined in terms of association schemes. The permutation group is
These concepts are related by the following implications:
Moreover, none of these implications reverses, and no further implications hold except for those obtained from these by the transitive law.
This theorem, and examples to prove the claims, are in my paper “Coherent configurations, association schemes, and permutation groups”, in Groups, Combinatorics and Geometry, World Scientific, Singapore, 2003.
The problem concerns AS-free groups. These are primitive, and basic (since otherwise they would preserve a Hamming association scheme), and so, by the O’Nan–Scott theorem, they are affine, diagonal or almost simple. The affine groups must be 2-homogeneous. Almost simple examples have been constructed computationally by Faradzev, Klin and Muzichuk; but they are few and far between, and there seems to be no pattern to their occurrence.
This leaves diagonal groups. Now a diagonal group whose socle has two simple factors preserves the conjugacy class scheme of the simple group; and one with three simple factors preserves the strongly regular Latin square graph associated with the Cayley table of the simple group. For four or more factors, the answer is unknown.
Problem: Are primitive diagonal groups with four or more factors in their socles AS-free or not?
The smallest possible degree of such a group would be 216000, so probably computation is not feasible …
If you have heard Laci Babai talking about his quasi-polynomial time algorithm for graph isomorphism, you might appreciate the significance of this; but if not, it might look a bit specialised. I want to say a bit about the subject matter of the conference, and persuade you that it is a central area of algebraic combinatorics which connects with many other areas of mathematics.
The area grew from three input streams, which resulted in three different names and some continuing confusion. I’ll describe them one at a time, and say a bit about the subsequent history.
If you design an experiment for comparing a number of factors, you have to look ahead to how useful information will be extracted from the experimental results. In all but the simplest cases, this involves finding a generalized (von Neumann) inverse of a potentially rather large concurrence matrix.
R. C. Bose and his school invented the concept of partially balanced designs to facilitate this, replacing the inversion of a large matrix by the inversion of a much smaller one. This typically involves having a structure known as an association scheme on the set of treatments, so that the number of blocks in which a pair of treatments occur depends only on the associate class containing the pair of treatments.
So let us stop and define this object. It is a partition of the set of all ordered pairs from the set X of treatments into relations R_{0}, … R_{s}, with the properties
The third condition looks complicated, but it simply says that the linear span over the real numbers of the matrices of the relations is closed under multiplication, and so forms a commutative algebra, which is called the Bose–Mesner algebra of the association scheme. From the point of view of the statistician, inverting an n×n matrix is replaced by inverting an element of this algebra (represented by an (s+1)×(s+1) matrix in the regular representation), usually very much smaller.
The smallest non-trivial case is s=2, where the relations R_{1} and R_{2} are the edge sets of a complementary pair of strongly regular graphs.
The Bose–Mesner algebra thus has two bases: the first consists of the relation matrices; the second consists of the orthogonal projections onto the eigenspaces. The change of basis matrices between these two are now denoted by P and Q.
The second stream flowed from the former Soviet Union, in particular from the work of Weisfeiler and Leman on graph isomorphism. They defined a class of combinatorial structures, which now are called coherent configurations for reasons we will see shortly; these generalise association schemes in two ways. Again we partition the set of ordered pairs into relations, but we weaken the first two conditions as follows:
The third condition remains the same; so the relation matrices once again span an algebra, though this is no longer necessarily commutative. This algebra was called a cellular algebra, but this term has been hijacked to have a completely different meaning. Taking terminology from the third stream, we call the set-up a coherent configuration, and the algebra a coherent algebra.
How is this relevant to graph isomorphism? We begin by remarking that, if we could find (generators for) the automorphism group of a graph, we could solve graph isomorphism. For, given two graphs, without loss of generality they are both connected; now the two graphs are isomorphic if and only if their disjoint union admits an automorphism interchanging the two components.
Now given a graph, we think of it as a partition of the set of ordered pairs into three parts: the diagonal; the edges of the graph; and the non-edges of the graph.
Now, given any partition of the pairs, we can refine it to a coherent configuration, and this refinement can be computed effectively from the graph. First, for each triple i,j,k of indices of relations, and each pair (x,y) in R_{k}, we count the number of points z such that (x,z)∈R_{i} and (z,y)∈R_{j}; then we split R_{k} into parts corresponding to the different values of this count. We repeat this process until it stabilises; when this happens, the values p_{ij}^{k} are well-defined, and we have a coherent configuration.
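The refinement step just described can be sketched naively as follows. This is my own unoptimised implementation of the idea (roughly n³ work per round), not Weisfeiler and Leman’s actual algorithm, and the 5-cycle example is my own choice; since the 5-cycle is strongly regular, its initial three-part colouring is already stable.

```python
# Naive Weisfeiler-Leman-style refinement: colour ordered pairs, then
# repeatedly split colour classes by the multiset of colour pairs
# (colour(x,z), colour(z,y)) over all points z, until stable.

from itertools import product

def coherent_closure(n, colour):
    """colour: dict mapping each ordered pair (x, y) to a colour."""
    while True:
        signature = {}
        for x, y in product(range(n), repeat=2):
            sig = (colour[x, y],
                   tuple(sorted((colour[x, z], colour[z, y])
                                for z in range(n))))
            signature[x, y] = sig
        # rename the signatures to small integers
        names = {s: i for i, s in enumerate(sorted(set(signature.values())))}
        refined = {p: names[signature[p]] for p in signature}
        if len(set(refined.values())) == len(set(colour.values())):
            return refined           # no class split: stable
        colour = refined

# initial colouring of the 5-cycle: diagonal / edge / non-edge
n = 5
edges = ({(i, (i + 1) % n) for i in range(n)} |
         {((i + 1) % n, i) for i in range(n)})
init = {(x, y): (0 if x == y else 1 if (x, y) in edges else 2)
        for x in range(n) for y in range(n)}
cc = coherent_closure(n, init)
print("classes in the closure:", len(set(cc.values())))
```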
Moreover, this configuration is invariant under the automorphism group of the graph. If it is the trivial configuration, in which all parts are singletons, then the automorphism group is the trivial group. Otherwise, we can try to show that there is a vertex whose stabiliser is trivial, by splitting one of the diagonal relations into a pair (x,x) and all the rest, and repeat the refinement. I will not give further details. But the upshot is that we would like to find a reasonably small set of vertices whose pointwise stabiliser is trivial, and then work back to find appropriate automorphisms.
The third strand is the oldest, going back to Schur, who invented the method of Schur rings in his proof that a primitive permutation group containing a regular cyclic subgroup of composite order must be doubly transitive. The method was developed by Schur’s student Wielandt, and included as the fourth of five chapters of his influential book on permutation groups. The translation, by R. Bercov, was my own introduction to the subject in 1968.
Wielandt’s book in fact segues from Schur rings into representation theory, and it is clear that he had thought deeply about this transition. But he didn’t publish much, and the idea was taken up by Donald Higman in the late 1960s. Higman defined a coherent configuration, as we have done above, and observed that, if G is any permutation group, then the orbits of G on ordered pairs form a coherent configuration.
If this configuration is an association scheme, then the group will act on the eigenspaces of the Bose–Mesner algebra, and indeed will be irreducible on each eigenspace. Higman saw the general theory as a way of decomposing permutation representations into irreducible components. Things are more difficult because of the non-commutativity, but the closure under transposition has the result that the algebra is semisimple, so is isomorphic to a direct sum of matrix algebras. In principle, the dimensions of these matrix algebras and their multiplicities in the Bose–Mesner algebra can be computed from the parameters of the configuration. In the group case, this algebra is precisely the centraliser of the permutation matrices making up the group; so on interchanging degrees and multiplicities, we get information about the representation of the group and its decomposition.
Higman spent the year 1970–1971 in Oxford; he was the external examiner for my doctorate, and he gave a course of lectures, for which I took notes (along with Susannah Howard), which were subsequently published in the Mathematical Institute’s Lecture Note Series. So I was in at the ground floor!
We are now faced with four kinds of objects; in order of increasing generalisation, these are association schemes (symmetric coherent configurations), commutative coherent configurations, homogeneous coherent configurations (those for which the diagonal is a single relation), and general coherent configurations. This has given rise to some problems of terminology!
In 1973, Philippe Delsarte wrote a doctoral thesis at the Université Catholique de Louvain, which was reprinted in the Philips Research Reports Supplements, and has been one of the most influential documents in the theory. Delsarte restricted himself to commutative association schemes, though in fact most of his examples are symmetric. He observed that these provide a setting for two important areas of discrete mathematics: error-correcting codes and t-designs. Codes are easier to explain here.
An association scheme is P-polynomial if there is a relation matrix R_{1} in the scheme which generates it, in the sense that R_{i} is a polynomial of degree i in R_{1}. The graph with edge set R_{1} is what is known as a distance-regular graph. An important class of examples are the Hamming graphs, where the vertex set is the set of all words of length n, or n-tuples over an alphabet of size q, two vertices being joined if they differ in just one coordinate.
A d-error-correcting code is a set of words with the property that, if one is transmitted through a noisy channel and at most d symbol errors occur, then the received word is still closer to the transmitted word than to any other word in the code; so, with this assumption, the transmitted word can be found. This is just a set of words (or vertices in the Hamming graph) whose pairwise distances are all at least 2d+1. Delsarte observed that this definition could be made for any distance-regular graph, and many of the results about such codes (including the sphere-packing bound and Lloyd’s Theorem) could be proved in the more general situation. (Norman Biggs was developing similar ideas at about the same time.) Delsarte also showed that the technique of linear programming could be used to find new bounds on codes which in some cases were stronger than any previously known.
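In the Hamming scheme the sphere-packing bound is a one-line computation: a d-error-correcting code has at most q^{n}/V(n,d) words, where V(n,d) is the number of words in a ball of radius d. A sketch, with the binary [7,4] Hamming code as sanity check (it is perfect, so it meets the bound):

```python
# Sphere-packing (Hamming) bound for codes in the Hamming graph H(n, q).

from math import comb

def sphere_packing_bound(n, q, d):
    """Upper bound on the size of a d-error-correcting code of length n."""
    ball = sum(comb(n, i) * (q - 1) ** i for i in range(d + 1))
    return q ** n // ball

# the binary Hamming code of length 7 corrects 1 error and has 16 words
assert sphere_packing_bound(7, 2, 1) == 16
```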
Delsarte also remarked that there was a dual concept to P-polynomial schemes (or distance-regular graphs), which he called Q-polynomial schemes; and that these schemes provided a setting for a generalisation of t-designs (in the so-called Johnson schemes) and orthogonal arrays (in the Hamming schemes).
At about this time, Eiichi Bannai and Tatsuro Ito were using similar techniques to prove the non-existence of Moore graphs with diameter greater than 2 other than odd cycles. (These graphs, which I won’t define here, would be examples of distance-regular graphs. The theorem was also proved by Damerell at about the same time.)
This led, a few years later, to a very influential book on algebraic combinatorics by Bannai and Ito, Association Schemes, I. (Volume 2 never appeared.) These authors defined association schemes to be homogeneous coherent configurations; but, like Delsarte, most of the important examples were symmetric. Their focus was on schemes which are both P- and Q-polynomial. I mentioned above that in a P-polynomial scheme, the i-th relation matrix is a polynomial of degree i in the first one. These polynomials satisfy a three-term recurrence relation, and so form a family of orthogonal polynomials. For the Hamming scheme, these are the Krawtchouk polynomials; for the Johnson scheme, the Eberlein or dual Hahn polynomials, as Delsarte had observed. Q-polynomial schemes give a dual family of polynomials. Some of the examples led to new families of orthogonal polynomials; one of these consists of the Bannai–Ito polynomials.
Of course I cannot trace everything that has happened since. In the preceding post, I described very briefly the talks given at the International Workshop on Bannai–Ito Theory in Hangzhou last month; as well as the subjects mentioned above, there were connections with synchronization, optimality, symmetric spaces, quantum probability, and much more.
I expect that much of this will be discussed at the Pilsen meeting. But in particular, one thing that will be under discussion is the history of the subject. Before all the participants have “left”, it is important to get a record of what happened in those confused early days, when communication between researchers was by letter, or (in the case of the Soviet Union and the West) virtually nonexistent.
Hangzhou is set on an artificial (but centuries old) lake, with islands, causeways, willows, pagodas, temples, boats, and so on. The picture above shows the Lei Feng pagoda, Xiao Ying Zhou Island, the Ling Yin temple, and a small lake in the Botanic Gardens. The first three were in places where we were taken by the generous and friendly students, Zhiwen He and Weicong Li. Xiao Ying Zhou Island is the viewpoint for the famous “Three Ponds Mirroring the Moon” which appeared on the 1 yuan note (now replaced by a coin).
Rosemary and I went a day early to give us a bit more time for sightseeing. Conference registration was on Thursday afternoon, and the talks ran from Friday morning until Sunday lunch. It was a packed schedule, with 28 talks in the two and a half days. I will try to give a brief summary. I don’t know what Bannai–Ito theory actually is, but we heard a lot about association schemes, orthogonal polynomials, and so on.
The schedule said that the first item was an opening ceremony at 8.30 followed by my talk at 9. But China apparently doesn’t go in for long opening ceremonies. At 8.30, Jack Koolen said a few words saying essentially that he was happy we had all come, and then it was announced that we would have the conference photograph. So we all traipsed outside for that, and then back in again. Then someone said, shall we wait until 9 or shall we just get started right away? The latter view prevailed, so without further ado I started my lecture. It is true that at about 9 a small group of students arrived and seemed a bit disconcerted to find the first speaker in full flow.
We had been unable to use the regular conference room due to another booking, so the first day’s talks were in a room which was too small (if everyone came, some had to stand), and where the screen was too low: from the back of the room it was impossible to see the last few lines. Despite this, my talk went OK, with even a few interruptions, I’m glad to say. I realised I had left something important out of the statement of one theorem, so made a mental note to fix the slides (which involved adding a new frame).
My talk was on synchronization and separation in the Johnson association scheme, which I have discussed here recently, so I will say no more.
The second talk was by Akihiro Munemasa, who had a nice idea for an approach to permutation representations which are not multiplicity-free. He supposes that the multiplicities are at most two, and the action is imprimitive, so that, if a character has multiplicity two, then it occurs precisely once in the action on blocks. This means that in the isotypic component affording twice this character, there is a “canonical” submodule affording the component in the action on blocks, and then its perp forms a “canonical” second submodule. He was able to use this to get formulae for the minimal idempotents, and hence develop a Delsarte-type theory.
Sung-Yell Song gave us a brief account of partial geometric designs (or 1½-designs, according to Arnold Neumaier), with some examples which went past rather fast.
Takayuki Okuda proved a theorem which he claims has an application to isometric embeddings of compact Riemannian symmetric spaces. The theorem says that if you take the “opposition involution” on a root system and produce a graph on the quotient in the obvious way, its independence number is the dimension of the involution’s fixed-point space. He gave the values of this parameter for all the indecomposable root systems.
Rosemary talked about products of association schemes, especially generalized wreath products and crested products.
Then we had a lunch break, with a huge amount of food. There would have been time to do something after lunch, but I didn’t take the opportunity.
Jason Williford discussed the representation diagrams of association schemes, a concept arising naturally from the notion of Q-polynomial schemes, and gave some background.
Bill Martin talked about joint work with Brian Kodalen on the connectivity of the graphs in an association scheme. The obvious conjectures are that both vertex and edge connectivity should be equal to the valency, and that the only disconnecting sets of the appropriate sizes should be those which isolate a vertex. They can prove that deleting a closed neighbourhood leaves a connected graph if and only if no two vertices have the same neighbourhood.
Yan Zhu talked about the structure of shells in the Hamming schemes (the sets of points at fixed distance from a given point), and the question of what t-designs in these schemes look like. A shell in the Johnson scheme is just the direct product of smaller Johnson schemes, but shells in the Hamming scheme are more interesting. (In Rosemary’s terms, there is “something” nested in a Johnson scheme, but exactly what it is is not so clear.) This was joint work with Eiichi Bannai.
Da Zhao presented a theorem, also joint with Bannai, classifying the association schemes which have spherical embeddings in 3-dimensional Euclidean space: there are just seven of these, the five Platonic solids and the line graphs of the cube and dodecahedron.
The final session was the highlight, with talks by Tatsuro Ito and Eiichi Bannai. Ito spoke about “Bannai’s Jugendtraum”, which (he said) was to give an alternative approach to the classification of finite simple groups by means of association schemes. The conjugacy class scheme of a finite group is a commutative scheme which is primitive if and only if the group is simple. If we could characterise up to parameters the schemes that arise, then we would just have to characterise all the finite simple groups by their character tables. This dream was not realised, and probably never will be. But it led to P- and Q-polynomial association schemes, Leonard’s theorem and the connection with orthogonal polynomials, and Terwilliger’s theory; there a classification is in sight, as Ito and Terwilliger are on the case. (Also, of course, it led to the celebrated book by Bannai and Ito.)
Then Bannai spoke. He described how, as a student of Iwahori, he had become very familiar with the commutative square
He worked on permutation representations of the groups on the left of the diagram. This led him and Ito to the theorem on Moore graphs (simultaneous with Damerell), association schemes, and the book with Ito, which of course stimulated much more work. He concluded with some problems he still dreams of solving: a bound on the degree of transitivity not using CFSG; re-doing CFSG using algebraic combinatorics (he admitted that these two are out of reach); a theory of commutative association schemes following that of compact symmetric spaces; and generalising tight t-designs to other contexts. In particular, he would like multivariate versions of P- and Q-polynomial association schemes and of Askey–Wilson polynomials.
Then it was time for the conference banquet, where we ate and ate, and finally staggered home to bed.
The next day, we had to be early again, since we needed to follow the crocodile to the new conference room (though I had a good idea where it was, and could have found it myself). This was in the Sir Run-Run Shaw building; the door we entered said it was the Business Administration department, though the door on the ground floor called it the Mathematics Department (in common with the building we were in yesterday).
We were in a much bigger room, but still not ideal; the room was not raked, so it was still a bit difficult to see the bottom of the screen; and there was storage space under the desks, which meant that my legs didn’t go under unless I twisted them into an uncomfortable contortion.
Kaishun Wang kicked off the proceedings with a talk on weakly distance-transitive digraphs. This is where the relation of two points specifies their distance in both directions, and it is required that these relations form an association scheme. He had classifications in some cases, including valency 2 or 3 and the thin case (all nontrivial intersection numbers at most 1), and partial results on diameter 2.
Hajime Tanaka came from quantum probability theory. Essentially, he looks at the spectral distribution of various families, suitably normalised: e.g. for Hamming, he gets Poisson if q/n tends to a finite limit, or normal if it tends to zero; the corresponding orthogonal polynomials are Charlier and Hermite polynomials. Similarly, Johnson schemes give geometric and exponential distributions and Meixner and Laguerre polynomials.
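The normal limit in the fixed-q regime is easy to check by hand: the eigenvalues of the binary Hamming graph H(d,2) (the d-cube) are d−2j with multiplicity C(d,j). The sketch below (my own illustration, not from the talk; the √d normalisation is my choice) computes moments of the standardized spectral distribution and watches them approach those of a standard normal.

```python
import numpy as np
from math import comb, sqrt

# Spectrum of the d-cube H(d, 2): eigenvalue d - 2j occurs with
# multiplicity C(d, j).  Centre is 0; scaling by sqrt(d) gives
# variance 1.  As d grows, the moments approach those of N(0, 1).
def standardized_moments(d, kmax=4):
    evs = np.array([(d - 2 * j) / sqrt(d) for j in range(d + 1)])
    wts = np.array([comb(d, j) for j in range(d + 1)], dtype=float)
    wts /= wts.sum()                       # binomial spectral measure
    return [float((wts * evs ** k).sum()) for k in range(1, kmax + 1)]

m1, m2, m3, m4 = standardized_moments(400)
print(m1, m2, m3, m4)   # approaching 0, 1, 0, 3, the moments of N(0,1)
```

The fourth moment works out to exactly 3 − 2/d, so the convergence to the Gaussian value 3 is visible already for moderate d.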
He went on to look for multivariate analogues. In particular, the limit of the spectral distribution of symmetric products of strongly regular graphs, suitably normalised, gives either bivariate Gaussian, or product of Gaussian and Poisson, or bivariate Poisson, depending on various limits; he can describe the orthogonal polynomials also.
Sho Suda found the invariant factors of a skew-symmetric ±1 matrix of order 4n+2, as given by the Ehlich–Wojtas theory. As a corollary, Kharaghani’s example of a matrix of order 66 meeting the determinant bound cannot be equivalent to a skew-symmetric matrix.
Zhi Qiao gave a characterisation of dual polar spaces of sufficiently large diameter: he requires large cliques (meeting the Delsarte bound) and local conditions on the association scheme, not the full strength of this.
Wei-Hsuan Yu has been improving bounds for numbers of equiangular lines in Euclidean space, and has resolved a number of unknowns in Seidel’s table. But not everything is known yet. He showed that the 90 lines in 20 dimensions cannot have another line added; this is of course not equivalent to showing that 90 is the maximum. The week after the workshop, he came to SJTU to give a seminar, and went into a bit more detail about the semidefinite programming used.
Ziqing Xiang talked about explicit constructions of spherical designs. The existing constructions for given strength and dimension depend on continuity arguments and so are not explicit; he has managed, in a remarkable construction, to give explicit rational constructions, though the price he pays is that the designs are rather large!
Then the lunch break. After lunch, taking advice from Yaokun Wu and the students, Rosemary and I went to the botanic gardens. After paying the small entrance fee, we went up a path over a hill and down the other side, to an area of small lakes, with beautifully coloured leaves on the trees and picturesque buildings around them (together with, on the other side, car parks and huge restaurants).
Paul Terwilliger started the afternoon with a talk on Leonard triples (a new development from Leonard pairs; I am not sure of its application to association schemes). He has constructed three matrices which he calls “pseudo-intertwiners” for the triple, and took us through the rather complicated algebra necessary to verify their properties.
Suogang Guo used the Terwilliger algebra for the folded cube (which is in this case just the coherent algebra for the point stabiliser in its automorphism group – I don’t know how general this is!) to get improved bounds for codes in this graph. However, he didn’t actually show us an explicit example where his method beats known bounds.
Hiroshi Nozaki was interested in bounding the number of vertices in a connected bipartite graph with given valency and given second eigenvalue. On a similar theme, Jongyook Park talked in some generality about the second eigenvalue of a distance-transitive graph and what it tells you.
Sergey Goryainov talked about perfect 2-colourings (that is, equitable partitions with two parts) of the bilinear forms graph of dimension 2×d over the field of two elements. For such a partition, the 2×2 collapsed adjacency matrix has two eigenvalues, one of them the valency; in the case where the other is the second eigenvalue, he was able to give a complete classification. More on this later.
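This is easy to experiment with on a small example. The sketch below (my own, not from the talk; it assumes numpy and uses the Petersen graph rather than a bilinear forms graph) checks that the outer/inner partition of the Petersen graph is a perfect 2-colouring, and that the eigenvalues of the collapsed (quotient) matrix lie in the spectrum of the adjacency matrix, as the theorem on equitable partitions requires.

```python
import numpy as np

# Adjacency matrix of the Petersen graph: outer 5-cycle 0..4,
# inner pentagram 5..9, spokes i -- i+5.
A = np.zeros((10, 10), dtype=int)
edges = [(i, (i + 1) % 5) for i in range(5)]              # outer cycle
edges += [(i, i + 5) for i in range(5)]                   # spokes
edges += [(5 + i, 5 + (i + 2) % 5) for i in range(5)]     # inner pentagram
for u, v in edges:
    A[u, v] = A[v, u] = 1

# Partition into outer and inner vertices.
parts = [list(range(5)), list(range(5, 10))]

# Equitability check: every vertex in part i must have the same number
# of neighbours in part j; those counts form the quotient matrix M.
M = np.zeros((2, 2), dtype=int)
for i, Si in enumerate(parts):
    for j, Sj in enumerate(parts):
        counts = {A[v, Sj].sum() for v in Si}
        assert len(counts) == 1, "partition is not equitable"
        M[i, j] = counts.pop()

print(M)                                             # quotient matrix
print(sorted(np.linalg.eigvalsh(M)))                 # its eigenvalues
print(sorted(set(np.round(np.linalg.eigvalsh(A), 6))))  # spectrum of A
```

Here the quotient matrix is [[2,1],[1,2]] with eigenvalues 3 (the valency) and 1, and 1 is the second eigenvalue of the Petersen graph, so this partition is of exactly the kind Goryainov classified in his setting.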
The day ended with Alexei Zhedanov talking about orthogonal polynomials on the unit circle and their connection with double affine Hecke algebras, things which had been mentioned in other talks. He says that these things have applications in solving certain multivariate eigenvalue problems, such as for (α_{1}R_{1}+α_{2}R_{2}+βI)f = 0, which physicists and others are interested in. The polynomials are the Szegő orthogonal polynomials.
The next morning we started a bit later, since we knew the way, but found when we arrived right on time that Eric Moorhouse had just started. Fortunately we didn’t miss much. Eric gave a beautiful talk on double covers, his tool of first resort for many mathematical problems. He showed us by way of introduction that the cube and dodecahedron are (antipodal) double covers of K_{4} and the Petersen graph, and that there is a repeated double cover from K_{4,4} to the 4-cube to a 4-regular graph of girth 6 which is most of PG(2,4). A similar double extension leads from K_{6} to the icosahedron to a graph which is most of PG(2,5). Also, the folded 6-cube is a double cover of both the 6-valent strongly regular graphs on 16 points.
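Eric’s opening example can be verified mechanically. The following sketch (mine, using the obvious binary coordinates; it assumes numpy) checks that the 3-cube is an antipodal double cover of K₄: the antipodal map pairs up the vertices into four fibres, no edge lies inside a fibre, and the quotient graph on the fibres is complete.

```python
import numpy as np
from itertools import product

# Vertices of the 3-cube Q3: all 0/1 triples; edges join triples
# differing in exactly one coordinate.
verts = list(product([0, 1], repeat=3))
idx = {v: i for i, v in enumerate(verts)}
A = np.zeros((8, 8), dtype=int)
for v in verts:
    for w in verts:
        if sum(a != b for a, b in zip(v, w)) == 1:
            A[idx[v], idx[w]] = 1

def antipode(v):
    # The antipodal map complements every coordinate.
    return tuple(1 - a for a in v)

# The fibres of the covering are the 4 antipodal pairs.
fibres, seen = [], set()
for v in verts:
    if v not in seen:
        fibres.append((v, antipode(v)))
        seen.update({v, antipode(v)})

# Quotient graph: two fibres are adjacent if some representatives are.
Q = np.zeros((4, 4), dtype=int)
for i, F in enumerate(fibres):
    for j, G in enumerate(fibres):
        if i != j and any(A[idx[x], idx[y]] for x in F for y in G):
            Q[i, j] = 1

# No edge inside a fibre, and the quotient is the complete graph K4.
assert all(A[idx[v], idx[antipode(v)]] == 0 for v in verts)
assert (Q == 1 - np.eye(4, dtype=int)).all()
print("Q3 is an antipodal double cover of K4")
```

The same few lines, run on the dodecahedron and the Petersen graph, confirm the other half of the introductory example.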
There are 193 known projective planes of order 25, falling into 99 dual pairs. Eric analysed these, taking the quotient of a plane by an involution (discarding fixed points) and then a double cover of this. He found several examples, none of them new. He is trying 49, but there are millions of planes there! He also used an idea due to Conway to get an efficient isomorphism invariant for things involving permutations (such as loops, nonincidence graphs of projective planes, etc.)
He has used double covers of complete graphs (that is, two-graphs – he used this term) to get an isomorphism invariant for ovoids, and to show the nonexistence of ovoids in large odd dimensions over odd characteristic.
The pièce de résistance was the set of Lagrangians (maximal totally isotropic subspaces) in a symplectic space in odd characteristic. There is a 2-valued invariant of triples of Lagrangians called the Maslov index, which is a two-graph and so gives a double cover. He explained how this is constructed in the case of pairwise disjoint Lagrangians X,Y,Z. Any vector in Z has the form x+f(x), where f is an isomorphism from X to Y; using the fact that Z is totally isotropic and the form is alternating, you find that B(x,f(y)) = B(y,f(x)), so you get a symmetric bilinear form on X, and take the type of the corresponding quadratic form in the Witt ring. Interesting stuff!
Edwin van Dam classified graphs with an eigenvalue of multiplicity 3 which are partially metric (that is, they generate an association scheme, and the distance-2 relation is a single relation in the scheme). Apart from complete multipartite graphs, there are just nine such, with interesting relationships: for example, the Foster graph is a double cover of the Nauru graph.
Hirotake Kurihara comes from differential geometry. He showed us how an irreducible Hermitian symmetric space has a “scaffold” which is a distance-transitive graph. Such a graph is built on the fixed point set of the isometry s_{x} fixing the point x in the definition of symmetric space. It is uniquely determined by the space, and conversely. Grassmann spaces give rise to Johnson graphs. The two exceptional examples give the Schläfli and Gosset graphs.
Wei Wang talked about the graph isomorphism problem, specifically, is it true that almost all graphs are determined by their spectrum (as Willem Haemers conjectured)? He has partial results when you give yourself the spectra of the graph and its complement. He mentioned two things I didn’t know: W. X. Du has improved Babai’s quasipolynomial bound for graph isomorphism to an n^{c log n} bound; and Tao and others have shown that almost all graphs have simple spectrum (all eigenvalues simple).
Finally Jack Koolen gave a lovely talk about representing graphs with least eigenvalue −3 or greater by integral lattices with minimum norm 3. Following Conway and Sloane, a lattice L is called s-integrable if (√s)L is isomorphic to a sublattice of the standard lattice Z^{n}. Thus E_{6}, E_{7} and E_{8} are 2-integrable but not 1-integrable. His main theorem asserts that there is a number K such that a graph with minimum valency at least K and least eigenvalue at least −3 is 2-integrable. (This extends Hoffman’s theorem, which has 2 and 1 in place of 3 and 2.) He pointed out that, although the upper bound in the theorem is huge and can surely be improved, the true bound cannot be smaller than 163, since the strongly regular graph of valency 162 on 275 vertices is not 2-integrable. He conjectures that the main theorem holds also for signed graphs, and that all such graphs and signed graphs are s-integrable for some bounded s (perhaps the correct bound is 4).
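For comparison with the classical case that Koolen’s theorem generalises: graphs with least eigenvalue at least −2 include all line graphs, and line graphs are 1-integrable by an explicit construction. The sketch below (my own, not from the talk; it assumes numpy) checks that the line graph of K₅ has least eigenvalue −2 and exhibits the standard representation {i,j} ↦ e_i + e_j by norm-2 vectors in Z⁵.

```python
import numpy as np
from itertools import combinations

# Line graph of K5 (the triangular graph T(5)): vertices are the
# 2-subsets of {0,...,4}, adjacent when they share an element.
pairs = list(combinations(range(5), 2))
n = len(pairs)                           # 10 vertices
A = np.zeros((n, n), dtype=int)
for i, p in enumerate(pairs):
    for j, q in enumerate(pairs):
        if i != j and set(p) & set(q):
            A[i, j] = 1

# Least eigenvalue of a line graph is at least -2; here it is exactly -2.
least = min(np.linalg.eigvalsh(A))
print(round(least, 6))                   # -2.0

# Explicit 1-integral representation: send {i,j} to e_i + e_j in Z^5.
# The Gram matrix of these norm-2 vectors is exactly A + 2I, which
# witnesses that A + 2I is positive semidefinite.
vecs = np.array([[1 if k in p else 0 for k in range(5)] for p in pairs])
assert (vecs @ vecs.T == A + 2 * np.eye(n, dtype=int)).all()
```

Koolen’s theorem says that, once the minimum valency is large enough, the analogous statement with −3 in place of −2 still holds, except that the lattice vectors may need to live in (1/√2)Z^{n} rather than Z^{n}.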
Then it was the end of the conference. The organisers were thanked; in fact Tao Feng and his team were out of the room when the thanks began, but were quickly fetched. Rosemary and I went for a coffee, then went for a walk by the lake, on the Su causeway, and on Gushan island, before heading back to the hotel where taxis would be fetched to take us to the station for the train back to Shanghai.