So the conference is over; but before I start a description of the last two days, let me pose a problem which might be an interesting small research topic for someone.
As I have said, statisticians love symmetric real matrices. Indeed, some of them love these so much that they tinker with matrix multiplication to preserve symmetry. The Jordan product of two square matrices of the same size is given by A*B = (AB+BA)/2. Thus, the Jordan product of symmetric matrices is symmetric.
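A two-line check (my illustration, nothing deeper) makes the point: the ordinary product of two symmetric matrices need not be symmetric, but the Jordan product always is.

```python
import numpy as np

def jordan(A, B):
    """Jordan product A * B = (AB + BA) / 2."""
    return (A @ B + B @ A) / 2

A = np.array([[0.0, 1.0], [1.0, 0.0]])   # symmetric
B = np.array([[1.0, 0.0], [0.0, 2.0]])   # symmetric

AB = A @ B          # ordinary product: [[0, 2], [1, 0]], not symmetric
J = jordan(A, B)    # Jordan product:   [[0, 1.5], [1.5, 0]], symmetric

assert not np.array_equal(AB, AB.T)
assert np.array_equal(J, J.T)
```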
Some time ago I defined the notion of a Jordan scheme, the analogue of a coherent configuration for the Jordan product. It can be defined in terms of a partition of Ω^2 into subsets, but I will give the simpler definition in terms of matrices. A Jordan scheme is a set of square zero-one matrices of the same size, satisfying the conditions: the matrices are all symmetric; they sum to the all-ones matrix J; the identity I is the sum of a subset of them; and the linear span of the matrices is closed under the Jordan product.
It is clear that the symmetrisation of a coherent configuration (obtained by adding each non-symmetric matrix in the configuration to its transpose) is a Jordan scheme. I made a conjecture, which seems to me a bit unlikely: every Jordan scheme is the symmetrisation of a coherent configuration.
I never spent any time on this conjecture, apart from failing to find any easy counterexamples. To investigate it, it would be useful to have a tool. So here I propose a tool: symmetric Weisfeiler–Leman algorithm. In other words, modify the usual Weisfeiler–Leman algorithm (which you can find defined in several slides from the conference, for example those of Sven Reichard or Oleg Verbitsky) so that it deals only with symmetric relations; so, when considering a pair (x,y), it counts the paths of a given type from x to y and those of the same type from y to x.
I assume that no implementation of this has ever been produced, so I may have to write my own. But it would be a valuable tool in exploring the conjecture, and raises a question in its own right:
I can’t see a good reason why this should be so, but if it were, one could pick random configurations and apply ordinary and symmetric WL to them and check whether the latter was the symmetrisation of the former.
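To make the proposed tool concrete, here is a naive sketch in Python of the usual 2-dimensional Weisfeiler–Leman refinement together with a symmetric variant (my own toy version, not an existing implementation): with symmetric=True, the pairs (x,y) and (y,x) are identified, and the pair of colours along each path x–z–y is counted without regard to order.

```python
from itertools import product

def wl2(n, edges, symmetric=False):
    """Naive 2-dimensional Weisfeiler-Leman colour refinement on pairs.
    With symmetric=True, (x,y) and (y,x) share a colour, and the colours
    along a path x-z-y are counted as an unordered pair."""
    E = {frozenset(e) for e in edges}
    key = (lambda x, y: frozenset((x, y))) if symmetric else (lambda x, y: (x, y))
    # initial colours: diagonal / edge / non-edge
    colour = {key(x, y): (0 if x == y else 1 if frozenset((x, y)) in E else 2)
              for x, y in product(range(n), repeat=2)}
    while True:
        sig = {}
        for x, y in product(range(n), repeat=2):
            steps = ((colour[key(x, z)], colour[key(z, y)]) for z in range(n))
            if symmetric:
                steps = (tuple(sorted(s)) for s in steps)
            sig[key(x, y)] = (colour[key(x, y)], tuple(sorted(steps)))
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {k: relabel[s] for k, s in sig.items()}
        if len(set(new.values())) == len(set(colour.values())):
            return new          # partition is stable
        colour = new

pentagon = [(i, (i + 1) % 5) for i in range(5)]
assert len(set(wl2(5, pentagon).values())) == 3
assert len(set(wl2(5, pentagon, symmetric=True).values())) == 3
```

On the 5-cycle both versions stabilise at the three classes of the cyclic association scheme (equality, adjacency, distance 2); the interesting experiment, as suggested above, would compare the two outputs on random configurations.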
I mentioned this to Pascal Schweitzer, and he agreed that all the above questions almost certainly have a negative answer. This is a good thing, since there is potentially a theory of Jordan schemes which is not just about symmetrised coherent configurations.
Anyway, I had written the above before the final day’s lectures. I was amazed when Sergey Shpectorov, in a beautiful talk about successive generalisations of the Griess algebra for the Monster to the present concept of axial algebras (more on this shortly), mentioned Jordan algebras. More evidence that there is an interesting theory lurking here!
Now a couple of comments on things I said earlier. Alexander Gavrilyuk wasn’t convinced by my comments on Higman and Benson, and showed me the abstract of Benson’s paper where he specifically credits Feit and Higman for the idea. I will do more research on this, read the papers, and write a longer report. I don’t know yet what its conclusion will be – any bets? Also, István Estélyi, a local, told me that it is not correct that brewing is the only industry in Pilsen. Although Škoda cars are no longer made here, trams and trains are manufactured. Indeed, on Saturday morning, smoking chimneys were clearly visible from the hotel terrace.
Anyway, back to business. The first talk on Friday was by Laci Babai. His title was the same as the conference title, “Symmetry vs Regularity”, and indeed Laci has done more than anyone else in the last fifty years to explore this distinction. Although symmetry implies combinatorial regularity, and one might naively expect a converse to hold, Laci contends that in most cases regularity is the enemy of symmetry; one can bound the number of automorphisms of regular objects like coherent configurations, with known exceptions. [There is one situation which doesn’t fit the paradigm. Sometimes, strong regularity conditions imply symmetry, because actually they imply a complete characterisation, and the only objects which satisfy the conditions are symmetric ones. I will come back to this later.] In one of his later talks, which were “tutorials” on his quasi-polynomial algorithm for graph isomorphism, he drew the moral that “the opposite of hidden irregularity is not hidden regularity, but hidden robust symmetry.” This was in the context of his “split-or-Johnson” subroutine. The general structure of the algorithm is to break enough symmetry that one can apply Luks’ “divide-and-conquer” methods. You cannot easily divide Johnson graphs. Fortunately, with Laci’s device of an “ideal set”, Johnson graphs allow you to greatly reduce the size of the sets you need to consider.
More diversions in that paragraph than good writing rules might allow; but there were so many connections! Anyway, Laci proposed two conjectures for “group theory without groups”:
Back to symmetry vs regularity, I proposed the following example, which seems to lie right on the borderline. Desargues’ Theorem is a regularity condition for projective planes; at first it seems to support Laci’s argument, since it is strong enough to imply a classification (such a plane is coordinatised by a skew field). But looking a bit deeper, Desargues’ Theorem immediately implies the existence of many automorphisms, namely central collineations (at least n^{5/2} of them, if n is the number of points of the plane). It is these central collineations which are used to classify the planes, since the addition and multiplication groups of the field are groups of central collineations with fixed centre and axis.
Laci had explained his results on the connections between minimal degree, thickness, and (in some cases) order of automorphism groups of coherent configurations. The next talk was by his student Bohdan Kivva, who had proved Ω(n) lower bounds for the minimal degree of the automorphism groups of distance-regular graphs of fixed diameter and of primitive rank 4 coherent configurations; a tour de force using combinatorial and spectral methods as well as characterisations in special cases.
Tatsuro Ito gave an update on the progress of the classification of (P&Q)-polynomial association schemes; he surveyed the known examples, described Leonard pairs and the work of Terwilliger, and left us the impression that progress continues to be made but the end is not yet.
Then we had an introduction to quantum permutation groups by David Roberson, where permutation matrices are replaced by “magic unitaries” over a C*-algebra. The good news for classicists is that these give rise to coherent configurations in the sense we already know.
Pascal Schweitzer gave a very interesting talk about the Weisfeiler–Leman algorithm. In particular, he mentioned that planar graphs have WL-dimension at most 3, using Tutte’s “spring embedder” of a planar graph (you replace edges by springs, and let the configuration move to a lowest energy state). He also has a complete characterisation of graphs identified by 1-dimensional WL.
The day ended with a short presentation by Sasha Ivanov. He pointed out that there are geometries for M24 (octads–trios–sextets) and L5(2) (the projective geometry) with many local symmetries: the point, line, and point-line stabilisers are isomorphic, although the embedding of the last in the other two is slightly different in the two cases. So the groups are homomorphic images of slightly different amalgamated products. I must admit I was left a little unsure of what exactly the point was; he wasn’t announcing that he had glued the two pictures together to get a new simple group (at least I don’t think so). Right at the end it appeared that he was advertising his new book.
We had a half-day on Saturday morning to finish the conference by lunchtime. This made, by my count, a total of 64 talks in four full and two half days; unsurprisingly we were all tired! But most people were still there at the end, a remarkable testimony to the success of the meeting. The highlight of the morning, for me, was a pair of talks by Sergey Shpectorov and Madeleine Whybrow on axial algebras, which are (as I mentioned) the current endpoint of a line of generalisation starting with the Griess algebra and going on to work of Sakuma, Ivanov, Pasechnik, Seress and Shpectorov. In the original case and the first version considered by Sasha (Majorana algebras), the distinguished idempotents have eigenvalues 0, 1, 1/4 and 1/32. (If you are wondering how an idempotent can have eigenvalues other than 0 and 1, remember that these algebras are non-associative!) Also there are fusion rules implying that there is a C2 grading, where the 1/32-eigenspace has degree −1 and the others +1. Now one can vary both the numerical values of the eigenvalues and the fusion rules, and see what one gets.
The C2 grading implies that the “reflection” negating the elements of degree −1 is an automorphism of order 2; these automorphisms have the property that their pairwise products have order at most 6, so form a normal set of 6-transpositions in the group they generate. Now group theory tells us where to look for examples. (Another case where regularity implies symmetry.) Madeleine, with Markus Pfeiffer in St Andrews, is producing GAP programs to allow the user to play with these fascinating objects.
Roman Nedela asked me to say a few words to close the conference. My main message was that “Symmetry vs Regularity” is not like a football game where someone loses and is eliminated; it is more like a cooperative game. Laci Babai uses group theory to indicate what might be true and where the interesting examples are, before developing techniques to prove things without group theory (and thus get much stronger results).
After the rush of the first two days, things quietened down a bit, so I can stop and look around me.
The conference logo, and an explanation, can be found here. Norman Biggs, one of the pioneers of algebraic graph theory, speculated that perhaps for any feasible parameters for which a mathematical object exists, there would be at least one highly symmetric object with those parameters. This speculation was debunked when the strongly regular graphs on 26 vertices were classified; none of them is vertex-transitive. The particular one used for the logo still has quite a lot of symmetry; its group is A5×C2, with two orbits on vertices, of lengths 6 and 20, which can be interpreted in the icosahedron.
Pilsen is a former industrial town which was populated by Czechs, Germans, and Jews. Because of events surrounding the second world war, the second and third of these peoples have largely gone, and there is not too much industry here apart from beer production. (How many other towns have given their names to a type of beer?) But some fine old buildings remain.
One of these is the town hall; the conference takes place in the Council chambers on the top floor of this building. (The picture in the last post was taken through the windows of this room.) The desks all have microphones which can be activated at the press of a button. We are supposed to use these when asking a question of a plenary speaker, since these sessions are recorded (and will hopefully be made available at some point). But we are not always so disciplined …
On Wednesday we had talks in the morning (only six; it seemed like a holiday), and excursions in the afternoon. The most interesting talk of the morning was the plenary by Edwin van Dam. He explained to us what semidefinite programming is, how it can be used to find bounds for the Max-cut problem, with applications to chromatic number, bandwidth, and other problems. Very relevant to the conference, he showed us how the fact that the relevant matrices lie in an algebra of relatively low dimension can be used to reduce very substantially the amount of computation required to use the method: this applies to graphs in a coherent configuration, graphs with a lot of symmetry, and walk-regular graphs. I suspect that there are techniques here useful for the synchronization project, but not a lot of time to think it through now.
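For reference — this is the standard Goemans–Williamson relaxation and the standard symmetry reduction, written from memory rather than from Edwin’s slides — with edge weights w_{ij} and Laplacian L, the semidefinite bound on the maximum cut is

```latex
\mathrm{MC}(G) \;\le\; \max\Bigl\{\, \tfrac14 \langle L, X\rangle
  \;=\; \tfrac12 \sum_{i<j} w_{ij}\,\bigl(1 - X_{ij}\bigr)
  \;:\; X \succeq 0,\ X_{ii} = 1 \,\Bigr\}.
```

If L lies in a coherent algebra spanned by A_0, …, A_d, one may assume X = x_0 A_0 + ⋯ + x_d A_d (projecting onto the algebra preserves positive semidefiniteness, the unit diagonal, and the objective), so the n(n+1)/2 matrix entries collapse to d+1 unknowns.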
Elena Konstantinova talked about a nice topic, integral graphs (those all of whose eigenvalues are integers). After a historical introduction, she turned to Cayley graphs. Towards the end of the lecture, she mentioned that Problem 19.50(a) of the Kourovka Notebook, which asks whether the Cayley graph of a group G with respect to a normal (conjugation-closed) set of involutions is integral, has a positive solution, produced independently by D. O. Revin and A. Abdollahi, using character theory. She concentrated mainly on the star graph, the Cayley graph of the symmetric group Sn with respect to the set of all transpositions (1,i), for 2 ≤ i ≤ n. This graph is integral, and its eigenvalues and multiplicities are known. More details on this graph were given in the next talk, by Sergey Goryainov.
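As a quick numerical sanity check (my own toy computation, not from the talk), one can build the star graph for small n and test integrality of the spectrum; here n = 4, giving a 3-regular graph on 24 vertices.

```python
import numpy as np
from itertools import permutations

def star_graph_spectrum(n):
    """Spectrum of the star graph: Cayley graph of Sym(n) whose connection
    set is the n-1 transpositions swapping position 0 with position i."""
    perms = list(permutations(range(n)))
    index = {p: k for k, p in enumerate(perms)}
    A = np.zeros((len(perms), len(perms)))
    for p in perms:
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            A[index[p], index[tuple(q)]] = 1   # involutions, so A is symmetric
    return np.linalg.eigvalsh(A)

ev = star_graph_spectrum(4)
assert np.allclose(ev, np.round(ev), atol=1e-8)   # every eigenvalue is an integer
assert round(float(ev.max())) == 3                # top eigenvalue = valency
```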
Thursday was entirely devoted to historical talks or reminiscences. This entire day had been arranged by Misha Klin, whose birthday it was; he had invited the speakers and even allocated topics to them.
The talks were extremely interesting. Some of the pioneers of the subject, such as Boris Weisfeiler, Dale Mesner, and Jaap Seidel, came to life at the hands of the speakers. Other talks traced the varied history of algebraic combinatorics from its roots in the work of Issai Schur and R. C. Bose, the various people who contributed to bringing the strands together (Seidel, Philippe Delsarte), and the strong direction provided by the book of Bannai and Ito.
We also heard about the great difficulties faced by members of the Soviet school pre-1990. The fact that many of them are now scattered all over the world illustrates what happens when governments treat their citizens like that.
But I won’t even attempt to summarise any of these talks. I hope that the slides (and perhaps even videos) will be on the web soon. I will announce this when it happens.
The day had been hot, and in the final session a thunderstorm hit and the rain came pelting down. It eased up just long enough to have the conference photograph taken before setting in again, leaving us the problem of getting to the dinner (in the Pilsner Urquell brewery) in the rain.
The conference on “Symmetry vs Regularity” in Pilsen has begun.
I hesitate at trying to describe the talks, since the first day alone had 13 talks, and the second day 19, of lengths ranging from ten minutes to an hour, with no parallel sessions.
Random thoughts will have to suffice.
The meeting largely concerns the structures variously known as association schemes or coherent configurations. These had at least three independent origins: by R. C. Bose and his students and collaborators in experimental design in statistics; by Issai Schur in permutation group theory; and by Weisfeiler and Leman in studying the graph isomorphism problem (it is the fiftieth anniversary of their paper that we are specifically celebrating at this conference).
Subsequently the methods began to coalesce: Jaap Seidel recognised the connections between Bose’s and Higman’s work, and as communications between the Soviet Union and the west improved we were able to appreciate each other’s contribution. Delsarte applied Bose’s methods to coding theory and design theory, and Bannai and Ito wrote an important book laying out the essentials of the theory.
I began life in Schur’s school in this respect. As a beginning D.Phil student at Oxford, I was given by Peter Neumann his long preprint on extending Wielandt’s theorem on primitive groups of degree 2p (p prime) to degree 3p. Then Donald Higman visited Oxford and lectured on coherent configurations, perhaps his first complete presentation of the theory he had developed; one of the key ingredients, what we now sometimes call the “Bose–Mesner isomorphism” from the algebra generated by the basis matrices to the algebra generated by the intersection matrices, he attributes to Wielandt (see the Appendix of his 1967 paper on intersection matrices). Wielandt was a student of Schur and has much to say on “The method of Schur” and its connection with representation theory in his book on Finite Permutation Groups, which is dedicated to Schur.
Anyway, I was one of two students assigned to take notes of Higman’s lectures (the other was Susannah Howard); the notes, corrected by Higman, were published in the Mathematical Institute Lecture Notes series.
This leads me to the talk by Alexander Gavrilyuk on “Graham Higman’s method”. I bear some of the responsibility for this. Graham Higman gave a formula for the character of an automorphism group of an association scheme on one of its eigenspaces, in terms of (in Delsarte’s terminology) the Q-matrix of the scheme and the functions a_i, where a_i(g) is the number of points x such that the pair (x, x^g) satisfies the ith relation of the scheme. He used this to show that the hypothetical Moore graph of valency 57 could not have a transitive automorphism group. Higman never published this (though he addressed the London Mathematical Society on it), but I thought it was potentially a useful technique and gave it as an example in my Permutation Groups book.
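For the record, here is the formula as I reconstruct it (indexing conventions for the Q-matrix vary): in Delsarte’s notation the projection onto the eigenspace V_j of a scheme on n points is E_j = (1/n)∑_i Q_{ij}A_i, and the trace of a permutation matrix ρ(g) against the basis matrix A_i is exactly a_i(g), so

```latex
\chi_j(g) \;=\; \operatorname{tr}\bigl(\rho(g)\,E_j\bigr)
          \;=\; \frac{1}{n} \sum_i Q_{ij}\, a_i(g).
```

The punch is that χ_j(g), being a character value, must be an algebraic integer (an ordinary integer when the eigenvalues are rational), which gives strong non-existence conditions.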
Meanwhile, C. T. Benson had used it to study automorphisms of generalised quadrangles and beyond, and later authors had extended this to other generalised polygons, perhaps without realising that it works for any association scheme. Two years ago in Singapore, Stefaan de Winter talked about applications of this, and I briefly discussed Higman’s work with him. Alexander did mention some more recent work; it seems people are on the case at last.
Another small observation. Walter Feit and Graham Higman proved their theorem by considering the collinearity graph of the generalised polygon (whose vertices are the points, adjacent if collinear); this graph is distance-transitive, so the association scheme is symmetric and commutative, and eigenvalues tell you everything. Donald Higman gave a new proof of the theorem using the coherent configuration based on flags of the geometry; this is non-commutative and one has to study the more difficult 2-dimensional representations, but the payoff is that one gets more refined information, including “Higman’s inequality” for generalised quadrangles. This approach fits better with the BN-pair structure in the group case.
My account of the Singapore meeting is here.
But there is a small historical mystery here. Did Benson get the idea from the Feit–Higman paper, as Alexander thought possible? (I have looked through the paper and I don’t think this is very likely.) Was there another connection? I am inclined to believe that the two inventions were completely independent. Perhaps it should be the Benson–Higman method?
In the evening there were four short talks. Two nice things for me. Bogdana Oliynyk talked about an extension of an old paper I wrote with Sam Tarzi about diagonal limits of cubes; she and her co-authors have extended it to Hamming spaces over arbitrary finite alphabets. Then Alicia Roca talked about invariant subspaces. There was some nice combinatorics in there, but a key paper included Bill Longstaff as an author, and this took me back. He was an undergraduate with me, and about 40 years ago I met up with him on a visit to Perth, where he told me some of this material. I think it was on the same visit that Cheryl Praeger and I proved the theorem about dual polar graphs that Sasha Ivanov mentioned in his talk opening the conference.
A highlight for me of the second morning was the talk of Oleg Verbitsky, who looked at the Weisfeiler–Leman algorithm from the viewpoint of logic. Success of the k-dimensional version corresponds to a formula in first-order logic with counting quantifiers which characterises the graph up to isomorphism, with k+1 variables; the quantifier depth determines the number of rounds required. This was explained with a very nice Ehrenfeucht game on the pair of graphs. The two players are provided with matching collections of coloured pebbles. The first player is trying to demonstrate non-isomorphism of the two graphs, and the second player to prevent this. The pebbles correspond to the variables in the formula, and the number of rounds of the game to the quantifier depth.
The highlight of the afternoon was the talk by John Wilmes. He was a student of Laci Babai, and he and Xiaorui Sun took the first step since 1980 towards a conjecture of Babai. Back then, Babai proved that primitive but not 2-transitive permutation groups have order bounded by exp(n^{1/2}) (this is slightly inaccurate, since we might have to put some log n factors in the exponent; I will be similarly cavalier throughout this comment). This is best possible, since both the symmetric group acting on 2-sets, and the automorphism group of the square lattice, are primitive groups with order about this value. Indeed, Babai’s bound and reality differ by a constant and a single log factor in the exponent. But actually Babai proved much more; his bound applies to the automorphism group of any non-trivial coherent configuration.
About the same time, the Classification of Finite Simple Groups was announced, and using this, Babai’s result for groups could be deduced. Indeed, much stronger results can be proved; with “known” exceptions (essentially powers of the symmetric or alternating group acting on k-sets, with some group permuting the coordinates), the orders of primitive groups don’t exceed n^{log n + 1}. Babai conjectured that, with a similar list of exceptions, the order of the automorphism group of a primitive coherent configuration cannot exceed exp(n^ε), for every ε > 0. His theorem gave ε = 1/2; and, of course, CFSG is no help here, except perhaps as a guiding principle.
Now Sun and Wilmes have reduced this to ε = 1/3. So, in particular, they have to deal with the two examples with very large automorphism groups, which they do by building “clique geometries” for them, which turn out to be the partial geometries in the sense of Bose (who was perhaps the first to use geometric arguments on cliques to characterise strongly regular graphs). It is a beautiful combinatorial argument; but here I will only say, you should have been there! It was also a beautifully planned and executed talk, with just the amount of detail that could be comfortably fitted into the time allowed.
One thing from the evening session: Mikhail Kagan was using resistance distance as an alternative to the Weisfeiler–Leman algorithm. This is something I have an interest in, as it connects (among other things) with statistical optimality. I would like to see the result of resistance distance applied to the collaboration graph!
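For anyone who wants to experiment: resistance distance is immediate from the Moore–Penrose pseudoinverse of the graph Laplacian, via r(u,v) = L⁺_{uu} + L⁺_{vv} − 2L⁺_{uv}. A small sketch (mine), checked against the 4-cycle:

```python
import numpy as np

def resistance_matrix(A):
    """All-pairs resistance distances of the graph with adjacency matrix A,
    via the pseudoinverse L+ of the Laplacian:
    r(u,v) = L+[u,u] + L+[v,v] - 2*L+[u,v]."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    d = np.diag(Lp)
    return d[:, None] + d[None, :] - 2 * Lp

# 4-cycle: adjacent vertices see resistances 1 and 3 in parallel, 3/4;
# opposite vertices see 2 and 2 in parallel, 1
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
R = resistance_matrix(C4)
assert np.isclose(R[0, 1], 0.75) and np.isclose(R[0, 2], 1.0)
```

One could then feed the matrix of resistance distances into the same kind of partition refinement that WL performs, and compare the results.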
The BMC was followed by three satellite meetings, on Semigroups and Applications, Geometric Group Theory, and Ergodic Theory. I had a lot of catching up and other jobs to do, and so was unable to go to more than a few of these talks. I managed three.
Gareth Tracey talked about crowns in finite group theory. If we are studying minimal generating sets in finite groups, it is useful to understand groups G which require more generators than any proper quotient; these are crowns. Given an arbitrary group G, for any normal subgroup N, we know that the number of generators of G/N does not exceed the number for G; so, if we choose N maximal such that equality holds, then G/N is a crown. Gareth has used his results on crowns for various things; in particular, he gave us a preview of his result with Colva Roney-Dougal on the number of subgroups of the symmetric group Sn.
Tom Coleman told us about his work (much of it in his recent PhD thesis) on generation, cofinality and strong cofinality, and the Bergman property for various transformation semigroups on an infinite set, such as all maps, injective maps, surjective maps, bijective maps, partial maps, and bijective partial maps on a countable set, and analogous things for homomorphisms etc of the random graph. The Bergman property for a structure asserts that, if S is a generating set, then any element of the structure is a word of bounded length in elements of S.
Justine Falque talked about her work with Nicolas Thiéry on the orbit algebra of a permutation group in the case where the number of orbits on n-sets is bounded by a polynomial in n. They show that the algebra is finitely generated and Cohen–Macaulay, so that the generating function for the number of orbits on n-sets is a rational function of the form P(x)/∏(1 − x^{d_i}), where P is a polynomial. I have already discussed this here; I have read the short version on the arXiv and eagerly await the full version.
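For a finite permutation group the orbit-counting series is of course just a polynomial, but the machinery is pleasant to play with: by the Cauchy–Frobenius (“Burnside”) lemma, the number of orbits on n-sets is the average over g in G of the coefficient of x^n in ∏(1 + x^ℓ), the product over the cycle lengths ℓ of g. A sketch (mine, for illustration) for the cyclic group C4:

```python
def cycle_lengths(p):
    """Cycle lengths of the permutation i -> p[i]."""
    seen, lens = set(), []
    for s in range(len(p)):
        if s not in seen:
            length, x = 0, s
            while x not in seen:
                seen.add(x)
                x = p[x]
                length += 1
            lens.append(length)
    return lens

def orbit_series(group, deg):
    """Coefficients 0..deg of sum_n (#orbits on n-sets) x^n, by averaging
    prod over cycles of (1 + x^len) over the group (Cauchy-Frobenius)."""
    coeffs = [0] * (deg + 1)
    for p in group:
        poly = [1]                            # running product of (1 + x^l)
        for l in cycle_lengths(p):
            new = poly + [0] * l
            for i, c in enumerate(poly):
                new[i + l] += c               # multiply by (1 + x^l)
            poly = new
        for i, c in enumerate(poly[: deg + 1]):
            coeffs[i] += c
    return [c // len(group) for c in coeffs]

c4 = [tuple((i + k) % 4 for i in range(4)) for k in range(4)]  # cyclic group C4
assert orbit_series(c4, 4) == [1, 1, 2, 1, 1]  # orbits on 0-,1-,2-,3-,4-subsets
```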
The day began with the plenary talk by Paul Seymour, whom I’ve known for longer than almost anyone else at the meeting.
He explained that there are many different “ordering” relations on graphs; the two he considers most important are the minor and induced subgraph orders. Roughly speaking, he worked on the minor ordering in the second millennium and the induced subgraph ordering in the third.
An induced subgraph of a graph is obtained by throwing away some vertices and the edges incident with them; you are not allowed to throw away an edge within the set of vertices you are keeping. Paul began with the general problem: given a graph H, can you determine the structure of graphs G containing no induced copy of H?
The first issue is what you mean by “structure” here. Paul thinks of it as a construction for such graphs. But even this needs explaining. You can construct all triangle-free graphs by taking a set of vertices and adding edges, taking care never to create a triangle. But clearly this is a hopeless construction and tells you nothing about triangle-free graphs!
The answer is known in embarrassingly few cases:
And that’s it! Not even for a 4-cycle is the answer known. There is a structure theory for bull-free graphs modulo the structure of triangle-free graphs and their complements, which again is not easy. (The bull is a triangle with horns: pendant edges attached at two of its three vertices.)
If we are content with less than a complete description, there are various conjectures you can try: does a forbidden subgraph force the existence of a “large” complete or null subgraph? For arbitrary graphs on n vertices we can’t do better than log n (Ramsey’s theorem together with the probabilistic method); can we achieve n^c (“polynomial”), or even cn (“linear”)? Can we find two large sets with all or no possible edges between them?
Another question asks for χ-boundedness, the property that the chromatic number χ(G) is bounded by a function of the clique number ω(G). The strong perfect graph theorem, proved by Paul and collaborators, established the conjecture of Berge: if G contains no odd hole (induced odd cycle of length at least 5) or odd antihole (induced complement of one), then G is perfect, so that every induced subgraph has equal clique and chromatic numbers. He went on to discuss some new and powerful results he and others have proved, where we forbid just some holes.
There was a lot more but that will do for now.
David Conlon’s advertised title was “How to build a hypergraph expander”, and he did indeed show us this in the second part of his talk. But his real subject was “The unreasonable effectiveness of mixing algebra and probability”. The first half of his talk mainly concerned the function ex(n,H), the maximum number of edges in a graph G on n vertices not containing H as a (not necessarily induced) subgraph. The Erdős–Stone theorem gives as an upper bound a fraction (1 − 1/(χ(H)−1) + ε) of all edges of Kn, and Turán graphs show this is tight. The problem occurs when χ(H) = 2, that is, H is bipartite, when the theorem only gives a fraction o(1), that is, o(n^2) edges. There is some evidence that the correct asymptotics should be around cn^{2−1/s}, where s is the size of the smaller bipartite block of H. This is known to hold in a few cases, including K2,2 and K3,3, where algebraic constructions over finite fields realise the lower bound. The probabilistic method straightforwardly gives only about n^{2−1/s−1/t}. David and coauthors have improved this by a method involving choosing “random algebraic varieties”.
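To see the flavour in miniature (my own toy computation, not from the talk): for H = K2,2 = C4, a graph contains a C4 subgraph exactly when some pair of vertices has two common neighbours, so brute force over all graphs is painless for tiny n, reproducing the known values ex(n, C4) = 1, 3, 4, 6 for n = 2, …, 5.

```python
from itertools import combinations

def ex_c4(n):
    """Brute-force ex(n, C4): maximum number of edges of an n-vertex graph
    in which every pair of vertices has at most one common neighbour."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):      # every graph on n vertices
        adj = [set() for _ in range(n)]
        for k, (u, v) in enumerate(pairs):
            if mask >> k & 1:
                adj[u].add(v)
                adj[v].add(u)
        if all(len(adj[u] & adj[v]) <= 1 for u, v in pairs):
            best = max(best, bin(mask).count("1"))
    return best

assert [ex_c4(n) for n in range(2, 6)] == [1, 3, 4, 6]
```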
For hypergraph expanders, even the best definition is not known: spectral, combinatorial, rapid mixing of some random walk, topological (Gromov), coboundary, …? David has constructions involving rapid mixing of a random walk on pairs on the neighbourhoods of vertices in a Cayley graph in an elementary abelian 2-group. As he remarked, received wisdom is that abelian groups are bad for rapid mixing, but the commutative property was essential in his argument.
Last before lunch was a beautiful talk by Simon Smith on infinite permutation groups. He assumed some topological knowledge: he likes his groups totally disconnected and locally compact (tdlc), since the study of locally compact groups reduces to two cases, Lie groups and tdlc groups. His permutation groups were transitive and subdegree-finite (this means that a point stabiliser has all its orbits finite – automorphism groups of locally finite connected graphs are examples). There is a close connection: a closed subdegree-finite group is tdlc, and a tdlc group has a natural action as a subdegree-finite permutation group.
An important construction for finite permutation groups is the wreath product in its product action. Simon has another construction which he calls the box product. If X and Y are the domains of G and H, take a collection of copies of X, with complete graphs on each; and glue them together, any two meeting in one point and the copies through any point indexed by Y. Then there is a way to make a group act on this graph, so that the stabiliser of a lobe (copy of X) has G acting, while the stabiliser of a point has H acting on the lobes through that point. This is the box product. He has a theorem for when the box product is primitive, mirroring what happens for wreath product, and an O’Nan–Scott type theorem for primitive subdegree-finite groups.
After lunch and the (brief) business meeting of the Colloquium, I returned to the algebra workshop to hear Brent Everitt talking about three different ways to attach (co)homology to a hyperplane arrangement (a finite set of hyperplanes in a vector space). Note that the hyperplanes and their intersections have the structure of a lattice. The first method is simply to remove the hyperplanes and consider what’s left. Over the real numbers, this falls into many pieces, but over the complex numbers it is connected, and the Orlik–Solomon theorem gives its cohomology over the integers: there is no torsion, and the Betti numbers are expressible in terms of the Möbius function of the lattice.
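Concretely (a textbook illustration, not from Brent’s talk), the Orlik–Solomon theorem packages the Betti numbers of the complement into the Poincaré polynomial

```latex
\pi(\mathcal{A}, t) \;=\; \sum_{x \in L(\mathcal{A})} \bigl|\mu(\hat{0}, x)\bigr|\, t^{\operatorname{codim} x},
```

so for the Boolean arrangement of the n coordinate hyperplanes in C^n, where the intersection lattice is the Boolean lattice and |μ| = 1 at every element, π(A,t) = ∑_k C(n,k) t^k = (1+t)^n — as it must be, since the complement is the torus (C^×)^n.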
The second method is to form the chain complex of the lattice (with top and bottom elements removed), the simplicial complex whose simplices are the chains. This is homotopy-equivalent to a bunch of spheres; the Goresky–MacPherson theorem connects its reduced homology to the previous case.
The third method puts a sheaf of R-modules on the lattice, and takes its simplicial cohomology with local coefficients. There is a canonical sheaf: each element of the lattice is a subspace; just use that subspace.
I ducked out and went to the history stream to hear Rosemary Bailey talk about Latin squares. Much of it was familiar to me, but there were some eye-openers, including the use in an experiment on diets and slaughtering times for sheep, reported in France in 1770, a possible proof (now lost) of the non-existence of orthogonal Latin squares of order 6 by Clausen in 1842, and a row between Fisher and Neyman about whether using a Latin square in an experiment gives biased results (with a shocking recent tailpiece).
I missed the London Mathematical Society meeting because I had a visitor; then there was a wine reception followed by a very nice dinner. I wish I had felt well enough to enjoy it more; but at least I could eat it, which is a sign of progress!
The last morning had just two morning talks and one plenary.
The first morning talk I went to was by Vicky Gould, who gave us a crash course in semigroup theory followed by the biorder on the idempotents of a semigroup and the “free idempotent-generated semigroup” generated by a biordered set. This is a fairly recent topic where the arguments are not straightforward. It is a free object whose generators correspond to the idempotents, and relations ef = g if the product of e and f is g in the original semigroup (or, abstractly, if e and f are comparable in one of the orders). Nice results but I won’t attempt to confuse you with a proper account.
Second, Daniel Král’ talked about graphons and their analogues for permutations, permutons. A sequence (Gn) of, say, graphs is said to converge if the probability that a random set of size |H| in Gn induces a copy of H converges for all graphs H. So what does the sequence converge to? This is more difficult, and was the problem solved by Borgs, Chayes, and Lovász by inventing graphons. It appeared that graphons often correspond to solutions to extremal graph problems which are either unique, or can be made so by adding extra relations. But Dan and his coauthors have shown that this is very far from being the case; he showed us pictures of some enormously complicated graphons refuting the obvious conjectures.
After coffee we had the final talk of the conference, a plenary on “Products of random matrices” by Marcelo Viana. Furstenberg and Kesten showed that certain limits always exist; these turned out to be the extremal Lyapunov exponents. Marcelo and collaborators showed that these numbers depend continuously on the input. This can be widely generalised, to distributions on the group GL(d), and to things called linear cocycles. In the first case it is not enough that the distributions are close in the weak* topology; their (compact) supports must also be close in the Hausdorff topology. He showed us examples to demonstrate this. Most of the results have generalisations in some form. It was very clear, but again my brain was full.
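The Furstenberg–Kesten limit can be estimated numerically by multiplying i.i.d. random matrices and renormalising at each step. This is a sketch of my own, not from the talk; the example used is the random Fibonacci recurrence x<sub>n+1</sub> = x<sub>n</sub> ± x<sub>n−1</sub>, whose growth rate is Viswanath’s constant 1.13199…, so the estimate should approach its logarithm.

```python
import math
import random

def top_lyapunov(sample_matrix, n=100_000, seed=1):
    # Estimate the top Lyapunov exponent via the Furstenberg-Kesten
    # limit (1/n) log ||A_n ... A_1 v||, renormalising the vector at
    # each step to avoid overflow.
    rng = random.Random(seed)
    v = (1.0, 0.0)
    log_norm = 0.0
    for _ in range(n):
        a, b, c, d = sample_matrix(rng)
        v = (a * v[0] + b * v[1], c * v[0] + d * v[1])
        norm = math.hypot(*v)
        log_norm += math.log(norm)
        v = (v[0] / norm, v[1] / norm)
    return log_norm / n

# Random Fibonacci: products of [[1, +/-1], [1, 0]] with fair random signs.
est = top_lyapunov(lambda rng: (1, rng.choice([-1, 1]), 1, 0))
print(est)  # should approach log 1.13199 (about 0.124)
```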
Things may be a bit briefer from now on.
The first plenary talk was by Irit Dinur on the unique games conjecture. I think I understood roughly what this conjecture says, but what it has to do with games, unique or otherwise, still escapes me completely.
Some NP-hard problems are constraint satisfaction problems; examples include 3-COL (is a given graph 3-colourable?) and 3-SAT (is a conjunctive normal form formula where each clause has 3 literals satisfiable?). In each case, if the answer is no, we could ask for the largest fraction of constraints which can be satisfied. For example, with 3-SAT, there is one constraint for each clause. By assigning Boolean values randomly to the variables, we see that seven of the eight assignments to the variables of a clause will make it true, and so the expected number of satisfied clauses is at least a fraction 7/8 of the total. In the same way, an edge gives a constraint on a colouring (its ends must have different colours), and so a random colouring will on average have at least 2/3 of the edges properly coloured.
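Both counts can be checked by brute force; this is a small sketch of my own, not from the talk:

```python
from itertools import product

# 3-SAT: a clause with three distinct literals is falsified by exactly
# one of the 8 truth assignments to its variables, so a uniformly
# random assignment satisfies it with probability 7/8.
sat_assignments = sum(a or b or c
                      for a, b, c in product([False, True], repeat=3))
print(sat_assignments, "of 8 assignments satisfy a 3-literal clause")

# 3-COL: an edge is properly coloured when its ends get different
# colours; 6 of the 9 colour pairs, i.e. probability 2/3.
proper_pairs = sum(u != v for u, v in product(range(3), repeat=2))
print(proper_pairs, "of 9 colour pairs are proper")
```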
The PCP theorem says that there exists a positive δ such that it is NP-hard to distinguish between “all constraints satisfied” and “a fraction smaller than 1−δ of constraints satisfied”.
The unique games conjecture says something about a particular kind of CSP called “label cover with 1:1 constraints”, asserting that it is hard to distinguish between fractions ε and 1−ε of constraints satisfied. The breakthrough result is this with the upper value replaced by (1/2)−ε. This has a lot of consequences in related areas of complexity, but for fear of getting things wrong I will say no more.
After that, there was a wonderful talk by Clifford Cocks on the history of public-key cryptography. James Ellis at GCHQ had discovered the principles of public-key cryptography in 1969, and Cocks, arriving shortly afterwards, had constructed a practical scheme for doing it, almost identical with what we now call RSA (although technology wasn’t able to do the business until the 1980s); a later arrival, Malcolm Williamson, discovered another method, almost precisely the same as Diffie–Hellman key exchange.
On the other side of the Atlantic, Whitfield Diffie gave up his job in 1972 to learn about cryptography, and began working with Martin Hellman in 1974; by 1976 they had laid down the principles of PKC and had discovered Diffie–Hellman key exchange. Rivest, Shamir and Adleman read their paper and got to work; Rivest and Shamir tried many ideas, and Adleman knocked them all down, until they came up with the RSA method. In one respect the Americans had advanced beyond the British: Diffie and Hellman realised the importance of authentication, and had discovered a method for providing it.
Having had a morning of computer-related things, I missed Mike Fellows’ talk on parameterized complexity, and listened instead to Marta Mazzocco on the geometry of the q-Askey scheme. This is a huge mountain, whose lower slopes on one side I have wandered among; Marta was approaching from the other side, and I am afraid that the landscape was almost completely unfamiliar to me. The q-Askey scheme is a poset with 29 boxes, each containing a type of orthogonal polynomials, with Askey–Wilson and q-Racah at the top. She talked mostly about the left side, and I realised that the polynomials I am more familiar with, such as Krawtchouk and Hahn, were all on the other side. Her talk was full of discrete versions of Painlevé equations (six important families of non-linear differential equations), described by “chewing gum moves” on a certain Riemann surface; she has a duality which produces new information.
At lunchtime the rare books collection had put on a very nice exhibition, with works of (among many others) Pacioli, Fermat, Gauss, and Mary Somerville (who grew up in Burntisland).
After lunch, Nalini Joshi talked about building a bridge (motivated by the famous Harbour Bridge in her hometown Sydney) between apparently unrelated works by Japanese and European mathematicians. The talk began slowly with root systems and reflection groups, but picked up speed; I don’t feel competent to explain much of it, I’m afraid.
Then, not entirely by design, I found myself in the Algebra workshop. Tim Burness began with a talk on the length and depth of finite groups. The length of a finite group is the length of the longest chain of subgroups of the group. This is something I am interested in; in the early 1980s I found a beautiful formula for the length of the symmetric group Sn: take 3n/2, round up, subtract the number of 1s in the base 2 expansion of n, and subtract 1 from that.
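The recipe in the last sentence is easy to write down directly; here is a quick sketch of my own (the function name is illustrative):

```python
import math

def length_symmetric_group(n):
    # The formula from the text: round 3n/2 up, subtract the number
    # of 1s in the base-2 expansion of n, then subtract 1.
    return math.ceil(3 * n / 2) - bin(n).count("1") - 1

# For example, S4 > A4 > V4 > C2 > 1 is a longest chain in S4,
# giving length 4.
print([length_symmetric_group(n) for n in range(1, 7)])
# -> [0, 1, 2, 4, 5, 6]
```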
The length has many nice properties; it is monotonic, and additive over composition factors. Tim has defined another parameter which he calls the depth, the minimum length of an unrefinable chain of subgroups (that is, each one maximal in the next). This lacks these nice properties, and behaves very differently from length. For example, the depths of finite symmetric groups are bounded above by 24; the nice proof of this uses Helfgott’s proof of the Ternary Goldbach conjecture! But the depths of finite simple groups are not bounded above. They have some results for algebraic groups as well (where the subgroups in the chain are restricted to be closed and connected).
Some of my colleagues are interested in something called the partition monoid. Maud de Visscher talked about the partition algebra, a slightly twisted version of the monoid algebra of the partition monoid. This arose in statistical mechanics; they have used it in representation theory, to get a better understanding of certain numbers called Kronecker coefficients arising from diagonal actions of symmetric groups on tensor products of Specht modules.
Finally, Marianne Johnson showed that the 2-variable identities of the bicyclic monoid (with two generators p and q satisfying the relation pq = 1) are identical to those of the 2×2 upper triangular matrices over the tropical semiring. A lovely and unexpected result!
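One can at least watch this result in action. The sketch below (my own, assuming the (max, +) convention for the tropical semiring) checks Adjan’s identity for the bicyclic monoid, xyyxxyxyyx = xyyxyxxyyx, on random upper triangular tropical matrices:

```python
import random

NEG = float("-inf")  # the tropical "zero"

def trop_mul(A, B):
    # 2x2 matrix product over the tropical (max, +) semiring:
    # addition is max, multiplication is +.
    return [[max(A[i][0] + B[0][j], A[i][1] + B[1][j]) for j in range(2)]
            for i in range(2)]

def evaluate(word, x, y):
    # Evaluate a word in the letters 'x', 'y' as a tropical matrix product.
    result = [[0, NEG], [NEG, 0]]  # tropical identity matrix
    for letter in word:
        result = trop_mul(result, x if letter == "x" else y)
    return result

# Adjan's identity for the bicyclic monoid.
U, V = "xyyxxyxyyx", "xyyxyxxyyx"

random.seed(0)
for _ in range(1000):
    a, b, c, d, e, f = (random.randint(-10, 10) for _ in range(6))
    x = [[a, b], [NEG, c]]  # random upper triangular tropical matrices
    y = [[d, e], [NEG, f]]
    assert evaluate(U, x, y) == evaluate(V, x, y)
print("Adjan's identity holds on all sampled pairs")
```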
The last event of the day was speed talks by PhD students, but your correspondent’s brain was full, so I cut this session.