## Two combinatorics colloquia

One of the rituals of Spring for combinatorialists in the UK is meeting in London in May for two one-day colloquia, one at Queen Mary and the other at LSE.

As always, this year’s colloquia provided very good instruction and entertainment. And, as always in my descriptions, I will not cover the talks uniformly!

At the QM day, the first talk was by Stéphan Thomassé, who was very complimentary about the state of combinatorics in the UK, and indeed quoted recent results by Keevash, Kühn and Osthus in his talk. It was a lovely talk. He started with the context, which is nowhere-zero flows in graphs. A k-flow (short for “nowhere-zero k-flow”) is a flow on an orientation of the graph which takes values in the set {1,…,k−1}, such that the flow into any vertex equals the flow out. (A result of Tutte, which has always smacked of magic to me, is that we can take the values of the flow to be non-zero elements of any abelian group of order k, without changing the class of graphs having such a flow.) Tutte made two conjectures, both still open:

• a 2-edge-connected graph has a 5-flow;
• a 4-edge-connected graph has a 3-flow.

Stéphan remarked that a sensible way to weaken the conjectures is to replace 5 (in the first) and 4 (in the second) by larger numbers. The first was achieved by Kirkpatrick (in his thesis, which was ignored) and five years later by Jaeger, giving the value 8. The proof is so nice that I will give it in outline below. Seymour reduced 8 to 6, and there matters stand.

For the 8-flow theorem, we use the abelian group formulation, and show first that we can assume the graph is 3-edge-connected (if there is an edge cut of size 2, the two edges carry equal flow in opposite directions, so we may cut them and join the ends to get smaller graphs). Now double the edges of the graph to obtain a 6-edge-connected graph. A theorem of Nash-Williams shows that this graph has three edge-disjoint spanning trees; their complements can be enlarged to Eulerian subgraphs (see below). Now define a flow in (Z2)3 by putting a flow (a,b,c) on an edge e, where a is 1 if the edge lies in the first Eulerian subgraph and 0 otherwise, and similarly for the other two entries. It is easily checked that this is a flow; and it is nowhere zero, since each edge lies in at most one of the trees, hence in the complements of at least two, hence in at least two of the Eulerian subgraphs.
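The final step of this argument can be checked mechanically on a small example. The sketch below is my own illustration, not from the talk: in K4 the three 4-cycles are Eulerian subgraphs which together cover every edge (each edge lies in exactly two of them), and the resulting (Z2)3-valued labelling is nowhere zero and conserved at every vertex. Over Z2 the orientation is irrelevant, so conservation in each coordinate just says that the corresponding subgraph has all degrees even.

```python
from itertools import combinations

# K4 on vertices 0..3; its six edges as sorted pairs.
edges = list(combinations(range(4), 2))

# Three Eulerian (even-degree) subgraphs of K4: the three 4-cycles.
cycles = [
    [(0, 1), (1, 2), (2, 3), (0, 3)],
    [(0, 1), (1, 3), (2, 3), (0, 2)],
    [(0, 2), (1, 2), (1, 3), (0, 3)],
]

def degrees(sub):
    deg = [0] * 4
    for u, v in sub:
        deg[u] += 1
        deg[v] += 1
    return deg

# each chosen subgraph really is Eulerian (all degrees even)
for c in cycles:
    assert all(d % 2 == 0 for d in degrees(c))

# flow value on edge e: the triple (a,b,c), coordinate 1 iff e lies in that subgraph
flow = {e: tuple(1 if e in c else 0 for c in cycles) for e in edges}

# nowhere-zero: every edge lies in at least one of the subgraphs
assert all(any(f) for f in flow.values())

# conservation in (Z_2)^3: the XOR of incident flow values vanishes at each vertex
for v in range(4):
    s = [0, 0, 0]
    for e in edges:
        if v in e:
            s = [(x + y) % 2 for x, y in zip(s, flow[e])]
    assert s == [0, 0, 0]
```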

[Why can the complement of a spanning tree be enlarged to an Eulerian subgraph? This depends on the fact that any 2m vertices of a tree can be “matched up” by m edge-disjoint paths – an easy exercise, try it! Take the set of vertices of odd degree in the complement of the tree (there is an even number of them), and add the joining paths to this complement.]
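If you don't feel like doing the exercise by hand, it can at least be confirmed by brute force on a toy case. The sketch below (my own, with a hypothetical six-vertex tree) tries every pairing of four marked vertices and checks that at least one pairing has pairwise edge-disjoint joining paths.

```python
# A hypothetical little tree on vertices 0..5, as adjacency lists,
# with four marked vertices to be matched up by edge-disjoint paths.
tree = {0: [1, 2], 1: [0, 3, 4], 2: [0, 5], 3: [1], 4: [1], 5: [2]}
marked = [2, 3, 4, 5]

def path_edges(u, v):
    # the unique u-v path in the tree, returned as a set of edges
    stack = [(u, [u])]
    while stack:
        x, p = stack.pop()
        if x == v:
            return {frozenset((p[i], p[i + 1])) for i in range(len(p) - 1)}
        for y in tree[x]:
            if len(p) < 2 or y != p[-2]:      # don't walk back to the parent
                stack.append((y, p + [y]))

def pairings(vs):
    # all ways to split an even-sized list into unordered pairs
    if not vs:
        yield []
        return
    a, rest = vs[0], vs[1:]
    for i, b in enumerate(rest):
        for tail in pairings(rest[:i] + rest[i + 1:]):
            yield [(a, b)] + tail

good = [P for P in pairings(marked)
        if all(path_edges(*P[i]).isdisjoint(path_edges(*P[j]))
               for i in range(len(P)) for j in range(i + 1, len(P)))]
assert good  # at least one pairing has pairwise edge-disjoint paths
```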

For the other conjecture, Carsten Thomassen proved in 2012 that an 8-edge-connected graph has a 3-flow. According to Stéphan, Thomassen had been working on this problem for 40 years! Along the way, he came up with another nice conjecture. Given a tree T, there is a number c (depending on T) such that a c-edge-connected graph, whose number of edges is divisible by the number of edges of T, can be decomposed into copies of T. He proved this for stars, but paths seemed much harder.

Stéphan and his co-authors have shown that 24-edge-connected is enough, provided we add a minimum degree condition: given a path P, there is a number d (depending on P) such that a 24-edge-connected graph with minimum degree at least d, whose number of edges is divisible by the number of edges of P, can be decomposed into copies of P.

The next talk was by Alex Scott; a very nice talk, but very similar to his Glasgow talk which I already described.

After lunch we had a nice survey by Olof Sisask (formerly at Queen Mary) of some new results in additive combinatorics, depending on a “continuity” condition for convolutions; then a talk by Anita Liebenau on the minimum degree of minimal Ramsey graphs (graphs in which, say, any r-edge-colouring contains a monochromatic copy of the complete graph of order k, but when a vertex or edge is deleted this is no longer true).

Then we had the highlight of the day, for me at least: another proof of the existence of t-designs, about which Peter Keevash talked last year. This one was by Ron Peled, and the approach was completely different. To say that a certain kind of design exists is equivalent to saying that, if you choose the blocks independently at random with some fixed probability, then there is a non-zero probability (maybe quite small!) of obtaining such a design. Now suppose we could estimate this probability, with techniques from probability theory, and show that it was non-zero: wouldn’t that be nice? Well, that is what Ron and his co-authors have done.

It is interesting to contrast their result with Keevash’s. He was able to show that t-(v,k,λ) designs exist for all sufficiently large v satisfying the usual necessary conditions (for fixed t,k,λ). In particular, Steiner systems (λ = 1) exist whenever the appropriate conditions hold and v is large enough. He wouldn’t be drawn on how large is large enough, though he claimed it is not astronomically large.

By contrast, Peled and his co-authors require λ to be large, so no Steiner systems (though they give a formula for how large it has to be, containing an undetermined constant). But they don’t require t and k to be fixed; for example, k might be as large as the square root of v. Moreover, their estimate for the probability translates into an estimate of how many such designs there are: it is a function of the parameters times (1+o(1)), where the o(1) is explicit.

The method also constructs other things such as orthogonal arrays and uniformly t-transitive sets of permutations.

I will briefly outline how it works. Let M be an integer matrix with rows and columns indexed by sets B and A respectively (and think of B as typically much larger than A). Let x be the vector obtained by averaging the rows of M. We look for a relatively small set of rows so that the average of the rows in this set is also x. The theorem guarantees that this can be done, under suitable conditions:

1. the entries of M are bounded by a constant;
2. the space orthogonal to the row space has a basis of integer vectors with small l1-norm (bounded by another constant: this is related to the LDPC, or low-density parity check, condition of coding theory, in ways which are not yet understood);
3. the matrix has a group of symmetries acting transitively on the rows.

This is proved by analysing a certain random walk on the lattice spanned by the rows of the matrix, showing that the three conditions imply a “local central limit theorem” for the limiting distribution of the random walk.

For the design application, we take M to be the incidence matrix of k-sets against t-sets in a set of size v. The average of all the rows is a constant vector; and a subset of rows is a t-design if and only if the average over this subset is also a constant vector (necessarily the same).
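This criterion is easy to see in the smallest interesting case. The sketch below (my own illustration) takes v = 7, k = 3, t = 2, so the rows of M are indexed by triples and the columns by pairs; the average of all 35 rows is the constant vector (1/7,…,1/7), and the 7 lines of the Fano plane form a subset of rows with the same average, witnessing that they form a 2-design.

```python
from itertools import combinations
from fractions import Fraction

v, k, t = 7, 3, 2
tsets = list(combinations(range(v), t))    # columns: the 21 pairs
blocks = list(combinations(range(v), k))   # rows: the 35 triples

def row(B):
    s = set(B)
    return [1 if set(T) <= s else 0 for T in tsets]

allrows = [row(B) for B in blocks]

# the average of all the rows is a constant vector (each entry 1/7 here)
avg = [Fraction(sum(r[j] for r in allrows), len(blocks))
       for j in range(len(tsets))]
assert all(a == Fraction(1, 7) for a in avg)

# the 7 lines of the Fano plane: a small subset of rows...
fano = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]
favg = [Fraction(sum(row(B)[j] for B in fano), len(fano))
        for j in range(len(tsets))]
# ...whose average is the same constant vector: so it is a 2-design
assert favg == avg
```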

The final talk, by Greg Sorkin, was on “VCG auctions”; I confess that I found it more interesting than I had expected.

The second day was as usual at LSE. The weather had changed for the worse but enthusiasm was undampened. But I am getting tired, so my write-ups may be shorter.

Apart from the British Combinatorial Committee, who had a meeting before the start of the talks, we kicked off with two talks on the connection between combinatorics and statistical mechanics.

Will Perkins started by giving a “birthday inequality”, an inequality for the probability that no two of n people share a birthday. The bound came from noting that the probability that the next person has a different birthday from all those considered so far is minimised when all these have different birthdays: he called this the “repulsion inequality”. Then he considered the hard spheres problem. The goal is to find the probability that n spheres of fixed radius r/2 placed in the d-dimensional torus don’t overlap. An upper bound for this probability would follow from a similar “repulsion inequality”, which would say that the proportion of the torus covered by the spheres of radius r with the same centres is maximised if the smaller spheres are pairwise disjoint. A reasonable-looking statement, but much harder: he can prove it in certain ranges, but it is not true in general! Indeed, it fails in 24 dimensions, and the reason is the Leech lattice, which gives a very dense sphere-packing.
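For the birthday part, the exact probability is precisely a product of those conditional “next person differs” factors. A quick sketch (mine, just illustrating the product, not Perkins's inequality): for n = 23 the product comes out at about 0.4927, the classic surprise.

```python
import random
from math import prod

def p_no_shared(n, days=365):
    # chain of conditional factors: person i+1 differs from the
    # i distinct birthdays seen so far with probability 1 - i/days
    return prod(1 - i / days for i in range(n))

p = p_no_shared(23)
assert 0.492 < p < 0.494   # the classic value, about 0.4927

# Monte Carlo sanity check of the same probability
random.seed(1)
trials = 20000
hits = sum(len(set(random.randrange(365) for _ in range(23))) == 23
           for _ in range(trials))
assert abs(hits / trials - p) < 0.02
```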

Next, Lex Schrijver (who was introduced with his name pronounced correctly by Jan van den Heuvel) talked about some variations of a result he proved a few years ago with Freedman and Lovász. Given any real symmetric n×n matrix A, one can define a partition function on finite graphs. He gave some conditions that such a partition function must satisfy; the theorem asserts that these conditions characterise partition functions. (The conditions are slightly different over the real and the complex numbers, a feature which recurred in his talk.) He went on to analogous situations: first the analogue for edge colourings (thinking of the classical case as vertex colourings); then link invariants. There are connections with graph limits and with Vassiliev knot invariants.
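The partition function in question can be written down in a few lines: sum over all maps from vertices to {1,…,n}, weighting each edge by the corresponding matrix entry. As a toy sketch of my own: taking A to be the n×n matrix with zero diagonal and ones elsewhere, the partition function counts proper n-colourings.

```python
from itertools import product

def partition_function(A, num_vertices, edges):
    # Z_A(G) = sum over maps phi: V -> {0,..,n-1} of
    #          prod over edges uv of A[phi(u)][phi(v)]
    n = len(A)
    total = 0
    for phi in product(range(n), repeat=num_vertices):
        w = 1
        for u, v in edges:
            w *= A[phi[u]][phi[v]]
        total += w
    return total

# A = J - I: zero diagonal, ones off-diagonal, so an edge contributes
# weight 0 exactly when its endpoints get the same colour
A = [[0 if i == j else 1 for j in range(3)] for i in range(3)]
Z = partition_function(A, 3, [(0, 1), (1, 2), (0, 2)])
assert Z == 6   # proper 3-colourings of a triangle: 3! = 6
```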

After lunch, we heard from Frank Vallentin about packing superspheres in 3-space. He began with a survey of packing spheres. A supersphere (a notion invented and popularised by Piet Hein) is a ball in the lp metric, for p > 2. As with Hales’ result for spheres, a lot of cleverness and some big computer calculations are required.

Then Christina Goldschmidt told us about scaling limits of Galton–Watson trees. The simplest model gives “random compact R-trees”, which are almost surely 3-branching everywhere, but more complicated models allow higher-order branching. The simple model is related to Brownian motion in a way she explained, and the more complicated models to Lévy processes. But I am out of my depth here!

Next, Alexey talked about connectivity conditions in tournaments; among the results:

• for some constant c, a ck²-connected tournament has k edge-disjoint Hamiltonian cycles;
• for some constant c, a ck-connected tournament is k-linked. (This condition means that, given two disjoint k-tuples of distinct vertices, there are k vertex-disjoint paths such that the ith path goes from the ith point of the first tuple to the ith point of the second.)

My keenest followers may recall that I mentioned in a recent post the fact that (strongly) connected tournaments have Hamiltonian cycles (in connection with a problem about idempotent generators for the transformation semigroup). I feel almost certain that some of Alexey’s results and ideas will be useful in work on this problem. As soon as I can devise a nice test question, I have promised to dangle it in front of him!
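That fact about strongly connected tournaments is small enough to confirm exhaustively: on five vertices, a tournament is strongly connected if and only if it has a Hamiltonian cycle (Camion's theorem, plus the trivial converse). A brute-force sketch of my own, checking all 2^10 orientations of K5:

```python
from itertools import combinations, permutations, product

n = 5
pairs = list(combinations(range(n), 2))

def strongly_connected(adj):
    # adj[u][v] is True when the edge is oriented u -> v;
    # check that every vertex reaches every other
    for s in range(n):
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in range(n):
                if adj[u][v] and v not in seen:
                    seen.add(v)
                    stack.append(v)
        if len(seen) < n:
            return False
    return True

def has_hamiltonian_cycle(adj):
    # fix vertex 0 as the start and try all orderings of the rest
    for perm in permutations(range(1, n)):
        cyc = (0,) + perm
        if all(adj[cyc[i]][cyc[(i + 1) % n]] for i in range(n)):
            return True
    return False

# every tournament on 5 vertices: one direction per pair
for bits in product([0, 1], repeat=len(pairs)):
    adj = [[False] * n for _ in range(n)]
    for (u, v), b in zip(pairs, bits):
        if b:
            adj[u][v] = True
        else:
            adj[v][u] = True
    assert strongly_connected(adj) == has_hamiltonian_cycle(adj)
```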

The day ended with the Norman Biggs lecture, given this year by Tim Gowers. He talked about a result he and Emanuele Viola have proved. Let G be the group SL(2,q). Let A and B be “large” subsets of G×G. If (a1,a2) and (b1,b2) are random elements of A and B respectively, then the “interleaved product” a1b1a2b2 is approximately uniformly distributed over G. He took us through a series of reductions of the problem, which came in the end to counting the solutions of a single polynomial equation in five variables over the field of order q. The polynomial turns out, after some work, to be almost always absolutely irreducible, so the Lang–Weil theorem gives an estimate for the number of solutions, which does the job.
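One can watch this phenomenon numerically even for tiny q. The sketch below is my own experiment, not the authors' method: build SL(2,3) as 2×2 matrices over F3 with determinant 1, take random half-sized subsets A and B of G×G, tabulate the interleaved products a1b1a2b2, and measure the total variation distance from the uniform distribution on G.

```python
import itertools
import random
from collections import Counter

q = 3
# SL(2,3): matrices (a,b,c,d) over F_3 with ad - bc = 1; there are 24
G = [m for m in itertools.product(range(q), repeat=4)
     if (m[0] * m[3] - m[1] * m[2]) % q == 1]

def mul(x, y):
    a, b, c, d = x
    e, f, g, h = y
    return ((a * e + b * g) % q, (a * f + b * h) % q,
            (c * e + d * g) % q, (c * f + d * h) % q)

random.seed(0)
pairs = list(itertools.product(G, G))          # the 576 elements of G x G
A = random.sample(pairs, len(pairs) // 2)      # "large" subsets: half of G x G
B = random.sample(pairs, len(pairs) // 2)

counts = Counter()
for a1, a2 in A:
    for b1, b2 in B:
        counts[mul(mul(mul(a1, b1), a2), b2)] += 1

total = len(A) * len(B)
# total variation distance from uniform on G; small in this experiment
tv = 0.5 * sum(abs(counts[g] / total - 1 / len(G)) for g in G)
```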

The theorem has an application to communication complexity in computer science, which he explained. He speculated that a similar result will hold for all finite groups of Lie type and bounded rank. The key property of SL(2,q) needed for the proof is that all its non-trivial conjugacy classes have approximately the same size.

So in most respects, it was business as usual; the organisers are very good at choosing speakers who have something to say and know how to say it. However, there was a new feature of this year’s colloquia: the inside front cover of the programme book carried the message “When tweeting about the colloquia, please use the hashtag #CC2015”. Apparently this feature was well used (but how would I know?). If you did use this feature, I welcome your feedback!