This week saw the two linked London Combinatorics Colloquia, at Queen Mary and the London School of Economics.

Not so long ago, a one-day meeting in Reading organised by Anthony Hilton was a fixture of the calendar. When Anthony came to QM, he brought the colloquium with him; the same year, Norman Biggs came to the end of full-time employment at LSE, and his colleagues took advantage of the absence of the Reading meeting to put on a one-day colloquium for him. Fortunately we were able to cooperate, and the double event has continued in the same format since then.

This year saw a fine collection of talks. János Pach observed that four of the twelve talks mentioned Szemerédi, fitting in the year he won the Abel prize, but perhaps indicating a predominance of a certain sort of combinatorics …

I will describe briefly my favourite talks.

Günter Ziegler gave a talk which began with a conjecture from two Kolkata mathematicians, Nandakumar and Ramana Rao, in 2006, and ended with a theorem by a mathematician from Madras (as it then was), Ram, in 1909. The problem states:

Given a convex polygon in the plane and a positive integer *n*, is it possible to divide the polygon into *n* convex pieces any two of which have the same area and the same perimeter?

The theorem asserts:

The greatest common divisor of the entries in the *n*th row of Pascal’s triangle (omitting the initial and final 1) is 1 if *n* is not a prime power, and *p* if *n* is a power of the prime *p*.
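Ram’s theorem is easy to check computationally. Here is a small Python sketch (the function name is my own invention) verifying it for a few rows:

```python
from functools import reduce
from math import comb, gcd

def interior_row_gcd(n):
    """gcd of C(n,1), ..., C(n,n-1): the nth row of Pascal's
    triangle with the initial and final 1 omitted."""
    return reduce(gcd, (comb(n, k) for k in range(1, n)))

print(interior_row_gcd(8))   # 8 = 2^3, so the gcd is 2
print(interior_row_gcd(9))   # 9 = 3^2, so the gcd is 3
print(interior_row_gcd(12))  # 12 is not a prime power, so the gcd is 1
```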

By a long chain of argument, using optimal transport (going back to Monge in 1781), weighted Voronoi diagrams, and equivariant obstruction theory, he and Pavle Blagojević use the theorem to prove that the dissection is possible if *n* is a prime power.

János Pach gave a talk entitled “Szemerédi strikes back”. It was an excellently paced survey, together with a proof with a beautiful trick. First, briefly, the background. Van der Waerden’s Theorem asserts that, given *c* and *l*, there is a number *n* such that, if the first *n* positive integers are coloured with *c* colours, there is a monochromatic arithmetic progression of length *l*. Erdős and Rényi asked for a density version, which was proved by Szemerédi by combinatorial means, and two years later by Furstenberg using ergodic theory. I have told the story here.
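For tiny cases, van der Waerden numbers can be verified by brute force. Here is a hedged Python sketch (function name mine) confirming the classical fact that W(2, 3) = 9:

```python
from itertools import product

def has_mono_ap(colouring, l):
    """Does this colouring of {1,...,n} (given as a 0-indexed tuple)
    contain a monochromatic arithmetic progression of length l?"""
    n = len(colouring)
    for a in range(n):
        for d in range(1, n):
            idx = [a + i * d for i in range(l)]
            if idx[-1] >= n:
                break  # larger d only makes the AP longer; move on
            if len({colouring[i] for i in idx}) == 1:
                return True
    return False

# W(2,3) = 9: every 2-colouring of {1,...,9} contains a monochromatic
# 3-term AP, but some 2-colouring of {1,...,8} avoids one.
print(all(has_mono_ap(c, 3) for c in product(range(2), repeat=9)))      # True
print(any(not has_mono_ap(c, 3) for c in product(range(2), repeat=8)))  # True
```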

A few years ago, Furstenberg and Weiss proved an analogous theorem for binary trees. Let *T*_{n} be the binary tree with *n* levels. A *replica* of *T*_{d} in *T*_{n} is a copy of *T*_{d} with the property that the two children of a node in the small tree (which are descendants of the corresponding node in the big one) are descended from different children of this node, and that a level in the small tree is contained in a level in the big one; it is *arithmetic* if the levels of the big tree on which vertices of the small tree occur form an arithmetic progression. Furstenberg and Weiss showed that, given *c* and *l*, there exists *n* such that, if the vertices of *T*_{n} are coloured with *c* colours, there is a monochromatic arithmetic replica of *T*_{l}. They also showed a “density version” of this partition result.
Pach observed that, if the word “arithmetic” is omitted from the theorem, then it can be proved by a simple combinatorial argument with an explicit bound. Now if we want an arithmetic replica, we find a replica of *T*_{m}, where *m* is the number in van der Waerden’s theorem; then look at the set of levels used, and find an arithmetic progression of size *l*, giving an arithmetic replica. This argument also does the density version, replacing van der Waerden’s theorem by Szemerédi’s.

With all the emphasis on Szemerédi’s regularity lemma, perhaps it is time to revisit a dream of mine: that the regularity lemma can be used to prove non-existence results for certain kinds of strongly regular graphs. I will just discuss one special case here, concerning triangle-free strongly regular graphs. Such a graph has *v* vertices and valency *k*, no triangles, and two non-adjacent vertices have μ common neighbours. Only finitely many are known. There is one particular (extremal) subfamily of “negative Latin square” type, where *v* = *s*^{2}(*s*+3)^{2}, *k* = *s*(*s*^{2}+3*s*+1), and μ = *s*(*s*+1). For *s* = 1, we have the Clebsch graph, and for *s* = 2, the Higman–Sims (or Mesner) graph. No others are known. For *s* = 3, the non-existence has been shown computationally. This is the limit of our knowledge.
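The parameters of this subfamily are easy to tabulate, and the basic edge-counting condition for a triangle-free strongly regular graph, *k*(*k*−1) = (*v*−*k*−1)μ, can be checked directly. A small Python sketch (function names mine):

```python
def nls_params(s):
    """Parameters (v, k, mu) of the putative triangle-free strongly
    regular graphs of negative Latin square type described above."""
    return s**2 * (s + 3)**2, s * (s**2 + 3*s + 1), s * (s + 1)

def feasible(v, k, mu):
    """Edge-counting condition for a triangle-free (lambda = 0)
    strongly regular graph: k(k-1) = (v-k-1)*mu."""
    return k * (k - 1) == (v - k - 1) * mu

print(nls_params(1))  # (16, 5, 2): the Clebsch graph
print(nls_params(2))  # (100, 22, 6): the Higman-Sims graph
print(nls_params(3))  # (324, 57, 12): ruled out computationally
print(all(feasible(*nls_params(s)) for s in range(1, 20)))  # True
```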

The regularity lemma says that any sufficiently large graph can be partitioned into (many large) pieces with the property that, for almost all pairs of pieces, the edges between those pieces *A* and *B* behave like a random bipartite graph: that is, the edge density between subsets *X* and *Y* of *A* and *B* is close to the edge density between *A* and *B*.
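The definition can be made concrete on toy examples. Below is a hedged Python sketch (names mine; exhaustive and exponential-time, so for illustration only) of an ε-regularity check for a bipartite pair:

```python
from itertools import combinations
from math import ceil

def density(adj, X, Y):
    """Edge density d(X, Y) = e(X, Y) / (|X| |Y|)."""
    return sum(adj[x][y] for x in X for y in Y) / (len(X) * len(Y))

def is_eps_regular(adj, A, B, eps):
    """Exhaustively check that the pair (A, B) is eps-regular: every
    X in A, Y in B with |X| >= eps|A| and |Y| >= eps|B| must have
    density within eps of d(A, B)."""
    d = density(adj, A, B)
    for i in range(max(1, ceil(eps * len(A))), len(A) + 1):
        for X in combinations(A, i):
            for j in range(max(1, ceil(eps * len(B))), len(B) + 1):
                for Y in combinations(B, j):
                    if abs(density(adj, X, Y) - d) > eps:
                        return False
    return True

A, B = [0, 1, 2], [3, 4, 5]
complete = {x: {y: 1 for y in B} for x in A}               # all densities equal 1
half = {x: {y: 1 if y - 3 <= x else 0 for y in B} for x in A}  # a lopsided "half graph"
print(is_eps_regular(complete, A, B, 0.3))  # True
print(is_eps_regular(half, A, B, 0.3))      # False
```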

For dense graphs, this implies that given any three pieces, the number of triangles with one vertex in each piece is approximately what you would expect in a random tripartite graph. The argument fails for sparse graphs, however, since the approximation is sufficiently rough that zero is a possible number of triangles.

Various authors have worked on sparse versions of the regularity lemma. As far as I know, none of these will do the job I want; I would like to be proved wrong!

A putative strongly regular graph of the type considered, with *n* vertices, is sparse but not too sparse, with edge density about *n*^{−1/4}. Also, the fact that it is strongly regular prevents it from having dense “hotspots” (a hypothesis in some versions of the sparse regularity lemma). Is it possible to prove that a graph looking like this must necessarily contain a triangle?

Is the 2 in the formula for k meant to be there?

No – now fixed, thanks (along with another misprint in the same formula).

Alexander Gavrilyuk has sent me a reprint of a paper he published with A. A. Makhnev, in *Doklady Akademii Nauk* (2005), proving the non-existence of the 324-vertex triangle-free strongly regular graph without the use of a computer.

“With all the emphasis on Szemerédi’s regularity lemma, perhaps it is time to revisit a dream of mine: that the regularity lemma can be used to prove non-existence results for certain kinds of strongly regular graphs.”

I now know of at least two people working on strongly regular graphs who believe that, up to trivial exceptions, all infinite families of feasible parameters for SRGs are realised by SRGs as soon as the numbers are big enough. Can I conclude from the above that you did not believe this in 2012? And, then, I am wondering: did your opinion change since then due to the results on the existence of designs?

(I found this post because I was wondering if you ever wrote a blog post that corresponds to your “Random strongly regular graphs?” paper, but I think that the answer is no.)

Sorry, my memory is so poor that to remember what I have written I have to use the search box…

I think I stick to my 2012 belief. Strongly regular graphs are more like symmetric designs than like t-designs with large t. Nobody has yet proved a theorem that projective planes of arbitrarily large order exist. Indeed, if I had to put my money down, I would bet on the prime power conjecture for projective planes. All our known constructions have an algebraic aspect, which restricts us to prime powers.

Of course, I would be delighted to be surprised!

That is interesting. I sometimes ask people if they believe in the prime power conjecture. If they do not believe in it, then I ask them for a guess for the smallest counterexample. These guesses go from 15 to a large 3-digit number.

Do you know Eric Moorhouse’s construction for projective planes using double covers/lifting quotients? And do you still consider it algebraic? It is described in Section 3:

wyoming.pdf

The planes found that way in that paper are only the Wyoming planes (which still have much symmetry), but at least the process itself does not assume any symmetry. Eric also found _many_ planes of order 49 using the same process, but I have no idea how small the symmetry gets. I talked with him about that once, but I cannot really recall our conversation, so I only know the vague thing from his homepage:

http://ericmoorhouse.org/pub/planes49/

“From these 1349 planes and their duals, hundreds of thousands of others are obtained by repeatedly deriving and dualizing; also by the process of lifting quotients. This work, currently in progress, is inspired by the similar project which I completed years ago for known projective planes of order 25. Currently I have found over 309,000 nonisomorphic planes of order 49; but this number is growing by several planes every hour and is expected to reach at least half a million planes before completion sometime within the next year (2011). Check back later for updates!”

As there are ~1400 planes of order 49 from the known nice algebraic constructions, but >=500000 planes by doing this (in my opinion: non-algebraic) modification, I kind of see this as (weak) evidence against the prime power conjecture.

I wouldn’t go so far as to say that I really believe it, just that I find it slightly less implausible than the only reasonable alternative we have at the moment, that planes exist for all sufficiently large n satisfying Bruck–Ryser.
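For concreteness, the Bruck–Ryser condition (if *n* ≡ 1 or 2 (mod 4), a projective plane of order *n* can exist only if *n* is a sum of two integer squares) is easy to test. A Python sketch (function name mine):

```python
from math import isqrt

def passes_bruck_ryser(n):
    """Necessary condition for a projective plane of order n:
    if n = 1 or 2 (mod 4), n must be a sum of two squares."""
    if n % 4 not in (1, 2):
        return True
    return any(isqrt(n - a * a) ** 2 == n - a * a for a in range(isqrt(n) + 1))

print(passes_bruck_ryser(6))   # False: no plane of order 6
print(passes_bruck_ryser(10))  # True, though order 10 was later ruled out by computer
print([n for n in range(2, 25) if not passes_bruck_ryser(n)])  # [6, 14, 21, 22]
```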

Back in the 1960s, a few people were interested in the result that a plane of order n with a subplane of order m has n = m^2 or n ≥ m(m+1); if equality holds in the latter, then various things happen, which they thought might make a construction feasible. I don’t know if anyone is continuing that approach at the moment.

Another analogy is extremal doubly-even self-dual codes, where it is known that only finitely many can exist.

Thanks for the answer and the example of doubly-even self-dual codes. I like that analogy very much.

As a reference for others: there seem to be ~10000 of these codes of length 40 with trivial automorphism group (out of ~15000). Extremal ones have length at most 3928. So my “argument” is not very good.

See “A complete classification of doubly even self-dual codes of length 40” by Koichi Betsumiya, Masaaki Harada, and Akihiro Munemasa. And “On the nonexistence of extremal self-dual codes” by Shengyuan Zhang. If I understand Zhang’s proof correctly, then it is an application of the MacWilliams transform/Delsarte’s LP bound (or, more generally, a nice spectral argument).

I think perhaps you are too easily persuaded. My analogy doesn’t really prove anything…

I think there is a somewhat mysterious borderline between things that exist in great profusion and things that are unique or don’t exist at all. One example of this (related to where we started this discussion) is combinatorially r-homogeneous graphs (not sure if this is the best name), where the number of common neighbours of a set of r points depends only on the induced subgraph on the set. For r=2 (strongly regular graphs), they exist in great profusion; for r=5, they are all known, and the only “interesting” ones are the pentagon and the 3×3 grid. r=3 and r=4 are not classified, but it seems that for r=4 there is very little more beyond r=5, while for r=3 there is at least the possibility of many examples (e.g. generalized quadrangles with t=s^2). I would very much like to know more, which is the purpose of my challenge to experts on the regularity lemma.