BCC26, 2

The first lecture on Thursday was by Benny Sudakov, on robustness of graph properties. For purposes of exposition, he had decided to focus on one particular property, that of being Hamiltonian. Dirac’s famous theorem tells us that a graph on n vertices with minimum degree at least n/2 has a Hamiltonian circuit. Robustness of the property of Hamiltonicity could mean one of several things: there are many Hamiltonian circuits; there are many pairwise disjoint Hamiltonian circuits; an enemy would have to make significant changes to the graph to destroy the property; and so on. All these hold (and can be quantified). Moreover, not only is Hamiltonicity robust in a graph satisfying Dirac’s condition, but it tends to be robust in a random graph where edge probabilities are large enough to force the property to hold.
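The first notion of robustness, counting Hamiltonian circuits, can at least be played with by brute force for tiny graphs. This is my own toy sketch, nothing from the talk, and the function name is made up:

```python
from itertools import permutations

def count_hamiltonian_cycles(n, edges):
    """Count the Hamiltonian circuits of a graph on vertices 0..n-1 by
    brute force (fix vertex 0, then quotient out the two directions).
    Only feasible for very small n."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    count = 0
    for perm in permutations(range(1, n)):
        cycle = (0,) + perm
        closed = cycle[0] in adj[cycle[-1]]
        if closed and all(cycle[i + 1] in adj[cycle[i]] for i in range(n - 1)):
            count += 1
    return count // 2

# K5 satisfies Dirac's condition (minimum degree 4 >= 5/2) with room to
# spare, and indeed has many Hamiltonian circuits: (5-1)!/2 = 12 of them
k5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
print(count_hamiltonian_cycles(5, k5))  # 12
```

By contrast, the 5-cycle (minimum degree 2, below the Dirac bound) has exactly one Hamiltonian circuit, so the count there is as far from robust as possible.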

The second plenary was by Gi-Sang Cheon, on new applications of Riordan arrays. This was a rather technical talk, full of information. Unfortunately, since Gi-Sang was substituting at rather short notice for a plenary speaker who was unable to attend, he didn’t have an article in the book, and when I asked him afterwards he said that the material he was speaking about was too new to be in any survey yet. So I won’t attempt to describe the talk.

Of the contributed talks, there were several worth mentioning, but one of the best was by Iain Moffatt. He told us that the crucial property of the Tutte polynomial is the deletion-contraction relation, and in any situation where one has such a relation one should expect a “Tutte polynomial”. Why, he asked, are there three different Tutte polynomials for graphs embedded in surfaces in the literature? For the answer it is necessary to look more closely at deletion and contraction. Even if we use the nicest possible definition, where graphs are embedded so that faces are topological discs, trouble can arise when you delete or contract an edge. If you delete an edge going round a hole, a face of the resulting graph can fail to be a disc; and if you contract a loop which forms a non-contractible curve on the surface, you pinch the surface to a point, so you must either put up with a pseudosurface, or take your scissors and cut the surface at that point. Thus you actually expect four Tutte polynomials (allowing or disallowing non-simply-connected faces, and allowing or disallowing pseudosurfaces).
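For abstract (non-embedded) multigraphs, deletion and contraction cause no such trouble, and the relation, with a factor x for bridges and y for loops, determines the polynomial outright. A minimal recursive evaluator, my own illustration rather than anything from the talk:

```python
def tutte(edges, x, y):
    """Evaluate the Tutte polynomial of a multigraph at (x, y) by
    deletion-contraction: a loop contributes a factor y, a bridge a
    factor x, and any other edge e gives T(G - e) + T(G / e).
    Edges are (u, v) pairs; parallel edges and loops are allowed."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop
        return y * tutte(rest, x, y)
    # breadth-first search in G - e to decide whether e is a bridge
    reach, frontier = {u}, [u]
    while frontier:
        w = frontier.pop()
        for a, b in rest:
            for c, d in ((a, b), (b, a)):
                if c == w and d not in reach:
                    reach.add(d)
                    frontier.append(d)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if v not in reach:                           # bridge
        return x * tutte(contracted, x, y)
    return tutte(rest, x, y) + tutte(contracted, x, y)

triangle = [(0, 1), (1, 2), (0, 2)]
print(tutte(triangle, 1, 1))  # 3 -- the spanning trees of K3
print(tutte(triangle, 2, 1))  # 7 -- its spanning forests
```

The triangle has Tutte polynomial x² + x + y, and the two evaluations above are the familiar spanning-tree and forest counts.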

In the final session, I had to miss some talks I would like to have heard in order to give my own, on synchronization and separation in the Johnson association scheme. The slides are in the usual place.

Dinner was in the Barony Hall, a huge former church just beyond the cathedral, now used by the University of Strathclyde. A very good dinner, very congenial and interesting conversation, and after dinner a ceilidh, at which I guess the majority of the participants had at least one dance. After the band stopped and the staff were clearing the hall, we headed back to the hotel.

Friday morning saw us check out of the hotel in good time to be at the conference by 9, where I was introducing Bill Chen. He had chosen well; his topic, the spt-function of Andrews, was very technical, but his talk was well judged for the morning after the conference dinner, with little excursions into historical detail (Euler, Hardy and Ramanujan, Dyson, Andrews, and others), and even a quote from Rabindranath Tagore on the relation between simplicity and complication. (The definition of rank which led to the “combinatorial” proof of two of Ramanujan’s conjectures is very simple, but the proof is still very complicated.)

One contributed talk stood out for me: Bridget Webb’s. She and Daniel Horsley have constructed uncountably many countable homogeneous Steiner triple systems. (I need to point out here, as Bridget did, that we are regarding a Steiner triple system as a functional structure – a quasigroup – whose substructures correspond to subsystems, rather than as a relational structure whose substructures are partial Steiner triple systems: a structure is homogeneous if any isomorphism between finitely generated substructures extends to an automorphism of the structure.) The construction required results on embedding partial Steiner triple systems which go well beyond what was previously known. I was reminded of Cherlin’s classification of homogeneous digraphs, where there are uncountably many of a general type together with a few sporadic examples. I suspect that turning the construction here into a classification will be extremely difficult!

The final talk of the conference, which was also the Rado lecture, was given by Vít Jelínek on Ramsey-type problems in permutations. Since Strathclyde is one of the top centres of research on permutation patterns, this was a good choice of topic; Vít’s take on permutations is similar to mine, regarding a permutation as a set carrying two total orders, and he did indeed quote my classification of homogeneous permutations (but there was far more in the talk than that).

And so ended what I thought had been an excellent and memorable British Combinatorial Conference, and we all headed home (slightly delayed, in our case, by Sergey Kitaev, who had very kindly invited us to a barbecue at his home in Linlithgow, preceded by a very brief detour to Linlithgow Palace, the birthplace of Mary Queen of Scots).

Linlithgow Palace

Posted in events | 2 Comments

BCC26, 1

About halfway through the British Combinatorial Conference, at last I have time to say something about it.

I am not just a combinatorialist, but I always feel that the BCC delegates are the group into which I naturally fit. It is always a pleasure meeting old friends, and indeed making new ones.

So to begin with the least interesting part: I am never really off duty at the BCC, since I have to chair the business meeting on Tuesday and the committee meeting on Wednesday, perform at the concert, give my talk, and run the problem session. Also, there are far too many contributed talks for me to go to more than a tiny fraction of them; I have already had to miss one talk by a co-author of mine, and others by colleagues or students.

One innovation this time has been the introduction of mini-symposia, of which there were three: on extremal combinatorics, on graph colouring, and on permutation patterns. The speakers in the mini-symposia were given a little longer than ordinary contributed speakers. These seem to have been popular, and look to be here to stay; I hope that the result is not perceived to be two classes of speakers (apart from plenary speakers). One highlight was a lovely talk by Bruce Sagan on descent polynomials and related things. A descent in a permutation π is a position i such that π(i) > π(i+1). MacMahon proved that the number of permutations with descents in specified positions (and no others) is given by a polynomial in n, the length of the permutation; these polynomials were ignored for a long time but are now of great interest, as are the related peak polynomials counting permutations with a given set of peaks.
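MacMahon's statement is easy to test empirically. The sketch below (my own code, with made-up function names) counts permutations with a prescribed descent set by brute force, and for the descent set {1} recovers the polynomial n − 1:

```python
from itertools import permutations

def descent_set(pi):
    """Descent set of a permutation in one-line notation, positions 1-indexed."""
    return {i + 1 for i in range(len(pi) - 1) if pi[i] > pi[i + 1]}

def count_by_descents(n, S):
    """Number of permutations of 1..n whose descent set is exactly S."""
    return sum(1 for pi in permutations(range(1, n + 1)) if descent_set(pi) == S)

# descent set {1}: pick any first entry other than 1, then list the rest in
# increasing order -- so the count is the polynomial n - 1
print([count_by_descents(n, {1}) for n in range(2, 7)])  # [1, 2, 3, 4, 5]
```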

On Monday night we had a civic reception in the amazing Glasgow City Chambers.

The dignitary who addressed us was well briefed. This is the 100th anniversary of the birth of Bill Tutte, who among other things was a Bletchley Park codebreaker who was instrumental in breaking the Lorenz cipher used by the German high command in World War 2. She knew this, and gave us a well-informed potted account about Tutte (but didn’t know that, like her, Tutte was a graduate in Chemistry; I told her this when we spoke later).

On Tuesday we had the conference concert, the highlight of which was Bruce Sagan leading us in singing Scottish folk songs including “Wild Mountain Thyme”.

Just a bit about the main talks. The first was by Daniel Horsley, who showed us some of the results that have been proved about graph decompositions by “local switching” (including embeddings of partial Steiner triple systems, cycle decompositions, and almost regular decompositions). He illustrated what he meant by “local switching” (a somewhat vague but fruitful idea) by showing us that, if a graph has a proper t-edge-colouring, then it has one in which the sizes of the colour classes differ by at most one.
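That balancing result can be sketched in a few lines. Below is a toy version of the switching step (my reconstruction of the standard argument, not Horsley's own formulation): in the subgraph formed by an over-used and an under-used colour, the components are alternating paths and cycles; swapping the two colours along a path with an excess of the over-used colour keeps the colouring proper and shrinks the imbalance.

```python
def balance_colouring(colouring, t):
    """Given a proper t-edge-colouring as a dict {(u, v): colour}, repeatedly
    swap two colours along a component of a two-coloured subgraph until the
    colour-class sizes differ by at most one.  A toy version of the local
    switching idea, not Horsley's own proof."""
    while True:
        sizes = {k: 0 for k in range(t)}
        for c in colouring.values():
            sizes[c] += 1
        hi = max(sizes, key=sizes.get)
        lo = min(sizes, key=sizes.get)
        if sizes[hi] - sizes[lo] <= 1:
            return colouring
        # edges coloured hi or lo form disjoint alternating paths and cycles;
        # cycles are balanced, so some path has one more hi-edge than lo-edges
        two = [e for e, c in colouring.items() if c in (hi, lo)]
        seen = set()
        for e0 in two:
            if e0 in seen:
                continue
            comp, frontier = {e0}, [e0]
            while frontier:
                e = frontier.pop()
                for f in two:
                    if f not in comp and set(e) & set(f):
                        comp.add(f)
                        frontier.append(f)
            seen |= comp
            n_hi = sum(1 for e in comp if colouring[e] == hi)
            n_lo = sum(1 for e in comp if colouring[e] == lo)
            if n_hi > n_lo:
                for e in comp:   # swap the two colours along this component
                    colouring[e] = lo if colouring[e] == hi else hi
                break

# two disjoint edges both coloured 0 (proper, but unbalanced for t = 2)
c = balance_colouring({(0, 1): 0, (2, 3): 0}, 2)
print(sorted(c.values()))  # [0, 1]
```

Since an alternating path's two colour counts differ by at most one, each swap reduces the overall imbalance by two, so the loop terminates.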

Rosemary Bailey began by showing us why experimental design is the study of relations between partitions of a set, of which the most important in this context are refinement, orthogonality, and balance; by looking at the possible relations between three partitions, many interesting classes of designs emerge naturally, including triple arrays, which I have recently spoken about here.

Julia Böttcher and Peter Allen are here with their new baby, and bravely take turns in looking after it while the other goes to a talk. (Actually, it is one of the best-behaved babies I have seen, and I wouldn’t object if it came to my talk.) On Tuesday, Julia talked about large-scale structures in random graphs. What is the threshold for a large structure, or for all the structures in some family, to appear in the Erdős–Rényi random graph? And when do they appear robustly, so that an adversary would have to make many changes to destroy the property? This is a fertile meeting ground of extremal and random graph theory.

Robert Morris talked about “cellular automata”, but in fact the problems he discussed were essentially percolation problems. The set-up is that, initially, a finite set of cells in the plane square lattice are “infected”; at any time, a new cell becomes infected if all the cells obtained from it by one of a given set of translations are infected (in a simple example, when at least two of its nearest neighbours are infected); an infected cell remains infected. Does the whole plane eventually become infected, and if so, how fast?
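For the simple two-neighbour example, the process is easy to simulate on a finite grid. This is my own sketch (Morris's results concern precise thresholds and speeds, which this does not touch); it recovers the textbook fact that an infected diagonal infects everything:

```python
def bootstrap(n, initial, threshold=2):
    """Bootstrap percolation on an n-by-n grid: a healthy cell becomes
    infected once at least `threshold` of its four nearest neighbours are
    infected, and infection is permanent.  Returns the final infected set."""
    infected = set(initial)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if (i, j) in infected:
                    continue
                nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                if sum(nb in infected for nb in nbrs) >= threshold:
                    infected.add((i, j))
                    changed = True
    return infected

# an infected diagonal spreads to the whole grid ...
print(len(bootstrap(6, [(i, i) for i in range(6)])))  # 36
# ... but a single infected cell gets nowhere
print(len(bootstrap(6, [(0, 0)])))  # 1
```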

Wednesday brought us two of the best talks so far. Graham Farr celebrated Tutte by a lecture on his life and his early work in graph theory, illustrated by some remarkable photographs. This prompted me to remember my closest encounter with Tutte. This was at the 1973 British Combinatorial Conference in Aberystwyth; Rosemary had already related my encounter with Donald Preece at that conference. The railway line to Aberystwyth was, as often happens, closed by a flood, and so we had to take the train to Worcester from where coaches were provided. As the bus wound along narrow roads through the Welsh hills, I went forward to have a word with someone. As I turned around to return to my seat, the bus gave an especially violent jerk, and I was thrown into Tutte’s lap.

I have always considered Tutte’s book Graph Theory as I have known it remarkable for being both a textbook on graph theory and a historical account of Tutte’s own passage through the subject, explaining the connections between the very disparate things (polynomials, network flows, spanning trees, symmetry, determinants, …), and how he was naturally led from one topic to the next. Graham certainly agreed with this assessment!

Julia Wolf introduced us to the use of entropy methods in additive combinatorics; these typically give improvements on the “energy methods” (as the earlier techniques are now called). She worked all the time in the elementary abelian 2-group, or vector space over the 2-element field, where the Fourier transform is particularly easy to understand, and its basic properties easy to prove. This was the closest plenary so far to the material at the Finite Geometry summer school last week.


A prize

Time for a little blast on my own trumpet.

At the meeting of the London Mathematical Society today, it was announced that I am the winner of the Senior Whitehead Prize this year.

The rules for the prize state, in part,

The SENIOR WHITEHEAD PRIZE is awarded, in memory of Professor J. H. C. Whitehead, a former President of the Society. It is awarded in odd-numbered years. […] The grounds for the award may include work in, influence on or service to mathematics, or recognition of lecturing gifts in the field of mathematics […]

The citation says,

PROFESSOR PETER CAMERON of the UNIVERSITY OF ST ANDREWS is awarded a SENIOR WHITEHEAD PRIZE for his exceptional research contributions across combinatorics and group theory. His fertile imagination and encouragement of others have sparked activity in many fields.

Cameron has long been a leading figure in combinatorics and permutation group theory, frequently impacting on other areas and, through beautiful insights and questions, initiating a stream of research. As an example, starting in 1976 (with the first result in a now rich literature on reducts of homogeneous structures), he has led research on infinite permutation groups. He identified oligomorphic permutation groups, which link to model theory, making connections to combinatorial enumeration and notions of randomness, and led to `Cameron’s constant’ for sum-free sets. Another 1976 paper, with Goethals, Seidel and Shult, makes links between finite graphs with least eigenvalue −2, root systems, and systems of lines at 60 and 90 degrees. His 1981 article on the implications of the classification of finite simple groups made explicit the new methods emerging for finite permutation groups, and he and many others have continued to reap rewards; examples include his proof with Praeger, Saxl and Seitz of the Sims Conjecture. He has papers on many topics in coding theory, design theory, finite geometry, and graph theory, often spotting key interconnections. In a current collaboration with Araújo and others stemming from the Černý conjecture in automata theory, he has opened up links between permutation groups, semigroups, and graphs.

As a lecturer, Cameron’s enthusiasm and lucidity make tricky arguments easy. He draws people to his subject, encouraging problem-solving for starting undergraduates, giving advanced ones a taste of research, and giving sparkling conference lectures. His list of around 40 PhD students includes, among many with academic careers, Eric Lander, the founding Director of the Broad Institute of MIT and Harvard.

Whoever wrote that knows, apart from much else, what I consider to be valuable in my work!

I feel immensely proud of this. Among other things, let me just note that I am only the third person to win both a Junior and Senior Whitehead Prize, the other two being Robert Mackay and Frances Kirwan. (Henry Whitehead, incidentally, was my mathematical great-grandfather; my father Peter Neumann also won the Senior Whitehead Prize in 2003, and though my grandfather Graham Higman did not, he won the Senior Berwick Prize in 1962 and was awarded the De Morgan Medal in 1974.)


Finite geometry in Brighton

This week I am in Brighton, at a summer school on Finite Geometry aimed at PhD students, organised by Fatma Karaoglu.

Despite the aim of the summer school, there are a few people here who are certainly not PhD students, including old friends Jim Davis and David Wales.

There are four short courses, given by Simeon Ball, Anton Betten, Jef Thas, and me. Jef and I lectured twice a day for the first two days, and then bowed out to be replaced by the other two for the last two-and-a-half days. There were also many contributed talks, and GAP sessions run by my colleagues Alexander Konovalov and Chris Jefferson.

My talks were loosely based on the third series of my Advanced Combinatorics lectures in St Andrews. They were centred around the classification of graphs with the strong triangle property: each edge is contained in a triangle such that any further vertex is joined to exactly one vertex of the triangle. These graphs fall into two infinite families and three sporadic examples. If this suggests links to the ADE affair, that is correct: it leads fairly directly to a classification of the ADE root systems. But it leads in other directions too. It yields the classification of generalised quadrangles with three points on a line, and the proof, slightly tweaked, shows the non-existence of infinite generalised quadrangles with three points on a line. It generalises to the classification of graphs with the triangle property: each edge is contained in a triangle such that any further vertex is joined to one or all vertices of the triangle. This is the theorem of Shult and Seidel, an immediate predecessor of the celebrated Buekenhout–Shult theorem classifying all the finite-rank polar spaces.

The notes are on the course website, here.

Jef spoke about arcs and caps in finite projective spaces, from Segre’s classification of conics in projective planes over fields of odd order (including Segre’s important “lemma of tangents”) to results on generalisations where instead of points one considers subspaces of arbitrary direction, and the connections with linear MDS codes. A very full and detailed survey, and even more valuable when the notes become available (as they should be soon).

Anton Betten’s topic is computational results in finite geometry. He began with the observation that, just because a problem is hard, that doesn’t mean we shouldn’t solve it as best we can. He quoted me on this – it is a sentiment I agree with! He remarked that he has some difficulty with NSF grants precisely because computer scientists who referee his proposals think that the problems are too hard. After this he plunged into four particular problems. The first was the classification of things called BLT-sets, which give rise to (often new) projective planes and generalised quadrangles, among other things. He has pushed the classification up to q = 67; the numbers of examples do not grow too rapidly. The others are parallelisms in 3-dimensional projective space, optimal linear codes, and cubic surfaces. He showed us his app for showing features of the 27 lines on a cubic surface, running on his iPhone on the document camera – you can find this app, called “Clebsch”, on the app store if you have an iPhone. (Does it sound as if I know what I am talking about??)

Simeon is telling us more about arcs and caps, and applying the polynomial method to these questions. He and Michel Lavrauw have a theorem which is a considerable improvement of Segre’s in odd characteristic, and (with a lot of effort) also gives results in higher dimensions. He concluded the series by proving the prime case of the celebrated MDS conjecture. Simeon’s notes are also available, here.

In his last talk, Anton showed us some details of what the computer actually does in these classifications. Typically there is a huge bump which takes a lot of the time. In a nice piece of synergy, in Simeon’s last talk, he showed us a method which allows one to tunnel through the bump: a condition on a (rather large) matrix built from a small arc allows one to show that it has no extension to a large arc of some size. New developments should arise from this!

Among the contributed talks, I am only going to mention two. Michael Reynolds is developing a program, along the lines of GeoGebra, for doing metric geometry over finite fields. I am not terribly impressed with the app, which represents the affine plane over a finite field as a square grid: lines don’t look much like lines! But I like the philosophy, which is to avoid all use of real arithmetic, transcendental functions, etc. Thus, the squared distance is given by a quadratic form; orthogonality by the associated bilinear form; and instead of the angle θ, he uses cos²θ, which can be found “rationally” using the cosine rule.
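The cos²θ device is easy to make concrete. Here is a sketch over a prime field GF(p) (my own code, not Reynolds's program, and the function name is made up): from squared distances alone, the cosine rule gives cos²θ = (a² + b² − c²)² / (4a²b²), with all arithmetic mod p.

```python
def cos2(p, O, P, Q):
    """Squared cosine of the angle at O in triangle OPQ over GF(p), p an odd
    prime, computed 'rationally' from squared distances via the cosine rule:
    cos^2 = (a^2 + b^2 - c^2)^2 / (4 a^2 b^2), all arithmetic mod p."""
    d2 = lambda X, Y: sum((x - y) ** 2 for x, y in zip(X, Y)) % p
    a2, b2, c2 = d2(O, P), d2(O, Q), d2(P, Q)
    num = (a2 + b2 - c2) ** 2 % p
    den = 4 * a2 * b2 % p               # assumed nonzero (no isotropic sides)
    return num * pow(den, -1, p) % p    # pow(x, -1, p): inverse, Python 3.8+

# over GF(7): perpendicular directions give cos^2 = 0, collinear points 1
print(cos2(7, (0, 0), (1, 0), (0, 1)))  # 0
print(cos2(7, (0, 0), (1, 0), (2, 0)))  # 1
```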

Sandra Kingan did her best to persuade finite geometers that they should be interested in matroids. She talked about the representability problem over GF(5). The difficulty of the problem is greatly increased by the fact that the same matroid can have different representations. But what are inequivalent arcs of the same size in a projective plane but different representations of a uniform matroid?

Thursday night brought a very enjoyable dinner in a restaurant in Hove. No pictures, I’m afraid; I forgot my camera! There will be a conference photo eventually …



Conference logo

I’ve just been at a conference called “Kinks 2”, or that might be “knkx 2” (I never saw it written down). But look at the list of topics: dynamics, functional equations, infinite combinatorics and probability. How could I resist that?

Of course, with a conference like this, the first question is: is the conference on the union of these topics, or their intersection? The conference logo strongly suggests the latter. But as usual I was ready to plunge in with something connecting with more than one of the topics. I have thought about functional equations, but only in the power series ring (or the ring of species); but it seemed to me that sum-free sets would be the best fit: there is infinite combinatorics and probability; there is no dynamics, but some of the behaviour looks as if there should be!

The organisers were Adam Ostaszewski and Nick Bingham. So the conference was centred on their common interests. I will say a bit about this in due course. Unfortunately, for personal reasons, I was not able to travel south until Monday morning, and even getting up at 6am I didn’t get to the conference room until a little after 2pm. So I missed the opening talk by Nick Bingham (which no doubt set the scene, and according to Charles Goldie was a sermon) and the talk by Jaroslav Smital; I arrived shortly after the first afternoon lecture, by Vladimir Bogachev, had begun.

I will try to give an overview of what I heard, but will not cover everything at the same level. Of course, a topologist, analyst or probabilist would give a very different account!

Vladimir’s talk was something which tied in to the general theme in a way I didn’t realise until later. He talked about topological vector spaces carrying measures; in such a space, consider the set of vectors along which the measure is differentiable (in one of several senses), one of which is apparently called the Cameron–Martin space. He had something called DC and explained that he originally named it after a Russian mathematician whose name began with S (in transliterated form); he had used the Cyrillic form, never imagining in those days that one day he would be lecturing about it in London!

David Applebaum talked about extending the notion of a Feller–Markov process from compact topological groups to symmetric spaces, and considering five questions that have been studied in the classical case, asking how they extend. He wrote on the board, which didn’t actually slow him down much. David said “I want to prove something”, but Nick, who was chairing, said “No, just tell the story”.

Finally for the day, Christopher Good explained two concepts to us: shifts of finite type (this means closed subspaces of the space of sequences over a finite alphabet, closed under the shift map, and defined by finitely many excluded subwords), and shadowing (this is relevant if you are iterating some function using the computer; typically the computed value of f(xi) will not be exact, but will be a point xi+1 close to f(xi); we say that this pseudo-orbit is shadowed by the orbit of y if fi(y) is close to xi). People have known for a long time that these concepts are closely related, but he and his postdoc have a theorem to this effect. In a compact 0-dimensional space like Cantor space, their theorem says that a map has the shadowing property if and only if it is conjugate to an inverse limit of shifts of finite type. Over arbitrary compact metric spaces they also have a necessary and sufficient condition, but it is more complicated, involving the notion of semi-conjugacy. He explained that the aim of this kind of dynamics is to replace a complicated map on a simple space by a simple map (a shift) on a more complicated space (finite type).
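A shift of finite type is concrete enough to play with. The sketch below (mine, not from the talk) lists the allowed finite words of the “golden mean shift”, the binary sequences with the single excluded subword 11; the counts come out as Fibonacci numbers:

```python
def allowed_words(n, forbidden=("11",)):
    """All length-n words over {0,1} avoiding the given subwords: the
    language of a shift of finite type (default: the golden mean shift)."""
    words = [""]
    for _ in range(n):
        words = [w + c for w in words for c in "01"
                 if not any((w + c).endswith(f) for f in forbidden)]
    return words

# the counts are Fibonacci numbers
print([len(allowed_words(n)) for n in range(1, 6)])  # [2, 3, 5, 8, 13]
```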

I was the first speaker next morning. I arrived half an hour early, to find the coffee laid out but nobody else around. Soon Rebecca Lumb came along and logged in to the computer, so I could load my talk. I found that the clicker provided had the feature that the left button advances the slides, so I took it out and put in my own, which works the right way round. The talk went well, and I enjoyed a gasp of surprise from the audience when I displayed my empirical approximation to the density spectrum of a random sum-free set. My last slide, a picture of an apparently non-periodic sum-free set generated by a periodic input, was also admired. It was suggested that a series of such pictures, in striking colours, would be worthy of an art exhibition. The slides are in the usual place.

After a coffee break, Imre Leader spoke about sumsets (so not too far from my talk but not at all the same). As usual, he wrote on the board. The question was: if you colour the natural numbers with finitely many colours, is there an infinite set for which all pairwise sums have the same colour? The answer is yes if you take pairwise sums of distinct elements. (Colour the pair {i,j} with the colour of i+j. By Ramsey’s theorem, there is an infinite set with all pairs of the same colour; this does it!) The first surprise was that if you allow x+x as a sum as well, then it is impossible; he showed us the nice two-step argument for this. (The first step is simple; you can always colour so that x and 2x have different colours: take two colours, and give x the colour red if the exponent of 2 in the factorisation of x is even, blue if it is odd. The second step is a bit more elaborate.) What about larger systems? He showed us why the answer is No in the integers, and No in the rationals, but (surprisingly) Yes in the reals, if you assume the Continuum Hypothesis (and indeed, Yes in a vector space over the rationals of sufficiently large dimension: precisely, ℶω).
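The first step is easily checked by machine (my own few lines, labelling red 0 and blue 1):

```python
def colour(x):
    """Colour a positive integer red (0) if the exponent of 2 in its
    factorisation is even, blue (1) if it is odd."""
    e = 0
    while x % 2 == 0:
        x //= 2
        e += 1
    return e % 2

# every x receives a different colour from 2x
print(all(colour(x) != colour(2 * x) for x in range(1, 10000)))  # True
```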

First after lunch was Dugald Macpherson, who talked about automorphism groups of countable relational structures, especially closed oligomorphic groups (and more specifically, automorphism groups of homogeneous structures over finite relational languages). His talk was in three parts, of which he skipped the second for lack of time. The first part was stuff I knew well, the connection between Ramsey structures and topological dynamics (the Kechris–Pestov–Todorcevic theorem and what happened after). The second part would have been about simplicity. The third concerned the existence of “ample generics” and applications of this to many things including the small index property, uncountable cofinality, and the Bergman property, and then sufficient conditions for ample generics (with names like EPPA and APPA).

Eliza Jablonska talked about “Haar meager sets”, an ideal of subsets of a topological group having many similarities to the Haar null sets in the case of a locally compact group. Some, but not all, of the classic results about measure and category for the real numbers go through to this situation.

Finally, Janusz Brzdek talked about the generalised Steinhaus property. In loose terms, this says that, if A is a subset of a topological group which is “large enough and decent”, then A−A has non-empty interior, and more generally, if A is as above and B is “not small”, then A−B has non-empty interior. This kind of result has applications in the theory of functional equations, for example showing that if you have a solution of f(x+y) = f(x)+f(y) in a large enough and decent subset of G then this solution can be extended to the whole of G. There are also applications to “automatic continuity” (but I don’t know what this is). He started off with some very general results which apply in any magma (set with a binary operation) with identity. You have to redefine A−B in such a case, since inverses don’t exist; it is the set of z for which z+B intersects A. He went on to a discussion of microperiodic functions (which have arbitrarily small periods): on the reals, such a function, if continuous, must be constant, and if Lebesgue measurable, must be constant almost everywhere. There are also approximately microperiodic functions, sub-microperiodic functions, and so on.

Then off to a pleasant conference dinner at the local Thai restaurant, where conversation ranged over doing, teaching and understanding mathematics, along with many other topics.

Wednesday was a beautiful day, so I walked in to the LSE, past St Paul’s and down Fleet Street.

The first speaker, Harry Miller, was talking to us by Skype from Sarajevo, since his health is not good enough to allow him to make the trip. The technology worked reasonably well, though the sound quality was not great and the handwritten slides were packed with information. He gave us half a dozen different conditions saying that a subset of the unit interval is “large” (not including the well-known ones, measure 1 and residual), and a number of implications between them and applications. One nice observation: are there compact sets A and B such that A+B contains an interval but A−B doesn’t? Such sets had been constructed by “transfinite mumbo-jumbo”, but he showed us a simple direct construction: A is the set of numbers in [0,1] whose base-7 “decimal” expansion has only digits 0, 4 and 6, while B is the set using digits 0, 2 and 6.
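Membership in A and B is just a digit condition, so the construction invites experiment. The code is my own, and the closing digit-level observation — that the pairwise digit sums realise every even value from 0 to 12, while the digit differences miss −4 — is my remark on where the asymmetry between the two sets lives, not Miller's argument:

```python
A_DIGITS, B_DIGITS = {0, 4, 6}, {0, 2, 6}

def digits7(num, k):
    """Base-7 digits, most significant first, of an integer 0 <= num < 7**k;
    num / 7**k is then the corresponding point of the unit interval."""
    ds = []
    for _ in range(k):
        num, d = divmod(num, 7)
        ds.append(d)
    return ds[::-1]

in_A = lambda num, k: set(digits7(num, k)) <= A_DIGITS
in_B = lambda num, k: set(digits7(num, k)) <= B_DIGITS

# 34/49 = 0.46 in base 7 lies in A (digits 4, 6) but not in B
print(in_A(34, 2), in_B(34, 2))  # True False
print(sorted({a + b for a in A_DIGITS for b in B_DIGITS}))  # [0, 2, 4, 6, 8, 10, 12]
print(sorted({a - b for a in A_DIGITS for b in B_DIGITS}))  # [-6, -2, 0, 2, 4, 6]
```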

After this, Marta Stefankova talked about hyperspace: this is the space K(X) of all compact subsets of a compact metric space X, with the Hausdorff metric. Given a map f, how do the dynamics of f on X relate to the dynamics of the induced map on K(X)? She introduced four kinds of “Li–Yorke chaos”: a map can be generically, or densely, chaotic or epsilon-chaotic. There are a number of implications between the resulting eight situations, but some non-implications as well; almost everything is known if X is the unit interval but in other cases there are still many mysteries.

Adrian Mathias, whom I haven’t seen for donkey’s years, talked about connections between descriptive set theory and dynamics (he said, inspired by a trip to Barcelona, where he found the dynamicists sadly lacking in knowledge about descriptive set theory). The subtitle was “analytic sets under attack”. (The basic idea is that iteration of a map f is continued into the transfinite (I missed the explanation of how this is done), and x attacks y if there is an increasing sequence of ordinals so that the corresponding sequence of iterations of f applied to x has limit y.)

Dona Strauss talked about the space βN, the Stone–Cech compactification of the natural numbers (realised as the set of ultrafilters on the natural numbers). It inherits a semigroup structure from N, and the interplay of algebra and topology is very interesting. Her main result was that many subsets of βN which are very natural from an algebraic point of view are not Borel sets: these include the set of idempotents, the minimal ideal, any principal right ideal, and so on. (The Borel sets form a hierarchy, but any beginners’ text on descriptive set theory tells you not to worry too much, all interesting sets are Borel, and are in fact very low in the hierarchy.)

Peter Allen took time out from looking after his five-week-old baby to come and tell us about graph limits. It was a remarkable talk; I have heard several presentations of that theory, including one by Dan Kral in Durham, but Peter managed to throw new light on it by taking things in a different order. He also talked about some applications, such as a version of the theory for sparse graphs, and some major results already found using this approach: these include

  • a considerable simplification of the Green–Tao theorem that the primes contain arbitrarily long arithmetic progressions; and
  • a solution of an old problem: given two bounded sets A, B in Euclidean space of dimension at least 3, having the same measure, each of which “covers” the other (this means finitely many congruent copies of A cover B and vice versa), there is a finite partition of A into measurable pieces, and a collection of isometries (one for each piece) so that the images of the pieces under the isometries partition B.

Finally it was time for Adam’s talk. His title was “Asymptotic group actions and their limits”, but he had the brilliant subtitle for a talk at the London School of Economics and Political Science: “On quantifier easing”. He explained how the notion of regular variation had led him to various functional equations, the simplest of which is additivity, and then he had discovered that these equations were actually the same up to a twist. There was quite a lot of technical stuff, and my concentration was beginning to fade, so I didn’t catch all the details.

That was the end of the conference, and we all went our separate ways.



Rosemary Bailey, Tomas Nilson and I have just put a paper on the arXiv on this topic.

I discussed related things here last year, but here is a reminder. A triple array is an r×c array whose entries are from a set of v letters, satisfying the following five conditions:

  • (A0) No letter appears more than once in any row or column.
  • (A1) Each letter occurs a constant number k of times, where vk = rc.
  • (A2) The number of letters common to any two rows is a non-zero constant.
  • (A3) The number of letters common to any two columns is a non-zero constant.
  • (A4) The number of letters common to any row and any column is a non-zero constant.

I will refer to these conditions as binary, equireplicate, row-balanced, column-balanced, and having adjusted orthogonality. (For more precise usage see the paper.)
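For concreteness, here is a small Python sketch (my own illustration, not from the paper) that checks conditions (A0)–(A4) for an array given as a list of rows. Note that a Latin square passes all five checks in a degenerate way; the paper’s more precise usage is presumably what rules such cases out.

```python
from itertools import combinations
from collections import Counter

def check_conditions(array):
    """Return the truth values of (A0)-(A4) for an array given as a list of rows."""
    r = len(array)
    cols = [[row[j] for row in array] for j in range(len(array[0]))]

    def constant_nonzero(sizes):
        s = set(sizes)
        return len(s) == 1 and 0 not in s

    a0 = all(len(set(line)) == len(line) for line in array + cols)      # binary
    a1 = len(set(Counter(x for row in array for x in row).values())) == 1
    a2 = constant_nonzero(len(set(u) & set(w)) for u, w in combinations(array, 2))
    a3 = constant_nonzero(len(set(u) & set(w)) for u, w in combinations(cols, 2))
    a4 = constant_nonzero(len(set(u) & set(w)) for u in array for w in cols)
    return a0, a1, a2, a3, a4

# A cyclic 3x3 Latin square satisfies all five conditions (degenerately):
latin = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
print(check_conditions(latin))   # → (True, True, True, True, True)
```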

These arrays were first considered by Agrawal (I described this here.) He noted that if a triple array has r+c = v+1, then it gives rise to a symmetric BIBD, whose points are the rows and columns and whose blocks are the letters and an additional symbol *; a letter is incident with the rows not containing that letter and the columns containing it, whereas * is incident with all rows and no columns. He conjectured that the process could be reversed, so that from a symmetric BIBD one could construct a triple array, except for the Fano plane and its complement (where a simple sudoku-like argument shows that it is not possible). The conjecture is still open; no counterexamples have been found, and in general it seems easy to find the appropriate array.

In the paper, we define a sesqui-array as an array satisfying (A0)–(A2) and (A4) but not necessarily (A3).

From a statistical point of view, we are interested in things like “efficiency” and “optimality” of a design, which are answers to the question: how accurately can we estimate, say, differences between treatment effects using this design? Among block designs, it is known that balanced incomplete-block designs (if they exist) are optimal for all the standard measures of optimality. Analogously, triple arrays are best if they exist. But if a triple array does not exist, a sesqui-array may be a good substitute: the adjusted orthogonality is the important thing, and beyond that we want the row and column designs to be as good as possible; if the row design is balanced, then the array with the better column design should win.

In the paper, there are two main constructions. The first is a 7×36 sesqui-array with 42 letters, whose column design is the Sylvester design which I mentioned here: since it is a good design, hopefully the sesqui-array is also efficient. (Just a week or so ago, I got the computer to print out the array from my elegant construction, and found that it was fatally flawed and couldn’t be fixed; so this would have to come out of the paper, which would play havoc with the balance. Fortunately, within a day I had fixed it!)

The other is a construction of sesqui-arrays from biplanes which contains a small surprise.

A biplane is a symmetric BIBD with parameters (V,K,2). In other words, there are V points and V blocks; each block has K points and each point lies in K blocks; two points lie in two blocks, and two blocks intersect in two points. We have V = 1+K(K−1)/2. (I will use capital letters, as we have used the more common lower-case v and k as array parameters.)
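The identity comes from double counting: a fixed point q lies in K blocks, each containing K−1 other points, while every other point occurs with q in exactly two blocks, so 2(V−1) = K(K−1). A couple of lines of Python (my own sketch) tabulate this for the values of K for which biplanes are currently known:

```python
# V = 1 + K(K-1)/2, from the double count 2(V-1) = K(K-1) with lambda = 2.
def biplane_points(K):
    return 1 + K * (K - 1) // 2

# Biplanes are currently known to exist only for these values of K.
for K in (3, 4, 5, 6, 9, 11, 13):
    print(K, biplane_points(K))   # e.g. K = 4 gives the 7-point biplane
```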

There are only finitely many known biplanes, though no finiteness theorem has been proved. The standard reference on them is Section 15.8 of Marshall Hall’s book Combinatorial Theory, second edition. (Actually the data in this section contains several errors; I will list the ones we found at the end.)

There is a convenient description of a biplane that Hall uses. Fix a block B of a biplane. Now any further block meets B in two points, and any two points of B lie in one further block; so we can represent the other blocks by the 2-element subsets of B. If q is a point outside B, then the blocks containing q meet B in two points, which can be thought of as the edges of a graph H(q) of valency 2 (that is, a union of cycles), called a Hussain chain. It is easy to see that the biplane can be reconstructed from the set of Hussain chains on a block.
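To see the definition in action, here is a sketch (my own choice of example, not from the paper) computing the Hussain chains of the smallest biplane, the (7,4,2) design whose blocks are the complements of the lines of the Fano plane. Every chain turns out to be a 4-cycle, the configuration that forces repeated column entries in the array construction, consistent with the Fano complement being an exception to Agrawal’s conjecture.

```python
from itertools import combinations
from collections import Counter

# The (7,4,2) biplane: complements of the Fano lines on points 0..6.
fano = [{0,1,2}, {0,3,4}, {0,5,6}, {1,3,5}, {1,4,6}, {2,3,6}, {2,4,5}]
blocks = [frozenset(set(range(7)) - line) for line in fano]
assert all(len(X & Y) == 2 for X, Y in combinations(blocks, 2))  # biplane check

B = blocks[0]                             # fixed block, here {3,4,5,6}
outside = sorted(set(range(7)) - B)

# Hussain chain H(q): one edge X ∩ B for each block X containing q.
def hussain(q):
    return [tuple(sorted(X & B)) for X in blocks if q in X]

for q in outside:
    degrees = Counter(v for e in hussain(q) for v in e)
    assert set(degrees.values()) == {2}   # valency 2: a union of cycles
    print(q, hussain(q))                  # here: always a single 4-cycle
```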

Now consider the array whose rows and columns are labelled by the points in B and the points not in B respectively. In row p and column q, we put the 2-element subset of B consisting of the two neighbours of p in the Hussain chain H(q). Now observe:

  • There are no pairs repeated in a row, as is easily seen (two adjacent edges belong to a unique Hussain chain). There may be repeated pairs in a column; this will occur if there is a 4-cycle in a Hussain chain, since two opposite points on a 4-cycle have the same set of neighbours.
  • The array is equireplicate, row-balanced, and has adjusted orthogonality. Therefore, if no Hussain chain contains a 4-cycle, we get a sesqui-array.
  • If every Hussain chain consists of 3-cycles only, then the two neighbours of a vertex themselves form an edge, so the array is the one associated to the biplane by Agrawal, and is therefore a triple array. This occurs for Hall’s first 16-point biplane B1 (with any choice of block; the Hussain chains are all 10 possible pairs of triangles), and for Hall’s first 37-point biplane (the Hussain chains are the orbits of subgroups of order 3 in PGL(2,8)). This gives 6×10 and 9×28 triple arrays in a very simple form.
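As a worked example (my own, not one from the paper), the construction can be carried out on the (11,5,2) biplane built from the quadratic residues mod 11, which form a difference set (a standard fact). There every Hussain chain is a 5-cycle, so no 4-cycles arise, and the properties claimed above can be verified directly:

```python
from itertools import combinations
from collections import Counter

# The (11,5,2) biplane from the quadratic-residue difference set mod 11.
QR = {1, 3, 4, 5, 9}
blocks = [frozenset((x + i) % 11 for x in QR) for i in range(11)]
assert all(len(X & Y) == 2 for X, Y in combinations(blocks, 2))  # biplane check

B = blocks[0]                          # fixed block
rows = sorted(B)                       # points of B label the rows
cols = sorted(set(range(11)) - B)      # points outside B label the columns

def hussain(q):                        # edges of the Hussain chain H(q)
    return [frozenset(X & B) for X in blocks if q in X]

def entry(p, q):                       # the two neighbours of p in H(q)
    return frozenset(a for e in hussain(q) if p in e for a in e if a != p)

array = {(p, q): entry(p, q) for p in rows for q in cols}

# Each chain is a 5-cycle (valency 2 on 5 points), so no 4-cycles and
# hence no repeated letters in any row or column (A0):
for q in cols:
    assert set(Counter(v for e in hussain(q) for v in e).values()) == {2}
for p in rows:
    assert len({array[p, q] for q in cols}) == len(cols)
for q in cols:
    assert len({array[p, q] for p in rows}) == len(rows)

# Equireplicate (A1): the 10 letters (pairs from B) each occur k = 3 times.
counts = Counter(array.values())
assert len(counts) == 10 and set(counts.values()) == {3}

# Row balance (A2) and adjusted orthogonality (A4):
rowset = {p: {array[p, q] for q in cols} for p in rows}
colset = {q: {array[p, q] for p in rows} for q in cols}
assert {len(rowset[p] & rowset[p2]) for p, p2 in combinations(rows, 2)} == {3}
assert {len(rowset[p] & colset[q]) for p in rows for q in cols} == {3}

# Column intersections (A3) are not guaranteed by the construction:
print("column intersection sizes:",
      sorted({len(colset[q] & colset[q2]) for q, q2 in combinations(cols, 2)}))
```

This yields a 5×6 sesqui-array on 10 letters with k = 3, with row intersections and row–column intersections constant at 3.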

The surprise comes from looking at the other two 16-point biplanes (B2 and B3 in Hall). In both cases, even though some Hussain chains are hexagons, the sesqui-array turns out to be a triple array; and in both cases, the biplane associated to it by Agrawal’s construction is B1. There is a nice explanation for this, which you can find in the paper. But it does show that the relation between isomorphism of designs and isomorphism of arrays is rather subtle.

For the record, the misprints we found in Hall’s book are:

  • Block A10 in the biplane B1 with k = 6: the entry 16 should be 15.
  • In the biplane BL(9), the second occurrence of 31 in the permutation ψ should be 34, and the element 32 in the second base block should be 22.
  • In the biplane BH(9), in the block beginning 2, 8, the entry 38 should be 34; and in the block beginning 7,8, the entry 10 should be 16.
  • In the table of chain lengths on p.333, 5-5-3 should be 5-3-3.
  • There are two misprints in the generators of the automorphism group of B(13) on p.334: the generator x should have 69 inserted before 70 in the last cycle, and the generator z should have the cycle (15,24) rather than (15,25).


Simon and Garfunkel sang,

May, she will stay [but will she?]
June, she’ll change her tune
August, die she must

In the same vein, John Wyndham, in The Chrysalids, wrote what might serve as the obituary for her defunct parrot Strong’n’Stable:

Soon, they’ll obtain the stability they strive for, in the only way it’s granted: a place among the fossils.

(I quote from memory, that may not be quite right.)

It was interesting being in a constituency where the result was decided by two votes after three counts (giving different results, so I have very little confidence that the last recount was correct). Here is my theory of what happened.

North-east Fife was always going to be close between the SNP and the Liberal Democrats, with perhaps the Lib Dems having a slight edge. Then, shortly before the election, the Tory candidate woke up and started flooding the constituency with campaign literature. This didn’t give any policies; the only thing he said was that only a vote for him would “stop the SNP”. This of course was not true, but it possibly scared a few people who were thinking of voting Lib Dem into voting Conservative instead. So the result of his campaign was that the SNP candidate won.

This in my view is because of the “first-past-the-post” electoral system (misnamed, since there is no post which candidates have to pass), which encourages tactical voting rather than support of policies.
