It was a 2-day meeting; each day had a number of invited speakers and a smaller number of PhD student presentations. The highlight on the first day was a talk by Ciaran McCreesh from Edinburgh, with the intriguing title “Is your combinatorial search algorithm telling you the truth?” The answer is “Not always” in the case of some of the major programs in this area; he found some gave wrong answers with a frequency of about 1 in 10000, worryingly high. What to do about this? Two traditional answers are to use provably correct software (impossible at present for programs of this size and complexity) or extensive software testing (which has been done in this case without success – you only find the bugs you are looking for). He proposed a third approach. As the program finds solutions or shows they don’t exist, it also produces a proof of this result, which can be fed to a checker along with the output. He spent some time on the satisfiability problem for formulae in conjunctive normal form, where such proof tools include unit propagation and resolution. For more general cases he proposes using integer variables with the values 0 and 1 only in place of Boolean variables, and replacing Boolean expressions by linear constraints, so that cutting-plane methods can be used. It is important that the methods used should be simple to check, so that issues of correctness of the checking program don’t arise. He claims that this method should be independent even of dodgy operating systems and random hardware errors.
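The core step of such a checker, unit propagation over clauses in conjunctive normal form, is simple enough to sketch. The following toy Python fragment (my own illustration, not McCreesh's actual tool) uses the common convention of signed integers for literals: a clause is a list of integers, with −*v* meaning the negation of variable *v*.

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly assign forced literals; return 'conflict' if an empty
    clause arises, else the resulting partial assignment (var -> bool)."""
    assign = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                var, val = abs(lit), lit > 0
                if var in assign:
                    if assign[var] == val:
                        satisfied = True    # clause already true
                        break
                else:
                    unassigned.append(lit)
            if satisfied:
                continue
            if not unassigned:
                return "conflict"           # empty clause: contradiction
            if len(unassigned) == 1:        # unit clause: literal forced
                lit = unassigned[0]
                assign[abs(lit)] = lit > 0
                changed = True
    return assign
```

For example, the clauses {*x*₁}, {¬*x*₁ ∨ *x*₂}, {¬*x*₂} propagate to a conflict, which is exactly the kind of step a simple proof checker needs to verify.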

(Subsequently João Araújo sent me a link to slides of a talk by Curtis Bright, who has been thinking along similar lines.)

A similar theme, in a more specific situation, was picked up by Ruth Hoffmann in a talk about speeding up algorithms for hard problems between subgraph isomorphism and largest common subgraph. Her technique was to take a suitable homomorphic image of the graphs in question, augmented with some numerical data, so that the problem is easier to solve in the images and solutions there give information about solutions in the preimages. The goal is to have an add-on which is independent of the algorithm used to solve the problem, and so can potentially speed up any algorithm.

Torsten Mütze told us two weeks ago about a construction of Hamiltonian cycles in the Kneser graphs. This time he left that to his student Namrata, and instead told us about a much more general technique. His Algorithm J works on any class of permutations which form a “zigzag language”, or indeed any class of objects which can be encoded as a zigzag language of permutations: this includes such varied objects as pattern-avoiding permutations, rectangulations, lattice congruences on the weak order, elimination trees, acyclic orientations, and more. His second algorithm, P, works on bitstrings, and again works on many related classes, but time didn’t permit a detailed description of this.

After dinner at the Italian Kitchen, we came back for another very interesting day. The talks of Bishal Deb and Natasha Blitvič, notionally on completely different things (Laguerre digraphs and combinatorial moment sequences), described the same 14(!)-variable generating function enumerating permutations with respect to 14 different “statistics” on permutations. Robert Brignall’s talk, on well-quasi-ordered permutation classes, overlapped with these. The expectation is that a well-quasi-ordered class (one containing no infinite antichain) will have a better-behaved enumeration function, but actually the picture is a bit more complicated than that.

Jess Enright combined two of her interests (cops-and-robbers games, and treewidth of graphs) to consider cops-and-robbers on multilayer graphs. A multilayer graph is simply one whose edge set is the union of a collection of “layers”; she took the rule that cops are confined to a single layer but robbers can move anywhere. She showed us many examples, the message being that virtually any plausible conjecture is false. (Some time ago, we had a “Discrete Mathematics and Big Data” day at St Andrews, where Simon Dobson talked about modelling London as a multilayer network: you can walk, or take the Tube. He was using real data from Oyster cards on the Tube.)

A model of random graphs where you add an edge at a time is by now “classical”, having been introduced by Erdős and Rényi in the 1950s. Dan Threlfall gave us a similar take on random permutations, where you instead compose with one inversion at a time, and computed thresholds for properties such as pattern containment.

Norman Biggs was the founder of the combinatorics group at the London School of Economics, and so the LSE day this year was devoted to celebrating his 80th birthday (even though he is now 81). Unfortunately he was not well enough to travel in for the event, but we did get to see a video he had recorded (after some technical problems had been resolved). Norman had known Bill Tutte well, and he told us about Tutte’s work at Bletchley Park during the second world war, details of which were not made public until 2000, by which time the public perception was that Alan Turing was the hero of Bletchley Park. Turing had an actual Enigma machine to work on, brought to Britain by the Polish cryptographers who had studied it before the war. By contrast, Tutte worked on the Lorenz cipher machine, more complicated than the Enigma and used for the highest level communications between Hitler and his generals, even though nobody in Britain had ever seen this machine (Tutte first saw it in 1947). Tutte was able to work out the details of the machine, including all the wheel sizes, from a string of key which had been obtained as a result of an error by a German radio operator. Norman couldn’t explain precisely what Tutte did, but was able to show us one of Tutte’s working diagrams.

Also, because of the event for Norman Biggs, there were other oldies speaking: Robin Wilson, Martin Anthony, and Paul Seymour (though I regard Paul as a youngster). Robin started to tell us about the main developments in graph theory in the century from 1890 to 1990, though he only got as far as about 1970 because of the time lost to the technical problems. Martin told us about his work on machine learning, where the positive and negative training instances are not cut and dried but are weighted according to, for example, the margin between the instance and the boundary between positive and negative (if the classifiers are real valued) or what he called “sample width” (if they are Boolean). Paul told us about a variant of tree decomposition of a graph where the subgraphs in the “bags” are required to be, not just connected, but of bounded diameter. (An interesting side story: he and his colleague Eli Berger had proved a nice theorem, but had just been scooped, so they simply sat down and proved an even stronger theorem.)

The day also included two talks on the *G*_{n,p} random graph model: Annika Heckel on concentration results for the chromatic number when

At the Queen Mary day the previous day, there were many very nice talks, so it is hard for me to choose. Carla Groenland talked about skipless chains in the Boolean lattice (where the ranks of the chains form an interval); the main theorem states that the union of *m* disjoint chains can be covered by *m* disjoint skipless chains (so that Dilworth’s Theorem holds for skipless chains). She wondered whether this could be true in other lattices; of course I wondered about projective spaces.

My colleague Louis Theran works on rigidity of frameworks, but is always open to connections and relations with other areas. He told us about *Gaussian graphical models*, where you have some data in *n*-dimensional space and an *n*-vertex graph, and you want a maximum likelihood estimate for the covariance matrix when its inverse is constrained to be zero on non-edges of the graph; the question of interest is: what is the minimum number of datapoints you need for an MLE to exist? This is called the maximum likelihood threshold (MLT) of the graph. Louis showed us that it is related to global rigidity of the graph. He proposed a better version which looks at (*d*+2)-vertex subgraphs (a paradigm in rigidity theory).

Torsten Mütze talked about the conjecture of Lovász that, with five known exceptions, any vertex-transitive graph has a Hamilton cycle (and mentioned the competing conjecture of Babai that there is a positive ε with the property that infinitely many vertex-transitive graphs have no cycle longer than (1−ε)*n*, where *n* is the number of vertices). He was able to prove that Lovász was right for various graphs in the Johnson association scheme (still a very long way from resolving the question, in my view).

Thomas Bloom gave us an entertaining survey of the past, present and possible future of work on counting 3-term arithmetic progressions in large subsets of {1,…,*n*}.

I sometimes wonder why I write summaries like this. I think it is as backup memory; if I write down what I learn, or even just a brief summary, I have a better chance of finding it again in the future.

He founded the Scottish Charity Air Ambulance.

After Rosemary fell into the Kenly Water, the Coastguard came and got her out of the ice-cold water (where she had been for an hour), and then the SCAA helicopter took her to Ninewells Hospital in Dundee.

A base is *irredundant* if no point in the base is fixed by the stabiliser of its predecessors; it is *minimal* if no point in the base is fixed by the stabiliser of all the others. An irredundant base can be found simply by choosing and stabilising points not already fixed, until there are no more. By contrast, it is more difficult to find a minimal base.

A couple of things are clear:

- any minimal base is irredundant (but the converse is false);
- any re-ordering of a minimal base is minimal (but this is not true for irredundant bases).

In 1995, Dima Fon-Der-Flaass and I showed that the three conditions on irredundant bases, namely

- they are preserved by re-ordering;
- they all have the same size;
- they are the bases of a matroid;

are all equivalent. We call groups having this property *IBIS groups*.

Recently several people (Murray Whyte, Scott Harper, Pablo Spiga, maybe others) asked whether the set of sizes of irredundant bases forms an interval. This would be an analogue of a result of Tarski about generators of an algebra.

After Pablo asked me this, I started thinking about it. So let *G* be a permutation group which has an irredundant base of size *x* and one of larger size *y*, and let *z* be a number between *x* and *y*: is there an irredundant base of size *z*?

The easiest case to think about is *x* = 1; to my delight, this case is just Mornington Crescent (if you know what this is). You start working along the longer base, stabilising the points; after a while you get bored, say “Mornington Crescent”, and play the point in the short base, at which point the game terminates and you have an irredundant base.

(I had always regarded Mornington Crescent as an analogy for nuclear war; it is nice to find a less destructive interpretation of it!)

The proof in the general case is just a variation on this. Now you can’t win in just one step. Suppose for a contradiction that there is no irredundant base of size *z*. Choose points in the long base, and when you get bored, switch to the small base, so that you have *z* altogether. By assumption, this base must be redundant; but the points in the long base are irredundant, so the redundancies must be in the short base. Throw them away. Now play the game again, continuing a bit longer with the long base and then just using the remaining points of the short base, to get *z* altogether. This procedure can be repeated forever, contradicting the fact that the natural numbers are well-ordered (so it is not possible to throw away points from a finite set for ever). The contradiction shows that the assumption is wrong: there is an irredundant base of size *z*. This proof is nonconstructive, but you can easily turn it into an algorithm.
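The greedy procedure for finding an irredundant base, described above, is easy to sketch in code. Here is a minimal illustration using SymPy's permutation groups (the function is my own, not taken from any of the papers mentioned): run through the points in order, and stabilise each point that the current stabiliser still moves.

```python
from sympy.combinatorics.named_groups import SymmetricGroup

def irredundant_base(G):
    """Greedily build an irredundant base for the permutation group G:
    keep (and stabilise) each point moved by the stabiliser of the
    points chosen so far."""
    H, base = G, []
    for p in range(G.degree):
        if H.order() == 1:        # stabiliser already trivial: base complete
            break
        # p belongs in the base only if the current stabiliser moves it;
        # if every generator fixes p, the whole subgroup does
        if any(g(p) != p for g in H.generators):
            base.append(p)
            H = H.stabilizer(p)
    return base
```

For the symmetric group of degree 4 in its natural action this returns the base [0, 1, 2]: stabilising three points already leaves only the identity.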

What about minimal bases?

The analogue for minimal bases is false. For any integer *d* > 1, there is a group whose minimal bases have sizes 1 and *d* only. (Take an elementary abelian group of order *p*^{d}, and let it act with one regular orbit and

However, it is not yet known whether they do form an interval if the permutation group is required to be transitive. This seems much more difficult. Here is a possibly representative example.

Take the symmetric group of degree *n*, acting on the set of 2-element subsets of {1,…,*n*}. A set of points in the permutation domain is the edge set of an *n*-vertex graph; it is a base if and only if there is no isolated edge and at most one isolated vertex. It is easy to see that a minimal base can contain no cycles, so is a forest. Now a star on all but one vertex gives a minimal base of size *n*−2 (the largest possible). All smaller sizes down to roughly 2*n*/3 (the actual size depending on the congruence class of *n* mod 3) can be realised by forests whose components are stars.
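The base criterion just stated is easy to test mechanically. Here is a small Python sketch (my own illustration) that checks an edge set against it:

```python
def is_base(n, edges):
    """Sym(n) acting on 2-subsets: a set of edges on vertices 0..n-1 is
    a base iff the graph has no isolated edge and at most one isolated
    vertex."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    isolated_vertices = sum(1 for v in range(n) if not adj[v])
    # an isolated edge is one both of whose endpoints have degree 1
    has_isolated_edge = any(len(adj[u]) == 1 and len(adj[v]) == 1
                            for u, v in edges)
    return isolated_vertices <= 1 and not has_isolated_edge
```

For example, with *n* = 5, the star {0,1}, {0,2}, {0,3} (all but vertex 4) passes the test, while the matching {0,1}, {2,3} fails because each of its edges is isolated.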

A further question is: What happens for greedy bases? One of these is found by choosing the next base point in an orbit of maximum size for the stabiliser of its predecessors. (There is some variability because there might be a number of orbits of the same size.)

Look up orthogonal block structures in Rosemary’s book on association schemes. One of these structures is a sublattice of the lattice of partitions of a finite set, ordered by refinement (so that *P* is below *Q* if each part of *Q* is a union of parts of *P*), containing the bottom element (the partition into singletons) and the top element (the partition with a single part), and having two further properties: first, the partitions should all be uniform (that is, have all parts of the same size); and a second condition which I will come to shortly.

The reason why statisticians might be interested in collections of partitions is essentially the inhomogeneity of the experimental material. If you do an agricultural experiment, using plots on farms in different parts of the country, there will be a (probably significant) systematic difference between plots on different farms, and you probably need to take this into account so that you can allow for it when you are testing the real differences between the seeds, fertilisers, or whatever that are the subject of your experiment. Partitions into farms in this case, and similar in other cases, are referred to as “factors”.

The reason you need a lattice (that is, closure under meet and join) has to do with the analysis of variance, for example testing for interactions. The meet of two partitions *P* and *Q* is straightforward to define; it is the partition whose parts are all non-empty intersections of a part of *P* and a part of *Q*. Joins are more difficult; the join of *P* and *Q* is the partition into connected components of the graph whose edges are all pairs within either a part of *P* or a part of *Q*.

The nicest situation for the analysis is when the partitions are “orthogonal”. This is defined in terms of the vector space of all functions from the set of all experimental units to the real numbers. With each partition is associated a subspace consisting of all functions which are constant on each part of the partition; and the requirement is that, modulo the subspace associated with their meet, the subspaces for the two partitions should be orthogonal with respect to the usual inner product.

But later in the chapter on orthogonal block structures it is mentioned, in a throwaway remark, that orthogonality is equivalent to the condition that the equivalence relations commute. That is, the possible results of a step within a part of *P* followed by a step within a part of *Q* are the same as if we took the steps in the other order.
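The meet, join, and commutation test described above are all concrete enough to code. Here is a small Python sketch (my own illustration, with partitions given as lists of sets of points):

```python
def meet(P, Q):
    """Meet: all non-empty intersections of a part of P with a part of Q."""
    return [p & q for p in P for q in Q if p & q]

def join(P, Q):
    """Join: connected components of the graph joining points that share
    a part of P or a part of Q (computed by merging overlapping parts)."""
    parts = [set(p) for p in P] + [set(q) for q in Q]
    merged = True
    while merged:
        merged = False
        out = []
        for s in parts:
            for t in out:
                if s & t:          # overlapping parts lie in one component
                    t |= s
                    merged = True
                    break
            else:
                out.append(set(s))
        parts = out
    return parts

def part_of(x, R):
    return next(p for p in R if x in p)

def commute(P, Q, points):
    """Do the equivalence relations of P and Q commute under composition?"""
    def compose(A, B):
        return {(x, z) for x in points for y in part_of(x, A)
                for z in part_of(y, B)}
    return compose(P, Q) == compose(Q, P)
```

The rows and columns of a 2×2 grid give a pair of commuting partitions whose meet is the partition into singletons and whose join is the whole set; on three points, the partitions {0,1}|{2} and {0}|{1,2} do not commute.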

Now if *G* is a transitive permutation group on a set Ω, then the *G*-invariant partitions of Ω satisfy all these conditions except possibly the last: the meet and join of invariant partitions are invariant; the two trivial partitions are invariant; and the invariant partitions are uniform. (These partitions are just the systems of imprimitivity for the group.)

I described earlier how Marina and I had been looking at the concept we called *pre-primitivity* of a transitive group: this is the property that every *G*-invariant partition is the orbit partition of a normal subgroup of *G*. Empirically it seems that a very large proportion of transitive groups have this property, including all the primitive ones. We noticed that the invariant partitions of a pre-primitive group commute pairwise, and so form an orthogonal block structure. So this condition, which for reasons I will come to we called the *relatively permuting* (RP) property, is weaker than pre-primitivity. For example, there are 1954 transitive groups of degree 16; 1833 of them are pre-primitive, but 1886 of them have the RP property.

So of course we looked for further connections.

The reason that orthogonal block structures are studied in a book on association schemes is that any such structure gives rise to an association scheme. Each partition in the OBS is defined by an equivalence relation, which is a set of pairs; if we remove from each such set the pairs in the equivalence relations strictly below the one we are considering, we might get the empty set, but the non-empty sets that arise in this way form an association scheme (a symmetric coherent configuration, in the other language used for these things).
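This construction can be sketched directly. The following Python fragment (my own illustration, with each equivalence relation given as a set of ordered pairs) subtracts from each relation the union of the relations strictly below it:

```python
def scheme_classes(relations):
    """Given the equivalence relations of an orthogonal block structure
    (each a set of ordered pairs), subtract from each relation the union
    of the relations strictly contained in it; the non-empty differences
    are the classes of the resulting association scheme."""
    classes = []
    for R in relations:
        below = set()
        for S in relations:
            if S < R:              # strict containment of relations
                below |= S
        cls = R - below
        if cls:
            classes.append(cls)
    return classes
```

As a test case, take the block structure on 4 points consisting of the partition into singletons, one middle partition {0,1}|{2,3}, and the top partition: the classes have sizes 4, 4 and 8, partitioning all 16 ordered pairs.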

A little more research led us to the AssociationSchemes package for **GAP**, written by John Bamberg, Akihide Hanaki, and Jesse Lansdown. This is not part of the official distribution; but it exists, and can be downloaded. One feature of it is that the library files amount to half a gigabyte, and dwarf all the rest of the distribution. Also, I couldn’t install it on my desktop machine, since I don’t have root access; but I put it on my laptop and ran it. Very quickly it discovered that the association schemes derived from the 1886 transitive groups on 16 points which have the RP property fall into just 37 isomorphism classes.

However, reading the manual for the package, I get the impression that it was not really designed with these highly imprimitive association schemes in mind; there is rather more concentration on things like Hamming and Johnson schemes.

So what?

Marina is taking her undergraduate exams this month, so she is temporarily off the case. So I am just playing round with it, not very seriously, while her attention is distracted.

One observation is that a lattice of partitions satisfying the OBS conditions is modular. (This observation doesn’t appear in the Association Schemes book, but when I asked Rosemary she had no difficulty coming up with the proof.) Now we can observe that the lattice of invariant partitions of a transitive group is isomorphic to the lattice of subgroups lying between the point stabiliser and the whole group. Moreover, the commuting equivalence relations imply that the subgroups containing the point stabiliser permute pairwise; that is, if *H* and *K* are two of these, then *HK* = *KH*. (Hence the name: RP stands for “relatively permuting”.) But modularity is not equivalent to the permuting property. The lattice of subgroups of the symmetric group S_{3} is modular, but the three subgroups of order 2 do not permute pairwise. A little further thought showed that even distributivity of the lattice does not imply the RP property. Forming a chain, however, does.
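The S_{3} example is small enough to check by machine. Here is a quick Python verification (my own illustration, with permutations written as tuples) that two of the order-2 subgroups fail to permute:

```python
def mult(f, g):
    """Compose permutations written as tuples: (f*g)(i) = f(g(i))."""
    return tuple(f[g[i]] for i in range(len(g)))

identity = (0, 1, 2)
H = {identity, (1, 0, 2)}   # subgroup generated by the transposition (0 1)
K = {identity, (2, 1, 0)}   # subgroup generated by the transposition (0 2)

# HK and KH are both 4-element subsets of S_3, but they are not equal,
# so H and K do not permute (and indeed HK is not a subgroup)
HK = {mult(h, k) for h in H for k in K}
KH = {mult(k, h) for h in H for k in K}
```

Here HK contains the 3-cycle (2, 0, 1) while KH contains the other 3-cycle (1, 2, 0), so the two products differ.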

It would be nice to think that we might come up with some novel orthogonal block structures which could be interesting to statisticians. But this is unlikely to happen. As my earlier remarks were meant to indicate, the OBS used in an experiment is dictated by the nature of the experimental material, rather than being chosen by the experimenter.

So what about RP as a property of permutation groups? When discussing pre-primitivity, I remarked that pre-primitivity and quasiprimitivity in conjunction were equivalent to primitivity. This is not the case for the RP property. There are quasiprimitive groups which have RP but are not primitive. As just stated, if the subgroups between the point stabiliser and the whole group form a chain, then RP holds, and there are examples of this, such as the symmetric and alternating groups of degree 5 in their actions on 15 points (having 5 blocks of size 3).

This is part of a general program to convince people that even highly imprimitive association schemes are interesting. I am hoping that John Bamberg and his co-writers of the AssociationSchemes package may be persuaded to add some functions for dealing with orthogonal block structures (and specialisations of them such as poset block structures and special orthogonal block structures). In a sense, an orthogonal block structure describes just the systems of imprimitivity for an association scheme, discarding any other structure it might have; so they are in a sense orthogonal to some of the more familiar types such as Hamming and Johnson schemes.

I never met Simmons, and his field, functional analysis, was far removed from mine; yet I felt a connection to him. In 1963, just before I started my mathematics degree at the University of Queensland, he published a book called *Introduction to Topology and Modern Analysis* with McGraw-Hill. I had a copy of this book while I was a student. I think it is possibly the best text-book I ever had.

It was comprehensive and clear, both in explaining things and in giving rigorous proofs, but that is not the main reason I liked it: rather, because it told a story.

The book is divided into three roughly equal parts. The first is a first course on topology, covering all the things you would expect: connectedness, compactness, separation axioms, and so on. The second part is a course on linear algebra. Probably not a first course, since almost everyone who would use the first part has already studied some linear algebra; but it is a very nice account, starting from basics and getting up to a thorough treatment of the spectral theorem. The final part, taking the view that functional analysis is just linear algebra with some topology thrown in, went up to some of the basic theorems, and included applications to, if I remember correctly, existence of solutions of differential equations.

I suppose it was only because I had a strong leaning towards discrete mathematics that I wasn’t seduced into becoming a functional analyst by that book.

Of course, my copy disappeared many years ago, possibly in the big shake-out when I had to leave my office in Queen Mary at two weeks’ notice (I had a “book sale” and invited my colleagues and students along, the price of any item being zero, and I only kept a few books for my own use).

GitHub was the most popular suggestion, so I went with that. The new address is https://cameroncounts.github.io/web/

I have been extremely fortunate in having wonderful colleagues who have organised three birthday conferences for me. (Well, one was actually a retirement conference, but I retired at 65, so I am including it.) So here are the talks I gave at each of these conferences. They contain some errors, and some conjectures which turned out to be false, but I think they have some interest.

- From Higman–Sims to Urysohn, Ambleside, August 2007
- Primitivity, London, July 2013
- Perchance to dream, Lisbon, July 2017

Enjoy!

The notion of primitivity was introduced by Galois, and extensively discussed in his Second Memoir, and has been of great interest to mathematicians ever since. However, a careful reading of the text by Peter Neumann revealed that Galois confused two different notions. A transitive permutation group *G* is *primitive* if the only *G*-invariant partitions are the trivial ones (the partition into singletons, and the partition with a single part). However, sometimes Galois used a different property, namely that any non-trivial normal subgroup is transitive. This is weaker, since the orbits of a normal subgroup of *G* form a *G*-invariant partition; and it is strictly weaker, as can be seen by taking a transitive but imprimitive action of a simple group (for example, the regular action on itself by right multiplication). This property is now called *quasiprimitivity*.

I had the following idea. Suppose that **P** and **R** are properties with **R** strictly stronger than **P**. Can we find a property **Q** which is independent of **P** (in the sense that neither implies the other) but such that **P** and **Q** together are equivalent to **R**? Investigation of **Q** might throw some light on the difference between **P** and **R**. There is no obvious natural property which does this in general, but when one exists, I decided to call it *pre*–**R**.

In the case of quasiprimitivity and primitivity, there is such a property. A permutation group *G* is *pre-primitive* (or PP for short) if every *G*-invariant partition is the orbit partition of a subgroup of *G*. If this holds, it is easy to see that we can take the subgroup to be normal. From this it follows that pre-primitivity and quasiprimitivity imply primitivity. The converse is also clear, and it is easy to find examples having one of the two properties but not the other.

With the help of Marina Anagnostopoulou-Merkouri and Enoch Suleiman, I have been investigating this property; our paper is now on the arXiv, at 2302.13703. There is quite a lot to say. Here are a couple of sample results.

- A regular permutation group *G* is pre-primitive if and only if *G* is a *Dedekind group* (that is, every subgroup is normal).
- The wreath product of two transitive groups in its imprimitive action is pre-primitive if and only if both factors are pre-primitive. (Things are not so straightforward for other products such as direct product or wreath product in the product action, though in these cases we have necessary conditions and sufficient conditions.)
- Let *T* be a finite group and *m* a positive integer. Suppose that, for any characteristic subgroup *K* of *T*, if *L* is the subgroup of *K* generated by (*m*+1)th powers and commutators, then every subgroup of *K* containing *L* is normal in *T*. Then the diagonal group *D*(*T*,*m*) is pre-primitive. (It may be that the converse is also true, but we have not been able to prove this.)

One of our most remarkable discoveries is that we happened to strike it lucky with our first attempt. We looked at several other pairs of properties in the hierarchy and found very little to say about them:

- Transitivity and quasiprimitivity: *G* is pre-QP if every normal subgroup of *G* is either transitive or trivial on each orbit. So what?
- *k*-homogeneity (transitivity on *k*-element sets) and *k*-transitivity (transitivity on *k*-tuples of distinct elements): this had already been done by Peter Neumann, who called a group *generously* (*k*−1)-*transitive* if the setwise stabiliser of any *k*-set induces the symmetric group on it.
- Primitivity and synchronization: *G* is synchronizing if, whenever *f* is a self-map of the domain which is not a permutation, the monoid generated by *G* and *f* contains an element of rank 1. Peter Neumann (again) found a convenient version for our game. A partition Π of the domain is called *section-regular* if there is a subset *A* such that, for all *g* ∈ *G*, the set *Ag* is a section for Π. Now *G* is synchronizing if and only if it has no non-trivial section-regular partition. So we can say that *G* is pre-synchronizing if every section-regular partition is *G*-invariant. This does what we want (so that a group is synchronizing if and only if it is primitive and pre-synchronizing); but unfortunately the following holds: the only imprimitive pre-synchronizing group is the Klein group acting regularly.

As noted, there is much more; take a look at the paper, or try your hand at this game with other pairs of properties. There is no reason to stick to permutation groups here! In the meantime we have found another property, even weaker than PP, which we are still investigating; I hope to report on it sometime soon.

Let *n* = *p*_{1}^{a1}…*p*_{s}^{as}

**Problem:** Is it true for *s* = 3? I am inclined to guess that it is, but it is quite delicate and I didn’t find a proof. (I have stopped looking now that I don’t actually need this any more.)