Another event back in face-to-face mode: the Scottish Combinatorics Conference took place this week at the University of Strathclyde in Glasgow, organised by David Bevan and other members of the combinatorics group. Glasgow is a bus ride from St Andrews, so we get there for free, courtesy of the Scottish government. The bus might give you a slightly distorted idea of the relative proportions of Scotland: the journey takes 2 hours and 50 minutes, of which 2 hours is spent crossing the Kingdom of Fife.

It was a 2-day meeting; each day had a number of invited speakers and a smaller number of PhD student presentations. The highlight on the first day was a talk by Ciaran McCreesh from Edinburgh, with the intriguing title “Is your combinatorial search algorithm telling you the truth?” The answer is “Not always” in the case of some of the major programs in this area; he found some gave wrong answers with a frequency of about 1 in 10000, worryingly high. What to do about this? Two traditional answers are to use provably correct software (impossible at present for programs of this size and complexity) or extensive software testing (which has been done in this case without success – you only find the bugs you are looking for). He proposed a third approach. As the program finds solutions or shows they don’t exist, it also produces a proof of this result, which can be fed to a checker along with the output. He spent some time on the satisfiability problem for formulae in conjunctive normal form, where such proof tools include unit propagation and resolution. For more general cases he proposes using integer variables taking only the values 0 and 1 in place of Boolean variables, and replacing Boolean expressions by linear constraints, so that cutting-plane methods can be used. It is important that the methods used should be simple to check, so that issues of correctness of the checking program don’t arise. He claims that this method should be robust even against dodgy operating systems and random hardware errors.
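To give a flavour of the simplest of these proof rules, here is a minimal sketch of unit propagation on clauses in DIMACS style (an integer 3 means the variable x3, and −3 its negation). This is my own illustration, not McCreesh’s code: whenever a clause has all but one literal falsified, the remaining literal is forced, and repeating this either fixes variables or exposes a contradiction.

```python
def unit_propagate(clauses, assignment=None):
    """Repeatedly apply unit propagation to a CNF formula.

    clauses: list of clauses, each a list of non-zero ints
             (DIMACS style: 3 means x3, -3 means not x3).
    Returns (assignment, conflict): the forced partial assignment,
    and True if some clause became empty (a contradiction).
    """
    assignment = dict(assignment or {})
    changed = True
    while changed:
        changed = False
        for clause in clauses:
            unassigned = []
            satisfied = False
            for lit in clause:
                val = assignment.get(abs(lit))
                if val is None:
                    unassigned.append(lit)
                elif (lit > 0) == val:
                    satisfied = True
                    break
            if satisfied:
                continue
            if not unassigned:
                return assignment, True  # clause falsified: conflict
            if len(unassigned) == 1:     # unit clause: literal is forced
                lit = unassigned[0]
                assignment[abs(lit)] = lit > 0
                changed = True
    return assignment, False
```

The point McCreesh stressed is that a rule this simple can be checked by a small, trusted program, even when the solver that produced the proof is large and complicated.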

(Subsequently João Araújo sent me a link to slides of a talk by Curtis Bright, who has been thinking along similar lines.)

A similar theme, in a more specific situation, was picked up by Ruth Hoffmann in a talk about speeding up algorithms for hard problems lying between subgraph isomorphism and largest common subgraph. Her technique was to take a suitable homomorphic image of the graphs in question, augmented with some numerical data, so that the problem is easier to solve in the images and solutions there give information about solutions in the preimages. The goal is to have an add-on which is independent of the algorithm used to solve the problem, and so can potentially speed up any algorithm.

Torsten Mütze told us two weeks ago about a construction of Hamiltonian cycles in the Kneser graphs. This time he left that to his student Namrata, and instead told us about a much more general technique. His Algorithm J works on any class of permutations which form a “zigzag language”, or indeed any class of objects which can be encoded as a zigzag language of permutations: this includes such varied objects as pattern-avoiding permutations, rectangulations, lattice congruences on the weak order, elimination trees, acyclic orientations, and more. His second algorithm, P, works on bitstrings, and again covers many related classes, but time didn’t permit a detailed description of this.
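Algorithm J itself is too intricate for a blog post, but the classical Steinhaus–Johnson–Trotter algorithm, which (as I understand it) Algorithm J generalises, gives the flavour: every permutation is obtained from its predecessor by a single adjacent transposition, a minimal “jump”. A sketch of my own:

```python
def sjt_permutations(n):
    """Generate all permutations of 1..n so that consecutive ones
    differ by a single adjacent transposition (Steinhaus-Johnson-Trotter)."""
    perm = list(range(1, n + 1))
    direction = [-1] * n  # -1: looking left, +1: looking right
    yield tuple(perm)
    while True:
        # find the largest "mobile" element: one whose neighbour
        # in its direction is smaller than itself
        mobile = -1
        for i in range(n):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < perm[i] and \
                    (mobile == -1 or perm[i] > perm[mobile]):
                mobile = i
        if mobile == -1:
            return  # no mobile element: all permutations generated
        j = mobile + direction[mobile]
        perm[mobile], perm[j] = perm[j], perm[mobile]
        direction[mobile], direction[j] = direction[j], direction[mobile]
        # reverse the direction of every element larger than the moved one
        for i in range(n):
            if perm[i] > perm[j]:
                direction[i] = -direction[i]
        yield tuple(perm)
```

Mütze’s achievement is a uniform rule of this kind that specialises correctly to all the varied classes listed above, not just to the full symmetric group.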

After dinner at the Italian Kitchen, we came back for another very interesting day. The talks of Bishal Deb and Natasha Blitvić, notionally on completely different things (Laguerre digraphs and combinatorial moment sequences), described the same 14(!)-variable generating function enumerating permutations with respect to 14 different “statistics” on permutations. Robert Brignall’s talk, on well-quasi-ordered permutation classes, overlapped with these. The expectation is that a well-quasi-ordered class (one containing no infinite antichain) will have a better-behaved enumeration function, but actually the picture is a bit more complicated than that.

Jess Enright combined two of her interests (cops-and-robbers games, and treewidth of graphs) to consider cops-and-robbers on multilayer graphs. A multilayer graph is simply one whose edge set is the union of a collection of “layers”; she took the rule that cops are confined to a single layer but robbers can move anywhere. She showed us many examples, the message being that virtually any plausible conjecture is false. (Some time ago, we had a “Discrete Mathematics and Big Data” day at St Andrews, where Simon Dobson talked about modelling London as a multilayer network: you can walk, or take the Tube. He was using real data from Oyster cards on the Tube.)

A model of random graphs where you add an edge at a time is by now “classical”, having been introduced by Erdős and Rényi in the 1950s. Dan Threlfall gave us a similar take on random permutations, building them up by composing with one inversion at a time, and computed thresholds for properties such as pattern containment.
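To make the analogy concrete, here is a toy simulation under my own guess at such a process (the talk’s exact dynamics may well differ): start from the identity, repeatedly compose with a uniformly random adjacent transposition, and record how many steps pass before a fixed pattern first appears.

```python
import random
from itertools import combinations

def contains_pattern(perm, pattern):
    """Naive check whether perm contains pattern as a classical pattern:
    some subsequence of perm has the same relative order as pattern."""
    k = len(pattern)
    pat_order = sorted(range(k), key=lambda t: pattern[t])
    for idxs in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idxs]
        if sorted(range(k), key=lambda t: vals[t]) == pat_order:
            return True
    return False

def first_hitting_time(n, pattern, rng=random):
    """Evolve from the identity by composing with uniformly random
    adjacent transpositions; return the number of steps taken before
    `pattern` first appears. (A guess at the model, for illustration.)"""
    perm = list(range(1, n + 1))
    steps = 0
    while not contains_pattern(perm, pattern):
        i = rng.randrange(n - 1)
        perm[i], perm[i + 1] = perm[i + 1], perm[i]
        steps += 1
    return steps
```

Averaging `first_hitting_time` over many runs gives an empirical handle on the kind of threshold behaviour the talk addressed, in the same spirit as hitting times for connectivity in the Erdős–Rényi process.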