A brief trip to London for the two colloquia and a meeting of the committee, marred for me by the fact that I had come down with quite a bad cold (which, scarily, involved my losing my balance from time to time).

What follows are random musings on what we heard in a very enjoyable two days.

Three speakers mentioned Galton–Watson branching processes in similar contexts: **Andrew Treglown**, **Oliver Riordan** and **Guillem Perarnau**. I got two insights from this, which I had probably not realised clearly before. First, the Erdős–Rényi random graph is the same thing as percolation on the complete graph. (For percolation on a graph, we allow edges to be open independently with probability *p* and ask questions about whether the resulting graph has a “giant component” and similar things. Erdős and Rényi simply included each 2-subset of the vertex set as an edge independently with probability *p*, and asked similar questions.)

Secondly, for percolation in any regular graph of valency *d*, to explore the component containing *v* you look at the neighbours of *v*, their neighbours, and so on until you have seen everything. The number of neighbours of any vertex (apart from the vertex on the path from *v* by which we reached it) is a binomial random variable Bin(*d*−1,*p*), and the whole process is “stochastically dominated” by the Galton–Watson process with this rule for descendants. (In an infinite tree, they would be identical; but the possibility of short cycles means that we will not see more vertices in the percolation case than in the Galton–Watson case.)
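The two processes can be sketched in a few lines of code. This is a minimal simulation of my own (the function names and the choice of the 3-cube as a small 3-regular example are mine, not anything from the talks): a breadth-first exploration of a percolation component, and the dominating Galton–Watson process in which the root has Bin(*d*,*p*) children and every later individual has Bin(*d*−1,*p*).

```python
import random
from collections import deque

def percolation_component(adj, v, p, rng):
    """Size of the component of v when each edge is open independently
    with probability p (each edge's state is decided once, on first sight)."""
    open_edge = {}                      # frozenset({u, w}) -> open or not
    seen, queue = {v}, deque([v])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            e = frozenset((u, w))
            if e not in open_edge:
                open_edge[e] = rng.random() < p
            if open_edge[e] and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen)

def galton_watson_total(d, p, rng, cap=10_000):
    """Total progeny of the dominating branching process: the root has
    Bin(d, p) children, every later individual has Bin(d-1, p)."""
    total, current, first = 1, 1, True
    while current and total < cap:
        nxt = 0
        for _ in range(current):
            trials = d if first else d - 1
            nxt += sum(rng.random() < p for _ in range(trials))
        first = False
        total += nxt
        current = nxt
    return total

# The 3-dimensional cube graph: 3-regular on 8 vertices.
cube = {v: [v ^ (1 << i) for i in range(3)] for v in range(8)}
rng = random.Random(0)
sizes = [percolation_component(cube, 0, 0.5, rng) for _ in range(1000)]
gw = [galton_watson_total(3, 0.5, rng) for _ in range(1000)]
```

Averaging `sizes` against `gw` over many runs illustrates the domination in the text: cycles in the cube can only merge exploration branches, so the percolation component is never larger than the corresponding branching-process tree.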

The three speakers tackled different problems. Perarnau was looking at the critical window in the percolation problem for regular graphs. Treglown was examining resilience of random regular graphs: for example, how many edges can you delete before losing the typical value of a Ramsey number in the graph? (As he said, there are two processes going on here: first choose a random graph, then delete an arbitrary subset of its edges.) Riordan was considering the Achlioptas process, a variant on Erdős–Rényi where, at each step, you choose two random edges and keep one, the choice depending on the sizes of the components they connect.

**Kitty Meeks** talked about complexity problems for sum-free sets. How do such problems arise? She began with a theorem of Erdős, according to which any set of *n* natural numbers contains a sum-free subset of size at least *n*/3. One can then ask whether there is a sum-free subset of size *n*/3+*k*, or try to count such subsets. This kind of problem lends itself very well to analysis via *parameterised complexity*.
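To make the objects concrete, here is a small sketch of my own (function names are mine): a sum-free test, and a brute-force search for the largest sum-free subset, which on tiny examples lets one check the *n*/3 bound by hand. The brute force is of course exponential, which is exactly why the parameterised viewpoint is interesting.

```python
from itertools import combinations

def is_sum_free(s):
    """A set is sum-free if no a, b, c in it satisfy a + b = c
    (a and b need not be distinct)."""
    s = set(s)
    return all(a + b not in s for a in s for b in s)

def max_sum_free_subset(nums):
    """Largest sum-free subset by brute force (small inputs only)."""
    for k in range(len(nums), 0, -1):
        for sub in combinations(nums, k):
            if is_sum_free(sub):
                return set(sub)
    return set()
```

For instance, in {1, …, 6} both the odd numbers {1, 3, 5} and the top half {4, 5, 6} are sum-free, comfortably above the guaranteed 6/3 = 2.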

**Sophie Huczynska** talked about the “homomorphic image” order on relational structures such as graphs: in this order, *G* is smaller than *H* if there is a homomorphism from *G* to *H* mapping the edge set of *G* onto the edge set of *H*. (She began by describing a variety of graph orders, such as subgraph order, induced subgraph order, and minor order, and showing how all can be expressed in terms of homomorphisms.) Typical questions are whether a class of graphs or other structures is a *partial well-order* with respect to the homomorphic image order (that is, whether it contains no infinite antichains), and describing the antichains if possible.
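The defining condition is easy to check by brute force on tiny graphs. This sketch is mine (the function name is an assumption, and it simply tests the condition as stated above): search over all vertex maps for a homomorphism whose edge images cover the target's edge set.

```python
from itertools import product

def maps_onto(G_edges, H_edges, nG, nH):
    """Is there a homomorphism from G to H (vertices 0..n-1 in each)
    mapping the edge set of G onto the edge set of H?
    Brute force over all vertex maps -- feasible only for tiny graphs."""
    H = {frozenset(e) for e in H_edges}
    for f in product(range(nH), repeat=nG):
        images = {frozenset((f[u], f[v])) for u, v in G_edges}
        # every G-edge must map to a genuine H-edge (no collapsed ends),
        # and together the images must cover all of E(H)
        if all(len(e) == 2 and e in H for e in images) and images == H:
            return True
    return False
```

For example, folding the path 0–1–2 onto a single edge (send 0 and 2 to the same endpoint) satisfies the condition, while a single edge cannot cover the three edges of a triangle.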

A special mention for **Ewan Davies**, who stepped in at very short notice when Agelos Georgakopoulos was unable to speak. (We found out the reason later; his wife had produced their first son that very day.) Ewan gave a fascinating talk on getting accurate values or estimates for the coefficients of the matching polynomial of a graph (the polynomial in which the coefficient of *x ^{k}* is the number of matchings with *k* edges in the graph), using ideas from statistical mechanics. (The matching polynomial is the partition function for the monomer-dimer model on the graph.) There was even some differentiation! A very assured and confident talk which raised lots of questions from the audience.
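For small graphs the coefficients themselves are easy to compute exactly; this little recursion is my own illustration (nothing to do with the statistical-mechanics estimates of the talk, which are interesting precisely because brute-force counting does not scale). It uses the standard deletion step: either skip the first edge, or use it and discard every edge meeting its endpoints.

```python
def matching_counts(edges):
    """Return [m_0, m_1, ...] where m_k is the number of matchings
    with k edges, i.e. the coefficients of the matching polynomial."""
    if not edges:
        return [1]                      # only the empty matching
    (u, v), rest = edges[0], edges[1:]
    skip = matching_counts(rest)        # matchings avoiding edge (u, v)
    keep = matching_counts(             # matchings using (u, v): drop
        [e for e in rest if u not in e and v not in e])  # incident edges
    out = [0] * max(len(skip), len(keep) + 1)
    for k, c in enumerate(skip):
        out[k] += c
    for k, c in enumerate(keep):
        out[k + 1] += c                 # the kept edge adds one to k
    return out
```

On the triangle this gives [1, 3] (the empty matching and three single edges, no 2-matching), and on the path with four vertices [1, 3, 1].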

I am afraid I didn’t feel up to having drinks afterwards, so just sneaked off home.

Thanks to Jan van den Heuvel for corrections.