This picture (courtesy of Sebi Cioabă) shows Peter Keevash with the diagram which illustrates the proof strategy for his theorem. Perhaps it will be helpful, especially to those who heard the lecture (or similar lectures elsewhere).

Thanks Sebi!


The last day of the conference was over. Here are a couple of closing remarks. We have certainly heard an amazing collection of talks over the last week!

I didn’t yet mention the other short course, given by Simeon Ball and Aart Blokhuis, on the polynomial method in finite geometry. Among other things, they showed us four different ways to get bounds for a set of points in a projective or affine space meeting every hyperplane in at least (or at most) a certain number of points: one based on polynomials over the finite field of the geometry, one over the complex numbers, one over the *p*-adic numbers, and one using the resultant. But there was a lot of detail, and I didn’t even attempt to take detailed notes. I hope the slides will be made available at some point!

The highlight of my advanced combinatorics course in St Andrews this year was the construction and characterisation of the strongly regular graphs associated with non-singular quadratic forms over the field GF(2). These featured in the talks by Alice Hui (who is using Godsil–McKay switching to find other graphs cospectral with them) and Sebi Cioabă (together with lots more in a packed programme). Peter Sin also talked about cospectral graphs, more precisely, an infinite number of pairs of graphs which (unlike Alice’s) have the same spectrum and the same Smith normal form. These are the Paley and Peisert graphs on *q* vertices, where *q* is the square of a prime congruent to 3 (mod 4). These have a common description. The subgroup of index 4 in the multiplicative group has 4 cosets with a cyclic structure; taking the first and third cosets in cyclic order gives the Paley graph, while the first and second give the Peisert graph.

Dirk Hachenberger was counting elements of a finite field which are both *primitive* (i.e. generate the multiplicative group) and *normal* (an element is normal if it and its conjugates under the Frobenius map span the field as a vector space over the base field). Remarkably, it was only proved in 1992 by Lenstra and Schoof that primitive normal elements exist, but Dirk has lower bounds for the number, and it seems that, at least for extensions of large degree of a fixed base field, almost all primitive elements are normal.


I have heard him talk on this before, but in a one-hour talk he had time to outline the strategy of the proof without going into much detail. That was not the case here, and he gave a riveting account of the many complications that had to be faced on the way to the theorem. Also, he has a much more general theorem now, but he chose to concentrate on one special case, which actually bears the same relationship to the new general theorem as Kirkman’s construction of “Steiner triple systems” does to Keevash’s earlier theorem on the existence of *t*-designs for all *t*. If that sounds like I am belittling what he did, this is absolutely not the case, as will be clear to everyone who was there; rather, it covered many of the complexities of the general proof but kept the exposition within reasonable bounds.

Here I propose to give his four-point outline of the proof, and then describe some of the complexities of the four steps. But first I will say how the new theorem is much more general than the old one.

A *t*-(*v,k*,λ) design is a collection of *k*-sets of a *v*-set which cover every *t*-set exactly λ times; in other words, a partition of the hyperedges of the λ-fold complete *t*-hypergraph on *v* points into copies of the complete *t*-hypergraph on *k* points. In particular, a Steiner triple system is a partition of the edge set of the complete graph into triangles. The new results extend this from partitioning the edges of a complete uniform hypergraph to a much wider class of hypergraphs. The notation would become nearly uncontrollable if he had presented the proof in that generality; so instead he showed us how to partition the edges of a suitable graph into triangles.

The conditions on the graph are:

- It should be “tridivisible”, that is, the number of edges should be divisible by 3, and all the vertex degrees should be even (these are the familiar necessary conditions).
- The edge density should be at least some constant δ.
- The graph should satisfy a strengthening of a property that the random graph has, which he called “typical”. Typicality takes two parameters *c* (a small positive real number) and *h* (a small positive integer), and asserts that the number of common neighbours of any set of at most *h* vertices should be between (1−*c*) and (1+*c*) times the expected number in a random graph with the same edge density. Peter showed us a proof with *h* = 16, but at the end indicated why *h* = 2 will suffice. (This is stronger than being random or pseudorandom: in those cases, only most sets of vertices are required to satisfy the condition, and some larger deviations are permitted.)

The four steps in the proof are, briefly, the following:

- First, set aside a collection of triangles, called the *template*, for use at the end.
- Next, use the *Rödl nibble* argument to cover almost all of the remaining edges by triangles. The ones left over form the *leave*.
- Put the edges of the leave into triangles using some edges from the template triangles (which have then been used twice). The edges used for this form the *spill*.
- Finally, fix the twice-covered edges by finding two sets *A* and *B* of triangles, where the spill together with the edges in *A* form the same set as the edges in *B*, and the triangles of *B* are template triangles; swap the spill and *A* for *B*.

(Added later: here is a picture (by Sebi Cioabă) illustrating the steps.)

Now just a brief note on the steps.

**Step 1:** randomly embed the set of *n* vertices of the graph into an elementary abelian 2-group of size a little larger than *n* (precisely, between two and four times *n*). Then the template consists of all the triangles *xyz* for which *x*+*y*+*z* = 0 in the abelian group.
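Here is a tiny illustration of the template construction (my own sketch, ignoring the random embedding and the actual graph *G*, and encoding the elements of the elementary abelian 2-group as integers with XOR as addition):

```python
from itertools import combinations

def template_triangles(k):
    """All triples {x, y, z} of distinct elements of the elementary
    abelian 2-group of order 2^k (integers under XOR) with x+y+z = 0."""
    return [t for t in combinations(range(2 ** k), 3)
            if t[0] ^ t[1] ^ t[2] == 0]

# In the proof only those triples whose three edges lie in G are kept;
# in this sketch every triple is available.
triangles = template_triangles(3)
```

For *k* = 3 this produces 7 triangles, namely the triples {*x*, *y*, *x*+*y*} of nonzero elements: the lines of the Fano plane.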

Using concentration results (specifically, Azuma’s inequality), one shows that the graph formed by the edges of the template triangles is itself dense and typical, and the vertex degrees are not too large.

**Step 2:** The original Rödl nibble doesn’t quite work. The version needed, due to Frankl and Rödl, Pippenger, and Spencer, gives a near-perfect matching in a hypergraph under suitable conditions. The hypergraph used has as vertices the edges of *G* not in template triangles, and as edges the sets of three edges of *G* forming a triangle. A near-perfect matching is a set of triangles covering most of the edges, as required.

**Step 3:** The edges of the leave (not covered in Step 2) are completed to triangles using edges from the template, ensuring that the pairs of edges used don’t overlap. This is done by a random greedy algorithm. If it were not for the disjointness condition, the choices would be independent, and the algorithm easy to analyse; the correct version turns out to be a fairly small perturbation of this. In this way, it is shown that the algorithm succeeds, and that no vertices of high degree are created.

Since “random greedy” is very important in Step 4, Peter covered this in considerable detail.

**Step 4:** The final fix. The basic idea here is that of an integral design (with multiplicities −1, 0, and 1, as in the work of Graver and Jurkat and of Wilson). But the context is a little different, since the spill edges must be dealt with, and so the work has to be done again from scratch.

Using typicality, it is possible to show that the “bad” edges can be embedded into subgraphs where a “switch” replaces the triangles by good ones. However, we have to be careful to get, not just triangles made of template edges, but actual template triangles! This is where the algebraic structure comes in.

Typicality, worked to the limit, shows that we can choose the configurations so that a new triangle *xyz* is embedded into an octahedron whose other vertices are *x*+*y*, *y*+*z*, and *z*+*x* (in the abelian group). Now the eight faces fall into two sets of four, the first containing *xyz*, and the second consisting of triangles like *xy*(*x*+*y*) (in the abelian group) which are indeed template triangles. This is done by taking more care in choosing the configurations mentioned in the preceding paragraph, so that each of these triangles sits inside such an octahedron. The configurations become sufficiently complicated that apparently the typicality condition is required for as many as 16 vertices. But, as Peter explained, typicality for sets of two vertices implies that the condition holds for almost all sets of 3, …, 16 vertices, and this is enough for the random greedy algorithm to get to work.

A *tour de force*!


As usual with derangements, the story begins with a classical result: the probability that a random permutation of *n* points has no fixed points is very close to 1/e.
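The inclusion-exclusion formula behind this classical result is easy to check numerically (a sketch of my own):

```python
from math import exp, factorial

def derangements(n):
    """Number of fixed-point-free permutations of n points, by
    inclusion-exclusion: n! * sum_{k=0}^{n} (-1)^k / k!."""
    return sum((-1) ** k * (factorial(n) // factorial(k))
               for k in range(n + 1))

# The proportion D_n / n! converges to 1/e very rapidly.
proportion = derangements(10) / factorial(10)
```

Already at *n* = 10 the proportion agrees with 1/e to seven decimal places.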

How many permutations fix no subset of size *k* of the domain? In other words, what is the proportion of derangements in the action of the symmetric group *S _{n}* on the set of *k*-subsets of the domain? For *n* large compared to *k*, this proportion behaves like a function *p*(*k*) of *k* alone.

But what is the function *p*(*k*)? This is the question that Ben and his collaborators Sean Eberhard and Kevin Ford have tackled. It is bounded above and below by constants times *k*^{−0.086…}(log *k*)^{−3/2}. The mysterious constant is 1−(1+log log 2)/log 2.
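The constant is easy to evaluate (a one-line check of my own):

```python
from math import log

# The exponent in the Eberhard-Ford-Green bound:
# 1 - (1 + log log 2)/log 2 = 0.086...
delta = 1 - (1 + log(log(2))) / log(2)
```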

There is a context for this. Łuczak and Pyber showed that the only primitive actions of the symmetric groups for which the proportion of derangements does not tend to zero are those on *k*-sets for fixed *k*; their bounds were improved by Diaconis, Fulman and Guralnick, who proved much else besides, including the limiting distribution in the above case.

Ben went on to describe some fascinating connections with the number of divisors of an integer; but I have no time to talk about this now.


John Bamberg is reporting on New Directions in Combinatorics on SymOmega (his report on Day 1 is here), so I will not even attempt to be comprehensive, but will just pick some plums.

Not very long ago, I reported the story of Graham Higman’s lecture to the LMS on his result that “the unknown Moore graph” on 3250 vertices could not be vertex-transitive. I had no idea what was about to happen …

Yesterday, Stefaan De Winter gave a talk on partial difference sets in abelian groups. He began with a result of Benson from 1970, giving a divisibility condition which is necessary for the existence of a certain kind of automorphism of a generalized quadrangle. He went on to mention extensions to partial geometries, partial quadrangles, and who knows what, before describing his own work, which applied this technique to partial difference sets in abelian groups, leading to the nonexistence of all but two parameter sets on a list of “small” open cases. (The two remaining would each live in a group of order 216.)

But I am afraid I did not follow too closely, since I was sitting there slightly stunned. Benson’s formula included as a parameter the number *a*(*g*) of points *p* of the geometry for which *p* and its image under *g* are collinear.

Just such a formula, using just such a parameter, was at the heart of Higman’s proof, and was the reason why he could get further than Michael Aschbacher, who had proved that this unknown Moore graph couldn’t have a rank 3 group. (Careful analysis shows that there are two possible structures for an involution acting on such a graph; one was eliminated by Higman’s condition and the other is an odd permutation. The result follows easily from that.)

But the result is a special case of something more general. My first thought was, “Didn’t Norman Biggs prove that?” It may be that he did, but I can’t at the moment find a reference. However, I did find where to look for the general result. It, and Higman’s application, are on pages 89–91 of my book on *Permutation Groups*. The context is *association schemes*; here is a brief summary.

An *association scheme* is a collection of relations *R*_{1},…,*R _{r}* on an *n*-set *X*, satisfying the following conditions:

- the relations partition *X*×*X*;
- equality is one of the relations, say *R*_{1};
- each relation is symmetric;
- the span (over the real numbers) of the relation matrices *A*_{1},…,*A _{r}* is closed under multiplication.

It follows that the matrices in the fourth condition commute. Indeed, the second and third conditions can be weakened to say that the set of matrices is commutative and closed under transposition, with the complex numbers used in the fourth condition; this is a *commutative coherent configuration*, and everything below works equally well for this wider class.

The matrices are simultaneously diagonalisable, and so have *r* pairwise orthogonal common eigenspaces. Let *E*_{1},…,*E _{r}* be the orthogonal projections onto these eigenspaces. Then the *E _{j}* lie in the span of the relation matrices, and form another basis for it.

Now an automorphism of the association scheme (in the strong sense, fixing all the relations) leaves the eigenspaces invariant, and so induces a linear map on each eigenspace. Thus the permutation representation of the automorphism group is decomposed into representations on these eigenspaces. It is easy to compute the character of the *j*th representation: its value on an automorphism *g* is a linear combination of the numbers *a _{i}*(*g*), where *a _{i}*(*g*) is the number of points *x* such that *x* and its image under *g* are in the relation *R _{i}*.

Once we have a formula for the character, various conditions can be deduced; these can all be regarded as necessary conditions for the existence of an association scheme with an automorphism of the appropriate type. For example,

- Any character value is an algebraic integer (indeed, lies in a cyclotomic field), and so if rational it is an integer. Applying this to the identity automorphism gives the usual “integrality conditions” for the existence of an association scheme.
- The inner products of characters, for example of this character with the trivial character, or with itself, must be non-negative integers.
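In the simplest case, that of a strongly regular graph (an association scheme with three relations, counting equality), the character values on the identity are the eigenvalue multiplicities, and the first condition says that the standard parameter formula must return integers. A sketch of my own, with the Petersen graph as the example:

```python
from math import isqrt

def srg_multiplicities(n, k, lam, mu):
    """Eigenvalue multiplicities of a putative strongly regular graph
    with parameters (n, k, lam, mu); their integrality is the classical
    necessary 'integrality condition'."""
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    s = isqrt(disc)
    assert s * s == disc  # the conference-graph case is ignored in this sketch
    f = ((n - 1) - (2 * k + (n - 1) * (lam - mu)) / s) / 2
    g = ((n - 1) + (2 * k + (n - 1) * (lam - mu)) / s) / 2
    return f, g

# Petersen graph: parameters (10, 3, 0, 1); multiplicities 5 and 4.
f, g = srg_multiplicities(10, 3, 0, 1)
```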

So Stefaan has added to the stock of applications of this technique. Surely there must be many others?

A final question: does anyone have more information about the history of this technique? Did Benson invent it? (I have little doubt that Higman discovered it independently, but I don’t know the date; I think his lecture was in the late 1970s or early 1980s.)


Then on Saturday I flew one-third of the way round the world, to the meeting on New directions in combinatorics at the Institute for Mathematical Sciences of the National University of Singapore. The meeting started at 9am yesterday; by 9.40, I had seen the entire proof (apart from a messy calculation I will mention below), and by 9.50 the outline of a slightly different proof, all thanks to a beautiful talk by the first speaker, Ben Green.

A *cap* in a finite geometry is a set of points with the property that no three are collinear; thus it is simply a higher-dimensional version of an *arc*, and the name is intended to suggest this. Caps have been studied for some time in finite geometry, and many beautiful examples exist, including Hill’s 56-point example in PG(5,3) which forms half of the elliptic quadric. Not so much attention was paid to asymptotics, though.

Consider caps in AG(*n*,3), the affine space over GF(3) (the vector space without a distinguished origin). It has the lovely property that a line is just a set of three vectors which add up to 0. (In general a line consists of all points of the form *a*+*tb*, where *a* and *b* are fixed and *t* runs through the field; the fact that 1+1+1 = 0+1+2 = 0 shows that this is equivalent to the sum-0 property.)

A curious point of terminology. Finite geometers always called these objects caps. But people in additive combinatorics seem to have re-named them “capsets”. I shall be conservative and stick to my heritage.

Clearly a cap cannot be larger than 3^{n}. The set of all vectors with coordinates 0 and 1 only is a cap of size 2^{n}. So it is not unreasonable to guess that the largest cap has size about *c ^{n}* for some constant *c* < 3.
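The zero-one construction is easy to verify by brute force for small *n*, using the sum-0 description of lines (a sketch of my own):

```python
from itertools import combinations, product

def is_cap(points):
    """True if no three distinct points are collinear in AG(n,3),
    i.e. no three of them sum to the zero vector mod 3."""
    return all(any((a + b + c) % 3 != 0 for a, b, c in zip(x, y, z))
               for x, y, z in combinations(points, 3))

# All zero-one vectors: three entries from {0,1} sum to 0 mod 3 only
# when all three agree, so distinct such vectors are never collinear.
cube = list(product([0, 1], repeat=4))
```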

Let *F* denote the field GF(3). Every function from *F ^{n}* to *F* is given by a polynomial in *n* variables; and since *t*^{3} = *t* for every *t* in *F*, the polynomial can be taken to be *cubefree*, that is, of degree at most 2 in each variable.

The precise statement of the theorem is that the size of a cap is bounded by 3*D*, where *D* is the dimension of the space of cubefree polynomials of degree 2*n*/3. A long and messy calculation is required to show that *D* is about 2.756^{n}, but hopefully you now believe that it is at least exponentially smaller than 3^{n}.
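For small *n* one can compute *D* exactly, counting exponent vectors in {0,1,2}^{n} with sum at most ⌊2*n*/3⌋ by a little dynamic programming (a sketch of my own; the asymptotics still need the messy calculation):

```python
def cubefree_dim(n):
    """Dimension of the space of cubefree polynomials in n variables
    over GF(3) of total degree at most 2n/3: the number of exponent
    vectors in {0,1,2}^n with coordinate sum at most floor(2n/3)."""
    bound = (2 * n) // 3
    counts = [1] + [0] * bound      # counts[s] = #vectors with sum s so far
    for _ in range(n):
        new = [0] * (bound + 1)
        for s, c in enumerate(counts):
            for e in (0, 1, 2):
                if s + e <= bound:
                    new[s + e] += c
        counts = new
    return sum(counts)
```

Even at *n* = 30 the count is already far smaller than 3^{30}.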

The proof involves two results, of which Ben gave complete proofs.

The first says that a set *A* in the affine space which has size larger than 3*D* contains a subset of size at least |*A*|−*D* which is the support of a polynomial of degree at most 4*n*/3. The proof is just linear algebra.

The second, the Croot–Lev–Pach principle, is also just linear algebra. It says that if *p* is a cubefree polynomial of degree *d*, *B* and *C* two subsets, and *M* the |*B*|×|*C*| matrix with (*b,c*) entry *p*(*b*+*c*), then the rank of *M* is at most twice the dimension of the space of cubefree polynomials of degree at most *d*/2. It really is a great idea, and depends on the fact that row rank is equal to column rank!

The relevance of this to caps is obvious. If we have a large cap, the first result gives us a still large cap which supports a polynomial; the values of this polynomial on *b*, *c*, and *b*+*c* cannot all be non-zero.

I won’t give more detail. You should have been there!

As a pointer to the future, Ben mentioned at the end of this exposition a result which he only learned on Sunday when he saw it on Facebook. Time for us old dinosaurs to trundle off to bed …


But here I really do mean what I wrote.

The function gnu(*n*) (for **G**roup **NU**mber) is the number of groups of order *n* (up to isomorphism). It is an extremely irregular function. Its values are known for all orders up to 2047, and of all the groups of these orders, more than 99% have order 1024. That number is about 5×10^{10}, which is itself dwarfed by the number of groups of the next order 2048, about 1.77×10^{15}. However, it is an important function whose precise values are very much needed, and computing them provides a challenge to existing algorithms and computing facilities. See this article for comments and elaborations.

Now Alexander Konovalov has set up a crowdsourcing project, the “Gnu Project”, to calculate further values of the function, filling in some gaps in the presently known values. He has made available a GAP package to enable anyone to contribute. (The GAP website is here, in case you need to download this free computational system for algebra and discrete mathematics.) According to Alexander, a forthcoming release of GAP will be optimized so as to steamroll over problems of this type.

Instructions for getting and using the package and contributing to the database are provided. Take a look, and lend a hand!


The Queen Mary day was opened by Béla Bollobás. His topic was random geometric graphs, but he started off with a nice summary of the facts about percolation on the square lattice in the plane. It turns out that the critical probability for bond percolation on the square lattice with 1-independent edge probabilities (that is, sets of edges which are pairwise vertex-disjoint are independent; the critical probability is the value of edge-probability at which the probability of an infinite component switches from 0 to 1) is not greater than 0.8639. This played a role in what came later.

The main result concerned the “*k* nearest” model: the vertices are chosen from a Poisson process with intensity 1 on the unit square (or the plane), and each vertex is joined to its *k* nearest neighbours. Various thresholds for interesting properties appeared. One of the main results asserts that the threshold for percolation in the plane is at most *k* = 11. The novelty is that this is a mathematical theorem, not a physicist’s “theorem”; but it is only established with “near-certainty” (at least 1−10^{−150}).

The reason is that the proof involves evaluating a high-dimensional integral to see whether the value is greater than 0.8639. Of course the integral cannot be evaluated exactly, so Monte Carlo methods are used. They indicate that the required inequality holds, but of course there is some wiggle room for scepticism.

Naturally, this provoked a lot of discussion!

Two things delighted me. One was that four of the six talks were board talks. The other was that three of them involved symmetry, and used at least something about groups. One of these was by Yifei Zhao, who considered Cayley graphs, and showed that two conditions for quasirandomness (the discrepancy condition and the eigenvalue condition) are equivalent for Cayley graphs for any finite group. This had previously been established for abelian groups by Kohayakawa, Rödl and Schacht in 2003 using harmonic analysis, so the new proof uses “non-abelian harmonic analysis”. The complete independence of the group was striking. Another talk using symmetry was by David Conlon, who defined two operations on functions on a graph; roughly speaking, if either of these turns out to be a norm, then the graph satisfies Sidorenko’s conjecture. David (with Joonkyung Lee) has added greatly to the very small stock of known examples. He showed us why the bipartite graph derived from two parabolic subgroups of a finite reflection group always satisfies the conditions.

The third talk invoking symmetry, one which interested me most closely, was by Imre Leader, on combinatorial games. [There are some things wrong with this write-up, which I have tried to correct in the comments below.] Such a game is defined by a set (usually finite) called the “board”, with a collection of subsets called “lines”; players alternate in choosing an element of the board, and in normal play the player who first gets a line is the winner. Imre showed us the classical “strategy-stealing” argument showing that the game cannot be a second-player win. (If the second player had a winning strategy, the first player can make a first move anywhere and then follow the winning strategy: the extra element doesn’t hurt, and if the strategy says to choose it, then the player chooses another element arbitrarily.)
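The strategy-stealing argument is non-constructive, but on a small board its conclusion can be confirmed by exhaustive search. Here is a minimal minimax sketch of my own, taking ordinary noughts-and-crosses as the (illustrative) board; the value from the empty position is a draw, so in particular the game is not a second-player win:

```python
from functools import lru_cache

# Rows, columns and diagonals of the 3x3 board, cells numbered 0..8.
LINES = [frozenset(l) for l in
         ([0, 1, 2], [3, 4, 5], [6, 7, 8],
          [0, 3, 6], [1, 4, 7], [2, 5, 8],
          [0, 4, 8], [2, 4, 6])]
CELLS = frozenset(range(9))

@lru_cache(maxsize=None)
def value(mine, theirs):
    """Normal-play value for the player to move: 1 win, 0 draw, -1 loss.
    Assumes neither side has yet completed a line."""
    free = CELLS - mine - theirs
    if not free:
        return 0
    best = -1
    for c in free:
        taken = mine | {c}
        if any(line <= taken for line in LINES):
            return 1            # this move completes a line immediately
        best = max(best, -value(theirs, taken))
        if best == 1:
            break
    return best
```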

Imre was concerned with the misère version, where the first player to complete a line loses. You might expect that the first player cannot win this game, but this is not so. Take any board on which the second player wins, and add an extra cell on no lines. The first player takes this first and then steals the second player’s strategy.

To avoid the obvious asymmetry, follow Isbell (whose work I have described here in the past) and assume that the game has a group of automorphisms acting transitively on the positions of the board. Even with this condition, it is still possible for the first player to win; but if the automorphism group contains a fixed-point-free involution, there is a second player win (the second player uses the involution to “mirror” the first player’s moves). This brings us to Isbell’s territory, as I explain in a minute.

First, there is a game on a board of odd composite size: divide the board into *a* bins of size *b*, and let the lines be the sets containing a majority of cells in a majority of bins. For example, if *n* = 9, there are 3 bins of size 3, and the lines are all 4-sets containing two cells from each of two bins. Finding a winning strategy for the first player is an interesting exercise.
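For *n* = 9 the lines are easy to enumerate (a sketch of my own): choose two of the three bins, then two of the three cells in each, giving 3×3×3 = 27 lines.

```python
from itertools import combinations

# Three bins of size 3; a (minimal) line takes a majority of cells
# in a majority of bins: 2 cells in each of 2 of the 3 bins.
bins = [range(0, 3), range(3, 6), range(6, 9)]

lines = [set(pair1) | set(pair2)
         for bin1, bin2 in combinations(bins, 2)
         for pair1 in combinations(bin1, 2)
         for pair2 in combinations(bin2, 2)]
```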

Odd primes are harder, and it is not known whether there are such games with first-player win on boards of prime size greater than 13.

The situation is very different for boards of even size 2*m*. An *Isbell family* is a family of sets of size *m*, invariant under a transitive group, and containing one of each complementary pair of sets. An Isbell family exists if and only if there is a transitive group containing a fixed-point-free involution. Isbell investigated these in a game-theoretic context in the 1950s and 1960s, and conjectured that, if the 2-part of *n* is large enough relative to the odd part, then any transitive group of degree *n* contains a fixed-point-free involution; so no Isbell family can exist. Fifty years on, the conjecture is still open. Nevertheless, Imre and his team (Robert Johnson and Mark Walters) have been able to prove that a first-player-win game exists for all even numbers except powers of 2.

The other two talks were just as remarkable. Karim Adiprasito talked about the proof of Rota’s conjecture for arbitrary matroids (the coefficients of the characteristic polynomial are log-concave). The proof is phrased in the language of algebraic geometry, but Karim assured us that it is really just combinatorics dressed up. One feature was a technique for deforming one matroid into another (but the objects through which we pass on the way are not themselves matroids). Andrew Granville told us about the sieve method in analytic number theory, and about analogies between the decompositions of natural numbers into primes, polynomials over finite fields into irreducible factors, and permutations into cycles.

The day had been wet, and my lunchtime taken up with a committee meeting, so that I was a bit late for Imre Leader’s talk. But the next day at LSE I was off duty.

Two talks stood out for me. The first was by Benny Sudakov, on equiangular lines (a topic which probably got me my postdoc at Merton College – the committee were suitably amazed that someone could talk familiarly about 23 dimensions), and in which the driving force was my good friend Jaap Seidel. How many lines can there be through the origin in *d*-dimensional Euclidean space, any two making the same angle? There is an upper bound *d*(*d*+1)/2, which is attained for *d* = 2, 3, 7, and 23, and as far as we know for no other values (though there is a quadratic lower bound in general). Benny had no more to add to this. But Seidel and others also looked at the case where the angle between the lines is fixed in advance. Here they have improved the old results considerably: they showed an upper bound of 2*d*−2 if the angle is the arc cosine of 1/3, and 1.93*d* in any other case. They suspect that much more is true, and the bound is (1+o(1))*d* unless the angle is the arc cosine of the reciprocal of an odd number.
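For *d* = 3 the bound *d*(*d*+1)/2 = 6 is attained by the six diagonals of a regular icosahedron, something one can check numerically (a sketch of my own; the common angle is arccos(1/√5)):

```python
from itertools import combinations
from math import sqrt

# Spanning vectors of the six diagonals of a regular icosahedron.
phi = (1 + sqrt(5)) / 2
diagonals = [(0, 1, phi), (0, 1, -phi), (1, phi, 0),
             (1, -phi, 0), (phi, 0, 1), (-phi, 0, 1)]
norm_sq = 1 + phi ** 2   # all six vectors have the same squared length

# |cos angle| between each pair of lines, rounded to absorb float error.
cosines = {round(abs(sum(a * b for a, b in zip(u, v))) / norm_sq, 9)
           for u, v in combinations(diagonals, 2)}
```

All fifteen pairs give the same value, 1/√5 ≈ 0.447.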

There was much more too, involving sets of lines with more than a single allowed angle, or sets of unit vectors with all inner products either equal to a positive number α or at most a negative value β. Strangely, allowing all these negative values doesn’t change things very much.

The proof was a lovely blend of linear algebra, geometry, and graph theory.

The other talk I really liked was by Nati Linial on “higher-dimensional permutations”. His notion was different from the one I discussed at last year’s Permutation Patterns conference, but it is one I have given some thought to. A *d*-dimensional permutation is a (*d*+1)-dimensional array of 0s and 1s, *n* along each side, such that each 1-dimensional “line” (in any coordinate direction) contains a single 1. For *d* = 1, this is just a permutation matrix; for *d* = 2, it is equivalent to a Latin square. In these two cases, estimates for the number are known, asymptotic in the logarithm; Nati and his team have given upper bounds of the right form in general, but the lower bounds are unknown: for Latin squares these depend on the van der Waerden permanent conjecture, which doesn’t work in higher dimensions.
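The correspondence with Latin squares is easy to make concrete (a sketch of my own, taking the cyclic square (*i*+*j*) mod *n* as the example):

```python
from itertools import product

def is_d_permutation(array, n, d):
    """Check that a (d+1)-dimensional 0/1 array of side n has exactly
    one 1 on every axis-parallel line."""
    for axis in range(d + 1):
        for fixed in product(range(n), repeat=d):
            total = 0
            for t in range(n):
                index = fixed[:axis] + (t,) + fixed[axis:]
                entry = array
                for i in index:      # walk down the nested lists
                    entry = entry[i]
                total += entry
            if total != 1:
                return False
    return True

# Encode the Latin square L[i][j] = (i+j) mod n as M[i][j][k] = 1
# iff L[i][j] = k: a 2-dimensional permutation in this sense.
n = 4
M = [[[1 if (i + j) % n == k else 0 for k in range(n)]
      for j in range(n)] for i in range(n)]
```

For *d* = 1 the same check recognises ordinary permutation matrices.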

Nati is interested in many aspects of the theory of these things, but there was no time to tell us everything, so he concentrated on enumeration and on discrepancy and expander properties. These involve looking at the size of “empty boxes”, in the zero-one formulation above. Cayley tables of groups are not good for this purpose, but (as with many other problems) it is conjectured that almost all Latin squares are good.

Other talks were by Daniela Kühn, packing low-degree graphs; Monique Laurent, a very nice talk on connections between semidefinite programming and topological graph invariants such as those of Colin de Verdière; James Maynard, who told us about his proof that infinitely many primes omit the digit 7 (or any other digit you like); and Alan Frieze, on on-line versions of purchasing edges of the complete graph (with random prices) so as to build some type of structure (a tree, a triangle, a Hamiltonian cycle, etc.)

At lunchtime, the weather was beautiful, and I sat in Lincoln’s Inn Fields eating my lunch and looking at (and smelling) a lilac tree in full blossom. After the last talk, we went up to the top of the New Academic Building at LSE, with a fine view of the City, with the ugly clutter of tall buildings which is the legacy of the last two Mayors, Ken and Boris.


I will stick to the referendum; enough has been said about the divisive campaign run by the losing candidate for Mayor of London, and the arguments used in the Scottish parliamentary elections were also rather short on logic.

The “in” campaign have said hardly anything worth noticing. But the “out” campaign have distinguished themselves by their poor taste. The morning after the Brussels bombings, they were using the incident, with scant regard for facts, as a reason for Britain to leave the EU. Then, when Barack Obama remarked that if Britain left, it would be at the back of the queue for negotiating a trade deal with the US, he had to endure what could be regarded as racist abuse from the (now ex-) mayor of London.

But they missed something very important in their childish reaction. The trade deal he was referring to was the notorious TTIP, which would reportedly give multinational corporations power over elected governments, enforceable in offshore courts, and would mandate the selling-off of public services in Europe to American corporations. The negotiations are taking place in strict secrecy, so that nobody knows the full horror of the deal. I do believe we might be better off without it.

Even then, it is not completely clear-cut. If it were simply that Britain were opposed to Europe going blindly into this deal, things would be clear. But, of course, there is substantial opposition to TTIP throughout Europe, and it is possible to argue that the best course for Britain is to stay in Europe and mobilise this opposition. I fear that is not what any of our leaders want …


It is nearly the time of year for this wonderful festival in Hay-on-Wye.

Two years ago I gave a short course on The Infinite Quest. This year I am doing another, on Secret codes, at 13:30 on Friday 3 June.

Even if this doesn’t excite you, you should really consider going to this “festival of philosophy and music”. Run your eyes over the programme; you will probably find more than a few things you don’t want to miss. And hurry – they are selling out!

It is a busy time for me. On 1 June I am speaking at a meeting of the Groups & their Applications “triangle” in Bristol. Then I have, perhaps foolhardily, decided to walk from Abergavenny to Hay-on-Wye, a beautiful walk along the ridge between England and Wales, which I have done a couple of times before but not for several years. I plan to set off from Abergavenny at 10am and hopefully arrive in Hay not too late!
