Right to Work checks now need to be carried out for any visitor to the University, in advance of their visit, regardless of whether we are paying them fees or expenses or they are self-funded.

Taken literally (and how else can you take government-imposed regulations?) the regulations mean that anyone wishing to attend (for example) a public lecture at the university must send in their passport in advance (the original, not a copy!) so that a Right to Work check can be carried out and the appropriate paperwork filled in.

It is absolutely unclear now to what extent these regulations will be enforced. What is already clear is that universities are likely to place different interpretations on the rules.

I suspect that in practice they will be watered down in two ways:

- I do not believe that anyone is going to send their passport along in advance of their attendance at a lecture or seminar. Presumably, after a while, it will be admitted that it is OK if they bring it along and we take a copy.
- The sanction which will be imposed on us for not filling out the forms correctly is that fees and expenses will not be paid by the university. They seem to have no sanctions in the case of someone who is not being paid. So I suppose such people will slip under the net.

The people who drafted these regulations appear to have no conception of how much damage their full implementation would do to teaching and learning, the core business of universities. But, as usual, we will all just bend over and take our punishment.

If you happen to be passing by and want to drop in to see me, and possibly talk about mathematics, rest assured that I will not demand your passport. If necessary, we will leave the University premises and sit in a coffee shop (or by the sea if the weather is nice), or walk along the West Sands. Our discussion will perhaps be impaired by the absence of a blackboard, but we will have to put up with that. Moreover, if you need some travel money, and are not too embarrassed to ask for it, I will pay you out of my own pocket. I cannot give an assurance that I will never fill in their stupid form, but as far as I can I will avoid it.

And finally, of course, I am not encouraging anyone to follow my practice. That would presumably count as incitement to commit a crime …


The prize is awarded for “original and innovative work in the history of mathematics, which may be in any medium”. In this case it was for the MacTutor History of Mathematics archive.

It is worth saying that this archive was begun by John and Edmund in the early days of the web, and it is still a two-person affair, predating Wikipedia and other sources of knowledge and wisdom. It has had enormous impact on, for example, the teaching of mathematics, both in St Andrews and all over the world: it is much more common now for lecturers to tell their students something about the people who created the mathematics they are learning.

Anyway, the prize carries with it a lectureship, and the inaugural Hirst lecture was delivered to the LMS at yesterday’s meeting by Edmund.

As is LMS custom, there were two lectures. The first, by Mark McCartney, was a lively account of Edmund Whittaker. I owned a copy of Whittaker and Watson for many years; I think I gave it away in my “booksale” when I left Queen Mary three years ago. A beautiful and entertaining lecture about someone who did a lot of mathematics and physics in the first half of his career, including setting up the first “mathematics laboratory” in the University of Edinburgh in 1913, and then turned to more general and controversial topics. The subtitle of the lecture was “Laplace’s equation, silver forks, and *Vogue*”, and indeed all of these featured in Whittaker’s life. At a certain point Mark showed a photograph of the 1955 St Andrews colloquium; among the people in the photograph was Bernhard Neumann. Peter, who was in the audience, claimed that his mother was there as well (though nobody could spot her), and that he and his sisters had been exploring Fife.

Edmund gave a beautiful lecture. The title was “History of Mathematics: Some personal thoughts”. His focus was on the fact that historians cannot provide us with truth: there are some questions we are unable to settle. Among those he mentioned were these.

- Was Lagrange French, or (as the Italian Encyclopaedia claims) Italian? He was born in Turin (which was not in Italy at the time since Italy did not exist), though he spent a lot of time in Berlin and Paris. It seems to me that this is the edge of a slippery slope. Nationality is bedevilled both by the fact that people move away from their birthplace and by changes in national boundaries and names of countries.
- Did Euclid exist? Although there are a number of pictures of him, they were all made long after his time. Given the many styles in the *Elements*, it is possible that he was a kind of third-century-BCE Bourbaki. The counterargument is that the members of Bourbaki are all well-known in their own right, while none of Team Euclid are even known by name.
- Did James Gregory actually construct a meridian line in St Andrews? Again all the evidence comes from long after the event, although it is known that he was in communication with people interested in this problem (such as Cassini).
- What was Nathan Jacobson’s birthday? Official documents give it as 8 September, and he celebrated it on that day, but he claimed that it had been wrongly converted from the Jewish calendar and he was really born on 5 October.
- What was Newton’s birthday? Famously he was born on Christmas day, but because (unlike most of Europe) Britain had not at the time accepted the Gregorian calendar, it was 4 January in most of the continent. There is also the issue of the year of his birth, since at the time the year in Britain began in March.
- Did Al-Khwarizmi study Euclid? He worked at the House of Wisdom, where one of his colleagues was engaged on translating Euclid into Arabic, and yet his own geometry has an algebraic rather than axiomatic flavour. Edmund claimed that he might well have known Euclid’s work but decided that he didn’t need it for his own.

We were also told about Charles Whish, an employee of the Honourable East India Company (Edmund was scathing about the adjective), who worked in Madras and found (and published) evidence that Indian mathematicians knew a very accurate value for π, derived from Madhava’s power series for the inverse tangent. He was ridiculed by his superiors, who regarded the suggestion as “too ridiculous to deserve attention”, and his arguments were only taken up by Indian historians of mathematics after a 100-year gap. I said something about this here.

Another mathematician discussed was Omar Khayyam, who measured the length of the year to extraordinary accuracy. As Edmund said, he was better known as a poet than a mathematician (here Edmund quoted a quatrain from Fitzgerald’s “translation” of the Rubaiyat). But my understanding is that there is no more evidence that Khayyam wrote poetry than that Euclid wrote a geometry textbook!


Products of topological spaces were discussed without any reference to the Axiom of Choice. I take the view that the Axiom of Choice is part of the general culture of mathematics: every student should be exposed to it, and should think about it.

As Bertrand Russell noted, the Axiom of Choice is precisely the statement that the Cartesian product of any family of non-empty sets is non-empty: this is why he called it the “multiplicative axiom”. (You no doubt recall his example: if a drawer contains infinitely many pairs of socks, how do you choose one sock from each pair?) So without it, the theory of products of topological spaces does rather fall apart. Of course it is true that most of the spaces which are important in topology can be seen to be non-empty without using AC. For example, finite products (including Euclidean spaces), or powers of a single set (the Cantor or Baire spaces).

Tychonoff’s Theorem states that a product of compact topological spaces is compact. This is not an easy theorem, and requires the Axiom of Choice in its proof (indeed, I believe it is equivalent to the Axiom of Choice). I saw a proof when I was a student, but I couldn’t find my old notes or textbook, and so I had to resort to Google. The final strategy I adopted in the lectures was as follows.

- I began with two examples. The closed unit square is compact. This can be proved by observing that a *space-filling curve* is a continuous surjective function from the unit interval to the unit square, and we had already proved that a continuous image of a compact space is compact. Also, the Cantor space, the product of countably many copies of {0,1} (with the discrete topology), is compact. This can be shown directly, by using the “middle third” construction of the Cantor space, which exhibits it as the complement of a collection of open subintervals of the closed unit interval, and hence closed; we had already proved that a closed subspace of a compact space is compact, and that the closed unit interval is compact (the *Heine–Borel theorem*, our motivating example for compactness).
- By definition, a space is compact if and only if every open cover has a finite subcover. A straightforward argument shows that a space with a basis **B** for the topology is compact if and only if any cover by sets from **B** has a finite subcover. Now it is not too big a stretch to believe that a space with a sub-basis **S** for the topology is compact if and only if any cover by sets from **S** has a finite subcover. However, this is more difficult: it is *Alexander’s sub-basis theorem*, and uses the Axiom of Choice.
- Now a sub-basis for the topology on a product space is given by the collection of inverse images of open sets in the factors under the projection maps onto those factors. Using this and Alexander’s theorem, it is not too hard to prove Tychonoff’s theorem.

In my time as a student, I was often asked to take the Jordan curve theorem on trust, and often promised that I would see a proof later; I never did. I think my little act of concealment was no worse than this.

The course did not have much about metric spaces, except as examples of topological spaces (motivating the Hausdorff property, for example, or easing the transition from continuous functions on the real numbers to the general definition of continuous function).

As a result, the theorem of topology which I have used far more than any other, the *Baire category theorem*, was not in the course. (This theorem states that, in a non-empty complete metric space, the intersection of countably many open dense sets is dense, and in particular non-empty.)

In the last lecture of the course, which I discuss below, I stated it, and gave a very simple application: a proof of the existence of transcendental numbers. Of course, most of my applications are in ultrametric spaces, especially spaces of paths in an infinite tree (where the proof is also much easier than in the general case). For example, Fraïssé limits exist because homogeneity is a countable sequence of requirements on a countable structure, each requirement being the restriction to an open dense subset. So for example, graphs isomorphic to the random graph are residual among all countable graphs.

For the final lecture, one of the students had asked if I could say a bit about further directions in topology and in its applications to other parts of mathematics. This was, for me, not the usual undergraduate lecture: I talked about manifolds, algebraic topology (both homology and homotopy) and the Poincaré conjecture, differential topology, point-set topology, the Zariski topology in algebraic geometry, topological groups, and the Baire category theorem (as noted above). A packed 50 minutes, and I hope that the students took something away from the lecture!


Well, of course I liked the idea. So, could I write out a syllabus, so that they could get it approved? Trouble was, of course, I couldn’t decide exactly what I wanted to talk about. So I wrote out three syllabuses, and suggested that the Director of Teaching should choose the most appropriate one. The response was, “We will approve all three, and you can give whichever one you like”.

Wonderful flexibility. I was delighted to have come to a university where such things were still possible in this over-bureaucratic age of higher education.

Anyway, three years have passed by, and I have just come to the end of lectures on the third of the topics. So I have compiled the lecture notes into three files and put them in my lecture note collection here. If they might be of any use to you, for either learning or teaching, please take a look.

So what is there?

**Part 1** is on enumerative combinatorics. You will find elementary counting (with short digressions on Ramsey’s theorem and Steiner systems), formal power series, linear recurrences, Catalan objects, Gaussian coefficients, Möbius inversion, number partitions (up to Jacobi’s triple product identity), set partitions and permutations. Not much asymptotics, but there is a proof of Stirling’s formula (how could I not do this, living in Scotland?). Also in honour of my new place of residence, at Ursula Martin’s suggestion I renamed the pigeonhole principle the “doocot principle” – you see many old doocots standing in gardens and fields around east Fife.

**Part 2** is entitled “Structure, symmetry and polynomials”. The most important structural polynomial is the Tutte polynomial of a matroid, which specialises to the chromatic polynomial of a graph and to the weight enumerator of a linear code. To measure symmetry, we have the cycle index polynomial of a permutation group, a way to codify the Orbit-Counting Lemma. One of my long-term interests is to try to combine these two strands, and there is some material on this. Diversions cover codes over the integers mod 4, IBIS groups, Mathieu groups, and symmetric Sudoku. It was for this version of the module that I was awarded a teaching innovation prize.

**Part 3** is on finite geometry (loosely interpreted) and strongly regular graphs. The course begins with two classic topics, the Friendship theorem and Moore graphs of diameter 2, as a warmup to what proved to be a central theme of the course: the classification of graphs with the “strong triangle property”. After a discussion of projective planes and designs, I give the classification of the ADE root systems (using the Goethals–Seidel approach) and apply this to the classification of graphs with least eigenvalue −2 or greater. Then more finite geometry: projective spaces, generalised quadrangles (those with three points on a line are classified by the central theme), generalised polygons, and finally polar spaces, where the central theme is generalised using Jonathan Hall’s proof of the Shult–Seidel theorem on graphs with the “triangle property”. Diversions include partitioning complete graphs into strongly regular graphs, with application to the Ramsey number *R*(3,3,3).

All parts are self-contained, though it often occurs that a topic mentioned in one part is discussed in more detail in another. All parts include exercises after each section and solutions to most of the exercises at the end.

The notes are a bit rough; I have not had time for careful proofreading and editing. But I hope they may be of some use.


The exhibition, which ran in Edinburgh during Science Week there, is in St Andrews on four days starting today; so, if you want to see it, hurry. (There are “build your own kaleidoscope” workshops aimed at children of 10 or over.) The exhibition marks the 200th anniversary of the invention of the kaleidoscope by David Brewster.

Brewster was an astonishing character. He went to Edinburgh University at the age of 12 and studied theology. He was not a success as a church minister, but he studied mathematics and optics in his spare time and made a living by tutoring. As well as his invention of the kaleidoscope, he studied polarisation of light, noting that light was polarised by reflection and the effect was greatest when the reflected and refracted rays are perpendicular (when the incident angle is the *Brewster angle*): this gives a way of measuring the refractive index.
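Brewster’s law makes that last remark concrete: at the polarising angle the reflected and refracted rays are perpendicular, and Snell’s law then forces tan *θ*_{B} = *n*_{2}/*n*_{1}, so a measured Brewster angle yields the refractive index. A minimal sketch (the air-to-glass indices are illustrative values, not from the post):

```python
import math

# Brewster's law: reflected light is fully polarised when the reflected
# and refracted rays are perpendicular; Snell's law then forces
# tan(theta_B) = n2/n1, so a measured Brewster angle gives the
# refractive index directly.

def brewster_angle_deg(n1, n2):
    """Polarising (Brewster) angle for light passing from medium n1 to n2."""
    return math.degrees(math.atan2(n2, n1))

def refractive_index(theta_b_deg):
    """Recover the relative refractive index n2/n1 from a measured angle."""
    return math.tan(math.radians(theta_b_deg))

theta = brewster_angle_deg(1.0, 1.5)       # air to glass (illustrative values)
print(round(theta, 1))                     # -> 56.3 (degrees)
print(round(refractive_index(theta), 3))   # -> 1.5
```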

He was elected to the Royal Society at the age of 34 and received many prizes. His image was perhaps as well known in his day as Einstein’s a hundred years later, featuring on cigar boxes and elsewhere. He wrote 300 scientific papers, a biography of Isaac Newton, and much else.

He was Principal of St Andrews (this was his first “real” job), and later Principal of Edinburgh University, a post he held until well into his eighties.

The kaleidoscope reached the shores of Japan within a few years of its invention, and now there is a Japan Kaleidoscope Museum run in Kyoto by Shinichi Ohkuma. He has brought part of his collection to Scotland to help celebrate the anniversary, and they are on display in the exhibition; many of them are works of art created by modern artists.


The puzzle was:

Suppose we delete the edges of two edge-disjoint Clebsch graphs from *K*_{16}. What can we say about the graph formed by the remaining edges?

It is known that the edge set of the complete graph can be partitioned into three Clebsch graphs. As observed by Greenwood and Gleason, since Clebsch is triangle-free, this shows that we can colour the edges of the complete graph with three colours so that no monochromatic triangles are created, thereby demonstrating that the corresponding Ramsey number is 17 (it is not hard to show that with 17 vertices we necessarily create a monochromatic triangle). So, if you said, “The graph could be a Clebsch graph”, you would not be wrong …

But the answer to the puzzle is that the complement of two copies of the Clebsch graph is *necessarily* a third copy of the Clebsch graph!

Here is why.

By standard techniques for strongly regular graphs, the eigenvalues of the adjacency matrix of the Clebsch graph are 5 (multiplicity 1, corresponding to the all-1 vector), 1 (multiplicity 10), and −3 (multiplicity 5). Suppose we have two edge-disjoint Clebsch graphs, with adjacency matrices *A* and *B*, and let *C* be the adjacency matrix of the graph formed by the remaining edges, so that *A*+*B*+*C*+*I* = *J*, where *I* is the identity matrix and *J* the all-1 matrix.

Now each of *A* and *B* has a 10-dimensional space of eigenvectors with eigenvalue 1, inside the 15-dimensional space orthogonal to the all-1 vector. These must intersect in a space of dimension at least 5. Since the eigenvalues of *I* and *J* on this space are 1 and 0 respectively, the matrix equation shows that the vectors in this space are eigenvectors of *C* with eigenvalue −3. So *C* has eigenvalue −3 with multiplicity at least 5, in addition to the simple eigenvalue 5 corresponding to the all-1 vector.

Let *r*_{1},…,*r*_{10} be the remaining eigenvalues of *C*. Since the trace of *C* is 0 and the trace of *C*^{2} is 80 (twice the number of edges), we see that

*r*_{1}+…+*r*_{10} = 10,

*r*_{1}^{2}+…+*r*_{10}^{2} = 10.

Subtracting twice the first equation from the second, and adding 10 (one for each *r*_{i}), we have

(*r*_{1}−1)^{2}+…+(*r*_{10}−1)^{2} = 0.

So *r*_{1} = … = *r*_{10} = 1.

Thus the graph *C* has the same eigenvalues as the Clebsch graph, and so is also strongly regular (16,5,0,2). But it is easy to see that the Clebsch graph is characterised by its parameters. (The valency is 5, and any vertex not joined to a fixed vertex * is joined to two neighbours of *; there are 10 such vertices and 10 pairs from 5, so this is a bijection. Now the induced subgraph on non-neighbours of * has valency 3, and two adjacent non-neighbours must be disjoint since otherwise a triangle is created; since there are only three pairs disjoint from a given one, everything is forced.)
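For anyone who wants to check the spectrum numerically, here is a quick sketch using the symbol/points/pairs construction of the Clebsch graph (the same construction given in the puzzle post):

```python
from collections import Counter
from itertools import combinations
import numpy as np

# The Clebsch graph on the symbol "*", the points 1..5, and the ten
# 2-subsets of {1,...,5}: join * to every point, a point to the pairs
# containing it, and two pairs when they are disjoint.
verts = ["*"] + list(range(1, 6)) + [frozenset(p) for p in combinations(range(1, 6), 2)]

def joined(u, v):
    if u == "*" or v == "*":
        return isinstance(u, int) or isinstance(v, int)
    if isinstance(u, int) and isinstance(v, int):
        return False
    if isinstance(u, int):
        return u in v
    if isinstance(v, int):
        return v in u
    return u.isdisjoint(v)

A = np.array([[1 if joined(u, v) else 0 for v in verts] for u in verts])
eigs = Counter(int(round(float(x))) for x in np.linalg.eigvalsh(A))
print(sorted(eigs.items()))   # -> [(-3, 5), (1, 10), (5, 1)]
```

The output confirms the eigenvalues 5 (multiplicity 1), 1 (multiplicity 10) and −3 (multiplicity 5) used above.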


Mercator (the Flemish cartographer Gerard de Kremer) produced his famous map projection in 1569. This is a method for mapping the curved surface of the earth on a plane map which is conformal (that is, angles are preserved, and hence shapes are rendered correctly), even though it is not area-preserving. (I remember my school atlas, in which it appeared to a casual glance that Greenland was as big as Africa.)

Historians of science have speculated on how he did it. In summer 2014, I attended a talk by the Portuguese historian Henrique Leitão about the sixteenth-century Portuguese mathematician Pedro Nunes, in which he mentioned research he had done with his colleague Joaquim Alves Gaspar which strongly suggested the method that Mercator used. Now a short article outlining this has appeared in the current *European Mathematical Society Newsletter*.

In 1537, Nunes investigated *rhumb lines* or *loxodromic curves*: these are the lines that a ship would follow if it kept a constant bearing, that is, its course made a constant angle with the meridian. This is obviously very useful for navigation, and his ideas spread rapidly. Several European mathematicians including John Dee worked out tables of rhumb lines. It is relatively straightforward to construct Mercator’s projection from such a table.
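To illustrate that last claim (a modern reconstruction of the idea only, not Mercator’s actual procedure): accumulating the secants of latitude in small steps, in effect the information a table of rhumbs supplies, reproduces the Mercator ordinate, and agrees with the closed form ln tan(π/4 + φ/2) that was only proved much later.

```python
import math

# The Mercator ordinate stretches latitude so that rhumb lines (constant
# compass bearing) plot as straight lines.  It can be built by summing
# sec(latitude) over small steps, essentially what a rhumb table encodes,
# and it agrees with the closed form y = ln(tan(pi/4 + phi/2)), which was
# only proved decades after Mercator's map.

def ordinate_from_table(phi_deg, step_deg=0.01):
    """Accumulate sec(t) dt from the equator to latitude phi (midpoint rule)."""
    h = math.radians(step_deg)
    steps = round(phi_deg / step_deg)
    return sum(h / math.cos(h * (i + 0.5)) for i in range(steps))

def ordinate_closed_form(phi_deg):
    phi = math.radians(phi_deg)
    return math.log(math.tan(math.pi / 4 + phi / 2))

for lat in (30, 45, 60):
    print(lat, round(ordinate_from_table(lat), 6), round(ordinate_closed_form(lat), 6))
```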

Part of the evidence that this is what Mercator actually did is based on careful measurements on his map, and comparisons with tables available to him. These show that the rhumb line solution fits the map more accurately than other methods which have been proposed.

Mercator described his invention as “corresponding to the squaring of a circle in a way that nothing seemed to be lacking save a proof”. Gaspar and Leitão speculate that, by this cryptic remark, he may have meant that he felt sure that his method really was conformal but was unable to prove it. As they say, “the formal demonstration of this property was beyond the reach of mathematics in Mercator’s time”. Perhaps now it would not be beyond a good undergraduate.


In Oxford at the weekend, on a rather flying visit from St Andrews, for a conference to mark Colin McDiarmid’s retirement. Colin was a student of Dominic Welsh, a tutor at Corpus Christi (the college next door to Merton), and a neighbour of mine in North Oxford for many years. The theme of the conference was “Probabilistic Combinatorics”, but that was not all that was on the menu!

I will revert to my usual practice of describing only a few of the talks.

Joel Spencer opened proceedings with a wonderful talk on the logic behind Galton–Watson trees, in particular, an explanation for why some questions about them throw up “wrong” solutions. He began with a provocative quote from Peter Jagers’ account of the history of branching processes (you can find the original here):

Rarely does a mathematical problem convey so much of the flavour of its time, colonialism and male supremacy hand in hand, as well as the underlying concern for a diminished fertility of noble families, paving the way for the crowds from the genetically dubious lower classes.

Apart from the mathematics, his talk was full of interesting terminology: “green child” (shades of Herbert Read), “old China”, “draconian fecundity”, and “strange logic” among them.
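The “wrong solution” phenomenon is easy to demonstrate: the extinction probability of a Galton–Watson process is a fixed point of the offspring generating function, and *q* = 1 is always a fixed point, but the true extinction probability is the smallest fixed point in [0,1]. A sketch (the offspring distribution is my illustrative choice, not one from the talk):

```python
# The extinction probability of a Galton-Watson branching process is a
# fixed point of the offspring generating function f; q = 1 is always a
# fixed point (the classic "wrong" solution), but the true extinction
# probability is the smallest fixed point in [0,1].

p = [0.25, 0.25, 0.5]    # P(0 children), P(1 child), P(2 children)

def f(s):
    return sum(pk * s**k for k, pk in enumerate(p))

# Iterating f from 0 converges monotonically to the smallest fixed point.
q = 0.0
for _ in range(200):
    q = f(q)
print(round(q, 6))   # -> 0.5 (the other fixed point of f is 1)
```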

Angelika Steger talked about “robustness”. What is this? Dirac’s classical theorem about Hamiltonian cycles can be phrased like this. If you give an opponent a complete graph, and allow her to delete edges, but with the restriction that fewer than half of the edges through any vertex can be removed, the opponent cannot destroy the property of containing a Hamiltonian cycle. Accordingly, we say that the robustness of the complete graph for Hamiltonian cycles is 1/2. Angelika went on to several related problems including the square of a long cycle in a random graph.

That evening, at dinner, Dominic Welsh spoke movingly about his former student, and Colin was clearly very touched.

Next morning Alex Scott opened the proceedings, bringing us up to date with his joint work with Maria Chudnovsky and Paul Seymour. They are tackling questions of the form “If *G* is a graph with large chromatic number, must *G* contain either a large clique or a large (something)?” or, phrased another way: “Is it true that, in (something)-free graphs, chromatic number is bounded above by a function of clique number?” He told us quite a lot about forests of chandeliers, but in the end came to the conclusion that they are “not the answer to everything”.

For me the highlight of the meeting was the talk by Jorge Ramírez-Alfonsín. Starting from the “jugs of wine” problem, on which he had worked with Colin, he turned to aspects of the *Frobenius problem*: given a finite set of positive integers, which non-negative integers can be expressed as non-negative integer combinations of them? (In other words, given a set of values of stamps, which amounts of postage can be paid exactly by putting the right stamps on an envelope?) If *S* is this set, then *S* is an additive semigroup, and is also partially ordered by the relation that *x* ≤ *y* if *y*−*x* is in *S*. He was concerned with computing the Möbius function of this poset. Its generating function is a rational function, and is the inverse of the *Hilbert series* of *S* (analogous to the fact that the Dirichlet series of the number-theoretic Möbius function is the inverse of the Riemann zeta function).
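The postage-stamp formulation is easy to experiment with. A dynamic-programming sketch (the stamp values 5 and 8 are an arbitrary coprime example; for two coprime values *a*, *b* the largest non-representable amount is known to be *ab*−*a*−*b*):

```python
# The postage-stamp (Frobenius) problem: which amounts are expressible as
# non-negative combinations of the given stamp values?  Simple dynamic
# programming; for the coprime pair 5, 8 the largest non-representable
# amount should be 5*8 - 5 - 8 = 27.

def representable(stamps, limit):
    """ok[n] is True iff amount n is a non-negative combination of stamps."""
    ok = [False] * (limit + 1)
    ok[0] = True
    for n in range(1, limit + 1):
        ok[n] = any(n >= s and ok[n - s] for s in stamps)
    return ok

stamps = (5, 8)
ok = representable(stamps, 50)
frobenius = max(n for n, r in enumerate(ok) if not r)
print(frobenius)   # -> 27
```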


Indeed, their argument shows more. It is possible to find two edge-disjoint copies of the Petersen graph in *K*_{10}; the argument shows that the remaining 15 edges necessarily form a bipartite graph. Since the Petersen graph is definitely not bipartite, this settles the original question.

Another graph rather similar to the Petersen graph is the Clebsch graph, which has 16 vertices, 40 edges, valency 5, and no triangles. Indeed, there is a close relationship: the subgraph on the non-neighbours of a vertex in the Clebsch graph is the Petersen graph.

The Clebsch graph can be constructed by taking as vertex set a symbol *, the set {1,…,5}, and the set of ten 2-element subsets of {1,…5}. Join * to all of {1,…,5}; join points in this set to pairs containing them; and join two pairs if they are disjoint.

So here is a puzzle. The solution is here.

Suppose we delete the edges of two edge-disjoint Clebsch graphs from *K*_{16}. What can we say about the graph formed by the remaining edges?


This week I have been at a conference on “Theoretical and Computational Discrete Mathematics” at the University of Derby, under the auspices of the Institute for Mathematics and its Applications.

The University of Derby was founded as the Derby Diocesan Institution for the Training of Schoolmistresses in 1851 and became one of the “new universities” in 1992. Finding mathematics on their website is difficult; it is in the college of engineering and technology, rather than in any of a number of equally plausible-sounding colleges, and is part of the school of computing and mathematics. The order is significant: mathematics follows computer games, computer science and information technology in their list of subject areas. The mathematics website, when I found it, was entirely about teaching, with no mention of research and no staff list. So I had little idea what to expect.

Peter Larcombe, head of mathematics, introduced the conference, and told us that this is the first ever mathematics conference in Derby, but they are hoping to make it the first of a series, possibly every other year. I wish them success in this! Then Chris Linton, current IMA president, told us a bit about the IMA and what it does.

I spoke first, about synchronization, which you have heard enough about if you read what I put here. There was quite a lot of interest, including a question about whether you can produce a parameterised version of the result that finding the shortest reset word for an automaton is NP-hard.
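For context on that question (my own illustration, not from the talk): a reset word takes every state of an automaton to the same state, and the obvious exact method, a breadth-first search over subsets of states, is exponential in the number of states in general. For the 4-state Černý automaton the shortest reset word has length (4−1)² = 9:

```python
from collections import deque

# A reset (synchronizing) word takes every state of an automaton to the
# same state.  Brute force: breadth-first search on subsets of the state
# set, which is exponential in the number of states in general.
# Example: the 4-state Cerny automaton (letter "a" cycles the states,
# letter "b" merges state 0 into state 1).
n = 4
delta = {"a": [(i + 1) % n for i in range(n)],
         "b": [1, 1, 2, 3]}

def shortest_reset_word():
    start = frozenset(range(n))
    word = {start: ""}
    queue = deque([start])
    while queue:
        S = queue.popleft()
        if len(S) == 1:
            return word[S]
        for letter, f in delta.items():
            T = frozenset(f[s] for s in S)
            if T not in word:
                word[T] = word[S] + letter
                queue.append(T)

w = shortest_reset_word()
print(len(w))   # -> 9, matching Cerny's bound (n-1)^2
```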

Other talks on the first day:

- Armen Petrosyan talked about things he called “arithmetic graphs”. The vertex set is a set *N* of positive integers, possibly with repetitions; there is another set *M* of positive integers, and you join two vertices if their sum is in *M*. Taking *N* = {1,2,3} and *M* = {3,4,5} you get the famous right-angled triangle known to the Egyptians, which he calls the “Egyptian graph”. The bulk of his talk consisted of defining a notion of “information” based on the numbers on the edges of the graph, finding representations of various molecules as arithmetic graphs (where *N* is the set of atomic weights of the atoms), and then finding that various physical properties of the molecules such as solubility in water are highly correlated with “information” (all his reported correlations are over 0.9). Mysterious!
- Colin Wilmott gave a lovely talk about quantum circuits. Most quantum gates have one input and one output, but you need a gate with 2 in and 2 out, called “controlled NOT” or CNOT. The CNOT gate is the most difficult to implement, so he wants to find a circuit for a given job which uses the fewest CNOT gates. This led to some interesting questions about linear recurrences.
- Dorin Andrica from Romania was introduced as the person who conjectured that the difference between the square roots of consecutive primes is always less than 1. He started his talk with a discussion of “derivatives”, real functions *f* which are derivatives of some other function. The class of derivatives is larger than the class of continuous functions but smaller than the class of “Darboux functions” (those satisfying the Intermediate Value Theorem). Trying to construct derivatives of certain specific forms, such as products of functions of the form cos(*a*/*x*), led to combinatorial questions about Erdős–Surányi sequences, with some interesting results and open problems. Here was someone whose mathematical mansion has no internal walls!
- Robert Hancock, a St Andrews undergraduate now working with Andrew Treglown in Birmingham, talked about their generalisations of the Cameron–Erdős theorem to sets excluding solutions of other linear equations such as *px*+*qy* = *rz*. They have some very precise estimates.
- Ovidiu Bagdasar talked about Horadam sequences, certain special classes of solutions to linear recurrence relations. He asked a very simple question: for what values of the four parameters (the two coefficients in the recurrence and the two starting values) can such a sequence be periodic? Some nice formulae and pretty patterns in the complex plane emerged.
- Ibrahim Ozbek presented a new threshold secret sharing scheme. (This is a scheme which shares a secret among *n* people, such that any *k* of them can recover the secret if they cooperate, but *k*−1 of them can get no information whatever.) His example is built from error-correcting codes. We take a *t*-error-correcting code which can’t correct any *t*+1 errors; the secret is a word in this code. In essence, it goes like this. The “dealer” chooses *n* = *t*+*k* positions and makes errors in those positions. He also chooses *n* words from a 1-error-correcting code, such that the first letter of the *i*th word is equal to the *i*th coordinate which was changed in the secret; he deletes this first letter and gives the rest to the *i*th participant. Now the *i*th participant can recover the deleted coordinate of his word, by putting any old rubbish there and then correcting; so *k* of the participants can change the published word to one with only *n*−*k* = *t* errors, which they can then correct, but fewer than *k* participants cannot correct the errors.
- Michael Alphonse talked about an algorithm to find the domination number of a circulant graph with two step lengths.

The second day’s fare:

- Vadim Lozin explained some very interesting stuff about structural reasons why some graph problems are hard. He began with the observation that the independent set problem is NP-complete, but the matching problem (which is the independent set problem in line graphs) is polynomial (Jack Edmonds’ famous algorithm). Now line graphs form a hereditary class defined by nine forbidden subgraphs, one of which is the claw. The claw is the one that counts. The independent set problem is polynomial in graphs forbidding a claw, but is NP-complete in the class of graphs forbidding the other 8 forbidden subgraphs for line graphs. He went on from there to his theory of boundary classes, a bit technical, but containing some lovely results and a very nice conjecture.
- Jessica Enright has worked for the Scottish Epidemiology Centre, and looked at the question: given a graph, how many edges do we have to remove to break it into connected components of size bounded by a given constant? Even the decision problem is NP-complete. However, for graphs of bounded treewidth, there is an algorithm with time linear in the number of vertices, the constant being a function of the treewidth bound (in other words, the problem is fixed-parameter tractable). Remarkably, it turns out that the graph of cattle movements between Scottish farms has extremely small treewidth, so the methods are applicable. This seems to me the real mystery from her talk. Afterwards, we had a discussion about this, and decided that perhaps networks designed by humans have small treewidth (because large treewidth makes the graphs hard to think about), while those that evolve by some process may have large treewidth. For example, the “neighbour” relation on Scottish farms (two farms adjacent if they share a boundary) has large treewidth.
- Nicholas Korpelainen talked about cliquewidth, a related parameter (perhaps). For a hereditary graph class, being well quasi-ordered by induced subgraphs is related to bounded cliquewidth (though a conjecture that these are equivalent has recently been refuted).
- Yury Kartinnik talked about finding minimum-size maximal *k*-matchings (sets of edges which are independent in the *k*-th power of the given graph).
- Kitty Meeks gave a nice talk about counting induced subgraphs with some particular property. There are six versions of the problem she considers: does such a subgraph exist? can we count them approximately? can we count them exactly? can we find one? can we sample at random? can we find them all? For a given type of subgraph, some of these problems may be tractable, but there are relationships: for example, if we can solve the decision problem affirmatively then we can find a witness. For example, for a *k*-clique, delete vertices one at a time until there is no longer a *k*-clique; the last deleted vertex must be in a clique, so restore it, restrict to its neighbourhood, and replace *k* by *k*−1.
- Shera Marie Pausang found Nash equilibria for the “inspection game” where an inspector with limited resources has to inspect a number of people who may or may not be complying when the inspector arrives: a very practical problem!
- Eric Fennessey talked about some of the practical problems he has to deal with in his daily job at Portsmouth dockyard looking after the navy’s destroyers. The problems seem remarkably basic. One of them is as follows. We have a number of bags of jelly beans, each bag labelled with the number and total weight of beans it contains. Find maximum-likelihood estimators for the mean and variance of the weight of a jelly bean. He remarked that for the variance he has to assume that the distribution of weight is normal, which of course means that there is a non-zero probability that a jelly bean has negative weight!
- Fionn Murtagh talked about a hypermetric, or *p*-adic, approach to problems of clustering in large datasets. He remarked that this sort of thinking has been used in publications on psychoanalysis (Ignacio Matte Blanco), fundamental physics (Igor Volovich), and cosmology.
- I.-P. Popa talked about linear discrete-time systems in Banach spaces.
- Hitoshi Koyano said that, while a lot of data consists of numbers, for which there are well-developed statistical techniques, increasing amounts of data consist of strings, where techniques are much less developed; he is developing techniques for this.
- Finally, Sugata Gangopadhyay talked about a subject close to my heart, bent functions; specifically, cubic bent functions which are invariant under cyclic permutation of the variables (he calls this property “rotation invariance”). A bent function is a Boolean function which lies at maximum Hamming distance from the set of linear functions. He is interested in finding bent functions at maximum distance from the quadratic functions. (In fact, it is not obvious that the functions furthest from quadratic functions are necessarily bent, but that is another question!)
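The decision-to-search trick from Kitty Meeks’s talk can be made concrete. The sketch below is my own illustration, not hers: a brute-force routine stands in for whatever decision oracle one actually has, and the search procedure extracts a *k*-clique by deleting dispensable vertices and then recursing into a neighbourhood with *k* replaced by *k*−1.

```python
from itertools import combinations

def has_k_clique(adj, vertices, k):
    """Decision oracle: does the graph induced on `vertices` contain a k-clique?
    (Brute force here, standing in for any decision procedure.)"""
    return any(all(v in adj[u] for u, v in combinations(c, 2))
               for c in combinations(sorted(vertices), k))

def find_k_clique(adj, vertices, k):
    """Witness extraction by self-reduction: returns a k-clique as a list,
    or None if there is none.  adj maps each vertex to its set of neighbours."""
    if not has_k_clique(adj, vertices, k):
        return None
    if k == 0:
        return []
    # Delete vertices one at a time, as long as a k-clique survives.
    for v in list(vertices):
        if has_k_clique(adj, vertices - {v}, k):
            vertices = vertices - {v}
    # Now deleting any further vertex destroys the last k-clique, so every
    # remaining vertex lies in it: pick one, restrict to its neighbourhood,
    # and replace k by k-1.
    v = min(vertices)
    return [v] + find_k_clique(adj, vertices & adj[v], k - 1)

# A triangle 1-2-3 with a pendant vertex 4 attached to 1.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
print(sorted(find_k_clique(adj, set(adj), 3)))  # the triangle
```

Each round of deletions costs at most one oracle call per vertex, so the whole search uses polynomially many calls to the decision procedure.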
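For the bent functions in the last talk, the distance to the linear functions is easy to compute with the Walsh–Hadamard transform: a Boolean function on *n* variables is bent exactly when all its Walsh coefficients have absolute value 2^(*n*/2). A small sketch of my own (not from the talk), using the classic bent function *x*₁*x*₂ ⊕ *x*₃*x*₄:

```python
def walsh_transform(f_values):
    """Fast Walsh-Hadamard transform of (-1)^f, for f given as a 0/1 truth table
    of length 2^n (entry x is f at the point whose bits are the bits of x)."""
    w = [1 - 2 * b for b in f_values]   # 0/1 -> +1/-1
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

def nonlinearity(f_values):
    """Minimum Hamming distance from f to the affine functions."""
    return len(f_values) // 2 - max(abs(c) for c in walsh_transform(f_values)) // 2

# f(x1,x2,x3,x4) = x1*x2 XOR x3*x4: a bent function on 4 variables,
# so all 16 Walsh coefficients have absolute value 2^(4/2) = 4,
# and the nonlinearity is 2^3 - 2^1 = 6.
f = [((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
     for x in range(16)]
print(nonlinearity(f))
```

The same routine applied to a linear function, such as the truth table of *x*₁ alone, returns nonlinearity 0, as it should.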

A small but extremely wide-ranging conference; I look forward to the next one.
