These questions can be answered, but the answers seem to depend on five somewhat arbitrary conventions.

- First, we have to distinguish between left and right. Certainly most people can do this; but this is the only point where we make contact with “reality”. The variety of words for “left” and “right” in European languages suggests that the ability (and necessity) to distinguish them does not go back to the roots of language. Admittedly, “right” seems a bit more uniform than “left”, and so is maybe older, perhaps because of its association with a different meaning of “right”, cf. the French *droit*. As has often been remarked, the words for “left” in French and Latin, “gauche” and “sinister”, suggest some prejudice against left-handed people; maybe one day we will become so tolerant that these layers of meaning will disappear. (In my youth it was not uncommon for children to be “converted” to right-handedness, not always successfully.)
- Next, we have to remember the association between “left” and “motor”, and between “right” and “dynamo” (the two problems mentioned in the opening paragraph). I am sure we learned a mnemonic for this at school, though I can’t clearly remember what it was. Perhaps it was an association with the idea that the right hand is more dynamic, another instance of the prejudice mentioned above.
- Next, we have to remember the bijection between the thumb, first, and second fingers and force (or motion), field, and current: thuMb for Motion, First finger for Field, seCond finger for Current. (This sounds like six choices, but there are really only two, since an even permutation doesn’t change the convention.)
- Next, we have to remember the direction of a magnetic field line. I remember diagrams showing the field lines leaving the north pole of a magnet and entering the south pole, which of course were brought to life by experiments with iron filings. Added confusion comes from the fact that the north magnetic pole of the earth is actually a south pole and *vice versa*. This is easily explained: the north pole of a compass magnet is the one that points north, and opposite poles attract.
- Finally, we have to remember which way current flows. It flows in the opposite direction to the way the electrons flow. This convention, of course, was established before the discovery of electrons, and involved an arbitrary choice of which terminal of a battery is positive and which is negative.

Given all these conventions, to solve the first problem, hold the left hand so that the first finger points in the direction of the magnetic field and the second finger in the direction of the current; the thumb will indicate the direction of the force. For the second problem, use the right hand, with the thumb in the direction of motion and the first finger in the direction of the field; the second finger will indicate the direction of current flow.

So you have to remember five bits of information; failing that, you must get an even number of them wrong.

There are various connections here. The hand rules are related to the right-hand rule for the vector product (cross product) of two three-dimensional vectors: if the first and second fingers of the right hand point in the directions of the two vectors, the thumb will point in the direction of their vector product. This rule is usually formulated as a screw rule: if we turn a right-handed screw in the direction from the first vector to the second, the screw will move forward in the direction of the product. This also seems to connect with reality. It is more natural to turn a screwdriver in one direction than the other, presumably because we use different muscles for the two actions; the more natural direction tightens a right-handed screw if done with the right hand. (Some people transfer the screwdriver to their left hand to undo a right-handed screw.)
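The screw rule is exactly the usual coordinate formula for the vector product with respect to a right-handed basis. A minimal Python sketch (my illustration, not part of the original discussion):

```python
def cross(a, b):
    """Vector product of two 3-dimensional vectors (right-hand rule
    with respect to the standard right-handed basis)."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

x = (1, 0, 0)  # first finger
y = (0, 1, 0)  # second finger
print(cross(x, y))  # thumb: (0, 0, 1)

# Reversing the order of the factors reverses the sign, so two reversed
# conventions cancel: the "even number of errors" observation above.
print(cross(y, x))  # (0, 0, -1)
```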

Also, the left-right distinction connects with the direction of the magnetic field. In the northern hemisphere, if you stand facing the midday sun, the sun will rise on your left and set on your right, and the earth’s magnetic field will come from in front of you. (These things reverse in the southern hemisphere, and the tropics require special care; also, in the region within the polar circles, it may not be clear where the midday sun is.) Of course, if the earth’s magnetic field were to reverse, the force acting on a current-carrying wire in a magnetic field would not change!

To a mathematician, of course, the cross product (or exterior product) of two vectors from the real 3-dimensional vector space *V* lives in a different 3-dimensional space, the *exterior square* *V*∧*V*. Physicists do recognise the distinction, calling the vectors of *V* *polar vectors* and those of the exterior square *axial vectors* (yet another of those things which you presumably just have to remember). The two kinds behave differently under transformations of the underlying space: under central inversion, a polar vector changes sign, while an axial vector does not. So the convention here is actually a choice of identification between *V* and its exterior square.


So what did they compute, and why does it matter?

The notions of *strongly regular graph* and *partial geometry* were introduced by R. C. Bose in 1963, though they had been around in some form much earlier in Bose’s work. A graph is strongly regular, with parameters (*n*, *k*, λ, μ), if it has *n* vertices, every vertex has *k* neighbours, and two distinct vertices have λ or μ common neighbours according as they are adjacent or not. This is a class of graphs of great importance. There are a number of conditions satisfied by the parameters. One, known as the “absolute bound”, is difficult to state but is interesting in that only three graphs are known which attain it: the 5-cycle, the *Schläfli graph* associated with the 27 lines on a cubic surface, and the *McLaughlin graph*, whose automorphism group is McLaughlin’s sporadic simple group. The McLaughlin graph has parameters *n* = 275, *k* = 112, λ = 30, μ = 56.
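One of the easier parameter conditions (not the absolute bound, which as noted is harder to state) comes from double-counting edges between the neighbourhood and non-neighbourhood of a fixed vertex: *k*(*k*−λ−1) = (*n*−*k*−1)μ. A quick sketch (my own check, using the standard parameters of the three graphs just mentioned):

```python
def srg_edge_count(n, k, lam, mu):
    """Check the basic feasibility identity k(k - lambda - 1) = (n - k - 1)mu,
    from double-counting edges between the neighbourhood and the
    non-neighbourhood of a fixed vertex."""
    return k * (k - lam - 1) == (n - k - 1) * mu

print(srg_edge_count(5, 2, 0, 1))        # 5-cycle: True
print(srg_edge_count(27, 16, 10, 8))     # Schlafli graph: True
print(srg_edge_count(275, 112, 30, 56))  # McLaughlin graph: True
```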

A *partial geometry* with parameters *s, t*, α is a geometry of points and lines in which two points lie on at most one line, every line has *s*+1 points, every point lies on *t*+1 lines, and a point *P* not on a line *L* is collinear with α points of *L*. The *point graph* of a partial geometry (whose vertices are the points, two vertices joined if they are collinear) is strongly regular (but I will leave the calculation of its parameters as an exercise). One interesting feature is that the dual of a partial geometry is also a partial geometry, and so also has a strongly regular point graph (the *line graph* of the original geometry). Partial geometries played an important role in the characterisation of classes of strongly regular graphs, especially those with smallest eigenvalue −2.

The McLaughlin graph is the unique strongly regular graph with its parameters, up to isomorphism. Also, if there were a partial geometry with parameters (4, 27, 2), its point graph would have the parameters of the McLaughlin graph, and so would be the McLaughlin graph. So a very natural question is: is there a partial geometry with these parameters? In other terminology, is the McLaughlin graph *geometric*? (Note that the other non-trivial graph attaining the absolute bound, the Schläfli graph, is geometric.) That is what has just been resolved in the negative. I am not going to describe the methods (their paper is on the arXiv). Suffice it to say that a line in the geometry would necessarily be a clique (complete subgraph) of size 5 in the graph; the graph does have cliques of size 5, but too many to form a geometry, so the problem is to decide whether a subset of these cliques can be selected so that each edge lies in exactly one.
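As a check on that claim, the standard counting formulas for the point graph of a pg(*s*, *t*, α) (which answer the exercise above) can be evaluated mechanically; this sketch is mine, not taken from the paper under discussion:

```python
from fractions import Fraction

def point_graph_params(s, t, alpha):
    """Parameters of the point graph of a partial geometry pg(s, t, alpha),
    by the standard counting arguments."""
    n = (s + 1) * (Fraction(s * t, alpha) + 1)   # count points via lines on a point
    k = s * (t + 1)                              # collinear points: s on each of t+1 lines
    lam = s - 1 + t * (alpha - 1)                # common neighbours of adjacent points
    mu = alpha * (t + 1)                         # common neighbours of non-adjacent points
    return int(n), k, lam, mu

# A pg(4, 27, 2) would have exactly the McLaughlin parameters:
print(point_graph_params(4, 27, 2))  # (275, 112, 30, 56)
```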

Patric talked about this computation, among other things, at our meeting on Discrete mathematics and big data. Of course, it happens not infrequently in discrete mathematics that the result of a huge computation is a single bit, as here. In such cases, we might hope that the answer would be “yes”, and the computer would produce an example of a geometry which could be checked. There is no way of checking the answer “no” other than repeating the computation. The same situation arose in the case of the projective plane of order 10.



The subject is relatively young; it began in 1995 with a paper by Chor, Goldreich, Kushilevitz and Sudan. At the meeting, Alex Vardy gave us a very clear account of the theory (on which what is below is mostly based), before describing his own contribution.

Suppose Alice wants to download a file from an online database, without the database learning which file she is interested in. There is one simple and sure way to do this, though potentially rather expensive: she can download the entire database, and then privately select the file she wants. In fact, with a single server, no protocol can in general do better.

So is this the end of the story? No, there are two approaches which have been adopted. One is *computational*: the database manager may be able to learn Alice’s choice, but the computation required to discover this is prohibitive. There are protocols which achieve this, but at the expense of being themselves computationally intensive. The other approach, which was the concern of the meeting, is *information-theoretic*. This makes the crucial assumption that the data is stored on a number (say *k*) of servers, which do not communicate with one another.

To simplify matters, we assume that the database consists of a binary string *x* of length *n*, and Alice wants the *i*th bit. Of course she probably wants a file whose size may be gigabytes, but (apart from re-scaling the resources required) the principle is the same.

To show that the goal is not impossible, here is a simple protocol for the case *k* = 2. Alice generates a random string *u* of length *n*. Let *u*′ be the result of changing the *i*th bit of *u*. Alice’s requests to the two servers are the strings *u* and *u*′. The servers return the two bits *x*·*u* and *x*·*u*′ (dot products mod 2); by adding them, Alice gets the bit *x*_{i}. Since each server sees a random string from Alice, neither learns anything about her interests. (Of course, if the servers do communicate, they can do the same calculation as Alice; we must also assume that the communications between Alice and the servers are secure.)
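The two-server protocol is easy to simulate; a minimal Python sketch (the function names are mine), with the dot products taken over GF(2):

```python
import secrets

def dot(x, u):
    """Dot product of two bit strings over GF(2)."""
    return sum(a & b for a, b in zip(x, u)) % 2

def retrieve_bit(x, i):
    """Alice's view of the 2-server protocol for database x and index i."""
    n = len(x)
    u = [secrets.randbelow(2) for _ in range(n)]  # query to server 1
    u2 = u.copy()
    u2[i] ^= 1                                    # query to server 2: flip bit i
    # Each server returns one bit; their sum mod 2 is x.(u + u') = x_i.
    return (dot(x, u) + dot(x, u2)) % 2

database = [1, 0, 1, 1, 0, 0, 1, 0]
print([retrieve_bit(database, i) for i in range(len(database))])  # recovers the database
```

Each query alone is a uniformly random string, which is why neither server learns anything about *i*.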

This protocol is resource-intensive. We require each server to store the entire database; Alice must generate a string of the same length, and transmit two near-identical copies; and the servers must touch every item in the database to compute the dot products. Most research has focussed on reducing either the storage overhead or the communication cost. For example, the amount of communication required has been reduced to sub-polynomial if there are more than two servers.

Vardy’s work takes a different approach. This is based on the fact that multiple servers may use protocols which involve each server storing only part of the information. Typically it is required that full information can be reconstructed by accessing a prescribed number of servers (this may be done, for example, with a Reed–Solomon code), or that if one server fails, the information it holds can be recovered from the others, or perhaps a limited number of other servers.

His first example of using this idea for PIR showed how to run a 3-server protocol with a storage overhead of only 2 (equivalent to using 2 servers; a 3-server protocol might be better in terms of the amount of information needing to be transmitted). This involves breaking the data into four pieces, and storing these (or linear combinations of them) on eight servers, each of which has to store 1/4 of the entire data. The scheme is simply a binary linear code of length 8 and dimension 4, with a certain property which he called “3-server PIR”.

In general, a binary code with an *s*×*m* generator matrix *G* has the *k*-server PIR property if, for each column of *G*, there are *k* pairwise disjoint sets of coordinates such that, for each set, the sum of the columns of *G* with indices in that set is the given column. Such a code enables us to emulate any *k*-server PIR protocol with a database distributed over *m* servers, each storing 1/*s* of the original information (so with storage overhead *m*/*s*, which may be much smaller than *k*). Much classical coding theory (e.g. majority logic decoding) and combinatorics (Steiner systems, linear hypergraphs) appeared in the constructions he gave.
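For small codes the property can be verified by brute force. The following sketch (the toy code and the greedy search are my own illustration; greedy happens to suffice here but is not a complete decision procedure in general) checks a code with *s* = 2, *m* = 3, whose generator matrix has columns e1, e2 and e1+e2, giving the 2-server PIR property with storage overhead 3/2:

```python
from itertools import combinations

def col(G, j):
    return tuple(row[j] for row in G)

def recovery_sets(G, j):
    """All coordinate sets whose columns sum (mod 2) to column j of G."""
    m = len(G[0])
    target = col(G, j)
    found = []
    for r in range(1, m + 1):
        for S in combinations(range(m), r):
            sums = tuple(sum(row[i] for i in S) % 2 for row in G)
            if sums == target:
                found.append(set(S))
    return found

def has_k_server_pir(G, k):
    """Greedily look for k pairwise disjoint recovery sets for every column
    (adequate for this toy example, not a complete search)."""
    for j in range(len(G[0])):
        chosen = []
        for S in sorted(recovery_sets(G, j), key=len):
            if all(S.isdisjoint(T) for T in chosen):
                chosen.append(S)
        if len(chosen) < k:
            return False
    return True

G = [[1, 0, 1],
     [0, 1, 1]]   # columns e1, e2, e1+e2
print(has_k_server_pir(G, 2))  # True
```

For column e1, for instance, the disjoint recovery sets are {0} and {1, 2}, since the second and third columns sum to e1.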

I will describe the other talks more briefly. Vijay Kumar surveyed coding for distributed storage, dealing with both regenerating codes and codes with locality. Salim El Rouayheb continued Alex Vardy’s theme by allowing the possibility that some of the servers are “spies” and may collude to attempt to get information about Alice’s choice.

Finally, Tuvi Etzion (who is working with Simon on an EPSRC-funded project) talked about connections with network coding. He was dealing with multicast coding, where one sender has a collection of messages, and a number of receivers require all of the messages. He gave us an example to show that vector networks can beat scalar networks. (For a scalar network, the messages are taken from a finite field of order *q*, say, and a node can form a linear combination of its inputs to send on as its output. It is known that this is possible for sufficiently large fields. In a vector network, the symbols are *t*-tuples over a field of order *r* (and, for comparison with the scalar network, we take *q* = *r*^{t}); a node can apply linear maps to its inputs and sum the results to produce its output.) He gave an example of a network where, for a given value of

In the final part of his talk, he described connections between network coding and PIR, but I am afraid my shingles-affected brain was not really processing this information efficiently.


This time it is V. Arnol’d’s book *Huygens and Barrow, Newton and Hooke*. This was written for the 300th anniversary of the appearance of Newton’s *Principia*, and despite the title, Newton is the hero of the book. This is definitely not academic history. Arnol’d is clear about Newton’s defects: the way he treated Hooke, his unethical behaviour over the commission on the invention of calculus, and so on. But, nevertheless, Newton had outstanding geometric intuition, which Arnol’d values.

For example, according to Arnol’d, Leibniz proceeded formally. He knew that d(*x*+*y*) = d*x*+d*y*, and assumed that the same sort of rule would hold for multiplication (that is, d would be a ring homomorphism). He only realised he was wrong when he had worked out the unpleasant consequences of this assumption. Newton, who thought geometrically, saw the rectangle with small increments of both sides, showing immediately that d(*xy*) = *x*.(d*y*)+(d*x*).*y*.
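Numerically, the corner of Newton’s rectangle is visibly second order: the exact increment of *xy* differs from *x*·d*y* + *y*·d*x* by exactly d*x*·d*y*. A tiny sketch of my own:

```python
# Enlarge a 3-by-5 rectangle by d on each side: the increment of the area
# is the two strips x*d + y*d plus the corner d*d, which is second order.
x, y = 3.0, 5.0
for d in (1e-2, 1e-3, 1e-4):
    exact = (x + d) * (y + d) - x * y
    strips = x * d + y * d
    print(d, exact - strips)  # the leftover is the corner term, about d*d
```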

Indeed, he is often ready to give Newton the benefit of the doubt based on his geometric intuition. For example, how did Newton show that the solutions to the inverse square force law were conics? He showed that, for any initial conditions, there is a unique conic which fits those conditions. But doesn’t this assume a uniqueness theorem for the solution of the differential equation? No problem. Newton knew that the solutions depend smoothly on the initial conditions, and this fact obviates the need for a uniqueness theorem. (“This … can be proved very easily”, and Arnol’d gives a proof for our edification. The assumption is that, either Newton knew this, or it was so obvious to him that he felt no need to give a proof.)

One of the most valuable parts of this thought-provoking book is that modern developments are discussed in some detail. For example, Newton calculated the evolvent of the “cubic parabola” *y* = *x*^{3}. (Arnol’d prefers the term “evolvent” to the more usual “involute”, according to the translator. This is the curve obtained in the following way: take a thread of fixed length attached to the curve at one end and lying along the curve; then “unroll” it, so that at any stage the thread follows the curve for a while and is then tangent to it. The evolvent is the curve traced out by the other end of the string.) The evolvent turns out to have the same equation as the discriminant of the icosahedral group H_{3}. This leads us to the occurrence of five-fold symmetry in quasicrystals, which has been discovered in nature (these discoveries were quite recent when Arnol’d was writing).

Another highlight is the discussion of Newton’s remarkable proof that an algebraically integrable oval (one for which the area cut off by a secant is an algebraic function) cannot be smooth: at least one of the derivatives must have a singularity. This implies that the position of a planet following Kepler’s laws cannot be an algebraic function of time. There is much more too: Newton’s analysis went much further, and was a forerunner of the “impossibility proofs” in mathematics which flowered much later.


You still have a chance to register for this exciting conference: the deadline has been extended until 8 July. (You can read my report of the 2011 YRM here.)

Below is the invitation from the organisers.

Registration for the Young Researchers in Mathematics (YRM) 2016 conference closes on Friday 8th July 2016, so register now here!

YRM is the UK’s largest annual mathematics conference run for postgraduates by postgraduates, and this year it is being held in St Andrews, Scotland (about an hour from Edinburgh), from Monday 1st August to Thursday 4th August.

We have the pleasure to announce a public talk from Professor Michael Edgeworth McIntyre (Cambridge), plenary talks from Professor Peter Cameron (St Andrews / Queen Mary), Professor Clement Mouhot (Cambridge) and Dr Graeme Segal (Oxford) alongside keynote talks from distinguished academics in the following areas: algebra, analysis, dynamical systems, fluid mechanics, game theory, logic and set theory, mathematical biology, mathematical physics, number theory, numerical analysis, plasma theory, probability, statistics, and topology.

We also invite and encourage every attendee to contribute a 20 minute talk on their research and/or participate in the poster competition in the relaxed atmosphere of the conference.

For more information on speakers, scheduling (provisional schedule available) and how to register, please visit www.yrm2016.co.uk. You can also like YRM 2016 on Facebook and follow us on Twitter.


It was 2025 when the trouble really began. There had been some muttering prior to this, but in July 2025, Microsoft introduced Windows 25. Every time users of Office-Infinity entered the consecutive characters pi, the system automatically started to produce the decimal digits of π to unlimited accuracy. Previous versions had a similar bug, but this could be turned off with a simple expletive to the voice-recognition system. Windows 25 was different – it had a mind of its own and would shout back in highly offensive terms.

At this point, demand began to grow for a referendum on the true value of π. Older people seemed to prefer 22/7, but there were vigorous arguments in support of alternative values. The Sun newspaper was strongly in favour of taking the value to be 3 on grounds of simplicity, and it didn’t care for the symbol π either, decrying it as a foreign import. The Daily Mail and the Daily Express felt that their readers had had enough of so-called experts, particularly mathematicians and the like. Many of these people, it was alleged, were wasting a fortune computing π to billions of decimal places, in fact over 350 million digits per week by some reckonings. Overseas mathematicians were especially vilified as corrupting the innate simplicity of the English character by their slavish addiction to spurious accuracy. The public were informed that many foreign symbols had been imported into mathematics, and the expense of dealing with these became a key issue. The president of the LMS inadvertently let slip that many numbers in common use were irrational, and some even transcendental! Papers were uncovered relating to surreal numbers and even to imaginary numbers. The prime minister eventually conceded that a referendum would be held on 14th March 2026.

A close colleague of the LMS president, a person with ambition for this post, suddenly switched sides and wrote extensive articles to the effect that a few decimal places would suffice, including one for the Times entitled “Cutting π down to size”. He confessed that he had secretly felt this way for years, but had been unwilling to offend his erstwhile friend. Televised debates followed between the “Pi is finite” and the “Hands off π” camps. Foreign mathematicians were aghast. The “Pi is finite” camp vigorously pushed the concept of simplicity and promised to replace π by p (to be pronounced pee). They were challenged to specify how many decimal places they would use, but their answers were evasive and varied considerably. It was also pointed out that although π was a foreign (Greek) letter, p itself was a Roman import. But none of this seemed to stick. The Star ran a leader with the headline “Pee off pi”. The “Hands off π” group promised catastrophic disaster if π were to be redefined. They were portrayed in most of the media as elitist snobs and know-alls. Probably this was not helped by articles in the Guardian with headlines such as “Why π will affect future generations and why its redefinition will lead to untold economic disasters in the distant future”.

The result of the referendum was narrow but, nevertheless, there was a clear victory for the “Pi is finite” campaign, now renamed the “P is finite” group. Media pundits analysing the result opined that the great English public had finally taken revenge for the mathematics that they had been forced to endure at school. The president of the LMS resigned and there was a considerable revolt at the IMA, whose president was regarded as having been insufficiently supportive of the “Hands off π” campaign. Some backtrackers demanded a second vote, while others reluctantly agreed to settle for 355/113.

By 2040 we still don’t have a definitive value for π, or p as it is now called. However, it is now illegal hate-speech to claim that it has infinitely many decimal places. A popular choice is p=22/7, and the “P is finite” team are constantly assuring people that things like wheels not being completely circular are merely transitional problems. England was forced out of the World Cup in 2038 for using a ball judged insufficiently spherical, and was beaten in the Mathematical Olympiad by the Vatican City youth team. Most international scientific societies expelled the English representatives, although this was hailed as a triumph by the English press. The Research Excellence Framework was revised to promote the National Excellence category above the International Excellence category.

Meanwhile, at Heysham nuclear power station, and unknown to everyone, the non-circular reactor containment vessel has just developed the tiniest of cracks.

Mike Grannell, June 2016.


As a result, EPSRC, the research council responsible for mathematics, called for proposals to run Taught Course Centres. The rules were quite strict: it was not permitted to recycle Masters-level courses; the standard had to be demonstrably higher than Masters level.

Most of the successful bids went for a distance-learning, technology-based solution. But London was a bit different. Since most mathematics departments in the consortium were in or close to London, the London Taught Course Centre (LTCC) went for face-to-face lectures and interaction between students and lecturer (and among the students).

The business plan was that EPSRC would provide “pump-priming funds” but expected the Centres to become self-supporting within a few years. In practice this meant that universities in the consortium would be asked for contributions to fund the Centre. In the case of London, it was decided to look into the possibility of an extra funding stream by publishing volumes containing material from our portfolio of courses. World Scientific agreed to publish the LTCC Advanced Mathematics Series, and the first three volumes in the series (edited by the directors of the Centre, Shaun Bullett, Tom Fearn and Frank Smith) have just appeared. They are on *Advanced Techniques in Applied Mathematics*, *Fluid and Solid Mechanics*, and *Algebra, Logic and Combinatorics*. Three more volumes are in preparation and should be out shortly.

I wrote a chapter for volume 3, *Algebra, Logic and Combinatorics*: my free copies arrived yesterday.

There are five chapters, each of about 40 pages. Each has exercises, with solutions or hints for some of them, and suggestions for further reading, and most have an introduction. Below is a brief description of the chapters.

- My chapter is on “Enumerative Combinatorics”, and covers formal power series; subsets, partitions and permutations; recurrence relations; inclusion–exclusion; posets and Möbius inversion; orbit counting; species; and asymptotic analysis. (Similar to part one of my St Andrews notes.)
- Robert Wilson gives an “Introduction to the Finite Simple Groups”, with detail on alternating groups; subgroups of symmetric groups (O’Nan–Scott); linear groups; subgroups of general linear groups (Aschbacher); forms; classical groups; Lie theory; octonions and *G*_{2}; exceptional Jordan algebras and *F*_{4}; Mathieu groups; the Leech lattice and Conway groups.
- Anton Cox gives an “Introduction to Representations of Algebras and Quivers”. He treats algebras and modules; quivers and their representations; basic structure theorems (Jordan–Hölder, Artin–Wedderburn, Krull–Schmidt); projective and injective modules. I haven’t thought about quivers since I first heard the word “categorification”; there seems to be a close connection.
- Peter Fleischmann and James Shank describe “The Invariant Theory of Finite Groups”. They do finite generation and Noether normalisation; Hilbert series; integral extensions and integral closure; polynomial invariants (and reflection groups); the depth of modular rings of invariants; Castelnuovo regularity and finite decomposition type.
- Ivan Tomašić’s chapter on “Model Theory” starts with a gentle introduction to first-order model theory and then accelerates to reach recent developments in diophantine geometry. He covers first-order logic; basic model theory; applications in algebra; dimension, rank, stability; classification theory; geometric model theory; and model theory and diophantine geometry.


Shingles is caused by the same virus as chicken pox (*Varicella zoster*). If you had chicken pox as a child, you are probably still host to the virus, which is kept in check by your immune system. But if you are old or stressed, or your immune system is compromised by another illness, the virus can strike. There is a theory that the decline in chicken pox infections in children (since immunisation became common) has meant that older people are less exposed to chicken pox, so their immune systems lose the power to fight the virus, leading to an increase in cases of shingles.

It is painful, as I know now for a fact. The book says “very painful”, and the doctor who diagnosed it on Monday said I must be stoical to have endured that without demanding medical attention. Perhaps, if I hadn’t been distracted by thinking that the bang on the head caused it, I might have seen a doctor earlier. The earliest symptom was a headache and inability to concentrate (a very serious issue for a mathematician).

Anyway, if I seemed to be less responsive than usual for the last couple of weeks, that is my excuse. And here’s hoping that there are no long-term effects!


Hagia Sophia was built in 537, and for nearly a thousand years it was the cathedral church of the Orthodox patriarch, and the main church of what was the eastern Roman empire until the fall of the western empire, after which it was simply the Roman empire. It was captured by the Franks during the Fourth Crusade, and was a Catholic church for half a century until it reverted to the Orthodox.

When Constantinople fell to the Turks in 1453, it was re-named Istanbul, and became the seat of the Ottoman sultans; Hagia Sophia was converted into a mosque. It remained as a mosque for nearly half a millennium until the fall of the Ottoman empire in 1923, when it became a museum. According to our guide Suleiman, both Muslims and Christians demanded the building, but the president of the new Turkish republic said, “No, it’s mine now; if you want to visit, you can buy a ticket like everyone else.” And a museum it remains.

The Turkish republic was established as a secular and democratic constitutional republic, and so it has remained. Although it has problems, it seems to have avoided the worst excesses of violence and repression which trouble so much of the Middle East, although there are now voices wishing to push it back towards a more religious and intolerant society.

But tolerance didn’t begin with the Turkish republic, as the picture shows. When the Muslims captured the Christian capital, it is not hard to believe that voices were raised demanding the demolition of this iconic Christian church. Instead, it was converted into a mosque with minimal changes. Mosaics showing the face of Jesus were covered up by calligraphy showing the names of Allah, and crosses were elaborated into geometric patterns; but a lot remains in place.

In particular, the Christians prayed towards Jerusalem, the Muslims towards Mecca; the directions are a few degrees different, but evidence of both remains, as the picture shows.

In what seems to me to be a related story, Suleiman showed us this:

The picture shows a marble slab carved with dolphins and a trident, symbols of the Greek god Poseidon. Suleiman said that three explanations have been offered for the presence of this pagan symbol. They may all have some truth to them; take your pick.

- The church was built in the very short period of five years, and a huge quantity of marble and other building material was required in that time. There was no time to quarry enough, so old Greek temples were plundered for material.
- The picture was re-interpreted as a Christian symbol. The letters of the Greek word for “fish” are the initial letters of the proclamation of faith, and the trident symbolises the Trinity.
- The Roman empire had been made Christian by Constantine (who also founded Constantinople) not very long before; many people were still pagan at heart, or at least respected pagan symbols as a kind of insurance policy.
