Who are these?

Who are these bright young group theorists in Durham in 1977?

Thanks to John O’Connor for the picture.

Conventions

I woke this morning thinking about something I learned in physics at school. A wire carrying a current in a magnetic field is acted on by a force (if it is not parallel to the field): in what direction does the force act? Dually, a wire in a closed circuit which is moved in a magnetic field has a current induced in it; which way does the current flow?

These questions can be answered, but the answers seem to depend on five somewhat arbitrary conventions.

  • First, we have to distinguish between left and right. Now certainly most people can do this; but this is the only point where we make contact with “reality”. Also, the variety of words for “left” and “right” in European languages suggests that the ability (and necessity) to distinguish does not go back to the roots of language. Admittedly, “right” seems a bit more uniform than “left”, and so is maybe older, perhaps because of its association with a different meaning of “right”, cf. the French droit. As has often been remarked, the words for “left” in French and Latin, “gauche” and “sinister”, do suggest some prejudice against left-handed people; maybe one day we will become so tolerant that these layers of meaning will disappear. (In my youth it was not uncommon for children to be “converted” to right-handedness, not always successfully.)
  • Next, we have to remember the association between “left” and “motor”, and between “right” and “dynamo” (the two problems mentioned in the opening paragraph). I am sure we learned a mnemonic for this at school, though I can’t clearly remember what it was. Perhaps it was an association with the idea that the right hand is more dynamic, another instance of the prejudice mentioned above.
  • Next, we have to remember the bijection between the thumb, index, and middle fingers and force (or motion), field, and current. The mnemonic was thuMb, First, and seCond finger (the capital letters standing for Motion, Field, and Current). (This sounds like six choices, but there are really only two, since an even permutation doesn’t change the convention.)
  • Next, we have to remember the direction of a magnetic field line. I remember diagrams showing the field lines leaving the north pole of a magnet and entering the south pole, which of course were brought to life by experiments with iron filings. But added confusion comes from the fact that the north magnetic pole of the earth is actually a south pole and vice versa. Easily explained: the north pole of a compass magnet is the one that points north, and since opposite poles attract, the earth’s pole in the far north must be a magnetic south pole.
  • Finally, we have to remember which way current flows. It flows in the opposite direction to the way the electrons flow. This convention, of course, was established before the discovery of electrons, and involved an arbitrary choice of which terminal of a battery is positive and which is negative.

Given all these conventions, to solve the first problem, hold the left hand so that the first finger points in the direction of the magnetic field and the second in the direction of the current; the thumb will indicate the direction of the force. For the second problem, use the right hand, with the thumb in the direction of motion and the first finger in the direction of the field; the second finger will indicate the direction of current flow.

So you have to remember five bits of information, or (at least) get an even number of them wrong.

There are various connections here. The right-hand rule is related to the right-hand rule for the vector product (cross product) of two three-dimensional vectors: if the first and second fingers of the right hand point in the directions of the two vectors, the thumb will point in the direction of the vector product. This rule is usually formulated as a screw rule: if we turn a right-handed screw in the direction from the first vector to the second, the screw will move forward in the direction of the product. This also seems to connect with reality. It is more natural to turn a screwdriver in one direction than the other, presumably because we use different muscles for the two actions. The more natural direction tightens a right-handed screw if done with the right hand. (Some people transfer the screwdriver to their left hand to undo a right-handed screw.)
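As a quick sanity check on all this (my own sketch, not part of the post): numpy’s cross product follows the right-hand convention, and the force on a straight current-carrying wire is the standard F = I L × B, which is what the left-hand mnemonic encodes. The numerical values are made up purely for illustration.

```python
import numpy as np

# Assume a right-handed frame; numpy's cross product follows the right-hand (screw) rule.
x, y, z = np.eye(3)
assert np.allclose(np.cross(x, y), z)      # x × y = z

# Motor problem: force on a straight wire is F = I L × B.
I = 2.0                  # current of 2 A (illustrative value)
L = 0.5 * y              # wire segment of length 0.5 m, current flowing along +y
B = 0.1 * x              # field of 0.1 T along +x
F = I * np.cross(L, B)
print(F)                 # [ 0.   0.  -0.1] -- the force points along -z
```

Pointing the first finger of the left hand along +x (the field) and the second finger along +y (the current), the thumb does indeed point along −z, so the mnemonic and the formula agree.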

Also, the left-right distinction connects with the direction of the magnetic field. In the northern hemisphere, if you stand facing the midday sun, the sun will rise on your left and set on your right, and the earth’s magnetic field will come from in front of you. (These things reverse in the southern hemisphere, and the tropics require special care; also, in the region within the polar circles, it may not be clear where the midday sun is.) Of course, if the earth’s magnetic field were to reverse, the force acting on a current-carrying wire in a magnetic field would not change!

To a mathematician, of course, the cross product (or exterior product) of two vectors from the real 3-dimensional vector space V lives in a different 3-dimensional space, the exterior square ∧²V. I believe that physicists do recognise the distinction, by calling the vectors of V polar vectors and those of the exterior square axial vectors. (I think that is right, but this is another of those things which you presumably just have to remember.) They distinguish them by the fact that they behave differently under transformations of the underlying space. So the convention here is actually a choice of identification between V and its exterior square.
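The different behaviour under transformations is easy to see numerically. A minimal sketch (mine, with a randomly chosen pair of vectors): for an orthogonal map R we have (Ra) × (Rb) = det(R)·R(a × b), so under a reflection, with det(R) = −1, the cross product picks up an extra sign that an ordinary vector of V would not.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(3), rng.standard_normal(3)

R = np.diag([1.0, 1.0, -1.0])      # a reflection: orthogonal, det(R) = -1

# Vectors of V transform as v -> R v; a cross product picks up the factor det(R).
lhs = np.cross(R @ a, R @ b)
rhs = np.linalg.det(R) * (R @ np.cross(a, b))
assert np.allclose(lhs, rhs)       # the sign flip that marks an axial vector
```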

There is no McLaughlin geometry

Congratulations to Patric Östergård and Leonard Soicher, who have just completed a big computation whose conclusion is “There is no McLaughlin geometry”. The run-time of the computation was about 250 core-years.

So what did they compute, and why does it matter?

The notions of strongly regular graph and partial geometry were introduced by R. C. Bose in 1963, though they had appeared in some form much earlier in Bose’s work. A graph is strongly regular, with parameters n, k, λ, μ, if it has n vertices, every vertex has k neighbours, and two distinct vertices have λ or μ common neighbours according as they are joined to one another or not. This is a class of graphs of great importance. There are a number of conditions satisfied by the parameters. One, known as the “absolute bound”, is difficult to state but is interesting in that only three graphs are known which attain it. These are the 5-cycle, the Schläfli graph associated with the 27 lines in a cubic surface, and the McLaughlin graph whose automorphism group is McLaughlin’s sporadic simple group. McLaughlin’s graph has parameters n = 275, k = 112, λ = 30, μ = 56.
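As an aside (a sketch of mine, not from the post), the standard formulas for the eigenvalues and multiplicities of a strongly regular graph are easy to evaluate for these parameters; the multiplicities come out as 252 and 22, and the absolute bound n ≤ f(f+3)/2 (f an eigenvalue multiplicity) is attained with equality by the smaller one.

```python
from math import isqrt

def srg_spectrum(n, k, lam, mu):
    """Eigenvalues r > s (other than k) of a strongly regular graph, and their multiplicities f, g."""
    disc = (lam - mu) ** 2 + 4 * (k - mu)
    sq = isqrt(disc)                                          # a perfect square here (sq = 30)
    r, s = (lam - mu + sq) / 2, (lam - mu - sq) / 2
    f = ((n - 1) - (2 * k + (n - 1) * (lam - mu)) / sq) / 2   # multiplicity of r
    g = (n - 1) - f                                           # multiplicity of s
    return r, s, f, g

n, k, lam, mu = 275, 112, 30, 56                              # McLaughlin parameters
r, s, f, g = srg_spectrum(n, k, lam, mu)
print(r, s, f, g)                                             # 2.0 -28.0 252.0 22.0
print(n == g * (g + 3) / 2)                                   # True: the absolute bound is attained
```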

A partial geometry with parameters s, t, α is a geometry of points and lines in which two points lie on at most one line, every line has s+1 points, every point lies on t+1 lines, and a point P not on a line L is collinear with exactly α points of L. The point graph of a partial geometry (whose vertices are the points, two vertices joined if they are collinear) is strongly regular (but I will leave the calculation of its parameters as an exercise). One interesting feature is that the dual of a partial geometry is also a partial geometry, and so also has a strongly regular point graph (the line graph of the original geometry). Partial geometries played an important role in the characterisation of classes of strongly regular graphs, especially those with smallest eigenvalue −2.

The McLaughlin graph is the unique strongly regular graph with its parameters, up to isomorphism. Also, if there were a partial geometry with parameters (4,27,2), its point graph would have the parameters of the McLaughlin graph, and so would be the McLaughlin graph. So a very natural question is: Is there a partial geometry with these parameters? In other terminology, is the McLaughlin graph geometric? (Note that the other non-trivial graph attaining the absolute bound, the Schläfli graph, is geometric.) That is what has just been resolved in the negative. I am not going to describe the methods (their paper is on the arXiv, here). Suffice it to say that a line in the geometry would necessarily be a clique (complete subgraph) of size 5 in the graph; the graph does have cliques of size 5, but too many of them to form a geometry, so the problem is to decide whether a subset of these cliques can be selected so that each edge lies in exactly one.
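For completeness, here is a quick check (mine; it uses the standard point-graph parameter formulas for a partial geometry, and so gives away the exercise above) that a pg(4,27,2) would indeed have the McLaughlin parameters.

```python
def pg_point_graph_params(s, t, alpha):
    """Strongly regular parameters of the point graph of a partial geometry pg(s, t, alpha)."""
    n = (s + 1) * (s * t + alpha) // alpha
    k = s * (t + 1)
    lam = (s - 1) + t * (alpha - 1)
    mu = alpha * (t + 1)
    return n, k, lam, mu

print(pg_point_graph_params(4, 27, 2))    # (275, 112, 30, 56) -- the McLaughlin parameters
```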

Patric talked about this computation, among other things, at our meeting on Discrete mathematics and big data. Of course, it happens not infrequently in discrete mathematics that the result of a huge computation is a single bit, as here. In such cases, we might hope that the answer would be “yes”, and the computer would produce an example of a geometry which could be checked. There is no way of checking the answer “no” other than repeating the computation. The same situation arose in the case of the projective plane of order 10.

How to draw the future

For Neill’s take on it, see this in the Guardian.

Private information retrieval

Yesterday I was at a meeting on this subject at Royal Holloway, organised by Simon Blackburn.

The subject is relatively young; it began in 1995 with a paper by Chor, Goldreich, Kushilevitz and Sudan. At the meeting, Alex Vardy gave us a very clear account of the theory (on which what is below is mostly based), before describing his own contribution.

Suppose Alice wants to download a file from an online database, without the database learning which file she is interested in. There is one simple and sure way to do this, though potentially rather expensive: she can download the entire database, and then privately select the file she wants. In fact, with a single server, no protocol that conceals her choice in the information-theoretic sense can do better.

So is this the end of the story? No, there are two approaches which have been adopted. One is computational: the database manager may be able to learn Alice’s choice, but the computation required to discover this is prohibitive. There are protocols which achieve this, but at the expense of being themselves computationally intensive. The other approach, which was the concern of the meeting, is information-theoretic. This makes the crucial assumption that the data is stored on a number (say k) of servers, which do not communicate with one another.

To simplify matters, we assume that the database consists of a binary string x of length n, and Alice wants the ith bit. Of course she probably wants a file whose size may be gigabytes, but (apart from re-scaling the resources required) the principle is the same.

To show that the goal is not impossible, here is a simple protocol for the case k = 2. Alice generates a random string u of length n. Let u′ be the result of changing the ith bit of u. Alice’s requests to the two servers are the strings u and u′. The servers return the two bits x·u and x·u′ (dot products mod 2); by adding them, Alice gets the ith bit of x. Since each server sees a random string from Alice, neither learns anything about her interests. (Of course, if the servers do communicate, they can do the same calculation as Alice; we must also assume that the communications between Alice and the servers are secure.)
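Here is a minimal working sketch of that two-server protocol (my own illustration, with a toy database); “adding” means XOR, i.e. arithmetic mod 2.

```python
import secrets

def two_server_pir(x, i):
    """Recover x[i] from two non-communicating servers, each holding a copy of x."""
    n = len(x)
    u = [secrets.randbelow(2) for _ in range(n)]   # Alice's random query string
    u_prime = u.copy()
    u_prime[i] ^= 1                                # the same string with the ith bit flipped

    def server_answer(query):                      # dot product of x with the query, mod 2
        return sum(xj & qj for xj, qj in zip(x, query)) % 2

    # Each server sees a uniformly random string, so learns nothing about i;
    # the two answers differ exactly in the term involving x[i].
    return server_answer(u) ^ server_answer(u_prime)

x = [1, 0, 1, 1, 0, 0, 1, 0]                       # a toy 8-bit database
assert all(two_server_pir(x, i) == x[i] for i in range(len(x)))
```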

This protocol is resource-intensive. We require each server to store the entire database; Alice must generate a string of the same length, and transmit two near-identical copies; and the servers must touch every item in the database to compute the dot products. Most research has focussed on reducing either the storage overhead or the communication cost. For example, the amount of communication required has been reduced to sub-polynomial if there are more than two servers.

Vardy’s work takes a different approach. This is based on the fact that multiple servers may use protocols which involve each server storing only part of the information. Typically it is required that full information can be reconstructed by accessing a prescribed number of servers (this may be done, for example, with a Reed–Solomon code), or that if one server fails, the information it holds can be recovered from the others, or perhaps a limited number of other servers.

His first example of using this idea for PIR showed how to run a 3-server protocol with a storage overhead of only 2 (equivalent to using 2 servers – a 3-server protocol might be better in terms of the amount of information needing to be transmitted). This involves breaking the data into four pieces, and storing these (or linear combinations of them) on eight servers, each of which has to store 1/4 of the entire data. The scheme is simply a binary linear code of length 8 and dimension 4, with a certain property which he called “3-server PIR”.

In general, a binary code with an s×m generator matrix G has the k-server PIR property if, for each column of G, there are k pairwise disjoint sets of coordinates such that, for each set, the sum of the columns of G with indices in that set is the given column. Such a code enables us to emulate any k-server PIR protocol with a database distributed over m servers, each storing 1/s of the original information (so with storage overhead m/s, which may be much smaller than k). Much classical coding theory (e.g. majority logic decoding) and combinatorics (Steiner systems, linear hypergraphs) appeared in the constructions he gave.
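To make the definition concrete, here is a brute-force check (my own toy example, not the code from the talk) that the little [3,2] binary code with generator matrix rows (1,0,1) and (0,1,1) has the 2-server PIR property, so it can emulate any 2-server protocol with storage overhead 3/2 rather than 2.

```python
from itertools import chain, combinations

def has_k_server_pir_property(G, k):
    """Brute-force check of the k-server PIR property for a small binary generator matrix G."""
    s, m = len(G), len(G[0])
    cols = [tuple(G[r][c] for r in range(s)) for c in range(m)]

    def xor_cols(S):                      # sum (mod 2) of the columns of G indexed by S
        return tuple(sum(cols[j][r] for j in S) % 2 for r in range(s))

    subsets = list(chain.from_iterable(combinations(range(m), sz)
                                       for sz in range(1, m + 1)))
    for target in cols:                   # for each column we need k pairwise disjoint
        good = [set(S) for S in subsets if xor_cols(S) == target]   # index sets summing to it
        if not any(all(A.isdisjoint(B) for A, B in combinations(choice, 2))
                   for choice in combinations(good, k)):
            return False
    return True

G = [[1, 0, 1],
     [0, 1, 1]]                           # columns e1, e2, e1 + e2
print(has_k_server_pir_property(G, 2))    # True: storage overhead m/s = 3/2 instead of 2
```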

I will describe the other talks more briefly. Vijay Kumar surveyed coding for distributed storage, dealing with both regenerating codes and codes with locality. Salim El Rouayheb continued Alex Vardy’s theme by allowing the possibility that some of the servers are “spies” and may collude to attempt to get information about Alice’s choice.

Finally, Tuvi Etzion (who is working with Simon on an EPSRC-funded project) talked about connections with network coding. He was dealing with multicast coding, where one sender has a collection of messages, and a number of receivers require all of the messages. He gave us an example to show that vector networks can beat scalar networks. (For a scalar network, the messages are taken from a finite field of order q, say, and a node can form a linear combination of its inputs to send on as its output. It is known that this is possible for sufficiently large fields. In a vector network, the symbols are t-tuples over a field of order r (and, for comparison with the scalar network, we take q = r^t); a node can apply linear maps to its inputs and sum the results to produce its output. He gave an example of a network where, for a given value of q = r^t, the vector network could succeed whereas the scalar network required a field of size about r^(t²/2).)

In the final part of his talk, he described connections between network coding and PIR, but I am afraid my shingles-affected brain was not really processing this information efficiently.

Huygens and Barrow, Newton and Hooke

Every now and again I take a random book down from my bookshelf and read it. Sometimes there is stuff worth talking about.

This time it is V. Arnol’d’s book Huygens and Barrow, Newton and Hooke. This was written for the 300th anniversary of the appearance of Newton’s Principia, and despite the title, Newton is the hero of the book. This is definitely not academic history. Arnol’d is clear about Newton’s defects: the way he treated Hooke, his unethical behaviour over the commission on the invention of calculus, and so on. But, nevertheless, Newton had outstanding geometric intuition, which Arnol’d values.

For example, according to Arnol’d, Leibniz proceeded formally. He knew that d(x+y) = dx+dy, and assumed that the same sort of rule would hold for multiplication (that is, d would be a ring homomorphism). He only realised he was wrong when he had worked out the unpleasant consequences of this assumption. Newton, who thought geometrically, saw the rectangle with small increments of both sides, showing immediately that d(xy) = x.(dy)+(dx).y.
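In symbols (a standard reconstruction of the rectangle picture, not a quotation from Arnol’d): the rectangle with sides x+dx and y+dy exceeds the one with sides x and y by

d(xy) = (x+dx)(y+dy) − xy = x dy + y dx + dx dy ≈ x dy + y dx,

the corner term dx dy being negligible compared with the other two.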

Indeed, he is often ready to give Newton the benefit of the doubt based on his geometric intuition. For example, how did Newton show that the solutions to the inverse square force law were conics? He showed that, for any initial conditions, there is a unique conic which fits those conditions. But doesn’t this assume a uniqueness theorem for the solution of the differential equation? No problem. Newton knew that the solutions depend smoothly on the initial conditions, and this fact obviates the need for a uniqueness theorem. (“This … can be proved very easily”, and Arnol’d gives a proof for our edification. The assumption is that either Newton knew this, or it was so obvious to him that he felt no need to give a proof.)

One of the most valuable parts of this thought-provoking book is that modern developments are discussed in some detail. For example, Newton calculated the evolvent of the “cubic parabola” y = x³. (Arnol’d prefers the term “evolvent” to the more usual “involute”, according to the translator. This is the curve obtained in the following way: take a thread of fixed length attached to the curve at one end and lying along the curve; then “unroll” it, so that at any stage the thread follows the curve for a while and is then tangent to it. The evolvent is the curve traced out by the other end of the thread.) The evolvent turns out to have the same equation as the discriminant of the icosahedral group H3. This leads us to the occurrence of five-fold symmetry in quasicrystals, which has been discovered in nature (these discoveries were quite recent when Arnol’d was writing).
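For the record (my addition, using the standard parametrisation rather than anything from the book): if the curve γ is parametrised by arc length s and the unwinding starts at γ(0), the free end of the thread traces the curve ι(s) = γ(s) − s γ′(s); at each stage the thread lies along γ as far as γ(s) and then runs back along the tangent there for a length s.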

Another highlight is the discussion of Newton’s remarkable proof that an algebraically integrable oval (one for which the area cut off by a secant is an algebraic function) cannot be smooth; at least one of the derivatives must have a singularity. This implies that the position of a planet following Kepler’s laws cannot be an algebraic function of time. There is much more too; Newton’s analysis went much further, and was a forerunner of the “impossibility proofs” in mathematics which flowered much later.

Young Researchers in Mathematics conference

YRM wordle

You still have a chance to register for this exciting conference: the deadline has been extended until 8 July. (You can read my report of the 2011 YRM here.)

Below is the invitation from the organisers.


Registration for the Young Researchers in Mathematics (YRM) 2016 conference closes on Friday 8th July 2016, so register now here!

YRM is the UK’s largest annual mathematics conference run for postgraduates by postgraduates, and this year it is being held in St Andrews, Scotland (about an hour from Edinburgh), from Monday 1st August to Thursday 4th August.

We have the pleasure to announce a public talk from Professor Michael Edgeworth McIntyre (Cambridge), plenary talks from Professor Peter Cameron (St Andrews / Queen Mary), Professor Clement Mouhot (Cambridge) and Dr Graeme Segal (Oxford) alongside keynote talks from distinguished academics in the following areas: algebra, analysis, dynamical systems, fluid mechanics, game theory, logic and set theory, mathematical biology, mathematical physics, number theory, numerical analysis, plasma theory, probability, statistics, and topology.

We also invite and encourage every attendee to contribute a 20 minute talk on their research and/or participate in the poster competition in the relaxed atmosphere of the conference.

For more information on speakers, scheduling (provisional schedule available) and how to register, please visit www.yrm2016.co.uk. You can also like YRM 2016 on Facebook and follow us on Twitter.
