Any real number *x* in [0,1] has a base 2 expansion. The density of 1s in this expansion (the limit of the ratio of the number of 1s in the first *n* digits to *n*) need not exist; but, according to the Strong Law of Large Numbers, for almost all *x*, the limit exists, and is 1/2.

So it might seem perverse to consider the set *X _{p}* of numbers for which the limit is equal to some fixed value *p* ≠ 1/2; by the Strong Law, any such set has measure zero. To distinguish between different sets of measure zero, we need the notion of Hausdorff dimension.

Briefly, given a subset of [0,1], take a cover of it by intervals of length at most δ, and take the sum of the *s*th powers of the lengths of the intervals. Now take the infimum of this quantity over all such coverings, and the limit of the result as δ tends to 0. The result is the *s*-dimensional Hausdorff measure of the set. There is a number *s*_{0} such that the measure is ∞ for *s* < *s*_{0}, and is 0 for *s* > *s*_{0}; for *s* = *s*_{0}, it may take any value. This *s*_{0} is the *Hausdorff dimension* of the set.
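For reference, the construction just described in symbols (*E* the subset, the *I _{i}* a countable cover by intervals):

```latex
\mathcal{H}^s_\delta(E) = \inf\Big\{\sum_i |I_i|^s : E \subseteq \bigcup_i I_i,\ |I_i| \le \delta\Big\},
\qquad
\mathcal{H}^s(E) = \lim_{\delta \to 0} \mathcal{H}^s_\delta(E),
\qquad
\dim_H(E) = \inf\{s : \mathcal{H}^s(E) = 0\}.
```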

Besicovitch calculated the Hausdorff dimension of the sets *X _{p}* defined earlier. It turns out to be equal to the *entropy function* *H*(*p*) = −*p* log_{2}*p* − (1−*p*) log_{2}(1−*p*).

The entropy function suggests a relation to binomial coefficients and Stirling’s formula, which is indeed involved in the proof. (The base 2 logarithm of the binomial coefficient {*n* choose *pn*} is asymptotically *nH*(*p*), as follows easily from Stirling’s approximation for *n*!.)
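Spelling out the parenthetical remark: keeping only the dominant factors of Stirling’s formula *n*! ≈ (*n*/*e*)^{n} gives, up to factors polynomial in *n*,

```latex
\binom{n}{pn} = \frac{n!}{(pn)!\,\big((1-p)n\big)!}
\approx \frac{(n/e)^n}{(pn/e)^{pn}\,\big((1-p)n/e\big)^{(1-p)n}}
= p^{-pn}(1-p)^{-(1-p)n} = 2^{nH(p)},
```

where *H*(*p*) = −*p* log_{2}*p* − (1−*p*) log_{2}(1−*p*); taking base 2 logarithms gives the claim.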

All this can be phrased in terms of the dynamics of the map 2*x* mod 1 on the unit interval (which acts as the left shift on the base 2 expansion), which suggests a good direction for generalisation, and suggests too that this generalisation will involve concepts from ergodic theory such as entropy and pressure. Most of the lecture was about this.

(Perhaps worth mentioning that all these sets are negligible in the sense of Baire category, according to which almost all real numbers have the property that the lim inf of the density is 0 and the lim sup is 1.)


At the weekend we had two beautiful spring days; bluebells are out in force, and rarer plants such as water avens were putting on good displays. Monday morning dawned sunny, but by lunchtime the weather had reverted briefly to winter. Since everything was in the Mathematical Institute this didn’t matter too much.

Here are notes on just a few of the talks that struck me.

The opening lecture was by **Simon Blackburn**, who talked about network coding (a subject I knew a bit about, thanks to working with Søren Riis and Max Gadouleau at Queen Mary; I wrote about it here). Briefly, in a network through which some physical commodity flows, there can be bottlenecks; but if the network carries data, then giving the nodes some simple processing power can allow faster transmission. The famous butterfly network illustrates this; I started my account with it, and Simon also began with it.

There are *n* packets of information which have to be sent through the network to a number of sinks. Unlike the simple butterfly, real networks are large and complicated, so a sink will not know which functions of the input packets it is receiving, and hence will not know how to recover the original messages. If (as we assume) the operations of intermediate nodes are linear, then all that the sinks know is that what they receive lies in the subspace of *F ^{n}* spanned by the input data. So we have to shift our viewpoint, and regard the information being sent as a subspace of a vector space, rather than a specific collection of vectors.

The number of *k*-dimensional subspaces of an *n*-dimensional vector space over a field of order *q* is a *Gaussian* or *q-binomial coefficient*. As my Advanced Combinatorics students know, this is a monic polynomial in *q* of degree *k*(*n*−*k*). So, for large *q*, it is about *q*^{k(n−k)}. This number is equal to the number of subspaces which are the row spaces of matrices of the form (*I*, *D*) where *I* is the identity matrix of order *k* and *D* an arbitrary *k*×(*n*−*k*) matrix. So we can restrict attention to these. Whatever the sink receives is a matrix with the same row space; by performing row operations it can reduce it to the above form and recover the data vector *D*.
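The counting is easy to check in code. A minimal sketch (the function name is mine) computing the Gaussian coefficient from its product formula:

```python
def q_binomial(n, k, q):
    """Gaussian coefficient [n choose k]_q: the number of k-dimensional
    subspaces of an n-dimensional vector space over a field of order q,
    computed as prod_{i<k} (q^(n-i) - 1) / (q^(i+1) - 1)."""
    num, den = 1, 1
    for i in range(k):
        num *= q ** (n - i) - 1
        den *= q ** (i + 1) - 1
    return num // den   # the quotient is always an integer

# 35 two-dimensional subspaces of a 4-dimensional space over GF(2);
# for large q the count is dominated by the leading term q^{k(n-k)}
print(q_binomial(4, 2, 2))                        # 35
print(q_binomial(4, 2, 10**6) // (10**6) ** 4)    # 1
```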

What if some nodes in the network are not working correctly? This could be either because of mechanical faults or the result of malicious hacking. If *t* nodes are affected, then the actual subspace received at the sink will be at distance *t* or less from the correct one, where the distance between two subspaces *U* and *W* is the maximum of dim(*U*)−dim(*U*∩*W*) and dim(*W*)−dim(*U*∩*W*). So we have a new setting in which to do error-correction, with subspaces replacing words and the above distance replacing Hamming distance. Simon explained the analogue of the Johnson bound for constant-dimension subspace codes, and some recent work of his with Tuvi Etzion and Jessica Claridge on this.
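This subspace distance is simple to compute from ranks, since dim(*U*∩*W*) = dim *U* + dim *W* − dim(*U*+*W*). A small sketch over GF(2), with spanning rows encoded as integer bitmasks (the function names are mine, not from the talk):

```python
def rank_gf2(rows):
    """Rank over GF(2) of a list of row vectors given as integer bitmasks,
    by Gaussian elimination keyed on the highest set bit."""
    basis = {}                      # highest bit -> reduced row
    for v in rows:
        while v:
            h = v.bit_length() - 1
            if h not in basis:
                basis[h] = v
                break
            v ^= basis[h]           # eliminate the leading bit
    return len(basis)

def subspace_distance(U, W):
    """max(dim U - dim(U∩W), dim W - dim(U∩W)) for row spaces U and W,
    using dim(U∩W) = dim U + dim W - dim(U+W)."""
    dU, dW = rank_gf2(U), rank_gf2(W)
    d_int = dU + dW - rank_gf2(U + W)   # U + W is spanned by all the rows
    return max(dU - d_int, dW - d_int)

# span{100, 010} versus span{100, 001}: intersection span{100}, distance 1
print(subspace_distance([0b100, 0b010], [0b100, 0b001]))   # 1
```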

**Kitty Meeks** talked about a new version of graph colouring, *interactive sum list colouring*. In regular graph colouring we have a fixed set of *k* colours, and ask for the smallest *k* for which a given graph can be coloured. List colouring generalises this, by giving each vertex a list of *k* colours from a possibly much larger set, and asking for the smallest *k* such that, whatever the lists, the graph can be coloured with each vertex receiving a colour from its own list. Sum list colouring extends this further, by allowing the lists given to vertices to be of different sizes, and minimizing the sum of the list sizes. The new generalisation takes this one step further, and is best thought of as a game between Alice, who is trying to colour a graph, and Bob, who is trying to thwart her. At each round Alice is allowed to ask for a new colour for a particular vertex, different from those already provided by Bob for that vertex; each request has a cost of 1, and she is trying to minimise (and Bob to maximise) the total cost. The corresponding colouring number is the cost if both Alice and Bob play optimally.

Let scn(*G*) and iscn(*G*) be the minimum sums required in the two cases. An easy argument shows that scn(*G*) is at most the sum of the numbers of vertices and edges of *G*; we call *G* *sc-greedy* if this bound is met. Many examples of sc-greedy graphs are known.

Not too much is known yet about iscn(*G*). It is an interesting exercise to show that the three-vertex path has iscn equal to 4. (Ask for a colour for each vertex first, and then consider possible cases.) This graph has scn equal to 5 (it is sc-greedy).

It is known that iscn(*G*) does not exceed scn(*G*), and is equal to it in the case of complete graphs; they (Kitty and her collaborator Marthe Bonamy) have shown that it is strictly smaller for other graphs. Indeed, the difference between the two numbers is at least (*n*−ω(*G*))/2, where ω(*G*) is the clique number of *G*.

The first day concluded with a talk by **Max Gadouleau**. As I said before, Max has done some good work on network coding, but in this talk he stepped back. Networks are studied in many areas of mathematics and science, and there is inevitably a certain amount of multiple discovery and of calling the same thing by distinct names.

Max actually talked about *discrete dynamical systems*. A dynamical system is just a set with a function on it, and we are interested in iterating the function. If the set has *N* elements, there are *N ^{N}* functions, and the long-term behaviour is simple: there is a subset of the points on which the function acts as a permutation (the *periodic points*), and every point is mapped into this set after sufficiently many iterations.

The connection with networks arises thus. We are particularly interested in the case where there is a set *V* of *n* *nodes*, each of which can be in one of a finite number *q* of *states*; so we have *N* = *q ^{n}*, and the function updates the state of each node according to the states of certain other nodes. The *interaction graph* of the system has an arc from *u* to *v* if the new state of *v* depends on the current state of *u*.

Apparently this has real meaning. Fixed points in a gene regulation system may, for example, correspond to different modes of functioning of the cell, and may include cancer.

One thing that one can do now is to ask about dynamical systems with a given interaction graph. Another is to fix the graph and vary *q*, or restrict the kind of functions allowed (linear, monotone, threshold, etc.). Yet again we could have different rules for the updates. Max considered only the case where all vertices update simultaneously.

One of the few things known is the analogue of the Singleton bound in coding theory: the number of fixed points is at most *q*^{τ}, where τ is the size of the smallest *feedback set* (a set whose complement contains no directed cycles). Max gave exact values for the size of the image and the number of periodic points in terms of graph-theoretic parameters of the interaction graph, such as the maximum number of independent arcs, or the maximum number of vertices covered by disjoint cycles. (One of these equalities holds only for *q* > 2, and is just a bound in the case *q* = 2.)

The next day, **David Bevan** gave the first of two nice talks on permutation patterns. (I have discussed these here also.) Briefly, a short permutation π is involved in a longer permutation σ if, when we remove some points and push everything down to the shorter interval, we obtain π. (In my preferred way of thinking about it, a permutation is a pair of total orders on a set; involvement is just the “induced substructure” relation.) A permutation of length *n* is *k-prolific* if every permutation of length *n*−*k* is involved in it. They (David and co-authors Cheyne Homberger and Bridget Tenner) have found that *k*-prolific permutations of length *n* exist if and only if *n* ≥ *k*^{2}/2+2*k*+1. He outlined the proof, which involves an interesting detour through packings of diamond-shaped tiles in the plane.

**Anders Claesson** gave us a number of different proofs that there are the same number of subsets of even and of odd size of an *n*-set, for non-zero *n*. The proofs, of increasing complexity, involved manipulations of exponential generating functions, and in particular the use of sign-reversing involutions. He went on to further applications, including an explicit count for interval orders. He ended up by remarking that species form a semiring, which can be extended to a ring of *virtual species* in the same way that the natural numbers are extended to the integers, and that these give a way of handling “negative objects”.
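The simplest statement of the fact being proved (though the talk’s point was the proofs, not the statement) is one application of the binomial theorem:

```latex
\sum_{k \text{ even}} \binom{n}{k} - \sum_{k \text{ odd}} \binom{n}{k}
= \sum_{k=0}^{n} (-1)^k \binom{n}{k} = (1-1)^n = 0 \qquad (n \ge 1).
```

A sign-reversing involution proves the same identity bijectively: toggling the membership of the element 1 pairs each even-sized subset with an odd-sized one.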

**Robert Brignall** talked about the notorious problem of counting permutations excluding 1324. He quoted Doron Zeilberger: “Even God doesn’t know the value of Av(1324)_{1000}”. Not even the exponential constant is known, although simulations suggest it is around 11.6. Robert and his co-authors (David Bevan, Andrew Elvey Price and Jay Pantone) have improved the upper and lower bounds for this constant to 13.5 and 10.24 respectively. The ingenious proof encloses the diagram of such a permutation in a staircase made up of dominoes with similar structure, but is difficult to describe without pictures!

The conference closed with a talk by **Laszlo Babai**. Laci is in St Andrews for a week, and will be giving five talks, including two ninety-minute talks at the British Colloquium on Theoretical Computer Science today and tomorrow. So I will defer my comments on his talks for a while …


My second talk was on sum-free sets, and was rather more discursive, so here is a summary.

A set of natural numbers is *k-AP-free* if it contains no *k*-term arithmetic progression, and is *sum-free* if it contains no solution to *x*+*y* = *z*.

Three big theorems in additive combinatorics are:

**Theorem 1 (van der Waerden)** For a natural number *k* > 2, the set **N** cannot be partitioned into finitely many *k*-AP-free sets.

**Theorem 2 (Roth–Szemerédi)** For a natural number *k* > 2, a *k*-AP-free set has density zero.

**Theorem 3 (Schur)** The set **N** cannot be partitioned into finitely many sum-free sets.

At first sight, one would like a theorem to complete the pattern, asserting that a sum-free set has density zero. But this is false, since the set of all odd numbers is sum-free and has density 1/2. What follows could be motivated as an attempt to find a replacement for the missing fourth theorem.

The *Cantor space* *C* can be represented as the set of all (countable) sequences of zeros and ones. It carries the structure of a complete metric space (the distance between two sequences is a monotonic decreasing function of the index of the first position where they differ), and of a probability space (corresponding to a countable sequence of independent tosses of a fair coin).

We define a bijection between Cantor space and the set **S** of all sum-free subsets of **N**. Given a sequence *x* in *C*, we construct *S* as follows:

Consider the natural numbers in turn. When considering *n*, if *n* is the sum of two elements already put in *S*, then of course *n* is not in *S*. Otherwise, look at the first unused element of *x*; if it is 1, then put *n* into *S*, otherwise, leave *n* out of *S*. Delete this element of the sequence and continue.

For example, suppose that *x* = 10110…

- The first element of *x* is 1, so 1 ∈ *S*.
- 2 = 1+1, so 2 is not in *S*.
- 3 is not in *S*+*S*; the next element of *x* is 0, so 3 is not in *S*.
- 4 is not in *S*+*S*; the next element of *x* is 1, so 4 is in *S*.
- 5 = 1+4, so 5 is not in *S*.
- 6 is not in *S*+*S*; the next element of *x* is 1, so 6 is in *S*.
- …

So *S* = {1,4,6,…}.
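The construction is easy to mechanise; here is a minimal sketch (the function name is mine) which reproduces the worked example above:

```python
def sumfree_from_bits(bits):
    """Greedy bijection from a 0/1 sequence to a sum-free set.

    Numbers that are already a sum of two chosen elements (repeats
    allowed) are excluded without consuming a bit; every other number
    consumes the next bit of the sequence, which decides membership."""
    S, n = [], 0
    for bit in bits:
        n += 1
        while n in {a + b for a in S for b in S}:   # skip forced exclusions
            n += 1
        if bit == 1:
            S.append(n)
    return S

print(sumfree_from_bits([1, 0, 1, 1, 0]))   # [1, 4, 6]
```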

The notion of “almost all” in a complete metric space is that of a *residual set*; a set is residual if it contains a countable intersection of dense open sets. Thus, residual sets are non-empty (by the Baire Category Theorem); any countable collection of residual sets has non-empty intersection; a residual set meets every non-empty open set; and so on.

A sum-free set is called *sf-universal* if everything which is not forbidden actually occurs. Precisely, *S* is sf-universal if, for every *n* and every subset *A* of {1,…,*n*}, one of the following occurs:

- there are *i*, *j* ∈ *A* with *i* < *j* and *j*−*i* ∈ *S*;
- there exists *N* such that *S* ∩ [*N*+1,…,*N*+*n*] = *N*+*A*,

where *N*+*A* = {*N*+*a*:*a*∈*A*}.

**Theorem** The set of sf-universal sets is residual in **S**.

**Theorem (Schoen)** An sf-universal set has density zero.

Thus our “missing fourth theorem” asserts that almost all sum-free sets (in the sense of Baire category) have density zero.

There is a nice application. Let *S* be an arbitrary subset of **N**. We define the *Cayley graph* Cay(**Z**,*S*) to have vertex set **Z**, with *x* joined to *y* if and only if |*y−x*|∈*S*. Note that this graph admits the group **Z** acting as a shift automorphism on the vertex set.

**Theorem**

- Cay(**Z**,*S*) is triangle-free if and only if *S* is sum-free.
- Cay(**Z**,*S*) is isomorphic to Henson’s universal homogeneous triangle-free graph if and only if *S* is sf-universal.
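The first equivalence is easy to check on a finite window of **Z** (the little checker below is mine, not from the talk). A triangle *x* < *y* < *z* in Cay(**Z**,*S*) means that *y*−*x*, *z*−*y* and *z*−*x* = (*y*−*x*)+(*z*−*y*) all lie in *S*, i.e. that *S* contains a solution of *a*+*b* = *c*:

```python
def window_triangles(S, limit):
    """All triangles of Cay(Z, S) with vertices in {0, ..., limit-1},
    where x ~ y iff |y - x| is in S."""
    Sset = set(S)
    return [(x, y, z)
            for x in range(limit)
            for y in range(x + 1, limit)
            for z in range(y + 1, limit)
            if y - x in Sset and z - y in Sset and z - x in Sset]

print(window_triangles({1, 4, 6}, 12))   # []  (the set {1,4,6} is sum-free)
print(window_triangles({1, 2, 3}, 4))    # non-empty: 1+1=2 and 1+2=3
```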

So Henson’s graph has uncountably many non-conjugate shift automorphisms.

In a probability space, large sets are those which have measure 1, that is, complements of null sets. Just as for Baire category, these have the properties one would expect: the intersection of countably many sets of measure 1 has measure 1; a set of measure 1 intersects every set of positive measure; and so on.

The first surprise is that measure and category give entirely different answers to what a typical set looks like:

**Conjecture** The set of sf-universal sets has measure zero.

Although this is not proved yet (to my knowledge), it is certain that this set does not have measure 1.

Given the measure on **S**, and our interest in density, it is natural to ask about the density of a random sum-free set. This can be investigated empirically by computing many large sum-free sets and plotting their density. Here is the rather surprising result.

The spike on the right corresponds to density 1/4 and is explained by the following theorem.

**Theorem**

- The probability that a random sum-free set consists entirely of odd numbers is about 0.218 (in particular is non-zero).
- Conditioned on this, the density of a random sum-free set is almost surely 1/4.

A word about this theorem. Suppose that we have tossed the coin many times and found no even numbers. Then we have chosen the odd numbers approximately independently, so the next even number is very likely to be excluded as the sum of two odd numbers, whereas the next odd number is still completely free. So the probability of no even numbers is not much less than the probability of no even numbers in the first *N* coin tosses. Moreover, since the odd numbers are approximately independent, we expect to see about half of them, and so about a quarter of all numbers.
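This heuristic is easy to explore numerically. Here is a rough sketch of the random model (function name, seed and cutoff are mine; this is an illustration, not the original computation):

```python
import random

def random_sumfree(limit, rng):
    """One sample: scan 1, 2, ..., limit; a number that is already a
    sum of two chosen elements is excluded automatically, and every
    other number is kept with probability 1/2 (a fair coin toss)."""
    S, sums = [], set()
    for n in range(1, limit + 1):
        if n in sums:
            continue                      # forced out, no coin used
        if rng.random() < 0.5:
            sums.update(a + n for a in S)
            sums.add(2 * n)
            S.append(n)
    return S

S = random_sumfree(2000, random.Random(0))
print(len(S) / 2000)                 # empirical density of one sample
print(all(n % 2 == 1 for n in S))    # did this sample stay inside the odds?
```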

Other pieces can also be identified. Let **Z**/*n***Z** denote the integers modulo *n*. We can define the notion of a sum-free set in **Z**/*n***Z** in the obvious way. Such a sum-free set *T* is said to be *complete* if, for every *z* in (**Z**/*n***Z**)\*T*, there exist *x,y* in *T* such that *x*+*y* = *z* in **Z**/*n***Z**. Now the theorem above extends as follows. Let **S**(*n,T*) denote the set of all sum-free sets which are contained in the union of the congruence classes *t* (mod *n*) for *t*∈*T*.

**Theorem** Let *T* be a sum-free set in **Z**/*n***Z**.

- The probability of **S**(*n,T*) is non-zero if and only if *T* is complete.
- If *T* is complete then, conditioned on *S* ∈ **S**(*n,T*), the density of *S* is almost surely |*T*|/2*n*.

In the figure it is possible to see “spectral lines” corresponding to the complete modular sum-free sets {2,3} (mod 5) and {1,4} (mod 5) (at density 1/5) and {3,4,5} (mod 8) and {1,4,7} (mod 8) (at density 3/16).

The density spectrum appears to be discrete above 1/6, and there is some evidence that this is so. However, a recent paper of Haviv and Levy shows the following result.

**Theorem** The values of |*T*|/2*n* for complete sum-free sets *T* in **Z**/*n***Z** are dense in [0,1/6].

However, this is not the end of the story. Neil Calkin and I showed that the event that 2 is the only even number in *S* has positive (though rather small) probability. More generally,

**Theorem** Let *A* be a finite set and *T* a complete sum-free set modulo *n*. Then the event *A* ⊆ *S* ⊆ *A*∪(*T* (mod *n*)) has positive probability.

**Question** Is it true that a random sum-free set almost surely has a density? Is it further true that the density spectrum is discrete above 1/6 but has a continuous part below 1/6? Is it also true that the probability that a sum-free set has zero density is 0?

In this connection, consider the following construction of Calkin and Erdős. Let α be an irrational number, and define *S*(α) to be the set of natural numbers *n* for which the fractional part of *n*α lies between 1/3 and 2/3. It is easy to see that *S*(α) is sum-free and has density 1/3. However this does not resolve the question, since the event *S* ⊆ *S*(α) for some α has probability zero.
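The construction is concrete enough to verify directly (the check below is mine; the construction is Calkin and Erdős’s):

```python
from math import sqrt

def S_alpha(alpha, limit):
    """Calkin–Erdős set: n <= limit with frac(n * alpha) in (1/3, 2/3)."""
    return [n for n in range(1, limit + 1) if 1/3 < (n * alpha) % 1 < 2/3]

S = S_alpha(sqrt(2), 3000)
Sset = set(S)
# Sum-free: two fractional parts in (1/3, 2/3) add to something in
# (2/3, 4/3) mod 1, which avoids (1/3, 2/3) entirely.
assert not any(a + b in Sset for a in S for b in S)
print(len(S) / 3000)   # empirically close to the density 1/3
```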

However, there might be other examples along these lines …

A sequence *x* is *ultimately periodic* if there exist positive integers *n* and *k* such that *x*_{i+k} = *x _{i}* for all *i* > *n*.

It is easy to see that, in our bijection from sequences to sum-free sets, a sequence which maps to an ultimately periodic sum-free set must itself be ultimately periodic. What about the converse?

**Question** Is it true that the image of an ultimately periodic sequence is an ultimately periodic sum-free set?

After some investigation, Neil Calkin and I conjectured that the answer is “no”. There are some ultimately periodic sequences (the simplest being 01010 repeated) for which no sign of periodicity has been detected despite computing nearly a million terms. These sets are fascinating, and seem sometimes to exhibit an “almost periodic” structure; they settle into a period, which is then broken and a longer period established, and so on.


We went to East Sands where the children wrote messages on driftwood and threw them into the sea. Looking for replies, they found this:

Is it a message from departed spirits, or did the ancient Romans come here?


It was written in rather a hurry, and it shows. But when I took my copy down from the shelf last week, I found in it a few notes about possible changes that might be made in a second edition, if there were ever to be one. Needless to say, there was no second edition, and now we have reached the point where the subject has moved on in some directions (much less so in others) and rather than a second edition there would need to be a complete rewrite.

I have returned to the topic on several occasions, most recently for the proceedings of *Groups St Andrews 2005*. I have not, however, produced a monograph-length account, and maybe never will. In lieu of such a thing, I have decided to post my rather brief notes for the revision with the oddments on the Lecture Notes page. If you look at them, you may be disappointed (as I was) at how modest my ambitions were back then (but it was only shortly after the original was published).

In brief: an oligomorphic group should probably be regarded as a species, and the numbers of orbits on ordered *n*-tuples of distinct elements and on *n*-element subsets of the domain coincide with the sequences counting, respectively, labelled and unlabelled structures in the species. Moreover, the “modified cycle index” of an oligomorphic group is the cycle index of the species.

What distinguishes the species of oligomorphic groups from the general case? In terms of the counting sequences above, two special features of the group case are that the sequences grow *rapidly* and *smoothly*. On the first point, the best result is still Macpherson’s Theorem, according to which, if *G* is *primitive* (that is, it is transitive and preserves no non-trivial equivalence relation), then either the number of orbits on *n*-sets is 1 for all *n* (the group is *highly homogeneous*), or the number grows at least exponentially. What is lacking to date is a proof of the conjecture that the exponential constant cannot be smaller than 2. The best known result is by Francesca Merola, giving a lower bound of about 1.324.

On smoothness, it is known that the sequence counting orbits on *n*-sets is the Hilbert series of a graded algebra, and the most significant result is that of Maurice Pouzet: if the group *G* has no finite orbits, then the algebra has no zero-divisors. (It takes a little imagination to see this as a smoothness result for the rate of growth, but it can be done.)

So these are some of the additions which I would now put on a list, in the hope that sometime in the future I might get round to it …


Here is a small model of a large cardinal, from the Albertina Gallery in Vienna. His existence shows that ZFC is consistent.


The word “myth” in popular usage has the connotation “untruth”, though academics who are concerned with it are quick to point out that this is not necessarily the case; rather, myth is something which shapes our perception of ourselves and our place in the world. I think this needs to be discussed, because some people have maintained that myth embodies a different sort of truth, and in our post-truth society we need to be very careful about such claims.

Two of Britain’s most famous Christians in the twentieth century illustrate this. J. R. R. Tolkien and C. S. Lewis were members of the Inklings. In the 1930s, Tolkien, a lifelong Catholic Christian, persuaded Lewis (brought up as a Christian but having abandoned his childhood faith) to return to the fold; and, indeed, Lewis became a celebrated Christian apologist, but from a Protestant perspective, rather to Tolkien’s dismay. The actual conversion is discussed in some detail in Humphrey Carpenter’s biography of the Inklings, and indeed in a poem by Tolkien. Both men were deeply attracted to tales of the North, especially the Icelandic sagas, which they read in the original language. Lewis saw the Christian story as a myth comparable to the dying and reviving god Balder in Norse legend, and could not see how this could help him now: he described it as “lies breathed through silver”. Tolkien explained that it was a myth which happened to be true, and not a lie.

Tolkien felt this very strongly. The philologist Max Müller said “Mythology is a disease of language”; Tolkien was so offended that he half-seriously proposed the revision “Language is a disease of mythology”. Shades of *mythos* and *logos* in early Greek culture.

But in any case, a myth, true or false in objective reality, must have the power to shape our sense of self and our lives, and the world around us.

I am absolutely sure that there has been a golden age in science, though I would be less happy putting precise dates on it than Holden’s interviewees.

It is a fact (and figures exist to prove it) that, as universities have grown in size, the ratio of administrators to academics has also grown. *Cameron’s Law* says that the number of administrators grows faster than linearly in the size of the university. (This has the interesting corollary that, when a university exceeds a certain size, it will have more administrators than the total size of the university. This is not as crazy as it appears. Already there are university administrators who draw a salary for part-time work, though this is more common in commerce or public life than in academia.)

As this happens, more and more of the administrators have no experience of doing research, and often they tend to assume that traditional management methods are capable of increasing our productivity. David Colquhoun has this to say:

I have news for HR people. They are called experiments because we don’t know whether they will work. If they don’t work that’s not a reason to fire anyone. No manager can make an experiment come out as they wish. The fact of the matter is that it’s impossible to manage research. If you want real innovation you have to tolerate lots and lots of failure. “Performance management” is an oxymoron. Get used to it.

A few scientists still live as if the academic revolution had not happened. They support themselves by working at another job, or are supported by a partner, so that they have time to think. Two such in Britain in recent times are Julian Barbour and James Lovelock. Barbour has thought more deeply than most about the history of mechanics since ancient times, resulting in original views about the structure of the universe, including the idea that time is not basic but derives from a “best-matching” distance between spatial configurations. Lovelock, who early in his career invented a sensitive detector of trace chemicals in the atmosphere, is well-known for his Gaia theory, according to which the whole earth is a self-regulating ecosystem; this has been very productive in leading to the study of cycles of various elements in our environment. Both of these people cite the independence from the traditional research/academic environment as an important factor in their success.

In the case of my own career, nobody ever told me what to work on, even as a research student (though advice would have been available had I wanted or needed it). I think that mathematicians are almost certainly closer to the golden age than the biomedical scientists Holden interviewed. The course of mathematics research is more unpredictable than most of science, but in addition I now have many correspondents around the world who send me interesting problems, to which I can occasionally make a contribution (and of course these arrive out of the blue).

Thetis Blacker said, in recounting her dream of “Mr Goad and the Cathedral” in *Pilgrimage of Dreams*, that “Eternity is always now, [but] now isn’t always eternity”. It is clear to me that the Golden Age still exists for those of us fortunate enough to live in it, and I do count myself as one.

At present, I am employed on a half-time contract at the University of St Andrews. Although the contract assumes I work an 18-hour week, the School of Mathematics and Statistics finds it more convenient for me to work full-time in the second semester (teaching, supervising projects, administration, as well as keeping research going), and to have the first semester free for research visits. It is like having a six-month sabbatical every year. And as to the effect, 2017 will probably be the best year in my entire career for publications in top journals, the kind of thing that the administrators no doubt want; while much of this is down to my outstanding coauthors, in most cases they are in different universities, so I can claim it as my own work for the REF. There is absolutely no doubt that the comparative freedom I now enjoy has played a big part in this.

For the paper, Kerry Holden interviewed 45 academics, from early career researcher to head of the faculty, in the biomedical faculty in a London university. Neither the university nor any of the academics is identified.

Holden identifies five losses felt by the scientists: intellectual freedom, time for thinking deeply about the problem, a proper apprenticeship in science (replaced by PhD time subject to the same deadlines and pressures to perform as all other academics feel), serendipity (eroded by modern management techniques), and overall the idea that science is a calling which is pursued for love.

They mostly talked about a lost Golden Age of scientific research in the past. Holden discounts the truth of this, since different interviewees identified different past periods as the Golden Age. I think this conclusion is wrong; and indeed there is a simple alternative explanation. If things are going downhill fast, then any past period (even one quite recent) will have the characteristics of a Golden Age compared to the unsatisfactory present.

Holden, as noted above, says that a myth doesn’t have to be false (though the truth of this one is dismissed). It shapes academics’ perception of self and its relation to the University. In particular, it is claimed that although they all claim that decline is occurring, the myth “instils a willingness perhaps to forgo action that might address the inequality and increasing precariousness of scientific careers”. Again, there is an alternative explanation. Teaching is dismissed in the paper as just something else to distract academics from research. This is obviously wrong; teaching is important to us, and we tend not to take strike action because the people who would be hurt by it would be our students, to whom we feel a responsibility.

Holden finds that the Golden Age myth still works to recruit and retain committed scientists, and so supports the imposition of managerial techniques from above (since scientists’ idealism will lead them to swallow their anger and get on with the job). I wish I could be so sure.


Last week, in the second week of Spring break in St Andrews, I was in Vienna, giving a course of lectures to the PhD students, at the invitation of Tomack Gilmore, a Queen Mary undergraduate now finishing his PhD with Christian Krattenthaler at the University of Vienna.

The lectures were titled “Permutation groups and transformation semigroups”, but didn’t cover everything that can be said about those topics; my aim was to present an exposition of some of the work I have been doing with João Araújo in Lisbon for the last nearly ten years. I lectured twice a day for five days, and so the course naturally fell into five parts. I assumed some knowledge of group theory, but the first day was an introduction to semigroup theory, including both standard material such as regularity and idempotent generation, the analogues of Cayley’s theorem for semigroups and inverse semigroups (the latter is the Vagner–Preston theorem), and the condition for a transformation to have a power which is an idempotent of the same rank, as well as odd extras such as the results of Laradji and Umar constructing inverse semigroups whose orders are central binomial coefficients and Bell and Catalan numbers. The second day was an introduction to the theory of permutation groups, with the basic reduction theorems as far as the O’Nan–Scott theorem, and a very brief discussion of multiply transitive groups.

The third and fourth days were the heart of the course. Day 3 covered synchronization: the Černý conjecture, the characterisation of non-synchronizing semigroups in terms of graph homomorphisms, and the definition and basic properties of synchronizing permutation groups. Day 4 concerned conditions on a permutation group *G* which guarantee that, for any map *s* of given rank *k*, the semigroup generated by *G* and *s*, with the elements of *G* removed, is regular or is idempotent-generated. The condition for the first is the *k*-universal transversal property of *G* (that, given any *k*-set *A* and *k*-partition *P*, there is an element of *G* mapping *A* to a transversal for *P*). This condition is necessary for idempotent-generation of the semigroup (it is necessary and sufficient for the existence of an idempotent with rank equal to that of *s*), but not sufficient. In general we do not have a combinatorial equivalent of idempotent generation, but in the case *k* = 2 we do: it is the *road closure property*, which I have discussed here before.
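For small groups the *k*-universal transversal property can be checked by brute force. The following sketch is my own illustration (the function names are mine, not from the course notes): it closes a set of generators to the full group, then tests every *k*-set against every *k*-partition.

```python
# Brute-force test of the k-universal transversal property for a small
# permutation group, given by a list of generators (permutations as tuples).
from itertools import combinations

def generate_group(gens, n):
    """Close a set of permutations of {0,...,n-1} under composition."""
    identity = tuple(range(n))
    group, frontier = {identity}, {identity}
    while frontier:
        new = set()
        for g in frontier:
            for h in gens:
                gh = tuple(g[h[x]] for x in range(n))
                if gh not in group:
                    group.add(gh)
                    new.add(gh)
        frontier = new
    return group

def k_partitions(elements, k):
    """All partitions of `elements` (a list) into exactly k nonempty blocks."""
    if k == 1:
        yield [list(elements)]
        return
    if len(elements) == k:
        yield [[e] for e in elements]
        return
    first, rest = elements[0], elements[1:]
    # first element in a block of its own
    for p in k_partitions(rest, k - 1):
        yield [[first]] + p
    # first element added to an existing block
    for p in k_partitions(rest, k):
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

def is_transversal(image, partition):
    """Does `image` meet every block of `partition` exactly once?"""
    return all(len(set(image) & set(block)) == 1 for block in partition)

def has_k_ut(gens, n, k):
    """Does the group generated by `gens` have the k-universal transversal property?"""
    group = generate_group(gens, n)
    for A in combinations(range(n), k):
        for P in k_partitions(list(range(n)), k):
            if not any(is_transversal([g[a] for a in A], P) for g in group):
                return False
    return True

# S_4 (generated by a transposition and a 4-cycle) is 2-transitive,
# so it certainly has the 2-ut property; the cyclic group C_4 does not.
print(has_k_ut([(1, 0, 2, 3), (1, 2, 3, 0)], 4, 2))  # True
print(has_k_ut([(1, 2, 3, 0)], 4, 2))                # False
```

For C₄ the failure is easy to see by hand: the 2-set {0,2} is carried by the group only to {0,2} and {1,3}, neither of which is a transversal for the partition {{0,2},{1,3}}.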

The final day dealt with miscellaneous topics: automorphisms of transformation semigroups, lengths of chains of subgroups or sub-semigroups, and separating permutation groups. The notes also include a bibliography of books and papers as well as a number of open problems. (The notes are here.)

Doing all this in a week would have been challenging enough, but in addition I gave a 90-minute seminar talk on orbital polynomials, a colloquium on the random graph, and a “junior colloquium” on the ADE affair. So quite a busy week, and I am afraid that other jobs had to be put on hold temporarily.

With all this there was not a great deal of time to see Vienna. Though it was early spring, the weather at the weekend was not kind, cold and rainy, though during the week it was better, and the blossom had come out by the time we left.

So much of the sightseeing was indoors. I will mention just the most astonishing thing I saw. Among other galleries, we went to the Academy of Fine Arts (part of the outside is shown above). The picture gallery in the Academy has the famous triptych on the Last Judgment by Hieronymus Bosch. We think of his work as being mostly either of people doing unspeakable things to each other in gardens, or demons doing unspeakable things to people in Hell; this one certainly has plenty of that, with toads frying sinners in large frying pans or stirring them up in cooking pots. But the real surprise for me was on the back. The triptych was hung with the side panels folded very slightly forward so that, with care, you could see the painting that would be shown if the panels were closed. This was completely different. The left-hand panel showed St James on a pilgrimage (probably to Santiago de Compostela), taking up most of the panel. He was dressed in dull blue robes, the mountainous landscape behind was in very subdued blue-grey, and the image, an astonishing portrait (of whom, I don’t know), looked forward to a later period of painting. The right panel depicted a saint from Ghent giving alms to the poor, in even more subdued style.

The Danube has four branches in Vienna, three of which I saw: the regular river, the old, the new, and the Donaukanal. The last of these is not a canal, nor an open sewer (the meaning of German *Kanal* according to Wikipedia), but a branch which has always existed and was “controlled” in 1598. Unlike the main river, it flows near the centre of town. I don’t know the story of the Alte Donau, which seems to be disconnected now and to consist of a series of lakes. The Neue Donau was built for flood control after a serious flood in 1954, though it took a while to reach the decision to build it; work began in 1972 and was finished in 1988. Between the Donau and Neue Donau is a long, straight, and very thin island (no doubt crowded in the summer, but when we were there everything was closed and only a very few joggers and cyclists were to be seen; everything had a sad look).

]]>

The picture is of Tommy Flowers, who built Colossus, the first programmable electronic computer. It was built to break the German High Command’s Fish cipher (Sägefisch) in the second world war; its construction would be regarded as heroic or despicable depending on which side you were on, I suppose. Nowadays, some similar uses of computers completely lack the heroic element: more on this below.

The picture hangs in the headquarters of the Institution of Engineering and Technology (formerly the IEE) in Savoy Place in London, just off the Embankment near Waterloo Bridge. We were there for a public lecture, the Stevenson Science Lecture, put on by Royal Holloway University of London. The series has been running for the best part of a century; but this was the first time it had been held in London, despite the University’s name (it is actually in Surrey).

The title of the lecture was “Should I have just clicked on that?” – quite scary when emails came about changes in the timing – and it was a triple act, put on by Lorenzo Cavallaro and Stephen Wolthusen from the School of Mathematics and Information Security and Marco Cinnerella from the Psychology Department. (Note that two out of three are among those being used as bargaining counters by the despicable Theresa May.) It was a polished performance; it was clear that time and thought had been put into the sequencing.

As the title suggests, there was quite a bit about phishing and ransomware emails. They pointed out that now you don’t even have to click on a link to suffer the penalty: if you use the autoplay setting on Facebook (whatever that may be), by the time the video of fluffy kittens starts playing the malware has already been downloaded onto your computer. But a lot of people click on ill-advised links because their attention isn’t fully engaged; as Marco put it, they haven’t “throttled up” their brains. I think that Julian Jaynes would say that we live much of our lives unconsciously; things only come into consciousness if they are significantly different from usual. But maybe psychologists don’t like talking about the unconscious these days: Freud gave it a bad name.

So why do they do it? Just business. You don’t even need infrastructure. Twenty dollars’ worth of computer time from Amazon is enough to crack the average password, and then you can earn much more than your investment by installing malware, using the computer for DDOS attacks, or simply selling on the information to those who will use it. Also, many permissions nowadays are transitive, so even if you haven’t explicitly given your login details to some organisation, they may have had them passed on from someone you did give them to, quite legitimately. (So yes, choosing secure passwords is important!)

There are even more worrying things. A modern car has an order of magnitude more computer code in it than a Dreamliner; a program that large is bound to have weak points which can be attacked. Moreover, the car is connected to the internet, both for the satnav and for the infotainment system. It seems that the steering wheel, accelerator and brake pedals are not mechanically connected to the front wheels or the engine respectively; when you turn the wheel or put your foot on the pedal, you are telling the computer that you want something to happen, and it is the computer’s job to do your bidding. But, if control of the computer has been taken over by an outsider, it may be given an instruction to turn the front wheels when you are travelling at high speed down the motorway. This is said to be a very efficient way of getting rid of enemies, and may already have been used for this purpose.

How do we avoid these things? Well, my shoes are not yet connected to the internet, so I am probably safe walking to work. But as for keeping your computer safe, part of the problem is that different cultures take very different attitudes to imposed security measures. Some regard them as simply something to be got around by ingenuity. In some cultures, when a security investigator interviews staff about their working practices, people will say what they think he wants to hear rather than what they actually do. I think we simply have to try to be a little more conscious of what we are doing when at the computer (and at other times too).

]]>