Of course, this refers to Richard III, whose bones (as you most likely know) were found under a car park in Leicester, near Bosworth Field where he was killed, and were recently reburied with great pomp in Leicester Cathedral. The city of York had competed for the king’s bones, on the grounds that he was a Yorkist and York was his home town.
Quite by chance, the evening after the meeting I read on the web about the earlier English king Edward the Martyr. His story has something in common with Richard’s, as well as with other topical things such as “Wolf Hall” and “Game of Thrones”.
Edward, the eldest son of king Edgar the Peaceful, was crowned king of England in 975 at the age of 13, and was murdered three years later. Subsequently, he was recognised as a saint in the Anglican, Roman Catholic, and Eastern Orthodox churches (maybe one of the few things these three organisations can agree on). Reading what little is known about his life, one gets little impression of saintliness. Moreover, he was not murdered by pagan Vikings, or even by the anti-clerical party in Wessex. Historians have several theories about why he was murdered, but maybe the favourite is that it was a power struggle between his supporters and those of his younger brother Ethelred (the Unready).
I knew very little about this period. I had of course heard of Ethelred the Unready (his appellative is mis-translated from the Old English word which actually means “ill-advised”). But when I read the Wikipedia page, I noticed that it had last been edited on 1 April, so it might be as fictional as Game of Thrones.
Edward’s bones were not lost, but were actually kept in a bank vault for some time because of a legal battle over their ownership between the archaeologist who found them (who wanted them to go to the Russian Orthodox Church Outside Russia) and his brother (who wanted them returned to Shaftesbury Abbey where they had been found). The Russians were victorious, and Edward was reburied at Brookwood in Surrey, where there is now an Orthodox foundation including a small monastery and a parish church.
This happened in 1984, well within my lifetime, and yet I don’t recall hearing anything about it. So the reburial of someone revered as a saint in most of Christendom can pass unnoticed, while the reburial of someone who was perhaps a murderer and usurper is a national event. Why? Maybe Richard had the better writer for his life story …
I happened to be reading this on St George’s Day, and it struck me that St George’s connection with England is as tenuous as St Edward’s with Russia or Greece.
Brookwood is itself an interesting place. I have passed it on the train many times; it is at the junction of the line to Basingstoke and the south-west and the line to Aldershot. The cemetery there is one of the largest in Europe, with over 235,000 inhabitants. It was created by the London Necropolis Company to offer an alternative to London’s overcrowded burial grounds. The company had its own station (London Necropolis) near Waterloo, and offered three classes of burial (as befitted a class-conscious society like Britain), though most of the burials were third-class. It also re-buried inhabitants of London graveyards whose rest had been disturbed by the building of London’s sewers and underground railways. The main divisions of the cemetery were Anglican and Nonconformist, each with their own station, but it also included several smaller divisions including Parsee (I thought that Parsees practised air burial, but maybe there are not enough vultures in Surrey), Turkish, and American. The cemetery itself was the subject of a legal battle before ending up in the care of Woking council.
It was good to see old friends such as Rick and Hilary Thomas, Vicky Gould, John Fountain, John Meldrum, and Mark Kambites.
I’ll just discuss here a few talks which for me were highlights.
First among these was Bob Gray’s opening talk of the meeting, entitled “Crystal monoids”. It was mostly about the plactic monoid; fortunately, since that word is not in my dictionary (it is “plaxique” in French, which is no clearer, though a correspondent on MathOverflow derives it from the Greek “plax”, a plaque or tablet, which could perhaps be linked with “tableau”), he spent most of the talk telling us what this monoid is, taking us on his journey of “learning new things”. He gave us three completely different descriptions of it: in terms of semi-standard tableaux and insertion; by a presentation (the Knuth relations); and via the theory of crystal bases, which gave the talk its title.
The next post in my series on the symmetric group was always intended to be about Young diagrams and tableaux and their connection to the representation theory of the symmetric group. I still intend to do this sometime, but it hasn’t been done yet. If it had, I could have referred you to it for the procedure of inserting an element into a tableau, except for one thing: there are several conventions about tableaux, and even Bob and his co-author Alan Cain couldn’t agree! Indeed, Ian Macdonald, in his great book on Symmetric functions and Hall polynomials, says, after explaining the British and French conventions for tableaux, that Francophone readers of his book should read it upside-down in a mirror!
Here is a brief account, in the form that Bob Gray used. A Young diagram is a collection of boxes aligned at the top and the left, so that row lengths are weakly decreasing going from top to bottom. A semi-standard tableau is obtained by putting numbers from the set {1,2,…,n} into the boxes so that rows are weakly increasing from left to right and columns are strictly increasing from top to bottom.
To insert a number x into a tableau: first, try to put it at the end of the first column. If x is strictly larger than the last entry in the first column (or the column is empty), we can add x in a new box at the bottom and stop. Otherwise, x “bumps” the highest entry y in the column which is greater than or equal to x (going up the column, this is the last entry we pass before the entries drop below x), and we then (recursively) insert y into the tableau consisting of the remaining columns.
(If you compare this with the version in my Combinatorics book, there are three differences: first, I insert elements into rows rather than columns; secondly, I assume that the numbers are all distinct, so that the tableaux are standard (rows and columns both strictly increasing); and third, I maintain a second tableau to record the boxes in order of creation. This gives rise to a bijection between permutations and pairs of standard tableaux of the same shape.)
Now there is an equivalence relation defined on words over the alphabet {1,2,…,n}: two words are equivalent if, when we successively insert them into the empty tableau, the results are equal. This equivalence relation turns out to be a congruence on the free monoid generated by the alphabet (the set of all words with the operation of concatenation), and so the quotient is a monoid. This is the plactic monoid.
One can recover a word from a tableau by reading it “Japanese fashion”: down each column, and columns from right to left. This word belongs to the congruence class corresponding to the tableau, and can be regarded as a “canonical representative” (though there are other possible choices for this).
The results of Cain, Gray and Malheiro (which he had little time to describe) are that the plactic monoid has a finite complete rewriting system and an automatic structure. He did explain these terms, but for me the most valuable thing was, as I said, the three descriptions of the monoid.
Later the first afternoon, Alan Cain spoke further about this work. They have done similar things for several other monoids.
The essential idea is to take any combinatorial structures with a notion of “insertion”. As well as semistandard tableaux (giving the plactic monoid), they have considered the hypoplactic monoids (from “quasi-ribbon tableaux”), the sylvester monoids (from binary search trees), and a couple of others. (Alan insisted on the lower-case letter here since sylvester does not commemorate the mathematician of that name but alludes to trees. My dictionary doesn’t give it as a word but does list “sylvestrian” as the adjectival form.)
After the conference dinner in the newly-reopened Byre Theatre, I walked home along the Kinness Burn where I heard a thrush singing a magnificent song, and saw a couple of bats flying over the river in the twilight, while a crescent moon and Venus hung in the western sky.
The next day, the opening talk was by Paul Bell. He tried so hard to give us careful background explanations that he ran a bit short of time; but better that way, in my view. He was talking about decidability and complexity of problems about matrix semigroups. He gave us clear accounts of the undecidable problems which he and his coauthors have coded into matrix semigroups for the negative results.
One of these was the Post Correspondence Problem PCP(k). We are given k dominoes, aligned vertically. On the top and bottom halves of each domino are written words over the two-letter alphabet {a,b}. (An unlimited supply of copies of each domino is available.) The task is to choose a sequence of dominoes and put them in a row such that, when the words on the top parts of the dominoes are read and concatenated in order, and the same for the bottom parts, the results are equal. This problem is known to be decidable for k = 2, and undecidable for k = 5.
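Undecidability does not, of course, stop you searching small instances. A brute-force sketch (the instance below is a standard textbook example, not one from the talk):

```python
from itertools import product

def pcp_solution(dominoes, max_len=6):
    """Search all domino sequences up to length max_len for a match between
    the concatenated top and bottom words. (PCP is undecidable, so some
    bound on the search is unavoidable.)"""
    for n in range(1, max_len + 1):
        for seq in product(range(len(dominoes)), repeat=n):
            top = ''.join(dominoes[i][0] for i in seq)
            bottom = ''.join(dominoes[i][1] for i in seq)
            if top == bottom:
                return seq
    return None

dominoes = [('a', 'baa'), ('ab', 'aa'), ('bba', 'bb')]
print(pcp_solution(dominoes))   # (2, 1, 2, 0): bba·ab·bba·a = bb·aa·bb·baa
```

The search is exponential in max_len, as it must be; no algorithm can remove the bound.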
Another problem used in this way was Hilbert’s 10th (solvability of Diophantine equations). For NP-completeness he used the subset sum problem.
Matthew Taylor introduced us to the tropical semiring T, whose elements are the real numbers together with −∞; addition is the maximum function, and multiplication in T is addition in R. The tropicalisation of schemes (from algebraic geometry) has some connection to quotients of free T-modules in finitely many variables (based on work a couple of years ago by the Giansiracusa brothers), so he embarked on the classification of 2-generated T-modules, with lots of pictures of plane configurations of horizontal, vertical and diagonal lines, as you might expect. [25/04/2015: Wording changed at Mark Kambites’ suggestion. I don’t understand the tropicalisation of schemes well enough to be precise here! Mark says that in the Giansiracusa approach the tropicalisation of a scheme is a sheaf of T-algebras, but for some purposes it is enough to consider T-modules; also, there are other approaches to the problem.]
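The arithmetic of T is easy to play with; a few lines of my own, nothing from the talk:

```python
NEG_INF = float('-inf')   # the tropical zero: adding it changes nothing

def t_add(a, b):
    return max(a, b)      # tropical addition is maximum

def t_mul(a, b):
    return a + b          # tropical multiplication is ordinary addition

# semiring identities
assert t_add(NEG_INF, 5) == 5     # -inf is the additive identity
assert t_mul(0, 5) == 5           # 0 is the multiplicative identity

# the tropical polynomial x^2 + 3x + 1 evaluates as max(2x, x + 3, 1)
def p(x):
    return t_add(t_add(t_mul(x, x), t_mul(3, x)), 1)

print(p(2))    # max(4, 5, 1) = 5
```

Tropical polynomials are thus piecewise-linear convex functions, which is why Matthew’s pictures were all made of line segments.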
And why is it called “tropical”? Not, as I first thought, to contrast it with “polar geometry”; but in honour of one of the pioneers, the Brazilian mathematician Imre Simon, though there seems to be some disagreement about who actually coined the term (and in any case São Paulo, where he worked, is barely tropical, lying 23 1/2 degrees south of the equator).
A very nice conference, crossing many boundaries, and with some excellent expositions.
I concluded the conference with a talk entitled “Finding where you are”, involving two aspects of synchronization for automata, both of which I have discussed here before. (Perhaps appropriate: we were in the third lecture room that had been used for the meeting.) My slides are in the usual place.
The walk officially goes from Melrose (where Cuthbert was prior of the abbey) to Holy Island (where he was bishop of Northumbria). In fact the causeway to Holy Island is only passable at low tide, which dictated that we had to do the walk in the reverse direction, leaving Holy Island not by the causeway but by the Pilgrims’ Way across the sand (marked by tall poles). The border is crossed near Kirk Yetholm, about the halfway point of the route, which is also the northern terminus of the Pennine Way.
St Cuthbert lived in the 7th century, but most of the architectural remains date from half a millennium or more later. But Melrose is even older; it was the Roman town of Trimontium, and if you see the three peaks of the Eildon Hills lined up above the town, you understand where the name came from. (The name Melrose is British, as are many Scottish placenames, especially in the Borders.) The hills form a barrier which the path has to cross, but it uses a pass between the eastern and middle summit and so avoids quite a climb.
Incidentally, Trimontium is one of a very few Roman place-names which are known for Britain. It was such a remote outpost of empire that not much detail was recorded.
The town of Melrose is on the river Tweed, just east of Galashiels and Tweedbank. In September, it is planned that Scotland’s newest railway, the Borders Railway (actually a rebuild of 30 miles of the 100 mile Waverley Line from Edinburgh to Carlisle, closed in 1969) will open, from Edinburgh to Tweedbank. This will make this lovely walking country more accessible.
The line will pass through Newcraighall, which currently is the terminus of the Fife Circle(!). Presumably turning the trains there saves the need to park them in the rather cramped Edinburgh Waverley station. I have no idea whether they plan that Fife Circle trains will continue to Tweedbank. But the Fife Circle doesn’t come very far north, so either way will involve a change.
I was looking up the Waverley Line on Wikipedia yesterday. Their account of the modern rebuilding is a must-read. The things that went wrong are not on the scale of the infamous Edinburgh Tram, but the list of delays and cost increases suggests that once the original decision had been taken (by 114 votes to 1 in the Scottish parliament), nobody bothered to keep a watch on what was going on.
Politics alert: If the SNP can screw up a simple job like this so badly, can they really be trusted to run a country?
As well as the PCC, last week I was at a conference at the University of Sussex entitled Breaking Boundaries between Analysis, Geometry and Topology. With a title like that, how could I resist?
There were some lovely and wide-ranging talks. Here is a whistle-stop tour.
David Edmunds made some “Remarks on Approximation Numbers”. These numbers, for maps on Banach spaces, turned out to be a kind of generalisation of eigenvalues in the positive self-adjoint case, so interesting even if I don’t expect to have an immediate use for them myself. He remarked at one point that nuclear operators are sometimes called “operators of trace class”, even though they may not actually have a trace!
David Applebaum talked about “Generalised spherical functions on groups and symmetric spaces”. This was a blend of harmonic analysis and probability theory, bringing in, among other things, the Lévy–Khintchine formula for the Fourier transform of an infinitely divisible element (one which has a convolution nth root for every n), together with the Harish-Chandra formula. Lévy–Khintchine works on Euclidean space, and can be extended to locally compact abelian groups, but to go further to non-abelian groups and symmetric spaces you have a much harder job, and they ended up with a formulation involving infinite matrices, but still in the spirit of the original!
Dale Rolfsen gave two talks, on the theme of generalised torsion in groups. A group has generalised torsion if the product of some n conjugates of a non-identity element is equal to the identity (this reduces to ordinary torsion if the conjugating elements are all the identity). It is known that “biorderable” (having an order invariant under both left and right translation) implies “locally indicable”, which implies “left-orderable” (a left-translation-invariant order). In addition, “biorderable” implies “no generalised torsion”. In the first talk he concentrated on knot groups: if all the roots of the Alexander polynomial of a knot are real and positive, then the knot group is biorderable; but the Alexander polynomial does not detect generalised torsion. The second talk was about generalised torsion in homeomorphism groups of manifolds (especially cubes) which fix the boundary pointwise, and involved some ingenious constructions moving cubes around.
Michiel van den Berg talked about heat flow in Riemannian manifolds. You start with a region Ω at unit temperature, with either the rest of the manifold at zero temperature, or the boundary of Ω fixed at zero temperature; you are interested in how much heat remains in Ω at arbitrary later time, and in particular its asymptotics.
Niels Jacob talked about “Symbol and geometry related to Lévy processes”. From a function of two variables, you can define a pseudo-differential operator (where the second variable is “replaced” by differentiation) by means of the Fourier transform. The talk took me back to the course on partial differential equations I took as a final-year undergraduate, with lots of stuff about the geometry of wavefronts and characteristic manifolds. I regret to say that I didn’t understand much back then, and didn’t really get much further this time!
Roger Fenn presented us with a challenge. Take Gauss’ encoding of a knot, and turn it into a nice drawing of the knot; not a job just for computers since aesthetic considerations are involved.
I also gave two talks, one about the ADE affair (which I have discussed at length on this blog, beginning here), and the other about recent work with Collin Bleak on the outer automorphism groups of the Higman–Thompson groups, in which we have to count foldings of de Bruijn graphs, which I also posted about recently here. The slides are in the usual place.
It was one of the liveliest PCCs I have been to, with about 35 delegates from all over England, Scotland, and further afield (Reykjavik, Wien). Several of my St Andrews students from last year were there, all doing well. I didn’t attend many of the student talks – I regard them as the students’ affairs and I don’t want to sit in the back looking intimidating – but I had talks about mathematics (and other things) with quite a few delegates.
I did go to Julia Wolf’s talk, in which she predicted that Sidorenko’s conjecture (stating, roughly, that for any bipartite graph H, the number of copies of H in a large graph G is at least as great as in the random graph with the same edge density as G) will be resolved soon, one way or the other.
The weather was lovely – I did go for a walk in the park at one point – and the College catering had done a superb job, with good lunches and dinners and cakes at coffee time.
I gave a talk (you can find the slides in the usual place), and put in a couple of plugs: for Jack Edmonds’ lectures, and for my talk at the Permutation Patterns conference, both taking place in June.
Let Sing_{n} be the semigroup of singular maps on the set {1,…,n}. The first thing to note is that maps of rank n−1 cannot be generated by maps of smaller rank; so a generating set for the whole semigroup must yield all the maps of maximum rank. But it is not hard to see that the maps of rank n−1 do indeed generate the whole semigroup. So, if we are looking for a minimal generating set, we can concentrate on these.
The situation is still too complicated, so we will concentrate on idempotents of rank n−1 (that is, maps f satisfying f^{2} = f). Such a map only moves one point, say a; everything else (including the image of a) is fixed. So, if the image of a is b, we can write the map as a→b (a directed arc from a to b).
Now think about how these maps compose. If we compose a sequence
a_{1}→a_{2}→…→a_{k},
from left to right, then all the elements in the sequence pile up on a_{k}. Interesting, but not what we want if we are trying to generate maps of rank n−1.
In fact, if we insist on producing maps of maximum rank, then after the first move we have in effect a sliding block puzzle. Apply the transformation a→b. This leaves a “hole” in position a, into which we can now move another element; the tail of its arrow will then be the hole, and so on. Thus, we compose the arrows in the reverse of the natural order.
A tournament is a directed graph in which, between any two vertices, there is an arc in one direction (only). Think of this graph as recording the result of a tournament in which any two teams play once, and an arrow a→b indicates that a beats b.
A tournament is strongly connected (or just strong) if there is a directed path between any two of its points.
I will use two facts about tournaments, which I will explain at the end. In what follows, only the first of these is used; but the second is useful in similar arguments.
The theorem (I am not sure who proved it) is the following.
Theorem: A set of rank n−1 idempotents is a minimal generating set for Sing_{n} if and only if the corresponding arcs form a strongly connected tournament.
What follows is not a detailed proof, but an explanation of this theorem. I will only describe one direction, that the arcs of a strongly connected tournament do give a minimal generating set.
As explained earlier, our task is to show that we can generate any map of rank n−1. This is an easy sliding-block puzzle if we can show that we can generate all of the idempotents. For example, to swap a and b, move a into the hole, b into the hole left by a, and then a from the original hole to the position vacated by b.
So, if a→b is in our set, we have to be able to generate b→a.
For this, take a directed path from b to a (this exists, by strong connectedness), and compose its elements (in the “wrong order”, as explained). The result is a map of rank n−1 which has the correct kernel and image but may not be an idempotent. But some power of it will indeed be an idempotent, so we are done!
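This is easy to mechanise. Here is a small Python sketch for n = 3 and the cyclic tournament 0→1→2→0 (the numbering from 0, and the inclusion of the arc 0→1 at the start of the product to create the hole, are my own choices): to produce the idempotent 1→0, apply 0→1 first, then slide along the path from 1 back to 0 in reverse order, and finally take a power.

```python
def arc(a, b, n):
    """The rank n-1 idempotent a -> b: a goes to b, everything else fixed."""
    return tuple(b if x == a else x for x in range(n))

def compose(maps, n):
    """Apply the maps from left to right, as in the sliding-block description."""
    result = tuple(range(n))
    for f in maps:
        result = tuple(f[x] for x in result)
    return result

def idempotent_power(f, n):
    """Raise f to higher powers until an idempotent is reached."""
    g = f
    while any(g[g[x]] != g[x] for x in range(n)):
        g = tuple(f[g[x]] for x in range(n))
    return g

n = 3
# generators: the strongly connected tournament 0 -> 1 -> 2 -> 0
w = compose([arc(0, 1, n), arc(2, 0, n), arc(1, 2, n)], n)
print(w)                        # (2, 2, 0): correct kernel {0,1} and image {0,2}
print(idempotent_power(w, n))   # (0, 0, 2): the idempotent 1 -> 0
```

The intermediate map w has the right kernel and image but is not idempotent; its square is, exactly as described above.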
This also explains why, in a minimal generating set, we don’t need arcs in both directions between two points. Why do we need a tournament rather than a digraph with fewer edges? If we compose a sequence of maps of rank n−1 and the result still has rank n−1, then its kernel is equal to the kernel of the first map in the sequence. So each possible kernel must be present among the generators. The kernel of a rank n−1 map is a partition with one part of size 2 and the rest singletons; so every 2-set must occur as a kernel, that is, every pair must carry an arc in one direction at least.
Now, to generate an arbitrary map f, we can use the following strategy. First, we produce a map with the right kernel classes. Take each kernel class. It induces a sub-tournament, which by one of our earlier facts contains a Hamiltonian path. Pushing forward along this path, we pile up all the elements on a single one. Once we have done this for all kernel classes (this requires n−r steps, where r is the rank), we simply have to move these piles of counters to their correct final positions.
I haven’t even got to the point where we started our work; but I think I understand the background a bit better now!
1. Let T be any tournament, and take a path in T of maximum length, say v_{1}→…→v_{k}. If there is a vertex w not on this path, there are three possibilities: w→v_{1}, in which case we could prepend w to the path; v_{k}→w, in which case we could append it; or neither, in which case v_{1}→w and w→v_{k}, so somewhere along the path there is an i with v_{i}→w and w→v_{i+1}, and we could insert w there. Each case contradicts the maximality of the path; so the path contains every vertex, and every tournament has a Hamiltonian path.
2. Let T be a strongly connected tournament, and take a cycle C in T of maximum length. If there is a point w not on the cycle, there are two possibilities: either w beats some vertex of C and loses to another, in which case there are consecutive vertices u→w and w→v on C, and inserting w between them gives a longer cycle; or w beats every vertex of C (or loses to every vertex), in which case strong connectedness gives a path between w and C which can be spliced in to give a longer cycle. Either way we contradict maximality; so C contains every vertex, and every strongly connected tournament has a Hamiltonian cycle.
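The three-case argument for the first fact turns directly into an algorithm: insert the vertices one at a time into a growing path. A sketch (representing a tournament by the set of vertices each vertex beats is my own choice):

```python
def hamiltonian_path(beats):
    """Build a Hamiltonian path in a tournament by insertion.
    beats[u] is the set of vertices that u beats."""
    verts = list(beats)
    path = [verts[0]]
    for w in verts[1:]:
        if path[0] in beats[w]:          # w beats the head: prepend
            path.insert(0, w)
        elif w in beats[path[-1]]:       # the tail beats w: append
            path.append(w)
        else:                            # the arrows switch somewhere inside
            i = next(i for i in range(len(path) - 1)
                     if w in beats[path[i]] and path[i + 1] in beats[w])
            path.insert(i + 1, w)
    return path

# the 3-cycle 0 -> 1 -> 2 -> 0
print(hamiltonian_path({0: {1}, 1: {2}, 2: {0}}))
```

Each insertion takes linear time, so a Hamiltonian path in a tournament can be found in O(n²) steps.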
Here is a nice application of the second fact.
Proposition: Given a minimal generating set of idempotents for Sing_{n} as described above, any rank 1 map can be written as a product of n−1 generators (and no fewer).
At least n−1 generators are required, since each one reduces the rank by at most 1. To achieve this, choose a Hamiltonian cycle in the tournament. Starting one step after the position of the image of the required map, move around the cycle until this position is reached.
More than fifty years ago, Jack was perhaps the first person to make a clear distinction between “easy” (polynomial time) and “hard” problems, and to single out problems with “easily certifiable” answers, now formalised as P and NP. The concepts of P, NP, and co-NP, together with discussion and conjectures, are clear in Jack’s papers from the mid-1960s, including the conjecture that there is no polynomial time algorithm for the travelling salesman problem (TSP).
In the early 1970s, Stephen Cook, Leonid Levin, and Richard Karp showed that various familiar NP problems, including the TSP, are as hard as any NP problem, that is, “NP-complete”. Since then, using the assumption that the TSP is not easy (that is, that P ≠ NP), and trying to prove or disprove it, have been staples of combinatorial optimization and computational complexity.
Jack is known for many other important results such as the “blossom algorithm” for maximum matching, the matroid intersection theorem, “the greedy algorithm”, theory of polymatroids and submodularity (f(A∪B)+f(A∩B) ≤ f(A)+f(B)), and much more. He is still creative, with recent work on Euler complexes and Nash equilibria among other things, and he loves to perform.
So I am delighted to have had a small part in setting up two short courses by Jack in London in June, aimed at students:
Further details will be available from the web page.
It is quite a substantial paper, and goes well beyond anything we have published (or that I have written about here before). So I cannot describe it all. Here is what I think is the most dramatic new idea.
By way of recap: a permutation group G, acting on a set X, is said to synchronize a map f if the monoid generated by G and f contains an element mapping everything to a single point. Then G is said to be a synchronizing group if it synchronizes every non-permutation, and an almost synchronizing group if it synchronizes every non-permutation which is uniform in the sense that the sizes of the inverse images of all points in its range are the same.
A more “classical” notion: G is primitive if it preserves no non-trivial equivalence relation on X, that is, there is no partition of X whose parts are permuted by G other than the partition into singletons and the partition with a single part.
There is a connection between these concepts. It is known that any synchronizing group is primitive (as, indeed, is any almost synchronizing group). A theorem of Rystsov shows that a permutation group of degree n is primitive if and only if it synchronizes every map of rank n−1 (that is, a map which collapses two points to the same place and is injective on the remaining points). Indeed, the largest part of our paper is to push down this bound: with long and delicate arguments, we show that a primitive group of degree n synchronizes any map of rank n−4 or greater.
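These definitions are easy to check by brute force in tiny cases: search the monoid generated by the group and the map for a constant element. A sketch (the example group and map are my own, chosen small enough to inspect by hand):

```python
def synchronizes(perms, f):
    """Breadth-first search over the monoid generated by the permutations
    and f (all maps written as tuples acting on {0,...,n-1}): does it
    contain a constant map?"""
    n = len(f)
    gens = [tuple(p) for p in perms] + [tuple(f)]
    identity = tuple(range(n))
    seen = {identity}
    frontier = [identity]
    while frontier:
        new = []
        for m in frontier:
            for g in gens:
                h = tuple(g[m[x]] for x in range(n))  # apply m, then g
                if h not in seen:
                    if len(set(h)) == 1:
                        return True
                    seen.add(h)
                    new.append(h)
        frontier = new
    return False

# the cyclic group C_4 on 4 points, which is imprimitive (blocks {0,2},{1,3})
c4 = [(1, 2, 3, 0)]
print(synchronizes(c4, (0, 0, 2, 3)))  # True: this non-uniform map is synchronized
print(synchronizes(c4, (0, 1, 0, 1)))  # False: this map respects the blocks
```

The second map collapses each block to a point, so every composite respects the block system and no constant map can ever appear; this is the mechanism behind “synchronizing implies primitive”.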
It was conjectured for a while that, conversely, a primitive group is almost synchronizing. I described here how we refuted this conjecture last year. Our new examples go much further.
The most powerful tool for examining this problem is the use of graphs and graph endomorphisms. A transformation monoid M fails to be synchronizing if and only if there is a simple graph X (that is, undirected and without loops or multiple edges) whose endomorphism monoid contains M; moreover, we can assume that X has clique number equal to its chromatic number, and that every edge is contained in a maximal clique.
Now the new idea for constructing graphs with a rich supply of endomorphisms uses the notion of the Cartesian product X□Y of graphs X and Y. Its vertex set is the Cartesian product of the vertex sets of X and Y (that is, the set of ordered pairs (x,y), where x∈X and y∈Y); edges join pairs (x_{1},y_{1}) and (x_{2},y_{2}) whenever x_{1} = x_{2} and y_{1} and y_{2} are joined in Y, or vice versa (X components joined, Y components equal). Thus, for example, if K_{k} is the complete graph on k vertices, then K_{k}□K_{k} is the k×k square lattice graph L_{2}(k), the line graph of the complete bipartite graph on k+k vertices.
The picture shows L_{2}(4) with a proper vertex colouring (aka a Latin square of order 4). The rows and columns are cliques.
Now a proper k-colouring of X (we may assume that X has chromatic number k) gives a homomorphism from X to K_{k}; applying it in each coordinate gives a homomorphism from X□X to K_{k}□K_{k} = L_{2}(k).
So, if there is a homomorphism from L_{2}(k) to X, then the composite
X□X → L_{2}(k) → X → X□X
is an endomorphism of X□X. (The last map sends X to the set of vertices of X□X with fixed second coordinate.) The first and third homomorphisms are uniform, but the middle one gives us the chance to introduce non-uniformity.
This trick works rather well when the graph X is the complement of L_{2}(k). (That is, vertices of the square grid are joined if they lie in different rows and different columns; X is the categorical product K_{k}×K_{k}.) This graph has clique number k (a diagonal of the square grid is a clique in the complement) and chromatic number k (give a colour to each row of the grid).
Our ingredient is a homomorphism from L_{2}(k) to its complement. Such a map has the form (x,y) → (f(x,y),g(x,y)), and the homomorphism condition is equivalent to saying that each of f and g should be a Latin square (but with no relation between the two). Thus the image is given by the superposition of two Latin squares.
The rank of such a superposition is the number of different ordered pairs of entries which arise. Clearly it is between k and k^{2}, the lower bound realised when the squares are identical and the upper bound when they are orthogonal. This rank is equal to the rank of the resulting endomorphism of X□X. So we would like to know which ranks are possible for the superposition of two Latin squares.
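Computing the rank of a superposition is a one-liner; a sketch with two order-3 squares (my own toy example):

```python
def superposition_rank(f, g):
    """The number of distinct ordered pairs (f[i][j], g[i][j])."""
    k = len(f)
    return len({(f[i][j], g[i][j]) for i in range(k) for j in range(k)})

# two Latin squares of order 3
L = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]   # L[i][j] = i + j (mod 3)
M = [[0, 1, 2], [2, 0, 1], [1, 2, 0]]   # M[i][j] = j - i (mod 3)

print(superposition_rank(L, L))  # 3 = k:   identical squares
print(superposition_rank(L, M))  # 9 = k^2: these two are orthogonal
```

The pair map (i,j) ↦ (i+j, j−i) is a bijection mod 3 (its determinant, 2, is invertible), which is why the second pair of squares is orthogonal and the rank hits the upper bound k².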
Fortunately, this problem has already been solved by Colbourn, Zhu and Zhang, who have determined all the possible ranks. For k > 6, every value in the interval from k to k^{2} except k+1 and k^{2}−1 occurs; all the exceptions for lower values of k have been determined.
So we have primitive graphs with k^{4} vertices, and with endomorphisms of k^{2}−k−1 different ranks, almost all of them non-uniform!
We have other examples too, but have not explored the new realms opened up by this idea. The conclusion is that things are much more complicated and interesting than anyone thought.
When you think that you’ve lost everything,
You find out you can always lose a little more.
Open access for the REF is not in that league, but when you think you have plumbed the depths of HEFCE policy on open access, someone pops up to tell you that you haven’t got to the bottom yet.
I am going to tell you two entirely realistic scenarios which could lead to papers being judged ineligible for the REF despite your best intentions, or at least to a degrading of the quality of research in the name of “excellence”. But a couple of preliminary remarks first.
Recall that you must put a preprint of your article (in the final accepted form for publication) in a public archive within three months of acceptance, from 1 April 2016. (Incidentally, it is not entirely clear that this start date refers to acceptance of papers. We know how slow mathematical publication is, and it is quite possible that a paper accepted years earlier only comes to published form after that date.)
During my career, I have been fortunate to know several mathematicians of the highest level of creativity, including Graham Higman, John Conway, Paul Erdős, and Ian Macdonald. I can imagine the reaction that these people would have had if someone tried to impose the current HEFCE rules on them. Some of them would simply not have complied. So then the HEFCE bureaucrats, who after all know what research excellence is since they invented the concept, would decide that these people were not up to scratch.
On that theme, my current contract ends one month and a day before the new HEFCE rules come into force. I hope it will be renewed; but if it is not, the silver lining of the cloud will be that I will no longer be bound by these silly rules. I will be able to do research, post it on the arXiv, and if I am really proud of it, submit it to a diamond open access journal, and that will be that.
And further diverting on that theme, it really seems that neither HEFCE nor one of the commenters on my previous post realise that there is any alternative to gold or green open access.
Back to general issues. How do you prove acceptance date of a paper? By the date on the editor’s letter notifying acceptance, apparently (with some exceptions: I found one journal which included an official acceptance date in the letter). So you have to keep this letter. It probably came by email. The two University email systems I have to deal with are Outlook and Office365 based, which means that all my mail is stored in a cloud under the control of Microsoft. It seems more than a little naive to assume that it will still be there in 2020 when you might need it. Moreover, these systems do not allow you to save emails as local files, unlike the old workhorses I used to use such as mutt or squirrel mail. I have asked various systems managers for a way round this, but nobody has been able to help. So I have resorted to copying the text (including headers) into a text file and saving that somewhere that will get backed up.
And, while I am on general things, Martin Eve said,
If your institution isn’t allowing you to use arXiv to fulfil the requirements, that’s not HEFCE’s fault, it’s your institution being over-zealous. The policy explicitly allows arXiv: “a subject repository such as arXiv”.
I mentioned the fact stated in the last sentence in my original post. But the whitewash of HEFCE doesn’t hold, since it is their policy which has driven universities into this over-zealousness.
In pre-Internet days, we had a system which worked well for making papers public. Journals would provide a number, typically 50, of “offprints” of a published paper (printed documents identical with the published version). Anyone could then write to the author asking for an offprint, which would be sent provided that the paper was not so popular that the supply had been exhausted. Departments usually had a supply of request cards which could be filled in and posted. The analogue of gold was the facility to buy extra offprints at your own or your university’s expense.
Right, down to business …
I write a paper, prepare it very carefully, and post it on the arXiv at the same time as submitting it to a journal. Let us say, either a diamond journal, or a gold journal for which my university is prepared to stump up a huge sum of money. Back it comes with referees’ reports pointing out that there is nothing wrong with the mathematics, but asking for small changes in grammar or style. I happen to think that these changes degrade the paper, making it less precise or harder to understand; but I want the paper published, and I am busy, so I swallow my pride, make them, and send the paper back to the journal, which tells me it is now accepted. Now there is a superior version on the arXiv, but I can’t count the paper for the REF unless I post the inferior version. Moreover, the over-zealous administrators will probably expect me to post this inferior version on the institutional repository as well. Then finally it will appear in the journal.
So now there are four copies of the paper out there. The arXiv never deletes anything; if you update a paper, the old copy is still there. If you think about arXiv submissions of controversial papers claiming to solve big problems, you will quickly realise that this is the correct, indeed the only possible, strategy. (I don’t know if this is also true for institutional repositories.) So the “good” version is still there, but it is no longer the default, and you will only get it if you ask for it.
Is there anywhere a mathematician who thinks this scenario is not realistic?
Of course, if I decide to fight against this proliferation and depression of standards by not posting the second version, I can’t put the paper in the REF. So, even leaving aside the possible depression of standards, the effect of HEFCE policy is to fuel a big increase in the number of copies of a paper on the web, making searching for the “right” one virtually impossible.
I write a paper with an author in a different country. She is unaware of the stupid bureaucracy that UK academics suffer from, and she knows that I am busy, so when the acceptance letter comes, she does not bother to forward it to me.
So how do I find out that the paper has been accepted? Maybe when it appears in the journal (or on their website), or maybe when I wonder why I have heard nothing and ask my coauthor. In either case, by this time it may be too late to satisfy the HEFCE requirement (even if the version on the arXiv may, as in the first scenario, be as good as or better than the published version).
I hasten to say that none of my co-authors would behave like this; they are without exception more conscientious than I am! (Not difficult, actually!) But it may be a bit intimidating or embarrassing for junior academics to have to nag senior colleagues in other countries for these bureaucratic details.
I spent the first half of 2008 in Cambridge, directing a programme on “Combinatorics and Statistical Mechanics” at the Isaac Newton Institute. Jan Saxl very kindly invited me to Caius, and arranged for me to be the G. C. Steward visiting fellow. Along with very congenial company at the excellent meals, and somewhat noisy accommodation above the infamous Gardenia restaurant in Rose Crescent, my only duties were to give “three or four” lectures to the mathematics students at the college, which I was very happy to do. I was able to play with various literary allusions: the title of the series was “Never apologise, always explain: scenes from mathematical life” – the first four words I regard as a good rule for a mathematician – and the individual lectures were “Before and beyond Sudoku”, “Proving theorems in Tehran”, “Transgressing the boundaries”, and “Cameron felt like counting”.