What do you mean, why’s it got to be built? It’s a bypass. You’ve got to build bypasses.
Mr Prosser’s words in Hitch-Hiker’s Guide to the Galaxy have much wider applicability.
In particular, what do you mean, why does research have to be assessed? Is there any evidence that the money spent on the assessment pays for itself in terms of better research? In 1996, when I was on an RAE panel, HEFCE produced a thoroughly dishonest estimate of the cost. Dishonest because they didn’t cost the time that universities spent preparing their submissions, the lost research time for the academics, the (unpaid) time for the large number of academics who served on the panels, …
At least it was appropriately named. This massive exercise to assess our research was called the Research Assessment Exercise, or RAE for short. Now it’s called the Research Excellence Framework. The name is bullshit, partly because it simply doesn’t describe what the thing is about, but more seriously because the policies which institutions are adopting to prepare for it actively discourage research excellence. As Wendy Cope said, “Fine words won’t turn the icing pink”.
Anyway, the REF season has begun. Our Director of Research has done his best to shield us from the worst of it, but he has not been completely successful.
Some time ago, we were all asked for four papers published since the end of 2008. Of course this is difficult for a pure mathematician. The list of papers was sent to an external assessor (just one to cover all of pure mathematics, which includes logic, algebra, geometry, analysis, and combinatorics). The identity of this assessor is a secret, though at least one of my colleagues knows who it is. The results suggest, however, that the assessor did not read the papers but simply judged them on the basis of length and journal. Moreover, since there is no ranked list of journals (unlike the situation in Australia), (s)he must have simply invented a ranking.
There are two views you might take of this. One is that an assessment done like this is fraudulent and the assessor did not earn his or her fee. The other is that, since indications are that the real thing is going to be done this way, the results produced may be closer to the real thing than if the papers had been read and evaluated conscientiously. Unfortunately I think the second is more plausible.
I have argued against using metrics before – go to the contents page and scroll down to “Judging research” where you can find my earlier rants if you are interested. Scholars who work on benchmarking metrics (notably Stevan Harnad) took me to task. But now you see very clearly that their fine words haven’t turned the icing pink either. Just because evaluation using metrics could be done well doesn’t mean that it is.
And before you start telling me that this is only a dry run and doesn’t matter, consider how the results were used. Emails were sent to my colleagues giving them the evaluation of their papers made by the external assessor in terms of number of stars, and their overall evaluation as green, amber or red. You can imagine the effect of this on morale. (Have you ever seen an amber traffic light turn green?) The Director of Research has some serious firefighting to do.
I will not be involved in producing our departmental submission this time; I have been written out of the script. (In fact, the scores my papers got suggest that I don’t really cut the mustard as a researcher any more. If the University decide not to include me, that is fine by me: people who want to read my work can still do so; and I am sure there are other universities which would be happy to offer me a bolthole in 2014.) But I did produce the submission in 1996, 2001 and 2008, and I know a bit about the process.
On the first two occasions, the department heads managed to shield us from the dry runs. But in 2008 we were told that this was no longer possible (actually the head didn’t even put up a fight), and we were put through several dry runs. I told the Vice-Principal that I would tear up the dry run submissions when I sat down to write the real thing, and this was agreed. (I don’t see that happening this time.) I regarded my job as protecting my colleagues; for the dry runs, I reckoned I knew enough about their work to write it up without having to pester them.
The problem with dry runs is that you get stale. You write sparkling stuff first time; after that you decide that you can’t really do better, so you cut and paste, and the sparkle is quickly lost, with the best will in the world.
But these stars and traffic lights are going to be used in conjunction with a more aggressive style of personnel management; people will be labelled as not up to scratch well in advance, and their research careers thrown on the scrapheap.
The other big problem, which I have also discussed before, is that the managers now tell us what journals to publish in. I am supposed to choose a journal, then do some research that will get into that journal. Do they really think this is a framework for research excellence?
This is a story without heroes, though not everyone is a villain. But I didn’t even mention impact; had I done so, there would have been a hero, Don Braben, who continues the fight.