This document has just been released by HEFCE. If you haven’t read it, here are a few small things from my perusal of it.

### Collaborative research

Item 18 of the document states:

> There was broad support in the consultation for better recognising collaborative activity in the REF. We will therefore include in the revised environment template (see paragraphs 27–29) an explicit focus on the submitting unit’s approach to supporting collaboration with organisations beyond higher education.

It seems that collaboration between, say, mathematicians and biologists doesn’t count. As for collaboration between group theorists and analysts, forget it.

### Impact

The definitions of impact and their interpretation have not yet been figured out, despite impact being such a crucial part of the assessment. But the “excellent” (i.e. at least two-star) research underpinning impact must have been conducted since 2000, and the impact itself must have occurred since 2013. The link between the number of impact case studies and the number of staff submitted will be maintained, though they appear to have no idea how it will work.

Impact may be rolled in with environment (there are some noises about this, but it is not clear to me what is being said). They are working with an organisation called the “Forum for Responsible Research Metrics”. You know my views: this is something like the Forum for Flat Earth Studies. And impact will rise from 20% to 25% of the overall assessment (no surprise there; they always get their way in the end).

### Next steps

A provisional timetable is included (assuming it will be REF2021 and not REF2022 as has been suggested). The only imminent steps are consultation with stakeholders about the composition of sub-panels and self-nomination for sub-panel chairs (this month and next, respectively).

In connection with the use of metrics, here is a relevant statement from the European Mathematical Society’s Code of Practice:

> Whilst accepting that mathematical research is and should be evaluated by appropriate authorities, and especially by those that fund mathematical research, the Committee sees grave danger in the routine use of bibliometric and other related measures to assess the alleged quality of mathematical research and the performance of individuals or small groups of people.

Also relevant is the IMU’s *Recommendation on the evaluation of individual researchers in the mathematical sciences* (endorsed by the IMU General Assembly on August 10, 2014), which includes the following:

> It is therefore important to encourage mathematicians who serve on panels to explain to scientists of other disciplines that bibliometric evaluation is particularly inappropriate for mathematicians. We hope that the present document can help in making this point. It is worth stressing that mathematicians are not advocating that other sciences should change their specific evaluation criteria; IMU does not claim that it knows the best way to evaluate chemists or economists. The conclusion of this paragraph is the following somewhat obvious statement, which is the core of the present document:
>
> Nothing (and in particular no semi-automatized pseudo-scientific evaluation that involves numbers or data) can replace evaluation by an individual who actually understands what he/she is evaluating. Furthermore, tools such as impact factors are clearly not helpful or relevant in the context of mathematical research.
>
> It might look tempting to produce alternative bibliometric tools (keeping in mind that most impact factors are produced by commercial companies for whom it is a business), but this is not something that IMU wishes to be involved with, given the intrinsic negative side-effects of such tools.

http://www.mathunion.org/fileadmin/IMU/Report/140810_Evaluation_of_Individuals_WEB.pdf