Goodhart’s law asserts:
When a measure becomes a target, it ceases to be a good measure.
This simple and obvious truth, it seems to me, is at the basis of much of the present crisis in evaluation of teaching (both school and university), research, and medicine.
The law quoted above, in a formulation by M. Strathern, is taken from a highly disturbing article “Nefarious numbers” by Douglas N. Arnold and Kristine K. Fowler, published in the AMS Notices in March this year, and reprinted in the current EMS Newsletter (and many other places too). I urge you to read it, if you haven’t yet.
The article takes as a case study the International Journal of Nonlinear Science and Numerical Simulation (IJNSNS for short), an apparent outlier whose impact factor is three times that of the leading journals in its field, such as Communications on Pure and Applied Mathematics and SIAM Review, even though it is rated much lower by experts. How did it achieve this?
A bit of background first. The impact factor of a journal for a given year is the average number of citations received that year by articles the journal published in the preceding two years. Before you start listing all the reasons why this is not a good measure, especially for mathematics, let me point out that it is transparent, and transparency is regarded as not just desirable but essential by the bean-counters these days.
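For concreteness, the arithmetic can be sketched in a few lines. The citation counts below are invented for illustration; they are not real data for any journal.

```python
# Toy illustration of the impact-factor arithmetic (invented numbers).
# The 2010 impact factor counts citations made in 2010 to items the
# journal published in 2008 and 2009, divided by the number of citable
# items published in those two years.

# Citations received in 2010, keyed by the cited article's publication year
citations_in_2010 = {2008: 120, 2009: 180}
# Number of citable items the journal published in each year
items_published = {2008: 100, 2009: 100}

window = (2008, 2009)
impact_factor_2010 = sum(citations_in_2010[y] for y in window) / sum(
    items_published[y] for y in window
)
print(impact_factor_2010)  # 300 citations / 200 items = 1.5
```

Note how narrow the window is: a citation arriving three years after publication, common in mathematics, counts for nothing.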
The independent evaluation comes from the list produced by the Australian Research Council for its ERA research assessment. Journals were divided into four categories in terms of quality, as judged by learned academies and societies. (For reasons which only a civil servant can understand, the categories are labelled A*, A, B, C rather than A, B, C, D.) IJNSNS was in category B, while the other two journals mentioned were both in A*. (Ironically, just at the moment when this ranking is receiving international recognition, the ARC has decided to abandon it.)
The techniques used to achieve such an astonishing result are simple and for the most part perfectly legal. Moreover, the interests of the editor-in-chief and the publisher are in almost complete agreement. Here are some of them.
- Obviously, members of the editorial board publish lots of papers in the journal, citing one another.
- Once a year, an editor writes a survey article on developments in the area in the last couple of years, especially as seen in the pages of the journal.
- Editors produce special volumes of other journals, which can be packed with citations.
- Editors join the editorial boards of other journals, where they can encourage cross-citation.
- Of course, editors and publisher cooperate to ensure that the citations fall within the two-year window.
Another technique, which has been documented, is frowned upon: making acceptance of a paper conditional on the inclusion of citations to the journal. Because of the unequal positions of editor and author, this can be seen as coercion, even when it is phrased as only a suggestion.
No doubt, editors will in future be encouraged to adopt these techniques. In fact, as I said earlier, it is in the editor’s interest also, in that it boosts his or her own citations. The editor-in-chief of IJNSNS has a Hirsch number of 39, compared to a median of 35 for Nobel prizewinners.
The Hirsch number, or h-number, of a researcher is the largest n such that his or her nth most cited paper has at least n citations. If this sounds absurd to you, be aware that in other disciplines it is taken extremely seriously in hiring and promotion decisions. (I can't refrain from scepticism, however. I think the official h-number is produced using Web of Knowledge. I don't know what mine is. However, I calculated it using MathSciNet and Google Scholar; the results differed by a factor of two, even after I had filtered out of the Google Scholar list publications by other people sharing my name.)
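The calculation behind the h-number is simple enough to sketch. The citation counts here are invented purely for illustration:

```python
def h_number(citations):
    """Largest n such that the researcher's nth most cited paper
    has at least n citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # once a paper falls short, no larger n can work
    return h

# Five papers with invented citation counts:
print(h_number([10, 8, 5, 4, 3]))     # 4: the 4th most cited paper has 4 citations
print(h_number([25, 8, 5, 3, 3, 2]))  # 3: the 4th has only 3
```

The second example shows why the measure is crude: the heavily cited first paper contributes no more to the h-number than a modestly cited one would.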
Clearly, the word is spreading. Let me quote from two other articles in the current EMS Newsletter. In a piece on discussions about publishing at last year’s ICM in Hyderabad, we read,
one of the most frequently asked questions by […] journal editors was if and how their indexing at Zentralblatt [the European journal of reviews of mathematics papers] would help them in getting into the Science Citation Index and what could be done to get a high ranking position.
An article by the Editor-in-Chief of Zentralblatt, on the plight of journals from small publishers, says,
… the Science Citation Index […] pretends that only about 20% of the journals publishing mathematics are of interest, and in many countries libraries only view these journals as worth purchasing. This effect is exacerbated by administrations where the mathematicians are not consulted anymore and where the decisions on which journals and subscriptions should be purchased [depend] exclusively on impact figures.
Enough! Surely it is clear that impact factor is so tainted that we should cease using it immediately. But if we do, something else will come to take its place. Our administrators love measuring our research quality in a single number, and someone will accept their shilling and produce a number for them. “Nefarious numbers”, indeed.