Sorry to return to this boring topic again; but here is an excellent example of both what is wrong with judging research by citations, and a challenge to those who would do so.

From *New Scientist*, 6 February 2010:

[X’s] 2004 paper in *Science* has now been cited over 350 times by other researchers. Yet many remain sceptical. One criticism is that …

How does the journalist know this? Because the criticism is expressed in journal articles, which can be found because they cite the paper concerned. Not all citations are positive!

So, a small challenge: look at the 350 papers citing this work; how many of them are critical? If critical citations count as –1, what figure do we get?
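The arithmetic of the challenge is trivial; the hard part, as argued below, is classifying each citation, which requires reading the citing papers. A minimal sketch, with entirely made-up numbers for how many of the 350 citations are critical:

```python
def net_citation_score(citations):
    """Score a list of citations: a critical citation counts -1,
    any other citation counts +1.

    citations: iterable of booleans, True if the citation is critical.
    """
    return sum(-1 if critical else 1 for critical in citations)

# Hypothetical split: suppose 120 of the 350 citations are critical.
citations = [True] * 120 + [False] * 230
print(net_citation_score(citations))  # 230 - 120 = 110, not 350
```

Even under this crude scoring, the headline figure and the "net" figure can diverge sharply.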

Incidentally, two articles later in *New Scientist* we read:

One leading expert, on condition of anonymity, told *New Scientist* that the estimates were “ridiculous” and privately accused [Y] of being “more interested in getting papers into *Nature* and *Science* than in getting it right”.

Leaving aside the scurrilous personal attack under this cloak of anonymity, is there a mechanism that could explain this paradox? I think there is. Just as researchers are under pressure to publish in high-impact journals, so journals must be under pressure to publish articles that will attract more citations. A controversial article will do so, despite having a higher probability of being wrong; indeed, all the better if it inspires many researchers to refute it!

It may appear that I am saying that we shouldn’t write speculative papers. Of course I am not; merely that citation data cannot reliably measure the worth of a paper without further information, which can only be obtained by reading the paper and making a judgment.

But, to close: Mathematics notoriously has lower citation rates than most of science. Probably the main reason for this is that, if I quote a theorem, I only need to cite the proof of the theorem, not to pile up experimental evidence for it. But perhaps another reason is that mathematics is relatively uncontroversial (a proof is a proof, after all), and mathematics papers don’t tend to attract negative citations.


## About Peter Cameron

I count all the things that need to be counted.

“a proof is a proof, after all” — and there are roughly 1,000,000 theorems per year published.

“mathematics papers don’t tend to attract negative citations” — as opposed to the negative movie and book reviews of how mathematicians are portrayed by the mainstream media.

The best way to raise your ranking in SCI is to publish an obviously wrong paper in a prestigious (high-impact) venue. Many people will quickly publish notes and papers to prove that you are wrong. Picking a fight with a more famous author is also a good strategy in this game.

Fortunately, negative portrayal in the press and mainstream media doesn’t (yet) affect our citation data. Probably, any public exposure is still good for us.