Altmetrics

Maybe you haven’t heard of altmetrics; I hadn’t, until this morning. I hope (though without much confidence) that you never have to hear of them again. But I do urge you to read what David Colquhoun and Andrew Plested have to say about them.

In short, altmetrics are a new form of bibliometrics which “measure”, not the citations of a scientific paper, let alone any more meaningful impact indicators, but the activity the paper generates on social media (meaning mainly Twitter, but also Facebook and various other things that I haven’t heard of). If you think this is a bad idea (or even if you don’t), Colquhoun and Plested will tell you why that is the case.

Again in short, this kind of measure enables the companies that produce the numbers to sell them to universities as “Speed: months or weeks, not years: faster evaluations for tenure/hiring”. (They have already sold this to some universities, including Colquhoun’s and Plested’s, for real money.) But when challenged that this is not a measure of scientific quality, they can retreat to the position that it doesn’t claim to measure scientific quality, merely something interesting.

Of course, as Colquhoun and Plested point out, most people who tweet or re-tweet about a scientific paper have not read it (not least because it may be behind a paywall), and in many cases they know nothing about what it says apart from the title, and maybe the journal’s own tweet (which, as the authors show, can be staggeringly dishonest, even in the case of the journals of supposedly highest reputation).

One of the proponents of altmetrics, in a reply, can’t even be bothered to spell David Colquhoun’s name correctly. Is this lack of basic courtesy the norm in social media?

Do take a look. Here is the link again, on David Colquhoun’s blog (because the journal eLife, to which it was submitted, considered it not worth publishing – judge for yourself).


7 Responses to Altmetrics

  1. Yemon Choi says:

    1. The current system is grievously flawed.
    2. We need something new.
    3. This is something new.
    4. This is what we need.

    (And as Douglas Adams observed, for an encore we prove that black is white, only to get run over on the next zebra crossing.)

  2. It’s unfortunately far too believable that some bean counters obsessed with buzzwords like “social media” would want to use such unquestionably stupid methodology as a means of actually assessing research “impact”.

    Out of curiosity, I looked up Altmetric’s top 100 papers for 2013 to see if there were any mathematics papers listed. There were two of them.

    The main problem is that both were taken from the arXiv, where, as many of us know, papers are typically not yet peer-reviewed, at least when they first appear.

    One of the papers—sorry, preprints—listed was one by Harald Helfgott, related to his solution of the ternary Goldbach conjecture. I don’t think anyone disputes his results, but this is still a preprint in the process of being reviewed, so should it really be appearing in “impact” calculations yet? If some crank had claimed a simple proof of the binary Goldbach conjecture (which of course many have done), one wonders how much “impact” that would have in the world of social media.

    The other was by some statistical physicists, who used some sort of Monte-Carlo algorithm to (supposedly) obtain the “world’s hardest Sudoku puzzle”, followed three days later by an update after somebody pointed out to them that their “hard” puzzle could actually be solved easily by hand. Given the popularity of Sudoku puzzles, it’s perhaps not surprising that a paper on this topic would get a lot of “tweets” or “likes” on Facebook. However, given the sequence of events, how many of those “tweets” had “#fail” appended, the tag used to flag something that has clearly gone wrong?

    • Tricky issue. Google Scholar takes citations from the arXiv, and is widely used for hiring/tenure decisions. (Try giving your tenure committee citations from MathSciNet and you will probably be met with blank incomprehension.) It’s a question of degree.

      • Well, according to MathSciNet, I have 46 citations, whereas Google Scholar lists 157, so I guess I shouldn’t grumble too loudly.

    • Last week I worked out my H-number. Google Scholar gave a result more than twice that from MathSciNet.
      The serious point is that none of us is completely immune to this nonsense.
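
      (For anyone not familiar with the jargon: the “H-number”, more commonly called the h-index, is the largest h such that one has h papers each cited at least h times. A minimal sketch of the calculation in Python, assuming only a plain list of per-paper citation counts; the example numbers are made up:)

      def h_index(citations):
          # Largest h such that h papers have at least h citations each.
          counts = sorted(citations, reverse=True)
          h = 0
          for rank, c in enumerate(counts, start=1):
              if c >= rank:   # the paper at this rank has enough citations
                  h = rank
              else:           # counts are descending, so no later rank can qualify
                  break
          return h

      # Hypothetical citation counts for one author's papers:
      print(h_index([12, 9, 7, 5, 4, 2, 1]))  # prints 4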

  3. uhmmartin says:

    One wonders exactly what UCL and Elsevier will get up to at their “Big Data Institute” http://www.ucl.ac.uk/news/news-articles/1213/UCL_Elsevier_partnership_181213
