Saturday, 19 January 2013
Journal Impact Factors and REF 2014
Here's what the REF2014 guidelines say about journal impact factors:
"No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs."
Here are a few sources that explain why it is a bad idea to use impact factors to evaluate individual research outputs:
Stephen Curry's blog
David Colquhoun letter to Nature
Manuscript by Brembs & Munafo on "Unintended consequences of journal rank"
Here is some evidence that the REF2014 statement on impact factors is being widely ignored:
Jenny Rohn Guardian blogpost
And here's a letter I wrote yesterday to the representatives of RCUK who act as observers on REF panels about this. I'll let you know if I get a reply.
18th January 2013
To: Ms Anne-Marie Coriat: Medical Research Council
Dr Alf Game: Biotechnology and Biological Sciences Research Council
Dr Alison Wall: Engineering and Physical Sciences Research Council
Ms Michelle Wickendon: Natural Environment Research Council
Ms Victoria Wright: Science and Technology Facilities Council
Dr Fiona Armstrong: The Economic and Social Research Council
Mr Gary Grubb: Arts and Humanities Research Council
Dear REF2014 Observers,
I am contacting you because a growing number of academics are expressing concerns that, contrary to what is stated in the REF guidelines, journal impact factors are being used by some universities to rate research outputs. Jennifer Rohn raised this issue in a piece on the Guardian website last November.
I have not been able to find any official route whereby such concerns can be raised, and I have evidence that some of those involved in the REF, including senior university figures and REF panel members, regard it as inevitable and appropriate that journal impact factors will be factored into ratings - albeit as just one factor among others. Many, perhaps most, of the academics involved in panels and REF preparations grew up in a climate where publication in a high-impact journal was regarded as the acme of achievement. Insofar as there are problems with the use of impact factors, they seem to think the only difficulty is the lack of comparability across sub-disciplines, which can be adjusted for. Indeed, I have been told that it is naïve to imagine that this statement should be taken literally: "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs."
Institutions seem to vary in how strictly they are interpreting this statement and this could lead to serious problems further down the line. An institution that played by the rules and submitted papers based only on perceived scientific quality might challenge the REF outcome if they found the panel had been basing ratings on journal impact factor. The evidence for such behaviour could be reconstructed from an analysis of outputs submitted for the REF.
I think it is vital that RCUK responds to the concerns raised by Dr Rohn to clarify the position on journal impact factors and explain the reasoning behind the guidelines on this. Although the statement seems unambiguous, there is a widespread view that the intention is only to avoid slavish use of impact factors as a sole criterion, not to ban their use altogether. If that is the case, then this needs to be made explicit. If not, then it would be helpful to have some mechanism whereby academics could report institutions that flout this rule.
(Professor) Dorothy Bishop
Colquhoun, D. (2003). Challenging the tyranny of impact factors. Nature, 423(6939), 479. DOI: 10.1038/423479a
This post has provoked some excellent debate in the Comments, and also on Twitter. I have collated the tweets on Storify here, and the Comments are below. They confirm that there are very divergent views out there about whether REF panels are likely to, or should, use journal impact factor in any shape or form. They also indicate that this issue is engendering high levels of anxiety in many sections of academia.
REPLY FROM HEFCE
I now have a response from Graeme Rosenberg, REF Manager at HEFCE, who kindly agreed that I could post relevant content from his email here. This briefly explains why impact factors are disallowed for REF panels, but notes that institutions are free to flout this rule in their submissions, at their own risk. The text follows:
I think your letter raises two sets of issues, which I will respond to in turn.
The REF panel criteria state clearly that panels will not use journal impact factors in the assessment. These criteria were developed by the panels themselves and we have no reason to doubt they will be applied correctly. The four main panels will oversee the work of the sub-panels throughout the assessment process, and it is part of the main panels' remit to ensure that all sub-panels apply the published criteria. If there happen to be some individual panel members at this stage who are unsure about the potential use of impact factors in the panels' assessments, the issue will be clarified by the panel chairs when the assessment starts. The published criteria are very clear and do not leave any room for ambiguity on this point.
The question of institutions using journal impact factors in preparing their submissions is a separate issue. We have stated clearly what the panels will and will not be using to inform their judgements. But institutions are autonomous and ultimately it is their decision as to what forms of evidence they use to inform their selection decisions. If they choose to use journal impact factors as part of the evidence, then the evidence for their decisions will differ from that used by panels. This would no doubt increase the risk to the institution of reaching different conclusions from the REF panels. Institutions would also do well to consider why the REF panels will not use journal impact factors - at the level of individual outputs they are a poor proxy for quality. Nevertheless, it remains the institution's choice.