Saturday, 19 January 2013

Journal Impact Factors and REF 2014

In 2014, British institutions of Higher Education are to be evaluated in the Research Excellence Framework (REF), an important exercise on which their future funding depends. Academics are currently undergoing scrutiny by their institutions to determine whether their research outputs are good enough to be entered in the REF. Outputs are to be assessed in terms of "‘originality, significance and rigour’, with reference to international research quality standards."
Here's what the REF2014 guidelines say about journal impact factors:

"No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs."

Here are a few sources that explain why it is a bad idea to use impact factors to evaluate individual research outputs:
Stephen Curry's blog
David Colquhoun letter to Nature
Manuscript by Brembs & Munafo on "Unintended consequences of journal rank"
Editage tutorial

Here is some evidence that the REF2014 statement on impact factors is being widely ignored:

Jenny Rohn Guardian blogpost

And here's a letter about this that I wrote yesterday to the representatives of RCUK who act as observers on REF panels. I'll let you know if I get a reply.

18th January 2013

To: Ms Anne-Marie Coriat: Medical Research Council   
Dr Alf Game: Biotechnology and Biological Sciences Research Council   
Dr Alison Wall: Engineering and Physical Sciences Research Council   
Ms Michelle Wickendon: Natural Environment Research Council   
Ms Victoria Wright: Science and Technology Facilities Council   
Dr Fiona Armstrong: The Economic and Social Research Council    
Mr Gary Grubb: Arts and Humanities Research Council    


Dear REF2014 Observers,

I am contacting you because a growing number of academics are expressing concerns that, contrary to what is stated in the REF guidelines, journal impact factors are being used by some Universities to rate research outputs. Jennifer Rohn raised this issue here in a piece on the Guardian website last November:
http://www.guardian.co.uk/science/occams-corner/2012/nov/30/1


I have not been able to find any official route whereby such concerns can be raised, and I have evidence that some of those involved in the REF, including senior university figures and REF panel members, regard it as inevitable and appropriate that journal impact factors will be factored into ratings - albeit as just one factor among others. Many, perhaps most, of the academics involved in panels and REF preparations grew up in a climate where publication in a high impact journal was regarded as the acme of achievement. Insofar as there are problems with the use of impact factors, they seem to think the only difficulty is the lack of comparability across sub-disciplines, which can be adjusted for. Indeed, I have been told that it is naïve to imagine that this statement should be taken literally: "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs."


Institutions seem to vary in how strictly they are interpreting this statement and this could lead to serious problems further down the line. An institution that played by the rules and submitted papers based only on perceived scientific quality might challenge the REF outcome if they found the panel had been basing ratings on journal impact factor. The evidence for such behaviour could be reconstructed from an analysis of outputs submitted for the REF.


I think it is vital that RCUK responds to the concerns raised by Dr Rohn to clarify the position on journal impact factors and explain the reasoning behind the guidelines on this. Although the statement seems unambiguous, there is a widespread view that the intention is only to avoid slavish use of impact factors as a sole criterion, not to ban their use altogether. If that is the case, then this needs to be made explicit. If not, then it would be helpful to have some mechanism whereby academics could report institutions that flout this rule.

Yours sincerely

(Professor) Dorothy Bishop


Reference
Colquhoun, D. (2003). Challenging the tyranny of impact factors. Nature, 423(6939), 479. DOI: 10.1038/423479a

P.S. 21/1/13
This post has provoked some excellent debate in the Comments, and also on Twitter. I have collated the tweets on Storify here, and the Comments are below. They confirm that there are very divergent views out there about whether REF panels are likely to, or should, use journal impact factor in any shape or form. They also indicate that this issue is engendering high levels of anxiety in many sections of academia.

P.P.S. 30/1/13
REPLY FROM HEFCE
I now have a response from Graeme Rosenberg, REF Manager at HEFCE, who kindly agreed that I could post relevant content from his email here. This briefly explains why impact factors are disallowed for REF panels, but notes that institutions are free to flout this rule in their submissions, at their own risk. The text follows:

I think your letter raises two sets of issues, which I will respond to in turn. 

The REF panel criteria state clearly that panels will not use journal impact factors in the assessment. These criteria were developed by the panels themselves and we have no reason to doubt they will be applied correctly. The four main panels will oversee the work of the sub-panels throughout the assessment process, and it is part of the main panels' remit to ensure that all sub-panels apply the published criteria. If there happen to be some individual panel members at this stage who are unsure about the potential use of impact factors in the panels' assessments, the issue will be clarified by the panel chairs when the assessment starts. The published criteria are very clear and do not leave any room for ambiguity on this point.

The question of institutions using journal impact factors in preparing their submissions is a separate issue. We have stated clearly what the panels will and will not be using to inform their judgements. But institutions are autonomous and ultimately it is their decision as to what forms of evidence they use to inform their selection decisions. If they choose to use journal impact factors as part of the evidence, then the evidence for their decisions will differ from that used by panels. This would no doubt increase the risk to the institution of reaching different conclusions from the REF panels. Institutions would also do well to consider why the REF panels will not use journal impact factors - at the level of individual outputs they are a poor proxy for quality. Nevertheless, it remains the institution's choice.

35 comments:

  1. Dorothy,

    I wholeheartedly support this action you are taking, as it seems the message is not yet getting through that impact factors are not the sole measure of an article's worth. After all, if an article that is deemed good enough for Nature is instead published in an unknown journal, does the research become of lower quality? Of course it doesn't. Furthermore, reliance on impact factors, and consequently established 'big-name' journals, stifles the push towards open access publishing through rejection of newer and less well-known open access journals.

  2. Although I did not focus exclusively on journal impact, I did address the more general problem in this essay, in response to the Edge.org question, "What Should We Worry About?":
    http://edge.org/annual-question/q2013

  3. I spent a happy time over Christmas reading potential REF submissions from members of my department and trying to rank them. We've been quite clear from the start that impact factors are not to be used, and so I rated them by reading them fairly closely (in the knowledge that the REF panel is unlikely to spend more than 10 minutes per submission) and by looking at whether they had already been cited by others and, if so, how frequently. This last is of course not perfect, especially if people submit very recent publications.

    I'm not sure how much clearer the REF guidelines can be on this. If universities choose to disbelieve those guidelines and make their submissions based on impact factor alone, more fool them.

  4. Just a simple 'thank you' for doing this. Hugely appreciated.

  5. Thanks for all your comments. I'd like to respond specifically to Stephen Wood.
    The problem, I fear, is that it is not safe to assume that panels really will disregard impact factor. If they don't, then submissions from people like you, who have played by the rules, could be shafted.

  6. Is the point that IF should be ignored altogether? I totally agree that any positive evidence should help support the quality of an article: reading well (as far as a reviewer can tell, when often in internal institutional review they will not be expert), being cited by others, the 100 words explaining other impact, *or* a high IF journal accepting it - all are indicators of an article's worth to a community?

    Replies
    1. I believe the REF panels will use IFs as the major factor when deciding on the rating of a submission. How else could they realistically complete an accurate review of such a diverse range of different subject areas? At the end of the day, every scientist tries to publish their work in the journal with the highest IF; it then works its way down the ladder and ends up (more or less) where it deserves to be published. If this happens to be PLOS ONE rather than Nature, then this indicates that the paper is sound but unlikely to set the field on fire. I would like to meet the person who starts the process of publishing a major breakthrough by submitting it to PLOS ONE or a similar journal that makes no judgement of likely impact! Why would we do this? When applying for grant funding we are judged on track record, and grant panels will immediately recognise the importance of previous work published in journals like Nature or Science.
      Published papers have already been subject to expert review - reviewed by real experts in that field. Why should the REF panel take a different view from these experts? Sure, we all have our hard-luck stories, shafted by X or Y reviewer, but on average our work is published where it deserves to be (sorry Dorothy, we do not (by and large) publish our major breakthroughs in unknown journals anymore).
      I am sure that the REF panels will base their scores largely on IF and I am broadly supportive of this. By using IF we have a much simpler system that will give the same result as a detailed review of every submission would give - sure, there are bumps and creases, but these will get ironed out during the process of averaging (and we are not being graded as individuals but as units or departments). BUT I agree that there needs to be more transparency about this.

    2. The issue isn't really about Nature/Science though, is it? As Jenny Rohn makes clear in her response to comments on her Guardian article, the greater concern is if IFs are used as the main criterion to distinguish articles that might be rated 2 from those graded 3. If, as many people believe, 2-rated research will not be fundable, then it's important that the decision-making process used to distinguish 2 from 3 is as robust as possible. Many would question whether IFs provide that robustness.

    3. Jon, I have been involved in internal gradings of REF submissions. As you mention, this boundary between grading something 2* or 3* is very, very difficult in some cases; I'm not sure how you make the system of grading robust, though. True, Nature/Science will likely get an automatic 4*.

  7. Thanks for raising this, Joanna. I think there are two points relevant here.
    1) I think however IF is used, it needs to be crystal clear. If the REF guidelines state it won't be used, then it shouldn't be used. My own case provides an interesting example: I have some eligible articles in journals with high IF, and I have others (rather more) that are well-cited. There is no overlap between these two sets! So which do we submit (assuming I am returnable, which is by no means certain)? Anyone playing by the rules (like Stephen Wood, above) would actually read the articles and judge which are best on originality, significance and rigour. But suppose the panel then decides that IF is important: our department might lose out if we haven't submitted those with highest IF. It's all game playing, but we do at least need to be clear about the rules.
    2) You mention that having a high IF journal accept an article is an indication of an article's worth. Problem is that these days it is increasingly indicative of over-hyped, non-replicable but eye-catching results - see the link to Brembs and Munafo's recent paper above. Obviously there *are* good papers in high IF journals! But I have seen so many bad ones that fit the Brembs/Munafo stereotype that I no longer regard high IF as an indicator of quality.

  8. Anonymous writes "At the end of the day, every scientist tries to publish their work in the journal with the highest IF, it then works its way down the ladder and ends up (more or less) where it deserves to be published-"
    Erm, not this scientist!
    I should add that I have a perfectly respectable H-index. I don't publish in "unknown journals" but I avoid those that want just a sexy soundbite.
    And you could take a look at my last two blogposts for examples of papers from high impact journals that have failed to impress me.

  9. Sorry Dorothy, the 'unknown journals' comment was not directed at you (that was meant to be a response to Dean)!! However, I think you are an exception if you do not submit first to the high IF journals and work your way down from there if rejected. I'd be interested to hear what your publication strategy is.
    Agreed there are papers published in 'top' journals that ain't that good, but then the work in these journals (on average) gets cited more than work in other journals.
    In short, I would rather my work was rated based on IF rather than on the cursory inspection of a REF panel member...

  10. My publication strategy is to consider these factors:
    a) Which journal will be most likely to reach the appropriate readership? (in some cases this would include those with clinical interests)
    b) How much space do I need to explain the research clearly and put it in context? Usually this rules out high impact journals as they really only allow sound-bites.
    c) Is it open access? (Wellcome Trust funds me and requires my work to be open access). PLOS One does well here
    d) Have I had good experience with editors/reviewers/timeliness of decisions? My views on editors here: http://deevybee.blogspot.co.uk/2010/09/science-journal-editors-taxonomy.html. Again, I have had good experience of PLOS One, though it is variable because of the large number of editors. But a middle-ranking journal with a dedicated editor who takes the job seriously, reads the papers, and makes a considered decision, is ideal.
    I think the citation benefits of having work appear in a high IF journal are greatly exaggerated, especially these days when most of us find relevant articles by a search of the internet. Open Access, on the other hand, has been shown to be correlated with more citations.
    My suggestion for an alternative? Abolish the REF altogether and revert to the block grant system that existed before the RAE (which you may be too young to remember). The REF is just not cost-effective and has numerous adverse effects on academia.

    Replies
    1. Yes, I am too young to remember the block grant system- RAE is as far back as my memory goes! You may well be established enough to have the luxury of using the system you describe for publishing papers. However for the youngsters fighting to get established a stream of PLOS ONE papers is not a great help...

  11. Fascinating discussion! This paper might be of interest:
    http://www.frontiersin.org/quantitative_psychology_and_measurement/10.3389/fpsyg.2010.00215/full
    They show that there is very little relationship between the mean and the median citation rate, with the former driving the IF. They also show that the median is often higher in specialist journals; as an example (from the paper), 'despite possessing an IF nearly 35 times lower than Nature’s, the Journal of Child Language’s median is higher than Nature’s'. This doesn't bear on the causal process underlying citations, of course, but it is interesting nonetheless.
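
    To make the mean-median contrast concrete, here is a minimal Python sketch; the two journals and their distribution parameters are invented for illustration, not taken from the paper:

    import random
    import statistics

    random.seed(1)

    # Hypothetical generalist journal: heavy-tailed citation counts -
    # a few blockbusters, many rarely-cited papers.
    generalist = [int(random.lognormvariate(0.5, 1.8)) for _ in range(1000)]

    # Hypothetical specialist journal: modest but consistent citation counts.
    specialist = [int(random.lognormvariate(1.2, 0.5)) for _ in range(1000)]

    for name, cites in [("generalist", generalist), ("specialist", specialist)]:
        print(name, "mean (IF-style):", round(statistics.mean(cites), 1),
              "median:", statistics.median(cites))

    # Typical output: the generalist journal's mean (the quantity an impact
    # factor tracks) is inflated by its long tail, yet its median falls
    # below the specialist journal's - the pattern the paper reports.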

  12. Hi Dorothy, in terms of evaluating the quality of a paper, how good a measure is the number of citations? Is it clearly a better measure than the journal it is published in? I would say my most cited papers are not my best, and some that I'm quite proud of have been (sadly!) ignored. I bet if I correlated my own judgment of the importance of my papers with the number of citations they have received (correcting for years since publication), the correlation would be very small.

  13. Hi DeevyBee, would you mind spending a few minutes explaining to an expat why there hasn't been wholesale rejection of the REF as what looks to me like futile bureaucratic nonsense? (I perused the REF2014 website; the point of the process was lost in the red tape.) As scientists we know that there is very little connection between the near-term "impact," loosely defined, of our work and its long-term importance and longevity. We like to project the importance of our work, but the reality is usually quite different from what we logically extrapolate. Humans just aren't that good at predicting the future.

    Is there not a movement to boycott what must be a flawed process? Surely anyone charged with making such evaluations is going to grab whatever proxies she can to try to justify what will invariably be a flawed assessment? I'm dismayed to see people debating technicalities when it seems to me, as an external observer, that the process itself should be condemned. I fail to see how this process advances UK science, but I'd be happy to have my impressions corrected.

  14. I actually doubt that panels will use impact factors literally. But I wouldn't be surprised if panels used the perceived prestige of a journal, or their collective knowledge about the peer review standards of different journals, as an important heuristic in their judgments. In my own field of experimental psychology, it just so happens that these indicators tend to correlate pretty well with impact factor. I would be surprised if this relationship doesn't also arise in other fields.

    I may be the outlier here but I would be totally content with this sort of approach. Indeed, I would be much happier with this approach than with one or two non-expert individuals making a subjective judgment based on a cursory reading of my work. Do we really want a situation in which panel member X can rate e.g. a Psychology Review article as 1* because they didn't think it was particularly significant, but can rate a magazine article as 4* because they thought it expressed a neat new idea? Do we really want panel members to override the expert and rigorous peer review that such articles have been through, sometimes involving several rounds of 4, 5, or 6 reviewers? I don't. The more 'magic' and subjective judgment we can take out of this process the better. I prefer to know where I stand from the outset.

  15. Practical fMRI: You're absolutely right. I argued something similar a couple of years ago here
    http://deevybee.blogspot.co.uk/2010/08/how-our-current-reward-structures-have.html
    The thing is that money flows from the REF and it's created a competitive atmosphere where everyone is jostling for position. The most powerful top universities want to keep it because they see an opportunity to retain a high funding level.
    But the amount of time and money that goes into this exercise, the angst it causes, and the distortion of academic evaluation, cannot be justified.

  16. Well, I'm no fan of the Impact Factor as a metric (any metric that is a stronger predictor of retractions than of citations is pretty shabby), but the real issue here is whether UK scientists are being misled regarding the nature of the REF.

    We've explicitly been told that IFs won't be used in the deliberations, but many of us don't believe that rhetoric. In that respect, if we'd been told explicitly that IFs *will* be used then things would be much simpler. The uncertainty is unhelpful and potentially very unfair on those who took the information they received at face value.

    Of course, if you choose to use IFs you don't need a panel at all - a robot could just harvest Web of Science or similar for our four highest IF publications in the REF period and save everyone a lot of hassle (as well as saving an enormous amount of expense). That would certainly take subjective judgement out of the process entirely.
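
    Lest that sound like hyperbole, here is a sketch of the robot in Python; the paper list and IF values are of course made up for illustration:

    # The hypothetical "robot panel": given a researcher's papers with
    # journal impact factors attached, pick the four highest-IF ones.
    papers = [
        ("paper A", 31.4), ("paper B", 2.1), ("paper C", 9.6),
        ("paper D", 4.5), ("paper E", 5.2), ("paper F", 1.3),
    ]
    submission = sorted(papers, key=lambda p: p[1], reverse=True)[:4]
    print(submission)  # no reading, no judgement - which is rather the point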

    In short, I agree that we'd all prefer to know where we stand from the outset, but Dorothy's letter and the comments on this thread indicate that we clearly don't. Given the (like it or not) importance of the REF I think this is unconscionable.

  17. It certainly isn't true that publication in a high IF journal guarantees citations. It's been known for at least 15 years that there is no detectable correlation between the number of citations that a paper gets and the IF of the journal where it appears. In fact, high IF is a better predictor of retraction (error or fraud) than it is of citations.

    The IF is, in any case, calculated in a statistically illiterate way. It's just silly to use means for very highly skewed distributions.

    The two papers that got me off the ground in the early part of my career were published in Proc Roy Soc B and in Journal of Physiology. Both were over 50 printed pages. A preliminary version of the J Physiol paper did appear as a Nature letter, but it didn't get as many citations as the J Physiol paper (quite rightly). During the same period three other Nature letters appeared, two of which were trivial and one of which was wrong. The trivial ones got in because they happened to chime with the buzzwords that Nature editors were using at the time to screen papers. But any half-decent job panel would know they were trivial. They wouldn't have done me much good without the real papers in PRSB and J Physiol.

    Let's hope we can get some honest info from those involved in REF about what actually happens. I was told by someone I trust that in the last RAE they did actually look at the papers. That's one of several reasons why it is good that you can submit only four.

    The REF may or may not be able to stick to its own rules. It's clear that university administrators are not taking much notice of them. That fact alone is enough to distort behaviour in a way that harms science.

    We don't need the REF. And we don't need journals either, as arXiv has shown. We should resist the bureaucrats and take back the initiative to build a better and fairer system.

  18. The sentence that you quote from the REF2014 guidelines is unambiguous and concise: "No sub-panel will make any use of journal impact factors, rankings, lists or the perceived standing of publishers in assessing the quality of research outputs." The sub-panel members have agreed to this, so it is no longer up for debate. Furthermore, there seems to be no reason why any panel member would want to break or even subvert this statement. Most academics on the sub-panels (including me) would support the contention that it is a bad idea to use impact factors to evaluate individual research outputs; when the REF was formulated it was concluded that no bibliometric approach could replace individual peer evaluation; and there is enough expertise on the panels (when external assessors are included) that all fields can be evaluated. If individual university staff cannot understand the guideline as written, they should read it again: I can't see how making it more verbose would help their comprehension.

    As for the contention in some responses that "We don't need the REF", I hope people now start to make a case for how c. £10 billion of public money should be disbursed from c. 2020. "The same as the last ten (?hundred) years", "Equally to every body with University in its name", "Following student numbers", "Following research grant numbers", "Following lobbying of and by your MP", or "Following bibliometrics" are possibilities that come to mind, none of which seems to me as fair as the REF.

  19. The faith exhibited by Anonymous in the ability of reviewers at journals with high impact factors to judge the ultimate worth of any given paper is touching but misguided. The impact factor is a statistically weak indicator of the quality of individual articles — as I discussed in greater detail in my Sick of Impact Factors post that Dorothy kindly linked to above.

    The problem is two-fold: the calculation of the IF as an arithmetic mean, and the misapplication of the IF to papers rather than, as originally intended, to journals. The key paragraph is:

    "Analysis by Per Seglen in 1992 showed that typically only 15% of the papers in a journal account for half the total citations. Therefore only this minority of the articles has more than the average number of citations denoted by the journal impact factor. Take a moment to think about what that means: the vast majority of the journal’s papers — fully 85% — have fewer citations than the average. The impact factor is a statistically indefensible indicator of journal performance; it flatters to deceive, distributing credit that has been earned by only a small fraction of its published papers."

    As an illustrative example of the worthlessness of impact factors, consider two papers published in Science in December that analysed the 3D structure of RNA-protein complexes within the flu virus. The results of the two groups are different and incompatible; one of them is likely to be wrong. However, both have won at the Impact Factor game…
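
    A back-of-envelope check of Seglen's figure, in the same minimal Python style and with invented round numbers, makes the same point:

    # Hypothetical journal: 100 papers sharing 1000 citations in the IF window.
    papers, citations = 100, 1000
    impact_factor = citations / papers   # the mean: 10.0 citations per paper
    top_15_each = 0.5 * citations / 15   # 15% of the papers take half: ~33 each
    rest_each = 0.5 * citations / 85     # the other 85 share the rest: ~5.9 each
    print(impact_factor, round(top_15_each), round(rest_each, 1))
    # Even the average member of the 85% majority sits far below the journal
    # mean, so the impact factor describes almost none of the papers.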

    I suspect, though it would be good to have it stated explicitly, that HEFCE, in formulating its 'no impact factors' rule, is acknowledging the very problematic nature of applying IFs to papers or individuals. The trouble is, as many have commented, that the instruction is being widely ignored on the ground. This is largely due to the deep-seated nature of the 'infection' of our scientific culture by this mis-measure. It is no wonder that those preparing REF submissions are confused as to what the real criteria for judgement will be.

    We need to keep this a live issue and come up with better ways to assess the quality of UK science.

  20. The message that came through to the person in my Department who is putting together our REF case is that papers in high IF journals will automatically get at least a 3* rating whereas papers in lower IF journals will be scrutinised properly and will have to earn their 3* or 4* ratings by the case made in the submission.

    I don't know if that's going to be the case but it would mean that submissions were not being evaluated equally using the same processes.

    The assumption presumably is that high IF journals ONLY publish good papers whereas low IF journals publish good AND not-so-good papers.

    We know that most papers in high IF journals aren't being cited much but that's a different issue from quality (as another comment said above, the papers many of us think are our better ones are not necessarily those that have been cited most).

    Is there any evidence of this, that ALL papers in high IF journals are good quality (i.e. methodologically sound, sufficiently powered)? I suspect it isn't true, but I'm hoping someone else has already done the evaluation (so I don't have to).

    Otherwise, maybe the best thing to do, to level the playing field and avoid gamesmanship, would be to submit to the REF the final accepted-for-publication version of manuscripts rather than a pdf of the published version, thereby stripping away all identifying information about the journal, and to ban the use of phrases like "published in a high IF journal" in the supporting statements for each submission.

    Replies
    1. Nick: thanks for your comment.
      I like your idea of anonymising papers so you don't know where they were published, but it wouldn't work, as people would Google them.
      Of course it isn't true that papers in high impact journals are methodologically better! See the Brembs and Munafo article. In my experience these papers often have weaknesses, and these are LESS likely to be picked up by reviewers, because the high IF journals are often less specialised and editors don't know which reviewers to pick. We all see papers where people are asking "How on earth did that get published?" because the flaws are so obvious. A classic instance was a paper on "eagle-eyed vision in autism" published in Biological Psychiatry (IF = 8.28) which had vision scientists across the world leaping around in exasperation because it just could not be right. (This had a happy ending for the authors, as they got together with their critics, did another study with proper methods, and got another paper out of the whole exercise with more sensible results. I don't think the original was ever retracted.)
      I also dislike those journals, like Current Biology (IF = 9.64), that go right from the Introduction straight into the Results, with the Methods relegated to the end, as if they weren't important. Sometimes the Methods even go in Supplementary Material. I can't interpret Results without having read the Methods, and the devil is often in the detail. But what the high IF journals want is a *story*, and tedious things like methods get in the way of that.
      Finally, many high IF journals require papers to be very short, so you can't actually tell whether the work was sound, as it is impossible to work out what exactly was done (see my previous blogpost!).
      Well, enough ranting. But let me just say that I have read so many flawed papers in high impact journals that I am pretty cynical about the use of IF as a marker of quality. And I'd say if you want high-quality peer review, go to a middle-ranking journal: in psychology, most of the society journals (APA journals, QJEP, JCPP, etc.) have editors who are well-connected and know how to pick reviewers.

    2. I know it's not central to the discussion, but I *so* agree with you about journals treating the methods almost as an afterthought - it makes me angry too. I was once asked to reduce the length of my methods section in one of those journals, and had quite an argument with the editor about why I wouldn't.

  21. A key point to remember, I think, is that the REF only appeals to, and is only meant for, the funding councils (for funding decisions) and VCs (for book-balancing and PR). The system is designed specifically to keep everyone else in the dark. In such a system IFs will be an attractive solution for universities to use as a proxy for quality assessment. HEFCE et al. could nip this all in the bud by publishing REF results output-by-output; instead they systematically destroy this information. Publication of all output feedback would provide a means of calibrating future submissions without IFs. Although I do agree that the REF (and IFs) should be consigned to the dustbin of history.

    Replies
    1. I'm particularly saddened at the thought that many universities will spend huge amounts of time and money on this exercise and come out of it with nothing. It's clear that the bar for funding will be set extremely high this time; money is tight and even the traditional high-flying universities are not complacent. I hate to be the perpetual voice of doom and gloom, but what with this and the decline in students, I wonder how many universities are going to be bankrupt after REF2014, and go the same way as Curry's and HMV.

  22. It's also interesting how many job adverts (ahead of the REF) perpetuate the perceived importance of IFs. They all ask for applicants to have published in journals with high impact rather than asking for applicants who have published papers with high impact. (Apart from the clinical trials on which I'm a co-author, I have quite a few papers with citations that have far outperformed the IFs of the journals in which they're published ... clearly that's the WRONG kind of impact for me to be getting those top jobs)

    But, as Stephen Curry's blog states, this is statistical illiteracy (see
    http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/)

    Unless of course they know something that those of us taking the REF statement at its word don't know.

  23. For clarity: I post as Anonymous, but I am a different Anonymous to the colleague who commented on 19 January.

    As has already been said, REF rules on use of impact factors are absolutely clear. The position is no different than for the RAE. IFs are a historical measure of a journal's value collated over a number of years; they do not show anything about the value of an individual output.

    It is up to HEIs to decide how to make their selections of staff and outputs. There are certain rules in the REF guidance about what constitutes an eligible output, but there are no rules about how eligible outputs are themselves selected and HEIs that choose to make use of IFs are not flouting any rules in so doing.

    Given the number of outputs to be reviewed against the very small amount of time that individual panel members have to review them, it is simply not possible for all outputs to be individually assessed in full. Note that the panel criteria say that outputs "will be examined with a level of detail sufficient to contribute to the formation of a robust sub-profile for all the outputs in that submission", not that the outputs will actually be read. It is inevitable that panel members are going to make assumptions about some outputs they are asked to review, simply in order to get through their extensive workload. Received wisdom for RAE 2008 was that panel members would spend much less time on / be more likely to make assumptions about outputs published in journals with which they were familiar, and would be more likely to read and examine closely outputs in journals they did not know. This does not mean that panel members will actively look at IFs, but they will have a sense of a journal's relative standing from the very expertise and experience that got them onto the panel in the first place. Although HEI staff will also have a similar appreciation, IFs may also be a useful proxy to help HEIs predict what the panel might do. However, IFs should certainly not be used as a substitute for a proper process of peer review (the kind colleagues above have described undertaking themselves), which should always be the first and most important factor considered.

    Rather than HEIs seeking to challenge the REF outcome, as has been suggested, perhaps a more serious concern, particularly for those HEIs making slavish use of IFs, is being sued by their own staff for excluding them from submissions on the basis of evidence not recognised by the REF?

  24. A different anon, 22 January 2013 at 12:23

    It would be an interesting exercise to use FOIA requests to see how universities are assessing potential REF submissions. A FOIA request to all institutions - asking whether they are using IFs as part of their assessment process - would give a clearer sense of how REF guidelines are being interpreted in the sector.

    Not something I'll do myself - as a junior academic, annoying loads of universities would not be a smart career move! - but if someone in a more stable position has an hour or two to spend on What Do They Know...

  25. I have previously suggested that a FOI request should be made for the REF panel notes. If timed correctly, it would be against FOI law to destroy the notes, I believe.

    On the topic of whether or not IF is to be used in the actual REF exercise, we know enough about human decision making these days to know that even if panel members *think* they are not using IF or journal prestige to judge papers, they will almost certainly be influenced by them. For the same reasons that any serious medical or psychology experiment that isn't performed double-blind should be rejected for publication, the hopelessly unscientific mess that is the REF assessment should be abolished.

  26. Anon and Tom: I am really hoping it won't need to come to FOI requests - though maybe the threat of that will make people scrutinise their behaviour more carefully.
    I think we just need really clear guidance from HEFCE as to what is and is not acceptable, with a route for people to draw attention to cases where there is evidence of unethical behaviour.
    I don't think any institution or individual involved in the REF would want to discredit it - too much effort has been expended on it. But the comments I've had make it clear that there is quite a bit of confusion around this issue.

  27. Very late to this discussion but it was very disappointing to see this advert for a postdoctoral position in my inbox (names and some extraneous information removed). Certainly looks to me like the bad old impact factor approach is alive and well.


    XXXXXX (Univ. of Birmingham) is looking for a postdoc (2 years) starting this autumn. If you meet the following criteria, please reply asap with a CV to XXXXXX



    2) You have at least two peer-reviewed articles in internationally recognised journals since 2008 (including "in press" / "accepted" articles) that meet ONE of the following criteria:

    a) one of the articles appeared in a journal with impact factor 5 or higher.

    b) two of the articles appeared in a journal with impact factor 2.5 or higher

    c) four of the articles appeared in a journal with the impact factor 2 or higher.

  28. Dr. Sufiana Khatoon Malik, Pakistan, 18 March 2014 at 23:43

    In the current era there is more emphasis on research and innovation. I think we cannot promote this culture by imposing restrictions on researchers and creators. I agree that the advancement of ICT (Information and Communication Technologies) demands that we have tools to check plagiarism in research and innovation; however, I consider that publication in an impact-factor journal alone cannot guarantee the highest level of competence and work experience of the researcher/innovator.
    Can anyone tell me how we can measure a researcher's competence and work experience?
    I would suggest that the impact factor of any journal could be evaluated through the following basic criteria:
    1. Consistency in publication of the journal's issues (annually, biannually, quarterly, etc.)
    2. An editorial board that reflects international collaboration
    3. Plagiarism checking through software
    4. Peer review policy
    5. The quality of the language in which the journal is published
    Stipulating that a journal's impact factor will be measured by how many researchers cite that journal, or use it when citing manuscripts published in it, creates hurdles for researchers. Researchers always search specifically for related material available in any journal.
