Monday, 8 December 2014

Why evaluating scientists by grant income is stupid

[Cartoon: ©CartoonStock.com]

As Fergus Millar noted in a letter to the Times last year, “in the modern British university, it is not that funding is sought in order to carry out research, but that research projects are formulated in order to get funding”.
This topsy-turvy logic has become evident in some universities, with blatant demands for staff in science subjects to match a specified quota of grant income or face redundancy. David Colquhoun’s blog is a gold-mine of information about those universities that have adopted such policies. He notes that if you are a senior figure based in the Institute of Psychiatry in London, or the medical school at Imperial College London, you are expected to bring in an average of at least £200K of grant income per annum. Warwick Medical School has a rather less ambitious threshold of £90K per annum for principal investigators and £150K per annum for co-investigators¹.
So what’s wrong with that? It might be argued that in times of financial stringency, Universities may need to cut staff to meet their costs, and this criterion is at least objective. The problem is that it is stupid. It damages the wellbeing of staff, the reputation of the University, and the advancement of science.
Effect on staff 
The argument about wellbeing of staff is a no-brainer, and one might have expected that those in medical schools would be particularly sensitive to the impact of job insecurity on the mental and physical health of those they employ. Sadly, those who run these institutions seem blithely unconcerned about this and instead impress upon researchers that their skills are valued only if they translate into money. This kind of stress does not only impact on those who are destined to be handed their P45 but also on those around them. Even if you’re not worried about your own job, it is hard to be cheerfully productive when surrounded by colleagues in states of high distress. I’ve argued previously that universities should be evaluated on staff satisfaction as well as student satisfaction: this is not just about the ethics of proper treatment of one’s fellow human beings, it is also common-sense that if you want highly skilled people to do a good job, you need to make them feel valued and provide them with a secure working environment. 
Effect on the University
The focus on research income seems driven by two considerations: a desire to bring in money, and to achieve status by being seen to bring in money. But how logical is this? Many people seem to perceive a large grant as some kind of ‘prize’, a perception reinforced by the tendency of the Times Higher Education and others to refer to ‘grant-winners’. Yet funders do not give large grants as gestures of approval: the money is not some kind of windfall. With the rare exceptions of infrastructure grants, the money is given to cover the cost of doing research. Even now that Full Economic Costing (FEC) is attached to research council grants, it covers no more than 80% of the costs to universities of hosting the research. Undoubtedly, the money accrued through FEC gives institutions leeway to develop infrastructure and other beneficial resources, but it is not a freebie, and big grants cost money to implement.
So we come to the effect of research funding on a University’s reputation. I assume this is a major driver behind the policies of places like Warwick, given that it is one component of the league tables that are so popular in today’s competitive culture. But, as some institutions learn to their cost, a high ranking in such tables may count for naught if a reputation for cavalier treatment of staff makes it difficult to recruit and retain the best people.
Effect on science
The last point concerns the corrosive effect on science if the incentive structure encourages people to apply for numerous large grants. It sidelines people who want to do careful, thoughtful research in favour of those who take on more than they can cope with. There is already a great deal of waste in science, with many researchers having a backlog of unpublished work which they don’t have time to write up because they are busy writing the next grant. Four years ago I argued that we should focus on what people do with research funding rather than how much they have. On this basis, someone who achieved a great deal with modest funding would be valued more highly than someone who failed to publish many of the results from a large grant. I cannot express it better than John Ioannidis, who in a recent paper put forward a number of suggestions for improving the reproducibility of research. This was his suggested modification to our system of research incentives:
“….obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities.”
 
¹If his web entry is to be believed, then Warwick’s Dean of Medicine, Professor Peter Winstanley, falls a long way short of this threshold, having brought in only £75K of grant income over a period of 7 years. He won’t be made redundant though, as those with administrative responsibilities are protected.

Ioannidis, J. (2014). How to Make More Published Research True PLoS Medicine, 11 (10) DOI: 10.1371/journal.pmed.1001747

12 comments:

  1. FYI Winstanley will no longer be Dean of WMS as of 1st Jan 2015.

  2. And, of course, brain-imaging research on cognition will be favoured over button-pressing research on cognition since the former is vastly more expensive (even though the latter is vastly more informative)

  3. There's another factor, particularly important for smaller institutions. A number of the UK Research Councils now make the institution's eligibility to apply for certain funds, most notably PhD studentships, contingent on having obtained a certain level of funding in the previous years. Haven't made the threshold? Sorry - no studentships for you. Not a good prospect for a research active department. So the blame can't all be placed on the management.

    Replies
    1. It's worse than that (he's dead, Jim) - NERC and possibly others (EPSRC?) have already moved most (if not all) PhD funding to the Doctoral Training Partnership model.

If you're not in a partnership (generally, a collection of unis that have clubbed together to form a "centre"), you cannot get NERC funds to train PhD students.

      I'm not sure how this sort of centralisation is supposed to improve job opportunities by increasing the diversity of topics students are trained in though.

      It also leads to a situation of fitting square pegs into round holes - researchers within those lucky DTP centres have to develop collaborative projects with other partner institutions to receive PhD stipends, which is not necessarily based on research they actually want to do.

  4. Quick reply to Max's comment. It's true that brain imaging research is more expensive, but not by as much as you might think. The largest expense in research grants remains salary costs, by a long way. But I do agree that the value added to the research by the brain imaging component might not justify even a modest increase in cost.

  5. I wonder if the root of this problem is that we don't have a good way to measure the quality of scientific output in the short term. Who decides if a grant is successful, and by what means? The number of papers? (ugh) The prominence of the journals? (ugh) Whether the results lead to a new theory or solution to a problem, whether the work was replicated? (some combination of these seems ideal, but success can be tough to judge over short time frames).

    The ESRC have a pretty good system involving post-grant rapporteurs. I've served as a rapporteur a couple of times and found it quite interesting. It takes a few days to do it properly, reading the authors' papers and then judging to what extent the grant was a success. This seems like a good way to judge things in the short term. I don't know, though, how (or even if) ESRC use these assessments in reviewing the applicants' next grant.

    In the absence of assessments of grant output - and in the absence of them actually being used to assess the next grant application - it isn't really possible to say whether a grant "worked", so the default heuristic is to treat grants as outputs. It's a stupid system but it seems we need to find a way to properly measure quality of output before we can treat grants as inputs and thus weigh them against those outputs.

  6. "someone who achieved a great deal with modest funding would be valued more highly than someone who was failed to publish many of the results from a large grant"

    This reminds me of the way early career researchers are assessed for fellowships/junior leader schemes. Many prestigious fellowships are granted to those who have early 'success' (read: 1st author papers in high-IF journals) who out-compete others with fewer papers/lower IF-journals in funding committees.

    BUT, the fact that many high-flyers were postdocs/students in large, extremely well-funded, highly resourced labs is often not factored in. A postdoc or student going into a small lab, with less support & kit around them, no slush funds to dip into, who on their own gets a project off the ground to produce something, probably possesses many of the skills needed as a successful independent scientist. But I worry that these can get overlooked with an over-reliance on papers/IF. Many early high-flyers do carry on their success as an independent researcher, but I've seen cases where they haven't, so we need a more nuanced evaluation.

    As mentioned above by Chris Chambers, we need a better way of measuring the quality of scientific output in the short term. One system is ResearchFish (https://www.researchfish.com), which is trying to broaden the types of output collected, but I worry it will still create biases (e.g. towards more applied research).

  7. Earlier this year, across the UK, academics were asked to provide 4 'REF-returnable' papers. Now, I know this is overly simplistic, but if REF information also contained total RC grant income over the REF period, for each academic, then government would have access to all the information it needs to publicly reward those who achieved the minimum required output (i.e. 4 REFable papers) for the lowest amount of grant income. As I mentioned in a comment on DC's blog, government wants us to do 'more with less', and surely the tax payer would be particularly pleased with academics who manage to do this. Government could create something akin to an OBE, or maybe a cash award, or a knighthood, and give it to all investigators in the bottom 10th percentile (wrt grant income). This would bypass university management, and make a mockery of those institutions who then threaten to sack their most 'efficient' investigators.

  8. Goodhart's Law: "When a measure becomes a target, it ceases to be a good measure."
